cooling rates in order to check the convergence. Based on these tests we do
believe that the results are converged to their optimum values to within the error bars shown.
For the larger lattices, the energy deviation in Fig.~\ref{ED} is only $\Delta E/J\approx 0.0004$,
or $\approx 0.06\%$, and is size independent within statistical errors for $L >10$. This should then be
the accuracy in the thermodynamic limit. In Ref.~\onlinecite{lia88} only a few of the short-length amplitudes
were optimized and a functional form---power-law or exponential---was used for the long-range behavior.
The best power-law wave functions had energy deviations of $\Delta E/J \approx 0.0008$, twice as large
as what we obtain here with the fully optimized amplitudes.
\begin{figure}
\includegraphics[width=7.9cm,clip]{fig2.eps}
\caption{(Color online) The staggered structure factor $S(\pi,\pi)$ versus lattice size,
compared with unbiased QMC results. The inset shows the long-distance spin-spin correlation. Statistical
errors are smaller than the symbols.}
\label{SC}
\end{figure}
Having concluded that the stochastic method is the preferred optimization technique, we discuss
only results for other quantities obtained this way. Fig.~\ref{SC} shows the size dependence
of the staggered structure
factor,
\begin{equation}
S(\pi,\pi) = \sum_{x,y} (-1)^{x+y} C(x,y),
\end{equation}
where $C(x,y)$ is the correlation function, defined by
\begin{equation}
C(x_i-x_j,y_i-y_j)= \langle {\bf S}_i \cdot {\bf S}_j\rangle.
\end{equation}
The inset shows the correlation function at the longest distance, $(x,y)=(L/2,L/2)$.
We again compare with results from unbiased QMC calculations.\cite{san97}
The structure factor of the variational ground state agrees very well with the exact
result for these lattice sizes---the deviations are typically less than $0.5\%$. The long-distance
correlations show deviations that increase slightly with $L$, going to $\approx 2\%$ below the
true values for $L \ge 10$ [which then should also be the asymptotic $L \to \infty$ error of
$S(\pi,\pi)$]. The sublattice magnetization is the square root of the long-distance
correlation function; a $\approx 2\%$ deficit in the correlation therefore translates into a value only $\approx 1\%$ smaller than the exact one.
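For orientation, the following minimal sketch (our own illustration in Python; it assumes the correlation function has been accumulated as an $L\times L$ array \texttt{C}) shows how $S(\pi,\pi)$ and the sublattice-magnetization estimate follow from $C(x,y)$:
\begin{verbatim}
import numpy as np

def staggered_structure_factor(C):
    """S(pi,pi) = sum_{x,y} (-1)^(x+y) C(x,y) for an L x L array C."""
    L = C.shape[0]
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    return np.sum((-1.0) ** (x + y) * C)

def sublattice_magnetization(C):
    """m_s estimated as the square root of the longest-distance correlation."""
    L = C.shape[0]
    return np.sqrt(C[L // 2, L // 2])
\end{verbatim}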
\begin{figure}
\includegraphics[width=7cm,clip]{fig3.eps}
\caption{Log-log plot of the amplitude $h(L/2,L/2-1)$ versus the system size.
Statistical errors are of the order of the size of the circles. The line
shows the power-law $h \sim L^{-3}$.}
\label{HR}
\end{figure}
Liang et al.~did not conclusively settle the question of the asymptotic behavior of the
amplitudes $h(x,y)$ for bonds of long length $r=(x^2+y^2)^{1/2}$.\cite{lia88}
The best variational energy was obtained with an algebraic decay; $h \sim r^{-4}$.
However, the energy is not very sensitive to the long-distance behavior and the values
obtained for $r^{-3}$ and $r^{-2}$ were not substantially different at the level of statistical
accuracy achieved. Even with an exponential decay of the bond-length distribution the energy
was not appreciably higher, but then no long-range order is possible and hence this form
can be excluded. In a recent unbiased projector QMC calculation, the probability distribution
$P(x,y)$ of the bonds was calculated.\cite{san05} The form
$P(r) \sim r^{-3}$ was found (with no discernible angular dependence). Without a hard-core
constraint for the VB dimers, the probabilities would clearly be proportional to the amplitudes;
$P(x,y) \propto h(x,y)$, and even with the hard-core constraint one would expect the two to be
strongly related to each other. In fact, as was pointed out in Ref.~\onlinecite{san05}, a wave
function with $h(r) \sim r^{-p}$ does result in $P(r) \sim r^{-p}$. Our variational calculation
confirms that indeed the fully optimized $h(r) \sim r^{-3}$, as demonstrated in Fig.~\ref{HR}
using the longest bonds, $(x,y)=(L/2,L/2-1)$, on the periodic lattices.
Havilio and Auerbach carried out a VB mean-field calculation which gave an
exponent $p \approx 2.7$.\cite{havilio2} The statistical accuracy in Fig.~\ref{HR} is
perhaps not sufficient to definitively conclude that $p=3$ exactly, or to exclude $p=2.7$, from
these data alone. However, the QMC study of the probability distribution $P(r)$ supports $p=3$
to significantly higher precision.\cite{san05} Moreover, Beach has recently developed a
different mean-field theory which predicts $p=d+1$ for a $d$-dimensional system.
\cite{beachmeanfield} There is thus reason to believe that $r^{-3}$ indeed is the
correct form for $d=2$.
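As an aside, the exponent in Fig.~\ref{HR} can be estimated by a straight-line fit in log-log space; the sketch below (our own illustration in Python; the arrays \texttt{L} and \texttt{h\_long} holding the lattice sizes and the corresponding $h(L/2,L/2-1)$ values are assumptions of this example) shows one way to do it:
\begin{verbatim}
import numpy as np

def fit_power_law_exponent(L, h_long):
    """Fit h ~ L^(-p) by linear regression of log h against log L; return p."""
    slope, _intercept = np.polyfit(np.log(L), np.log(h_long), 1)
    return -slope
\end{verbatim}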
For the $4\times 4$ lattice we can compare the variational wave function with the
exact ground state obtained by exact diagonalization. This comparison is most easily
done by transforming the VB state to the $S^z$ basis. Taking into account
the lattice symmetries, there are $822$ $m_z=0$ states with momentum $k=0$, and
the matrix can easily be diagonalized. We generate the $8!$ VB states $| V_k \rangle$
using a permutation scheme and convert each of them into $2^8$ $S^z$-basis states with
weights $\pm \prod h(x,y)$, and use these to calculate the overlap with the exact
ground state. With the amplitudes normalized by $h(1,0)=1$, there is only one
independent amplitude, $h(2,1)$, to vary for the $4\times 4$ lattice. In
Fig.~\ref{Overlap} we show the overlap as a function of $h(2,1)$. We also
indicate the value of $h(2,1)$ obtained in the variational QMC calculation---it
matches almost perfectly that of the maximum overlap. The best overlap is indeed
very high; $\approx 0.9998$. It would be interesting to see how the overlap
depends on the system size. For a $6\times 6$ lattice, the ground state can also be
calculated, using the Lanczos method, but the space of valence bond states is too
large to calculate the overlap exactly (although it could in principle be done by
stochastic sampling).
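To illustrate the conversion step described above (a minimal sketch of our own in Python, not the production code; the site labels, the A/B sublattice assignment, and one $1/\sqrt{2}$ normalization factor per singlet of Eq.~(\ref{singlet}) are assumptions of this example), a single VB covering can be expanded into $S^z$-basis amplitudes as follows:
\begin{verbatim}
import numpy as np
from itertools import product

def expand_vb_state(bonds, h, n_sites):
    """Expand one valence-bond covering into S^z-basis amplitudes.

    bonds : list of (i, j) pairs with i on sublattice A and j on sublattice B
    h     : list of bond amplitudes h(x, y), one entry per bond
    Returns a dict mapping spin configurations (tuples of +-1) to amplitudes.
    """
    weight = np.prod(h) / 2 ** (len(bonds) / 2)  # product of amplitudes, 1/sqrt(2) per singlet
    amp = {}
    for choice in product((0, 1), repeat=len(bonds)):
        spins = [0] * n_sites
        sign = 1
        for (i, j), c in zip(bonds, choice):
            if c == 0:            # |up_i down_j> component of the singlet
                spins[i], spins[j] = 1, -1
            else:                 # -|down_i up_j> component
                spins[i], spins[j] = -1, 1
                sign = -sign
        amp[tuple(spins)] = amp.get(tuple(spins), 0.0) + sign * weight
    return amp
\end{verbatim}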
\begin{figure}
\null\hskip2mm
\includegraphics[width=8.4cm, clip]{fig4.eps}
\caption{(Color online) Overlap between the exact $4\times 4$ wave function and the
VB wave function versus the single independent amplitude $h(2,1)$. The value of $h(2,1)$
obtained in the variational calculation is indicated.}
\label{Overlap}
\end{figure}
\section{Frustrated systems}
We have also studied the Heisenberg hamiltonian including a frustrating interaction;\cite{j1j2}
\begin{equation}
H = J_1\sum_{\langle i,j\rangle} {\bf S}_i \cdot {\bf S}_j +
J_2\sum_{\langle\langle i,j\rangle\rangle} {\bf S}_i \cdot {\bf S}_j,
\label{fham}
\end{equation}
where $\langle i,j\rangle$ and $\langle\langle i,j\rangle\rangle$ denotes nearest and next-nearest
neighbors, respectively, and $J_1,J_2 > 0$. Also in this case there exists, in principle, a
positive-definite expansion of the ground state in the valence bond basis. This can be easily seen
because a negative coefficient $f_k$ in Eq.~(\ref{psigeneral}) can be made positive simply by
reversing the order of the indices in one singlet
in that particular state. However, no practically
useful convention for fixing the order is known. We here use the same partition of the lattice into
A and B sublattice sites as in the non-frustrated case and the same sign convention (\ref{singlet}) for the
singlets. We only consider the $4\times 4$ lattice, which, as we will show, already gives some
interesting information on the behavior of the simple amplitude-product wave function as the
frustration ratio $J_2/J_1$ is increased.
\begin{figure}
\includegraphics[width=8.4cm, clip]{fig5.eps}
\caption{(Color online) Overlap between the exact $4\times 4$ ground state and the VB wave
function for different values of the frustration $J_2/J_1$. The best amplitudes obtained in
variational Monte Carlo calculations are indicated with the dashed lines.}
\label{Overlap2}
\end{figure}
In the exact calculation we can study both positive and negative values of $h(2,1)$, but for now
we restrict the variational calculation to $h(2,1) > 0$, in order to avoid the Monte Carlo sign
problem caused by negative amplitudes.\cite{signnote} It should be noted, however, that the sign problem here
is much less severe than in exact QMC schemes,\cite{san05} and hence there is some hope of actually
being able to consider mixed signs in variational QMC calculations in the VB basis.\cite{signnote}
In Fig.~\ref{Overlap2} we plot the dependence on $h(2,1)$ of the overlap between the VB wave
function and the exact ground state for several values of $J_2/J_1$. The $h(2,1)$ corresponding
to maximum overlap decreases as the frustration increases. For $J_2/J_1=0.4$ the best overlap
occurs for $h(2,1)<0$. The optimum overlap decreases significantly with $h(2,1)$ for $J_2/J_1 \agt 0.3$,
indicating the increasing effects of bond correlations not taken into account in the product-form
of the expansion coefficients. This deterioration of the wave function may be related to the
phase transition taking place in this model at $J_2/J_1 \approx 0.4$.\cite{j1j2} Note, however,
that even at $J_2/J_1=0.4$ the overlap remains as high as $\approx 0.996$.
There is a point close to $J_2/J_1=0.4$ where $h(2,1)$ vanishes and thus the best wave function
for the $4\times 4$ lattice contains only bonds of length $1$.
Beyond this coupling the optimum wave function requires a negative
$h(2,1)$. It has also been noted previously that wave functions including only the shortest
bonds give the best description of the ground state in a narrow region of high frustration
in a model containing also a third-nearest neighbor interaction $J_3$.\cite{mambrini}
We also show in Fig.~\ref{Overlap2} the values of $h(2,1)$ obtained in the variational calculations.
Interestingly, these values coincide with the maximum-overlap amplitudes only when the frustration is weak,
showing that the lowest-energy state within a given variational class is not always the one closest to the
true wave function.
\section{Summary and conclusions}
In conclusion, we have shown that a variational valence bond wave function, parametrized in terms
of bond amplitude products, gives a very good overall description of the 2D Heisenberg model.
Although this has been known qualitatively for a long time,\cite{lia88} our study shows that the agreement
is quantitatively even better than what was anticipated in previous studies. The deviation of the
ground state energy from the exact value is $\approx 0.06\%$ for large lattices; almost $50\%$ better
than in previous calculations where a functional form was assumed for the amplitudes.\cite{lia88}
The sublattice magnetization is correct to within $\approx 1\%$ (smaller than the true value).
We have also shown that the amplitudes for bonds of length $r$ decay as $r^{-3}$ for large $r$, which
is the same form as for the probability distribution calculated previously.\cite{san05} It is also
in agreement with a recently developed mean-field theory.\cite{beachmeanfield}
By exactly diagonalizing the hamiltonian on a $4\times 4$ lattice, we have also studied the
frustrated J$_{\rm 1}$--J$_{\rm 2}$ model. Not surprisingly, we found that the quality of the
amplitude-product wave function deteriorates when the frustration
$J_2/J_1$ is increased. However, even at $J_2/J_1=0.4$, i.e.,
close to the phase transition taking place in this model,\cite{j1j2} the
overlap is above $0.996$. It would clearly be interesting to see how well the amplitude-product
state works for the frustrated model on larger lattices. In this regard we note that Capriotti
{\it et al.} \cite{capriotti} recently carried out a variational study of an RVB function written
in terms of fermion operators \cite{and87} and found that it gave the best description of the ground state
of the J$_{\rm 1}$--J$_{\rm 2}$ model at large frustration; $J_2/J_1 \approx 0.5$. However, the
overlap is significantly smaller than what we have found here for the $4\times 4$ system at
the same level of frustration.
Although the fermionic \cite{and87} and bosonic descriptions of the VB states are formally equivalent,
the fermionic wave function, as it is normally written, does not span the full space possible with
the bosonic product state. As a consequence, the bosonic description we have used here
can in practice deliver much better variational wave functions for non-frustrated systems.\cite{poilblanc}
The fermionic description apparently works better for frustrated than non-frustrated systems.\cite{capriotti}
However, if the sign is also optimized for each amplitude in the bosonic product state (which is not
easy for large, highly frustrated systems because of Monte Carlo sign problems \cite{signnote}),
it is clear that these wave functions should be better than the fermionic RVB state considered so
far.\cite{capriotti} Therefore, the VB wave function we have studied here should, at least in
principle, give an even better description of the ground state of the frustrated model than the
fermionic RVB state optimized in Ref.~\onlinecite{capriotti}. Our results for the $4\times 4$ lattice,
along with the results of Ref.~\onlinecite{capriotti}, suggest that the QMC sign problem \cite{signnote}
should be small up to $J_2/J_1 \approx 0.4$. It may thus even be possible to gain insights into the quantum
phase transition and the controversial state\cite{j1j2} for $J_2/J_1 > 0.4$ with this type of
variational wave function.
Another interesting question is how bond correlations, which are not included in the wave function
considered here, develop as the phase transition at $J_2/J_1 \approx 0.4$ is approached. We are currently
exploring the inclusion of bond-pair correlations to further improve the variational wave function for
the Heisenberg model as well as more complicated spin models.
The stochastic energy minimization scheme that we have introduced here, which requires only the signs
of the first energy derivatives, may also find applications in variational QMC simulations of
electronic systems. Recently proposed efficient optimization schemes \cite{umrigar,sorella} require
the second energy derivatives, so our scheme, which needs only the first derivatives, offers potentially
significant time savings when the number of variational parameters is large. Very recently, other
powerful schemes also requiring only the first energy derivatives have been developed and have been
shown to be applicable to wave functions with a large number of parameters.\cite{umrigar2}
We have not yet compared the efficiencies of these different optimization approaches with the
stochastic scheme presented here.
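To indicate the flavor of such a sign-only update (a generic sketch of our own in Python; the actual step-size schedule and the estimator of the derivative signs used in our scheme are defined in the preceding sections and are not reproduced here):
\begin{verbatim}
import numpy as np

def sign_only_step(params, grad_estimate, step):
    """Move each variational parameter opposite to the sign of its energy derivative."""
    return params - step * np.sign(grad_estimate)
\end{verbatim}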
\acknowledgments
We would like to thank Kevin Beach for many useful discussions. This work was supported by the
NSF under Grant No.~DMR-0513930.
\section{Introduction.}
The purpose of this paper is to determine the homology (with coefficients in
a field ${\bold F} $) of configuration spaces $C((M,M_o)\times {\bold R} ^n;X)$.
The homology groups
$$H_{*}(C((M,M_o)\times {\bold R} ^n;X);{\bold F} )$$ are
determined by $\dim M$, $H_{*}(M,M_o;{\bold F} )$ and $H_{*}(X;{\bold F} )$.
This answers a conjecture of F. Cohen and L. Taylor [10].
Let $A$ be a topological space. The ordered configuration space $\widetilde{C%
}^k(A)$ is the subspace of $A^k$ consisting of all $k$-tuples $(a_1,...,a_k)$
such that $a_i\neq a_j$ for $i\neq j$, where $A^k=A\times ...\times A$ with $%
k$ copies. Define the configuration space
$$
C(A,A_o;X)=\coprod_{k=1}^\infty \widetilde{C}^k(A)\times _{\Sigma _k}X^k
/\approx
$$
where $A_o$ is a closed subspace of $A$, $X$ is a space with
non-degenerate base-point $*$, and $\approx $ is generated by
$$
(a_1,...,a_k;x_1,...,x_k)\approx (a_1,...,a_{k-1};x_1,...,x_{k-1})
$$
if $a_k\in A_o$ or $x_k=*$.
The spaces in the title are given by $C((A,A_o)\times {\bold R} ^n;X)= C(A\times {\bold R} ^n, A_o\times {\bold R} ^n; X)$.
The applications of configuration spaces can be found in [1, 3, 6, 7, 10, 11, 14, 17, 18]. The homology of configuration spaces for various cases can be found in [4, 5, 8, 9, 10, 11, 12, 13, 20]. We will consider the case that $A$ is a manifold and $A_{0}$ is a submanifold and $n>0$.
In this paper, a space $X$ will always mean a compactly generated weak
Hausdorff space with nondegenerate base-point $*$ such that $(X,*)$ is an
NDR-pair. Homology groups are always taken with coefficients in the field ${\bold F} $; $S_{*}(-)$ will always mean the singular chain complex with coefficients in ${\bold F}$. A manifold will always mean a smooth triangulated manifold. For each finite-dimensional graded ${\bold F} $-module $V_{*}$ and a connected space $X$, define
$$
{\cal C}^m(V_{*};X)=\bigotimes_{q=0}^m\bigotimes^{\beta
_q(V_{*})}H_{*}(\Omega ^{m-q}S^mX)
$$
where $\beta _q(V_{*})=\dim {}_{{\bold F} }(V_{q})$ is the $q$-th Betti number of
$V_{*}$ and $\Omega ^{m-q}S^mX=*$ if $q>m$.
If $V_q=0$ for all $q\geq m$, then each term $H_{*}(\Omega ^{m-q}S^mX)$ in $%
{\cal C}^m(V_{*};X)$ is an algebra with an algebra filtration induced by $\Omega ^{m-q}S^mX\simeq C({\bold R} ^{m-q},S^qX)$. This filters ${\cal C}^m(V_{*};X)$ by the tensor product filtration. The natural filtration of configuration spaces will be given in section 2. Now our main theorems are as follows.
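For example, for a closed circle $M=S^1$ with $M_o=\emptyset $ (so $\dim M=1$), one has $\beta _0=\beta _1=1$ and $\beta _q=0$ otherwise, so that
$$
{\cal C}^{1+n}(H_{*}(S^1);X)=H_{*}(\Omega ^{n+1}S^{n+1}X)\otimes H_{*}(\Omega ^nS^{n+1}X).
$$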
\begin{theorem}
Let $M$ be a smooth triangulated compact manifold and let $M_o$ be a smooth
triangulated compact submanifold of $M$, then
\begin{description}
\item[(1)] for each simply connected space $X$ and $n\geq 1$, there is an $%
{\bold F} $-filtered {\bf module} isomorphism%
$$
\theta :{\cal C}^{\dim M +n}(H_{*}(M,M_o);X) %
\longrightarrow H_{*}C((M,M_o)\times {\bold R} ^n;X)
$$
\item[(2)] for each simply connected space $X$ and $n\geq 2$, there is an $%
{\bold F} $-filtered {\bf algebra} isomorphism%
$$
\theta :{\cal C}^{\dim M +n}(H_{*}(M,M_o);X) %
\longrightarrow H_{*}C((M,M_o)\times {\bold R} ^n;X)
$$
\end{description}
\end{theorem}
\begin{theorem}
Let $M$ be a smooth triangulated compact manifold and let $M_o$ be a smooth
triangulated compact submanifold of $M$, then there exist isomorphisms%
$$
\overline{H}_{*}C((M,M_o)\times {\bold R} ^n;X) \approx %
\bigoplus_{k=1}^\infty \sigma ^{-2k}{\cal D}_k^{m+n}(H_{*}(M,M_o);S^2X)
$$
as ${\bold F} $-modules for any space $X$ and $n\geq 1$, where $m=\dim M$, $%
{\cal D}_k^{m+n}(V_{*};X)=$%
$$
F_k{\cal C}^{m+n}(V_{*};X)/F_{k-1}{\cal C}^{m+n}(V_{*};X)
$$
and $\sigma ^{-t}$ denotes the $t$-th desuspension of graded modules.
\end{theorem}
The article is organized as follows. In section 2,
some basic properties of configuration spaces are studied. Some product decompositions of certain configuration spaces are given in section 3. The proofs of Theorem
A and Theorem B are given in section 4. The author would like to thank Professors Xueguang Zhou and Fred Cohen for their many helpful discussions and encouragement during the writing of the manuscript. The author also would like to thank Miguel Xicotencatl for his help in typing the manuscript.
\bigskip
\bigskip
\section{Basic properties of configuration spaces.}
In this section, some basic properties of $%
C(A,A_o;X)$ are recalled. Given an embedding $(A,A_o)$ $\rightarrow (A^{\prime
},A_o^{\prime })$ of space-pairs and a pointed map $X\rightarrow X^{\prime }$%
, there is an induced map $C(A,A_o;X)\longrightarrow C(A^{\prime
},A_o^{\prime };X^{\prime })$. Hence the homotopy type of $C(A,A_o;X)$ is an
invariant of the (relative) isotopy type of $(A,A_o)$ and the homotopy type
of $(X,*)$. The length of a configuration induces a natural filtration
of $C(A,A_o;X)$ by the closed subspaces%
$$
F_kC=F_kC(A,A_o;X)=\coprod_{j=1}^k\widetilde{C}^j(A)\times _{\Sigma _j}X^j
/\approx
$$
$F_0C=*$ and $F_1C=A/A_o\wedge X$; see [3]. It is easy to see that $C(A\amalg
A^{\prime },A_o\amalg A_o^{\prime };$ $X)$ is homeomorphic to the product
$$C(A,A_{o};X) \times C(A^{\prime},A_{o}^{\prime};X)$$ and the homeomorphism preserves
the filtration. Hence $C(A,A_o;X)$ is a filtered H-space if there is an
embedding $e:(A\amalg A,A_o\amalg A_o)\longrightarrow (A,A_o)$ such that $%
e|_{\makebox{first copy of }(A,A_o)}$ and $e|_{\makebox{second copy of }(A,A_o)}$
are (relatively) isotopic to the identity map of $(A,A_o)$. In particular, $%
C((A,A_o)\times {\bold R} ^n;X)$ is a filtered H-space for each $n\geq 1$.
Define
$$
\widetilde{C}^k(A|A_o)=\{(a_1,...,a_k)\in \widetilde{C}^k(A) |\makebox{
some }a_j\in A_o\}
$$
and
$$
(X|*)^k=\{(x_1,...,x_k)\in X^{k }|\makebox{ some }x_j=*\}\makebox{ .}
$$
If $(A,A_o)$ is a relative CW-complex, there are $\Sigma _k$-equivariant
cofibrations $\widetilde{C}^k(A|A_o)$ $\rightarrow $ $\widetilde{C}^k(A)$
and $(X|*)^k\rightarrow X^k$ and therefore there are cofibrations%
$$
\widetilde{C}^k(A|A_o)\times _{\Sigma _k}X^k \cup \widetilde{C%
}^k(A)\times _{\Sigma _k}(X|*)^k \longrightarrow \widetilde{C}%
^k(A)\times _{\Sigma _k}X^k
$$
and $F_{k-1}C\rightarrow F_kC$ with the same cofibre $%
D_k(A,A_o;X)=F_kC/F_{k-1}C$ $\cong $
$$
\widetilde{C}^k(A)\times _{\Sigma _k}X^{k}/
(\widetilde{C}^k(A|A_o)\times _{\Sigma _k}X^k\cup \widetilde{C}^k(A)\times
_{\Sigma _k}(X|*)^k)
$$
see [2, pp 231-239; 14 pp 162-172 and Thm. 7.1]. We call $D_k(A,A_o;X)$ the $%
k$-adic construction of $C(A,A_o;X)$.
\begin{proposition}
If $(A,A_o)$ is a relative CW-complex and $k\geq 1$, then there is an
isomorphism of ${\bold F} $-modules%
$$
\overline{H}_{*}D_k(A,A_o;X)\longrightarrow H_{*}\left( S_{*}(
\widetilde{C}^k(A),\widetilde{C}^k(A|A_o))\otimes _{\Sigma _k}(\overline{H}%
_{*}(X))^{\otimes k}\right)
$$
\end{proposition}
The following lemma is useful.
\begin{otherlemma}
Let $C$ be a free $\Sigma _k$-chain complex and let $K$ and $L$ be chain
complexes. If the chain maps $f,g:K\rightarrow L$ are homotopic, then $%
1\otimes f^{\otimes k}$ and $1\otimes g^{\otimes k}:C\otimes K^{\otimes
k}\longrightarrow C\otimes L^{\otimes k}$ are $\Sigma _k$-equivariantly
homotopic, where $\Sigma _k$ acts diagonally on $C\otimes K^{\otimes k}$ and
$C\otimes L^{\otimes k}$.
\end{otherlemma}
{\em Proof.} Let $I$ be the unit chain complex with $0$-simplexes $
\overline{0}$ and $\overline{1}$, $1$-simplex $\overline{I}$ and
differential $\partial (\overline{I})=\overline{1}-\overline{0}$, and let $%
D:I\otimes K\rightarrow L$ be the chain homotopy between $f$ and $g$. The
composite of $\Sigma _k$-equivariant chain maps%
$$
I\otimes C\otimes K^{\otimes k}\stackrel{\varphi \otimes 1}{\longrightarrow }%
I^{\otimes k}\otimes C\otimes K^{\otimes k}\approx C\otimes (I\otimes
K)^{\otimes k}\stackrel{1\otimes D^{\otimes k}}{\longrightarrow }C\otimes
L^{\otimes k}
$$
is the required $\Sigma _k$-equivariant chain homotopy, where the $\Sigma _k$%
-equivariant chain map $\varphi :I\otimes C\rightarrow I^{\otimes k}\otimes
C $ is defined as follows.
Let ${\cal B}$ be a $\Sigma _k$-basis of $C$, define $\varphi (\overline{0}%
\otimes c)=\overline{0}^k\otimes c$, $\varphi (\overline{1}\otimes c)=
\overline{1}^k\otimes c$ for each $c\in {\cal B}$, and define%
$$
\varphi (\overline{I}\otimes \sigma \cdot c)=\sigma \cdot (\overline{0}%
^{k-1}\otimes \overline{I}+\sum_{j=1}^{k-2}\overline{0}^{k-1-j}\otimes
\overline{I}\otimes \overline{1}^j+\overline{I}\otimes \overline{1}%
^{k-1})\otimes \sigma \cdot c
$$
for each $\sigma \in \Sigma _k$.
\noindent{\em Proof of Proposition 2.1.} By the Eilenberg-Zilber Theorem,
there are isomorphisms
$$
\overline{H}_{*}D_k(A,A_o;X)\approx H_{*}(S_{*}(\tilde C^k(A),\tilde
C^k(A|A_o))\otimes _{\Sigma _k}S_{*}(X^k,(X|*)^k))
$$
$$
\approx H_{*}(S_{*}(\tilde C^k(A),\tilde C^k(A|A_o))\otimes _{\Sigma
_k}(S_{*}(X,*))^{\otimes k}).
$$
Since $S_{*}(\tilde C^k(A),\tilde C^k(A|A_o))$ is a free $\Sigma _k$-chain
complex over ${\bold F} $ and $S_{*}(X,*)\simeq \overline{H}_{*}(X)$ as chain
complexes over ${\bold F} $, where $\overline{H}_{*}(X)$ is taken with trivial
differential (see [19, Lemma VIII.3.1]), the assertion follows by the
Lemma above.
\begin{proposition}
If $(A,A_o)$ is a relative CW-complex, then%
$$
\overline{H}_{*}D_k(A,A_o;X)\approx \sigma ^{-2k}\overline{H}%
_{*}D_k(A,A_o;S^2X)
$$
where $\sigma ^{-t}$ is the $t$-th desuspension.
\end{proposition}
{\em Proof.} By Proposition 2.1, there are isomorphisms%
$$
\overline{H}_{*}D_k(A,A_o;X)\approx H_{*}(S_{*}(\tilde C^k(A),\tilde
C^k(A|A_o))\otimes _{\Sigma _k}(\overline{H}_{*}(X))^{\otimes k})
$$
and%
$$
\overline{H}_{*}D_k(A,A_o;S^2X)\approx H_{*}(S_{*}(\tilde C^k(A),\tilde
C^k(A|A_o))\otimes _{\Sigma _k}(\overline{H}_{*}(S^2X))^{\otimes k})
$$
$$
\approx \sigma ^{2k}H_{*}(S_{*}(\tilde C^k(A),\tilde C^k(A|A_o))\otimes
_{\Sigma _k}\overline{H}_{*}(X)^{\otimes k})
$$
\begin{proposition}
If $(A,A_o)$ is a relative CW-complex with an embedding
$$\coprod_{k=1}^%
\infty \tilde C^k(A)/\Sigma _k\rightarrow R^\infty ,$$ then there is a stable
equivalence%
$$
\sigma :C(A,A_o;X)\longrightarrow \bigvee_{k=1}^{\infty} D_k(A,A_o;X)
$$
via stable equivalences%
$$
\sigma _k:F_kC(A,A_o;X)\longrightarrow \bigvee_{j=1}^k D_j(A,A_o;X)
$$
\end{proposition}
{\em Proof.} If $(A,A_o)=(M,M_o)$ a manifold-pair, this is
proved in [3, Prop. 3]. For the general cases, one proceeds in the same way.
\begin{proposition}
Let $X$ be a path connected space and let $(A,A_o)$ be a relative CW-complex with
an embedding $\coprod_{k=1}^\infty \tilde C^k(A)/\Sigma _k\longrightarrow
R^\infty $. There exists an isomorphism of ${\bold F} $-modules%
$$
\varphi :H_{*}C(A,A_o;\bigvee_{\alpha \in I}S^{n_\alpha })\longrightarrow
H_{*}C(A,A_o;X)
$$
where $\{n_\alpha |\alpha \in I\}$ is determined by $\overline{H}_{*}(X)$.
\end{proposition}
{\em Proof.} Let $\{x_\alpha |\alpha \in I\}$ be a basis of $
\overline{H}_{*}(X)$ and let $n_\alpha =|x_\alpha |$ for $\alpha \in I$. Then there is
an ${\bold F} $-isomorphism%
$$
\varphi _1:\overline{H}_{*}(\vee_\alpha S^{n_\alpha })\rightarrow
\overline{H}_{*}(X)
$$
Now the assertion follows by Propositions 2.1 and 2.3.
\begin{proposition}
Let $M$ be a smooth compact manifold and let $M_o$ and $N$ be smooth
compact submanifolds of $M$ with codim $N=0$. If $N/M_o\cap N$ or $X$ is path
connected, then
$$
C(N,N\cap M_o;X)\rightarrow C(M,M_o;X)\rightarrow C(M,N\cup M_o;X)
$$
is a quasifibration.
\end{proposition}
[5, pp. 113].
\bigskip
Now let $M$ be an $m$-manifold and let $W$ be an $m$-manifold without
boundary which contains $M$, e.g. $W=M$ if $M$ is closed, or $W=M\cup
\partial M\times [0,1)$ if $M$ has boundary. Let $\xi $ be the principal $%
O(m)$-bundle of the tangent bundle of $W$. Let $\Gamma _{\xi
[S^mX]}(B,B_o)$ be the space of cross sections of $\xi [S^mX]$ which are
defined on $B$ and take values at $\infty \wedge X$ on $B_o$ for each subspace pair $(B,B_o)$ in $W$%
, where $\xi [S^mX]$ is the associated bundle and $O(m)$ acts diagonally on $%
S^mX=S^m\wedge X$, trivially on $X$ and canonically on $S^m\cong R^m\cup
\{\infty \}$.
\begin{proposition}
Let $M$ be a smooth compact manifold and let $M_o$ be a smooth compact
submanifold of $M$. If $M/M_o$ or $X$ is path connected, then there is a (weak)
homotopy equivalence%
$$
C(M,M_o;X)\rightarrow \Gamma _{\xi [S^mX]}(W-M_o,W-M)
$$
\end{proposition}
[5, Proposition 3.1 and 3, pp. 178]
\bigskip
\begin{remark}
By Proposition 2.6, there is a homotopy equivalence
$$
C((M,M_o)\times {\bold R} ^n;X)\simeq \Omega ^nC(M,M_o;S^nX)
$$
if $M/M_o$ or $X$ is path connected.
\end{remark}
\section{Decomposition Theorems}
In this section, assume that $(M,M_o)$ is a smooth triangulated
compact manifold-pair with $m=\dim M$ and $W$ is a smooth $m$-manifold
without boundary which contains $M$ and $\xi _W$ is the principal $O(m)$%
-bundle of the tangent bundle of $W$ (see Proposition 2.6). Let $\overline{W}%
=M$ if $M$ is closed, or $\overline{W}=M\times [0,\frac 12]$ if $M$ has
boundary.
\begin{lemma}
If $X$ is path connected, then there is a (weak) homotopy equivalence%
$$
C((M,M_o)\times {\bold R} ;X)\rightarrow \Gamma _{\xi _W[\Omega
S^{m+1}X]}(W-M_o,W-M)
$$
where $O(m)$ acts on $\Omega S^{m+1}X$ via the functor $\Omega S(-)$, i.e., by taking the homeomorphism $\Omega S(\sigma \wedge id|_{X}): \Omega S^{m+1}X \longrightarrow \Omega S^{m+1}X$ for each $\sigma \in O(m)$.
\end{lemma}
{\em Proof.} Notice that $(M,M_o)\times {\bold R} $ is isotopic to $%
(M,M_o)\times I$. By Proposition 2.6, it is easy to see that%
$$
C((M,M_o)\times {\bold R} ;X)\simeq \Gamma _{\xi _{W\times {\bold R} %
}[S^{m+1}X]}((W-M_o,W-M)\times (I,\partial I))
$$
Since $\xi _{W\times {\bold R} }[S^{m+1}X]=\pi ^{*}\xi _W[S^{m+1}X]$, where $%
\pi :W\times {\bold R} \rightarrow W$ is the projection, there is%
$$
\Gamma _{\xi _{W\times {\bold R} }[S^{m+1}X]}((W-M_o,W-M)\times (I,\partial I))
$$
$$
\simeq \Gamma _{\xi _W[\Omega S^{m+1}X]}(W-M_o,W-M)
$$
\begin{lemma}
Let $(N,N_o)$ be a compact submanifold-pair in $\overline{W}$ such that $%
N\subseteq \overline{W}-\partial \overline{W}$, then there is a (weak)
homotopy equivalence%
$$
C((\overline{W}-\nu (N_o),\overline{W}-\nu (N))\times {\bold R} ;X)\rightarrow
\Gamma _{\xi _W[\Omega S^{m+1}X]}(N,N_o)
$$
if $X$ is path connected, where $(\nu (N),\nu (N_o))$ is an open tubular
neighborhood of $(N,N_o)$ in $\overline{W}-\partial \overline{W}$.
\end{lemma}
{\em Proof.} By Lemma 3.1, there is a homotopy equivalence%
$$
C((\overline{W}-\nu (N_o),\overline{W}-\nu (N))\times {\bold R} ;X)\simeq
\Gamma _{\xi _W[\Omega S^{m+1}X]}(W-(\overline{W}-\nu (N)),W-(\overline{W}%
-\nu (N_o)))
$$
$$
\cong \Gamma _{\xi _W[\Omega S^{m+1}X]}(\nu (N),\nu (N_o))
$$
$$
\simeq \Gamma _{\xi _W[\Omega S^{m+1}X]}(N,N_o)
$$
\begin{lemma}
Let $(N,N_o)$ be a submanifold-pair in $W$. There is a homeomorphism%
$$
\Gamma _{\xi _W[\Omega S^{mn+1}X]}(N,N_o)\cong \Gamma _{\xi _{W^n}[\Omega
S^{mn+1}X]}(\Delta _n(N),\Delta _n(N_o))
$$
for $n\geq 1$, where $\Delta _n:W\rightarrow W^n=W\times ...\times W$ is the
diagonal map, $O(mn)$ acts on $\Omega S^{mn+1}X$ via the functor $\Omega S(-)$
(see Lemma 3.1 above), and $O(m)$ acts on $\Omega S^{mn+1}X$ via the diagonal inclusion $%
O(m)\rightarrow O(mn)$, $\sigma \mapsto \mathrm{diag}(\sigma,\sigma,...,\sigma)$.
\end{lemma}
{\em Proof.} The diagonal map $\Delta _n:W\rightarrow W^n$ induces a bundle map $
\overline{\Delta }_n:\xi _W\rightarrow \xi _{W^n}$. Consider the induced map on the total spaces of bundles
$\overline{\Delta }_n\times 1:E\xi _W\times \Omega S^{mn+1}X\longrightarrow
E\xi _{W^n}\times \Omega S^{mn+1}X$.
For each $(x,y)\in E\xi _W\times \Omega S^{mn+1}X$ and $\sigma \in O(m)$, notice that
$$
\overline{\Delta }_n\times 1(x\sigma ,\sigma ^{-1}y)=((x,\ldots ,x)\tilde
\sigma ,\Omega S(\sigma^{-1}\wedge \ldots \wedge \sigma^{-1}\wedge 1_{X})y)
$$
$$
=((x,\ldots ,x)\tilde \sigma ,\tilde \sigma ^{-1}y)
$$
where $\tilde \sigma =\left(
\begin{array}{ccc}
\sigma & & \\
& \ddots & \\
& & \sigma
\end{array}
\right) \in O(mn)$.
Hence $\overline{\Delta }_n\times 1$ induces a map%
$$
\Delta : E(\xi _W[\Omega S^{mn+1}X])=E\xi _W\times _{O(m)}\Omega S^{mn+1}X \longrightarrow E\xi
_{W^n}\times _{O(mn)}\Omega S^{mn+1}X
$$
$$
=E(\xi _{W^n}[\Omega S^{mn+1}X]).
$$
Furthermore $\Delta$ induces a homeomorphism%
$$
\Gamma _{\xi _W[\Omega S^{mn+1}X]}(N,N_o)\cong \Gamma _{\xi _{W^n}[\Omega
S^{mn+1}X]}(\Delta _n(N),\Delta _n(N_o)),
$$
since the induced map is one-to-one, onto, and open.
The following lemma follows from the naturality of the Samelson products.
\begin{lemma}
There exists an $O(m)$-map $\Phi _n: (S^{m}X)^{(n)}\longrightarrow \Omega
S^{m+1}X$ which represents the Samelson products $[[E,E],\ldots ,E]$ for each $n\geq 2$, where $X^{(n)}$ is the reduced join of $n$ copies of $X$ and $E: S^{m}X \longrightarrow \Omega S^{m+1}X$ is the suspension.
\end{lemma}
Now we give some decomposition theorems.
\begin{theorem}
Let $(M,M_o)$ be a smooth triangulated compact manifold-pair, $n+m+1$ even
and $n\geq 1$. There exists a manifold-pair $(\tilde M,\tilde M_o)$ such that $%
\dim \tilde M=2m$ and
$$
C((M,M_o)\times {\bold R} ;S^n)_{(p)}\stackrel{w}{\simeq }C((\tilde M,\tilde
M_o)\times {\bold R} ;S^{2n})_{(p)}\times C(M,M_o;S^n)_{(p)}
$$
where $m=\dim M$, $p$ is an odd prime or zero, and $\stackrel{w}{\simeq}$ denotes a (weak) homotopy equivalence.
\end{theorem}
{\em Proof.} By Lemma 3.1, there is a homotopy equivalence
$$
C((M,M_o)\times {\bold R} ;S^n)\simeq \Gamma _{\xi _W[\Omega
S^{m+1+n}]}(W-M_o,W-M)
$$
The inclusion $j:S^{m+n}\rightarrow \Omega S^{m+1+n}$ is an $O(m)$-map. By
Lemma 3.4, there is an $O(m)$-Samelson product $[j,j]:S^{2m+2n}\rightarrow
\Omega S^{m+1+n}$, which induces an $O(m)$-H-map $\overline{[j,j]}:\Omega
S^{2m+1+2n}\rightarrow \Omega S^{m+1+n}$.
Now consider the $O(m)$-map%
$$
(\overline{[j,j]},j):\Omega S^{2m+1+2n}\times S^{m+n}\longrightarrow \Omega
S^{m+1+n},
$$
which induces a bundle map%
$$
\xi _W[\Omega S^{2m+1+2n}\times S^{m+n}]\longrightarrow \xi _W[\Omega
S^{m+1+n}]
$$
and therefore a map%
$$
\Gamma _{\xi _W[\Omega S^{2m+1+2n}\times S^{m+n}]}(W-M_o,W-M)\longrightarrow
\Gamma _{\xi _W[\Omega S^{m+1+n}]}(W-M_o,W-M)
$$
Notice that%
$$
\begin{array}{c}
\Gamma _{\xi _W[\Omega S^{2m+1+2n}\times S^{m+n}]}(W-M_o,W-M)\simeq \\
\simeq \Gamma _{\xi _W[\Omega S^{2m+1+2n}]}(W-M_o,W-M)\times \Gamma _{\xi
_W[S^{m+n}]}(W-M_o,W-M)
\end{array}
$$
By Lemma 3.3%
$$
\begin{array}{c}
\Gamma _{\xi _W[\Omega S^{2m+1+2n}]}(W-M_o,W-M)\simeq \Gamma _{\xi
_{W^2}[\Omega S^{2m+1+2n}]}(\Delta (W-M_o),\Delta (W-M)) \\
\simeq C((\tilde M,\tilde M_o)\times {\bold R} ;S^{2n})\makebox{ (Lemma 3.2)}
\end{array}
$$
where $\dim \tilde M=2m$. Since $(\overline{[j,j]},j)$ is a homotopy equivalence after $p$-localization, the assertion follows by induction on the handle
decomposition of $M$.
\begin{theorem}
Let $(M,M_o)$ be a smooth triangulated compact manifold-pair. If
$X_1,\allowbreak \ldots
,\allowbreak X_k$ are connected spaces, there exist manifold pairs
$$(M_\omega ,M_{o\omega
})$$ such that $C((M,M_o)\times {\bold R} ;X_1\vee \ldots \vee X_k)$ is (weak)
homotopy equivalent to the (weak) product%
$$
\prod_\omega C((M_\omega ,M_{o\omega })\times {\bold R} ;\omega (X_1,\ldots
,X_k))
$$
where $\omega $ runs over all admissible words in $x_1,\ldots ,x_k$ (for
admissible words see [21, pp. 511-514]) and $\omega (X_1,\ldots
,X_k)=X_1^{(a_1)}\wedge \ldots \wedge X_k^{(a_k)}$ and $X^{(n)}$ is the
reduced join of $n$ copies of $X$ and $a_i$ is the number of occurrences of $%
x_i$ in the word $\omega $.
\end{theorem}
{\em Proof.} Denote $X=X_1\vee \ldots \vee X_k$. By Lemma 3.1, there is a homotopy equivalence%
$$
C((M,M_o)\times {\bold R} ;X)\simeq \Gamma _{\xi _W[\Omega
S^{m+1}X]}(W-M_o,W-M)
$$
Consider the inclusions
$$
i_t:S^mX_t\rightarrow \Omega S^{m+1}X\makebox{\qquad for }1\leq t\leq k
$$
By Lemma 3.4, there are $O(m)$-Samelson products
$$
\omega (i_1,\ldots ,i_k):S^{l(\omega )\cdot m}\omega (X_1,\ldots
,X_k)\longrightarrow \Omega S^{m+1}X
$$
for each word $\omega $, where $l(\omega )$ is the length of $\omega
$, which induces an $O(m)$-H-map%
$$
\tilde \omega (i_1,\ldots ,i_k):\Omega S^{l(\omega )\cdot m}\omega
(X_1,\ldots ,X_k)\longrightarrow \Omega S^{m+1}X
$$
Consider the $O(m)$-map%
$$
f=\prod_\omega \tilde \omega (i_1,\ldots ,i_k):\prod_\omega \Omega S^{{l%
}(\omega )\cdot m}\omega (X_1,\ldots ,X_k)\longrightarrow \Omega S^{m+1}X
$$
which induces a bundle map%
$$
\xi _W[\prod_\omega \Omega S^{l(\omega )\cdot m}\omega (X_1,\ldots
,X_k)]\longrightarrow \xi _W[\Omega S^{m+1}X]
$$
and therefore a map%
$$
\tilde f:\Gamma _{\xi _W[\prod_\omega \Omega S^{l(\omega )\cdot
m}\omega (X_1,\ldots ,X_k)]}(W-M_o,W-M)
$$
$$
\longrightarrow \Gamma _{\xi W[\Omega S^{m+1}X]}(W-M_o,W-M).
$$
By the Hilton-Milnor Theorem, $f$ is a (weak) homotopy equivalence. By induction on the handle decomposition of $M$, $\tilde f$ is a (weak) homotopy equivalence.
Notice that%
$$
\Gamma _{\xi _W[\prod_\omega \Omega S^{l(\omega )\cdot m}\omega
(X_1,\ldots ,X_k)]}(W-M_o,W-M)
$$
$$
\simeq \Gamma _{\xi _W[\prod_\omega \Omega S^{l(\omega )+1}\omega
(X_1,\ldots ,X_k)]}(\overline{W}-v(M_o),\overline{W}-v(M))
$$
$$
\cong \prod_\omega \Gamma _{\xi _W[\Omega S^{l(\omega )+1}\omega
(X_1,\ldots ,X_k)]}(\overline{W}-v(M_o),\overline{W}-v(M))
$$
$$
\cong \prod_\omega \Gamma _{\xi _{W^{l(\omega )}}[\Omega S^{l%
(\omega )+1}\omega (X_1,\ldots ,X_k)]}(\Delta _{l(\omega )}(\overline{%
W}-v(M_o)),\Delta _{l(\omega )}(\overline{W}-v(M)))
$$
$$
\simeq \prod_\omega C((M_\omega ,M_{o\omega })\times {\bold R} ;\omega
(X_1,\ldots ,X_k))\makebox{\qquad (Lemma 3.2)}
$$
where $\dim M_\omega =\dim W^{l(\omega )}=l(\omega )\cdot m$.
The assertion follows.
\section{Proofs of Theorems A and B.}
\begin{lemma}
Let $(N,N_o)\subseteq (M,M_o)$ be a smooth compact submanifold pair of $%
(M,M_o)$ with $\dim N=\dim M$ and let $X$ be a simply connected space. If $%
H_{*}(N,N_o)\rightarrow H_{*}(M,M_o)$ is onto, then
$$
H_{*}C((N,N_o)\times {\bold R} ^n;X)\longrightarrow H_{*}C((M,M_o)\times {\bold R} %
^n;X)
$$
is onto for $n\geq 1$.
\end{lemma}
{\em Proof.} Since $(M,M_o)\times {\bold R} ^n$ is isotopic to $%
(M\times I^{n-1},M_o\times I^{n-1})\times {\bold R} $, it is sufficient to show
that%
$$
H_{*}C((N,N_o)\times {\bold R} ;X)\longrightarrow H_{*}C((M,M_o)\times {\bold R} %
;X)
$$
is onto.
\begin{itemize}
\item[Step 1:] Assume that $X=S^n$ with $n>1$.
If $m+1+n$ is odd or ${\bold F} $ is of characteristic $2$, this was proved in
[5, Th. A], where $m=\dim M$.
Now assume that $m+1+n$ is even and ${\bold F} $ is of characteristic $\not =2$%
. By Theorem C, there is a $p$-homotopy equivalence
$$
C((M,M_o)\times {\bold R} ;S^n)\simeq C((\tilde M,\tilde M_o)\times {\bold R} %
;S^{2n})\times C(M,M_o;S^n)
$$
and%
$$
C((N,N_o)\times {\bold R} ;S^n)\simeq C((\tilde N,\tilde N_o)\times {\bold R} %
;S^{2n})\times C(N,N_o;S^n)
$$
where $\dim \tilde M+1+2n=\dim \tilde N+1+2n=2m+2n+1$
and $\dim M+n=\dim N+n=m+n$.
The assertion follows by [5, Th. A].
\item[Step 2:] Assume that $X=S^{d_1}\vee \ldots \vee S^{d_k}$ with $d_j>1$.
By Theorem D, there is a (weak) homotopy equivalence%
$$
C((M,M_o)\times {\bold R} ;X)\simeq \prod_\omega C((M_\omega ,M_{o\omega
})\times {\bold R} ;\omega (X))
$$
where $\omega (X)=S^{d_1a_1}\wedge \ldots \wedge
S^{d_ka_k}=S^{d_1a_1+\cdots +d_ka_k}$.
The assertion follows from Step 1.
\item[Step 3:] Assume $H_{*}(X)$ is a finite dimensional ${\bold F} $-vector
space. The assertion follows from Step 2 and Proposition 2.4.
\item[Step 4:] General case.
There exist $X_\alpha $ such that $H_{*}(X)=
\lim_\alpha H_{*}(X_\alpha )$ and $H_{*}(X_\alpha )$ is finite
dimensional ${\bold F} $-vector space for each $\alpha $. The assertion follows
from Step 3.
\end{itemize}
{\em Proof of Theorem A.} First we will prove the absolute
case $M_o=\phi $ by induction on a handle decomposition of $M$. If $M$
is a disjoint union $M_1\amalg M_2$, then
$$
C(M\times {\bold R} ^n;X)\cong C(M_1\times {\bold R} ^n;X)\times C(M_2\times {\bf R%
}^n;X)
$$
Hence we can restrict to connected manifolds and start with $M=I^m$, for which the
assertion is obvious. Assume that the assertion holds for $M$
and let $\overline{M}=M\cup D$ with $D\cong I^m$ a handle of index $q$, i.e. $%
D\cap M\cong I^{m-q}\times \partial I^q$, with $q\geq 1$ since $M$ is connected.
There is a cofibration%
$$
M\stackrel{i}{\rightarrow }\overline{M}\stackrel{j}{\rightarrow }(\overline{M%
},M)\simeq (I^m,I^{m-q}\times \partial I^q)\simeq (S^q,*)
$$
and there are two alternatives:
I. $H_q(\overline{M})\stackrel{j_{*}}{\longrightarrow }H_q(S^q)$ is onto.
II. $H_q(\overline{M})\longrightarrow H_q(S^q)$ is zero.
\begin{itemize}
\item[Case I.] Consider the quasifibration%
$$
C(M\times {\bold R} ^n;X)\longrightarrow C(\overline{M}\times {\bold R} ^n;X)%
\stackrel{C(j)}{\longrightarrow }C((\overline{M},M)\times {\bold R} ^n;X)
$$
$$
\simeq \Omega ^{m+n-q}S^{m+n}X
$$
Since $j_{*}$ is onto, $C(j)_{*}$ is onto and the Serre spectral
sequence for the quasifibration above collapses. Hence there is a short exact
sequence of Hopf algebras%
$$
H_{*}C(M\times {\bold R} ^n;X)\succ \longrightarrow H_{*}C(\overline{M}\times
{\bold R} ^n;X)\longrightarrow \succ H_{*}\Omega ^{m+n-q}S^{m+n}X
$$
By Proposition 2.3, there is a commutative diagram%
$$
\begin{array}{cccccccccccc}
\Sigma^{\infty}C(\overline{M}\times {\bold R} ^n;X)
&\stackrel{p}{\rightarrow}&\Sigma^{\infty}D_k(\overline{M}\times{\bold R} ^n; X)\\
\downarrow & &\downarrow \\
\Sigma^{\infty}C((\overline{M}, M)\times {\bold R} ^n;X) &\stackrel{p}{\rightarrow}&\Sigma^{\infty}D_k((\overline{M},M)\times{\bold R} ^n; X)
\end{array}
$$
for each $k\geq1$, where $p$ is the projection.
Thus\quad
$$
\overline{H}_{*}D_k(\overline{M}\times {\bold R} ^n;X)\longrightarrow \overline{H}%
_{*}D_k((\overline{M},M)\times {\bold R} ^n;X)
$$
is an epimorphism and $F_rH_{*}C(\overline{M}\times {\bold R} ^n;X)\longrightarrow
F_rH_{*}\Omega ^{m+n-q}S^{m+n}X$ is an epimorphism by Proposition 2.3.
\end{itemize}
Hence there exists an ${\bold F} $-map%
$$
\varphi :H_{*}\Omega ^{m+n-q}S^{m+n}X\longrightarrow H_{*}C(\overline{M}%
\times {\bold R} ^n;X)
$$
such that
\begin{enumerate}
\item $\varphi $ preserves the filtration.
\item $C(j)_{*}\circ \varphi =id$
\end{enumerate}
Now the composite%
$$
H_{*}C(M\times {\bold R} ^n;X)\otimes H_{*}\Omega
^{m+n-q}S^{m+n}X\longrightarrow H_{*}C(\overline{M}\times {\bold R} %
^n;X)^{\otimes 2}
$$
$$
\longrightarrow H_{*}C(\overline{M}\times {\bold R} ^n;X)
$$
is an isomorphism of filtered modules and (1) follows.
If $n>1$, let $\varphi :QH_{*}(\Omega ^{m+n-q}S^{m+n}X)\longrightarrow
QH_{*}C(\overline{M}\times {\bold R} ^n;X)$ so that $QC(j)_{*}\circ \varphi =id$
and $\varphi $ preserves the filtration.
The homomorphism%
$$
QH_{*}\Omega ^{m+n-q}S^{m+n}X\longrightarrow QH_{*}C(\overline{M}\times {\bf %
R}^n;X)\longrightarrow H_{*}C(\overline{M}\times {\bold R} ^n;X)
$$
induces an algebra map%
$$
H_{*}\Omega ^{m+n-q}S^{m+n}X\longrightarrow H_{*}C(\overline{M}\times {\bold R} %
^n;X)
$$
since $H_{*}C(\overline{M}\times {\bold R} ^n;X)$ is a commutative algebra and $%
H_{*}\Omega ^{m+n-q}S^{m+n}X$ is a free commutative algebra. Thus (2)
follows.
\begin{itemize}
\item[Case II.] Consider the quasifibration%
$$
\Omega C((\overline{M},M)\times {\bold R} ^n;X)\longrightarrow C(M\times {\bold R} %
^n;X)\stackrel{C(i)}{\longrightarrow }C(\overline{M}\times {\bold R} ^n;X)
$$
\end{itemize}
Since $i_{*}$ is onto, $C(i)_{*}$ is onto and the Serre spectral
sequence for the quasifibration above collapses. Hence%
$$
H_{*}C(\overline{M}\times {\bold R} ^n;X)\approx H_{*}C(M\times {\bold R} %
^n;X)//H_{*}\Omega ^{m+n-q+1}S^{m+n}X
$$
Both (1) and (2) follow.
To treat the relative case, we can assume that $M_o$ is part of a closed
collar (see [5]). We prove the assertion by induction on a handle decomposition of $M$,
starting with $M=M_o$, for which the assertion is obvious since $%
C(M_o,M_o;X)\simeq *$. Assume that the assertion holds for $(M,M_o)$ and let $
\overline{M}=M\cup D$ with $D\cong I^m$ a handle of index $q$ in $M$. Clearly
we can assume that $q\geq 1$. There is a cofibration%
$$
(M,M_o)\rightarrow (\overline{M},M_o)\rightarrow (\overline{M},M)\simeq
(I^m,I^{m-q}\times \partial I^q)
$$
and again there are two alternatives:
III. $H_q(\overline{M},M_o)\rightarrow H_q(\overline{M},M)$ is onto.
IV. $H_q(\overline{M},M_o)\rightarrow H_q(\overline{M},M)$ is zero.
For Cases III and IV, the assertion follows from Lemma 4.1, similarly to Cases I and II, respectively.
{\em Proof of Theorem B.} By Theorem A, there is an isomorphism of filtered ${\bold F} $-modules%
$$
H_{*}C((M,M_o)\times {\bold R} ^n;S^2X)\approx {\cal C}%
^{m+n}(H_{*}(M,M_o);S^2X).
$$
Hence%
$$
\overline{H}_{*}D_k((M,M_o)\times {\bold R} ^n;S^2X)\approx {\cal D}%
_k^{m+n}(H_{*}(M,M_o);S^2X)\makebox{.}
$$
By Proposition 2.2 and Proposition 2.3, there are isomorphisms of ${\bold F} $-modules%
$$
\overline{H}_{*}D_k((M,M_o)\times {\bold R} ^n;X)\approx \sigma ^{-2k}\overline{%
H}_{*}D_k((M,M_o)\times {\bold R} ^n;S^2X)
$$
and%
$$
\overline{H}_{*}C((M,M_o)\times {\bold R} ^n;X)\approx \bigoplus_{k=1}^\infty
\overline{H}_{*}D_k((M,M_o)\times {\bold R} ^n;X)
$$
The assertion follows.
\bigskip
\bigskip
\section{Introduction}
The spectroscopic emissions from dissociative products in the cometary coma are often used to estimate
the production rates of the respective cometary parent species, which sublimate directly
from the nucleus \citep{Feldman04, Combi04}. It is well known that at smaller ($<$2 AU)
heliocentric distances, the inner
cometary coma is dominantly composed of H$_2$O. The infrared
emissions of the H$_2$O molecule are inaccessible from the ground because of
strong attenuation by the terrestrial atmosphere.
Since H$_2$O does not show any spectroscopic
transitions in the ultraviolet or visible regions of the solar spectrum, one can estimate its
abundance indirectly from the emissions of daughter products, like OH, O, and H.
Thus, tracking emissions of the dissociative products of
H$_2$O has become an important diagnostic tool in estimating the production rate as well as
in understanding the spatial distribution of H$_2$O in comets
\citep{Delsemme76,Delsemme79,Fink84,Schultz92,Morgenthaler01,Furusho06}.
For estimating the density distribution of H$_2$O
from the emissions of daughter species, one has to account
for photochemistry and associated emission processes.
The major dissociative channel of H$_2$O is the
formation of H and OH, but a small fraction also dissociates into
O($^3$P, $^1$S, $^1$D) and H$_2$. The radiative decay of metastable $^1$D and $^1$S
states of atomic oxygen leads to emissions
at wavelengths 6300, 6364 \AA\ (red doublet) and 5577 \AA\ (green line), respectively. The energy
levels of atomic oxygen and these forbidden transitions are shown in Figure~\ref{engyo}. Even though
these emissions are accessible from ground-based observatories, most of the time they
are contaminated by telluric night sky emissions as well as emissions from other cometary
species. The Doppler shift of these lines, which is
a function of the relative velocity of the comet with respect to the Earth, allows them to be separated from
the telluric emissions provided a high-resolution cometary spectrum is obtained. In most cometary
observations it is very difficult to separate the green line in the optical spectrum because
of contamination from the cometary C$_2$ (1-2) P-branch band emission. The red line 6300 \AA\
emission is also mildly contaminated by the Q-branch emission of the NH$_2$ molecule, but in a
high-resolution spectrum this can be easily resolved.
Since these atomic oxygen emissions result due to electronic transitions which are forbidden
by selection rules, solar radiation cannot populate these excited states directly from
the ground state via resonance fluorescence.
The photodissociative excitation and electron
impact excitation of neutral species containing atomic oxygen, and ion-electron dissociative
recombination of O-bearing ion species, can produce
these metastable states \citep{Bhardwaj02}.
If O($^1$D) is not quenched by ambient cometary species, then
photons at wavelengths 6300 and 6364 \AA\ will be emitted in radiative decay to the ground $^3$P state.
Only about 5\% of O($^1$S) atoms result in 2972 and 2958 \AA\ emissions via direct radiative transition
to the ground $^3$P state of atomic oxygen.
Around 95\% of O($^1$S) decays to the ground state through O($^1$D) by emitting green
line (cf. Fig. \ref{engyo}). This implies that if the green line emission is present
in cometary coma, the red doublet emission will
also be present, but the opposite is not always true. The average lifetime of O($^1$D) is relatively
small ($\sim$110 s) compared to the lifetime of H$_2$O molecule ($\sim$8 $\times$ 10$^{4}$ s)
at 1 AU.
The O($^1$S) also has a very short average lifetime of about 0.1 s.
Due to their short lifetimes, these metastable species cannot
travel large distances in the cometary coma before de-exciting via radiative transitions.
Hence, these emissions have been used as diagnostic tools to
estimate the abundance of H$_2$O in comets \citep{Fink84,Magee90,Morgenthaler01}.
The intensity of O[I] emissions, in Rayleighs, can be calculated using the following equation
\citep{Festou81}
\begin{equation}
I=10^{-6}\tau_p^{-1}\alpha \beta N
\end{equation}
where $\tau_p$ is the lifetime of excited species in seconds, $\alpha$ is the
yield of photodissociation, $\beta$ is
the branching ratio, and N is the column density of cometary species in cm$^{-2}$.
In the case of the red doublet (6300 and 6364 \AA), since both emissions arise from transitions
from the same excited state (2p$^4$ $^1$D) to the ground triplet state (2p$^4$ $^3$P),
the intensity ratio of these two lines should equal the ratio of the
branching ratios of the corresponding transitions. Using Einstein transition
probabilities, \cite{Storey00} calculated the intensity ratio of red doublet
and suggested that the intensity of 6300 \AA\ emission
would be 3 times stronger than that of 6364 \AA\ emission, and this has been observed
in several comets also \citep{Spinrad82,Fink84,Morrison97,Cochran01,Capria05,Furusho06,
Capria08,Cochran08}.
The ratio of intensity of green line to the sum of intensities of red doublet
can be calculated as
\begin{equation}
\frac{I_{5577}}{I_{6300} + I_{6364}} = \frac{\tau^{-1}_{green} \alpha_{green}N_{green} \beta_{green}}
{\tau^{-1}_{red}\alpha_{red}N_{red}(\beta_{6300+6364})}
\end{equation}
If the emission intensities of the oxygen lines are attributed solely to photodissociative
excitation of H$_2$O and the column densities are assumed to be nearly the same for both emissions,
then the ratio of the intensities of the green line to the red doublet is directly proportional
to the ratio of the quantities $\tau^{-1}\alpha\beta$. \cite{Festou81}
reviewed these atomic oxygen emissions in comets.
Based on the observation of the O[I] 2972 \AA\ emission in the IUE spectra of
comet Bradfield (1979X),
\cite{Festou81} calculated the brightness profiles of the red and green emissions.
They also calculated a
theoretical value for the ratio of the intensity of the green line to the red doublet
(hereafter referred to as the G/R ratio), which is around 0.1
if H$_2$O is the source for these O[I] emissions in cometary comae,
and it is nearly 1 if the source is CO$_2$ or CO.
Observations of green and red line emissions in several comets have shown that the G/R
ratio is around 0.1, suggesting that H$_2$O is the main source of these O[I] lines.
However, since no experimental cross section or yield for the production of O($^1$S) from H$_2$O is
available in the literature, this interpretation of the G/R ratio has been questioned by \cite{Huestis06}.
Generally, the red line is more intense
than the green line because the production of O($^1$D) via dissociative excitation of H$_2$O
is larger compared to the radiative decay of O($^1$S).
Since the lifetime of O($^1$D) is larger, quenching is also a significant loss process
near the nucleus.
So far, the observed G/R ratio in comets is found to vary from
0.022 to 0.3 \citep{Cochran84,Cochran08,Morrison97,Zhang01,Cochran01,Furusho06,
Capria05,Capria08,Capria10}.
There are several reactions not involving H$_2$O which
can also produce these forbidden oxygen lines \citep{Bhardwaj02}.
Among the O-bearing species, CO$_2$ and CO also have dissociative channels producing O($^1$D)
and O($^1$S).
However, complex O-bearing molecules (e.g., H$_2$CO, CH$_3$OH, HCOOH) do not produce
atomic oxygen as a first dissociative product.
Based on the intensity of the 6300 \AA\ emission, \cite{Delsemme76} derived the production
rate of O($^1$D) in comet Bennett 1970 II and suggested that the abundance of CO$_2$ is more than
that of H$_2$O. \cite{Delsemme79} estimated the production of O($^1$D) in
dissociation of H$_2$O and CO$_2$;
about 12\% of H$_2$O is dissociated into H$_2$ and O($^1$D), while 67\% of CO$_2$ is
dissociated into CO and O($^1$D). They suggested that a small
amount of CO$_2$ can contribute
much more than H$_2$O to the red doublet emission. The model calculations of \cite{Bhardwaj02}
showed that the production of O($^1$D) is largely through photodissociative excitation
of H$_2$O while the major loss mechanism in the innermost coma is quenching by H$_2$O.
\cite{Cochran01},
based on the observed widths of the red and green lines,
argued that there must be another potential source of atomic oxygen in addition
to H$_2$O, which can produce
O($^1$S) and O($^1$D). Observations of the green and red lines in nine comets
showed that the green line is wider than the red line \citep{Cochran08}, which could
be because various parent sources are involved in the production of O($^1$S).
The model of \cite{Glinski04} showed that the chemistry in the inner coma can produce
1\% O$_2$, which can also be a source of red and green lines.
\cite{Manfroid07} also argued, based on lightcurves, that forbidden O[I] emissions are probably
contributed through dissociation sequence of CO$_2$.
A recent observation of comet 17P/Holmes showed that the G/R ratio
can be as high as 0.3, the highest value reported so far,
suggesting that the CO$_2$ and CO abundances might have been higher at the time of observation \citep{Capria10}.
Considering various arguments based on different observations
and theoretical works, we have developed a
coupled chemistry-emission model to quantify various
mechanisms involved in the production of red and green line emissions of atomic oxygen.
We have calculated the production and loss rates, and the density profiles, of metastable O($^1$D)
and O($^1$S) atoms from the O-bearing species, like H$_2$O, CO$_2$, and CO, and also
from the dissociated
products OH and O. This model is applied to comet C/1996 B2 Hyakutake, which was studied through several observations
in 1996 March \citep{ Biver99, Morrison97, Cochran01, Morgenthaler01, Combi05, Cochran08}.
The line-of-sight integrated brightness profiles along cometocentric distances
are calculated for 5577 and 6300 \AA\ emissions and
compared with the observed profiles of \cite{Cochran08}.
We have also evaluated the role of the slit dimensions used in the observations
in determining the G/R ratio.
The aim of this study is to understand the processes
that determine the value of G/R ratio.
\section{Model}
The neutral parent species considered
in this model are H$_2$O, CO$_2$, and CO.
We do not consider other significant O-bearing species, like H$_2$CO and CH$_3$OH, since their first
dissociation does not lead to the formation of an oxygen atom; the O atom appears in the
subsequent photodissociation of daughter products, like OH, CO, and HCO.
On 1996 March 24, the H$_2$O production rate for comet C/1996 B2 Hyakutake\ measured by \cite{Mumma96} was
1.7 $\times$ 10$^{29}$ s$^{-1}$.
Based on H Ly-$\alpha$ emission observations, \cite{Combi98} measured the H$_2$O production rate as
2.6 $\times$ 10$^{29}$ s$^{-1}$ on 1996 April 4. Using molecular radio line
emissions, \cite{Biver99} derived the production rates of different species at various
heliocentric distances from 1.6 to 0.3 AU. They found that around 1 AU the relative abundance
of CO with respect to H$_2$O is high ($\sim$22\%) in the comet C/1996 B2 Hyakutake.
The number density $n_i(r)$ of the $i^{th}$ parent species at a cometocentric
distance $r$ in the coma is calculated using the Haser formula
\begin{equation}
n_i(r)=\frac{Q_p}{4\pi v_ir^2}\, e^{-r/\beta_i}
\label{haser}
\end{equation}
Here $Q_p$ is the total gas production rate of the comet, $v_i$ and $\beta_i$ are
the gas expansion velocity (taken as 0.8 km s$^{-1}$, \citeauthor{Biver99} 1999) and the
scale length ($\beta_{H_2O}$
= 8.2 $\times$ 10$^{4}$ km, $\beta_{CO_2}$ = 5.0 $\times$ 10$^{5}$
km, and $\beta_{CO}$ = 1.4 $\times$ 10$^{6}$ km) of the $i^{th}$ species, respectively.
The Haser model's neutral density distribution has been used in several previous
studies for deriving the production rate of H$_2$O
in comets based on the intensity of 6300 \AA\ emission \citep{Delsemme76,
Delsemme79, Fink84, Morgenthaler01}.
In our model calculations the H$_2$O production
rate on 1996 March 30 is taken as 2.2 $\times$ 10$^{29}$ s$^{-1}$.
The abundance of CO relative to H$_2$O is taken
as 22\%. Since there is no report on the observation of CO$_2$ in the comet Hyakutake, we
assumed its abundance as 1\% relative to H$_2$O. However, we vary CO$_2$ abundance
to evaluate its effect on the green and red-doublet emissions.
The calculations are
made when the comet C/1996 B2 Hyakutake\ was at a heliocentric distance of 0.94 AU and a geocentric distance of
0.19 AU on 1996 March 30. The calculated G/R ratio on
other days of the observation is also reported.
The number density of OH produced in the dissociation of the parent species H$_2$O at a given
cometocentric distance $r$ is calculated using Haser's two-parameter coma model
\begin{equation}
n_{OH}(r)= \frac{Q_P}{4\pi vr^2}\, \frac{\beta_R}{\beta_P-\beta_R}\left(e^{-r/\beta_P}-e^{-r/\beta_R}\right)
\end{equation}
Here $v$ is the average
velocity of daughter species taken as 1 km s$^{-1}$, and $\beta_P$ and
$\beta_R$ are the destruction scale lengths of the parent (H$_2$O, 8.2 $\times$ 10$^{4}$ km)
and daughter (OH, 1.32 $\times$ 10$^{5}$ km) species, respectively \citep{Huebner92}.
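As a numerical sketch of Eq.~(\ref{haser}) and the OH formula above (our own illustration in Python, using the parameters quoted in the text; the variable names and the cgs unit conventions are ours):
\begin{verbatim}
import numpy as np

Q = 2.2e29            # H2O production rate on 1996 March 30, s^-1
v_p = 0.8e5           # parent expansion velocity, cm s^-1 (0.8 km/s)
v_d = 1.0e5           # daughter velocity, cm s^-1 (1 km/s)
beta_h2o = 8.2e9      # H2O scale length, cm (8.2e4 km)
beta_oh = 1.32e10     # OH scale length, cm (1.32e5 km)

def n_parent(r, Q, v, beta):
    """Haser parent density (cm^-3) at cometocentric distance r (cm)."""
    return Q / (4.0 * np.pi * v * r**2) * np.exp(-r / beta)

def n_daughter(r, Q, v, beta_p, beta_d):
    """Two-parameter Haser daughter density (cm^-3)."""
    return (Q / (4.0 * np.pi * v * r**2) * beta_d / (beta_p - beta_d)
            * (np.exp(-r / beta_p) - np.exp(-r / beta_d)))

r = np.logspace(7, 11, 5)    # 100 km to 1e6 km, in cm
print(n_parent(r, Q, v_p, beta_h2o))
print(n_daughter(r, Q, v_d, beta_h2o, beta_oh))
\end{verbatim}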
The solar UV-EUV flux is taken from SOLAR2000 v.2.3.6 (S2K) model of \cite{Tobiska00}
for the day 1996 March 30, which is shown in Figure~\ref{solflx}. For comparison
the solar flux used by \cite{Huebner92} in calculating O($^1$D) and O($^1$S) production rates
from various O-bearing species is also presented in the same Figure.
The primary photoelectron energy spectrum $Q(E, r, \theta)$
is calculated by degrading
solar radiation in the neutral atmosphere using
\begin{equation}
Q(E, r, \theta) = \sum_{i}\int_{\lambda}n_i(r)\ \sigma_i^I(\lambda)\ I_{\infty}{(\lambda)}\
\exp[-\tau(r,\theta,\lambda)]\ d\lambda
\label{pheprod}
\end{equation}
where,
\begin{equation}
\tau(r,\theta,\lambda)= \sum_{i}\sigma_i^A(\lambda)\ \sec\theta \int_r^\infty n_i(r')\ dr'
\end{equation}
Here $\sigma_i^A(\lambda)$ and $\sigma_i^I(\lambda)$ are the absorption and ionization
cross sections, respectively, of the $i^{th}$ species at the wavelength $\lambda$,
$n_i(r)$ is its neutral gas density and
$\tau(r,\theta,\lambda)$ is the optical depth of the medium at the solar zenith angle $\theta$.
$I_{\infty}(\lambda)$ is the unattenuated solar flux at the top of atmosphere at wavelength $\lambda$.
All calculations are made at a solar zenith angle $\theta$ of 0$^\circ$.
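The attenuation step can be sketched as follows (schematic only: the cross-section value in the
usage comment is a placeholder, and in the full calculation the optical depth is summed over all
three parent species and all wavelength bins):
\begin{verbatim}
import numpy as np

def optical_depth(r_km, n_of_r, sigma_abs_cm2, r_max_km=1.0e6):
    """tau(r, lambda) for one species at zenith angle 0:
    sigma^A(lambda) times the column density above r."""
    rr = np.logspace(np.log10(r_km), np.log10(r_max_km), 400)  # km
    column = np.trapz(n_of_r(rr), rr * 1.0e5)                  # cm^-2
    return sigma_abs_cm2 * column

# e.g. attenuated flux at r for one wavelength bin, using the Haser H2O
# density sketched earlier and a representative cross section:
#   I(r) = I_inf * np.exp(-optical_depth(r, haser_parent_density, 1.0e-17))
\end{verbatim}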
The total photoabsorption and photoionization cross sections of H$_2$O, CO$_2$, and CO
are taken from the compilation of \cite{Huebner92}
(\url{http://amop.space.swri.edu}),
and interpolated at 10 \AA\ bins to make them compatible with the S2K solar flux wavelength bins
for use in our model calculations.
The total photoabsorption and photoionization cross sections for H$_2$O, CO$_2$, and CO
are presented in Figure~\ref{totabcsc}.
The photochemical production rates for ionization and excitation of various species
are calculated using degraded solar flux and cross sections of
corresponding processes (discussed in Section~\ref{disso1so1d}) at different
cometocentric distances.
The primary photoelectrons are degraded in cometary coma to calculate the steady state
photoelectron flux using
the Analytical Yield Spectrum (AYS) approach, which is based on the Monte Carlo method
\citep{Singhal91,Bhardwaj93,Bhardwaj99d,Bhardwaj09}.
The AYS method of degrading electrons in a neutral atmosphere can be summarized briefly as
follows. Monoenergetic electrons incident along the Z-axis in an
infinite medium are degraded collision by collision using the Monte Carlo technique.
The energy and position of the primary electron, and of its secondary and tertiary electrons,
are recorded at the instant of each inelastic collision. The total number of inelastic events
in each spatial and energy bin, after the incident electron and all its secondaries
and tertiaries have been completely degraded, is used to generate numerical yield
spectra. These yield spectra contain the yield information about the electron degradation process
and can be employed to calculate the yield for any inelastic event.
The numerical yield spectra generated in this way are in turn represented analytically;
the resulting AYS encapsulates the information about all possible collisional events for the
input electron impact cross sections. This yield spectrum can then be used
to calculate the steady state photoelectron flux.
More details of the AYS approach and the method of photoelectron computation are given in
several previous papers
\citep{Singhal84,Bhardwaj90,Bhardwaj96,Singhal91, Bhardwaj99a,Bhardwaj03,
Bhardwaj99b,Haider05,Bhardwaj09,Bhardwaj11a,Raghuram11}.
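As a purely illustrative toy version of the yield-spectrum idea described above (it uses a
single, fictitious ``ionization'' channel with an arbitrary threshold rather than the actual
cross-section set, and it ignores the spatial binning), one could write:
\begin{verbatim}
import numpy as np

def toy_yield_spectrum(E0=200.0, I_thresh=13.0, n_runs=500,
                       e_bins=np.arange(0.0, 210.0, 10.0)):
    """Mean number of inelastic collisions per energy bin and per
    incident electron of energy E0 (all energies in eV)."""
    counts = np.zeros(len(e_bins) - 1)
    rng = np.random.default_rng(0)
    for _ in range(n_runs):
        stack = [E0]                      # primary plus any secondaries
        while stack:
            E = stack.pop()
            while E > I_thresh:
                counts += np.histogram([E], bins=e_bins)[0]
                E_sec = rng.uniform(0.0, 0.5) * (E - I_thresh)
                E -= I_thresh + E_sec     # energy lost in the collision
                if E_sec > I_thresh:
                    stack.append(E_sec)   # follow the secondary as well
    return counts / n_runs
\end{verbatim}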
The total inelastic electron impact cross sections
for H$_2$O are taken from \cite{Jackman77} and \cite{Seng76}, and those for CO$_2$
and CO are taken from \cite{Jackman77}.
The electron impact cross sections for different dissociative ionization states of H$_2$O
are taken from \cite{Itikawah2o}, for CO$_2$ from \cite{Bhardwaj09}, and for CO
from \cite{Mcconkey08}.
The volume excitation rates for different processes are calculated using steady state
photoelectron flux and
electron impact cross sections. The electron temperature required for
ion-electron dissociative recombination reactions is
taken from \cite{Korosmezey87}.
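Schematically, each volume excitation (or ionization) rate is a quadrature of the photoelectron
flux over the corresponding cross section; a minimal sketch (the flux and cross-section arrays
are placeholders, the actual flux being the AYS-derived spectrum) is:
\begin{verbatim}
import numpy as np

def volume_excitation_rate(n_cm3, phi_flux, sigma_E, energy_eV):
    """V(r) = n(r) * integral of phi(E, r) * sigma(E) dE   [cm^-3 s^-1]
    phi_flux in cm^-2 s^-1 eV^-1, sigma_E in cm^2, energy_eV in eV."""
    return n_cm3 * np.trapz(phi_flux * sigma_E, energy_eV)
\end{verbatim}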
A detailed description of the coupled chemistry-transport model has been given in our earlier papers
\citep[]{Bhardwaj95,Bhardwaj96,Bhardwaj99a,Bhardwaj02,Haider05,Bhardwaj11}.
Various reactions involved in the production and loss of metastable O($^1$S) and O($^1$D) atoms
considered in our model are listed in Tables~\ref{tab-prlos1s} and~\ref{tab-prlos1d},
respectively.
\section{Dissociation of neutral species producing O($^1$S) and O($^1$D)}
\label{disso1so1d}
\subsection{Photodissociation}
\subsubsection{H$_2$O and OH}
\label{phcsch2o}
The dissociation of the H$_2$O molecule starts at
wavelengths less than 2424~\AA, and the primary products are H and OH,
but the pre-dissociation process mainly starts from 1860 \AA\ \citep{Watanabe53}. The
threshold wavelength for the photoionization of H$_2$O is 984 \AA. Hence, solar UV photons in the
wavelength region 1860 to 984 \AA\ can dissociate H$_2$O and produce different
daughter products. The threshold wavelengths for the dissociation of H$_2$O resulting in the
production of O($^1$S)
and O($^1$D) are
1390~\AA\ and 1770~\AA, respectively. So far, the
photo-yield for the production of O($^1$D) from H$_2$O has been measured
in only two experiments.
\cite{Slanger82} measured the O($^1$D) yield in
photodissociation of H$_2$O at 1216 \AA, and found its value to be 10\%.
\cite{Mcnesby62} reported a 25\%
yield for the production of O($^1$D) or O($^1$S) at 1236 \AA\ from H$_2$O.
\cite{Huebner92} calculated photo production rates for different excited species produced from
H$_2$O using absorption and ionization cross sections compiled from different experimental
measurements.
In our model the cross sections for the production of O($^1$D) in photodissociation
of H$_2$O are taken from \cite{Huebner92}, which were determined based on experiments
of \cite{Slanger82} and \cite{Mcnesby62}.
\cite{Huebner92} assumed that
in the 1770 to 1300 \AA\ wavelength region around 25\% of H$_2$O molecules photodissociate
into H$_2$ and O($^1$D), while between 1300 and 984 \AA\ about 10\%
of H$_2$O dissociation produces O($^1$D) (cf. Fig.~\ref{phcsco1d-1}). Below 984 \AA, \cite{Huebner92}
assumed that 33\% of dissociation of H$_2$O leads to the formation of O($^1$D).
\cite{Festou81a} discussed various dissociation channels for H$_2$O in the wavelength region
less than 1860 \AA. Solar photons in the wavelength region 1357 to 1860 \AA\ dissociate around 72\% of
H$_2$O molecules into ground-state H and OH. However, according to \cite{Stief75},
approximately 1\% of H$_2$O molecules are dissociated into H$_2$ and O($^1$D) in this wavelength region.
The calculated rates for the O($^1$D) production from photodissociative excitation
of H$_2$O by \cite{Huebner92} are
5.97 $\times$ 10$^{-7}$ s$^{-1}$ and 1.48 $\times$ 10$^{-6}$ s$^{-1}$ for solar quiet
and active conditions, respectively.
Using the S2K solar EUV-UV flux on 1996 March 30 and cross sections
from \cite{Huebner92} (see Figure~\ref{phcsco1d-1}),
our calculated value is 8 $\times$ 10$^{-7}$ s$^{-1}$ (cf. Table~\ref{tab-prlos1d}),
which is a factor of $\sim$1.5 higher than that of \cite{Huebner92} for solar minimum condition at 1 AU.
This difference in calculated values is mainly due to the higher (a factor of 1.24) value of solar
flux at 1216 \AA\ in S2K model than that used by \cite{Huebner92} (cf. Figure~\ref{solflx}).
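The photo-rates quoted in this section are obtained by folding the cross sections with the solar
flux; a minimal sketch of this quadrature (placeholder arrays on the common 10~\AA\ grid, with a
$1/r_h^2$ factor to rescale the 1 AU flux to the heliocentric distance of the comet) is:
\begin{verbatim}
import numpy as np

def photo_rate(sigma_cm2, flux_1AU, r_h_AU=1.0):
    """J = (1/r_h^2) * sum over bins of sigma(lambda) * F_1AU(lambda) [s^-1]
    flux_1AU in photons cm^-2 s^-1 per wavelength bin."""
    return np.sum(np.asarray(sigma_cm2) * np.asarray(flux_1AU)) / r_h_AU**2
\end{verbatim}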
No experimentally determined cross sections for
the production of O($^1$S) in photodissociation of H$_2$O are available.
The solar flux at H Lyman-$\alpha$ (cf. Fig.~\ref{solflx}) is more than an order of magnitude
larger than the flux at wavelengths below 1390 \AA, which is the threshold for the O($^1$S) production
in dissociation of H$_2$O.
To account for the production of O($^1$S) in photodissociation of H$_2$O, we assumed a
yield of 0.5\% at solar H Lyman-$\alpha$ (1216 \AA). However, to assess the impact of this assumption
on the green and red
line emissions we varied the yield between 0 and 1\%.
The calculated photo-rate for the production of O($^1$S) from H$_2$O is
6.4 $\times$ 10$^{-8}$ s$^{-1}$ at 1 AU assuming 1\% yield at 1216 \AA\ (cf. Table~\ref{tab-prlos1s}).
The primary dissociative product of H$_2$O is OH. The important destruction mechanisms of the OH molecule
are pre-dissociation through the fluorescence process and direct photodissociation. Solar radiation
shortward of 928 \AA\ can ionize the OH molecule.
The threshold wavelengths for the production of O($^1$D) and O($^1$S) in photodissociation of
OH are 1940 and 1477 \AA, respectively.
The dissociation channels of OH have been discussed by \cite{Budzien94} and \cite{Dishoeck84}.
We have used the photo-rates
given by \cite{Huebner92} for the production of O($^1$D) and O($^1$S)
from OH molecule whose values are 6.4 $\times$ 10$^{-7}$ and
6.7 $\times$ 10$^{-8}$ s$^{-1}$, respectively. These rates are based on dissociation cross sections
of \cite{Dishoeck84}, which are consistent with the red line observation made by
wide-field spectrometer
\citep{Morgenthaler07}.
\subsubsection{CO$_2$}
The threshold wavelengths for dissociation of CO$_2$ molecule
producing O($^1$D) and O($^1$S) are
1671 \AA\ and 1286 \AA, respectively. As noted by \cite{Huestis06}, the O($^1$D)
yield in photodissociation of CO$_2$ has never been measured because of the problem of
rapid quenching of this metastable state.
However, the experiment of \cite{Kedzierski98} suggested that this dissociation channel can be studied
in electron impact experiments using a solid neon matrix as the detector.
\cite{Huebner92} estimated the cross section for O($^1$D) production in
photodissociative
excitation of CO$_2$ (see Figure~\ref{phcsco1d-1}),
and obtained photo-rate values of 9.24 $\times$ 10$^{-7}$ and
1.86 $\times$ 10$^{-6}$ s$^{-1}$ for solar minimum and maximum conditions, respectively.
Using S2K solar flux on 1996 March 30 our calculated
rate for O($^1$D) production in photodissociation of CO$_2$ is
1.2 $\times$ 10$^{-6}$ s$^{-1}$ at 1 AU, which is
higher than the solar minimum rate of \cite{Huebner92} by a factor of 1.3.
This variation is mainly due to the differences in the solar fluxes (cf.~Figure~\ref{solflx})
in the wavelength region 950 to 1100 \AA\ where the
photodissociative cross section for the production of O($^1$D)
maximizes (cf.~Figure~\ref{phcsco1d-1}).
\cite{Lawrence72a} measured the O($^1$S) yield in photodissociative
excitation of CO$_2$ from threshold (1286 \AA) to 800 \AA. The yield of
\cite{Lawrence72a} is different from that measured by \cite{Slanger77}
in the 1060 to 1175 \AA\ region. However, the yields from both experimental
measurements closely match in the
1110--1140~\AA\ wavelength region, where the yield is unity. In the experiment of
\cite{Slanger77}, a dip in quantum yield is observed at 1089 \AA.
\cite{Huestis10} reviewed the experimental results and suggested the yield for O($^1$S) in
photodissociation of CO$_2$.
We calculated the cross section for the O($^1$S) production in photodissociative excitation
of CO$_2$ (see Figure~\ref{phcsco1d-1}) by multiplying the yield recommended by \cite{Huestis10} with
total absorption cross section of CO$_2$ (see Figure~\ref{totabcsc}).
Using this cross section and S2K solar flux, the rate
for O($^1$S) production is 7.2 $\times$ 10$^{-7}$ s$^{-1}$ at 1 AU.
\subsubsection{CO}
The threshold wavelength for the dissociation of the CO molecule into ground-state neutral products
is 1117.8 \AA, while that for dissociation into the metastable O($^1$D) and C($^1$D) states is 863.4 \AA.
Among the O-bearing species discussed in this paper, CO has the highest dissociation energy of 11.1
eV, while its ionization potential is 14 eV.
\cite{Huebner92} calculated cross sections for the photodissociative excitation
of CO producing O($^1$D) using branching ratios from \cite{Mcelroy71} (cf. Fig.~\ref{phcsco1d-1}).
Rates for the production of O($^1$D) from CO molecule calculated by \cite{Huebner92} are
3.47 $\times$ 10$^{-8}$ and
7.87 $\times$ 10$^{-8}$ s$^{-1}$ for solar minimum and maximum conditions, respectively.
Using the cross section of \cite{Huebner92} and S2K model solar flux, our calculated
rate for the O($^1$D) production from CO is 5.1 $\times$ 10$^{-8}$ s$^{-1}$ at 1 AU,
which is 1.5 times higher than the solar minimum rate of \cite{Huebner92}.
This difference in the calculated value is due to variation in the solar fluxes used in the two
studies in wavelength region 600 to 800 \AA\ (cf.~Figure~\ref{solflx}).
We did not find any reports on the cross section
for the production of O($^1$S) in photodissociation of the CO molecule.
According to \cite{Huebner79} the rate for this reaction can not be more than
4 $\times$ 10$^{-8}$ s$^{-1}$. We have used this value in our model calculations.
This process can be an important source of O($^1$S)
since comet Hyakutake has a relatively high CO abundance ($\sim$20\%).
Using this photorate and CO abundance, we will show that this reaction alone can contribute
up to a maximum of 30\% to the total O($^1$S) production.
\subsection{Electron impact dissociation}
In our literature survey we could not find any reported cross section for the
production of O($^1$D) due to electron impact dissociation of H$_2$O.
\cite{Jackman77} have assembled the experimental and theoretical
cross sections for electron
impact on important atmospheric gases in a workable analytical form. The cross sections for
electron impact on atomic oxygen given by \cite{Jackman77} have been used to estimate
emissions which leave the O atom in the metastable ($^1$D) state. The resulting ratio of 85\% in
the ground state to 15\% in the metastable state is used for the atomic states of
C and O produced in electron impact dissociation of H$_2$O, CO$_2$, and CO. It may be noted
that the ground state to metastable state production ratio of 89:11 is observed
for atomic carbon and atomic oxygen produced from photodissociation of CO \citep{Singh91}.
However, as shown later, the contributions of these
electron impact processes to the total production of O($^1$D) are very small ($<$5\%).
\cite{Kedzierski98} measured the cross section for electron impact dissociative
excitation of H$_2$O producing O($^1$S), with an overall uncertainty
of 30\%. \cite{LeClair94} measured the cross
section for the production of O($^1$S) in dissociation of CO$_2$ by electron impact; they
claimed an uncertainty of 12\% in their experimental cross section measurements.
The cross section for the fragmentation of
CO into the metastable O($^1$S)
atom by electron impact was measured by \cite{LeClair94a}.
These electron impact cross sections are also recommended by
\cite{Mcconkey08}, and are used in our model for calculating the production
rate of O($^1$S) from H$_2$O, CO$_2$, and CO.
Since $^1$D and $^1$S are metastable states, direct excitation of atomic oxygen
by solar radiation is not an effective
excitation mechanism. However, electron impact excitation of atomic
oxygen can populate these metastable states; this process is a major source of airglow
emissions in
the upper atmospheres of Venus, Earth, and Mars.
We calculated the excitation rates for these processes using
electron impact cross sections from \cite{Jackman77}.
In calculating the photoelectron impact ionization rates of the metastable oxygen states,
we obtained the cross sections by changing the threshold energy parameter for ionization of neutral
atomic oxygen in the analytical expression given by \cite{Jackman77}.
The above mentioned electron impact cross sections for the production of O($^1$S) from
H$_2$O, CO$_2$, CO, and O, used in the current model, are presented in
Figure~\ref{ecsco1s} along with the calculated photoelectron flux energy spectrum
at cometocentric distance of 1000 km.
\subsection{Dissociative recombination}
The total dissociative recombination rate for H$_2$O$^+$ reported by \cite{Rosen00}
is 4.3 $\times$ 10$^{-7}$ cm$^{3}$ s$^{-1}$ at 300 K.
The channels of dissociative recombination
have also been studied by this group.
It was found that the dissociation process is dominated by three-body breakup (H + H + O) that occurs
with a branching ratio of 0.71, while the fraction of two-body breakup (O + H$_2$)
is 0.09, and the branching ratio for the formation of OH + H is 0.2.
The maximum kinetic energies of the dissociation products forming ground-state atomic oxygen
are 3.1 eV and 7.6 eV for the three- and two-body dissociation channels, respectively. Since the
excitation energy required for the formation of metastable O($^1$S) is 4.19 eV, the three-body
dissociation cannot produce oxygen atoms in the $^1$S state. However,
O($^1$D) atoms can be produced in both
the three-body and the two-body breakup processes.
To incorporate the contribution of H$_2$O$^+$ dissociative recombination in the production
of O($^1$D) and O($^1$S), we assumed that 50\% of the branching fractions
of the total recombination in the three-body and two-body breakups lead to the formation of O($^1$D)
and O($^1$S) atoms, respectively.
For the dissociative recombination of CO$_2^+$, CO$^+$, and OH$^+$ ions we assumed that the
recombination rates are the same for the production of both O($^1$D) and O($^1$S).
We will show that these assumptions affect the calculated O($^1$S) and O($^1$D) densities
only at large ($\ge$ 10$^4$ km) cometocentric distances, but not in the inner coma.
Tables~\ref{tab-prlos1s} and \ref{tab-prlos1d} list the rates, along with the source reference, for these
recombination reactions.
\section{Results and discussion}
\subsection{Production and loss of O($^1$S) atom}
The calculated O($^1$S) production rate profiles for different processes
in comet C/1996 B2 Hyakutake\ are presented in Figure~\ref{o1sprodr1}.
These calculations are made under the assumption of 0.5\% yield of O($^1$S) from H$_2$O at 1216 \AA\
solar H Lyman-$\alpha$ line and 1\% CO$_2$ relative abundance.
The major production source of O($^1$S) throughout the cometary coma is the photodissociative
excitation of H$_2$O. However, very close to the nucleus, the photodissociative
excitation of CO$_2$ is an equally important process for O($^1$S) production.
Above 100 km, the photodissociative excitation of CO$_2$ and that of CO make comparable
contributions to the
production of O($^1$S). Since the cross sections
for electron impact dissociative excitation of H$_2$O, CO$_2$, and CO are small
(see Figure~\ref{ecsco1s}),
the contributions from electron impact dissociation to O($^1$S) production are smaller
by an order of magnitude or more than those due to photodissociative excitation.
At larger cometocentric distances ($>$2 $\times$ 10$^3$ km),
the dissociative recombination of the H$_2$O$^+$ ion is a significant
production mechanism for O($^1$S), whose contribution is
higher than those from photodissociative excitation of CO$_2$ and CO.
The dissociative recombination of other ions does not make any significant contribution
to the production of O($^1$S).
In the inner coma, the calculated production rates of O($^1$S)
via photodissociative excitation of CO$_2$ at various wavelengths are presented in
Figure~\ref{o1s-pht-co2}. The major
production of O($^1$S) occurs in the wavelength
region 955--1165 \AA,
where the average cross section is $\sim$2 $\times$ 10$^{-17}$ cm$^{2}$
(cf. Fig.~\ref{phcsco1d-1}) and the average solar flux is $\sim$1 $\times$
10$^{9}$ photons cm$^{-2}$ s$^{-1}$ (cf. Fig.~\ref{solflx}).
The calculated loss rate profiles of O($^1$S) for major processes
are presented in Figure~\ref{o1slos}.
Close to the nucleus ($<$50 km), quenching by H$_2$O is the main loss mechanism
for metastable O($^1$S). Above 100 km, the
radiative decay of O($^1$S) becomes the dominant loss process. The contributions from
other loss processes are orders of magnitude smaller and hence are not shown
in Figure~\ref{o1slos}.
\subsection{Production and loss of O($^1$D) atom}
The production rates as a function of cometocentric distance for various excitation mechanisms
of the O($^1$D) are shown in Figure~\ref{o1dprodr1}. The major source of O($^1$D) production in the
inner coma is photodissociation of H$_2$O.
The wavelength dependent production rates of O($^1$D) from H$_2$O are presented in
Figure~\ref{o1d-pht-h2o}. The O($^1$D) production in photodissociation of
H$_2$O is governed by solar radiation at H Lyman-$\alpha$ (1216 \AA) wavelength.
However, very close to the nucleus, the production of O($^1$D) is
largely due to photons in the wavelength region 1165--1375 \AA. Since the average absorption
cross section of H$_2$O decreases in this wavelength region by an order of magnitude,
the optical depth
at wavelengths greater than 1165 \AA\ is quite small (see Figure~\ref{totabcsc}).
Hence, these photons are able to travel deeper into the coma unattenuated, thereby
reaching close to the nucleus where they dissociate
H$_2$O producing O($^1$D). Thus, at the surface of cometary nucleus the production
of O($^1$D) is controlled by the solar radiation in this wavelength band. In high production rate comets,
the production of O($^1$D) near the nucleus would be governed by
solar photons in this wavelength region. The production of O($^1$D) from H$_2$O by
solar photons from other wavelength regions is smaller by more than an order of magnitude.
After photodissociative excitation of H$_2$O, the next significant O($^1$D) production process
at radial distances below 50 km is the photodissociative excitation of CO$_2$.
From about 50 km up to about
1000 km, the radiative decay of O($^1$S), and at radial distances above 1000 km
the dissociative recombination of H$_2$O$^+$, are the next potential sources of the O($^1$D)
(see Figure~\ref{o1dprodr1}).
The calculated wavelength dependent production rates of O($^1$D) for
photodissociation of CO$_2$ are shown in Figure \ref{o1d-pht-co2}. Solar
radiation in the wavelength region 1165--955 \AA\ dominates the
O($^1$D) production. Since the cross section for the production of O($^1$D)
due to photodissociation of CO$_2$ is more than an order of magnitude
higher in this wavelength region compared to cross section at
other wavelengths (see Figure~\ref{phcsco1d-1}), the solar radiation in
this wavelength band mainly controls the formation of O($^1$D) from CO$_2$.
Other potential contributions are made by solar photons in the wavelength band 1585--1375 \AA\
at distances $<$50 km, and 955--745 \AA\ at radial distances $>$100 km.
Since the CO$_2$ absorption cross section around
1216 \AA\ is smaller by more than two orders of magnitude compared to its maximum value,
the solar radiation at H Ly-$\alpha$ is not an efficient source of
O($^1$D) atoms.
\cite{Zipf69} measured the total rate coefficient for the quenching of O($^1$S) by H$_2$O as
3 $\times$ 10$^{-10}$ cm$^{3}$ s$^{-1}$. The primary channel of this quenching mechanism is
the production of two OH radicals.
The production of O($^1$D) is also a possible channel, but its rate coefficient
is not reported in the literature.
Hence, we assumed that 1\% of the total rate coefficient leads to the formation of O($^1$D)
in this quenching mechanism. However, this assumption has no implications for
the O($^1$D) production,
since the total contribution due to O($^1$S) quenching is about three orders of magnitude smaller than
that of the major O($^1$D) production process.
The calculated loss rate profiles of O($^1$D) are presented in Figure~\ref{o1dlos}.
Below 1000 km, the O($^1$D) can be quenched by various cometary species. The quenching by H$_2$O
is the major loss mechanism for O($^1$D) below 500 km. Above 2 $\times$ 10$^3$ km
radiative decay is the dominant loss process for O($^1$D).
\subsection{Calculation of green and red-doublet emission intensity}
Using the calculated production and loss rates due to various processes
mentioned above,
and assuming photochemical equilibrium, we computed the number density
of O($^1$S) and O($^1$D) metastable atoms. The calculated number densities are presented
in Figure~\ref{nubden}.
The O($^1$D) density profile shows a broad peak around 200--600 km,
whereas the O($^1$S) density peaks at a much smaller radial distance of $\sim$60 km.
The number densities of O($^1$D) and O($^1$S) are converted into emission rate profiles
for the red-doublet and green line emissions, respectively, by multiplying with Einstein transition
probabilities as
\begin{eqnarray}
V_{(6300+ 6364)}(r) =A_{(6300+ 6364)} \times [O^1D(r)] \nonumber \\ = A_{(6300+ 6364)}
\frac{\sum_{i=1}^k P_i(r)}{\sum_{i=1}^k L_i(r) + A(^1D) }
\end{eqnarray}
and
\begin{eqnarray}
V_{(5577)}(r) = A_{(5577)} \times [O^1S(r)] \nonumber \\ = A_{(5577)}
\frac{\sum_{i=1}^k P_i(r)}{\sum_{i=1}^k L_i(r) + A(^1S) }
\end{eqnarray}
where $[O^1S(r)]$ and $[O^1D(r)]$ are the number densities calculated from the corresponding
production rates $P_i(r)$ and loss frequencies $L_i(r)$ of O($^1$S) and O($^1$D), respectively.
$A(^1D)$ and $A(^1S)$ are the total Einstein spontaneous emission coefficients for red-doublet
and green line emissions.
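In code, the photochemical-equilibrium step behind these expressions reduces to the following
sketch (the production rates, loss frequencies, and Einstein coefficients are passed in from
the tables and the literature values adopted in this work):
\begin{verbatim}
def emission_rate(prod_rates, loss_freqs, A_line, A_total):
    """Volume emission rate A_line * n, with the equilibrium density
    n = sum(P_i) / (sum(L_i) + A_total) in cm^-3."""
    n = sum(prod_rates) / (sum(loss_freqs) + A_total)
    return A_line * n      # photons cm^-3 s^-1
\end{verbatim}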
Using the emission rate profiles, the line of sight intensity of green and red-doublet emissions
along the projected distance $z$ is calculated as
\begin{equation}
I(z) = 2 \int_{z}^{R}V_{(5577,\ 6300+6364)} (s)ds
\end{equation}
where $s$ is the abscissa along the line of sight, and V$_{(5577,\ 6300+6364)}(s)$ is
the emission rate for the green or red-doublet emission.
The maximum limit of integration $R$ is taken as 10$^5$ km.
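One way to evaluate the line-of-sight integral for a spherically symmetric emission rate is
sketched below (our parametrization of the chord; the emission-rate function is assumed to
accept an array of radial distances, and dividing the result by $10^6$ gives the brightness in
Rayleighs under the usual convention):
\begin{verbatim}
import numpy as np

def los_brightness(z_km, volume_em_rate, R_km=1.0e5):
    """I(z) = 2 * integral of V(sqrt(z^2 + s^2)) ds along the line of
    sight at projected distance z (result in photons cm^-2 s^-1)."""
    s = np.linspace(0.0, np.sqrt(R_km**2 - z_km**2), 2000)   # km
    r = np.sqrt(z_km**2 + s**2)
    return 2.0 * np.trapz(volume_em_rate(r), s * 1.0e5)
\end{verbatim}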
The calculated brightness profiles of 5577 and 6300 \AA\ emissions are presented in
Figure~\ref{o1so1d-cmp}.
These brightness profiles are then averaged over the projected area
corresponding to the slit dimension 1.2$''$ $\times$ 8.2$''$ centred
on the nucleus of comet C/1996 B2 Hyakutake\ for the observation on 1996 March 30 \citep{Cochran08}.
The G/R ratio averaged over the slit is also calculated.
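The slit averaging itself can be sketched as follows (here brightness\_green and brightness\_red
stand for interpolated $I(z)$ profiles of the 5577 \AA\ and 6300+6364 \AA\ emissions; they are
hypothetical helpers, not part of the model code):
\begin{verbatim}
import numpy as np

def slit_averaged_GR(brightness_green, brightness_red,
                     width_km=165.0, length_km=1129.0, n=40):
    """Average both brightness profiles over the projected slit
    (centred on the nucleus) and return their ratio."""
    x = np.linspace(-width_km / 2.0, width_km / 2.0, n)
    y = np.linspace(-length_km / 2.0, length_km / 2.0, n)
    X, Y = np.meshgrid(x, y)
    z = np.clip(np.sqrt(X**2 + Y**2), 1.0, None)   # avoid z = 0
    return np.mean(brightness_green(z)) / np.mean(brightness_red(z))
\end{verbatim}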
\subsection{Model results}
\cite{Morrison97} observed the green and red-doublet emissions on comet C/1996 B2 Hyakutake\
in the high resolution optical spectra obtained on 1996 March 23 and 27
and found the G/R ratio in the range 0.12--0.16.
\cite{Cochran08} observed the 5577 and 6300~\AA\ line emissions in this
comet on 1996 March 9 and 30, with a G/R ratio of 0.09 for the March 9 observation.
We calculated the G/R ratio by varying the yield
for O($^1$S) production in photodissociation of H$_2$O at 1216 \AA\ (henceforth referred
to as the O($^1$S) yield).
Since CO$_2$ has not been observed in this comet, we assumed that a
minimum of 1\% CO$_2$ is present in
the coma. However, we also carried out calculations for 0\%,
3\% and 5\% CO$_2$ abundances in the comet.
We calculated the contributions of different production processes in the formation of
O($^1$S) and O($^1$D) at three different projected distances of 10$^2$, 10$^3$, and 10$^4$ km
from the nucleus for the above mentioned CO$_2$ abundances and the O($^1$S) yield values
varying from 0\% to 1\%. These calculations are presented in Table~\ref{tabprj-yld}.
The percentage contributions of the major production processes in the projected field of view
for the green and red-doublet emissions are also calculated. The G/R
ratio is calculated after averaging the intensity over the projected area
165 $\times$ 1129 km which corresponds to the dimension of slit used in the observation made
by \cite{Cochran08} on 1996 March 30.
These calculated values are presented in Table~\ref{tabprj-slit}.
Taking 1\% CO$_2$ abundance and 0\% O($^1$S) yield, the calculated percentage
contributions of major production processes of O($^1$S) and O($^1$D) atoms
are presented in Table~\ref{tabprj-yld}.
Around 60 to 90\% of the O($^1$D) is produced from photodissociation of H$_2$O. Contributions
of photodissociative excitation of CO$_2$ and CO in the production of O($^1$S) and O($^1$D) are
15 to 40\% and 1\%, respectively.
Around 10$^4$ km projected distance, the photodissociative excitation
of OH ($\sim$20\%) and the dissociative recombination of H$_2$O$^+$ ($\sim$30\%) are also significant
production processes for O($^1$S) atoms. But, the contributions from these processes in
O($^1$D) production is around 10\% only.
For a CO$_2$ abundance of 1\% and an O($^1$S) yield of 0.2\%, the calculations presented
in Table~\ref{tabprj-yld} show that the photodissociation of H$_2$O contributes around
20 to 40\% in the production of O($^1$S) and 60 to 90\% in the production of the O($^1$D)
atom. The next major source of O($^1$S) production is the
photodissociation of CO$_2$ and CO with each
contributing $\sim$10 to 25\%.
The relative contributions from photodissociation of the parent species H$_2$O, CO$_2$, and CO
to O($^1$S) and O($^1$D) production decrease with increasing projected distance from the
nucleus. At 10$^4$ km projected distance, the
photodissociation of OH contributes 15\% and 8\% to the production of O($^1$S) and O($^1$D) atoms,
respectively.
Above 1000 km projected distance, the contribution of H$_2$O$^+$
dissociative recombination to O($^1$S) production is around 20\%.
The production of O($^1$D) atom is mainly via photodissociation of H$_2$O, but
around 10$^4$ km the dissociative recombination of H$_2$O$^+$ ion is
also a significant production process contributing around 12\%. At 10$^4$ km,
dissociative recombination of OH$^+$ also contributes around 10\% to the total O($^1$D) production
(not shown in Table~\ref{tabprj-yld}); this value is independent of the O($^1$S) yield
or the CO$_2$ abundance. Radiative decay of O($^1$S) is a minor ($\le$5\%) production process
in the formation of O($^1$D).
We also calculated the relative contributions of different processes in the
formation of green and red line emissions in the slit projected field of view, which are presented
in Table~\ref{tabprj-slit}. For the above case,
the photodissociation of H$_2$O contributes around 35\%, while the photodissociation of CO$_2$ and CO
contribute 23\% and 22\%, respectively, to the production of green line emission.
The contribution of dissociative recombination of H$_2$O$^+$ ions is around 10\%.
The major production process of red lines is photodissociation of H$_2$O (90\%);
the dissociative recombination of H$_2$O$^+$ and radiative decay of O($^1$S) atom
are minor ($\le$5\%) production processes. With the O($^1$S) yield of 0.2\% and 1\% CO$_2$ abundance,
the slit-averaged G/R ratio is found to be 0.11.
When the O($^1$S) yield is increased
to 0.5\% with 1\% CO$_2$ abundance (see Table~\ref{tabprj-yld}), the contribution from
photodissociative excitation of H$_2$O to the O($^1$S) production is increased, with values
varying from 35 to 60\%, while the contribution to O($^1$D)
production is not changed. In this case, the contribution from photodissociation
of CO$_2$ and CO to the O($^1$S) production is reduced (to values between 10 and 15\%).
The contributions from other processes are not changed significantly.
Table~\ref{tabprj-slit} shows that in this case
around 60\% of green line in the slit projected field
of view is produced via photodissociation of H$_2$O, while the contributions from
photodissociation of CO$_2$ and CO are around 15\% each. The main (90\%) production of
red-doublet emission is through photodissociation of H$_2$O. The slit-averaged G/R
ratio is 0.17.
On further increasing the O($^1$S) yield to 1\% with a CO$_2$ abundance of 1\%, the contribution of
photodissociation of H$_2$O to O($^1$S) production is further increased
(to values between 50 and 75\%), while the
contribution from photodissociation of CO$_2$ and CO is decreased to around
10\% each (cf. Table~\ref{tabprj-yld}). The contributions from
other processes are not affected compared to the previous case.
As seen from Table~\ref{tabprj-slit}, in this case the contribution of photodissociation of
H$_2$O to green line is around 75\% in the
slit projected field of view, while contributions from photodissociation of CO$_2$ and CO
are decreased to 10\% each. The calculated G/R ratio is 0.27 (Table~\ref{tabprj-slit}).
We also evaluated the effect of CO$_2$ on the red-doublet and green line emissions
by varying its abundance to 0\%, 3\% and 5\%. The calculated percentage contribution of major processes
along the projected distances and in the slit projected field of view
are presented in Tables~\ref{tabprj-yld} and \ref{tabprj-slit}, respectively.
In the absence of CO$_2$, the contributions
from H$_2$O, H$_2$O$^+$ and CO in O($^1$S) production are increased by
$\sim$10\% (cf. Tables~\ref{tabprj-yld} and \ref{tabprj-slit}).
Taking a 0\% O($^1$S) yield and increasing the CO$_2$ relative abundance from 1 to 3\%,
the percentage contribution to O($^1$S) from photodissociative excitation of CO$_2$ (CO)
is increased (decreased) by 50\%. The contribution from H$_2$O to
O($^1$D) production is not changed.
The calculations presented in Tables~\ref{tabprj-yld} and \ref{tabprj-slit} show that
several processes contribute significantly
to the production of the O($^1$S) atom, whereas photodissociative
excitation of H$_2$O is the main production process for the O($^1$D) atom.
Since comet C/1996 B2 Hyakutake\ is rich in CO (abundance $\sim$22\%) compared to other comets, the
contribution from CO photodissociation to O($^1$S) production
is significant (10--25\%).
In the case of a comet having CO abundance less than 20\%, the major production source of
metastable O($^1$S) atom would be photodissociation of H$_2$O and CO$_2$.
\subsection{Comparison with observations}
In 1996 March, the green and red-doublet emissions were observed
in comet C/1996 B2 Hyakutake\ from two ground-based observatories
\citep{Morrison97,Cochran08}. Each observatory determined
the G/R ratio using a different slit size. For the \cite{Morrison97} observations with a
circular slit, the projected radial distance over the comet varied from 640 to 653 km on
March 23 and March 27, while for the \cite{Cochran08} observations with a rectangular slit, the
projected area was 480 $\times$ 3720 km on March 9 and 165 $\times$ 1129 km on March 30.
A clear detection of both the green and red-doublet emissions and a determination of the
G/R ratio could be
made only for the March 9 and March 23 observations \citep{Cochran08,Morrison97}. The observed G/R
ratios were 0.09 and 0.12 to 0.16 for the observations on March 9 and March 23, respectively.
Making a very high resolution (R = 200,000) observation of comet C/1996 B2 Hyakutake\ on 1996 March 30,
\cite{Cochran08} obtained radial profiles of 5577
and 6300 \AA\ lines. In Figure~\ref{o1so1d-cmp}
we have compared the model calculated intensity profiles of 6300 and 5577 \AA\ lines
at different projected
cometocentric distances with the observation of \cite{Cochran08}. The calculated G/R ratio
along the projected distance is shown in Figure~\ref{ratio-cmp}. The 6300 \AA\ emission
shows a flat profile up to $\sim$500 km, whereas the 5577 \AA\ green
line starts falling off beyond 100
km. This is because of the quenching of O($^1$S) and O($^1$D) by H$_2$O in the innermost coma
(cf. Figures \ref{o1slos} and \ref{o1dlos}), which makes both the production and loss mechanisms
controlled by H$_2$O. Above these distances, the emissions are mainly controlled by the radiative
decay of the $^1$S and $^1$D states of the oxygen atom.
Similar to the calculations presented in Tables~\ref{tabprj-yld} and
\ref{tabprj-slit}, in Figures~\ref{o1so1d-cmp} and~\ref{ratio-cmp} we present the red
and green line intensity profiles
and the G/R ratios, respectively, for different values of the O($^1$S)
yield and the CO$_2$ abundance.
Since photodissociative excitation of H$_2$O is the main production process for
O($^1$D) atom, the red line intensity is almost independent of the variation
in O($^1$S) yield and CO$_2$ abundance.
In the case of 0\% CO$_2$ abundance, the best fit to the observed green line profile is
obtained when the O($^1$S) yield is $\sim$0.5\% ($\pm$ 0.1\%); in this case the G/R ratio
varies from 0.06 to 0.26 (cf. Figure~\ref{ratio-cmp}) and the slit-averaged
G/R ratio for the March 30
observation is 0.15 (cf. Table~\ref{tabprj-slit}).
The shape of the green line profile cannot be explained with a 1\% or 0\% O($^1$S)
yield, while the case of a 0.2\% O($^1$S) yield can be considered somewhat consistent with
the observation. For this case, the G/R ratio shown in Figure~\ref{ratio-cmp}
is found to vary over a large range, from 0.54 to 0.02.
When we consider 1\% CO$_2$ in the comet, the best-fit green profile is obtained when the O($^1$S) yield
is $\sim$0.2\%. The case for 0.5\% O($^1$S) yield also provides the green line profile
consistent with the observation. In both these cases the G/R ratio varies between 0.32 and
0.04 over the cometocentric projected distances of 10 to 10$^4$ km.
The calculated 5577 \AA\ profiles for O($^1$S) yield of 0\%
and 1\% are inconsistent with the observed profile.
In Figure~\ref{o1so1d-cmp} we also show a calculated
profile for a case when the CO$_2$ abundance is 3\% while the O($^1$S) yield is 0\% (i.e., no O($^1$S) is
produced in photodissociation of H$_2$O). The calculated 5577 \AA\ green line profile
shows a good fit to the observed profile, suggesting that even a small abundance of CO$_2$
is enough to produce the required O($^1$S). This is because CO$_2$ is about an order of magnitude
more efficient than H$_2$O in producing the O($^1$S) atom in the photodissociation process
(see Table~\ref{tab-prlos1s}). However, since O($^1$S) would definitely be produced in the
photodissociation
of H$_2$O, and since CO$_2$ would surely be present in the comet (though in smaller abundance),
the most consistent value for the O($^1$S) yield would be around 0.5\%. Assuming
5\% CO$_2$ and 0.5\% O($^1$S) yield, the calculated green line emission profile
is inconsistent with the observation (cf. Figure~\ref{ratio-cmp}).
In this case, the calculated G/R ratio shown in Figure~\ref{ratio-cmp} is found to vary between
0.24 and 0.05.
From the above calculations it is clear that the projected area of the slit on the comet also plays
an important role in determining the G/R
ratio. This point can be better understood from Table~\ref{tab-gdist}, where the G/R
ratio is presented for a projected square slit on the comet at different geocentric distances.
It is clear from this
table that for a given physical condition of a comet and at a given heliocentric distance, the observed
G/R ratio for a given slit size can vary according to the geocentric distance of the
comet. For example, for an O($^1$S) yield of 0.2\% (0.5\%) and a CO$_2$ abundance of 1\%,
the G/R ratio can be 0.17 (0.26) if the comet is very close to the Earth (0.1 AU), whereas
the G/R ratio can be 0.07 (0.1), 0.06 (0.08), or 0.06 (0.07), if the comet, at the time
of observation, is at a larger distance of 0.5, 1, and 2 AU from the Earth, respectively.
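The slit-projected dimensions used above follow directly from the angular slit size and the
geocentric distance; as a small sketch of this standard small-angle relation, the
1.2$''$ $\times$ 8.2$''$ slit at $\Delta$ = 0.19 AU indeed projects to about 165 $\times$ 1130 km:
\begin{verbatim}
import numpy as np

AU_KM = 1.495978707e8

def projected_extent_km(arcsec, delta_AU):
    """Linear extent at the comet subtended by an angle on the sky."""
    return delta_AU * AU_KM * np.radians(arcsec / 3600.0)

print(projected_extent_km(1.2, 0.19), projected_extent_km(8.2, 0.19))
\end{verbatim}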
Further, a G/R ratio of $\sim$0.1 can be obtained even for an O($^1$S) yield of 0\%.
This suggests that a G/R ratio of 0.1 is in no way a definitive benchmark
value for concluding that H$_2$O is the parent of atomic oxygen in the comet,
since small ($\sim$5\% relative to H$_2$O) amounts of CO$_2$ and CO can themselves produce
sufficient O($^1$S) compared to that from H$_2$O.
This table also shows that for observations made around a geocentric distance of 1 AU, the
G/R ratio would be generally closer to 0.1. The G/R ratio observed in different comets ranges
from 0.02 to 0.3 \citep[e.g.,][]{Cochran08,Capria10}.
Thus, we conclude that the G/R
ratio depends not only on the production and loss mechanisms of the O($^1$S) atom, but also on the
nucleocentric projected area of the slit on the comet. Moreover, CO$_2$ plays an important role in the
production of O($^1$S), and thus in the green line emission, in comets. With the present model calculations
and based on the literature survey of the dissociation
channels of H$_2$O, we suggest that the O($^1$S) yield in photodissociation of H$_2$O cannot be
more than 1\% of the total absorption cross section of H$_2$O at the solar Ly-$\alpha$ radiation.
The best fit value of O($^1$S)
yield derived from Figure~\ref{o1so1d-cmp} for a smaller (1\%) CO$_2$ abundance
in the comet C/1996 B2 Hyakutake\ is 0.4 ($\pm$0.1)\%. As per Tables~\ref{tab-prlos1s} and~\ref{tab-prlos1d},
this means that the ratio of rates of O($^1$S) to O($^1$D) production
in the H$_2$O photodissociation should be
0.03 ($\pm$0.01), which is much smaller than the value of 0.1 generally used in the literature based on
\cite{Festou81}. Further, if the source of red and green lines is CO$_2$ (CO), the ratio of
photorates for O($^1$S) to O($^1$D) would be around 0.6 (0.8)
(see Tables ~\ref{tab-prlos1s} and~\ref{tab-prlos1d}).
To verify whether the O($^1$S) yield of 0.5\% (for a CO$_2$ abundance of 1\%) derived
from Figure~\ref{o1so1d-cmp}, based on the comparison between the model and the observed red and green
line radial profiles in comet Hyakutake on 1996 March 30, is consistent with the G/R
ratio observed on other
days for this comet, we present in Table~\ref{tab-days} the G/R ratio calculated
for the observations made on 1996 March 9, 23, 27, and 30, along with the observed values of the G/R
ratio from \cite{Morrison97} and \cite{Cochran08}. These calculations are
made by taking the solar flux on the day of each observation from the \cite{Tobiska04} SOLAR2000 model,
scaled according to the heliocentric distance of the comet on that date. The CO abundance
is 22\%, the same as in all the calculations presented in this paper.
The calculated G/R ratio on March 9, when the geocentric distance was 0.55 AU and the
H$_2$O production rate 5 $\times$ 10$^{28}$ s$^{-1}$, is 0.09 (see Table~\ref{tab-days}),
which is the same as the observed ratio obtained by
\cite{Cochran08}. On March 23 and 27 the comet was closer to both the Sun and the Earth (geocentric distance
$\sim$0.1 AU) and its H$_2$O production rate was 4 times higher than the value on March 9.
The calculated G/R ratio on March 23 is 0.12, which is in agreement with the observed ratio
obtained by \cite{Morrison97}.
\section{Conclusions}
The green and red-doublet atomic oxygen emissions were observed in comet C/1996 B2 Hyakutake\ in 1996 March
when it was passing quite close to the Earth ($\Delta$ = 0.1 to 0.55 AU).
A coupled chemistry-emission model has been developed to study the production of
green (5577 \AA) and red-doublet (6300 and 6364 \AA) emissions in comets.
This model has been applied to comet Hyakutake and the results are compared with the
observed radial profiles of 5577 and 6300 \AA\ line emissions and the green to red-doublet intensity
ratio. The important results from the present model calculations can be summarized as follows.
It may be noted that some of these results enumerated below may vary for other comets
having different gas production rates or heliocentric distances.
\begin{enumerate}
\item The photodissociation of H$_2$O is the dominant production process for the formation
of O($^1$D) throughout the inner cometary coma.
The solar H Ly-$\alpha$ (1216 \AA) flux mainly governs the production of O($^1$D)
in the photodissociative excitation of H$_2$O, but near the nucleus solar radiation in
the wavelength band 1375--1165 \AA\ can control the formation of O($^1$D) from H$_2$O.
\item Besides the photodissociation of the H$_2$O molecule, the radiative decay of O($^1$S) to
O($^1$D) (via the 5577 \AA\ line emission) above cometocentric distances of 100 km, and the
dissociative recombination of H$_2$O$^+$ ions above 1000 km, are also significant source mechanisms
for the formation of O($^1$D) and O($^1$S) atoms.
\item The collisional quenching of O($^1$D) atoms by H$_2$O is significant up to radial distance
of $\sim$1000 km;
above this distance the radiative decay is the main loss mechanism
of O($^1$D) atoms. The collisional quenching of O($^1$D) by other neutral species is an
order of magnitude smaller.
\item The photodissociation of H$_2$O is the major process for the production of O($^1$S) atoms, but
near the nucleus the photodissociation of CO$_2$ can be the dominant source.
The solar H Ly-$\alpha$ (1216 \AA) flux controls the production of O($^1$S)
via photodissociative excitation of H$_2$O.
\item At cometocentric distances of $<$100 km, the main loss process for O($^1$S)
is quenching by H$_2$O molecule, while above 100 km the radiative decay
is the dominant loss process.
\item Since the photoabsorption cross section of CO$_2$ molecule is quite small at 1216 \AA,
the contribution of CO$_2$ in the production of O($^1$S) and O($^1$D) at the
solar H Ly-$\alpha$ is insignificant.
\item Because the CO$_2$ absorption cross section in the 1165--955 \AA\ wavelength range
is higher by an order of magnitude compared to that at other wavelengths,
the solar radiation in this wavelength region mainly controls the
production of O($^1$D) and O($^1$S) in the photodissociative excitation of CO$_2$.
Moreover, the
CO$_2$ absorption cross section in this band is also larger than those of H$_2$O
and CO.
\item The cross section for the photodissociation of H$_2$O producing
O($^1$S) at the solar H Ly-$\alpha$ wavelength (with 1\% O($^1$S) yield) is smaller
by more than two orders of magnitude than the cross section for the photodissociation of CO$_2$ producing
O($^1$S) in the wavelength region 1165--955~\AA. Though the solar flux at 1216 \AA\ is higher
by about two orders of magnitude than that in the 1165--955 \AA\ wavelength region, the larger value
of the CO$_2$ cross section in this wavelength band
makes CO$_2$ an important source for the production
of the metastable O($^1$S) atom.
\item In the case of CO, the dissociation and ionization thresholds are close to each other.
Hence, most of the solar radiation ionizes CO molecule rather than producing the
O($^1$S) and O($^1$D) atoms.
\item Though the CO abundance is relatively high ($\sim$22\%) in comet C/1996 B2 Hyakutake, the contribution of CO
photodissociation in the O($^1$D) production is small ($\sim$1\%), while for the production of O($^1$S)
its contribution is 10 to 25\%.
\item The photoelectron impact dissociative excitation of H$_2$O, CO$_2$, and CO makes
only a minor contribution ($<$1\%) in the formation of metastable O($^1$S) and O($^1$D)
atoms in the inner coma.
\item The O($^1$S) density peaks at a shorter radial distance than the O($^1$D) density. The
peak of the O($^1$S) density is found around 60 km from the nucleus, while the O($^1$D)
density shows a broad peak around 200--600 km.
\item In an H$_2$O-dominated comet, the green line emission is mainly generated in the
photodissociative excitation of H$_2$O, with a contribution of 40 to 60\% (varying according to the radial
distance) to the total intensity, while the photodissociation of
CO$_2$ is the next potential source contributing 10 to 40\%.
\item For the red line emission the major source is
photodissociative excitation of H$_2$O, with contribution varying from 60 to 90\% depending on
the radial distance from the nucleus.
\item The G/R ratio depends not only on the production and loss processes of the O($^1$S) and O($^1$D)
atoms, but also on the size of observing slit and the geocentric distance of comet at the time of
observation.
\item For a fixed slit size, the calculated value of the G/R ratio is found to vary between 0.03 and 0.5 depending on the
geocentric distance of the comet.
In the innermost ($<$300 km) part of the coma, the G/R ratio is always larger than 0.1, with
values as high as 0.5. On the other hand, at cometocentric distances larger than 1000 km the G/R ratio
is always less than 0.1.
\item The model calculated radial profiles of the 6300 and 5577 \AA\ lines are consistent with the
profiles observed in comet C/1996 B2 Hyakutake\ for an O($^1$S) yield of 0.4 ($\pm$0.1)\% and a CO$_2$ abundance of 1\%.
\item The model calculated G/R ratio on comet Hyakutake is in good agreement with the G/R
ratio observed on two days in 1996 March by two observatories using different slit sizes.
\end{enumerate}
\section*{Acknowledgments}
S. Raghuram was supported by the ISRO Senior Research Fellowship during the period of this work.
\section{Introduction}
Particle physics is presently facing at least two major issues. A first one is
the exploration of the fundamental mechanism that generates the elementary
particle masses and leads to the existence of a new type of particles, the Higgs
bosons \cite{Higgs:1964pj,Englert:1964et,Guralnik:1964eu}. The discovery in 2012
of such a particle at the CERN Large Hadron Collider (LHC)
\cite{Aad:2012tfa,Chatrchyan:2012xdj} with a mass of \cite{Aad:2015zhl}\\[-3mm]
\begin{equation}
M_H=125~{\rm GeV} \, , \\[-.1mm]
\label{eq:Hmass}
\end{equation}
is acknowledged to be of very high relevance but an equally important
undertaking would be the precise determination of its basic properties
\cite{Khachatryan:2016vau,ATLAS-web,CMS-web}. In particular, we need to answer
the question of whether this new state is the one predicted by the Standard
Model (SM)
\cite{Glashow:1961tr,Weinberg:1967tq,Salam:1968rm,Gross:1973id,Politzer:1973fx},
the theory that describes in a minimal way the electromagnetic, weak and strong
interactions, or whether it is part of the extended structure of a more fundamental
theory; for reviews of the SM Higgs sector, see for instance
Refs.~\cite{Gunion:1989we,Spira:1997dg,Djouadi:2005gi,Dittmaier:2011ti,Dittmaier:2012vm,Heinemeyer:2013tqa,deFlorian:2016spz,Spira:2016ztx,Dawson:2018dcd}.
This is a particularly important question as the SM has many shortcomings, a
crucial one being the Higgs sector itself, which is considered to be
highly unnatural from a theoretical perspective: it does not provide
protection against the extremely high scales that contribute to the Higgs boson
mass and would, in principle, push it close to the Planck scale rather than to the weak
scale. Whether or not there is New Physics beyond the SM is a vital question for particle
physics.
A second major issue, which provides at the same time a decisive hint for the
existence of New Physics beyond the SM, is related to the longstanding problem
\cite{Zwicky:1933gu} of the existence and the nature of the Dark Matter (DM) in
the Universe. Indeed, cosmological considerations and astrophysical observations
point toward the existence of a matter component, distinct from ordinary
baryonic matter, whose cosmological relic abundance according to the recent
extremely precise measurements from the PLANCK satellite~\cite{Ade:2015xua} is
given by
\begin{equation}
\Omega_{\rm DM} h^2 = 0.1188 \pm 0.0010 \, , \label{eq:omegah}
\end{equation}
with $h$ being the reduced Hubble constant, and corresponds to approximately
$25\%$ of the energy budget of the Universe. It is commonly believed that this
DM component is accounted for by a new particle, stable at least on cosmological
scales, with very suppressed interactions with the SM states and cold, i.e.\
non--relativistic at the time of matter--radiation equality in the Universe.
Particle physics proposes a compelling solution to this puzzle in terms of a
colorless, electrically neutral, weakly interacting, absolutely stable particle
with a mass in the vicinity of the electroweak scale. While the observed matter
content in the SM does not involve such a state, the neutrinos being too light
to offer a viable solution, many of its extensions predict the occurrence of new
weakly interacting massive particles (WIMPs) that could naturally account for
this phenomenon; see for instance
Refs.~\cite{Jungman:1995df,Drees:1998ra,Bergstrom:2000pn,Munoz:2003gx,Bertone:2004pz,Feng:2010gw,Drees:2012ji,Roszkowski:2017nbc,Arcadi:2017kky,Kahlhoefer:2017dnp,Tanabashi:2018oca}
for some general reviews on the possible candidates.
In fact, in many extensions of the SM, the naturalness and DM problems can be
solved at once, sometimes in a rather elegant manner. This is, for instance, the
case of supersymmetric theories
\cite{Wess:1974tw,Golfand:1971iw,Drees:2004jm,Baer:2006rs,Martin:1997ns} which
postulate the existence of a new partner to every SM particle and the lightest
superparticle was considered for a long time as the ideal candidate
\cite{Ellis:1983wd,Ellis:1983ew,Goldberg:1983nd,Krauss:1983ik,Griest:1988ma,Drees:1992am}
for Dark Matter\footnote{The two other theoretical constructions that address
the problem of the hierarchy of scales in the SM Higgs sector, namely extra
space--time dimensions and composite models have also their DM candidates,
respectively, the lightest Kaluza--Klein \cite{Servant:2002aq,Cheng:2002ej} and
the lightest T--odd \cite{Cheng:2004yc} states.}. It is extremely tempting and,
in fact, rather natural to consider that these two important issues are
intimately related and the Higgs bosons serve as mediators or portals to the DM.
As a matter of fact, in order to make the DM states absolutely stable, one has
to invoke a discrete symmetry under which they (and their eventual companions in
an extended DM sector) are odd while all SM particles are even, forbidding the
DM to decay into ordinary fermions and gauge bosons. If the DM particle is not
charged under the electroweak group, the Higgs sector of the theory allows one to
accommodate in a minimal way the interaction among pairs of DM and of SM
particles~\cite{Silveira:1985rk,McDonald:1993ex,Burgess:2000yq,Kim:2006af,Kanemura:2010sh,Djouadi:2012zc,Djouadi:2011aa,LopezHonorez:2012kv,Andreas:2010dz,Lebedev:2011iq,Mambrini:2011ik,Davoudiasl:2004be,Schabinger:2005ei,Patt:2006fw,OConnell:2006rsp,Barger:2007im,He:2008qm,He:2009yd,Barger:2010mc,Clark:2009dc,Lerner:2009xg,Goudelis:2009zz,Yaguna:2008hd,Cai:2011kb,Biswas:2011td,Farina:2011bh,Hambye:2008bq,Hambye:2009fg,Hisano:2010yh,Englert:2011yb,Englert:2011aa,Andreas:2008xy,Foot:1991bp,Melfo:2011ie,Raidal:2011xk,He:2011de,Mambrini:2011ri,Chu:2011be,Ghosh:2011qc,Greljo:2013wja,Cline:2013gha}.
These Higgs--portal models can then describe in an economical manner a most
peculiar feature of the DM particles, namely their generation mechanism, which
is based on the freeze--out paradigm and relates the DM cosmological relic
density to a single particle physics input, their thermally averaged
annihilation cross section. Indeed, in these scenarios, the relic density would be induced when pairs of DM states annihilate into SM fermions and gauge
bosons, through the $s$--channel exchange of the Higgs bosons. These Higgs
bosons will also be the mediators of the mechanisms that allow for the
experimental detection of the DM states.
The simplest of the Higgs--portal scenarios is when the Higgs sector of the
theory is kept minimal and identical to the one postulated in the SM, namely the
single doublet Higgs field structure that leads to the unique $H$ boson which
has been observed so far. Mindful of William of Occam, one could then extend the
model by simply adding only one new particle to the spectrum, the DM state, as
an isosinglet under the electroweak gauge group. Nevertheless, the DM particle
can have the three possible spin assignments, that is, can be a spin--zero or
scalar particle, a spin--1 vector boson or a Dirac or Majorana spin--$\frac12$
fermion (a spin--2 DM state has been also proposed \cite{Babichev:2016bxi}).
Although only effective and possibly non--renormalisable, this approach can be
adopted as it is rather model--independent and does not make any assumption on
the very nature of the DM
\cite{Kim:2006af,Kanemura:2010sh,Djouadi:2012zc,Djouadi:2011aa,LopezHonorez:2012kv,Goodman:2010ku,Fox:2011pm,Buckley:2014fba,Abdallah:2015ter,Alanne:2017oqj}.
In addition, such a scheme can be investigated in all facets as it has a very
restricted number of extra parameters in addition to the SM ones, namely the
mass of the DM particle and its coupling to the Higgs boson\footnote{These two
parameters can be further related by the requirement that the cosmological relic
density takes a value that is very close to the experimentally measured one,
eq.~(\ref{eq:omegah}). However, as will be seen later, one could consider more
general scenarios in which the DM particle is not absolutely stable and/or does
not account for the entire DM in the Universe.}. This effective, simple and
economical SM Higgs--portal scenario can be considered to be, in some sense, a
prototype WIMP model.
A most interesting realization of the SM--like Higgs--portal discussed above is
when the DM particle is an electroweak singlet fermion. However, a coupling
between this DM candidate and the SM Higgs doublet field is necessarily not
renormalizable and this theory can only be effective and valid at the low energy
scale. In order to cure this drawback and make the theory complete in the
ultraviolet regime while keeping the Higgs sector as minimal as in the SM, the
DM state should be accompanied by some fermionic partners that are non--singlets
under the SU(2) electroweak group. The spin--$\frac12$ DM particle could then be
part of an isodoublet or, if it is still an isosinglet, could mix with it.
Hence, the possibility of further extending the fermionic sector of the theory
should be considered.
Besides the option of a fourth generation of fermions with a massive
right--handed neutrino \cite{Belotsky:2002ym,Kribs:2007nz,Denner:2011vt}, which
is now completely excluded by the LHC Higgs data in the context of a SM--like
Higgs sector \cite{Djouadi:2012ae,Kuflik:2012ai}, two other possibilities have
been advocated. A first one is the introduction of a Majorana neutral fermion
that is part of a singlet--doublet lepton extension of the SM, the so--called
singlet--doublet model
\cite{Cohen:2011ec,Cheung:2013dua,Calibbi:2015nha,Yaguna:2015mva}. A second
option for such an extended fermionic sector would be a Dirac heavy neutrino
that belongs to an entire vector--like fermion family added to the SM fermionic
spectrum
\cite{Fujikawa:1994we,Hambye:2008bq,Hambye:2009fg,Hisano:2010yh,Lebedev:2011iq,Angelescu:2015uiz,Angelescu:2016mhl}.
A renormalizable Higgs--DM interaction is then generated through mixing, even if
the DM particle is the isosinglet neutral state in the two constructions. The
fermionic Higgs--portal discussed before can then be interpreted as an effective
limit of such a framework in which the extra fermionic fields, except for the
one of the DM, are assumed to be very heavy and integrated out (though the
scheme is rather constrained by electroweak precision data).
In the case of scalar and vector DM states, the model--independent approach
mentioned above can, instead, be made renormalisable. In the vector case, the
DM can be identified as the stable gauge boson of a dark U(1) gauge symmetry
group that is spontaneously broken by the vacuum expectation value of an
additional complex scalar field
\cite{Hambye:2008bq,Lebedev:2011iq,Baek:2012se,Farzan:2012hh,Arcadi:2016qoz}.
In the scalar case, one can either add simply a gauge singlet field
\cite{Silveira:1985rk,McDonald:1993ex,Burgess:2000yq} or invoke the possibility
of an additional scalar doublet field that does not develop a vacuum expectation
value and, hence, does not participate in electroweak symmetry breaking
\cite{Deshpande:1977rw,LopezHonorez:2006gr,Barbieri:2006bg,Ma:2006km,Arhrib:2013ela}.
The four degrees of freedom of the inert doublet field would then correspond to
four scalar particles and the lightest of them, when electrically neutral, could
be the DM candidate. Hence, in both the vector and scalar cases, the DM
particle comes with additional beyond the SM states that can also be considered
to be heavy in an effective framework. Nevertheless, there are theoretical
constraints on these scenarios, as well as experimental ones mainly due
to the high precision electroweak data, which imply that the extra states
associated with the DM particle should have comparable masses and can thus be
searched for and observed at present or future collider experiments.
Another possibility for having a Higgs--portal model which remains
theoretically consistent up to very high energy scales, is when it is the Higgs
sector itself that is enlarged. For instance, an additional Higgs singlet field
that acquires a vacuum expectation value and mixes with the SM--like Higgs field
allows for a renormalisable coupling with an isosinglet fermion state
\cite{Schabinger:2005ei,Patt:2006fw,OConnell:2006rsp,Barger:2007im,Profumo:2007wc,Baek:2011aa,Bertolini:2012gu,Robens:2015gla,Godunov:2015nea,Falkowski:2015iwa}.
Such a scheme remains minimal compared to the SM effective scenario since the
DM mass can be generated dynamically by the extra singlet field, hence relating
it to the DM coupling to the Higgs bosons. More generally, many extensions which
were considered in the past to address some of the shortcomings of the SM
involve a Higgs sector that is extended by a singlet scalar field. Another
possibility would be that the additional singlet scalar does not mix with
the SM Higgs doublet, as often occurs in (partially) composite Higgs models
\cite{Eichten:1979ah,Kaplan:1983sm} thus opening the possibility that the new
singlet could also correspond to a pseudoscalar Higgs state
\cite{Mambrini:2015wyu,DiChiara:2015vdm,Backovic:2015fnp,Falkowski:2015swt,Franceschini:2015kwy,Barducci:2015gtd,DEramo:2016aee,Djouadi:2016eyy}.
The new scalar or pseudoscalar particles, together with the SM Higgs boson, will
then serve as a double portal to the DM. The latter can be again the neutral
component of a vector--like fermion family, for instance. Extensions in which
both scalar and pseudoscalar Higgs states are simultaneously present have also
been considered and lead to a rather interesting phenomenology in the DM
context, in particular when the pseudoscalar state is very light compared to the
scalar one or when the two states are almost degenerate in mass.
Among the theories with an extended scalar sector, two--Higgs doublet models
have a special status and have been, by far, the most studied ones in the last
decades; for a review, see Ref.~\cite{Branco:2011iw}. Compared to the SM with
its unique Higgs particle, the Higgs sector of the model involves five
physical states after electroweak symmetry breaking: two CP--even neutral
ones that mix and share the properties of the SM Higgs boson, a neutral CP--odd or
pseudoscalar state and two charged Higgs states with properties that are
completely different from those of the SM Higgs boson. The presence of the
additional particles leads to a very rich phenomenology and interesting new
signatures, in particular, as a result of the many possibilities for the
structure of the couplings of the Higgs bosons to standard fermions
\cite{Glashow:1976nt}. Two Higgs--doublet models appear naturally in very well
motivated extensions of the SM, such as the minimal supersymmetric model, and
provide a very good benchmark for investigating physics beyond the SM.
These models should be extended to incorporate the DM particles and this can be
done in a way analogous to what has been mentioned previously, by introducing a
full sequential family of vector--like fermions
\cite{Djouadi:2016eyy,Angelescu:2015uiz,Bizot:2015qqo} or a singlet--doublet of
heavy leptons \cite{Berlin:2015wwa,Arcadi:2018pfo} for instance. As also
noted above, there is the possibility that only one of the Higgs doublets is
responsible for electroweak symmetry breaking, while the other doublet neither
acquires a vacuum expectation value nor couples to SM fermions as a result of a
discrete symmetry; this is the so--called inert doublet model in which the DM
candidate is the lightest neutral state of the inert field~\cite{Deshpande:1977rw}.
Another scenario which has recently gained wide interest in the context of DM, as
it represents a useful limit of some theoretically well motivated models and
leads to a very interesting phenomenology, is the one in which the two--doublet
Higgs sector is further extended to incorporate a light pseudoscalar singlet
field that can serve as an additional Higgs--portal to the DM
\cite{Ipek:2014gua,Goncalves:2016iyg,Bauer:2017ota,Tunney:2017yfp,Abe:2018bpo}.
Finally, to close this tentative list of possible extended Higgs and DM models,
there are supersymmetric extensions of the SM
\cite{Wess:1974tw,Golfand:1971iw,Drees:2004jm,Baer:2006rs,Martin:1997ns} which
solve what was for a long time considered as the most notorious problem of the
SM, the hierarchy problem mentioned in the beginning of our discussion: the
cancellation of the quadratic divergences that appear when calculating the
radiative corrections to the Higgs boson mass is highly unnatural in the SM and
needs an extreme fine--tuning. Supersymmetric theories postulate the existence
of a new partner to every SM particle with couplings that are related in such a
way that these quadratic divergences are naturally cancelled.
In the Minimal Supersymmetric Standard Model (MSSM)
\cite{Martin:1997ns,Haber:1984rc,Djouadi:1998di,Chung:2003fi}, in which the
Higgs sector is extended to contain two doublet fields
\cite{Gunion:1989we,Djouadi:2005gj,Carena:2002es,Heinemeyer:2004gx}, there is an
ideal candidate for the weakly interacting massive particle which is expected to
form the cold DM: the lightest supersymmetric particle, which is in general a
neutralino, a mixture of the superpartners of the neutral gauge and Higgs bosons
\cite{Ellis:1983wd,Ellis:1983ew,Goldberg:1983nd,Krauss:1983ik,Griest:1988ma,Drees:1992am}.
This particle is absolutely stable when a symmetry called R--parity
\cite{Farrar:1978xj} is conserved and, in a wide and natural range of the MSSM
parameter space, its annihilation rate into SM particles fulfills the
requirement that the resulting cosmological relic density is within the measured
range~
\cite{Jungman:1995df,Drees:1998ra,Bergstrom:2000pn,Munoz:2003gx,Bertone:2004pz,Feng:2010gw,Drees:2012ji,Roszkowski:2017nbc}.
In order to circumvent some shortcomings of the MSSM, such as the so--called
$\mu$--problem \cite{Nilles:1982mp,Frere:1983ag,Kim:1983dt}, a further extension
that has become popular is the so--called next--to--minimal MSSM
(NMSSM) \cite{Ellwanger:2009dp,Maniatis:2009re,Djouadi:2008uw,Baum:2017enm} in
which a complex isosinglet field is added, thus extending the two--Higgs doublet
sector of the theory by an extra CP--even and an extra CP--odd Higgs particle
that could be very light and have quite an interesting phenomenology.
In most cases, in particular when the superpartners of the fermionic spectrum
are very heavy as indicated by current LHC data, the neutral states of the
extended Higgs sector of these models can serve as the privileged portals to the
DM neutralino in a large area of the parameter space. In fact, the
singlet--doublet lepton model and the models with two--Higgs doublets and a
pseudoscalar field introduced previously can be seen as representing simple
limiting cases of the MSSM and the NMSSM, respectively.
Hence, there is a broad variety of models, with various degrees of complexity,
in which the relevant interactions of the DM particles that are present in the
Universe are mediated by the Higgs sector of the theory. The aim of this review
is to analyze these models and to study their phenomenology in both collider
and astroparticle physics experiments.
Actually, a fundamental and interesting aspect of all these Higgs--portal DM
models is that they can be probed not only in direct
detection~\cite{Goodman:1984dc,Wasserman:1986hh,Drukier:1986tm} in astrophysical
experiments, i.e.\ in elastic scattering of the DM off nuclei, or in indirect
detection, when one looks in the sky for some clean products of their
annihilation processes such as gamma rays
\cite{Silk:1984zy,Turner:1986vr,Rudaz:1987ry,Ellis:1988qp,Primack:1988zm,Bergstrom:1988fp,Bouquet:1989sr,Ellis:2001hv},
but also at colliders. There, and in contrast to astroparticle experiments, one
can search at the same time for the DM particles by looking for instance at
invisible Higgs decays
\cite{Shrock:1982kd,Belotsky:2002ym,Joshipura:1992hp,Choudhury:1993hv,Frederiksen:1994me,Belanger:2001am,Godbole:2003it,Eboli:2000ze,Davoudiasl:2004aj,Batell:2011pz,Belanger:2013kya,Low:2011kp,Espinosa:2012vu}
and other missing transverse energy signatures~\cite{Goodman:2010ku,Fox:2011pm},
as well as for the possible companions of these particles, the new fermions or
new bosons that belong to the same representation or mix with it, and for the
mediators of the DM interactions, the Higgs bosons, including those that
possibly appear in extended scenarios. These distinct types of searches are
hence highly complementary, in many different ways.
During the last decade, the experimental community, led by the
intense effort at the LHC and complemented by an impressive array of other
experiments, from low energy experiments in the neutrino and B--meson sectors
to cosmology and astroparticle physics experiments searching for DM
such as XENON \cite{Aprile:2015uzo,Aprile:2017iyp,Aprile:2018dbl}, has
challenged the SM from all imaginable corners. While brilliant and historical
successes have been achieved, like the discovery of the Higgs boson, no sign of
a departure from the SM predictions has emerged so far. This is particularly the
case at the high--energy frontier, where the first tests of the Higgs boson
properties at the LHC have shown that the particle is approximately SM--like
\cite{Khachatryan:2016vau,ATLAS-web,CMS-web}. Furthermore, direct searches for
new particles have been performed in many topologies, covering a large number of
new physics possibilities, and turned out to be unsuccessful for the time being
\cite{ATLAS-web,CMS-web}. On the other hand, the absence of signals in
astrophysical experiments searching for DM particles is putting the paradigm of
a weakly interacting massive DM particle under increasing pressure. For
instance, the XENON1T experiment
\cite{Aprile:2015uzo,Aprile:2017iyp,Aprile:2018dbl} has set strong bounds on the
mass and couplings of the DM, excluding large areas of the natural parameter
space of the beyond the SM schemes that predict them. To achieve a better
sensitivity to these extended Higgs--portal scenarios, a significantly larger
data sample is required and, ultimately, new experiments that are capable of
exploring higher DM mass scales or smaller couplings will be needed.
Particle physics is undergoing a crucial moment where a strategy for the future
is being decided and choices for the next generation of experiments are to be
made \cite{EU-Strategy}. Besides the high--luminosity option of the LHC
\cite{ATLAS:2013hta,CMS:2013xfa,Cepeda:2019klc}, in which a much larger data
sample than at present should be collected at the slightly higher center of mass
energy of 14 TeV and which should be the natural next step, a subsequent
possibility will be to move to higher energies, and doubling the LHC energy is
under serious consideration \cite{Baur:2002ka,Cepeda:2019klc}. In the longer run,
proton colliders with energies up to 100 TeV are currently envisaged both at
CERN \cite{Contino:2016spe} and in China \cite{Tang:2015qga}. A preliminary
step at these colliders would be to run in the much cleaner $e^+e^-$ mode at an
energy of about 250 GeV and with high luminosity, allowing them to be true
Higgs boson factories
\cite{Gomez-Ceballos:2013zzn,Mangano:2018mur,CEPC-SPPCStudyGroup:2015csa,CEPCStudyGroup:2018ghi}.
Such a plan is also under discussion in Japan with a linear $e^+e^-$ collider
that can be possibly extended up to 1 TeV \cite{Djouadi:2007ik,Baer:2013cma}
and at CERN where a multi--TeV $e^+e^-$ machine is contemplated
\cite{Battaglia:2004mw,Linssen:2012hp}.
On the astrophysical front also, several experiments are planned for the near and
medium term future with a significant increase in sensitivity in the search for the
DM particles. Some examples of experiments in direct detection are the XENONnT
\cite{Aprile:2015uzo} and the LUX--ZEPLIN (LZ) \cite{Akerib:2015cja} detectors
and, later, the DARWIN experiment \cite{Aalbers:2016jon} which would be the
``ultimate" DM detector as it could reach a sensitivity close to the irreducible
background represented by the $Z$--boson mediated coherent scattering of SM
neutrinos on nucleons, the so--called neutrino floor. Very powerful and
sensitive indirect detection experiments are also planned in the near future, such
as the Cherenkov Telescope Array (CTA) \cite{Acharya:2013sxa} and the High
Altitude Water Cherenkov (HAWC) \cite{Abeysekara:2014ffg}, the next generation
ground--based observatories for gamma--ray astronomy at very high energies.
At this stage, we believe that it is an appropriate time to summarize and update
the large amount of analyses that have been performed in the last decade at both
collider and astroparticle physics experiments, and to infer the constraints that
they impose on these Higgs--portal scenarios for the DM particles. It also seems
opportune to investigate the potential of the upgrades planned at present
machines, now that we have a relatively clear idea of the near and medium
term future, and of the facilities that are planned for the more remote future, in
pursuing the search for the DM particle and the possible new spectrum which is
associated with it. This is the aim of this review: an extensive and comprehensive
account of the present constraints and the future prospects on the various
Higgs--portal scenarios for Dark Matter and the possible complementarity between
the different experiments and approaches.
The organization of this review follows the classification of the numerous
Higgs--portal models for DM introduced above. The next section is devoted to
the minimal Higgs--portal model with a SM--like Higgs sector that mediates the
interactions of an isosinglet spin--0, spin--$\frac12$ or spin--1 DM state in an
effective approach. Section 3 is dedicated to scenarios in which the Higgs
sector is kept minimal but the DM one is extended to incorporate additional
states; we specialize to renormalizable models in which the DM is a
spin--$\frac12$ fermion that is part of a fourth generation family, a
singlet--doublet lepton or a full family of vector--like fermions. The
subsequent sections are instead devoted to scenarios in which it is the Higgs
sector of the theory which is extended to incorporate additional singlet or
doublet fields. In section 4, we analyze extensions with additional scalar
singlets that mix or not with the SM Higgs boson and couple to the DM, either in
the general effective approach or when it is an isosinglet heavy neutrino.
Section 5 deals with two--Higgs doublet models that couple to a lepton in a
singlet--doublet or a vector--like representation; we also consider the cases in
which one of the scalar doublets is inert and in which an additional light
pseudoscalar Higgs state is present. In section 6, we consider two
supersymmetric models, the MSSM and NMSSM, in which the partners of the SM
fermions and of the gluon are assumed to be very heavy and the DM phenomenology is
essentially mediated by the Higgs bosons. Each of these sections is structured
as follows. In a first part, we introduce the various models and summarize the
possible theoretical constraints to which they are subject. It is followed by an
extensive discussion of the most relevant collider aspects of the Higgs and DM
sectors and the constraints or prospects in their searches. We conclude the
sections by an updated analysis of the DM phenomenology, the relic density and
the constraints/prospects in direct and indirect detection experiments,
including the possible complementarity with colliders. Our conclusions will be
briefly stated in section 7. The review includes three appendices: one for the
analytical material describing Higgs and DM production at colliders in
the various models, one for analytical approximations for DM annihilation
cross sections and another for expressions of the renormalization group
evolution of some Higgs couplings.
\setcounter{section}{1}
\renewcommand{\thesection}{\arabic{section}}
\include{sec-SM}
\setcounter{section}{2}
\renewcommand{\thesection}{\arabic{section}}
\include{sec-NF}
\setcounter{section}{3}
\renewcommand{\thesection}{\arabic{section}}
\include{sec-Sing}
\setcounter{section}{4}
\renewcommand{\thesection}{\arabic{section}}
\include{sec-2HDM}
\setcounter{section}{5}
\renewcommand{\thesection}{\arabic{section}}
\include{sec-MSSM}
\setcounter{section}{6}
\renewcommand{\thesection}{\arabic{section}}
\section{Conclusions}
The absence of an explanation for one of the most important contemporary
scientific puzzles, the origin and the nature of the observed Dark Matter
component of the Universe, strongly suggests extending the Standard Model of
particle physics by at least one weakly interacting and massive particle that
would account for it. The interaction between this DM particle and the SM
fermions and gauge bosons, which is at the basis of the mechanism that generates
the DM and allows for its experimental detection, can be accommodated through the
Higgs sector of the theory. The latter hence serves as a privileged ``portal''
between the visible sector of the SM and the DM sector. In general, not only
should the dark sector be extended in order to include companions of the DM
particle that permit, among other features, renormalisable interactions, but
the Higgs sector of the theory can also be enlarged, hence allowing for
additional Higgs--portals to the DM states.
In the present work, we have reviewed a multitude of elaborated theoretical
realizations, with various degrees of complexity, of such Higgs--portal
scenarios. We have summarized the important theoretical elements that allow one
to describe them, discussed the most relevant collider aspects of the Higgs and DM
sectors including present constraints on the spectra and future prospects for
observation and, finally, analysed and updated the two most important
characteristics of the phenomenology of the DM state, namely its cosmological
relic abundance and its rates in direct and indirect detection in astroparticle
physics experiments. We have paid particular attention to the complementarity
between, on the one hand, the collider searches for the DM states and their
companions as well as for the extra Higgs bosons and, on the other hand, the
dedicated direct and indirect DM searches.
The minimal way of realizing a Higgs--portal scenario would be simply to extend the SM with a single particle, the DM candidate, which couples to the
unique Higgs boson of the theory through an effective and possibly
non--renormalizable interaction. Although the DM can have three different spin
assignments, namely be a spin--0 scalar, a spin--$\frac12$ Dirac or Majorana
fermion and a spin--1 vector, the resulting model is rather simple as it has
only two free parameters, the DM mass and its coupling to the $H$ boson, and is
thus easily testable. We have thoroughly analysed such a scenario, starting with
the possibility of searching for the DM particles at high energy colliders and,
in particular, at the LHC. Being electrically neutral and stable, they are
essentially undetectable and would appear only as missing transverse energy when
produced in association with visible SM particles, which should then be tagged.
In the context of this SM Higgs--portal scenario, there are two main ways to
observe such elusive states. First, if they are lighter than half of the
Higgs mass, $m_{\rm DM} \lesssim \frac12 M_H \approx 62$ GeV, they will appear as
decay products of the observed Higgs boson. For slightly heavier DM
particles, $m_{\rm DM} \gtrsim \frac12 M_H$, the produced Higgs boson should be
virtual or off--shell and would split into a pair of DM states, which results
in much smaller production cross sections. Still for light DM particles,
$m_{\rm DM} \lesssim \frac12 M_H$, a second possibility would be to search
indirectly for the invisible Higgs decays into DM particles by measuring
precisely the total decay width of the Higgs boson and, alternatively, its
various visible decay branching fractions. If any additional decay mode like the
invisible one is present at a substantial level, it will affect the two types of
observables. This is one of the primary reasons to perform the high--precision
Higgs measurements that are planned, for instance, in the high--luminosity
option of the LHC or at future collider facilities.
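As a minimal numerical illustration of this last point (a sketch of ours, not tied to any particular model discussed in this review, and assuming the approximate SM total width $\Gamma_H^{\rm SM}\approx 4.1$ MeV for $M_H=125$ GeV), an additional invisible partial width rescales all visible branching fractions by a common factor:
\begin{verbatim}
# Sketch: impact of an invisible partial width on the Higgs branching
# fractions, assuming Gamma_SM ~ 4.1 MeV for M_H = 125 GeV.

GAMMA_SM = 4.1e-3   # total SM width in GeV (approximate)

def invisible_branching(gamma_inv):
    """Invisible branching ratio for an extra partial width gamma_inv (GeV)."""
    return gamma_inv / (GAMMA_SM + gamma_inv)

def visible_suppression(gamma_inv):
    """Common rescaling factor of all visible branching fractions."""
    return GAMMA_SM / (GAMMA_SM + gamma_inv)

for gamma_inv in (0.1e-3, 0.5e-3, 1.0e-3):   # hypothetical values in GeV
    print(f"Gamma_inv = {gamma_inv*1e3:.1f} MeV: "
          f"BR_inv = {invisible_branching(gamma_inv):.3f}, "
          f"visible BRs rescaled by {visible_suppression(gamma_inv):.3f}")
\end{verbatim}
In this simple counting, a percent--level determination of the visible rates would thus be sensitive to invisible partial widths of order a few times $10^{-5}$ GeV.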
Of course, DM particles can be experimentally probed also through dedicated
search strategies, namely direct and indirect detection. In this review, we have
summarised the constraints on Higgs--portal models from astroparticle physics
experiments and compared them with what is obtained at high--energy colliders
like the LHC. In this context, direct detection typically sets the most
stringent limits. We have updated the presently existing ones, in particular by
the XENON1T experiment which provides the strongest bounds, and discussed the
projected sensitivities of future experiments like XENONnT, LZ and DARWIN
detectors. Current exclusion limits already rule out large regions of the
theoretically viable parameter space of the SM Higgs--portal model and the
absence of a signal at the next generation detectors will rule out thermal DM
states for masses up to about 1 TeV.
One should note that the collider constraints from the invisible width of the
Higgs boson, although relatively weak compared to the above astrophysical ones,
are nevertheless complementary to them, in particular at low DM masses when the
sensitivity of direct detection experiments degrades. Furthermore, the collider
searches do not rely on a specific hypothesis for the DM abundance and are thus
more general, applying also to particles that are stable on detector scales but not
necessarily on cosmological ones. It is thus extremely important to further
exploit the potential of searches for additional exotic decay channels of the
Higgs boson at the LHC including the high--luminosity option and at future
higher--energy hadron and lepton colliders, covering in particular the region
$M_H \lesssim 2 m_{\rm DM}$.
These colliders are also useful in searching for the possible companions of the
DM particle. Indeed, while the scalar and vector effective DM Higgs--portals are
renormalizable, the singlet fermionic effective one is not, being realized
through a dimension--5 operator. The first type of extension of the SM
Higgs--portal scenario considered in this review hence consisted in enlarging
the DM sector to permit renormalizable interactions of a fermionic DM with the
SM Higgs sector. Two simple examples of extensions have been studied, in
addition to the possibility of a fourth generation of chiral fermions which was
shown to be excluded by present LHC and astrophysical data: the so--called
singlet--doublet lepton model with a Majorana DM and the addition of a full
``family'' of vector--like fermions with its Dirac singlet neutrino being the DM
candidate. Hence, in both scenarios the DM is accompanied by fermionic partners
that are non--singlets under the electroweak group.
Adopting a strategy analogous to that of the minimal Higgs--portal scenario, we have
summarized the present constraints and the expectations for these two scenarios
from both the collider and astroparticle physics perspectives, as well as from
theoretical considerations such as perturbativity, stability of the electroweak
potential and conformity with the precision data. Concerning the phenomenology
of the DM state, the singlet--doublet lepton model is in similar tension with
direct detection as the effective Higgs--portal, with the exception of the
so--called blind-spots in which the Higgs--DM coupling vanishes. The case of a
vector--like Dirac DM is even more constrained since the vectorial coupling with
the $Z$ boson further enhances the DM spin--independent interactions. The only
viable solution is represented by coannihilations of a mostly singlet--like DM
state that is nearly mass degenerate with the extra leptons that are present in
the spectrum. Indirect detection constraints are not competitive with the ones
from direct detection and have therefore often been omitted. On the other hand, collider
phenomenology is enriched by the possibility of searching for the fermionic, non
isosinglet partners of the DM state. Current limits on their masses and
couplings have been presented and the prospects for future detection at the
HL--LHC, as well as at future proton or $e^+e^-$ colliders have been examined
in detail.
A third class of models considered in this review consisted of extensions of
the Higgs sector of the theory with the incorporation of additional scalar
fields that could also serve as portals to the DM particles. We have first
studied a minimal extension with a singlet scalar Higgs field that develops a
vacuum expectation value and mixes with the SM Higgs state. The DM sector
consisted again of a particle with the three possible spin assignments, namely
spin--0, spin--$\frac12$ and spin--1, coupling to the Higgs bosons but this
time in a renormalizable way even in the spin--$\frac12$ DM case. Analogously to
the effective Higgs--portal model, a strong correlation between the cross
sections for DM annihilation into SM states and for DM scattering on nucleons is
present. This implies very strong constraints from DM direct detection, which can
be evaded only in the proximity of $s$--channel resonances or in the so--called
``secluded'' regime, corresponding to the annihilation of the DM into pairs of
the additional mediator. A quite orthogonal scenario that we have then examined
is when the additional scalar state does not participate in electroweak symmetry
breaking and does not mix with the SM--like Higgs boson and, thus, can also be
of a pseudoscalar nature. These scalar and pseudoscalar resonances have been
assumed either to have direct couplings only to the heavy top quark, or to have
exclusively one--loop couplings with the SM gauge bosons, induced by
vector--like fermions for instance. In such a case, a nice complementarity
between the requirements of a correct relic density and LHC searches for
resonances decaying into gauge bosons, in particular diphotons or heavy top
quarks, can be established. Concerning DM phenomenology, in the case of a scalar
mediator, despite the weakness of the limits from direct detection as a result
of the small resonance couplings to the $c$ and $b$ quarks, the favoured regions
correspond again to the cases in which the mass of the DM lies close to the
$s$--channel resonance or is greater than the mass of the mediator. The
interactions of the DM with a heavy pseudoscalar mediator are, in turn, left
unconstrained by direct detection and are only moderately sensitive to indirect
detection. A precise assessment of the collider constraints is thus crucial in order to properly probe this scenario.
We have also briefly discussed the option in which both scalar and
pseudoscalar resonances are present, which is very interesting in two limiting
cases: when the two states are almost degenerate in mass and appear as a single
resonance when produced at colliders, and when the pseudoscalar is much lighter
than the scalar and even than the DM particle. The DM phenomenology of this last
scenario has been studied in detail and features new efficient DM annihilation
channels without altering the direct detection signals. The correct relic
density is achieved in a region of the parameter space which can be probed by
searches for collimated photons from the decays of the light pseudoscalar.
Further increasing the degree of complexity of the models, we have then
considered the case in which the Higgs sector is extended to incorporate
two Higgs doublet fields and, possibly, further augmented by a pseudoscalar
SU(2) singlet. While keeping again most of the focus on scenarios with fermionic
DM, extending the singlet--doublet and the vector--like fermion models to the
2HDM case, we have nevertheless also considered a popular model in which the
second scalar doublet is inert and encloses a scalar DM state and its partners.
The former types of models are particularly interesting for two reasons. First,
they can be seen as special and simple limits of more complete theories, namely
the MSSM and NMSSM. Second, they offer a richer Higgs spectrum with a
broad variety of collider signatures that are not fully explored by the
experimental collaborations. Concerning DM phenomenology, different scenarios
have been considered for the various models. In the singlet--doublet case for
instance, we have adopted a set--up that is similar to the MSSM, with heavy
Higgs bosons that are degenerate in mass and have Type--II couplings to the
SM fermions, and shown that strong constraints from LHC searches and flavor
physics apply to it. The case of a Type--I 2HDM coupled with a singlet--doublet
DM is in turn more interesting in this regard as the CP--odd $A$ state can be
kept light enough to impact the DM and open new viable regions of parameter
space for it. For a vector--like DM, the phenomenology is much more constrained
because of the strong spin--independent interactions generated by the DM
vectorial coupling with the $Z$ boson. Viable DM regions can nevertheless open
up, e.g.\ when the DM is heavy enough to annihilate into channels involving
charged Higgs bosons. On the contrary, bounds from DM direct detection are
significantly relaxed (though not absent) when the 2HDM Higgs sector is further
extended with a pseudoscalar singlet. Collider probes hence play a crucial role
to test these models.
As a final step, we have studied the Higgs and the DM sectors of the most
popular ultraviolet complete extensions of the SM, namely supersymmetric
extensions such as the MSSM. We have first characterized the Higgs sector of
the model, reviewing the so--called hMSSM in which the information that the
mass of the lightest $h$ state is $M_h\!=\!125$ GeV allows one to describe
it simply in terms of two input parameters and, hence, simplifies the discussion to a
large extent. In this simple framework, we have summarized the results of
present collider searches for the extra neutral and charged Higgs bosons.
Assuming that the scalar partners of the SM fermions and the partner of the
gluon are very heavy, as indicated by LHC data, we have focused on the chargino
and neutralino sectors of the theory, which incorporate the DM as the lightest
of the neutral particles. Under the assumption that it interacts mostly with
the MSSM Higgs sector, the correct relic density can be achieved for DM masses
below the scale of 1 TeV that keeps the model natural, either around the
``poles'' at the neutral Higgs boson masses or by invoking a suitable
bino--higgsino admixture of the DM neutralino. Similarly to the singlet--doublet
lepton model, the current and possible future absence of signals in DM direct
detection experiments will exclude increasingly large regions of the natural and
viable DM parameter space.
The same type of study has been repeated in the case of the NMSSM in which the
Higgs sector is further extended by a complex singlet scalar field, which leads
to extra scalar and pseudoscalar states that can be relatively light. The
model can also be described in terms of a limited set of input parameters. The
DM sector of the NMSSM is enriched as well with the presence of an additional SM
singlet component, the singlino, which increases the number of neutralinos to
five. A suitable admixture of singlino and higgsino components for the DM,
together with the presence of a light pseudoscalar particle, allows for the
required cosmological relic density for DM masses of a few hundred GeV and, at
the same time, evades the constraints from direct detection and from the LHC.
In summary, we have reviewed in a rather comprehensive way the Higgs--portal
scenarios for DM, which are very interesting and natural realizations of the
WIMP paradigm. We have considered a series of increasingly refined models and
summarized and updated the present constraints to which they are subject at
high--energy colliders and in astroparticle physics experiments. While some of
these models are severely constrained, other scenarios are still viable and
call for further probing and exploration at present and future
facilities.\bigskip
\noindent {\bf Acknowledgements:} This work is supported by the Estonian
Mobilitas Pluss Grant. Part of this review is based on recent and less recent
work with many colleagues that we would like to thank for fruitful and enjoyable
collaborations. AD would like to thank colleagues at the University of Granada
for their hospitality and for discussions.
\newpage
\setcounter{section}{0}
\renewcommand{\thesection}{A}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
\include{sec-Appendix-Higgs}
\newpage
\setcounter{section}{0}
\renewcommand{\thesection}{B}
\setcounter{equation}{0}
\renewcommand{\theequation}{B.\arabic{equation}}
\include{sec-Appendix-DM}
\newpage
\setcounter{section}{0}
\renewcommand{\thesection}{C}
\setcounter{equation}{0}
\renewcommand{\theequation}{C.\arabic{equation}}
\include{sec-Appendix-RGE}
\newpage
\bibliographystyle{unsrt}
\section{Doublet extensions of the Higgs sector}
We turn now to the scenarios in which the Higgs sector of the theory incorporates
two doublet fields. We first consider the case in which both Higgs doublets
contribute to electroweak symmetry breaking, the so--called 2HDM
\cite{Branco:2011iw}. These are extended to incorporate the DM particles in a
way analogous to what has been done in section 3, and our focus will be on
scenarios in which a full sequential family of vector--like fermions or a
singlet--doublet of heavy leptons is added to the spectrum. We then consider
the case in which only one of the Higgs doublets is responsible for electroweak
symmetry breaking, while the other doublet does not acquire a vev nor couple to
SM fermions, the so--called inert doublet model or IDM in which the DM candidate
will be identified with one of the neutral components of the inert field. As a
final scenario, we consider the case in which the two--doublet Higgs sector is
further extended to incorporate a light pseudoscalar singlet. Such a scenario is
of phenomenological interest as it allows a gauge invariant coupling between the
SM sector and a pure gauge singlet fermionic DM and represents a useful limit of
the NMSSM, which will be treated in the final section of this review. The
section is structured in a manner analogous to the two previous ones: we first
describe the models, including the related theoretical constraints, then move to
the collider constraints and prospects, and conclude with an analysis of the
astrophysical aspects of the DM particle.
\subsection{The two--Higgs doublet model}
In a 2HDM, the Higgs sector incorporates two doublets of scalar fields $\Phi_1$ and $\Phi_2$ and, assuming CP conservation, is described by the following scalar potential
\begin{align}
\label{eq:scalar_potential}
V(\Phi_1,\Phi_2) &= m_{11}^2 \Phi_1^\dagger \Phi_1+ m_{22}^2 \Phi_2^\dagger \Phi_2 - m_{12}^2 \left(\Phi_1^\dagger \Phi_2 + {\rm h.c.} \right)
+\frac{\lambda_1}{2} \left( \Phi_1^\dagger \Phi_1 \right)^2
+\frac{\lambda_2}{2} \left( \Phi_2^\dagger \Phi_2 \right)^2 \nonumber\\ &
+\lambda_3\left(\Phi_1^\dagger \Phi_1 \right)\left(\Phi_2^\dagger \Phi_2 \right)
+\lambda_4\left(\Phi_1^\dagger \Phi_2 \right)\left(\Phi_2^\dagger \Phi_1 \right)
+\frac{\lambda_5}{2}\left[ \left(\Phi_1^\dagger \Phi_2 \right)^2 + {\rm h.c.} \right] \, .
\end{align}
We have assumed from the start the presence of a discrete
symmetry~\cite{Davidson:2005cw} which forbids the appearance of two additional
couplings $\lambda_{6}$ and $\lambda_{7}$. The electroweak symmetry is broken by the vevs $v_1$ and $v_2$ acquired by the fields $\Phi_1$ and $\Phi_2$, respectively. The vevs satisfy the relation $\sqrt{v_1^2+ v_2^2}=v \simeq 246$ GeV and their ratio defines the parameter $\tan\beta \equiv t_\beta=v_2/v_1$ which will play a most important role in the model. After electroweak symmetry
breaking, the two doublet fields can be decomposed as
\begin{equation}
\Phi_i=
\begin{pmatrix} \phi_i^+ \\ (v_i+\rho_i +i \eta_i)/\sqrt{2} \end{pmatrix}~,
\qquad
i=1,2,
\end{equation}
\noindent
and lead to five physical states: two CP--even states $h$ and $H$, a CP--odd state $A$ and two charged Higgs bosons $H^\pm$, which are defined through the transformations
\begin{equation}
\label{eq:rotation2}
\left(
\begin{array}{c} \phi_1^+ \\ \phi_2^+ \end{array} \right) = \Re_\beta
\left( \begin{array}{c} G^+ \\ H^+ \end{array} \right), \ \
\left( \begin{array}{c} \eta_1 \\ \eta_2 \end{array} \right)= \Re_\beta
\left( \begin{array}{c} G^0 \\ A \end{array} \right), \ \
\left( \begin{array}{c} \rho_1 \\ \rho_2 \end{array} \right)= \Re_\alpha
\left( \begin{array}{c} H \\ h \end{array} \right) \, ,
\end{equation}
with $\Re_X$ the rotation matrix of angle $X$ given in eq.~(\ref{eq:rotation})
and $G^0,G^+$ the Goldstone bosons that become the longitudinal degrees of
freedom of the SM gauge bosons. The angle $\alpha$ describes the mixing between
the two CP--even states $h$ and $H$, the former being again conventionally identified with the 125 GeV Higgs boson observed at the LHC, while the $H$ boson will be considered to be heavier in our context (although there is still a tiny possibility that a scalar boson lighter than 125 GeV is present in the spectrum~\cite{Cacciapaglia:2016tlr}).
The quartic couplings of the scalar potential eq.~(\ref{eq:scalar_potential}) can be expressed as functions of the masses of the physical states and, introducing $M^2 \equiv m_{12}^2/(\sin {\beta} \cos {\beta})$, they read
\begin{align}
\label{eq:quartic_physical}
\lambda_1 &= \frac{1}{v^2} \left[- M^2 \tan^2 \beta +\frac{\sin^2 \alpha}{\cos^2 \beta} M_h^2 +\frac{\cos^2 \alpha}{\cos^2\beta}M_H^2 \right], \nonumber \\
\lambda_2 &= \frac{1}{v^2} \left[ -\frac{M^2 }{\tan^2 \beta}+\frac{\cos^2 \alpha}{\sin^2 \beta}M_h^2+\frac{\sin^2 \alpha}{\sin^2 \beta}M_H^2 \right], \nonumber \\
\lambda_3 &= \frac{1}{v^2} \left[-M^2 +2 M_{H^{\pm}}^2 +\frac{\sin 2\alpha}{\sin 2\beta}( M_H^2-M_h^2) \right], \nonumber \\
\lambda_4 &= \frac{1}{v^2} \left[ M^2 + M_A^2 - 2 M_{H^{\pm}}^2 \right], \nonumber \\
\lambda_5 & = \frac{1}{v^2} \left[ M^2 - M_A^2 \right].
\end{align}
They should comply with a series of constraints which, with the help of eq.~(\ref{eq:quartic_physical}), translate into bounds on the masses $M_A,M_H,M_{H^{\pm}}$ as functions of the angles $\alpha$ and $\beta$. The most
relevant bounds are, as in the singlet Higgs case discussed before,
as follows~\cite{Kanemura:2004mg,Becirevic:2015fmu}:
\begin{itemize}
\item the scalar potential should be bounded from below:
\begin{equation}
\label{eq:up1}
\lambda_{1,2} > 0, \; \lambda_3 > -\sqrt{\lambda_1\lambda_2} \; \; {\rm and} \; \lambda_3 + \lambda_4 - \left|\lambda_5\right| > -\sqrt{\lambda_1\lambda_2} \; ;
\end{equation}
\item $s$--wave tree--level unitarity should be satisfied:
\begin{equation}
\label{eq:up2}
\left| a_{\pm} \right|, \left| b_{\pm} \right|, \left| c_{\pm} \right|,
\left| d_\pm \right| , \left| e_\pm \right| ,
\left| f_{\pm} \right| < 8\pi,
\end{equation}
where:\vspace*{-1cm}
\begin{align}
a_{\pm} &= \frac{3}{2}(\lambda_1 + \lambda_2) \pm \sqrt{\frac{9}{4}(\lambda_1-\lambda_2)^2 + (2\lambda_3 + \lambda_4)^2}, \notag \\
b_{\pm} &= \frac{1}{2}(\lambda_1 + \lambda_2) \pm \sqrt{(\lambda_1-\lambda_2)^2 + 4\lambda_4^2}, \notag \\
c_{\pm} &= \frac{1}{2}(\lambda_1 + \lambda_2) \pm \sqrt{(\lambda_1-\lambda_2)^2 + 4\lambda_5^2}, \notag \\
d_{\pm} &= \lambda_3 + 2\lambda_4 \mp 3\lambda_5, \ e_\pm = \lambda_3 \mp \lambda_5, \ f_\pm = \lambda_3 \pm \lambda_4 \; ;
\end{align}
\item the vacuum configuration $(v_1,v_2)$ should correspond to the global minimum of the scalar potential~\cite{Barroso:2013awa}:
\begin{equation}
\label{eq:vacuum_2HDM}
m_{12}^2 \left(m_{11}^2-m_{22}^2 \sqrt{\lambda_1/\lambda_2}\right)\left(\tan\beta-\sqrt[4]{\lambda_1/\lambda_2}\right)>0 \; ;
\end{equation}
\noindent
\item the electroweak vacuum should remain stable:
\begin{align}
& m_{11}^2+\frac{\lambda_1 v^2 \cos^2\beta}{2}+\frac{\lambda_3 v^2 \sin^2\beta}{2}=\tan\beta \left[m_{12}^2-(\lambda_4+\lambda_5)\frac{v^2 \sin 2\beta}{4}\right] , \nonumber\\
& m_{22}^2+\frac{\lambda_2 v^2 \sin^2\beta}{2}+\frac{\lambda_3 v^2 \cos^2\beta}{2}=\frac{1}{\tan\beta} \left[m_{12}^2-(\lambda_4+\lambda_5)\frac{v^2 \sin2\beta}{4}\right] .
\end{align}
\end{itemize}
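As an aside, and purely for illustration (this numerical sketch is ours and not part of the original analysis; all input values below are arbitrary), the relations of eq.~(\ref{eq:quartic_physical}) and the first two conditions above can be implemented in a few lines of code:
\begin{verbatim}
import math

V = 246.0  # electroweak vev in GeV

def quartic_couplings(Mh, MH, MA, MHpm, alpha, beta, m12sq):
    """lambda_1..lambda_5 from the physical masses and mixing angles,
    eq. (quartic_physical), with M^2 = m12^2/(sin beta cos beta)."""
    M2 = m12sq / (math.sin(beta) * math.cos(beta))
    sa, ca = math.sin(alpha), math.cos(alpha)
    sb, cb = math.sin(beta), math.cos(beta)
    tb = sb / cb
    l1 = (-M2 * tb**2 + (sa**2 * Mh**2 + ca**2 * MH**2) / cb**2) / V**2
    l2 = (-M2 / tb**2 + (ca**2 * Mh**2 + sa**2 * MH**2) / sb**2) / V**2
    l3 = (-M2 + 2.0 * MHpm**2
          + math.sin(2 * alpha) / math.sin(2 * beta) * (MH**2 - Mh**2)) / V**2
    l4 = (M2 + MA**2 - 2.0 * MHpm**2) / V**2
    l5 = (M2 - MA**2) / V**2
    return l1, l2, l3, l4, l5

def bounded_from_below(l1, l2, l3, l4, l5):
    """Boundedness conditions of eq. (up1)."""
    if l1 <= 0 or l2 <= 0:
        return False
    root = math.sqrt(l1 * l2)
    return l3 > -root and l3 + l4 - abs(l5) > -root

def perturbative_unitarity(l1, l2, l3, l4, l5):
    """s-wave unitarity conditions of eq. (up2)."""
    s, d = 0.5 * (l1 + l2), l1 - l2
    eigen = [1.5 * (l1 + l2) + math.sqrt(2.25 * d**2 + (2*l3 + l4)**2),
             1.5 * (l1 + l2) - math.sqrt(2.25 * d**2 + (2*l3 + l4)**2),
             s + math.sqrt(d**2 + 4 * l4**2), s - math.sqrt(d**2 + 4 * l4**2),
             s + math.sqrt(d**2 + 4 * l5**2), s - math.sqrt(d**2 + 4 * l5**2),
             l3 + 2*l4 - 3*l5, l3 + 2*l4 + 3*l5,
             l3 - l5, l3 + l5, l3 + l4, l3 - l4]
    return all(abs(x) < 8.0 * math.pi for x in eigen)

# Illustrative point close to the alignment limit (hypothetical numbers)
beta = math.atan(3.0)                                 # tan(beta) = 3
alpha = beta - math.pi / 2                            # alignment limit
m12sq = 600.0**2 * math.sin(beta) * math.cos(beta)    # i.e. M = M_A = 600 GeV
lams = quartic_couplings(125.0, 600.0, 600.0, 600.0, alpha, beta, m12sq)
print(bounded_from_below(*lams), perturbative_unitarity(*lams))
\end{verbatim}
Such a routine, applied point by point, is all that is needed to impose the boundedness and unitarity requirements in the parameter scans discussed below; the two vacuum conditions require in addition the potential parameters $m_{11}^2$ and $m_{22}^2$.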
For fixed Higgs masses and mixing angles, the mass parameter $m_{12}$ enters only the self--couplings among the Higgs bosons, for instance the normalized trilinear couplings
\begin{eqnarray}
\lambda_{\phi_i \phi_j \phi_k}= g^\text{2HDM}_{\phi_i \phi_j \phi_k}/g^\text{SM}_{HHH} = f(\alpha, \beta, m_{12}) .
\end{eqnarray}
The mixing in the CP--even Higgs sector implies that the neutral $h$ and $H$ bosons share the coupling of the SM Higgs particle to the massive gauge bosons
$V=W,Z$
\begin{eqnarray}
g_{hVV}= g^\text{2HDM}_{hVV}/g^\text{SM}_{HVV}= \sin(\beta-\alpha) \ , \quad
g_{HVV}= g^\text{2HDM}_{HVV}/g^\text{SM}_{HVV}= \cos(\beta-\alpha) ,
\end{eqnarray}
while, by virtue of CP invariance, there is no coupling of the CP--odd $A$ state to vector bosons, $g_{AVV}=0$. There are also couplings between two Higgs bosons and a vector boson which, up to a normalization factor, are complementary to the ones given above. For instance, one has
\begin{eqnarray}
g_{hAZ} = g_{h H^\pm W}= \cos(\beta-\alpha) \ , \quad
g_{HAZ} = g_{H H^\pm W} = \sin(\beta-\alpha) .
\label{eq:cplg-HHV1}
\end{eqnarray}
Finally, there are additional bosonic couplings of the charged Higgs boson which are
simply
\begin{eqnarray}
g_{A H^\pm W}= 1 \, ,~~ g_{H^+ H^- \gamma} = -e\, , \ g_{H^+ H^- Z} = -e \cos2\theta_W/(\sin\theta_W \cos\theta_W) .
\label{eq:cplg-HHV2}
\end{eqnarray}
The couplings of the various Higgs bosons to the SM fermions are slightly more complicated and are described by the following Yukawa Lagrangian
\begin{align}
-{\cal L}_{\rm Yuk}^{\rm SM}& =\sum\limits_{f=u,d,l} \frac{m_f}{v} \left[g_{hff} \bar{f}f h +g_{Hff} \bar{f}f H-i g_{Aff} \bar{f} \gamma_5 f A \right] \notag \\
&- \frac{\sqrt{2}}{v} \left[ \bar{u} \left(m_u g_{Auu} P_L + m_d g_{Add} P_R \right)d H^+ + m_l g_{All} \bar \nu P_R \ell H^+ + \mathrm{h.c.} \right],
\end{align}
where $P_{L/R}=\frac12(1\mp \gamma_5)$ and $g_{\phi ff}$ are the reduced couplings of the $\phi$ boson to up-- and down--type quarks and charged leptons normalized to the SM couplings, $g_{\phi ff}=g^\text{2HDM}_{\phi ff}/g^\text{SM}_{H ff}$.
\begin{table}[h!]
\renewcommand{\arraystretch}{1.55}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
~~~~~~ & ~~Type I~~ & ~~Type II~ & Lepton-specific & Flipped \\
\hline\hline
$g_{huu}$ & $ \frac{\cos \alpha} { \sin \beta} \rightarrow 1$ & $\frac{ \cos \alpha} {\sin \beta} \rightarrow 1$ & $\frac{ \cos \alpha} {\sin\beta} \rightarrow 1$ & $ \frac{ \cos \alpha}{ \sin\beta}\rightarrow 1$ \\ \hline
$g_{hdd}$ & $\frac{\cos \alpha} {\sin \beta} \rightarrow 1$ & $-\frac{ \sin \alpha} {\cos \beta} \rightarrow 1$ & $\frac{\cos \alpha}{ \sin \beta} \rightarrow 1$ & $-\frac{ \sin \alpha}{ \cos \beta} \rightarrow 1$ \\ \hline
$g_{hll} $ & $\frac{\cos \alpha} {\sin \beta} \rightarrow 1$ & $-\frac{\sin \alpha} {\cos \beta} \rightarrow 1$ & $- \frac{ \sin \alpha} {\cos \beta} \rightarrow 1$ & $\frac{ \cos \alpha} {\sin \beta} \rightarrow 1$ \\ \hline\hline
$g_{Huu}$ & $\frac{\sin \alpha} {\sin \beta} \rightarrow -\frac{1}{\tan\beta}$ & $\frac{ \sin \alpha} {\sin \beta} \rightarrow -\frac{1}{\tan\beta}$ & $ \frac{\sin \alpha}{\sin \beta} \rightarrow -\frac{1}{\tan\beta}$ & $\frac{ \sin \alpha}{ \sin \beta} \rightarrow -\frac{1}{\tan\beta}$ \\ \hline
$g_{Hdd}$ & $ \frac{ \sin \alpha}{\sin \beta} \rightarrow -\frac{1}{\tan\beta}$ & $\frac{\cos \alpha}{\cos \beta} \rightarrow {\tan\beta}$ & $\frac{\sin \alpha} {\sin \beta} \rightarrow -\frac{1}{\tan\beta}$ & $\frac{ \cos \alpha} {\cos \beta} \rightarrow {\tan\beta}$ \\ \hline
$g_{Hll}$ & $\frac{ \sin \alpha} {\sin \beta} \rightarrow -\frac{1}{\tan\beta}$ & $\frac{\cos \alpha} {\cos \beta} \rightarrow {\tan\beta}$ & $\frac{ \cos \alpha} {\cos \beta} \rightarrow {\tan\beta}$ & $\frac{\sin \alpha} {\sin \beta} \rightarrow -\frac{1}{\tan\beta}$ \\
\hline\hline
$g_{Auu}$ & $\frac{1}{\tan\beta}$ & $\frac{1}{\tan\beta}$ & $\frac{1}{\tan\beta}$ & $\frac{1}{\tan\beta}$
\\
\hline
$g_{Add}$ & $-\frac{1}{\tan\beta}$ & ${\tan\beta}$ & $-\frac{1}{\tan\beta}$ & ${\tan\beta}$
\\
\hline
$g_{All}$ & $-\frac{1}{\tan\beta}$ & ${\tan\beta}$ & ${\tan\beta}$ & $-\frac{1}{\tan\beta}$
\\
\hline
\end{tabular}
\caption{Couplings of the 2HDM Higgs bosons to fermions, normalized to those of the SM--like Higgs boson, as a function of the angles $\alpha$ and $\beta$ and, in the case of the CP--even Higgs states, their values in the alignment limit $\beta \!-\! \alpha \rightarrow \frac{\pi}{2}$.}
\label{table:2hdm_type}
\end{center}
\vspace*{-6mm}
\end{table}
In a 2HDM in which the absence of the experimentally unobserved
flavour--changing neutral currents (FCNCs) at tree level is enforced, two options are in
general discussed for the interactions of the Higgs states with fermions
\cite{Branco:2011iw,Glashow:1976nt}: in the so--called Type II model, the field
$\Phi_1$ generates the masses of isospin down--type fermions and $\Phi_2$ the
masses of up--type quarks. In turn, in Type I models, the field $\Phi_2$ couples
to both isospin up-- and down--type fermions. Here, we will consider besides
these two types of models, two additional options in which the charged leptons
have a different behaviour compared to down--type quarks, namely the lepton
specific model in which the Higgs couplings to quarks are as in Type I but those
to leptons are as in Type II, and the flipped model in which the previous
situation occurs but with Type I and Type II couplings reversed. The values of
the fermion couplings for these four flavour--conserving types of 2HDMs are
listed in Table~\ref{table:2hdm_type}.
Let us now summarize the constraints on this model, besides the theoretical ones
on the scalar potential mentioned above. First, as discussed in section 2,
fits of the Higgs signal strengths favor SM--like couplings for the 125 GeV
state $h$ observed at the LHC and this implies strong constraints on the angles
$\alpha$ and $\beta$. In particular, one should have SM--like couplings of $h$
to the $W$ and $Z$ bosons so that $\kappa_V^2 \equiv \sin^2 (\beta-\alpha)$
is close to unity. We show in Fig.~\ref{fig:higgs_constr} the regions in the $[\cos(\beta-\alpha),\tan\beta]$ plane that are allowed by the combined constraints on the Higgs signal strengths into gauge bosons, $\mu_{\gamma \gamma}, \mu_{WW}, \mu_{ZZ}$ and into bottom quark and tau lepton pairs $\mu_{bb}, \mu_{\tau \tau}$, for the four specific 2HDM realizations.
As it should be clear from the figure, the Type I model allows for a
$\cos(\beta-\alpha)$ value significantly different from zero for $\tan\beta
>1$. In the other three models $\cos(\beta-\alpha)$ is, instead, forced to be
close to zero with the exception of narrow ``arms'' corresponding to the
so--called ``wrong-sign'' Yukawa
regime~\cite{Ferreira:2014naa,Fontes:2014tga,Ferreira:2014sld}, i.e.\ the case in
which the couplings of the state $h$ to the down--type quarks and/or leptons
are equal in absolute value but opposite in sign with respect to those of a
SM--like Higgs boson.
All constraints from the SM--like $h$ signal strengths can be simultaneously
satisfied in the so--called alignment limit, $\beta-\alpha = \frac{\pi} {2}$
\cite{Pich:2009sp,Craig:2013hca,Carena:2013ooa,Bernon:2014nxa}. In this case,
the couplings of the CP--even $h$ and $H$ states to gauge bosons are such that
$g_{hVV}=1$ and $g_{HVV}=0$ and, hence, there is no coupling of $H$ to the
$W$ and $Z$ bosons, as is automatically the case for the $A$ state when CP
conservation in the scalar sector is assumed. The Higgs couplings to fermions in
this alignment limit are also listed in Table~\ref{table:2hdm_type}. As can be
seen, the couplings of the $h$ state are also SM--like,
$g_{huu}=g_{hdd}=g_{hll} \rightarrow 1$, while the couplings of the CP--even $H$ state
reduce, up to signs, to those of the pseudoscalar $A$ boson. In particular, besides the fact
that there is no $H$ coupling to vector bosons, $g_{HVV} \rightarrow g_{AVV} =0$, the
couplings to up--type fermions are $g_{Huu} = -\cot \beta$ while those to
down--type fermions are $g_{Hdd} = -\cot \beta$ in Type I and $g_{Hdd} =
\tan \beta$ in Type II models, for instance.
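These limiting values follow directly from setting $\alpha=\beta-\frac{\pi}{2}$ in the entries of Table~\ref{table:2hdm_type}, since then
\begin{equation}
\frac{\cos\alpha}{\sin\beta} = 1 \, , \quad
-\frac{\sin\alpha}{\cos\beta} = 1 \, , \quad
\frac{\sin\alpha}{\sin\beta} = -\frac{1}{\tan\beta}\, , \quad
\frac{\cos\alpha}{\cos\beta} = \tan\beta \, ,
\end{equation}
which reproduces the arrows quoted in the table for all four types of models.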
As for the couplings between two Higgs bosons and one gauge boson, all those
involving the $h$ state such as $g_{hAZ}$ and $g_{hH^\pm W^\mp}$ tend to zero
in the limit $\beta-\alpha=\frac\pi2$, while those involving the $H$
boson, such as $g_{HAZ}$ and $g_{HH^\pm W^\mp}$, tend to unity. Finally, the two most important triple couplings among the CP--even Higgs bosons simplify to
\begin{align}
\lambda_{hhh} & = 1 \, , \ \lambda_{Hhh} = 0 \, ,
\end{align}
meaning again that the triple $h$ coupling is SM--like, while there is no $Hhh$
coupling at the tree--level. The other triple couplings, which will depend on the additional parameter $m_{12}$, can be ignored as they do not affect our discussion here.
\begin{figure}[!ht]
\begin{center}
\subfloat{\includegraphics[width=0.43\linewidth]{figs-2HDM/pt1.pdf}}~~~~
\subfloat{\includegraphics[width=0.43\linewidth]{figs-2HDM/pt2.pdf}}\\
\subfloat{\includegraphics[width=0.43\linewidth]{figs-2HDM/ptF.pdf}}~~~~
\subfloat{\includegraphics[width=0.43\linewidth]{figs-2HDM/ptL.pdf}}
\end{center}
\vspace*{-5mm}
\caption{Allowed regions from the $h$ signal strengths measured at the LHC
in the $[\cos({\beta-\alpha}), \tan\beta]$ plane for the four types of 2HDMs that do not induce FCNCs at tree--level.}
\label{fig:higgs_constr}
\vspace*{-3mm}
\end{figure}
The masses of the extra Higgs bosons are constrained also by the electroweak
precision observables and we have calculated the contribution of the extended
Higgs sector to the $S,T,U$ parameters discussed in subsection 3.2. Using the
three masses $M_H,M_A,M_{H^\pm}$ as well as the two angles $\alpha,\beta$ as
input parameters and the formalism and functions provided for example
in~Ref.~\cite{Branco:2011iw} for the various contributions to the $S,T,U$
parameters, we have determined the excluded regions of the models via the same
$\chi^2$ fit discussed before with the data and the covariance matrix given in
eqs.~(\ref{eq:chi2})--(\ref{eq:covariance}). As expected, the most important
corrections occur in the $T$ or, equivalently, the $\Delta \rho$ parameter and, hence, set strong
limits on the mass splitting between at least two of the $H,A,H^{\pm}$ states.
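Schematically, and with placeholder inputs only (the actual central values and covariance matrix are those of eqs.~(\ref{eq:chi2})--(\ref{eq:covariance}), which we do not reproduce here), such a $\chi^2$ test takes the form:
\begin{verbatim}
import numpy as np

# Generic chi^2 test for the oblique parameters; the central values and the
# covariance matrix below are placeholders, to be replaced by the actual
# inputs of eqs. (chi2)-(covariance).
STU_DATA = np.array([0.0, 0.0, 0.0])            # hypothetical central values
COV      = np.diag([0.10, 0.12, 0.09]) ** 2     # hypothetical, uncorrelated

def chi2_stu(stu_model):
    """chi^2 of a model prediction (S, T, U) against the reference data."""
    d = np.asarray(stu_model, dtype=float) - STU_DATA
    return float(d @ np.linalg.inv(COV) @ d)

def passes_ewpo_fit(stu_model, chi2_max=7.81):
    """Accept the point if chi^2 is below the 95% C.L. value for 3 d.o.f."""
    return chi2_stu(stu_model) < chi2_max
\end{verbatim}
with the model values of $S,T,U$ computed from the formalism of Ref.~\cite{Branco:2011iw}.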
As already pointed out, once the Higgs sector is coupled to the fermionic DM,
additional contributions to the $S,T,U$ parameters are generated and
consequently, one should combine in eq.~(\ref{eq:chi2}) the contributions of
both the extended scalar and fermionic sectors. We will re--discuss in more
detail the bounds from electroweak precision data when we introduce the
different DM models.
Finally, one has to take into account constraints from flavor physics. While the
four considered models, namely the Type I, Type II, lepton specific and flipped
2HDMs, are free from tree--level FCNCs by construction, these are nevertheless
induced at the loop level. The strongest constraints come from processes
described at the fundamental level by $b \rightarrow s$ transitions whose rates are
mostly sensitive to the parameters $M_{H^{\pm}}$ and $\tan \beta$. The Type II
and the flipped models are the most affected ones and a lower bound associated
with the $B\rightarrow X_s \gamma$ process~\cite{Amhis:2016xyh} leads to
$M_{H^{\pm}}> 570\,\mbox{GeV}$ irrespective of $\tan\beta$
\cite{Misiak:2017bgg}. Additional constraints also come from $B$--meson decay
processes such as $B_s \rightarrow \mu^+ \mu^-$ and $B \rightarrow K \mu^+
\mu^-$~\cite{Arnan:2017lxi}. A comprehensive discussion of flavor constraints
on 2HDMs has been presented e.g. in Ref.~\cite{Enomoto:2015wbn} and we will use
the summary results given there in our analysis.
Following Ref.~\cite{Arcadi:2018pfo}, we have performed a scan of the 2HDMs
over the parameter ranges,
\begin{align}
& \tan\beta \in [1,50]\, , \qquad \alpha \in \left[ -\frac{\pi}{2},+\frac{\pi}{2} \right] , \nonumber \\
& M_H \in [M_h, 1\,{\rm TeV}]\, , \qquad M_A \in [20\,{\rm GeV}, 1\,{\rm TeV}]\, , \qquad M_{H^\pm} \in [80\,{\rm GeV}, 1\,{\rm TeV}] \, ,
\end{align}
where, in a first stage, the alignment limit is not assumed for the angle
$\alpha$, and where the Higgs masses are taken to be such that $M_H> M_h$ and
$M_{H^\pm} > M_W$ (the latter from LEP2 searches). As already shown in the previous
section, the scenario of a light pseudoscalar mediator is very interesting for
what concerns DM phenomenology and we have consequently left the option of an
$A$ state as light as 20 GeV open (as will be shown later, the possibility of a
light pseudoscalar coupled with the SM Higgs is strongly constrained by collider
searches, hence the choice of a lower limit of 20 GeV is simply made for
numerical convenience). In order to highlight the impact on the 2HDM parameter
space of deviations from the alignment limit, the scans have been repeated while
imposing the relation $\beta-\alpha={\pi}/{2}$.
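The logical structure of such a scan can be summarized by the following short sketch (again ours and purely illustrative; it re--uses the helper functions of the previous sketch, the two \texttt{passes} functions are crude stand--ins for the full electroweak and signal--strength fits described above, and the choice $M=M_A$ for the $m_{12}^2$ parameter is made only to have a definite input):
\begin{verbatim}
import math, random

def passes_ewpo(MH, MA, MHpm):
    # crude proxy for the S,T,U fit: limit the A - H^+- mass splitting
    return abs(MA - MHpm) < 200.0

def passes_signal_strengths(alpha, beta):
    # crude proxy for the h signal-strength fit: |cos(beta - alpha)| small
    return abs(math.cos(beta - alpha)) < 0.3

def scan(n_points=100000, align=False):
    accepted = []
    for _ in range(n_points):
        tb   = random.uniform(1.0, 50.0)
        beta = math.atan(tb)
        if align:
            alpha = beta - math.pi / 2
        else:
            alpha = random.uniform(-math.pi / 2, math.pi / 2)
        MH   = random.uniform(125.0, 1000.0)
        MA   = random.uniform(20.0, 1000.0)
        MHpm = random.uniform(80.0, 1000.0)
        m12sq = MA**2 * math.sin(beta) * math.cos(beta)   # illustrative: M = M_A
        lams = quartic_couplings(125.0, MH, MA, MHpm, alpha, beta, m12sq)
        if (bounded_from_below(*lams) and perturbative_unitarity(*lams)
                and passes_ewpo(MH, MA, MHpm)
                and passes_signal_strengths(alpha, beta)):
            accepted.append((MHpm, MA, tb))
    return accepted
\end{verbatim}
The accepted points are then confronted with the flavor constraints, which appear as the green areas in the figures below.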
\begin{figure}[!h]
\vspace*{-4mm}
\begin{center}
\subfloat{\includegraphics[width=0.42\linewidth]{figs-2HDM/psplitTyI.pdf}}~~~
\subfloat{\includegraphics[width=0.42\linewidth]{figs-2HDM/psplitTyII.pdf}}
\end{center}
\vspace*{-5mm}
\caption{Model points in the $[M_{H^{\pm}},M_A]$ plane allowed by constraints on the quartic couplings, electroweak precision data and the $h$ boson signal strengths. The red points have been generated by taking $\beta$ and $\alpha$ as free parameters and the blue ones assuming $\beta-\alpha={\pi}/{2}$. The green regions are excluded by limits from flavor processes.}
\label{fig:flavor_2HDM0}
\vspace*{-3mm}
\end{figure}
The results of our study are presented in Figs.~\ref{fig:flavor_2HDM0} and
\ref{fig:flavor_2HDM} in, respectively, the $[M_{H^{\pm}}, M_A]$ and
$[M_{H^{\pm}},\tan\beta]$ planes. The figures show the model points, i.e. the
assignments of $(M_H,M_A,M_{H^{\pm}},\alpha,\beta)$, which satisfy the
theoretical constraints on the quartic couplings (i.e. a potential bounded from
below and with a proper global minimum and $s$--wave tree--level unitarity) as
well as those from the electroweak precision observables and the observed Higgs
signal strengths. We have distinguished using different colors, namely red and
blue, the model points for which free assignments of $\alpha,\beta$ are made
from the ones for which the alignment limit has been imposed. The green
areas are those excluded by the combined constraints from flavor physics as
given in Ref.~\cite{Enomoto:2015wbn}.
As can be seen from Fig.~\ref{fig:flavor_2HDM0}, the Type I model allows,
compared to the three other models, larger mass splittings between the
$H^{\pm}$ and $A$ states (an analogous feature would also have been observed
in the $[M_H,M_A]$ and/or $[M_H,M_{H^{\pm}}]$ planes). This is a consequence of
the less severe constraints on the $\beta-\alpha$ difference. Indeed, the larger
freedom in the choice of $\alpha$ and $\beta$ translates through
eq.~(\ref{eq:quartic_physical}) into a larger freedom in the assignment of
$M_H,M_A,M_{H^{\pm}}$. On the contrary, in scenarios in which $\alpha$ and
$\beta$ lie close to the alignment limit, the mass degeneracy between the extra
Higgs states will be favored. Fig.~\ref{fig:flavor_2HDM0} shows only the
results for the Type I and Type II models since the outcome for the lepton
specific and flipped 2HDM scenarios is identical to the Type II case, with the
exception that the green region would be absent for the lepton specific
model.
\begin{figure}[!h]
\vspace*{-3mm}
\begin{center}
{\includegraphics[width=0.43\linewidth]{figs-2HDM/pmHtbTypI.pdf}}~~~
{\includegraphics[width=0.43\linewidth]{figs-2HDM/pmHtbTypII.pdf}}\\[2mm]
{\includegraphics[width=0.43\linewidth]{figs-2HDM/pmHtbls.pdf}}~~~
{\includegraphics[width=0.43\linewidth]{figs-2HDM/pmHtbfl.pdf}}
\end{center}
\vspace*{-5mm}
\caption{The same scan over the model points as considered in Fig.~\ref{fig:flavor_2HDM0} but shown in the $[M_{H^{\pm}},\tan\beta]$ plane; the same color code is used.}
\label{fig:flavor_2HDM}
\vspace*{-3mm}
\end{figure}
Fig.~\ref{fig:flavor_2HDM} is instead intended to highlight the effects of
flavor constraints. As one can see, the scenarios in which the couplings of the
extra Higgs states are enhanced by $\tan\beta$, i.e. the Type II and the flipped
scenarios, are extremely constrained, with values $M_{H^{\pm}} \leq
570\,\mbox{GeV}$ already ruled out. In the Type II model, a further stronger
exclusion limit at high $\tan\beta$ comes from the $B_s \rightarrow \mu^+ \mu^-$
process. As shown in the right panel of Fig.~\ref{fig:flavor_2HDM0}, this
constraint also impacts the masses of the other Higgs states, as they are
related through eq.~(\ref{eq:quartic_physical}) and expected to be close to the
mass of the charged Higgs boson. In turn, the Type I and lepton specific models are
almost free from the flavor physics constraints, except possibly in small regions of the parameter space with relatively low values of $\tan\beta$ and
$M_{H^{\pm}}$.
\subsection{The 2HDM and the Dark Matter sector}
We now consider the Dark Matter sector in the context of two Higgs doublet
models and discuss first two extensions that incorporate a fermionic DM
candidate which are, in fact, simply generalizations of the scenarios already
discussed in section 3: the singlet--doublet model and a full family of
vector--like fermions. The inert doublet model and a scenario with an additional pseudoscalar field will then be analyzed.
\subsubsection{The singlet--doublet fermion extension}
The singlet--doublet model, introduced in the context of the SM Higgs sector in section 3, can be straightforwardly extended to the case of two doublet Higgs fields \cite{Berlin:2015wwa,Arcadi:2018pfo}. It can be described by the following Lagrangian
\begin{equation}
\mathcal{L}=-\frac{1}{2}M_N N^{'\,2}-M_L L_L L_R -y_1 L_L \Phi_a N^{'}-y_2 L_R \widetilde{\Phi}_b N^{'}+\mbox{h.c.},
\end{equation}
with $a,b=1,2$. As will be made clear later, it is appropriate not to
assume arbitrary couplings of the new fermions with both the $\Phi_1$ and $\Phi_2$ doublets. The physical mass eigenstates are obtained by diagonalizing a mass matrix analogous to eq.~(\ref{eq:SD_mass_matrix}) but with $v$ appropriately replaced by $v_{a,b}$. In the physical basis for both the fermionic and the scalar sector, the relevant interaction Lagrangian for the fermionic states reads
\begin{align}
\mathcal{L} &=\overline{E^-} \gamma^\mu \left(g^V_{W^{\mp}E^{\pm}N_i}-g^A_{W^{\mp}E^{\pm}N_i}\gamma_5\right)N_i W_\mu^{-}+\mbox{h.c.}
+\frac{1}{2}\sum_{i,j=1}^3 \overline{N_i}\gamma^\mu \left(g_{Z N_i N_j}^V-g_{Z N_i N_j}^A \gamma_5\right) N_j Z_\mu \nonumber\\
& +\frac{1}{2}\sum_{i,j=1}^{3}\overline{N_i}\left(y_{h N_i N_j}h+y_{H N_i N_j}H+y_{A N_i N_j}\gamma_5 A\right)N_j +\overline{E^-} \left(g^S_{H^{\pm}EN_i}-g^P_{H^{\pm}EN_i}\gamma_5\right)N_i H^{-}+\mbox{h.c.}\nonumber\\
& -e A_\mu \overline{E^{-}}\gamma^\mu E^{-}-\frac{g}{2 c_W}(1-2 s^2_W) Z_\mu \overline{E^{-}}\gamma^\mu E^{-}+\mbox{h.c.} ,
\end{align}
\noindent
where the couplings in the case of $\phi=h,H,A$ and $H^\pm$ are given by
\begin{align}
\label{eq:SD2HDM_couplings}
& y_{ \phi N_i N_j}=\frac{\delta_\phi}{2\sqrt{2}}\left[U_{i1}\left(y_1 R_a^\phi U_{i2}+y_2 R_b^\phi U_{i3}\right)+(i \leftrightarrow j)\right] , \nonumber\\
& g^{S/P}_{H^{\pm}EN_i}=\frac{1}{2}U_{i1}\left(y_1 R_1^{H^{\pm}} \pm y_2 R_2^ {H^{\pm}}\right) ,
\end{align}
with $\delta_h=\delta_H=-1$ and $\delta_A=-i$ and we have considered the following decomposition of the $\Phi_{1}$ and $\Phi_{2}$ doublets in terms of the physical $h,H,A,H^{\pm}$ Higgs states:
\begin{equation}
\Phi_{1,2}=\frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
\sqrt{2}R_{1,2}^+ H^+ \\
v_{1,2}+R_{1,2}^h h+ R_{1,2}^H H+i R_{1,2}^A A \, ,
\end{array}
\right)
\end{equation}
with the parameters $R_{1,2}$ being the elements of the rotation matrices $\mathcal{R}_{\alpha,\beta}$ defined in eqs.~(\ref{eq:rotation}) and (\ref{eq:rotation2}).
From a bottom--up perspective, there are four possible configurations for the
assignments of the couplings of the new fermions to the doublets $\Phi_1$ and
$\Phi_2$. We will nevertheless focus here simply on two of the cases which arise
once one extends to the DM sector the extra symmetries which define the four
flavor conserving 2HDMs (see next section for a more detailed account). The
two configurations correspond to the cases in which the new fermions couple
exclusively either with the $\Phi_1$ or with the $\Phi_2$ doublet.
In order to have a better insight into the DM phenomenology, it is useful to write the analytical expressions for the DM--Higgs couplings $y_{\phi N_1 N_1}$ in these two scenarios, $i=1,2$, as given for instance in Ref.~\cite{Berlin:2015wwa}:
\begin{align}
\label{eq:coupling_uu}
& y_{h N_1 N_1}= y^2 v a_i^h \, (m_{N_1}+M_L \sin 2 \theta)/D_i
\, , \nonumber\\
& y_{H N_1 N_1}=y^2 v a_i^H \, (m_{N_1}+M_L \sin 2 \theta)/D_i
\, , \nonumber\\
& y_{A N_1 N_1}=y^2 v a_i^A \, m_{N_1} \cos 2 \theta/ D_i \, ,
\end{align}
where we have used the abbreviations
\begin{eqnarray}
\label{eq:coupling_dd}
D_i= 2 M_L^2+ 4 M_N m_{N_1}-6 m_{N_1}^2+y^2 v^2 a_i^h \, , \hspace*{3cm}
\nonumber \\
a_1^h= \cos^2 \beta , a_1^H= a_1^A = \cos \beta \sin\beta ~~{\rm and}~~
a_2^h= \sin^2 \beta , a_2^H= a_2^A = - \cos \beta \sin\beta \, .
\end{eqnarray}
In order to reduce the number of free parameters, we have assumed the alignment limit.
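A direct transcription of the couplings of eqs.~(\ref{eq:coupling_uu})--(\ref{eq:coupling_dd}), useful for quick numerical checks, is sketched below (Python; the mixing angle $\theta$ and the DM mass $m_{N_1}$ are taken as external inputs, and $v=246$ GeV is assumed):
\begin{verbatim}
import numpy as np

def dm_higgs_couplings(y, ML, MN, mN1, theta, beta, scenario=1, v=246.0):
    """y_{h,H,A N1 N1} for the two coupling scenarios i = 1, 2."""
    if scenario == 1:
        a_h, a_HA = np.cos(beta)**2,  np.cos(beta) * np.sin(beta)
    else:
        a_h, a_HA = np.sin(beta)**2, -np.cos(beta) * np.sin(beta)
    D   = 2*ML**2 + 4*MN*mN1 - 6*mN1**2 + y**2 * v**2 * a_h
    y_h = y**2 * v * a_h  * (mN1 + ML*np.sin(2*theta)) / D
    y_H = y**2 * v * a_HA * (mN1 + ML*np.sin(2*theta)) / D
    y_A = y**2 * v * a_HA * mN1 * np.cos(2*theta) / D
    return y_h, y_H, y_A
\end{verbatim}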
As already pointed out, in this singlet--doublet model, the impact of the new
fermionic sector is rather modest and the dominant constraints apply mainly to
the scalar sector of the theory and, hence, coincide with the ones discussed in
the previous subsection.
\subsubsection{The vector--like family extension}
Turning to the case in which the 2HDM is linked with an entire family of vector--like fermions, the most general coupling with the two Higgs doublets is described by the following Lagrangian where a sum over $i=1,2$ is implicit
\begin{align}
\label{2HDM_VL_Lag}
-{\cal L}_{\rm VLF} & = y_i^{U_R} \overline{{\cal D}_L}
\tilde{\Phi}_i U^\prime_R + y^{U_L}_i \overline{U^\prime_L} \tilde{\Phi}_i^\dagger {\cal D}_R
+y^{D_R}_i \overline{{\cal D}_L} \Phi_ i D^\prime_R + y^{D_L}_i \overline{D^\prime_L} \Phi_ i^\dagger {\cal D}_R
\notag \\
&+M_{\mathcal{D}} \overline{{\cal D}_L} {\cal D}_R
+ M_U \overline{U^\prime_L} U^\prime_R +M_D \overline{D^\prime_L} D^\prime_R + {\rm h.c.} \, .
\end{align}
The mass eigenstates are obtained through the same bi--diagonalization procedure illustrated in the previous section once one defines the Yukawa couplings in the Higgs mass eigenstate basis. Using the superscript $X = U_{L/R}$ or $D_{L/R}$, one has
\begin{align}
& \begin{pmatrix} y_h^X \\ y_H^X \end{pmatrix} =
\begin{pmatrix} \cos {\beta} & \sin {\beta} \\ \sin {\beta} & -\cos {\beta} \end{pmatrix} \begin{pmatrix} y_1^X \\ y_2^X \end{pmatrix} .
\label{eq:Higgs_basis}
\end{align}
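In code, this basis rotation amounts to a single $2\times 2$ matrix multiplication (Python sketch):
\begin{verbatim}
import numpy as np

def yukawas_mass_basis(y1, y2, beta):
    """Rotate (y_1, y_2) Yukawas into the (h, H) mass-eigenstate basis."""
    cb, sb = np.cos(beta), np.sin(beta)
    y_h = cb * y1 + sb * y2
    y_H = sb * y1 - cb * y2
    return y_h, y_H
\end{verbatim}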
As already discussed, the minimal embedding for a DM candidate consists of the addition of one family with hypercharge $Y=0$,
\begin{align}
\label{eq:lag_VLL_gen}
-{\cal L}_{\rm VLL}&= y_{N_R} \overline{L}_L \tilde{\Phi}_i N_R^\prime + y_{N_L} \overline{N}^\prime_L \tilde{\Phi}_i^\dagger L_R
+y_{E_R} \overline{L}_L \Phi_i E^\prime_R + y_{E_L} \overline{E}^\prime_L \Phi^\dagger_i L_R \nonumber\\
& +M_L \bar L_L L_R+M_N \bar{N^{'}}_L N^{'}_R+M_E \bar{E^{'}}_L E^{'}_R + \mathrm{h.c.}\, .
\end{align}
For our analysis, we will consider both the case of generic couplings of the vector--like leptons to the two Higgs doublets, eq.~(\ref{eq:lag_VLL_gen}), and the one in which the new leptons are charged under the same $\mathbb{Z}_2^{\rm 2HDM}$ symmetry which defines the Type I, Type II, lepton specific and flipped 2HDMs,
so that they couple selectively to the doublets $\Phi_{1}$ and $\Phi_{2}$. In the latter case, the interaction Lagrangian of the new leptons reduces to (for simplicity, from now on we omit the mass terms)
\begin{align}
-{\cal L}_{\rm VLL}&= y_{N_R} \overline{L}_L \tilde{\Phi_2} N_R^\prime + y_{N_L} \overline{N}^\prime_L \tilde{\Phi_2}^\dagger L_R
+y_{E_R} \overline{L}_L \Phi_i E^\prime_R + y_{E_L} \overline{E}^\prime_L \Phi^\dagger_i L_R + \mathrm{h.c.}\, .
\end{align}
As can be seen, the vector--like doublet and the singlet $N^{\prime}_{L,R}$, interpreted as ``up--type'' vector fermions, are coupled only to the $\Phi_2$ doublet. This leads to two possibilities for the couplings of the remaining new leptons:
$i)$ $E^{\prime}_{L,R}$ is also even under $\mathbb{Z}_2^{\rm 2HDM}$, meaning
that all vector leptons couple to $\Phi_2$ and
$ii)$ $E^{\prime}_{L,R}$ is odd under $\mathbb{Z}_2^{\rm 2HDM}$, which implies
that vector--like electrons couple to $\Phi_1$, while their partner neutrinos couple to $\Phi_2$.
In the following, these two setups will be referred to as ``model 1'' and
``model 2''. We note that the symmetry $\mathbb{Z}_2^{\rm 2HDM}$ is in general
distinct from $\mathbb{Z}_2^{\rm VLL}$ responsible for the stability of the DM
particle. Indeed, while all the vector leptons should have the same charge under the latter symmetry, they can have different charges under $\mathbb{Z}_2^{\rm
2HDM}$.
In the physical basis, the interactions of the vector--like neutrinos with the neutral Higgs bosons are the same for both ``model 1'' and ``model 2'' and read
\begin{align}
& -\sqrt{2} \mathcal{L}_{\phi NN} \!= \! \begin{pmatrix} N_L^{\dagger}\! & \! N_L^{\prime\dagger} \end{pmatrix} \begin{pmatrix} 0 \! & \!
y_{N_R} (s_{\beta} h \! - \! c_\beta H \! - \! i c_\beta A ) \\
y_{N_L} (s_{\beta} h \! - \! c_\beta H \! + \! i c_\beta A ) \! & \! 0 \end{pmatrix}
\begin{pmatrix} N_R \\ N_R^{\prime} \end{pmatrix} \! + \! \mathrm{h.c.} \, . \nonumber
\end{align}
In turn, in the case of vector--like electrons we have for ``model 1'' and for ``model 2''
\begin{align}
& -\sqrt{2} \mathcal{L}_{\phi EE}^{(1)} \!= \! \begin{pmatrix} E_L^{\dagger} \! & \! E_L^{\prime\dagger} \end{pmatrix} \begin{pmatrix} 0 \! & \! y_{E_R}
(c_{\beta} h \! + \! s_\beta H \! - \! is_\beta A) \\ y_{E_L} (c_{\beta} h \! + \! s_\beta H \! + \! i s_\beta A) \! & \! 0 \end{pmatrix} \begin{pmatrix} E_R \\ E_R^{\prime} \end{pmatrix} & + \mathrm{h.c.}, \nonumber
\end{align}
\begin{align}
& -\sqrt{2} \mathcal{L}_{\phi EE}^{(2)} \!= \! \begin{pmatrix} E_L^{\dagger} \! & \! E_L^{\prime\dagger} \end{pmatrix} \begin{pmatrix} 0 \! & \! y_{E_R} (s_{\beta} h \! - \! c_\beta H \! + \! i c_\beta A) \\ y_{E_L} (s_{\beta} h \!- \! c_\beta H \! - \! i c_\beta A) \! & \! 0 \end{pmatrix}
\begin{pmatrix} E_R \\ E_R^{\prime} \end{pmatrix} \! + \! \mathrm{h.c.} . \nonumber
\end{align}
Concerning the couplings with the charged Higgs boson we have instead
\begin{align}
&\mathcal{L}_{H^{\pm} NE}^{(1)}\! = \!H^+ \begin{pmatrix} N_L^{\dagger} \!&\! N_L^{\prime\dagger} \end{pmatrix} \begin{pmatrix} 0 \!&\! y_{E_R} s_{\beta} \\ y_{N_L} c_{\beta} \!&\! 0 \end{pmatrix} \begin{pmatrix} E_R \\ E_R^{\prime} \end{pmatrix} \!+\! H^- \begin{pmatrix} E_L^{\dagger} \!&\! E_L^{\prime\dagger} \end{pmatrix} \begin{pmatrix} 0 \!&\! y_{N_R} c_{\beta} \\ y_{E_L} s_{\beta} \!&\! 0 \end{pmatrix} \begin{pmatrix} N_R \\ N_R^{\prime} \end{pmatrix} \!+\! \mathrm{h.c.}, \nonumber\\
&\mathcal{L}_{H^{\pm} NE}^{(2)} \!= \!H^+ \begin{pmatrix} N_L^{\dagger}\! & \! N_L^{\prime\dagger} \end{pmatrix} \begin{pmatrix} 0 \! & \! \!-\!y_{E_R} c_{\beta} \\ y_{N_L} c_{\beta} \!& \! 0 \end{pmatrix} \begin{pmatrix} E_R \\ E_R^{\prime} \end{pmatrix}\!+\! H^- \begin{pmatrix} E_L^{\dagger} \! & \! E_L^{\prime\dagger} \end{pmatrix} \begin{pmatrix} 0 \! & \! y_{N_R} c_{\beta} \\ \!-\!y_{E_L} c_{\beta} \!& \! 0 \end{pmatrix} \begin{pmatrix} N_R \\ N_R^{\prime} \end{pmatrix}\!+\! \mathrm{h.c.}\nonumber
\end{align}
It is important to remark that, once flavor conserving configurations are adopted, the couplings of the vector--like leptons are sensitive to the value of $\tan\beta$. This would not be the case if each of them could arbitrarily couple to both Higgs doublets.
Analogously to the previous scenarios, renormalization group evolution strongly
constrains the size of the Yukawa couplings of the new fermions. As before, we
will keep the focus of the discussion on the quartic couplings of the scalar
potential, as they are the most sensitive to these effects. In the case of the
2HDM+VLF model, the system of equations to solve is particularly complicated as
it involves five quartic and multiple Yukawa couplings. Assuming for simplicity
the presence of a single family of vector fermions, the evolution equations for
the five quartic couplings $\lambda_{i=1,\dots,5}$ are given in Appendix C of
Ref.~\cite{Angelescu:2016mhl}. These equations should be solved in combination with those of the new Yukawa couplings, as well as the one of the top quark and those of the SM gauge couplings.
\begin{figure}
\begin{center}
\hspace*{-8mm}
\subfloat{\includegraphics[width=0.5\linewidth]{figs-2HDM/RGEwork.pdf}}
\subfloat{\includegraphics[width=0.52\linewidth]{figs-2HDM/RGEfail.pdf}}\\[3mm]
\hspace*{-8mm}
\subfloat{\includegraphics[width=0.52\linewidth]{figs-2HDM/RGEworkVLF.pdf}}
\subfloat{\includegraphics[width=0.5\linewidth]{figs-2HDM/RGEfailVLF.pdf}}
\end{center}
\caption{Examples of solutions of the renormalisation group equations for
the 2HDM quartic couplings $\lambda_{1\!-\!5}$ for $\tan\beta\!=\!1$, $M_H\! =\! M_A\! = \! M_{H^\pm} \! =\! 800$ GeV. The upper panels refer to extensions of the 2HDM with only vector--like leptons with $y_l\!=\! 0.5$ and $y_L\!=\!1$ (left panel) and $y_l\!=\!y_L\! =\! 2$ (right panel). The plots in the bottom panels refer to the case of the 2HDM coupled with a full sequential family of vector--like fermions. The two benchmarks have $y_l\!=\! 0.5$ and $y_L\!=\!1$ (left panel) and $y_l\!=\! 1.5$ and $y_L\!=\!1$ (right panel). See the main text for the definition of the $y_{l,L}$ couplings.}
\label{fig:RGEexamples}
\end{figure}
Examples of the evolution of the five quartic couplings with energy are shown in
Fig.~\ref{fig:RGEexamples}, distinguishing the $N_{\rm VLL}=1, N_{\rm VLQ}=0$ and
$N_{\rm VLL}=N_{\rm VLQ}=1$ cases, for the Higgs sector parameters $\tan\beta=1$ and
$M_H=M_A=M_{H^\pm}=800$ GeV. In the left top (bottom) panel, the initial values of
the Yukawa couplings, $y_h^{E_L}(=y_h^{B_L}=y_h^{T_L})=y_l=0.5$ and
$y_L=y_H^{E_L}=-y_H^{E_R}=-y_H^{N_L}=y_H^{N_R}(=y_H^{B_L}=-y_H^{B_R}=-y_H^{T_L}=y_H^{T_R})=1$,
are sufficiently small such that the conditions
eqs.~(\ref{eq:up1})--(\ref{eq:up2}) are satisfied up to energy scales of the order
of $10^6 (3 \times 10^4)\,\mbox{GeV}$. In the right top (bottom) panel, the large
Yukawas, $y_l=2\,(1.5), y_L=2\,(1)$, instead cause the couplings $\lambda_{1,2}$ to
become negative, hence violating the first of the conditions in eq.~(\ref{eq:up1}), in
the proximity of the energy thresholds corresponding to the masses of the VLF, and all
couplings $\lambda_{1\!-\!5}$ to become too large, possibly non perturbative, at
scales of the order of 10 TeV.
The size of the Yukawa couplings of the new fermions is, as already mentioned,
also constrained by the electroweak precision data. In the case of a 2HDM, an
assessment concerning the corresponding limits is complicated by the fact that
the masses of the new scalar bosons affect as well the electroweak data. We
show in Fig.~\ref{fig:EWPT_2HDM_l} an example of the allowed regions of the
parameter space in the case of the simultaneous presence of extra Higgs bosons
and vector fermions.
\begin{figure}
\begin{center}
\hspace*{-3mm}
\subfloat{\includegraphics[width=0.5\linewidth]{figs-2HDM/pSTUl10.pdf}}
\subfloat{\includegraphics[width=0.5\linewidth]{figs-2HDM/pSTUl11.pdf}}\\[3mm]
\hspace*{-3mm}
\subfloat{\includegraphics[width=0.5\linewidth]{figs-2HDM/pSTUh10.pdf}}
\subfloat{\includegraphics[width=0.5\linewidth]{figs-2HDM/pSTUh11.pdf}}
\end{center}
\caption{Regions (colored) allowed by the electroweak precision data in the $[M_H,M_{H^{\pm}}]$ plane for a vector--like fermionic content $N_{\rm VLL}\!=\! 1, N_{\rm VLQ}\! =\! 0$ (left panels) and $N_{\rm VLL}\! =\! N_{\rm VLQ}\!=\! 1$ (right panels). For the upper (lower) panels, we have taken: $M_A\!=\! 500\,(750)\,\mbox{GeV}$, $m_{N_1}\!=\!220\,(320)\,\mbox{GeV}$, $m_{E_1}\! =\! 250\,(375)\,\mbox{GeV}$ and $m_{Q_1}=1$ TeV. The blue, purple, orange and red regions represent the allowed parameter space for Yukawa couplings of, respectively, $y_h^{E_L} \! = \! y_h^{B_L} \! = \! y_h^{T_L}\! =\! 0.5,1,2,3$. The green points represent the configurations allowed by the theoretical constraints discussed in the text.
}
\label{fig:EWPT_2HDM_l}
\end{figure}
In the figure, these regions are represented as coloured strips in the
bidimensional plane $[M_H,M_{H^{\pm}}]$ for two values of the pseudoscalar Higgs
mass $M_A \!= \!500$ GeV (top) and 750 GeV (bottom). For these two mass values,
we have considered two configurations for the vector--like fermions, namely
$N_{\rm VLL}\!=\!1,N_{\rm VLQ}\!=\!0$ (left) and $N_{\rm VLL}\!=\! N_{\rm
VLQ}\!=\!1$ (right) and, for each of these, different assignments of the Yukawa
couplings $y_h^{E_L}\! = \! y_h^{B_L} \! =\! y_h^{T_L}$, ranging from 0.5 to 3,
while keeping the other parameters fixed. In particular, we have assumed very
suppressed values of $y_h^{N_L}$ in order to comply with the constraints from direct
DM searches to be discussed later. As can be seen from the figures, the most
favored configurations consist of vector--fermion Yukawa couplings below unity,
implying that the dominant contribution to the electroweak observables comes from
the scalar sector. Higher values of the Yukawa couplings, up to three, can
nevertheless be allowed by invoking cancellations between the fermionic and scalar
contributions. These cancellations occur in rather narrow strips of the
$[M_H,M_{H^{\pm}}]$ plane and, in particular, require that the
mass spectrum of the new scalars is not degenerate.
The regions allowed by the electroweak observables have been overlaid with the
outcome of a scan over the parameters of the scalar sector, including the constraints of
eqs.~(\ref{eq:up1})--(\ref{eq:up2}). As can be seen, one can achieve a mass
spectrum for the new Higgs bosons compatible with
eqs.~(\ref{eq:up1})--(\ref{eq:up2}) as well as with the electroweak data, up to $y_h^{E_L}
\approx 3$. As shown above, values $y_h^{E_L} \gtrsim 1$ are nevertheless
disfavored by the stability of the scalar potential under RG evolution.
\subsubsection{The inert Higgs doublet model}
In principle, the so--called inert Higgs doublet model
\cite{Deshpande:1977rw,LopezHonorez:2006gr,Barbieri:2006bg,Ma:2006km,Arhrib:2013ela} should have been
discussed in section 3, since it leads to a SM--like Higgs sector, but we
analyze it here as it can be described with a formalism that is very close to
the one of the 2HDM. Indeed, the scalar potential of the model involving the two
doublets $\Phi$ and $\Phi'$ is similar to the one given in
eq.~(\ref{eq:scalar_potential}):
\begin{equation}
V \! = \! \mu^2 |\Phi|^2\! +\! \mu'^2 |\Phi'|^2\! + \! \lambda_1 |\Phi|^4 \! \! \! +\! \lambda_2 |\Phi'|^4 \! +\! \lambda_3 |\Phi|^2 |\Phi'|^2 \! + \! \lambda_4 |\Phi^{\dagger}\Phi'|^2 \! + \! \frac{\lambda_5}{2}\left[ (\Phi^{\dagger}\Phi')^2 \! +\! \mbox{h.c.} \right]. ~
\label{eq:VIDM}
\end{equation}
However, in the case of the inert doublet, the field $\Phi'$ does not acquire a vev and, hence, does not participate in electroweak symmetry breaking. This is left to the doublet $\Phi$ only, which then coincides with the SM Higgs doublet. After electroweak symmetry breaking, the doublet $\Phi'$ can then simply be decomposed as
\begin{equation}
\Phi ' = \begin{pmatrix} H^+ \\ \frac{1}{\sqrt{2}} (H+ iA ) \end{pmatrix}~,
\end{equation}
where, in terms of the SM vev $v$, the SM--Higgs field has a mass given by
$M_h^2=\mu^2+3 \lambda_1 v^2$ while the two electrically charged $H^{\pm}$ and the two electrically neutral $H$ and $A$ states have masses given by
\begin{eqnarray}
\label{eq:IDM_masses}
M_{H^{\pm}}^2 \hspace*{-2mm} &&=\mu'^2+\frac{\lambda_3 v^2}{2},\nonumber\\
M_{H}^2&&=\mu'^2+\frac{1}{2}(\lambda_3+\lambda_4+\lambda_5)v^2,\nonumber\\
M_{A}^2&&=\mu'^2+\frac{1}{2}(\lambda_3+\lambda_4-\lambda_5)v^2.
\end{eqnarray}
Hence, the phenomenology of the model will depend on four parameters: either the three scalar masses and one quartic coupling, or four quartic couplings or combinations thereof, for instance $\lambda_2,\lambda_3$ and
\begin{equation}
\lambda_{L/S}= \frac12 ( \lambda_3 + \lambda_4 \pm \lambda_5),
\label{eq:cplg:labdaL}
\end{equation}
which correspond, respectively, to the couplings of the $HH$ and $AA$ pairs to the SM--like Higgs boson $h$. Similarly to the conventional 2HDM introduced in the previous subsection, it is possible to use the relations illustrated above to identify as free input parameters for the IDM the four physical masses $M_h,M_A,M_H,M_{H ^{\pm}}$ and the two quartic couplings $\lambda_L$ and $\lambda_2$. The coupling $\lambda_2$ does not explicitly appear in the interaction rates relevant for DM phenomenology. Nevertheless, it plays an important role since it influences the one--loop corrections to the masses of the Higgs states, which are crucial to properly determine the DM relic density in the coannihilation regime~\cite{Goudelis:2013uca}.
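For reference, the spectrum of eq.~(\ref{eq:IDM_masses}) and the couplings of eq.~(\ref{eq:cplg:labdaL}) can be evaluated directly from the potential parameters, as in the short Python sketch below ($v=246$ GeV is assumed and $\mu'^2$ is taken large enough that all squared masses are positive):
\begin{verbatim}
import numpy as np

def idm_spectrum(mu2p, lam3, lam4, lam5, v=246.0):
    """Inert-doublet masses and the hHH/hAA couplings lambda_{L/S}."""
    MHpm2 = mu2p + 0.5 * lam3 * v**2
    MH2   = mu2p + 0.5 * (lam3 + lam4 + lam5) * v**2
    MA2   = mu2p + 0.5 * (lam3 + lam4 - lam5) * v**2
    lamL  = 0.5 * (lam3 + lam4 + lam5)   # h-HH coupling
    lamS  = 0.5 * (lam3 + lam4 - lam5)   # h-AA coupling
    return np.sqrt(MHpm2), np.sqrt(MH2), np.sqrt(MA2), lamL, lamS
\end{verbatim}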
To have a viable DM sector, one first assumes that the field $\Phi'$ is odd
under a discrete $\mathbb{Z}_2$ symmetry, while the SM fermions are even with
respect to it. In such a way, it is possible to forbid direct coupling between
$\Phi'$ and pairs of SM fermions. The lightest of the neutral scalar $H$ and $A$
states would then be the DM particle and, here, we will restrict ourselves to the case
where $H$ is the DM candidate.
Concerning the present constraints on the inert doublet model, one has first the usual ones on the quartic couplings from the requirement of the stability of the electroweak vacuum, which imposes the tree--level relations
\begin{eqnarray}
\lambda_{1,2} >0 \, , \quad
\lambda_3, \lambda_3+\lambda_4-|\lambda_5| >-2 \sqrt{\lambda_1 \lambda_2} \, .
\end{eqnarray}
In addition, one needs small couplings $\lambda_i < 4 \pi$ from the requirement
of perturbativity. These requirements should not only hold at the weak scale but also at high enough energy to have a consistent DM and collider phenomenology. The $\beta$ functions for the five $\lambda_i$ couplings coincide with the ones given in eq.~(\ref{eq:lambda1RGE}) for $y_1=y_2=0$.
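These tree--level requirements are straightforward to check numerically, as in the following Python sketch (the perturbativity cut is applied, conservatively, to the absolute values of the couplings):
\begin{verbatim}
import numpy as np

def idm_stable_and_perturbative(lam1, lam2, lam3, lam4, lam5):
    """Tree-level boundedness-from-below and |lambda_i| < 4*pi checks."""
    root = 2.0 * np.sqrt(lam1 * lam2)
    stable = (lam1 > 0 and lam2 > 0
              and lam3 > -root
              and lam3 + lam4 - abs(lam5) > -root)
    perturbative = all(abs(l) < 4*np.pi for l in (lam1, lam2, lam3, lam4, lam5))
    return stable and perturbative
\end{verbatim}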
Similarly to the conventional 2HDM, the second doublet $\Phi^{'}$ impacts
electroweak precision data which constrain the mass splitting of the extra Higgs states. The model contributions to the $S$ and $T$ parameters read for $x_A=M_A^2/M^2_{H\pm} > x_H=M_H^2/M^2_{H\pm}$ \cite{Barbieri:2006bg}:
\begin{eqnarray}
S&=& \frac{1}{72\pi} \frac{1}{(x_A^2- x_H^2)^3} \big[x_A^6 f_a(x_A) - x_H^6 f_a(x_H) + 9 x_A^2 x_H^2 \big(x_A^2 f_b(x_A) - x_H^2 f_b(x_H) \big)\big]\, , \nonumber \\
T&=& \frac{1}{32\pi^2v^2 \alpha} \big[ f(M^2_{H\pm},M^2_H)+f(M^2_{H\pm},M^2_A)
-f(M^2_A,M^2_H) \big]\, ,
\end{eqnarray}
with $f$ given in eq.~(\ref{eq:f-deltarho}), while $f_a(x)=-5+12\log(x)$
and $f_b(x)= 3-4\log(x)$.
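A numerical transcription of these expressions is sketched below in Python; here the function $f$ of eq.~(\ref{eq:f-deltarho}) is assumed to take the standard form $f(x,y)=(x+y)/2-xy\ln(x/y)/(x-y)$, and $\alpha\simeq 1/128$, $v=246$ GeV are used as fixed inputs:
\begin{verbatim}
import numpy as np

ALPHA_EM, V_EW = 1.0/128.0, 246.0   # assumed weak-scale inputs

def f_rho(x, y):
    # assumed standard Delta-rho function; vanishes for degenerate masses
    return 0.0 if x == y else 0.5*(x + y) - x*y/(x - y)*np.log(x/y)

def fa(x): return -5.0 + 12.0*np.log(x)
def fb(x): return  3.0 -  4.0*np.log(x)

def idm_S_T(MH, MA, MHpm):
    # the S expression below assumes M_A != M_H
    xA, xH = (MA/MHpm)**2, (MH/MHpm)**2
    S = 1.0/(72*np.pi)/(xA**2 - xH**2)**3 * (
        xA**6*fa(xA) - xH**6*fa(xH)
        + 9*xA**2*xH**2*(xA**2*fb(xA) - xH**2*fb(xH)))
    T = 1.0/(32*np.pi**2*V_EW**2*ALPHA_EM) * (
        f_rho(MHpm**2, MH**2) + f_rho(MHpm**2, MA**2) - f_rho(MA**2, MH**2))
    return S, T
\end{verbatim}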
Furthermore, there are collider bounds: $M_{H}+M_A \raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} M_Z$ from the invisible $Z$ boson width and, from LEP2 searches~\cite{Pierce:2007ut}, $M_{H^{\pm}}> 70\!-\!90\,\mbox{GeV}$ on the charged Higgs boson as well as $M_A> 100\,\mbox{GeV},M_H > 80\,\mbox{GeV}$ from $e^+e^-\rightarrow HA$ provided that $M_A-M_H >
8\,\mbox{GeV}$~\cite{Lundstrom:2008ai}.
\subsubsection{The 2HDM plus a pseudoscalar portal}
Another scenario which has gained some interest recently is the 2HDM plus a lighter
pseudoscalar state. Indeed, this model offers the possibility of inducing, in a
gauge invariant manner, a coupling of the form $a \bar f \gamma_5 f$ between a
singlet pseudoscalar $a$ and the SM fermions, via the mixing of $a$ with the
pseudoscalar $A$ state of the 2HDM~\cite{Ipek:2014gua,Goncalves:2016iyg,Bauer:2017ota,Tunney:2017yfp,Abe:2018bpo}. The most general scalar potential for such a model is given by~\cite{Abe:2018bpo}:
\begin{equation}
V = V(\Phi_1,\Phi_2) + \frac{1}{2} m_{a_0}^2 a_0^2+\frac{\lambda_a}{4}a_0^4
+\left(i \kappa a_0 \Phi^{\dagger}_1\Phi_2+\mbox{h.c.}\right)+\left(\lambda_{P1}a_0^2 \Phi_1^{\dagger}\Phi_1+\lambda_{P2}a_0^2 \Phi_2^{\dagger}\Phi_2\right),
\end{equation}
where $V(\Phi_1,\Phi_2)$ denotes the usual potential of the two Higgs doublet
fields given in eq.~(\ref{eq:scalar_potential}). $\kappa,\lambda_{P1}, \lambda_{P2}$ are the new couplings, assumed here to be real, between the two doublets and the pseudoscalar $a_0$ state.
In this context, we will consider that the DM particle is a fermion $\chi$, singlet under the SM gauge group, which couples to the field $a_0$ according to \begin{equation}
\mathcal{L}= g_\chi a_0 \bar \chi i \gamma^5 \chi\;.
\end{equation}
After symmetry breaking, the scalar sector of the theory will consist of two CP--even $h,H$, two CP--odd $a_0,A_0$ and two charged $H^{\pm}$ states.
In addition to the usual mixing angles $\alpha$ and $\beta$ of a 2HDM, there is an extra mixing angle $\theta$ which allows one to move from the current states $(A_0,a_0)$ to the basis $(A,a)$ of physical CP--odd eigenstates
\begin{equation}
\left(
\begin{array}{c} A_0 \\ a_0 \end{array} \right)= \mathcal{R}_\theta \left(
\begin{array}{c} A \\ a \end{array} \right) \, \quad
{\rm with} \quad
\tan2\theta=\frac{2 \kappa v}{M_{A}^2-M_{a}^2}\;.
\end{equation}
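In practice, for given $\kappa$ and masses, the mixing angle follows immediately from the relation above (Python sketch, $v=246$ GeV assumed):
\begin{verbatim}
import numpy as np

def pseudoscalar_mixing(kappa, MA, Ma, v=246.0):
    """CP-odd mixing angle from tan(2*theta) = 2*kappa*v / (M_A^2 - M_a^2)."""
    return 0.5 * np.arctan2(2.0 * kappa * v, MA**2 - Ma**2)
\end{verbatim}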
Similarly to the previous cases, several variants of this model can be considered, depending on the configuration of the couplings of the Higgs doublets to the SM fermions. We will simply focus here on the specific case of the Type II model and impose the alignment limit $\beta-\alpha=\frac12 \pi$, as well as mass degeneracy for the $H,A,H^{\pm}$ states. In this setup, the Lagrangian of the model in the mass basis can be decomposed into three main contributions (we omit here the terms involving only the $h,H,A,H^{\pm}$ states which are not relevant to our discussion)
\begin{equation}
\mathcal{L}=\mathcal{L}_{\rm DM}+\mathcal{L}_{\rm Yuk}+\mathcal{L}_{\rm scal},
\end{equation}
where $\mathcal{L}_{\rm DM}$ is the DM Lagrangian
\begin{equation}
\mathcal{L}_{\rm DM}=g_\chi \left(\cos\theta a+\sin\theta A\right) \bar \chi i \gamma_5 \chi ,
\end{equation}
while $\mathcal{L}_{\rm Yuk}$ contains the Yukawa interactions with the SM fermions
\begin{equation}
\mathcal{L}_{\rm Yuk}=\sum_f \frac{m_f}{v}\bigg[ g_{hff} h \bar f f+g_{Hff}
H\bar f f- i g_{Aff} A \bar f \gamma_5 f-i g_{aff} a \bar f \gamma_5 f \bigg] \, , \end{equation}
where the couplings $g_{\phi ff}$ for the 2HDM CP--even and charged fields are given in Table \ref{table:2hdm_type}, while the Yukawa couplings of the pseudoscalar states are given by
\begin{align}
& g_{Auu}=~{\cos\theta}/{\tan\beta},\,\,\,\, g_{Add}=g_{Aee}=~\cos\theta \tan\beta ,\nonumber\\
& g_{auu}=-{\sin\theta}/{\tan\beta},\,\,\,\,g_{add}=g_{aee}=-\sin\theta \tan\beta .
\end{align}
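These rescaled Yukawa couplings can be collected in a small helper function (Python sketch):
\begin{verbatim}
import numpy as np

def pseudoscalar_yukawas(theta, tanb):
    """Type II couplings of A and a to up- and down-type fermions (g_Aee=g_Add, g_aee=g_add)."""
    g_Auu, g_Add =  np.cos(theta)/tanb,  np.cos(theta)*tanb
    g_auu, g_add = -np.sin(theta)/tanb, -np.sin(theta)*tanb
    return g_Auu, g_Add, g_auu, g_add
\end{verbatim}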
Finally, $\mathcal{L}_{\rm scal}$ contains the trilinear interactions between the CP--even Higgs states and two (pseudo)scalar fields:
\begin{align}
& ~~~~~~~~~~~~~~~~~~~~~~~~~ \mathcal{L}_{\rm scal}=\lambda_{haa}\,h aa+\lambda_{hAa}\,h aA+\lambda_{hAA}\,h AA\, ,\nonumber\\
& \lambda_{haa}=\frac{1}{M_h v}\left[\left(M_h^2+2 M_H^2-2 M_a^2-2 \lambda_3 v^2\right)\sin^2 \theta-2 \left(\lambda_{P1} \cos^2 \beta+\lambda_{P2}\sin^2 \beta\right)v^2 \cos^2 \theta \right] \, , \nonumber\\
& \lambda_{hAa}=\frac{1}{M_H v}[M_h^2+M_H^2-M_a^2-2 \lambda_3 v^2+2 \left(\lambda_{P1}\cos^2 \beta+\lambda_{P2}\sin^2 \beta\right)v^2]\sin\theta \cos\theta \, , \nonumber\\
& \lambda_{hAA}=\frac{1}{M_H v}\left[\cot2\beta\left(2 M_h^2-2 \lambda_3 v^2\right)\sin^2 \theta+\sin 2\beta\left(\lambda_{P1}-\lambda_{P2}\right)v^2 \cos^2 \theta\right].
\end{align}
In the alignment limit, the pseudoscalars are coupled only with the SM--like Higgs state $h$.
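A direct transcription of these trilinear couplings is sketched below (Python, with $v=246$ GeV assumed; the expressions are used exactly as quoted above):
\begin{verbatim}
import numpy as np

def trilinear_couplings(Mh, MH, Ma, lam3, lamP1, lamP2, theta, beta, v=246.0):
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    mix = lamP1*np.cos(beta)**2 + lamP2*np.sin(beta)**2
    lam_haa = ((Mh**2 + 2*MH**2 - 2*Ma**2 - 2*lam3*v**2)*s2
               - 2*mix*v**2*c2) / (Mh*v)
    lam_hAa = ((Mh**2 + MH**2 - Ma**2 - 2*lam3*v**2 + 2*mix*v**2)
               * np.sin(theta)*np.cos(theta)) / (MH*v)
    lam_hAA = ((2*Mh**2 - 2*lam3*v**2)*s2/np.tan(2*beta)
               + np.sin(2*beta)*(lamP1 - lamP2)*v**2*c2) / (MH*v)
    return lam_haa, lam_hAa, lam_hAA
\end{verbatim}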
Concerning the theoretical constraints, one should impose the usual conditions on the quartic couplings of the potential. Assuming $\lambda_{P1},\lambda_{P2}>0$, these are analogous to the ones that apply to the 2HDM and which are summarized in eqs.~(\ref{eq:up1})--(\ref{eq:up2}). It is nevertheless useful to explicitly discuss the requirements on the coupling $\lambda_3$ for the scalar potential to be bounded from below
\begin{align}
\lambda_3 > 2 \lambda,\,\,\,\,\,\,\lambda=\frac{M_h^2}{2 v^2} , \
\lambda_3 > \frac{M_A^2-M_a^2}{v^2}\sin^2 \theta -2 \lambda \cot^2 2\beta \, ,
\end{align}
where the last term has been obtained under the assumption $M_A \gg M_a$. Combining these equations with the perturbativity requirement $\lambda_3 < 4 \pi$ shows that it is not possible to have, for $\sin\theta \neq 0$, an arbitrary mass splitting between the $a$ and $A$ states. The non--decoupling of the heavy scalar sector is further enforced by the requirement of perturbative unitarity for the $aa, aA$ and $AA$ scattering amplitudes into gauge bosons~\cite{Goncalves:2016iyg}
\begin{align}
\label{eq:uni}
|\Lambda_{\pm}| \leq 8 \pi \, , \quad \mbox{where} \quad
\Lambda_{\pm} v^2 = \Delta_H^2 - \Delta^2_a (1-\cos 4\theta)/8 \pm \sqrt{ {\Delta_H^2}{v^2}+ \Delta_a^4 (1-\cos4 \theta)/8 } ,
\end{align}
where $\Delta_a=M_A^2-M_a^2$ and $\Delta_H =M^2-M_{H^{\pm}}^2+2 M_W^2- \frac12
M_h^2$ with $M=M_A=M_{H^{\pm}}$. It can be seen that in the limit $M\gg M_a$ and with maximal mixing $\sin2\theta=1$, there is an upper bound on $M_A$ of about 1.4 TeV, which is weakened by lowering the values of $\sin2\theta$. We recall that in the considered setup, the severe lower bound $M>
570\,\mbox{GeV}$~\cite{Misiak:2017bgg} which comes from the constraints on the
mass of the charged Higgs boson from flavor transitions, is also present.
There are also searches for the production of the light $a$ state in association with a $Z$ and an $h$ boson that constrain parts of the parameter space \cite{Bauer:2017ota,Tunney:2017yfp}. Finally, for $M_a \leq \frac12 M_h$, large couplings between the light $a$ and the SM--like $h$ boson would lead to a decay $h\rightarrow a a $ with a large rate given by~\cite{Ipek:2014gua}
\begin{align}
\Gamma(h \rightarrow a a)= \frac{|g_{haa}|^2 M_a}{8 \pi} \sqrt{1- 4 M_a^2/M_h^2 },
\end{align}
and which is constrained both by direct searches for light pseudoscalar Higgs
states at the LHC in the $4b$, $2b 2\ell$ and $4\ell$ (with $\ell=\mu$ or
$\tau$) modes~\cite{Khachatryan:2017mnf} and by the Higgs signal strengths and
invisible Higgs decays, as discussed in section 2.
\subsection{Constraints and expectations at colliders}
\subsubsection{Higgs cross sections and branching ratios}
We come now to the collider phenomenology of the 2HDM scalars and in particular,
that of the heavier states since the lightest $h$ boson behaves essentially
like the SM Higgs boson. We will adopt for simplicity the benchmark scenario
introduced at the end of section 5.1, namely we assume the alignment limit
$\alpha=\beta- \frac\pi2$ which makes the $h$ boson SM--like and a near mass
degeneracy for the $H,A,H^\pm$ states, $M_H \approx M_A \approx M_{H^\pm}$. In
the case of the Type II model, the pattern in this benchmark is similar to
that of the MSSM which will be discussed later. Here, we briefly summarize the
main features in this particular scenario and then point out the main
differences in the other possible scenarios.
The phenomenology crucially depends on the parameter $\tan\beta$. At high
values, $\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 10$, the couplings of the neutral $\Phi=H,A$ and charged $H^\pm$
bosons to top quarks, $\propto 1/\tan\beta$, are strongly suppressed while those to
bottom quarks, $\propto \tan\beta$, are enhanced. The neutral states will then decay
almost exclusively into $b\bar b$ and $\tau^+\tau^-$ pairs, with branching
ratios BR$(\Phi \rightarrow b \bar b) \approx 90\%$ and BR$(\Phi \rightarrow \tau \tau) \approx
$10\%$ as a result of the color factor and the mass hierarchy $m_\tau/\bar m_b$,
since one has $m_\tau= 1.78$ GeV and $\bar m_b \simeq 3$ GeV for the
$\overline{\rm MS}$ $b$--quark mass at the scale of the Higgs masses. All other $H/A$
decays are strongly suppressed, including those into $t\bar t$ pairs despite
the large top quark mass value. Similarly, the charged Higgs boson will decay
into $t\bar b$ and $\tau \nu$ final states with branching ratios of 90\% and
10\% respectively.
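For orientation, the quoted $90\%$--$10\%$ pattern indeed follows from the color factor and the mass values given above, neglecting phase space and QCD corrections,
\begin{equation}
\frac{{\rm BR}(\Phi \rightarrow b\bar b)}{{\rm BR}(\Phi \rightarrow \tau^+\tau^-)} \simeq \frac{3\, \bar m_b^2}{m_\tau^2} \simeq \frac{3 \times (3~{\rm GeV})^2}{(1.78~{\rm GeV})^2} \approx 8.5 \, ,
\end{equation}
i.e. BR$(\Phi \rightarrow b\bar b) \approx 8.5/9.5 \approx 90\%$ and BR$(\Phi \rightarrow \tau^+\tau^-) \approx 10\%$.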
The situation is drastically different at low values of $\tan\beta$, say $\tan\beta \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}
3$. When the $H,A,H^\pm$ states are heavy enough to be allowed by kinematics to decay into top quarks, namely $M_H\! \approx \! M_A \! \raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} \! 2m_t$ and
$M_{H^\pm} \raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} m_t$, the modes $\Phi \rightarrow t\bar t$ and $H^+ \rightarrow tb$ become
almost exclusive and have branching ratios close to one. At intermediate values,
$3 \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} \tan\beta \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 10$, the suppression of the $\Phi tt$ coupling starts to be
effective while the $\Phi bb$ coupling is not yet strongly enhanced, resulting
in a competition between the $b\bar b$ and $t \bar t$ decay channels.
In principle, other Higgs decay modes can be considered. First of all we might
have $H\rightarrow WW,ZZ$ ($A$ does not possess such decays by virtue of CP--invariance)
and $A\rightarrow hZ, H^\pm \rightarrow hW$. Their rates are proportional to
$\cos^2(\beta-\alpha)$ and thus vanish in the alignment limit, close to which the
Type II 2HDM must lie in order to comply with the bounds from the $h$ signal strengths.
Channels like $H \rightarrow AZ, H^\pm W$, $A\rightarrow HZ, H^\pm W$ or $H^\pm \rightarrow AW, HW$
have phase--space suppressed rates or are kinematically forbidden since the
requirement of the alignment limit and the compatibility with electroweak data
imply a near mass degeneracy $M_H \! \approx \! M_A \! \approx \! M_{H^\pm}$.
The Higgs--to--Higgs decay $H\rightarrow hh$ also features a vanishing rate in the
alignment limit. Note finally that compared to the SM, the loop induced
decays of the neutral states into $gg$ (the top loop is suppressed for $\tan\beta>1$)
and $\gamma\gamma$ (for which the $W$ loop contribution is absent or suppressed)
are much smaller. Hence, only the fermionic decays above are relevant in
general.
As an example, we show in the left--hand side of Fig.~\ref{Fig:BR-phi} the decay
branching ratios of the neutral $\Phi=H,A$ bosons into the various possible
final states, as a function of $\tan\beta$ and for the common mass value
$M_\Phi=M_H=M_A=750$ GeV; the alignment limit is assumed. In the right--hand
side, we display as a function of $\tan\beta$ the total decay width of the two states
which grows like $M_\Phi$ and $(m_t/\tan\beta)^2$ or $(\bar m_b \tan\beta)^2$. It is very
large at low and high $\tan\beta$ values, being $\Gamma_\Phi \! \approx \! 50$ GeV for
$\tan\beta \approx 1$ and $\tan\beta \approx 60$, and is minimal at the intermediate value $\tan\beta \approx
\sqrt{m_t / \bar m_b} \approx 7$ since $m_t\simeq 173$ GeV and $\bar m_b \simeq 3$
GeV.
\begin{figure}[!h]
\vspace*{-25mm}
\centerline{\hspace*{-1.cm} \includegraphics[scale=0.76]{figs-2HDM/BR-Atb.pdf}
\hspace*{-8.2cm} \includegraphics[scale=0.76]{figs-2HDM/Ga-Atb.pdf}}
\vspace*{-13.5cm}
\caption{The branching ratios for $\Phi = H/A$ decays into various final states for $M_\Phi=750$ GeV as functions of $\tan\beta$ (left) and the total decay width of the two states (right); the alignment limit is assumed.}
\label{Fig:BR-phi}
\end{figure}
Maccio10} mass-concentration relation, the boost factor scales with $m_{\rm lim}$ approximately as a power-law function, with a slope consistent with the value $-0.226$ estimated from the resolved subhaloes in the Aquarius simulation~\citep{Springel08}. Down to an Earth mass, a nominal free-streaming mass scale for cold dark matter haloes, the boost factors rise to a few hundred and a few thousand for galaxy and cluster haloes respectively. These values are slightly higher than those estimated in \citet{Gao12,Springel08} by extrapolating the Aquarius and Phoenix simulations. When the \citet{Ludlow14} mass-concentration relation is adopted, however, the $b(m_{\rm lim})$ function is no longer a power-law, and is significantly reduced at low $m_{\rm lim}$, reflecting the greatly reduced concentration of haloes at the low mass end in this model. Down to an Earth mass, the boost factor is reduced by a factor of 50 in both haloes when using the \citet{Ludlow14} relation compared with that using the \citet{Maccio10} relation.
The lowered boost factor, in addition to the cuspier emission profile from subhaloes, makes the total luminosity profile inside a halo less extended than that expected from \citet{Gao12,Pinzke11}. This implies that constraints on the dark matter annihilation cross-section in clusters based on previous boost-factor estimates \citep[e.g.,][]{Huang12,Han12} could be relaxed. We provide some fitting formulae for the subhalo emission in Appendix~\ref{app:annihilation}.
Our approach differs from some previous estimates \citep[e.g.,][]{Strigari07,Anderhalden13,Sanchez14} in that we start from the infall mass to infer the density profile, rather than from the current mass which has been affected by tidal stripping. The concentration of subhaloes plays a vital role in this estimate, with lower concentrations leading to lower boost factors. We acknowledge several limitations of our current estimate. First, the mass-concentration relation at infall time, instead of that at $z=0$, should be applied to the infall mass. This causes the concentrations to be over-estimated when the $z=0$ relation is used, leading to an over-estimate in the boost factor. For example, lowering the concentration parameters by $0.2$~dex (roughly corresponding to the mass-concentration relation at $z\sim2$) leads to a reduction in the boost factor by a factor of $3$. The correct concentration distribution can be found by looking at the redshift distribution of the progenitors either in simulations or from EPS theory. We refrain from calibrating such relations in this work. We note that \citet{Bartels15} has recently combined analytical models of the unevolved subhalo mass function, the accretion-redshift distribution and a redshift-dependent mass-concentration relation with the average mass stripping rate from \citet{Jiang14} for an evaluation of the boost factor. Secondly, the infall mass function is extrapolated to low mass with a power-law form, while in principle it could differ from a power law and could be calculated theoretically with the EPS theory. Thirdly, the stripping function is also assumed to be independent of infall mass down to the lower mass limit of subhaloes. While this is a very good approximation for the subhaloes resolved in our simulations, deviations from a power-law form could become important once the infall mass range becomes much larger. A simple estimate utilising Eq.~\eqref{eq:tidal} suggests a steeper stripping function for low mass subhaloes, which could reduce both the boost factor and the subhalo emission in the inner halo. Despite these limitations, the current model still improves over previous work and can be extended using the current framework. In its current form, our predicted boost factors should be taken as upper limits.
\subsection{The lensing mass profile}
The mass of subhaloes as a function of projected cluster-centric radius at a fixed stellar mass can be measured with weak lensing~\citep[e.g.][]{Li13,Li14,Li15,Sifon15}. Because stellar mass is most directly related to the infall mass of its host subhalo, such measurements essentially constrain the mass of subhaloes selected at fixed infall mass. When stacking subhaloes, disrupted ones (assuming their galaxies persist) contribute no signal; they thus only act to dilute the average signal from surviving subhaloes. In the presence of disrupted subhaloes, the measured signal is $\Delta\Sigma_{\rm obs}=f_{\rm s}\Delta \Sigma_{\rm real}$, where $\Delta\Sigma$ is the difference between the cumulative and differential surface density profiles of the dark matter halo. For subhaloes with (truncated) NFW profiles, the lensing signal $\Delta \Sigma$ is proportional to $m$ (when the other parameters are fixed). Failing to model the disrupted subhaloes would lead to under-estimating the subhalo mass by a factor $f_{\rm s}$. Note that this $f_{\rm s}$ is not simply the fraction of orphan galaxies in semi-analytical models, because the latter is not a physical quantity but depends on the resolution of the simulation used by the model.
Once the disrupted fraction has been corrected for, the measured subhalo mass as a function of projected halo-centric distance can be obtained. Because the surviving mass is not a single value at a fixed infall mass, the measured mass is some average of the underlying mass distribution which in general lies in between the mean and median of the distribution \citep{Mandelbaum05a}. To interpret this ``lensing average'' and to correct it to the true median or mean masses requires knowledge of the underlying mass distribution. For subhalo lensing at a given projected distance, the relevant distribution is
\begin{equation}
P(m|m_{\rm acc}, R_{\rm p})=\int_{\rm l.o.s} P(\mu|R) \tilde{\rho}(R)\D l.
\end{equation}
Given this distribution, the mean and median subhalo mass can be evaluated analytically or, more conveniently, with the Monte-Carlo sampler \textsc{SubGen}. The generated Monte-Carlo samples can also be used to evaluate the systematic bias in the lensing measurement relative to the real median mass, by simulating the lensing averaging process as was done in \citet{Han14} for real observations.
In Fig.~\ref{fig:sublens}, we compare our predictions with a recent measurement of subhalo mass in galaxy clusters by \citet{Li15}. To populate subhaloes with galaxies (i.e., converting $m_{\rm acc}$ to $m_{\star}$), we adopt the stellar mass-infall mass relation determined in \citet{WangL13} for satellite galaxies with a scatter $\sigma_{\log m_{\star}}=0.19$ at fixed infall mass. We have corrected for the difference between our infall mass definition and that in \citet{WangL13}. The subhaloes are further selected with a stellar mass threshold to compare with the observations. Our result is very similar to that from the mock catalogue in \citet{Li15} created from a semi-analytical galaxy formation model, but requires much less effort to obtain. Accounting for the disrupted subhaloes increases the measured mass by a factor of $\sim 2$. After this correction, the measurements start to show a significant tension with the predicted mean and median. However, a full investigation, which would have to consider many issues, is beyond the scope of this paper.
For example, the observational selection function is more complicated than we have assumed and involves selection in host halo mass~\citep{Sifon15} and redshift. Systematic uncertainties in stellar mass estimates may also introduce a bias in the mass ratio as well as complicating the selection function. Contamination from neighbouring massive groups is likely to cause an over-estimate at large radii. On the other hand, the value of $f_{\rm s}$ in the real universe may differ from the value used here, which is estimated from dark matter only simulations: baryons make subhaloes more resistant to tidal disruption. By applying the $f_{\rm s}$ correction we have also assumed that galaxies are not disrupted together with their host subhaloes, which may not be the case in the real universe. High resolution hydrodynamical simulations are required to address these uncertainties.
\begin{figure}
\myplot{SubLensMstarRat}
\caption{The projected profile of subhalo mass to stellar mass ratio in galaxy clusters.
The dashed and solid lines represent the mean and median mass of surviving subhaloes with stellar mass $M_{\star}>10^{10}\msunh$ in a cluster with $M_{200}=10^{14}\msunh$. The shaded region is bounded by the 15th and 75th percentiles of the subhalo to stellar mass ratio at each radius. The circles with error-bars are the original measurements from \citet{Li15} while the triangles are the original results multiplied by $1/f_{\rm s}$ to account for the disrupted subhaloes.}\label{fig:sublens}
\end{figure}
\section{Summary \& Conclusions}\label{sec:summary}
We have developed a model that unifies the distribution of subhaloes in mass, $m$, position, $R$, and infall mass, $m_{\rm acc}$. The model fully specifies the joint distribution of these three quantities in an analytical form (i.e. Equation~\ref{eq:joint}):
\begin{equation}
\D N(m, m_{\rm acc}, R)= \D N(m_{\rm acc}) \tilde{\rho}(R) {\D P(m|m_{\rm acc},R)},\nonumber
\end{equation} where $\D N(m_{\rm acc})$ describes the infall mass distribution, $\tilde{\rho}(R)$ is the spatial probability distribution of dark matter particles inside the host halo, and $\D P(m|m_{\rm acc},R)$ describes the final mass distribution of subhaloes of a given infall mass at a given radius. The specific forms of the relevant terms in the joint distribution are given by Equations~\eqref{eq:InfallMF}, \eqref{eq:rho_def} and \eqref{eq:strip_PDF}, with parameter values applicable to different host halo masses listed in Table~\ref{table:par}. A Monte-Carlo sampler, \textsc{SubGen}, is also provided that easily generates subhalo samples inside any host halo following the above distribution. Once a subhalo sample is generated, any population statistics involving these variables can be easily obtained.
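As an illustration of how such samples can be drawn, a minimal Monte-Carlo sketch in the spirit of \textsc{SubGen} is given below (Python); all numerical values used here (power-law slope, host concentration, survival fraction, median stripping and its scatter) are placeholders standing in for the calibrated parameters of Table~\ref{table:par}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_infall_mass(n, alpha=0.95, m_min=1e-6, m_max=1e-1):
    # dN/dm_acc ~ m_acc^-(1+alpha) between m_min and m_max (units of M200)
    u = rng.uniform(size=n)
    return (m_min**-alpha - u*(m_min**-alpha - m_max**-alpha))**(-1.0/alpha)

def sample_radius(n, c=5.0):
    # unbiased accretion: radii follow the host (NFW) mass profile
    x = np.linspace(1e-3, 1.0, 2000)               # r / R200
    M = np.log(1 + c*x) - c*x/(1 + c*x)            # unnormalised enclosed mass
    return np.interp(rng.uniform(0.0, M[-1], size=n), M, x)

def sample_final_mass(m_acc, r, fs=0.55, mu0=0.5, beta=1.0, sigma=0.8):
    # a fraction 1-fs is physically disrupted (m = 0); survivors retain a
    # log-normal fraction of m_acc whose median is a power law in radius
    survive = rng.uniform(size=len(m_acc)) < fs
    med = np.clip(mu0 * r**beta, 1e-6, 1.0)
    mu = np.minimum(np.exp(np.log(med) + sigma*rng.normal(size=len(m_acc))), 1.0)
    return np.where(survive, mu * m_acc, 0.0)

m_acc = sample_infall_mass(100000)
r = sample_radius(100000)
m = sample_final_mass(m_acc, r)
\end{verbatim}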
The support for this model can be summarized as follows:
\begin{itemize}
\item Using high resolution $\Lambda$CDM cosmological simulations of both a galaxy and a cluster sized halo from the Aquarius~\citep{Aquarius} and Phoenix~\citep{Phoenix} projects, we have carefully verified that the shape of the unevolved spatial distribution (i.e., the radial profile at fixed $m_{\rm acc}$) follows the density profile of the host halo, a phenomenon we summarize as \emph{unbiased accretion} of subhaloes. This holds for both surviving subhaloes and unresolved or disrupted subhaloes as traced by their most-bound particles. Dynamical friction leads to a deviation of the unevolved spatial distribution from that of the host halo density profile only in the very inner region and is important only for subhaloes with very large $m_{\rm acc}/M_{200}$.
\item The amplitude of the unevolved spatial distribution, as described by the unevolved subhalo mass function, $\D N/\D \ln m_{\rm acc}$, follows a power law in each individual halo.
\item The joint distribution is then obtained following Bayes theorem, by further specifying the connection between $m$ and $m_{\rm acc}$ with the conditional distribution $P(m|m_{\rm acc},R)$. This connection is shaped by tidal stripping, with subhaloes in the inner halo being more heavily stripped on average. Through a convergence study, we find that about $45\%$ of subhaloes are \emph{physically} disrupted (i.e., stripped to $m=0$ regardless of numerical resolution). Because the spatial distribution is independent of infall mass, the same disruption fraction applies to all infall masses and at all radii. For the surviving subhaloes, we find $P(m|m_{\rm acc}, R)$ can be approximated by a log-normal distribution at each radius, with a median radial dependence well approximated by a power law.
\end{itemize}
Marginalizing (i.e., integrating) the joint distribution over any variable, one obtains the joint distribution of the remaining ones. For example, marginalizing over the infall mass, the model simultaneously reproduces the universal final mass function and the universal spatial distribution of subhaloes of a given final mass. In particular, the model predicts that:
\begin{itemize}
\item The final mass function follows a power-law form with the same slope as the infall mass function.
\item The spatial distribution of subhaloes at fixed $m$, which we call the evolved spatial distribution, is flatter than the density profile of the host halo. The ratio between the two is determined by the amount of tidal stripping at each radius. This explains the so-called ``anti-bias'' between the galaxy distribution and the subhalo distribution as purely a selection effect.
\item The shape of the evolved distribution is also independent of $m$. The scale-free nature (i.e., power-law form) of the infall mass function and the mass-independence of the unevolved spatial profile are the keys to such independence.
\end{itemize}
The parameters of our model ingredients have been calibrated with simulations and we find only very modest variation with simulated halo mass. This enables the model to be safely interpolated to other halo masses. The calibrated model can be applied to a wide range of problems. We give several such examples, including the universality of the subhalo mass function, the dark matter annihilation emission from subhaloes, and lensing measurements of subhalo mass.
We demonstrate that the universality of the subhalo mass function exists because subhaloes trace the density field at large radii where tidal stripping is irrelevant. At smaller radii, the mass function is lower in more massive haloes. Using the framework to calculate the dark matter annihilation emission of subhaloes, we demonstrate that the adopted mass-concentration relation for subhaloes is crucial in such calculations. Extrapolated down to an Earth mass, the commonly adopted power-law mass-concentration model overpredicts the total subhalo emission by a factor of 50 compared with the results obtained when adopting a more physical mass-concentration relation. The model can also be easily adapted to compare with, as well as to calibrate, gravitational lensing measurements of the subhalo mass. The existence of a physically disrupted subhalo population could potentially lead to a correction to the lensing measurement by a factor of $\sim2$, amplifying the tension between a recent subhalo lensing measurement~\citep{Li15} and theoretical predictions.
The model can be extended to higher redshift and further calibrated in other cosmologies. A dependence on host halo concentration may also be introduced as additional model parameters. The aspect of the model that is of the most interest and least known is how subhaloes are stripped. This is described in the model by the average stripping function and its scatter. The unevolved subhalo mass function, on the other hand, can be fully predicted from EPS theory. In addition, EPS calculations are also capable of providing the distribution of accretion redshifts, which can be combined with a redshift-dependent mass-concentration relation to provide accurate density profile parameters for subhaloes. This could for example improve the predictions for the subhalo annihilation emission.
\section*{Acknowledgments}
We thank Liang Gao and Wojciech Hellwing for helpful discussions, and Aaron Ludlow for providing us a tabulated version of his mass-concentration relation. We thank the anonymous referee for helpful and insightful comments that helped us improve the paper. This work was supported by the European Research Council [GA 267291] COSMIWAY and a Science and Technology Facilities Council
Durham Consolidated Grant. YPJ is supported by the 973 program No.~2015CB857003, NSFC~(11533006,11320101002), and a Shanghai key laboratory grant No.~11DZ2260700. This work used the DiRAC Data Centric system at Durham University,
operated by the Institute for Computational Cosmology on behalf of the
STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded
by BIS National E-infrastructure capital grant ST/K00042X/1, STFC
capital grant ST/H008519/1, and STFC DiRAC Operations grant
ST/K003267/1 and Durham University. DiRAC is part of the National
E-Infrastructure. This work was supported by the Science and
Technology Facilities Council [ST/F001166/1].
\bibliographystyle{\mybibstyle}
\setlength{\bibhang}{2.0em}
\setlength\labelwidth{0.0em}
\section{Introduction}
The relationship between supermassive black hole (SMBH) growth and the growth of galaxies in the Universe is a major outstanding issue in observational cosmology. There is ample evidence that SMBHs are somehow linked to their host galaxies. Scaling relations such as the $M_{BH} - \sigma$ relationship \citep{Gebhardt00,Ferrarese00} and $M_{BH} - L_{\rm bulge}$ relationship \citep{Marconi03,Bennert10} suggest that galaxies and their nuclear black holes grow in tandem following a process that is still poorly understood, but which may involve feedback from the SMBH regulating the evolution of its host galaxy.
Merger-driven hierarchical structure formation has been an attractive model for explaining
the co-evolution of SMBHs and galaxies. This model was originally motivated by observations of ultra-luminous infrared galaxies (ULIRGs) whose morphologies showed mergers and interactions and which revealed buried, dust-enshrouded quasars as well as high levels of star formation \citep{Sanders88a}. Numerical simulations of major mergers between galaxies that host SMBHs also predict a relationship between quasar ignition, which begins in a heavily enshrouded state, and intense star formation induced by the mergers \citep{DiMatteo05,Hopkins06a}. These simulations reveal the effects of feedback on host galaxy growth, enabling these systems to arrive on the $M-\sigma$ relationship post-merger, and to reproduce the bright end of the galaxy mass function, which is observed to be far steeper than the halo mass function.
Despite the successes of these merger-driven models, recent observations suggest a more complicated picture. \citet{Schawinski11} and \citet{Kocevski12} showed that moderate luminosity ($10^{42} {\rm~erg~ s}^{-1} < L_X < 10^{44} {\rm~erg~ s}^{-1}$) X-ray-selected AGN at $1.5 < z < 3$ reside in undisturbed, disk-dominated galaxies.
In an analysis of the merger fraction seen in quasar and AGN samples as a function of luminosity, \citet{Treister12} find that mergers dominate only at the highest luminosities (i.e., the quasar regime where $L_{\rm bol} \ge 10^{46}$ erg s$^{-1}$).
In addition, theoretical models are beginning to include more complicated scenarios for black hole fueling and galaxy evolution; it is possible that stochastic accretion dominates at low luminosities while mergers drive fueling at high luminosities \citep[e.g.,][]{Hopkins06d,Hirschmann12}.
In support of merger-driven co-evolution at the high luminosity end, \citet{Glikman12} identified a population of objects in which merging appears to be the dominant driver of co-evolution and feedback.
A large population of dust-reddened quasars has been identified by matching the FIRST \citep{Becker95} and 2MASS \citep{Skrutskie06} surveys and selecting objects with very red optical-through-near-infrared colors (we refer to this sample hereafter as F2M; \citealp{Glikman04,Glikman07,Glikman12}; and F2MS; \citealp{Urrutia09}). Spectroscopic observations of these sources have identified $\sim 120$ red quasars spanning a broad range of redshifts ($0.1 < z \lesssim 3$) and reddenings ($0.1<E(B-V)\lesssim 1.5$). F2M red quasars are the most luminous objects in the Universe after correcting for reddening, and their fraction increases with increasing luminosity. They live in merger-dominated hosts with elliptical galaxy profiles \citep{Urrutia08}. Their spectra show a high fraction of LoBAL and FeLoBAL features, providing evidence for outflows that could be associated with feedback \citep{Urrutia09,Farrah12,Glikman12}. Many have extremely high accretion rates, and the high Eddington-ratio systems have large bulge luminosities relative to their black hole masses, suggesting that their stars formed before the black hole finished growing \citep{Urrutia12}. This body of evidence suggests that the dust-reddened quasars in the F2M survey are systems in which a merger-fueled, heavily obscured quasar is emerging from its shrouded environment. Based on the statistical frequency of these sources, compared with optically-selected, blue quasars, \citet{Glikman12} estimate that the duration of the red phase is $\sim 20\%$ of the unobscured quasar lifetime.
In addition to the F2M sample, there have been several efforts to identify populations of red quasars in the radio \citep[e.g.,][]{Webster95,White03b} and mid-infrared \citep[e.g.,][]{Lacy04,Lacy07,Polletta06,Polletta08,Stern12}. \citet{Warren00} developed a technique exploiting the $K$-band excess in the power-law shape of quasar spectra compared with stars (the `KX' selection method), which is far less biased against dust extinction than optical selection. \citet{Maddox08} and \citet{Maddox12} have utilized the KX method to identify quasar samples that include moderately reddened sources with $E(B-V) < 0.5$.
Recently, a targeted search for heavily reddened quasars at $z\sim 2$ using near-infrared selection, with no radio-detection requirement, discovered 12 red quasars with properties similar to those of the F2M quasars \citep{Banerji12,Banerji13}. Those authors arrived at the same interpretation as \citet{Glikman12} and \citet{Urrutia12}: dust-reddened quasars are a transitional phase in quasar-galaxy co-evolution.
The F2M survey was relatively shallow, and only revealed the ``tip of the iceberg'' for reddened AGN; at higher redshifts ($z > 1.5$) only the most intrinsically luminous objects are seen. To reach the heavily-reddened higher-redshift analogs to the F2M quasars, a more sensitive near-IR survey is needed to tease out the luminosity and redshift dependences of red quasars.
The ideal survey for extending the F2M red quasar survey is the UKIRT Infrared Deep Sky Survey \citep[UKIDSS;][]{Lawrence07}. UKIDSS is a near-IR imaging sky survey comprised of five tiered surveys with varying depths and areas to supplement the wavelength coverage of the sky beyond the optical. The largest of these, the Large Area Survey (LAS) has, to date, covered $\sim3000$ deg$^2$ in the $Y$, $J$, $H$, and $K$ bands down to $K \sim18$ magnitudes, which is approximately 2.5 magnitudes deeper than 2MASS. In addition, the image quality of UKIDSS is comparable to optical CCD-based surveys, with a typical full-width at half-maximum (FWHM) of the point spread function (PSF) below an arc-second (compared with FWHM of 2\arcsec\ for 2MASS point sources).
In this paper, we present a sample of red quasars using the UKIDSS survey down to $K=17$, or $\sim 1.5$ magnitudes fainter than 2MASS in the near-infrared. We construct our sample by matching the FIRST radio survey to the UKIDSS First Data Release \citep[DR1;][]{Warren07} and use optical photometry from the Sloan Digital Sky Survey \citep[SDSS;][]{York00} to select objects with red optical-to-near-infrared colors.
The paper is organized in the following manner. In Section 2 we describe our color selection technique and the construction of the red quasar candidate list. We describe our spectroscopic follow-up observations in Section 3. In Section 4, we discuss other methods of selecting reddened quasars, along with their advantages and drawbacks. We analyze the surface density and demographics of this deeper red quasar sample in Section 5. In Section \ref{sec:ebv} we use all available photometric and spectroscopic data for our quasars to estimate the reddening experienced by each source, and we present our conclusions in Section 7. When calculating distances, luminosities and other cosmology-dependent quantities, we use the parameters: $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M = 0.30$, and $\Omega_\Lambda = 0.70$.
\section{Red Quasar Color Selection}
\defcitealias{Urrutia09}{F2MS}
In this paper we explore the space density of red quasars extending $\sim 1.5$ magnitudes below the 2MASS $K$-band flux limit by applying selection criteria similar to the F2M survey.
\citet{Glikman04} found that the colors $R-K>4$ and $J-K>1.7$ were efficient color cuts for finding red quasars, which were used to identify 120 red quasars in \citet{Glikman07} and \citet{Glikman12}. However, the F2M survey used optical magnitudes from the Guide Star Catalog 2.2 \citep{Lasker08}, which are based on digitized scans of the POSS-II photographic plates \citep{Reid91}. The $R$-magnitude used in the $R-K$ color is therefore not equivalent to the SDSS $r$-band.
\citet{Urrutia09} used a combination of FIRST, 2MASS and SDSS to create a sample of red quasars (hereafter, we refer to this sample as F2MS). The areal coverage of the F2M and F2MS surveys overlapped and they have many sources in common. However, \citet{Urrutia09} required $r-K>5$\footnote{$r-K>5$ is roughly equivalent to $R-K>4.5$ \citep{Windhorst91}. } and $J-K>1.3$\footnote{The SDSS magnitudes are on the AB magnitude system \citep{Oke83}, while 2MASS and UKIDSS report Vega magnitudes. These color cuts represent colors computed directly from the respective databases, with no corrections/transformations applied.}. We found, when comparing the full set of F2M quasars to the subset selected by \citet{Urrutia09}, that five quasars with $1.5<J-K<1.7$ were missed by the F2M color cuts, which amounted to 9\% of the \citet{Urrutia09} sample. In order to remedy this incompleteness, we amend the original F2M color cuts to take into account the SDSS filters and the incompleteness in $J-K$; for this pilot study of heavily reddened quasars in FIRST+UKIDSS DR1, we require $r-K > 5$ and $J-K > 1.5$, with $K<17$.
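For concreteness, these cuts amount to a simple boolean mask over a matched photometric catalog. The following minimal sketch (in Python, with hypothetical array names for the database magnitudes) simply encodes the cuts as stated above; it is illustrative only, not the actual selection code used in this work.
\begin{verbatim}
import numpy as np

def select_red_candidates(r_ab, j_vega, k_vega):
    """UKFS color cuts: r - K > 5, J - K > 1.5, K < 17.
    Magnitudes are taken directly from the survey databases
    (SDSS r in AB; UKIDSS J, K in Vega), with no transformations."""
    r_ab, j_vega, k_vega = map(np.asarray, (r_ab, j_vega, k_vega))
    return (r_ab - k_vega > 5.0) & (j_vega - k_vega > 1.5) & (k_vega < 17.0)

# e.g., mask = select_red_candidates(cat['r'], cat['J'], cat['K'])
\end{verbatim}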
Figure \ref{fig:color} shows a series of modeled quasar colors out to $z\le 2.5$ with various amounts of reddening. The yellow line with $E(B-V) = 0.5$ is labeled with the modeled quasars' redshifts. The confirmed F2MS red quasars from \citet{Urrutia09} are shown with red circles. We also plot the colors of M, L and T dwarfs (asterisks and triangles, respectively), which our $J-K>1.5$ color cut largely avoids\footnote{The surface density of low mass stars overwhelms that of red quasars in infrared surveys, thereby compelling us to add the radio selection to help eliminate stellar contamination from our sample.}. \citet{Glikman12} demonstrated that added light from a host galaxy does not significantly affect the color of these reddened quasars; the largest consequence of the $r-K > 5$ color cut is the possibility of missing lightly reddened, lower-redshift ($z \lesssim 1.3$, $E(B-V) \lesssim 0.5$) quasars.
\begin{figure}
\epsscale{1.2}
\plotone{f1.pdf}
\caption{Colors of quasars with various amounts of reddening and cool field stars, in $J-K$ vs.~$r-K$. The solid lines are modeled Type I quasar colors between $z=0.1$ to $2.8$. The blue line shows the colors of an average quasar with no reddening, while the yellow, orange and red lines show the color tracks of quasars reddened by $E(B-V) = 0.5, 1.0$ and 1.5 magnitudes with an SMC extinction law, respectively. Redshifts spaced by $\Delta z = 0.3$ are labeled on the yellow ($E(B-V)=0.5$) line.
For consistency with the format in which the data products are made available, we use AB magnitudes for the SDSS-$r$ band and Vega magnitudes for the near-infrared $J$ and $K$ bands.
Overplotted are the red quasars from \citet{Urrutia09} (red circles), all but one of which fall into our $J-K=1.5$ and $r-K=5$ color cuts.
Our color cuts (dashed lines) mostly avoid low mass stars (black asterisk, triangle symbols), which are further excluded by our radio selection.
We overplot the UKFS candidates with open circles and the confirmed red quasars with filled circles.
Because of the $r-K$ color cut, our color selection may miss low-redshift, mildly-reddened sources.}\label{fig:color}
\end{figure}
We matched the 3 April, 2011 FIRST radio catalog\footnote{\tt http://sundog.stsci.edu/first/catalogs/readme\_03apr11.html} \citep{White97} to the UKIDSS First Data Release \citep[DR1;][]{Warren07}. This yielded a matched catalog of 4890 sources (2432 with $K\le17$), including all UKIDSS matches within 2\arcsec\ of a FIRST source, not just the nearest match. We make no restriction on the UKIDSS morphological classification in our selection, including all {\tt mergedClass} values from ``stellar'' to ``galaxy''. Most of the objects ($87\%$) are classified as galaxies ({\tt mergedClass = 1}) with only a small fraction ($8\%$) classified as stellar ({\tt mergedClass = -1}). The remaining sources are classified as either {\tt probableStar}, {\tt probablyGalaxy}, {\tt noise} or {\tt saturated} \citep[see Appendix A of ][]{Dye06}.
In order to obtain their optical magnitudes, we then matched these objects, using the FIRST position, to the SDSS DR6 \citep{Adelman-McCarthy08} catalog with a search radius of 2\arcsec. There were 4000 FIRST+UKIDSS sources with a match to SDSS, 2465 of which have $K\le17$ (including multiple matches to the same radio source). \citet{Glikman12} showed that $\sim 50\%$ of the F2M red quasars were classified as extended ({\tt type = 3}) in SDSS. To avoid any morphological bias we make no eliminations based on morphology and keep all classifications from the SDSS. At this stage, the majority of FIRST+UKIDSS+SDSS sources are classified as extended in the optical ($84\%$ versus $16\%$ with a stellar classification).
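The positional cross-matching described above can be reproduced with standard tools; the sketch below (Python, assuming hypothetical coordinate arrays for the two catalogs) uses astropy to collect every counterpart within a chosen radius rather than only the nearest match, as in our FIRST-UKIDSS matching.
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord, search_around_sky

def match_all_within(ra1, dec1, ra2, dec2, radius_arcsec=2.0):
    """Return index pairs for every catalog-2 source within
    radius_arcsec of a catalog-1 position (all matches kept)."""
    c1 = SkyCoord(ra=ra1, dec=dec1, unit='deg')
    c2 = SkyCoord(ra=ra2, dec=dec2, unit='deg')
    idx1, idx2, sep2d, _ = search_around_sky(c1, c2,
                                             radius_arcsec * u.arcsec)
    return idx1, idx2, sep2d

# e.g., i_first, i_las, sep = match_all_within(first_ra, first_dec,
#                                              las_ra, las_dec)
\end{verbatim}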
We also include FIRST+UKIDSS sources with $K\le17$ that are not detected in SDSS, as this sample is likely to contain the most heavily reddened quasars. There were 901 FIRST+UKIDSS sources with no match in the SDSS catalog within 2\arcsec. The SDSS DR6 does not include the deeper observations over Stripe 82, a region of the SDSS footprint covering the area $-50\degr < \alpha_{2000}<59\degr$, $-1.25\degr < \delta_{2000} < 1.25\degr$ that has been re-visited approximately 80 times and whose co-added frames reach $\sim 2$ magnitudes deeper than the nominal SDSS magnitude limits. Although there is considerable overlap between the UKIDSS DR1 (fields LAS 5, LAS 6, LAS 7 and LAS 8) and Stripe 82, we did not incorporate the magnitudes from the co-added Stripe 82 data in our source selection, to maintain uniformity in our selection process.
We applied the color cuts shown in Figure \ref{fig:color} to select red quasar candidates. For the optically undetected sources, we used the quoted 95\% completeness magnitude limits for SDSS, $r = 22.2$ and $i=21.3$ \citep{Adelman-McCarthy06} when computing colors. This color limit automatically includes all undetected sources as candidates, since all the candidates have $K\leq17$, which implies that all optically undetected sources have $r-K \ge 5.2$.
We extracted UKIDSS image cutouts of all the sources using the Wide Field Camera \citep[WFCAM;][]{Casali07} Science Archive \citep[WSA;][]{Hambly08} as well as cutouts from SDSS and removed objects that appeared to be image artifacts, e.g., artifacts associated with nearby bright stars, cross talk, and false detections due to imperfect sky subtraction near the edges of a field. The final quasar candidate list of FIRST+UKIDSS sources with $K\le17$ (with and without SDSS matches) obeying the color criteria $r-K>5$ and $J-K>1.5$ contains 87 objects. We call this the UKFS candidate catalog, listed in Table 1, which includes 69 sources with SDSS detections and 18 candidates without SDSS detections.
Since the versions of FIRST and SDSS that we use fully overlap the UKIDSS DR1 area, the size of our survey is determined by the UKIDSS footprint which is 189.6 deg$^2$ \citep{Warren07}. We use this area to determine the surface density and demographics of the fainter red quasars found in this survey.
\section{Observations}
We obtained 61 spectroscopic identifications of our 87 candidates. These identifications were primarily determined from forty-four near-infrared spectra acquired with the TripleSpec spectrograph \citep{Herter08} on the Palomar Hale telescope during five observing runs between August 2008 and April 2013. The TripleSpec data were reduced using a modified version of the Spextool software, which includes flat-fielding, sky-subtraction, co-addition of individual frames, extraction and wavelength calibration using sky lines \citep{Cushing04}. We also obtained a spectrum of a nearby A0V star at a similar airmass (aiming for $\Delta$ airmass $<0.1$ between the target and the star) after each object. The A0V spectrum is used to correct for telluric absorption \citep{Vacca03}.
An additional fifteen optical spectra were obtained from the SDSS. Of these, eleven are from the most recent Data Release 9 \citep[DR9;][]{Ahn12} public spectroscopic database, which includes spectroscopy from the Baryon Oscillation Spectroscopic Survey \citep[BOSS;][]{Eisenstein11}. BOSS spectra are taken with a fiber-fed, multi-object spectrograph \citep{Smee12}. A pipeline reduces, classifies and assigns redshifts to the spectra \citep{Bolton12}.
Eleven spectra came from the AAOmega-UKIDSS-SDSS Survey (AUS; Croom et al. in prep) in the Stripe-82 region, an additional three spectra were obtained with the Low Resolution Imaging Spectrograph \citep[LRIS;][]{Oke95} on the W. M. Keck telescope on 15 October 2012, and one object was identified as a luminous red galaxy (LRG) by the 2dF-SDSS LRG and QSO survey \citep[2SLAQ;][]{Cannon06}. In addition, two sources have photometric redshifts determined from the Red-Sequence Cluster Survey \citep{Hsieh05}. We also re-discover two F2M red quasars from \citet{Glikman07} and \citet{Glikman12}. Column (11) of Table 1 lists the origin of the spectroscopy for each candidate in our sample.
Figure \ref{fig:hist} shows the distribution of $K$-band magnitudes for our candidates. The shaded histogram shows the objects with spectra; confirmed quasars are shown in the filled histogram. We are 95\% spectroscopically complete below $K=16.5$. Our completeness drops to 64\% for $16.5 < K < 17$. We recover the two F2M quasars that overlap this area and find an additional 12 quasars, for a total of 14 red quasars in the FIRST+UKIDSS DR1 overlap. We hereafter refer to these objects as the UKFS (UKIDSS+FIRST+SDSS) red quasar sample.
\begin{figure}
\epsscale{1.2}
\plotone{f2.pdf}
\caption{$K$-magnitude distribution for the UKFS candidates binned by 0.25 magnitudes. The open bars show all 87 candidates. The shaded histogram shows all 63 sources with spectroscopic observations, while the filled histogram shows the 14 confirmed quasars. Note that the survey is 95\% spectroscopically complete for $K<16.5$.}\label{fig:hist}
\end{figure}
In Figure \ref{fig:spec1} we present a spectral atlas of the 14 UKFS quasars in decreasing redshift order. We label the positions of typical prominent quasar emission lines (Ly$\alpha$~1216, \ion{N}{5}~1240, \ion{Si}{4}~1400, \ion{C}{4}~1550, \ion{C}{3}]~1909, \ion{Mg}{2}~2800, [\ion{O}{2}]~3727, H$\delta$~4102, H$\gamma$~4341, H$\beta$~4862, [\ion{O}{3}]~4959, 5007, H$\alpha$~6563, \ion{He}{1}~10830, Pa$\gamma$~10941, Pa$\beta$~12822~\AA) with vertical dotted lines. We also plot with a red line the best-fit reddened quasar template \citep[from][]{Glikman06} to the spectra (see Section \ref{sec:ebv} for further discussion).
\begin{figure*}
\plotone{f3a.pdf}
\caption{Optical and/or near-infrared spectra of UKFS quasars ordered by redshift. The red line shows the best-fit reddened quasar template to the combined optical and near-infrared spectra. Typical quasar emission lines are marked with vertical dashed lines:
Ly$\alpha$~1216,
N~V~1240,
Si~IV~1400,
C~IV~1550,
C~III]~1909,
Mg~II~2800,
[O~II]~3727,
H$\delta$~4102,
H$\gamma$~4341,
H$\beta$~4862,
[O~III]~4959,
[O~III]~5007,
H$\alpha$~6563,
He~I~10830,
Pa$\gamma$~10941,
Pa$\beta$~12822\AA.}\label{fig:spec1}
\end{figure*}
\begin{figure*}
\figurenum{3b}
\plotone{f3b.pdf}
\caption{{\em Continued.} Optical-through-near-infrared spectra of UKFS quasars. }\label{fig:spec2}
\end{figure*}
\begin{figure*}
\figurenum{3c}
\plotone{f3c.pdf}
\caption{{\em Continued.} Optical-through-near-infrared spectra of UKFS quasars. }\label{fig:spec3}
\end{figure*}
\section{Complementary Red Quasar Selection Methods}
\subsection{The KX-Selection}
Because of the power-law nature of a quasar's SED, quasars are separable in color-color space from stars, whose spectra are approximately blackbodies. At short wavelengths, quasars are bluer than the bluest stars, giving rise to the so-called ultraviolet excess (UVX). This feature has been exploited for quasar selection in optical surveys \citep[e.g.,][]{Sandage65,Schmidt83}, resulting in a literature of $\gtrsim 10^5$ spectroscopically confirmed quasars out to $z\sim 2.5$ \citep[e.g.,][]{Veron-Cetty10}. At long wavelengths the colors of quasars also diverge from those of stars, appearing redder, giving rise to the so-called $K$-band excess \citep[KX;][]{Warren00}, which allows for efficient quasar selection in the near-infrared. In addition, since dust extinction decreases steeply with increasing wavelength, near-infrared emission is less affected than the optical and rest-frame UV. This means that KX selection of quasars is far less sensitive to dust, while remaining as efficient as, or possibly more efficient than, UVX selection.
In particular, UVX selection fails for $z\gtrsim2.2$ when the Lyman-$\alpha$ line shifts into the $B$-band, reddening the observed quasar's $U-B$ color. At higher redshifts ($z \gtrsim 4$), absorption from the Lyman-$\alpha$ forest further reddens a quasar's UV and optical colors, making UVX selection ineffective and requiring other color selection methods for identifying high redshift quasars \citep[e.g.,][]{Kennefick95,Fan99b,Glikman10}. The KX-selection method extends the redshift range for finding quasars as a result of two effects: (1) optical/UV wavelengths are more susceptible to dust extinction, which is exacerbated at higher redshifts as the rest-frame wavelengths are shifted blueward, and (2) Lyman-$\alpha$ forest absorption is not an issue until $z\sim 3.5$, allowing access to quasars in the $z\sim 2-3$ regime, where optical selection is most incomplete \citep{Warren00,Richards02,Maddox12}.
There have been recent efforts to recover missed quasars, including reddened ones, using KX selection with UKIDSS and SDSS. These studies have identified known quasars from, e.g., SDSS and other optical surveys, as well as additional quasars with unusual properties and/or at redshifts inaccessible to optical selection. \citet{Maddox12} used KX-selection to find quasars in the UKIDSS DR4 LAS data combined with SDSS DR7. This selection resulted in recovering 3294 SDSS quasars, plus 324 new quasars. To compare the UKFS quasars with the KX-selected sources, we matched Tables 4 and 6 from \citet{Maddox12} to the 16 February 2012 release of the FIRST radio catalog, so that both samples are radio-detected down to the same flux density limit. There are 263 FIRST matches to the KX-selected, SDSS-identified quasars \citep[Table 6 in][]{Maddox12} and only 9 FIRST matches to the newly discovered KX-selected quasars listed in Table 4 of \citet{Maddox12}. There is incomplete areal overlap between the two surveys, accounting for some of the missed quasars.
Figure \ref{fig:kx} shows the location of the FIRST-detected quasars identified by \citet{Maddox12} in $g-J$ vs. $J-K$ color-color space. Since the SDSS photometry is on the AB magnitude system \citep{Oke83}, while UKIDSS uses the Vega standard, \citet{Maddox12} shift the SDSS magnitudes to the Vega system for consistency. We have chosen to use the AB magnitudes since they are naturally representative of physical units (i.e., flux density) without prior knowledge of photometric zero points. We plot the quasars from \citet{Maddox12} with blue circles and cyan triangles. We show the location of spectroscopically confirmed stars from SDSS matched to UKIDSS with black contours, while the magenta dashed line indicates the KX-selection boundary defined in \citet{Maddox08} which separates quasars from stars:
\begin{eqnarray}
(g-J)_{\rm Vega} = 4(J-K)_{\rm Vega} - 0.6
\end{eqnarray}
for $(J-K)_{\rm Vega} \le 0.9$ and
\begin{eqnarray}
(g-J)_{\rm Vega} = 33.33(J-K)_{\rm Vega} -27
\end{eqnarray}
for $(J-K)_{\rm Vega} > 0.9$. The conversions from Vega to AB for the UKIDSS bands are $J_{\rm V} = J_{\rm AB} - 0.938$ and $K_{\rm V} = K_{\rm AB} - 1.9$; thus the equation for the KX boundary in AB magnitudes transforms to:
\begin{eqnarray}
(g-J)_{\rm AB} = 4(J-K)_{\rm AB} - 2.21
\end{eqnarray}
for $(J-K)_{\rm AB} \le 0.062$ and
\begin{eqnarray}
(g-J)_{\rm AB} = 33.33(J-K)_{\rm AB} + 4.03
\end{eqnarray}
for $(J-K)_{\rm AB} > 0.062$.
The top and right-hand axes of Figure \ref{fig:kx} are shifted to show colors on the Vega system, for ease of comparison with Figure 2 of \citet{Maddox12}. In addition to these color cuts, \citet{Maddox12} require that their candidates appear stellar in the UKIDSS images -- a criterion not imposed by the UKFS sample selection.
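The piecewise boundary above is straightforward to encode; the short sketch below (Python) simply returns the boundary value of $(g-J)_{\rm AB}$ at a given $(J-K)_{\rm AB}$ color, using the AB-system constants as quoted in the text.
\begin{verbatim}
def kx_boundary_gJ(J_K_ab):
    """KX boundary of Maddox et al. (2008), rewritten in AB colors
    with the constants quoted in the text; returns the boundary
    value of (g - J)_AB at a given (J - K)_AB."""
    if J_K_ab <= 0.062:
        return 4.0 * J_K_ab - 2.21
    return 33.33 * J_K_ab + 4.03
\end{verbatim}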
\begin{figure}
\epsscale{1.2}
\plotone{f4.pdf}
\caption{The location of the UKFS candidates in KX color space ($J-K$ vs.\ $g-J$) is shown. We plot quasars found by \citet{Maddox12} with matches in FIRST with blue circles (their quasars recovered from optically selected samples) and cyan triangles (their new quasars). The colors of spectroscopically-confirmed stars from SDSS are plotted with contours, showing that the KX-selection boundary (magenta dashed line) effectively separates quasars from stars. Our UKFS sample targets the reddest sources. Red circles represent confirmed quasars, green circles are spectroscopically observed objects showing no broad lines in their spectra, and gray circles are UKFS candidates without spectroscopic observations. }\label{fig:kx}
\end{figure}
The UKFS sources are also plotted in Figure \ref{fig:kx}. Spectroscopically confirmed red quasars are red circles, spectroscopically observed objects that show no broad emission lines are plotted with green circles, and UKFS candidates with no spectroscopic observations are colored gray.
Despite 100\% areal overlap between the UKFS and KX surveys, as well as overlapping color criteria, only one UKFS quasar was found by \citet{Maddox12}: UKFS0016$-$0038, which is one of their new quasars \citep[this object is also found in the sample of][]{Banerji12}. This object is indicated in Figure \ref{fig:kx} with a red circle emphasized by a thick black border, at $(g-J)_{\rm AB} = 3.18$ and $(J-K)_{\rm AB} = 1.58$ representing its colors in the UKFS survey, which derives its magnitudes from the UKIDSS DR1 LAS merged catalog. The same object appears as a small blue point with slightly different colors, at $(g-J)_{\rm AB} = 3.33$ and $(J-K)_{\rm AB} = 1.59$, which come from the UKIDSS DR4 LAS merged catalog.
Our UKFS survey finds red quasars missed by the KX survey of \citet{Maddox12}, partly because that survey had a flux limit of $K\le16.6$ and partly because the selection focused on finding sources with $1.0 \le z \le 3.5$ based on photometric redshift estimates, although quasars with spectroscopically-determined $z<1$ are included in the final sample. In addition, the morphological cut imposed on their sample requires quasars to appear stellar in the UKIDSS and SDSS images. The choice in \citet{Maddox12} to exclude extended sources is intended to avoid host galaxy contamination, which affects quasar colors. However, \citet[\S2]{Glikman12} showed that the near-infrared-to-optical colors of reddened quasars are largely unaffected by the presence of host galaxy light, because the longer wavelengths are still dominated by the quasar continuum.
Of the 14 quasars found in this work, only UKFS0016$-$0038 is classified as stellar by both UKIDSS and SDSS. Only two UKFS quasars are classified as having stellar morphology ({\tt mergedClass = -1}) in UKIDSS and another two quasars are classified as probableStar ({\tt mergedClass = -2}). The remaining eight quasars are classified as galaxies and would have been excluded by \citet{Maddox12} and \citet{Banerji12} (including UKFS0158$+$0027 at $z=1.35$). As we noted in \citet{Glikman12}, imposing morphological criteria on red quasar candidate selection schemes may exclude a large fraction of the sources, particularly those at low redshifts or in a post-merger phase.
\subsection{Mid-Infrared Selection}\label{sec:wise}
\begin{figure}
\epsscale{1.2}
\plotone{f5.pdf}
\caption{Infrared WISE color-color space from Figure 12 of \citet{Wright10} showing the locations of various classes of astrophysical objects, with UKFS sources over-plotted as circles. Confirmed red quasars are shown in red and lie more or less where all quasars are expected to be found, apart from a few outliers (see text). Spectroscopically-observed objects that do not show broad emission lines are colored green, while unobserved candidates are plotted with open circles.}\label{fig:wise}
\end{figure}
An alternative method for finding quasars unbiased by dust extinction is to use their mid-infrared colors. Work by \citet{Lacy04} and \citet{Stern05} using the {\em Spitzer} Space Telescope revealed that the power-law nature of quasar spectra can be exploited toward even longer wavelengths. Both of these studies use near-to-mid-infrared color space to find quasars independently of reddening and have been effective at identifying heavily obscured AGN, e.g., Type II sources that do not reveal broad emission lines in their spectra. Recently, \citet{Donley12} improved upon this method to identify large numbers of obscured quasars. While these techniques are successful at identifying populations of obscured AGN at high luminosities, they are less effective in deep fields \citep{Cardamone08}. The small areal coverage of {\em Spitzer} surveys therefore made them unable to identify the rare and luminous quasars found by, e.g., the F2M survey. The recent all-sky data release from the Wide-field Infrared Survey Explorer \citep[WISE;][]{Wright10} now offers an opportunity to identify rare luminous systems of the kind we have found here in the UKFS sample.
To examine the colors of the UKFS quasars in the mid-infrared, we matched the UKFS sample (87 objects) to the WISE all-sky catalog; 84 have matches within 2\arcsec, including all fourteen confirmed quasars. Figure \ref{fig:wise} shows the location of the UKFS candidates plotted in the WISE W1$-$W2 vs.\ W2$-$W3 color-color space (corresponding to $[3.4\mu{\rm m}] - [4.6\mu{\rm m}]$ vs.\ $[4.6 \mu{\rm m}] - [12 \mu{\rm m}]$ bands). \citet{Wright10} showed that this color-space is effective at separating extragalactic sources from stars and brown dwarfs.
The UKFS red quasars are plotted with red circles and lie mostly in the area where quasars are expected to be found. This means that the WISE color selection can be effective at finding quasars independent of reddening. However, some quasars fall outside the quasar space. Two are bluer in both W1$-$W2 and W2$-$W3, and another three lie in the overlapping region between spirals and Luminous Infrared Galaxies (LIRGs).
As the postage-stamp images of the UKFS quasars presented in Figure \ref{fig:sed1} show, some of the UKFS quasars have close companions, possibly indicative of mergers giving rise to the reddening and triggering of these quasars. Since the WISE point-spread-function is $\sim 6$\arcsec\ in W1, W2 and W3, the light from any close companions is blended with the quasar light and likely affects some objects' infrared colors. It is curious, however, that the three quasars in the LIRG/spiral region are all at $z\sim 0.6$\footnote{Although the redshifts for these three sources are based on a single broad emission line, H$\alpha$ is the most plausible line given the absence of an optical spectrum or strong lines in the remaining parts of the infrared spectrum.}. In Section \ref{sec:ebv} we consider the effect of significant host galaxy light affecting these sources' SEDs.
UKFS sources with spectroscopic identifications that do not show broad lines are plotted with green circles. All the objects in the WISE quasar space \citep[W1$-$W2 $>0.8$; c.f.,][]{Assef12} have spectra. There are six sources in this space whose spectra do not show broad line emission, suggesting that while the WISE color selection is effective at finding quasars, it may suffer from some contamination. Sources with no spectroscopic observation are plotted with open circles. Two red quasars have $0.6 <$ W1$-$W2 $< 0.8$ and $2 < $ W2 $-$ W3$< 3$; one additional object with similar colors in this space has no spectroscopic identification.
Another four unidentified objects in the ULIRG/LIRG/spiral overlapping region may also be quasars.
The remaining unidentified sources with W1$-$W2$<0.5$ and W2$-$W3$<3$ are likely not quasars.
Might some of the objects whose spectra do not show broad lines have infrared luminosities that would place them in the quasar regime? Such objects would be heavily obscured (e.g., Type II or Compton thick) quasars. Figure \ref{fig:irlumhist} shows the WISE-derived infrared luminosities of all sources for which we were able to determine a redshift. Confirmed quasars are overplotted in the filled histogram. The top panel shows the observed-frame W4 22$\mu$m luminosity. The bottom panel shows the rest-frame luminosity at 6.1$\mu$m (corresponding to 22$\mu$m at $z=2.5$, the highest redshift in our sample), which we determine by interpolating between the WISE bands. In both panels, the quasars are the most luminous sources. Since there are no high-luminosity sources without broad lines, we are confident that we have not missed any quasars among our candidates. However, if these sources are obscured by large amounts of reddening (e.g., $E(B-V) \gtrsim 5$, as is typical for Seyfert 2 galaxies), then we may still be underestimating the luminosity at $6.1\mu$m. At this wavelength a reddening of $E(B-V)=5$ removes $\sim 40\%$ of the light, and without knowing the amount of extinction, even WISE does not reach rest-frame wavelengths long enough to be unaffected by such high extinction.
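The rest-frame $6.1\mu$m luminosities are obtained by interpolation between the WISE bands; a minimal sketch of this step is given below, assuming WISE flux densities in mJy and the cosmology adopted in Section 1 (the actual interpolation scheme may differ in detail).
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.30)
WISE_WAVES = np.array([3.4, 4.6, 12.0, 22.0])   # micron (W1-W4)

def rest_6p1um_lum(wise_fnu_mjy, z):
    """nu*L_nu (erg/s) at rest-frame 6.1 micron, from a log-log
    interpolation of the four WISE flux densities (mJy)."""
    lam_obs = 6.1 * (1.0 + z)                    # observed wavelength, micron
    log_fnu = np.interp(np.log10(lam_obs), np.log10(WISE_WAVES),
                        np.log10(np.asarray(wise_fnu_mjy)))
    fnu = (10.0 ** log_fnu) * u.mJy
    nu = (2.998e14 / lam_obs) * u.Hz             # c / lambda_obs
    dl = cosmo.luminosity_distance(z).to(u.cm)
    lum = 4 * np.pi * dl**2 * fnu.to(u.erg / u.s / u.cm**2 / u.Hz) * nu
    return lum.to(u.erg / u.s).value
\end{verbatim}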
\begin{figure}
\epsscale{1.2}
\plotone{f6.pdf}
\caption{Infrared luminosities based on WISE photometry for all UKFS sources that have redshifts. {\em Upper panel:} histogram of the observed-frame 22$\mu$m luminosity. {\em Bottom panel:} rest-frame luminosity at 6.1$\mu$m, the longest rest-frame wavelength common to all sources. In both panels, the confirmed quasars (filled histogram) are the most luminous sources. }\label{fig:irlumhist}
\end{figure}
\section{Red Quasar Demographics}\label{sec:surfdens}
\begin{figure}
\epsscale{1.2}
\plotone{f7.pdf}
\caption{Observed surface density of red quasars compared to blue quasars. The UKFS survey (red stars) goes $\sim 1.5$ magnitudes deeper than F2M (open red squares). The dotted black and solid red lines are the best power law fit to the points between $K=13.5$ and $K=17$. We see a rise in the number counts of red quasars toward fainter magnitudes, suggesting that reddening is common in moderate luminosity quasars and is not just a phenomenon associated with the most luminous sources. }\label{fig:sd}
\end{figure}
We compute the surface density of the UKFS red quasars as a function of $K$-band magnitude, extending the measurement of the F2M survey by 1.5 magnitudes fainter in the $K$-band. The surface density of red quasars increases sharply toward fainter magnitudes. We plot in Figure \ref{fig:sd} the number counts of red quasars compared with optically-selected quasar samples. The open red squares are the F2M quasars as shown in Figure 10 of \citet{Glikman12}. The stars represent the results from this study. The three brightest bins each contain only one source, because of the relatively small area covered by the UKIDSS DR1; we mark these with open stars without their error bars to avoid cluttering the plot.
The two brightest sources are the two recovered F2M quasars.
The measurement of the surface density at $K=15.5-16$ is highly incomplete both in the F2M survey, as the 2MASS limit is reached, and in this work, as the area probed is too small. The filled stars represent robust measurements of the surface density of radio-selected red quasars in a previously unexplored magnitude regime: their surface density rises smoothly as a power law with increasing magnitude (faintness). We fit a power law to the red quasar surface densities in the magnitude range $13.5 < K \le 17$ using LADFIT, the robust least-absolute-deviation linear fitting routine in IDL, which is insensitive to strong outliers. We find that the number of red quasars per half-magnitude bin increases as $N\sim 10^{0.64m}$.
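Outside of IDL, an equivalent least-absolute-deviation fit can be written in a few lines; the sketch below (Python/SciPy, with hypothetical variable names) fits $\log N = a\,m + b$ by minimizing the summed absolute residuals and is illustrative of, not identical to, the LADFIT call used here.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def lad_powerlaw_fit(mag, log_counts):
    """Least-absolute-deviation fit of log N = a*m + b, analogous to
    IDL's LADFIT; insensitive to strong outliers."""
    def cost(p):
        a, b = p
        return np.sum(np.abs(log_counts - (a * mag + b)))
    a0, b0 = np.polyfit(mag, log_counts, 1)   # least-squares starting point
    return minimize(cost, x0=[a0, b0], method='Nelder-Mead').x

# e.g., slope, intercept = lad_powerlaw_fit(mag_bins, np.log10(counts))
\end{verbatim}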
To determine the size of the red quasar population relative to normal, blue quasars, we need to construct an appropriate comparison sample. In \citet{Glikman04,Glikman07,Glikman12} we relied on the FIRST Bright Quasar Survey \citep[FBQS;][]{Gregg96,White00,Becker01}, which is a quasar sample constructed from a radio plus optical selection, imposing a $B-R<2.0$ color cut. That sample was then matched to 2MASS to construct a $K$-magnitude distribution with the same flux limits in the radio and near-infrared as the F2M quasars. For comparison, we plot the FBQS surface density based on their 2MASS magnitudes in Figure \ref{fig:sd}, as in Figure 10 of \citet{Glikman12}.
Since the UKFS sample relies on FIRST plus UKIDSS we must construct an equivalent optically-selected quasar sample. We utilized the SDSS-UKIDSS-matched quasar catalog of \citet{Peth11}. This sample contains 20,991 spectroscopically confirmed quasars from the SDSS DR5 quasar catalog \citep{Schneider10} (as well as 130,000 quasar candidates selected using the nine colors from SDSS and UKIDSS) that have photometry in the UKIDSS DR3 LAS catalog over 1200 deg$^2$. We then matched this catalog to the 16 February 2012 FIRST catalog, which covers 10,635 deg$^2$ and overlaps the \citet{Peth11} quasar catalog completely, to produce a catalog of 1,197 optically-selected, UKIDSS- and FIRST-detected quasars. We plot the number counts for this comparison sample with filled black circles in Figure \ref{fig:sd}, scaling the survey area by the 77.4\% spectroscopic completeness of the SDSS quasar survey \citep{Richards06} for an effective area of 928 deg$^2$. The surface density of these quasars is consistent with the FBQS number counts in the areas where the two surveys overlap, providing confidence that we have extracted a comparable, but deeper, sample of blue quasars. The best-fit power law to the blue quasar number counts is $N\sim 10^{0.40m}$.
To determine the fraction of red quasars out to $K=17$ we integrate the power-law curves for the blue and red quasars out to $K=17$ and find that the surface density of red quasars is 0.14 deg$^{-2}$, while for blue quasars it is 0.80 deg$^{-2}$ (considering only FIRST-detected sources). This result means that the surface density of red quasars is $17\%$ as high as the surface density of blue quasars. This is a higher fraction than the result from the F2M quasars, which made up $10\%$ of the surface density to $K=14.5$, suggesting that the fraction of red quasars may be increasing with decreasing luminosity and/or redshift.
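The $17\%$ figure follows from accumulating the fitted counts up to $K=17$ for both populations and taking the ratio. A sketch of the bookkeeping is given below; the intercepts (normalizations) of the fitted relations are survey-specific and are not quoted in the text, so the values shown in the comments are simply the totals stated above.
\begin{verbatim}
import numpy as np

def integrated_counts(slope, intercept, k_min=13.5, k_max=17.0):
    """Cumulative surface density (deg^-2) implied by a fitted
    log N = slope*m + intercept relation, summed over
    half-magnitude bins up to k_max."""
    bins = np.arange(k_min, k_max + 1e-6, 0.5)
    return np.sum(10.0 ** (slope * bins + intercept))

# red_total  = integrated_counts(0.64, b_red)    # 0.14 deg^-2 in the text
# blue_total = integrated_counts(0.40, b_blue)   # 0.80 deg^-2 in the text
# fraction   = red_total / blue_total            # ~17%
\end{verbatim}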
As we noted in Section \ref{sec:wise}, three of the sources in our sample, whose WISE infrared colors lie in the overlapping LIRG/spiral region, reveal broad emission at $\lambda \sim 1.08 \mu$m, which we interpret to be H$\alpha$, placing them at $z\sim 0.65$. We examine the effect of excluding these sources from the surface density analysis, in case our identifications turn out to be incorrect (e.g., once an optical spectrum is obtained and examined), and find that the fraction of red quasars drops to 9\% (more in line with the F2M fraction) and the power-law fit changes to $N\sim 10^{0.50m}$, which is still steeper than for blue quasars. Since the UKFS survey uses a more restrictive $r-K$ color cut and therefore might be missing some quasars, these surface densities are likely lower limits, and the rise in the surface density of red quasars toward fainter magnitudes is potentially even steeper.
\section{Reddening and Spectral Energy Distributions}\label{sec:ebv}
\subsection{Fitting a Reddened Quasar Template to the Spectra}
\citet{Glikman12} explored the effectiveness of several dust laws at fitting the spectral shape of red quasars. Compared with the Large Magellanic Cloud (LMC) dust law of \citet{Misselt99}, the Milky Way dust law of \citet{Cardelli89}, and the starburst extinction law of \citet{Calzetti94}, the best fit was produced by the Small Magellanic Cloud (SMC) reddening law of \citet{Gordon98}. A similar conclusion was reached by \citet{Hopkins04}, who noted that reddened quasar spectra lack the 2175\AA\ absorption feature present in the Galactic and LMC reddening laws.
In \citet{Glikman12} we measured the dust reddening in the F2M red quasars by fitting a quasar template that was reddened by the SMC reddening law of \citet{Gordon98} to our quasar spectra. We fit the UKFS spectra with a reddened quasar template in the same fashion. The resultant reddened fits are shown in Figure \ref{fig:spec1}. We see that, in general, the reddened template traces the shape of the continuum. Two significant outliers are UKFS0156$-$0058 and UKFS0135$-$0043, which we discuss below. In four cases we have both an optical and a near-infrared spectrum for the source. For these objects we determined $E(B-V)$ from the optical, near-infrared, and combined spectra. Table 2 lists the resultant values of $E(B-V)$ found via all the methods that we use.
When available, the optical spectrum provides a stronger constraint on the reddening than the near-infrared spectrum, since shorter wavelength light is more sensitive to dust extinction.
However, in this work most of the spectra of UKFS quasars are in the near-infrared, which imposes a weaker constraint on the reddening than if optical spectra were also available for these sources. For example, two of our quasars' near-infrared spectra are best fit with a {\em negative} $E(B-V)$ despite having clearly red colors (e.g., UKFS0135$-$0043).
To remedy this issue, we utilize the extensive broad-band photometry from SDSS, UKIDSS, and WISE, which includes thirteen photometric measurements (SDSS $u$, $g$, $r$, $i$, $z$; UKIDSS $Y$, $J$, $H$, $K$; and WISE W1, W2, W3, W4) spanning $0.3 - 22 \mu$m.
To compute the reddening for our quasars from their broad-band photometric SEDs, we used the five-band SDSS {\tt modelMag} magnitudes and their errors, together with the UKIDSS DR1 LAS 3\arcsec\ aperture magnitudes and their errors, plus the WISE All-Sky Source Catalog photometry with photometric errors. We shifted the UKIDSS and WISE magnitudes to the AB system using the zero-point offsets provided in Table 7 of \citet{Hewett06} and Table 1 of \citet{Jarrett11}, respectively, to be consistent with SDSS, and computed the flux densities, $F_\lambda$, in units of erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ using the equation:
\begin{equation}
F_\lambda = 10^{[-0.4(m_{AB} + 2.406 + 5\log\lambda_{AB})]},
\end{equation}
where $m_{AB}$ is the quasar's apparent magnitude and $\lambda_{AB}$ is the effective wavelength of the given magnitude's bandpass. We used $\lambda_{AB} = 3551$\AA, $4686$\AA, $6165$\AA, $7481$\AA, $8931$\AA\ for the SDSS $u$, $g$, $r$, $i$, $z$ bands, respectively; $\lambda_{AB} = 10305$\AA, $12483$\AA, $16313$\AA, $22010$\AA\ for the UKIDSS $Y$, $J$, $H$, and $K$ bands, respectively \citep{Hewett06}; and $\lambda_{AB} = 33526$\AA, $46028$\AA, $115608$\AA, $220883$\AA\ for the WISE W1, W2, W3, and W4 bands, respectively \citep{Wright10}.
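This conversion is straightforward to apply to all thirteen bands at once; the short sketch below (Python) simply encodes the relation above with the effective wavelengths listed, and is illustrative only.
\begin{verbatim}
import numpy as np

# Effective wavelengths (Angstrom): u,g,r,i,z, Y,J,H,K, W1,W2,W3,W4
LAMBDA_AB = np.array([3551., 4686., 6165., 7481., 8931.,
                      10305., 12483., 16313., 22010.,
                      33526., 46028., 115608., 220883.])

def ab_mag_to_flam(m_ab, lam=LAMBDA_AB):
    """F_lambda in erg s^-1 cm^-2 A^-1 from AB magnitudes, using
    F_lambda = 10^[-0.4(m_AB + 2.406 + 5 log10 lambda)]."""
    return 10.0 ** (-0.4 * (np.asarray(m_ab) + 2.406 + 5.0 * np.log10(lam)))
\end{verbatim}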
To compute the corresponding unreddened template colors, we passed the quasar spectral energy distribution (SED) from \citet{Richards06} through the transmission curves for each of the thirteen bandpasses in the quasar's observed frame, in order to obtain synthetic magnitudes of a standard blue quasar at the same observed wavelengths as the UKFS quasars. Converting these magnitudes to fluxes resulted in a thirteen-point SED which was used to compare with our observed SEDs. We use these SEDs to determine the reddening in each quasar in the same manner as with the spectra, except with fewer data points (and weighting by their photometric errors expressed as flux errors).
We plot the rest-frame optical-through-infrared SED of the fourteen UKFS quasars with black circles with error bars in Figure \ref{fig:sed1}. The blue curve is the mean SED for all quasars in the \citet{Richards06} sample\footnote{The choice of SED \citep[there are six sub-grouped SEDs in addition to the total SED in ][]{Richards06} does not impact our measurement of $E(B-V)$ significantly. We tested this by using the different templates to determine $E(B-V)$; all the measured values agree within 0.03 magnitudes.}, with the synthetic photometric measurements plotted with blue squares. We use the ratio of the thirteen point flux arrays from the measured data and the SED to determine the reddening,
\begin{equation}
E(B-V) = -\frac{1.086}{k(\lambda)} \log\Bigg[\frac{f(\lambda)}{f_0(\lambda)}\Bigg], \label{eqn:ebv}
\end{equation}
where $k(\lambda)$ is the SMC dust law. We plot the reddened SED in red. The figure inset is an $11.9\arcsec \times 11.9\arcsec$ cutout obtained from SDSS and displayed with an inverted grayscale; several of the images show evidence for interaction.
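Equation (\ref{eqn:ebv}) gives a per-band reddening estimate once $k(\lambda)$ is evaluated at the appropriate rest wavelengths; the sketch below encodes it directly (note that the 1.086 prefactor corresponds to a natural logarithm, since $A_\lambda = 1.086\,\tau_\lambda$), while the fits described above additionally float an overall normalization and weight by the photometric errors.
\begin{verbatim}
import numpy as np

def ebv_per_band(flux_obs, flux_template, k_smc):
    """Per-band E(B-V) from the ratio of the observed SED to the
    unreddened template, given the SMC extinction curve k(lambda)
    evaluated at the same rest-frame wavelengths.  The 1.086
    prefactor corresponds to a natural log (A = 1.086 tau)."""
    f = np.asarray(flux_obs, dtype=float)
    f0 = np.asarray(flux_template, dtype=float)
    k = np.asarray(k_smc, dtype=float)
    return -(1.086 / k) * np.log(f / f0)
\end{verbatim}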
\begin{figure*}
\plotone{f8a.pdf}
\caption{Rest-frame optical-through-infrared SEDs for UKFS red quasars (black lines) determined from SDSS ($u$,$g$,$r$,$i$,$z$), UKIDSS ($Y$,$J$,$H$,$K$) and WISE (W1 [3.4$\mu$m],W2 [4.6 $\mu$m],W3 [12 $\mu$m],W4 [22 $\mu$m]) photometry, and ordered by decreasing redshift. The blue line is the quasar SED from \citet{Richards06}, over plotted with synthetic SDSS, UKIDSS and WISE photometry (blue squares). The red line shows the best-fit reddened quasar template to the photometric points. The green line shows the best two-component fit including a reddened quasar plus the starburst/ULIRG galaxy template from \citet{Polletta07}.
The image inset is an inverted 11.9\arcsec $\times$ 11.9\arcsec grayscale image from SDSS.}\label{fig:sed1}
\end{figure*}
\begin{figure*}
\figurenum{8b}
\plotone{f8b.pdf}
\caption{{\em Continued.} Optical-through-near-infrared spectra of UKFS quasars. }\label{fig:sed2}
\end{figure*}
\begin{figure*}
\figurenum{8c}
\plotone{f8c.pdf}
\caption{{\em Continued.} Optical-through-near-infrared spectra of UKFS quasars. }\label{fig:sed3}
\end{figure*}
\begin{figure*}
\figurenum{8d}
\plotone{f8d.pdf}
\caption{{\em Continued.} Optical-through-near-infrared spectra of UKFS quasars. }\label{fig:sed4}
\end{figure*}
\begin{figure*}
\figurenum{8e}
\plotone{f8e.pdf}
\caption{{\em Continued.} Optical-through-near-infrared spectra of UKFS quasars. }\label{fig:sed5}
\end{figure*}
Two sources, UKFS$0156-0058$ and UKFS$0135-0043$, have highly divergent spectral fits (Figure \ref{fig:spec3} and Figures \ref{fig:sed4} and \ref{fig:sed5}). UKFS$0156-0058$ shows a strong red continuum in its near infrared spectrum. However, this rise does not continue toward the mid-infrared. Rather, the SED shows a bump with a peak around $\sim 2.5 \mu$m. If this feature is a signature of hot dust close to the sublimation radius, then its temperature would be $\sim 1200$ K, which has been observed in luminous quasars \citep{Glikman06,Netzer07,Mor11} as well as F2M red quasars \citep{Urrutia12}, but not usually so luminous as to surpass the direct emission from the AGN itself. In the case of UKFS$0135-0043$, the SDSS image shows a close double source with a red and blue component. These sources are so close that they fall within the SDSS fiber diameter of 3\arcsec. Given the unknown contributions from the blue and red objects -- which may be evidence for a merging system -- it is not surprising that this object does not yield consistent $E(B-V)$ values from the different fits. We note, however, that the SED fit yields $E(B-V) = 0.61$ while the optical and combined spectral fits yield $E(B-V) = 0.69$ and 0.70, respectively. Only the $E(B-V)$ from the fit to the near-IR spectrum diverges, with a value of $-0.22$.
Using our estimates of the $E(B-V)$ from the SED fits we de-redden the UKFS quasars and examine their intrinsic properties.
In Figure \ref{fig:abskz} we plot the de-reddened absolute $K$-band magnitudes (in the observed frame) for the UKFS quasars versus redshift.
The sources are color-coded by the amount of reddening, with yellow circles representing a small amount of reddening ($E(B-V)\sim 0.1-0.5$), red circles representing large amounts of reddening ($E(B-V)\sim 1-1.5$), and orange circles in between.
This figure plots the F2M quasars \citep[similar to Figure 15 of][]{Glikman12}, with the UKFS quasars emphasized by thick black circles. The small black points represent the FBQS and SDSS comparison samples described in Section \ref{sec:surfdens}. The dotted lines show our sensitivity to finding quasars with different amounts of reddening down to the survey limit of $K=17$. The dashed line shows the 2MASS survey limit of $K=15.5$.
To evaluate the goodness of the SED fits, we compute the reduced $\chi^2$ statistic (dividing $\chi^2$ by eleven degrees of freedom: thirteen photometric points, minus the normalization and $E(B-V)$, which are the two free parameters of our fits).
We quote this value in the legend of each panel ($\chi^2_{\rm red}:$ QSO).
We see that, in general, the reddened quasar SED (red line in Figure \ref{fig:sed1}) does not fit the shape of the measured SED of the UKFS quasars (black line) well across the full wavelength range. If these systems are indeed quasars in a post-merger early evolutionary phase, then their SEDs -- especially in the infrared -- may be strongly affected by star formation signatures, e.g., hot dust and PAH emission. Detailed infrared spectroscopy of Palomar-Green quasars \citep{Schweitzer06,Netzer07}, as well as of a small sample of F2M red quasars \citep{Urrutia12}, finds that there is great diversity in the mix of star formation and AGN contributions, although the AGN dominates in most of the F2M red quasars (with $L_{\rm QSO}/L_{\rm FIR~SB} >2 - 60$ for all but one source). The picture becomes more complicated for lower luminosity quasars, whose relative contributions from stars and nuclear activity begin to rival each other.
\subsection{The Effect of a Host Galaxy Component to the Reddening Fits}
To investigate the impact of a host galaxy on our fits, we followed the approach in \citet[Eqn. 3]{Glikman07} and tried adding emission from a host galaxy to the model:
\begin{equation}
f(\lambda) = Af_{\rm gal}(\lambda) + Bf_{\rm QSO}(\lambda)e^{-\tau_\lambda} \label{eqn:qso_gal}.
\end{equation}
We used the galaxy templates from \citet{Polletta07} and settled on the starburst/ULIRG template of Arp 220, whose deep silicate absorption and hot dust bump beyond 10 $\mu$m resemble the features seen in the WISE photometry of some of our sources.
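A minimal version of this two-component fit can be written as a grid search over $E(B-V)$ with the two amplitudes solved linearly at each step. The sketch below assumes the observed SED and both templates are sampled at the same wavelengths and writes $\tau_\lambda = k(\lambda)E(B-V)/1.086$; it is illustrative rather than the exact fitting code used here.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def two_component_fit(flux_obs, flux_gal, flux_qso, k_smc, ebv_grid):
    """Grid search for Eqn. (qso_gal): f = A*f_gal + B*f_qso*exp(-tau).
    A and B are solved by non-negative least squares at each trial
    E(B-V); returns (residual, E(B-V), A, B) of the best fit."""
    flux_obs = np.asarray(flux_obs, dtype=float)
    k_smc = np.asarray(k_smc, dtype=float)
    best = (np.inf, None, None, None)
    for ebv in ebv_grid:
        tau = k_smc * ebv / 1.086
        basis = np.column_stack([flux_gal, flux_qso * np.exp(-tau)])
        (a, b), resid = nnls(basis, flux_obs)
        if resid < best[0]:
            best = (resid, ebv, a, b)
    return best

# e.g., resid, ebv, A, B = two_component_fit(sed, arp220, qso_template,
#                                            k_smc, np.arange(0, 6.01, 0.05))
\end{verbatim}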
For the three $z\sim 0.6$ sources, UKFS2230+0022, UKFS0012$-$0045, and UKFS0223+0010, that lie in the LIRG region of Figure \ref{fig:wise}, the fit improved significantly with the addition of a ULIRG template, primarily because of a dip at WISE wavelengths that could correspond to a silicate absorption feature commonly seen in ULIRGs. Their implied galaxy luminosities, while high, are within plausible limits for LIRGs ($M_K = M_{H,{\rm rest}} \simeq -25.9$ or $L_{\rm near-IR} \simeq 5\times10^{11} L_\odot$). We plot the quasar component of the two-component fit for these sources with colored squares in Figure \ref{fig:abskz}\footnote{We plot these sources with circles and squares to represent their derived values from both models.} and add a row in Table 2 listing their reddening and luminosities from the two-component fit. They are significantly redder ($E(B-V) \simeq 1 - 2$) and less luminous ($M_K \simeq -24.9$ to $-25.6$).
In another three quasars, UKFS0158+0027, UKFS0134+0003, UKFS0908+0528, the more complex model yields a lower $\chi^2$ (labeled QSO+ULIRG in Figure \ref{fig:sed1}). We also plot these sources with squares in Figure \ref{fig:abskz} and list their reddening and luminosity values in Table 2. In UKFS0158+0027 at $z=1.350$ (shown in the top panel of Figure \ref{fig:sed2}), the two-component model (green line) yields a better fit than the quasar-only model. However, the reddening determined from this is $E(B-V) = 3.90$, which implies a de-reddened quasar luminosity of $M_K = -31.94$. Such a high reddening value is unlikely given that we see a strong, broad ($\sim 2000$ km s$^{-1}$) H$\alpha$ line in its near-infrared spectrum. For UKFS0135+0003, shown in the middle panel of Figure \ref{fig:sed2}, the inclusion of a host component in the fit also improves $\chi^2$ significantly. In addition, the reddening, $E(B-V) = 1.17$, and de-reddened quasar luminosity, $M_K = -26.84$, are reasonable. The WISE colors of this source (W1$-$W2$=0.83$ and W2$-$W3$=2.38$) place it just outside the quasar region, on the blue end, not near the LIRGs, but its image shows a nearby companion, likely contaminating the WISE photometry (which is not well fit by either model). Finally, UKFS0908+0528 does have an improved $\chi^2$ when fit by a two-component model, but neither model produces a satisfactory fit to the optical photometry. Furthermore, the reddening derived from the two-component fit is $E(B-V) = 2.91$ with a de-reddened quasar luminosity of $M_K = -29.50$. This reddening also seems high given the strong, broad ($\sim 2600$ km s$^{-1}$) H$\alpha$ seen in its near-infrared spectrum.
Therefore, we conclude that 10 of the 14 quasars are most likely dominated by quasar continuum emission, whereas in the other four (the three LIRG-like sources and UKFS0135+0003), the galaxy could contribute significantly. For the remaining ten sources, the two-component fits result in extremely luminous host galaxies ($M_K \simeq -27$ to $-30$). Furthermore, the two-component fits imply that the quasar is far more heavily reddened than in the single-component fit, with $E(B-V)\simeq 2-6$, which translates into extinction-corrected luminosities that are as high as or higher than what we estimate from the single-component fit. In addition, all of our sources exhibit broad emission lines, so we expect a quasar SED to be the dominant component.
If instead these 10 red quasars actually do lie in galaxies that are more luminous than has ever been seen before (the $z>2$ sources would have $L>10^{13}L_\odot$ in the $R$-band), such an extraordinary claim could be investigated with better sampled photometry. We also note that, when the SED has been better defined, with {\em Spitzer} IRS spectra \citep[Section 4 of][]{Urrutia12}, the galaxy contribution was negligible. At present, our 13 photometric data points prevent us from disentangling the true nature of the reddened quasar SEDs, which are more complicated than a two component model can describe. These objects are in the company of other newly-discovered, complicated systems that are not yet fully understood and not well matched to any known SEDs, e.g. the ``hot DOGs" in \citet{Wu12} or the dust rich quasars in \citet{Dai12}.
\citet{Glikman07,Glikman12} identified several quasars whose flux variability between the epochs of their optical and near-infrared observations mimicked reddening, but which, upon spectroscopic observations, revealed a normal blue quasar. These objects accounted for $<10\%$ of the sample. Although it is possible that quasar variability may have contaminated the UKFS candidate sample and possibly affected our reddening estimates, examination of the data suggests that it is not a significant issue. None of our spectroscopically-confirmed quasars appear to have strong blue continua.
In addition, UKIDSS observed two filters simultaneously ($Y$,$J$ and $H$,$K$; \citealp{Dye06}). Therefore, while the $J$- and $K$-band images were taken at different times, we see no sharp discontinuities from $Y$ and $J$ to $H$ and $K$. Furthermore, the SDSS photometry is taken near-simultaneously as a result of the drift-scanning design of the survey \citep{York00}, and the WISE bands are also observed simultaneously. Since the reddening is most sensitive to the shorter wavelengths in the SED, it is the SDSS photometry that most constrains our estimates of $E(B-V)$; we therefore rule out strong variability effects.
Another, more challenging, explanation for the divergences seen between the observed quasar SEDs and the reddened template SEDs in Figure \ref{fig:sed1} is that the dust law is not well known. We use the SMC dust law because it is empirically the best fit to red quasar spectra. However, a better understanding of the composition and extinction properties of the dust obscuring red quasars is needed, and is beyond the scope of this paper. In the discussion that follows, we adopt the (relatively conservative) $E(B-V)$ values from the single-component fits to a quasar SED.
\section{Where are the heavily-reddened high-redshift quasars?}
\begin{figure}
\epsscale{1.2}
\plotone{f9.pdf}
\caption{De-reddened absolute $K$ magnitude versus redshift for red quasars. UKFS quasars are identified by thick black circles, the remaining circles are the F2M quasars from Glikman et al. (2012). The circles are color coded by the amount of reddening, based on the broad band photometric fits and annotated in the legend. The quasar component of the three sources whose WISE colors are consistent with LIRGs and whose SEDs are better-fit by a two-component model are plotted with square symbols. The small black points are the same FBQS and SDSS quasars that are plotted in Figure \ref{fig:sd}. The dashed line indicates the sensitivity limit of the F2M survey ($K=15.5, E(B-V) = 0$) while the dotted lines show the sensitivity limit of UKFS ($K=17$) with increasing amounts of extinction ($E(B-V) = 0.0, 0.5, 1.0$). We recover moderately-reddened quasars at high redshift ($z>2$) that lie just below the 2MASS limit.}\label{fig:abskz}
\end{figure}
Our intention with this red quasar survey was to recover heavily reddened ($E(B-V) \gtrsim 1$) quasars at high redshift ($z\gtrsim 2$) that were not found in the F2M survey. Figure \ref{fig:abskz} shows that we do find heavily reddened quasars at low redshifts ($z\lesssim 1$), but at $z>2$ our sources are all moderately reddened. The SED fitting issues that arose for many of the sources at $z<2$ are not a concern for the three highest redshift quasars, as the quasar-only model produces reasonable fits and the two-component fits yield unphysically luminous hosts. Why don't we find heavily reddened quasars at high redshifts, despite our deeper survey?
Because of the relatively shallow $K\simeq 15.5$ limit of 2MASS, we had found only lightly reddened quasars at high redshifts. In addition, these objects need to be at the very luminous end of the quasar luminosity function (QLF), and are therefore extremely rare: there are only 10 F2M quasars with $E(B-V)>0.2$ and $z>2$ over 9030 deg$^2$.
However, even with this deeper UKIDSS limit, the quasars at $z\gtrsim 2$ with $E(B-V) \gtrsim 1$ must still be extremely luminous ($M_K \lesssim -31.0$). Because these objects are very rare -- the surface density of FBQS quasars with $1.7 < z < 2.7$ and $M_K<-31$ is $\rho_{\rm QSO} = 0.013$ deg$^{-2}$ -- they are not likely to be found in the 190 deg$^2$ of this survey. Assuming that the same reddening distribution exists at high redshift as we see in the F2M sample at low redshift (e.g., $z<0.8$), and assuming that red quasars make up $\sim 20\%$ of the number of unreddened quasars, we can estimate the area, $A$, needed for a survey to detect at least one heavily reddened quasar at $z\sim 2$:
\begin{equation}
A = 1/(0.2 \times \rho_{\rm QSO} \times f_{E(B-V)}), \label{eqn:red_area}
\end{equation}
where $f_{E(B-V)}$ is the fraction of red quasars redder than a chosen $E(B-V)$, based on the distribution of reddenings of F2M red quasars at low redshift. Equation \ref{eqn:red_area} shows that one needs at least $\sim 500$ deg$^2$ to find one quasar with $E(B-V)>0.5$, given that $f_{E(B-V)} = 0.68$. More heavily reddened quasars ($E(B-V)\ge1$) are even rarer, with $f_{E(B-V)} = 0.2$, requiring $\sim 2000$ deg$^2$ of survey area to find a single such source.
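Plugging the numbers quoted above into Equation \ref{eqn:red_area} reproduces these areas directly; a trivial sketch of the arithmetic is given below.
\begin{verbatim}
def survey_area_needed(rho_qso=0.013, red_fraction=0.2, f_ebv=0.68):
    """Equation (red_area): survey area (deg^2) needed to expect one
    reddened quasar, given the unreddened quasar surface density,
    the overall red-quasar fraction, and f_ebv."""
    return 1.0 / (red_fraction * rho_qso * f_ebv)

# survey_area_needed(f_ebv=0.68)  # ~570 deg^2 for E(B-V) > 0.5
# survey_area_needed(f_ebv=0.2)   # ~1900 deg^2 for E(B-V) >= 1
\end{verbatim}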
These estimates are conservative, based on reddenings derived from our single component quasar SED fits. If some host galaxy component is present in the SEDs then the quasar component would be redder (in order to conserve the total observed flux), which means that less area may need to be probed. It is also likely that there is evolution in the reddening distribution and that the density of reddened sources increases with redshift, as the merger rate rises toward $z\sim 2$. However, a larger sample than the one presented here is needed to disentangle the effects of the $K$-correction, evolution of the QLF, selection effects and small number statistics before any statements about the evolution of reddening can be made.
Of course, a survey that does not rely on the radio would increase the sample size by a factor of $\sim 10$, having the same effect as increasing the survey area by the same factor (assuming the radio emission is independent of reddening). Eliminating the radio-detection requirement is not trivial, however, as the number of contaminants rises significantly, especially if the morphological criterion is also removed. We explored the effect of dropping our FIRST detection criterion in the UKIDSS DR1 by selecting sources with $(J-K)_{\rm Vega} > 1.2$, $g_{\rm AB} - J_{\rm Vega} > 1.9$ and $K_{\rm Vega}<17$ and no morphological restriction to see if we recover any red quasar spectra in public spectroscopic databases. Out of 6857 sources obeying the aforementioned criteria (which were not visually inspected) only ten had a spectrum in the SDSS DR9 spectroscopic database; three are classified as early type galaxies and the remaining seven are stars (of spectral types K5 through M9). Another three emission-line galaxy spectra were identified from the WiggleZ Dark Energy Spectroscopic survey \citep{Drinkwater10} as well as 136 LRGs in the 2SLAQ catalog \citep{Cannon06}. No quasars were found. We conclude that radio selection affords us great efficiency and additional wavelength constraints, such as infrared color selection with WISE, would be required in order to effectively find radio quiet red quasars.
\section{Conclusions}
We have presented a pilot survey -- the UKFS survey -- to identify reddened quasars in the FIRST and UKIDSS surveys, initially focusing on the 190 deg$^2$ of the UKIDSS first data release. Combining these data with optical photometry from SDSS, we applied the color cuts $r-K>5$ and $J-K>1.5$ and selected 87 candidates with $K\le17$. We have spectroscopic observations of 64 of our candidates, amounting to 74\% completeness, but we are 95\% spectroscopically complete below $K=16.5$ mag. The UKFS survey finds 14 quasars, eight of which are presented here for the first time. Their redshifts extend to $z\sim 2.5$ and their space density rises steeply toward fainter magnitudes. We find that red quasars make up 17\% of quasars based on their {\em apparent} magnitudes. If we exclude the three LIRG-like sources whose nature is more ambiguous, this fraction falls to 9\%. Our sample is not large enough to extract the extinction-corrected $K$-band distribution or the intrinsic fraction of red quasars.
We compare our method of red quasar selection to the KX-method and find that the methods are consistent. However, any red quasar selection technique that restricts candidates to morphologically stellar sources will miss most red quasars. Most of the quasars in our sample are not classified as stellar in SDSS or UKIDSS. However, including candidates with extended morphologies adds significant numbers of red galaxy contaminants to the survey.
We examine the infrared colors of the red UKFS quasars from WISE and find that their W1$-$W2 vs.\ W2$-$W3 colors are mostly consistent with those of unreddened quasars, though some sources have colors more similar to LIRGs and/or spiral galaxies. Combining WISE colors with optical to near-infrared color selection minimizes contamination from red galaxies yet allows us to still include extended morphologies and to drop the requirement of a radio detection.
We analyze the SEDs of the UKFS quasars and use broad band photometry along with optical and near-infrared spectroscopy to derive their reddening, $E(B-V)$. Even with the increased depth of UKIDSS we do not find heavily reddened ($E(B-V)\gtrsim0.5$) quasars at high redshifts ($z>2$). To find heavily reddened quasars at high redshift, we require a larger survey area, a deeper flux limit, and/or selection at longer wavelengths that are less affected by dust. The results of this survey are a first step toward this end.
\acknowledgments
We thank Nadia Lara, whose summer research project -- supported through Caltech's FSRI program -- helped with the candidate selection.
EG is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-0901994.
SGD acknowledges partial support from the NSF grant AST-0909182.
We are grateful to the Palomar Observatory staff for their assistance during observing runs. We thank Yale University for support of SDSS-III participation. \\
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
The UKIDSS project is defined in \citet{Lawrence07}. UKIDSS uses the UKIRT Wide Field Camera \citep[WFCAM;][]{Casali07}. The photometric system is described in \citet{Hewett06}, and the calibration is described in \citet{Hodgkin09}. The pipeline processing and science archive are described in Irwin et al.\ (2009, in prep) and \citet{Hambly08}.
Funding for the creation and distribution of the SDSS Archive has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, University of Florida, the French Participation Group, the German Participation Group, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
{\it Facilities:} \facility{Sloan}, \facility{VLA}, \facility{Hale (TripleSpec)},
\section{Introduction}
A fundamental result in the theory of integrable systems is the correspondence between matrix Lax equations with a spectral parameter and linear flows on the affine Jacobian of a spectral curve. In its simplest version, this result can be stated as in \cite{Beau}: there is a 1-1 correspondence between the affine Jacobian $\operatorname{Jac} S-\Theta$ of a smooth compact curve $S\in |{\mathcal{O}}(kd)|$ of degree $k$ and $GL(k,{\mathbb C})$-conjugacy classes of ${\mathfrak g \mathfrak l}(k,{\mathbb C})$-valued polynomials $A(\zeta)=\sum_{i=0}^d A_i\zeta^i$, the spectrum of which is $S$.\par
Thus, up to conjugation, a matricial polynomial can be recovered from algebro-geometric data associated to its spectrum. Our first aim in the present work is to recover more of the matricial polynomial than just ``up to conjugation". We show in \S3 how one can recover conjugacy classes of $A(\zeta)$ with respect to proper subgroups of $GL(k,{\mathbb C})$, particularly with respect to a maximal parabolic $P$ and a maximal torus $T$. The conjugacy classes with respect to $P$ correspond to divisors on $S$, rather than to line bundles, while the conjugacy classes with respect to $T$ correspond to a sequence of curves $S_1,S_2,\dots, S_k=S$ with $S_i\in |{\mathcal{O}}(id)|$ satisfying additional conditions (see Theorem \ref{O(2)}). The curves $S_i$ are given by the {\em Gelfand-Zeitlin map} \cite{GKL,KW,Bie-Pidst} with a spectral parameter.
\par
The $d=2$ case of the above setting is closely related to hyperk\"ahler geometry and, in particular, to (pseudo-)hyperk\"ahler metrics which can be obtained by means of the generalised Legendre transform (GLT) of Lindstr\"om and Ro\v{c}ek \cite{LR} (see sections \ref{hk} and \ref{glt} for a review of hyperk\"ahler metrics and of the GLT). This class includes all toric hyperk\"ahler manifolds \cite{BD}, the natural metrics on the moduli spaces of $SU(2)$-monopoles \cite{IR,Hough}, and the gravitational instantons of types $A_k$ \cite{DM} and $D_k$ \cite{CRW, CK, CH}.
\par
From one point of view \cite{TQ, DM}, the metrics constructed via GLT are those with a generalised symmetry of maximal rank, i.e. the twistor projection $Z^{2n+1}\rightarrow {\mathbb{P}}^1$ factorises via a vector bundle $Z^{2n+1}\rightarrow E\rightarrow {\mathbb{P}}^1$ of rank $n$, with the fibres of $Z^{2n+1}\rightarrow E$ being Lagrangian for the twisted symplectic form of $Z$. The bundle $E$ splits as $\bigoplus_{j=1}^n {\mathcal{O}}(2r_j)$ and $r_j=1$ yields a genuine symmetry of the hyperk\"ahler structure. The hyperk\"ahler structure is then recovered from a single function $F$ on the space $V$ of (real) sections of $E$. The hyperk\"ahler manifold $M$ is a torus (or another abelian group) bundle over a submanifold $X\subset V$, with the dimension of the torus equal to $\#\{j;r_j=1\}$. The submanifold $X$ is the image of the generalised moment map on $M$.
The function $F$ can be obtained as a contour integral of a holomorphic function $G$ of $n+1$ variables (or a sum of such).
The functions $G$ and $F$ are often found by ad hoc methods, depending on the example, and it is one of the aims of this paper to present a formal and systematic derivation of $F$ for a class of hyperk\"ahler manifolds obtained via GLT.
\par
The holomorphic function $G$ can be singular or multi-valued, and in the present paper we consider the case of GLT, where the function $G$ arises from an element of $H^1(D,{\mathcal{O}})$ for some branched covering $D$ of $E=\bigoplus_{j=1}^n {\mathcal{O}}(2r_j)$. For example, when $E={\mathcal{O}}(2)\oplus {\mathcal{O}}(4)$ we can consider a $2$-fold covering: $$D_1=\{(\eta,\alpha_1,\alpha_2)\in {\mathcal{O}}(2)\oplus {\mathcal{O}}(2)\oplus {\mathcal{O}}(4);\, \eta^2+\alpha_1\eta+\alpha_2=0\},$$ or a $3$-fold one: $$D_2=\{(\eta,\alpha_1,\alpha_2)\in {\mathcal{O}}(2)\oplus {\mathcal{O}}(2)\oplus {\mathcal{O}}(4);\, (\eta+\alpha_1)(\eta^2+\alpha_2)=0\}.$$ The space $V$ should be now viewed as a space of spectral curves - all compact curves in $|{\mathcal{O}}(4)|$ for $D_1$, and a subset of the set of reducible compact curves in $|{\mathcal{O}}(6)|$ for $D_2$. Our first observation is that, if $D$ is chosen so that $V$ corresponds to {\em all} compact reducible curves $S$ of the form $S=S_1\cup\dots\cup S_k$, with components $S_i\in |{\mathcal{O}}(2m_i)|$, $i=1,\dots,k$, then the submanifold $X$ of $V$ (the image of the generalised moment map on $M$) corresponds to curves $S$ on which a certain line bundle is trivial (see \S\ref{glt_spec}). We then restrict ourselves further, to hyperk\"ahler metrics, the twistor space of which can be trivialised using spectral curves and sections of line bundles. This is the case for $SU(2)$-monopole metrics and for ($A_k$- and $D_k$-) gravitational instantons, and we show that any such hyperk\"ahler manifold can be constructed via GLT. Moreover, we compute explicitly the function $F$ for such a manifold. The examples include $SU(N)$-monopole metrics, asymptotic monopole metrics considered in \cite{clusters}, and, somewhat surprisingly, hyperk\"ahler metrics on regular adjoint orbits of $GL(k,{\mathbb C})$. In this last example, the work done in \S3 (particularly Theorem \ref{O(2)}) plays a crucial role.
\par
Conversely, given a space $V$ of spectral curves and a line bundle $K$ on $T{\mathbb{P}}^1$ with $c_1(K)=0$, we write down a function $F:V\rightarrow {\mathbb{R}}$, the GLT of which produces a pseudo-hyperk\"ahler metric with twistor space trivialised using spectral curves and sections of $K$. This gives a huge family of hyperk\"ahler metrics, analogous to monopole metrics. We observe, for example, that there is a ``master metric" from which all (pseudo-)hyperk\"ahler metrics on regular adjoint orbits of $GL(k,{\mathbb C})$ can be obtained as twistor quotients. Moreover, just as monopole metrics correspond to Nahm's equations, these metrics correspond to other integrable systems, defined by ODEs on triples of matrices. Essentially, $K$ can be determined by a harmonic polynomial on ${\mathbb{R}}^3$, with Nahm's equations corresponding to a quadratic polynomial. In the last section, we consider briefly the matrix-valued ODEs corresponding to a cubic harmonic polynomial.
\section{Line bundles and matricial polynomials}
We give here (mostly following \cite{Theta}) a brief summary of spectral curves and line bundles, including Beauville's theorem.
In what follows, ${\mathbb{T}}$ denotes the total space of the line bundle ${\mathcal{O}}(d)$ on ${\mathbb{P}}^1$, $\pi:{\mathbb{T}}\rightarrow {\mathbb{P}}^1$ is the
projection, $\zeta$ is the affine coordinate on ${\mathbb{P}}^1$ and $\eta$ is the fibre coordinate on ${\mathbb{T}}$. In other words ${\mathbb{T}}$ is obtained by gluing
two copies of ${\mathbb{C}}^2$ with coordinates $(\zeta,\eta)$ and $(\tilde{\zeta},\tilde{\eta})$ via:
$$ \tilde{\zeta}=\zeta^{-1}, \quad \tilde{\eta}=\eta/\zeta^d.$$
We denote the corresponding two open subsets of ${\mathbb{T}}$ by $U_0$ and $U_\infty$.
Let $S$ be a compact algebraic curve in the linear system ${\mathcal{O}}(dk)$, i.e. over $\zeta\neq \infty$ $S$ is defined by the equation
\begin{equation} P(\zeta,\eta)= \eta^k+a_1(\zeta)\eta^{k-1}+\cdots +a_{k-1}(\zeta)\eta+ a_k(\zeta)=0,\label{S}\end{equation}
where $a_i(\zeta)$ is a polynomial of degree $di$. $S$ can be singular or non-reduced.
\par
We recall the following facts (see, e.g., \cite{AHH}):
\begin{proposition} The group $H^1({\mathbb{T}},{\mathcal{O}}_{\mathbb{T}})$ (i.e. line bundles on ${\mathbb{T}}$ with zero first Chern class) is generated by $\eta^i\zeta^{-j}$, $i>0$, $0<j<di$. The corresponding line bundles have transition functions $\exp(\eta^i\zeta^{-j})$ from $U_0$ to $U_\infty$.\hfill $\Box$\label{T}\end{proposition}
\begin{proposition} The natural map $H^1({\mathbb{T}},{\mathcal{O}}_{\mathbb{T}})\rightarrow H^1(S,{\mathcal{O}}_S)$ is a surjection, i.e. $H^1(S,{\mathcal{O}}_S)$ is generated by $\eta^i\zeta^{-j}$, $0<i\leq k-1$, $0<j<id$.\hfill $\Box$\label{all}\end{proposition}
Thus, the (arithmetic) genus of $S$ is $g=(k-1)(dk-2)/2$.
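Indeed, counting the monomials $\eta^i\zeta^{-j}$ with $0<i\leq k-1$, $0<j<id$ gives
$$ \sum_{i=1}^{k-1}(id-1)=\frac{dk(k-1)}{2}-(k-1)=\frac{(k-1)(dk-2)}{2}.$$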
For a smooth $S$, the last proposition describes line bundles of degree $0$ on $S$.
In general, by a line bundle we mean an invertible sheaf. Its degree is defined as its Euler characteristic plus $g-1$. The theta divisor $\Theta$ is the set of line bundles of degree $g-1$ which have a non-zero section.
\par
Let ${\mathcal{O}}_{\mathbb{T}}(i)$ denote the pull-back of ${\mathcal{O}}(i)$ to ${\mathbb{T}}$ via $\pi:{\mathbb{T}}\rightarrow {\mathbb{P}}^1$. If $E$ is a sheaf on ${\mathbb{T}}$ we denote by $E(i)$ the sheaf
$E\otimes {\mathcal{O}}_{\mathbb{T}}(i)$ and similarly for sheaves on $S$. In particular, $\pi^\ast {\mathcal{O}}$ is identified with ${\mathcal{O}}_S$. We note that the canonical bundle $K_S$ is isomorphic to ${\mathcal{O}}_S(d(k-1)-2)$.
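This follows, for instance, from adjunction: the canonical bundle of ${\mathbb{T}}$ is $K_{\mathbb{T}}\simeq{\mathcal{O}}_{\mathbb{T}}(-d-2)$, while $S$ is cut out by a section of ${\mathcal{O}}_{\mathbb{T}}(dk)$ (the polynomial \eqref{S}), so that
$$ K_S\simeq\bigl(K_{\mathbb{T}}\otimes{\mathcal{O}}_{\mathbb{T}}(dk)\bigr)_{|S}\simeq{\mathcal{O}}_S(dk-d-2)={\mathcal{O}}_S(d(k-1)-2).$$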
\par
If $F$ is a line bundle of degree $0$ on $S$, determined by a cocycle $q\in H^1({\mathbb{T}},{\mathcal{O}}_{\mathbb{T}})$, and $s\in H^0\bigl(S, F(i)\bigr)$, then we denote by
$s_0,s_\infty$ the representation of $s$ in the trivialisation $U_0,U_\infty$, i.e.:
\begin{equation} s_\infty(\zeta,\eta)=\frac{e^q}{\zeta^i}s_0(\zeta, \eta).\label{represent}\end{equation}
We recall the following theorem of Beauville \cite{Beau}:
\begin{theorem} There is a $1-1$ correspondence between the affine Jacobian $J^{g-1}-\Theta$ of line bundles of degree $g-1$ on $S$ and $GL(k,{\mathbb C})$-conjugacy classes of ${\mathfrak g \mathfrak l}(k,{\mathbb C})$-valued polynomials $A(\zeta)=\sum_{i=0}^d A_i\zeta^i$ such that $A(\zeta)$ is regular (i.e. its centraliser is $1$-dimensional) for every $\zeta$ and the characteristic polynomial of $A(\zeta)$ is \eqref{S}.\hfill ${\Box}$\label{Beauville} \end{theorem}
The correspondence is given by associating to a line bundle $E$ on $S$ its direct image $V=\pi_\ast E$, which has a structure of a $\pi_\ast {\mathcal{O}}$-module. This is the same as a homomorphism $A:V\rightarrow V(d)$ which satisfies \eqref{S}. The condition $E\in J^{g-1}-\Theta$ is equivalent to $H^0(S,E)=H^1(S,E)=0$ and, hence, to $H^0({\mathbb{P}}^1,V)=H^1({\mathbb{P}}^1,V)=0$, i.e. $V=\bigoplus {\mathcal{O}}(-1)$. Thus, we can interpret $A$ as a matricial polynomial precisely when $E\in J^{g-1}-\Theta$.
Somewhat more explicitly, the correspondence is seen from the exact sequence
\begin{equation} 0\rightarrow {\mathcal{O}}_{\mathbb{T}}(-d)^{\oplus k}\rightarrow {\mathcal{O}}_{\mathbb{T}}^{\oplus k}\rightarrow E(1)\rightarrow 0, \label{bundle}\end{equation}
where the first map is given by $\eta\cdot 1-A(\zeta)$ and $E(1)$ is viewed as a sheaf on ${\mathbb{T}}$ supported on $S$. The inverse map is defined by the commuting diagram
\begin{equation}\begin{CD} H^0\bigl(S,E(1)\bigr) @>>> H^0\bigl(D_{\zeta}, E(1)\bigr)\\ @V \tilde{A}(\zeta) VV @VV \cdot \eta V \\ H^0\bigl(S,E(1)\bigr) @>>> H^0\bigl(D_{\zeta}, E(1)\bigr), \end{CD}
\label{endom}\end{equation} where $D_{\zeta}$ is the divisor consisting of points of $S$ which lie above $\zeta$ (counting multiplicities).
That the endomorphism $\tilde{A}(\zeta)$ has degree $d$ in $\zeta$ is proved e.g. in \cite{AHH}.
In view of the above theorem, we adopt the following definition:
\begin{definition} A matricial polynomial $A(\zeta)=\sum_{i=0}^d A_i\zeta^i$ is called {\em regular}, if $A(\zeta)$ is a regular matrix for every $\zeta$.\label{regular} \end{definition}
\begin{remark} For a singular curve $S$, Beauville's correspondence most likely extends to $\overline{J^{g-1}}-\overline{\Theta}$, where $\overline{J^{g-1}}$ is the compactified Jacobian in the sense of \cite{Alex}. It seems to us that this is essentially proved in \cite{AHH}.\label{comp}\end{remark}
Finally, we recall the following fact (see, e.g. \cite{Beau2,Theta}):
\begin{proposition} Let $A(\zeta)$ be the matricial polynomial corresponding to $E\in J^{g-1}-\Theta$. Then $A(\zeta)^T$ corresponds to $E^\ast \otimes K_S$.\hfill $\Box$ \label{canonical}\end{proposition}
\section{Conjugacy classes with respect to subgroups\label{conj}}
We assume that $S\in |{\mathcal{O}}(dk)|$ is a compact reduced curve of degree $k$. We want to describe the conjugacy classes of matricial polynomials $A(\zeta)=\sum_{i=0}^d A_i\zeta^i$ with respect to several subgroups of $GL(k,{\mathbb C})$:
\begin{itemize}
\item maximal parabolic:
$$ P=\left\{\begin{pmatrix} g & \ast \\ 0 & m\end{pmatrix}; \enskip g\in GL(k-1,{\mathbb C}), m\in {\mathbb C}^\ast\right\};$$
\item maximal reductive:
$$G_{k-1}=\left\{\begin{pmatrix} g & 0 \\ 0 & m\end{pmatrix}; \enskip g\in GL(k-1,{\mathbb C}), m\in {\mathbb C}^\ast\right\};$$
\item maximal torus $T$, consisting of diagonal matrices.\end{itemize}
We denote by $U_{g+k-1}$ the open subset of effective (Cartier) divisors $D$ of degree $g+k-1=dk(k-1)/2$, such that $[D](-1)\not\in \Theta$. Observe that Proposition \ref{canonical} implies that, if $L$ is a line bundle of degree $g+k-1$ and $L(-1)\not\in \Theta$, then $L^\ast(dk-d-1)\not \in \Theta$. Thus, if $D\in U_{g+k-1}$, then the divisor of any section of ${\mathcal{O}}_S(dk-d)[-D]$ also lies in $U_{g+k-1}$.
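For the first statement, note that $\deg L(-1)=g-1$ and $\bigl(L(-1)\bigr)^\ast\otimes K_S\simeq L^\ast(dk-d-1)$, since $K_S\simeq{\mathcal{O}}_S(dk-d-2)$; hence, by Serre duality and Riemann--Roch,
$$ h^0\bigl(L^\ast(dk-d-1)\bigr)=h^1\bigl(L(-1)\bigr)=h^0\bigl(L(-1)\bigr)=0.$$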
\par
We need the following fact about sections of ${\mathcal{O}}_S(l)$, the proof of which proceeds analogously to that of \cite[Proposition (4.5)]{Hit} or \cite[Lemma (2.16)]{HuMu}.
\begin{lemma} If $l<dk$, then any section $s\in H^0(S,{\mathcal{O}}(l))$ may be written uniquely in the form
$$s=\sum_{i=0}^{[l/d]}\eta^i\pi^\ast c_i,$$
where $c_i\in H^0({\mathbb{P}}^1,{\mathcal{O}}(l-di))$.\hfill $\Box$\label{O(l)}\end{lemma}
In particular, any section of ${\mathcal{O}}_S(dk-d)$ can be written uniquely as $\sum_{i=0}^{k-1}\eta^i\pi^\ast c_i$, with $c_{k-1}$ being a constant. Denoting by $(s)$ the divisor of a section $s$, we define
\begin{equation} R_{dk-d}=\left\{\Bigl(\sum_{i=0}^{k-1}\eta^i\pi^\ast c_i\Bigr)\in |{\mathcal{O}}_S(dk-d)|;\enskip c_{k-1}\neq 0\right\}.\end{equation}
We can describe conjugacy classes with respect to groups $P$ and $G_{k-1}$ in algebro-geometric terms:
\begin{proposition} Let $S$ be a reduced curve defined by \eqref{S}. Let $M_S$ be the space of regular matricial polynomials $A(\zeta)=\sum_{i=0}^d A_i\zeta^i$, $A_i\in {\mathfrak g \mathfrak l}(k,{\mathbb C})$, the characteristic polynomial of which is $P(\zeta,\eta)$. There exist natural bijections:
\begin{itemize}
\item[(i)] $M_S/P \simeq U_{g+k-1}$.
\item[(ii)] $M_S/G_{k-1}\simeq {\mathcal{D}}=\left\{(D,D^\prime)\in U_{g+k-1}\times U_{g+k-1};\enskip D+D^\prime \in R_{dk-d}\right\}.$
\end{itemize}\label{PG}
\end{proposition}
\begin{remark} The assumption that $S$ is reduced (i.e. without multiple components) is not needed in (ii). Although we make use of it in (i), this is probably also unnecessary.\end{remark}
\begin{remark} In (i), the natural projection $M_S/P\rightarrow M_S/GL(k,{\mathbb C})$ corresponds to the map $U_{g+k-1}\rightarrow J^{g-1}-\Theta$ given by $D\mapsto {\mathcal{O}}(dk-d-1)[-D]$.\end{remark}
\begin{remark} In (ii), $D+D^\prime=\bigl(\det(\eta-A_{(k-1)}(\zeta))\bigr)$, where $A_{(k-1)}$ denotes the upper-left $(k-1)\times (k-1)$-minor of $A$. Moreover, the projection on either factor realises ${\mathcal{D}}$ as a ${\mathbb C}^{k-1}$-bundle (not a line bundle - it should be viewed as a $({\mathbb{P}}^{k-1}-{\mathbb{C}}^{k-2})$-bundle) over $U_{g+k-1}$.\end{remark}
We now discuss $T$-conjugacy classes. We consider a parameterised version of the Gelfand-Zeitlin map \cite{GKL,KW,Bie-Pidst}, and associate to $A(\zeta)$ $k$ spectral curves:
\begin{equation} S_m=\left\{(\zeta,\eta)\in {\mathbb{T}};\enskip \det\bigl(\eta\cdot 1- A_{(m)}(\zeta)\bigr)=0\right\},\quad m=1,\dots,k, \label{S_m}\end{equation}
where $A_{(m)}(\zeta)$ is the upper-left $m\times m$ minor of $A(\zeta)$.
We are going to describe only an open subset of $M_S/T$. Set
\begin{equation} M_S^0=\left\{A(\zeta)\in M_S;\enskip \text{$A_{(m)}(\zeta)$ is regular for every $\zeta$ and every $m=1,\dots,k$}\right\}.\label{MS0}\end{equation}
\begin{proposition}
Let $S_1,\dots, S_{k-1},S_k=S$ be curves in ${\mathbb{T}}\simeq \operatorname{Tot}{\mathcal{O}}(d)$ with $S_m\in |{\mathcal{O}}_{\mathbb{T}}(md)|$ for $m=1,\dots,k$. Then there exists a matricial polynomial $A(\zeta)\in M_S^0$ of degree $d$ such that each $S_m$ is defined by \eqref{S_m} if and only if on each $S_m$ with $m\in\{1,\dots,k-1\}$ there exists a divisor $D_m$ of degree $g+m-1=dm(m+1)/2$ satisfying the following conditions:
\begin{itemize} \item[(i)] $D_m$ is a subdivisor of $S_m\cap S_{m+1}$;
\item[(ii)] $H^0\bigl(S_{m}, {\mathcal{O}}(dm-d-1)[-D_m]\bigr)=0$;
\item[(iii)] For each $m\in\{2,\dots,k-1\}$, $[D_{m}-D_{m-1}]\simeq {\mathcal{O}}_{S_m}(d)$.\end{itemize}
Moreover, there is a $1-1$ correspondence between these data and $M_S^0/T$. \label{O(2)}\end{proposition}
The remainder of the section is devoted to a proof of these two propositions. We remark that many arguments are adapted from \cite{HuMu}.
We begin with a given matricial polynomial $A(\zeta)$.
\par
Let $F=E(1)$ be the line bundle given by \eqref{bundle}, i.e. $F=\operatorname{Coker}\bigl(\eta-A(\zeta)\bigr)$. Let $G$ be the line bundle corresponding to $A^T(\zeta)$, i.e., owing to Proposition \ref{canonical}, $G=F^\ast\otimes K_S(2)$. Both $F$ and $G$ are line bundles of degree $g+k-1=dk(k-1)/2$, and
\begin{equation} F\otimes G\simeq K_S(2)\simeq {\mathcal{O}}_S(dk-d).\label{FG}\end{equation}
We consider sections $s,s^\prime$ of $F,G$ obtained by projecting the constant section $e_k=(0,\dots,0,1)^T$ of ${\mathcal{O}}_{\mathbb{T}}^{\oplus k}$. We associate divisors to $A(\zeta)$ via the maps
\begin{equation} \Phi_1:A(\zeta)\mapsto (s^\prime),\quad \Phi_2:A(\zeta)\mapsto \bigl((s),(s^\prime)\bigr).\label{Phi}\end{equation}
From the definition, $\Phi_1$ is $P$-invariant and $\Phi_2$ is $G_{k-1}$-invariant. Moreover, the image of $\Phi_1$ lies in $U_{g+k-1}$, while the image of $\Phi_2$ lies in
$$ \overline{\mathcal{D}}=\left\{(D,D^\prime)\in U_{g+k-1}\times U_{g+k-1};\enskip D+D^\prime \in |{\mathcal{O}}_S(dk-d)|\right\}.$$
We need to show that $\Phi_2$ actually maps into ${\mathcal{D}}$, i.e. that $D+D^\prime\in R_{dk-d}$. In fact, we shall show that
\begin{equation} (s)+(s^\prime)=\bigl(\det(\eta-A_{(k-1)}(\zeta))\bigr),\label{D+D}\end{equation}
i.e. $ss^\prime\in H^0(S,{\mathcal{O}}(dk-d))$ defines the curve $S_{k-1}$.
\par
Let the subscript ${\rm adj}$ denote the classical adjoint:
$$ (\eta-A(\zeta))_{\rm adj}(\eta-A(\zeta))=(\eta-A(\zeta))(\eta-A(\zeta))_{\rm adj}=\det(\eta-A(\zeta))\cdot 1.$$
It follows that $(s)$ coincides with the zero-divisor of $(\eta-A(\zeta))_{\rm adj}e_k$, the latter being a section of $\operatorname{Ker}(\eta-A(\zeta))\simeq G^\ast$. Let us write
$$ A=\begin{pmatrix} B &y \\ x & c\end{pmatrix},\quad B\in {\mathfrak g \mathfrak l}(k-1,{\mathbb C}),\enskip x\in \operatorname{Hom}({\mathbb C}^{k-1},{\mathbb C}), \enskip y\in \operatorname{Hom}({\mathbb C},{\mathbb C}^{k-1}),\enskip c\in {\mathbb C}.$$
Computing minors along the last row gives
\begin{equation} (\eta-A(\zeta))_{\rm adj}e_k=\begin{pmatrix} \vspace{1mm} -(\eta-B(\zeta))_{\rm adj}y\\ \det(\eta-B(\zeta))\end{pmatrix}.\label{s}\end{equation}
Similarly
\begin{equation} (\eta-A(\zeta))^T_{\rm adj}e_k=\begin{pmatrix} \vspace{1mm} -(\eta-B(\zeta))^T_{\rm adj}x^T\\ \det(\eta-B(\zeta))\end{pmatrix}.\label{s'}\end{equation}
These formulae imply that $(s)$ and $(s^\prime)$ are subdivisors of $\bigl(\det(\eta-B(\zeta))\bigr)$.
We now use the Weinstein-Aronszajn formula:
\begin{equation} \det(\eta-A(\zeta))=(\eta-c(\zeta))\det(\eta-B(\zeta))-x(\zeta)(\eta-B(\zeta))_{\rm adj}y(\zeta),\label{WA}\end{equation}
from which we conclude $(s)=\bigl((\eta-B(\zeta))_{\rm adj}y(\zeta)\bigr)$, $(s^\prime)=\bigl(x(\zeta)(\eta-B(\zeta))_{\rm adj}\bigr)$. In addition, $\bigl(\det(\eta-B(\zeta))\bigr)=\bigl(x(\zeta)(\eta-B(\zeta))_{\rm adj}y(\zeta)\bigr)$ (as divisors on $S$). Since at a point of $S\cap S_{k-1}$, $\eta-B(\zeta)$ has corank $1$, $(\eta-B(\zeta))_{\rm adj}$ has rank $1$, and so $(\eta-B(\zeta))_{\rm adj}=uv^T$ for a pair of vectors $u,v$. Therefore $0=x(\zeta)(\eta-B(\zeta))_{\rm adj}y(\zeta)=xuv^Ty$, which means that either $xu=0$ or $v^Ty=0$, and, hence, either $x(\eta-B(\zeta))_{\rm adj}=0$ or $(\eta-B(\zeta))_{\rm adj}y=0$. This proves \eqref{D+D}.
\medskip
We construct the inverse mapping to $\Phi_1$. We need the following lemma.
\begin{lemma}
Let $D\in U_{g+k-1}$. There exists a $D^\prime\in U_{g+k-1}$, such that $D+D^\prime\in R_{dk-d}$.\end{lemma}
\begin{proof} (cf. \cite[p.181]{Hit}). Let $s$ be a section defined by $D$. Since $[D](-1)\not \in \Theta$, $s$ does not vanish identically on any fibre of $\pi:S\rightarrow {\mathbb{P}}^1$. Let $\pi^{-1}(\zeta)$ be a fibre consisting of $k$ distinct points. Since $L={\mathcal{O}}_S(dk-d-1)[-D]\not\in \Theta$, there exists a section $s^\prime$ of ${\mathcal{O}}_S(dk-d)[-D]$ which is non-zero at exactly one point $p\in \pi^{-1}(\zeta)$, and we may assume that $s(p)\neq 0$. Thus, $(ss^\prime)(p)\neq 0$ and $ss^\prime$ vanishes at the remaining $k-1$ points of $\pi^{-1}(\zeta)$. On the other hand, if we had $ss^\prime=\sum_{i=0}^{k-2}\eta^ic_i(\zeta)$, then the vanishing of $ss^\prime$ at $k-1$ points of the fibre would imply that $ss^\prime$ vanishes on the whole fibre, which is a contradiction.\end{proof}
\begin{remark} This lemma is the only place where the assumption that $S$ is reduced is used.\end{remark}
Let $D\in U_{g+k-1}$. The above lemma and Lemma \ref{O(l)} imply that there exists a section of ${\mathcal{O}}_S(dk-d)[-D]$ of the form $\eta^{k-1}+\sum_{i=0}^{k-2}\eta^ic_i(\zeta)$ and, hence, there exists a well-defined $(k-1)$-dimensional subspace $V$ of $H^0(S, {\mathcal{O}}(dk-d)[-D])$ of the form $\sum_{i=0}^{k-2}\eta^ic_i(\zeta)$. Computing the endomorphism $A(\zeta)$ with respect to the flag $\{0\}\subset V\subset H^0(S, {\mathcal{O}}(dk-d)[-D])$ defines the inverse map $\Phi_1^{-1}:U_{g+k-1}\rightarrow M_S/P$.
\medskip
We can now construct the inverse mapping to $\Phi_2$. Let $(D,D^\prime)\in {\mathcal{D}}$. Let $s=\eta^{k-1}+\sum_{i=0}^{k-2}\eta^ic_i(\zeta)$ be the unique, up to a constant multiple, section of ${\mathcal{O}}_S(dk-d)$, whose divisor is $D+D^\prime$.
This time we have a direct sum decomposition $ H^0(S, {\mathcal{O}}(dk-d)[-D^\prime])={\mathbb C} s\oplus V$, where $V$ is defined as for $\Phi_1^{-1}$. Computing the endomorphism $A(\zeta)$ with respect to this decomposition defines the inverse map $\Phi_2^{-1}:{\mathcal{D}}\rightarrow M_S/G_{k-1}$.
\medskip It remains to prove Proposition \ref{O(2)}. The vector space $V$ defined in the construction of $\Phi_2^{-1}$ (for $S=S_k$) can be also viewed as $ H^0(S_{k-1}, {\mathcal{O}}(d(k-1))[-D^\prime])$, where $S_{k-1}$ is defined by \eqref{S_m}. Thus, $A_{(k-1)}(\zeta)$ is the endomorphism \eqref{endom} for a particular basis of $ H^0(S_{k-1}, {\mathcal{O}}(d(k-1))[-D^\prime])$. It follows, again from the construction of $\Phi_2^{-1}$, that $A(\zeta)$ is determined, up to conjugation by the centre of $G_{k-1}$, by $A_{(k-1)}(\zeta)$ and the divisor $D_{k-1}=D^\prime$ on $S_{k-1}$. Applying now the argument to $A_{(k-1)}(\zeta)$, and so on, we get the divisors $D_m$, $m=k-1,\dots,1$, which (together with the curves $S_m$) determine $A(\zeta)$ up to conjugation by $T$. The $D_m$ clearly satisfy conditions (i) and (ii). Moreover, for every $m=2,\dots,k-1$, the matricial polynomial $A_{(m)}(\zeta)$ corresponds to both ${\mathcal{O}}_{S_m}(dm)[-D_m]$ and to ${\mathcal{O}}_{S_m}(d(m-1))[-D_{m-1}]$, which proves (iii).
\section{Hyperk\"ahler metrics\label{hk}}
A Riemannian metric is called hyperk\"ahler if its holonomy is a subgroup of $Sp(n)$. Thus, a Riemannian manifold is hyperk\"ahler if it has a triple $I,J,K$ of complex structures, which behave algebraically like a basis of imaginary quaternions, and which are covariantly constant with respect to the Levi-Civita connection. We denote by $\omega_I,\omega_J,\omega_K$ the corresponding K\"ahler forms.
\par
There is a corresponding notion of pseudo-hyperk\"ahler metrics in signature $(4p,4q)$.
\subsection{Twistor space}
A hyperk\"ahler structure on a manifold $M$ can be encoded in an algebraic object - the twistor space $Z$. As a manifold, $Z$ is $M\times S^2$, equipped with a complex structure, which is the standard one on $S^2\simeq {\mathbb{P}}^1$, while on the fibre $Z\rightarrow (a,b,c)\in S^2$, it is the complex structure $aI+bJ+cK$ of $M$. The natural projection $\pi:Z\rightarrow {\mathbb{P}}^1$
is holomorphic and $M$ can be identified with a connected component of the space of sections of $\pi$, the normal bundle of which is the direct sum of ${\mathcal{O}}(1)$-s and which are invariant under the antipodal map on $S^2$ (which induces an antiholomorphic involution $\sigma$ on $Z$). Such sections are called {\em twistor lines}. Finally, the K\"ahler forms of $M$ combine to define a twisted holomorphic symplectic form on the fibres of $Z$
\begin{equation}\Omega=\left(\omega_J+\sqrt{-1}\omega_K\right)+2\sqrt{-1} \omega_I\zeta+\left(\omega_J-\sqrt{-1}\omega_K \right)\zeta^2,\label{Omega}\end{equation}
where $\zeta$ is the affine coordinate of ${\mathbb{P}}^1$. Thus, $\Omega$ is an ${\mathcal{O}}(2)$-valued fibrewise symplectic form on $Z$.
\subsection{K\"ahler potentials}
As remarked above, the complex-valued form $\Omega_I=\omega_J+\sqrt{-1}\omega_K$ is a holomorphic symplectic form for the complex structure $I$. The Darboux theorem holds for such forms and we can find a local $I$-holomorphic chart $u_i,z_i$, $i=1,\dots,n$ such that
\begin{equation} \Omega_I=\omega_J+\sqrt{-1}\omega_K=\sum_{i=1}^n du_i\wedge dz_i.\label{omega}\end{equation}
In this local chart, the K\"ahler form $\omega_I$ can be written as
\begin{equation} \omega_I=\frac{\sqrt{-1}}{2} \sum_{i,j}\left(K_{u_i\bar{u}_j}du_i\wedge d\bar{u}_j+ K_{u_i\bar{z}_j}du_i\wedge d\bar{z}_j+ K_{z_i\bar{u}_j}dz_i\wedge d\bar{u}_j +K_{z_i\bar{z}_j}dz_i\wedge d\bar{z}_j\right), \label{omega1}\end{equation}
for a real-valued function $K$ (we write \eqref{omega1} with positive sign, as in our examples the K\"ahler potential is negative). We see that the complex structure $J$ is given by:
\begin{align} J\left(\frac{\partial}{\partial u_i}\right) =\sum_{j=1}^n\left (K_{z_i\bar{u}_j}\frac{\partial}{\partial \bar{u}_j} + K_{z_i\bar{z}_j}\frac{\partial}{\partial \bar{z}_j}\right)\notag\\
J\left(\frac{\partial}{\partial z_i}\right) =\sum_{j=1}^n\left (-K_{u_i\bar{z}_j}\frac{\partial}{\partial \bar{z}_j} - K_{u_i\bar{u}_j}\frac{\partial}{\partial \bar{u}_j}\right).\label{J} \end{align}
Thus the condition $J^2=-1$ gives a system of nonlinear PDE's for $K$. This system is equivalent to the following condition:
\begin{equation}\begin{pmatrix} K_{u_i\bar{u}_j} & K_{u_i\bar{z}_j}\\ K_{z_i\bar{u}_j} & K_{z_i\bar{z}_j}\end{pmatrix} \in Sp(n,{\Bbb C}),\label{sp}\end{equation}
where the symplectic group is defined with respect to the form \eqref{omega}.
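For $n=1$ (real dimension four) this is particularly explicit: $Sp(1,{\mathbb C})=SL(2,{\mathbb C})$, so \eqref{sp} reduces to the single Monge--Amp\`ere-type equation
$$ K_{u\bar u}K_{z\bar z}-K_{u\bar z}K_{z\bar u}=1.$$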
\par
Conversely, suppose that in some local coordinate system $u_i,z_i$ we have a K\"ahler form $\omega_I$ given by a K\"ahler potential $K$ such that this system of PDE's is satisfied. Then, if we define $\omega_J+i\omega_K$ by the formula \eqref{omega}, we obtain a hyperhermitian structure. Moreover, $\omega_J$ and $\omega_K$ are closed, and so, by Lemma 4.1 in \cite{AH}, $J$ and $K=IJ$ are integrable and we have locally a hyperk\"ahler structure. Therefore there is a 1-1 correspondence between K\"ahler potentials satisfying the above system of PDE's and local hyperk\"ahler structures.
\subsection{Twistor lines from a K\"ahler potential\label{twistor_lines}}
From the definition of the twistor space, the hyperk\"ahler structure is determined by the twisted form $\Omega$ and by a family of sections of $\pi:Z\rightarrow {\mathbb{P}}^1$. Let $\zeta=0$ correspond to the complex structure $I$. We can trivialise the twistor space in a neighborhood of a point in $\pi^{-1}(0)$. Let $\zeta,U_1,\dots,U_n,$$Z_1,\dots,Z_n$ be local holomorphic coordinates, so that $\Omega=\sum_{i=1}^n dU_i\wedge dZ_i$ in these coordinates. A twistor line is now a $2n$-tuple of functions $U_i(\zeta),Z_i(\zeta)$. Since there is a unique twistor line passing through every point of $Z$, the functions $U_i(\zeta),Z_i(\zeta)$ are determined by their values $u_i,z_i$ at $\zeta=0$. It follows now from \eqref{Omega} that the K\"ahler form $\omega_I$, and hence the hyperk\"ahler metric, is determined by the values at $\zeta=0$ of first derivatives of $U_i(\zeta),Z_i(\zeta)$ with respect to $\zeta$. More precisely, if
$$ U_i(\zeta)=u_i+p_i\zeta+\dots,\quad Z_i(\zeta)=z_i+q_i\zeta+\dots,$$
then
$$\omega_I=-\frac{\sqrt{-1}}{2}\sum_{i=1}^n\left(du_i\wedge dq_i+dp_i\wedge dz_i\right).$$
Hence, it is enough to know the twistor lines only up to first order (once we trivialise the twistor space near $\zeta=0$).
\par
If $\omega_I$ is given by a K\"ahler potential $K=K(u_i,\bar u_i,z_i,\bar z_i)$, then, comparing the last formula with \eqref{omega1}, we conclude that, up to additive constants, $p_i=K_{z_i}$ and $q_i=-K_{u_i}$. The freedom of adding an arbitrary constant to each $p_i$ and $q_i$ can be incorporated into the choice of a K\"ahler potential, and so, the twistor lines are given, up to the first order, by:
\begin{equation} U_i(\zeta)=u_i+K_{z_i}\zeta+\dots,\quad Z_i(\zeta)=z_i-K_{u_i}\zeta+\dots.\label{lines}\end{equation}
\section{Generalised Legendre transform\label{glt}}
The generalised Legendre transform, invented by Lindstr\"om and Ro\v{c}ek \cite{LR}, is a construction of (pseudo-)hyperk\"ahler metrics whose twistor space admits a special type of Hamiltonians.
It generalises the case of $4n$-dimensional hyperk\"ahler manifolds, the symmetry group of which has rank $n$. Recall that
a tri-Hamiltonian action of a group $H$ on $M$, which extends to a holomorphic action of a complexification $H^{\mathbb C}$ of $H$ for every complex structure, gives rise to a Hamiltonian $\sigma$-equivariant action of $H^{\mathbb C}$ on the twistor space $Z$. The moment map is then a section of $ {\mathfrak{h}}^{\mathbb C}\otimes \pi^\ast{\mathcal{O}}(2)$, where ${\mathfrak{h}}$ is the Lie algebra of $H$.
\par
It happens however, quite often, that the twisted symplectic form $\Omega$ of a twistor space $Z$, of complex dimension $2n+1$, admits $n$ independent Poisson-commuting sections $f_i:Z\rightarrow \pi^\ast{\mathcal{O}}(2r_i)$, where $r_i$ are no longer constrained to be $1$. Each ${\mathcal{O}}(2r_i)$, $r_i\geq 1$, admits a canonical anti-holomorphic involution $\tau$, induced by that of ${\mathcal{O}}(2)\simeq T {\mathbb{P}}^1$, and each $f_i$ is assumed to satisfy $\tau\circ f_i=f_i\circ \sigma$ ($\sigma$ is the anti-holomorphic involution on $Z$ induced by the antipodal map of $S^2$). Such ``completely integrable" hyperk\"ahler manifolds $M$ of quaternionic dimension $n$ are produced by the generalised Legendre transform (GLT), which we proceed to describe.
\par
The maps $f_i$ induce maps $\hat{f}_i$ from the space of sections of $Z$, in particular from the manifold $M$, to the space of sections of ${\mathcal{O}}(2r_i)$, i.e. to the space of polynomials of degree $2r_i$, which we write as
$$ \alpha_i(\zeta)=\sum_{a=0}^{2r_i} w_a^i\zeta^a.$$
The real structure $\tau$ acts on this space by \begin{equation}\tau( w_a^i)=(-1)^{r_i+a}\overline{w^i_{2r_i-a}}\label{sigma}\end{equation} and, consequently, we obtain a map
$$ \hat{f}=\bigl(\hat{f}_1,\dots,\hat{f}_n\bigr):M \rightarrow \bigoplus_{i=1}^n {\Bbb R}^{2r_i+1}.$$
As explained in \cite{LR, HKLR}, $M$ is a torus (or another abelian group) bundle over the image of $\hat{f}$, where the dimension of the torus is equal to $\#\{i;r_i=1\}$. The image of $\hat{f}$ (and the hyperk\"ahler structure of $M$) is, in turn, determined by a function $F: \bigoplus_{i=1}^n {\Bbb R}^{2r_i+1}\rightarrow {\Bbb R}$ satisfying the
system of PDE's:
\begin{equation} F_{w^i_a,w^j_b}=F_{w^i_c,w^j_d}\label{Feq}\end{equation}
for all $a,b,c,d$ such that $a+b=c+d$.
\par
An equivalent characterization of \eqref{Feq} is that $F$ is given by a contour integral of a holomorphic (possibly singular or
multivalued) function of $2n+1$ variables $G=G(\zeta,\alpha_1,\dots,\alpha_n)$
\begin{equation}F(w_a^i)=\oint_c G\bigl(\zeta,\alpha_1(\zeta),\dots,\alpha_n(\zeta)\bigr)/\zeta^2 d\zeta\label{Fint} \end{equation}
where $\alpha_i(\zeta)=\sum_{a=0}^{2r_i} w_a^i\zeta^a$, or a sum of such contour integrals.
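Indeed, since $\partial\alpha_i(\zeta)/\partial w^i_a=\zeta^a$, differentiating \eqref{Fint} under the integral sign gives $F_{w^i_a}=\oint_c G_{\alpha_i}\,\zeta^{a-2}\,d\zeta$, and hence
$$ F_{w^i_a,w^j_b}=\oint_c G_{\alpha_i\alpha_j}\,\zeta^{a+b-2}\,d\zeta,$$
which depends on $a$ and $b$ only through $a+b$; this is precisely \eqref{Feq}.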
\par
The function $F$ determines the hyperk\"ahler structure as follows. We have local complex coordinates $z_1,\dots,z_n,u_1,\dots,u_n$ where
\begin{equation} z_i=w_0^i \label{F1}\end{equation}\begin{equation} F_{w_1^i}=\begin{cases} u_i &\text{if $r_i\geq 2$} \\ u_i+\bar{u}_i &\text{if $r_i=1$}\end{cases} \label{F2}\end{equation}
\begin{equation}F_{w_a^i}= 0\quad \text{if $2\leq a\leq 2r_i-2$}.\label{F3}\end{equation}
\par
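As a simple illustration of these constraints, take $n=1$ and $r_1=2$, i.e. a single section $\alpha(\zeta)=\sum_{a=0}^4 w_a\zeta^a$ of ${\mathcal{O}}(4)$. The reality condition \eqref{sigma} forces $w_4=\bar w_0$, $w_3=-\bar w_1$ and $w_2\in{\mathbb R}$, while \eqref{F1}--\eqref{F3} read $z=w_0$, $F_{w_1}=u$ and $F_{w_2}=0$.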
Then the K\"ahler potential defined by
\begin{equation}K=F-\sum(u_iw_1^i+\bar{u}_i\bar{w}_1^i)\label{potential} \end{equation}
satisfies the hyperk\"ahler Monge-Amp\`{e}re equations \eqref{sp} and defines the metric of $M$.
Explicit formulae for the metric in terms of second derivatives of $F$ are given in \cite{LR}. We remark that the subset defined by \eqref{F3} is not necessarily a manifold and, hence, the image of $\hat{f}$ is only an open subset of \eqref{F3}. In addition, one usually needs several functions $F$ defined on overlapping regions of $\bigoplus_{i=1}^n {\Bbb R}^{2r_i+1}.$
A simple computation shows also that (cf. \eqref{lines})
\begin{equation} \frac{\partial K}{\partial u_i}=-w_1^i,\quad \frac{\partial K}{\partial z_i}= \frac{\partial F}{\partial w_0^i}.\label{dK}
\end{equation}
\begin{remark} A given function $F$, satisfying \eqref{Feq}, defines a hyperk\"ahler metric on the set where \eqref{F3} holds {\em and} where the matrix
\begin{equation}\bigl[F_{w^i_a,w^j_b}\bigr]_{\scriptscriptstyle 0< a < r_i ,\; 0<b<r_j}\label{F''}\end{equation} is invertible (see \cite{LR}). Thus, it is possible that the second condition fails at every point where \eqref{F3} holds, and we do not obtain a hyperk\"ahler metric from $F$.\label{nondeg}\end{remark}
\begin{example} We consider two examples of (non-generalised) Legendre transform in four dimensions. The first, a well known one, is given by the function $F=2x^2-z\bar{z}$.
It produces the flat metric on ${\mathbb{R}}^4$. The second one is the hyperk\"ahler metric obtained from a cubic harmonic polynomial on ${\mathbb{R}}^3$:
$$ F=2x^3-3xz\bar{z}.$$
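(Both this and the quadratic example above are indeed harmonic: writing the flat Laplacian on ${\mathbb{R}}^3$ in the coordinates $(x,z,\bar z)$ as $\Delta=\partial^2_x+4\partial_z\partial_{\bar z}$, one has $\Delta(2x^3-3xz\bar z)=12x-12x=0$ and $\Delta(2x^2-z\bar z)=4-4=0$.)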
The Legendre transform produces a translation-invariant metric in $I$-holomorphic coordinates $z,u$, with $u+\bar{u}=F_x=6x^2-3z\bar{z}$. The K\"ahler potential for $\omega_I$ is
$$K=F-xF_x=-4x^3=-\frac{4}{6^{3/2}} (u+\bar{u}+3z\bar{z})^{3/2}.$$
Up to a constant multiple, the metric is
$$ \frac{1}{x}\left(du d\bar u+3zdud\bar z + 3\bar{z} d\bar u dz +(36x^2+9z\bar z)dz d\bar z\right),$$
where $x=\sqrt{(u+\bar u+3z\bar z)/6}$.
It is defined on the subset $\{(u,z);\; u+\bar u+3z\bar z>0\}$ and is non-complete. In addition to the translational symmetry $u\mapsto u+it$, it possesses a circle symmetry $(u,z)\mapsto (u,e^{it}z)$. This hyperk\"ahler metric is
non-flat. The simplest way to see this is to notice that the surface $z=0$ is a totally geodesic submanifold (since it is the fixed-point set of the circle symmetry). The metric on this surface is $ \sqrt{6}\frac{du d\bar u}{\sqrt{u+\bar u}}$, which is not flat.
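Indeed, a metric conformal to the flat one, $e^{2\varphi}du\,d\bar u$, is flat precisely when $\varphi$ is harmonic; here $e^{2\varphi}$ is proportional to $(u+\bar u)^{-1/2}$, so $\varphi=-\frac{1}{4}\log(u+\bar u)+\mathrm{const}$ and $\partial_u\partial_{\bar u}\varphi=\frac{1}{4}(u+\bar u)^{-2}\neq 0$.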
\label{harm}\end{example}
\begin{remark} A very natural interpretation of hyperk\"ahler manifolds arising from GLT has been given by Dunajski and Mason \cite{DM2,DM} (see also \cite{GCH}). They show that these manifolds are precisely the leaves of the natural hyperk\"ahler foliation of {\em generalised hyperk\"ahler manifolds}, which have a genuine triholomorphic symmetry.
\end{remark}
\section{GLT on spectral curves\label{glt_spec}}
As mentioned above, the function $G$ in \eqref{Fint} can be (and usually is) multivalued. Instead of dealing with such functions we can consider single-valued functions on some covering of ${\mathbb{P}}^1$. For the time being, we assume that the twistor space $Z$ has a (locally surjective) projection
\begin{equation} Z\rightarrow \bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathcal{O}}(2i)\label{all_i},\end{equation}
equivariant with respect to antiholomorphic involutions, so that the induced map on real sections is
\begin{equation}\hat{f}:M\rightarrow \bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathbb{R}}^{2i+1}.\label{hat_f}\end{equation}
For every $l$, we identify an element of $\bigoplus_{i=1}^{m_l} {\mathbb{R}}^{2i+1}$ with a curve $S_l$ in the linear system $|{\mathcal{O}}(2m_l)|$, given by the equations
\begin{equation} P_l(\zeta,\eta)=\eta^{m_l}+\sum_{i=1}^{m_l} \alpha_{i}(\zeta)\eta^{m_l-i}=0,\label{curve}\end{equation}
where $\alpha_i(\zeta)=\alpha_i^l(\zeta)$ is a polynomial of degree $2i$ invariant under \eqref{sigma} and $\eta$ is the fibre coordinate in ${\mathcal{O}}(2)\simeq TP^1$. We identify points of $\bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathbb{R}}^{2i+1}$ with the set ${\mathcal{S}}={\mathcal{S}}(m_1,\dots,m_k)$ of $\tau$-invariant singular curves $S=S_1\cup \dots\cup S_k$ satisfying the equation \begin{equation}\prod_{l=1}^k P_l(\zeta,\eta)=0.\label{prod}\end{equation}
\par
We now assume that the function $G$ of \eqref{Fint} can be lifted to a meromorphic function $G(\zeta,\eta)$ on the singular curve $S=S_1\cup \dots\cup S_k$, and, similarly, the cycle $c$ is also defined on $S$. Thus, the function $F:{\mathcal{S}}\rightarrow {\Bbb R}$ is defined by
\begin{equation}F=\sum_{p=1}^q\oint_{c_p} \frac{1}{\zeta^2}G_p(\zeta,\eta)d\zeta,\label{p}\end{equation} where each $G_p$ is a meromorphic function on ${\mathbb{T}}=TP^1$ and $c_p$ is a homology cycle on the singular curve $S$. A sufficient condition for $F$ to be real is given by (cf. \cite{HMR}):
\begin{lemma} A function $F$ defined by \eqref{p} is real, provided each $G_p$ and $c_p$ satisfy the following two conditions:
\begin{equation} \oint_{c_p+\tau_\ast c_p} G_p(\zeta,\eta)\frac{d\zeta}{\zeta^2}=0,\end{equation}
\begin{equation}\overline{G_p(\zeta,\eta)}=-\bar{\zeta}^2G_p\left(-\frac{1}{\bar{\zeta}},-\frac{\bar{\eta}}{\bar{\zeta}^2}\right).\label{Greal}\end{equation}
\label{realG}\end{lemma}
\begin{proof} Let us write $L_p$ for the differential in the $p$-th term, so that $F=\sum\oint_{c_p}L_p$. Observe that the second condition implies that $\overline{L_p}= -L_p\circ\tau$, which, together with the first condition, shows that $\oint_{c_p}L_p$ is real.\end{proof}
\par
We now discuss the constraints \eqref{F3} for an $F$ of the form \eqref{p}. Recall that, for a smooth curve $S_l$ given by the polynomial \eqref{curve}, a basis of $H^0(S_l,\Omega^1)$ is given by the forms
\begin{equation}\omega_{rs}=\frac{\zeta^r\eta^{s}}{\partial P/\partial\eta}d\zeta\quad 0\leq s\leq m_l-2, \enskip 0\leq r\leq 2(m_l-2)-2s.\label{DFK}\end{equation}
Let $c$ be a cycle on a singular curve $S=S_1\cup\dots\cup S_k$. When restricted to a component $S_l$, $c$ becomes a chain $\gamma_l$ - the sum of a cycle on $S_l$ and oriented paths between intersection points of $S_l$ with other $S_j$.
Let $w^i_a$ be a coefficient of the polynomial $P_l(\zeta,\eta)$. We compute the derivatives $F_{w^i_a}$ at a point $S\in {\mathcal{S}}$ where the component $S_l$ is nonsingular:
$$ \frac{d}{dw_a^i}\oint_c G(\zeta,\eta)\frac{d\zeta }{\zeta^2}=\oint_{c} \frac{1}{\zeta^2}\frac{\partial G}{\partial\eta}\frac{d\eta}{dw_a^i}d\zeta = \int_{\gamma_l} \frac{1}{\zeta^2}\frac{\partial G}{\partial\eta}\frac{d\eta}{dw_a^i}d\zeta,$$
and
\begin{equation}\int_{\gamma_l} \frac{1}{\zeta^2}\frac{\partial G}{\partial\eta}\frac{d\eta}{dw_a^i}d\zeta=-\int_{\gamma_l} \frac{\partial G}{\partial\eta}\frac{\zeta^{a-2}\eta^{m_l-i}}{\partial P_l/\partial\eta}d\zeta,\label{der}\end{equation}
where we computed $d\eta/dw_a^i$ by implicit differentiation of \eqref{curve}:
$$\frac{d\eta}{dw_a^i}=-\frac{\zeta^a\eta^{m_l-i}}{\partial P_l/\partial\eta},$$
and, comparing with \eqref{DFK}, we see that for $2\leq a\leq 2i-2$
\begin{equation} \frac{d}{dw_a^i}\oint_c G(\zeta,\eta)\frac{d\zeta }{\zeta^2}=-\int_{\gamma_l} \frac{\partial G}{\partial\eta}\omega_{a-2,m_l-i}.\label{d/dw}\end{equation}
Thus, the collection of derivatives $\bigl(\frac{d}{dw_a^i}\oint_c G(\zeta,\eta)\frac{d\zeta }{\zeta^2}\bigr)_{a=2,\dots, 2i-2}$ can be viewed as an element of $H^0(S_l,\Omega^1)^\ast$, and so it defines an element of $\operatorname{Jac}^0(S_l)$. Consequently, the constraints \eqref{F3} for $F$ of the form \eqref{p} imply triviality of certain line bundles on each $S_l$.
\begin{remark} The hyperk\"ahler structures obtained from \eqref{p} have a local ${\mathbb{R}}^k$-symmetry.\end{remark}
\subsection{A generalisation\label{gen}}
So far, we have assumed that the twistor space has a locally surjective projection $$ Z\rightarrow \bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathcal{O}}(2i),$$ so that the induced map on real sections is
\begin{equation*}\hat{f}:M\rightarrow \bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathbb{R}}^{2i+1},\end{equation*}
and its image has a non-empty interior. The general form of the GLT, as discussed in \S\ref{glt}, applies also to affine subspaces of $V=\bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathbb{R}}^{2i+1}$, where we fix the components belonging to some of the ${\mathbb{R}}^{2i+1}$. Thus, for any $R\subset \{(i,l); l=1,\dots,k,\enskip i=1,\dots,m_l\}$, let $p_R$ be the projection of $V$ onto $X_R=\bigoplus_{(i,l)\in R} {\mathbb{R}}^{2i+1}$. For any $x\in X_R$, we can apply the GLT to the restriction of a given function $F$ on $V$ to the affine subspace $p_R^{-1}(x)$. Once again, we obtain a pseudo-hyperk\"ahler structure, provided that the matrix \eqref{F''} is invertible.
\par
If $V$ was interpreted as a space of spectral curves, then so is $p_R^{-1}(x)$: some of the coefficients $\alpha_i(\zeta)$ in \eqref{curve} are now fixed. The constraints \eqref{F3} are still equivalent to \eqref{d/dw}, but only with respect to a subspace of the space of holomorphic differentials.
\par
This generalisation has a simple interpretation in terms of {\em twistor quotients} introduced in \cite{TQ}. It was shown there that the hyperk\"ahler metrics obtained from the GLT correspond to twistor spaces which admit a compatible fibrewise action of $\sG=\oplus{\mathcal{O}}(-2i+2)$. The above passage from GLT on $V$ to GLT on $p_R^{-1}(x)$ should be viewed as a twistor quotient with respect to a subgroup of $\sG$. The projection $p_R$ is the moment map for this subgroup and $p_R^{-1}(x)$ is the particular level set at which the twistor quotient is taken.
\section{Hyperk\"ahler metrics of monopole type \label{hmon}}
We continue to discuss hyperk\"ahler metrics obtained via the generalised Legendre transform from an $F:\bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathbb{R}}^{2i+1}\rightarrow {\mathbb{R}}$.
As observed in the last section, a rather mild assumption that $F$ arises from a contour integral on a covering of ${\mathbb{P}}^1$ defined by \eqref{prod}, implies that the image of $\hat{f}:M\rightarrow {\mathcal{S}}$ consists of spectral curves on which certain line bundles are trivial. A well-known example of this situation is the natural metric on the moduli space of charge $n$ $SU(2)$-monopoles, which arises via GLT from the following function \cite{IR, Hough}:
\begin{equation}-\frac{1}{2\pi i}\oint_{\tilde 0} \frac{\eta^2}{\zeta^3}d\zeta + \oint_c \frac{\eta}{\zeta^2}d\zeta,\label{monopole}\end{equation}
where $\tilde 0$ is the sum of simple contours around points in the fibre $\pi_{|S}^{-1}(0)$.
Here $k=1$, i.e. $S=S_1$, $m_1=n$, and the constraints \eqref{F3} correspond (as easily seen from \eqref{d/dw}) to the triviality on $S_1$ of the line bundle $L^2$ with transition function $\exp(2\eta/\zeta)$. Moreover, in this case the twistor lines are determined by sections of $L^2$: essentially the twistor space can be (locally) trivialised near $\zeta=0$, as in section \ref{twistor_lines}, where $Z_i(\zeta)$ are roots of $P_1(\zeta,\eta)=0$ and $U_i(\zeta)$ is the logarithm of the value of a section of $L^2$ at the point $(\zeta,Z_i(\zeta))\in S_1$. We shall now consider hyperk\"ahler metrics, the twistor lines of which admit a similar description.
We begin by generalising \eqref{monopole}. We want a function $F:{\mathcal{S}}={\mathcal{S}}(m_1,\dots,m_k)\rightarrow {\mathbb{R}}$. Recall that a point of ${\mathcal{S}}$ corresponds to a reducible curve $S=S_1\cup\dots \cup S_k$ given by \eqref{prod}. Let $H_l(\zeta,\eta)$, $l=1,\dots,k$ be a meromorphic function on ${\mathbb{T}}$ which is a linear combination of monomials $\eta^i/\zeta^j$, $i,j>0$. Denote by $\tilde 0_l$, $l=1,\dots,k$, the sum of simple contours around points in the fibre $\pi_{|S_l}^{-1}(0)$ of $S_l$, and, finally, let $c$ be a homology cycle on $S$. We define an $F$ on ${\mathcal{S}}$ by
\begin{equation} \oint_c \frac{\eta}{\zeta^2}d\zeta-\frac{1}{2\pi i}\sum_{l=1}^k \oint_{\tilde 0_l} \frac{1}{\zeta^2}H_l(\zeta,\eta)d\zeta.\label{Fbundle}\end{equation}
For this function, \eqref{d/dw} becomes:
\begin{equation}\frac{dF}{dw_a^i}=\operatorname{Res}_{\tilde 0_l} \frac{\partial H_l}{\partial \eta}\omega_{a-2,m_l-i} - \int_{\gamma_l} \omega_{a-2,m_l-i}, \enskip 2\leq a\leq 2i-2,\label{int-int}\end{equation}
where $\gamma_l$ is the restriction of $c$ to $S_l$.
The second term in this equation defines an abelian sum, which is identified with $\bigl[\Delta_l^+-\Delta_l^-\bigr]$ in the Jacobian of $S_l$, where $\Delta_l^-$ are the points at which $c$ enters $S_l$ from other curves and $\Delta_l^+$ the points at which $c$ leaves $S_l$. Thus, via Serre's duality, we have:
\begin{corollary} Let $F:\bigoplus_{l=1}^k\bigoplus_{i=1}^{m_l} {\mathbb{R}}^{2i+1}\rightarrow {\mathbb C}$ be of the form \eqref{Fbundle} and let $S=S_1\cup\dots\cup S_k$ be such that each $S_l$ is nonsingular. Let $E_l$, $l=1,\dots,k$, be the line bundle on ${\mathbb{T}}$ with transition function $\exp\frac{\partial H_l}{\partial \eta}$. If the equations \eqref{F3} are satisfied at $S$, then
\begin{equation}{ E_l}_{|S_l}\simeq \left[\Delta_l^+-\Delta_l^-\right]\label{triviality}\end{equation}
on each $S_l$.\hfill $\Box$
\label{answer0}
\end{corollary}
\begin{remark}An $F$ of the form \eqref{Fbundle} will give a hyperk\"ahler metric only if it is real-valued.
It follows from Lemma \ref{realG} that the first term is real provided $\tau_\ast c=-c$, while the remaining ones are real if each $H_l$ satisfies \eqref{Greal}. This condition is easily seen to be equivalent to each $H_l$ being a sum of
$$ \frac{\eta^i}{\zeta^j}+\frac{\eta^i}{\zeta^{2i-j-2}},\quad i>0,\enskip 0<j<2i-2,\enskip \text{$i+j$ is odd}.$$\label{real_sF}\end{remark}
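To check the last statement, note that for $G=\eta^i/\zeta^j$ the right-hand side of \eqref{Greal} equals $(-1)^{i+j+1}\,\bar\eta^{\,i}/\bar\zeta^{\,2i-j-2}$; applying this to both terms of the sum above shows that \eqref{Greal} holds precisely when $(-1)^{i+j+1}=1$, i.e. when $i+j$ is odd.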
\begin{remark} The expression \eqref{int-int} can be further simplified. Observe, namely, that the $(l+1)$-st term in \eqref{Fbundle} is simply $-\operatorname{Res}_{\tilde 0_l} H_l(\zeta,\eta)/\zeta^2$ and so it can be expressed as a symmetric function of the roots $\eta_1,\dots,\eta_{m_l}$ of $ P_l(\zeta,\eta)$. Hence, this term in \eqref{Fbundle} can be written as a function $\tilde{H}(w^i_a)$ of the coefficients of $P_l$, which makes computing its derivatives trivial. For example, in the case of $F$ given by \eqref{monopole}, one computes easily that the first term is equal to
$$-\operatorname{Res}_{\tilde 0} \eta^2/\zeta^3=-\operatorname{Res}_0 \frac{\alpha_1(\zeta)^2-2\alpha_2(\zeta)}{\zeta^3},
$$
where $\alpha_1(\zeta), \alpha_2(\zeta)$ are the coefficients in the polynomial $P(\zeta,\eta)=\eta^n +\sum_{i=1}^n \alpha_i(\zeta)\eta^{n-i}$ defining a spectral curve $S$.
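Here we have used that $\operatorname{Res}_{\tilde 0}\,\eta^2/\zeta^3=\operatorname{Res}_0\sum_j\eta_j(\zeta)^2/\zeta^3$, together with Newton's identity $\sum_j\eta_j^2=\bigl(\sum_j\eta_j\bigr)^2-2\sum_{j<j'}\eta_j\eta_{j'}=\alpha_1^2-2\alpha_2$.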
Thus, the derivative with respect to $w_a^i$ ($2i-2\geq a\geq 2$) is $0$ unless $i=2$, and comparing with \eqref{int-int}, we see, as in \cite{HMR}, that \eqref{F3} are satisfied at $S$ if and only if
\begin{equation}\int_{c} \omega_{rs}=\begin{cases}2 & \text{if $r=0$ and $s=n-2$}\\ 0 & \text{otherwise}.\end{cases}\label{02}\end{equation}
\label{tildeH}\end{remark}
Observe now that the reality property of each $H_l$ translates into the following reality property of $E_l$:
\begin{equation} E_l\simeq \overline{\tau^\ast E_l},\label{tau^ast}\end{equation}
where ``bar" means taking the opposite complex structure. We can now identify (at least locally) the (hyperk\"ahler) manifold $M$, obtained from \eqref{Fbundle}, with the set of $(S,\nu)$, where $S=S_1\cup\dots\cup S_k$ is a reducible curve such that \eqref{triviality} holds for $l=1,\dots,k$, and $\nu=(\nu_1,\dots,\nu_k)$ with each $\nu_l$ a section of ${E_l}_{|S_l}\otimes\left[\Delta_l^--\Delta_l^+\right]$ satisfying $\nu_l\overline{\tau^\ast \nu_l}=1\in H^0(S,{\mathcal{O}})$ ($\overline{\tau^\ast \nu_l}$ is a section of ${E_l}^\ast_{|S_l}\otimes\left[\Delta_l^+-\Delta_l^-\right]$). Multiplying each $\nu_l$ by a complex number of modulus one realises $M$ as a $T^k$-bundle over the image of \eqref{hat_f}.
\par
We claim that the metric on $M$ can be described as for monopole metrics. Let us represent each $\nu_l$ by a pair of functions $f_0^l(\zeta,\eta)$ on $S\cap U_0$ and $f_\infty^l(\tilde \zeta,\tilde \eta)$ on $S\cap U_\infty$ ($U_0=\{\zeta\neq \infty\}$, $U_\infty=\{\zeta\neq 0\}$) satisfying $f_\infty^l(\tilde \zeta,\tilde \eta)=\exp\frac{\partial H_l}{\partial \eta}f_0^l(\zeta,\eta)$ over $\zeta\neq 0,\infty$. Let $\bigl(\zeta,\eta^l_j(\zeta)\bigr)$, $j=1,\dots,m_l$, be the points of $S_l$ over $\zeta$. For $\zeta$ near $0$ define a $\zeta$-dependent complex-symplectic form by:
\begin{equation}\Omega(\zeta)=\sum_{l=1}^k\sum_{j=1}^{m_l}\frac{df_0^l\bigl(\zeta,\eta_j^l(\zeta)\bigr)}{f_0^l\bigl(\zeta,\eta_j^l(\zeta)\bigr)}\wedge d\eta_j^l(\zeta).\label{Omega2}\end{equation}
We claim
\begin{theorem} On the subset of $M$, where the fibre of $S$ over $\zeta=0$ consists of distinct points and each $S_l$ is smooth, local $I$-complex coordinates ($I$ is the complex structure corresponding to $\zeta=0$) are given by
$\eta^l_j(0),\log f_0^l\bigl(0,\eta^l_j(0)\bigr)$, $l=1,\dots,k$, $j=1,\dots,m_l$. Moreover, the complex-symplectic form $\omega_J+i\omega_K$ is equal to $\Omega(0)$, while the K\"ahler form $\omega_I$ is equal to $\frac{1}{2\sqrt{-1}}\frac{d\Omega(\zeta)}{d\zeta}\Bigr|_{\zeta=0}$.
\label{one}\end{theorem}
We shall prove this theorem together with its converse, which we now state.
Let $E_1,\dots,E_k$ be line bundles on ${\mathbb{T}}$ satisfying the reality property \eqref{tau^ast}. Let $V$ be an open subset of ${\mathcal{S}}={\mathcal{S}}(m_1,\dots,m_k)$ such that all $S\in V$ are isotopic, $S_l$ are smooth, and no intersection points lie over $\zeta=0$ or $\zeta=\infty$.
Suppose that to any $S\in V$ and any $l=1,\dots,k$ we have associated disjoint divisors $\Delta_l^+$ and $\Delta_l^-=\tau(\Delta_l^+)$ which are subdivisors of the divisor cut out on $S_l$ by the other $S_i$.
\begin{theorem} Suppose that $M$ is a hyperk\"ahler manifold, which as a manifold is the set of $(S,\nu)$, where $S=S_1\cup\dots\cup S_k \in V$ satisfies the above assumptions and \eqref{triviality} holds for $l=1,\dots,k$, and $\nu=(\nu_1,\dots,\nu_k)$ with each $\nu_l$ a section of ${E_l}_{|S_l}\otimes\left[\Delta_l^--\Delta_l^+\right]$ satisfying $\nu_l\overline{\tau^\ast \nu_l}=1\in H^0(S,{\mathcal{O}})$. Suppose also that the twistor space of $M$ is trivialised near $\zeta=0$ so that the twisted symplectic form is given by \eqref{Omega2}.
\par
Then there exists a homology cycle $c$ on $S$, with $\tau_\ast c=-c$, entering each $S_l$ at points of $\Delta_l^-$ and leaving it at points of $\Delta_l^+$, and such that the hyperk\"ahler metric of $M$ is produced by the generalised Legendre transform applied to the function \eqref{Fbundle}.
\label{two}\end{theorem}
The remainder of the section is devoted to a proof of these two theorems. The basic idea comes from \cite{IR,CK,Hough}.
\par
We begin by discussing the situation on a single component $S_l$. Thus, we consider a smooth curve $C$ given by the equation
\begin{equation*} \eta^m+\sum_{j=1}^m \alpha_j(\zeta)\eta^{m-j}=0,\quad \alpha_j(\zeta)=\sum_{a=0}^{2j}w_a^j\zeta^a.\end{equation*}
We assume that we have a meromorphic section $\nu$ of a line bundle on $C$, which is the restriction of a line bundle $E$ on ${\mathbb{T}}$, with the transition function $\exp \frac{\partial H}{\partial \eta}$. We represent $\nu$ by a pair of meromorphic functions $f_0(\zeta,\eta)$ on $C\cap U_0$ and $f_\infty(\tilde \zeta,\tilde \eta)$ on $C\cap U_\infty$ satisfying $f_\infty(\tilde \zeta,\tilde \eta)=\exp\frac{\partial H}{\partial \eta}f_0(\zeta,\eta)$ over $\zeta\neq 0,\infty$. Let $ \Delta^+$ be the zero divisor of $\nu$ and $ \Delta^-$ its polar divisor. We assume that they are disjoint from the fibres of $C$ over $\zeta=0$ and $\zeta=\infty$. For the time being, we do not require any reality conditions.
\par
We consider the form \eqref{Omega2}, which we rewrite in terms of coefficients of the polynomial rather than its roots (cf. \cite{Hough}):
\begin{equation} \sum_{j=1}^{m}\frac{df_0\bigl(\zeta,\eta_j(\zeta)\bigr)}{f_0\bigl(\zeta,\eta_j(\zeta)\bigr)}\wedge d\eta_j(\zeta)= \sum_{j=1}^{m} dU_j(\zeta)\wedge d\alpha_j(\zeta),\label{Omega3}\end{equation}
where
\begin{equation} U_j(\zeta)=\sum_{i=1}^m \frac{\partial \eta_i(\zeta)}{\partial \alpha_j}\log f_0\bigl(\zeta,\eta_i(\zeta)\bigr).\label{U}\end{equation}
Note that $U_j$ is a (multi-valued) function on ${\mathbb{P}}^1$.
We now cut the surface $C$ as in \cite[pp. 242--243]{GH}, so that $\log f_0$ and the $U_j$ become single-valued functions. Let $a_1,\dots, a_g$, $b_1,\dots,b_g$ be cycles on $C$ representing the canonical basis of $H_1(C,{\mathbb{Z}})$, disjoint except for the common base point $s_0\in C$ and not containing any zero or pole of $\nu$. Let $\epsilon_i$ be smooth arcs from $s_0$ to the points $\{p_i\}$ in the support of $(\nu)$, disjoint from all $a_r,b_r$ (except for $s_0$).
We may also assume that $a_r,b_r,\epsilon_i$ do not contain any points of the fibres over $0,\infty$. Then the complement $P$ of all these paths is a simply connected region as drawn below (cf. \cite[p. 242]{GH}).
\medskip
\setlength{\unitlength}{0.00043333in}
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
{\renewcommand{\dashlinestretch}{30}
\begin{picture}(8574,6039)(-500,-10)
\put(2462,-200){\shortstack{$s_0$}}
\put(3802,-230){\shortstack{$a_1$}}
\put(6000,102){\shortstack{$b_1$}}
\put(7882,1320){\shortstack{$a_1^{-1}$}}
\put(8632,3120){\shortstack{$b_1^{-1}$}}
\put(7692,5420){\shortstack{$a_2$}}
\put(2187,3087){\shortstack{$p_1$}}
\put(4887,3762){\shortstack{$p_i$}}
\put(4012,1362){\shortstack{$\epsilon_i$}}
\put(2056,1617){\shortstack{$\epsilon_1$}}
\put(1056,4600){\shortstack{$\Large{P}$}}
\dottedline{80}(12,4000)(12,2637)
\drawline(12,2637)(762,762)(2562,12)
(4812,12)(7287,762)(8562,2337)
(8562,4662)(6912,6012)
\drawline(2562,12)(2560,14)(2555,19)
(2547,27)(2535,40)(2519,57)
(2500,77)(2478,101)(2455,126)
(2431,152)(2407,178)(2385,203)
(2363,227)(2343,250)(2325,272)
(2309,292)(2294,312)(2281,330)
(2268,348)(2257,365)(2247,382)
(2237,399)(2227,419)(2218,438)
(2209,458)(2200,478)(2193,500)
(2185,521)(2179,544)(2173,567)
(2168,590)(2163,614)(2160,637)
(2157,661)(2155,685)(2154,708)
(2153,730)(2154,753)(2155,774)
(2156,795)(2159,816)(2162,837)
(2165,856)(2169,875)(2174,894)
(2179,914)(2185,934)(2191,955)
(2197,976)(2205,998)(2212,1020)
(2220,1043)(2228,1065)(2236,1088)
(2245,1112)(2253,1135)(2261,1157)
(2269,1180)(2277,1203)(2285,1225)
(2292,1247)(2299,1269)(2306,1290)
(2312,1312)(2318,1332)(2323,1353)
(2328,1374)(2333,1396)(2338,1418)
(2343,1441)(2347,1464)(2351,1489)
(2355,1514)(2358,1539)(2361,1565)
(2364,1590)(2366,1617)(2368,1643)
(2370,1669)(2371,1694)(2371,1720)
(2371,1745)(2371,1770)(2370,1794)
(2369,1818)(2367,1841)(2365,1864)
(2362,1887)(2359,1910)(2355,1933)
(2351,1956)(2347,1980)(2342,2004)
(2337,2029)(2331,2054)(2325,2080)
(2318,2105)(2312,2131)(2305,2157)
(2298,2184)(2291,2209)(2284,2235)
(2277,2260)(2270,2285)(2263,2310)
(2257,2333)(2250,2356)(2245,2378)
(2239,2400)(2234,2421)(2229,2442)
(2225,2462)(2220,2486)(2215,2510)
(2211,2534)(2208,2558)(2205,2584)
(2202,2611)(2199,2640)(2197,2670)
(2195,2703)(2193,2737)(2192,2771)
(2191,2806)(2189,2839)(2189,2869)
(2188,2894)(2187,2913)(2187,2926)
(2187,2934)(2187,2937)
\drawline(2562,12)(2565,13)(2570,15)
(2581,19)(2597,26)(2618,34)
(2645,45)(2676,57)(2710,71)
(2746,86)(2783,101)(2819,117)
(2855,132)(2889,146)(2921,161)
(2951,174)(2979,187)(3005,200)
(3030,213)(3053,225)(3075,237)
(3096,249)(3117,262)(3137,274)
(3157,288)(3177,301)(3197,315)
(3217,330)(3237,345)(3257,361)
(3278,377)(3298,394)(3319,412)
(3339,430)(3360,448)(3380,467)
(3400,486)(3420,505)(3439,524)
(3458,543)(3476,562)(3493,581)
(3510,599)(3527,617)(3542,635)
(3558,652)(3573,670)(3587,687)
(3603,706)(3618,725)(3633,744)
(3648,764)(3664,784)(3679,804)
(3694,825)(3709,846)(3723,868)
(3738,890)(3752,912)(3765,934)
(3778,956)(3790,978)(3802,999)
(3813,1020)(3823,1040)(3832,1060)
(3840,1080)(3848,1099)(3855,1118)
(3862,1137)(3869,1158)(3875,1179)
(3880,1200)(3885,1222)(3890,1244)
(3895,1267)(3899,1290)(3904,1314)
(3908,1338)(3912,1362)(3916,1386)
(3920,1410)(3925,1434)(3929,1457)
(3934,1480)(3939,1502)(3944,1524)
(3949,1545)(3955,1566)(3962,1587)
(3969,1606)(3976,1625)(3984,1644)
(3992,1664)(4002,1684)(4012,1704)
(4023,1725)(4035,1746)(4048,1768)
(4062,1790)(4076,1812)(4091,1834)
(4106,1856)(4122,1878)(4139,1899)
(4156,1920)(4173,1940)(4190,1960)
(4208,1980)(4225,1999)(4243,2018)
(4262,2037)(4279,2054)(4297,2072)
(4316,2089)(4335,2107)(4355,2125)
(4376,2144)(4397,2163)(4419,2183)
(4442,2203)(4464,2223)(4487,2243)
(4510,2264)(4533,2284)(4556,2305)
(4578,2325)(4600,2346)(4621,2366)
(4642,2386)(4662,2405)(4681,2425)
(4699,2444)(4717,2462)(4733,2481)
(4750,2499)(4766,2520)(4783,2541)
(4798,2562)(4814,2584)(4829,2606)
(4843,2629)(4857,2653)(4871,2677)
(4884,2702)(4896,2727)(4907,2753)
(4918,2779)(4927,2804)(4936,2830)
(4944,2856)(4951,2881)(4957,2906)
(4962,2930)(4967,2954)(4970,2978)
(4973,3001)(4975,3024)(4976,3046)
(4976,3068)(4976,3090)(4975,3113)
(4974,3138)(4971,3163)(4968,3190)
(4964,3219)(4960,3250)(4954,3282)
(4948,3317)(4942,3353)(4934,3390)
(4927,3427)(4919,3463)(4912,3497)
(4906,3529)(4900,3555)(4895,3577)
(4891,3593)(4889,3603)(4888,3609)(4887,3612)
\end{picture}
\bigskip
\bigskip
We view $P$ as a polygon with sides $a_r,a_r^{-1},b_r,b_r^{-1},\epsilon_i,\epsilon_i^{-1}$. We choose a single-valued branch of $\log f_0$ on $P$.
The function $\kappa_j(\zeta,\eta)=\frac{\partial \eta}{\partial \alpha_j}\log f_0(\zeta,\eta) $ is a meromorphic function on $P$ and we have
$U_j(\zeta)=\sum_{i=1}^m\kappa_j(\zeta,\eta_i)$.
Now, for an integer $s$:
$$\oint_0 U_j\frac{d\zeta}{\zeta^{s}}=\oint_{\tilde{0}}\kappa_j\frac{d\zeta}{\zeta^{s}},$$
where $0$ is a simple cycle around $0\in {\mathbb{P}}^1$ and $\tilde 0$ its lift to $C$.
Since the differential on the right-hand side has poles only at $\zeta=0$ and $\zeta=\infty$, we have
$$\oint_{\tilde{0}}\kappa_j\frac{d\zeta}{\zeta^{s}}=\oint_{\partial P}\kappa_j\frac{d\zeta}{\zeta^{s}}- \oint_{\tilde{\infty}}\kappa_j\frac{d\zeta}{\zeta^{s}}.$$
On the other hand, the patching formula for $\kappa_j$ is
$$\tilde\kappa_j=\zeta^{2j-2}\left(\kappa_j+\frac{\partial H}{\partial \alpha_j}\right),$$
and hence
$$\oint_{\tilde{0}}\kappa_j\frac{d\zeta}{\zeta^{s}}=\oint_{\partial P}\kappa_j\frac{d\zeta}{\zeta^{s}}- \oint_{\tilde{\infty}}\tilde\kappa_j\frac{d\tilde\zeta}{\zeta^{s+2j-4}} + \oint_{\tilde{\infty}}\frac{\partial H}{\partial \alpha_j}\frac{d\zeta}{\zeta^{s}}.$$
The integrand in the third term arises, as observed in Remark \ref{tildeH}, from a function on ${\mathbb{P}}^1$ and so, by the residue theorem, the integral can be replaced by the integral around $0$. Thus:
\begin{equation} \oint_0 U_j(\zeta)\frac{d\zeta}{\zeta^{s}} + \oint_\infty U_j(\tilde \zeta){\tilde\zeta}^{s+2j-4}d{\tilde\zeta}=\oint_{\partial P}\kappa_j\frac{d\zeta}{\zeta^{s}} - \oint_{\tilde{0}}\frac{\partial H}{\partial \alpha_j}\frac{d\zeta}{\zeta^{s}}.\label{first}\end{equation}
We now compute $ \oint_{\partial P}\kappa_j\frac{d\zeta}{\zeta^{s}}$. We rewrite it as $\oint_{\partial P} \log f_0\psi$, where $\psi$ is a meromorphic differential (equal to $\frac{\partial \eta}{\partial \alpha_j}\frac{d\zeta}{\zeta^{s}}$), and compute it as in \cite[p. 243]{GH}:
for points $p\in a_r$, $p^\prime\in a_r^{-1}$, identified on $C$, we have
$$\log f_0(p^\prime)=\log f_0(p)+\int_{b_{r}} d\log f_0,$$
and so
$$\int_{a_r+a_r^{-1}}\log f_0\psi=-\oint_{b_r}d\log f_0\cdot \oint_{a_r}\psi.$$
Similarly,
$$\int_{b_{r}+b_{r}^{-1}}\log f_0\psi=\oint_{a_r}d\log f_0 \cdot\oint_{b_r}\psi.$$
For points $p\in \epsilon_i$, $p^\prime\in \epsilon_i^{-1}$ identified on $C$,
$$ \log f_0(p^\prime)-\log f_0(p)=-2\pi \sqrt{-1}\operatorname{ord}_{p_i}(f_0),$$
and hence
$$\int_{\epsilon_i+\epsilon_i^{-1}}\log f_0\psi=2\pi \sqrt{-1}\operatorname{ord}_{p_i}(f_0)\int_{s_0}^{p_i}\psi.$$
Since the integral of $\log f_0$ over a homology cycle is an integer multiple of $2\pi \sqrt{-1}$, we have a well defined homology cycle in $H_1(C,{\mathbb{Z}})$, represented by
\begin{equation}\lambda=\frac{1}{2\pi \sqrt{-1}}\left( -\sum_r\left(\oint_{b_r}d\log f_0\right)a_r+\sum_r\left(\oint_{a_r}d\log f_0\right)b_r\right).\label{lambda}\end{equation}
If we define a chain $\gamma$ as
\begin{equation}\gamma=\lambda+\sum_i \operatorname{ord}_{p_i}(f_0)\epsilon _i,\label{gamma}\end{equation}
then we obtain, from the above calculations,
\begin{equation*} \frac{1}{2\pi \sqrt{-1}}\oint_0 U_j\frac{d\zeta}{\zeta^{s}}+ \frac{1}{2\pi \sqrt{-1}}\oint_\infty U_j(\tilde \zeta){\tilde\zeta}^{s+2j-4}d{\tilde\zeta}=\int_\gamma \frac{\partial \eta}{\partial \alpha_j}\frac{d\zeta}{\zeta^{s}}- \frac{1}{2\pi \sqrt{-1}}\oint_{\tilde{0}}\frac{\partial H}{\partial \alpha_j}\frac{d\zeta}{\zeta^{s}}.\label{second}\end{equation*}
We now define a function $\phi$ on a neighbourhood of $(C,\Delta^+,\Delta^-)$ in $$\{(S,D^+,D^-);\: S\in|{\mathcal{O}}(2m)|,\; \text{$D^\pm$ - divisors on $S$ of the same degree as $\Delta^\pm$}\}$$
by
$$\phi(S,D^+,D^-)=\int_\gamma\eta \frac{d\zeta}{\zeta^{2}}-\frac{1}{2\pi \sqrt{-1}}\oint_{\tilde{0}}H(\zeta,\eta)\frac{d\zeta}{\zeta^{2}}.$$
Since $\frac{\partial \alpha_j}{\partial w_a^j}=\zeta^a$, we get at $(S,D^+,D^-)=(C,\Delta^+,\Delta^-)$ (setting $s=2-a$):
\begin{equation*} \frac{1}{2\pi \sqrt{-1}}\oint_0 U_j(\zeta)\zeta^{a-2}d\zeta + \frac{1}{2\pi \sqrt{-1}}\oint_\infty U_j(\tilde \zeta){\tilde\zeta}^{2j-2-a}d{\tilde\zeta}=\frac{\partial \phi}{\partial w_a^j}-R(\Delta^+,\Delta^-),\label{third}\end{equation*}
where
$$R(\Delta^+,\Delta^-)=\sum_{(\zeta,\eta)\in \Delta^+}\frac{\eta}{\zeta^2}\frac{\partial \zeta}{\partial w_a^j}-\sum_{(\zeta,\eta)\in \Delta^-}\frac{\eta}{\zeta^2}\frac{\partial \zeta}{\partial w_a^j}.$$
Hence
\begin{equation} \frac{\partial \phi}{\partial w_a^j}-R(\Delta^+,\Delta^-)=\begin{cases} {\frac{d U_j(\zeta)}{d\zeta}}_{|\zeta=0}& \text{if $a=0$}\\ U_1(0)+ U_1(\infty)& \text{if $a=1$ and $j=1$}\\U_j(0) & \text{if $a=1$ and $j>1$}\\ 0 & \text{if $2\leq a\leq 2j-2$}.\end{cases}\label{crux}\end{equation}
\par
We now impose reality conditions: we assume that the curve $C$ is $\tau$-invariant, the line bundle $E$ satisfies \eqref{tau^ast} and that $\nu\overline{\tau^\ast \nu}=1$. In particular, $\tau(\Delta^-)=\Delta^+$.
\par
First of all, we can choose a canonical basis of $H_1(C,{\mathbb{Z}})$ for which
\begin{equation} \tau_\ast(a_r)=-a_r,\enskip\tau_\ast(b_r)=b_r,\quad r=1,\dots,g.\label{ab_real}\end{equation}
This follows (cf. \cite[p.227]{HMR}) from two facts: (1) since $\tau$ is anti-holomorphic, the intersection number of any two cycles satisfies $\#(\lambda,\mu)=-\#(\tau_\ast\lambda,\tau_\ast\mu)$, and (2) $\tau_\ast$ is diagonalisable. Thus, we take $a_r$ to be the $(-1)$-eigenvectors and $b_r$ to be $1$-eigenvectors of $\tau_\ast$.
\par
We now have $d\log f_0=-\overline{\tau^\ast d\log f_0}$, and, hence,
$$ \oint_{a_r}d\log f_0=-\oint_{a_r}\overline{\tau^\ast d\log f_0}=-\oint_{\tau_\ast(a_r)}\overline{ d\log f_0}=\overline{\oint_{a_r}d\log f_0}.$$
Therefore $ \oint_{a_r}d\log f_0$ is real, but, since it is also an integer multiple of $2\pi \sqrt{-1}$, it must be equal to zero. Hence, the cycle \eqref{lambda} is, in this canonical basis, a linear combination of the $a_r$ only, and so $\tau_\ast\lambda=-\lambda$. Moreover, as $\tau(\Delta^-)=\Delta^+$, we can replace $\sum_i \operatorname{ord}_{p_i}(f_0)\epsilon _i$ in \eqref{gamma} by paths going from $p_i$ to $\tau(p_i)$, so that $\tau_\ast(\gamma)=-\gamma$.
\medskip
We now prove Theorem \ref{one}. We have the function $F$ given by \eqref{Fbundle}, and we know, from Corollary \ref{triviality} and the reality conditions, that on each $S_l$ there is a section $\nu_l$ of ${E_l}_{|S_l}\otimes\left[\Delta_l^--\Delta_l^+\right]$ with $\nu_l\overline{\tau^\ast \nu_l}=1$. From the above calculations, applied to every component $S_l$, we obtain another function $F^\prime= \sum_{l=1}^k \phi({S_l})$, which, a priori, may differ from $F$ in the choice of the cycle. Let $c^\prime$ be the cycle for $F^\prime$, i.e. $c^\prime$ is the sum of $\gamma$'s on different $S_l$. We denote the restriction of $c^\prime$ to $S_l$ by $\gamma^\prime_l$. Computing the second term in \eqref{Fbundle}, as in Remark \ref{tildeH}, we conclude, from \eqref{F3} and \eqref{int-int}, that
$$\int_{\gamma_l-\gamma^\prime_l}\Omega=0$$
for every $l$ and every holomorphic differential $\Omega$ on $S_l$. The path components on each $S_l$ are determined by the singularities of $\nu_l$ and hence are the same for ${\gamma_l}$ and for ${\gamma^\prime_l}$. Thus
$$\int_{\lambda_l-\lambda^\prime_l}\Omega=0,$$
where $\lambda_l,\lambda^\prime_l$ are the contour components of ${\gamma_l},{\gamma^\prime_l}$. From the above discussion, with our choice of the basis of $H_1(S_l,{\mathbb{Z}})$, both $\lambda_l$ and $\lambda^\prime_l$ are combinations of the $a_r$ only. Therefore $\lambda_l=\lambda^\prime_l$ on every $S_l$ and, consequently, $F=F^\prime$. Theorem \ref{one} follows from \eqref{crux}, \eqref{dK}, and \eqref{lines}.
\medskip
To prove Theorem \ref{two}, we define the function $F$ in \eqref{Fbundle} as $\sum_{l=1}^k \phi({S_l})$, and take the cycle $c$ on $S$ to be the sum of $\gamma$'s on the different $S_l$. Theorem \ref{two} follows easily from \eqref{crux}, \eqref{dK}, and \eqref{lines} (all terms of the form $ R(\Delta^+,\Delta^-)$, arising from different $S_l$, cancel).
\section{Examples}
\subsection{$SU(2)$-monopole metrics and asymptotic monopole metrics}
The moduli space of $SU(2)$-monopoles of charge $n$ is a $4n$-dimensional complete hyperk\"ahler manifold, biholomorphic to the space of rational maps ${\mathbb{P}}^1\rightarrow {\mathbb{P}}^1$ of degree $n$. When we vary the complex structure, the denominator of the rational map, corresponding to a given monopole, traces a curve $S\in |{\mathcal{O}}(2n)|$. Hitchin \cite{Hit} shows that the line bundle $L^2$ on ${\mathbb{T}}$ with transition function $\exp(2\eta/\zeta)$ is trivial on $S$. The monopole metric is a basic example of Theorem \ref{two}. Indeed, it has been shown by Ivanov and Ro\v{c}ek \cite{IR} (for $n=2$) and by Houghton \cite{Hough} (for arbitrary $n$) that the monopole metric can be constructed via the generalised Legendre transform from the function
\begin{equation} F=-\frac{1}{2\pi i}\oint_{\tilde 0} \frac{\eta^2}{\zeta^3}d\zeta + \oint_c \frac{\eta}{\zeta^2}d\zeta, \label{Fmon}\end{equation}
on ${\mathcal{S}}(n)$.
\medskip
It has been known since the work of Taubes \cite{Tau} that the infinity of the moduli space of (centred) monopoles corresponds to a monopole decaying to a superposition of monopoles of lower charges. Thus, for any partition $(n_1,\dots,n_k)$ of $n$, there is an asymptotic region of the monopole moduli space, where monopoles are approximately a superposition of $k$ monopoles of charges $n_1,\dots,n_k$. To understand the asymptotic dynamics, we guess that the metric is approximated by the one given by \eqref{Fmon}, but this time defined on unions of spectral curves of degrees $n_1,\dots,n_k$. In other words, this time $F$ is defined on ${\mathcal{S}}(n_1,\dots,n_k)$. Corollary \ref{answer0} implies that the condition \eqref{F3} is equivalent to $L^2_{|S_{l}}\simeq [\Delta_l^+-\Delta_l^-]$ for every $l$, where $\Delta_l^++\Delta^-_l$ is the divisor cut out on $S_l$ by the other curves. These are, indeed, the constraints for the asymptotic monopole metrics considered in \cite{clusters}, and Theorem \ref{two} shows that the metrics produced by the GLT in this case are those in \cite{clusters}. In fact, it was this GLT approach which first suggested what the asymptotic monopole metrics should be.
\medskip
We observe that Remark \ref{tildeH} applies to these asymptotic metrics as well, and the constraints \eqref{F3} are equivalent to \eqref{02} being valid on every $S_l$, $l=1,\dots,k$.
\subsection{$SU(N)$-monopole metrics} We recall the twistor description, due to Hurtubise and Murray, of the moduli space of $SU(N)$-monopoles with maximal symmetry breaking. An $SU(N)$-monopole has a magnetic charge $(m_1,\dots,m_{N-1})$ and its Higgs field at infinity is conjugate to $\sqrt{-1}\operatorname{diag}(\mu_1,\dots,\mu_N)$, with $\mu_1<\mu_2<\dots<\mu_N$. A generic monopole with these data corresponds to a collection of $\tau$-invariant compact spectral curves $S_p\in |{\mathcal{O}}(2m_p)|$, $p=1,\dots,N-1$, in generic position, along with a splitting $S_p\cap S_{p-1}=S_{p,p-1}\cup S_{p-1,p}$ into disjoint subsets, such that $\tau(S_{p,p-1})= S_{p-1,p}$ and, over $S_p$,
$$ L^{\mu_{p+1}-\mu_p}(m_{p-1}+m_{p+1})[-S_{p,p+1}-S_{p-1,p}]\simeq {\mathcal{O}},$$
where $L^s$ is the line bundle defined in the previous subsection. In addition, there are vanishing and positivity conditions - see \cite[p.38]{HuMu}.
\par
The moduli space of $SU(N)$-monopoles with fixed $(m_1,\dots,m_{N-1})$ and $(\mu_1,\dots,\mu_N)$ has a natural hyperk\"ahler metric. It follows from the work of Hurtubise and Murray \cite{HuMu} that the Nahm transform induces a biholomorphism between the twistor space of this metric and the twistor space of the natural $L^2$ metric on the moduli space of solutions to Nahm's equations. Moreover, this biholomorphism commutes with the real structure, preserves the twistor lines and the fibres of the projections onto ${\mathbb{P}}^1$. Therefore the Nahm transform preserves the hypercomplex structure and hence the Levi-Civita connection. \par
It follows now, by comparing \cite[\S 3]{HuMu} with \cite{BielCMP}, that the metric on the moduli space of solutions to Nahm's equations can be described in terms of the above spectral data by the formula \eqref{Omega2}, where $\bigl(\zeta,\eta^l_j(\zeta)\bigr)$, $j=1,\dots,m_l$, are the points of $S_l$ over $\zeta$, $l=1,\dots,N-1$, and $f_0^l(\zeta,\eta)$ represents a section $\sigma_l$ of
\begin{equation} L^{\mu_{l+1}-\mu_l}(m_{l+1}-m_{l-1})[-S_{l,l+1}+S_{l-1,l}],\label{L^mu}\end{equation}
satisfying $\sigma_l\overline{\tau^\ast \sigma_l}=\frac{P_{l+1}}{P_{l-1}}$, where $P_l=P_l(\zeta,\eta)$ is the polynomial defining $S_l$. Let $\nu_l=\sigma_l/\overline{\tau^\ast \sigma_l}$, $l=1,\dots, N-1$. It satisfies $\nu_l\overline{\tau^\ast \nu_l}=1$ and it is a section of
$$ L^{\mu_{l+1}-\mu_l}(m_{l+1}-m_{l-1})[-S_{l,l+1}+S_{l-1,l}]\otimes \left(L^{-\mu_{l+1}+\mu_l}(m_{l+1}-m_{l-1})[-S_{l+1,l}+S_{l,l-1}]\right)^\ast,$$
i.e. of \begin{equation}L^{2\mu_{l+1}-2\mu_l}[S_{l+1,l}+S_{l-1,l}-S_{l,l+1}-S_{l,l-1}].\label{L^2mu}\end{equation}
Moreover $\nu_l$ is represented, on $\{\zeta\neq 0\}$, by $\tilde{f}_0^l(\zeta,\eta)=\bigl(f_0^l(\zeta,\eta)\bigr)^2\frac{P_{l-1}(\zeta,\eta)}{P_{l+1}(\zeta,\eta)}$ and we compute \eqref{Omega2} (omitting $\zeta$ in $\eta^l_j(\zeta)$):
\begin{multline*}
\sum_{l=1}^{N-1}\sum_{j=1}^{m_l}\frac{d\tilde f_0^l(\zeta,\eta_j^l)}{\tilde f_0^l(\zeta,\eta_j^l)}\wedge d\eta_j^l\\= 2\sum_{l=1}^{N-1}\sum_{j=1}^{m_l}\frac{df_0^l(\zeta,\eta_j^l)}{f_0^l(\zeta,\eta_j^l)}\wedge d\eta_j^l+\sum_{l=1}^{N-1}\sum_{j=1}^{m_l} \left(\sum_{i=1}^{m_{l-1}}\frac{d\eta_i^{l-1}}{\eta_i^{l-1}-\eta_j^l}-
\sum_{i=1}^{m_{l+1}}\frac{d\eta_i^{l+1}}{\eta_i^{l+1}-\eta_j^l}\right)\wedge d\eta_j^l\\
=2\sum_{l=1}^{N-1}\sum_{j=1}^{m_l}\frac{df_0^l(\zeta,\eta_j^l)}{f_0^l(\zeta,\eta_j^l)}\wedge d\eta_j^l.
\end{multline*}
Thus, we are in the situation described in Theorem \ref{two}, and the $SU(N)$-monopole metrics arise from the GLT applied
to the function
\begin{equation} F=\frac{1}{2}\left(\oint_c\frac{\eta}{\zeta^2}d\zeta-\frac{1}{2\pi i}\sum_{l=1}^{N-1} (\mu_{l+1}-\mu_l)\oint_{\tilde{0}_l}\frac{\eta^2}{\zeta^3}d\zeta\right)\label{FSU(N)}\end{equation}
on ${\mathcal{S}}(m_1,\dots,m_{N-1})$.
The cycle $c$ satisfies $\tau_\ast c=-c$ and it enters each $S_l$ at points of $S_{l,l+1}+S_{l,l-1}$ and leaves at points of
$S_{l+1,l}+S_{l-1,l}$ ($c$ is determined by the sections $\nu_l$ as in the proof of Theorem \ref{two}).
\begin{remark} For $N=3$ and $\mu_3-\mu_2=\mu_2-\mu_1=1$, the function $F$ is just half of the one corresponding to the asymptotic $SU(2)$-monopole metric. Nevertheless, the cycles, and hence the metrics, are different. What happens is that the twistor space is the same in both cases, but the real sections corresponding to $SU(3)$-monopoles belong to a different connected component from the sections corresponding to the asymptotic $SU(2)$-monopole metric. This can be seen from the corresponding Nahm flow, which has a singularity at $-1,0,1$ for the $SU(3)$-monopoles \cite{HuMu}, but is smooth on $(-2,0)$ and on $(0,2)$ for the asymptotic $SU(2)$-monopole metric. The point is that the triviality of \eqref{L^2mu} does not imply the triviality of \eqref{L^mu}.\end{remark}
Once again, we can guess the form of the asymptotic metric. In the region where the monopole of type $l$, $l=1,\dots, N-1$, is approximately a superposition of $k$ monopoles of charges $n_1,\dots,n_k$, the asymptotic metric is given by the GLT applied to \eqref{FSU(N)} on ${\mathcal{S}}(m_1,\dots,m_{l-1},n_1,\dots,n_k,m_{l+1},\dots,m_{N-1})$.
\subsection{Adjoint orbits and related metrics}
It is by now well-known that adjoint orbits of complex semisimple Lie groups carry hyperk\"ahler metrics (cf. \cite{Kr}). For regular semisimple orbits, the most general construction is due to Alekseevsky and Graev \cite{AG} and to Santa-Cruz \cite{SC}, who associate a $U(k)$-invariant pseudo-hyperk\"ahler structure to any reduced spectral curve $S\in |{\mathcal{O}}(2k)|$ (provided $S$ satisfies a reality condition). Suppose that $S$ is a curve given by \eqref{S} and the polynomial coefficients $a_i(\zeta)$ satisfy \eqref{sigma}. The twistor space is defined as
\begin{equation} Z_S= \left\{ p\in {\mathcal{O}}(2)\otimes {\mathfrak g \mathfrak l}(k,{\mathbb C});\enskip \text{$p$ is a regular matrix}\right\} \label{Z_S}
\end{equation}
A real section of $Z_S\rightarrow {\mathbb{P}}^1$ is a quadratic polynomial $
A(\zeta)=A_0+A_1\zeta+A_2\zeta^2$, such that $A(\zeta)$ is a regular matrix for every $\zeta$ and
which satisfies $ A_0=-A_2^\ast, \enskip A_1=A_1^\ast $. Such a real section is a twistor line if, in addition, the normal bundle of $A(\zeta)$ is the sum of ${\mathcal{O}}(1)$'s. This last condition translates into a condition on centralisers of $A(\zeta)$ - see \cite[Theorem 4]{SC}.
The manifold $N_S$ of twistor lines is a pseudo-hyperk\"ahler manifold.
Observe that a fibre of $Z_S$ over a $\zeta\in {\mathbb{P}}^1$, such that the fibre of $S$ over it consists of distinct points, is an adjoint $GL(k,{\mathbb C})$ orbit. On such a fibre the twisted form $\Omega$ is just the Kostant-Kirillov-Souriau form. Consequently, with respect to the complex structure corresponding to such a (generic) $\zeta\in {\mathbb{P}}^1$, $N_S$ is isomorphic to an open subset of an adjoint orbit. The well-known complete hyperk\"ahler metrics of Kronheimer correspond to $S$ fully reducible, i.e.\ a union of rational curves.
\par
We now claim that the pseudo-hyperk\"ahler structure of $N_S$ can be obtained via the generalised Legendre transform.
We consider the space $W_S\subset {\mathcal{S}}(1,\dots,k)$, defined by setting $S_k=S$, and apply the GLT to the function
\begin{equation} \oint_c \frac{\eta}{\zeta^2}d\zeta\label{Forbit}\end{equation}
on $W_S$. Indeed, the results of \cite{Bie-Pidst} imply that the Kostant-Kirillov-Souriau form of regular adjoint orbits of $GL(k,{\mathbb C})$ is trivialised in coordinates given by the Gelfand-Zeitlin map (considered in \S \ref{conj}) and by the Gelfand-Zeitlin torus. Thus, the twistor space of $N_S$ can be trivialised, owing to Proposition \ref{O(2)} with $d=2$, by the curves $S_l$ and sections $\sigma_l$ of ${\mathcal{O}}(2)[D_{l-1}-D_l]$ satisfying $\sigma_l\overline{\tau^\ast \sigma_l}=\frac{P_{l+1}}{P_{l-1}}$, where $P_l=P_l(\zeta,\eta)$ is the polynomial defining $S_l$. We now proceed as for $SU(N)$-monopoles and conclude, from Theorem \ref{two}, that the metric of $N_S$ is given by the function $\eqref{Forbit}$ on $W_S$.
\medskip
We can also consider the function \eqref{Forbit} on the full ${\mathcal{S}}(1,\dots,k)$. We obtain a (pseudo)-hyperk\"ahler manifold $N$, from which all the $N_S$ can be produced via the twistor quotient construction, as in \S\ref{gen}. The complex symplectic structure of $N$ is that of $GL(k,{\mathbb C})\times P$, where $P\simeq {\mathbb C}^k$ is a regular Slodowy slice (cf. \cite{Bie-Pidst}). The metric on $N$ is a limiting case of the metrics on moduli spaces of $SU(k+1)$-monopoles of charge $(1,\dots,k)$, if we allow $\mu_p-\mu_{p+1}\rightarrow 0$ for $p=1,\dots,k$. The metric on $N$ probably has an $SU(N)$-symmetry, just like the metrics on each $N_S$.
\section{Hyperk\"ahler metrics corresponding to $[\eta^2/\zeta^2]$}
Theorem \ref{one} implies that there should be a whole hierarchy of hyperk\"ahler manifolds analogous to $SU(2)$-monopole spaces and corresponding to other $H$ in the formula \eqref{Fbundle} (with $k=1$). Their twistor spaces are obtained by glueing two copies of the space of rational maps of degree $n=m_1$ as in \cite[pp. 49--50]{AH}. The real sections correspond to
spectral curves on which the line bundle with transition function $\exp\frac{\partial H}{\partial \eta}$ is trivial. Let us write $l(\zeta,\eta)=\frac{\partial H}{\partial \eta}$ and $E^s$ for the line bundle with the transition function $\exp sl(\zeta,\eta)$. From the description of $H^1(S,{\mathcal{O}}_S)$, we have
$$ l(\zeta,\eta)=\sum_{i=1}^{n-1}\frac{\eta^i}{\zeta^i}q_i(\zeta),\quad q_i(\zeta)=\sum_{r=-i+1}^{i-1} d_{r,i}\zeta^r,$$
for some complex numbers $d_{r,i}$. Moreover, $E=E^1$ is real and $H$ satisfies \eqref{Greal} if and only if $\overline{d_{r,i}}=(-1)^rd_{-r,i}$. Now, according to the general theory \cite{AHH} the flow in the direction $E^s$ on the affine Jacobian $J^{g-1}_\text{aff}$ of line bundles of degree $g-1$ corresponds to a flow of matricial polynomials, and, hence, a hyperk\"ahler metric exists on the space of matricial flows which correspond to periodic flows. The periodicity means that the matrices should have the same behaviour at $s=1$ as at $s=0$; the latter being canonically determined by the flow on the Jacobian approaching the bundle ${\mathcal{O}}_S(n-2)\in J^{g-1}$.
\par
In the simplest case, $l(\zeta,\eta)=\frac{\eta}{\zeta}$, one obtains Nahm's equations. We wish to discuss briefly the next simplest case $l(\zeta,\eta)=\frac{\eta^2}{\zeta^2}$. One obtains a flow of endomorphisms $\tilde{A}(s,\zeta)=\tilde{A}_0(s)+\tilde{A}_1(s)\zeta+\tilde{A}_2(s)\zeta^2$ of the vector space $H^0\bigl(S, E^s(n-1)\bigr)$ by the general prescription as in \cite{AHH} or \cite{Hit}.
To obtain matrices $A(s,\zeta)=A_0(s)+A_1(s)\zeta+A_2(s)\zeta^2$, one needs to choose a connection. Since we want the matrices to satisfy the Hermitian conditions
\begin{equation} A_0(s)^\ast=-A_2(s),\enskip A_1(s)^\ast=A_1(s),\label{herm}\end{equation} we choose the connection which preserves the Hitchin metric \cite[eq. (6.1)]{Hit} on $H^0\bigl(S, E^s(n-1)\bigr)$. By analogy with \cite{Hit} one considers $\frac{\tilde{A}(s,\zeta)^2}{\zeta^2}$ and takes half of the $\zeta$-constant term together with the terms of positive degree in $\zeta$. One can check, as in \cite[pp. 179--181]{Hit}, that this connection
\begin{equation} \nabla_sf=\frac{\partial f}{\partial s}+\left(\frac{1}{2}\bigl(\tilde{A}_1^2+\tilde{A}_0\tilde{A}_2+\tilde{A}_2\tilde{A}_0\bigr)+ \bigl(\tilde{A}_1\tilde{A}_2+\tilde{A}_2\tilde{A}_1\bigr)\zeta + \tilde{A}_2^2\zeta^2\right)f\label{connection}\end{equation}
preserves the metric and gives, after some manipulation, the following equations on the matrices $A_i(s)$:
\begin{eqnarray*}\frac{\partial A_0}{\partial s} & = &\frac{1}{2}[A_0,A_1^2]+\frac{1}{2}[A_0^2, A_2]\nonumber \\
\frac{\partial A_2 }{\partial s} & = &\frac{1}{2}[A_0,A_2^2]+\frac{1}{2}[A_1^2, A_2]\\
\frac{\partial A_1 }{\partial s} & = & A_0A_1A_2-A_2A_1A_0+\frac{1}{2}A_0A_2A_1-\frac{1}{2}A_1A_2A_0+\frac{1}{2}A_1A_0A_2-\frac{1}{2}A_2A_0A_1. \nonumber
\end{eqnarray*}
These equations are invariant under the real structure \eqref{herm} and, if we set $A_0=T_2+iT_3, \enskip A_1=i\sqrt{3}T_1,\enskip A_2=T_2-iT_3$, we obtain the following system of ODEs for the $n\times n$ skew-hermitian matrices $T_1,T_2,T_3$:
\begin{eqnarray*} \frac{\partial T_1}{\partial s} & = &i\bigl(T_3T_1T_2-T_2T_1T_3+T_3T_2T_1-T_1T_2T_3+T_1T_3T_2-T_2T_3T_1\bigr)\\
\frac{\partial T_2}{\partial s} & = & \frac{3}{2}i\bigl[T_3,T_2^2-T_1^2\bigr]\\
\frac{\partial T_3}{\partial s} & = & \frac{3}{2}i\bigl[T_3^2-T_1^2,T_2\bigr].\end{eqnarray*}
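As a sanity check of the algebraic structure of this system (not part of any argument in this paper), the following short numerical sketch integrates the flow for randomly chosen skew-hermitian $T_1,T_2,T_3$ with a classical Runge-Kutta step and verifies that skew-hermiticity is preserved along the flow; the matrix size, step size and initial data are arbitrary choices made only for illustration.
\begin{verbatim}
import numpy as np

def rhs(T1, T2, T3):
    # Right-hand sides of the flow for skew-hermitian T1, T2, T3.
    dT1 = 1j * (T3 @ T1 @ T2 - T2 @ T1 @ T3 + T3 @ T2 @ T1
                - T1 @ T2 @ T3 + T1 @ T3 @ T2 - T2 @ T3 @ T1)
    B = T2 @ T2 - T1 @ T1           # T2^2 - T1^2
    C = T3 @ T3 - T1 @ T1           # T3^2 - T1^2
    dT2 = 1.5j * (T3 @ B - B @ T3)  # (3/2) i [T3, T2^2 - T1^2]
    dT3 = 1.5j * (C @ T2 - T2 @ C)  # (3/2) i [T3^2 - T1^2, T2]
    return dT1, dT2, dT3

def rk4_step(Ts, h):
    # One classical fourth-order Runge-Kutta step for the triple Ts.
    k1 = rhs(*Ts)
    k2 = rhs(*[T + 0.5 * h * k for T, k in zip(Ts, k1)])
    k3 = rhs(*[T + 0.5 * h * k for T, k in zip(Ts, k2)])
    k4 = rhs(*[T + h * k for T, k in zip(Ts, k3)])
    return tuple(T + (h / 6) * (a + 2 * b + 2 * c + d)
                 for T, a, b, c, d in zip(Ts, k1, k2, k3, k4))

rng = np.random.default_rng(0)
n, h = 3, 0.01

def skew(n):
    # A random n x n skew-hermitian matrix, scaled down for stability.
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return 0.1 * (A - A.conj().T)

Ts = tuple(skew(n) for _ in range(3))
for _ in range(200):
    Ts = rk4_step(Ts, h)
# Skew-hermiticity should be preserved up to integration error.
print([float(np.linalg.norm(T + T.conj().T)) for T in Ts])
\end{verbatim}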
We do not know whether these equations have occurred in a different context. The correct boundary conditions guaranteeing periodicity of the flow $E^s$ need investigating (along the lines of \cite{Hit}). We expect that $s=0$ and $s=1$ are regular singular points for the $T_i$, which there have an expansion of the form $(s-a)^{-1/2}\cdot(\text{\em analytic})$, $a=0,1$. In addition, the leading term will depend on the coefficients of the curve, unlike for Nahm's equations. The metric will not be an $L^2$-metric; it has to be computed from the formula \eqref{Omega2}. We also note that for $n=1$ we obtain the metric considered in Example \ref{harm}.
\section{Introduction}
\label{sec:intro}
One reason for the success of string diagrams, see \cite{selinger} for an overview, can be formulated by the slogan `only connectivity matters' \cite[Sec.10.1]{coecke-kissinger}. Technically, this is usually achieved by ordering input and output wires and using their ordinal numbers as implicit names. We write $\underline n = \{1,\ldots n\}$ to denote the set of $n$ numbered wires and $f:\underline n\to \underline m$ for diagrams $f$ with $n$ inputs and $m$ outputs. This approach is particularly convenient for the generalisations of Lawvere theories known as $\mathsf{PROP}$s \cite{maclane:prop}. In particular, the paper on composing $\mathsf{PROP}$s \cite{lack} has been influential \cite{rewriting-modulo,signal-flow-1}.
\medskip On the other hand, if only connectivity matters, it is natural to consider a formalisation of string diagrams in which wires are not ordered. Thus, instead of ordering wires, we fix a countably infinite set
\[\mathcal N\]
of `names' $a,b,\ldots$, on which the only supported operation or relation is equality.
Mathematically, this means that we work internally in the category of nominal sets introduced by Gabbay and Pitts \cite{gabbay-pitts,pitts}. In the remainder of the introduction, we highlight some of the features of this approach.
\medskip\textbf{\large Partial commutative vs total symmetric tensor. }
One reason why ordered names are convenient is that the tensor $\oplus$ is given by the categorical coproduct (addition) in the skeleton $\mathbb F$ of the category of finite sets. Even though $\underline n\oplus \underline m= \underline m\oplus \underline n$ on objects, the tensor is not commutative but only symmetric, since the canonical arrow $\underline n\oplus \underline m \to \underline m\oplus \underline n$ is not the identity.
\medskip
On the other hand, in the category $\mathsf n\mathbb F$ of finite subsets of $\mathcal N$ (which is equivalent to $\mathbb F$ as an ordinary category), there is a commutative tensor $A \uplus B$ given by union of disjoint sets. The interesting feature that makes commutativity possible is that $\uplus$ is partial with $A\uplus B$ defined if and only if $A\cap B=\emptyset$.
\medskip
While it would be interesting to develop a general theory of partially monoidal categories, our approach in this paper is based on the observation that the partial operation $\uplus:\mathsf n\mathbb F\times \mathsf n\mathbb F\to \mathsf n\mathbb F$ is a total operation $\uplus:\mathsf n\mathbb F\ast \mathsf n\mathbb F\to \mathsf n\mathbb F$ where $\ast$ is the separated product of nominal sets \cite{pitts}.
\medskip\textbf{\large Symmetries disappear in 3 dimensions. }
From a graphical point of view, the move from ordered wires to named wires corresponds to moving from planar graphs to graphs in 3 dimensions. Instead of having a one dimensional line of inputs or outputs, wires are now sticking out of a plane \cite{joyal-street:tensor1}. As a benefit there are no wire-crossings, or, more technically, there are no symmetries to take care of. This simplifies the rewrite rules of calculi formulated in the named setting. For example, rules such as
\vspace{-1em}
\begin{center}
\includegraphics[page=32, width=6cm, height=3cm]{twists_new}
\end{center}
\vspace{-2em}
are not needed anymore. For more on this compare Figs~\ref{fig:smt-theories} and~\ref{fig:nmt-theories}.
\medskip\textbf{\large Example: Simultaneous Substitutions. }
Substitutions $[a{\mapsto}b]$
can be composed sequentially and in parallel as in
\[
[a{\mapsto}b]\hspace{0.24ex};[b{\mapsto}c] = [a{\mapsto}c]
\quad\quad
\quad\quad
[a{\mapsto}b] \uplus [c{\mapsto}d] = [a{\mapsto}b, c{\mapsto}d].
\]
We call $\uplus$ the tensor, or the monoidal, vertical or parallel composition. Semantically, the simultaneous substitution on the right-hand side above will correspond to the function
$f:\{a,c\}\to \{b,d\}$
satisfying $f(a)=b$ and $f(c)=d$.
Importantly, parallel composition of simultaneous substitutions is partial. For example,
$[a{\mapsto}b] \uplus [a{\mapsto}c]$
is undefined, since there is no function $\{a\}\to\{b,c\}$ that maps $a$ simultaneously to both $b$ and $c$.
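To make this concrete, here is a minimal executable sketch (names are modelled as strings, codomains are identified with images for brevity, and the function names \texttt{seq} and \texttt{par} are our own ad hoc choices, not notation from the literature):
\begin{verbatim}
def seq(f, g):
    # Sequential composition f ; g (diagrammatic order); for simplicity we
    # require the image of f to coincide with the domain of g.
    if set(f.values()) != set(g):
        raise ValueError("codomain of f must equal domain of g")
    return {a: g[f[a]] for a in f}

def par(f, g):
    # Parallel composition: union of graphs, defined only when the
    # domains and the codomains are disjoint.
    if set(f) & set(g) or set(f.values()) & set(g.values()):
        raise ValueError("undefined: overlapping names")
    return {**f, **g}

assert seq({'a': 'b'}, {'b': 'c'}) == {'a': 'c'}
assert par({'a': 'b'}, {'c': 'd'}) == {'a': 'b', 'c': 'd'}
# The swap on {a, b} is a single parallel composition:
assert par({'a': 'b'}, {'b': 'a'}) == {'a': 'b', 'b': 'a'}
try:
    par({'a': 'b'}, {'a': 'c'})   # no function maps a to both b and c
except ValueError as err:
    print(err)
\end{verbatim}
The failure of $[a{\mapsto}b] \uplus [a{\mapsto}c]$ discussed above is precisely the exceptional branch of \texttt{par}.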
\medskip\medskip\textbf{\large The advantages of a 2-dimensional calculus} for simultaneous substitutions over a 1-dimensional calculus are the following.
A calculus of substitutions is an algebraic representation, up to isomorphism, of the category $\mathsf n\mathbb F$ of finite subsets of $\mathcal N$. In a 1-dimensional calculus, operations $[a{\mapsto}b]$ have to be indexed by finite sets $S$
\[[a{\mapsto}b]_S:S\cup \{a\}\to S\cup \{b\}\]
for sets $S$ with $a,b\notin S.$
On the other hand, in a 2-dimensional calculus with an explicit operation $\uplus$ for set-union, indexing with subsets $S$ is unnecessary. Moreover, while the swapping
\[\{a,b\}\to\{a,b\}\]
in the 1-dimensional calculus needs an auxiliary name such as $c$ in
$
[a{\mapsto}c]_{\{b\}} \hspace{0.24ex}; [b{\mapsto} a]_{\{c\}} \hspace{0.24ex}; [c{\mapsto}b]_{\{a\}}
$
it is represented in the 2-dimensional calculus directly by
\[
[a{\mapsto}b] \uplus [b{\mapsto}a]
\]
Finally, while it is possible to write down the equations and rewrite rules for the 1-dimensional calculus, it does not appear particularly natural. In particular, only in the 2-dimensional calculus does the swapping
have a simple normal form such as $[a{\mapsto}b] \uplus [b{\mapsto}a]$ (unique up to commutativity of $\uplus$).
\medskip\textbf{\large Overview. }
In order to account for partial tensors, Section~\ref{sec:internal-monoidal}
develops the notion of a monoidal category internal in a symmetric monoidal category. Section~\ref{sec:examples} is devoted to examples, while Section~\ref{sec:nmts} introduces the notion of a nominal prop and Section~\ref{sec:equivalence} shows that the categories of ordinary and of nominal props are equivalent.
\section{Setting the Scene: String Diagrams and Nominal Sets}\label{sec:scene}
We review some of the necessary terminology but need to refer to the literature for details.
\subsection{String Diagrams}
The mathematical theory of string diagrams can be formalised via $\mathsf{PROP}$s as defined by MacLane~\cite{maclane}. There is also the weaker notion by Lack~\cite{lack}, see Remark 2.9 of Zanasi \cite{zanasi} for a discussion.
\medskip
A $\mathsf{PROP}$ (\textit{product and permutation category}) is a symmetric strict monoidal category, with natural numbers as objects, where the monoidal tensor $\oplus$ is addition. Moreover, $\mathsf{PROP}$s, along with strict symmetric monoidal functors that are identities on objects, form the category $\mathsf{PROP}$. A $\mathsf{PROP}$ contains all bijections between numbers as they can be generated from the symmetry (twist) $1\oplus 1\to 1\oplus 1$ and from the parallel composition $\oplus$ and sequential composition $;$ (which we write in diagrammatic order).
\medskip\noindent
$\mathsf{PROP}$s can be presented in algebraic form by operations and equations as \textit{symmetric monoidal theories} ($\mathrm{SMT}$s) \cite{zanasi}.
\medskip
An $\mathrm{SMT}$ $(\Sigma, E)$ has a set $\Sigma$ of generators, where each generator $\gamma \in \Sigma$ is given an arity $m$ and a co-arity $n$, usually written as $\gamma : m \to n$, and a set $E$ of equations. $\Sigma$-terms are obtained by composing generators in $\Sigma$ with the unit $\mathit{id} : 1 \to 1$ and the symmetry $\sigma : 2 \to 2$, using either the parallel or the sequential composition (see Fig~\ref{fig:smt-terms}). The equations in $E$ are pairs of $\Sigma$-terms with the same arity and co-arity.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ c c c }
\includegraphics[page=40, width=12mm]{twists_new} &
\qquad\qquad\includegraphics[page=41, width=12mm]{twists_new}\qquad\qquad{} &
\includegraphics[page=42, width=12mm]{twists_new} \\
\(\displaystyle\frac{}{ \gamma: m \to n \in \Sigma}\) &
\(\displaystyle\frac{}{ id:1 \to 1}\) &
\(\displaystyle\frac{}{\sigma : 2 \to 2}\) \\
\end{tabular}
\medskip
\medskip
\medskip
\begin{tabular}{ c c }
\includegraphics[page=39, width=60mm]{twists_new} &
\includegraphics[page=38, width=70mm]{twists_new} \\
\(\displaystyle\frac{ t:m\to n\quad\quad t':o\to p}{ t \oplus t' : m+o\to n+p}\) &
\(\displaystyle\frac{ t:m\to n\quad\quad s:n\to o}{ t\hspace{0.24ex}; s : m \to o}\)
\end{tabular}
\end{center}
\caption{SMT Terms}\label{fig:smt-terms}
\end{figure}%
\noindent
Given an $\mathrm{SMT}$, we can freely generate a $\mathsf{PROP}$ by taking $\Sigma$-terms as arrows, modulo the equations of Fig~\ref{fig:symmetric-monoidal-category} together with the smallest congruence (with respect to the two compositions) generated by the equations in $E$.
\begin{figure}[h]
\[
\begin{array}{cc}
\mathit{id}_m \hspace{0.24ex}; t = t = t \hspace{0.24ex}; \mathit{id}_n \qquad{}& \qquad
id_0 \oplus t = t = t \oplus id_0 \\[1ex]
(t\hspace{0.24ex}; s)\hspace{0.24ex}; r = t \hspace{0.24ex}; (s \hspace{0.24ex}; r) \qquad{}& \qquad
(t \oplus s) \oplus r = t \oplus (s \oplus r) \\[1ex]
\sigma_{1,1} \hspace{0.24ex}; \sigma_{1,1} = id_2 \qquad{}& \qquad
(s \hspace{0.24ex}; t) \oplus (u \hspace{0.24ex}; v) = (s \oplus u) \hspace{0.24ex}; (t \oplus v)
\\[1ex]
\multicolumn{2}{c}{(t \oplus id_z) \hspace{0.24ex}; \sigma_{n,z} = \sigma_{m,z} \hspace{0.24ex}; (id_z \oplus t)}
\end{array}
\]
\caption{Equations of symmetric monoidal categories}\label{fig:symmetric-monoidal-category}
\end{figure}
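As an illustration of the term formation rules of Fig~\ref{fig:smt-terms}, the following sketch (an ad hoc encoding of ours, not taken from the literature) builds $\Sigma$-terms as a small datatype that records arity and co-arity and rejects ill-typed sequential compositions; the equations of Fig~\ref{fig:symmetric-monoidal-category} would then be imposed as a quotient on such terms.
\begin{verbatim}
# Sigma-terms over a signature, tracking arity (inputs) and co-arity (outputs).
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    arity: int      # number of input wires
    coarity: int    # number of output wires
    label: str      # a readable description of the term

def gen(name, m, n):            # a generator gamma : m -> n in Sigma
    return Term(m, n, name)

ID = gen("id", 1, 1)            # the unit id : 1 -> 1
SWAP = gen("sigma", 2, 2)       # the symmetry sigma : 2 -> 2

def tensor(t, s):               # t (+) s : m+o -> n+p
    return Term(t.arity + s.arity, t.coarity + s.coarity,
                f"({t.label} (+) {s.label})")

def compose(t, s):              # t ; s : m -> o, needs coarity(t) = arity(s)
    if t.coarity != s.arity:
        raise ValueError("co-arity of t must match arity of s")
    return Term(t.arity, s.coarity, f"({t.label} ; {s.label})")

# Example: a 'copy' generator composed with a tensor of identities.
copy = gen("copy", 1, 2)
print(compose(copy, tensor(ID, ID)))   # Term(arity=1, coarity=2, ...)
\end{verbatim}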
\medskip
$\mathsf{PROP}$s admit a nice graphical presentation, wherein the sequential composition is modeled by horizontal composition of diagrams, and parallel/tensor composition is vertical stacking of diagrams (see Fig~\ref{fig:smt-terms}). We now present the $\mathrm{SMT}$s of \colorbox{black!30}{\strut bijections $\mathbb B$}, \colorbox{orange!60}{\strut injections $\mathbb I$}, \colorbox{green!40}{\strut surjections $\mathbb S$}, \colorbox{cyan!40}{\strut functions $\mathbb F$}, \colorbox{magenta!40}{\strut partial functions $\mathbb P$}, \colorbox{cyan!60!magenta!60}{\strut relations $\mathbb R$} and \colorbox{yellow!60}{\strut monotone maps $\mathbb{M}$}.\footnote{The theory of \colorbox{yellow!60}{\strut monotone maps $\mathbb{M}$} does not include equations involving the symmetry $\sigma$ and is in fact presented by a so-called $\mathsf{PRO}$ rather than a $\mathsf{PROP}$. However, in this paper we will only be dealing with theories presented by $\mathsf{PROP}$s (the reason why this is the case is illustrated in the proof of Proposition~\ref{prop:ORD}).}
The diagram in Fig~\ref{fig:smt-theories} shows the generators and the equations that need to be added to the empty $\mathrm{SMT}$, to get a presentation of the given theory.
To ease comparison with the corresponding nominal monoidal theories in Fig~\ref{fig:nmt-theories} later, we also added on a \stripbox{\strut striped} background the equations for wire-crossings that are already implied by the naturality of symmetries, that is, the last equation of Fig~\ref{fig:symmetric-monoidal-category}. These are the equations that are part of the definition of a prop in the sense of MacLane~\cite{maclane} but need to be added explicitly to the props in the sense of Lack~\cite{lack}.
\begin{figure}
\includegraphics[page=36, width=\linewidth]{twists_new}
\caption{Symmetric monoidal theories}\label{fig:smt-theories}
\end{figure}
\subsection{Nominal Sets}
Let $\mathcal N$ be a countably infinite set of `names' or `atoms'. Let $\mathfrak S$ be the group of finite\footnote{A permutation is called finite if it is generated by finitely many transpositions.} permutations $\mathcal N\to\mathcal N$. An element $x\in X$ of a set $X$ equipped with a group action $\mathfrak S\times X\to X$ is supported by $S\subseteq\mathcal N$ if $\pi\cdot x= x$ for all $\pi\in\mathfrak S$ such that $\pi$ restricted to $S$ is the identity. A group action $\mathfrak S\times X\to X$ such that all elements of $X$ have finite support is called a \emph{nominal set}.
We write $\mathsf{supp}(x)$ for the minimal support of $x$ and $\mathsf{Nom}$ for the category of nominal sets, which has as maps the \emph{equivariant} functions, that is, those functions that respect the permutation action. Our main example is the category of simultaneous substitutions:
\begin{example}[$\mathsf{Fun}$]\label{exle:nF}
We denote by $\mathsf{Fun}$ the category $\mathsf n\mathbb F$ of finite subsets of $\mathcal N$ and all functions between them. While $\mathsf{Fun}$ is a category, it also carries additional nominal structure. In particular, both the set of objects and the set of arrows are nominal sets, with $\mathsf{supp}(A)=A$ and $\mathsf{supp}(f)=A\cup B$ for $f:A\to B$. The categories of injections, surjections, bijections, partial functions and relations are further examples along the same lines.
\end{example}
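The nominal structure of $\mathsf{Fun}$ can be made concrete as follows (an illustrative sketch with our own naming conventions): arrows are triples of domain, codomain and graph, finite permutations act by renaming, the support of an arrow $f:A\to B$ is $A\cup B$, and composition is an equivariant operation.
\begin{verbatim}
# Arrows of Fun as (dom, cod, graph); permutations act by renaming names.

def arrow(dom, cod, graph):
    assert set(graph) == set(dom) and set(graph.values()) <= set(cod)
    return (frozenset(dom), frozenset(cod), dict(graph))

def act(pi, f):
    # Apply a finite permutation pi (a dict, identity outside its keys).
    p = lambda x: pi.get(x, x)
    dom, cod, graph = f
    return (frozenset(map(p, dom)), frozenset(map(p, cod)),
            {p(a): p(b) for a, b in graph.items()})

def supp(f):
    dom, cod, _ = f
    return dom | cod           # supp(f) = A u B for f : A -> B

def compose(f, g):
    fd, fc, fg = f
    gd, gc, gg = g
    assert fc == gd
    return (fd, gc, {a: gg[fg[a]] for a in fg})

f = arrow({'a'}, {'b'}, {'a': 'b'})
g = arrow({'b'}, {'c'}, {'b': 'c'})
pi = {'a': 'x', 'x': 'a'}      # the transposition (a x)
# Composition is equivariant: pi.(f ; g) = (pi.f) ; (pi.g)
assert act(pi, compose(f, g)) == compose(act(pi, f), act(pi, g))
print(supp(compose(f, g)))     # frozenset({'a', 'c'})
\end{verbatim}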
\section{Internal monoidal categories}\label{sec:internal-monoidal}
We introduce the notion of an internal monoidal category. Given a symmetric monoidal category $(\mathcal V,I,\otimes)$ with finite limits, we are interested in categories $\mathbb C$, internal in $\mathcal V$, that carry a monoidal structure not of type $\mathbb C\times \mathbb C\to \mathbb C$ but of type $\mathbb C\otimes \mathbb C\to \mathbb C$. This will allow us to account for the partiality of $\uplus$ discussed in the introduction:
\begin{example}
\begin{itemize}
\item The symmetric monoidal (closed) category $(\mathsf{Nom},1,\ast)$ of nominal sets with the separated product $\ast$ is defined as follows \cite{pitts}. $1$ is the terminal object, i.e.\ a singleton with empty support. The separated product of two nominal sets is defined as $A\ast B = \{(a,b)\in A\times B \mid \mathsf{supp}(a)\cap\mathsf{supp}(b)=\emptyset\}$.
\item The category $\mathsf{Fun}$ (and its relatives) of Example~\ref{exle:nF} is an internal monoidal category with monoidal operation given by $A\uplus B=A\cup B$ if $A$ and $B$ are disjoint.
\end{itemize}
\end{example}
\medskip\noindent
$(\mathsf{Fun},\emptyset,\uplus)$ as defined in the previous example is not a monoidal category, since $\uplus$, being partial, is not an operation of type $\mathsf{Fun}\times\mathsf{Fun}\to\mathsf{Fun}$.
The purpose of this section is to show that $(\mathsf{Fun},\emptyset,\uplus)$ is an internal monoidal category in $(\mathsf{Nom},1,\ast)$ with $\uplus$ of type
\[\uplus:\mathsf{Fun}\ast\mathsf{Fun}\to\mathsf{Fun}.\]
To this end we need to extend $\ast:\mathsf{Nom}\times\mathsf{Nom}\to\mathsf{Nom}$ to
\[\ast:\mathsf{Cat}(\mathsf{Nom})\times\mathsf{Cat}(\mathsf{Nom})\to\mathsf{Cat}(\mathsf{Nom})\]
where we denote by $\mathsf{Cat}(\mathsf{Nom})$ the category of (small) internal categories in $\mathsf{Nom}$.
\medskip
The necessary (and standard) notation from internal categories is reviewed in Appendix A.
\begin{remark}
Let $\mathbb C$ be an internal category in a symmetric monoidal category $(\mathcal V,I,\otimes)$ with finite limits. Since $\otimes$ need not preserve finite limits, we cannot expect that defining $(\mathbb C\otimes\mathbb C)_0=\mathbb C_0\otimes\mathbb C_0$ and $(\mathbb C\otimes\mathbb C)_1=\mathbb C_1\otimes\mathbb C_1$ results in $\mathbb C\otimes\mathbb C$ being an internal category.
\end{remark}
Consequently, putting $(\mathbb C\otimes\mathbb C)_1=\mathbb C_1\otimes\mathbb C_1$ does not extend $\otimes$ to an operation $\mathsf{Cat}(\mathcal V)\times\mathsf{Cat}(\mathcal V)\to\mathsf{Cat}(\mathcal V)$. The purpose of the next example is to show what goes wrong in a concrete instance.
\begin{example}
Define $\mathsf{Fun}\ast\mathsf{Fun}$ by $(\mathsf{Fun}\ast\mathsf{Fun})_0=\mathsf{Fun}_0\ast\mathsf{Fun}_0$ and $(\mathsf{Fun}\ast\mathsf{Fun})_1=\mathsf{Fun}_1\ast\mathsf{Fun}_1$. Then $\mathsf{Fun}\ast\mathsf{Fun}$ cannot be equipped with the structure of an internal category. Indeed, assume for a contradiction that there were an appropriate pullback $(\mathsf{Fun}\ast \mathsf{Fun})_2$ and an arrow $\mathit{comp}$ such that the two squares below commute:
\[
\xymatrix@C=20ex{
(\mathsf{Fun}\ast \mathsf{Fun})_2 \
\ar[0,1]|-{\ \mathit{comp}\ }
\ar[dd]_{\pi_1}^{\pi_2}
&
{\ \ \mathsf{Fun}_1\ast \mathsf{Fun}_1 \ \ }
\ar[dd]_{\mathit{dom}}^{\mathit{cod}}
\\
&
\\
\mathsf{Fun}_1\ast\mathsf{Fun}_1\
\ar[0,1]^{\ \mathit{dom}\ }_{\mathit{cod}}
&
{\ \ \mathsf{Fun}_0\ast\mathsf{Fun}_0 \ \ }
}
\]
Let $\delta_{xy}:\{x\}\to\{y\}$ be the unique function in $\mathsf{Fun}$ of type $\{x\}\to\{y\}$. Then $((\delta_{ac},\delta_{bd}), (\delta_{cb},\delta_{da}))$, which can be depicted as
\[
\xymatrix@R=0.5ex{
\{a\} \ar[r]^{\delta_{ac}} & \{c\} \ar[r]^{\delta_{cb}} & \{b\}\\
\{b\} \ar[r]_{\delta_{bd}} & \{d\} \ar[r]_{\delta_{da}} & \{a\}
}
\]
is in the pullback $(\mathsf{Fun}\ast \mathsf{Fun})_2$, but there is no $\mathit{comp}$ such that the two squares above commute, since $\mathit{comp}((\delta_{ac},\delta_{bd}), (\delta_{cb},\delta_{da}))$ would have to be $(\delta_{ab},\delta_{ba})$, whose components do not have disjoint support and which therefore is not in $\mathsf{Fun}_1\ast \mathsf{Fun}_1$.
\qed
\end{example}
The solution to the problem consists in assuming that the given symmetric monoidal category with finite limits $(\mathcal V,1,\otimes)$ is semi-cartesian (aka affine), that is, that the unit $1$ is the terminal object. In such a category there are canonical arrows
\[j:A\otimes B\to A\times B\]
and we can use them to define arrows $j_1:(\mathbb C\otimes \mathbb C)_1\to\mathbb C_1\times \mathbb C_1$ that give us the right notion of tensor on arrows. From our example $\mathsf{Fun}$ above, we know that we want arrows $(f,g)$ to be in $(\mathbb C\otimes \mathbb C)_1$ if $\mathit{dom}(f)\cap\mathit{dom}(g)=\emptyset$ and $\mathit{cod}(f)\cap\mathit{cod}(g)=\emptyset$. We now turn this observation into a category theoretic definition.
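A small executable sketch of this observation (function and variable names are ours, and codomains are identified with images for brevity): the separated pair from the example above is not closed under componentwise composition, whereas pairs that are only required to have disjoint domains and disjoint codomains are closed under it.
\begin{verbatim}
def seq(f, g):
    # Composition of finite functions represented as dicts.
    return {a: g[f[a]] for a in f}

def supp(f):
    return set(f) | set(f.values())

def separated(f, g):                 # (f, g) in Fun_1 * Fun_1
    return not (supp(f) & supp(g))

def in_tensor(f, g):                 # disjoint domains and codomains
    return (not (set(f) & set(g))
            and not (set(f.values()) & set(g.values())))

d_ac, d_bd = {'a': 'c'}, {'b': 'd'}
d_cb, d_da = {'c': 'b'}, {'d': 'a'}
composite = (seq(d_ac, d_cb), seq(d_bd, d_da))   # = (d_ab, d_ba)

# Both pairs are separated, but their componentwise composite is not ...
assert separated(d_ac, d_bd) and separated(d_cb, d_da)
assert not separated(*composite)
# ... whereas dom/cod-disjointness is preserved by composition.
assert in_tensor(d_ac, d_bd) and in_tensor(d_cb, d_da)
assert in_tensor(*composite)
\end{verbatim}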
\medskip
Let $\mathbb C$ and $\mathbb D$ be internal categories in $\mathcal V$. Our first task is to define $(\mathbb C\otimes \mathbb D)_1$. This is accomplished by stipulating that $(\mathbb C\otimes \mathbb D)_1$ is the limit in the diagram below
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes, row sep=1em, row 2/.style= {yshift=-6em}, column sep=0em, color=black]{
& |[color=cyan]|(\mathbb C\otimes\mathbb D)_1 & & & \mathbb C_1\times\mathbb D_1 & \\
& & \mathbb C_0\otimes\mathbb D_0 & & & \mathbb C_0\times\mathbb D_0 \\
\mathbb C_0\otimes\mathbb D_0 & & & \mathbb C_0\times\mathbb D_0 & & \\};
\path[-stealth, color=black]
(m-1-2) edge [color=cyan] node [above] {$j_1$} (m-1-5)
edge [color=cyan, densely dotted] node [right, xshift=0.7em,yshift=-1.25em] {$\mathit{cod}_{(\mathbb C\otimes\mathbb D)_1}$} (m-2-3)
edge [color=cyan] node [left] {$\mathit{dom}_{(\mathbb C\otimes\mathbb D)_1}$} (m-3-1)
(m-2-3) edge [densely dotted] node [above, xshift=2.5em] {$j$} (m-2-6)
(m-3-1) edge node [below] {$j$} (m-3-4)
(m-1-5) edge [densely dotted] node [right, xshift=-0.7em,yshift=1.25em] {$\mathit{cod}_{\mathbb C_1} \times \mathit{cod}_{\mathbb D_1}$} (m-2-6)
edge [-,line width=6pt,draw=white] (m-3-4) edge node [left,xshift=1.2em, yshift=2.5em] {$\mathit{dom}_{\mathbb C_1} \times \mathit{dom}_{\mathbb D_1}$} (m-3-4);
\end{tikzpicture}
\end{center}
\noindent
In the following we abbreviate the diagram above to
\begin{equation}
\label{equ:j1}
\vcenter{
\xymatrix@C=9ex{
(\mathbb C\otimes\mathbb D)_1 \ar[rr]^{j_1}
\ar@<-1ex>[d]_{\mathit{dom}}
\ar@<1ex>[d]^{\mathit{cod}}
&& \mathbb C_1\times\mathbb D_1
\ar@<-1ex>[d]_{\mathit{dom}\times \mathit{dom}}
\ar@<1ex>[d]^{\mathit{cod}\times \mathit{cod}}
\\
(\mathbb C\otimes\mathbb D)_0 \ar[rr]_j
&& \mathbb C_0\times\mathbb D_0
}}
\end{equation}
We are now in a position to extend the monoidal operation $\otimes:\mathcal V\times\mathcal V\to\mathcal V$ to a monoidal operation $\otimes:\mathsf{Cat}(\mathcal V)\times \mathsf{Cat}(\mathcal V)\to \mathsf{Cat}(\mathcal V)$.
\begin{definition}\label{def:internal-tensor}
Let $(\mathcal V,1,\otimes)$ be a monoidal category where the unit is the terminal object. The operation $\otimes:\mathsf{Cat}(\mathcal V)\times \mathsf{Cat}(\mathcal V)\to \mathsf{Cat}(\mathcal V)$ is defined as follows.
\begin{itemize}
\item $(\mathbb C\otimes\mathbb D)_0$ and $(\mathbb C\otimes\mathbb D)_1$ and $\mathit{cod},\mathit{dom}: (\mathbb C\otimes\mathbb D)_1\to (\mathbb C\otimes\mathbb D)_0$ as in the diagram above.
\item $i:(\mathbb C\otimes\mathbb D)_0\to (\mathbb C\otimes\mathbb D)_1$ is the arrow into the limit $(\mathbb C\otimes\mathbb D)_1$ given by
\[
\xymatrix@C=9ex{
(\mathbb C\otimes\mathbb D)_0
\ar@/^/[rrrd]^{(i\times i)\circ j}
\ar@/_/@<-1ex>[rdd]|-{\ \mathit{id} \ }
\ar@/_/@<-3ex>[rdd]|-{\ \mathit{id} \ }
\ar@{..>}[rd]|-{\ i\ }
&&&
\\
&(\mathbb C\otimes\mathbb D)_1 \ar[rr]|-{\ j_1 \ }
\ar@<-1ex>[d]_{\mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}}
&& \mathbb C_1\times\mathbb D_1
\ar@<-1ex>[d]_{\mathit{cod}\times \mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}\times \mathit{dom}}
\\
&{\ \ (\mathbb C\otimes\mathbb D)_0} \ar[rr]_j
&& \mathbb C_0\times\mathbb D_0
}
\]
from which one reads off
\[\mathit{dom}\circ i = \mathit{id}_{(\mathbb C\otimes\mathbb D)_0} = \mathit{cod}\circ i\]
\item $(\mathbb C\otimes\mathbb D)_2$ is the pullback
\[
\xymatrix{
&(\mathbb C\otimes\mathbb D)_2\ar[dl]_{\pi_1}\ar[dr]^{\pi_2}&
\\
(\mathbb C\otimes\mathbb D)_1\ar[dr]_{\mathit{cod}}
&&
(\mathbb C\otimes\mathbb D)_1 \ar[dl]^{\mathit{dom}}
\\
&(\mathbb C\otimes\mathbb D)_0&
}
\]
Recalling the definition of $j_1$ from \eqref{equ:j1}, there is also a corresponding $j_2:(\mathbb C\otimes\mathbb D)_2\to\mathbb C_2\times\mathbb D_2$ due to the fact that the product of pullbacks is a pullback of products
\begin{equation}\label{equ:j2}
\vcenter{
\xymatrix@C=2ex{
&(\mathbb C\otimes\mathbb D)_2\ar[dl]_{\pi_1}\ar[dr]^{\pi_2}\ar[rrr]^{j_2}&
&
&\mathbb C_2\times\mathbb D_2\ar[dl]_{\pi_1\times\pi_1}\ar[dr]^{\pi_2\times\pi_2}&
\\
(\mathbb C\otimes\mathbb D)_1\ar[dr]_{\mathit{cod}}\ar@/^1.2pc/@{..>}[rrr]|-{ j_1 }
&&
(\mathbb C\otimes\mathbb D)_1 \ar[dl]^{\mathit{dom}}\ar@/_1.2pc/@{..>}[rrr]|-{ j_1 }
&
\mathbb C_1\times\mathbb D_1\ar[dr]_{\mathit{cod}\times\mathit{cod}}
&&
\mathbb C_1\times\mathbb D_1 \ar[dl]^{\mathit{dom}\times\mathit{dom}}
\\
&(\mathbb C\otimes\mathbb D)_0\ar[rrr]^{j}&
&
&\mathbb C_0\times\mathbb D_0&
}}
\end{equation}
Recall the definition of the limit $(\mathbb C\otimes\mathbb D)_1$ from \eqref{equ:j1}.
Then $\mathit{comp}:(\mathbb C\otimes\mathbb D)_2\to(\mathbb C\otimes\mathbb D)_1$ is the arrow into $(\mathbb C\otimes\mathbb D)_1$
\begin{equation}\label{equ:comp}
\vcenter{
\xymatrix@C=9ex{
(\mathbb C\otimes\mathbb D)_2
\ar@/^/[rrrd]^{\ \ \ \ \ \ \ \ (\mathit{comp}\times \mathit{comp})\circ j_2}
\ar@/_/@<-0ex>[rdd]|-{\ \ \ \mathit{dom}\circ\pi_1 \ }
\ar@/_/@<-3ex>[rdd]|-{\ \mathit{cod}\circ\pi_2 \ }
\ar@{..>}[rd]|-{\ \mathit{comp}\ }
&&&
\\
&(\mathbb C\otimes\mathbb D)_1 \ar[rr]|-{\ j_1\ }
\ar@<-1ex>[d]_{\mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}}
&& \mathbb C_1\times\mathbb D_1
\ar@<-1ex>[d]_{\mathit{cod}\times \mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}\times \mathit{dom}}
\\
&{\ \ (\mathbb C\otimes\mathbb D)_0} \ar[rr]_j
&& \mathbb C_0\times\mathbb D_0
}}
\end{equation}
from which one reads off
\[\mathit{dom}\circ \mathit{comp}=\mathit{dom}\circ \pi_1\ \quad\quad \ \mathit{cod}\circ
\mathit{comp}=\mathit{cod}\circ \pi_2\]
\item The equations $\mathit{comp}\circ \langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle = \mathit{id}_{(\mathbb C\otimes\mathbb D)_1} = \mathit{comp}\circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle$ are proved in Proposition~\ref{prop:comp-i}.
\item The equation $\mathit{comp}\,\circ\,\mathit{compl}= \mathit{comp}\,\circ\,\mathit{compr}$ will be shown in Proposition~\ref{prop:comp-assoc}.
\end{itemize}
\end{definition}
This ends the definition of $\mathbb C\otimes\mathbb D$ and the next few pages are devoted to showing that it is indeed an internal category. To prove the next propositions, we will need the following lemma, which can be skipped for now. It is a consequence of the general fact that the isomorphism $[\mathcal I,\mathcal C](K_A,D)\cong\mathcal C(A,\lim D)$ defining limits is natural in $A$ and $D$.
\begin{lemma}\label{lem:hh'}
If in the diagram
\[
\xymatrix@C=9ex{
T
\ar[rr]^{ k }
\ar@/_4pc/@<-2ex>[dd]_{f_1}
\ar@/_4pc/@<-0ex>[dd]^{f_2}
\ar@<-0ex>[d]^{h}
&&
P
\ar@/^4pc/@<1ex>[dd]_{f'_1}
\ar@/^4pc/@<3ex>[dd]^{f'_2}
\ar@<-0ex>[d]^{h'}
\\
(\mathbb C\otimes\mathbb D)_2
\ar[rr]^{ j_2 }
\ar@<-1ex>[d]_{\pi_1}
\ar@<1ex>[d]^{\pi_2}
&&
\mathbb C_2\times\mathbb D_2
\ar@<-1ex>[d]_{\pi_1\times\pi_1}
\ar@<1ex>[d]^{\pi_2\times\pi_2}
\\
(\mathbb C\otimes\mathbb D)_1 \ar[rr]^{ j_1 }
\ar@<-1ex>[d]_{\mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}}
&&
\mathbb C_1\times\mathbb D_1
\ar@<-1ex>[d]_{\mathit{cod}\times \mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}\times \mathit{dom}}
\\
{\ \ (\mathbb C\otimes\mathbb D)_0} \ar[rr]_j
&& \mathbb C_0\times\mathbb D_0
}
\]
$f$ and $f'$ are cones commuting with $j_1$ and $k$, that is, if
\begin{align}
\label{equ:lemjh:1} \mathit{cod}\circ f_1 &= \mathit{dom}\circ f_2 \\
\label{equ:lemjh:2} (\mathit{cod}\times\mathit{cod})\circ f'_1 &= (\mathit{dom}\times\mathit{dom})\circ f'_2 \\
\label{equ:lemjh:3} j_1\circ f_i &= f'_i\circ k
\end{align}
and $h,h'$ are the respective unique arrows into the pullbacks, then also
\[h'\circ k=j_2\circ h\]
holds.
\end{lemma}
\begin{longVersion}
\begin{proof}
It suffices to calculate, in a slightly abbreviated style,
$\pi\circ h'\circ k
= f'\circ k
= j_1\circ f
= j_1\circ \pi\circ h
= \pi\circ j_2\circ h$
where the last equality is due to the definition of $j_2$ given in \eqref{equ:j2}. This implies $h'\circ k=j_2\circ h$.
\end{proof}
\end{longVersion}
\noindent
Using the lemma, the next two propositions have reasonably straightforward proofs.
\begin{proposition}\label{prop:comp-i}
$\mathit{comp}\circ \langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle = \mathit{id}_{(\mathbb C\otimes\mathbb D)_1} = \mathit{comp}\circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle$.
\end{proposition}
\begin{longVersion}
\begin{proof}
We show the first equality. According to the definition of the limit $(\mathbb C\otimes\mathbb D)_1$ given in \eqref{equ:j1}, it suffices to show
\begin{align*}
\mathit{cod}\circ\mathit{comp}\circ \langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle & = \mathit{cod}\\
\mathit{dom}\circ\mathit{comp}\circ \langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle & = \mathit{dom}\\
j_1\circ\mathit{comp}\circ \langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle & = j_1
\end{align*}
The first two follow from $\mathit{dom}\circ \mathit{comp}=\mathit{dom}\circ \pi_1$ and $\mathit{cod}\circ
\mathit{comp}=\mathit{cod}\circ \pi_2$.
Recalling that $\mathbb C$ and $\mathbb D$ are internal categories, so that $\mathit{comp}\circ\langle i\circ\mathit{dom},\mathit{id}_{\mathbb C_1}\rangle=\mathit{id}_{\mathbb C_1}$ (and likewise for $\mathbb D$) and, hence, $(\mathit{comp}\times\mathit{comp})\circ\langle (i\times i)\circ(\mathit{dom}\times\mathit{dom}),\mathit{id}_{\mathbb C_1\times\mathbb D_1} \rangle=\mathit{id}_{\mathbb C_1\times\mathbb D_1}$, the third follows because the two rectangles below commute.
\begin{equation}\label{eq:prop:comp-i}
\vcenter{
\xymatrix@C=9ex{
(\mathbb C\otimes\mathbb D)_1
\ar[rr]^{ j_1 }
\ar@<-0ex>[d]_{\langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle}
&&
\mathbb C_1\times\mathbb D_1
\ar@<-0ex>[d]^{\langle (i\times i)\circ(\mathit{dom}\times\mathit{dom}),\mathit{id}_{\mathbb C_1\times\mathbb D_1} \rangle}
\\
(\mathbb C\otimes\mathbb D)_2
\ar[rr]^{ j_2 }
\ar[d]_{\mathit{comp}}
&&
\mathbb C_2\times\mathbb D_2
\ar[d]^{\mathit{comp}\times\mathit{comp}}
\\
(\mathbb C\otimes\mathbb D)_1 \ar[rr]^{ j_1 }
&&
\mathbb C_1\times\mathbb D_1
}}
\end{equation}
The lower rectangle commutes by the definition of $\mathit{comp}$, see \eqref{equ:comp}.
To show that the upper rectangle commutes, we instantiate the lemma with $k=j_1$ and
$f_1=i\circ\mathit{dom}$ and $f_2=\mathit{id}_{(\mathbb C\otimes\mathbb D)_1}$ and $h=\langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle$ and $f_1'=(i\times i)\circ(\mathit{dom}\times\mathit{dom})$ and $f_2'=\mathit{id}_{\mathbb C_1\times\mathbb D_1}$ and $h'=\langle (i\times i)\circ(\mathit{dom}\times\mathit{dom}),\mathit{id}_{\mathbb C_1\times\mathbb D_1} \rangle$. Equations \eqref{equ:lemjh:1} and \eqref{equ:lemjh:2} are straightforward to check.
\begin{ak}
For \eqref{equ:lemjh:3}, we first verify
\begin{align*}
j\circ\mathit{cod}\circ f_i & =(\mathit{cod}\times\mathit{cod})\circ f'_i\circ j_1\\
j\circ\mathit{dom}\circ f_i& =(\mathit{dom}\times\mathit{dom})\circ f'_i\circ j_1
\end{align*}
For example,
\begin{align*}
j\circ\mathit{cod}\circ f_1
&= j\circ\mathit{cod}\circ i\circ\mathit{dom}\\
&= j\circ\mathit{dom}\\
&= (\mathit{dom}\times\mathit{dom})\circ j_1\\
&= (\mathit{cod}\times\mathit{cod})\circ(i\times i)\circ(\mathit{dom}\times\mathit{dom})\circ j_1\\
& =(\mathit{cod}\times\mathit{cod})\circ f'_1\circ j_1
\end{align*}
and the other cases are similar.
It follows that $(\mathit{cod}\circ f_1, \mathit{dom}\circ f_1, f_1'\circ j_1)$ is a cone over the limit \eqref{equ:j1}. Hence the `triangle' $j_1\circ f_1=f'_1\circ j_1$ commutes. Similarly, the triangle $j_1\circ f_2=f'_2\circ j_1$ commutes. We have now verified the assumptions of the lemma and conclude that the upper rectangle of Diagram \eqref{eq:prop:comp-i} commutes,
\end{ak}
which was all that remained to show $\mathit{comp}\circ \langle i\circ\mathit{dom},\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} \rangle = \mathit{id}_{(\mathbb C\otimes\mathbb D)_1}$. The second equality $\mathit{id}_{(\mathbb C\otimes\mathbb D)_1} = \mathit{comp}\circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle$ is proved similarly.
\begin{sam}
Again, we need to show:
\begin{align*}
\mathit{cod} \circ \mathit{comp} \circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle & = \mathit{cod}\\
\mathit{dom} \circ \mathit{comp} \circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle & = \mathit{dom}\\
j_1 \circ \mathit{comp} \circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle & = j_1
\end{align*}
We have:
\begin{align*}
\mathit{cod} \circ \mathit{comp} \circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle & =
\mathit{cod} \circ \pi_2 \circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle \\
& = \mathit{cod} \circ i\circ\mathit{cod} \\
& = \mathit{cod}
\end{align*}
(Similarly for $\mathit{dom} \circ \mathit{comp} \circ \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle = \mathit{dom}$.)
For the third equation, we have a diagram similar to the one above, where we need to check that the top square commutes. We instantiate the lemma and get the following diagram:
\[
\xymatrix@C=9ex{
T = (\mathbb C\otimes\mathbb D)_1
\ar[rr]^{ k = j_1 }
\ar@/_7pc/@<-2ex>[dd]_{f_1 = \mathit{id}_{(\mathbb C\otimes\mathbb D)_1}}
\ar@/_7pc/@<-0ex>[dd]^{f_2 = i\circ\mathit{cod}}
\ar@<-0ex>[d]^{h = \langle\mathit{id}_{(\mathbb C\otimes\mathbb D)_1},i\circ\mathit{cod} \rangle}
&&
P = \mathbb C_1\times\mathbb D_1
\ar@/^7pc/@<1ex>[dd]_{f'_1 = \mathit{id}_{\mathbb C_1\times\mathbb D_1}}
\ar@/^7pc/@<3ex>[dd]^{f'_2 = (i\times i)\circ(\mathit{cod}\times\mathit{cod})}
\ar@<-0ex>[d]^{h' = \langle \mathit{id}_{\mathbb C_1\times\mathbb D_1} , (i\times i)\circ(\mathit{cod}\times\mathit{cod}) \rangle}
\\
(\mathbb C\otimes\mathbb D)_2
\ar[rr]^{ j_2 }
\ar@<-1ex>[d]_{\pi_1}
\ar@<1ex>[d]^{\pi_2}
&&
\mathbb C_2\times\mathbb D_2
\ar@<-1ex>[d]_{\pi_1\times\pi_1}
\ar@<1ex>[d]^{\pi_2\times\pi_2}
\\
(\mathbb C\otimes\mathbb D)_1 \ar[rr]^{ j_1 }
\ar@<-1ex>[d]_{\mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}}
&&
\mathbb C_1\times\mathbb D_1
\ar@<-1ex>[d]_{\mathit{cod}\times \mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}\times \mathit{dom}}
\\
{\ \ (\mathbb C\otimes\mathbb D)_0} \ar[rr]_j
&& \mathbb C_0\times\mathbb D_0
}
\]
Thus, to check that the top square commutes, it is enough to show that the following equations hold:
\begin{align*}
\mathit{cod}\circ \mathit{id}_{(\mathbb C\otimes\mathbb D)_1} &= \mathit{dom}\circ i\circ\mathit{cod} \\
(\mathit{cod}\times\mathit{cod})\circ \mathit{id}_{\mathbb C_1\times\mathbb D_1} &= (\mathit{dom}\times\mathit{dom})\circ (i\times i)\circ(\mathit{cod}\times\mathit{cod}) \\
j_1\circ f_i &= f'_i\circ j_1
\end{align*}
The first two equations hold because $\mathit{dom}\circ i=\mathit{id}_{(\mathbb C\otimes\mathbb D)_0}$ and $(\mathit{dom}\times\mathit{dom})\circ (i\times i)=\mathit{id}_{\mathbb C_0\times\mathbb D_0}$; for the third, the case $i=1$ is immediate and the case $i=2$ is verified as in the first part of the proof, with $\mathit{dom}$ replaced by $\mathit{cod}$.
\end{sam}
\end{proof}
\end{longVersion}
\begin{proposition}\label{prop:comp-assoc}
$\mathit{comp}\circ\mathit{compl} = \mathit{comp}\circ\mathit{compr}$
\end{proposition}
\begin{longVersion}
\begin{proof}
To show that composition is associative, we need to recall the definition of $\mathit{compl}$ and $\mathit{compr}$ from Remark~\ref{def:internal-cat}, which leads us to consider
\[
\xymatrix@C=9ex{
(\mathbb C\otimes\mathbb D)_3
\ar[rr]^{ j_3 }
\ar@/_5pc/@<-2ex>[d]_{\mathit{compl}}
\ar@/_5pc/@<-0ex>[d]^{\mathit{compr}}
\ar@<-1ex>[d]_{\mathit{left}}
\ar@<1ex>[d]^{\mathit{right}}
&&
\mathbb C_3\times\mathbb D_3
\ar@/^7pc/@<2ex>[d]_{\mathit{compl}\times\mathit{compl}\ }
\ar@/^7pc/@<4ex>[d]^{\mathit{compr}\times\mathit{compr}}
\ar@<-1ex>[d]_{\mathit{left}}
\ar@<1ex>[d]^{\mathit{right}}
\\
(\mathbb C\otimes\mathbb D)_2
\ar@/_4pc/@<-0ex>[d]_{\mathit{comp}}
\ar[rr]^{ j_2 }
\ar@<-1ex>[d]_{\pi_1}
\ar@<1ex>[d]^{\pi_2}
&&
\mathbb C_2\times\mathbb D_2
\ar@/^5pc/@<-0ex>[d]^{\mathit{comp}\times\mathit{comp}}
\ar@<-1ex>[d]_{\pi_1\times\pi_1}
\ar@<1ex>[d]^{\pi_2\times\pi_2}
\\
(\mathbb C\otimes\mathbb D)_1 \ar[rr]^{ j_1 }
\ar@<-1ex>[d]_{\mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}}
&&
\mathbb C_1\times\mathbb D_1
\ar@<-1ex>[d]_{\mathit{cod}\times \mathit{cod}}
\ar@<1ex>[d]^{\mathit{dom}\times \mathit{dom}}
\\
{\ \ (\mathbb C\otimes\mathbb D)_0} \ar[rr]_j
&& \mathbb C_0\times\mathbb D_0
}
\]
where, in analogy with the definition of $j_2$ in \eqref{equ:j2}, $j_3$ is defined as the unique arrow into the pullback $\mathbb C_3\times\mathbb D_3$.
\medskip
The rectangles commute by definition of the $
momentum, respectively. The $S$-operator is a complicated function of
the interaction Hamiltonian $V=H-H_0$,
and $U_P$ is the unitary and Hermitian parity operator normalized to
$U_P^2=1$.
The quantities
\[
\varepsilon_T=A_T^2
\hspace{25pt}
\varepsilon_I=(U_PA_T)^2\equiv A_I^2
\]
are real phase factors which define the 4 different extensions of the restricted
space-time symmetry transformations by space inversion $P=g$, time inversion
$T=-g$ and space-time inversion $I=PT=-1$.
(At this level, where we have not yet talked about any charges, $U_P$ could also be
interpreted as representing the usual $CP$.) Of the 4 possible extensions
$(\varepsilon_T,\varepsilon_I)=(\pm1,\pm1)$, the almost exclusive
choice~\cite{R18,R10} for
$(\varepsilon_T,\varepsilon_I)$ is:
\begin{equation}
\varepsilon_T=(-1)^{2j}
\hspace{20pt}
\varepsilon_I=(-1)^{2j}
\hspace{20pt}
\mbox{where $j$ is the spin}
\label{G33}
\end{equation}
With this choice the only possibility for $A_T$ is~(\ref{G27}), which in its
interpretation requires identifying the set of states with the set of
observables (i.e., no arrow of time) and assigning to every $W(t)$
a $W^T(t)\equiv A_T^{-1}W(t)A_T$ fulfilling~(\ref{G25a}). This contradicts
the experience that, at least for some states, it is highly improbable to also
prepare their time-reversed states
(cf.~the remark above and ch.~13 of ref.~\cite{R13}).
A way out would be to give up either
irreversible time evolution or the time reversal operator. But since time
reversal invariance, defined by~(\ref{G31}) and~(\ref{G32}),
has consequences which can be tested
experimentally, e.g., reciprocity relations, it is useful to retain the notion
of $A_T$ also when one includes irreversible time evolution. We therefore want to
explore the three other possibilities for
$(\varepsilon_T,\varepsilon_I)$ which do not fulfill~(\ref{G33}), i.e., the
other extensions of the space time symmetry groups provided by
Wigner~\cite{R16}. All three unconventional extensions
involve {\em time-reversal doubling} of the representation spaces. This
will introduce a further label $r$ in addition to the quantum numbers which we
called $\eta$ in~(\ref{G29}).
For $\eta$ we will choose angular momentum (spin)~$j$, its component~$j_3$
and other intrinsic quantum numbers~$n$,
which we do not specify further: $\eta=j_3,j,n$.
Thus the basis vectors are denoted by $|E^\pm,j_3,j,n;r\rangle$.
The four possible cases, of which the standard case~(\ref{G33}) is given in the
first row, are listed in the following table.
\begin{center}
{\bf Table 1}: Extensions of the space-time symmetry groups by $P$ and
$T$\\[15pt]
\begin{tabular}{|cccc|}
\hline
\multicolumn{2}{|c}{Characterization of the}&&\\
\multicolumn{2}{|c}{$P$ and $T$ extensions}&Representation of&
Representation of\\
$\varepsilon_T$&$\varepsilon_I$&$U_P$&$A_T$\\
\hline
$(-1)^{2j}$&$(-1)^{2j}$&1&$C$\\[5pt]
$-(-1)^{2j}$&$(-1)^{2j}$&
$
\left(
\begin{array}{cc}
1&0\\
0&-1
\end{array}
\right)
$&
$
\left(
\begin{array}{cc}
0&C\\
-C&0
\end{array}
\right)
$\\[15pt]
$(-1)^{2j}$&$-(-1)^{2j}$&
$
\left(
\begin{array}{cc}
1&0\\
0&-1
\end{array}
\right)
$&
$
\left(
\begin{array}{cc}
0&C\\
C&0
\end{array}
\right)
$\\[15pt]
$-(-1)^{2j}$&$-(-1)^{2j}$&
$
\left(
\begin{array}{cc}
1&0\\
0&1
\end{array}
\right)
$&
$
\left(
\begin{array}{cc}
0&C\\
-C&0
\end{array}
\right)
$\\
\hline
\end{tabular}
\end{center}
In this table $C$ is the well-known operator:
\begin{equation}
C|E,j_3,j,n;r\rangle=\alpha(r)(-1)^{j+j_3}
|E,-j_3,j,n;r\rangle=
\sum_{j_3'}
\alpha(r)|E,j_3',j,n;r\rangle
C^{(j)}_{j_3'j_3}
\end{equation}
where $\alpha(r)$ is a phase factor and the matrix $C^{(j)}_{\mu\nu}$
is given by
\begin{equation}
C^{(j)}_{\mu\nu}=
(-1)^{j-\mu}\delta_{\mu,-\nu}
\hspace{20pt}
(-j\leq\mu,\nu\leq+j)
\label{G35}
\end{equation}
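As a simple illustration of~(\ref{G35}), for spin $j=\frac{1}{2}$ (with the basis ordered as $\mu=+\frac{1}{2},-\frac{1}{2}$) one finds
\[
C^{(1/2)}=
\left(
\begin{array}{cc}
0&1\\
-1&0
\end{array}
\right),
\hspace{20pt}
C^{(1/2)}C^{(1/2)}=
\left(
\begin{array}{cc}
-1&0\\
0&-1
\end{array}
\right),
\]
in accordance with the general relation $C^{(j)}C^{(j)}=(-1)^{2j}$ (times the unit matrix) that follows from~(\ref{G35}).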
The index $r$ (= + or -)
labels two subspaces ${\cal H}(r)$ in which all the other known observables $B$
are identical, i.e., $B$ and $U_g$, where $g$ are continuous space-time
transformations not containing $P$ and $T$, are given by
\begin{equation}
B=
\left(
\begin{array}{cc}
B&0\\
0&B
\end{array}
\right);
\hspace{15pt}
U_g=
\left(
\begin{array}{cc}
U_g&0\\
0&U_g
\end{array}
\right).
\label{G36}
\end{equation}
The index $r$ thus also labels the rows and columns of the operator matrices in
the Table~1 and in~(\ref{G36}).
In the conventional case~(\ref{G33}) the label $r$ is not needed and $A_T$ is
given by (ignoring all the unspecified quantum numbers $n$)
\begin{equation}
A_T\mid E,j_3,j\rangle=
(-1)^{j+j_3}\alpha^\prime \mid E,-j_3,j\rangle
\label{G37}
\end{equation}
which we also write (suppressing from now on the quantum numbers $j_3$, $j$)
\begin{equation}
A_T\mid E\rangle=
\alpha
\mid E\rangle.
\label{G37b}
\end{equation}
The exact eigenvectors $\mid E^{\pm}\rangle$ which
are related to the $\mid E\rangle$
by (the formal solution of) the Lippmann-Schwinger equation~(\ref{G29}), have
the standard $A_T$ transformation property~(\ref{G28}).
In the conventional case~(\ref{G37}),~(\ref{G28}) we have {\em one} Hilbert
space
${\cal H}$, {\em one} RHS $\Phi=\Phi_++\Phi_-\subset{\cal H}\subset\Phi^\times$;
$\Phi_+\cap\Phi_-\neq\emptyset$
and {\em one} pair of RHS's of Hardy class type~(\ref{G4a}),~(\ref{G4b}).
The operator $A_T$ can only
be defined as in~(\ref{G27}), i.e.:
\begin{equation}
A_T:\Phi_\pm\rightarrow\Phi_\mp;\,\,\,\,\,\,
A_T^\times:\Phi^\times_\pm\rightarrow\Phi^\times_\mp
\label{G38}
\end{equation}
which means that the two spaces $\Phi_-$ and $\Phi_+$ are $A_T$ transforms
of each other. In our earlier discussion of the scattering experiment we have
already concluded that this cannot be possible
for empirical reasons. Thus, if one has a quantum
mechanical arrow of time, then the time reversal operator cannot be defined in
the standard way with $A_T^2=+1$ (or $A_T^2=+(-1)^{2j}$).
Of the three unconventional cases, the second and the third line of Table~1 give
the cases in which $A_T$ transforms between parity eigenspaces of opposite
(relative) parity. In
these cases the label $r$ can be given by the relative
parity and is therefore also not needed. We therefore choose the case in the
fourth line of the Table~1 characterized by
($\varepsilon_T=-(-1)^{2j}$, $\varepsilon_I=-(-1)^{2j}$).
In this case the action of $A_T$ is given by
\begin{equation}
A_T\mid E,r\rangle=\alpha(r)\mid E,-r\rangle;\,\,\,\,
\alpha^*(r)\alpha(-r)=\varepsilon_T=(-1)(-1)^{2j}
\label{G39}
\end{equation}
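Indeed, since $A_T$ is antilinear, relation~(\ref{G39}) directly reproduces the defining relation $\varepsilon_T=A_T^2$:
\[
A_T^2\mid E,r\rangle
=A_T\left(\alpha(r)\mid E,-r\rangle\right)
=\alpha^*(r)\,\alpha(-r)\mid E,r\rangle
=\varepsilon_T\mid E,r\rangle .
\]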
and the action of $A_T$ upon the exact energy eigenvectors
$\mid E^{\pm},r\rangle$ is given by
\begin{equation}
A_T\mid E^{\pm},r\rangle=\alpha(r)\mid E^{\mp},-r\rangle
\label{G40}
\end{equation}
In this new case we have two RHS's labeled by the index $r$,
$\Phi^r\subset{\cal H}^r\subset\Phi^{r\times}$
and two pairs of the RHS's of Hardy classes, in place of the one
pair~(\ref{G4a}) and~(\ref{G4b}):
\begin{equation}
\Phi_+^r\subset{\cal H}\subset\Phi_+^{r\times}\,\,\,\,
\mbox{and}\,\,\,\,
\Phi_-^r\subset{\cal H}\subset\Phi_-^{r\times},\,\,\,
r=\pm
\label{G41}
\end{equation}
\begin{equation}
\mbox{for any $\phi^+\in\Phi_-^r$ we have a $\psi^-\equiv
A_T\phi^+\in\Phi_+^{-r}$}
\label{G42}
\end{equation}
\begin{equation}
\mbox{for any $\psi^-\in\Phi_+^r$ we have a $\phi^+\equiv
A_T\psi^-\in\Phi_-^{-r}$}
\label{G43}
\end{equation}
From this we conclude that the operator $A_T$ maps the space $\Phi^r_{\pm}$
(continuously, one to one and) onto the space $\Phi^{-r}_{\mp}$
\begin{equation}
A_T:\Phi^r_{\pm}\rightarrow\Phi^{-r}_{\mp}\,\,\,\,
r=+,-
\label{G44}
\end{equation}
The conjugate operator which is defined as the extension of the adjoint operator
$A_T^\dagger:{\cal H}^r\rightarrow{\cal H}^{-r}$
according to
\begin{equation}
A^\dagger_T|_{\Phi^r}\subset A^\dagger_T\subset A^\times_T
\,\,\,\,\mbox{in}\,\,\,\,
\Phi^{r}\subset{\cal H}^r\subset\Phi^{r\times},
\label{G45}
\end{equation}
is then a (continuous, one to one) mapping between the corresponding dual spaces
\begin{equation}
A_T^\times:
\Phi^{r\times}_\pm\rightarrow
\Phi^{-r\times}_\mp\,\,\,\,
r=+,-\,\,.
\label{G46}
\end{equation}
Thus
an operator $A_T$, which is compatible with our physical interpretation of the
spaces $\Phi_+$ and $\Phi_-$,
has indeed been given by Wigner in~\cite{R16}
for the case $\varepsilon_T=\varepsilon_I=-1(=-(-1)^{2j})$.
In this case $A_T$ (and $A_T^\dagger$) transforms --- according to~(\ref{G44})
--- from the space $\Phi^r_+$ ($r=+$), which contains vectors representing
properties of the outgoing scattering products of our real experiment, into
the space $\Phi^{-r}_-$, which contains {\em in}-state vectors of a scattering
experiment which we cannot prepare (e.g., incoming spherical waves with fixed
phase relations). Vice versa, the space $\Phi^r_-$ (containing vectors that
represent real preparable {\em in}-states) is mapped by $A_T$ onto $\Phi^{-r}_+$
(containing properties which we cannot observe).
The same arguments apply according to~(\ref{G46}) to the microphysical resonance
states. The exponentially decaying Gamow vector $\psi^G=\mid
z_R,r^-\rangle\in\Phi_+^{r\times}$, $z_R=E_R-i\Gamma/2$, is mapped into a
vector $\mid z_R^*,-r^+\rangle\in\Phi_+^{-r\times}$
which exponentially decreases into the {\em negative} time direction. And the
Gamow state of our resonance scattering experiment, $\tilde{\psi}^G=\mid
z_R^*,r^+\rangle\sqrt{2\pi\Gamma}\in\Phi_+^{r\times}$,
which exponentially grows from $t=-\infty$ to $t=0$ (the time when the
preparation is completed and the registration begins)
is mapped by $A_T^\times$ into a vector $\mid z_R,-r^-\rangle\in\Phi^{-r}_+$
which, like $\mid z_R^*,-r^+\rangle$, cannot be detected in our scattering
experiment.
Thus mathematically, due to the time reversal doubling, we have two arrows of
time pointing in opposite directions. For $r=+$ we have two
semigroups~(\ref{G8+}),~(\ref{G8})
both evolving into the same direction of time. For
$t\leq0$ we have the semigroup
$U_-^\times=e^{-iH^\times t}$
(of growth) and for $t\geq0$ we have the semigroup
$U_+^\times=e^{-iH^\times t}$
(of decay). These provide our arrow of time. The RHS's~(\ref{G41})
with $r=-$ describe the time-reversal image of our physical experiments; this
time-reversed experiment we will find impossible to prepare.
One can show that, as in the conventional case, also in this new case
with $(\varepsilon_T=-(-1)^{2j},\,\,\varepsilon_I=-(-1)^{2j})$
we have
\begin{equation}
\mid E,r^+\rangle=
\mid E,r^-\rangle S(E)=
\mid E,r^-\rangle e^{2i\delta(E)}
\,\,\,\,\mbox{for}
\,\,\,\,r=\pm,
\label{G47}
\end{equation}
(where $\delta(E)$ is the phase shift and $S(E)$ the $S$-matrix).
This is the consequence of ``time reversal invariance'' defined
by~(\ref{G31}) and~(\ref{G32}).
This means that the two spaces $\Phi_-^r$ (describing states)
and the two spaces $\Phi_+^r$ (describing observables) with different values
of~$r$, $r=+$ and $r=-$, are not intermingled by the dynamics given by $H$ or
the
$S$-operator. The experimentally tested consequences of time reversal invariance
like the reciprocity relations remain intact separately for each value of $r$.
In conclusion, we have seen that the quantum mechanical arrow of time and
irreversible time evolution on the microphysical level (as exemplified by all
quantum mechanical resonance states) are not in contradiction to time reversal
invariance as defined by~(\ref{G31}) and~(\ref{G32}).
However, for quantum physical systems with
irreversible time evolutions (resonances) the time-reversal operator $A_T$
is not the standard operator with $A_T^2=(-1)^{2j}$.
The price that we have to pay for describing irreversible time evolution and
time reversal invariance in a consistent way is the doubling of the spaces. One
pair of spaces,~(\ref{G41}) with $r=+$, contains microphysical states that
become and decay in our time direction. The other,~(\ref{G41}) with $r=-$,
contains microstates that become and decay in the opposite time direction.
Time-reversal invariance, as defined by~(\ref{G31}) and~(\ref{G32})
for the observables, does
not lead to a time symmetry for the states, like~(\ref{G25a}) and~(\ref{G25b}).
This is in agreement with the empirical facts that some conceivable
time-reversed states are highly improbable and practically impossible to
prepare~\cite{R13}. Theoretically, the time symmetry of the observables given
by~(\ref{G31}) and~(\ref{G32})
can be broken for the states in two different ways leading to two
arrows of time, $r=+$ and $r=-$.
We believe that the principle, if any, that selects the one arrow
over the other lies outside the scope of the theory.
\subsection{Stage Zero: Planning and Preparation}
\label{subsec_planning}
At the planning stage, we prepared a research protocol\footnote{\url{https://drive.google.com/drive/folders/1gylBoeF09sJRg2xEpJh2tWnjp0cCMhxM?usp=sharing}} and drafted two types of interview questions: demographics and open-ended. For demographics, we asked about the participants' background information, such as job roles and experiences. For the open-ended questions, we asked how the participants perceive SBOMs.
We obtained ethics approvals for this study.
\subsection{Stage One: Interview}
\label{subsec_interview}
\textbf{Pilot interview and protocol refinement}. Before the formal interviews, we conducted a small-scale pilot interview with 3 participants from our connections. Based on their feedback and suggestions, we adjusted some interview questions.
\textbf{Participant recruitment}. We recruited 17 SBOM practitioners from 13 organizations (e.g., CISA, Oracle) across 7 countries (see Table \ref{interviewee-info}). Interviewees were recruited by: a) emailing our contacts, who helped further disseminate the invitation emails to their colleagues; b) emailing developers on GitHub working on SBOM-related projects whose email addresses are public; c) advertising on Twitter and LinkedIn, where interested people could contact the first author. The 17 interviewees have worked in software-related fields for around 14 years on average (min 4 years and max 30 years), while they have been working actively in SBOM-related fields for around 1.4 years on average (min 2 months and max 5 years). We will refer to the 17 interviewees as I1 to I17.
\begin{table}[]
\renewcommand\arraystretch{1.1}
\centering
\caption{Interviewee Demographics*}
\label{interviewee-info}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllcc}
\hline
\textbf{ID} &
\multicolumn{1}{c}{\textbf{Field of work}} &
\multicolumn{1}{c}{\textbf{Country}} &
\textbf{Work exp.} &
\textbf{SBOM exp.} \\ \hline
I1 & Dev. & China & 10 & 0.5 \\
I2 & Dev. & China & 10 & 0.2 \\
I3 & Dev. & China & 10 & 0.2 \\
I4 & Dev. \& Sec. & Australia & 20 & 3.0 \\
I5 & Dev. & US & 18 & 0.5 \\
I6 & Dev. & US & 20 & 0.5 \\
I7 & Sec. & Brazil & 4 & 0.5 \\
I8 & Dev. \& Cnslt. & Ireland & 15 & 1.5 \\
I9 & Sec. & India & 5.5 & 1.0 \\
I10 & Cnslt. \& Adv. & US & 20 & 3.0 \\
I11 & Dev. \& Sec. (SBOM tool) & Israel & 12 & 1.5 \\
I12 & Cnslt. \& Adv. & Israel & 15 & 0.3 \\
I13 & Dev.\&Sec.\&Res. & Australia & 30 & 2.0 \\
I14 & Dev. \& Sec. \& Res. & US & 10 & 3.0 \\
I15 & Dev. & India & 8 & 1.5 \\
I16 & Dev. (SBOM tool) & US & 20 & 1.0 \\
I17 & Cnslt. \& Adv. & US & 15 & 5.0 \\ \hline
\multicolumn{5}{l}{\begin{tabular}[c]{@{}l@{}}\small\\ \small *Dev./Sec.: Software Development/Security; Cnslt.: Consultant; Adv.:\\ \small Advisor; Res.: Researcher. Experiences are listed in years, as of July 2022\end{tabular}}
\end{tabular}%
}
\end{table}
\textbf{Transcribing and coding}. 1) Transcribing. The interviews were audio-recorded. The first author transcribed the audio recordings, and the second author double-checked the transcripts. 2) Pilot coding. The first two authors (i.e., coders) conducted a pilot coding of the 3 pilot interview transcripts. They discussed the initial coding results and reached a preliminary agreement on the granularity of thematic coding. 3) Code generation. The coders then performed thematic coding to qualitatively analyze the interview transcripts \cite{smith2015qualitative,bi2022accessibility} of the 17 interviewees using the \href{https://www.maxqda.com/}{MAXQDA 2022} tool.
The first coder generated 574 codes under 86 cards (i.e., repetitive and similar codes classified into the same category). The second coder generated 364 codes under 41 cards. After discussing the coding results with a third author, the coders further refined the coding granularity, merged similar cards, and discarded cards with limited value. Finally, a total of 54 unique cards were generated.
\textbf{Data analysis and open card sorting}. The coders separately sorted the 54 generated cards into potential themes (not predefined) based on thematic similarity. After the sorting process, the coders calculated Cohen's Kappa value \cite{cohen1960coefficient} to assess their agreement level. The overall value was 0.77, indicating substantial agreement. The coders discussed their disagreements to reach a common ground. The coders then reviewed and agreed on the final themes to reduce card sorting bias. Eventually, we derived 26 statements (see Table \ref{statements}) under 3 themes: State of SBOMs Practice (T1), SBOM Tooling Support (T2), and SBOM Issues and Concerns (T3). All the authors have double-checked our coding results to ensure the reported results are accurate and consistent.
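The agreement level can be computed with standard tooling; the following minimal sketch (not the script used in this study; the card-to-theme assignments shown are invented placeholders) illustrates the calculation with scikit-learn.
\begin{verbatim}
from sklearn.metrics import cohen_kappa_score

# One entry per sorted card; each coder assigns the card to a theme.
# The assignments below are invented placeholders, not the study's data.
coder1 = ["T1", "T1", "T2", "T3", "T2", "T1"]
coder2 = ["T1", "T2", "T2", "T3", "T2", "T1"]

kappa = cohen_kappa_score(coder1, coder2)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.61-0.80 is conventionally "substantial"
\end{verbatim}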
\subsection{Stage Two: Online Survey}
\label{subsec_survey}
We conducted an online survey to confirm or refute the extracted statements. We designed the survey following Kitchenham and Pfleeger's guidelines \cite{kitchenham2008personal}. The survey was anonymous, and all information collected was non-identifiable.
\textbf{Survey design and pilot study}. The survey was published via \href{https://www.qualtrics.com/}{Qualtrics}. Different types of questions were included in the survey (e.g., multiple choice and free text). The statements are scored on a 5-point Likert scale (Strongly disagree, Disagree, Neutral, Agree, Strongly agree), with an additional ``Not sure'' option.
We piloted the survey with six participants from Australia and Singapore and then refined the survey. The pilot study results were excluded from the final results. The formal survey consists of 7 sections:
demographics, SBOM status quo, generation, distribution, tooling, benefits, and concerns.
\textbf{Participant recruitment}. To increase the number of participants, we adopted the following strategy for recruitment:
\begin{itemize}
\item We contacted industrial practitioners from several companies worldwide and asked for their help in disseminating the survey invitation emails.
\item We sent invitation emails to over 2000 developers from GitHub whose email addresses are publicly available.
\item We posted the recruitment advertisement on social media platforms (i.e., Twitter and LinkedIn).
\end{itemize}
\begin{table}[]
\renewcommand\arraystretch{1.1}
\centering
\caption{Survey respondents demographics*}
\label{survey_demo}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\multicolumn{1}{c}{\textbf{Field of work}} &
\multicolumn{1}{c}{\textbf{Project team}} &
\multicolumn{1}{c}{\textbf{Work exp.}} &
\multicolumn{1}{c}{\textbf{SBOM exp.}} \\ \midrule
Dev. (38.6\%) & \textless 10 ppl. (16) & \textless 1 year (1) & \textless 6 months (17) \\
Sec. (30.1\%) & 10-20 ppl. (20) & 1-3 years (11) & 0.5-1 year (17) \\
Cnslt./Adv. (10.8\%) & 20-50 ppl. (15) & 3-5 years (13) & 1-2 years (13) \\
Mgmt (9.7\%) & \textgreater 50 ppl. (15) & 5-10 years (9) & \textgreater 2 years (19) \\
Res. (9.7\%) & - & \textgreater 10 years (32) & - \\
Other (1.1\%) & - & - & - \\ \midrule
\multicolumn{4}{l}{\begin{tabular}[c]{@{}l@{}}\footnotesize{*Mgmt.: Management; ppl.: people. Since multiple answers are supported}\\ for respondents' work, field of work is listed in percentages while the\\ others are listed with response numbers.\end{tabular}}
\end{tabular}%
}
\end{table}
We received a total of 129 responses, including 27 in which respondents selected ``(Very) unfamiliar with SBOM''.
Note that there could be more people unfamiliar with SBOM who did not respond to our survey.
After removing these, the incomplete responses, and responses completed within 2 minutes, we had 65 valid responses.
We acknowledge that the number of responses is not as large as in similar empirical studies (e.g., \cite{bi2022accessibility,xiaxin}). However, we believe this is consistent with our findings on the lack of SBOM adoption and education (i.e., Findings 1 and 10).
The 65 participants come from 15 countries across 5 continents. The top 3 countries where the participants reside are Australia, China, and the US. An overview of the survey respondents' demographics is presented in Table \ref{survey_demo}.
It is worth noting that although nearly half (47.7\%) of the respondents have worked in the software field for over 10 years, only one quarter (27.7\%) have worked on SBOMs for over 2 years, indicating that SBOM is still a relatively fresh concept to software practitioners.
\textbf{Data analysis}. Apart from the demographics, SBOM familiarity questions, and a final optional free-text question, all statements are presented as Likert-scale questions (see the bar charts in Table \ref{statements}) to evaluate the degree of agreement.
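The response cleaning and statement scoring described above are simple to implement; the sketch below illustrates one possible realization (the column names, the export format, and the rule of averaging the 1--5 responses while treating ``Not sure'' as missing are our assumptions rather than details taken from the study's scripts).
\begin{verbatim}
import pandas as pd

LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}   # "Not sure" -> treated as missing

df = pd.read_csv("responses.csv")                 # hypothetical survey export
df = df[df["finished"]]                           # drop incomplete responses
df = df[df["duration_seconds"] >= 120]            # drop responses under 2 minutes
df = df[~df["familiarity"].isin(["Unfamiliar", "Very unfamiliar"])]

statement_cols = [c for c in df.columns if c.startswith("S")]
scores = df[statement_cols].replace(LIKERT).apply(pd.to_numeric, errors="coerce")
print(scores.mean().round(2))                     # per-statement mean score
\end{verbatim}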
\begin{table*}[]
\centering
\caption{Interview and survey results on SBOM statements}
\label{statements}
\resizebox{\textwidth}{!}{%
\begin{tabular}{llcc}
\hline
\multicolumn{2}{c|}{} &
\multicolumn{2}{c}{\textbf{Likert distribution}} \\ \cline{3-4}
\multicolumn{2}{c|}{\multirow{-2}{*}{\textbf{Statement}}} &
Graph &
Score \\ \hline
\multicolumn{4}{l}{\textbf{T1. State of SBOM practice}} \\ \hline
\multicolumn{1}{l|}{S1. Improving transparency and visibility into the software products is the biggest benefit of SBOMs.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s1.png} &
4.42 \\
\multicolumn{1}{l|}{S2. SBOM data form the foundation of a potential SBOM-centric ecosystem.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s2.png} &
4.05 \\
\multicolumn{1}{l|}{S3. The benefits brought by SBOMs outweigh the costs of SBOMs (e.g., extra learning and management of SBOMs \& tools).} &
\multicolumn{1}{l|}{\multirow{-3}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{benefit}\\ SBOM benefits\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s3.png} &
4.34 \\ \hline
\multicolumn{1}{l|}{S4. Currently third-party (open source or proprietary) components are not equipped with SBOMs.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s4.png} &
4.11 \\
\multicolumn{1}{l|}{S5. SBOMs are not generated for all software products (produced/used) within an organization.} &
\multicolumn{1}{l|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{adoption}\\ SBOM adoption\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s5.png} &
4.25 \\ \hline
\multicolumn{1}{l|}{S6. SBOMs can be generated at different stages of the software development lifecycle.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s6.png} &
4.14 \\
\multicolumn{1}{l|}{S7. Currently, a new SBOM is not always re-generated when there's any change to software artifacts.} &
\multicolumn{1}{l|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{sbomgepoint}\\ SBOM generation points\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s7.png} &
3.31 \\ \hline
\multicolumn{1}{l|}{\cellcolor[HTML]{C0C0C0}S8. SBOMs are currently generated in a non-standardized format (e.g., not SPDX nor CycloneDX nor SWID).} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s8.png} &
\cellcolor[HTML]{C0C0C0}{\color[HTML]{333333} 2.86} \\
\multicolumn{1}{l|}{S9. Despite the 7 minimum data fields recommended by NTIA, the minimum fields are not necessarily all included in SBOMs.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s9.png} &
3.57 \\
\multicolumn{1}{l|}{S10. In practice, SBOMs are extended with more useful data fields (other than the 7 minimum data fields) whenever possible.} &
\multicolumn{1}{l|}{\multirow{-3}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{sbom_data}\\ SBOM data fields and\\ standardization\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s10.png} &
4.34 \\ \hline
\multicolumn{1}{l|}{\cellcolor[HTML]{C0C0C0}S11. SBOMs are currently only generated for internal consumption.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s11.png} &
\cellcolor[HTML]{C0C0C0}2.97 \\
\multicolumn{1}{l|}{S12. Access control should be required for the distribution of SBOMs for proprietary software/components.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s12.png} &
4.14 \\
\multicolumn{1}{l|}{S13. Content tailoring (sharing partial SBOMs) should be required for SBOM distribution of proprietary software/components.} &
\multicolumn{1}{l|}{\multirow{-3}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{distribution}\\ SBOM distribution\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s13.png} &
3.57 \\ \hline
\multicolumn{1}{l|}{S14. SBOM producer's (i.e., software vendor) reputation is important for assessing SBOM integrity (e.g., completeness).} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s14.png} &
3.4 \\
\multicolumn{1}{l|}{S15. Currently there are no validation mechanisms to ensure SBOM integrity (accuracy, completeness etc).} &
\multicolumn{1}{l|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{sbom_validation}\\ SBOM validation\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s15.png} &
3.28 \\ \hline
\multicolumn{1}{l|}{S16. Current vulnerability management with SBOMs doesn't focus on the actual exploitability of the vulnerability.} &
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Section \ref{vex}\\ Vul. \& exploitability\end{tabular}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s16.png} &
3.72 \\ \hline
\multicolumn{1}{l|}{S17. SBOMs for AI software are different from SBOMs for traditional software.} &
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Section \ref{aibom}\\ AIBOM\end{tabular}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s26.png} &
3.26 \\ \hline
\textbf{T2. State of SBOM tooling support} &
&
&
\\ \hline
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}S18. Although existing sources (e.g., package manager, POM.xml) are already there, it is still necessary to parse and feed\\ metadata from these sources into a standard format via SBOM tools.\end{tabular}} &
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Section \ref{necessity}\\ Necessity of SBOM tools\end{tabular}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s17.png} &
4.08 \\ \cline{1-2}
\multicolumn{1}{l|}{S19. There are significantly limited tools for SBOM consumption.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s18.png} &
4 \\
\multicolumn{1}{l|}{S20. SBOM consumption should be integrated with existing tools (e.g., vulnerability/configuration management tools).} &
\multicolumn{1}{l|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{availability}\\ Availability of SBOM tools\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s19.png} &
4.03 \\ \cline{1-2}
\multicolumn{1}{l|}{S21. Existing SBOM tools can be hard to use (e.g., lack of usability, complexity).} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s20.png} &
3.57 \\
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}S22. SBOM tools lack of interoperability and standardization (e.g., the hash of one component generated by different tools \\ can be different).\end{tabular}} &
\multicolumn{1}{l|}{\multirow{-2}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{usability}\\ Usability of SBOM tools\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s21.png} &
3.92 \\ \cline{1-2}
\multicolumn{1}{l|}{S23. End users can't validate the integrity (e.g., accuracy and completeness) of the generated SBOMs by existing tools.} &
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Section \ref{tool_validation}\\ Validation of SBOM tools\end{tabular}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s22.png} &
3.69 \\ \hline
\multicolumn{4}{l}{\textbf{T3. SBOM issues \& concerns}} \\ \hline
\multicolumn{1}{l|}{\cellcolor[HTML]{C0C0C0}S24. Existing SBOM standards don't meet current market demands (e.g., the standards support only limited fields).} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s23.png} &
\cellcolor[HTML]{C0C0C0}2.85 \\
\multicolumn{1}{l|}{S25. Attackers can take advantage of the information contained in SBOMs.} &
\multicolumn{1}{l|}{} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s24.png} &
3.63 \\
\multicolumn{1}{l|}{S26. There is hesitation in adopting SBOMs due to various concerns (e.g., lack of basic IT asset management).} &
\multicolumn{1}{l|}{\multirow{-3}{*}{\begin{tabular}[c]{@{}l@{}}Section \ref{concern}\\ SBOM concerns\end{tabular}}} &
\includegraphics[width = 0.85cm, height = 0.35 cm]{statement_figs/s25.png} &
3.98 \\ \hline
\end{tabular}%
}
\end{table*}
\subsection{RQ1: What is the current state of SBOM practice?}
\label{sbompractice}
To answer RQ1, we discuss in this section the 17 statements under T1 (see Table \ref{statements}).
Our results suggest that SBOMs are not widely adopted.
SBOM generation and distribution require further standardization and maturer mechanisms.
SBOM data validation is generally neglected.
For the typical SBOM use case of vulnerability management, the exploitability status classification should be more than binary.
\subsubsection{SBOM benefits}
\label{benefit}
We summarized 3 statements (i.e., S1-S3) on SBOM benefits based on the interviews.
15 of 17 interviewees mentioned that the enhanced transparency of the software supply chain is one of the exceptional advantages of SBOMs [\textbf{S1}, 90.8\% agree, 1.5\% disagree]. Transparency brings a lot of favorable consequences, such as end-of-life software management, vulnerability tracking, and license compliance checking \cite{lf_2022}.
\textit{``The biggest benefit is knowing exactly what is being bundled in your software, right? So it is to assure our customers that, if there's a vulnerability reported, you immediately know (whether) you are impacted or not, \textbf{if the SBOM is accurate}." (I13-Dev.\&Sec.\&Res.)}
Another benefit originates from the SBOM data.
The unification of software composition details provided by SBOMs is beneficial as the standardized SBOM data has the potential to be further built upon.
\textit{``The SBOM itself isn't the valuable part. The valuable part is, how do we turn that data into intelligence, into action." (I17-Cnslt.\&Adv.)}
Based on the SBOM data, it is promising that SBOM-centric ecosystems will emerge [\textbf{S2}, 86.2\% agree, 1.5\% disagree]. However, due to the lack of SBOM adoption (see Section \ref{adoption}), such ecosystems are a long-term goal rather than something to be achieved soon.
Thirdly, although adopting SBOMs requires extra effort (e.g., additional tools and processes, education of related personnel), the benefits of SBOMs outweigh the costs [\textbf{S3}, 86.2\% agree, 7.7\% disagree].
\faThumbsODown \,Although some organizations are \textit{``worried whether SBOMs would increase the cost of a software product" (I1-Dev.)},
\faThumbsOUp \, the majority favor the benefits brought by SBOMs as the potential loss without SBOMs can be devastating.
\textit{``What I think about is, what is the cost when a vulnerability is exploited? ...if you look at SolarWinds event, that cost was \$800 million." (I6-Dev.)}
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 1}: The transparency brought by SBOMs can enable accountability, traceability and security, but there is a lack of \textbf{systematic consumption-scenario-driven design of SBOM features}.
\end{tcolorbox}
\end{center}
\subsubsection{SBOM adoption}
\label{adoption}
We summarized 2 statements (i.e., S4-S5) on SBOM adoption status (see Table \ref{statements}).
Despite the benefits of SBOMs and the optimistic results of the SBOM readiness report \cite{lf_2022} that 90\% of the 412 sampled organizations have started or are planning their SBOM journey, \textbf{the adoption of SBOMs is not as optimistic} according to our interviews and survey. For example, most existing third-party software or components, either open source or proprietary, are not equipped with SBOMs [\textbf{S4}, 83.1\% agree, 13.8\% disagree].
As I2 stated,
\textit{``when introducing third-party components to our organization, we need to try to generate SBOMs for them because not all of them have SBOMs.'' (I2-Dev.)}
However, considering the prevalence of OSS, the unavailability of SBOMs for (open-source) software/components also holds software vendors back from SBOM adoption as they may wonder whether SBOM adoption is an industrial consensus.
In addition, SBOMs are not generated for all software even inside a software vendor organization [\textbf{S5}, 87.7\% agree, 7.7\% disagree].
As stated by I10, \textit{``software vendors may be producing SBOMs for some customer products.
But I bet most of them don't generate SBOMs for the financial software they are using." (I10-Cnslt.\& Adv.)}
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt
]
\textbf{Finding 2}: A large portion of widely used software, especially OSS, does not have SBOMs. \textbf{The incentives for generating SBOMs for OSS and proprietary software need to be propagated}.
\end{tcolorbox}
\end{center}
\subsubsection{SBOM generation point}
\label{sbomgepoint}
We summarized 2 statements (i.e., S6-S7 in Table \ref{statements}) on SBOM (re-)generation.
SBOMs can be generated at a set of different stages in the software development life cycle (e.g., build time, run time, before delivery, after third-party components introduction, etc.) [\textbf{S6}, 84.6\% agree, 12.3\% disagree], which differs from case to case as \textit{``the challenge here sort of builds on the sheer diversity of software" (I17-Cnslt.\&Adv.)} (e.g., modern/legacy software, container images, cloud-based software). For example, \textit{``for legacy software that's already out there in the wild, maybe you don't have reproducible builds. You only have that built artifacts... it's like you can't do it (generate SBOMs) at build time anymore... it's just too difficult... But you could probably do a run time (SBOM) generator." (I8-Dev.\&Cnslt)}
As stated by I8, \textit{``I don't really think there is a perfect time (for SBOM generation)". (I8-Dev.\&Cnslt)}
Since software development goes through a life cycle, \textbf{ideally an SBOM should be generated at the early stages and then gradually enriched with more information from the latter stages}, which was supported by I12 and I14.
\textit{``I think the way to actually include the most full SBOM is to have visibility towards the whole cycles, from the report to the build to the factory." (I12-Cnslt.\&Adv.)}
\textit{``I do not think we should produce an SBOM as a one-shot process, but rather we should be carrying evidence and partial SBOMs and enriching them in every single operation." (I14-Dev.\&Sec.\&Res.)}
As for SBOM re-generation, it is evident that whenever any change happens to any software artifact, the corresponding SBOMs should be re-generated in a timely manner to reflect this change, which is rarely the current practice followed by SBOM producers [\textbf{S7}, 53.8\% agree, 35.4\% disagree].
As stated by I10,
\textit{``that is more of an aspirational goal - it's not something that will be realized right away. Because right now it would be just great if they put out a new SBOM whenever they did a new major version, and that is better than nothing."(I10-Cnslt.\&Adv.)}
However, since some organizations are re-generating SBOMs upon each change, and if SBOM re-generation is to be a standard practice, solutions like SBOM version control are needed for managing the SBOMs.
As stated by I8, with different versions of SBOMs, \textit{``you need to version control your SBOMs, and figure out how to distribute that information to your customers." (I8-Dev.\&Cnslt)}
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 3}:
SBOM generation is belated and not dynamic, while ideally SBOMs are expected to be \textbf{generated during early software development stages and continuously enriched/updated}.
\end{tcolorbox}
\end{center}
\subsubsection{SBOM data fields standardization}
\label{sbom_data}
In this subsection, we investigated what data fields generated SBOMs contain, based on S8-S10 in Table \ref{statements}, since not knowing what to include in an SBOM is the second biggest concern for producing SBOMs according to the SBOM readiness report\cite{lf_2022}.
First, despite the existence and relative prevalence of two major SBOM standards (i.e., SPDX and CycloneDX),
some organizations generate SBOMs based on their customized non-standard formats [\textbf{S8}, 27.7\% agree, 32.3\% disagree].
Based on the survey results, although more respondents report standardized SBOM formats (i.e., they disagree with S8), over one quarter agree that SBOMs are generated in customized, non-standard formats.
As I4 stated,
\textit{``the people I spoke to (in some organizations) might have been doing this (generating SBOMs), but they might not have been doing a standard-format SBOM. They would just be keeping an inventory of all their software components." (I4-Dev.\&Sec.)}
Second, although there are 7 minimum SBOM data fields recommended by NTIA\cite{sbom_mini}, in practice, generated SBOMs don't always meet the minimum bar [\textbf{S9}, 63.1\% agree, 24.6\% disagree] for two main reasons (i.e., software vendor customization, data availability).
Some software vendors choose only to include a subset of the minimum data fields, or customize their minimum requirements to meet their respective needs. According to I3, his/her organization
\textit{``has its own minimum requirements (different from NTIA's)" (I3, Dev.)}.
Software vendors sometimes do not include all the minimum data fields as the relevant data is not always attainable.
\textit{``The truth is, I tried to put in as much as what you said (7 minimum data fields). I don't have access all the time to all the details." (I11-Dev.\&Sec.(SBOM tool))}
As a result, when relevant information of certain data fields is unavailable, the generated SBOMs can be \textit{``full of non-assertion elements... In practice, this means that, yes, the standards themselves can support the NTIA recommendation; No, the tools are omitting those fields or leaving them blank." (I14-Dev.\&Sec.\&Res.)}
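As an illustration of the check behind S9 (a sketch of our own; the flat record layout and field names are invented, and a real SBOM in SPDX or CycloneDX format would be parsed with an appropriate library), NTIA's minimum elements can be verified mechanically:
\begin{verbatim}
# NTIA's seven minimum elements, paraphrased as field names of our own choosing.
NTIA_MINIMUM = ["supplier_name", "component_name", "component_version",
                "other_unique_identifiers", "dependency_relationship",
                "sbom_author", "timestamp"]

def missing_minimum_fields(record):
    """Return the minimum elements that are absent, blank, or non-asserted."""
    return [f for f in NTIA_MINIMUM
            if not str(record.get(f, "")).strip()
            or record.get(f) == "NOASSERTION"]

record = {"component_name": "libexample", "component_version": "1.2.3",
          "supplier_name": "NOASSERTION"}          # invented example record
print(missing_minimum_fields(record))
\end{verbatim}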
Third, for some organizations producing SBOMs or building SBOM tools to produce SBOMs, they want to include/support as much ``useful" information in SBOMs [\textbf{S10}, 87.7\% agree, 4.6\% disagree].
For the former, since an SBOM can effectively help with internal software supply chain management, they prefer to generate more comprehensive SBOMs with additional information such as vulnerability.
As I13 mentioned,
\textit{``we want to produce the best information... then the developers can look at it and then do their work" (I13-Dev.\&Sec.\&Res.)}
For the latter, the more comprehensive information their SBOM tools support, the more competitive they are in the market. As stated by I11, \textit{``I also have a big scope of metadata depending on the target other than the base." (I11-Dev.\&Sec.(SBOM tool))}
Furthermore, as I14 pointed out, \textit{``something relatively worse happens... to create business value... they are trying to extend it (an SBOM) with things that may or may not be relevant to the problem." (I14-Dev.\&Sec.\&Res.)}
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 4}: Despite official recommendations on minimum SBOM data fields, there is still a lack of consensus on what to include in SBOMs.
\end{tcolorbox}
\end{center}
\subsubsection{SBOM distribution}
\label{distribution}
In this subsection we discussed 3 statements (i.e., S11-S13 in Table \ref{statements}).
A considerable portion of the respondents agree that
\textit{``organizations generate SBOMs for internal consumption, rather than giving them to customers" (I11-Dev.\&Sec.(SBOM tool)} [\textbf{S11}, 40\% agree, 38.5\% disagree].
We notice a lack of consensus on this statement among the survey respondents. We believe this is consistent with the general ``lack of consensus'' status of SBOMs reported in the SBOM readiness report \cite{lf_2022}, resulting from the relative recency of SBOMs and their limited adoption.
However, since SBOMs are now being distributed in practice, proper distribution mechanisms are needed.
According to the SBOM readiness report \cite{lf_2022}, one of the leading concerns for SBOM production is that some information inside an SBOM is too sensitive and risky to be public.
Since the source code is already publicly available for OSS, their SBOMs should be public.
For proprietary software/components, although some of the SBOMs can be public, \textit{``authenticated (access control) is going to be the norm" (I4-Dev.\&Sec)} [\textbf{S12}, 76.9\% agree, 13.8\% disagree], depending on the software vendors' policies.
\textit{``At least for some segments of the market, I think access management is part of it. You need to be able to share your SBOMs with whomever you want to share, and not have all of the world get access to it." (I12-Cnslt.\&Adv.)}
Apart from access control, content tailoring (selective sharing) is also helpful for mitigating the above concern.
There can be a negotiated compromise between the software vendor and its downstream procurers on what to include in the distributed SBOMs, instead of sharing the complete SBOMs [\textbf{S13}, 60\% agree, 24.6\% disagree].
\textit{``There's got to be a mechanism... like a router in the middle, that takes the SBOMs or VEXs produced by the suppliers and routes them down to each end user, exactly what they need." (I0-Cnslt.\&Adv.)}, so that \textit{``only the right people can see the right information" (I13--Dev.\&Sec.\&Res.)}.
\begin{center}
\label{f5}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 5}:
Proprietary and sensitive information in SBOMs introduces barriers to SBOM distribution. \textbf{Selective sharing (content tailoring) and access control
mechanisms need to be considered}.
\end{tcolorbox}
\end{center}
\subsubsection{SBOM validation}
\label{sbom_validation}
This subsection is based on S14 and S15 (see Table \ref{statements}).
The lack of SBOM integrity validation is a shared problem mentioned by 13 out of 17 interviewees. Without reliable validation measures [\textbf{S15}, 49.3\% agree, 26.2\% disagree], software procurers may only roughly assess the quality of an SBOM by referring to the SBOM producer's reputation [\textbf{S14}, 50.8\% agree, 13.8\% disagree].
SBOM integrity is two-fold:
a) \textbf{SBOM data integrity} (whether the SBOM has been tampered with), and b) \textbf{SBOM tooling integrity} (\textit{tooling capability}, i.e., the competence to generate complete and accurate SBOMs; and \textit{tooling security}, i.e., whether the SBOM generation tools are compromised). We discuss SBOM tooling integrity in Section \ref{tool_validation}.
SBOM data tampering can come from outside and inside an organization (i.e., \textbf{external/internal tampering}).
External tampering is more straightforward as an SBOM can be \textit{``easy to tamper (with) and easy to fake" (I14-Dev.\&Sec.\&Res.)} without reliable validation methods.
Thus, proper validation mechanisms are needed (e.g., signing using \href{https://docs.sigstore.dev/cosign/overview}{sigstore's Cosign}).
Internal tampering means a software vendor may change the SBOM data considering customer acceptance and security issues. For example,
I14 and I15 mentioned instances of internal tampering based on their experiences:\\
a)
\textit{``You could have a release engineer at the last minute, realizing that they wanted to change the SBOM just because otherwise, the customer wouldn't take it. It's not that the whole organization lied. But it does mean that they got to tamper with the SBOM that doesn't again faithfully represent the product that they weren't given." (I14-Dev.\&Sec.\&Res.)}\\
b)
\textit{``Whenever we are going to use an open source project, there has to be a security check... if some kind of (vulnerable) code is there, we just need to remove it... we are not actually passing those kinds of changes to the public." (I15-Dev.)}
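As a minimal illustration of what data-integrity validation involves (our sketch, not a recommended workflow; production settings would rather rely on signatures, e.g., produced with sigstore's Cosign as mentioned above), even a digest published out-of-band by the producer lets a consumer detect external tampering:
\begin{verbatim}
import hashlib

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Digest distributed out-of-band by the SBOM producer (placeholder value).
published_digest = "<digest published by the producer>"

if sha256_of("sbom.json") != published_digest:
    raise SystemExit("SBOM does not match the published digest: possible tampering")
\end{verbatim}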
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 6}:
Trust in SBOM data needs to be assured considering tampering threats. \textbf{SBOM data validation/verification mechanisms and integrity services are needed}.
\end{tcolorbox}
\end{center}
\subsubsection{Vulnerability and exploitability}
\label{vex}
In this section we discuss SBOMs for vulnerability management and the exploitability of vulnerabilities (i.e., S16 in Table \ref{statements}).
Although vulnerability management is a representative SBOM use case \cite{9174365}, it currently barely considers the actual exploitability of vulnerabilities [\textbf{S16}, 73.8\% agree, 13.8\% disagree].
Nevertheless, the exploitability of a vulnerability should be taken seriously, as a vulnerability may not necessarily be exploitable \cite{yin_apply_2020}.
\textit{``I think it's a very, very interesting point, and it's a very legit issue. Vulnerability and exploitability are totally different. You can't simply send a long report to the developers and ask them to update each and every vulnerable dependency that has been flagged. I know the development team might end up ignoring your report, or come back at you saying, `are you able to exploit this vulnerability? No? Then why should I go and update it if you are unable to exploit it?'" (I9-Sec.)}
As mitigation, vulnerabilities are often selectively fixed based on the criticality (e.g., The Common Vulnerability Scoring System (CVSS) score). As stated by I9, \textit{``if it's a critical or a high vulnerability... then make sure it is updated. But when it comes to (a) medium or low (criticality vulnerability), then ignore it." (I9-Sec.)}
Although there are efforts towards exploitability, such as \href{https://www.cisa.gov/known-exploited-vulnerabilities-catalog}{CISA's Known Exploited Vulnerabilities Catalog} that serves as a ``must patch list", there are only limited records (around 800 as of August 2022) in this catalog.
Vulnerability Exploitability eXchange (VEX) has emerged as a tailored method to cope with such problems. \faThumbsOUp\,A VEX is a security advisory produced by a software vendor that allows assertions about the vulnerability status of a software product \cite{vex}.
As companion artifacts to SBOMs \cite{sbom_sharing}, VEXs provide SBOM operators with a clearer understanding of the vulnerabilities and suggested remediation.
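For concreteness, the kind of assertion a VEX carries can be pictured as follows (an illustrative sketch only; the field names are ours and do not follow any specific VEX schema such as CSAF or CycloneDX):
\begin{verbatim}
vex_statement = {
    "vulnerability": "CVE-2021-44228",   # the flagged vulnerability
    "product": "example-app 2.4.1",      # the product the assertion refers to
    "status": "not_affected",            # vendor's exploitability assertion
    "justification": "vulnerable code not in execute path",
}
\end{verbatim}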
\faThumbsODown \,However, current exploitability evaluation is manual and depends on the domain knowledge of security experts \cite{6754581,slava_end_nodate}.
Also, \textit{``the ability to differentiate between whether it's exploitable or not is a hard thing to do itself" (I11-Dev.\&Sec.(SBOM tool))},
\textit{``especially if you want to automate it". (I13--Dev.\&Sec.\&Res.)}
What is more, \textit{``there is no way to confirm this (VEX), and it's actually very, very hard to prove a negative. So if you see a VEX entry that says this (vulnerability) doesn't hit me, the only way to prove it wrong is to make an exploit yourself. Again, if you are able to do that, then you're almost making things worse, right? (I14-Dev.\&Sec.\& Res.)"}
\faThumbsODown \,To further complicate this problem, \textbf{unexploitability can hardly be guaranteed}.
As I11, an SBOM tool developer with security (hacker) experience, stated,
\textit{``I agree the more valuable these exploitable vulnerabilities are, but I don't agree that the ones defined less exploitable are not valuable. I used to be on the attacker's side... Hackers can take their time, and they can find a way to put together a lot of things that look very not exploitable, and at the end of the day, find themselves with very easy and exploitable access." (I11-Dev.\&Sec.(SBOM tool))}
A possible solution is to introduce a notion of \textit{``potential exploitability" (I14-Dev.\&Sec.\& Res.)}.
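To make the preceding discussion more concrete, the sketch below (Python; the field names, status values, and identifiers are illustrative assumptions rather than an official VEX/OpenVEX schema) shows the kind of machine-readable assertion a VEX could carry alongside an SBOM, and a trivial consumer-side rule acting on it.
\begin{verbatim}
# Minimal, illustrative VEX-style record (field names and values are
# assumptions, not an official VEX schema).
vex_statement = {
    "product": "pkg:example/acme-app@1.4.2",   # hypothetical product identifier
    "vulnerability": "CVE-2021-44228",          # example CVE identifier
    "status": "not_affected",                   # e.g. affected / not_affected / fixed
    "justification": "vulnerable code not in execute path",
    "assessed_by": "vendor-security-team",      # manual, expert-dependent judgement
}

def needs_action(statement: dict) -> bool:
    """Consumer-side rule: only 'affected' products require remediation."""
    return statement["status"] == "affected"

print(needs_action(vex_statement))  # -> False
\end{verbatim}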
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 7}: It is unclear what to do with vulnerabilities with limited exploitability exposed by SBOMs/VEXs.
\end{tcolorbox}
\end{center}
\subsubsection{AIBOM}
\label{aibom}
This subsection is based on S17 (see Table \ref{statements}).
AI software is software with AI components. Compared with SBOMs for traditional software, SBOMs for AI software (i.e., AIBOMs) are different [\textbf{S17}, 47.7\% agree, 24.6\% disagree].
Although some interviewees thought an AIBOM \textit{``contains only additional AI package information" (I1-Dev.)}, the AI artifacts (e.g., data, code, model, configuration) also need provenance and co-versioning\cite{lu_towards_2022, barclay_providing_nodate}.
An AIBOM (see Fig. \ref{aissc_fig}) records not only the software composition information as a traditional SBOM, but also contains information about the data/model/code/configuration co-versioning registries, allowing transparency and accountability into the AI artifacts for AI model training and evaluation.
Considering that AI software deployment is a continuous process (e.g., continuous training in case of data/concept drift), these AI artifacts' co-versioning registries are more dynamic and subject to change, while the component inventory information is relatively static.
To reduce frequent re-generation of the AIBOM, the co-versioning registries can be independent of the AIBOMs, instead of being embedded in the AIBOMs.
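The sketch below (Python; all field names, values, and the registry URL are hypothetical and do not correspond to any standardized AIBOM format) illustrates this separation: the AIBOM keeps the relatively static component inventory, while the fast-changing data/model/code/configuration co-versioning information is only referenced through an external registry.
\begin{verbatim}
# Illustrative AIBOM layout (hypothetical fields, not a standardized format).
aibom = {
    "components": [   # static composition, as in a traditional SBOM
        {"name": "numpy", "version": "1.24.0", "license": "BSD-3-Clause"},
        {"name": "acme-fraud-model", "version": "2.3.1", "license": "Proprietary"},
    ],
    # The dynamic co-versioning data is referenced, not embedded, so the
    # AIBOM does not need to be re-generated after every training run.
    "co_versioning_registry": {
        "url": "https://registry.example.com/acme-fraud-model",  # hypothetical
        "entry_id": "run-2023-01-15T0800Z",
    },
}

def resolve_co_versions(ref: dict) -> dict:
    """Placeholder: a real consumer would query the registry at ref['url']."""
    return {"data": "ds-v14", "model": "2.3.1", "code": "git:abc1234", "config": "cfg-7"}

print(resolve_co_versions(aibom["co_versioning_registry"]))
\end{verbatim}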
\subsection{RQ2: What is the current state of SBOM tooling support?}
\label{tooling}
This section discusses 6 statements (see T2 in Table \ref{statements}).
Although some practitioners argue that SBOM tools are not necessary, the importance and necessity of SBOM tools are recognized by most participants.
However, the existing tools still lack maturity in general and require further development.
\subsubsection{Necessity of SBOM tools}
\label{necessity}
This subsection is based on S18 in Table \ref{statements}.
The most interesting argument about SBOM tooling is the necessity of using SBOM tools to generate SBOMs.
Since currently few organizations (e.g., US government agencies) actually require SBOMs to be provided upon software delivery, \faThumbsODown \,some interviewees think they do not have to generate SBOMs, especially when most SBOM tools serve as a ``proxy": they merely feed the existing metadata (e.g., from package managers) into a standard format, but the data is already there with or without SBOMs.\\
\textit{``We do not use SBOMs internally... because we have tools, which is where the SBOM information is coming anyway: package manager. Those tools generally just take a look at the files in the system and the configuration of the system, whereas SBOM tools just essentially parse those and then try to use that in a format. So, if we don't have a lot of end users... why would we introduce this (SBOM)? It's like a middleman that really doesn't produce much." (I14-Dev.\&Sec.\& Res.)}
\faThumbsOUp \,That being said, most survey respondents think \textbf{generating SBOMs using SBOM tools is necessary} [\textbf{S18}, 83.1\% agree, 10.8\% disagree], which is consistent with the benefit of standardization and unification of the software composition data enabled by SBOMs discussed in Section \ref{benefit}.
Generating SBOMs is more than simply putting metadata from different sources into a standard format as SBOMs are usually enriched with information such as licenses.
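As a sketch of this point (Python; the output is a simplified, CycloneDX-inspired dictionary whose field names are illustrative, not the exact schema), an SBOM generator typically does more than re-serialize package-manager metadata, e.g., it enriches each component with license information:
\begin{verbatim}
# Simplified sketch: turn package-manager metadata into an enriched,
# CycloneDX-inspired component list (structure is illustrative only).
package_manager_metadata = [            # what the build system already knows
    {"name": "requests", "version": "2.28.1"},
    {"name": "urllib3", "version": "1.26.12"},
]

license_db = {"requests": "Apache-2.0", "urllib3": "MIT"}   # enrichment source

def generate_sbom(packages, licenses):
    components = []
    for pkg in packages:
        components.append({
            "type": "library",
            "name": pkg["name"],
            "version": pkg["version"],
            "licenses": [licenses.get(pkg["name"], "NOASSERTION")],  # added value
        })
    return {"bomFormat": "CycloneDX-like (illustrative)", "components": components}

print(generate_sbom(package_manager_metadata, license_db))
\end{verbatim}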
\subsubsection{Availability of SBOM tools}
\label{availability}
In this subsection, we discuss statements S19-S20 in Table \ref{statements}.
Tooling is an integral part of SBOM, as SBOMs are neither manually generated nor intended for direct human consumption. The generation and consumption of SBOMs rely on SBOM tools.
``Shift left" originates from DevSecOps\cite{rajapakse_challenges_2022}, which means shifting the security work to earlier stages of the software development life cycle so that security issues can be identified and fixed earlier.
Considering there is a \textit{``lack of more developer-oriented (SBOM) tools that are more familiar by developers" (I7-Sec.)}, and SBOMs are tightly coupled with security tasks, SBOM tools should also consider ``shift left".
Although there are many existing SBOM tools, as mentioned in Section \ref{background}, contrary to the finding in the SBOM readiness report\cite{lf_2022} that ``SBOM consumption mirrors SBOM production", our finding shows that currently,
the \textit{``SBOM generation is ahead of SBOM consumption" (I17-Cnslt.\&Adv.)}, and there are significantly limited tools for SBOM consumption [\textbf{S19}, 75.4\% agree, 12.3\% disagree].
As stated by I17,
\textit{``the large bucket of what we don't have today in 2022 is SBOM consumption." (I17-Cnslt.\&Adv.)}
Without SBOM consumption tools, even if an SBOM was provided to a software procurer, the procurer would wonder, \textit{``what do I do with the SBOMs? How do I process them? How do I analyze them?" (I12-Cnslt.\&Adv.)}
Besides dedicated SBOM consumption tools, a possible solution is to feed SBOMs into existing IT asset management tools [\textbf{S20}, 41.5\% agree, 12.3\% disagree], which requires functional extensions.
\subsubsection{Usability of SBOM tools}
\label{usability}
This section discusses S21-S22 in Table \ref{statements}.
Although the SBOM tools market is proliferating with the expectation to ``explode" in 2022 and 2023\cite{lf_2022}, the usability of existing tools remains an issue.
SBOM tools can be hard to use for various reasons (e.g., complexity, invasiveness, lack of generalization) [\textbf{S21}, 64.6\% agree, 18.5\% disagree].
For example,
\textit{``to use the CycloneDX Maven plugin, it's required to import this plugin in the POM.xml file. For open source software, it is all right. But for proprietary software, this introduces invasion, which can be a problem". (I3-Dev.)}
Although four interviewees (i.e., I9-I12) mentioned that there were user-friendly tools such as \href{https://dependencytrack.org/}{Dependency-track}, the interviewees also acknowledged that most SBOM tools were open source and not enterprise-ready.
A problem with open source tools is,
\textit{``an organization needs to have the capability of knowing open source projects, running them, fine-tuning them towards its needs, maintaining them" (I12-Cnslt.\&Adv.)}, which can be a considerable problem for smaller-scale organizations and start-ups.
The lack of tooling interoperability and standardization also hinders the usability of SBOM tools [\textbf{S22}, 73.8\% agree, 10.8\% disagree].
As mentioned by I17, SBOM tooling is also \textit{``an area where we need further harmonization and standardization." (I17-Cnslt.\& Adv.)}
For instance, the SBOM data (e.g., component hash) of the same software/components generated by different tools can be different, while
\textit{``the whole point of a hash is that it should be the same (for the same component), so that...downstream users can validate it". (I17-Cnslt.\& Adv.)}
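The following sketch (Python; a deliberately simplified illustration, not how any particular SBOM tool works) shows how two tools can report different hashes for the ``same" component simply because they hash different byte representations, which is why harmonizing what exactly is hashed matters for downstream validation.
\begin{verbatim}
import hashlib

# The "same" component, canonicalized differently by two hypothetical tools.
raw_bytes  = b"print('hello')\n"      # tool A hashes the raw file contents
normalized = b"print('hello')"        # tool B strips the trailing newline first

digest_a = hashlib.sha256(raw_bytes).hexdigest()
digest_b = hashlib.sha256(normalized).hexdigest()

# False: downstream users cannot cross-validate the two SBOMs against each other.
print(digest_a == digest_b)
\end{verbatim}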
\subsubsection{Integrity of SBOM tools}
\label{tool_validation}
In this subsection, we discuss SBOM tooling integrity based on S23 in Table \ref{statements}.
As mentioned in Section \ref{sbom_validation}, SBOM integrity consists of SBOM data integrity and SBOM tooling integrity. SBOM tooling integrity in turn covers two aspects: a) \textit{tooling competence}: the completeness and accuracy of the generated SBOMs as determined by the SBOM tooling's capability; and b) \textit{tooling security}: whether the SBOM generation toolchain has been maliciously altered.
Most respondents agree that the integrity of SBOMs generated by existing tools cannot be validated [\textbf{S23}, 69.2\% agree, 18.5\% disagree]. The accuracy and completeness of generated SBOMs, as limited by tooling competence, is a common concern when generating SBOMs. To the best of our knowledge, there is no comprehensive measure or validation against such unintentional mistakes from the end users' point of view.
However, the intentional tampering resulting from compromised toolchains is another story. One possible solution is to evaluate the SBOM tools' assurance based on Automated Rapid Certification Of Software (ARCOS)\cite{martin_automated_nodate} though it is still a work in progress.
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 8}: There is a lack of maturity in SBOM tooling. More reliable, user-friendly, standard-conformable, and interoperable enterprise-level SBOM tools, especially SBOM consumption tools, are needed.
\end{tcolorbox}
\end{center}
\subsection{RQ3: What are the main concerns for SBOM?}
\label{concern}
This section investigates practitioners' main concerns for SBOMs based on T3 in Table \ref{statements}.
During the interviews, SBOM tool developers' common concern lay with the SBOM standard formats.
Most respondents remained concerned about SBOMs being ``roadmaps for attackers"\cite{NTIA_myth}.
The most fundamental issue is the lack of SBOM adoption and education.
\subsubsection{SBOM formats' lack of extensibility}
In this subsection, we discuss S24 in Table \ref{statements}.
There are mainly two competing SBOM standard formats (i.e., SPDX and CycloneDX), and neither can fully meet current market needs [\textbf{S24}, 35.4\% agree, 33.8\% disagree]. Notably, this statement was mentioned by both interviewees (i.e., I11, I16) working on SBOM tool development.
Although the survey respondents show a lack of consensus on this statement, the result is consistent with the interview findings.
Interestingly, I11 considered SBOM format standardization to be one of the most significant benefits, while agreeing that the existing standards still need further development.
\faThumbsOUp\,On the one hand,
\textit{``the big advantage (of SBOMs) is standardization. The formats allow a lot of people to understand the same language" (I11-Dev.\&Sec.(SBOM tool))}. It offers
\textit{``a unified framework to communicate software composition information" (I14-Dev.\&Sec.\& Res.)}.
\faThumbsODown\,On the other hand, some think the formats are not extensible enough. For example, \textit{``current formats only support one dependency relationship, DependsOn" (I11-Dev.\&Sec.(SBOM tool))}.
We summarized possible format extension points in Section \ref{formatextension} based on the interviews.\\
a)
\textit{``My biggest concern is the dynamicity and the ability to use the standard formats of SBOMs... to define the things I want to do with these SBOMs." (I11-Dev.\&Sec.(SBOM tool))}\\
b)
\textit{``My biggest concern... is that the standardization is really kind of not good... competing standards... different properties, and different kinds of purposes. But consolidating down until reasonable sets of things are the same between the competing formats would be great." (I16-Dev.(SBOM tool))}
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 9}: Although there is a set of standard formats, \textbf{they require further consensus, standardization as well as additional extension points}.
\end{tcolorbox}
\end{center}
\subsubsection{SBOM information sensitivity}
In this section, we discuss S25 in Table \ref{statements}.
There are two types of opinions among the participants about the SBOM information sensitivity issue:
\faThumbsODown\,Some think certain information inside the SBOM is too risky and sensitive to be public, and the information inside an SBOM may serve the attackers as a ``roadmap" of the software and supply chain [\textbf{S25}, 63.1\% agree, 20\% disagree].
\faThumbsOUp\,Meanwhile, others believe that there is no need to worry as the attackers do not need SBOMs because they already have tools to easily get the software composition information. SBOMs are actually roadmaps for defenders to help level the playing field\cite{NTIA_myth}.
Just as I4 stated,
\textit{``on the surface, it seems like a valid concern. But... if you're highly sophisticated cooperation, you probably don't necessarily need this information. As a start-up, you don't get to be a target in an attack... I think the benefit of that visibility is certainly on the defender's side." (I4-Dev.\&Sec.)}
In addition, access control and content tailoring (see Section \ref{distribution}) can also play a role in mitigating this concern.
\subsubsection{SBOM adoption and education deficiency}
\label{adoptionconcern}
In this section, we discuss S26 in Table \ref{statements}.
The most fundamental and imminent concern is limited SBOM adoption [\textbf{S26}, 80\% agree, 7.7\% disagree].
Organizations may have various reasons for their hesitation regarding SBOM adoption. For example, they may worry about the industrial consensus or SBOMs' value to customers, or they may lack even the most basic IT asset management.\\
a)
\textit{``I'm actually a bit worried about the people producing and consuming SBOMs because I think the market is not really ready. They need to (be) educate(d), and many people don't know what SBOMs are." (I11-Dev.\&Sec.(SBOM tool))}\\
b)
\textit{``For SBOMs to become valuable to consume, many people need to produce (SBOMs)... So one of the concerns I have about SBOMs is, is everybody going to follow this pattern (to produce SBOMs)? Because there're obviously people (saying) it's not accurate so they don't want to produce it." (I6-Dev.)}
However, despite being concerned about SBOM adoption, I6 also agreed that
\textit{``even the less accurate ones are better than adding no visibility" (I6-Dev.)}. Because
\textit{``there's always going to be a maturity problem. The idea that because some people can't use SBOM so others shouldn't... That's not how we do security." (I17-Cnslt.\&Adv.)}
SBOM education is needed not only for the public, to increase SBOM adoption, but also for SBOM practitioners, who need to realize that SBOMs still have unaddressed issues before rushing into generating them.
\textit{``The problem right now is that we are almost putting the cart before the horse - we're expecting the SBOMs to fix the problems rather than fixing the problems with an SBOM." (I14-Dev.\&Sec.)}
\begin{center}
\begin{tcolorbox}[breakable, colback=gray!10,
colframe=black,
width=\columnwidth,
arc = 1mm,
boxrule=0.5 pt,
]
\textbf{Finding 10}: There is a lack of market awareness and good value propositions for SBOM adoption. SBOM advocates need to: \textbf{a) leverage relevant regulation and use cases such as procurement evaluation and supply chain risk management to improve SBOM awareness; and b)
promote more SBOM consumption tools with clear benefits}.
\end{tcolorbox}
\end{center}
\subsection{Implications}
This section discusses the following key implications for future SBOM research and development.
\subsubsection{Goal model}
\label{goalmodel}
Based on the study results, we present a goal model for future SBOM endeavors (see Fig. \ref{goalmodel_fig}).
As mentioned in Sections \ref{adoption} and \ref{adoptionconcern}, the lack of SBOM adoption causes a substantial obstacle to SBOM progress.
To achieve
\textbf{increased SBOM adoption and more SBOM-enabled benefits}, there are three goals to be satisfied:
\textbf{a) Higher-quality SBOM generation} (findings 3, 4, 6, 8, and 9): more mature tooling support for the generation of more standardized, tamper-proof, ``dynamic" SBOMs \cite{rezilion}. For instance, further standardization of SBOM-included data fields is needed. The SBOM industry should strictly conform to an agreed set of minimum data fields, while considering different industries and business sectors when adding optional data fields.
\textbf{b) Clearer benefits and use cases for SBOM consumption} (findings 1, 2, 10): SBOM education (e.g., on SBOM-enabled benefits) results in increased SBOM adoption (including consumption); Increased SBOM adoption, in turn, leads to more developed SBOM-centric ecosystems with favorable use cases.
\textbf{c) Lower barriers in SBOM sharing and distribution} (findings 5 and 7):
The distribution and sharing of SBOMs and the vulnerability status (e.g., VEXs) need to be more flexible, with proper mechanisms that meet both the software vendors' and procurers' needs. Technologies such as blockchain and confidential computing (e.g., zero-knowledge proofs, secure multiparty computation for sharing without direct access) can potentially be leveraged to communicate SBOM data. During the distribution of SBOM data, risk-based, flexible policies are also needed to communicate unfixed vulnerabilities.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{goalmodel2.png}
\caption{SBOM goal model.}
\label{goalmodel_fig}
\end{figure}
\subsubsection{Format extension points}
\label{formatextension}
As discussed in Section \ref{concern}, the interviewees mentioned several potential extensions to the existing SBOM formats. Other than supporting more dependency relationship types, another possible extension point is component types. \textit{``For example, the components of a Git is the commits, and CycloneDX does not have such a component (type) defined." (I11-Dev.\&Sec.(SBOM tool))}
The third extension point is file location.
\textit{``My organization has its own SBOM format... which has... things like where does it find something in a file system... However, formats like SPDX might not have a place for that." (I16-Dev.(SBOM tool))}
Apart from the three points mentioned by interviewees, another essential point is verifiable credentials\cite{lu_towards_2022} embedded in or linked to an SBOM.
For traditional software, such credentials can prove the validity of software/components.
For AI software, responsible AI-related information such as conformance to certain AI ethics principles can be included.
\subsection{Positioning with respect to SBOM readiness report}
This section compares the key differences between our findings and the SBOM readiness report's results. This study broadens/deepens the SBOM readiness report mainly from 9 SBOM aspects: benefits, adoption, generation, distribution, integrity, vulnerability management (with SBOMs/VEXs), tooling, concerns, and AIBOM (See Table \ref{compare}).
\begin{table*}[]
\renewcommand\arraystretch{1.15}
\centering
\caption{SBOM readiness report vs. this paper}
\label{compare}
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|l|l}
\hline
\textbf{Topics} &
\textbf{SBOM readiness report} &
\textbf{This paper} \\ \hline
\textbf{Benefits} &
\begin{tabular}[c]{@{}l@{}}16 specific benefits of SBOMs (10 for producing and 6 for\\ consuming SBOMs), all enabled by transparency.\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Transparency, and subsequently enabled \textbf{accountability, traceability, and}\\ \textbf{security}. (Finding 1)\end{tabular} \\ \hline
\textbf{Adoption} &
\begin{tabular}[c]{@{}l@{}}90\% surveyed organizations have started SBOM journey;\\ 47\% are using (i.e., producing/consuming) SBOMs.\end{tabular} &
\begin{tabular}[c]{@{}l@{}}SBOM adoption is \textbf{worrying}: limited generation \& more limited consumption. \\(Findings 1, 2, 10)\end{tabular} \\ \hline
\textbf{Generation} &
\begin{tabular}[c]{@{}l@{}}a) SBOMs can be generated at different SDLC stages.\\ b) More organizations favor including more than baseline\\ SBOM information.\end{tabular} &
\begin{tabular}[c]{@{}l@{}}a) SBOMs can be generated at different SDLC stages but practitioners expect\\ \textbf{``dynamic" SBOM generation} throughout SDLC (Finding 3).\\ b) SBOM-included data fields need \textbf{further standardization} (Finding 4).\end{tabular} \\ \hline
\textbf{Distribution} &
N/A &
\textbf{Secure yet flexible} SBOM distribution mechanisms are needed. (Finding 5) \\ \hline
\textbf{Integrity} &
N/A &
SBOM \textbf{integrity assurances} are needed against tampering threats. (Finding 6) \\ \hline
\textbf{Vulnerability} &
SBOMs should reflect vulnerability information. &
\begin{tabular}[c]{@{}l@{}}a) Organizations \textbf{may not want to share sensitive (vulnerability) data}\\ (Finding 5).\\ b) Mechanisms are needed to \textbf{communicate vulnerabilities with limited/}\\\textbf{undetermined exploitability} (Finding 7).\end{tabular} \\ \hline
\textbf{Tooling} &
Limited availability of SBOM tooling &
\begin{tabular}[c]{@{}l@{}}Affirmed \textbf{necessity} but \textbf{limited availability, usability, and integrity} of SBOM\\ tooling. (Finding 8)\end{tabular} \\ \hline
\textbf{Concerns} &
\begin{tabular}[c]{@{}l@{}}4 shared concerns for production \& consumption: industry\\ commitment, data fields consensus, value of SBOMs,\\ tooling availability. 2 additional concerns for production:\\ information privacy, correctness.\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Explicitly identifies 3 major concerns but covers more throughout:\\a) \textbf{Standard formats}' lack of extensibility (Finding 9).\\ b) SBOM information sensitivity \& privacy (Finding 5).\\ c) \textbf{Adoption and education} deficiency (Finding 10).\end{tabular} \\ \hline
\textbf{AIBOM} &
N/A &
AIBOM should also include AI/ML-specific data. (Section \ref{aibom})\\ \hline
\end{tabular}%
}
\end{table*}
\subsection{Threats to validity}
The number of survey participants may pose a threat to validity. However, considering the relative novelty of SBOM and the current lack of SBOM adoption, we believe this sample size is acceptable. It is possible that some of the survey respondents do not have a comprehensive understanding of SBOMs, which may introduce noise into the collected data. To mitigate this threat, we let the respondents indicate whether they are familiar with SBOMs; if the answer is no, the survey automatically ends.
Still, we cannot fully ascertain whether the collected responses accurately reflect the respondents' beliefs. This is a common and tolerable threat to validity in many similar empirical studies, which assume that the majority of responses reflect what respondents truly believe.
\subsection{Software supply chain security}
On the one hand, there has been work on SSC security.
For instance,
Ohm et al.\cite{ohm2020backstabber} summarized 174 OSS packages used for real-world malicious SSC attacks.
Blockchain has been applied to SSC security (e.g., \cite{8473517,9653024,8890486}). In particular, Marjanović et al. \cite{marjanovic2021improving} used blockchain-based techniques to record the software composition details.
On the other hand, there are limited scholarly papers on SBOMs.
Martin et al. \cite{9174365} introduced the concept of the SBOM and listed nine possible use scenarios.
In 2021, Carmody et al. \cite{carmody_building_2021} presented a high-level overview of how SBOMs help build resilient medical SSCs. They illustrated the benefits of SBOMs for software producers, consumers and regulators, as well as relevant progress on SBOMs.
Back in 2019, Barclay et al. \cite{barclay2019towards} introduced their ideas on applying BOMs to data ecosystems for transparency and traceability, where they detailed a conceptual model to combine a BOM (static) and a bill of lots (dynamic) to jointly record the static data components and the dynamic data of a specific experiment.
Based on their previous work, as a step towards operationalizing the conceptual model, Barclay et al. \cite{barclay2022providing} recently introduced their work using a BOM as a verifiable credential for transparency into the AI SSCs, which is a step towards AIBOM.
\section{Introduction}
\input{1Introduction}
\section{Background: What is SBOM?}
\input{2Background}
\section{Research Methodology}
\input{3Researchmethodology}
\section{Study results}
\input{4Researchresults}
\section{Discussion and implications}
\input{5Discussion}
\section{Related work}
\input{6Relatedwork}
\section{Conclusion and future work}
\label{7conclusion}
SBOMs are essential to SSC security considering the transparency enabled by SBOMs and the subsequently enhanced accountability, traceability and security. In this study, we interviewed 17 and surveyed 65 SBOM practitioners on their perception of SBOM. Despite the promising SSC transparency and security enabled by SBOMs,
there are still open challenges to be addressed.
To accelerate the adoption of SBOMs, higher-quality SBOM generation, clearer benefits and use cases for SBOM consumption, and lower barriers in SBOM sharing are prerequisites that need to be further studied and addressed. In addition, SBOMs for AI software (i.e., AIBOMs) are an inevitable trend given the popularity of AI and AI software, and AIBOMs need to consider the co-evolution of data/model/code/configuration.
\section*{Acknowledgment}
The authors would like to thank all the interview and survey participants for their great help and support.
This work could never have been accomplished without them.
\Urlmuskip=0mu plus 1mu\relax
\bibliographystyle{IEEEtran}
\section{Introduction}
Relativistic outflows carry large amounts of energy ($10^{43}-10^{47}\,{\rm erg/s}$ in the case of extragalactic jets and $10^{36}-10^{40}\,{\rm erg/s}$ in the case of microquasars) from the environment of compact objects, where they are triggered, to large distances (hundreds of kpc for extragalactic jets and parsecs for microquasars). The jets are formed by extraction of rotational energy from a Kerr black-hole \cite{bz77,tch11,bk12,bkp12} or from an accretion disc \cite{bp82}. A part of that energy is released in the form of radiative output, throughout the whole electromagnetic spectrum, mainly via synchrotron and inverse-Compton (either external or internal) processes. The radiative losses are typically taken as a negligible part of the total energetic budget, so jets are modelled as adiabatic. Nevertheless, this can be incorrect in regions in which the magnetic field is intense and radiative losses are strong, e.g., close to the jet formation region. The adiabatic approximation is probably valid once the jet is mass-loaded and the intensity of the magnetic field has dropped due to possible investment in acceleration after formation and jet expansion. At kiloparsec scales, the jets show a morphological dichotomy between collimated, edge-brightened FRII radiogalaxies and less collimated, dimmer FRI radiogalaxies \cite{fr74}.
Jets can be modelled as plasmas, because the Larmor radius of particles moving around magnetic field lines is very small compared to the scales of the problem \cite{br74}, which can be understood as the magnetic field giving consistency to the flow. The jets carry their energy flux in the form of kinetic, internal and magnetic energy flux. The relative weight of the three energetic \emph{channels} determines the way in which the jet interacts with its environment, the instabilities that may develop and the possibility of particle acceleration at shocks.
The cooling times of energetic particles are inversely proportional to the particle Lorentz factor and to the square of the magnetic field. In terms of the critical emitting frequency, the cooling time can be written as (see, e.g., \cite{ba16}):
\begin{equation}
t_c \, = \, 3 \sqrt{3 \pi} \sqrt{\frac{m_e c e}{\nu_c}} \frac{1}{\sigma_T \beta^2} B^{-3/2},
\end{equation}
where $m_e$ and $e$ are the electron mass and charge, respectively, $c$ is the speed of light, $\sigma_T$ is the Thomson cross-section, $\beta$ is the particle speed in units of the speed of light, $\nu_c$ is the critical emitting frequency, and $B$ is the magnetic field in Gauss. Using a relatively low observing frequency of 178\,MHz, typically used to image large-scale jet structures, we obtain:
\begin{equation}
t_c \, \simeq 1.2 \times 10^8 B^{-3/2}\, {\rm s}.
\end{equation}
If the bulk velocity is close to the speed of light, this translates into a distance of $\simeq 4$ light-years for 1 Gauss, or $1.2\times 10^5\,{\rm ly}$ for $1 \,{\rm mG}$. Taking into account that the magnetic field drops with distance (by simple expansion and conservation of the magnetic flux), we can infer that little particle re-acceleration is needed. However, the cooling times and distances shorten with $\nu_c^{-1/2}$, and, therefore, X-ray emission would be unexpected at kiloparsec scales. Still, X-rays are observed in FRI radio galaxies within the deceleration region (a few hundred pc to several kpc) and even at larger scales in FRII jets. Moreover, extended gamma-ray emission has been reported by \emph{Fermi} observations of a nearby FRI radio galaxy (Centaurus A). These observations imply particle acceleration mechanisms at work at large distances from the active nucleus. Interestingly, it has been noted that the \emph{Fermi} observatory detects a larger percentage of gamma-ray emitters among FRI jets than among FRIIs, which cannot be accounted for in terms of source populations \cite{dg16,gr12}. This is counter-intuitive taking into account that FRIs are weaker than FRIIs in terms of jet energy budgets (see, e.g., \cite{gc01}).
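As a quick numerical check of the two cooling-time expressions above (a minimal Python sketch in CGS units, with rounded constants and $\beta \simeq 1$ assumed):
\begin{verbatim}
import math

# CGS constants (rounded)
m_e, c, e = 9.109e-28, 2.998e10, 4.803e-10   # g, cm/s, esu
sigma_T   = 6.652e-25                        # cm^2
yr        = 3.156e7                          # s

def t_cool(B, nu_c=1.78e8, beta=1.0):
    """Synchrotron cooling time of the first equation; B in Gauss, nu_c in Hz."""
    return 3.0 * math.sqrt(3.0 * math.pi) * math.sqrt(m_e * c * e / nu_c) \
           / (sigma_T * beta**2) * B**-1.5

print(t_cool(1.0))         # ~1.2e8 s, as quoted in the text
print(t_cool(1.0) / yr)    # ~4 yr   -> ~4 light-years at v ~ c
print(t_cool(1e-3) / yr)   # ~1.2e5 yr -> ~1.2e5 light-years for 1 mG
\end{verbatim}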
Possible acceleration processes include Fermi-I type acceleration at shocks, acceleration in turbulent flows, shear acceleration at the transition between the jet and its surrounding medium, or magnetic reconnection (see \cite{rl18} and references therein). In this contribution I summarise a set of possible magnetohydrodynamic scenarios that can take place in jets and become the frame in which either kinetic or magnetic energy can be dissipated, increasing the internal energy budget and accelerating particles to non-thermal energies. A more extended review of the subject can also be found in \cite{pe19}. This contribution is structured as follows: In Section 2, I summarize the possible instabilities that can develop in jets and contribute to jet deceleration and energy dissipation. In Section 3, I describe the scenario of jet-obstacle interaction and the role of such events in jet evolution. In Section 4, I derive an expression for the distance at which we expect the mass load by stars to become relevant (this is also the distance at which we expect stronger dissipation generated by collisions). I also discuss the differences and similarities in energy dissipation scenarios between microquasar and extragalactic jets. Finally, in Section 5 I present the conclusions that can be extracted from this contribution.
\section{Jet instabilities}
When jets are formed they are probably magnetically dominated, and, thus, sub-Alfv\'enic. The magnetic field structure generated by rotation is presumably toroidal. Acceleration may stretch the lines, but the conservation of the magnetic flux favours the toroidal field dominating the magnetic field structure with distance, in the case of expanding outflows. Farther downstream, the investment of internal and magnetic energy in the process of bulk acceleration, together with entrainment, changes the jet energy flux to particle dominated. These changes are relevant to the instability modes that may develop.
The linear growth of instabilities is studied by linearizing the RMHD equations and introducing wave-like perturbations (see \cite{pe19} and references therein). The solutions include complex values for the frequency and wavenumber, where the imaginary part becomes the growth-rate of the instability. Numerical simulations can then be used to study the post-linear and non-linear regimes, although high numerical resolution is demanded to follow the growth of linear modes from small amplitudes \cite{pe04}.
While the jet is magnetically dominated, the current-driven instability (CDI) may develop (see \cite{bo13,kim17,kim18,bo19}). This instability is favoured mainly by toroidal configurations. From an observational point of view, there are no hints of global disruption at parsec scales, which may have, at least, two different explanations: 1) the instability modes do not develop fast enough to affect the flow before the flow conditions change, or 2) there is no global magnetic field configuration and the instability only develops at small scales within the jet flow, if at all. Among the stabilising mechanisms for the CDI, a strong poloidal field component, jet expansion, wide shear-layers or winds shielding the jet, or a non-negligible azimuthal velocity component (see also \cite{ma16}) have been shown to delay the growth of disrupting CDI modes.
As pointed out, a drop in the magnetic energy flux is expected during the acceleration phase, which, together with the flow acceleration itself, brings the jet to become super-Alfv\'enic. At this point, the CDI is suppressed with respect to the Kelvin-Helmholtz instability (KHI) modes (see, e.g., \cite{ha07}). As opposed to the case of the sub-parsec scales, a number of observed jet structures have been related to the growth of KHI modes at parsec scales (e.g., \cite{lz01,har03,har05,har11,vg19}). Still, no global jet disruption based on the development of such modes has been reported, except in one case (S5~0836+710), in which the jet decollimation and deceleration have been attributed to the growth of a helical mode out to kiloparsec scales \cite{pe12a,pe12b,ka19}.
Again, because the growth-rates of global instabilities are related to the time the waves need to cross the jet and bounce off its boundaries (see, e.g., \cite{pc85,pk15}), jet expansion \cite{har86} and axial velocity (Lorentz factor, \cite{pe05,pe10}) reduce them and favour jet stability. In the case of the Lorentz factor, it contributes to stretching the distance between bounces, so even if the jet opening angle is small, the growth times or distances may be long.
Although the nonlinear development of KHI and CDI modes has been related to the deceleration of FRI jets \cite{pm07,ro08,tb16}, the morphological dichotomy between FRI and FRII radiogalaxies seems to be related to the growth of small-scale instabilities that trigger mixing from the jet boundaries to its axis \cite{lb14}. In this context, short wavelength KHI modes \cite{pe10}, Rayleigh-Taylor instability (RTI, \cite{mm13,ma17,to17,mk07,mk09}) or the centrifugal instability (CFI, \cite{gk18a,gk18b}) have been suggested to develop in expanding and recollimating jets. Nevertheless, although FRI jets have relatively large opening angles at the decelerating region, there is no observational hint of such large-scale recollimation shocks in archetypical FRI jets, so the solution to the problem does not seem to be unique.
Even if the aforementioned list of possible developing instabilities plays no evident role in the FR dichotomy, they may certainly contribute to dissipating part of the magnetic and/or kinetic energy and thus contribute to long-term deceleration. The dissipation induced by instabilities is relatively small during the linear phase of the amplitude growth, being associated with the oscillations generated, but it becomes relevant in the cases in which the amplitudes reach post-linear or non-linear values. In this case, the dissipation takes place via shocks and turbulent mixing.
\section{Interactions}
Stars and clouds are numerous in galactic cores and can penetrate the jet as they orbit around the nucleus. This process triggers a strong bow-shaped shock wave around the obstacle \cite{ko94}. In the case of clouds, the shock crosses them, heating the gas, which expands and enlarges the interaction cross-section, whereas in the case of stars, the stellar wind equilibrates the jet flow at a distance that is determined by setting the wind-to-jet momentum ratio to unity:
\begin{equation}\label{eq:rs}
R_s=\sqrt{\frac{\dot{M}_{\rm w}\,v_{\rm w}} {4 \pi\,\rho_{\rm j}\,\gamma_{\rm j}^2\,v_{\rm j}^2 }},
\end{equation}
where $R_s$ is the distance between the contact discontinuity and the star, $\dot{M}_{\rm w}$ is the stellar wind mass flux, $v_{\rm w}$ is the wind velocity, and the subscript ${\rm j}$ indicates jet parameters: rest-mass density, $\rho$, Lorentz factor, $\gamma$, and velocity, $v_{\rm j}$. This expression can also be written as
\begin{eqnarray}\label{eq:rs2}
R_s=2.14 \times 10^{12} \left(\frac{\dot{M}_{\rm w}}{10^{-11}\,{\rm M_\odot \, yr^{-1}}}\right)^{1/2} \,\left(\frac{v_{\rm w}}{10\,{\rm km s^{-1}}}\right)^{1/2} \times \, \nonumber \\ \left(\frac{L_{\rm j}}{10^{43}\,{\rm erg s^{-1}}} \right)^{-1/2} \,\left(\frac{v_{\rm j}}{c}\right)^{-1/2} \,\left(\frac{R_{\rm j}}{1\,{\rm pc}}\right)^{1/2} \,\left(\frac{h_{\rm j}}{c^2}\right)^{1/2}\, {\rm cm},
\end{eqnarray}
where we have used that $\rho_{\rm j}\,\gamma_{\rm j}^2\,v_{\rm j}^2 \,=\, L_{\rm j}\, v_{\rm j} / (\pi R_{\rm j}^2 h_{\rm j})$, with $L_{\rm j}$ the jet power, $R_{\rm j}$ the jet radius and $h_{\rm j}$ the specific enthalpy. Equation \ref{eq:rs2} only depends on the jet power, radius and velocity for $h_{\rm j} \simeq c^2$ (i.e., for a cold jet). The angular size that such a value of $R_s$ would imply is below the micro-arcsecond level even for sources at $\simeq 20\,{\rm Mpc}$. Therefore, it is very difficult to observe these interactions, unless the stellar wind is very powerful, with a mass-loss rate orders of magnitude larger than $\dot{M}_{\rm w}\, =\, 10^{-11}\,{\rm M_\odot \, yr^{-1}}$, and in nearby AGN jets, as has been suggested in the case of Centaurus A \cite{wo08,go10}. In \cite{br12,dlc16,pe17b}, the authors have studied the details of jet-star/cloud interactions in the RHD approximation. They show that the local dissipation of jet kinetic energy in the interaction region can be efficient and that mixing can take place rapidly by means of the development of helical instabilities in the shocked wind tail.
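A short numerical sketch (Python, CGS units, rounded constants) recovers the normalization of Eq.~(\ref{eq:rs2}) from Eq.~(\ref{eq:rs}), assuming a cold jet ($h_{\rm j} \simeq c^{2}$) and the fiducial values quoted above:
\begin{verbatim}
import math

c      = 2.998e10                   # cm/s
M_sun  = 1.989e33                   # g
yr, pc = 3.156e7, 3.086e18          # s, cm

# Fiducial values of the scaled expression
Mdot_w = 1e-11 * M_sun / yr         # stellar mass-loss rate, g/s
v_w    = 1e6                        # 10 km/s in cm/s
L_j    = 1e43                       # erg/s
v_j    = c
R_j    = 1.0 * pc
h_j    = c**2                       # cold jet

# rho_j * gamma_j^2 * v_j^2 = L_j * v_j / (pi R_j^2 h_j)
ram = L_j * v_j / (math.pi * R_j**2 * h_j)
R_s = math.sqrt(Mdot_w * v_w / (4.0 * math.pi * ram))
print(f"{R_s:.2e} cm")              # ~2.1e12 cm, as in the scaled expression
\end{verbatim}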
Several works have pointed out the possible relevance of such interactions in the production of X-rays along the jet (e.g., \cite{wyk13,wyk15,vi17}), and also gamma-rays close to active galactic nuclei (e.g., \cite{bp97,bba10,ara13,tab19}). The particle acceleration is modelled to take place within the interaction region (via the Fermi I process, mainly) and the gamma-rays typically produced by inverse Compton scattering of stellar photons, or an external photon field if the interaction takes place close to the active nucleus.
Downstream of the obstacle, a cometary tail of shocked gas forms, which is surrounded by shocked jet plasma. The expansion of this over-pressured gas with respect to its environment favours acceleration along the tail. Furthermore, depending on the conditions, the tail can be easily destabilised and trigger mixing \cite{pe17b}. The global dynamical role of mass-loading by stars and other obstacles can be studied via relativistic magnetohydrodynamical (RMHD) simulations, by introducing a source term in the mass equation (the particles being injected with zero velocity and temperature relative to the jet) for each numerical cell, accounting for the number of stars per unit volume and distance to the nucleus. Then, assuming steady-state conditions, we can isolate the role of entrainment \cite{bo96}, or, by running dynamical simulations, we can test the combined effect of entrainment plus other processes, such as the development of instabilities \cite{pe14}.
Steady-state simulations \cite{bo96} (see also Perucho et al., in preparation) already showed that this represents an efficient global mechanism for energy dissipation in jets. In \cite{bo96}, the authors reported that cold jets could be globally heated by the dissipation induced by the injection of mass by a distribution of stars. In terms of radiative output, this process would trigger strong emissivity \cite{wyk13,wyk15,vi17}, just as observed in the decelerating regions of FRIs \cite{lb14}. However, the modelling of those regions reveals a progressive deceleration from the jet boundaries. This means that, although mass-load by stars may efficiently contribute to jet deceleration, for the case of relatively weak jets ($L_j \sim 10^{42}\,{\rm erg/s}$) and old populations of low-mass stars ($\dot{M}_w \sim 10^{-12} M_\odot \,{\rm yr^{-1}}$), or for more powerful jets ($L_j \sim 10^{43}\,{\rm erg/s}$) and a relatively large number of red giants ($\dot{M}_w \sim 10^{-9} M_\odot \,{\rm yr^{-1}}$), it still does not seem to be the only answer to the dichotomy. Only if the bubbles formed by the shocked stellar winds in the interstellar medium are rapidly eroded at the jet shear-layer and they largely contribute to entrainment could the model be reconciled with the observations: In that situation, most of the entrainment would occur at the jet boundaries, and it would be dragged inwards by turbulent mixing. However, there are doubts about this \cite{tab19}.
Dynamical simulations \cite{pe14} show that the energy dissipation and heating of the jet induce expansion, which favours the penetration of more stars and, via deceleration, also increases the growth-rates of different instability modes. In the case of axisymmetric simulations, only pinching (symmetric) modes can develop and the jets are disrupted by a strong shock, once the pinch becomes non-linear. This is an interesting example of how different processes can act together: In such a case, it would not make much sense to attribute jet deceleration to one or the other; it should instead be attributed to the combination of both.
Particle acceleration can, in the scenario of jet-star interactions, take place both locally at shocks, and via extended acceleration in the turbulent mixing tails. It is certainly difficult to observe a single interaction (Eq.~\ref{eq:rs2}), but, owing to the large number of stars embedded in the jet at a given time, it could certainly contribute to diffuse emission such as that detected in X-rays along the deceleration region \cite{lb14,kh12} in FRI jets. Another relevant role of mass entrainment, in terms of radiative output, is the reduction of Doppler boosting, which makes aligned, distant jets more difficult to detect, and has the opposite effect on misaligned, closer jets.
Assuming that the magnetic flux is very small compared to the kinetic flux beyond the jet flow acceleration zone, one can estimate the distance at which the jet momentum is completely consumed by the acceleration of entrained particles by equating the initial jet momentum ($L_{\rm j}/(\gamma_{\rm j}\,c)$, with $v_{\rm j} \simeq c$) to the entrained mass \cite{hb06}:
\begin{eqnarray} \label{eq:hb}
l_{\rm d} \simeq \frac{1} {\gamma_{\rm j}} \! \! \left(\frac{L_{\rm j}} {10^{43}\, {\rm erg\,s}^{-1}}\right)
\!\! \left(\frac{\dot{M}} {10^{-11}\, {\rm M_\odot} {\rm yr}^{-1}}\right)^{-1} \left(\frac{n_s} {1\,{\rm pc}^{-3}}\right)^{-1}\, \!\!
\! \left(\frac{R_{\rm j}} {10\, {\rm pc}}\right)^{-2} \!\! \, 10^2\, {\rm kpc},
\end{eqnarray}
where $\dot{M}$ is the mean mass-loss rate of the stellar population in the galaxy, and $n_s$ is the number of stars per unit volume. From this expression, we see that a large contribution from stars is needed to completely decelerate an FRI jet with $L_{\rm j} = 10^{43} {\rm erg\,s^{-1}}$ within 1~kpc. This can only happen if the stellar population is dominated by red giants. However, because $l_d$ is inversely proportional to the square of the jet radius, and taking into account that energy dissipation produces an increase of the jet pressure (and thus enhances expansion and favours the penetration of more stars), the deceleration process can undergo a certain degree of feedback, shortening $l_d$. Furthermore, there are other aspects that have to be taken into account when considering Eq.~\ref{eq:hb}: 1) The jet momentum is never completely depleted in FRI jets, so $l_d$ is, in this respect, an upper limit of the distance at which the jet becomes transonic, and 2) the number of stars drops with distance, and this would increase the value of $l_d$.
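For convenience, a small Python helper that simply evaluates the scaling of Eq.~(\ref{eq:hb}) as printed (it does not re-derive the expression; the inputs are in the fiducial units shown there):
\begin{verbatim}
def l_d_kpc(gamma_j, L_j=1e43, Mdot=1e-11, n_s=1.0, R_j=10.0):
    """Deceleration length l_d of the text, in kpc.

    L_j in erg/s, Mdot in M_sun/yr (mean stellar mass loss),
    n_s in stars/pc^3, R_j in pc.
    """
    return (1.0 / gamma_j) * (L_j / 1e43) * (Mdot / 1e-11)**-1 \
           * (n_s / 1.0)**-1 * (R_j / 10.0)**-2 * 1e2

# Example: a 10^43 erg/s jet loaded mainly by red giants (Mdot ~ 1e-9 M_sun/yr)
print(l_d_kpc(gamma_j=5.0, Mdot=1e-9))   # ~0.2 kpc
\end{verbatim}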
\section{Discussion}
\subsection{Entrainment and dissipation}
Equation~\ref{eq:hb} provides an estimate of the distance at which the jet's momentum is completely absorbed by the entrained particles and the flow is thus stopped. We can make another estimate about the distance at which the mass load starts to be relevant with respect to the injected mass flux. Comparing the mass flux at two different positions along the jet, we obtain:
\begin{equation} \label{eq:mcons}
\rho_{\rm j}\, \gamma_{\rm j} \,R_{\rm j}^2 \,v\, = \,\rho_{\rm j,0} \, \gamma_{\rm j,0} \, R_{\rm j, 0}^2\, c\, +\, \dot{M} \,n_s \, R_{\rm j}^2\, \Delta z,
\end{equation}
where the subscript $0$ indicates an initial location where the mass entrainment is negligible, and we assume $v_{\rm j,0} \simeq c$. Taking a constant opening angle, we can write $R_{\rm j} \simeq \Delta z\, \tan(\alpha_{\rm j})$, for large enough values of $\Delta z$. Then, comparing the two terms on the right-hand-side, we can state that mass-load will start to be relevant when
\begin{equation} \label{eq:mcmp}
\dot{M} \, n_s \, \tan(\alpha_{\rm j})^2 \,(\Delta z)^3 \, \simeq \, \rho_{\rm j,0}\, \gamma_{\rm j,0} \,R_{\rm j, 0}^2 \,c.
\end{equation}
We can also write the initial mass flux in terms of the jet power:
\begin{equation} \label{eq:mpow}
\rho_{\rm j,0} \,\gamma_{\rm j,0}\, \pi \,R_{\rm j,0}^2 \,c\, \simeq\, \frac{L_{\rm j}}{\gamma_{\rm j,0} \,c^2},
\end{equation}
which is valid when $v_{\rm j,0} \simeq c$ and the specific enthalpy is $h_{\rm j} \simeq c^2$ (i.e., a cold jet). We derive, substituting $\Delta z$ by $l_m$, and $R_{\rm j}$ by $\Delta z \, \tan(\alpha_{\rm j})$ (with $\alpha_{\rm j}$ the half opening angle):
\begin{eqnarray} \label{eq:l_m}
l_m \simeq 390 \left( \frac{1} {\gamma_{\rm j,0}\,(\tan(\alpha_{\rm j}))^2}\, \! \! \left(\frac{L_{\rm j}} {10^{43}\, {\rm erg\,s}^{-1}}\right)
\!\! \left(\frac{\dot{M}} {10^{-11}\, {\rm M_\odot} {\rm yr}^{-1}}\right)^{-1} \left(\frac{n_s} {0.1\,{\rm pc}^{-3}}\right)^{-1}\,
\! \right)^{1/3} \!\! \, {\rm pc}.
\end{eqnarray}
This expression is not very sensitive to changes of the parameters by one order of magnitude, and it reveals the possibly relevant contribution to deceleration of stellar populations in which the number of red giants is large enough to increase the mean mass-loss rate. Again, the expression does not take into account the drop in $n_s$ with distance, but a large drop is not expected within the inner kiloparsecs, especially in the giant ellipticals typically hosting FRI jets. For a small opening angle of $1^\circ$, we obtain $l_m\simeq 8.4/\gamma_{\rm j,0}\,{\rm kpc}$. An increase of the opening angle could compensate for the drop in the number of stars with distance: Actually, an opening angle of $1.5^\circ$ would reduce the estimate of $l_m$ to 1~kpc. For a jet in free expansion, $\alpha_{\rm j} \simeq 1/\gamma_{\rm j}$, so $\alpha_{\rm j}\simeq 5.7^\circ$ if $\gamma_{\rm j}\simeq 10$. In this case, $l_m\leq 200\,{\rm pc}$. Interestingly, the distance obtained is within the order of magnitude of the expected deceleration scales in FRIs.
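The normalization of the last expression can be checked with a short numerical sketch (Python, CGS units, rounded constants), evaluated in the same scaling form, i.e., with $\gamma_{\rm j,0}$ and $\tan\alpha_{\rm j}$ factored out:
\begin{verbatim}
import math

c      = 2.998e10                     # cm/s
M_sun  = 1.989e33                     # g
yr, pc = 3.156e7, 3.086e18            # s, cm

L_j  = 1e43                           # erg/s
Mdot = 1e-11 * M_sun / yr             # g/s
n_s  = 0.1 / pc**3                    # stars per cm^3

# From the mass-conservation comparison above:
#   l_m^3 = L_j / (pi gamma_0 c^2 Mdot n_s tan^2(alpha)),
# evaluated here with gamma_0 = 1 and tan(alpha) = 1.
l_m = (L_j / (math.pi * c**2 * Mdot * n_s))**(1.0 / 3.0)
print(f"{l_m / pc:.0f} pc")           # ~390 pc, the prefactor above
\end{verbatim}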
If the jet energy flux is dominated by the rest-mass of the particles, this distance will coincide with the deceleration distance. However, this is probably not the case at parsec scales, where the Lorentz factor can be large, and the internal or magnetic energy can be relevant. In that case, $l_m$ indicates the distance at which the jet particle composition is significantly changed. The relative importance of the energy flux associated to the mass of the particles will determine to which extent this is also dynamically relevant.
As a consequence of the previous statement, there are two options to interpret the situation in FRI jets, where deceleration takes place at hundreds of parsecs to a few kiloparsecs: 1) if this is caused by stars mainly, it means that the jet should be particle dominated at $z\simeq 100$~pc, and 2) if the jet energy flux is dominated by the magnetic or internal (hot jet) energy flux or it has a large Lorentz factor, then it becomes difficult to explain the observed behaviour only with stellar mass-load, unless the population of red giants or young stars is important enough.
Beyond the deceleration distance, the jet becomes transonic and particle acceleration and significant radiative output will mainly take place in the turbulent regions triggered by entrainment (see \cite{bi84,bi86a,bi86b,dy86,ko88,ko90a,ko90b}). It is also important to stress that, although the above expressions are derived for stellar mass-loss, the terms $(\dot{M} \,n_s)^{-1}$ can be substituted by the entrainment from the jet boundaries, although in this case the modelling should take into account the inhomogeneity of the loading, which takes place radially inwards from the boundaries (see, e.g., \cite{wa09}).
It has been reported that the gamma-ray observatory \emph{Fermi} detects relatively more FRIs than FRIIs (see \cite{dg16,gr12}), and this cannot be reproduced in terms of population numbers. An obvious temptation to explain this fact is to claim that the dissipation of kinetic energy is stronger in FRIs. However, even if the fraction of dissipated energy in FRIIs is smaller, their larger energy budget could compensate for that difference in the investment of kinetic energy into particle acceleration. Therefore, extra causes are needed. One possible option is Doppler boosting: while the gamma-ray emission is deboosted in FRIIs because of misalignment with our line of sight, the deceleration of FRI jets could favour detections at large viewing angles. This possibility would be supported by the detection of FRIIs with longer integration times. Another option is that the processes taking place in FRIs are more efficient in terms of particle acceleration than those in FRIIs. In this case, the main different mechanism is the turbulence triggered by entrainment (either from ambient gas or from stellar winds). As we have seen, we expect dissipation in FRIs to be caused by interaction with obstacles (shocks plus turbulence) and by entrainment produced by small-scale instabilities within the inner kiloparsecs, and by turbulent mixing beyond this region. Then, the different detection fractions would demand such processes to be more efficient particle accelerators than shocks. The detection of extended emission in a nearby FRI radiogalaxy (Centaurus A, \cite{ab10}) indicates that, indeed, turbulence could be an efficient accelerator.
\subsection{A note on microquasars}
Although microquasar jets may form via the same physical mechanisms as extragalactic jets and may resemble them at large scales \cite{mar17}, the environment through which they evolve is extremely different from that through which extragalactic jets propagate. In the case of young microquasars, the jet may cross the companion wind, shocked wind, the supernova remnant (SNR), and the shocked interstellar medium (ISM), before entering the ISM. In the case of an aged microquasar, the jet has to get through the wind, shocked-wind, and shocked ISM \cite{brpb12}.
Once triggered and accelerated, the jet suffers the lateral impact of the companion's wind \cite{pbr08,pbrk10,brb16,yoo16}. Depending on the wind-to-jet momentum ratio, oblique shocks and bends can develop in the jet. The inhomogeneities in the winds of massive companions, which may embed dense clumps, can play the role of clouds in extragalactic jets, enhancing mass-loading and energy dissipation \cite{pbr12}.
In this initial region, shocks and non-linear effects on the jet should be expected. Therefore, strong kinetic and magnetic energy dissipation must take place. Beyond the binary region, the jet radius of the generated outflow is of the order of the size of the orbit, so that, unless the companion's wind disrupts it, the outflow is the result of the expansion of the injected plasma from the compact object and the bending/entrainment produced by the wind.
When the jet crosses any of the discontinuities defined by the SN or wind shocks, it enters denser media. In this case, dissipation must also occur at the strong interaction, which would show up as a temporary structure, before the jet carves its way through the shocked gas. Transient jet head deceleration is also expected. Finally, once the jet opens its way up to the ISM, it evolves through a fairly homogeneous medium \cite{bo+09}.
As we can conclude from the previous paragraphs, the evolution of microquasar jets takes place in environments that are completely different from those found by extragalactic jets. The latter are typically formed from non-orbiting black holes (except, probably, in some specific cases), and find inhomogeneous media in the inner kiloparsecs, as they cross the ISM, but evolve, grossly speaking, through environments of decreasing density and pressure. On the contrary, the former are formed along orbits that are much wider than their injection radii, suffer the impact of a lateral wind and cross several discontinuities into denser media. These all represent non-linear processes,\footnote{In powerful jets and low mass companions, the impact of the wind could be taken as a linear perturbation.} which can drastically change the jet evolution and trigger an efficient acceleration of particles.
\section{Conclusions}
Altogether, the large amounts of energy transported by jets and a series of dissipative processes like the ones described here (or others, such as magnetic reconnection at small scales) give a reasonable scenario to explain the high and very-high energy emission from jets. The growth of different instability types and the interaction of the jet flow with different types of obstacles probably represent the two main scenarios by which jets dissipate energy, via shocks, shear and turbulence, along the inner kiloparsecs of jet evolution. In the case of large-scale FRI jets, once the flows become transonic, the most probable process to accelerate particles is turbulence.
Although the evolution of microquasar jets must be very different to that of extragalactic jets, due to the different environmental conditions, analogous non-linear processes can take place in this case, such as interactions with dense inhomogeneous clumps formed in the stellar winds of massive companion stars, or helical instabilities induced by the orbital motion of the injection point, or the stellar wind.
A difficult problem to overcome from a theoretical perspective is the link between these large-scale plasma scenarios and the detailed particle acceleration process, mainly because of the orders of magnitude difference in spatial scales. Overcoming this gap is a big challenge that numerical techniques will have to face for the exact relation between the macroscopic phenomena and the radiative output from relativistic outflows to be understood.
\section*{Acknowledgements}
I thank the SOC and LOC of the meeting for their hospitality and the organization of a very interesting meeting. I also thank the referee of this contribution for his/her positive comments. MP has been supported by the Spanish Ministerio de Ciencia y
Universidades (grants AYA2015-66899-C2-1-P and AYA2016-77237-C3-3-P)
and the Generalitat Valenciana (grant PROMETEOII/2014/069).
\section*{Acknowledgments}
We thank Mark Gieles and Else Starkenburg for helpful discussions and comments on the manuscript, and
Don VandenBerg for valuable advice on the selection of isochrones.
Comments from the anonymous referees helped improve the presentation.
The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
\paragraph{Funding:} AJR was supported by National Science Foundation grant AST-1616710, and as a Research Corporation for Science Advancement Cottrell Scholar. JPB acknowledges support from HST grant HST-GO-15078.
AJR and SSL were supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
\paragraph{Authors contributions:}JPB secured the observing time for this project, all authors contributed to the planning of the observations, and the inclusion of EXT8 as a target was suggested by AJR. AW conducted the observations and SSL carried out the data reduction and analysis and drafted the paper. All authors assisted in the interpretation of the results and writing of the paper.
\paragraph{Competing interests:} None.
\paragraph{Data and materials availability:}
The average measured abundances are listed in Table~S2 and individual measurements are in Tables S3-S9. The raw spectra of EXT8 and Hodge~III are available in the W. M. Keck Observatory Archive at http://koa.ipac.caltech.edu, program ID U177Hr, Semester 2019B, and program ID U040Hr, Semester 2015B, P.I. Brodie.
The UVES observations of M15 are available in the ESO archive at http://archive.eso.org, program ID 095.B-0677(A), P. I. Larsen.
The CFHT MegaCam images of EXT8 are available in the CFHT data archive at http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/cfht, proposal ID 16BC25, product IDs 2011389p, 2011390p, and 2011391p.
The spectral analysis was carried out with the \textsc{ISPy3} code \cite{ispy3}, and the King model fitting was done with the \textsc{Baolab} code \cite{Larsen1999}, available at https://github.com/soerenslarsen/baolab, DOI 10.5281/zenodo.4036106.
\nocite{makee}
\nocite{Kroupa2001}
\nocite{Kurucz1970}
\nocite{Sbordone2004}
\nocite{kurucz}
\nocite{Kurucz1981}
\nocite{Dotter2007}
\nocite{Dotter2016}
\nocite{Salaris1993}
\nocite{Pietrinferni2006}
\nocite{Cox2000}
\nocite{Kurucz2005}
\nocite{McWilliam2008}
\nocite{Eitner2019}
\nocite{Castelli2005}
\nocite{hireswww}
\nocite{King1962}
\nocite{Stanek1998}
\nocite{Larsen2002b}
\section*{Supplementary materials}
Materials and Methods\\
Supplementary Text\\
Figures S1-S3\\
Tables S1 to S10 \\
References (\textit{39-57})
\end{document}
\section*{Introduction} \label{sec Introduction}
\addcontentsline{toc}{section}{Introduction}
Let $\mathcal{E}_{n}$ denote the ring of germs at the origin $\underline{0}$ of $\mathbb{R}^{n}$ of $\mathcal{C}^{\infty}$ functions, and $\hat{\mathcal{E}}_{n}$ the ring of formal power series in $n$ indeterminates with real coefficients. The map
$$T_{\underline{0}}: \mathcal{E}_{n} \rightarrow \hat{\mathcal{E}}_{n}$$
which associates to a function $f$ its infinite Taylor jet is a ring morphism. The Borel realization theorem asserts that $T_{\underline{0}}$ is surjective:
if $\hat{f} \in \hat{\mathcal{E}}_{n}$ is a formal series, there exists a germ $f \in \mathcal{E}_{n}$ such that $T_{\underline{0}} f = \hat{f}$.
Such an $f$ will be called a $\mathcal{C}^{\infty}$ realization of $\hat{f}$. Two realizations of $\hat{f}$ differ by a function that is flat at $\underline{0}$.
If $\mathcal{M}_{n} \subset \mathcal{E}_{n}$ denotes the maximal ideal of germs vanishing at $\underline{0}$, then the kernel of
$T_{\underline{0}}$ is exactly the ideal of flat functions: $\mathrm{Ker}\, T_{\underline{0}} = \mathcal{M}_{n}^{\infty} = \bigcap_{k} \mathcal{M}_{n}^{k}$.
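For illustration (a standard example, not specific to this text): for $n = 1$, the germ at $0 \in \mathbb{R}$ of
$$f(x) = e^{-1/x^{2}} \ (x \neq 0), \qquad f(0) = 0,$$
is flat, since all its derivatives vanish at the origin; thus $T_{\underline{0}} f = 0$ although $f \neq 0$, and any two $\mathcal{C}^{\infty}$ realizations of a given formal series differ by such a germ.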
The map $T_{\underline{0}}$ extends naturally to the "classical" $\mathcal{E}_{n}$-modules. If $\Omega_{n}^{k}$ denotes the $\mathcal{E}_{n}$-module of
germs of differential $k$-forms and $\hat{\Omega}_{n}^{k}$ the $\hat{\mathcal{E}}_{n}$-module of formal $k$-forms, we still write
$T_{\underline{0}}: \Omega_{n}^{k} \rightarrow \hat{\Omega}_{n}^{k}$ for the morphism associating to a $\mathcal{C}^{\infty}$ $k$-form its infinite Taylor jet at
the origin. Likewise, if $\mathcal{X}_{n}$ (resp. $\hat{\mathcal{X}}_{n}$) denotes the Lie algebra of germs of $\mathcal{C}^{\infty}$
(resp. formal) vector fields and $\mathrm{Diff}(\mathbb{R}^{n}_{0})$ (resp. $\widehat{\mathrm{Diff}}(\mathbb{R}^{n}_{0})$) the group of germs at $\underline{0} \in \mathbb{R}^{n}$ of $\mathcal{C}^{\infty}$
(resp. formal) diffeomorphisms, we again have Taylor jet morphisms
$T_{\underline{0}}: \mathcal{X}_{n} \rightarrow \hat{\mathcal{X}}_{n}$ and $T_{\underline{0}}: \mathrm{Diff}(\mathbb{R}^{n}_{0}) \rightarrow \widehat{\mathrm{Diff}}(\mathbb{R}^{n}_{0})$.
In the first case it is a morphism of Lie algebras, in the second a group morphism. Borel's theorem carries over directly to these spaces, i.e.
every element of $\hat{\Omega}_{n}^{k}$, $\hat{\mathcal{X}}_{n}$ or $\widehat{\mathrm{Diff}}(\mathbb{R}^{n}_{0})$ admits a $\mathcal{C}^{\infty}$ realization in the
corresponding space. But these spaces carry additional structures: the exterior product and the operator $d$ for the $\Omega_{n}^{k}$, the Lie bracket
for vector fields, and composition for diffeomorphisms. This raises Borel-type realization problems that take these structures into account, which we call the Borel theorem with constraints. Here are a few examples:
\begin{enumerate}
\item Given a finitely generated subgroup $\hat{G} \subset \widehat{\mathrm{Diff}}(\mathbb{R}^{n}_{0})$ of formal diffeomorphisms, does there exist a realization $G \subset \mathrm{Diff}(\mathbb{R}^{n}_{0})$ such that the restriction $T_{\underline{0}}: G \rightarrow \hat{G}$ is a group isomorphism?
\item Let $\hat{\mathcal{G}} \subset \hat{\mathcal{X}}_{n}$ be a finite-dimensional Lie subalgebra of formal vector fields. Does there exist a realization $\mathcal{G} \subset \mathcal{X}_{n}$ such that the restriction $T_{\underline{0}}: \mathcal{G} \rightarrow \hat{\mathcal{G}}$ is a Lie algebra isomorphism?
\item Let $\hat{\omega}$ be a non-trivial integrable formal 1-form. Can one this time find a realization $\omega \in \Omega^{1}_{n}$ of $\hat{\omega}$ which is itself integrable, i.e. $d \omega \wedge \omega = 0$ (see the example below)? An analogous problem arises for Pfaffian systems.
\end{enumerate}
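For illustration (a simple special case, not treated as such in the text), any formal 1-form of the type $\hat{\omega} = \hat{g}\, d\hat{f}$ with $\hat{f}, \hat{g} \in \hat{\mathcal{E}}_{n}$ is integrable, since
$$d\hat{\omega} \wedge \hat{\omega} = (d\hat{g} \wedge d\hat{f}) \wedge \hat{g}\, d\hat{f} = 0,$$
and Borel realizations $f$, $g$ of $\hat{f}$, $\hat{g}$ give the integrable realization $\omega = g\, df$ of $\hat{\omega}$.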
Borel-type problems have attracted the interest of many mathematicians. Thus, in the framework of the study of quasi-analytic algebras, J. C. Tougeron \cite{Tougeron3}
shows that the morphism $T_{\underline{0}}: \mathcal{E}_{1} \rightarrow \hat{\mathcal{E}}_{1}$ admits sections; however, these sections do not respect
composition.
In dimension 2, where the integrability condition is trivial, R. Roussarie \cite{Roussarie} has given several results of Borel type.
Without solving the problems listed above in full generality, we provide positive answers in a few particular cases. These answers are sometimes
adaptations of relatively classical results (finite determinacy, for instance), and sometimes require specific techniques.
\section{Lie algebras of vector fields}
\subsection{Semisimple algebras, algebras of pointwise rank 1, saturable algebras}
Let $\hat{\mathcal{L}}$ be a Lie subalgebra of $\hat{\mathcal{X}}_{n}$ and $\hat{\mathcal{L}}(\underline{0}) = \{ \hat{X}(\underline{0}) / \hat{X} \in \hat{\mathcal{L}} \}$
the evaluation of $\hat{\mathcal{L}}$ at $\underline{0}$. We are interested in the purely singular case where $\hat{\mathcal{L}}(\underline{0}) = \{ \underline{0} \}$.
Under this hypothesis the set $\mathcal{L}^{1} = \{ J^{1} X / X \in \hat{\mathcal{L}} \}$ of "linear parts" of the elements of $\hat{\mathcal{L}}$ is a
Lie subalgebra of the Lie algebra $\mathcal{X}^{1}_{n}$ of linear vector fields on $\mathbb{R}^{n}$. Note that $\mathcal{X}^{1}_{n}$ is
isomorphic to the vector space $\mathrm{End}\,\mathbb{R}^{n}$ of endomorphisms of $\mathbb{R}^{n}$. \\
Assume that $\hat{\mathcal{L}}$ is semisimple, i.e. $\hat{\mathcal{L}}$ has no non-zero solvable ideal. By a result of R. Hermann \cite{Hermann}, $J^{1} : \hat{\mathcal{L}} \rightarrow \mathcal{L}^{1}$ is injective and $\hat{\mathcal{L}}$ is formally linearizable. This means that there exists $\hat{\Phi} \in \widehat{\mathrm{Diff}}(\mathbb{R}^{n}_{0})$ conjugating $\hat{\mathcal{L}}$ to $\mathcal{L}^{1}$:
$$\hat{\mathcal{L}} = \hat{\Phi}_{\ast} \mathcal{L}^{1} = \{ \hat{\Phi}_{\ast} (J^{1} X ) / X \in \hat{\mathcal{L}} \}.$$
Let $\Phi$ be a $\mathcal{C}^{\infty}$ realization of $\hat{\Phi}$; the Lie algebra $\mathcal{L} = \Phi_{\ast} \mathcal{L}^{1}$ is a realization of
$\hat{\mathcal{L}}$ and, by construction, $T_{\underline{0}}: \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism. Hence the following:
\begin{theorem} Let $\hat{\mathcal{L}} \subset \mathcal{M}_{n} \hat{\mathcal{X}}_{n}$ be a semisimple Lie algebra of formal vector fields. Then
$\hat{\mathcal{L}}$ admits a $\mathcal{C}^{\infty}$ realization $\mathcal{L}$ such that \\ $T_{\underline{0}}: \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is
an isomorphism.
\end{theorem}
As above, all linearizable Lie algebras of formal vector fields admit a $\mathcal{C}^{\infty}$ realization, and so do those conjugate to
an algebra of polynomial vector fields. This is the case in small space dimension $n$. In dimension $n = 1$, the formal classification of the finite-dimensional subalgebras
$\hat{\mathcal{L}}$ of $\hat{\mathcal{X}}_{1}$ is part of the folklore; it was probably known to S. Lie, F. Klein and E. Cartan.
\begin{enumerate}
\item $n = 1$ and $\dim \hat{\mathcal{L}} = 1$; $\hat{\mathcal{L}}$ is formally conjugate to the algebra generated by one of the fields $\frac{\partial}{\partial x}$ and $X_{p , \lambda} = \frac{x^{p + 1}}{1 - \lambda x^{p}} \frac{\partial}{\partial x}$ with $p \in \mathbb{N}$ and $\lambda \in \mathbb{R}$.
\item $n = 1$ and $\dim \hat{\mathcal{L}} = 2$; $\hat{\mathcal{L}}$ is formally conjugate to one of the algebras $\langle \frac{\partial}{\partial x} , x \frac{\partial}{\partial x} \rangle$ and $\langle x \frac{\partial}{\partial x} , x^{p} \frac{\partial}{\partial x} \rangle$ with $p \in \mathbb{N} \setminus \{ 0 \}$. All these algebras are isomorphic to the Lie algebra of the group of affine transformations of the line (see the bracket computation below).
\item $n = 1$ and $\dim \hat{\mathcal{L}} = 3$; $\hat{\mathcal{L}}$ is formally conjugate to the algebra \\ $\langle \frac{\partial}{\partial x} , x \frac{\partial}{\partial x} , x^{2} \frac{\partial}{\partial x} \rangle$, which is the Lie algebra of the group of homographic transformations $\mathbb{P}GL(2, \mathbb{R})$.
\end{enumerate}
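For completeness, a quick check of the brackets behind case 2 (an elementary verification, not spelled out in the text):
$$\Big[\frac{\partial}{\partial x},\, x\frac{\partial}{\partial x}\Big] = \frac{\partial}{\partial x}, \qquad \Big[x\frac{\partial}{\partial x},\, x^{p}\frac{\partial}{\partial x}\Big] = (p-1)\, x^{p}\frac{\partial}{\partial x},$$
so for $p \geq 2$ both algebras are two-dimensional and non-abelian, hence isomorphic to the affine algebra of the line.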
All these algebras $\hat{\mathcal{L}}$ are formally conjugate to algebras of analytic vector fields $\mathcal{L}^{an}$. The usual Borel theorem, applied to
a conjugating diffeomorphism $\hat{\Phi}$ ($\hat{\mathcal{L}} = \hat{\Phi}_{\star} \mathcal{L}^{an}$), produces a realization $\mathcal{L} = \Phi_{\star} \mathcal{L}^{an}$ of the algebras $\hat{\mathcal{L}}$
under consideration. In higher dimension the classification of algebras of formal vector fields is not known. In dimension two one can list those that are
formally linearizable (for which Borel-type statements therefore hold); this involves Poincar\'e-type non-resonance conditions on their
solvable radical. In some sense such a "zoological" list brings to light neither new techniques nor new results. To illustrate the above
we shall treat a few special cases in (space) dimension 2, in particular that of commutative algebras. For this we need the
notion of generic pointwise rank, $r(\hat{\mathcal{L}})$, which we define for an arbitrary subalgebra $\hat{\mathcal{L}}$ of $\hat{\mathcal{X}}_{n}$.
\begin{definition} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{n}$ be a non-zero subalgebra; the generic pointwise rank
(or simply the rank) $r(\hat{\mathcal{L}})$ is the maximal number $k$ of elements $\hat{X}_{1} , \ldots , \hat{X}_{k}$ of $\hat{\mathcal{L}}$
that are $\hat{\mathcal{E}}_{n}$-independent. If $\hat{X}_{j} = \sum \hat{a}_{i , j} \frac{\partial}{\partial x_{i}}$, $\hat{a}_{i , j} \in \hat{\mathcal{E}}_{n}$, $j = 1 , \ldots , r(\hat{\mathcal{L}})$, then the matrix $(\hat{a}_{i , j})$ has an $r(\hat{\mathcal{L}}) \times r(\hat{\mathcal{L}})$ minor with non-zero determinant.
\end{definition}
The generic pointwise rank is trivially bounded above by the ambient dimension. For example, the algebra
$\hat{\mathcal{L}} = \{ \hat{f}(x_{2}) \frac{\partial}{\partial x_{1}} / \hat{f} \in \hat{\mathcal{E}}_{1} \}$ is a Lie subalgebra of $\hat{\mathcal{X}}_{2}$
of infinite dimension and of rank $1$. Note that $\hat{\mathcal{L}}$ is commutative and that the Borel theorem, applied to the $\hat{f}$, gives a
$\mathcal{C}^{\infty}$ realization of $\hat{\mathcal{L}}$. The algebra $\mathcal{L} = \{ f(x_{2}) \frac{\partial}{\partial x_{1}} / f \in \mathcal{E}_{1} \}$ is commutative and
projects onto $\hat{\mathcal{L}}$ ($T_{\underline{0}} \mathcal{L} = \hat{\mathcal{L}}$), but $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is
not an isomorphism, since if $P \in \mathcal{E}_{1}$ is a flat function then $T_{\underline{0}} (P(x_{2})\frac{\partial}{\partial x_{1}}) = 0$. Instead,
choose an $\mathbb{R}$-basis $\{ \hat{a}_{i} , i \in I \}$ of the $\mathbb{R}$-vector space $\hat{\mathcal{E}}_{1}$ and let $a_{i}\in \mathcal{E}_{1}$, $i \in I$,
be Borel realizations of the $\hat{a}_{i}$. Then the $\mathbb{R}$-vector space $\mathcal{E}^{'}$ spanned by the $a_{i}$ produces a realization
$\mathcal{L}^{'} = \{ f(x_{2}) \frac{\partial}{\partial x_{1}} / f \in \mathcal{E}^{'} \}$ for which $T_{\underline{0}} : \mathcal{L}^{'} \rightarrow \hat{\mathcal{L}}$ is
an isomorphism.
\subsection{Description of the finite-dimensional rank-1 subalgebras $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{n}$}
Let $\hat{X}$ be a formal vector field and $E \subset \hat{\mathcal{E}}_{n}$ a vector subspace with the following property:
$$\forall (\hat{f} , \hat{g}) \in E \times E , \quad \hat{f}\, \hat{X} (\hat{g}) - \hat{g}\, \hat{X} (\hat{f}) \in E \eqno{(\star)}$$
where $\hat{f} \mapsto \hat{X} ( \hat{f} )$ is the derivation associated with $\hat{X}$. Then the algebra
$\hat{\mathcal{L}} = E . \hat{X} = \{ \hat{f} \hat{X} / \hat{f} \in E \}$ has rank $1$ and dimension equal to that of $E$. In fact every Lie subalgebra $\hat{\mathcal{L}}$ of rank $1$ is obtained in this way: if $\hat{Y} = \sum \hat{a}_{i } \frac{\partial}{\partial x_{i}}$, $\hat{a}_{i } \in \hat{\mathcal{E}}_{n}$, is a non-zero element of $\hat{\mathcal{L}}$, then the field $\hat{X} = \frac{\hat{Y}}{\gcd ( \hat{a}_{1} , \ldots , \hat{a}_{n} )}$ does the job. Note that the field $\hat{X}$ itself may not belong to $\hat{\mathcal{L}}$.
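For completeness, the bracket computation behind condition $(\star)$ (an elementary check, not written out in the text): for $\hat{f}, \hat{g} \in E$,
$$[\hat{f}\hat{X},\, \hat{g}\hat{X}] = \big(\hat{f}\, \hat{X}(\hat{g}) - \hat{g}\, \hat{X}(\hat{f})\big)\, \hat{X},$$
so $(\star)$ is exactly the condition ensuring that $E.\hat{X}$ is closed under the Lie bracket.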
A rank-$1$ algebra $\hat{\mathcal{L}}$ will be called \textit{saturable} if there exists $\hat{X} = \sum \hat{a}_{i } \frac{\partial}{\partial x_{i}} \in \hat{\mathcal{L}}$
with $\gcd ( \hat{a}_{1} , \ldots , \hat{a}_{n} ) = 1$. In this case $\hat{\mathcal{L}} = \{ \hat{f} \hat{X} / \hat{f} \in E \}$ satisfies condition
$(\star)$. The Lie algebra $\langle \frac{\partial}{\partial x} , x \frac{\partial}{\partial x} , x^{2} \frac{\partial}{\partial x} \rangle$ is saturable of rank
$1$; here $\hat{X} = \frac{\partial}{\partial x}$. By contrast, the algebra $\langle x \frac{\partial}{\partial x} , x^{2} \frac{\partial}{\partial x} \rangle$ is not
saturable. Other examples of saturable algebras can be obtained as follows. Denote by $R_{n} = \sum x_{i} \frac{\partial}{\partial x_{i}} \in \hat{\mathcal{X}}_{n}$
the "radial" field and by $E_{n}^{d}$ the vector space of homogeneous polynomials of degree $d$ in the variables
$x_{1} , \ldots , x_{n}$. The Lie algebras $\mathcal{R}_{n}^{d} := E_{n}^{d} R_{n} = \{ f . R_{n} / f \in E_{n}^{d} \}$ are commutative of generic pointwise
rank $1$ (indeed, by Euler's identity $R_{n}(g) = d\,g$ for $g \in E_{n}^{d}$, so $[fR_{n}, gR_{n}] = (f R_{n}(g) - g R_{n}(f)) R_{n} = 0$). Among these algebras only the $\mathcal{R}_{n}^{0}$, $n \geq 2$, are saturable. One can build further subalgebras of $\hat{\mathcal{X}}_{n}$
from $\mathcal{R}_{n}^{d}$. For instance the algebras $\overline{\mathcal{R}}_{n}^{d}$, generated by the radial field and $\mathcal{R}_{n}^{d}$, are
solvable of rank $1$. The subalgebras of $\mathfrak{gl}(n, \mathbb{R}) \subset \mathrm{End}\,\mathbb{R}^{n}$, in particular $\mathfrak{sl} (n, \mathbb{R})$, can be viewed
as algebras of linear vector fields on $\mathbb{R}^{n}$.
Denote by $\mathfrak{sl} \mathcal{R}^{d}_{n}$ the $\mathbb{R}$-vector space of vector fields generated by $\mathfrak{sl} (n, \mathbb{R})$ and $\mathcal{R}^{d}_{n}$; it is in fact
a Lie subalgebra whose Levi-Malcev decomposition is $\mathfrak{sl} \mathcal{R}^{d}_{n} = \mathcal{R}^{d}_{n} \oplus \mathfrak{sl} (n, \mathbb{R})$. These algebras have
maximal rank $n$. Similarly the algebras $\mathfrak{sl} \overline{\mathcal{R}}^{d}_{n}$ generated by the radial field $R_{n}$ and $\mathfrak{sl} \mathcal{R}^{d}_{n}$ have a Levi-Malcev decomposition of the following type: $\mathfrak{sl} \overline{\mathcal{R}}^{d}_{n} = \overline{\mathcal{R}}^{d}_{n} \oplus \mathfrak{sl} (n, \mathbb{R})$. The following definition is natural:
\begin{definition} Let $\mathcal{L}^{d}$ be a Lie subalgebra of $\hat{\mathcal{X}}_{n}$ whose elements are polynomial vector fields of degree
at most $d$. We say that $\mathcal{L}^{d}$ is $d$-determining if every subalgebra $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{n}$ having
$\mathcal{L}^{d}$ as its $d$-jet and such that the $d$-jet map $J^{d} : \hat{\mathcal{L}} \rightarrow \mathcal{L}^{d}$ is a Lie algebra
isomorphism is conjugate to $\mathcal{L}^{d}$: there exists $\hat{\Phi}$ in $\widehat{\mathrm{Diff}}(\mathbb{R}^{n}_{0})$ such that
$\hat{\Phi}_{\ast} \hat{\mathcal{L}} = \mathcal{L}^{d}$.
\end{definition}
\begin{theorem} The algebras $\overline{\mathcal{R}}^{d}_{n}$ and $\mathfrak{sl} \overline{\mathcal{R}}^{d}_{n}$ are $(d + 1)$-determining.
\end{theorem}
\begin{proof}
Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{n}$ be a subalgebra such that $J^{d + 1} : \hat{\mathcal{L}} \rightarrow \mathcal{L}^{d + 1}$ is an
isomorphism, with $\mathcal{L}^{d + 1}$ equal to $\overline{\mathcal{R}}^{d}_{n}$ or $\mathfrak{sl} \overline{\mathcal{R}}^{d}_{n}$. There is therefore an element $\hat{R}$ of
$\hat{\mathcal{L}}$ whose jet of order $d + 1$ is precisely $R_{n}$. The Poincar\'e linearization theorem produces a formal diffeomorphism $\hat{\Phi}$
satisfying $J^{ d + 1} \hat{\Phi} = \mathrm{Id}_{\mathbb{R}^{n}}$ and $\hat{\Phi}_{\ast} \hat{R} = R_{n}$, where $\mathrm{Id}_{\mathbb{R}^{n}}$ denotes the identity of $\mathbb{R}^{n}$.
Let $\hat{X}$ be an element of $\hat{\mathcal{L}}$ such that $J^{d + 1} \hat{X} = X_{d}$ belongs to $\mathcal{R}_{n}^{d}$ or to
$\mathfrak{sl} \overline{\mathcal{R}}^{d}_{n}$. Since $[ R_{n} , X_{d} ] = d\, X_{d}$ we have $[ \hat{R}, \hat{X} ] = d\, \hat{X}$, because $J^{d + 1}$ is an isomorphism.
It follows that:
$$[\hat{\Phi}_{\ast} \hat{R} , \hat{\Phi}_{\ast} \hat{X} ] = d\, \hat{\Phi}_{\ast} \hat{X} . $$
An elementary computation shows that the formal field $\hat{\Phi}_{\ast}\hat{X}$ is homogeneous of degree $d + 1$; since $J^{d + 1} \hat{\Phi} = \mathrm{Id}_{\mathbb{R}^{n}}$,
$\hat{\Phi}_{\ast} \hat{X} = X_{d}$. Hence $\hat{\Phi}_{\ast} \hat{\mathcal{L}} = \mathcal{L}^{d + 1}$.
\end{proof}
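In more detail (a standard verification, included here only for the reader's convenience): for a field $X = \sum_{i} a_{i}\frac{\partial}{\partial x_{i}}$ whose components $a_{i}$ are homogeneous of degree $m$,
$$[R_{n}, X] = \sum_{i}\big(R_{n}(a_{i}) - X(x_{i})\big)\frac{\partial}{\partial x_{i}} = \sum_{i}(m\,a_{i} - a_{i})\frac{\partial}{\partial x_{i}} = (m-1)\,X,$$
which, for $m = d+1$, gives the relation $[R_{n}, X_{d}] = d\,X_{d}$ used in the proof.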
From the preceding theorem we deduce the existence of $\hat{\Phi}$ such that $\hat{\Phi}_{\ast} \hat{\mathcal{L}} = \mathcal{L}^{d + 1}$. Let $\Phi$ be a
Borel realization of $\hat{\Phi}$ and set $\mathcal{L} = (\Phi^{-1})_{\ast} \mathcal{L}^{d + 1}$. Then $\mathcal{L}$ is a realization of $\hat{\mathcal{L}}$
such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism; whence the following:
\begin{corollary} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{n}$ be a Lie subalgebra. Assume that $J^{d + 1} \hat{\mathcal{L}} = \mathcal{L}^{d + 1}$ is
a Lie subalgebra and that $J^{d + 1} : \hat{\mathcal{L}} \rightarrow \mathcal{L}^{d + 1}$ is an isomorphism. If $\mathcal{L}^{d + 1}$ equals
$\overline{\mathcal{R}}^{d}_{n}$ or $\mathfrak{sl} \overline{\mathcal{R}}^{d}_{n}$, then $\hat{\mathcal{L}}$ admits a $\mathcal{C}^{\infty}$ realization $\mathcal{L}$ such
that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.
\end{corollary}
Consider now a saturable Lie algebra of rank $1$ and finite dimension:
$\hat{\mathcal{L}} = E. \hat{X} = \langle \hat{X} , \hat{f}_{1}\hat{X}, \ldots , \hat{f}_{p}\hat{X} \rangle$, where $\hat{f}_{k} \in \hat{\mathcal{E}}_{n}$.
Here $\dim \hat{\mathcal{L}} = \dim E = p + 1$, which we assume to be at least $2$. The fact that $\hat{\mathcal{L}}$ is a Lie algebra implies that the vector
subspace $E = \langle 1 , \hat{f}_{1}, \ldots , \hat{f}_{p} \rangle$ is invariant under the action of the derivation $\hat{X}$; this leads to the differential
system:
\begin{equation} \mathcal{D} (\hat{X})
\left\{
\begin{array}{ccc}
\hat{X} (\hat{f}_{1}) & = & \sum_{j = 0}^{p} \lambda_{1}^{j} \hat{f}_{j}\\
\vdots & & \\
\hat{X} (\hat{f}_{p}) & = & \sum_{j = 0}^{p} \lambda_{p}^{j} \hat{f}_{j}
\end{array}
\right.
\end{equation}
where we have set $\hat{f}_{0} = 1$. Several cases are distinguished according to the nature of the first non-zero jet of the field $\hat{X}$.
\subsubsection{Non-singular saturable algebras of rank 1}
In this case the formal field $\hat{X}$ is non-singular. In particular, there exists $\hat{\Phi}$ in $\widehat{\mathrm{Diff}}(\mathbb{R}^{n}_{0})$ such that
$\hat{\Phi}_{\ast} \hat{X} = \frac{\partial}{\partial x_{1}}$ and $\hat{\Phi}_{\ast} \hat{\mathcal{L}} = E \circ \hat{\Phi}^{- 1} .
\frac{\partial}{\partial x_{1}} = \langle \frac{\partial}{\partial x_{1}} , \hat{g}_{1} \frac{\partial}{\partial x_{1}}, \ldots , \hat{g}_{p} \frac{\partial}{\partial x_{1}} \rangle$
with $\hat{g}_{i} \circ \hat{\Phi} = \hat{f}_{i}$. The differential system $\mathcal{D} (\hat{\Phi}_{\ast} \hat{X}) = \mathcal{D}(\frac{\partial}{\partial x_{1}})$
now becomes a system of ordinary differential equations:
\begin{equation}
\mathcal{D}(\frac{\partial}{\partial x_{1}})
\left\{
\begin{array}{ccc}
\frac{\partial \hat{g}_{1}}{\partial x_{1}} & = & \sum_{j = 0}^{p} \lambda_{1}^{j} \hat{g}_{j}\\
\vdots & & \\
\frac{\partial \hat{g}_{p}}{\partial x_{1}} & = & \sum_{j = 0}^{p} \lambda_{p}^{j} \hat{g}_{j}
\end{array}
\right.
\end{equation}
Consider it as a differential system in the single variable $x_{1}$. It admits a fundamental system of solutions $s_{1}$, $\ldots$ , $s_{p}$ ($s_{k} = ( s_{k}^{l} )$, $l = 1 , \ldots , p$), the $s_{k}^{l}$ being germs of analytic (indeed global) functions at the origin of $\mathbb{R}$. Clearly the formal solutions $\hat{g} = ( \hat{g}_{1} , \ldots , \hat{g}_{p} )$ can be written in the form
$$\hat{g} ( x_{1} , \ldots , x_{n} ) = \sum \hat{h}^{j}_{k} ( x_{2} , \ldots , x_{n} ) . s_{j} ( x_{1} )$$
where the $\hat{h}^{j}_{k} \in \hat{\mathcal{E}}_{n - 1}$. Let $h^{j}_{k} \in \mathcal{E}_{n - 1}$ be Borel realizations of the $\hat{h}^{j}_{k}$; the components $g_{1}$, $\ldots$ , $g_{p}$ of the vector $g ( x_{1} , \ldots , x_{n} ) = \sum h^{j}_{k} ( x_{2} , \ldots , x_{n} ) . s_{j} ( x_{1} )$ are $\mathcal{C}^{\infty}$ and solve the system $\mathcal{D}(\frac{\partial}{\partial x_{1}})$. An elementary verification shows that
$\mathcal{L}^{'} = \langle \frac{\partial}{\partial x_{1}} , g_{1} \frac{\partial}{\partial x_{1}}, \ldots , g_{p} \frac{\partial}{\partial x_{1}} \rangle$ is a Lie algebra whose infinite Taylor jet is $\hat{\Phi}_{\ast} \hat{\mathcal{L}}$. Taking a $\mathcal{C}^{\infty}$ realization of $\hat{\Phi}$,
one trivially obtains a Lie algebra $\mathcal{L} = \Phi^{- 1}_{\ast} \mathcal{L}^{'}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism. Whence the following:
\begin{proposition} Let $\hat{\mathcal{L}} = E . \hat{X} \subset \hat{\mathcal{X}}_{n}$ be a saturable algebra of rank $1$. If $\hat{X}$ is non-singular, then there exists a subalgebra $\mathcal{L} \subset \mathcal{X}_{n}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.
\end{proposition}
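As a minimal illustration (a toy example, not taken from the text): for $\hat{X} = \frac{\partial}{\partial x_{1}}$ and $E = \langle 1, \hat{g} \rangle$ with $\frac{\partial \hat{g}}{\partial x_{1}} = \hat{g}$, the formal solutions are
$$\hat{g}(x_{1}, \ldots, x_{n}) = \hat{h}(x_{2}, \ldots, x_{n})\, e^{x_{1}},$$
and a Borel realization $h$ of $\hat{h}$ yields the $\mathcal{C}^{\infty}$ algebra $\langle \frac{\partial}{\partial x_{1}},\, h\, e^{x_{1}} \frac{\partial}{\partial x_{1}} \rangle$ realizing $E.\hat{X}$.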
\subsubsection{Saturable algebras of rank 1 with zero 1-jet}
Here $J^{1} \hat{X}$ is identically zero. In this case, for every element $\hat{Z}$ of $\hat{\mathcal{L}}$ the $1$-jet $J^{1} \hat{Z}$ of $\hat{Z}$ vanishes.
This implies that the adjoint map $ad_{\hat{Z}} : \hat{\mathcal{L}} \rightarrow \hat{\mathcal{L}}$ is nilpotent; indeed, the eigenvalues of $ad_{\hat{Z}}$ are necessarily zero. Consequently the Lie algebra $\hat{\mathcal{L}}$ is nilpotent and therefore has a non-trivial center $\hat{\mathcal{C}} \subset \hat{\mathcal{L}} = \langle \hat{X} , \hat{f}_{1} \hat{X} , \ldots , \hat{f}_{p} \hat{X} \rangle$. If $\hat{X}$ lies in the center, then $[\hat{X} , \hat{f}_{i} \hat{X} ] = 0$ for every $i$, hence $\hat{X} (\hat{f}_{i}) = 0$ for $i = 1 , \ldots, p$. It follows that $\hat{\mathcal{L}}$ is abelian and that $\hat{X}$ has a non-constant first integral whenever $\dim \hat{\mathcal{L}} \geq 2$. If $\hat{X}$ does not lie in $\hat{\mathcal{C}}$, then there exists a non-constant element $\hat{f}$ of $\hat{\mathcal{E}}_{n}$ such that $\hat{f} \hat{X}$ belongs to $\hat{\mathcal{C}}$. Since $\hat{f} \hat{X}$ and $\hat{X}$ commute, $\hat{f}$ is a non-constant first integral; then from $0 = [\hat{f} \hat{X} , \hat{f}_{i} \hat{X}] = \hat{f}\,\hat{X} (\hat{f}_{i})\,\hat{X}$ one deduces that each $\hat{f}_{i}$ is a first integral of $\hat{X}$. This leads to $\hat{X}$ belonging to $\hat{\mathcal{C}}$, contradicting the hypothesis. Hence $\hat{\mathcal{L}}$ is abelian and we have the following:
\begin{proposition} Let $\hat{\mathcal{L}} = E . \hat{X} \subset \hat{\mathcal{X}}_{n}$ be a saturable algebra of rank $1$ and finite dimension. If $J^{1} \hat{X} = 0$ and $\dim \hat{\mathcal{L}} > 1$, then $\hat{\mathcal{L}}$ is abelian and the field $\hat{X}$ has a non-constant first integral.
\end{proposition}
\begin{remark} If the field $\hat{X}$ has a non-constant first integral $\hat{f}_{0}$, one may assume in the saturable case that $\hat{f}_{0}(0) = 0$. Every element $\hat{l}$ of $\hat{\mathcal{E}}_{1}$ then produces a first integral $\hat{l}(\hat{f}_{0})$. In particular the vector space $\hat{\mathcal{L}} = \{ \hat{l}(\hat{f}_{0}) \hat{X} / \hat{l} \in \hat{\mathcal{E}}_{1} \}$ is an infinite-dimensional abelian Lie algebra.
\end{remark}
We shall now treat the specific case, in space dimension $2$, of saturable algebras. The advantage of dimension $2$ comes from a result of J.-F. Mattei and R. Moussu \cite{Mattei} which states that if $\hat{X}$ is a non-zero field of $\hat{\mathcal{X}}_{2}$ having a non-constant formal first integral, then the ring of its formal first integrals $\hat{\mathcal{A}} ( \hat{X} ) \subset \hat{\mathcal{E}}_{2}$ is generated by a single element. More precisely, there exists $\hat{f}_{0}$
in $\hat{\mathcal{E}}_{2}$, called minimal, defined up to left composition by elements of $\widehat{\mathrm{Diff}}_{1} (\mathbb{R}_{0})$, such that $$\hat{\mathcal{A}} ( \hat{X} ) = \{ \hat{l} ( \hat{f}_{0} ) / \hat{l} \in \hat{\mathcal{E}}_{1} \}.$$
This statement is established in \cite{Mattei} in the formal and holomorphic complex settings, but it adapts rather easily to the real case. We propose to establish the following:
\begin{proposition} Let $\hat{\mathcal{L}} = E . \hat{X} \subset \hat{\mathcal{X}}_{2}$ be a saturable subalgebra of rank $1$ with $\dim \hat{\mathcal{L}} > 1$. Assume that $\hat{\mathcal{L}}$ is abelian and that the field $\hat{X}$ has a non-constant first integral. Then there exists a subalgebra $\mathcal{L} \subset \mathcal{X}_{2}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.
\end{proposition}
\begin{proof}
Let $\hat{f}_{0}$ be a minimal first integral of $\hat{X}$. The complex decomposition of $\hat{f}_{0}$ into irreducible factors is of the type
$$\hat{f}_{0} = \hat{f}_{1}^{n_{1}} \ldots \hat{f}_{k}^{n_{k}} (\hat{f}_{k + 1} \overline{\hat{f}}_{k + 1} )^{n_{k + 1}} \ldots (\hat{f}_{p} \overline{\hat{f}}_{p} )^{n_{p}}$$
with $n_{i} \in \mathbb{N}$, $\hat{f}_{i} \in \hat{\mathcal{E}}_{2}$ for $i = 1 , \ldots , k$ and $\hat{f}_{i} \in \hat{\mathcal{O}}_{2} \setminus \hat{\mathcal{E}}_{2}$ for $i = k + 1 , \ldots , p$, where $\hat{\mathcal{O}}_{n}$ denotes the ring of formal series in $n$ complex variables and $\overline{\hat{f}}_{i}$ the conjugate of $\hat{f}_{i}$. Let $\hat{\omega}_{0}$ be the 1-form defined by
$$\hat{\omega}_{0} = \hat{f}_{1}^{n_{1}} \ldots \hat{f}_{k}^{n_{k}} \hat{f}_{k + 1} \overline{\hat{f}}_{k + 1} \ldots \hat{f}_{p} \overline{\hat{f}}_{p} \Big( \sum_{i = 1}^{k} n_{i}\frac{d \hat{f}_{i}}{\hat{f}_{i}} + \sum_{i = k + 1}^{p} n_{i} \frac{d (\hat{f}_{i}\overline{\hat{f}}_{i})}{\hat{f}_{i}\overline{\hat{f}}_{i}} \Big).$$
This 1-form, a priori complex, is clearly real and has an algebraically isolated singularity. Let $\hat{X}_{0}$ be a vector field dual to $\hat{\omega}_{0}$: $\hat{\omega}_{0} = i_{\hat{X}_{0}} d x_{1} \wedge d x_{2}$. The Lie algebra $\hat{\mathcal{L}} = E. \hat{X}$ is of the type $\hat{\mathcal{L}} = \langle \hat{X} , \hat{l}_{1}(\hat{f}_{0}) \hat{X} , \ldots , \hat{l}_{p}(\hat{f}_{0}) \hat{X} \rangle$. Consider $\mathcal{C}^{\infty}$ realizations of $\hat{f}_{1} , \ldots , \hat{f}_{k} , \hat{g}_{k + 1} = \hat{f}_{k + 1} \overline{\hat{f}}_{k + 1} , \ldots , \hat{g}_{p} = \hat{f}_{p} \overline{\hat{f}}_{p}$, denoted respectively $f_{1} , \ldots , f_{k} , g_{k + 1} , \ldots , g_{p}$. These realizations induce a $\mathcal{C}^{\infty}$ realization of $\hat{X}_{0}$, denoted $X_{0}$, admitting the first integral \\
$f_{0} = f_{1}^{n_{1}} \ldots f_{k}^{n_{k}} g_{k + 1}^{n_{k + 1}} \ldots g_{p}^{n_{p}}$. Since $\hat{\omega}_{0}$ has an isolated singularity, there exists $\hat{h} \in \hat{\mathcal{E}}_{2}$ such that $\hat{X} = \hat{h} \hat{X}_{0}$. The choice of a realization $h$ of $\hat{h}$ produces one, $X = h X_{0}$, for $\hat{X}$. Consider realizations $l_{1} , \ldots , l_{p}$ of $\hat{l}_{1} , \ldots , \hat{l}_{p}$ respectively. The Lie algebra $\mathcal{L} = \langle X , l_{1}(f_{0}) X , \ldots , l_{p}(f_{0}) X \rangle$ is an abelian realization of $\hat{\mathcal{L}}$ such that $T_{\underline{0}}: \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.
\end{proof}
\subsubsection{Saturable algebras of rank 1 with non-zero 1-jet}
We write $\hat{\mathcal{L}} = \langle \hat{X} , \hat{f}_{1} \hat{X} , \ldots , \hat{f}_{p} \hat{X} \rangle$ with $\hat{X}(0) = 0$ and $J^{1} \hat{X} \neq 0$. Here again the space $E = \langle 1 , \hat{f}_{1} , \ldots , \hat{f}_{p} \rangle$ is invariant under the derivation $\hat{X}$. We work in dimension 2 and distinguish the Jordan types of the 1-jet $X_{1}$ of $\hat{X}$, up to linear conjugation:
\begin{enumerate}
\item $\lambda_{1} x_{1}\frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2}\frac{\partial}{\partial x_{2}}$ with $(\lambda_{1} , \lambda_{2}) \neq (0 , 0)$ (real diagonal case).
\item $(\alpha x_{1} - \beta x_{2})\frac{\partial}{\partial x_{1}} + (\beta x_{1} + \alpha x_{2}) \frac{\partial}{\partial x_{2}}$ with $(\alpha , \beta) \neq (0 , 0)$ (complex diagonal case).
\item $(\lambda x_{1} + x_{2})\frac{\partial}{\partial x_{1}} + \lambda x_{2} \frac{\partial}{\partial x_{2}}$ (non-semisimple case).
\end{enumerate}
The field $\hat{X}$ is formally conjugate to its linear part in the following cases:
\begin{itemize}
\item Case 1 without resonance, i.e. $i_{1} \lambda_{1} + i_{2} \lambda_{2} \neq \lambda_{j}$ for every pair $(i_{1}, i_{2})$ of integers with $i_{1} + i_{2} \geq 2$ and $j \in \{ 1 , 2 \}$.
\item Case 2 with $\alpha \neq 0$. Note that here the eigenvalues $\alpha \pm i \beta$ are complex and without resonances when $\beta \neq 0$.
\item Case 3 with $\lambda \neq 0$.
\end{itemize}
{\bf 1.2.3.1 $\mathcal{C}^{\infty}$ realization in the non-resonant semisimple cases}\\
These correspond to the real diagonal case without resonance and to the complex diagonal case with $\alpha \neq 0$. In the first case we may assume, up to formal conjugation, that $\hat{X} = X_{1} = J^{1} \hat{X}$. The field $X_{1}$ being semisimple, it is semisimple as a derivation and so is its restriction to $E$. We may therefore assume that the $\hat{f}_{j}$ form a basis of eigenvectors of $\hat{X}$: $\hat{X} \hat{f}_{j} = \mu_{j} \hat{f}_{j}$, $\mu_{j} \in \mathbb{R}$.
If $x_{1}^{i_{1}} x_{2}^{i_{2}}$ is a monomial appearing with a non-zero coefficient in $\hat{f}_{j}$, then:
\begin{equation}
i_{1} \lambda_{1} + i_{2} \lambda_{2} = \mu_{j}.
\end{equation}
\begin{lemma} If $\lambda_{1}$ and $\lambda_{2}$ are non-resonant, the set $\Lambda$ of pairs $(i_{1}, i_{2}) \in \mathbb{N}^{2}$ satisfying $(3)$ is finite.
\end{lemma}
\begin{proof}
Fix $(k_{1} , k_{2}) \in \Lambda$ realizing the minimum for the lexicographic order. If $(i_{1} , i_{2})$ belongs to $\Lambda$ and $(i_{1} , i_{2}) \neq (k_{1} , k_{2})$ we have:
\begin{equation}
(i_{1} - k_{1}) \lambda_{1} + (i_{2} - k_{2}) \lambda_{2} = 0.
\end{equation}
Since $(\lambda_{1} , \lambda_{2})$ is a non-resonant pair, $i_{1} \neq k_{1}$ and $i_{2} \neq k_{2}$. Hence $i_{1} > k_{1}$ and $i_{2} \neq k_{2}$. If $i_{2}$ were strictly greater than $k_{2}$, then $(i_{1} - k_{1})$ and $(i_{2} - k_{2})$ would both be positive, which would create a resonance. Therefore $i_{2}$ is strictly smaller than $k_{2}$ and can only take finitely many values. Since $i_{1}$ is determined by $i_{2}$, the lemma follows.
\end{proof}
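For instance (a concrete illustration, not taken from the text), for $\lambda_{1} = 1$ and $\lambda_{2} = -\pi$ the pair is non-resonant, and the equation
$$i_{1} - \pi\, i_{2} = \mu_{j}$$
has at most one solution $(i_{1}, i_{2}) \in \mathbb{N}^{2}$, since $i_{1} - \pi i_{2} = k_{1} - \pi k_{2}$ forces $i_{2} = k_{2}$ and $i_{1} = k_{1}$ by the irrationality of $\pi$; each eigenfunction $\hat{f}_{j}$ is then a single monomial.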
A consequence of the lemma is that each $\hat{f}_{j}$ is a polynomial. \\
In the linear complex diagonal case with $\alpha \neq 0$ we may assume, up to formal conjugation, that $\hat{X}$ equals its 1-jet, and one obtains a result
analogous to the real case. Thus, in the non-resonant real diagonal case and in the complex diagonal case with $\alpha \neq 0$, the algebra $\hat{\mathcal{L}}$ is conjugate to an algebra $\mathcal{L}_{pol}$ of polynomial vector fields:
$$\hat{\mathcal{L}} = \hat{\phi}_{\ast} \mathcal{L}_{pol} , \ \hat{\phi} \in \widehat{\mathrm{Diff}}(\mathbb{R}^{2}_{0}) \ {\rm and} \ \mathcal{L}_{pol} \subset \mathcal{X}_{2}. $$
If $\phi \in \mathrm{Diff}(\mathbb{R}^{2}_{0})$ is a $\mathcal{C}^{\infty}$ realization of $\hat{\phi}$, then $\mathcal{L} = \phi_{\ast} \mathcal{L}_{pol}$ is a Borel realization of $\hat{\mathcal{L}}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.\\
{\bf 1.2.3.2 The resonant case}
We first study the resonances of Poincar\'e-Dulac type, where the 1-jet of $\hat{X}$ can be written $X_{1} = \lambda (x_{1} \frac{\partial}{\partial x_{1}} + n x_{2} \frac{\partial}{\partial x_{2}})$, $\lambda \neq 0$ and $n \in \mathbb{N} \setminus \{0 , 1 \}$. Here there is a single resonance: $n \lambda_{1} = \lambda_{2}$.
By the Poincar\'e-Dulac theorem \cite{Cerveau2}, up to formal conjugation the field is $\hat{X} = \lambda [ x_{1} \frac{\partial}{\partial x_{1}} + ( n x_{2} + \mu x_{1}^{n}) \frac{\partial}{\partial x_{2}} ]$ with $\mu = 0$ or $1$. When $\mu = 0$ we are in the real diagonal case, which is treated as in 1.2.3.1: $\hat{\mathcal{L}}$ is formally conjugate to a polynomial algebra, and here again $\hat{\mathcal{L}}$ can be realized in a $\mathcal{C}^{\infty}$ way.\\
Otherwise one has a ramification of the case $n = 1$, $\mu = 1$, which will be dealt with at the end of the paragraph.\\
We then have the situation of pure resonances, which is treated in two sub-cases:
\begin{itemize}
\item The hyperbolic case, where the 1-jet is $X_{1} = \lambda (q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}})$, $\lambda \neq 0$, with $p$ and $q$ positive integers and $\langle p , q \rangle = 1$, or $p = 0$ and $q \neq 0$, or the saddle-node case $p = 0$ and $q = 1$.
\item The elliptic case: $X_{1} = \beta (- x_{2} \frac{\partial}{\partial x_{1}} + x_{1} \frac{\partial}{\partial x_{2}})$, where $\beta \neq 0$.
\end{itemize}
Note that the field $\lambda (q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}})$ has the monomial first integral $x_{1}^{p} x_{2}^{q}$, while the field $\beta ( - x_{2} \frac{\partial}{\partial x_{1}} + x_{1} \frac{\partial}{\partial x_{2}} )$ has the first integral $x_{1}^{2} + x_{2}^{2}$. We shall detail the first case; the second presents a certain similarity. The theory of normal forms of J. Martinet \cite{Martinet} (or Jordanization) allows one to write $\hat{X} = \hat{S} + \hat{N}$ with $[\hat{S} , \hat{N} ] = 0$, where $\hat{S}$ is formally conjugate to its linear part, i.e. $\hat{\Phi}_{\ast} \hat{S} = \lambda (q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}})$, and the field $\hat{N}$ is nilpotent, which here means that $J^{1} \hat{N} = 0$. Note that in our context we may assume $\lambda = 1$.
Since $E = \langle 1, \hat{f}_{1} , \ldots , \hat{f}_{p} \rangle$ is invariant under the action of $\hat{X}$, it is invariant under the action of its semisimple part $\hat{S}$; the restriction of $\hat{S}$ to $E$ remains semisimple, that is, diagonalizable. Up to the action of $\hat{\Phi}$ we may assume that $\hat{S} = X_{1}$. Hence:
\begin{equation}
(q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}}) ( \hat{f}_{j} ) = \mu_{j} \hat{f}_{j}.
\end{equation}
If $\mu_{j} = 0$, then $\hat{f}_{j}$ is a first integral of $\hat{S}$ and can be written $\hat{f}_{j} = \hat{l}_{j}(x_{1}^{p} x_{2}^{q})$ with $\hat{l}_{j} \in \hat{\mathcal{E}}_{1}$. If $\mu_{j} \neq 0$, then relation $(5)$ implies that $\mu_{j}$ is of the form $\mu_{j} = q r_{j} - p s_{j}$. Let $F_{j} = \{ (k , l) \in \mathbb{N}^{2} \setminus (0 , 0) / kq - p l = \mu_{j} \}$ and let $(r_{j} , s_{j})$ be the smallest element of $F_{j}$ for the lexicographic order. We have the following elementary technical lemma, analogous to Lemma 10:
\begin{lemma} If $(k , l)$ belongs to $F_{j}$ then $(k , l) = (r_{j} , s_{j}) + s (p , q)$, where $s \in \mathbb{N}$.
\end{lemma}
\begin{proof}
If $(p , q)$ is of type $(1 , 0)$, then $(r_{j}, s_{j})$ is of type $(0 , s_{j})$ and $(k , l) = (0 , s_{j}) + k (1 , 0)$. Assume from now on that $p q \neq 0$. If $(k , l ) \neq (r_{j}, s_{j})$ we have $(k - r_{j}) q - (l - s_{j}) p = 0$ and hence $k - r_{j} \neq 0$. Consequently, since $(r_{j}, s_{j})$ is minimal for the lexicographic order and $p q \neq 0$, $k - r_{j} > 0$ and necessarily $l - s_{j} > 0$. The fact that $\langle p , q \rangle = 1$ implies
the existence of $s$, whence the lemma.
\end{proof}
It follows from the preceding lemma that one can write $\hat{f}_{j} = x_{1}^{r_{j}} x_{2}^{s_{j}} \hat{\varphi}_{j} (x_{1}^{p} x_{2}^{q})$ where $\hat{\varphi}_{j} \in \hat{\mathcal{E}}_{1}$ and $r_{j}$, $s_{j}$ are integers as above. The field $\hat{N}$, commuting with $\hat{S} = \lambda (q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}})$, has the following form:
$$\hat{N} = x_{1} \hat{\alpha} (x_{1}^{p} x_{2}^{q}) \frac{\partial}{\partial x_{1}} + x_{2} \hat{\beta} (x_{1}^{p} x_{2}^{q}) \frac{\partial}{\partial x_{2}}$$
where $\hat{\alpha}$ and $\hat{\beta}$ belong to $\hat{\mathcal{E}}_{1}$.\\
In fact, noting that the transformations $(x_{1} \hat{A} (x_{1}^{p} x_{2}^{q}) , x_{2} \hat{B} (x_{1}^{p} x_{2}^{q}))$, $\hat{A} , \hat{B} \in \hat{\mathcal{E}}_{1}$, leave $\hat{S}$ invariant, we may assume that $\hat{N}$ is of the form
$$\hat{N} = \hat{a} (x_{1}^{p} x_{2}^{q}) (\lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}})$$
where $\lambda_{1} , \lambda_{2} \in \mathbb{R}$ and $\hat{a} \in \hat{\mathcal{E}}_{1}$. Since the field $\hat{X} = \hat{S} + \hat{N}$, acting on $E$, has at least one eigenvector, say $\hat{f}_{1}$, we have $$\hat{X} ( x_{1}^{r_{1}} x_{2}^{s_{1}} \hat{\varphi}_{1} ( x_{1}^{p} x_{2}^{q})) = ( q r_{1} - p s_{1} ) x_{1}^{r_{1}} x_{2}^{s_{1}} \hat{\varphi}_{1} ( x_{1}^{p} x_{2}^{q}).$$
Since $\hat{a} \neq 0$, an elementary computation yields:
\begin{equation}
(r_{1}\lambda_{1} + s_{1} \lambda_{2}) \hat{\varphi}_{1} ( t ) + (p\lambda_{1} + q \lambda_{2} ) t \hat{\varphi}_{1}^{'} ( t ) = 0.
\end{equation}
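In more detail (a sketch of that elementary computation, with $u = x_{1}^{p}x_{2}^{q}$ and $F = x_{1}^{r_{1}}x_{2}^{s_{1}}\hat{\varphi}_{1}(u)$): one checks that
$$\hat{S}(F) = (qr_{1} - ps_{1})\, F, \qquad \hat{N}(F) = \hat{a}(u)\,\big[(r_{1}\lambda_{1} + s_{1}\lambda_{2})\,\hat{\varphi}_{1}(u) + (p\lambda_{1} + q\lambda_{2})\, u\, \hat{\varphi}_{1}'(u)\big]\, x_{1}^{r_{1}}x_{2}^{s_{1}},$$
so the eigenvector relation forces the bracketed factor to vanish, which is exactly $(6)$.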
Note the possibility of the following special cases:
\begin{itemize}
\item $(\lambda_{1} , \lambda_{2}) = (0 ,0)$; in that case $\hat{X} = \hat{S}$ and $(6)$ gives no information on $\hat{\varphi}_{1}$.
\item $(\lambda_{1} , \lambda_{2}) \neq (0 ,0)$ and $p\lambda_{1} + q \lambda_{2} = 0$; since $\hat{\varphi}_{1}$ is not identically zero, $r_{1} q - s_{1} p = 0$ and $\hat{f}_{1} = \hat{\psi}( x_{1}^{p} x_{2}^{q})$ is a first integral of $\hat{X}$, which in this case can be written $\hat{X} = \hat{b} ( x_{1}^{p} x_{2}^{q}) ( q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}})$.
\end{itemize}
In the generic case where $p \lambda_{1} + q \lambda_{2} \neq 0$ one finds that $\hat{\varphi}_{1}$ is a monomial: $\hat{\varphi}_{1} = \varepsilon t^{s}$ with $s = -\frac{r_{1}\lambda_{1} + s_{1} \lambda_{2}}{p\lambda_{1} + q \lambda_{2}} \in \mathbb{N}$; this also shows that $\hat{f}_{1}$ is monomial.
\begin{lemma} The restriction $\hat{X}_{ \mid E} : E \rightarrow E$ of the derivation $\hat{X}$ is semisimple.
\end{lemma}
\begin{proof}
If not, then since the eigenvalues of $\hat{X}$ are those of $\hat{S}$, and hence real under our hypotheses, the Jordanization of $\hat{X}_{ \mid E}$ is real and there exist, after re-indexing, $\hat{f}_{1}$ and $\hat{f}_{2}$ such that $\hat{X} (\hat{f}_{1} ) = \mu_{1} \hat{f}_{1}$ and $\hat{X} (\hat{f}_{2} ) = \mu_{1} \hat{f}_{2} + \hat{f}_{1}$. It follows that:
$$[ \hat{f}_{1} \hat{X} , \hat{f}_{2} \hat{X}] = ( \hat{f}_{1} \hat{X} (\hat{f}_{2} ) - \hat{f}_{2} \hat{X}( \hat{f}_{1} ) ) \hat{X} = \hat{f}^{2}_{1}\hat{X}.$$
We deduce that $\hat{f}_{1}^{2} \in E$; similarly we have the equations:
\begin{equation}
[ \hat{f}_{1} \hat{X} , \hat{f}_{1}^{2} \hat{X}] = \mu_{1} \hat{f}_{1}^{3} \hat{X}
\end{equation}
\begin{equation}[ \hat{f}_{1}^{2} \hat{X} , \hat{f}_{2} \hat{X}] = (\hat{f}_{1}^{2} \hat{X}(\hat{f}_{2} ) - \hat{f}_{2} \hat{X} (\hat{f}_{1}^{2})) \hat{X} = ( \hat{f}_{1}^{3} - \mu_{1} \hat{f}_{1}^{2} \hat{f}_{2} ) \hat{X}
\end{equation}
If $\mu_{1}$ is non-zero, we deduce from $(7)$ that $\hat{f}_{1}^{3}$ belongs to $E$; otherwise $\hat{f}_{1}^{3}$ also belongs to $E$, by $(8)$. Assume by induction that $\hat{f}_{1}^{k}$ belongs to $E$; then $[ \hat{f}_{1} \hat{X} , \hat{f}_{1}^{k} \hat{X}] = \mu_{1} (k - 1 )\hat{f}_{1}^{k + 1} \hat{X}$, and again $\hat{f}_{1}^{k + 1}$ belongs to $E$ when $\mu_{1}$ is non-zero. If $\mu_{1}$ is zero we have $[ \hat{f}_{1}^{k} \hat{X} , \hat{f}_{2} \hat{X}] = \hat{f}_{1}^{k + 1} \hat{X}$, and we deduce again that $\hat{f}_{1}^{k + 1}$ belongs to $E$. Since $\hat{f}_{1} ( 0 ) = 0$, the orders of the $\hat{f}_{1}^{k }$ increase, which implies that $E$ is infinite-dimensional. This contradicts $\dim E < + \infty$.
\end{proof}
It follows from Lemma 11 that $\hat{\mathcal{L}} = \{ \hat{X} , \hat{f}_{1} \hat{X} , \ldots , \hat{f}_{p} \hat{X} \}$, where the $\hat{f}_{i}$ are eigenvectors of $\hat{X}$: $\hat{X} ( \hat{f}_{i} ) = \mu_{i} \hat{f}_{i}$, $\mu_{i} \in \mathbb{R}$. The Lie algebra structure is given by $[ \hat{X} , \hat{f}_{i} \hat{X}] = \mu_{i} \hat{f}_{i} \hat{X}$ and $[ \hat{f}_{i} \hat{X} , \hat{f}_{j} \hat{X}] = ( \mu_{j}- \mu_{i} ) \hat{f}_{i} \hat{f}_{j} \hat{X}$.
Since $\hat{f}_{0} = 1$ belongs to $E$, one of the $\mu_{i}$, say $\mu_{0}$, is zero. If all the $\mu_{i}$ vanish, then all the $\hat{f}_{i}$ are first integrals of the field $\hat{X}$; since $E$ is assumed to be of dimension at least two, at least one of these first integrals is non-constant. The field $\hat{X}$ can then be written $\hat{X} = \hat{b} ( x_{1}^{p} x_{2}^{q}) (q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}})$ and $\hat{f}_{i}$ equals $\hat{l}_{i} ( x_{1}^{p} x_{2}^{q})$, where $\hat{b}$ and the $\hat{l}_{i}$ belong to $\hat{\mathcal{E}}_{1}$. Let $\hat{\Phi}$ be the normalizing diffeomorphism which linearizes $\hat{S}$. Choosing realizations $\Phi$, $b$ and $f_{i}$ of $\hat{\Phi}$, $\hat{b}$ and $\hat{f}_{i}$ respectively, one obtains a realization $\mathcal{L}$ of $\hat{\mathcal{L}}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.
Now assume that the $\mu_{i}$ are not all zero. We must consider two cases: the one where all the $\mu_{i}$ with $i \neq 0$ are non-zero, and the one where some $\mu_{i}$ with $i \neq 0$ vanishes. Let us place ourselves in the latter case. Say $\hat{X}\hat{f}_{1} = 0$ with $\hat{f}_{1}$ non-constant, and $\hat{X}\hat{f}_{2} = \mu_{2} \hat{f}_{2}$ with $\mu_{2} \neq 0$. We may, and do, assume that $\hat{f}_{1}(0) = 0$. We have
$[ \hat{f}_{1} \hat{X} , \hat{f}_{2} \hat{X}] = \mu_{2} \hat{f}_{1} \hat{f}_{2} \hat{X}$, so $\hat{f}_{1} \hat{f}_{2} \hat{X}$ belongs to $\hat{\mathcal{L}}$. Likewise $[ \hat{f}_{1} \hat{X} , \hat{f}_{1} \hat{f}_{2} \hat{X}] = \mu_{2} \hat{f}_{1}^{2} \hat{f}_{2} \hat{X}$, so $\hat{f}_{1}^{2} \hat{f}_{2} \hat{X}$ belongs to $\hat{\mathcal{L}}$. By induction one shows that all the $\hat{f}_{1}^{k} \hat{f}_{2} \hat{X}$, $k \in \mathbb{N}$, lie in $\hat{\mathcal{L}}$. Since $\hat{f}_{1} (0) = 0$ they form a free family, contradicting the finiteness of the dimension of $\hat{\mathcal{L}}$. Thus this case does not occur.\\
Consider now the situation where all the $\mu_{i}$, except $\mu_{0}$, are non-zero, i.e. the eigenspace $V(0)$ associated with $\mu_{0}$ has dimension $1$. The $\hat{f}_{i}$, being eigenvectors of $\hat{X}$, are eigenvectors of its semisimple part $\hat{S}$ and are annihilated by the nilpotent part: $\hat{S} \hat{f}_{i} = \mu_{i} \hat{f}_{i}$ and $\hat{N} \hat{f}_{i} = 0$.
Suppose there exist $\mu_{i} \neq \mu_{j}$ for two distinct indices $i$ and $j$, say $1$ and $2$. We then have
$[ \hat{f}_{1} \hat{X} , \hat{f}_{2} \hat{X}] = ( \mu_{2} - \mu_{1}) \hat{f}_{1} \hat{f}_{2} \hat{X}$, so $\hat{f}_{1} \hat{f}_{2} \hat{X}$ lies in $\hat{\mathcal{L}}$. One checks that $\hat{X} (\hat{f}_{1} \hat{f}_{2}) = ( \mu_{2} + \mu_{1}) \hat{f}_{1} \hat{f}_{2}$. Note that $\mu_{2} + \mu_{1}$ is non-zero since $V(0)$ has real dimension $1$. We also have $[ \hat{f}_{1} \hat{X} , \hat{f}_{1} \hat{f}_{2} \hat{X}] = \mu_{2} \hat{f}_{1}^{2} \hat{f}_{2} \hat{X}$. Since $\mu_{2}$ is non-zero, $\hat{f}_{1}^{2}\hat{f}_{2} \hat{X}$ belongs to $\hat{\mathcal{L}}$. Assume that for $n \leq k$ the fields $\hat{f}^{n}_{1} \hat{f}_{2} \hat{X}$ lie in $\hat{\mathcal{L}}$. From the relations $\hat{X} (\hat{f}^{n}_{1} \hat{f}_{2}) = ( n \mu_{1} + \mu_{2}) \hat{f}^{n}_{1} \hat{f}_{2}$ and $\dim V(0) = 1$ one sees that necessarily $n \mu_{1} + \mu_{2} \neq 0$. From $[ \hat{f}_{1} \hat{X} , \hat{f}_{1}^{k} \hat{f}_{2} \hat{X}] = ( (k - 1)\mu_{1} + \mu_{2} ) \hat{f}_{1}^{k + 1} \hat{f}_{2} \hat{X}$ one then deduces that $\hat{f}^{k + 1}_{1} \hat{f}_{2} \hat{X}$ lies in $\hat{\mathcal{L}}$. Thus $\hat{f}^{k}_{1} \hat{f}_{2} \hat{X}$ belongs to $\hat{\mathcal{L}}$ for every $k$, contradicting the finiteness of the dimension of $E$.
Hence all the reals $\mu_{i}$, apart from $\mu_{0}$, are equal to a non-zero constant $\mu$. One deduces that the $\hat{f}_{i}$ are of the type $\hat{f}_{i} = x_{1}^{r} x_{2}^{s}\hat{\varphi}_{i} (x_{1}^{p} x_{2}^{q})$. The algebra $\hat{\mathcal{L}}$ therefore has the following presentation: $\hat{\mathcal{L}} = \{ \hat{X} , \hat{f}_{1} \hat{X} , \ldots , \hat{f}_{p} \hat{X} \}$ with $[ \hat{X} , \hat{f}_{i}\hat{X}] = \mu \hat{f}_{i} \hat{X}$, $\mu = q r - p s$, and $[ \hat{f_{i}}\hat{X} , \hat{f}_{j}\hat{X}] = 0$. Recall that $\hat{N}$ is of the type $\hat{a}(x_{1}^{p} x_{2}^{q}) (\lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}})$. As seen in $(6)$:
$$(r \lambda_{1} + s \lambda_{2}) \hat{\varphi}_{i} + (p \lambda_{1} + q \lambda_{2}) t \hat{\varphi}_{i}^{'} \equiv 0.$$
If $p \lambda_{1} + q \lambda_{2} \neq 0$ then each $\hat{\varphi}_{i}$ is a monomial, and hence so is $\hat{f}_{i}$: $\hat{f}_{i} = f_{i} = x_{1}^{r} x_{2}^{s} (x_{1}^{p} x_{2}^{q})^{k_{i}}$. Let $a$ be a realization of $\hat{a}$ and set $X = (q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}}) + a (x_{1}^{p} x_{2}^{q}) (\lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}})$. The algebra
$\mathcal{L} = \langle X , f_{1} X , \ldots , f_{p} X \rangle$ is then, up to $\mathcal{C}^{\infty}$ conjugation, a $\mathcal{C}^{\infty}$ realization of $\hat{\mathcal{L}}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism. \\
If $p \lambda_{1} + q \lambda_{2} = 0$, then $\hat{X}$ is of the form $\hat{X} = \hat{b}(x_{1}^{p} x_{2}^{q})(q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}})$; and since $\mu = q r - p s \neq 0$ one checks that $\hat{b}$ is in fact constant.
Thus $\hat{\mathcal{L}} = \langle X = q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}} , \hat{\varphi}_{1}(x_{1}^{p} x_{2}^{q}) X , \ldots , \hat{\varphi}_{p}(x_{1}^{p} x_{2}^{q}) X \rangle$, where $\hat{\varphi}_{i}$ belongs to $\hat{\mathcal{E}}_{1}$. Here too, realizing the $\hat{\varphi}_{i}$, one obtains a $\mathcal{C}^{\infty}$ realization $\mathcal{L}$ of $\hat{\mathcal{L}}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism. \\
When $\hat{X}$ is elliptic we consider the complexification $\hat{\mathcal{L}}^{\mathbb{C}}$ of the Lie algebra $\hat{\mathcal{L}}$. The diffeomorphism $(x_{1} , x_{2}) \mapsto \Phi (x_{1} , x_{2}) = ( x_{1} + i x_{2} , i x_{1} + x_{2})$ conjugates $\hat{X}$ to $\hat{Y} = X_{1} + \hat{N}$ where $X_{1} = i \beta (x_{1}\frac{\partial}{\partial x_{1}} - x_{2}\frac{\partial}{\partial x_{2}})$ and $\hat{N}$ is nilpotent ($J^{1}\hat{N} = 0$). Following the proof above one sees that:
\begin{itemize}
\item either $\hat{X} = \hat{b} (x_{1}^{2} + x_{2}^{2}) X$, where $X = \beta ( - x_{2}\frac{\partial}{\partial x_{1}} + x_{1}\frac{\partial}{\partial x_{2}})$ and $\hat{\mathcal{L}} = \langle \hat{X} , \hat{\varphi}_{1}(x_{1}^{2} + x_{2}^{2})\hat{X} , \ldots , \hat{\varphi}_{p}(x_{1}^{2} + x_{2}^{2})\hat{X} \rangle$, where $\hat{b}$ and the $\hat{\varphi}_{i}$ belong to $\hat{\mathcal{E}}_{1}$;\\
\item or the eigenvectors of $\hat{\Phi}_{\ast}\hat{X}^{\mathbb{C}}$ are polynomials, $\hat{X}^{\mathbb{C}}$ being the complexification of $\hat{X}$. One then deduces that, up to formal conjugation, $\hat{\mathcal{L}} = \langle \hat{X} , f_{1} \hat{X} , \ldots , f_{p}\hat{X} \rangle$, where the $f_{i}$ are polynomials.
\end{itemize}
In either case, realizing $\hat{b}$ and the $\hat{\varphi}_{i}$, or $\hat{X}$, one again obtains a realization $\mathcal{L}$ of $\hat{\mathcal{L}}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.\\
{\bf 1.2.3.3 The non-semisimple cases}
When the 1-jet of $\hat{X}$ is nilpotent, say $x_{1} \frac{\partial}{\partial x_{2}}$, $\hat{X}$ is itself nilpotent. The typical example of a saturable algebra with this configuration is $\hat{\mathcal{L}} = \langle \hat{X} = x_{1} \frac{\partial}{\partial x_{2}} , \hat{f}_{1}(x_{1}) \hat{X}, \ldots , \hat{f}_{p}(x_{1}) \hat{X} \rangle$, where the $\hat{f}_{i} \in \hat{\mathcal{E}}_{1}$.\\
Let $\hat{\mathcal{L}} = \langle \hat{X} , \hat{f}_{1} \hat{X}, \ldots , \hat{f}_{p} \hat{X} \rangle$ be a saturable Lie subalgebra of $\hat{\mathcal{X}}_{2}$ with $J^{1} \hat{X} = x_{1} \frac{\partial}{\partial x_{2}}$. Since $\hat{X}$ is nilpotent, following the proof of Lemma 12 one sees that the $\hat{X} \hat{f}_{i}$ vanish and that $\hat{\mathcal{L}}$ is abelian. In particular, if $\dim \hat{\mathcal{L}} \geq 2$, the $\hat{f}_{i}$ are non-constant, and $\hat{X}$ thus has a non-constant first integral. Proposition 9 covers this case. \\
Otherwise, up to a formal diffeomorphism, $\hat{X} = (\lambda x_{1} + x_{2})\frac{\partial}{\partial x_{1}} + \lambda x_{2} \frac{\partial}{\partial x_{2}}$, $\lambda \neq 0$. Note that we may, and do, assume $\lambda = 1$. The Jordanization of the field $\hat{X}$ is real; hence so is that of $\hat{X}_{|E}$. Consider a sequence $\hat{f}_{1} , \ldots , \hat{f}_{m}$ of elements of $E$ such that $\hat{X}(\hat{f}_{1}) = \mu \hat{f}_{1}$ and $\hat{X}(\hat{f}_{i}) = \mu \hat{f}_{i} + \hat{f}_{i - 1}$ for $i = 2 , \ldots , m$. Write $\hat{f}_{i} = \sum_{j \geq k_{i}} A_{j}^{i}$, where $A_{j}^{i}$ is homogeneous of degree $j$ and $A_{k_{i}}^{i}$ is non-zero. The condition $\hat{X}(\hat{f}_{1}) = \mu \hat{f}_{1}$ implies that:
\begin{equation} (\mu - j) A^{1}_{j} = x_{2} \frac{\partial A^{1}_{j}}{\partial x_{1}} \ \forall j \in \mathbb{N}.
\end{equation}
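In more detail (a short justification of $(9)$, using Euler's identity): since $\hat{X} = (x_{1} + x_{2})\frac{\partial}{\partial x_{1}} + x_{2}\frac{\partial}{\partial x_{2}}$ is linear, it preserves the degree, and for $A$ homogeneous of degree $j$,
$$\hat{X}(A) = \Big(x_{1}\frac{\partial A}{\partial x_{1}} + x_{2}\frac{\partial A}{\partial x_{2}}\Big) + x_{2}\frac{\partial A}{\partial x_{1}} = j\,A + x_{2}\frac{\partial A}{\partial x_{1}},$$
so the relation $\hat{X}(\hat{f}_{1}) = \mu\hat{f}_{1}$, read degree by degree, gives exactly $(9)$.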
If $\mu - j$ is non-zero then $A^{1}_{j} \equiv 0$. Indeed, writing $A^{1}_{j} = \sum_{k = l}^{j} \alpha_{k}^{j} x_{1}^{k} x_{2}^{j - k}$ and substituting into the preceding equation, one sees that necessarily $\alpha_{l}^{j} = 0$. One deduces that $\mu = k_{1}$ and $\hat{f}_{1} = \alpha x_{2}^{k_{1}}$. Assume it has been shown that $\hat{f}_{i - 1}$ is a polynomial. The equation $\hat{X}(\hat{f}_{i}) = \mu \hat{f}_{i} + \hat{f}_{i - 1}$ implies that:
$$ (\mu - j) A^{i}_{j} = x_{2} \frac{\partial A^{i}_{j}}{\partial x_{1}} \ \forall j > d^{\circ} \hat{f}_{i - 1}. $$
This equation is of the same type as $(9)$; one deduces that all the $A^{i}_{j}$ with $j > d^{\circ} \hat{f}_{i - 1}$ vanish, except possibly one. This implies that $\hat{f}_{i}$ is a polynomial. \\
Choosing a basis in which the matrix of $\hat{X}_{|E}$ is in Jordan form, one deduces that $\hat{\mathcal{L}}$ is conjugate to $\langle X = ( x_{1} + x_{2})\frac{\partial}{\partial x_{1}} + x_{2} \frac{\partial}{\partial x_{2}} , f_{1} X , \ldots , f_{p} X \rangle$, where the $f_{i}$ are polynomials. Summing up, we obtain the following:
\begin{theorem} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{2}$ be a finite-dimensional, saturable Lie subalgebra of pointwise rank 1. There exists a $\mathcal{C}^{\infty}$ realization $\mathcal{L}$ of $\hat{\mathcal{L}}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is a Lie algebra isomorphism.
\end{theorem}
\subsection{Dimension two: abelian algebras of rank two}
Proposition 7 shows that abelian algebras play a special role in our context. However, this proposition has no generalization when the generic pointwise rank is greater than 1. For example, the algebra generated by the three fields $X_{0} = x_{1} x_{2} \frac{\partial}{\partial x_{3}}$, $X_{1} = x_{2} x_{3} \frac{\partial}{\partial x_{4}}$ and $X_{2} = x_{1} x_{2}^{2} \frac{\partial}{\partial x_{4}}$, with $[X_{0} , X_{1}] = X_{2}$, is a rank-2 subalgebra of $\hat{\mathcal{X}}_{4}$; it is, however, nilpotent. In fact we have the following:
\begin{proposition} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{n}$ be a finite-dimensional Lie subalgebra. Assume that $J^{1} \hat{\mathcal{L}} = \{ 0 \}$, i.e. $\hat{\mathcal{L}} \subset \mathcal{M}_{n}^{2} \hat{\mathcal{X}}_{n}$. Then $\hat{\mathcal{L}}$ is nilpotent.
\end{proposition}
\begin{proof}
By \cite{Bourbaki} it suffices to establish that the maps $ad_{\hat{X}} : \hat{\mathcal{L}} \rightarrow \hat{\mathcal{L}}$, defined by $ad_{\hat{X}}(\hat{Y}) = [\hat{X} , \hat{Y}]$ for $\hat{X} \in \hat{\mathcal{L}}$, are all nilpotent. Complexifying if necessary, we may assume that $\hat{\mathcal{L}}$ is defined over $\mathbb{C}$, i.e. $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{n}(\mathbb{C}^{n}_{0})$. Let $\hat{X}$ and $\hat{Y}$ be two elements of $\hat{\mathcal{L}}$ such that $\hat{Y}$ is an eigenvector of $ad_{\hat{X}}$: $ad_{\hat{X}}(\hat{Y}) = \mu \hat{Y}$. Since $J^{1} \hat{X} = 0$, the order of the first non-zero jet of $[\hat{X} , \hat{Y}]$ is strictly greater than that of $\hat{Y}$; one deduces that $\mu$ is zero and consequently $ad_{\hat{X}}$ is nilpotent.
\end{proof}
A formal quotient is by definition any element of the fraction field $\hat{\mathcal{M}}_{n}$ of the ring of formal series $\hat{\mathcal{E}}_{n}$. An element of $\hat{\mathcal{M}}_{n}$ can be written $\frac{\hat{f}}{\hat{g}}$, where $\hat{f}$ and $\hat{g}$ are elements of $\hat{\mathcal{E}}_{n}$ without common factor. For commutative algebras of formal vector fields one obtains, in dimension two:
\begin{lemma} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{2}$ be an abelian Lie algebra of formal vector fields of rank $2$. Then $\dim \hat{\mathcal{L}} = 2$.
\end{lemma}
{\bf Proof} Let $\hat{X}$ and $\hat{Y}$ be two elements of $\hat{\mathcal{L}}$ such that $det ( \hat{X} , \hat{Y} ) \neq 0$. If $\hat{Z}$ belongs to $\hat{\mathcal{L}}$, there exist formal quotients $\hat{A}$ and $\hat{B}$ such that $\hat{Z} = \hat{A} \hat{X} + \hat{B} \hat{Y}$ (linear algebra over the fraction field of formal series). Since $[\hat{X} , \hat{Y}] = [\hat{X} , \hat{Z}] = [\hat{Y} , \hat{Z}] = 0$ we have $\hat{X} ( \hat{A} ) = \hat{X} ( \hat{B} ) = \hat{Y} ( \hat{A} ) = \hat{Y} ( \hat{B} ) = 0$. It follows that $\frac{\partial \hat{A}}{\partial x_{i}} = \frac{\partial \hat{B}}{\partial x_{i}} = 0$ for $i = 1 , 2$, which implies that $\hat{A}$ and $\hat{B}$ are constant.
\begin{examples} In each of the following cases the algebra $\hat{\mathcal{L}}$ is abelian of rank $2$.
\begin{enumerate}
\item Separated variables: $\hat{\mathcal{L}} = \langle \hat{f}_{1} ( x_{1} ) \frac{\partial}{\partial x_{1}} , \hat{f}_{2} ( x_{2} ) \frac{\partial}{\partial x_{2}} \rangle$, where $\hat{f}_{i}$ belongs to $\hat{\mathcal{E}}_{1}$.
\item Diagonal linear: $\hat{\mathcal{L}} = \langle \lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}} , \mu_{1} x_{1} \frac{\partial}{\partial x_{1}} + \mu_{2} x_{2} \frac{\partial}{\partial x_{2}} \rangle$, with $ \lambda_{1} \mu_{2} - \lambda_{2} \mu_{1} \neq 0$.
\item Resonant: $\hat{\mathcal{L}} = \langle q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}} , \hat{a} ( x_{1}^{p} x_{2}^{q} ) ( \lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}} ) \rangle$, where $\lambda_{i} \in \mathbb{R}$, $p , q \in \mathbb{N}$ and $p \lambda_{1} + q \lambda_{2} \neq 0$ (commutativity is checked after these examples).
\end{enumerate}
In each of these cases one has a $\mathcal{C}^{\infty}$ realization $\mathcal{L}$ of $\hat{\mathcal{L}}$ such that $T_{\underline{0}} : \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism.
\end{examples}
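One checks directly that the resonant pair in example 3 commutes: with $X = q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}}$ and $u = x_{1}^{p} x_{2}^{q}$ one has
$$ X(u) = pq \, x_{1}^{p} x_{2}^{q} - pq \, x_{1}^{p} x_{2}^{q} = 0 , $$
hence $X(\hat{a}(u)) = \hat{a}'(u) X(u) = 0$; since $X$ moreover commutes with every diagonal linear field, $[ X , \hat{a}(u) ( \lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}} ) ] = 0$.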
If $\hat{X}$ and $\hat{Y}$ are such that $det ( \hat{X} , \hat{Y} ) \neq 0$, there exist two differential 1-forms $\hat{\alpha}$ and $\hat{\beta}$, with coefficients in $\hat{\mathcal{M}}_{2}$, such that $i_{\hat{X}} \hat{\alpha} = i_{\hat{Y}} \hat{\beta} = 1$ and $i_{\hat{X}} \hat{\beta} = i_{\hat{Y}} \hat{\alpha} = 0$. The commutation of $\hat{X}$ and $\hat{Y}$ implies that the 1-forms $\hat{\alpha}$ and $\hat{\beta}$ are closed.
\subsubsection{Normal forms of closed forms with coefficients in $\hat{\mathcal{M}}_{n}$}
Consider a germ of closed meromorphic 1-form $\omega$ at the origin of $\mathbb{C}^{n}$; it can be written $\omega = \frac{\Theta}{f}$, with $\Theta \in \Omega( \mathbb{C}^{n} )$ a germ of holomorphic 1-form and $f \in \mathcal{O} ( \mathbb{C}^{n} )$, $f = f_{1}^{n_{1} + 1} \ldots f_{p}^{n_{p} + 1}$, the $f_{i}$ being irreducible and none of the $f_{i}$ dividing $\Theta$. D. Cerveau and J.-F. Mattei \cite{Cerveau2} establish the following decomposition of $\omega$ into ``partial fractions'':
$$ \omega = \sum_{i = 1}^{p} \lambda_{i} \frac{d f_{i}}{f_{i}} + d ( \frac{ H}{f_{1}^{n_{1}} \ldots f_{p}^{n_{p} }} ) $$
where $\lambda_{i} \in \mathbb{C}$ (the residue of $\omega$ along $f_{i}$) and $H \in \mathcal{O} ( \mathbb{C}^{n} )$. In the reduced case where $\omega$ has simple poles, $ \omega = \sum_{i = 1}^{p} \lambda_{i} \frac{d f_{i}}{f_{i}} + d H$, and $\omega$ is said to be logarithmic. The partial fraction decomposition extends to closed formal meromorphic forms, that is, to forms with coefficients in the fraction field $\hat{\mathcal{M}}_{n} ( \mathbb{C} )$ of $\hat{\mathcal{O}} ( \mathbb{C}^{n} )$. For such an $\hat{\omega}$ one has, analogously:
$$ \hat{\omega} = \sum_{i = 1}^{p} \lambda_{i} \frac{d \hat{f}_{i}}{\hat{f}_{i}} + d ( \frac{ \hat{H}}{\hat{f}_{1}^{n_{1}} \ldots \hat{f}_{p}^{n_{p} }} ). $$
Consider now a closed formal meromorphic 1-form $\hat{\omega} = \frac{\hat{\Theta}}{\hat{f}}$ where $\hat{\Theta} = \sum_{i = 1}^{n} \hat{a}_{i} d x_{i} \in \hat{\Omega}^{1}_{n}$ and the $\hat{a}_{i} , \hat{f} \in \hat{\mathcal{E}}_{n}$. Denote by $\hat{f}_{\mathbb{C}}$ the complexification of $\hat{f}$. The decomposition of $\hat{f}_{\mathbb{C}}$ into irreducible factors is of the type
$\hat{f}_{\mathbb{C}} = \hat{f}_{1}^{n_{1} + 1} \ldots \hat{f}_{q}^{n_{q} + 1} (\hat{g}_{1}\hat{h}_{1} )^{m_{1} + 1} \ldots (\hat{g}_{l}\hat{h}_{l} )^{m_{l} + 1}$, where $\hat{f}_{1}, \ldots , \hat{f}_{q}$ are real, i.e. complexifications of elements of $\hat{\mathcal{E}}_{n}$, and the $\hat{g}_{j}$ and $\hat{h}_{j}$ are complex conjugate; this amounts to saying that $\hat{g}_{j} \hat{h}_{j}$ is the complexification of an element of $\hat{\mathcal{E}}_{n}$ of the type $\hat{P}_{j}^{2} + \hat{Q}_{j}^{2} = ( \hat{P}_{j} + i \hat{Q}_{j} ) ( \hat{P}_{j} - i \hat{Q}_{j} )$.
If $\hat{\omega}^{\mathbb{C}}$ is the complexification of $\hat{\omega}$, we have:
$$ \hat{\omega}^{\mathbb{C}} = \sum_{i = 1}^{p} \lambda_{i} \frac{d \hat{f}_{i}}{\hat{f}_{i}} + \sum_{j = 1}^{l} \mu_{j} \frac{d \hat{g}_{j}}{\hat{g}_{j}} + \sum_{j = 1}^{l} \bar{\mu}_{j} \frac{d \hat{h}_{j}}{\hat{h}_{j}} + d \left( \frac{\hat{K}}{\hat{f}_{1}^{n_{1}} \ldots \hat{f}_{p}^{n_{p} } (\hat{g}_{1}\hat{h}_{1})^{m_{1}} \ldots (\hat{g}_{l}\hat{h}_{l})^{m_{l}}} \right) $$
with $\lambda_{i} \in \mathbb{R}$, $\mu_{j} \in \mathbb{C}$, and $\hat{K} \in \hat{\mathcal{O}} (\mathbb{C}^{n}_{, 0} )$ real. The form $\hat{\omega}^{\mathbb{C}}$ is invariant under the action of the field automorphism $z \mapsto \bar{z}$. So that $\hat{\omega}$ can finally be written in the form $(\ast \ast)$:
$$\hat{\omega} = \sum_{i = 1}^{p} \lambda_{i} \frac{d \hat{f}_{i}}{\hat{f}_{i}} + \sum_{j = 1}^{l} \left( a_{j} \frac{d ( \hat{P}_{j}^{2} + \hat{Q}_{j}^{2} ) }{ \hat{P}_{j}^{2} + \hat{Q}_{j}^{2} } + b_{j} \frac{\hat{P}_{j} d \hat{Q}_{j} - \hat{Q}_{j} d \hat{P}_{j}}{ \hat{P}_{j}^{2} + \hat{Q}_{j}^{2} } \right) + $$ $$d \left( \frac{\hat{H}}{\hat{f}_{1}^{n_{1}} \ldots \hat{f}_{p}^{n_{p} } ( \hat{P}_{1}^{2} + \hat{Q}_{1}^{2} )^{m_{1}} \ldots ( \hat{P}_{l}^{2} + \hat{Q}_{l}^{2} )^{m_{l}}}\right) $$
with $\lambda_{i}, a_{j} , b_{j} \in \mathbb{R}$, and $\hat{f}_{i} , \hat{P}_{j} , \hat{Q}_{j}$ and $\hat{H}$ in $\hat{\mathcal{E}}_{n}$.
We say that $\hat{\omega}$ is logarithmic if it has simple poles, that is, if the $n_{i}$ and $m_{j}$ are all zero.
\subsubsection{Classification}
Let us return to a commutative subalgebra $\hat{\mathcal{L}} = \langle \hat{X} , \hat{Y} , [ \hat{X} , \hat{Y} ] = 0 \rangle $ of $\hat{\mathcal{X}}_{2}$. Suppose that one of the elements of $\hat{\mathcal{L}}$, say $\hat{X}$, is non-singular. There exists a formal diffeomorphism $\hat{\Phi}$ such that
$\hat{\Phi}_{\ast}\hat{X} = \frac{\partial}{\partial x_{1}}$. An elementary computation shows that $\hat{\Phi}_{\ast}\hat{Y}$ can be written in the form $ \hat{a} ( x_{2} ) \frac{\partial}{\partial x_{1}} + \hat{b}(x_{2} ) \frac{\partial}{\partial x_{2}}$, where $\hat{a}$ and $\hat{b}$ belong to $\hat{\mathcal{E}}_{1}$. Let $a$, $b \in \mathcal{E}_{1}$ and $\Phi \in \mathrm{Diff}_{2}$ be realizations of $\hat{a}$, $\hat{b}$ and $\hat{\Phi}$ respectively. Setting $X = \Phi^{- 1}_{\ast} \frac{\partial}{\partial x_{1}}$ and $Y = \Phi^{- 1}_{\ast} ( a \frac{\partial}{\partial x_{1}} + b \frac{\partial}{\partial x_{2}} )$, we obtain a realization $\mathcal{L} = \langle X , Y \rangle$ of $\hat{\mathcal{L}}$ such that $T_{\underline{0}}: \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an algebra isomorphism.
In what follows we will assume that all the elements of $\hat{\mathcal{L}} = \langle \hat{X} , \hat{Y} , [ \hat{X} , \hat{Y} ] = 0 \rangle $ are singular, i.e. $\hat{X} ( 0 ) = \hat{Y} ( 0 ) = 0$. As before, we denote by $\mathcal{L}^{1}$ the algebra of 1-jets of elements of $\hat{\mathcal{L}}$, and by $\hat{\alpha}$ and $\hat{\beta}$ the closed 1-forms with coefficients in $\hat{\mathcal{M}}_{2}$ defined by $i_{\hat{X}} \hat{\alpha} = i_{\hat{Y}} \hat{\beta} = 1$ and $i_{\hat{X}} \hat{\beta} = i_{\hat{Y}} \hat{\alpha} = 0$. We denote by $\hat{\mathcal{L}}^{\star}$ the vector space $\hat{\mathcal{L}}^{\star} = \hat{\alpha} \mathbb{R} + \hat{\beta} \mathbb{R}$. We say that $\hat{\mathcal{L}}$ is logarithmic if the generic element of $\hat{\mathcal{L}}^{\star}$ is logarithmic, and semi-logarithmic if $\hat{\mathcal{L}}^{\star}$ contains a non-trivial logarithmic 1-form.
The diagonal linear algebras $ \langle x_{1} \frac{\partial}{\partial x_{1}} , x_{2} \frac{\partial}{\partial x_{2}} \rangle$ are logarithmic, whereas the resonant algebras $\langle q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}} , \hat{a} ( x_{1}^{p} x_{2}^{q} )(\lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}} ) \rangle$ are semi-logarithmic.
\begin{proposition} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{2}$ be an abelian algebra of rank $2$ such that $\hat{\mathcal{L}} ( 0 ) = 0$. If $\hat{\mathcal{L}}$ is semi-logarithmic, then $\mathcal{L}^{1}$ is nonzero. Moreover, $\mathcal{L}^{1}$ contains a non-nilpotent element.
\end{proposition}
\begin{proof}
We keep the preceding notation; assume that $\hat{\alpha}$ is logarithmic and $i_{\hat{X}} \hat{\alpha} = 1$. It is more convenient to work with the complexifications $\hat{\alpha}_{\mathbb{C}}$ and $\hat{X}_{\mathbb{C}}$ of $\hat{\alpha}$ and $\hat{X}$ respectively. Since $\hat{\alpha}$ is logarithmic we have $\hat{\alpha}_{\mathbb{C}} = \sum \lambda_{i} \frac{d \hat{f}_{i}}{\hat{f}_{i}} + d \hat{H}$, with $\hat{f}_{i} , \hat{H} \in \hat{\mathcal{O}} ( \mathbb{C}^{2} )$. Since the $\hat{f}_{i}$ are irreducible, the condition $i_{\hat{X}} \hat{\alpha} = 1$ implies that $\hat{X}_{\mathbb{C}} ( \hat{f}_{i} ) = \mu_{i} \hat{f}_{i}$ with $\mu_{i} \in \hat{\mathcal{O}} ( \mathbb{C}^{2} )$. We then have $\sum \lambda_{i} \mu_{i} + \hat{X}_{\mathbb{C}} ( \hat{H}) =1$, and since $\hat{X}_{\mathbb{C}} ( 0 ) = 0$ one of the $\mu_{i} ( 0 )$, say $\mu_{1} ( 0 )$, is nonzero. The condition $\hat{X}_{\mathbb{C}} ( \hat{f}_{1} ) = \mu_{1} \hat{f}_{1}$ implies that $J^{1} \hat{X}_{\mathbb{C}}$, and hence $J^{1} \hat{X}$, is nonzero. Note that the first nonzero jet of $\hat{f}_{1}$ is an eigenvector of the derivation $J^{1} \hat{X}$ with eigenvalue $\mu_{1} ( 0 )$ different from $0$. Consequently $J^{1} \hat{X}$ is non-nilpotent.
\end{proof}
Up to linear conjugation and a multiplicative constant, the 1-jets of the non-nilpotent fields that will come into play are the following:
\begin{enumerate}
\item $\lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}}$ without resonance, i.e. $i_{1} \lambda_{1} + i_{2} \lambda_{2} \neq\lambda_{j}$, $j = 1 , 2$, whenever $i_{1}$ and $i_{2}$ are non-negative integers such that $i_{1} + i_{2} \geq 2$.
\item $ x_{1} \frac{\partial}{\partial x_{1}} + n x_{2} \frac{\partial}{\partial x_{2}}$, $n \geq 2$ (Poincar\'e-Dulac).
\item $q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}}$, $p , q \in \mathbb{N}^{\ast}$, $\langle p , q \rangle = 1$ (resonant hyperbolic of type $( p , q)$).
\item $ x_{1} \frac{\partial}{\partial x_{1}} + (x_{1} + x_{2}) \frac{\partial}{\partial x_{2}}$.
\item $(\alpha x_{1} + \beta x_{2})\frac{\partial}{\partial x_{1}} + (- \beta x_{1} + \alpha x_{2} ) \frac{\partial}{\partial x_{2}}$, $\alpha \neq 0$.
\item $ x_{2} \frac{\partial}{\partial x_{1}} - x_{1} \frac{\partial}{\partial x_{2}}$ (elliptic).
\item $ x_{1} \frac{\partial}{\partial x_{1}}$ (saddle-node).
\end{enumerate}
The 1-jets of types 1, 4, and 5 are 1-determined, i.e. if $\hat{X}$ has a 1-jet of type 1, 4 or 5 then $\hat{X}$ is formally conjugate to its linear part $X_{1} = J^{1} \hat{X}$. Moreover, a field $\hat{Y}$ commuting with $X_{1}$ is linear. Using the preceding list and the Jordan decomposition of formal vector fields into semisimple and nilpotent parts \cite{Bourbaki}, \cite{Cerveau2}, one easily establishes the following:
\begin{proposition} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{2}$ be a commutative subalgebra of rank 2 with $\hat{\mathcal{L}} (0) = \{ 0 \}$. If $\hat{\mathcal{L}}$ is semi-logarithmic then, up to conjugation by an element of $\widehat{\mathrm{Diff}}(\mathbb{R}^{2}_{0})$, $\hat{\mathcal{L}}$ belongs to the following list:
\begin{enumerate}
\item $\langle x_{1} \frac{\partial}{\partial x_{1}} , x_{2} \frac{\partial}{\partial x_{2}} \rangle$
\item $\langle x_{1} \frac{\partial}{\partial x_{1}} + n x_{2} \frac{\partial}{\partial x_{2}} , x_{1}^{n} \frac{\partial}{\partial x_{2}} \rangle$, where $n \geq 2$.
\item $\langle q x_{1} \frac{\partial}{\partial x_{1}} - p x_{2} \frac{\partial}{\partial x_{2}} , \hat{a} (x_{1}^{p}x_{2}^{q}) (\lambda_{1} x_{1} \frac{\partial}{\partial x_{1}} + \lambda_{2} x_{2} \frac{\partial}{\partial x_{2}}) \rangle$, $\hat{a} \in \mathcal{M} .\hat{\mathcal{E}}_{1}$, $\lambda_{1} , \lambda_{2} \in \mathbb{R}$.
\item $\langle x_{1} \frac{\partial}{\partial x_{1}} + x_{2} \frac{\partial}{\partial x_{2}} , x_{1}\frac{\partial}{\partial x_{2}} \rangle$.
\item $\langle x_{1} \frac{\partial}{\partial x_{1}} + x_{2} \frac{\partial}{\partial x_{2}} , x_{2} \frac{\partial}{\partial x_{1}} - x_{1} \frac{\partial}{\partial x_{2}} \rangle$
\item $\langle x_{2} \frac{\partial}{\partial x_{1}} - x_{1} \frac{\partial}{\partial x_{2}} , \alpha \hat{a} ( x_{1} \frac{\partial}{\partial x_{1}} + x_{2} \frac{\partial}{\partial x_{2}}) + \beta \hat{a} (x_{2} \frac{\partial}{\partial x_{1}} - x_{1} \frac{\partial}{\partial x_{2}}) \rangle$, where $\hat{a} \in \mathcal{M} .\hat{\mathcal{E}}_{1}$ and $\alpha , \beta \in \mathbb{R}$.
\item $\langle x_{1} \frac{\partial}{\partial x_{1}} , \hat{a}(x_{2}) \frac{\partial}{\partial x_{2}} \rangle$, where $\hat{a} \in \mathcal{M}^{2} .\hat{\mathcal{E}}_{1}$
\end{enumerate}
\end{proposition}
Note that the algebras of types 1 and 5 are logarithmic and the others are semi-logarithmic. One sees in particular that logarithmic implies linearizable. The algebras of type 4 are those of type 2 with $n = 1$, but they are linearizable. One can in fact give more precise normal forms for $\hat{a}$, $\hat{l}_{1}$ and $\hat{l}_{2}$, but this will not be needed ($\hat{a} (t) = \frac{t^{s + 1}}{1 - \lambda t^{s}}$, $s \in \mathbb{N}^{\ast}$ and $\lambda \in \mathbb{R}$). From all this we deduce the following:
\begin{theorem} Let $\hat{\mathcal{L}} \subset \hat{\mathcal{X}}_{2}$ be an abelian subalgebra of rank $2$. There exists a $\mathcal{C}^{\infty}$ realization $\mathcal{L}$ of $\hat{\mathcal{L}}$ such that $T_{\underline{0}}: \mathcal{L} \rightarrow \hat{\mathcal{L}}$ is an isomorphism in the following two cases:
\begin{enumerate}
\item $\hat{\mathcal{L}}(0) \neq 0$.
\item $\hat{\mathcal{L}}(0) = 0$ et $\hat{\mathcal{L}}$ est semi-logarithmique.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\hat{\Phi}$ be an element of $\widehat{\mathrm{Diff}}(\mathbb{R}^{2}_{0})$ such that $\hat{\Phi}_{\ast}\hat{\mathcal{L}} = \hat{\mathcal{L}}_{0}$ is as in Proposition 18. If $\hat{\mathcal{L}}_{0}$ is of type 3, 5, 6 or 7, we choose a $\mathcal{C}^{\infty}$ realization $a \in \mathcal{M}\mathcal{E}_{1}$ of $\hat{a}$. In this way we construct algebras $\mathcal{L}_{0}$ which are $\mathcal{C}^{\infty}$ realizations of $\hat{\mathcal{L}}_{0}$. The subalgebra $\mathcal{L} = \Phi_{\ast} \mathcal{L}_{0}$, where $\Phi$ is a realization of $\hat{\Phi}$, satisfies the theorem.
\end{proof}
\begin{remark} We have not treated here the general non-semi-logarithmic case, for which, to our knowledge, there are no models analogous to those of Proposition 18.
\end{remark}
\section{Implicit equations}
Let us recall the definitions and properties of closed ideals of $\mathcal{E}_{n}$, in order to state the main result of J. C. Tougeron \cite{Tougeron1}, \cite{Tougeron2} that we will use. We keep his notation. Let $\Omega$ be an open subset of $\mathbb{R}^{n}$ and $\mathcal{E}(\Omega)$ the algebra of real-valued $\mathcal{C}^{\infty}$ functions on $\Omega$, equipped with its classical Fr\'echet space structure. Given $\underline{a} \ \in \ \Omega$ and $ \varphi \ \in \ \mathcal{E} (\Omega)$, we denote by $T_{\underline{a}} \varphi$ the formal Taylor expansion of $\varphi$ at $\underline{a}$. If $I$ is an ideal, we write $T_{\underline{a}} I = \{ T_{\underline{a}} \varphi \ / \ \varphi \ \in \ I \} \subset \hat{\mathcal{E}}_{n}$. We say that $I$ is closed if it is a closed subset of $\mathcal{E} (\Omega)$ for its Fr\'echet space structure. We have the following result, due to Whitney, which characterizes the closed ideals of $\mathcal{E} ( \Omega )$:
\begin{theorem} \cite{Tougeron1}, \cite{Malgrange} An ideal $I$ of $\mathcal{E} (\Omega)$ is closed if and only if every $\varphi$ in $\mathcal{E} (\Omega)$ such that $T_{\underline{a}} \varphi \ \in \ T_{\underline{a}} I$ for every $\underline{a}$ in $\Omega$ belongs to $I$.
\end{theorem}
H\"ormander a montr\'e que l'id\'eal $\varphi \mathcal{E} (\Omega)$ est ferm\'e lorsque $\varphi$ est un polyn\^ome. Lojasiewicz a montr\'e le m\^eme r\'esultat sous l'hypoth\`ese que $\varphi$ est analytique sur $\Omega$. Plus g\'en\'eralement Malgrange a prouv\'e que:
\begin{theorem} \cite{Malgrange}, \cite{Tougeron1} Soient $\varphi_{1}$, $\ldots$ , $\varphi_{p}$ des fonctions analytiques sur $\Omega$ et $<\varphi_{1} , \ldots , \varphi_{n}>$ l'id\'eal engendr\'e par $\varphi_{1}$, $\ldots$ , $\varphi_{p}$, alors $<\varphi_{1} , \ldots , \varphi_{n}>$ est ferm\'e.
\end{theorem}
The notion of closed ideal extends to finitely generated ideals of $\mathcal{E}_{n}$.
\begin{definition} Let $I = <\varphi_{1} , \ldots , \varphi_{p}>$ be an ideal of $\mathcal{E}_{n}$. We say that $I$ is closed if there exist a neighborhood $\Omega$ of $\underline{0}$ and representatives $\tilde{\varphi}_{1}$, $\ldots$ , $\tilde{\varphi}_{p}$ of $\varphi_{1}, \ldots , \varphi_{p}$ respectively such that $< \tilde{\varphi}_{1} , \ldots , \tilde{\varphi}_{p} >$ is closed in $\mathcal{E} (\Omega)$.
\end{definition}
\begin{example} Let $\varphi \ \in \ \mathcal{E}_{2}$ be non-flat, i.e. $T_{\underline{0}} \varphi \neq 0$. Then $\varphi \mathcal{E}_{2}$ is closed. Indeed, by D. Cerveau and J.-F. Mattei \cite{Cerveau2}, there exists a formal diffeomorphism $\hat{\Phi}$ in $\widehat{\mathrm{Diff}} (\mathbb{R}^{2}_{0})$ such that $(T_{\underline{0}} \varphi) \circ \hat{\Phi} = P$, where $P$ is a polynomial. Let $\Phi$ be a Borel realization of $\hat{\Phi}$; then $P \circ \Phi^{- 1}$ is a Borel realization of $T_{\underline{0}} \varphi$ and is $\mathcal{C}^{\infty}$ conjugate to a polynomial. We deduce that $(P \circ \Phi^{- 1}) \mathcal{E}_{2}$ is closed, and hence so is $\varphi \mathcal{E}_{2}$.
\end{example}
Here is an example, due to Tougeron \cite{Tougeron1}, of a non-closed ideal:
\begin{example} The function $ f^{+} = y^{2} + \exp{( - \frac{1}{x^{2}})}$ does not generate a closed ideal. Indeed, for every $\underline{a} \ \in \mathbb{R}^{2}$ we have $T_{\underline{a}} \exp{( - \frac{1}{x^{2}})} \ \in \ T_{\underline{a}} ( f^{+} \mathcal{E} (\mathbb{R}^{2}))$, yet $\exp{( - \frac{1}{x^{2}})}$ does not belong to $ f^{+} \mathcal{E} (\mathbb{R}^{2})$.
\end{example}
\begin{definition} Let $I = <\varphi_{1} , \ldots , \varphi_{p}>$ be the ideal of $\mathcal{E} (\Omega)$ generated by $\varphi_{1}$, $\ldots$ , $\varphi_{p}$. The module of relations of $I$ is the $\mathcal{E} (\Omega)$-module $\{ (h_{1} , \ldots , h_{p} ) \ / \ h_{i} \ \in \ \mathcal{E}(\Omega) , \ \sum_{i = 1}^{p} h_{i} \varphi_{i} = 0 \}$.
\end{definition}
We have the following remarkable result, due to Malgrange:
\begin{theorem} Let $I = <\varphi_{1} , \ldots , \varphi_{p}>$ be an ideal of $\mathcal{E}_{n}$ and let $\hat{h}_{1}$, $\ldots$ , $\hat{h}_{p}$ be elements of $\hat{\mathcal{E}}_{n}$ such that $ \sum_{i = 1}^{p} \hat{h}_{i} T_{\underline{0}} \varphi_{i} = 0$. If $I$ is closed, then there exist elements $h_{1}, \ldots , h_{p}$ of $\mathcal{E}_{n}$ such that $\sum_{i = 1}^{p} h_{i} \varphi_{i} = 0$ and $T_{\underline{0}} h_{i} = \hat{h}_{i}$.
\end{theorem}
In the same vein, let us recall Artin's celebrated approximation theorem \cite{Artin}:
\begin{theorem} Let $F$ be a germ of holomorphic map at the origin of $\mathbb{C}^{n} \times \mathbb{C}^{p}$. Consider the implicit equation:
\begin{equation} F( x , y ) = 0
\end{equation}
where $ x$ belongs to $\mathbb{C}^{n}$ and $y$ to $\mathbb{C}^{p}$. If $(10)$ has a formal solution $\hat{y} \in \hat{\mathcal{O}}(\mathbb{C}^{n}_{, 0})^{p}$ and $k \in \mathbb{N}$ is fixed, then there exists a convergent solution $y_{k} \in\mathcal{O}(\mathbb{C}^{n}_{, 0})^{p}$ of $(10)$ such that the jet of order $k$ of $y_{k}$ satisfies $J^{k} y_{k} = J^{k} \hat{y}$.
\end{theorem}
This statement has a real analytic version, which was completed as follows by J. C. Tougeron \cite{Tougeron3}:
\begin{theorem} Let $F$ be a germ of analytic map at the origin of $\mathbb{R}^{n} \times \mathbb{R}^{p}$ and $\hat{y} \in \hat{\mathcal{E}}_{n}^{p}$ a formal solution of the implicit equation $(10)$. There exists a germ of $\mathcal{C}^{\infty}$ map $y \in \mathcal{E}_{n}^{p}$ satisfying:
$ F( x , y ) = 0 $ and $T_{\underline{0}} y = \hat{y}$.
\end{theorem}
We now consider a ``completely formal'' implicit equation
\begin{equation}
\hat{F}( x , y ) = 0
\end{equation}
with $\hat{F} \in \hat{\mathcal{E}}_{n + p}^{l}$. We assume that there exists a formal solution $\hat{y} \in \hat{\mathcal{E}}_{n}^{p}$ of ${(11)}$: $\hat{F}(x, \hat{y}) = 0$. We ask whether there exist $F \in \mathcal{E}_{n + p}^{l}$ and $y \in \mathcal{E}_{n}^{p}$ such that $F(x , y(x)) = 0 $, $T_{\underline{0}} F = \hat{F}$ and $T_{\underline{0}} y = \hat{y}$. If $\hat{F}$ genuinely depends on $x$, i.e. $\frac{\partial \hat{F}}{\partial x_{i}} \neq 0$ for some index $i$, the solution is simple. Choose a realization $\tilde{F}$ of $\hat{F}$ and a realization $x \mapsto y ( x )$ of $\hat{y}$. The map $x \mapsto \tilde{F} ( x , y ( x )) = a ( x )$ is flat at the origin, in the sense that each component $a_{i}$ of $a$ is flat. We then set $F ( x , y ) = \tilde{F} (x , y ) - a ( x)$; clearly $T_{\underline{0}} F = \hat{F}$ and $F (x , y (x ) ) = 0 $. When $\hat{F} = \hat{F} ( y )$ does not depend on $x$ and one has a formal solution $\hat{y} ( x )$: $\hat{F} ( \hat{y} ( x ) ) = 0$, the problem seems delicate. In low dimension ($n = p = l = 1$) we obtain the following result:
\begin{theorem} Let $\hat{F} \in \hat{\mathcal{E}}_{2}$ and let $( \hat{y}_{1} , \hat{y}_{2}) \in \hat{\mathcal{E}}_{1}^{2}$ be a parametric solution of $\hat{F} = 0$, i.e. $\hat{F}( \hat{y}_{1} , \hat{y}_{2}) = 0$. There exist $F \in \mathcal{E}_{2}$ and $(y_{1}, y_{2}) \in \mathcal{E}_{1}^{2}$ satisfying $F(y_{1}, y_{2}) = 0$, $T_{\underline{0}} F = \hat{F}$ and $T_{\underline{0}}( y_{1} , y_{2} ) = ( \hat{y}_{1} , \hat{y}_{2})$.
\end{theorem}
\begin{proof}
We may assume that $\hat{F}$ and $(\hat{y}_{1}, \hat{y}_{2})$ are non-constant. Theorem 4.4 of D. Cerveau and J.-F. Mattei \cite{Cerveau2} adapts easily: there exists a formal diffeomorphism $\hat{\Phi} \in \widehat{\mathrm{Diff}}(\mathbb{R}^{2}_{0})$ such that $\hat{F} \circ \hat{\Phi} = P$ is a polynomial. If $(\hat{y}_{1} , \hat{y}_{2})$ is a solution of $\hat{F} = 0$, then $(\hat{Y}_{1}, \hat{Y}_{2}) = \hat{\Phi}^{- 1} (\hat{y}_{1} , \hat{y}_{2}) \in \hat{\mathcal{E}}_{1}^{2}$ is a solution of $P (\hat{Y}_{1} , \hat{Y}_{2}) = 0$. Tougeron's theorem ensures that there exists a solution $( Y_{1} , Y_{2} ) \in \mathcal{E}_{1}^{2}$ of $P ( Y_{1} , Y_{2} ) = 0$ with $T_{\underline{0}} ( Y_{1} , Y_{2} ) = ( \hat{Y}_{1} , \hat{Y}_{2} )$. Let $\Phi$ be a $\mathcal{C}^{\infty}$ realization of $\hat{\Phi}$. One checks that $F = P \circ \Phi^{- 1}$ and $( y_{1} , y_{2} ) = \Phi ( Y_{1} , Y_{2} )$ satisfy the statement of the theorem.
\end{proof}
\begin{conjecture}
The preceding problem has a positive answer in full generality.
\end{conjecture}
A classical result of Malgrange \cite{Malgrange} asserts that an analytic vector field $X \in \mathcal{X}_{2}$ possessing a non-constant formal first integral also possesses an analytic one. This result does not persist in class $\mathcal{C}^{\infty}$. Indeed, let $X = x_{2} \frac{\partial}{\partial x_{1}} - x_{1}\frac{\partial}{\partial x_{2}} - \exp{\frac{- 1}{x_{1}^{2} + x_{2}^{2}}} (x_{1} \frac{\partial}{\partial x_{1}} + x_{2}\frac{\partial}{\partial x_{2}})$. It has the formal first integral $x_{1}^{2} + x_{2}^{2}$; indeed $X ( x_{1}^{2} + x_{2}^{2} ) = - 2 ( x_{1}^{2} + x_{2}^{2} ) \exp{\frac{- 1}{x_{1}^{2} + x_{2}^{2}}}$ is flat at the origin, so $T_{\underline{0}} X ( x_{1}^{2} + x_{2}^{2} ) = 0$. However, $X$ has no non-constant $\mathcal{C}^{\infty}$ first integral. The reason is that all trajectories of $X$ accumulate on the singular point $\underline{0}$ (spirals). By contrast, we have the following statement, again specific to dimension two:
\begin{theorem} Let $\hat{X}$ and $\hat{F}$ be elements of $\hat{\mathcal{X}}_{2}$ and $\hat{\mathcal{E}}_{2}$ respectively. Assume that $\hat{F}$ is a non-constant first integral of $\hat{X}$ ($\hat{X} .\hat{F} = 0$). There exist $X \in \mathcal{X}_{2}$ and $F \in \mathcal{E}_{2}$ such that $X . F = 0$, $T_{\underline{0}} X = \hat{X}$ and $T_{\underline{0}} F = \hat{F}$.
\end{theorem}
\begin{proof}
Let $\hat{\Phi} \in \widehat{\mathrm{Diff}}_{2}$ be a formal diffeomorphism such that $\hat{F} \circ \hat{\Phi} = P$ is a polynomial. The field $\hat{Y} = \hat{\Phi}^{- 1}_{\ast}\hat{X} = \sum \hat{Y}_{j} \frac{\partial }{\partial x_{j}}$ has $P$ as a first integral:
$$\sum \hat{Y}_{j} \frac{\partial P}{\partial x_{j}} = 0$$
This equality can be interpreted as a linear relation between the polynomials $\frac{\partial P}{\partial x_{j}}$. Theorem 29 then produces a $\mathcal{C}^{\infty}$ vector field $Y = \sum Y_{j} \frac{\partial }{\partial x_{j}}$ such that $Y. P = 0$ and $T_{\underline{0}} Y_{j} = \hat{Y}_{j}$. Let $\Phi$ be a Borel realization of $\hat{\Phi}$; the field $X = \Phi_{\ast} Y$ and the function $F = P \circ \Phi^{- 1}$ are as required.
\end{proof}
In the same vein, the problem of separatrices has been considered by several authors, notably Dumortier \cite{Dumortier}, Roussarie \cite{Roussarie} \cite{Kelley}, etc. Let $X$ be a $\mathcal{C}^{\infty}$ vector field at the origin $\underline{0}$ of $\mathbb{R}^{2}$ with algebraically isolated singularity, let $T_{\underline{0}} X = \hat{X}$ be its Taylor expansion at $\underline{0}$, and let $\omega = i_{X} d x_{1} \wedge d x_{2} \in \Omega^{1}_{2}$. A formal separatrix of $X$ (or of $\omega$) is a non-constant formal parametrized curve $\hat{\gamma} = ( \hat{\gamma}_{1} , \hat{\gamma}_{2} ) \in \hat{\mathcal{E}}^{2}_{1}$ such that $\hat{\gamma} ( 0) = \underline{0}$ and $\hat{\gamma}^{\ast} \hat{\omega} = 0$.
Seidenberg's theorem on the reduction of singularities \cite{Mattei}, combined with the local study of the singularities appearing at the end of the reduction process, allows one to establish, with the preceding notation, the following:
\begin{theorem} Let $X$ be a $\mathcal{C}^{\infty}$ vector field with an isolated singularity at the origin of $\mathbb{R}^{2}$ and let $\hat{\gamma}$ be a non-trivial formal separatrix of $X$. There exists a $\mathcal{C}^{\infty}$ separatrix $\gamma \in \mathcal{E}_{1}^{2}$ of $X$ realizing $\hat{\gamma}$: $T_{\underline{0}} \gamma = \hat{\gamma}$ and $\gamma^{\ast} \omega = 0$.
\end{theorem}
Let us make a few comments on the statement above. For such a curve $\gamma$ there exists an irreducible $F \in \mathcal{E}_{2}$, with algebraically isolated singularity, such that $F^{- 1} ( 0 )$ equals the image of $\gamma$. Indeed, there exists $\hat{F} \in \hat{\mathcal{E}}_{2}$, with algebraically isolated singularity, such that $\hat{F} \circ \hat{\gamma} = 0$; up to reparametrization by an element $\hat{\tau} \in \hat{\mathcal{E}}_{1}$, writing $\hat{\gamma} = \hat{\delta} \circ \hat{\tau}$, the curve $\hat{\delta}$ is the unique formal curve with this property. On the other hand, once $\hat{\gamma}$ is fixed, the ideal $\langle \hat{F} \rangle$ generated by $\hat{F}$ is intrinsic. Let $F^{'}$ be a $\mathcal{C}^{\infty}$ realization of $\hat{F}$. The morphism $\pi$ resolving the singularities of $\hat{\gamma}$ (which is also the one resolving $\hat{F}$) turns $( F^{'} \circ \pi = 0 )$ into a curve with normal crossings. Let $\gamma^{'}$ be defined by $\pi \circ \gamma^{'} = \gamma$. One can modify $F^{'} \circ \pi$ locally, by composing on the right with a germ of diffeomorphism $\varphi$ flat to the identity along the divisor $\pi^{- 1} ( \underline{0} )$, so that $F^{'} \circ \pi \circ \varphi$ vanishes precisely on $\gamma^{'}$ ($F^{'} \circ \pi \circ \varphi \circ \gamma^{'} \equiv 0$). The function $F$ defined by $F \circ \pi = F^{'} \circ \pi \circ \varphi$ is $\mathcal{C}^{\infty}$ away from the origin, and $( F - F^{'} ) \circ \pi$ is flat along $\pi^{- 1} ( \underline{0} )$. It follows that $F$ extends in a $\mathcal{C}^{\infty}$ manner at $\underline{0} \in \mathbb{R}^{2}$. \\
\begin{conjecture}
Let $X$ be a germ of $\mathcal{C}^{\infty}$ vector field at the origin of $\mathbb{R}^{n}$, with algebraically isolated singularity, possessing a formal invariant curve $\hat{\gamma} \in \hat{\mathcal{E}}_{1}^{n}$. Then there exists $\gamma \in \mathcal{E}_{1}^{n}$ such that $T_{\underline{0}} \gamma = \hat{\gamma}$ and $\gamma$ is an invariant curve of $X$.
\end{conjecture}
The counterexample to the existence of a $\mathcal{C}^{\infty}$ first integral in the presence of a formal first integral is due to the spiraling phenomenon. The presence of a formal (and hence $\mathcal{C}^{\infty}$) separatrix forbids this spiraling. R. Roussarie \cite{Roussarie} proved that if the $\mathcal{C}^{\infty}$ field $X$, with isolated singularity at the origin of $\mathbb{R}^{2}$, has a formal separatrix and a formal first integral $\hat{F}$ (which amounts to requiring that the formal zero set of $\hat{F}$ is not reduced to $\{ \underline{0} \}$), then $X$ has a non-flat first integral $F$ of class $\mathcal{C}^{\infty}$. This statement can be sharpened to obtain the condition $T_{\underline{0}} F = \hat{F}$ (to see this, one works with formal minimal first integrals). \\
Let $S$ be a separatrix of the field $X$, with isolated singularity at the origin of $\mathbb{R}^{2}$. If $S$ is defined by $( F = 0 )$, where $F \in \mathcal{E}_{2}$ has an isolated singularity, then $X. F$ is divisible by $F$, i.e. $X. F = g. F$ for some $g \in \mathcal{E}_{2}$.
1$ and $G_2$. To see that $\Phi$ is injective, suppose $\Phi(u_1 \oplus u_2) = 0$. Then $u_1$ and $u_2$ vanish at $x$ by definition of $\c U_0(G_j,L_j,M)$, and they vanish on the rest of $G_1$ and $G_2$ respectively because $G_1$ and $G_2$ only intersect at $x$. To show surjectivity of $\Phi$, let $u \in \c U_0(G,L,M)$, and let $u_j = u|_{V_j}$. Clearly, $u_j = 0$ on $\partial V_j$ since $\partial V_j \subseteq \partial V$. Moreover, $L_j u_j(y) = Lu(y) = 0$ for every $y \in V_j \setminus \{x\}$. Because $\im (L_j \otimes \id_M) \subseteq \ker (\epsilon_j \otimes \id_M)$, this implies that $L_j u(x) = 0$ also. Thus, $u_j \in \c U_0(G_j,L_j,M)$ and hence $u = \Phi(u_1 \oplus u_2) \in \im \Phi$. So $\Phi$ is an isomorphism as desired.
In the case of a disjoint union, the claim for $\tilde{\Upsilon}$ is proved in a similar way after noting that
\[
\ker \epsilon \cong \ker \epsilon_1 \oplus \ker \epsilon_2 \oplus R.
\]
The claim for $\c U$ follows by applying $\Hom(-,M)$. The argument for $\c U_0$ is similar to the boundary wedge-sum case but easier.
\end{proof}
\begin{definition}
{\bf Completely reducible finite $\partial$-graphs} are defined to be the smallest class $\c C$ of finite $\partial$-graphs that contains the empty graph and is closed under layerable extensions, disjoint unions, and boundary wedge-sums. More informally, a graph $G$ is completely reducible if it can be reduced to nothing by layer-stripping and splitting apart boundary wedge-sums and disjoint unions.
\end{definition}
\begin{definition} \label{def:completelyreducible}
A finite $\partial$-graph is {\bf irreducible} if it has no boundary spikes, boundary edges, or isolated boundary vertices, and it is not a boundary wedge-sum or disjoint union. Note that every irreducible $\partial$-graph is a flower.
\end{definition}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.6]
\begin{scope}
\node[bd] (A) at (0,0) {};
\node[int] (C) at (0,1) {};
\node[bd] (D) at (0.7,1.7) {};
\node[bd] (E) at (-0.7,1.7) {};
\node[int] (F) at (0,2.4) {};
\node[int] (G) at (0,-1) {};
\node[bd] (H) at (0,-2) {};
\draw (G) to[bend left = 35] (H);
\draw (H) to [bend left = 35] (G);
\draw (G) to (A);
\draw (A) to (C); \draw (C) to (D);
\draw (C) to (E); \draw (D) to (F); \draw (E) to (F);
\end{scope}
\draw[->] (0.9,0.7) to (2.1,1.3);
\draw[->] (0.9,-0.7) to (2.1,-1.3);
\node at (1.5,0) {(4)};
\begin{scope}[shift = {(3,-1)}]
\node[bd] (A) at (0,0) {};
\node[int] (G) at (0,-1) {};
\node[bd] (H) at (0,-2) {};
\draw (G) to[bend left = 35] (H);
\draw (H) to [bend left = 35] (G);
\draw (G) to (A);
\end{scope}
\begin{scope}[shift = {(3,1)}]
\node[bd] (A) at (0,0) {};
\node[int] (C) at (0,1) {};
\node[bd] (D) at (0.7,1.7) {};
\node[bd] (E) at (-0.7,1.7) {};
\node[int] (F) at (0,2.4) {};
\draw (A) to (C); \draw (C) to (D);
\draw (C) to (E); \draw (D) to (F); \draw (E) to (F);
\end{scope}
\draw[->] (3.8,-2) to (5.2,-2);
\draw[->] (3.8,2) to (5.2,2);
\node at (4.5,1.5) {(2)};
\node at (4.5,-1.5) {(2)};
\begin{scope}[shift = {(6,-1)}]
\node[bd] (G) at (0,-1) {};
\node[bd] (H) at (0,-2) {};
\draw (G) to[bend left = 35] (H);
\draw (H) to [bend left = 35] (G);
\end{scope}
\begin{scope}[shift = {(6,1)}]
\node[bd] (C) at (0,1) {};
\node[bd] (D) at (0.7,1.7) {};
\node[bd] (E) at (-0.7,1.7) {};
\node[int] (F) at (0,2.4) {};
\draw (C) to (D);
\draw (C) to (E); \draw (D) to (F); \draw (E) to (F);
\end{scope}
\draw[->] (6.8,-2) to (8.2,-2);
\draw[->] (6.8,2) to (8.2,2);
\node at (7.5,1.5) {(3)};
\node at (7.5,-1.5) {(3)};
\begin{scope}[shift = {(9,-1)}]
\node[bd] (G) at (0,-1) {};
\node[bd] (H) at (0,-2) {};
\end{scope}
\begin{scope}[shift = {(9,1)}]
\node[bd] (C) at (0,1) {};
\node[bd] (D) at (0.7,1.7) {};
\node[bd] (E) at (-0.7,1.7) {};
\node[int] (F) at (0,2.4) {};
\draw (D) to (F); \draw (E) to (F);
\end{scope}
\draw[->] (9.8,-2) to (11.2,-2);
\draw[->] (9.8,2) to (11.2,2);
\node at (10.5,1.5) {(2)};
\node at (10.5,-1.5) {(1)};
\begin{scope}[shift = {(12,1)}]
\node[bd] (C) at (0,1) {};
\node[bd] (D) at (0.7,1.7) {};
\node[bd] (F) at (0,2.4) {};
\draw (D) to (F);
\end{scope}
\node at (12,-2) {$\varnothing$};
\draw[->] (12.8,2) to (14.2,2);
\node at (13.5,1.5) {(3)};
\begin{scope}[shift = {(15,1)}]
\node[bd] (C) at (0,1) {};
\node[bd] (D) at (0.7,1.7) {};
\node[bd] (F) at (0,2.4) {};
\end{scope}
\draw[->] (15.8,2) to (17.2,2);
\node at (16.5,1.5) {(1)};
\node at (18,2) {$\varnothing$};
\end{tikzpicture}
\end{center}
\caption{A completely reducible $\partial$-graph. The boundary vertices are black and interior vertices are white. The operations are (1) isolated boundary vertex deletion, (2) boundary spike contraction, (3) boundary edge deletion, (4) splitting a boundary wedge-sum.}
\label{fig:CRexample}
\end{figure}
The following is an analogue of Proposition \ref{prop:layerablebehavior}:
\begin{proposition} \label{prop:reduciblebehavior}
Let $G$ be a finite nonempty completely reducible $\partial$-graph. If $(G,L)$ is a normalized $R^\times$-network, then $(G,L)$ is non-degenerate and $\tilde{\Upsilon}(G,L)$ is a free $R$-module of rank $|\partial V(G)| - 1$.
\end{proposition}
\begin{proof}
Let $\mathcal{C}$ be the class of $\partial$-graphs for which the claims hold, together with the empty $\partial$-graph. Lemmas \ref{lem:layeringUpsilon3} and \ref{lem:wedgesumUpsilon} imply that $\mathcal{C}$ is closed under layerable extensions, disjoint unions, and boundary wedge-sums. Since $\mathcal{C}$ contains the empty $\partial$-graph and is closed under these operations, it contains every completely reducible $\partial$-graph, which proves the claims.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.7]
\begin{scope}
\node at (0,2) {$\tilde{G}$};
\node[int] (2A) at (-3,-1.5) [label = left:$2$] {};
\node[bd] (1B) at (-3,1.5) [label = left:$1$] {};
\node[bd] (1A) at (-2,-0.5) [label = left:$1$] {};
\node[int] (2B) at (-2,0.5) [label = left:$2$] {};
\node[int] (3A) at (-1,-0.5) [label = below:$3$] {};
\node[int] (3B) at (-1,0.5) [label = above:$3$] {};
\node[bd] (4A) at (0,-0.5) [label = below:$4$] {};
\node[bd] (4B) at (0,0.5) [label = above:$4$] {};
\node[int] (5A) at (1,-0.5) [label = below:$5$] {};
\node[int] (5B) at (1,0.5) [label = above:$5$] {};
\node[bd] (7A) at (2,-0.5) [label = right:$7$] {};
\node[int] (6B) at (2,0.5) [label = right:$6$] {};
\node[int] (6A) at (3,-1.5) [label = right:$6$] {};
\node[bd] (7B) at (3,1.5) [label = right:$7$] {};
\draw (3A) to (1A) to (2B) to (3B);
\draw (3B) to (1B) to (2A) to (3A);
\draw (3A) to (4A) to (5A);
\draw (3B) to (4B) to (5B);
\draw (5A) to (7A) to (6B) to (5B);
\draw (5B) to (7B) to (6A) to (5A);
\end{scope}
\begin{scope}[shift={(7,0)}]
\node at (0,1) {$G$};
\node[bd] (1) at (-2,0.5) [label = left:$1$] {};
\node[int] (2) at (-2,-0.5) [label = left:$2$] {};
\node[int] (3) at (-1,0) [label = below:$3$] {};
\node[bd] (4) at (0,0) [label = below:$4$] {};
\node[int] (5) at (1,0) [label = below:$5$] {};
\node[int] (6) at (2,-0.5) [label = right:$6$] {};
\node[bd] (7) at (2,0.5) [label = right:$7$] {};
\draw (1) to (2) to (3) to (1);
\draw (3) to (4) to (5);
\draw (5) to (6) to (7) to (5);
\end{scope}
\end{tikzpicture}
\caption{A covering map $f \colon \tilde{G} \to G$ such that $G$ decomposes as a boundary wedge-sum and $\tilde{G}$ does not. In fact, $G$ is completely reducible and $\tilde{G}$ is irreducible.} \label{fig:reduciblefunctorialityfail}
\end{center}
\end{figure}
Unlike layer-stripping operations, the operation of splitting apart a boundary wedge-sum does \emph{not} pull back through unramified $\partial$-graph morphisms. The problem is illustrated in Figure \ref{fig:reduciblefunctorialityfail}. However, we do have the following:
\begin{observation} \label{obs:wedgesumfunctoriality}
Suppose that $G$ is a sub-$\partial$-graph of $H$ and that $H$ decomposes as a boundary wedge-sum or disjoint union of $H_1$ and $H_2$. Then $G$ decomposes as a boundary wedge-sum or disjoint union of $G \cap H_1$ and $G \cap H_2$. Together with Lemma \ref{lem:layerstrippingfunctoriality}, this implies that a sub-$\partial$-graph of a completely reducible $\partial$-graph is also completely reducible.
\end{observation}
\subsection{Algebraic Characterization}
We shall prove an algebraic characterization of complete reducibility in the same way as we did for layerability (Theorem \ref{thm:layerabilitycharacterization}). As in Lemma \ref{lem:layerabilityfieldcharacterization}, we first construct degenerate networks over fields.
\begin{lemma}\label{lem:reducibilityfieldcharacterization}
Let $G$ be a finite $\partial$-graph and let $F$ be an infinite field. Then $G$ is completely reducible if and only if every normalized $F^\times$-network on $G$ is non-degenerate.
\end{lemma}
\begin{proof}
The implication $\implies$ follows from Proposition \ref{prop:reduciblebehavior}.
Let $G'$ be a minimal sub-$\partial$-graph of $G$ which is not completely reducible. Note that $G'$ must be irreducible. As in the proof of Lemma \ref{lem:layerabilityfieldcharacterization}, it suffices to construct degenerate edge weights on $G'$; renaming, we may thus assume that $G$ itself is irreducible.
Our strategy is to choose a potential function $u$ first, with $u|_{\partial V} = 0$, and \emph{then} choose an edge-weight function $w$ that will make $L u \equiv 0$. Let $S \subseteq E(G)$ be the union of all cycles, i.e., $S$ consists of every edge that is part of some cycle. Note that every edge in $S$ must have endpoints in distinct components of $G \setminus S$: otherwise a path in $G \setminus S$ joining its endpoints would close up with that edge into a cycle, forcing the edges of the path into $S$. Define $u$ to be zero on every component of $G \setminus S$ that contains a boundary vertex of $G$, and assign $u$ a different nonzero value on each component of $G \setminus S$ that does not contain any boundary vertices.
We need to guarantee that $u$ is not identically zero. But in fact, we claim that $u$ is nonzero at every interior vertex. To prove this, it suffices to show that every edge $e$ with endpoints $x \in \partial V$ and $y \in V^\circ$ must be in $S$, that is, such an edge $e$ must be contained in some cycle; since $G$ has no boundary edges, this forces every component of $G \setminus S$ containing an interior vertex to contain no boundary vertex at all. By hypothesis, our edge $e$ is not a boundary spike. Thus, there is some other edge $e' \neq e$ incident to $x$. Let $z$ be the other endpoint of $e'$. Since $G$ is not a boundary wedge-sum, deleting $x$ leaves $G$ connected. Thus, there is a path $P = \{e_1,\dots,e_k\}$ from $y$ to $z$ which avoids $x$. Then $P \cup \{e,e'\}$ is a cycle containing $e$. Consequently, $u$ is nonzero at every interior vertex.
Now we choose the edge weights. Choose oriented cycles $C_1, \dots, C_k$ such that $S = \bigcup_{j=1}^k (C_j \cup \overline{C}_j)$. If $e \in C_j$, then $e \in S$ and hence $e_+$ and $e_-$ are in distinct components of $G \setminus S$, so $du(e) = u(e_+) - u(e_-) \neq 0$. For each $j$, define
\[
w_j(e) = w_j(\overline{e}) = \begin{cases}
1/du(e), \text{ for } e \in C_j \\
0, \text{ for } e \not \in C_j \cup \overline{C}_j.
\end{cases}
\]
Then $w_j(e) du(e)$ is $1$ on $C_j$ and $-1$ on $\overline{C}_j$ and vanishes elsewhere. Therefore, if we let $L_j$ be the Laplacian associated to the edge-weight function $w_j$, then we have $L_j u = 0$. For each $e \in S$, there is a weight function $w_j$ with $w_j(e) \neq 0$. Since $F$ is infinite and the graph is finite, we may choose $\alpha_j \in F$ such that $\sum_{j=1}^k \alpha_j w_j(e) \neq 0$ for all $e \in S$ simultaneously.
Set $w = 1_{E \setminus S} + \sum_{j=1}^k \alpha_j w_j$ and let $L$ be the associated Laplacian. Then $w(e) \neq 0$ for each $e$. Because $u$ is constant on each component of $G \setminus S$, we know that $u(e_+) - u(e_-) = 0$ for each $e \in E \setminus S$. Thus, these edges do not contribute to $L u$, and so
\[
L u = \sum_{j=1}^k \alpha_j L_ju = 0.
\]
Thus, $(G,L)$ is the desired degenerate $F^\times$-network because $0 \neq u \in \c U_0(G,L,F)$.
\end{proof}
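To make the recipe in this proof concrete, here is a small computational sketch (our own illustration, with ad hoc names, not taken from the text) that runs it on the smallest irreducible $\partial$-graph: one boundary vertex $b$ and one interior vertex $i$ joined by two parallel edges, so that $S$ consists of both edges and the single oriented cycle traverses one edge from $b$ to $i$ and the other back.
\begin{verbatim}
# Degenerate normalized network from the proof, on the two-edge "banana" graph:
# boundary vertex b, interior vertex i, parallel edges e1 (b -> i) and e2 (i -> b).
from fractions import Fraction

u = {'b': Fraction(0), 'i': Fraction(1)}      # zero on the boundary component of G\S
edges = [('e1', 'b', 'i'), ('e2', 'i', 'b')]  # (name, e_-, e_+); S = {e1, e2}

du = {name: u[head] - u[tail] for name, tail, head in edges}
w = {name: 1 / du[name] for name in du}       # w = 1/du on the cycle; here w = {e1: 1, e2: -1}

# Lu(x) = sum over oriented edges pointing into x of w(e) * (u(x) - u(e_-)).
Lu = {v: Fraction(0) for v in u}
for name, tail, head in edges:
    Lu[head] += w[name] * (u[head] - u[tail])
    Lu[tail] += w[name] * (u[tail] - u[head])

print(Lu)  # both entries are 0
\end{verbatim}
Since $u$ is nonzero, $u|_{\partial V} = 0$, and $Lu \equiv 0$, this normalized $\mathbb{Q}^{\times}$-network is degenerate, as the lemma predicts for a $\partial$-graph that is not completely reducible.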
We proved equivalent algebraic characterizations for layerability by assigning indeterminates to the edges (see Proposition \ref{prop:genericfieldnetwork}). The analogue for normalized networks is as follows.
\begin{definition}
Let $G$ be a $\partial$-graph and let $F$ be a field. Then $\tilde{R} = \tilde{R}(G,F) = F[t_e^{\pm 1}: e \in E]$ will denote the Laurent polynomial algebra over $F$ with generators indexed by the edges of $G$. Let $\tilde{L} = \tilde{L}(G,F)$ denote the weighted Laplacian over $\tilde{R}$ given by $\tilde{w}(e) = t_e$.
\end{definition}
\begin{proposition} \label{prop:genericfieldnetwork2}
Let $G$ be a finite $\partial$-graph such that each component contains at least one boundary vertex, and let $F$ be a field. Then $(G, \tilde{L})$ is non-degenerate. Moreover, $\tilde{\Upsilon}(G, \tilde{L})$ is a flat $\tilde{R}$-module if and only if every normalized $F^\times$-network on $G$ is non-degenerate.
\end{proposition}
\begin{proof}
To prove that $(G,\tilde{L})$ is non-degenerate, it suffices to prove that each connected component of $(G,\tilde{L})$ is non-degenerate. Since each component of the original graph contains at least one boundary vertex, we may therefore assume without loss of generality that $G$ is connected and has a boundary vertex $x$.
Recall that $(G,\tilde{L})$ is non-degenerate if and only if $\tilde{L} \colon \tilde{R}V^\circ \to \tilde{R}V$ is injective (see the proof of Proposition \ref{prop:tor}). For our given boundary vertex $x$, let $\tilde{L}_x \colon \tilde{R}(V \setminus \{x\}) \to \tilde{R}(V \setminus \{x\})$ be the Laplacian $\tilde{L}$, with the domain restricted to chains in $\tilde{R}(V \setminus \{x\}) \subseteq \tilde{R}V$, and with the output truncated by applying the canonical projection $\tilde{R}V \to \tilde{R}(V \setminus \{x\})$. Then injectivity of $\tilde{L}_x$ implies injectivity of $\tilde{L} \colon \tilde{R}V^\circ \to \tilde{R}V$, since $V^\circ \subseteq V \setminus \{x\}$ (as $x \in \partial V$). By the weighted matrix-tree theorem (see \cite[Theorem 1]{RF} and \cite[Theorem 4.2]{RK}), we have
\[
\det \tilde{L}_x = \sum_{T\in\text{Span}(G)} \prod_{e \in T} t_e \neq 0,
\]
where Span$(G)$ denotes the set of spanning trees of $G$. Since we assumed $G$ is connected, $\det \tilde{L}_x$ is a nonzero polynomial in $(t_e)_{e \in E}$ and hence is a nonzero element of the Laurent polynomial algebra $\tilde{R}$. Since $\tilde{R}$ is an integral domain, it follows that $\tilde{L}_x$ is injective. This completes the proof that $(G,\tilde{L})$ is non-degenerate.
The rest of the proof is exactly the same as for Proposition \ref{prop:genericfieldnetwork}.
\end{proof}
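As a sanity check on the determinant identity used in this proof, the following short script (our own illustration; the symbols $t_{01}, t_{02}, t_{12}$ play the role of the $t_e$) verifies the weighted matrix-tree formula $\det \tilde{L}_x = \sum_{T \in \mathrm{Span}(G)} \prod_{e \in T} t_e$ for the triangle graph, with vertex $0$ playing the role of the deleted boundary vertex $x$.
\begin{verbatim}
import sympy as sp

t01, t02, t12 = sp.symbols('t01 t02 t12')
# Weighted Laplacian of the triangle on vertices 0, 1, 2.
L = sp.Matrix([
    [t01 + t02, -t01,       -t02      ],
    [-t01,       t01 + t12, -t12      ],
    [-t02,      -t12,        t02 + t12],
])
L_x = L[1:, 1:]                          # delete the row and column of vertex x = 0
trees = t01*t02 + t01*t12 + t02*t12      # the three spanning trees of the triangle
print(sp.simplify(sp.det(L_x) - trees))  # 0
\end{verbatim}
Since this determinant is a nonzero Laurent polynomial, the reduced Laplacian is injective over $\tilde{R}$, exactly as in the proof.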
The following theorem is proved in the same way as Theorem \ref{thm:layerabilitycharacterization}.
\begin{theorem} \label{thm:reducibilitycharacterization}
Let $G$ be a finite $\partial$-graph such that every component has at least one boundary vertex. The following are equivalent:
\begin{enumerate}
\item $G$ is completely reducible.
\item For every ring $R$, every normalized $R^\times$-network on $G$ is non-degenerate.
\item For every ring $R$, for every non-degenerate normalized $R^\times$-network $(G,L)$ on the $\partial$-graph $G$, $\tilde{\Upsilon}(G,L)$ is a free $R$-module.
\item There exists an infinite field $F$ such that $\tilde{\Upsilon}(G, \tilde{L}(G,F))$ is a flat $\tilde{R}(G,F)$-module.
\item There exists an infinite field $F$ such that every normalized $F^\times$-network on $G$ is non-degenerate.
\end{enumerate}
\end{theorem}
\subsection{Boundary-Interior Bipartite $\partial$-Graphs}
The correspondence between algebraic and $\partial$-graph-theoretic conditions in Theorem \ref{thm:reducibilitycharacterization} is illustrated by the following proposition about bipartite graphs. We present both an algebraic proof and an inductive $\partial$-graph-theoretic proof for comparison. We say a $\partial$-graph is {\bf boundary-interior bipartite} if every edge has one interior endpoint and one boundary endpoint (similar to Example \ref{ex:completebipartite}).
\begin{proposition} \label{prop:boundaryinteriorbipartite}
Suppose that $G$ is a nonempty finite boundary-interior bipartite $\partial$-graph, $|V^\circ| \geq |\partial V|$, and every interior vertex has degree $\geq 2$. Then $G$ is not completely reducible.
\end{proposition}
\begin{proof}[Algebraic proof]
Let $F$ be any field other than the field $F_2$ with two elements. We will construct a degenerate $F^\times$-network on $G$. Since each interior vertex has at least two edges incident to it and each edge is only incident to one interior vertex, we can choose $w: E \to F^\times$ such that $\sum_{e \in \mathcal{E}(x)} w(e) = 0$ for each $x \in V^\circ$. If $u \in 0^{\partial V} \times F^{V^\circ} \subset F^V$, then $L u|_{V^\circ} = 0$ since
\[
L u(x) = \sum_{e: e_+ = x} w(e)(u(x) - u(e_-)) = \sum_{e: e_+ = x} w(e) u(x) = 0 \text{ for all } x \in V^\circ.
\]
Combining this with the fact that $\im L \subseteq \ker \epsilon$ yields
\[
L(0^{\partial V} \times F^{V^\circ}) \subseteq \left\{\phi \in F^{\partial V}: \sum_{x \in \partial V} \phi(x) = 0 \right\} \times 0^{V^\circ}.
\]
Therefore, $\dim L(0^{\partial V} \times F^{V^\circ}) \leq |\partial V| - 1 < |V^\circ|$, since we assumed $|\partial V| \leq |V^\circ|$. Hence, by the rank-nullity theorem,
\[
\c U_0(G,L, F) = \ker(L\colon F^{V^\circ} \to F^V) \neq 0. \qedhere
\]
\end{proof}
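Before giving the $\partial$-graph-theoretic proof, here is a small script (our own illustration; the vertex labels are ad hoc) that exhibits the degenerate network of the algebraic proof on the boundary-interior bipartite $\partial$-graph with boundary vertices $x, y$, interior vertices $a, b$, and edges $ax, ay, bx, by$, over $F = \mathbb{Q}$.
\begin{verbatim}
import sympy as sp

V = ['x', 'y', 'a', 'b']                     # x, y boundary; a, b interior
weights = {('a','x'): 1, ('a','y'): -1,      # weights at each interior vertex sum to 0
           ('b','x'): 1, ('b','y'): -1}

idx = {v: k for k, v in enumerate(V)}
L = sp.zeros(4, 4)
for (p, q), w in weights.items():
    i, j = idx[p], idx[q]
    L[i, i] += w; L[j, j] += w
    L[i, j] -= w; L[j, i] -= w

# Restrict L to potentials supported on the interior vertices a and b.
L_int = L.extract(list(range(4)), [idx['a'], idx['b']])
print(L_int.nullspace())   # one-dimensional: u(a) = 1, u(b) = -1
\end{verbatim}
The nontrivial kernel exhibits a nonzero element of $\c U_0(G,L,\mathbb{Q})$, so the network is degenerate and $G$ is not completely reducible, in line with Lemma \ref{lem:reducibilityfieldcharacterization}.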
\begin{proof}[$\partial$-graph-theoretic proof]
By Observation \ref{obs:wedgesumfunctoriality}, it suffices to show that $G$ has a sub-$\partial$-graph which is not completely reducible. We proceed by induction on the number of vertices.
Since $G$ is nonempty and $|V^\circ| \geq |\partial V|$, $G$ must have at least one interior vertex $x$. By assumption $x$ has some neighbor $y$, and $y$ must be a boundary vertex since the $\partial$-graph is boundary-interior bipartite. Therefore, $G$ must have at least one boundary vertex and one interior vertex. If $G$ has only two vertices, it must have exactly one interior vertex and one boundary vertex with at least two parallel edges between them. Then $G$ is irreducible.
Suppose $G$ has $n > 2$ vertices and divide into cases:
\begin{itemize}
\item If $G$ is irreducible, we are done.
\item Suppose $G$ has a boundary spike $(x,y)$ with $x \in \partial V$ and $y \in V^\circ$. Contract the spike; then $y$ becomes a boundary vertex, and by assumption all of its neighbors are boundary vertices of $G$. Thus, we can delete the boundary edges incident to $y$ and then delete the now isolated boundary vertex $y$ to obtain a sub-$\partial$-graph $G'$ which satisfies the original hypotheses. The new $\partial$-graph $G'$ is nonempty because $|V(G)| > 2$. By inductive hypothesis, $G'$ is not completely reducible.
\item If $G$ can be split apart as a boundary wedge sum or a disjoint union, then each piece is boundary-interior bipartite with interior vertices that have degree $\geq 2$. Moreover, one of the two subgraphs must have $|\partial V| \leq |V^\circ|$, and hence is not completely reducible by inductive hypothesis.
\item $G$ has no boundary edges by assumption. Moreover, if $G$ has an isolated boundary vertex, that can be treated as a special case of disjoint unions.
\end{itemize}
\end{proof}
\section{Network Duality} \label{sec:duality}
\subsection{Dual Circular Planar Networks, Harmonic Conjugates}
As shown in \cite[Theorem 2]{CoriRossin}, dual planar graphs have isomorphic critical groups. In this section, we generalize this result to circular planar normalized $R^\times$-networks. The theory here adapts the ideas of duality and discrete complex analysis found in \cite[\S 2]{Mercat}, \cite[\S 10]{CMM}, \cite{Perry}. In this section, all the networks will be normalized (that is, they will satisfy $d = 0$).
\begin{definition}
A {\bf circular planar $\partial$-graph} $G$ is a (finite) $\partial$-graph embedded in the closed unit disk $\overline{D}$ in the complex plane such that $V\cap \partial D=\partial V$. The {\bf faces} of $G$ are the components of $D\setminus G$.
\end{definition}
\begin{definition}
A connected circular planar $\partial$-graph has a {\bf circular planar dual} $G^\dagger$ defined as follows: The vertices of $G^\dagger$ correspond to the faces of $G$; each vertex of $G^\dagger$ is placed in the interior of the corresponding face of $G$. The edges of $G^\dagger$ correspond to the edges of $G$. For each oriented edge $e$ of $G$, there is a dual edge $e^\dagger$ where $e_+^\dagger$ corresponds to the face on the right of $e$ and $e_-^\dagger$ corresponds to the face on the left of $e$. A vertex of $G^\dagger$ is considered a boundary vertex if the corresponding face has a side along $\partial D$. For further explanation and illustration, see \cite[Definition 5.1 and Figure 1]{Perry}.
\end{definition}
\begin{remark}
The planar dual is constructed in a similar fashion for a connected planar network without boundary, and the process is well explained in \cite[\S 2.1 and Figure 2]{Mercat}. To incorporate planar networks without boundary into the circular planar framework, we may designate an arbitrary vertex to be a boundary vertex and embed the $\partial$-graph into the disk.
\end{remark}
\begin{definition}
If $(G,L)$ is a circular planar normalized $R^\times$-network, then the \emph{dual network} $(G^\dagger,L^\dagger)$ is the network on $G^\dagger$ with $w(e^\dagger) = w(e)^{-1}$. We make the same definition for planar normalized $R^\times$-networks without boundary.
\end{definition}
\begin{theorem} \label{thm:duality}
If $(G,L)$ is a connected circular planar normalized $R^\times$-network, then
\[
\tilde{\Upsilon}(G^\dagger, L^\dagger) \cong \tilde{\Upsilon}(G,L).
\]
The same holds for planar normalized $R^\times$-networks without boundary.
\end{theorem}
Theorem \ref{thm:duality} generalizes \cite[Theorem 2]{CoriRossin} to $R^\times$-networks. Our proof combines ideas from \cite[\S 26 - 29]{Biggs} and \cite[\S 7]{CM}.
\begin{proof}
Consider the circular planar case; the proof for planar networks without boundary is the same. The result follows from reformulating $\tilde{\Upsilon}$ in terms of oriented edges rather than vertices. Recall that $C_1(G)$ is the free $R$-module on the oriented edges $E$ modulo the relations $\overline{e} = -e$ (see \S \ref{subsec:discretedifferentialgeometry}). Then $\ker \epsilon$ can be identified with the quotient of $C_1(G)$ by the submodule generated by oriented cycles. The cycle submodule is in fact generated by the oriented boundaries of interior faces. Moreover, $L(RV^\circ)$ corresponds to the submodule of $C_1(G)$ generated by $\sum_{e \in \mathcal{E}(x)} w(e) e$ for $x \in V^\circ$. The edges bounding an interior face of $G$ correspond to the edges incident to an interior vertex in $G^\dagger$. Therefore,
\begin{align*}
\tilde{\Upsilon}(G,L) &\cong \frac{C_1(G)}{(\sum_{e^\dagger \in \mathcal{E}(x)} e: x \in V^\circ(G^\dagger)) + (\sum_{e \in \mathcal{E}(x)} w(e) e: x \in V^\circ(G))} \\
\tilde{\Upsilon}(G^\dagger,L^\dagger) &\cong \frac{C_1(G^\dagger)}{(\sum_{e^\dagger \in \mathcal{E}(x)} w(e^\dagger) e^\dagger: x \in V^\circ(G^\dagger)) + (\sum_{e \in \mathcal{E}(x)} e^\dagger: x \in V^\circ(G))}.
\end{align*}
Since $w(e^\dagger) = w(e)^{-1}$, we can define an isomorphism $\tilde{\Upsilon}(G,L) \to \tilde{\Upsilon}(G^\dagger,L^\dagger)$ by $e \mapsto w(e)^{-1} e^\dagger$.
\end{proof}
Application of $\Hom(-,M)$ yields the following discrete-complex-analytic interpretation of network duality, as in \cite[\S 2]{Mercat}, \cite[\S 7]{Perry}:
\begin{proposition} \label{prop:harmonicconjugates}
Let $(G,L)$ be a circular planar normalized $R^\times$-network. Modulo constant functions, for every $M$-valued harmonic function $u$ on $(G,L)$, there is a unique harmonic conjugate $v$ on $(G^\dagger,L^\dagger)$ satisfying the discrete Cauchy-Riemann equation $w(e)du(e) = dv(e^\dagger)$, where $du(e) = u(e_+) - u(e_-)$ and $dv(e^\dagger) = v(e_+^\dagger) - v(e_-^\dagger)$. Moreover, a function $u: V(G) \to M$ is harmonic if and only if there exists a function $v$ such that $w(e)du(e) = dv(e^\dagger)$. The same holds for planar normalized $R^\times$-networks without boundary.
\end{proposition}
\begin{proof}
Given our interpretation of $\tilde{\Upsilon}(G,L)$ in the previous proof, a harmonic function modulo constants is equivalent to a map $\phi\colon E(G) \to M$ such that $\phi(e)$ sums to zero around every oriented cycle and $\sum_{e \in \mathcal{E}(x)} w(e) \phi(e) = 0$ for each interior vertex; the correspondence between $u$ and $\phi$ is given by $\phi(e) = du(e)$. For every such $\phi$, we can define a similar function $\psi$ on the dual network by $\psi(e^\dagger) = w(e) \phi(e)$. This proves the existence and uniqueness of harmonic conjugates.
Next, we must prove that if $u$ and $v$ satisfy $w(e) du(e) = dv(e^\dagger)$, then $u$ is harmonic. But note that for each $x \in V^\circ(G)$, we have
\[
Lu(x) = \sum_{e \in \mathcal{E}(x)} w(e) du(e) = \sum_{e \in \mathcal{E}(x)} dv(e^\dagger) = 0
\]
\]
because $\{e^\dagger: e \in \mathcal{E}(x)\}$ is a cycle in $G^\dagger$. The proof for the case without boundary is the same.
\end{proof}
\begin{proposition} \label{prop:dualCR}
Let $G$ be a connected circular planar $\partial$-graph. Then $G$ is completely reducible if and only if $G^\dagger$ is completely reducible.
\end{proposition}
\begin{proof}
By Theorem \ref{thm:reducibilitycharacterization}, $G$ is completely reducible if and only if for every ring $R$, for every normalized $R^\times$-network $(G,L)$ on the $\partial$-graph, $\tilde{\Upsilon}(G,L)$ is a free $R$-module. Clearly, $(G,L) \mapsto (G^\dagger,L^\dagger)$ defines a bijection between $R^\times$-networks on $G$ and $R^\times$-networks on $G^\dagger$. Thus, Theorem \ref{thm:duality} implies that $G$ is completely reducible if and only if $G^\dagger$ is completely reducible.
\end{proof}
\begin{remark}
There is a direct combinatorial proof of Proposition \ref{prop:dualCR} as well, which we will merely sketch here. It requires extending the definition of dual to circular planar $\partial$-graphs which are disconnected, which is somewhat tricky and tedious since the dual is not unique; this problem is best dealt with by reformulating it using medial graphs as in \cite{WJ}. One can then show that contracting a boundary spike on $G$ corresponds to deleting a boundary edge in $G^\dagger$ and vice versa. A decomposition of $G$ into a boundary wedge-sum or disjoint union corresponds to a similar decomposition of $G^\dagger$.
\end{remark}
\subsection{Wheel Graphs} \label{subsec:wheel}
\begin{figure}
\begin{center}
\vspace{-1cm}
\begin{tikzpicture}[scale = 0.5]
\node[bd] (Q) at (0,0) {};
\node at (324:0.7) {$0$};
\node[int] (0) at (0:4) [label = 0:$a_0$] {};
\node[int] (2) at (72:4) [label = 72:$a_2$] {};
\node[int] (4) at (144:4) [label = 144:$a_4$] {};
\node[int] (6) at (216:4) [label = 216:$a_6$] {};
\node[int] (8) at (288:4) [label = 288:$a_8$] {};
\begin{scope}[blue]
\node[bd,blue] (R) at (-6,0) [label = left:$0$] {};
\node[int,draw=blue] (1) at (36:2.5) {};
\node at (36:1.8) {$a_1$};
\node[int,draw=blue] (3) at (108:2.5) {};
\node at (108:1.8) {$a_3$};
\node[int,draw=blue] (5) at (180:2.5) {};
\node at (180:1.8) {$a_5$};
\node[int,draw=blue] (7) at (252:2.5) {};
\node at (252:1.8) {$a_7$};
\node[int,draw=blue] (9) at (324:2.5) {};
\node at (324:1.8) {$a_9$};
\end{scope}
\begin{scope}[->]
\draw (0) to (2); \draw[->] (2) to (4); \draw[->] (4) to (6); \draw[->] (6) to (8); \draw[->] (8) to (0);
\draw (Q) to (0); \draw[->] (Q) to (2); \draw[->] (Q) to (4); \draw[->] (Q) to (6); \draw[->] (Q) to (8);
\end{scope}
\begin{scope}[->,blue]
\draw[->] (1) to (3); \draw[->] (3) to (5); \draw[->] (5) to (7); \draw[->] (7) to (9); \draw[->] (9) to (1);
\draw[->] (R) to (5);
\draw (R) .. controls (-5,5) and (-2,6) .. (3);
\draw (R) .. controls (-5,-5) and (-2,-6) .. (7);
\draw (R) .. controls (-8,9) and (9,7) .. (1);
\draw (R) .. controls (-8,-9) and (9,-7) .. (9);
\end{scope}
\end{tikzpicture}
\vspace{-1cm}
\end{center}
\caption{$W_5$ and its (isomorphic) dual. Arrows indicate the paired dual oriented edges.} \label{fig:wheel}
\end{figure}
Consider the wheel graph $W_n$ embedded in the complex plane with vertices at $\left\{e^{2\pi i k/n}\right\}_{k\in\m Z}$ and at $0$. Edges connect $0$ to $e^{2\pi ik/n}$ and $e^{2\pi ik/n}$ to $e^{2\pi i (k+1)/n}$ for all $k\in \m Z$. Figure \ref{fig:wheel} depicts $W_5$ and its planar dual. Note that the dual of $W_n$ is isomorphic to $W_n$. We call the vertex $0$ the {\bf hub} and the set of vertices $\{e^{2\pi i k /n}\}$ the {\bf rim}, and we apply the same terminology to $W_n^\dagger$. We denote the hub vertex of $W_n^\dagger$ by $0^\dagger$.
The critical group of $W_n$ is computed in \cite{Biggs2} using chip-firing, induction, and the symmetry of the graph, and a connection with Lucas sequences is uncovered. We present an alternate approach, computing the sandpile group using harmonic continuation and planar duality.
\begin{proposition}[{\cite[Theorem 9.2]{Biggs2}}] \label{prop:wheel}
Let $W_n$ be the wheel graph and let $F_0 = 0$, $F_1 = 1$, $F_2 = 1$, $F_3 = 2$, \dots be the Fibonacci numbers. Then
\[
\Crit(W_n) \cong \begin{cases} \Z / (F_{n-1} + F_{n+1}) \times \Z / (F_{n-1} + F_{n+1}), & n \text{ odd,} \\ \Z / F_n \times \Z / 5F_n, & n \text{ even.} \end{cases}
\]
\end{proposition}
\begin{proof}
By Proposition \ref{prop:criticalgroupnoboundary} it suffices to compute the $\Q / \Z$-valued harmonic functions modulo constants, that is,
\[
\Crit(W_n) \cong \tilde{\c U}(W_n, L_{\std}, \Q / \Z).
\]
By Proposition \ref{prop:harmonicconjugates}, it suffices to compute the $\Z$-module of pairs satisfying Cauchy-Riemann, that is,
\[
\{(u,v) \in [(\Q / \Z)^{V(W_n)}/(\text{constants}) \times (\Q / \Z)^{V(W_n^\dagger)} / (\text{constants})] \colon w(e) du(e) = dv(e^\dagger)\}.
\]
Instead of working modulo constants, we will normalize our functions so that $u$ and $v$ vanish at the hub vertices of $W_n$ and $W_n^\dagger$ respectively. (The hub vertices are colored solid in Figure \ref{fig:wheel}). Thus, we want to compute
\[
\{(u,v) \in [(\Q / \Z)^{V(W_n)} \times (\Q / \Z)^{V(W_n^\dagger)}] \colon u(0) = 0, v(0^\dagger) = 0, w(e) du(e) = dv(e^\dagger)\}.
\]
Let $a_0, a_1, a_2, \dots$ be the values of $u$ or $v$ on the rim vertices of $W_n$ and $W_n^\dagger$ in counterclockwise order as shown in Figure \ref{fig:wheel}, with indices taken modulo $2n$. The Cauchy-Riemann equations can be rewritten
\[
a_{j+1} - a_{j-1} = a_j - 0.
\]
In other words, the numbers $a_j$ satisfy the Fibonacci-Lucas recurrence $a_{j+1} = a_j + a_{j-1}$, so that
\[
\begin{pmatrix} a_{j+1} \\ a_j \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_j \\ a_{j-1} \end{pmatrix}.
\]
Note that a harmonic pair $(u,v)$ is uniquely determined by $(a_1,a_0)$. More precisely, if $A$ is the $2 \times 2$ matrix of the recursion, then $(a_1,a_0)^t \in (\Q / \Z)^2$ will produce a harmonic pair $(u,v)$ through the iteration process if and only if it is a fixed point of $A^{2n}$. The module of harmonic pairs $(u,v)$ is thus isomorphic to the kernel of $A^{2n} - I$ acting on $(\Q/\Z)^2$. So the invariant factors of the critical group are given by the Smith normal form of $A^{2n} - I$, which is the same as the Smith normal form of $A^n - A^{-n}$ because $A$ is invertible over $\Z$. For $n \geq 1$,
\[
A^n = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}, \qquad A^{-n} = (-1)^n \begin{pmatrix} F_{n-1} & -F_n \\ -F_n & F_{n+1} \end{pmatrix}.
\]
If $n$ is odd, then
\[
A^n - A^{-n} = (F_{n+1} + F_{n-1})I,
\]
and if $n$ is even, then
\[
A^n - A^{-n} = \begin{pmatrix} F_{n+1} - F_{n-1} & 2F_n \\ 2F_n & F_{n-1} - F_{n+1} \end{pmatrix} = F_n \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}.
\]
From here, the computation of the invariant factors is straightforward: for odd $n$, the matrix $(F_{n+1} + F_{n-1})I$ is already in Smith normal form, giving two invariant factors equal to $F_{n-1} + F_{n+1}$; for even $n$, the matrix $\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}$ has determinant $-5$ and entries with greatest common divisor $1$, hence Smith normal form $\operatorname{diag}(1,5)$, so the invariant factors of $A^n - A^{-n}$ are $F_n$ and $5F_n$.
\end{proof}
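\begin{remark}
The computation in the proof of Proposition \ref{prop:wheel} is easy to check numerically. The following short Python sketch (not part of the argument; the helper names are ours) computes the invariant factors of $A^{2n} - I$ directly from the gcd of its entries and its determinant, which suffices since the matrix is $2 \times 2$, and compares them with the stated formula.
\begin{verbatim}
# Numerical check of the invariant factors of A^{2n} - I for the wheel graph W_n.
from math import gcd

def fib(n):                      # F_n with F_0 = 0, F_1 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def invariant_factors_2x2(m):    # Smith normal form of a 2 x 2 integer matrix
    (a, b), (c, d) = m
    d1 = gcd(gcd(a, b), gcd(c, d))
    return d1, abs(a * d - b * c) // d1

def crit_wheel(n):               # invariant factors of A^{2n} - I
    m = [[fib(2 * n + 1) - 1, fib(2 * n)],
         [fib(2 * n), fib(2 * n - 1) - 1]]
    return invariant_factors_2x2(m)

for n in range(3, 12):
    expected = ((fib(n - 1) + fib(n + 1),) * 2 if n % 2 == 1
                else (fib(n), 5 * fib(n)))
    assert crit_wheel(n) == expected, n
\end{verbatim}
\end{remark}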
\begin{remark}
Johnson \cite{WJ} in essence developed a system of ``discrete analytic continuation'' for harmonic conjugate pairs $(u,v)$. Although we will not do so here, we believe future research should combine his ideas with the algebraic machinery of this paper. Such a theory of discrete analytic continuation would have similar applications to those of Theorem \ref{thm:explicitalgorithm}.
\end{remark}
\section{Covering Maps and Symmetry} \label{sec:symmetry}
The $\partial$-graphs $\CLF(m,n)$ (\S \ref{sec:CLF}) and $W_n$ (\S \ref{subsec:wheel}) had a cyclic structure with a natural action of $\Z / m$ or $\Z / n$ by $\partial$-graph automorphisms. In this section, we will sketch potential applications of symmetry in general, showing how symmetry imposes algebraic constraints on the group structure of $\Upsilon(G,L)$. In particular, for $\Z$-networks, symmetry yields some information about the torsion primes of $\Upsilon(G,L)$. We shall be brief and not develop a complete theory, merely recording a few simple observations for the benefit of future research.
Recall that covering maps of $\partial$-graphs were defined in Definition \ref{def:coveringmap}. We define a {\bf covering map of $R$-networks} in the obvious way; it is an $R$-network morphism such that the underlying $\partial$-graph morphism is a covering map. We say a covering map is {\bf finite-sheeted} if $|f^{-1}(x)|$ is finite for every $x \in V(G)$ and $|f^{-1}(e)|$ is finite for every $e \in E(G)$. We say $f$ is {\bf $n$-sheeted} if $|f^{-1}(x)| = n$ for every $x \in V(G)$ and $|f^{-1}(e)| = n$ for every $e \in E(G)$.
We will also use the notation
\[
\c U_0^+(G,L,M) = \{u \in \c U(G,L,M) \colon u|_{\partial V(G)} = 0, Lu|_{\partial V(G)} = 0\}.
\]
This differs from $\c U_0(G,L,M)$ in that we no longer require $u$ to be finitely supported; however, for finite networks $\c U_0^+(G,L,M) = \c U_0(G,L,M)$. Moreover, we assume familiarity with the terminology for the actions of finite groups on sets.
\begin{observation} \label{obs:coveringMap}
Let $f\colon (\tilde{G},\tilde{L}) \to (G,L)$ be a covering map.
\begin{enumerate}
\item As in Lemma \ref{lem:upsilonfunctor}, $f$ induces a surjection $\Upsilon(\tilde{G},\tilde{L}) \to \Upsilon(G,L)$, providing the following isomorphism:
\[
\Upsilon(G,L) \cong \left.\Upsilon(\tilde{G},\tilde{L}) \middle/ \sum_{\substack{
x,y\in V(\tilde{G})\\
f(x) = f(y)
}} R(x - y)\right.
\]
\item As in Lemma \ref{lem:ufunctor}, there is an injective map $f^*: \c U(G,L, M) \to \c U(\tilde{G},\tilde{L}, M)$ given by $u \mapsto u \circ f$ which identifies harmonic functions on $(G,L)$ with harmonic functions on $(\tilde{G},\tilde{L})$ that are constant on each fiber of $f$.
\item Moreover, $f^*$ restricts to an injective map $\c U_0^+(G,L,M) \to \c U_0^+(\tilde{G},\tilde{L},M)$.
\item If $f$ is finite-sheeted, then $f^*$ restricts to an injective map $\c U_0(G,L,M) \to \c U_0(\tilde{G}, \tilde{L}, M)$.
\end{enumerate}
\end{observation}
\begin{observation} \label{obs:averaging}
Suppose $f \colon (\tilde{G},\tilde{L}) \to (G,L)$ is a finite-sheeted covering map.
\begin{enumerate}
\item Proceeding similarly to Lemma \ref{lem:u0functor}, we can define a map
\[
f_*\colon \c U(\tilde{G},\tilde{L},M) \to \c U(G,L,M) \colon (f_*u)(y) = \sum_{x \in f^{-1}(y)} u(x).
\]
\item Moreover, $f_*$ restricts to define maps $\c U_0^+(\tilde{G},\tilde{L},M) \to \c U_0^+(G,L,M)$ and $\c U_0(\tilde{G},\tilde{L},M) \to \c U_0(G,L,M)$.
\item If $f$ is $n$-sheeted, then $f_* \circ f^* u = n \cdot u$.
\item Suppose $f$ is $n$-sheeted and let $M$ be an $R$-module. Viewing $n$ as an element of $R$ via the ring morphism $\Z \to R$, we see that multiplication by $n$ defines an $R$-module morphism $n: M \to M$. Assume $n: M \to M$ is an isomorphism and let $n^{-1}: M \to M$ denote the inverse map. Then $n^{-1} f_* \circ f^* = \id$. Hence, $f^*$ defines a split injection $\c U(G,L,M) \to \c U(\tilde{G}, \tilde{L}, M)$ and
\[
\c U(\tilde{G}, \tilde{L}, M) = f^* \c U(G,L,M) \oplus \ker f_*.
\]
Similarly,
\[
\c U_0(\tilde{G}, \tilde{L}, M) = f^* \c U_0(G,L,M) \oplus \ker f_*|_{\c U_0(\tilde{G},\tilde{L},M)}
\]
and the same holds for $\c U_0^+$.
\end{enumerate}
(Compare \cite[Lemma 4.1]{bakerNor1} as well as Maschke's theorem from representation theory \cite[\S 18.1, Thm.\ 1]{DummitandFoote}.)
\end{observation}
\begin{observation} \label{obs:groupAction}
Suppose that $K$ is a group which acts by $R$-network automorphisms on the $R$-network $(\tilde{G},\tilde{L})$. Assume the action on vertices and edges is free and that $kx \not \sim x$ for every $k \in K \setminus \{\id\}$ and every $x \in V(\tilde{G})$.
\begin{enumerate}
\item There exists a quotient network $(G,L) = (\tilde{G},\tilde{L}) / K$ and a covering map $f \colon (\tilde{G},\tilde{L}) \to (G,L)$.
\item There is a corresponding action of $K$ on $\c U(\tilde{G},\tilde{L}, M)$ given by $k \cdot u = k_*u$ where $k_*$ is defined as in Observation \ref{obs:averaging}. The fixed-point submodule of this action is
\[
\c U(\tilde{G}, \tilde{L}, M)^K = f^* \c U(G,L,M).
\]
The same applies with $\c U$ replaced by $\c U_0$ or $\c U_0^+$.
\item Suppose $K$ is a finite $p$-group for some prime $p$. Then by a standard argument using the orbits of the $K$-action on $\c U(\tilde{G},\tilde{L},M)$, we have
\[
|\c U(\tilde{G},\tilde{L}, M)| \equiv |\c U(G,L,M)| \text{ mod } p,
\]
provided both sides are finite (for instance, assuming $\tilde{G}$ and $M$ are finite). The same holds for $\c U_0$ and $\c U_0^+$.
\end{enumerate}
\end{observation}
While these statements hold in general, the mod $p$ counting formula seems especially useful for the case $R = \Z$. In the following Proposition, we make use of the classification of finitely generated $\Z$-modules (see \cite[\S 12.1]{DummitandFoote}).
\begin{proposition} \label{prop:Znetworkgroupaction}
Suppose $(\tilde{G}, \tilde{L})$ is a finite non-degenerate $\Z$-network. Suppose $K$ is a finite $p$-group which acts by $\Z$-network automorphisms on $(\tilde{G},\tilde{L})$ as in Observation \ref{obs:groupAction}, let $(G,L)$ be the quotient network, and let $f: (\tilde{G},\tilde{L}) \to (G,L)$ be the projection map.
\begin{enumerate}
\item The $\Z$-network $(G,L)$ is finite and non-degenerate.
\item The generalized critical group $\Upsilon(\tilde{G},\tilde{L})$ has nontrivial $p$-torsion if and only if $\Upsilon(G,L)$ has nontrivial $p$-torsion.
\item Let $q$ be a prime distinct from $p$ and let $k \in \Z$. Then
\[
\c U_0(\tilde{G}, \tilde{L}, \Z / q^k) = f^* \c U_0(G, L, \Z / q^k) \oplus M_{q^k}
\]
where $M_{q^k} := \ker f_*|_{\c U_0(\tilde{G}, \tilde{L}, \Z / q^k)}$.
\item For $q \neq p$, the action of $K$ on $M_{q^k}$ has no fixed points other than zero and in particular $|M_{q^k}| \equiv 1$ mod $p$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) We assume covering maps to be surjective on the vertex and edge sets by definition; thus, since $\tilde{G}$ is finite, $G$ is also finite. Because $f^*$ defines an injective map $\c U_0(G,L,\Z) \to \c U_0(\tilde{G},\tilde{L}, \Z) = 0$, we know $(G,L)$ is non-degenerate.
(2) Because the networks are non-degenerate, Proposition \ref{prop:tor} shows that
\[
\Tor_1(\Upsilon(\tilde{G},\tilde{L}), \Z / p) \cong \c U_0(\tilde{G},\tilde{L}, \Z / p)
\]
and the same holds for $(G,L)$. On the other hand, by Observation \ref{obs:groupAction} (3), we have
\[
|\c U_0(\tilde{G},\tilde{L}, \Z / p)| \equiv |\c U_0(G,L, \Z / p)| \text{ mod } p.
\]
Each of the two $\Z$-modules in this equation is either zero (hence has cardinality one mod $p$) or else nonzero (hence has cardinality zero mod $p$), which implies (2).
(3) Note that $f$ is a $|K|$-sheeted covering map. Since $|K|$ is a power of $p$, multiplication by $|K|$ acts as an isomorphism on $\Z / q^k$. Therefore, claim (3) follows from Observation \ref{obs:averaging} (4).
(4) It follows from (3) and Observation \ref{obs:groupAction} (2) that zero is the only fixed point of the $K$-action on $M_{q^k}$. Since $K$ is a $p$-group, we thus have $|M_{q^k}| \equiv 1$ mod $p$.
\end{proof}
\begin{example}
Consider the networks $\CLF(m,n)$ from \S \ref{sec:CLF}. There is an obvious translation action of $\Z / k$ on $\CLF(km,n)$ with the quotient $\CLF(m,n)$. The covering map $\CLF(km,n) \to \CLF(m,n)$ induces an inclusion $\c U_0(\CLF(m,n), L_{\std}, \Q/\Z) \to \c U_0(\CLF(km,n), L_{\std}, \Q / \Z)$. Note that when $k$ is a power of $2$, Proposition \ref{prop:Znetworkgroupaction} (2) holds because $\c U_0(\CLF(m,n), L_{\std}, \Z / 2^\ell)$ is nontrivial for all $m \geq 2$ and $n \geq 1$ by Theorem \ref{thm:CLF}. Moreover, for odd integers $k$, we have $\c U_0(\CLF(m,n), L_{\std}, \Z / k) = 0$ for all $m$, so (2) also holds when $k$ is an odd prime power. One can verify that the other claims in Proposition \ref{prop:Znetworkgroupaction} also hold, rather vacuously, in the case of $\CLF(m,n)$.
\end{example}
\begin{remark}
Though Proposition \ref{prop:Znetworkgroupaction} falls far short of computing $\Upsilon(\tilde{G},\tilde{L})$ from $\Upsilon(G,L)$, it nonetheless gives a significant amount of information, especially in parts (3) and (4). Indeed, one can argue from the classification of finite $\Z$-modules that the $q$-torsion component of $\Upsilon(\tilde{G},\tilde{L})$ is uniquely determined up to isomorphism by the quantities $|\Tor_1(\Upsilon(\tilde{G},\tilde{L}), \Z / q^k)|$ for $k = 0, 1, \dots$. Moreover, by (3)
\[
|\Tor_1(\Upsilon(\tilde{G},\tilde{L}), \Z / q^k)| = |\c U_0(G,L, \Z / q^k)| \cdot |M_{q^k}|.
\]
By (4), we know $|M_{q^k}|$ is a power of $q$ which equals $1$ mod $p$ and that the group $K$ acts by automorphisms on $M_{q^k}$ with no nontrivial fixed points. This narrows down the possibilities for $|M_{q^k}|$, especially when combined with other information such as bounds on the number of invariant factors for the torsion part of $\Upsilon(G,L)$ from Corollary \ref{cor:invariantfactorsbound} or bounds on the size of $\Tor_1(\Upsilon(\tilde{G}, \tilde{L}), \Q / \Z)$ obtained through determinantal computations.
\end{remark}
As stated, Proposition \ref{prop:Znetworkgroupaction} does not yield optimal information for the case of graphs without boundary and the critical group since it relies on non-degeneracy. The simplest way to handle this problem is by considering $\partial$-graphs with one boundary vertex (see Proposition \ref{prop:criticalgrouponeboundary}) and allowing one branching point in our covering map.
\begin{definition}
Let $\tilde{G}$ and $G$ be $\partial$-graphs with exactly one boundary vertex each, called $\tilde{x}$ and $x$ respectively. A {\bf pseudo-covering map} $f \colon \tilde{G} \to G$ is a $\partial$-morphism such that $f$ is surjective on the vertex and edge sets, $f$ maps $\tilde{x}$ to $x$, $f$ maps interior vertices to interior vertices, $f$ maps edges to edges, and $\deg(f,y) = 1$ for every $y \in V(\tilde{G}) \setminus \{\tilde{x}\}$.
\end{definition}
The foregoing observations all adapt to pseudo-covering maps for \emph{normalized} $R$-networks (and in particular apply to critical groups). The verifications are straightforward once we make the following observation: Let $G$ be a $\partial$-graph with a single boundary vertex $x$ and let $L$ be a weighted Laplacian (recall this means $d = 0$). If $u: V \to M$ satisfies $Lu(y) = 0$ for all $y \neq x$, then it also satisfies $Lu(x) = 0$ because $\sum_{y \in V(G)} Lu(y) = 0$.
\begin{example}
Let $W_n$ be the wheel graph from \S \ref{subsec:wheel} where $0$ is considered a boundary vertex. For any $k \in \N$, there is a group action of $\Z / k$ on $W_{kn}$ by rotation and a corresponding quotient map $W_{kn} \to W_n$ which is a pseudo-covering map. By combining the results from \S \ref{subsec:wheel} with the results from this section, we obtain the following information about the $q$-torsion components of $\Crit(W_n)$ for each prime $q$.
(1) The $q$-torsion component has at most two invariant factors. Indeed, the harmonic continuation argument in Proposition \ref{prop:wheel} showed that $\c U_0(W_n, L_{\std}, \Q / \Z)$ is isomorphic to the submodule of $(\Q / \Z)^2$ consisting of fixed points of $A^{2n}$. A submodule of $(\Q / \Z)^2$ can have at most two invariant factors. Since $\c U_0(W_n, L_{\std}, \Q / \Z)$ has at most two invariant factors, so does its $q$-torsion component.
(2) For every $k$, there exists some $n$ such that $\c U_0(W_n, L_{\std}, \Z / q^k) \cong (\Z / q^k)^2$. To prove this, it suffices to show that every $\phi \in (\Z / q^k)^2$ is a fixed point of $A^{2n_\phi}$ for some $n_\phi \geq 1$; we may then take $n$ to be the least common multiple of the $n_\phi$ over the finitely many $\phi$. Note that $A$ maps $(\Z / q^k)^2$ into itself and $(\Z / q^k)^2$ is finite, so there must exist two integers $j > \ell \geq 0$ with $A^{2j} \phi = A^{2\ell} \phi$. Since $A$ is invertible over $\Z$, we have $A^{2(j-\ell)} \phi = \phi$, so we can take $n_\phi = j - \ell$.
(3) If $q$ is a prime other than $5$, then we know from Proposition \ref{prop:wheel} that the $q$-torsion component of $\Crit(W_n)$ has the form $(\Z / q^k)^2$ for some $k$. Moreover, the $5$-torsion component has the form $(\Z / 5^k)^2$ for odd $n$ and $\Z / 5^k \times \Z / 5^{k+1}$ for even $n$.
(4) If $m | n$, then there is a pseudo-covering map $W_n \to W_m$ and hence by Observation \ref{obs:coveringMap} (4), we can identify the $q$-torsion component of $\Crit(W_m)$ with a submodule of the $q$-torsion component of $\Crit(W_n)$.
(5) Suppose $n$ is such that the $q$-torsion component $\Crit(W_n)$ has two invariant factors, and let $p$ be a prime other than $q$. Then $\Crit(W_{pn})$ has the same $q$-torsion submodule as $\Crit(W_n)$. Indeed, multiplication by $p$ acts as an isomorphism on $\Z / q^k$. Thus, by Proposition \ref{prop:Znetworkgroupaction} (3), we have
\[
\c U_0(W_{pn}, L_{\std}, \Z / q^k) \cong \c U_0(W_n, L_{\std}, \Z / q^k) \oplus M_{q^k}.
\]
We know that $\c U_0(W_n, L_{\std}, \Z / q^k)$ has two invariant factors, while $\c U_0(W_{pn}, L_{\std}, \Z / q^k)$ has at most two invariant factors. This implies that $M_{q^k} = 0$ and hence
\[
\c U_0(W_{pn}, L_{\std}, \Z / q^k) \cong \c U_0(W_n, L_{\std}, \Z / q^k).
\]
Since this holds for all $k$, the $q$-torsion components of $\Crit(W_{pn})$ and $\Crit(W_n)$ are isomorphic.
\end{example}
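\begin{remark}
Point (2) of the preceding example is easy to explore computationally. The following Python sketch (ours, purely illustrative) finds, for a given prime power $q^k$, the smallest $n$ with $A^{2n} \equiv I$ modulo $q^k$; for that $n$, and for its multiples, every element of $(\Z / q^k)^2$ is fixed by $A^{2n}$.
\begin{verbatim}
# Smallest n with A^{2n} = I mod m, where A = [[1,1],[1,0]].
def mat_mul_mod(X, Y, m):
    return [[(X[i][0] * Y[0][j] + X[i][1] * Y[1][j]) % m for j in (0, 1)]
            for i in (0, 1)]

def smallest_n(m):
    A = [[1, 1], [1, 0]]
    A2 = mat_mul_mod(A, A, m)
    P, n = A2, 1
    while P != [[1, 0], [0, 1]]:   # terminates since A2 lies in GL_2(Z/m)
        P = mat_mul_mod(P, A2, m)
        n += 1
    return n

for q, k in [(2, 1), (2, 3), (3, 2), (5, 1), (7, 1)]:
    print(q ** k, smallest_n(q ** k))
\end{verbatim}
\end{remark}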
\section{Open Problems} \label{sec:openproblems}
Much like the sandpile group, the fundamental module $\Upsilon$ connects ideas from network theory, combinatorics, algebraic topology, homological algebra, and complex analysis. We have correlated the algebraic properties of $\Upsilon$ with the combinatorial properties of $\partial$-graphs, including $\partial$-graph morphisms, layer-stripping, boundary wedge-sums, duality, and symmetry and we have given applications to the critical group and Laplacian eigenvalues. Our results lead to the following questions:
\begin{question}
Do our algebraic invariants extend to higher-dimensional cell complexes, along the lines of \cite{DKM}? Do they generalize to directed graphs? What are the analogues of $\partial$-graph morphisms and layer-stripping in these settings?
\end{question}
\begin{question}
Can the techniques developed herein (particularly Theorem \ref{thm:explicitalgorithm}) be used to aid the computation of previously intractable sandpile groups? What applications do they have for computing eigenvectors and characteristic polynomials?
\end{question}
\begin{question}
Corollary \ref{cor:invariantfactorsbound} used layer-stripping to give a bound on the number of invariant factors for $\Crit(G)$ and the multiplicity of eigenvalues. How sharp is this bound for general graphs? For a graph without boundary, is there an algebraic characterization of the minimal number of boundary vertices one has to assign to achieve layerability? What is the most efficient algorithm for finding a choice of boundary vertices that achieves this minimal number?
\end{question}
\begin{question}
Are there other operations on $\partial$-graphs which interact nicely with $\Upsilon$ and with $\partial$-graph morphisms? Can such operations be used to compute $\Upsilon$ or at least produce short exact sequences? See Remark \ref{rem:nonunitlayering} and \cite[Proposition 2]{Lor1}, \cite[Proposition 21]{Treumann}.
\end{question}
\begin{question}
We have studied algebraic invariants which test layerability (Theorem \ref{thm:layerabilitycharacterization}). Are there algebraic invariants of $\partial$-graphs which test whether or not the electrical inverse problem can be solved by layer-stripping?
\end{question}
\begin{question}
Do Theorems \ref{thm:layerabilitycharacterization} and \ref{thm:reducibilitycharacterization} extend to infinite $\partial$-graphs? In particular, for a fixed infinite graph $G$, if $\Upsilon(G,L)$ is flat for all unit edge-weight functions $w$, must $\Upsilon(G,L)$ also be free for all unit edge-weight functions?
\end{question}
\begin{question}
Determine the $\Z$-module of $\m Q/\m Z$-harmonic functions supported in a given subset of the $\m Z^2$ lattice. Applying Lemma \ref{lem:M1computation} to $\CLF(\infty,n)$ resolves the case of a diagonal strip with sides parallel to the lines $y=\pm x$. An argument using harmonic continuation shows that these are the only \emph{strips} with a nonzero answer.
\end{question}
\begin{question}
Can the techniques of \S \ref{sec:CLF} be modified to handle $\partial$-graphs built from the triangular or hexagonal lattice rather than the rectangular lattice?
\end{question}
\subsection*{Acknowledgments:} The ideas in this paper were in part developed at the University of Washington REU in Electrical Inverse Problems (summer 2015), in which David Jekel and Avi Levy were graduate student TAs, and the undergraduates Will Dana, Collin Litterell, and Austin Stromme were students. We all owe a great debt to James A.\ Morrow for organizing the REU, participating in discussions, and encouraging our interest in networks.
Furthermore, we thank the organizers of the CMO-BIRS workshop on Sandpile Groups for their support, encouragement, and hospitality. One of the authors presented a preliminary version of these results at this workshop and would like to thank Dustin Cartwright, Caroline Klivans, Lionel Levine, Jeremy Martin, David Perkinson, and Farbod Shokrieh for stimulating discussions.
We thank the referees and journal editors of SIDMA for many helpful corrections and suggestions on exposition, as well as for pointing out several references.
\bibliographystyle{siam}
\section{Introduction}
\label{sec:setting}
The main motivation of this paper is to study the asymptotic behavior as $\epsilon\to 0$ of the value function of an optimal
control problem in ${\mathbb R}^2$ in which the running cost and dynamics may jump across a periodic oscillatory interface $\Gamma_{\epsilon,\epsilon}$, when
the oscillations of $\Gamma_{\epsilon,\epsilon}$ have an amplitude of the order of $\epsilon$ and a period of the order of $\epsilon^2$, (see Figure \ref{fig:geom1} below, which actually describes a more general case).
The respective roles of the two indices in $\Gamma_{\epsilon,\epsilon}$ will be explained in \S~\ref{sec:geometry} below.
The interface $\Gamma_{\epsilon,\epsilon}$ separates two unbounded regions of ${\mathbb R}^2$, $\Omega_{\epsilon,\epsilon}^L$ and $\Omega_{\epsilon,\epsilon}^R$.
The present work is a natural continuation of a previous one, \cite{MR3565416}, in which both the amplitude and the period of the oscillations were of the order of $\epsilon$.
In \cite{MR3565416}, it was possible to make a change of variables in order to map the interface onto a flat one. Here, this is no longer possible,
and the route leading to the homogenization result becomes more complex.
\\
To characterize the optimal control problem, one has to specify the admissible dynamics at a point $x\in \Gamma_{\epsilon,\epsilon}$: in our setting, no mixture is allowed at the interface,
i.e. the admissible dynamics are the ones corresponding to the subdomain $ \Omega_{\epsilon,\epsilon}^L$ {\bf and} entering $ \Omega_{\epsilon,\epsilon}^L$,
or corresponding to the subdomain $ \Omega_{\epsilon,\epsilon}^R$ {\bf and} entering $ \Omega_{\epsilon,\epsilon}^R$.
Hence the situation differs from those studied in the
articles of G. Barles, A. Briani and E. Chasseigne \cite{barles2011bellman,barles2013bellman} and of G. Barles, A. Briani, E. Chasseigne and N. Tchou \cite{MR3424272},
in which mixing is allowed at the interface. The optimal control problem under consideration was first studied in \cite{oudet2014}: the value function is
characterized as the viscosity solution of a Hamilton-Jacobi equation
with special transmission conditions on $ \Gamma_{\epsilon,\epsilon}$; a comparison principle for this problem is proved in \cite{oudet2014} with arguments
from the theory of optimal control similar to those introduced in \cite{barles2011bellman,barles2013bellman}. In parallel to \cite{oudet2014},
Imbert and Monneau have studied similar problems
from the viewpoint of PDEs, see \cite{imbert:hal-01073954}, and have obtained comparison results for quasi-convex Hamiltonians.
There has been a very active research effort on finding simpler and more general/powerful proofs of the above-mentioned comparison results,
see \cite{2016arXiv161101977B} and the very recent work of P-L. Lions and P. Souganidis \cite{2017arXiv170404001L}.
\\
In particular, \cite{imbert:hal-01073954} contains a characterization of the viscosity solution of the transmission problem
with a reduced set of test-functions; this characterization will be used in the present work.
Note that \cite{oudet2014,imbert:hal-01073954} can be seen as extensions of articles
devoted to the analysis of Hamilton-Jacobi equations on networks, see \cite{MR3057137,MR3023064,MR3358634,MR3621434,MR3556345},
because the notion of interface used there can be seen as a generalization of the notion of vertex (or junction) for a network.
\\
We will see that as $\epsilon$ tends to $0$, the value function converges to the solution of an effective problem
related to a flat interface $\Gamma$, with Hamilton-Jacobi equations in the half-planes limited by $\Gamma$ and a transmission condition on $\Gamma$.
Whereas the partial differential equation far from the interface is unchanged, the main difficulty consists in finding the effective transmission condition on $\Gamma$.
Naturally, the latter depends on the dynamics and running costs but also
keeps memory of the vanishing oscillations.
The present work is strongly related to \cite{MR3565416}, but also to two articles, \cite{MR3299352} and \cite{MR3441209},
about singularly perturbed problems leading to effective Hamilton-Jacobi equations on networks.
In \cite{MR3299352}, the authors of the present paper study a family of star-shaped planar domains $D^\epsilon$
made of $N$ non intersecting semi-infinite strips of thickness $\epsilon$ and of a central region whose diameter is proportional to $\epsilon$.
As $\epsilon \to 0$, the domains $D^\epsilon$ tend to a network ${\mathcal G}$ made of $N$ half-lines sharing an endpoint $O$, named the vertex or junction point.
For infinite horizon optimal control problems in which the state is constrained to remain in the closure of $D^\epsilon$,
the value function tends to the solution of a Hamilton-Jacobi equation on ${\mathcal G}$, with an effective transmission condition at $O$.
The related effective Hamiltonian, which corresponds to trajectories staying close to the junction point, was
obtained in \cite{MR3299352} as the limit of a sequence of ergodic constants corresponding to larger and larger bounded subdomains.
Note that the same problem and the question of the correctors in unbounded domains were also discussed by P-L. Lions in his lectures at Coll{\`e}ge de France respectively in January 2017, and
in January and February 2014, see \cite{PLL}.
The same kind of construction was then used in \cite{MR3441209}, in which Galise, Imbert and Monneau study a family of time dependent Hamilton-Jacobi equations
in a simple network composed of two half-lines with a perturbation of the Hamiltonian localized in a small region close to the junction.
In \cite{MR3441209}, a key point was the use of a single test-function at the vertex which was first proposed in \cite{MR3621434,imbert:hal-01073954}.
This idea will be also used in the present work. Note that similar techniques were used in the recent works of Forcadel et al, \cite{forcadel:hal-01097085,MR3640560,forcadel:hal-01332787}, which deal with applications to traffic flows. Finally, multiscale homogenization and singular perturbation problems with first and second order Hamilton Jacobi equations (without discontinuities)
have been addressed in \cite{MR2371792,MR2487745}.
\\
Note that slight modifications of the techniques used below yield the asymptotic behavior of the transmission problems with oscillatory interfaces of amplitude $\epsilon$ and period $\epsilon ^{1+q}$ with $q\ge 0$, (see \S~\ref{sec:effect-probl-obta} and \cite{MR3565416} for $q=0$ and Remark \ref{sec:main-result-5} below for $q>0$).
Also, even if we focus on a two-dimensional problem, all the results below hold in the case when ${\mathbb R}^N$ is divided into two subregions, separated by a smooth and periodic $(N-1)$-dimensional oscillatory interface with two scales.
Finally, we wish to stress the fact that an important possible application of our work is the homogenization of a transmission problem in geometrical optics, with two media separated by a two-scale interface.
\\
The paper is organized as follows: in the remaining part of \S~\ref{sec:setting}, we set up the problem. We will see in particular that it is convenient to consider a more general setting
than the one described above, with two small parameters $\eta $ and $\epsilon$ instead of one: more precisely, the amplitude of the oscillations will be of the order of $\eta$ whereas the period will be of the order of $\eta\epsilon$. In \S~\ref{sec:effect-probl-obta}, we keep $\eta$ fixed while $\epsilon$ tends to $0$: the region where the two media are mixed is a strip whose width is of the order of $\eta$: in this region, an effective Hamiltonian is found by classical homogenization techniques, see \cite{LPV}; the main difficulty is to obtain
the effective transmission conditions on the boundaries of the strip (two parallel straight lines) and to prove
the convergence. The techniques will be reminiscent of \cite{MR3299352,MR3441209,MR3565416}, because only one parameter tends to $0$. The effective transmission conditions keep track of the geometry of the interface at the scale $\epsilon$. \\
In \S~\ref{sec:second-passage-limit}, we take the latter effective problem which depends on $\eta$, and have $\eta$ tend to $0$: we obtain a new effective transmission condition on a single flat interface, and prove the convergence result. Note that this passage to the limit is an intermediate step in order to study the two-scale homogenization problem described in the beginning of the introduction, but that it is also of interest in its own right.
\\
In \S~\ref{sec:simult-pass-limit}, we take $\eta=\epsilon$, i.e. we consider the
interface $\Gamma_{\epsilon,\epsilon}$ described at the beginning of the introduction, and let $\epsilon$ tend to $0$: at the limit, we obtain the same effective problem as the one found in \S~\ref{sec:second-passage-limit}, by letting first $\epsilon$ then $\eta$ tend to $0$.
\\
Sections~\ref{sec:effect-probl-obta}, \ref{sec:second-passage-limit} and \ref{sec:simult-pass-limit} are organized in the same way: the main result is stated first, then proved in the remaining part of the section. To keep \S~\ref{sec:effect-probl-obta} concise, some technical proofs will be given in an appendix.
\subsection{The geometry}
\label{sec:geometry}
Let $(e_1, e_2)$ be an orthonormal basis of ${\mathbb R}^2$.
For two real numbers $a,b$ such that $0<a<b<1$, consider the set ${\mathbb S}=\left\{ a, b \right\}+{\mathbb Z}$.
Let $g: {\mathbb R} \to {\mathbb R}$ be a continuous function, periodic with period $1$, such that
\begin{enumerate}
\item $g$ is ${\mathcal C}^2$ in ${\mathbb R} \backslash {\mathbb S}$
\item $g\left(a \right)= g\left(b\right)=0$
\item $\displaystyle \mathop {\lim}_{t\to a^-} g'(t)=\mathop{\lim}_{t\to a^+} g'(t)=+\infty$ and
$\displaystyle \mathop {\lim}_{t\to a^-} \frac {g''(t)}{g'(t)}=\mathop{\lim}_{t\to a^+} \frac {g''(t)}{g'(t)}=0$
\item $\displaystyle \mathop {\lim}_{t\to b^-} g'(t)=\mathop{\lim}_{t\to b^+} g'(t)=-\infty$ and
$\displaystyle \mathop {\lim}_{t\to b^-} \frac {g''(t)}{g'(t)}=\mathop{\lim}_{t\to b^+} \frac {g''(t)}{g'(t)}=0$
\end{enumerate}
Let $G$ be the multivalued Heaviside step function, periodic with period $1$, such that
\begin{enumerate}
\item $G(a)=G(b)=[-1,1]$
\item $G(t)=\{1\}$ if $t\in (a,b )$
\item $G(t)=\{-1\}$ if $t\in [0,a)\cup(b,1]$
\end{enumerate}
Let $\eta$ and $\epsilon$ be two positive parameters: consider the ${\mathcal C}^2$ curve $\Gamma_{\eta,\epsilon}$ defined as the graph of the multivalued function
$g_{\eta,\epsilon}: x_2\mapsto \eta G(\frac {x_2}{ \epsilon \eta})+ \eta\epsilon g(\frac {x_2}{ \epsilon \eta})$.
We also define the domain $\Omega_{\eta,\epsilon}^R $ (resp. $\Omega_{\eta,\epsilon}^L $) as the epigraph (resp. hypograph)
of $g_{\eta,\epsilon}$:
\begin{eqnarray}\label{eq:44}
\Omega_{\eta,\epsilon}^R= & \{ x\in {\mathbb R}^2 : x_1> g_{\eta,\epsilon} (x_2)\},\\
\label{eq:45}
\Omega_{\eta,\epsilon}^L= & \{ x\in {\mathbb R}^2 : x_1< g_{\eta,\epsilon} (x_2)\}.
\end{eqnarray}
The unit normal vector $n_{\eta,\epsilon}(x)$ at $ x\in \Gamma_{\eta,\epsilon}$ is defined as follows: setting $y_2= \frac { x_2}{\eta \epsilon}$,
\begin{displaymath}
n_{\eta,\epsilon}(x)= \left\{
\begin{array}[c]{cl}
\displaystyle
\left (1 + \left( g'(y_2)\right)^2 \right) ^{-1/2} \left( e_1 - g' (y_2) e_2\right)
\quad &\hbox{ if }\quad y_2 \notin {\mathbb S}\\
\displaystyle - e_2 \quad&\hbox{ if }\quad y_2 = a \mod{1}\\
\displaystyle e_2 \quad&\hbox{ if } \quad y_2 = b \mod{1}.
\end{array}
\right.
\end{displaymath}
Note that $n_{\eta,\epsilon}(x)$ is oriented from $\Omega^L_{\eta,\epsilon}$ to $\Omega^R_{\eta,\epsilon}$.
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.5, trans/.style={thick,<->,shorten >=2pt,shorten <=2pt,>=stealth} ]
\draw[red,thick] (0,10) -- (10,10);
\draw[red,thick] (10,10) .. controls (11,10) and (11,9.5) .. (10,9.5);
\draw[red,thick] (10,9.5) -- (0,9.5);
\draw[red,thick] (0,9.5) .. controls (-0.5,9.5) and (-0.5,9) .. (0,9);
\draw[red,thick] (0,9) -- (10,9);
\draw[red,thick] (10,9) .. controls (11,9) and (11,8.5) .. (10,8.5);
\draw[red,thick] (10,8.5) -- (0,8.5);
\draw[red,thick] (0,8.5) .. controls (-0.5,8.5) and (-0.5,8) .. (0,8);
\draw[red,thick] (0,8) -- (10,8);
\draw[red,thick] (10,8) .. controls (11,8) and (11,7.5) .. (10,7.5);
\draw[red,thick] (10,7.5) -- (0,7.5);
\draw[red,thick] (0,7.5) .. controls (-0.5,7.5) and (-0.5,7) .. (0,7);
\draw[red,thick] (0,7) -- (10,7);
\draw[red,thick] (10,7) .. controls (11,7) and (11,6.5) .. (10,6.5);
\draw[red,thick] (10,6.5) -- (0,6.5);
\draw[red,thick] (0,6.5) .. controls (-0.5,6.5) and (-0.5,6) .. (0,6);
\draw[red,thick] (0,6) -- (10,6);
\draw[red,thick] (10,6) .. controls (11,6) and (11,5.5) .. (10,5.5);
\draw[red,thick] (10,5.5) -- (0,5.5);
\draw[red,thick] (0,5.5) .. controls (-0.5,5.5) and (-0.5,5) .. (0,5);
\draw[red,thick] (0,5) -- (10,5);
\draw[red,thick] (10,5) .. controls (11,5) and (11,4.5) .. (10,4.5);
\draw[red,thick] (10,4.5) -- (0,4.5);
\draw[red,thick] (0,4.5) .. controls (-0.5,4.5) and (-0.5,4) .. (0,4);
\draw[red,thick] (0,4) -- (10,4);
\draw[red,thick] (10,4) .. controls (11,4) and (11,3.5) .. (10,3.5);
\draw[red,thick] (10,3.5) -- (0,3.5);
\draw[red,thick] (0,3.5) .. controls (-0.5,3.5) and (-0.5,3) .. (0,3);
\draw[trans] (0.1,11) -- (10.1,11) ;
\draw (5,11) node[above]{$2\eta$};
\draw[trans] (9.7,11) -- (10.8,11) ;
\draw (10.4,11) node[above]{{\small $\sim \eta\epsilon$}};
\draw[trans] (-0.6,11) -- (0.4,11) ;
\draw (-0.2,11) node[above]{{\small $\sim \eta\epsilon$}};
\draw[trans] (5,10.1) -- (5,8.9) ;
\draw (5,9.5) node[right]{$\eta\epsilon$};
\draw (-1,6.5) node[left]{$\Omega^L_{\eta,\epsilon}$};
\draw (11,6.5) node[right]{$\Omega^R_{\eta,\epsilon}$};
\end{tikzpicture}
\caption{The oscillatory interface $\Gamma_{\eta,\epsilon}$ separates $\Omega^L_{\eta,\epsilon}$ and $\Omega^R_{\eta,\epsilon}$. It has two scales: its amplitude $\eta$ and period $\eta\epsilon$}
\label{fig:geom1}
\end{center}
\end{figure}
In \S~\ref{sec:effect-probl-obta}, we will let $\epsilon$ tend to zero and keep $\eta$ fixed.
In \S~\ref{sec:simult-pass-limit}, we will focus on the case when $\eta=\epsilon$ and let $\epsilon$ tend to $0$.
\subsection{The optimal control problem in $\Omega^L_{\eta,\epsilon}\cup \Omega^R_{\eta,\epsilon} \cup \Gamma_{\eta,\epsilon}$}
\label{optimal1}
We consider infinite-horizon optimal control problems which have different dynamics and running costs in the regions
$\Omega^i_{\eta,\epsilon}$, $i=L, R$.
The sets of controls associated to the index $i=L,R$ will be called $A^i$;
similarly, the notations $f^i$ and $\ell^i$ will be used for the dynamics and running costs.
The following assumptions will be made in all the present work.
\subsubsection{Standing Assumptions}
\label{sec:assumptions}
\begin{description}
\item{[H0]} $A$ is a metric space (one can take $A={\mathbb R}^m$). For $i=L,R$, $A^i$ is a non empty compact subset of $A$ and
$f^i: A^i \to {\mathbb R}^2$ is a continuous function. The sets $A^i$ are disjoint.
Define $M_f= \max_{i=L, R} \sup_{
a \in A^i } | f^i(a)| $.
The notation $F^i$ will be used for the set $F^i=\{f^i(a), a\in A^i\} $.
\item{[H1]} For $i=L,R$, the function $\ell^i: A^i\to {\mathbb R}$ is continuous and bounded.
Define $M_\ell= \max_{i=L, R} \sup_{
a \in A^i } | \ell^i(a)| $.
\item{[H2] } For any $i=L,R$, the non empty set ${\rm{FL}}^i= \{ (f^i(a), \ell^i(a) ) , a\in A^i\}$ is closed and convex.
\item{[H3] } There is a real number $\delta_0>0$ such that for $i=L,R$,
$ B(0,\delta_0) \subset F^i$.
\end{description}
We stress the fact that all the results below hold provided the latter assumptions are satisfied,
although, in order to avoid tedious repetitions, we will not mention them explicitly in the statements.
\begin{remark}\label{sec:standing-assumptions}
We have assumed that the dynamics $f^i$ and running costs $\ell^i$, $i=L,R$, do not depend on $x$. This assumption is made only for simplicity. With further classical assumptions, it would be possible to generalize all the results contained in this paper to the case
when $f^i$ and $\ell^i$ depend on $x$: typical such assumptions are
\begin{enumerate}
\item the Lipschitz continuity of $f^i$ with respect to $x$ uniformly in $a\in A^i$: there exists $L_f$ such that
for $i=L,R$, $\forall a\in A^i$, $x,y\in {\mathbb R}^2$, $ |f^i(x,a)-f^i(y,a)|\le L_f |x-y|$
\item the existence of a modulus of continuity $\omega_\ell$
such that for any $i=L,R$, $x,y\in {\mathbb R}^2$ and $a\in A^i$,
\begin{displaymath}
|\ell^i(x,a)-\ell^i(y,a)|\le \omega_{\ell} (|x-y|).
\end{displaymath}
\end{enumerate}
Even if these assumptions are standard, keeping track of a possible slow dependency of the Hamiltonian with respect to $x$ in the homogenization process below would have led us to tackle several technical questions and to significantly increase the length of the paper. It would have also made the
essential ideas more difficult to grasp.\\
Moreover, it is clear that if $f^i $ and $\ell^i$ do depend on $x$ except in a strip containing the oscillatory interface,
for example the strip $\{ x: |x_1|<1 \}$ for $\epsilon$ and $\eta$ small enough, then all that follows holds and does not require any further technicality.
\end{remark}
\subsubsection{The optimal control problem}
\label{sec:optim-contr-probl}
Let the closed set ${\mathcal M}_{\eta,\epsilon}$ be defined as follows:
\begin{equation}
\label{eq: def-M_epsilon}
{\mathcal M}_{\eta,\epsilon}=\left\{(x,a);\; x\in {\mathbb R}^2,\quad a\in A^i \hbox{ if } x\in \Omega_{\eta,\epsilon}^i,\; i=L,R,\; \hbox{ and } a \in A^L\cup A^R \hbox{ if } x \in \Gamma_{\eta,\epsilon}\right \}.
\end{equation}
The dynamics $f_{\eta,\epsilon}$ is a function defined in ${\mathcal M}_{\eta,\epsilon}$ with values in ${\mathbb R}^2$:
\begin{displaymath}
\forall (x,a)\in {\mathcal M}_{\eta,\epsilon},\quad\quad f_{\eta,\epsilon}(x, a)= f^i(a)
\quad \hbox{ if } x\in \Omega_{\eta,\epsilon}^i \hbox{ or }(x\in\Gamma_{\eta,\epsilon} \hbox{ and } a\in A^i).
\end{displaymath}
The function $f_{\eta,\epsilon}$ is continuous on ${\mathcal M}_{\eta,\epsilon}$ because the sets $A^i$ are disjoint. Similarly, let the running cost $\ell_{\eta,\epsilon}: {\mathcal M}_{\eta,\epsilon}\to {\mathbb R}$ be given by
\begin{displaymath}
\forall (x,a)\in {\mathcal M}_{\eta,\epsilon},\quad\quad \ell_{\eta,\epsilon}(x, a)=
\ell^i(a)
\quad \hbox{ if } x\in \Omega_{\eta,\epsilon}^i \hbox{ or }(x\in\Gamma_{\eta,\epsilon} \hbox{ and } a\in A^i).
\end{displaymath}
For $x\in {\mathbb R}^2$, the set of admissible trajectories starting from $x$ is
\begin{equation}
\label{eq:2}
{\mathcal T}_{x,\eta,\epsilon}=\left\{
\begin{array}[c]{ll}
( y_x, a) \displaystyle \in L_{\rm{loc}}^\infty( {\mathbb R}^+; {\mathcal M}_{\eta,\epsilon}): \quad & y_x\in {\rm{Lip}}({\mathbb R}^+; {\mathbb R}^2),
\\ &\displaystyle y_x(t)=x+\int_0^t f_{\eta,\epsilon}( y_x(s), a(s)) ds \quad \forall t\in {\mathbb R}^+
\end{array}\right\}.
\end{equation}
The cost associated to the trajectory $ ( y_x, a)\in {\mathcal T}_{x,\eta,\epsilon}$ is
\begin{equation}
\label{eq:46}
{\mathcal J}_{\eta,\epsilon}(x;( y_x, a) )=\int_0^\infty \ell_{\eta,\epsilon}(y_x(t),a(t)) e^{-\lambda t} dt,
\end{equation}
with $\lambda>0$. The value function of the infinite horizon optimal control problem is
\begin{equation}
\label{eq:4}
v_{\eta,\epsilon}(x)= \inf_{( y_x, a)\in {\mathcal T}_{x,\eta,\epsilon}} {\mathcal J}_{\eta,\epsilon}(x;( y_x, a) ).
\end{equation}
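\begin{remark}
For readers who wish to experiment, the discounted cost (\ref{eq:46}) of a single admissible trajectory is easy to approximate numerically. The following Python sketch uses a forward Euler discretization with entirely made-up dynamics, running cost and control; it is only an illustration of the definitions and plays no role in the analysis.
\begin{verbatim}
# Forward-Euler approximation of one trajectory and of its discounted cost.
import numpy as np

lam, dt, T = 1.0, 1e-3, 20.0        # discount rate, time step, truncation horizon

def f(x, a):                        # hypothetical dynamics: unit speed, direction a
    return np.array([np.cos(a), np.sin(a)])

def ell(x, a):                      # hypothetical running cost, cheaper for x_1 > 0
    return 1.0 if x[0] > 0 else 2.0

x, a, cost = np.array([-1.0, 0.0]), 0.0, 0.0
for k in range(int(T / dt)):
    cost += np.exp(-lam * k * dt) * ell(x, a) * dt
    x = x + dt * f(x, a)
print(cost)                         # approximates the cost of this trajectory
\end{verbatim}
The truncation at time $T$ introduces an error of at most $M_\ell e^{-\lambda T}/\lambda$, which is negligible here.
\end{remark}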
\begin{proposition}\label{sec:assumption}
The value function $v_{\eta,\epsilon}$ is bounded uniformly in $\eta$ and $\epsilon$ and continuous in $ {\mathbb R}^2$.
\end{proposition}
\begin{proof}
This result is classical and can be proved with the same arguments as in \cite{MR1484411}.
\end{proof}
\subsection{The Hamilton-Jacobi equation}
\label{sec:hamilt-jacobi-equat}
Similar optimal control problems have recently been studied in \cite{MR3358634,MR3621434,oudet2014,imbert:hal-01073954}.
It turns out that $v_{\eta,\epsilon}$ can be characterized as the viscosity solution of a Hamilton-Jacobi equation with a discontinuous Hamiltonian,
(once the notion of viscosity solution has been specially tailored to cope with the above mentioned discontinuity).
We briefly recall the definitions used e.g. in \cite{oudet2014}.
\paragraph{Hamiltonians} For $i=L,R$, let the Hamiltonians $H^i: {\mathbb R}^2\rightarrow {\mathbb R} $ and $H_{ \Gamma_{\eta,\epsilon}}:\Gamma_{\eta,\epsilon}\times {\mathbb R}^2\times {\mathbb R}^2\to {\mathbb R}$ be defined by
\begin{eqnarray}
\label{eq:7}
H^i(p)&=& \max_{a\in A^i} (-p \cdot f^i(a) -\ell^i(a)),\\
\label{eq:8}
H_{ \Gamma_{\eta,\epsilon}} (x,p^L,p^R)&=& \max \{ \;H_{ \Gamma_{\eta,\epsilon}}^{+,L}(x,p^L),H_{ \Gamma_{\eta,\epsilon}}^{-,R} (x,p^R)\},
\end{eqnarray}
where in (\ref{eq:8}), $p^L\in {\mathbb R}^2 $, $p^R \in {\mathbb R}^2$, and
\begin{eqnarray}
\label{eq:29}
H_{ \Gamma_{\eta,\epsilon}}^{-,i} (x,p)= \max_{a\in A^i \hbox{ s.t. } f^i(a)\cdot n_{\eta,\epsilon}(x)\ge 0} (-p\cdot f^i(a) -\ell^i(a)), \quad \forall x\in \Gamma_{\eta,\epsilon}, \forall p\in {\mathbb R}^2,
\\
\label{eq:30}
H_{ \Gamma_{\eta,\epsilon}}^{+,i} (x,p)= \max_{a\in A^i \hbox{ s.t. } f^i(a)\cdot n_{\eta,\epsilon}(x)\le 0} (-p\cdot f^i(a) -\ell^i(a)), \quad \forall x\in \Gamma_{\eta,\epsilon}, \forall p\in {\mathbb R}^2.
\end{eqnarray}
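\begin{remark}
When the control sets are finite (for instance after discretization), the Hamiltonians in (\ref{eq:7}), (\ref{eq:8}), (\ref{eq:29}) and (\ref{eq:30}) are finite maxima and can be evaluated directly. The following Python sketch is purely illustrative: the control sets, dynamics, running costs and normal vector below are made up and are not taken from the model.
\begin{verbatim}
# Evaluation of H^i, H^{+,i}, H^{-,i} and H_Gamma for toy finite control sets.
import numpy as np

def H(p, controls):                 # controls: list of pairs (f, ell)
    return max(-np.dot(p, f) - ell for f, ell in controls)

def H_plus(p, controls, n):         # restriction to dynamics with f . n <= 0
    return max(-np.dot(p, f) - ell for f, ell in controls if np.dot(f, n) <= 0)

def H_minus(p, controls, n):        # restriction to dynamics with f . n >= 0
    return max(-np.dot(p, f) - ell for f, ell in controls if np.dot(f, n) >= 0)

angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
AL = [(np.array([np.cos(t), np.sin(t)]), 1.0) for t in angles]   # toy (f^L, ell^L)
AR = [(np.array([np.cos(t), np.sin(t)]), 2.0) for t in angles]   # toy (f^R, ell^R)

n = np.array([1.0, 0.0])            # normal pointing from the left to the right
pL, pR = np.array([0.3, -0.2]), np.array([-0.1, 0.4])
H_Gamma = max(H_plus(pL, AL, n), H_minus(pR, AR, n))
print(H(pL, AL), H(pR, AR), H_Gamma)
\end{verbatim}
\end{remark}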
\paragraph{Test-functions}
For $\eta >0$ and $\epsilon>0$, the function $\phi: {\mathbb R}^2\to {\mathbb R}$ is an admissible test-function if
$\phi$ is continuous in ${\mathbb R}^2$ and for any $i\in \{L,R\}$, $\phi|_{\overline{\Omega^i_{\eta,\epsilon}}} \in{\mathcal C}^1(\overline{\Omega^i_{\eta,\epsilon}})$.
\\The set of admissible test-functions is noted ${\mathcal R}_{\eta,\epsilon}$. If $\phi \in {\mathcal R}_{\eta,\epsilon}$, $x\in \Gamma_{\eta,\epsilon}$ and $i\in \{L,R\}$, we set
$\displaystyle D\phi^i(x)= \lim_{\overset{ x'\to x}{x'\in \Omega^i_{\eta,\epsilon}}}D\phi(x')$.
\begin{remark}
\label{sec:test-functions}
If $x\in \Gamma_{\eta,\epsilon}$, $\phi$ is an admissible test-function and $p^L= D\phi^L (x)$, $p^R= D\phi^R (x)$, then $p^L -p^R$ is colinear to $n_{\eta,\epsilon}(x)$ defined in \S~\ref{sec:geometry}.
\end{remark}
\paragraph{Definition of viscosity solutions}
We are going to define viscosity solutions of the following transmission problem:
\begin{eqnarray}
\label{eq:58}
\lambda u(x)+H^L(Du(x))&= 0, \quad \quad & \hbox{ if }x\in \Omega^L_{\eta,\epsilon},\\
\label{eq:59}
\lambda u(x)+H^R(Du(x))&= 0, \quad \quad & \hbox{ if }x\in \Omega^R_{\eta,\epsilon},\\
\label{eq:60}
\lambda u(x)+H_{ \Gamma_{\eta,\epsilon}} (x,Du^L(x),Du^R(x))&= 0, \quad \quad & \hbox{ if }x\in\Gamma_{\eta,\epsilon},
\end{eqnarray}
where $u^L$ (respectively $u^R$) stands for $u|_{\overline{\Omega^L_{\eta,\epsilon}}}$ (respectively $u|_{\overline{\Omega^R_{\eta,\epsilon}}}$). For brevity, we also note this problem
\begin{equation}\label{HJaepsilon}
\lambda u+{\cal{H}}_{\eta,\epsilon}(x, Du)=0.
\end{equation}
\begin{itemize}
\item An upper semi-continuous function $u:{\mathbb R}^2\to{\mathbb R}$ is a subsolution of \eqref{HJaepsilon}
if for any $x\in {\mathbb R}^2$, any $\phi\in{\mathcal R}_{\eta,\epsilon}$ s.t. $u-\phi$ has a local maximum point at $x$, then
\begin{eqnarray}
\label{eq:5bis}
\lambda u(x)+H^i(D\phi^i(x))&\le 0, \quad \quad & \hbox{ if }x\in \Omega^i_{\eta,\epsilon},\\
\label{eq:5bisgamma}
\lambda u(x)+H_{ \Gamma_{\eta,\epsilon}} (x,D\phi^L(x),D\phi^R(x))&\le 0, \quad \quad & \hbox{ if }x\in\Gamma_{\eta,\epsilon},
\end{eqnarray}
where, for $x\in \Gamma_{\eta,\epsilon}$, the notation $D\phi^i(x)$ is introduced in the definition of the test-functions,
see also Remark~\ref{sec:test-functions}.
\item A lower semi-continuous function $u:{\mathbb R}^2\to{\mathbb R}$ is a supersolution of \eqref{HJaepsilon}
if for any $x\in {\mathbb R}^2$, any $\phi\in{\mathcal R}_{\eta,\epsilon}$ s.t. $u-\phi$ has a local minimum point at $x$, then
\begin{eqnarray}
\label{eq:6bis} \lambda u(x)+H^i(D\phi^i(x))&\geq 0, \quad \quad & \hbox{ if }x\in \Omega^i_{\eta,\epsilon},\\
\label{eq:6bisgamma} \lambda u(x)+H_{ \Gamma_{\eta,\epsilon}} (x,D\phi^L(x),D\phi^R(x))&\ge 0 \quad \quad & \hbox{ if }x\in\Gamma_{\eta,\epsilon}.
\end{eqnarray}
\item A continuous function $u:{\mathbb R}^2\to{\mathbb R}$ is a viscosity solution of \eqref{HJaepsilon} if it is both a viscosity sub and supersolution of \eqref{HJaepsilon}.
\end{itemize}
We skip the proof of the following theorem, see \cite{oudet2014,imbert:hal-01073954}.
\begin{theorem}
\label{existence-epsilon_a}
The value function $v_{\eta,\epsilon}$ defined in \eqref{eq:4} is the unique bounded viscosity solution of \eqref{HJaepsilon}.
\end{theorem}
\subsection{The main result and the general orientation}
\label{sec:main-result-main}
We set
\begin{equation}
\label{eq:104}
\Omega^L=\{x\in {\mathbb R}^2, x_1<0\}, \quad \Omega^R=\{x\in {\mathbb R}^2, x_1>0\}, \quad \Gamma=\{x\in {\mathbb R}^2, x_1=0\}.
\end{equation}
\paragraph{Informal statement of the main result}
Our main result, namely Theorem \ref{sec:main-result-4} below,
is that, as $\epsilon\to 0$, $v_{\epsilon,\epsilon}$ converges locally uniformly to $v$, the unique bounded viscosity solution of
\begin{eqnarray}
\label{eq:14}
\lambda v(z)+ H^L( D v(z)) = 0 & &\hbox{if } z\in \Omega^L,\\
\label{eq:15}
\lambda v(z)+ H^R( D v(z)) = 0 & &\hbox{if } z\in \Omega^R,\\
\label{eq:17} \lambda v(z)+\max\left(E(\partial_{z_2}v(z)), H^{L,R}( D v^L(z),D v^R(z)) \right) = 0 & &\hbox{if } z\in \Gamma.
\end{eqnarray}
The Hamiltonians $H^L$ and $H^R$ are defined in (\ref{eq:7}).
In the effective transmission condition (\ref{eq:17}),
\begin{equation}
\label{eq:25}
H^{L,R}( p^L,p^R) = \max \{ \;H^{+,1,L}(p^L),H^{-,1,R} (p^R)\},
\end{equation}
for $p^L, p^R \in {\mathbb R}^2 $. For $i=L,R$,
$H^{+,1,i}(p)$ (respectively $H^{-,1,i}(p)$) is the nondecreasing (respectively nonincreasing) part of the Hamiltonian $H^i$ with respect to $p_1$. In what follows, $p^L -p^R$ will be colinear to $e_1$.
The effective flux-limiter $E: {\mathbb R}\to {\mathbb R}$ will be characterized in \S \ref{sec:second-passage-limit} below.
\\
For brevity, the problem in (\ref{eq:14})-(\ref{eq:17}) will sometimes be noted
\begin{equation}\label{eq:16}
\lambda v(z)+{\cal{H}}(z, Dv(z))=0.
\end{equation}
\paragraph{General orientation}
The proof of this result relies on Evans' method of perturbed test-functions, see \cite{MR1007533}. Such a method requires building a family of correctors depending on a single real variable $p_2$ (which stands for the derivative of $v$ along $\Gamma$).
The corrector, that will be noted $\xi_\epsilon(p_2,\cdot)$, solves a cell problem, see (\ref{eq:48}) below,
with a transmission condition on the interface $\Gamma_{1,\epsilon}$, (note that the original geometry
is dilated by a factor $1/\epsilon$). The ergodic constant associated to the latter cell problem will be noted $E_\epsilon(p_2)$ in \S~\ref{sec:simult-pass-limit}. The fact that the corrector and the ergodic constant still depend on $\epsilon$ is connected to the existence of two small scales in the problem.
The existence of the pairs $(\xi_\epsilon(p_2),E_\epsilon(p_2) )$ and the asymptotic behavior of
$\xi_\epsilon(p_2)$ as $\epsilon\to 0$ will be obtained essentially by using the arguments
proposed in \S \ref{sec:effect-probl-obta} below. In fact, for a reason that will soon become clear, in \S \ref{sec:effect-probl-obta}, we consider the interfaces $\Gamma_{\eta,\epsilon}$ instead of $\Gamma_{1,\epsilon}$, for a fixed arbitrary positive parameter $\eta$. Then, the region where the two media are mixed is a strip whose width is $\sim \eta$, see Figure~\ref{fig:geom2}: in this region, an effective Hamiltonian is found by classical homogenization techniques; the main achievement of \S \ref{sec:effect-probl-obta} is to obtain the effective transmission conditions on the boundaries of the strip (two parallel straight lines) and to prove the convergence of the solutions of the transmission problems as $\epsilon\to 0$.
\\
Next, in \S~\ref{sec:second-passage-limit}, we pass to the limit as $\eta\to 0$ in the effective problem that we have just obtained in \S \ref{sec:effect-probl-obta}. In the limit $\eta\to 0$, we obtain (\ref{eq:16}), whose solution is unique (the well-posedness of (\ref{eq:16}) implies the convergence of the whole family of solutions as $\eta\to 0$), together with a characterization of $E(p_2)$ in (\ref{eq:17}), see (\ref{eq:18}) below.
In the proof of the main result stated above and in Theorem~\ref{sec:main-result-4},
Evans' method requires studying the asymptotic behavior of $E_\epsilon(p_2)$ and of $x\mapsto \epsilon \xi_\epsilon(p_2, x/\epsilon)$ as $\epsilon \to 0$. This is precisely what is done in \S~\ref{sec:simult-pass-limit} below, where, in particular, we prove that $\lim_{\epsilon\to 0} E_\epsilon(p_2)= E(p_2)$, with $E(p_2)$ given by (\ref{eq:18}).
Therefore, we will prove that the limit of $v_{\epsilon,\epsilon}$ as $\epsilon\to 0$ can be obtained by
considering the transmission problems with interfaces $\Gamma_{\eta,\epsilon}$, letting $\epsilon$ tend to $0$ first, then $\eta$ tend to $0$.
This is coherent with the intuition that since the two scales $\epsilon^2$ and $\epsilon$ are well separated,
the asymptotic behavior can be obtained in two successive steps.
In what follows, a significant difficulty in the construction of the correctors is
the unboundedness of the domains in which they should be defined.
It is addressed by using the ideas proposed in \cite{MR3299352,MR3441209,MR3565416}.
\section{The effective problem obtained by letting $\epsilon$ tend to $0$}
\label{sec:effect-probl-obta}
In \S~\ref{sec:effect-probl-obta}, $\eta$ is a fixed positive number, whereas $\epsilon$ tends to $0$.
\subsection{Main result}\label{sec:main-result}
Let the domains $\Omega^{L}_{\eta}$, $\Omega^{M}_{\eta}$ and $\Omega^{R} _{\eta}$ and the straight lines $\Gamma^{L,M}_\eta$, $\Gamma^{M,R}_\eta$ be defined by
\begin{eqnarray}
\label{eq:32}
\Omega^{L}_{\eta}=\{x\in {\mathbb R}^2, x_1<-\eta\}, \quad \Omega^{M}_{\eta}=\{x\in {\mathbb R}^2, |x_1|<\eta\},\quad \Omega^{R}_{\eta}=\{x\in {\mathbb R}^2, x_1>\eta\},\\
\label{eq:33}
\Gamma^{L,M}_\eta=\{x\in {\mathbb R}^2, x_1=-\eta\}, \quad \Gamma^{M,R}_\eta=\{x\in {\mathbb R}^2, x_1=\eta\}.
\end{eqnarray}
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.5, trans/.style={thick,<->,shorten >=2pt,shorten <=2pt,>=stealth} ]
\draw[red,thick] (0,0) -- (0,5);
\draw[red,thick] (5,0) -- (5,5);
\draw[trans] (-0.1,5.5) -- (5.1,5.5) ;
\draw (2.5,5.5) node[above]{$2\eta$};
\draw (-1,2.5) node[left]{$\Omega^L_{\eta}$};
\draw (6,2.5) node[right]{$\Omega^R_{\eta}$};
\draw (2.5,2.5) node[]{$\Omega^M_{\eta}$};
\draw (0,-1) node[below]{$\Gamma^{L,M}_{\eta}$};
\draw (5,-1) node[below]{$\Gamma^{M,R}_{\eta}$};
\end{tikzpicture}
\caption{The geometry of the asymptotic problem when $\epsilon\to 0$}
\label{fig:geom2}
\end{center}
\end{figure}
\begin{theorem}\label{th:convergence_result}
As $\epsilon\to 0$, $v_{\eta,\epsilon}$ converges locally uniformly to $v_\eta$ the unique bounded viscosity solution of
\begin{eqnarray}
\label{def:HJeffective1}
\lambda v(z)+ H^L( D v(z)) = 0 & &\hbox{if } z\in \Omega^L_{\eta},\\
\label{def:HJeffective2}
\lambda v(z)+ H^M( D v(z)) = 0 & &\hbox{if } z\in \Omega^M_{\eta},\\
\label{def:HJeffective3}
\lambda v(z)+ H^R( D v(z)) = 0 & &\hbox{if } z\in \Omega^R_{\eta},\\
\label{def:HJeffective4}
\lambda v(z)+\max\left(E^{L,M}(\partial_{z_2}v(z)), H^{L,M}( D v^L(z),D v^M(z)) \right) = 0 & &\hbox{if } z\in \Gamma^{L,M}_\eta,\\
\label{def:HJeffective5}
\lambda v(z)+\max\left(E^{M,R}(\partial_{z_2}v(z)), H^{M,R}( D v^M(z),D v^R(z)) \right) = 0 & &\hbox{if } z\in \Gamma^{M,R}_\eta,
\end{eqnarray}
where $v^L$ (respectively $v^M$,$v^R$) stands for $v|_{\overline{\Omega^L_{\eta}}}$
(respectively $v|_{\overline{\Omega^M_{\eta}}}$, $v|_{\overline{\Omega^R_{\eta}}}$),
and that we note for short
\begin{equation}\label{def:HJeffective_short}
\lambda v(z)+{\cal{H_\eta}}(z, Dv(z))=0.
\end{equation}
The Hamiltonians $H^L$ and $H^R$ are defined in (\ref{eq:7}). The effective Hamiltonian $H^M$ will be defined in \S~\ref{sec:effect-hamilt-omeg} below (note that $p\mapsto H^M (p)$ is convex).
In (\ref{def:HJeffective4}), \begin{equation}
\label{eq:28}
H^{L,M}(p^L,p^M) = \max \{ \;H^{+,1,L}(p^L),H^{-,1,M} (p^M)\},
\end{equation}
for $p^L\in {\mathbb R}^2 $ and $p^M \in {\mathbb R}^2$ (in what follows, $p^L -p^M$ is collinear to $e_1$) and, for $i=L,M,R$, $p\mapsto H^{+,1,i}(p)$ (respectively $p\mapsto H^{-,1,i}(p)$) is the nondecreasing (respectively nonincreasing) part of the Hamiltonian $H^i$ with respect to the first coordinate $p_1$ of $p$.
In (\ref{def:HJeffective5}),
\begin{equation}
\label{eq:31}
H^{M,R}( p^M,p^R) = \max \{ \;H^{+,1,M}(p^M),H^{-,1,R} (p^R)\},
\end{equation}
for $p^M\in {\mathbb R}^2 $ and $p^R \in {\mathbb R}^2$.
\\
The effective flux limiters $E^{L,M}$ and $E^{M,R}$ will be defined in \S~\ref{sec:ergod-const-state}.
\end{theorem}
Let us list the notions which are needed by Theorem~\ref{th:convergence_result} and give a few comments:
\begin{enumerate}
\item Problem~(\ref{def:HJeffective_short}) is a transmission problem across the interfaces $\Gamma^{L,M}_\eta$ and $\Gamma^{M,R}_\eta$,
with the respective effective transmission conditions~(\ref{def:HJeffective4}) and (\ref{def:HJeffective5}).
The notion of viscosity solutions of (\ref{def:HJeffective_short})
is similar to the one defined for problem (\ref{HJaepsilon}).
\item Note that the Hamilton-Jacobi equations in $\Omega^{L}_{\eta}$ and $\Omega^{R}_{\eta}$ are directly inherited from (\ref{eq:5bis}): this is quite natural,
since the Hamilton-Jacobi equation at $x\in \Omega^{L}_{\eta}$ or $x\in \Omega^{R}_{\eta}$ does not depend on $\epsilon$ if $\epsilon$ is small enough.
\item The effective Hamiltonian $H^M(p)$ arising in $\Omega^M_\eta$ will be found by solving classical one dimensional
cell-problems in the fast vertical variable $y_2=x_2/\epsilon$. The cell problems are one dimensional,
since
for any interval $I$
such that $ I \subset\subset (-\eta, \eta)$,
$ \Gamma_{\eta,\epsilon} \cap (I\times {\mathbb R})$ is made of straight horizontal lines as soon as $\epsilon$ is small enough.
\item The Hamiltonian $H^{L,M}$ appearing in the effective transmission condition at the interface $\Gamma^{L,M}_\eta$ is built by considering
only the effective dynamics related to $ \Omega^{i}_{\eta}$ which point from $\Gamma^{L,M}_\eta$
toward $\Omega^{i}_{\eta}$, for $i=L,M$. The same remark holds for $H^{M,R}$ mutatis mutandis.
\item The effective flux limiters $E^{L,M}$ and $E^{M,R}$ are the only ingredients in the effective problem that keep track of the function $g$.
They are constructed in \S~\ref{sec:ergod-const-state} and \ref{sec:passage-limit-as} below, see (\ref{eq:def_E}), as
the limit of a sequence of ergodic constants related to larger and larger domains bounded in the horizontal direction. This is reminiscent of a construction first performed in \cite{MR3299352} for
singularly perturbed problems in optimal control leading to Hamilton-Jacobi equations posed on a network. Later, similar constructions were used in \cite{MR3441209,MR3565416}.
\item For proving Theorem~\ref{th:convergence_result},
the chosen strategy is reminiscent of \cite{MR3441209},
because it relies on the construction of a single corrector,
whereas the method proposed in \cite{MR3299352} requires the construction of an infinite family of correctors. This will be done in \S~\ref{sec:proof-theor-refth:c} and the slopes at infinity of the correctors will be studied in \S~\ref{sec:asympt-valu-slop}.
\end{enumerate}
\subsection{The effective Hamiltonian $H^M$}\label{sec:effect-hamilt-omeg}
The first step in understanding the asymptotic behavior of the value function $v_{\eta,\epsilon}$ as $\epsilon\to 0$ is to look at what happens
in $\Omega^M_\eta$, i.e. in the region where $|x_1| <\eta$. For that, it is possible to rely on existing results, see \cite{LPV} for the first work on the topic.
In $\Omega^M_\eta$, if the sequence of value functions $v_{\eta,\epsilon}$ converges to $v_\eta$ uniformly as $\epsilon\to 0$, then
$v_\eta$ is a viscosity solution of a first order partial differential equation involving an effective Hamiltonian noted $H^M$ in (\ref{def:HJeffective2}) and in the rest of the paper.
The latter will be obtained by solving a one-dimensional periodic boundary value problem in the fast variable $y\in {\mathbb R}$, usually
called a {\sl cell problem}. Before stating the result, it is convenient to introduce
the open sets $Y_\eta ^L = \left(\eta a, \eta b \right)+ \eta {\mathbb Z}$ and $Y_\eta ^R= {\mathbb R} \setminus \overline {Y_\eta ^L}$, and the discrete sets
$\gamma_\eta^{a}= \{\eta a\}+ \eta {\mathbb Z}$, $\gamma_\eta^{b}= \{\eta b\}+ \eta {\mathbb Z}$.
For $p\in {\mathbb R}^2$, $i=L, R$, we also define the Hamiltonians:
\begin{eqnarray}
H^{-,2,i}(p) &=& \max_{\alpha \in A^i, f^i(\alpha ) \cdot e_2 \ge 0 } (-p \cdot f^i(\alpha ) -\ell^i(\alpha )),\\
H^{+,2,i}(p) &=& \max_{\alpha \in A^i, f^i(\alpha ) \cdot e_2 \le 0 } (-p \cdot f^i(\alpha ) -\ell^i(\alpha )).
\end{eqnarray}
Note that $H^{+,2,i}(p)$ (respectively $H^{-,2,i}(p)$) is the nondecreasing (respectively nonincreasing)
part of the Hamiltonian $p\mapsto H^i(p)$ with respect to the second coordinate $p_2$ of $p$.
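As an illustration (the specific choice of $A^i$, $f^i$ and $\ell^i$ below is not the one made in the rest of the paper), if $A^i$ is the closed unit ball of ${\mathbb R}^2$, $f^i(\alpha)=\alpha$ and $\ell^i(\alpha)=1$, then $H^i(p)=|p|-1$ and a direct computation yields
\begin{displaymath}
H^{-,2,i}(p)=\left\lbrace
\begin{array}{ll}
|p|-1 & \mbox{ if } p_2\le 0,\\
|p_1|-1 & \mbox{ if } p_2> 0,
\end{array}
\right.
\qquad
H^{+,2,i}(p)=\left\lbrace
\begin{array}{ll}
|p_1|-1 & \mbox{ if } p_2< 0,\\
|p|-1 & \mbox{ if } p_2\ge 0.
\end{array}
\right.
\end{displaymath}
In particular, $\max\left(H^{+,2,i}(p),H^{-,2,i}(p)\right)=H^i(p)$; this identity holds for general $A^i$, $f^i$, $\ell^i$, since the union of the two constraint sets appearing in the maxima is the whole control set $A^i$.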
\begin{proposition}
\label{sec:effect-hamilt-omeg-1}
For any $p\in {\mathbb R}^2$ there exists a unique real number $H^M(p)$ such that the following one dimensional
cell-problem has a Lipschitz continuous viscosity solution $\zeta(p,\cdot)$:
\begin{eqnarray}
H^R\left ( p+ \frac {d\zeta}{dy}(y) e_2 \right) = H^M(p), \quad \hbox{ if } y\in Y_\eta ^R ,\\
H^L\left ( p+ \frac {d\zeta}{dy}(y) e_2 \right) = H^M(p), \quad \hbox{ if } y \in Y_\eta ^L,\\
\max\left( H^{+,2,R} \left ( p+ \frac {d\zeta}{dy} (y ^-) e_2 \right), H^{-,2,L}\left ( p+ \frac {d\zeta}{dy}(y ^+) e_2 \right) \right)= H^M(p),
\hbox{ if } y\in \gamma_\eta^{a} ,\\
\max\left( H^{+,2,L}\left ( p+ \frac {d\zeta}{dy} (y^-) e_2 \right), H^{-,2,R}\left ( p+ \frac {d\zeta}{dy} (y ^+) e_2 \right) \right)= H^M(p),
\hbox{ if } y\in \gamma_\eta^{b},\\
\zeta \hbox{ is periodic in }y \hbox{ with period }\eta.
\end{eqnarray}
\end{proposition}
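As a simple sanity check (this degenerate situation is considered here only as an illustration), assume that $H^L=H^R=H$. Then $\zeta(p,\cdot)\equiv 0$ is a Lipschitz continuous solution of the cell-problem and, by the uniqueness of the ergodic constant,
\begin{displaymath}
H^M(p)=H(p) \qquad \mbox{for all } p\in{\mathbb R}^2,
\end{displaymath}
since the equations in $Y_\eta^R$ and $Y_\eta^L$ are then obvious and the conditions on $\gamma_\eta^{a}$ and $\gamma_\eta^{b}$ reduce to $\max\left(H^{+,2,i}(p),H^{-,2,i}(p)\right)=H(p)$, which always holds. In other words, in the absence of contrast between the two media, the homogenization step in $\Omega^M_\eta$ has no effect.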
The following lemma contains information on $H^M$: we skip its proof because it is very much like the proof of \cite[Lemma 4.16]{MR3565416}.
\begin{lemma}
\label{sec:effect-hamilt-hm}
The function $p\mapsto H^M(p)$ is convex.
There exists a constant $C$
such that for any
$ p, p'\in {\mathbb R}^2$,
\begin{eqnarray}
\label{lem:E_lipschitz_wrt_p_2}
\mid H^M(p)-H^M (p') \mid \le C |p-p'|,
\\
\label{lem:E_coercive}
\delta_0|p|-C\le H^M (p)\le C |p|+C.
\end{eqnarray}
\end{lemma}
As in \cite{MR3299352,MR3441209}, we introduce three functions $ E_0^i: {\mathbb R} \to {\mathbb R}$, $i=L,M,R$, and two functions $ E^{L,M}_0: {\mathbb R} \to {\mathbb R}$ and $ E^{M,R}_0 : {\mathbb R} \to {\mathbb R}$:
\begin{eqnarray}
\label{def:E_0^i}
E_0^i(p_2)&=&\min\left\lbrace H^i(p_2e_2+qe_1), \quad q\in {\mathbb R} \right\rbrace,
\\
\label{def:ELM_0}
E^{L,M}_0(p_2)&=&\max\left\lbrace E_0^L(p_2), E_0^M(p_2)\right\rbrace,
\\
\label{def:EMR_0}
E^{M,R}_0(p_2)&=&\max\left\lbrace E_0^M(p_2), E_0^R(p_2)\right\rbrace.
\end{eqnarray}
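Purely as an illustration (eikonal Hamiltonians of this specific form are not assumed anywhere in the paper), if $H^i(p)=|p|-c_i$ for some constant $c_i>0$, then the minimum in (\ref{def:E_0^i}) is attained at $q=0$ and
\begin{displaymath}
E_0^i(p_2)=|p_2|-c_i,\qquad E^{L,M}_0(p_2)=|p_2|-\min(c_L,c_M),\qquad E^{M,R}_0(p_2)=|p_2|-\min(c_M,c_R).
\end{displaymath}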
Recall that for $i=L,M,R$, $H^{+,1,i}(p)$ (respectively $H^{-,1,i}(p)$) denotes the nondecreasing (respectively nonincreasing)
part of the Hamiltonian $p\mapsto H^i(p)$ with respect to the first coordinate $p_1$ of $p$.
For $ p_2\in {\mathbb R}$, there exists a unique pair of real numbers $p^{-,i}_{1,0}(p_2)\le p^{+,i}_{1,0}(p_2) $ such that
\begin{eqnarray*}
H^{-,1,i}(p_2 e_2+p_1 e_1) &=& \left\lbrace
\begin{array}{ll}
H^i(p_2 e_2+p_1e_1) & \mbox{ if } p_1\le p^{-,i}_{1,0}(p_2),\\
E^i_0(p_2) & \mbox{ if } p_1> p^{-,i}_{1,0}(p_2),
\end{array}
\right. \\
H^{+,1,i}(p_2 e_2+p_1e_1) &=& \left\lbrace
\begin{array}{ll}
E^i_0(p_2) & \mbox{ if } p_1\le p^{+,i}_{1,0}(p_2),\\
H^i(p_2 e_2+p_1e_1) & \mbox{ if } p_1> p^{+,i}_{1,0}(p_2).
\end{array}
\right.
\end{eqnarray*}
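With the illustrative eikonal Hamiltonians $H^i(p)=|p|-c_i$ introduced after (\ref{def:EMR_0}), one finds $p^{-,i}_{1,0}(p_2)=p^{+,i}_{1,0}(p_2)=0$ and
\begin{displaymath}
H^{-,1,i}(p_2 e_2+p_1 e_1) = \left\lbrace
\begin{array}{ll}
\sqrt{p_1^2+p_2^2}-c_i & \mbox{ if } p_1\le 0,\\
|p_2|-c_i & \mbox{ if } p_1> 0,
\end{array}
\right.
\qquad
H^{+,1,i}(p_2 e_2+p_1 e_1) = \left\lbrace
\begin{array}{ll}
|p_2|-c_i & \mbox{ if } p_1\le 0,\\
\sqrt{p_1^2+p_2^2}-c_i & \mbox{ if } p_1> 0.
\end{array}
\right.
\end{displaymath}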
\subsection{Truncated cell problems for the construction of the flux limiters $E^{M,R}$ and $E^{L,M}$}
\label{sec:ergod-const-state}
In what follows, we focus on the construction of $E^{M,R}$ and on its properties,
the construction of $E^{L,M}$ being completely symmetric.
\subsubsection{Zooming near the line $\Gamma^{M,R}_\eta$}\label{sec:state-constr-probl}
Asymptotically when $\epsilon\to 0$, the two lines $\Gamma^{L,M}_\eta$ and $\Gamma^{M,R}_\eta$ appear very far from each other at the scale $\epsilon$.
This is why we are going to introduce another geometry obtained by first zooming near
$\Gamma^{M,R}_\eta$ at a scale $1/\epsilon$, then letting $\epsilon $ tend to $0$.
\\
Let $\widetilde G$ be the multivalued step function, periodic with period $1$, such that
\begin{enumerate}
\item $\widetilde G(a)=\widetilde G(b)=[-\infty,0]$,
\item $\widetilde G(t)=\{0\}$ if $t\in (a,b )$,
\item $\widetilde G(t)=\{-\infty\}$ if $t \in [0,a )\cup(b,1]$.
\end{enumerate}
Consider the curve $\widetilde \Gamma_{\eta}$ defined as the graph of the multivalued function
$\tilde g_{\eta}: x_2\mapsto \eta \widetilde G(\frac {x_2}{ \eta})+ \eta g(\frac {x_2}{ \eta})$.
We also define the domain $\widetilde \Omega_{\eta}^R $ (resp. $\widetilde \Omega_{\eta}^L $) as the epigraph (resp. hypograph)
of $\tilde g_{\eta}$:
\begin{eqnarray*}
\widetilde \Omega_{\eta}^R= & \{ x\in {\mathbb R}^2 : x_1> \tilde g_{\eta} (x_2)\},\\
\widetilde \Omega_{\eta}^L= & \{ x\in {\mathbb R}^2 : x_1< \tilde g_{\eta} (x_2)\}.
\end{eqnarray*}
The unit normal vector $\tilde n_{\eta}(x)$ at $ x\in \widetilde \Gamma_{\eta}$ is defined as follows: setting $y_2= \frac { x_2}{\eta}$,
\begin{displaymath}
\tilde n_{\eta}(x)= \left\{
\begin{array}[c]{cl}
\displaystyle
\left (1 + \left( g'(y_2)\right)^2 \right) ^{-1/2} \left( e_1 - g' (y_2) e_2\right)
\quad &\hbox{ if }\quad y_2 \notin {\mathbb S}\\
\displaystyle - e_2 \quad&\hbox{ if }\quad y_2 = a \mod{1}\\
\displaystyle e_2 \quad&\hbox{ if } \quad y_2 = b \mod{1}.
\end{array}
\right.
\end{displaymath}
Note that $\tilde n_{\eta}(x)$ is oriented from $\widetilde \Omega^L_{\eta}$ to $\widetilde \Omega^R_{\eta}$.
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1, trans/.style={thick,<->,shorten >=2pt,shorten <=2pt,>=stealth} ]
\draw[red,thick] (0,10) -- (10,10);
\draw[red,thick] (10,10) .. controls (11,10) and (11,9.5) .. (10,9.5);
\draw[red,thick] (10,9.5) -- (0,9.5);
\foreach \x in {0,...,53}
\draw[red] (\x/5,9.6)--(\x/5,9.9);
\draw (4,9.75) node[right]{$\widetilde \Omega^L_{\eta}$};
\draw[red,thick] (0,9) -- (10,9);
\draw[red,thick] (10,9) .. controls (11,9) and (11,8.5) .. (10,8.5);
\draw[red,thick] (10,8.5) -- (0,8.5);
\foreach \x in {0,...,53}
\draw[red] (\x/5,8.6)--(\x/5,8.9);
\draw (4,8.75) node[right]{$\widetilde \Omega^L_{\eta}$};
\draw[red,thick] (0,8) -- (10,8);
\draw[red,thick] (10,8) .. controls (11,8) and (11,7.5) .. (10,7.5);
\draw[red,thick] (10,7.5) -- (0,7.5);
\foreach \x in {0,...,53}
\draw[red] (\x/5,7.6)--(\x/5,7.9);
\draw (4,7.75) node[right]{$\widetilde \Omega^L_{\eta}$};
\draw[red,thick] (0,7) -- (10,7);
\draw[red,thick] (10,7) .. controls (11,7) and (11,6.5) .. (10,6.5);
\draw[red,thick] (10,6.5) -- (0,6.5);
\foreach \x in {0,...,53}
\draw[red] (\x/5,6.6)--(\x/5,6.9);
\draw (4,6.75) node[right]{$\widetilde \Omega^L_{\eta}$};
\draw[trans] (9.9,10.5) -- (10.8,10.5) ;
\draw (10.4,10.5) node[above]{$\sim \eta$};
\draw[trans] (5,10.1) -- (5,8.9) ;
\draw (5,9.5) node[right]{$\eta$};
\draw (11,8.5) node[right]{$\widetilde \Omega^R_{\eta}$};
\end{tikzpicture}
\caption{The interface $\widetilde \Gamma_{\eta}$ separates the disconnected domain $\widetilde \Omega^L_{\eta}$ from the connected domain $\widetilde \Omega^R_{\eta}$.}
\label{fig:geom3}
\end{center}
\end{figure}
\subsubsection{State-constrained problem in truncated domains}\label{sec:state-constr-probl-1}
We introduce the following Hamiltonians:
\begin{eqnarray}
\label{eq:1}
H_{ \widetilde \Gamma_{\eta}}^{-,i} (p,y)= \max_{\alpha\in A^i \hbox{ s.t. } f^i(\alpha)\cdot \tilde n_{\eta}(y)\ge 0} (-p\cdot f^i(\alpha) -\ell^i(\alpha)), \quad \forall y\in \widetilde \Gamma_{\eta}, \forall p\in {\mathbb R}^2,
\\
\label{eq:34}
H_{ \widetilde \Gamma_{\eta}}^{+,i} (p,y)= \max_{\alpha\in A^i \hbox{ s.t. } f^i(\alpha)\cdot \tilde n_{\eta}(y)\le 0} (-p\cdot f^i(\alpha) -\ell^i(\alpha)), \quad \forall y\in \widetilde \Gamma_{\eta}, \forall p\in {\mathbb R}^2,
\end{eqnarray}
with $\tilde n_{\eta}(y)$ defined in \S~\ref{sec:state-constr-probl}, and, for $p^L, p^R \in {\mathbb R}^2 $
\begin{equation}
\label{eq:3}
H_{ \widetilde \Gamma_{\eta}} (p^L,p^R,y)= \max \{ \;H_{ \widetilde \Gamma_{\eta}}^{+,L}(p^L,y),H_{ \widetilde \Gamma_{\eta}}^{-,R} (p^R,y)\}.
\end{equation}
In what follows, $p^L -p^R$ will always be collinear to $\tilde n_{\eta}(y)$.
For $\rho>0$, let us set $ Y^\rho=\{y: |y_1| < \rho \}$. For $\rho$ large enough such that $\tilde \Gamma_\eta$ is
strictly contained in $ \{y: y_1<\rho\}$, consider the {\sl truncated cell problem}
\begin{equation}
\label{trunc-cellp}
\left\{
\begin{array}[c]{lll}
H^L(Du(y)+p_2e_2)&\le \lambda_\rho( p_2)&\hbox{ if } y\in \widetilde \Omega_{\eta}^L \cap Y^\rho , \\
H^L(Du(y)+p_2e_2)&\ge \lambda_\rho( p_2)&\hbox{ if } y\in \widetilde \Omega_{\eta}^L \cap \overline{Y^\rho} , \\
H^R(Du(y)+p_2e_2)&\le \lambda_\rho( p_2)&\hbox{ if } y\in \widetilde \Omega_{\eta}^R \cap Y^\rho , \\
H^R(Du(y)+p_2e_2)&\ge \lambda_\rho( p_2)&\hbox{ if } y\in \widetilde \Omega_{\eta}^R \cap \overline{Y^\rho} , \\
H_{ \widetilde \Gamma_{\eta}} (Du^L(y)+p_2e_2 , Du^R(y)+p_2e_2,y)&\le \lambda_\rho( p_2)&\hbox{ if } y\in \widetilde \Gamma_{\eta} \cap Y^\rho , \\
H_{ \widetilde \Gamma_{\eta}} (Du^L(y)+p_2e_2 , Du^R(y)+p_2e_2,y)&\ge \lambda_\rho( p_2)&\hbox{ if } y\in \widetilde \Gamma_{\eta} \cap \overline{Y^\rho} , \\
u \hbox{ is 1-periodic w.r.t. } y_2/\eta,
\end{array}
\right.
\end{equation}
where the inequalities are understood in the viscosity sense.
\begin{lemma}
\label{lem:existence_solution_truncated_cell_pb}
There is a unique $\lambda_\rho( p_2)\in {\mathbb R}$ such that \eqref {trunc-cellp} admits a viscosity solution.
For this choice of $\lambda_\rho( p_2)$, there exists a solution $\chi_\rho(p_2,\cdot)$
which is Lipschitz continuous with Lipschitz constant
$L$ depending on $p_2$ only (independent of $\rho$).
\end{lemma}
\begin{proof}
We skip the proof of this lemma, since it is very much like that of \cite[ Lemma 4.6]{MR3565416}.
\end{proof}
\subsection{ The effective flux limiter $E^{M,R}(p_2)$ and the global cell problem}
\label{sec:passage-limit-as}
As in \cite{MR3299352,MR3565416}, using the optimal control interpretation of (\ref{trunc-cellp}), it is easy to prove that there exists a positive constant $K$, which may depend on $p_2$ but not on $\rho$, such that for all
$0<\rho_1\le \rho_2$,
\[\lambda_{\rho_1}( p_2)\leq \lambda_{\rho_2}( p_2)\leq K.\]
For $p_2\in {\mathbb R}$, the effective tangential Hamiltonian $E^{M, R}(p_2)$ is defined by
\begin{equation}
\label{eq:def_E}
E^{M, R}(p_2)=\lim_{\rho\rightarrow \infty} \lambda_{\rho}( p_2).
\end{equation}
For a fixed $p_2\in{\mathbb R}$, the {\sl global cell-problem} reads
\begin{equation}
\label{cellpE}
\left\{
\begin{array}[c]{lll}
H^L(Du(y)+p_2e_2)&= E^{M, R}( p_2) &\hbox{ if } y\in \widetilde \Omega_{\eta}^L , \\
H^R(Du(y)+p_2e_2)&= E^{M, R}( p_2) &\hbox{ if } y\in \widetilde \Omega_{\eta}^R , \\
H_{ \widetilde \Gamma_{\eta}} (Du^L(y)+p_2e_2 , Du^R(y)+p_2e_2,y)&= E^{M, R}( p_2) & \hbox{ if } y\in \widetilde \Gamma_{\eta} , \\
u \hbox{ is 1-periodic w.r.t. } y_2/\eta.
\end{array}
\right.
\end{equation}
The following theorem is proved exactly as Theorem 4.8 in \cite{MR3565416}.
\begin{theorem}
\label{thm:stability_from_truncated_cell_pb_to_global_cell_pb}
Let $\chi_\rho( p_2,\cdot)$ be a sequence of uniformly Lipschitz continuous solutions of the truncated cell-problem \eqref{trunc-cellp} which converges to $ \chi( p_2,\cdot)$
locally uniformly in ${\mathbb R}^2$. Then $ \chi(p_2,\cdot)$ is a Lipschitz continuous viscosity solution of the global cell-problem \eqref{cellpE}.
By subtracting $\chi(p_2, 0)$ from $\chi_\rho(p_2,\cdot)$ and $\chi(p_2,\cdot)$, we may also assume that $\chi(p_2,0)=0$.
\end{theorem}
\subsubsection{Comparison between $E^{M, R}_0(p_2)$ and $E^{M, R}(p_2)$ respectively defined in (\ref{def:EMR_0}) and (\ref{eq:def_E})}\label{sec:comp-betw-e_0-1}
For $\epsilon>0$, let us set $W_\epsilon(p_2,y)=\epsilon\chi( p_2,\frac {y-\eta e_1} {\epsilon})$. The following result is reminiscent of \cite[Theorem 4.6,iii]{MR3441209}:
\begin{lemma}
\label{lem:rescaling_omega}
For any $p_2\in {\mathbb R}$,
there exists a sequence $\epsilon_n$ of positive numbers tending to $0$ as $n\to +\infty$ such that $W_{\epsilon_n} (p_2,\cdot)$ converges locally uniformly to a
Lipschitz function $y\mapsto W( p_2, y)$ (whose Lipschitz constant does not depend on $\eta$).
This function is constant with respect to $y_2$ and satisfies $W( p_2,\eta e_1)=0$. It is a viscosity solution of
\begin{equation}
\label{W}
\begin{array}[c]{rcll}
H^R(Du(y)+p_2e_2)&=&E^{M, R}(p_2),\quad&\hbox{ if}\quad y_1>\eta,\\
H^M(Du(y)+p_2e_2)&=&E^{M, R}(p_2),\quad&\hbox{ if}\quad y_1<\eta.
\end{array}
\end{equation}
\end{lemma}
\begin{proof}
It is clear that $y\mapsto W_\epsilon( p_2,y)$ is a Lipschitz continuous function with a Lipschitz constant $\Lambda$ independent of $\epsilon$ and that $W_\epsilon( p_2,\eta e_1)=0$.
Thus, from the Arzel\`a-Ascoli theorem, we may assume that $y\mapsto W_\epsilon( p_2,y)$ converges locally uniformly to some function $y \mapsto W(p_2,y)$, possibly after the extraction of a subsequence.
The function $y \mapsto W(p_2,y)$ is Lipschitz continuous with constant $\Lambda$ and $W(p_2,\eta e_1)=0$.
Moreover, since $W_\epsilon( p_2,y)$ is periodic with respect to $y_2$ with period $\epsilon\eta$, $W(p_2,y)$ does not depend on $y_2$.\\
To prove that $W(p_2,\cdot)$ is a viscosity solution of \eqref{W},
we focus on the more difficult case when $y_1<\eta$; we also restrict ourselves to proving that $W(p_2,\cdot)$ is a viscosity subsolution of \eqref{W}, because the proof that $W(p_2,\cdot)$
is a viscosity supersolution follows the same lines.\\
Consider $\bar y\in {\mathbb R}^2 $ such that $\bar y_1<\eta$, $\phi \in {\mathcal C}^1({\mathbb R}^2)$ and $r_0>0$ such that $ B(\bar y,r_0)$ is contained in $\{y_1<\eta\}$ and that
\begin{equation*}
\label{proof:rescaling1}
W(p_2,y)-\phi(y)<W(p_2,\bar y)-\phi(\bar y)=0 \mbox{ for } y\in B(\bar y,r_0)\setminus\{\bar y\}.
\end{equation*}
We first observe that $y\mapsto W_\epsilon( p_2,y)$ is a viscosity solution of
\begin{equation}
\label{eq:9}
\begin{array}[c]{rcll}
H^i(Du(y)+p_2e_2)&=& E^{M,R}( p_2), \quad & \hbox{ if }y\in \Omega^i_{\eta,\epsilon} \cap B(\bar y,r_0) , \,i=L,R,\\
H_{ \Gamma_{\eta,\epsilon}} (y,Du^L(y)+p_2e_2, Du^R(y)+p_2e_2 )&=& E^{M,R}( p_2), \quad & \hbox{ if }y\in\Gamma_{\eta,\epsilon} \cap B(\bar y,r_0).
\end{array}
\end{equation}
We wish to prove that
$H^M(D\phi(\bar y)+p_2e_2)\le E^{M,R}( p_2)$.
Let us argue by contradiction and assume that there exists $\theta>0$ such that
\begin{equation}
\label{proof:rescaling3}
H^M(D\phi(\bar y)+p_2e_2)= E^{M,R}( p_2)+\theta.
\end{equation}
Take $\phi_\epsilon(y)=\phi(y)+\epsilon \zeta( D\phi(\bar y)+p_2e_2, \frac{y_2}{\epsilon})-\delta$, where
$\zeta$ is a one-dimensional periodic corrector constructed in Proposition \ref{sec:effect-hamilt-omeg-1} and $\delta>0$ is a fixed positive number.
We claim that for $r>0$ small enough, $\phi_\epsilon$ is a viscosity supersolution of
\begin{equation}
\label{eq:10}
\begin{array}[c]{rcll}
H^i(Du(y)+p_2e_2)&\ge & E^{M,R}( p_2)+\frac \theta 2 , \quad & \hbox{ if }y\in \Omega^i_{\eta,\epsilon} \cap B(\bar y,r) ,\\
H_{ \Gamma_{\eta,\epsilon}} (y,Du^L(y)+p_2e_2, Du^R(y)+p_2e_2 )&\ge& E^{M,R}( p_2) +\frac \theta 2, \quad & \hbox{ if }y\in\Gamma_{\eta,\epsilon} \cap B(\bar y,r).
\end{array}
\end{equation}
This comes from (\ref{proof:rescaling3}), the definition of $ \zeta( D\phi(\bar y)+p_2e_2, \frac{y_2}{\epsilon})$, the ${\mathcal C}^1$ regularity of $\phi$
and the Lipschitz continuity of $H^ i$ and $ H_{ \Gamma_{\eta,\epsilon}}$ with respect to the $p$ variables.\\
Hence, $W_\epsilon(p_2,\cdot)$ is a subsolution of (\ref{eq:9}) and $\phi_\epsilon$ is a supersolution of (\ref{eq:10})
in $B(\bar y,r)$.
Moreover for $r>0$ small enough,
$ \max_{y\in \partial B(\bar y,r)}\left(W(p_2,y)-\phi(y) \right)<0$. Hence, for $\delta>0$ and $\epsilon>0$ small enough
$\max_{y\in \partial B(\bar y,r)}\left(W_\epsilon(p_2,y)-\phi_\epsilon(y) \right) \le 0$.\\
Thanks to a standard comparison principle (which holds because $\frac{\theta}{2}>0$),
\begin{equation}
\label{proof:rescaling6}
\max_{y\in B(\bar y,r)}\left(W_\epsilon(p_2,y)-\phi_\epsilon(y) \right) \le 0.
\end{equation}
Letting $\epsilon \to 0$ in \eqref{proof:rescaling6}, we deduce that $W(p_2,\bar y)\le\phi(\bar y)-\delta$,
which contradicts $W(p_2,\bar y)=\phi(\bar y)$, since $\delta>0$.
\end{proof}
Using Lemma~\ref{lem:rescaling_omega}, it is possible to compare
$E^{M, R}_0(p_2)$ and $E^{M, R}(p_2)$, respectively defined in (\ref{def:EMR_0}) and (\ref{eq:def_E}):
\begin{proposition}
\label{cor:E_bigger_than_E_0}
For any $p_2 \in {\mathbb R}$,
\begin{equation}
\label{eq:cor:E_bigger_than_E_0}
E^{M, R}(p_2) \geq E^{M, R}_0(p_2).
\end{equation}
\end{proposition}
\begin{proof}
Thanks to Lemma \ref{lem:rescaling_omega}, the function $y\mapsto W(p_2,y)$ is a viscosity solution of
$H^M(Du(y)+p_2e_2)=E^{M, R}(p_2)$ in $\{y: y_1<\eta\}$. Keeping in mind that $W(p_2,y)$ is independent of $y_2$,
we see that for almost all $y_1<\eta$, $E^{M, R}(p_2)= H^M(\partial_{y_1}W(p_2,y_1)e_1+p_2e_2) \ge E_0^M(p_2)$, from (\ref{def:E_0^i}).
Similarly, we show that $E^{M, R}(p_2)= H^R(\partial_{y_1}W(p_2,y_1)e_1+p_2e_2) \ge E_0^R(p_2)$ for almost all $y_1>\eta$, and we conclude using (\ref{def:EMR_0}).
\end{proof}
\subsubsection{Asymptotic values of the slopes of $\chi$ as $|y_1|\to \infty$}\label{sec:asympt-valu-slop}
From Proposition \ref{cor:E_bigger_than_E_0} and the coercivity of the Hamiltonians $H^i$, $i=M,R$,
the following numbers are well defined for all $p_2 \in {\mathbb R}$:
\begin{eqnarray}
\label{eq:5}
\overline{\Pi}^M(p_2)\!= \!\min\left\lbrace q\in {\mathbb R} : H^M( p_2e_2+qe_1)=H^{-,1,M}( p_2e_2+qe_1)=E^{M,R}(p_2) \right\rbrace\\
\label{eq:6}
\widehat{\Pi}^M(p_2)\!= \!\max\left\lbrace q\in {\mathbb R} : H^M( p_2e_2+qe_1)=H^{-,1,M}( p_2e_2+qe_1)=E^{M,R}(p_2) \right\rbrace\\
\label{eq:23}
\overline{\Pi}^R(p_2)\!= \!\min\left\lbrace q\in {\mathbb R} : H^R( p_2e_2+qe_1)=H^{+,1,R}( p_2e_2+qe_1)=E^{M,R}(p_2) \right\rbrace\\
\label{eq:24}
\widehat{\Pi}^R(p_2)\!= \!\max\left\lbrace q\in {\mathbb R} : H^R( p_2e_2+qe_1)=H^{+,1,R}( p_2e_2+qe_1)=E^{M,R}(p_2) \right\rbrace
\end{eqnarray}
\begin{remark}
\label{rmk:equality_bar_Pi_and_hat_Pi}
From the convexity of the Hamiltonians $H^i$ and $H^{\pm,1,i}$, we deduce that, for $i=M,R$, if $E^i_0(p_2)<E^{M,R}(p_2)$, then
$\overline{\Pi}^i(p_2)=\widehat{\Pi}^i(p_2)$. In this case, we will use the notation
\begin{equation}
\label{eq:special_notation_for_Pi}
\Pi^i(p_2)=\overline{\Pi}^i(p_2)=\widehat{\Pi}^i(p_2).
\end{equation}
\end{remark}
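In the illustrative eikonal case $H^R(p)=|p|-c_R$ used above, if $E^{M,R}(p_2)>E_0^R(p_2)=|p_2|-c_R$, then this common value is explicit:
\begin{displaymath}
\Pi^R(p_2)=\sqrt{\left(E^{M,R}(p_2)+c_R\right)^2-p_2^2},
\end{displaymath}
i.e. the unique $q\ge 0$ such that $\sqrt{q^2+p_2^2}-c_R=E^{M,R}(p_2)$; indeed, for $q\ge 0$, $H^{+,1,R}(p_2e_2+qe_1)=\sqrt{q^2+p_2^2}-c_R$, and this quantity equals $E^{M,R}(p_2)>|p_2|-c_R$ at exactly one point $q>0$.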
Propositions \ref{cor:slopes_omega} and \ref{cor:control_slopes_W} below, which will be proved in Appendix~\ref{sec:proofs-prop-refc},
provide information on the growth of $y\mapsto \chi(p_2,y)$ as $|y_1|\to \infty$, where $\chi$ is obtained in Theorem~\ref{thm:stability_from_truncated_cell_pb_to_global_cell_pb} and is a solution of the cell problem (\ref{cellpE}):
\begin{proposition}
\label{cor:slopes_omega}
With $\Pi^i(p_2)\in {\mathbb R}$ defined in \eqref{eq:special_notation_for_Pi} for $i=M,R$,
\begin{enumerate}
\item If $E^{M,R}( p_2)>E_0^R(p_2)$, then, there exist $\rho^*=\rho^*(p_2) >0$ and $M^*= M^*(p_2) \in {\mathbb R}$
such that, for all $y\in [\rho^*,+\infty)\times {\mathbb R}$, $h_1\ge 0$ and $h_2\in {\mathbb R}$,
\begin{equation}
\label{slope32}
\chi( p_2, y+h_1e_1+h_2 e_2)-\chi( p_2, y)\geq \Pi^R(p_2) h_1-M^*.
\end{equation}
\item If $E^{M,R}( p_2)>E_0^M(p_2)$, then, there exist $\rho^*=\rho^*(p_2) >0$ and $M^*= M^*(p_2) \in {\mathbb R}$
such that, for all $y\in (-\infty,-\rho^*]\times {\mathbb R}$, $h_1\ge 0$ and $h_2\in {\mathbb R}$,
\begin{equation}
\label{slope3_bis2}
\chi( p_2, y-h_1e_1+h_2 e_2)-\chi( p_2, y)\geq - \Pi^M(p_2) h_1-M^*.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proposition}
\label{cor:control_slopes_W}
For $p_2 \in {\mathbb R}$, $y\mapsto W(p_2,y)$ defined in Lemma~\ref{lem:rescaling_omega} satisfies
\begin{eqnarray}
\label{cor:control_slopes_W1}
\overline{\Pi}^R(p_2)\le \partial_{y_1}W(p_2,y) \le \widehat{\Pi}^R(p_2) & &\hbox{ for a.a. } y\in (\eta,+\infty)\times {\mathbb R},\\
\label{cor:control_slopes_W2}
\overline {\Pi}^M(p_2) \le \partial_{y_1}W(p_2,y) \le \widehat{\Pi}^M(p_2) & &\hbox{ for a.a. }y\in (-\infty,\eta)\times{\mathbb R},
\end{eqnarray}
and for all $y$:
\begin{equation}
\label{eq:control_slopes_W_summary}
- \widehat{\Pi}^M(p_2) (y_1-\eta)^-
+ \overline{\Pi}^R(p_2) (y_1-\eta)^+ \le W(p_2,y) \le
- \overline{\Pi}^M(p_2) (y_1-\eta)^-
+ \widehat{\Pi}^R(p_2) (y_1-\eta)^+
.
\end{equation}
\end{proposition}
\subsection{Proof of Theorem \ref{th:convergence_result}}
\label{sec:proof-theor-refth:c}
\subsubsection{ A reduced set of test-functions}\label{sec:reduced-set-test}
From \cite{MR3621434} and \cite{imbert:hal-01073954},
we may use an equivalent definition for the viscosity solution of (\ref{def:HJeffective_short}).
We focus on the transmission condition at the interface $\Gamma_\eta^{M,R}$, because the same kind of arguments apply to the transmission at $\Gamma_\eta^{L,M}$. Theorem \ref{th:restriction_set_of_test_functions} below, which is reminiscent of \cite[Theorem~2.7]{MR3621434}, will tell us that the transmission condition on $\Gamma_\eta^{M,R}$ can be tested with a reduced set of test-functions.
\begin{definition}
\label{def:test_functions_set_restricted}
Recall that $\overline{\Pi}^i$ and $\widehat{\Pi}^i$, $i=M,R$, have been introduced in (\ref{eq:5})--(\ref{eq:24}).
Let $\Pi: \Gamma_\eta^{M,R}\times {\mathbb R} \to {\mathbb R}^2$, $(z, p_2) \mapsto \left( \Pi^M(z,p_2),\Pi^R(z,p_2) \right)$ be such that, for all $(z,p_2)$
\begin{equation}
\label{eq:39}
\begin{split}
\overline{\Pi}^M (p_2) &\le \Pi^M(z,p_2) \le \widehat{\Pi}^M(p_2), \\
\overline{\Pi}^R (p_2) &\le \Pi^R(z,p_2) \le \widehat{\Pi}^R(p_2) .
\end{split}
\end{equation}
For $\bar z\in \Gamma_\eta^{M,R}$, the reduced set of test-functions ${\mathcal R}^\Pi(\bar z)$ associated to the map $\Pi$ is
the set of the functions $\varphi\in {\mathcal C}^0 ({\mathbb R}^2)$ such that there exists a ${\mathcal C}^1$ function $\psi: \Gamma_\eta^{M,R} \to {\mathbb R}$ with
\begin{equation}
\label{eq:40}
\varphi(z+ t e_1)= \psi(z)+ \left( \Pi^R\left(\bar z, \partial_{z_2}\psi(\bar z) \right) 1_{t>0}+\Pi^M \left(\bar z, \partial_{z_2}\psi(\bar z)\right) 1_{t<0} \right) t, \quad \forall z\in \Gamma_\eta^{M,R},\; t\in {\mathbb R}.
\end{equation}
\end{definition}
The following theorem is reminiscent of \cite[Theorem~2.7]{MR3621434}.
\begin{theorem} \label{th:restriction_set_of_test_functions}
Let $u:{\mathbb R}^2\to {\mathbb R}$ be a subsolution (resp. supersolution) of (\ref{def:HJeffective2}) and (\ref{def:HJeffective3}).
Consider a map $\Pi:\Gamma_\eta^{M,R}\times {\mathbb R}\to {\mathbb R}^2$, $(z, p_2) \mapsto \left( \Pi^M(z,p_2),\Pi^R(z,p_2) \right)$ such that (\ref{eq:39}) holds for all $(z,p_2)\in \Gamma_\eta^{M,R}\times {\mathbb R}$.
\\
We assume furthermore that $u$ is Lipschitz continuous in $\Gamma_\eta^{M,R}+ B(0,r)$ for some $r>0$.
The function $u$ is a subsolution (resp. supersolution) of (\ref{def:HJeffective5})
if and only if for any $z\in \Gamma_\eta^{M,R}$ and for all $\varphi \in {\mathcal R}^\Pi(z)$ such that $u-\varphi$ has a local maximum (resp. local minimum) at $z$,
\begin{equation}
\label{eq:th_restriction_set_of_test_functions}
\lambda u(z)+\max\left(E^{M,R}(\partial_{z_2}\varphi(z)), H^{M,R}( D \varphi^M(z),D \varphi^R(z)) \right) \le 0, \quad (\hbox{resp. }\ge 0).
\end{equation}
\end{theorem}
\begin{proof}
The proof follows the lines of that of \cite[Theorem~2.7]{MR3621434} and is also given in \cite[Appendix C]{MR3565416}.
\end{proof}
\begin{remark}
\label{sec:reduced-set-test-1}
In the statement of Theorem \ref{th:restriction_set_of_test_functions}, we have chosen to restrict ourselves to functions that are Lipschitz continuous in $\Gamma^ {M,R}_\eta+ B(0,r)$ (this property makes the proof simpler); indeed, since the functions $v_{\eta, \epsilon}$ are Lipschitz continuous with a Lipschitz constant $\Lambda$ independent of $\epsilon$ (and also of $\eta$), the relaxed semi-limits of $v_{\eta, \epsilon}$ as $\epsilon\to 0$ are also Lipschitz continuous with the same Lipschitz constant $\Lambda$, see (\ref{eq:84}) below and \cite[Remark 3.1]{MR3565416}. \\
In fact, a more general version of Theorem \ref{th:restriction_set_of_test_functions} can be stated for any lower semi-continuous supersolution, and for the upper semi-continuous subsolutions $u$ such that for all $z\in \Gamma^ {M,R}_\eta$, $ u(z)= \limsup_{z'\to z, z'\in \Omega^i_\eta} u(z')$, $\forall i=M,R$,
as in \cite{MR3621434,imbert:hal-01073954}.
\end{remark}
\subsubsection{Proof of Theorem~\ref{th:convergence_result}}
\label{sec:proof-theor-refth:c-1}
Let us consider the relaxed semi-limits
\begin{equation}
\label{eq:84}
\overline{v_\eta}(z)={\limsup_\epsilon}^{*} {v}_{\eta,\epsilon}(z)=\limsup_{z'\to z, \epsilon\to 0}{v}_{\eta,\epsilon}(z')
\quad \mbox{ and } \quad \underline{v_\eta}(z)=\underset{\epsilon}{{\liminf}_{*}}{v}_{\eta,\epsilon}(z)
=\liminf_{z'\to z, \epsilon\to 0}{v}_{\eta,\epsilon}(z').
\end{equation}
Note that $ \overline{v_\eta}$ and $ \underline{v_\eta}$ are well defined,
since $\left( v_{\eta,\epsilon}\right)_\epsilon$ is uniformly bounded. We will prove that $\overline{v_\eta}$ and $\underline{v_\eta}$ are respectively a subsolution and a supersolution of \eqref{def:HJeffective_short}.
It is classical to check that the functions $\overline{v_\eta}(z)$ and $\underline{v_\eta}(z)$
are respectively a bounded subsolution and a bounded supersolution in $\Omega^i_\eta$, $i=L,M,R$, of
\begin{equation}
\label{eq:85}
\lambda u(z)+H^i(Du(z))= 0.
\end{equation}
From comparison theorems proved in \cite{barles2013bellman,imbert:hal-01073954,oudet2014}, this will imply that $\overline{v_\eta}=\underline{v_\eta}=v_\eta=\lim_{\epsilon\to 0} v_{\eta,\epsilon}$.
We just have to check the transmission conditions (\ref{def:HJeffective4}) and (\ref{def:HJeffective5}), and it is enough to focus on the latter, since the former is dealt with in the very same manner.
\\
We focus on $\overline{v_\eta}$ since the proof for $\underline{v_\eta}$ is similar.
\\
We are going to use Theorem \ref{th:restriction_set_of_test_functions} with the special choice for
the map $\Pi$, which here does not depend on $z\in \Gamma_\eta^{M,R}$:
$\Pi(p_2)= \left( \widehat \Pi^M (p_2) ,\overline \Pi^R (p_2) \right)$.
Note that Theorem \ref{th:restriction_set_of_test_functions} can indeed be applied,
because, $\overline{v_\eta}$ is Lipschitz continuous, see Remark~\ref{sec:reduced-set-test-1}.
Take $\bar z\in \Gamma_\eta^{M,R}$ and a test-function $\varphi\in {\mathcal R}^\Pi(\bar z)$, i.e. of the form
\begin{equation}
\label{eq:86}
\varphi(z+ t e_1)= \psi(z)+ \left( \overline \Pi^R\left( \partial_{z_2}\psi(\bar z) \right) 1_{t>0}+
\widehat \Pi^M \left( \partial_{z_2}\psi(\bar z)\right) 1_{t<0} \right) t, \quad \forall z\in \Gamma_\eta^{M,R} , t\in {\mathbb R},
\end{equation}
for a ${\mathcal C}^1$ function $\psi: \Gamma_\eta^{M,R} \to {\mathbb R}$, such that $\overline{v_\eta}-\varphi$ has a strict local maximum at $\bar z$ and that $\overline{v_\eta}(\bar z)=\varphi(\bar z)$. \\
Let us argue by contradiction with (\ref{eq:th_restriction_set_of_test_functions}) and assume that
\begin{equation}
\label{eq:proof_convergence_sub_contradiction}
\lambda \varphi(\bar z)+ \max\left(E^{M, R}(\partial_{z_2}\varphi(\bar z)), H^{M,R}(D \varphi^M(\bar z),D \varphi^R(\bar z)) \right)=\theta >0.
\end{equation}
From (\ref{eq:86}), we see that $
H^{M,R}(D \varphi^M(\bar z),D \varphi^R(\bar z))\le E^{M, R} (\partial_{z_2}\varphi(\bar z))$ and
(\ref{eq:proof_convergence_sub_contradiction}) is equivalent to
\begin{equation}
\label{eq:proof_convergence_sub_contradiction_bis}
\lambda \psi(\bar z)+E^{M, R}( \partial_{z_2}\psi(\bar z) )=\theta>0.
\end{equation}
Let $\chi( \partial_{z_2}\psi(\bar z),\cdot)$ be a solution of (\ref{cellpE})
such that $\chi(\partial_{z_2}\psi(\bar z),0)=0$ (see Theorem~\ref{thm:stability_from_truncated_cell_pb_to_global_cell_pb}),
and let $W(\partial_{z_2}\psi(\bar z),\cdot)$ denote the locally uniform limit of $z\mapsto \epsilon\chi(\partial_{z_2}\psi(\bar z),\frac {z-\eta e_1} {\epsilon})$ as $\epsilon\to 0$, along a subsequence if necessary (see Lemma~\ref{lem:rescaling_omega}).
\paragraph{Step 1}
Hereafter, we will consider a small positive radius $r$ such that $r< \eta/4$. Then for $\epsilon$ small enough, $\Omega_{\eta, \epsilon}^i \cap B(\bar z, r)= \left( \eta e_1 + \epsilon \widetilde \Omega_{\eta}^i \right)\cap B(\bar z, r)$ for $i=L,R$. We claim that for $\epsilon$ and $r$ small enough, the function $\varphi^\epsilon$:
\[\varphi^\epsilon(z)=\psi(\eta e_1+ z_2 e_2)+\epsilon\chi( \partial_{z_2}\psi(\bar z),\frac {z-\eta e_1} {\epsilon})\]
is a viscosity supersolution of
\begin{equation}
\label{cellpE_modifed}
\left\{
\begin{array}[c]{rcll}
\lambda \varphi^\epsilon(z)+ H^i(D\varphi^\epsilon(z))& \ge& \frac{\theta}{2}\quad &\hbox{ if } z\in\Omega^i_{\eta,\epsilon}\cap B(\bar z,r),\; i=L,R, \\
\lambda \varphi^\epsilon(z)+H_{\Gamma_{\eta, \epsilon}} (z,D\left( \varphi^\epsilon\right)^L(z),D\left( \varphi^\epsilon\right)^R(z)) &\ge& \frac{\theta}{2}&\hbox{ if } z\in\Gamma_{\eta,\epsilon}\cap B(\bar z,r),
\end{array}
\right.
\end{equation}
where $H^i$ and $H_{\Gamma_{\eta, \epsilon}} $ are defined in (\ref{eq:7})-(\ref{eq:8}).
\\
Indeed, if $\xi$ is a test-function in ${\mathcal R}_{\eta,\epsilon}$ such that $\varphi^\epsilon-\xi$ has a local minimum at
$z^\star\in B(\bar z,r)$, then, from the definition of $\varphi^\epsilon$,
$y\mapsto \chi( \partial_{z_2}\psi(\bar z),y-\frac \eta \epsilon e_1)-\frac{1}{\epsilon}\left(\xi(\epsilon y)-\psi(\eta e_1+\epsilon y_2 e_2) \right)$
has a local minimum at $\frac{z^\star}{\epsilon}$.
\\
If $\frac{z^\star - \eta e_1}{\epsilon}\in \widetilde \Omega^i_\eta$, for $i=L$ or $R$, then
$H^i(D\xi(z^\star) -\partial_{z_2} \psi(\eta e_1+ z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2)
\ge E^{M,R}(\partial_{z_2}\psi(\bar z))$.
From the regularity properties of $H^i$,
\begin{displaymath}
H^i(D\xi(z^\star) -\partial_{z_2} \psi(\eta e_1+ z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2)
= H^i(D\xi(z^\star)) + o_{r\to 0}(1),
\end{displaymath}
thus
\begin{equation*}
\label{eq:proof_convergence_case1_nb1}
\lambda \varphi^\epsilon(z^\star)+ H^i(D\xi(z^\star))
\ge E^{M,R}(\partial_{z_2}\psi(\bar z)) +\lambda
\left(\psi(\eta e_1+ z^\star_2 e_2)+\epsilon\chi( \partial_{z_2}\psi(\bar z),\frac {z^\star-\eta e_1} {\epsilon})\right)
+o_{r\to 0}(1).
\end{equation*}
From \eqref{eq:proof_convergence_sub_contradiction_bis}, this implies that
\begin{equation*}
\lambda \varphi^\epsilon(z^\star)+ H^i(D\xi(z^\star))\ge \theta +\lambda \epsilon\chi( \partial_{z_2}\psi(\bar z),\frac {z^\star-\eta e_1} {\epsilon})+o_{r\to 0}(1).
\end{equation*}
Recall that the function $y\mapsto \epsilon \chi( \partial_{z_2}\psi(\bar z),\frac {y-\eta e_1} {\epsilon})$
converges locally uniformly to $y\mapsto W(\partial_{z_2}\psi(\bar z) ,y)$,
which is a Lipschitz continuous function, independent of $y_2$ and such that $W(\partial_{z_2}\psi(\bar z),\eta e_1)=0$.
Therefore, for $r$ and $\epsilon$ small enough, $\lambda \varphi^\epsilon(z^\star)+ H^i(D\xi(z^\star))
\ge \frac{\theta}{2}$.
\\
If $\frac{ z^\star -\eta e_1}{\epsilon}\in \widetilde \Gamma_\eta$, then, we have
\[H^{+,L}_{\widetilde \Gamma_\eta} (D\xi^L(z^\star) -\partial_{z_2} \psi(\eta e_1+ z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2, \frac{ z^\star -\eta e_1}{\epsilon} )
\ge E^{M,R}(\partial_{z_2}\psi(\bar z))\]
or
\[H^{-,R}_{\widetilde \Gamma_\eta} (D\xi^R(z^\star) -\partial_{z_2} \psi(\eta e_1+ z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2, \frac{ z^\star -\eta e_1}{\epsilon})\ge E^{M,R}(\partial_{z_2}\psi(\bar z)).\]
Since the Hamiltonians $H^{\pm,i}_{\widetilde \Gamma_\eta}$ enjoy the same regularity properties as $H^{ i}$,
it is possible to use the same arguments as in the case when $\frac{z^\star-\eta e_1}{\epsilon}\in \widetilde \Omega^i_\eta$.
For $r$ and $\epsilon$ small enough,
\[\lambda \varphi^\epsilon(z^\star)+H_{\Gamma_{\eta, \epsilon}} (z^\star,D\left( \varphi^\epsilon\right)^L(z^\star),D\left( \varphi^\epsilon\right)^R(z^\star)) \ge \frac{\theta}{2}.
\]
The claim that $\varphi^\epsilon$ is a supersolution of \eqref{cellpE_modifed} is proved.
\paragraph{Step 2} Let us prove that there exist some positive constants $K_r>0$ and $\epsilon_0>0$ such that
\begin{equation}
\label{eq:proof_convergence_case1_nb3}
v_{\eta,\epsilon}(z)+K_r\le \varphi^\epsilon(z), \quad \forall z\in \partial B(\bar z, r), \;\forall \epsilon\in (0, \epsilon_0).
\end{equation}
Indeed, since $\overline{v_\eta}-\varphi$ has a strict local maximum at $\bar z$
and since $\overline{v_\eta}(\bar z)=\varphi(\bar z)$,
there exists a positive constant $\tilde K_r>0$ such that
$\overline{v_\eta}(z)+\tilde K_r\le \varphi(z)$ for any $z\in \partial B(\bar z, r)$.
Since $\displaystyle \overline{v_\eta}={\limsup_\epsilon}^{*} v_{\eta,\epsilon}$, there exists $\tilde \epsilon_0>0$ such that
\begin{equation}
\label{eq:87}
v_{\eta,\epsilon}(z)+\frac{\tilde K_r}{2}\le \varphi(z)\quad \hbox{
for any $0<\epsilon<\tilde \epsilon_0$ and $z\in \partial B(\bar z, r)$}.
\end{equation}
On the other hand, from \eqref{eq:control_slopes_W_summary} in Proposition \ref{cor:control_slopes_W},
\begin{equation}
\label{eq:88}
\begin{split}
&\psi(z_2 e_2+\eta e_1)+W( \partial_{z_2}\psi(\bar z),z)\\ \ge&
\psi(z_2 e_2+\eta e_1)+ \left( \overline \Pi^R ( \partial_{z_2}\psi(\bar z) ) 1_{z_1>\eta}
+\widehat \Pi^M( \partial_{z_2}\psi(\bar z)) 1_{z_1<\eta} \right) (z_ 1-\eta) = \varphi(z).
\end{split}
\end{equation}
Moreover, $z\mapsto \varphi^\epsilon(z)$ converges locally uniformly to
$z\mapsto \psi(\eta e_1+z_2e_2)+W( \partial_{z_2}\psi(\bar z),z)$ as $\epsilon$ tends to $0$.
By collecting the latter observation, (\ref{eq:88}) and (\ref{eq:87}), we get \eqref{eq:proof_convergence_case1_nb3}
for some constants $K_r>0$ and $\epsilon_0>0$.
\paragraph{Step 3}
From the previous steps, we find by comparison that for $r$ and $\epsilon$ small enough,
\begin{displaymath}
v_{\eta,\epsilon}(z)+K_r\le \varphi^\epsilon(z) \quad \quad \forall z \in B(\bar z, r).
\end{displaymath}
Setting $z=\bar z$ and taking the $\limsup$ as $\epsilon\to 0$, we obtain
\begin{displaymath}
\overline{v_\eta}(\bar z)+K_r\le \psi(\bar z)=\varphi(\bar z)=\overline{v_\eta}(\bar z),
\end{displaymath}
which cannot happen. The proof is completed. \ifmmode\else\unskip\quad\fi\squareforqed
\begin{remark}
For the proof of the supersolution inequality, the test-function $\varphi$ should be chosen of the form
\begin{displaymath}
\varphi(z+ t e_1)= \psi(z)+ \left( \widehat \Pi^R\left( \partial_{z_2}\psi(\bar z) \right) 1_{t>0}+
\overline \Pi^M \left( \partial_{z_2}\psi(\bar z)\right) 1_{t<0} \right) t, \quad \forall z\in \Gamma_\eta^{M,R} , t\in {\mathbb R},
\end{displaymath}
where $\psi\in {\mathcal C}^1({\mathbb R})$.
\end{remark}
\section{The second passage to the limit: $\eta$ tends to $0$}
\label{sec:second-passage-limit}
We now aim at passing to the limit in (\ref{def:HJeffective_short}) as $\eta$ tends to $0$.
Recall that $\Omega^L$, $\Omega^R$ and $\Gamma$ are defined in (\ref{eq:104}).
\subsection{Main result}\label{sec:main-result-1}
\begin{theorem}\label{sec:main-result-2}
As $\eta\to 0$, $v_{\eta}$ converges locally uniformly to $v$, the unique bounded viscosity solution of
(\ref{eq:14})--(\ref{eq:17}) (for short, (\ref{eq:16})).
\\
In the transmission condition (\ref{eq:17}), namely
\begin{displaymath}
\lambda v(z)+\max\left(E(\partial_{z_2}v(z)), \;H^{+,1,L}(D v^L(z)),\;H^{-,1,R} (D v^R(z)) \right) = 0 \quad \hbox{ if } z\in \Gamma,
\end{displaymath}
the effective flux limiter $E$ is given for $p_2\in {\mathbb R}$ by
\begin{equation}\label{eq:18}
E(p_2)= \max(E^{L,M}(p_2), E^{M,R}(p_2) ),
\end{equation}
where $E^{L,M}$ and $ E^{M,R}$ are defined in \S~\ref{sec:passage-limit-as}, see (\ref{eq:def_E}).
\end{theorem}
\begin{remark}
\label{sec:main-result-6}
It is striking that, in (\ref{eq:18}), the effective flux limiter $E(p_2)$ can be deduced explicitly from the limiters $E^{L,M}(p_2)$ and $E^{M,R}(p_2)$
obtained in \S~\ref{sec:effect-probl-obta}.
\end{remark}
\bigskip
Let us consider the relaxed semi-limits
\begin{equation}
\label{def:v_tilde_overline_underline}
\overline{{v}}(z)={\limsup_\eta}^{*} {v}_\eta(z)=\limsup_{z'\to z, \eta\to 0}{v}_\eta(z') \quad \mbox{ and } \quad \underline{{v}}(z)=\underset{\eta}{{\liminf}_{*}}{v}_\eta(z)=\liminf_{z'\to z, \eta\to 0}{v}_\eta(z').
\end{equation}
Note that $ \overline{{v}}$ and $ \underline{{v}}$ are well defined, since $\left( v_\eta\right)_\eta$ is uniformly bounded by $M_\ell /\lambda$, see (\ref{eq:4}).
It is classical to check that the functions $\overline{v}(z)$ and $\underline{{v}}(z)$ are respectively a bounded subsolution and a bounded supersolution in $\Omega^i$ of
\begin{equation}
\label{eq:5bisnew}
\lambda u(z)+H^i(Du(z))= 0.
\end{equation}
To find the effective transmission condition on $\Gamma$, we shall proceed as in \cite{MR3299352,MR3441209,MR3565416} and consider cell problems in larger and larger bounded domains.
\subsection{Proof of Theorem~\ref{sec:main-result-2}}
\subsubsection{State-constrained problem in truncated domains}\label{sec:state-constr-probl-2}
Let us fix $p_2\in {\mathbb R}$. For $\rho>1$, we consider the one dimensional {\sl truncated cell problem}:
\begin{equation}
\label{eq:20}
\left\{
\begin{array}[c]{l}
\displaystyle H^L\left(\frac{du}{dy} (y)e_1+p_2e_2\right)\le \mu_\rho( p_2),\quad\quad\quad\hfill\hbox{ if } y\in (-\rho,-1), \\
\displaystyle H^L\left(\frac{du}{dy} (y)e_1+p_2e_2\right)\ge \mu_\rho( p_2),\quad\quad\quad\hfill\hbox{ if } y\in [-\rho,-1), \\
\displaystyle H^M\left(\frac{du}{dy} (y)e_1+p_2e_2\right)=\mu_\rho( p_2),\quad\quad\quad\hfill\hbox{ if } y\in (-1,1), \\
\displaystyle H^R\left(\frac{du}{dy} (y)e_1+p_2e_2\right)\le \mu_\rho( p_2),\quad\quad\quad\hfill\hbox{ if } y\in (1,\rho), \\
\displaystyle H^R\left(\frac{du}{dy} (y)e_1+p_2e_2\right)\ge \mu_\rho( p_2),\quad\quad\quad\hfill\hbox{ if } y\in (1,\rho], \\
\displaystyle \max\left(E^{L,M}(p_2), H^{L,M}\left( \frac{d u^L}{dy}(-1^-)e_1+p_2e_2 , \frac{d u^M}{dy}(-1^+)e_1+p_2e_2 \right) \right) =\mu_\rho( p_2),\\
\displaystyle \max\left(E^{M,R}(p_2), H^{M,R}\left( \frac{d u^M}{dy} (1^-)e_1+p_2e_2 , \frac{du^R}{dy} (1^+)e_1+p_2e_2 \right) \right) =\mu_\rho( p_2).
\end{array}
\right.
\end{equation}
Exactly as in \cite{MR3565416}, we can prove the following lemma:
\begin{lemma}
\label{sec:state-constr-probl-3}
There is a unique $\mu_\rho( p_2)\in {\mathbb R}$ such that (\ref{eq:20}) admits a bounded solution.
For this choice of $\mu_\rho( p_2)$, there exists a solution $y\mapsto \psi_\rho(p_2,y)$ which is Lipschitz continuous with a Lipschitz constant
$L$ depending on $p_2$ only (independent of $\rho$).
\end{lemma}
It is also possible to check that there exists a constant $K$ such that, for all $1<\rho_1\leq \rho_2$,
\[\mu_{\rho_1}( p_2)\leq \mu_{\rho_2}( p_2)\leq K.\]
From this property, it is possible to pass to the limit as $\rho\to +\infty$: the effective tangential Hamiltonian $E( p_2)$ is defined by
\begin{equation}
\label{eq:21}
E( p_2)=\lim_{\rho\rightarrow \infty} \mu_{\rho}( p_2).
\end{equation}
\subsubsection{The global cell problem}\label{sec:global-cell-problem}
Fixing $p_2\in{\mathbb R}$, the {\sl global cell-problem} reads
\begin{equation}
\label{eq:22}
\left\{
\begin{array}[c]{l}
\displaystyle H^L\left(\frac{du}{dy} (y) e_1+p_2e_2\right)=E( p_2),\quad\quad\quad\hfill\hbox{ if } y<-1, \\
\displaystyle H^M\left(\frac{du}{dy} (y)e_1+p_2e_2\right)=E( p_2),\quad\quad\quad\hfill\hbox{ if } y\in (-1,1), \\
\displaystyle H^R\left(\frac{du}{dy} (y)e_1+p_2e_2\right)=E( p_2),\quad\quad\quad\hfill\hbox{ if } y> 1, \\
\displaystyle \max\left(E^{L,M}(p_2), H^{L,M}\left( \frac{d u}{dy}(-1^-)e_1+p_2e_2 , \frac{d u}{dy}(-1^+)e_1+p_2e_2 \right) \right) =E( p_2),\\
\displaystyle \max\left(E^{M,R}(p_2), H^{M,R}\left( \frac{d u}{dy} (1^-) e_1+p_2e_2 , \frac{du}{dy} (1^+)e_1 +p_2e_2 \right) \right) =E( p_2).
\end{array}
\right.
\end{equation}
Exactly as in \cite{MR3565416}, we obtain the existence of a solution of the global cell problem by passing to the limit in (\ref{eq:20}) as $\rho\to +\infty$:
\begin{proposition}[Existence of a global corrector]
\label{sec:state-constr-probl-5}
For $p_2\in {\mathbb R}$, there exists $\psi( p_2,\cdot)$ a Lipschitz continuous viscosity solution of (\ref{eq:22})
such that $\psi( p_2,0)=0$. For $\eta>0$, setting $W_\eta( p_2,y)=\eta\psi( p_2,\frac y {\eta})$,
there exists a sequence $\eta_n$ such that $W_{\eta_n} ( p_2,\cdot)$ converges locally uniformly to a
Lipschitz function $y\mapsto W( p_2, y)$, with the same Lipschitz constant as $\psi$. The function $W$ is a viscosity solution of
\begin{equation}
\label{eq:19}
H^i\left( \frac {du}{dy_1} (y_1) e_1 +p_2e_2\right)=E( p_2)\quad\hbox{ if } y_1e_1\in \Omega^i,
\end{equation}
and satisfies $W( p_2,0)=0$. Moreover,
\begin{equation*}
E( p_2)\ge \max\left\lbrace E_0^L(p_2), E_0^R(p_2)\right\rbrace.
\end{equation*}
\end{proposition}
\subsubsection{Proof of (\ref{eq:18})}
In view of Proposition \ref{sec:state-constr-probl-5}, the following numbers are well defined for all $p_2 \in {\mathbb R}$:
\begin{eqnarray}
\label{eq:26}
\overline{\pi}^L(p_2)\!= \!\min\left\lbrace q\in {\mathbb R} : H^L( p_2e_2+qe_1)=H^{-,1,L}( p_2e_2+qe_1)=E(p_2) \right\rbrace,\\
\label{eq:27}
\widehat{\pi}^L(p_2)\!= \!\max\left\lbrace q\in {\mathbb R} : H^L(p_2e_2+qe_1)=H^{-,1,L}( p_2e_2+qe_1)=E(p_2) \right\rbrace,\\
\label{eq:37}
\overline{\pi}^R(p_2)\!= \!\min\left\lbrace q\in {\mathbb R} : H^R( p_2e_2+qe_1)=H^{+,1,R}( p_2e_2+qe_1)=E(p_2) \right\rbrace,\\
\label{eq:38}
\widehat{\pi}^R(p_2)\!= \!\max\left\lbrace q\in {\mathbb R} : H^R( p_2e_2+qe_1)=H^{+,1,R}( p_2e_2+qe_1)=E(p_2) \right\rbrace.
\end{eqnarray}
From the convexity of the Hamiltonians $H^i$, we deduce that for $i=L,R$, if $E^i_0(p_2)<E (p_2)$, then
$\overline{\pi}^i(p_2)=\widehat{\pi}^i(p_2)$. In this case, we will use the notation
\begin{equation}
\label{eq:41}
\pi^i(p_2)=\overline{\pi}^i(p_2)=\widehat{\pi}^i(p_2).
\end{equation}
\begin{lemma}
\label{sec:global-cell-problem-2}
For any $p_2 \in {\mathbb R}$:
\begin{itemize}
\item if $E(p_2)>E_0^R(p_2)$,
then $\psi( p_2, \cdot)$ is affine in the interval $(1,+\infty)$ and $\partial_y \psi ( p_2, y)= \pi^R(p_2)$ for $y>1$;
\item if $E( p_2)>E_0^L(p_2)$,
then $\psi( p_2, \cdot)$ is affine in the interval $(-\infty,-1)$ and $\partial_y \psi ( p_2, y)= \pi^L(p_2)$ for $y<-1$.
\end{itemize}
\end{lemma}
\begin{proof}
If $E(p_2)>E_0^R(p_2)$, we prove, exactly as in Proposition \ref{cor:slopes_omega}, that there exist $\rho^*=\rho^*(p_2) >0$ and $M^*= M^*(p_2) \in {\mathbb R}$
such that, for all $y\in [\rho^*,+\infty)$, $h_1\ge 0$,
\begin{equation}
\label{eq:42}
\psi(p_2, y+h_1)-\psi(p_2, y)\geq \pi^R(p_2) h_1-M^*.
\end{equation}
From (\ref{eq:42}), classical arguments on viscosity solutions of one-dimensional equations with convex Hamiltonians yield the desired result for $y>1$.
The same kind of arguments are used for $y<-1$.
\end{proof}
\begin{proposition}\label{sec:proof-refeq:18}
The constant $E(p_2)$ defined in (\ref{eq:21}) satisfies (\ref{eq:18}).
\end{proposition}
\begin{proof}
From the fourth and fifth equations in (\ref{eq:22}), we see that $E(p_2)\ge \max( E^{L,M}(p_2), E^{M,R}(p_2))$. Moreover, we know that
$ E^{M,R}(p_2)\ge E^{M,R}_0(p_2)= \max (E^{M}_0(p_2), E^{R}_0(p_2))$ from Proposition~\ref{cor:E_bigger_than_E_0}. Similarly $ E^{L,M}(p_2)\ge E^{L,M}_0(p_2)= \max (E^{L}_0(p_2), E^{M}_0(p_2))$.
\\
We make out two main cases:
\begin{enumerate}
\item If $E(p_2)= E_0^M(p_2)$, then using the observations above,
we get that $ E(p_2)=E^{L,M}(p_2)= E^{M,R}(p_2)$, which implies (\ref{eq:18}).
\item If $E(p_2)> E_0^M(p_2)$, then we can define two real numbers $\pi^{M, -}< \pi^{M,+}$ such that
\begin{displaymath}
\begin{split}
H^M( \pi^{M, -} e_1+ p_2e_2 )= H^{-,1,M}(\pi^{M, -} e_1+ p_2e_2 )= E(p_2),\\
H^M( \pi^{M, +} e_1+ p_2e_2 )= H^{+,1,M}( \pi^{M, +} e_1+ p_2e_2 )= E(p_2),
\end{split}
\end{displaymath}
and one and only one of the following three assertions is true:
\begin{enumerate}
\item the function $\psi(p_2,\cdot)$ defined in Proposition~\ref{sec:state-constr-probl-5} is affine in $(-1,1)$ with slope $\pi^{M, -}$: in this case, $ H^{+,1,M}( \partial_y \psi ( p_2, 1^-) e_1+ p_2e_2 )< E(p_2)$, and,
using the fifth equation in (\ref{eq:22}), we deduce that \[\max \left(E^{M,R}(p_2), H^{-,1,R}\left( \frac{d \psi }{dy} (1^+) e_1+p_2e_2 \right)\right)=E( p_2);\]
there are two subcases:
\begin{enumerate}
\item if $E(p_2)=E_0^R(p_2)$, then using the fact that $E^{M,R}(p_2)\ge E_0^R(p_2)$, we get that $E^{M,R}(p_2)= E( p_2)$
\item if $E(p_2)>E_0^R(p_2)$, then as a consequence of Lemma~\ref{sec:global-cell-problem-2}, we see that $H^{-,1,R} (\partial_y \psi ( p_2, 1^+) e_1 +p_2e_2)< E(p_2)$,
which again implies that $E^{M,R}(p_2)= E( p_2)$.
\end{enumerate}
Therefore $E^{M,R}(p_2)= E( p_2)$, and since $E^{L,M}(p_2)\le E( p_2)$ from the fourth equation in (\ref{eq:22}), we obtain (\ref{eq:18}).
\item $\psi(p_2,\cdot)$ is affine in $(-1,1)$ with slope $\pi^{M, +}$. The same arguments as in the previous case yield that $E^{L,M}(p_2)= E(p_2)$ then (\ref{eq:18}).
\item $\psi(p_2,\cdot)$ is piecewise affine in $(-1,1)$, with the slope $\pi^{M, +}$ in $(-1, c)$ and the slope $\pi^{M, -}$ in $( c,1)$, for some $c$ with $|c|<1$.
Hence, $H^{-,1,M} ( \partial_y \psi ( p_2, -1^+) e_1 +p_2e_2)< E(p_2)$ and $ H^{+,1,M}( \partial_y \psi ( p_2, 1^-) e_1+ p_2e_2 )< E(p_2)$: therefore,
\begin{eqnarray}
\label{eq:35}
\displaystyle \max\left(E^{L,M}(p_2), H^{+,1,L}( \partial_y \psi ( p_2, -1^-) e_1+ p_2e_2 )\right)=E(p_2),\\
\label{eq:36}
\displaystyle \max\left(E^{M,R}(p_2), H^{-,1,R} ( \partial_y \psi (p_2, 1^+) e_1 +p_2e_2) \right) =E(p_2).
\end{eqnarray}
\begin{enumerate}
\item If $E( p_2)= E_0^R(p_2)$, then the very first observation in the proof implies that $E^{M,R}(p_2)=E(p_2)$.
\item If $E( p_2)> E_0^R(p_2)$, then from Lemma~\ref{sec:global-cell-problem-2}, $ H^{-,1,R} (\partial_y \psi ( p_2, 1^+) e_1+p_2e_2)< E(p_2)$, and (\ref{eq:36}) yields
that $E^{M,R}(p_2)=E(p_2)$.
\end{enumerate}
Similarly, using (\ref{eq:35}), we find that $E^{L,M}(p_2)=E(p_2)$, so $ E^{L,M}(p_2)=E^{M,R}(p_2)=E(p_2)$, which yields (\ref{eq:18}).
\end{enumerate}
\end{enumerate}
\end{proof}
\begin{remark}\label{sec:proof-refeq:18-1}
We have actually proved that $E(p_2)$ defined by (\ref{eq:18}) is the
unique constant such that the global cell problem (\ref{eq:22}) has a Lipschitz continuous solution.
\end{remark}
\subsubsection{End of the proof of Theorem~\ref{sec:main-result-2}}
\label{sec:end-proof-theorem}
The end of the proof of Theorem~\ref{sec:main-result-2} is completely similar to the proof of the main result in \cite{MR3565416}. The general method was first proposed in \cite{MR3441209}; it
uses Evans' method of perturbed test-functions with the particular test-functions proposed in \cite{imbert:hal-01073954}, which are associated with the correctors found in
Proposition \ref{sec:state-constr-probl-5}. For brevity, we do not repeat the proof here.
\section{Simultaneous passage to the limit as $\eta=\epsilon\to 0$}
\label{sec:simult-pass-limit}
\subsection{Main result}
\label{sec:main-result-3}
We now turn our attention to the case when $\eta=\epsilon$. We are interested in the asymptotic behavior of the sequence
$v_{\epsilon, \epsilon}$ as $\epsilon\to 0$. The main result tells us that the limit is the same function $v$ as the one defined
in Theorem \ref{sec:main-result-2}, i.e. obtained by two successive passages to the limit in $v_{\eta, \epsilon}$, first letting $\epsilon\to 0$, then letting $\eta\to 0$.
\begin{theorem}\label{sec:main-result-4}
As $\epsilon\to 0$, $v_{\epsilon,\epsilon}$ converges locally uniformly to $v$, the unique bounded viscosity solution of
(\ref{eq:14})-(\ref{eq:17}), with $H^{L,R}$ given by (\ref{eq:25}) and the effective flux-limiter
$E(p_2)$ given by (\ref{eq:18}).
\end{theorem}
\begin{remark}
\label{sec:main-result-5}
Note that the same convergence result holds for the sequence $v_{\epsilon, \epsilon^q}$ where $q$ is any positive number.
\end{remark}
\subsection{Correctors}\label{sec:correctors}
Let us consider the problem in the original geometry dilated by the factor $1/\epsilon$.
Defining $\Omega_{1,\epsilon}^L, \Omega_{1,\epsilon}^R, \Gamma_{1,\epsilon}$ and $ H_{ \Gamma_{1,\epsilon}}$ as in \S~\ref{sec:geometry} and \S~\ref{sec:optim-contr-probl}, and recalling that $Y^\rho=\{y\in {\mathbb R}^2: |y_1|<\rho\}$, we consider the truncated cell problem
\begin{equation}
\label{eq:43}
\left\{
\begin{array}[c]{lll}
H^L(Du(y)+p_2e_2)&\le E_{\epsilon,\rho}( p_2)&\hbox{ if } y\in \Omega_{1,\epsilon}^L \cap Y^\rho , \\
H^L(Du(y)+p_2e_2)&\ge E_{\epsilon,\rho}( p_2)&\hbox{ if } y\in \Omega_{1,\epsilon}^L \cap \overline{Y^\rho} , \\
H^R(Du(y)+p_2e_2)&\le E_{\epsilon,\rho}( p_2)&\hbox{ if } y\in \Omega_{1,\epsilon}^R \cap Y^\rho , \\
H^R(Du(y)+p_2e_2)&\ge E_{\epsilon,\rho}( p_2)&\hbox{ if } y\in \Omega_{1,\epsilon}^R \cap \overline{Y^\rho} , \\
H_{ \Gamma_{1,\epsilon}} (y,Du^L(y)+p_2e_2 , Du^R(y)+p_2e_2)&=
E_{\epsilon,\rho}( p_2)&\hbox{ if } y\in \Gamma_{1,\epsilon} , \\
u \hbox{ is $\epsilon$ periodic w.r.t. } y_2,
\end{array}
\right.
\end{equation}
where $\rho$ is large enough such that $\Gamma_{1,\epsilon}\subset\subset Y^\rho$ and the inequalities are understood in the viscosity sense. The following lemma can be proved with the same ingredients as in \S~\ref{sec:state-constr-probl-1}:
\begin{lemma}
\label{sec:trunc-cell-probl-1}
There is a unique $E_{\epsilon,\rho}( p_2)\in {\mathbb R}$ such that (\ref{eq:43}) admits a viscosity solution.
For this choice of $E_{\epsilon,\rho}( p_2)$, there exists a solution $\xi_{\epsilon,\rho}(p_2,\cdot)$
which is Lipschitz continuous with a Lipschitz constant $L$ depending on $p_2$ only (independent of $\epsilon$ and $\rho$).
\end{lemma}
As in \cite{MR3299352,MR3565416}, using the optimal control interpretation of (\ref{eq:43}), it is easy to prove that there exists a positive constant $K$, which may depend on $p_2$ but not on $\rho$ or $\epsilon$, such that for all
$0<\rho_1\le \rho_2$,
\begin{equation}
\label{eq:61}
E_{\epsilon,\rho_1}( p_2)\leq E_{\epsilon,\rho_2}( p_2)\leq K.
\end{equation}
For $p_2\in {\mathbb R}$, let $E_{\epsilon}(p_2)$ be defined by
\begin{equation}
\label{eq:47}
E_{\epsilon}( p_2)=\lim_{\rho\rightarrow \infty} E_{\epsilon,\rho}( p_2).
\end{equation}
For a fixed $p_2\in{\mathbb R}$, the {\sl global cell-problem} reads
\begin{equation}
\label{eq:48}
\left\{
\begin{array}[c]{lll}
H^L(Du(y)+p_2e_2)&= E_{\epsilon}( p_2)&\hbox{ if } y\in \Omega_{1,\epsilon}^L , \\
H^R(Du(y)+p_2e_2)&= E_{\epsilon}( p_2)&\hbox{ if } y\in \Omega_{1,\epsilon}^R , \\
H_{ \Gamma_{1,\epsilon}} (y,Du^L(y)+p_2e_2 , Du^R(y)+p_2e_2)&=
E_{\epsilon}( p_2)&\hbox{ if } y\in \Gamma_{1,\epsilon} , \\
u \hbox{ is $\epsilon$ periodic w.r.t. } y_2.
\end{array}
\right.
\end{equation}
The following theorem can be obtained by using the same arguments as in \S~\ref{sec:passage-limit-as}:
\begin{theorem}
\label{sec:trunc-cell-probl-2}
Let $\xi_{\epsilon, \rho}( p_2,\cdot)$ be a sequence of uniformly Lipschitz continuous solutions of the truncated cell-problem (\ref{eq:43}) which converges to $ \xi_\epsilon( p_2,\cdot)$
locally uniformly on ${\mathbb R}^2$ as $\rho\to +\infty$. The function $\xi_\epsilon( p_2,\cdot)$ is a Lipschitz continuous viscosity solution of the global cell-problem (\ref{eq:48}).
\end{theorem}
Using the control interpretation of (\ref{eq:43}), we see that $ E_\epsilon(p_2)$ is bounded independently
of $\epsilon$.
We may thus suppose that, possibly after the extraction of a subsequence, $E_\epsilon(p_2)$ converges to a limit that we still denote by $E(p_2)$; this abuse of notation is harmless, since Theorem~\ref{sec:trunc-cell-probl-3} and Corollary~\ref{sec:trunc-cell-probl-4} below show that the limit coincides with $E(p_2)$ defined in (\ref{eq:18}).
Moreover, if $E(p_2)> E_0^R(p_2)$, then for $\epsilon$ small enough, $E_\epsilon(p_2)> E_0^R(p_2)$ and we can define
$\pi^R(p_2)$ and $\pi^R_\epsilon (p_2)$ as the unique real numbers such that
\begin{displaymath}
\begin{split}
H^R( p_2e_2+{\pi}^R(p_2)e_1)&=H^{+,1,R}( p_2e_2+{\pi}^R(p_2)e_1)=E(p_2),
\\
H^R( p_2e_2+{\pi}^R_\epsilon(p_2)e_1)&=H^{+,1,R}( p_2e_2+{\pi}^R_\epsilon(p_2)e_1)=E_\epsilon(p_2).
\end{split}
\end{displaymath}
Note that ${\pi}^R(p_2)=\lim_{\epsilon\to 0} {\pi}^R_\epsilon(p_2)$.
Then we can prove, exactly as in Proposition~\ref{cor:slopes_omega}, that if $E( p_2)>E_0^R(p_2)$, then,
there exist $\rho^*=\rho^*(p_2) >0$ and $M^*= M^*(p_2) \in {\mathbb R}$
such that, for all $\epsilon >0$ small enough, for all $(y_1,y_2)\in [\rho^*,+\infty)\times {\mathbb R}$, $h_1\ge 0$ and $h_2\in {\mathbb R}$,
\begin{equation}
\label{eq:55}
\xi_\epsilon( p_2, y+h_1e_1+h_2 e_2)-\xi_\epsilon( p_2, y)\geq \pi^R_\epsilon(p_2) h_1-M^*.
\end{equation}
Of course, the same observations can be made on the left side of the interface:
if $E(p_2)> E_0^L(p_2)$, then for $\epsilon$ small enough, $E_\epsilon(p_2)> E_0^L(p_2)$ and we can define
$\pi^L(p_2)$ and $\pi^L_\epsilon (p_2)$ as the unique real numbers such that
\begin{displaymath}
\begin{split}
H^L( p_2e_2+{\pi}^L(p_2)e_1)&=H^{-,1,L}( p_2e_2+{\pi}^L(p_2)e_1)=E(p_2),
\\
H^L( p_2e_2+{\pi}^L_\epsilon(p_2)e_1)&=H^{-,1,L}( p_2e_2+{\pi}^L_\epsilon(p_2)e_1)=E_\epsilon(p_2).
\end{split}
\end{displaymath}
If $E( p_2)>E_0^L(p_2)$, then there exist $\rho^*=\rho^*(p_2) >0$ and $M^*= M^*(p_2) \in {\mathbb R}$
such that, for all $\epsilon >0$ small enough, for all $(y_1,y_2)\in (-\infty,-\rho^*] \times {\mathbb R}$, $h_1\ge 0$ and $h_2\in {\mathbb R}$,
\begin{equation}
\label{eq:56}
\xi_\epsilon( p_2, y+h_1e_1+h_2 e_2)-\xi_\epsilon( p_2, y)\leq \pi^L_\epsilon(p_2) h_1+M^*.
\end{equation}
Using similar arguments to those in \S~\ref{sec:passage-limit-as}
and Remark~\ref{sec:proof-refeq:18-1}, we obtain the following results:
\begin{theorem}
\label{sec:trunc-cell-probl-3}
Let $(\epsilon_n)$ be a sequence of positive numbers tending to $0$ such that the solutions
$(\xi_{\epsilon_n}( p_2,\cdot), E_{\epsilon_n}(p_2))$ of (\ref{eq:48}) satisfy:
$E_{\epsilon_n} (p_2)\to E(p_2)$ and $\xi_{\epsilon_n}( p_2,\cdot)\to \xi( p_2,\cdot)$ locally uniformly.
Then $\xi( p_2,\cdot)$ depends on $y_1$ only and is Lipschitz continuous,
$(\xi( p_2,\cdot), E(p_2) )$ is a solution of (\ref{eq:22}) and $ E(p_2)= \max(E^{L,M}(p_2), E^{M,R}(p_2) )$.
\end{theorem}
\begin{corollary}
\label{sec:trunc-cell-probl-4}
As $\epsilon\to 0$, the whole sequence $E_\epsilon(p_2)$ tends to $ \max(E^{L,M}(p_2), E^{M,R}(p_2) )$.
\end{corollary}
The construction of the function $\xi$ has been useful to characterize $E(p_2)$ by (\ref{eq:18}).
However, since $\xi$ is the solution of~(\ref{eq:22}), it is not directly connected to the oscillating interface $\Gamma_{\epsilon,\epsilon}$, and $\xi$
will not be useful when applying Evans' method
to prove Theorem~\ref{sec:main-result-4}.
The function used in Evans' method will rather be $\xi_\epsilon$, and the following proposition
will therefore be useful. We skip its proof, because it is very similar to that of Proposition~\ref{cor:control_slopes_W}.
\begin{proposition}
\label{sec:trunc-cell-probl-5}
For any $p_2>0$, there exists a sequence
$(\epsilon_n)$ of positive numbers tending to $0$ such that $y\mapsto \epsilon_n \xi_{\epsilon_n}( p_2,\frac y {\epsilon_n})$ converges locally uniformly to $y\mapsto W(p_2, y)$. The function $W(p_2,\cdot)$ does not depend on $y_2$ and is a Lipschitz continuous viscosity solution of (\ref{eq:19}).
By adding the same constant to $W(p_2,\cdot)$ and $\xi_{\epsilon_n}( p_2,\cdot)$, one can impose that $W( p_2,0)=0$.
Moreover,
\begin{equation}
\label{eq:89}
- \widehat{\pi}^L(p_2) (y_1)^-
+ \overline{\pi}^R(p_2) (y_1)^+ \le W(p_2,y) \le
- \overline{\pi}^L(p_2) (y_1)^-
+ \widehat{\pi}^R(p_2) (y_1)^+,
\end{equation}
where for $i=L,R$, the values $\overline{\pi}^i(p_2)$ and $\widehat{\pi}^i(p_2)$ are defined in (\ref{eq:26})-(\ref{eq:38}).
\end{proposition}
\subsection{Proof of Theorem~\ref{sec:main-result-4}}
\label{sec:proof-theor-refs}
The proof of Theorem~\ref{sec:main-result-4} is similar to that of Theorem~\ref{th:convergence_result}.
Let us consider the relaxed semi-limits
\begin{equation}
\label{eq:90}
\overline{v}(z)={\limsup_\epsilon}^{*} {v}_{\epsilon,\epsilon}(z)=\limsup_{z'\to z, \epsilon\to 0}{v}_{\epsilon,\epsilon}(z')
\quad \mbox{ and } \quad \underline{v}(z)=\underset{\epsilon}{{\liminf}_{*}}{v}_{\epsilon,\epsilon}(z)
=\liminf_{z'\to z, \epsilon\to 0}{v}_{\epsilon,\epsilon}(z').
\end{equation}
Note that $ \overline{v}$ and $ \underline{v}$ are well defined,
since $\left( v_{\epsilon,\epsilon}\right)_\epsilon$ is uniformly bounded.
It is classical to check that the functions $\overline{v}(z)$ and $\underline{v}(z)$
are respectively a bounded subsolution and a bounded supersolution in $\Omega^i$, $i=L,R$, of
\begin{equation}
\label{eq:91}
\lambda u(z)+H^i(Du(z))= 0.
\end{equation}
We will prove that $\overline{v}$ and $\underline{v}$ are respectively a subsolution and a supersolution
of (\ref{eq:16}). From the comparison theorem proved in \cite{barles2013bellman,imbert:hal-01073954,oudet2014},
this will imply that $\overline{v}=\underline{v}=v=\lim_{\epsilon\to 0} v_{\epsilon,\epsilon}$. We just have to check the transmission condition (\ref{eq:17}).
\\
Take $\bar z =(0, \bar z_2) \in \Gamma$. It is possible to use the counterpart of Theorem \ref{th:restriction_set_of_test_functions}
because $\overline{v}$ is Lipschitz continuous, see Remark~\ref{sec:reduced-set-test-1}.
\\
Take a test-function of the form
\begin{equation}
\label{eq:92}
\varphi(z+ t e_1)= \psi(z)+ \left( \overline \pi^R\left( \partial_{z_2}\psi(\bar z) \right) 1_{t>0}+
\widehat \pi^L \left( \partial_{z_2}\psi(\bar z)\right) 1_{t<0} \right) t,
\quad \forall z\in \Gamma , t\in {\mathbb R},
\end{equation}
for a ${\mathcal C}^1$ function $\psi: \Gamma \to {\mathbb R}$, such that $\overline{v}-\varphi$ has a strict local maximum at $\bar z$ and that $\overline{v}(\bar z)=\varphi(\bar z)$. \\
Let us argue by contradiction and assume that
\begin{equation}
\label{eq:93}
\lambda \varphi(\bar z)+ \max\left(E(\partial_{z_2}\varphi(\bar z)), H^{L,R}(D \varphi^L(\bar z),D \varphi^R(\bar z)) \right)=\theta >0.
\end{equation}
From (\ref{eq:92}), we see that $
H^{L,R}(D \varphi^L(\bar z),D \varphi^R(\bar z))\le E (\partial_{z_2}\varphi(\bar z))$ and
(\ref{eq:93}) is equivalent to
\begin{equation}
\label{eq:94}
\lambda \psi(\bar z)+E( \partial_{z_2}\psi(\bar z) )=\theta>0.
\end{equation}
\paragraph{Step 1}
Consider a sequence $(\xi_{\epsilon_n})_n$ as in Proposition~\ref{sec:trunc-cell-probl-5},
which we denote by $(\xi_\epsilon)$ for short.
We claim that, for $\epsilon$ and $r$ small enough, the function $\varphi^\epsilon$ defined by
\[\varphi^\epsilon(z)=\psi(z_2 e_2)+\epsilon\xi_\epsilon( \partial_{z_2}\psi(\bar z),\frac {z} {\epsilon})\]
is a viscosity supersolution of
\begin{equation}
\label{eq:95}
\left\{
\begin{array}[c]{rcll}
\lambda \varphi^\epsilon(z)+ H^i(D\varphi^\epsilon(z))& \ge& \frac{\theta}{2}\quad &\hbox{ if } z\in\Omega^i_{\epsilon,\epsilon}\cap B(\bar z,r),\; i=L,R, \\
\lambda \varphi^\epsilon(z)+H_{\Gamma_{\epsilon, \epsilon}} (z,D\left( \varphi^\epsilon\right)^L(z),D\left( \varphi^\epsilon\right)^R(z)) &\ge& \frac{\theta}{2}&\hbox{ if } z\in\Gamma_{\epsilon,\epsilon}\cap B(\bar z,r).
\end{array}
\right.
\end{equation}
Indeed, if $\nu$ is a test-function in ${\mathcal R}_{\epsilon,\epsilon}$ such that $\varphi^\epsilon-\nu$ has a local minimum at
$z^\star\in B(\bar z,r)$, then, from the definition of $\varphi^\epsilon$,
$y\mapsto \xi_\epsilon( \partial_{z_2}\psi(\bar z),y)-\frac{1}{\epsilon}\left(\nu(\epsilon y)
-\psi(\epsilon y_2 e_2) \right)$
has a local minimum at $\frac{z^\star}{\epsilon}$.
\\
If $\frac{z^\star}{\epsilon}\in \Omega^i_{1,\epsilon}$, for $i=L$ or $R$, then, from (\ref{eq:48}),
$H^i(D\nu(z^\star) -\partial_{z_2} \psi( z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2)
\ge E_\epsilon(\partial_{z_2}\psi(\bar z))$.
From the regularity properties of $H^i$,
\begin{displaymath}
H^i(D\nu(z^\star) -\partial_{z_2} \psi(z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2)
= H^i(D\nu(z^\star)) + o_{r\to 0}(1),
\end{displaymath}
thus, using also the convergence of $E_\epsilon$ to $E$,
\begin{equation*}
\lambda \varphi^\epsilon(z^\star)+ H^i(D\nu(z^\star))
\ge E(\partial_{z_2}\psi(\bar z)) +\lambda
\left(\psi(z^\star_2 e_2)+\epsilon \xi_\epsilon( \partial_{z_2}\psi(\bar z),\frac {z^\star} {\epsilon})\right)
+o_{r\to 0}(1) +o_{\epsilon\to 0}(1).
\end{equation*}
From (\ref{eq:94}), this implies that
\begin{equation*}
\lambda \varphi^\epsilon(z^\star)+ H^i(D\nu(z^\star))\ge \theta +\lambda \epsilon\xi_\epsilon( \partial_{z_2}\psi(\bar z),\frac {z^\star} {\epsilon})+o_{r\to 0}(1)+o_{\epsilon\to 0}(1).
\end{equation*}
From the Lipschitz continuity of $\xi_\epsilon( \partial_{z_2}\psi(\bar z),\cdot)$ with a constant independent of $\epsilon$, we get that
$ \epsilon\xi_\epsilon( \partial_{z_2}\psi(\bar z),\frac {z^\star} {\epsilon})= \epsilon\xi_\epsilon( \partial_{z_2}\psi(\bar z),\frac {\bar z} {\epsilon}) +o_{r\to 0}(1)$. Moreover it is easy to check that
$ \epsilon\xi_\epsilon( \partial_{z_2}\psi(\bar z),\frac {\bar z} {\epsilon})= o_{\epsilon\to 0}(1)$.
Therefore, for $r$ and $\epsilon$ small enough, $\lambda \varphi^\epsilon(z^\star)+ H^i(D\nu(z^\star))\ge \theta/2$.
\\
If $\frac{ z^\star }{\epsilon}\in \Gamma_{1,\epsilon}$, then we have
\[H^{+,L}_{\Gamma_{1,\epsilon}} ( \frac{ z^\star }{\epsilon}, D\nu^L (z^\star) -\partial_{z_2} \psi(z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2 )\ge E_\epsilon(\partial_{z_2}\psi(\bar z))\]
or
\[H^{+,R}_{\Gamma_{1,\epsilon}} ( \frac{ z^\star }{\epsilon}, D\nu^R (z^\star) -\partial_{z_2} \psi(z^\star_2 e_2) e_2 + \partial_{z_2}\psi(\bar z) e_2 )\ge E_\epsilon(\partial_{z_2}\psi(\bar z)).
\]
Since the Hamiltonians $H^{\pm,i}_{\Gamma_{1,\epsilon}}$ enjoy the same regularity properties as $H^{\pm,i}$,
it is possible to use the same arguments as in the case when $\frac{z^\star}{\epsilon}\in \Omega^i_{1,\epsilon}$.
For $r$ and $\epsilon$ small enough,
\[\lambda \varphi^\epsilon(z^\star)+H_{\Gamma_{\epsilon, \epsilon}} (z^\star,D\left( \varphi^\epsilon\right)^L(z^\star),D\left( \varphi^\epsilon\right)^R(z^\star)) \ge \frac{\theta}{2}.
\]
The claim that $\varphi^\epsilon$ is a supersolution of (\ref{eq:95}) is proved.
\paragraph{Step 2} Let us prove that there exist some positive constants $K_r>0$ and $\epsilon_0>0$ such that
\begin{equation}
\label{eq:99}
v_{\epsilon,\epsilon}(z)+K_r\le \varphi^\epsilon(z), \quad \forall z\in \partial B(\bar z, r), \;\forall \epsilon\in (0, \epsilon_0).
\end{equation}
Indeed, since $\overline{v}-\varphi$ has a strict local maximum at $\bar z$
and since $\overline{v}(\bar z)=\varphi(\bar z)$,
there exists a positive constant $\tilde K_r>0$ such that
$\overline{v}(z)+\tilde K_r\le \varphi(z)$ for any $z\in \partial B(\bar z, r)$.
Since $\displaystyle \overline{v}={\limsup_\epsilon}^{*} v_{\epsilon,\epsilon}$, there exists $\tilde \epsilon_0>0$ such that
\begin{equation}
\label{eq:96}
v_{\epsilon,\epsilon}(z)+\frac{\tilde K_r}{2}\le \varphi(z)\quad \hbox{
for any $0<\epsilon<\tilde \epsilon_0$ and $z\in \partial B(\bar z, r)$}.
\end{equation}
On the other hand, from Proposition~\ref{sec:trunc-cell-probl-5},
\begin{equation}
\label{eq:98}
\psi(z_2 e_2)+W( \partial_{z_2}\psi(\bar z),z) \ge
\psi(z_2 e_2)+ \left( \overline \pi^R ( \partial_{z_2}\psi(\bar z) ) 1_{z_1>0}
+\widehat \pi^L( \partial_{z_2}\psi(\bar z)) 1_{z_1<0} \right) z_ 1= \varphi(z).
\end{equation}
Moreover, $z\mapsto \varphi^\epsilon(z)$ converges locally uniformly to
$z\mapsto \psi(z_2e_2)+W( \partial_{z_2}\psi(\bar z),z)$ as $\epsilon$ tends to $0$.
By collecting the latter observation, (\ref{eq:98}) and (\ref{eq:96}), we get \eqref{eq:99}
for some constants $K_r>0$ and $\epsilon_0>0$.
\paragraph{Step 3}
From the previous steps, we find by comparison that for $r$ and $\epsilon$ small enough,
\begin{displaymath}
v_{\epsilon,\epsilon}(z)+K_r\le \varphi^\epsilon(z) \quad \quad \forall z \in B(\bar z, r).
\end{displaymath}
Taking the $\limsup$ as $z\to\bar z$ and $\epsilon\to 0$, we obtain
\begin{displaymath}
\overline{v}(\bar z)+K_r\le \psi(\bar z)=\varphi(\bar z)=\overline{v}(\bar z),
\end{displaymath}
which cannot happen. This completes the proof. \ifmmode\else\unskip\quad\fi\squareforqed
\section{Introduction}
Tight integration of sensing hardware and control is key to
mastery of manipulation in cluttered, occluded, or dynamic environments.
Artificial tactile sensors, however, are challenging to integrate and maintain: They are most useful when located at the distal end of the manipulation chain (where space is tight); they are subject to high forces and wear (which reduce their life span or require tedious maintenance procedures); and they require instrumentation capable of routing and processing high-bandwidth data.
Among the many tactile sensing technologies developed in the last decades~\cite{tactile_review_newer}, vision-based tactile sensors are a promising variant. They provide high spatial resolution with compact instrumentation and are synergistic with recent image-based deep learning techniques.
Current implementations of these sensors, however, are often bulky and/or fragile~\cite{Tactilesensor_TacTip_edge,GelSightUSB,GelSight_Dong}. Robotic grasping benefits from sensors that are compactly-integrated and that are rugged enough to sustain the shear and normal forces involved in grasping.
To address this need we present a tactile-sensing finger, \textit{GelSlim}, designed for grasping in cluttered environments (\figref{fig:teaser}). This finger, similar to other vision-based tactile sensors, uses a camera to measure tactile imprints (\figref{fig:showing-off}).\\
\begin{figure}[t]
\centering
\vspace{2mm}
\includegraphics[width=\linewidth]{figures/figure1update.jpg}
\caption{{\bf GelSlim fingers} picking a textured flashlight from clutter with the corresponding tactile image at right. The sensor is calibrated to normalize output over time and reduce the effects of wear on signal quality. The flashlight, though occluded, is shown for the reader's clarity.}
\vspace{-2mm}
\label{fig:teaser}
\end{figure}
\begin{figure*}[t]
\centering
\vspace{2mm}
\includegraphics[width=\textwidth]{figures/showing-off-large.jpg}
\caption{{\bf Tactile imprints.} From left to right: The MCube Lab's logo, 80/20 aluminum extrusion, a PCB, a screw, a Lego brick, and a key.}
\vspace{-2mm}
\label{fig:showing-off}
\end{figure*}
\begin{figure}[t]
\centering
\vspace{2mm}
\includegraphics[width=\linewidth]{figures/figure1-5.jpg}
\caption{{\bf GelSlim finger.} Pointed adaptation of the GelSight sensor featuring a larger 50mm $\times$ 50mm sensor pad and strong, slim construction.}
\vspace{-2mm}
\label{fig:sensor}
\end{figure}
In this work we present:
\begin{itemize}
\item \textbf{Design} of a vision-based high-resolution tactile-sensing finger with the form factor necessary to gain access to cluttered objects, and toughness to sustain the forces involved in everyday grasping (\secref{sec:design}). \change{The sensor outputs raw images of the sensed surface, which encode shape and texture of the object at contact.}
\item \textbf{Calibration} framework to regularize the sensor output over time and across sensor individuals (\secref{sec:calibration}). We suggest four metrics to track the quality of the tactile feedback.
\item \textbf{Evaluation} of the sensor's durability by monitoring its image quality over more than 3000 grasps (\secref{sec:calibration}).
\end{itemize}
The long term goal of this research is to enable reactive grasping and manipulation.
The use of tactile feedback in the control loop of robotic manipulation is key for reliability.
Our motivation stems from efforts in developing bin-picking systems to grasp novel objects in cluttered scenes and from the need to observe the geometry of contact to evaluate and control the quality of a grasp~\cite{Zeng2017, Zeng2018}.
In cluttered environments like a pantry or a warehouse storage cell, as in the Amazon Robotics Challenge~\cite{Correll2016}, a robot faces the challenge of singulating target objects from a tightly-packed collection of items.
Cramped spaces and clutter lead to frequent contact with non-target objects. Fingers must be compact and, when possible, pointed to squeeze between target and clutter (\figref{fig:sensor}). To make use of learning approaches, tactile sensors must also be resilient to the wear and tear from long experimental sessions which often yield unexpected collisions. Finally, sensor calibration and signal conditioning are key to the consistency of tactile feedback as the sensor's physical components decay.
\section{Related Work}
The body of literature on tactile sensing technologies is large~\cite{tactile_review_newer,Tactilesensor_review1}. Here we discuss relevant works related to the technologies used by the proposed sensor: vision-based tactile sensors and GelSight sensors.
\subsection{Vision-based tactile sensors}
Cameras provide high-spatial-resolution 2D signals without the need for many wires. Their sensing field and working distance can also be tuned with an optical lens. For these reasons, cameras are an interesting alternative to several other sensing technologies, which tend to have higher temporal bandwidth but more limited spatial resolution.
Ohka~\textit{et al}.~\cite{Tactilesensor_array1996} designed an early vision-based tactile sensor, composed of a flat rubber sheet, an acrylic plate and a CCD camera, to measure three-dimensional force. The prototyped sensor, however, was too large to be realistically integrated in a practical end-effector. GelForce~\cite{Tactilesensor_Gelforce}, a tactile sensor shaped like a human finger, used a camera to track two layers of dots on the sensor surface to measure both the magnitude and direction
of an applied force.
Instead of measuring force, some vision-based tactile sensors focus on measuring geometry, such as edges, texture or 3D shape of the contact surface. Ferrier and Brockett~\cite{Tactilesensor_dotshape} proposed an algorithm to reconstruct the 3D surface by analyzing the distribution of the deformation of a set of markers on a tactile sensor. This principle has inspired several other contributions. The TacTip sensor~\cite{Tactilesensor_TacTip_edge} uses a similar principle to detect edges and estimate the rough 3D geometry of the contact surface. Mounted on a GR2 gripper, the sensor gave helpful feedback when reorienting a cylindrical object in hand~\cite{Tactilesensor_TacTip_inhand}.
Yamaguchi~\cite{Tactilesensor_CMU} built a tactile sensor with a clear silicone gel that can be mounted on a Baxter hand. Unlike the previous sensors, Yamaguchi's also captures the local color and shape information since the sensing region is transparent. The sensor was used to detect slip and estimate contact force.
\subsection{GelSight sensors}
The GelSight sensor is a vision-based tactile sensor that measures the 2D texture and 3D topography of the contact surface. It utilizes a piece of elastomeric gel with an opaque coating as the sensing surface, and a webcam above the gel to capture contact deformation from changes in lighting contrast as reflected by the opaque coating.
The gel is illuminated by color LEDs with inclined angles and different directions. The resulting colored shading can be used to reconstruct the 3D geometry of the gel deformation. The original, larger GelSight sensor~\cite{GelSight2009,GelSight2011} was designed to measure the 3D topography of the contact surface with micrometer-level spatial resolution. Li~\textit{et al}.~\cite{GelSightUSB} designed a cuboid fingertip version that could be integrated in a robot finger. Li's sensor has a $1\times1$ cm$^2$ sensing area, and can measure fine 2D texture and coarse 3D information. A new version of the GelSight sensor was more recently proposed by Dong~\textit{et al}.~\cite{GelSight_Dong} to improve 3D geometry measurements and standardize the fabrication process. A detailed review of different versions of GelSight sensors can be found in ~\cite{GelSight_review}.
GelSight-like sensors with rich 2D and 3D information have been successfully applied in robotic manipulation.
Li~\textit{et al}.~\cite{GelSightUSB} used GelSight's localization capabilities to insert a USB connector, where the sensor used the texture of the characteristic USB logo to guide the insertion.
Izatt~\textit{et al}.~\cite{GelSightRuss} explored the use of the 3D point cloud measured by a GelSight sensor in a state estimation filter to find the pose of a grasped object in a peg-in-hole task.
Dong~\textit{et al}.~\cite{GelSight_Dong} used the GelSight sensor to detect slip from variations in the 2D texture of the contact surface in a robot picking task. The 2D image structure of the output from a GelSight sensor makes it a good fit for deep learning architectures. GelSight sensors have also been used to estimate grasp quality~\cite{calandra2017feeling}.
\subsection{Durability of tactile sensors}
Frictional wear is an issue intrinsic to tactile sensors. Contact forces and torques during manipulation are significant and can be harmful to both the sensor surface and its inner structure.
Vision-based tactile sensors are especially sensitive to frictional wear, since they rely on the deformation of a soft surface for their sensitivity. These sensors commonly use some form of soft silicone gel, rubber or other soft material as a sensing surface~\cite{Tactilesensor_Gelforce,Tactilesensor_TacTip,Tactilesensor_CMU, GelSight_Dong}.
To enhance the durability of the sensor surface, researchers have investigated using protective skins such as plastic~\cite{Tactilesensor_CMU}, or making the sensing layer easier to replace by involving 3D printing techniques with soft material~\cite{Tactilesensor_TacTip}.
Another mechanical weakness of vision-based tactile sensors is the adhesion between the soft sensing layer and its stronger supporting layer. Most sensors discussed above use either silicone tape or rely on the adhesive property of the silicone rubber, which can be insufficient under practical shear forces involved in picking and lifting objects. The wear effects on these sensors are especially relevant if one attempts to use them in a data-driven/learning context~\cite{Tactilesensor_CMU,calandra2017feeling}.
Durability is key to the practicality of a tactile sensor; however, none of the works above provides a quantitative analysis of sensor durability over usage.
\section{Design Goals}
\label{sec:goals}
\begin{figure}[t]
\centering
\vspace{2mm}
\includegraphics[width=6cm]{figures/Gelsight-construction3.jpg}
\caption{{\bf The construction of a GelSight sensor.} A general integration of a GelSight sensor in a robot finger requires three components: camera, light, and gel, in particular arrangement. Li's original fingertip schematic \cite{GelSightUSB} is shown at left with our \textit{GelSlim} at right.}
\vspace{-2mm}
\label{fig:GelSight-construction}
\end{figure}
In a typical GelSight-like sensor, a clear gel with an opaque outer membrane is illuminated by a light source and captured by a camera (\figref{fig:GelSight-construction}). The position of each of these elements depends on the specific requirements of the sensor. Typically, for ease of manufacturing and optical simplicity, the camera's optical axis is normal to the gel (left of \figref{fig:GelSight-construction}).
To reproduce 3D using photometric techniques~\cite{GelSight2011}, at least three colors of light must be directed across the gel from different directions.
Both of these geometric constraints, the camera placement and the illumination path, are counterproductive to slim robot finger integrations, and existing sensor implementations are cuboid. In most manipulation applications, successful grasping requires fingers with the following qualities:
\begin{itemize}
\item \textbf{Compactness} allows fingers to singulate objects from clutter by squeezing between them or separating them from the environment.
\item \textbf{Uniform Illumination} makes sensor output consistent across as much of the gel pad as possible.
\item \textbf{Large Sensor Area} extends the area of the tactile cues, both where there is and where there is not contact. This can provide a better knowledge of the state of the grasped object and, ultimately, enhanced controllability.
\item \textbf{Durability} affords signal stability over the life span of the sensor. This is especially important for data-driven techniques that build models from experience.
\end{itemize}
In this paper we propose a redesign of the form, materials, and processing of the GelSight sensor to turn it into a GelSight \emph{finger}, yielding a more useful finger shape with a more consistent and calibrated output (right of \figref{fig:GelSight-construction}).
The following sections describe the geometric and optical tradeoffs in its design (\secref{sec:design}), as well as the process to calibrate and evaluate it (\secref{sec:calibration}).
\section{Design and Fabrication}
\label{sec:design}
\begin{figure}[b]
\centering
\vspace{2mm}
\includegraphics[width=\linewidth]{figures/fabric-comparison.jpg}
\caption{{\bf Texture in the sensor fabric skin improves signal strength.} When an object with no texture is grasped against the gel with no fabric, signal is very low (a-b). The signal improves with textured fabric skin (c-d). The difference stands out well when processed with Canny edge detection.}
\vspace{-2mm}
\label{fig:covered-difference}
\end{figure}
To realize the design goals in \secref{sec:goals}, we propose the following changes to a standard design of a vision-based GelSight-like sensor:
1) Photometric stereo for 3D reconstruction requires precise illumination. Instead, we focus on recovering texture and contact surface, which will allow more compact light-camera arrangements.
2) The softness of the gel plays a role in the resolution of the sensor, but is also damaging to its life span. We will achieve higher durability by protecting the softest component of the finger, the gel, with textured fabric.
3) Finally, we will improve the finger's compactness, illumination uniformity, and sensor pad size with a complete redesign of the sensor optics.
\subsection{Gel Materials Selection}
A GelSight sensor's gel must be elastomeric, optically clear, soft, and resilient. Gel hardness represents a tradeoff between spatial resolution and strength. Maximum sensitivity and resolution are only possible when gels are very soft, but their softness yields two major drawbacks: low tensile strength and greater viscoelasticity.
Given our application's lesser need for spatial resolution, we make use of slightly harder, more resilient gels compared to other GelSight sensors~\cite{GelSightUSB,GelSight_Dong}. Our final gel formulation is a two-part silicone (XP-565 from Silicones Inc.) mixed in a 15:1 ratio of parts A to B. The outer surface of our gel is coated with a specular silicone paint using a process developed by Yuan~\textit{et al}.~\cite{GelSight_review}.
The surface is covered with a stretchy, loose-weave fabric to prevent damage to the gel while increasing signal strength. Signal strength is proportional to the deformation caused by pressure on the gel surface. Because the patterned texture of the fabric lowers the contact area between object and gel, the local pressure increases to the point where the sensor can detect the contact patch of flat objects pressed against the flat gel (\figref{fig:covered-difference}).
\subsection{Sensor Geometry Design Space}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{figures/parametric-wedge.jpg}
\caption{{\bf The design space of a single-reflection GelSight sensor.} Based on the camera's depth of field and viewing angle, it will lie at some distance away from the gel. These parameters, along with mirror and camera angles, determine the thickness of the finger and the size of the gel pad. The virtual camera created by the mirror is drawn for visualization purposes.}
\label{fig:parametric-wedge}
\vspace{-3mm}
\end{figure}
We change the sensor's form factor by using a mirror to reflect the gel image back to the camera. This allows us to have a larger sensor pad by placing the camera farther away while also keeping the finger comparatively thin. A major component of finger thickness is the optical region with thickness $h$ shown in \figref{fig:parametric-wedge}, which is given by the trigonometric relation:
\begin{equation} \label{equ1}
h = \frac{L \cdot \cos(\beta-\frac{\Phi}{2}-2\alpha)}{\cos(\frac{\Phi}{2}-\beta+\alpha)}\,,
\end{equation}
where $\Phi$ is the camera's field of view, $\alpha$ is mirror angle, $\beta$ is the camera angle relative to the base, and $L$ is the length of the gel. $L$ is given by the following equation and also relies on the disparity between the shortest and longest light path from the camera (depth of field):
\begin{equation} \label{equ4}
L = \frac{(a-b) \cdot \sin{\Phi}}{2\sin{\frac{\Phi}{2}} \cdot \sin{(\beta-2\alpha)}}\,.
\end{equation}
Together, the design requirements $h$ and $L$ vary with the design variables $\alpha$ and $\beta$, and are constrained by the camera's depth of field $(a-b)$ and viewing angle $\Phi$. These design constraints ensure that both the near and far edges of the gel given by \eqref{equ4} are in focus, that the gel is maximally sized, and that the finger is minimally thick.
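To make the tradeoff concrete, the short sketch below simply evaluates \eqref{equ1} and \eqref{equ4}; the camera and mirror parameters are illustrative placeholders, not the values used in our finger.
\begin{verbatim}
import math

def gel_length(a, b, phi, alpha, beta):
    # Gel pad length L from the second design relation; angles in radians, lengths in mm.
    return ((a - b) * math.sin(phi)) / (2 * math.sin(phi / 2) * math.sin(beta - 2 * alpha))

def optical_thickness(L, phi, alpha, beta):
    # Optical-region thickness h from the first design relation.
    return (L * math.cos(beta - phi / 2 - 2 * alpha)) / math.cos(phi / 2 - beta + alpha)

# Hypothetical parameters, for illustration only.
phi = math.radians(60)    # camera field of view
alpha = math.radians(10)  # mirror angle
beta = math.radians(70)   # camera angle relative to the base
a, b = 90.0, 40.0         # longest / shortest in-focus light path (mm)

L = gel_length(a, b, phi, alpha, beta)
h = optical_thickness(L, phi, alpha, beta)
print(f"gel length L = {L:.1f} mm, optical thickness h = {h:.1f} mm")
\end{verbatim}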
\subsection{Optical Path: Photons From Source, to Gel, to Camera}
\begin{figure}[t]
\centering
\includegraphics[width=7.5cm]{figures/light-bouncing2.jpg}
\caption{{\bf The journey of a light ray through the finger.} The red line denoting the light ray is: 1) Emitted by two compact, high-powered LEDs on each side. 2) Routed internal to acrylic guides via total internal reflection. 3) Redistributed to be parallel and bounced toward the gel pad by a parabolic mirror. 4) Reflected $90^{\circ}$ on a mirror surface to graze across the gel. 5) Reflected up by an object touching the gel. 6) Reflected to the camera by a flat mirror (not shown in the figure).}
\label{fig:light-bouncing}
\vspace{-3mm}
\end{figure}
Our method of illuminating the gel makes three major improvements relative to previous sensors: a slimmer finger tip, more even illumination, and a larger gel pad. Much like Li did in his original GelSight finger design~\cite{GelSightUSB}, we use acrylic wave guides to move light throughout the sensor with fine control over the angle of incidence across the gel (\figref{fig:light-bouncing}). However, our design moves the LEDs used for illumination farther back in the finger by using an additional reflection, thus allowing our finger to be slimmer at the tip.
The light cast on the gel originates from a pair of high-powered, neutral white, surface-mount LEDs (OSLON SSL 80) on each side of the finger. Light rays stay inside the acrylic wave guide due to total internal reflection by the difference in refractive index between acrylic and air.
Optimally, light rays would be emitted parallel so as to not lose intensity as light is cast across the gel. However, light emitters are usually point sources. A line of LEDs, as in Li's sensor, helps to evenly distribute illumination
|
across one dimension while intensity decays across the length of the sensor.
Our approach uses a parabolic reflection (Step 3 in \figref{fig:light-bouncing}) to ensure that light rays entering the gel pad are close to parallel. The two small LEDs are approximated as a single point source and placed at the parabola's focus. Parallel light rays bounce across the gel via a hard $90^{\circ}$ reflection. Hard reflections through acrylic wave guides are accomplished by painting those surfaces with mirror finish paint.
When an object makes contact with the fabric over the gel pad, it creates a pattern of light and dark spots as the specular gel interacts with the grazing light. This image of light and dark spots is transmitted back to the camera off a front-surface glass mirror. The camera (Raspberry Pi Spy Camera) was chosen for its small size, low price, high framerate/resolution, and good depth of field.
\subsection{Lessons Learned}
For robotic system integrators, or those interested in designing their own GelSight sensors, the following is a collection of small but important lessons we learned:
\begin{enumerate}
\item{\bf{Mirror:}} Back-surface mirrors create a ``double image'' from reflections off the front and back surfaces, especially at the reflection angles we use in our sensor. Glass front-surface mirrors give a sharper image.
%
\item{\bf{Clean acrylic:}} Even finger oils on the surface of a wave guide can interrupt total internal reflection. Clean acrylic obtains maximum illumination efficiency.
%
\item{\bf{Laser cut acrylic:}} Acrylic pieces cut by laser exhibit stress cracking at edges after contacting solvents from glue or mirror paint. Cracks break the optical continuity in the substrate and ruin the guide. Stresses can be relieved by annealing first.
%
\item{\bf{LED choice:}} This LED was chosen for its high luminous efficacy (103 lm/W), compactness (3mm $\times$ 3mm), and small viewing angle (80$^\circ$). A small viewing angle directs more light into the thin wave guide.
%
\item{\bf{Gel paint type:}} From our experience in this configuration, a semi-specular gel coating provides a higher-contrast signal than Lambertian gel coatings. Yuan~\textit{et al}. \cite{GelSight_review} describe the different types of coatings and how to manufacture them.
%
\item{\bf{Affixing silicone gel:}} When affixing the silicone gel to the substrate, most adhesives we tried made the images hazy or did not acceptably adhere to either the silicone or substrate. We found that \textit{Adhesives Research ARclear 93495} works well. Our gel-substrate bond is also stronger than other gel-based sensors because of its comparatively large contact area.
\end{enumerate}
Some integration lessons revolve around the use of a Raspberry Pi spy camera. It enables a very high data-rate but requires a 15-pin Camera Serial Interface (CSI) connection with the Raspberry Pi. Since the GelSlim sensor was designed for use on a robotic system where movement and contact are part of normal operation, the processor (Raspberry Pi) is placed away from the robot manipulator. We extended the camera's fragile ribbon cable by first adapting it to an HDMI cable inside the finger, then passing that HDMI cable along the kinematic chain of the robot. Extending the camera this way allows us to make it up to several meters long, mechanically and electrically protect the contacts, and route power to the LEDs through the same cable.
The final integration of the sensor in our robot finger also features a rotating joint to change the angle of the finger tip relative to the rest of the finger body. This movement does not affect the optical system and allows us to more effectively grasp a variety of objects in clutter.
\change{There are numerous ways to continue improving the sensor's durability and simplify the sensor's fabrication process. For example, while the finger is \textit{slimmer}, it is not \textit{smaller}. It will be a challenge to make integrations sized for smaller robots due to camera field of view and depth of field constraints. Additionally, our finger has an un-sensed, rigid tip that is less than ideal for two reasons: it is the part of the finger with the richest contact information, and its rigidity negatively impacts the sensor's durability. To decrease contact forces applied due to this rigidity, we will add compliance to the finger-sensor system.}
\subsection{Gel Durability Failures}
We experimented with several ways to protect the gel surface before selecting a fabric skin. Most non-silicone coatings will not stick to the silicone bulk, so we tested various types of filled and non-filled silicones. Because this skin coats the outside, using filled (tougher, non-transparent) silicones is an option. One thing to note is that thickness added outside of the specular paint increases impedance of the gel, thus decreasing resolution. To deposit a thin layer onto the bulk, we diluted filled, flowable silicone adhesive with NOVOCS silicone thinner from Smooth-On Inc. We found that using solvent in proportions greater than 2:1 (solvent:silicone) caused the gel to wrinkle (possibly because solvent diffused into the gel and caused expansion).
Using a non-solvent approach to deposit thin layers like spin coating is promising, but we did not explore this path. Furthermore, thin silicone coatings often rubbed off after a few hundred grasps signaling that they did not adhere to the gel surface effectively. Plasma pre-treatment of the silicone surface could more effectively bond substrate and coating, but we were unable to explore this route.
\section{Sensor Calibration}
\label{sec:calibration}
The consistency of sensor output is key for sensor usability.
The raw image from a GelSlim sensor right after fabrication has two intrinsic issues: non-uniform illumination and a strong perspective distortion.
In addition, the sensor image stream may change during use due to small deformations of the hardware, compression of the gel, or camera shutter speed fluctuations.
To improve the consistency of the signal we introduce a two-step calibration process, illustrated in \figref{fig:step1} and~\figref{fig:step2}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{figures/calibration_step1.jpg}
\caption{\textbf{Calibration Step 1 (manufacture correction)}: capture raw image (a1) against a calibration pattern with four rectangles and a non-contact image (b1); calculate the ``transform matrix'' \encircle{{\tiny T}} according to image (a1); do operation \encircle{{\scriptsize 1}} ``image warping and cropping'' to image (a1) and (b1) and get (a2) and (b2); apply Gaussian filter to (b2) to get ``background illumination'' \encircle{{\tiny B}}; do operation \encircle{{\scriptsize 2}} to (a2) and get the calibrated image (a3); record the ``mean value'' \encircle{{\tiny M}} of image (b2) as brightness reference.}
\label{fig:step1}
\vspace{-3mm}
\end{figure}
\myparagraph{Calibration Step 1. Manufacture correction.}
After fabrication, the sensor signal can vary with differences in camera perspective and illumination intensity.
To correct for camera perspective, we capture a tactile imprint in \figref{fig:step1} (a1) against a calibration pattern with four flat squares (\figref{fig:calibration-targets} left).
With the distance between the outer edges of the four squares, we estimate the perspective transformation matrix $T$ that allows us to warp the image to a normal perspective and crop the boundaries.
The contact surface information in the warped image (\figref{fig:step1} (a2)) is more user-friendly. We assume the perspective camera matrix remains constant, so the manufacture calibration is done only once.
We correct for non-homogeneous illumination by estimating the illumination distribution of the background $B$ using a strong Gaussian filter on a non-contact warped image (\figref{fig:step1} (b2)). The resulting image, after subtracting the non-uniform illumination background (\figref{fig:step1} (a3)), is visually more homogeneous. In addition, we record the mean value of the warped non-contact image $M$ as a brightness reference for future use.
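A minimal sketch of this first calibration step is given below, assuming OpenCV and NumPy; the corner coordinates of the calibration squares and the blur strength are hypothetical placeholders, and the detection of the corners is not shown.
\begin{verbatim}
import cv2
import numpy as np

def manufacture_calibration(raw_contact, raw_background, corners_px, out_size=(400, 400)):
    # corners_px: four outer corners of the calibration squares in the raw image,
    # ordered top-left, top-right, bottom-right, bottom-left.
    w, h = out_size
    src = np.float32(corners_px)
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    T = cv2.getPerspectiveTransform(src, dst)       # perspective "transform matrix"

    warped_contact = cv2.warpPerspective(raw_contact, T, out_size)
    warped_bg = cv2.warpPerspective(raw_background, T, out_size)

    B = cv2.GaussianBlur(warped_bg, (0, 0), 25)     # strong blur -> background illumination
    M = float(warped_bg.mean())                     # brightness reference for step 2
    calibrated = cv2.subtract(warped_contact, B)    # remove non-uniform illumination
    return T, B, M, calibrated
\end{verbatim}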
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{figures/calibration_step2.jpg}
\caption{\textbf{Calibration Step 2 (on-line sensor maintenance)}: apply transformation \encircle{{\tiny T}} to start from a warped and cropped image (a1) and (b1); operation
\encircle{{\scriptsize 2}} uses the non-contact image from step 1 and adds constant \encircle{{\tiny M}} to calibrate the image brightness (a2) and (b2); operation \encircle{{\scriptsize 3}} performs a
local contrast adjustment (a3) and (b3). All the images show the imprint of the calibration ``dome''
after fabrication and after 3300 grasps. \textcolor{black}{The red circles in (b3) highlight the region where the gel wears out after 3300 grasps.}}
\label{fig:step2}
\vspace{-3mm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/calibration-targets.jpg}
\caption{Three tactile profiles to calibrate the sensor. From left to right: A rectangle with sharp corners, a ball-bearing array, and a 3D printed dome.}
\label{fig:calibration-targets}
\vspace{-3mm}
\end{figure}
\myparagraph{Calibration Step 2. On-line Sensor Maintenance.}
The aim of the second calibration step is to keep the sensor output consistent over time.
We define four metrics to evaluate the temporal consistency of the 2D signal: \textit{Light intensity and distribution}, \textit{Signal strength}, \textit{Signal strength distribution} and \textit{Gel condition}. In the following subsection, we will describe and evaluate these metrics in detail.
We will make use of the calibration targets in \figref{fig:calibration-targets} to track the signal quality, including a ball-bearing array and a 3D printed dome. We conduct over 3300 aggressive grasp-lift-vibrate experiments on several daily objects with two GelSlim fingers on a WSG-50 gripper \change{attached to an ABB IRB 1600ID robotic arm}. We take a tactile imprint of the two calibration targets every 100 grasps. \change{The data presented in the following sections were gathered with a single prototype and are for the purposes of evaluating sensor durability trends.}
\subsection{Metric I: Light Intensity and Distribution}
The light intensity and distribution are the mean and standard deviation of the light intensity in a non-contact image.
The light intensity distribution in the gel is influenced by the condition of the light source, the consistency of the optical path and the homogeneity of the paint of the gel. The three factors can change due to wear.
\figref{fig:light} shows their evolution over grasps before (blue) and after (red) background illumination correction. The standard deviations are shown as error bars.
The blue curve (raw output from the sensor) shows that the mean brightness of the image drops slowly over time, especially after around 1750 grasps.
This is likely due to slight damage of the optical path. The spatial variation of the image brightness decreases slightly, likely because the two bright sides of the image get darker and more similar to the center region. \figref{fig:step2} shows an example of the decrease in illumination before (a1) and after (b1) 3300 grasps.
We compensate for the changes in light intensity by subtracting the background and adding a constant $M$ (brightness reference from step one) to the whole image. The background illumination is obtained from the Gaussian filtered non-contact image at that point. The mean and variance of the corrected images, shown in red in \figref{fig:light}, are more consistent. \figref{fig:step2} shows an example of the improvement after 3300 grasps.
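A sketch of this brightness-maintenance operation (our own transcription of the description above, assuming 8-bit grayscale images):
\begin{verbatim}
import numpy as np

def brightness_correction(warped_image, current_background, M):
    # Subtract the current (possibly degraded) background illumination and
    # restore the original mean brightness M recorded in calibration step 1.
    corrected = warped_image.astype(np.float32) - current_background.astype(np.float32) + M
    return np.clip(corrected, 0, 255).astype(np.uint8)
\end{verbatim}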
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/Light_distribution_v2.jpg}
\caption{Evolution of the light intensity distribution.}
\label{fig:light}
\vspace{-3mm}
\end{figure}
\subsection{Metric II: Signal Strength}
The signal strength $S$ is a measure of the dynamic range of the tactile image under contact.
It is intuitively the brightness and contrast of a contact patch, and we define it as:
\begin{equation} \label{equ3}
S = H(\sigma-m)(\frac{2\mu}{255} + \frac{\sigma}{n})\,,
\end{equation}
where $\mu$ is the mean and $\sigma$ the standard deviation of the image intensity in the contact region, and $H(x)$ is the Heaviside step function. The factor $H(\sigma-m)$ means that if the standard deviation is smaller than $m$, the signal strength is 0. Experimentally, we set $m$ to 5 and $n$, the standard deviation normalizer, to 30.
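A direct transcription of \eqref{equ3} as a small helper (our own sketch, with the constants above as defaults):
\begin{verbatim}
import numpy as np

def signal_strength(contact_region, m=5.0, n=30.0):
    # Zero if the contrast (std) is below m, otherwise a combination of
    # normalized brightness and contrast, as in the signal-strength metric above.
    mu = float(np.mean(contact_region))
    sigma = float(np.std(contact_region))
    if sigma <= m:                      # Heaviside gate H(sigma - m)
        return 0.0
    return 2.0 * mu / 255.0 + sigma / n
\end{verbatim}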
Maintaining a consistent signal strength during use is one of the most important factors for the type of contact information we can extract in a vision-based tactile sensor.
In a GelSlim sensor, signal strength is affected by the elasticity of the gel, which degrades after use.
We track the signal strength during grasps by using the ``dome'' calibration pattern, designed to yield a single contact patch. \figref{fig:signal_strength} shows its evolution. The blue curve (from raw output) shows a distinct drop of the signal strength after 1750 grasps. \change{The brightness decrease described in the previous subsection is one of the key reasons for this drop}.
The signal strength can be enhanced by increasing both the contrast and brightness of the image. The brightness adjustment done after fabrication improves the signal strength, shown in green in \figref{fig:signal_strength}. However, the image with brightness correction after 3300 grasps shown in \figref{fig:step2} (b2) still has decreased contrast.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{figures/signal_strength_v2.jpg}
\caption{The change of signal strength across the number of grasps performed.}
\label{fig:signal_strength}
\vspace{-3mm}
\end{figure}
To enhance the image contrast according to the illumination, we perform adaptive histogram equalization on the image, which increases the contrast over the whole image, and then fuse the images with and without histogram equalization according to the local background illumination. The two images after the whole calibration are shown in \figref{fig:step2} (a3) and (b3). The signal strength after calibrating illumination and contrast (\figref{fig:signal_strength}, in red) shows better consistency during usage.
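The description above leaves the exact fusion rule open; the sketch below is one plausible reading, using OpenCV's CLAHE and a per-pixel blend that weights the equalized image more heavily where the background illumination is dim. The clip limit and tile size are assumed values.
\begin{verbatim}
import cv2
import numpy as np

def enhance_contrast(calibrated, background_B):
    # calibrated, background_B: 8-bit grayscale images of the same size.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(calibrated)

    # Darker background -> larger weight on the contrast-boosted image.
    w = 1.0 - cv2.normalize(background_B.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    fused = w * equalized.astype(np.float32) + (1.0 - w) * calibrated.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
\end{verbatim}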
\subsection{Metric III: Signal Strength Distribution}
The force distribution after grasping an object is non-uniform across the gel.
During regular use, the center and distal regions of the gel are more likely to be contacted during grasping, which puts more wear on the gel in those areas. This phenomenon results in a non-uniform degradation of the signal strength.
To quantify this phenomenon, we extract the signal strength of each pressed region from the ``ball array'' calibration images taken every 100 grasps (see \figref{fig:SS_distribution} (b) before and (c) after calibration). We use the standard deviation of the $5 \times 5$ array of signal strengths to represent the signal strength distribution, and compensate for variations by increasing the contrast non-uniformly in the regions where it has decreased.
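A sketch of how this spatial metric could be computed from the warped ball-array imprint, assuming the balls are laid out on a regular grid so that a simple $5\times5$ tiling isolates each pressed region:
\begin{verbatim}
import numpy as np

def strength_distribution(ball_array_image, grid=5, m=5.0, n=30.0):
    # Tile the image into grid x grid cells, compute the signal strength of each
    # cell, and summarize spatial uniformity by the std across cells.
    h, w = ball_array_image.shape[:2]
    strengths = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = ball_array_image[i * h // grid:(i + 1) * h // grid,
                                    j * w // grid:(j + 1) * w // grid]
            mu, sigma = float(cell.mean()), float(cell.std())
            strengths[i, j] = 0.0 if sigma <= m else 2.0 * mu / 255.0 + sigma / n
    return strengths, float(strengths.std())
\end{verbatim}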
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/Signal_strength_distribution_v2.jpg}
\caption{(a) The evolution of the signal strength distribution. (b) The raw output of the ``ball array'' calibration image. (c) The calibrated ``ball array'' calibration image.}
\label{fig:SS_distribution}
\vspace{-3mm}
\end{figure}
\figref{fig:SS_distribution} shows the signal strength distribution before and after calibration, in blue and red respectively. The red curve shows some marginal improvement in the consistency over usage. \change{The sudden increase of the curve after 2500 grasps is caused by a change in light distribution, likely due to damage of the optical path by an especially aggressive grasp.}
\subsection{Metric IV: Gel Condition}
The sensor's soft gel is covered by a textured fabric skin for protection. Experimentally, this significantly increases the resilience to wear. However, the reflective paint layer of the gel may still wear out after use.
Since the specular paint acts as a reflection surface, the regions of the gel with damaged paint do not respond to contact and are seen as black pixels, which we call \emph{dead pixels}.
We define the gel condition as the percentage of dead pixels in the image. \figref{fig:gel_condition} shows the evolution of the number of dead pixels over the course of 3000 grasps. Only a small number of pixels (less than 0.06\%, around 170 pixels) are damaged, highlighted with red circles in \figref{fig:step2} (b3).
Sparse dead pixels can be ignored or fixed with interpolation, but a large number of clustered dead pixels can be solved only by replacing the gel.
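A sketch of the metric and of the interpolation-based repair for sparse defects, assuming an 8-bit grayscale non-contact image; the darkness threshold is an assumed value, not one used in our pipeline.
\begin{verbatim}
import cv2
import numpy as np

def gel_condition(non_contact_image, dark_threshold=10):
    # Dead pixels: persistently dark pixels in a non-contact image.
    dead_mask = (non_contact_image < dark_threshold).astype(np.uint8)
    dead_fraction = float(dead_mask.mean())   # multiply by 100 for a percentage
    # Sparse dead pixels can be filled in by inpainting from their neighborhood.
    repaired = cv2.inpaint(non_contact_image, dead_mask, 3, cv2.INPAINT_TELEA)
    return dead_fraction, repaired
\end{verbatim}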
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/Gelcondition_v2.jpg}
\caption{Evolution of the gel condition.}
\label{fig:gel_condition}
\vspace{-3mm}
\end{figure}
\section{Conclusions and Future Work}
\label{sec:conclusion}
In this paper, we present a compact integration of a visual-tactile sensor in a robotic phalange. Our design features: a gel covered with a textured fabric skin that improves durability and contact signal strength; a compact integration of the GelSight sensor optics; and an improved illumination over a larger tactile area.
Despite the improved wear resistance, the sensor still ages over use. We propose four metrics to track this aging process and create a calibration framework to regularize sensor output over time. We show that, while the sensor degrades minimally over the course of several thousand grasps, the digital calibration procedure is able to condition the sensor output to improve its usable life-span.
\myparagraph{Sensor Functionality.}
\change{The sensor outputs images of tactile imprints that encode shape and texture of the object at contact. For example, contact geometry in pixel space could be used in combination with knowledge of grasping force and gel material properties to infer 3D local object geometry. If markers are placed on the gel surface, marker flow can be used to estimate object hardness~\cite{GelSightIROS16} or shear forces~\cite{GelSightShear}.}
These quantities, as well as the sensor's calibrated image output, can be used directly in model-based or learning-based approaches to robot grasping and manipulation. This information could be used to track object pose, inform a data-driven classifier to predict grasp stability, or as real-time observations in a closed-loop regrasp policy~\cite{Frank_Maria_regrasp}.
\myparagraph{Applications in robotic dexterity.}
\change{We anticipate that GelSlim's unique form factor will facilitate the use of these sensing modalities in a wide variety of applications -- especially in cluttered scenarios where visual feedback is lacking, where access is limited, or where difficult-to-observe contact forces play a key role. We are especially interested in using real-time contact geometry and force information to monitor and control tasks that require in-hand dexterity and reactivity, such as picking a tool in a functional grasp and then using it, or grasping a nut and screwing it onto a bolt. Ultimately, these contact-rich tasks can only be robustly tackled with tight integration of sensing and control. While the presented solution is just one path forward, we believe that high-resolution tactile sensors hold particular promise.}
\bibliographystyle{IEEEtranN}
{\footnotesize
\section{INTRODUCTION}\label{sec:1}
Whenever the Fourier law obtains (here confined to the linear first-order version $\mathbf{J_q}=-\kappa \nabla T(\mathbf{r})$, where $\mathbf{J_q}$ is the heat current vector, $\kappa$ the thermal conductivity and $T(\mathbf{r})$ the temperature at coordinate $\mathbf{r}$), Fourier maintained that \cite[Sec.III, no. 57-64, pp.41-45]{four1} (a) net heat energy flow cannot occur in the absence of a temperature gradient, and (b) net heat flow occurs from hot to cold temperature regions that are connected if a temperature gradient exists. With the implication of local behavior, his postulates (a) and (b) are taken to imply
\begin{equation}\label{e:1}
\mathbf{J_q}\cdot\nabla T \le 0,
\end{equation}
where (a) and (b) taken together constitute the Fourier (\textbf{F}) principle in (\ref{e:1}). Fourier and his followers claim that conductive heat flow is local in nature (within the limits of molecular volumes and particle interaction times), with (\ref{e:1}) obtaining.
Benofy and Quay \cite[p.11]{bq}, following Fourier, have argued that the Fourier law is essentially local in nature: whenever a temperature gradient is present, there can be a flow of heat, but there cannot be such
i,\mathbb{H}}^{R}})=[q_{i}]$. So, $q_{i}$ is a right eigenvalue of finite type.
\vskip 0.1 cm
Conversely, set $E_{T}^{\sigma}=\{q_{1}, q_{2},\ldots,q_{n}\}$, where $q_{i}$ is a right eigenvalue of $T$ of finite type for all $i\in\{1,2,\ldots,n\}$.
Applying \cite[Theorem 5.6]{NCFCBO}, we have
\begin{align*}\mathbb{I}_{R(P_{\sigma})}=\displaystyle \sum_{i=1}^{n}P_{[q_{i}]}|_{R(P_{\sigma})}.\end{align*}
Since $P_{[q_{i}]}P_{[q_{j}]}=0$ for all $i\neq j$, then
\begin{align*}R(P_{\sigma})=R(P_{[q_{1}]})\oplus R(P_{[q_{2}]})\oplus ...\oplus R(P_{[q_{n}]}).\end{align*}
This implies that
\begin{align*}\dim (R(P_{\sigma}))=\displaystyle\sum_{i=1}^{n}\dim (R(P_{[q_{i}]})).\end{align*}
In particular, we have $P_{\sigma}$ which is a finite rank operator.\qed
\begin{remark}
{\rm In complex spectral theory, much attention has been paid to eigenvalues of finite type, see \cite{charfi,Gohberg,J1,J2,Lutgen}. They are useful for the study of the essential spectrum of certain operator matrices. We refer to \cite{charfi} for this point on the two-group transport operators. More precisely, let $V_{\mathbb{C}}$ be a complex Banach space and let $T$ be a closed operator in $V_{\mathbb{C}}$. The Browder resolvent set of $T$ is given by
\begin{align*}\rho_{B}(T):=\rho(T)\cup\sigma_{d}(T),\end{align*}
where we use the notation $\rho(\cdot)$ for the resolvent set of $T$ and $\sigma_{d}(\cdot)$ for the set of eigenvalues of finite type of $T$. In fact, the usual resolvent
\begin{align*}R_{\lambda}(A):=(A-\lambda)^{-1}\end{align*}
can be extended to $\rho_{B}(T)$, e.g.~\cite{Lutgen}. Motivated by this, \cite{charfi} gives a version of the Frobenius-Schur factorization using the Browder resolvent. This makes it possible to study the essential spectrum of several types of operator matrices. In this paper, we have described the discrete $S$-spectrum in the quaternionic setting. In this regard, as in the complex case, we can define the spherical Browder resolvent. Although we do not study it in this paper, we will cover it in a future article.}
\end{remark}
\section{Some results on the Weyl $S$-spectrum}\label{sec:3}
In this section, we develop a deeper understanding of the concept of the Weyl $S$-spectrum of a bounded right linear operator. More precisely, we describe the boundary of the $S$-spectrum. Likewise, we deal with the particular case of the spectral theorem. To begin with, we recall:
\begin{definition}\cite{BK2}{\rm Let $T\in\Bc(V_{\mathbb{H}}^{R})$. The Weyl $S-$spectrum is the set
\begin{align*}\sigma_{W}^{S}(T)=\displaystyle \bigcap_{K\in\mathcal{K}(V_{\mathbb{H}}^{R})}\sigma_{S}(T+K).\end{align*}}
\end{definition}
\vskip 0.1 cm
The essential and the Weyl $S$-spectra are studied using Fredholm theory, see \cite{BK,BK2}. We refer to \cite{B1} for the investigation of Fredholm and Weyl elements with respect to a quaternionic Banach algebra homomorphism.
\begin{definition}
{\rm A Fredholm operator is an operator $T\in\Bc(V_{\mathbb{H}}^{R})$ such that $N(T)$ and $V_{\mathbb{H}}^{R}/R(T)$ are finite dimensional. We will denote by $\Phi(V_{\mathbb{H}}^{R})$ the set of all Fredholm operators.}
\end{definition}
\vskip 0.1 cm
From \cite{BK,BK2}, we have
\begin{align*}\Phi(V_{\mathbb{H}}^{R})=\Phi_{l}(V_{\mathbb{H}}^{R})\cap\Phi_{r}(V_{\mathbb{H}}^{R})\end{align*}
where
\begin{align*}\Phi_{l}(V_{\mathbb{H}}^{R})=\Big\{T\in \Bc(V_{\mathbb{H}}^{R}):\mbox{ R(T) is closed and }\dim (N(T))<\infty\Big\}\end{align*}
and
\begin{align*}\Phi_{r}(V_{\mathbb{H}}^{R})=\Big\{T\in \Bc(V_{\mathbb{H}}^{R}):\mbox{ R(T) is closed and }\dim (N(T^{\dag}))<\infty\Big\}.\end{align*}
Let $T\in \Phi_{l}(V_{\mathbb{H}}^{R})\cup \Phi_{r}(V_{\mathbb{H}}^{R})$. Then, the index of $T$ is given by
\begin{align*}i(T):=\dim N(T)-\dim(V_{\mathbb{H}}^{R}/R(T)).\end{align*}
\begin{theorem}\cite{BK,BK2} \label{t:5}
Let $T\in\Bc(V_{\mathbb{H}}^{R})$. Then,
\begin{align*}\sigma_{e}^{S}(T)=\mathbb{H}\backslash \Phi_{T}\mbox{ and }\sigma_{W}^{S}(T)=\mathbb{H}\backslash W_{T}\end{align*}
where
\begin{align*}\Phi_{T}:=\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in\Phi(V_{\mathbb{H}}^{R})\Big\}\end{align*}
\mbox{ and }
\begin{align*}W_{T}:=\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in\Phi(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))=0\Big\}.\end{align*}
\end{theorem}
\begin{remark} {\rm Let $V_{\mathbb{H}}^{R}$ be a quaternionic space and $T\in\mathcal{B}(V_{\mathbb{H}}^{R})$.
\begin{enumerate}
\item Note that, in general, we have
\begin{align*}\sigma_{e}^{S}(T)\subset\sigma_{W}^{S}(T)=\sigma_{1,W}^{S}(T)\cup\sigma_{2,W}^{S}(T)\subset \sigma_{S}(T)\backslash\sigma_{d}(T),\end{align*}
where
\begin{align*}\sigma_{1,W}^{S}(T):=\mathbb{H}\backslash\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in \Phi_{l}(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))\leq 0\Big\}\end{align*}
and
\begin{align*}\sigma_{2,W}^{S}(T):=\mathbb{H}\backslash\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in \Phi_{r}(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))\geq 0\Big\}.\end{align*}
\noindent In particular, $\sigma_{W}^{S}(T)$ does not contain eigenvalues of finite type.
\item In \cite{B1}, one proves that $q\longmapsto i(Q_{q}(T))$ is constant on any component of $\Phi_{T}$. In this way, we see that if $\Phi_{T}$ is connected, then
\begin{align*}\sigma_{e}^{S}(T)=\sigma_{W}^{S}(T).\end{align*}
\end{enumerate}}
\end{remark}
\vskip 0.3 cm
\noindent The first result of this section is the following theorem.
\begin{theorem}\label{t:4}
Let $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$. Then,
\begin{align*}\partial\sigma_{W}^{S}(T)\subset\sigma_{1,W}^{S}(T).\end{align*}
In particular, if $\Phi_{T}$ is connected, then
\begin{align*}\partial\sigma_{e}^{S}(T)=\partial\sigma_{W}^{S}(T)\subset\Big\{q\in\mathbb{H}:\ Q_{q}(T)\not\in \Phi_{l}(V_{\mathbb{H}}^{R})\Big\}.\end{align*}
\end{theorem}
To prove Theorem \ref{t:4}, we first study the concept of the minimum modulus. Let $V_{\mathbb{H}}^{R}$ be a separable right Hilbert space and $T\in \Bc(V_{\mathbb{H}}^{R})$. The minimum modulus of $T$ is given by
\begin{align*}\mu(T):=\displaystyle\inf_{\|x\|=1}\|Tx\|.\end{align*}
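\noindent We recall that, by definition,
\begin{align*}\|Tx\|\geq\mu(T)\|x\|\quad\mbox{ for all }x\in V_{\mathbb{H}}^{R},\end{align*}
and that, exactly as in the complex Hilbert space setting, $\mu(T)>0$ if and only if $T$ is injective and $R(T)$ is closed.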
\vskip 0.3 cm
\noindent To begin with, we give the following lemma.
\begin{lemma}\label{l1}
Let $T$ and $S$ be two bounded right linear operators on a right quaternionic Hilbert space. Then,
\begin{enumerate}
\item If $\|T-S\|<\mu(T)$, then $\mu(S)>0$ and $\overline{R(S)}$ is not a proper subset of $\overline{R(T)}$.
\item If $\|T-S\|<\frac{\mu(T)}{2}$, then $\overline{R(S)}$ is not a proper subset of $\overline{R(T)}$ and $\overline{R(T)}$ is not a proper subset of $\overline{R(S)}$.
\end{enumerate}
\end{lemma}
\proof The proof is the same as in the complex Banach space setting; see Lemmas 2.3 and 2.4 in \cite{HAE}.\qed
\vskip 0.3 cm
For $T\in \mathcal{B}(V_{\mathbb{H}}^{R})$, $q\in\mathbb{H}$ and $\varepsilon>0$ we set:
\begin{align*}\Oc(T,q,\varepsilon):=\Big\{q'\in\mathbb{H}:\ 2\vert {\rm Re}(q)-{\rm Re}(q')\vert\|T\|+\vert |q'|^{2}-|q|^{2}\vert<\varepsilon\Big\}.\end{align*}
It is clear that $\Oc(T,q,\varepsilon)$ is an open set in $\mathbb{H}$.
\begin{corollary}Let $T\in \Bc(V_{\mathbb{H}}^{R})$ and $q_{0}\in \rho_{S}(T)$. Then, $q\in \rho_{S}(T)$ for each $q\in \Oc(T,q_{0},\mu(Q_{q_{0}}(T)))$.
\end{corollary}
\proof Let $q\in \Oc(T,q_{0},\mu(Q_{q_{0}}(T)))$. Then,
\begin{align*}\|Q_{q}(T)-Q_{q_{0}}(T)\|
&=\|2({\rm Re}(q_{0})-{\rm Re}(q))T+(|q|^{2}-|q_{0}|^{2})\mathbb{I}_{V_{\mathbb{H}}^{R}}\|\\
&\leq 2\vert {\rm Re}(q_{0})-{\rm Re}(q)\vert\|T\|+\vert|q|^{2}-|q_{0}|^{2}\vert\\
&<\mu(Q_{q_{0}}(T)).\end{align*}
We can apply Lemma \ref{l1} to conclude that
\begin{align*}\mu(Q_{q}(T))>0\mbox{ and }\overline{R(Q_{q}(T))}=\overline{R(Q_{q_{0}}(T))}=V_{\mathbb{H}}^{R}.\end{align*}
By \cite[Proposition 3.5]{BK}, $R(Q_{q}(T))$ is closed. Hence, $q\in \rho_{S}(T)$.\qed
\noindent We recall:
\begin{lemma}\cite[Lemma 7.3.9]{DFIS} \label{l2}Let $n\in\mathbb{N}$ and $q,s\in\mathbb{H}$. Set
\begin{align*}P_{2n}(q)=q^{2n}-2{\rm Re}(s^{n})q^{n}+|s^{n}|^{2}.\end{align*}
Then,
\begin{align*}P_{2n}(q)
&=\Qc_{2n-2}(q)(q^{2}-2{\rm Re}(s)q+|s|^{2})\\
&=(q^{2}-2{\rm Re}(s)q+|s|^{2})\Qc_{2n-2}(q),\end{align*}
where $\Qc_{2n-2}(q)$ is a polynomial of degree $2n-2$ in $q$.
\end{lemma}
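\noindent As a simple check, the case $n=2$ of Lemma \ref{l2} can be written out explicitly: using ${\rm Re}(s^{2})=2{\rm Re}(s)^{2}-|s|^{2}$ and $|s^{2}|=|s|^{2}$, one finds
\begin{align*}q^{4}-2{\rm Re}(s^{2})q^{2}+|s^{2}|^{2}=\big(q^{2}+2{\rm Re}(s)q+|s|^{2}\big)\big(q^{2}-2{\rm Re}(s)q+|s|^{2}\big),\end{align*}
so that $\Qc_{2}(q)=q^{2}+2{\rm Re}(s)q+|s|^{2}$.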
\vskip 0.3 cm
\noindent \emph{\bf Proof of Theorem \ref{t:4}}
\noindent Set:
\begin{align*}f(T):=\displaystyle\sup_{K\in \mathcal{K}(V_{\mathbb{H}}^{R})}\mu(T+K).\end{align*}
Arguing as in the complex case, one shows that $f(T)>0$ if and only if
\begin{align*}T\in\Phi_{l}(V_{\mathbb{H}}^{R})\mbox{ and }\dim N(T)\leq\dim(V_{\mathbb{H}}^{R}/R(T)).\end{align*}
Now, since $\sigma_{e}^{S}(T)$ is not empty (see, e.g., \cite[Proposition 7.14]{BK}) and $\sigma_{e}^{S}(T)\subset\sigma_{W}^{S}(T)$, the boundary $\partial\sigma_{W}^{S}(T)$ is not empty. Let us then take an element $p$ in $\partial\sigma_{W}^{S}(T)$ and assume that $p\not \in\sigma_{1,W}^{S}(T)$. Then,
\begin{align*} f(Q_{p}(T))>0.\end{align*}
So, there exists $K_{0}\in\mathcal{K}(V_{\mathbb{H}}^{R})$ such that
\begin{align*}\mu(Q_{p}(T)+K_{0})>0.\end{align*}
\noindent Since $\Oc(T,p,\frac{\mu(Q_{p}(T)+K_{0})}{2})$ is an open neighborhood of $p$ and
\begin{align*}p\in\overline{\Big\{q\in\mathbb{H}:\ Q_{q}(T)\in\Phi(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{q}(T))=0\Big\}},\end{align*}
then there exists $p_{0}\in \Oc(T,p,\frac{\mu(Q_{p}(T)+K_{0})}{2})$ such that
\begin{align*}p_{0}\in W_{T}.\end{align*}
\noindent On the other hand,
\begin{align*}\|Q_{p_{0}}(T)+K_{0}-Q_{p}(T)-K_{0}\|
&\leq \vert |p_{0}|^{2}-|p|^{2}\vert+2\vert Re(p)-Re(p_{0})\vert\|T\|\\
&<\displaystyle \frac{\mu(Q_{p}(T)+K_{0})}{2}.\end{align*}
Applying Lemma \ref{l1}, we first get $\mu(Q_{p_{0}}(T)+K_{0})>0$, so that $N(Q_{p_{0}}(T)+K_{0})=\{0\}$. Since $p_{0}\in W_{T}$ and $K_{0}$ is compact, the operator $Q_{p_{0}}(T)+K_{0}$ is Fredholm of index zero, and hence
\begin{align*}\dim(V_{\mathbb{H}}^{R}/R(Q_{p_{0}}(T)+K_{0}))=0,\end{align*}
that is, $R(Q_{p_{0}}(T)+K_{0})=V_{\mathbb{H}}^{R}$. Applying Lemma \ref{l1} once more, we obtain $\overline{R(Q_{p}(T)+K_{0})}=V_{\mathbb{H}}^{R}$; since $\mu(Q_{p}(T)+K_{0})>0$, the range $R(Q_{p}(T)+K_{0})$ is closed, and therefore
\begin{align*}R(Q_{p}(T)+K_{0})=V_{\mathbb{H}}^{R}.\end{align*}
In this way, we see that
\begin{align*}Q_{p}(T)+K_{0}\in\Phi(V_{\mathbb{H}}^{R})\mbox{ and }i(Q_{p}(T)+K_{0})=0.\end{align*}
This implies that $p\not\in \sigma_{W}^{S}(T)$, which contradicts the fact that $p\in\partial\sigma_{W}^{S}(T)\subset\sigma_{W}^{S}(T)$.
\vskip 0.1 cm
\noindent The rest of the proof follows immediately from \cite[Theorem 5.13]{B1}.\qed
\vskip 0.3 cm
We now establish a spectral mapping theorem for powers of an operator with respect to the essential $S$-spectrum.
\begin{theorem}
Let $T\in\Bc(V_{\mathbb{H}}^{R})$. Then,
\begin{align*}\sigma_{e}^{S}(T^{n})=\Big\{q^{n}\in\mathbb{H}:\ q\in\sigma_{e}^{S}(T)\Big\}=(\sigma_{e}^{S}(T))^{n}.\end{align*}
\end{theorem}
\proof
According to \cite[Lemma 3.10]{DFIS} and the proof of \cite[Theorem 7.3.11]{DFIS} we have
\begin{align*}T^{2n}-2{\rm Re}(q)T^{n}+|q|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}=\displaystyle\prod_{j=0}^{n-1}
(T^{2}-2{\rm Re}(q_{j})T+|q_{j}|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}),\end{align*}
where $q_{j}, j=0,...,n-1$ are the solutions of $p^{n}=q$ in the complex plane $\mathbb{C}_{I_{q}}$. Let $q\in\sigma_{e}^{S}(T^{n})$. Then, $Q_{q}(T^{n})\not\in\Phi(V_{\mathbb{H}}^{R})$.
Applying \cite[Theorem 6.13]{BK}, we infer that there exists $i\in\{0,1,...,n-1\}$ such that
\begin{align*}Q_{q_{i}}(T)\not\in \Phi(V_{\mathbb{H}}^{R}).\end{align*}
Therefore, $q_{i}\in\sigma_{e}^{S}(T)$. In this way, we see that $q=q_{i}^{n}\in(\sigma_{e}^{S}(T))^{n}$. To prove the reverse inclusion, we consider $p=q^{n}$, where $q\in\sigma_{e}^{S}(T)$.
By Lemma \ref{l2} and \cite[Theorem 7.3.7]{DFIS}, we get
\begin{align*}T^{2n}-2{\rm Re}(q^{n})T^{n}+\vert q^{n}\vert^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}
&=\Qc_{2n-2}(T)(T^{2}-2{\rm Re}(q)T+|q|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}})\\
&=(T^{2}-2{\rm Re}(q)T+|q|^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}})\Qc_{2n-2}(T).\end{align*}
\noindent Since $Q_{q}(T)\not\in \Phi(V_{\mathbb{H}}^{R})$, we can apply \cite[Corollary 6.14]{BK} to deduce that
\begin{align*}T^{2n}-2{\rm Re}(q^{n})T^{n}+\vert q^{n}\vert^{2}\mathbb{I}_{V_{\mathbb{H}}^{R}}\not\in \Phi(V_{\mathbb{H}}^{R}).\end{align*}
So, $p\in\sigma_{e}^{S}(T^{n})$. \qed
\vskip 0.1 cm
\section{Introduction} \label{section:introduction}
\subsection*{The Markov group conjecture and its decoupled version}
In 1967, the following problem was posed and partially analysed by Kendall \cite{Kendall1967} and Speakman \cite{Speakman1967}.
\begin{conjecture}[Markov group conjecture] \label{conj:markov-groups}
Let $T = (T_t)_{t \in [0,\infty)}$ be a Markovian $C_0$-semigroup on $\ell^1$ and assume that $T_1: \ell^1 \to \ell^1$ is bijective (i.e., $T$ extends to a $C_0$-group). Then $T$ has bounded generator.
\end{conjecture}
Here, \emph{Markovian} (or \emph{Markov}) means that, for each $t \ge 0$, the operator $T_t$ is positive (in the sense that $T_t x \ge 0$ for all $x \ge 0$) and norm-preserving on the positive cone.
For a few classes of semigroups the conjecture is easy to prove (see \cite[Section~3]{Kendall1967}), and in the first years after the formulation of the conjecture, partial results were obtained by various authors \cite{Williams1969, Cuthbert1972, Cuthbert1975, Mountford1977}. Afterwards though, progress on the problem has been slow. An overview of the problem was given by Kingman on several occasions; see \cite[Section~2]{Kingman1983}, \cite{Kingman2006}, \cite[Section~9]{Kingman2006a}. In attempts to find a counterexample, a common approach is to consider finite dimensional matrices $Q_n$ that generate Markov semigroups on $\mathbb{R}^{d_n}$ such that $\norm{e^{-Q_n}} \le M$ for all indices $n$ and a fixed constant $M$. If one succeeded in choosing $Q_n$ such that $\norm{Q_n} \to \infty$, the block diagonal operator on $\ell^1$ with block entries $Q_n$ would generate a Markov semigroup on $\ell^1$ that disproves the conjecture.
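To make this strategy explicit, here is a minimal sketch of the direct sum construction in our notation (identifying $\ell^1$ with the $\ell^1$-direct sum of the spaces $\mathbb{R}^{d_n}$):
\begin{align*}
	Q := \bigoplus_{n \in \mathbb{N}} Q_n, \qquad T_t := \bigoplus_{n \in \mathbb{N}} e^{tQ_n} \qquad \text{for } t \in [0,\infty).
\end{align*}
Each $T_t$ is Markovian, the semigroup $(T_t)_{t \in [0,\infty)}$ is strongly continuous since all blocks are contractive, $T_1$ is bijective with inverse $\bigoplus_n e^{-Q_n}$ of norm $\sup_n \norm{e^{-Q_n}} \le M$, and the generator of $T$ acts as $Q_n$ on the $n$-th block; so if one could arrange $\sup_n \norm{Q_n} = \infty$, this generator would be unbounded.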
Such direct sum semigroups were already considered in the original papers by Kendall and Speakman \cite{Kendall1967, Speakman1967}, and the strategy to use them for constructing a counterexample was further discussed by Kingman in \cite[Section~2]{Kingman1983} and \cite[Sections~2 and~3]{Kingman2006}. Phrased in other words, the goal of this strategy is to find a counterexample to the following slightly weaker conjecture. Motivated by the diagonal construction described above, one could call it the \emph{decoupled Markov group conjecture}.
For each $d \in \mathbb{N}$, endow $\mathbb{R}^{d \times d}$ with the operator norm induced by the $1$-norm on $\mathbb{R}^d$.
\begin{conjecture} \label{conj:markov-groups-finite-uniform}
Let $M \ge 1$ be a real number. Then there exists a real number $C = C(M) \ge 0$ with the following property:
For every $d \in \mathbb{N}$ and for every matrix $Q \in \mathbb{R}^{d \times d}$ that satisfies $\norm{e^{-Q}} \le M$ and whose associated matrix semigroup $(e^{tQ})_{t \in [0,\infty)}$ is column stochastic, we have $\norm{Q} \le C$.
\end{conjecture}
This conjecture was explicitly formulated by Kingman in \cite[p.\ 186]{Kingman1983}.
\begin{remarks_no_number}
\begin{enumerate}[(a)]
\item The main point of Conjecture~\ref{conj:markov-groups-finite-uniform} is that $C(M)$ does not depend on the dimension $d$. It is very easy to prove a dimension dependent estimate: namely, we have
\begin{align*}
\norm{Q} \le 2 \modulus{\trace{Q}} \le 2d \log(M)
\end{align*}
for each matrix $Q \in \mathbb{R}^{d \times d}$ that satisfies the assumptions in Conjecture~\ref{conj:markov-groups-finite-uniform} (use that $e^{\modulus{\trace Q}} = e^{-\trace Q} = \det(e^{-Q}) \le M^d$). A short sketch of this estimate, and of the equivalence used in remark~(b) below, is given after these remarks.
\item Note that, if $(e^{tQ})_{t \in [0,\infty)}$ is column stochastic and $M \ge 1$, then the estimate $\norm{e^{-Q}} \le M$ is equivalent to $\sup_{t \in [-1,\infty)} \norm{e^{tQ}} \le M$. When we discuss bounded positive semigroups (rather than only column stochastic ones) below, we will use this latter estimate rather than $\norm{e^{-Q}} \le M$.
\end{enumerate}
\end{remarks_no_number}
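The following is a short sketch of the two elementary observations used in remarks~(a) and~(b) above; the only input is that a column stochastic matrix has norm $1$ with respect to the operator norm induced by the $1$-norm. For~(a): if $(e^{tQ})_{t \in [0,\infty)}$ is column stochastic, then the off-diagonal entries of $Q$ are non-negative and every column of $Q$ sums to $0$, so
\begin{align*}
	\norm{Q} = \max_{j} \sum_{i} \modulus{Q_{ij}} = \max_{j} 2 \modulus{Q_{jj}} \le 2 \modulus{\trace{Q}};
\end{align*}
moreover, $e^{\modulus{\trace{Q}}} = e^{-\trace{Q}} = \det(e^{-Q}) \le \norm{e^{-Q}}^d \le M^d$ since the modulus of every eigenvalue of $e^{-Q}$ is at most $\norm{e^{-Q}}$. For~(b): for $t \ge 0$ the operator $e^{tQ}$ is column stochastic and thus has norm $1$, while for $t \in [-1,0]$ we can write $e^{tQ} = e^{(1+t)Q}e^{-Q}$ and estimate $\norm{e^{tQ}} \le \norm{e^{(1+t)Q}} \norm{e^{-Q}} \le M$; the converse estimate $\norm{e^{-Q}} \le \sup_{t \in [-1,\infty)} \norm{e^{tQ}}$ is trivial.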
\subsection*{Main result}
The main objective of this paper is to prove Conjecture~\ref{conj:markov-groups-finite-uniform}. In fact, though, we show a much stronger result which has nothing to do with the finite dimensional spaces $\mathbb{R}^d$ nor with the choice of the $1$-norm on them. We prove:
\begin{theorem} \label{thm:main-result}
Let $M \ge 1$ be a real number. Then there exists a universal constant $C = C(M) \in [0,\infty)$ (depending solely on $M$) with the following property:
For every complex Banach lattice $E$ and every bounded linear operator $Q$ on $E$ that satisfies
\begin{align}
\label{eq:norm-estimate}
\sup_{t \in [-1,\infty)} \norm{e^{tQ}} \le M
\end{align}
and whose associated semigroup $(e^{tQ})_{t \in [0,\infty)}$ is positive, we have $\norm{Q} \le C$.
\end{theorem}
The essence of the theorem is: if one knows a priori that the generator $Q$ of a bounded positive $C_0$-semigroup is bounded, then one can estimate the norm $\norm{Q}$ by a constant that merely depends on the number $\sup_{t \in [-1,\infty)} \norm{e^{tQ}}$.
\subsection*{Relation to the Markov group conjecture}
On finite dimensional spaces all operators are bounded, so Theorem~\ref{thm:main-result} implies that Conjecture~\ref{conj:markov-groups-finite-uniform} is true, and we conclude that one cannot disprove the Markov group conjecture~\ref{conj:markov-groups} by using a block diagonal construction that consists of finite dimensional blocks (or, more generally, of blocks that have bounded generator).
It is not immediately clear (at least not to the author) whether the Markov group conjecture~\ref{conj:markov-groups} follows from Theorem~\ref{thm:main-result}. It was mentioned by Kingman in \cite[pages 186-187]{Kingman1983} that it might be possible to derive Conjecture~\ref{conj:markov-groups} from the a priori weaker statement in Conjecture~\ref{conj:markov-groups-finite-uniform} by means of approximation, but in a later paper the same author noted that it is actually not clear whether~\ref{conj:markov-groups} and~\ref{conj:markov-groups-finite-uniform} are equivalent \cite[beginning of Section~4]{Kingman2006}.
If there is indeed a way to derive the Markov group conjecture~\ref{conj:markov-groups} from its decoupled version~\ref{conj:markov-groups-finite-uniform} or more generally from Theorem~\ref{thm:main-result} by means of approximation, this endeavour is necessarily subject to considerable theoretical restrictions; see the remark at the end of Section~\ref{section:proof-of-main-result} for details.
In \cite[page 6]{Kingman2006} Kingman asked whether the assertion of Theorem~\ref{thm:main-result} holds if one only considers the single infinite-dimensional Banach lattice $\ell^1$ (in the notation of \cite[Section~3]{Kingman2006}, he asked whether $K(m) < \infty$ for each number $m > 1$); Theorem~\ref{thm:main-result} shows that the answer is positive.
\subsection*{Organization of the paper}
We prove Theorem~\ref{thm:main-result} in Section~\ref{section:proof-of-main-result}. In Section~\ref{section:non-positive-semigroups} we briefly explain that a similar result also holds for certain classes of non-positive semigroups on $L^p$ if $p \not= 2$. In the appendix we briefly recall a few facts about filter products of Banach spaces; these are needed in the proof of our main result.
\subsection*{Prerequisites}
We assume the reader to be familiar with the basic theories of $C_0$-semigroups (see for instance \cite{Engel2000}) and Banach lattices (see for instance \cite{Schaefer1974} and \cite{Meyer-Nieberg1991}). We call a linear operator $T$ on a Banach lattice \emph{positive} if $Tf \ge 0$ whenever $f \ge 0$ (i.e., no strict positivity is required in any sense).
\section{Proof of the main result} \label{section:proof-of-main-result}
The subsequent proof uses the concept of a \emph{filter product} of a sequence of Banach lattices. Readers not familiar with this technology can find a (very) brief introduction, as well as several references, in Appendix~\ref{appendix:reminder-of-filter-products}.
\begin{proof}[Proof of Theorem~\ref{thm:main-result}]
Fix $M$ and assume that such a constant $C = C(M)$ does not exist. Then we can find a sequence of complex Banach lattices $E_n$ and a sequence of bounded linear operators $Q_n$ on $E_n$ such that: each $Q_n$ generates a positive semigroup on $E_n$, each $Q_n$ satisfies the norm estimate~\eqref{eq:norm-estimate} and each $Q_n$ has norm $\norm{Q_n} \ge n$. We set $R_n := \frac{Q_n}{\norm{Q_n}}$ for each $n$.
Let $\mathcal{F}$ denote the Fr\'{e}chet filter on $\mathbb{N}$ (or any other filter which is finer than the Fr\'{e}chet filter) and let $E := (E_n)^\mathcal{F}$ denote the $\mathcal{F}$-product of the spaces $E_n$ (see Appendix~\ref{appendix:reminder-of-filter-products}). Then $E$ is a complex Banach lattice. We define $R := (R_n)^\mathcal{F}$, i.e., $R$ is the bounded linear operator on $E$ given by $R(x_n)^\mathcal{F} = (R_nx_n)^\mathcal{F}$ for each norm bounded sequence $(x_n)$ of vectors $x_n \in E_n$. Since each $R_n$ has norm $1$, we also have $\norm{R} = 1$.
We now derive a contradiction by showing that we must actually have $R = 0$. To this end, observe that
\begin{align*}
e^{tR} = (e^{tR_n})^\mathcal{F}
\end{align*}
for all $t \in \mathbb{R}$. For each $t \in [0,\infty)$ and each $n \in \mathbb{N}$ we note that the operator $e^{tR_n} = e^{\frac{t}{\norm{Q_n}}Q_n}$ is positive and has norm at most $M$; hence, $e^{tR}$ is positive and satisfies $\norm{e^{tR}} \le M$ for each $t \in [0,\infty)$. Therefore, every spectral value of $R$ has real part $\le 0$, and it follows from infinite-dimensional Perron--Frobenius theory that $\sigma(R) \cap i \mathbb{R} \subseteq \{0\}$ (see \cite[Corollary~C-III-2.13]{Nagel1986}).
Now comes the essential point: we claim that the group $(e^{tR})_{t \in \mathbb{R}}$ is also bounded for negative times. To see this, let $t > 0$. For every index $n \ge t$ we then have $\norm{Q_n} \ge n \ge t$, so
\begin{align*}
\norm{e^{-tR_n}} = \norm{e^{-\frac{t}{\norm{Q_n}} Q_n}} \le M
\end{align*}
since $-\frac{t}{\norm{Q_n}} \in [-1,0]$ and since $Q_n$ satisfies~\eqref{eq:norm-estimate}. On the $\mathcal{F}$-product $E$, only the norms for large indices $n$ matter, so $\norm{e^{-tR}} \le M$.
As $t > 0$ was arbitrary, the group $(e^{tR})_{t \in \mathbb{R}}$ is indeed bounded.
Thus, the spectrum $\sigma(R)$ is a subset of the imaginary axis and therefore $\sigma(R) = \{0\}$. Now we use the boundedness of the group $(e^{tR})_{t \in \mathbb{R}}$ a second time: as $\sigma(R) = \{0\}$, it follows from the $C_0$-group version of Gelfand's $T = \id$ theorem \cite[Corollary~4.4.11]{Arendt2011} that $e^{tR} = \id_E$ for each time $t$. So $R = 0$, a contradiction.
\end{proof}
\begin{remarks_no_number}
\begin{enumerate}[(a)]
\item The arguments in the proof above that use Perron-Frobenius theory and Gelfand's $T = \id$ theorem actually show that, if a bounded linear operator $Q$ generates a bounded group $(e^{tQ})_{t \in \mathbb{R}}$ which is positive for $t \ge 0$, then $Q = 0$. Hence, we can choose $C(1) = 0$ in the theorem.
\item The proof of Theorem~\ref{thm:main-result} demonstrates how infinite dimensional methods can be of use to solve finite dimensional problems: if the spaces $E_n$ in the proof are $\mathbb{C}^{d_n}$ and endowed with the $1$-norm (meaning that we prove Conjecture~\ref{conj:markov-groups-finite-uniform} rather than the more general Theorem~\ref{thm:main-result}), their $\mathcal{F}$-product $E$ will still be an infinite dimensional Banach lattice. Even if we replace $\mathcal{F}$ with an ultrafilter, $E$ will be infinite dimensional unless the dimensions $d_n$ are bounded.
\item In a sense, Theorem~\ref{thm:main-result} can be considered as a Tauberian theorem: on a class of operators, we consider the transformation
\begin{align*}
Q \; \mapsto \; \Big([-1,\infty) \ni t \mapsto e^{tQ}\Big)
\end{align*}
which maps each operator to a certain operator-valued function. Theorem~\ref{thm:main-result} then says that $Q$ can be bounded uniformly by a norm bound of its transform. This interpretation of Theorem~\ref{thm:main-result} was kindly brought to my attention by Wolfgang Arendt.
\item A similar approach as in the proof above, using Perron--Frobenius theory and Gelfand's $T = \id$ theorem to show that a given semigroup generator equals $0$, was used in \cite[Section~2]{Glueck2018} to give a new proof of a classical result of Sherman about lattice ordered $C^*$-algebras.
The same comments as at the end of \cite[Section~2]{Glueck2018} also apply to the proof above; in particular:
\item The Perron--Frobenius type theorem from \cite[Corollary~C-III-2.13]{Nagel1986} that we used in the proof relies on quite heavy machinery. However, we only use the result for semigroups with bounded generators, for which it is much simpler to prove -- see for instance \cite[Proposition~2.2]{Glueck2018}.
\item Our proof also uses Gelfand's $T = \id$ theorem for $C_0$-semigroups which is not quite trivial. But again, we apply this theorem only for semigroups with bounded generator -- and for these, it can be derived from the single operator version of Gelfand's $T = \id$ theorem, which is a bit simpler (see for instance \cite[Theorem~1.1]{Allan1989}).
\end{enumerate}
\end{remarks_no_number}
Let us comment once again on the connection between Theorem~\ref{thm:main-result} and the Markov group conjecture~\ref{conj:markov-groups}.
\begin{remark_no_number}
The following approach to the Markov group conjecture is tempting: given the $C_0$-semigroup $T$ in the conjecture, we could try to approximate it by a sequence of semigroups $T_n$ which are, say, also (sub-)Markovian (or at least positive and uniformly bounded) and which have bounded generators $Q_n$. If we manage to choose this approximation such that $\norm{e^{-tQ_n}} \le M$ for all indices $n$, then Theorem~\ref{thm:main-result} implies that $\norm{Q_n} \le C(M)$ for all $n$, and from this we can derive that the generator $Q$ of $T$ is bounded, too (provided that the approximation is sufficiently reasonable in the sense that the $Q_n$ converge to $Q$, say strongly on the domain of $Q$). This approach is also discussed by Kingman at the beginning of \cite[Section~4]{Kingman2006}.
Let us now explain how Theorem~\ref{thm:main-result} provides a new perspective on this idea. The discrete structure of $\ell^1$ is, of course, essential for the Markov group conjecture, since the conjecture is false on other $L^1$-spaces (consider for instance the rotation group on $L^1(\mathbb{T})$, where $\mathbb{T}$ denotes the complex unit circle). So where does discreteness enter the game?
For the application of Theorem~\ref{thm:main-result}, the discrete structure of $\ell^1$ does not matter since we proved the theorem for all Banach lattices. Hence, it is necessarily the approximation procedure where the discreteness of $\ell^1$ has to be used. So if the approximation approach is supposed to work, either the construction of the approximation itself or the proof of the property $\norm{e^{-Q_n}} \le M$ has to make use of the discreteness of $\ell^1$ in a fundamental way.
Note that classical approximations, such as the ones of Hille and Yosida (see \cite[Section~II-3.3]{Engel2000}), work on any Banach space. So we conclude that either such approximation procedures cannot be used in the approach discussed above, or the discreteness of $\ell^1$ has to be used to show that such a procedure allows an estimate of the type $\norm{e^{-Q_n}} \le M$ (which is not true for the Hille and the Yosida approximation on general $L^1$-spaces, as can again be seen by considering the rotation group on $L^1(\mathbb{T})$).
\end{remark_no_number}
\section{On non-positive semigroups} \label{section:non-positive-semigroups}
The only step in the proof of Theorem~\ref{thm:main-result} where we needed positivity of the semigroups was the application of a Perron--Frobenius type result to derive that the spectrum of $R$ intersects $i\mathbb{R}$ at most in $0$. There are, however, similar results for certain classes of non-positive semigroups:
Let $p \in [1,\infty)$, but $p \not= 2$, and consider the complex-valued space $L^p$ over an arbitrary measure space. If $A$ is the generator of a contractive, real and eventually norm continuous $C_0$-semigroup on $L^p$, then $\sigma(A) \cap i\mathbb{R} \subseteq \{0\}$; this was proved in \cite[Corollary~4.6 and Remark~4.8(i)]{Glueck2016}. (By \emph{real}, we mean that the semigroup operators map real-valued functions to real-valued functions; by \emph{contractive}, we mean that every semigroup operator has norm at most $1$.) So we can deduce the following theorem.
\begin{theorem} \label{thm:contractive}
Fix $p \in [1,\infty) \setminus \{2\}$ and a real number $M \ge 1$. Then there exists a universal constant $C = C(p,M) \in [0,\infty)$ (depending solely on $p$ and $M$) with the following property:
For every $L^p$-space (over an arbitrary measure space) and every bounded linear operator $Q$ on $L^p$ that satisfies $\norm{e^{-Q}} \le M$ and whose associated semigroup $(e^{tQ})_{t \in [0,\infty)}$ is real and contractive, we have $\norm{Q} \le C$.
\end{theorem}
We point out that the semigroup generated by $Q$ is real if and only if $Q$ itself is real. Our proof of Theorem~\ref{thm:contractive} uses spectral theory, and thus complex $L^p$-spaces. However, the theorem holds for real-valued $L^p$-spaces as well, even with the same constant $C(p,M)$. This follows from the fact that the complex extension of a bounded linear operator $T$ on a real-valued $L^p$-space has the same norm as $T$ itself \cite[Proposition~2.1.1]{Fendler1998}.
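To keep this self-contained, let us briefly sketch the first of these claims (an elementary observation, included here only for the reader's convenience): if $Q$ is real, then every partial sum of the series $e^{tQ} = \sum_{n \ge 0} \frac{t^n Q^n}{n!}$ maps real-valued functions to real-valued functions, and hence so does its norm limit $e^{tQ}$; conversely, if every operator $e^{tQ}$ is real, then so is
\begin{align*}
	Q = \lim_{t \downarrow 0} \frac{1}{t}\big(e^{tQ} - \id\big),
\end{align*}
where the limit exists with respect to the operator norm since $Q$ is bounded.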
\begin{proof}[Proof of Theorem~\ref{thm:contractive}]
The argument is very similar to the proof of Theorem~\ref{thm:main-result}, with two simple changes:
\begin{enumerate}[(1)]
\item The spaces $E_n$ are now $L^p$-spaces, and we need their filter product $E$ to be an $L^p$-space, too. Thus, we have to replace the Fr\'{e}chet filter $\mathcal{F}$ with a free ultrafilter $\mathcal{U}$ on $\mathbb{N}$ (see Subsection~\ref{subsection:ultrafilters} in the appendix).
\item Instead of Perron--Frobenius theory, we now derive the fact that $\sigma(R) \cap i \mathbb{R} \subseteq \{0\}$ from the results quoted before Theorem~\ref{thm:contractive}. This works since $E$ is an $L^p$-space for $p\not= 2$ and since the ultraproduct of real operators is again real.
\end{enumerate}
The rest of the proof is the same.
\end{proof}
\begin{remark_no_number}
Theorems~\ref{thm:main-result} and~\ref{thm:contractive} actually yield two independent reasons for Conjecture~\ref{conj:markov-groups-finite-uniform} to be true:
Theorem~\ref{thm:main-result} implies the conjecture since every column stochastic semigroup is positive and bounded. Independently of that, Theorem~\ref{thm:contractive} implies the conjecture since every column stochastic semigroup is real and contractive with respect to the $1$-norm on $\mathbb{R}^d$.
\end{remark_no_number}
We conclude the paper with the following simple example which demonstrates why the positivity assumption cannot be dropped in Theorem~\ref{thm:main-result} (without any replacement) and why the assumption $p \not= 2$ cannot be dropped in Theorem~\ref{thm:contractive} -- not even for finite dimensional spaces with fixed dimension.
\begin{example} \label{ex:rotation-in-2d}
Endow $\mathbb{C}^2$ with the Euclidean norm. For each $n \in \mathbb{N}$, consider the $2 \times 2$-matrix
\begin{align*}
Q_n :=
\begin{pmatrix}
0 & -n \\
n & \phantom{-}0
\end{pmatrix}.
\end{align*}
It has spectrum $\{-in,in\}$, and its operator norm (induced by the Euclidean norm on $\mathbb{C}^2$) is $\norm{Q_n} = n$. The matrix $Q_n$ generates the two-dimensional rotation group that is given by
\begin{align*}
e^{tQ_n} =
\begin{pmatrix}
\cos(nt) & -\sin(nt) \\
\sin(nt) & \phantom{-} \cos(nt)
\end{pmatrix}
\end{align*}
for each time $t \in \mathbb{R}$. Hence, $\norm{e^{tQ_n}} = 1$ for all $t \in \mathbb{R}$ and all $n \in \mathbb{N}$, so we cannot bound $\norm{Q_n} = n$ by a constant multiple of $\sup_{t \in [-1,\infty)} \norm{e^{tQ_n}} = 1$.
The reason why Theorem~\ref{thm:main-result} cannot be applied is that the semigroup $(e^{tQ_n})_{t \in [0,\infty)}$ is not positive, and Theorem~\ref{thm:contractive} cannot be applied since the semigroup is not contractive with respect to the $p$-norm for any $p \not= 2$.
\end{example}
\subsection*{Acknowledgements}
It is my pleasure to thank Markus Haase for bringing the Markov group conjecture to my attention.
\section{Introduction}
Holographic Space Time (HST) is a formalism for generalizing string theory to situations where the asymptotic regions of space-time are not frozen vacuum states.
In particular, it gives us a well defined holographic quantum theory of Big Bang cosmology\cite{holocosm}. In \cite{holoinflation}, two of the authors (TB and WF) introduced a model of HST, which they claimed could reproduce the results of slow-roll inflation. In this paper, we use results from \cite{malda},\cite{mcfaddenskenderis} and \cite{others} to prove and improve that claim. As a consequence we will show that {\it if the two-point functions of inflationary fluctuations coincide with those in a single-field slow-roll model, then they probe only coarse features of the underlying fundamental quantum theory. } Any model which produces small, approximately Gaussian, approximately $SO(1,4)$ covariant fluctuations yields two- and three-point functions determined by two unitary representations of $SO(1,4)$. A generic model has $9$ parameters: the scaling dimension of the scalar operator on the projective light cone (see below); the strength of the scalar and tensor Gaussian fluctuations; the normalizations of the $\langle S^3\rangle$, $\langle ST^2\rangle$, and $\langle S^2 T\rangle$ three-point functions; and the 3 different tensor structures in the $\langle T^3 \rangle$ three-point function. Maldacena's squeezed limit theorem, when combined with $SO(1,4)$, fixes all 3-point functions involving the scalar in terms of the scale dependence of the corresponding two-point functions, reducing the number of parameters to $6$\footnote{We emphasize that the word parameters above actually refers to functions of the background Hubble radius $H(t)$ and its first two time derivatives.}. A general quantum theory with $SO(1,4)$ invariance and localized operators $S$ and $T$ contains many invariant density matrices, and the two- and three-point functions do not determine either the underlying model or the particular invariant density matrix. Finally, we note that the dominance of the scalar over the tensor two-point fluctuations, which is evident in the CMB data, follows from general cosmological perturbation theory along with the assumptions that there was a period of near dS evolution and that the intrinsic $S$ and $T$ fluctuations are of the same order of magnitude, which is the case in both conventional slow-roll models and the HST model of inflation. However, HST predicts a different dependence of the fluctuation normalizations on the time dependent background $H(t)$. For correlation functions involving the scalar, this difference can be masked by choosing different backgrounds to fit the data. HST makes the unambiguous prediction that the tensor tilt vanishes, whereas conventional slow roll predicts it to be $r/8$ (where $r$ is the ratio of amplitudes of tensor and scalar fluctuations) and data already constrains $r < 0.1$, so this may be hard to differentiate from zero even if we observe the tensor fluctuations.
We have noted a set of relations for fluctuations that follow from very general properties of cosmological perturbation theory. For us, the validity of classical cosmological perturbation theory follows from Jacobson's observation that Einstein's equations (up to the cosmological constant, which in HST is a boundary condition for large proper times) are the hydrodynamic equations of a system, like HST, which obeys the Bekenstein-Hawking relation between entropy and area. In \cite{holoinflation} we argued that this meant that there would be a hydrodynamic inflaton field (or fields) even in regimes where HST is not well-approximated by {\it quantum} effective field theory (QUEFT)\footnote{We refer to the classical field equations, which, according to Jacobson, encode the hydrodynamics of space-time as a Thermodynamic Effective Field Theory, or THEFT.}. The fluctuations calculated from an underlying HST model, which we argued to be approximately $SO(1,4)$ covariant and whose magnitude we estimated, are put into the classical hydrodynamical inflaton equations as fluctuating initial conditions. In fact, in the co-moving gauge, we can view the inflaton as part of the metric and this picture follows from Jacobson's original argument\footnote{Even in the FRW part of the metric, we can view the inflaton field as just a generic way to impose the dominant energy condition on a geometry defined by an otherwise unconstrained Hubble radius $H(t)$.}.
We will argue below that certain constraints on the parameters not determined by symmetries follow from quite general arguments based on classical cosmological perturbation theory, while other constraints correspond to a choice of parameters in the underlying discrete HST model. Note that, within the HST framework, the validity of classical cosmological perturbation theory is a statement about the Jacobsonian hydrodynamics of a system, which is NOT well-approximated by QUEFT. The constraints fix the $SO(1,4)$ representation of the tensor fluctuations, and imply the dominance of scalar over tensor fluctuations. When combined with estimates from the HST model, these constraints suggest that the tensor two-point function {\it might} be observed in the Planck data. Non-gaussian fluctuations involving at least one scalar component are small. The most powerful discriminant between models is the tensor 3 point function. Standard slow-roll models produce only one of the three forms allowed by symmetry. A second one can be incorporated by adding higher derivative terms to the bulk effective action, but the validity of the bulk effective field theory expansion requires it to be smaller than the dominant term\footnote{In weakly coupled string theory, we might consider inflation with Hubble radius at the string scale, and get this second term to be sizable, but this is a non-generic situation, based on a hypothetical weakly coupled inflation model, for which we do not have a worldsheet description.}. The third form violates parity and cannot arise in any model based on bulk effective field theory. We argue that it could be present in more general models, if parity is not imposed as a fundamental symmetry.
Unfortunately, all extant models, whether based on effective field theory or HST, predict that the tensor 3 point function is smaller than the, as yet unobserved, tensor two-point function by a power of $\frac{H}{m_P}$ and we cannot expect to detect it in the foreseeable future.
This paper is organized as follows: in the next section we present the mathematical analysis of two- and three-point functions in a generic quantum theory carrying a unitary representation of $SO(1,4)$, written in terms of operators localized on a three sphere.
We emphasize that this theory does not satisfy the axioms of quantum field theory on the three sphere. In particular, it does not satisfy reflection positivity, because the Hamiltonian is a generator of a unitary representation of $SO(1,4)$, and it is not bounded from below. The absence of highest weight generators prevents the usual continuation to a Lorentzian signature space-time. In consequence, a general theory of this type will have a large selection of dS invariant density matrices, rather than the unique pure state of conventional CFT. Nonetheless, two- and three-point functions are completely determined by symmetry, up to a few constants. The work of Maldacena and collaborators\cite{malda} (see also later work of McFadden and Skenderis\cite{mcfaddenskenderis} and others\cite{others}) shows that, to leading order in slow-roll parameters, single field slow-roll inflation is in this category of theories\footnote{When we use the phrase slow-roll model, we mean a model in which the fluctuations are calculated in terms of QUEFT in a slow-roll background. In our HST model, the fluctuations are calculated in an underlying non-field theoretical quantum model, and put into the hydrodynamic equations of that model as initial conditions.}. Thus, the oft-heard claim that objections\footnote{We recall these objections in Appendix A.} to the conceptual basis of slow-roll inflation must be wrong, because the theory fits the data, is ill-founded. Our analysis shows that current data probe only certain approximate symmetries of a theory of primordial fluctuations, and determine a small number of parameters that are undetermined by group theory. Furthermore, the success of the slow-roll fit to these parameters amounts, so far, to the statement that the fluctuations are predicted to be small and approximately Gaussian; that the scaling exponents are within a small range around certain critical values; and that the scalar two-point function is much larger than that of the tensor. Given the central limit theorem, the first part of this prediction does not seem to be such an impressive statement. Note also that if there is any environmental selection going on in the explanation of the initial conditions of the universe, even the statement that the fluctuations are small might be understood as environmental selection. The fact that the scalar two-point function dominates the tensor is a consequence of general properties of classical gravitational perturbation theory around a background which is approximately de Sitter.
The relative sizes of various three-point functions were also derived by Maldacena in this quite general setting.
We should emphasize that our remarks are relevant to data analysis only if future data remains consistent with slow-roll inflation. There is a variety of inflationary models, and many of them produce non-Gaussian fluctuations which are not $SO(1,4)$ covariant. If future observations favor such a model, they would rule out the simple symmetry arguments, and disprove both slow-roll inflation and the holographic model of inflation. Thus, although we set out to prove that our HST model {\it could} fit the data, we have ended up realizing that the current data probe only a few general properties of the underlying theory of primordial fluctuations. At the moment, we do not have enough control over our model to make predictions that go beyond these simple ones.
In section 3 we sketch the holographic inflation (HI) model of \cite{holoinflation} and recall how it leads to a prediction of small, approximately Gaussian, approximately $SO(1,4)$ invariant fluctuations. It also resolves all of the conceptual problems of conventional inflation and gives a completely quantum mechanical and causal solution to the flatness and horizon problems, as well as an explanation of the homogeneity and isotropy of the very early universe --- all of the latter without inflation. Within the HST formalism, the HI model also explains why there is any local physics in the world, despite the strong entropic pressure to fill the universe with a single black hole at all times.
In the Conclusions we discuss ways in which the data might distinguish between different models of fluctuations. Appendix A recalls the conceptual problems of conventional inflation models, while Appendix B recalls the unitary irreducible representations of $SO(1,4)$.
\section{Fluctuations from symmetry}
In early work on holographic cosmology\cite{holocosm} TB and WF postulated that inflation took place at a time when the universe was well described by effective quantum field theory (QUEFT), and that the inflaton was a quantum field. Our attitude to this began to change as a consequence of two considerations. The first was that, although inflationary cosmology and de Sitter space are {\it not} the same thing, it seemed plausible that at least part of the fully quantum version of inflationary cosmology should involve evolution of independent dS horizon volumes in a manner identical to a stable dS space, over many e-foldings. However, over such time scales we expected each horizon volume to be fully thermalized. The black hole entropy formulas in dS space tell us that the fully thermalized state has {\it no local excitations} and therefore is not well modeled by field theory.
In parallel with this realization, we began to appreciate Jacobson's 1995 argument\cite{ted}, indicating that the classical Einstein equations were just hydrodynamics for a system obeying the local connection between area and entropy, for maximally accelerated Rindler observers. The gravitational field should only be quantized in special circumstances where the covariant entropy bound is far from saturated and bulk localized excitations are decoupled from most of the horizon degrees of freedom (DOF). Jacobson's argument does not give a closed system of equations, because it does not provide a model for the stress tensor. We realized that this meant that other fields like the inflaton, which provide the stress tensor model, could also be classical hydrodynamical fields, unrelated to the QUEFT fields that describe particle physics in the later stages of the universe.
In \cite{holoinflation} TB and WF constructed a model which begins with a maximal entropy $p=\rho$ Big Bang, passes through a stage with $e^{3N_e}$ decoupled horizon volumes of dS space and evolves to a model with approximate $SO(1,4)$ invariance. The corrections to $SO(1,4)$ for correlation functions of a small number of operators are of order $e^{-N_e}$. In trying to assess the extent to which the predictions of such a model could fit CMB and large scale structure data, we realized that, to leading order in the slow-roll approximation, {\it the results of many conventional inflationary models amounted to a prediction of approximate $SO(1,4)$ invariance and the choice of a small number of parameters.} Work of \cite{malda},\cite{mcfaddenskenderis} and \cite{others} has shown that even if we can measure all two- and three-point functions of both scalar and tensor fluctuations, there are only $9$ parameters. Some of these parameters are related, by an argument due to Maldacena. Current measurements only determine two of the parameters and bound some of the others
(the tensor spectral index can't be measured until we actually see tensor fluctuations). Our conclusion is that observations and general principles tell us only that the correct theory of the inflationary universe has the following properties
\begin{itemize}
\item It is a quantum theory that is approximately $SO(1,4)$ invariant, and the density matrix of the universe at the end of inflation is approximately invariant. There are many such density matrices in a typical reducible representation of $SO(1,4)$.
\item The tensor and scalar fluctuations are expectation values of operators transforming in two particular representations of $SO(1,4)$. CMB and LSS data determine the normalization of the two-point function and representation
of the scalar operator, and put bounds on the two-point function of the tensor and all 3 point functions. If future measurements detect neither B mode polarization nor indications that the fluctuations are non-Gaussian, then we will learn no more about the correct description of the universe before and during the inflationary era.
\item When combined with Maldacena's ``squeezed limit" theorem and general features of cosmological perturbation theory around an approximately dS solution, $SO(1,4)$ invariance gives results almost equivalent to a single field slow-roll model. We will discuss the differences below. Thus, it is possible that even measurement of {\it all} the two- and three-point functions will teach us only about the symmetry properties of the underlying model. In fact, measurement of the tensor 3-point function could rule out conventional slow roll, but would not distinguish between more general models obeying the symmetry criteria described above.
\end{itemize}
In particular, the models proposed in \cite{holoinflation}, which resolve all of the conceptual problems of QUEFT based inflation models (see Appendix A),
will fit the current data as well as any conventional model. At our current level of understanding, those models do not permit us to give a detailed prediction for the scalar tilt, apart from the fact that it should be small. The tensor tilt is predicted to vanish. HST models suggest very small non-Gaussianity in correlation functions involving scalar perturbations, as we will see below, and explain why the scalar two-point function is much larger than that of the tensor. Indeed, this follows from $SO(1,4)$ symmetry, Maldacena's long wavelength theorem for scalar fluctuations, and very general properties of classical gravitational fluctuations around a nearly dS FRW model.
Classical cosmological perturbation theory identifies two gauge invariant quantities, which characterize fluctuations, and transform as a scalar and a transverse traceless tensor under $SO(3)$. We will attempt to find $SO(1,4)$ covariant operators, whose expectation values give us the two- and three-point correlation functions of these fluctuations. The form of these fluctuations is determined by group theory, up to a few constants.
In order to handle $SO(1,4)$ transformation properties in a compact manner, we use the description of the 3-sphere as the projective future light cone in $4 + 1$ dimensional Minkowski space. That is, it is the set of $5$ component vectors $X^{\mu}$ satisfying
$X^{\mu} X_{\mu} = 0$, $X^0 > 0$ and identified under $X^{\mu} \rightarrow \lambda X^{\mu}$ with $\lambda > 0$. Fields on the sphere are $SO(1,4)$ tensor functions of $X$, which transform as representations of the group of identifications, isomorphic to $R^+$. These representations are characterized by their tensor transformation properties, covariant constraints, and a single complex number $\Delta$
such that $$ F (\lambda X) = \lambda^{- \Delta} F(X), $$ so that the field is completely determined by its values on the sphere. The allowed values of $\Delta$ are constrained by the unitarity of the representation of $SO(1,4)$ in the Hilbert space of the theory. An expectation value of the products of two or three of these operators, in any dS invariant density matrix, is determined in terms of the $9$ numbers discussed above. For the tensor modes, the determination of the 3 point function has been demonstrated in \cite{malda} and \cite{others}, but there is not yet a compact formula. We hope that the five dimensional formalism will provide one, but we reserve this for future work.
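As a simple illustration of how this homogeneity determines the field from its values on the sphere (a restatement in a convenient section of the cone, included only for orientation): every null ray with $X^0 > 0$ contains exactly one point of the form $(1, \Omega)$, with $\Omega$ a unit vector in $\mathbb{R}^4$, so setting $f(\Omega) := F(1, \Omega)$ we have
$$ F(X) = F\big(X^0 (1,\Omega)\big) = (X^0)^{-\Delta} f(\Omega), \qquad \Omega = (X^1, \ldots , X^4)/X^0 ,$$
and $F$ is recovered everywhere on the projective light cone from the function $f$ on the three sphere.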
Ordinary QFT in dS space has {\it many} dS invariant states, both pure and impure, as a consequence of the fact that there are no highest weight unitary representations of $SO(1,4)$\footnote{We are not talking here about the (in)-famous $\alpha$ vacua, which are states of Gaussian quantum fields whose two-point function is dS invariant, but has singularities when a point approaches its anti-pode. Rather, in the context of bulk QUEFT, we're speaking about dS invariant excitations of the conventional Bunch-Davies vacuum. These are not represented by Gaussian wave functionals.}. In conventional CFT, the product of two lowest weight representations contains only representations of weight higher than the sum of the individual weights, but this is not true for unitary representations of $SO(1,4)$. Products of non-trivial irreps can have singlet components. We can make even more general $SO(1,4)$ invariant density matrices by taking weighted sums of the projectors on this plethora of pure invariant states. This is in marked contrast to conventional CFT, whose Hilbert space consists of lowest weight unitary representations of $SO(2,3)$ and has a unique invariant state. Nonetheless, because the constraints of dS invariance on $2$ and $3$ point functions are expressed as analytic partial differential equations, in which the cosmological constant appears as an analytic parameter, these functions {\it are} analytic continuations of corresponding expressions in ordinary CFT. While we do not think that QFT in dS space is the correct theory of inflationary fluctuations, nor that the quantum theory of dS space is $SO(1,4)$ invariant; we do think that it is plausible that the quantum theory of a cosmology that has a large number of e-folds of inflation, followed by sub-luminal expansion which allows observers to see all of that space-time, should have an approximate $SO(1,4)$ symmetry realized by unitary operators in a Hilbert space. This was explained in \cite{holoinflation}.
The scaling symmetry $R^+$ plays another useful role, since we are trying to make a quantum model of many horizon volumes of the asymptotic future of a classical dS space. dS space asymptotes to the future light cone $X^2 = 0$, and the rescaling transformation is simply time evolution in either the global or flat slicing. The two times are asymptotically equivalent.
For large time in the flat slicing, we have $$X\cdot Y \sim e^{2t/R} ({\bf x - y})^2 .$$ Thus, the scaling dimension of the operator tells us about its large time behavior in the flat slicing of dS space.
The field corresponding to the scalar fluctuations is a scalar $S (X)$, with $\Delta_S = 3/2 - \sqrt{(3/2)^2 - m^2 R^2}$. In this formula $m^2$ is the squared mass of a bulk scalar field, which would give rise to this representation by dS/CFT calculations, as in \cite{malda}, when evaluated in the Bunch-Davies vacuum. It parametrizes one of the series (called Class I in Appendix B) of unitary representations of $SO(1,4)$ described in \cite{thomasnewton}. These are the analogs of the complementary and principal series of unitary representations of $SL(2,R)$. In ordinary QFT in dS space, the Wheeler-DeWitt wave function for this bulk field determines the in-in correlator of the corresponding bulk quantum field in the Bunch-Davies vacuum. We note that this is the result of direct calculation of ordinary QFT in dS space and does not invoke any analytic continuation from an AdS calculation. We will discuss the relation to dS/CFT in more detail below. The values for which the square root is real are called the complementary series of representations, while those for which the real part is fixed at $3/2$ while the imaginary part varies, are the principal series. At the level of the two-point function we could view the usual scalar fluctuation as determined by the correlator of $\langle \Phi^{\dagger} \Phi \rangle$, which makes sense even for the complex operators of the principal series. However, there is no consistent interpretation of the 3 point function of principal series operators, except to set the various complex 3 point functions to zero. Thus, in order to have an interpretation as fluctuations of the real quantity $\zeta$, we must restrict attention to the complementary series. Thus $\Delta_S$ is bounded between $0$ and $3/2$. Conventional slow-roll models have $\Delta_S = 0$. Note that, strictly speaking, this is not in the list of unitary representations, but is a limit of them. We believe that this is a consequence of the logarithmic behavior of massless minimally coupled propagators in dS space, which may make the definition of the global symmetry generators a bit delicate.
From the point of view of phenomenology, this subtlety is irrelevant. $SO(1,4)$ invariance is only an approximate property of inflationary models and we can certainly consider arbitrarily small values of $\Delta_S$, so the distinction between zero and other values could never be determined from the data.
Our point is that the representation constant of the field fixes its two- and three-point functions up to pre-factors. The two-point function of such a field is fixed, up to a multiplicative constant, to be
$$ {\rm Tr} [\rho \Phi (X) \Phi (Y) ] = C^S_2 (X^{\mu} Y_{\mu} )^{- 2 \Delta_S} .$$ Here $\rho$ can be any $SO(1,4)$ invariant density matrix.
The flat slicing of dS space is
$$ ds^2 = - dt^2 + e^{2t/R}\ d {\bf y}^2 .$$ Using the asymptotic relation between the flat coordinates and the light cone, we find a momentum space correlator (in radial momentum space coordinates)
$$ 4\pi C^S_2 k^{-1}( \frac{e^{t/R}}{k})^{ 4\Delta_S} .$$
Similarly, the 3 point function is determined to be
$${\rm Tr} [ \rho\, \Phi (X_1) \Phi (X_2) \Phi (X_3) ] = C_3 \prod_{i<j} X_{ij}^{\frac{-\Delta_S}{2}} ,$$ where
$X_{ij} = X_i \cdot X_j .$ This form follows from $SO(1,4)$, the scaling symmetry, and symmetry under permutations of the points. The latter symmetry follows from the assumption that the operators commute with each other. In both HST and a conventional slow-roll model, this is a consequence of the fact that the different points are causally disconnected during inflation, and that we are computing an expectation value at fixed time. In the HST model, $SO(1,4)$ invariance sets in via a coupling together of DOF at different points, using a time dependent Hamiltonian, which approaches a generator of $SO(1,4)$ when the number of e-folds is large\footnote{Below, we'll recall the meaning of the bulk concept ``number of e-foldings" in the HST model.}.
Maldacena and Pimentel\cite{malda}, among others \cite{others}, have shown how all of the $3$ point functions are determined up to 3 normalizations by $SO(1,4)$ group theory. Similar results were obtained by McFadden, Skenderis and collaborators\cite{mcfaddenskenderis}. Our results for correlation functions of tensor modes have to coincide with theirs because the only representation of $SO(1,4)$ that has the right number of components to represent the transverse traceless graviton fluctuation is the Class IV representation (in the notation of Newton\cite{thomasnewton}) with $s=2$. The Casimir operators have the values $Q = -6 , W = -12$. The class $IV_{a,b}$ representations are the two different helicity modes of the graviton. See Appendix B for details of the classification of $SO(1,4)$ representations. Note however that the coefficients in front of these group theoretic predictions are different in slow-roll and HST models. In slow-roll models, both scalar and tensor fluctuations are computed as two-point functions of quantum fields in the background space-time $H(t)$, while in HST, the magnitude of the fluctuations is determined by a fixed Hubble parameter $H$, as we will review below.
Two interesting points about the pure tensor 3-point functions were made by \cite{parityviolating} and by Maldacena and Pimentel\footnote{The explicit forms for these three-point functions are not terribly illuminating.
The most elegant expression we know is in the spinor helicity formalism used by Maldacena and Pimentel, and it would be redundant and pointless to reproduce that here. We hope that the realization of the three sphere as the five dimensional projective light cone will simplify these expressions, but we have not yet succeeded in showing this.}. In bulk field theory computations, the parity violating term allowed by group theory does not appear in correlation functions. The corresponding term in the logarithm of the WD wave function is purely imaginary and does not contribute to correlators of operators that are simply functionals of the fields appearing in the wave functional. Neither the tensor nor scalar operators involve functional derivatives acting on the wave functional, and so their correlators are insensitive to this term. In addition, one of the two parity conserving structures only appears if we allow higher derivative terms in the bulk action. In a more general $SO(1,4)$ invariant theory, the vanishing of the parity violating term might follow from an underlying reflection invariance of the microscopic dynamics, while there is no reason for the two parity conserving terms to have very different normalizations\footnote{Maldacena and Pimentel point out that in a hypothetical model of inflation in perturbative string theory, the derivative expansion can break down even though quantum gravity corrections are negligible, if the Hubble scale is the string scale. They argue that this could produce parity conserving terms, with comparable magnitude. An actual computation of these terms would
require us to find a worldsheet formulation of the hypothetical weakly coupled string model of inflation.}.
In general, knowledge of the parity operation on the fields, plus the fact that the fields commute with each other, {\it does not} imply that the parity operator commutes with the fields. Rather, it is like a permutation operator, which permutes the elements of a complete ortho-normal basis. The properties of parity imply that it squares to a multiple of the unit operator.
In the conventional approach to slow-roll inflation models, the Hilbert space is interpreted as the thermo-field double of field theory in a single causal patch and the state is taken to be the Bunch-Davies state, which reproduces thermal correlation functions in the theory of the causal patch. This state is invariant under a $Z_2$ which reverses both the orientation of the 3-sphere and the time in the causal patch. When combined with the TCP invariance of bulk quantum field theory, this leads to parity invariant correlation functions. Another way of seeing the same result is to note that the WD density matrix for the Bunch-Davies vacuum, is diagonal in the same basis as the fields whose expectation values we are computing. The parity operation is defined as complex conjugation of the WD wave function, and leaves the diagonal matrix elements of the density matrix invariant. These are the only matrix elements relevant to calculating these particular expectation values.
In the HI model of inflation, thermal fluctuations in {\it many} initially decoupled dS causal patches are coupled together by a time dependent Hamiltonian, which, in the limit of a large number of e-folds, approaches a generator of $SO(1,4)$. In this limit, one can argue that the density matrix should become approximately $SO(1,4)$ invariant, but we do not see a general argument that it be parity invariant. Similarly, there is no reason for the density matrix to be diagonal in the same basis as the fields $S(X)$ and $T(X)$ on the three sphere. The parity operation acts simply on the fields, but not necessarily on the density matrix. {\it Consequently, there is no argument that the parity violating part of the tensor 3 point function must vanish, or be small compared to the other two terms.} Thus, the tensor bispectrum may be the only clear discriminant between slow-roll inflation and a general class of $SO(1,4)$ invariant models that includes HI.
The group theory analysis does not determine the scaling dimension $\Delta_{S}$ or the coefficients of the various two- and three-point functions. In the next section, we review the Holographic Inflation (HI) model, which makes predictions for some of these unknown constants.
Note however, that Maldacena, using the bulk effective field theory description of fluctuations, has derived several relations between the nine parameters on quite general grounds. The fundamental gauge invariant measure of scalar fluctuations is the scalar metric perturbation $\zeta$, where
\begin{eqnarray*}
ds^2 &=& - N^2 dt^2 + h_{ij} (dx^i + N^i dt)(dx^j + N^j dt)\\
h_{ij} &=& a^2(t) [(1 + 2\zeta)\delta_{ij} + \gamma_{ij} ],
\end{eqnarray*}
with $\gamma_{ij}$ transverse and traceless. When $\zeta (x)$ is constant, this is just a rescaling of the spatial FRW coordinates so its effect is completely determined. Thus, in a three-point function including $\zeta$, which depends on three momentum vectors satisfying the triangle condition ${\bf k_1 + k_2 + k_3} = 0$, the squeezed limit, in which the momentum carried by $\zeta$ is taken to zero, is completely determined by the coordinate transformation of the corresponding two-point function. Since $SO(1,4)$ fixes the momentum dependence of all three-point functions up to a multiplicative constant, the constants in the $\langle S T T \rangle$, $\langle SST\rangle$ and $\langle S S S \rangle$ three-point functions are determined by those in the $\langle T T \rangle$ and $\langle S S \rangle$ two-point functions. This leads to the prediction of small non-Gaussianity in the slow-roll limit, and reduces our $9$ constants to $6$. We have argued that the HST model does have a description in terms of coarse grained classical field theory, and so should obey Maldacena's constraint.
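For orientation, the best known instance of this squeezed-limit statement is the single-field consistency relation, which we quote here only for context (primes denote correlators stripped of the momentum-conserving delta function):
$$\lim_{k_1 \rightarrow 0} \langle \zeta_{{\bf k}_1} \zeta_{{\bf k}_2} \zeta_{{\bf k}_3}\rangle^{\prime} = (1 - n_s)\, P_{\zeta}(k_1)\, P_{\zeta}(k_3) ,$$
so the squeezed scalar three-point function is fixed entirely by the two-point function and its tilt.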
Slow-roll inflation models determine the magnitudes of fluctuations in terms of the quantum fluctuations of canonically normalized free fields in the Bunch-Davies state. In the single field slow-roll models, this leads to an exact relation between the scalar and tensor tilts and the normalizations of the $\langle S^2\rangle$ and $\langle T^2\rangle$ correlators. The HI model does not lead to this relation. However, the relative orders of magnitude of the scalar and tensor two-point functions are determined by very general geometrical considerations. The quantity $\zeta$ is shown in Appendix A of \cite{lyth} to satisfy
$$\zeta = - 3 \bar{H} \delta t ,$$ where $\delta t$ is the proper time displacement between two infinitesimally separated co-moving hypersurfaces, and $\bar{H}$ the homogeneous Hubble parameter. This requires only that the metric be locally FRW, that the cosmological fluid have vanishing vorticity, and that fluctuations away from homogeneity and isotropy are treated to first order. On the other hand,
$$\delta t = \frac{\delta H}{\dot{\bar{H}}}, $$ where $\delta H$ is the local fluctuation in the Hubble parameter. If the metric is close to dS then $\dot{\bar{H}}$ is small.
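Combining the two displayed relations gives
$$\zeta = - 3 \bar{H}\, \frac{\delta H}{\dot{\bar{H}}} ,$$
so a given fluctuation $\delta H$ appears in $\zeta$ enhanced by the small denominator $\dot{\bar{H}}$.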
$\delta H$ is the fluctuation in the inverse radius of space-time scalar curvature, while the tensor fluctuations are fluctuations in the spin two part of the curvature, which is defined intrinsically by the fact that the background is spatially flat.
Thus, we can conclude quite generally that the fluctuations in $\delta H$ and in the spin two piece should be of the same order of magnitude. In section 3 we will recall that in the HI model, general statistical arguments indicate that these fluctuations have the magnitude $\frac{1}{R M_P}$, where $R$ is the radius of the approximate dS space.
We want to emphasize that apart from the last remark, these are purely classical geometrical considerations. Adopting Jacobson's point of view about Einstein's equations, we can say that any quantum theory of gravity whose local hydrodynamics looks like dS space for a sufficiently long period, will give predictions for scalar and tensor fluctuations that are qualitatively similar to those of slow-roll inflation. We will discuss the observational discrimination between different models below.
\subsection{Tilt}
The scalar two-point function is given at large times in the flat slicing by
$$ \langle \zeta (k) \zeta (-k) \rangle = \frac{A}{k^3} \frac{H^2}{\dot{H}^2} (\frac{e^{t/R}}{k})^{- 4\Delta_S} .$$
In slow-roll models, the relevant value of $t$ at which to evaluate this formula depends on $k$ via the equation
$$ k = a (t(k)) H(t(k)).$$ Notice that in these formulae we've reverted to the use of $R$ for the constant inflationary dS radius, while $H$ is the varying Hubble parameter. In a general model, $H$ will be decreasing with time and $\dot{H}$ will be increasing, as inflation ends.
Modes with higher $k$ leave the horizon at a later time and so the normalization $\frac{H^4}{\dot{H}^2}$ will be smaller for these modes.
However, there is another effect coming from the fact that $\Delta_S$ is positive. As inflation ends, $a(t)$ is not increasing as rapidly as the exponential so $\frac{e^{t/R}}{a(t)} $ increases as $t$ increases (we neglect the variation of $H(t)$ in the horizon crossing formula, because it is not in the exponential). Thus, the logarithmic derivative of the correlation function will have a negative contribution from the prefactor and a positive one proportional to $ \Delta_S$ . Since both effects depend on the slow variation of $H$, the tilt will be small (remember that $ \Delta_S$ is bounded by unitarity), but its sign depends on the value of $\Delta_S$. The slow-roll result of red tilt is obtained for small $\Delta_S$, but near the unitarity bound the tilt could be of either sign, depending on the behavior of $H(t)$. The conventional slow-roll model usually assumes $\Delta_S =0$.
Similar remarks apply to the tensor fluctuations. However, the overall constant in these is not in general fixed in terms of the normalization of the scalar two-point function, as it is in a conventional slow-roll model. If we ever measure the tensor fluctuations, we will be able to see whether the slow-roll consistency condition, relating the magnitudes and tilts of tensor and scalar two-point functions is satisfied.
Thus far, we've compared slow-roll inflation to a general model satisfying only approximate $SO(1,4)$ invariance of the density matrix, and the existence of an approximately de Sitter classical background geometry $H(t)$. If we now specialize to models constructed in HST, we find a different prediction for the scale dependence of the normalization parameter $A$ (and the corresponding normalization of the tensor two-point function). In slow-roll models we find
$$A_{S,T} = C_{S,T} (\frac{H(t)}{M_P})^2 , $$ with fixed numerical coefficients. The HST model, as we will explain below, predicts instead that $$A_{S,T} = D_{S,T}^{\prime} (\frac{1}{R M_P})^2 , $$ with numerical coefficients which are not yet calculable. This has the consequence that {\it the HST model predicts no tensor tilt}. It also suggests that the size of the tensor fluctuations might be large enough to be seen in the Planck data (but the unknown coefficients make it impossible to say this definitively).
For a given function $H(t)$ we have the following predictions for the scalar tilt
$$ n_s^{\rm slow\ roll} = \frac{H}{H^2 + \dot{H}}\frac{d}{dt} (6 {\rm ln}\ H - 2 {\rm ln}\ (\dot{H}) ) .$$
$$ n_s^{HST} = \frac{H}{H^2 + \dot{H}}\bigl[ \frac{d}{dt} (4 {\rm ln}\ H - 2 {\rm ln}\ (\dot{H}) ) - 4\frac{\Delta_S}{R} \bigr] + 4 \Delta_S .$$
Note that in the slow-roll limit, where $H \approx R^{-1}$, the last term in square brackets cancels the term outside the brackets, and also that $\Delta_S$ is bounded from above by $3/2$. The two formulae for the tilt
are different, but both predict that it is small, and that the sign of $n_s - 1$ depends on the time variation of $H(t)$. Note however that $H(t)$ is not measured by anything other than the primordial fluctuations, so we can adjust $H(t)$ and $\Delta_S$ to make a slow-roll model have, within the observational errors, the same predictions as an HST model. It's possible that further study of the consistency conditions on HST models would enable us to make more precise theoretical statements about $H(t)$ and $\Delta_S$, but at the moment it does not appear that the scalar power spectrum can distinguish between them. The absence of tensor tilt {\it is} a clear distinguishing feature.
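To make the comparison concrete, the sketch below evaluates the two expressions above for a hypothetical, slowly decreasing $H(t)$. The profile, the value of $\Delta_S$, and the use of $|\dot{H}|$ inside the logarithm are assumptions of the illustration, not outputs of either model.
\begin{verbatim}
# Illustrative evaluation of the two tilt formulae quoted above, for an
# assumed, slowly decreasing Hubble parameter H(t).  The profile, the value
# of Delta_S, and the use of |Hdot| in the logarithm are choices made only
# for this sketch.
R       = 1.0e5        # assumed inflationary dS radius in Planck units
eps     = 1.0e-2       # assumed smallness parameter controlling the decrease of H
Delta_S = 0.1          # assumed scaling dimension, below the unitarity bound 3/2

H     = lambda t: (1.0 - eps * t / R) / R   # toy Hubble parameter
Hdot  = lambda t: -eps / R**2               # its (constant) time derivative
Hddot = lambda t: 0.0

def tilt_slow_roll(t):
    # H/(H^2 + Hdot) * d/dt ( 6 ln H - 2 ln |Hdot| )
    dlog = 6.0 * Hdot(t) / H(t) - 2.0 * Hddot(t) / Hdot(t)
    return H(t) / (H(t)**2 + Hdot(t)) * dlog

def tilt_HST(t):
    # H/(H^2 + Hdot) * [ d/dt ( 4 ln H - 2 ln |Hdot| ) - 4 Delta_S/R ] + 4 Delta_S
    dlog = 4.0 * Hdot(t) / H(t) - 2.0 * Hddot(t) / Hdot(t)
    return H(t) / (H(t)**2 + Hdot(t)) * (dlog - 4.0 * Delta_S / R) + 4.0 * Delta_S

t = 10.0 * R                                # a few e-folds into inflation
print("slow roll:", tilt_slow_roll(t), "  HST:", tilt_HST(t))
\end{verbatim}
For this particular profile both expressions come out small, as the general argument above requires; neither number should be read as a prediction of either model.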
Measurement of the tensor bispectrum would give us a much finer discrimination between models. {\it In particular, observation of the parity violating part of this function would rule out all models based on conventional effective field theory in the Bunch-Davies vacuum.} It's unfortunate then that HI, like the slow-roll models, predicts that the tensor bispectrum is down by a factor of $\frac{H}{m_P}$, from the tensor two-point function, which is in turn smaller than the already observed scalar fluctuations by a factor of order $\frac{\dot{H}}{H^2}$. It seems unlikely that we will measure it in the near future.
\subsection{Comparison With the Approaches of Maldacena and McFadden-Skenderis}
Maldacena's derivation of the dS/CFT correspondence implies that the quantum theory defined by his equations carries a unitary representation of $SO(1,4)$, {\it within the semi-classical approximation to the bulk physics}. He argues that the analytic continuation, in the cosmological constant, of the generating functional of correlators of a Euclidean CFT with a large radius AdS dual, is the Wheeler-DeWitt (WD) wave functional of the corresponding bulk Lagrangian on the 3-sphere. This argument has been generalized to all orders in the semiclassical expansion of the bulk Lagrangian by Harlow and Stanford\cite{hs}. {\it To all orders in the semi-classical expansion} the WD wave functional defines a positive metric Hilbert space. The correlation functions defined by Maldacena are expectation values of operators localized at points on the sphere, in a given state in this Hilbert space. They are covariant under $SO(1,4)$ and the Hilbert space carries a unitary representation of $SO(1,4)$. In this semi-classical analysis the state in the Hilbert space is the Bunch-Davies vacuum for dS quantum field theory, defined by analytic continuation from a Euclidean functional integral.
It is important to realize that these correlators are {\it not} correlators in the ``non-unitary'' CFT, which defines the coefficients in the exponent of the WD wave function. The complex weights, which seem so mysterious in the CFT, are familiar as the complex parameters labeling the complementary and principal series of {\it unitary} representations of $SO(1,4)$. Furthermore, although our quantum theory contains operators localized at points on the 3-sphere, it is NOT a Euclidean QFT on the 3-sphere. The correlators in such a QFT would be analytic continuations of expectation values of operators in a theory on $2 + 1$ dimensional dS space, and would satisfy reflection positivity on the 3-sphere. This cannot be the case because none of the generators of $SO(1,4)$ are bounded from below. The usual radial quantization of a theory on the sphere describes a Hilbert space composed of unitary highest weight representations of $SO(2,3)$, whose analytic continuations are highest weight non-unitary representations of $SO(1,4)$, not the unitary unbounded representations that one finds by doing bulk quantum field theory.
Many people have been tempted to use the AdS/CFT correspondence to {\it define} a quantum theory in dS space by just using the analytically continued correlators of some exact CFT to define a non-perturbative WD wave function. We are not sure what this would mean. The formal analytic continuation of the path integral gives rise to a wave functional satisfying the exact WD equation. There is no positive definite scalar product on the space of solutions of this hyperbolic equation, and it is not clear how to give a quantum interpretation of the correlation functions that would be defined by this procedure. We propose instead that the correct non-perturbative generalization of Maldacena's observation is that the inflationary correlation functions, in leading order in deviations from dS invariance, be given by expectation values of localized operators in a quantum theory on the 3-sphere, carrying a unitary representation of $SO(1,4)$. As we've noted above, current observations are completely accounted for by this principle, without the need for a detailed model.
The application of Maldacena's version of dS/CFT to inflation works only to leading order in the slow-roll approximation. M-S instead begin from a correspondence between holographic RG flows and full inflationary cosmologies. It has long been known that the equations for gravitational instantons, of which domain walls are a special case, have the form of FRW cosmologies, with positive spatial curvature, if we interpret the AdS radial direction as time. In particular, when the Lorentzian signature potential has negative curvature in AdS space, corresponding to a Breitenlohner-Freedman allowed tachyonic direction, the cosmology asymptotes to dS space. From the AdS point of view, such domain walls represent RG flows under perturbation of the CFT at the AdS maximum, by a relevant operator. M-S show that by performing a careful analytic continuation of fluctuations around the domain wall solution, they can write inflationary fluctuations in terms of analytically continued correlators in the QFT defined implicitly by the domain wall. Furthermore, in another paper, they show that if the relevant operator is nearly marginal, then the analytic continuation of the formulas for correlators in the perturbed CFT, computed by conformal perturbation
theory, produces fluctuations corresponding to a slow-roll model. That is, they obtain slow-roll correlators when these correlators are plugged into the formulae they derived in the domain wall case where the RG flow was tractable in the leading order AdS/CFT approximation. They suggest a holographic theory of inflation, in which their formulae are applied to the correlators in a general QFT.
To leading order in slow-roll parameters, and in the bulk semi-classical approximation, the results of M-S are equivalent to those of Maldacena, though they are derived by a different method.
Thus, we can give them a quantum mechanical interpretation, as above.
We are unsure what to say about the non-perturbative definitions of inflationary correlators, which they propose, since we do not know how to interpret them as quantum expectation values. In Maldacena's case, the attempt to interpret the analytically continued generating function as the WD wave function, no longer produces quantum mechanics if the semi-classical approximation does not apply. We cannot make a similarly definitive negative statement about the non-perturbative proposals of M-S, but we cannot prove that their procedure defines a quantum mechanics. We suspect that the proposals of M-S and Maldacena are in fact equivalent to all orders in the bulk semi-classical approximation, at least for slow-roll models (Maldacena only treats slow-roll models), and that the same objections to the M-S proposal for using exact, analytically continued CFT correlators, would apply.
We would like to opine that the term dS/CFT and the analytic continuation from AdS space are both somewhat misleading. dS is inappropriate because we are not dealing with a theory of eternal dS space, with an entropy proportional to $R^2$. In a stable dS space the correlators that we compute can never be measured by any local observer. Instead, these formulae apply, approximately, to an inflationary model with a large number of e-folds of inflation. In such a model, the entropy accessible after inflation is of order $e^{3 N_e} R^2$, and these correlators are measurable by post-inflationary observers. CFT is inappropriate in general because a CFT has a unique $SO(1,4)$ invariant density matrix. The analytic continuation from AdS space is meaningful only in the semi-classical expansion, and in that expansion it gives the unique Bunch-Davies state of $SO(1,4)$ invariant bulk field theory. We have argued that apart from the precise slow-roll consistency relation, this does not give predictions for two-point functions that are significantly different than those provided by symmetry and general theorems alone.
A number of other authors \cite{others} have invoked ``conformal invariance'' to constrain inflationary correlators. While we agree with many of the equations proposed by these authors, we believe that we have provided the only correct interpretation of these results within the framework of an underlying quantum mechanical theory. One interesting question that we have not resolved is the extent to which there exist ``Ward identities'' relating correlators of different numbers of tensor fluctuations. In standard slow-roll inflation, the normalization of tensor to scalar fluctuations is completely fixed (not just in order of magnitude in slow roll) by the normalization of the bulk Einstein action. In the relationship with AdS/CFT, this normalization is ``dual'' to the fact that the coefficients of the log of the WD wave function are analytically continued correlators of the stress tensor.
In ordinary QFT, there are two ways to derive stress tensor Ward identities. We can analytically continue relations derived from commutators and time ordered products in the Lorentzian continuation of the theory, {\it or} we can interpret stress tensor correlators as the response to variations of the metric of the Euclidean manifold on which the functional integral is defined. We have taken pains to stress that in the proper interpretation of our $SO(1,4)$ invariant quantum theory, no analytic continuation to Lorentzian signature is allowed. In the next section, when we review the construction of the quantum theory from HST, it will be apparent that the round metric on the 3 sphere plays a special role in the construction, and it is not clear how to define the model on a generic 3-geometry. Consequently, we do not see how to define Ward identities beyond the semi-classical approximation to bulk geometry.
In summary, while the results of previous authors on ``dS/CFT" for the computation of inflationary correlators are correct to all orders in the bulk semi-classical expansion, they {\it do not} lead to a new non-perturbative definition of quantum gravity in an inflationary universe. An appropriate non-perturbative generalization of these results is to assume the fluctuations may be calculated as expectation values of a scalar and tensor operator on $S^3$. The density matrix is approximately $SO(1,4)$ invariant. That is, we assume that the quantum theory is approximately a reducible unitary representation of $SO(1,4)$. We emphasize the word {\it approximate}
in these desiderata, because our HST model is finite dimensional, but approaches a representation of $SO(1,4)$ exponentially, as $N_e \rightarrow\infty$. This statement should be taken to refer to convergence of expectation values of a small number of operators.
The theory should also have a Jacobsonian hydrodynamic description in terms of classical fields in a space-time which is close to dS space for a long period, but allows the horizon to expand to encompass many horizon volumes of dS. The purpose of the present paper was primarily to show that this broad framework was sufficient for understanding the observations. Two of the authors, TB and WF, believe the HST model of \cite{holoinflation} is the only genuine model of quantum gravity that has these properties. One need not share this belief to accept the general framework of symmetries and cosmological perturbation theory.
\section{The Holographic Inflation model}
The basic idea of HST is to formulate quantum gravity as an infinite set of independent quantum systems, with consistency relations for ``mutually accessible information''. Each individual system describes the universe as seen from a given time-like world line (not always, or even usually, a geodesic), evolving in proper time along that trajectory. The dynamics along each trajectory is constrained by causality: the evolution operator for any proper time interval factorizes as\footnote{We use notation appropriate for a Big Bang cosmology. $0$ is the time of the Big Bang. An analogous treatment of a time symmetric space-time would use an evolution operator $U(T,-T)$.} $$U(T,0) = U_{in} (T,0) \otimes U_{out} (T,0),$$ where $U_{in}$ acts only on ``the Hilbert space ${\cal H} (T, {\bf x})$ of degrees of freedom in the causal diamond determined by the past and future endpoints of the trajectory''. $U_{out} (T,0)$ operates in the tensor complement of ${\cal H} (T, {\bf x})$ in ${\cal H} (T_{max}, {\bf x})$. ${\bf x}$ is a label for the trajectory. The dimension of ${\cal H} (T, {\bf x})$, in the limit that it is large, determines the area of the holographic screen of the causal diamond via the Bekenstein-Hawking relation (generalized beyond black holes by Fischler, Susskind and Bousso), $$A(T, {\bf x}) = 4 L_P^{d-2}\ {\rm ln\ dim}\ [ {\cal H} (T, {\bf x})] .$$ We will take $d = 4$ in this paper. The causal relations between different diamonds are encoded in commutation properties of operators, as in quantum field theory (QFT).
$A(T,{\bf x})$ must not decrease as $T$ increases. For small $T$ it will always increase. It may reach infinity at finite $T$, as in AdS space; remain finite as $T\rightarrow\infty$, as in dS space; or asymptote to infinity with $T$. For trajectories inside black holes, or Big Crunch universes, $T_{max}$ will be finite. It's clear that there must be jumps in $T$, where the dimension of ${\cal H} (T, {\bf x})$ changes, and it's not likely that we need to discuss continuous interpolations between these discrete times. In the models of this paper, the discrete jumps will be of order the Planck time.
For any time, and any pair of trajectories, we introduce a Hilbert space ${\cal O} (T, {\bf x,y})$ whose dimension encodes the information mutually accessible to detectors traveling along the two different trajectories. ${\cal O} (T, {\bf x,y})$ is a tensor factor in both ${\cal H} (T, {\bf x})$ and ${\cal H} (T, {\bf y})$.
We define two trajectories to be nearest neighbors if
\begin{equation*}
{\rm dim}\ {\cal O} (T, {\bf x,y}) = {\rm dim}\ {\cal H} (T - 1, {\bf x}) = {\rm dim}\ {\cal H} (T - 1, {\bf y}).
\end{equation*}
Translated into geometrical terms, this means that the space-like distance between nearest neighbor trajectories, at any time, is the Planck scale. The second equality defines what we call {\it equal area time slicing} for our cosmology. We want the nearest neighbor relation to define a topology on the space of trajectories, which we think of as the topology of a Cauchy surface in space-time. It is probable that it is enough to think of this space as the space of zero simplices of a $d - 1 = 3$ dimensional simplicial complex, but for ease of exposition we use a cubic lattice. We require that ${\rm dim}\ {\cal O} (T, {\bf x,y})$ be a non-increasing function of the number of steps $d({\bf x,y})$ in the minimal lattice walk between the two points.
The choice of ${\rm dim}\ {\cal O} (T, {\bf x,y})$ for points which are not nearest neighbors is determined by an infinite set of dynamical consistency requirements. Given time evolution operators and initial states in each trajectory Hilbert space, we can determine two time dependent density matrices $\rho (T, {\bf x})$ and $\rho (T, {\bf y})$ in ${\cal O} (T, {\bf x,y})$. We require that
$$\rho (T, {\bf x}) = V(T,{\bf x,y}) \rho (T, {\bf y}) V^{\dagger} (T,{\bf x,y}),$$ with $V(T,{\bf x,y})$ unitary. This constrains the overlap Hilbert spaces, as well as the time evolution operators and initial states.
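Unitary equivalence of two density matrices is the same statement as equality of their spectra, so the constraint is easy to test numerically. The following minimal sketch (the dimension, spectrum and random unitaries are arbitrary choices made only for illustration) shows the check that would have to hold on every overlap Hilbert space.
\begin{verbatim}
# Minimal numerical illustration of the overlap condition: two density
# matrices on O(T,x,y) are related by a unitary V iff they have the same
# eigenvalue spectrum.  Dimension and spectrum below are arbitrary choices.
import numpy as np

dim = 8
p = np.random.rand(dim); p /= p.sum()                # a common spectrum
U, _ = np.linalg.qr(np.random.randn(dim, dim) + 1j * np.random.randn(dim, dim))
W, _ = np.linalg.qr(np.random.randn(dim, dim) + 1j * np.random.randn(dim, dim))

rho_x = U @ np.diag(p) @ U.conj().T                  # observer x's density matrix
rho_y = W @ np.diag(p) @ W.conj().T                  # observer y's density matrix

# Same spectrum => unitarily equivalent; here an explicit V is U W^dagger.
print(np.allclose(np.sort(np.linalg.eigvalsh(rho_x)),
                  np.sort(np.linalg.eigvalsh(rho_y))))
V = U @ W.conj().T
print(np.allclose(rho_x, V @ rho_y @ V.conj().T))
\end{verbatim}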
As TB and WF have emphasized many times, the structure of space-time, both causal and conformal, is completely determined by quantum mechanics in HST, but the space-time metric is not a fluctuating quantum variable. The true variables are quantized versions of the orientation of pixels on the holographic screen. They are sections of the spinor bundle over the screen, but in order to satisfy the Covariant Entropy Bound for a finite area screen, we restrict attention to a finite dimensional subspace of the spinor bundle, defined by an eigenvalue cutoff on the Dirac operator\cite{tbjk}. For the geometries considered in this paper, with only four large space-time dimensions, the screen is a two-sphere with radius $\sim N$ in Planck units times an internal manifold $K$ of fixed size. The variables are a collection of $N\times (N+1)$ complex matrices $\psi_i^A (P)$, one for each independent section of the cutoff spinor bundle on $K$. Their anti-commutation relations are
$$[\psi_i^A (P), {\psi^{\dagger}}^j_B (Q)]_+ = \delta_i^j \delta^A_B Z_{PQ},$$ with appropriate commutation relations with the $Z_{PQ}$ to make this into a super-algebra with a finite dimensional unitary representation whose representation space is generated by the action of the fermionic generators.
We will not have to use much of this formalism in the present paper, because the era of cosmic history that we are discussing is almost featureless. The covariant entropy bound is almost saturated, with the size of deviation from its saturation related to the size of the fluctuations discussed in this paper. We will explain this somewhat oracular remark below.
\subsection{Review of the HI Model}
We now review the model of inflation and fluctuations described in \cite{holoinflation}.
We begin with a holographic space time model of a flat FRW universe with $p=\rho$\cite{holocosmmath}, which we believe is the generic description of the early stages of any Big Bang universe. The Big Bang hypersurface is a topological cubic lattice of observer trajectories. The Hilbert space of {\it any} observer's causal diamond $T$ units of Planck time after the Big Bang, has dimension ${\rm dim\ }{\cal P}^{T(T+1)}$, where ${\cal P}$ is the fundamental representation of the compactification superalgebra. At each time the Hamiltonian is chosen from a random distribution of Hermitian matrices in this Hilbert space, with the following provisos
\begin{itemize}
\item Every observer has the {\it same} Hamiltonian at each instant of time.
\item For large $T$, the Hamiltonian approaches\footnote{The word {\it approaches} means that the CFT can be perturbed by a random irrelevant operator.} that of a non-integrable $1 + 1$ dimensional CFT with central charge $T^2$, living on an interval of length $o(T)$, with a cutoff of order $1/T$, in Planck units. The bulk volume scales like $T^3$, so the bulk energy density scales like $1/T^2$, and the bulk entropy density like $1/T$, which is the Friedmann equation for the $p = \rho$ FRW space-time. The theory has no scale but the Planck scale, so the spatial curvature vanishes, and the model saturates the covariant entropy bound\cite{FSB} at all times (these scalings are checked numerically in the sketch following this list).
\end{itemize}
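The scalings asserted in the second item follow from the dimension formula quoted above and the Bekenstein-Hawking relation. A rough numerical sketch, assuming that formula and the standard relation $s \propto \sqrt{\rho}$ for a $p = \rho$ fluid:
\begin{verbatim}
# Rough check of the scalings quoted in the second proviso, assuming the
# dimension formula dim H(T) = (dim P)^{T(T+1)} given above and the
# Bekenstein-Hawking relation A = 4 ln dim H (Planck units, d = 4).
import numpy as np

ln_dim_P = 1.0                        # assumed; only the T-dependence matters
T = np.arange(10, 10000, 10, dtype=float)

S = T * (T + 1.0) * ln_dim_P          # horizon entropy
A = 4.0 * S                           # horizon area
r = np.sqrt(A / (4.0 * np.pi))        # horizon radius
V = (4.0 / 3.0) * np.pi * r**3        # bulk volume
s_bulk = S / V                        # bulk entropy density
rho    = s_bulk**2                    # p = rho fluid: entropy density ~ sqrt(rho)

for name, y in [("area", A), ("entropy density", s_bulk), ("energy density", rho)]:
    slope = np.polyfit(np.log(T), np.log(y), 1)[0]
    print(name, "scales like T^%.2f" % slope)
# Expected output: roughly T^2, T^-1, T^-2, up to the T+1 correction.
\end{verbatim}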
We then modify this model in the following way. Choose two integers, $n$ and $N$ such that $1 \ll n \ll N$, which will determine the Hubble scale during inflation and the value of the Hubble scale corresponding to the observed cosmological constant, respectively. Choose one point on the lattice to represent the origin of ``our" coordinate system. We will treat the tilted hypercube consisting of all points a distance $\leq N/n$ lattice steps from the origin, differently than the points outside. For these points, we stop the growth of the Hilbert space at time $n$, for a while, and allow the Hamiltonian to remain constant. We also use $1 + 1$ dimensional conformal transformations to replace it with the same model on an interval of length $n^3$ with a cutoff of order $1/ n^3$. In \cite{holoinflation} we argued that this was the Hamiltonian of a single horizon volume of dS space, with Hubble radius $n$. The rescaling of the Hamiltonian should be viewed as a change of the trajectory under consideration from that of a geodesic observer in the original FRW, to that of a static observer in dS space. The Jacobsonian effective geometry corresponding to this model up to time $n$ is a $p=\rho$ FRW, which evolves to a dS space with Hubble radius $n$. The Jacobsonian Lagrangian contains the gravitational field and a scalar field, and the dynamics of the underlying model would imply that they were both homogeneous, if we had stopped the growth of the Hilbert space everywhere in the lattice of trajectories.
Outside the tilted hypercube however, we continue to use the $p=\rho$ Hamiltonian. In
\cite{holoinflation}, TB and WF argued that if $n = N$ there was a consistent set of overlap rules, which had the property that points outside the hypercube were forever decoupled from those inside, in the sense that the overlaps between interior and exterior points are always empty. The exterior Jacobsonian effective geometry corresponding to this model is a spherically symmetric black hole of radius $N$ in the $p =\rho$ geometry. The interior geometry is not, however, consistent with this unless $n = N$. The Israel junction condition, if we insisted on a dS geometry in the interior, would require that the boundary of the hypercube be a trapped surface with Hubble radius $N$.
We then proposed to modify the time evolution inside the hypercube to resolve this problem. Our modification is only to the Hamiltonian of a single observer at the center of the hypercube. We do not have a fully consistent HST model, with compatible Hamiltonians for all observers, corresponding to this model. However, since our single observer model behaves approximately like a local field theory at times $\gg n$, and QFT satisfies the HST overlap rules approximately, we expect that a full model can be constructed. We call this single observer model, Holographic Inflation (HI).
According to the rules of HST, the observer at the center of the HI model will be decoupled from the rest of the DOF of the universe forever. Since there exist solutions of Einstein's equations with multiple black holes embedded in a $p =\rho$ universe, we believe that the HI model can be embedded inside a larger model, in which the central observer's finite universe eventually collides with other universes, with different values of the cosmological constant. In \cite{holoinflation} we argued that this is one of many possible ways to solve the ``Boltzmann brain non-problem''. Since the collision time can be any time between a few times the current age of the universe and the unimaginably long recurrence time for the first Boltzmann brain, this embedding is completely irrelevant to any observation we could conceivably make.
\begin{figure}[t!]
\centering
\includegraphics[width=.8\textwidth]{HST_Inflation_1}
\caption{This figure illustrates how the time dependent Hamiltonian of the HI model encompasses more DOF on the fuzzy 3-sphere (explained below), as time goes on. Each band in the figure represents a fuzzy 2-sphere of radius $R(t_k ) = R \sin (\theta_k)$ at time $t_k$. The horizon radius $R(t)$ is a smooth function that approximates this discrete growth of the horizon for a large number of e-folds. It determines an FRW cosmology through
$ R(t) = R a(t)\int_{I}^t \frac{ds}{a(s)} .$}
\label{couple}
\end{figure}
The Hilbert space of the Holographic Inflation model has entropy of order $N^2$. Initially, the Hilbert space is broken up into $(N/n)^2 $ tensor factors, each of which
behaves like a single horizon volume of dS space. That is to say, the state of each of these systems is changing rapidly in time in a manner that leads to scrambling of information on a time scale $n\ {\rm ln}\ n$\cite{susskindsekino}. Now we gradually begin to couple these systems together, starting from those that are close to the center of the hypercube as shown in Figure \ref{couple}. The idea behind this is that time evolution up to time $n$ gave us multiple copies of the single $dS_n$ Hilbert space, corresponding to different observers. We now map all of those copies into the Hilbert space of the central observer. We want to get an emergent space-time which looks like multiple horizon volumes of $dS_n$.
Initially, the Hamiltonians of different observers were synchronized and the universe was exactly homogeneous and isotropic. However, when we couple together the copies of these systems in the Hamiltonian of the central observer, the coupling does not occur at synchronized times. Thus, the initial state as each successive horizon volume is coupled in can be thought of as a tensor product, but with a different, randomly chosen, state of the $dS_n$ system in each factor. {\it This is the origin of the local fluctuations, which eventually show up in the microwave sky of the central observer. It is also the origin of LOCALITY itself. } A conformal diagram of this unsynchronized coupling of $dS_n$ horizon volumes can be seen in Figure \ref{conformal}.
Indeed, in \cite{holoinflation} we pointed out that if we take $N = n$ we can find a completely consistent model of a universe which evolves smoothly from the $p=\rho$ Big Bang to $dS_N$, without ever producing a local fluctuation. It is exactly homogeneous and isotropic at all times, despite the fact that the initial state is random and the Hamiltonian is a fast scrambler. Although it corresponds to a coarse grained effective geometry, the model contains no local excitations around that background. Instead, it saturates the Covariant Entropy Bound at all times and is never well approximated by QUEFT, despite the fact that it is, for much of its history, a low curvature space-time. By taking $1 \ll n \ll N$, we find a model that interpolates between the $p = \rho$ Big Bang and asymptotic $dS_N$, via an era of small localized fluctuations, which, for a long time, remain decoupled from the majority of the horizon DOF in $dS_N$.
\begin{figure}[t!]
\centering
\includegraphics[width=.8\textwidth]{Conformal_Inflation_2}
\caption{A conformal diagram showing initially, separately evolving horizon volumes being coupled together asynchronously. The observer starts in the central horizon volume and the colored regions later in that horizon's history indicate when a nearby horizon volume (discretely separated at the colored points at $t=0$) is coupled to the Hilbert space of the observer. The red regions indicate sections of space-time that are decoupled from the central observer and allowed to evolve freely. Since this evolution is not synchronized with the time dependence of the Hamiltonian of the central horizon volume, the asynchronous coupling of independent horizon volumes gives rise to local fluctuations (indicated by different color opacities in the figure). }
\label{conformal}
\end{figure}
Thus, the role of inflation in the Holographic Inflation model is precisely to generate localized fluctuations, by starting the system off in a state where commuting copies of the same DOF are in different quantum states, from the point of view of the central observer. Below, we will map these commuting copies to different points on a fuzzy 3-sphere, so that the fluctuations in their quantum states become local inhomogeneities of the 3-sphere. These are, in our model, the origin of the CMB fluctuations, and they provide the raison d'\^{e}tre for localized excitations of the ultimate $dS_N$ space. One might say that the most probable path between the $p=\rho$ geometry and $dS_N$ is the homogeneous model described in the previous paragraph. By forcing the universe to go through a state where tensor factors of its Hilbert space are decoupled, the inflation model chooses a less probable, though more interesting, path\footnote{We are using the word probable in a somewhat peculiar way in this sentence. That is, the exactly homogeneous, entropy maximizing, model is a different choice of time dependent evolution operator than the HI model, which contains a period of inflation, and produces localized fluctuations. The latter model exploits the basic postulate of HST that the initial state in any causal diamond whose past tip is on the Big Bang hypersurface, is unentangled with DOF outside that diamond, to construct an evolution operator that exhibits approximate locality for a subset of DOF. As a consequence, the state of this model does not have maximal entropy for the period between the beginning of inflation and the time when all localized excitations decay to the dS vacuum. It's not clear whether we should call the second model ``less probable'' than the first. They are not part of the same theory. What we mean is that, at intermediate times, a random choice of state would coincide with the actual state determined by the time dependence of the first model, while the states of the second model would look non-random.}.
In the model described in \cite{holoinflation}, we organized all of the DOF which have interacted up to the end of inflation in terms of variables localized on a fuzzy hemi-3-sphere of radius $E$. In order to match with the bulk picture of inflationary geometry, this corresponds to a sphere with $e^{3N_e} n^2$ DOF. The boundary of this sphere is the holographic screen of the central observer's causal diamond at the end of inflation.
Indeed, in the bulk picture of inflation, all DOF encountered by the central observer in the future have been processed during the inflationary period.
Thus $$E^2 = e^{3N_e} n^2 \leq N^2 = 10^{123},$$
and $$ N_e \leq 94.4 - \frac{2}{3} {\rm ln}\ n = 85.4 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U} ,$$ where the ratio in the last term is that of the scale of inflation to the unification scale ($2\times 10^{16}$ GeV).
On the other hand, $E$ must be large enough to encompass all of the degrees of freedom that manifest themselves as fluctuations in the CMB.
The entropy of CMB photons in the current universe is
$$ (\frac{T}{M_P} N)^3 \sim 10^{89} .$$ However, the entropy in the {\it fluctuations} is only a fraction of this
$$S_{fluct} = 3\frac{\delta T}{T} \times 10^{89} \sim e^{169} \leq e^{3N_e} \times n^2 .
$$Thus $$ N_e \geq 49 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U}.$$
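As a rough numerical check of these two bounds, the sketch below evaluates them for an assumed value of $n$, identified with the inflationary Hubble radius in Planck units and chosen to correspond to unification-scale inflation; both the value and the identification are assumptions of the sketch.
\begin{verbatim}
# Rough numerical check of the two bounds on N_e quoted above.  The value
# of n (the inflationary Hubble radius in Planck units) is an assumption,
# chosen to correspond roughly to unification-scale inflation.
import math

ln_N2      = 123.0 * math.log(10.0)   # ln N^2, from N^2 = 10^123
ln_S_fluct = 169.0                    # ln of the fluctuation entropy quoted above
n          = 3.7e5                    # assumed Hubble radius for M_I ~ 2x10^16 GeV

Ne_max = (ln_N2      - 2.0 * math.log(n)) / 3.0   # from e^{3 Ne} n^2 <= N^2
Ne_min = (ln_S_fluct - 2.0 * math.log(n)) / 3.0   # from e^{3 Ne} n^2 >= S_fluct
print("N_e between %.1f and %.1f" % (Ne_min, Ne_max))
# This roughly reproduces the window quoted in the text for M_I ~ M_U.
\end{verbatim}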
We can estimate the size of the local fluctuations by the usual rules of statistical mechanics. The local subsystems have entropy of order $n^2$, so that a typical fluctuation of a local quantity is $o(1/n)$. This indicates an inflation scale of order the GUT scale, if we use the CMB data to normalize the two-point function. The fluctuations are also close to Gaussian, again because they are extensive on the inflationary holoscreen. $k$-point functions scale like $n^{-k}$. Note that, apart from factors which arise from the translation of these quantum amplitudes into the fluctuations used in classical cosmological perturbation theory, this is the scaling of $k$-point functions expected in a conventional slow-roll model. However, the size of these fluctuations is fixed by $n^{-1}$, rather than by the effective $H(t)$ that one would get if one computed QUEFT fluctuations in a slowly evolving cosmology. We pointed out in the previous section that this leads to a prediction of zero tensor tilt in the HI model, but that our ignorance of the correct form of $H(t)$ makes it difficult to differentiate the predictions for scalar tilt of the HI and bulk QUEFT models of fluctuations.
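A toy statistical-mechanics illustration of the first two statements (the $1/n$ magnitude and the approach to Gaussianity) can be given with independent microscopic variables; this caricature is not the actual HI dynamics, and is meant only to exhibit the scalings.
\begin{verbatim}
# Toy illustration: an intensive quantity averaged over a subsystem with
# entropy ~ n^2 (caricatured here by n^2 independent binary variables)
# fluctuates by O(1/n) and becomes increasingly Gaussian as n grows.
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 30, 100):
    counts  = rng.binomial(n * n, 0.5, size=20000)
    samples = 2.0 * counts / (n * n) - 1.0          # average of n^2 +/-1 spins
    std  = samples.std()
    skew = np.mean((samples - samples.mean())**3) / std**3
    print("n=%4d  n*std=%.3f  skewness=%.4f" % (n, n * std, skew))
# n*std stays O(1) while the skewness shrinks, as the central limit theorem requires.
\end{verbatim}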
The main burden of the present paper is to explore the consequences of the dS invariance of these fluctuations. Note that there is no meaning to dS invariance in the theory of a single stable dS space. The physics of that system is confined to a single horizon volume, and only an $R \times SU(2)$ subgroup of $SO(1,4)$ leaves the horizon volume invariant. The coset of this subgroup maps the observer's horizon volume into others, and does not act on physical observables. However, in our Holographic Inflation model, the central observer sees $(E/n)^2$ horizon volumes, and if this number is large, we must build a model which closely approximates the properties of the classical $dS_n$ space, which is seen by a single observer. At the end of inflation, this observer's causal diamond contains many horizon volumes of $dS_n$ and so a model {\it approximately} invariant under $SO(1,4)$ is appropriate. We will argue that the corrections to this symmetric model, {\it for the calculation} of correlations between fluctuations at small numbers of points, are suppressed by powers of $e^{ - N_e}$, and it is reasonable to neglect them. The continuous $SO(1,4)$ invariant model overestimates the total number of quantum states in the universe by an infinite factor, but most of those states are not probed by the limited observations we make on the CMB.
At this point it is worth noting that the model presented in \cite{holoinflation} and the present paper does not really describe the CMB. Within the HST formalism, we have not yet understood how to describe conventional radiation and matter dominated universes, where the source of the gravitational field is particles, rather than another effective classical field (the inflaton). Our model ends with a time independent Hamiltonian which is (approximately) a generator, $\mathcal{L}_{04}$, of $SO(1,4)$. It is {\it not} the Hamiltonian we have conjectured to describe particle physics in $dS_N$\cite{tbds,holounruh}.
Thus our model is not a realistic cosmology. Its hydrodynamic description is that of an FRW geometry coupled to a scalar field, which has small inhomogeneous fluctuations on a 3-hemisphere of radius $e^{N_e} n$. In the Jacobsonian effective field theory description, these are fluctuations in the classical value of the inflaton, which are chosen from an approximately Gaussian, approximately dS invariant distribution described in the previous section. The normalization of the two-point function is determined by $n$, and we've observed that it coincides with the observed strength of CMB fluctuations if $n$ is of order the unification Hubble radius, but the model has no CMB. The Lagrangian for the inflaton must interpolate between the $p = \rho$ geometry, and $dS_N$ via a period of $N_e = \frac{2}{3} {\rm ln}\ (E/n)$ e-folds of inflation, plus sub-luminal expansion for the period when the horizon radius stretches from $E$ to $N$. It is therefore a conventional slow-roll inflation Lagrangian, with parameters chosen to fit the underlying quantum model.
To accommodate hypothetical HI models with blue tilt, we can either tune the inflaton potential so that the slow-roll parameter $\eta > 6\epsilon$, or use a hybrid model as the Jacobsonian THEFT. We emphasize that from the point of view of HST, we are merely searching for the classical model that fits the hydrodynamics of an underlying quantum system. In HI, that quantum system is not even approximately a QUEFT, at least at the beginning of inflation, when the fluctuations are actually generated. The fluctuations calculated from the density matrix of the underlying model are inserted into the classical space-time equations of the THEFT, as fluctuations of the metric, in the co-moving gauge for the inflaton.
In a more realistic model, we would have to make a transition from $\mathcal{L}_{04}$ to the Hamiltonian of a geodesic observer in $dS_N$. The latter Hamiltonian describes particles, and we would have to show how the fluctuations in the inflaton get transmuted into distributions of photons and matter. This is the physics encompassed in the conventional process of {\it reheating}, and the subsequent propagation of photons through an inhomogeneous space-time, including phenomena like the
Sachs-Wolfe effect. We know perfectly well how to build a QUEFT of this era, by coupling the classical inhomogeneous inflaton field to quantum fields describing particles. It's basically the challenge of describing the particle physics in terms of HST that is beyond our reach at present. There are however, a few remarks that we can make. The first is that the conventional matter and radiation dominated eras lead to an increase in the radius of the horizon by an amount $\alpha N$, with $\alpha$ a parameter strictly less than, but of order, $1$. The fact that $\alpha$ is less than one follows from the general properties of asymptotically dS cosmologies which are not exactly dS, while the fact that it's of order one reflects the very recent crossover between matter and radiation domination. Thus, we should take $E \ll N$.
\subsection{The Fuzzy 3-sphere}
In order to construct a model, which is effectively local in a 3 dimensional space, we label the $E^2 = e^{3 N_e} n^2 $ variables in the following manner. The geometry seen by an observer at the end of inflation is a 3-sphere of radius $R_I = e^{N_e} n$.
We have of order $E^2$ degrees of freedom, which can be thought of holographically as living on the holographic screen of a causal diamond, with radius $ E \sim e^{\frac{3 N_e}{2}} n\gg R_I$, when the number of e-foldings is large. We will distribute these ``uniformly" over a fuzzy 3-sphere of radius $R_I$.
\begin{figure}[t!]
\centering
\includegraphics[width= .9\textwidth]{geodesic}
\caption{Tilings of fuzzy two spheres of different radii. The maximally localized spinor wave functions at the centers of the tiles are a basis for the cutoff spinor bundle, with angular momentum cutoff determined by the radius of the sphere in Planck units.}
\label{tile}
\end{figure}
A 3-hemisphere can be thought of as a fiber bundle with two-sphere fibers, over the interval $[0, \frac{\pi}{2}]$ . The two-sphere at angle $\theta$ has a radius $R_I \sin\theta $. The HST version of this geometry is a collection of variables $\psi_i^A (\theta_k ) ,$ where the matrix
at $\theta_k$ is $N_k \times (N_k + 1)$ with $$N_k = R_I \sin\theta_k .$$
We take $\sin\theta_1 = n/R_I$, while $$\sum_k \sin\theta_k (\sin\theta_k + 1/R_I ) = e^{N_e} .$$ The $\theta_k$ are equally spaced in angle along the interval. Since each $N_k \geq n \gg 1$, we can construct, for each two sphere, a basis of spinor spherical harmonics localized on the faces of a truncated-icosahedral, geodesic tiling of the sphere, obtaining an approximately local description of our hemi-3-sphere. This tiling scheme is shown in Figure \ref{tile}. The centers of the faces, combined with the discretized interval parametrized by $\theta_k$ define a lattice on the 3 sphere. Our spinor variables $\psi_i^A$ have a natural action of $SO(4) = SU(2) \times SU(2)$ acting separately on the rows and columns of the matrix. We combine this with the discrete $SO(4)$ rotations which take points of the lattice into each other. As $N_e \rightarrow\infty$ we can construct operators which turn our unitary representation of $SO(3)$ into a unitary representation of $SO(4)$. In addition, we argued in \cite{holoinflation} that, in the limit, we could construct operators $\mathcal{L}_{0M}$ which extend this to a unitary representation of $SO(1,4)$. We thus conjectured that the Hilbert space of the localized variables $\psi_i^A (\theta_k )$ admits an action of $SO(1,4)$, in the limit $N_e \rightarrow\infty$ ,
and that it can be described in terms of field operators $O_A (X)$ transforming covariantly under the action of $SO(1,4)$ on the 3-sphere, as we assumed in the previous section. {\it We have argued that they DO NOT obey the axioms of conventional Euclidean CFT. In particular, the Hilbert space admits an infinite dimensional unitary representation of $SO(1,4)$ which cannot be highest weight (there are no highest weight unitary representations). This also implies that there are generally many $SO(1,4)$ invariant states in the representation. Our results for the correlation functions of inflationary fluctuations depended on the assumption that the state of the system after inflation is invariant, but not on the particular choice of invariant state.} Note that the operators $O(X)$ representing the local fluctuations commute at different points, because they probe properties of the individual, originally non-interacting, horizon volumes. We work in the Schr\"odinger picture, in which the density matrix, rather than the operators, evolves.
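As a rough numerical check of this counting (for small, purely illustrative values of $n$ and $N_e$, and ignoring the condition $\sin\theta_1 = n/R_I$ on the first slice), one can verify that roughly $2 e^{N_e}$ equally spaced slices satisfy the sum rule above and reproduce the total entropy $E^2 = e^{3N_e} n^2$:
\begin{verbatim}
# Consistency check of the fuzzy 3-sphere counting, for small illustrative
# values of n and N_e.  The number of theta slices, K ~ 2 e^{N_e}, is chosen
# so that the sum rule quoted above is approximately satisfied; the exact
# placement of the first slice is ignored in this sketch.
import numpy as np

n, Ne = 50, 5.0
R_I   = np.exp(Ne) * n                    # radius of the 3-sphere
E2    = np.exp(3.0 * Ne) * n**2           # target total entropy

K     = int(round(2.0 * np.exp(Ne)))      # number of equally spaced slices
theta = (np.arange(K) + 0.5) * (np.pi / 2.0) / K
N_k   = R_I * np.sin(theta)               # size of the matrix on each 2-sphere

sum_rule = np.sum(np.sin(theta) * (np.sin(theta) + 1.0 / R_I))
total    = np.sum(N_k * (N_k + 1.0))      # total number of matrix entries
print("sum rule: %.1f  (target e^{N_e} = %.1f)" % (sum_rule, np.exp(Ne)))
print("total DOF / E^2 = %.3f" % (total / E2))
\end{verbatim}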
The preceding paragraph described mathematics. We incorporate it into the physics of our inflationary universe in the following way. We have followed the universe using the rules of HST from its inception until a time when the particle horizon had a size $n$. At that time, a very large number of observers have Hilbert spaces of entropy $n^2$ and are described by identical states and Hamiltonians. The individual Hamiltonian is that of a non-integrable, cutoff $1 + 1$ dimensional field theory whose evolution, time averaged over several e-foldings, produces a maximally uncertain density matrix. This description extends out from a central point on the lattice of observers for a distance $N/n$, up to the surface that will eventually be our cosmological horizon. Points on the lattice of observers that are more than $n$ steps apart, have no overlap conditions. We make a coarser sublattice, consisting of centers of tilted cubes on the original lattice, whose Hilbert spaces have no overlaps. We now want to describe the Hamiltonian of the central observer, from the time that individual points on the coarse sublattice thermalize, until the end of inflation. We will {\it not} provide a complete HST description of this era, because it is currently beyond our powers.
To construct this Hamiltonian, we begin with the Hilbert space of entropy $E^2$ described above, and identify points in the coarse lattice of HST observers, with points on the fuzzy 3-sphere described above. The central observer is identified with the point $\theta_1$ on the interval. There is no sense in further localizing it on the fuzzy two sphere at that point, because the state in its Hilbert space is varying randomly over a Hilbert space of entropy $n^2$. There are no localized observables at length scales smaller than $n$. We think of this geometrically as saying that the area of the hexagon centered at this observer's position has area $n^2$. Each point on the 3-spherical lattice has, at the beginning of this era, an identical wave function in a Hilbert space of entropy $n^2$. The time dependent Hamiltonian of the central observer now begins to couple together points on the spherical lattice, in a manner consistent with causality. That is, as the proper time of the central observer increases, we assume that its causal diamond increases in area, and the Hamiltonian couples together points that are closest to it on the 3-sphere, in accordance with the covariant entropy bound. In principle, the rate, in proper time, at which the area of the holographic screen grows, tells us about the FRW background geometry. The Jacobsonian effective field theory of this is a model of gravity coupled to a scalar, with a potential that leads to $N_e$ e-folds of inflation, and a rapid transition to $dS_N$. We are dealing with only a single observer, and do not have overlap constraints to guide us, so we could incorporate any geometry consistent with the entropy bounds.
The rate at which different points on the sphere are coupled together is not connected to the rate of change of the state according to the local Hamiltonian, which is randomizing individual Hilbert spaces of entropy $n^2$. {\it Therefore there will be local fluctuations of the initial quantum state at different points on the 3-sphere}. This is the physical origin of the fluctuations whose form we described in the previous section. Above, we have argued that when $n \gg 1$ they are approximately Gaussian and estimated their magnitude. They should clearly be thought of as statistical fluctuations in the quantum state, rather than quantum fluctuations in a pure state. Of course, since we detect these fluctuations in properties of a macroscopic system, there is no way that one could have ever detected the quantum nature of fluctuations in the conventional inflationary picture, but the point of principle is significant. In a more realistic model, these fluctuations would be the origin of what we observe in the CMB and the clumpy distribution of matter around us.
We construct our model so that, by the time the size of the holographic screen has reached $E$, the Hamiltonian of the DOF in that diamond is the generator $\mathcal{L}_{04}$, which approaches an element of the $SO(1,4)$ Lie Algebra in the (fictitious) limit $N_e \rightarrow\infty$. The system is characterized by a density matrix, because the state of each point on the fuzzy 3-sphere is random, and the times at which different points become coupled together are not locked in unison\footnote{For purists, we should point out that we're not postulating non-unitary evolution, merely noting that the initial conditions of our problem introduce some randomness into the pure state of the universe. We're simply making predictions by averaging over this ensemble of possible random states, since no observation can ever determine what the correct initial state was.}. Note however that the initial time averaged density matrices at each point {\it are} identical, by construction, and are exactly $SO(3)$ invariant. It is extremely plausible that the density matrix is approximately $SO(1,4)$ invariant when $N_e$ is large. This is our principal assumption. The ``lattice spacing" on our 3-sphere is of order $e^{- N_e}$ so corrections to $SO(1,4)$ invariance are, plausibly, exponentially small. Note that we are free to construct a model for which this is true. The only constraint on model building in HST (apart from those we are clearly satisfying) comes from the overlap rules. We are not, of course, implementing the overlap rules in this paper, but we see no reason why they should be incompatible with approximate $SO(1,4)$ invariance of a single observer's density matrix.
It's important to realize that $SO(1,4)$ invariance of the density matrix does not imply exact dS invariance of the universe, as described by its THEFT. The density matrix is a probability distribution for fluctuations and the THEFT is the result of classical evolution starting from typical initial conditions. This is, of course, exactly as in conventional inflation models. Also, the fact that, in the underlying HI model, all degrees of freedom are in interaction, means that inflation is ending, so even the homogeneous background should be moving away from its dS form.
\section{Conclusions and Comparison With Observations}
We have argued that the form of primordial fluctuations, which has been derived to leading order in slow-roll parameters for a slow-roll inflation model with the assumption of the Bunch-Davies vacuum (see Appendix A for an argument that this assumption is a fine tuning of massive proportions), in fact follows from a much less restrictive set of assumptions. These are $SO(1,4)$ invariance and approximate Gaussianity, plus a particular choice for the $SO(1,4)$ representations for the operator representing scalar fluctuations. This choice, plus $8$ normalizations for the different two- and three-point functions, determine the fluctuations uniquely. In slow-roll models, these normalizations depend on parameters in the slow-roll potential, while Gaussianity is a prediction of the model and the leading non-Gaussian amplitude is suppressed by a power of the slow-roll parameters. We have noted that the dominance of scalar over tensor two-point fluctuations is a general consequence of cosmological perturbation theory for near de Sitter backgrounds, and the assumption that the scalar and tensor components of the curvature have similar intrinsic fluctuations (as they do in both slow-roll and HI models). Maldacena's squeezed limit theorem, combined with $SO(1,4)$ invariance, determines all three-point functions involving scalars in terms of the scalar and tensor two-point functions.
We've also reviewed the HST model of inflation presented in \cite{holoinflation}. It predicts approximately Gaussian and $SO(1,4)$ invariant fluctuations, robustly and without assumptions about the initial state. Like all HST cosmologies it is completely finite and quantum mechanical. $SO(1,4)$ invariance follows from the assumption that evolution with the $\mathcal{L}_{04}$ generator of an initially $SO(3)$ invariant density matrix will lead to an $SO(1,4)$ invariant density matrix after a large number of e-foldings.
The number of e-foldings is not a completely independent parameter, but is bounded by the ratio between the inflationary and final values of the Hubble radius. If we require that we have enough entropy in the system at the end of inflation to account for the CMB fluctuations, then
$$ 49 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U} \leq N_e \leq 85.4 - \frac{4}{3} {\rm ln}\ \frac{M_I}{M_U} .$$ In order to leave room for the subluminal expansion of conventional cosmology, we should not be near the lower bound.
In the slow-roll models, the small deviation from the ``scale invariant" predictions $$n_S = n_T + 1 \sim 1$$ is explained by the slow-roll condition. A similar argument for a general $SO(1,4)$ symmetric model (and in particular the HI model) follows from the fact that the parameter $\Delta_{S}$ labeling the scalar fluctuations is bounded, $\leq 3/2$, by unitarity of the representation of $SO(1,4)$. The construction of the HST model guarantees that the effective bulk geometry, constructed from local thermodynamics following the prescription of Jacobson, goes through a period of inflation, which ends. We do not yet have an HST description of reheating, and the era of cosmology dominated by particle physics. The dominance of the scalar over tensor fluctuations, the smallness of non-Gaussianity involving the scalar, and the fact that the scalar and tensor tilt are both small, all follow from the fact that $\frac{\dot{\bar{H}}}{(\bar{H})^2}$ is small and that $\Delta_{S}$ is bounded. At the level of two-point functions, the only relation that distinguishes conventional slow-roll inflation (including hybrid inflation models) from generic dS invariant quantum theory is the precise relation between the normalizations and tilts of the scalar and tensor fluctuations and the fact that the HI model predicts vanishing tensor tilt. Depending on the precise form of $H(t)$, there may be a critical value of $\Delta_S$ for which the scalar tilt shifts from red to blue. It will be interesting to see whether further investigation of HST models can predict that the scalar tilt is red. At our present level of understanding, the scalar tilt is a competition between a blue tilt induced by choosing a ``massive" representation for $\Delta_S$ and a red tilt induced by the conventional normalization of fluctuations. We do not have an {\it a priori} argument for which of these dominates, or even whether there are different models where either can dominate.
At the level of non-Gaussian fluctuations, things are a bit more interesting. Slow-roll models with Lagrangians containing only the minimal number of derivative terms give rise to only one of the three possible $SO(1,4)$ covariant forms for the triple tensor correlation function. Even if we include higher derivatives, we cannot get the parity-violating form. Thus, observation of the purely tensor bispectrum could tell us whether we were seeing conventional slow roll, or merely a generic model with approximate $SO(1,4)$ symmetry. On the other hand, the parity-violating amplitude might be forbidden in general by a discrete symmetry of the HI model. At the moment, we do not see an argument that would require such a symmetry.
We also want to emphasize that the inflation literature is replete with models that give the standard predictions for two-point functions, but predict three-point functions that are far from $SO(1,4)$ invariant. In these models, Maldacena's squeezed limit theorem does not imply that the scalar three-point function is small everywhere in momentum space. According to our current understanding, observation of a large scalar three-point function could rule out all models based on $SO(1,4)$ symmetry, and might point to some non-vanilla, QUEFT-based inflation model.
Our considerations imply that so long as observations remain consistent with some slow-roll inflation model, they will not distinguish a particular model among the rather large class we have discussed without also observing tensor fluctuations. The only observations that are likely to validate the idea of a QUEFT with Bunch-Davies fluctuations of quantum fields are a precise validation of the single-field slow-roll relation between two-point functions, or a measurement of the tensor three-point function. On the other hand, observations that validate non-standard inflationary models, like DBI inflation, or show evidence for iso-curvature fluctuations, could rule out the general framework discussed in this paper. While it is possible that HST models can be generalized to include iso-curvature fluctuations, this is not in the spirit of those models. The key principle of HST cosmologies is that the very early universe is in a maximally mixed state, which is constantly changing as new DOF enter the horizon. The model of \cite{holoinflation} was designed to be the minimal deviation from such a maximal-entropy cosmology that allows for a period in which localized excitations decouple from the bulk of the degrees of freedom on the horizon. A model with more structure during the inflationary era would introduce questions like ``Why was this necessary?", which could at best be justified (though it seems unlikely) by anthropic considerations.
To conclude, we want to reiterate a few basic points. Conventional inflation appears fine-tuned because of what is usually called the Trans-Planckian problem (Appendix A). A generic state of the DOF that QUEFT buries in the extreme UV modes of the inflationary patch has no reason to evolve to the Bunch-Davies state. Moreover, if we accept the idea that local patches of dS space become completely thermalized within a few e-foldings, and that the generic state has no localized excitations, then it does not really make sense to treat its dynamics by QUEFT. In \cite{holoinflation} we proposed a model that preserves causality, unitarity, and the covariant entropy bound, and which, with no fine tuning of initial conditions, leads to a coarse-grained space-time description as a flat FRW model with a large number of e-folds of inflation. The model produces a nearly Gaussian spectrum of almost-de Sitter invariant scalar and tensor metric fluctuations. The model can be matched to a slow-roll QUEFT model (with a different space-time metric) at the level of scalar fluctuations, but predicts no tensor tilt and, in the absence of an explicitly imposed symmetry, would have all three invariant forms of the tensor three-point function with roughly equal weights.
\section{Introduction}
\label{sec:Introduction}
Whereas the reduction of turbulent transport by zonal flows is a widely accepted phenomenon~\cite{Hammett1993,Diamond2005}, its quantitative understanding is still poor. The determination of the zonal flow amplitude and that of the turbulent transport level are nonlinear problems whose solution requires costly gyrokinetic simulations. The cost grows enormously if, by means of parameter scans, one wants to know how those quantities depend on the magnetic geometry or the plasma conditions. A useful simplification is provided by the initial value problem consisting of calculating the long-time and collisionless evolution of a zonal perturbation. Among other reasons, its usefulness is due to the fact that an exact expression for the value of the perturbation at $t=\infty$, called the residual value, can be derived. Even though the explicit evaluation of the final expression can only be carried out in simplified geometries and for long wavelengths of the perturbation, the analytical solution gives insight into the physics.
This partly explains the attention attracted by the work of Rosenbluth and Hinton~\cite{Rosenbluth1998} and the effort put on subsequent extensions that we cite below. In stellarator research, the interest in this problem was spurred by the suggestion~\cite{Watanabe2008} of a direct relation between zonal flow residual value and turbulent transport level.
The seminal paper~\cite{Rosenbluth1998} dealt with long-wavelength
potential perturbations in large aspect ratio and circular cross
section tokamaks. Explicit solutions of related problems for more
complex tokamak geometries and arbitrary wavelengths have been given
in~\cite{Xiao2006,Xiao2007}. In recent years, several articles have
addressed the problem in stellarators~\cite{Sugama2005, Sugama2006,
Mishchenko2008, Helander2011, Xanthopoulos2011}. In this paper we
report on a code that evaluates, fast and accurately, the exact expressions for the zonal flow residual value, for arbitrary wavelengths and for tokamak and stellarator geometries. But why is this useful if the same answer can be obtained, in principle, by linear runs of a local gyrokinetic code? As we will illustrate later on, solving the residual zonal flow problem by means of gyrokinetic simulations, especially at short wavelengths, is very demanding in terms of computational resources. We will show that our method is faster; the difference can amount to several orders of magnitude in computing time, especially in the case of stellarators. Moreover, the code and the results of this work are interesting not only from the point of view of physics, but also for the validation of gyrokinetic codes.
The rest of the paper is organized as follows. In
Section~\ref{sec:Residual} we derive the residual zonal flow
expressions for arbitrary wavelengths and magnetic geometry. In
Section~\ref{sec:cas3dk} we report on the code used to evaluate the
expressions presented in Section~\ref{sec:Residual}. As a check, we
compare our calculations with analytical results from
\cite{Xiao2006,Xiao2007}, obtained in simplified tokamak geometry. In
order to avoid any confusion, we note that the results in
\cite{Xiao2006,Xiao2007} were not derived as an
initial value problem, but as the stationary solution of a forced
system. We explain this in more detail in
Section~\ref{sec:cas3dk}. In Section~\ref{sec:results} our results are
compared with local and global gyrokinetic simulations of the initial
value problem, employing the codes
\gene~\cite{Jenko2000,Gorler2011,GENE,Xanthopoulos2009} and {\small EUTERPE}
\cite{Jost2001, Kleiber2012}, respectively. We also compare the computational time required by each approach, showing that the method introduced in this paper is faster than the gyrokinetic simulations. Stellarator residual
values are calculated, using our method and also gyrokinetic codes,
for a range of wavelengths much wider than previously available in the
literature. The stellarator calculations are done for the standard
configuration of Wendelstein 7-X (W7-X). We comment on a purely
stellarator effect already predicted in~\cite{Sugama2006,
Helander2011}; namely, that the approximation of adiabatic electrons
is always incorrect (even at long wavelengths) for the purpose of
determining the residual zonal flow in stellarators. We quantify the
error by computing, with our method, the residual value when kinetic
or adiabatic electrons are used. This result is confirmed by
gyrokinetic simulations. The conclusions are presented in
Section~\ref{sec:conclusions}.
Throughout this paper, we assume that the electrostatic potential $\varphi$ associated with the zonal flow is constant on flux surfaces. It is true that zonal flows ({\it i.e.} flows that are only weakly damped, and therefore remain in the plasma for long times) usually correspond to electrostatic potentials with small variations on flux surfaces, but we should emphasize that assuming that $\varphi$ is constant on flux surfaces is, at most, a good approximation.
\section{Linear collisionless evolution of zonal flows}
\label{sec:Residual}
In this section, we give a detailed calculation of the residual zonal flow for arbitrary wavelengths in tokamak and stellarator geometries. We solve the linear and collisionless gyrokinetic equations at long times, assuming that the electrostatic potential perturbation is constant on flux surfaces.
In strongly magnetized plasmas, one employs the smallness of $\rho_{ts\star} =\rho_{ts}/L$ to average over the gyromotion. Here, $L$ is the characteristic length of variation of the magnitude of the magnetic field $B$, $\rho_{ts}=v_{ts}/\Omega_s$ is the thermal gyroradius, $v_{ts} = \sqrt{T_s/m_s}$ is the thermal speed, $\Omega_s=Z_seB/m_s$ is the gyrofrequency, $T_s$ is the equilibrium temperature, $m_s$ is the mass, and $Z_se$ is the charge of species $s$, where $e$ is the proton charge. Gyrokinetic theory~\cite{Catto1978,Frieman1982,Brizard07,Parra2011,Dubin1983,Parra2014} gives a procedure to rigorously derive the gyroaveraged kinetic equations order by order in $\rho_{ts\star} \ll 1$. The averaging operation is conveniently expressed in a new set of phase space coordinates, called gyrokinetic coordinates. Denote by $\{\mathbf{r},\mathbf{v}\}$ the particle position and velocity. The coordinate transformation, to lowest order in $\rho_{ts\star}$, is given by
\begin{eqnarray}\label{eq:gyrokintrans}
\mathbf{r} =
\mathbf{R} + \rhobf_s(\mathbf{R},v,\lambda,\gamma)
+ O(\rho_{ts\star}^2 L),\nonumber\\[5pt]
\mathbf{v} =
v_{\parallel}(\mathbf{R},v,\lambda,\sigma)\hat\mathbf{b}(\mathbf{R}) +
\Omega_s
\rhobf_s(\mathbf{R},v,\lambda,\gamma)\times\hat\mathbf{b}(\mathbf{R})
+ O(\rho_{ts\star}v_{ts}).
\end{eqnarray}
In equation (\ref{eq:gyrokintrans}), $\mathbf{R}$ is the gyrocenter position, $v$ is the magnitude of $\mathbf{v}$, $\lambda = B^{-1}v_\perp^2/v^2$ is the pitch angle, $\sigma = v_{\parallel}/|v_{\parallel}|$ is the sign of the parallel velocity
\begin{equation}
v_\parallel(\mathbf{R},v,\lambda,\sigma) =
\sigma v \sqrt{1- \lambda B(\mathbf{R})},
\end{equation}
$\hat\mathbf{b}$ is the unit vector in the direction of the magnetic field $\mathbf{B}$, $v_\perp$ is the component of the velocity perpendicular to $\mathbf{B}$, and $\rhobf_s$ is the gyroradius vector, defined as
\begin{equation}
\rhobf_s(\mathbf{R},v,\lambda,\gamma)
= \frac{m_sv}{Z_s e}\sqrt{\frac{\lambda}{B(\mathbf{R})}}
\left[\hat\mathbf{e}_2(\mathbf{R}) \cos{\gamma} -
\hat\mathbf{e}_1(\mathbf{R}) \sin{\gamma}\right].
\end{equation}
Here, $\hat\mathbf{e}_1(\mathbf{R})$ and $\hat\mathbf{e}_2(\mathbf{R})$ are unit vector fields orthogonal to each other which satisfy $\hat\mathbf{e}_1 \times \hat\mathbf{e}_2 = \hat\mathbf{b}$ at every point. Finally, the gyrophase $\gamma$ is
\begin{equation}
\gamma = \arctan(\mathbf{v}\cdot \hat\mathbf{e}_2 /
\mathbf{v}\cdot \hat\mathbf{e}_1).
\end{equation}
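As a concrete illustration, the following minimal Python sketch (our own illustration, not part of any code discussed in this paper) evaluates the lowest-order transformation (\ref{eq:gyrokintrans}) for a hypothetical uniform field $\mathbf{B}=B\hat{\mathbf{z}}$ with $\hat\mathbf{e}_1=\hat{\mathbf{x}}$ and $\hat\mathbf{e}_2=\hat{\mathbf{y}}$; the printed particle speed should equal $v$.
\begin{verbatim}
import numpy as np

def particle_from_gyrocenter(R, v, lam, sigma, gamma, B=2.5, m=1.0, Ze=1.0):
    # Lowest-order transformation r = R + rho_s, v = v_par b + Omega_s rho_s x b,
    # for a uniform field B = B z_hat with e1 = x_hat, e2 = y_hat (hypothetical setup).
    Omega = Ze * B / m
    b_hat = np.array([0.0, 0.0, 1.0])
    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([0.0, 1.0, 0.0])
    rho = (m * v / Ze) * np.sqrt(lam / B) * (e2 * np.cos(gamma) - e1 * np.sin(gamma))
    v_par = sigma * v * np.sqrt(1.0 - lam * B)
    r = R + rho
    vel = v_par * b_hat + Omega * np.cross(rho, b_hat)
    return r, vel

r, vel = particle_from_gyrocenter(np.zeros(3), v=1.0, lam=0.2, sigma=+1, gamma=0.3)
print(r, vel, np.linalg.norm(vel))  # the speed should be 1 up to rounding
\end{verbatim}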
We introduce straight field line coordinates $\{\psi,\theta,\zeta\}$, where $\psi\in[0,1]$ is the radial coordinate defined as the normalized toroidal flux $\psi=\Psi_t/\Psi_t^\mathrm{edge}$, $\theta$ is a poloidal angle and $\zeta$ is a toroidal angle, with $\theta,\zeta\in[0,1)$. The magnetic field in these coordinates is written as
\begin{equation}\label{eq:Bcontravariantform}
{\bf B} =
-\Psi_p'(\psi)\nabla \psi \times \nabla (\zeta - q(\psi)\theta).
\end{equation}
Here, $q(\psi) = \Psi'_t(\psi)/\Psi'_p(\psi)$ is the safety factor, and $\Psi_p'(\psi)$ and $\Psi_t'(\psi)$ are the derivatives of the poloidal and toroidal fluxes with respect to $\psi$. It will be convenient to define the coordinate $\alpha := \zeta - q(\psi) \theta$, which labels magnetic field lines on each flux surface. Unless otherwise stated, we use $\{\psi,\theta,\alpha\}$ as the set of independent spatial coordinates. Note that $\mathbf{B} \cdot\nabla\psi = 0$ and $\mathbf{B} \cdot\nabla\alpha = 0$. We employ $\theta$ as the coordinate along a field line.
The distribution function in gyrokinetic coordinates, $F_s=F_s(\psi,\theta,\alpha,v,\lambda,\sigma,\gamma,t)$, can be written as
\begin{equation}
F_s(\mathbf{R},v,\lambda,\sigma,\gamma,t) = F_{s0}(\mathbf{R},v) +
F_{s1}(\mathbf{R},v,\lambda,\sigma,t) + O(\rho_{ts\star}^2F_{s0}),
\end{equation}
where $F_{s1}=O(\rho_{ts\star}F_{s0})$ and $F_{s0}$ is a Maxwellian distribution whose density $n_s$ and temperature $T_s = m_s v_{ts}^2$ are flux functions,
\begin{equation}
F_{s0}(\mathbf{R},v) :=
\frac{n_s(\psi(\mathbf{R}))}{(\sqrt{2\pi} v_{ts}(\psi(\mathbf{R})))^3}
\exp\left(-\, \frac{v^2}{2v^2_{ts}(\psi(\mathbf{R}))}\right).
\end{equation}
The lowest-order quasineutrality condition implies $\sum_s Z_s e n_s(\psi) = 0$. Note that to $O(\rho_{ts\star}F_{s0})$ the distribution function is independent of the gyrophase.
The linear and collisionless time evolution of $F_{s1}$ is given by~\cite{CalvoParra2012,Parra2014}
\begin{eqnarray}\label{eq:gk0}
\fl \partial_t H_{s1}
+ (v_{\parallel}\,\hat\mathbf{b} + \mathbf{v}_{ds})
\cdot\nabla H_{s1}
=
\frac{Z_se}{T_s}\partial_t \langle\varphi\rangle F_{s0}
\nonumber \\[5pt]
\fl\hspace{1cm}
+
\frac{1}{B}\left(\nabla\langle\varphi\rangle
\times\hat\mathbf{b}\right)\cdot\nabla\psi
\left[\frac{n_s'}{n_s}
+ \left(\frac{m_s v^2}{2 T_s} +
\frac{3}{2}\right)\frac{T_s'}{T_s} \right] F_{s0},
\end{eqnarray}
where primes denote differentiation with respect to $\psi$, the function $H_{s1}$ is defined by
\begin{equation}
H_{s1}= F_{s1} + \frac{Z_se}{T_s}
\langle\varphi\rangle F_{s0},
\end{equation}
the gyroaveraged electrostatic potential is
\begin{equation}
\langle \varphi\rangle\left(\mathbf{R},v,\lambda, t\right):=
\frac{1}{2\pi}\int_0^{2\pi}
\varphi \left(\mathbf{R}+\rhobf_s(\mathbf{R},v,\lambda,\gamma),t
\right)\mathrm{d}\gamma
\end{equation}
and the magnetic drift velocity reads
\begin{equation}
\mathbf{v}_{ds} =
\frac{v^2}{\Omega_s} \hat\mathbf{b} \times
\left[(1-\lambda B)\hat\mathbf{b}\cdot\nabla
\hat\mathbf{b} + \frac{\lambda}{2}\nabla B \right].
\label{eq:driftv}
\end{equation}
The orderings in gyrokinetic theory make it possible to separate the variations of the fields on the small and large scales, and to decompose the fields in Fourier modes with respect to the former. Since we are interested in studying the evolution of an electrostatic potential perturbation that depends only on $\psi$ and the problem is linear, we can take a single mode of the form
\begin{equation}\label{eq:eikonalvarphi}
\varphi(\mathbf{r},t) =
\varphi_k(\psi(\mathbf{r}),t) \exp(\mathrm{i}k_\psi\psi(\mathbf{r})).
\end{equation}
Here,
\begin{equation}
L^{-1}\ll k_\perp \lesssim \rho_{ts}^{-1}
\end{equation}
with $k_\perp(\mathbf{R}) = k_\psi |\nabla \psi(\mathbf{R})|$, and $\varphi_k$ varies on the macroscopic scale $L$. Observe that, due to the effects of magnetic geometry, the dependence of $\varphi_k$ on $\psi$ cannot be avoided even for flat density and temperature profiles. A recent explanation of scale separation, as well as a proof of the equivalence between the local and global approaches to gyrokinetic theory can be found in \cite{Parra2015b}.
To lowest order, the gyroaveraged electrostatic potential is
\begin{equation}
\langle \varphi \rangle(\mathbf{R},v,\lambda, t) =
\varphi_k(\psi(\mathbf{R}),t) \, J_0(k_\perp\rho_s)
\exp({\mathrm{i}k_\psi\psi(\mathbf{R})}),
\end{equation}
where the magnitude of the gyroradius vector is
\begin{equation}
\rho_s(\mathbf{R},v,\lambda)
= \frac{m_sv}{Z_se}\sqrt{\frac{\lambda}{B(\mathbf{R})}}
\end{equation}
and $J_0$ is the zeroth-order Bessel function of the first kind,
\begin{equation}
J_0(x) =
\frac{1}{2\pi}\int_0^{2\pi}
\exp(\mathrm{i} x \sin\gamma)\mathrm{d}\gamma.
\end{equation}
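This integral representation can be checked numerically against a library implementation of $J_0$; the following short Python sketch (our own check, for a few arbitrarily chosen arguments) does so using a uniform grid in $\gamma$.
\begin{verbatim}
import numpy as np
from scipy.special import j0

def j0_from_gyroaverage(x, n_gamma=256):
    # (1/2pi) * int_0^{2pi} exp(i x sin(gamma)) dgamma, evaluated on a uniform grid
    gamma = np.linspace(0.0, 2.0 * np.pi, n_gamma, endpoint=False)
    return np.mean(np.exp(1j * x * np.sin(gamma))).real

for x in (0.1, 1.0, 5.0):
    print(x, j0_from_gyroaverage(x), j0(x))  # the last two columns should agree
\end{verbatim}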
If the electrostatic potential has the form (\ref{eq:eikonalvarphi}), then the distribution function can be written as
\begin{equation}
\fl F_{s1}(\mathbf{R},v,\lambda,\sigma,t)=
f_s(\psi(\mathbf{R}),\theta(\mathbf{R}),\alpha(\mathbf{R}),
v,\lambda,\sigma,t) \,\exp({\mathrm{i}k_\psi\psi(\mathbf{R})})
\end{equation}
and consequently
\begin{equation}\label{eq:eikonalH}
\fl H_{s1}(\mathbf{R},v,\lambda,\sigma,t)
= h_s(\psi(\mathbf{R}),\theta(\mathbf{R}),\alpha(\mathbf{R}),
v,\lambda,\sigma,t) \,\exp({\mathrm{i}k_\psi\psi(\mathbf{R})}),
\end{equation}
where $f_s$ and $h_s$ vary on the scale $L$. Then, equation (\ref{eq:gk0}) becomes
\begin{eqnarray}
\left(\partial_t + v_\parallel\, \hat\mathbf{b} \cdot \nabla
+ \mathrm{i}k_\psi \omega_s\right) h_s
= \frac{Z_s e}{T_s} \partial_t \varphi_k J_{0s} F_{s0},
\label{eq:gk2}
\end{eqnarray}
where we have used the notation $\omega_s := \mathbf{v}_{ds}\cdot\nabla \psi$ for the radial magnetic drift frequency and $J_{0s}\equiv J_0(k_\perp \rho_s)$. From now on, and for brevity, we omit the dependence of $\varphi_k$ on $\psi$; that is, we write $\varphi_k(t)$ instead of $\varphi_k(\psi,t)$.
Denote by $\omega$ the frequency associated to the time derivative in (\ref{eq:gk2}). The objective is to expand (\ref{eq:gk2}) in powers of $\omega/(v_{ts}L^{-1})\ll 1$, solve the lowest order equations and determine $\varphi_k(t)$ in the limit $t\to \infty$. The $\omega/(v_{ts}L^{-1})\ll 1$ expansion means, in particular, that we average over the lowest order particle trajectories and solve for time scales much longer than a typical orbit time, which is $O(L/v_{ts})$. We define the orbit average for a phase-space function $Q(\psi,\theta,\alpha,v,\lambda,\sigma,t)$ as
\begin{equation}
\overline{Q} :=
\left\{
\begin{array}{lcr}
\langle B\, Q /|v_\parallel |\rangle_\psi /
\langle B /|v_\parallel|\rangle_\psi &&
\textrm{for passing particles\,\,}\vspace{0.2cm} \\
\omega_b\oint \mathrm{d}\theta \,Q /(v_\parallel\,
\hat\mathbf{b} \cdot \nabla \theta) &&
\textrm{for trapped particles,}
\end{array}
\right.
\label{eq:bounceaverages}
\end{equation}
where $\omega_b := [\oint \mathrm{d}\theta / (v_\parallel\, \hat\mathbf{b}\cdot\nabla \theta)]^{-1}$ is the bounce frequency. Given a function $G(\psi,\theta,\alpha)$, the flux surface average is defined by
\begin{equation}
\langle G \rangle_\psi =
V'(\psi)^{-1} \int_0^{1} \mathrm{d}\theta \int_0^{1}
\mathrm{d} \alpha \sqrt{g}\, G(\psi,\theta,\alpha).
\label{eq:fsa}
\end{equation}
Here, $\sqrt{g} = [(\nabla\psi \times \nabla\theta) \cdot \nabla\alpha]^{-1}$ is the square root of the metric determinant and $V'(\psi) = \int_0^{1} \mathrm{d} \theta \int_0^{1} \mathrm{d} \alpha \sqrt{g}$ is the derivative of the volume enclosed by the flux surface labeled by $\psi$. The symbol $\oint$ stands for integration over the trapped trajectory, where the bounce points $\theta_b$ are the solutions of $1-\lambda B(\psi,\theta_b,\alpha) = 0$ for given values of $\psi$ and $\alpha$, and given an initial condition for the particle trajectory.
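As an illustration of the flux surface average (\ref{eq:fsa}), the following Python sketch (our own illustration, with hypothetical surface data, not the {\casdk} implementation) evaluates $\langle G \rangle_\psi$ on a uniform $(\theta,\alpha)$ grid; the uniform grid spacings cancel between the numerator and $V'(\psi)$.
\begin{verbatim}
import numpy as np

def flux_surface_average(G, sqrt_g):
    # <G>_psi = int dtheta dalpha sqrt(g) G / int dtheta dalpha sqrt(g),
    # with G and sqrt_g sampled on the same uniform (theta, alpha) grid
    return np.sum(sqrt_g * G) / np.sum(sqrt_g)

theta, alpha = np.meshgrid(np.linspace(0.0, 1.0, 64, endpoint=False),
                           np.linspace(0.0, 1.0, 64, endpoint=False),
                           indexing="ij")
# hypothetical model data for the Jacobian and the function to be averaged
sqrt_g = 1.0 + 0.2 * np.cos(2.0 * np.pi * theta)
G = np.cos(2.0 * np.pi * theta) ** 2
print(flux_surface_average(G, sqrt_g))
\end{verbatim}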
Observe that the orbit average operation has the property
\begin{equation}
\overline{v_\parallel \hat\mathbf{b}\cdot\nabla Q} = 0
\end{equation}
for any single-valued function $Q$. We write the radial magnetic drift frequency as a sum of its orbit averaged and fluctuating parts
\begin{equation}
\omega_s =
\overline{\omega_s}
+v_\parallel\, \hat\mathbf{b}\cdot\nabla \delta_s,
\label{eq:mde}
\end{equation}
where $\delta_s = \delta_s(\psi,\theta,\alpha,v,\lambda,\sigma)$, which we choose to be odd in $\sigma$, is the radial displacement of the particle's gyrocenter from its mean flux surface. The solution of the magnetic differential equation that determines $\delta_s$ is given in~\ref{sec:mde}.
Defining $\underline{h_s} := h_s \exp({\mathrm{i} k_\psi \delta_s})$ and $\underline{\varphi_k} := \varphi_k \exp({\mathrm{i} k_\psi \delta_s})$, equation (\ref{eq:gk2}) yields
\begin{equation}
\left( \partial_t + v_\parallel \hat\mathbf{b}\cdot\nabla +
\mathrm{i} k_\psi \overline{\omega_s} \right)
\underline{{h}_s} =
\frac{Z_s e}{T_s} \partial_t\underline{{\varphi}_k} J_{0s}
F_{s0}.
\label{eq:gkUnderlinedVariables}
\end{equation}
It is worth noting that the expansion in $\omega/(v_{ts}L^{-1})$ only makes sense if
\begin{equation}
\frac{k_\psi\overline{\omega_s}}{v_{ts}L^{-1}}\sim
\frac{\omega}{v_{ts}L^{-1}}
\ll 1.
\end{equation}
For $k_\perp\rho_{ts}\sim 1$, this implies
\begin{equation}\label{eq:condexpansion}
\frac{\overline{\omega_s}}{\omega_s}
\sim
\frac{\omega}{v_{ts}L^{-1}}
\ll 1.
\end{equation}
This trivially holds in a tokamak because $\overline{\omega_s}=0$ for all trajectories. In a generic stellarator, $\overline{\omega_s}=0$ only for passing particles. Then, condition (\ref{eq:condexpansion}) requires that the secular radial drifts of trapped particles be sufficiently small. We assume that this is the case and carry out the expansion in $\omega/(v_{ts}L^{-1})$.
We write
\begin{equation}
\underline{{h}_s}=\underline{{h}_{s}^{(0)}} +
\underline{{h}_{s}^{(1)}}+
\underline{{h}_{s}^{(2)}}+\dots,
\end{equation}
with $\underline{{h}_{s}^{(j+1)}}/\underline{{h}_{s}^{(j)}}\sim \omega/(v_{ts}L^{-1})$. Then, we expand equation (\ref{eq:gkUnderlinedVariables}). To lowest order, one obtains
\begin{equation}\label{eq:lowestorderequationh}
v_\parallel \hat\mathbf{b}\cdot\nabla
\underline{{h}_{s}^{(0)}} = 0,
\end{equation}
implying that $\underline{{h}_{s}^{(0)}}$ is constant along the lowest order trajectories; {\it i.e.}
\begin{equation}
\underline{{h}_{s}^{(0)}} = \overline{\underline{{h}_{s}^{(0)}}}.
\end{equation}
To next order, we have
\begin{eqnarray}
\left( \partial_t + \mathrm{i} k_\psi \overline{\omega_s} \right)
\underline{{h}_{s}^{(0)}}
+ v_\parallel \hat\mathbf{b}\cdot\nabla
\underline{{h}_{s}^{(1)}} =
\frac{Z_s e}{T_s} \partial_t \underline{{\varphi}_k}
J_{0s} F_{s0}.
\label{eq:nextordereqNotInLaplaceSpace}
\end{eqnarray}
We do not write $\underline{{\varphi}_{k}^{(0)}}$ to ease the notation. The orbit average of (\ref{eq:nextordereqNotInLaplaceSpace}) annihilates the term $v_\parallel \hat\mathbf{b}\cdot\nabla \underline{{h}_{s}^{(1)}}$, and we find
\begin{eqnarray}
\left( \partial_t + \mathrm{i} k_\psi \overline{\omega_s} \right)
\underline{{h}_{s}^{(0)}}
=
\frac{Z_s e}{T_s} \overline{\partial_t \underline{{\varphi}_k}
J_{0s}} F_{s0}.
\label{eq:nextordereqNotInLaplaceSpaceAveraged}
\end{eqnarray}
It is useful to work in Laplace space in order to solve this equation. The Laplace transform of a function $Q(t)$ is defined as $\widehat{Q}(p) = \int_0^\infty Q(t) \mathrm{e}^{-pt} \mathrm{d}t$, where $p$ denotes the variable in Laplace space. We apply it to (\ref{eq:nextordereqNotInLaplaceSpaceAveraged}) and obtain
\begin{eqnarray}
\left( p + \mathrm{i} k_\psi \overline{\omega_s} \right)
\underline{\widehat{h}_{s}^{(0)}}
=
\frac{Z_s e}{T_s}p \overline{\underline{\widehat{\varphi}_k}
J_{0s}} F_{s0} + \underline{f_s}(0).
\label{eq:nextordereqInLaplaceSpaceAveraged}
\end{eqnarray}
Here, $\underline{f_s}(0) := f_s(0) \exp({ \mathrm{i} k_\psi \delta_s})$ and $f_s(0)$ is the initial condition for $f_s$; {\it i.e.} $f_s(0)\equiv f_s(\psi,\theta,\alpha,v,\lambda,\sigma,0)$.
The solution of (\ref{eq:nextordereqInLaplaceSpaceAveraged}) yields
\begin{equation}
\widehat{h}_s^{(0)} =
\frac{\mathrm{e}^{-\mathrm{i} k_\psi \delta_s}}{p + \mathrm{i} k_\psi
\overline{\omega_s}}
\left( \frac{Z_s e}{T_s} p\, \widehat{\varphi}_k\,
\overline{\mathrm{e}^{\mathrm{i} k_\psi \delta_s} J_{0s}} F_{s0}
+ \overline{\mathrm{e}^{\mathrm{i} k_\psi \delta_s} f_s(0) }
\right).
\label{eq:gk6}
\end{equation}
In order to have a closed system of equations we employ the gyrokinetic quasineutrality equation (see, for example, references \cite{CalvoParra2012, Parra2014}),
\begin{eqnarray}
\fl \sum_s \frac{Z_s^2 e}{T_s} n_s\, \varphi(\mathbf{R},t) = \nonumber\\
\fl\hspace{1cm}
\sum_s Z_s \int
H_{s}(\mathbf{R}-\rhobf_s(\mathbf{R},v,\lambda,\gamma),v,\lambda,\sigma,t)
\mathrm{d}^3 v.
\label{eq:qn0}
\end{eqnarray}
Here, the short-hand notation $\int Q \mathrm{d}^3 v$ means, for a function $Q(\psi,\theta,\alpha,v,\lambda,\sigma,\gamma)$,
\begin{equation}
\fl \int Q \, \mathrm{d}^3 v =
\sum_{\sigma=-1}^1
\int_0^{2\pi}\mathrm{d}\gamma\int_0^\infty \mathrm{d}v\, \int_0^{1/B}
\mathrm{d}\lambda\, \frac{ v^2 B}{2\sqrt{1-\lambda B}}
Q(\psi,\theta,\alpha,v,\lambda,\sigma,\gamma).
\end{equation}
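As a consistency check of this velocity-space measure, the following Python sketch (our own illustration, with hypothetical values of $B$ and $v_{ts}$, and $n_s$ normalized to one) verifies numerically that it integrates the Maxwellian $F_{s0}$ to the density.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

B, v_t = 2.5, 1.0  # hypothetical field strength and thermal speed

def maxwellian(v):
    # normalized Maxwellian F_s0 with n_s = 1
    return np.exp(-0.5 * (v / v_t) ** 2) / (np.sqrt(2.0 * np.pi) * v_t) ** 3

# int_0^{1/B} dlambda B / (2 sqrt(1 - lambda B)) = 1, independently of B
lam_int, _ = quad(lambda lam: B / (2.0 * np.sqrt(1.0 - lam * B)), 0.0, 1.0 / B)
v_int, _ = quad(lambda v: v ** 2 * maxwellian(v), 0.0, np.inf)

# sum over sigma (factor 2) times the gamma integral (2*pi) times the v, lambda integrals
print(2 * 2.0 * np.pi * v_int * lam_int)  # should be close to 1
\end{verbatim}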
Using (\ref{eq:eikonalvarphi}) and (\ref{eq:eikonalH}), and flux-surface averaging, we get
\begin{equation}
\sum_s \frac{Z_s^2 e}{T_s}n_s\, {\varphi}_k =
\left\langle
\sum_s Z_s \int J_{0s} {h}_s
\mathrm{d}^3 v
\right\rangle_\psi.
\label{eq:qn2}
\end{equation}
To lowest order in $\omega/(v_{ts}L^{-1})\ll 1$, and after transforming to Laplace space, equation (\ref{eq:qn2}) gives
\begin{equation}
\sum_s \frac{Z_s^2 e}{T_s}n_s\, \widehat{\varphi}_k =
\left\langle
\sum_s Z_s \int J_{0s} \widehat{h}_s^{(0)}
\mathrm{d}^3 v
\right\rangle_\psi.
\label{eq:qn2aux}
\end{equation}
We employ (\ref{eq:gk6}) to write the right side of (\ref{eq:qn2aux}) in terms of the electrostatic potential and the initial condition, and solve for $\widehat{\varphi}_k$. The result is
\begin{eqnarray}
\fl
\widehat{\varphi}_k(p) =
\frac{
\sum_s Z_s\left\{ \frac{1}{p+\mathrm{i}k_\psi\overline{\omega_s}}
\mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s}
\overline{\mathrm{e}^{\mathrm{i}k_\psi\delta_s} f_s(0) } /F_{s0}
\right\}_s
}{
\sum_s \frac{Z_s^2 e}{T_s}
\left\{1 - \frac{p}{p+\mathrm{i}k_\psi\overline{\omega_s}} \,
\mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} J_{0s}} \right\}_s
},
\label{eq:qn3}
\end{eqnarray}
where we have simplified the notation by defining
\begin{equation}
\fl
\left\{Q\right\}_s :=
\left\langle
\sum_{\sigma=-1}^1 \int_0^\infty \mathrm{d}v\,\int_0^{1/B}
\mathrm{d}\lambda\, \frac{\pi v^2 B}{\sqrt{1-\lambda B}}
Q (\psi,\theta,\alpha,v,\lambda,\sigma)\,
F_{s0} \right\rangle_\psi
\label{eq:corchete}
\end{equation}
for gyrophase independent functions on phase space.
The residual value is found from the well-known property of the Laplace transform
\begin{equation}\label{eq:propertyLaplaceTransform}
\lim_{t\to\infty}\varphi_k(t) = \lim_{p\to 0}p\widehat{\varphi}_k(p).
\end{equation}
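Property (\ref{eq:propertyLaplaceTransform}) can be illustrated with a simple exponentially decaying test function (the same functional form as the fit model used in Section~\ref{sec:results}); the following symbolic Python sketch (our own illustration) recovers the residual value $R$.
\begin{verbatim}
import sympy as sp

t, p = sp.symbols("t p", positive=True)
R, A, xi = sp.symbols("R A xi", positive=True)

phi = R + A * sp.exp(-xi * t)                            # test function with residual R
phihat = sp.laplace_transform(phi, t, p, noconds=True)   # R/p + A/(p + xi)
print(sp.limit(p * phihat, p, 0))                        # prints R, the residual value
\end{verbatim}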
Applying (\ref{eq:propertyLaplaceTransform}) to equation (\ref{eq:qn3}), we find
\begin{equation}
\varphi_k (\infty) =
\frac{ \sum_s Z_s\left\{ \mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s}
\overline{\mathrm{e}^{\mathrm{i}k_\psi\delta_s} f_s(0) } /F_{s0}
\right\}_s^{\overline{\omega_s}=0}
}{
\sum_s \frac{Z_s^2e}{T_s}
\left[\left\{1\right\}_s-\left\{ \mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} J_{0s}}
\right\}_s^{\overline{\omega_s}=0}
\right]
},
\label{eq:rsk}
\end{equation}
where $\varphi_k(\infty) \equiv \lim_{t\to\infty}\varphi_k(t)$. The superscript $\overline{\omega_s}=0$ means that the integration is performed only for particles whose trajectory satisfies $\overline{\omega_s}=0$. In tokamaks, this property holds true for both trapped and passing particles, and therefore the integrals in (\ref{eq:rsk}) are performed over the whole phase space. In a generic stellarator, $\overline{\omega_s}=0$ is satisfied only by passing particles; it is only in perfectly omnigenous stellarators~\cite{Cary1997.1,Cary1997.2,Parra2015} that trapped particles also have a vanishing average radial magnetic drift. Hence, in a generic stellarator, the integrals in (\ref{eq:rsk}) with superscript $\overline{\omega_s}=0$ are performed only over the passing region of phase space.
The residual level is usually defined as the normalized value $\varphi_k(\infty)/\varphi_k(0)$. The relation between $f_s(0)$ and $\varphi_k(0)$ is given by the flux-surface averaged quasineutrality equation at $t=0$,
\begin{equation}
\fl
\sum_s \frac{Z_s^2 e}{T_s}n_s
\left\langle 1-\Gamma_0(k_\perp^2\rho_{ts}^2)\right\rangle_\psi \varphi_k(0) =
\left\langle \sum_s Z_s \int J_{0s}f_s(0) \mathrm{d}^3 v \right\rangle_\psi.
\label{eq:qn4}
\end{equation}
Here, we have employed the identity $\int J_0^2(k_\perp \rho_s) F_{0s} \mathrm{d}^3 v = n_s \Gamma_0(k_\perp^2\rho_{ts}^2)$, where $\Gamma_0(k_\perp^2\rho_{ts}^2) := \mathrm{e}^{-k_\perp^2\rho_{ts}^2} \,I_0(k_\perp^2\rho_{ts}^2)$, and $I_0$ is the zeroth order modified Bessel function. We will use the notation $\Gamma_{0s}\equiv \Gamma_0(k_\perp^2\rho_{ts}^2)$.
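This identity can be verified numerically; the following Python sketch (our own check) compares the Maxwellian average of $J_0^2$ with $\mathrm{e}^{-b}I_0(b)$ for a few values of $b=k_\perp^2\rho_{ts}^2$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, j0

def gamma0_numeric(b):
    # Maxwellian average of J_0(k_perp rho)^2 with x = v_perp / v_t and
    # k_perp rho = sqrt(b) x; the perpendicular Maxwellian weight is exp(-x^2/2) x dx
    integrand = lambda x: j0(np.sqrt(b) * x) ** 2 * np.exp(-0.5 * x ** 2) * x
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for b in (0.1, 1.0, 4.0):
    print(b, gamma0_numeric(b), np.exp(-b) * i0(b))  # the last two columns should agree
\end{verbatim}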
For later comparison with gyrokinetic simulations, it will be useful to have at hand the expressions corresponding to the approximation of adiabatic electrons. Using this approximation, equation (\ref{eq:qn2}) can be written as
\begin{equation}
\sum_{s\neq e}\frac{Z_s^2 e}{T_s} n_s\, \widehat{\varphi}_k(p) =
\left\langle \sum_{s\neq e} Z_s\int \mathrm{d}^3 v\,
J_{0s}\widehat{h}_s (p) \right\rangle_\psi.
\end{equation}
Proceeding as shown previously for the fully kinetic case, we find that the expression for $\varphi_k(\infty)$ reads
\begin{equation}
\varphi_k (\infty) =
\frac{ \sum_{s\neq e} Z_s \left\{ \mathrm{e}^{-\mathrm{i}
k_\psi\delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s}
f_s(0)}/F_{s0} \right\}_s^{\overline{\omega_s}=0}
}{
\sum_{s\neq e} \frac{Z_s^2e}{T_s}
\left[\left\{1\right\}_s-\left\{ \mathrm{e}^{-\mathrm{i} k_\psi
\delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} J_{0s}}
\right\}_s^{\overline{\omega_s}=0} \right]
} .
\label{eq:rsa}
\end{equation}
In this case, the equation that relates $f_s(0)$ and $\varphi_k(0)$ is
\begin{equation}
\fl \sum_{s\neq e}\frac{Z_s^2e}{T_s}\, n_s
\left\langle 1-\Gamma_{0s}\right\rangle_\psi \varphi_k(0) =
\left\langle \sum_{s\neq e} Z_s \int
J_{0s} f_s(0) \mathrm{d}^3 v \right\rangle_\psi.
\label{eq:qn6}
\end{equation}
The residual zonal flow was computed in reference \cite{Rosenbluth1998} for $k_\perp \rho_{ti}\ll 1$ in a large aspect ratio, circular cross section tokamak with adiabatic electrons. An extension of the derivation of \cite{Rosenbluth1998} was proposed in references \cite{Xiao2006, Xiao2007} to allow for short-wavelength perturbations, and also for kinetic electrons and more complex tokamak geometries. In reference \cite{Xiao2007}, comparisons of analytical calculations with gyrokinetic simulations are shown. The enhancement of tokamak residual zonal flows at short wavelengths was originally found in reference \cite{Jenko2000} by means of gyrokinetic simulations with the code {\small GS2}. But the short-wavelength calculations of \cite{Jenko2000, Xiao2006, Xiao2007} do not correspond to a zonal flow initial value problem because the quasineutrality equation is forced with a source term. At long wavelengths, the initial value problem and the forced system give the same result. This will be explained in more detail
in Section \ref{sec:cas3dk}.
In stellarators, the residual zonal flow calculation has been carried out in \cite{Sugama2005,Sugama2006,Mishchenko2008, Helander2011,Xanthopoulos2011}. In references \cite{Mishchenko2008,Helander2011,Xanthopoulos2011} the emphasis is put on long-wavelength zonal flows. In references \cite{Sugama2005,Sugama2006}, the derivation of the equations is valid for long and short wavelengths, but some approximations are used to describe the magnetic geometry.
\section{Evaluation of the expressions for the residual zonal flow}
\label{sec:cas3dk}
The evaluation of the right side of equation (\ref{eq:rsk}) (and, of course, (\ref{eq:rsa})) requires the calculation of quantities of the form
\begin{equation}
\left\{P\, \overline{Q}\right\}_s :=
\left\langle \sum_{\sigma=-1}^1 \int_0^\infty \mathrm{d}v\,
\int_0^{1/B} \mathrm{d}\lambda\,
\frac{\pi v^2 B}{\sqrt{1-\lambda B}}\, P\, \overline{Q}\,
F_{s0} \right\rangle_\psi,
\label{eq:corchete2}
\end{equation}
for functions $P=P(\psi,\theta,\alpha,v,\lambda,\sigma)$ and $Q=Q(\psi,\theta,\alpha,v,\lambda,\sigma)$, where the orbit average is defined in (\ref{eq:bounceaverages}). These averages depend on the details of the magnetic field and cannot be evaluated analytically, except in simplified cases (for example, in the cases considered in \cite{Rosenbluth1998,Xiao2006,Xiao2007}). In this work, we evaluate equation (\ref{eq:rsk}) using the framework of the code {\casdk} \cite{koenies2000,koenies2008}. For this purpose, we have included in this code the relevant finite Larmor radius effects, the solution to the magnetic differential equations described in \ref{sec:mde}, and the integration over the velocity coordinates $v$ and $\sigma$.
The code {\casdk} is well suited to perform the average (\ref{eq:bounceaverages}). Since the lowest order particle trajectories lie entirely on flux surfaces, all the calculations are local, thus permitting a parallelization by flux surface using MPI. The magnetic equilibrium is obtained from the 3D MHD equilibrium code {\small VMEC}~\cite{Hirshman1983} and then transformed into Boozer coordinates $\{\psi,\theta,\zeta\}$, which can easily be transformed to the coordinates $\left\{ \psi, \theta, \alpha \right\}$. On a given flux surface, the pitch angle $\lambda$ distinguishes between passing and trapped trajectories. The passing-trapped boundary is given by $\lambda_\mathrm{\,c} = 1/B^\mathrm{\,max}$, where $B^\mathrm{\,max}$ is the maximum of $B$ on the flux surface. Passing particles have $\lambda$ values with $0 < \lambda < \lambda_\mathrm{c}$ and trapped particles are those with $\lambda_\mathrm{c} < \lambda < 1/B^\mathrm{min}$, where $B^\mathrm{min}$ is the minimum of $B$ on the flux surface. Trapped
particles can live inside one or several magnetic field periods. In {\casdk}, they are grouped by the number of periods they go through. The groups are obtained by setting a large number of initial conditions for the trajectories, and finding the bounce points $\theta_b$ from the bounce condition $1-\lambda B(\psi, \theta_b, \alpha) = 0$ for constant $\psi$ and $\alpha$. From this procedure, the boundaries of each group are found and the numerical integration for a given group is performed by covering the region they define with new trajectories. Note that each group requires different numerical resolution. Some trapped trajectories close to the passing-trapped boundary may require a large number of periods until the bounce points are found. If this number is sufficiently large, typically larger than 500 periods, the trajectories are then considered as passing. A Gauss-Legendre quadrature scheme is used for the integration in $\theta$, $\alpha$ and $\lambda$, which avoids the numerical problems that may
appear at points where $1-\lambda B=0$.
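To illustrate the bounce-point search described above, the following Python sketch (a simplified illustration of our own, not the {\casdk} implementation, using a hypothetical model field $B(\theta)$) locates the solutions of $1-\lambda B(\theta_b)=0$ along a field line by bracketing sign changes on a $\theta$ grid.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def B_model(theta, B0=2.5, eps=0.2):
    # hypothetical field-strength variation along the field line
    return B0 / (1.0 + eps * np.cos(2.0 * np.pi * theta))

def bounce_points(lam, theta_grid):
    # return the zeros of g(theta) = 1 - lam*B(theta) bracketed by the grid
    g = 1.0 - lam * B_model(theta_grid)
    roots = []
    for t1, t2, g1, g2 in zip(theta_grid[:-1], theta_grid[1:], g[:-1], g[1:]):
        if g1 * g2 < 0.0:  # a sign change brackets a bounce point
            roots.append(brentq(lambda th: 1.0 - lam * B_model(th), t1, t2))
    return roots

theta_grid = np.linspace(-0.5, 0.5, 2001)
lam = 1.0 / 2.55  # a pitch angle in the trapped range for this model field
print(bounce_points(lam, theta_grid))  # two bounce points, symmetric about theta = 0
\end{verbatim}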
In reference \cite{Mishchenko2008}, the phase space
integrations with {\casdk} discussed in the previous paragraph were
employed to obtain the zonal flow frequency in stellarator geometry
in the long-wavelength limit. In this limit, there are some
simplifications. We have implemented new functionalities in {\casdk}
that also make it possible to deal with short wavelengths. Next, we describe the main features of this extension of {\casdk}.
First, we note that the integration of the resulting
expressions over the velocity coordinate $v$ could be performed
analytically in \cite{Mishchenko2008}. If one wants to calculate the
averages involved in (\ref{eq:rsk}), the integration over the
velocity coordinates $v$ and $\sigma$ must be computed numerically.
We have included in {\casdk} the integration over these coordinates on
top of the $\theta$, $\alpha$ and $\lambda$ integrations. For the
integration over $v$, a linear scheme has been used. This scheme
allows the computation of any moment in $v$ of the
Maxwellian distribution function, $F_{s0}(\psi,v)$.
Second, the equations to obtain the residual level
(\ref{eq:rsk}) and (\ref{eq:rsa}) incorporate finite Larmor
radius effects. These effects are encoded in $J_0(k_\perp\rho_s)$,
$\Gamma_{0} (k_\perp^2\rho_{ts}^2)$ and $\exp(\pm
\mathrm{i}k_\psi\delta_s)$. Here, $k_\perp=k_\psi |\nabla \psi|$
where $k_\psi$ is an input parameter and the quantity $|\nabla
\psi|=|\nabla \psi|(\theta,\alpha)$ is obtained from the {\small
VMEC} equilibrium. We have included those factors and adapted the
resolution of the phase-space integration to their strongly oscillatory behavior at short wavelengths. We have also implemented
in the code the expression of $\delta_s$ in stellarator
geometry, for both passing and trapped particles (see the
derivations in \ref{sec:mde}). For passing particles, $\delta_s$ is
given by equation (\ref{eq:deltap}) and for trapped particles it is
given by equation (\ref{eq:mdet3}).
The above modifications included in {\casdk} allow us
to calculate the residual level in tokamak or stellarator geometry
for arbitrary wavelengths. In the rest of the paper, we use the
terminology ``{\casdk}'' and ``extension to {\casdk}'' interchangeably.
\begin{figure}
\includegraphics[width=1.\linewidth]{longwavelength}
\caption{Radial dependence of the residual level given by (\ref{eq:rh}), by (\ref{eq:XCkequaltozero}), and by the evaluation of (\ref{eq:rsa}) with {\footnotesize CAS3D-K} in the long-wavelength limit. A tokamak with major radius $R=1.7$~m, minor radius $a=0.4$~m, and $q$ profile given in figure \ref{fig:qprofileSectionAnalyticalChecks} has been used.}
\label{fig:RHXC}
\end{figure}
As a preliminary check of these extensions to {\casdk}, in this section we compare our results with analytical results available in the literature. For these comparisons, we take a plasma consisting of singly charged ions and electrons, and assume flat density and temperature profiles with the same values for both species.
\begin{figure}
\includegraphics[width=1.\linewidth]{q-XC}
\caption{Safety factor profile of the tokamak employed for the calculations of Section \ref{sec:cas3dk}.}
\label{fig:qprofileSectionAnalyticalChecks}
\end{figure}
In reference \cite{Rosenbluth1998}, Rosenbluth and Hinton (R-H) calculated the residual level in large aspect ratio tokamaks with circular cross section and adiabatic electrons, in the limit $k_\perp\rho_{ti}\ll 1$. Denote the safety factor by $q$ and the inverse aspect ratio by $\varepsilon = (a/R) \sqrt{\psi}$, where $a$ is the minor radius and $R$ is the major radius. The result obtained in \cite{Rosenbluth1998} is
\begin{equation}
\frac{\varphi_k(\infty)}{\varphi_k(0)} =
\frac{1}{1+1.6\, q^2\varepsilon^{-1/2}}\,.
\label{eq:rh}
\end{equation}
In reference \cite{Xiao2006}, Xiao and Catto gave an expression more accurate in the inverse aspect ratio expansion. Namely,
\begin{equation}\label{eq:XCkequaltozero}
\frac{\varphi_k(\infty)}{\varphi_k(0)}=
\frac{1}{1 + 1.6 q^2\varepsilon^{-1/2}+0.5
q^2 + 0.36 q^2 \varepsilon^{1/2}}\, .
\end{equation}
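For reference, the following Python sketch (our own evaluation of the formulas above) compares (\ref{eq:rh}) and (\ref{eq:XCkequaltozero}) for a few values of the inverse aspect ratio at fixed $q$.
\begin{verbatim}
import numpy as np

def residual_rh(q, eps):
    # Rosenbluth-Hinton estimate, equation (eq:rh)
    return 1.0 / (1.0 + 1.6 * q ** 2 / np.sqrt(eps))

def residual_xc(q, eps):
    # Xiao-Catto estimate, equation (eq:XCkequaltozero)
    return 1.0 / (1.0 + 1.6 * q ** 2 / np.sqrt(eps)
                  + 0.5 * q ** 2 + 0.36 * q ** 2 * np.sqrt(eps))

q = 2.0
for eps in (0.05, 0.1, 0.2):
    print(eps, residual_rh(q, eps), residual_xc(q, eps))
\end{verbatim}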
Both of these results were obtained
by using the analytical equilibrium of a large
aspect ratio circular tokamak which, in our coordinates, is given by
\begin{equation}
B=\frac{B_0}{1+\varepsilon\cos(2\pi\theta)},
\label{eq:larteq}
\end{equation}
where $B_0$ is the magnetic field strength at the magnetic axis. The
analytical solutions (\ref{eq:rh}) and (\ref{eq:XCkequaltozero}) are
plotted in figure \ref{fig:RHXC}, together with the numerical
evaluation of (\ref{eq:rsa}) with {\casdk}, for $k_\perp\rho_{ti}\ll 1$
and a Maxwellian initial condition. We use an axisymmetric tokamak
with major radius $R=1.7$~m, minor radius $a=0.4$~m, and $q$ profile
given in figure \ref{fig:qprofileSectionAnalyticalChecks}. For the
{\casdk} computations, the equilibrium is obtained with {\small VMEC}
employing the aspect ratio and safety factor values just mentioned.
The wavenumber used in the {\casdk} calculation is
$k_\psi=0.5$ and the dimensionless quantity $\left\langle
k_\perp\rho_{ti} \right\rangle_\psi$ ranges from 0.0015 in the
innermost radial position to 0.0068 in the outermost
one. We have checked that the residual zonal flow
value obtained with {\casdk} and shown in figure \ref{fig:RHXC} does
not change if $\left\langle k_\perp\rho_{ti} \right\rangle_\psi$
is further decreased. The regions of figure \ref{fig:RHXC}
where the curves agree and where the curves differ are as expected
(see the remarks in \cite{Yamagishi2012} about figure 3(a) in that
reference). The analytical equilibrium of a large aspect ratio
circular tokamak, used in deriving the equations (\ref{eq:rh}) and
(\ref{eq:XCkequaltozero}), differs less from the numerical equilibrium
obtained with {\small VMEC} in radial positions closer to the center.
We will see in Section \ref{sec:results} that the {\casdk} results
coincide with gyrokinetic simulations of zonal flow evolution, in
which {\small VMEC} equilibria are also used.
\begin{figure}
\includegraphics[width=1.\linewidth]{bf_XC_23}
\caption{Magnetic field strength along a field line of the analytical large aspect ratio circular tokamak equilibrium, given by equation (\ref{eq:larteq}), with $\varepsilon=0.2$, and for the numerical equilibrium obtained with {\footnotesize VMEC} (with $R=1.7$ m, $a=0.4$ m) at $\psi=0.7$.}
\label{fig:bfield}
\end{figure}
As explained above, Xiao and Catto (X-C) also addressed in references \cite{Xiao2006, Xiao2007} the extension of the calculation in \cite{Rosenbluth1998} to short wavelengths. They gave the result
\begin{equation}
\frac{\varphi_k(\infty)}{\varphi_k(0)} =
\frac{\sum_s\frac{Z_s^2}{T_s} \left\{1- J_{0s}^2\right\}_s
}{
\sum_s \frac{Z_s^2}{T_s}
\left\{1 - \mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} J_{0s}
}\right\}_s}
\label{eq:xck}
\end{equation}
for the residual zonal flow in a tokamak at arbitrary wavelengths and with kinetic electrons. The expression provided by X-C for the case of adiabatic electrons is
\begin{equation}
\frac{\varphi_k(\infty)}{\varphi_k(0)} =
\frac{\sum_{s\neq e}\frac{Z_s^2}{T_s} \left\{1- J_{0s}^2\right\}_s
}{
\sum_{s\neq e} \frac{Z_s^2}{T_s}
\left\{1 - \mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} J_{0s} }\right\}_s}.
\label{eq:xca}
\end{equation}
\begin{figure}
\includegraphics[width=1.\linewidth]{forced_ke_cas3dk_XC}
\caption{Comparison of the result in references \cite{Xiao2006,Xiao2007} and the evaluation of (\ref{eq:xck}) with {\footnotesize CAS3D-K}. The parameters of the tokamak are the same as in figure \ref{fig:RHXC}.}
\label{fig:XC2}
\end{figure}
In order to avoid any confusion, we have to point out that
(\ref{eq:xck}) and (\ref{eq:xca}) were not derived as
the solution of the initial value problem explained in Section
\ref{sec:Residual}, but assuming that the
quasineutrality equation is forced with a source term. The argument
of X-C can be streamlined as follows. Go back to (\ref{eq:rsk}) for
the tokamak case (that is, $\overline{\omega_s}=0$ for all
particles). X-C consider that finite orbit width effects do not
affect the initial condition; {\it i.e.}
\begin{eqnarray}
\fl
\sum_s Z_s\left\{ \mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} f_s(0)/F_{s0}} \right\}_s
\approx
\nonumber\\[5pt]
\fl
\hspace{1cm}
\sum_s Z_s\left\{J_{0s} f_s(0)/F_{s0} \right\}_s =
\left\langle \sum_s Z_s \int J_{0s} f_s(0) \mathrm{d}^3 v \right\rangle_\psi .
\end{eqnarray}
Hence, in this approximation, (\ref{eq:rsk}) gives
\begin{equation}
\varphi_k (\infty) \approx
\frac{\left\langle \sum_s Z_s \int J_{0s} f_s(0)
\mathrm{d}^3 v \right\rangle_\psi
}{
\sum_s \frac{Z_s^2e}{T_s}
\left[\left\{1\right\}_s-\left\{ \mathrm{e}^{-\mathrm{i} k_\psi \delta_s} J_{0s}
\overline{ \mathrm{e}^{\mathrm{i} k_\psi \delta_s} J_{0s}}
\right\}_s^{\overline{\omega_s}=0} \right]
}.
\label{eq:XCapprox}
\end{equation}
The quasineutrality equation at $t=0$, (\ref{eq:qn4}), can be trivially rewritten as
\begin{equation}
\fl
\varphi_k(0) =
\left(\sum_s \frac{Z_s^2 e}{T_s}\{1-J_{0s}^2\}_s\right)^{-1}
\left\langle \sum_s Z_s \int J_{0s} f_s(0) \mathrm{d}^3 v
\right\rangle_\psi.
\label{eq:qnt0again}
\end{equation}
From the quotient of (\ref{eq:XCapprox}) and (\ref{eq:qnt0again}), one
obtains equation (\ref{eq:xck}). Analogously, one can obtain
(\ref{eq:xca}) from (\ref{eq:rsa}). From these manipulations, it is
clear that in the X-C calculation the charge perturbation at $t=0$ can
be viewed as a constant source term in the
quasineutrality equation (see also the remark after equation
(\ref{eq:rlta})).
References \cite{Xiao2006} and \cite{Xiao2007}
provided analytical evaluations of the right sides of (\ref{eq:xck})
and (\ref{eq:xca}) for simplified tokamak geometries. Since we can
directly evaluate the right sides of (\ref{eq:xck}) and
(\ref{eq:xca}) with {\casdk}, we will compare the results as an
additional check of our numerical tool.
\begin{figure}
\includegraphics[width=1.\linewidth]{forced_ae_cas3dk_XC}
\caption{Comparison of the result in reference \cite{Xiao2006} for adiabatic electrons and the evaluation of equation (\ref{eq:xca}) with {\footnotesize CAS3D-K}. The parameters of the tokamak are the same as in figure \ref{fig:RHXC}.}
\label{fig:XC1}
\end{figure}
In \cite{Xiao2006,Xiao2007}, the analytical equilibrium of a circular
cross section, large aspect ratio tokamak with safety factor $q=2$ and
inverse aspect ratio $\varepsilon=0.2$ was used. For the calculations
with {\casdk}, we employ the {\small VMEC} tokamak equilibrium described
above, which has similar parameters at $\psi=0.7$. At
this radial position, the {\small VMEC} equilibrium satisfies $q=2$
and $\varepsilon=0.2$ within an error of 1.5\% ($\varepsilon=0.197$ and $q=2.03$). The difference between the {\small VMEC} equilibrium and the analytical one is illustrated in figure \ref{fig:bfield}, where we compare the magnetic field strength along a field line for both equilibria.
In the {\small VMEC} equilibrium, the value of the magnetic field
strength at the magnetic axis is $B_0=1.87$ T. In general, deviations
from circularity are expected in the numerical equilibrium because of effects like the Shafranov shift that are
not taken into account in the analytical equilibrium. These deviations
are smaller for radial positions closer to the center. The
comparisons for the cases with fully kinetic species (\ref{eq:xck})
and with adiabatic electrons (\ref{eq:xca}) are shown in figures
\ref{fig:XC2} and \ref{fig:XC1}. The agreement is quite good. The fact
that the curves present some differences, especially at short
wavelengths, is not surprising because the equilibria are not
identical.
As anticipated above, we will see that the
{\casdk} calculations agree very well with the results from gyrokinetic
simulations carried out with \gene~coupled to {\small GIST} \cite{Xanthopoulos2009} and {\small EUTERPE}, that also employ {\small VMEC} equilibria. It should be noted that further gyrokinetic codes with similar capabilities exist. The independently developed code {\small GKV-X} \cite{Nunami2010}, for instance, is also able to handle {\small VMEC} equilibria.
In the next section, we compare {\casdk} calculations of the residual
zonal flow with the results obtained from gyrokinetic simulations.
\section{Comparison of the residual zonal flow values obtained with {\small CAS3D-K} and with gyrokinetic simulations}
\label{sec:results}
In this section, we calculate the residual zonal flow as an initial value problem for a wide range of radial wavelengths in tokamak and stellarator geometries. We use the numerical techniques explained in Section~\ref{sec:cas3dk} to evaluate the required averages using the code {\casdk}. These calculations will be compared with the results from two gyrokinetic codes, showing that the extension of {\casdk} is faster.
The residual level for the initial value problem is given by (\ref{eq:rsk}), once an initial condition $f_s(0)$ has been specified. This initial condition has to satisfy (\ref{eq:qn4}). An initial condition fulfilling this equation is
\begin{equation}
f_s(0) =
\frac{Z_s e}{T_s} \frac{ \left\langle
1-\Gamma_{0s}\right\rangle_\psi }{\Gamma_{0s}}
J_{0s} F_{s0}\, \varphi_k(0).
\label{eq:ic}
\end{equation}
Using (\ref{eq:ic}), we find that the expression for the residual level is
\begin{equation}
\frac{\varphi_k(\infty)}{\varphi_k(0)} =
\frac{
\sum_s \frac{Z_s^2}{T_s}
\left\{ \mathrm{e}^{-\mathrm{i}k_\psi\delta_s}J_{0s} \,
\overline{\mathrm{e}^{\mathrm{i}k_\psi\delta_s}
J_{0s} \left\langle 1-\Gamma_{0s}\right\rangle_\psi
/\Gamma_{0s} }
\right\}_s^{\overline{\omega_s}=0}
}{
\sum_s \frac{Z_s^2}{T_s}
\left[\left\{1\right\}_s-\left\{
\mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s} \,
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} J_{0s}}
\right\}_s^{\overline{\omega_s}=0}
\right]
}.
\label{eq:rltk}
\end{equation}
Similarly, from (\ref{eq:rsa}) and (\ref{eq:ic}) we obtain the residual level when using the approximation of adiabatic electrons. Namely,
\begin{equation}
\frac{\varphi_k(\infty)}{\varphi_k(0)} =
\frac{ \sum_{s\neq e} \frac{Z_s^2}{T_s}
\left\{ \mathrm{e}^{-\mathrm{i}k_\psi\delta_s}J_{0s} \,
\overline{\mathrm{e}^{\mathrm{i}k_\psi\delta_s}
J_{0s} \left\langle 1-\Gamma_{0s}\right\rangle_\psi /\Gamma_{0s} }
\right\}_s^{\overline{\omega_s}=0}
}{
\sum_{s\neq e} \frac{Z_s^2}{T_s}
\left[\left\{1\right\}_s-\left\{
\mathrm{e}^{-\mathrm{i}k_\psi\delta_s} J_{0s} \,
\overline{ \mathrm{e}^{\mathrm{i}k_\psi\delta_s} J_{0s}}
\right\}_s^{\overline{\omega_s}=0}
\right]
}.
\label{eq:rlta}
\end{equation}
As explained above, in our numerical evaluations of (\ref{eq:rltk})
and (\ref{eq:rlta}) with {\casdk}, we assume that in stellarator
geometry the trapped trajectories have $\overline{\omega_s} \neq 0$.
To be precise, the Xiao and Catto formulas
(\ref{eq:xck}) and (\ref{eq:xca}) can be obtained from an initial
value problem calculation by choosing an initial condition $f_s(0)$
different from ours. However, the initial conditions that recover
the X-C results necessarily have increasingly fast
oscillations along the orbit for increasing $k_\psi$, and seem of
limited interest for the analysis of turbulence simulations. For
this reason, we choose a different initial condition that is, in our opinion, more relevant.
The comparison with gyrokinetic simulations will be carried out by using the \gene~and {\small EUTERPE} codes. \gene~\cite{Jenko2000,Gorler2011,GENE, Xanthopoulos2009} is an Eulerian gyrokinetic $\delta f$ code that can be run in radially global, full flux surface or flux tube simulation domains. The code can use adiabatic or kinetic electrons and is able to deal with tokamak and stellarator geometries. In the \gene~simulations, we calculate the zonal flow response for a wide range of radial wavelengths, using both adiabatic and kinetic electrons. {\small EUTERPE}~\cite{Jost2001,Kleiber2012} is a global $\delta f$ gyrokinetic code in 3D geometry with a Lagrangian Particle In Cell (PIC) scheme. In the simulations with {\small EUTERPE}, a $k_\perp\rho_{ts} < 1$ approximation is employed in the quasineutrality equation, which limits the range of wavelengths for which we can carry out the calculations. With {\small EUTERPE}, we have been able to simulate adiabatic electrons and also kinetic heavy electrons.
All the gyrokinetic
simulations shown in this work are linear and collisionless, with a plasma that, unless stated otherwise, consists of only two species: singly charged ions and electrons, $s=\{i,e\}$. We take flat density and temperature profiles with the same values for both species.
In {\small EUTERPE}, an initial condition proportional to $\sin(k_\psi\psi)$ is used for the perturbed distribution function. After the first time step, a zonal perturbation to the potential with the same radial dependence $\varphi(0)\propto \sin( k_\psi \psi)$ appears, which is used as the initial zonal flow. For the implementation of the initial condition in {\small EUTERPE}, the factors $J_{0s}$ and $\Gamma_{0s}$ are approximated to lowest order in $k_\perp\rho_{ts}\ll 1$. The initial condition in {\small EUTERPE} is then
\begin{equation}
F_{s1}(0) =
\frac{Z_s e}{T_s} \left\langle k_\perp^2\rho_{ts}^2
\right\rangle_\psi \varphi_k(0) \sin(k_\psi \psi)\, F_{s0}.
\end{equation}
Since \gene~works in Fourier space for the radial coordinate, we initialize the perturbed distribution function with only one radial mode which produces a potential with a single mode of unit amplitude.
\begin{figure}
\includegraphics[width=1.\linewidth]{q-LW}
\caption{Safety factor profiles employed in Section \ref{sec:results} for the tokamak (solid line) and W7-X (dashed line) calculations.}
\label{fig:qprofileSectionGyroSim}
\end{figure}
\subsection{Tokamak}
\label{sec:Tokamakresults}
\begin{figure}
\includegraphics[width=1.\linewidth]{lart_kae}
\caption{Residual zonal flow for the initial value problem in an axisymmetric large aspect ratio tokamak with major radius $R=0.95$~m, minor radius $a=0.25$~m and $q$ profile given in figure \ref{fig:qprofileSectionGyroSim}. The values predicted by R-H and X-C (equations (\ref{eq:rh}) and (\ref{eq:XCkequaltozero}), respectively) are also shown for comparison.}
\label{fig:tk_kae}
\end{figure}
First, we compare gyrokinetic simulations and {\casdk} calculations in tokamak geometry. We use an axisymmetric device with major radius $R=0.95$~m, minor radius $a=0.25$~m, and $q$ profile given in figure \ref{fig:qprofileSectionGyroSim}, whose equilibrium is determined by {\small VMEC}. We use flat temperature profiles with $T_i=T_e$. The residual levels obtained with {\small EUTERPE}, {\casdk} and the flux tube version of \gene~are shown in figure \ref{fig:tk_kae} for the radial position $\psi=0.25$. We show the calculations with fully kinetic species and also using the approximation of adiabatic electrons. The results of the gyrokinetic codes have been obtained by fitting the temporal evolution of the potential to an exponential decay model
\begin{equation}
\varphi(t)/\varphi(0) =
R + A \exp({-\xi t}).
\label{eq:expfit}
\end{equation}
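As an illustration of this fitting procedure, the following Python sketch (using synthetic data with a known residual, not actual \gene~or {\small EUTERPE} output) extracts the residual level $R$ by a least-squares fit to (\ref{eq:expfit}).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, R, A, xi):
    # phi(t)/phi(0) = R + A*exp(-xi*t)
    return R + A * np.exp(-xi * t)

# synthetic time trace with a known residual level of 0.05 plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 500)
phi = decay_model(t, 0.05, 0.95, 0.3) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(decay_model, t, phi, p0=(0.1, 1.0, 0.1))
print("fitted residual level R =", popt[0])
\end{verbatim}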
The results with {\casdk} correspond to the evaluation of the equations (\ref{eq:rltk}) and (\ref{eq:rlta}). From figure \ref{fig:tk_kae}, we can see that the agreement among the results of {\casdk}, {\small EUTERPE} and \gene~is excellent. When evaluating the residual level with gyrokinetic codes, a certain variability in the results must be assumed. This variability comes from the fitting method (smaller than 1\%), the discretization in phase space and the control of the numerical noise, among other factors. All the results in this work obtained with {\gene} show variations smaller than 10\%. In any case, figure \ref{fig:tk_kae} shows that the residual level obtained by the three independent methods coincides within a margin smaller than this quantity, which gives us confidence to consider that the overall error is quite small.
\begin{figure}
\includegraphics[width=1.\linewidth]{lart_tau-fit}
\caption{The same evaluation with {\footnotesize CAS3D-K} of equation (\ref{eq:rlta}) as in figure \ref{fig:tk_kae}, with the adiabatic electron approximation, and the evaluation of equation (\ref{eq:rltk}) with fully kinetic species for different values of $\tau$.}
\label{fig:larttau}
\end{figure}
As can be seen in figure \ref{fig:tk_kae}, the residual value has local maxima centered at the scales of the electron and ion Larmor radii. In the long-wavelength limit, $k_\perp\rho_{ti} \ll 1$, the residual level in a tokamak does not depend on $k_\perp$. Its value is well predicted by (\ref{eq:rh}), and even more accurately by (\ref{eq:XCkequaltozero}), for a large aspect ratio tokamak with circular cross section. For the {\small VMEC} equilibrium used here, these predictions (also indicated in figure \ref{fig:tk_kae}) are not so accurate for the reasons discussed in the paragraph below equation (\ref{eq:XCkequaltozero}). At very short wavelengths, $k_\perp\rho_{te}>2$, the residual value approaches zero as $\left\langle k_\perp \rho_{ts} \right\rangle_\psi^{-1}$ when using fully kinetic species and also for the adiabatic electron approximation. In figure \ref{fig:tk_kae}, it is shown that the adiabatic electron approximation in tokamaks is good for $k_\perp\rho_{te} \lesssim 0.1$. In figure \ref{fig:larttau}, we
reproduce the results with {\casdk} in figure \ref{fig:tk_kae} together with the evaluation of (\ref{eq:rltk}) for different values of $\tau:=T_e/T_i$. On the electron scale, the results with the adiabatic electron approximation and with kinetic electrons only coincide in the limit $\tau\gg1$. Due to the reasons pointed out above, the simulations with {\small EUTERPE} have been carried out only for $k_\perp\rho_{ti}< 1$. Finally, it is obvious that the results of the forced system of figures \ref{fig:XC2} and \ref{fig:XC1} and the initial value problem of figure \ref{fig:tk_kae} behave in a completely different way for $k_\perp\rho_{ti}\gtrsim 1$.
\subsection{Stellarator}
\label{sec:Stellaratorresults}
Now, we turn to stellarator geometry. We use an equilibrium for the standard configuration of the stellarator W7-X obtained with {\small VMEC}. The $q$ profile is given in figure \ref{fig:qprofileSectionGyroSim} and we take flat density and temperature profiles with $T_i=T_e$. In figure \ref{fig:W7Xak}, calculations of the residual level with {\casdk}, {\small EUTERPE} and the full flux surface version of \gene~are shown for $\psi=0.25$. Two curves correspond to {\casdk} computations, one using adiabatic electrons (see equation (\ref{eq:rlta})) and the other one using kinetic electrons (see equation (\ref{eq:rltk})). In figure \ref{fig:W7Xak}, the results of the gyrokinetic simulations were obtained employing both the approximation of adiabatic electrons and fully kinetic species with \gene, whereas only calculations with adiabatic electrons are shown for {\small EUTERPE}. These results have been fitted to an exponential decay model (\ref{eq:expfit}) to get the residual value. Similar results can be obtained
with an algebraic decay model as suggested in reference \cite{Helander2011}. The results of {\casdk} show remarkable agreement with both gyrokinetic codes.
\begin{figure}
\includegraphics[width=1.\linewidth]{w7x_CEG}
\caption{Residual zonal flow level for the initial value problem in the standard configuration of the W7-X stellarator at $\psi=0.25$. The $q$ profile is shown in figure \ref{fig:qprofileSectionGyroSim}.}
\label{fig:W7Xak}
\end{figure}
\begin{figure}
\includegraphics[width=1.\linewidth]{w7x-e_fatD-euterpe-lin-8}
\caption{Residual zonal flow level for the initial value problem in the standard configuration of the W7-X stellarator at $\psi=0.25$, with deuterium ions ($D$) and kinetic heavy electrons ($E$) ($m_E = 400 m_e$) and also using the approximation of adiabatic electrons.}
\label{fig:w7x-e_fatD-euterpe-lin-8}
\end{figure}
As explained and quantified at the end of this section, the gyrokinetic simulations with kinetic species are much more demanding in terms of computational resources than those with adiabatic electrons. Global simulations with {\small EUTERPE} using fully kinetic electrons in stellarator geometry would require an extremely large computing time. This time can be reduced by increasing the mass of the species involved. We have calculated for deuterium ions and kinetic heavy electrons $s=\{D,E\}$, with $m_E = 400\,m_e$ and $T_D=T_E$. The results are shown in figure \ref{fig:w7x-e_fatD-euterpe-lin-8} where we compare the residual level calculated with {\small EUTERPE} and {\casdk} at $\psi=0.25$. The results with adiabatic electrons shown in this figure are exactly the same as those with adiabatic electrons in figure \ref{fig:W7Xak}, obtained for hydrogen ions. Note that, with adiabatic electrons, as the residual level only depends on $\left\langle k_\perp\rho_{ti} \right\rangle_\psi$, the
curves for hydrogen or deuterium ions are exactly the same.
\begin{figure}
\includegraphics[width=1.\linewidth]{w7x-tau}
\caption{The same evaluation with {\footnotesize CAS3D-K} of equation (\ref{eq:rlta}) as in figure \ref{fig:W7Xak}, with the adiabatic electron approximation, and the evaluation of equation (\ref{eq:rltk}) with fully kinetic species for different values of $\tau$.}
\label{fig:w7xtau}
\end{figure}
In figure \ref{fig:W7Xak}, as in tokamaks, we find local maxima of the residual level centered around the scales of the electron and ion Larmor radii. However, at long wavelengths, $k_\perp\rho_{ti}\ll 1$, the residual level as a function of $k_\perp \rho_{ti}$ behaves very differently in tokamaks and in stellarators (see, for example, figures \ref{fig:tk_kae} and \ref{fig:W7Xak}). This can be easily understood by expanding (\ref{eq:rltk}) and (\ref{eq:rlta}) in $k_\perp\rho_{ti}\ll 1$. The numerator of these expressions scales quadratically with $k_\perp \rho_{ti}$ in both tokamaks and stellarators. The difference comes from the denominator. In a stellarator, the denominator is non-zero when $k_\perp\rho_{ti} = 0$. However, in a tokamak the denominator scales quadratically with $k_\perp\rho_{ti}$. The denominator has often been related to the shielding effects of collisionless classical and neoclassical polarization currents \cite{Rosenbluth1998, Watanabe2008, Xiao2006, Xiao2007}.
\begin{figure}
\includegraphics[width=1.\linewidth]{w7x-eH-approximation-log}
\caption{Range of validity of the long-wavelength (LW) approximations, when using fully kinetic species (\ref{eq:stelllwke}) and with the approximation of adiabatic electrons (\ref{eq:stelllwae}), compared to the exact expressions (\ref{eq:rltk}) and (\ref{eq:rlta}), in the standard configuration of the W7-X stellarator at $\psi=0.25$ and with $T_i=T_e$.}
\label{fig:lwapprox}
\end{figure}
It is worth giving explicitly the $k_\perp\rho_{ti}\ll 1$ expansions of (\ref{eq:rltk}) and (\ref{eq:rlta}) in a stellarator and discussing a stellarator specific point in detail. The lowest order term of (\ref{eq:rltk}) gives
\begin{equation}
\frac{\varphi_k(\infty)}{\varphi_k(0)}=
\frac{(1-\epsilon_t)\left\langle k_\perp \rho_{ti}
\right\rangle_\psi^{2}}{\epsilon_t
\left(1+T_i/(Z_i^2T_e)\right)} +O(\left\langle k_\perp \rho_{ti}
\right\rangle_\psi^{4}),
\label{eq:stelllwke}
\end{equation}
where $\epsilon_t = n_s^{-1}\{1\}_s^{{\rm trapped}}$ is the fraction of trapped particles. Here, the superindex ``trapped'' means that the phase space integration is performed only over the trapped region. However, if we use the approximation of adiabatic electrons, (\ref{eq:rlta}), we find
\begin{equation}
\frac{\varphi_k(\infty)}{\varphi_k(0)}=
\frac{(1-\epsilon_t)\left\langle k_\perp \rho_{ti}
\right\rangle_\psi^{2}}{\epsilon_t}+O(\left\langle k_\perp \rho_{ti}
\right\rangle_\psi^{4}).
\label{eq:stelllwae}
\end{equation}
Hence, in stellarators, the adiabatic electron approximation gives an incorrect residual zonal flow, even at long wavelengths. This has been pointed out in references~\cite{Sugama2005, Sugama2006} and is confirmed by the calculations shown in figures \ref{fig:W7Xak} and \ref{fig:w7x-e_fatD-euterpe-lin-8}. The curves in figure \ref{fig:w7xtau} for different values of $\tau$ quantify the error of the adiabatic electron approximation for any wavelength. As can be seen in this figure, the residual level obtained with this approximation only coincides with that obtained with fully kinetic species in the limit $\tau\gg1$. In figure \ref{fig:lwapprox}, we plot the curves in figure \ref{fig:W7Xak} corresponding to {\casdk} together with the evaluation of their expansions to lowest order in $k_\perp\rho_{ti}\ll 1$, (\ref{eq:stelllwke}) and (\ref{eq:stelllwae}). It is clear from figure~\ref{fig:lwapprox} that (\ref{eq:stelllwke}) and (\ref{eq:stelllwae}) are good approximations of (\ref{eq:rltk}) and (\ref{eq:rlta}), respectively, for $k_\perp\rho_{ti} \lesssim 0.2$.
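As a quick numerical illustration of this point, the sketch below evaluates the long-wavelength expressions (\ref{eq:stelllwke}) and (\ref{eq:stelllwae}); the trapped fraction $\epsilon_t=0.3$ and the other parameter values are purely illustrative and are not taken from the W7-X equilibrium:
\begin{verbatim}
# Sketch: long-wavelength stellarator residual with kinetic electrons,
# Eq. (stelllwke), and with adiabatic electrons, Eq. (stelllwae).
# eps_t, tau and krho below are illustrative values only.
def residual_kinetic(krho, eps_t, tau, Z=1):
    # tau = T_e/T_i, so T_i/(Z^2 T_e) = 1/(Z^2*tau)
    return (1.0 - eps_t) * krho**2 / (eps_t * (1.0 + 1.0 / (Z**2 * tau)))

def residual_adiabatic(krho, eps_t):
    return (1.0 - eps_t) * krho**2 / eps_t

krho, eps_t, tau = 0.1, 0.3, 1.0
rk = residual_kinetic(krho, eps_t, tau)
ra = residual_adiabatic(krho, eps_t)
print(rk, ra, ra / rk)  # adiabatic result is larger by 1 + 1/(Z^2*tau)
\end{verbatim}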
We point out that at scales comparable to the ion Larmor radius,
$k_\perp\rho_{ti}\sim 1$, the residual level appears to be larger in
stellarators than in tokamaks (see, for example, figures
\ref{fig:larttau} and \ref{fig:w7xtau}). In order to rule out trivial
explanations, we have studied with {\casdk} the residual level in a
tokamak configuration with the same aspect ratio and $q$ profile as
those of the standard configuration of W7-X, and the results are much
closer to the tokamak case than to the stellarator results. This is
not surprising because, as shown in reference \cite{Xiao2007}, not
only the aspect ratio but also shaping effects like elongation,
triangularity and Shafranov shift, among others, affect the residual
level. We leave for a future work a detailed study of the magnetic
configuration influence on the residual level.
\subsection{Simulation conditions and computational time}
\label{sec:ComputationalTime}
The converged results shown in this work require disparate computing
resources, depending on the code and the physical problem.
The relevant numerical parameters and the computational
resources required by each code are described
below.
In {\casdk}, we used 256 points for the integration over the velocity coordinate $v$, with $0\leq v\leq 4\pi v_{ts}$. For the integration over the $\lambda$ coordinate, we used 72 integration points in the tokamak and 24 in W7-X.
Along the field line, we used 32 points at long wavelengths and up to 4096 for short wavelengths. This resolution allows the correct integration of the highly oscillatory functions. Thanks to axisymmetry, in a tokamak all field lines on a flux surface are equivalent. In W7-X, we used 1024 field lines to cover the flux surface for passing particles. For the evaluation of $\delta_s$ in the stellarator case, all the modes with $|m|\leq 8$ and $|n|\leq 8$ were retained. The calculations were carried out in the EULER cluster at CIEMAT, equipped with Xeon 5450 quadcore processors at 3 GHz and 4XDDR Infiniband network.
In \gene, a 1D spatial grid
along the field line ($z$ coordinate) is used in the tokamak cases
while in stellarator simulations a 2D spatial grid in coordinates $(y,z)$ is used to describe a full flux surface ($y$ is the
coordinate along the binormal direction). In
velocity space, a 2D grid in parallel velocity and magnetic
moment coordinates $(v_{\parallel}, \mu)$ is used in both the
tokamak and the stellarator cases. The resolutions of the spatial
and velocity grids used are given in table \ref{tab:GENE} together
with the time step and the total simulation time for each case. Times are given in $1/\Omega_G$ units, with $\Omega_G=v_{te}/a$, where we recall that $a$ is the minor radius and $v_{te}$ is the thermal velocity of electrons.
The GENE simulations were run in HYDRA \cite{HYDRA}, equipped with Intel Ivybridge at 2.8 GHz and SandyBridge-EP at 2.6 GHz processors interconnected by Infiniband FDR14.
\begin{table}
\centering
\begin{tabular}{ r ccccc c}
\hline \hline
&\multicolumn{2}{c}{long wavelength ($k_\perp\rho_{ti}< 1$)} & & \multicolumn{2}{c}{short wavelength ($k_\perp\rho_{ti}> 1$)} &\\
\cline{2-3} \cline{5-6} & adiabatic e$^{-}$ & kinetic e$^{-}$ & & adiabatic e$^{-}$ & kinetic e$^{-}$ &\\
\hline
$n_z $ & $64$ & $64$ & & $128$ & $256$ &\\
$n_{v_{\parallel}} $ & $128$ & $1024$ & & $1024-2048$ & $2048 - 4096$\\
$n_{\mu}$ & $40$ & $40$ & & $40$ & $ 40$
&\hspace{-.5cm} tokamak \\
$\Delta t_G$ & $1-6$ & $0.03-0.06$ & & $0.2-1$ & $0.02$& \\
$T_G$ & $4000-15000$ & $3000-10000$ & & $4000-10000$ & $150$ & \\
\hline
$n_y$ & $64$ & $ 64$& & $ 64$ & $ 64$ & \\
$n_z $ & $256$ & $128$ & & $128$ & $128$ & \\
$n_{v_{\parallel}}$ & $128$ & $256-512$ & & $128$ & $128 -512$ & \hspace{-.5cm} stellarator \\
$n_{\mu}$ & $20$ & $20$ & & $20$ & $10-20$ & \hspace{-.5cm} \\
$\Delta t_G$ & $4-8$ & $0.06$ & & $0.2-5$ & $0.06$ &\\
$T_G$ & $100000-200000$ & $40000$ & & $300-5000$ & $10000-550000$ &\\
\hline \hline
\end{tabular}
\caption{Numerical parameters used in GENE simulations. The time step ($\Delta t_G$) and the total simulation time ($T_G$) are given in $\Omega_G^{-1}$ units, with $\Omega_G=v_{te}/a$.}
\label{tab:GENE}
\end{table}
In {\small EUTERPE}, the electric potential is represented on a 3D spatial grid in PEST coordinates $(s_E,\theta_E,\phi_E)$ whose radial resolution must be large enough to correctly represent the potential perturbation. The number of markers was set according to the grid resolution to maintain the ratio of markers per grid cell approximately constant. A square low-pass filter in Fourier space ($k_{\theta_E},k_{\phi_E}$) is used to reduce the noise.
In table \ref{tab:EUTERPE} the resolution of the spatial grid $(n_{s_E},n_{\theta_E}, n_{\phi_E})$, the number of markers, the filter cutoff, the time step and the total simulation time used for each case are given. Times are given in $1/\Omega_E$ units, where $\Omega_E=eB^*/m_i$, $e$ is the elementary charge, $m_i$ is the ion mass and $B^*$ is the average of the magnetic field along the magnetic axis.
The {\small EUTERPE} simulations were carried out in EULER and MareNostrum III \cite{MN3}, equipped with Intel SandyBridge-EP processors at 2.6 GHz and Infiniband FDR10 interconnection.
\begin{table}
\centering
\begin{tabular}{ r ccccc}
\hline \hline
{\centering long wavelength}&\multicolumn{2}{c}{tokamak} & & \multicolumn{2}{c}{stellarator}\\
\cline{2-3} \cline{5-6}($k_\perp\rho_{ti}< 1 $) & adiabatic e$^{-}$ & kinetic e$^{-}$& &adiabatic e$^{-}$ & kinetic e$^{-} {^\dag}$ \\
\hline \hline
$n_{s_E}$ & $32-192$ & --- & & $64-192 $ & $32-96$ \\
$n_{\theta_E} \times n_{\phi_E}$ &$16 \times 16$ & --- & & $32 \times 32$ & $16 \times 16$ \\
\# of markers & 40 M $-$ 240 M & --- & & 40 M $-$ 240 M & $40$ M $-$ 120 M \\
filter cutoff& $5 $ & --- & & $5, 10 ^{\dag\dag}$ & $5 $ \\
$\Delta t_E $ & $10 - 5$& --- & & $50 - 20 $ & $0.5$ \\
$T_E$ & $60000$& --- & & $400000$ & $45000$ \\
\hline \hline
\end{tabular}
\caption{Numerical parameters used in EUTERPE simulations. The time step ($\Delta t_E$) and the total simulation time ($T_E$) are given in $\Omega_E^{-1}$ units, with $\Omega_E=eB^*/m_i$. $^\dag$This range corresponds to calculations with deuterium ions and heavy electrons. $^{\dag\dag}$Only for the shortest-wavelength case.}
\label{tab:EUTERPE}
\end{table}
In table~\ref{tab:cpuhts} we illustrate
the computational cost, in total CPU core hours (that
is, the time summed up over all the cores employed in the
simulation), for the different codes and cases studied. Of course,
the values shown in table~\ref{tab:cpuhts} are
simply indicative, as they depend on the numerical
details of the simulations and the type of CPU employed in each
calculation. In addition, a systematic analysis
of the optimal resolution to carry out the computations with each
code has not been performed. The main conclusion that we can extract
from table~\ref{tab:cpuhts} is that determining the residual zonal flow with
{\casdk} is faster than with {\gene} and {\small EUTERPE}. This is
especially true for stellarators. The reason is that in
stellarators only passing particles contribute to the residual value,
while in tokamaks also trapped particles count, and trapped
trajectories typically demand a more careful numerical treatment than
passing ones. Whereas the {\casdk} calculation simply drops the
contribution from the trapped region, the gyrokinetic runs simulate
all trajectories. However, we can see from table~\ref{tab:cpuhts} that
the computational cost when using the gyrokinetic
codes is higher in stellarator geometry, because increased
resolution in phase space is required to obtain converged results.
\begin{table}
\centering
\begin{tabular}{ r ccccc c}
\hline \hline
&\multicolumn{2}{c}{long wavelength ($k_\perp\rho_{ti}< 1$)} & & \multicolumn{2}{c}{short wavelength ($k_\perp\rho_{ti}> 1$)} &\\
\cline{2-3} \cline{5-6} & adiabatic e$^{-}$ & kinetic e$^{-}$ & & adiabatic e$^{-}$ & kinetic e$^{-}$ &\\
\hline
{\casdk} & $ 1.5$ & $ 3$ & & $ 15$ & $ 30$ &\\
\gene & $10^{\dag\dag}$ & $ 1300$ & & $ 150$ & $ 200$ &\hspace{-.5cm} tokamak \\
{\small EUTERPE} & $ 7000$ & --- & & --- & --- &\\
\hline
{\casdk} & $ 0.5$ & $ 1$ & & $ 4$ & $ 8$ &\\
\gene & $ 2000$ & $ 40000$ & & $ 200$ & $ 250000$ &\hspace{-.5cm} stellarator \\
{\small EUTERPE} & $20000$ & $80000^\dag$ & & --- & ---& \\
\hline \hline
\end{tabular}
\caption{Estimated CPU time (total core hours) to obtain the residual zonal flow value with the different codes. We give estimates for tokamaks and stellarators; for adiabatic and kinetic electrons; and for long and short wavelengths. $^\dag$This value corresponds to calculations with deuterium ions and heavy electrons. $^{\dag\dag}$For very small wavenumbers, \gene~computes the residual zonal flow in a tokamak in approximately 0.5 CPU hours.}
\label{tab:cpuhts}
\end{table}
We observe that {\small EUTERPE}, a 3D global code, requires much more
CPU time than \gene, particularly in the tokamak case, in which the
flux tube version of \gene~is used. The reason is that {\small
EUTERPE} simulates the whole plasma while \gene~is here operated in
a radially local limit. The computational cost with
{\small EUTERPE} increases with $k_\perp\rho_{ti}$, because more
flux surfaces have to be considered as $k_\perp$ increases to properly
resolve the radial structure of the potential in all the plasma
volume. In {\small EUTERPE} the different values of $k_\perp\rho_{ti}$
at a given radial position are obtained by keeping the value of
$\rho_{ti}$ (determined by the ion mass, the temperature and the
magnetic field) fixed and varying the value of $k_\perp$. In {\casdk}, the
resolution at short wavelengths must be increased to correctly
calculate the highly oscillatory functions related to the finite orbit
width and the finite Larmor radius effects.
The analytical expression obtained by Rosenbluth and Hinton in
reference \cite{Rosenbluth1998}, given by equation (\ref{eq:rh}), has
been largely used as a linear benchmark for gyrokinetic codes in
tokamak geometry and in the long-wavelength limit. The results
presented in this work show that {\casdk} can be used to perform those
benchmarks not only in tokamak geometry but also in stellarator
geometry and for arbitrary wavelengths. Examples of such benchmarks
are given in figure~\ref{fig:tk_kae} for the global code {\small
EUTERPE} and the flux tube version of \gene~in tokamak geometry, and
in figure~\ref{fig:W7Xak} for {\small EUTERPE} and the full flux
surface version of \gene~in stellarator geometry.
Finally, the tokamak calculation including
a source term in the quasineutrality equation~\cite{Xiao2006,Xiao2007}
has become quite popular in the literature. Just for completeness, we
give an analogous calculation for the stellarator in \ref{sec:forced}
using gyrokinetic simulations and {\casdk}. We also show the results for
the tokamak using gyrokinetic codes (these were not included in
Section \ref{sec:cas3dk}).
\section{Conclusions}
\label{sec:conclusions}
In this work we have treated analytically the linear collisionless zonal flow evolution as an initial value problem, and derived expressions (see (\ref{eq:rsk}) and (\ref{eq:rsa})) for the residual value that are valid for arbitrary wavelengths and for tokamak and stellarator geometries. The expressions (\ref{eq:rsk}) and (\ref{eq:rsa}) involve certain averages in phase space, and also the solution of magnetic differential equations, that cannot be evaluated analytically except for very special situations. We have extended the code {\casdk} to evaluate such expressions in general. We have tested the extension of the code by comparing its results with analytical formulae available in the literature for simplified tokamak geometry~\cite{Xiao2006, Xiao2007}. These tests are given in figures~\ref{fig:RHXC}, \ref{fig:XC2} and \ref{fig:XC1}.
Then, we have computed the residual zonal flow level in tokamak and stellarator geometries for a wide range of radial wavelengths, using both the approximation of adiabatic electrons and fully kinetic electrons. We have compared the results of {\casdk} with those obtained from two gyrokinetic codes: the global code {\small EUTERPE} and the radially local versions of \gene~(full flux surface and flux tube). The comparisons are shown in figures~\ref{fig:tk_kae}, \ref{fig:W7Xak} and \ref{fig:w7x-e_fatD-euterpe-lin-8}.
A stellarator specific effect has been discussed in detail. Namely, the fact that the adiabatic electron approximation gives incorrect zonal flow residuals even for $k_\perp\rho_{ti}\ll 1$, unlike in tokamaks. This effect has also been confirmed by means of gyrokinetic simulations. This is shown in figures \ref{fig:W7Xak}, \ref{fig:w7x-e_fatD-euterpe-lin-8} and \ref{fig:w7xtau}.
Finally, we stress the efficiency of our method to determine the
residual zonal flow. Gyrokinetic simulations with
\gene~and {\small EUTERPE} to obtain the residual level are
computationally expensive, especially with fully kinetic species, in
the short-wavelength region, and in stellarator geometry. In
contrast, the calculations with {\casdk} are far less
demanding (see table~\ref{tab:cpuhts}), particularly in stellarator
geometry. These results show that {\casdk} is a useful tool to
calculate the residual level quickly and accurately in any toroidal
geometry and for arbitrary wavelengths. This code is even more useful
in stellarator geometry as kinetic electrons must be considered to
correctly calculate the residual level. It can also provide a good
benchmark for gyrokinetic codes.
\section*{Acknowledgments}
\label{sec:acknowledgements}
The authors thank Peter Catto, Bill Dorland, Per Helander, Alexey Mishchenko and Yong Xiao for helpful discussions, and Antonio L\'opez-Fraguas for his help with the use of {\small VMEC}. The authors gratefully acknowledge the computer resources, technical expertise and assistance provided by the Barcelona Supercomputing Center (BSC), the Rechenzentrum Garching (RZG) and the Computing Center of
CIEMAT.
This research has been funded in part by grant ENE2012-30832, Ministerio de Econom\'ia y Competitividad (Spain) and by an FPI-CIEMAT PhD fellowship. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
\section{Introduction}
The Meissner effect in a mesoscopic cylindrical structure consisting of a
superconductor coated with a pure normal-metal layer has interesting
features due to the coherent quantum effects in the normal metal. They are
observable when there is a good contact between S and N constituents. This
problem within the quasiclassical Eilenberger formalism was first
investigated theoretically by Zaikin \cite{1a}.
Recent advances in the preparation of pure samples have made it possible to
investigate the coherence properties of mesoscopic samples taking proper
account of the proximity effect \cite{2a}. The samples were superconducting
Nb wires with a radius $R$ of tens of microns coated with a thin layer $d$
of high-purity Cu, Ag or Au. Mota and co-workers \cite{3a} detected
surprising behavior of the magnetic susceptibility $\chi $ of a cylindrical
NS structure (N and S are for the normal metal and the superconductor,
respectively) at very low temperatures ($T < 100$ mK) in an
external magnetic field parallel to the NS boundary. Most intriguingly, a
decrease in the sample temperature below a certain point $T_{r}$ (at a fixed
field) produced a paramagnetic reentrant effect: the decrease of the magnetic
susceptibility of the structure gives way to an unexpected growth. A similar
behavior was observed in the isothermal reentrant effect in a field
decreasing to a certain value $\mathrm{H}_{r}$ below which the
susceptibility started to grow sharply. It is emphasized in Ref. \cite{4a}
that the detected magnetic response of the NS structure is similar to the
properties of the persistent currents in mesoscopic normal rings.
There have been numerous attempts to explain the paramagnetic reentrant
effect theoretically \cite{5a,6a,7a}. However, the predicted amplitudes of
the effect were too small ( except for \cite{6a} ) to account for the
experimental facts. Fauchere, Belzig and Blatter \cite{6a} explain the large
paramagnetic effect assuming strong repulsive electron-electron interaction
in noble metals. The proximity effect in the N metal induces an order
parameter whose phase is shifted by $\pi $ with respect to the order parameter $\Delta $ of
the bulk superconductor. This generates the paramagnetic instability of the
Andreev states, and the density of states of the NS structure exhibits a
\textbf{\textit{single }}peak near zero energy. The theory developed in Ref.
\cite{6a} essentially relies on the assumption of the repulsive electron
interaction in the normal region.
Maki and Haas \cite{7a} made the assumption that in noble metals
(Ag, Au) p-wave superconductivity may occur with a transition temperature of
order $10$ mK. Below $T_{c}$, p-wave triplet superconductivity emerges
around the periphery of the cylinder. The diamagnetic current flowing in the
periphery is compensated by a quantized paramagnetic current in the opposite
direction thus providing a simple explanation for the reentrant effect.
\begin{figure}[tbp]
\includegraphics[width=4in]{fig1.ps}
\caption{Two classes of trajectories in the normal metal of the NS structure
in a magnetic field: trajectories forming the Andreev levels (a);
trajectories colliding only with the dielectric boundary (b). This figure
has been taken from \protect\cite{9a}.}
\end{figure}
As in \cite{5a}, the authors of \cite{7a} also allow for paramagnetic
current in the system, which flows in the opposite direction to the
diamagnetic current. Its amplitude is sufficient to explain the reentrant
effect, but this theory says nothing about the temperature and field
dependences of magnetic susceptibility at ultra-low temperatures and in low
magnetic fields.
A theoretical basis for understanding the paramagnetic reentrant effect has
been proposed in \cite{8a,9a,10a}. The theory is essentially based on the
properties of the quantized levels of the NS structure. The Meissner effect
is rather special in a superconducting cylinder coated with a pure
normal-metal layer. The applied magnetic field generates superconducting
current in the surface layer whose thickness is equal to the field
penetration depth $\delta $. Simultaneously, the Aharonov-Bohm effect
generates persistent current (through the mechanism of the Andreev
scattering of quasiparticles) in the normal layer near the NS boundary. If N
and S metals are separated by a dielectric layer destroying the Andreev
scattering, the additional current disappears, and the Meissner effect
returns to its usual form. The levels with energies no more than $\Delta $ (2%
$\Delta $ is the gap of the superconductor) appear inside the normal metal
bounded by the dielectric (vacuum) on one side and the superconductor on the
other side. Because of the Aharonov-Bohm effect \cite{11a}, the spectrum of
the NS structure in a weak field is a function of the magnetic flux. The
specific feature of the Andreev quantum levels of the structure is that by
varying field $\mathrm{H}$ (or temperature $T$) each level in the well
periodically comes into coincidence with the chemical potential of the
metal. As a result, the state of the system becomes strongly degenerate, and
the density of states of the NS sample exhibits resonance
spikes \cite{9a}. This contributes significantly to the magnetic moment and
causes a reentrant effect. Note that in \cite{9a} the calculation was
performed for orbital susceptibility. In \cite{6a} the explanation involves
the spin (Pauli) susceptibility of the system.
In this study we calculated the free energy of the NS structure and its
magnetic moment (current of magnetization) in the magnetic field. An
approximate analytical calculation was supplemented with a numerical one
based on the exact spectrum of Andreev levels \cite{12a} in the NS contact.
Our approach is not based on application of the Eilenberger equation. We
calculate the thermodynamic potential to obtain the magnetic moment.
\section{Theory and results}
\subsection{Spectrum of Quasiparticles of the NS Structure}
Consider a superconducting cylinder of radius $R$ concentrically coated
with a thin layer $d$ of a pure normal metal. The
structure is placed in a magnetic field $\vec{H}\left( 0,0,\mathrm{H}\right)
$ oriented along the symmetry axis of the structure. It is assumed that the
field is weak enough that the twisting of quasiparticle
trajectories is negligible. Its effect then reduces to the Aharonov-Bohm
effect \cite{11a}, i.e., to an increment in the phase of the wave
function of the quasiparticle moving along its trajectory in the vector
potential field.
We proceed with a simplified model of NS structure in which the order
parameter magnitude changes stepwise at the NS boundary ( $\Delta \left(
x\right) =\Delta \,\Theta \left( -x\right) $, $\mathrm{Im}\left( \Delta \right) =0$,
a bulk superconductor in the region $x<0$ and a normal metal layer in the
region $0\leq x\leq d$ ). It is also assumed that the magnetic field does
not penetrate into the superconductor. The coherent properties observed in
the pure normal metal can be attributed to its large "coherence" length $\xi
_{N}\left( T\right) =\frac{\hbar \cdot \mathrm{v}_{F}}{\pi \cdot k_{B}T}$ ( $%
\mathrm{v}_{F}$ is the Fermi velocity, $k_{B}$ is the Boltzmann constant) at
very low temperatures ($\xi _{N}\left( T\right) $ $\gg $ $d$). Besides, the
spectrum of quasiparticles was obtained assuming a negligible curvature of
the NS boundary.
One can easily distinguish two classes of trajectories inside the normal
metal. One of them includes the trajectories which collide in succession
with the dielectric and NS boundaries (\textbf{Fig.1}). The quasiparticles
moving along these trajectories have energies $\left\vert E\right\vert
<\Delta $ and are localized inside the potential well bounded by a high
dielectric barrier ( $\simeq $ $1eV$ ) on one side and by the
superconducting gap $\Delta $ on the other side ($\Delta $ $=$ $3.56\cdot $ $%
k_{B}T_{c}/2$, $\Delta (\mathrm{Nb})$ $\approx $ $1.42$ meV). On its
collisions, the quasiparticle is reflected specularly from the dielectric
and experiences the Andreev scattering at the NS boundary \cite{12a}. We
introduce an angle $\alpha $ at which the quasiparticle hits the dielectric
boundary. The angle is measured in the positive direction from the normal to
the boundary (\textbf{Fig.1}). In this case the first class contains the
trajectories with $\alpha $ varying within the range $-\alpha _{c}\leq
\alpha \leq \alpha _{c}$ ( $\alpha _{c}$ is the angle at which the
trajectory touches the NS boundary, sin$\left( \alpha _{c}\right) $ $=$ $%
\frac{R}{R+d}$ ).
\begin{figure}[tbp]
\includegraphics[width=4in]{fig2.ps}
\caption{The dependence of the density of states of the NS structure on the
magnetic flux $\Phi $ ( $E=E_{F}=0$ ). Normalization was performed for the
flux $\Phi _{{\protect\small max}}$ corresponding to the highest value of $%
\protect\nu \left( \Phi \right) $ ($\ \Phi _{max}\approx 2.175$ ).}
\end{figure}
Another class includes the trajectories whose spectra are formed by
collisions with the dielectric only, i.e., the trajectories with $\alpha
>\alpha _{c}$. The two groups of trajectories produce significantly
different spectra of quasiparticles. The distinctions are particularly
obvious in the presence of the magnetic field. The trajectories with $\alpha
\lesssim \alpha _{c}$ form a spectrum of Andreev levels which contains an
integral of the vector potential field. The spectrum characterizes the
magnetic flux through the area of the triangle between the quasiparticle
trajectory and the part of the NS boundary. It also determines the magnitude
of the screening current produced by "particles" and "holes" in the N layer.
These states are responsible for the reentrant effect. The trajectories with
$\alpha >\alpha _{c}$ do not collide with the NS boundary. The states
induced by these trajectories are practically similar to the "whispering
gallery" type of states appearing in the cross section of a solid normal
cylinder in a weak magnetic field \cite{13a}, \cite{14a}. The size of the
caustic of these trajectories is of the order of the cylinder radius, i.e.,
they correspond to high magnetic quantum numbers. The spectrum thus formed
carries no information about the parameters of the superconductor, and it is
impossible to meet the resonance condition in this case. These states make a
paramagnetic contribution to the thermodynamics of the NS structure, but
their amplitude is small ( $\sim 1/\left( k_{F}\cdot R\right) $ ). They are
therefore discarded from further consideration. Our interest will be
concentrated on the trajectories with $\left\vert \alpha \right\vert \leq
\alpha _{c}$.
The spectrum of quasiparticles of the NS structure can be obtained easily
using the multidimensional quasiclassical method generalized for the case of
the Andreev scattering in the system \cite{15a}, \cite{16a}. After collision
with the NS boundary the "particle" transforms into a "hole". The "hole"
travels practically along the path of the "particle" but in the reverse
direction (\textbf{Fig.1}).
The spectrum was derived by quantizing the adiabatic invariant $\frac{1}{%
2\pi }\oint \vec{P}\cdot d\vec{s}$ , where $\vec{P}$ $=$ $\vec{p}+$ $\frac{e%
}{c}\vec{A}$, $\vec{A}$ $=$ $\left( 0,A_{y}\left( x\right) ,0\right) $, $%
\vec{P}_{0}$ $=$ $\vec{p}_{0}-$ $\frac{\left\vert e\right\vert }{c}\vec{A}$
for a "particle" and $\vec{P}_{1}$ $=$ $\vec{p}_{1}+$ $\frac{\left\vert
e\right\vert }{c}\vec{A}$ for a "hole". Note that each collision with the NS
boundary multiplies the wave function amplitude of the quasiparticle by a
factor of $\exp \left( -i\arccos \left( {E/\Delta }\right) \right) $.
\begin{figure}[tbp]
\includegraphics[width=4in]{fig3.ps}
\caption{Free energy (normalized per value $\Omega \left( \Phi _{\max
}\right) $) as a function of the flux $\Phi $.}
\end{figure}
Let $\mathrm{{\mathcal{L}}}_{0}$ be the length of the quasiparticle
trajectory between the collisions at the boundaries of the N layer. We thus
\cite{8a}, \cite{9a} arrive at the expression for the spectrum of the
Andreev levels in the NS structure:%
\begin{widetext}
\begin{equation}
E_{n}\left( q,\alpha ,\Phi \right) =\frac{\pi \hbar \mathrm{v}_{L}(q)}{%
\mathrm{{\mathcal{L}}}_{0}}\left[ n+\frac{1}{\pi }\arccos \left( \frac{%
E_{n}\left( q,\alpha ,\Phi \right) }{\Delta }\right) -\frac{\tan \left(
\alpha \right) }{\pi }\Phi \right]. \label{E1}
\end{equation}
\end{widetext}Here $\mathrm{v}_{L}(q)={\sqrt{p_{F}^{2}-q^{2}}/m^{\ast }}$, $%
\mathrm{{\mathcal{L}}}_{0}$ is the length of the quasiparticle trajectory, $p_{F}
$ is the Fermi momentum, $q$ is the quasiparticle momentum component along
the cylinder axis ( $\left\vert q\right\vert \leq p_{F}$ ), ${m^{\ast }}$ is
the effective mass of the quasiparticle, and $\Phi _{0}={hc/2e}$ is the
superconducting flux quantum. The factor $\Phi $ appearing in the last term
in Eq. (\ref{E1}) has the meaning of "phase"%
\begin{equation}
\Phi =\frac{2\pi }{\Phi _{0}}\int\limits_{0}^{d}A_{y}\left( x\right) dx
\label{E2}
\end{equation}%
which is dependent on the vector potential field $\vec{A}$ $=$ $\left(
0,A_{y}\left( x\right) ,0\right) $. The spectrum of Eq. (\ref{E1}) is
similar to Kulik's spectrum \cite{17a} for the current state of the SNS
contact. However, Eq. (\ref{E1}) includes an angle-dependent magnetic flux
instead of the phase difference of the contacting superconductors.
The length of the quasiparticle trajectory ( $2AB$ ) is readily found from
\textbf{Fig.1} using the sine and cosine theorems:%
\begin{equation}
AB=d\cdot \left( \frac{\cos \left( \alpha \right) -\sqrt{\sin \left( \alpha
_{c}\right) ^{2}-\sin \left( \alpha \right) ^{2}}}{1-\sin \left( \alpha
_{c}\right) }\right) \label{E3}
\end{equation}%
where sin$\left( \alpha _{c}\right) $ $=$ $\frac{R}{R+d}$, $-\alpha _{c}\leq
\alpha \leq \alpha _{c}$ . The spectrum in Eq. (\ref{E1}) was derived
assuming that the mean free path of the quasiparticles was much longer than
the cross-section perimeter of the cylinder and the requirement $d\ll R$ was
obeyed. In this limit $\mathrm{{\mathcal{L}}}_{0}$ $=$ $2AB$ $\cong $ ${%
2d/\cos \left( \alpha \right) }$ $\left( \underset{\alpha _{c}\rightarrow
\pi /2}{\lim }AB={d/\cos \left( \alpha \right) }\right) $ , i.e., the radius
$R$ drops out from the expression for the spectrum. Although the boundary
curvature of the sample is disregarded, the information about its
cylindrical geometry is retained through a correct choice of the limits of
integration for the angle $\alpha $: $-\alpha _{c}\leq \alpha \leq \alpha
_{c}$. Putting $\mathrm{{\mathcal{L}}}_{0}$ $=$ ${2d/\cos \left( \alpha
\right) }$, we obtain the following expression for the spectrum ( as in \cite%
{9a} ):%
\begin{widetext}
\begin{equation}
E_{n}\left( q,\alpha ,\Phi \right) =\frac{\pi \hbar \mathrm{v}_{L}(q)\cos
\left( \alpha \right) }{2d}\left[ n+\frac{1}{\pi }\arccos \left( \frac{%
E_{n}\left( q,\alpha ,\Phi \right) }{\Delta }\right) -\frac{\tan \left(
\alpha \right) }{\pi }\Phi \right]. \label{E4}
\end{equation}
\end{widetext}\qquad The spectrum in Eq. (\ref{E4}) has an important feature
\cite{8a}, \cite{9a}. As the "phase" $\Phi $ of Eq. (\ref{E2}) changes, the
density of states exhibits resonance spikes. Every time an Andreev
level coincides with the chemical potential of the metal, the state of the
NS structure becomes strongly degenerate, which shows up as a spike. The dependence of
the density of states upon the magnetic flux calculated numerically for the
NS system is illustrated in \textbf{Fig.2}.
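For orientation, Eq. (\ref{E4}) is implicit in $E_{n}$ and can be solved level by level with a standard root finder. The sketch below (the prefactor $A$, the angle $\alpha $ and the flux $\Phi $ are illustrative values, not those of the experiment) writes the equation in the dimensionless form $\varepsilon =A\left[ n+\arccos (\varepsilon )/\pi -\tan (\alpha )\Phi /\pi \right] $ with $\varepsilon =E/\Delta $ and $A=\pi \hbar \mathrm{v}_{L}\cos (\alpha )/(2d\Delta )$, which has at most one root per $n$ on $[-1,1]$; sweeping $\Phi $ and collecting the roots indicates how the level positions, and hence the spikes of the density of states in \textbf{Fig.2}, move with the flux:
\begin{verbatim}
# Sketch: solve the implicit Andreev spectrum, Eq. (E4), in the
# dimensionless form eps = A*[n + arccos(eps)/pi - tan(alpha)*Phi/pi].
# A, alpha and Phi below are illustrative values only.
import numpy as np
from scipy.optimize import brentq

def andreev_levels(A, alpha, Phi, nmax=50):
    levels = []
    for n in range(-nmax, nmax + 1):
        f = lambda e, n=n: e - A * (n + np.arccos(e) / np.pi
                                    - np.tan(alpha) * Phi / np.pi)
        if f(-1.0) * f(1.0) < 0.0:   # keep only roots bracketed in [-1, 1]
            levels.append(brentq(f, -1.0, 1.0))
    return np.sort(levels)

print(andreev_levels(A=0.5, alpha=0.3, Phi=2.0)[:5])
\end{verbatim}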
Note that in \cite{1a}, \cite{18a} the diamagnetic current of NS structure
was calculated using $\alpha _{c}=\pi /2$ instead of the upper limit of
integration for the angle ( $\alpha _{c}<\pi /2$ ) and assuming implicitly
an infinitely large number of Andreev levels. Therefore, these results
cannot be reconciled with the experimental findings. The reason is not only
that the calculation was made for a flat geometry rather than for a curved
NS boundary. Numerical analysis shows that an adequate interpretation of the
experimental magnetic moment-field dependence is possible only with a proper
choice of the upper limit of integration with respect to $\alpha $. If $%
\alpha >\alpha _{c}$ or $\alpha _{c}=\pi /2$, the treatment effectively includes
states unrelated to the Andreev levels.
\subsection{Self-consistent equation}
To calculate the "phase" $\Phi \left( T,\mathrm{H}\right) $ from Eq. (\ref%
{E2}), we should know the distribution of the vector potential field inside
the normal metal. Zaikin \cite{1a} has shown that the Meissner effect
caused by the proximity effect leads to an inhomogeneous distribution of the
vector potential field over the N layer of the structure:%
\begin{equation}
A_{y}\left( x\right) =\mu _{0}\mathrm{H}x+\mu _{0}j\cdot x\left( d-{x/2}%
\right) \label{ins3}
\end{equation}%
where $\mu _{0}$\ is the permeability of free space ( the SI system of units
is employed, the geometry of the proximity model system is the same as in
\cite{19ains} Fig 1). This expression can be obtained from the Maxwell
equation $rot\left( rot\left( \vec{A}\right) \right) $ $=$ $\mu _{0}\vec{j}$
$=$ $\left( 0,\mu _{0}j,0\right) $ assuming that the current density is
uniform over the cross-section of the conductor and the boundary condition $%
\left. \vec{A}\right\vert _{x=0}$ $=$ $0$, $\left. rot\left( \vec{A}\right)
\right\vert _{x=d}$ $=$ $\vec{B}\left( 0,0,\mu _{0}\mathrm{H}\right) $ is
met. The fact that the current density is constant in the N-layer follows
from spatial homogeneity of the density of Andreev levels over the whole
thickness of the
N-layer. In cylindrical geometries, if the N-layer
thickness is not small compared to the radius ( $d\gtrsim R$ ), the current
density is not constant in space \cite{19a}.
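As a brief check of Eq. (\ref{ins3}), for $\vec{A}=\left( 0,A_{y}\left( x\right) ,0\right) $ and a uniform current density the Maxwell equation reduces to $A_{y}^{\prime \prime }\left( x\right) =-\mu _{0}j$; integrating twice with the stated boundary conditions gives
\begin{eqnarray*}
A_{y}^{\prime }\left( x\right) &=&\mu _{0}\mathrm{H}+\mu _{0}j\left( d-x\right) ,\qquad A_{y}^{\prime }\left( d\right) =\mu _{0}\mathrm{H}, \\
A_{y}\left( x\right) &=&\mu _{0}\mathrm{H}x+\mu _{0}j\,x\left( d-{x/2}\right) ,\qquad A_{y}\left( 0\right) =0,
\end{eqnarray*}%
which reproduces Eq. (\ref{ins3}).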
The magnetic moment per unit length of the N-layer $M\left( \mathrm{H}%
\right) =-\frac{1}{\mu _{0}}\frac{d\Omega \left( T,\Phi \left( \mathrm{H}%
\right) \right) }{d\mathrm{H}}$ ( z-component ) and the current density $%
\vec{j}$ are related via a relation%
\begin{equation}
M\left( T,\mathrm{H}\right) =\frac{1}{2}\int\limits_{V_{N}}\left[ \vec{r}%
\times \vec{j}\left( \vec{r}\right) \right] _{z}dV=\frac{-1}{\mu _{0}}\frac{%
d\Omega }{d\mathrm{H}} \label{ins1}
\end{equation}%
where $V_{N}$ is the volume of the N-layer of unit height and $\Omega \left( T,\Phi
\right) $ is the free energy per unit length. Then the current obtained from
Eq. (\ref{ins1}) is a function of the magnetic flux $\Phi $ and temperature:%
\begin{equation}
j=-\frac{1}{\pi R^{2}d\cdot \mu _{0}}\frac{d\Omega \left( T,\Phi \left(
\mathrm{H}\right) \right) }{d\mathrm{H}} \label{ins4}
\end{equation}%
We can write down the self-consistent equation for $\Phi \left( T,\mathrm{H}%
\right) $ using Eqs. (\ref{E2}), (\ref{ins3}) and (\ref{ins4}):%
\begin{equation}
\Phi \left( T,\mathrm{h}\right) =\mathrm{h}+\eta \cdot M^{\ast }\left( \Phi
\right) \frac{\partial \Phi \left( T,\mathrm{h}\right) }{\partial \mathrm{h}}
\label{E5}
\end{equation}%
where $\mathrm{h}=\frac{\mathrm{H}}{\mathrm{H}_{0}}$, $\mathrm{H}_{0}=\frac{%
\Phi _{0}}{\pi d^{2}\cdot \mu _{0}}$, $\eta =\frac{d^{2}}{3R^{2}\Phi _{0}%
\mathrm{H}_{0}}$, $M^{\ast }\left( \Phi \right) =-\frac{d\Omega }{d\Phi }$
\cite{20FNT}. To describe the field effect on the magnetic moment%
\begin{equation}
M\left( T,\mathrm{H}\right) =\frac{M^{\ast }\left( T,\Phi \right) }{\mu _{0}%
\mathrm{H}_{0}}\cdot \frac{\partial \Phi \left( T,\mathrm{h}\right) }{%
\partial \mathrm{h}} \label{ins2}
\end{equation}%
of a NS structure, it is necessary to find the dependence $\Phi \left( T,%
\mathrm{h}\right) $ from Eq. (\ref{E5}). After calculating the free energy $%
\Omega \left( T,\Phi \right) $ from the spectrum of Eq. (\ref{E4}), we can
estimate the magnetic moment Eq. (\ref{ins2}) using a solution of the
differential equation (\ref{E5}): $M\left( T,\mathrm{H}\right) $ $=$ $\frac{1%
}{\eta \cdot \mu _{0}\mathrm{H}_{0}}$ $\left( \Phi \left( T,\mathrm{h}%
\right) -\mathrm{h}\right) $. In this article, we used the "thermodynamic"
approach of Eq. (\ref{ins4}), which leads to the first-order differential
equation (\ref{E5}) for the function $\Phi \left( T,\mathrm{h}\right) $.
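To indicate how Eq. (\ref{E5}) can be handled in practice, the sketch below rewrites it as $d\Phi /d\mathrm{h}=\left( \Phi -\mathrm{h}\right) /\left( \eta M^{\ast }\left( \Phi \right) \right) $ and integrates it for an assumed toy model of $M^{\ast }\left( \Phi \right) $ (chosen only to mimic the small-flux behavior; in the actual calculation $M^{\ast }$ comes from Eq. (\ref{E19})); the starting slope follows from the linearization $\Phi \simeq k\mathrm{h}$, $M^{\ast }\simeq -m_{1}\Phi $ at small fields:
\begin{verbatim}
# Sketch: integrate the self-consistency equation (E5),
#   dPhi/dh = (Phi - h) / (eta * Mstar(Phi)),
# for an assumed toy model Mstar(Phi); eta, m1, Phi_c are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

eta, m1, Phi_c = 1.0, 2.0, 3.0

def Mstar(Phi):
    return -m1 * Phi / (1.0 + (Phi / Phi_c) ** 2) ** 1.5

def rhs(h, y):
    return [(y[0] - h) / (eta * Mstar(y[0]))]

# small-field slope k from Phi ~ k*h with Mstar ~ -m1*Phi:
#   eta*m1*k**2 + k - 1 = 0
k = (-1.0 + np.sqrt(1.0 + 4.0 * eta * m1)) / (2.0 * eta * m1)
h0, hmax = 1e-3, 5.0
sol = solve_ivp(rhs, (h0, hmax), [k * h0], dense_output=True)
moment = (sol.y[0] - sol.t) / eta   # M up to the 1/(mu0*H0) prefactor
\end{verbatim}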
\begin{figure}[tbp]
\includegraphics[width=4in]{fig4.ps}
\caption{The magnitude of $M^{\ast }\left( \Phi \right) /\left\vert M^{\ast
}\left( \Phi _{\max }\right) \right\vert $ as a function of the flux $\Phi $%
. }
\end{figure}
However, another approach, based on the Eilenberger formalism (\cite{1a},
\cite{18a}), yields an \textbf{algebraic} self-consistent equation (Eq. (22)
in \cite{18a}) for the "phase" $\Phi \left( T,\mathrm{h}\right) $:%
\begin{equation}
\Phi \left( T,\mathrm{h}\right) =\mathrm{h}+const\cdot j\left( \Phi \right)
\label{ins6}
\end{equation}%
(in the notation of Eq. (\ref{E5})). In this equation the function $j\left( \Phi
\right) $ is given by Eq. (13) in \cite{1a}. Clearly, the two
approaches (\ref{E5}) and (\ref{ins6}) lead to quite different
dependences of $M\left( T,\mathrm{H}\right) $ on magnetic field and
temperature. From our point of view, the self-consistent equation (\ref{ins6})
cannot be applied to cylindrical NS structures. While deriving the
expression for the current in \cite{1a}, the author assumed that $\alpha _{c}=\pi
/2$ (the case of a plane). This amounts to including the contribution of
non-Andreev states ( $\alpha >\alpha _{c}$ ). For the calculation of the
thermodynamic potential in equation (\ref{E5}) we use the actual
magnitude of the parameter $\alpha _{c}$ ( $\sin (\alpha _{c})=R/(R+d)$ ). The
approximation $d\ll R$ was used only in the derivation of the spectrum
(\ref{E4}). In other words, neglecting the curvature of the cylindrical
samples (i.e., choosing the path length of quasiparticles as ${d/\cos
\left( \alpha \right) }$, cf. (\ref{E3})) does not entail the need to account for
the states with $\alpha >\alpha _{c}$ in cylindrical NS structures (%
\textbf{Fig.1}).
\subsection{Analytical estimation of the magnetic moment of the NS structure}
We proceed from the expression for the free energy of a NS contact:%
\begin{equation}
\Omega =-k_{B}T\sum\limits_{n,q,\alpha ,s}\ln \left( 1+e^{-\frac{E_{n}\left(
q,\alpha \right) }{k_{B}T}}\right) \label{E6}
\end{equation}%
where the summation is over the spin variable $s=\pm 1$ and all the states
related to the quasiparticles trajectories with $\left\vert \alpha
\right\vert \leq \alpha _{c}$ in Eq. (\ref{E4}). Then, we obtain the
following expression for the free energy per unit length $L$
\begin{widetext}
\begin{equation}
\Omega \left( \Phi \right) =-\frac{R\cdot k_{B}T}{\pi \hbar ^{2}}%
\sum\limits_{n=-\infty }^{n=\infty }\int_{-\alpha c}^{\alpha
_{c}}\int_{-p_{F}}^{p_{F}}\ln \left( 1+\exp \left( -\frac{E_{n}\left(
q,\alpha ,\Phi \right) }{k_{B}T}\right) \right) \sqrt{p_{F}^{2}-q^{2}}\cos
\left( \alpha \right) dqd\alpha \label{E15}
\end{equation}
\end{widetext}where the energy $E_{n}\left( q,\alpha ,\Phi \right) $ is
given by the exact expression for the spectrum in Eq. (\ref{E4}). For
simplification, we introduce the dimensionless quantities $\varepsilon _{n}=%
\frac{E_{n}\left( q,\alpha ,\Phi \right) }{\Delta }$, $\sigma =\frac{\hbar
\cdot p_{F}}{2d\cdot \Delta \cdot m^{\ast }}$, $-1\leq \varepsilon _{n}\leq
1 $, ( $\Delta $ is the superconducting gap ) and perform the change of
variables $\left\{ q,\alpha \right\} \rightarrow \left\{ u,v\right\} $:%
\begin{equation}
\left\{
\begin{array}{c}
u={\sigma \cdot \sqrt{1-\left( \frac{q}{p_{F}}\right) ^{2}}\cdot \cos \left(
\alpha \right) } \\
v={\sigma \cdot \sqrt{1-\left( \frac{q}{p_{F}}\right) ^{2}}\cdot \sin \left(
\alpha \right) }%
\end{array}%
\right. . \label{E16}
\end{equation}%
The spectrum and the free energy become%
\begin{equation}
\varepsilon _{n}=\left[ n\pi +\arccos \left( \varepsilon _{n}\right) \right]
\cdot u-\Phi \cdot v\text{,} \label{E17}
\end{equation}%
\begin{equation}
\Omega \left( \Phi \right) =c_{1}T\sum\limits_{n=0}^{n=\infty
}\iint\limits_{S}\frac{u\ln \left( 2\cosh \left( \frac{\varepsilon _{n}}{%
2c_{2}T}\right) \right) dudv}{\sqrt{\sigma ^{2}-u^{2}-v^{2}}} \label{E18}
\end{equation}%
where $c_{1}=-2\frac{R\cdot \Delta \cdot c_{2}}{\pi }\cdot \left( \frac{p_{F}%
}{\sigma \cdot \hbar }\right) ^{2}$, $c_{2}=\frac{k_{B}}{\Delta }$, $0\leq
u\leq \sigma $, $-\sigma \sin \left( \alpha _{c}\right) \leq v\leq \sigma
\sin \left( \alpha _{c}\right) $, $\varepsilon _{n}\overset{def}{=}%
\varepsilon _{n}\left( u,v,\Phi \right) $, and the integration domain $S$ is a
sector of a circle of radius $\sigma $. In the expression (\ref{E18}) we
also took into account the symmetry of the spectrum in Eq. (\ref{E17}):%
\begin{equation}
\varepsilon _{-\left\vert n\right\vert }\left( u,v,\Phi \right)
=-\varepsilon _{\left\vert n\right\vert -1}\left( u,-v,\Phi \right) \text{.}
\label{E8}
\end{equation}%
Making use of the relation%
\begin{equation}
\frac{d\varepsilon _{n}}{d\Phi }=-\frac{v\cdot \sqrt{1-\varepsilon _{n}^{2}}%
}{u+\sqrt{1-\varepsilon _{n}^{2}}} \label{ins5}
\end{equation}%
we evaluate the derivative of the free energy with respect to the flux $%
M^{\ast }\left( \Phi \right) =-\frac{d\Omega }{d\Phi }$:%
\begin{equation}
M^{\ast }\left( \Phi \right) =c_{3}\sum\limits_{n=0}^{n=\infty
}\iint\limits_{S}\frac{uv\tanh \left( \frac{\varepsilon _{n}}{2c_{2}\cdot T}%
\right) \sqrt{1-\varepsilon _{n}^{2}}dudv}{\left( u+\sqrt{1-\varepsilon
_{n}^{2}}\right) \sqrt{\sigma ^{2}-u^{2}-v^{2}}} \label{E19}
\end{equation}%
where $c_{3}=\frac{R\cdot \Delta }{\pi }\left( \frac{p_{F}}{\sigma \cdot
\hbar }\right) ^{2}$. Eqs. (\ref{ins2}), (\ref{E5}) and (\ref{E19}) fully
determine the non-linear magnetic response of a cylindrical NS structure to
an externally applied magnetic field $\mathrm{H}$.
The integral expression of Eq. (\ref{E19}) shows that $M^{\ast }\left(
\Phi \right) $ is an odd function of the flux $\Phi $: $M^{\ast }\left(
\Phi \right) $ $=$ $-M^{\ast }\left( -\Phi \right) $. The linear term of the
function $M^{\ast }\left( \Phi \right) $ has been determined from an
approximate estimation of the integral in Eq. (\ref{E19}); this calculation
is similar to that in the Appendix of \cite{9a}. The final expression for
the magnetic moment is%
\begin{equation}
M\left( T,\mathrm{h}\right) \simeq M_{0}\sum\limits_{n=0}^{n_{0}}\frac{\ln
\cosh \left( \frac{T_{A}\cdot \tilde{n}}{2T}\right) }{\tilde{n}^{3}\left[
1+\left( \frac{\Phi \left( T,\mathrm{h}\right) }{\pi \cdot \tilde{n}}\right)
^{2}\right] ^{\frac{{3}}{2}}} \label{E14}
\end{equation}%
where $\tilde{n}$ $=$ $n+\frac{1}{2}$, $T_{A}$ $=$ $\frac{\hbar \cdot
\mathrm{v}_{F}}{2\pi d\cdot k_{B}}$ is the Andreev temperature, $M_{0}=$ $-$ $%
c_{3}$ $\cdot $ $\sigma ^{2}\cdot \left( \frac{T}{T_{A}}\right) $ $\cdot $ $%
\Phi \left( T,\mathrm{h}\right) \cdot \frac{\partial \Phi \left( T,\mathrm{h}%
\right) }{\partial \mathrm{h}}$, the "phase" $\Phi \left( T,\mathrm{h}%
\right) $ is a solution of the differential equation (\ref{E5}), $n_{0}$ is
the number of Andreev levels in the potential well ( $n_{0}\approx \frac{%
\Phi \left( T,\mathrm{h}\right) }{\pi }\tan \left( \alpha _{c}\right) $ ).
Eq. (\ref{E14}) shows that the magnetic moment is diamagnetic in the range
of small fields ( $\Phi \left( \mathrm{h}\right) =const\cdot \mathrm{h}$, $%
const>0$ ) and allows for the contributions of "particles" and "holes".
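As a numerical illustration, once $\Phi \left( T,\mathrm{h}\right) $ is known the sum in Eq. (\ref{E14}) can be evaluated directly; the sketch below returns $M/M_{0}$ for illustrative values of $T/T_{A}$ and $\Phi $ (only $\alpha _{c}=36^{\circ }$ is taken from the parameters used later in the numerical subsection):
\begin{verbatim}
# Sketch: evaluate the sum in Eq. (E14); returns M/M_0 with the
# prefactor M_0 kept symbolic. T/T_A and Phi are illustrative values.
import numpy as np

def moment_over_M0(T_over_TA, Phi, alpha_c):
    n0 = int(abs(Phi) / np.pi * np.tan(alpha_c))   # number of levels
    total = 0.0
    for n in range(n0 + 1):
        nt = n + 0.5
        total += (np.log(np.cosh(nt / (2.0 * T_over_TA)))
                  / (nt**3 * (1.0 + (Phi / (np.pi * nt))**2) ** 1.5))
    return total

print(moment_over_M0(T_over_TA=0.2, Phi=1.5, alpha_c=np.radians(36.0)))
\end{verbatim}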
\subsection{Numerical results}
Let us compare the two approaches described above for the calculation of the
magnetic moment of the NS structure. \textbf{Fig.5} shows the function $M^{\ast }\left( \Phi \right) $ and the dependence of the current on the magnetic
flux obtained in the Green's function approach. For comparison, we obtained
the dependence $M^{\ast }\left( \Phi \right) $ at the same value $\alpha _{c}=\pi /2$ as was used in the derivation of the formula $j\left( \Phi \right) $ in \cite{18a}. In the initial part (linear in $\Phi $) both
curves coincide. In this approximation ($\Phi \left( \mathrm{h}\right) =const\cdot \mathrm{h}$) the self-consistent equation (\ref{E5}) turns
into Eq. (\ref{ins6}). Thus, at small values of the magnetic field we
would obtain the same field dependence of the magnetic moment $M\left( T,\mathrm{h}\right) $ of the NS structure in both approaches.
\begin{figure}[tbp]
\includegraphics[width=4in]{fig5.ps}
\caption{The magnitudes $M^{\ast }\left( \Phi \right) /\left\vert M^{\ast
}\left( 0.1\right) \right\vert $ and $j\left( \Phi \right) /\left\vert
j\left( 0.1\right) \right\vert $ as the functions of the flux $\Phi $ (for
explanation see text).}
\end{figure}
However, in large fields the behavior of $M\left( T,\mathrm{h}\right) $ is
quite different. To calculate $M\left( T,\mathrm{h}\right) $ from Eq. (\ref{ins2}),
we have used the following physical values of the NS structure: $R=8.3$ $\mu $m,
$d=3.2$ $\mu $m ($\alpha _{c}=36^{\circ }$), $\mathrm{v}_{F}\left( \text{Au}\right) =1.4\cdot 10^{8}$ cm/s,
$\Delta \left( \text{Nb}\right) =1.42$ meV ( $\sigma =0.644$, $\eta \cdot c_{3}=5.3\cdot 10^{3}$,
$\mathrm{H}_{0}=51$ A/m $=0.64$ Oe ). The selected parameters are close to
those used in the experiment \cite{3a,4a,20a}.
The results of the calculation according to formulas (\ref{E18}) and (\ref{E19})
are illustrated in \textbf{Figs. 3 and 4}. While plotting \textbf{Fig. 3}, the
nonzero quantity $\Omega \left( \Phi =0\right) $ was omitted. The dependence
$M^{\ast }\left( \Phi \right) =-\frac{d\Omega }{d\Phi }$ (\textbf{Fig. 4})
crosses the abscissa thereby determining singular points of the differential
equation (\ref{E5}). The dependence $\Phi \left( \mathrm{h},\mathrm{T}%
\right) $ calculated through numerical solution of the self-consistency
equation (\ref{E5}) exhibits jumps and is illustrated in \textbf{Fig.6a} for
the branches corresponding to the minimum of the Gibbs free energy \cite%
{Gibbs}:%
\begin{equation}
\mathcal{G}\left( T,\mathrm{H}\right) =\Omega \left( T,\mathrm{H}\right) +%
\frac{1}{2\mu _{0}}\int\limits_{V_{N}}\left( \vec{B}-\mu _{0}\vec{H}\right)
^{2}dV \label{Gibbs}
\end{equation}%
where $\vec{B}=rot\left( \vec{A}\right) $, $\vec{H}=\vec{H}\left( 0,0,%
\mathrm{H}\right) $. The magnetic moment $M\left( \mathrm{h}\right) $ and
the free energy $\Omega \left( \mathrm{h}\right) $ as functions of the
magnetic field are shown in \textbf{Fig. 7a, Fig. 6b.} Each jump $\Delta
\Omega $ of the free energy (see \textbf{Fig. 6b}) is accompanied by the
jump of the magnetic moment $\Delta M$ (see \textbf{Fig. 7a}) in such a way
that the Gibbs free energy (\ref{Gibbs}) is a continuous function of
the magnetic field $\mathrm{h}$. We have not performed an analysis of
the behavior of the Gibbs free energy (\ref{Gibbs}) near the points where the magnetic
moment has jumps because it is beyond the semi-classical approximation
adopted in this article.
\section{Conclusions}
The goal of our study was to interpret the experiments performed by A. C.
Mota et al., \cite{3a,4a}, who detected an anomalous behavior of the
magnetic susceptibility of the NS structure in a weak magnetic field at
millikelvin temperatures.
\begin{figure}[tbp]
\includegraphics[width=4in]{fig6.ps}
\caption{a) the dependence of $\Phi \left( T,\
\section{Quantum Discord for the class of states (\ref{GG3})}\label{iii}
As mentioned earlier, we will follow the approach of \cite{kavmodiprl} to characterize and quantify all
kinds of correlations in a quantum state. The definitions of relevant quantities are:
\bqa\label{qddf1}\mbox{ Entanglement }E&=&\min_{\sigma\in \mathcal{D}}S(\rho\|\sigma)\\
\label{qddf2}\mbox{ Discord }D&=&\min_{\chi\in \mathcal{C}}S(\rho\|\chi)\\
\label{qddf3}\mbox{ Dissonance }Q&=&\min_{\chi\in \mathcal{C}}S(\sigma\|\chi)\\
\label{qddf4}\mbox{ Classical correlations }C &=&\min_{\pi\in \mathcal{P}}S(\rho\|\pi)\eqa
where $\mathcal{P}$ is the set of all product states (i.e., states of the form $\pi=\pi_1\otimes\pi_2\otimes\ldots\otimes\pi_N$),
$\mathcal{C}$ is the set of all classical states (i.e., states of the form $\chi=\sum_{\overrightarrow{k}}p_{\overrightarrow{k}}
|\overrightarrow{k}\ran\lan \overrightarrow{k}|$, with the local states $|k_n\ran$ spanning an orthonormal basis), $\mathcal{D}$ is the set of all separable states (i.e., states of the form
$\sigma=\sum_k p_k \pi_1^k\otimes\pi_2^k\otimes\ldots\otimes\pi_N^k$) and $S(x\|y)=\mbox{Tr}[x\log x-x\log y]$
is the relative entropy of $x$ with respect to $y$. We shall first find the closest separable state (CSS) to the class of states (\ref{GG3}).
Fortunately, it turns out that the CSS is also a classical state, thereby implying $D=E=R_E(|G_{ij}\ran)$ and $Q=0$.
Before proceeding to calculations, we recall that finding out the CSS is a challenging problem
\cite{eisert03, hkpra10}. To obtain the CSS to a multipartite state,
two interesting tools are available in the literature. The first one is a lower bound through the generalization of Plenio-Vedral
formula \cite{plenved01}:
\bq\label{pvfre} S(\rho_N||\sigma_N)\ge S(\rho_{N-1}||\sigma_{N-1})+S(\rho_{N-1})-S(\rho_N),\eq
where $\rho_N$ is any $N$-partite state and $\sigma_N$ is an $N$-separable state. So, for any $N$-qubit pure state $\rho_N$ we have the lower bound \bq\label{qdf1}E(\rho_N)\ge\max\{E(\rho_{N-1})+S(\rho_{N-1})\}\eq
where the maximum is taken over all possible bipartitions of $N-1$ qubits versus a single qubit.
The second tool is due to Wei et al. \cite{WEGM4qic}. For any $N$-partite pure state $|\psi\ran$,
it gives a lower bound on $E$ through $P_{max}(|\psi\ran)$:
\bq\label{tcweire} E(|\psi\ran\lan\psi|)\ge-\log_2P_{max}(|\psi\ran).\eq Since $E$ is
defined through minimization, if we can find a separable state $\sigma$ which saturates either of the bounds in
(\ref{qdf1}) and (\ref{tcweire}), then $\sigma$ will be the required CSS. In fact, the latter bound has been extensively used
to derive REE of symmetric Dicke states \cite{WEGM4qic} and even mixtures of them \cite{tcweree08}. But unfortunately,
these bounds are not saturated for the states in (\ref{GG3}). Indeed, the bound (\ref{tcweire}) is not saturated even for the simplest 2-qubit non-maximally entangled states (e.g., for $|\phi\ran=a|00\ran+b|11\ran$ with $|a|^2+|b|^2=1$, one has $P_{max}=\max\{|a|^2,|b|^2\}$
whereas $E(|\phi\ran)=-|a|^2\log|a|^2-|b|^2\log|b|^2=H(|a|^2)$ \cite{vedral97}). It is thus quite challenging to derive the CSS. The reverse problem (i.e., starting from a $\sigma$ on the boundary of $\mathcal{D}$, determining all entangled states $\rho$ for which $\sigma$ is the CSS) is also interesting and has been solved for the 2-qubit case \cite{si03r} and, very recently, for multiparty states
\cite{gg10}. We shall apply this multiparty criterion to derive the CSS. The criterion reads:
\emph{\textbf{Necessary and sufficient criterion for the CSS }}\cite{gg10}: $\sigma\in\mathcal{D}$ is a CSS for an entangled state $\rho$ if and only if \bq\label{gourcr}\max_{\sigma'\in D}\mbox{ Tr }\sigma'L_\sigma(\rho)=1,\eq
where the linear operator $L_{\sigma}$ is defined in the following way. Let the eigendecomposition of hermitian positive operator $\alpha$ be $\alpha=\mbox{diag}(a_1,a_2,\ldots,a_n)$. Then for any $\beta=[b_{ij}]_{i,j=1}^n$, $L_{\alpha}(\beta)$ is defined by\bq\label{gourlab}[L_{\alpha}(\beta)]_{kl}=\left\{\begin{array}{ll}
b_{kl}\frac{\ln a_k-\ln a_l}{a_k-a_l}, & \mbox{if }a_k\ne a_l \\
b_{kl}\frac{1}{a}, & \mbox{if }a_k=a_l=a
\end{array}\right.\eq
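A minimal numerical sketch of this construction for our case, where $\sigma$ is diagonal in the computational basis, is given below; it builds $L_\sigma(\rho)$ elementwise according to (\ref{gourlab}) and evaluates $\mbox{Tr}\,\sigma'L_\sigma(\rho)$ for a random product state $\sigma'$, so that the criterion (\ref{gourcr}) can be probed numerically (all function names and the parameter values are our own, purely illustrative, choices):
\begin{verbatim}
# Sketch: build L_sigma(rho) elementwise for a sigma diagonal in the
# computational basis, and test Tr[sigma' L_sigma(rho)] <= 1 on a
# random two-qubit product state sigma'.
import numpy as np

def L(a, rho):
    # a: eigenvalues of sigma (its diagonal); rho: density matrix
    n = len(a)
    out = np.empty_like(rho, dtype=complex)
    for k in range(n):
        for l in range(n):
            if np.isclose(a[k], a[l]):
                out[k, l] = rho[k, l] / a[k]
            else:
                out[k, l] = (rho[k, l] * (np.log(a[k]) - np.log(a[l]))
                             / (a[k] - a[l]))
    return out

def random_product_state():
    kets = []
    for _ in range(2):
        v = np.random.randn(2) + 1j * np.random.randn(2)
        kets.append(v / np.linalg.norm(v))
    psi = np.kron(kets[0], kets[1])
    return np.outer(psi, psi.conj())

# the state of this section (+ branch) and its candidate CSS, c = cos(0.3)
c, s, g = np.cos(0.3), np.sin(0.3), 0.0
sigma_diag = np.array([c**2, s**2, s**2, c**2]) / 2
psi = np.array([c, s*np.exp(1j*g), s*np.exp(1j*g), c]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
val = np.real(np.trace(random_product_state() @ L(sigma_diag, rho)))
print(val)   # never exceeds 1, in agreement with the proof below
\end{verbatim}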
We shall now derive the CSS of our states. Since REE is invariant under LU and (\ref{GG3}) can be transformed to (\ref{nGG1}) by LU, we can consider the REE of this state without loss of generality. The state (\ref{nGG1}) has GME similar to that of the non-maximally entangled Bell state, so we assume that it will have a similar REE as well. Hence we take the CSS as \bq\label{eq1}
\sigma=\frac{c^2}{2}(|00\ran\lan00|+|11\ran\lan11|)+
\frac{s^2}{2}(|01\ran\lan01|+|10\ran\lan10|),\eq
where (and hereafter) we have dropped the suffixes $m$ and $n$.
\textbf{Proof}:\bqa\sigma&=&\mbox{diag}(\frac{c^2}{2},\frac{s^2}{2},\frac{s^2}{2},\frac{c^2}{2})\nonumber\\
\mbox{and }\rho&=&\frac{1}{2}\left(
\begin{array}{cccc}
c^2 & cse^{-i\gamma}& cse^{-i\gamma} & \pm c^2 \\
cse^{i\gamma}& s^2 & s^2 & \pm cse^{i\gamma}\\
cse^{i\gamma}& s^2 & s^2 & \pm cse^{i\gamma}\\
\pm c^2 & \pm cse^{-i\gamma}& \pm cse^{-i\gamma} & c^2 \\
\end{array}
\right).\nonumber\eqa
Hence from the definition of $L_\sigma(\rho)$,
\bq L_\sigma(\rho)=\left(
\begin{array}{cccc}
1 & qe^{-i\gamma}& qe^{-i\gamma} & \pm1 \\
qe^{i\gamma}& 1 & 1 & \pm qe^{i\gamma}\\
qe^{i\gamma}& 1 & 1 & \pm qe^{i\gamma}\\
\pm1 & \pm qe^{-i\gamma}&\pm qe^{-i\gamma} & 1 \\
\end{array}
\right)\nonumber\eq where $q=\frac{cs\ln\frac{c^2}{s^2}}{c^2-s^2}$, consistent with the natural logarithm in (\ref{gourlab}). Note that $|q|\le1$.
Now let $\sigma'=\sum p_k|\phi_k\ran\lan\phi_k|$. Then
\bqa &&\mbox{ Tr }\sigma'L_\sigma(\rho)=\sum p_k\lan\phi_k|L_\sigma(\rho)|\phi_k\ran\nonumber\\
&=&\sum p_k[|\lan\phi_k|00\ran|^2+|\lan\phi_k|01\ran|^2+|\lan\phi_k|10\ran|^2+|\lan\phi_k|11\ran|^2\nonumber\\
&+&2Real(qe^{-i\gamma}\lan\phi_k|00\ran(\lan01|\phi_k\ran+\lan10|\phi_k\ran))\nonumber\\
&\pm& 2Real\lan\phi_k|00\ran\lan11|\phi_k\ran+2Real\lan\phi_k|01\ran\lan10|\phi_k\ran\nonumber\\
&\pm& 2Real(qe^{-i\gamma}\lan\phi_k|11\ran(\lan01|\phi_k\ran+\lan10|\phi_k\ran))]\nonumber\\
&\le&\sum p_k[|\lan\phi_k|00\ran|^2+|\lan\phi_k|01\ran|^2+|\lan\phi_k|10\ran|^2+|\lan\phi_k|11\ran|^2\nonumber\\
&+&2(|\lan\phi_k|00\ran||\lan\phi_k|01\ran|+\ldots+|\lan\phi_k|10\ran||\lan\phi_k|11\ran|)]\nonumber\\
&=&\sum p_k[|\lan\phi_k|00\ran|+|\lan\phi_k|01\ran|+|\lan\phi_k|10\ran|+|\lan\phi_k|11\ran|]^2\nonumber\eqa
Since each $|\phi_k\ran$ is a product state, we have
$|\phi_k\ran=|\varphi_k\ran|\psi_k\ran$. So the last expression above can be written as
$\sum p_k[(|\lan{\varphi}_k|0\ran|+|\lan{\varphi}_k|1\ran|)(|\lan{\psi}_k|0\ran|+|\lan{\psi}_k|1\ran|)]^2\le1,$ since for any normalized product state $|\phi\ran$ (of $\ge 2$ qubits), $|\lan\phi|0\ran|+|\lan\phi|1\ran|\le1$ (which can be seen from (\ref{max3})).\hfill $\blacksquare$
Thus $\sigma$ is indeed the CSS. Being a classical state as well, $\sigma$ is also the CCS, thereby yielding $D=E=-c^2\log\frac{c^2}{2}-(1-c^2)\log\frac{(1-c^2)}{2}
=1+H(c^2)$ and $Q=0$. We have depicted all the known bounds together with our exact results for this state in Fig.~\ref{fig1}.
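As an independent sanity check, both $L_\sigma(\rho)$ and the value $D=E=1+H(c^2)$ can be verified numerically. The following minimal Python sketch is not part of the derivation and is purely illustrative (all names are ours); it works in the four-dimensional subspace spanned by the states appearing above, builds $\rho$ and $\sigma$ for chosen $c$ and $\gamma$, evaluates $L_\sigma(\rho)$ entrywise according to (\ref{gourlab}), and compares $-\mbox{Tr}[\rho\log_2\sigma]$ with $1+H(c^2)$ (logarithms base 2 for the entropies, natural logarithm inside $L_\sigma$).
\begin{verbatim}
import numpy as np

def L_op(a, rho):
    # entrywise definition of L_alpha(beta), Eq. (gourlab), with alpha = diag(a)
    out = np.zeros_like(rho, dtype=complex)
    for k in range(len(a)):
        for l in range(len(a)):
            if np.isclose(a[k], a[l]):
                out[k, l] = rho[k, l] / a[k]
            else:
                out[k, l] = rho[k, l]*(np.log(a[k]) - np.log(a[l]))/(a[k] - a[l])
    return out

c, gamma, sign = 0.8, 0.3, +1                      # illustrative parameters
s = np.sqrt(1 - c**2)
psi = np.array([c, s*np.exp(1j*gamma), s*np.exp(1j*gamma), sign*c])/np.sqrt(2)
rho = np.outer(psi, psi.conj())                    # pure state with the density matrix displayed above
sig = np.array([c**2/2, s**2/2, s**2/2, c**2/2])   # diagonal of the CSS sigma, Eq. (eq1)

q = c*s*np.log(c**2/s**2)/(c**2 - s**2)
print(np.round(L_op(sig, rho), 6), q)              # reproduces the matrix L_sigma(rho) given above

# S(rho||sigma) = -Tr[rho log2 sigma] for pure rho; compare with 1 + H(c^2)
S_rel = -np.sum(np.real(np.diag(rho))*np.log2(sig))
H = lambda x: -x*np.log2(x) - (1 - x)*np.log2(1 - x)
print(S_rel, 1 + H(c**2))
\end{verbatim}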
\begin{figure}
\includegraphics[width=7.5cm]{fig1.eps}
\caption{(color online only) Known bounds vs.\ exact results: BG = bound on GME obtained in \cite{slc}; G = exact GME; BE = bound on REE obtained through $-\log_2 P_{max}$ \cite{WEGM4qic}; E = exact REE.}\label{fig1}
\end{figure}
\section{Conjecture for discord of $N$-qubit $W$ state}\label{iv}
From the discussion of the previous section, it is clear that determining the CSS is a nontrivial task. Determining the CCS is even more complicated because the set $\mathcal{C}$ is not convex, so the standard tools of convex optimization are not directly applicable. However, to calculate the discord $D$ and the dissonance $Q$, the authors of \cite{kavmodiprl} have simplified the task of minimizing over $\mathcal{C}$. They have shown that for any given $\rho$, the CCS $\chi_{\rho}$ is given by $\chi_{\rho}=\sum_{\overrightarrow{k}}|\overrightarrow{k}\ran\lan\overrightarrow{k}|\rho|\overrightarrow{k}\ran\lan\overrightarrow{k}|$, where $\{|\overrightarrow{k}\ran\}$ forms the eigenbasis of $\chi_\rho$. This simplifies the expressions
for $D$ and $Q$ as the minimization of the relative entropy over $\mathcal{C}$ reduces to minimization of the
von Neumann entropy $S(\chi_x)$ over the choice of local basis $\{|\overrightarrow{k}\ran\}$:
\bq\label{kavnedd} D=S(\chi_\rho)-S(\rho),\quad Q=S(\chi_\sigma)-S(\sigma),\eq
where $S(\chi_x)=\min_{|\overrightarrow{k}\ran}S(|\overrightarrow{k}\ran\lan\overrightarrow{k}|x|
\overrightarrow{k}\ran\lan\overrightarrow{k}|x|
\overrightarrow{k}\ran\lan\overrightarrow{k}|)$. Therefore, for a numerical computation of $D$, one can sample local bases and take the minimum of the corresponding entropies, as sketched below. A finer approach is to generate vectors on an equally spaced grid, construct from each a complete orthonormal basis by the Gram-Schmidt method, and take the minimum entropy over the grid. This technique is useful mostly in low-dimensional cases \cite{nakahara}.
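For small systems the first strategy is straightforward to implement. The sketch below is purely illustrative, with ad hoc names of our own; it parametrizes each local basis by two Bloch angles, dephases $\rho$ in the resulting product basis, and minimizes the von Neumann entropy with random restarts of a standard simplex minimizer, so it only yields an upper-bound estimate of $D$.
\begin{verbatim}
import numpy as np
from itertools import product
from scipy.optimize import minimize

def local_basis(theta, phi):
    # orthonormal single-qubit basis labelled by a Bloch-sphere direction
    v0 = np.array([np.cos(theta), np.exp(1j*phi)*np.sin(theta)])
    v1 = np.array([-np.exp(-1j*phi)*np.sin(theta), np.cos(theta)])
    return v0, v1

def entropy(p):
    p = p[p > 1e-12]
    return -np.sum(p*np.log2(p))

def dephased_entropy(angles, rho, N):
    # S(chi_rho) for the product basis determined by 'angles'
    bases = [local_basis(angles[2*i], angles[2*i+1]) for i in range(N)]
    probs = []
    for bits in product((0, 1), repeat=N):
        vec = bases[0][bits[0]]
        for i in range(1, N):
            vec = np.kron(vec, bases[i][bits[i]])
        probs.append(np.real(vec.conj() @ rho @ vec))
    return entropy(np.array(probs))

def discord_estimate(rho, N, trials=20):
    S_rho = entropy(np.linalg.eigvalsh(rho))
    best = min(minimize(dephased_entropy, np.pi*np.random.rand(2*N),
                        args=(rho, N), method="Nelder-Mead").fun
               for _ in range(trials))
    return best - S_rho      # upper bound on D = min S(chi_rho) - S(rho)
\end{verbatim}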
The CSS to the $N$-qubit $W$-state is known to be \cite{WEGM4qic} \bq \sigma_W=\sum_{k=0}^N~^NC_k\left(\frac{k}{N}\right)^k\left(\frac{N-k}{N}\right)^{N-k}|S(N,k)\ran\lan S(N,k)|,\nonumber\eq
$|S(N,k)\ran$ being the $k$-th symmetric (Dicke) state. For $N\ge3$, the above separable state is not a classical state. Therefore $D\ne E$ and $Q\ne0$ for $W$ states (contrary to the $GHZ$ case, where the CSS was a classical state).
Since the $W$ state is symmetric, we assume that the CCS can be chosen to be symmetric \cite{guess}. So we choose each of the local orthonormal basis of the classical state $\chi_W$ as \bqa |0'\ran&=&\sqrt{p}|0\ran+\sqrt{1-p}|1\ran\nonumber\\
|1'\ran&=&\sqrt{1-p}|0\ran-\sqrt{p}|1\ran\nonumber\eqa so that $\lan x'|x\ran=(-1)^x\sqrt{p}$, $\lan
x'|y\ran=\sqrt{1-p}$, $x\ne y=0,1$. Therefore we have \bq\lan x'_1x'_2\ldots x'_N|y_1y_2\ldots y_N\ran=(-1)^{m_1}(\sqrt{p})^{m}(\sqrt{1-p})^{N-m},\nonumber\eq where $m$ is the number of positions where the two binary strings $x$ and $y$ agree and $m_1$ is the number of positions where both have a 1. Since for the $W$ state each $y$ has exactly one 1, the inner product $\lan x'|W\ran$ depends only on the number of 1s in $x$. So, if a basis state $|x'_k\ran$ contains $k$ 1s, we have
\bqa\label{wdis1}\lan x'_k|W\ran=\frac{1}{\sqrt{N}}\left[-~^kC_1(\sqrt{p})^{N-k+1}(\sqrt{1-p})^{k-1}\right.\nonumber\\
+\left.~^{N-k}C_1(\sqrt{p})^{N-k-1}(\sqrt{1-p})^{k+1}\right]\nonumber\\
=\frac{1}{\sqrt{N}}(\sqrt{p})^{N-k-1}(\sqrt{1-p})^{k-1}[N(1-p)-k]\eqa
Now from (\ref{kavnedd}), to determine $D$, we have to find the minimum of \bqa\label{wdis2}\mathbb{S}&=&S\left(\sum
\limits_{x'=x'_1x'_2\ldots x'_N;x_j=0,1}|x'\ran\lan x'|W\ran\lan W|x'\ran\lan x'|\right)\nonumber\\
&=&-\sum\limits_{x'=x'_1x'_2\ldots x'_N;x_j=0,1}|\lan x'|W\ran|^2\log_2|\lan x'|W\ran|^2\nonumber\\
&=&-\sum\limits_{k=0}^N~^NC_k\lambda_k\log_2\lambda_k,\eqa where $\lambda_k=|\lan x'_k|W\ran|^2$, with $x_k$ any binary string of length $N$ having $k$ 1s, is given by (\ref{wdis1}). It can easily be checked that $\mathbb{S}$ attains its (global) minimum at $p=0,1$ (see Fig.~\ref{fig2}). Therefore the CCS to the $W$ state is the state dephased in the computational basis, and consequently $D=\log_2N$.
Employing the method of \cite{nakahara}, we have also verified numerically (independently of the assumption that the CCS is symmetric) that up to $N=5$ this is indeed the minimum. We thus \emph{\textbf{conjecture that the discord of the $N$-qubit $W$ state is $\log_2N$}}.
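This claim is also easy to check directly, without using (\ref{wdis1}). The short sketch below (illustrative only, with names of our own) builds the $N$-qubit $W$ state, dephases it in the symmetric product basis $\{|0'\ran,|1'\ran\}^{\otimes N}$, and scans $\mathbb{S}(p)$ over a grid in $p$; the smallest value is found at the endpoints $p=0,1$ and equals $\log_2N$, in agreement with the statement above for small $N$.
\begin{verbatim}
import numpy as np
from itertools import product

def S_of_p(N, p):
    # entropy of the W state dephased in the symmetric basis {|0'>,|1'>}
    b0 = np.array([np.sqrt(p), np.sqrt(1 - p)])       # |0'>
    b1 = np.array([np.sqrt(1 - p), -np.sqrt(p)])      # |1'>
    W = np.zeros(2**N)
    for i in range(N):                                 # single-excitation states
        W[1 << (N - 1 - i)] = 1/np.sqrt(N)
    probs = []
    for bits in product((0, 1), repeat=N):
        vec = np.array([1.0])
        for b in bits:
            vec = np.kron(vec, b0 if b == 0 else b1)
        probs.append(np.dot(vec, W)**2)
    probs = np.array(probs)
    probs = probs[probs > 1e-15]
    return -np.sum(probs*np.log2(probs))

N = 4
grid = np.linspace(0.0, 1.0, 51)
values = [S_of_p(N, p) for p in grid]
print(min(values), np.log2(N))   # minimum log2(N), attained at p = 0 and p = 1
\end{verbatim}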
\begin{figure}
\includegraphics[width=7.5cm]{fig2.eps}
\caption{(color online only) $\mathbb{S}$ versus $p$.}\label{fig2}
\end{figure}\\
\section{Discussion }\label{v}
First of all, we note that both the results (\ref{GGpmax1}) and (\ref{eq1}) can straightforwardly
be extended to the case of \textit{non-maximal} $GHZ$ states
(i.e., $a|i_1i_2\ldots i_N\ran+b|\bar{i}_1\bar{i}_2\ldots
\bar{i}_N\ran, |a|^2+|b|^2=1$). However, calculation of GME for
superposition of two \emph{arbitrary} GHZ states is more
involved. In fact, even for a single non-maximal (generalized) $W$
state, obtaining the GME is quite nontrivial. Recently, the three-qubit
case has been studied in \cite{tcw}, and this has been further generalized
to $N$ qubits \cite{sud}. From a broader perspective, a generalization of GME in
which the maximum distance would be calculated from the set of all states which are equivalent
under stochastic local operations and classical communications (instead of just product states), has
been introduced in \cite{ut}. It would be interesting to see how the GME of superposition behaves
in this context.
Another basic question related to the measure of correlations
is the \emph{additivity} of the proposed measure. It is
known that GME is in general not additive \cite{wh}; precisely,
for $N\ge3$, GME is not additive for any two $N$-partite
antisymmetric states \cite{zch}. However, the corresponding question is still open for the total correlation $T_{\rho}$
(defined as $S(\rho\|\pi_{\rho})$) of a quantum state. It has been conjectured \cite{kavmodiprl} that $T_{\rho}$ is
subadditive: $T_{\rho}>E+Q+C_{\sigma}$, where $C_{\sigma}$ is the \emph{classical correlation}
$S({\chi}_{\sigma}\|{\pi}_{\sigma})$.
A further direction along our line of study would be to explore the correlations in $N$-qubit $GHZ$-diagonal states (arbitrary mixtures of the states $|G_{ij}^{\pm}\ran\lan G_{ij}^{\pm}|$). Because of their simple structure (both algebraic and geometric), the two-qubit case allows easy computation of all the measures and has been studied extensively. Beyond this, however, even a criterion for entanglement is not known to date. We hope that the lower bound in (\ref{qdf1}) may provide some insight into the structure of the CSS, which can then be verified using the necessary and sufficient condition given in \cite{gg10}.
To conclude, we have derived analytically the GME and the discord (via the REE) for superpositions of some orthonormal $GHZ$ states.
We have also conjectured the discord for $W$ states. Perhaps a similar approach could be applied to other permutationally invariant states.
\textbf{Acknowledgment}: SR would like to thank T.-C. Wei and Gilad Gour
for helpful discussions.
\section*{Introduction}
Let $X$ be a scheme over a perfect field $k$ of characteristic $p>0$.
The de Rham-Witt complex $W\Omega^*_{X/k}$ was defined by Illusie \cite{Illusie}
relying on ideas of Bloch, Deligne and Lubkin. It is a projective system of
complexes of $W(k)$-modules on $X$, which is indexed by the positive integers.
If $X$ is smooth then the hypercohomology of $W_n\Omega^*_{X/k}$ admits a natural
comparison isomorphism to the crystalline cohomology of $X$ with respect to $W_n(k)$.
Langer and Zink have extended Illusie's definition of the de Rham-Witt complex
to a relative situation, where $X$ is a scheme over $\Spec(R)$ and $R$ is a $\Z_{(p)}$-algebra \cite{LZ}. If $p$ is nilpotent in $R$ and $X$ is smooth, then they construct a functorial
comparison isomorphism
$$
H^*(X,W_n\Omega^*_{X/R})\cong H^*_{crys}(X/W_n(R)).
$$
The big de Rham-Witt complex $\mathbb{W}\Omega^*_{A}$ was introduced, for any commutative ring $A$, by Hesselholt and
Madsen \cite{HM}. The original construction relied on the adjoint functor theorem and
has been replaced by a direct and explicit method due to Hesselholt \cite{H}.
Again, it is a projective system of graded sheaves $[S\mapsto \mathbb{W}_S\Omega^*_A]$, but the index set consists
of finite truncation sets; that is, finite subsets $S$ of $\mathbb{N}_{>0}$ having the
property that whenever $n\in S$, all (positive) divisors of $n$ are also contained in $S$.
For the ring of integers, $\mathbb{W}\Omega^*_{\Z}$ has been computed by Hesselholt \cite{H}. It vanishes in degree $\geq 2$, but $\mathbb{W}\Omega^1_{\Z}$ is non-zero.
Let $X$ be an $R$-scheme. In this paper we will consider the relative version
$$
S\mapsto \mathbb{W}_S\Omega^*_{X/R}
$$
of the (big) de Rham-Witt complex, which is constructed from
$\mathbb{W}\Omega^*_{X}$ by killing the ideal generated by $\mathbb{W}\Omega^1_{R}$.
The relation with the de Rham-Witt complex of Langer-Zink is given in
Proposition~\ref{proposition-comparison-Langer-Zink}: if $R$ is a $\Z_{(p)}$-algebra then
$$
\mathbb{W}_{\{1,p,\dots, p^{n-1}\}}\Omega^*_{A/R} = W_n\Omega^*_{A/R}.
$$
In the following we will use the notation $W_n=\mathbb{W}_{\{1,p,\dots, p^{n-1}\}}$, assuming that a prime $p$ has been fixed.
It is natural to consider $\mathbb{W}_S\Omega^*_{X/R}$ as a sheaf of complexes on the scheme $\mathbb{W}_S(X)$, which is obtained by gluing $\Spec(\mathbb{W}_S(A_i))$
for an affine covering $X=\bigcup_i \Spec(A_i)$. Then the components $\mathbb{W}_S\Omega^q_{X/R}$ form quasi-coherent sheaves,
and are coherent under suitable finiteness conditions.
Our purpose is to show that the de Rham-Witt cohomology
$$
H^i_{dRW}(X/\mathbb{W}_S(R)) \overset{{\rm def}}{=} H^i(\mathbb{W}_S(X),\mathbb{W}_S\Omega^*_{X/R})
$$
is as well-behaved as the usual de Rham cohomology. The main theorem of the paper
is the following.
\begin{introthm}[cf.~Theorem~\ref{thm-projective-blue}]\label{introthm-projective-blue}
Let $R$ be a smooth $\Z$-algebra. Let $X$ be a smooth and proper $R$-scheme. Suppose that the de Rham cohomology $H^*_{dR}(X/R)$ of $X$
is a flat $R$-module. Then $H^*_{dRW}(X/\mathbb{W}_S(R))$ is a finitely generated
projective $\mathbb{W}_S(R)$-module for all finite truncation sets $S$. Moreover, for an inclusion of finite truncation sets $T\subset S$, the induced map
\begin{equation}
H^*_{dRW}(X/\mathbb{W}_S(R))\otimes_{\mathbb{W}_S(R)}\mathbb{W}_T(R)\xr{\cong} H^*_{dRW}(X/\mathbb{W}_T(R))
\end{equation}
is an isomorphism.
\end{introthm}
In order to prove Theorem \ref{introthm-projective-blue}, we will construct
for all maximal ideals $\mf{m}$ of $R$
and $n,j>0$, a natural quasi-isomorphism:
$$
R\Gamma(W_n\Omega^*_{X/R})\otimes^{\mathbb{L}}_{W_n(R)}W_n(R/\mf{m}^j)\xr{{\rm q-iso}} R\Gamma(W_n\Omega^*_{X\otimes R/\mf{m}^j/(R/\mf{m}^j)}),
$$
where $p={\rm char}(R/\mf{m})$. The right hand side is $R\Gamma$ of
the de Rham-Witt complex defined by Langer and Zink. Thus it
computes the crystalline cohomology, which in our case is a free
$W_n(R/\mf{m}^j)$-module. Taking the limit $\varprojlim_j$, this will yield the
flatness of
$$
H^*_{dRW}(X/W_n(R))\otimes_{W_n(R)} W_n(\varprojlim_j R/\mf{m}^j)
$$
as $W_n(\varprojlim_j R/\mf{m}^j)$-module for all maximal ideals $\mf{m}$, which is sufficient
in order to conclude the flatness of the de Rham-Witt cohomology.
Concerning Poincar\'e duality we will show the following theorem.
\begin{introthm}[cf.~Corollary~\ref{corollary-Poincare-duality-made-simple}]
\label{introthm-2}
Let $R$ be a smooth $\Z$-algebra. Let $X\xr{} \Spec(R)$ be a smooth projective morphism such that $H^*_{dR}(X/R)$ is a flat $R$-module.
Suppose that $X$ is connected of relative dimension $d$.
If the canonical map
\begin{equation*}
H^{i}_{dR}(X/R)\xr{} \Hom_R(H^{2d-i}_{dR}(X/R),R)
\end{equation*}
is an isomorphism, then the same holds for the de Rham-Witt cohomology:
\begin{equation*}
H^{i}_{dRW}(X/\mathbb{W}_S(R))\xr{\cong} \Hom_{\mathbb{W}_S(R)}(H^{2d-i}_{dRW}(X/\mathbb{W}_S(R)),\mathbb{W}_S(R)),
\end{equation*}
for all finite truncation sets $S$.
\end{introthm}
In fact, de Rham-Witt cohomology is equipped with a richer structure than
the $\mathbb{W}(R)$-module structure, coming from the Frobenius operators
$$
\phi_n:H^*_{dRW}(X/\mathbb{W}_S(R)) \xr{} H^*_{dRW}(X/\mathbb{W}_{S/n}(R)),
$$
for all positive integers $n$, and where $S/n:=\{s\in S\mid ns\in S\}$. These
are Frobenius linear maps
satisfying $\phi_n\circ \phi_m=\phi_{nm}$.
The relationship with the Frobenius
action on the crystalline cohomology of the fibers is as follows.
Let $\mf{m}$ be a maximal ideal of $R$, set $k=R/\mf{m}$ and $p={\rm char}(k)$. If $H^*_{dR}(X/R)$ is torsion-free then
there is a natural isomorphism
$$
H^i_{dRW}(X/W_n(R))\otimes_{W_n(R)} W_n(k) \cong H^i_{crys}(X\otimes_R k/W_n(k)),
$$
and $\phi_p\otimes F_p$ corresponds via this isomorphism to the composition of $H^i_{crys}({\rm Frob})$ with the projection.
As will be made precise in Section \ref{section-values}, the projective system
$$
H^i_{dRW}(X/\mathbb{W}(R))\overset{{\rm def}}{=} [S\mapsto H^i_{dRW}(X/\mathbb{W}_S(R))],
$$
together with the Frobenius morphisms $\{\phi_n\}_{n\in \mathbb{N}_{>0}}$, defines an object
in a rigid $\otimes$-category $\mathcal{C}_R$.
Maybe the most important property of $\mathcal{C}_R$ is
the existence of a conservative, faithful $\otimes$-functor
$$
T:\mathcal{C}_R \xr{} \text{($R$-modules)}, \qquad T(H^i_{dRW}(X/\mathbb{W}(R)))=H^i_{dR}(X/R).
$$
Moreover, $\mathcal{C}_R$ has Tate objects $\mathbf{1}(m)$, $m\in \Z$, and the
first step towards Poincar\'e duality will be to prove the existence of a natural
morphism in $\mathcal{C}_R$:
$$
H^{2d}_{dRW}(X/\mathbb{W}(R))\xr{}\mathbf{1}(-d) \qquad (d=\text{relative dimension of $X/R$}).
$$
Then it will follow easily that
$$
H^{i}_{dRW}(X/\mathbb{W}(R))\xr{\cong} \underline{{\rm Hom}}(H^{2d-i}_{dRW}(X/\mathbb{W}(R)),\mathbf{1}(-d)),
$$
provided that the assumptions of Theorem \ref{introthm-2} are satisfied. Taking
the underlying $\mathbb{W}(R)$-modules one obtains Theorem \ref{introthm-2}.
\subsection*{Acknowledgements}
After this manuscript had appeared on arXiv, we received a letter from Professor James Borger, who informed us that he had already obtained
Theorem \ref{introthm-projective-blue}, for $R=\Z[N^{-1}]$, in a joint work with Mark Kisin by using similar methods.
I thank Andreas Langer and Kay R\"ulling for several useful comments on the first version of the paper.
\tableofcontents
\section{Relative de Rham-Witt complexes}
\subsection{Witt vectors}
For the definition and the basic properties of the ring of Witt vectors we refer to \cite[\textsection1]{H}. We briefly recall the notions in this section.
A subset $S\subset \mathbb{N}=\{1,2,\dots\}$ is called a \emph{truncation set} if $n\in S$ implies
that all positive divisors of $n$ are contained in $S$. For a truncation set $S$ and $n\in S$, we define
$
S/n:=\{s\in S\mid sn\in S\}.
$
Let $A$ be a commutative ring. For all truncation sets $S$ we have the ring of Witt vectors
$
\mathbb{W}_S(A)
$
at our disposal. The ghost map is the functorial ring homomorphism
$$ gh=(gh_n)_{n\in S}:\mathbb{W}_S(A)\xr{} \prod_{n\in S}A,\quad
gh_n((a_s)_{s\in S}):=\sum_{d\mid n} d\cdot a_d^{n/d}.$$
It is injective provided that $A$ is $\Z$-torsion-free.
For all positive integers $n$, there is a functorial morphism of rings
$$
F_n:\mathbb{W}_S(A)\xr{}\mathbb{W}_{S/n}(A),
$$
called the \emph{Frobenius}. Moreover there is a functorial morphism of $\mathbb{W}_S(A)$-modules, the \emph{Verschiebung},
\begin{align*}
V_n: \mathbb{W}_{S/n}(A)&\xr{} \mathbb{W}_S(A),
\end{align*}
where the source is a $\mathbb{W}_S(A)$-module via $F_n$.
For all coprime positive integers $n,m \in \mathbb{N}$ we have
$$
F_n\circ V_n=n, \quad F_n\circ V_m = V_m \circ F_n \qquad \quad ((m,n)=1).
$$
We have a multiplicative Teichm\"uller map
$$
[-]:A\xr{} \mathbb{W}_S(A), \quad a\mapsto [a]:=(a,0,0,\dots)\in \mathbb{W}_S(A),
$$
and if $S$ is finite then every element $a\in \mathbb{W}_S(A)$ can be written as
$$
a=\sum_{s\in S}V_s([a_s])
$$
with unique elements $(a_s)_{s\in S}$ in $A$.
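For instance, for $S=\{1,2\}$ one has $(a_1,a_2)=[a_1]+V_2([a_2])$: both sides have ghost components $(a_1,\,a_1^2+2a_2)$. Since the ghost map is injective over the polynomial ring $\Z[a_1,a_2]$, the identity holds there and hence, by functoriality, over any ring $A$.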
Let $T\subset A$ be a multiplicative set and suppose that $S$
is a finite truncation set. We can consider $T$ via the Teichm\"uller map
as multiplicative set in $\mathbb{W}_S(A)$. Then the natural ring homomorphism
$$
T^{-1}\mathbb{W}_S(A)\xr{} \mathbb{W}_S(T^{-1}A).
$$
is an isomorphism. If $T\subset \Z$ is a multiplicative set then
$$
\mathbb{W}_S(A)\otimes_{\Z}T^{-1}\Z \xr{} \mathbb{W}_S(T^{-1}A)
$$
is an isomorphism.
Let $S$ be a truncation set, and let $n$ be a positive integer; set $T:=S\backslash \{s\in S; n\mid s\}$. Then $T$ is a truncation set and
we have a short exact sequence of $\mathbb{W}_S(A)$-modules:
\begin{equation}\label{equation-short-exact-seq-S/n-S-T}
0\xr{} \mathbb{W}_{S/n}(A)\xr{V_n} \mathbb{W}_S(A)\xr{R^S_{T}} \mathbb{W}_T(A)\xr{} 0.
\end{equation}
\begin{example}
We have
$
\mathbb{W}_S(\Z)=\prod_{n\in S} \Z\cdot V_n(1),
$
and the product is given by $V_m(1)\cdot V_n(1)=c\cdot V_{mn/c}(1)$, where $c=(m,n)$ is the greatest common divisor \cite[Proposition~1.6]{H}.
\end{example}
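The product rule in this example is easy to check on ghost components, where multiplication is componentwise. The following short Python sketch (illustrative only, with ad hoc helper names) performs this check on a finite truncation set; since $\Z$ is torsion-free, agreement of all ghost components over the truncation set determines the Witt vector, so this is a genuine verification for the chosen $m,n$.
\begin{verbatim}
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def ghost(w, n):
    # n-th ghost component of a Witt vector given as a dict {s: a_s}
    return sum(d * w.get(d, 0)**(n // d) for d in divisors(n))

def V(n, w):
    # Verschiebung V_n: moves the Witt component a_s to the slot n*s
    return {n*s: a for s, a in w.items()}

S = range(1, 25)                       # truncation set {1,...,24}
m, n = 4, 6
c = gcd(m, n)
lhs = {k: ghost(V(m, {1: 1}), k) * ghost(V(n, {1: 1}), k) for k in S}
rhs = {k: c * ghost(V(m*n//c, {1: 1}), k) for k in S}
assert lhs == rhs   # V_m(1)V_n(1) = (m,n) V_{mn/(m,n)}(1) on ghost components
\end{verbatim}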
\subsubsection{}
The following theorem will be very useful throughout the paper.
\begin{thm}\label{thm-van-der-Kallen-Borger}(Borger-van der Kallen)
Let $S$ be a finite truncation set, and let $n$ be a positive integer.
Let $\rho:A\xr{} B$ be an \'etale ring homomorphism.
The following hold.
\begin{enumerate}
\item The induced ring homomorphism $\mathbb{W}_S(A)\xr{}\mathbb{W}_S(B)$ is \'etale.
\item The morphism
$$
\mathbb{W}_S(B)\otimes_{\mathbb{W}_S(A),F_n}\mathbb{W}_{S/n}(A)\xr{} \mathbb{W}_{S/n}(B), \quad b\otimes a\mapsto F_n(b)\cdot \mathbb{W}_{S/n}(\rho)(a),
$$
is an isomorphism.
\end{enumerate}
\end{thm}
The references for this theorem are \cite[Theorem~B]{Borger3}, \cite[Corollary~15.4]{Borger4}, and \cite[Theorem~2.4]{vdK} (cf.~\cite[Theorem~1.22]{H}).
By using Theorem~\ref{thm-van-der-Kallen-Borger}, the exact sequence (\ref{equation-short-exact-seq-S/n-S-T}), and
induction on the length of $S$, we easily obtain the following corollary.
\begin{corollary}\label{corollary-van-der-Kallen-Borger}
Let $\rho:A\xr{} B$ be an \'etale ring homomorphism. Let $S$ be a finite
truncation set.
\begin{itemize}
\item [(i)] For an inclusion of truncation sets $T\subset S$, the map
$$
\mathbb{W}_S(B)\otimes_{\mathbb{W}_S(A)}\mathbb{W}_T(A)\xr{}\mathbb{W}_T(B)
$$
is an isomorphism.
\item [(ii)] Let $n$ be a positive integer. For any $A$-algebra $C$, the natural ring homomorphism
$$
\mathbb{W}_{S/n}(C)\otimes_{F_n,\mathbb{W}_S(A)}\mathbb{W}_{S}(B) \xr{} \mathbb{W}_{S/n}(C\otimes_A B), \quad c\otimes b\mapsto c\cdot F_n(b)
$$
is an isomorphism.
\end{itemize}
\end{corollary}
\begin{notation}\label{notation-Wn}
If a prime $p$ has been fixed then we set $W_n:=\mathbb{W}_{\{1,p,p^2,\dots,p^{n-1}\}}$.
\end{notation}
\subsubsection{}\label{section-epsilon-decomposition}
Let $p$ be a prime. Let $R$ be a $\Z_{(p)}$-algebra.
Since all primes different from $p$ are invertible in $R$,
the same holds in $\mathbb{W}_S(R)$. The category of $\mathbb{W}_S(R)$-modules, for a finite truncation set $S$, factors in the following way. Set
$$
\epsilon_{1,S}:=\prod_{\substack{\text{primes $\ell\neq p$}\\ S/\ell\neq \emptyset}} (1-\frac{1}{\ell}V_{\ell}(1))\in \mathbb{W}_S(R),
$$
and $\epsilon_{n,S}:=\frac{1}{n}V_n\left(\epsilon_{1,S/n}\right)$ for all positive integers $n$ with $(n,p)=1$.
Of course, if $S/n=\emptyset$ then $\epsilon_{n,S}=0$.
In the following we will simply write $\epsilon_n$ for $\epsilon_{n,S}$.
For all positive integers $n\neq n'$ with $(n,p)=1=(n',p)$ the equalities
$$
\epsilon_{n}^2=\epsilon_{n}, \qquad \epsilon_{n}\epsilon_{n'}=0,
$$
hold. Moreover, if $(m,p)=1=(n,p)$ then
$$
F_m(\epsilon_n)=\begin{cases} \epsilon_{n/m} & \text{if $m\mid n$,} \\
0 & \text{if $m\nmid n$}.\end{cases}
$$
Since $\sum_{(n,p)=1} \epsilon_n=1$ we obtain a decomposition of rings
\begin{equation}\label{equation-epsilon-decomposition-Witt-vectors}
\mathbb{W}_S(R)=\prod_{n\geq 1, (n,p)=1} \epsilon_n \mathbb{W}_S(R).
\end{equation}
\begin{notation}\label{notation-S-p}
For a finite truncation set $S$ we denote by $S_p$ the elements in $S$
that are $p$-powers, that is
$
S_p=S\cap \{p^i\mid i\geq 0\}.
$
\end{notation}
The map $$R_{(S/n)_p}^{S/n}\circ F_n:\mathbb{W}_S(R)\xr{} \mathbb{W}_{(S/n)_p}(R)$$ induces an
isomorphism $\epsilon_n \mathbb{W}_S(R) \cong \mathbb{W}_{(S/n)_p}(R)$. Thus
$$
M\mapsto \bigoplus_{n\geq 1,(n,p)=1} \epsilon_n M
$$
defines an equivalence of categories
\begin{equation}\label{equation-epsilon-decomposition}
(\text{$\mathbb{W}_S(R)$-modules})\xr{\cong} \prod_{n\geq 1, (n,p)=1}(\text{$\mathbb{W}_{(S/n)_p}(R)$-modules}).
\end{equation}
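As an illustration, take $p=2$ and $S=\{1,2,3,4,6,12\}$. The integers $n$ prime to $p$ with $S/n\neq\emptyset$ are $n=1$ and $n=3$, and $(S/1)_p=(S/3)_p=\{1,2,4\}$; thus \eqref{equation-epsilon-decomposition-Witt-vectors} reads $\mathbb{W}_S(R)\cong W_3(R)\times W_3(R)$, and every $\mathbb{W}_S(R)$-module correspondingly decomposes into a pair of $W_3(R)$-modules.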
\subsubsection{} The following two lemmas are concerned with maximal ideals in $\mathbb{W}_S(R)$.
\begin{lemma}\label{lemma-points-of-WSR}
Let $R$ be a ring. Let $S$ be a finite truncation set. For every maximal ideal $\mathfrak{m}\subset \mathbb{W}_S(R)$
there exists a maximal ideal $\mathfrak{p}\subset R$ such that $ \mathbb{W}_S(R) \xr{} \mathbb{W}_S(R)/ \mathfrak{m}$ factors through $\mathbb{W}_S(R_{\mathfrak{p}})$.
\begin{proof}
Set $k=\mathbb{W}_S(R)/\mathfrak{m}$; we distinguish two cases:
\begin{enumerate}
\item $k$ has characteristic $0$,
\item $k$ has characteristic $p>0$.
\end{enumerate}
In the first case we can factor
$$
\mathbb{W}_S(R) \xr{} \mathbb{W}_S(R)\otimes_{\Z}\Q\xr{=} \mathbb{W}_S(R\otimes_{\Z}\Q)\xr{} k.
$$
Since $\mathbb{W}_S(R\otimes_{\Z}\Q)\xr{gh,\cong} \prod_{s\in S}R\otimes \Q$, the claim follows.
Suppose now that $k$ has characteristic $p>0$. We have a factorization
$$\mathbb{W}_S(R) \xr{} \mathbb{W}_S(R)\otimes_{\Z}\Z_{(p)}\xr{=} \mathbb{W}_S(R\otimes_{\Z}\Z_{(p)})\xr{} k.$$
By decomposing
\begin{multline*}
\mathbb{W}_S(R\otimes \Z_{(p)}) \xr{=} \prod_{n\geq 1,(n,p)=1} \epsilon_n \mathbb{W}_S(R\otimes \Z_{(p)}) \\ \xr{\cong,\prod_n R^{S/n}_{(S/n)_p}\circ F_n} \prod_{n\geq 1,(n,p)=1} \mathbb{W}_{(S/n)_p}(R\otimes \Z_{(p)}),
\end{multline*}
we can reduce to the case where $S$ consists only of $p$-powers.
Finally, $V_{p}(a)^2=pV_{p}(a^2)$, for all $a\in \mathbb{W}_{S/p}(R\otimes \Z_{(p)})$, hence $V_{p}(a)$ maps to zero in $k$. Therefore $\mathbb{W}_S(R\otimes \Z_{(p)})\xr{} k$ factors through $\mathbb{W}_{S}(R\otimes \Z_{(p)})\xr{} \mathbb{W}_{\{1\}}(R\otimes \Z_{(p)})=R\otimes \Z_{(p)}\xr{\rho} k$. In this case we can take
$\mathfrak{p}=\ker(R\xr{} R\otimes \Z_{(p)} \xr{\rho} k).$
\end{proof}
\end{lemma}
\begin{lemma}\label{lemma-WSRp-local-more-general}
Let $p$ be a prime. Let $R$ be a ring such that every maximal ideal $\mf{p}$ satisfies ${\rm char}(R/\mf{p})=p>0$.
Let $S$ be a $p$-typical finite truncation set.
Then every maximal ideal $\mf{m}$ of $\mathbb{W}_S(R)$ is of the form
$
\ker(\mathbb{W}_S(R)\xr{R^S_{\{1\}}} R\xr{} R/\mf{p}),
$
for a unique maximal ideal $\mf{p}$ of $R$.
\begin{proof}
Let $\mf{m}$ be a maximal ideal of $\mathbb{W}_S(R)$, set $k=\mathbb{W}_S(R)/\mf{m}$.
We claim that ${\rm char}(k)=p$. Suppose that ${\rm char}(k)\neq p$. From the
commutative diagram
$$
\xymatrix
{
\mathbb{W}_S(R) \ar[r]\ar[d]^{gh}
&
\mathbb{W}_S(R)\otimes \Z[p^{-1}] \ar[r] \ar[d]^{gh}_{\cong}
&
k
\\
\prod_{s\in S} R\ar[r]
&
\prod_{s\in S} R\otimes \Z[p^{-1}] \ar[ru]
&
}
$$
we conclude that there is a factorization $\mathbb{W}_S(R)\xr{gh_i}R\xr{} k$; but there is no epimorphism $R\xr{} k$ onto a field of characteristic $\neq p$.
Thus we may suppose that ${\rm char}(k)=p$. Because $V_p(a)^2=pV_p(a^2)$ for
all $a\in \mathbb{W}_{S/p}(R)$, we obtain a factorization $\mathbb{W}_S(R)\xr{R^S_{\{1\}}}R\xr{} k$, which defines $\mf{p}:=\ker(R\xr{} k)$.
\end{proof}
\end{lemma}
\subsection{Relative de Rham-Witt complex}
For every commutative ring $A$ we have the absolute de Rham-Witt complex $$ S\mapsto \mathbb{W}_S\Omega^*_{A}$$ constructed by Hesselholt \cite{H}, at our disposal.
The absolute de Rham-Witt complex is the initial object in the category of Witt complexes \cite[\textsection4]{H}. In this section we will define
the relative version, which is studied in this paper.
\begin{definition}
Let $A$ be an $R$-algebra. Let $S$ be a truncation set and $q\geq 0$. We define
$$
\mathbb{W}_S\Omega^{q}_{A/R}= \varprojlim_{\substack{T\subset S\\ \text{$T$ finite}}} \mathbb{W}_T\Omega^{q}_{A}/\left( \mathbb{W}_T\Omega^{1}_{R} \cdot \mathbb{W}_T\Omega^{q-1}_{A}\right)
$$
For $q=0$, the definition means $\mathbb{W}_S\Omega^{0}_{A/R}=\mathbb{W}_S(A)$.
\end{definition}
We get an induced anti-symmetric graded algebra structure on $\mathbb{W}_S\Omega^{*}_{A/R}$, that is, $\omega_1\cdot \omega_2=(-1)^{\deg(\omega_1)\deg(\omega_2)}\omega_2\cdot \omega_1$.
Recall that by construction of $\mathbb{W}_S\Omega^*_A$, there is, for all finite truncation sets $S$, a surjective morphism of graded $\mathbb{W}_S(A)$-algebras
\begin{equation}\label{equation-tensor-to-absolute-dRW}
\pi:T^*_{\mathbb{W}_S(A)} \Omega^1_{\mathbb{W}_S(A)} \xr{} \mathbb{W}_S\Omega^*_{A},
\end{equation}
such that $\pi(da)=da$ for all $a\in \mathbb{W}_S(A)$.
\begin{lemma}\label{lemma-morph-de-Rham-to-de-Rham-Witt}
Let $S$ be a finite truncation set.
\begin{enumerate}
\item The morphism \eqref{equation-tensor-to-absolute-dRW} induces a surjective morphism of anti-symmetric graded algebras
\begin{equation}\label{equation-dR-to-absolute-dRW}
\pi:\Omega^*_{\mathbb{W}_S(A)/\mathbb{W}_S(R)} \xr{} \mathbb{W}_S\Omega^*_{A/R},
\end{equation}
which by abuse of notation is called $\pi$ again.
\item $\mathbb{W}_S\Omega^*_{A/R}$ is a differential graded algebra and (\ref{equation-dR-to-absolute-dRW}) is compatible with the differential.
\end{enumerate}
\begin{proof}
For (1). This follows from $\pi(da\otimes da)\in d\log[-1]\cdot \mathbb{W}_S\Omega^1_A$ \cite[\textsection3]{H} and $d\log[-1]\in \mathbb{W}_S\Omega^1_R$.
For (2). The differential $d:\mathbb{W}_S\Omega^*_{A/R}\xr{} \mathbb{W}_S\Omega^*_{A/R}$ is well-defined, because $\mathbb{W}_S\Omega^*_R$ is generated by $\mathbb{W}_S\Omega^1_R$. It satisfies $d\circ d=0$, because $d\log[-1]\in \mathbb{W}_S\Omega^1_R$. The compatibility of $\pi$ with $d$ follows from $\pi(da)=da$ for all $a\in \mathbb{W}_S(A)$.
\end{proof}
\end{lemma}
\subsubsection{}
Induced from the absolute de Rham-Witt complex, we obtain for all positive integers $n$:
\begin{align}
F_n:&\mathbb{W}_S\Omega^{q}_{A/R}\xr{} \mathbb{W}_{S/n}\Omega^{q}_{A/R}\label{equation-F-relative-over-Z},\\
V_n:&\mathbb{W}_{S/n}\Omega^{q}_{A/R} \xr{} \mathbb{W}_S\Omega^{q}_{A/R}, \label{equation-V-relative-over-Z}
\end{align}
and $S\mapsto \mathbb{W}_S\Omega^{*}_{A/R}$ forms a Witt complex. Note that, computed in the absolute de Rham-Witt complex,
we have
\begin{align*}
V_n(da\cdot \omega)=V_n(F_ndV_n(a)\cdot \omega)=dV_n(a)\cdot V_n(\omega),
\end{align*}
hence $V_n(\mathbb{W}_{S/n}\Omega^1_R\cdot \mathbb{W}_{S/n}\Omega^{q-1}_A)\subset \mathbb{W}_S\Omega^1_R\cdot \mathbb{W}_S\Omega^{q-1}_A$.
The following equalities hold for the maps (\ref{equation-F-relative-over-Z}), (\ref{equation-V-relative-over-Z}):
$$
V_nF_nd=dV_nF_n, \quad dV_nd=0.
$$
\begin{proposition}\label{proposition-relative-de-Rham-Witt-complex-initial}
The Witt complex $S\mapsto \mathbb{W}_S\Omega^*_{A/R}$ is the initial object in the
category of Witt complexes over $A$ with $\mathbb{W}(R)$-linear differential.
\begin{proof}
Let $S\mapsto E_S^*$ be a Witt complex over $A$ with $\mathbb{W}(R)$-linear differential, that is, $d(a\omega)=ad(\omega)$ for $a\in \mathbb{W}_S(R)$ and
$\omega\in E_S^*$. We only need to show that the canonical morphism
$$
[S\mapsto \mathbb{W}_S\Omega^*_{A}]\xr{} [S\mapsto E_S^*]
$$
factors through $[S\mapsto \mathbb{W}_S\Omega^*_{A/R}]$. It is enough to check this for finite truncation sets. Because $\pi$ (\ref{equation-tensor-to-absolute-dRW}) is surjective, we conclude that $\mathbb{W}_S\Omega^1_{R}$ is generated by elements of the form $da$ with $a\in \mathbb{W}_S(R)$, which implies the claim.
\end{proof}
\end{proposition}
As a corollary we obtain the following statement.
\begin{corollary}\label{corollary-de-Rham-Witt-complex-localization}
Let $A$ be an $R$-algebra, let $p$ be a prime, and set $R':=R\otimes_{\Z}\Z_{(p)},A':=A\otimes_{\Z}\Z_{(p)}$.
There is a unique isomorphism
$$
[S\mapsto \mathbb{W}_S\Omega^*_{A'/R'}]\xr{} [S\mapsto \varprojlim_{\substack{T\subset S\\\text{$T$ finite}}}\mathbb{W}_T\Omega^*_{A/R}\otimes_{\Z}\Z_{(p)}]
$$
of Witt complexes over $A'$.
\end{corollary}
\begin{proposition}\label{proposition-epsilon-decomposion-deRham-Witt}
Let $R$ be a $\Z_{(p)}$-algebra and let $A,B$ be $R$-algebras. Let $S$ be a finite truncation set.
\begin{enumerate}
\item Via the equivalence from \eqref{equation-epsilon-decomposition}
we have
\begin{equation}\label{equation-epsilon-decomposion-deRham-Witt}
\mathbb{W}_S\Omega^*_{A/R}\mapsto \bigoplus_{n\geq 1, (n,p)=1} \mathbb{W}_{(S/n)_p}\Omega^*_{A/R}.
\end{equation}
\item For a morphism $f:A\xr{} B$ the induced morphism $f_S:\mathbb{W}_S\Omega^*_{A/R}
\xr{} \mathbb{W}_S\Omega^*_{B/R}$ maps to
$$
f_S\mapsto \bigoplus_{n\geq 1, (n,p)=1} f_{(S/n)_p}
$$
via the equivalence from \eqref{equation-epsilon-decomposition}.
\end{enumerate}
\begin{proof}
For (1). The claim follows from \cite[Proposition~1.2.5]{HM}. In the notation of loc.~cit.~the right hand side (\ref{equation-epsilon-decomposion-deRham-Witt})
equals $i_{!}i^*\mathbb{W}\Omega^*_{A/R}$, and $i^*,i_{!}$ preserve initial objects, since
both functors admit a right adjoint.
For (2). Follows immediately from the construction in (1).
\end{proof}
\end{proposition}
\begin{proposition}\label{proposition-comparison-Langer-Zink}
Let $R$ be a $\Z_{(p)}$-algebra, let $A$ be an $R$-algebra. Then
$$
n\mapsto \mathbb{W}_{\{1,p,\dots,p^{n-1}\}}\Omega^*_{A/R}
$$
is the relative de Rham-Witt complex $n\mapsto W_{n}\Omega^*_{A/R}$ defined by Langer and Zink \cite{LZ}.
\begin{proof}
We have a restriction functor
\begin{multline*}
i^*:\text{(Witt systems over $A$ with $\mathbb{W}(R)$-linear differential)} \xr{}\\
\text{($F$-$V$-procomplexes over the $R$-algebra $A$),}
\end{multline*}
where we use the definition of \cite[\textsection4]{H} for the source category
and the definition of \cite[Introduction]{LZ} for the target category.
The functor $i^*$ admits a right adjoint functor $i_!$ defined in \cite[\textsection1.2]{HM}. Therefore $i^*([S\mapsto \mathbb{W}_S\Omega^*_{A/R}])$ is the
initial object in the category of $F$-$V$-procomplexes as is the relative
de Rham-Witt complex constructed by Langer and Zink \cite{LZ}.
\end{proof}
\end{proposition}
\subsubsection{}
Let $S$ be a finite truncation set.
Let $A\xr{} B$ be an \'etale morphism of $R$-algebras. For all $q\geq 0$ the induced morphism of $\mathbb{W}_S(B)$-modules
\begin{equation}\label{equation-relative-big-etale-base-change}
\mathbb{W}_S(B)\otimes_{\mathbb{W}_S(A)} \mathbb{W}_S\Omega^q_{A/R}\xr{\cong} \mathbb{W}_S\Omega^q_{B/R}
\end{equation}
is an isomorphism. Indeed, this follows immediately from the analogous fact for the absolute de Rham-Witt complex \cite[Theorem~C]{H}.
\begin{lemma}
Let $R'\xr{} R$ be an \'etale ring homomorphism. Let $A$ be an $R$-algebra. Then, for all truncation sets $S$,
$$
\mathbb{W}_S\Omega^*_{A/R'}\xr{} \mathbb{W}_S\Omega^*_{A/R}
$$
is an isomorphism.
\begin{proof}
We may assume that $S$ is finite. The assertion follows from
\begin{align*}
\mathbb{W}_S\Omega^1_{R'}\otimes_{\mathbb{W}_S(R')}\mathbb{W}_S(A) &\xr{=}\mathbb{W}_S\Omega^1_{R'}\otimes_{\mathbb{W}_S(R')} \mathbb{W}_S(R) \otimes_{\mathbb{W}_S(R)} \mathbb{W}_S(A)\\
&\xr{\cong} \mathbb{W}_S\Omega^1_R\otimes_{\mathbb{W}_S(R)} \mathbb{W}_S(A).
\end{align*}
\end{proof}
\end{lemma}
\subsubsection{}
For every truncation set $S$ we have a functor
$$
\mathbb{W}_S:\text{(Schemes)}\xr{} \text{(Schemes)}, \quad X\mapsto \mathbb{W}_S(X).
$$
This functor has been studied by Borger \cite{Borger4}; our notation differs slightly from his, where it is denoted $W^*$.
For an affine scheme $U=\Spec(A)$, we have $\mathbb{W}_S(U)=\Spec(\mathbb{W}_S(A))$.
If $X$ is separated and $(U_i)_{i\in I}$ is an affine covering of $X$, then $\mathbb{W}_S(X)$ is obtained by gluing $\mathbb{W}_S(U_i)$
along $\mathbb{W}_S(U_i\times_X U_j)$. In particular, $(\mathbb{W}_S(U_i))_{i\in I}$ is an affine covering of $\mathbb{W}_S(X)$.
The functor is extended to non-separated schemes in the usual way.
If $T\subset S$ is an inclusion of finite truncation sets then
$$\imath_{T,S}:\mathbb{W}_T(X)\xr{} \mathbb{W}_S(X)$$
is a closed immersion and functorial in $X$.
\subsubsection{}
If $X$ is an $R$-scheme then we can glue in the same way a quasi-coherent sheaf $\mathbb{W}_S\Omega^q_{X/R}$. Indeed, let us suppose that $X$ is separated. Let
$(\Spec(A_i))_{i\in I}$ be an affine covering and set $\Spec(A_{ij})=\Spec(A_i)\times_{X}\Spec(A_j)$. For every $i$, the $\mathbb{W}_S(A_i)$-module
$\mathbb{W}_S\Omega^q_{A_i/R}$ defines a quasi-coherent sheaf $\mathbb{W}_S\Omega^q_{\Spec(A_i)/R}$ on $\mathbb{W}_S(\Spec(A_i))$. Since
$$
\Gamma(\mathbb{W}_S(\Spec(A_{ij})),\mathbb{W}_S\Omega^q_{\Spec(A_i)/R}) = \mathbb{W}_S\Omega^q_{A_i/R}\otimes_{\mathbb{W}_S(A_i)}\mathbb{W}_S(A_{ij}) = \mathbb{W}_S\Omega^q_{A_{ij}/R},
$$
by using (\ref{equation-relative-big-etale-base-change}), we can glue to a quasi-coherent sheaf $\mathbb{W}_S\Omega^q_{X/R}$ on $\mathbb{W}_S(X)$.
Independence of the covering and $\jmath^*\mathbb{W}_S\Omega^q_{X/R}=\mathbb{W}_S\Omega^q_{U/R}$, for every open immersion $\jmath:U\xr{} X$, can be checked.
\subsubsection{}
\label{subsubsection-coherent}
If $\mathbb{W}_S(X)\xr{} \mathbb{W}_S(\Spec(R))$ is of finite type and $\mathbb{W}_S(X)$ is noetherian, then $\mathbb{W}_S\Omega^q_{X/R}$ is coherent. Indeed,
we have a surjective morphism $\Omega^q_{\mathbb{W}_S(X)/\mathbb{W}_S(R)}\xr{} \mathbb{W}_S\Omega^q_{X/R}$ and the assumptions imply that $\Omega^q_{\mathbb{W}_S(X)/\mathbb{W}_S(R)}$ is coherent.
\subsubsection{}
If $f:X\xr{} Y$ is a morphism of $R$-schemes then we get $$\mathbb{W}_S\Omega^q_{Y/R}\xr{} \mathbb{W}_S(f)_*\mathbb{W}_S\Omega^q_{X/R}.$$
For an inclusion of truncation sets $T\subset S$, we obtain
$$
\mathbb{W}_S \Omega^q_{X/R}\xr{} \imath_{T,S*}\mathbb{W}_T\Omega^q_{X/R}.
$$
The following diagram is commutative:
$$
\xymatrix
{
\mathbb{W}_S\Omega^q_{Y/R} \ar[rr]\ar[d]
&
&
\mathbb{W}_S(f)_*\mathbb{W}_S\Omega^q_{X/R} \ar[d]
\\
\imath_{T,S*}\mathbb{W}_T\Omega^q_{Y/R} \ar[r]
&
\imath_{T,S*}\mathbb{W}_T(f)_*\mathbb{W}_T\Omega^q_{X/R} \ar[r]^{=}
&
\mathbb{W}_S(f)_*\imath_{T,S*}\mathbb{W}_T\Omega^q_{X/R}.
}
$$
The differential, the Frobenius and the Verschiebung operations are defined in the evident way:
\begin{align*}
&d:\mathbb{W}_{S}\Omega^q_{X/R} \xr{} \mathbb{W}_{S}\Omega^{q+1}_{X/R},\\
&F_n: \mathbb{W}_{S}\Omega^q_{X/R} \xr{} \imath_{S/n,S*}\mathbb{W}_{S/n}\Omega^q_{X/R}, \\
&V_n: \imath_{S/n,S*}\mathbb{W}_{S/n}\Omega^q_{X/R} \xr{} \mathbb{W}_{S}\Omega^q_{X/R}.
\end{align*}
\begin{definition}\label{definition-dRW-coh-new}
Let $X$ be an $R$-scheme, let $S$ be a finite truncation set. We define
$$
H^i_{dRW}(X/\mathbb{W}_S(R)):=H^i(\mathbb{W}_S(X),\mathbb{W}_S\Omega^*_{X/R}),
$$
where the right hand side is the hypercohomology for the Zariski topology.
\end{definition}
\subsubsection{}\label{section-phi-beta-de-Rham-Witt}
Note that $F_n$ and $V_n$ are not morphisms of complexes.
For all positive integers $n$ and all finite truncation sets we set
\begin{equation}\label{equation-definition-phin}
\phi_n=n^qF_n:\mathbb{W}_S\Omega^q_{X/R}\xr{} \imath_{S/n,S*}\mathbb{W}_{S/n}\Omega^q_{X/R},
\end{equation}
to get a morphism of complexes
$$
\mathbb{W}_S\Omega^*_{X/R}\xr{\phi_n} \imath_{S/n,S*}\mathbb{W}_{S/n}\Omega^*_{X/R}.
$$
Suppose that $X$ is smooth over $R$ of relative dimension $d$. Then we set
$$
\beta_n=n^{d-q}V_n:\imath_{S/n,S*}\mathbb{W}_{S/n}\Omega^q_{X/R}\xr{} \mathbb{W}_{S}\Omega^q_{X/R}
$$
(we will prove $\mathbb{W}_S\Omega^q_{X/R}=0$ if $q>d$ in Proposition \ref{proposition-WSOmega-torsionfree}(ii)). We obtain a morphism of complexes
$$
\beta_n:\imath_{S/n,S*}\mathbb{W}_{S/n}\Omega^*_{X/R}\xr{} \mathbb{W}_{S}\Omega^*_{X/R},
$$
satisfying the equalities:
\begin{align*}
&\phi_n\circ \beta_n=n^{d+1}, \\
&\beta_n(\lambda \cdot \phi_n(x))=n^{d}V_n(\lambda)\cdot x \quad \text{for all $x\in \mathbb{W}_S\Omega^*_{X/R}$ and $\lambda\in \imath_{S/n,S*}\mathbb{W}_{S/n}\Omega^*_{X/R}$.}
\end{align*}
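Explicitly, on $q$-forms the first relation follows from the Witt-complex identity $F_nV_n=n$: $\phi_n\circ \beta_n=n^{q}F_n\circ n^{d-q}V_n=n^{d}\,F_nV_n=n^{d+1}$.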
In Section \ref{section-values} we will study the $\{\phi_n\}_{n\geq 1}$ operations induced on the de Rham-Witt cohomology.
\subsubsection{}\label{subsubsection-can-be-computed-by-Cech-covering}
Note that the Hodge to de Rham spectral sequence and the quasi-coherence of $\mathbb{W}_S\Omega^q_{X/R}$ imply the following fact.
Assume $X$ is separated and $\mathbb{W}_S(X)$ is a noetherian scheme.
Let $(U_i)$ be an open affine covering of $X$; we denote by $\mathfrak{U}=(\mathbb{W}_S(U_i))$ the induced covering of $\mathbb{W}_S(X)$. Then we can compute
$H^i_{dRW}(X/\mathbb{W}_S(R))$ by using the \v{C}ech complex for $\mathfrak{U}$:
$$
H^i(C(\mathfrak{U},\mathbb{W}_S\Omega^*_{X/R}))\xr{\cong} H^i_{dRW}(X/\mathbb{W}_S(R)).
$$
In the derived category we have a quasi-isomorphism:
$$
C(\mathfrak{U},\mathbb{W}_S\Omega^*_{X/R})\xr{\text{q-iso}} R\Gamma(\mathbb{W}_S\Omega^*_{X/R}).
$$
\begin{proposition}\label{proposition-WSOmega-torsionfree}
Let $R$ be a flat $\Z$-algebra. Let $X$ be a smooth $R$-scheme. Let $S$ be a finite truncation set.
\begin{itemize}
\item[(i)] For all non-negative integers $q$, $\mathbb{W}_S\Omega^q_{X/R}$ is $\Z$-torsion-free, that is,
multiplication by a non-zero integer is injective.
\item[(ii)] Let $d$ be the relative dimension of $X/R$. Then $\mathbb{W}_S\Omega^q_{X/R}=0$ for all $q>d$.
\end{itemize}
\begin{proof}
For (i) it suffices to prove that $\mathbb{W}_S\Omega^q_{X/R}\otimes \Z_{(p)}=\mathbb{W}_S\Omega^q_{X'/R'}$ is $p$-torsion-free for all primes $p$,
where $X'=X\otimes_{\Z} \Z_{(p)}$ and $R'=R\otimes_{\Z} \Z_{(p)}$. For (ii) it suffices to show that $\mathbb{W}_S\Omega^q_{X'/R'}$ vanishes.
Via the decomposition (\ref{equation-epsilon-decomposion-deRham-Witt}) we may suppose that $S=\{1,p,\dots,p^{n-1}\}$.
Certainly we may assume that $X'=\Spec(B)$ and that there exists an \'etale ring homomorphism $R'[x_1,\dots,x_d]\xr{} B$. By using
\eqref{equation-relative-big-etale-base-change} we are reduced to the case
$B=R'[x_1,\dots,x_d]$. The claim follows in this case from the explicit description of the de Rham-Witt complex
in \cite[\textsection2]{LZ}, more precisely \cite[Proposition~2.17]{LZ}.
\end{proof}
\end{proposition}
\subsection{Finiteness}
\begin{proposition}\label{proposition-finiteness}
Let $R$ be a flat and finitely generated $\Z$-algebra. Let $X$ be a flat and proper scheme of relative dimension $d$ over $R$.
Let $S$ be a finite truncation set. The following hold.
\begin{itemize}
\item [(i)] For all non-negative integers $i,j$ the cohomology group
$
H^i(\mathbb{W}_S(X),\mathbb{W}_S\Omega^j_{X/R})
$
is a finitely generated $\mathbb{W}_S(R)$-module.
\item [(ii)] For all $i> d$ and $j\geq 0$, we have $H^i(\mathbb{W}_S(X),\mathbb{W}_S\Omega^j_{X/R})=0$.
\item [(iii)] For all $i$, the de Rham-Witt cohomology $H^i_{dRW}(X/\mathbb{W}_S(R))$ (Definition ~\ref{definition-dRW-coh-new}) is a finitely generated
$\mathbb{W}_S(R)$-module.
\item [(iv)] Suppose $X/R$ is smooth. Then $H^i_{dRW}(X/\mathbb{W}_S(R))=0$ for all $i>2d$.
\end{itemize}
\begin{proof}
For (i). We denote by $f:X\xr{} \Spec(R)$ the structure morphism. The scheme $\mathbb{W}_S(X)$
is noetherian, because it is of finite type over $\Spec(\Z)$.
By \cite[Proposition~16.13]{Borger4} the induced morphism $\mathbb{W}_S(f):\mathbb{W}_S(X)\xr{}\mathbb{W}_S(R)$ is proper.
Moreover, $\mathbb{W}_S\Omega^j_{X/R}$ defines a coherent sheaf on $\mathbb{W}_S(X)$ (see \ref{subsubsection-coherent}).
For (ii). The fibers of $\mathbb{W}_S(f)$ at closed points of $\Spec(\mathbb{W}_S(R))$ have
dimension $d$. In fact, as topological spaces they are disjoint unions
of the corresponding fibers of $f$. This implies the claim.
For (iii). Follows from (i) via the Hodge to de Rham spectral sequence.
For (iv). Again this follows from the Hodge to de Rham spectral sequence, statement (ii), and Proposition \ref{proposition-WSOmega-torsionfree}(ii).
\end{proof}
\end{proposition}
\section{De Rham-Witt cohomology}
\subsection{Reduction modulo an ideal}
\subsubsection{}
Recall that $W_n=\mathbb{W}_{\{1,p,\dots,p^{n-1}\}}$ whenever a prime $p$ has been fixed (Notation~\ref{notation-Wn}).
The goal of this section is to prove the following theorem.
\begin{thm}\label{thm-comparison-isom}
Let $R$ be a flat $\Z_{(p)}$-algebra, let $B$ be a smooth $R$-algebra, and let $n$ be a positive integer.
Let $I\subset R$ be an ideal such that $p^m\in I$ for some $m$.
Choose a $W_n(R)$-free resolution
$$
T:=\dots \xr{} T^{-2}\xr{} T^{-1}\xr{} T^{0}
$$
of $W_n(R/I)$.
There exists a functorial quasi-isomorphism of complexes of $W_n(R)$-modules
\begin{equation}\label{equation-comp-isomorphism-with-T}
W_n\Omega^*_{B/R}\otimes_{W_n(R)} T\xr{} W_n\Omega^*_{(B/IB)/(R/I)}.
\end{equation}
In particular, we obtain an isomorphism
\begin{equation}\label{equation-comp-isomorphism}
W_n\Omega^*_{B/R}\otimes^{\mathbb{L}}_{W_n(R)} W_n(R/I)\xr{\cong} W_n\Omega^*_{(B/IB)/(R/I)},
\end{equation}
in the derived category of $W_n(R)$-modules.
\end{thm}
More precisely, functoriality means that for any morphism $A\xr{} B$ of smooth $R$-algebras, the diagram
$$
\xymatrix
{
W_n\Omega^*_{B/R}\otimes_{W_n(R)} T \ar[r]
&
W_n\Omega^*_{(B/IB)/(R/I)}
\\
W_n\Omega^*_{A/R}\otimes_{W_n(R)} T \ar[r]\ar[u]
&
W_n\Omega^*_{(A/IA)/(R/I)} \ar[u]
}
$$
is commutative.
\begin{remark}
The proof of Theorem \ref{thm-comparison-isom} does not go
beyond the methods of \cite{LZ}, so the theorem may well be known; however, we could not find a reference.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm-comparison-isom}]
We define the morphism (\ref{equation-comp-isomorphism-with-T}) by
$$
W_n\Omega^*_{B/R}\otimes_{W_n(R)} T \xr{} W_n\Omega^*_{B/R}\otimes_{W_n(R)} W_n(R/I) \xr{} W_n\Omega^*_{(B/IB)/(R/I)},
$$
so that the functoriality of (\ref{equation-comp-isomorphism-with-T}) is obvious.
\emph{Step 1:} The first step is the reduction to $B=R[x_1,\dots,x_d]$.
We can use the \v{C}ech complex (see \ref{subsubsection-can-be-computed-by-Cech-covering})
in order to reduce to the case where there exists an \'etale morphism $A=R[x_1,\dots,x_d]\xr{} B$.
Note that $p^{nm}=0$ in $W_n(R/I)$.
Since $W_n\Omega^*_{B/R}$ is $p$-torsion-free (Proposition ~\ref{proposition-WSOmega-torsionfree}), we see that
\begin{equation}\label{equation-mod-pnm}
W_n\Omega^*_{B/R}\otimes^{\mathbb{L}}_{W_n(R)} W_n(R/I)\xr{}
W_n\Omega^*_{B/R}/p^{nm}\otimes^{\mathbb{L}}_{W_n(R)/p^{nm}} W_n(R/I)
\end{equation}
is a quasi-isomorphism. Clearly, morphism (\ref{equation-comp-isomorphism})
factors through (\ref{equation-mod-pnm}). It will be easier to work modulo $p^{nm}$,
because $dF^{nm}_{p}=p^{nm}F^{nm}_pd$ vanishes modulo $p^{nm}$.
Set $c=nm+n$; we claim that
\begin{align}\label{align-from-A-to-B-modulo-pnm}
\left(W_{c}(B)/p^{nm}\otimes_{W_{c}(A)/p^{nm}} W_n\Omega^*_{A/R}/p^{nm},
id\otimes d\right) &\xr{} (W_n\Omega^*_{B/R}/p^{nm},d) \\
b\otimes \omega &\mapsto F_p^{nm}(b)\cdot \omega, \nonumber
\end{align}
is an isomorphism of complexes. Note that $W_c(A)$ acts on $W_n\Omega^*_{A/R}/p^{nm}$ via $W_c(A)\xr{F^{nm}_p} W_n(A)$, and therefore (\ref{align-from-A-to-B-modulo-pnm})
is a morphism of complexes.
Theorem \ref{thm-van-der-Kallen-Borger} implies
that
$$
W_{c}(B)\otimes_{W_c(A)} M\xr{\cong} W_{n}(B)\otimes_{W_n(A)} M, \quad b\otimes m\mapsto F^{nm}_p(b)\otimes m,
$$
is an isomorphism for all $W_n(A)$-modules $M$. Thus the claim follows from (\ref{equation-relative-big-etale-base-change}).
On the other hand, Corollary \ref{corollary-van-der-Kallen-Borger} shows that for every $W_n(A/I)$-module $M$ the map
$$
W_{c}(B)/p^{nm}\otimes_{W_{c}(A)/p^{nm}}M \xr{} W_n(B/I)\otimes_{W_n(A/I)} M, \quad b\otimes m\mapsto F_{p}^{nm}(b)\otimes m,
$$
is an isomorphism. This yields an isomorphism of complexes
$$
\left(W_{c}(B)/p^{nm}\otimes_{W_{c}(A)/p^{nm}} W_n\Omega^*_{(A/IA)/(R/I)},
id\otimes d\right) \xr{} (W_n\Omega^*_{(B/IB)/(R/I)},d).
$$
Finally, since $W_{c}(B)/p^{nm}$ is \'etale, hence flat, over $W_{c}(A)/p^{nm}$, we are reduced to proving that
$$
W_n\Omega^*_{A/R}/p^{nm}\otimes^{\mathbb{L}}_{W_n(R)/p^{nm}} W_n(R/I)
\xr{} W_n\Omega^*_{(A/IA)/(R/I)}
$$
is a quasi-isomorphism.
\emph{Step 2:} Proof of the case $B=R[x_1,\dots,x_d]$.
In this case it follows from \cite[\textsection2]{LZ} and the proof of \cite[Theorem~3.5]{LZ} that
$$
\Omega^{*}_{W_n(R)[x_1,\dots,x_d]/W_n(R)}\xr{} \Omega^{*}_{W_n(B)/W_n(R)} \xr{\pi} W_n\Omega^*_{B/R}
$$
is a quasi-isomorphism, where the first morphism is induced by $x_i\mapsto [x_i]$. The same statement holds for $R/I$, hence
the assertion follows from the quasi-isomorphism
$$
\Omega^{*}_{W_n(R)[x_1,\dots,x_d]/W_n(R)} \otimes^{\mathbb{L}}_{W_n(R)} W_n(R/I)\xr{} \Omega^{*}_{W_n(R/I)[x_1,\dots,x_d]/W_n(R/I)}.
$$
\end{proof}
\begin{corollary}\label{corollary-comparison-thm-RGamma}
Let $R$ be a flat and finitely generated $\Z$-algebra, and let $\mf{m}\subset R$ be a maximal ideal; set $p={\rm char}(R/\mf{m})$.
Let $X$ be a smooth and proper $R$-scheme, let $n,j$ be positive integers.
There is a natural quasi-isomorphism of complexes of $W_n(R)$-modules:
$$
R\Gamma(W_n\Omega^*_{X/R}) \otimes^{\mathbb{L}}_{W_n(R)} W_n(R/\mf{m}^j) \xr{} R\Gamma(W_n\Omega^*_{X\otimes_{R} R/\mf{m}^j / (R/\mf{m}^j)}).
$$
\begin{proof}
The claim follows from Theorem \ref{thm-comparison-isom} by using \v{C}ech complexes (see \ref{subsubsection-can-be-computed-by-Cech-covering}).
\end{proof}
\end{corollary}
\subsection{Flatness}
\begin{thm}\label{thm-projective-blue}
Let $R$ be a smooth $\Z$-algebra. Let $X$ be a smooth and proper $R$-scheme. Suppose that the de Rham cohomology $H^*_{dR}(X/R)$ of $X$
is a flat $R$-module. Then $H^*_{dRW}(X/\mathbb{W}_S(R))$ is a finitely generated
projective $\mathbb{W}_S(R)$-module for all finite truncation sets $S$. Moreover, for an inclusion of finite truncation sets $T\subset S$, the induced map
\begin{equation}\label{equation-from-S-to-T}
H^*_{dRW}(X/\mathbb{W}_S(R))\otimes_{\mathbb{W}_S(R)}\mathbb{W}_T(R)\xr{\cong} H^*_{dRW}(X/\mathbb{W}_T(R))
\end{equation}
is an isomorphism.
\end{thm}
Since $\mathbb{W}_S(R)$ is a noetherian ring and we know that $H^*_{dRW}(X/\mathbb{W}_S(R))$
is a finitely generated $\mathbb{W}_S(R)$-module (Proposition \ref{proposition-finiteness}), it remains to show that it is flat.
This is a local property and can be checked after localization at maximal ideals of $\mathbb{W}_S(R)$.
Our proof relies on Theorem \ref{thm-comparison-isom} or, more precisely,
Corollary \ref{corollary-comparison-thm-RGamma}.
\begin{lemma}\label{lemma-faithfully-flat-blue}
Let $R$ be a finitely generated $\Z$-algebra. Let $\mathfrak{m}$ be a maximal ideal of $R$, let $n$ be a positive
integer, and set $p={\rm char}(R/\mf{m})$.
Then
$W_n(R_{\mathfrak{m}})\xr{} W_n(\varprojlim_{i} R/\mathfrak{m}^i)$ is faithfully flat.
\begin{proof}
By Lemma \ref{lemma-WSRp-local-more-general}, both rings are local. Thus we only need to prove flatness.
We note that $W_n(R)$ is a noetherian ring, because $R$ is a finitely generated $\Z$-algebra. Thus
$W_n(R_{\mathfrak{m}})$, being a localization of $W_n(R)$, is a noetherian ring.
Obviously, we have the equalities
$$
W_n(\varprojlim_i R/\mf{m}^i)=\varprojlim_i W_n(R/\mf{m}^i) = \varprojlim_i W_n(R_{\mf{m}})/W_n(\mf{m}^iR_{\mf{m}}).
$$
Moreover, it is easy to check that $(W_n(\mf{m}^iR_{\mf{m}}))_i$ and $(W_n(\mf{m}R_{\mf{m}})^i)_i$ induce the same topology on $W_n(R_{\mf{m}})$.
Therefore
\begin{equation}\label{equation-rewrite-as-adic-completion}
\varprojlim_i W_n(R_{\mf{m}})/W_n(\mf{m}R_{\mf{m}})^i \xr{\cong} \varprojlim_i W_n(R_{\mf{m}})/W_n(\mf{m}^iR_{\mf{m}}),
\end{equation}
which exhibits $W_n(\varprojlim_i R/\mf{m}^i)$ as the $W_n(\mf{m}R_{\mf{m}})$-adic completion of the noetherian ring $W_n(R_{\mf{m}})$ and hence implies flatness.
\end{proof}
\end{lemma}
\begin{lemma}\label{lemma-going-to-the-projective-limit-blue}
Let $R$ be a finitely generated $\Z$-algebra. Let $\mathfrak{m}$ be a maximal ideal of $R$, let $n$ be a positive
integer, and set $p={\rm char}(R/\mf{m})$.
Let $C$ be a bounded complex of $W_n(R_{\mathfrak{m}})$-modules such that
$H^i(C)$ is a finitely generated $W_n(R_{\mathfrak{m}})$-module for all $i$.
Then, for all $i$,
$$
H^i(C)\otimes_{W_n(R_{\mathfrak{m}})} W_n(\varprojlim_{j} R/\mathfrak{m}^j) \cong \varprojlim_j
H^i\left(C\otimes^{\mathbb{L}}_{W_n(R_{\mathfrak{m}})} W_n(R/\mathfrak{m}^j)\right).
$$
\begin{proof}
Set $\hat{R}:=\varprojlim_{j} R/\mathfrak{m}^j$.
The map is induced by $C\xr{} C\otimes^{\mathbb{L}}_{W_n(R_{\mathfrak{m}})} W_n(R/\mathfrak{m}^j)$
and the $ W_n(\hat{R})$-module structure on the right hand side.
As a first step we will prove that $H^i\left(C\otimes^{\mathbb{L}}_{W_n(R_{\mathfrak{m}})} W_n(R/\mathfrak{m}^j)\right)$ is a finite group.
Clearly, we may assume that $C=C_0$ is concentrated in degree $0$. Since $C_0$
is finitely generated we conclude that ${\rm Tor}_i^{W_n(R_{\mathfrak{m}})}(C_0,W_n(R/\mathfrak{m}^j))$ is a finitely
generated $W_n(R/\mathfrak{m}^j)$-module for all $i$. The ring $W_n(R/\mathfrak{m}^j)$ contains only finitely
many elements, hence $$H^{-i}(C\otimes^{\mathbb{L}}_{W_n(R_{\mathfrak{m}})} W_n(R/\mathfrak{m}^j))={\rm Tor}_i(C_0,W_n(R/\mathfrak{m}^j))$$ is finite.
By using Lemma \ref{lemma-faithfully-flat-blue} and the first step (all $R^1\varprojlim$ vanish) we can reduce the assertion to the case of a complex $C=C_0$ that
is concentrated in degree zero (hence $C_0$ is finitely generated). In this case we need to show:
\begin{itemize}
\item [(a)] $C_0\otimes_{W_n(R_{\mathfrak{m}})}W_n(\hat{R})\xr{=}\varprojlim_{j}(C_0\otimes_{W_n(R_{\mathfrak{m}})}W_n(R/\mathfrak{m}^j))$,
\item [(b)] $\varprojlim_{j}{\rm Tor}_i(C_0,W_n(R/\mathfrak{m}^j))=0$ for all $i>0$.
\end{itemize}
Claim (a) follows from (\ref{equation-rewrite-as-adic-completion}). Claim (b) follows from (a) and the flatness of $W_n(R_{\mathfrak{m}}) \xr{} W_n(\hat{R})$.
\end{proof}
\end{lemma}
\begin{proposition}\label{proposition-de-Rham-Witt-cohomology-limit-blue}
Assumptions as in Corollary \ref{corollary-comparison-thm-RGamma}.
Set $X_j:=X\otimes_{R}R/\mathfrak{m}^j$, $R_j:=R/\mathfrak{m}^j$, $\hat{R}=\varprojlim_j R_j$.
\begin{itemize}
\item [(i)] For all $i$ and $n$, we have a functorial isomorphism
\begin{equation}\label{equation-de-Rham-Witt-cohomology-limit-blue}
H^i_{dRW}(X/W_n(R))\otimes_{W_n(R)} W_n(\hat{R})\xr{\cong} \varprojlim_j H^i(X_j,W_n\Omega^*_{X_j/R_j}).
\end{equation}
\item [(ii)] Suppose furthermore that the following conditions are satisfied:
\begin{enumerate}
\item There exists a lifting $\phi:\hat{R}\xr{} \hat{R}$ of the absolute Frobenius on $R/\mf{m}$; let $\rho:\hat{R}\xr{} W_n(\hat{R})$ be the induced ring homomorphism. By abuse of notation we will denote the restriction of $\rho$ to $R$ by $\rho$ again.
\item The de Rham cohomology $H^*_{dR}(X/R)$ is a locally free $R$-module.
\end{enumerate}
Then there is an isomorphism
$$
H^i(X_j,W_n\Omega^*_{X_j/R_j}) \cong H^i_{dR}(X/R)\otimes_{R,\rho} W_n(R_j)
$$
which is natural in the following sense. For all $l>j$ we have a commutative diagram
$$
\xymatrix{
H^i(X_l,W_n\Omega^*_{X_l/R_l})\ar[r]^-{\cong}\ar[d]
&
H^i_{dR}(X/R)\otimes_{R,\rho} W_n(R_l)\ar[d]^{id\otimes W_n(R_l\xr{} R_j)}
\\
H^i(X_j,W_n\Omega^*_{X_j/R_j}) \ar[r]^-{\cong}
&
H^i_{dR}(X/R)\otimes_{R,\rho} W_n(R_j).
}
$$
For a morphism of $R$-schemes $f:X\xr{} Y$, where $Y/R$ satisfies the same assumptions as $X$, the following diagram is commutative:
$$
\xymatrix{
H^i(Y_j,W_n\Omega^*_{Y_j/R_j}) \ar[r]^-{\cong} \ar[d]^{f^*}
&
H^i_{dR}(Y/R)\otimes_{R,\rho} W_n(R_j) \ar[d]^{f^*\otimes id}
\\
H^i(X_j,W_n\Omega^*_{X_j/R_j}) \ar[r]^-{\cong}
&
H^i_{dR}(X/R)\otimes_{R,\rho} W_n(R_j).
}
$$
\end{itemize}
\begin{proof}
For (i). Set $C=R\Gamma(W_n\Omega^*_{X/R})\otimes_{W_n(R)} W_n(R_{\mf{m}})$.
In view of Proposition \ref{proposition-finiteness}, the assumptions for Lemma \ref{lemma-going-to-the-projective-limit-blue}
are satisfied. Applying the lemma and using Corollary \ref{corollary-comparison-thm-RGamma} implies the claim.
For (ii). Consider the following cartesian squares
$$
\xymatrix
{
X_j \ar[r]\ar[d]
&
X_{n,j} \ar[r]\ar[d]
&
X\otimes_R \hat{R}\ar[d]
\\
\Spec(R_j) \ar[r]^-{{\rm gh}_1}
&
\Spec(W_n(R_j)) \ar[r]^-{\rho}
&
\Spec(\hat{R}),
}
$$
where $X_{n,j}$ is by definition the fibre product. Note that $\hat{R}\xr{\rho} W_n(\hat{R})\xr{{\rm gh}_1}\hat{R}$ is the identity, which implies
that the left hand square is cartesian.
By the comparison theorem \cite[Theorem~3.1]{LZ} we have a functorial isomorphism
$$
H^i(X_j,W_n\Omega^*_{X_j/R_j}) \cong H^i_{crys}(X_j/W_n(R_j)).
$$
By the comparison isomorphism
of crystalline cohomology with de Rham cohomology due to Berthelot-Ogus we get
\begin{align*}
H^i_{crys}(X_j/W_n(R_j)) & \cong H^i_{dR}(X_{n,j}/W_n(R_j)) \\
& \cong H^i_{dR}(X/R)\otimes_{R,\rho} W_n(R_j).
\end{align*}
For the last isomorphism we have used condition (2) on the de Rham cohomology of $X$.
\end{proof}
\end{proposition}
\begin{proof}[Proof of Theorem \ref{thm-projective-blue}]
Without loss of generality we may assume that $R$ is integral.
It suffices to show the flatness of $H^i_{dRW}(X/\mathbb{W}_S(R))$ when considered as
a $\mathbb{W}_S(R)$-module. This can be checked after localizing at maximal ideals.
By using Lemma \ref{lemma-points-of-WSR} it suffices to prove that $H^i_{dRW}(X/\mathbb{W}_S(R))\otimes_{\mathbb{W}_S(R)} \mathbb{W}_S(R_{\mf{m}})$ is a flat
$\mathbb{W}_S(R_{\mf{m}})$-module for every maximal ideal $\mf{m}\subset R$. Similarly, it is sufficient to prove (\ref{equation-from-S-to-T}) after
tensoring with $\mathbb{W}_T(R_{\mf{m}})$.
Let $\mf{m}\subset R$ be a maximal ideal, and set $p={\rm char}(R/\mf{m})$.
By using the decomposition of $\mathbb{W}_S\Omega^*_{X/R}\otimes \Z_{(p)}$ from Proposition \ref{proposition-epsilon-decomposion-deRham-Witt} together with \eqref{equation-epsilon-decomposition} we may assume that $S$ is $p$-typical, say $S=\{1,p,\dots,p^{n-1}\}$, and hence $T=\{1,p,\dots,p^{m-1}\}$.
Since $R$ is a smooth $\Z$-algebra, there is a lifting $\phi:\hat{R}\xr{}\hat{R}$ of the absolute
Frobenius of $R/\mf{m}$, where $\hat{R}=\varprojlim_j R/\mf{m}^j$. Therefore Proposition \ref{proposition-de-Rham-Witt-cohomology-limit-blue} implies
\begin{multline*}
H^i_{dRW}(X/W_n(R))\otimes_{W_n(R)} W_n(\hat{R})\xr{\cong} \varprojlim_j H^i(X_j,W_n\Omega^*_{X_j/R_j}) \\
\xr{\cong} H^i_{dR}(X/R)\otimes_{R,\rho} W_n(\hat{R}),
\end{multline*}
and we can prove the flatness by using Lemma \ref{lemma-faithfully-flat-blue}.
Tensoring (\ref{equation-from-S-to-T}) with $W_{m}(\hat{R})$ (recall that $T=\{1,p,\dots,p^{m-1}\}$) and by using Proposition \ref{proposition-de-Rham-Witt-cohomology-limit-blue}(ii), we see that $\text{(\ref{equation-from-S-to-T})}\otimes W_{m}(\hat{R})$ is induced by the identity on the de Rham cohomology. Hence it is an
isomorphism by Lemma \ref{lemma-faithfully-flat-blue}.
\end{proof}
\section{Poincar\'e duality} \label{section-values}
\subsection{A rigid $\otimes$-category}
\begin{definition} \label{definition-phi-N}
Let $R$ be a $\Z$-torsion-free ring and $Q$ a non-empty truncation set. We denote by $\mathcal{C}'_{Q,R}$
the category with objects being contravariant functors $S\mapsto M_S$ from finite truncation
sets contained in $Q$ to sets, together with
\begin{itemize}
\item a $\mathbb{W}_S(R)$-module structure on $M_S$, for all truncation sets $S\subset Q$, such that
the maps $M_S\xr{} M_T$, for $T\subset S$, are morphisms of $\mathbb{W}_S(R)$-modules when $M_T$ is considered as
a $\mathbb{W}_S(R)$-module via the projection $\pi_T:\mathbb{W}_S(R)\xr{} \mathbb{W}_T(R)$,
\item for all positive integers $n$ and all truncation sets $S\subset Q$, maps
$$
\phi_n:M_S\xr{} M_{S/n},
$$
such that
\begin{itemize}
\item $\phi_{n}\circ \phi_m=\phi_{nm}$ for all $n,m$,
\item $\phi_n$ is a morphism of $\mathbb{W}_S(R)$-modules when $M_{S/n}$ is considered
as a $\mathbb{W}_S(R)$-module via $F_n:\mathbb{W}_S(R)\xr{} \mathbb{W}_{S/n}(R)$,
\item for all truncation sets $T\subset S\subset Q$ the following diagram is commutative:
$$
\xymatrix{
M_S\ar[r]^{\phi_n}\ar[d]
&
M_{S/n}\ar[d]
\\
M_T\ar[r]^{\phi_n}
&
M_{T/n}.
}
$$
\end{itemize}
\end{itemize}
The functor $S\mapsto M_S$ is required to satisfy the following properties.
\begin{itemize}
\item For all truncation sets $S\subset Q$, the $\mathbb{W}_S(R)$-module $M_S$ is finitely generated and projective.
\item For all truncation sets $T\subset S\subset Q$:
$$
\mathbb{W}_T(R)\otimes_{\mathbb{W}_S(R)}M_S \xr{} M_T
$$
is an isomorphism.
\item There is a positive integer $a$ such that there exist morphisms
\begin{equation}\label{equation-beta_N}
\beta_n:M_{S/n}\xr{} M_{S},
\end{equation}
for all positive integers $n$ and all finite truncation sets $S\subset Q$, satisfying the following properties:
\begin{itemize}
\item $\beta_n$ is a morphism of $\mathbb{W}_S(R)$-modules when $M_{S/n}$ is considered
as a $\mathbb{W}_S(R)$-module via $F_n:\mathbb{W}_S(R)\xr{} \mathbb{W}_{S/n}(R)$,
\item $\beta_n(\lambda \cdot \phi_n(x))=n^{a-1}V_n(\lambda)\cdot x,$ for all $x\in M_S,\lambda\in \mathbb{W}_{S/n}(R)$,
\item $\phi_n\circ \beta_n=n^{a}.$
\end{itemize}
\end{itemize}
Morphisms between two objects in $\mathcal{C}'_{Q,R}$ are morphisms of functors
that are compatible with the $[S\mapsto \mathbb{W}_S(R)]$-module structure and
commute with $\phi_n$ for all positive integers $n$. We simply write $\mathcal{C}'_{R}$ for $\mathcal{C}'_{\mathbb{N}_{>0},R}$.
\end{definition}
\begin{remark}
Note that the $\beta_n$ are not part of the datum; we can always change
$\beta_n\mapsto n^{b}\beta_n$ for a non-negative integer $b$.
\end{remark}
For an inclusion of truncation sets $Q\subset Q'$, we have an evident functor
$$
\mathcal{C}'_{Q',R}\xr{} \mathcal{C}'_{Q,R}.
$$
\begin{proposition}
Let $M\in {\rm ob}(\mathcal{C}'_{Q,R})$. Let $S\subset Q$ be a finite truncation set. Fix $a>0$ and $\beta_n$ as in \ref{equation-beta_N}.
\begin{enumerate}
\item For all positive integers $n,m$ with $(n,m)=1$ we have
$$\phi_n\circ \beta_m=\beta_m\circ \phi_n,$$
considered as morphisms $M_{S/m}\xr{} M_{S/n}$.
\item For all positive integers $n,m$ we have
$$\beta_{n}\circ \beta_m=\beta_{nm},$$
considered as morphisms $M_{S/nm}\xr{} M_{S}$.
\item For all truncation sets $T\subset S$ the following diagram is commutative:
$$
\xymatrix{
M_{S/n}\ar[r]^{\beta_n}\ar[d]
&
M_{S}\ar[d]
\\
M_{T/n}\ar[r]^{\beta_n}
&
M_{T}.
}
$$
\end{enumerate}
\begin{proof}
The ring $\mathbb{W}_S(R)$ is $\Z$-torsion-free, because
it can be considered via the ghost map as a subring of $\prod_{s\in S}R$, and $R$ is $\Z$-torsion-free by assumption. Since $M_S$ is a flat $\mathbb{W}_S(R)$-module, it is $\Z$-torsion-free.
For (1). Since ${\rm image}(\phi_m)\supset m^aM_{S/m}$ it is sufficient to prove
$$
\phi_n\circ \beta_m\circ \phi_m=\beta_m\circ \phi_n\circ \phi_m.
$$
This follows from $\beta_m\circ \phi_m=V_m(1)m^{a-1}$ and $\phi_n\circ \phi_m=\phi_m\circ \phi_n$.
For (2). We may argue as in (1) by precomposing with $\phi_{nm}$:
$$
\beta_{n}\circ \beta_m \circ \phi_{nm}(x)=
\beta_{n}\bigl(V_{m}(1)m^{a-1}\phi_{n}(x)\bigr)
=m^{a-1}n^{a-1}V_{nm}(1)x
=\beta_{nm}\circ \phi_{nm}(x).
$$
For (3). We may argue as in (1) by precomposing with $\phi_{n}$. The computation
is straightforward.
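For the reader's convenience we spell out the computation (writing ${\rm res}^S_T\colon M_S\xr{} M_T$ for the restriction maps): precomposing both ways around the diagram with $\phi_n\colon M_S\xr{} M_{S/n}$ gives, for $x\in M_S$,
$$
{\rm res}^S_T\bigl(\beta_n(\phi_n(x))\bigr)=n^{a-1}V_n(1)\cdot {\rm res}^S_T(x)
=\beta_n\bigl(\phi_n({\rm res}^S_T(x))\bigr)
=\beta_n\bigl({\rm res}^{S/n}_{T/n}(\phi_n(x))\bigr),
$$
using that the restriction maps are compatible with the module structures and with $\phi_n$. Since ${\rm image}(\phi_n)\supset n^{a}M_{S/n}$ and $M_T$ is $\Z$-torsion-free, the diagram commutes.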
\end{proof}
\end{proposition}
\begin{lemma}\label{lemma-commute-with-beta}
Let $f:M\xr{} N$ be a morphism in $\mathcal{C}'_{Q,R}$,
and choose a positive integer $a$ and $\beta_{M,n}$, $\beta_{N,n}$ as in (\ref{equation-beta_N}).
Then $f_S\circ \beta_{M,n}=\beta_{N,n}\circ f_{S/n}$ for all $S,n$.
In particular, the choice of the $\beta_{n}$ in Definition \ref{definition-phi-N} depends only on the positive integer $a$.
\begin{proof}
Again, we may use that $M_S$ is $\Z$-torsion-free. Now,
\begin{align*}
n^a\beta_nf(x)&=\beta_n(f(n^ax))
=\beta_n(f(\phi_n\beta_n(x)))\\
&=\beta_n\phi_nf(\beta_n(x))
=n^{a-1}V_n(1)f(\beta_n(x))\\
&=f(n^{a-1}V_n(1)\beta_n(x))
=f(\beta_n\phi_n\beta_n(x))
=n^af(\beta_n(x)).
\end{align*}
\end{proof}
\end{lemma}
\begin{proposition}[Tensor products]
For two objects $M,N$ in $\mathcal{C}'_{Q,R}$ we set
$$
(M\otimes N)_S:=M_S\otimes_{\mathbb{W}_S(R)} N_S, \quad \phi_n:=\phi_{M,n}\otimes \phi_{N,n}.
$$
Then $M\otimes N$ defines an object in $\mathcal{C}'_{Q,R}$.
\begin{proof}
This is a straightforward calculation. We can take
$\beta_{M\otimes N,n}=\beta_{M,n}\otimes \beta_{N,n}$.
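To make the calculation explicit: with $a:=a_M+a_N$ the two conditions of (\ref{equation-beta_N}) can be checked on elementary tensors (both sides being additive): for $x\in M_S$, $y\in N_S$ and $\lambda\in \mathbb{W}_{S/n}(R)$,
$$
\phi_n\circ\beta_n=(\phi_{M,n}\circ\beta_{M,n})\otimes(\phi_{N,n}\circ\beta_{N,n})=n^{a_M}\cdot n^{a_N}=n^{a},
$$
and
$$
\beta_n\bigl(\lambda\cdot(\phi_{M,n}(x)\otimes \phi_{N,n}(y))\bigr)
=n^{a_M-1}V_n(\lambda)x\otimes n^{a_N-1}V_n(1)y
=n^{a-1}V_n(\lambda)\cdot(x\otimes y),
$$
where the last equality uses $V_n(\lambda)V_n(1)=nV_n(\lambda)$ in $\mathbb{W}_S(R)$.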
\end{proof}
\end{proposition}
The tensor product equips $\mathcal{C}'_{Q,R}$ with the structure of a $\otimes$-category with identity object $\mathbf{1}$, where
$$
\mathbf{1}_S:=\mathbb{W}_S(R), \qquad \phi_{\mathbf{1},n}=F_n.
$$
\begin{definition}[Tate objects]
Let $b$ be a non-negative integer. We define the object $\mathbf{1}(-b)$ in $\mathcal{C}'_{Q,R}$ by
$$
\mathbf{1}(-b)_S:=\mathbb{W}_S(R), \qquad \phi_{\mathbf{1}(-b),n}=n^bF_n.
$$
\end{definition}
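As a consistency check, $\mathbf{1}(-b)$ is indeed an object of $\mathcal{C}'_{Q,R}$: one may take $a=b+1$ and $\beta_n=V_n$ in (\ref{equation-beta_N}), since $F_nV_n=n$ and the projection formula $V_n(\lambda F_n(x))=V_n(\lambda)x$ give
$$
\phi_{\mathbf{1}(-b),n}\circ V_n=n^{b}F_nV_n=n^{b+1},
\qquad
V_n\bigl(\lambda\cdot \phi_{\mathbf{1}(-b),n}(x)\bigr)=V_n\bigl(\lambda\, n^{b}F_n(x)\bigr)=n^{b}\,V_n(\lambda)\,x.
$$
For $b=0$ this recovers the identity object $\mathbf{1}$.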
For an object $M$ in $\mathcal{C}'_{Q,R}$, $M_S$ is $\Z$-torsion-free; hence, for every non-negative integer $b$, we get an isomorphism
$$
\Hom_{\mathcal{C}'_{Q,R}}(M,N)\xr{\cong} \Hom_{\mathcal{C}'_{Q,R}}(M\otimes \mathbf{1}(-b),N\otimes \mathbf{1}(-b)).
$$
\begin{definition}
We denote by $\mathcal{C}_{Q,R}$ the category with objects $M(b)$, where $M$
is an object in $\mathcal{C}'_{Q,R}$ and $b\in \Z$. As morphisms we set
$$
\Hom_{\mathcal{C}_{Q,R}}(M(b_1),N(b_2))=\Hom_{\mathcal{C}'_{Q,R}}(M\otimes \mathbf{1}(b_1-c),N\otimes \mathbf{1}(b_2-c)),
$$
where $c\in \Z$ is such that $b_1-c,b_2-c\leq 0$.
\end{definition}
For two truncation sets $Q\subset Q'$, we have an obvious functor
$$
\mathcal{C}_{Q',R} \xr{} \mathcal{C}_{Q,R}.
$$
The category $\mathcal{C}_{Q,R}$ is additive and via $M\mapsto M(0)$ the
category $\mathcal{C}'_{Q,R}$ is a full subcategory of $\mathcal{C}_{Q,R}$. For
$M\in \mathcal{C}'_{Q,R}$, we have $M(-b)=M\otimes \mathbf{1}(-b)$ if $b$ is non-negative.
For an integer $b$, the functor
\begin{align*}
\mathcal{C}_{Q,R}\xr{} \mathcal{C}_{Q,R}, \quad M(n)\mapsto M(n+b)
\end{align*}
is an equivalence and has $M(n)\mapsto M(n-b)$ as inverse functor.
For $M(b_1),N(b_2)$ in $\mathcal{C}_{Q,R}$ we set
$$
M(b_1)\otimes N(b_2):=(M\otimes N)(b_1+b_2).
$$
The tensor product equips $\mathcal{C}_{Q,R}$ with the structure of a $\otimes$-category with identity object $\mathbf{1}$.
\subsubsection{Internal Hom}
The reason for introducing the new category $\mathcal{C}_{Q,R}$ is the internal Hom
construction.
Let $M,N$ be two objects in $\mathcal{C}'_{Q,R}$, fix positive integers $a_M,a_N$
and $\beta_{n,M},\beta_{n,N}$ as in (\ref{equation-beta_N}). In a first step we
are going to define
an object $\underline{{\rm Hom}}'(M,N)$ in $\mathcal{C}'_{Q,R}$ that depends on the choice of $a_M$. We set
$$
\underline{{\rm Hom}}'(M,N)_S:=\Hom_{\mathbb{W}_S(R)}(M_S,N_S).
$$
We note that
$$
\Hom_{\mathbb{W}_S(R)}(M_S,N_S)\otimes_{\mathbb{W}_S(R)}\mathbb{W}_T(R)\xr{\cong} \Hom_{\mathbb{W}_T(R)}(M_T,N_T),
$$
since $M_S$ is finitely generated and projective.
We define
\begin{align*}
\phi_n:\Hom_{\mathbb{W}_S(R)}(M_S,N_S)&\xr{} \Hom_{\mathbb{W}_{S/n}(R)}(M_{S/n},N_{S/n}) \\
\phi_n(f)&:=\phi_n\circ f\circ \beta_n.
\end{align*}
This definition depends on $a_M$. It is easy to check that $\underline{{\rm Hom}}'(M,N)$
is an object in $\mathcal{C}'_{Q,R}$ (take $\beta_n(f):=\beta_n\circ f\circ \phi_n$
and $a=a_M+a_N$). We set
\begin{equation}\label{equation-definition-iHom}
\underline{{\rm Hom}}(M,N):=\underline{{\rm Hom}}'(M,N)(a_M)
\end{equation}
as an object in $\mathcal{C}_{Q,R}$. In view of Lemma \ref{lemma-commute-with-beta}
this definition is independent of any choices. For two objects $M(b_1),N(b_2)$
in $\mathcal{C}_{Q,R}$ we set
$$
\underline{{\rm Hom}}(M(b_1),N(b_2)):= \underline{{\rm Hom}}(M,N)(b_2-b_1).
$$
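As a small example of the construction, consider a Tate object (taking $\beta_n=V_n$ and $a_{\mathbf{1}(-b)}=b+1$ for some $b\geq 0$): an endomorphism $f$ of $\mathbf{1}(-b)_S=\mathbb{W}_S(R)$, i.e.~multiplication by some $\mu\in\mathbb{W}_S(R)$, satisfies
$$
\phi_n(f)(x)=F_n\bigl(\mu\cdot V_n(x)\bigr)=n\,F_n(\mu)\,x,
$$
so that $\underline{{\rm Hom}}'(\mathbf{1}(-b),\mathbf{1})=\mathbf{1}(-1)$ and, by (\ref{equation-definition-iHom}), $\underline{{\rm Hom}}(\mathbf{1}(-b),\mathbf{1})=\mathbf{1}(-1)(b+1)$, which is isomorphic to $\mathbf{1}(b)$ in $\mathcal{C}_{Q,R}$.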
\subsubsection{}
For three objects $M,N,P$ in $\mathcal{C}_{Q,R}$ we have an obvious natural
isomorphism
$$
\underline{{\rm Hom}}(M\otimes N,P) = \underline{{\rm Hom}}(M,\underline{{\rm Hom}}(N,P)).
$$
\begin{proposition}
For objects $M,N$ in $\mathcal{C}_{Q,R}$ we have a natural isomorphism
$$
\Hom(\mathbf{1},\underline{{\rm Hom}}(M,N))\xr{} \Hom(M,N).
$$
\begin{proof}
We may assume that $M,N\in \mathcal{C}'_{Q,R}$.
Fix $a_M$ and $\beta_{M,n}$ as in (\ref{equation-beta_N}). We need to show
that
$$
\Hom(\mathbf{1}(-a_M),\underline{{\rm Hom}}'(M,N))=\Hom(M,N),
$$
and we know that
\begin{multline*}
\Hom(\mathbf{1}(-a_M),\underline{{\rm Hom}}'(M,N))=\{[S\mapsto f_S]\mid f_S\otimes_{\mathbb{W}_S(R)}\mathbb{W}_T(R)=f_T\;\text{for $T\subset S\subset Q$}, \\
\phi_{N,n}\circ f_S\circ \beta_{M,n} = n^{a_M}f_{S/n} \quad \text{for all $n, S\subset Q$.}\}
\end{multline*}
Since $\phi_{M,n}(M_S)\supset n^{a_M}M_{S/n}$ we have
\begin{align*}
\phi_{N,n}\circ f_S\circ \beta_{M,n} = n^{a_M}f_{S/n}
&\Leftrightarrow \phi_{N,n}\circ f_S\circ \beta_{M,n}\circ \phi_{M,n} = n^{a_M}f_{S/n}\circ \phi_{M,n} \\
&\Leftrightarrow \phi_{N,n}\circ f_S\circ n^{a_{M}-1} V_n(1) = n^{a_M}f_{S/n}\circ \phi_{M,n}\\
&\Leftrightarrow n^{a_{M}}\phi_{N,n}\circ f_S = n^{a_M}f_{S/n}\circ \phi_{M,n}\\
&\Leftrightarrow \phi_{N,n}\circ f_S = f_{S/n}\circ \phi_{M,n}.
\end{align*}
\end{proof}
\end{proposition}
For $M\in \mathcal{C}_{Q,R}$ we define the dual by
$$
M^{\vee}:=\underline{{\rm Hom}}(M,\mathbf{1}).
$$
It equips $\mathcal{C}_{Q,R}$ with the structure of a rigid $\otimes$-category. We have
$$
M^{\vee}\otimes N = \underline{{\rm Hom}}(M,N).
$$
\subsubsection{Functoriality}
\begin{proposition}\label{proposition-functoriality}
Let $R\xr{} A$ be a ring homomorphism between $\Z$-torsion-free rings.
The assignment
\begin{align*}
&[S\mapsto M_S]\mapsto [S\mapsto M_S\otimes_{\mathbb{W}_S(R)}\mathbb{W}_S(A)], \quad [n\mapsto \phi_n]\mapsto [n\mapsto \phi_n\otimes F_n],\\
&[S\mapsto f_S]\mapsto [S\mapsto f_S\otimes id_{\mathbb{W}_S(A)}]
\end{align*}
defines a functor
$$
\mathcal{C}'_{Q,R}\xr{} \mathcal{C}'_{Q,A}.
$$
The functor can be extended in the obvious way to a functor
$
\mathcal{C}_{Q,R}\xr{} \mathcal{C}_{Q,A}.
$
\begin{proof}
Straightforward.
\end{proof}
\end{proposition}
\subsubsection{}
Our motivation for introducing $\mc{C}_{Q,R}$ comes from geometry.
\begin{proposition}\label{proposition-big-de-Rham-Witt-phi-module}
Assumptions as in Theorem \ref{thm-projective-blue}. Let $Q$ be a non-empty truncation set. For all $i\geq 0$ the assignment
$$
S\mapsto H^i_{dRW}(X/\mathbb{W}_S(R)), \quad n\mapsto \phi_n,
$$
defines an object in $\mc{C}'_{Q,R}$.
\begin{proof}
Theorem \ref{thm-projective-blue} implies that these modules are projective
and finitely generated. For the construction of $\phi_n$ and $\beta_n$ see Section \ref{section-phi-beta-de-Rham-Witt}.
\end{proof}
\end{proposition}
\begin{definition}\label{definition-bd-R}
Let $X\xr{} \Spec(R)$ be a morphism such that the assumptions of Theorem \ref{thm-projective-blue} are satisfied. For all $i$, we denote by $H^i_{dRW}(X/\mathbb{W}(R))$ the object in $\mc{C}_{R}$
that is given by $S\mapsto H^i_{dRW}(X/\mathbb{W}_S(R))$ (Proposition \ref{proposition-big-de-Rham-Witt-phi-module}).
We call $H^*_{dRW}(X/\mathbb{W}(R))$ the \emph{de Rham-Witt cohomology} of $X$.
\end{definition}
\subsubsection{}
Let $X,Y$ be smooth proper schemes over $R$ such that the assumptions of Theorem
\ref{thm-projective-blue} are satisfied for $X$ and $Y$.
The multiplication
$$
R\Gamma(\mathbb{W}_S\Omega^*_{X/R})\times R\Gamma(\mathbb{W}_S\Omega^*_{Y/R})\xr{} R\Gamma(\mathbb{W}_S\Omega^*_{X\times_R Y/R})
$$
induces a morphism in $\mc{C}_R$:
\begin{equation}\label{equation-product-varities-C-R}
H^i_{dRW}(X/\mathbb{W}(R))\otimes H^{j}_{dRW}(Y/\mathbb{W}(R))\xr{} H^{i+j}_{dRW}(X\times_R Y/\mathbb{W}(R)).
\end{equation}
\subsection{The tangent space functor}
We have a functor of rigid $\otimes$-categories
\begin{align*}
&T:\mathcal{C}_{Q,R} \xr{} \text{(finitely generated and projective $R$-modules)} \\
&T(M(n)) := M_{\{1\}}.
\end{align*}
\begin{proposition}\label{proposition-T-conservative}
The functor $T$ is conservative, i.e.~if $T(f)$ is an isomorphism then $f$ is an isomorphism.
\begin{proof}
It is sufficient to consider a morphism $f:M\xr{} N$ in $\mathcal{C}'_{Q,R}$.
We need to show that $f_S:M_S\xr{} N_{S}$ is an isomorphism provided that $f_{\{1\}}$ is an isomorphism. We may choose a positive integer $a$ and $\beta_{M,n}$, $\beta_{N,n}$ as in (\ref{equation-beta_N}). By Lemma \ref{lemma-commute-with-beta} the
morphism $f$ commutes with $\beta_n$.
Let $n:=\max\{s\mid s\in S\}$; by induction we know that $f_T$ is an isomorphism for
$T=S\backslash \{n\}$.
Set $I=\ker(\mathbb{W}_S(R)\xr{} \mathbb{W}_T(R))$; we know that $I=\{V_n(\lambda)\mid \lambda\in R\}$.
It suffices to show that
\begin{equation}\label{equation-fS-restricted}
IM_S\xr{f_S} IN_S
\end{equation}
is an isomorphism. If $f_S(V_n(\lambda)x)=0$ then $n^{a-1}V_n(\lambda)f_S(x)=0$ and therefore
$\beta_n(\lambda\cdot \phi_nf_S(x))=\beta_n(\lambda\cdot f_{\{1\}}(\phi_n(x)))$ vanishes. Since $\beta_n$ is injective, we conclude
$\lambda \cdot \phi_n(x)=0$, hence $$0=\beta_n(\lambda\cdot\phi_n(x))=n^{a-1}V_n(\lambda)x,$$ which implies $V_n(\lambda)x=0$.
For the surjectivity of (\ref{equation-fS-restricted}) we note that, by induction, for every
$y\in N_S$ there is $x\in M_S$ with $f_S(x)-y\in IN_S$. Therefore it suffices
to show that $I^{a}N_S$ is contained in the image of $f_S$. Now,
$$
V_n(\lambda_1)\cdots V_n(\lambda_a)=n^{a-1}V_n(\lambda_1\cdots\lambda_a).
$$
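Indeed, the projection formula together with $F_nV_n=n$ gives
$$
V_n(x)\cdot V_n(y)=V_n\bigl(x\cdot F_nV_n(y)\bigr)=n\,V_n(xy),
$$
and the displayed identity follows by induction on $a$.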
Thus
$$
V_n(\lambda_1)\cdots V_n(\lambda_a)y=f_S(\beta_nf^{-1}_{\{1\}}(\lambda_1\cdots\lambda_a\cdot\phi_n(y))).
$$
\end{proof}
\end{proposition}
\begin{corollary}\label{corollary-kuenneth-over-R}
Let $X,Y$ be smooth proper schemes over $R$ such that the assumptions of Theorem \ref{thm-projective-blue} are satisfied for $X$ and $Y$.
If
$$
\bigoplus_{i+j=n} H^i_{dR}(Y/R)\otimes_R H^j_{dR}(X/R)\xr{} H^n_{dR}(X\times_R Y/R)
$$
is an isomorphism then
$$
\bigoplus_{i+j=n} H^i_{dRW}(X/\mathbb{W}(R))\otimes H^j_{dRW}(Y/\mathbb{W}(R))\xr{} H^n_{dRW}(X\times_R Y/\mathbb{W}(R))
$$
(see \eqref{equation-product-varities-C-R}) is an isomorphism in $\mc{C}_{R}$.
\begin{proof}
This is an application of Proposition \ref{proposition-T-conservative}, because
$$
T(H^i_{dRW}(-/\mathbb{W}(R)))=H^i_{dR}(-/R).
$$
\end{proof}
\end{corollary}
\begin{proposition}\label{proposition-T-faithful}
The functor $T$ is faithful.
\begin{proof}
It is sufficient to consider a morphism $f:M\xr{} N$ in $\mathcal{C}'_{Q,R}$.
We need to show that $f_S:M_S\xr{} N_{S}$ vanishes provided that $f_{\{1\}}$ is zero. We may choose a positive integer $a$ and $\beta_{M,n}$, $\beta_{N,n}$ as in (\ref{equation-beta_N}). By Lemma \ref{lemma-commute-with-beta} the
morphism $f$ commutes with $\beta_n$.
Let $n:=\max(S)$; by induction we know that $f_T=0$ for
$T=S\backslash \{n\}$, so that for all $x\in M_S$ the image $f_S(x)$ is of the form
$f_S(x)=V_n(\lambda)y$. Since
$$
0=f_{\{1\}}\circ \phi_n(x)=\phi_n\circ f_S(x)=n\cdot \lambda \cdot \phi_n(y),
$$
we conclude $\lambda\cdot \phi_n(y)=0$ and $n^{a-1}V_n(\lambda)y=0$, hence $f_S(x)=0$.
\end{proof}
\end{proposition}
\subsubsection{}
The following proposition shows that an object in $\mathcal{C}'_{Q,R}$, where $R$ is a $\Z_{(p)}$-algebra, is determined by its $p$-typical part, that is, by its values on truncation sets consisting of powers of $p$. Recall the notation $S_p$ from Notation \ref{notation-S-p}.
\begin{proposition}\label{proposition-phiN-modules-over-Zp}
Let $R$ be a $\Z$-torsion-free ring. Let $Q$ be a truncation set. Suppose $p$ is a prime such that $\ell^{-1}\in R$ for all primes $\ell\in Q\backslash \{p\}$.
Let $M,N$ be objects of $\mathcal{C}'_{Q,R}$.
\begin{enumerate}
\item Via the equivalence \eqref{equation-epsilon-decomposition}, $M_S$ corresponds to
$$\bigoplus_{n\in S, (n,p)=1} M_{(S/n)_p}.$$
\item If $f:M\xr{} N$ is a morphism in $\mathcal{C}'_{Q,R}$ then $f_S\mapsto \bigoplus_{n\in S, (n,p)=1}f_{(S/n)_p}$
via the equivalence \eqref{equation-epsilon-decomposition}.
\item The restriction functor
$$
\mathcal{C}'_{Q,R}\xr{} \mathcal{C}'_{Q_p,R}
$$
is an equivalence of categories.
\end{enumerate}
\begin{proof}
For (1). First one proves that the projection
$\epsilon_1M_S\xr{} M_{S_p}$ is an isomorphism (see Notation \ref{notation-S-p} for $S_p$). The second step is the isomorphism
$$
\phi_n:\epsilon_nM_S\xr{} \epsilon_1M_{S/n},
$$
with $\frac{\epsilon_n}{n^a}\beta_n$ as inverse.
Statement (2) is obvious, and (3) follows from (1) and (2).
\end{proof}
\end{proposition}
\begin{proposition}\label{proposition-R-over-Q-phi-N-modules}
Let $R$ be a $\Z$-torsion-free ring. Let $Q$ be a truncation set. Suppose $p^{-1}\in R$ for all primes $p\in Q$.
Then
$$
T:\mathcal{C}'_{Q,R} \xr{} \text{(finitely generated and projective $R$-modules)}
$$
defines an equivalence of categories.
\begin{proof}
Straightforward.
\end{proof}
\end{proposition}
\subsubsection{}
Let $P$ be a set of primes (possibly infinite). We set $\Z_{P}:=\Z[p^{-1}\mid p\in P]$. Let $A$ be a commutative ring.
We denote by ${\rm Mod}_{A}$ the category of $A$-modules. We define the category ${\rm Mod}_{A,P}$ to be the category
with objects
$$
((M_p)_{p\in P},(\alpha_{p,\ell})_{p,\ell\in P}),
$$
where $M_p$ is an $A\otimes_{\Z} \Z_{P\backslash \{p\}}$-module, and $\alpha_{p,\ell}:M_{\ell}\otimes_{\Z_{P\backslash \{\ell\}}}\Z_P\xr{} M_{p}\otimes_{\Z_{P\backslash \{p\}}}\Z_P$
is an isomorphism of $A\otimes \Z_P$-modules such that
$$
\alpha_{p_1,p_1}=id, \quad
\alpha_{p_1,p_2}\circ \alpha_{p_2,p_3}=\alpha_{p_1,p_3} \quad
\text{for all $p_1,p_2,p_3\in P$.}
$$
The morphisms of ${\rm Mod}_{A,P}$ are defined in the evident way.
If $P$ is finite and non-empty, then the evident functor
$$
R_P:{\rm Mod}_{A}\xr{} {\rm Mod}_{A,P}
$$
is an equivalence of categories, because we can glue quasi-coherent sheaves. If $P$ is infinite then this may fail to be an equivalence, but we still
have the following properties, whose proof is left to the reader.
\begin{lemma}\label{lemma-gluing-with-infinite-primes}
Suppose $P\neq \emptyset$.
\begin{itemize}
\item [(i)] $R_P$ is faithful.
\item [(ii)] For every $N\in {\rm Mod}_A$ such that $N\xr{} N\otimes_{\Z}\Z_P$ is injective, and every $M\in {\rm Mod}_A$ the following map
is an isomorphism:
$$
\Hom_{{\rm Mod}_A}(M,N)\xr{\cong} \Hom_{{\rm Mod}_{A,P}}(R_P(M),R_P(N)).
$$
\item [(iii)] Suppose that $A\xr{} A\otimes \Z_P$ is injective. Let $\tilde{M}=((\tilde{M}_p),(\alpha_{p,\ell}))\in {\rm Mod}_{A,P}$
be such that $\tilde{M}_p$ is a finitely generated and projective $A\otimes_{\Z} \Z_{P\backslash \{p\}}$-module for all $p\in P$. Then there exists a finitely generated and
projective $M\in {\rm Mod}_A$ such that
$R_P(M)\cong \tilde{M}$.
\end{itemize}
\end{lemma}
For a positive integer $a$, we denote by $\mc{C}'_{Q,R,a}$ the full subcategory of $\mc{C}'_{Q,R}$ consisting of objects such that there exist $\{\beta_n\}_{n}$ as in \eqref{equation-beta_N} for $a$.
\begin{definition}
Let $Q$ be a non-empty truncation set, and let $P$ be the set of primes of $Q$.
We denote by $\mc{LC}'_{Q,R,a}$ the category with objects $$((M_p)_{p\in P},(\alpha_{p,\ell})_{p,\ell \in P}),$$ where
\begin{itemize}
\item $M_p\in {\rm ob}(\mc{C}'_{Q_p,R\otimes \Z_{P\backslash \{p\}},a})$ for all $p\in P$,
\item $\alpha_{p,\ell}:T(M_{\ell})\otimes_{\Z_{P\backslash \{\ell\}}} \Z_P \xr{\cong} T(M_{p})\otimes_{\Z_{P\backslash \{p\}}} \Z_P$ is an isomorphism such that
$$
\alpha_{p_1,p_1}=id, \quad
\alpha_{p_1,p_2}\circ \alpha_{p_2,p_3}=\alpha_{p_1,p_3} \quad
\text{for all $p_1,p_2,p_3\in P$.}
$$
\end{itemize}
The morphisms are defined in the evident way.
\end{definition}
Broadly speaking the next proposition shows that the category $\mc{C}'_{Q,R,a}$
is glued from the local components via the functor $T$.
\begin{proposition}
Let $R$ be a $\Z$-torsion-free ring, and let $a$ be a positive integer. For every non-empty truncation set $Q$ the evident functor
$$
\mc{C}'_{Q,R,a}\xr{} \mc{LC}'_{Q,R,a}
$$
is an equivalence of categories.
\begin{proof}
The claim follows easily from Proposition \ref{proposition-phiN-modules-over-Zp}, Proposition \ref{proposition-R-over-Q-phi-N-modules}, and Lemma \ref{lemma-gluing-with-infinite-primes}.
\end{proof}
\end{proposition}
\subsection{Proof of Poincar\'e duality}
\subsubsection{}
Let $f:X\xr{} \Spec(R)$ be a smooth, projective morphism of relative dimension $d$ between noetherian schemes such that $H^*_{dR}(X/R)$ is a flat $R$-module. Suppose furthermore that $\Spec(R)$ is integral and the field of fractions of $R$ has characteristic zero.
We know that $H^0(X,\mathcal{O}_X)$ is a finite \'etale $R$-algebra and $$H^0_{dR}(X/R)=H^0(X,\mathcal{O}_X).$$ Since $H^*_{dR}(X/R)$ is
flat, we have
$$
H^i_{dR}(X/R)\otimes_R k(y)\xr{\cong} H^i_{dR}(X_y/k(y)),
$$
for every point $y\in \Spec(R)$, where $X_y$ denotes the fibre over $y$. In particular, we obtain
\begin{equation}\label{equation-H0-on-fibres}
H^0(X,\mathcal{O}_X)\otimes_R k(y)\xr{\cong} H^0(X_y,\mathcal{O}_{X_y}).
\end{equation}
By Grothendieck-Serre duality we see that $y\mapsto \dim_{k(y)} H^d(X_y,\omega_{X_y})$ is a constant function, thus $H^d(X,\omega_{X/R})$ is a finitely generated
projective $R$-module and we have
$$
H^d(X,\omega_{X/R})\otimes_R k(y)\xr{\cong} H^d(X_y,\omega_{X_y})
$$
for every point $y\in \Spec(R)$. Since the Hodge to de Rham spectral sequence degenerates at the generic point of $\Spec(R)$, we conclude:
$$
H^d(X,\omega_{X/R})\xr{\cong} H^{2d}_{dR}(X/R).
$$
Recall that we have a trace map
$$
{\rm Tr}:H^d(X,\omega_{X/R})\xr{} R;
$$
we will also denote by ${\rm Tr}$ the induced map $H^{2d}_{dR}(X/R)\xr{} R$.
The duality pairing
$$
H^0(X,\mathcal{O}_X) \times H^d(X,\omega_{X/R})\xr{} R
$$
induces a duality pairing
$$
H^0_{dR}(X/R)\times H^{2d}_{dR}(X/R) \xr{} R.
$$
Note that if $H^0(X,\mathcal{O}_X)=R$ then, by (\ref{equation-H0-on-fibres}), the fibres of $f$ are geometrically connected (in particular connected).
Suppose now that $H^0(X,\mathcal{O}_X)=R$, and set $c_X:={\rm Tr}^{-1}(1)\in H^d(X,\omega_{X/R})=H^{2d}_{dR}(X/R)$. For a generically finite $R$-morphism
$g:X\xr{} Y$, where $Y$ satisfies the same assumptions as $X$ (in particular, $Y/R$ is of relative dimension $d$),
we have a pull-back map
$$
g^*:H^{2d}_{dR}(Y/R)=H^d(Y,\omega_{Y/R})\xr{} H^d(X,\omega_{X/R}) = H^{2d}_{dR}(X/R)
$$
which is dual to the trace map
$$
g_*:H^0(X,\mathcal{O}_X)\xr{} H^0(Y,\mathcal{O}_Y), \quad g_*(1)=\deg(g),
$$
thus
$
g^*(c_Y)=\deg(g)\cdot c_X.
$
\begin{proposition}\label{proposition-trace-map}
Let $R$ be a smooth $\Z$-algebra.
Let $X$ be a smooth projective scheme over $R$ such that $H^*_{dR}(X/R)$ is a projective $R$-module.
Suppose that $X$ is connected of relative dimension $d$. There is an isomorphism
$$
H^{2d}_{dRW}(X/\mathbb{W}(R))\cong H^0_{dRW}(X/\mathbb{W}(R))\otimes \mathbf{1}(-d)
$$
and a natural morphism in $\mathcal{C}_R$:
$$
H^{2d}_{dRW}(X/\mathbb{W}(R)) \xr{} \mathbf{1}(-d)
$$
\begin{proof}
Certainly, we may suppose that $\Spec(R)$ is integral.
\emph{1.Step:} Reduction to the case that $X/R$ has geometrically connected fibres.
Set $L=H^0(X,\mathcal{O}_X)$; then $L$ is a finite \'etale $R$-algebra. It suffices to show the existence
of an isomorphism
\begin{equation}\label{equation-reduction-to-geom-int-case}
H^{2d}_{dRW}(X/\mathbb{W}(L)) \xr{\tau} \mathbf{1}(-d)
\end{equation}
in $\mathcal{C}_L$ such that $\tau_{\{1\}}$ is the trace map. In view of $$H^{2d}_{dRW}(X/\mathbb{W}(L))=H^{2d}_{dRW}(X/\mathbb{W}(R)),$$ (\ref{equation-reduction-to-geom-int-case}) yields in $\mathcal{C}_R$:
\begin{equation}\label{equation-induced-trace-map-on-R}
H^{2d}_{dRW}(X/\mathbb{W}(R))\xr{\cong} H^0_{dRW}(X/\mathbb{W}(R))\otimes \mathbf{1}(-d) \xr{tr\otimes id} \mathbf{1}(-d),
\end{equation}
with $tr:H^0_{dRW}(X/\mathbb{W}(R)) \xr{} \mathbf{1}$ being defined by the usual trace map
$$
H^0_{dRW}(X/\mathbb{W}_S(R))=\mathbb{W}_S(L) \xr{} \mathbb{W}_S(R).
$$
The morphism (\ref{equation-induced-trace-map-on-R}) is functorial because it induces the usual trace map after evaluation at $\{1\}$.
Therefore we may assume $R=L$ in the following.
\emph{2.Step:} Proposition \ref{proposition-R-over-Q-phi-N-modules} implies the existence of a unique isomorphism
$$
e:\mathbf{1}(-d)\otimes \Q \xr{\cong} H^{2d}_{dRW}(X/\mathbb{W}(R))\otimes \Q
$$
that induces ${\rm Tr}^{-1}$ after evaluation at $\{1\}$. In other words, there is a unique system $(e_S)_{S}$ with
$e_S\in H^{2d}_{dRW}(X/\mathbb{W}_S(R))\otimes \Q$ such that
\begin{enumerate}
\item $\pi_{S,T}(e_S)=e_T$ for all $T\subset S$, where $\pi_{S,T}$ is induced by the projection $$H^{2d}_{dRW}(X/\mathbb{W}_S(R))\xr{} H^{2d}_{dRW}(X/\mathbb{W}_T(R)),$$
\item $\phi_n(e_S)=n^d\cdot e_{S/n}$ for all $n,S$,
\item $e_{\{1\}}={\rm Tr}^{-1}(1)$.
\end{enumerate}
Our goal is to show
\begin{equation}\label{equation-eS-integral-claim}
e_S\in H^{2d}_{dRW}(X/\mathbb{W}_S(R))
\end{equation}
for every finite truncation set $S$. The strategy of the proof will be to show this for $X=\P^d_R$ first. The next step will be to
prove that (\ref{equation-eS-integral-claim}) is local in $\Spec(R)$. Locally on $\Spec(R)$ we can find generically finite morphisms
to $\P^d$, which can be used together with the explicit description of de Rham-Witt cohomology after completion (Proposition \ref{proposition-de-Rham-Witt-cohomology-limit-blue}) to prove the claim.
\emph{3.Step:} Suppose $X=\P^d_R$. For any finite $S$, we get a morphism of $\mathbb{W}_S(R)$-schemes
$$
g_S:\mathbb{W}_S(\P^d_R)\xr{} \P^d_{\mathbb{W}_S(R)}
$$
induced by $\frac{x_i}{x_j}\mapsto [\frac{x_i}{x_j}]$ on the standard affine covering. The morphisms $g_S$ are compatible with the Frobenius
morphisms provided that the action on $\P^d_{\mathbb{W}_S(R)}$ is given by $\phi_n^*(x_i)=x_i^n$.
We obtain
\begin{multline*}
g^*:H^d(\P^d_{\mathbb{W}_S(R)},\omega_{\P^d_{\mathbb{W}_S(R)}/\mathbb{W}_S(R)})\xr{} H^d(\mathbb{W}_S(\P^d_R),\Omega^d_{\mathbb{W}_S(\P^d_R)/\mathbb{W}_S(R)}) \\
\xr{} H^d(\mathbb{W}_S(\P^d_R),\mathbb{W}_S\Omega^d_{\P^d_R/R}) \xr{} H^{2d}_{dRW}(\P^d_R/\mathbb{W}_S(R)).
\end{multline*}
Note that ${\rm Tr}:H^d(\P^d_{\mathbb{W}_S(R)},\omega_{\P^d_{\mathbb{W}_S(R)}/\mathbb{W}_S(R)})\xr{\cong} \mathbb{W}_S(R)$ and $\delta_S:={\rm Tr}^{-1}(1)$ satisfies
$\phi^*_n(\delta_S)=n^d\cdot \delta_{S/n}$. Therefore $e_S=g^*(\delta_S)$, which proves (\ref{equation-eS-integral-claim}) in the case
of a projective space.
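The relation $\phi^*_n(\delta_S)=n^d\cdot\delta_{S/n}$ used above can be seen (up to the normalization of the generator) on the standard \v{C}ech representative of $\delta_S$ in the affine coordinates $x_1,\dots,x_d$: the substitution $x_i\mapsto x_i^n$ gives
$$
\frac{d(x_1^{n})\wedge\cdots\wedge d(x_d^{n})}{x_1^{n}\cdots x_d^{n}}
=n^{d}\,\frac{dx_1\wedge\cdots\wedge dx_d}{x_1\cdots x_d}.
$$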
\emph{4.Step:} We claim that in order to prove (\ref{equation-eS-integral-claim}) it is sufficient to prove
\begin{equation}\label{equation-eS-integral-on-maximal-ideal}
e_S\in H^{2d}_{dRW}(X/\mathbb{W}_S(R))\otimes_{\mathbb{W}_S(R)}\mathbb{W}_S(R_{\mf{m}})
\end{equation}
for every maximal ideal $\mf{m}$. Indeed, let $\mathscr{F}$ be the coherent sheaf on $\Spec(\mathbb{W}_S(R))$ associated to $M:=H^{2d}_{dRW}(X/\mathbb{W}_S(R))$.
For every $\mf{m}$, we can choose an open affine neighborhood $U_{\mf{m}}\subset \Spec(R)$ and a section $e_{\mf{m}}\in \mathscr{F}(\mathbb{W}_S(U_{\mf{m}}))$ mapping to
$e_S\in M\otimes_{\mathbb{W}_S(R)}\mathbb{W}_S(R_{\mf{m}})$. The section $e_{\mf{m}}$ is unique and the sections $(e_{\mf{m}})_{\mf{m}}$ glue to a section of $\mathscr{F}$ on $\bigcup_{\mf{m}} \mathbb{W}_S(U_{\mf{m}})=\mathbb{W}_S(\Spec(R))$, which proves the claim.
Let $\hat{R}$ be the completion $\varprojlim_{j} R/\mf{m}^j$. For every integer $n$, we have
$$
n\hat{R}\cap R_{\mf{m}} =\bigcap_{j=1}^{\infty} (nR_{\mf{m}} + \mf{m}^j)=nR_{\mf{m}},
$$
and thus $(R_{\mf{m}}\otimes_{\Z} \Q)\cap \hat{R}=R_{\mf{m}}$ as intersection in $\hat{R}\otimes_{\Z}\Q$. Therefore
\begin{equation}\label{equation-eS-integral-on-maximal-ideal-completion}
e_S\in H^{2d}_{dRW}(X/\mathbb{W}_S(R))\otimes_{\mathbb{W}_S(R)}\mathbb{W}_S(\hat{R})
\end{equation}
implies (\ref{equation-eS-integral-on-maximal-ideal}).
\emph{5.Step:} We will show (\ref{equation-eS-integral-on-maximal-ideal-completion}).
We may pass from $\Spec(R)$ to an open neighborhood $\Spec(R')$ of $\mf{m}$ and choose $R'$ such that there exists a generically finite $R'$-morphism
$$
f:X\times_{\Spec(R)}\Spec(R')\xr{} \P^d_{R'}.
$$
The existence is proved in Proposition \ref{proposition-gen-finite-to-Pd} below. Then $e_S=\frac{1}{\deg(f)}f^*(e_S)$, because the
classes $(\frac{1}{\deg(f)}f^*(e_S))_S$ satisfy the properties listed in the second step.
Set $p={\rm char}(R/\mf{m})$. To prove (\ref{equation-eS-integral-on-maximal-ideal-completion}) we may assume that $S=\{1,p,\dots,p^{n-1}\}$. Then Proposition
\ref{proposition-de-Rham-Witt-cohomology-limit-blue}(ii) yields the claim, because for the de Rham cohomology we know that $f^*({\rm Tr}^{-1}(1))$
is divisible by $\deg(f)$.
\end{proof}
\end{proposition}
\begin{proposition}\label{proposition-gen-finite-to-Pd}
Let $Y$ be of finite type over $\Spec(\Z)$. Let $X/Y$ be smooth projective such that $X$ is connected of relative dimension $d$.
For every closed point $y\in Y$ there is an open neighborhood $U$ of $y$, and a generically finite $U$-morphism $X\times_{Y}U\xr{} \P^d_U$.
\end{proposition}
In order to prove Proposition \ref{proposition-gen-finite-to-Pd} we will need a sequence of lemmas.
\begin{lemma}\label{lemma-gen-hyperplane-section}
Let $R$ be a local noetherian ring. Let $X/R$ be a smooth projective $R$-scheme such that every connected component
of $X$ has relative dimension $d\geq 0$ over $\Spec(R)$. Let $\mathscr{L}$ be a relatively ample
line bundle. There is $n>0$ satisfying the following property: for every $k\geq 1$ there is a section $s\in H^0(X,\mathscr{L}^{\otimes kn})$ such that $V(s)$ is smooth of relative dimension $d-1$ over $R$.
\begin{proof}
Let $y\in \Spec(R)$ denote the closed point. For $n\gg 0$ we have $H^i(X_y,\mathscr{L}_{\mid X_y}^{\otimes n})=0$ for all $i>0$. By semicontinuity
we get $H^i(X,\mathscr{L}^{\otimes n})=0$ for all $i>0$, and
$$
H^0(X,\mathscr{L}^{\otimes n}) \xr{} H^0(X_y,\mathscr{L}_{\mid X_y}^{\otimes n})
$$
is surjective. Replace $\mathscr{L}$ by a power such that this holds for all $n\geq 1$.
If the residue field of $R$ is infinite then we can find a section $s_y\in H^0(X_y,\mathscr{L}_{\mid X_y}^{\otimes n})$
such that $V(s_y)$ is smooth of dimension $d-1$.
In the case of a finite residue field we have to use \cite{P} and may have to replace $\mathscr{L}$ by a high enough power again.
Let $s$ be a lifting of $s_y$ to $H^0(X,\mathscr{L}^{\otimes n})$, set $H:=V(s)$. If $d=0$ then $H$ is empty, because it has empty intersection with
the special fibre. For $d\geq 1$, $H$ is flat by the local criterion for flatness, because it has transversal intersection with the special fibre.
Since $H\xr{} \Spec(R)$ is flat and the special fibre is smooth, we conclude that $H$ is smooth. By Chevalley's theorem, $H$ is of relative dimension $d-1$.
\end{proof}
\end{lemma}
\begin{remark}
$H$ is empty if and only if $d=0$.
\end{remark}
\begin{lemma} \label{lemma-gen-finite}
Assumptions as in Lemma \ref{lemma-gen-hyperplane-section}. There is $n\geq 1$ and sections $s_0,\dots,s_d\in H^0(X,\mathscr{L}^{\otimes n})$
such that
\begin{enumerate}
\item $\bigcap_{i=0}^d V(s_i)$ is empty,
\item $\bigcap_{i=1}^{d} V(s_i)$ is finite over $R$ and non-empty.
\end{enumerate}
\begin{proof}
Let $m$ and $s\in H^0(X,\mathscr{L}^{\otimes m})$ be such that $H=V(s)$ is a smooth hypersurface as in Lemma \ref{lemma-gen-hyperplane-section}.
Without loss of generality $m=1$.
For $k\gg 0$, we get a surjective map
$$
H^0(X,\mathscr{L}^{\otimes k})\xr{} H^0(H,\mathscr{L}_{\mid H}^{\otimes k}).
$$
By induction on $d$ we can find $s_{H,0},\dots,s_{H,d-1}\in H^0(H,\mathscr{L}_{\mid H}^{\otimes k})$, for some $k\geq 1$, satisfying the desired properties for $H$. Note that $s^j_{H,0},\dots,s^j_{H,d-1}$, for all $j\geq 1$, also satisfy the properties, hence we may suppose $k\gg 0$.
Choose some liftings $s_0,s_1,\dots,s_{d-1}\in H^0(X,\mathscr{L}^{\otimes k})$. Then $s_0,s_1,\dots,s_{d-1},s^k$ satisfy the required properties.
\end{proof}
\end{lemma}
\begin{proof}[Proof of Proposition \ref{proposition-gen-finite-to-Pd}]
Let $\mathscr{L}$ be a relatively ample line bundle. Apply Lemma \ref{lemma-gen-finite} to the local ring of $Y$ at $y$. The sections $s_0,\dots,s_d$
extend to $X\times_{Y}U$ for an open affine neighborhood $U$ of $y$. After possibly shrinking $U$ we have $\bigcap_{i=0}^d V(s_i)=\emptyset$ so that
$$
X\times_{Y}U \xr{} \P^d_{U},
$$
defined by $s_0,\dots,s_d$, is well-defined. The second property of Lemma \ref{lemma-gen-finite} implies that the morphism is generically finite.
\end{proof}
\begin{corollary}\label{corollary-Poincare-duality-made-simple}
Let $R$ be a smooth $\Z$-algebra. Let $X\xr{} \Spec(R)$ be a smooth projective morphism such that $H^*_{dR}(X/R)$ is a projective $R$-module.
Suppose that $X$ is connected of relative dimension $d$.
If the canonical map
\begin{equation}\label{equation-poincare-duality-deRham}
H^{i}_{dR}(X/R)\xr{} \Hom_R(H^{2d-i}_{dR}(X/R),R)
\end{equation}
is an isomorphism, then
\begin{equation} \label{equation-poincare-duality}
H^{i}_{dRW}(X/\mathbb{W}(R))\xr{\cong} \underline{{\rm Hom}}(H^{2d-i}_{dRW}(X/\mathbb{W}(R)),\mathbf{1}(-d)).
\end{equation}
\begin{proof}
In view of Proposition \ref{proposition-trace-map} and (\ref{equation-product-varities-C-R}) we get a morphism in $\mathcal{C}_R$:
$$
H^{i}_{dRW}(X/\mathbb{W}(R))\otimes H^{2d-i}_{dRW}(X/\mathbb{W}(R)) \xr{} H^{2d}_{dRW}(X/\mathbb{W}(R)) \xr{} \mathbf{1}(-d)
$$
inducing (\ref{equation-poincare-duality}). Now, applying $T$ to (\ref{equation-poincare-duality}) yields (\ref{equation-poincare-duality-deRham}), which is an isomorphism by assumption; hence the claim follows from Proposition \ref{proposition-T-conservative}.
\end{proof}
\end{corollary}
\begin{remark}
Note that the map
\begin{equation*}
H^{i}_{dR}(X/R)\xr{} \Hom_R(H^{2d-i}_{dR}(X/R),R)
\end{equation*}
induced by the pairing
$$
H^i_{dR}(X/R)\otimes_R H^{2d-i}_{dR}(X/R) \xr{} H^{2d}_{dR}(X/R)\xr{} R
$$
is an isomorphism if for every closed point $y\in \Spec(R)$
the Hodge-to-de-Rham spectral sequence for the fibre at $y$,
\begin{equation} \label{equation-Hodge-to-de-Rham-remark}
H^{j}(X_y,\Omega^i_{X_y/k(y)})\Rightarrow H^{i+j}_{dR}(X_y/k(y)),
\end{equation}
degenerates.
Indeed, since the de Rham cohomology is locally free, it is
also stable under base change and it suffices to show that, for every closed
point $y\in \Spec(R)$, the Poincar\'e pairing for the fibre at $y$,
$$
H^i_{dR}(X_y/k(y))\otimes_{k(y)} H^{2d-i}_{dR}(X_y/k(y)) \xr{} H^{2d}_{dR}(X_y/k(y))\xr{} k(y),
$$
is non-degenerate. This follows easily from the degeneration of the Hodge-to-de-Rham spectral sequence and Serre duality.
The degeneration of the spectral sequence (\ref{equation-Hodge-to-de-Rham-remark}) is known in the following cases:
\begin{itemize}
\item $H^j(X,\Omega^i_{X/R})$ is torsion-free for all $i,j$,
\item $\dim X_{y}\leq {\rm char}(k(y))$.
\end{itemize}
For example, abelian schemes and relative curves over $R$ satisfy these conditions.
\end{remark}
\section{Bag of little bootstraps}
In this section we consider the core inferential problem of evaluating
the quality of point estimators, a problem that is addressed by the
bootstrap [\citet{Efron}] and related resampling-based methods. The material
in this section summarizes research described in \citet{Kleiner}.
The usual implementation of the bootstrap involves the
``computationally-intensive''
procedure of resampling the original data with replacement, applying
the estimator
to each such bootstrap resample, and using the resulting distribution
as an
approximation to the true sampling distribution and thereby computing
(say) a confidence interval. A~notable virtue of this approach in the
setting of modern distributed computing platforms is that it readily
parallelizes -- each bootstrap resample can be processed independently
by the
processors of a ``cloud computer.'' Thus in principle it should be possible
to compute bootstrap confidence intervals in essentially the same
runtime as is
required to compute the point estimate. In the massive data setting, however,
there is a serious problem: each bootstrap resample is itself massive (roughly
0.632 times the original dataset size). Processing such resampled
datasets can
overwhelm available computational resources; for example, with a
terabyte of
data it may not be straightforward to send a few hundred resampled datasets,
each of size 632 gigabytes, to a set of distributed processors on a network.
An appealing alternative is to work with subsets of data, an instance
of the
divide-and-conquer paradigm. Existing examples of this alternative
approach include
subsampling [\citet{Politis-etal}] and the $m$-out-of-$n$
bootstrap [\citet{Bickel}].
In both cases, the idea is that a dataset of size $n$ can be processed into
multiple sets of size $m$ (there are ${n \choose m}$ such subsets in
the case
of sampling without replacement), and the estimator computed on each set.
This yields fluctuations in the values of the point estimate. The challenge
is the one referred to earlier -- these fluctuations are on the wrong scale,
being based on datasets that are smaller than the original dataset.
Both subsampling
and the $m$-out-of-$n$ bootstrap assume that an analytical correction factor
is available (e.g., $\sqrt{m/n}$) to rescale the confidence intervals
obtained from the sampled subsets. This renders these procedures
somewhat less
``user-friendly'' than the bootstrap, which requires no such correction factor.
There are situations in which the bootstrap is known to be inconsistent,
and where subsampling and the $m$-out-of-$n$ bootstrap are consistent;
indeed, the search for a broader range of consistency results was the original
motivation for exploring these methods. On the other hand, finite sample
results do not necessarily favor the consistent procedures over the
bootstrap [see, e.g., \citet{Samworth}]. The intuition is as follows.
For small values of $m$, the procedure performs poorly, because each
estimate is highly noisy. As $m$ increases, the noise decreases and
performance improves. For large values of $m$, however, there are too
few subsamples, and performance again declines. In general it is difficult
to find the appropriate value of $m$ for a given problem.
In recent work, \citet{Kleiner} have explored a new procedure, the
``Bag of Little
Bootstraps'' (BLB), which targets computational efficiency, but which also
alleviates some of the difficulties of subsampling, the $m$-out-of-$n$
bootstrap and the bootstrap, essentially by combining aspects of these
procedures. The basic idea of BLB is as follows. Consider a subsample
of size $m$ (taken either with replacement or without replacement).
Note that this subsample is itself a random sample from the population,
and thus the empirical distribution formed from this subsample is an
approximation to the population distribution. It is thus reasonable
to sample from this empirical distribution as a plug-in proxy for
the population. In particular, there is nothing preventing us from
sampling $n$ times from this empirical distribution (rather than $m$
times). That is, we can implement the bootstrap on the correct scale
using this subsample, simply by using it to generate multiple bootstrap
samples of size $n$. Now, the resulting confidence interval will be
a bona fide bootstrap confidence interval, but it will be noisy, because
it is based on a (small) subsample. But we can proceed as in subsampling,
repeating the procedure multiple times with randomly chosen subsamples.
We obtain a set of bootstrap confidence intervals, which we combine
(e.g., by averaging) to yield the overall bootstrap confidence interval.
\begin{figure}
\includegraphics{sp17f01.eps}
\caption{The BLB procedure. From the original dataset, $\{X_1,\ldots,
X_n\}$, $s$ subsamples of size $m$ are formed. From each of these
subsamples, $r$ bootstrap resamples are formed, each of which are
conceptually of size~$n$ (but would generally be stored as weighted
samples of size $m$). The resulting bootstrap estimates of risk are
averaged. In a parallel implementation of BLB, the boxes in the
diagram would correspond to separate processors; moreover, the
bootstrap resampling within a box could also be parallelized.}
\label{figblb}
\end{figure}
The procedure is summarized in Figure~\ref{figblb}. We see that
BLB is composed of two nested procedures, with the inner procedure
being the bootstrap applied to a subsample, and the outer procedure
being the combining of these multiple bootstrap estimates. From a
computational point of view, the BLB procedure can be mapped onto a
distributed computing architecture by letting each subsample be processed
by a separate processor. Note that BLB has the virtue that the subsamples
sent to each processor are small (of~size~$m$). Moreover, although
the inner loop of bootstrapping conceptually creates multiple resampled
datasets of size $n$, it is not generally necessary to create actual
datasets of size $n$; instead we form weighted datasets of size $m$.
(Also, the weights can be obtained as draws from a Poisson distribution
rather than via explicit multinomial sampling.) Such is the case, for
example, for estimators that are plug-in functionals of the empirical
distribution.
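To make the procedure concrete, here is a minimal sketch of BLB for a generic estimator; the function and parameter names, the default values of $s$, $r$ and $\gamma$, and the use of percentile intervals are illustrative choices of ours rather than prescriptions from \citet{Kleiner}. Each conceptual size-$n$ resample is stored as multinomial weights on the $m$ subsampled points, as discussed above.
\begin{verbatim}
import numpy as np

def blb_interval(data, estimator, s=20, r=100, gamma=0.7,
                 alpha=0.05, seed=None):
    """Bag of Little Bootstraps confidence interval (sketch).

    `estimator(sample, weights)` must return a scalar computed from the
    weighted empirical distribution; the weights sum to n, so a size-m
    weighted subsample stands in for a conceptual size-n resample.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    m = int(n ** gamma)                    # subsample size m = n^gamma
    lowers, uppers = [], []
    for _ in range(s):                     # outer loop: s subsamples
        subsample = data[rng.choice(n, size=m, replace=False)]
        stats = []
        for _ in range(r):                 # inner loop: r bootstrap resamples
            # multinomial counts = weights of a conceptual size-n resample
            weights = rng.multinomial(n, np.full(m, 1.0 / m))
            stats.append(estimator(subsample, weights))
        lowers.append(np.percentile(stats, 100 * alpha / 2))
        uppers.append(np.percentile(stats, 100 * (1 - alpha / 2)))
    # combine the s per-subsample intervals by averaging their endpoints
    return np.mean(lowers), np.mean(uppers)

# illustrative use: interval for the mean of a large synthetic sample
data = np.random.default_rng(0).normal(size=100_000)
print(blb_interval(data, lambda x, w: np.average(x, weights=w)))
\end{verbatim}
In a distributed implementation each iteration of the outer loop would run on a separate processor, exactly as in Figure~\ref{figblb}.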
An example taken from \citet{Kleiner} serves to illustrate the very
substantial computational gains that can be reaped from this approach.
Consider computing bootstrap confidence intervals for the estimates
of the individual components of the parameter vector in logistic
regression, where the covariate vector has dimension 3000 and there
are 6\,000\,000 data points, and where a distributed computing platform
involving 80 cores (8 cores on each of ten processors) is available.
To implement the bootstrap at this scale, we can parallelize the
logistic regression, and sequentially process the bootstrap resamples.
Results from carrying out such a procedure are shown as the dashed
curve in Figure~\ref{figblb-results}, where we see that the processing
of each bootstrap resample requires approximately 2000 seconds of
processing time.
\begin{figure}
\includegraphics{sp17f02.eps}
\caption{Univariate confidence intervals for a logistic regression
on 6\,000\,000 data points using an 80-core distributed computing platform.
The data were generated synthetically and thus the ground-truth sampling
distribution was available via Monte Carlo; the $y$-axis is a measure
of relative error with respect to this ground truth. See \citet{Kleiner}
for further details.}
\label{figblb-results}
\end{figure}
The other natural approach is to implement a parallel version of BLB
as we have discussed, where each processor executes the bootstrap on
$m(n) = n^\gamma$ points via weighted logistic regressions. The results
are also shown in Figure~\ref{figblb-results}, as a single dot in the
lower-left corner of the figure. Here $\gamma$ is equal to $0.7$.
We see that BLB has finished in less time than is required for a single
iteration of the bootstrap on the full dataset, and indeed in less than
750 seconds has delivered an accuracy that is significantly better than
that obtained by the bootstrap after 15\,000 seconds.
Thus we see that there is a very strong synergy between a particular way
to organize bootstrap-style computation and the capabilities of modern
distributed computing platforms. Moreover, although the development of
BLB was motivated by the computational imperative, it can be viewed as
a novel statistical procedure to be compared to the bootstrap and
subsampling according to more classical criteria; indeed, \citet{Kleiner}
present experiments that show that even on a single processor BLB converges
faster than the bootstrap and is less sensitive to the choice of $m$
than subsampling and the $m$-out-of-$n$ bootstrap.
There is much more to be done along these lines. For example, stratified
sampling and other sophisticated sampling schemes can likely be mapped
in useful ways to distributed platforms. For dependent data, one wants to
resample in ways that respect the dependence, and this presumably favors
certain kinds of data layout and algorithms over others. In general, for
statistical inference not to run aground on massive datasets, we need
statistical thinking to embrace computational thinking.
\section{Divide-and-conquer matrix factorization}
Statistics has long exploited matrix analysis as a core computational tool,
with linear models, contingency tables and multivariate analysis providing
well-known examples. Matrices continue to be the focus of much recent
computationally-focused research, notably as representations of graphs
and networks and in collaborative filtering applications. At the core
of a significant number of analysis procedures is the notion of
\emph{matrix factorization}, with the singular value decomposition
(SVD) providing a canonical example. The SVD in particular yields
low-rank representations of matrices, which often map directly to
modeling assumptions.
Many matrix factorization procedures, including the SVD, have cubic
algorithmic complexity in the row or column dimension of the matrix.
This is overly burdensome in many applications, in statistics and beyond,
and there is a strong motivation for applied linear algebra researchers
to devise efficient ways to exploit parallel and distributed hardware
in the computation of the SVD and other factorizations. One could
take the point of view that this line of research is outside of the purview
of statistics; that statisticians should simply keep an eye on developments.
But this neglects the fact that as problems grow in size, it is the particular
set of modeling assumptions at play that determines whether efficient
algorithms are available, and in particular whether computationally-efficient
approximations can be obtained that are statistically meaningful.
As a particularly salient example, in many statistical applications
involving large-scale matrices it is often the case that many entries
of a matrix are missing. Indeed, the quadratic growth in the number of
entries of a matrix is often accompanied by a linear growth in the rate
of observation of matrix entries, such that at large scale the vast majority
of matrix entries are missing. This is true in many large-scale collaborative
filtering applications, where, for example, most individuals will have
rated a small fraction of the overall set of items (e.g., books or movies).
It is also true in many graph or network analysis problems, where each node
is connected to a vanishingly small fraction of the other nodes.
In recent work, \citet{Mackey} have studied a divide-and-conquer methodology
for matrix factorization that aims to exploit parallel hardware platforms.
Their framework, referred to as \emph{Divide-Factor-Combine} (DFC),
is rather simple from an algorithmic point of view -- a matrix is
partitioned according to its columns or rows, matrix factorizations
are obtained (in parallel) for each of the submatrices using a
``base algorithm'' that is one of the standard matrix factorization
methods, and the factorizations are combined to obtain an overall factorization
(see Figure~\ref{figdfc-pipeline}).
\begin{figure}
\includegraphics{sp17f03.eps}
\caption{The DFC pipeline. The matrix $M$ is partitioned according to its
columns and the resulting submatrices $\{C_i\}$ are factored in parallel.
The factored forms, $\{\hat{C}_i\}$, are then transmitted to a central
location where they are combined into an overall factorization $\hat
{L}^{\mathrm{proj}}$.}
\label{figdfc-pipeline}
\end{figure}
The question is how to design such a pipeline so as to retain the statistical
guarantees of the base algorithm, while providing computational speed-ups.
Let us take the example of \emph{noisy matrix completion} [see,
e.g., \citet{CandesPlan2010}], where we model a matrix $M \in\mathbb
{R}^{m \times n}$
as the sum of a low-rank matrix $L_0$ (with rank $r \ll\min(m,n)$)
and a
noise matrix $Z$:
\[
M = L_0 + Z,
\]
and where only a small subset of the entries of $M$ are observed. Letting
$\Omega$ denote the set of indices of the observed entries, the goal
is to
estimate $L_0$ given $\{M_{ij}\dvtx (i,j) \in\Omega\}$. This goal can be
formulated in terms of an optimization problem:
\begin{eqnarray}
&\displaystyle\min_L \qquad\operatorname{rank}(L)&
\nonumber\\[-8pt]\\[-8pt]
&\mbox{subject to } \displaystyle \sum_{(i,j) \in\Omega} (L_{ij} -
M_{ij})^2 \leq \Delta^2,&\nonumber
\end{eqnarray}
for a specified value $\Delta$. This problem is computationally intractable,
so it is natural to consider replacing the rank function with its tightest
convex relaxation, the nuclear norm, yielding the following convex
optimization problem [\citet{CandesPlan2010}]:
\begin{eqnarray}\label{eqnuclear-norm}
&\displaystyle\min_L \qquad\|{L}\|_*&
\nonumber\\[-8pt]\\[-8pt]
&\mbox{subject to } \displaystyle \sum_{(i,j) \in\Omega} (L_{ij} -
M_{ij})^2 \leq \Delta^2,&\nonumber
\end{eqnarray}
where $\|{L}\|_*$ denotes the nuclear norm of $L$ (the sum of the singular
values of $L$).
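To give a concrete sense of what such a ``base algorithm'' may look like, the following is a minimal sketch of a standard singular-value soft-thresholding iteration for a penalized (Lagrangian) variant of equation (\ref{eqnuclear-norm}); the penalty parameter \texttt{lam} and the fixed iteration count are illustrative, and this is not necessarily the solver used in the work discussed here.
\begin{verbatim}
import numpy as np

def nuclear_norm_complete(M_obs, mask, lam=1.0, n_iter=200):
    """Penalized nuclear-norm matrix completion (soft-thresholding sketch).

    Approximately minimizes
        0.5 * sum_{(i,j) observed} (L_ij - M_ij)^2  +  lam * ||L||_*
    by alternately imputing the unobserved entries with the current
    estimate and soft-thresholding the singular values.
    """
    L = np.zeros_like(M_obs, dtype=float)
    for _ in range(n_iter):
        filled = np.where(mask, M_obs, L)        # impute missing entries
        U, sigma, Vt = np.linalg.svd(filled, full_matrices=False)
        sigma = np.maximum(sigma - lam, 0.0)     # soft-threshold the spectrum
        L = (U * sigma) @ Vt
    return L
\end{verbatim}
Each iteration requires a full SVD, which is precisely the cubic-cost step that the divide-and-conquer strategy below distributes over column blocks.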
\citet{CandesPlan2010} have provided conditions under which the
solution to
equation (\ref{eqnuclear-norm}) recovers the matrix $L_0$ despite the
potentially
large number
of unobserved entries of~$M$. These conditions involve a structural
assumption on the matrix $L_0$ (that its singular vectors should not be
too sparse or too correlated) and a sampling assumption for~$\Omega$
(that the entries of the matrix are sampled uniformly at random).
We will refer to the former assumption as ``$(\mu,r)$-coherence,''
referring to \citet{CandesPlan2010} for technical details.
Building on work by \citet{Recht}, \citet{Mackey} have proved
the following theorem, in the spirit of \citet{CandesPlan2010}
but with weaker conditions:
\begin{theorem}
\label{thmconvex-mc-noise}
Suppose that $L_0$ is $(\mu,r)$-coherent and $s$ entries of $M$ are
observed at locations $\Omega$ sampled uniformly without replacement,
where
\[
s \geq32 \mu r(m+n) \log^2(m+n).
\]
Then, if $\sum_{(i,j) \in\Omega} (M_{ij} - L_{0,ij})^2 \leq\Delta
^2$ a.s.,
the minimizer $\hat{L}$ of equation (\ref{eqnuclear-norm}) satisfies
\[
\|{L_0 - \hat{L}}\|_F \leq c_e \sqrt{mn}
\Delta,
\]
with high probability, where $c_e$ is a universal constant.
\end{theorem}
Note in particular that the required sampling rate $s$ is a vanishing fraction
of the total number of entries of $M$.
Theorem~\ref{thmconvex-mc-noise} exemplifies the kind of theoretical guarantee
that one would like to retain under the DFC framework. Let us therefore consider
a particular example of the DFC framework, referred to as ``\textsc{DFC-Proj}''
by \citet{Mackey}, in which the ``divide'' step consists in the partitioning
of the columns of $M$ into $t$ submatrices each having $l$ columns
(assuming for simplicity that $l$ divides $n$), the ``factor'' step
involves solving the nuclear norm minimization problem in equation
(\ref{eqnuclear-norm})
for each submatrix (in parallel), and the ``combine'' step consists of a
projection step in which the $t$ low-rank approximations are projected
onto a
common subspace.\footnote{In particular, letting
$\{\hat{C}_1, \hat{C}_2,\ldots, \hat{C}_t\}$ denote
the $t$ low-rank approximations, we can project these submatrices
onto the column space of any one of these submatrices, for example,
$\hat{C}_1$. Mackey, Talwalkar and Jordan
(\citeyear{Mackey}) also propose an ``ensemble'' version of this procedure
in which
the low-rank submatrices are projected onto each other (i.e., onto
$\hat{C}_k$,
for $k = 1,\ldots, t$) and the resulting projections are averaged.}
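A minimal sketch of \textsc{DFC-Proj} along these lines follows. It assumes a base matrix-completion routine \texttt{base\_mc} (for instance, the penalized solver sketched earlier), projects every factored block onto the column space of the first one (one of the combination schemes described above), and elides the parallel execution of the per-block factorizations; the function names are ours.
\begin{verbatim}
import numpy as np

def dfc_proj(M_obs, mask, t, base_mc):
    """Divide-Factor-Combine with column projection (DFC-Proj sketch).

    M_obs : matrix with observed entries (arbitrary values elsewhere)
    mask  : boolean matrix marking the observed positions
    t     : number of column blocks
    base_mc(M_block, mask_block) : base completion solver for one block
    """
    n = M_obs.shape[1]
    blocks = np.array_split(np.arange(n), t)                      # "divide"
    C_hat = [base_mc(M_obs[:, b], mask[:, b]) for b in blocks]    # "factor"
    # "combine": project each block onto the column space of C_hat[0]
    Q, _ = np.linalg.qr(C_hat[0])
    return np.hstack([Q @ (Q.T @ C) for C in C_hat])
\end{verbatim}
The ensemble variant mentioned above would instead project onto each $\hat{C}_k$ in turn and average the resulting estimates.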
Retaining a theoretical guarantee for \textsc{DFC-Proj} from that of
its base algorithm essentially involves ensuring that the
$(\mu,r)$-coherence of the overall matrix is not increased very much
in the
random selection of submatrices in the ``divide'' step, and that the
low-rank approximations obtained in the ``factor'' step are not far
from the low-rank approximation that would be obtained from the overall
matrix.
In particular, \citet{Mackey} establish the following theorem:\footnote{We
have simplified the statement of the theorem to streamline the presentation;
see Mackey, Talwalkar and Jordan (\citeyear{Mackey}) for the full result.}
\begin{theorem}
Suppose that $L_0$ is $(\mu,r)$-coherent and that $s$ entries of $M$ are
observed at locations $\Omega$ sampled uniformly without replacement.
Then, if $\sum_{(i,j) \in\Omega} (M_{ij} - L_{0,ij})^2 \leq\Delta
^2$ a.s.,
and the base algorithm in the ``factor'' step of \textsc{DFC-Proj} involves
solving the optimization problem of equation (\ref{eqnuclear-norm}),
it suffices to choose
\[
l \geq {c\mu^2 r^2(m+n)n \log^2(m+n)}/
\bigl(s\varepsilon^2\bigr)
\]
columns in the ``divide'' step to achieve
\[
\|{L_0 - \hat{L}}\|_F \leq(2+\varepsilon)
c_e \sqrt{mn} \Delta,
\]
with high probability.
\end{theorem}
Thus, the \textsc{DFC-Proj} algorithm achieves essentially the rate
established in
Theorem~\ref{thmconvex-mc-noise} for the nuclear norm minimization algorithm.
Moreover, if we set $s=\omega((m+n)\log^2(m+n))$, which is slightly
faster than
the lower bound in Theorem~\ref{thmconvex-mc-noise}, then we see that
$l/n \rightarrow0$.
That is, \textsc{DFC-Proj} succeeds even if only a vanishingly small
fraction of
the columns are sampled to form the submatrices in the ``divide'' step.
Figure~\ref{figdfc-proj-results} shows representative numerical results
in an experiment on matrix completion reported by \citet{Mackey}.
\begin{figure}
\begin{tabular}{@{}cc@{}}
\includegraphics{sp17f04a.eps}
& \includegraphics{sp17f04b}\\
(a) & (b)
\end{tabular}
\caption{Numerical experiments on matrix completion with DFC.
\textup{(a)} Accuracy for a state-of-the-art baseline algorithm
(``Base-MC'') and \textsc{DFC-Proj} (see Mackey, Talwalkar and Jordan
(\citeyear{Mackey}) for details),
as a function of the percentage of revealed entries.
\textup{(b)} Runtime as a function of matrix dimension
(these are square matrices, so $m = n$).}
\label{figdfc-proj-results}
\end{figure}
The leftmost figure shows that the accuracy (measured as root mean
square error) achieved by the ensemble version of \textsc{DFC-Proj}
is nearly the same as that of the base algorithm. The rightmost
figure shows that this accuracy is obtained at a fraction of the
computational cost required by the baseline algorithm.
\section{Convex relaxations}
The methods that we have discussed thus far provide a certain degree
of flexibility in the way inferential computations are mapped onto
a computing infrastructure, and this flexibility implicitly defines
a tradeoff between speed and accuracy. In the case of BLB the flexibility
inheres in the choice of $m$ (the subsample size) and in the case of DFC
it is the choice of $l$ (the submatrix dimension). In the work that
we discuss in this section the goal is to treat such tradeoffs explicitly.
To achieve this, \citet{Chandrasekaran} define a notion of ``algorithmic
weakening,'' in which a hierarchy of algorithms is ordered by both
computational efficiency and statistical efficiency. The problem
that they address is to develop a quantitative relationship among
three quantities: the number of data points, the runtime and the
statistical risk.
\citet{Chandrasekaran} focus on the denoising problem, an important
theoretical testbed in the study of high-dimensional
inference [cf. \citet{DonJ1998}]. The model is the following:
\begin{equation}\label{eqdenoise}
\by= \mathbf{x}^\ast+ \sigma\bz,
\end{equation}
where $\sigma> 0$, the noise vector $\bz\in\R^p$ is standard normal,
and the unknown parameter $\mathbf{x}^\ast$ belongs to a known subset
$\bs
\subset\R^p$.
The problem is to estimate $\mathbf{x}^\ast$ based on $n$ independent
observations
$\{\by_i\}_{i=1}^n$ of $\by$.
Consider a shrinkage estimator given by a projection of the sufficient
statistic $\bar{\by} = \frac{1}{n} \sum_{i=1}^n \by_i$ onto a
convex set
$\bc$ that is an outer approximation to $\bs$, that is, $\bs\subset\bc$:
\begin{equation}\label{eqshrink}
\hat{\bx}_n(\bc) = \arg\min_{\bx\in\R^p}\quad
\frac{1}{2}\llVert \bar{\by} - \bx\rrVert _{\ell_2}^2
\quad\mbox{s.t.}\quad \bx\in\bc.
\end{equation}
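As a rough illustration of the estimator in equation (\ref{eqshrink}), the following sketch (in Python, using NumPy) projects the sufficient statistic $\bar{\by}$ onto a deliberately simple choice of outer approximation, an $\ell_2$ ball of radius $R$, for which the projection is available in closed form; any convex set $\bc$ equipped with a projection oracle could be substituted.
\begin{verbatim}
import numpy as np

# Sketch: shrinkage by projecting the sample mean onto an l2 ball of
# radius R (a stand-in for a generic convex outer approximation C).
def shrinkage_estimate(Y, R):
    ybar = Y.mean(axis=0)                    # sufficient statistic
    norm = np.linalg.norm(ybar)
    return ybar if norm <= R else (R / norm) * ybar

rng = np.random.default_rng(0)
x_star = np.ones(50) / np.sqrt(50)           # a unit-norm signal in S
Y = x_star + 0.5 * rng.standard_normal((200, 50))
print(np.linalg.norm(shrinkage_estimate(Y, 1.0) - x_star))
\end{verbatim}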
The procedure studied by \citet{Chandrasekaran} consists of a \emph{set}
of such projections, $\{\hat{\bx}_n(\bc_i)\}$, obtained from a hierarchy
of convex outer approximations,
\[
\bs\subseteq\cdots\subseteq\bc_3 \subseteq\bc_2
\subseteq\bc_1.
\]
The intuition is that for $i < j$, the estimator $\{\hat{\bx}_n(\bc
_i)\}$
will exhibit poorer statistical performance than $\{\hat{\bx}_n(\bc
_j)\}$,
given that $\bc_i$ is a looser approximation to $\bs$ than $\bc_j$,
but that
$\bc_i$ can be chosen to be a simpler geometrical object than $\bc
_j$, such
that it is computationally cheaper to optimize over $\bc_i$, and thus more
samples can be processed by the estimator $\{\hat{\bx}_n(\bc_i)\}$
in a given time frame, offsetting the increase in statistical risk.
Indeed, such \emph{convex relaxations} have been widely used to give
efficient approximation algorithms for intractable problems in computer
science [\citet{Vaz2004}], and much is known about the decrease in runtime
as one moves along the hierarchy of relaxations. To develop a time/data
tradeoff, what is needed is a connection to statistical risk as one moves
along the hierarchy.
\citet{Chandrasekaran} show that convex geometry provides such a connection.
Define the \emph{Gaussian squared-complexity} of a set $\mathcal{D}\subseteq\R^p$
as follows:
\[
\G(\mathcal{D}) = \mathbb{E} \Bigl[\sup_{\bolds{\delta} \in\mathcal{D}} \langle\bolds {
\delta}, \bz\rangle^2 \Bigr],
\]
where the expectation is with respect to $\bz\sim\mathcal{N}(0,I_{p
\times p})$.
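When a set is specified explicitly by a finite collection of directions (for instance, a discretization of a tangent cone intersected with the unit ball), its Gaussian squared-complexity can be approximated by straightforward Monte Carlo. The following sketch is purely illustrative; the finite discretization is an assumption made here for computability and is not part of the formalism of \citet{Chandrasekaran}.
\begin{verbatim}
import numpy as np

# Monte Carlo sketch of G(D) for a set D given by finitely many
# directions (rows of D): average over z ~ N(0, I) of max_d <d, z>^2.
def gaussian_squared_complexity(D, num_samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((num_samples, D.shape[1]))
    return np.mean(np.max((Z @ D.T) ** 2, axis=1))
\end{verbatim}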
Given a closed convex set $\bc\subseteq\R^p$ and a point $\ba\in\bc$,
define the
\emph{tangent cone} at $\ba$ with respect to $\bc$ as
\begin{equation}\label{eqtcone}
T_{\bc}(\ba) = \mathrm{cone}\{\bb-\ba\mid\bb\in\bc\},
\end{equation}
where $\mathrm{cone}(\cdot)$ refers to the conic hull of a set obtained
by taking nonnegative linear combinations of elements of the set.
(See Figure~\ref{figrelax} for a depiction of the geometry.)
\begin{figure}[b]
\includegraphics{sp17f05.eps}
\caption{(left) A signal set $\bs$ containing the
true signal $\mathbf{x}^\ast$; (middle) Two convex constraint sets
$\bc$ and~$\bc'$, where $\bc$ is the convex hull of $\bs$ and $\bc'$ is
a relaxation that is more efficiently computable than $\bc$; (right)
The tangent cone $T_\bc(\mathbf{x}^\ast)$ is contained inside the
tangent cone $T_{\bc'}(\mathbf{x}^\ast)$. Consequently,\vspace*{1pt} the Gaussian
squared-complexity $\G(T_\bc(\mathbf {x}^\ast) \cap B_{\ell_2}^p)$ is
smaller than the complexity $\G(T_{\bc'}(\mathbf{x}^\ast) \cap B_{\ell
_2}^p)$, so that the estimator $\hat{\bx}_n(\bc)$ requires fewer
samples than the estimator $\hat{\bx}_n(\bc')$ for a risk of at most
$1$.} \label{figrelax}
\end{figure}
Let\vspace*{1pt} $B_{\ell_2}^p$ denote the $\ell_2$ ball in $\R^p$. \citet{Chandrasekaran}
establish the following theorem linking the Gaussian squared-complexity
of tangent cones and the statistical risk:
\begin{theorem}
\label{thmdenoise}
For $\mathbf{x}^\ast\in\bs\subset\R^p$ and with $\bc\subseteq\R
^p$ convex such
that $\bs\subseteq\bc$, we have the error bound
\[
\mathbb{E} \bigl[\bigl\|\mathbf{x}^\ast- \hat{\bx}_n(\bc)
\bigr\|_{\ell
_2}^2 \bigr] \leq\frac{\sigma^2}{n} \G
\bigl(T_\bc\bigl(\mathbf{x}^\ast\bigr) \cap
B_{\ell_2}^p\bigr).
\]
\end{theorem}
This risk bound can be rearranged to yield a way to estimate the number
of data points needed to achieve a given level of risk. In particular,
the theorem implies that if
\begin{equation}\label{eqsample-complexity}
n \geq\sigma^2 \G\bigl(T_\bc\bigl(\mathbf{x}^\ast
\bigr) \cap B^p_{\ell_2}\bigr),
\end{equation}
then $\mathbb{E} [\|\mathbf{x}^\ast- \hat{\bx}_n(\bc)\|
_{\ell_2}^2
] \leq1$.
The overall implication is that as the number of data points $n$ grows,
we can back off to computationally cheaper estimators and still control
the statistical risk, simply by choosing the largest $\bc_i$ such
that the right-hand side of equation (\ref{eqsample-complexity}) is
less than $n$.
This yields a time/data tradeoff.
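Concretely, the selection rule can be read as the following schematic sketch, in which the complexities $\G(T_{\bc_i}(\mathbf{x}^\ast) \cap B_{\ell_2}^p)$ are assumed to be known or estimated (e.g., by Monte Carlo as above): given $n$, walk down the hierarchy starting from the loosest, cheapest set and stop at the first one whose sample-size requirement is satisfied.
\begin{verbatim}
# Sketch: pick the loosest (cheapest) relaxation C_i whose requirement
# sigma^2 * G_i <= n is met; complexities are listed starting from C_1
# (the loosest set, with the largest tangent-cone complexity).
def loosest_feasible_relaxation(n, sigma2, complexities):
    for i, G in enumerate(complexities, start=1):
        if n >= sigma2 * G:
            return i
    return len(complexities)   # fall back to the tightest set
\end{verbatim}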
To exemplify the kinds of concrete tradeoffs that can be obtained via
this formalism, \citet{Chandrasekaran} consider a stylized sparse principal
component analysis problem, modeled using the following signal set:
\[
\bs= \bigl\{\Pi M \Pi' \mid \Pi \mbox{ is a } \sqrt{p} \times
\sqrt{p} \mbox{ permutation matrix}\bigr\},
\]
where the top-left $k \times k$ block of $M \in\R^{\sqrt{p} \times
\sqrt{p}}$
has entries equal to $\sqrt{p}/k$ and all other entries are zero.
In Table~\ref{tabsparse-pca} we show the runtimes and sample sizes
associated with
\begin{table}
\tablewidth=260pt
\caption{Time-data tradeoffs for the sparse PCA problem, expressed as
a function of the matrix dimension~$p$. See Chandrasekaran and Jordan (\citeyear{Chandrasekaran}) for
details.}
\label{tabsparse-pca}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lll@{}}
\hline
$\bc$ & Runtime & $n$ \\
\hline
conv($\bs$) & super-poly($p$) & \mbox{$\sim$}$p^{1/4} \log(p)$ \\[2pt]
Nuclear norm ball & $p^{3/2}$ & \mbox{$\sim$}$p^{1/2}$ \\
\hline
\end{tabular*}
\end{table}
two different convex relaxations of $\bs$: the convex hull of $\bs$ and the
nuclear norm ball. The table reveals a time-data tradeoff: to achieve constant
risk we can either use a more expensive procedure that requires fewer data points
or a cheaper procedure that requires more data points.
\begin{table}
\tablewidth=260pt
\caption{Time-data tradeoffs for the cut-matrix denoising problem,
expressed as
a function of the matrix dimension $p$, where $c_1 < c_2 < c_3$.
See \citet{Chandrasekaran} for details.}
\label{tabmatrix-cut}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lll@{}}
\hline
$\bc$ & Runtime & $n$ \\
\hline
Cut polytope & super-poly($p$) & $c_1 p^{1/2}$ \\[1pt]
Elliptope & $p^{7/4}$ & $c_2 p^{1/2}$ \\[1pt]
Nuclear norm ball & $p^{3/2}$ & $c_3 p^{1/2}$ \\
\hline
\end{tabular*}
\end{table}
As a second example, consider the cut-matrix denoising problem, where the
signal set is as follows:
\[
\bs= \bigl\{\ba\ba'\mid\ba\in\{-1,+1\}^{\sqrt{p}} \bigr\}.
\]
Table~\ref{tabmatrix-cut} displays the runtimes and sample sizes
associated with
three different convex relaxations of this signal set. Here the
tradeoff is
in the constants associated with the sample size, favoring the cheaper methods.
\citet{Chandrasekaran} also consider other examples, involving variable ordering
and banded covariance matrices. In all of these examples, the cheaper methods
appear to achieve the same risk as the more expensive methods at the cost of
only modestly more data points.
\section{Discussion}
We have reviewed several lines of research that aim to bring computational
considerations into contact with statistical considerations, with a particular
focus on the matrix-oriented estimation problems that arise frequently
in the
setting of massive data. Let us also mention several other recent theoretical
contributions to the statistics/computation interface. Divide-and-conquer
methodology has been explored by \citet{ChenXie} in the setting of
regression and
classification. Their methods involve estimating parameters on subsets
of data
in parallel and using weighted combination rules to merge these
estimates into
an overall estimate. They are able to show asymptotic equivalence to
an estimator
based on all of the data and also show (empirically) a significant
speed-up via
the divide-and-conquer method. The general idea of algorithmic
weakening via
hierarchies of model families has been explored by several authors;
see, for
example, \citet{Agarwal} and \citet{Shalev-Shwartz}, where the focus is model
selection and classification, and \citet{Amini}, where the focus is
sparse covariance matrix estimation. In all of these lines of work the goal
is to develop theoretical tools that explicitly reveal tradeoffs relating
risk, data and time.
It is important to acknowledge the practical reality that massive datasets
are often complex, heterogeneous and noisy, and the goal of research on
scalability is not that of developing a single methodology that applies
to the analysis of such datasets. Indeed, massive datasets will require
the full range of statistical methodology to be brought to bear in order
for assertions of knowledge on the basis of massive data analysis to be
believable.
Rather, the challenge is to recognize that the analysts of massive data will
often be interested not only in statistical risk but also in risk/time tradeoffs,
and that the discovery and management of such tradeoffs can be achieved only
if the algorithmic character of statistical methodology is fully acknowledged
at the level of the foundational principles of the field.
\section*{Acknowledgements}
I wish to acknowledge numerous colleagues who have helped to shape the
perspective presented here, in particular Venkat Chandrasekaran, Ariel Kleiner,
Lester Mackey, Purna Sarkar and Ameet Talwalkar.
\subsection{Code Representation}
\label{sec:code}
In program analysis, various representations of a program are used to expose deeper information behind the textual code; classic concepts include ASTs, control flow graphs, and data flow graphs, which capture the syntactic and semantic relationships among the different elements of the source code.
Many vulnerabilities, such as memory disclosure, are too implicit to be spotted without jointly considering the structure, control flow and data dependencies of code \cite{yamaguchi2014}.
For example, it is reported that ASTs alone can be used to find only insecure arguments. Combining ASTs with control flow graphs covers two more types of vulnerabilities, i.e., resource leaks and some use-after-free vulnerabilities. Further integrating the ASTs, control flow graphs and data flow graphs makes it possible to describe most types of vulnerabilities, except two that require information other than static code (race conditions depend on runtime properties, and design errors are hard to model without details on the intended design of a program).
Although \cite{yamaguchi2014} \emph{manually} crafted the vulnerability templates in the form of graph traversals, it conveyed the key insight and demonstrated the feasibility of learning a broader range of vulnerability patterns by integrating properties of ASTs, control flow graphs and data flow graphs into a joint data structure. Based on this insight, this work attempts to automatically mine extensive vulnerability patterns from comprehensive code representations.
Besides the three classical code structures, we also take the natural sequence of source code into consideration, since recent advances in deep-learning-based vulnerability detection have demonstrated its usefulness~\cite{ndss18vuldeepecker,russell2018automated}.
It complements the classical representations because its unique flat structure captures the relationships of code tokens in `human-readable' source code.
Next we briefly introduce each type of code representation and how we embed the various graphs into one joint graph, following a code example of integer overflow in Figure~\ref{fig:graph_representation}(a) and its graph representation in Figure~\ref{fig:graph_representation}(b).
\begin{comment}
\begin{lstlisting}[float,language=C,caption={A code example of integer overflow},basicstyle=\fontsize{13}{13}\selectfont\ttfamily,label=snippet]
short add (short b) {
short a = 32767;
if (b > 0){
a = a + b;
}
return a;
}
\end{lstlisting}
\end{comment}
\begin{comment}
\begin{figure*}[t!]
\begin{minipage}[b]{0.2\textwidth}
\centering
\includegraphics[scale=0.35]{graph_listing.eps}
\caption{Code snippet of integer overflow}
\end{minipage}
\hfill
\begin{minipage}[b]{0.8\textwidth}
\centering
\includegraphics[scale=0.25]{graph_representation.eps}
\caption{Graph Representation of Code Snippet in Listing 1}
\end{minipage}
\end{figure*}
\end{comment}
\begin{figure*}[htb]
\centering
\includegraphics[width=1\textwidth]{figures/overflow_graph_new.eps}
\caption{Graph Representation of Code Snippet with Integer Overflow }
\label{fig:graph_representation}
\vspace{-0.1in}
\end{figure*}
\begin{description}
\setlength\itemsep{0.01in}
\item[Abstract Syntax Tree (AST)] AST is an ordered tree representation structure of source code. Usually, it is the first-step representation used by code parsers to understand the fundamental structure of the program and to examine syntactic errors. Hence, it forms the basis for the generation of many other code representations.
Starting from the root node, the code is broken down into code blocks, statements, declarations, expressions, etc., and finally into the primary tokens.
These tokens form the leaf nodes in ASTs.
We call the edges representing the directed child-parent relationship in the AST \textit{Child} edges.
The major AST nodes are shown in Figure~\ref{fig:graph_representation}.
All the boxes are AST nodes, with specific codes in the first line and node type annotated. The blue boxes are leaf nodes of AST and purple arrows represent the child-parent relations.
\item[Control Flow Graph (CFG)] CFG describes the possible orders in which code statements can be executed.
The path alternatives are determined by conditional statements, e.g., \textit{if}, \textit{for}, and \textit{switch} statements. In CFGs, nodes denote statements and conditions, and they are connected by directed \textit{Control} edges to indicate the transfer of control. The control edges are highlighted with green arrows in Figure~\ref{fig:graph_representation}. Particularly, the flow starts from the entry and ends at the exit, and two different paths branch at the \textit{if} statement.
\item[Data Flow Graph (DFG)] DFG tracks the usage of variables throughout the CFG. Data flow is variable oriented and any data flow involves the reading or writing of certain variables. A DFG edge represents the subsequent reading or writing onto the same variables. It is illustrated by orange double arrows in Figure~\ref{fig:graph_representation} and with the involved variables annotated over the edge. For example, the parameter $b$ is used in both the \textit{if} condition and the assignment statement.
\item[Natural Code Sequence] In order to encode the natural sequential order of the source code, we use \textit{NextToken} edges to connect neighboring code tokens in the ASTs. Such encoding has two benefits: it preserves the programming logic reflected by the sequence of the source code, and it stays as precise as possible by involving only the key tokens. The \textit{NextToken} edges are denoted by red arrows in Figure~\ref{fig:graph_representation}, stringing together all the leaf nodes of the AST.
\end{description}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.8\textwidth]{figures/network.eps}
\caption{The Architecture of Devign with CNN Classifiers}
\label{fig:network}
\vspace{-0.1in}
\end{figure*}
\vspace{-0.02in}
\subsection{Gated Graph Neural Network Model}\label{sec:node}
Our network model builds upon Gated Graph Neural Networks (GGNN), which extend Graph Neural Networks to output sequences \cite{li2015gated}. We modify and improve them for the problem of graph classification. We first formulate the problem and then introduce the network model.
\noindent \textbf{Problem Formulation} We denote a function $F$ in source code as a multi-edged graph $G=(V,E)$, where each vertex $v$ in $V$ denotes a node in the graph, and each edge $e_{i,j}^{t} \in E_t$ denotes that nodes $v_i$ and $v_j$ are connected via an edge of type $t$, where $t$ is an integer and $0 \leq t\leq T$. The total number of edge types $T$ equals the number of code representations integrated into the joint graph to facilitate vulnerability discovery. Therefore, given $G=(V,E)$ for $F$, we aim to predict whether $F$ is vulnerable or not.
\noindent \textbf{Learning Node Representations} We summarize the basics of GGNN here. Given a graph $G = (V, E)$, let ${x_i \in X}$ be the real-valued vectors of node features, where each node $i$ is associated with an annotation $x_i$ representing the initial feature of the node. For each node $i\in V$, we then initialize the node state vector $h_i$ using the initial annotation by copying $x_i$ into the first dimensions and padding with extra 0's to allow hidden states that are larger than the annotation size, i.e., $h_i = [x_i^\top, \mathbf{0}]^\top$. To propagate information throughout graphs, at each time step $t$, all nodes communicate with each other by passing information via edges, dependent on the edge type and direction (described by the adjacency matrix $A$), i.e.,
\begin{equation}
a_i^t = A_i^\top \bigg[h_1^{(t-1)\top},\dots,h_{|V|}^{(t-1)\top}\bigg] + b
\end{equation}
where $b$ is the bias term. In particular, a new state for a node $i$ is calculated by aggregating all types of incoming information. The remaining steps are gated-recurrent-unit (GRU) like updates that incorporate information from the other nodes and the previous time step to update the node's hidden state $h_i^t$. The above propagation procedure iterates over a fixed number of time steps, and the state vectors at the last time step are the final node representations.
\noindent \textbf{Graph-level Classification} The node embeddings generated by GGNNs can be used as input to any prediction layer, e.g., for node classification, link prediction or graph-level classification, and then the whole model can be trained in an end-to-end fashion. In our approach, we need to perform graph-level classification to determine whether an input function is vulnerable or not.
The standard approach to graph classification is to gather all the generated node embeddings globally, e.g., using a linear weighted summation that flatly adds up all the embeddings \cite{li2015gated,dai2016discriminative}. However, this approach hinders effective classification over entire graphs \cite{ying2018hierarchical,zhang2018end}. Since our node representations are both semantic and structural, and are learned from GRU-like graph neural networks rather than graph convolution networks (which need a sort-pooling layer to sort the structural node features into a consistent vertex ordering before feeding them into traditional CNNs), we directly apply 1-D convolution layers and dense layers for more effective prediction\footnote{We also tried LSTMs and BiLSTMs (with and without attention mechanisms) on the ordered nodes to classify, however, the CNN classifiers work best overall.}. Figure~\ref{fig:network} illustrates the modified network architecture.
\subsection{Data Preparation}
It is never trivial to obtain high-quality data sets of vulnerable functions because of the expertise required. We noticed that although \cite{dam2017automatic} released data sets of vulnerable functions, the labels were generated by static analyzers and are therefore not accurate.
Other potential datasets used in \cite{du2019leopard} are not available.
In this work, supported by our industrial partners, we assembled a team of security researchers to collect and label the data from scratch. Besides collecting raw functions, we need to generate graph representations for each function and initial representations for each node in a graph. We describe the detailed procedures below.
\noindent \textbf{Raw Data Gathering}
To test the capability of \emph{Devign}\xspace in learning vulnerability patterns, we evaluate on manually-labeled functions collected from 4 large C-language open-source projects that are popular among developers and diversified in functionality, i.e., Linux Kernel, QEMU, Wireshark, and FFmpeg.
To facilitate and ensure the quality of data labelling, we started by collecting security-related commits which we would label as vulnerability-fix commits or non-vulnerability fix commits, and then extracted vulnerable or non-vulnerable functions directly from the labeled commits.
The vulnerability-fix commits (VFCs) are commits that fix potential vulnerabilities, from which we can extract vulnerable functions from the source code of versions previous to the revision made in the commits.
The non-vulnerability-fix commits (non-VFCs) are commits that do not fix any vulnerability, from which we can similarly extract non-vulnerable functions from the source code before the modification.
We adopted the approach proposed in \cite{fse2017vul} to collect the commits.
It consists of the following two steps. 1) \emph{Commit Filtering}. Since only a tiny fraction of commits are vulnerability related, we exclude the security-unrelated commits whose messages do not match a set of security-related keywords such as DoS and injection. The remaining, more likely security-related, commits are left for manual labelling. 2) \emph{Manual Labelling}. A team of four professional security researchers spent a total of \textit{600 man-hours} performing two rounds of data labelling and cross-verification.
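As a minimal illustration of the commit-filtering step, the sketch below keeps only the commits whose messages match security-related keywords; the keyword list shown is a small illustrative sample rather than the exact list used in our pipeline.
\begin{lstlisting}[language=Python]
import re

# Illustrative keyword filter for step 1); the keyword set is a sample.
SECURITY_KEYWORDS = re.compile(
    r"\b(dos|denial of service|injection|overflow|"
    r"use[- ]after[- ]free|leak|cve)\b", re.IGNORECASE)

def filter_commits(commits):
    # commits: iterable of (commit_id, message) pairs
    return [(cid, msg) for cid, msg in commits
            if SECURITY_KEYWORDS.search(msg)]
\end{lstlisting}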
\begin{comment}
\begin{description}
\setlength\itemsep{0.01em}
\item[1) Commits Filtering] Since only a tiny part of commits are vulnerability related, we exclude the security-unrelated commits whose messages are not matched by a set of security-related keywords such as DoS and injection. The rest, more likely security-related, are left for manual labelling.
\item[2) Manual Labelling] A team of four security researchers spent totally \textit{600 man-hours} to perform a two round data labelling and cross-verification.
\end{description}
\end{comment}
Given a VFC or non-VFC, based on the modified functions, we extract the source code of these functions before the commit is applied, and assign the labels accordingly.
\begin{table*}[t]
\centering
\scriptsize {\addtolength{\tabcolsep}{-3pt}
\caption{Classification accuracies and F1 scores in percentages: The two far-right columns give the maximum and average relative difference in accuracy/F1 compared to \emph{Devign}\xspace model with the composite code representations, i.e., \emph{Devign}\xspace (Composite). }
\label{tbl-result}
\begin{tabular}{c | c c | c c | c c | c c | c c | c c | c c }
\toprule
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Method}}} & \multicolumn{2}{c|}{Linux Kernel} & \multicolumn{2}{c|}{QEMU} & \multicolumn{2}{c|}{Wireshark} & \multicolumn{2}{c|}{FFmpeg} & \multicolumn{2}{c|}{Combined} & \multicolumn{2}{c|}{Max Diff} & \multicolumn{2}{c}{Avg Diff} \\
& ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 \\
\midrule
Metrics + Xgboost & 67.17 & 79.14 & 59.49 & 61.27 & 70.39 & 61.31 & 67.17 & 63.76 & 61.36 & 63.76 & 14.84 & 11.80 & 10.30 & 8.71 \\
3-layer BiLSTM & 67.25 & 80.41 & 57.85 & 57.75 & 69.08 & 55.61 & 53.27 & 69.51 & 59.40 & 65.62 & 16.48 & 15.32 & 14.04 & 8.78\\
3-layer BiLSTM + Att & 75.63 & 82.66 & 65.79 & 59.92 & 74.50 & 58.52 & 61.71 & 66.01 & 69.57 & 68.65 & 8.54 & 13.15 & 5.97 & 7.41 \\
CNN & 70.72 & 79.55 & 60.47 & 59.29 & 70.48 & 58.15 & 53.42 & 66.58 & 63.36 & 60.13 & 16.16 & 13.78 & 11.72 & 9.82
\\
\midrule
\emph{Ggrn}\xspace (AST) & 72.65 & 81.28 & 70.08 & 66.84 & 79.62 & 64.56 & 63.54 & 70.43 & 67.74 & 64.67 & 6.93 & 8.59 & 4.69 & 5.01\\
\emph{Ggrn}\xspace (CFG) & 78.79 & 82.35 & 71.42 & 67.74 & 79.36 & 65.40 & 65.00 & 71.79 & 70.62 & 70.86 & 4.58 & 5.33 & 2.38 & 2.93\\
\emph{Ggrn}\xspace (NCS) & 78.68 & 81.84 & 72.99 & 69.98 & 78.13 & 59.80 & 65.63 & 69.09 & 70.43 & 69.86 & 3.95 & 8.16 & 2.24 & 4.45\\
\emph{Ggrn}\xspace (DFG\_C) & 70.53 & 81.03 & 69.30 & 56.06 & 73.17 & 50.83 & 63.75 & 69.44 & 65.52 & 64.57 & 9.05 & 17.13 & 6.96 & 10.18\\
\emph{Ggrn}\xspace (DFG\_R) & 72.43 & 80.39 & 68.63 & 56.35 & 74.15 & 52.25 & 63.75 & 71.49 & 66.74 & 62.91 & 7.17 & 16.72 & 6.27 & 9.88\\
\emph{Ggrn}\xspace (DFG\_W) & 71.09 & 81.27 & 71.65 & 65.88 & 72.72 & 51.04 & 64.37 & 70.52 & 63.05 & 63.26 & 9.21 & 16.92 & 6.84 & 8.17\\
\emph{Ggrn}\xspace (Composite) & 74.55 & 79.93 & 72.77 & 66.25 & 78.79 & 67.32 & 64.46 & 70.33 & 70.35 & 69.37 & 5.12 & 6.82 & 3.23 & 3.92\\
\midrule
\emph{Devign}\xspace (AST) & \textbf{80.24} & 84.57 & 71.31 & 65.19 & 79.04 & 64.37 & 65.63 & 71.83 & 69.21 & 69.99 & 3.95 & 7.88 & 2.33 & 3.37\\
\emph{Devign}\xspace (CFG) & 80.03 & 82.91 & 74.22 & 70.73 & 79.62 & 66.05 & 66.89 & 70.22 & 71.32 & 71.27 & 2.69 & 3.33 & 1.00 & 2.33\\
\emph{Devign}\xspace (NCS) & 79.58 & 81.41 & 72.32 & 68.98 & 79.75 & 65.88 & 67.29 & 68.89 & 70.82 & 68.45 & 2.29 & 4.81 & 1.46 & 3.84\\
\emph{Devign}\xspace (DFG\_C) & 78.81 & 83.87 & 72.30 & 70.62 & 79.95 & 66.47 & 65.83 & 70.12 & 69.88 & 70.21 & 3.75 & 3.43 & 2.06 & 2.30\\
\emph{Devign}\xspace (DFG\_R) & 78.25 & 80.33 & 73.77 & 70.60 & 80.66 & 66.17 & 66.46 & 72.12 & 71.49 & 70.92 & 3.12 & 4.64 & 1.29 & 2.53\\
\emph{Devign}\xspace (DFG\_W) & 78.70 & 84.21 & 72.54 & 71.08 & 80.59 & 66.68 & 67.50 & 70.86 & 71.41 & 71.14 & 2.08 & 2.69 & 1.27 & 1.77\\
\emph{Devign}\xspace (Composite) & 79.58 & \textbf{ 84.97 } & \textbf{74.33} & \textbf{ 73.07 } & \textbf{81.32} & \textbf{ 67.96 } & \textbf{69.58} & \textbf{ 73.55 } & \textbf{72.26} & \textbf{ 73.26 } & - & - & - & -\\
\midrule
\end{tabular}
}
\end{table*}
\noindent \textbf{Graph Generation} We make use of the open-source code analysis platform for C/C++ based on code property graphs, Joern~\cite{yamaguchi2014}, to extract ASTs and CFGs for all functions in our data sets.
Due to some internal compilation errors and exceptions in Joern, we can only obtain ASTs and CFGs for part of the functions. We filter out the functions without ASTs and CFGs or with obvious errors in their ASTs and CFGs.
Since the original DFG edges are labeled with the variables involved, which tremendously increases the number of edge types and complicates the embedded graphs, we substitute the DFGs with three other relations, \textit{LastRead (DFG\_R)}, \textit{LastWrite (DFG\_W)}, and \textit{ComputedFrom (DFG\_C)}~\cite{learning_to_represent}, to make them more amenable to graph embedding. \textit{DFG\_R} represents the most recent preceding read of each occurrence of a variable. Each occurrence can be directly recognized from the leaf nodes of the ASTs.
\textit{DFG\_W} represents the most recent preceding write of each occurrence of a variable. Similarly, we make these annotations on the leaf-node variables.
\emph{DFG\_C} determines the sources of a variable. In an assignment statement, the left-hand-side (lhs) variable is assigned a new value by the right-hand-side (rhs) expression. DFG\_C captures such relations between the lhs variable and each of the rhs variables.
\begin{comment}
\begin{description}
\setlength\itemsep{0.01em}
\item[DFG\_R] represents the immediately last read of each occurrence of the variable. Each occurrence can be directly recognized from the leaf nodes of ASTs.
For example, in Figure~\ref{fig:graph_representation}, for the identifier $b$ in the \textit{if} condition, its previously latest read happens in the parameter definition. Thus, an edge is built between these two occurrences of identifier $b$.
\item[DFG\_W] represents the immediately last write of each occurrence of variables. Similarly, we makes these annotations to the leaf node variables. For example, in Figure~\ref{fig:graph_representation}, for the identifier $a$ in the \textit{return} statement, its last time write can happen either in the $a = a + b$ or the $a = 32767$. Thus, two edges are built for that $a$.
\item[DFG\_C] determines the sources of a variable. In an assignment statement, the left-hand-side (lhs) variable is assigned with a new value by the right-hand-side (rhs) expression. We build such \textit{ComputedFrom} relations between the lhs variable and each of the rhs variable. For example, in Figure~\ref{fig:graph_representation}, $a^1$ is computed from $a^2$ and $b$ in the assignment statement $a^1 = a^2 + b.$
\end{description}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.99\textwidth]{graph_three_edges.eps}
\caption{Graph Representation }
\label{fig:graph_three_edges}
\end{figure*}
\end{comment}
Further, for computational efficiency, we remove functions with more than 500 nodes, which account for 15\% of the functions. We summarize the statistics of the data sets in Table \ref{tbl-data}.
\subsection{Baseline Methods}
In the performance comparison,
we compare \emph{Devign}\xspace with the state-of-the-art machine-learning-based vulnerability prediction methods, as well as with the gated graph recurrent network (\emph{Ggrn}\xspace), which uses a linear weighted summation for classification.
\vspace*{-0.4\baselineskip}
\begin{description}
\setlength\itemsep{0.01em}
\item[Metrics + Xgboost] \cite{du2019leopard}: We collect a total of 4 complexity metrics and 11 vulnerability metrics for each function using Joern, and utilize Xgboost for classification. Here we did not use the proposed binning-and-ranking method because it is not learning based but a heuristic designed to rank the likelihood of being vulnerable. We search for the best parameters via Bayesian optimization \cite{snoek2012practical}.
\item[3-layer BiLSTM] \cite{ndss18vuldeepecker}: It treats the source code as natural language and inputs the tokenized code into bidirectional LSTMs with initial embeddings trained via Word2vec. Here we implemented a 3-layer bidirectional LSTM for the best performance.
\item[3-layer BiLSTM + Att:] It is an improved version of \cite{ndss18vuldeepecker} with the attention mechanism \cite{yang2016hierarchical}.
\item[CNN] \cite{russell2018automated}: Similar to \cite{ndss18vuldeepecker}, it treats source code as natural language and uses a bag-of-words model to obtain the initial embeddings of code tokens, which are then fed to CNNs for learning.
\end{description}\vspace*{-0.4\baselineskip}
\subsection{Performance Evaluation}
\textbf{\emph{Devign}\xspace Configuration}
In the embedding layer, the dimension of word2vec for the initial node representation is 100.
In the gated graph recurrent layer, we set the dimension of the hidden states to 200 and the number of time steps to 6.
For the \emph{Conv} parameters of \emph{Devign}\xspace, the first convolution layer applies a (1, 3) filter with ReLU activation, followed by a max pooling layer with a (1, 3) filter and (1, 2) stride; the second convolution layer applies a (1, 1) filter, followed by a max pooling layer with a (2, 2) filter and (1, 2) stride.
We use the Adam optimizer with learning rate 0.0001 and batch size 128, and $L2$ regularization to avoid overfitting.
We randomly shuffle each dataset and split 75\% for the training and the rest 25\% for validation.
We train our model on Nvidia Graphics Tesla M40 and P40, with 100-epoch patience for early stopping.
\noindent\textbf{Results Analysis} We use \textit{accuracy} and \textit{F1 score} to measure performance. Table~\ref{tbl-result} summarizes all the experiment results.
First, we analyze the results regarding \textbf{Q1}, the performance of \emph{Devign}\xspace compared with other learning-based methods. From the results for the baseline methods and for \emph{Ggrn}\xspace and \emph{Devign}\xspace with composite code representations, we can see that both \emph{Ggrn}\xspace and \emph{Devign}\xspace significantly outperform the baseline methods on all the data sets. In particular, compared to all the baseline methods, the relative accuracy gain of \emph{Devign}\xspace is 10.51\% on average, and at least 8.54\% on the QEMU dataset.
\emph{Devign}\xspace (Composite) outperforms the 4 baseline methods in terms of F1 score as well, i.e., the relative gain in F1 score is 8.68\% on average, and the minimum relative gains on each dataset (Linux Kernel, QEMU, Wireshark, FFmpeg and Combined) are 2.31\%, 11.80\%, 6.65\%, 4.04\% and 4.61\%, respectively. As Linux follows best practices of coding style, the F1 score of 84.97 achieved by \emph{Devign}\xspace on Linux is the highest among all datasets.
\emph{Hence, \emph{Devign}\xspace with comprehensive semantics encoded in graphs performs significantly better than the state-of-the-art vulnerability identification methods.}
Next, we investigate the answer to \textbf{Q2} about the performance gain of \emph{Devign}\xspace against \emph{Ggrn}\xspace.
We first look at the score with the composite code representation.
It shows that, on all the data sets, \emph{Devign}\xspace reaches higher accuracy than \emph{Ggrn}\xspace (by 3.23\% on average), with the highest accuracy gain of 5.12\% on the FFmpeg data set. \emph{Devign}\xspace also achieves higher F1, by 3.92\% on average, with the highest F1 gain of 6.82\% on the QEMU data set.
Meanwhile, looking at the scores with each single code representation, we reach a similar conclusion that \emph{Devign}\xspace generally outperforms \emph{Ggrn}\xspace significantly, with a maximum accuracy gain of 9.21\% for the DFG\_W edge and a maximum F1 gain of 17.13\% for DFG\_C.
\emph{Overall, the average accuracy and F1 gains of \emph{Devign}\xspace over \emph{Ggrn}\xspace are 4.66\% and 6.37\%, respectively, across all cases, which indicates that the \textit{Conv} module extracts more relevant nodes and features for graph-level prediction.}
Then we check the results for \textbf{Q3} to answer whether \emph{Devign}\xspace can learn from different types of code representation and to assess the performance on composite graphs. Surprisingly, we find that the results learned from single-edged graphs are quite encouraging for both \emph{Ggrn}\xspace and \emph{Devign}\xspace. For \emph{Ggrn}\xspace, we find that the accuracy with some specific types of edges is even slightly higher than that with the composite graph, e.g., both the CFG and NCS graphs yield better results on the FFmpeg and Combined data sets. For \emph{Devign}\xspace, in terms of accuracy, except on the Linux data set, the composite graph representation is overall superior to any single-edged graph, with the gain ranging from 0.11\% to 3.75\%. In terms of F1 score, the improvement brought by the composite graph compared with the single-edged graphs is 2.69\% on average, ranging from 0.4\% to 7.88\%, for \emph{Devign}\xspace across all tests.
\emph{In summary, composite graphs help \emph{Devign}\xspace to learn better prediction models than single-edged graphs.}
\begin{table*}[tp]
\centering
\tiny {\addtolength{\tabcolsep}{-3pt}
\caption{Classification accuracies and F1 scores in percentages under the imbalanced setting}
\label{tbl-result-imbalanced}
\begin{tabular}{c | c c | c c | c c || c c | c c | c c | c c }
\toprule
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Method}}} & \multicolumn{2}{c|}{Cppcheck} & \multicolumn{2}{c|}{Flawfinder} & \multicolumn{2}{c||}{CXXX} & \multicolumn{2}{c|}{3-layer BiLSTM} &
\multicolumn{2}{c|}{3-layer BiLSTM + Att} & \multicolumn{2}{c|}{CNN } & \multicolumn{2}{c}{\emph{Devign}~(Composite) } \\
& ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 & ACC & F1 \\
\midrule
Linux & 75.11 & 0 & 78.46 & 12.57 & 19.44 & 5.07 & 18.25 & 13.12 & 8.79 & 16.16 & 29.03 & 15.38 & 69.41 & \textbf{24.64} \\
\midrule
QEMU & 89.21 & 0 & 86.24 & 7.61 & 33.64 & 9.29 & 29.07 & 15.54 & 78.43 & 10.50 & 75.88 & 18.80 & 89.27 & \textbf{41.12} \\
\midrule
Wireshark & 89.19 & 10.17 & 89.92 & 9.46 & 33.26 &3.95 & 91.39 & 10.75 & 84.90 & 28.35 &86.09 &8.69 &89.37 & \textbf{42.05} \\
\midrule
FFmpeg & 87.72 & 0 & 80.34 & 12.86 & 36.04 & 2.45 & 11.17 & 18.71 & 8.98 & 16.48 & 70.07 & 31.25 & 69.06 & \textbf{34.92} \\
\midrule
Combined & 85.41 & 2.27 & 85.65 & 10.41 & 29.57 & 4.01 & 9.65 & 16.59 & 15.58 & 16.24 & 72.47 & 17.94 & 75.56 & \textbf{27.25} \\
\midrule
\end{tabular}
}
\vspace{-0.2in}
\end{table*}
To answer \textbf{Q4} on the comparison with static analyzers on realistically imbalanced data, we randomly sampled the test data to create imbalanced datasets with 10\% vulnerable functions, in line with a large industrial-scale analysis \cite{fse2017vul}. We compare with the well-known open-source static analyzers Cppcheck and Flawfinder, and a commercial tool CXXX whose name we hide for legal reasons. The results are shown in Table~\ref{tbl-result-imbalanced}, where our approach significantly outperforms all static analyzers, with an F1 score 27.99 higher. Meanwhile, static analyzers tend to miss most vulnerable functions and produce many false positives, e.g., Cppcheck found 0 vulnerabilities in 3 out of the 4 single-project datasets.
Finally, to answer \textbf{Q5} on the latest disclosed vulnerabilities, we scraped the latest 10 CVEs of each project to check whether \emph{Devign}\xspace can potentially be applied to identify zero-day vulnerabilities. Based on the fixing commits of the 40 CVEs, we obtained a total of 112 vulnerable functions. \emph{We input these functions into the trained \emph{Devign}\xspace model and achieve an average accuracy of 74.11\%, which demonstrates \emph{Devign}\xspace's potential for discovering new vulnerabilities in practical applications.}
\subsection{Problem Formulation}
Most machine-learning or pattern-based approaches predict vulnerability at the coarse granularity of a source file or an application, i.e., whether a source file or an application is potentially vulnerable \cite{nguyen10,yamaguchi2014,ndss18vuldeepecker,dam2017automatic}. Here we analyze vulnerable code at the \textit{function level}, a fine level of granularity in the overall flow of vulnerability analysis.
We formalize the identification of vulnerable functions as a binary classification problem, i.e., learning to decide whether a given function in raw source code is vulnerable or not.
Let a sample of data be defined as $((c_i, y_i) \mid c_i \in \mathcal{C}, y_i \in \mathcal{Y}), i \in \{1,2, \dots, n\}$, where $\mathcal{C}$ denotes the set of functions in code, $\mathcal{Y} =\{0, 1\}$ represents the label set with $1$ for vulnerable and $0$ otherwise, and $n$ is the number of instances.
Since $c_i$ is a function, we assume it is encoded as a multi-edged graph $g_i(V,X,A) \in \mathcal{G}$ (See Section~\ref{sec:code} for the embedding details).
Let $m$ be the total number of nodes in $V$, $X \in \mathbb{R}^{m \times d} $ is the initial node feature matrix where each vertex $v_j$ in $V$ is represented by a $d$-dimensional real-valued vector $x_j \in \mathbb{R}^d$.
$A \in \{0,1\}^{k \times m \times m }$ is the adjacency matrix, where $k$ is the total number of edge types. An element $e^{p}_{s,t} \in A$ equal to $1$ indicates that nodes $v_s$ and $v_t$ are connected via an edge of type $p$, and $0$ otherwise.
The goal of \emph{Devign}\xspace is to learn a mapping from $\mathcal{G}$ to $\mathcal{Y}$,
$f: \mathcal{G} \mapsto \mathcal{Y}$ to predict whether a function is vulnerable or not.
The prediction function $f$ can be learned by minimizing the loss function below:
\vspace{-2mm}
\begin{small}
\begin{equation}
\min_{f} \sum_{i=1}^n \mathcal{L}\bigl(f(g_i(V, X, A)), y_i \mid c_i\bigr) + \lambda\, \omega(f)
\end{equation}
\end{small}
where $\mathcal{L}(\cdot)$ is the cross entropy loss function, $\omega(\cdot)$ is a regularization, and $\lambda$ is an adjustable weight.
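As a minimal sketch (not our released implementation), the objective above can be optimized as follows, where the regularization term $\lambda \omega(f)$ is realized through the optimizer's weight decay; the \texttt{model} and \texttt{loader} objects are placeholders.
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn

# Sketch of minimizing the objective: cross-entropy plus L2 weight decay.
def train(model, loader, lam=1e-4, lr=1e-4, epochs=10):
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(),
                                 lr=lr, weight_decay=lam)
    for _ in range(epochs):
        for X, A, y in loader:           # mini-batch of embedded graphs
            optimizer.zero_grad()
            y_hat = model(X, A)          # predicted probability of vulnerability
            loss = criterion(y_hat, y)   # cross-entropy term
            loss.backward()
            optimizer.step()
\end{lstlisting}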
\subsection{Graph Embedding Layer of Composite Code Semantics}
\label{sec:code}
As illustrated in Figure~\ref{fig:network}, the graph embedding layer $EMB$ is a mapping from the function code $c_i$ to graph data structures as the input of the model, i.e.,
\begin{equation}
g_i(V, X, A) = EMB(c_i), \forall i = \{1, \dots, n \}
\end{equation}
In this section, we describe why and how we utilize the classical code representations to embed the code into a composite graph for feature learning.
\subsubsection{Classical Code Graph Representation and Vulnerability Identification}
In program analysis, various representations of a program are used to expose deeper semantics behind the textual code; classic concepts include ASTs, control flow graphs, and data flow graphs, which capture the syntactic and semantic relationships among the different tokens of the source code.
The majority of vulnerabilities, such as memory leaks, are too subtle to be spotted without jointly considering the composite code semantics \cite{yamaguchi2014}.
For example, it is reported that ASTs alone can be used to find only insecure arguments \cite{yamaguchi2014}. Combining ASTs with control flow graphs covers two more types of vulnerabilities, i.e., resource leaks and some use-after-free vulnerabilities. Further integrating the three code graphs makes it possible to describe most types, except two that need extra external information (i.e., race conditions depend on runtime properties, and design errors are hard to model without details on the intended design of a program).
Although \cite{yamaguchi2014} \emph{manually} crafted the vulnerability templates in the form of graph traversals, it conveyed the key insight and demonstrated the feasibility of learning a broader range of vulnerability patterns by integrating properties of ASTs, control flow graphs and data flow graphs into a joint data structure.
Besides the three classical code structures, we also take the natural sequence of source code into consideration, since recent advances in deep-learning-based vulnerability detection have demonstrated its effectiveness~\cite{ndss18vuldeepecker,russell2018automated}.
It can complement the classical representations because its unique flat structure captures the relationships of code tokens in a `human-readable' fashion.
\subsubsection{Graph Embedding of Code}
Next we briefly introduce each type of code representation and how we combine the various subgraphs into one joint graph, following a code example of integer overflow in Figure~\ref{fig:graph_representation}(a) and its graph representation in Figure~\ref{fig:graph_representation}(b).
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{overflow_graph_new.eps}
\caption{Graph Representation of Code Snippet with Integer Overflow }
\label{fig:graph_representation}
\vspace{-0.1in}
\end{figure*}
\vspace*{-0.5\baselineskip}
\begin{description}
\setlength\itemsep{0.01in}
\item[Abstract Syntax Tree (AST)] AST is an ordered tree representation structure of source code. Usually, it is the first-step representation used by code parsers to understand the fundamental structure of the program and to examine syntactic errors. Hence, it forms the basis for the generation of many other code representations, and the node set of the AST, $V^{ast}$, includes all the nodes of the other three code representations used in this paper.
Starting from the root node, the code is broken down into code blocks, statements, declarations, expressions, etc., and finally into the primary tokens that form the leaf nodes.
The major AST nodes are shown in Figure~\ref{fig:graph_representation}.
All the boxes are AST nodes, with specific codes in the first line and node type annotated. The blue boxes are leaf nodes of AST and purple arrows represent the child-parent \textit{AST} relations.
\item[Control Flow Graph (CFG)] CFG describes all paths that might be traversed through a program during its execution.
The path alternatives are determined by conditional statements, e.g., \textit{if}, \textit{for}, and \textit{switch} statements. In CFGs, nodes denote statements and conditions, and they are connected by directed
edges to indicate the transfer of control. The \textit{CFG} edges are highlighted with green dashed arrows in Figure~\ref{fig:graph_representation}. Particularly, the flow starts from the entry and ends at the exit, and two different paths branch at the \textit{if} statement.
\item[Data Flow Graph (DFG)] DFG tracks the usage of variables throughout the CFG. Data flow is variable oriented and any data flow involves the access or modification of certain variables. A DFG edge represents the subsequent access or modification onto the same variables. It is illustrated by orange double arrows in Figure~\ref{fig:graph_representation} and with the involved variables annotated over the edge. For example, the parameter $b$ is used in both the \textit{if} condition and the assignment statement.
\item[Natural Code Sequence (NCS)] In order to encode the natural sequential order of the source code, we use \textit{NCS} edges to connect neighboring code tokens in the ASTs. The main benefit of such encoding is to preserve the programming logic reflected by the sequence of the source code. The \textit{NCS} edges, denoted by red arrows in Figure~\ref{fig:graph_representation}, connect all the leaf nodes of the AST.
\end{description}\vspace*{-0.5\baselineskip}
Consequently, a function $c_i$ can be denoted by a joint graph $g$ with the four types of subgraphs (or $4$ types of edges) sharing the same set of nodes $V=V^{ast}$.
As shown in Figure~\ref{fig:graph_representation}, every node $v \in V$ has two attributes, \textit{Code} and \textit{Type}. \textit{Code} contains the source code represented by $v$, and \textit{Type} denotes its node type. The initial node representation $x_v$ should reflect both attributes.
Hence, we encode \textit{Code} by using a pre-trained word2vec model with the code corpus built on the whole source code files in the projects, and \textit{Type} by label encoding.
We concatenate the two encodings together as the initial node representation $x_v$.
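A rough sketch of this step follows; aggregating the token vectors by averaging is an assumption made here for illustration, and \texttt{w2v} and \texttt{type\_index} are assumed to have been built beforehand from the project corpus and the set of node types.
\begin{lstlisting}[language=Python]
import numpy as np

# Sketch: initial node representation x_v = [word2vec(Code), label(Type)].
def node_feature(code_tokens, node_type, w2v, type_index, dim=100):
    vecs = [w2v[t] for t in code_tokens if t in w2v]
    code_vec = np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    type_vec = np.array([type_index[node_type]], dtype=float)
    return np.concatenate([code_vec, type_vec])
\end{lstlisting}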
\subsection{Gated Graph Recurrent Layers}
The key idea of graph neural networks is to compute node representations by aggregating information from local neighborhoods.
Depending on the technique used to aggregate neighborhood information, there are graph convolutional networks \cite{schlichtkrull2018modeling}, GraphSAGE \cite{velivckovic2017graph}, gated graph recurrent networks \cite{li2015gated} and their variants.
We chose the gated graph recurrent network to learn the node embeddings because it allows deeper propagation than the other two and is more suitable for our data, which carry both semantics and graph structure \cite{Representation-2018}.
Given an embedded graph $g_i(V, X, A)$,
for each node $v_j \in V$, we initialize the node state vector $h_j^{(1)} \in \mathbb{R}^{z}$, $z \geq d$, using the initial annotation by copying $x_j$ into the first dimensions and padding with extra 0's to allow hidden states that are larger than the annotation size, i.e., $h_j^{(1)} = [x_j^\top, \mathbf{0}]^\top$.
Let $T$ be the total number of time steps for neighborhood aggregation.
To propagate information throughout the graph, at each time step $t \leq T$, all nodes communicate with each other by passing information via edges, dependent on the edge type and direction (described by the $p$-th adjacency matrix $A_p$ of $A$; by definition, the number of adjacency matrices equals the number of edge types), i.e.,
\begin{small}
\begin{equation}
\label{eq:msg_passing}
a_{j,p}^{(t-1)} = A_p^\top \bigg(W_p \bigg[h_1^{(t-1)\top},\dots,h_{m}^{(t-1)\top}\bigg]+ b \bigg)
\end{equation}
\end{small}
where $W_p \in \mathbb{R}^{z\times z}$ is a weight matrix to learn and $b$ is the bias. In particular, a new state $a_{j,p}$ of node $v_j$ is calculated by aggregating the information of all neighboring nodes defined by the adjacency matrix $A_p$ for edge type $p$. The remaining step is a gated recurrent unit (GRU) update that incorporates the aggregated information from all edge types and the previous time step to obtain the node's current hidden state $h_{j}^{(t)}$, i.e.,
\begin{small}
\begin{equation}
\label{eq:state_update}
h_{j}^{(t)} = GRU(h_{j}^{(t-1)}, AGG (\{a_{j,p}^{(t-1)}\}_{p=1}^{k}))
\end{equation}
\end{small}
where $AGG(\cdot)$ denotes an aggregation function that could be one of the functions $\{MEAN, MAX, SUM, CONCAT\}$ to aggregate the information from different edge types to compute the next time-step node embedding $h^{(t)}$. We use the $SUM$ function in the implementation.
The above propagation procedure iterates over $T$ time steps, and the state vectors at the last time step,
$H_i^{(T)} = \{h_{j}^{(T)}\}_{j=1}^m$,
form the final node representation matrix for the node set $V$.
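A minimal PyTorch sketch of Eqs.~(\ref{eq:msg_passing}) and (\ref{eq:state_update}) is given below; it is illustrative rather than our exact implementation and uses the $SUM$ aggregation described above.
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn

class GatedGraphRecurrentLayer(nn.Module):
    # Sketch of the gated graph recurrent layers (illustrative only).
    def __init__(self, z, num_edge_types, num_steps=6):
        super().__init__()
        self.W = nn.ModuleList([nn.Linear(z, z)
                                for _ in range(num_edge_types)])
        self.gru = nn.GRUCell(z, z)
        self.num_steps = num_steps

    def forward(self, h, adj):
        # h: (m, z) initial node states; adj: (k, m, m), one per edge type
        for _ in range(self.num_steps):
            msgs = [adj[p].transpose(0, 1) @ self.W[p](h)
                    for p in range(len(self.W))]     # message per edge type
            a = torch.stack(msgs, dim=0).sum(dim=0)  # SUM aggregation
            h = self.gru(a, h)                       # GRU state update
        return h                                     # final node states H^(T)
\end{lstlisting}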
\subsection{The Conv Layer}
The node features generated by the gated graph recurrent layers can be used as input to any prediction layer, e.g., for node-level, link-level or graph-level prediction, and then the whole model can be trained in an end-to-end fashion. In our problem, we need to perform graph-level classification to determine whether a function $c_i$ is vulnerable or not.
The standard approach to graph classification is to gather all the generated node embeddings globally, e.g., using a linear weighted summation that flatly adds up all the embeddings \cite{li2015gated,dai2016discriminative}, as shown in Eq.~(\ref{eq:mlp}),
\begin{small}
\begin{equation}
\label{eq:mlp}
\Tilde{y}_i = {Sigmoid} \bigg( \sum MLP([H_i^{(T)}, x_i])
\bigg)
\end{equation}
\end{small}
where the sigmoid function is used for classification and $MLP$ denotes a multilayer perceptron that maps the concatenation of $H_i^{(T)}$ and $x_i$ to a vector in $\mathbb{R}^m$. This kind of approach hinders effective classification over entire graphs \cite{ying2018hierarchical,zhang2018end}.
Thus, we design the \textit{Conv} module to select sets of nodes and features that are relevant to the current graph-level task.
Previous work \cite{zhang2018end} proposed using a SortPooling layer after the graph convolution layers to sort the node features into a consistent node order for graphs without a fixed ordering, so that traditional neural networks can be added afterwards and trained to extract useful features characterizing the rich information encoded in the graph.
In our problem, each code representation graph has its own predefined order and connections of nodes encoded in the adjacency matrix, and the node features are learned through gated graph recurrent layers rather than graph convolution networks, which require sorting the node features from different channels. Therefore, we directly apply 1-D convolution and dense neural networks to learn features relevant to the graph-level task for more effective prediction\footnote{We also tried LSTMs and BiLSTMs (with and without attention mechanisms) on the sorted nodes in AST order, however, the convolution networks work best overall.}.
We define $\sigma (\cdot)$ as a 1-D convolutional layer with maxpooling, then
\begin{small}
\begin{equation}
\sigma (\cdot)= MAXPOOL\big( Relu \big(CONV(\cdot)\big)\big)
\end{equation}
\end{small}
Let $l$ be the number of convolutional layers applied; then the \textit{Conv} module can be expressed as
\begin{small}
\begin{eqnarray}
Z_i^{(1)} = \sigma \big([H_i^{(T)}, x_i] \big), \dots, Z_i^{(l)} = \sigma \big(Z_i^{(l-1)}\big)
\\
Y_i^{(1)} = \sigma \big(H_i^{(T)}\big), \dots, Y_i^{(l)} = \sigma \big(Y_i^{(l-1)}\big)
\\
\Tilde{y}_i = Sigmoid\big(AVG(MLP(Z_i^{(l)}) \odot MLP(Y_i^{(l)}) )\big)
\end{eqnarray}
\end{small}
where we first apply traditional 1-D convolutional and dense layers to the concatenation $[H_i^{(T)}, x_i]$ and to the final node features $H_i^{(T)}$, respectively, followed by a pairwise multiplication of the two outputs, then an average aggregation of the resulting vector, and finally make the prediction.
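For concreteness, a simplified sketch of the \textit{Conv} module with $l=1$ is shown below; the channel sizes and the kernel and pooling parameters are placeholders rather than the exact configuration reported in the evaluation.
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn

class ConvReadout(nn.Module):
    # Sketch of the Conv module with l = 1 (illustrative parameters).
    def __init__(self, z, d, hidden=64):
        super().__init__()
        def sigma(in_ch):   # 1-D convolution + ReLU + max pooling
            return nn.Sequential(
                nn.Conv1d(in_ch, hidden, kernel_size=3, padding=1),
                nn.ReLU(), nn.MaxPool1d(kernel_size=3, stride=2))
        self.sigma_z = sigma(z + d)        # acts on [H, X]
        self.sigma_y = sigma(z)            # acts on H alone
        self.mlp_z = nn.Linear(hidden, 1)
        self.mlp_y = nn.Linear(hidden, 1)

    def forward(self, h, x):
        # h: (m, z) final node states, x: (m, d) initial node features
        zin = torch.cat([h, x], dim=1).t().unsqueeze(0)   # (1, z+d, m)
        yin = h.t().unsqueeze(0)                          # (1, z, m)
        z_feat = self.sigma_z(zin).squeeze(0).t()
        y_feat = self.sigma_y(yin).squeeze(0).t()
        score = (self.mlp_z(z_feat) * self.mlp_y(y_feat)).mean()
        return torch.sigmoid(score)    # probability of being vulnerable
\end{lstlisting}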
\section{Introduction}
\input{intro.tex}
\section{The \emph{Devign}\xspace Model}
\input{ggnn.tex}
\section{Evaluation}
\input{eva.tex}
\section{Related Work}
\input{related.tex}
\section{Conclusion and Future Work}
\input{con.tex}
\bibliographystyle{IEEEtran}
\input{main.bbl}
\end{document}
\section{Introduction}
The well-known Sz\'{a}sz-Mirakyan operators are defined as
\begin{align*}
B_{n}^{0}(f,x) = \sum_{k=0}^\infty e^{-nx}\frac{(nx)^k}{k!} f(k/n), \hspace{10mm} x \in [0,\infty).
\end{align*}
In order to generalize the Sz\'asz-Mirakyan operators, Jain \cite{J} introduced the following operators
\begin{align}\label{e1}
B_{n}^{\beta}(f,x) = \sum_{k=0}^\infty L_{n,k}^{(\beta)}(x) f(k/n), \hspace{10mm} x \in [0,\infty)
\end{align}
where $0 \leq \beta<1$ and the basis function is defined as
\begin{align*}
L_{n,k}^{(\beta)}(x) = \frac{nx(nx+k\beta)^{k-1}}{k!}e^{-(nx+k\beta)}
\end{align*}
where it is seen that $\sum_{k=0}^{\infty}L_{n,k}^{(\beta)}(x)=1$. As a special case when $\beta=0$,
the operators (\ref{e1}) reduce to the Sz\'{a}sz-Mirakyan operators. Umar and Razi \cite{UR} used
a Kantorovich-type modification of $L_{n,k}^{(\beta)}(x)$ in order to approximate integrable functions,
and some direct estimates were considered there. Recently Farca\c{s} \cite{af} studied the operators
(\ref{e1}) and established a Voronovskaja-type asymptotic formula. While reviewing Farca\c{s}'s work,
we found minor errors in Lemma 2.1 of that paper. These errors have been corrected,
and the corrected statements are given in Lemma~\ref{l1} below.
In 1967 Durrmeyer \cite{Dur} introduced the integral modification of the well-known Bernstein polynomials,
which was later studied by Derriennic \cite{der}, Gonska-Zhou \cite{gon1,gon2} and Agrawal-Gupta \cite{pna}.
Durrmeyer-type modifications of the operators (\ref{e1}), with different weight functions, have been
proposed by Tarabie \cite{st} and Gupta et al.\ \cite{vg1}. In approximation by linear positive operators, moment
estimates play an important role. So far no standard Durrmeyer-type modification of the operators (\ref{e1}) has
been discussed, owing to the complicated form of its moments, and this problem has remained open for the last
four decades. Here we overcome this difficulty and consider the following Durrmeyer variant of the operators
(\ref{e1}) in the form
\begin{align}\label{e2}
D_{n}^{\beta}(f,x) &= \sum_{k=0}^{\infty}\left(\int_0^\infty L_{n,k}^{(\beta)}(t)\,dt \right)^{-1}L_{n,k}^{(\beta)}
(x)\int_{0}^{\infty}L_{n,k}^{(\beta)}(t)f(t)\,dt \nonumber\\
&= \sum_{k=0}^{\infty} \frac{< L_{n,k}^{(\beta)}(t), f(t) >}{< L_{n,k}^{(\beta)}(t), 1 >} \, L_{n,k}^{(\beta)}(x)
\end{align}
where $< f,g> = \int_{0}^{\infty} f(t)g(t)dt$. For the special case of $\beta = 0$ these operators reduce to the
Sz\'asz-Mirakyan-Durrmeyer operators (see \cite{vgrp} and references therein). It has been observed that
these operators have interesting convergence properties. In the original form of the operators (\ref{e1})
and in its other integral modifications, one has to impose the restriction that $\beta \to 0$ as $n \to \infty$
in order to obtain convergence. For the Durrmeyer variants (\ref{e2}), we need not impose any
restriction on $\beta$. Because of this remarkable property, these operators are worth studying. Here
we find the moments using Stirling numbers of the first kind and the confluent hypergeometric function,
and we establish some basic direct results.
\section{Moments}
\begin{lemma} \cite{J}, \cite{af} \label{l1}
For the operators defined by (\ref{e1}) the moments are as follows:
\begin{align}
B_{n}^{\beta}(1,x) &= 1, \hspace{10mm} B_n^{\beta}(t,x)=\frac{x}{1-\beta} \nonumber\\
B_{n}^{\beta}(t^{2},x) &= \frac{x^2}{(1-\beta)^2}+\frac{x}{n(1-\beta)^3}, \nonumber\\
B_{n}^{\beta}(t^{3},x) &= \frac{x^3}{(1-\beta)^3}+\frac{3 \, x^2}{n(1-\beta)^4}+\frac{(1+2\beta)\, x}
{n^2(1-\beta)^5} \label{e3} \\
B_{n}^{\beta}(t^{4},x) &= \frac{x^4}{(1-\beta)^4}+\frac{6 \, x^3}{n(1-\beta)^5}+\frac{(7+8\beta) x^2}
{n^2(1-\beta)^6} +\frac{(6\beta^2 + 8\beta +1) x}{n^3(1-\beta)^7} \nonumber\\
B_{n}^{\beta}(t^{5}, x) &= \frac{x^{5}}{(1-\beta)^{5}} + \frac{10 \, x^{4}}{n(1-\beta)^{6}} + \frac{
5(4 \beta + 5) \, x^{3}}{n^{2}(1-\beta)^{7}} \nonumber\\
& \hspace{10mm} + \frac{15(2 \beta^{2} + 4 \beta + 1) \, x^{2}}{ n^{3}
(1-\beta)^{8}} + \frac{(24\beta^{3} + 58 \beta^{2} + 22 \beta + 1) \, x}{n^{4}(1-\beta)^{9}} \nonumber
\end{align}
\end{lemma}
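The first two moments in Lemma \ref{l1} can also be checked numerically. The following sketch uses a simple truncated summation, which is adequate for moderate $nx$ and for $\beta$ bounded away from $1$, and agrees with $B_{n}^{\beta}(t,x)=x/(1-\beta)$ and with the stated expression for $B_{n}^{\beta}(t^{2},x)$ to high accuracy.
\begin{verbatim}
import math

def L_basis(n, k, x, beta):
    # L_{n,k}^{(beta)}(x); the k = 0 term equals exp(-n x)
    if k == 0:
        return math.exp(-n * x)
    return math.exp(math.log(n * x)
                    + (k - 1) * math.log(n * x + k * beta)
                    - (n * x + k * beta) - math.lgamma(k + 1))

def moment(r, n, x, beta, kmax=4000):
    # truncated series for B_n^beta(t^r, x)
    return sum(L_basis(n, k, x, beta) * (k / n) ** r for k in range(kmax))

n, x, beta = 10, 1.5, 0.3
print(moment(1, n, x, beta), x / (1 - beta))
print(moment(2, n, x, beta),
      x ** 2 / (1 - beta) ** 2 + x / (n * (1 - beta) ** 3))
\end{verbatim}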
\begin{lemma} \label{l2} For $0\le \beta<1$, we have
\begin{align}
\frac{< L_{n,k}^{(\beta)}(t), t^{r} >}{< L_{n,k}^{(\beta)}(t), 1 >} = P_{r}(k; \beta) \label{e4}
\end{align}
where $< f,g> =\int_{0}^{\infty} f(t)g(t)dt$ and $P_{r}(k; \beta)$ is a polynomial of order $r$ in the variable $k$.
In particular
\begin{align}
P_{0}(k; \beta) &= 1 \nonumber\\
P_{1}(k; \beta) &= \frac{1}{n} \left[ (1-\beta) k + \frac{1}{1-\beta} \right], \nonumber\\
P_{2}(k; \beta) &= \frac{1}{n^{2}} \left[ (1-\beta)^{2} k^{2} + 3 k + \frac{2!}{1-\beta} \right], \label{e5} \\
P_{3}(k; \beta) &= \frac{1}{n^{3}} \left[ (1-\beta)^{3} k^{3} + 6(1-\beta) k^{2} +
\frac{(11-8\beta) \, k}{1-\beta} + \frac{3!}{1-\beta} \right], \nonumber\\
P_{4}(k; \beta) &= \frac{1}{n^{4}} \left[ (1-\beta)^{4} k^{4} + 10 (1-\beta)^{2} k^{3} + 5(7-4\beta) k^{2}
+ \frac{10(5-3\beta) \, k}{1-\beta} + \frac{4!}{1-\beta} \right] \nonumber\\
P_{5}(k;\beta) &= \frac{1}{n^{5}} \left[ (1-\beta)^{5} k^{5} + 15 (1-\beta)^{3} k^{4} + 5 (1-\beta)(17 - 8 \beta) k^{3}
\right. \nonumber\\
& \hspace{20mm} \left. + \frac{15(15-20\beta + 6\beta^{2}) \, k^{2}}{1-\beta} + \frac{(274-144\beta) \, k}{1-\beta}
+ \frac{5!}{1-\beta} \right] \nonumber
\end{align}
\end{lemma}
\begin{proof} First, we consider the integral:
\begin{align*}
<L_{n,k}^{(\beta)}(t), t^{r} > &= \int_{0}^{\infty} L_{n,k}^{(\beta)}(t) \, t^{r} \, dt \\
&= \frac{n}{k!}\int_0^\infty e^{-(nt+k\beta)}t^{r+1}(nt+k\beta)^{k-1}dt
\end{align*}
We use Tricomi's confluent hypergeometric function
\begin{align*}
U(a,b,c) = \frac{1}{\Gamma(a)}\int_0^\infty e^{-ct}t^{a-1}(1+t)^{b-a-1}\,dt, \qquad a>0,\ c>0.
\end{align*}
With this we have
\begin{align}
<L_{n,k}^{(\beta)}(t), t^{r} > &= \frac{n}{k!}\int_0^\infty e^{-(nt+k\beta)}t^{r+1}(nt+k\beta)^{k-1}dt \nonumber\\
&= \frac{1}{k!}\int_0^\infty (x+k\beta)^{k-1}e^{-(x+k\beta)}\left(\frac{x}{n}\right)^{r+1}dx \nonumber\\
&= \frac{(k\beta)^{k+r+1}}{k!n^{r+1}}e^{-k\beta}\int_0^\infty e^{-k\beta t}(1+t)^{k-1}t^{r+1}dt \nonumber\\
&= \frac{(k\beta)^{k+r+1}}{k!n^{r+1}}e^{-k\beta}(r+1)!U(r+2,k+r+2,k\beta). \label{e6}
\end{align}
The evaluation of $<L_{n,k}^{(\beta)}(t), t^{r} >$ can also be written in the form
\begin{align*}
<L_{n,k}^{(\beta)}(t), t^{r} > &= \frac{(r+1)!}{k \, n^{r+1}} \, e^{-k \beta} \, \sum_{s=0}^{k-1} \binom{k+r-s}{r+1}
\frac{(k\beta)^{s}}{s!} \nonumber\\
&= \frac{e^{-x}}{k n^{r+1}} \, \sum_{s=0}^{k-1} \phi_{r}(s) \frac{x^{s}}{s!}
\end{align*}
where $x = \beta k$ and $\phi_{r}(s)$ is given by
\begin{align}
\phi_{r}(s) &= (k-s)_{r+1} = \sum_{j=0}^{r+1} s(r+1, r-j+1) (k-s)^{r-j+1} \label{e7}
\end{align}
where $s(n,k)$ are the Stirling numbers of the first kind. The first few may be written as
\begin{align*}
\phi_{0} &= k-s \\
\phi_{1} &= (k-s)^{2} + (k-s) \\
\phi_{2} &= (k-s)^{3} + 3 (k-s)^{2} + 2(k-s) \\
\phi_{3} &= (k-s)^{4} + 6 (k-s)^{3} + 11 (k-s)^{2} + 6 (k-s)
\end{align*}
It can now be determined that
\begin{align}
<L_{n,k}^{(\beta)}(t), t^{r} > = \frac{e^{-x}}{k n^{r+1}} \, \sum_{j=0}^{r+1} s(r+1, r-j+1) \theta_{r-j}(x) \label{e8}
\end{align}
where
\begin{align*}
\theta_{m}(x) = \sum_{s=0}^{k-1} (k-s)^{m+1} \frac{x^{s}}{s!}.
\end{align*}
For the case of $r=0$, (\ref{e8}) becomes
\begin{align*}
< L_{n,k}^{(\beta)}(t), 1 > = \frac{e^{-x}}{k n} \, \theta_{0}(x) = \frac{e^{-x}}{k n} \, \sum_{s=0}^{k-1} (k-s)
\frac{x^{s}}{s!}
\end{align*}
and for the case $r=1$,
\begin{align*}
< L_{n,k}^{(\beta)}(t), t > = \frac{e^{-x}}{k n^{2}} \, \theta_{1}(x) + \frac{1}{n} < L_{n,k}^{(\beta)}(t), 1 >.
\end{align*}
Dividing both sides by $< L_{n,k}^{(\beta)}(t), 1 >$ leads to the expression
\begin{align}
P_{1}(k; \beta) = \frac{1}{n} \left( 1 + \frac{ \theta_{1}(x) }{ \theta_{0}(x) } \right) = \frac{1}{n}
( 1 + S_{1}(x) ) \nonumber
\end{align}
where $S_{r}(x)$ is defined by
\begin{align}
S_{r}(x) = \frac{\theta_{r}(x)}{\theta_{0}(x)} = \frac{ \sum_{s=0}^{k-1} (k-s)^{r+1} \frac{x^{s}}{s!} }{ \sum_{s=0}^{k-1}
(k-s) \frac{x^{s}}{s!} }. \label{e9}
\end{align}
The general form of $P_{r}(k; \beta)$ is given by
\begin{align}
P_{r}(k; \beta) = \frac{1}{n^{r}} \, \sum_{j=0}^{r+1} s(r+1, j) S_{j-1}(x). \label{e10}
\end{align}
It remains to compute the quantities $S_{r}(x)$. From (\ref{e9}) it is seen that $S_{0}(x) = 1$,
and
\begin{align}
S_{1}(x) &= k - \left( \frac{k-1}{k} \right) x + \left(\frac{x}{k}\right)^{2} +
\left(\frac{x}{k}\right)^{3} + \left(\frac{x}{k}\right)^{4} + \cdots
= k - x + \frac{x}{k-x} \nonumber\\
S_{2}(x) &= x^{2} - (2k-3) x + k^{2} - \frac{x}{k-x}, \label{e11} \\
S_{3}(x) &= k^{3} - 3k -1 - (3k^{2}- 6k + 7) x + 3(k-2)x^{2} - x^{3} + \frac{k(3k+1)}{k-x}, \nonumber\\
S_{4}(x) &= k^{4} - (4k^{3} - 10k^{2} + 10k -15) x + (6k^{2} -20k + 25) x^{2} \nonumber\\
& \hspace{20mm} - 2(2k-5) x^{3} + x^{4} - \frac{(10k+1)x}{k-x}. \nonumber
\end{align}
Since $x= \beta k$, the first few $S_{r}(\beta k)$ are
\begin{align}
S_{1}(\beta k) &= (1-\beta) k + \frac{\beta}{1-\beta} \nonumber\\
S_{2}(\beta k) &= (1-\beta)^{2} k^{2} + 3 \beta k - \frac{\beta}{1-\beta} \label{e12} \\
S_{3}(\beta k) &= (1-\beta)^{3} k^{3} + 6 \beta (1-\beta) k^{2} + \frac{\beta(7\beta-4) \, k}{1-\beta} +
\frac{\beta}{1-\beta} \nonumber
\\
S_{4}(\beta k) &= (1-\beta)^{4} k^{4} + 10 \beta (1-\beta)^{2} k^{3} + 5\beta (5 \beta -2) k^{2}
+ \frac{5 \beta (1-3\beta) \, k}{1-\beta} - \frac{\beta}{1-\beta} \nonumber
\end{align}
which are polynomials of order $r$ in the variable $k$. Substituting the expressions for $S_{r}(\beta k)$
from (\ref{e12}) into (\ref{e10}) leads to the polynomials $P_{r}(k; \beta)$ of (\ref{e5}). We therefore
conclude that
\begin{align*}
\frac{< L_{n,k}^{(\beta)}(t), t^{r} >}{< L_{n,k}^{(\beta)}(t), 1 >} = P_{r}(k; \beta)
\end{align*}
are polynomials of order $r$ in the variable $k$.
\end{proof}
\begin{lemma}\label{l3}
For $0 \leq \beta < 1$, $r \geq 0$, the polynomials $P_{r}(k; \beta)$ satisfy the recurrence relationship
\begin{align}
n^{2} P_{r+2}(k; \beta) = n [ (1-\beta) k + r + 2 ] P_{r+1}(k; \beta) + (r+2) \beta k P_{r}(k; \beta). \label{e13}
\end{align}
\end{lemma}
\begin{proof}
By utilizing the recurrence relation,
\begin{align*}
U(a, b; z) = (a+1) z U(a+2, b+2; z) + (z-b) U(a+1, b+1; z),
\end{align*}
for the Tricomi confluent hypergeometric functions, (\ref{e6}) becomes
\begin{align*}
n^{2} <L_{n,k}^{\beta}(t), t^{r+1}> &= n [ (1-\beta)k + r + 1] <L_{n,k}^{\beta}(t), t^{r}>
+ (r+1) \beta k <L_{n,k}^{\beta}(t), t^{r-1}>.
\end{align*}
Now dividing by $<L_{n,k}^{(\beta)}(t), 1>$ leads to the desired relationship for the polynomials
$P_{r}(k; \beta)$ given by (\ref{e13}).
\end{proof}
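The recurrence (\ref{e13}) can also be checked numerically. The short sketch below computes the moment ratios
$P_{r}(k;\beta)$ exactly, in rational arithmetic, from the finite-sum expression for
$\langle L_{n,k}^{(\beta)}(t), t^{r}\rangle$ obtained in the proof of Lemma~\ref{l2}, and verifies (\ref{e13}) for a
few values of $k$ and $r$; the chosen $n$, $\beta$ and sampled ranges are arbitrary.
\begin{verbatim}
# Exact rational check of the recurrence for the moment ratios P_r(k; beta),
# using the finite-sum form of <L_{n,k}^{(beta)}, t^r> from the proof of Lemma 2.
from fractions import Fraction
from math import comb, factorial

def S(r, k, beta):
    # S_r = sum_{s=0}^{k-1} C(k+r-s, r+1) (k*beta)^s / s!   (exact rationals)
    x = k * beta
    return sum(Fraction(comb(k + r - s, r + 1)) * x**s / factorial(s)
               for s in range(k))

def P(r, k, beta, n):
    # P_r(k; beta) = <L_{n,k}^{(beta)}, t^r> / <L_{n,k}^{(beta)}, 1>
    #              = (r+1)!/n^r * S_r / S_0
    return Fraction(factorial(r + 1), n**r) * S(r, k, beta) / S(0, k, beta)

n = 10                     # the recurrence does not depend on n
beta = Fraction(1, 4)      # an arbitrary value with 0 <= beta < 1
for k in (1, 3, 7, 15):
    for r in range(3):
        lhs = n**2 * P(r + 2, k, beta, n)
        rhs = (n * ((1 - beta) * k + r + 2) * P(r + 1, k, beta, n)
               + (r + 2) * beta * k * P(r, k, beta, n))
        assert lhs == rhs                      # holds exactly
print("recurrence verified exactly for the sampled (k, r)")
\end{verbatim}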
\begin{lemma}\label{l4}
Let the $r$-th order moment of the operators (\ref{e2}) on the monomials $e_r(t)=t^r$, $r=0,1,\cdots$, be defined as
\begin{align*}
T_{n,r}^{\beta}(x):=D_n^\beta(e_r,x) = \sum_{k=0}^{\infty}\left(\int_0^\infty L_{n,k}^{(\beta)}(t)\,dt \right)^{-1}
L_{n,k}^{(\beta)}(x)\int_{0}^{\infty}L_{n,k}^{(\beta)}(t)t^r\,dt
\end{align*}
or, equivalently,
\begin{align}
T_{n,r}^{\beta}(x) = \sum_{k=0}^{\infty} P_{r}(k; \beta) \, L_{n,k}^{(\beta)}(x). \label{e14}
\end{align}
The first few are:
\begin{align}
T_{n,0}^{\beta}(x) &= 1, \hspace{10mm} T_{n,1}^{\beta}(x)=x+\frac{1}{n(1-\beta)}, \nonumber\\
T_{n,2}^{\beta}(x) &= x^2+\frac{4x}{n(1-\beta)}+\frac{2!}{n^2(1-\beta)}, \nonumber\\
T_{n,3}^{\beta}(x) &= x^3+\frac{9 \, x^2}{n(1-\beta)} + \frac{6(3-\beta) \, x}{n^2(1-\beta)^2}+
\frac{3!}{n^3(1-\beta)}, \label{e15} \\
T_{n,4}^{\beta}(x) &= x^{4} + \frac{16 \, x^{3}}{n(1-\beta)} + \frac{12(6-\beta) \, x^{2}}{n^{2}(1-\beta)^{2}}
+ \frac{12(3 \beta^{2} - 6 \beta + 8) \, x}{n^{3}(1-\beta)^{3}} + \frac{4!}{n^{4}(1-\beta)}. \nonumber\\
T_{n,5}^{\beta}(x) &= x^{5} + \frac{25 \, x^{4}}{n(1-\beta)} + \frac{20(10- \beta) \, x^{3}}
{n^{2}(1-\beta)^{2}} + \frac{120(\beta^{2} - 2 \beta + 5) \, x^{2}}{n^{3}(1-\beta)^{3}} \nonumber\\
& \hspace{15mm} + \frac{120(5 - 6\beta + 6\beta^{2} - \beta^{3}) \, x}{n^{4}(1-\beta)^{4}} + \frac{5!}{n^{5}(1-\beta)}
\nonumber
\end{align}
\end{lemma}
\begin{proof}
Clearly, by (\ref{e2}) we have $T_{n,0}^{\beta}(x)=1.$ Next, by the definition of $T_{n,r}^{\beta}(x)$, we have
\begin{align*}
T_{n,r}^{\beta}(x) = \sum_{k=0}^{\infty}\frac{< L_{n,k}^{(\beta)}(t), t^r >}{< L_{n,k}^{(\beta)}(t), 1 >} \,
L_{n,k}^{(\beta)}(x)
= \sum_{k=0}^{\infty} P_{r}(k; \beta) \, L_{n,k}^{(\beta)}(x).
\end{align*}
Using Lemma \ref{l1} and Lemma \ref{l2}, we have
\begin{align*}
T_{n,1}^{\beta}(x) &= \sum_{k=0}^{\infty}L_{n,k}^{(\beta)}(x)P_{1}(k; \beta)
= \sum_{k=0}^{\infty}L_{n,k}^{(\beta)}(x) \, \frac{1}{n} \left[ (1-\beta) k + \frac{1}{1-\beta} \right]\\
&= (1-\beta)B_n^{\beta}(t,x)+\frac{1}{n(1-\beta)}B_n^{\beta}(1,x)\\
&= x+\frac{1}{n(1-\beta)}.
\end{align*}
\begin{align*}
T_{n,2}^{\beta}(x) &= \sum_{k=0}^{\infty}L_{n,k}^{(\beta)}(x)P_{2}(k; \beta)
= \sum_{k=0}^{\infty}L_{n,k}^{(\beta)}(x) \, \frac{1}{n^{2}} \left[ (1-\beta)^{2} k^{2} + 3 k + \frac{2}{1-\beta} \right]\\
&= (1-\beta)^2 \, B_{n}^{\beta}(t^2,x) + \frac{3}{n} \, B_{n}^{\beta}(t,x) + \frac{2}{n^2(1-\beta)} \\
&= x^{2} + \frac{4x}{n(1-\beta)} + \frac{2}{n^2(1-\beta)}.
\end{align*}
\begin{align*}
T_{n,3}^{\beta}(x) &= \sum_{k=0}^{\infty}L_{n,k}^{(\beta)}(x)P_{3}(k; \beta) \\
&= \sum_{k=0}^{\infty}L_{n,k}^{(\beta)}(x) \, \frac{1}{n^3} \left[(1-\beta)^3 k^3 + 6(1-\beta)k^2 +
\frac{(11-8\beta) \, k}{1-\beta} + \frac{3!}{1-\beta}\right] \\
&= (1-\beta)^{3} B_{n}^{\beta}(t^3,x) + \frac{6(1-\beta)}{n} B_{n}^{\beta}(t^2,x) \\
& \hspace{20mm} + \frac{(11-8\beta)}{n^2(1-\beta)} B_{n}^{\beta}(t,x) + \frac{3!}{n^3(1-\beta)}
B_{n}^{\beta}(1,x) \\
&= x^3 + \frac{9 \, x^2}{n(1-\beta)} + \frac{6(3-\beta) \, x}{n^2(1-\beta)^2} + \frac{3!}{n^3(1-\beta)}.
\end{align*}
Continuing this process yields $T_{n,r}^{\beta}(x)$ for $r \geq 4$.
\end{proof}
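The elementary algebra in these computations can be double-checked symbolically. For instance, the following short
sketch reproduces the stated expression for $T_{n,2}^{\beta}(x)$ from the moments of Lemma~\ref{l1}.
\begin{verbatim}
# Symbolic check of the computation of T_{n,2}^beta(x) in the proof above.
import sympy as sp

n, b, x = sp.symbols('n beta x', positive=True)

# moments of the operators (1), as listed in Lemma 1
B1  = sp.Integer(1)
Bt  = x / (1 - b)
Bt2 = x**2 / (1 - b)**2 + x / (n * (1 - b)**3)

# T_{n,2} = (1-beta)^2 B_n(t^2,x) + (3/n) B_n(t,x) + (2/(n^2 (1-beta))) B_n(1,x)
T2 = (1 - b)**2 * Bt2 + 3 * Bt / n + 2 * B1 / (n**2 * (1 - b))
claimed = x**2 + 4 * x / (n * (1 - b)) + 2 / (n**2 * (1 - b))
print(sp.simplify(T2 - claimed))   # prints 0
\end{verbatim}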
\begin{lemma}\label{l5}
For $r \geq 1$ the polynomials $T_{n,r}^{\beta}(x)$ satisfy the relation
\begin{align}
T_{n,r}^{\beta}(x) = \left( x + \frac{2r-1}{n(1-\beta)} \right) \, T_{n,r-1}^{\beta}(x) - \sum_{j=0}^{r-2}
\frac{(-1)^{j} \, A_{j}^{r-2}}{n^{j+2} (1-\beta)^{j+2}} \, T_{n,r-j-2}^{\beta}(x) \label{e16}
\end{align}
where the first few coefficients $A_{j}^{r}$ are given by
\begin{align*}
A_{0}^{0} &= 1 + 2 \beta \\
A_{0}^{1} &= 4 + 4 \beta \hspace{10mm} A_{1}^{1} = 2 \beta + 6 \beta^{2} \\
A_{0}^{2} &= 9 + 6 \beta \hspace{10mm} A_{1}^{2} = 6 \beta + 30 \beta^{2} \hspace{10mm}
A_{2}^{2} = 12 \beta^{2} + 24 \beta^{3} \\
A_{0}^{3} &= 16 + 8 \beta \hspace{8mm} A_{1}^{3} = 12 \beta + 84 \beta^{2} \hspace{8mm}
A_{2}^{3} = 60 \beta^{2} + 96 \beta^{3} \\
& \hspace{20mm} A_{3}^{3} = -12 \beta^{2} + 48 \beta^{3} + 120 \beta^{4}
\end{align*}
\end{lemma}
\begin{proof}
Making use of (\ref{e13}) and (\ref{e14}) leads to the relation
\begin{align*}
n^{2} T_{n,r+1}^{\beta}(x) - n (r+1) T_{n,r}^{\beta}(x) = \sum_{k=0}^{\infty} k \left[ n (1-\beta) P_{r}(k; \beta)
+ (r+1) \beta P_{r-1}(k; \beta) \right] L_{n,k}^{(\beta)}(x).
\end{align*}
Now, making use of (\ref{e5}), the summation can be recast into the desired relation. This can be verified
by considering $T_{n,r}^{\beta}(x)$ as a linear combination of $T_{n,j}^{\beta}(x)$ for $0 \leq j \leq r-1$.
\end{proof}
\begin{remark} \label{r1}
If we denote the central moment as $\mu_{n,r}^\beta(x)=D_{n}^{\beta}((t-x)^r,x)$, then
\begin{align}
\mu_{n,1}^\beta(x) &= \frac{1}{n(1-\beta)}, \hspace{10mm} \mu_{n,2}^\beta(x) = \frac{2x}{n(1-\beta)}
+\frac{2!}{n^2(1-\beta)} \nonumber\\
\mu_{n,3}^\beta(x) &= \frac{12 \, x}{n^2(1-\beta)^2}+\frac{3!}{n^3(1-\beta)}, \label{e17} \\
\mu_{n,4}^\beta(x) &= \frac{12 \, x^2}{n^2(1-\beta)^2} + \frac{12(6 -2 \beta + \beta^2) \, x}{n^3(1-\beta)^3}
+ \frac{4!}{n^4(1-\beta)}. \nonumber
\end{align}
In general, using a similar approach, one can show that
$$\mu_{n,r}^\beta(x)=O\left(n^{-[(r+1)/2]}\right),$$
where $[\alpha]$ denotes the integer part of $\alpha$; for instance, (\ref{e17}) gives
$\mu_{n,3}^\beta(x)=O(n^{-2})$ and $\mu_{n,4}^\beta(x)=O(n^{-2})$.
\end{remark}
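The central moments in (\ref{e17}) can be cross-checked symbolically from (\ref{e15}) via the binomial expansion
$\mu_{n,r}^\beta(x)=\sum_{j=0}^{r}\binom{r}{j}(-x)^{r-j}T_{n,j}^{\beta}(x)$; the short sketch below does this for
$r=2,3,4$.
\begin{verbatim}
# Symbolic cross-check of the central moments against the moments T_{n,r} above.
import sympy as sp

n, b, x = sp.symbols('n beta x', positive=True)

T = [sp.Integer(1),
     x + 1/(n*(1 - b)),
     x**2 + 4*x/(n*(1 - b)) + 2/(n**2*(1 - b)),
     x**3 + 9*x**2/(n*(1 - b)) + 6*(3 - b)*x/(n**2*(1 - b)**2) + 6/(n**3*(1 - b)),
     x**4 + 16*x**3/(n*(1 - b)) + 12*(6 - b)*x**2/(n**2*(1 - b)**2)
          + 12*(3*b**2 - 6*b + 8)*x/(n**3*(1 - b)**3) + 24/(n**4*(1 - b))]

def mu(r):
    # central moment via the binomial expansion of D_n^beta((t-x)^r, x)
    return sum(sp.binomial(r, j) * (-x)**(r - j) * T[j] for j in range(r + 1))

targets = {2: 2*x/(n*(1 - b)) + 2/(n**2*(1 - b)),
           3: 12*x/(n**2*(1 - b)**2) + 6/(n**3*(1 - b)),
           4: 12*x**2/(n**2*(1 - b)**2)
              + 12*(6 - 2*b + b**2)*x/(n**3*(1 - b)**3) + 24/(n**4*(1 - b))}

for r, val in targets.items():
    print(r, sp.simplify(mu(r) - val))   # all print 0
\end{verbatim}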
\section{Direct Estimates}
In this section, we establish the following direct result:
\begin{Proposition}\label{t1}
Let $f$ be a continuous function on $[0,\infty)$. Then, as $n\to \infty$, the sequence $\{D_{n}^{\beta}(f,x)\}$
converges uniformly to $f(x)$ on $[a,b]\subset[0,\infty).$
\end{Proposition}
\begin{proof}
By Lemma \ref{l4}, as $n\to\infty$ the sequences $D_{n}^{\beta}(e_0,x)$, $D_{n}^{\beta}(e_1,x)$ and
$D_{n}^{\beta}(e_2,x)$ converge uniformly to $1$, $x$ and $x^2$, respectively, on every compact subset of
$[0,\infty)$. Thus the required result follows from the Bohman--Korovkin theorem.
\end{proof}
\begin{theorem}\label{t2} Let $f$ be a bounded integrable function on $[0,\infty)$ which has a second derivative at
a point $x\in [0,\infty)$. Then $$\lim_{n\to \infty} n[D_{n}^{\beta}(f,x)-f(x)]=\frac{1}{1-\beta}f^\prime(x)+\frac{x}{1-\beta}
f^{\prime\prime}(x).$$
\end{theorem}
\begin{proof}
By Taylor's expansion of $f$, we have
\begin{equation}\label{e18}
f(t)=f(x)+f^{\prime}(x)(t-x)+\frac{1}{2}f^{\prime\prime}(x)(t-x)^{2}
+r(t,x)(t-x)^{2},
\end{equation}
where $r(t,x)$ is the remainder term and $\displaystyle\lim_{t\rightarrow x}r(t,x)=0.$
Applying $D_{n}^{\beta}$ to equation (\ref{e18}), we obtain
\begin{align*}
D_{n}^{\beta}(f,x)-f(x) &= D_{n}^{\beta}(t-x,x)f^{\prime}(x)+ D_{n}^{\beta}\left(\left(t-x\right)^{2},x\right)
\frac{f^{\prime\prime}(x)}{2} \\
& \hspace{10mm} + D_{n}^{\beta}\left( r\left( t,x\right) \left( t-x\right)^{2},x\right)
\end{align*}
Using the Cauchy--Schwarz inequality, we have
\begin{equation}\label{e19}
D_{n}^{\beta}\left(r\left(t,x\right)\left(t-x\right)^{2},x\right)\leq \sqrt{D_{n}^{\beta}\left(r^{2}\left(t,x\right)
,x\right)}\sqrt{D_{n}^{\beta}\left(\left(t-x\right)^{4},x\right)}.
\end{equation}
As $r^{2}\left(x,x\right)=0$ and $r^{2}\left(t,x\right) \in C_{2}^{\ast}[0,\infty)$, we have
\begin{equation}\label{e20}
\lim_{n\rightarrow\infty}D_{n}^{\beta}\left(r^{2}\left(t,x\right),x\right)=r^{2}\left(x,x\right) = 0
\end{equation}
uniformly with respect to $x\in\left[0,A\right].$ Now from (\ref{e19}),
(\ref{e20}) and from Remark \ref{r1}, we obtain the required result.
\end{proof}
of any time-evolution in $\eta$, we may use
Eqns.~(\ref{eqn:weff_0}) and (\ref{eqn:weff_1}) to constrain $\eta$
now. Eliminating $\eta$ from these equations, we find a permissible
line through $w_0-w_a$ space
\begin{align}
\xo{(1)}{w_\mathrm{eff}} = \frac{\Delta_\mathrm{BH}'}{\Omega_\Lambda}\xo{(0)}{w_\mathrm{eff}} + \left(\frac{\Delta_\mathrm{BH}'' + \Delta_\mathrm{BH}'}{3\Omega_\Lambda} - \eta'(1)\right) \label{eqn:planck_constraint_line}.
\end{align}
We will use the BH population model, developed from the stellar
population in Appendix~\ref{sec:distribution}, to estimate the
derivatives of $\Delta_\mathrm{BH}$. From
Eqn.~(\ref{eqn:stellar_bh_model}), we numerically find that
\begin{align}
\Delta_\mathrm{BH}'(1) &= \begin{cases}
5.364 \times 10^{-5} & \text{(rapid)} \\
6.135 \times 10^{-5} & \text{(delayed)}
\end{cases} \\
\Delta_\mathrm{BH}''(1) &= \begin{cases}
-2.214 \times 10^{-4} & \text{(rapid)} \\
-2.447 \times 10^{-4} & \text{(delayed)}
\end{cases},
\end{align}
which will produce a permissible band in $w_0-w_a$ space. For
$\eta' \equiv 0$, this band is displayed in Figures
\ref{fig:planck-consistency-eta} and
\ref{fig:planck-consistency-zoomed}. Evidently, Planck data
disfavor gravastars with large $\eta$:
\begin{align}
0 < \eta(1) \leqslant 3\times10^{-2} \qquad \eta'(1) \equiv 0.
\end{align}
Within this range, however, gravastars are consistent with
Planck best fit constraints.
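For orientation, the slope and intercept of the line in Eqn.~(\ref{eqn:planck_constraint_line}) can be evaluated
directly from the derivatives quoted above. The sketch below assumes $\eta'(1)=0$ and a fiducial
$\Omega_\Lambda = 0.69$; the latter is an assumed value used only for illustration.
\begin{verbatim}
# Slope and intercept of the constraint line, assuming eta'(1) = 0 and a
# fiducial Omega_Lambda = 0.69 (assumed value, for illustration only).
OMEGA_LAMBDA = 0.69

models = {"rapid":   {"d1": 5.364e-5, "d2": -2.214e-4},
          "delayed": {"d1": 6.135e-5, "d2": -2.447e-4}}

for name, m in models.items():
    slope = m["d1"] / OMEGA_LAMBDA
    intercept = (m["d2"] + m["d1"]) / (3.0 * OMEGA_LAMBDA)   # eta'(1) = 0
    print(f"{name:8s}  w_eff^(1) = {slope:.3e} * w_eff^(0) + {intercept:+.3e}")
\end{verbatim}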
By inspection of Eqn.~(\ref{eqn:planck_constraint_line}),
$\eta'$ simply translates the constraint region vertically. If
we are to remain consistent with $\rmd \Delta_\mathrm{BH}/\rmd a
\geqslant 0$ for $0.9 < a < 1$, we must have
\begin{align}
\xo{(1)}{w_\mathrm{eff}} \leqslant -10\xo{(0)}{w_\mathrm{eff}} + 10[\eta(a) - 1].
\end{align}
This bound, together with the positivity bounds given in
Eqn.~(\ref{eqn:positivity_bounds}), is displayed in
Figure~\ref{fig:planck-consistency-eta} for a variety of $\eta$.
Recent results constraining the $w$CDM model from the Dark Energy
Survey (DES) ~\citep{abbott2017dark} cannot be immediately applied to
constrain $\eta$. This is because $w_\mathrm{eff}$ induced by a
gravastar population changes in time even if $\eta$ remains fixed for
all time. Since we predict only small changes in $w_\mathrm{eff}$,
however, it is reasonable to expect that the $w$CDM model will
approximate the gravastar scenario at late times. Indeed, their
reported value of $w$ \citep[][Eqn.~VII.5]{abbott2017dark} becomes
the following constraint on $\eta$
\begin{align}
\eta < 4\times10^{-2} \qquad (w_0 =-1.00^{+0.04}_{-0.05}),
\end{align}
which is consistent with the constraints reported in
\S\ref{sec:nutdiscussion}.
\section{Conclusion}
Gravitational vacuum stars have become a viable and popular
theoretical alternative to the pathological classical black hole (BH)
solutions of GR. These objects appear as BHs to exterior vacuum
observers, but contain de Sitter interiors beneath a thin crust. This
crust is located above the classical Schwarzschild horizon, placing the
entire gravastar in causal contact with the exterior universe. Many
existing studies have focused on the observational consequences of the
crust region for optical and gravitational wave signatures. These
studies are hindered by systematics relating to the
crust, which is
only loosely constrained theoretically. In this paper, we have
developed complementary constraints on the gravastar scenario based on
the well-established properties of their de Sitter interiors.
We place our gravastars within a flat Friedmann cosmology, which is
nowhere vacuum, and show from the action principle that the zero-order
Friedmann source must contain an averaged term sensitive to the
interiors. This is consistent with Birkhoff's theorem, which does not
apply without vacuum boundaries. Through conservation of
stress-energy, this term induces a time-dependent Dark Energy (DE)
density. This density is directly correlated to the evolution of
non-linear structure via star formation and subsequent collapse. The
gravastar crust produces a small deviation $\eta$ from a pure de
Sitter ($w=-1$) equation of state. This deviation becomes the single
parameter characterizing the gravastar population.
We replace all black holes with gravastars and consider the
cosmological effects of their subsequent DE contribution at three
epochs. During the primordial epoch ($T\sim10^{22}\,\mathrm{K}$ to $T\sim
10^{11}\,\mathrm{K}$), we find that the fraction of matter collapsing
into a primordial gravastar population with $\eta < 10^{-1}$ is
constrained $\sim 10$--$50$ orders of magnitude more tightly than by
existing primordial BH constraints. During the dark ages $(8.9
\lesssim z \lesssim 20)$ we show, via two approaches, that existing
astrophysical data support formation of a population of gravastars
that can account for all of the present-day DE density. We show that
any gravastar population with crust parameter $\eta < 6\times 10^{-2}$
can resolve the coincidence problem. During late times ($z < 5$), we
precisely interpret the gravastar scenario in the usual language of a
time-varying dark fluid. Using a BH population model built from the
cosmic star formation history and stellar collapse simulations, we
predict time-variation in the magnitude of $w(a)$ that tracks star
formation. We demonstrate complete consistency with Planck, given a
gravastar population with $\eta < 3\times 10^{-2}$. Further, we
predict very little time-variation in $w(a)$ at late times, consistent
with the recent results of the Dark Energy Survey.
We make definitive predictions for both the gravitational astronomy
community and dark energy surveys in the form of unexpected
quantitative correlations between the time-evolution of the DE density
and the BH population. In summary, the cosmological consequences of a
gravastar population are unambiguous, readily testable, and already
resolve many outstanding observational questions, without requiring
any \emph{ad hoc} departure from GR.
All code for generating the presented data and its
visualizations is released publicly (oprcp4).
\software{
\href{https://github.com/kcroker/oprcp4}{oprcp4},
\href{https://www.scipy.org}{\texttt{scipy}}~\citep{scipy},
\href{http://maxima.sourceforge.net/}{GNU Maxima},
\href{http://www.gnuplot.info}{gnuplot}}
\acknowledgments
This paper is dedicated to the memory of Prof. J. M. J. Madey,
inventor of the free-electron laser. His emphasis on the ``paramount
importance of boundary conditions'' heavily influenced this research.
The author thanks N. Kaiser (IfA) for sustained theoretical criticism,
J. Weiner (U.~Hawai`i) for thorough technical feedback concerning the
action, and T. Browder (U.~Hawai`i) and K. Nishimura (U.~Hawai`i) for
copious feedback during the preparation of all versions of the
manuscript. Additional thanks go to S. Ballmer (aLIGO) for
conversations concerning the capabilities of present and planned
gravitational wave observatories, C. Corti (AMS02) for visualization
suggestions, C. McPartland (IfA) for guidance in the stellar
literature, N. Warrington (U. Maryland) for stimulating discussions
and feedback, J. Kuhn (IfA) for comments on rich clusters, J. Learned
(U.~Hawai`i) for encouragement, R. Matsuda (U.~Tokyo/IPMU) for
comments on clarity, and The University of Tokyo for hospitality
during the preparation of this manuscript. This work was performed
with financial support from the Fulbright U.S. Student Program.
\section{Introduction}
\subsection{Background and motivation} Spectral gap for probability measure preserving actions is a fundamental notion in mathematics with a wide range of applications.
The goal of this paper is to introduce and study a notion of spectral gap for general measure preserving actions.
We begin our discussion by recalling the following:
\begin{definition}
A measure preserving action $\Gamma\curvearrowright (X,\mu)$ of a countable group $\Gamma$ on a standard probability space $(X,\mu)$ is said to have {\it spectral gap} if there exist $S\subset\Gamma$ finite and $\kappa>0$ such that
$$\|F\|_{2}\leqslant\kappa\sum_{g\in S}\|g\cdot F-F\|_{2}\;\;\;\text{for any $F\in L^2(X,\mu)$ with $\int_{X}F\;\text{d}\mu=0$}.$$
Here, $g\cdot F$ denotes the function given by $(g\cdot F)(x)=F(g^{-1}x)$, for every $g\in\Gamma$ and $x\in X$.
To justify the terminology, consider the self-adjoint averaging operator $P_S(F)=\frac{1}{2|S|}\sum_{g\in S}(g\cdot F+g^{-1}\cdot F)$.
Then the constant function {\bf 1} is an eigenfunction of $P_S$ with eigenvalue $1$, and the existence of $\kappa>0$ as above is equivalent to the presence of a gap right below $1$ in the spectrum of $P_S$.
\end{definition}
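As a toy illustration of the definition, consider the action of $\mathbb{Z}$ on the circle (with Lebesgue measure)
by an irrational rotation $\alpha$. This action has no spectral gap: for the mean-zero Fourier mode
$F_m(x)=e^{2\pi i m x}$ one has $\|g\cdot F_m-F_m\|_2=2|\sin(\pi m\alpha)|\,\|F_m\|_2$, which can be made
arbitrarily small by choosing $m$ with $m\alpha$ close to an integer, and the same holds for any finite set of
powers of the rotation. The short numerical sketch below, with an arbitrary choice of $\alpha$, makes this
quantitative.
\begin{verbatim}
# Toy illustration: the rotation action of Z on the circle by an irrational
# angle alpha has no spectral gap; the ratios below can be made arbitrarily small.
import math

alpha = math.sqrt(2.0) - 1.0        # an arbitrary irrational rotation angle
record = []
for m in range(1, 100000):
    # ||g.F_m - F_m||_2 / ||F_m||_2 for the Fourier mode F_m(x) = exp(2 pi i m x)
    ratio = 2.0 * abs(math.sin(math.pi * m * alpha))
    if not record or ratio < record[-1][1]:
        record.append((m, ratio))

for m, ratio in record[-5:]:
    print(f"m = {m:6d}   ||g.F_m - F_m||_2 / ||F_m||_2 = {ratio:.2e}")
# the ratios tend to 0, so no kappa > 0 can satisfy the spectral gap inequality
\end{verbatim}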
Let $G$ be a compact Lie group and denote by $m_G$ its Haar measure.
An important question, which has been investigated intensively over the last three decades, is whether the left translation action $\Gamma\curvearrowright (G,m_G)$, associated to a countable dense subgroup $\Gamma<G$, has spectral gap.
Interest in this question first arose in the early 1980s, in connection with Ruziewicz's problem for the $n$-sphere $S^n$ (also known as the Banach-Ruziewicz problem). The latter asks if the Lebesgue measure on $S^n$ is the unique finitely additive, rotation-invariant measure defined on all Lebesgue measurable subsets. For $n=1$, Banach used the amenability of $SO(2)$ (as a discrete group) to show that the answer is negative \cite{Ba23}. For $n\geqslant 2$, however, the problem remained open for a long time.
First, it was realized that the existence of a countable dense subgroup of $SO(n+1)$ with the spectral gap property implies an affirmative answer \cite{dJR79,Ro81}. By using Kazhdan's property (T), Margulis \cite{Ma80} and Sullivan \cite{Su81} then obtained an affirmative answer for every $n\geqslant 4$. The remaining cases $n=2, 3$ were finally settled in the affirmative by Drinfeld \cite{Dr84} via the construction of a countable dense subgroup of $SU(2)$ with the spectral gap property.
An optimal such construction was achieved soon after by Lubotzky, Phillips, and Sarnak \cite{LPS86,LPS87} (see \cite{Oh05} for a generalization to compact simple Lie groups not locally isomorphic to $SO(3)$). For all of this, see the excellent survey \cite{Lu94}.
Later on, a new robust method for proving the spectral gap property for subgroups of $SU(2)$ was developed by Gamburd, Jakobson, and Sarnak \cite{GJS99}. It is worth pointing out that in all of these results, the subgroups involved are generated by matrices with algebraic entries.
In 2006, a breakthrough was made by Bourgain and Gamburd who established the spectral gap property for any dense subgroup of $SU(2)$ generated by matrices with algebraic entries \cite{BG06}. Their approach followed two earlier major works: the authors' work on expansion for Cayley graphs of $SL_2(\mathbb F_p)$ \cite{BG05}, and Helfgott's product theorem for subsets of $SL_2(\mathbb F_p)$ \cite{He05}.
Subsequently, Bourgain and Gamburd established the spectral gap property for dense subgroups of $G=SU(d)$ generated by matrices with algebraic entries, for any $d\geqslant 2$ \cite{BG10}. Recently, this was generalized further by Benoist and de Saxc\'{e} to cover arbitrary connected compact simple Lie groups $G$ \cite{BdS14}.
If $G$ is a compact group and $\Gamma$ is a countable dense subgroup with the spectral gap property, then the Haar measure $m_G$ is the unique finitely additive $\Gamma$-invariant measure defined on all measurable subsets of $G$.
One of the main motivations for this paper is to formulate and prove analogues of the results from \cite{BG06,BG10,BdS14} that apply to general simple Lie groups $G$. By analogy with the compact case, it would thus be desirable to find a notion of spectral gap for infinite measure preserving actions, which in the case of left translation actions on locally compact groups $G$, implies a uniqueness property for its left Haar measures as finitely additive measures.
\subsection{Local spectral gap}
As we explain in Corollary \ref{BR}, the following new notion of spectral gap satisfies the desired property.
\begin{definition}
Let $\Gamma\curvearrowright (X,\mu)$ be a measure preserving action of a countable group $\Gamma$ on a standard measure space $(X,\mu)$. We say that $\Gamma\curvearrowright (X,\mu)$ has {\it local spectral gap} with respect to a measurable set $B\subset X$ of finite measure if there exist $S\subset\Gamma$ finite and $\kappa>0$ such that
$$\|F\|_{2,B}\leqslant\kappa\sum_{g\in S}\|g\cdot F-F\|_{2,B}\;\;\;\text{for any $F\in L^2(X,\mu)$ with $\int_{B}F\;\text{d}\mu=0$}.$$
Here, $\|F\|_{2,B}:=\displaystyle{\big(\int_{B}|F|^2\;\text{d}\mu\big)^{\frac{1}{2}}}$ denotes the $L^2$-norm of the restriction of $F$ to $B$.
\end{definition}
\begin{remark}\label{rem1} We continue with a few remarks on this definition:
\begin{enumerate}
\item Although the action $\Gamma\curvearrowright (X,\mu)$ is not required to be ergodic, this is automatic if the action has local spectral gap with respect to a set $B$ such that $\cup_{g\in\Gamma}\;g\cdot B$ is co-null in $X$.
\item If $(X,\mu)$ is a probability space and $B=X$, then local spectral gap coincides with spectral gap. Assume that $(X,\mu)$ is an infinite measure space with $X$ being a locally compact space and $\mu$ a Radon measure. Then the notion of local spectral gap aims to capture the intuitive idea that functions on $X$ which are locally almost $\Gamma$-invariant, must be locally almost constant (see also Proposition \ref{kazhdan}). This is different from the ``global" notion of spectral gap requiring that there is no sequence of unit almost $\Gamma$-invariant functions in $L^2(X,\mu)$. Indeed, since any sequence of unit almost $\Gamma$-invariant functions in $L^2(X,\mu)$ converges weakly to $0$ on compact subsets of $X$, the latter reflects only the dynamics of the action at infinity.
\item The notion of local spectral gap appears implicitly in Margulis' positive resolution of the Banach-Ruziewicz problem for $\mathbb R^n$ ($n\geqslant 3$).
More precisely, with the above terminology,
he first shows the existence of a subgroup $\Gamma<\mathbb R^n\rtimes SO(n)$ such that the action $\Gamma\curvearrowright (\mathbb R^n,\lambda^n)$ has local spectral gap, and then concludes that the Lebesgue measure $\lambda^n$ is indeed the unique finitely additive isometry-invariant measure defined on all bounded measurable subsets of $\mathbb R^n$ \cite{Ma82}.
\item While local spectral gap might depend on the choice of $B$, the following independence result can be easily shown: assume that $B_1,B_2$ are measurable subsets of $X$ such that $B_1\subset K\cdot B_2$ and $B_2\subset K\cdot B_1$, for some finite set $K\subset\Gamma$. Then local spectral gap with respect to $B_1$ is equivalent to local spectral gap with respect to $B_2$ (see Proposition \ref{indep}).
\end{enumerate}
\end{remark}
{\bf Notation.}
Let $G$ be a locally compact second countable group and $H<G$ be a closed subgroup.
Here and after, we assume that the locally compact topology on $G$ is Hausdorff.
We denote by $m_G$ a fixed left Haar measure of $G$. We also denote by $m_{G/H}$ a fixed quasi-invariant Borel regular measure on $G/H$ which is ``nice", in the sense that it arises from a rho-function for the pair $(G,H)$ (see \cite[Theorem B.1.4.]{BdHV08}).
The following is our main result.
\begin{main}[local spectral gap]\label{main}
Let $G$ be a connected simple Lie group. Denote by $\frak g$ the Lie algebra of $G$ and by ${\operatorname{Ad}}:G\rightarrow\operatorname{GL}(\frak g)$ its adjoint representation.
Let $\Gamma<G$ be a dense subgroup. Assume that there is a basis $\frak B$ of $\frak g$ such that the matrix of $\operatorname{Ad}(g)$ in the basis $\frak B$ has algebraic entries, for any $g\in\Gamma$. Let $B\subset G$ be a measurable set with compact closure and non-empty interior.
Then the left translation action $\Gamma\curvearrowright (G,m_G)$ has local spectral gap with respect to $B$. \end{main}
In the case $G$ is compact, Theorem \ref{main} recovers the main results of \cite{BG06,BG10,BdS14}. On the other hand, if $G$ is not compact, Theorem \ref{main} reveals an entirely new type of phenomenon for locally compact groups.
\begin{remark}
The assumption on $\Gamma<G$ is in particular satisfied if $G=\operatorname{SL}_n(\mathbb R)$, for some $n\geqslant 2$, and $\Gamma$ is a dense subgroup of $G$ such that every matrix $g\in\Gamma$ has algebraic entries.
\end{remark}
\begin{remark} In view of Remark \ref{rem1} (4), the conclusion of Theorem \ref{main} does not depend on the choice of the set $B$. Indeed, if $B_1\subset G$ has compact closure and $B_2\subset G$ has non-empty interior, then there exists a finite set $K\subset\Gamma$ such that $B_1\subset K\cdot B_2$.
\end{remark}
Theorem \ref{main} is a consequence of our main technical result proving a restricted spectral gap estimate in the spirit of Bourgain and Yehudayoff's pioneering work \cite{Bo09,BY11}.
\begin{main}[restricted spectral gap]\label{restricted}
Assume that $\Gamma<G$ are as in Theorem \ref{main}.
Let $B\subset G$ be a measurable set with compact closure. Let $U$ be a neighborhood of the identity element in $G$.
Then there exist a finite set $T\subset\Gamma\cap U$ and a finite dimensional subspace $V\subset L^2(B)$ such that the probability measure $\mu=\frac{1}{2|T|}\sum_{g\in T}(\delta_g+\delta_{g^{-1}})$ satisfies $\|\mu*F\|_2<\frac{1}{2}\|F\|_2$, for every $F\in L^2(B)\ominus V$.
\end{main}
Note that unlike Theorem \ref{main}, this result is new even in the case of compact groups, where it leads to some unexpected consequences (see Remark \ref{spectra}).
Theorem \ref{restricted} concerns the left regular representation of $G$. The proof of Theorem \ref{restricted} moreover shows that
for any $0<r<1$ there exists a finite set $T\subset\Gamma\cap U$ such that the conclusion holds with $r$ in place of $\frac{1}{2}$. As a consequence, it follows that a more general statement, addressing all quasi-regular representations of $G$, holds true.
\begin{mcor}\label{by}
Assume that $\Gamma<G$ are as in Theorem \ref{main}.
Let $H<G$ be a closed subgroup and denote by $\pi:G\rightarrow \mathcal U(L^2(G/H,m_{G/H}))$ the associated quasi-regular unitary representation. Let $B\subset G/H$ be a measurable set with compact closure. Let $U$ be a neighborhood of the identity in $G$.
Then there exist a finite set $T\subset\Gamma\cap U$ and a finite dimensional subspace $V\subset L^2(B)$ such that the probability measure $\mu=\frac{1}{2|T|}\sum_{g\in T}(\delta_g+\delta_{g^{-1}})$ satisfies $\|\pi(\mu)(F)\|_2<\frac{1}{2}\|F\|_2$, for any $F\in L^2(B)\ominus V$.
\end{mcor}
Here, for a probability measure $\mu$, we denote by $\pi(\mu)$ the averaging operator $\sum_{g\in G}\mu(\{g\})\pi(g)$.
Corollary \ref{by} generalizes \cite[Theorem 5]{BY11} which deals with the case when $G$ is $SL_2(\mathbb R)$ and $H$ is the subgroup of upper triangular matrices. Then $G/H$ can be identified with the real projective line, $\mathbb P^1(\mathbb R)$.
The proof of \cite[Theorem 5]{BY11} is specific to this situation, as it relies on the fact that the action of $SL_2(\mathbb R)$ on $\mathbb P^1(\mathbb R)$ is $2$-transitive to show a certain mixing property. Corollary \ref{by} provides an alternative approach to the mixing property in this case. Corollary \ref{by} is new in all other cases with $G$ non-compact, including the simplest one when $G=SL_2(\mathbb R)$ and $H$ is trivial.
\begin{remark}\label{quant}
In the case $G$ has trivial center, the proof of Theorem \ref{restricted} yields a more quantitative statement (see Theorem \ref{restricted2}).
To explain this, identify $G$ with a subgroup of $\operatorname{GL}_n(\mathbb R)$, for some $n$, and endow it with the metric induced by the Hilbert-Schmidt norm $\|.\|_2$. For $\varepsilon>0$, denote $B_{\varepsilon}(1)=\{g\in G|\;\|g-1\|_2<\varepsilon\}$.
Then the proof of Theorem \ref{restricted} shows that there is a constant $C>1$ (depending on $\Gamma$) such that for any small enough $\varepsilon>0$, there exist a finite set $T\subset\Gamma\cap B_{\varepsilon}(1)$ which freely generates a group, and a finite dimensional subspace $V\subset L^2(B)$ such that denoting $\mu=\frac{1}{2|T|}\sum_{g\in T}(\delta_g+\delta_{g^{-1}})$ we have
\begin{itemize}
\item $|T|<\frac{1}{\varepsilon^{C}}$, and
\item $\|\mu*F\|_2<\varepsilon\|F\|_2$, for every $F\in L^2(B)\ominus V$.
\end{itemize}
\end{remark}
\begin{remark}\label{spectra}
Theorem \ref{restricted} (and its quantitative version) sheds some new light on the spectra of averaging operators on compact groups.
In order to briefly recall known results along these lines, assume for simplicity that $G=SU(2)$.
Then the irreducible representations of $G$ can be listed as $\pi_n:G\rightarrow\mathcal U(\mathcal H_n)$, where dim$(\mathcal H_n)=n+1$, for every $n\geqslant 0$, and by the Peter-Weyl theorem we have that $L^2(G)=\bigoplus_{n\geqslant 0}\mathcal H_n^{\oplus {(n+1)}}$.
Let $T\subset G$ be a finite set which freely generates a subgroup, consider the probability measure $\mu=\frac{1}{2|T|}\sum_{g\in T}(\delta_g+\delta_{g^{-1}})$, and denote by $P_{\mu}$ the operator $F\mapsto\mu*F$.
Then $P_{\mu}$ is self-adjoint and since $\|P_{\mu}\|\leqslant 1$, its spectrum is contained in $[-1,1]$.
Moreover, since $P_{\mu}$ can be identified with $\bigoplus_{n\geqslant 0}\pi_n(\mu)^{\oplus {n+1}}$, it is also diagonalizable.
The asymptotic distribution of the eigenvalues of $P_{\mu}$ has been studied in \cite{LPS86, GJS99}, where it is shown that most of them lie in the interval $\big[-\frac{\sqrt{2|T|-1}}{|T|},\frac{\sqrt{2|T|-1}}{|T|} \big]$.
More precisely, if $d_n$ denotes the number of eigenvalues of $\pi_n(\mu)$ that lie outside this interval (so-called ``exceptional" eigenvalues), then $\frac{d_n}{n}\rightarrow 0$ (see \cite[Theorem 1.1]{LPS86}).
Assume from now on that the elements of $T$ have algebraic entries. Then the following sharper estimate holds: $\frac{d_n}{n}\ll\frac{1}{\log n}$, for large $n$ (see \cite[Theorem 1.3]{GJS99}). A remarkable fact, discovered by Lubotzky, Phillips and Sarnak is that for certain sets $T$, the operator $P_{\mu}$ has no exceptional eigenvalues, i.e. $d_n=0$, for all $n\geqslant 1$ (see \cite{LPS86,LPS87}).
As already mentioned above, the more recent work \cite{BG06} implies that $P_{\mu}$ has a spectral gap, i.e.\ the supremum $\kappa_{\mu}$ of the spectrum of $P_{\mu}$ with the point $1$ removed satisfies $\kappa_{\mu}<1$.
However, besides these facts, not much is known about the exceptional eigenvalues of $P_{\mu}$. In particular, to the best of our knowledge, it is unknown whether $\kappa_{\mu}$ is ever an eigenvalue of $P_{\mu}$.
Theorem \ref{restricted} implies that $\kappa_{\mu}$ can be an isolated eigenvalue of $P_{\mu}$, and thus $P_{\mu}$ can have a second spectral gap.
Moreover, it shows that operators of the form $P_{\mu}$ may have arbitrarily many gaps at the top of their spectrum.
To make this precise, let $\varepsilon>0$ be small enough, and let $T\subset\Gamma\cap B_{\varepsilon}(1)$ and $\mu$ be as given by Remark \ref{quant}.
Then $P_{\mu}$ has only finitely many eigenvalues outside the interval $[-\varepsilon,\varepsilon]$.
On the other hand, since $T\subset B_{\varepsilon}(1)$, the number of eigenvalues of $P_{\mu}$ belonging to the interval $(\frac{1}{2},1)$ gets arbitrarily large, as $\varepsilon\rightarrow 0$. In fact, it is easy to see that this number is $\gg\frac{1}{\varepsilon^2}$.
In combination with \cite[Theorem 1.1]{LPS86} the following picture emerges:
the spectrum of $P_{\mu}$ contains
\begin{itemize}
\item the whole interval $\big[-\frac{\sqrt{2|T|-1}}{|T|},\frac{\sqrt{2|T|-1}}{|T|} \big]$,
\item only finitely many points, all of which are isolated eigenvalues, outside $\big[-\frac{1}{|T|^{\frac{1}{C}}},\frac{1}{|T|^{\frac{1}{C}}}\big]$.
\item $\gg |T|^{\frac{2}{C}}$ points in the interval $(\frac{1}{2},1)$.
\end{itemize}
\end{remark}
\subsection{Applications} We now turn to discussing several applications of our main results.
\subsubsection{\bf{The Banach-Ruziewicz problem}} The original Banach-Ruziewicz problem asks whether the Lebesgue measure on $S^n$ (resp. $\mathbb R^n$) is the unique rotation-invariant (resp. isometry-invariant) finitely additive measure defined on all bounded Lebesgue measurable sets. This problem is an illustration of a general question:
let $\Gamma$ be a locally compact group acting isometrically on a locally compact metric space $X$ with an invariant Radon measure $\mu$. Is $\mu$ the unique $\Gamma$-invariant finitely additive measure defined on all $\mu$-measurable subsets of $X$ with compact closure? Here and after, uniqueness is of course meant up to a multiplicative constant.
If the space $X$ is compact and the group $\Gamma$ is countable discrete, then a positive answer to this question is closely connected to the spectral gap of the action. The connection stems from the well-known fact that the action $\Gamma\curvearrowright (X,\mu)$ has spectral gap if and only if integration against $\mu$ is the unique $\Gamma$-invariant mean on $L^{\infty}(X,\mu)$ (see \cite{Ro81,Sc81}). On the other hand, if $\mu$ is unique among invariant finitely additive measures, then integration against $\mu$ is unique among invariant means. The converse of this statement is also true for certain classes of actions, including left translation actions on compact groups (see Remark \ref{cominv}).
In Section \ref{6}, we generalize these results to the case when $X$ is locally compact.
Assume that every orbit $\Gamma\cdot x$ is dense in $X$, and denote by $L^{\infty}_{\text{c}}(X,\mu)$ the algebra of $L^{\infty}$-functions with compact support. Firstly, we prove that the action $\Gamma\curvearrowright (X,\mu)$ has local spectral gap with respect to a measurable set with compact closure and non-empty interior if and only if integration against $\mu$ is the unique $\Gamma$-invariant positive linear functional on $L^{\infty}_{\text{c}}(X,\mu)$ (see Theorem \ref{mean}). This result is partially inspired by Margulis' work \cite{Ma82}, which we follow closely in the proof of the only if assertion.
Secondly, in the case of left translation actions $\Gamma\curvearrowright (G,m_G)$ on locally compact groups $G$, we show that the Haar measure $m_G$ is unique among invariant finitely additive measures if and only if integration against $m_G$ is unique among invariant positive linear functionals on $L^{\infty}_{\text{c}}(G,m_G)$ (see Theorem \ref{BRtext}).
Altogether, by combining these two results we derive the following:
\begin{main}\label{BR}
Let $G$ be a locally compact second countable group and $\Gamma<G$ be a countable dense subgroup. Denote by $\mathcal C(G)$ the family of measurable subsets $A\subset G$ with compact closure.
Then the following conditions are equivalent:
\begin{enumerate}
\item If $\nu:\mathcal C(G)\rightarrow [0,\infty)$ is a $\Gamma$-invariant, finitely additive measure, then there exists $\alpha\geqslant 0$ such that $\nu(A)=\alpha\;m_G(A)$, for all $A\in\mathcal C(G)$.
\item The left translation action $\Gamma\curvearrowright (G,m_G)$ has local spectral gap.
\end{enumerate}
\end{main}
Note that in order to treat arbitrary locally compact groups, we use the structure theory of locally compact groups \cite{MZ55} as well as Breuillard and Gelander's topological Tits alternative \cite{BG04}.
As an immediate consequence of Theorems \ref{main} and \ref{BR} we deduce the following uniqueness characterization of Haar measures on simple Lie groups, in the spirit of the Banach-Ruziewicz problem:
\begin{mcor}
Assume that $\Gamma<G$ are as in Theorem \ref{main}.
Then, up to a multiplicative constant, the Haar measure $m_G$ of $G$ is the unique finitely additive $\Gamma$-invariant measure defined on $\mathcal C(G)$.
\end{mcor}
\subsubsection{\bf{Orbit equivalence rigidity}} Next, we apply our results to the theory of orbit equivalence of actions. This area has flourished in the last 15 years, with many new exciting developments (see the surveys \cite{Po07,Fu09,Ga10}). To recall the notion of orbit equivalence, consider two ergodic measure preserving actions $\Gamma\curvearrowright (X,\mu)$ and $\Lambda\curvearrowright (Y,\nu)$ of countable groups $\Gamma$, $\Lambda$ on standard measure spaces $(X,\mu)$, $(Y,\nu)$. The actions are called {\it orbit equivalent} if there exists a measure class preserving Borel isomorphism $\theta:X\rightarrow Y$ such that $\theta(\Gamma\cdot x)=\Lambda\cdot\theta(x)$, for $\mu$-almost every $x\in X$. The simplest instance of when the actions are orbit equivalent is when they are {\it conjugate}, i.e. there exists a measure class preserving Borel isomorphism $\theta:X\rightarrow Y$ and a group isomorphism $\delta:\Gamma\rightarrow\Lambda$ such that $\theta(g\cdot x)=\delta(g)\cdot\theta(x)$, for all $g\in\Gamma$ and $\mu$-almost every $x\in X$.
In general, however, orbit equivalence is a much weaker notion of equivalence than conjugacy. This is best illustrated by the striking theorem of Ornstein-Weiss and Connes-Feldman-Weiss showing that if the groups $\Gamma,\Lambda$ are both infinite amenable and the measure spaces $(X,\mu),(Y,\nu)$ are either both finite or both infinite, then the actions are orbit equivalent (see \cite{OW80,CFW81}).
In sharp contrast, there exist ``rigid" situations when for certain classes of actions of non-amenable groups one can deduce conjugacy from orbit equivalence.
It was recently discovered in \cite{Io13} that such a rigidity phenomenon occurs for left translation actions on compact groups in the presence of spectral gap. More precisely, let $\Gamma<G$ and $\Lambda<H$ be countable dense subgroups of compact connected Lie groups with trivial centers.
Assuming that $\Gamma\curvearrowright (G,m_G)$ has spectral gap, it follows from \cite[Corollary 6.3]{Io13} that the actions $\Gamma\curvearrowright (G,m_G)$ and $\Lambda\curvearrowright (H,m_H)$ are orbit equivalent if and only if they are conjugate. Most recently, this result has been generalized to the case when $G$ and $H$ are arbitrary, not necessarily compact, connected Lie groups with trivial centers (see \cite[Theorems A and 4.1]{Io14}). The only difference is that in the locally compact setting, the spectral gap assumption has to be replaced with the assumption that the action $\Gamma\curvearrowright (G,m_G)$ is strongly ergodic.
To recall the latter notion, let $\Gamma\curvearrowright (X,\mu)$ be an ergodic measure preserving action. Then, loosely speaking, strong ergodicity requires that any sequence of asymptotically invariant subsets of $X$ must be asymptotically trivial. In order to make this precise, since the measure $\mu$ can be infinite, we first choose a probability measure $\mu_0$ on $X$ with the same null sets as $\mu$.
The action is said to be {\it strongly ergodic}
if any sequence $\{A_n\}$ of measurable subsets of $X$ satisfying $\mu_0(g\cdot A_n\;\Delta\; A_n)\rightarrow 0$, for all $g\in\Gamma$, must satisfy $\mu_0(A_n)(1-\mu_0(A_n))\rightarrow 0$ \cite{CW80,Sc80}. It is easy to see that this definition does not depend on the choice of $\mu_0$.
For translation actions on compact groups, strong ergodicity is implied by the spectral gap property, which is now known to hold in considerably large generality by \cite{BG06,BG10,BdS14}. On the other hand, in the case of translation actions on locally compact non-compact groups, strong ergodicity seems much harder to work with, and so far could only be checked in two rather specific situations (see \cite[Propositions G and H]{Io14}).
Nevertheless, strong ergodicity is implied by local spectral gap, for arbitrary ergodic measure preserving actions. Moreover, for translation actions on locally compact groups, we prove that local spectral gap and strong ergodicity are equivalent (see Theorem \ref{BRtext}). This generalizes \cite[Theorem 4]{AE10}, which dealt with the compact case.
Consequently, all actions covered by Theorem \ref{main} are strongly ergodic, which in combination with \cite{Io14} allows us to conclude the following:
\begin{mcor}\label{OErig}
Assume that $\Gamma<G$ are as in Theorem \ref{main}. Suppose that $G$ has trivial center. Let $H$ be any connected Lie group with trivial center and $\Lambda<H$ be any countable dense subgroup.
Then the left translation actions $\Gamma\curvearrowright (G,m_G)$ and $\Lambda\curvearrowright (H,m_H)$ are orbit equivalent if and only if there is a topological isomorphism $\delta:G\rightarrow H$ such that $\delta(\Gamma)=\Lambda$.
\end{mcor}
\begin{remark} If $\widetilde\Gamma<G$ is a countable subgroup that contains $\Gamma$, then Theorem \ref{main} implies that the action $\widetilde\Gamma\curvearrowright (G,m_G)$ has local spectral gap.
Hence, Corollary \ref{OErig} remains valid if $\Gamma$ is replaced by $\widetilde\Gamma$.
\end{remark}
\begin{remark}
In the context of Corollary \ref{OErig}, assume moreover that $\Gamma$ is a free group. Since the left translation action $\Gamma\curvearrowright (G,m_G)$ is strongly ergodic, it is not amenable in the sense of \cite{Zi78}. Then \cite[Theorem A]{HV12} (which builds on \cite{OP07,PV11}) implies that $L^{\infty}(G)$ is the unique Cartan subalgebra of the crossed product von Neumann algebra $L^{\infty}(G)\rtimes\Gamma$, up to unitary conjugacy. In combination with Corollary \ref{OErig}, we deduce that the crossed product von Neumann algebras $L^{\infty}(G)\rtimes\Gamma$ and $L^{\infty}(H)\rtimes\Lambda$ are isomorphic if and only if there is a topological isomorphism $\delta:G\rightarrow H$ such that $\delta(\Gamma)=\Lambda$.
\end{remark}
\subsubsection{\bf{Continuous and monotone expanders}} Our main results also lead to a general construction of continuous and monotone expanders, extending the main result of \cite{BY11}. Expander graphs are infinite families of highly connected sparse finite graphs. It is sometimes desirable to find expander graphs within certain classes of graphs. A finite graph is called {\it monotone} if it is defined by monotone functions. This means that the vertex set of the graph can be identified with $[n]=\{1,2,...,n\}$ in such a way that there exist partially defined monotone maps $\varphi_i:[n]\rightarrow [n], 1\leqslant i\leqslant d,$ such that two vertices $a,b$ are connected iff $b=\varphi_i(a)$, for some $i$.
Bourgain and Yehudayoff recently found the first explicit construction of constant degree monotone expander graphs \cite{Bo09,BY11}. Their approach is to first build a continuous monotone expander and then discretize it to obtain monotone expanders.
In their terminology, a {\it continuous expander} consists of a family of smooth partially defined maps $\varphi_i:B\rightarrow B$, $1\leqslant i\leqslant d$, where $B$ is a compact subset of a manifold endowed with a finite measure $A\mapsto |A|$, such that the following holds: there is $\kappa>0$ such that for every measurable set $A\subset B$ with $|A|\leqslant\frac{|B|}{2}$, we have $|\cup_{i=1}^d\varphi_i(A)|\geqslant (1+\kappa)|A|.$
As a consequence of Theorem \ref{restricted}, we obtain the following result.
\begin{mcor}\label{monexp}
Assume that $\Gamma<G$ are as in Theorem \ref{main}.
Let $H<G$ be a closed subgroup and $B\subset G/H$ be a measurable set with compact closure and non-empty interior. For a measurable subset $A\subset G/H$, denote $|A|:=m_{G/H}(A)$.
Then there exists a finite set $S\subset\Gamma$ for which there is a constant $\kappa>0$ such that for any measurable set $A\subset B$ with $|A|\leqslant\frac{|B|}{2}$ we have
$$|\big(\cup_{g\in S}g\cdot A\big)\cap B|\geqslant (1+\kappa)|A|.$$
Moreover, if $B$ is open and connected, and $\varepsilon>0$ is given, then $S\subset\Gamma$ can be taken inside $B_{\varepsilon}(1)$.
\end{mcor}
Assume that $G$ is equal to $SL_2(\mathbb R)$, $H$ is the subgroup of upper triangular matrices, identify $G/H$ with the real projective line $\mathbb P^1(\mathbb R)=\mathbb R\cup\{\infty\}$, and let $B=[0,1]$.
With this notation, \cite[Theorem 4]{BY11} provides a finite set $S\subset SL_2(\mathbb Q)$ which satisfies the conclusion of Corollary \ref{monexp}. Moreover, $S$ can be taken close enough to the identity so that the restriction $\tilde g$ of every $g\in S$ to $B\cap g^{-1}B$ is monotonically increasing. Therefore, the family $\{\tilde g\}_{g\in S}$ is a continuous monotone expander.
Corollary \ref{monexp} generalizes \cite[Theorem 4]{BY11} by showing the existence of such a set $S$ inside any dense subgroup of $G$ generated by algebraic elements. Note that, as opposed to \cite{BY11}, our construction of $S$ is not explicit. On the other hand, unlike \cite{BY11}, our construction does not rely on the strong Tits alternative from \cite{Br08}.
\subsubsection{\bf{Spectral gap for delayed bounded random walks}} Our last application concerns random walks on Lie groups that are bounded and delayed, in a sense made precise below. Let $G$ be a connected simple Lie group and $S\subset G$ be a finite symmetric set. Denote $k=|S|$ and enumerate $S=\{g_1,...,g_k\}$. Let $B\subset G$ be a measurable set which is bounded (i.e. has compact closure).
We define a random walk on $B$ as follows: a given point $x\in B$ moves with probability $\frac{1}{k}$ to each of the points $h_1x, h_2x,...,h_kx$, where $h_i=g_i$, if $g_ix\in B$, and $h_i=e$, if $g_ix\notin B$. In other words, with probability $\frac{1}{k}$, $x$ either moves to $g_ix$ or stays put, depending on whether $g_ix$ belongs to $B$ or not.
The associated transition operator $P_S:L^2(B)\rightarrow L^2(B)$ is given by $$P_S(F)=\frac{1}{k}\sum_{i=1}^k\big( {\bf 1}_{B\cap g_iB}\;g_i\cdot F+{\bf 1}_{B\setminus g_iB}\;F\big), \;\;\;\;\text{for every $F\in L^2(B)$}.$$
Then $P_S$ is symmetric, $\|P_S\|\leqslant 1$, and $P_S({\bf 1}_B)={\bf 1}_B$, where ${\bf 1}_B$ denotes the characteristic function of $B$.
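To make the construction concrete, the following toy sketch discretizes a bounded set as $B=\{0,\dots,N-1\}$ and
uses integer translations in place of the group elements $g_i$; this is only an illustrative stand-in, since the
setting of the corollary below is a simple Lie group. The sketch assembles the matrix of $P_S$ and confirms that it
is symmetric, stochastic, and has the constant function as an eigenvector with eigenvalue $1$.
\begin{verbatim}
# Toy discretization of the delayed bounded walk (illustrative stand-in only).
import numpy as np

N = 50                              # grid points standing in for a bounded set B
S = [2, -2, 3, -3]                  # a symmetric set of translations, k = |S|
k = len(S)

P = np.zeros((N, N))
for x in range(N):
    for g in S:
        y = x + g
        if 0 <= y < N:              # g.x stays in B: move there
            P[x, y] += 1.0 / k
        else:                       # g.x leaves B: stay put
            P[x, x] += 1.0 / k

ones = np.ones(N)
print("P_S 1_B = 1_B :", np.allclose(P @ ones, ones))
print("P_S symmetric :", np.allclose(P, P.T))
lam = np.sort(np.linalg.eigvalsh(P))            # real spectrum, ascending
print("top eigenvalue:", lam[-1])               # equals 1, carried by the constants
print("norm on the complement of constants:", max(abs(lam[0]), lam[-2]))
\end{verbatim}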
Theorem \ref{restricted} allows us to deduce the existence of many sets $S$ such that $P_S$ has a spectral gap.
\begin{mcor}\label{rwalk}
Assume that $\Gamma<G$ are as in Theorem \ref{main}.
Then there exists a finite symmetric set $S\subset\Gamma$ such that the operator $P_S:L^2(B)\rightarrow L^2(B)$ satisfies
$$\|{P_S}_{|L^2(B)\ominus\mathbb C{\bf 1}_B}\|<1.$$
\end{mcor}
When $G$ is compact and $B=G$, this result is a consequence of \cite{BG06,BG10,BdS14}. Corollary \ref{rwalk} is new in all other cases, including the case when $G$ is compact and $B$ is a proper subset.
\subsection{On the proof of restricted spectral gap}
Our approach to proving restricted spectral gap is a combination of general results from \cite{dS14,BdS14}, refinements of techniques from \cite{BG10,SGV11}, and ideas from \cite{BY11} on how to treat non-compact situations.
It relies on the remarkable strategy invented by Bourgain and Gamburd \cite{BG05,BG06} to prove spectral gap in the compact setting.
To briefly recall this strategy, consider a symmetric probability measure $\mu$ on a compact group $G$, for which we want to establish the spectral gap property.
A first step is to show that the convolution powers of $\mu$ become ``flat'' rather quickly. Then one uses a mixing inequality to deduce spectral gap for the corresponding operator $P_{\mu}:L^2(G)\rightarrow L^2(G)$ given by $P_{\mu}(F)=\mu*F$.
{\bf Flattening.} The term flat roughly means that after ``discretizing'' the group $G$, the measure has a small $2$-norm, compared to the scale at which we discretize $G$. In \cite{BdS14}, Benoist and de Saxc\'e proved a general flattening lemma for connected compact simple Lie groups $G$. They showed that if a measure $\nu$ on $G$ is not already flat and does not concentrate on any proper closed subgroup of $G$, then its convolution square $\nu \ast \nu$ will be significantly flatter. A repeated application of this result shows that a measure on $G$ with small mass on closed subgroups will flatten rather quickly.
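As a toy illustration of flattening, in a setting far simpler than the Lie groups considered here, one can watch the
$2$-norm of a symmetric probability measure on the cyclic group $\mathbb Z_p$ decrease under repeated convolution
squaring towards the $2$-norm $p^{-1/2}$ of the uniform measure; taking $p$ prime removes proper nontrivial
subgroups from the picture. The group, prime and generating set below are arbitrary choices.
\begin{verbatim}
# Toy flattening on Z_p: the 2-norm drops under convolution squaring.
import numpy as np

p = 997                              # a prime, so Z_p has no proper nontrivial subgroups
nu = np.zeros(p)
for g in (1, 57):                    # an arbitrary symmetric generating set {+-1, +-57}
    nu[g % p] += 0.25
    nu[-g % p] += 0.25

def convolve(m1, m2):
    # (m1 * m2)(x) = sum_y m1(y) m2(x - y) on Z_p, computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(m1) * np.fft.fft(m2)))

mu = nu.copy()
for step in range(6):
    print(f"||nu^(*{2 ** step})||_2 = {np.linalg.norm(mu):.4e}")
    mu = convolve(mu, mu)            # convolution squaring
print(f"2-norm of the uniform measure = {p ** -0.5:.4e}")
\end{verbatim}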
{\bf Escaping subgroups.} Thus, in order to show that $P_{\mu}$ has spectral gap, it is necessary to show that, quite quickly, convolution powers of $\mu$ have small mass on closed subgroups. To guarantee this, one needs (due to the currently available techniques) to impose a diophantine assumption on the support of $\mu$. Typically, one assumes that $\mu$ is supported on finitely many elements with algebraic entries (when viewed as matrices via the adjoint representation).
{\bf Mixing inequality.} The concluding part, deducing spectral gap out of flatness of some small power of $\mu$, relies on a mixing inequality. If $G$ is a finite group, the mixing inequality bounds the norm of the operator $P_{\mu}$ in terms of the $2$-norm of $\mu$ (see \cite{BNP08} and \cite[Proposition 1.3.7]{Ta15}). This step relies on the representation theory of the ambient group $G$. Specifically, one usually uses the idea, due to Sarnak and Xue \cite{SX91}, of exploiting ``high multiplicity" of eigenvalues.
In the non-compact setting, we will prove restricted spectral gap with a similar strategy. Recall, however, that our aim is somewhat different from the spectral gap property for compact groups. Indeed, we are given a connected simple Lie group $G$, a dense subgroup $\Gamma<G$ and an open ball $B \subset G$. Our goal is to produce a measure $\mu$ supported on $\Gamma$ and on an arbitrarily small neighborhood of $1$, such that the averaging operator $P_\mu: L^2(B) \to L^2(G)$ has norm less than $\frac{1}{2}$ (after discarding a finite dimensional subspace $V \subset L^2(B)$). Let us emphasize the main differences that occur in the proof.
Firstly, we show that the mixing inequality still holds in our setting, leading to a result that might be of independent interest (see Theorem \ref{rho}). Our proof is inspired by the ``geometric approach" introduced in \cite{BY11,BG10}, but here we address a far greater level of generality. Also, our proof is elementary, in that it only relies on basic results from the representation theory of $G$, and essentially self-contained.
Using this inequality, we reduce to the task of producing a measure $\mu$ with support contained in $\Gamma$ and arbitrarily close to $1$, whose convolution powers flatten rather quickly.
The flattening lemma of \cite{BdS14} relies on two main tools: a {\it product theorem} due to de Saxc\'e \cite{dS14}, and the {\it non-commutative Balog-Szemer\'edi-Gowers Lemma} due to Tao \cite{Ta06}. It turns out that these two tools actually hold for general (not necessarily compact) connected simple Lie groups. So, by reproducing the proof of \cite[Lemma 2.5]{BdS14}, we get a similar flattening lemma in the locally compact setting (see Corollary \ref{flattening}). An important aspect is that our lemma only applies to measures whose support is {\it controlled} (relative to the scale at which we discretize $G$).
Next, refining techniques from \cite[Section 3]{SGV11} we construct a measure $\mu$, supported on $\Gamma$ and on an arbitrarily small neighborhood of $1$, that will escape proper subgroups quickly when taking convolution powers (Propositions \ref{ping-pong} and \ref{neigh}). Therefore, we are almost in position to apply the flattening lemma to some convolution powers of $\mu$. However, we need to make sure that these convolution powers still have a controlled support. This amounts to bounding the speed of escape of subgroups in terms of the size of the support of $\mu$. A priori, the measure $\mu$ that we construct does not admit such a nice bound. As in \cite{BY11}, an application of the pigeonhole principle allows us to construct a new measure $\mu'$ with an improved bound.
Then $\mu'$ satisfies all the required assumptions to ensure that it will become flat quickly enough. Finally, our mixing inequality will allow us to show restricted spectral gap for this new measure $\mu'$.
We will provide more quantitative statements of the main steps of the proof in Section \ref{outline}.
\subsection{Organization of the paper} Besides the introduction, this paper has seven other sections and an appendix. In Section 2, we establish some basic properties of local spectral gap, explain how Theorem \ref{main} follows from Theorem \ref{restricted}, and provide a detailed outline of the proof of Theorem \ref{restricted}. Sections 3, 4, and 5 are each devoted to one of the three main parts of the proof of Theorem \ref{restricted}. In Section 6 we conclude the proof of Theorem \ref{restricted} and derive Corollary \ref{by}.
In Sections 7 and 8, we prove Theorem \ref{BR} and Corollaries \ref{monexp}, \ref{rwalk}, respectively. Finally, the Appendix deals with the proof of Lemma \ref{BdS}.
\subsection{Acknowledgements} We are grateful to Cyril Houdayer, Hee Oh and Peter Sarnak for helpful comments.
\tableofcontents
\section{Preliminaries}
\subsection{Terminology} We begin by introducing various terminology concerning analysis on groups.
Let $G$ be a
locally compact second countable ({\it l.c.s.c.}) group and fix a left Haar measure $m_G$.
Given a measurable set $A\subset G$ and a measurable function $f:G\rightarrow\mathbb C$, we denote
$$|A|:=m_G(A),\;\;\;\;\int_{G}f(x)\;\text{d}x:=\int_{G}f\;\text{d}m_G,\;\;\;\;\text{and}$$ $$\|f\|_{p,A}:=\|{\bf 1}_Af\|_p=\Big(\int_{A}|f(x)|^p\;\text{d}x\Big)^{\frac{1}{p}}.$$
We denote by
$\mathcal M(G)$ the family of Borel probability measures on $G$.
Let $f,g:G\rightarrow\mathbb C$ be measurable functions and $\mu,\nu\in\mathcal M(G)$.
Then the convolution functions $f*g$, $\mu*f:G\rightarrow\mathbb C$ and probability measure $\mu*\nu$ are defined (when the integrals make sense) by the formulae $$(f*g)(x)=\int_{G}f(y)g(y^{-1}x)\;\text{d}y,\;\;\;\;(\mu*f)(x)=\int_{G}f(y^{-1}x)\;\text{d}\mu(y)\;\;\;\;\text{and}$$ $$\int_{G}F\;\text{d}(\mu*\nu)=\int_{G}\int_{G}F(xy)\;\text{d}\mu(x)\text{d}\nu(y)$$
for any continuous $F:G\rightarrow\mathbb C$.
We will often use the following inequalities $$\|f*g\|_2\leqslant \|f\|_1\|g\|_2,\;\;\;\;\|f*g\|_{\infty}\leqslant\|f\|_2\|g\|_2\;\;\;\;\;\text{and}\;\;\;\;\|\mu*f\|_2\leqslant\|f\|_2.$$
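For the reader's convenience, let us indicate how the first and the last of these inequalities can be derived: both follow from Minkowski's integral inequality together with the left invariance of $m_G$, which gives $\|f(y^{-1}\,\cdot\,)\|_2=\|f\|_2$ and $\|g(y^{-1}\,\cdot\,)\|_2=\|g\|_2$, for every $y\in G$:
$$\|f*g\|_2\leqslant\int_{G}|f(y)|\,\|g(y^{-1}\,\cdot\,)\|_2\;\text{d}y=\|f\|_1\|g\|_2\;\;\;\;\text{and}\;\;\;\;\|\mu*f\|_2\leqslant\int_{G}\|f(y^{-1}\,\cdot\,)\|_2\;\text{d}\mu(y)=\|f\|_2.$$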
Further, we denote by $\check{f}:G\rightarrow\mathbb C$ the function given by $\check{f}(x)=\overline{f(x^{-1})}$.
Similarly, $\check{\mu}$ is the Borel probability measure given by ${\int_{G}F\;\text{d}\check{\mu}=\int_{G}\check{F}\;\text{d}\mu}$, for any continuous $F:G\rightarrow\mathbb R$. We say that $\mu$ is {\it symmetric} if $\check{\mu}=\mu$. For $n\geqslant 1$, we denote by $\mu^{*n}$ the $n$-fold convolution product of $\mu$ with itself.
We also denote by supp$(\mu)$ the {\it support} of $\mu$.
If $\mu$ and $\nu$ have finite support, then $\check{\mu}(\{x\})=\overline{\mu(\{x^{-1}\})}$ and $(\mu*\nu)(\{x\})=\sum_{y\in G}\mu(\{y\})\nu(\{y^{-1}x\})$, for any $x\in G$.
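These finite-support formulae are directly computable. The following Python sketch is purely illustrative and plays no role in the paper; it assumes that group elements are encoded as hashable objects and that the hypothetical helpers \texttt{mul} and \texttt{inv} implement the group multiplication and inversion.
\begin{verbatim}
from collections import defaultdict

def check_measure(mu, inv):
    # \check{mu}({x}) = mu({x^{-1}}) (the masses are non-negative reals,
    # so complex conjugation has no effect)
    return {inv(x): mass for x, mass in mu.items()}

def convolve(mu, nu, mul):
    # (mu * nu)({x}) = sum_y mu({y}) nu({y^{-1} x}), i.e. the push-forward
    # of mu x nu under the multiplication map (y, z) -> y z
    out = defaultdict(float)
    for y, a in mu.items():
        for z, b in nu.items():
            out[mul(y, z)] += a * b
    return dict(out)
\end{verbatim}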
If $G$ is unimodular, we denote by $\lambda,\rho:G\rightarrow\mathcal U(L^2(G))$ the {\it left} and {\it right} regular representations of $G$ given by $\lambda_g(f)(x)=f(g^{-1}x)$, $\rho_g(f)(x)=f(xg)$, for every $f\in L^2(G)$ and any $g,x\in G$. Notice that $\lambda_g(f)=\delta_g*f$ and $\rho_g(f)=f*\delta_{g^{-1}}$, where $\delta_g$ denotes the Dirac measure at $g\in G$.
Next, we establish a useful result that we will need later on.
\begin{lemma}\label{powers}\label{A^{-1}A} Let $\mu$ be a symmetric Borel probability measure on $G$ and $n\geqslant 1$.
Then
\begin{enumerate}
\item $\|\mu^{*n}*f\|_2\geqslant \|\mu*f\|_2^{2n}$, for every $f\in L^2(G)$ with $\|f\|_2=1$.
\item $\mu^{*n}(A)^2\leqslant\mu^{*(2n)}(A^{-1}A)$, for every measurable set $A\subset G$.
\end{enumerate}
\end{lemma}
{\it Proof.} (1)
Since $\mu$ is symmetric, we have $\|\mu^{*m}*f\|_2^2=\langle\mu^{*m}*f,\mu^{*m}*f\rangle=\langle\mu^{*2m}*f,f\rangle\leqslant\|\mu^{*2m}*f\|_2$, for any $m\geqslant 0$.
By induction, it follows that $\|\mu^{*2^m}*f\|_2\geqslant \|\mu*f\|_2^{2^m}$, for all $m\geqslant 0$.
Choose $m\geqslant 0$ such that $2^m\leqslant n<2^{m+1}$. Since $\|\mu*h\|_2\leqslant\|h\|_2$, for every $h\in L^2(G)$, and $\|\mu*f\|_2\leqslant 1$, we get $\|\mu^{*n}*f\|_2\geqslant\|\mu^{*2^{m+1}}*f\|_2\geqslant \|\mu*f\|_2^{2^{m+1}}\geqslant\|\mu*f\|_2^{2n},$ as claimed.
(2) Since $\mu$ is symmetric, we have $\mu^{*n}(A^{-1})=\mu^{*n}(A)$. Moreover, $xy\in A^{-1}A$ whenever $x\in A^{-1}$ and $y\in A$, hence $$\mu^{*(2n)}(A^{-1}A)=\int_{G}\int_{G}{\bf 1}_{A^{-1}A}(xy)\;\text{d}\mu^{*n}(x)\text{d}\mu^{*n}(y)\geqslant\mu^{*n}(A^{-1})\,\mu^{*n}(A)=\mu^{*n}(A)^2.$$
\hfill$\blacksquare$
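Both estimates can also be tested numerically. The following Python sketch is purely illustrative: it takes $G=\mathbb Z/N$ (written additively, with the counting measure as Haar measure), draws a random symmetric probability measure, and checks the two inequalities for a random unit vector and a small set $A$.
\begin{verbatim}
import numpy as np

N = 12
rng = np.random.default_rng(1)

p = rng.random(N)
p = (p + p[(-np.arange(N)) % N]) / 2    # symmetrize: p(-x) = p(x)
p /= p.sum()                            # normalize to a probability measure

def conv(a, b):
    # (a * b)(x) = sum_y a(y) b(x - y) on Z/N
    return np.array([sum(a[y] * b[(x - y) % N] for y in range(N))
                     for x in range(N)])

def power(mu, n):
    out = np.zeros(N); out[0] = 1.0     # mu^{*0} = delta_0
    for _ in range(n):
        out = conv(out, mu)
    return out

f = rng.normal(size=N) + 1j * rng.normal(size=N)
f /= np.linalg.norm(f)                  # ||f||_2 = 1
n = 3
assert np.linalg.norm(conv(power(p, n), f)) \
       >= np.linalg.norm(conv(p, f)) ** (2 * n) - 1e-12

A = [0, 2, 5]
AA = list({(-a + b) % N for a in A for b in A})   # A^{-1}A, additively
assert power(p, n)[A].sum() ** 2 <= power(p, 2 * n)[AA].sum() + 1e-12
\end{verbatim}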
\subsection{Basic properties of local spectral gap}
We continue with several elementary properties of local spectral gap, starting with an easy, but useful, equivalent formulation of local spectral gap.
\begin{proposition}\label{kazhdan}
Let $\Gamma\curvearrowright (X,\mu)$ be a measure preserving action of a countable group $\Gamma$, and $B\subset X$ a measurable set of finite measure. Then $\Gamma\curvearrowright (X,\mu)$ has local spectral gap with respect to $B$ if and only if
there exist a finite set
$F\subset\Gamma$ and a constant $\kappa>0$ such that the following holds:
$$\|\xi-\frac{1}{\mu(B)}{\int_{B}\xi\;\text{d}\mu}\|_{2,B}\leqslant\kappa\sum_{g\in F}\|g\cdot\xi-\xi\|_{2,B}\;\;\;\text{for any $\xi\in L^2(X)$}.$$
\end{proposition}
{\it Proof.} The {\it if} implication is clear. To prove the {\it only if} implication, suppose that $\Gamma\curvearrowright (X,\mu)$ has local spectral gap with respect to $B$. Then there are a finite set
$F\subset\Gamma$ and $\kappa>0$ such that
$\|\eta\|_{2,B}\leqslant\kappa\sum_{g\in F}\|g\cdot\eta-\eta\|_{2,B}$, for any $\eta\in L^2(X)$ with ${\int_{B}\eta\;\text{d}\mu=0}$. We may assume that $e\in F$.
Let $\xi\in L^2(X)$ and put $\alpha={\frac{1}{\mu(B)}\int_{B}\xi\;\text{d}\mu}$. Let $C=\cup_{g\in F}g^{-1}B$ and define $\eta=\xi-\alpha {\bf 1}_C\in L^2(X)$. Since $e\in F$, we have $B\subset C$, hence ${\int_B\eta\;\text{d}\mu=0}$ and $\|\xi-\alpha\|_{2,B}=\|\eta\|_{2,B}$. Moreover, since $B\subset gC$, for all $g\in F$, we get that $\|g\cdot\eta-\eta\|_{2,B}=\|g\cdot\xi-\xi\|_{2,B}$, for all $g\in F$. The conclusion now follows.
\hfill$\blacksquare$
\begin{proposition}\label{indep}
Let $\Gamma\curvearrowright (X,\mu)$ be a measure preserving action of a countable group $\Gamma$, and $B_1,B_2\subset X$ measurable sets of finite measure. Assume there is a finite set $K\subset\Gamma$ such that $B_1\subset\cup_{h\in K}hB_2$ and $B_2\subset\cup_{h\in K}hB_1$.
Then $\Gamma\curvearrowright (X,\mu)$ has local spectral gap with respect to $B_1$ if and only if it does with respect to $B_2$.
\end{proposition}
{\it Proof.} Assume that local spectral gap holds with respect to $B_1$, but not $B_2$. Let $\xi_n\in L^2(X)$ be a sequence satisfying $\|\xi_n\|_{2,B_2}=1$, ${\int_{B_2}\xi_n\;\text{d}\mu=0}$, for all $n$, and $\|g\cdot\xi_n-\xi_n\|_{2,B_2}\rightarrow 0$, for every $g\in\Gamma$.
If $g\in\Gamma$, then we have \begin{align*}\|g\cdot\xi_n-\xi_n\|_{2,B_1}&\leqslant\sum_{h\in K}\|g\cdot\xi_n-\xi_n\|_{2,hB_2}=\sum_{h\in K}\|(h^{-1}g)\cdot\xi_n-h^{-1}\cdot\xi_n\|_{2,B_2}\\&\leqslant\sum_{h\in K}\Big(\|(h^{-1}g)\cdot\xi_n-\xi_n\|_{2,B_2}+\|h^{-1}\cdot\xi_n-\xi_n\|_{2,B_2}\Big).\end{align*}
This implies that $\|g\cdot\xi_n-\xi_n\|_{2,B_1}\rightarrow 0$, for every $g\in\Gamma$. Since we have local spectral gap with respect to $B_1$, Proposition \ref{kazhdan} provides scalars $\alpha_n\in\mathbb C$ such that $\|\xi_n-\alpha_n\|_{2,B_1}\rightarrow 0$. By reasoning as above, it follows that $\|\xi_n-\alpha_n\|_{2,B_2}\rightarrow 0$. Since ${\int_{B_2}\xi_n\;\text{d}\mu=0}$, for all $n$, we get that $\alpha_n\rightarrow 0$. Hence, $\|\xi_n\|_{2,B_2}\rightarrow 0$, which gives the desired contradiction. \hfill$\blacksquare$
Next, we establish that local spectral gap passes to direct product actions.
\begin{proposition}\label{products} For $i\in\{1,2\}$, let $\Gamma_i\curvearrowright (X_i,\mu_i)$ be a measure preserving action which has local spectral gap with respect to a measurable set $B_i\subset X_i$ of finite measure.
Then the product action $\Gamma_1\times\Gamma_2\curvearrowright (X_1\times X_2,\mu_1\times\mu_2)$ has local spectral gap with respect to $B_1\times B_2$.
\end{proposition}
{\it Proof.}
By Proposition \ref{kazhdan} (combined with the Cauchy-Schwarz inequality), for $i\in\{1,2\}$, we can find a finite set $F_i\subset\Gamma_i$ and $\kappa_i>0$ such that
\begin{equation}\label{kazhd}\|\xi-\frac{1}{\mu_i(B_i)}{\int_{B_i}\xi\;\text{d}\mu_i}\|_{2,B_i}^2\leqslant\kappa_i\sum_{g\in F_i}\|g\cdot\xi-\xi\|_{2,B_i}^2\;\;\;\text{for any $\xi\in L^2(X_i)$}.\end{equation}
Denote $(X,\mu)=(X_1\times X_2,\mu_1\times\mu_2)$ and $B=B_1\times B_2$. Let $\xi\in L^2(X,\mu)$ and put $\alpha={\frac{1}{\mu(B)}\int_B\xi\;\text{d}\mu}$. For $y\in X_2$, define $\xi^y(x)=\xi(x,y)$ and $f(y)={\frac{1}{\mu_1(B_1)}\int_{B_1}\xi^y\;\text{d}\mu_1}$.
Then it is easy to see that $f\in L^2(X_2)$ and $\|g\cdot f-f\|_{2,B_2}^2\leqslant{\frac{1}{\mu_1(B_1)}}\|g\cdot\xi-\xi\|_{2,B_1\times B_2}^2$, for all $g\in\Gamma_2$.
Since ${\frac{1}{\mu_2(B_2)}\int_{B_2}f\;\text{d}\mu_2=\alpha}$, by using the last inequality and applying \eqref{kazhd} to $f$ we get that
\begin{equation}\label{kazhd2}\|f-\alpha\|_{2,B_2}^2\leqslant\kappa_2\sum_{g\in F_2}\|g\cdot f-f\|_{2,B_2}^2\leqslant\frac{\kappa_2}{\mu_1(B_1)}\sum_{g\in F_2}\|g\cdot\xi-\xi\|_{2,B_1\times B_2}^2 \end{equation}
On the other hand, by applying \eqref{kazhd} to $\xi^y$ we get that ${\|\xi^y-f(y)\|_{2,B_1}^2\leqslant\kappa_1\sum_{g\in F_1}\|g\cdot\xi^y-\xi^y\|_{2,B_1}^2}$. By integrating over $y\in B_2$, we derive that
\begin{equation}\label{kazhd3} \int_{B}|\xi(x,y)-f(y)|^2\;\text{d}\mu(x,y)\leqslant\kappa_1\sum_{g\in F_1}\|g\cdot\xi-\xi\|_{2,B_1\times B_2}^2.
\end{equation}
Since $|\xi(x,y)-\alpha|^2\leqslant 2|\xi(x,y)-f(y)|^2+2|f(y)-\alpha|^2$ and $\int_{B}|f(y)-\alpha|^2\;\text{d}\mu(x,y)=\mu_1(B_1)\|f-\alpha\|_{2,B_2}^2$, the combination of \eqref{kazhd2} and \eqref{kazhd3} gives $$\|\xi-\alpha\|_{2,B}^2\leqslant 2\kappa_1\sum_{g\in F_1}\|g\cdot\xi-\xi\|_{2,B}^2+2\kappa_2\sum_{g\in F_2}\|g\cdot\xi-\xi\|_{2,B}^2,$$ which implies the conclusion.
\hfill$\blacksquare$
Finally, we record a result asserting that local spectral gap passes through certain quotients. Since its proof is very similar to that of Corollary \ref{by}, we leave its details to the reader.
\begin{proposition}
Let $G$ be a l.c.s.c. group, $H<G$ a closed subgroup, and $\Gamma<G$ a countable dense subgroup. Assume that $G/H$ admits a $G$-invariant Borel regular measure $m_{G/H}$. Suppose that the left translation action $\Gamma\curvearrowright (G,m_G)$ has local spectral gap.
Then the left translation action $\Gamma\curvearrowright (G/H,m_{G/H})$ has local spectral gap.
\end{proposition}
\subsection{Deduction of Theorem \ref{main} from Theorem \ref{restricted}} The aim of this subsection is to show that Theorem \ref{restricted} implies Theorem \ref{main}. This relies on the following result.
\begin{proposition}\label{AtoB}
Let $G$ be a l.c.s.c. group, $\Gamma<G$ a countable dense subgroup, and $B\subset G$ a measurable set with non-empty interior and compact closure.
Assume that there exists a constant $c > 0$ satisfying the following property: for any neighborhood $U$ of the identity, there are a finite set $S \subset \Gamma \cap U$ and a finite dimensional vector space $V \subset L^2(G)$ such that for all $\xi\in L^2(B)\ominus V$ we have \[\max_{g \in S} \Vert g\cdot\xi - \xi \Vert_{2} \geqslant c\Vert \xi \Vert_{2}.\]
Then the left translation action $\Gamma\curvearrowright (G,m_G)$ has local spectral gap with respect to $B$.
\end{proposition}
{\it Proof.} Assume by contradiction that the conclusion is false. Then there is a sequence $\xi_n\in L^2(G)$ satisfying
$\Vert \xi_n \Vert_{2,B} = 1$, ${\int_B\xi_n\;\text{d}m_G = 0}$, for all $n$, and $\lim\limits_{n\rightarrow\infty} \Vert g \cdot \xi_n - \xi_n \Vert_{2,B} = 0$, for all $g \in \Gamma$.
If $C\subset G$ is a compact set, then $C$ can be covered with finitely many of the sets $\{gB\}_{g\in\Gamma}$. It follows that $\sup_{n}\|\xi_n\|_{2,C}<\infty$ and $\lim\limits_{n\rightarrow\infty}\|g\cdot\xi_n-\xi_n\|_{2,C}=0$, for all $g\in\Gamma$. Since $G$ is second countable, we can find a subsequence $\{\xi_{n_k}\}$ of $\{\xi_n\}$ and $\xi\in L^2_{\text{loc}}(G)$ (i.e. a locally $L^2$-integrable function) such that ${\bf 1}_C\xi_{n_k}\rightarrow {\bf 1}_C\xi$, weakly, for every compact set $C\subset G$. But then $\xi$ must be $\Gamma$-invariant, and hence constant by ergodicity. Since $\xi_n$ has mean zero on $B$, for all $n$, we derive that $\xi=0$, almost everywhere. This argument implies that ${\bf 1}_C\xi_n\rightarrow 0$, weakly, for any compact set $C\subset G$.
Let $\lim\limits_n$ be a bounded linear functional on $\ell^{\infty}(\mathbb N)$ which extends the limit.
Then $\nu(C)=\lim\limits_n \Vert {\bf 1}_C\xi_n \Vert_2^2$ defines a $\Gamma$-invariant finitely additive measure on bounded Haar measurable subsets of $G$. Since $\nu(B) \neq 0$, we get that $\nu\not=0$.
Since $B$ has non-empty interior, by using finite additivity, we can find two open sets $B_1 \subsetneq B_2\subset B$ such that $\nu(B_1) \neq 0$, $\nu(B_2 \setminus B_1) \leqslant (c^2 \nu(B_1))/4$, and there exists a closed intermediate subset $B_1 \subset F \subset B_2$.
By local compactness, we can find an intermediate open set $B_0$ between $B_1$ and $B_2$ and a neighborhood $U$ of the identity small enough so that $B_1 \subset gB_0 \subset B_2$, for $g\in U$. Put $p = \mathbf{1}_{B_0}$.
As the sequence $\{p\xi_n\}$ converges weakly to $0$ and is supported on $B$, the following claim contradicts our assumption on $c$.
{\bf Claim.} For all $g \in \Gamma \cap U$, we have $\lim\limits_n \Vert g \cdot (p\xi_n) - (p\xi_n)\Vert_2 \leqslant \frac{c}{2} \lim\limits_n \Vert p\xi_n\Vert_2$.
Indeed, for $g \in \Gamma \cap U$ we can estimate
\begin{align*}
\lim_n \Vert g \cdot (p\xi_n) - (p\xi_n)\Vert_2 & = \lim_n \Vert (g \cdot p)(g \cdot \xi_n) - (p\xi_n)\Vert_2\\
& \leqslant \lim_n \Vert (g \cdot p)\xi_n - (p\xi_n)\Vert_2 + \lim_n \Vert (g \cdot p)(g \cdot \xi_n) - (g \cdot p)\xi_n\Vert_2\\
& = \lim_n \Vert (g \cdot p - p)\xi_n\Vert_2 + \lim_n \Vert g \cdot \xi_n - \xi_n\Vert_{2,gB_0}
\end{align*}
But by the above, $\lim\limits_n \Vert g \cdot \xi_n - \xi_n\Vert_{2,gB_0} = 0$. Moreover, since $B_1 \subset gB_0 \subset B_2$, we get that
\[\lim_n \Vert (g \cdot p - p)\xi_n\Vert_2^2 \leqslant \nu(B_2 \setminus B_1) \leqslant \frac{c^2}{4}\nu(B_1) \leqslant \frac{c^2}{4} \nu(B_0) = \frac{c^2}{4} \lim_n \Vert p\xi_n\Vert_2^2.\]
\hfill$\blacksquare$
{\bf Proof of Theorem \ref{main}}. Assume that Theorem \ref{restricted} holds and let us explain how Theorem \ref{main} follows. Let $S\subset\Gamma$ be a finite set and denote $\mu={\frac{1}{|S|}\sum_{g\in S}\delta_g}$. Since the action preserves $m_G$, we have $\|g\cdot\xi-\xi\|_2^2=2\|\xi\|_2^2-2\Re\langle g\cdot\xi,\xi\rangle$, for every $g\in S$ and $\xi\in L^2(G)$, hence $$\sum_{g\in S}\|g\cdot\xi-\xi\|_2^2=2|S|\Big(\|\xi\|_2^2-\Re\langle\mu*\xi,\xi\rangle\Big).$$ Thus, if we have that $\|\mu*\xi\|_2<\frac{1}{2}\|\xi\|_2$, then $\max_{g\in S}\|g\cdot\xi-\xi\|_2>\|\xi\|_2$.
By combining Theorem \ref{restricted} and Proposition \ref{AtoB}, we conclude that $\Gamma\curvearrowright (G,m_G)$ has local spectral gap. \hfill$\blacksquare$
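The elementary identity used in the above proof can be checked numerically. In the following purely illustrative Python sketch, random unitary matrices stand in for the operators $\xi\mapsto g\cdot\xi$ and $\mu*\xi$ is computed as the average of their images.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dim, k = 6, 4

def random_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

U = [random_unitary(dim) for _ in range(k)]        # stand-ins for g . xi
xi = rng.normal(size=dim) + 1j * rng.normal(size=dim)

lhs = sum(np.linalg.norm(u @ xi - xi) ** 2 for u in U)
mu_xi = sum(u @ xi for u in U) / k                 # mu * xi, mu uniform on S
rhs = 2 * k * (np.linalg.norm(xi) ** 2 - np.real(np.vdot(xi, mu_xi)))
assert np.isclose(lhs, rhs)
\end{verbatim}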
\subsection{Reduction to groups with trivial center} Next, we will argue that in order to prove Theorem \ref{restricted}, we may reduce to the case when $G$ has trivial center.
Assume that Theorem \ref{restricted} holds for connected simple Lie groups with trivial center.
Let $G$ be a connected simple Lie group, $B\subset G$ a measurable set with compact closure and non-empty interior, and $c>0$.
Let $\pi:G\rightarrow\operatorname{GL}(\frak g)$ be the adjoint representation of $G$. Put $G_0=\pi(G)$ and $\Gamma_0=\pi(\Gamma)$.
Since $\pi$ has discrete kernel, we can find a small enough compact neighborhood of the identity $C\subset G$ and $\varepsilon_0>0$ such that $\pi$ is 1-1 on $\cup_{g\in B_{\varepsilon_0}(1)}gC$. Let $K\subset G$ be a finite set such that $B\subset\cup_{h\in K}Ch$. Write $B$ as a disjoint union $B=\sqcup_{h\in K}C_h$, where $C_h$ is a subset of $Ch$, for every $h\in K$.
Since $G_0$ has trivial center, the conclusion of Theorem \ref{restricted} holds for $(G_0,\Gamma_0,\pi(C))$ by our assumption. It is then easy to see that Theorem \ref{restricted} also holds for $(G,\Gamma, C)$.
Thus, given $\varepsilon>0$, there are a finite set $T\subset\Gamma\cap B_{\varepsilon}(1)$ and a finite dimensional subspace $W\subset L^2(C)$ such that $\mu:=\frac{1}{2|T|}\sum_{g\in T}(\delta_g+\delta_{g^{-1}})$ satisfies $\|\mu*F\|_2<\frac{c}{|K|}\|F\|_2$, for every $F\in L^2(C)\ominus W.$
Let $V\subset L^2(B)$ be the linear span of $\{{\bf 1}_{C_h}\rho_h(W)|h\in K\}$, where $\{\rho_g\}_{g\in G}$ denotes the right regular representation of $G$.
Let $F\in L^2(B)\ominus V$. If $h\in K$, then $\rho_h^{-1}({\bf 1}_{C_h}F)\in L^2(C)\ominus W$, and therefore, since right translations are unitary on $L^2(G)$ and commute with left convolution operators, $$\|\mu*({\bf 1}_{C_h}F)\|_2=\|\mu*(\rho_h^{-1}({\bf 1}_{C_h}F))\|_2<\frac{c}{|K|}\|\rho_h^{-1}({\bf 1}_{C_h}F)\|_2=\frac{c}{|K|}\|{\bf 1}_{C_h}F\|_2.$$
Since $F=\sum_{h\in K}{\bf 1}_{C_h}F$, we deduce that $\|\mu*F\|_2<c\|F\|_2$. Since $c>0$ is arbitrary, and $V\subset L^2(B)$ is finite dimensional, this implies that $G$ satisfies the conclusion of Theorem \ref{restricted}.
\subsection{Outline of the proof of Theorem \ref{restricted}}
\label{outline}
The previous subsection allows us to work hereafter with connected simple groups with trivial center.
Note, however, that although our results will be stated only for groups with trivial center, they have analogues
for general connected simple groups.
In order to outline the proof of Theorem \ref{restricted}, let us introduce some more notation.
Let $G$ be a connected simple Lie group with trivial center, and $\Gamma<G$ be a dense subgroup. Suppose there is a basis $\frak B$ of $\frak g$ such that the matrix of Ad$(g)$ in the basis $\frak B$ has algebraic entries, for any $g\in\Gamma$.
Let $n$ be the dimension of $G$, $\frak g$ its Lie algebra, and Ad$:G\rightarrow GL(\frak g)$ its adjoint representation. We identify $G\cong$ Ad$(G)$, $\frak g\cong\mathbb R^n$ via the basis $\frak B$, and $GL(\frak g)\cong GL_n(\mathbb R)\subset\mathbb M_n(\mathbb R)$. In particular, in this identification we have that $\Gamma<GL_n(\bar{\mathbb Q})$.
For $\alpha=(\alpha_{i,j})_{i,j=1}^n\in\mathbb M_n(\mathbb R)$, we denote by $\|\alpha\|_2={(\sum_{i,j=1}^n|\alpha_{i,j}|^2)^{1/2}}$ its Hilbert-Schmidt norm. We endow $G$ with the metric given by $(g,h)\mapsto\|\text{Ad}(g)-\text{Ad}(h)\|_2$. Abusing notation, we write $\|g-h\|_2:=\|\text{Ad}(g)-\text{Ad}(h)\|_2$ and $\|g\|_2:=\|\text{Ad}(g)\|_2$.
Note that $$\|gh-gk\|_2\leqslant \|g\|_2\|h-k\|_2,\;\;\;\;\;\text{for all $g,h,k\in G$},$$ since $\text{Ad}(gh)-\text{Ad}(gk)=\text{Ad}(g)(\text{Ad}(h)-\text{Ad}(k))$ and the Hilbert-Schmidt norm is submultiplicative.
For $x\in G$ and $\delta>0$, we denote $B_{\delta}(x):=\{y\in G|\|x-y\|_2\leqslant\delta\}$. For $\delta> 0$, we let $A^{(\delta)}=\cup_{x\in A}B_{\delta}(x)$ be the {\it $\delta$-neighborhood} of $A\subset G$, and denote $$P_{\delta}:=\frac{{\bf 1}_{B_{\delta}(1)}}{|B_{\delta}(1)|}\in L^1(G)_{+,1}.$$
As explained at the end of the Introduction, the proof of Theorem \ref{restricted} splits into three parts, dealt with in the following three sections.
\begin{itemize}
\item In Section 3 we produce measures with small support that {\bf Escape subgroups} quickly.
\end{itemize}
There are two steps for this. First, we produce for all $\varepsilon > 0$ a finite set $S\subset\Gamma\cap B_{\varepsilon}(1)$ and constants $d,C > 0$ such that for $\delta>0$ small enough, the measure $\mu_S :=\frac{1}{2|S|}\sum_{g\in S}(\delta_g+\delta_{g^{-1}})$ satisfies
$\mu_S^{*n}(H^{(\delta)})\leqslant\delta^{d}$, for all proper closed subgroups $H$ and $n\approx C\log{\frac{1}{\delta}}.$
This step is obtained by combining Propositions \ref{ping-pong} and \ref{neigh}. The set $S$ that we obtain freely generates a free group.
One can of course get a better constant $C$ by modifying accordingly the value of $d$. But for a fixed $d$, the value of $C$ depends on $\varepsilon$. Namely, it could happen that $C \to \infty$ as $\varepsilon \to 0$.
As explained in the introduction, we want to control the speed of escape in terms of $\varepsilon$.
So the second step is to upgrade the set $S$ to a set $T$, also contained in $B_{\varepsilon}(1)$, such that the following holds (Theorem \ref{escape}).
{\it There are constants $d_1,d_2>0$ not depending on $T$ such that the probability measure $\mu_T$ satisfies
$\mu_T^{*n}(H^{(\delta)})\leqslant\delta^{d_1}$, for all $\delta > 0$ small enough, all proper closed subgroups $H$ and $n \approx d_2\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}.$}
This improvement is obtained using the pigeonhole principle and the freeness of the elements of $S$.
\begin{itemize}
\item In Section 4 we extend the {\bf $\ell^2$-flattening} lemma from \cite{BdS14}.
\end{itemize}
Our generalization of the flattening lemma \cite[Lemma 2.5]{BdS14} to the locally compact setting does not require much additional effort. However, it only applies for measures with controlled support. But we anticipated this issue in part 1 above, by controlling the speed of escape in terms of $\varepsilon$. Indeed, we want to apply the flattening lemma to the measure $\mu_T^{\ast n}$, with $n \approx d_2\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}$. Now, the support of $\mu_T^{\ast n}$ is contained in $B_{\delta^{-\beta}}(1)$, with $\beta>0$ arbitrarily small. Since the ``controlled support'' condition that we require is soft enough, we are in position to apply our flattening Lemma \ref{BdS}.
Thus, our main result (Corollary \ref{flattening}) shows that the measure $\mu_T$ produced in Section 3 will flatten rather quickly:
given $\alpha>0$, we have $\|\mu_{T}^{*n}*P_{\delta}\|_2\leqslant\delta^{-\alpha}$, for $\delta$ small enough and $n\sim\log\frac{1}{\delta}$.
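To put this bound into perspective, note that $\|P_{\delta}\|_2=|B_{\delta}(1)|^{-1/2}$ and that, up to multiplicative constants, $|B_{\delta}(1)|$ is of order $\delta^{\dim G}$ for small $\delta$. Thus the trivial estimate $\|\mu_{T}^{*n}*P_{\delta}\|_2\leqslant\|P_{\delta}\|_2$ only gives a bound of order $\delta^{-\dim G/2}$, and the inequality $\|\mu_{T}^{*n}*P_{\delta}\|_2\leqslant\delta^{-\alpha}$, with $\alpha>0$ arbitrarily small, expresses that the convolution powers become essentially flat at scale $\delta$.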
\begin{itemize}
\item In Section 5 we prove a {\bf Mixing inequality} and combine it with the above to conclude.
\end{itemize}
More precisely, we show that if $\mu_T$ is the measure produced in Section 3, then the convolution operator $F \in L^2(B) \mapsto (\mu_T \ast F) \in L^2(G)$ has norm less than $1/2$, when restricted to the orthogonal complement of a finite dimensional subspace $V \subset L^2(B)$. The first observation is that this flexibility of discarding a finite dimensional subspace $V$ when trying to bound the norm of $\Vert \mu_T \ast F\Vert_2$, allows us to restrict our study to functions $F$ that live at a ``small scale''. Namely, it will be enough to consider functions $F$ that do not change much when ``discretizing'' the group with high accuracy. This reduction is achieved via a Littlewood-Paley type decomposition (Theorem \ref{L-P} and Corollary \ref{level_delta}). Then we are left to show a mixing inequality (Theorem \ref{rho}). This is inspired by \cite[Lemma 10.35]{BG10}, and should be thought of as an analogue of the well-known mixing inequality for finite groups (see e.g. \cite[Proposition 1.3.7]{Ta15}), after discretizing the group. We will then be able to conclude restricted spectral gap by combining this inequality with the flattening obtained in Section 4.
\section{Escape from subgroups}
The goal of this section is to prove the following:
\begin{theorem}[escape from subgroups]\label{escape}
Let $G$ be a connected simple Lie group with trivial center, and
\text{Ad}$:G\rightarrow GL(\frak g)$ be its adjoint representation. Let $\Gamma<G$ be a countable dense subgroup. Assume that there is a basis $\frak B$ of $\frak g$ such that the matrix of Ad$(g)$ in the basis $\frak B$ has algebraic entries, for every $g\in\Gamma$.
Then there are constants $d_1,d_2 > 0$ depending on $\Gamma$ only such that the following holds.
Given $\varepsilon_1>0$, we can find $0<\varepsilon<\varepsilon_1$ and a finite set $T\subset \Gamma\cap B_{\varepsilon}(1)$ which freely generates a subgroup of $\Gamma$ such that for any small enough $\delta>0$, the probability measure $\mu={\frac{1}{2|T|}\sum_{g\in T}(\delta_g+\delta_{g^{-1}})}$ satisfies $$\mu^{*2n}(H^{(\delta)})\leqslant\delta^{d_1},\;\;\;\text{where $n=\Big\lfloor{d_2\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}}\Big\rfloor$},$$ for any proper closed connected subgroup $H<G$.
\end{theorem}
\subsection{Ping-pong} The first ingredient in the proof of Theorem \ref{escape} is a proposition which, roughly speaking, asserts the existence of representations $\rho_i:\Gamma\rightarrow$ GL$(V_i)$, $i\in I$, and $M\geqslant 2$ such that
\begin{itemize}
\item the intersection of $\Gamma$ with any proper closed subgroup of $G$ stabilizes a line in some $V_i$, and
\item we can find a set $S\subset\Gamma$ of simultaneous ``ping-pong players" for all the $\rho_i$'s in any given neighborhood of the identity in $G$ such that $|S|=M$.
\end{itemize}
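Although it plays no role in the proofs, the ping-pong mechanism can be observed concretely. The following purely illustrative Python sketch takes the classical pair of matrices $\bigl(\begin{smallmatrix}1&2\\0&1\end{smallmatrix}\bigr)$ and $\bigl(\begin{smallmatrix}1&0\\2&1\end{smallmatrix}\bigr)$, which generate a free subgroup of $\operatorname{SL}_2(\mathbb Z)$ by the ping-pong lemma, and verifies that no non-trivial reduced word of length at most $8$ in these matrices and their inverses equals the identity.
\begin{verbatim}
import numpy as np
from itertools import product

A  = np.array([[1, 2], [0, 1]]);  Ai = np.array([[1, -2], [0, 1]])
B  = np.array([[1, 0], [2, 1]]);  Bi = np.array([[1, 0], [-2, 1]])
letters = [A, Ai, B, Bi]
inverse_of = {0: 1, 1: 0, 2: 3, 3: 2}

def reduced_words(max_len):
    for n in range(1, max_len + 1):
        for w in product(range(4), repeat=n):
            if all(w[i + 1] != inverse_of[w[i]] for i in range(n - 1)):
                yield w

identity = np.eye(2, dtype=int)
for w in reduced_words(8):
    m = identity
    for s in w:
        m = m @ letters[s]
    assert not np.array_equal(m, identity)   # no non-trivial relation
\end{verbatim}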
\begin{proposition}\label{ping-pong}
Let $G$ be a connected simple real Lie group with trivial center. Let $\Gamma<G$ be a finitely generated dense subgroup. Assume that there is a basis $\frak B$ of the Lie algebra $\frak g$ of $G$ such that the matrix of Ad$(g)$ in the basis $\frak B$ has algebraic entries, for every $g\in\Gamma$.
Then there exist finitely many vector spaces $V_i$, $i\in I$, defined over local fields $K_i$, representations $\rho_i: \Gamma \to \operatorname{GL}(V_i)$, and an integer $M\geqslant 2$ such that the following properties hold true:
\begin{enumerate}
\item For any proper closed subgroup $H < G$ such that $\Gamma \cap H$ is non-discrete, there exist $i\in I$ and $[v] \in \P(V_i)$ such that $\rho_i(g)([v])=[v]$, for all $g\in \Gamma\cap H$.
\item For any $\eta>0$, there is a finite set $S\subset\Gamma$ satisfying $|S|=M$ and $S\subset B_{\eta}(1)$ such that for all $i\in I$ and every $g\in\tilde S:=S\cup S^{-1}$, we can find two sets $K_g^{(i)}\subset U_g^{(i)}\subset \P(V_i)$ such that the following conditions hold:
\begin{enumerate}[(a)]
\item For every $g\in\tilde S$ we have $\rho_i(g)(U_g^{(i)})\subset K_g^{(i)}$.
\item Every line $[v]\in\P(V_i)$ is contained in at least two of the sets $\{U_g^{(i)}\}_{g\in\tilde S}$.
\item For every $g_1,g_2\in\tilde S$ we have $K_{g_1}^{(i)}\subset U_{g_2}^{(i)}$, unless $g_1g_2=1$.
\item For every $g_1,g_2\in\tilde S$ we have $K_{g_1}^{(i)}\cap K_{g_2}^{(i)}=\emptyset$, unless $g_1=g_2$.
\end{enumerate}
\end{enumerate}
\end{proposition}
Before proving Proposition \ref{ping-pong}, let us record a simple observation that will be used later.
\begin{lemma}\emph{\cite{SGV11}}\label{SGV}
In the setting from Proposition \ref{ping-pong}, let $i\in I$ and $v\in V_i\setminus\{0\}$.
Let $g=g_ng_{n-1}...g_1$ be a reduced word of length $n$ in $\tilde S$. Assume that $\rho_i(g)([v])=[v]$ and let $1\leqslant j<n$.
\begin{enumerate}
\item If $\rho_i(g_jg_{j-1}...g_1)([v])\in U_{g_{j+1}}$, then $g_{j+1},...,g_n$ are uniquely determined by $v$.
\item If $\rho_i(g_jg_{j-1}...g_1)([v])\notin U_{g_{j+1}}$, then $\rho_i(g_lg_{l-1}...g_1)([v])\notin U_{g_{l+1}}$, for all $1\leqslant l\leqslant j$.
\end{enumerate}
\end{lemma}
{\it Proof.} For simplicity, denote $\rho=\rho_i$ and $K_g=K_g^{(i)}$, $U_g=U_g^{(i)}$, for all $g\in\tilde S$.
Assume that $\rho(g_jg_{j-1}...g_1)([v])\in U_{g_{j+1}}$.
Since $\rho(g_{j+1})(U_{g_{j+1}})\subset K_{g_{j+1}}$, we get $\rho(g_{j+1}g_j...g_1)([v])\in K_{g_{j+1}}$. Since $g_{j+1}\not=g_{j+2}^{-1}$, we have that $K_{g_{j+1}}\subset U_{g_{j+2}}$, and hence $\rho(g_{j+1}g_j...g_1)([v])\in U_{g_{j+2}}$. Using induction it follows that $\rho(g_pg_{p-1}...g_1)([v])\in K_{g_p}$, for all $j+1\leqslant p\leqslant n$. Thus, $[v]=\rho(g_n...g_1)([v])\in K_{g_n}$. Since the sets $\{K_g\}_{g\in\tilde S}$ are mutually disjoint, $g_n$ is therefore determined by $v$. Further, we have that $\rho(g_n^{-1})([v])=\rho(g_{n-1}...g_1)([v])\in K_{g_{n-1}}$. Since $\rho(g_n^{-1})([v])$ is determined by $v$, we deduce that $g_{n-1}$ is also determined by $v$. The first assertion now follows by induction.
For the second assertion, note that if $\rho_i(g_lg_{l-1}...g_1)([v])\in U_{g_{l+1}}$ for some $1\leqslant l\leqslant j$, then the argument above shows that $\rho_i(g_jg_{j-1}...g_1)([v])\in K_{g_j}\subset U_{g_{j+1}}$, a contradiction. This completes the proof.
\hfill$\blacksquare$
The rest of this subsection is devoted to proving Proposition \ref{ping-pong}. The proof is very similar to the proof of \cite[Proposition 21]{SGV11}. Consider a connected simple real Lie group $G$ with trivial center, together with a finitely generated dense subgroup $\Gamma$ as in the statement of Proposition \ref{ping-pong}.
By identifying $G$ with $\operatorname{Ad}(G)$, we can assume that $G$ is the connected component of a real algebraic group $\mathbb{G} = \overline{\operatorname{Ad}(G)}^Z \subset \operatorname{GL}(\mathfrak{g})$. By our assumptions on $\Gamma$ we can find a number field $k$ with an embedding $k \subset \mathbb{R}$ and a basis of $\mathfrak{g}$ such that $\Gamma \subset \operatorname{GL}_d(k) \subset \operatorname{GL}_d(\mathbb{R}) \cong \operatorname{GL}(\mathfrak{g})$. Since $\Gamma$ is Zariski dense in $\mathbb{G}$, we see that $\mathbb{G}$ is in fact defined over $k$.
Now, note that in order to check item (1) of Proposition \ref{ping-pong} for a subgroup $H < G$, it suffices to check it for the closure of $\Gamma\cap H$ (in the real topology). This shows that we only have to deal with proper closed subgroups $H < G$ which are non-discrete in $G$ and such that $\Gamma \cap H$ is dense in $H$. But if $H$ is such a subgroup, then its Zariski closure $\mathbb{H} \subset \mathbb{G}$ is a proper algebraic subgroup which is defined over $k$, because $\Gamma \cap H \subset \operatorname{GL}_d(k)$ is a Zariski dense subgroup of it. Hence the Lie algebra $\mathfrak{h} \subset \mathfrak{g}$ of $\mathbb{H}$ is a non-trivial proper subspace of $\mathfrak{g}$ defined over $k$ which is globally invariant under $\mathbb{H}$ but not under $\mathbb{G}$, because $G$ is simple. Altogether, we find that the line in
$\bigwedge_{j=1}^{\dim \mathfrak{h}} \mathfrak{g}(k)$ corresponding to the subspace $\mathfrak{h}(k) \subset \mathfrak{g}(k)$ is invariant under $\mathbb{H}(k)$, but not under $\mathbb{G}(k)$.
Next, define the finite set of representations $\rho_i$, $i \in I$, to be the collection of all (non-trivial) irreducible subrepresentations of the representations of $\mathbb{G}$ on $\bigwedge_{j=1}^m \mathfrak{g}$, $m < d = \dim G$. These are algebraic representations, defined over a finite extension $k'$ of $k$. We will show that Proposition \ref{ping-pong} holds when we view these representations $\rho_i$ as defined over appropriate places $K_i$ of $k'$. Note that no matter how we choose the places $K_i$, we still get representations of $\Gamma \subset \mathbb{G}(k)$ which satisfy item (1) of the proposition, by the above paragraph.
Let us now choose the places $K_i$ for which we will be able to prove that item (2) of the proposition also holds true.
\begin{lemma}
\label{choosefields}
Use the above notation.
Then for every $i \in I$, there are a local field $K_i$ and a sequence $(h_n)_n$ in $\Gamma$ which converges to $1$ in the real topology such that $(\rho_i(h_n))_n$ goes to infinity in the $K_i$-topology.
\end{lemma}
{\it Proof.}
Fix $i \in I$. First we claim that there is a sequence $(h_n)_n \subset \Gamma$ which converges to $1$ in the real topology such that the elements $\rho_i(h_n)$ are pairwise distinct.
To prove the claim, we view the representations $\rho_i$ as representations over $\mathbb{C}$ by fixing an embedding $k' \subset \mathbb{C}$. This way, it makes sense to talk about $\rho_i(G)$. Note that the image $\rho_i(B_1(1))$ of the unit ball of $G$ is connected. Since the representation $\rho_i$ is non-trivial and $\Gamma$ is dense in $G$, there exists a sequence $(h_n)_n \subset \Gamma \cap B_1(1)$ such that $\rho_i(h_n)$ is non-trivial and converges to $1$ in the complex topology. In particular, after passing to a subsequence of $h_n$ if necessary, we get that the elements $\rho_i(h_n)$ are distinct.
As $(h_n)_n$ is a bounded sequence and $\rho_i(h_n)$ converges to $1$, we deduce that $h_n$ converges to $1$ as well, proving our claim.
Next, denote by $R$ the ring generated by the coefficients of the elements $\operatorname{Ad}(g)$, $g \in \Gamma$. Since $\Gamma$ is finitely generated, $R\subset k'$ is a finitely generated subring. The discrete diagonal embedding of $k'$ in its ad\`ele group gives a discrete embedding of $R$ in a product of finitely many places $K_\nu$, $\nu \in \mathcal{S}$, of $k'$.
From this we obtain a discrete embedding ${\rho_i(\mathbb{G}(R)) \hookrightarrow \Pi_{\nu \in \mathcal{S}} \rho_i(\mathbb{G}(K_\nu))}$. In particular, $\rho_i(\Gamma)$ is discrete inside $\Pi_{\nu \in \mathcal{S}} \rho_i(\mathbb{G}(K_\nu))$. Therefore there exists a field $K_i := K_\nu$ such that the infinite set $\{ \rho_i(h_n) \}$ is unbounded as a subset of $\rho_i(\mathbb{G}(K_\nu))$.
\hfill$\blacksquare$
Below, we denote by $\overline{\Gamma}^1$ the set of sequences $(h_n) \subset \Gamma$ which converge to $1$ in the real topology. For $i\in I$, we view $\rho_i: \mathbb{G} \to \operatorname{GL}(V_i)$ as a representation over $K_i$, and equip $\operatorname{GL}(V_i)$ with the operator norm $\Vert \cdot \Vert_i$ corresponding to the absolute value on $K_i$.
We denote by $A_i$ the set of cluster points in the $K_i$-topology of sequences of the form $(\rho_i(h_n)/\Vert \rho_i(h_n) \Vert_i)_n$, where $(h_n) \in\overline{\Gamma}^1$.
Finally, we put $r_i = \min_{b \in A_i} \operatorname{rk}(b)$, where $\operatorname{rk}(b)$ is the rank of $b$.
A key fact that we will use is that since $G$ is simple, we have that $\rho_i(g)$ has determinant $1$ for all $g \in G$, and in particular for all $g \in \Gamma$. Hence, if $(h_n)_n\subset\Gamma$ and $(\rho_i(h_n))_n$ is unbounded in the $K_i$-topology, then the normalized sequence $(\rho_i(h_n)/\Vert \rho_i(h_n) \Vert_{K_i})_n$ has a non-invertible cluster point. So by our choice of $K_i$, we have $r_i < d_i := \dim(V_i)$.
Let us mention the following stability result for the sets $A_i$.
\begin{lemma}
\label{stability}
If $b$ and $b'$ belong to $A_i$ and $bb' \neq 0$, then some scalar multiple of $bb'$ belongs to $A_i$. In particular $\operatorname{rk}(bb') \geq r_i$.
\end{lemma}
{\it Proof.}
If $b = \lim_n \rho_i(g_n)/\Vert \rho_i(g_n) \Vert_i$ and $b' = \lim_n \rho_i(h_n)/\Vert \rho_i(h_n) \Vert_i$, with $(g_n)_n, (h_n)_n \in \overline{\Gamma}^1$, then the product sequence $(g_nh_n)_n \subset \Gamma$ converges to 1 in the real topology. Moreover,
\[\lim_n \frac{\rho_i(g_nh_n)}{\Vert \rho_i(g_n) \Vert_i \Vert \rho_i(h_n) \Vert_i} = bb', \qquad \text{so that} \qquad \lim_n \frac{\Vert \rho_i(g_nh_n)\Vert_i}{\Vert \rho_i(g_n) \Vert_i \Vert \rho_i(h_n) \Vert_i} =\Vert bb' \Vert_i.\]
Therefore
\[ \lim_n \frac{\rho_i(g_nh_n)}{\Vert \rho_i(g_nh_n) \Vert_i} = \frac{bb'}{\Vert bb' \Vert_i}. \]
\hfill$\blacksquare$
Now, we turn to the construction of the set $S$ from Proposition \ref{ping-pong}. The following lemma will produce the first element of $S$. In the context of Lemma \ref{firstplayer}, it will be $g_n$ with $n$ large enough, depending on $\eta$. The other elements of $S$ will arise as appropriate conjugates of this first element.
\begin{lemma}\label{firstplayer}
There exists a sequence $(g_n)_n \in \overline{\Gamma}^1$ such that for all $i \in I$
\begin{enumerate}[(1)]
\item $\lim_n \frac{\rho_i(g_n)}{\Vert \rho_i(g_n)\Vert_{i}} = b_i$ for some $b_i$ with $\operatorname{rk}(b_i) = r_i$;
\item $\operatorname{Range}(b_i) \cap \operatorname{Ker}(b_i) = \{0\}$.
\end{enumerate}
\end{lemma}
{\it Proof.} We proceed in three steps.
{\bf Step 1.} There exists a sequence $(h_n)_n \in \overline{\Gamma}^1$ which satisfies (1) above for all $i \in I$.
We proceed by induction. Enumerate the set $I = \{1,\cdots,\vert I \vert \}$. Assume that $(k_n)_n \in \overline{\Gamma}^1$ satisfies (1) for all indices $i < i_0$, for some $1 \leqslant i_0 \leqslant |I|$. Taking a subsequence if necessary, we can assume that the sequence $(\rho_{i_0}(k_n)/\Vert \rho_{i_0}(k_n) \Vert_{i_0})$ converges to some element $b_{i_0} \in \operatorname{End}(V_{i_0})$.
Since the rank of $b_{i_0}$ could be greater than $r_{i_0}$, we also consider a sequence $(k_n')_n \in \overline{\Gamma}^1$ such that $(\rho_{i_0}(k_n')/\Vert \rho_{i_0}(k_n') \Vert_{K_{i_0}})_n$ converges to some $b_{i_0}'$ with rank $r_{i_0}$. Taking a subsequence we can assume $(\rho_i(k_n')/\Vert \rho_{i}(k_n') \Vert_{i})_n$ converges to some element $b_i'$ for all $i < i_0$, with possibly $\operatorname{rk}(b_i') > r_i$.
We will prove the existence of an element $g \in \Gamma$ such that the sequence $(g^{-1}k_ngk_n')_n$ satisfies (1) for all $i \leqslant i_0$. Note that no matter how we choose $g$, this sequence is inside $\overline{\Gamma}^1$. In fact it suffices to find $g \in \Gamma$ such that $\rho_i(g^{-1})b_i\rho_i(g)b_i' \neq 0$, for all $i \leqslant i_0$. Indeed, then Lemma \ref{stability} implies that the sequence $(\rho_i(g^{-1}k_ngk_n')/\Vert \rho_i(g^{-1}k_ngk_n') \Vert_i)_n$ converges to some non-zero multiple of $\rho_i(g^{-1})b_i\rho_i(g)b_i'$, which has rank at most equal to $\min(\operatorname{rk}(b_i),\operatorname{rk}(b_i')) = r_i$, hence rank exactly $r_i$, since this limit belongs to $A_i$.
For each $i \leqslant i_0$, the set $X_i = \{g \in \mathbb{G}(K_i) \, \vert \, \operatorname{Range}(\rho_i(g)b_i') \nsubseteq \operatorname{Ker}(b_i)\}$ is a Zariski open set in $\mathbb{G}$ which is non-empty because $\rho_i$ is irreducible. Therefore $\Gamma \bigcap (\cap_{i \leqslant i_0} X_i)$ is nonempty. This proves Step 1.
{\bf Step 2.} There exists a sequence $(g_n)_n \in \overline{\Gamma}^1$ such that $(1)$ is true for any $i \in I$ and the corresponding elements $b_i$ satisfy $b_i^2 \neq 0$.
Consider a sequence $(h_n)$ as in Step 1, and denote by $b_i'$ the corresponding elements. We will find an element $g \in \Gamma$ such that the sequence of elements $g_n := gh_ng^{-1}h_n$ does what we want.
As above, for any $i$, the set $X_i$ of elements $g \in \mathbb{G}(K_i)$ such that $\operatorname{Range}(\rho_i(g)b_i') \nsubseteq \operatorname{Ker}(b_i')$ is a non-empty Zariski-open set. So is the set $Y_i$ of $g \in \mathbb{G}(K_i)$ such that $\operatorname{Range}(\rho_i(g^{-1})b_i') \nsubseteq \operatorname{Ker}(b_i')$, for all $i \leqslant |I|$. Take $g \in \Gamma \bigcap (\cap_{i\leqslant |I|} (X_i \cap Y_i))$ so that the element $a_i := \rho_i(g)b_i'\rho_i(g^{-1})b_i'$ is non-zero. Then for all $i$, the sequence $(\rho_i(g_n)/\Vert \rho_i(g_n)\Vert_i)_n$ converges to some nonzero multiple $b_i$ of $a_i$. We claim that $a_i^2 \neq 0$.
Indeed, Lemma \ref{stability} implies that the rank of $a_i$ is equal to $r_i = \operatorname{rk}(\rho_i(g)b_i')$. This means that the range of $a_i$ is equal to the range of $\rho_i(g)b_i'$. Since $g \in X_i$, it follows that $b_i'a_i$ is non-zero. Using again Lemma \ref{stability}, we get that the rank of $b_i'a_i$ is equal to $r_i = \operatorname{rk}(b_i')$. This means that the range of $b_i'a_i$ is equal to the range of $b_i'$. But since $g \in Y_i$ we see that $b_i'\rho_i(g^{-1})b_i'a_i \neq 0$. This shows that $a_i^2 \neq 0$.
{\bf Step 3.} The sequence from Step 2 satisfies the conclusion of the lemma.
We just need to check that for all $i$, any element $b \in A_i$ with rank $r_i$ and such that $b^2 \neq 0$ satisfies $\operatorname{Range}(b) \cap \operatorname{Ker}(b) = \{0\}$. Indeed, if $b^2 \neq 0$ then Lemma \ref{stability} implies that some multiple of $b^2$ belongs to $A_i$. Hence $\operatorname{rk}(b^2) = r_i = \operatorname{rk}(b)$. This precisely means that $\operatorname{Range}(b) \cap \operatorname{Ker}(b) = \{0\}$.
\hfill$\blacksquare$
Before actually proving Proposition \ref{ping-pong}, let us give two easy lemmas.
\begin{lemma}\label{inverse}
Given a local field $\mathcal{K}$, consider a sequence of invertible elements $(g_n)_n \subset \operatorname{GL}_d(\mathcal{K})$, such that
\[ \lim_n \frac{g_n}{\Vert g_n \Vert} = b \qquad \text{and} \qquad \lim_n \frac{g_n^{-1}}{\Vert g_n^{-1} \Vert} = b',\]
for some non-invertible elements $b,b' \in M_d(\mathcal{K})$.
Then $bb' = 0$, so that $\operatorname{Range}(b') \subset \operatorname{Ker}(b)$.
\end{lemma}
{\it Proof.}
Note that $bb'$ is a scalar matrix, being the limit of the sequence of scalar matrices $\big((\Vert g_n \Vert \Vert g_n^{-1} \Vert)^{-1}\operatorname{Id}\big)_n$. Since it is non-invertible, it must be $0$.
\hfill$\blacksquare$
\begin{lemma}\label{conjugators}
Let $\rho:\mathbb{G}(\mathcal{K})\rightarrow\operatorname{GL}(W_{\rho})$ be an irreducible algebraic representation over a local field $\mathcal{K}$. Let $V_1^+,V_1^-,V_2^+,V_2^-\subseteq W_\rho$ be non-zero, proper subspaces such that $V_1^+\cap V_1^-=V_2^+\cap V_2^-=\{0\}$ and $V_1^+\subseteq V_2^-$ and $V_2^+\subseteq V_1^-$.
For $M \geqslant 1$, denote by $X_M \subseteq \mathbb{G}(\mathcal{K})^M$ the set of $M$-tuples $(h_1,\ldots,h_M)$ satisfying the following two conditions.
\begin{enumerate}
\item For $1 \leqslant s \neq t \leqslant M$, we have $\rho(h_s)V_1^+ \nsubseteq \rho(h_t)(V_1^{-} \cup V_2^-)$ and $\rho(h_s)V_2^+ \nsubseteq \rho(h_t)(V_1^{-} \cup V_2^-)$;
\item For any subset $S \subset \{1,\dots,M\}$ and any choice of $V_s \in \{V_1^-,V_2^-\}$, $s \in S$, we have
\[\dim(\cap_{s \in S} \rho(h_s)V_s) \leqslant \max(0,\dim(W_\rho) - \vert S \vert).\]
\end{enumerate}
Then $X_M$ is a nonempty Zariski-open set.
\end{lemma}
{\it Proof.}
Denote by $A_M$ (resp. $B_M$) the set of $M$-tuples satisfying condition (1) (resp. (2)). Then $A_M$ is clearly a finite intersection of Zariski open sets, which are non-empty by irreducibility of $\rho$.
Let us prove by induction over $M$ that $B_M$ is a non-empty Zariski open set. For $M = 1$, the condition (2) is empty, so this is clearly true. Assuming the result for $M$, let us check it for $M+1$.
Consider the finite collection of all vector spaces of the form $E_\alpha = \cap_{s \in S} \rho(h_s)V_s \subset W_\rho$, where $S \subset \{1,\dots,M\}$ and $V_s \in \{V_1^-,V_2^-\}$, for all $s \in S$. Then $B_{M+1}$ is equal to
\[\Big\{(h_1,\dots,h_{M+1}) \, \Big\vert \, (h_1,\dots,h_{M}) \in B_M \text{ and } E_\alpha \nsubseteq \rho(h_{M+1})(V_1^- \cup V_2^-) \text{ for all } \alpha \text{ with } E_\alpha \neq\{0\} \Big\}.\]
Using this, it can be easily seen that $B_{M+1}$ is a finite intersection of non-empty Zariski open sets. Therefore, $B_{M+1}$ is Zariski open, as well as non-empty by the Zariski connectedness of $\mathbb{G}$.
\hfill$\blacksquare$
{\it Proof of Proposition \ref{ping-pong}.}
Consider the representations $\rho_i$ over local fields $K_i$, $i \in I$, defined above. For $i\in I$, we consider the representation $\rho_{i'}: g \mapsto \rho_i(g^{-1})^t$. Note that by the definition of $r_i$, we clearly have $r_{i'} = r_i$.
Applying Lemma \ref{firstplayer} to the set
of representations $\{\rho_i\}_{i\in I}\cup\{\rho_{i'}\}_{i\in I}$, we obtain a sequence $(g_n)_n \subset \Gamma$ which converges to $1$ in the real topology, and elements $b_i, b_{i'} \in \operatorname{End}(V_i)$ such that for all $i \in I$,
\begin{itemize}
\item $\lim_n \frac{\rho_i(g_n)}{\Vert \rho_i(g_n)\Vert_{K_i}} = b_i$ and $\lim_n \frac{\rho_i(g_n^{-1})}{\Vert \rho_i(g_n^{-1})\Vert_{K_i}} = b_{i'}$,
\item $\operatorname{rk}(b_i) = \operatorname{rk}(b_{i'}) = r_i$, and
\item $\operatorname{Range}(b_i) \cap \operatorname{Ker}(b_i) = \operatorname{Range}(b_{i'}) \cap \operatorname{Ker}(b_{i'}) = \{0\}$.
\end{itemize}
By Lemma \ref{inverse}, we can add the following property to the above list:
\begin{itemize}
\item $\operatorname{Range}(b_i) \subset \operatorname{Ker}(b_{i'})$ and $\operatorname{Range}(b_{i'}) \subset \operatorname{Ker}(b_i)$.
\end{itemize}
Now, for $i \in I$, the sets $V_{i,1}^+ = \operatorname{Range}(b_i)$, $V_{i,1}^- = \operatorname{Ker}(b_i)$, $V_{i,2}^+ = \operatorname{Range}(b_{i'})$ and $V_{i,2}^- = \operatorname{Ker}(b_{i'})$ satisfy the hypothesis of Lemma \ref{conjugators}. Put $M := \max_i(\dim(\rho_i)) + 1$. For $i\in I$, denote by $X_i$ the non-empty Zariski open subset of $\mathbb{G}(K_i)^M$ given by Lemma \ref{conjugators} applied to these sets. Pick an $M$-tuple $(h_1,\cdots,h_M) \in \Gamma^M \bigcap (\cap_i X_i)$.
Before going further, let us mention that $\rho_i(h_s)V_{i,1}^+ = \operatorname{Range}(h_sb_{i}h_s^{-1})$, $\rho_i(h_s)V_{i,1}^- = \operatorname{Ker}(h_sb_{i}h_s^{-1})$, whereas $\rho_i(h_s)V_{i,2}^+ = \operatorname{Range}(h_sb_{i'}h_s^{-1})$, $\rho_i(h_s)V_{i,2}^- = \operatorname{Ker}(h_sb_{i'}h_s^{-1})$ (here and below we write $h_sbh_s^{-1}$ for $\rho_i(h_s)b\rho_i(h_s)^{-1}$).
Then by the definition of $X_i$, for every $i \in I$ and $1 \leqslant s \neq t \leqslant M$, we have $\rho_i(h_s)V_{i,1}^+ \nsubseteq \rho_i(h_t)V_{i,2}^{-}$. This means that $(h_tb_{i'}h_t^{-1}).(h_sb_{i}h_s^{-1}) \neq 0$. But both $(h_sb_{i}h_s^{-1})$ and $(h_tb_{i'}h_t^{-1})$ belong to $A_i$ and have rank $r_i$. Thus, their product has rank equal to $r_i$ by Lemma \ref{stability}. From this we deduce that $\rho_i(h_s)V_{i,1}^+ \cap \rho_i(h_t)V_{i,2}^{-} = \{0\}$.
Similarly, $\rho_i(h_s)V_{i,1}^+ \cap \rho_i(h_t)V_{i,1}^{-} = \{0\}$ and $\rho_i(h_s)V_{i,2}^+ \cap \rho_i(h_t)(V_{i,1}^{-} \cup V_{i,2}^-)= \{0\}$.
Using the above properties, for every $i \in I$ and $1 \leqslant s \leqslant N$, we can find compact neighborhoods $K_{i,s}, K_{i,s}' \subset \P(V_i)$ of $\P(\rho_i(h_s)V_{i,1}^+)$ and $\P(\rho_i(h_s)V_{i,2}^+)$ respectively, and open sets $U_{i,s}, U_{i,s}' \subset \P(V_i)$ which are complements of neighborhoods of $\P(\rho_i(h_s)V_{i,1}^-)$ and $\P(\rho_i(h_s)V_{i,2}^-)$, respectively, such that:
\begin{itemize}
\item $K_{i,s} \subset U_{i,s}$ and $K_{i,s}' \subset U_{i,s}'$ for all $s$;
\item $K_{i,s}' \cap U_{i,s} = \emptyset = K_{i,s} \cap U_{i,s}'$;
\item For all $s \neq t$, $K_{i,s} \subset U_{i,t} \cap U_{i,t}'$ and $K_{i,s}' \subset U_{i,t} \cap U_{i,t}'$;
\item For any $x \in \P(V_i)$, we can find at least two indices $s$ for which $x \in U_{i,s}$ or $x \in U_{i,s}'$.
\end{itemize}
The last fact is due to property (2) from Lemma \ref{conjugators}, which implies that for any set $S \subset \{1,\cdots,M\}$ with $\vert S \vert = M - 1$ and any choice of $V_s \in \{V_{i,1}^-,V_{i,2}^-\}$, $s \in S$, we have $\cap_{s \in S} \rho_i(h_s)V_s = \{0\}$.
Finally, given $\eta > 0$, we can find $n$ large enough so that for all $i \in I$ and all $s$, we have $\rho_i(h_sg_nh_s^{-1})(U_{i,s}) \subset K_{i,s}$, $\rho_i(h_sg_n^{-1}h_s^{-1})(U_{i,s}') \subset K_{i,s}'$ and $h_sg_nh_s^{-1}, h_sg_n^{-1}h_s^{-1} \in B_\eta(1)$. We define $S$ to be the set of elements $\{h_sg_nh_s^{-1} \, \vert \, 1 \leqslant s \leqslant M\}$. If $g = h_sg_nh_s^{-1} \in S$, define $K_g^{(i)} = K_{i,s}$ and $U_g^{(i)} = U_{i,s}$, and if $g = h_sg_n^{-1}h_s^{-1} \in S^{-1}$, define $K_g^{(i)} = K_{i,s}'$ and $U_g^{(i)} = U_{i,s}'$.
These sets are easily seen to satisfy the desired properties.
\hfill$\blacksquare$
\subsection{From subgroups to neighborhoods of subgroups}
The goal of this section is to prove the following proposition, which roughly says that algebraic points with small logarithmic height cannot be very close to a proper algebraic subgroup. Our method is fairly similar to~\cite[Proposition 16]{SGV11} (see also \cite[Proposition 3.11]{BdS14} or \cite[Proposition 4]{Va10}).
\begin{proposition}\label{neigh}
Let $G$ be a connected simple Lie group and $T \subset G$ a finite subset. Assume that there is a basis $\mathfrak{B}$ of the Lie algebra $\mathfrak{g}$ of $G$ such that the matrix of Ad$(g)$ in the basis $\mathfrak{B}$ has algebraic entries, for every $g\in T$.
Then there exists a constant $C>0$ (depending on $T$) such that for every integer $n\geqslant 1$ and any non-discrete proper closed subgroup $H<G$, we can find a proper closed subgroup $H'<G$ such that
\[W_{\leqslant n}(T) \cap H^{(e^{-Cn})}\subseteq H',\]
where $W_{\leqslant n}(T) = \{g_1g_2\cdots g_n \, | \, g_1,g_2,\ldots,g_n \in T \cup T^{-1}\cup\{1\}\}$ denotes the set of words of length at most $n$ in $T\cup T^{-1}$.
\end{proposition}
{\bf Notation}. In this subsection, we use the notation $O_X(a)$ to denote a positive quantity bounded by $Ca$, for some constant $C>0$ depending only on $X$. We also use the notation $a\gg_{X}b$ to mean the existence of some constant $C>0$ depending only on $X$ such that $a\geqslant Cb$.
\begin{lemma}\label{l:OneLargeCoordinate}
Let $X\subseteq {\rm M}_n(\mathbb{R})$ be a finite subset. Suppose the $\mathbb{R}$-span $A$ of $X$ is an $\mathbb{R}$-algebra, and $V:=\mathbb{R}^n$ is a simple $A$-module. Then there exists $c_0 > 0$ such that for every $\mathbf{l} \in V^{\ast}$ and $\mathbf{v} \in V$
\[\max_{x\in X} |\mathbf{l} (x\mathbf{v})|\geqslant c_0 \|\mathbf{l}\|_2\|\mathbf{v}\|_2.\]
\end{lemma}
{\it Proof}.
Let $H_{X}(\mathbf{l},\mathbf{v}):=\max_{x\in X}|\mathbf{l}(x\mathbf{v})|$. We need to show that
the infimum of $H_X(\mathbf{l},\mathbf{v})$ over pairs of unit vectors is positive. Suppose the contrary. Then, by the continuity of $H_X:V^{\ast}\times V\rightarrow \mathbb{R}$ and the compactness of the unit spheres, there are unit vectors $\mathbf{l}_0$ and $\mathbf{v}_0$ such that $H_X(\mathbf{l}_0,\mathbf{v}_0)=0$. This implies that for any $a\in A$ we have $\mathbf{l}_0(a \mathbf{v}_0)=0$. Hence the $A$-module generated by $\mathbf{v}_0$ is a proper subspace, which contradicts the simplicity of $V$.
\hfill$\blacksquare$
\begin{lemma}\label{l:InProperSubvariety}
Let $G$ be a simple Lie group and $T \subseteq G$ be a finite symmetric set such that $\Gamma = \langle T \rangle$ is a dense subgroup of $G$. Suppose that the matrix of Ad$(g)$ with respect to a basis $\mathfrak{B}$ of the Lie algebra $\frak g$ of $G$ has algebraic entries, for every $g\in T$. Then there exists $C_1 > 0$ such that the following holds:
If $n\geqslant 1$ is an integer, then for any proper non-discrete closed subgroup $H$ of $G$, there are non-zero vectors $\mathbf{v}\in \mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}$ and $\mathbf{l}\in\mathfrak{g}^{\ast}\otimes_{\mathbb{R}}\mathbb{C}$ such that
$\mathbf{l}(\operatorname{Ad}(\gamma)(\mathbf{v}))=0$, for any $\gamma \in W_{\leqslant n}(T) \cap H^{(e^{-C_1n})}$.
\end{lemma}
{\it Proof.}
Since $\Gamma$ is a dense subgroup of $G$, the $\mathbb{R}$-span $A$ of $\operatorname{Ad}(\Gamma)$ in ${\rm End}_{\mathbb{R}}(\mathfrak{g})$ is equal to the $\mathbb{R}$-span of $\operatorname{Ad}(G)$. Denote by $d$ the dimension of $G$. It is easy to see that the $\mathbb{R}$-span of $\operatorname{Ad}(W_{\leqslant d^2}(T))$ is equal to $A$. Hence by Lemma~\ref{l:OneLargeCoordinate}, there exists $c_0 > 0$ such that for any $\mathbf{l}\in \mathfrak{g}^{\ast}$ and $\mathbf{v}\in \mathfrak{g}$ we have
\begin{equation}\label{e:LargeCoordinate}
\max_{\gamma\in W_{\leqslant d^2}(T)} |\mathbf{l} (\operatorname{Ad}(\gamma)(\mathbf{v}))|\geqslant c_0\|\mathbf{l}\|_2\|\mathbf{v}\|_2,
\end{equation}
as the adjoint representation is irreducible.
Let $H<G$ be a proper non-discrete closed subgroup and fix $n \geqslant 1$. Let $\mathbf{v}\in \mathfrak{g}$ and $\mathbf{l}\in \mathfrak{g}^{\ast}$ be such that
\begin{enumerate}
\item $\|\mathbf{v}\|_2=1$ and $\|\mathbf{l}\|_2=1$.
\item $\mathbf{v}\in \mathfrak{h}$ and $\mathfrak{h}\subseteq \ker \mathbf{l}$ where $\mathfrak{h}:=\Lie(H)$ is the Lie algebra of $H$.
\end{enumerate}
By using (\ref{e:LargeCoordinate}) and rescaling $\mathbf{v}$, we find $\gamma_0\in W_{\leqslant d^2}(T)$, $\mathbf{v}_H\in \mathfrak{g}$, and $\mathbf{l}_H\in \mathfrak{g}^{\ast}$ such that
\begin{enumerate}
\item $\|\mathbf{l}_H\|_2=1$, $\|\mathbf{v}_H\|_2 \leqslant 1/c_0$.
\item $\mathbf{l}_H(\operatorname{Ad}(\gamma_0)(\mathbf{v}_H))=1$.
\item for any $h\in H$, $\mathbf{l}_H(\operatorname{Ad}(h)(\mathbf{v}_H))=0$.
\end{enumerate}
By the hypothesis, $\mathfrak{g}$ has a basis $\mathfrak{B}:=\{\mathbf{v}_1,\ldots, \mathbf{v}_d\}$ such that $\mathbf{v}_i^{\ast}(\operatorname{Ad}(\gamma)(\mathbf{v}_j))\in \overline{\mathbb{Q}}$, for any $\gamma\in \Gamma$ and $1\leqslant i,j\leqslant d$, where $\mathfrak{B}^{\ast}:=\{\mathbf{v}_1^{\ast},\ldots,\mathbf{v}_d^{\ast}\}$ is the dual basis. Since $\Gamma$ is finitely generated, there are a number field $k$ and a finite set of places $\mathcal{S}$ of $k$ such that $\mathbf{v}_i^{\ast}(\operatorname{Ad}(\gamma)(\mathbf{v}_j))\in \mathcal{O}_k(\mathcal{S})$, for any $\gamma\in \Gamma$.
For any $g\in G$, let $\eta_g(\underline{x},\underline{y}) \in \mathbb{R}[x_1,\ldots,x_d,y_1,\ldots,y_d]$ be the polynomial $\eta_g([\mathbf{l}]_{\mathfrak{B}^{\ast}},[\mathbf{v}]_{\mathfrak{B}}):=\mathbf{l}(\operatorname{Ad}(g)(\mathbf{v}))$, where $[\mathbf{l}]_{\mathfrak{B}^{\ast}}$ (resp. $[\mathbf{v}]_{\mathfrak{B}}$) is the vector of coordinates of $\mathbf{l}$ in the basis $\mathfrak{B}^{\ast}$ (resp. $\mathfrak{B}$). It is clear that $\eta_g$ is a degree $2$ polynomial in $2d$ variables. Fix a constant $C_1 > 0$ large enough, depending only on $T$ (we will be more specific later).
Now, suppose that the following system of polynomial equations does not have a common solution over $\mathbb{C}$:
\begin{align*}
\eta_{\gamma}(\underline{x},\underline{y}) &=0 \;\;\;\text{ for any }\gamma\in W_{\leqslant n}(T)\cap H^{(e^{-C_1n})},\\
\eta_{\gamma_0}(\underline{x},\underline{y})-1&=0.
\end{align*}
We notice that the coefficients of $\eta_{\gamma}$ are in $\mathcal{O}_k(\mathcal{S})$. We view $\mathcal{O}_k(\mathcal{S})$ as a discrete subring of $\prod_{\mathfrak{p}\in V_k(\infty)\cup \mathcal{S}}k_{\mathfrak{p}}$, where $V_k(\infty)$ is the set of Archimedean places of $k$. It is clear that the $\mathcal{S}$-norm (the maximum norm in $\prod_{\mathfrak{p}\in V_k(\infty)\cup \mathcal{S}}k_{\mathfrak{p}}$) of the coefficients of $\eta_{\gamma}$ for $\gamma\in W_{\leqslant n}(T)$ is at most $e^{O_{T}(n)}$. Then by the effective Nullstellensatz~\cite[Theorem IV]{MW} there are polynomials $q_{\gamma}(\underline{x},\underline{y}),q_{\gamma_0}(\underline{x},\underline{y})\in \mathcal{O}_k[x_1,\ldots,x_d,y_1,\ldots,y_d]$ and $a\in \mathcal{O}_k$ such that
\begin{enumerate}
\item $\sum_{\gamma\in W_{\leqslant n}(T)\cap H^{(e^{-C_1n})}} q_{\gamma}(\underline{x},\underline{y}) \eta_{\gamma}(\underline{x},\underline{y})+ q_{\gamma_0}(\underline{x},\underline{y}) \eta_{\gamma_0}(\underline{x},\underline{y})=a.$
\item $\deg q_{\gamma}, \deg q_{\gamma_0}\ll_{d,\deg k} 1$.
\item The $\mathcal{S}$-norms of the coefficients of $q_{\gamma}$ and $q_{\gamma_0}$ are at most $e^{O_{T}(n)}$.
\item The $\mathcal{S}$-norm of $a$ is at most $e^{O_{T}(n)}$, and it is non-zero.
\end{enumerate}
Since $a\in \mathcal{O}_k$ is non-zero, we have $1\leqslant |N_{k/\mathbb{Q}}(a)|=\prod_{\mathfrak{p}\in V_k(\infty)}|a|_{\mathfrak{p}}\leqslant (\min_{\mathfrak{p}\in V_k(\infty)}|a|_{\mathfrak{p}})\|a\|_{\mathcal{S}}^{\deg k-1}$. Thus
\begin{equation}\label{e:LowerBoundOna}
\min_{\mathfrak{p}\in V_k(\infty)}|a|_{\mathfrak{p}}\geqslant e^{-O_{T}(n)}.
\end{equation}
Let $\mathfrak{p}_0\in V_k(\infty)$ be the place which gives us the embedding of $\operatorname{Ad}(\Gamma)$ into ${\rm End}_{\mathbb{R}}(\mathfrak{g})$.
So by the properties of $\mathbf{l}_H$ and $\mathbf{v}_H$ mentioned above we have that
\begin{align*}
|\eta_{\gamma}(\mathbf{l}_H,\mathbf{v}_H)|_{\mathfrak{p}_0}&\leqslant e^{-C_1n/2}, \\
|q_{\gamma}(\mathbf{l}_H,\mathbf{v}_H)|_{\mathfrak{p}_0}&\leqslant e^{O_{T}(n)},
\end{align*}
for any $\gamma\in \textstyle W_{\leqslant n}(T) \cap H^{(e^{-C_1n})}$. Hence we have
\[\textstyle
|\sum_{\gamma\in W_{\leqslant n}(T)\cap H^{(e^{-C_1n})}} q_{\gamma}(\mathbf{l}_H,\mathbf{v}_H) \eta_{\gamma}(\mathbf{l}_H,\mathbf{v}_H)+ q_{\gamma_0}(\mathbf{l}_H,\mathbf{v}_H) \eta_{\gamma_0}(\mathbf{l}_H,\mathbf{v}_H)|_{\mathfrak{p}_0}\leqslant e^{O_{T}(n)-C_1n/2}\leqslant e^{-C_1n/4}
\]
if we choose $C_1$ large enough. But if we choose $C_1$ perhaps even larger (but still depending only on $T$), this contradicts \eqref{e:LowerBoundOna}. Therefore, the above system does have a common solution $(\mathbf{l},\mathbf{v})$ over $\mathbb{C}$; since $\eta_{\gamma_0}(\mathbf{l},\mathbf{v})=1$, both $\mathbf{l}$ and $\mathbf{v}$ are non-zero, and they satisfy the conclusion of the lemma.
\hfill$\blacksquare$
{\it Proof of Proposition~\ref{neigh}.}
Let $\mathbb{G}$ be the Zariski-closure of $\operatorname{Ad}(G)$ in $\operatorname{GL}(\mathfrak{g})$. By Lemma~\ref{l:InProperSubvariety}, there exists a constant $C_1 > 0$ such that for any $n$ and any non-discrete proper closed subgroup $H$ of $G$ there is a subvariety $X\subseteq\mathbb{G}$ (depending on $H$ and $n$) whose dimension is strictly less than $\dim \mathbb{G}$ such that $\operatorname{Ad}(W_{\leqslant n}(T) \cap H^{(e^{-C_1n})})\subseteq X$. Using the generalized Bezout theorem, it was proved in \cite[Proposition 3.2]{EMO05} that there is $N(X)\geqslant 1$ such that
$W_{\leqslant N(X)}(A) \not\subseteq X$ whenever $A$ generates a Zariski-dense subgroup of $\mathbb{G}$.
Moreover, by the proof of \cite[Proposition 3.2]{EMO05}, $N(X)$ is bounded above by some bound depending on the number of irreducible components of $X$ and the maximal degree of an irreducible component of $X$. Since $X$ is the intersection of $\mathbb{G}$ with a hyperplane, we conclude that $N:=\sup_{X}N(X)<\infty$. This number $N$ only depends on $T$.
Next, we show that there exists $C > 0$ (depending only on $T$) such that for every multiple $n$ of $N$,
\begin{equation}\label{e:ProperSubgroup}
\textstyle W_{\leqslant N}\Big(W_{\leqslant n/N}(T) \cap H^{(e^{-Cn})}\Big) \subseteq W_{\leqslant n}(T) \cap H^{(e^{-C_1n})}.
\end{equation}
This, combined with the above paragraph, implies that for every multiple $n$ of $N$ and every non-discrete proper closed subgroup $H$ of $G$, the set $W_{\leqslant n/N}(T) \cap H^{(e^{-Cn})}$ is contained in a proper algebraic subgroup of $\mathbb{G}$.
For any $\gamma_i\in W_{\leqslant n/N}(T) \cap H^{(e^{-Cn})}$, there are $h_i\in H$ such that $\|\operatorname{Ad}(\gamma_i)-\operatorname{Ad}(h_i)\|_2\leqslant e^{-Cn}$ and $\|\operatorname{Ad}(\gamma_i)\|_2\leqslant e^{O_{T}(n)}$. Hence, $\|\operatorname{Ad}(h_i)\|_2\leqslant e^{O_{T}(n)}$ and
\begin{align*}
\|\operatorname{Ad}(\gamma_1\cdots \gamma_N)-\operatorname{Ad}(h_1\cdots h_N)\|_2&=\|\sum_{i=0}^{N-1}(\operatorname{Ad}(\gamma_1\cdots \gamma_{N-i}h_{N-i+1} \cdots h_N)-\operatorname{Ad}(\gamma_1\cdots \gamma_{N-i-1}h_{N-i} \cdots h_N)) \|_2\\
&\leqslant \sum_{i=0}^{N-1} (\prod_{j=1}^{N-i-1} \|\operatorname{Ad}(\gamma_j)\|_2) (\prod_{j=N-i+1}^N\|\operatorname{Ad} h_j\|_2) (\|\operatorname{Ad} \gamma_{N-i}-\operatorname{Ad} h_{N-i}\|_2) \\
& \leqslant e^{O_{T}(n)-Cn} \leqslant e^{-C_1n}
\end{align*}
if $C\gg_{T} 1$, which implies (\ref{e:ProperSubgroup}).
\hfill$\blacksquare$
\subsection{Proof of Theorem \ref{escape}}
By \cite[Corollary 2.5]{BrG02}, $\Gamma$ contains a finitely generated subgroup which is dense in $G$. Thus, we may assume that $\Gamma$ is finitely generated.
Let $\rho_i:\Gamma\rightarrow\text{GL}(V_i)$, $i\in I$, be the representations and $M\geqslant 2$ be the integer given by Proposition \ref{ping-pong}.
By a result of Kazhdan and Margulis (see \cite[Theorem 8.16]{Ra72}), there is a neighborhood $U$ of the identity in $G$ such that for any discrete subgroup $\Sigma<G$, $\Sigma\cap U$ is contained in a connected nilpotent subgroup of $G$.
Let $U_0\subset U$ be an open set such that $U$ contains the closure of $U_0^{-1}U_0$.
Throughout the proof, we fix two constants $\kappa>1$ and $\eta>0$ (depending on $G$ only) such that
\begin{enumerate}[(a)]
\item\label{a} $B_R(1)$ can be covered by at most $R^{\kappa}$ of the sets $\{gU_0\}_{g\in G}$, whenever $R>2$,
\item\label{b} $B_R(1)$ can be covered by at most ${\Big(\frac{R}{r}\Big)^{\kappa}}$ balls in $G$ of radius ${\frac{r}{2}}$, whenever $R>2r>0$,
\item\label{c} $\|x^{-1}\|_2\leqslant \|x\|_2^{\kappa}$, for every $x\in G$, and
\item\label{d} ${(1+\eta)^{(3\kappa +4)\kappa}<\Big(\frac{2M-1}{2M-2}\Big)^{\frac{1}{13}}}$.
\end{enumerate}
Let $S\subset\Gamma$ be a set satisfying Theorem \ref{ping-pong} such that $\tilde S=S\cup S^{-1}\subset B_{\eta}(1)$ and $|S|=M$.
For $i\in I$, let $K_g^{(i)}\subset U_g^{(i)}$ ($g\in\tilde S$) be the subsets of $V_i$ provided by Theorem \ref{ping-pong}.
The usual ping-pong lemma implies that $S$ freely generates a subgroup of $\Gamma$, which we denote by $\langle S\rangle$.
Let $|g|_{S}$ be the length of an element $g\in\langle S\rangle$ with respect to $\tilde S$. We denote by $W_n(S)$ the set of elements of length $n$, and by $W_{\leqslant n}(S)$ the set of elements of length at most $n$.
Let $\ell\geqslant 1$ be an integer and put ${\varepsilon=(1+\eta)^{-\ell}}$.
In part 1 of the proof, we construct a finite set $T\subset \Gamma\cap B_{\varepsilon}(1)$. Our construction is inspired by the proof of \cite[Lemma 3]{BY11}.
In the rest of the proof (parts 2-4), we provide constants $d_1,d_2 > 0$ and show that $T$ satisfies the conclusion of Theorem \ref{escape}, whenever $\ell$ is large enough.
This will clearly imply Theorem \ref{escape}.
\vskip 0.1in
{\bf Part 1: construction of the set $T$.}\label{part1}
\vskip 0.05in
Let $a,b\in S$ with $a \not= b$ and define $$Y=\{w=s_1s_2...s_{\ell}|s_1=a,s_{\ell}=b,s_2,...,s_{\ell-1}\in\tilde S,s_{i+1}\not=s_i^{-1},\text{for all}\;1\leqslant i<\ell\}.$$
Let $Z=\{w^3|w\in Y\}.$
Since $|\tilde S|=2M$, we get that $|Z|=|Y|\geqslant (2M-1)^{\ell-3}$.
Since $\tilde S\subset B_{\eta}(1)$, it follows that $W_n(S)\subset B_{(1+\eta)^{n}}(1)$, for all $n\geqslant 1$.
Since $Z\subset W_{3\ell}(S)$, we get that $Z\subset B_{(1+\eta)^{3\ell}}(1)$.
By using (\ref{b}), $Z$ can be covered by at most ${\Big[\frac{(1+\eta)^{3\ell}}{\frac{\varepsilon}{(1+\eta)^{3\kappa\ell}}}\Big]^{\kappa}=(1+\eta)^{(3\kappa+4)\kappa\ell}}$ balls of radius ${\frac{\varepsilon}{2(1+\eta)^{3\kappa\ell}}}$.
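For the reader's convenience, the exponent can be checked directly using $\varepsilon^{-1}=(1+\eta)^{\ell}$:
\[\Big[\frac{(1+\eta)^{3\ell}}{\frac{\varepsilon}{(1+\eta)^{3\kappa\ell}}}\Big]^{\kappa}=\big[(1+\eta)^{3\ell}\,(1+\eta)^{3\kappa\ell}\,(1+\eta)^{\ell}\big]^{\kappa}=(1+\eta)^{(3\kappa+4)\kappa\ell}.\]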
From this we deduce that there is $g_0\in Z$ such that \begin{equation} |B_{\frac{\varepsilon}{(1+\eta)^{3\kappa\ell}}}(g_0)\cap Z|\geqslant\frac{|Z|}{(1+\eta)^{(3\kappa+4)\kappa\ell}}\geqslant\frac{(2M-1)^{\ell-3}}{(1+\eta)^{(3\kappa+4)\kappa\ell}} \end{equation}
We define $T=g_0^{-1}(B_{\frac{\varepsilon}{(1+\eta)^{3\kappa\ell}}}(g_0)\cap Z)\setminus \{1\}$ and $\tilde T=T\cup T^{-1}$. Then $|T|\geqslant \frac{(2M-1)^{\ell-3}}{(1+\eta)^{(3\kappa+4)\kappa\ell}}-1$. Since by inequality (\ref{d}) we have that ${\frac{2M-1}{(1+\eta)^{(3\kappa+4)\kappa}}>(2M-2)^{\frac{1}{13}}(2M-1)^{\frac{12}{13}}}$, we get that \begin{equation}\label{T}|T|\geqslant \frac{[(2M-2)^{\frac{1}{13}}(2M-1)^{\frac{12}{13}}]^{\ell}}{(2M-1)^4}\geqslant [(2M-2)^{\frac{1}{13}}(2M-1)^{\frac{12}{13}}]^{\ell-5},\;\;\text{for all}\;\;\ell\geqslant 1.\end{equation}
If $g\in T$, then ${\|g_0g-g_0\|_2\leqslant\frac{\varepsilon}{(1+\eta)^{3\kappa\ell}}}$. Since $\|g_0\|_2\leqslant (1+\eta)^{3\ell}$, we get $\|g_0^{-1}\|_2\leqslant \|g_0\|_2^{\kappa}\leqslant (1+\eta)^{3\kappa \ell}$.
Altogether, it follows that $\|g-1\|_2\leqslant \|g_0^{-1}\|_2\|g_0g-g_0\|_2\leqslant\varepsilon$, for all $g\in T$. Hence $T\subset B_{\varepsilon}(1)$.
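Here we used the identity $g-1=g_0^{-1}(g_0g-g_0)$ together with the submultiplicativity of $\|\cdot\|_2$.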
We end Part 1 of the proof by recording a useful property of $T$.
\vskip 0.05in
{\bf Claim 1.}
If $g\in W_n(T)$, then $n\ell\leqslant |g|_S\leqslant 6n\ell$. Thus, $T$ freely generates a free subgroup of $\Gamma$.
{\it Proof.} It is enough to show that $n\ell\leqslant |g|_S\leqslant 3n\ell$, for all $g\in W_n(Z)$ and $n\geqslant 1$.
Let $g=g_n^{\varepsilon_n}...g_1^{\varepsilon_1}$, where $n\geqslant 1$ and $g_1,...,g_n\in Z$, $\varepsilon_1,...,\varepsilon_n\in\{\pm1\}$ are such that $g_{i+1}^{\varepsilon_{i+1}}g_{i}^{\varepsilon_{i}}\not=1$, for all $1\leqslant i\leqslant n-1$.
Let $w_1,...,w_n\in Y$ such that $g_1=w_1^3,...,g_n=w_n^3$. Then $$g=w_n^{2\varepsilon_n}(w_n^{\varepsilon_n}w_{n-1}^{\varepsilon_{n-1}})w_{n-1}^{\varepsilon_{n-1}}(w_{n-1}^{\varepsilon_{n-1}}w_{n-2}^{\varepsilon_{n-2}})...(w_{2}^{\varepsilon_{2}}w_1^{\varepsilon_1})w_1^{2\varepsilon_1}.$$
Since $w_{i+1}^{\varepsilon_{i+1}}w_{i}^{\varepsilon_{i}}\not=1$ and $w_i^{\varepsilon_i}w_i^{\varepsilon_i}$ is already reduced, after making all the possible cancellations, the middle $w_i^{\varepsilon_i}$ from
$g_i^{\varepsilon_i}=w_i^{\varepsilon_i}w_i^{\varepsilon_i}w_i^{\varepsilon_i}$ will not be affected. This implies the conclusion. \hfill$\square$
\vskip 0.1in
{\bf Part 2: bounding the number of returns.}
\vskip 0.05in
We continue by showing that the number of elements $g\in W_n(T)$ which fix a given line $[v]\in\mathbb P(V_i)$, for some $i\in I$, is bounded above by $|W_n(T)|^{1-c_0}$, for a constant $c_0>0$.
\vskip 0.05in
{\bf Claim 2.}
There exist $n_0 \geqslant 1$, $\ell_0\geqslant 1$ and $c_0>0$ such that given $i\in I$ and $v\in V_i$, we have $$|\{g\in W_n(T)|\rho_i(g)([v])=[v]\} \vert \leqslant |W_n(T)|^{1-c_0},\;\;\;\text{for all $\ell\geqslant\ell_0$ and $n\geqslant n_0$.}$$
{\it Proof of Claim 2.}
Let $i\in I$ and $v\in V_i$.
For simplicity, we denote $\rho=\rho_i$ and $K_g=K_g^{(i)}$, $U_g=U_g^{(i)}$, for $g\in\tilde S$. Fix $n\geqslant 1$ and define $A=\{g\in W_n(T)|\rho(g)([v])=[v]\}$. Denote $N=\lfloor{\frac{n\ell}{2}}\rfloor$.
In order to estimate $|A|$, we partition $A$ into two subsets according to the reduced form of $g$. Let $g\in A$ and $g=k_pk_{p-1}...k_1$ be its reduced form with respect to $\tilde S$, where $p=|g|_{S}$ and $k_1,...,k_p\in\tilde S$.
By Claim 1 we get that $2N\leqslant n\ell\leqslant p\leqslant 6n\ell\leqslant 12N+6$. We define $$B=\{g\in A|\rho(k_Nk_{N-1}...k_1)([v])\in U_{k_{N+1}}\}\;\;\;\text{and}\;\;\;C=A\setminus B.$$
We proceed by estimating $|B|$ and $|C|$ separately.
\vskip 0.05in
{\bf Claim 3.} ${|B|\leqslant (2|T|)^{\frac{11 n}{12}+1}}$, for all $n >12$.
{\it Proof of Claim 3.}
Assume that $g=k_pk_{p-1}...k_1\in B$. Then the first part of Lemma \ref{SGV} implies that $k_{N+1},...,k_{p-1},k_{p}$ are uniquely determined by $v$.
Now, since $g\in W_n(T)$, we can write $g=g_n^{\varepsilon_n}g_{n-1}^{\varepsilon_{n-1}}...g_1^{\varepsilon_1}$, where $g_1,...,g_n\in T$, $\varepsilon_1,...,\varepsilon_n\in\{\pm 1\}$ and $g_j^{\varepsilon_j}\not=g_{j+1}^{\varepsilon_{j+1}}$, for all $1\leqslant j<n$. Let $w_0\in Y$ be such that $g_0=w_0^3$, and let $w_1,...,w_n\in Y\setminus\{w_0\}$ be such that $g_1=w_0^{-3}w_1^{3},...,g_n=w_0^{-3}w_n^{3}$.
Then the reduced form can be written as $g=h_nw_n^{\varepsilon_n}h_{n-1}...h_1w_1^{\varepsilon_1}h_0$, where $h_i\in\Gamma$ satisfies $|h_i|_{S}\leqslant 5\ell$, and the factor $w_i^{\varepsilon_i}$ corresponds to the middle $w_i^{\varepsilon_i}$ from $w_i^{3\varepsilon_i}$.
We claim that $g_q^{\varepsilon_q}$ is uniquely determined, for any $q$ such that $n\geqslant q\geqslant\frac{11n}{12}+1$.
More precisely, we will show by induction that
$\varepsilon_q$, $w_q$ and $h_q$ are uniquely determined, for all $q$ with $n\geqslant q\geqslant{\frac{11n}{12}}+1$.
First, if $q=n$, we have that either $h_n=w_0^{-3}w_n$, if $\varepsilon_n=1$, or $h_n=w_n^{-1}$, if $\varepsilon_n=-1$.
Since $|h_n|_{S}\leqslant 4\ell$ and $p-N\geqslant n\ell-N\geqslant \frac{n\ell}{2}>4\ell$, it follows that $w_n$ and $\varepsilon_n$ are determined.
Specifically, there are two cases: (1) $k_p...k_{p-\ell+1}=w_0^{-1}$ or (2) $k_p...k_{p-\ell+1}=w_n^{-1}$. In case (1) $\varepsilon_n=1$, $w_n=k_{p-3\ell}...k_{p-4\ell+1}$, and $h_n=w_0^{-3}w_n$, while in case (2) $\varepsilon_n=-1$, $w_n=k_{p-\ell+1}^{-1}...k_p^{-1}$ and $h_n=w_n^{-1}$.
Assume that $\varepsilon_n,...,\varepsilon_{q+1}$, $w_n,...,w_{q+1}$, $h_n,...,h_{q+1}$ are determined, for some $q$ with $n\geqslant q\geqslant {\frac{11 n}{12}}+1$.
Since $|h_nw_n^{\varepsilon_n}...h_{q+1}w_{q+1}^{\varepsilon_{q+1}}|_{S}\leqslant 6(n-q)\ell$ and $p-N\geqslant \frac{n\ell}{2}\geqslant 6(n-q+1)\ell=6(n-q)\ell+6\ell$, we deduce that the first $6\ell$ letters from the left in the reduced word $h_qw_q^{\varepsilon_q}...h_1w_1^{\varepsilon_1}h_0$ with respect to $S$ are determined. Note that $h_q\in\{w_{q+1}w_0^{-3}w_q, w_{q+1}w_q^{-1}\}$, if $\varepsilon_{q+1}=1$, and $h_q\in\{w_{q+1}^{-1}w_q, w_{q+1}^{-1}w_0^{3}w_q^{-1}\}$, if $\varepsilon_{q+1}=-1$. Since $\varepsilon_{q+1}$, $w_{q+1}$ are determined and $|h_qw_q^{\varepsilon_q}|_{S}\leqslant 6\ell$, it follows easily that $\varepsilon_q,w_q$ and $h_q$ are determined. This finishes the proof of our assertion.
Therefore, if $q=\lfloor{{\frac{11n}{12}}}\rfloor+2$, then $g_n^{\varepsilon_n},...,g_q^{\varepsilon_q}\in\tilde T$ are uniquely determined for every $g=g_n^{\varepsilon_n}...g_1^{\varepsilon_1}\in B$. Since $g_1^{\varepsilon_1},...,g_{q-1}^{\varepsilon_{q-1}}$ can each take at most $2|T|$ values, we get that $|B|\leqslant (2|T|)^{q-1}\leqslant (2|T|)^{\frac{11 n}{12}+1}$. \hfill$\square$
\vskip 0.05in
{\bf Claim 4.} $|C|\leqslant [4(2M-2)^{\ell}]^{{\frac{n}{12}}}(2|T|)^{n+2-{\frac{n}{12}}}$, for all $n\geqslant 1$.
{\it Proof of Claim 4.} Assume that $g=k_pk_{p-1}...k_1\in C$. Then the second part of Lemma \ref{SGV} implies that $\rho(k_jk_{j-1}...k_1)([v])\notin U_{k_{j+1}}$, for all $1\leqslant j\leqslant N$.
Below we will use this fact as follows. Suppose that $k_1,...,k_j$ are already determined, for some $1\leqslant j\leqslant N$. Since $\rho(k_jk_{j-1}...k_1)([v])$ belongs to at least $2$ of the sets $\{U_g\}_{g\in\tilde S}$ and $|\tilde S|=2M$, we derive that $k_{j+1}\in\tilde S$ can take at most $2M-2$ values.
Now, since $g\in W_n(T)$, we can write $g=g_n^{\varepsilon_n}g_{n-1}^{\varepsilon_{n-1}}...g_1^{\varepsilon_1}$, where $g_1,...,g_n\in T$, $\varepsilon_1,...,\varepsilon_n\in\{\pm 1\}$ and $g_j^{\varepsilon_j}\not=g_{j+1}^{\varepsilon_{j+1}}$, for all $1\leqslant j<n$. Let $w_1,...,w_n\in Y\setminus\{w_0\}$ such that $g_1=w_0^{-3}w_1^{3},...,g_n=w_0^{-3}w_n^{3}$.
Let $q$ with $1\leqslant q\leqslant{\frac{n}{12}}-1$ and assume that $g_1^{\varepsilon_1},...,g_q^{\varepsilon_q}$ are already determined. In other words, assume that $w_1,...,w_q$ and $\varepsilon_1,...,\varepsilon_q$ are determined. Our goal is to estimate the number of possible values of $g_{q+1}^{\varepsilon_{q+1}}\in\tilde T$.
Depending on the values of $\varepsilon_q,\varepsilon_{q+1}\in\{\pm1\}$ we are in one of four cases.
We assume that $\varepsilon_q=\varepsilon_{q+1}=1$, since the estimates in the other three cases are entirely similar. In this case, we have
$g=g_n^{\varepsilon_n}...g_{q+2}^{\varepsilon_{q+2}}(w_0^{-3}w_{q+1}^2)(w_{q+1}w_0^{-1})(w_0^{-2}w_q^3)g_{q-1}^{\varepsilon_{q-1}}...g_1^{\varepsilon_1}$. Let $j$ be such that $(w_0^{-2}w_q^3)g_{q-1}^{\varepsilon_{q-1}}...g_1^{\varepsilon_1}=k_jk_{j-1}...k_1$. Then $j$ and $k_1,...,k_j$ are determined. Note that $j\leqslant 6q\ell$.
Write $w_0=r_1...r_{\ell}$, $w_{q+1}=s_1...s_{\ell}$, where $r_1,...,r_{\ell},s_1,...,s_{\ell}\in\tilde S$. Notice that $|w_{q+1}w_0^{-1}|_{S}$ is even and $2\leqslant |w_{q+1}w_0^{-1}|_{S}\leqslant 2\ell-2$.
Let $1\leqslant \ell'\leqslant \ell-1$ be such that $|w_{q+1}w_0^{-1}|_{S}=2\ell'$. Assume first that $\ell'$ is determined.
Then $s_{\ell'+1}=r_{\ell'+1},...,s_{\ell}=r_{\ell}$, hence $s_{\ell'+1},...,s_{\ell}$ are determined. Since $w_{q+1}w_0^{-1}=s_1...s_{\ell'}r_{\ell'}^{-1}...r_1^{-1}$, we get that $k_{j+1}=r_1^{-1},...,k_{j+\ell'}=r_{\ell'}^{-1}$ and $k_{j+\ell'+1}=s_{\ell'},...,k_{j+2\ell'}=s_1$. Hence $k_1,...,k_{j+\ell'}$ are determined.
As $j+2\ell'\leqslant 6\ell q+2(\ell-1)<N$ we get $\rho(k_{j+\ell'}...k_1)([v])\notin U_{k_{j+\ell'+1}}$. The beginning of the proof implies that $k_{j+\ell'+1}$ and hence $s_{\ell'}$ can take at most $2M-2$ values.
Moreover, if $k_{j+\ell'+1},...,k_{j+\ell'+p}$ are determined, for some $1\leqslant p\leqslant \ell'-1$, then since $\rho(k_{j+\ell'+p}...k_1)([v])\notin U_{k_{j+\ell'+p+1}}$, we deduce that $k_{j+\ell'+p+1}$ and therefore $s_{\ell'-p}$ can take at most $2M-2$ values. It follows that there are at most $(2M-2)^{\ell'}$ possibilities for $s_1,...,s_{\ell'}$.
We derive that in the case $\varepsilon_q=\varepsilon_{q+1}=1$, the total number of possible values of $w_{q+1}$ is at most ${\sum_{\ell'=1}^{\ell-1}(2M-2)^{\ell'}\leqslant (2M-2)^{\ell}}$. By adapting the above argument, it follows
that the number of possible values of $w_{q+1}$ is at most $(2M-2)^{\ell}$ in the other three cases as well. Altogether, we get that if $g_1^{\varepsilon_1},...,g_q^{\varepsilon_q}$ are determined and $1\leqslant q\leqslant{\frac{n}{12}}-1$, then $g_{q+1}^{\varepsilon_{q+1}}$ can take at most $4(2M-2)^{\ell}$ values.
Let $q=\lfloor{\frac{n}{12}}\rfloor$. Thus, if $g_1^{\varepsilon_1}$ is determined, then $g_2^{\varepsilon_2}...g_q^{\varepsilon_q}$ can take at most $[4(2M-2)^{\ell}]^{q-1}$ values.
As $g_1^{\varepsilon_1},g_{q+1}^{\varepsilon_{q+1}},...,g_{n}^{\varepsilon_{n}}$ can each take at most $2|T|$ values, we get that $|C|\leqslant [4(2M-2)^{\ell}]^{{\frac{n}{12}}}(2|T|)^{n+2-{\frac{n}{12}}}.$ This finishes the proof of Claim 4.
\hfill$\square$
\vskip 0.05in
{\it End of proof of Claim 2}.
By combining Claims 3 and 4, and using that $|T|\leqslant (2M-1)^{\ell-1}$, we get \begin{equation}\label{A}|A|\leqslant (2|T|)^{\frac{11 n}{12}+1}+
[4(2M-2)^{\ell}]^{{\frac{n}{12}}}(2|T|)^{n+2-{\frac{n}{12}}}\leqslant [(2M-2)^{\frac{1}{12}}(2M-1)^{\frac{11}{12}}]^{(n+24)(\ell+2)}\end{equation}
for all $n > 12$ and every $\ell\geqslant 1$. Equations \eqref{T} and \eqref{A} together imply that there exist $n_0 \geqslant 1$, $c_0>0$ and $\ell_0\geqslant 1$
such that $|A|\leqslant |T|^{(1-c_0)n}$, for all $n\geqslant n_0$ and every $\ell\geqslant\ell_0$. Since $|W_n(T)|=2|T|(2|T|-1)^{n-1}>|T|^{n}$, the conclusion of Claim 2 follows.
\hfill$\square$
\vskip 0.1in
{\bf Part 3: bounding the probability of return.}\label{part3}
\vskip 0.05in
Define ${\mu=\frac{1}{2|T|}\sum_{g\in T}(\delta_g+\delta_{g^{-1}})}$.
By using Part 2 and following closely the proof of \cite[Proposition 9]{Va10} (see also \cite[Proposition 7]{SGV11}) we next estimate $\mu^{*n}(\{g\in\Gamma|\rho_i(g)([v])=[v]\})$.
\vskip 0.05in
{\bf Claim 5.} There exist $n_1 \geqslant 1$ and $c > 0$ such that for every $i\in I$, $v\in V_i$ and $\ell\geqslant\ell_0$, we have that
\[\mu^{*n}(\{g\in\Gamma|\rho_i(g)([v])=[v]\})\leqslant |T|^{-cn}, \;\;\;\text{for all $n\geqslant n_1$}.\]
{\it Proof of Claim 5.} Denote $\rho=\rho_i$ and $A=\{g\in\Gamma|\rho(g)([v])=[v]\}$. Let $n\geqslant 10n_0$, where $n_0$ is as in Part 2. For every $k\geqslant 1$, fix $g_k\in W_k(T)$. Then $\mu^{*n}(\{g\})=\mu^{*n}(\{g_k\})$, for all $g\in W_k(T)$. Since $\mu^{*n}$ is supported on words of length at most $n$ in $T$, we get
\[\mu^{*n}(A)=\sum_{k=0}^n\mu^{*n}(A\cap W_k(T))=\sum_{k=0}^n|A\cap W_k(T)|\mu^{*n}(\{g_k\}).\]
Let us now majorize each of the terms involved. First, by Kesten's theorem \cite{Ke59} we have that
\[\mu^{*n}(\{g\})\leqslant {\Big(\frac{\sqrt{2|T|-1}}{|T|}\Big)}^{n},\;\;\text{for all $g\in\Gamma$}.\]
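Indeed, by Claim 1 the group $\langle T\rangle$ is free of rank $|T|$, and $\mu^{*n}(\{g\})=\langle\lambda(\mu)^n\delta_1,\delta_g\rangle\leqslant\Vert\lambda(\mu)\Vert^n$, where $\lambda$ denotes the left regular representation of $\langle T\rangle$ on $\ell^2(\langle T\rangle)$; since $\mu$ is the uniform measure on $\tilde T=T\cup T^{-1}$, Kesten's theorem identifies $\Vert\lambda(\mu)\Vert$ as $\frac{\sqrt{2|T|-1}}{|T|}$.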
Moreover, we deduce from Part 2 that for $n \geqslant 10n_0$, we have that
\[|A\cap W_k(T)| \leqslant |W_k(T)|^{1-c_0} \leqslant (2|T|-1)^{-\frac{c_0n}{10}}|W_k(T)| \text{ for all } k \geqslant n/10.\]
When $k < n/10$, we use the trivial bound $|A\cap W_k(T)| \leqslant |W_k(T)| \leqslant (2|T|)^k$. Altogether, we get
\begin{align*}
\mu^{*n}(A) &\leqslant\sum_{1\leqslant k<\frac{n}{10}}(2|T|)^k{\Big(\frac{\sqrt{2|T|-1}}{|T|}\Big)}^{n}+(2|T|-1)^{-\frac{c_0n}{10}}\sum_{\frac{n}{10}\leqslant k\leqslant n}|W_k(T)|\mu^{*n}(\{g_k\})\\
&\leqslant\sum_{1\leqslant k<\frac{n}{10}} (2|T|)^k{\Big(\frac{\sqrt{2|T|-1}}{|T|}\Big)}^{n}+(2|T|-1)^{-\frac{c_0n}{10}}\\
&\leqslant (2|T|)^{\frac{n}{10}}{\Big(\frac{\sqrt{2|T|-1}}{|T|}\Big)}^{n}+(2|T|-1)^{-\frac{c_0n}{10}}.
\end{align*}
The conclusion of Claim 5 is now immediate.
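To spell this out, note that $\sqrt{2|T|-1}\leqslant\sqrt{2|T|}$ and $2|T|-1\geqslant|T|$ give
\[(2|T|)^{\frac{n}{10}}{\Big(\frac{\sqrt{2|T|-1}}{|T|}\Big)}^{n}\leqslant\Big(\frac{2^{3/5}}{|T|^{2/5}}\Big)^{n}\;\;\;\text{and}\;\;\;(2|T|-1)^{-\frac{c_0n}{10}}\leqslant|T|^{-\frac{c_0n}{10}},\]
so both terms on the right-hand side are bounded by $|T|^{-cn}$ for a suitable constant $c>0$, once $|T|$ is large enough (which, by \eqref{T}, holds after possibly enlarging $\ell_0$) and $n\geqslant n_1$.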
\hfill$\square$
\vskip 0.1in
{\bf Part 4: end of the proof.}
\vskip 0.05in
We are now ready to conclude the proof.
Let $n_1,c,\ell_0$ be given as above and let $C$ be the constant given by Proposition \ref{neigh}. Since $M\geqslant 2$, we get that $(2M-1)^{\frac{12}{13}}\geqslant 3^{\frac{12}{13}}>e$.
By using \eqref{T}, and after taking a larger $\ell_0$, we may assume that $|T|\geqslant e^{\ell}$ and that $(4n+1)2^{(\kappa+1)n}\leqslant e^{\frac{n\ell}{4}}$, for any $n\geqslant 1$ and $\ell\geqslant\ell_0$.
\vskip 0.05in
{\bf Claim 6.} Let $\delta>0$ be small enough and $n$ be an integer such that $\frac{\log(1+\eta)}{7C}\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}\leqslant n\leqslant\frac{\log(1+\eta)}{6C}\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}$.
Then $\mu^{*n}(H^{(\delta)})\leqslant\delta^{\frac{\min\{c,\frac{1}{4}\}}{7C}}$, for every proper closed connected subgroup $H<G$.
\vskip 0.05in
{\it Proof of Claim 6}.
Fix $n$ as in the claim and let $H<G$ be a proper closed connected subgroup.
Thanks to Proposition \ref{neigh}, we can find a proper closed subgroup $H'<G$ such that \[W_{\leqslant 6n\ell}(S)\cap H^{(e^{-6Cn\ell})}\subset H'.\]
Let $g\in$ supp$(\mu^{*n})\cap H^{(\delta)}$. Then $g\in W_{\leqslant n}(T)$ and, since $T\subset W_{\leqslant 6\ell}(S)$, we deduce that $g\in W_{\leqslant 6n\ell}(S)$. Since $\varepsilon=(1+\eta)^{-\ell}$, so that $\ell=\frac{\log{\frac{1}{\varepsilon}}}{\log(1+\eta)}$, the assumption on $n$ implies that $\delta\leqslant e^{-6Cn\ell}$.
By using the previous paragraph, we derive that $g\in H'$.
Since $\mu$ is supported on $T$, we also have $g\in\langle T\rangle$.
Denoting $\Gamma_0=\langle T\rangle\cap H'$, we therefore get that
\begin{equation}\label{muneigh}\mu^{*n}(H^{(\delta)})\leqslant\mu^{*n}(\Gamma_0). \end{equation}
We continue by treating two separate cases:
{\bf Case 1}. $\Gamma_0$ is non-discrete in $G$.
In this case, since $\Gamma_0\subset\Gamma\cap H'$, we get that $\Gamma\cap H'$ is non-discrete. Theorem \ref{ping-pong} implies the existence of $i\in I$ and $[v]\in\mathbb P(V_i)$ such that $\rho_i(g)([v])=[v]$, for all $g\in\Gamma\cap H'$.
Since $|T|\geqslant e^{\ell}$, by combining \eqref{muneigh} with Part 3, we get that for $\delta>0$ small enough so that $n\geqslant n_1$,
\begin{equation}\label{est1}\mu^{*n}(H^{(\delta)})\leqslant\mu^{*n}(\Gamma\cap H')\leqslant \mu^{*n}(\{g\in\Gamma|\rho_i(g)([v])=[v]\})\leqslant |T|^{-cn}\leqslant e^{-cn\ell}.\end{equation}
Since $n \geqslant \frac{\log{(1+\eta)}\log{\frac{1}{\delta}}}{7C\log{\frac{1}{\varepsilon}}}$, we get $n\ell\geqslant\frac{\log{\frac{1}{\delta}}}{7C}$. This implies that $e^{-cn\ell}\leqslant\delta^{\frac{c}{7C}}$, proving the claim.
{\bf Case 2.} $\Gamma_0$ is discrete in $G$.
In this case, by the definition of $U$, we have that $\Gamma_1:=\langle\Gamma_0\cap U\rangle$ is a nilpotent group.
Since $\Gamma_1<\langle T\rangle$ and $\langle T\rangle$ is a free group, $\Gamma_1$ must be a cyclic group. As a consequence, we have that $$|\Gamma_0\cap U\cap\text{supp}(\mu^{*2n})|\leqslant |\Gamma_1\cap\text{supp}(\mu^{*2n})|=|\Gamma_1\cap W_{\leqslant 2n}(T)|\leqslant 4n+1.$$
Next, if $N=\lfloor (1+\varepsilon)^{\kappa n}\rfloor$, then by (\ref{a}) we can find $g_1,...,g_N\in G$ such that $B_{(1+\varepsilon)^n}(1)\subset\cup_{i=1}^Ng_iU_0$. Since supp$(\mu^{*n})\subset B_{(1+\varepsilon)^n}(1)$, we thus get that $\Gamma_0\cap\text{supp}(\mu^{*n})\subset\cup_{i=1}^N(\Gamma_0\cap g_iU_0\cap\text{supp}(\mu^{*n})).$ Recall that $U_0^{-1}U_0\subset U$. So, if $1\leqslant i\leqslant N$ and $x,y\in \Gamma_0\cap g_iU_0\cap\text{supp}(\mu^{*n})$, then $x^{-1}y\in \Gamma_0\cap U\cap\text{supp}(\mu^{*2n})$. This implies that $|\Gamma_0\cap g_iU_0\cap\text{supp}(\mu^{*n})|\leqslant |\Gamma_0\cap U\cap\text{supp}(\mu^{*2n})|\leqslant 4n+1$, for every $1\leqslant i\leqslant N$.
Altogether, we get that $|\Gamma_0\cap\text{supp}(\mu^{*n})|\leqslant (4n+1)N\leqslant (4n+1)(1+\varepsilon)^{\kappa n}.$
In combination with Kesten's theorem, we derive that $$\mu^{*n}(\Gamma_0)\leqslant (4n+1)(1+\varepsilon)^{\kappa n}{\Big(\frac{\sqrt{2|T|-1}}{|T|}\Big)}^{n}.$$
Since $|T|\geqslant e^{\ell}$, we get $\frac{\sqrt{2|T|-1}}{|T|}\leqslant\frac{2}{\sqrt{|T|}}\leqslant 2e^{-\frac{\ell}{2}}$. Since $(4n+1)(1+\varepsilon)^{\kappa n}2^n\leqslant (4n+1)2^{(\kappa+1)n}\leqslant e^{\frac{n\ell}{4}}$, by using \eqref{muneigh} we conclude that \begin{equation}\label{est2}\mu^{*n}(H^{(\delta)})\leqslant\mu^{*n}(\Gamma_0)\leqslant e^{-\frac{n\ell}{4}}.\end{equation}
Since $n\geqslant\frac{\log(1+\eta)}{7C}\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}=\frac{\log{\frac{1}{\delta}}}{7C\ell}$, by combining \eqref{est1} and \eqref{est2} we get that $$\mu^{*n}(H^{(\delta)})\leqslant e^{-\min\{c,\frac{1}{4}\}n\ell}\leqslant e^{-\frac{\min\{c,\frac{1}{4}\}}{7C}\log{\frac{1}{\delta}}}=\delta^{\frac{\min\{c,\frac{1}{4}\}}{7C}},$$
which proves Claim 6. \hfill$\square$
Finally, put $d_1 = \frac{\min\{c,\frac{1}{4}\}}{7C}$ and $d_2 = \frac{\log(1+\eta)}{12C}.$
Then Claim 6 implies that $d_1,d_2 > 0$ satisfy the conclusion of Theorem \ref{escape}.
\hfill$\blacksquare$
\vskip 0.1in
\section{$\ell^2$-flattening}
A key step in Bourgain and Gamburd's remarkable strategy \cite{BG05} for proving spectral gap is the so-called {\bf $\ell^2$-flattening lemma}. In \cite{BG06} and \cite{BG10}, Bourgain and Gamburd established a flattening lemma for probability measures on $SU(2)$ and $SU(d), d\geqslant 2$, respectively. Bourgain and Yehudayoff then proved a flattening lemma for probability measures on $SL_2(\mathbb R)$ whose support is large but ``controlled" \cite{BY11}. All of these results rely on {\bf product theorems} for the respective Lie groups.
In an important recent development, de Saxc\'{e} obtained a product theorem for arbitrary connected simple Lie groups \cite{dS14}.
This allowed Benoist and de Saxc\'{e} \cite{BdS14} to extend the flattening lemmas of \cite{BG06,BG10} to any compact connected simple Lie group.
In this section, we first note that the product theorem of \cite{dS14} allows one to derive a flattening lemma in the spirit of \cite[Lemma 4.1]{BY11} for arbitrary connected simple Lie groups.
\begin{lemma}[$\ell^2$-flattening, \cite{BdS14}]\label{BdS} Let $G$ be a connected simple Lie group with trivial center.
Given $\alpha,\kappa>0$, there exist $\beta,\gamma>0$ such that the following holds for any $\delta>0$ small enough.
Suppose that $\mu$ is a symmetric Borel probability measure on $G$ such that
\begin{enumerate}
\item supp$(\mu)\subset B_{\delta^{-\beta}}(1)$,
\item $\|\mu*P_{\delta}\|_2\geqslant \delta^{-\alpha}$, and
\item $(\mu*\mu)(H^{(\rho)})\leqslant\delta^{-\gamma}\rho^{\kappa}$, for all $\rho\geqslant\delta$ and any proper closed connected subgroup $H<G$.
\end{enumerate}
Then $\|\mu*\mu*P_{\delta}\|_2\leqslant\delta^{\gamma}\|\mu*P_{\delta}\|_2.$
\end{lemma}
Lemma \ref{BdS} follows by adapting the proof of \cite[Lemma 2.5]{BdS14} in order to deal with non-compact Lie groups $G$ and measures $\mu$ with large controlled support (in the sense of (1)). Nevertheless, for completeness, we include the details of the proof in the Appendix.
For now, we assume this lemma, and continue towards proving our main results.
More precisely, by applying Lemma \ref{BdS} repeatedly we obtain:
\begin{corollary}\label{flattening}
Let $G$ be a connected simple Lie group with trivial center, and $d_1,d_2 > 0$ be given.
Then for every $\alpha>0$, there exist $\varepsilon_0>0$ and $c_0>0$ such that the following holds.
Let $0<\varepsilon<\varepsilon_0$ and $\mu$ be a Borel probability measure on $G$ such that
supp$(\mu)\subset B_{\varepsilon}(1)$. Assume that
for any $\delta>0$ small enough we have $\mu^{*2n}(H^{(\delta)})\leqslant\delta^{d_1}$, for any proper connected closed subgroup $H<G$, where $n=\Big\lfloor{d_2\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}}\Big\rfloor$.
Then for any $\delta>0$ small enough we have $\|\mu^{*n}*P_{\delta}\|_2\leqslant\delta^{-\alpha}$, for any integer $n\geqslant c_0\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}.$
\end{corollary}
{\it Proof.} Let $\alpha>0$. By Lemma \ref{BdS} there are $\beta,\gamma>0$ such that for any $\delta>0$ small enough the following holds: if $\nu$ is a symmetric Borel probability measure on $G$ which satisfies
\begin{itemize}
\item [(a)] supp$(\nu)\subset B_{\delta^{-\beta}}(1)$, and
\item [(b)] $(\nu*\nu)(H^{(\rho)})\leqslant\delta^{-\gamma}\rho^{\frac{d_1}{4}}$, for all $\rho\geqslant\delta$ and any proper closed connected subgroup $H<G$, \end{itemize}
then either $\|\nu*P_{\delta}\|_2\leqslant\delta^{-\alpha}$, or
$\|\nu*\nu*P_{\delta}\|_2\leqslant\delta^{\gamma}\|\nu*P_{\delta}\|_2.$
We first claim that there is a constant $C>1$ depending only on $G$ such that the following holds. Let $\rho\in (0,1)$, $R>2$, $a,b,x\in B_R(1)$ and $h,k\in G$ such that $\|x^{-1}a-h\|_2\leqslant\rho$ and $\|x^{-1}b-k\|_2\leqslant\rho$. Then $\|b^{-1}a-k^{-1}h\|_2\leqslant R^C\rho$.
Indeed, the claim follows since there is a constant $c>1$ depending only on $G$ such that $\|y^{-1}\|_2\leqslant (\|y\|_2+1)^{c}$, for any $y\in G$, and we have that
\begin{align*}
\|b^{-1}a-k^{-1}h\|_2&=\|(x^{-1}b)^{-1}(x^{-1}a)-k^{-1}h\|_2\\&\leqslant\|x^{-1}\|_2\|a\|_2\|(x^{-1}b)^{-1}-k^{-1}\|_2+\|k^{-1}\|_2\|x^{-1}a-h\|_2\\&\leqslant \|x^{-1}\|_2\|a\|_2\|(x^{-1}b)^{-1}\|_2\|k^{-1}\|_2\|x^{-1}b-k\|_2+\|k^{-1}\|_2\|x^{-1}a-h\|_2.
\end{align*}
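Each of the factors $\Vert x^{-1}\Vert_2$, $\Vert a\Vert_2$, $\Vert(x^{-1}b)^{-1}\Vert_2$ and $\Vert k^{-1}\Vert_2$ appearing in the last line is bounded by a fixed power of $R$, because $a,b,x\in B_R(1)$, $\Vert x^{-1}b-k\Vert_2\leqslant\rho\leqslant 1$ and $\Vert y^{-1}\Vert_2\leqslant(\Vert y\Vert_2+1)^{c}$ for all $y\in G$; this yields the bound $R^{C}\rho$ for a suitable constant $C>1$ depending only on $G$.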
Let $k\geqslant 1$ be the smallest integer such that $\delta^{k\gamma}\|P_{\delta}\|_2\leqslant\delta^{-\alpha}$, for any $\delta>0$ small enough. Let $\varepsilon>0$ be small enough such that $\frac{2^kd_2}{4\log{\frac{1}{\varepsilon}}}<\frac{\min\{\beta,\frac{\gamma}{d_1C}\}}{\varepsilon}$. Let $\mu$ be a Borel probability measure on $G$ which is supported on $B_{\varepsilon}(1)$ and satisfies the hypothesis. The proof relies on the following:
{\bf Claim.} If $\delta>0$ is small enough and $n$ is an integer such that $\Big\lfloor{d_2\frac{\log{\frac{1}{\delta}}}{4\log{\frac{1}{\varepsilon}}}}\Big\rfloor\leqslant n\leqslant\min\{\beta,\frac{\gamma}{d_1C}\}\frac{\log{\frac{1}{\delta}}}{\varepsilon}$, then the measure $\nu=\mu^{*n}$ satisfies conditions (a) and (b).
{\it Proof of the claim.}
Since supp$(\nu)\subset B_{(1+\varepsilon)^n}(1)$ and $(1+\varepsilon)^n\leqslant [(1+\varepsilon)^{\frac{1}{\varepsilon}}]^{\beta\log{\frac{1}{\delta}}}<e^{\beta\log{\frac{1}{\delta}}}=\delta^{-\beta}$, we get that $\nu$ satisfies (a).
To verify (b), let $\rho\geqslant\delta$ and $H<G$ be a proper closed connected subgroup. We may assume that $\rho\leqslant\delta^{\frac{4\gamma}{d_1}}$, because otherwise $\delta^{-\gamma}\rho^{\frac{d_1}{4}}>1$ and (b) is trivially satisfied.
Let $m=\Big\lfloor{d_2\frac{\log{\frac{1}{\rho^{\frac{1}{2}}}}}{\log{\frac{1}{\varepsilon}}}}\Big\rfloor=\Big\lfloor{d_2\frac{\log{\frac{1}{\rho}}}{2\log{\frac{1}{\varepsilon}}}}\Big\rfloor$. Then $m\leqslant 2n$ and the hypothesis implies that
\[\mu^{*2m}(H^{(\rho^{\frac{1}{2}})})\leqslant\rho^{\frac{d_1}{2}}.\]
For $x\in G$, denote $A_x=xH^{(\rho)}\cap$ supp$(\mu^{*m})$. Since $\mu^{*2n}=\mu^{*(2n-m)}*\mu^{*m}$, we have
\begin{equation}\label{nu_1}
\nu*\nu(H^{(\rho)})=\mu^{*2n}(H^{(\rho)})\leqslant\sup_{x\in \text{supp}(\mu^{*(2n-m)})}\mu^{*m}(xH^{(\rho)})=\sup_{x\in \text{supp}(\mu^{*(2n-m)})}\mu^{*m}(A_x).
\end{equation}
Further, since $\mu^{*m}$ is symmetric, Lemma \ref{A^{-1}A} implies that
\begin{equation}\label{nu_2}
\mu^{*m}(A_x)\leqslant\mu^{*2m}(A_x^{-1}A_x)^{\frac{1}{2}},\;\;\;\;\text{for any $x\in G$}.
\end{equation}
Let $x\in$ supp$(\mu^{*(2n-m)})$ and $a,b\in A_x$. Since supp$(\mu^{*k})\subset B_{(1+\varepsilon)^k}(1)$, for any $k\geqslant 1$, we have that $a,b,x\in B_{(1+\varepsilon)^{2n}}(1)$. By the definition of $A_x$, we can find $h,k\in H$ such that $\|x^{-1}a-h\|_2\leqslant\rho$ and $\|x^{-1}b-k\|_2\leqslant\rho$. The earlier claim implies that $\|b^{-1}a-k^{-1}h\|_2\leqslant (1+\varepsilon)^{2Cn}\rho$.
Since $n<\frac{\gamma}{d_1C}\frac{\log{\frac{1}{\delta}}}{\varepsilon}$, we get that $(1+\varepsilon)^{2Cn}<e^{2Cn\varepsilon}<\delta^{-\frac{2\gamma}{d_1}}\leqslant\rho^{-\frac{1}{2}}.$ Thus $\|b^{-1}a-k^{-1}h\|_2\leqslant\rho^{\frac{1}{2}}$.
Since $k^{-1}h\in H$ and $a,b\in A_x$ are arbitrary, we deduce that $A_x^{-1}A_x\subset H^{(\rho^{\frac{1}{2}})}$.
By combining \eqref{nu_1} and \eqref{nu_2} we therefore derive that $$\nu*\nu(H^{(\rho)})\leqslant\mu^{*2m}(H^{(\rho^{\frac{1}{2}})})^{\frac{1}{2}}\leqslant\rho^{\frac{d_1}{4}},$$
which finishes the proof of the claim.
\hfill$\square$
Let $\delta>0$ and put $n_0=\Big\lfloor{d_2\frac{\log{\frac{1}{\delta}}}{4\log{\frac{1}{\varepsilon}}}}\Big\rfloor$ and $n_1=2^kn_0$.
We claim that $\|\mu^{*n_1}*P_{\delta}\|_2\leqslant\delta^{-\alpha}$, for any small enough $\delta>0$.
Once this claim is proven, the conclusion follows for $c_0=2^{k-2}d_2$ since $n_1\leqslant 2^{k-2}d_2\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}$ and $\|\mu^{*n}*P_{\delta}\|_2=\|\mu^{*(n-n_1)}*(\mu^{*n_1}*P_{\delta})\|_2\leqslant\|\mu^{*n_1}*P_{\delta}\|_2$, for any $n\geqslant n_1$.
Assume by contradiction that the claim is false and let $0\leqslant i\leqslant k$. Then $2^in_0\leqslant n_1$ and therefore $\|\mu^{*2^in_0}*P_{\delta}\|_2\geqslant\|\mu^{*n_1}*P_{\delta}\|_2>\delta^{-\alpha}$.
On the other hand,
$\Big\lfloor{d_2\frac{\log{\frac{1}{\delta}}}{4\log{\frac{1}{\varepsilon}}}}\Big\rfloor\leqslant 2^in_0\leqslant\min\{\beta,\frac{\gamma}{d_1C}\}\frac{\log{\frac{1}{\delta}}}{\varepsilon}$. The claim implies that $\mu^{*2^in_0}$ satisfies conditions (a) and (b). As $\|\mu^{*2^in_0}*P_{\delta}\|_2>\delta^{-\alpha}$ we must have $$\|\mu^{*2^{i+1}n_0}*P_{\delta}\|_2\leqslant\delta^{\gamma}\|\mu^{*2^in_0}*P_{\delta}\|_2,\;\;\;\text{for every $0\leqslant i\leqslant k$.}$$
By combining these inequalities we deduce that $\|\mu^{*n_1}*P_{\delta}\|_2\leqslant\delta^{k\gamma}\|\mu^{*n_0}*P_{\delta}\|_2\leqslant\delta^{k\gamma}\|P_{\delta}\|_2\leqslant\delta^{-\alpha},$
which is a contradiction.
\hfill$\blacksquare$
\section{A mixing inequality}
The goal of this section is to prove an analogue for simple Lie groups of the well-known mixing inequality for quasirandom finite groups (see \cite[Proposition 1.3.7]{Ta15}). In the next section, we will combine this mixing inequality with Corollary \ref{flattening} and a Littlewood-Paley decomposition on simple Lie groups to deduce Theorem \ref{restricted}.
\begin{theorem}[mixing inequality]\label{rho}
Let $G$ be a connected simple Lie group with trivial center. Denote by $d$ the dimension of $G$, and let $B \subset G$ be a measurable set with compact closure.
Then there exist constants $a,b,\kappa>0$ such that for every $F\in L^2(B)$ with $\|F\|_2=1$, we have
$$\|f*F\|_2^{16d}\leqslant a\|P_{\delta}*F\|_2 + b\delta^\kappa,$$
for all $f\in L^2(G)$ with $\|f\|_2=1$ and all $0<\delta<1.$
\end{theorem}
This result and its proof are inspired by \cite[Lemma 10.35]{BG10}, which dealt with the case $G=SU(d)$, for $d\geqslant 2$. In particular, we borrow from \cite{BG10} the idea of reducing to functions $F$ that satisfy an additional ``symmetry'', i.e.\ are eigenvectors for a maximal torus of $G$. This reduction is crucial, as it will allow us to exploit certain cancellations appearing in the integrals.
Turning to the proof of Theorem \ref{rho}, we start with a classical lemma which can be easily deduced from \cite[Section 2]{RS88}. We denote by $C_{\text{c}}^1(G)$ the space of compactly supported $C^1$-functions on $G$.
\begin{lemma}
\label{LieRC}
Let $G$ and $H$ be two Lie groups of dimensions $n$ and $m$. Assume that $n \geqslant m$. Consider an analytic function $\phi : G \to H$ such that the derivative $d\phi_x:\mathfrak{g} \to \mathfrak{h}$ has rank $m$, at almost every point $x \in G$. Let $\psi \in C_{\text{c}}^1(G)$ and denote by $\mu=\phi_*(\psi \cdot dm_G)$ the push-forward measure of the measure $\psi \cdot dm_G$ on $G$ through $\phi$.
Then $\mu$ is absolutely continuous with respect to $m_H$, and the Radon-Nikodym derivative $\rho:H \to \mathbb{R}$ is {\it$L^1$-H\"older}: there exist $\alpha > 0$ and $C > 0$ such that
\[\int_H \vert \rho(g^{-1}h) - \rho(h) \vert\text{d}h \leqslant C\Vert g - 1\Vert_2^\alpha, \;\;\;\;\text{for every $g \in H$}.\]
\end{lemma}
By applying this lemma, we obtain the following:
\begin{lemma}
\label{abs}
Let $G$ be a connected simple Lie group and $H<G$ be a connected compact Lie subgroup of dimension $1$. Define $\pi:G\times H^2\rightarrow G$ by letting $\pi(g,t_1,t_2)=t_1gt_1^{-1}t_2g^{-1}t_2^{-1}$, for all $g\in G, t_1,t_2\in H$. Let $\psi \in C_{\text{c}}^1(G)$ and define $\nu=\pi_*((\psi \cdot dm_G)\times m_{H} \times m_H)$.
Then $\nu^{*n}$ is absolutely continuous with respect to $m_G$, and the corresponding Radon-Nikodym derivative is $L^1$-H\"older, for every integer $n\geqslant$ dim$(G)$.
\end{lemma}
{\it Proof.}
Let $n\geqslant 1$.
Then $\nu^{*n}=\pi^{(n)}_*({(\psi \cdot dm_G)}^n\times m_H^{2n})$, where $\pi^{(n)}:G^n\times H^{2n}\rightarrow G$ is defined as
\[\pi^{(n)}(g_1,...,g_n,t_1,...,t_{2n})=\prod_{i=1}^n\pi(g_i,t_{2i-1},t_{2i})=\prod_{i=1}^n(t_{2i-1}g_it_{2i-1}^{-1}t_{2i}g_i^{-1}t_{2i}^{-1}).\]
By Lemma \ref{LieRC}, we only have to check that the derivative of the analytic function $\pi^{(n)}$ has rank $d$, at almost every point, as soon as $n \geqslant d := \dim(G)$.
Fix $n \geqslant d$. Let $\frak g$ and $\frak h$ be the Lie algebras of $G$ and $H$, respectively. Let Ad$:G\rightarrow GL(\frak g)$ be the adjoint representation of $G$. Since dim$(H)=1$ and $H$ is connected, there is $b\in\frak g$ such that $\frak h=\{ub \, \vert \, u \in \mathbb{R}\}$ and $H=\{\exp(ub) \, \vert \, u \in \mathbb{R}\}$.
Let $X_n$ be the set of $(g_1,...,g_n,t_1,...,t_{2n})\in G^n\times H^{2n}$ such that the following set spans $\frak g$:
\[\{\text{Ad}(\prod_{j=1}^{i-1}\pi(g_j,t_{2j-1},t_{2j}))(b)-\text{Ad}((\prod_{j=1}^{i-1}\pi(g_j,t_{2j-1},t_{2j}))t_{2i-1}g_it_{2i-1}^{-1})(b)\;|\; 1\leqslant i\leqslant n\}.\]
{\bf Claim 1.}
$\operatorname{rk}(d(\pi^{(n)})_x)=d$, for every $x\in X_n$.
{\it Proof of Claim 1.} Take $x=(g_1,...,g_n,t_1,...,t_{2n})\in X_n$. Proving the claim amounts to showing that the map $\tilde \pi_n : y \mapsto \pi^{(n)}(y)\pi^{(n)}(x)^{-1}$ is such that $d(\tilde \pi_n)_x$ has rank $d$. For all $1 \leqslant i \leqslant n$, define a map $\varphi_i : \mathbb{R} \to G$ by the formula
\[\varphi_i(u)= \tilde \pi_n(g_1,...,g_n,t_1,...,t_{2i - 2},\exp(ub)t_{2i-1},t_{2i},..., t_{2n}).\]
The derivative $\varphi_i'(0) \in \frak g$ belongs to the range of the derivative $d(\tilde \pi_n)_x$, while an easy computation gives that
\[\varphi_i'(0) = \text{Ad}(\prod_{j=1}^{i-1}\pi(g_j,t_{2j-1},t_{2j}))(b)-\text{Ad}((\prod_{j=1}^{i-1}\pi(g_j,t_{2j-1},t_{2j}))t_{2i-1}g_it_{2i - 1}^{-1})(b).\]
Since $x\in X_n$, the set $\{\varphi_i'(0) \, \vert \, 1 \leqslant i \leqslant n \}$ spans $\frak g$, and $d(\tilde \pi_n)_x$ is therefore onto.
\hfill$\square$
{\bf Claim 2.} $X_n$ is a nonempty Zariski open subset of $G^n\times H^{2n}$, for every $n\geqslant d$.
{\it Proof of Claim 2.} Since $X_n$ is clearly Zariski open, for every $n\geqslant 1$, it remains to argue that $X_n$ is nonempty, whenever $n\geqslant d$. Since $G$ is simple, $\frak g$ is the only non-trivial Ad$(G)$-invariant subspace of $\frak g$. Thus, the span of $\{\text{Ad}(g)(b)-\text{Ad}(h)(b)|g,h\in G\}$ is equal to $\frak g$. Equivalently, we derive that the span of $\{\text{Ad}(g)(b)-b|g\in G\}$ is also equal to $\frak g$.
We can therefore find $g_1,...,g_d\in G$ such that $\{\text{Ad}(g_i)(b)-b|1\leqslant i\leqslant d\}$ spans $\frak g$. Define $g_{d+1}=...=g_n=t_1=...=t_{2n}=1$. Then it is clear that $(g_1,...,g_n,t_1,...,t_{2n})\in X_n$, which shows that $X_n$ is nonempty, as claimed. \hfill$\square$
Finally, if $n\geqslant d$, then Claim 2 implies that $X_n$ is a co-null subset of $G^n\times H^{2n}$.
\hfill$\blacksquare$
\vskip 0.1in
We are now ready to prove Theorem \ref{rho}.
{\it Proof of Theorem \ref{rho}.} Let $F\in L^2(B)$ and $f\in L^2(G)$ with $\|f\|_2=1$. Since $F*\check{F}$ is supported on $BB^{-1}$, we have
$\|f*F\|_2^2=\langle f*F,f*F\rangle=\langle\check{f}*f,F*\check{F}\rangle\leqslant \|\check{f}*f\|_{2,BB^{-1}}\|F*\check{F}\|_2$. Since $\|\check{f}*f\|_{\infty}\leqslant 1$, we get that $\|\check{f}*f\|_{2,BB^{-1}}\leqslant |BB^{-1}|^{1/2}$. Moreover, for every $g\in G$, we have that $F*\check{F}(g)=\int_G\overline{F(g^{-1}x)}F(x)\;\text{d}x=\overline{\langle\lambda_g(F),F\rangle}.$
By putting these facts together, we get that $$\|f*F\|_2^{16d}\leqslant |BB^{-1}|^{4d}(\int_G|\langle\lambda_g(F),F\rangle|^2\;\text{d}g)^{4d}.$$
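In more detail, $\Vert F*\check{F}\Vert_2^2=\int_G|\langle\lambda_g(F),F\rangle|^2\;\text{d}g$, so the displayed bound is obtained by raising the inequality $\|f*F\|_2^2\leqslant|BB^{-1}|^{1/2}\Vert F*\check{F}\Vert_2$ to the power $8d$.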
Thus, the conclusion reduces to proving the following:
$(*)$ there exist constants $a,b,\kappa>0$ such that for every $F\in L^2(B)$ with $\|F\|_2=1$, we have that $$\big(\int_G|\langle\lambda_g(F),F\rangle|^2\;\text{d}g\big)^{4d}\leqslant a\|P_{\delta}*F\|_2+b\delta^{\kappa},\;\;\;\;\text{for all $0<\delta<1$.}$$
To this end, we fix a compact connected Lie subgroup $H$ of $G$ with dimension $1$.
Below, we denote by $x,y,z,g$ elements of $G$ and by $t,t_1,t_2$ elements of $H$.
Writing $\text{d}x$ (respectively, $\text{d}t$) will refer to integration against the Haar measure of $G$ (respectively, $H$).
Let $\widetilde B\subset G$ be an open set with compact closure which contains $B^{-1}B$. Let $\psi \in C_{\text{c}}^1(G)$ be a non-negative function which is equal to $1$ on $\widetilde B$.
Define $\pi:G\times H^2\rightarrow G$ by $\pi(x,t_1,t_2)=t_1xt_1^{-1}t_2x^{-1}t_2^{-1}$, for all $x\in G$ and $t_1,t_2\in H$. Let $\nu=\pi_*((\psi \cdot dm_G) \times m_H^2)$. Lemma \ref{abs} implies that $\nu^{*d}$ is absolutely continuous with respect to $m_G$ and the corresponding Radon-Nikodym derivative $\rho$ is $L^1$-H\"older. In other words, there exist $\kappa > 0$ and $C > 0$ such that
\[\int_G \vert \rho(g^{-1}h) - \rho(h) \vert dh \leqslant C\Vert g - 1\Vert_2^{2\kappa}, \;\; \forall g \in G.\]
For $x\in G$, we define an operator $R_x:L^2(G)\rightarrow L^2(G)$ by the formula
\[(R_xf)(z)=\int_{H}f(ztxt^{-1})\;\text{d}t, \; f\in L^2(G), \; z\in G.\]
{\bf Claim 1}. For every $f\in L^2(G)$, we have $\displaystyle{\int_{\widetilde{B}}\|R_x(f)\|_2^2\;\text{d}x} \leqslant \Vert f*\nu \Vert_2\Vert f \Vert_2$.
\begin{proof}[Proof of Claim 1]
Let $f\in L^2(G)$.
Since $(R_x^*R_xf)(z) =\displaystyle{\int_{H^2}}f(zt_1x^{-1}t_1^{-1}t_2xt_2^{-1})\;\text{d}t_1\;\text{d}t_2,$ the claim follows from the following calculation
\begin{align*}
\int_{\widetilde{B}}\|R_x(f)\|_2^2\;\text{d}x \leqslant \int_{G}\|R_x(f)\|_2^2\psi(x)\;\text{d}x & = \int_{G} \langle R_x^*R_x(f),f\rangle\psi(x)\; \text{d}x\\
&= \int_{G}\Big(\int_{G \times H^2}f(zt_1x^{-1}t_1^{-1}t_2xt_2^{-1})\; \psi(x)\;\text{d}x\;\text{d}t_1\;\text{d}t_2\Big)\overline{f(z)}\; \text{d}z\\
& =\int_{G}(f*\nu)(z)\overline{f(z)} \; \text{d}z \leqslant \Vert f*\nu \Vert_2\Vert f \Vert_2. \qedhere
\end{align*}
\end{proof}
Next, using that $\rho$ is $L^1$-H\"older, we deduce the following claim:
{\bf Claim 2}. There is $c>0$ such that $\|P_{\delta}*f*\rho-f*\rho\|_2\leqslant c\delta^{\kappa}\|f\|_2$, for all $f\in L^2(B)$ and $0<\delta<1$.
{\it Proof of Claim 2}.
Take $f \in L^2(B)$ and $\delta > 0$. Note that for $x \in G$, we have
\[(P_{\delta}*f*\rho-f*\rho)(x) = \frac{1}{\vert B_\delta\vert} \int_{B_\delta(1) \times B} f(z) (\rho(z^{-1}y^{-1}x) - \rho(z^{-1}x)) \; \d y \; \d z.\]
Using the Cauchy-Schwarz inequality and $L^1$-H\"older condition for $\rho$, we get that $\Vert P_{\delta}*f*\rho-f*\rho\Vert_2^2 $ is at most equal to
\begin{align*}
\frac{1}{\vert B_\delta\vert^2}\int_G \Big(\int_{B_\delta(1) \times B} \vert & f(z) \vert^2 \vert \rho(z^{-1}y^{-1}x) - \rho(z^{-1}x)\vert \; \d y \; \d z\Big)\Big(\int_{B_\delta(1) \times B} \vert \rho(z^{-1}y^{-1}x) - \rho(z^{-1}x)\vert \; \d y \; \d z\Big) \; \d x\\
& \leqslant \frac{2\Vert \rho \Vert_1}{\vert B_\delta\vert} \int_{G \times B_\delta(1) \times B} \vert f(z) \vert^2 \vert \rho(z^{-1}y^{-1}x) - \rho(z^{-1}x)\vert \; \d x \; \d y \; \d z\\
& \leqslant \frac{2\Vert \rho \Vert_1}{\vert B_\delta\vert} \int_B \vert f(z) \vert^2 \Big( \int_{G \times B_\delta(1)} \vert \rho(z^{-1}y^{-1}zx) - \rho(x)\vert \; \d x \; \d y\Big) \; \d z\\
& \leqslant 2C \Vert \rho \Vert_1 \int_B \vert f(z) \vert^2 \sup_{y \in B_\delta(1)} \Vert z^{-1}yz - 1 \Vert_2^{2\kappa} \; \d z \; \leqslant \; c^2\delta^{2\kappa} \Vert f \Vert_2^2,
\end{align*}
for some constant $c > 0$ independent of $f$ and $\delta$.
\hfill$\square$
\vskip 0.05in
Let $F\in L^2(B)$ with $\|F\|_2=1$.
The proof of $(*)$ splits into two cases.
{\bf Case 1}.
We first prove assertion $(*)$ in the following case:
there is a character $\eta:H\rightarrow\mathbb T$ such that for all $t \in H$, $F(xt)=\eta(t)F(x)$, for almost every $x\in G$.
Then for almost every $(x,y,t)\in G^2\times H$ we have that $F(xt)\overline{F(yt)}=F(x)\overline{F(y)}$.
By using this fact we get that
\begin{align*}
\int_{G}|\langle\lambda_g(F),F\rangle|^2\;\text{d}g & = \int_{G^3}F(g^{-1}x)\overline{F(x)}\,\overline{F(g^{-1}y)}F(y)\;\text{d}g\;\text{d}x\;\text{d}y\\
& = \int_{G^3}\int_{H}F(g^{-1}xt)\overline{F(x)}\,\overline{F(g^{-1}yt)}F(y)\;\text{d}g\;\text{d}x\;\text{d}y\;\text{d}t
\end{align*}
Using left invariance of the Haar measure and unimodularity on the $g$ and $y$ variables, we get
\begin{align*}
\int_{G}|\langle\lambda_g(F),F\rangle|^2\;\text{d}g & = \int_{G^3}\int_{H}F(gt^{-1}y^{-1}xt)\overline{F(x)}\,\overline{F(g)}F(y) \;\text{d}g\;\text{d}x\;\text{d}y\;\text{d}t\\
& = \int_{G^3}\int_{H}F(gt^{-1}y^{-1}t)\overline{F(x)}\,\overline{F(g)}F(xy) \;\text{d}g\;\text{d}x\;\text{d}y\;\text{d}t\\
& = \int_{G} (\check{F}*F)(y)\; \langle R_{y^{-1}}F,F\rangle\;\text{d}y = \int_{G} (\check{F}*F)(y)\; \langle F,R_yF\rangle\;\text{d}y
\end{align*}
Since $\|\check{F}*F\|_{\infty}\leqslant 1$ and $\check{F}*F\in L^2(\widetilde{B})$, we conclude from Claim 1 and Lemma \ref{powers} that
\begin{align*}
\int_{G}|\langle\lambda_g(F),F\rangle|^2\;\text{d}g\leqslant \int_{\widetilde{B}}\|R_x(F)\|_2\;\text{d}x &\leqslant |\widetilde{B}|^{1/2}\Big(\int_{\widetilde B}\|R_x(F)\|_2^2\;\text{d}x\Big)^{1/2}\\& \leqslant |\widetilde{B}|^{\frac{1}{2}}\|F*\nu\|_2^{\frac{1}{2}}\leqslant |\widetilde{B}|^{\frac{1}{2}}\|\nu\|^{\frac{1}{4}}\|F*\nu^{*d}\|_2^{\frac{1}{4d}}\\&=|\widetilde{B}|^{\frac{1}{2}}\|\nu\|^{\frac{1}{4}}\|F*\rho\|_2^{\frac{1}{4d}}.
\end{align*}
On the other hand, Claim 2 yields
\[\|F*\rho\|_2\leqslant \big(\|P_{\delta}*F*\rho\|_2+\|F*\rho-P_{\delta}*F*\rho\|_2\big)\leqslant \|\rho\|_1\|P_{\delta}*F\|_2+c\delta^\kappa.\]
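Here the first term is estimated by Young's inequality, $\Vert(P_{\delta}*F)*\rho\Vert_2\leqslant\Vert P_{\delta}*F\Vert_2\Vert\rho\Vert_1$, and the second term by Claim 2 together with $\Vert F\Vert_2=1$.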
Thus, if we let $a=|\widetilde{B}|^{2d}\|\nu\|^{d}\|\rho\|_1$ and $b=c|\widetilde{B}|^{2d}\|\nu\|^d$, the desired inequality $(*)$ follows in Case 1. Moreover, notice the crucial fact that $a$ and $b$ are independent of the character $\eta$.
{\bf Case 2.} We now prove $(*)$ for an arbitrary function $F\in L^2(B)$ with $\|F\|_2=1$.
Consider the unitary representation $H \curvearrowright^{\sigma} L^2(G)$ corresponding to the right multiplication action $H \curvearrowright G$.
Since $H$ is compact and abelian, we can decompose
\[L^2(G)=\underset{\eta\in\text{Char}(H)}\bigoplus\mathcal H_{\eta},\]
where $\mathcal H_{\eta}$ denotes the eigenspace of $\sigma$ corresponding to a character $\eta:H\rightarrow\mathbb T$.
Thus, we can decompose $F = \sum_\eta F_{\eta}$, where $F_\eta(xt)=\eta(t)F_\eta(x)$, for almost every $x\in G$, $t \in H$. Note that the functions $F_\eta$ do not necessarily belong to $L^2(B)$. However, $F_\eta$ belongs to the closure of the linear span of $\sigma(H)F$, and therefore to $L^2(BH)$, for every $\eta$.
By applying Case 1 with $BH$ instead of $B$, and using homogeneity, we get that there exist constants $a,b,\kappa>0$ (independent of $F$) such that for all $\eta \in \operatorname{Char}(H)$ we have
\[(\int_{G}|\langle\lambda_g(F_\eta),F_\eta\rangle|^2\;\text{d}g)^{4d} \leqslant a\|P_{\delta}*F_\eta\|_2\Vert F_\eta \Vert_2^{16d-1} + b\delta^\kappa \Vert F_\eta \Vert_2^{16d}.\]
Since all the norms on $\mathbb{R}^2$ are equivalent, we find $a' , b' > 0$ (only depending on $a,b,d$) such that for all $\eta\in \operatorname{Char}(H)$ we have that
\[(\int_{G}|\langle\lambda_g(F_\eta),F_\eta\rangle|^2\;\text{d}g)^{1/2} \leqslant a'\|P_{\delta}*F_\eta\|_2^{1/8d} \Vert F_\eta\Vert_2^{2 - 1/8d} + b'\delta^{\kappa/8d} \Vert F_\eta \Vert_2^{2}.\]
But since $\lambda_g(F_\eta) \in \mathcal{H}_\eta$ and $P_\delta \ast F_\eta \in \mathcal{H}_\eta$ for all $\eta$, by using the triangle inequality for $\|.\|_2$ and H\"older's inequality we get that
\begin{align*}
\Big(\int_{G}|\langle\lambda_g(F),F\rangle |^2\;\text{d}g\Big)^{1/2} & = \Big(\int_{G} \Big\vert \sum_\eta \langle\lambda_g(F_\eta),F_\eta\rangle \Big\vert^2\;\text{d}g\Big)^{1/2}\\ &\leqslant\sum_{\eta} \Big(\int_{G}|\langle\lambda_g(F_\eta),F_\eta\rangle|^2\;\text{d}g\Big)^{1/2}\\
& \leqslant \Big(\sum_\eta a'\|P_{\delta}*F_\eta\|_2^{1/8d} \Vert F_\eta\Vert_2^{2 - 1/8d}\Big) + b'\delta^{\kappa/8d}\\
& \leqslant a' \Big( \sum_\eta \Vert P_\delta \ast F_\eta \Vert_2^2 \Big)^{1/16d} \Big( \sum_\eta \Vert F_\eta \Vert_2^2 \Big)^{1 - 1/16d} + b'\delta^{\kappa/8d}\\
& = a' \Vert P_\delta \ast F \Vert_2^{1/8d} + b'\delta^{\kappa/8d}.
\end{align*}
Using again the equivalence of norms in $\mathbb{R}^2$ and modifying the values of $a$ and $b$ if necessary, the conclusion follows. \hfill$\blacksquare$
\section{Proofs of Theorem \ref{restricted} and Corollary \ref{by}}
\subsection{A Littlewood-Paley decomposition on Lie groups}
Let $G$ be a connected simple Lie group with trivial center.
In order to prove Theorem \ref{restricted}, we next introduce a Littlewood-Paley decomposition on $G$. This is analogous to the Littlewood-Paley decomposition on $G=SU(d)$ defined by Bourgain and Gamburd in \cite[Section 10]{BG10}.
As before, we endow $G$ with the $\|.\|_2$ metric and denote by $\mathcal C(G)$ the family of measurable subsets of $G$ with compact closure.
We define bounded linear operators $\Delta_i:L^2(G)\rightarrow L^2(G)$, $i\geqslant 0$, as follows
\begin{align*}
\Delta_0(F) & = P_{1/2} \ast F\\
\Delta_i(F) & = P_{2^{-(i+1)}} \ast F - P_{2^{-i}} \ast F, \text{ for all } i \geqslant 1.
\end{align*}
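Note that, by construction, the partial sums telescope:
\[\sum_{i=0}^{m}\Delta_i(F)=P_{2^{-(m+1)}}\ast F,\;\;\;\text{for every}\;m\geqslant 0,\]
so that $F=\sum_{i\geqslant 0}\Delta_i(F)$ in $L^2(G)$, since $\|P_{\delta}*F-F\|_2\rightarrow 0$ as $\delta\rightarrow 0$. This identity will be used in the proof of Theorem \ref{L-P} below.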
\begin{remark}
The decomposition $F=\sum_{i\geqslant 0}\Delta_i(F)$ is analogous to the classical Littlewood-Paley decomposition on $\mathbb R^n$, in the following sense.
For any $i\geqslant 0$, the function $\Delta_i(F)$ ``lives" at scale $2^{-i}$: it is essentially constant at scales $\ll 2^{-i}$ and essentially has mean zero on balls of radius $\gg 2^{-i}$.
\end{remark}
We now prove that the operators $\Delta_i, i\geqslant 0,$ yield an almost orthogonal decomposition of $L^2(G)$. This will allow us to reduce to functions living at an arbitrary small scale in the proof of restricted spectral gap Theorem \ref{restricted}.
\begin{theorem}\label{L-P}
There exists a constant $C > 0$ such that for all $F\in L^2(G)$ and any $\mu\in\mathcal M(G)$ with $\operatorname{supp}(\mu) \subset B_1(1)$, we have that
\begin{enumerate}
\item $\sum_{i \geqslant 0} \|\Delta_i(F)\|_2^2\leqslant C\|F\|_2^2$.
\item $\|\mu*F\|_2^2\leqslant C\sum_{i\geqslant 0}\|\mu*\Delta_i(F)\|_2^2$.
\item $\sum_{i \geqslant 0} 2^{i/2}\Vert P_{2^{-2i}} \ast \Delta_i(F) - \Delta_i(F) \Vert_2^2 \leqslant C\sum_{i \geqslant 0} \Vert \Delta_i(F)\Vert_2^2$.
\item $\sum_{i \geqslant 0} 2^{i/2}\Vert P_{2^{-i/2}} \ast \Delta_i(F) \Vert_2^2 \leqslant C\sum_{i \geqslant 0} \Vert \Delta_i(F)\Vert_2^2$.
\end{enumerate}
\end{theorem}
The first ingredient of the proof of Theorem \ref{L-P} is the following lemma. This lemma and its proof are a variation of \cite[Lemma 11]{KS71} due to Knapp and Stein.
\begin{lemma}[Cotlar-Stein]
\label{cotlar-stein}
Consider a Hilbert space $\mathcal H$ and bounded operators $T_i : \mathcal H \to\mathcal H$, $i \geqslant 0$. Assume that there exists $\varphi : \mathbb{Z} \to \mathbb{R}_+$ with $\Phi := \sum_{n \in \mathbb{Z}} \varphi(n) < \infty$ such that for all $i,j \geqslant 0$, we have $\Vert T_j^*T_i \Vert^{1/2} \leqslant \varphi(j-i)$ and $\Vert T_iT_j^* \Vert^{1/2} \leqslant \varphi(i-j)$. For $k\geqslant 0$, denote $\Phi_k:= \sum_{\vert n \vert \geqslant k} \varphi(n)$.
Then for all $\xi \in\mathcal H$ and all $k \geqslant 0$ we have
\[ \sum_{i,j : \, \vert i - j \vert \geqslant k} \vert \langle T_i\xi,T_j\xi \rangle \vert \, \leqslant \, \Phi_k \Phi \Vert \xi \Vert^2.\]
\end{lemma}
{\it Proof.}
Fix $\xi \in\mathcal H$ and $k \geqslant 0$. For every $i,j \geqslant 0$, we choose a scalar $\alpha_{i,j}$ in such a way that $\vert \langle T_i\xi,T_j\xi \rangle \vert= \alpha_{i,j} \langle T_i\xi,T_j\xi \rangle$, and that $\alpha_{i,j} = 0$ whenever $\vert \langle T_i\xi,T_j\xi \rangle \vert=0$.
Then for all $N \geqslant 0$, the operator $R_N := \sum_{0 \leqslant i,j \leqslant N: \; \vert i - j \vert \geqslant k} \alpha_{i,j}T_j^*T_i$ is self-adjoint.
In order to prove the lemma, it is sufficient to check that the operator norm of $R_N$ is at most $\Phi_k \Phi$, for all $N\geqslant 0$.
Take $N \geqslant 0$. Since $R_N$ is self-adjoint, $\Vert R_N \Vert^p = \Vert R_N^p \Vert$, for all integers $p\geqslant 1$. This leads to the estimate:
\[\Vert R_N \Vert^p \leqslant \sum_{0 \leqslant i_1,j_1,\dots,i_p,j_p \leqslant N: \, \vert i_l - j_l \vert \geqslant k, \, \forall 1\leqslant l\leqslant p} \Vert T_{j_1}^*T_{i_1}T_{j_2}^*T_{i_2}\cdots T_{j_p}^*T_{i_p}\Vert.\]
Since
the general term of this sum is bounded by the following two quantities
\begin{align*}
\Vert T_{j_1}^*T_{i_1}T_{j_2}^*T_{i_2}\cdots T_{j_p}^*T_{i_p}\Vert & \leqslant \Vert T_{j_1}^*T_{i_1}\Vert\Vert T_{j_2}^*T_{i_2}\Vert \cdots \Vert T_{j_p}^*T_{i_p}\Vert\;\;\;\text{and}\\
\Vert T_{j_1}^*T_{i_1}T_{j_2}^*T_{i_2}\cdots T_{j_p}^*T_{i_p}\Vert & \leqslant \Vert T_{j_1}^* \Vert \Vert T_{i_1}T_{j_2}^* \Vert \cdots \Vert T_{i_{p-1}}T_{j_p}^* \Vert \Vert T_{i_p}\Vert,
\end{align*}
we get that
\begin{align*}
\Vert R_N \Vert^p & \leqslant \sum_{0 \leqslant i_1,j_1,\dots,i_p,j_p \leqslant N: \, \vert i_l - j_l \vert \geqslant k, \, \forall 1\leqslant l\leqslant p} (\Vert T_{j_1}^* \Vert \Vert T_{j_1}^*T_{i_1}\Vert \Vert T_{i_1}T_{j_2}^*\Vert \cdots \Vert T_{j_p}^*T_{i_p}\Vert \Vert T_{i_p} \Vert)^{1/2}\\
& \leqslant N(\max_{0 \leqslant i \leqslant N} \Vert T_i \Vert)(\max_{1 \leqslant j \leqslant N}\sum_{1 \leqslant i \leqslant N: \, \vert i-j \vert \geqslant k} \Vert T_j^*T_i\Vert^{1/2})^p(\max_{0 \leqslant i \leqslant N}\sum_{1 \leqslant j \leqslant N} \Vert T_iT_j^*\Vert^{1/2})^{p-1}\\
& \leqslant N(\max_{0 \leqslant i \leqslant N} \Vert T_i \Vert)\Phi_k^p \Phi^{p-1}.
\end{align*}
Since $p\geqslant 1$ is arbitrary, we indeed get that $\Vert R_N \Vert \leqslant \Phi_k\Phi$.
\hfill$\blacksquare$
\begin{remark}
The case $k=0$ of Lemma \ref{cotlar-stein} recovers the classical Cotlar-Stein lemma (see \cite[Chapter VII]{St93}) which asserts that, under the same assumptions as above, the sum $\sum_{i \geqslant 0} T_i$ converges in the strong operator topology. Lemma \ref{cotlar-stein} also implies that the sum $\sum_{i \geqslant 0} \Vert T_i\xi\Vert^2$ is finite, for all $\xi \in\mathcal H$. Later on, we will use the following inequalities, which follow easily from Lemma \ref{cotlar-stein}
\begin{equation}\label{CS1}
\sum_{i \geqslant 0} \Vert T_i\xi\Vert^2 \leqslant \Phi^2\Vert \xi \Vert^2,
\end{equation}
\begin{equation}\label{CS2}
\Vert \sum_{i \geqslant 0} T_i\xi\Vert_2^2 \leqslant k \sum_{i \geqslant 0} \Vert T_i\xi\Vert^2 + \Phi_k\Phi\Vert \xi \Vert^2, \;\; \text{for all}\;\; k \geqslant 0.
\end{equation}
\end{remark}
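For instance, \eqref{CS1} is simply the case $k=0$ of Lemma \ref{cotlar-stein}: since $\Phi_0=\Phi$,
\[\sum_{i\geqslant 0}\Vert T_i\xi\Vert^2\leqslant\sum_{i,j\geqslant 0}\vert\langle T_i\xi,T_j\xi\rangle\vert\leqslant\Phi_0\Phi\Vert\xi\Vert^2=\Phi^2\Vert\xi\Vert^2.\]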
In order to prove Theorem \ref{L-P} we will also need the following lemma which allows us to quantify the ``orthogonality" between the operators $\Delta_i$, $i\geqslant 0$.
For a Borel probability measure $\mu\in\mathcal M(G)$ we denote by $T_{\mu}:L^2(G)\rightarrow L^2(G)$ the contractive operator given by $T_{\mu}(F)=\mu*F$.
\begin{lemma}
\label{almostorthogonal}
There exists a constant $C_0>0$ such that for any Borel probability measure $\mu\in\mathcal M(G)$ with $\operatorname{supp}(\mu) \subset B_1(1)$ we have that
\begin{align*}\|(P_{\delta_1}-P_{\delta_2}) \ast \mu \ast P_{\delta_3}\|_1\leqslant C_0\frac{\delta_2}{\delta_3},\;\;\text{for all}\;\; 0<\delta_1\leqslant\delta_2\leqslant\delta_3<1,\;\;\text{and} \\
\Vert \Delta_j^*T_\mu^*T_\mu \Delta_i \Vert \leqslant \frac{C_0}{2^{\vert i - j \vert}} \; \text{ and } \; \Vert T_\mu \Delta_i \Delta_j^* T_\mu^* \Vert\leqslant \frac{C_0}{2^{\vert i - j \vert}},\;\;\text{for all}\;\; i,j\geqslant 0.
\end{align*}
\end{lemma}
{\it Proof.}
Denote by $B_1$, $B_2$ and $B_3$ the balls centered at $1$ with respective radii $\delta_1$, $\delta_2$ and $\delta_3$.
Note that $\Vert (P_{\delta_1}-P_{\delta_2}) \ast \mu \ast P_{\delta_3}\Vert_1 \leqslant{\int_{G} \Vert (P_{\delta_1}-P_{\delta_2}) \ast \delta_x \ast P_{\delta_3}\Vert_1\;\text{d}\mu(x)}$. So it suffices to prove the first inequality for Dirac measures $\mu = \delta_x$, with $x \in B_1(1)$.
Fix $x \in B_1(1)$. Then for all $y \in G$, we have $\vert(P_{\delta_1}-P_{\delta_2}) \ast \delta_x \ast P_{\delta_3}(y)\vert \leqslant \|P_{\delta_1}-P_{\delta_2}\|_1/ |B_3| \leqslant 2/ |B_3|$. Let us now bound the measure of the support of $(P_{\delta_1}-P_{\delta_2}) \ast \delta_x \ast P_{\delta_3}$. One easily checks that this support is contained in $B_2xB_3 \cap B_2x(G \setminus B_3)$.
Firstly, if $y \in B_2xB_3$, we write $y = axb$, where $a \in B_2$, $b \in B_3$. Then we have that $\Vert y \Vert_2 \leqslant 8$ and
\begin{align*}
\Vert x^{-1}y - 1 \Vert_2 \leqslant \Vert x^{-1}y - b \Vert_2 + \delta_3
& \leqslant \Vert x^{-1} \Vert_2 \Vert y - xb \Vert_2 + \delta_3\\
& \leqslant \Vert x^{-1} \Vert_2 \Vert xb \Vert_2 \delta_2 + \delta_3 \leqslant C_1 \delta_2 + \delta_3,
\end{align*}
where $C_1>0$ is independent of $x \in B_1(1)$.
Secondly, if $y\in B_2x(G\setminus B_3)$, we write $y = a'xb'$, where $a'\in B_2$ and $b' \notin B_3$. Then we see that $\Vert xb' \Vert_2 \leqslant \Vert a'^{-1} \Vert_2 \Vert y \Vert_2 \leqslant 8\Vert a'^{-1} \Vert_2$ and
\begin{align*}
\Vert x^{-1}y - 1 \Vert_2 \geqslant \Vert b' - 1 \Vert_2 - \Vert x^{-1}y - b' \Vert_2 & \geqslant \delta_3 - \Vert x^{-1} \Vert_2\Vert y - xb' \Vert_2\\
& \geqslant \delta_3 - \Vert x^{-1} \Vert_2\Vert xb' \Vert_2\delta_2 \geqslant \delta_3 - C_2\delta_2,
\end{align*}
where $C_2 > 0$ is independent of $x\in B_1(1)$.
Therefore, the support of $(P_{\delta_1}-P_{\delta_2}) \ast \delta_x \ast P_{\delta_3}$ is contained in $x (B_{\delta_3+C_1\delta_2}\setminus B_{\delta_3-C_2\delta_2})$.
Altogether, we get that \[\|(P_{\delta_1}-P_{\delta_2}) \ast \delta_x \ast P_{\delta_3}\|_1\leqslant{\frac{2\;|B_{\delta_3+C_1\delta_2}\setminus B_{\delta_3 - C_2\delta_2}|}{|B_{\delta_3}|}},\]
which implies the first inequality.
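Here we use the standard volume estimate that, for small radii $r$ and $d=\dim G$, $|B_r|$ is comparable to $r^{d}$: when $C_2\delta_2\leqslant\delta_3/2$, the annulus $B_{\delta_3+C_1\delta_2}\setminus B_{\delta_3-C_2\delta_2}$ has measure $O(\delta_2\delta_3^{d-1})$, while in the remaining case $\delta_2/\delta_3\geqslant\frac{1}{2C_2}$ and the left-hand side, which is at most $2$, is trivially bounded by $C_0\delta_2/\delta_3$ once $C_0\geqslant 4C_2$.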
By using the fact that $\Vert x^{-1} - 1 \Vert_2 \leqslant \Vert x^{-1}\Vert_2 \Vert x - 1\Vert_2$ and arguing similarly to the above, it follows that the quantities $\|(\check{P}_{\delta_1} - \check{P}_{\delta_2}) \ast \mu \ast P_{\delta_3}\|_1$ and $\|(P_{\delta_1}-P_{\delta_2}) \ast \mu \ast \check{P}_{\delta_3}\|_1$ are bounded above by $C_0\delta_2 / \delta_3$, for some possibly larger constant $C_0>0$. Since $\|f*g\|_2\leqslant \|f\|_1\|g\|_2$, for all $f \in L^1(G)$, $g \in L^2(G)$, these estimates imply the rest of the asserted inequalities.\hfill$\blacksquare$
\bigskip
{\it Proof of Theorem \ref{L-P}}.
Let $C_0>0$ be the constant provided by Lemma \ref{almostorthogonal} and define $\varphi: \mathbb{Z} \to \mathbb{R}_+$ by letting $\varphi(n) = \frac{C_0^{1/2}}{2^{\vert n \vert/2}}$. Then Lemma \ref{almostorthogonal} gives that for any Borel probability measure $\mu\in\mathcal M(G)$ with supp$(\mu)\subset B_1(1)$, the operators $T_i := T_\mu \Delta_i$ on $L^2(G)$ satisfy $\Vert T_j^*T_i \Vert^{1/2} \leqslant \varphi(j-i)$ and $\Vert T_iT_j^* \Vert^{1/2} \leqslant \varphi(i-j)$, for all $i,j \geqslant 0$.
Let $\Phi$ and $\Phi_k$ be as defined in Lemma \ref{cotlar-stein} and take $k$ large enough so that $\Phi_k\Phi < 1$.
Let $F \in L^2(G)$. Since $\lim\limits_{\delta\rightarrow 0}\|P_{\delta}*F-F\|_2=0$, we get that $F = \sum_{i \geqslant 0} \Delta_i(F)$. By combining this fact with equations \eqref{CS1} and \eqref{CS2}, we derive that
\begin{equation}\label{eqsum}
\frac{1}{\Phi^2} \sum_{i \geqslant 0} \Vert \Delta_i(F) \Vert_2^2 \; \leqslant \; \Vert F \Vert_2^2 \; \leqslant \; \frac{k}{1 - \Phi_k\Phi} \sum_{i \geqslant 0} \Vert \Delta_i(F) \Vert_2^2
\end{equation}
Similarly, for all $\mu \in \operatorname{Prob}(G)$ with supp$(\mu)\subset B_1(1)$, we have
\[\Vert \mu \ast F \Vert_2^2 \leqslant \frac{k}{1 - \Phi_k\Phi} \sum_{i \geqslant 0} \Vert \mu \ast \Delta_i(F) \Vert_2^2.\]
Further, Lemma \ref{almostorthogonal} implies that for all $i \geqslant 0$, we have
\[\Vert P_{2^{-2i}}\ast \Delta_i(F)-\Delta_i(F) \Vert_2 \leqslant \frac{4C_0}{2^i}\Vert F \Vert_2 \qquad \text{and} \qquad \Vert P_{2^{-i/2}} \ast \Delta_i(F) \Vert_2 \leqslant \frac{C_0}{2^{i/2}}\Vert F \Vert_2.\]
Therefore,
\[\sum_{i \geqslant 0} 2^{i/2}\Vert P_{2^{-2i}} \ast \Delta_i(F) - \Delta_i(F) \Vert_2^2 \leqslant 2(4C_0)^2 \Vert F \Vert_2^2\]
and
\[\sum_{i \geqslant 0} 2^{i/2}\Vert P_{2^{-i/2}} \ast \Delta_i(F) \Vert_2^2 \leqslant \frac{\sqrt 2}{\sqrt 2 - 1}C_0^2\Vert F \Vert_2^2.\]
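Indeed, both bounds follow from the termwise estimates above by summing geometric series:
\[\sum_{i \geqslant 0} 2^{i/2}\Big(\frac{4C_0}{2^{i}}\Big)^2 = (4C_0)^2\sum_{i \geqslant 0}2^{-3i/2}=\frac{(4C_0)^2}{1-2^{-3/2}}\leqslant 2 (4C_0)^2
\qquad\text{and}\qquad
\sum_{i \geqslant 0} 2^{i/2}\,\frac{C_0^2}{2^{i}} = \frac{C_0^2}{1-2^{-1/2}}=\frac{\sqrt 2}{\sqrt 2 -1}C_0^2.\]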
It is now clear that the conclusion of Theorem \ref{L-P} holds for $C>0$ large enough (but still independent of $\mu$ and $F$).
\hfill$\blacksquare$
\subsection{Reduction to functions living at a small scale}
We continue with a consequence of Theorem \ref{L-P} that will allow us to reduce the problem of proving restricted spectral gap to functions that live at an arbitrarily small scale $\delta>0$.
\begin{corollary}\label{level_delta}
Let $C>0$ be the constant provided by Theorem \ref{L-P}.
Let $0 < r < 1$. Let $B\in\mathcal C(G)$ and $\mu\in\mathcal M(G)$ be a Borel probability measure with supp$(\mu)\subset B_1(1)$. Assume that for any finite dimensional subspace $V\subset L^2(B)$, there is $F\in L^2(B)\ominus V$ such that $\|\mu*F\|_2>r \|F\|_2$.
Let $\widetilde B\subset G$ be an open set with compact closure which contains the closure of $B$.
Then for every $\delta_0>0$, there exist $F\in L^2(\widetilde B)$ and $0<\delta<\delta_0$ such that
\begin{enumerate}
\item $\|\mu*F\|_2> r \|F\|_2/(2C)$.
\item $\|P_{\delta}*F-F\|_2<\delta^{1/16}\|F\|_2$.
\item $\|P_{\delta^{1/4}}*F\|_2<\delta^{1/16}\|F\|_2$.
\end{enumerate}
\end{corollary}
{\it Proof.}
Let $\delta_0>0$. Choose $N\geqslant 1$ such that $2^{-N}<\delta_0/2$ and $2^{-N/4} < r^2/(16C^3)$.
Since $B$ has compact closure, the operator $L^2(B)\ni F\mapsto P_{\delta}*F\in L^2(G)$ is compact, for any $\delta>0$.
Hence, the operator $L^2(B)\ni F\mapsto \Delta_i(F)\in L^2(G)$ is compact, for all $i\geqslant 0$.
The hypothesis implies that we can find $F_0 \in L^2(B)$ such that $\|\mu*F_0\|_2>r\|F_0\|_2$ and $\sum_{i=0}^{N-1}\|\Delta_i(F_0)\|^2_2 < r^2 \|F_0\|_2^2/(2C)$ (for instance, apply the hypothesis to the finite dimensional subspace $V\subset L^2(B)$ spanned by the eigenvectors of the compact positive operator $\sum_{i=0}^{N-1}\Delta_i^*\Delta_i$ on $L^2(B)$ corresponding to eigenvalues at least $r^2/(2C)$).
By using Theorem \ref{L-P} (2), we derive that $\sum_{i\geqslant 0}\|\mu*\Delta_i(F_0)\|_2^2\geqslant \|\mu*F_0\|_2^2/C > r^2 \| F_0 \|_2^2/C$. Since $\|\mu*\Delta_i(F_0)\|_2\leqslant\|\Delta_i(F_0)\|_2$ for every $i$, we also have $\sum_{i=0}^{N-1}\|\mu*\Delta_i(F_0)\|_2^2 < r^2 \|F_0\|_2^2/(2C)$, and hence $\sum_{i\geqslant N}\|\mu*\Delta_i(F_0)\|_2^2 > r^2 \|F_0\|_2^2/(2C)$. In combination with Theorem \ref{L-P} (1) we deduce that $\sum_{i\geqslant N}\|\mu*\Delta_i(F_0)\|_2^2 > r^2 (\sum_{i\geqslant N}\|\Delta_i(F_0)\|_2^2)/(2C^2)$, or equivalently
\begin{equation}\label{eq1}\sum_{i\geqslant N}(\|\Delta_i(F_0)\|_2^2-\|\mu*\Delta_i(F_0)\|_2^2)\leqslant \big(1-\frac{r^2}{2C^2}\big)\sum_{i\geqslant N}\|\Delta_i(F_0)\|_2^2. \end{equation}
Since $\sum_{i\geqslant 0}\|\Delta_i(F_0)\|_2^2\geqslant \|F_0\|_2^2/C$ by Theorem \ref{L-P} (2) and $\sum_{i=0}^{N-1}\|\Delta_i(F_0)\|^2_2 < \|F_0\|_2^2/(2C)$, we get that $\sum_{i\geqslant 0}\|\Delta_i(F_0)\|_2^2< 2(\sum_{i\geqslant N}\|\Delta_i(F_0)\|_2^2)$. By combining this inequality with Theorem \ref{L-P} (3) and using that $2^{-N/4}<r^2/(16C^3)$ we deduce that
\begin{align}\label{eq2}
\sum_{i\geqslant N}2^{i/4}\|P_{2^{-2i}}*\Delta_i(F_0)-\Delta_i(F_0)\|_2^2&\leqslant 2^{-N/4}\sum_{i\geqslant 0}2^{i/2}\|P_{2^{-2i}}*\Delta_i(F_0)-\Delta_i(F_0)\|_2^2\\&\leqslant 2^{-N/4}C\Big(\sum_{i\geqslant 0}\|\Delta_i(F_0)\|_2^2\Big)\nonumber\\&< 2^{-N/4+1}C\Big(\sum_{i\geqslant N}\|\Delta_i(F_0)\|_2^2\Big)\leqslant \frac{r^2}{8C^2}\sum_{i\geqslant N}\|\Delta_i(F_0)\|_2^2.\nonumber
\end{align}
Similarly, by using Theorem \ref{L-P} (4), we get that \begin{equation}\label{eq3} \sum_{i\geqslant N}2^{i/4}\|P_{2^{-i/2}}*\Delta_i(F_0)\|_2^2< \frac{r^2}{8C^2}\sum_{i\geqslant N}\|\Delta_i(F_0)\|_2^2.\end{equation}
By combining equations \eqref{eq1}, \eqref{eq2}, and \eqref{eq3} we can find $i\geqslant N$ such that \[(\|\Delta_i(F_0)\|_2^2-\|\mu*\Delta_i(F_0)\|_2^2)+2^{i/4}\|P_{2^{-2i}}*\Delta_i(F_0)-\Delta_i(F_0)\|_2^2+2^{i/4}\|P_{2^{-i/2}}*\Delta_i(F_0)\|_2^2<(1-\frac{r^2}{4C^2})\|\Delta_i(F_0)\|_2^2.\]
Let $F:=\Delta_i(F_0)$ and $\delta:=2^{-2i}$. Then $\delta\leqslant 2^{-2N}<\delta_0$. Moreover, since $\delta^{1/8}=2^{-i/4}$ and $\delta^{1/4}=2^{-i/2}$, the above inequality implies that $\|\mu*F\|_2^2> r^2 \|F\|_2^2/(4C^2)$, $\|P_{\delta}*F-F\|_2^2<\delta^{1/8}\|F\|_2^2$, and $\|P_{\delta^{1/4}}*F\|_2^2<\delta^{1/8}\|F\|_2^2$, which gives assertions (1)-(3) after taking square roots. Finally, notice that since $F_0\in L^2(B)$, the support of $F$ is contained in $B_{2^{-i+1}}(1)B \subset B_{\delta_0}(1)B$ and hence in $\widetilde B$, provided $\delta_0>0$ is small enough, which we may assume since decreasing $\delta_0$ only strengthens the conclusion. \hfill $\blacksquare$
\subsection{Proof of Theorem \ref{restricted}} Next, we prove the following ``quantitative restricted spectral gap'' theorem for all measures with small support that escape subgroups at a controlled speed.
It is clear that this result in combination with Theorem \ref{escape} immediately implies Theorem \ref{restricted}.
\begin{theorem}\label{restricted2}
Let $G$ be a connected simple Lie group with trivial center and $B\subset G$ a measurable set with compact closure. Let $d_1,d_2>0$ be given.
Then there exist $c>0$ and $\varepsilon_2>0$ such that the following holds true.
Let $0<\varepsilon<\varepsilon_2$ and let $\mu\in\mathcal M(G)$ be a Borel probability measure on $G$ with supp$(\mu)\subset B_{\varepsilon}(1)$. Assume that for all $\delta > 0$ small enough, we have that for any proper connected closed subgroup $H<G$,
\[\mu^{*2n}(H^{(\delta)})\leqslant\delta^{d_1}, \text{ where } n=\Big\lfloor{d_2\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}}\Big\rfloor.\]
Then there exists a finite dimensional subspace $V\subset L^2(B)$ such that $\|\mu*F\|_2<\varepsilon^{c}\|F\|_2$, for every $F\in L^2(B)\ominus V$.
\end{theorem}
Theorem \ref{restricted2} also implies the quantitative version of Theorem \ref{restricted} referred to in Remarks \ref{quant} and \ref{spectra}.
{\it Proof.}
Let $B \subset G$ and $d_1,d_2 >0$ be as in the statement of the theorem.
Let $\widetilde B\subset G$ be an open set with compact closure which contains the closure of $B$.
Denote $d=\text{dim}(G)$.
We start by quantifying how small $\varepsilon>0$ should be.
First,
Theorem \ref{rho} provides constants $a,b,\kappa > 0$ such that for any $F\in L^2(\widetilde B)$ with $\|F\|_2=1$ we have \begin{equation}\label{coeff}\|f*F\|_2^{16d}\leqslant a\|P_{\delta}*F\|_2+b\delta^\kappa,\;\;\text{ for all $f\in L^2(G)$ with $\|f\|_2=1$ and all $0<\delta<1$}. \end{equation}
Put $q = \min\{\frac{1}{16},\kappa/4\}$ and let $C>0$ be the constant provided in Theorem \ref{L-P}. Choose
\begin{itemize}
\item $0<\alpha < \frac{q}{16d}$ and denote by $c_0$ and $\varepsilon_0$ the corresponding constants given by Corollary \ref{flattening}.
\item $c>0$ such that $2c_0c<\min\{\frac{1}{16},\frac{q}{16d}-\alpha\}$.
\item $0<\varepsilon < \varepsilon_0$ small enough so that $2c_0(c+\frac{\log{2C}}{\log{\frac{1}{\varepsilon}}})<\min\{\frac{1}{16},\frac{q}{16d}-\alpha\}$.
\end{itemize}
Next, take a probability measure $\mu$ on $G$ supported on $B_{\varepsilon}(1)$ such that for all $\delta > 0$ small enough, we have $\mu^{*2n}(H^{(\delta)})\leqslant\delta^{d_1},$ where $n=\lfloor{d_2\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}}\rfloor$, for any proper connected closed subgroup $H<G$.
By Corollary \ref{flattening}, there exists $\delta_0 > 0$ such that for all $0<\delta<\delta_0$, we have that
\begin{equation}\label{flattt} \|\mu^{*n}*P_{\delta}\|_2\leqslant\delta^{-\alpha},\;\;\text{ for all } n \geqslant \Big\lfloor{c_0\frac{\log{\frac{1}{\delta}}}{\log\frac{1}{\varepsilon}}}\Big\rfloor.\end{equation}
Taking $\delta_0$ smaller if necessary, we can assume that $\delta^{2c_0(c+\frac{\log{2C}}{\log{\frac{1}{\varepsilon}}})}>(a+b)^{\frac{1}{16d}}\delta^{\frac{q}{16d}-\alpha}+\delta^{\frac{1}{16}}$, for $\delta<\delta_0$.
Now, assume by contradiction that the measure $\mu$ does not satisfy the conclusion of the theorem. Then by Corollary \ref{level_delta}, there exists $F\in L^2(\widetilde{B})$ with $\|F\|_2=1$ and $0 < \delta < \delta_0$ such that
\begin{enumerate}
\item $\|\mu*F\|_2 > \varepsilon^c/(2C)$.
\item $\|P_{\delta}*F-F\|_2<\delta^{1/16}$.
\item $\|P_{\delta^{\frac{1}{4}}}*F\|_2<\delta^{1/16}$.
\end{enumerate}
Let $n=\Big\lfloor{c_0\frac{\log{\frac{1}{\delta}}}{\log\frac{1}{\varepsilon}}}\Big\rfloor$.
Since $\mu$ is symmetric, by using Lemma \ref{powers} we derive that
\begin{align*}
\Big(\frac{\varepsilon^c}{2C}\Big)^{2n} \leqslant \|\mu*F\|_2^{2n} \leqslant \|\mu^{*n}*F\|_2 & \leqslant \|\mu^{*n} \ast P_{\delta} \ast F\|_2 + \|P_{\delta} \ast F-F\|_2.
\end{align*}
On the other hand, by combining \eqref{coeff} and \eqref{flattt} we get that \begin{align*}\|\mu^{*n}*P_{\delta}*F\|_2&\leqslant \big(a\|P_{\delta^{1/4}}*F\|_2+b\delta^{\kappa/4}\big)^{\frac{1}{16d}}\|\mu^{*n}*P_{\delta}\|_2\\&\leqslant(a\delta^{\frac{1}{16}}+b\delta^{\kappa/4})^{\frac{1}{16d}}\;\delta^{-\alpha}\leqslant(a+b)^{\frac{1}{16d}}\delta^{\frac{q}{16d}-\alpha}. \end{align*}
By putting the last two inequalities together we get that $\big(\frac{\varepsilon^c}{2C}\big)^{2n} \leqslant(a+b)^{\frac{1}{16d}}\delta^{\frac{q}{16d}-\alpha}+\delta^{\frac{1}{16}}.$
Since $\big(\frac{\varepsilon^c}{2C}\big)^{2n}\geqslant \big(\frac{\varepsilon^c}{2C}\big)^{2c_0\frac{\log{\frac{1}{\delta}}}{\log{\frac{1}{\varepsilon}}}}=\delta^{2c_0(c+\frac{\log{2C}}{\log{\frac{1}{\varepsilon}}})},$ this contradicts the choice of $\delta_0>0$.
\hfill$\blacksquare$
\subsection{Proof of Corollary \ref{by}}
Let $\Gamma$, $G$, $H$ and $B \subset G/H$ be as in the statement of Corollary \ref{by}. Recall that the measure $m_{G/H}$ on $G/H$ arises from a rho-function for the pair $(G,H)$ (see \cite[Theorem B.1.4.]{BdHV08}).
Thus, there exists a continuous function $\rho: G \to \mathbb{R}_+^*$ such that
\begin{equation}\label{intrho}
\int_G f(x)\rho(x) \d x = \int_{G/H}\int_H f(xh)\d h\, \d m_{G/H}(xH), \; \text{ for all } f \in C_c(G).
\end{equation}
Of course, equality \eqref{intrho} holds more generally for any function $f \in L^1(G)$ with compact support. The measure $m_{G/H}$ is not necessarily $G$-invariant, but the function $\rho$ allows one to determine the translates $g \cdot m_{G/H}$ (see \cite[Lemma B.1.3]{BdHV08}):
\begin{equation}\label{RNrho}
\frac{d(g \cdot m_{G/H})}{dm_{G/H}}(xH) = \frac{\rho(gx)}{\rho(x)}, \; \text{ for all } x,g \in G.
\end{equation}
Put $B_1 = B_1(1) \cdot B \subset G/H$ and $B_2 = B_1(1)^{-1} \cdot B_1 \subset G/H$. Let $p:G\to G/H$ be the canonical projection.
Let $\tilde B_1, \tilde B_2 \subset G$ be open sets with compact closures such that $B_i \subset p(\tilde B_i)$ for $i = 1,2$. Replacing $\tilde B_i$ by $\tilde B_i \cdot K$ for some compact set $K \subset H$ with non-empty interior, we may also assume that $\int_H 1_{\tilde B_i}(xh)\d h$ is bounded away from $0$ uniformly in $x \in \tilde B_i$.
Then, using \eqref{intrho}, there exists $\beta > 0$ such that $\Vert F \Vert_2/\beta \leqslant \Vert F \circ p \Vert_{2,\tilde B_i} \leqslant \beta \Vert F \Vert_2$ for all $F \in L^2(B_i)$, for both $i = 1$ and $i = 2$.
Fix $\varepsilon \in (0,1)$ small enough so that $\vert \sqrt{\frac{\rho(x)}{\rho(gx)}} - 1\vert \leqslant\frac{1}{4}$, for all $g \in B_\varepsilon(1)$ and $x \in \tilde B_1$. Take $r > 0$ such that $2r\beta^4 < 1/16$.
By Theorem \ref{restricted2} there exist a finite dimensional space $V \subset L^2(\tilde B_2)$ and a finite set $T \subset\Gamma$ such that the measure $\mu = \frac{1}{2\vert T \vert} \sum_{g \in T} (\delta_g + \delta_{g^{-1}})$ satisfies supp$(\mu)\subset B_{\varepsilon}(1)$ and $ \Vert \mu \ast F\Vert_2 < r \Vert F \Vert_2$, for all $F \in L^2(\tilde B_2) \ominus V$.
Take a sequence of functions $F_n \in L^2(B)$ which converges weakly to $0$ and such that $\Vert F_n \Vert_2 = 1$ for all $n$. To prove the corollary, it is enough to show that eventually $\Vert \pi(\mu)(F_n) \Vert_2 < \frac{1}{2}$.
First, remark that by our choice of $\varepsilon$, we have $\vert 1 - \sqrt{\frac{\rho(gx)}{\rho(x)}}\vert \leqslant \frac{1}{4}\sqrt{\frac{\rho(gx)}{\rho(x)}}$ for all $g \in \operatorname{supp}(\mu)$ and $x \in\tilde B_1$. Thus, Equation \eqref{RNrho} gives for all $F \in L^2(B)$:
\begin{align*}
\Vert \pi(\mu)(F) - \mu \ast F\Vert_2 & = \frac{1}{2\vert T \vert } \Vert \sum_{g \in T \cup T^{-1}} (\sqrt{\frac{\rho(g \, \cdot)}{\rho}} - 1)F(g^{-1} \, \cdot)\Vert_2\\
& \leqslant \frac{1}{4} \frac{1}{2\vert T \vert } \sum_{g \in T \cup T^{-1}} \Vert \sqrt{\frac{\rho(g \, \cdot )}{\rho}}F(g^{-1} \, \cdot )\Vert_2 = \frac{1}{4} \Vert F \Vert_2.
\end{align*}
Therefore, for all $n$ we have
\begin{equation}\label{boundRN}
\Vert \pi(\mu)(F_n)\Vert_2 \leqslant \Vert \mu \ast F_n\Vert_2 + \frac{1}{4}.
\end{equation}
So we are left to bound $\Vert \mu \ast F_n\Vert_2$ by $\frac{1}{4}$ for all $n$ large enough.
Since $\mu \ast F_n$ is supported on $B_1$, by the definition of $\beta$, we have that
\begin{align*}
\Vert \mu \ast F_n \Vert_2^2 \leqslant \beta^2 \Vert (\mu \ast F_n) \circ p \Vert_{2,\tilde B_1}^2 & = \beta^2 \Vert \mu \ast (F_n \circ p) \Vert_{2,\tilde B_1}^2\\
& = \beta^2 \langle \mu \ast (F_n \circ p) 1_{\tilde B_1}, \mu \ast (1_{\tilde B_2}.(F_n \circ p)) \rangle\\
& \leqslant \beta^2 \Vert \mu \ast (F_n \circ p)\Vert_{2,\tilde B_1} \Vert \mu \ast (1_{\tilde B_2}.(F_n \circ p)) \Vert_2
\end{align*}
The second line above comes from the fact that $1_{\tilde B_1} \leqslant 1_{g \cdot \tilde B_2}$ for all $g \in \operatorname{supp}(\mu) \subset B_1(1)$. Using the same fact we moreover see that $\Vert \mu \ast (F_n \circ p)\Vert_{2,\tilde B_1} \leqslant \Vert F_n \circ p \Vert_{2,\tilde B_2} \leqslant \beta\Vert F_n \Vert_2 = \beta$. In summary,
\[\Vert \mu \ast F_n \Vert_2^2 \leqslant \beta^3 \Vert \mu \ast (1_{\tilde B_2}.(F_n \circ p)) \Vert_2.\]
Using \eqref{intrho} one easily checks that the sequence $(1_{\tilde B_2}.(F_n \circ p))_n \subset L^2(\tilde B_2)$ converges weakly to $0$. Hence, we deduce from the restricted spectral gap property of $\mu$ that for $n$ large enough,
\[\Vert \mu \ast (1_{\tilde B_2}.(F_n \circ p)) \Vert_2 < 2r \Vert 1_{\tilde B_2}.(F_n \circ p) \Vert_2 = 2r\Vert F_n \circ p \Vert_{2,\tilde B_2} \leqslant 2r\beta.\]
Altogether, we get that for $n$ large enough:
\[\Vert \mu \ast F_n \Vert_2^2 < 2r\beta^4 \leqslant \frac{1}{16}.\]
Combining this with \eqref{boundRN}, we indeed get that $\Vert \pi(\mu)(F_n) \Vert_2 < \frac{1}{2}$ for $n$ large enough.\hfill $\blacksquare$
\section{The Banach-Ruziewicz problem}\label{6}
This section is devoted to the proof of Theorem \ref{BR}.
Moreover, we will show the following:
\begin{theorem}\label{BRtext}
Let $G$ be a l.c.s.c. group and $\Gamma<G$ a countable dense subgroup.
Then the following four conditions are equivalent:
\begin{enumerate}
\item If $\nu:\mathcal C(G)\rightarrow [0,\infty)$ is a $\Gamma$-invariant, finitely additive measure, then there exists $\alpha\geqslant 0$ such that $\nu(A)=\alpha |A|$, for all $A\in\mathcal C(G)$.
\item If $\Phi:L^{\infty}_{\text{c}}(G,m_G)\rightarrow\mathbb C$ is a $\Gamma$-invariant, positive linear functional, then there exists $\alpha\geqslant 0$ such that $\Phi(f)=\alpha{\int_{G}f\;\text{d}m_G}$, for all $f\in L^{\infty}_{\text{c}}(G,m_G)$.
\item The translation action $\Gamma\curvearrowright (G,m_G)$ has local spectral gap with respect to a measurable set $B\subset G$ with compact closure and non-empty interior.
\item The translation action $\Gamma\curvearrowright (G,m_G)$ is strongly ergodic.
\end{enumerate}
\end{theorem}
\begin{remark}\label{cominv}
Suppose that $G$ is {\it compact}. It is clear that (1) $\Longrightarrow$ (2). Further, it is well-known that (2) $\Longleftrightarrow$ (3) and (3) $\Longrightarrow$ (4) (see Theorem \ref{spec}). The converse implication (4) $\Longrightarrow$ (3) was established recently in \cite[Theorem 4]{AE10}.
Let us also explain why (2) $\Longrightarrow$ (1). Note that if (2) holds, then (3) does as well, hence $\Gamma$ is non-amenable.
Since the action $\Gamma\curvearrowright G$ is free, it follows that $G$ is $\Gamma$-paradoxical (see Definition \ref{echidec}). The proof of \cite[Theorem 2.1.17]{Lu94} implies that any subsets $B,C\subset G$ with non-empty interior are equidecomposable. Further, the proof of \cite[Proposition 2.1.12]{Lu94} gives that any finitely additive $\Gamma$-invariant measure $\nu:\mathcal C(G)\rightarrow [0,\infty)$ is absolutely continuous with respect to $m_G$. It follows readily that (1) holds.
Theorem \ref{BRtext} is therefore contained in the literature when $G$ is compact. Our contribution is to show that it holds for {\it arbitrary} locally compact groups.
\end{remark}
Turning to locally compact groups $G$, the non-trivial implications, which we will address below, are (2) $\Longrightarrow$ (3), (2) $\Longrightarrow$ (1), and (4) $\Longrightarrow$ (3).
\subsection{Local spectral gap and uniqueness of invariant means} In order to prove implication (2) $\Longrightarrow$ (3) from Theorem \ref{BRtext},
we give an equivalent formulation of local spectral gap in terms of uniqueness of invariant linear functionals (see Theorem \ref{mean}).
This generalizes a well-known result for probability measure preserving actions.
Let $\Gamma\curvearrowright (X,\mu)$ be a probability measure preserving action of a countable group $\Gamma$.
Then integration against $\mu$ defines a $\Gamma$-invariant {\bf mean} (i.e. a unital positive linear functional) on $L^{\infty}(X,\mu)$.
In the early 1980's, it was realized that uniqueness of this $\Gamma$-invariant mean on $L^{\infty}(X,\mu)$ is equivalent to the action having spectral gap. More precisely, the following was shown:
\begin{theorem}\emph{\cite{Ro81, Sc81}}\label{spec} Let $\Gamma\curvearrowright (X,\mu)$ be an ergodic measure preserving action of a countable group $\Gamma$ on a probability space $(X,\mu)$.
Consider the following conditions:
\begin{enumerate}
\item If $\Phi:L^{\infty}(X,\mu)\rightarrow\mathbb C$ is a $\Gamma$-invariant mean, then ${\Phi(f)=\int_Xf\;\text{d}\mu}$, for all $f\in L^{\infty}(X,\mu)$.
\item There does not exist a sequence $\{A_n\}$ of measurable subsets of $X$ such that $\mu(A_n)>0$, for all $n$, $\lim\limits_{n\rightarrow\infty}\mu(A_n)=0$, and $\lim\limits_{n\rightarrow\infty}\mu(gA_n\Delta A_n)/\mu(A_n)=0$, for all $g\in\Gamma$.
\item If a sequence $\varphi_n\in L^1(X,\mu)$ of positive functions satisfies ${\int_{X}\varphi_n\;\text{d}\mu=1}$, for all $n$, and
$\lim\limits_{n}\|g\cdot\varphi_n-\varphi_n\|_1=0$, for all $g\in\Gamma$, then $\lim\limits_{n}\|\varphi_n-1\|_{1}=0$.
\item The action $\Gamma\curvearrowright (X,\mu)$ has spectral gap.
\item The action $\Gamma\curvearrowright (X,\mu)$ is strongly ergodic.
\end{enumerate}
Then conditions (1)-(4) are equivalent and they all imply condition (5).
\end{theorem}
The equivalence of (1) and (2
mathbf{curl}})}$ and $\Vert\mathbf{z}\Vert_{\mathbf{H}^{s+1}(\widehat{K})} \lesssim \Vert\operatorname{\mathbf{curl}}\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K})}$.
\end{lemma}
\begin{proof}
With the aid of the operators ${\mathbf{R}}^{\operatorname*{curl}}$,
$R^{\operatorname*{grad}}$ of Lemma~\ref{lemma:mcintosh}, we write
$\displaystyle
{\mathbf{u}}=\nabla R^{\operatorname*{grad}}({\mathbf{u}}-{\mathbf{R}%
}^{\operatorname*{curl}}(\operatorname*{\mathbf{curl}}{\mathbf{u}}))+{\mathbf{R}%
}^{\operatorname*{curl}}(\operatorname*{\mathbf{curl}}{\mathbf{u}})
=:\nabla \varphi + {\mathbf z}.
$
The stability properties of the operators $\mathbf{R}^{\operatorname{curl}}$ and $R^{\operatorname{grad}}$
give
\begin{align*}
\Vert\varphi\Vert_{H^{s+1}(\widehat{K})}^2 &\lesssim \Vert\mathbf{u}-\mathbf{R}^{\operatorname{curl}}(\operatorname{\mathbf{curl}}\mathbf{u})\Vert_{\mathbf{H}^s(\widehat{K})}^2 \lesssim \Vert\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K})}^2 + \Vert\mathbf{R}^{\operatorname{curl}}(\operatorname{\mathbf{curl}}\mathbf{u})\Vert_{\mathbf{H}^{s+1}(\widehat{K})}^2 \\
&\lesssim \Vert\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K})}^2 + \Vert\operatorname{\mathbf{curl}}\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K})}^2 = \Vert\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K},\operatorname{\mathbf{curl}})}^2,\\
\Vert\mathbf{z}\Vert_{\mathbf{H}^{s+1}(\widehat{K})} &= \Vert\mathbf{R}^{\operatorname{curl}}(\operatorname{\mathbf{curl}}\mathbf{u})\Vert_{\mathbf{H}^{s+1}(\widehat{K})} \lesssim \Vert\operatorname{\mathbf{curl}}\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K})}.
\qedhere
\end{align*}
\end{proof}
\begin{lemma}
\label{lemma:helmholtz-like-decomp-v2} Let $s \in [0,1]$. Then each ${\mathbf{u}}
\in{\mathbf{H}}^{s}(\widehat{K})$ can be written as
${\mathbf{u}} = \nabla\varphi+ \operatorname{\mathbf{curl}}\mathbf{z}$ with $\varphi\in H^{s+1}%
(\widehat{K}) \cap H^1_0(\widehat{K})$, $\mathbf{z} \in \mathbf{H}^{s+1}(\widehat{K})$ and
$\|\varphi\|_{H^{s+1}(\widehat{K})} + \|\mathbf{z}\|_{{\mathbf H}^{s+1}(\widehat{K})} \lesssim \|\mathbf{u}\|_{\mathbf{H}^s(\widehat{K})}$.
\end{lemma}
\begin{proof}
We define $\varphi\in H^{1}_0(\widehat{K})$ as the solution of the problem
\begin{align*}
-\Delta\varphi=-\operatorname{div}\mathbf{u}, \quad \varphi=0 \text{ on }\partial\widehat{K}.
\end{align*}
Since $\operatorname{div}:\mathbf{H}^1(\widehat{K}) \rightarrow L^2(\widehat{K})$ and
$\operatorname{div}:\mathbf{L}^2(\widehat{K}) \rightarrow (H^1_0(\widehat{K}))^\prime =: H^{-1}(\widehat{K})$
are bounded, standard elliptic theory and the convexity of $\widehat{K}$ give
$\varphi \in H^1_0(\widehat{K})$ if $s = 0$ and
$\varphi \in H^2(\widehat{K}) \cap H^1_0(\widehat{K})$ if $s = 1$. By interpolation,
$\varphi \in H^{s+1}(\widehat{K}) \cap H^1_0(\widehat{K})$ with
$\|\varphi \|_{H^{s+1}(\widehat{K})} \lesssim \|{\mathbf u}\|_{{\mathbf H}^s(\widehat K)}$.
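Spelled out at the endpoints (standard elliptic estimates for the Dirichlet problem above; the convexity of $\widehat{K}$ is needed only for $s=1$), this reads
\begin{align*}
\|\varphi\|_{H^{1}(\widehat{K})} &\lesssim \|\operatorname{div}\mathbf{u}\|_{H^{-1}(\widehat{K})} \lesssim \|\mathbf{u}\|_{L^{2}(\widehat{K})} && (s=0),\\
\|\varphi\|_{H^{2}(\widehat{K})} &\lesssim \|\operatorname{div}\mathbf{u}\|_{L^{2}(\widehat{K})} \lesssim \|\mathbf{u}\|_{\mathbf{H}^{1}(\widehat{K})} && (s=1).
\end{align*}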
With the operator $\mathbf{R}^{\operatorname{curl}}$ of Lemma~\ref{lemma:mcintosh}, we define $\mathbf{z}:=\mathbf{R}^{\operatorname{curl}}(\mathbf{u}-\nabla\varphi)\in\mathbf{H}^{s+1}(\widehat{K})$.
Noting $\operatorname{div}(\mathbf{u}-\nabla\varphi)=0$,
we have ${\mathbf u} = \nabla \varphi+ \operatorname{\mathbf{curl}} {\mathbf z}$
by Lemma~\ref{lemma:mcintosh}, (\ref{item:lemma:mcintosh-i}). The stability property of $\mathbf{R}^{\operatorname{curl}}$ gives
\begin{align*}
&\|\mathbf{z}\|_{\mathbf{H}^{s+1}(\widehat{K})} = \|\mathbf{R}^{\operatorname{curl}}(\mathbf{u}-\nabla\varphi)\|_{\mathbf{H}^{s+1}(\widehat{K})} \lesssim \|\mathbf{u}-\nabla\varphi\|_{\mathbf{H}^s(\widehat{K})} \lesssim \|\mathbf{u}\|_{\mathbf{H}^s(\widehat{K})}.
\qedhere
\end{align*}
\end{proof}
\begin{lemma}
\label{lemma:helmholtz-decomposition-div}
Let $s \geq 0$. Then each ${\mathbf{u}}
\in{\mathbf{H}}^{s}(\widehat{K},\operatorname*{div})$ can be written as
${\mathbf{u}} = \operatorname*{\mathbf{curl}} \boldsymbol{\varphi}+ {\mathbf{z}}$ with $\boldsymbol{\varphi}\in \mathbf{H}^{s+1}%
(\widehat{K})$, ${\mathbf{z}} \in{\mathbf{H}}^{s+1}(\widehat{K})$ satisfying $\Vert\boldsymbol{\varphi}\Vert_{\mathbf{H}^{s+1}(\widehat{K})} \lesssim \Vert\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K},\operatorname*{div})}$ and $\Vert\mathbf{z}\Vert_{\mathbf{H}^{s+1}(\widehat{K})} \lesssim \Vert\operatorname*{div}\mathbf{u}\Vert_{H^s(\widehat{K})}$.
\end{lemma}
\begin{proof}
With the operators $\mathbf{R}^{\operatorname*{curl}}$ and $\mathbf{R}^{\operatorname*{div}}$ of Lemma~\ref{lemma:mcintosh}, we write
\begin{align*}
{\mathbf{u}}=\operatorname*{\mathbf{curl}} \mathbf{R}^{\operatorname*{curl}}({\mathbf{u}}-{\mathbf{R}%
}^{\operatorname*{div}}(\operatorname*{div}{\mathbf{u}}))+{\mathbf{R}%
}^{\operatorname*{div}}(\operatorname*{div}{\mathbf{u}})
=:\operatorname*{\mathbf{curl}} \boldsymbol{\varphi} + {\mathbf z}.
\end{align*}
The stability properties of $\mathbf{R}^{\operatorname*{curl}}$ and $\mathbf{R}^{\operatorname*{div}}$ imply
\begin{align*}
\Vert\boldsymbol{\varphi}\Vert_{\mathbf{H}^{s+1}(\widehat{K})} &\lesssim \Vert \mathbf{u}-\mathbf{R}^{\operatorname*{div}}(\operatorname*{div}\mathbf{u})\Vert_{\mathbf{H}^s(\widehat{K})} \lesssim \Vert\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K})} + \Vert\mathbf{R}^{\operatorname{div}}(\operatorname*{div}\mathbf{u})\Vert_{\mathbf{H}^{s+1}(\widehat{K})} \\
&\lesssim \Vert\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K})} + \Vert\operatorname*{div}\mathbf{u}\Vert_{H^s(\widehat{K})} \lesssim \Vert\mathbf{u}\Vert_{\mathbf{H}^s(\widehat{K},\operatorname*{div})}, \\
\Vert\mathbf{z}\Vert_{\mathbf{H}^{s+1}(\widehat{K})} &= \Vert\mathbf{R}^{\operatorname*{div}}(\operatorname*{div}\mathbf{u})\Vert_{\mathbf{H}^{s+1}(\widehat{K})} \lesssim \Vert\operatorname*{div}\mathbf{u}\Vert_{H^s(\widehat{K})}.
\qedhere
\end{align*}
\end{proof}
We now state the Friedrichs inequalities for the operators $\operatorname{\mathbf{curl}}$ and $\operatorname{div}$.
\begin{lemma}
[discrete Friedrichs inequality for $\mathbf{H}(\operatorname{\mathbf{curl}})$ in 3D, \protect{\cite[Lemma~{5.1}]{demkowicz08}}]
\label{lemma:discrete-friedrichs-3d}
There exists $C > 0$ independent of $p$ and ${\mathbf{u}}$ such that
\begin{equation}
\label{eq:lemma:discrete-friedrichs-3d}\|{\mathbf{u}}\|_{L^{2}(\widehat{K})}
\leq C \|\operatorname{\mathbf{curl}} {\mathbf{u}}\|_{L^{2}(\widehat{K})}%
\end{equation}
in the following two cases:
\begin{enumerate}
[(i)]
\item \label{item:lemma:discrete-friedrichs-3d-i} ${\mathbf{u}}\in
\mathbf{Q}_{p,\perp}(\widehat{K}) := \{ {\mathbf v} \in \mathbf{Q}_p(\widehat K)\colon({\mathbf{v}},\nabla \psi)_{L^{2}(\widehat{K})}=0 \quad \forall \psi\in W_{p+1}(\widehat{K})\}$,
\item \label{item:lemma:discrete-friedrichs-3d-ii} ${\mathbf{u}} \in
\mathring{\mathbf{Q}}_{p,\perp}(\widehat{K}):=\{ {\mathbf v} \in \mathring{\mathbf{Q}}_p(\widehat K)\colon
({\mathbf v},\nabla \psi)_{L^2(\widehat K)} = 0 \quad \forall \psi \in \mathring{W}_{p+1}(\widehat K)\}$.
\end{enumerate}
\end{lemma}
\begin{lemma}
[discrete Friedrichs inequality for $\mathbf{H}(\operatorname*{div})$]
\label{lemma:discrete-friedrichs-div} There exists $C > 0$ independent of $p$
and ${\mathbf{u}}$ such that
\begin{equation}
\label{eq:lemma:discrete-friedrichs-div-3d}\|{\mathbf{u}}\|_{L^{2}(\widehat{K})} \leq C
\|\operatorname{div} {\mathbf{u}}\|_{L^{2}(\widehat{K})}%
\end{equation}
in the following two cases:
\begin{enumerate}
[(i)]
\item \label{item:lemma:discrete-friedrichs-div-i} ${\mathbf{u}}\in
\mathbf{V}_{p}(\widehat{K})$ satisfies $({\mathbf{u}%
},\operatorname*{\mathbf{curl}} \mathbf{v})_{L^{2}(\widehat{K})}=0$ for all $\mathbf{v}\in \mathbf{Q}_{p}(\widehat{K})$,
\item \label{item:lemma:discrete-friedrichs-div-ii} ${\mathbf{u}} \in
\mathring{\mathbf{V}}_p(\widehat{K})$ satisfies $({\mathbf{u}}, \operatorname*{\mathbf{curl}} \mathbf{v})_{L^{2}(\widehat{K})} = 0$
for all $\mathbf{v} \in\mathring{\mathbf{Q}}_{p,\perp}(\widehat{K})$.
\end{enumerate}
\end{lemma}
\begin{proof}
The statement (\ref{item:lemma:discrete-friedrichs-div-i}) is taken from \cite[Lemma~{5.2}]{demkowicz08}.
It is also shown in \cite[Lemma~{5.2}]{demkowicz08} that the Friedrichs inequality
(\ref{eq:lemma:discrete-friedrichs-div-3d}) holds for all ${\mathbf u}$ satisfying
\begin{align}
\label{eq:item:lemma-discrete-friedrichs-div-ii-alternative}
{\mathbf{u}} \in
\mathring{\mathbf{V}}_p(\widehat{K})
\text{ satisfies } ({\mathbf{u}}, \operatorname*{\mathbf{curl}} \mathbf{v})_{L^{2}(\widehat{K})} = 0
\text{ for all } \mathbf{v} \in \mathring{{\mathbf Q}}_{p}(\widehat K).
\end{align}
To see that the condition
(\ref{item:lemma:discrete-friedrichs-div-ii}) in Lemma~\ref{lemma:discrete-friedrichs-div}
suffices, assume that ${\mathbf u}$ satisfies the condition
(\ref{item:lemma:discrete-friedrichs-div-ii}) in Lemma~\ref{lemma:discrete-friedrichs-div} and
write ${\mathbf v} \in \mathring{\mathbf Q}_p(\widehat{K})$ as
$\mathbf{v}=\Pi_{\nabla \mathring{W}_{p+1}}\mathbf{v} + (\mathbf{v}-\Pi_{\nabla \mathring{W}_{p+1}}\mathbf{v})$, where $\Pi_{\nabla \mathring{W}_{p+1}}$ denotes the $L^2$-projection onto $\nabla \mathring{W}_{p+1}(\widehat{K}) \subset \mathring{\mathbf{Q}}_p(\widehat{K})$. Then observe that ${\mathbf v} - \Pi_{\nabla \mathring{W}_{p+1}}\mathbf{v} \in
\mathring{\mathbf{Q}}_{p,\perp}(\widehat K)$ so that
\begin{align*}
(\mathbf{u},\operatorname*{\mathbf{curl}}\mathbf{v})_{L^2(\widehat{K})} =
(\mathbf{u},\underbrace{\operatorname*{\mathbf{curl}}(\Pi_{\nabla \mathring{W}_{p+1}}\mathbf{v})}_{=0})_{L^2(\widehat{K})}
+ \underbrace{(\mathbf{u},\operatorname*{\mathbf{curl}}(\mathbf{v}-\Pi_{\nabla \mathring{W}_{p+1}}\mathbf{v}))_{L^2(\widehat{K})}}_{=0
\text{ since ${\mathbf v} - \Pi_{\nabla \mathring{W}_{p+1}} \mathbf {v} \in \mathring{\mathbf Q}_{p,\perp}(\widehat{K})$}} = 0;
\end{align*}
hence, ${\mathbf u}$ satisfies in fact
(\ref{eq:item:lemma-discrete-friedrichs-div-ii-alternative}). Thus,
it satisfies the Friedrichs inequality (\ref{eq:lemma:discrete-friedrichs-div-3d}).
\end{proof}
\subsection{Stability of the operator $\protect\widehat\Pi^{\operatorname*{grad},3d}_{p+1}$}
The three-dimensional analog of Theorem~\ref{lemma:demkowicz-grad-2D} is:
\begin{theorem}
\label{lemma:demkowicz-grad-3D}
Assume that all interior angles of the 4 faces of $\widehat K$ are smaller than $2\pi/3$. Then,
for every $s\in [0,1]$
there is $C_s > 0$ such that
for all $u\in H^2(\widehat{K})$
\begin{subequations}
\begin{align}
\label{eq:lemma:demkowicz-grad-3D-10}
\Vert u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u\Vert_{H^{1-s}(\widehat{K})}&\leq
C_{s}p^{-(1+s)} \inf_{v \in W_{p+1}(\widehat K)} \Vert u -v\Vert_{H^{2}(\widehat{K})}, \\
\label{eq:lemma:demkowicz-grad-3D-20}
\Vert \nabla(u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u)\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K})}&\leq
C_{s}p^{-(1+s)} \inf_{v \in W_{p+1}(\widehat K)} \Vert u -v\Vert_{H^{2}(\widehat{K})}.
\end{align}
\end{subequations}
Additionally, \eqref{eq:lemma:demkowicz-grad-3D-10} holds for $s = 0$ without the conditions on the
angles of the faces of $\widehat K$.
\end{theorem}
\begin{proof}
The proof proceeds along the same lines as in the 2D case. First, we observe from the
projection property of $\widehat\Pi^{\operatorname*{grad},3d}_{p+1}$ that it suffices to show
(\ref{eq:lemma:demkowicz-grad-3D-10}), \eqref{eq:lemma:demkowicz-grad-3D-20} with $v = 0$ in the infimum.
Next, the trace theorem implies $u|_{f} \in H^{3/2}(f)$ for every face $f \in {\mathcal F}(\widehat K)$.
{}From Theorem~\ref{lemma:demkowicz-grad-2D} we get, for every face $f\in{\mathcal{F}%
}(\widehat{K})$ and $s\in\lbrack0,1]$,
\begin{equation}
\Vert u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u\Vert_{H^{1-s}(f)}\leq Cp^{-(1/2+s)}%
\Vert u\Vert_{H^{2}(\widehat{K})}.
\end{equation}
Since $u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u\in
C(\partial\widehat{K})$, we conclude
\begin{equation}
\Vert u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u\Vert_{H^{1-s}(\partial \widehat{K})}\leq
Cp^{-(1/2+s)}\Vert u\Vert_{H^{2}(\widehat{K})}
\label{eq:lemma:demkowicz-grad-3D-120}%
\end{equation}
for $s\in\{0,1\}$ and then, by interpolation, for all $s\in\lbrack0,1]$.
Next, we show \eqref{eq:lemma:demkowicz-grad-3D-10} for $s=0$ (from which \eqref{eq:lemma:demkowicz-grad-3D-20} for $s=0$ follows). As in the 2D case, we use
Lemma~\ref{lemma:Pgrad3d}, the estimate \eqref{eq:lemma:demkowicz-grad-3D-120},
the fact that $P^{\operatorname{grad},3d} u - \widehat\Pi^{\operatorname*{grad},3d}_{p+1} u$ is discrete harmonic, and
the polynomial preserving lifting of \cite{munoz-sola97}, to arrive at
\begin{align}
\nonumber |u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u|_{H^{1}(\widehat{K})} &\leq |u-P^{\operatorname*{grad},3d}u|_{H^1(\widehat{K})} + |P^{\operatorname*{grad},3d}u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u|_{H^1(\widehat{K})} \\
\label{eq:lemma:demkowicz-grad-3D-145} &\lesssim p^{-1} \Vert u\Vert_{H^2(\widehat{K})} + \Vert P^{\operatorname*{grad},3d}u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u\Vert_{H^{1/2}(\partial\widehat{K})} \\
\nonumber &\lesssim p^{-1} \Vert u\Vert_{H^2(\widehat{K})} + \Vert u-P^{\operatorname*{grad},3d}u\Vert_{H^1(\widehat{K})}
\lesssim p^{-1} \Vert u\Vert_{H^2(\widehat{K})}.
\end{align}
The $L^{2}$-estimate, i.e., the case $s = 1$ in (\ref{eq:lemma:demkowicz-grad-3D-10}),
is obtained by a duality argument: Let $z\in
H^{2}(\widehat{K})\cap H_{0}^{1}(\widehat{K})$ be given by
\[
-\Delta z=\widetilde{e}:=u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u\quad
\mbox {on $\widehat K$},\qquad z|_{\partial\widehat{K}}=0.
\]
Integration by parts leads to
\begin{equation}
\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}^{2}=\int_{\widehat{K}}\nabla
z\cdot\nabla\widetilde{e}-\int_{\partial\widehat{K}}\partial_{n}%
z\widetilde{e}. \label{eq:lemma:demkowicz-grad-3D-200}%
\end{equation}
For the first term in (\ref{eq:lemma:demkowicz-grad-3D-200}) we use the
orthogonality properties satisfied by $\widetilde{e}$ and \eqref{eq:lemma:demkowicz-grad-3D-145} to get
\begin{equation}
|(\nabla z,\nabla\widetilde{e})_{L^{2}(\widehat{K})}| \leq \operatorname*{inf}_{\pi\in\mathring{W}_{p+1}(\widehat{K})} \Vert z-\pi\Vert_{H^1(\widehat{K})} \Vert\nabla\widetilde{e}\Vert_{L^2(\widehat{K})} \lesssim p^{-1}%
\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}\Vert\nabla\widetilde{e}%
\Vert_{L^{2}(\widehat{K})}. \label{eq:lemma:demkowicz-grad-3D-500}%
\end{equation}
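Here the first inequality uses (as the infimum indicates) the orthogonality of $\nabla\widetilde{e}$ to $\nabla\mathring{W}_{p+1}(\widehat{K})$, namely
\[
(\nabla z,\nabla\widetilde{e})_{L^{2}(\widehat{K})}=(\nabla(z-\pi),\nabla\widetilde{e})_{L^{2}(\widehat{K})}
\qquad\text{for all }\pi\in\mathring{W}_{p+1}(\widehat{K}),
\]
while the second one presumably combines the convexity estimate $\Vert z\Vert_{H^{2}(\widehat{K})}\lesssim\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}$ with a $p$-version approximation bound of the form $\inf_{\pi\in\mathring{W}_{p+1}(\widehat{K})}\Vert z-\pi\Vert_{H^{1}(\widehat{K})}\lesssim p^{-1}\Vert z\Vert_{H^{2}(\widehat{K})}$.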
For the second term in (\ref{eq:lemma:demkowicz-grad-3D-200}),
we use Theorem~\ref{lemma:demkowicz-grad-2D} for each face $f \in {\mathcal F}(\widehat K)$.
The assumptions on the angles of the faces of $\widehat K$ imply that Theorem~\ref{lemma:demkowicz-grad-2D}
is applicable with $s = 3/2$ (since the pertinent $\widehat s > 3/2 = \pi/(2 \pi/3)$) to give
\begin{align}
\label{eq:lemma:demkowicz-grad-3D-510}%
|(\partial_n z,\widetilde{e})_{L^2(f)}|
&\leq \Vert\partial_n z\Vert_{H^{1/2}(f)} \Vert\widetilde{e}\Vert_{\widetilde{H}^{-1/2}(f)}
\lesssim p^{-2} \Vert\partial_n z\Vert_{H^{1/2}(f)} \Vert u\Vert_{H^{3/2}(f)}\\
\nonumber
&\lesssim p^{-2} \Vert\widetilde{e}\Vert_{L^2(\widehat{K})} \Vert u\Vert_{H^2(\widehat{K})}.
\end{align}
Inserting (\ref{eq:lemma:demkowicz-grad-3D-500}), (\ref{eq:lemma:demkowicz-grad-3D-510})
in \eqref{eq:lemma:demkowicz-grad-3D-200} gives the desired estimate \eqref{eq:lemma:demkowicz-grad-3D-10} for $s = 1$.
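In detail, combining \eqref{eq:lemma:demkowicz-grad-3D-200}, \eqref{eq:lemma:demkowicz-grad-3D-500}, and \eqref{eq:lemma:demkowicz-grad-3D-510} (summed over the four faces) with the bound $\Vert\nabla\widetilde{e}\Vert_{L^{2}(\widehat{K})}\lesssim p^{-1}\Vert u\Vert_{H^{2}(\widehat{K})}$ from \eqref{eq:lemma:demkowicz-grad-3D-145} gives
\[
\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}^{2}\lesssim p^{-1}\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}\Vert\nabla\widetilde{e}\Vert_{L^{2}(\widehat{K})}+p^{-2}\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}\Vert u\Vert_{H^{2}(\widehat{K})}\lesssim p^{-2}\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}\Vert u\Vert_{H^{2}(\widehat{K})},
\]
and dividing by $\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}$ yields $\Vert\widetilde{e}\Vert_{L^{2}(\widehat{K})}\lesssim p^{-2}\Vert u\Vert_{H^{2}(\widehat{K})}$, as claimed.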
An interpolation argument completes the proof for the intermediate values $s\in(0,1)$.
We show the estimate \eqref{eq:lemma:demkowicz-grad-3D-20} for $s=1$ by duality.
Again, we set $\widetilde{e}:=u-\widehat\Pi^{\operatorname*{grad},3d}_{p+1} u$ and need an estimate for
\begin{align}
\label{eq:lemma:demkowicz-grad-3D-700}
\|\nabla\widetilde{e}\|_{\widetilde{\mathbf{H}}^{-1}(\widehat{K})} = \operatorname*{sup}_{\mathbf{v}\in \mathbf{H}^1(\widehat{K})} \frac{(\nabla\widetilde{e},\mathbf{v})_{L^2(\widehat{K})}}{\|\mathbf{v}\|_{\mathbf{H}^1(\widehat{K})}}.
\end{align}
According to Lemma~\ref{lemma:helmholtz-like-decomp-v2}, any $\mathbf{v}\in \mathbf{H}^1(\widehat{K})$ can be decomposed as $\mathbf{v}=\nabla\varphi+\operatorname{\mathbf{curl}} \mathbf{z}$
with $\varphi\in H^2(\widehat{K}) \cap H^1_0(\widehat{K})$ and $\mathbf{z}\in \mathbf{H}^2(\widehat{K})$. Integration by parts then gives
\begin{align*}
(\nabla\widetilde{e},\mathbf{v})_{L^2(\widehat{K})} = (\nabla\widetilde{e},\nabla\varphi)_{L^2(\widehat{K})} + (\Pi_\tau \nabla\widetilde{e},\gamma_\tau \mathbf{z})_{L^2(\partial\widehat{K})}.
\end{align*}
For the first term, we use Lemma~\ref{lemma:Pgrad1d} and \eqref{eq:lemma:demkowicz-grad-3D-10}
(applied with $s=0$) to obtain
\begin{align*}
\bigl| (\nabla\widetilde{e},\nabla\varphi)_{L^2(\widehat{K})}\bigr| \lesssim \|\nabla\widetilde{e}\|_{L^2(\widehat{K})} \operatorname*{inf}_{\pi\in \mathring{W}_{p+1}(\widehat{K})} \|\varphi-\pi\|_{H^1(\widehat{K})} \lesssim p^{-2} \|u\|_{H^2(\widehat{K})} \|\mathbf{v}\|_{\mathbf{H}^1(\widehat{K})},
\end{align*}
imitating \eqref{eq:lemma:demkowicz-grad-3D-500}. To treat the second term, we note that $\mathbf{z}\in \mathbf{H}^2(\widehat{K})$ implies $\mathbf{z}\in \mathbf{H}^{3/2}(f)$ for each face $f\in\mathcal{F}(\widehat{K})$.
Thus, Theorem~\ref{lemma:demkowicz-grad-2D} is again applicable with $s=3/2$, and we get
\begin{align*}
&\bigl| (\Pi_\tau \nabla\widetilde{e},\gamma_\tau \mathbf{z})_{L^2(f)} \bigr| =
\bigl| (\nabla_f\widetilde{e},\gamma_\tau \mathbf{z})_{L^2(f)} \bigr| \lesssim \|\nabla_f \widetilde{e}\|_{\widetilde{\mathbf{H}}^{-3/2}(f)} \|\gamma_\tau\mathbf{z}\|_{\mathbf{H}^{3/2}(f)} \\
&\quad \stackrel{\text{Thm.~\ref{lemma:demkowicz-grad-2D}}}{\lesssim}
p^{-2} \|u\|_{H^{3/2}(f)} \|{\mathbf z}\|_{\mathbf{H}^2(\widehat{K})} \lesssim p^{-2} \|u\|_{H^2(\widehat{K})} \|\mathbf{v}\|_{\mathbf{H}^1(\widehat{K})}.
\end{align*}
Inserting the last two estimates in \eqref{eq:lemma:demkowicz-grad-3D-700} yields \eqref{eq:lemma:demkowicz-grad-3D-20} for $s=1$. The estimate \eqref{eq:lemma:demkowicz-grad-3D-20} for $s\in (0,1)$ now follows by interpolation.
\end{proof}
\subsection{Stability of the operator $\protect\widehat \Pi^{\operatorname*{curl},3d}_p$}
As in the proof of Lemma~\ref{lemma:Picurl-face}, a key
ingredient is the existence of a polynomial preserving lifting operator from
the boundary to the element with the appropriate
mapping properties and an additional orthogonality property.
For ${\mathbf{H}}(\widehat{K},\operatorname{\mathbf{curl}})$,
a lifting operator has been constructed in
\cite{demkowicz-gopalakrishnan-schoeberl-II}.
We formulate a simplified version
of their results and also explicitly modify that lifting to ensure a convenient orthogonality property.
\begin{lemma}
\label{lemma:Hcurl-lifting}
Introduce on the trace space $\Pi_{\tau}{\mathbf{H}}(\widehat{K}%
,\operatorname{\mathbf{curl}})$ the norm
\begin{equation}
\Vert{\mathbf{z}}\Vert_{{\mathbf{X}}^{-1/2}}:=\inf\{\Vert{\mathbf{v}}%
\Vert_{{\mathbf{H}}(\widehat{K},\operatorname{\mathbf{curl}})}\,|\,\Pi_{\tau
}{\mathbf{v}}={\mathbf{z}}\}.
\end{equation}
There exists $C >0$ (independent of $p \in {\mathbb N}$)
and, for each $p \in {\mathbb N}$,
a lifting operator ${\boldsymbol{\mathcal{L}}}^{\operatorname*{curl}%
,3d}_p:\Pi_\tau {\mathbf Q}_p(\widehat{K}) \rightarrow {\mathbf Q}_p(\widehat K)$
with the following properties:
\begin{enumerate}
[(i)]
\item \label{item:lemma:Hcurl-lifting-i}
$\Pi_\tau {\boldsymbol{\mathcal{L}}}^{\operatorname*{curl},3d}_p(\Pi_\tau {\mathbf z}) =
\Pi_\tau {\mathbf z}$ for all ${\mathbf z} \in {\mathbf Q}_p(\widehat K)$.
\item \label{item:lemma:Hcurl-lifting-ii} There holds $\displaystyle\Vert
{\boldsymbol{\mathcal{L}}}^{\operatorname*{curl},3d}_p{\mathbf{z}}\Vert_{{\mathbf{H}%
}(\widehat{K},\operatorname{\mathbf{curl}})}\leq C\Vert{\mathbf{z}}\Vert_{{\mathbf{X}%
}^{-1/2}}.$
\item \label{item:lemma:Hcurl-lifting-iia} There holds the orthogonality $(\boldsymbol{\mathcal{L}}^{\operatorname*{curl},3d}_p\mathbf{z},\nabla v)_{L^2(\widehat{K})} = 0$ for all $v\in \mathring{W}_{p+1}(\widehat{K})$.
\item \label{item:lemma:Hcurl-lifting-iii}
Let ${\mathbf T}:= \Pi_\tau {\mathbf H}^{2}(\widehat{K})$.
A function ${\mathbf{z}}\in{\mathbf{T}}$ is in ${L}^2(\partial\widehat{K})$,
facewise in ${\mathbf H}^{3/2}_{T}$, and
$$\displaystyle\Vert{\mathbf{z}}\Vert_{{\mathbf{X}}^{-1/2}}\leq
C\sum_{f\in{\mathcal{F}}(\widehat{K})}\left[ \Vert{\mathbf{z}}\Vert
_{\widetilde{\mathbf{H}}_T^{-1/2}(f)}+\Vert\operatorname{curl}_{f}{\mathbf{z}%
}\Vert_{\widetilde{H}^{-1/2}(f)}\right] .
$$
Here, we recall from \eqref{eq:def-negative-norm} that $\|\cdot\|_{\widetilde{\mathbf{H}}_T^{-1/2}(f)}$
is defined to be dual to $\|\cdot\|_{{\mathbf{H}}_T^{1/2}(f)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The lifting operator $\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}:\Pi_\tau({\mathbf H}(\widehat K,\operatorname{\mathbf{curl}})) \rightarrow {\mathbf H}(\widehat{K},\operatorname*{\mathbf{curl}})$ constructed in \cite{demkowicz-gopalakrishnan-schoeberl-II} has the desired polynomial preserving property (\ref{item:lemma:Hcurl-lifting-i}) and continuity property
(\ref{item:lemma:Hcurl-lifting-ii}), \cite[Thm.~{7.2}]%
{demkowicz-gopalakrishnan-schoeberl-II}. To ensure
(\ref{item:lemma:Hcurl-lifting-iia}) we define the desired lifting operator
by $\boldsymbol{\mathcal{L}}^{\operatorname*{curl},3d}_p\mathbf{z} := \boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z}-\mathbf{w}_0$,
where $\mathbf{w}_0$ is defined by the following saddle point problem:
Find $\mathbf{w}_0\in \mathring{\mathbf{Q}}_p(\widehat{K})$
and $\varphi\in \mathring{W}_{p+1}(\widehat{K})$ such that
for all
$\mathbf{q}\in \mathring{\mathbf{Q}}_p(\widehat{K})$ and all
$\mu\in \mathring{W}_{p+1}(\widehat{K})$
\begin{subequations}
\label{eq:lemma:saddle-point-curl}
\begin{align}
\label{eq:lemma:saddle-point-curl-a}
(\operatorname*{\mathbf{curl}}\mathbf{w}_0,\operatorname*{\mathbf{curl}}\mathbf{q})_{L^2(\widehat{K})} + (\mathbf{q},\nabla\varphi)_{L^2(\widehat{K})} & = (\operatorname*{\mathbf{curl}}(\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z}),\operatorname*{\mathbf{curl}}\mathbf{q})_{L^2(\widehat{K})}
\\
\label{eq:lemma:saddle-point-curl-b}
(\mathbf{w}_0,\nabla\mu)_{L^2(\widehat{K})} & = (\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z},\nabla\mu)_{L^2(\widehat{K})}.
\end{align}
\end{subequations}
Problem~\eqref{eq:lemma:saddle-point-curl} is uniquely solvable:
We define the bilinear forms
$a(\mathbf{w},\mathbf{q}):=(\operatorname*{\mathbf{curl}}\mathbf{w},\operatorname*{\mathbf{curl}}\mathbf{q})_{L^2(\widehat{K})}$
and $b(\mathbf{w},\varphi) := (\mathbf{w},\nabla\varphi)_{L^2(\widehat{K})}$
for $\mathbf{w}, \mathbf{q}\in \mathring{\mathbf{Q}}_p(\widehat{K})$ and $\varphi\in \mathring{W}_{p+1}(\widehat{K})$.
Coercivity of $a$ on the kernel of
$b$ with
\begin{align*}
\operatorname*{ker}b=
\{\mathbf{q}\in \mathring{\mathbf{Q}}_p(\widehat{K})\colon (\mathbf{q},\nabla\mu)_{L^2(\widehat{K})} = 0 \, \forall\mu\in \mathring{W}_{p+1}(\widehat K)\} = \mathring{\mathbf{Q}}_{p,\perp}(\widehat K),
\end{align*}
follows from the Friedrichs inequality (Lemma~\ref{lemma:discrete-friedrichs-3d}) by
\begin{align*}
a(\mathbf{v},\mathbf{v})&=\Vert\operatorname*{\mathbf{curl}}\mathbf{v}\Vert_{L^2(\widehat{K})}^2 \geq \frac{1}{2C^2} \Vert\mathbf{v}\Vert_{L^2(\widehat{K})}^2 + \frac{1}{2}\Vert\operatorname*{\mathbf{curl}}\mathbf{v}\Vert_{L^2(\widehat{K})}^2 \\
&\geq \operatorname*{min}\{\frac{1}{2C^2},\frac{1}{2}\}\Vert\mathbf{v}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})}^2
\end{align*}
for all $\mathbf{v}\in \operatorname*{ker}b$. Next, we show the inf-sup condition
\begin{align*}
\operatornamewithlimits{inf}_{\varphi\in\mathring{W}_{p+1}(\widehat{K})} \operatornamewithlimits{sup}_{\mathbf{w}\in\mathring{\mathbf{Q}}_p(\widehat{K})} \frac{b(\mathbf{w},\varphi)}{\Vert\mathbf{w}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})} \Vert\varphi\Vert_{H^1(\widehat{K})}} \geq C.
\end{align*}
Given $\varphi\in\mathring{W}_{p+1}(\widehat{K})$, choose $\mathbf{w}=\nabla\varphi\in\mathring{\mathbf{Q}}_p(\widehat{K})$. Hence,
\begin{align*}
\frac{b(\mathbf{w},\varphi)}{\Vert\mathbf{w}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})} \Vert\varphi\Vert_{H^1(\widehat{K})}} = \frac{\Vert\nabla\varphi\Vert_{L^2(\widehat{K})}^2}{\Vert\nabla\varphi\Vert_{L^2(\widehat{K})} \Vert\varphi\Vert_{H^1(\widehat{K})}} \geq C
\end{align*}
by Poincar\'e's inequality. Thus, the saddle point problem \eqref{eq:lemma:saddle-point-curl} has a
unique solution
$(\mathbf{w}_0,\varphi) \in \mathring{\mathbf{Q}}_p(\widehat{K}) \times \mathring{W}_{p+1}(\widehat K)$.
In fact, taking ${\mathbf q} = \nabla \varphi$ in (\ref{eq:lemma:saddle-point-curl-a}) reveals
$\varphi = 0$.
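Indeed, since $\operatorname*{\mathbf{curl}}\nabla\varphi=0$, the admissible choice $\mathbf{q}=\nabla\varphi\in\mathring{\mathbf{Q}}_p(\widehat{K})$ in (\ref{eq:lemma:saddle-point-curl-a}) gives
\[
\Vert\nabla\varphi\Vert_{L^2(\widehat{K})}^{2}=\big(\operatorname*{\mathbf{curl}}(\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z})-\operatorname*{\mathbf{curl}}\mathbf{w}_0,\underbrace{\operatorname*{\mathbf{curl}}\nabla\varphi}_{=0}\big)_{L^2(\widehat{K})}=0,
\]
so that $\nabla\varphi=0$ and hence $\varphi=0$, since $\varphi\in\mathring{W}_{p+1}(\widehat{K})$ vanishes on $\partial\widehat{K}$.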
The lifting operator $\boldsymbol{\mathcal{L}}^{\operatorname*{curl},3d}_p$ now obviously satisfies
(\ref{item:lemma:Hcurl-lifting-i}) and (\ref{item:lemma:Hcurl-lifting-iia}) by construction.
For (\ref{item:lemma:Hcurl-lifting-ii}) note that the solution $\mathbf{w}_0$ satisfies the estimate
$\Vert\mathbf{w}_0\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})} \lesssim \Vert f\Vert + \Vert g\Vert$,
where $f(\mathbf{v})=(\operatorname*{\mathbf{curl}}(\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z}),\operatorname*{\mathbf{curl}}\mathbf{v})_{L^2(\widehat{K})}$, $g(v)=(\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z},\nabla v)_{L^2(\widehat{K})}$, and $\Vert \cdot \Vert$ denotes the operator norm. Thus,
\begin{align*}
\Vert f\Vert =\!\! \operatornamewithlimits{sup}_{\Vert\mathbf{v}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})} \leq 1} \! |(\operatorname*{\mathbf{curl}}(\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z}),\operatorname*{\mathbf{curl}}\mathbf{v})_{L^2(\widehat{K})}| \leq \Vert\operatorname*{\mathbf{curl}}(\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z})\Vert_{L^2(\widehat{K})} \lesssim \Vert\mathbf{z}\Vert_{\mathbf{X}^{-1/2}}.
\end{align*}
The estimate
$\displaystyle \Vert g\Vert \lesssim \Vert\mathbf{z}\Vert_{\mathbf{X}^{-1/2}}
$
is shown in a similar way. Hence, (\ref{item:lemma:Hcurl-lifting-ii}) follows from
\begin{align*}
\Vert\boldsymbol{\mathcal{L}}^{\operatorname*{curl},3d}_p\mathbf{z}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})} \leq \Vert\boldsymbol{\mathcal{E}}^{\operatorname*{curl}}\mathbf{z}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})} + \Vert\mathbf{w}_0\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})} \lesssim \Vert\mathbf{z}\Vert_{\mathbf{X}^{-1/2}}.
\end{align*}
We now show (\ref{item:lemma:Hcurl-lifting-iii}), proceeding in several steps.
\newline
\emph{1st~step:}
Clearly, ${\mathbf z}$ is in $L^2(\partial \widehat K)$ and facewise in ${\mathbf H}^{3/2}_T$.
The surface curl of ${\mathbf z} \in {\mathbf T}$,
denoted $\operatorname{curl}_{\partial\widehat K} {\mathbf z}$, is defined by
${\mathbf n} \cdot \operatorname{\mathbf{curl}} \widetilde{\mathbf z} \in H^{-1/2}(\partial\widehat K)$
for any lifting $\widetilde{\mathbf z} \in {\mathbf H}(\widehat K,\operatorname{\mathbf{curl}})$
of ${\mathbf z}$.
This definition is indeed independent of the lifting since the difference
${\boldsymbol \delta}$ of two liftings is in ${\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}})$
and by the de Rham diagram (see, e.g., \cite[eqn. (3.60)]{Monkbook}) we then have
$\operatorname{\mathbf{curl}} {\boldsymbol \delta} \in {\mathbf H}_0(\widehat K,\operatorname{div})$.
Furthermore, since an ${\mathbf H}^2$-lifting of ${\mathbf z}$ exists,
$\operatorname{curl}_{\partial\widehat K} {\mathbf z} \in H^{-1/2}(\partial\widehat K)$ is facewise in ${\mathbf H}_T^{1/2}$ and coincides
facewise with $\operatorname{curl}_f {\mathbf z}$.
\emph{2nd~step:}
We construct a particular lifting ${\mathbf Z} \in {\mathbf H}(\widehat K,\operatorname{\mathbf{curl}})$
of ${\mathbf z} \in {\mathbf X}^{-1/2}$ and will use
$\|{\mathbf z}\|_{{\mathbf X}^{-1/2}} \leq \|{\mathbf Z}\|_{{\mathbf H}(\widehat K,\operatorname{\mathbf{curl}})}$.
This lifting ${\mathbf Z}$ is taken to be the solution of the following (constrained)
minimization problem:
\begin{align*}
& \mbox{ Minimize } \|\operatorname{\mathbf{curl}} {\mathbf Y}\|_{L^2(\widehat{K})}
\mbox{ under the constraints} \\
& \mbox{$\Pi_\tau {\mathbf Y} = {\mathbf z}$ \qquad and \qquad $({\mathbf Y} ,\nabla \varphi)_{L^2(\widehat{K})} = 0$
for all $\varphi \in H^1_0(\widehat{K})$.}
\end{align*}
This minimization problem can be solved with the method of Lagrange multipliers as was done
in (\ref{eq:lemma:saddle-point-curl}). Without repeating the arguments,
we obtain, in strong form, the problem:
Find $({\mathbf Z},\varphi) \in {\mathbf H}(\widehat{K},\operatorname{\mathbf{curl}}) \times H^1_0(\widehat{K})$ such that
\begin{align*}
\operatorname{\mathbf{curl}}
\operatorname{\mathbf{curl}} {\mathbf Z} + \nabla \varphi = 0 \quad \mbox{ in $\widehat{K}$},
\qquad
\operatorname{div} {\mathbf Z} = 0 \quad \mbox{ in $\widehat{K}$},
\qquad \Pi_\tau {\mathbf Z} = {\mathbf z}.
\end{align*}
As was observed above, the Lagrange multiplier $\varphi$ in fact vanishes so that we conclude that
the minimizer ${\mathbf Z}$ solves
\begin{align*}
\operatorname{\mathbf{curl}} \operatorname{\mathbf{curl}} {\mathbf Z} = 0, \qquad
\operatorname{div} {\mathbf Z} = 0,
\qquad \Pi_\tau {\mathbf Z} = {\mathbf z}.
\end{align*}
\emph{3rd~step:} We bound ${\mathbf w}:= \operatorname{\mathbf{curl}} {\mathbf Z}$. We have
\begin{align}
\label{eq:lemma:X-1/2-vs-H-1/2-curl-40-vorn}
\operatorname{\mathbf{curl}} {\mathbf w} = 0 , \qquad
\operatorname{div} {\mathbf w} = 0, \qquad
{\mathbf n} \cdot {\mathbf w} = \operatorname{curl}_{\partial\widehat{K}} {\mathbf z}.
\end{align}
{}From $\operatorname{\mathbf{curl}} {\mathbf w} = 0$, we get that ${\mathbf w}$ is a gradient:
${\mathbf w} = \nabla \psi$. The second and third conditions in (\ref{eq:lemma:X-1/2-vs-H-1/2-curl-40-vorn}) show
\begin{align*}
-\Delta \psi = 0, \qquad \partial_n \psi = {\mathbf n} \cdot {\mathbf w} =
\operatorname{curl}_{\partial\widehat{K}} {\mathbf z}.
\end{align*}
The integrability condition is satisfied since
$( {\mathbf n} \cdot {\mathbf w},1)_{L^2(\partial\widehat{K})}
= (\operatorname{div} {\mathbf w},1)_{L^2(\widehat{K})} = 0$.
Thus we conclude by standard {\sl a priori} estimates for the Laplace problem
\begin{equation}
\label{eq:lemma:X-1/2-vs-H-1/2-curl-45-vorn}
\|\operatorname{\mathbf{curl}} {\mathbf Z}\|_{L^2(\widehat{K})} =
\|{\mathbf w}\|_{L^2(\widehat{K})} = \|\nabla \psi\|_{L^2(\widehat{K})}
\lesssim \|\operatorname{curl}_{\partial\widehat{K}} {\mathbf z}\|_{H^{-1/2}(\partial\widehat{K})}.
\end{equation}
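For completeness, one way to obtain \eqref{eq:lemma:X-1/2-vs-H-1/2-curl-45-vorn} is via the weak formulation of this Neumann problem: normalizing $\psi$ to have zero mean and using the trace estimate $\Vert\psi\Vert_{H^{1/2}(\partial\widehat{K})}\lesssim\Vert\psi\Vert_{H^{1}(\widehat{K})}\lesssim\Vert\nabla\psi\Vert_{L^{2}(\widehat{K})}$ (Poincar\'e), we get
\[
\Vert\nabla\psi\Vert_{L^{2}(\widehat{K})}^{2}=(\partial_{n}\psi,\psi)_{L^{2}(\partial\widehat{K})}\leq\Vert\operatorname{curl}_{\partial\widehat{K}}{\mathbf z}\Vert_{H^{-1/2}(\partial\widehat{K})}\,\Vert\psi\Vert_{H^{1/2}(\partial\widehat{K})}\lesssim\Vert\operatorname{curl}_{\partial\widehat{K}}{\mathbf z}\Vert_{H^{-1/2}(\partial\widehat{K})}\,\Vert\nabla\psi\Vert_{L^{2}(\widehat{K})}.
\]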
\emph{4th~step:} To bound ${\mathbf Z}$, we write it with the operators
${\mathbf R}^{\operatorname{curl}}$ and $R^{\operatorname{grad}}$ of Lemma~\ref{lemma:mcintosh} as
\begin{align}
\label{eq:lemma:X-1/2-vs-H-1/2-curl-50-vorn}
& {\mathbf Z} = \nabla \phi + \widetilde {\mathbf z},
\qquad \widetilde {\mathbf z}:= {\mathbf R}^{\operatorname{curl}}(\operatorname{\mathbf{curl}}{\mathbf Z}),
\qquad \phi := R^{\operatorname{grad}} ({\mathbf Z} - \mathbf{R}^{\operatorname{curl}}(\operatorname{\mathbf {curl}} \widetilde {\mathbf z})), \\
\label{eq:lemma:X-1/2-vs-H-1/2-curl-100-vorn}
& \mbox{ with }
\|\widetilde {\mathbf z}\|_{H^1(\widehat{K})} \lesssim \|\operatorname{\mathbf{curl}} {\mathbf Z}\|_{L^2(\widehat{K})}
\lesssim \|\operatorname{curl}_{\partial\widehat{K}} {\mathbf z}\|_{H^{-1/2}(\partial\widehat{K})}.
\end{align}
For the control of $\phi$, we proceed by an integration by parts argument.
Noting that $\operatorname{div} {\mathbf Z} = 0$, we have
$$
\nabla \phi + \widetilde {\mathbf z} = {\mathbf Z}
= \operatorname{\mathbf{curl}} {\mathbf R}^{\operatorname{curl}} ({\mathbf Z})
= \operatorname{\mathbf{curl}} {\mathbf R}^{\operatorname{curl}} (\nabla \phi)
+ \operatorname{\mathbf{curl}} {\mathbf R}^{\operatorname{curl}} (\widetilde {\mathbf z}).
$$
With the integration by parts formula (\ref{eq:integration-by-parts}) (which is actually
valid for functions in ${\mathbf H}(\widehat K,\operatorname{\mathbf{curl}})$ as shown in
\cite[Thm.~{3.29}]{Monkbook}) we get
\[
(\operatorname{\mathbf{curl}} {\mathbf Z}, {\mathbf v})_{L^2(\widehat{K})}
\stackrel{(\ref{eq:integration-by-parts})}{= }
({\mathbf Z}, \operatorname{\mathbf{curl}} {\mathbf v})_{L^2(\widehat{K})}
- ({\mathbf z}, \gamma_\tau {\mathbf v})_{L^2(\partial \widehat{K})}.
\]
Selecting ${\mathbf v} = {\mathbf R}^{\operatorname{curl}}(\nabla \phi) \in {\mathbf H}^1(\widehat{K})$, we get
\begin{align*}
(\operatorname{\mathbf{curl}} {\mathbf Z},{\mathbf R}^{\operatorname{curl}}(\nabla \phi) )_{L^2(\widehat{K})} &=
(\nabla \phi + \widetilde {\mathbf z}, \nabla \phi + \widetilde {\mathbf z} - \operatorname{\mathbf{curl}} {\mathbf R}^{\operatorname{curl}}(\widetilde {\mathbf z}))_{L^2(\widehat{K})} \\
&\quad - ({\mathbf z}, \gamma_\tau {\mathbf R}^{\operatorname{curl}}(\nabla \phi))_{L^2(\partial \widehat{K})}.
\end{align*}
In view of the mapping property
${\mathbf R}^{\operatorname{curl}}:L^2(\widehat{K}) \rightarrow {\mathbf H}^1(\widehat{K})$ we obtain
\begin{align}
\label{eq:lemma:X-1/2-vs-H-1/2-curl-200-vorn}
\|\nabla \phi\|^2_{L^2(\widehat{K})} &\lesssim
\|\operatorname{\mathbf{curl}} {\mathbf Z}\|_{L^2(\widehat{K})} \|\nabla \phi\|_{L^2(\widehat{K})} +
\|\widetilde{\mathbf z}\|_{L^2(\widehat K)}
\|\widetilde{\mathbf z} - \operatorname{\mathbf{curl}} {\mathbf R}^{\operatorname{curl}} (\widetilde{\mathbf z})\|_{L^2(\widehat K)} \\
\nonumber
& \quad \mbox{} +
\|\widetilde {\mathbf z} - \operatorname*{\mathbf{curl}} {\mathbf R}^{\operatorname{curl}}(\widetilde {\mathbf z})\|_{L^2(\widehat{K})} \|\nabla \phi\|_{L^2(\widehat{K})} \\
\nonumber
& \quad \mbox{}+
\|\widetilde {\mathbf z} \|_{L^2(\widehat{K})} \|\nabla \phi\|_{L^2(\widehat{K})} +
\left| ({\mathbf z},\gamma_\tau {\mathbf R}^{\operatorname{curl}}(\nabla \phi))_{L^2(\partial\widehat K)} \right|.
\end{align}
Combining
(\ref{eq:lemma:X-1/2-vs-H-1/2-curl-50-vorn}),
(\ref{eq:lemma:X-1/2-vs-H-1/2-curl-100-vorn}),
(\ref{eq:lemma:X-1/2-vs-H-1/2-curl-200-vorn}) shows
\begin{align}
\label{eq:lemma:X-1/2-vs-H-1/2-curl-500-vorn}
\|{\mathbf Z}\|_{\mathbf{H}(\widehat{K},\operatorname{\mathbf{curl}})}
&\lesssim \|\widetilde {\mathbf z}\|_{L^2(\widehat K)} +
\|\nabla \phi\|_{L^2(\widehat K)} + \|\operatorname{\mathbf{curl}}{\mathbf Z}\|_{L^2(\widehat K)} \\
\nonumber
& \lesssim \sup_{{\mathbf v} \in {\mathbf H}^1(\widehat K)}
\frac{ ({\mathbf z},\gamma_\tau {\mathbf v})_{L^2(\partial \widehat K)}}
{\|{\mathbf v}\|_{{\mathbf H}^1(\widehat K)}}
+ \|\operatorname{curl}_{\partial \widehat K} {\mathbf z}\|_{H^{-1/2}(\partial \widehat K)}.
\end{align}
\emph{5th~step:}
Since ${\mathbf z}$ and
$\operatorname{curl}_{\partial\widehat{K}} {\mathbf z}$ are actually $L^2$-functions, the norm
$\|\cdot\|_{{\mathbf X}^{-1/2}}$ can be estimated in a localized fashion:
The continuity of the inclusions
$H^{1/2}(\partial\widehat{K}) \subset \prod_{f \in {\mathcal F}(\widehat{K})} H^{1/2}(f)$
and $\gamma_\tau {\mathbf H}^{1}(\widehat{K}) \subset
\prod_{f \in {\mathcal F}(\widehat{K})} {\mathbf H}_T^{1/2}(f)$ implies
\begin{subequations}
\label{eq:lemma:X-1/2-vs-H-1/2-curl-550-vorn}
\begin{align}
\|\operatorname{curl}_{\partial \widehat{K}} {\mathbf z}\|_{H^{-1/2}(\partial\widehat{K})} & \lesssim
\sum_{f \in {\mathcal F}(\widehat{K})} \|\operatorname{curl}_f {\mathbf z}\|_{\widetilde{H}^{-1/2}(f)}, \\
\sup_{{\mathbf v} \in {\mathbf H}^1(\widehat K)}
\frac{({\mathbf z},\gamma_\tau {\mathbf v})_{L^2(\partial\widehat K)}}
{\|{\mathbf v}\|_{{\mathbf H}^1(\widehat K)}} & \lesssim
\sum_{f \in {\mathcal F}(\widehat{K})} \|{\mathbf z}\|_{\widetilde{\mathbf{H}}^{-1/2}_T(f)}.
\end{align}
\end{subequations}
We finally obtain the desired estimate
$$
\|{\mathbf z} \|_{{\mathbf X}^{-1/2}}
\lesssim \|{\mathbf Z}\|_{{\mathbf H}(\widehat K,\operatorname{\mathbf{curl}}) }
\stackrel{\text{(\ref{eq:lemma:X-1/2-vs-H-1/2-curl-500-vorn}),
(\ref{eq:lemma:X-1/2-vs-H-1/2-curl-550-vorn})}}{\lesssim}\sum_{f \in {\mathcal F}(\widehat K)}
\|{\mathbf z}\|_{\widetilde{\mathbf{H}}^{-1/2}_T(f)} +
\|\operatorname{curl}_f {\mathbf z}\|_{\widetilde{H}^{-1/2}(f)}.
$$
This concludes the proof. We mention that an alternative proof of
the assertion (\ref{item:lemma:Hcurl-lifting-iii}) could be based on the intrinsic characterization
of the trace spaces of ${\mathbf H}(\widehat K,\operatorname{\mathbf{curl}})$ given in
\cite{BuffaCiarlet2001,BuffaCiarlet2001b}.
\end{proof}
\begin{theorem}
\label{thm:H1curl-approximation}
Let $\widehat K$ be a fixed tetrahedron. Then
there exists $C>0$ independent of $p$ such
that for all ${\mathbf{u}}\in{\mathbf{H}}^{1}(\widehat{K},\operatorname{\mathbf{curl}})$
\begin{equation}
\Vert{\mathbf{u}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{u}}%
\Vert_{{\mathbf{H}}(\widehat{K},\operatorname{\mathbf{curl}})}\leq Cp^{-1}%
\inf_{{\mathbf v} \in {\mathbf Q}_p(\widehat K)}
\Vert{\mathbf{u}} - \mathbf{v}\Vert_{{\mathbf{H}}^{1}(\widehat{K},\operatorname{\mathbf{curl}})}.
\label{eq:thmH1curl-approximation-10}%
\end{equation}
\end{theorem}
\begin{proof}
\emph{1st~step:} Since $\widehat \Pi^{\operatorname*{curl},3d}_p$ is a projection operator, it suffices to show the bound
with ${\mathbf v} = 0$ in the infimum.
\emph{2nd~step:} Write, with the operators $R^{\operatorname*{grad}}$,
${\mathbf{R}}^{\operatorname*{curl}}$ of Lemma~\ref{lemma:mcintosh}, the
function ${\mathbf{u}}\in{\mathbf{H}}^{1}(\widehat{K},\operatorname{\mathbf{curl}})$ as
${\mathbf{u}}=\nabla\varphi+{\mathbf{v}}$ with $\varphi\in H^{2}(\widehat{K})$
and ${\mathbf{v}}\in{\mathbf{H}}^{2}(\widehat{K})$. We have $\Vert\varphi
\Vert_{H^{2}(\widehat{K})}\lesssim\Vert{\mathbf{u}}\Vert_{{\mathbf{H}}%
^{1}(\widehat{K},\operatorname{\mathbf{curl}})}$ and $\Vert{\mathbf{v}}\Vert
_{{\mathbf{H}}^{2}(\widehat{K})}\lesssim\Vert\operatorname{\mathbf{curl}}{\mathbf{u}%
}\Vert_{{\mathbf{H}}^{1}(\widehat{K})}$. From the commuting diagram property,
we readily get
\begin{align*}
\Vert\nabla\varphi-\widehat \Pi^{\operatorname*{curl},3d}_p\nabla
\varphi\Vert_{{\mathbf{H}}(\widehat{K},\operatorname{\mathbf{curl}})}\!&=\Vert
\nabla(\varphi-\widehat\Pi^{\operatorname*{grad},3d}_{p+1}\varphi)\Vert
_{{\mathbf{H}}(\widehat{K},\operatorname{\mathbf{curl}})}\!=|\varphi-\widehat\Pi^{\operatorname*{grad},3d}_{p+1}\varphi|_{H^{1}(\widehat{K})} \\
& \stackrel{\text{Thm.~\ref{lemma:demkowicz-grad-3D}}}{\lesssim} p^{-1}%
\Vert\varphi\Vert_{H^{2}(\widehat{K})}.
\end{align*}
\emph{3rd~step:} We claim
\begin{equation}
\Vert\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}})\Vert_{{\mathbf{X}}^{-1/2}}\leq Cp^{-1}\Vert{\mathbf{v}}%
\Vert_{{\mathbf{H}}^{2}(\widehat{K})}. \label{eq:thm:H1curl-approximation-30}%
\end{equation}
To see this, we note ${\mathbf{v}}\in{\mathbf{H}}^{2}(\widehat{K})$ and
estimate with Lemma~\ref{lemma:Hcurl-lifting}
\begin{align*}
&\Vert\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}})\Vert_{{\mathbf{X}}^{-1/2}}\lesssim\\
& \sum_{f\in{\mathcal{F}}(\widehat{K})}\Big(\Vert\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}})\Vert_{\widetilde{\mathbf{H}}_T^{-1/2}(f)}
+\Vert\operatorname{curl}_{f}(\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}))\Vert_{\widetilde{H}^{-1/2}(f)}\Big).
\end{align*}
We consider each face $f\in{\mathcal{F}}(\widehat{K})$ separately.
Lemmas~\ref{lemma:picurl-negative-I}, \ref{lemma:picurl-negative-II}, \ref{lemma:Picurl-face} imply with
the aid of the continuity of the trace
$\Pi_\tau: {\mathbf H}^2(\widehat K) \rightarrow {\mathbf H}^{3/2}_T(f) \subset {\mathbf H}^{1/2}(f,\operatorname{curl})$
\begin{align*}
\Vert\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}})\Vert_{\widetilde{\mathbf{H}}_T^{-1/2}(f)} &
\stackrel{\text{Lem.~\ref{lemma:picurl-negative-I}}}{\lesssim} p^{-1/2}\Vert\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}})\Vert_{{\mathbf{H}}(f,\operatorname{curl})}\\
& \stackrel{\text{Lem.~\ref{lemma:Picurl-face}}}{\lesssim}
p^{-1/2-1/2}\Vert\Pi_{\tau}{\mathbf{v}}\Vert_{{\mathbf{H}}^{1/2}(f,\operatorname{curl})}
\lesssim p^{-1}\Vert{\mathbf{v}}\Vert_{{\mathbf{H}}^{2}(\widehat{K})},\\
\Vert\operatorname{curl}_f(\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}))\Vert_{\widetilde{H}^{-1/2}(f)} &
\stackrel{\text{Lem.~\ref{lemma:picurl-negative-II}}}{\lesssim}
p^{-1/2}\Vert\operatorname{curl}_f(\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}))\Vert_{L^{2}(f)}\\
& \lesssim p^{-1/2-1/2} \Vert\Pi_{\tau}{\mathbf{v}}\Vert_{{\mathbf{H}}^{1/2}(f,\operatorname{curl})}
\lesssim p^{-1}\Vert{\mathbf{v}}\Vert_{{\mathbf{H}}^{2}(\widehat{K})}.
\end{align*}
\emph{4th~step:} Since ${\mathbf{v}}\in{\mathbf{H}}^{2}(\widehat{K})$, the
approximation $P^{\operatorname*{curl},3d}{\mathbf{v}}\in
\mathbf{Q}_p(\widehat{K})$ given by
Lemma~\ref{lemma:Pcurl3d} satisfies
\begin{equation}
\Vert{\mathbf{v}}-P^{\operatorname*{curl},3d}{\mathbf{v}}\Vert_{{\mathbf{H}%
}(\widehat{K},\operatorname{\mathbf{curl}})}\leq Cp^{-1}\Vert{\mathbf{v}}%
\Vert_{{\mathbf{H}}^{2}(\widehat{K})}. \label{eq:thm:H1curl-approximation-10c}%
\end{equation}
We note
\begin{align*}
\Vert{\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}%
\Vert_{{\mathbf{H}}(\widehat{K},\operatorname{\mathbf{curl}})} & \leq\Vert
{\mathbf{v}}-P^{\operatorname*{curl},3d}{\mathbf{v}}\Vert_{{\mathbf{H}%
}(\widehat{K},\operatorname{\mathbf{curl}})}
+\Vert\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}-P^{\operatorname*{curl},3d}{\mathbf{v}%
}\Vert_{{\mathbf{H}}(\widehat{K},\operatorname{\mathbf{curl}})}\\
& \leq p^{-1}\Vert{\mathbf{v}}\Vert_{{\mathbf{H}}^{2}(\widehat{K})}%
+\Vert\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}%
-P^{\operatorname*{curl},3d}{\mathbf{v}}\Vert_{{\mathbf{H}}(\widehat{K}%
,\operatorname{\mathbf{curl}})}.
\end{align*}
For the term $\Vert\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}%
}-P^{\operatorname*{curl},3d}{\mathbf{v}}\Vert_{{\mathbf{H}}(\widehat{K}%
,\operatorname{\mathbf{curl}})}$, we introduce the abbreviation ${\mathbf{E}%
}:=\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}%
-P^{\operatorname*{curl},3d}{\mathbf{v}}\in\mathbf{Q}_p(\widehat{K})$ and observe that the orthogonality
conditions (\ref{eq:Pi_curl-b}), (\ref{eq:Pi_curl-a}) satisfied by
$\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}$ and the conditions
(\ref{eq:lemma:Pcurl3d-a}), (\ref{eq:lemma:Pcurl3d-b}) satisfied by
$P^{\operatorname*{curl},3d}{\mathbf{v}}$, lead to two orthogonalities:
\begin{subequations}
\label{eq:orth-3d}%
\begin{align}
\label{eq:orth-3d-a}%
(\operatorname{\mathbf{curl}}{\mathbf{E}},\operatorname{\mathbf{curl}}{\mathbf{w}}%
)_{L^{2}(\widehat{K})}& =0\quad\forall{\mathbf{w}}\in\mathring{\mathbf{Q}}_{p}(\widehat{K}),\\
\label{eq:orth-3d-b}%
({\mathbf{E}},\nabla w)_{L^{2}(\widehat{K})}%
&=0\quad\forall w\in\mathring{W}_{p+1}(\widehat{K}).
\end{align}
\end{subequations}
By Lemma~\ref{lemma:Hcurl-lifting}, the orthogonality condition
\begin{align*}
(\boldsymbol{\mathcal{L}}^{\operatorname*{curl},3d}_p\Pi_\tau \mathbf{E},\nabla w)_{L^2(\widehat{K})} = 0 \quad \forall w\in\mathring{W}_{p+1}(\widehat{K})
\end{align*}
holds. Hence, the discrete Friedrichs inequality of
Lemma~\ref{lemma:discrete-friedrichs-3d} is applicable to
$\mathbf{E}-\boldsymbol{\mathcal{L}}^{\operatorname*{curl},3d}_p\Pi_\tau \mathbf{E}$, and we get
\begin{align}
\Vert{\mathbf{E}}\Vert_{L^{2}(\widehat{K})} & \leq\Vert{\boldsymbol{\mathcal{L}}%
}^{\operatorname*{curl},3d}_p\Pi_{\tau}{\mathbf{E}}\Vert_{L^{2}(\widehat{K}%
)}+\Vert{\mathbf{E}}-{\boldsymbol{\mathcal{L}}}^{\operatorname*{curl},3d}_p\Pi_{\tau
}{\mathbf{E}}\Vert_{L^{2}(\widehat{K})}%
\label{eq:thm:H1curl-approximation-100}\\
\nonumber
& \lesssim\Vert{\boldsymbol{\mathcal{L}}}^{\operatorname*{curl},3d}_p\Pi_{\tau}{\mathbf{E}%
}\Vert_{L^{2}(\widehat{K})}+\Vert\operatorname{\mathbf{curl}}({\mathbf{E}}%
-{\boldsymbol{\mathcal{L}}}^{\operatorname*{curl},3d}_p\Pi_{\tau}{\mathbf{E}})\Vert
_{L^{2}(\widehat{K})}\\
\nonumber
&\lesssim\Vert{\boldsymbol{\mathcal{L}}}^{\operatorname*{curl},3d}_p%
\Pi_{\tau}{\mathbf{E}}\Vert_{\mathbf{H}(\widehat{K},\operatorname{\mathbf{curl}})}%
+\Vert\operatorname{\mathbf{curl}}{\mathbf{E}}\Vert_{L^{2}(\widehat{K})}
\\
\nonumber
& \lesssim\Vert\Pi_{\tau}{\mathbf{E}}\Vert_{{\mathbf{X}}^{-1/2}}%
+\Vert\operatorname{\mathbf{curl}}{\mathbf{E}}\Vert_{L^{2}(\widehat{K})}.
\end{align}
Using again the lifting $\boldsymbol{\mathcal{L}}^{\operatorname*{curl},3d}_p$ of
Lemma~\ref{lemma:Hcurl-lifting} and \eqref{eq:orth-3d-a}, we get
\begin{equation}
\Vert\operatorname{\mathbf{curl}}{\mathbf{E}}\Vert_{L^{2}(\widehat{K})}\leq
\Vert\operatorname{\mathbf{curl}}{\boldsymbol{\mathcal{L}}}^{\operatorname*{curl},3d}_p\Pi_{\tau
}{\mathbf{E}}\Vert_{L^{2}(\widehat{K})}\lesssim\Vert\Pi_{\tau}{\mathbf{E}%
}\Vert_{{\mathbf{X}}^{-1/2}}. \label{eq:thm:H1curl-approximation-200}%
\end{equation}
We conclude the proof by observing
\begin{align*}
& \Vert{\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}%
\Vert_{\mathbf{H}(\widehat{K},\operatorname{\mathbf{curl}})} \leq\Vert{\mathbf{v}%
}-P^{\operatorname*{curl},3d}{\mathbf{v}}\Vert_{\mathbf{H}(\widehat{K}%
,\operatorname{\mathbf{curl}})}+\Vert{\mathbf{E}}\Vert_{\mathbf{H}(\widehat{K}%
,\operatorname{\mathbf{curl}})}
\\
&\quad \overset{(\ref{eq:thm:H1curl-approximation-100}),(\ref{eq:thm:H1curl-approximation-200})}
{\lesssim}
\Vert {\mathbf v}-P^{\operatorname*{curl},3d}{\mathbf{v}}\Vert_{\mathbf{H}(\widehat{K}%
,\operatorname{\mathbf{curl}})}
+\Vert\Pi_{\tau}{\mathbf{E}}%
\Vert_{{\mathbf{X}}^{-1/2}}\\
& \quad \lesssim
\Vert{\mathbf{v}}-P^{\operatorname*{curl},3d}{\mathbf{v}}\Vert_{\mathbf{H}(\widehat{K}%
,\operatorname{\mathbf{curl}})}+\Vert\Pi_{\tau}({\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}})\Vert_{{\mathbf{X}}^{-1/2}}
\!\!\!\!
\overset{(\ref{eq:thm:H1curl-approximation-30}%
),(\ref{eq:thm:H1curl-approximation-10c})}{\lesssim}
\! \!\!\!
p^{-1}\Vert{\mathbf{v}%
}\Vert_{{\mathbf{H}}^{2}(\widehat{K})}.
\qedhere
\end{align*}
\end{proof}
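As a purely illustrative aside, the gain of one power of $p$ exploited in the 2nd~step above can be reproduced in a simple one-dimensional model computation. The following \texttt{Python}/\texttt{NumPy} sketch rests on simplifying assumptions made only for this illustration: the tetrahedron is replaced by the interval $(-1,1)$, and the seminorm $|\varphi-\widehat\Pi^{\operatorname*{grad},3d}_{p+1}\varphi|_{H^{1}}$ is replaced by the best $L^2$-approximation error of $\varphi'$ by polynomials of degree $p$; it is a sketch, not an implementation of the operators of this paper. The chosen $\varphi'(x)=|x|^{0.6}$ satisfies $\varphi''\in L^2(-1,1)$, i.e. $\varphi \in H^2$, so the observed algebraic decay can be compared with the $p^{-1}\Vert\varphi\Vert_{H^2(\widehat K)}$ bound of the 2nd~step.

\begin{verbatim}
# Illustrative 1d sketch (interval instead of tetrahedron): for phi'(x)=|x|^0.6
# one has phi'' in L^2, i.e. phi in H^2, and the best L^2-approximation of phi'
# by polynomials of degree p decays algebraically, to be compared with the
# p^{-1}*||phi||_{H^2} bound used in the 2nd step.
import numpy as np
from numpy.polynomial import legendre as L

m = 400
xg, wg = L.leggauss(m)
# composite Gauss rule on [-1,0] and [0,1] (the integrand is singular only at 0)
x = np.concatenate([0.5 * (xg - 1.0), 0.5 * (xg + 1.0)])
w = np.concatenate([0.5 * wg, 0.5 * wg])

dphi = np.abs(x) ** 0.6

def best_l2_error(f_vals, p):
    """L^2(-1,1) error of the orthogonal projection onto polynomials of degree <= p."""
    V = L.legvander(x, p)                                   # Legendre basis at the nodes
    c = (V.T @ (w * f_vals)) * (2 * np.arange(p + 1) + 1) / 2.0
    return np.sqrt(np.sum(w * (f_vals - V @ c) ** 2))

errs = {p: best_l2_error(dphi, p) for p in (4, 8, 16, 32)}
for p, e in errs.items():
    print(f"p={p:2d}   inf_q ||phi' - q||_L2 = {e:.3e}")
rate = np.log(errs[16] / errs[32]) / np.log(2.0)
print(f"observed rate: p^(-{rate:.2f})   (compare the p^(-1) bound of the 2nd step)")
\end{verbatim}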
For negative norm estimates $\|{\mathbf u} - \widehat \Pi^{\operatorname*{curl},3d}_p {\mathbf u}\|_{\widetilde {\mathbf H}^{-s}(\widehat K,
\operatorname{\mathbf{curl}})}$ with $s \ge 0$ we need the following Helmholtz decompositions:
\begin{lemma}[Helmholtz decomposition]
\label{lemma:helmholtz-3d}
Any ${\mathbf v} \in {\mathbf H}^1(\widehat K)$ can be written as
\begin{align}
\label{eq:lemma:helmholtz-3d-10}
{\mathbf v} & = \nabla \varphi_0 + \operatorname{\mathbf{ curl}} \operatorname{\mathbf{curl}} {\mathbf z}_0, \\
\label{eq:lemma:helmholtz-3d-20}
{\mathbf v} & = \nabla \varphi_1 + \operatorname{\mathbf{curl}} {\mathbf z}_1,
\end{align}
where $\varphi_0 \in H^2(\widehat K) \cap H^1_0(\widehat K)$ and
${\mathbf z}_0 \in {\mathbf H}^1(\widehat K,\operatorname{\mathbf{curl}}) \cap {\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}})$ and
where $\varphi_1 \in H^2(\widehat K)$ and
${\mathbf z}_1 \in {\mathbf H}^1(\widehat K,\operatorname{\mathbf{curl}}) \cap {\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}})$ together with the estimates
\begin{align*}
\|\varphi_0\|_{H^2(\widehat K)} +
\|{\mathbf z}_0\|_{{\mathbf H}^1(\widehat K,\operatorname{\mathbf{curl}})}
& \leq C \|{\mathbf v}\|_{{\mathbf H}^1(\widehat K)}, \\
\|\varphi_1\|_{H^2(\widehat K)} +
\|{\mathbf z}_1\|_{{\mathbf H}^1(\widehat K,\operatorname{\mathbf{curl}})}
& \leq C \|{\mathbf v}\|_{{\mathbf H}^1(\widehat K)}.
\end{align*}
\end{lemma}
\begin{proof}
Before proving these decompositions, we recall the continuous embeddings
\begin{equation}
\label{eq:saranen}
{\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}}) \cap {\mathbf H}(\widehat K,\operatorname{div}) \subset {\mathbf H}^1(\widehat K)
\quad \mbox{ and } \quad
{\mathbf H}(\widehat K,\operatorname{\mathbf{curl}}) \cap {\mathbf H}_0(\widehat K,\operatorname{div})
\subset {\mathbf H}^1(\widehat K),
\end{equation}
which hinge on the convexity of $\widehat K$ (see \cite{birman-solomyak87,saranen82} and the discussion
in \cite[Rem.~{3.48}]{Monkbook}).
We construct the decomposition (\ref{eq:lemma:helmholtz-3d-20}):
We define $\varphi_1 \in H^1(\widehat K)$ as the solution of
$$
-\Delta \varphi_1 = -\operatorname{div} {\mathbf v} \quad \mbox{ in $\widehat K$},
\qquad \partial_n \varphi_1 = {\mathbf n} \cdot {\mathbf v} \quad \mbox{ on $\partial \widehat K$.}
$$
The contribution ${\mathbf z}_1$ is defined by the following saddle point problem:
Find $({\mathbf z}_1, \psi) \in {\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}}) \times H^1_0(\widehat K)$
such that
\begin{align*}
(\operatorname{\mathbf{curl}} {\mathbf z}_1,
\operatorname{\mathbf{curl}} {\mathbf w})_{L^2(\widehat K)} - (\nabla \psi,{\mathbf w})_{L^2(\widehat K)} & =
(\operatorname{\mathbf{curl}} {\mathbf v} ,{\mathbf w})_{L^2(\widehat K)} \qquad \forall {\mathbf w} \in {\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}}),
\\
( {\mathbf z}_1,\nabla q)_{L^2(\widehat K)} &=0 \qquad \forall q \in H^1_0(\widehat K).
\end{align*}
This problem is uniquely solvable; since $\operatorname{div} \operatorname{\mathbf{curl}} {\mathbf v} =0$,
we get $\psi = 0$ as well as the {\sl a priori} estimate
\begin{align*}
\|{\mathbf z}_1\|_{{\mathbf H}(\operatorname{\mathbf{curl}},\widehat K)}
\lesssim \|\operatorname{\mathbf{curl}} {\mathbf v}\|_{L^2(\widehat K)}
\lesssim \|{\mathbf v}\|_{{\mathbf H}^1(\widehat K)}.
\end{align*}
(In the proof of Lemma~\ref{lemma:Hcurl-lifting}, we considered a similar problem in a discrete setting; here,
the appeal to the discrete Friedrichs inequality of Lemma~\ref{lemma:discrete-friedrichs-3d} needs to be replaced with an appeal to the continuous
one, \cite[Cor.~{3.51}]{Monkbook}.)
{}From $\operatorname{div} {\mathbf z}_1 = 0$ and (\ref{eq:saranen}), we furthermore infer
$\|{\mathbf z}_1\|_{{\mathbf H}^1(\widehat K)} \lesssim \|{\mathbf v}\|_{{\mathbf H}^1(\widehat K)}$.
The representation (\ref{eq:lemma:helmholtz-3d-20}) is obtained from the observation that the difference
${\boldsymbol\delta}:= {\mathbf v} - \nabla \varphi_1 - \operatorname{\mathbf{curl}} {\mathbf z}_1$ satisfies,
by construction, $\operatorname{div} {\boldsymbol \delta} = 0$, $\operatorname{\mathbf{curl}}{\boldsymbol\delta} = 0$,
${\mathbf n} \cdot {\boldsymbol\delta} =
({\mathbf n} \cdot {\mathbf v} - \partial_n\varphi_1) - {\mathbf n} \cdot \operatorname{\mathbf{curl}}{\mathbf z}_1 =
0 - \operatorname{curl}_{\partial\widehat K} \Pi_\tau {\mathbf z}_1 = 0 - 0 = 0$ so that again
(\ref{eq:saranen}) (specifically, in the form \cite[Cor.~{3.51}]{Monkbook}) implies ${\boldsymbol \delta} = 0$.
Finally, from ${\mathbf v} \in {\mathbf H}^1(\widehat K)$, $\varphi_1 \in H^2(\widehat K)$ and the representation
(\ref{eq:lemma:helmholtz-3d-20}), we infer $\operatorname{\mathbf{curl}} {\mathbf z}_1 \in {\mathbf H}^1(\widehat K)$.
We construct the decomposition (\ref{eq:lemma:helmholtz-3d-10}):
We define $\varphi_0 \in H^1_0(\widehat K)$ as the solution of
$$
-\Delta \varphi_0 = -\operatorname{div} {\mathbf v} \quad \mbox{ in $\widehat K$},
\qquad \varphi_0 = 0 \quad \mbox{ on $\partial \widehat K$.}
$$
Next, we define $({\mathbf z}_0,\psi) \in {\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}}) \times H^1_0(\widehat K)$ as the solution
of the saddle point problem
\begin{align*}
(\operatorname{\mathbf{curl}} {\mathbf z}_0,
\operatorname{\mathbf{curl}} {\mathbf w})_{L^2(\widehat K)} - (\nabla \psi,{\mathbf w})_{L^2(\widehat K)} & =
({\mathbf v} - \nabla \varphi_0,{\mathbf w})_{L^2(\widehat K)} \quad \forall {\mathbf w} \in {\mathbf H}_0(\widehat K,\operatorname{\mathbf{curl}}),
\\
( {\mathbf z}_0,\nabla q)_{L^2(\widehat K)} &=0 \qquad \forall q \in H^1_0(\widehat K).
\end{align*}
Again, this problem is uniquely solvable and, in fact $\psi = 0$
(since $\operatorname{div} ({\mathbf v} - \nabla \varphi_0) = 0$).
We have
$\|{\mathbf z}_0\|_{{\mathbf H}(\operatorname{\mathbf{curl}},\widehat K)}
\lesssim \|{\mathbf v} - \nabla \varphi_0\|_{L^2(\widehat K)} \lesssim \|{\mathbf v}\|_{L^2(\widehat K)}$.
Since $\operatorname{div} {\mathbf z}_0 = 0$, we get from (\ref{eq:saranen}) that
$\|{\mathbf z}_0\|_{{\mathbf H}^1(\widehat K)} \lesssim \|{\mathbf v}\|_{L^2(\widehat K)}$. Finally,
an integration by parts reveals
$$
\operatorname{\mathbf{curl}}
\operatorname{\mathbf{curl}} {\mathbf z_0} = {\mathbf v} - \nabla \varphi_0,
$$
which is the representation (\ref{eq:lemma:helmholtz-3d-10}).
\end{proof}
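As an illustration of the first step of both constructions above, namely peeling off the gradient part by a Poisson solve, the following \texttt{Python}/\texttt{NumPy} sketch performs the analogous splitting in a strongly simplified discrete setting. The periodic two-dimensional finite difference grid, the chosen resolution, and the minimum-norm solve are assumptions made purely for this illustration; the sketch does not reproduce the tetrahedron, the Neumann boundary condition, or the potentials ${\mathbf z}_0$, ${\mathbf z}_1$.

\begin{verbatim}
# Minimal sketch of the gradient split v = grad(phi) + (divergence-free remainder)
# on a periodic 2d grid; an illustration only -- it does not reproduce the
# tetrahedron, the Neumann boundary condition, or the potentials z_0, z_1 above.
import numpy as np

n, h = 32, 1.0 / 32
I = np.eye(n)
D = (np.roll(np.eye(n), -1, axis=0) - np.eye(n)) / h   # periodic forward difference
Dx, Dy = np.kron(D, I), np.kron(I, D)                   # partial derivatives
G = np.vstack([Dx, Dy])                                 # discrete gradient
div = -G.T                                              # discrete divergence (negative adjoint)

rng = np.random.default_rng(1)
v = rng.standard_normal(2 * n * n)                      # arbitrary vector field

# discrete Neumann-type problem: G^T G phi = G^T v; the matrix is singular
# (constants), so we take the minimum-norm solution.
phi, *_ = np.linalg.lstsq(G.T @ G, G.T @ v, rcond=None)

remainder = v - G @ phi                                 # this part is divergence free
print("||div(v - grad phi)|| =", np.linalg.norm(div @ remainder))
print("||div v||             =", np.linalg.norm(div @ v))
\end{verbatim}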
We control the approximation error in negative Sobolev norms.
\begin{theorem}
\label{thm:duality-again}
Assume that all interior angles of the 4 faces of $\widehat K$ are smaller than $2\pi/3$. Then
for $s \in [0,1]$ and all $\mathbf{u}\in \mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})$
there holds the estimate
\begin{align*}
\Vert\mathbf{u}-\widehat \Pi^{\operatorname*{curl},3d}_p\mathbf{u}\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{\mathbf{curl}})}
\leq C_s p^{-(1+s)} \inf_{{\mathbf v} \in {\mathbf Q}_p(\widehat{K})} \Vert\mathbf{u} - \mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})}.
\end{align*}
\end{theorem}
\begin{proof}
By the familiar argument that $\widehat \Pi^{\operatorname*{curl},3d}_p$ is a projection, we may restrict the proof to the case
${\mathbf v} = 0$ in the infimum. The case $s = 0$ is covered by
Theorem~\ref{thm:H1curl-approximation}. In the remainder of the proof, we will show the case $s = 1$ as the
case $s \in (0,1)$ then follows by an interpolation argument.
We write $\mathbf{E}:=\mathbf{u}-\widehat \Pi^{\operatorname*{curl},3d}_p\mathbf{u}$ for simplicity.
By definition we have
\begin{align}
\label{eq:lemma:duality-again-100}
\Vert\mathbf{E}\Vert_{\widetilde{\mathbf{H}}^{-1}(\widehat{K},\operatorname{\mathbf{curl}})}
& \sim \Vert\mathbf{E}\Vert_{\widetilde{\mathbf{H}}^{-1}(\widehat{K})} + \Vert\operatorname{\mathbf{curl}}\mathbf{E}\Vert_{\widetilde{\mathbf{H}}^{-1}(\widehat{K})} \\
\nonumber
&=
\operatorname*{sup}_{\mathbf{v}\in\mathbf{H}^1(\widehat{K})} \frac{(\mathbf{E},\mathbf{v})_{L^2(\widehat{K})}}{\Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})}} +
\operatorname*{sup}_{\mathbf{v}\in\mathbf{H}^1(\widehat{K})} \frac{(\operatorname{\mathbf{curl}} \mathbf{E},\mathbf{v})_{L^2(\widehat{K})}}{\Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})}}.
\end{align}
We start with estimating the first supremum in (\ref{eq:lemma:duality-again-100}).
According to Lemma~\ref{lemma:helmholtz-3d}, any $\mathbf{v}\in\mathbf{H}^1(\widehat{K})$ can be decomposed as
\begin{align*}
\mathbf{v}=\nabla\varphi + \operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}\mathbf{z}
\end{align*}
with $\varphi\in H^2(\widehat{K}) \cap H_0^1(\widehat{K})$ and $\mathbf{z}\in\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}}) \cap \mathbf{H}_0(\widehat{K},\operatorname{\mathbf{curl}})$. We also observe
$\operatorname{\mathbf{curl}}\mathbf{z}\in \mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})$. Thus by Lemma~\ref{lemma:helmholtz-like-decomp} we can further decompose $\operatorname{\mathbf{curl}}\mathbf{z}$ as
\begin{align}
\label{eq:lemma:duality-again-25}
\operatorname{\mathbf{curl}}\mathbf{z} = \nabla\varphi_2 + \mathbf{z}_2
\end{align}
with $\varphi_2\in H^2(\widehat{K})$ and $\mathbf{z}_2 \in \mathbf{H}^2(\widehat{K})$. We estimate each
term in the decomposition
$(\mathbf{E},\mathbf{v})_{L^2(\widehat{K})} =
(\mathbf{E},\nabla\varphi)_{L^2(\widehat{K})} +
(\mathbf{E},\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}\mathbf{z})_{L^2(\widehat{K})}$
separately. Using the orthogonality condition \eqref{eq:Pi_curl-b} and Theorem~\ref{thm:H1curl-approximation}, we get
\begin{align}
\nonumber
\bigl|(\mathbf{E},\nabla\varphi)_{L^2(\widehat{K})}\bigr| &=
\bigl| \operatorname*{inf}_{w\in\mathring{W}_{p+1}(\widehat{K})} (\mathbf{E},\nabla(\varphi-w))_{L^2(\widehat{K})} \bigr|\lesssim p^{-1} \Vert\varphi\Vert_{H^2(\widehat{K})} \Vert\mathbf{E}\Vert_{L^2(\widehat{K})} \\
\label{eq:lemma:duality-again-30}
&\lesssim p^{-1} \Vert\mathbf{v}\Vert_{H^1(\widehat{K})} \Vert\mathbf{E}\Vert_{\mathbf{H}(\widehat{K},\operatorname{\mathbf{curl}})} \lesssim p^{-2} \Vert\mathbf{v}\Vert_{H^1(\widehat{K})} \Vert\mathbf{u}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})}.
\end{align}
Integration by parts and (\ref{eq:lemma:duality-again-25}) give
\begin{align}
\nonumber
&(\mathbf{E},\operatorname{\mathbf{curl}}\operatorname{\mathbf{curl}}\mathbf{z})_{L^2(\widehat{K})}
=
(\mathbf{E},\operatorname{\mathbf{curl}}\mathbf{z}_2)_{L^2(\widehat{K})}
\\ \nonumber & \quad
=
(\operatorname{\mathbf{curl}} \mathbf{E},\mathbf{z}_2)_{L^2(\widehat{K})} +
(\Pi_\tau \mathbf{E},\gamma_\tau \mathbf{z}_2)_{L^2(\partial \widehat{K})} \\
\nonumber
& \quad =
(\operatorname{\mathbf{curl}} \mathbf{E},\operatorname{\mathbf{curl}} \mathbf{z})_{L^2(\widehat{K})} -
(\operatorname{\mathbf{curl}} \mathbf{E},\nabla \varphi_2)_{L^2(\widehat{K})} +
(\Pi_\tau \mathbf{E},\gamma_\tau \mathbf{z}_2)_{L^2(\partial \widehat{K})} \\
\label{eq:lemma:duality-again-50}
& \quad =
(\operatorname{\mathbf{curl}} \mathbf{E},\operatorname{\mathbf{curl}} \mathbf{z})_{L^2(\widehat{K})} -
({\mathbf n} \cdot \operatorname*{\mathbf{curl}} \mathbf{E},\varphi_2)_{L^2(\partial \widehat{K})} +
(\Pi_\tau \mathbf{E},\gamma_\tau \mathbf{z}_2)_{L^2(\partial \widehat{K})}.
\end{align}
We estimate these three terms separately. For the first term in (\ref{eq:lemma:duality-again-50}), we
use the orthogonality \eqref{eq:Pi_curl-a} and Theorem~\ref{thm:H1curl-approximation} to get
\begin{align}
\label{eq:lemma:duality-again-52}
& \bigl| (\operatorname{\mathbf{curl}}\mathbf{E},\operatorname{\mathbf{curl}}\mathbf{z})_{L^2(\widehat{K})}
\bigr| = \bigl| \operatorname*{inf}_{\mathbf{w}\in\mathring{\mathbf{Q}}_p(\widehat{K})} (\operatorname{\mathbf{curl}}\mathbf{E},\operatorname{\mathbf{curl}}(\mathbf{z}-\mathbf{w}))_{L^2(\widehat{K})}
\bigr|\\
\nonumber
& \qquad \qquad \lesssim p^{-1} \Vert\operatorname{\mathbf{curl}}\mathbf{E}\Vert_{L^2(\widehat{K})} \Vert\mathbf{z}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})} \\
\nonumber
&
\qquad
\qquad
\lesssim p^{-1} \Vert\mathbf{v}\Vert_{H^1(\widehat{K})} \Vert\mathbf{E}\Vert_{\mathbf{H}(\widehat{K},\operatorname{\mathbf{curl}})}
\lesssim p^{-2} \Vert\mathbf{v}\Vert_{H^1(\widehat{K})} \Vert\mathbf{u}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})},
\end{align}
cf. also the proof of Lemma~\ref{lemma:picurl-negative-I} for the approximation arguments (use the lifting of Lemma~\ref{lemma:Hcurl-lifting}). For the second term in (\ref{eq:lemma:duality-again-50}), we note that
$\operatorname*{\mathbf{curl}} {\mathbf E} \in {\mathbf H}^1(\widehat K)$ so that
the integral over $\partial\widehat K$ can be split into a sum of face
contributions and $({\mathbf n} \cdot
\operatorname*{\mathbf{curl}} {\mathbf E})|_f = \operatorname*{curl}_f \Pi_\tau {\mathbf E}$.
We also observe that our assumption on the angles of the faces of $\widehat K$
allows us to select $s = 3/2$ in Lemmas~\ref{lemma:picurl-negative-II} and \ref{lemma:Picurl-face}
since the pertinent $\widehat s$ satisfies $\widehat s > 3/2 = \pi/(2 \pi/3)$.
We get for each face contribution
\begin{align}
\label{eq:lemma:duality-again-61}
\bigl| (\operatorname{curl}_f\Pi_\tau\mathbf{E},\varphi_2)_{L^2(f)}\bigr|
& \stackrel{\text{Lem.~\ref{lemma:picurl-negative-II}}}{\lesssim} p^{-3/2} \Vert\operatorname{curl}_f\Pi_\tau\mathbf{E}\Vert_{L^2(f)} \Vert\varphi_2\Vert_{H^{3/2}(f)} \\
&\stackrel{\text{Lem.~\ref{lemma:Picurl-face}}}{\lesssim} p^{-2} \Vert\Pi_\tau\mathbf{u}\Vert_{\mathbf{H}^{1/2}(f,\operatorname{curl})} \Vert\varphi_2\Vert_{H^2(\widehat{K})} \nonumber
\\ \nonumber &
\lesssim p^{-2} \Vert\mathbf{u}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})} \Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})}.
\end{align}
Finally, for the third term in (\ref{eq:lemma:duality-again-50}) we infer with
Lemmas~\ref{lemma:picurl-negative-I}, \ref{lemma:Picurl-face}
\begin{align}
\label{eq:lemma:duality-again-54}
\left| (\Pi_\tau\mathbf{E},\gamma_\tau\mathbf{z}_2)_{L^2(f)}\right|
&\stackrel{\text{Lem.~\ref{lemma:picurl-negative-I}}}{\lesssim} p^{-3/2} \Vert\Pi_\tau\mathbf{E}\Vert_{\mathbf{H}(f,\operatorname{curl})} \Vert\gamma_\tau\mathbf{z}_2\Vert_{\mathbf{H}^{3/2}(f)}\\
\nonumber
&\stackrel{\text{Lem.~\ref{lemma:Picurl-face}}}{\lesssim} p^{-2}\Vert\Pi_\tau\mathbf{u}\Vert_{\mathbf{H}^{1/2}(f,\operatorname{curl})} \Vert\mathbf{z_2}\Vert_{\mathbf{H}^2(\widehat{K})} \\
\nonumber
& \lesssim p^{-2} \Vert\mathbf{u}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})} \Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})}.
\end{align}
Summing \eqref{eq:lemma:duality-again-61} and \eqref{eq:lemma:duality-again-54} over all faces
and combining with (\ref{eq:lemma:duality-again-52}) shows that the first supremum in
(\ref{eq:lemma:duality-again-100}) is bounded in the desired fashion.
To estimate the second supremum in (\ref{eq:lemma:duality-again-100}),
we decompose $\mathbf{v}\in\mathbf{H}^1(\widehat{K})$ as
\begin{align*}
\mathbf{v}=\nabla\varphi+\operatorname{\mathbf{curl}}\mathbf{z}
\end{align*}
with $\varphi\in H^2(\widehat{K})$ and $\mathbf{z} \in \mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}}) \cap \mathbf{H}_0(\widehat{K},\operatorname{\mathbf{curl}})$ according to Lemma~\ref{lemma:helmholtz-3d}. Thus we have
to control the expression $(\operatorname{\mathbf{curl}}\mathbf{E},\mathbf{v})_{L^2(\widehat{K})} = (\operatorname{\mathbf{curl}}\mathbf{E},\operatorname{\mathbf{curl}}\mathbf{z})_{L^2(\widehat{K})} + (\operatorname{\mathbf{curl}}\mathbf{E},\nabla\varphi)_{L^2(\widehat{K})}$. Using the orthogonality condition \eqref{eq:Pi_curl-a} and Theorem~\ref{thm:H1curl-approximation}, the first term is estimated by
\begin{align*}
\bigl| (\operatorname{\mathbf{curl}}\mathbf{E}&,\operatorname{\mathbf{curl}}\mathbf{z})_{L^2(\widehat{K})}
\bigr| =\bigl| \operatorname*{inf}_{\mathbf{w}\in\mathring{\mathbf{Q}}_p(\widehat{K})} (\operatorname{\mathbf{curl}}\mathbf{E},\operatorname{\mathbf{curl}}(\mathbf{z}-\mathbf{w}))_{L^2(\widehat{K})} \bigr| \\
& \lesssim p^{-1} \Vert\mathbf{E}\Vert_{\mathbf{H}(\widehat{K},\operatorname{\mathbf{curl}})} \Vert\mathbf{z}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})}
\lesssim p^{-2} \Vert\mathbf{u}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})} \Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})}.
\end{align*}
For the second term, in view of
$\operatorname{curl}_f \Pi_\tau {\mathbf E} = {\mathbf n} \cdot \operatorname{\mathbf{curl}} {\mathbf E}$, an integration by parts yields
\begin{align*}
(\operatorname{\mathbf{curl}}\mathbf{E},\nabla\varphi)_{L^2(\widehat{K})} = \sum_{f\in\mathcal{F}(\widehat{K})}(\operatorname{curl}_f\Pi_\tau\mathbf{E},\varphi)_{L^2(f)},
\end{align*}
where the decomposition into face contributions is again permitted by the regularity of ${\mathbf E}$ and $\varphi$.
We obtain
\begin{align*}
\bigl| (\operatorname{curl}_f\Pi_\tau\mathbf{E},\varphi)_{L^2(f)}\bigr|\! \lesssim p^{-3/2} \Vert\Pi_\tau\mathbf{E}\Vert_{\mathbf{H}(f,\operatorname{curl})} \Vert\varphi\Vert_{H^{3/2}(f)}\! \lesssim p^{-2} \Vert\mathbf{u}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})} \Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})}
\end{align*}
by Lemmas~\ref{lemma:picurl-negative-II} and \ref{lemma:Picurl-face}, which finishes the proof.
\end{proof}
For functions ${\mathbf u}$ whose $\operatorname{\mathbf{curl}}$ is discrete, i.e., lies in ${\mathbf V}_p(\widehat K)$, we have the following result.
\begin{lemma}
\label{lemma:better-regularity}
Assume that all interior angles of the $4$ faces of $\widehat K$ are smaller than $2 \pi/3$.
Then for all $k\geq1$ and
all ${\mathbf{u}}\in{\mathbf{H}}^{k}(\widehat{K})$ with $\operatorname*{\mathbf{curl}}%
{\mathbf{u}}\in {\mathbf V}_p(\widehat K) \supset ({\mathcal{P}}_{p}(\widehat{K}))^{3}$
\begin{equation}
\Vert{\mathbf{u}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{u}}%
\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{\mathbf{curl}})}\leq C_{s,k}p^{-(k+s)}\Vert{\mathbf{u}}\Vert
_{\mathbf{H}^{k}(\widehat{K})}, \qquad s\in [0,1].
\label{eq:proposition:better-regularity}%
\end{equation}
If $p\geq k-1$, then $\Vert{\mathbf{u}}\Vert_{{\mathbf{H}}%
^{k}(\widehat{K})}$ can be replaced with the seminorm $|{\mathbf{u}%
}|_{{\mathbf{H}}^{k}(\widehat{K})}$.
Moreover,
\eqref{eq:proposition:better-regularity} holds for $s =0$ without the conditions on the angles of the
faces of $\widehat K$.
\end{lemma}
\begin{proof}
We employ the regularized right inverses of the operators $\nabla$ and
$\operatorname*{\mathbf{curl}}$ and proceed as in Lemma~\ref{lemma:better-regularity-2d}. We
write, using the decomposition of Lemma~\ref{lemma:helmholtz-like-decomp},
$\displaystyle
{\mathbf{u}}=\nabla R^{\operatorname*{grad}}({\mathbf{u}}-{\mathbf{R}%
}^{\operatorname*{curl}}\operatorname*{\mathbf{curl}}{\mathbf{u}})+{\mathbf{R}}^{\operatorname*{curl}}\operatorname*{\mathbf{curl}}{\mathbf{u}}=:\nabla
\varphi+{\mathbf{v}}%
$
with $\varphi\in H^{k+1}(\widehat{K})$ and ${\mathbf{v}}\in{\mathbf{H}}%
^{k}(\widehat{K})$ together with
\begin{equation}
\Vert\varphi\Vert_{H^{k+1}(\widehat{K})}+\Vert{\mathbf{v}}\Vert_{{\mathbf{H}%
}^{k}(\widehat{K})}\lesssim \Vert{\mathbf{u}}\Vert_{{\mathbf{H}}%
^{k}(\widehat{K})}+\Vert\operatorname*{\mathbf{curl}}{\mathbf{u}}\Vert_{{\mathbf{H}%
}^{k-1}(\widehat{K})} \lesssim \Vert{\mathbf{u}}\Vert_{{\mathbf{H}}%
^{k}(\widehat{K})}.
\label{eq:lemma:projection-based-interpolation-approximation-100}%
\end{equation}
The assumption $\operatorname*{\mathbf{curl}}{\mathbf{u}}\in {\mathbf V}_p(\widehat K)$
and
Lemma~\ref{lemma:mcintosh}, (\ref{item:lemma:mcintosh-v})
imply ${\mathbf{v}}={\mathbf{R}}%
^{\operatorname*{curl}}\operatorname*{\mathbf{curl}}{\mathbf{u}}\in
\mathbf{Q}_p(\widehat{K})$; furthermore, since
$\widehat \Pi^{\operatorname*{curl},3d}_p$ is a projection, we have
${\mathbf{v}}-\widehat \Pi^{\operatorname*{curl},3d}_p{\mathbf{v}}=0$.
With the commuting diagram property $\nabla\widehat\Pi^{\operatorname*{grad},3d}_{p+1}=\widehat \Pi^{\operatorname*{curl},3d}_p\nabla$ and \eqref{eq:lemma:demkowicz-grad-3D-20} we get
\begin{align*}
\Vert(\operatorname{I}-\widehat \Pi^{\operatorname*{curl},3d}_p){\mathbf{u}}\Vert
_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{\mathbf{curl}})} &= \Vert(\operatorname{I}-\widehat \Pi^{\operatorname*{curl},3d}_p)\nabla\varphi+\underbrace{(\operatorname{I}-\widehat \Pi^{\operatorname*{curl},3d}_p){\mathbf{v}}}_{=0}\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{\mathbf{curl}})} \\
&= \Vert\nabla(\operatorname{I}-\widehat\Pi^{\operatorname*{grad},3d}_{p+1})\varphi\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K})}\lesssim p^{-(k+s)}\Vert\varphi\Vert_{H^{k+1}(\widehat{K})}.
\end{align*}
The proof of (\ref{eq:proposition:better-regularity})
is complete in view of
(\ref{eq:lemma:projection-based-interpolation-approximation-100}). Replacing
$\Vert{\mathbf{u}}\Vert_{{\mathbf{H}}^{k}(\widehat{K})}$ with $|{\mathbf{u}%
}|_{{\mathbf{H}}^{k}(\widehat{K})}$ is possible since the
projector $\widehat \Pi^{\operatorname*{curl},3d}_p$ reproduces polynomials
of degree $p$.
\end{proof}
\subsection{Stability of the operator $\protect\widehat\Pi^{\operatorname*{div},3d}_{p}$}
Similar to Lemma~\ref{lemma:Picurl-edge}, we have:
\begin{lemma}
\label{lemma:Pidiv-face} For
${\mathbf{u}}\in{\mathbf{H}}^{1/2}(\widehat{K},\operatorname*{div})$ and $s \ge 0$
we have for each face $f\in{\mathcal{F}}(\widehat{K})$
\begin{equation}
\Vert({\mathbf{u}}-\widehat\Pi^{\operatorname*{div},3d}_{p}{\mathbf{u}%
})\cdot{\mathbf{n}}_{f}\Vert_{\widetilde{H}^{-s}(f)}\leq C_s p^{-s}%
\inf_{v \in V_p(f)}
\Vert{\mathbf{u}}\cdot{\mathbf{n}}_{f} - v\Vert_{L^{2}(f)}.
\label{eq:lemma:Pidiv-face-20}%
\end{equation}
\end{lemma}
\begin{proof}
We first show that for ${\mathbf u} \in {\mathbf H}^{1/2}(\widehat K,\operatorname{div})$
the normal trace satisfies ${\mathbf n}_f \cdot {\mathbf u} \in L^2(f)$ on each face $f$. To that end,
we write with the aid of Lemma~\ref{lemma:helmholtz-decomposition-div}
${\mathbf u} = \operatorname{\mathbf{curl}} {\boldsymbol \varphi} + {\mathbf z}$ with
${\boldsymbol \varphi}$, ${\mathbf z} \in {\mathbf H}^{3/2}(\widehat K)$. We have
${\mathbf n}_f \cdot {\mathbf z} \in {\mathbf H}^1(f)$. Noting
${\boldsymbol \varphi}|_f \in {\mathbf H}^1(f)$ and
$({\mathbf n}_f \cdot \operatorname{\mathbf{curl}} {\boldsymbol \varphi})|_f
= \operatorname{curl}_f (\Pi_\tau {\boldsymbol \varphi})|_f$, we conclude that
$({\mathbf n}_f \cdot \operatorname{\mathbf{curl}} {\boldsymbol \varphi})|_f \in L^2(f)$.
Note that \eqref{eq:Pi_div-d} and \eqref{eq:Pi_div-c} imply that on faces the operator $\widehat\Pi^{\operatorname*{div},3d}_{p}$
is the $L^2$-projection onto $V_p(f)$. Thus, \eqref{eq:lemma:Pidiv-face-20} holds for $s=0$.
The case $s>0$ follows by a duality argument.
To that end define $\tilde{e}:=\left(\mathbf{u}-\widehat\Pi^{\operatorname*{div},3d}_{p}\mathbf{u}\right)\cdot \mathbf{n}_f$.
We observe that each $w\in \mathcal{P}_p(f)$ can be written as $w=\overline{w}+(w-\overline{w})$
with $\overline{w}$ being the average of $w$ on $f$. Since $w-\overline{w} \in \mathring{V}_p(f)$, \eqref{eq:Pi_div-d} and \eqref{eq:Pi_div-c} imply $(\tilde{e},w)_{L^2(f)} =0$ for any
$w \in \mathcal{P}_p(f)$. Thus we have for arbitrary $v \in H^s(f)$
\begin{align*}
\bigl|(\tilde{e},v)_{L^2(f)}\bigr| &= \bigl|\inf_{w\in \mathcal{P}_p(f)} (\tilde{e},v-w)_{L^2(f)} \bigr|
\leq \Vert\tilde{e}\Vert_{L^2(f)} \inf_{w\in\mathcal{P}_p(f)} \Vert v-w\Vert_{L^2(f)} \\
& \lesssim p^{-s} \Vert\tilde{e}\Vert_{L^2(f)} \Vert v\Vert_{H^s(f)}.
\end{align*}
Taking the supremum over $v \in H^s(f)$ and inserting the already established case $s=0$
yields \eqref{eq:lemma:Pidiv-face-20}.
\end{proof}
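As an aside, the duality mechanism of the preceding proof can be checked by a short one-dimensional computation. The following \texttt{Python}/\texttt{NumPy} sketch replaces the face $f$ by the interval $(-1,1)$, $V_p(f)$ by the polynomials of degree $p$, and uses ad hoc functions $u$ and $v$ (assumptions made only for this illustration); it merely verifies numerically that, by the orthogonality of the $L^2$-projection error $\tilde e$ to all polynomials of degree at most $p$, the pairing $(\tilde e,v)_{L^2}$ is controlled by $\Vert\tilde e\Vert_{L^2}$ times the best approximation error of $v$.

\begin{verbatim}
# 1d sketch of the duality argument: the L^2 projection error e = u - Pi_p u is
# orthogonal to all polynomials of degree <= p, hence
#   |(e, v)| = |(e, v - Pi_p v)| <= ||e||_{L2} * ||v - Pi_p v||_{L2}.
# The face f is replaced by (-1,1) and V_p(f) by P_p (illustration only).
import numpy as np
from numpy.polynomial import legendre as L

def composite_gauss(breaks, m=60):
    """Gauss-Legendre nodes/weights on each subinterval [breaks[i], breaks[i+1]]."""
    xg, wg = L.leggauss(m)
    xs, ws = [], []
    for a, b in zip(breaks[:-1], breaks[1:]):
        xs.append(0.5 * (b - a) * xg + 0.5 * (a + b))
        ws.append(0.5 * (b - a) * wg)
    return np.concatenate(xs), np.concatenate(ws)

# split at the kinks of u and v below, so that all quadratures are exact
x, w = composite_gauss([-1.0, 0.0, 0.3, 1.0])

def project(f_vals, p):
    """L^2(-1,1)-orthogonal projection onto polynomials of degree <= p."""
    V = L.legvander(x, p)                                  # P_0,...,P_p at the nodes
    c = (V.T @ (w * f_vals)) * (2 * np.arange(p + 1) + 1) / 2.0
    return V @ c

u = np.abs(x)            # stand-in for u . n_f (only piecewise smooth)
v = np.abs(x - 0.3)      # test function of limited smoothness

for p in (2, 4, 8, 16):
    e = u - project(u, p)
    pairing = abs(np.sum(w * e * v))
    bound = np.sqrt(np.sum(w * e ** 2)) * np.sqrt(np.sum(w * (v - project(v, p)) ** 2))
    print(f"p={p:2d}   |(e,v)| = {pairing:.3e}   <=   ||e|| ||v - Pi_p v|| = {bound:.3e}")
\end{verbatim}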
As in the analysis of the operators in the previous sections, the existence of
a polynomial preserving lifting operator from the boundary $\partial\widehat{K}$ to $\widehat{K}$
with appropriate properties plays an important role. Such a lifting operator has been constructed in
\cite{demkowicz-gopalakrishnan-schoeberl-III}. Paralleling Lemma~\ref{lemma:Hcurl-lifting}
we modify that lifting slightly to explicitly ensure an additional orthogonality property.
\begin{lemma}
\label{lemma:lifting-operator-div}
Denote the (normal) trace space of ${\mathbf V}_p(\widehat K)$ by
\begin{equation*}
V_p(\partial \widehat K):= \{v \in L^2(\partial \widehat K)\,|\,
\exists {\mathbf v} \in {\mathbf V}_p(\widehat K) \text{ such that } {\mathbf n}_f \cdot {\mathbf v}|_f = v|_f
\quad
\forall f \in {\mathcal F}(\widehat K)\}.
\end{equation*}
There exist $C > 0$ (independent of $p$) and,
for each $p \in {\mathbb N}_0$, a lifting operator
$\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p:
V_p(\partial\widehat K) \rightarrow {\mathbf V}_p(\widehat K)$
with the following properties:
\begin{enumerate}[(i)]
\item \label{item:lemma:Hdiv-lifting-i}
${\mathbf n}_f \cdot \boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_pz = z|_f$ for each
$f \in {\mathcal F}(\widehat K)$ and $z \in V_p(\partial \widehat K)$.
\item \label{item:lemma:Hdiv-lifting-iii} There holds
$\Vert\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p z
\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})}
\leq C\Vert z\Vert_{\widetilde{H}^{-1/2}(\partial\widehat{K})}$.
\item \label{item:lemma:Hdiv-lifting-iv} There holds the
orthogonality $(\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_pz,\operatorname*{\mathbf{curl}}\mathbf{v})_{L^2(\widehat{K})} = 0$ for all $\mathbf{v}\in \mathring{\mathbf{Q}}_p(\widehat{K})$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall the space
$\mathring{\mathbf{Q}}_{p,\perp}(\widehat K) =\{\mathbf{q}\in\mathring{\mathbf{Q}}_p(\widehat{K})
\colon\!(\mathbf{q},\nabla\psi)_{L^2(\widehat{K})} \!= 0 \, \forall \psi\in \mathring{W}_{p+1}(\widehat{K})\}$
defined in Lemma~\ref{lemma:discrete-friedrichs-3d}.
Let $z\in \widetilde{H}^{-1/2}(\partial\widehat{K})$ be a function with the property $z|_f \in V_p(f)$ for all faces $f\in\mathcal{F}(\widehat{K})$. We define the lifting operator $\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_pz:=\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z-\mathbf{w}_0$,
where $\boldsymbol{\mathcal{E}}^{\operatorname*{div}}:
H^{-1/2}(\partial\widehat K) \rightarrow {\mathbf H}(\widehat K,\operatorname*{div})$
denotes the lifting operator
from \cite{demkowicz-gopalakrishnan-schoeberl-III} and $\mathbf{w}_0$ is determined by the
following saddle point problem: Find $\mathbf{w}_0\in \mathring{\mathbf{V}}_p(\widehat{K})$
and $\boldsymbol{\varphi}\in \mathring{\mathbf{Q}}_{p,\perp}(\widehat{K})$
such that
\begin{subequations}
\label{eq:lemma:saddle-point-div}
\begin{align}
\label{eq:lemma:saddle-point-div-a}
(\operatorname*{div}\mathbf{w}_0,\operatorname*{div}\mathbf{v})_{L^2(\widehat{K})} + (\mathbf{v},\operatorname*{\mathbf{curl}}\boldsymbol{\varphi})_{L^2(\widehat{K})} & = (\operatorname*{div}(\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z),\operatorname*{div}\mathbf{v})_{L^2(\widehat{K})} \quad \forall\mathbf{v}\in \mathring{\mathbf{V}}_p(\widehat{K}) \\
\label{eq:lemma:saddle-point-div-b}
(\mathbf{w}_0,\operatorname*{\mathbf{curl}}\boldsymbol{\mu})_{L^2(\widehat{K})} & = (\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z,\operatorname*{\mathbf{curl}}\boldsymbol{\mu})_{L^2(\widehat{K})} \qquad \forall\boldsymbol{\mu}\in \mathring{\mathbf{Q}}_{p,\perp}(\widehat{K}).
\end{align}
\end{subequations}
Unique solvability of Problem~\eqref{eq:lemma:saddle-point-div} is seen as follows:
Define the bilinear forms $a(\mathbf{w},\mathbf{q}):=(\operatorname*{div}\mathbf{w},\operatorname*{div}\mathbf{q})_{L^2(\widehat{K})}$ and $b(\mathbf{w},\boldsymbol{\varphi}):=(\mathbf{w},\operatorname*{\mathbf{curl}}\boldsymbol{\varphi})_{L^2(\widehat{K})}$ for $\mathbf{w},\mathbf{q}\in\mathring{\mathbf{V}}_p(\widehat{K})$ and $\boldsymbol{\varphi}\in\mathring{\mathbf{Q}}_{p,\perp}(\widehat{K})$.
Coercivity of $a$ on the kernel of $b$, $\operatorname*{ker}b=\{\mathbf{v}\in\mathring{\mathbf{V}}_p(\widehat{K}): (\mathbf{v},\operatorname*{\mathbf{curl}} \boldsymbol{\mu})_{L^2(\widehat{K})} = 0 \, \forall \boldsymbol{\mu}\in \mathring{\mathbf{Q}}_{p,\perp}(\widehat{K})\}$, follows from the Friedrichs inequality
for the divergence operator (cf.~Lemma~\ref{lemma:discrete-friedrichs-div})
since for
$ {\mathbf v} \in \operatorname*{ker} b$ one has
\begin{align*}
a(\mathbf{v},\mathbf{v})&=\Vert\operatorname*{div}\mathbf{v}\Vert_{L^2(\widehat{K})}^2 \geq \frac{1}{2C^2} \Vert\mathbf{v}\Vert_{L^2(\widehat{K})}^2 + \frac{1}{2}\Vert\operatorname*{div}\mathbf{v}\Vert_{L^2(\widehat{K})}^2 \\
&\geq \operatorname*{min}\Big\{\frac{1}{2C^2},\frac{1}{2}\Big\}\Vert\mathbf{v}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})}^2.
\end{align*}
Next, the inf-sup condition for $b$ follows easily by considering,
for given
$\boldsymbol{\varphi}\in\mathring{\mathbf{Q}}_{p,\perp}(\widehat{K})$, the function
$\mathbf{w}=\operatorname*{\mathbf{curl}}\boldsymbol{\varphi}\in\mathring{\mathbf{V}}_p(\widehat{K})$
in $b({\mathbf w},{\boldsymbol{\varphi}})$ and using
the Friedrichs inequality for the $\operatorname*{\mathbf{curl}}$ (Lemma~\ref{lemma:discrete-friedrichs-3d}),
\begin{align*}
\frac{b(\mathbf{w},\boldsymbol{\varphi})}{\Vert\mathbf{w}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})} \Vert\boldsymbol{\varphi}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})}} = \frac{\Vert\operatorname*{\mathbf{curl}}\boldsymbol{\varphi}\Vert_{L^2(\widehat{K})}^2}{\Vert\operatorname*{\mathbf{curl}}\boldsymbol{\varphi}\Vert_{L^2(\widehat{K})} \Vert\boldsymbol{\varphi}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{\mathbf{curl}})}}
\stackrel{\text{Lem.~\ref{lemma:discrete-friedrichs-3d}}}{\geq} C.
\end{align*}
Thus, the saddle point problem \eqref{eq:lemma:saddle-point-div} has a unique solution
$(\mathbf{w}_0, {\boldsymbol\varphi}) \in
\mathring{\mathbf{V}}_p(\widehat{K}) \times \mathring{\mathbf{Q}}_{p,\perp}(\widehat K)$.
In fact, selecting ${\mathbf v} = \operatorname{\mathbf{curl}} {\boldsymbol \varphi}$ in
(\ref{eq:lemma:saddle-point-div-a}) shows $\operatorname{\mathbf{curl}}{\boldsymbol \varphi} = 0$, and
the discrete Friedrichs inequality of Lemma~\ref{lemma:discrete-friedrichs-3d} then gives ${\boldsymbol \varphi} = 0$.
The lifting operator $\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p$
now obviously satisfies (\ref{item:lemma:Hdiv-lifting-i})
and (\ref{item:lemma:Hdiv-lifting-iv})
by construction, cf. \cite[Theorem~7.1]{demkowicz-gopalakrishnan-schoeberl-III} for the properties of
the operator $\boldsymbol{\mathcal{E}}^{\operatorname*{div}}$.
For (\ref{item:lemma:Hdiv-lifting-iii}) note that the solution $\mathbf{w}_0$ satisfies
the estimate $\Vert\mathbf{w}_0\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})} \lesssim \Vert f\Vert + \Vert g\Vert$, where $f(\mathbf{v})=(\operatorname*{div}(\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z),\operatorname*{div}\mathbf{v})_{L^2(\widehat{K})}$, $g(v)=(\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z,\operatorname*{\mathbf{curl}} \mathbf{v})_{L^2(\widehat{K})}$, and $\Vert \cdot \Vert$ denotes the operator norm. Thus,
\begin{align*}
\Vert f\Vert = \operatorname*{sup}_{\Vert\mathbf{v}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})} \leq 1} |(\operatorname*{div}(\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z),\operatorname*{div}\mathbf{v})_{L^2(\widehat{K})}| \leq \Vert\operatorname*{div}(\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z)\Vert_{L^2(\widehat{K})} \lesssim \Vert z\Vert_{\widetilde{H}^{-1/2}(\partial\widehat{K})}.
\end{align*}
The estimate
$\displaystyle
\Vert g\Vert \lesssim \Vert z\Vert_{\widetilde{H}^{-1/2}(\partial\widehat{K})}
$
is shown similarly. Hence, (\ref{item:lemma:Hdiv-lifting-iii}) follows from
\begin{align*}
\Vert\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p z\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})} \leq \Vert\boldsymbol{\mathcal{E}}^{\operatorname*{div}}z\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})} + \Vert\mathbf{w}_0\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})} \lesssim \Vert z\Vert_{\widetilde{H}^{-1/2}(\partial\widehat{K})}.
\qquad
\qedhere
\end{align*}
\end{proof}
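As a remark, the solvability argument just employed, coercivity of $a(\cdot,\cdot)$ on $\operatorname*{ker} b$ together with an inf-sup condition for $b(\cdot,\cdot)$, is the standard Brezzi criterion for saddle point problems. The following \texttt{Python}/\texttt{NumPy} lines give a minimal finite-dimensional illustration with randomly generated stand-ins for the matrices of $a$ and $b$; in particular, the matrix $A$ below is taken symmetric positive definite for simplicity, which is stronger than the coercivity on the kernel of $B$ that the proof actually uses.

\begin{verbatim}
# Finite-dimensional illustration of the Brezzi argument: with A coercive
# (here even SPD, for simplicity) and B of full row rank (discrete inf-sup),
# the block system  [A B^T; B 0] [x; y] = [f; g]  is uniquely solvable.
# A and B are random stand-ins, not the finite element matrices of the proof.
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 5                                   # primal and multiplier dimensions

B = rng.standard_normal((m, n))                # full row rank with probability one
A0 = rng.standard_normal((n, n))
A = A0 @ A0.T + np.eye(n)                      # symmetric positive definite

S = np.block([[A, B.T], [B, np.zeros((m, m))]])
print("cond(S) =", np.linalg.cond(S))          # finite: the system is nonsingular

f, g = rng.standard_normal(n), rng.standard_normal(m)
sol = np.linalg.solve(S, np.concatenate([f, g]))
x, y = sol[:n], sol[n:]
print("residuals:", np.linalg.norm(A @ x + B.T @ y - f), np.linalg.norm(B @ x - g))
\end{verbatim}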
\begin{theorem}
\label{thm:H1div-approximation}
Let $\widehat K$ be a fixed tetrahedron.
There exists $C>0$ independent of $p$ such
that for all ${\mathbf{u}}\in{\mathbf{H}}^{1/2}(\widehat{K},\operatorname{div}%
)$
\begin{equation}
\Vert{\mathbf{u}}-\widehat\Pi^{\operatorname*{div},3d}_{p}{\mathbf{u}}%
\Vert_{{\mathbf{H}}(\widehat{K},\operatorname{div})}\leq Cp^{-1/2}%
\inf_{{\mathbf v} \in {\mathbf V}_p(\widehat K)}
\Vert{\mathbf{u} - \mathbf{v}}\Vert_{{\mathbf{H}}^{1/2}(\widehat{K},\operatorname{div})}.
\label{eq:thmH1div-approximation-10}%
\end{equation}
\end{theorem}
\begin{proof}
\emph{1st~step:} By the projection property of $\widehat\Pi^{\operatorname*{div},3d}_{p}$, it suffices to show
(\ref{eq:thmH1div-approximation-10}) for ${\mathbf v} = 0$.
\emph{2nd~step:} As shown in the proof of Lemma~\ref{lemma:Pidiv-face}, $\mathbf{u}\cdot \mathbf{n}_f \in L^2(f)$ on each face $f\in \mathcal{F}(\widehat{K})$. Hence, Lemma~\ref{lemma:Pidiv-face} gives
\begin{align}
\label{eq:thm:H1div-approximation-12}
\Vert({\mathbf{u}}-\widehat\Pi^{\operatorname*{div},3d}_{p}{\mathbf{u}})\cdot{\mathbf{n}}_{f}\Vert_{\widetilde{H}^{-1/2}(f)}\lesssim p^{-1/2}\Vert{\mathbf{u}}\cdot{\mathbf{n}}_{f}\Vert_{L^2(f)}
\lesssim p^{-1/2} \|{\mathbf u}\|_{{\mathbf H}^{1/2}(\widehat K,\operatorname{div})}.
\end{align}
\emph{3rd~step:}
The difference ${\mathbf u} - \widehat\Pi^{\operatorname*{div},3d}_{p} {\mathbf u}$ is estimated using
the approximation $P^{\operatorname{div},3d} {\mathbf u}$ of Lemma~\ref{lemma:Pdiv3d}.
We abbreviate
$\mathbf{E}:=\widehat\Pi^{\operatorname*{div},3d}_{p}\mathbf{u}-P^{\operatorname*{div},3d}\mathbf{u}\in \mathbf{V}_p(\widehat{K})$.
Since $\widehat\Pi^{\operatorname*{div},3d}_{p}\mathbf{u}$ satisfies the orthogonality conditions \eqref{eq:Pi_div-b}
and \eqref{eq:Pi_div-a}, and $P^{\operatorname*{div},3d}\mathbf{u}$ satisfies
the conditions \eqref{eq:lemma:Pdiv3d} and \eqref{eq:lemma:Pdiv3d-20}, we have the two orthogonality conditions
\begin{subequations}
\label{eq:thm:H1div-approximation-20}
\begin{align}
\label{eq:thm:H1div-approximation-20-a}
(\operatorname*{div}\mathbf{E},\operatorname*{div}\mathbf{v})_{L^2(\widehat{K})} &= 0 \qquad \forall \mathbf{v}\in \mathring{\mathbf{V}}_p(\widehat{K}),
\\
\label{eq:thm:H1div-approximation-20-b}
(\mathbf{E},\operatorname*{\mathbf{curl}}\mathbf{v})_{L^2(\widehat{K})}
&= 0 \qquad \forall \mathbf{v}\in \mathring{\mathbf{Q}}_p(\widehat{K}).
\end{align}
\end{subequations}
By Lemma~\ref{lemma:lifting-operator-div}, the orthogonality condition
\[
(\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p(\mathbf{E}\cdot\mathbf{n}),\operatorname*{\mathbf{curl}}\mathbf{v})_{L^2(\widehat{K})} = 0 \qquad \forall \mathbf{v}\in \mathring{\mathbf{Q}}_p(\widehat{K})
\]
holds; hence the discrete Friedrichs inequality
(Lemma~\ref{lemma:discrete-friedrichs-div}, (\ref{item:lemma:discrete-friedrichs-div-ii}))
can be applied to $\mathbf{E}-\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p(\mathbf{E}\cdot\mathbf{n}) \in \mathring{\mathbf{V}}_p(\widehat{K})$. Thus, we obtain
\begin{align}
\label{eq:thm:H1div-approximation-30}
\begin{split}
\Vert\mathbf{E}\Vert_{L^2(\widehat{K})} &\leq \Vert\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p(\mathbf{E}\cdot\mathbf{n})\Vert_{L^2(\widehat{K})} + \Vert\mathbf{E}-\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p
(\mathbf{E}\cdot\mathbf{n})\Vert_{L^2(\widehat{K})} \\
&\lesssim \Vert\mathbf{E}\cdot\mathbf{n}\Vert_{H^{-1/2}(\partial\widehat{K})} + \Vert\operatorname*{div}(\mathbf{E}-\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p(\mathbf{E}\cdot\mathbf{n}))\Vert_{L^2(\widehat{K})} \\
&\lesssim \Vert\mathbf{E}\cdot\mathbf{n}\Vert_{H^{-1/2}(\partial\widehat{K})} + \Vert\operatorname*{div}\mathbf{E}\Vert_{L^2(\widehat{K})}.
\end{split}
\end{align}
\emph{4th~step:}
Using \eqref{eq:thm:H1div-approximation-20-a}, we get
\begin{align}
\label{eq:thm:H1div-approximation-40}
\Vert\operatorname*{div}\mathbf{E}\Vert_{L^2(\widehat{K})}^2 = (\operatorname*{div}\mathbf{E},\operatorname*{div}\boldsymbol{\mathcal{L}}^{\operatorname*{div},3d}_p(\mathbf{E}\cdot\mathbf{n}))_{L^2(\widehat{K})} \lesssim \Vert\operatorname*{div}\mathbf{E}\Vert_{L^2(\widehat{K})} \Vert\mathbf{E}\cdot\mathbf{n}\Vert_{H^{-1/2}(\partial\widehat{K})}.
\end{align}
Combining
(\ref{eq:thm:H1div-approximation-30}),
(\ref{eq:thm:H1div-approximation-40})
we arrive at
\begin{equation}
\label{eq:thm:H1div-approximation-100}
\|{\mathbf E}\|_{{\mathbf H}(\widehat K ,\operatorname{div})}
\lesssim \|{\mathbf E} \cdot {\mathbf n}\|_{H^{-1/2}(\partial \widehat K)}.
\end{equation}
\emph{5th~step:} The triangle inequality and the continuity of the normal trace operator give
\begin{align*}
\Vert \mathbf{u}-\widehat\Pi^{\operatorname*{div},3d}_{p}\mathbf{u}&\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})}
\leq \Vert\mathbf{u} - P^{\operatorname{div,3d}}{\mathbf u}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})}
+ \Vert\mathbf{E}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})} \\
&\stackrel{(\ref{eq:thm:H1div-approximation-100})}{\lesssim}
\Vert\mathbf{u} - P^{\operatorname{div,3d}}{\mathbf u}\Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})}
+ \Vert\mathbf{E}\cdot{\mathbf n}\Vert_{H^{-1/2}(\partial \widehat{K})} \\
&\lesssim \Vert\mathbf{u} - P^{\operatorname{div},3d} \mathbf{u} \Vert_{\mathbf{H}(\widehat{K},\operatorname*{div})}
+ \sum_{f\in\mathcal{F}(\widehat{K})}\Vert(\mathbf{u}-\widehat\Pi^{\operatorname*{div},3d}_{p}\mathbf{u})\cdot\mathbf{n}_f\Vert_{\widetilde{H}^{-1/2}(f)} \\
&\overset{\eqref{eq:thm:H1div-approximation-12},\text{Lem.~\ref{lemma:Pdiv3d}}}{\lesssim} p^{-1/2}\Vert\mathbf{u}\Vert_{\mathbf{H}^{1/2}(\widehat{K},\operatorname*{div})}.
\qedhere
\end{align*}
\end{proof}
Next, we consider the approximation error in negative Sobolev norms.
\begin{theorem}
\label{thm:duality-again-div}
Let $\widehat K$ be a fixed tetrahedron.
For $s \in [0,1]$ and for all $\mathbf{u}\in \mathbf{H}^{1/2}(\widehat{K},\operatorname{div})$
there holds the estimate
\begin{align*}
\Vert\mathbf{u}-\widehat\Pi^{\operatorname*{div},3d}_{p}\mathbf{u}\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{div})}
\leq C_s p^{-1/2-s} \inf_{{\mathbf v} \in {\mathbf V}_p(\widehat K)} \Vert\mathbf{u} - \mathbf{v}\Vert_{\mathbf{H}^{1/2}(\widehat{K},\operatorname{div})}.
\end{align*}
\end{theorem}
\begin{proof}
In view of the projection property of $\widehat\Pi^{\operatorname*{div},3d}_{p}$, we restrict to showing the estimate with ${\mathbf v} = 0$.
The case $s = 0$ is shown in Theorem~\ref{thm:H1div-approximation}. We will therefore merely focus
on the case $s = 1$ as the case $s \in (0,1)$ follows by interpolation.
We write $\mathbf{E}:=\mathbf{u}-\widehat\Pi^{\operatorname*{div},3d}_{p}\mathbf{u}$ for simplicity.
By definition we have
\begin{align}
\label{eq:lemma:duality-again-div-10}
\Vert\mathbf{E}\Vert_{\widetilde{\mathbf{H}}^{-1}(\widehat{K},\operatorname{div})} &\sim
\Vert\mathbf{E}\Vert_{\widetilde{\mathbf{H}}^{-1}(\widehat{K})} + \Vert\operatorname{div}\mathbf{E}\Vert_{\widetilde{H}^{-1}(\widehat{K})}
\\
\nonumber
&=
\operatorname*{sup}_{\mathbf{v}\in\mathbf{H}^1(\widehat{K})} \frac{(\mathbf{E},\mathbf{v})_{L^2(\widehat{K})}}{\Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})}} +
\operatorname*{sup}_{{v}\in{H}^1(\widehat{K})} \frac{(\operatorname{div} \mathbf{E},{v})_{L^2(\widehat{K})}}{\Vert{v}\Vert_{{H}^1(\widehat{K})}}.
\end{align}
We start with estimating the first supremum in (\ref{eq:lemma:duality-again-div-10}).
We write $\mathbf{v}\in\mathbf{H}^1(\widehat{K})$ as
\begin{align*}
\mathbf{v}=\nabla\varphi + \operatorname{\mathbf{curl}}\mathbf{z}
\end{align*}
with $\varphi\in H^2(\widehat{K})$ and $\mathbf{z}\in\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}}) \cap \mathbf{H}_0(\widehat{K},\operatorname{\mathbf{curl}})$ according to Lemma~\ref{lemma:helmholtz-3d}
and have to bound the two terms in $(\mathbf{E},\mathbf{v})_{L^2(\widehat{K})} = (\mathbf{E},\operatorname{\mathbf{curl}}\mathbf{z})_{L^2(\widehat{K})} + (\mathbf{E},\nabla\varphi)_{L^2(\widehat{K})}$. For the first term,
Theorem~\ref{thm:H1div-approximation} yields
\begin{align*}
\bigl|(\mathbf{E},\operatorname{\mathbf{curl}}\mathbf{z})_{L^2(\widehat{K})}\bigr|
&= \bigl|\operatorname*{inf}_{\mathbf{w}\in\mathring{\mathbf{Q}}_p(\widehat{K})} (\mathbf{E},\operatorname{\mathbf{curl}}(\mathbf{z}-\mathbf{w}))_{L^2(\widehat{K})} \bigr| \lesssim p^{-1} \Vert\mathbf{E}\Vert_{L^2(\widehat{K})} \Vert\mathbf{z}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{\mathbf{curl}})} \\
&\lesssim p^{-3/2} \Vert\mathbf{u}\Vert_{\mathbf{H}^{1/2}(\widehat{K},\operatorname{div})} \Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})},
\end{align*}
where the infimum is estimated as in Lemma~\ref{lemma:picurl-negative-I}; we do not repeat the arguments here. For the second term, we employ integration by parts to get
\begin{align}
\label{eq:lemma:duality-again-div-40}
(\mathbf{E},\nabla\varphi)_{L^2(\widehat{K})} = -(\operatorname{div}\mathbf{E},\varphi)_{L^2(\widehat{K})} + \sum_{f\in\mathcal{F}(\widehat{K})} (\mathbf{E}\cdot\mathbf{n}_f,\varphi)_{L^2(f)}
\end{align}
Denote by $\overline{\varphi}:=(\int_{\widehat{K}}\varphi)/|\widehat{K}|$ the average of $\varphi$. Integration by parts gives
\begin{align}
\label{eq:lemma:duality-again-div-60}
(\operatorname{div}\mathbf{E},\varphi)_{L^2(\widehat{K})} = (\operatorname{div}\mathbf{E},\varphi-\overline{\varphi})_{L^2(\widehat{K})} + \overline{\varphi}(\mathbf{E}\cdot\mathbf{n},1)_{L^2(\partial\widehat{K})} \!\overset{\eqref{eq:Pi_div-d}}{=}\! (\operatorname{div}\mathbf{E},\varphi-\overline{\varphi})_{L^2(\widehat{K})}.
\end{align}
We then define the auxiliary function $\psi$ by
\begin{align*}
\Delta\psi=\varphi-\overline{\varphi}, \qquad \partial_n\psi=0 \text{ on }\partial\widehat{K}
\end{align*}
and set $\boldsymbol{\Phi}:=\nabla\psi$. Since $\operatorname{div}\boldsymbol{\Phi}=\Delta\psi=\varphi-\overline{\varphi}$, we get
\begin{align}
\label{eq:lemma:duality-again-div-80}
\big| (\operatorname{div}\mathbf{E}&,\varphi-\overline{\varphi})_{L^2(\widehat{K})} \big|
= \big| (\operatorname{div}\mathbf{E},\operatorname{div}\boldsymbol{\Phi})_{L^2(\widehat{K})} \big| \\
&\overset{\eqref{eq:Pi_div-a}}{=}
\bigl|
\operatorname*{inf}_{\mathbf{w}\in\mathring{\mathbf{V}}_p(\widehat{K})} (\operatorname{div}\mathbf{E},\operatorname{div}(\boldsymbol{\Phi}-\mathbf{w}))_{L^2(\widehat{K})} \bigr|
\label{eq:lemma:duality-again-div-82}
\lesssim p^{-1} \Vert\mathbf{E}\Vert_{\mathbf{H}(\widehat{K},\operatorname{div})}
\Vert\boldsymbol{\Phi}\Vert_{\mathbf{H}^1(\widehat{K},\operatorname{div})} \\
&\lesssim p^{-3/2} \Vert\mathbf{u}\Vert_{\mathbf{H}^{1/2}(\widehat{K},\operatorname{div})}
\Vert\varphi \Vert_{H^1(\widehat{K})}
\nonumber
\lesssim p^{-3/2} \Vert\mathbf{u}\Vert_{\mathbf{H}^{1/2}(\widehat{K},\operatorname{div})}
\Vert{\mathbf v}\Vert_{{\mathbf H}^1(\widehat{K})}.
\end{align}
Thus, only estimates for the boundary terms in \eqref{eq:lemma:duality-again-div-40} are missing.
The orthogonality properties \eqref{eq:Pi_div-d} and \eqref{eq:Pi_div-c} as well as Lemma~\ref{lemma:Pidiv-face} lead to
\begin{align*}
\bigl|(\mathbf{E}\cdot\mathbf{n},\varphi)_{L^2(f)}\bigr|
&= \bigl| \operatorname*{inf}_{w\in V_p(f)} (\mathbf{E}\cdot\mathbf{n},\varphi-w)_{L^2(f)} \bigr|
\lesssim p^{-1} \Vert\mathbf{E}\cdot\mathbf{n}\Vert_{\widetilde{H}^{-1/2}(f)} \Vert\varphi\Vert_{H^{3/2}(f)} \\
&\!\!\!\!\!\!\!\!\stackrel{\text{Lem.~\ref{lemma:Pidiv-face}}}{\lesssim}\!\! p^{-3/2} \Vert\mathbf{u}\cdot\mathbf{n}\Vert_{L^2(f)} \Vert\varphi\Vert_{H^2(\widehat{K})} \lesssim p^{-3/2} \Vert\mathbf{u}\Vert_{\mathbf{H}^{1/2}(\widehat{K},\operatorname{div})} \Vert\mathbf{v}\Vert_{\mathbf{H}^1(\widehat{K})}.
\end{align*}
Thus, we have estimated the first supremum in \eqref{eq:lemma:duality-again-div-10}.
We now handle the second supremum in (\ref{eq:lemma:duality-again-div-10}).
The required estimates have already been derived in \eqref{eq:lemma:duality-again-div-60}
and \eqref{eq:lemma:duality-again-div-80}; we merely have to note
that the function $\varphi$ there satisfied $\varphi\in H^2(\widehat{K})$,
whereas $H^1(\widehat{K})$-regularity is in fact sufficient, as is visible in (\ref{eq:lemma:duality-again-div-82}).
\end{proof}
For functions whose divergence is a polynomial, we get the following result similar to Lemma~\ref{lemma:better-regularity}.
\begin{lemma}
\label{lemma:better-regularity-div}
Assume that all interior angles of the $4$ faces of $\widehat K$ are smaller than $2\pi/3$.
For all $k\geq1$, $s \in [0,1]$, and all
${\mathbf{u}}\in{\mathbf{H}}^{k}(\widehat{K})$ with $\operatorname*{div}%
{\mathbf{u}}\in {\mathcal{P}}_{p}(\widehat{K})$ there holds
\begin{equation}
\Vert{\mathbf{u}}-\widehat\Pi^{\operatorname*{div},3d}_{p}{\mathbf{u}}%
\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{div})}\leq C_{s,k}p^{-(k+s)}\Vert{\mathbf{u}}\Vert
_{\mathbf{H}^{k}(\widehat{K})}.
\label{eq:proposition-better-regularity-div}%
\end{equation}
If $p\geq k-1$, then $\Vert{\mathbf{u}}\Vert_{{\mathbf{H}}%
^{k}(\widehat{K})}$ can be replaced with the seminorm $|{\mathbf{u}%
}|_{{\mathbf{H}}^{k}(\widehat{K})}$.
Moreover, \eqref{eq:proposition-better-regularity-div} holds for $s = 0$ without the conditions on the
angles of the faces of $\widehat K$.
\end{lemma}
\begin{proof}
We write, using the decomposition of Lemma~\ref{lemma:helmholtz-decomposition-div},
$\displaystyle
{\mathbf{u}}=\operatorname*{\mathbf{curl}} \mathbf{R}^{\operatorname*{curl}}({\mathbf{u}}-{\mathbf{R}%
}^{\operatorname*{div}}\operatorname*{div}{\mathbf{u}})+{\mathbf{R}}^{\operatorname*{div}}\operatorname*{div}{\mathbf{u}}=:\operatorname*{\mathbf{curl}}
\boldsymbol{\varphi}+{\mathbf{z}}%
$
with $\boldsymbol{\varphi}\in \mathbf{H}^{k+1}(\widehat{K})$ and ${\mathbf{z}}\in{\mathbf{H}}%
^{k}(\widehat{K})$ together with
\begin{equation}
\Vert\boldsymbol{\varphi}\Vert_{\mathbf{H}^{k+1}(\widehat{K})}+\Vert{\mathbf{z}}\Vert_{{\mathbf{H}}^{k}(\widehat{K})} \lesssim \Vert{\mathbf{u}}\Vert_{{\mathbf{H}}^{k}(\widehat{K})}+\Vert\operatorname*{div}{\mathbf{u}}\Vert_{H^{k-1}(\widehat{K})} \leq C\Vert{\mathbf{u}}\Vert_{{\mathbf{H}}^{k}(\widehat{K})}.
\label{eq:lemma:projection-based-interpolation-approximation-200}%
\end{equation}
The assumption $\operatorname*{div}{\mathbf{u}}\in {\mathcal{P}}_{p}(\widehat K)$
and
Lemma~\ref{lemma:mcintosh}, (\ref{item:lemma:mcintosh-vi})
imply ${\mathbf{z}}={\mathbf{R}}%
^{\operatorname*{div}}\operatorname*{div}{\mathbf{u}}\in
\mathbf{V}_p(\widehat{K})$; furthermore, since
$\widehat\Pi^{\operatorname*{div},3d}_{p}$ is a projection, we conclude
${\mathbf{z}}-\widehat\Pi^{\operatorname*{div},3d}_{p}{\mathbf{z}}=0$. Thus,
we get from the commuting diagram
\begin{align*}
& \Vert(\operatorname{I}-\widehat\Pi^{\operatorname*{div},3d}_{p}){\mathbf{u}}\Vert
_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{div})}=\Vert(\operatorname{I}-\widehat\Pi^{\operatorname*{div},3d}_{p})\operatorname*{\mathbf{curl}}\boldsymbol{\varphi}+\underbrace{(\operatorname{I}-\widehat\Pi^{\operatorname*{div},3d}_{p}){\mathbf{z}}}_{=0}\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{div})}\\
&\qquad = \Vert\operatorname{\mathbf{curl}}(\operatorname{I}-\widehat \Pi^{\operatorname*{curl},3d}_p)\boldsymbol{\varphi}\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{div})}
\leq
\Vert(\operatorname{I}-\widehat \Pi^{\operatorname*{curl},3d}_p)\boldsymbol{\varphi}\Vert_{\widetilde{\mathbf{H}}^{-s}(\widehat{K},\operatorname{\mathbf{curl}})} \\
& \qquad
\stackrel{\text{Thm.~{\ref{thm:duality-again}}}}{\lesssim}
p^{-(1+s)}
\inf_{{\mathbf v} \in {\mathbf Q}_p(\widehat K)} \Vert\boldsymbol{\varphi} -{\mathbf v}\Vert_{\mathbf{H}^{1}(\widehat{K},\operatorname{\mathbf{curl}})}
\stackrel{\text{Lem.~\ref{lemma:Pgrad1d}},\eqref{eq:lemma:projection-based-interpolation-approximation-200}}
{\lesssim} p^{-(k+s)} \Vert\mathbf{u}\Vert_{\mathbf{H}^k(\widehat{K})}.
\end{align*}
Replacing
$\Vert{\mathbf{u}}\Vert_{{\mathbf{H}}^{k}(\widehat{K})}$ with $|{\mathbf{u}%
}|_{{\mathbf{H}}^{k}(\widehat{K})}$ follows from the observation that the
projector $\widehat\Pi^{\operatorname*{div},3d}_{p}$ reproduces polynomials
of degree $p$.
\end{proof}
\subsection*{Acknowledgement} JMM is grateful to his colleague
Joachim Sch\"oberl (TU Wien) for inspiring discussions on the topic
of the paper and, in particular, for pointing out the arguments of
Theorem~\ref{lemma:demkowicz-grad-2D}. CR acknowledges the support
of the Austrian Science Fund (FWF) under grant P 28367-N35.
\bibliographystyle{plain}
|
\section{Introduction}
There is general agreement that the nuclear activity of galaxies
(with the exception of starburst galaxies) is powered by accretion of
interstellar gas onto a massive black hole (e.g. Rees 1984). The accretion
flow most probably forms a kind of disc structure due to an excess of
angular momentum.
Evidence of such a disc-like structure is observed directly in HST pictures
of the galaxy M87 at
a distance of several pc from the center (Harms et al. 1994) and even up to 0.009
pc in the case of the galaxy NGC 4258 due to the presence of a water maser
(Greenhill et al. 1995).
Closer in, the accretion flow is clearly two-phase. The cold gas most probably
forms a
relatively flat configuration, either in the form of an accretion disc
or of blobs. This material is embedded in a hot medium.
There are several
observational arguments in favour of this general scenario.
The disc-like geometry of the cold phase explains the lack of
significant absorption in quasars and Seyfert 1 galaxies
as well as the presence of the X-ray
spectral features in Seyfert 1 galaxies
due to reprocessing by cool optically thick gas (i.e. the reflection
component) covering approximately
half of the sky as seen from the source of X-rays (e.g. Pounds et al. 1990,
Matsuoka et al. 1990; for a review, see Mushotzky, Done \& Pounds 1993).
Additional support
for the disc-like geometry and Keplerian motion comes from the determination
of the shape of the $K_{\alpha}$ line from ASCA data (Fabian et al. 1994). The
reflection component is not seen in quasars (Williams et al. 1992),
most probably due to the higher
ionization stage of the gas (e.g. \.{Z}ycki et al. 1994, \.{Z}ycki \& Czerny 1994).
The hot optically thin medium is required to explain the generation of the
observed hard X-ray emission extending up to a few hundreds of keV (see
Mushotzky et al. 1993). It most probably
forms a kind of corona above the cold layer of the gas
although an alternative view was also suggested
(X-ray emission coming from shocks formed
by gas outflowing along the symmetry axis, e.g. Henri \& Pelletier 1991).
The heating mechanism of the corona, its structure and radial extension are
presently unknown.
In the innermost parts of the flow a (sometimes considerable) fraction
of the total gravitational energy of accreting gas has to be dissipated
in the hot corona to provide both the X-ray bolometric luminosity and the
extension of the emitted spectrum to high frequencies. The most probable
emission mechanism is Compton upscattering of the photons
emitted by the cool gas on hot thermal electrons.
The medium is not necessarily uniform, as we observe both the effect of
moderate Comptonization
(modification of the high frequency tail of the big bump emission
seen in soft X-ray band in a number of sources, e.g. Czerny \& Elvis 1987,
Wilkes \& Elvis 1987, Walter \& Fink 1993) as well as significant (but still
unsaturated) Comptonization which leads to formation of the hard X-ray
power law. It suggests that hard X-ray emission is perhaps produced
in hotter, maybe magnetically driven compact active regions (e.g. Haardt,
Maraschi \& Ghisellini 1994, Stern et al. 1995) embedded, or surrounded by
still hot but cooler plasma. The stochastic nature of the X-ray variability
supports this view (Czerny \& Lehto 1996).
In the outer parts of the flow the amount of energy available is small, the
radiation emitted there contributes practically nothing to the bolometric
luminosity of the source, and these regions are even less understood than the
innermost flow. On the other hand, the formation of a Compton-heated corona
seems inevitable unless the outer parts of the disc are shielded from the
radiation coming from the innermost parts (e.g.
Begelman, McKee and Shields 1983, Begelman \& McKee 1983,
Ostriker, McKee \& Klein 1991 - hereafter OMK, Raymond 1993). The existence
of such a corona is actually observed in X-ray binaries (e.g.
White \& Holt 1982, McClintock et al. 1982, Fabian \& Guilbert
1982). The existence of such a corona in AGN may have very
significant influence on the observed spectrum.
The principal role of the corona surrounding the disc-like flow
at a radius $\sim 0.01 - 1 $ pc is not in the
direct dissipation of the energy but in redirecting the radiation generated
in the inner parts towards outer parts of the disc by almost elastic
scattering. The direct irradiation in the case of AGN is not efficient
unless the source of radiation is situated high above the disc surface.
The disc surface in AGN does not flare in its inner parts,
according to the widely adopted description of the radiation pressure
dominated disc by Shakura and Sunyaev (1973) and in its outer
flaring parts does not cover more than a per cent of the sky of the
central X-ray source (e.g. Hure et al. 1994, Siemiginowska, Czerny \&
Kostyunin 1996).
The irradiation of the disc surface due to the scattering by the
corona has two consequences:
(i) the irradiation modifies the optical/UV
continuum emitted by the disc and (ii) it leads to line
formation at the base of the corona, thus contributing significantly
to the observed Broad Emission Lines.
Both effects are the subject of our study, with the aim to confirm the
presence of the outer corona and to constrain its properties.
The plan of this paper is as follows. The model of the corona and the method of
computing the continuum emission of the disc and the line intensities
and profiles from clouds forming in the disc/boundary layer are
described in Section 2. In Section 3 we present the results for a range
of model parameters, compare them with observations, and
discuss the consequences of the model. Conclusions are given in Section 4.
\section{The model of the continuum and line emission}
\subsection{Structure of the corona and location of the source of photons}
The model of the corona above the outer parts of an accretion disc ($0.01 -
1 $ pc) is based on the theory of the two-phase equilibrium studied originally
by Spitzer (1978) and further developed in the context of AGN in a number
of papers, starting from Krolik, McKee and Tarter (1981) (hereafter KMT).
The temperature of the corona is mainly constrained by the inverse Compton
heating and cooling by the incident radiation.
The corona is irradiated by three different radiation sources: the
high-energy central UV/X-ray source, radiation from the central source
that has been scattered in corona and lower energy radiation from
underlying viscous accretion disc.
The central hard X-ray emission and the central thermal emission
(for the most part UV) of the
disk are both represented as a point-like
source located at a height $H_X$ on the symmetry axis.
Although in reality both emission regions are extended, most of the
energy is released within a few gravitational radii of the black hole,
i.e. within $\sim 10^{-4}$ or less of the outer radius of the corona,
so the extension of the central source can be neglected.
In our calculations we take half of the total disk luminosity as the
value of central thermal luminosity. The proportion of total X-ray
luminosity to central thermal luminosity is discussed in Sec. 2.3.1.
Following KMT and OMK we denote the inverse Compton temperature of the
direct radiation (i.e. from the central source) by $T_{IC}$.
For the
spectrum of quasars used by KMT the value of $T_{IC}$ is
$\sim 10^8$ K (KMT), but
allowing for more emission from "big blue bump" reduces $T_{IC}$ to
$\sim 10^7$ K (Mathews \& Ferland 1987, Fabian et al. 1986). The presence of
additional heating besides inverse Compton process does not modify the
basic picture (if the heating rate
is proportional to the gas density, e.g.
Yaqoob 1990). However, it can raise the temperature of the corona above
the inverse Compton limit, thus decoupling its value from the shape of the
incident radiation spectrum. Therefore, we can treat the maximum value of
the surface temperature of the corona, $T_C$, as a free parameter of the model.
Including scattered and disc components of irradiation, as well as
bremsstrahlung cooling process near base of the corona, the temperature
of the corona (constant in the vertical direction) is (OMK)
\begin{equation}
T_{cor(r)}=\frac{T_{C}}{2}\,\frac{F_{dir(r)}+F_{scat(r)}+F_{visc(r)}(T_{visc(r)}/T_{C})}{F_{(r)}},
\end{equation}
where
\begin{equation}
F_{(r)}=F_{dir(r)}+F_{scat(r)}+F_{visc(r)}
\end{equation}
is the total radiative flux at the base of the corona from all sources of
irradiation. The indices 'dir', 'scat' and 'visc' refer to the direct,
scattered and disc components respectively, and $T_{visc}$ is the
temperature of the radiation emitted locally by the disc (corresponding to $F_{visc}$).
The fraction of the central direct radiation ($F_{dir}$) and radiation
scattered by the corona ($F_{scat}$) towards the disc at a radius $r$
is given by simple analytical formulae of OMK in the case of
sources with low ratio of the luminosity to the Eddington luminosity.
If the luminosity of the source is closer to the Eddington luminosity the
corona is optically thicker and multiple scatterings play a significant role.
In this case the numerical computations are necessary but the results are
given in the paper of Murray et al. (1994) - hereafter MCKM - for a few
sets of model parameters.
We use both the OMK and the MCKM papers to calculate the
direct and scattered radiative flux irradiating the disk at a given radius.
We therefore follow the assumptions about the corona structure and
geometry made in those papers.
The corona extends up to 0.2 of the maximum radius, defined as the radius at
which the thermal energy of a gas particle, $\sim kT_C$, is
balanced by its gravitational energy $\frac{GM\mu}{r}$
\begin{equation}
r_{C}=\frac{GM\mu}{kT_{C}}\approx\frac{10^{10}}{T_{C8}}
\left(\frac{M}{M_{\odot}}\right) cm,
\end{equation}
where $T_{C8}$ is
the maximum surface temperature of the corona expressed in units of
$10^8$ K, $\mu$ is the mean mass per particle ($\mu=0.61m_{p}$ for fully
ionized gas of cosmic abundances) and $M$ is the mass of the
black hole.
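For orientation, a minimal Python sketch of eq. (3) is given below; the
constants are standard cgs values and the function name is our own.
\begin{verbatim}
# Sketch of eq. (3) in cgs units; for M = 1e8 M_sun and T_C = 1e8 K this
# gives r_C ~ 1e18 cm (~0.3 pc), consistent with the 0.01 - 1 pc range
# quoted for the outer corona.
G, k_B, m_p, M_sun = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33

def corona_radius(M_bh, T_C):
    mu = 0.61 * m_p          # mean mass per particle, fully ionized gas
    return G * M_bh * mu / (k_B * T_C)
\end{verbatim}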
The ionization parameter $\Xi$ (KMT) at the base of the corona equals
(McKee \& Begelman 1990)
\begin{equation}
\Xi_{b(r)}\approx 1.3T^{-3/2}_{cor8(r)} ,
\end{equation}
assuming that at higher densities bremsstrahlung is the only atomic cooling
process. Since the ionization parameter is defined as a ratio of the incident
radiation pressure to the gas pressure, its value determines the density
at the base of the corona (see Section 2.3.2).
\subsection{Computation of the optical/UV continuum}
\subsubsection{Heating by viscous dissipation}
The accretion disc is heated by the dissipation of the gravitational energy of
the gas through viscous forces. The radiative flux corresponding to
this energy, for a disc around a nonrotating black hole, is given by (Page \& Thorne
1974)
\begin{equation}
F_{visc(r)}=\frac{3GM\dot{M}}{8\pi
r^3_g}\cdot f_{(r)}
\end{equation}
\begin{equation}
f_{(r)}=\frac{\left[\sqrt{r}-\sqrt{3}+\sqrt{\frac{3}{8}}
\ln\left(\frac{2-\sqrt{2}}{2+\sqrt{2}}\frac{\sqrt{r}+\sqrt{\frac{3}{2}}}
{\sqrt{r}-\sqrt{\frac{3}{2}}}\right)\right]}{r^{\frac{5}{2}}(r-\frac{3}{2})} ,
\end{equation}
where
\begin{equation}
r_g=\frac{2GM}{c^2}\approx 3.01\cdot
10^{-5}T_{C8}r_{C}
\end{equation}
is the Schwarzschild radius,
and
\begin{equation}
\dot{M}=\frac{L_{visc}}{\varepsilon c^{2}}
\end{equation}
is the accretion rate.
The efficiency $\varepsilon$ is $\sim 5.6$\% for a nonrotating black hole.
In equations (5) and (6) the radius $r$ is expressed in units of $r_g$.
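The following Python fragment sketches eqs (5)--(8) under these conventions
($r$ in units of $r_g$, cgs elsewhere); it is an illustration we add here, not
code from the original computation.
\begin{verbatim}
import numpy as np

G, c = 6.674e-8, 2.998e10      # cgs

def f_shape(r):
    # Dimensionless factor of eq. (6); it vanishes at the inner edge r = 3.
    s = np.sqrt(r)
    log_arg = (2 - np.sqrt(2)) / (2 + np.sqrt(2)) \
              * (s + np.sqrt(1.5)) / (s - np.sqrt(1.5))
    return (s - np.sqrt(3.0)
            + np.sqrt(3.0 / 8.0) * np.log(log_arg)) / (r**2.5 * (r - 1.5))

def F_visc(r, M_bh, L_visc, eps=0.056):
    # Locally dissipated flux, eq. (5), with Mdot = L_visc/(eps c^2), eq. (8).
    r_g = 2 * G * M_bh / c**2                    # eq. (7)
    Mdot = L_visc / (eps * c**2)
    return 3 * G * M_bh * Mdot / (8 * np.pi * r_g**3) * f_shape(r)
\end{verbatim}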
This picture does not leave any room for the X-ray emission. Actually, a
fraction of energy due to accretion is dissipated in the form of X-rays.
However, reliable predictions of that fraction as a function of radius are
not available (see e.g. Witt et al. 1996). On the other hand, the efficiency
of accretion is most probably higher than adopted here, as the black hole may
well be rotating, which can easily increase the bolometric luminosity by a factor of a few.
Therefore we describe the accretion disc emission as above, but we allow
the total luminosity of the central source $L$ to be higher than the
disc luminosity $L_{visc}$ due to the contribution from the unaccounted-for X-ray
emission.
\subsubsection{Direct radiative heating of the disc}
The additional radiation flux heating the disc is the flux of radiation
from the central UV/X-ray source with luminosity L
\begin{equation}
F_{dir(r)}=(1-A)\frac{L}{4\pi({r}^2+H^2_X)}
f_{dir(r)}cos{\theta}_{dir(r)}
\end{equation}
where A is the albedo of the disc, ${\theta}_{dir(r)}$ is the angle between
the incident ray and the normal to the disc at $r$, and
\begin{equation}
f=\frac{F}{L/4\pi r^2}
\end{equation}
is the dimensionless factor used by OMK and MCKM: the ratio of the
radiation flux at the base of the corona to the
unattenuated flux from the central source (in this definition the
$H^2_X$ term in the denominator is neglected). To determine the factor
$f_{dir}$ we use the results of the OMK or MCKM papers, as mentioned in
Sec. 2.1.
We assume A=0.5 as an appropriate value for quasars because the disc surface
is partially ionized (see e.g. \. Zycki et al. 1994).
\subsubsection{Heating by corona}
To describe the effect of irradiation by photons from the central source
scattered towards the disc surface by extended corona we also use the method
described in Sec. 2.1 to determine the scattered radiation flux
\begin{equation}
F_{scat(r)}=(1-A)\frac{L}{4\pi r^2}
f_{scat(r)} \langle cos{\theta}_{scat} \rangle ,
\end{equation}
where ${\theta}_{scat}$ is the angle between the
direction from the scattering point and the normal to the disc at $r$,
and its cosine is averaged over the whole volume of the corona. We determine the factor
$f_{scat(r)} \langle cos{\theta}_{scat} \rangle$ from the OMK or MCKM papers.
We adopt the same albedo as for the direct flux.
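A minimal sketch of eqs (9) and (11) follows; the OMK/MCKM geometrical factors
are taken as given inputs here, since in this work they are read from those
papers rather than recomputed, and the function names are ours.
\begin{verbatim}
import numpy as np

# Sketch of eqs (9) and (11); f_dir, cos_theta_dir and the volume-averaged
# f_scat*<cos theta_scat> are assumed to come from OMK or MCKM.
def F_dir(r, L, H_X, f_dir, cos_theta_dir, A=0.5):
    return (1 - A) * L / (4 * np.pi * (r**2 + H_X**2)) * f_dir * cos_theta_dir

def F_scat(r, L, f_scat_cos, A=0.5):
    return (1 - A) * L / (4 * np.pi * r**2) * f_scat_cos
\end{verbatim}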
\subsubsection{Computation of the spectrum}
The effective temperature of the disc photosphere is given by
\begin{equation}
T_{eff(r)}=\left(\frac{F_{visc(r)}+F_{dir(r)}+F_{scat(r)}}{\sigma}\right)^{\frac{1}{4}}
.
\end{equation}
Assuming black-body emission from the disc, one can calculate the shape
of the continuum as follows
\begin{equation}
f_{\nu}=2\pi \intop_{3r_g}^{r_{max}}rB_{\nu}[T_{eff(r)}]dr,
\end{equation}
where $B_{\nu}$ is the Planck function and $r_{max}$ is the outer radius
of the disc (in calculations we assume $r_{max}=r_{C}$).
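For illustration, eqs (12)--(13) can be evaluated numerically as in the short
sketch below (our own helper, assuming the radial flux profiles are already
tabulated on a grid of radii):
\begin{verbatim}
import numpy as np

h, k_B, c, sigma_SB = 6.626e-27, 1.381e-16, 2.998e10, 5.670e-5   # cgs

def T_eff(F_visc, F_dir, F_scat):
    # eq. (12)
    return ((F_visc + F_dir + F_scat) / sigma_SB) ** 0.25

def disc_spectrum(nu, radii, T_eff_r):
    # eq. (13): f_nu = 2 pi * integral of r * B_nu[T_eff(r)] dr
    x = h * nu[:, None] / (k_B * T_eff_r[None, :])
    B_nu = 2 * h * nu[:, None]**3 / c**2 / np.expm1(x)
    return 2 * np.pi * np.trapz(radii * B_nu, radii, axis=1)
\end{verbatim}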
We neglect the modification of the disc spectrum due to electron scattering.
Although such effects were discussed by a number of authors (e.g.
Czerny \& Elvis 1987, Ross \& Fabian 1993, Shimura \& Takahara 1995)
these corrections strongly depend on the accuracy of the description of atomic
processes as well as assumptions on the disc viscosity.
\subsection{The spectrum of the central source and the emission lines}
\subsubsection{Incident radiation spectrum}
The computations of the continuum given in Sect. 2.2 cover only
the optical/UV band as only the knowledge of the total X-ray luminosity, but
not the shape of the primary X-ray emission, was required to compute the
thermal emission of the disc dominating in this spectral band. On the other
hand the computations of the strength of the emission lines require the
determination of the spectrum in the EUV and X-ray band as well.
Therefore, for the purpose of calculating emission lines we use the spectral
shapes which approximate well the observed overall spectra of Seyfert galaxies
and quasars.
The shape of the big blue bump is parametrized in a similar way to
Mathews \& Ferland (1987). However, we adjusted the parametrization to agree
with present observational data, and we adopted an interpolation giving a
ratio of the bolometric luminosity of the big blue bump to that of the X-rays
equal to $\sim 1$ for Seyfert galaxies and $\sim 10$ for quasars. The details
of this parametrization are given in the Appendix. The inverse Compton
temperatures for these two spectra are $5.8\cdot 10^{7}$ K for Seyfert
galaxies and $1.1\cdot 10^{7}$ K for quasars.
\subsubsection{Local line emissivity}
We assume that the high ionization emission lines come from the clouds which
form in the narrow intermediate zone between an accretion disc and a hot
corona. Thermal
instability of the irradiated gas at intermediate temperatures causes a discontinuous
transition between the disc and the corona (Begelman, McKee \& Shields 1983) if
only radiative processes are taken into account. In a realistic situation, when
some level of turbulence is present in the medium, we may expect the
spontaneous formation of cool clumps embedded in the hot coronal plasma
in a relatively narrow transition zone (R\' o\. za\' nska \& Czerny 1996).
The column density of the clouds is not determined precisely, but estimates
based on the relative efficiency of conduction and thermal processes give
values of the same order as those expected for the clouds forming the
Broad Line Region. Therefore we assume for all the clouds a column density
equal to $10^{23}$ cm$^{-2}$.
Cloud parameters (density $n_{cl}$ and temperature $T_{cl}$) at a given
radius are determined
by two requirements. The first condition is pressure equilibrium
with the hot medium at the base of the corona. The second condition, in the
case of optically very thin plasma, would be settlement on the lower
stable branch of the $\Xi - T$ curve (KMT). However, this branch is not applicable
to media of higher optical depth. Since we do not expect clouds to cool
below the local effective temperature of the disk surface, we assume their
temperature to be at that value, $T_{cl}=T_{eff}$.
Therefore we adopt the condition
\begin{equation}
n_{cl(r)}=n_{d(r)}=n_{b(r)}\frac{T_{cor(r)}}{T_{eff(r)}}=
n_{b(r)}\frac{T_{cor(r)}}{T_{cl(r)}}
\end{equation}
where $n_{b(r)}$ is the density at the basis of the corona determined from
the value of the ionization parameter $\Xi_b$ and $T_{cor(r)}$ and
$T_{eff(r)}$ are given by eq. (1) and (12).
We discuss the kinematics of the clouds in Sect. 2.3.3 since it is essential
for computation of the line profiles. However, cloud motion
influences the line intensities as well.
Since the clouds are blown out radiatively from the formation region, they
do not form a flat layer on top of the disc but are fully exposed
to the incident radiation flux; the inclination angle of the direct incident
flux therefore does not have to be included through the cosine factor, as is the case
for the disk surface.
We calculate the emissivity of several emission lines (see Table 5 for the
list) using the photoionization code CLOUDY. The calculations are made
for a grid of radii corresponding to the adopted range of $n_{cl}$: $10^9 -
10^{13}$ cm$^{-3}$ (the same for all models).
For each radius separately we calculate
the contribution to emission lines assuming the local value of the
total heating flux $F_{(r)}$ given by eq. (2),(5),(6),(9),(11), density
$n_{d(r)}$ from eq. (14) and fixing the column density $N_{H}=10^{23}$ cm$^{-2}$.
\subsubsection{Radial dependence of number of clouds}
Clouds forming in the transition layer between the disk and the corona do
not necessarily cover the disk surface uniformly. The local number of clouds
weights the local emissivity, thus influencing both the line ratios
and the line profiles. The number of clouds, $N_{(r)}$,
existing at a given radius depends
both on the cloud formation rate, $\dot N_{(r)}$, and on the expected lifetime of
a cloud, $t_{(r)}$
\begin{equation}
N _{(r)}=\dot N_{(r)} t_{(r)}.
\end{equation}
As the detailed process of cloud formation and destruction is not well
understood we discuss a few representative cases based on available estimates.
We consider two cases of the cloud formation process. In the first case we
assume that only one cloud can form at a given moment and a given radius
and the formation time is given by the characteristic isobaric cooling time,
$\tau_{(r)}$ (McKee \& Begelman 1990)
\begin{equation}
{\rm case (I)}~~~~~ \dot N_{(r)}\sim \frac{1}{\tau_{(r)}}\sim n_{b(r)} .
\end{equation}
In the second case we assume that the clouds form in the entire
instability zone; therefore the number of clouds forming at the same time
is related to the ratio of the zone's geometrical thickness $\Delta Z$, which
is of the order of the Field length (e.g. R\'o\.za\'nska \& Czerny 1996),
to the size of a cloud, $r_{cl(r)}=N_{H}/n_{cl(r)}$
\begin{equation}
{\rm case (II)}~~~~~ \dot N_{(r)} \sim \frac{\Delta Z_{(r)}}{r_{cl(r)} \tau_{(r)}}
\sim n_{cl(r)} .
\end{equation}
We describe the destruction process using three different approaches.
In case (i) we assume that the clouds survive only within the instability
zone. As they move upwards through the zone
under the influence of the radiation from the disk surface, their lifetime
is given by the travel time through the zone $\Delta Z$. We assume that
the radiative acceleration is constant, which gives the relation
\begin{equation}
{\rm case (i)}~~~~~ t_{(r)} \sim \frac{1}{\sqrt{n_{b(r)}}} .
\end{equation}
In case (ii) we assume that clouds survive even outside the instability
zone but are destroyed by conduction, evaporating into the
surrounding hot corona. The timescale of such a process
is given by McKee \& Begelman (1990)
\begin{equation}
{\rm case (ii)}~~~~~ t_{(r)} \sim \frac{1}{n_{b(r)}} .
\end{equation}
In case (iii) we assume that clouds are accelerated so efficiently by the
radiation pressure that they soon reach velocities exceeding the local
sound velocity in the corona. On reaching this terminal velocity of $\sim
2000$ km/s
they are destroyed by dynamical instabilities. Since we assume that
the radiative acceleration of the clouds is independent of the
disk radius, the time needed to reach
this terminal velocity is also constant if small variations of the corona
temperature with radius are ignored. In that case
\begin{equation}
{\rm case (iii)}~~~~~ t_{(r)} = const.
\end{equation}
We summarize the six models of the number of clouds $N_{(r)}$ in Table 1.
\begin{table}
\caption{The models of radial dependence of number of clouds.}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Model & Cases from & The radial \\
\ & Sec. 2.3.3 & dependence \\
\hline
a & (I)(i) & $\sqrt{n_{b(r)}}$ \\
b & (I)(ii) & const. \\
c & (I)(iii) & $n_{b(r)}$ \\
d & (II)(i) & $\sqrt{n_{b(r)}}/T_{cl(r)}$ \\
e & (II)(ii) & $T_{cl(r)}$ \\
f & (II)(iii) & $n_{cl(r)}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
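As a convenience only, the six radial scalings summarized in Table 1 can be
tabulated as in the sketch below; the normalizations are arbitrary, the helper
is ours, and the profiles $n_{b(r)}$, $n_{cl(r)}$ and $T_{cl(r)}$ are assumed
to come from the corona model of Sections 2.1 and 2.3.2.
\begin{verbatim}
import numpy as np

# Radial cloud-number scalings of Table 1, up to a normalization constant.
def cloud_number(model, n_b, n_cl, T_cl):
    scalings = {
        'a': np.sqrt(n_b),           # (I)(i)
        'b': np.ones_like(n_b),      # (I)(ii)
        'c': n_b,                    # (I)(iii)
        'd': np.sqrt(n_b) / T_cl,    # (II)(i)
        'e': T_cl,                   # (II)(ii), as listed in Table 1
        'f': n_cl,                   # (II)(iii)
    }
    return scalings[model]
\end{verbatim}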
\subsubsection{Line profiles}
We assume that the velocity field of the emitting clouds consists of the
Keplerian orbital motion and the outflow perpendicular to the disc
surface.
In our computations of the orbital motion we follow the method of
Chen and Halpern
(1989), including the relativistic effects. Their procedure requires the
specific intensity from the disc surface
\begin{equation}
I_{(r ,{\nu}_e)}=\frac{1}{4\pi}{\epsilon}_{(r)}\cdot \frac
{e^{-\frac{{({\nu}_e-{\nu}_0)}^2}{2{\sigma}^2}}}{(2\pi )^{1/2}\sigma }
\ [ergs s^{-1}{cm}^{-2}{sr}^{-1}Hz^{-1}]
\end{equation}
where ${\epsilon}_{(r)} $ is the local emissivity of the clouds (see
Sec. 2.3.2) and the exponential factor is
related to the local broadening of the emission line. The broadening (due,
for example, to electron scattering or turbulence) is characterized by the quantity
$\sigma$, and ${\nu}_e$, ${\nu}_0$ are the emitted and rest frequencies.
Following Chen and Halpern (1989) we use
$\sigma/{\nu}_0=0.03$ in our calculations.
The vertical motion is computed assuming that the acceleration of the clouds
is constant, i.e. the velocity increases linearly with the distance from
the disc surface. The maximum velocity is constrained by cloud destruction.
Since the line profiles do not depend strongly on the detailed description
of the vertical motion, the orbital velocity usually being much higher than
the vertical one, we do not distinguish between the three cases introduced in
Section 2.3.3; in all cases we assume that the clouds move under
constant radiative acceleration and reach the same terminal
velocity of 2000 km/s, independently of the disk radius, as expected in
case (iii).
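The local specific intensity of eq. (21) is simply the cloud emissivity
weighted by a Gaussian kernel; the following short Python fragment (our own
illustration, not part of the original computation) makes this explicit.
\begin{verbatim}
import numpy as np

# Sketch of eq. (21) with sigma/nu_0 = 0.03 (Chen & Halpern 1989).
def specific_intensity(nu_e, nu_0, emissivity, sigma_ratio=0.03):
    sigma = sigma_ratio * nu_0
    gauss = np.exp(-(nu_e - nu_0)**2 / (2 * sigma**2)) \
            / (np.sqrt(2 * np.pi) * sigma)
    return emissivity / (4 * np.pi) * gauss
\end{verbatim}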
\begin{figure*}
\leavevmode
\epsfysize = 110 mm \epsfbox[20 380 560 770]{Fig1.ps}
\caption{The spectra of irradiated accretion disks parametrized
by the $L/L_{Edd}$ ratio and the corona
temperature. The dotted curves show
the spectra of non-irradiated discs and the dashed ones illustrate the
cases with $H_X=33.2r_g$.}
\end{figure*}
\section{Results and discussion}
We calculate the local radiation flux at the disc surface for
the following parameters
\[
M=10^8M_{\odot}
\]
\[
\varepsilon =0.056
\]
\[
T_{C}=10^8 K; 10^7 K
\]
\[
L=0.01 L_{E}; 0.34 L_E; 0.59 L_E
\]
\[
H_X=3.32 r_g; 33.2 r_g.
\]
The second value of the height could only be used in the calculations of the
low luminosity cases as the numerical solutions of the high luminosity
corona structure (MCKM) are available only for the first value.
In agreement with the generally accepted trend that low luminosity-to-Eddington
ratios are appropriate for Seyfert galaxies while
luminosities closer to the Eddington limit are appropriate for quasars,
we use the Seyfert galaxy spectrum for $L=0.01 L_{E}$ and the quasar spectrum
for $L=0.34 L_E, 0.59 L_E$ (see the Appendix).
\subsection{IR/optical/UV continuum}
We calculate the accretion disk spectra taking into account the direct
irradiation by the central source as well as the irradiation by the flux
scattered in the corona, as described in Sect. 2.2.
Figure 1 illustrates our results in $\log \nu f_{\nu}$ versus $\log \nu$ form.
In the case of a low $L/L_{Edd}$ ratio, the corona is never strong and the direct
irradiation dominates, independently of the corona temperature. Therefore
the adopted location of the irradiating source is of significant importance.
No corona influence is seen when the temperature is lower; some redistribution
of the flux towards the outer part of the disk is present when the temperature
is higher.
In the case of a high $L/L_{Edd}$ ratio, the effect of the corona is essential and
direct irradiation negligible, which means that the location of the central
source no longer matters (unless $H_X$ were very high indeed).
The effect of irradiation is significant even for the lower corona temperature
and is particularly strong for the high temperature, leading to a significant
enhancement of the spectra in the IR/optical band.
\begin{table}
\caption{Spectral indices in the selected ranges of continuum.}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
\hline
Model & $\alpha_{opt}$ & $\alpha_{UV-1}$ & $\alpha_{UV-2}$ \\
\hline
\hline
$L=0.01L_E;T=10^7 K;$ & $-$ 0.5 & $-$ 1.1 & $-$ 1.22 \\
\multicolumn{4} {l} {$H_X=33.2r_g$} \\
\hline
$L=0.01L_E;T=10^7 K;$ & 0.01 & $-$ 0.74 & $-$ 0.94 \\
\multicolumn{4} {l} {$H_X=3.32r_g$} \\
\hline
$L=0.01L_E;T=10^8 K;$ & $-$ 0.06 & $-$ 0.75 & $-$ 0.94 \\
\multicolumn{4} {l} {$H_X=33.2r_g$} \\
\hline
$L=0.01L_E;T=10^8 K;$ & $-$ 0.1 & $-$ 0.9 & $-$ 1.14 \\
\multicolumn{4} {l} {$H_X=3.32r_g$} \\
\hline
$L=0.34L_E;T=10^7 K$ & $-$ 0.19 & $-$ 0.27 & $-$ 0.32 \\
\hline
$L=0.34L_E;T=10^8 K$ & $-$ 0.55 & $-$ 0.36 & $-$ 0.4 \\
\hline
$L=0.59L_E;T=10^7 K$ & $-$ 0.32 & $-$ 0.3 & $-$ 0.34 \\
\hline
$L=0.59L_E;T=10^8 K$ & $-$ 0.66 & $-$ 0.32 & $-$ 0.35 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
In order to compare the derived spectra with the observed continuum
shapes of accretion discs,
we calculate the spectral indices $\alpha$ ($f_{\nu}\sim
\nu^{\alpha}$) in optical range ($\nu \sim 10^{14.5}\div 10^{15}$ Hz)
and in the following pieces of UV range $\nu \sim 10^{15.12}\div
10^{15.32}$ Hz (UV-1) and $\nu \sim 10^{15.22}\div 10^{15.36}$ Hz
(UV-2). They are presented in Table 2. The mean values of
the spectral index for these three ranges, obtained from observations by
various groups for various samples of quasars, are shown in Table 3.
They can be supplemented by the UV slope (between 15.13 and 15.45) of
the composite radio-quiet quasar spectrum given by Zheng et al. (1996),
equal to $-0.86$, although their formal error does not reflect the dispersion of the
contributing spectra.
Unfortunately, equally reliable data for Seyfert galaxies are not available
since in the case of weaker active galactic nuclei the determination of the
spectral slope is complicated by strong contamination of the spectra by
circumnuclear starlight.
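One simple way to obtain such indices (not necessarily the procedure used
here) is a least-squares fit of $\log f_\nu$ against $\log\nu$ over each band,
as in the short illustrative helper below (our own sketch).
\begin{verbatim}
import numpy as np

# Least-squares slope of log f_nu vs log nu over a band given in log10(nu).
def spectral_index(nu, f_nu, log_nu_min, log_nu_max):
    m = (np.log10(nu) >= log_nu_min) & (np.log10(nu) <= log_nu_max)
    slope, _ = np.polyfit(np.log10(nu[m]), np.log10(f_nu[m]), 1)
    return slope

# alpha_opt = spectral_index(nu, f_nu, 14.5, 15.0)
# alpha_UV1 = spectral_index(nu, f_nu, 15.12, 15.32)
# alpha_UV2 = spectral_index(nu, f_nu, 15.22, 15.36)
\end{verbatim}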
Comparing directly the predicted spectral slopes of the high luminosity models
(roughly adjusted to quasar
luminosities) with the observed values we conclude that they roughly correspond
to the data, taking into account large errors. However, the agreement with
mean values is far from perfect since the UV slopes are too flat. This is
clearly caused by adopting the value of the black hole mass equal
$10^8 M_{\odot}$, actually too low by a factor 3 to 10. Higher values of
the mass would give flatter optical spectra and steeper UV spectra due to the
the decrease of the disk temperature for given $L/L_{Edd}$ ratio.
It might therefore slightly favor the models with higher corona temperature.
Unfortunately,
the available corona models are only for $10^8 M_{\odot}$ (see Sect. 2.1) so
we cannot support this conclusion quantitatively.
\begin{table}
\caption{The mean values with standard deviations of spectral index
obtained from observations.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\hline
$\alpha_{opt}$ &
$\alpha_{UV-1}$ &
$\alpha_{UV-2}$ \\
Neugebauer et al. & Baldwin et al. & Francis et al. \\
1987 & 1989 & 1992 \\
\hline
\ & \ & \ \\
$-$0.4$\pm$0.4 & $-$0.91$\pm$0.34 & $-$0.67$\pm$0.5 \\
\ & \ & \ \\
\hline
\hline
\end{tabular}
\end{center}
\medskip
In the Neugebauer et al. paper the emission in the {\it small blue bump}
is subtracted.
\end{table}
\subsection{Emission lines}
\begin{figure}
\epsfxsize = 80 mm \epsfysize = 75 mm \epsfbox[50 380 480 750]{Fig2.ps}
\caption{The examples of the radial distribution of the local line emissivity
of four strongest lines as defined in Sect. 2.3.2 for $L/L_{Edd}$ =0.59 and the corona
temperature $10^7$ K (left panel) and $10^8$ K (right panel). The solid,
long-dashed, short-dashed and dotted curves represent $Ly_{\alpha}$, CIV, HeII
and NV lines respectively.}
\end{figure}
We can easily see that the emission of the broad emission lines by clouds
formed close to the base of the hot accretion disk corona is a reasonable
assumption.
The typical widths of
lines reaching FWHM$\approx$2000 - 10000 km/s correspond to Keplerian
velocities over the radii range
\mbox{$1.3\cdot 10^{16} cm < r < 3.4 \cdot 10^{17} cm$} in the case of an
accretion disc around a $10^8 M_{\odot}$ black hole. This is the region covered
by the corona, and the cloud emissivity is large in this range (see Fig. 2).
The line ratios should also be reasonable, since cloud formation is subject to
specific requirements on the ionization parameter.
Detailed predictions of the model can support this scenario and allow us to
put some constraints on the $L/L_{Edd}$ ratio, the corona temperature and the radial
distribution of clouds.
\subsubsection{Line profiles}
\begin{table*}
\begin{minipage}{150mm}
\caption{FWHMs in km/s of $Ly_{\alpha}$ and CIV lines profiles}
\begin{tabular}{@{}lcccccccc}
\hline
\hline
Model &
\multicolumn{2}{c}{$i=0^{\circ}$} &
\multicolumn{2}{c}{$i=30^{\circ}$} &
\multicolumn{2}{c}{$i=60^{\circ}$} &
\multicolumn{2}{c}{$i=80^{\circ}$} \\
\ & $Ly_{\alpha}$ & CIV & $Ly_{\alpha}$ & CIV & $Ly_{\alpha}$ &
CIV & $Ly_{\alpha}$ & CIV \\
\hline
\multicolumn{9} {c} {Model a} \\
\hline
$L=0.34L_E;T=10^7 K$ & 3130 & 3240 & 7070 & 7590 & 10940 & 12080
& 12450 & 13080 \\
$L=0.34L_E;T=10^8 K$ & 2840 & 2760 & 5890 & 6700 & 9020 & 10800
& 9950 & 12150 \\
$L=0.59L_E;T=10^7 K$ & 2980 & 2690 & 5750 & 5750 & 8880 & 9060
& 9760 & 10300 \\
$L=0.59L_E;T=10^8 K$ & 2690 & 2690 & 5194 & 4900 & 7710 & 7400
& 8440 & 8070 \\
\hline
\multicolumn{9} {c} {Model b} \\
\hline
$L=0.01L_E;T=10^8 K;H_X=33.2r_g$ & 2700 & 2700 & 3980 & 4090 & 5670
& 5780 & 6220 & 6340 \\
$L=0.01L_E;T=10^8 K;H_X=3.32r_g$ & 2800 & 2800 & 4460 & 4720 & 6670
& 7040 & 7150 & 7900 \\
\hline
$L=0.34L_E;T=10^7 K$ & 2620 & 2610 & 3430 & 3390 & 4680 & 4670 &
5200 & 5270 \\
$L=0.34L_E;T=10^8 K$ & 2620 & 2650 & 3510 & 3790 & 5450 & 5670 &
6010 & 6340 \\
$L=0.59L_E;T=10^7 K$ & 2620 & 2690 & 3200 & 3280 & 4346 & 4462 &
5000 & 5100 \\
$L=0.59L_E;T=10^8 K$ & 2600 & 2610 & 3790 & 3790 & 5670 & 5780 &
6300 & 6450 \\
\hline
\multicolumn{9} {c} {Model f} \\
\hline
$L=0.34L_E;T=10^7 K$ & 5120 & 2910 & 16690 & 11860 & $>$ 20000
& $\sim$ 20000 & ... & ... \\
$L=0.34L_E;T=10^8 K$ & 2870 & 2870 & 9320 & 9770 & 15360 & 16020
& 17240 & 17970 \\
$L=0.59L_E;T=10^7 K$ & 4050 & 2840 & 11050 & 9360 & 16830 & 15210
& 19200 & 16700 \\
$L=0.59L_E;T=10^8 K$ & 2730 & 2730 & 6810 & 6260 & 10610 & 9500
& 11790 & 10550 \\
\hline
\multicolumn{9} {c} {Observations} \\
\hline
Brotherton et al. (1994)ALS &
\multicolumn{4}{c}{$Ly_{\alpha}$: 8000 $\pm$ 600} &
\multicolumn{4}{c}{CIV: 6800 $\pm$ 300} \\
\hline
Baldwin et al. (1989)BQS & \multicolumn{8} {c} {CIV: 2500 $\div$
8000; mean=5680 $\pm$ 390} \\
\hline
Wills et al. (1993) & \multicolumn{8} {c} {CIV: 1970 $\div$
10400; mean=4870 $\pm$ 180} \\
\hline
\hline
\end{tabular}
\end{minipage}
\end{table*}
We present the calculated profiles for the CIV and Ly$\alpha$ lines
only, because these lines are the best studied statistically.
The line profiles determined by the model depend on the disk/corona model and
on the adopted radial distribution of the number of clouds, which reflects
various assumptions about their formation and destruction.
An example of the radial distribution of the local emissivity of several lines
is shown in Fig. 2. It is further combined with the radial distribution of
the number of clouds. In the case of model (b) (see Table 1) this distribution
is preserved whilst in other models it includes additional local weight
and either outer or inner parts of the distribution are enhanced.
We calculate the profiles for four values of the inclination angle of
accretion disc: 0$^o$, 30$^o$, 60$^o$ and 80$^o$.
According to the unified scheme of
AGN supported by a number of strong observational data (e.g.
Antonucci 1993) objects viewed at large inclination angles are obscured
by molecular/dusty torus and are not identified as quasars so the mean
inclination angle is expected to be smaller than 60$^o$.
We compare the profiles with the mean values and observed ranges of
these two line widths.
The first sample consists of the exceptionally
well studied quasar line profiles of Brotherton et al. (1994).
These authors analysed two quasar samples (a small sample with
high-quality UV spectra and another from the Large Bright Quasar Survey;
hereafter ALS and LBQS)
and were able to decompose
the contribution to the broad lines into an Intermediate Line Region
(distant and most probably spherical) and a Very Broad Line Region.
Here we use the ALS data.
\begin{table*}
\begin{minipage}{180mm}
\caption{Line Intensity Ratios}
\begin{tabular}{@{}lccccccccccc}
\hline
\hline
Model & $Ly_{\alpha}$ & NV & CII & SiIV & OIV] & CIV & HeII & OIII] &
AlIII & SiIII] & CIII] \\
\ & 1216 & 1240 & 1335 & 1397 & 1402 & 1549 & 1640 & 1663 & 1859 &
1892 & 1909 \\
\hline
\multicolumn{12} {c} {Model a} \\
\hline
$L=0.01L_E;T=10^8 K;H_X=33.2r_g$ & 80 & 19 & 0 & 16 & 8 & 100 & 18
& 4 & 2 & 2 & 3 \\
$L=0.01L_E;T=10^8 K;H_X=3.32r_g$ & 92 & 23 & 0 & 10 & 10 & 100 & 15
& 5 & 1 & 2 & 5 \\
\hline
$L=0.34L_E;T=10^7 K$ & 114 & 15 & 0 & 0 & 4 & 100 & 76 & 0 & 0 & 0
& 0 \\
$L=0.34L_E;T=10^8 K$ & 239 & 33 & 0 & 19 & 8 & 100 & 45 & 3 & 1 &
3 & 3 \\
$L=0.59L_E;T=10^7 K$ & 144 & 20 & 0 & 1 & 1 & 100 & 58 & 2 & 0 & 0
& 1 \\
$L=0.59L_E;T=10^8 K$ & 239 & 25 & 0 & 19 & 7 & 100 & 42 & 2 & 1 &
3 & 2 \\
\hline
\multicolumn{12} {c} {Model b} \\
\hline
$L=0.01L_E;T=10^8 K;H_X=33.2r_g$ & 83 & 4 & 0 & 7 & 3 & 100 & 6 &
7 & 1 & 3 & 14 \\
$L=0.01L_E;T=10^8 K;H_X=3.32r_g$ & 84 & 6 & 0 & 6 & 4 & 100 & 6 &
7 & 1 & 2 & 14 \\
\hline
$L=0.34L_E;T=10^7 K$ & 132 & 5 & 0 & 1 & 2 & 100 & 24 & 1 & 0 & 0
& 0 \\
$L=0.34L_E;T=10^8 K$ & 276 & 8 & 1 & 14 & 5 & 100 & 21 & 8 & 1 & 10
& 14 \\
$L=0.59L_E;T=10^7 K$ & 125 & 6 & 0 & 1 & 3 & 100 & 12 & 2 & 0 & 0 &
1 \\
$L=0.59L_E;T=10^8 K$ & 226 & 7 & 0 & 13 & 4 & 100 & 20 & 3 & 1 &
8 & 5 \\
\hline
\multicolumn{12} {c} {Model f} \\
\hline
$L=0.34L_E;T=10^7 K$ & 210 & 34 & 0 & 0 & 3 & 100 & 144 & 0 & 0 &
0 & 0 \\
$L=0.34L_E;T=10^8 K$ & 234 & 46 & 0 & 21 & 9 & 100 & 55 & 1 & 2 &
2 & 1 \\
$L=0.59L_E;T=10^7 K$ & 166 & 25 & 0 & 1 & 1 & 100 & 83 & 1 & 0 &
0 & 0 \\
$L=0.59L_E;T=10^8 K$ & 250 & 37 & 0 & 22 & 9 & 100 & 54 & 2 & 2 &
2 & 1 \\
\hline
\multicolumn{12} {c} {Observations} \\
\hline
Brotherton et al. (1994)ALS & 219 & 111 & 6 & \multicolumn{2} {c}
{42} & 100 & 31 & 0 & 17 & $<$5 & 41 \\
Brotherton et al. (1994)LBQS & 154 & 137 & 6 & \multicolumn{2} {c}
{55} & 100 & 40 & 0 & 22 & ... & 59 \\
\hline
Baldwin et al. (1989)BQS & 236 & 106 & ... & ... & ... & 100 &
\multicolumn{2} {c} {30} & 16 & ... & 43 \\
\hline
Francis et al. (1991)LBQS & \multicolumn{2} {c} {159} & 4 &
\multicolumn{2} {c} {30} & 100 & \multicolumn{2} {c} {29} &
\multicolumn{3} {c} {46(AlIII + CIII])} \\
\hline
Zheng et al. (1996) & 192 & 27 & 1 & \multicolumn{2} {c} {16} &
100 & 7 & 5 & 6 & 5 & 23 \\
\hline
\hline
\end{tabular}
\end{minipage}
\end{table*}
The second sample (Bright Quasar Sample, BQS) is from Baldwin et al.
(1989)
and the third one consists of quasars from Wills et al. (1993), which are not
decomposed into ILR and VBLR as above. In our model we expect a contribution
from a broad range of radii, so such a decomposition might not be necessary.
In Table 4 we give the observational constraints and the results from
our most promising models, i.e. cases (a), (b) and (f).
Radial cloud distribution (c) gives unreasonably broad profiles for both
lines, since it enhances the emission from inner radii far too much (the number of
clouds decreases with radius far too fast). Models from families (d) and
(e) give exactly the opposite trend, leading to unreasonably narrow
profiles.
Model (b) also gives too narrow profiles, although not as narrow as cases
(d) and (e).
A slight decrease of the cloud number with radius is clearly favored.
The second and third of the class (a) models represent the CIV
distribution well and require the dusty torus to cover inclination angles
above $\sim 60^o$. However, these solutions do not satisfy the
observational requirement that the $Ly_{\alpha}$ line is broader than CIV.
We conclude that the most favorable representation is given by model
(f) with $L/L_{Edd} =0.59$ and a corona temperature of $10^8$ K, with the torus
shielding the view above $\sim 60 - 70^o$. This last quantity is very sensitive to
the corona temperature. We find the solution with the low temperature
equally satisfactory if the opening angle of the torus is as small as $30^o$.
\subsubsection{Line intensities}
We compute line ratios for the three models of the radial cloud distribution
which were most promising from the point of view of the $Ly_{\alpha}$ and CIV profiles,
also varying the other model parameters. The results are presented in Table 5.
We compare them with the line ratios for quasars determined by Brotherton
et al. (1994) for ALS and LBQS samples, Baldwin et al. (1989), Francis et
al. (1991) and Zheng et al. (1996).
The $Ly_{\alpha}$ to CIV ratios predicted by models with luminosity close to the
Eddington luminosity and the mean quasar spectrum fall reasonably close to
the observed values,
although many models with a low corona temperature tend to underproduce
this ratio, whilst high-temperature corona models sometimes overproduce it.
The dispersion between different observational results is considerable, but
we see that a low corona temperature, consistent with the inverse Compton
temperature of the quasar spectra, is allowed only in case (f) with an $L/L_{Edd}$ ratio
equal to 0.34; all other models require a higher temperature to produce
the required amount of $Ly_{\alpha}$.
Model (f), favored by the study of the line profiles,
with the corona temperature somewhat below $10^8$ K seems quite
attractive if compared, for example, with Zheng et al. (1996).
It could reproduce well the $Ly_{\alpha}$ to CIV ratio as well as
the amount of NV and (perhaps) SiIV. However, it overproduces the HeII line,
although this discrepancy is not as strong if other samples are considered.
It also underproduces some other weak lines, such as AlIII, SiIII] and CIII].
However, it is possible that there is a contribution to the line emission
from collisional excitation. In the case of clouds moving across the corona
it is only natural to expect the formation of a shock at the cloud front
and therefore some additional ionization. This effect was neglected in our
study.
\bigskip
\section{Conclusions}
The presented results support the following view of quasars.
Quasars radiate at an $L/L_{Edd}$ ratio of $\sim 0.5$, and the accretion disk in these
objects is surrounded by a hot corona with a temperature higher than the inverse
Compton temperature. The broad emission lines observed in their spectra come from
clouds which form continuously at the base of the corona due to thermal
instabilities and are blown out by the radiation pressure until they
are destroyed on reaching supersonic velocities, $\sim 2000$ km/s.
However, further studies are necessary to confirm this picture. For example,
extension of the computations towards higher values of the black hole mass
and the inclusion of the contribution from the collisionally heated
sides of moving clouds may be essential.
\bigskip
\section*{Acknowledgements}
We thank Gary Ferland for providing us with his photoionization code
CLOUDY version 80.07. We are grateful to Agata R\'o\.za\'nska for
many helpful discussions.
This work was supported in part
by grants 2P03D00410 and 2P30D02008 of the Polish State Committee for
Scientific Research.
\bigskip
\section*{Appendix}
We parametrize the continua emitted by the central regions of the
accretion flow by adopting fixed values of the energy index $\alpha$
in a number of
energy bands given in $\log\nu$. In the case of Seyfert galaxies we
assume 10.5 - 14.83 ($\alpha=2.5$), 14.83 - 15.76 (-0.5), 15.76 - 16.12 (-1),
16.12 - 16.60 (-3), 16.60 - 18.70 (-0.7), 18.70 - 19.37 (-0.9), 19.37 - 22.37
(-1.67). In the case of quasars we assume 10.5 - 14.83 ($\alpha$=2.5),
14.83 - 15.76 (-0.5), 15.76 - 16.12 (-1), 16.12 - 17.05 (-3), 17.05 - 19.37
(-0.7), 19.37 - 22.37 (-1.67).
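As an aid to reproducing this parametrization, a small Python sketch (our own,
with arbitrary normalization) that builds such a broken power law with the
slopes matched at the band edges is given below.
\begin{verbatim}
import numpy as np

# Broken power law f_nu ~ nu^alpha on the bands listed above, continuous
# at the band edges; overall normalization is arbitrary.
def broken_powerlaw(log_nu, edges, alphas):
    log_nu = np.asarray(log_nu, dtype=float)
    log_f = np.full_like(log_nu, -np.inf)
    offset = 0.0
    for lo, hi, a in zip(edges[:-1], edges[1:], alphas):
        band = (log_nu >= lo) & (log_nu < hi)
        log_f[band] = offset + a * (log_nu[band] - lo)
        offset += a * (hi - lo)
    return 10.0 ** log_f

seyfert_edges  = [10.5, 14.83, 15.76, 16.12, 16.60, 18.70, 19.37, 22.37]
seyfert_alphas = [2.5, -0.5, -1.0, -3.0, -0.7, -0.9, -1.67]
quasar_edges   = [10.5, 14.83, 15.76, 16.12, 17.05, 19.37, 22.37]
quasar_alphas  = [2.5, -0.5, -1.0, -3.0, -0.7, -1.67]
\end{verbatim}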
\bigskip
\section{Introduction}
What were the feedback effects from the first generation of stars in the Universe? The first stars, which formed in dark matter (DM) minihaloes of
mass $\sim 10^{6}{\rmn M}_{\odot}$ at redshifts of $z \sim 20$, were likely very massive, having characteristic masses of the order of $\sim 100 {\rmn M}_{\odot}$
(Bromm, Coppi \& Larson 1999, 2002; Abel, Bryan \& Norman 2002; Nakamura \& Umemura 2001). These massive Population III stars would have radiated at temperatures of
$\sim$ 10$^5$~K (e.g. Bond, Arnett \& Carr 1984), generating enough ionizing photons to completely ionize the minihaloes in which they were formed and to contribute to
the reionization of the Universe (e.g. Barkana \& Loeb 2001; Alvarez, Bromm \& Shapiro 2006).
In addition to the radiation emitted by the first stars during their lifetimes,
those Population III stars with masses 40 ${\rmn M}_{\odot}$ $\la$ $M_{*}$ $\la$ 140 ${\rmn M}_{\odot}$ or $M_{*}$ $\ga$ 260 ${\rmn M}_{\odot}$ are predicted to have
collapsed to form black holes directly, possibly providing the seeds for the first quasars (Madau \& Rees 2001; Heger et al. 2003; Madau et al. 2004; Ricotti \&
Ostriker 2004; Kuhlen \& Madau 2005), although
more massive seed black holes may have been formed after the epoch of the first stars in DM haloes with virial temperatures of $\ga$ 10$^4$ K (Bromm \& Loeb 2003;
Begelman,
Volonteri \& Rees 2006; Spaans \& Silk 2006).
The growth of the first black holes must have been rapid enough to account for the powerful quasars observed at redshifts of $z \ga 6$ (e.g. Fan et al. 2004, 2006),
believed to be fueled by accretion onto supermassive black holes (SMBHs) with masses $\sim$ 10$^9$ ${\rmn M}_{\odot}$ (e.g. Haiman \& Loeb 2001; Volonteri \& Rees 2005;
Volonteri \& Rees 2006). How such vigorous accretion of matter could have taken place poses an important question, as it has been shown that the radiation from the first
stars heats and evacuates the gas residing within the $\sim$ 10$^6$ ${\rmn M}_{\odot}$ minihaloes in which they are born (Kitayama et al. 2004; Whalen, Abel \& Norman
2004; Alvarez et al. 2006). Notwithstanding some possible contribution to the accreted mass from self-interacting dark matter (SIDM) particles (see Spergel \& Steinhardt
2000; Hu et al. 2006), the baryonic mass around these primordial massive black holes (MBH) must have been efficiently replenished soon after the birth of the black hole.
In the course of hierarchical structure formation, this continued accretion of matter is naturally accomplished through mergers of the black hole's parent halo with its
neighboring haloes (see e.g. Ricotti \& Ostriker 2004; Kuhlen \& Madau 2005; Malbon et al. 2006; Li et al. 2006).
As recent theoretical work on the growth of supermassive black holes has been carried out under the assumption that Pop~III seed black holes can begin accreting at
the Eddington limit very soon after their formation (e.g. Li et al. 2006; Malbon et al. 2006), it stands as an important task to determine in which environments this
might actually be possible.
The effects of the radiation from the first stars may have enhanced subsequent star formation through the production of H$_2$ inside relic H~II regions, as well as in
partially ionized shells just ahead of ionization fronts, as proposed by Ricotti, Gnedin \& Shull (2001, 2002; see also Ahn \& Shapiro 2006). The former possibility has
recently received considerable attention (Oh \& Haiman 2003; O'Shea et al. 2005; Nagakura \& Omukai 2005; Johnson \& Bromm 2006).
O'Shea et al. (2005) have reported that second-generation star formation could have occurred in the ionized minihaloes neighboring the first stars, owing to the formation
of H$_2$ molecules in the recombining primordial gas (see Shapiro \& Kang 1987; Ferrara 1998; Ricotti, Gnedin \& Shull 2001, 2002). However, Alvarez et al. (2006) find
that neighboring minihaloes are self-shielded to the ionizing radiation of the first stars, and thus that star formation in neighboring minihaloes may not, in fact, have
been significantly enhanced. Also, it has been shown that the
activation of cooling by deuterium hydride (HD) molecules inside relic H~II regions may provide an avenue for the formation of Population II.5 (Pop~II.5) stars, with
masses of the order of 10 ${\rmn M}_{\odot}$ and formed from strongly ionized primordial gas (Mackey, Bromm \& Hernquist 2003; Johnson \& Bromm 2006; see also Nagakura
\& Omukai 2005). However, it remains to fully elucidate the formation process of Pop~II.5 stars within the first relic H~II regions if, indeed, the
radiation from the first stars evacuates the gas contained in their parent haloes and yet does not substantially ionize the gas in their neighboring minihaloes.
Here we present the results of three-dimensional numerical simulations of the recombination of the first relic H~II regions, investigating the possibility of Pop~II.5
star formation in such regions. We assume, for this case, the first star to
have a mass of 100
${\rmn M}_{\odot}$ and to collapse directly to a black hole. Additionally, we simulate
the merger of this parent halo with a neighboring neutral minihalo which has not yet experienced star formation,
in order to determine the necessary conditions for the black hole to begin accreting gas at the Eddington limit, and so to grow to a mass of 10$^9$ ${\rmn M}_{\odot}$
by a redshift of $z$ $\sim$ 6.
In future work, we will study the feedback from a pair-instability
supernova,
the other possible fate predicted for single primordial stars with masses $\ga$ 100 ${\rmn M}_{\odot}$ (e.g. Rakavy \& Shaviv 1967; Bond, Arnett \& Carr 1984; Heger
et al. 2003).
The details of our numerical methodology are given in Section 2. The
results of our simulations of the recombination of the relic H~II region appear in Section 3. Our results from the simulation of the merging of the relic H~II region
with a pre-collapse halo are presented in Section 4, while the implications of these results for the growth of the remnant black hole appear in Section 5.
Finally, in Section 6 we summarize our results and present our conclusions.
\section{Methodology}
\subsection{Chemical network}
We employ the parallel version of the GADGET code for our three-dimensional numerical simulations. This code combines a tree-based, hierarchical gravity solver
with the smoothed particle hydrodynamics (SPH) method for tracking the evolution of the gas (Springel, Yoshida \& White 2001). Along with H$_2$, H$_2$$^+$,
H, H$^-$, H$^+$, e$^-$, He, He$^{+}$, and He$^{++}$, we have included the five deuterium species D, D$^+$, D$^-$, HD and HD$^-$, using the same chemical network as
in Johnson \& Bromm (2006).
\begin{figure}
\vspace{2pt}
\epsfig{file=figure1.ps,width=8.5cm,height=7.cm}
\caption{Comparison of results from our one-zone model (see Johnson \& Bromm 2006) with the results from a three-dimensional numerical simulation with GADGET, for the
case of a collapsing spherical minihalo of uniform density. The solid lines show the results of the one-zone model, while the triangular symbols show output from the
three-dimensional simulation. There is clearly good agreement between the two calculations for the evolution of the gas
density (top-left panel), as well as for the H$_2$, HD, and the free electron abundances
(top-right, bottom-left, and bottom-right panels, respectively).
}
\end{figure}
As a test of the reliability of the chemical network incorporated into GADGET, we have simulated the idealized case of the homologous collapse of a
$\sim$ 10$^6$
${\rmn M}_{\odot}$ spherical cloud with an initial uniform density of $n_i$ $\sim 2$ cm$^{-3}$. The chemical species are initialized with their primordial abundances,
as given in Galli \& Palla (1998). The temperature of the gas, initially a uniform 200 K, was chosen so that there would be little pressure support of the gas against
gravitational collapse, allowing the cloud to collapse essentially in free-fall. We then followed the thermal and chemical evolution of the gas near the center of the
sphere, where the density profile is approximately flat. We compare our simulation results with those obtained
from the one-zone model, employed in previous work (see Johnson \& Bromm 2006). In this one-zone calculation a cloud
of uniform density collapses homologously under its own gravity, its density evolving
according to
\begin{equation}
\frac{dn}{dt} = \left(24\pi G \mu m_{\rmn H}\right)^{1/2} n^{3/2}\left[1-\left(\frac{n_{\rmn i}}{n}\right)^{1/3}\right]^{1/2} \mbox{\ ,}
\end{equation}
where $n$ is the density at time $t$ after the onset of the collapse, $m_{\rmn H}$ is the mass of the hydrogen atom, and $\mu$ is the mean molecular weight.
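As a cross-check of this one-zone law, it can be integrated with a few lines
of Python; the explicit-Euler sketch below, with our own choice of step size,
is only an illustration and not the code actually used.
\begin{verbatim}
import numpy as np

G, m_H = 6.674e-8, 1.673e-24      # cgs

def evolve_density(n_i, mu, t_end, dt):
    # Euler integration of the dn/dt equation above; mu is the (dimensionless)
    # mean molecular weight. A tiny initial offset avoids the dn/dt = 0
    # fixed point at n = n_i.
    n, t, out = n_i * (1 + 1e-6), 0.0, []
    pref = np.sqrt(24 * np.pi * G * mu * m_H)
    while t < t_end:
        dndt = pref * n**1.5 * np.sqrt(1.0 - (n_i / n)**(1.0 / 3.0))
        n += dndt * dt
        t += dt
        out.append((t, n))
    return out
\end{verbatim}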
Fig.~1 shows a comparison of the evolution of the density and of the abundances of H$_2$, HD,
and free electrons found from our GADGET simulation with that found from the calculation using our one-zone model. The agreement
is very good, giving us confidence in the accuracy of our chemical network.
\subsection {First ionizing source}
The initial conditions for our three-dimensional SPH calculation are given by a cosmological simulation of high-$z$ structure formation that evolves both the dark matter
and baryonic components, initialized according to the $\Lambda$CDM model at $z$ = 100. In carrying out the cosmological simulation used in this study, we adopt the
same parameters as in earlier work (Bromm, Yoshida \& Hernquist 2003). We thus use a periodic box of size $L$ = 100 $h^{-1}$ kpc comoving and a number of particles
$N_{\rmn DM}$ = $N_{\rmn SPH}$ = 128$^3$. The SPH particle mass here is $\sim$ 8 ${\rmn M}_{\odot}$.
\begin{figure}
\vspace{2pt}
\epsfig{file=figure2.ps,width=8.5cm,height=7.cm}
\caption{The properties of the primordial gas in the minihalo identified to host the first star, as functions of distance from the center. The
values shown for the central density, temperature, e$^-$ fraction, and H$_2$ fraction are very close to the canonical values generally found in simulations of Pop~III
star formation (e.g. Bromm \& Larson 2004).
}
\end{figure}
\begin{figure}
\vspace{2pt}
\epsfig{file=figure3.ps,width=8.5cm,height=7.cm}
\caption{The density, temperature and radial velocity of primordial gas ionized and heated by radiation from the first star, as a function of distance from the star,
for two representative stellar masses. Here we show the situation after 3 Myr for the case of the 100
${\rmn M}_{\odot}$ star and after 2 Myr for the case of the 200 ${\rmn M}_{\odot}$ star. The 100 ${\rmn M}_{\odot}$ star will likely collapse directly to form a
black hole after
this time, while the 200 ${\rmn M}_{\odot}$ star is likely to explode as a pair-instability supernova. In the present work, we track the evolution of the ionized gas as
it recombines and cools
in the case of the 100 ${\rmn M}_{\odot}$ black hole-forming star.
}
\end{figure}
We identified the first SPH particle to achieve a density above 10$^{4.5}$ cm$^{-3}$ within our cosmological box at $z \sim 19.5$, finding it at the center of a
minihalo with a total mass of $\sim$ 10$^6$ ${\rmn M}_{\odot}$.
Fig.~2 presents the properties of the primordial gas as a function of distance from the minihalo center. The gas temperature rises as particles are
adiabatically heated as they fall into the potential well of the halo, and then drops nearer the center of the halo where the H$_2$ fraction rises to
$\sim$ 10$^{-3.4}$ and
molecular cooling can thus efficiently cool the gas to $\sim$ 200 K (e.g. Bromm \& Larson 2004).
Having identified the location of the first star, we placed a point source of ionizing radiation at that location in our cosmological box. This was effected by including
the following heating rates and ionization rate coefficients in our calculations of the thermal and chemical evolution of the gas:
\begin{equation}
\Gamma_{\rmn HI *} = n_{\rmn HI} \frac{8.23 \times 10^{-18}} {r^{2}} {\rmn erg\ } {\rmn cm}^{-3} {\rmn s}^{-1} \mbox{\ }
\end{equation}
\begin{equation}
\Gamma_{\rmn HeI *} = n_{\rmn HeI} \frac{1.9 \times 10^{-17}} {r^{2}} {\rmn erg\ } {\rmn cm}^{-3} {\rmn s}^{-1} \mbox{\ }
\end{equation}
\begin{equation}
\Gamma_{\rmn HeII *} = n_{\rmn HeII} \frac{3.16 \times 10^{-19}} {r^{2}} {\rmn erg\ } {\rmn cm}^{-3} {\rmn s}^{-1} \mbox{\ }
\end{equation}
\begin{equation}
k_{\rmn H *} = \frac{8.96 \times 10^{-7}} {r^{2}} {\rmn s}^{-1}\mbox{\ }
\end{equation}
\begin{equation}
k_{\rmn HeI *} = \frac{1.54 \times 10^{-6}} {r^2} {\rmn s}^{-1}\mbox{\ }
\end{equation}
\begin{equation}
k_{\rmn HeII *} = \frac{2.72 \times 10^{-8}} {r^2} {\rmn s}^{-1}\mbox{\ }
\end{equation}
where $r$ is the distance from the star in pc, and the subscripts denote the chemical species subject to photoionization and photoheating.
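For reference, these rates can be coded directly as simple inverse-square
functions of the distance from the star; the short sketch below is in our own
notation and simply transcribes the expressions above.
\begin{verbatim}
# Photoheating rates (erg cm^-3 s^-1) and photoionization rate coefficients
# (s^-1) for the ~100 M_sun Pop III star; r in pc, densities in cm^-3.
def heating_rates(r, n_HI, n_HeI, n_HeII):
    return (n_HI   * 8.23e-18 / r**2,
            n_HeI  * 1.90e-17 / r**2,
            n_HeII * 3.16e-19 / r**2)

def ionization_rates(r):
    return (8.96e-7 / r**2,     # H
            1.54e-6 / r**2,     # He I
            2.72e-8 / r**2)     # He II
\end{verbatim}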
These heating rates and ionization coefficients are derived from the models given in Schaerer (2002) for the case of a $\sim$ 100 ${\rmn M}_{\odot}$ Pop~III star, assuming
the stars emit a blackbody spectrum (see also Bromm, Kudritzki \& Loeb 2001). We also carried out a simulation of the ionization and heating of the gas assuming a stellar
mass of 200 ${\rmn M}_{\odot}$, and the resulting density, temperature, and radial velocity profiles around the central ionizing source, for each case, are shown in
Fig.~3. The profiles show the situation after 3~Myr and 2~Myr of photoheating and photoionization, for the 100 ${\rmn M}_{\odot}$ and 200 ${\rmn M}_{\odot}$ cases,
respectively
(see e.g. Schaerer 2002). The higher effective temperature and luminosity of the 200 ${\rmn M}_{\odot}$ star results in both a harder spectrum and more ionizing
photons, and so in a higher heating rate of the surrounding gas from photoionization. Thus, as can be seen in Fig.~3, the temperature of the gas at a given
distance from the central source is at least several $10^3$~K higher for the case of the 200 ${\rmn M}_{\odot}$ star than for that of the 100
${\rmn M}_{\odot}$ star. Also, due
to the shorter lifetime of the 200 ${\rmn M}_{\odot}$ star, the shock that arises from the steep temperature and density gradients encountered during the
photoheating of the gas has not moved as far out from the central star for the 200 ${\rmn M}_{\odot}$ case, although the shock velocity
is higher in this case.
Although we neglect the detailed radiative transfer of the ionizing photons here, we succeed at reproducing the basic features of the temperature and density profiles
of the gas around the ionizing source that have been found in previous radiative transfer calculations (Kitayama et al. 2004; Whalen et al. 2004; Alvarez et
al. 2006). Also, we heat and ionize the gas only within 500~pc of the central source, which is roughly consistent with an inhomogeneously ionized region around the
first star of order $\sim$ 1 kpc, as found by Alvarez et al. (2006), without impinging on neighboring minihaloes in our cosmological box. We require that our H~II
region not encompass any neighboring minihaloes because these can be self-shielded to ionizing radiation even if they reside inside the H~II region, and so
we cannot accurately follow the chemical evolution of the gas inside those minihaloes while the central ionizing source is on.
Since we do not explicitly follow the propagation of the ionization front with time, we also do not resolve the time-dependent effects on the chemistry and thermal
evolution of the gas that can give rise to, for instance, the formation of shells of H$_2$ molecules just outside the I-front (see Ricotti, Gnedin \& Shull 2001, 2002).
Despite the inability of our method to capture the detailed structure of the H~II region, we do expect that we can capture the essential chemical and thermal evolution
of the relic H~II region as a whole in our simulations, as it expands, cools, and recombines after we remove the central source from the calculation.
We used the version of
GADGET which integrates the entropy equation for our photoionization calculation, as we also do later for our simulations of the relic H~II region. As
opposed to the integration of the energy equation, this formulation of GADGET conserves both energy and entropy and is much more successful at resolving the
thermal evolution of gas that experiences shocks or strong local energy injection
(Springel \& Hernquist 2002).
\subsection {Recombination and molecule formation}
With the formation of a black hole by direct collapse of the 100 ${\rmn M}_{\odot}$ Pop~III star, the relic H~II region left behind begins to recombine and cool. We
implement this by simply setting the photoionization coefficients and heating rates to zero, for the case of the
100 ${\rmn M}_{\odot}$ central star. Thus, the temperature, density, and radial velocity profiles shown in Fig.~3 are the initial conditions for our simulation of the
relic H~II region. We follow the thermal and chemical evolution of the relic H~II region for $\sim$ 100 Myr, considering in particular the production of molecules and the
cooling of the primordial gas.
\subsubsection {Photodissociation of molecules}
Molecular hydrogen can easily be dissociated by absorption of Lyman-Werner (LW) photons with energies between 11.2 and 13.6 eV (e.g. Haiman, Rees \& Loeb 1997; Bromm \&
Larson 2004, and references therein). Although
it is well-established that an external LW background could be produced by stars born in neighboring minihaloes
(e.g. Haiman, Rees \& Loeb 1997; Haiman, Abel \& Rees 2000; Ciardi et al. 2000; Machacek, Bryan \& Abel 2001),
here we assume
that at the death of our first star a negligible UV background has been established by emission from stars elsewhere in the Universe.
Thus, to evaluate the effect that photodissociation has on the molecule fraction in the first relic H~II region, we consider as the only
source of dissociating radiation two-photon emission from the 2$^1$S $\to$ 1$^1$S transition in recombining helium atoms from within the relic H~II region itself
(Johnson \& Bromm 2006). Given the much larger Einstein A coefficient for two-photon emission from 2$^1$S than from 2$^3$S, 51.3 s$^{-1}$ for the 2$^1$S $\to$ 1$^1$S
transition versus 2.2 $\times$ 10$^{-5}$ s$^{-1}$ for the 2$^3$S $\to$ 1$^1$S transition, this should be a sound approximation (Mathis 1957; Osterbrock \& Ferland 2006).
To include a prescription for the photodissociation rates of H$_2$ and HD in our code we assume for simplicity that the relic H~II region is spherically symmetric and
that it is optically thin to the LW photons. With this latter assumption, we obtain an upper limit for the dissociation rate, as the molecule fraction can approach
$\sim$ 10$^{-3}$ in relic primordial H~II regions, which may lead to an appreciable optical depth to LW photons (see e.g. Ricotti, Gnedin \& Shull 2001;
Oh \& Haiman 2003; Kuhlen \& Madau 2005;
O'Shea et al. 2005). We estimate the total number of He recombinations, He$^{+}$ + e$^-$ $\to$ He + $h\nu$,
per second within the H~II region that lead to population of the
2$^1$S state, $Q_{\rmn 2^1 S}$, according to
\begin{equation}
Q_{\rmn 2^1 S} = \sum \frac{\alpha_{\rmn B} n_{\rmn e} n_{\rmn HeII} m_{\rmn SPH}}{3 \mu m_{\rmn H} n} \mbox{\ ,}
\end{equation}
where the sum is over all SPH particles in the H~II region. Here $n_{\rmn e}$ is the number density of free electrons, $n_{\rmn HeII}$ is the number density of He~II, $n$
is the total number density, $m_{\rmn SPH}$ is the
mass per SPH particle, $\mu$ is the mean molecular weight, and $\alpha_{\rmn B}$ is the Case B total He recombination coefficient to singlet states. We
take it that $\la$ 1/3 of the recombinations to the singlet levels of He I result
ultimately in population of the 2$^1$S state, which is accounted for by the factor of 1/3 in the above formula (see Pottasch 1961; Osterbrock \& Ferland 2006).
For the LW flux at the edge of the H~II region, we find
\begin{equation}
J_{\rmn LW} \sim 10^{-6} \frac{Q_{\rmn 2^1 S}}{4 \pi R^2} \mbox{\ ,}
\end{equation}
where $R$ is the radius (in cm) of the H~II region, $J_{\rmn LW}$ is the LW flux in units of 10$^{-21}$ erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ sr$^{-1}$, and we have
conservatively estimated the probability of LW photon emission per two-photon transition 2$^1$S $\to$ 1$^1$S as $\la $ 0.4 (see Osterbrock \& Ferland 2006). We
compute the timescale for the photodissociation of H$_2$ and HD as
\begin{equation}
t_{\rmn diss} \sim 10^8 {\rmn \, yr} \left(\frac{Q_{\rmn 2^1 S}}{10^{45}{\rmn s}^{-1}} \right)^{-1} \mbox{\ ,}
\end{equation}
where we have taken $R \sim 500$~pc and used $t_{\rm diss}$ $\sim$ 10$^8$ yr ($J_{\rmn LW}$/10$^{-4}$)$^{-1}$ (see Oh \& Haiman 2003; Johnson \& Bromm 2006).
Inverting this timescale, we find a typical rate for the dissociation of H$_2$
and HD of
\begin{equation}
k_{\rmn diss} \sim 10^{-16}{\rmn \,s}^{-1}\left(\frac{Q_{\rmn 2^1 S}}{10^{45}{\rmn s}^{-1}}\right) \mbox{\ ,}
\end{equation}
which is included in the calculation of the molecule fraction in our simulations.
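To make the chain of estimates above concrete, the following sketch, which is our own illustration and not the simulation code, evaluates $Q_{\rmn 2^1 S}$ from per-particle SPH quantities and then the corresponding $J_{\rmn LW}$, $t_{\rmn diss}$, and $k_{\rmn diss}$:
\begin{verbatim}
# Minimal sketch: Q_{2^1S} -> J_LW -> t_diss -> k_diss, following the
# equations above.  The array arguments are per-SPH-particle quantities.
import numpy as np

YR = 3.156e7     # s per yr
PC = 3.086e18    # cm per pc

def two_photon_dissociation(alpha_B, n_e, n_HeII, n, m_SPH, mu, m_H, R_pc=500.0):
    # Sum over SPH particles; the factor 1/3 is the fraction of singlet
    # recombinations that end up in the 2^1S state.
    Q = np.sum(alpha_B * n_e * n_HeII * m_SPH / (3.0 * mu * m_H * n))
    R = R_pc * PC
    J_LW = 1e-6 * Q / (4.0 * np.pi * R ** 2)   # units of 1e-21 erg/s/cm^2/Hz/sr
    t_diss = 1e8 * YR * (Q / 1e45) ** (-1)     # s
    k_diss = 1e-16 * (Q / 1e45)                # s^-1
    return Q, J_LW, t_diss, k_diss
\end{verbatim}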
To give an estimate of the importance of the dissociation of molecules due to two-photon emission, we calculate $t_{\rmn diss}$ for the simplified case of a spherical
recombining H~II region of uniform density. In this case, we have the total number of recombinations to He~I which ultimately result in the 2$^1$S state
given by
\begin{eqnarray}
Q_{\rmn 2^1 S} & = & \frac{4 \pi}{9} R^3 \alpha_{\rmn B} n_{\rmn e} n_{\rmn HeII} \nonumber \\
& \simeq &
10^{50}{\rmn \,s}^{-1}\left(\frac{R}{500{\rmn \, pc}}\right)^3
\left(\frac{n_{\rmn e} n_{\rmn HeII}}{{\rmn cm}^{-6}}\right)
\mbox{\ .}
\end{eqnarray}
Using equation (10), we find
\begin{equation}
t_{\rmn diss} \sim
10^{3}{\rmn \,yr}
\left(\frac{n_{\rmn e} n_{\rmn HeII}}{{\rmn cm}^{-6}}\right)^{-1}
\mbox{\ ,}
\end{equation}
which clearly shows that at high densities and at times when He~recombination is still ongoing, the photodissociation of any molecules that
have formed may be important, even in the absence of an externally generated LW background. However, for our case of a relic H~II region surrounding a minihalo,
since it is predominantly at lower temperatures ($\la$ 5,000 K) that molecules are formed, at which times
much of the He II has already recombined, and since our H~II
region is expanding to lower densities with time, we expect that dissociation of molecules due to two-photon emission will be unimportant, at least
during the later evolutionary stages.
Molecular hydrogen could also be dissociated by radiation generated during the accretion of gas onto the remnant black hole. However, as we show in Section 5, we find
that the accretion rate onto the black hole is low, comparable to that found by O'Shea et al. (2005), for at least a few tens of Myr after the collapse of the central ionizing star. O'Shea et al. estimate
that this accretion rate results in a photodissociation rate of H$_2$ which is at least an order of magnitude below the formation rate of H$_2$ in the relic H~II
region. We thus neglect the possible effects of photodissociating radiation due to accretion onto the remnant MBH.
\subsection {Merging minihaloes}
In order to determine which, and how quickly, neighboring minihaloes will collapse following the formation of a Pop~III remnant black hole, we consider the situation
in which the relic H~II region surrounding the remnant black hole merges with a neighboring DM halo and its accompanying, un-ionized and dense gas component. Since
we are simulating the evolution of the relic H~II region and the infalling neutral minihalo after the death of the Pop~III star, in these merger simulations we do not include
any photoheating or photodissociating terms in our calculations of the thermal and chemical history of the gas. We initiate
the merger by placing the spherical relic H~II region, immediately following the collapse of the central star, with properties shown in Fig.~3, adjacent to a second spherical
region of radius 500~pc at the center of which is a minihalo which is still neutral, not yet having hosted star formation.
The region containing the neutral minihalo is selected and cut out from elsewhere in the same cosmological box,
and is then placed adjacent to the relic H~II region in a new, otherwise empty simulation box.
The initial separation between the centers of the two minihalos is 1~kpc proper
for all merger simulations carried out here.
The reason why we must choose these initial conditions, and why we may not simply continue running our cosmological simulation of the relic H~II region and wait for
a merger to occur, is that we are limited by the size of the cosmological box. Our box size is $\sim$ 100$h^{-1}$~kpc, which is
too small to contain the large wavelength density modes that drive the mergers of minihaloes. Thus, we carry out the mergers in an empty box, imparting a
relative velocity of 8~km~s$^{-1}$ to the merging halos, comparable to the virial velocity of the resulting system,
setting them on trajectories for a direct collision.
We assume that the molecules in the infalling,
pre-collapse halo have been destroyed by the H$_2$ photodissociating LW flux from
the nearby Pop~III star which has formed the relic H~II region. We carry out simulations in which the neighboring halo has peak gas densities of $\sim$ 0.1
cm$^{-3}$, 1 cm$^{-3}$, 10 cm$^{-3}$, and 100 cm$^{-3}$, corresponding to different degrees of pre-collapse. In the case of the 0.1 cm$^{-3}$ peak density neighboring halo, the free-fall time will be comparable to the
timescale for the completion of the merger and so we can neglect the possibility of this
minihalo collapsing to form a star before the completion of the merger. However, given that we assume that there are no molecules in the neighboring halo at the
outset of the merger, we expect that collapse will be delayed even for the higher density cases, as cooling will be suppressed until molecules have reformed (see
Mesinger et al. 2006).
Although the initial conditions for these simulations are idealized, as we have not followed the merger of the haloes in a fully cosmological context but instead in a
box containing only the relic H~II region and the infalling minihalo, we are able to discern the crucial aspects in the thermal and chemical evolution of the gas in
the vicinity of the remnant Pop~III black hole.
We do note, however, that additional effects that we do not consider here, such as the formation of an H$_2$ shell and the driving of a shock through a partially
ionized minihalo, can become important for cases in which significant portions of the infalling minihalo are ionized (see Ahn \& Shapiro 2006). A more sophisticated
treatment of radiative transfer will be required to more
accurately follow the evolution of the gas
(see Ahn \& Shapiro 2006; Susa \& Umemura 2006).
\section {Evolution of primordial gas in relic H~II regions}
The results of our recombination simulation are presented in Fig.~4 at three representative times after the death of the central star. As can be seen in the panels
showing the temperature as a function of the density, the gas initially cools largely by adiabatic expansion, as at temperatures below
$\sim$ 10$^4$ K the gas closely follows the adiabatic relation $T$ $\propto$ $n^{2/3}$, delineated in the panels on the right-hand side. As the gas recombines, the
cooling rate due to collisional excitation of the newly-formed hydrogen atoms is enhanced and the temperature of the gas drops to $\sim$ 10$^4$ K, at which point the
cooling rate decreases and molecular hydrogen becomes the main coolant, aside from adiabatic cooling, which continues as the gas expands into the intergalactic
medium (IGM).
\begin{figure*}
\includegraphics[width=7.in]{figure4.ps}
\caption{The evolution of the relic H~II region. From left to right, the panels show the free electron fraction, the H$_2$ fraction, the HD fraction, and the
temperature as functions of density at $\sim$ 1 Myr (top row), $\sim$ 10 Myr (middle row), and $\sim$ 100 Myr (bottom row) after the collapse of the central star to a
black hole. The long-dashed line in the rightmost panels denotes the temperature of the cosmic background radiation, $T_{\rmn CMB}$, while the short-dashed line denotes
the line $T$ $\propto$ $n^{2/3}$, along which gas evolves adiabatically. Here, we plot only the SPH particles that were subjected to the photoionizing radiation of the
central star, that is, particles within $\sim$ 500 pc of the central star.
}
\end{figure*}
The HD fraction in the highest density regions of the recombining gas increases to $X_{\rmn HD}$ $\sim$ 10$^{-7}$ by $\sim$ 10 Myr after the death of the central star,
and to
$X_{\rmn HD}$ $\sim$ 10$^{-6}$ after 100 Myr. Thus, the HD fraction quickly rises above the critical value of $X_{\rmn HD, crit}$ $\sim$ 10$^{-8}$ for efficient cooling
of primordial gas in local thermodynamic equilibrium (LTE) to the temperature of the cosmic microwave background (CMB) (see Johnson \& Bromm 2006). Because LTE can only
be achieved at much higher densities than those that persist in the relic H~II region we consider here, radiative cooling to the CMB floor would only
be a viable possibility if densities are somehow increased to the point that LTE can be established. This would
happen if the gas is at some point incorporated into a larger DM halo and becomes gravitationally bound once more. That the HD fraction is so high, however,
means that the potential for Pop~II.5 star formation does exist, in principle, if the gas becomes gravitationally bound and thus available for star formation (see also
Nagakura \& Omukai 2005).
We evaluate the recombination time after 100~Myr by taking the density of H$^+$ and of free electrons to be $n_{\rmn H^+}$ $\sim$ $n_{\rmn e}
\sim
10^{-4}$cm$^{-3}$, as can be seen from the lower-left panel of Fig.~4. Then, assuming at these low densities a Case A
recombination coefficient of
$\alpha_A$ $\sim$ 6 $\times$ 10$^{-13}$ cm$^{3}$ s$^{-1}$, we obtain a recombination time of $t_{\rmn rec} \sim$ 500 Myr. This is more than twice the Hubble time at
these redshifts, and suggests that the free electron fraction left over from the ionization caused by the first stars may have remained an important catalyst for molecule
formation even after hundreds of millions of years since the death of the central star (see e.g. Shapiro \& Kang 1987; Yamada \& Nishi 1998;
O'Shea et al. 2005; Nagakura \& Omukai 2005).
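The recombination time quoted above follows from a one-line estimate; the sketch below simply evaluates $t_{\rmn rec}=1/(\alpha_{\rmn A} n_{\rmn e})$ with the representative numbers from the text.
\begin{verbatim}
# Minimal sketch: recombination time of the relic H II region after ~100 Myr.
YR = 3.156e7
n_e = 1e-4                       # cm^-3, from the lower-left panel of Fig. 4
alpha_A = 6e-13                  # cm^3 s^-1, Case A recombination coefficient
t_rec = 1.0 / (alpha_A * n_e)    # s
print(t_rec / (1e6 * YR))        # ~ 500 Myr
\end{verbatim}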
The optical depth to LW photons becomes unity for H$_2$ column densities of $\sim$ 10$^{14}$ cm$^{-2}$ (e.g. Draine \& Bertoldi 1996; Osterbrock
\& Ferland 2006), and for our
relic H~II region we estimate
\begin{equation}
\tau_{\rmn LW} \sim \frac{n_{\rmn H} X_{\rmn H_{2}} R}{10^{14} {\rmn cm}^{-2}} \mbox{\ ,}
\end{equation}
where $X_{\rmn H_2}$ is the molecule fraction, $R$ is the radius of the relic H~II region, and $n_{\rmn H}$ is the number density of hydrogen nuclei. We find that the
molecule fraction approaches $X_{\rmn H_2}$ $\sim$ 10$^{-3}$ and that the number density becomes $n_{\rmn H}$ $\sim$ 10$^{-3}$cm$^{-3}$.
Taking the radius of the H~II region to be $\sim$ 1 kpc, we find that the optical depth to LW photons becomes of the order of $\tau_{\rmn LW}$ $\sim$ 10. If the
density of
star-forming minihaloes is thus high enough, an appreciable suppression of the background LW flux may result from the high molecule fraction which arises in relic
H~II regions. This may provide an important degree of shielding from molecule-dissociating radiation and may lead to a higher overall efficiency of star-formation
in minihaloes with virial temperatures $\la$ 10$^4$ K (see Ricotti, Gnedin \& Shull 2001; Machacek, Bryan \& Abel 2001, 2003; Oh \& Haiman 2002).
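The optical depth estimate above amounts to an H$_2$ column density divided by $10^{14}$ cm$^{-2}$; the following sketch, with the representative values from the text, reproduces the quoted order of magnitude.
\begin{verbatim}
# Minimal sketch: LW optical depth of the relic H II region.
PC = 3.086e18
n_H  = 1e-3              # cm^-3, hydrogen nuclei
X_H2 = 1e-3              # H2 fraction
R = 1e3 * PC             # ~ 1 kpc in cm
tau_LW = n_H * X_H2 * R / 1e14
print(tau_LW)            # ~ 30, i.e. of order 10
\end{verbatim}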
We carried out simulations both with and without the photodissociating two-photon emission from recombining He II included,
and we found that this photodissociating radiation had little effect on the molecule abundances.
To estimate the level of LW background radiation necessary to efficiently
photodissociate H$_2$ molecules, we evaluate the H$_2$ formation timescale at $\sim$ 10 Myr after the death of the central star. Taking representative
values for the temperature, number density, and abundances of the chemical species after 10 Myr of recombination, we find a formation timescale for H$_2$ of
$t_{\rmn form}$ $\sim$ 10$^7$ yr. The continued formation of H$_2$ is driven by the high abundances of H, H$^{-}$, and e$^{-}$, which are the
reactants in the following
reaction sequence that is the most important for the production of H$_2$ (e.g. Kang \& Shapiro 1992):
\begin{equation}
{\rmn H} + {\rmn e^{-}} \to {\rmn H^{-}} + h\nu \mbox{\ ,}
\end{equation}
\begin{equation}
{\rmn H} + {\rmn H^{-}} \to {\rmn H_{2}} + {\rmn e^{-}} \mbox{\ .}
\end{equation}
Equating the formation timescale for H$_2$ with the dissociation timescale for H$_2$, given by $t_{\rmn diss}$ $\sim$ 10$^8$ yr ($J_{\rmn LW}$/10$^{-4}$)$^{-1}$,
we find a critical value for the
background LW flux, below which molecules in the relic H~II region will not be photodissociated efficiently, of the order of $J_{\rmn LW, crit}$ $\sim$
10$^{-3}$ (see also Oh \& Haiman 2003). This is comparable to the background LW flux that is expected to have been established by the first generations of
stars at redshifts $z \ga 15$
(e.g. Greif \& Bromm 2006). Taking into account the optical depth to LW photons of order $\tau_{\rmn LW}$ $\sim$ 10, we find that the most heavily self-shielded
molecules in the center of relic H~II region could only be dissociated by a background LW flux, emanating from outside the relic H~II region, at least of the order of
$J_{\rmn LW}$ $\sim$ 10. This value would, however, decrease with time if the molecules nearer to the periphery of the relic H~II region were dissociated
by the external LW background flux, and so become unavailable for shielding the inner molecules from the dissociating radiation.
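The value of $J_{\rmn LW, crit}$ given above follows from equating the two timescales; a minimal sketch of the arithmetic is
\begin{verbatim}
# Minimal sketch: critical LW flux from t_form = t_diss, with
# t_diss ~ 1e8 yr (J_LW / 1e-4)^-1 and t_form ~ 1e7 yr.
t_form = 1e7                     # yr
J_crit = 1e-4 * (1e8 / t_form)   # units of 1e-21 erg s^-1 cm^-2 Hz^-1 sr^-1
print(J_crit)                    # ~ 1e-3
\end{verbatim}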
We note also that if the gas in the relic H~II region has large velocity gradients, owing to turbulence that we do not resolve in these simulations, then the optical
depth to LW photons may be lower than the value we find here by a factor of a few (see Draine \& Bertoldi 1996; Osterbrock \& Ferland 2006). This would not, however,
affect the value that we find for $J_{\rmn LW, crit}$, as this is independent of the optical depth to LW photons.
The values that we find for $J_{\rmn LW, crit}$ and for $\tau_{\rmn LW}$ suggest that the enhanced fractions of H$_2$ and HD inside relic H~II regions could have
persisted down to at least redshifts of $z$ $\sim$ 15, and so, in
principle, would have been available for star formation at least down to these redshifts
(see also Ricotti, Gnedin \& Shull 2002).
The high HD abundance, which becomes at least an order of magnitude
above the critical abundance for cooling the primordial gas to the CMB temperature floor, could thus have led to Pop~II.5 star formation inside the first relic H~II
regions, if these regions became incorporated into more massive DM haloes that could gravitationally bind, and so increase the density of, the recombining
primordial gas.
\section{Evolution of primordial gas in merging minihaloes}
To discern the conditions under which the Pop~III remnant black hole could efficiently accrete the dense, cold gas supplied by a neighboring minihalo,
we have tracked the evolution of the gas in such merging systems with a range of initial peak densities
of the gas within the infalling neutral minihalo.
Again, here we have assumed that the LW flux from the now-collapsed Pop~III star has destroyed all of the molecules inside the neutral infalling halo, although
molecules can reform during the merger, since there is no longer a LW flux from the original Pop~III star.
The initial peak
densities of the halos we follow
are 0.1, 1, 10, and 10$^2$ cm$^{-3}$. Fig.~5 shows the evolution of the density structure in the merger between the relic H~II region and a halo with a peak
density of 10~cm$^{-3}$, as a representative case.
Fig.~6 shows the time evolution of a merger
of a relic H~II region, in which the central star has just collapsed to form a black hole, with a neutral pre-collapse minihalo which has a
peak density of 0.1~cm$^{-3}$ at the time of the formation of the black hole,
at four representative times. As in
the case of the recombination of the relic H~II region evolved in our $100 h^{-1}$~kpc cosmological box, which did not experience a merger, the relic H~II region gas
expands and cools largely adiabatically.
This expansion is evident in Fig.~5 as well. Also, as can be seen from the temperature rise in the pre-collapse halo, shown in black in Fig.~6, the expansion of the
relic H~II region, combined with the 8~km~s$^{-1}$ relative velocity of the infalling halo with respect to the relic H~II region, shock-heats the neutral gas
within the infalling halo, contributing
to the suppression of the density in the pre-collapse halo. The gas in the pre-collapse halo, furthermore, does not reform H$_2$ molecules efficiently,
owing to the low density of the gas in this halo. The H$_2$ fraction approaches only $\sim$ 10$^{-6}$ by 100 Myr after the death of the central star.
Thus cooling of the gas is inefficient, and the
highest densities achieved in this merger are $n \la 10^{-0.5}$~cm$^{-3}$.
\begin{figure*}
\includegraphics[width=7.in]{figure7.ps}
\caption{
The merging of the relic H~II region with a neutral neighboring minihalo with an initial central density of $\sim$ 10 cm$^{-3}$. The H~II region is on the right-hand
side of the top-left panel at the beginning of the merger, while the pre-collapse halo with which it merges is on the left-hand side. The location of the remnant black
hole, initially at the center of the relic H~II region, is shown by the black square in each panel. Here, we assume that the black hole has a ballistic trajectory,
with a constant velocity of 4~km~s$^{-1}$ to the left, equal to the initial velocity of the relic H~II region. The highest density gas is shown in white and the lower
density gas is shown in blue.
The merger is shown at 1 Myr (top-left), 10 Myr (top-right), 50 Myr (bottom-left), and 100 Myr (bottom-right) after the death of the central star.
The halo shown on the left-hand side collapses to a density of $\sim 10^3$~cm$^{-3}$ during the merger (see Fig.~7).
}
\end{figure*}
We can expect the gas within a minihalo that merges with a relic H~II region to be dispersed and have its density suppressed whenever the ram pressure of the expanding relic H~II
region, given by $P_{\rmn ram} \sim n_{\rmn H~II} m_{\rmn H} v^2$, is
higher than the pressure of the neutral gas within the infalling halo, given by $P_{\rmn gas} = n_{\rmn gas} k_{\rmn B} T$, where
$v$ is the expansion velocity of the relic H~II region, $n_{\rmn H~II}$ is the density in the relic H~II region, $n_{\rmn gas}$ is the density of the gas
within the neutral minihalo that is
merging with the relic H~II region, and $T$ is the temperature of this gas. Taking a fiducial value of $v$ $\sim$ 10~km~s$^{-1}$ for the expansion velocity, as can
be seen in Fig.~3, and taking $n_{\rmn H~II}$ $\sim$ 10$^{-2}$ cm$^{-3}$ as a typical value for the density of the relic H~II region,
as can be seen in Fig.~4, we find the
following condition for the retention of the neutral gas in the potential well of the infalling halo:
\begin{equation}
n_{\rmn gas} T \ga 10^2 {\rmn K\,} {\rmn cm}^{-3} \mbox{\ .}
\end{equation}
The condition in equation (17) can be satisfied for
primordial gas collapsing in a minihalo with a mass of $\sim$ 10$^6$ ${\rmn M}_{\odot}$, provided that
densities of at least $n$ $\sim$ 10$^{-0.5}$~cm$^{-3}$ have been reached prior
to the merger [see Fig.~10 in Bromm et al. (2002)].
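The threshold in equation (17) follows from the fiducial numbers given above; the sketch below evaluates $P_{\rmn ram}/k_{\rmn B}$.
\begin{verbatim}
# Minimal sketch: retention criterion of equation (17).
M_H = 1.67e-24        # g
K_B = 1.38e-16        # erg K^-1
n_HII = 1e-2          # cm^-3, relic H II region density
v = 10.0e5            # cm s^-1, ~ 10 km/s expansion velocity
print(n_HII * M_H * v ** 2 / K_B)   # ~ 1.2e2 K cm^-3
\end{verbatim}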
However, if the gas in the infalling minihalo collapses to form a star before the completion of the merger, then the gas in the minihalo will be heated and expand to lower
densities, as shown in Fig.~3 (see also Abel, Wise \& Bryan 2006). This final collapse to form a star will occur on the order of the free-fall time,
$t_{\rmn ff}$ $\propto$ $n^{-1/2}$, if the free-fall time is longer
than the cooling time (e.g. Tegmark et al. 1997; Ciardi \& Ferrara 2005). If the gas cannot cool efficiently, however, this collapse will be delayed. This may
indeed be the case in regions near the first stars where
the LW
flux generated during their $\la$ 3 Myr lives could destroy the H$_2$ molecules inside the pre-collapse haloes, depriving them of the coolants that allow for star
formation (e.g. Yoshida et al. 2003).
\begin{figure}
\vspace{2pt}
\epsfig{file=figure5.ps,width=8.5cm,height=7.cm}
\caption{
The evolution of the merger of the relic H~II region with a neutral neighboring minihalo. The H~II region gas is in orange, while the gas from the pre-collapse minihalo
is in black. The central density of the pre-collapse minihalo is initially $\sim$ 0.1 cm$^{-3}$, roughly 10$^2$ times the background IGM density. We follow the same
convention for the lines
delineating the CMB temperature and adiabatic evolution as in Fig.~4.
The criterion given by equation (17) for neutral gas retention is not
satisfied in this case,
and the low density gas in the infalling halo is shock-heated and remains at low densities during the merger. Furthermore, the H$_2$ fraction stays below 10$^{-6}$ for
the entire 100 Myr duration of the merger, further preventing the gas from cooling and collapsing to higher densities.
}
\end{figure}
\begin{figure}
\vspace{2pt}
\epsfig{file=figure6.ps,width=8.5cm,height=7.cm}
\caption{
Same as in Fig.~6, except that the neutral gas has an initial peak density of $\sim$ 10 cm$^{-3}$. The
criterion in equation (17) for neutral gas retention is satisfied here,
and the densest gas, in the center of the infalling minihalo, retains its high density despite the outer layers of gas being shock-heated in the merger.
Also, the H$_2$ fraction approaches 10$^{-3}$ after
100 Myr at these higher densities, so that the gas in the merging halo cools and collapses efficiently to a density of $\sim$ 10$^3$ cm$^{-3}$. As the labels
indicate, the four panels correspond to the same times since the death of the central star as the four panels in Fig.~5, which shows the evolution of the projected
gas density.
}
\end{figure}
To find the highest densities that could be achieved during a merger of a neutral minihalo with a relic H~II region, we compare the timescale for completion of the merger,
$t_{\rmn merge}$, with the timescale for the collapse of the neutral minihalo, $t_{\rmn collapse}$, the latter found from our simulations of mergers involving minihaloes
with peak densities of 1, 10, and 10$^2$ cm$^{-3}$ at the time of the formation of the black hole and the cessation of the radiation from the original Pop~III star.
For each of these initial densities, the criterion given by equation (17) for neutral gas retention is satisfied.
Cases for which $t_{\rmn merge} \la t_{\rmn collapse}$ will give rise to mergers resulting in the highest densities of gas that can be accreted onto the black hole,
as it is these minihalos that will merge completely with the black hole before collapsing to form a second star. We define the merger timescale as
\begin{equation}
t_{\rmn merge} \simeq \int^{r_{\rmn MBH}}_0 \frac{dr}{\left[2{\rmn G} M_{\rmn halo} \left(\frac{1}{r}-\frac{1}{r_{\rmn MBH}}\right)\right]^{\frac{1}{2}}} \mbox{\ ,}
\end{equation}
where $r_{\rmn MBH}$
is the initial distance between the centers of the merging haloes at the time of the formation of the massive black hole in the relic H~II region, and $M_{\rmn halo}\sim 10^6 {\rmn M}_{\odot}$.
We note, however, that this is only an approximation to the actual time that would be required for a merger to take place,
as this formula assumes that the merging minihaloes start at rest with respect to each other at the time of the collapse of the first star, and this will not be the
case in general (see e.g. Abel, Wise \& Bryan 2006).
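Equation (18) can be evaluated numerically; the following sketch, with an assumed halo mass of $10^6\,{\rmn M}_{\odot}$ and a small inner cutoff to handle the integrable endpoints, illustrates one way to do so.
\begin{verbatim}
# Minimal sketch: numerically evaluate the merger timescale of equation (18),
# i.e. radial free fall from rest at separation r_MBH onto M_halo.
import numpy as np

G, MSUN, PC, YR = 6.674e-8, 1.989e33, 3.086e18, 3.156e7   # cgs

def t_merge(r_MBH_pc, M_halo=1e6 * MSUN, n=100000):
    r_MBH = r_MBH_pc * PC
    # drop the r = r_MBH endpoint, where the infall velocity vanishes
    r = np.linspace(1e-3 * r_MBH, r_MBH, n)[:-1]
    v = np.sqrt(2.0 * G * M_halo * (1.0 / r - 1.0 / r_MBH))
    dr = r[1] - r[0]
    return np.sum(1.0 / v) * dr / (1e6 * YR)    # in Myr

print(t_merge(1000.0))  # a few hundred Myr for r_MBH ~ 1 kpc (at-rest assumption)
\end{verbatim}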
Fig.~7 shows the time evolution for a merger with a neutral peak density of 10 cm$^{-3}$, at four
representative times, just as in Fig.~6. The gas in the pre-collapse halo, in this case, does reform H$_2$ molecules efficiently, owing to the high density of the gas
in this halo. The H$_2$ fraction approaches $\sim$ 10$^{-3}$ after 100 Myr. Thus, gas cooling is efficient, and the density reaches
$n$ $\sim$ 10$^{3}$ cm$^{-3}$ after $\sim$ 100 Myr. Fig.~5 shows the evolution of the gas density structure in this merger, at the
four representative times which are also shown in Fig.~7.
For the case of a merger with an initial peak density of 1 cm$^{-3}$, we find that the H$_2$ fraction becomes only of the order of
10$^{-5}$, and the density in this minihalo thus does not increase beyond $\sim$ 1 cm$^{-3}$ within 100 Myr, since molecular cooling is suppressed. We also find
that for the case of a merging minihalo with an initial density of 100 cm$^{-3}$ the H$_2$ fraction becomes $\sim$ 10$^{-3}$ and the halo collapses after $\sim$ 60 Myr.
As expected, for higher initial peak densities in the infalling haloes, the timescales for the collapse of these haloes become shorter, both because H$_2$ molecules are
reformed more quickly and because the free-fall time is shorter for denser haloes (see also Mesinger et al. 2006).
\begin{figure}
\vspace{2pt}
\epsfig{file=figure8.eps,width=8.5cm,height=7.cm}
\caption{Requirements for the efficient accretion of gas onto a Pop~III remnant black hole. The two horizontal lines show the time it takes for
infalling minihaloes with central densities of 10 and 100~cm$^{-3}$ to collapse and form stars. The merger timescale, $t_{\rmn merge}$, defined in equation
(18), is the time it takes for the remnant black hole to merge with the infalling minihalo, and is a function of $r_{\rmn MBH}$, the distance between the black hole
and the center of the infalling minihalo at the time of the formation of the black hole. For the black hole to efficiently accrete gas at near the Eddington rate,
it must merge with the infalling halo before this halo collapses to form a star. Thus, efficient accretion onto the black hole is only possible if
$t_{\rmn merge} <
t_{\rmn collapse}$.
}
\end{figure}
Fig.~8 shows the requirements for infalling minihaloes to collapse
is shown in
Fig.~\ref{Fig:Properties_region_I_prime}. We see that $\sigma$ is
always positive while $p$ is negative, i.e., it is rather a tension. These are
the tension shells.
Qualitatively, one can
understand why these shells,
with normal pointing to $r_+$, i.e.,
$\text{sign}\left(X\right)=-1$, must be
supported by tension, by remembering that a free-falling particle in the
region outside the event horizon will infall
towards the event horizon $r_+$
itself. Therefore, a particle momentarily comoving with the shell but
detached from it will infall towards the black hole region of the
exterior Reissner-Nordstr\"om spacetime; hence, a perfect fluid thin
shell located at the junction hypersurface, in order to be static, must
be supported by tension.
Notice from Fig.~\ref{Fig:Properties_region_I_prime}
that as the charge $Q$ is increased one needs more
tension support; as expected, the electric repulsion requires
an increase in the tension.
Notice that here $R$ is finite, although it can be arbitrarily large,
in which case
the energy density $\sigma$, the
tension $-p$, and the charge
density $\sigma_{e}$, all tend to zero.
Notice that $\sigma$ has a nonmonotonic behavior.
Notice
also that when $R\to r_+$, the energy density is finite, but the
tension of the shells goes to infinity,
while the charge
density is also finite. Indeed, for $R= r_+$ one
has a shell at the horizon with properties
similar to a
quasiblack hole, although one with
additional structures.
When $Q=0$ the outer solution is Schwarzschild.
In relation to the energy conditions of the shell,
one can work out that the null, the weak, and the dominant
energy conditions are verified for $R\geq R_{I'}$, where
$R_{I'}$ is a specific radius that we present later,
while the strong energy condition is never verified;
see the detailed presentation ahead.
\begin{figure}[h]
\subfloat[\label{Fig:energy_subregion_I_prime}]
{\includegraphics[scale=0.45]{sigma_region_I_prime}}
\hspace*{\fill}
\subfloat[\label{Fig:pressure_subregion_I_prime}]
{\includegraphics[scale=0.45]{pressure_region_I_prime}}
\caption{\label{Fig:Properties_region_I_prime}
Physical properties of a
nonextremal tension shell black hole,
i.e.,
an electric perfect fluid thin shell
in a nonextremal Reissner-Nordstr\"om state, in the
location $R>r_+$, i.e., located outside the event horizon, with
orientation such that the normal points towards $r_+$. The interior
is Minkowski, the exterior is nonextremal Reissner-Nordstr\"om
spacetime. Panel (a) Energy density $\sigma$ of the shell as a
function of the radius $R$ of the shell for various values of the
$\frac{Q}{M}$ ratio. The energy density is adimensionalized through
the mass $M$, $8\pi M\sigma$, and the radius is adimensionalized
through the gravitational radius $r_+$, $\frac{R}{r_+}$. Panel (b)
Tension $-p$ on the shell as a function of the radius $R$ of the shell
for various values of the $\frac{Q}{M}$ ratio. The tension is
adimensionalized through the mass $M$, $-8\pi Mp$, and the radius is
adimensionalized through the gravitational radius $r_+$,
$\frac{R}{r_+}$.
}
\end{figure}
\newpage
The Carter-Penrose diagram for this case can be drawn from the
building blocks of an interior Minkowski spacetime and the full
nonextremal Reissner-Nordstr\"om spacetime. In
Fig.~\ref{Fig:Penrose_diagram_Mink_RN_regions_Iprime} the
Carter-Penrose diagram of a shell spacetime in a nonextremal
Reissner-Nordstr\"om state, in the location $R>r_+$, with orientation
such that the normal points towards $r_+$, i.e.,
$\text{sign}\left(X\right)=-1$, is shown. In the diagram it is clear
that the tension shell is on the other side of the Carter-Penrose
diagram of a Reissner-Nordstr\"om spacetime. From
Fig.~\ref{Fig:Penrose_diagram_Mink_RN_regions_Iprime} it is seen, that
it is clearly a black hole solution, not a vacuum black hole, neither
a regular black hole. The solutions represent tension shell black
holes. Note $r_+$ and $r_-$ are the event horizon and the Cauchy
horizon radii, and there is an Einstein-Rosen bridge, provided by a
dynamic wormhole in the spacetime. Tension shell black holes were
found in~\citep{Katz_Lynden-Bell_1991} for the zero electric charge
case, i.e., for the Schwarzschild shells, in which case the
Carter-Penrose diagram is similar, except that the $r=0$ singularity is
spacelike and the diagram does not repeat itself. In the
Reissner-Nordstr\"om spacetime, contrary to Schwarzschild, there is an
infinitude of possible diagrams. In the diagram (a) of
Fig.~\ref{Fig:Penrose_diagram_Mink_RN_regions_Iprime} it is clear that
the tension shell is outside the event horizon on the other side of
the diagram in the region $\mathrm{I'}$ shown. One can then put
another shell in the region $\mathrm{I'}$ above and repeat the
procedure ad infinitum. In the diagram (b) of
Fig.~\ref{Fig:Penrose_diagram_Mink_RN_regions_Iprime} the tension
shell is again outside the event horizon on the other side of the
diagram in the region $\mathrm{I'}$ shown. One can then put an
infinity in the region $\mathrm{I'}$ above and repeat the
procedure ad infinitum. Since what one puts in the regions
$\mathrm{I'}$, either a tension shell or infinity, is not decided by
the solution, an infinite number of different Carter-Penrose diagrams
can be drawn, since there are an infinite number of combinations to
locate a shell or infinity when one goes upward or downward through the
diagram. When $R= r_+$ the shell with its interior forms a
tension quasiblack hole with special features since
it is attached to the other regions of the
Reissner-Nordstr\"om spacetime.
\begin{figure}[h]
\subfloat[
\label{Fig:Penrose_diagram_Mink_RN_regions_Iprime1}]
{\includegraphics[height=0.31\paperheight]
{Undercharged_Carter_Penrose_Mink_RN_junction_I_prime}}
\hskip 2cm
\subfloat[\label{Fig:Penrose_diagram_Mink_RN_region_IIIrepeated}]
{\includegraphics[height=0.31\paperheight]
{Undercharged_Carter_Penrose_Mink_RN_junction_I_prime_alternative}}
\caption{
\label{Fig:Penrose_diagram_Mink_RN_regions_Iprime}
Carter-Penrose diagrams of a tension shell black hole, i.e., a thin
shell spacetime in a nonextremal Reissner-Nordstr\"om state, in the
location $R>r_+$, i.e., located outside the event horizon, with
orientation such that the normal points towards $r_+$. The interior
is Minkowski, the exterior is nonextremal Reissner-Nordstr\"om
spacetime. For zero electric charge the exterior is
Schwarzschild, in which case the timelike singularities
turn into spacelike ones.
Panel~(a) The Carter-Penrose diagram contains a shell in
the regions $\mathrm{I'}$ shown and another shell in the next region
$\mathrm{I'}$, which is repeated for all regions $\mathrm{I'}$.
Panel~(b) The Carter-Penrose diagram contains a shell in region
$\mathrm{I'}$ and an infinity in the next region $\mathrm{I'}$, which
is then repeated for all regions $\mathrm{I'}$. An infinite number of
different Carter-Penrose diagrams can be drawn, since there are an
infinite number of combinations to locate the shell and infinity.
}
\end{figure}
\newpage
The physical interpretation of this case has some complexity. This
nonextremal thin shell solution carries with it a white hole connected
to a black hole through a wormhole. The energy density and pressure
obey some of the energy conditions if the radius of the shell is sufficiently
large, i.e., is sufficiently larger than the gravitational
radius. When the radius of the shell approaches the gravitational
radius, the energy conditions are not obeyed, and when the radius of
the shell is at the gravitational radius the solution turns into a
tension quasiblack hole, an object with interesting properties.
The causal and global structure, as displayed by the
Carter-Penrose diagram in its simplest form, shows the important
spacetime regions. We have called this solution a tension shell black
hole, but it could equally well be called a tension shell nontraversable
wormhole, since there is a nontraversable wormhole that links the
white hole to the black hole region. As in the Reissner-Nordstr\"om
solution, this tension shell black hole possesses Cauchy horizons,
and, as in the vacuum Reissner-Nordstr\"om solution, it is liable to
be destroyed by perturbations. Presumably, the perturbation would turn
the Cauchy horizon into a null or spacelike singularity, turning in
turn the nonextremal tension shell solution into a solution similar to
the electrically uncharged Lynden-Bell-Katz tension shell black hole
solution. Moreover, these solutions, in the same way as the full
Reissner-Nosdstr\"om or Schwarzschild solutions, are universes in
themselves, and, if they existed, they would have to be given directly
by mother nature, rather than appear by, say, a straight gravitational
collapse or some other process. So, this case falls into the category
of having some of
the energy conditions verified and the geometrical setup
being physically peculiar, although full of interest, as matter
solutions on the other side of the Carter-Penrose diagram are
rare. Moreover, these solutions are familiar, in the sense that
nontraversable wormholes with white and black holes are well known.
\centerline{}
\newpage
\subsection{Formalism for nonextremal electric thin shells
outside the gravitational radius}
\label{Subsec:Induced_MinkowskiandRN_ouside_event_horizon}
\subsubsection{Preliminaries}
\label{prel1}
We now make a careful study to derive the properties of the
fundamental electric thin shell used in the two previous subsections,
i.e., the thin shell in the nonextremal state, i.e., $r_+>r_-$ or
$M>Q$, for which the shell's location obeys $R>r_+$, and
for which the orientation is such that the normal to the shell points
towards spatial infinity or towards $r_+$.
It should be read as an appendix to the previous two
subsections. We use the formalism developed in
Sec.~\ref{Sec:Junction_formalism} and
Appendix~\ref{Appendix_sec:Kruskal-Szekeres_coordinates_RN}.
\subsubsection{Induced metric and extrinsic curvature of $\mathcal{S}$
as seen from $\mathcal{M}_{\rm i}$}
\label{asseenfrom1}
Let us start by analyzing the interior
Minkowski spacetime, $\mathcal{M}_{\rm i}$, whose line element in spherical
coordinates is given by
\begin{equation}
ds_{\rm i}^{2}=-dt_{\rm i}^{2}+d\mathrm{r}^{2}+\mathrm{r}^{2}d\Omega^{2}\,,
\label{eq:Mink_metric_interior}
\end{equation}
where $t_{\rm i}$ and $\mathrm{r}$ are the time and radial coordinates,
respectively, and
$d\Omega^{2}\equiv d\theta^{2}+\sin^{2}\theta d\varphi^{2}$,
with $\theta$ and $\varphi$ being the angular coordinates.
The subscript $\rm i$ denotes interior or inside from now onwards.
The junction from the interior to the exterior is made through a
hypersurface $\mathcal{S}$. We assume the hypersurface $\mathcal{S}$
to be static, i.e., static as seen from a free-falling observer in the
interior Minkowski spacetime. In general, $\mathcal{S}$ can be either
timelike or spacelike; however, since we are considering Minkowski
spacetime, it is not possible to have a static spacelike surface,
hence $\mathcal{S}$ must be timelike. It is convenient to choose the
coordinates on $\mathcal{S}$ to be $\left\{ y^{a}\right\}
=\left(\tau,\theta,\varphi\right)$, where $\tau$ is the proper time
measured by an observer comoving with $\mathcal{S}$. It follows that
denoting $u_{\rm i}$ as the 4-velocity of an observer comoving with
the shell as seen from the inside, we can define a unit vector
$e_{\tau}$ such that $e_{\tau}\equiv u_{\rm i}$. The hypersurface
$\mathcal{S}$, as seen from the interior
spacetime $\mathcal{M}_{\rm i}$,
is parameterized by $\tau$, such that the surface's radial
coordinate is described by a function
$\mathrm{r}\vert_\mathcal{S}\equiv R=R\left(\tau\right)$. The fact
that $\mathcal{S}$ is assumed to be static implies
$\frac{d\,R}{d\tau}=0$, from which we have that $u_{\rm
i}^{\alpha}=\left(\frac{dt_{\rm i}}{d\tau},0,0,0\right)$, where
$u_{\rm i}^{\alpha}$ represents the components of the 4-velocity
$u_{\rm i}$ as seen from the interior spacetime $\mathcal{M}_{\rm
i}$. Since $\mathcal{S}$ is a timelike hypersurface, it must verify
$u_{{\rm i}\alpha}u_{\rm i}^{\alpha}=-1$. With these latter two
equations we find that $\frac{dt_{\rm i}}
{d\tau}=\pm1$. Imposing that $u_{\rm
i}$ points to the future leads to the choice of the plus sign, thus
\begin{equation}
u_{\rm i}^{\alpha}=\left(1,0,0,0\right)\,.
\label{eq:Mink_vel_explicit}
\end{equation}
From Eqs.~(\ref{eq:inducedh}) and (\ref{eq:Mink_vel_explicit})
we can find the induced metric on $\mathcal{S}$ by the spacetime
$\mathcal{M}_{\rm i}$, such that
\begin{equation}
\left.ds_{\rm i}^{2}\right|_{\mathcal{S}}
=-d\tau^{2}+R^{2}d\Omega^{2}\,.
\label{eq:induced_metric_Mink}
\end{equation}
Also, with the expression for the 4-velocity of an observer comoving
with $\mathcal{S}$, we can now use Eqs.~(\ref{eq:normal_orthogonal})
and (\ref{eq:Mink_vel_explicit}) to find the expression for the components
of the unit normal as seen from $\mathcal{M}_{\rm i}$,
$n_{\rm i}^{\alpha}$,
hence $n_{{\rm i}\alpha}=\lambda\left(0,1,0,0\right)$ where $\lambda$
is a normalization factor.
Using Eqs.~(\ref{eq:normal_normalized})
and (\ref{eq:Mink_metric_interior})
and the condition that $n$ is
spacelike yields $\lambda=\pm1$. Since we are studying the case
where the interior Minkowski spacetime is spatially compact and enclosed
by the hypersurface $\mathcal{S}$, we must choose the plus sign,
such that, the expression for the outward pointing unit normal to
$\mathcal{S}$ is given by
\begin{equation}
n_{{\rm i}\alpha}=\left(0,1,0,0\right)\,.
\label{eq:normal_Mink}
\end{equation}
We are now in position to compute the components of the extrinsic
curvature of $\mathcal{S}$ as seen from $\mathcal{M}_{\rm i}$,
$K_{{\rm i}\,ab}$.
In the case where the matching surface $\mathcal{S}$ is timelike,
static and spherically symmetric, the nonzero components of the extrinsic
curvature are given by $K_{\tau\tau}=-a^{\alpha}n_{\alpha}$,
$K_{\theta\theta}=\nabla_{\theta}n_{\theta}$,
$K_{\varphi\varphi}=\nabla_{\varphi}n_{\varphi}$,
where $a^{\alpha}\equiv u^{\beta}\nabla_{\beta}u^{\alpha}$,
see Appendix~\ref{Appendix_sec:Extrinsic_curvature}.
Taking into account Eqs.~(\ref{eq:normal_orthogonal}),
(\ref{eq:Mink_metric_interior}),
(\ref{eq:induced_metric_Mink}),
and~(\ref{eq:normal_Mink}),
we find that the nontrivial
components of the extrinsic curvature as seen from the interior
Minkowski spacetime, see Eq.~(\ref{eq:extrinsic1}), are given by
\begin{equation}
{K_{\rm i}}^{\tau}{}_{\tau}=0\,,\quad\quad
{K_{\rm i}}^{\theta}{}_{\theta}=
{K_{\rm i}}^{\varphi}{}_{\varphi}=\frac{1}{R}\,,
\label{eq:Extrinsic_curvature_Mink}
\end{equation}
where the induced metric taken from
Eq.~(\ref{eq:induced_metric_Mink}) was
used to raise the indices.
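As a quick consistency check, the angular component follows directly from the
flat-space Christoffel symbol $\Gamma^{\mathrm{r}}{}_{\theta\theta}=-\mathrm{r}$
and the normal of Eq.~(\ref{eq:normal_Mink}),
\begin{equation*}
K_{{\rm i}\,\theta\theta}=\nabla_{\theta}n_{{\rm i}\,\theta}
=-\Gamma^{\mathrm{r}}{}_{\theta\theta}\,n_{{\rm i}\,\mathrm{r}}
=\mathrm{r}\big\vert_{\mathcal{S}}=R\,,\qquad
{K_{\rm i}}^{\theta}{}_{\theta}=\frac{K_{{\rm i}\,\theta\theta}}{R^{2}}=\frac{1}{R}\,,
\end{equation*}
while the $\tau\tau$ component vanishes because the comoving observer of
Eq.~(\ref{eq:Mink_vel_explicit}) is unaccelerated in Minkowski spacetime.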
\subsubsection{Induced metric and extrinsic
curvature of $\mathcal{S}$ as seen
from $\mathcal{M}_{\rm e}$}
\label{induceM+1}
To proceed we have now to find the expressions for the induced metric
on $\mathcal{S}$ and the extrinsic curvature components as seen from
the exterior spacetime, $\mathcal{M}_{\rm e}$, in the nonextremal
state, i.e., $r_+>r_-$ or $M>Q$, see
Figure~\ref{Fig:Penrose_diagram_RN_non_extremal}, for which the
shell's location obeys $R>r_+$, and for which the
orientation is such that the normal to the shell points towards
increasing $r$ or towards decreasing $r$ as seen from the exterior, as
used in the two previous subsections.
For a nonextremal shell with $R>r_+$ we work
with the coordinate patch that has no
coordinate singularity at the
gravitational radius $r=r_+$. For
the setting of coordinate patches
in the nonextremal Reissner-Nordstr\"om spacetime see
Appendix~\ref{Appendix_sec:Kruskal-Szekeres_coordinates_RN}, see
also \cite{Comer_Katz_1994} for
the coordinate patches of an uncharged shell
matched to the Schwarzschild spacetime.
In this region and for the chosen coordinate
patch, the line element for the
Reissner-Nordstr\"om spacetime in Kruskal-Szekeres coordinates is
given by
\begin{eqnarray}
ds_{\rm e}^{2}=4\left(\frac{r_++r_-}{r_+-r_-}\right)^{2}&&
\frac{r_+^{4}}{r^{2}}e^{-\frac{r\,\left(r_+-r_-\right)}{r_+^{2}}}
\left(\frac{r-r_-}{r_++r_-}
\right)^{1+\left(\frac{r_-}{r_+}\right)^{2}}\left(dX^{2}-dT^{2}
\right)+r^{2}\left(T,X\right)d\Omega^{2}\,,
\label{eq:metric_RN_rplus}\\
&&X^{2}-T^{2}=e^{\frac{r\,\left(r_+-r_-\right)}{r_+^{2}}}
\left(\frac{r-r_+}{r_++r_-}\right)\left(
\frac{r-r_-}{r_++r_-}\right)^{-\left(
\frac{r_-}{r_+}\right)^{2}}\,,
\nonumber
\end{eqnarray}
with $r\left(T,X\right)$ being given implicitly by the latter equation.
The subscript $\rm e$ denotes exterior from now onwards.
The shell's radial coordinate when measured by an observer at
$\mathcal{M}_{\rm e}$ is described by a function
$r\vert_\mathcal{S}\equiv{R}=
{R}\left(\tau\right)$, where $\tau$ is the proper time of an
observer comoving with the surface $\mathcal{S}$, which, since we
assume it to be static, is such that $\frac{d{R}}{d\tau}=0$.
Strictly, $R$ should be written as another letter, say
${\cal R}$, but as we will see we can put ${\cal R}=R$ and so
we stick to the letter $R$ from the start.
Considering the second of the equations
given in Eq.~(\ref{eq:metric_RN_rplus}), $\frac{d{R}}{d\tau}=0$
implies that the $X$ and $T$ coordinates of a point on $\mathcal{S}$
must verify $X^{2}-T^{2}=\text{constant}$. Taking the derivative of
$X^{2}-T^{2}=\text{constant}$ with respect to the proper time we find the
relation $\frac{\partial X}{\partial\tau}=\frac{T}{X}\frac{\partial
T}{\partial\tau}$. In our previous analysis of the
$\mathcal{M}_{\rm i}$ spacetime, we found that the
hypersurface $\mathcal{S}$ must be
timelike, then, due to the first junction condition, $\mathcal{S}$
must also be timelike when seen from the exterior
$\mathcal{M}_{\rm e}$ spacetime. Therefore, the components
of the 4-velocity of an
observer comoving with it as seen from $\mathcal{M}_{\rm e}$ are,
$u_{\rm e}^{\alpha}=\left(\frac{\partial T}{\partial\tau},\frac{\partial
X}{\partial\tau},0,0\right)$. Using $\frac{\partial
X}{\partial\tau}=\frac{T}{X}\frac{\partial T}{\partial\tau}$ and
$u_{{\rm e}\alpha}u_{\rm e}^{\alpha}=-1$ we find $\frac{\partial
T}{\partial\tau}=\pm\sqrt{\frac{g^{^{XX}}\,X^{2}}{X^{2}-T^{2}}}$ and
$\frac{\partial
X}{\partial\tau}=\pm\sqrt{\frac{g^{^{XX}}\,T^{2}}{X^{2}-T^{2}}}$,
so that,
\begin{equation}
u_{\rm e}^{\alpha}=\sqrt{\frac{g^{^{XX}}}{X^{2}-T^{2}}}
\left(X,T,0,0\right)\,,\label{eq:4velocity_value_rplus}
\end{equation}
where the sign was chosen in order that $u_{\rm e}$ points to the
future and $g^{^{XX}}$ is the $XX$ component of the inverse metric
associated with Eq.~(\ref{eq:metric_RN_rplus}). Notice that the
expression found for the components of $u_{\rm e}$,
Eq.~(\ref{eq:4velocity_value_rplus}), only makes sense, physically, if
$X^{2}>T^{2}$.
Looking at
the second of the equations
given in Eq.~(\ref{eq:metric_RN_rplus}),
one has that $X^{2}>T^{2}$ implies that ${R}>r_+$, so
either the shell is located in the region $\mathrm{I}$ or in the
region $\mathrm{I}'$, see
Figure~\ref{Fig:Penrose_diagram_RN_non_extremal}. The restriction on
the allowed regions for the shell is a consequence of the shell being
assumed static; if we were to consider a dynamic shell or a different
interior spacetime, then shells in the black hole or the white hole
region
could also be treated. Note also that our choice of the plus sign in
Eq.~(\ref{eq:4velocity_value_rplus}), such that $u_{\rm e}$ points to the
future, is the correct one in both $\mathrm{I}$ or $\mathrm{I}'$
regions. Equation~(\ref{eq:4velocity_value_rplus}) can now be used to
find the induced metric on the hypersurface $\mathcal{S}$ by the
spacetime $\mathcal{M}_{\rm e}$, such that
$
\left.ds_{\rm e}^{2}\right|_{\mathcal{S}}
=-d\tau^{2}+{R}^{2}d\Omega^{2}$.
From the first junction condition, Eq.~(\ref{eq:1st_junct_cond}),
matching Eq.~(\ref{eq:induced_metric_Mink}) with this equation
for $\left.ds_{\rm e}^{2}\right|_{\mathcal{S}}$, we find
that ${R}$, the radial coordinate of $\mathcal{S}$ when measured by
an observer at $\mathcal{M}_{\rm e}$, and $R$, the radial coordinate
of $\mathcal{S}$ when measured by an observer at $\mathcal{M}_{\rm i}$,
must be indeed equal, as we have anticipated.
So, generically, $R$ describes the radial
coordinate of the shell for both the interior and exterior
spacetimes, and so the intrinsic line elements of the shell,
namely,
$
\left.ds_{\rm i}^{2}\right|_{\mathcal{S}}
=-d\tau^{2}+R^{2}d\Omega^{2}
$
and
$
\left.ds_{\rm e}^{2}\right|_{\mathcal{S}}
=-d\tau^{2}+{R}^{2}d\Omega^{2}$,
can be written
uniquely as
\begin{equation}
\left.ds^{2}\right|_{\mathcal{S}}=-d\tau^{2}+R^{2}d\Omega^{2}\,.
\label{eq:induced_metric_RN}
\end{equation}
Now, the fact that the unit normal to $\mathcal{S}$ is
spacelike implies $n_{\rm e}^{\alpha}n_{{\rm e}\alpha}=+1$. Then,
taking into account Eqs.~(\ref{eq:normal_orthogonal}) and
(\ref{eq:4velocity_value_rplus}),
we find $n_{{\rm e}\alpha}=\pm\sqrt{
\frac{g_{_{XX}}}{X^{2}-T^{2}}}\left(-T,X,0,0\right)$.
To proceed, we must choose the sign for the normal. The choice of
the sign is related to the orientation
of the shell, i.e., the direction of the normal, and we
impose that it points in the direction of increasing $X$ coordinate.
This implies that the choice of the sign is different if we consider
the shell to be in the region $\mathrm{I}$ or $\mathrm{I}'$, see
Figure~\ref{Fig:Penrose_diagram_RN_non_extremal} and
also Figure~\ref{Appendix_fig:Coordinate_patch_1}
of Appendix~\ref{Appendix_subsec:Kruskal-Szekeres_coordinates}.
One of the simplifications
that the use of the Kruskal-Szekeres coordinates introduces is that
the choice of the sign can be written in a concise manner, such that
\begin{equation}
n_{{\rm e}\alpha}=\text{sign}\left(X\right)\sqrt{
\frac{g_{_{XX}}}{X^{2}-T^{2}}}\left(-T,X,0,0\right)\,,
\label{eq:normal_value_rplus}
\end{equation}
where the quantities on the right-hand side are to be evaluated at
$r=R$ and $\text{sign}\left(X\right)$ is the signum function of the
coordinate $X$ of the shell. Notice however, that the usage of this
notation is simply to treat in a concise way the two possible
directions of the normal of the shell. Physically, there is nothing
different between a shell located in either region, i.e., with
positive or negative values of $X$. Having found the normal to the
hypersurface $\mathcal{S}$ as seen from the exterior nonextremal
Reissner-Nordstr\"om spacetime, we can now compute the nonzero
components of the extrinsic curvature. Following the results in
Appendix~\ref{Appendix_subsec:Extrinsic_curvature_outside_event_horizon}
we have
\begin{equation}
{K_{\rm e}}^{\tau}{}_{\tau}=\frac{\text{sign}
\left(X\right)}{2R^{2}k}\left(r_++r_--2
\frac{r_+r_-}{R}\right)\,,\quad\quad
{K_{\rm e}}^{\theta}{}_{\theta}=
{K_{\rm e}}^{\varphi}{}_{\varphi}=\frac{\text{sign}
\left(X\right)\left(r_+-r_-\right)}{2r_+^{2}R}
\sqrt{g_{_{XX}}\left(X^{2}-T^{2}\right)}\,,
\label{eq:Nonextremal_Extrinsic_RN_outside_event_horizon}
\end{equation}
where $k$, here, is the redshift function given in
Eq.~(\ref{eq:redshift}),
evaluated at $R$, i.e., $k(R,r_+,r_-)=
\sqrt{\left(1-\frac{r_+}{R}\right)\left(1-\frac{r_-}{R}\right)}$.
\subsubsection{Shell's energy density and pressure
\label{subSubsec:shellsenergydensityandpressure}}
We are now in position to find the properties of a perfect fluid thin
shell in a nonextremal Reissner-Nordstr\"om state, located outside the
gravitational radius or event horizon radius, depending on the case.
The shell's stress-energy tensor is given in Eq.~(\ref{eq:perfect}),
an expression containing
the energy per unit area $\sigma$, the
tangential pressure of the fluid $p$,
the velocity $u_a$, and
the induced metric $h_{ab}$.
From our choice of coordinates on
$\mathcal{S}$ we have that $\left\{ y^{a}\right\}
=\left(\tau,\theta,\varphi\right)$, the four-velocity $u_{\rm i}^\alpha$
is given in Eq.~(\ref{eq:Mink_vel_explicit}),
and the metric $h_{ab}$ is given through
Eq.~(\ref{eq:induced_metric_RN}). Putting
everything together we find that
$S_{\tau}^{\tau}=-\sigma$,
$S_{\theta}^{\theta}=S_{\varphi}^{\varphi}=p$.
Comparing these latter equations
with the second junction condition, Eq.~(\ref{eq:2nd_junct_cond}),
taking into account the components of the induced metric,
given through Eq.~(\ref{eq:induced_metric_RN}),
and the fact that
$\left[K_{\theta}^{\theta}\right]=\left[K_{\varphi}^{\varphi}\right]$,
we find $\sigma=-\frac{1}{4\pi}\left[K_{\theta}^{\theta}\right]$
and $p=\frac{1}{8\pi}\left[K_{\tau}^{\tau}\right]-\frac{\sigma}{2}$.
With the components of the extrinsic curvature found in
Eqs.~(\ref{eq:Extrinsic_curvature_Mink})
and~(\ref{eq:Nonextremal_Extrinsic_RN_outside_event_horizon})
we then obtain
\begin{equation}
8\pi\sigma=\frac{2}{R}\left(1-\text{sign}\left(X\right)k\right)\,,
\label{eq:sigma_value_rplus}
\end{equation}
\begin{equation}
8\pi p=\frac{\text{sign}\left(X\right)}{2Rk}\left[
\left(1-\text{sign}\left(X\right)k\right)^{2}-
\frac{r_+r_-}{R^{2}}\right]\,,
\label{eq:pressure_value_rplus}
\end{equation}
where $k$ here is the redshift function given
in Eq.~(\ref{eq:redshift})
evaluated at $R$, i.e., $k(R,r_+,r_-)
=\sqrt{\left(1-\frac{r_+}{R}\right)\left(1-\frac{r_-}{R}\right)}$.
As the surface electric current density $s_a$ on the
thin shell is defined as $s_a=\sigma_{e}u_a$,
where $\sigma_{e}$ represents the
electric charge density and $u_a$ is the velocity
of the shell, and since the Minkowski spacetime has zero electric
charge,
from Eqs.~(\ref{eq:junct_cond_Faradayb})-(\ref{eq:junct_cond_Faraday2})
and~(\ref{eq:RN_FaradayMaxwell_value})
it follows that
\begin{equation}
8\pi \sigma_{e}=2\frac{\sqrt{r_+r_-}}{ R^{2}}\,.
\label{eq:chargedensity1}
\end{equation}
In Eqs.~(\ref{eq:sigma_value_rplus}) and~(\ref{eq:pressure_value_rplus})
it is clear that it is necessary to fix the value of
$\text{sign}\left(X\right)$.
Let us start with $\text{sign}\left(X\right)=+1$. It is useful here to
give the expressions for the shell's energy density and pressure,
$\sigma$ and $p$, in terms of $M$ and $Q$.
Using Eq.~(\ref{eq:KS_horizons_radius0}) in Eqs.~(\ref{eq:sigma_value_rplus})
and~(\ref{eq:pressure_value_rplus}) with $\text{sign}
\left(X\right)=+1$ we have
$8\pi\sigma=\frac{2}{R}\left(1-k\right)$,
$8\pi p=\frac{1}{2Rk}\left[\left(1-k\right)^{2}-
\frac{Q^{2}}{R^{2}}\right]$,
where
$k(R,M,Q)
=\sqrt{
1-\frac{2M}{R}+
\frac{Q^2}{R^2}}$,
and
also from Eq.~(\ref{eq:chargedensity1}) we have
$8\pi \sigma_{e}=\frac{2Q}{R^{2}}$.
Let us now take $\text{sign}\left(X\right)=-1$. It is also useful here to
give the expressions for the shell's energy density and pressure,
$\sigma$ and $p$, in terms of $M$ and $Q$.
Using Eq.~(\ref{eq:KS_horizons_radius0}) in Eqs.~(\ref{eq:sigma_value_rplus})
and~(\ref{eq:pressure_value_rplus}) with $\text{sign}
\left(X\right)=-1$ we have
$8\pi\sigma=\frac{2}{R}\left(1+k\right)$,
$8\pi p=-\frac{1}{2Rk}\left[\left(1+k\right)^{2}-\frac{Q^{2}}{R^{2}}
\right]$, where again
$k(R,M,Q)
=\sqrt{
1-\frac{2M}{R}+
\frac{Q^2}{R^2}}$, and
also from Eq.~(\ref{eq:chargedensity1}) we have
$8\pi\sigma_{e}=\frac{2Q}{R^{2}}$.
These are the expressions used in the two previous
subsections.
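For clarity, the reduction from the pair $\left(r_+,r_-\right)$ to the pair
$\left(M,Q\right)$ performed above rests only on the identities
$r_++r_-=2M$ and $r_+r_-=Q^{2}$, equivalent to the horizon radii
$r_\pm=M\pm\sqrt{M^{2}-Q^{2}}$ of Eq.~(\ref{eq:KS_horizons_radius0}); indeed,
\begin{equation*}
k\left(R,r_+,r_-\right)=\sqrt{1-\frac{r_++r_-}{R}+\frac{r_+r_-}{R^{2}}}
=\sqrt{1-\frac{2M}{R}+\frac{Q^{2}}{R^{2}}}=k\left(R,M,Q\right)\,,
\qquad
\frac{r_+r_-}{R^{2}}=\frac{Q^{2}}{R^{2}}\,,
\end{equation*}
so that Eqs.~(\ref{eq:sigma_value_rplus})-(\ref{eq:chargedensity1})
translate immediately into the $\left(M,Q\right)$ forms quoted.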
Note also that when $r_-=0$, then $\sigma_{e}=0$ and the
electric charge $Q$ is zero, $Q=0$,
so the outside spacetime is
described by the Schwarzschild
solution, for which Eqs.~(\ref{eq:sigma_value_rplus})
and (\ref{eq:pressure_value_rplus})
can be written explicitly as
$8\pi\sigma\vert_{r_-=0}=\frac{2}{R}
\left(1-\text{sign}\left(X\right)\sqrt{1-
\frac{r_+}{R}}\right)$
and $8\pi p\vert_{r_-=0}=\frac{\text{sign}
\left(X\right)}{2R}\sqrt{\frac{1}{1-
\frac{r_+}{R}}}\left[\left(1-\text{sign}\left(X\right)
\sqrt{1-\frac{r_+}{R}}\right)^{2}\right]$, and which
are the energy density and the tangential pressure
for a shell matching Minkowski to
the Schwarzschild spacetime.
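For orientation, a minimal explicit form of this Schwarzschild limit:
taking $\text{sign}\left(X\right)=+1$ and recalling that $Q=0$ gives
$r_+=2M$, the above expressions reduce to the familiar ones for a static
shell with a Minkowski interior and a Schwarzschild exterior,
\begin{equation*}
8\pi\sigma=\frac{2}{R}\left(1-\sqrt{1-\frac{2M}{R}}\right)\,,
\qquad
8\pi p=\frac{\left(1-\sqrt{1-\frac{2M}{R}}\right)^{2}}{2R\sqrt{1-\frac{2M}{R}}}\,.
\end{equation*}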
\clearpage{}
\section{Nonextremal electric thin shells inside the Cauchy radius:
Tension shell regular and nonregular
black holes and compact shell naked singularities}
\label{insidecauchy}
\subsection{Nonextremal electric thin shells inside the Cauchy horizon:
Tension shell regular and nonregular black holes}
\label{Subsec:nonextremalnormalcauchy}
Here we study the case of a fundamental electric thin shell in the
nonextremal state, i.e., $r_+>r_-$ or $M>Q$, for which the shell's
location obeys $R<r_-$, and for which the orientation is
such that the normal to the shell points towards $r_-$. In this case
horizons do exist and so, following the nomenclature, $r_+$ is both
the gravitational and the event horizon radius, and $r_-$ is both the
Cauchy radius and the Cauchy horizon radius. The normal to the shell
pointing towards $r_-$ means in the notation of the Kruskal
coordinate $X$ that we take $\text{sign}\left(X\right)=+1$, see the
end of this section and
Appendix~\ref{Appendix_sec:Kruskal-Szekeres_coordinates_RN} for
details.
As functions of $M $, $Q$, and $R$, the shell's energy
density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{equation}
8\pi\sigma=\frac{2}{R}\left(1-k\right)\,,
\label{eq:sigma_value_rminus_MQ_sign_plus}
\end{equation}
\begin{equation}
8\pi p=\frac{1}{2Rk}\left[\left(1-k\right)^{2}-\frac{Q^{2}}{R^{2}}
\right]\,,
\label{eq:pressure_value_rminus_MQ_sign_plus}
\end{equation}
respectively, with $k=\sqrt{1-\frac{2M}{R}+\frac{Q^{2}}{R^{2}}}$.
\begin{figure}[h]
\subfloat[]{\includegraphics[width=0.8\textwidth]{sigma_regions_III}}\\
\subfloat[\label{Fig:Pressure_region_III_prime}]{
\includegraphics[scale=0.45]{pressure_region_III_prime}}
\caption{\label{Fig:Properties_region_III_prime}
Physical
properties of a nonextremal tension shell regular and
nonregular black hole, i.e.,
an electric perfect fluid
thin shell in a nonextremal Reissner-Nordstr\"om state, in the
location $R<r_-$, i.e., located inside the Cauchy radius,
and with orientation such that the normal points towards $r_-$.
The interior is Minkowski and the exterior is nonextremal
Reissner-Nordstr\"om spacetime.
Panel (a)
Energy density $\sigma$ of the shell as a function of the radius $R$
of the shell for various values of the $\frac{Q}{M}$ ratio. The energy
density is adimensionalized through the mass $M$, $8\pi M\sigma$, and
the radius is adimensionalized through the Cauchy radius $r_-$,
$\frac{R}{r_-}$. The marked zone on the top left is amplified on the
right. Panel (b) Tension $-p$ on the shell as a function of the
radius $R$ of the shell for various values of the $\frac{Q}{M}$
ratio. The tension is adimensionalized through the mass $M$, $-8\pi
Mp$, and the radius is adimensionalized through the Cauchy
radius $r_-$, $\frac{R}{r_-}$.}
\end{figure}
Also, the electric charge density $\sigma_{e}$
is given in terms of $M$, $Q$, and $R$, by
\begin{equation}
8\pi \sigma_{e}=\frac{2Q}{R^{2}}\,.
\label{eq:chargedensity12cauchy}
\end{equation}
The behavior of $\sigma$ and $p$ as functions of the radial coordinate
$R$ of the shell for various values of the $\frac{Q}{M}$ ratio in this
case is shown in Figure~\ref{Fig:Properties_region_III_prime}. We see
that, depending on the radial coordinate of the shell, the energy
density might take negative values. Indeed, from
Eq.~(\ref{eq:sigma_value_rminus_MQ_sign_plus}) we find that for
$R<\frac{Q^{2}}{2M}$ the energy density, $\sigma$, is negative. Also,
this kind of thin shell is always supported by tension, see also
Eq.~(\ref{eq:pressure_value_rminus_MQ_sign_plus}). It is a tension
shell. This is related to the fact that the Reissner-Nordstr\"om
singularity at $r=0$ is repulsive. Moreover, we see that both the
energy density and the pressure of the shell diverge to negative
infinity as the shell gets closer to $R=0$. On the other hand, in the
limit of $R\to r_-$ the pressure diverges to negative infinity, but
the energy density tends to the finite value $\sigma=\frac{1}{4\pi r_-}$.
When $Q=0$, i.e., $r_-=0$, the condition $R<r_-$ forces $R=0$ in the
limit, so the shell disappears and one is left with the vacuum
Schwarzschild solution, which is singular at $r=0$.
In relation to the energy conditions of the shell we can
say that the null, the weak, the dominant, and the strong energy
conditions are never verified in this case, see a detailed
presentation ahead.
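For clarity, the threshold quoted above follows in one line from
Eq.~(\ref{eq:sigma_value_rminus_MQ_sign_plus}), since $\sigma<0$ is
equivalent to $k>1$, i.e.,
\begin{equation*}
1-\frac{2M}{R}+\frac{Q^{2}}{R^{2}}>1
\quad\Longleftrightarrow\quad
R<\frac{Q^{2}}{2M}\,.
\end{equation*}
As a minimal numerical illustration, in units with $M=1$ and taking
$Q=0.6$, one has $r_-=0.2$ and $\frac{Q^{2}}{2M}=0.18$; a shell at
$R=0.1$ then has $k=\sqrt{17}\simeq4.12$, so that
$8\pi\sigma\simeq-62.5$ and $8\pi p\simeq-31.8$, i.e., a negative energy
density and a tension, as described above.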
\newpage
The Carter-Penrose diagram for this case
can be drawn directly from the building
blocks of an interior Minkowski spacetime and the full
nonextremal Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_RN_region_III_towards_Cauchy_horizon}
two possible
Carter-Penrose diagrams of a
shell spacetime in a nonextremal Reissner-Nordstr\"om state,
in
the location $R<r_-$, with orientation such that
the normal points towards $r_-$, i.e.,
$\text{sign}\left(X\right)=+1$, are shown.
It is a tension shell black hole spacetime. More specifically,
there is an infinitude of possible diagrams.
Indeed,
in the diagram (a) it is clear that the tension shell is inside the
Cauchy horizon in both regions $\mathrm{III}$ and
$\mathrm{III'}$ of a Reissner-Nordstr\"om spacetime. Admitting that
the portion shown of the diagram repeats itself ad infinitum then the
black hole is regular. In the diagram (b) there is a shell in region
$\mathrm{III}$ and a singularity in region $\mathrm{III'}$, and so it
is not a regular black hole, it is a tension shell black hole with a
singularity. Since what one puts in the regions $\mathrm{III}$ and
$\mathrm{III'}$, either a shell or a singularity, is not decided by
the solution, an infinite number of different Carter-Penrose diagrams
can be drawn, as there are an infinite number of combinations to
locate a shell or a singularity when one goes upward or downward
through the diagram. Note that $r_+$ and $r_-$ are the event horizon
and the Cauchy horizon radii, clearly, and the Einstein-Rosen bridge,
i.e., the dynamic wormhole, is there.
Regular black holes with shells that are
sandwiched between a de Sitter interior and a Reissner-Nordstr\"om
exterior were built in \cite{lemoszanchinregularbhs}.
\begin{figure}[h]
\subfloat[
\label{Fig:Penrose_diagram_Mink_RN_region_III_alt}]
{\includegraphics[height=0.31\paperheight]
{Undercharged_Carter_Penrose_Mink_RN_junction_region_III_prime_alternative}
}\hspace*{4cm}
\subfloat[
\label{Fig:Penrose_diagram_Mink_RN_region_III}]
{\includegraphics[height=0.31\paperheight]
{Undercharged_Carter_Penrose_Mink_RN_junction_region_III_prime}
}
\caption{
\label{Fig:Penrose_diagram_Mink_RN_region_III_towards_Cauchy_horizon}
Carter-Penrose diagrams of the tension shell black holes,
i.e., a thin shell spacetime
in a nonextremal Reissner-Nordstr\"om state, in the location $R<r_-$,
i.e., located inside the Cauchy radius, with orientation
such that the normal to the shell points towards $r_-$.
The interior is Minkowski, the exterior is Reissner-Nordstr\"om
spacetime.
Panel~(a) The Carter-Penrose diagram contains a shell in
both regions $\mathrm{III}$ and $\mathrm{III'}$. If this pattern is
repeated ad infinitum then it is a tension shell regular black hole.
Panel~(b) The Carter-Penrose diagram contains a shell in region
$\mathrm{III}$ and a singularity in region $\mathrm{III'}$. It is a
tension shell black hole, now not regular.
An infinite number of different Carter-Penrose diagrams can be drawn,
since there are an infinite number of combinations to locate the shell.
}
\end{figure}
\newpage
The physical interpretation of this case is of real interest. This
nonextremal thin shell solution provides a regular black hole
solution. The energy density and pressure do not obey the energy
conditions for any shell radius, i.e., for shell radii between zero and
the Cauchy horizon. The causal and global structure as displayed by the
Carter-Penrose diagram shows clearly that there is no singularity if
one adopts the simplest form of the diagram, meaning also that the
topology of the region inside the Cauchy horizons is a three-sphere,
as usual for regular black holes. As in the Reissner-Nordstr\"om
vacuum solution, these tension shell regular black holes possess Cauchy
horizons, and so they are subject to instabilities, which would lead
the solutions to an endpoint which can only be guessed. As regular
black holes these solutions join the other known regular black hole
solutions which are of interest in quantum gravitational settings that
presumably get rid of the singularities. So, this case falls into the
category of having the energy conditions never verified, and so in
this sense is odd, although of interest as regular black hole
matter solutions always are. Inasmuch as a regular black hole is
familiar, so too is this shell solution.
\newpage
\subsection{Nonextremal electric thin shells inside the Cauchy radius:
Compact shell naked singularities}
\label{Subsec:nonextcompactshellnakedsingularity}
Here we study the case of a fundamental electric thin shell in the
nonextremal state, i.e., $r_+>r_-$ or $M>Q$, for which the shell's
location obeys $R<r_-$, and for which the orientation is
such that the normal to the shell points towards $r=0$. In this case,
horizons do not exist and so, following the nomenclature, $r_+$ is
the gravitational radius, and $r_-$ is
the
Cauchy radius. The normal to the shell
pointing towards $r=0$ means in the notation of the Kruskal
coordinate $X$ that we take $\text{sign}\left(X\right)=-1$, see the
end of this section and
Appendix~\ref{Appendix_sec:Kruskal-Szekeres_coordinates_RN} for
details.
As functions of $M $, $Q$, and $R$, the shell's energy
density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{equation}
8\pi\sigma=\frac{2}{R}\left(1+k\right)\,,
\label{eq:sigma_value_rminus_MQ_sign_minus}
\end{equation}
\begin{equation}
8\pi p=-\frac{1}{2Rk}\left[\left(1+k\right)^{2}-
\frac{Q^{2}}{R^{2}}\right]\,,
\label{eq:pressure_value_rminus_MQ_sign_minus}
\end{equation}
respectively,
with $k=\sqrt{1-\frac{2M}{R}+\frac{Q^{2}}{R^{2}}}$.
The electric charge density $\sigma_{e}$
is given in terms of $M$, $Q$, and $R$
by Eq.~(\ref{eq:chargedensity12cauchy}).
The behavior of $\sigma$ and $p$ as functions of the radial coordinate
$R$ of the shell for various values of the $\frac{Q}{M}$ ratio
in this case
is shown in
Figure~\ref{Fig:Properties_region_III}.
\begin{figure}[h]
\subfloat[]
{\includegraphics[scale=0.45]{sigma_region_III}}
\hfill{}\subfloat[\label{Fig:Pressure_region_III}]
{\includegraphics[scale=0.45]{pressure_region_III}}
\caption{
\label{Fig:Properties_region_III}
Physical properties of a nonextremal
compact thin shell singularity, i.e., an
electric perfect fluid thin shell in a nonextremal
Reissner-Nordstr\"om state, in the location $R<r_-$, i.e., located
inside the Cauchy radius, and with orientation such that the normal
points towards $r=0$. The interior is Minkowski and the exterior is
nonextremal Reissner-Nordstr\"om spacetime, although what it is
interior and what is exterior is blurred in this case.
Panel (a) Energy density $\sigma$ of the shell as a function of the
radius $R$ of the shell for various values of the $\frac{Q}{M}$
ratio. The energy density is adimensionalized through the mass $M$,
$8\pi M\sigma$, and the radius is adimensionalized through the Cauchy
radius $r_-$, $\frac{R}{r_-}$.
Panel (b) Pressure $p$ on the shell as a function of the radius $R$ of
the shell for various values of the $\frac{Q}{M}$ ratio. The pressure
is adimensionalized through the mass $M$, $8\pi Mp$, and the radius is
adimensionalized through the Cauchy radius $r_-$, $\frac{R}{r_-}$.
}
\end{figure}
We see that the energy density of the shell is always positive and the
shell is supported by pressure. As the radial coordinate of the shell,
$R$, goes to zero, both the energy density and pressure of the shell
diverge to infinity. Moreover, as $R\to r_-$ the energy density tends
to $\frac1{4\pi\,r_-}$ and the pressure diverges to infinity. When
$Q=0$ the solution does not exist. In relation to the energy
conditions of the shell we can say that the null and the weak energy
conditions are verified for $0<R<r_-$, the dominant energy condition
is verified for $0<R<R_{\rm III}$, with $R_{\rm III}$ to be given
later, and the strong energy condition is verified for $0<R<r_-$, see
a detailed presentation ahead.
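As a quick check of the limits quoted above, note that $k\to0$ as
$R\to r_-$, so Eq.~(\ref{eq:sigma_value_rminus_MQ_sign_minus}) gives
\begin{equation*}
\left.8\pi\sigma\right|_{R\to r_-}=\frac{2}{r_-}\,,
\qquad\text{i.e.,}\qquad
\sigma\to\frac{1}{4\pi r_-}\,,
\end{equation*}
while in Eq.~(\ref{eq:pressure_value_rminus_MQ_sign_minus}) the square
bracket tends to $1-\frac{Q^{2}}{r_-^{2}}=1-\frac{r_+}{r_-}<0$, so that
the prefactor $-\frac{1}{2Rk}$ makes the pressure diverge to plus
infinity.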
\newpage
The Carter-Penrose diagram for this case can be drawn directly from
the building blocks of an interior Minkowski spacetime and the full
nonextremal Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_RN_region_III_towards_singularity}
the Carter-Penrose diagram of a shell spacetime in a nonextremal
Reissner-Nordstr\"om state, in the location $R<r_-$, with orientation
such that the normal points towards $r=0$, i.e.,
$\text{sign}\left(X\right)=-1$, is shown. It is a compact shell naked
singularity spacetime. It is clearly a compact space: $r$ goes from 0
to $R$ and then decreases back to 0 at the timelike singularity, such
that there is no clear distinction between what is interior and what is
exterior. We use the hash symbol $\#$ to represent the connected sum
of the spacetime manifolds, in order to conserve the conformal
structure in the Carter-Penrose diagram of the total spacetime. It is
difficult to understand if this solution can be achieved from a
physical phenomenon. However, we expect the shell to be the source of
the singularity since the shell is the source of the exterior
spacetime, although it is very difficult to understand why the
singularity is formed away from the shell itself. Nonetheless, surely
the nonlinearity of the theory leads to this counterintuitive
behavior.
\begin{figure}[h]
\includegraphics[height=0.25\paperheight]
{Carter_Penrose_Mink_RN_junction_region_III}
\caption{
\label{Fig:Penrose_diagram_Mink_RN_region_III_towards_singularity}
Carter-Penrose diagram of the compact shell naked singularity, i.e., a
thin shell spacetime in a nonextremal Reissner-Nordstr\"om state, in
the location $R<r_-$, i.e., located inside the Cauchy radius, with
orientation such that the normal to the shell points towards $r=0$.
Part of the spacetime is Minkowski, part is Reissner-Nordstr\"om; in
this case there is no clear distinction between what is interior and
what is exterior. The hash symbol $\#$ represents the connected sum of the
two spacetimes.}
\end{figure}
The physical interpretation of this case is most curious. This
nonextremal thin shell solution provides a closed spatial static
universe with a singularity at one pole. The energy density and
pressure obey the energy conditions for certain shell radii. The
causal and global structure as displayed by the Carter-Penrose diagram
shows the characteristics of this universe, which has two sheets joined at
the shell with one sheet having a singularity at its pole and with no
horizons. So, this case falls into the category of having the energy
conditions verified and the resulting spacetime being peculiar.
\newpage
\subsection{Formalism for nonextremal electric thin shells
inside the Cauchy radius}
\label{Subsec:Induced_MinkowskiandRN_inside_cauchy_horizon}
\subsubsection{Preliminaries}
\label{prel2}
We now make a careful study to derive the properties of the
fundamental electric thin shell used in the two previous subsections,
i.e., the thin shell in the nonextremal state, i.e., $r_+>r_-$ or
$M>Q$, for which the shell's location obeys $R<r_-$, and
for which the orientation is such that the normal to the shell points
towards $r_-$ or towards $r=0$.
It should be read as an appendix to the previous two
subsections. We use the formalism developed in
Sec.~\ref{Sec:Junction_formalism} and
Appendix~\ref{Appendix_sec:Kruskal-Szekeres_coordinates_RN}.
\subsubsection{Induced metric and extrinsic curvature of
$\mathcal{S}$ as seen
from $\mathcal{M}_{\rm i}$}
\label{asseenfrom2}
Let us start by mentioning the interior
Minkowski spacetime, $\mathcal{M}_{\rm i}$. Since it is the same as
the analysis done in the previous section we only quote
the important equations.
They are the interior metric Eq.~(\ref{eq:Mink_metric_interior}),
the interior four-velocity of the shell
Eq.~(\ref{eq:Mink_vel_explicit}), the
metric for the shell at radius $R$ given in
Eq.~(\ref{eq:induced_metric_Mink}), the normal to the shell
Eq.~(\ref{eq:normal_Mink}),
and the extrinsic curvature from the inside
Eq.~(\ref{eq:Extrinsic_curvature_Mink}).
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen
from $\mathcal{M}_{\rm e}$}
\label{Subsec:shells_inside_cauchy_horizon}
To proceed we have now to find the expressions for the induced metric
on $\mathcal{S}$ and the extrinsic curvature components as seen from
the exterior spacetime, $\mathcal{M}_{\rm e}$, in the nonextremal
state, i.e., $r_+>r_-$ or $M>Q$, see
Figure~\ref{Fig:Penrose_diagram_RN_non_extremal}, for which the
shell's location has radius $R$ obeying $R<r_-$, and for which the
orientation is such that the normal to the shell points towards
increasing $r$, i.e., towards $r_-$, or towards decreasing $r$, i.e.,
towards $r=0$, as seen from the exterior, as used in the two previous
subsections.
For a nonextremal shell with $R<r_-$ we work with the coordinate
patch that has no coordinate singularity at the Cauchy radius
$r=r_-$. Many of the previous results are also valid in this second
coordinate patch. From the discussion in
Appendix~\ref{Appendix_subsec:Kruskal-Szekeres_coordinates0},
the line element for the Reissner-Nordstr\"om spacetime in
Kruskal-Szekeres coordinates in this patch is,
\begin{eqnarray}
ds_{\rm e}^{2}=4\left(\frac{r_++r_-}{r_+-r_-}\right)^{2}&&
\frac{r_-^{4}}{r^{2}}e^{\frac{r\left(r_+-r_-
\right)}{r_-^{2}}}\left(\frac{r_+-r}{r_++r_-}
\right)^{1+\left(\frac{r_+}{r_-}
\right)^{2}}\left(dX^{2}-dT^{2}\right)+r^{2}
\left(T,X\right)d\Omega^{2}\,,
\label{eq:metric_RN_rminus}\\
&&
X^{2}-T^{2}=e^{-\frac{r\left(r_+-r_-\right)}{r_-^{2}}}\left(
\frac{r_--r}{r_++r_-}\right)\left(\frac{r_+-r}{r_++r_-}
\right)^{-\left(\frac{r_+}{r_-}\right)^{2}}\,,
\nonumber
\end{eqnarray}
with $r\left(T,X\right)$ being given implicitly by the latter equation.
The shell's radial coordinate when measured by an observer at
$\mathcal{M}_{\rm e}$ is constant since the shell is static, so from
the second of the equations in Eq.~(\ref{eq:metric_RN_rminus}) we take
that the $X$ and $T$ coordinates of the shell must verify
$X^{2}-T^{2}=\text{constant}$. Now, as was argued in the previous
section, a static shell must be timelike as seen from both interior
and exterior spacetimes. The restriction $X^{2}-T^{2}=\text{constant}$
and the analysis performed in subsection~\ref{induceM+1}, imply that
the components of the 4-velocity $u_{\rm e}$ of an observer comoving
with the shell as seen from the exterior spacetime, are given by
\begin{equation}
u_{\rm e}^{\alpha}=-\sqrt{\frac{g^{^{XX}}}{X^{2}-T^{2}}}
\left(X,T,0,0\right)\,,
\label{eq:4velocity_value_rminus}
\end{equation}
where, in this case, $g^{^{XX}}$ is the $XX$ component of the inverse
of the metric in Eq.~(\ref{eq:metric_RN_rminus}). We see that
Eq.~(\ref{eq:4velocity_value_rminus}) only makes sense physically, if
$X^{2}-T^{2}>0$, which, taking into account the second of the
equations in Eq.~(\ref{eq:metric_RN_rminus}), allows us to conclude
that the shell must then be located either at the region $\mathrm{III}$
or $\mathrm{III}'$, see
Figure~\ref{Fig:Penrose_diagram_RN_non_extremal}. Let us remark that
the minus sign in Eq.~(\ref{eq:4velocity_value_rminus}) arises from
the convention that the 4-velocity points to the future for both
regions $\mathrm{III}$ and $\mathrm{III}'$. Making use of
Eqs.~(\ref{eq:metric_RN_rminus}) and (\ref{eq:4velocity_value_rminus})
to find the induced metric on $\mathcal{S}$ as seen by an observer at
$\mathcal{M}_{\rm e}$ and imposing the first junction condition,
Eq.~(\ref{eq:1st_junct_cond}), we deduce that the shell's radial
coordinate $R$ is the same as measured by an observer at
$\mathcal{M}_{\rm i}$ or $\mathcal{M}_{\rm e}$ and the induced metric
on $\mathcal{S}$ is given by Eq.~(\ref{eq:induced_metric_RN}), namely,
\begin{equation}
\left.ds^{2}\right|_{\mathcal{S}}=-d\tau^{2}+R^{2}d\Omega^{2}\,.
\label{eq:induced_metric_RNcauchy}
\end{equation}
Combining $n_{\rm e}^{\alpha}n_{{\rm e}\alpha}=1$, see
Eq.~(\ref{eq:normal_normalized}),
$n_{{\rm e}\alpha}u_{\rm e}^{\alpha}=0$, see
Eq.~(\ref{eq:normal_orthogonal}),
and Eq.~(\ref{eq:4velocity_value_rminus}), we find the expression
for the components of the unit normal to the hypersurface $\mathcal{S}$,
as seen from the exterior spacetime $\mathcal{M}_{\rm e}$, to be
$n_{{\rm e}\alpha}=\pm\sqrt{\frac{g_{_{XX}}}{X^{2}-T^{2}}}
\left(-T,X,0,0\right)$.
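As a quick consistency check, and assuming, as elsewhere in the paper,
the coordinate ordering $\left(T,X,\theta,\varphi\right)$, write the
$\left(T,X\right)$ part of the metric in Eq.~(\ref{eq:metric_RN_rminus})
as $F\left(dX^{2}-dT^{2}\right)$, so that $g_{_{XX}}=-g_{_{TT}}=F$; then
\begin{equation*}
u_{\rm e}^{\alpha}u_{{\rm e}\alpha}=
\frac{g^{^{XX}}}{X^{2}-T^{2}}\,F\left(T^{2}-X^{2}\right)=-1\,,
\qquad
n_{{\rm e}\alpha}n_{\rm e}^{\alpha}=
\frac{g_{_{XX}}}{X^{2}-T^{2}}\,\frac{X^{2}-T^{2}}{F}=1\,,
\qquad
n_{{\rm e}\alpha}u_{\rm e}^{\alpha}=0\,,
\end{equation*}
in accordance with Eqs.~(\ref{eq:normal_normalized}) and
(\ref{eq:normal_orthogonal}).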
To specify the sign of the normal to $\mathcal{S}$ for each region
we consider two orientations: the orientation
where the normal $n_{{\rm e}\alpha}$
points
towards the Cauchy radius at $r_-$ and the orientation
where the normal
points towards the singularity $r=0$. These two orientations can be treated
in a concise way by assuming, for example, a shell located either in
the region $\mathrm{III}$ or $\mathrm{III}'$ and the normal pointing
in the direction of decreasing $X$ coordinate, such that
\begin{equation}
n_{{\rm e}\alpha}=\text{sign}\left(X\right)\sqrt{
\frac{g_{_{XX}}}{X^{2}-T^{2}}}\left(T,-X,0,0\right)\,.
\label{eq:normal_value_rminus}
\end{equation}
Note the importance of the sign of the normal to
yield totally different physical and geometrical
properties to a shell in the same location, here
in the region $R<r_-$.
Then, using the
results from
Appendix~\ref{Appendix_subsec:Extrinsic_curvature_inside_Cauchy_horizon},
we find the nonzero components of the extrinsic curvature of
$\mathcal{S}$
as seen from the exterior spacetime to be given by
\begin{equation}
{K_{\rm e}}^{\tau}{}_{\tau}=
\frac{\text{sign}\left(X\right)}{2R^{2}k}\left[r_++r_--2
\frac{r_+r_-}{R}\right]\,,\quad\quad
{K_{\rm e}}^{\theta}{}_{\theta}=
{K_{\rm e}}^{\varphi}{}_{\varphi}
=\frac{\text{sign}\left(X\right)\left(r_+-r_-\right)}{2r_-^{2}R}
\sqrt{g_{_{XX}}\left(X^{2}-T^{2}\right)}\,,
\label{eq:Nonextremal_Extrinsic_RN_inside_cauchy_horizon}
\end{equation}
where $k$, here, is the redshift function given in
Eq.~(\ref{eq:redshift}),
evaluated at $R$, i.e., $k(R,r_+,r_-)=\sqrt{\left(1-
\frac{r_+}{R}\right)\left(1-\frac{r_-}{R}\right)}$.
A comment is in order here. In our study of a
shell in a nonextremal
Reissner-Nordstr\"om state, we have worked with two coordinate patches
to describe the various regions of the Reissner-Nordstr\"om spacetime
exterior to the shell as was done in~\citep{Comer_Katz_1994}, see also
Appendix~\ref{Appendix_sec:Kruskal-Szekeres_coordinates_RN}. It is
possible to find a coordinate system that covers the entire
Reissner-Nordstr\"om spacetime without coordinate singularities,
see~\citep{Graves_Brill_1960} and also \cite{Carter_1966_2} or
\cite{Hawking_Ellis_book,MTW_Book,felicebook},
but we have not followed this path, as it
is not the best one for our aims, and thus we
have separated the study of a
shell located in a region described by one coordinate patch and the
other.
\subsubsection{Shell's energy density and pressure
\label{subSubsec:shellsenergydensityandpressurecauchy}}
We are now in position to find the properties of a perfect fluid thin
shell in a nonextremal Reissner-Nordstr\"om state,
located
inside the
Cauchy horizon radius or Cauchy radius, depending on the case.
The shell's stress-energy tensor is
given in Eq.~(\ref{eq:perfect}), an expression containing the energy
per unit area $\sigma$, the tangential pressure of the fluid $p$, the
four-velocity $u_a$, and the induced metric $h_{ab}$.
From our choice
of coordinates on $\mathcal{S}$ we have that $\left\{ y^{a}\right\}
=\left(\tau,\theta,\varphi\right)$,
the four-velocity $u_a$ is given in
Eq.~(\ref{eq:4velocity_value_rminus}),
and the metric $h_{ab}$ is given
in Eq.~(\ref{eq:induced_metric_RNcauchy}). Putting everything together
we find
$S_{\tau}^{\tau}=-\sigma$,
$S_{\theta}^{\theta}=S_{\varphi}^{\varphi}=p$. Comparing these latter
equations with the second junction condition,
Eq.~(\ref{eq:2nd_junct_cond}), taking into account the components of
the induced metric, given through Eq.~(\ref{eq:induced_metric_RNcauchy}),
and the fact that
$\left[K_{\theta}^{\theta}\right]=\left[K_{\varphi}^{\varphi}\right]$,
we find $\sigma=-\frac{1}{4\pi}\left[K_{\theta}^{\theta}\right]$ and
$p=\frac{1}{8\pi}\left[K_{\tau}^{\tau}\right]-\frac{\sigma}{2}$. With
the components of the extrinsic curvature found in
Eqs.~(\ref{eq:Extrinsic_curvature_Mink})
and~(\ref{eq:Nonextremal_Extrinsic_RN_inside_cauchy_horizon}) we
obtain the following properties of a perfect fluid thin shell
located inside of the Cauchy radius,
\begin{equation}
8\pi\sigma=\frac{2}{R}\left(1-\text{sign}\left(X\right)k\right)\,,
\label{eq:sigma_value_rminus}
\end{equation}
\begin{equation}
8\pi p=\frac{\text{sign}\left(X\right)}{2Rk}\left[\left(1-
\text{sign}\left(X\right)k\right)^{2}-\frac{r_+r_-}{R^{2}}
\right]\,,\label{eq:pressure_value_rminus}
\end{equation}
where $k$ here is the redshift function given
in Eq.~(\ref{eq:redshift})
evaluated at $R$, i.e., $k(R,r_+,r_-)
=\sqrt{\left(1-\frac{r_+}{R}\right)
\left(1-\frac{r_-}{R}\right)}$.
As the surface electric current density $s_a$ on the
thin shell is defined as $s_a=\sigma_{e}u_a$, where
$\sigma_{e}$ represents the
electric charge density and $u_a$ is the velocity
of the shell,
from
Eqs.~(\ref{eq:junct_cond_Faradayb})-(\ref{eq:junct_cond_Faraday2})
and~(\ref{eq:RN_FaradayMaxwell_value})
it follows that
\begin{equation}
8\pi\sigma_{e}=2\frac{\sqrt{r_+r_-}}{R^{2}}\,.
\label{eq:chargedensitycuachy}
\end{equation}
Now, the expressions found for the energy density and pressure of a
shell located inside the Cauchy radius,
Eqs.~(\ref{eq:sigma_value_rminus}) and (\ref{eq:pressure_value_rminus}),
are the same as Eqs.~(\ref{eq:sigma_value_rplus}) and
(\ref{eq:pressure_value_rplus}) found for the energy density and
pressure of a shell located outside the gravitational radius.
However, the behavior of the properties of the shell will be different
since the radial coordinate of the shell, $R$, in this case ranges
between zero and $r_-$. As before, we have to distinguish in
Eqs.~(\ref{eq:sigma_value_rminus})
and~(\ref{eq:pressure_value_rminus}) the two possible orientations
provided by $\text{sign}\left(X\right)$. Let us start with
$\text{sign}\left(X\right)=+1$. It is useful to give the expressions
for the shell's energy density and pressure, $\sigma$ and $p$ in terms
of $M$ and $Q$. Using Eq.~(\ref{eq:KS_horizons_radius0}) in
Eqs.~(\ref{eq:sigma_value_rplus}) and~(\ref{eq:pressure_value_rplus})
with $\text{sign} \left(X\right)=+1$ we have
$8\pi\sigma=\frac{2}{R}\left(1-k\right)$, $8\pi
p=\frac{1}{2Rk}\left[\left(1-k\right)^{2}-
\frac{Q^{2}}{R^{2}}\right]$, and also from
Eq.~(\ref{eq:chargedensity1}) we have $8\pi\sigma_{e}=\frac{2Q}{R^{2}}$.
Let us now take $\text{sign}\left(X\right)=-1$. It is useful
to give the expressions for the shell's energy density and pressure,
$\sigma$ and $p$ in terms of $M$ and $Q$. Using
Eq.~(\ref{eq:KS_horizons_radius0}) in
Eqs.~(\ref{eq:sigma_value_rplus}) and~(\ref{eq:pressure_value_rplus})
with $\text{sign} \left(X\right)=-1$ we have
$8\pi\sigma=\frac{2}{R}\left(1+k\right)$, $8\pi
p=-\frac{1}{2Rk}\left[\left(1+k\right)^{2}-\frac{Q^{2}}{R^{2}}
\right]$,
with
$k=\sqrt{1-\frac{2M}{R}+\frac{Q^{2}}{R^{2}}}$,
and also from Eq.~(\ref{eq:chargedensity1}) we have
$8\pi\sigma_{e}=\frac{2Q}{R^{2}}$. These are the expressions used in
the two previous subsections.
Note also that when $r_-=0$, then $\sigma_{e}=0$ and the
electric charge $Q$ is zero, $Q=0$, and since $R<r_-$ the solution is
either vacuum and singular or does not exist; in brief,
there is no shell solution.
\clearpage{}
\section{Extremal electric thin shells outside
the gravitational radius:
Majumdar-Papapetrou star shells and extremal tension shell singularities}
\label{Sec:Extremal-thin-shells-outside}
\subsection{Extremal electric thin shells outside the
gravitational radius: Majumdar-Papapetrou star shells}
\label{Subsec:extremalnormaloutside}
Here we study the case of a fundamental electric thin
shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed, $r_+=r_-=M=Q$,
for which the shell's
location obeys $R>r_+$, and for which the orientation is such that the
normal to the shell points towards spatial infinity. In this case
horizons do not exist and so, following the nomenclature, $r_+$ is
the gravitational radius. Also, since
$r_+$ and $r_-$ have the same value we opt to use consistently
the gravitational radius $r_+$ rather than the Cauchy radius
$r_-$. In general we also opt to use $M$ rather than $Q$.
The normal to the shell
pointing towards spatial infinity means
that the new parameter $\xi$ we introduce for the extremal states
has value $\xi=+1$, see the
end of this section.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma= & \frac{2M}{ R^{2}}\,,
\label{eq:Extremal_sigma_value_outside_xi_positive}\\
8\pi p= & 0\,.\label{eq:Extremal_pressure_value_outside_xi_positive}
\end{align}
Also, the electric charge density $\sigma_{e}$
is given in terms of $M$ and $R$, by
\begin{equation}
8\pi\sigma_{e}=\frac{2M}{ R^{2}}\,.
\label{eq:chargedensity12extremal}
\end{equation}
The behavior of $\sigma$ and $p$, in
Eqs.~(\ref{eq:Extremal_sigma_value_outside_xi_positive})
and~(\ref{eq:Extremal_pressure_value_outside_xi_positive}),
as functions of the radial coordinate
$R$ of the $\frac{Q}{M}=1$ extremal shell
is shown in Figure~\ref{Fig:Properties_extremal_outside_normal}.
\begin{figure}[h]
\subfloat[]{
\includegraphics[scale=0.45]
{Extremal_outside_normal_energy}}
\hspace*{\fill}
\subfloat[]{
\includegraphics[scale=0.45]
{Extremal_outside_normal_pressure}}
\caption{\label{Fig:Properties_extremal_outside_normal}
Physical properties of a Majumdar-Papapetrou star
shell, i.e., an electric
perfect fluid thin shell in an extremal Reissner-Nordstr\"om state, in
the location $R>r_+$, i.e., located outside the gravitational radius,
and with orientation such that the normal points towards spatial
infinity.
The interior is Minkowski and the exterior is extremal
Reissner-Nordstr\"om spacetime. Extremal
means $\frac{Q}{M}=1$. Panel (a) Energy density $\sigma$ of
the shell as a function of the radius $R$ of the shell.
The energy density is adimensionalized through
the mass $M$, $8\pi M\sigma$, and the radius is adimensionalized
through the gravitational radius $r_+$, $\frac{R}{r_+}$. Panel (b)
Pressure $p$ on the shell as a function of the radius $R$ of the
shell.
The radius is adimensionalized through
the gravitational radius $r_+$, $\frac{R}{r_+}$.
The pressure is zero, the
shell is supported by electric repulsion alone, it is
Majumdar-Papapetrou matter.
}
\end{figure}
These shells are characterized by a positive energy density and
vanishing pressure support, and so the matter that composes this kind
of shells is Majumdar-Papapetrou matter, i.e., electric dust, there is
no need for matter pressure since there is an inbuilt equilibrium
between gravitational attraction and electrostatic repulsion. These
are extremal star shells or Majumdar-Papapetrou star shells.
Majumdar-Papapetrou matter shells with a Minkowski interior matched to
an exterior extremal Reissner-Nordstr\"om spacetime, with the implicit
assumption that the outward unit normal to the matching surface points
towards spatial infinity, have been considered in
many works. Notice that when $R\to\infty$, the energy density
$\sigma$ and the charge density $\sigma_{e}$ both tend to zero, i.e.,
the shell disperses away. Notice also that when $R\to r_+$, the
energy density is finite, the pressure remains zero, and the charge
density $\sigma_{e}$ is also finite. Indeed, for $R= r_+$ one has a
quasiblack hole, discussed in detail ahead. When
$Q=0$, and so $M=0$, there is no shell, only Minkowski spacetime. In
relation to the energy conditions of the shell one can work out and
find that the null, the weak, the dominant, and the strong energy
conditions are verified for $R>r_+$, see a detailed presentation
ahead.
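It is perhaps worth displaying explicitly the Majumdar-Papapetrou
property of this matter. From
Eqs.~(\ref{eq:Extremal_sigma_value_outside_xi_positive}),
(\ref{eq:Extremal_pressure_value_outside_xi_positive}), and
(\ref{eq:chargedensity12extremal}) one has
\begin{equation*}
\sigma=\sigma_{e}=\frac{M}{4\pi R^{2}}\,,
\qquad
p=0\,,
\end{equation*}
i.e., the shell is electric dust with energy density equal to charge
density, the condition under which gravitational attraction and electric
repulsion balance exactly for every $R>r_+$.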
The Carter-Penrose diagram can be drawn directly from the building
blocks of an interior Minkowski spacetime and the exterior asymptotic
region of an extremal Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_extremal_RN_outside} the
Carter-Penrose diagram of an extremal Reissner-Nordstr\"om shell
spacetime for a junction surface with normal pointing towards spatial
infinity is shown. It is clearly a star shell, a Majumdar-Papapetrou
star shell in an asymptotically flat spacetime.
\begin{figure}[h]
\includegraphics[height=0.29\paperheight]
{Carter_Penrose_Mink_RN_junction_normal}
\caption{\label{Fig:Penrose_diagram_Mink_extremal_RN_outside}
Carter-Penrose diagram of a Majumdar-Papapetrou star shell, i.e., a
thin shell spacetime in an extremal Reissner-Nordstr\"om state,
located at $R>r_+$, i.e., located outside the gravitational radius,
with orientation such that the normal points towards
spatial infinity. The
interior is Minkowski, the exterior is extremal Reissner-Nordstr\"om.
This star shell is supported by electrical repulsion alone.
}
\end{figure}
The physical interpretation of this case is clear cut, and it is
similar to the corresponding nonextremal shell. This extremal thin
shell solution mimics an extremal star. The energy density and
pressure obey the energy conditions for any radius, indeed the shell
is composed of Majumdar-Papapetrou matter. The causal and global
structure as displayed by the Carter-Penrose diagram are well behaved
and rather elementary. So, this case falls into the category of having
the energy conditions verified and the geometrical setup is physically
reasonable.
\newpage
\subsection{Extremal electric thin shells outside the
event horizon: Extremal tension shell singularities}
\label{Subsec:extremalnormaloutsidenormaltoin}
Here we study the case of a fundamental electric thin shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed, $r_+=r_-=M=Q$,
for which the shell's location obeys $R>r_+$, and for which the
orientation is such that the normal to the shell points towards $r_+$.
In this case horizons do exist and so, following the nomenclature,
$r_+$ is both the gravitational and the event horizon radius. Also,
$r_+$ and $r_-$ have the same value and we opt to use the event
horizon radius $r_+$ rather than the Cauchy horizon radius $r_-$. We
also opt to use $M$ rather than $Q$. The normal to the shell pointing
towards $r_+$ means in the notation we use that we take $\xi=-1$, see
the end of this section for details.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma= & \frac{2}{R}\left(2-\frac{M}{R}\right)\,,
\label{eq:Extremal_sigma_value_outside_xi_negative}\\
8\pi p= & -\frac{2}{R}\,.
\label{eq:Extremal_pressure_value_outside_xi_negative}
\end{align}
The electric charge density $\sigma_{e}$ is given in terms of
$M$ and $R$ by
$
8\pi\sigma_{e}=\frac{2M}{ R^{2}}$,
which is
identical to Eq.~(\ref{eq:chargedensity12extremal}).
The
behavior of $\sigma$ and $p$, in
Eqs.~(\ref{eq:Extremal_sigma_value_outside_xi_negative})
and~(\ref{eq:Extremal_pressure_value_outside_xi_negative}),
as functions
of the radial coordinate $R$ of the $\frac{Q}{M}=1$ extremal shell is
shown in Figure~\ref{Fig:Properties_extremal_outside_alternative}.
\begin{figure}[h]
\subfloat[]{\includegraphics[scale=0.45]
{Extremal_outside_alternative_energy}}
\hspace*{\fill}
\subfloat[]{\includegraphics[scale=0.45]
{Extremal_outside_alternative_pressure}}
\caption{\label{Fig:Properties_extremal_outside_alternative}
Physical properties of an extremal tension shell singularity,
i.e., an electric
perfect fluid thin shell in an
extremal Reissner-Nordstr\"om state,
in the location
$R>r_+$, i.e., located outside the event horizon, with orientation
such that the normal points towards $r_+$. The interior is Minkowski,
the exterior is extremal Reissner-Nordstr\"om spacetime.
Extremal means $\frac{Q}{M}=1$.
Panel (a)
Energy density $\sigma$ of the shell as a function of the radius $R$
of the shell. The energy density is
adimensionalized through the mass $M$, $8\pi M\sigma$, and the radius
is adimensionalized through the gravitational radius $r_+$,
$\frac{R}{r_+}$. Panel (b) Pressure $p$ on the shell as a function of
the radius $R$ of the shell. The pressure is negative, so the shell
is supported by tension. The radius is
adimensionalized through the gravitational radius $r_+$,
$\frac{R}{r_+}$.
}
\end{figure}
The matter fluid that composes such shells is characterized by
positive energy density $\sigma$ and is supported by tension $-p$,
with both falling to zero as $R\to\infty$. In this case, since $p$ is
not zero, the shell is not composed of Majumdar-Papapetrou matter.
Notwithstanding, the exterior spacetime is extremal. There are many
examples of spacetimes for which $M=Q$ globally but whose interior is
not made of Majumdar-Papapetrou matter, as is the case here. However,
this case is of particular interest since the matter properties
provided by
Eqs.~(\ref{eq:Extremal_sigma_value_outside_xi_negative})-(\ref{eq:Extremal_pressure_value_outside_xi_negative})
and the electric charge density
$
8\pi\sigma_{e}=\frac{2M}{ R^{2}}$,
have specific relevant features. Indeed, $\sigma$ has two terms,
namely, an intrinsic geometrical one given by $\frac4R$ and a
gravitational one which is negative given by
$-\frac{2M}{R^2}$. These two terms can be considered independent and
$\sigma$ is the sum of the two.
The first term of $\sigma$, $\frac4R$, is a geometrical term that also
gives rise to a geometrical tension $-\frac2R$ and ensures
that there is indeed a shell, arising from the embedding of the shell
in the interior and exterior spacetimes, since the radial distance grows
up to a maximum at the shell with
radius $R$ and then diminishes to $r_+$ and finally to zero at the
timelike singularity. This geometric term exists independently of
whether there is spacetime mass $M$ or not; indeed, the spacetime mass
energy coming from this geometrical term is zero since $\frac4R+2p=0$.
The second term $-\frac{2M}{R^2}$ is negative and can be explained by
the fact that due to the electric charge density
$
8\pi\sigma_{e}=\frac{2M}{ R^{2}}$
on the shell, there is electric
repulsion, and on
the other hand, since positive gravity is in the direction of $r_+$
and $r=0$, to counterbalance the electric repulsion and the direction
of positive gravity, the shell has to have an antirepulsive negative
energy density, an antigravity term or anti-Majumdar-Papapetrou
energy density term, of value $-\frac{2M}{R^2}$. Note also that
$\sigma+2p+\sigma_e=0$.
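Explicitly, using
Eqs.~(\ref{eq:Extremal_sigma_value_outside_xi_negative}) and
(\ref{eq:Extremal_pressure_value_outside_xi_negative}) together with
the charge density above, this identity reads
\begin{equation*}
8\pi\left(\sigma+2p+\sigma_{e}\right)=
\left(\frac{4}{R}-\frac{2M}{R^{2}}\right)
+2\left(-\frac{2}{R}\right)
+\frac{2M}{R^{2}}=0\,.
\end{equation*}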
When $Q=0$, and so $M=0$, there is still a shell of radius $R$, but
with a Minkowski spacetime on each side of it.
In relation to the energy conditions of the shell one can work out and
find that the null, the weak, and the dominant energy conditions are
verified for $R>r_+$, and the strong energy condition is never
verified, see a detailed presentation ahead.
The Carter-Penrose diagram for this case
can be drawn directly from the building
blocks of an interior Minkowski spacetime and the full extremal
Reissner-Nordstr\"om spacetime.
In
Figure~\ref{Fig:Penrose_diagram_Mink_extremal_RN_outside_alternative}
the Carter-Penrose diagram of a shell spacetime in an extremal
Reissner-Nordstr\"om state, in the location $R>r_+$, with orientation
such that the normal points towards $r_+$, i.e.,
$\xi=-1$, is shown.
It has a horizon,
but the existence of the singularity is more
striking, i.e., it is
an extremal tension shell singularity.
\begin{figure}[h]
\subfloat[
\label{Fig:Penrose_diagram_Mink_extremal_RN_outside_alternative_2}]
{\includegraphics[height=0.29\paperheight]
{Carter_Penrose_Mink_Extremal_RN_junction_I_alternative}}
\hspace*{5cm}
\subfloat[
\label{Fig:Penrose_diagram_Mink_extremal_RN_outside_alternative_1}]
{\includegraphics[height=0.29\paperheight]
{Carter_Penrose_Mink_Extremal_RN_junction_I_alternative_2}}
\caption{\label{Fig:Penrose_diagram_Mink_extremal_RN_outside_alternative}
Carter-Penrose diagrams of an extremal
tension shell singularity, i.e., a thin
shell spacetime in an extremal Reissner-Nordstr\"om state, in the
location $R>r_+$, i.e., located outside the event horizon, with
orientation such that the normal points towards $r_+$. The interior
is Minkowski, the exterior is extremal Reissner-Nordstr\"om
spacetime.
Panel~(a)
The Carter-Penrose diagram contains a shell in region
$\mathrm{I}$ and repeats itself upwards.
Panel~(b)
The Carter-Penrose diagram contains a shell in the regions
$\mathrm{I}$ shown that passes into asymptotically flat regions.
An infinite number of different Carter-Penrose diagrams can be drawn,
since there are an infinite number of combinations to place the shell
and infinity.
}
\end{figure}
There is an infinitude of possible diagrams as the maximal analytical
extension of the resulting spacetime can always contain a thin matter
shell outside the event horizon or only at a discrete number of these
regions. In the diagram (a) the tension shell is outside the
event horizon in region
$\mathrm{I}$. Then, the tension shell repeats itself
in the next portion of the diagram. It is a compact tension shell that
repeats itself. In the diagram (b) of the figure the tension shell is
outside the event horizon in region $\mathrm{I}$. Then, an asymptotic
infinity takes over in the next portion of the diagram. Since what
one puts in the regions $\mathrm{I}$, either a shell or infinity, is
not decided by the solution, an infinite number of different
Carter-Penrose diagrams can be drawn, as there are an infinite number
of combinations to locate a shell or infinity when one goes upward or
downward through the diagram. This is a tension shell, but since it
is extremal there is no Einstein-Rosen bridge, no dynamic wormhole.
The physical interpretation of this case is somewhat simple, with the
case itself being unusual. This extremal thin shell solution, in its
simplest form, turns the space around up to a horizon and then opens
up to another universe with another shell, or to a singularity, and so
on. The energy density and pressure have special features as has
been just pointed out, and obey some of the energy conditions. The
causal and global structures as displayed by the Carter-Penrose
diagram show the unique features of this spacetime. So, this case
falls into the category of having some of
the energy conditions verified and
the geometrical setup is strange.
\newpage
\subsection{Formalism for extremal electric thin
shells outside the gravitational radius
\label{Subsec:Extremal_induced_Minkowski_RN_ouside_event_horizon}}
\subsubsection{Preliminaries}
\label{prel3}
We now make a careful study to derive the properties of the
fundamental electric thin shell used in the two previous subsections,
i.e., the thin shell in an extremal state, i.e., $r_+=r_-$ or
$M=Q$, and indeed
$r_+=r_-=M=Q$,
for which the shell's radius $R$ location obeys $R>r_+$, and
for which the orientation is such that the normal to the shell points
towards infinity or towards $r_+$.
It should be read as an appendix to the previous two
subsections. We use the formalism developed in
Sec.~\ref{Sec:Junction_formalism}.
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen
from $\mathcal{M}_{\rm i}$}
\label{Subsec:Extremal_induced_Mink_outside}
Let us start by analyzing the interior
Minkowski spacetime, $\mathcal{M}_{\rm i}$. Since it is the same as
the analysis done previously we only quote
the important equations.
They are the interior metric Eq.~(\ref{eq:Mink_metric_interior}),
the interior four-velocity of the shell
Eq.~(\ref{eq:Mink_vel_explicit}), the
metric for the shell at radius $R$
Eq.~(\ref{eq:induced_metric_Mink}), the normal to the shell
Eq.~(\ref{eq:normal_Mink}),
and the extrinsic curvature from the inside
Eq.~(\ref{eq:Extrinsic_curvature_Mink}).
\subsubsection{Induced metric, and extrinsic curvature
of $\mathcal{S}$ as seen from $\mathcal{M}_{\rm e}$}
\label{Subsec:Extremal_induced_RN_outside}
To proceed we have now to find the expressions for the induced metric
on $\mathcal{S}$ and the extrinsic curvature components as seen from
the exterior spacetime, $\mathcal{M}_{\rm e}$, in the extremal
state, i.e., $r_+=r_-$ or $M=Q$, see
Figure~\ref{Fig:Penrose_diagram_RN_extremal}, for which the
shell's radius $R$ location obeys $R>r_+$, and for which the
orientation is such that the normal to the shell points towards
increasing $r$, i.e., towards infinity,
or towards decreasing $r$ i.e., towards $r_+$,
as seen from the exterior, as
used in the two previous subsections.
For an extremal shell located at $R>r_+$
one has also to be concerned
about the normal vector to the shell.
In the extremal Reissner-Nordstr\"om spacetime
there is no Einstein-Rosen bridge and so there is no
ambiguity in the definition of the radial coordinate as the value
of the circumferential radius. Thus, there is no need
for the Kruskal-Szekeres $\left( T,X,\theta,\varphi\right)$
coordinates and we can carry out this analysis of the induced
metric and extrinsic curvature of the matching surface
using simply the Schwarzschild
coordinates $\left(t,r,\theta,\varphi\right)$. The
Reissner-Nordstr\"om line element for the exterior extremal solution is
\begin{equation}
ds_{\rm e}^{2}=-\left(1-\frac{r_+}{r}\right)^2dt^{2}
+\frac{dr^{2}}{\left(1-\frac{r_+}{r}\right)^2}
+r^{2}d\Omega^{2}\,.
\label{RNextremal2}
\end{equation}
Assuming the circumferential radius of the matching surface
$\mathcal{S}$ to be described by a function ${R}
\left(\tau\right)$, where $\tau$ is the proper time of an observer
comoving with $\mathcal{S}$ and imposing the shell to be static implies
that $\frac{d{R}}{d\tau}=0$. Then, the 4-velocity of an observer
comoving with $\mathcal{S}$, as seen from $\mathcal{M}_{\rm e}$, is
given by
\begin{equation}
u_{\rm e}^{\alpha}=\left(\frac{1}{k},0,0,0\right)\,,
\label{eq:Extremal_4velocity_exterior_outside_horizon}
\end{equation}
where, in this situation the redshift function $k$
at $\mathcal{S}$ is given in
Eq.~(\ref{eq:redshift}),
evaluated at $R$, i.e., $k(R,r_+=r_-)\equiv k(R,r_+)=1-
\frac{r_+}{R}$.
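In passing, since $g_{tt}=-\left(1-\frac{r_+}{R}\right)^{2}=-k^{2}$ in
Eq.~(\ref{RNextremal2}), this four-velocity is correctly normalized,
\begin{equation*}
u_{\rm e}^{\alpha}u_{{\rm e}\alpha}=g_{tt}\left(\frac{1}{k}\right)^{2}=-1\,.
\end{equation*}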
Equation~(\ref{eq:Extremal_4velocity_exterior_outside_horizon})
can now be used to compute the induced metric on $\mathcal{S}$ by
$\mathcal{M}_{\rm e}$, and we find
$
\left.ds^{2}_{\rm e}\right|_{\mathcal{S}}
=-d\tau^{2}+{R}^{2}d\Omega^{2}$.
Imposing the first junction condition
Eq.~(\ref{eq:1st_junct_cond})
and
Eq.~(\ref{eq:induced_metric_Mink}) we find
that the shell's radial functions at each
side of $\mathcal{S}$ are the same, and so the
matching surface $\mathcal{S}$
is characterized by the line element
\begin{equation}
\left.ds^{2}\right|_{\mathcal{S}}=-d\tau^{2}+R^{2}d\Omega^{2}\,.
\label{eq:Extremal_induced_metric_outside_horizon}
\end{equation}
Using the normalization and orthogonality
relations~(\ref{eq:normal_normalized})
and (\ref{eq:normal_orthogonal}) allows us to find the following
expression for the normal
\begin{equation}
n_{{\rm e}\alpha}=\xi\left(0,\frac{1}{k},0,0\right)\,,
\label{eq:Extremal_normal_exterior_outside_horizon}
\end{equation}
where the parameter $\xi=\left\{ -1,1\right\} $ is defined as $\xi=+1$
if the outside unit normal to the shell points in the direction of
increasing radial coordinate $r$, measured by an observer in the
exterior $\mathcal{M}_{\rm e}$ spacetime, and $\xi=-1$ if the outside
unit normal to the shell points in the direction of decreasing radial
coordinate $r$, again, measured by an observer in the exterior
$\mathcal{M}_{\rm e}$
spacetime. In the extremal case
the parameter $\xi$ takes the place of the $\text{sign}\left(X\right)$
used in the nonextremal case.
Taking into account
Eqs.~(\ref{eq:normal_orthogonal}),
(\ref{eq:Extremal_4velocity_exterior_outside_horizon}),
and~(\ref{eq:Extremal_normal_exterior_outside_horizon}) we find that
the nonzero components of the extrinsic curvature of the matching
surface, see Eq.~(\ref{eq:extrinsic1}), are given by
\begin{equation}
{K_{\rm e}}^{\tau}{}_{\tau}=
\xi\frac{r_+}{R^{2}}
\,,\quad\quad
{K_{\rm e}}^{\theta}{}_{\theta}=
{K_{\rm e}}^{\varphi}{}_{\varphi}=
\xi\frac{k}{R}\,.
\label{eq:Extremal_extrinsic_curvature_RN_outside}
\end{equation}
\subsubsection{Shell's energy density and pressure
\label{subSubsec:shellsenergydensityandpressureextremalouts}}
Having determined the components of the extrinsic curvature of the
matching surface $\mathcal{S}$ as seen from the interior and exterior
spacetimes we are now in position to use the second junction
condition given in Eq.~(\ref{eq:2nd_junct_cond})
to find the expressions for the energy density and pressure support
of the extremal thin shell in these cases.
The shell's stress-energy tensor is given in Eq.~(\ref{eq:perfect}),
so
Eqs.~(\ref{eq:Extrinsic_curvature_Mink})
and (\ref{eq:Extremal_extrinsic_curvature_RN_outside}) yield
\begin{align}
8\pi\sigma= & \frac{2}{R}
\left[1-\xi\left(1-\frac{r_+}{R}\right)\right]\,,
\label{eq:Extremal_sigma_value_outside}\\
8\pi p= & \frac{1}{R}\left(\xi-1\right)\,,
\label{eq:Extremal_pressure_value_outside}
\end{align}
where again
here $k=1-\frac{r_+}{R}$. Note that $p$ in
Eq.~(\ref{eq:Extremal_pressure_value_outside})
is independent of $M$, it only depends on
$R$ and thus on the geometry of the shell as embedded in the ambient
spacetime.
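To display the intermediate step, recall that the Minkowski side
contributes the standard static-sphere values
${K_{\rm i}}^{\tau}{}_{\tau}=0$ and
${K_{\rm i}}^{\theta}{}_{\theta}={K_{\rm i}}^{\varphi}{}_{\varphi}=\frac{1}{R}$
of Eq.~(\ref{eq:Extrinsic_curvature_Mink}), so that the junction
relations $\sigma=-\frac{1}{4\pi}\left[K_{\theta}^{\theta}\right]$ and
$p=\frac{1}{8\pi}\left[K_{\tau}^{\tau}\right]-\frac{\sigma}{2}$ used
before give
\begin{equation*}
8\pi\sigma=-2\left(\xi\frac{k}{R}-\frac{1}{R}\right)=
\frac{2}{R}\left(1-\xi k\right)\,,
\qquad
8\pi p=\xi\frac{r_+}{R^{2}}-\frac{1}{R}\left(1-\xi k\right)=
\frac{1}{R}\left(\xi-1\right)\,,
\end{equation*}
which, with $k=1-\frac{r_+}{R}$, are precisely
Eqs.~(\ref{eq:Extremal_sigma_value_outside}) and
(\ref{eq:Extremal_pressure_value_outside}).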
Moreover, since the surface electric current density $s_a$ on
the thin shell is $s_a=\sigma_{e}u_a$, where $\sigma_{e}$
represents the electric charge density, and since the Minkowski
spacetime has zero electric charge,
from Eqs.~(\ref{eq:junct_cond_Faradayb}), (\ref{eq:junct_cond_Faraday2})
and~(\ref{eq:RN_FaradayMaxwell_value})
it follows that
\begin{equation}
8\pi \sigma_{e}=2\frac{r_+}{ R^{2}}\,.
\label{eq:chargedensity1extremalout}
\end{equation}
The radial coordinate of the shell is in the range $r_+<R<\infty$.
Equations~(\ref{eq:Extremal_sigma_value_outside})
and (\ref{eq:Extremal_pressure_value_outside}),
together with
(\ref{eq:chargedensity1extremalout}),
can now be used to study the properties of the thin matter shells
separating a Minkowski spacetime from an exterior
extremal Reissner-Nordstr\"om
spacetime, located outside the extremal
gravitational radius $r_+$.
In Eqs.~(\ref{eq:Extremal_sigma_value_outside})
and~(\ref{eq:Extremal_pressure_value_outside})
it is clear that it is necessary to pick the sign of $\xi$.
Let us start with $\xi=+1$. It is useful to
give the expressions for the shell's energy density and pressure,
$\sigma$ and $p$, in terms of $M=Q$, where we opt for $M$.
Using Eq.~(\ref{eq:KS_horizons_radius0}), i.e., $r_+=M$, in
Eqs.~(\ref{eq:Extremal_sigma_value_outside})
and~(\ref{eq:Extremal_pressure_value_outside}) with $\xi=+1$ we have
$8\pi\sigma= \frac{2M}{R^{2}}$,
$8\pi p= 0$, and
also from Eq.~(\ref{eq:chargedensity1extremalout}) we have
$8\pi \sigma_{e}=\frac{2M}{ R^{2}}$.
Let us now take $\xi=-1$.
It is useful to
give the expressions for the shell's energy density and pressure,
$\sigma$ and $p$, in terms of $M=Q$, where as usual we opt for $M$.
Using Eqs.~(\ref{eq:Extremal_sigma_value_outside})
and~(\ref{eq:Extremal_pressure_value_outside})
with $\xi=-1$ we have
$8\pi\sigma= \frac{2}{R}\left(2-\frac{M}{R}\right)$,
$8\pi p= -\frac{2}{R}$, and
also from Eq.~(\ref{eq:chargedensity1extremalout}) we have
again $8\pi \sigma_{e}=\frac{2M}{ R^{2}}$.
These are the expressions used in
the two previous subsections.
\clearpage{}
\section{Extremal electric thin shells inside the gravitational
radius:
Extremal tension shell regular and nonregular black holes and
Majumdar-Papapetrou compact naked singularities}
\label{Sec:Extremal-thin-shells-inside12}
\subsection{Extremal electric thin shells inside the event horizon:
Extremal tension shell regular and nonregular black holes}
\label{Subsec:extremalnormalinsideplus}
Here we study the case of a fundamental electric thin shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed, $r_+=r_-=M=Q$,
for which the shell's location obeys $R<r_+$, and so also $R<r_-$, and
for which the orientation is such that the normal to the shell points
towards $r_+$, i.e., we choose the quantity $\xi$ which gives the
direction of the normal as $\xi=+1$, see the end of this section for
details. In this case horizons do exist and so, following the
nomenclature, $r_+$ is both the gravitational and the event horizon
radius, and since $r_+=r_-$ it is also the Cauchy horizon radius and
the Cauchy radius. We opt to use $r_+$ and $M$.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma= & \frac{2}{R}\left(2-\frac{M}{R}\right)\,,
\label{eq:Extremal_sigma_value_inside_xi_positive}\\
8\pi p= & -\frac{2}{R}\,.
\label{eq:Extremal_pressure_value_inside_xi_positive}
\end{align}
Also, the electric charge density $\sigma_{e}$
is given in terms of $M$ and $R$, by
\begin{equation}
8\pi\sigma_{e}=\frac{2M}{R^{2}}\,.
\label{eq:chargedensity12insextremal}
\end{equation}
The behavior of $\sigma$ and $p$, in
Eqs.~(\ref{eq:Extremal_sigma_value_inside_xi_positive})
and~(\ref{eq:Extremal_pressure_value_inside_xi_positive}),
as functions of the radial coordinate
$R$ of the $\frac{Q}{M}=1$ extremal shell
is shown in Figure~\ref{Fig:Properties_extremal_inside_normal}.
\begin{figure}[h]
\subfloat[]
{\includegraphics[scale=0.45]
{Extremal_inside_normal_energy}}
\hspace*{\fill}
\subfloat[]
{\includegraphics[scale=0.45]
{Extremal_inside_normal_pressure}}
\caption{\label{Fig:Properties_extremal_inside_normal}
Physical
properties of an
extremal tension shell regular and nonregular black hole,
i.e., an electric
perfect fluid thin shell in an extremal Reissner-Nordstr\"om state, in
the location $R<r_+=r_-$, i.e., located inside the event horizon, and
with orientation such that the normal points towards $r_+$. The
interior is Minkowski and the exterior is extremal
Reissner-Nordstr\"om spacetime.
Extremal means $\frac{Q}{M}=1$.
Panel (a) Energy density $\sigma$ of
the shell as a function of the radius $R$ of the shell.
The energy density is adimensionalized through
the mass $M$, $8\pi M\sigma$, and the radius is adimensionalized
through the gravitational radius $r_+$, $\frac{R}{r_+}$. Panel (b)
Tension $-p$ on the shell as a function of the radius $R$ of the
shell. The tension is
adimensionalized through the mass $M$, $-8\pi Mp$, and the radius is
adimensionalized through the event horizon radius $r_+$,
$\frac{R}{r_+}$.
}
\end{figure}
These shells are characterized by an energy density that is positive
for $R$ near $r_+$, changes sign from positive to negative at
$R=\frac{M}2$, and diverges to minus infinity as $R\to0$. The exterior
spacetime is extremal although $p$ is not zero and so the shell is not
made of Majumdar-Papapetrou matter, this case thus providing another
instance, of the many found in the literature, for which $M=Q$ globally
but with an interior that is not made of Majumdar-Papapetrou matter.
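In a one-line check, the sign change quoted above follows from
Eq.~(\ref{eq:Extremal_sigma_value_inside_xi_positive}),
\begin{equation*}
8\pi\sigma=\frac{2}{R}\left(2-\frac{M}{R}\right)=0
\quad\Longleftrightarrow\quad
R=\frac{M}{2}\,,
\end{equation*}
so that $\sigma>0$ for $\frac{M}{2}<R<r_+=M$ and $\sigma<0$ for
$0<R<\frac{M}{2}$.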
Equation~(\ref{eq:Extremal_sigma_value_inside_xi_positive})
shows that
$\sigma$ is the sum of a geometrical term given by $\frac4R$ and
a gravitational term which is negative, given by
$-\frac{2M}{R^2}$, with the two terms being independent.
The first term of $\sigma$, $\frac4R$, is a geometrical term that also
gives rise to a geometrical tension given by $-\frac2R$ and ensures
that there is indeed a shell with radius $R$ inside the Cauchy horizon
$r_+=r_-$. This geometric term exists independently of
whether there is spacetime mass $M$ or not; indeed, the spacetime mass
energy coming from this geometrical term is zero since $\frac4R+2p=0$.
The second term $-\frac{2M}{R^2}$ is negative and can be explained by
the fact that inside a Cauchy horizon $r_+=r_-$ gravity is repulsive,
here manifested by $\sigma_e=\frac{2M}{R^2}$, and since the shell is
indeed inside $r_+=r_-$ the shell tends naturally to $r_+$, so to
counterbalance this effect and produce a static shell, the shell has
to have an antirepulsive negative energy density, an anti
Majumdar-Papapetrou energy density, of value $-\frac{2M}{R^2}$. Note
also that $\sigma+2p+\sigma_e=0$.
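As an illustrative aside, not part of the derivation, the balance just
described can be checked symbolically. The following minimal sympy
sketch, written in the $8\pi$ units used above, verifies the sign
change of $\sigma$ at $R=\frac{M}{2}$, the identity
$\sigma+2p+\sigma_e=0$, and the divergence of $\sigma$ as $R\to0$.
\begin{verbatim}
# Minimal sympy sketch (illustrative only): properties of the extremal
# shell inside r_+ with xi = +1, in 8*pi units.
import sympy as sp

R, M = sp.symbols('R M', positive=True)
sigma   = (2/R)*(2 - M/R)      # 8*pi*sigma
p       = -2/R                 # 8*pi*p
sigma_e = 2*M/R**2             # 8*pi*sigma_e

print(sp.solve(sp.Eq(sigma, 0), R))        # [M/2]: sign change radius
print(sp.simplify(sigma + 2*p + sigma_e))  # 0: the balance noted above
print(sp.limit(sigma, R, 0, dir='+'))      # -oo: divergence as R -> 0
\end{verbatim}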
When $Q=0$, and so $M=0$, and since $R<r_+=M$, in the
limiting case one has $R=0$, and we are left with
a singular massless null shell at $R=0$
with $\sigma+2p=0$
surrounded by a massless spacetime, i.e., a Minkowski spacetime.
This Minkowski spacetime
with a well-defined singularity
at its center is a new and interesting solution of
the Einstein equation.
In relation to the energy conditions of the shell one can work out and
find that the null, the weak, the dominant, and the strong energy
conditions are never verified, see a detailed presentation ahead.
The Carter-Penrose diagram can be drawn directly from the building
blocks of an interior Minkowski spacetime and the full extremal
Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_extremal_RN_inside} two possible
Carter-Penrose diagrams of a shell spacetime in an extremal
Reissner-Nordstr\"om state, in the location $R<r_+=r_-$, with
orientation such that the normal points towards $r_+$, are shown.
It is clearly a black hole, more specifically, a tension shell black
hole. In the diagram (a) the tension shell is inside the event
horizon in region $\mathrm{II}$. Then,
in the next portion of the diagram there is another shell, and so
on. So, in this realization it is a regular tension black hole. In
the diagram (b) the tension shell is also inside the event horizon
in region $\mathrm{II}$. Then, the tension shell is replaced by the
timelike singularity at $r=0$. So, in this realization it is a
nonregular tension black hole. Since what one puts in the regions
$\mathrm{II}$, either a shell or a singularity, is not decided by the
solution, an infinite number of different Carter-Penrose diagrams can
be drawn, as there are an infinite number of combinations to locate a
shell or a singularity when one goes upward or downward through the
diagram. So, similarly to the
previous subsection, in the case of shells whose unit normal points
towards the event horizon, the maximal analytical extension of the
spacetime may contain a thin shell inside the event horizon in every
region or only in some regions.
\begin{figure}[h]
\subfloat[\label{Fig:Penrose_diagram_Mink_extremal_RN_inside_normal_1}]
{\includegraphics[height=0.26\paperheight]
{Carter_Penrose_Mink_Extremal_RN_junction_III}}
\hspace*{5cm}
\subfloat[\label{Fig:Penrose_diagram_Mink_extremal_RN_inside_normal_2}]
{\includegraphics[height=0.26\paperheight]
{Carter_Penrose_Mink_Extremal_RN_junction_III_alternative}}
\caption{\label{Fig:Penrose_diagram_Mink_extremal_RN_inside}
Carter-Penrose diagrams of the extremal tension shell black holes,
i.e., a thin shell spacetime in an extremal Reissner-Nordstr\"om
state, in the location $R<r_+=r_-$, i.e., located inside the event
horizon radius, with orientation such that the normal to the shell
points towards $r_+$. The interior is Minkowski, the exterior is
extremal Reissner-Nordstr\"om spacetime. Panel~(a) The Carter-Penrose
diagram contains a shell in the region $\mathrm{II}$. If this pattern
is repeated ad infinitum then it is an extremal tension shell regular
black hole. Panel~(b) The Carter-Penrose diagram contains a shell in
region $\mathrm{II}$ and a singularity in regions $\mathrm{II}$ above
and below. It is a tension shell black hole, now not regular.
An infinite number of different Carter-Penrose diagrams can be drawn,
since there are an infinite number of combinations to place the shell
and the singularity.
}
\end{figure}
The physical interpretation of this case is of some interest. This
extremal thin shell solution provides an extremal regular black hole
solution. The energy density and pressure never obey the energy
conditions, for any shell radius, i.e., for shell radii between zero and the
horizon. The causal and global structure as displayed by the
Carter-Penrose diagram shows clearly that there is no singularity if
one adopts the simplest form of the diagram. As regular extremal black
holes these solutions join the other known regular black hole
solutions which are of interest in quantum gravitational settings that
presumably get rid of the singularities. So, this case falls into the
category of having the energy conditions never verified, and in this
sense is odd, although of interest as regular black hole matter
solutions always are.
\subsection{Extremal electric thin shells inside the
gravitational radius:
Majumdar-Papapetrou compact shell naked singularities}
\label{Subsec:extremalnormalinsideminus}
Here we study the case of a fundamental electric thin shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed,
$r_+=r_-=M=Q$, for which the shell's location obeys $R<r_+=r_-$,
and for which the orientation is such that the normal to
the shell points towards $r=0$, i.e., we choose the quantity $\xi$
which gives the direction of the normal as $\xi=-1$, see the end of
this section for details. In this case horizons do not exist and so,
following the nomenclature, $r_+$ is the gravitational radius,
and since $r_+=r_-$ it is also the Cauchy radius. We opt to use $r_+$
and $M$.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma= & \frac{2M}{R^2}\,,
\label{eq:Extremal_sigma_value_insideother}\\
8\pi p= & 0\,.\label{eq:Extremal_pressure_value_insideother}
\end{align}
Also, the electric charge density $\sigma_{e}$
is given in terms of $M$ and $R$ by
Eq.~(\ref{eq:chargedensity12insextremal}).
The behavior of $\sigma$ and $p$, in
Eqs.~(\ref{eq:Extremal_sigma_value_insideother})
and~(\ref{eq:Extremal_pressure_value_insideother}),
as functions of the radial coordinate
$R$ of the $\frac{Q}{M}=1$ extremal shell
is shown in Figure~\ref{Fig:Properties_extremal_inside_alternative}.
\begin{figure}[h]
\subfloat[]{
\includegraphics[scale=0.45]
{Extremal_inside_alternative_energy}}
\hspace*{\fill}
\subfloat[]{
\includegraphics[scale=0.45]
{Extremal_inside_alternative_pressure}}
\caption{\label{Fig:Properties_extremal_inside_alternative}
Physical properties of a Majumdar-Papapetrou compact shell naked
singularity, i.e., an electric perfect fluid thin shell in an extremal
Reissner-Nordstr\"om state, in the location $R<r_+=r_-$, and with
orientation such that the normal points towards $r=0$. The interior
is Minkowski and the exterior is extremal Reissner-Nordstr\"om
spacetime. Extremal means $\frac{Q}{M}=1$.
Panel (a) Energy density $\sigma$ of the shell as a
function of the radius $R$ of the shell.
The energy density is adimensionalized through the
mass $M$, $8\pi M\sigma$, and the radius is adimensionalized through
the gravitational radius $r_+$, $\frac{R}{r_+}$. Panel (b) Pressure
on the shell as a function of the radius $R$ of the shell. The
pressure is zero, the shell is supported by electric repulsion, it is
Majumdar-Papapetrou matter. The radius is adimensionalized through
the gravitational radius $r_+$, $\frac{R}{r_+}$.
}
\end{figure}
These shells are characterized by a positive energy density for all
shell radii. The pressure is zero, and so the matter is
Majumdar-Papapetrou matter. When $Q=0$, and so $M=0$, there is no
shell spacetime. In relation to the
energy conditions of the shell one can work out and find that the
null, the weak, the dominant, and the strong energy conditions are
verified for $0<R<r_+$, see a detailed presentation ahead.
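As an illustration only, the statements above can be checked
numerically. The following minimal sketch, assuming the usual
thin-shell forms of the pointwise energy conditions (null:
$\sigma+p\geq0$; weak: additionally $\sigma\geq0$; dominant:
$\sigma\geq|p|$; strong: additionally $\sigma+2p\geq0$), confirms that
the $\xi=-1$ shell, with $8\pi\sigma=\frac{2M}{R^2}$, $8\pi p=0$, and
$8\pi\sigma_e=\frac{2M}{R^2}$, is Majumdar-Papapetrou matter and
satisfies all of them for $0<R<r_+$.
\begin{verbatim}
# Minimal sketch (illustrative only): energy conditions for the xi = -1
# extremal shell inside r_+, in 8*pi units, with the usual thin-shell
# forms of the conditions assumed.
def energy_conditions(sigma, p):
    return {"null":     sigma + p >= 0,
            "weak":     sigma >= 0 and sigma + p >= 0,
            "dominant": sigma >= abs(p),
            "strong":   sigma + 2*p >= 0 and sigma + p >= 0}

M = 1.0                          # shell radii below in units of r_+ = M
for R in (0.1, 0.5, 0.9):
    sigma, p, sigma_e = 2*M/R**2, 0.0, 2*M/R**2
    assert sigma == sigma_e      # Majumdar-Papapetrou: sigma = sigma_e
    print(R, energy_conditions(sigma, p))
\end{verbatim}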
The Carter-Penrose diagram can be drawn directly from the building
blocks of an interior Minkowski spacetime and the full extremal
Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_extremal_RN_inside_alternative}
the Carter-Penrose diagram of a shell spacetime in an extremal
Reissner-Nordstr\"om state, in the location $R<r_+=r_-$, with
orientation such that the normal points towards $r=0$, is shown. It
is a Majumdar-Papapetrou, i.e., extremal, compact shell naked
singularity spacetime. It is clearly a compact space: the coordinate
$r$ goes from 0 to $R$ and then decreases back to 0 at the timelike
singularity, such that there is no clear distinction
of what is outside from what is inside.
We use the hash symbol $\#$ to represent the connected
sum of the spacetime manifolds, in order to conserve the conformal
structure in the Carter-Penrose diagram of the total spacetime.
\begin{figure}[h]
\includegraphics[height=0.26\paperheight]
{Carter_Penrose_Mink_RN_junction_region_III}
\caption{
\label{Fig:Penrose_diagram_Mink_extremal_RN_inside_alternative}
Carter-Penrose diagram of the Majumdar-Papapetrou compact shell naked
singularity spacetime, i.e., a shell in an extremal
Reissner-Nordstr\"om
state, in the location $R<r_+=r_-$, i.e., located inside the event
horizon radius, with orientation such that the normal to the shell
points towards $r=0$. The interior is Minkowski, the exterior is
extremal
Reissner-Nordstr\"om.
There is no clear distinction
of what is outside from what is inside.
The hash symbol $\#$ represents the
connected sum of the two spacetimes.
}
\end{figure}
The physical interpretation of this case is noteworthy, and it is
similar to the corresponding nonextremal shell. This
extremal thin shell solution provides a closed spatial static universe
with a singularity at one pole. There are no horizons. The energy
density and pressure obey the energy conditions for all shell radii,
indeed the shell is composed of Majumdar-Papapetrou matter. The
causal and global structure as displayed by the Carter-Penrose diagram
shows the characteristics of this universe, which has two sheets joined at
the shell, with one sheet having a singularity at its pole and with no
horizons. The singularity is avoidable by timelike curves. So, this
case falls into the category of having the energy conditions verified
and the resulting spacetime being peculiar.
\subsection{Formalism for extremal electric thin
shells inside the gravitational radius
\label{Subsec:Extremal_induced_Minkowski_RN_inside_event_horizon}}
\subsubsection{Preliminaries}
\label{prel4}
We now make a careful study to derive the properties of the
fundamental electric thin shell used in the two previous subsections,
i.e., the thin shell in an extremal state, i.e., $r_+=r_-$ or
$M=Q$,
for which the shell's radius $R$ obeys $R<r_+=r_-$, and
for which the orientation is such that the normal to the shell points
towards $r_+$ or towards $r=0$.
It should be read as an appendix to the previous two
subsections. We use the formalism developed in
Sec.~\ref{Sec:Junction_formalism}.
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen from $\mathcal{M}_{\rm i}$}
\label{Subsec:Extremal_induced_Mink_inside2}
Let us start by analyzing the interior
Minkowski spacetime, $\mathcal{M}_{\rm i}$. Since it is the same as
the analysis done previously we only quote
the important equations.
They are the interior metric Eq.~(\ref{eq:Mink_metric_interior}),
the interior four-velocity of the shell
Eq.~(\ref{eq:Mink_vel_explicit}), the
metric for the shell at radius $R$
Eq.~(\ref{eq:induced_metric_Mink}), the normal to the shell
Eq.~(\ref{eq:normal_Mink}),
and the extrinsic curvature from the inside
Eq.~(\ref{eq:Extrinsic_curvature_Mink}).
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen
from $\mathcal{M}_{\rm e}$}
\label{Subsec:Extremal_induced_RN_inside}
To proceed we have now to find the expressions for the induced metric
on $\mathcal{S}$ and the extrinsic curvature components as seen from
the exterior spacetime, $\mathcal{M}_{\rm e}$, in the extremal
state, i.e., $r_+=r_-$ or $M=Q$, see
Figure~\ref{Fig:Penrose_diagram_RN_extremal}, for which the
shell's location obeys $R<r_+=r_-$, and for which the
orientation is such that the normal to the shell points towards
increasing $r$, i.e., towards $r_+$,
or towards decreasing $r$, i.e., towards $r=0$,
as seen from the exterior, as
used in the two previous subsections.
Most of the analysis and results of
Section~\ref{Sec:Extremal-thin-shells-outside} are still verified,
namely,
the extremal
Reissner-Nordstr\"om line element $ds_{\rm e}^{2}$
given in Eq.~(\ref{RNextremal2}),
the four-velocity
$u_{\rm e}^\alpha$
given in Eq.~(\ref{eq:Extremal_4velocity_exterior_outside_horizon}),
the line element on $\mathcal{S}$, $\left.ds^{2}\right|_{\mathcal{S}}$
given in Eq.~(\ref{eq:Extremal_induced_metric_outside_horizon}),
and the normal to the surface $\mathcal{S}$
given in Eq.~(\ref{eq:Extremal_normal_exterior_outside_horizon}).
Then, taking
into account that here we are considering that $R$, the radial
coordinate of $\mathcal{S}$ as seen from $\mathcal{M}_{\rm e}$,
verifies $R<r_+=r_-$, we find the following expressions for the nonzero
components of the extrinsic curvature of the matching hypersurface
\begin{equation}
{K_{\rm e}}^{\tau}{}_{\tau}
=-\xi\frac{r_+}{R^{2}}
\,,\quad\quad
{K_{\rm e}}^{\theta}{}_{\theta}=
{K_{\rm e}}^{\varphi}{}_{\varphi}
=\xi\frac{k}{R}
\,,\label{eq:eq:Extremal_extrinsic_curvature_RN_inside}
\end{equation}
where, as before, the parameter $\xi$ is defined
as $\xi=+1$ if the orientation is such
that the outside unit normal to the shell points in the
direction of increasing radial coordinate $r$, measured by an observer
in the exterior $\mathcal{M}_{\rm e}$ spacetime, and $\xi=-1$ if the
orientation is such
that the outside unit normal to the shell points in the
direction of decreasing
radial coordinate $r$, and the redshift function $k$ at the
shell is given by
$k=|1-\frac{r_+}{R}|$, i.e., since $R<r_+$
one has $k=\frac{r_+}{R}-1$.
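As a side check, not part of the derivation, the components in
Eq.~(\ref{eq:eq:Extremal_extrinsic_curvature_RN_inside}) can be
recovered symbolically. The sketch below assumes the standard
static-hypersurface expressions
${K_{\rm e}}^{\tau}{}_{\tau}=\xi\frac{f'(R)}{2\sqrt{f(R)}}$ and
${K_{\rm e}}^{\theta}{}_{\theta}=\xi\frac{\sqrt{f(R)}}{R}$ for the
surface $r=R$ in a metric with $f(r)=\left(1-\frac{r_+}{r}\right)^2$,
together with $\sqrt{f(R)}=k=\frac{r_+}{R}-1$ for $R<r_+$.
\begin{verbatim}
# Minimal sympy sketch (illustrative only): recovers
# K^tau_tau = -xi r_+/R^2 and k = |1 - r_+/R| when R < r_+.
import sympy as sp

r, R, rp = sp.symbols('r R rp', positive=True)   # rp stands for r_+
xi = sp.Symbol('xi')
f  = (1 - rp/r)**2            # extremal Reissner-Nordstrom metric function
fp = sp.diff(f, r)

k   = rp/R - 1                # sqrt(f(R)) when R < r_+
Ktt = xi*fp.subs(r, R)/(2*k)  # assumed static-hypersurface expression

print(sp.simplify(k**2 - f.subs(r, R)))   # 0: k = |1 - r_+/R| for R < r_+
print(sp.simplify(Ktt + xi*rp/R**2))      # 0: K^tau_tau = -xi r_+/R^2
# and K^theta_theta = xi*sqrt(f(R))/R = xi*k/R, as in the text
\end{verbatim}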
\subsubsection{Shell's energy density and pressure}
\label{subSubsec:shellsenergydensityandpressureextremalins}
Having determined the components of the extrinsic curvature of the
matching surface $\mathcal{S}$ as seen from the interior and exterior
spacetimes we are now in position to use the second junction
condition
given in Eq.~(\ref{eq:2nd_junct_cond}) to find the expressions for the
energy density and pressure support of the thin shell.
The shell's stress-energy tensor is given in Eq.~(\ref{eq:perfect}),
and
Eqs.~(\ref{eq:Extrinsic_curvature_Mink})
and~(\ref{eq:eq:Extremal_extrinsic_curvature_RN_inside})
then yield
\begin{align}
8\pi\sigma= & \frac{2}{R}\left[1+\xi
\left(1-\frac{r_+}{R}\right)\right]\,,
\label{eq:Extremal_sigma_value_inside}\\
8\pi p= & -\frac{1}{R}\left(1+\xi\right)\,,
\label{eq:Extremal_pressure_value_inside}
\end{align}
where we used
$k=\frac{r_+}{R}-1=\frac{M}{R}-1$. Note that $p$ in
Eq.~(\ref{eq:Extremal_pressure_value_inside})
is independent of $M$; it only depends on
$R$ and thus on the geometry of the shell as embedded in the ambient
spacetime.
Moreover, defining the surface electric current density
$s_a$ on the
thin shell as $s_a=\sigma_{e}u_a$, where $\sigma_{e}$
represents the
electric charge density, and since the Minkowski spacetime
has zero electric
charge,
from Eqs.~(\ref{eq:junct_cond_Faradayb})-(\ref{eq:junct_cond_Faraday2})
and~(\ref{eq:RN_FaradayMaxwell_value})
it follows that
\begin{equation}
8\pi \sigma_{e}=2\frac{r_+}{R^{2}}\,.
\label{eq:chargedensity1extremalin}
\end{equation}
The radial coordinate of the shell is in the range $0<R<r_+$.
Equations~(\ref{eq:Extremal_sigma_value_inside})
and (\ref{eq:Extremal_pressure_value_inside}),
together with
(\ref{eq:chargedensity1extremalin}),
can now be used to study the properties of the thin matter shells
separating a Minkowski spacetime from an exterior
extremal Reissner-Nordstr\"om
spacetime, located inside the event horizon $r_+$.
In Eqs.~(\ref{eq:Extremal_sigma_value_inside})
and (\ref{eq:Extremal_pressure_value_inside})
it is clear that it is necessary to pick the sign of $\xi$.
Let us start with $\xi=+1$. It is useful to
give the expressions for the shell's energy density and pressure,
$\sigma$ and $p$ in terms of $M=Q$, we opt for $M$.
Using Eq.~(\ref{eq:KS_horizons_radius0}), i.e., $r_+=M$, in
Eqs.~(\ref{eq:Extremal_sigma_value_inside})
and (\ref{eq:Extremal_pressure_value_inside})
with $\xi=+1$ we have
$8\pi\sigma= \frac{2}{R}\left(2-\frac{M}{R}\right)$,
$8\pi p= -\frac{2}{ R}$, and
also from Eq.~(\ref{eq:chargedensity1extremalin}) we have
$8\pi\sigma_{e}=\frac{2M}{R^{2}}$.
Let us now take $\xi=-1$.
It is useful to
give the expressions for the shell's energy density and pressure,
$\sigma$ and $p$ in terms of $M=Q$, we opt for $M$.
Using Eqs.~(\ref{eq:Extremal_sigma_value_inside})
and (\ref{eq:Extremal_pressure_value_inside})
with $\xi=-1$ we have
$8\pi\sigma= \frac{2M}{R^2}$,
$8\pi p= 0$, and
also from Eq.~(\ref{eq:chargedensity1extremalin}) we have again
$8\pi\sigma_{e}=\frac{2M}{R^{2}}$. These are the expressions used in
the two previous subsections.
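For illustration, the two sign choices can also be evaluated together.
The following minimal sympy sketch substitutes $\xi=\pm1$ and $r_+=M$
into Eqs.~(\ref{eq:Extremal_sigma_value_inside}),
(\ref{eq:Extremal_pressure_value_inside}), and
(\ref{eq:chargedensity1extremalin}), reproducing the two sets of
expressions quoted above, in $8\pi$ units.
\begin{verbatim}
# Minimal sympy sketch (illustrative only): the xi = +1 and xi = -1
# shells inside r_+ = M, in 8*pi units.
import sympy as sp

R, M = sp.symbols('R M', positive=True)
for xi in (+1, -1):
    sigma   = sp.simplify((2/R)*(1 + xi*(1 - M/R)))   # 8*pi*sigma
    p       = sp.simplify(-(1 + xi)/R)                # 8*pi*p
    sigma_e = 2*M/R**2                                # 8*pi*sigma_e
    print(xi, sigma, p, sigma_e)
# xi = +1: sigma = (2/R)(2 - M/R), p = -2/R   (tension shell)
# xi = -1: sigma = 2M/R^2,         p = 0      (Majumdar-Papapetrou)
\end{verbatim}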
\newpage
\section{Extremal electric thin shells at the gravitational radius:
Majumdar-Papapetrou shell quasiblack holes,
extremal null shell quasinonblack holes, extremal null shell
singularities,
and Majumdar-Papapetrou null shell singularities
\label{Sec:Horizon_thin_shells}}
\subsection{Extremal electric thin shells at the event horizon:
Majumdar-Papapetrou shell quasiblack holes
and extremal null shell quasinonblack holes }
\label{Subsec:extremalqbhdnormalinsideplus}
\subsubsection{Majumdar-Papapetrou shell quasiblack holes}
\label{majumdarpapapetroushellquasiblackholes}
Here we study the case of a fundamental electric thin shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed, $r_+=r_-=M=Q$,
for which the shell's location obeys $R=r_+$, and for which the
orientation is such that the normal to the shell points towards
spatial infinity. Moreover, there is an additional characterization
for shells at the horizon. This case comes from the limit of $R\to
r_+$ from above and so is the limiting case of the case studied in
Sec.~\ref{Subsec:extremalnormaloutside}. In this case a horizon is
barely formed, namely, we have a
quasihorizon, and so, following the nomenclature, $r_+$ is both the
gravitational radius and the quasihorizon radius. This is an extremal
quasiblack hole~\citep{lemoszasla2020}. Also $r_+$ and $r_-$ have the
same value. In general we also opt to use $M$ rather than $Q$. The
normal to the shell pointing towards spatial infinity means in the
notation for the extremal states that the new parameter $\xi$ has
value $\xi=+1$, see the end of this section for details.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma= & \frac{2}{M}\,,
\label{eq:qbh_sigma}\\
8\pi p= & 0\,.\label{eq:qbhpressure}
\end{align}
Also, the electric charge density $\sigma_{e}$
is given in terms of $M$
by
\begin{equation}
8\pi\sigma_{e}= \frac{2}{M}\,.
\label{qbhchargedensity}
\end{equation}
Since it is one point in a plot of $\sigma$ or $p$ as functions of
$\frac{R}{M}$, there is no need to draw a figure. The shell is
characterized by a positive energy density. The pressure is zero, and
so the matter is Majumdar-Papapetrou matter, i.e., $\sigma_e=\sigma$,
and therefore is fully supported by electric repulsion. This is an
interesting system to consider, this case when the shell's radius is
taken to the event horizon radius. It is a quasiblack hole
configuration. The Majumdar-Papapetrou shell
quasiblack hole is regular in that all
curvature scalars are finite everywhere. When $Q=0$, so
$M=0$ and $r_+=0$, the shell is at $R=0$,
and the spacetime is singular, being
Minkowski in the exterior.
In relation to the energy conditions of the shell one can
work out and find that the null, the weak, the dominant, and the
strong energy conditions are always verified, see a detailed
presentation ahead.
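As an illustration only, with $8\pi\sigma=\frac{2}{M}$, $8\pi p=0$, and
$8\pi\sigma_e=\frac{2}{M}$, the statements above reduce to elementary
inequalities; the minimal sketch below, assuming the usual thin-shell
forms of the conditions, spells them out.
\begin{verbatim}
# Minimal sketch (illustrative only): the Majumdar-Papapetrou shell
# quasiblack hole, in 8*pi units and with M = 1.
M = 1.0
sigma, p, sigma_e = 2/M, 0.0, 2/M
assert sigma == sigma_e               # pure electric support: sigma = sigma_e
assert sigma >= 0 and sigma + p >= 0  # weak and null conditions
assert sigma >= abs(p)                # dominant condition
assert sigma + 2*p >= 0               # strong condition
print("all energy conditions hold for the quasiblack hole shell")
\end{verbatim}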
The Carter-Penrose diagram can be drawn with some care
from the building
blocks of an interior Minkowski spacetime and the
exterior asymptotic region of an
extremal Reissner-Nordstr\"om spacetime, see
\cite{qbh4lemoszaslacp} and for more details see
\cite{lemoszasla2020}.
In Figure~\ref{Fig:Penrose_diagram_Mink_horizonshell_RN1}
the Carter-Penrose
diagram
of a Majumdar-Papapetrou shell quasiblack hole, i.e.,
for $R=r_+$ and a junction surface
with orientation such that the outside normal points towards
spatial infinity is shown. We use the hash symbol $\#$
to represent the connected sum
of the spacetime manifolds, in order to conserve the conformal
structure in the Carter-Penrose diagram of the total spacetime.
We see that when the shell is at $R=r_+$, i.e., the shell
is at a null
surface, the two regions contain incomplete
geodesics with ending points at the matching surface, so that,
observers at each spacetime are disconnected and the manifold is
composed of two separate regions.
\begin{figure}[h]
\includegraphics[height=0.26\paperheight]
{Carter_Penrose_Mink_RN_QuasiBlackhole_junction_outside_normal}
\caption{\label{Fig:Penrose_diagram_Mink_horizonshell_RN1}
Carter-Penrose diagram of a Majumdar-Papapetrou
shell quasiblack hole, i.e., a
thin shell spacetime in an extremal Reissner-Nordstr\"om state, with
the shell located at $R=r_+$, i.e., located at the gravitational
radius or quasihorizon, with orientation such that the normal points
towards infinity, and such that $R\to r_+$ from $R> r_+$. The
interior is Minkowski, the exterior is extremal Reissner-Nordstr\"om.
This quasiblack hole shell is supported by electrical
repulsion alone.
}
\end{figure}
\newpage
Some remarks on quasiblack holes should be made. In this section we
treated an extremal quasiblack hole,
namely, a Majumdar-Papapetrou
shell quasiblack hole. Since it is Majumdar-Papapetrou the
pressure on the shell is zero, $p=0$, and so the extremal quasiblack
hole is regular in this sense. On the other hand, in
Section~\ref{Subsec:nonextremalnormaloutside} on nonextremal shells
that are located outside $r_+$, $R>r_+$, with orientation such that
the normal points towards spatial infinity, one has that
Eqs.~(\ref{eq:sigma_value_rplus_MQ_sign_plus}),
(\ref{eq:pressure_value_rplus_MQ_sign_plus}), and
(\ref{eq:chargedensity12}), in the limit that the shell is located
at the gravitational radius, $R=r_+$, yield that the surface density
$\sigma$ is finite, the pressure support $p$ of the thin matter shell
diverges to infinity, and the electric charge density $\sigma_e$ is
finite. This case defines a nonextremal quasiblack hole. Since the
pressure diverges the spacetime of nonextremal shells at $R=r_+$
presents some type of singularity. This singularity is mild, however,
with the entropy and mass formulas having been derived in this limiting case,
see~\citep{lemoszasla2020}. The Carter-Penrose diagram of a
nonextremal quasiblack hole is similar to the Carter-Penrose diagram
for a Majumdar-Papapetrou one, i.e., the one showed in
Figure~\ref{Fig:Penrose_diagram_Mink_horizonshell_RN1}.
Since nonextremal quasiblack holes are somewhat singular and
extremal ones are not, we have treated these
within the extremal state and mentioned the nonextremal here.
The physical interpretation of this case is known and it is
remarkable. The extremal thin shell solution with its radius at the
horizon radius is inherited from the extremal thin shell star, and
provides a typical extremal quasiblack hole. A quasiblack hole is an
object on the verge of becoming a black hole, but cannot turn into
one.
The energy density and pressure show that the matter is
Majumdar-Papapetrou and obeys the energy conditions. The causal and
global structure as displayed by the Carter-Penrose diagram shows the
quasiblack hole characteristics. These quasiblack holes have no
curvature singularities, although at the quasihorizon there is some
form of singular degeneracy that disconnects the interior from the
exterior. It can form in a limiting process of quasistatic collapse.
Quasiblack holes are of great interest because they reveal new black
hole properties or black hole properties in a new perspective. So, this
case falls into the category of having some of
the energy conditions verified
and the geometrical setup is interesting and peculiar.
\subsubsection{Extremal null shell quasinonblack holes}
\label{extremalnullshellblackholes}
Here we study the case of a fundamental electric thin shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed, $r_+=r_-=M=Q$,
for which the shell's location obeys $R=r_+$, and for which the
orientation is such that the normal to the shell points towards
spatial infinity. Moreover, there is an additional characterization
for shells at the horizon. This case comes from the limit of $R\to
r_+$ from below and so is the limiting case of the case studied in
Sec.~\ref{Subsec:extremalnormalinsideplus}.
In this case $r_+$ is timelike on
one side and lightlike on the other side.
Thus, a horizon, or rather a quasinonhorizon, does exist and
so, following the nomenclature, $r_+$ is both the gravitational radius
and the quasinonhorizon radius.
Also $r_+$ and $r_-$ have the same value.
In general, we also opt here
to use $M$ rather than $Q$. The normal to the
shell pointing towards spatial infinity means in the notation for the
extremal states that the new parameter $\xi$ has value $\xi=+1$, see
the end of this section for details.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma&= \;\; \frac{2}{M}\,,
\label{eq:qbh_sigma2new2}\\ 8\pi p&= -\frac{2}{M}\,.
\label{eq:qbhpressure2new2}
\end{align}
Also, the electric
charge density $\sigma_{e}$ is given in terms of $M$ by
Eq.~(\ref{qbhchargedensity}). Since it is one point in a plot of
$\sigma$ or $p$ as functions of $\frac{R}{M}$, there is no need to
draw a figure. The shell is characterized by a positive energy
density. The pressure is negative, so it is a tension. The equation
of state is $\sigma+2p+\sigma_e=0$, inherited from
the extremal $R<r_+$ shell.
When $Q=0$, and so $M=0$, there is a singular null shell at $R=0$,
and a Minkowski spacetime in the exterior.
In relation to the energy conditions of the
shell one can work out and find that the null, the weak and the
dominant energy conditions are always verified, and the strong energy
condition is always violated, see a detailed presentation ahead.
The Carter-Penrose diagram can be drawn with some care from the
building blocks of an interior Minkowski spacetime and the exterior
asymptotic region of an extremal Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_horizonshell_RN2}, the
Carter-Penrose diagram of an extremal null shell quasinonblack hole,
i.e., for
$R=r_+$ and a junction surface with orientation such that the outside
normal points towards spatial infinity, is shown.
We use the hash symbol $\#$ to represent the connected sum of the
spacetime manifolds, in order to conserve the conformal structure in
the Carter-Penrose diagram of the total spacetime. This setup is very
different from the quasiblack hole limit of the last section leading
to a new Carter-Penrose diagram. Nonetheless, we see that as in the
previous case, when the shell is at $R=r_+$, i.e., the shell,
for one of the regions, is at a
null surface, the two regions contain incomplete geodesics with ending
points at the matching surface, so that, observers at each spacetime
are disconnected and the manifold is composed of two separate regions.
\begin{figure}[h]
\includegraphics[height=0.26\paperheight]
{Carter_Penrose_Mink_RN_QuasiBlackhole_junction_inside_alternative}
\caption{\label{Fig:Penrose_diagram_Mink_horizonshell_RN2}
Carter-Penrose diagram of an extremal null shell quasinonblack hole,
i.e., a
thin shell spacetime in an extremal Reissner-Nordstr\"om state, with
the shell located at $R=r_+$, i.e., located at the gravitational
radius or horizon, with orientation such that the normal points
towards $r=0$, and such that
$R\to r_+$ from $R< r_+$. The interior
is Minkowski, the exterior is extremal Reissner-Nordstr\"om.
}
\end{figure}
The physical interpretation of this case is also remarkable. The
extremal thin shell solution with its radius at the horizon radius is
inherited from the extremal regular black hole, and provides an
example of an extremal quasinonblack hole. It is an object that is on
the verge of becoming a star solution, but cannot turn into one. The
energy density and pressure show that the matter obeys some of the
energy conditions. These quasinonblack holes have no curvature
singularities, although at the quasinonhorizon there is some form of
singular degeneracy that disconnects the interior from the exterior.
The causal and global structure as displayed by the Carter-Penrose
diagram shows the characteristics pertaining to a quasinonblack hole.
These quasinonblack hole solutions are new; they have shown up here
for the first time. So, this case falls into the category of having
some of the energy conditions verified and the geometrical setup is new,
very interesting, and peculiar.
\subsection{Extremal electric thin shells at the gravitational radius:
Extremal null shell singularities, and Majumdar-Papapetrou null shell
singularities}
\label{Subsec:extremalqbgnormalinsideminus}
\subsubsection{Extremal null shell singularities}
\label{extremalnullshellsingularities}
Here we study the case of a fundamental electric thin shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed, $r_+=r_-=M=Q$,
for which the shell's location obeys $R=r_+$, and for which the
orientation is such that the normal to the shell points towards
the singularity at $r=0$. Moreover,
as we have seen above, there is an additional characterization
for shells at the horizon, this case comes from the limit of $R\to
r_+$ from above and so is the limiting case of the case studied in
Sec.~\ref{Subsec:extremalnormaloutsidenormaltoin}.
In this case the shell
is at the horizon, thus in a sense
a quasihorizon does exist, and
so, following the nomenclature, $r_+$ is both the gravitational radius
and the quasihorizon radius.
Also $r_+$ and $r_-$ have the same value.
In general we also opt to use $M$ rather than $Q$.
This is an extremal null shell singularity.
The normal to the
shell pointing towards the singularity at $r=0$
means in the notation for the
extremal states that the new parameter $\xi$ has value $\xi=-1$, see
the end of this section for details.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma&= \;\; \frac{2}{M}\,,
\label{eq:qbh_sigma2new}\\ 8\pi p&=
-\frac{2}{M}\,,
\label{eq:qbhpressure2new}
\end{align}
so the matter is not Majumdar-Papapetrou.
Also, the electric
charge density $\sigma_{e}$ is given in terms of $M$ by
Eq.~(\ref{qbhchargedensity}). The equation
of state is $\sigma+2p+\sigma_e=0$, inherited from
the extremal $R<r_+$ shell.
In relation to the energy conditions of the shell one can work out and
find that the null, the weak and the dominant energy conditions are
always verified whereas, the strong energy condition is never
verified, see a detailed presentation ahead.
The Carter-Penrose diagram can be drawn with some care
from the building
blocks of an interior Minkowski spacetime and the
exterior asymptotic region of an
extremal Reissner-Nordstr\"om spacetime.
In Figure~\ref{Fig:Penrose_diagram_Mink_horizonnullshell_RN2}
the Carter-Penrose
diagram of an extremal null shell singularity, i.e.,
for $R=r_+$ from above and a junction surface
with orientation such that the outside normal points towards the
singularity at $r=0$ is shown. We see that when the shell is at
$R=r_+$, that is, the shell is at a null
surface, the two regions contain incomplete
geodesics with ending points at the matching surface, so that,
observers at each spacetime are disconnected and the manifold is
composed of two separate regions.
\begin{figure}[h]
\includegraphics[height=0.26\paperheight]
{Carter_Penrose_Mink_RN_QuasiBlackhole_junction_outside_alternative}
\caption{\label{Fig:Penrose_diagram_Mink_horizonnullshell_RN2}
Carter-Penrose diagram of an extremal null shell singularity, i.e., a
thin shell spacetime in an extremal Reissner-Nordstr\"om state, with
the shell located at $R=r_+$, i.e., located at the gravitational
radius from above, with orientation such that the normal points
towards the singularity at $r=0$, and such that
$R\to r_+$ from $R> r_+$. The interior is Minkowski,
the exterior is extremal
Reissner-Nordstr\"om.
}
\end{figure}
The physical interpretation of this case follows from the
corresponding extremal shell outside the gravitational radius. This
extremal thin shell solution, with the shell itself at the horizon, or
more properly, at the quasinonhorizon, turns the space around at the
quasinonhorizon and then ends in a singularity. The energy density and
pressure obey some of the energy conditions. The causal and global
structures as displayed by the Carter-Penrose diagram are interesting
and the two parts up to the shell and from the shell to the
singularity are disjoint, with the quasinonhorizon presenting some form
of degeneracy, although there are no curvature singularities there.
So, this case falls into the category of having some of the energy
conditions verified and the geometrical setup is rather strange.
\newpage
\subsubsection{Majumdar-Papapetrou null shell singularities}
\label{extremalmajumdarpapapetroushellsingularities}
Here we study the case of a fundamental electric thin shell in the
extremal state, i.e., $r_+=r_-$ or $M=Q$, and indeed, $r_+=r_-=M=Q$,
for which the shell's location obeys $R=r_+$, and for which the
orientation is such that the normal to the shell points towards
the singularity at $r=0$. Moreover, there is an additional
characterization
for shells at the horizon, this case comes from the limit of $R\to
r_+$ from below and so is the limiting case of the case studied in
Sec.~\ref{Subsec:extremalnormalinsideminus}.
In this case there is a null shell, which is not a horizon,
and
so, following the nomenclature, $r_+$ is the gravitational radius.
Also $r_+$ and $r_-$ have the same value.
In general we also opt to use $M$ rather than $Q$. The normal to the
shell pointing towards $r=0$ means in the notation for the
extremal states that the new parameter $\xi$ has value $\xi=-1$, see
the end of this section for details.
As functions of $M$ and $R$, the shell's energy density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{align}
8\pi\sigma&= \;\; \frac{2}{M}\,,
\label{eq:qbh_sigma2}\\ 8\pi p&= 0\,.
\label{eq:qbhpressure2}
\end{align}
Also, the electric
charge density $\sigma_{e}$ is given in terms of $M$ by
Eq.~(\ref{qbhchargedensity}). Since it is one point in a plot of
$\sigma$ or $p$ as functions of $\frac{R}{M}$, there is
no need to draw a
figure.
The shell is
characterized by a positive energy density. The pressure is zero, and
so the matter is Majumdar-Papapetrou matter, i.e., $\sigma_e=\sigma$,
and therefore is fully supported by electric repulsion.
When $Q=0$, and so
$M=0$, there is a singularity at $R=0$ and
Minkowski in the exterior.
In relation to the energy conditions of the shell one can
work out and find that the null, the weak, the dominant, and the
strong energy conditions are always verified, see a detailed
presentation ahead.
The Carter-Penrose diagram can be drawn with some care from the
building blocks of an interior Minkowski spacetime and the exterior
asymptotic region of an extremal Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_horizonnullshell2_RN2} the
Carter-Penrose diagram of an extremal Majumdar-Papapetrou shell
singularity, i.e., for $R=r_+$ from below and a junction surface with
orientation such that the outside normal points towards $r=0$ is
shown. The two regions contain complete geodesics so that the
manifold is composed of two connected regions, where in the interior
there is Minkowski spacetime, and on the exterior extremal
Reissner-Nordstr\"om spacetime.
\begin{figure}[h]
\includegraphics[height=0.26\paperheight]
{Carter_Penrose_Mink_RN_QuasiBlackhole_junction_inside_normal}
\caption{\label{Fig:Penrose_diagram_Mink_horizonnullshell2_RN2}
Carter-Penrose diagram of an extremal Majumdar-Papapetrou
null shell
singularity, i.e., a thin shell spacetime in an extremal
Reissner-Nordstr\"om state, with the shell located at $R=r_+$, i.e.,
located at the gravitational radius, with orientation such that the
normal points towards $r=0$, and such that
$R\to r_+$ from $R<r_+$. The interior is Minkowski, the exterior
is extremal Reissner-Nordstr\"om.
}
\end{figure}
\newpage
The physical interpretation of this case follows from the
corresponding extremal shell inside the gravitational radius. This
extremal thin shell solution provides a closed spatial static universe
with a singularity at one pole. There are quasihorizons. The energy
density and pressure obey the energy conditions for all shell radii,
indeed the shell is composed of Majumdar-Papapetrou matter. The causal
and global structure as displayed by the Carter-Penrose diagram shows
the characteristics of this universe, which has two sheets joined at the
shell. For one sheet, i.e., for one side of the universe, the shell is
timelike; for the other sheet, the shell is null, and that sheet possesses a
timelike singularity. So, this case falls into the category of having
the energy conditions verified and the resulting spacetime being
strange.
\subsection{Formalism for extremal electric thin
shells at the gravitational radius
\label{Subsec:Extremal_induced_Minkowski_RN_qbhinsidenormalboth}}
\subsubsection{Preliminaries}
\label{prel5}
We now make a careful study to derive the properties of the
fundamental electric thin shell used in the two previous subsections,
i.e., the thin shell in an extremal state, i.e., $r_+=r_-$ or
$M=Q$, for which the shell's radius $R$ obeys $R=r_+=r_-$, and
for which the orientation is such that the normal to the shell points
towards infinity or towards $r=0$.
It should be read as an appendix to the previous two
subsections. We use the formalism developed in
Sec.~\ref{Sec:Junction_formalism}.
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen from $\mathcal{M}_{\rm i}$}
\label{Subsec:Extremal_induced_Minkowski_RN_qbhinsidenormalsubsub}
Let us start by analyzing the interior
Minkowski spacetime, $\mathcal{M}_{\rm i}$. Since it is the same as
the analysis done previously we only quote
the important equations.
They are the interior metric Eq.~(\ref{eq:Mink_metric_interior}),
the interior four-velocity of the shell
Eq.~(\ref{eq:Mink_vel_explicit}), the
metric for the shell at radius $R$
Eq.~(\ref{eq:induced_metric_Mink}), the normal to the shell
Eq.~(\ref{eq:normal_Mink}),
and the extrinsic curvature from the inside
Eq.~(\ref{eq:Extrinsic_curvature_Mink}).
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen
from $\mathcal{M}_{\rm e}$}
\label{Subsec:Extremal_induced_RN_insideqbhinsidenormalsubsub2}
To proceed we have now to find the expressions for the induced metric
on $\mathcal{S}$ and the extrinsic curvature components as seen from
the exterior spacetime, $\mathcal{M}_{\rm e}$, in the extremal state,
i.e., $r_+=r_-$ or $M=Q$, see
Figure~\ref{Fig:Penrose_diagram_RN_extremal}, for which the radius of
the shell $R$ tends towards $r_+=r_-$, and for which the orientation is
such that the normal to the shell points towards increasing $r$, i.e.,
towards spatial infinity, or towards decreasing $r$, i.e., towards
$r=0$, as seen from the exterior, as we considered in the two previous
subsections.
Moreover, besides the direction of the
normal as seen from the exterior spacetime, we also have to
differentiate between the cases when the shell is located outside or
inside the event horizon, i.e., $R>r_+$, or
$R<r_+$,
see
Sec.~\ref{Sec:Extremal-thin-shells-outside} and
\ref{Sec:Extremal-thin-shells-inside12},
respectively.
The direction of the normal is taken into account by the
parameter $\xi$ as previously used.
In order to account for the two possibilities
$R>r_+$ and
$R<r_+$ when $R$ tends to $r_+$,
we
introduce a new sign parameter $\chi$ defined by
$\chi=\text{sign}\left(R-r_{+}\right)$. Then, we can take directly
from Eqs.~(\ref{eq:Extremal_extrinsic_curvature_RN_outside}) and
(\ref{eq:eq:Extremal_extrinsic_curvature_RN_inside}) the expressions
for the extrinsic curvature
\begin{equation}
{K_{\rm e}}^{\tau}{}_{\tau}
=\chi\,\xi\frac{1}{r_+}
\,,\quad\quad
{K_{\rm e}}^{\theta}{}_{\theta}=
{K_{\rm e}}^{\varphi}{}_{\varphi}
=\xi\frac{k}{R}\,,
\label{eq:eq:qbhExtremal_extrinsic_curvature_RN_inside}
\end{equation}
where again $\xi$ is defined
as $\xi=+1$ if the outside unit normal to the shell points in the
direction of increasing radial coordinate $r$, measured by an observer
in the exterior $\mathcal{M}_{\rm e}$ spacetime, and $\xi=-1$ if the
outside unit normal to the shell points in the direction of decreasing
radial coordinate $r$, and $k=\left|1-\frac{r_+}{R}\right|$.
\subsubsection{Shell's energy density and pressure}
\label{subSubsec:shellsenergydensityandpressureextremalqbhins}
Having determined the components of the extrinsic curvature of the
matching surface $\mathcal{S}$ as seen from the interior and exterior
spacetimes we are now in position to use the second junction
condition~(\ref{eq:2nd_junct_cond}) to find the expressions for the
energy density and pressure support of the thin shell. The relations
$\sigma=-\frac{1}{4\pi}\left[K_{\theta}^{\theta}\right]$ and
$p=\frac{1}{8\pi}\left[K_{\tau}^{\tau}\right]-\frac{\sigma}{2}$ then
yield
\begin{align}
8\pi\sigma &= \frac{2}{r_+}\,,
\label{eq:qbhExtremal_sigma_value_inside}\\
8\pi p &= \frac{\xi}{r_+}\left(\chi-\xi\right)\,.
\label{eq:qbhExtremal_pressure_value_inside}
\end{align}
Moreover, defining the surface electric current density $s_a$ on
the thin shell as $s_a=\sigma_{e}u_a$, where $\sigma_{e}$
represents the electric charge density, and since the Minkowski
spacetime has zero electric charge, from
Eqs.~(\ref{eq:junct_cond_Faradayb})-(\ref{eq:junct_cond_Faraday2})
and~(\ref{eq:RN_FaradayMaxwell_value}) it follows that
\begin{equation}
8\pi\sigma_{e}=\frac{2}{r_+}\,.
\label{eq:chargedensity1extremalqbh}
\end{equation}
The radial coordinate of the shell is $R=r_+$.
In Eq.~(\ref{eq:qbhExtremal_pressure_value_inside}) it is clear that
it is necessary to pick the signs of $\xi$ and $\chi$.
It is useful to give the expressions for
the shell's energy density and pressure, $\sigma$ and $p$ in terms of
$M=Q$, where as usual we opt for $M$. Using
Eq.~(\ref{eq:KS_horizons_radius0}), i.e., $r_+=M$, in
Eqs.~(\ref{eq:qbhExtremal_sigma_value_inside})-(\ref{eq:chargedensity1extremalqbh})
with $\xi=+1$ and $\chi=+1$ we have $8\pi\sigma=\frac{2}{M}$, $8\pi p=0$,
and $8\pi\sigma_{e}=\frac{2}{M}$. Choosing now $\xi=+1$ and
$\chi=-1$, with $r_+=M$, in
Eqs.~(\ref{eq:qbhExtremal_sigma_value_inside})-(\ref{eq:chargedensity1extremalqbh})
we have $8\pi\sigma=\frac{2}{M}$, $8\pi p=-\frac{2}{M}$, and
$8\pi\sigma_{e}=\frac{2}{M}$. Choosing then $\xi=-1$ and $\chi=+1$,
with $r_+=M$, in
Eqs.~(\ref{eq:qbhExtremal_sigma_value_inside})-(\ref{eq:chargedensity1extremalqbh})
we have $8\pi\sigma=\frac{2}{M}$, $8\pi p=-\frac{2}{M}$, and
$8\pi\sigma_{e}=\frac{2}{M}$.
Choosing finally
$\xi=-1$ and $\chi=-1$, with $r_+=M$, in
Eqs.~(\ref{eq:qbhExtremal_sigma_value_inside})-(\ref{eq:chargedensity1extremalqbh})
we have $8\pi\sigma=\frac{2}{M}$, $8\pi p=0$, and
$8\pi\sigma_{e}=\frac{2}{M}$. These are the expressions used in the
two previous subsections to study the properties of the thin matter
shells located at the event horizon $r_+$ separating a Minkowski
spacetime from an exterior extremal Reissner-Nordstr\"om spacetime.
\clearpage{}
\clearpage{}
\section{Overcharged electric thin shells:
Overcharged star shells and compact overcharged
shell naked singularities
\label{Sec:Overcharged_thin_shells}}
\subsection{Overcharged electric thin shells:
Overcharged star like shells}
\label{Subsec:overchargednormalplus}
Here we study the case of a fundamental electric thin
shell in the overcharged state, i.e.,
$r_+$ and $r_-$ are not real, or $M<Q$,
for which the shell's
location is anywhere, i.e., $0<R<\infty$,
and for which the orientation is such that the
normal to the shell points towards spatial infinity. In this case
horizons do not exist and moreover $r_+$
and $r_-$ do not exist, and so there is
neither gravitational radius nor Cauchy radius.
The normal to the shell
pointing towards spatial infinity means in the notation
we have been using that $\xi=+1$, see the
end of this section for
details.
As functions of $M $, $Q$, and $R$, the shell's energy
density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{equation}
8\pi\sigma=\frac{2}{R}\left(1-k\right)\,,
\label{eq:sigma_value_overc}
\end{equation}
\begin{equation}
8\pi p=\frac{1}{2Rk}\left[\left(1-k\right)^{2}-\frac{Q^{2}}{R^{2}}
\right]\,,
\label{eq:pressure_value_overc}
\end{equation}
respectively, with $k=\sqrt{1-\frac{2M}{R}+\frac{Q^{2}}{R^{2}}}$.
Also, the electric charge density $\sigma_{e}$
is given in terms of $M$, $Q$, and $R$, by
\begin{equation}
8\pi \sigma_{e}=\frac{2Q}{R^{2}}\,.
\label{eq:chargedensityoverc}
\end{equation}
The behavior of $\sigma$ and $p$ as functions of the
radial coordinate $R$ of the shell for various values of the
$\frac{Q}{M}$ ratio in this case is shown in
Figure~\ref{Fig:Properties_overcharged_normal}.
From Figure~\ref{Fig:Properties_overcharged_normal} we see that,
depending on the radial coordinate of the shell, the energy density
might take negative values. Indeed, from
Eq.~(\ref{eq:sigma_value_overc}) we find that for $R<\frac{Q^{2}}{2M}$
the energy density $\sigma$ is negative. Also, this
case of thin shell is always supported by negative pressure, i.e.,
tension, see
Eq.~(\ref{eq:pressure_value_overc}). It is a tension shell and can also
be a negative energy density shell. The fact that it is sometimes
supported by negative energy density and by tension
translates the well known fact that the Reissner-Nordstr\"om
singularity at $r=0$ is repulsive. Moreover, we see that both the
energy density and the pressure of the shell diverge to negative
infinity as the shell gets closer to $R=0$. On the other hand, in the
limit of $R\to\infty$ both energy density and the pressure go to zero.
When $Q=0$ there are no shells,
since then $M=0$ as we are not considering negative
$M$.
In relation to the energy conditions of the shell we can say that the
null, the weak, and the dominant energy conditions are verified when
$R\geq R_{{\mathrm I}'}$, and the strong energy condition is verified
when $R\geq\frac{Q^2}{M}$, see a detailed presentation
ahead.
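As a side check, the two statements about the sign of $\sigma$ and the
strong energy condition can be verified symbolically. The minimal
sympy sketch below, in $8\pi$ units, confirms that $\sigma$ vanishes
at $R=\frac{Q^2}{2M}$ and that
$\sigma+2p=\frac{2}{R^2 k}\left(M-\frac{Q^2}{R}\right)$, which is
nonnegative precisely for $R\geq\frac{Q^2}{M}$.
\begin{verbatim}
# Minimal sympy sketch (illustrative only): the xi = +1 overcharged
# shell, in 8*pi units.
import sympy as sp

R, M, Q = sp.symbols('R M Q', positive=True)
k = sp.sqrt(1 - 2*M/R + Q**2/R**2)
sigma = (2/R)*(1 - k)                        # 8*pi*sigma
p = (1/(2*R*k))*((1 - k)**2 - Q**2/R**2)     # 8*pi*p

print(sp.simplify(sigma.subs(R, Q**2/(2*M))))              # 0: sigma vanishes there
print(sp.simplify(sigma + 2*p - 2*(M - Q**2/R)/(R**2*k)))  # 0: sign set by M - Q^2/R
\end{verbatim}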
\begin{figure}[h]
\subfloat[]
{\includegraphics[scale=0.45]{Overcharged_normal_energy}}
\hspace*{\fill}
\subfloat[]
{\includegraphics[scale=0.45]{Overcharged_normal_pressure}}
\caption{
\label{Fig:Properties_overcharged_normal}
Physical properties of an overcharged star shell, i.e., an electric
perfect fluid thin shell in an overcharged Reissner-Nordstr\"om state,
in any location, i.e., $0<R<\infty$,
and with orientation such that the normal points towards spatial infinity.
Panel (a)
Energy density $\sigma$ of the shell as a function of the radius $R$
of the shell for various values of the $\frac{Q}{M}$ ratio. The energy
density is adimensionalized through the mass $M$, $8\pi M\sigma$, and
the radius is adimensionalized through the mass $M$, $\frac{R}{M}$.
Panel (b) Tension $-p$ on the shell as a function of the radius $R$ of
the shell for various values of the $\frac{Q}{M}$ ratio. The tension
is adimensionalized through the mass $M$, $-8\pi Mp$, and the radius
is adimensionalized through the mass $M$, $\frac{R}{M}$.
}
\end{figure}
\newpage
The Carter-Penrose diagram can be drawn directly from the building
blocks of an interior Minkowski spacetime and the exterior asymptotic
infinite region of the overcharged
Reissner-Nordstr\"om spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_overcharged_RN}
the
Carter-Penrose diagram of an overcharged
Reissner-Nordstr\"om star shell
spacetime for a junction surface with normal pointing towards spatial
infinity is shown. It is clearly a star shell, a
star in an asymptotically flat spacetime.
\begin{figure}[h]
\label{Fig:Penrose_diagram_Mink_overcharged_RN_normal}
{\includegraphics[height=0.27\paperheight]
{Carter_Penrose_Mink_RN_junction_normal}}
\caption{\label{Fig:Penrose_diagram_Mink_overcharged_RN}
Carter-Penrose diagram of an overcharged star thin shell, i.e., a
thin shell spacetime in an overcharged
Reissner-Nordstr\"om state,
with a thin shell located at any radius $R$,
with orientation such that the normal points towards spatial infinity.
The interior is Minkowski, the exterior
is overcharged Reissner-Nordstr\"om. The shell is a star
shell supported by tension and for sufficiently small $R$ also by
negative energy density.
}
\end{figure}
The physical interpretation of this case is clear cut, and it is
similar to the corresponding nonextremal and extremal
shells. This overcharged thin
shell solution mimics an overcharged star. The energy density and
pressure obey the energy conditions for certain radii.
The causal and global
structure as displayed by the Carter-Penrose diagram are well behaved
and rather elementary. So, this case falls into the category of having
the energy conditions verified and the geometrical setup is physically
reasonable.
\subsection{Overcharged electric thin shells: Overcharged compact
shell naked singularities}
\label{Sec:Overcharged_thin_shells2}
Here we study the case of a fundamental electric thin
shell in the overcharged state, i.e.,
$r_+$ and $r_-$ are not real, or $M<Q$,
for which the shell's
location is anywhere, i.e., $0<R<\infty$,
and for which the orientation is such that the
normal to the shell points towards $r=0$. In this case
horizons do not exist and moreover $r_+$
and $r_-$ do not exist, and so there is
neither gravitational radius nor Cauchy radius.
The normal to the shell
pointing towards $r=0$ means in the notation
we have been using that $\xi=-1$, see the
end of this section for
details.
As functions of $M $, $Q$, and $R$, the shell's energy
density $\sigma$ and
pressure $p$, are, see the end of this section,
\begin{equation}
8\pi\sigma=\frac{2}{R}\left(1+k\right)\,,
\label{eq:sigma_value_overcnormalto0}
\end{equation}
\begin{equation}
8\pi p=-\frac{1}{2Rk}\left[\left(1+k\right)^{2}-\frac{Q^{2}}{R^{2}}
\right]\,,
\label{eq:pressure_value_overcnormalto0}
\end{equation}
respectively, with $k=\sqrt{1-\frac{2M}{R}+\frac{Q^{2}}{R^{2}}}$.
Also, the electric charge density $\sigma_{e}$
is given in terms of $M$, $Q$, and $R$, by
Eq.~(\ref{eq:chargedensityoverc}).
The behavior of $\sigma$ and $p$ as functions of the
radial coordinate $R$ of the shell for various values of the
$\frac{Q}{M}$ ratio in this case is shown in
Figure~\ref{Fig:Properties_overcharged_alternative}.
From Figure~\ref{Fig:Properties_overcharged_alternative}
we see that
the energy density
is positive for all shells.
Also, this
kind of thin shell is always supported by tension.
It is a tension shell. The fact that it is supported by
tension translates the well known fact that the Reissner-Nordstr\"om
singularity at $r=0$ is repulsive. Moreover, we see that both the
energy density and the tension of the shell diverge to
infinity as the shell gets closer to $R=0$. On the other hand, in the
limit of $R\to\infty$ both go to zero.
An interesting feature
of this kind of shell is the change in the behavior of $-p$
for $R>M$, where the tension needed to support such shells is smaller
as the $\frac{Q}{M}$ ratio increases. Moreover, if the ratio
$\frac{Q}{M}$ is in the range
$1<\frac{Q}{M}<\sqrt{2}$,
we find that the tension support of the matter fluid that composes
this type of shell is an increasing function at $R=M$ and this function
contains a local minimum in the region $0<R<M$. Notwithstanding,
the minimum value is zero only in the extremal case, $\frac{Q}{M}=1$.
When $Q = 0$ there are no shells since we
are not considering negative $M$.
In relation to the energy
conditions of the shell we can say that the null, the weak, and the
dominant energy conditions are verified when $R>0$, and the
strong energy condition is verified when $R\leq\frac{Q^2}{M}$ in this case,
see a detailed presentation ahead.
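As an illustration only, the behavior just described can be probed
numerically. The following minimal sketch, with the illustrative
choice $\frac{Q}{M}=1.2$, locates the local minimum of the tension
$-8\pi Mp$ in the region $0<R<M$ and checks that the tension is
increasing at $R=M$; the minimum value found is nonzero, in line with
it vanishing only in the extremal case.
\begin{verbatim}
# Minimal numerical sketch (illustrative only): tension of the xi = -1
# overcharged shell for the illustrative ratio Q/M = 1.2, with M = 1.
import math

def tension(R, M=1.0, Q=1.2):
    k = math.sqrt(1 - 2*M/R + (Q/R)**2)
    return (1/(2*R*k))*((1 + k)**2 - (Q/R)**2)   # -8*pi*p (units of M = 1)

grid = [0.05 + 0.001*i for i in range(950)]      # 0.05 <= R/M < 1
R_min = min(grid, key=tension)
print("local minimum near R/M =", round(R_min, 2),
      "with -8*pi*M*p =", round(tension(R_min), 3))   # ~0.5, ~0.8 > 0
print("increasing at R = M:",
      tension(0.99) < tension(1.0) < tension(1.01))   # True
\end{verbatim}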
\begin{figure}[h]
\subfloat[]
{\includegraphics[scale=0.45]
{Overcharged_alternative_energy}}
\hspace*{\fill}\subfloat[]
{\includegraphics[scale=0.45]
{Overcharged_alternative_pressure}}
\caption{\label{Fig:Properties_overcharged_alternative}
Physical properties of an overcharged
compact shell naked singularity, i.e., an electric
perfect fluid thin shell in an overcharged Reissner-Nordstr\"om state,
in any location, i.e., $0<R<\infty$,
and with orientation such that the normal points towards $r=0$.
Panel (a) Energy density $\sigma$ of the shell as a
function of the radius $R$ of the shell for various values of the
$\frac{Q}{M}$ ratio. The energy density is adimensionalized through
the mass $M$, $8\pi M\sigma$, and the radius is adimensionalized
through the mass $M$, $\frac{R}{M}$. Panel (b) Tension $-p$ on the
shell as a function of the radius $R$ of the shell for various values
of the $\frac{Q}{M}$ ratio. The tension is adimensionalized through
the mass $M$, $-8\pi Mp$, and the radius is adimensionalized through
the mass $M$, $\frac{R}{M}$.
}
\end{figure}
\newpage
The Carter-Penrose diagram can be drawn directly from the building
blocks of an interior Minkowski spacetime and the exterior
region neighboring $r=0$ of the overcharged Reissner-Nordstr\"om
spacetime. In
Figure~\ref{Fig:Penrose_diagram_Mink_overcharged_RN2} the
Carter-Penrose diagram of an overcharged Reissner-Nordstr\"om star
shell spacetime for a junction surface with normal pointing towards
the $r=0$ sing
|
ularity is shown. It is clearly a compact shell naked
singularity, such that there is no clear distinction
of what is outside from what is inside.
\begin{figure}[h]
\label{Fig:Penrose_diagram_Mink_overcharged_RN_alternative}
{\includegraphics[height=0.27\paperheight]
{Carter_Penrose_Mink_RN_junction_region_III}}
\caption{\label{Fig:Penrose_diagram_Mink_overcharged_RN2}
Carter-Penrose diagram of an overcharged compact
shell naked singularity, i.e., a thin
shell spacetime in an overcharged Reissner-Nordstr\"om state, located
at any radius $R$, with orientation such that the normal points
towards $r=0$. There is no clear distinction
of what is outside from what is inside.
The interior is Minkowski, the exterior is overcharged
Reissner-Nordstr\"om. This is a compact shell naked singularity
spacetime.
}
\end{figure}
\newpage
The physical interpretation of this case is understood by now, it is
similar to the corresponding nonextremal and extremal shells. This
overcharged thin shell solution provides a closed spatial static
universe with a singularity at one pole. There are no horizons. The
energy density and pressure obey the energy conditions for certain
shell radii. The causal and global structure as displayed by the
Carter-Penrose diagram shows the characteristics of this universe, which
has two sheets joined at the shell, with one sheet having a singularity
at its pole and with no horizons. The singularity is avoidable by
timelike curves. So, this case falls into the category of having the
energy conditions verified and the resulting spacetime being peculiar.
\subsection{Formalism for overcharged shells
\label{Subsec:Overcharged_induced_Minkowski_RN}}
\subsubsection{Preliminaries}
\label{prel6}
We now make a careful study to derive the properties of the
fundamental electric thin shell used in the two previous subsections,
i.e., the thin shell in an overcharged
state, i.e., $r_+$ and $r_-$
do not exist or
$M<Q$, for which the shell's radius $R$
obeys $0<R<\infty$, and
for which the orientation is such that the normal to the shell points
towards spatial infinity or towards $r=0$.
It should be read as an appendix to the previous two
subsections. We use the formalism developed in
Sec.~\ref{Sec:Junction_formalism}.
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen from $\mathcal{M}_{\rm i}$}
\label{Subsec:Overcharged_induced_Minkowski_RN2}
Let us start by analyzing the interior
Minkowski spacetime, $\mathcal{M}_{\rm i}$. Since it is the same as
the analysis done previously we only quote
the important equations.
They are the interior metric Eq.~(\ref{eq:Mink_metric_interior}),
the interior four-velocity of the shell
Eq.~(\ref{eq:Mink_vel_explicit}), the
metric for the shell at radius $R$
Eq.~(\ref{eq:induced_metric_Mink}), the normal to the shell
Eq.~(\ref{eq:normal_Mink}),
and the extrinsic curvature from the inside
Eq.~(\ref{eq:Extrinsic_curvature_Mink}).
\subsubsection{Induced metric, and extrinsic curvature of
$\mathcal{S}$ as seen
from $\mathcal{M}_{\rm e}$}
\label{Subsec:Overcharged_induced_RN}
To proceed we have now to find the expressions for the induced metric
on $\mathcal{S}$ and the extrinsic curvature components as seen from
the exterior spacetime, $\mathcal{M}_{\rm e}$, in the overcharged
state, i.e., $r_+$ and $r_-$ do not exist or $M<Q$, see
Figure~\ref{Fig:Penrose_diagram_RN_extremal}, for which the
shell's radius obeys $0<R<\infty$, and for which the
orientation is such that the normal to the shell points towards
increasing $r$, i.e., towards spatial infinity,
or towards decreasing $r$, i.e., towards $r=0$,
as seen from the exterior, as
used in the two previous subsections.
The line element for the overcharged Reissner-Nordstr\"om spacetime,
now in the quantities
$M$ and $Q$, since $r_+$ and $r_-$ do not exist, is
\begin{equation}
ds_{\rm e}^{2}=
-\left(1-\frac{2M}{r}+\frac{Q^2}{r^2}\right)dt^2+
\frac{dr^2}{1-\frac{2M}{r}+\frac{Q^2}{r^2}}
+r^{2}
d\Omega^{2}\,,
\label{eq:metricovercharged}
\end{equation}
where $M<Q$.
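As a simple numerical consistency check (not part of the original analysis), one may verify that for $M<Q$ the metric function $1-\frac{2M}{r}+\frac{Q^2}{r^2}$ has no real positive roots, so that $r_+$ and $r_-$ indeed do not exist. The Python sketch below illustrates this; the parameter values are hypothetical and geometrized units $G=c=1$ are assumed.
\begin{verbatim}
# Minimal check: for M < Q the metric function f(r) = 1 - 2M/r + Q^2/r^2
# has no real positive roots, so no horizons exist (geometrized units).
import numpy as np

def metric_function(r, M, Q):
    return 1.0 - 2.0 * M / r + Q**2 / r**2

M, Q = 1.0, 1.2                 # hypothetical overcharged parameters, M < Q
disc = M**2 - Q**2              # roots of r^2 - 2Mr + Q^2 are r = M +/- sqrt(disc)
print("M^2 - Q^2 =", disc, "(negative => no horizons)")

r = np.linspace(0.01, 10.0, 1000)
print("minimum of f(r) on the grid:", metric_function(r, M, Q).min())  # stays positive
\end{verbatim}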
Considering a static shell
as we have been doing,
the components of the 4-velocity $u^\alpha$ of an observer comoving
with the shell as seen from the exterior spacetime, are given by
\begin{equation}
u_{\rm e}^{\alpha}=-
\left(\frac{1}{k},0,0,0\right)\,,
\label{eq:overc4velocity_value_rminus}
\end{equation}
where
$k=\sqrt{1-
\frac{2M}{R}+
\frac{Q^2}{R^2}}$.
To find the induced metric
on $\mathcal{S}$ as seen by an observer at $\mathcal{M}_{\rm e}$ and
imposing the first junction condition, Eq.~(\ref{eq:1st_junct_cond}),
we find that the shell's radial coordinate $R$ is the same as measured
by an observer at $\mathcal{M}_{\rm i}$ or $\mathcal{M}_{\rm e}$ and the
induced metric on $\mathcal{S}$ is given by
Eq.~(\ref{eq:induced_metric_RN}),
namely,
\begin{equation}
\left.ds^{2}\right|_{\mathcal{S}}=-d\tau^{2}+R^{2}d\Omega^{2}\,.
\label{eq:induced_metric_RNoverc}
\end{equation}
Combining $n_{\rm e}^{\alpha}n_{{\rm e}\alpha}=1$, see
Eq.~(\ref{eq:normal_normalized}),
$n_{{\rm e}\alpha}u_{\rm e}^{\alpha}=0$, see
Eq.~(\ref{eq:normal_orthogonal}),
and Eq.~(\ref{eq:overc4velocity_value_rminus}), we find the expression
for the components of the unit normal to the hypersurface $\mathcal{S}$,
as seen from the exterior spacetime $\mathcal{M}_{\rm e}$, to be
$n_{{\rm e}\alpha}=\pm\left(0,\frac{1}{k},0,0\right)$.
To specify the sign of the normal to $\mathcal{S}$ for each region
we consider two orientations: the orientation where the normal $n$ points
towards spatial infinity and the orientation where the normal
points towards the singularity $r=0$. These two orientations can be treated
in a concise way by using $\xi=\pm1$, such that
\begin{equation}
n_{{\rm e}\alpha}=\xi
\left(0,\frac{1}{k},0,0\right)\,.
\label{eq:overcnormal_value_rminus}
\end{equation}
Then, we find the nonzero components of the extrinsic curvature of
$\mathcal{S}$ as seen from the exterior spacetime to be given by
\begin{equation}
{K_{\rm e}}^{\tau}{}_{\tau}=
\frac{\xi}{R^{2}k}\left( M-
\frac{Q^2}{R}\right)\,,
\quad\quad
{K_{\rm e}}^{\theta}{}_{\theta}=
{K_{\rm e}}^{\varphi}{}_{\varphi}
=
\xi\frac{k}{R}\,,
\label{eq:overc_Extrinsic_RN_inside_cauchy_horizon}
\end{equation}
where again $k$ is the redshift function given by
$k=\sqrt{1-
\frac{2M}{R}+
\frac{Q^2}{R^2}}$.
\subsubsection{Shell's energy density and pressure}
\label{subSubsec:shellsenergydensityandpressureextremalovercharged}
Having determined the components of the extrinsic curvature of the
matching surface $\mathcal{S}$ as seen from the interior and exterior
spacetimes we are now in position to use the second junction
condition~(\ref{eq:2nd_junct_cond}) to find the expressions for the
energy density and pressure support of a perfect fluid thin shell
in an overcharged state.
Using the shell's stress-energy tensor
given in Eq.~(\ref{eq:perfect})
we find
\begin{equation}
8\pi\sigma=\frac{2}{R}\left(1-\xi k\right)\,,
\label{eq:overcsigma}
\end{equation}
\begin{equation}
8\pi p=\frac{\xi}{2Rk}\left[\left(1-\xi k\right)^{2}-\frac{Q^2}{R^{2}}
\right]\,,
\label{eq:overcpressure}
\end{equation}
where the redshift function of the shell at $r=R$ is given by
$k=\sqrt{1-\frac{2M}{R}+\frac{Q^2}{R^2}}$.
Moreover, defining the surface electric current density $s_a$ on
the thin shell as $s_a=\sigma_{e}u_a$, where $\sigma_{e}$
represents the electric charge density, and since the Minkowski
spacetime has zero electric charge,
from Eqs.~(\ref{eq:junct_cond_Faradayb})-(\ref{eq:junct_cond_Faraday2})
and~(\ref{eq:RN_FaradayMaxwell_value})
it follows that
\begin{equation}
8\pi \sigma_{e}=\frac{2Q}{R^{2}}\,.
\label{eq:overcchargedensity}
\end{equation}
As before, we have to distinguish the two possible orientations provided by
$\xi$. In
Eqs.~(\ref{eq:overcsigma})
and~(\ref{eq:overcpressure}) it is clear that it is
necessary to pick the sign of $\xi$. Let us
start with $\xi=+1$. Eqs.~(\ref{eq:overcsigma})
and~(\ref{eq:overcpressure}) with $\xi=+1$
yield $8\pi\sigma=\frac{2}{R}\left(1-k\right)$, $8\pi
p=\frac{1}{2Rk}\left[\left(1-k\right)^{2}-
\frac{Q^{2}}{R^{2}}\right]$, and also from
Eq.~(\ref{eq:overcchargedensity}) we have
$8\pi \sigma_{e}=\frac{2Q}{R^{2}}$. Let us now take $\xi=-1$.
Eqs.~(\ref{eq:overcsigma})
and~(\ref{eq:overcpressure}) with $\xi=-1$
yield $8\pi\sigma=\frac{2}{R}\left(1+k\right)$, $8\pi
p=-\frac{1}{2Rk}\left[\left(1+k\right)^{2}-\frac{Q^{2}}{R^{2}}
\right]$, and also from Eq.~(\ref{eq:overcchargedensity}) we have
$8\pi \sigma_{e}=\frac{2Q}{R^{2}}$. These are the expressions used in
the two previous subsections. Note that for the overcharged
case $0<R<\infty$.
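For illustration, the following Python sketch (not part of the original derivation) evaluates the surface energy density, pressure, and electric charge density of Eqs.~(\ref{eq:overcsigma})-(\ref{eq:overcchargedensity}) for both orientations $\xi=\pm1$; the values of $M$, $Q$, and $R$ are hypothetical and geometrized units are assumed.
\begin{verbatim}
# Sketch: sigma, p and sigma_e of an overcharged shell for xi = +1 and xi = -1.
import numpy as np

def shell_properties(M, Q, R, xi):
    k = np.sqrt(1.0 - 2.0 * M / R + Q**2 / R**2)       # redshift function at r = R
    sigma = (1.0 - xi * k) / (4.0 * np.pi * R)         # from 8*pi*sigma = 2(1 - xi*k)/R
    p = xi * ((1.0 - xi * k)**2 - Q**2 / R**2) / (16.0 * np.pi * R * k)
    sigma_e = Q / (4.0 * np.pi * R**2)                 # from 8*pi*sigma_e = 2Q/R^2
    return sigma, p, sigma_e

M, Q, R = 1.0, 1.2, 3.0     # hypothetical overcharged configuration (M < Q)
for xi in (+1, -1):
    print("xi =", xi, "-> (sigma, p, sigma_e) =", shell_properties(M, Q, R, xi))
\end{verbatim}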
\newpage
\section{A synopsis of all
the fundamental electric thin shells: Energy conditions and the
bewildering variety of Carter-Penrose
diagrams}
\label{Sec:SinopECandCP}
\subsection{Energy conditions for the fundamental electric thin shells}
\subsubsection{Energy conditions}
The analysis of the properties of the fundamental electric
shells, i.e., timelike, static, perfect fluid thin
shells with a Minkowski interior and a Reissner-Nordstr\"om exterior
showed that both the energy density and pressure support depend on the
state of the shell, on the location of the shell and on the
orientation of the shell, i.e., on the direction of the outside
pointing normal. Moreover, we saw that in some situations the energy
density and pressure may take negative values, and this feature can
also depend on the value of the radial coordinate of the shell.
Here, we address the question of which shells verify the various
energy conditions and under what conditions they do so.
The energy conditions are a set of restrictions on the stress-energy
tensor. In the case of a perfect fluid they lead to specific
constraints on the energy density and pressure, see,
e.g.,~\citep{andreasson2009} for energy conditions on shells, see
also~\citep{visserbook,Lemos_Lobo_2008} for energy conditions
on shells and
\citep{Hawking_Ellis_book} for the original setting
of energy conditions. Here we will
study the null, weak, dominant, and strong energy conditions
for the fundamental electric thin shells. Now,
each energy condition may be considered to hold at any point of the
spacetime or along a flowline, where the specific energy condition is
only verified on average, allowing for pointwise violations. We
consider the pointwise version of the energy conditions. Let us
first briefly explain the physical motivation for each energy
condition and their implications on the properties of a perfect fluid
thin shell.
The null energy condition, or NEC, represents the restriction that the
energy density of any matter distribution in spacetime experienced
by a light ray is nonnegative. For a generic
stress-energy tensor ${T}_{\alpha\beta}$,
this is represented by
${T}_{\alpha\beta}k^{\alpha}k^{\beta}\geq0$
for any future pointing null vector field $k^\alpha$. For a perfect fluid
thin shell with stress-energy tensor ${S}_{ab}$
given by Eq.~(\ref{eq:perfect})
this implies
\begin{equation}
\sigma+p\geq0
\,.
\label{eq:NEC_perfect_fluid}
\end{equation}
The weak energy condition, or WEC, is a more restrictive version of
the NEC where it is imposed that the energy density of any matter
distribution in spacetime measured by any timelike observer must be
nonnegative, then ${T}_{\alpha\beta}v^{\alpha}v^{\beta}\geq0$ for any
future pointing, timelike vector field $v^\alpha$. For a perfect fluid
thin shell with stress-energy tensor ${S}_{ab}$
given by Eq.~(\ref{eq:perfect}) this leads to the following restrictions
\begin{equation}
\sigma\geq0\,,\quad\quad
\sigma+p\geq0
\,.
\end{equation}
The dominant energy condition, or DEC, represents the
statement that in addition
to the WEC being verified, the flow of energy can never be observed
to be faster than light, that is, in addition to
${T}_{\alpha\beta}v^{\alpha}v^{\beta}\geq0$,
the vector field $Y^\alpha$ with components given by
$Y^{\alpha}=-{T}_{\beta}{}^{\alpha}v^{\beta}$,
verifies $Y^{\alpha}Y_{\alpha}\leq0$, for any timelike future pointing
vector field $v^\alpha$. For a perfect fluid
thin shell with stress-energy tensor ${S}_{ab}$
given by Eq.~(\ref{eq:perfect}) this
implies
\begin{equation}
\sigma\geq0\,,\quad\quad
\sigma-|p|\geq0\,.
\label{eq:DEC_perfect_fluid}
\end{equation}
The strong energy condition, or SEC, represents the restriction that nearby
timelike geodesics are always focused towards each other,
essentially
guaranteeing that gravity is always perceived to be attractive by
any timelike observer. In the case of general relativity,
this is found by guaranteeing
$\left({T}_{\alpha\beta}-\frac{1}{2}
g_{\alpha\beta}{T}_{\gamma}{}^{\gamma}\right)
v^{\alpha}v^{\beta}\geq0$
for any timelike vector field $v^\alpha$. For a perfect fluid
thin shell with stress-energy tensor ${S}_{ab}$
given by Eq.~(\ref{eq:perfect})
we find,
\begin{equation}
\sigma+p\geq0
\,,\quad\quad
\sigma+2p\geq0
\,.
\label{eq:SEC}
\end{equation}
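For a perfect fluid thin shell the four pointwise conditions above reduce to simple inequalities on $(\sigma,p)$. As a minimal sketch (not part of the original text), the Python function below encodes Eqs.~(\ref{eq:NEC_perfect_fluid})-(\ref{eq:SEC}) as boolean checks; the numerical values in the example are hypothetical.
\begin{verbatim}
# Pointwise energy-condition checks for a perfect fluid thin shell.
def energy_conditions(sigma, p):
    nec = sigma + p >= 0
    wec = (sigma >= 0) and nec
    dec = (sigma >= 0) and (sigma - abs(p) >= 0)
    sec = nec and (sigma + 2 * p >= 0)
    return {"NEC": nec, "WEC": wec, "DEC": dec, "SEC": sec}

print(energy_conditions(sigma=0.05, p=0.01))    # hypothetical pressure shell
print(energy_conditions(sigma=0.05, p=-0.10))   # hypothetical tension shell
\end{verbatim}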
\subsubsection{Limiting radii from an
analysis of the energy conditions on fundamental
electric thin shells}
From Eqs.~(\ref{eq:NEC_perfect_fluid})-(\ref{eq:SEC}) we see
that the energy conditions imply various restrictions on the energy
density and pressure of a perfect fluid. In the considered setup,
we have found that the properties of the perfect fluid
fundamental electric thin shells are
functions essentially
of the radius $R$ of the shell. Hence, the constraints
imposed by the energy conditions on the thin shell for the various
possible spacetimes will lead to restrictions on $R$.
Anticipating what follows, we present
the expressions for the limiting radii $R_{\mathrm{I}}$,
$R_{\mathrm{I}'}$,
and $R_{\mathrm{III}}$, that arise from solving the
inequalities (\ref{eq:NEC_perfect_fluid})-(\ref{eq:SEC}) for
the various junction spacetimes, i.e.,
\begin{align}
R_{\mathrm{I}}= & \frac{M}{36}\left[25+3\left(\frac{Q}{M}\right)^{2}+
\frac{9\left(\frac{Q}{M}\right)^{4}-570\left(\frac{Q}{M}\right)^{2}+
625}{\Delta_{\mathrm{I}}}+\Delta_{\mathrm{I}}\right]\,,
\label{eq:energy_cond_RI}\\
R_{\mathrm{I}'}= & \frac{M}{4}\left[3+\left(\frac{Q}{M}\right)^{2}+
\frac{\left(\frac{Q}{M}\right)^{4}-10\left(\frac{Q}{M}\right)^{2}+
9}{\Delta_{\mathrm{I}'}}+\Delta_{\mathrm{I}'}\right]\,,
\label{eq:energy_cond_RIp}\\
R_{\mathrm{III}}= & \frac{M}{72}\left[50+6\left(\frac{Q}{M}\right)^{2}-
\frac{\left(1-i\sqrt{3}\right)\left[9\left(\frac{Q}{M}\right)^{4}-570
\left(\frac{Q}{M}\right)^{2}+625\right]}{\Delta_{\mathrm{I}}}-
\left(1+i\sqrt{3}\right)\Delta_{\mathrm{I}}\right]\,,
\label{eq:energy_cond_RIII}
\end{align}
with
\begin{equation}
\begin{aligned}\Delta_{\mathrm{I}} & =\sqrt[3]{27
\left(\frac{Q}{M}\right)^{6}+216\left(\frac{Q}{M}
\right)^{3}\sqrt{9\left(\frac{Q}{M}\right)^{4}+366
\left(\frac{Q}{M}\right)^{2}-375}+5211\left(\frac{Q}{M}
\right)^{4}-21375\left(\frac{Q}{M}\right)^{2}+15625}\,,\\
\\
\Delta_{\mathrm{I}'} & =\sqrt[3]{8\left(\frac{Q}{M}
\right)^{3}\,\,\sqrt{\left(\left(\frac{Q}{M}\right)^{2}-1
\right)^{2}}+\left(\frac{Q}{M}\right)^{6}+17\left(\frac{Q}{M}
\right)^{4}-45\left(\frac{Q}{M}\right)^{2}+27}\,.
\end{aligned}
\end{equation}
The expressions for
$R_{\mathrm{I}}$ and $R_{\mathrm{I}'}$ can be read directly;
the expression for $R_{\mathrm{III}}$
is written in terms of the imaginary unit $i$, but for the range of values
of the ratio $\frac{Q}{M}$ of interest this function takes purely real values.
Moreover, although it is not clear from the expressions, the values
of the radii $R_{\mathrm{I}}$, $R_{\mathrm{I}'}$, and $R_{\mathrm{III}}$
are independent of the sign of $Q$, as expected.
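As a hedged numerical sketch (not taken from the original computations), the limiting radii can be evaluated directly from Eqs.~(\ref{eq:energy_cond_RI})-(\ref{eq:energy_cond_RIII}) using complex arithmetic; the principal branches of the square and cube roots used below are an assumption, so only results with negligible imaginary part should be trusted.
\begin{verbatim}
# Numerical evaluation of R_I, R_I' and R_III for a given q = Q/M (with M = 1).
import numpy as np

def limiting_radii(q, M=1.0):
    q = complex(q)
    delta_I = (27*q**6 + 216*q**3*np.sqrt(9*q**4 + 366*q**2 - 375)
               + 5211*q**4 - 21375*q**2 + 15625)**(1/3)
    delta_Ip = (8*q**3*np.sqrt((q**2 - 1)**2)
                + q**6 + 17*q**4 - 45*q**2 + 27)**(1/3)
    R_I = (M/36)*(25 + 3*q**2 + (9*q**4 - 570*q**2 + 625)/delta_I + delta_I)
    R_Ip = (M/4)*(3 + q**2 + (q**4 - 10*q**2 + 9)/delta_Ip + delta_Ip)
    R_III = (M/72)*(50 + 6*q**2
                    - (1 - 1j*np.sqrt(3))*(9*q**4 - 570*q**2 + 625)/delta_I
                    - (1 + 1j*np.sqrt(3))*delta_I)
    return R_I, R_Ip, R_III

for q in (0.5, 1.2):              # one undercharged and one overcharged example
    print("q =", q, [complex(R) for R in limiting_radii(q)])
\end{verbatim}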
For completeness, in Figure~\ref{Fig:Energy_radii} we present the
behavior of the various limiting radii defined in
Eqs.~(\ref{eq:energy_cond_RI})-(\ref{eq:energy_cond_RIII})
as functions of the ratio $\frac{Q}{M}$.
\begin{figure}[h]
\subfloat[\label{Fig:Energy_radii_undercharged}]
{\includegraphics[height=0.2\paperheight]
{Non_extremal_energy_radii}}
\hfill{}
\subfloat[\label{Fig:Energy_radii_overcharged}]
{\includegraphics[height=0.2\paperheight]
{Overcharged_energy_radii}}
\caption{\label{Fig:Energy_radii}Behavior of the various radii, whose
expressions are given by Eqs.~(\ref{eq:energy_cond_RI}) -
(\ref{eq:energy_cond_RIII}), found by imposing the null, weak,
dominant and strong energy conditions on the thin shells present at
the matching surface of the various junction spacetimes.}
\end{figure}
\newpage
\subsubsection{Table of the energy conditions on fundamental
electric thin shells}
Using the expressions for the energy density and pressure support for
the thin matter shell for each resulting junction spacetime in the
inequalities (\ref{eq:NEC_perfect_fluid})-(\ref{eq:SEC}),
allows us to find the constraints on the
shell's location so that each of the tested energy conditions is
verified. In the table of Figure~\ref{Table:Energy_cond_table}
we summarize the results.
\begin{figure}[h]
{\includegraphics[height=0.27\paperheight]
{tableenergyconditions}}
\caption{\label{Table:Energy_cond_table}
Range of values of the radius $R$ of the fundamental electric thin
shell, in the various allowed locations of the exterior
Reissner-Nordstr\"om spacetime
for which, the null, weak, dominant and
strong energy conditions are verified. The symbols $\uparrow$ and
$\downarrow$ denote the orientation of the shell, i.e., outward normal
pointing to increasing radius and to decreasing radius from the shell,
respectively. The symbols $>$ and $<$ for Sections VII A and VII B
in the table denote whether the approach to $r_+$ is done
through $R>r_+$ or
$R<r_+$, respectively.
}
\end{figure}
\subsubsection{Detailed description}
For the fundamental electric shells in a nonextremal state, located
outside the gravitational radius $r_+$, $R>r_+$, we find that when
their orientation is such that the outward normal points to spatial
infinity, Section~\ref{Subsec:nonextremalnormaloutside}, i.e., the
star shells, they always verify the NEC and WEC, they verify the DEC
for $R>R_{\mathrm{I}}$, and also always verify the SEC, and when their
orientation is such that the outward normal points to the gravitational
radius $r_+$, Section~\ref{Subsec:nonextremaltensionoutside}, i.e.,
the tension shell black holes, they verify the NEC, WEC, and DEC for
$R>R_{\mathrm{I}'}$, and the SEC is always violated. Moreover, the
limiting radius $R_{\mathrm{I}'}$ of Eq.~(\ref{eq:energy_cond_RIp})
also determines the value of the circumferential radius of the shell
for which its energy density is maximum and thus it is connected to
the bumps in the energy density $\sigma$ of
Figure~\ref{Fig:Properties_region_I_prime}.
For the fundamental electric shells in a nonextremal state, located
inside the Cauchy radius $r_-$, $R<r_-$, we find that when their
orientation is such that the outward normal points to $r_-$,
Section~\ref{Subsec:nonextremalnormalcauchy}, the tension shell
regular and nonregular black holes, none of the energy conditions are
verified, and when their orientation is such that the outward normal
points to the $r=0$ singularity,
Section~\ref{Subsec:nonextcompactshellnakedsingularity}, the compact
shell naked singularities, the shells always verify the NEC and WEC,
verify the DEC in the domain $0<R\leq R_{\mathrm{III}}$, and always
verify the SEC.
For the fundamental electric shells in an extremal state, $r_+=r_-$,
located outside the gravitational radius $r_+$, $R>r_+$, we find that
when their orientation is such that the outward normal points to
spatial infinity, Section~\ref{Subsec:extremalnormaloutside}, i.e.,
the Majumdar-Papapetrou star shells, they always verify the NEC, WEC,
DEC, and SEC, and when their orientation is such that the outward
normal points to the event horizon,
Section~\ref{Subsec:extremalnormaloutsidenormaltoin}, the extremal
tension shell black holes, they always verify the NEC, WEC, and DEC,
and the SEC is always violated.
For the fundamental electric shells in an extremal state, $r_+=r_-$,
located inside the gravitational radius, $R<r_+$, we find that when
their orientation is such that the outward normal points to spatial
infinity, Section~\ref{Subsec:extremalnormalinsideplus}, the extremal
tension shell regular and nonregular black holes, none of the energy
conditions are verified by the shells, and when their orientation is
such that the outward normal points to the $r=0$ singularity,
Section~\ref{Subsec:extremalnormalinsideminus}, the
Majumdar-Papapetrou compact shell naked singularities, the shells
always verify the NEC, WEC, DEC, and SEC.
For the fundamental electric shells in an extremal state, $r_+=r_-$,
located in the limit at the gravitational radius, $R=r_+$, we find
that when their orientation is such that the outward normal points to
spatial infinity and the limit of $R\to r_+$ comes from above,
Section~\ref{majumdarpapapetroushellquasiblackholes}, the
Majumdar-Papapetrou shell quasiblack holes, the shells always verify
the NEC, WEC, DEC, and SEC, the matter is Majumdar-Papapetrou matter,
whereas when the limit of $R\to r_+$ comes from below,
Section~\ref{extremalnullshellblackholes}, the extremal null shell
black holes, the shells verify the NEC, WEC, DEC, and never verify the
SEC,
and when their orientation is such that the outward normal points to
the $r=0$ singularity and the limit of $R\to r_+$ comes from above,
Section~\ref{extremalnullshellsingularities}, the extremal tension
shell null singularities, one has that these shells always verify the
NEC, WEC and DEC, and never verify the SEC,
whereas when the limit of $R\to r_+$ comes from below,
Section~\ref{extremalmajumdarpapapetroushellsingularities}, the
extremal Majumdar-Papapetrou null shell singularities, the
shells verify
the NEC, WEC, DEC, and SEC, the matter is Majumdar-Papapetrou matter.
For the fundamental electric shells in an overcharged state, $r_+$ and
$r_-$ do not exist and $M<Q$, located at any radius $R$, we find that
when their orientation is such that the outward normal points to
spatial infinity, Section~\ref{Subsec:overchargednormalplus}, the
overcharged star shells, the shells verify the NEC, WEC, DEC for
$R\geq R_{\mathrm{I}'}$, and the SEC when $R\geq \frac{Q^{2}}{M}$, and
when their orientation is such that the outward normal points to the
$r=0$ singularity, Section~\ref{Sec:Overcharged_thin_shells2}, the
overcharged compact shell naked singularities, the NEC, WEC, DEC are
always satisfied and the SEC when $R\leq \frac{Q^{2}}{M}$. The
results for the strong energy condition
of an overcharged shell
indicate that in the
overcharged Reissner-Nordstr\"om spacetime the singularity is
repulsive in a core region within $r<\frac{Q^{2}}{M}$. Our result
extends that of~\citep{Graves_Brill_1960,Carter_1966_2} where it was
found that the nonextremal and extremal Reissner-Nordstr\"om
solutions are characterized by a
repulsive region delimited, respectively, by the Cauchy or event
horizons. Here, although there are no horizons, we see that the same
conclusion holds, and confirm the result given
in, e.g.,~\citep{felicebook} that there is a repulsive region in the
overcharged Reissner-Nordstr\"om spacetime near the singularity, and,
in addition, find the limiting radius of this repulsive region.
\newpage
\subsection{The bewildering variety
of the Carter-Penrose diagrams for
the fundamental electric thin shells}
\label{Sec:Bewild}
In addition to performing an analysis
on the physical properties of the shells, i.e.,
their energy density $\sigma$, pressure $p$,
and the corresponding energy conditions, we have
drawn the
Carter-Penrose diagram in each of the fourteen
cases.
These diagrams
for
the fundamental electric thin shells are summarized in the chart of
Figure~\ref{alldiagrams} which displays
clearly their bewildering variety.
\begin{figure}[h]
{\includegraphics[height=0.60\paperheight]
{chart_CP_diagrams.pdf}}
\caption{\label{alldiagrams}
A chart with all the fourteen
different Carter-Penrose diagrams for the fundamental electric
charged shells, i.e., static shells with a Minkowski interior and a
Reissner-Nordstr\"om exterior.}
\end{figure}
\noindent There were cases in which the
solution does not tell us precisely how to continue the
Carter-Penrose diagram; one can
either repeat the shell, or draw horizons and infinities at will, in
any possible combination.
\newpage
\section{Conclusions \label{Sec:Conclusions}}
We have classified and studied
the spacetimes generated by a fundamental
electric thin shell, i.e., a spherical static electrical thin shell
with a Minkowski interior and a Reissner-Nordstr\"om exterior.
All three
main states a shell with
a Reissner-Nordstr\"om exterior can have were considered,
namely, nonextremal, extremal, and overcharged. In the
nonextremal state there are still two possible locations
for the shell, namely, the
shell is located outside the gravitational radius or the shell is
located inside the Cauchy radius. In the extremal state there are
three possibilities, namely, the shell is located outside the
gravitational radius, the shell is located inside the gravitational
radius, or the shell is located at the gravitational radius. In the
overcharged state there is only one possibility, the shell can be
located anywhere. We have seen, in the wake of the work of
Lynden-Bell and Katz for non-electrical thin shells with a
Schwarzschild exterior, that each of the locations still has two
possibilities: either the outward normal to the shell points toward
increasing radius or it points toward decreasing radius.
For extremal shells at the gravitational radius there is still
a subdivision, either the shell approaches the gravitational
radius from above, or it approaches the gravitational
radius from below.
In all there
are fourteen different cases.
For each of the fourteen different shells we have worked out the energy
density $\sigma$ and the pressure $p$ and analyzed the energy
conditions of the matter on the shell. In addition we have drawn the
Carter-Penrose diagrams in all the fourteen cases. There were cases
in which the solution does not tell us precisely how to continue the diagram;
one can either repeat the shell, or draw horizons and infinities at
will, in any possible combination. In addition,
in some cases the distinction between what is
interior and what is exterior is blurred.
The maximum analytical extension of the fundamental electric
shells and consequent Carter-Penrose diagrams,
showed that there is a plethora of
solutions that encompass
nonextremal star shells,
nonextremal tension shell black holes,
nonextremal tension shell regular and
nonregular black holes,
nonextremal compact shell naked singularities,
Majumdar-Papapetrou star
shells,
extremal tension shell singularities,
extremal tension shell regular and nonregular black holes,
Majumdar-Papapetrou compact shell naked singularities,
Majumdar-Papapetrou shell quasiblack
holes,
extremal null shell quasinonblack holes,
extremal null shell singularities,
Majumdar-Papapetrou null shell singularities,
overcharged star shells,
and overcharged compact shell naked singularities.
In some of the cases it was found that the energy conditions are
verified and the geometrical setup is physically reasonable, in other
cases it was found that the energy conditions are verified but the
resulting geometry is rather peculiar, or even strange, or that the
energy conditions are violated but the resulting geometry seems
physically reasonable. Therefore, the set of solutions might be
greatly reduced if we only choose solutions which indeed obey the
energy conditions and are physically reasonable, or regard only
solutions that verify the energy conditions, independently of the
geometrical setup, or maintain only solutions whose geometry seems
reasonable. Here we choose to maintain everything as good and
interesting solutions, to be tested a posteriori.
\section*{Acknowledgments}
JPSL acknowledges Funda\c c\~ao para a
Ci\^encia e Tecnologia - FCT, Portugal, for financial
support through Project No.~UIDB/00099/2020.
PL acknowledges IDPASC and FCT, Portugal, for financial support
through Grant No.~PD/BD/114074/2015, and thanks
Centro de Matem\'{a}tica, Universidade do Minho,
where part of this work has been performed, for
the hospitality.
\newpage
\section{Introduction}
Accurate measurements of quantum correlations in the next generation of experiments with ultracold atoms in optical lattices are one of the most challenging obstacles in the quest for the realization of quantum simulators of magnetic systems \cite{reviewBloch,reviewAnna}. Not only local order parameters, but also long range correlations are necessary for the faithful discrimination of magnetic phases in the strongly correlated regime.
One of the advantages of using optical lattices for simulating solid state systems is that atoms are easily coupled to light with extremely accurate control. The atomic matter properties are then inferred from the measurement of the light scattered off an atomic sample.
Here, we review a proposal for inferring spin-spin correlations using a quantum polarization spectroscopy scheme based on a light-matter interface~\cite{polzik}. The idea, put forward initially in Ref.~\cite{Eckert2008} and further developed in Refs.~\cite{Roscilde2009,dechiara-spectroscopy}, consists in coupling the polarization of a beam illuminating the optical lattice with the magnetic moments of the trapped atoms. As we explain in this work, using a light beam in a standing wave configuration, as sketched in Fig.~\ref{fig:setup}, one can achieve a modulation of the light-atom coupling which allows one to reconstruct the atomic spin-spin correlations. One of the most important advantages of this method is that it is nondestructive, i.e., the atomic sample is kept in the trap and can be reused for further measurements. We apply this method to a specific one-dimensional spin chain. We show how to measure the model phase diagram by accessing the order parameters of the different phases.
Furthermore, we will show that apart from measuring spin-spin correlations, polarization spectroscopy allows one to discriminate whether a magnetic system, in our case a spin chain, is entangled or not. This is a long-standing problem in the context of quantum information theory and many-body systems (see for example \cite{TothReview}), since it is extremely difficult to characterize quantum entanglement for general many-particle states. In some specific cases, such as spin systems, one can derive spin squeezing inequalities involving the system total angular momentum which reveal whether the many-body state is entangled or not. In this context, several proposals have been put forward based on particle scattering, e.g. neutron scattering, to probe entanglement in magnetic systems \cite{Wiesniak2005,krammer,cramer}. Here we show that entanglement witnesses based on spin-squeezing inequalities are straightforwardly measured with our proposed scheme. Moreover, the flexibility of optical setups in modulating the periodicity of the probe wave gives a lot of freedom and accuracy in the measurements compared to neutron scattering. We expect this scheme to open a new route for the, so far elusive, detection of many-body entanglement.
The paper is organized as follows:
in Sec.~\ref{sec:QPS} we review the quantum polarization spectroscopy technique aimed at measuring spin-spin correlations in optical lattices while in Sec.~\ref{sec:model} we discuss how to simulate spin chains in optical lattices; in Sec.~\ref{sec:results} we show the numerical results for the output signal for the reconstruction of the phase diagram of the spin model; in Sec.~\ref{sec:ent} the entanglement detection using quantum polarization spectroscopy is described, and finally in Sec.~\ref{sec:conclusions} we conclude.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{setup}
\caption{(Color online) Schematic detection setup, atoms placed in an optical lattice of periodicity $d/2$ (thin line; red lattice) are illuminated by a laser beam in a standing wave configuration (dark line; blue lattice) shifted by $\alpha$ from the optical lattice configuration. The output light is redirected through a polarimeter which measures its polarization through a homodyne detection (HD).}
\label{fig:setup}
\end{center}
\end{figure}
\section{The detection scheme}
\label{sec:QPS}
The detection scheme based on polarization spectroscopy that we use in this work has been described in \cite{Eckert2008,Roscilde2009,dechiara-spectroscopy}.
The scheme consists in illuminating the atoms with a nonresonant probe beam in a standing wave configuration as shown in Fig.~\ref{fig:setup}. Due to the Faraday effect, the polarization of the incoming light is rotated as a consequence of an effective magnetic field generated by the atomic magnetic moments. By measuring the change in the polarization of the output light we acquire information on the total angular momentum of the atomic sample.
For light propagating along the z-axis, parallel to the atomic array, light-atom interaction is best expressed using the Stokes parameters defined as:
\begin{eqnarray}
s_1&=&\frac{1}{2}(a^\dagger_x a_x-a^\dagger_y a_y),
\\
s_2&=&\frac{1}{2}(a^\dagger_y a_x+a^\dagger_x a_y),
\\
s_3&=&\frac{1}{2i}(a^\dagger_y a_x-a^\dagger_x a_y),
\end{eqnarray}
where $a_x$ and $a_y$ are the photon annihilation operators with polarization along $x$ and $y$ such that $n_x=a^\dagger_xa_x$ and $n_y=a^\dagger_ya_y$ are the number of photons per unit of time with polarization $x$ and $y$ respectively. Using this definition, the atom-light interaction is described by the Hamiltonian:
\begin{equation}
H_{AL} = -\kappa s_3 J^{eff}_z,
\end{equation}
where the coupling constant $\kappa$ depends on the optical depth of the atomic sample and on the probability of exciting an atom due to the probe. The effective angular momentum $J^{eff}_z$ depends on the intensity profile of the probe beam. In the case of a simple standing wave it is given by
\begin{equation}
\label{eq:Jeff}
J^{eff}_z =\frac{1}{\sqrt{L}} \sum_n c_n S_{zn},
\end{equation}
and the coefficients are defined as
\begin{equation}
c_n = 2 \int dz \cos^2[k_P(z-a)] |w(z-nd)|^2,
\end{equation}
where $k_P$ is the wavevector of the probe light, $a$ is a shift and $w(z-nd)$ is the first band Wannier function of the atom centered at lattice position $z=nd$. In the calculations, for simplicity, we approximate the Wannier functions with delta functions centered at the lattice positions so that the coefficients are now given by $c_n=2\cos^2[k_Pd(n-\alpha)]$ where we defined the dimensionless shift $\alpha=a/d$.
We assume the incoming light to be strongly polarized along the $x$ direction, i.e. $\langle S_1 \rangle = N_{ph} \gg 1$ where $S_i=\int dt s_i$ and $N_{ph}$ is the beam total number of photons. Therefore we can approximate the other two Stokes operators as two effective conjugated variables: $X=S_2/\sqrt{N_{ph}}$ and $P=S_3/\sqrt{N_{ph}}$ such that
\begin{equation}
[X,P] = \frac{i S_1}{N_{ph}} \sim i.
\end{equation}
Integrating out the Heisenberg equations of motion for these light quadratures we obtain
\begin{equation}
X_{out} = X_{in} - \kappa J_z^{eff},
\end{equation}
where $X_{in}$ is the quadrature of the incoming light, and $X_{out}$ is the output light emerging from the sample that can be measured using homodyne detection as shown in Fig.~\ref{fig:setup}. Since we assumed the initial beam to be strongly polarized along the $x$ direction, $\langle X_{in}\rangle = 0$, we obtain:
\begin{equation}
\langle X_{out} \rangle = - \kappa \langle J_z^{eff} \rangle,
\end{equation}
thus, the mean of the effective angular momentum is mapped into the mean output light quadrature. Similarly, higher moments (fluctuations) are also mapped. In this way all the moments of $ J_z^{eff} $ can be extracted from the noise distribution of the output light, and in particular the variance:
\begin{equation}
(\Delta X_{out})^2 = (\Delta X_{in})^2 +\kappa^2(\Delta J_z^{eff})^2,
\end{equation}
where $(\Delta X_{in})^2$ is the input noise (for a coherent state $(\Delta X_{in})^2=1/2$).
Different magnetic phases can be distinguished by studying the mean effective angular momentum $\langle J_z^{eff} \rangle$ and the variance $(\Delta J_z^{eff})^2$. The former immediately tells us whether the spin chain is ferromagnetic or not. The latter gives us access to magnetic correlations:
\begin{eqnarray}
\label{eq:eps}
\varepsilon(k_P,\alpha)& =& (\Delta J_z^{eff})^2 =
\nonumber\\
&=&\frac 4L \sum_{nm}\cos^2[k_P d(m-\alpha)] \cos^2[k_P d(n-\alpha)]
\nonumber\\
&\times&\mathcal G_z(m,n).
\end{eqnarray}
where $\mathcal G_z(m,n)\equiv\langle S_{zm} S_{zn}\rangle -\langle S_{zm}\rangle\langle S_{zn}\rangle $ is the two-point correlation function.
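To make the structure of Eq.~\eqref{eq:eps} explicit, the following Python sketch (not the DMRG computation used in this work) evaluates $\varepsilon(k_P,\alpha)$ for a purely hypothetical, short-ranged antiferromagnetic correlation matrix $\mathcal G_z(m,n)$; lattice spacing $d=1$ is assumed.
\begin{verbatim}
# Signal epsilon(k_P, alpha) as a weighted double sum over G_z(m,n).
import numpy as np

def epsilon(kP, alpha, Gz, d=1.0):
    L = Gz.shape[0]
    n = np.arange(L)
    c = np.cos(kP * d * (n - alpha))**2      # equals c_n / 2 of the text
    return (4.0 / L) * c @ Gz @ c            # (4/L) sum_{mn} cos^2 cos^2 G_z(m,n)

L = 60
m, n = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
Gz_toy = (-1.0)**(m - n) * np.exp(-np.abs(m - n) / 4.0)   # hypothetical correlations

for kP in (np.pi / 4, np.pi / 3, np.pi / 2):
    print("k_P d = %.3f  eps(alpha=0) = %.4f" % (kP, epsilon(kP, 0.0, Gz_toy)))
\end{verbatim}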
As noticed in \cite{Roscilde2009}, in the case of a sample for which the net magnetization is zero, the output signal can be connected to the magnetic structure factor. Indeed by averaging the signal over the phase shift $\alpha$ one gets:
\begin{eqnarray}
\label{eq:eps_structure}
\bar\varepsilon(k_P) \equiv\int d\alpha\varepsilon(k_P,\alpha)=
\frac{1}{2}S(2k_P)
\end{eqnarray}
In principle, by averaging over the phase shift $\alpha$ we lose some information on the correlations. In this work we assume accurate control of the shift $\alpha$ and define the quantity:
\begin{equation}
\label{eq:deltaeps}
\Delta\varepsilon(k_P,\alpha_1,\alpha_2)\equiv \varepsilon(k_P,\alpha_1)-\varepsilon(k_P,\alpha_2)
\end{equation}
which is the difference of the signal with fixed wavevector $k_P$ at two different phase shifts. We will show that the quantity $\Delta\varepsilon(k_P,\alpha_1,\alpha_2)$, by appropriately choosing the parameters $k_P, \alpha_1, \alpha_2$, can be linked to the local order parameters necessary for the identification of the different phases of a given model.
If we assume no net magnetization, then the expression for $\Delta\varepsilon$ simplifies to:
\begin{eqnarray}
\Delta\varepsilon(k_P,\alpha_1,\alpha_2) &=& \frac{1}{2L}\sum_{mn}\left\{\cos[2k_Pd(m+n-2\alpha_1)] \right.
\nonumber\\
&-&\left.\cos[2k_Pd(m+n-2\alpha_2)] \right\} \mathcal G_z(m,n)
\end{eqnarray}
\section{Realization of spin-$1$ Hamiltonians in optical lattices}
\label{sec:model}
Here we briefly review how to simulate spin chains with ultracold atoms in optical lattices and discuss the resulting phase diagram. Spin-1 atoms confined in a deep optical lattice generated by two counter-propagating lasers of wavelength $\lambda$ are well described, within the tight-binding approximation, by the Bose-Hubbard Hamiltonian \cite{Jaksch1998}. Defining the creation and annihilation operators $a_{i,\sigma}^\dagger$ and $a_{i,\sigma}$ for an atom on site $i$ with spin component $\sigma=1,0,-1$ along the quantization axis, the Hamiltonian takes the form:
\begin{eqnarray}
H_{BH} &=& \frac{U_0}{2}\sum_i n_i(n_i-1)+\frac{U_2}{2}\sum_i\left(\bm{S}_i^2-2n_i\right)
-\mu\sum_i n_i
\\
&-&t\sum_{i\sigma} \left(a_{i,\sigma}^\dagger a_{i+1,\sigma}+h.c. \right)
\nonumber
\end{eqnarray}
The operator $n_i = \sum_\sigma a_{i,\sigma}^\dagger a_{i,\sigma}$ is the total number operator of site $i$ while $\bm{S}_i = \sum_{\sigma,\sigma'}a_{i,\sigma}^\dagger \bm{T}_{\sigma,\sigma'} a_{i,\sigma'}$ is the spin operator (matrices $\bm{T}$ are the usual spin-1 angular momentum operators and we use $\hbar=1$).
The parameters appearing in the Hamiltonian $H_{BH}$ are: the usual Hubbard repulsion $U_0$, a spin dependent interaction $U_2$, the chemical potential $\mu$ and the tunneling rate $t$. While the chemical potential fixes the total number of atoms, the remaining parameters can be evaluated from the depth of the optical lattice and from the scattering lengths associated with different scattering channels \cite{Imambekov}.
The phase diagram of this model in the $\mu-t$ plane consists of insulating lobes as in the spinless Bose-Hubbard model where the lobes size depends on the ratio $U_2/U_0$ \cite{Rizzi2005}. For unit filling and for sufficiently small tunneling $t$ the system is in a Mott insulator state with one atom per site. Virtual tunneling of the atoms between neighboring sites gives rise to an effective magnetic interaction described by the bilinear-biquadratic Hamiltonian \cite{Imambekov}:
\begin{eqnarray}
\label{eq:HBB}
H_{BB}
= J\sum_i \cos(\theta) \bm{S}_i\cdot\bm{S}_{i+1}+ \sin(\theta) (\bm{S}_i\cdot\bm{S}_{i+1})^2
\end{eqnarray}
The Hamiltonian \eqref{eq:HBB} is derived within second order perturbation theory in the ratio $t/U_\alpha$, $\alpha=0,2$ and the relevant parameters read:
\begin{eqnarray}
\tan(\theta) &=& \frac{U_0}{U_0-2U_2},
\\
J &=& \frac{2t^2}{U_0+U_2} \sqrt{1+\tan^2(\theta)},
\end{eqnarray}
where the angle $\theta$ varies in the interval $[-\pi;\pi]$.
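As a small worked example (not from the original text), these perturbative relations can be evaluated numerically; the branch of the arctangent chosen below, as well as the Hubbard parameter values, are assumptions made only for illustration.
\begin{verbatim}
# Map Bose-Hubbard parameters (t, U0, U2) to the effective couplings (theta, J).
import numpy as np

def effective_spin_couplings(t, U0, U2):
    tan_theta = U0 / (U0 - 2.0 * U2)
    theta = np.arctan(tan_theta)          # branch in (-pi/2, pi/2); an assumption
    J = 2.0 * t**2 / (U0 + U2) * np.sqrt(1.0 + tan_theta**2)
    return theta, J

theta, J = effective_spin_couplings(t=0.05, U0=1.0, U2=0.04)   # hypothetical values
print("theta/pi = %.3f,  J = %.4e" % (theta / np.pi, J))
\end{verbatim}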
Hamiltonian \eqref{eq:HBB} is characterized by a rich phase diagram, sketched in Fig.~\ref{fig:phasediagram}, depending on the angle $\theta$ and which has been extensively studied in the literature, see \cite{AKLT,Chubukov,FathSolyom1991,FathSolyom1995, Schollwock1996,Buchta2005, Rizzi2005,Lauchli2006,Reed,Schollwock_book} and references therein. Here we briefly discuss the model phase diagram and the corresponding order parameters.
\emph{The ferromagnetic phase.-} For $\pi/2 <\theta < 5\pi/4$ the ground state is ferromagnetic: all the spins, breaking the rotational symmetry of $H_{BB}$, align along some direction with a net spontaneous magnetization, which serves as a local order parameter. For the remaining values of $\theta$ the ground state lacks spontaneous magnetization. However, within this interval we can distinguish different phases.
\begin{figure}[t]
\begin{center}
\begin{pspicture}(-2.5,-2.5)(2.5,2.5)
\pscircle[linewidth=0.05](0,0){2}
\psline[linewidth=0.05](-1.4142,-1.4142)(1.4142,1.4142)
\psline[linewidth=0.05](0,0)(0,2)
\psline[linewidth=0.05](0,0)(1.4142,-1.4142)
\rput(-1,0){\Large F}
\rput(1,0){\Large H}
\rput(0.5,1.3){\Large C}
\rput(0,-1){\Large D}
\rput(0,2.35){\Large $\frac{\pi}{2}$}
\rput(1.6,1.6){\Large $\frac{\pi}{4}$}
\rput(1.6,-1.6){\Large $-\frac{\pi}{4}$}
\rput(-1.75,-1.7){\Large $-\frac{3\pi}{4}$}
\end{pspicture}
\end{center}
\caption{Phase diagram of the bilinear-biquadratic Hamiltonian \eqref{eq:HBB} in the interval $\theta\in[-\pi;\pi]$. The four phases are: the ferromagnetic phase (F), the critical phase (C), the Haldane phase (H) and the dimer phase (D).}
\label{fig:phasediagram}
\end{figure}
\emph{The critical phase.-} In the interval $\pi/4<\theta<\pi/2$ the system is in a critical phase in which the model is gapless due to soft collective modes at momenta $q = 0,\pm 2\pi/(3d)$ where $d=\lambda/2$ is the distance between two adjacent sites. The ground state organizes in slightly correlated clusters of three neighboring spins (trimers). This fact is reflected in the spin-spin correlation functions $\langle S_{zi}S_{z(i+r)}\rangle $ which show period-$3$ oscillations \cite{FathSolyom1991}. In momentum space this feature emerges as a peak at $q= 2\pi/(3d)$ in the magnetic structure factor defined as:
\begin{equation}
S(q) = \frac 1L \sum_{mn} e^{i qd(m-n)} \langle S_{zm}S_{zn}\rangle.
\end{equation}
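As an illustration (with purely hypothetical correlations, not the DMRG data of this work), the sketch below computes $S(q)$ for correlations modulated with period $3$ and recovers the expected peak at $q=2\pi/(3d)$.
\begin{verbatim}
# Structure factor S(q) from a toy period-3 modulated correlation matrix.
import numpy as np

L, d = 60, 1.0
m, n = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
corr = np.cos(2.0 * np.pi * (m - n) / 3.0) * np.exp(-np.abs(m - n) / 10.0)  # toy <S_zm S_zn>

qs = np.linspace(0.0, 2.0 * np.pi / d, 400)
S = np.array([np.real(np.exp(1j * q * d * (m - n)) * corr).sum() / L for q in qs])
print("peak located at q*d =", qs[np.argmax(S)] * d, " (expected ~", 2.0 * np.pi / 3.0, ")")
\end{verbatim}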
Recently L\"auchli et al. \cite{Lauchli2006} have shown that nematic (i.e. quadrupolar) correlations at momentum $q= 2\pi/(3d)$ are enhanced in the critical phase while spin correlations become smaller when increasing $\theta$ from $0.2\pi$ to $0.5 \pi$. Together with the absence of the gap, the enhanced nematic correlations are a distinctive feature of the critical phase.
\emph{The Haldane phase.-} For $-\pi/4<\theta<\pi/4$ the system is in the Haldane phase which is gapped and contains for $\theta=0$ the
spin-1 isotropic Heisenberg chain and for $\tan(\theta) = 1/3$ the Affleck-Kennedy-Lieb-Tasaki (AKLT) point for which the ground state is exactly known \cite{AKLT}. Numerical results in this region based on density matrix renormalization group (DMRG) simulations show that decreasing $\theta$ from $\pi/4$ to $\theta_L \simeq 0.1314 \pi$, the so-called Lifshitz point, the peak at momentum $q= 2\pi/(3d)$ in the magnetic structure factor moves continuously to $q=\pi/d$ (see Ref.~\cite{Schollwock1996}). Notice that, although these peaks signal some correlations, the presence of a gap excludes long-range magnetic order and spin correlations decay exponentially. The Haldane phase can instead be characterized in terms of a hidden topological order parameter, called the string order parameter \cite{Rommelse}:
\begin{equation}
O_\pi(m,n) =\left \langle S_{zm} \exp\left(i \pi \sum_{l=m-1}^{n-1} S_{zl}\right) S_{zn}\right\rangle
\end{equation}
This order, being topological, cannot be revealed with local measurements.
\emph{The dimer phase.-} The interval $-3\pi/4<\theta<-\pi/4$ is still debated. At $\theta=-\pi/4$ the gap closes and for smaller values of $\theta$ it reopens again. In this region the ground state breaks translational invariance and organizes in slightly correlated dimers. For $-3\pi/4<\theta<-\pi/2$ it is still under debate whether the system is always dimerized or it becomes nematic as proposed by Chubukov \cite{Chubukov}. Numerical results \cite{FathSolyom1995,Buchta2005,Rizzi2005,Lauchli2006} show that the dimer order parameter:
\begin{equation}
\label{eq:dimer_order_parameter}
D= |\langle H_i-H_{i+1}\rangle|
\end{equation}
where $H_i=\cos(\theta) \bm{S}_i\cdot\bm{S}_{i+1}+ \sin(\theta) (\bm{S}_i\cdot\bm{S}_{i+1})^2$,
is different from zero up to values very close to $\theta=-3\pi/4$, giving strong evidence for the absence of the nematic phase except possibly in an infinitesimally small region close to $\theta=-3\pi/4$.
\section{Phase diagram reconstruction}
\label{sec:results}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.8]{3}
\caption{Left column, the function $\varepsilon(k_P,\alpha)$ for different values of $\theta$ in the three phases for $L=132$: top $\theta = -0.5\pi$ (dimer), middle $\theta=0$ (Haldane), bottom $\theta=0.3\pi$ (critical). Right column, the same plots but restricted to $\alpha=0$.}
\label{fig:epsall}
\end{center}
\end{figure}
In this section we discuss the results of the detection scheme applied to the bilinear-biquadratic Hamiltonian. The quantities $\varepsilon(k_P,\alpha)$, which depend on all possible correlations between two spins, are computed numerically by means of the DMRG algorithm \cite{dmrg}. We simulate spin chains with open boundary conditions and lengths which are multiples of $2$ and $3$, reducing known finite size effects due to incommensurability \cite{FathSolyom1991}. In the DMRG simulations we choose the number of block states sufficiently large to ensure that the truncation error is less than $10^{-6}$.
The ferromagnetic phase is easily detected by looking at the average value of the effective angular momentum:
\begin{equation}
\langle J_z^{eff}(k_P=0)\rangle =\frac{2}{\sqrt{L}} \sum_n \langle S_{zn}\rangle
\end{equation}
which is proportional to the total magnetization along the $z$ direction.
Since $\langle J_z^{eff}\rangle$ is zero in the other three phases, we need the second moment of $J_z^{eff}$ in order to characterize these phases. In Fig.~\ref{fig:epsall} we show $\varepsilon(k_P,\alpha)$ in the critical, Haldane and dimer phases. A common feature of the three phases is the presence of a high peak at $k_P d=\pi/2$ due to antiferromagnetic correlations. Apart from this, the three plots are qualitatively different. In fact for $\theta>\theta_L$, the Lifshitz point, the signal is characterized by peaks at $k_Pd\sim\pi/3$ and $k_Pd\sim2\pi/3$. These resemble the peaks of the magnetic structure factor\footnote{Notice that from Eq.~\eqref{eq:eps_structure}, $\varepsilon(k_P,\alpha)$ is related to the structure factor $S(2k_P)$ at double the value of the momentum.} and are due to the period-3 oscillations of the correlation functions. We will study these correlations in Sec.~\ref{sec:critical} and show that they detect the critical phase. For $\theta < -\pi/4$ we find the appearance of other small peaks at $k_P d = \pi/4$ and $k_P d = 3\pi/4$ signaling a different order with a larger period. We will study these features more carefully in Sec.~\ref{sec:dimer}.
Since the presence of these distinctive peaks is relevant for the determination of the phase of the spin chain, we find it convenient to subtract the background generated by all possible correlations in definition \eqref{eq:eps} by instead using the quantity $\Delta\varepsilon(k_P,\alpha_1,\alpha_2)$ defined in Eq.~\eqref{eq:deltaeps}.
To see how to choose the parameters $k_P,\alpha_1,\alpha_2$, let us consider the dimer phase. In this case we find it convenient to choose $k_P=\pi/4d$ which is the periodicity of the dimers. Then we study the behavior of $\varepsilon(\pi/4d,\alpha)$ at one point of the dimer phase as a function of $\alpha$ as shown in Fig.~\ref{fig:epscritk14}. The quantity $\varepsilon(\pi/4d,\alpha)$ is an oscillating function of $\alpha$. In order to optimize the information on the correlations at $k_P=\pi/4d$ we choose the difference between the maximum at $\alpha_1=3/2$ and the minimum at $\alpha_2=1/2$. Thus, as an indicator of the dimer phase, we will study the quantity $\Delta\varepsilon(\pi/4d,3/2,1/2)$. In the critical phase, a similar analysis leads to $k_P=\pi/3d, \alpha_1=5/4,\alpha_2=1/2$ (see also \cite{dechiara-spectroscopy}).
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{epscritk14}
\caption{The quantity $\varepsilon(\pi/4d,\alpha)$ for $\theta=-0.5 \pi$ (dimer phase) for $L=132$ as a function of $\alpha$.}
\label{fig:epscritk14}
\end{center}
\end{figure}
\subsection{Detecting the critical phase}
\label{sec:critical}
For the critical phase, we have seen that the distinctive peaks are at $k_P d=\pi/3$, and as shown in the previous section we choose $\alpha_1=5/4$ and $\alpha_2=1/2$. Thus we define the quantity \begin{eqnarray}
C_\varepsilon&=&\Delta\varepsilon(\pi/3d,5/4,1/2) =
\nonumber\\
&=& \frac{1}{L} \sum_{mn} \cos\left[\frac{2\pi}{3} (m+n) +\frac{\pi}{3}\right] \mathcal G_z(m,n),
\end{eqnarray}
where we used the fact that the ground state is an eigenstate of the total angular momentum with zero eigenvalue:
\begin{equation}
\sum_n \langle S_{zm}S_{zn}\rangle = \langle S_{zm}\sum_n S_{zn}\rangle =0
\end{equation}
The quantity $ C_\varepsilon$ is sensitive to correlations which oscillate with a period 3 and represents a footprint of the critical phase. In fact, in Fig.~\ref{fig:criticaldimer} we show the signal $ C_\varepsilon$ for different values of $\theta$ in the antiferromagnetic phase between $-0.7\pi$ and $0.5\pi$. The results clearly show that the critical phase is very well detected by a positive value of $ C_\varepsilon$. For $\theta=0.2 \pi$, in the Haldane phase and close to the phase transition, we still observe a large positive value, probably due to residual period 3 correlations persisting in the Haldane phase for $\theta>\theta_L$. However for $\theta=0.2\pi$ we find a non-negligible dependence on the size of the sample. A finite size scaling analysis suggests that in the thermodynamical limit for $L\to\infty$ the quantity $ C_\varepsilon$ goes to zero as $1/L$ for $\theta=0.2\pi$, while for the other values of $\theta \ge 0.24 \pi$ it converges to a finite value (see Ref.~\cite{dechiara-spectroscopy}).
Our findings indicate that by measuring $ C_\varepsilon$, which depends only on spin-spin correlations, we are able to infer the occurrence of the phase transition; thus the quantity $C_\varepsilon$ behaves as an order parameter for the critical phase.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{5}
\caption{(Color online) The quantities $C_\varepsilon=\Delta\varepsilon(\pi/3d,5/4,1/2)$ (squares) and $D_\varepsilon =\Delta\varepsilon(\pi/4d,1/2,3/2)$ (circles) as a function of $\theta$ for $L=132$. We distinguish the model phases with different shading: horizontal lines (dimer), no shading (Haldane), oblique lines (critical). The solid and dashed lines are only guides to the eye.}
\label{fig:criticaldimer}
\end{center}
\end{figure}
\subsection{Detecting the dimerized phase}
\label{sec:dimer}
Let us now consider the dimerized phase. As discussed before the presence of peaks at $k_P=\pi/4d$ signals pairing of neighboring spins. Notice that if we average the signal $\varepsilon(k_P,\alpha)$ over $\alpha$ these peaks disappear. Therefore these features are not visible in the magnetic structure factor.
We find that the quantity
\begin{eqnarray}
D_\varepsilon &\equiv&\Delta\varepsilon(\pi/4d,1/2,3/2)
\nonumber\\
&=& -\frac 1L \sum_{mn} \sin\left[\frac{\pi}{2} (m+n)\right] \mathcal G_z(m,n)\end{eqnarray}
is suitable for the detection of the dimer phase. The factor $\sin\left[\pi/2 (m+n)\right]$ ensures that only the pairs of spins with positions $m$ and $n$ of opposite parity contribute to $ D_\varepsilon$. Moreover the $\sin$ function gives an alternating sign depending on whether the distance between the sites is even or odd. Therefore the quantity $ D_\varepsilon$ is an extension to long range correlations of the dimer order parameter $D$ defined in Eq.~\eqref{eq:dimer_order_parameter}.
In Fig.~\ref{fig:criticaldimer} we show the results for the signal $ D_\varepsilon$ for different values of $\theta$. Similar to the dimer order parameter $D$, the quantity $ D_\varepsilon$ is significantly different from zero only in the dimerized phase, therefore acting as an alternative dimer order parameter.
\section{Entanglement detection}
\label{sec:ent}
Detecting entanglement in many-body systems is not an easy task. In magnetic systems, such as the spin chain considered in this work, one can employ spin squeezing inequalities based on collective angular momentum operators (see \cite{TothReview} for a review). An entanglement witness is an operator which is positive valued for all separable (non entangled) states, while there exists at least one entangled state for which the expectation value of the witness is negative.
The witness we propose is based on the effective angular momentum defined in \eqref{eq:Jeff}. The construction follows Refs.~\cite{TothReview,Wiesniak2005}.
As before, we define an effective angular momentum which, we assume, can be mapped onto the light fluctuations:
\begin{equation}
J_\alpha = \sum_m c_m S_{\alpha m} \qquad \alpha=x,y,z
\end{equation}
where now we consider the angular momentum fluctuations in the two other directions.
Let us consider the quantity:
\begin{eqnarray}
V &=& \sum_{\alpha=x,y,z} \Delta J_\alpha^2 =
\sum_{\alpha=x,y,z}\sum_{ij} c_i c_j (\langle S_{\alpha i}S_{\alpha j}\rangle- \langle S_{\alpha i}\rangle\langle S_{\alpha j}\rangle)
\end{eqnarray}
Now if the many-body system is in a product state:
\begin{equation}
\rho_{prod} = \rho_1 \otimes \rho_2 \otimes\dots\otimes \rho_N
\end{equation}
we have:
\begin{equation}
\langle S_{\alpha i}S_{\alpha j}\rangle- \langle S_{\alpha i}\rangle\langle S_{\alpha j}\rangle = \delta_{ij}\left( \langle S_{\alpha i}^2\rangle- \langle S_{\alpha i}\rangle^2\right)
\end{equation}
Using the relation for spin $s$ particles:
\begin{equation}
\langle S_{xi}^2\rangle+\langle S_{yi}^2\rangle+\langle S_{zi}^2\rangle = s(s+1)
\end{equation}
and the inequality:
\begin{equation}
\langle S_{xi}\rangle^2+\langle S_{yi}\rangle^2+\langle S_{zi}\rangle^2 \le s^2
\end{equation}
we see that for product states:
\begin{equation}
\label{eq:prod}
V_{prod}\ge s\sum_i c_i^2
\end{equation}
If we consider separable states:
\begin{equation}
\rho_{sep} = \sum_n p_n \rho_{n,sep}, \quad 0<p_n<1, \quad \sum_n p_n=1
\end{equation}
where each state $\rho_{n,sep}$ in the mixture is separable, we have
\begin{equation}
V_{sep}=\sum_\alpha \Delta J_\alpha^2 \ge \sum_n p_n \sum_\alpha (\Delta J_\alpha^2)_n
\ge \sum_n p_n s\sum_i c_i^2 = s\sum_i c_i^2
\end{equation}
where the first inequality follows from the fact that, for a mixture $\rho=\sum_n p_n \rho_n$, one has $\Delta X^2 \ge \sum_n p_n (\Delta X^2)_n$, with $(\Delta X^2)_n$ the variance evaluated in the $n$th ensemble element; the second inequality comes from Eq.~\eqref{eq:prod}.
Therefore, a possible entanglement witness is given by the quantity:
\begin{equation}
\label{eq:w1}
W = V-s\sum_i c_i^2
\end{equation}
Notice that the coefficients $c_i$ and consequently the quantity $V$ depend on the probe light momentum $k$ and on the shift $a$ between the optical lattice and the probe light in the standing wave configuration. Both parameters can be changed, therefore giving an important and necessary flexibility for the detection of different entangled states.
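As a minimal sketch (not part of the original numerics), the witness of Eq.~\eqref{eq:w1} can be assembled from the standing-wave coefficients and the connected correlation matrices of the three spin components; the toy input below mimics an uncorrelated, isotropic spin-$1$ product state, for which $W\geq0$ must hold.
\begin{verbatim}
# Witness W = sum_alpha Var(J_alpha) - s * sum_i c_i^2 from correlation matrices.
import numpy as np

def witness_W(c, G_list, s=1.0):
    V = sum(c @ G @ c for G in G_list)       # sum over alpha of Var(J_alpha)
    return V - s * np.sum(c**2)

L, d, k, a = 24, 1.0, np.pi / 6, 0.0
i = np.arange(L)
c = 2.0 * np.cos(k * (i * d - a))**2         # standing-wave coefficients c_i

# hypothetical product-state input: diagonal single-site variances 2/3 per component
G_prod = [np.diag(np.full(L, 2.0 / 3.0)) for _ in range(3)]
print("W for the product-like toy state:", witness_W(c, G_prod))   # non-negative
\end{verbatim}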
In Fig.~\ref{fig:wit1} we show $W$ for $a=0$ as a function of $k$ for states in the critical, Haldane and dimer, phases. It is evident that for certain values of $k$, $W$ is negative thus detecting entanglement.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{wit1}
\caption{(Color Online) Expectation value of the entanglement witness $W$ from Eq.~\eqref{eq:w1} for three values of $\theta$ in the three different phases: $\theta=-0.5\pi$ (solid (red), dimer phase) $\theta= 0.102 \pi$ (dashed (green), AKLT point in the Haldane phase), $\theta= 0.3\pi$ (dotted(blue), critical phase). In the numerical simulations we take $L=96$. All these states are clearly detected for small enough values of $k$.}
\label{fig:wit1}
\end{center}
\end{figure}
This method provides an operational entanglement detection scheme which is scalable, robust, and realizable in present-day experiments with ultracold atoms in optical lattices. We stress that the quantity $W$ is very general and can be used even if the sample is subject to thermal fluctuations or disorder.
\section{Conclusions}
\label{sec:conclusions}
We have presented a probing technique based on a matter-light interface for the investigation of quantum correlations in magnetic systems simulated by ultracold atoms in optical lattices. We have shown that this scheme permits one to obtain experimentally the order parameters of nontrivial magnetic phases by homodyne measurement of the fluctuations of the probing light quadratures after the beam crosses the atomic sample. Moreover, we have shown that this technique, which is nondestructive and realizable with present technology, also allows one to detect experimentally the entanglement in nontrivial magnetic many-body systems without carrying out full state tomography.
\begin{acknowledgements}
We thank Oriol Romero-Isart for fruitful discussions. We acknowledge support from the Spanish MICINN (Juan de la Cierva, FIS2008-01236 and QOIT-Consolider Ingenio 2010), Generalitat de Catalunya Grant No. 2005SGR-00343. We used the DMRG code available at {\tt http://www.dmrg.it}.
\end{acknowledgements}
\pagebreak
\section{{\small{}Introduction}}
Recently, the study of many-body quantum systems has taken a prominent role due to new horizons of experimental plausibility, especially using ultracold gases, see \cite{dalibard-rev-mod-phys, cazalilla-et-al} and references therein. The scope of ``many-body quantum systems'' is wide and includes isolated systems \cite{polkonivkov-et-al}, quantum quenches \cite{calabrese-cardy,revue-quench}, a coupling to a reservoir \cite{Prosen2011} and many more \cite{nature-phys-revue} (and references therein). In this study, we wish to explore some aspects of many-body quantum systems and of transport dynamics under continuous and strong measurements.
When dealing with quantum transport, a fundamental question that arises is how the system dynamics is altered by the measurement process and what are the consequences on the transport properties. This is for instance of particular relevance for ultracold atomic and molecular gases in optical lattices.
Intuitively, we expect the emerging transport dynamics to not only be induced by the measurement but to also depend on the monitoring process. This effect has been seen experimentally in \cite{key4} and discussed theoretically in \cite{key3,key3_1}.
Famously, a system under continuous and strong measurement is repeatedly projected onto a single eigenstate, so that it effectively persists in that state for a long time. This is known as the quantum Zeno effect~\cite{zeno}. However, on long enough time scales the system can jump between the eigenstates of the measurement, introducing a rich dynamics on a slow time scale. Here, we report on the dynamics of some simple quantum chains (fermionic and bosonic) under continuous measurement.
A direct consequence of the Zeno effect is to inhibit transport. Even if the many-body system is driven out of equilibrium, the asymptotic steady state supports neither currents nor long-range correlations and hence reflects a kind of induced localization~\cite{local1,local2,many-local,many-local2}. Indeed, besides projecting the system onto eigenstates, the measurement back-action injects stochasticity into the system dynamics, which destroys coherences and produces localization. This is analogous to the many-body localization induced by stochastic randomness observed in models of critical systems~\cite{Bernard-Doyon-17}.
It was previously noticed that, for strong and continuous measurement, the slow dynamics between the measurement eigenstates forms a Markov process \cite{key2,key2_2,BBT-15,Frohlich-et-al}. It is interesting to ask whether the quantum origins of some canonical Markov processes can be identified; here we do just that. We show which quantum setups lead to the symmetric simple exclusion process (SSEP), the inclusion process, and a sub-class of the misanthrope processes. The fact that some of these processes are exactly solvable allows us to decipher statistical properties of quantum transport under strong monitoring.
Because the monitoring apparatus acts as a macro- or mesoscopic device interacting with the many-body system, and hence induces dissipation, these slow dynamics are locally diffusive -- or at least sub-ballistic -- even if transport in the unmonitored system is ballistic. The dynamics are classical because strongly monitoring a system projects it onto the measurement eigenstates, called pointer states. The emergent classical dynamics therefore depends on the monitoring process. Nevertheless, echoes of the quantum origin of these classical dynamics remain.
The method for finding the emerging Markovian dynamics is general and can be used in related experiments as a consistency check in the study of quantum transport processes under measurement, or for quantum systems continuously interacting with a reservoir.
The outline of the paper is the following. In Sec.~\ref{sec:RIP and sto Lind} we recap the repeated interaction technique that produces quantum trajectories and discuss the limit of strong measurement. In Sec.~\ref{sec:Results} we present the emerging dynamics of the XY spin chain under strong measurement of the local $\sigma^z$. Moreover, we discuss the emerging dynamics of the bosonic tight-binding Hamiltonian under strong measurement of the local occupancy, as well as generalizations to variants of the local occupancy. Finally, a detailed discussion is given in Sec.~\ref{sec:Discussion}, pointing out possible implications of the results obtained.
\section{The repeated measurement procedure and quantum monitoring
\label{sec:RIP and sto Lind}}
In this section, we recall how the repeated measurement procedure produces a stochastic Lindblad equation, whose trajectories describe the evolution of a monitored quantum system~\cite{book-qmeasure,book-qmeasure_2}. These are the so-called quantum trajectories~\cite{qtraj-hist,qtraj-hist_2,qtraj-hist_3}. This will serve as the starting point for the processes we wish to consider. We then discuss the effective dynamics emerging in the strong measurement limit.
\subsection{Quantum trajectories}
Let us consider a quantum system with density matrix $\rho$ and a series of probes, all prepared in the state $\ket{\varphi}\bra{\varphi}$. A single probe is sent to interact with the system for a short time, after which it is measured with respect to some observable. This procedure is repeated indefinitely.
Suppose that $s$ and $\ket{s}\bra{s}$ denote the eigenvalues and projectors of the probe measurement. The evolution of the system after the interaction, given that the probe is measured in state $s$, is $\rho \rightarrow \frac{F_s \rho F^\dagger _s}{\pi(s)}$, where the Kraus operators are $F_s = \bra{s} U \ket{\varphi}$ and $\pi(s) = \Tr ( F_s \rho F^\dagger _s)$ is the probability of measuring the probe in state $\ket{s}$. Here $U$ is the unitary evolution operator for the system-plus-probe dynamics prior to the measurement. Notice that $F_s$ acts on the system space only, the probe's degrees of freedom having been contracted out. The unitarity of $U$ implies that the Kraus operators satisfy $\sum_s F^\dagger _s F_s = \mathbb{1}$, which ensures conservation of probability, $\sum_s \pi(s) =1$.
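As a minimal illustration of this update rule (a sketch only, assuming a single-qubit system, a qubit probe prepared in $\ket{\varphi}=\ket{0}$, and a hypothetical coupling unitary $U=e^{-i\sqrt{dt}\,\sigma^z\otimes\sigma^x}$), the Kraus operators can be constructed numerically and the completeness relation checked explicitly:
\begin{verbatim}
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

dt = 0.01
theta = np.sqrt(dt)
A = np.kron(sz, sx)                     # system (x) probe coupling
U = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * A   # exp(-i*theta*A), since A^2 = 1

phi = np.array([1, 0], dtype=complex)   # probe prepared in |0>
basis = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]

# Kraus operators F_s = <s| U |phi>, acting on the system alone
U4 = U.reshape(2, 2, 2, 2)              # indices: (sys_out, probe_out, sys_in, probe_in)
F = [np.einsum('p,apbq,q->ab', s.conj(), U4, phi) for s in basis]

# unitarity of U implies the completeness relation sum_s F_s^dag F_s = 1
assert np.allclose(sum(f.conj().T @ f for f in F), np.eye(2))

# one interaction-plus-measurement step applied to the system state |+><+|
rho = np.full((2, 2), 0.5, dtype=complex)
p = [np.trace(f @ rho @ f.conj().T).real for f in F]    # outcome probabilities pi(s)
s = np.random.choice(2, p=np.array(p) / sum(p))
rho = F[s] @ rho @ F[s].conj().T / p[s]                 # conditional state update
\end{verbatim}
Iterating the last three lines with a fresh probe at each step reproduces the repeated measurement procedure described above.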
Now, let us assume a continuous evolution of the system's density matrix $\rho$. Namely, we consider each ``turn'' of interaction and measurement with a probe to take place over a short time $dt$ and to change the density matrix only slightly on the $dt$ scale.
A simple way to achieve this is to require the Kraus operators to be perturbative expansions around the identity, with corrections scaling as $\sqrt{dt}$; see e.g.~\cite{book-qmeasure,book-qmeasure_2,yann-steph,Pellegrini}. The continuous evolution of the density matrix is then captured by stochastic Lindblad equations, called quantum trajectory equations \cite{qtraj-hist,qtraj-hist_2,qtraj-hist_3},
\begin{equation}
d\rho_t = -\frac{i}{\hbar}\left[H,\rho_t \right]dt + \eta \nu_f L_N(\rho_t)\,dt + \sqrt{\eta \nu_f}\, M_N(\rho_t)\, dB_t , \label{eq:Sto Lind}
\end{equation}
with $L_N(\rho) = N\rho N^\dagger - \frac{1}{2} \lbrace N^\dagger N ,\rho \rbrace$ and $M_N(\rho) = N\rho + \rho N^\dagger - \rho \Tr ( N\rho + \rho N^\dagger)$.
Here $H$ is the Hamiltonian of the system and $N$ is an operator associated with the interaction and measurement of the probes~\footnote{Eq.~\eqref{eq:Sto Lind} can be generalized to include a series of different measurement operators $N_i$ accompanied by their associated Brownian motions $B^i_t$.}. The brackets $\left[\cdot,\cdot\right]$ and $\lbrace \cdot,\cdot\rbrace$ denote the commutator and anti-commutator, and $dB_t$ is the standard It\^o increment satisfying $dB^2_t= dt$. The cumulative classical signal $S_t$ produced by the monitoring process changes in time according to $dS_t=\eta \nu_f \Tr ( \rho_t(N + N^\dagger))\, dt + dB_t$. Its drift is governed by the time-dependent expectation of the system observable $N+N^\dag$, and the signal therefore provides a continuous monitoring of that observable.
Furthermore, $\eta \nu_f$ determines the rate at which information is extracted and $\eta$ is a dimensionless parameter we will vary in what follows.
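For concreteness, a single quantum trajectory can be integrated with a simple Euler--It\^o scheme. The following sketch assumes a single qubit with a hypothetical Hamiltonian $H=\Delta\sigma^x$, measurement operator $N=\sigma^z$, and $\hbar=1$; for large $\eta$ the trajectory spends most of its time pinned near the $\sigma^z$ eigenstates, with rare jumps between them, anticipating the strong-measurement behavior discussed below:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Delta, eta, nu = 1.0, 20.0, 1.0      # assumed parameters, hbar = 1
H, N = Delta * sx, sz

def L_N(r):                           # Lindblad dissipator L_N(rho)
    return N @ r @ N.conj().T - 0.5 * (N.conj().T @ N @ r + r @ N.conj().T @ N)

def M_N(r):                           # stochastic back-action term M_N(rho)
    x = N @ r + r @ N.conj().T
    return x - r * np.trace(x).real

dt, steps = 1e-4, 50000
rho = np.full((2, 2), 0.5, dtype=complex)          # start in |+><+|
mz = np.empty(steps)
for t in range(steps):
    dB = rng.normal(0.0, np.sqrt(dt))              # Ito increment, dB^2 ~ dt
    rho = rho + (-1j * (H @ rho - rho @ H) + eta * nu * L_N(rho)) * dt \
              + np.sqrt(eta * nu) * M_N(rho) * dB
    rho = 0.5 * (rho + rho.conj().T)               # remove round-off anti-Hermitian part
    rho = rho / np.trace(rho).real                 # Euler step preserves the trace only to O(dt)
    mz[t] = np.trace(rho @ sz).real

# histogram of <sigma^z> along the trajectory: strongly peaked near +-1 for large eta
print(np.histogram(mz, bins=5, range=(-1, 1))[0])
\end{verbatim}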
Discarding the outcomes of the measurements leads to the mean dynamics, i.e.\ the average over the possible quantum trajectories of $\rho$. This yields the (mean) Lindblad evolution equation
\begin{equation}
\frac{d}{dt}\bar{\rho}_t = -\frac{i}{\hbar}\left[H,\bar{\rho}_t \right] + \eta \nu_f L_N(\bar{\rho}_t). \label{eq:mea Lindblad}
\end{equation}
The presence of the second Lindblad term reflects the dissipation induced by the measurement back-action.
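A simple consistency check of this averaging, for the same assumed single-qubit example as above ($H=\Delta\sigma^x$, $N=\sigma^z$, $\hbar=1$), is to average a moderate number of simulated trajectories and compare with a direct Euler integration of Eq.~\eqref{eq:mea Lindblad}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Delta, eta, nu = 1.0, 2.0, 1.0          # assumed parameters, hbar = 1
H, N = Delta * sx, sz
dt, steps, ntraj = 1e-3, 2000, 200

def lind(r):                             # deterministic part: commutator + dissipator
    comm = -1j * (H @ r - r @ H)
    diss = N @ r @ N - 0.5 * (N @ N @ r + r @ N @ N)
    return comm + eta * nu * diss

def back(r):                             # stochastic back-action term
    x = N @ r + r @ N
    return x - r * np.trace(x).real

rho0 = np.full((2, 2), 0.5, dtype=complex)

# average of ntraj quantum trajectories
avg = np.zeros((2, 2), dtype=complex)
for _ in range(ntraj):
    r = rho0.copy()
    for _ in range(steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        r = r + lind(r) * dt + np.sqrt(eta * nu) * back(r) * dB
        r = 0.5 * (r + r.conj().T); r /= np.trace(r).real
    avg += r / ntraj

# deterministic mean Lindblad evolution of the same initial state
rbar = rho0.copy()
for _ in range(steps):
    rbar = rbar + lind(rbar) * dt

print(np.round(avg, 3))    # agrees with rbar up to statistical and O(dt) errors
print(np.round(rbar, 3))
\end{verbatim}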
The Lindblad equation is the most general Markovian evolution equation which is trace preserving and completely positive. It can also be derived in different settings \cite{Zoller_Book,Breuer_Book,Lindblad1976}. Here, however, we consider the repeated interaction procedure in order to place the results of the paper in a concrete experimental context.
In what follows, we consider a $1D$ lattice with a series of localized measurement operators $N_i=N_i^\dag$, $i$ indexing the lattice site, each with a non-degenerate spectrum~\footnote{We here restrict ourselves to self-adjoint measurement operators, $N=N^\dag$. Measurements with non-self-adjoint operators may also be considered.}. We may thus rewrite the mean Lindblad equation in the form
\begin{equation}
\frac{d}{dt}\bar{\rho}_t = L(\bar{\rho}_t) + \eta\, L_b(\bar{\rho}_t) \label{eq:Lindblad on a lattice}
\end{equation}
with $L(\rho) = -\frac{i}{\hbar}\left[H,\rho \right] $ and $L_b(\rho) = \nu_f \sum_j L_{N_j}(\rho)$. The system Hamiltonian $H=\sum_i h_i$ is the sum of local interactions.
\subsection{Strong measurements}
Let us consider the limit $\eta \rightarrow \infty$, where the dissipative part of the Lindblad equation (\ref{eq:Lindblad on a lattice}) dominates the Hamiltonian evolution. In this limit, any component of the density matrix not belonging to the kernel of $L_b$ is exponentially suppressed in time. Hence, the system is projected onto a state in $\ker L_b$ and remains there for a long time. This is a manifestation of the quantum Zeno effect.
On the time scale $s= t/\eta$, for $t,\eta \rightarrow \infty$, a non-trivial dynamics emerges, in which the density matrix exhibits an interplay between the pointer states, i.e.\ the eigenstates of the measurement operators, which span $\ker L_b$.
The simplest approach for capturing the dynamics emerging under strong dissipation consists of looking at the mean evolution (\ref{eq:Lindblad on a lattice}). Second order perturbation theory then yields the mean effective evolution equation $\frac{d}{ds}\bar{\rho}_s = \mathcal{L}(\bar{\rho}_s )
$ where
\begin{equation}
\mathcal{L}(\rho ) = - \Pi_0 L (L^\perp _b)^{-1} L \Pi_0\, ( \rho).
\label{eq: strong diss evo}
\end{equation}
Here, $\Pi_0$ is the projector onto $\ker L_b$ and $(L^\perp _b)^{-1}$ denotes the inverse of the restriction of $L_b $ onto the complement of $\ker L_b$. Since the slow dynamics is composed of the interplay between the pointer states, the mean density matrix can be written as
\begin{equation}
\bar{\rho}_s = \sum_{{\epsilon} } \bar Q_s({\epsilon}) \mathbb{P}({\epsilon}),
\end{equation}
where $\mathbb{P}({\epsilon})$ are the projectors onto the pointer states denoted by $|\epsilon\rangle$ and $\bar Q_s(\epsilon)$ are their respective time-dependent weights, with $\sum_\epsilon \bar Q_s(\epsilon)=1$. Therefore, the evolution of the mean density matrix $\bar{\rho}_s$ is contained in the time evolution of the $\bar Q_s(\epsilon)$.
A more informative approach to the effective slow dynamics consists of looking at the quantum trajectories of the system density matrix $\rho$, whose evolution is governed by the stochastic equation (\ref{eq:Sto Lind}), in the limit $\eta\to\infty$ at fixed $s=t/\eta$. One then learns \cite{BBT-15,Frohlich-et-al} that, at any given fixed time $s$, the system is in one of the pointer states with probability one. Thus, at any fixed time $s$, the system is in a pure, but random, time-dependent pointer state $\rho_s=\mathbb{P}({\epsilon_s})$. The probability for the system to be in a given pointer state $\mathbb{P}({\epsilon})$ is $\bar Q_s(\epsilon)$. As shown in \cite{BBT-15,Frohlich-et-al}, the slow dynamics then reduces to Markovian quantum jumps from one pointer state to another, with jump rates depending on the system Hamiltonian and on the measurement operators~\footnote{The convergence of the quantum trajectory dynamics to a Markov chain on the set of pointer states is weak, in the sense that it only ensures the convergence of the $N$-point functions; it is not a strong convergence.}. The Markovian evolution of the probabilities $\bar Q_s(\epsilon)$ is then equivalent to (\ref{eq: strong diss evo}).
For our purpose, it is sufficient to say that $\bar Q_s(\epsilon)$ follows a Markovian evolution. In what follows we identify the emerging Markovian dynamics for a few choices of many-body Hamiltonian dynamics.
\section{Results
\label{sec:Results}}
In this section we derive our main results. Namely, we find the Markovian dynamics describing the large-$\eta$ limit for a few choices of Hamiltonians and measurements.
\subsection{Spin chain with strong local $\sigma^z$ measurements
\label{subsec: spin chain }}
Let us consider a periodic system of $L$ sites occupied by spin-$\frac{1}{2}$ fermions. The system evolves according to the XY Hamiltonian $H= \varepsilon \sum_j (\sigma^x _j \sigma^x _{j+1} + \sigma^y _j \sigma^y _{j+1}) $ and the measurement operators are $N_j = \sigma_j ^z$. The mean dynamics (\ref{eq:Lindblad on a lattice}) is then that of the XY model with dephasing noise, see e.g. \cite{dephasing,dephasing2,dephasing3}, but the stochastic quantum trajectories typically differ from this mean evolution.
At the large $\eta$ limit, using \eqref{eq: strong diss evo} we find
\begin{equation}
\frac{d}{ds} \bar{\rho}_s = -\frac{1}{2}D \sum_j \left[ \sigma^+ _j \sigma^- _{j+1}, \left[ \sigma^- _j \sigma^+ _{j+1}, \bar{\rho}_s \right] \right] + \textnormal{h.c.}, \label{eq: eff dyn spin chain}
\end{equation}
where $\sigma^\pm = \sigma^x \pm i\sigma^y $ and $D = \frac{\varepsilon^2}{\hbar^2 \nu_f}$.
The pointer states here are $\mathbb{P}(\epsilon) = \otimes_j\, \mathbb{P}^{\varepsilon_j} _j$, where $\varepsilon_j = \pm$ and $\mathbb{P}^{\varepsilon_j} _j$ are the projectors onto the local $\ket{\pm}$ states, so that strongly measuring the local $\sigma^z$ gives access to the instantaneous spin profile. Using the local two-site pointer states, one obtains
\begin{eqnarray}
\frac{1}{D}\mathcal{L}(\mathbb{P}^{+} _j \otimes \mathbb{P}^{-} _{j+1}) &=& \mathbb{P}^{-} _j \otimes \mathbb{P}^{+} _{j+1}-\mathbb{P}^{+} _j \otimes \mathbb{P}^{-} _{j+1} \nonumber
\\
\frac{1}{D}\mathcal{L}(\mathbb{P}^{-} _j \otimes \mathbb{P}^{+} _{j+1}) &=& \mathbb{P}^{+} _j \otimes \mathbb{P}^{-} _{j+1}-\mathbb{P}^{-} _j \otimes \mathbb{P}^{+} _{j+1}
\\ \nonumber
\mathcal{L}(\mathbb{P}^{+} _j \otimes \mathbb{P}^{+} _{j+1}) &=& 0
\\ \nonumber
\mathcal{L}(\mathbb{P}^{-} _j \otimes \mathbb{P}^{-} _{j+1}) &=& 0.
\end{eqnarray}
If we interpret the local $\mathbb{P}^\pm _j$ states as site $j$ being occupied or empty, the process describes the SSEP. Namely, a particle at site $j$ can hop to site $j\pm1$ with rate $D$, but only if site $j\pm1$ is empty (see Fig.~\ref{fig:Illustration-of-SSEP}). The resulting SSEP dynamics is diffusive. This is starkly different from the limit $\eta \rightarrow 0$, where the dynamics is expected to be ballistic \cite{Giamarchi_book}.
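The classical rates quoted above can be checked directly against the projection formula \eqref{eq: strong diss evo}. The following sketch (a numerical illustration only, for two sites, with $\hbar=\nu_f=1$ and an arbitrary value of $\varepsilon$) builds the vectorized superoperators $L$ and $L_b$, projects onto the diagonal (pointer-state) subspace, and reads off the classical generator; it yields a pure exchange $\ket{+-}\leftrightarrow\ket{-+}$ at a rate proportional to $\varepsilon^2/\nu_f$, the precise prefactor depending on the normalization conventions:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
i2 = np.eye(2, dtype=complex)

eps, nu = 0.3, 1.0                         # arbitrary values, hbar = 1
H = eps * (np.kron(sx, sx) + np.kron(sy, sy))
Ns = [np.kron(sz, i2), np.kron(i2, sz)]    # local sigma^z measurements

d = 4
Id = np.eye(d * d, dtype=complex)
spre  = lambda A: np.kron(np.eye(d), A)    # A . rho   (column-stacking convention)
spost = lambda A: np.kron(A.T, np.eye(d))  # rho . A

L  = -1j * (spre(H) - spost(H))                              # unitary part
Lb = nu * sum(np.kron(N.conj(), N) - Id for N in Ns)         # dephasing part (N_j^2 = 1)

# projector onto ker L_b, i.e. onto matrices diagonal in the measurement basis
P0 = np.zeros((d * d, d * d))
for i in range(d):
    v = np.zeros(d * d); v[i * (d + 1)] = 1.0                # vec(|i><i|)
    P0 += np.outer(v, v)

# effective generator: the pseudoinverse inverts L_b on the complement of its kernel
Leff = -P0 @ L @ np.linalg.pinv(Lb) @ L @ P0

# classical rate matrix on the pointer states |00>, |01>, |10>, |11>
W = np.zeros((d, d))
for b in range(d):
    vb = np.zeros(d * d); vb[b * (d + 1)] = 1.0
    out = Leff @ vb
    for a in range(d):
        W[a, b] = out[a * (d + 1)].real
print(np.round(W, 4))    # exchange |01> <-> |10> only, at a rate ~ eps^2 / nu
\end{verbatim}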
So far we have considered a periodic chain, avoiding any discussion of the boundaries. Of particular interest are possible couplings to reservoirs, which push the system out of equilibrium.
In a footnote~\footnote{The scaling $ \frac{d}{dt} \rho = L(\rho)+ \eta L_b(\rho) + \eta^{-1} L_{\textnormal{bdry}}(\rho) $ keeps the bulk and boundary dynamics on equal footing. Then, the evolution equation is given by $\frac{d}{ds}\rho = \Pi_0 L_{\textnormal{bdry}}\Pi_0 \rho - \Pi_0 L (L^\perp _b )^{-1} L \Pi_0 \rho$. } we explain how to put the boundary and bulk dynamics on equal footing. We can then add the boundary dissipative terms $L^{\pm} _{\textnormal{1,N}}= \sqrt{\frac{\alpha^{\pm} _\textnormal{1,N} }{2} } \sigma^{\pm} _{1,N}$ to reconstruct, in the large dissipation limit, the driven SSEP dynamics on $N$ sites explored in \cite{key5__,Derrida2004}, where $\alpha^{\pm} _\textnormal{1,N}$ are the incoming/outgoing rates at sites $1$ and $N$.
Once the connection to the classical SSEP has been made, we can use it to learn about the original quantum system and to compare its behavior with and without monitoring. Let us choose a concrete setup, widely used \cite{CFGCGF1,CFGCGF2}, to drive the system away from equilibrium. We consider the quantum system on an infinite line, prepared in a domain-wall initial state. Namely, we set the initial density matrix to be $\rho_\mathrm{initial}=\rho_l\otimes\rho_r$, with $\rho_{l,r}\propto\otimes_{i\lessgtr 0}\,e^{-\mu_{l,r}\sigma_i^z}$, where $\mu_{l,r}$ are different left/right chemical potentials, and then let the system evolve. The asymmetry between the left and right chemical potentials produces a spin flow through the origin that we may try to characterize. In the absence of monitoring this flow is ballistic; in the presence of monitoring it is diffusive. Because this setup leads to the SSEP model studied in \cite{ssep-wall}, we learn from that reference that the spin current and all spin quantum correlations die off on a diffusive time scale, as $1/\sqrt{t}$. Therefore, in the strong monitoring limit, the asymptotic steady state supports no current and is localized with vanishing correlation length, in contrast with the infinite correlation length in the absence of monitoring.
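The diffusive decay of the current is easy to reproduce on the classical side. The following Monte Carlo sketch (with arbitrary rate, size and duration) simulates the SSEP with a sharp domain-wall initial condition, a toy stand-in for the fully polarized limit $\mu_{l,r}\to\pm\infty$, and records the integrated current through the central bond, which grows as $\sqrt{t}$, i.e.\ the instantaneous current decays as $1/\sqrt{t}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
D, half, T = 1.0, 200, 400.0               # hop rate, half system size, total time
n = np.zeros(2 * half, dtype=int)
n[:half] = 1                               # domain wall: left half occupied, right half empty
mid = half                                 # current is counted through the bond (mid-1, mid)

t, Q = 0.0, 0
times, charge = [], []
while t < T:
    active = np.flatnonzero(n[:-1] != n[1:])      # bonds across which a hop is allowed
    if len(active) == 0:
        break
    t += rng.exponential(1.0 / (D * len(active))) # Gillespie waiting time
    b = rng.choice(active)                        # hop across bond (b, b+1)
    if b == mid - 1:
        Q += 1 if n[b] == 1 else -1               # signed transfer through the middle
    n[b], n[b + 1] = n[b + 1], n[b]
    times.append(t); charge.append(Q)

# transferred charge at T/4 and T: the ratio is ~2, consistent with Q(t) ~ sqrt(t)
for frac in (0.25, 1.0):
    i = min(np.searchsorted(times, frac * T), len(charge) - 1)
    print(frac * T, charge[i])
\end{verbatim}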
\begin{figure}
\begin{centering}
\includegraphics[scale=0.25]{ssepmis.jpg}
\caption{The rates of the emerging dynamics. (a) The SSEP dynamics, where particles can jump to empty neighboring sites with rate $D$. (b) The misanthrope-model dynamics, where a particle jumps from site $j$ to site $j\pm 1$ with rate $R_{n_j,n_{j\pm 1}}$, depending on the local occupancies $n_j,n_{j\pm1}$ of site $j$ and of the target site $j\pm 1$.
\label{fig:Illustration-of-SSEP}}
\end{centering}
\end{figure}
\subsection{Boson chain with strong local occupancy measurements
\label{subsec: boson chain}}
Consider again a periodic chain of $L$ sites, now occupied by bosons following the tight-binding Hamiltonian $H= \varepsilon\sum_j (a^\dagger _j a_{j+1} + a _j a^\dagger _{j+1} )$, where $a _j , a^\dagger _{j}$ are bosonic annihilation and creation operators satisfying the canonical commutation relations $\big[ a _j , a^\dagger _{k}\big] = \delta_{j,k} $. We take the local measurement operators to be $N_j = \hat{n}_j= a^\dagger _j a_j$, the local occupancy operators. Similarly to Sec.~\ref{subsec: spin chain }, we find from \eqref{eq: strong diss evo} that the emerging dynamics is
\begin{equation}
\frac{d}{ds} \bar{\rho}_s = -\frac{1}{2} D \sum_j \left[ a _j a^\dagger _{j+1}, \left[ a^\dagger _j a _{j+1} , \bar{\rho}_s \right] \right] + \textnormal{h.c.}
\end{equation}
The pointer states are $\otimes_j \mathbb{P}^{n_j} _j $, where $\mathbb{P}^{n_j} _j = \ket{n_j}\bra{n_j}$ is the projector onto the Fock state with $n_j$ bosons on the $j$-th site. We interpret the pointer states as configurations specifying the (unbounded) number of particles at each site. Therefore, by obtaining
\begin{eqnarray}
\mathcal{L}(\mathbb{P}^{n_j} _j \otimes \mathbb{P}^{n_{j+1}} _{j+1} ) & = &
R_{n_{j}+1,n_{j+1}-1}
\mathbb{P}^{n_j +1} _j \otimes \mathbb{P}^{n_{j+1} -1} _{j+1} - R_{n_{j},n_{j+1}}
\mathbb{P}^{n_j} _j \otimes \mathbb{P}^{n_{j+1}} _{j+1}
\\ \nonumber
&& + R_{n_{j+1}+1,n_{j}-1}\mathbb{P}^{n_j -1} _j \otimes \mathbb{P}^{n_{j+1} +
1} _{j+1} - R_{n_{j+1},n_{j}}
\mathbb{P}^{n_j} _j \otimes \mathbb{P}^{n_{j+1}} _{j+1}
\end{eqnarray}
with $R_{x,y}=x(y+1)$, we can fully define the Markov process (see Fig.~\ref{fig:Illustration-of-SSEP}). A particle at site $j$ can jump to a neighboring site (say $j+1$) with rate $R_{n_{j},n_{j+1}}$, depending on the local occupancies of sites $j$ and $j+1$. We have thus found that, in the large-$\eta$ limit, the emerging dynamics of the bosonic chain is the inclusion process (with $m=2$, see \cite{Grosskinsky2011}). The shift of $y$ by $1$ in $R_{x,y}$ is a consequence of the canonical commutation relations, and hence it is an echo of the well-known fact, at the core of stimulated emission, that bosons have a tendency to bunch.
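The origin of the factor $y+1$ can be made explicit: the jump rate from site $j$ to site $j+1$ is proportional to the squared matrix element of the hopping term between the corresponding Fock states,
\[
\big|\bra{n_j-1,\,n_{j+1}+1}\, a_j\, a^\dagger_{j+1}\, \ket{n_j,\,n_{j+1}}\big|^2
= n_j\,\big(n_{j+1}+1\big)=R_{n_j,n_{j+1}},
\]
so the $+1$ on the occupancy of the target site is the bosonic enhancement factor, whereas in the fermionic case of Sec.~\ref{subsec: spin chain } the analogous matrix element vanishes unless the target site is empty.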
We note, however, that our process satisfies detailed balance. This should not come as a surprise: the initial setup conserves left-right symmetry, so naively no breaking of detailed balance is to be expected.
In what comes next, we study the Markovian limit for measurement schemes based on space-dependent functions of the occupancy.
\subsection{Inhomogeneous measurements}
Let us now consider again the bosonic chain with the tight-binding Hamiltonian, but with space-dependent measurements $N_j = f_j(\hat{n}_j) $, where the $f_j$ are analytic functions of their arguments. Here we stress again the requirement of non-degeneracy of the $N_j$ operators, so that the pointer states remain as in Sec.~\ref{subsec: boson chain}. The dynamics of the pointer states is then given by
\begin{eqnarray}
\mathcal{L}(\mathbb{P}^{n_j} _j \otimes \mathbb{P}^{n_{j+1}} _{j+1} ) & = &
R^{j,j+1} _{n_{j}+1,n_{j+1}-1}
\mathbb{P}^{n_j +1} _j \otimes \mathbb{P}^{n_{j+1} -1} _{j+1} - R^{j,j+1} _{n_{j},n_{j+1}}
\mathbb{P}^{n_j} _j \otimes \mathbb{P}^{n_{j+1}} _{j+1}
\\ \nonumber
&& + R^{j+1,j} _{n_{j+1}+1,n_{j}-1}\mathbb{P}^{n_j -1} _j \otimes \mathbb{P}^{n_{j+1} +
1} _{j+1} - R^{j+1,j} _{n_{j+1},n_{j}}
\mathbb{P}^{n_j} _j \otimes \mathbb{P}^{n_{j+1}} _{j+1},
\end{eqnarray}
where now
\begin{equation}
R^{j,j+1}_{x,y} = \frac{x(y+1)}{\left(f_j(x)-f_j(x-1)\right)^2 + \left(f_{j+1}(y)-f_{j+1}(y+1)\right)^2 }
\end{equation}
denotes the rate for a particle to jump from site $j$ to site $j+1$ (see Fig.~\ref{fig:Illustration-of-SSEP}). This process is known as (a subclass of) the misanthrope model \cite{key7}, in which the jump rate between neighboring sites $j,k$ depends only on the occupancies of the sites $j,k$. Unsurprisingly, detailed balance persists even when we carry out a space-dependent measurement scheme: while we have eliminated translational invariance, we have not explicitly broken the left-right symmetry, so no current is generated and detailed balance is recovered.
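To make the role of the measurement profile concrete, the short sketch below evaluates these rates for a hypothetical choice of site-dependent functions (say $f_j(n)=n$ on even sites and $f_j(n)=n^2$ on odd sites, chosen purely for illustration):
\begin{verbatim}
def rate(f_from, f_to, x, y):
    """Jump rate R^{j,j+1}_{x,y}: a particle leaves a site with occupancy x
    (monitored through f_from) for a site with occupancy y (monitored through f_to)."""
    denom = (f_from(x) - f_from(x - 1)) ** 2 + (f_to(y) - f_to(y + 1)) ** 2
    return x * (y + 1) / denom

f_even = lambda n: float(n)        # hypothetical: plain occupancy on even sites
f_odd  = lambda n: float(n) ** 2   # hypothetical: squared occupancy on odd sites

for x, y in [(1, 0), (2, 0), (2, 3)]:
    print(x, y, rate(f_even, f_odd, x, y), rate(f_odd, f_even, x, y))
# with the homogeneous choice f_j(n) = n the denominator is constant,
# and the rate is again proportional to the inclusion-process rate x*(y+1)
\end{verbatim}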
This last fact is general: the Markov chain obtained by strongly monitoring a quantum system is always doubly stochastic~\footnote{Doubly stochastic means that the unit vector $(1,1,\cdots)$ is both a left and a right eigenvector of the Markov matrix.} as long as the system dynamics in the absence of monitoring is unitary, i.e.\ the system Lindbladian $L$ contains no dissipative terms. Indeed, if $L(\rho)=-\frac{i}{\hbar}[H,\rho]$, then the effective Lindbladian (\ref{eq: strong diss evo}) annihilates the identity matrix, $\mathcal{L}(\mathbb{1})=0$. Thus, no current may be generated by monitoring a Hamiltonian system without feedback, even by playing with the measurement operators (e.g.\ even if those operators break the left-right symmetry).
\section{Discussion
\label{sec:Discussion}}
The behavior of lattice systems subject to strong monitoring was studied here for a few models. The transport was found to be diffusive, in contrast to the ballistic transport found for the same models in the absence of monitoring. In the strong measurement limit, the effective dynamics is that of a classical Markov chain for which detailed balance holds, so that, as expected, no current can be generated by passive monitoring. However, since monitoring gives access to extra information via the output signals, one may choose to feed back on the system~\cite{q-feedback}, or to modulate the measurement process as in \cite{BBT-control,feed-control}, in order to, e.g., break detailed balance or make the system state follow a prescribed trajectory \cite{Mirrahimi}.
Moreover, in the strong measurement limit the models in question were found to follow the SSEP for fermionic chains and the inclusion process (a misanthrope process) for bosonic chains. It would be interesting to see whether models with more conserved quantities would follow the nonlinear fluctuating hydrodynamics theory \cite{Spohn2015,Popkov2015}, which may help explain which type of transport behavior one should expect to encounter.
The emerging dynamics in the large dissipation limit can be interpreted in terms of completely classical models, e.g.\ the SSEP and the inclusion process. We therefore find it interesting that an echo of the quantum statistics remains: fermions obey an exclusion constraint in the emerging dynamics, while bosons do not and instead show a tendency to bunch.
While the bosonic models may show a preference for bunching, they do not condense \cite{Grosskinsky2011,key7}. This is unsurprising, as we are dealing with a $1D$ model for bosons; common wisdom suggests exploring a $3D$ model in order to observe condensation. It would be interesting to check whether a condensation transition occurs for generalizations of our bosonic models, in the large dissipation limit and in general.
\begin{acknowledgments}
This work has been supported by ANR contract ANR-14-CE25-0003. OS would like to thank Ori Hirschberg and Takahiro Nemoto for useful discussions. DB thanks Michel Bauer for discussions and collaborations.
\end{acknowledgments}
\newpage
\begin{widetext}
\end{widetext}
\bibliographystyle{apsrev4-1}
\section{Introduction}
One of the central questions in the quantum theory of classically
chaotic systems is whether the fluctuations in the spectra follow the
predictions of random matrix theory (RMT). The spectral form factor, which
is the Fourier transform of the two-point correlation function, is
often used to characterize these fluctuations.
The advantage of the form factor is that it has a convenient expansion
in terms of pairs of periodic orbits of the classical system. This
expansion is an application of trace formulae (see \cite{CPS98} for a
review) and has been a starting point for many investigations. The
progress of understanding the role of periodic orbits in the universal
behavior of the form factor of different systems is marked by such
milestones as the diagonal approximation \cite{Ber85}, the first
off-diagonal contribution \cite{Sie02,SR01} and the most recent preprint
\cite{MHB
|
{n}A}e^{-D\|y\|^2}\langle y,e_1\rangle^2dy\\
&\leq&
Cn^{-d/2-1}\int_{\mathbb R^d}e^{-D\|y\|^2}\langle y,e_1\rangle^2dy=O\left(n^{\frac{-d-2}{2}}\right).
\end{eqnarray*}
The way to see \eqref{eq:mix_der_fr} is similar -- this time what we need to notice is that
$\left|
e^{-i\langle x,z\rangle}-e^{-i\langle x,z+e_1\rangle}-e^{-i\langle x,z+e_2\rangle}+e^{-i\langle x,z+e_1+e_2\rangle}
\right|
\leq
\langle x,e_1\rangle\cdot\langle x,e_2\rangle$
and that when we substitute $y=\sqrt{n}x$ we get
$\langle x,e_1\rangle\cdot\langle x,e_2\rangle=n^{-1}\langle y,e_1\rangle\cdot\langle y,e_2\rangle$.
\end{proof}
\begin{lemma}\label{lem:lbound}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
Let $\{X_n\}$ be a RWRE starting at the origin.
Let $\sigma^2$ be the (annealed) covariance matrix of $X_{\tau_2}-X_{\tau_1}$, and let $U$ be the expectation of
$X_{\tau_2}-X_{\tau_1}$. Let $\Sigma$ be the inverse matrix of $\sigma^2$. Let $\bar U=\BbbE(X_{T_{\partial\mathcal P(0,N)}})$. Fix $a>0$. There exists a constant $c$ such that
for every $x\in\partial^+\mathcal P(0,N)$, if
\[
(x-\bar U)^{\tiny T}\Sigma(x-\bar U)<a\cdot\frac{N^2}{\langle U,e_1\rangle},
\]
then
\[
\BbbP\big(X_{T_{\partial\mathcal P(0,N)}}=x\big)>
cN^{1-d}e^{-3a}.
\]
\end{lemma}
\begin{proof}
The proof is very similar to that of Lemma \ref{lem:ann_der}, but slightly simpler. We continue to use the
notations $B(l,k)$, $B(l)$ and $\hat B(l)$.
Let $\delta=\delta(a)$ be a small number.
By the local limit theorem (see, e.g., \cite{durrett}), for $l>N^2/2$ and every $y\in\mathcal P(0,N)$ such that $\langle y,e_1\rangle=l$,
\begin{equation}\label{eq:llt}
\BbbP\big(Z_l=y\,;\,B(l,k)\big)\geq (2\pi\beta)^{-\frac 12}k^{-\frac{d}{2}}
\left(e^{-(y-Uk)^{\tiny T}\Sigma(y-Uk)/2k}-\delta\right)
\end{equation}
where $\beta$ is the determinant of $\sigma^2$.
Fix $l=N^2$ and let $M=\frac{{N^2}}{\langle U,e_1\rangle}$. Then,
using
\eqref{eq:llt}, for $x\in\partial^+\mathcal P(0,N)$,
\begin{eqnarray*}
\BbbP\big(Z_{N^2}=x\big)&\geq&\BbbP\big(Z_{N^2}=x\,;\,B({N^2})\big)\\
&\geq& \sum_{k=\lceil M-\sqrt{M}\rceil}^{\lceil M+\sqrt{M}\rceil}
\BbbP\big(Z_{N^2}=x\,;\,B({N^2},k)\big)\\
&\geq& (\pi\beta)^{-\frac 12} M^{-\frac{d}{2}}
\sum_{k=\lceil M-\sqrt{M}\rceil}^{\lceil M+\sqrt{M}\rceil}
\left(e^{-\frac{(x-Uk)^{\tiny T}\Sigma(x-Uk)}{M-\sqrt{M}}}-\delta\right)\\
&\geq& (\pi\beta)^{-\frac 12} M^{-\frac{d-1}{2}}
\left(
e^{-\frac{(x-\bar U)^{\tiny T}\Sigma(x-\bar U)}{M-\sqrt{M}}
-\frac{(\sqrt{M}U)^{\tiny T}\Sigma(\sqrt{M}U)}{M-\sqrt{M}}}
-\delta
\right)\\
&\geq& cN^{1-d}e^{-3a}
\end{eqnarray*}
\end{proof}
\subsection{Quenched exit estimates}\label{subsec:quenched}
In this subsection we show that with very high probability the quenched exit distribution from a basic block is similar to the annealed one. This is the only part of the paper that requires the high-dimension assumption.
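Before stating the result, it may help to visualize what it asserts. The sketch below is only a toy, low-dimensional illustration (it does not satisfy the assumptions of the paper, and the geometry and parameters are arbitrary): it compares the coarse-grained exit distribution of a walk in one fixed environment with the environment-averaged (annealed) one, which is the type of comparison the proposition quantifies.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M, walks = 60, 3000                      # exit level in the e_1 direction, number of walks
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def environment(shape=(2 * 60 + 1, 4 * 60 + 1)):
    # i.i.d. uniformly elliptic environment with a drift in the e_1 direction:
    # base jump probabilities (0.4, 0.1, 0.25, 0.25) for (+e_1, -e_1, +e_2, -e_2)
    p = np.array([0.4, 0.1, 0.25, 0.25]) + rng.uniform(-0.05, 0.05, size=shape + (4,))
    return p / p.sum(axis=2, keepdims=True)

def exit_height(omega):
    # walk from (0, 0) until the first coordinate reaches M; the environment is
    # periodized, which is harmless for this toy illustration
    x = np.array([0, 0])
    while x[0] < M:
        p = omega[x[0] % omega.shape[0], x[1] % omega.shape[1]]
        x = x + steps[rng.choice(4, p=p)]
    return x[1]

omega0 = environment()
quenched = np.array([exit_height(omega0) for _ in range(walks)])        # one fixed environment
annealed = np.array([exit_height(environment()) for _ in range(walks)]) # fresh environment each walk

# coarse-grained exit distributions, cf. the cubes Q of side N^theta in the proposition
bins = np.arange(-30, 31, 5)
hq, _ = np.histogram(quenched, bins=bins, density=True)
ha, _ = np.histogram(annealed, bins=bins, density=True)
print(np.max(np.abs(hq - ha)))   # small for a "typical" environment (omega in G(N))
\end{verbatim}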
The goal of this subsection is the following proposition:
\begin{proposition}\label{prop:quenched}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
Fix $0<\theta\leq 1$.
There exists an event $G(N)=G(\theta,N)\subseteq\Omega$ such that $P(G(N))=1-N^{-\xi(1)}$ and such that
for all $\omega\in G(N)$,
\begin{enumerate}
\item For every $z\in\tilde{\mathcal P}(0,N)$,
\begin{equation*}
P_\omega^z\big(T_{\partial\mathcal P(0,N)}\neq T_{\partial^+\mathcal P(0,N)}\big)=N^{-\xi(1)}.
\end{equation*}
\item\label{item:toch0} For every $z\in\tilde{\mathcal P}(0,N)$,
\begin{equation}\label{eq:toch0}
\left\|
E_\omega^z\left[
X_{T_{\partial\mathcal P(0,N)}}
\right]
-\BbbE^z\left[
X_{T_{\partial\mathcal P(0,N)}}
\right]
\right\|\leq R_3(N)
\end{equation}
\item
For every $z\in\tilde{\mathcal P}(0,N)$ and every ($d-1$ dimensional) cube $Q\subseteq\partial^+\mathcal P(0,N)$ of side length $\left[N^\theta\right]$,
\begin{equation}\label{eq:quenched_exit}
\left|
P_\omega^z\left[X_{T_{\partial\mathcal P(0,N)}}\in Q\right]
-\BbbP^z\left[X_{T_{\partial\mathcal P(0,N)}}\in Q\right]
\right|
<N^{(\theta-1)(d-1)-\theta\big(\frac{d-1}{d+1}\big)}.
\end{equation}
\end{enumerate}
\end{proposition}
From Proposition \ref{prop:quenched} we get the following corollary:
\begin{corollary}\label{cor:quenched}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
Fix $\theta<1/2$ and let $G(N)$ be as in Proposition \ref{prop:quenched}.
Let $\omega\in G(N)$, and let $z\in\tilde{\mathcal P}(0,N)$. Let $D=D(\omega,z)$ be the quenched exit distribution
from $\mathcal P(0,N)$, and let $\bar{D}=\bar{D}(\omega,z)$ be $D$ conditioned on $\partial^+\mathcal P(0,N)$. Let $\mathbb D=\mathbb D(z)$ be the annealed exit distribution, and let $\bar\mathbb D$ be the annealed exit distribution conditioned on
$\partial^+\mathcal P(0,N)$.
Then
\begin{enumerate}
\item\label{item:from_right} $D(\partial^+\mathcal P(0,N))= 1-N^{-\xi(1)}$.
\item\label{item:y+z} If $X\sim\bar{D}$, then it can be written as $X=Y+Z$, where $\|Z\|\leq (d+1)N^\theta$ a.s. and
$Y\sim (\bar{\mathbb D}+D_2)$, where
$D_2$ is a signed measure such that
\begin{enumerate}
\item\label{item:mass} $\|D_2\|:=\sum_x|D_2(x)|
\leq\lambda=N^{-{\theta}\frac{d-1}{2(d+1)}}$.
\item\label{item:balanced} $\sum_{x}D_2(x)=0$.
\item\label{item:first_mom} $\sum_{x}xD_2(x)=0$.
\item\label{item:sec_mom} $\sum_{x}|D_2(x)|\|x-E_{\bar\mathbb D}\|_1^2 \leq \lambda N^2$,
where $E_{\bar\mathbb D}$, a vector in $\mathbb R^d$, is the expectation of the probability distribution $\bar\mathbb D$.
\end{enumerate}
\end{enumerate}
\end{corollary}
\begin{proof}[Proof of Corollary \ref{cor:quenched}]
Part \ref{item:from_right} is trivial, so we only need to prove Part \ref{item:y+z}.
Partition $\partial^+\mathcal P(0,N)$ into disjoint cubes
$Q_1,Q_2,\ldots Q_n$ of side-length $N^\theta$. We get $n=R_5(N)^{d-1}N^{(d-1)(1-\theta)}$ such cubes.
For every $1\leq k\leq n$,
\[
|\bar{D}(Q_k)-\bar{\mathbb D}(Q_k)| \leq
N^{(\theta-1)(d-1)-\theta\big(\frac{d-1}{d+1}\big)}.
\]
We define $Y^\prime$ as follows: For every $k$, we take $Y^\prime$ to be in $Q_k$ whenever $X\in Q_k$.
Conditioned on the event $Y^\prime\in Q_k$, we take $Y^\prime$ to be independent of $X$, with
\[
{\bf P}(Y^\prime=x|Y^\prime\in Q_k)=\frac{\bar\mathbb D(x)}{\bar\mathbb D(Q_k)}
\]
for every $x\in Q_k$.
Then clearly $\|X-Y^\prime\|<dN^\theta$.
Therefore, $\|E(Y^\prime)-E(X)\|<dN^\theta$. By \eqref{eq:toch0}, $\|E(X)-E_{\bar\mathbb D}\|\leq R_3(N)$ and
thus $\|E(Y^\prime)-E_{\bar\mathbb D}\|<(d+1)N^\theta$. Then there exists a variable $U$, independent
of $Y^\prime$ and $X$, such that
$\|U\|<(d+1)N^\theta+1$ and $E(Y^\prime+U)=E_{\bar\mathbb D}$. Define $Y=Y^\prime+U$. Then Parts \ref{item:balanced} and \ref{item:first_mom} are immediate.
To see Part \ref{item:mass}, we first note that
\[
{\bf P}(Y=x)=\sum_{u\,:\,\|u\|<(d+1)N^\theta+1}{\bf P}(U=u){\bf P}(Y^\prime=x-u).
\]
Therefore,
\begin{eqnarray}
\nonumber
&&\sum_x|{\bf P}(Y=x)-\bar\mathbb D(x)|\\
\label{eq:fmqnc}
&\leq&\sum_{u\,:\,\|u\|<(d+1)N^\theta+1}{\bf P}(U=u)\sum_x|{\bf P}(Y^\prime=x-u)-\bar\mathbb D(x)|.
\end{eqnarray}
By Part \ref{item:first_der} of Lemma \ref{lem:ann_der}, for every $x$ and every $u$ such that $\|u\|<(d+1)N^\theta+1$,
\[
|\bar\mathbb D(x-u)-\bar\mathbb D(x)|\leq C(d+1)N^\theta\cdot N^{-d}=C(d+1)N^{\theta-d}.
\]
Therefore, with $D_2$ as defined in Part \ref{item:y+z} of the corollary,
\begin{eqnarray*}
\sum_x|D_2(x)|&=&\sum_x|{\bf P}(Y=x)-\bar\mathbb D(x)| \\
&\leq& \sum_x \big(|{\bf P}(Y^\prime=x)-\bar\mathbb D(x)|+C(d+1)N^{\theta-d}\big)\\
&=& \left(\sum_{k=1}^n|\bar{D}(Q_k)-\bar\mathbb D(Q_k)|\right)+R_5(N)^{d-1}N^{d-1}\cdot C(d+1)N^{\theta-d} \\
&\leq& C(d+1)R_5(N)^{d-1}N^{\theta-1} + R_5(N)^{d-1}N^{(d-1)(1-\theta)}\cdot
N^{(\theta-1)(d-1)-\theta\big(\frac{d-1}{d+1}\big)}\\
&=& R_5(N)^{d-1}\left(C(d+1)N^{\theta-1}+N^{-\theta\big(\frac{d-1}{d+1}\big)}\right)
\leq R_6(N)N^{-\theta\big(\frac{d-1}{d+1}\big)}<\lambda
\end{eqnarray*}
To see Part \ref{item:sec_mom}, note that $\|x-E_{\bar\mathbb D}\|_1\leq dNR_5(N)$ for every $x$ in the support of $D_2$.
Therefore,
\begin{eqnarray*}
\sum_{x}|D_2(x)|\|x-E_{\bar\mathbb D}\|_1^2
&\leq& d^2R_5^2(N)N^2\sum_{x}|D_2(x)|\\
\leq d^2R_5^2(N)N^2\cdot R_6(N)N^{-\theta\big(\frac{d-1}{d+1}\big)}
&\leq& \lambda N^2.
\end{eqnarray*}
\end{proof}
Corollary \ref{cor:quenched} can be formulated slightly differently in the language of couplings.
We need a definition.
\begin{definition}\label{def:close}
For two probability measures $\mu_1$ and $\mu_2$ on $\mathbb Z^d$, and for $\lambda<1$ and $k\in\mathbb N$, we say that $\mu_2$ is \underline{$(\lambda,k)$-close} to $\mu_1$ if there exists a joint distribution (``coupling'') $\mu$ of three random variables, $Z_1$, $Z_2$ and $Z_0$, such that
\begin{enumerate}
\item\label{item:marg} $Z_1\sim\mu_1$ and $Z_2\sim\mu_2$.
\item\label{item:masbound} $\mu(Z_1\neq Z_0)\leq \lambda$.
\item\label{item:distbound} $\mu(\|Z_0-Z_2\|<k)=1$.
\item\label{item:stmom} $\sum_{x}x[\mu(Z_1=x)-\mu(Z_0=x)]=E_\mu(Z_1)-E_\mu(Z_0)=0$.
\item\label{item:ndmom} $\sum_{x}|\mu(Z_1=x)-\mu(Z_0=x)|\|x-E_\mu(Z_1)\|_1^2 \leq \lambda {\texttt{var}}(Z_1)$
\end{enumerate}
\end{definition}
Using Definition \ref{def:close}, Part \ref{item:y+z} of Corollary \ref{cor:quenched} can be formulated as
saying that if $\omega\in G(N)$, then $\bar{D}$ is $\big(N^{-\theta\frac{d-1}{2(d+1)}},(d+1)N^{\theta}\big)$-close to $\bar{\mathbb D}$.
(We need to know that the variance of a $\bar{\mathbb D}$-distributed variable is at least of the order of magnitude of $N^2$; this follows, e.g., from the annealed lower bound in Lemma \ref{lem:lbound}.)
The following claim is immediate and useful.
\begin{claim}\label{claim:interm}
In the language of Definition \ref{def:close}, the distribution of $Z_0$ is $(\lambda,0)$-close to $\mu_1$.
\end{claim}
We now proceed to proving Proposition \ref{prop:quenched}. We start with a version of Azuma's inequality.
Let $\{M_k\}_{k=1}^n$ be a zero-mean martingale with respect to a filtration $\{\mathcal F_k\}_{k=1}^n$ on the sample space $\Omega$. For simplicity we denote $M_0=0$ and $\mathcal F_0=\{\emptyset,\Omega\}$. For $k=1,\ldots,n$, let
$D_k=M_k-M_{k-1}$. Define
\[
U_k=\operatorname{esssup}(|D_k|\ |\ \mathcal F_{k-1})=\lim_{p\to\infty}\big[E(|D_k|^p|\mathcal F_{k-1})\big]^\frac{1}{p}
\]
and we define the {\em essential variance} of the martingale to be
\begin{equation*}
U:=\operatorname{esssup}\left(\sum_{k=1}^nU_k^2\right).
\end{equation*}
\begin{lemma}\label{lem:azuma}
For every $K$,
\begin{equation*}
{\bf P}(|M_n|>K)\leq 2e^{-\frac{K^2}{2U}}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is similar to that of Azuma's inequality:
First we show that for every $k$,
\begin{equation}\label{eq:essbnd}
{\bf E} \left(e^{\sum_{j=k}^nD_j}|\mathcal F_{k-1}\right)
\leq
e^{{\frac 12}\operatorname{esssup}\left(\sum_{j=k}^nU_j^2|\mathcal F_{k-1}\right)}.
\end{equation}
Indeed, for $k=n$ \eqref{eq:essbnd} is clear, and assuming \eqref{eq:essbnd} for
$k+1$, we get
\begin{eqnarray*}
{\bf E} \left(e^{\sum_{j=k}^nD_j}|\mathcal F_{k-1}\right)
&=&
{\bf E} \left( e^{D_k} {\bf E}\left({e^{\sum_{j=k+1}^nD_j}}|\mathcal F_{k}\right)|\mathcal F_{k-1}\right)\\
\leq
{\bf E} \left(e^{D_k}e^{{\frac 12}\operatorname{esssup}\left(\sum_{j=k+1}^{n}U_j^2|\mathcal F_k\right)}|\mathcal F_{k-1}\right)
&\leq&
{\bf E} \left(e^{D_k}e^{{\frac 12}\operatorname{esssup}\left(\sum_{j=k+1}^{n}U_j^2|\mathcal F_{k-1}\right)}|\mathcal F_{k-1}\right)\\
=
e^{{\frac 12}\operatorname{esssup}\left(\sum_{j=k+1}^{n}U_j^2|\mathcal F_{k-1}\right)}{\bf E} \left(e^{D_k}|\mathcal F_{k-1}\right)
&\leq&
e^{{\frac 12}\operatorname{esssup}\left(\sum_{j=k+1}^{n}U_j^2|\mathcal F_{k-1}\right)}e^{{\frac 12} U_k^2}\\
&=&
e^{{\frac 12}\operatorname{esssup}\left(\sum_{j=k}^nU_j^2|\mathcal F_{k-1}\right)}.
\end{eqnarray*}
For $k=0$ this gives us that
\begin{equation*}
{\bf E}\left(e^{M_n}\right)\leq e^{{\frac 12} U}
\end{equation*}
and that for every $\lambda$,
\begin{equation*}
{\bf E}\left(e^{\lambda M_n}\right)\leq e^{{\frac 12} \lambda^2U}.
\end{equation*}
Using Markov's inequality once with $\lambda=\frac{K}{U}$ and once with $\lambda=-\frac{K}{U}$ gives the desired result.
\end{proof}
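As a quick numerical sanity check of this bound (a toy case only: a martingale with deterministic increment bounds $a_k$, for which the essential variance is simply $U=\sum_k a_k^2$ and the lemma reduces to the usual Azuma--Hoeffding inequality):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, samples = 200, 100000
a = rng.uniform(0.2, 1.0, size=n)        # |D_k| <= a_k almost surely
U = np.sum(a ** 2)                       # essential variance of the martingale

# build M_n = sum_k a_k * xi_k with independent fair signs xi_k = +-1
M = np.zeros(samples)
for k in range(n):
    M += a[k] * rng.choice([-1.0, 1.0], size=samples)

for c in (1.0, 2.0, 3.0):
    K = c * np.sqrt(U)
    empirical = np.mean(np.abs(M) > K)
    bound = 2.0 * np.exp(-K ** 2 / (2.0 * U))
    print(f"K={K:7.2f}  P(|M_n|>K)={empirical:.5f}  bound={bound:.5f}")
\end{verbatim}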
Next we discuss the intersection structure of two independent walks in the same environment.
\begin{lemma}\label{lem:inter}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
Let $X^{(1)}:=\{X_n^{(1)}\}$ and $X^{(2)}:=\{X_n^{(2)}\}$ be two independent random walks running in the same environment $\omega$.
Let $[X^{(i)}]$ be the set of points visited by $\{X_n^{(i)}\}$.
Then there exists $C$ such that for every $n$,
\begin{equation*}
E\left[
P_{\omega,\omega}\left(
\left\{\left|
\left[X^{(1)}\right]\cap\left[X^{(2)}\right]\cap\mathcal P(0,N)\right|>nR_1^d(N)
\right\}
\ \cap A_N(X^{(1)}) \cap A_N(X^{(2)})
\right)
\right]<e^{-Cn}
\end{equation*}
\end{lemma}
\begin{proof}
Let $k\geq 0$ be such that $k+R_1(N)<N$. Then from the definition of the event $A_N$, for a random walk $X=\{X_n\}$,
\begin{equation}\label{eq:inslab}
{\bf 1}_{A_N(X)}\cdot \big| \{x\,:\,x\in[X]\,;\,k<\langle x,e_1\rangle<k+R_1(N)\}\big| < 2^dR_1^d(N).
\end{equation}
For every $k$, let $Q^-_k=\mathcal P(0,N)\cap\{x\,:\,\langle x,e_1\rangle<kR_1(N)\}$ and
$Q^+_k=\mathcal P(0,N)\cap\{x\,:\,\langle x,e_1\rangle\geq kR_1(N) \}$. In addition, let
$\hat A_N=A_N\big(X^{(1)}\big)\cap A_N\big(X^{(2)}\big)$.
Using Propositions 3.1, 3.4 and 3.7 of \cite{BZ07}, as well as uniform ellipticity, and, again, recalling the definition of $A_N$, we see that there exists $\rho>0$ such that for every $k$
\begin{equation}\label{eq:frombz}
E\left[
P_{\omega,\omega}\left(
\left\{
\left[X^{(1)}\right]\cap\left[X^{(2)}\right]\cap Q^+_{k+1}=\emptyset
\right\}
\left|
\hat A_N\,;\, \left[X^{(1)}\right]\cap Q^-_{k};\left[X^{(2)}\right]\cap Q^-_{k}
\right.\right)
\right]>\rho.
\end{equation}
{\bf Remark:} As stated in \cite{BZ07}, Propositions 3.1, 3.4 and 3.7 of \cite{BZ07} require moment assumptions on the regeneration times. Nevertheless, an examination of their proofs shows that all that is actually needed are moment assumptions on the number of sites visited before $\tau_1$, and these are satisfied by Lemma \ref{lem:regrad}.
Now, let
\[
J^{(\mbox{even})}=\bigl\{k\,:\,k\mbox{ is even and }
\left[X^{(1)}\right]\cap\left[X^{(2)}\right]\cap Q^+_{k}\cap Q^-_{k+1}\neq\emptyset
\bigr\}
\]
and
\[
J^{(\mbox{odd})}=\bigl\{k\,:\,k\mbox{ is odd and }
\left[X^{(1)}\right]\cap\left[X^{(2)}\right]\cap Q^+_{k}\cap Q^-_{k+1}\neq\emptyset
\bigr\}.
\]
Then, by \eqref{eq:frombz}, conditioned on $\hat A_N$, the cardinalities of both $J^{(\mbox{even})}$ and $J^{(\mbox{odd})}$ are stochastically dominated by a geometric variable with parameter $\rho$.
The lemma now follows when we remember that by \eqref{eq:inslab},
\[
{\bf 1}_{\hat A_N}\cdot
\left|\left[X^{(1)}\right]\cap\left[X^{(2)}\right]\cap\mathcal P(0,N)\right|
\leq 2^d R_1^d(N)\big(J^{(\mbox{even})}+J^{(\mbox{odd})}\big)
\]
\end{proof}
As a corollary we get the following estimate:
\begin{lemma}\label{lem:interquenched}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
With the same notation as in Lemma \ref{lem:inter},
\begin{equation}\label{eq:interquenched}
P\left[\omega\ :\
E_{\omega,\omega}\left(\left|
\left[X^{(1)}\right]\cap\left[X^{(2)}\right]\cap\mathcal P(0,N)\right|
\cdot{\bf 1}_{A_N(X^{(1)})\cap A_N(X^{(2)})}
\right)\geq R_2(N)
\right]=N^{-\xi(1)}
\end{equation}
\end{lemma}
Let $J(N)\subseteq\Omega$ be the event that
for every starting point $z$ in the middle third of the block,
\begin{equation*}
E^{z,z}_{\omega,\omega}\left(\left|
\left[X^{(1)}\right]\cap\left[X^{(2)}\right]\cap\mathcal P(0,N)\right|
\cdot{\bf 1}_{A_N(X^{(1)})\cap A_N(X^{(2)})}
\right)\leq R_2(N).
\end{equation*}
Then, by Lemma \ref{lem:interquenched},
$P(J(N))=1-N^{-\xi(1)}.$
Fix $z\in\tilde\mathcal P(0,N)$. For every $\omega$ and $x\in\mathcal P(0,N)$, we let
\[
H^z(\omega,x):=P^z_\omega(x\in [X] \mbox{ and } A_N(\{X_n\}))
\]
be the hitting probability of $x$.
Then for $\omega\in J(N)$
\begin{equation}\label{eq:heak}
\sum_{x\in\mathcal P(0,N)}(H^z(\omega,x))^2\leq R_2(N).
\end{equation}
\begin{lemma}\label{lem:toch}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
There exists an event $K(N)\subseteq\Omega$ such that $P(K(N))=1-N^{-\xi(1)}$
and for every $\omega\in K(N)$ and $z\in\tilde{\mathcal P}(0,N)$,
\begin{equation}\label{eq:toch}
\left\|
E_\omega^z\left[
X_{T_{\partial\mathcal P(0,N)}}
\right]
-\BbbE^z\left[
X_{T_{\partial\mathcal P(0,N)}}
\right]
\right\|\leq
R_3(N).
\end{equation}
\end{lemma}
\begin{proof}
Define
\begin{eqnarray*}
U(\omega,z):=
\left\| \begin{array}{c}
E^z_\omega\left[
X_{T_{\partial\mathcal P(0,N)}}\cdot {\bf 1}_{A_N} \cdot {\bf 1}_{T_{\partial\mathcal P(0,N)}=T_{\partial^+\mathcal P(0,N)}}
\right]\\
-\BbbE^z\left[
X_{T_{\partial\mathcal P(0,N)}} \cdot {\bf 1}_{A_N} \cdot {\bf 1}_{T_{\partial\mathcal P(0,N)}=T_{\partial^+\mathcal P(0,N)}}
| J(N)\right] \end{array}
\right\|.
\end{eqnarray*}
It is sufficient to show that for a large enough set of $\omega$-s,
\begin{equation}\label{eq:toch2}
U(\omega,z)
\leq R_2^{2d+2}(N).
\end{equation}
\eqref{eq:toch2} is sufficient because there is a set $M$ of environments of measure $1-N^{-\xi(1)}$ such that for every $\omega\in M$ we have $P^z_\omega\big(T_{\partial\mathcal P(0,N)}\neq T_{\partial^+\mathcal P(0,N)}\big)=N^{-\xi(1)}$. Since $\|X_{T_{\partial\mathcal P(0,N)}}\|_\infty<CN^2$, for every $\omega\in M$ the contribution of the event $\big\{T_{\partial\mathcal P(0,N)}\neq T_{\partial^+\mathcal P(0,N)}\big\}$ to the expectation of $X_{T_{\partial\mathcal P(0,N)}}$ is bounded by $1$.
To show \eqref{eq:toch2}, we order the vertices in $\mathcal P(0,N)$ lexicographically, $x_1,x_2,\ldots$, with the first coordinate being
the most significant. Let $\mathcal F_n$ be the $\sigma$-algebra on the sample space $\big(J(N)\subseteq\Omega,P(\cdot|J(N))\big)$ which is determined by $\omega|_{x_1,\ldots,x_n}$ and let $\{M_k\}$ be the martingale
\[
M_k:=E\left[\left.
E_\omega\left[
X_{T_{\partial\mathcal P(0,N)}}\cdot {\bf 1}_{A_N} \cdot {\bf 1}_{T_{\partial\mathcal P(0,N)}=T_{\partial^+\mathcal P(0,N)}}
\right]
\right|\ \mathcal F_k\right]
\]
Next we calculate $\operatorname{esssup}(M_k-M_{k-1}|\mathcal F_{k-1})$. The argument is similar to the one used in \cite{BZ07}, which is based on ideas from \cite{bolthausen_sznitman}.
Let
\begin{equation*}
B(x):=\{y\ :\ \langle y,e_1\rangle = \langle x,e_1\rangle-1\mbox{ and } \|y-x\|\leq R_1^2(N)\}.
\end{equation*}
Note that if $x$ is visited and $A_N$ holds, then the first visit to the layer
\begin{equation*}
H(x):=\{y\ :\ \langle y,e_1\rangle = \langle x,e_1\rangle-1\}
\end{equation*}
is in $B(x)$.
Therefore,
\begin{eqnarray}
\nonumber
U_k=
\operatorname{esssup}(M_k-M_{k-1}|\mathcal F_{k-1})
&\leq&
R^2(N)\Large \texttt{P}(x_k \in [X]\ | \ \mathcal F_{k-1})\\
\nonumber
\leq
R^2(N)\sum_{y\in B(x_k)} \Large \texttt{P}(X_{T_{H(x_k)}}=y\ |\ \mathcal F_{k-1})
&=&
R^2(N)\sum_{y\in B(x_k)} P_\omega(X_{T_{H(x_k)}}=y)\\
\nonumber
&\leq&
R^2(N)\sum_{y\in B(x_k)} P_\omega(y\in[X]),\\ \label{eq:411a}
\end{eqnarray}
where the first inequality follows from the fact that the regeneration containing $x_k$ is of size no more than
$R^2(N)$, and after this regeneration the distribution of the walk is the annealed distribution.
Remembering that $|B(x_k)|\leq 2^dR_2^{d}(N)$ and that every $y$ is in $B(x)$ for at most
$2^dR_2^{d}(N)$ points $x$,
\begin{eqnarray}
\nonumber
\sum_{k=1}^n U_k^2
&\leq&
\sum_{k=1}^n R^4(N) \left[\sum_{y\in B(x_k)} P_\omega(y\in[X])\right]^2\\
\nonumber
&\leq&
2^dR_2^d(N)R^{4}(N)\sum_{k=1}^n\sum_{y\in B(x_k)} P_\omega(y\in[X])^2\\
\nonumber
&\leq&
2^{2d}R_2^{2d}(N)R^{4}(N)\sum_{y\in\mathcal P(0,N)} H^2(\omega,y)
\leq 2^{2d}R_2^{2d}(N)R^{4}(N)\cdot R_2(N) \leq R_2^{2d+2}(N).\\ \label{eq:411b}
\end{eqnarray}
Therefore, by Lemma \ref{lem:azuma},
\begin{eqnarray*}
P\left(\left.\omega\ :\
U(\omega)
>R_2^{2d+2}(N) \ \right| \ J(N) \right)
<2e^{-\frac{R_2^{4d+4}(N)}{2R_2^{2d+2}(N)}}=N^{-\xi(1)}.
\end{eqnarray*}
\eqref{eq:toch2} follows.
\end{proof}
We now estimate the quenched exit distribution from $\mathcal P(0,N)$. Fix a starting point for the walk
$z\in\tilde{\mathcal P}(0,N)$. We start with the following
lemma. Recall that for every $k$, we define
$H_k$ to be the hyperplane $H_k=\{v\in\mathbb Z^d:\langle v,e_1\rangle=k\}$.
\begin{lemma}\label{lem:dpachotechad}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
Fix $0<\theta\leq 1$.
Let $B^{\theta}(N)\subseteq\Omega$ be the event that for every $\frac 25N^2\leq M\leq N^2$ and every ($d-1$ dimensional) cube $Q$ of side length $N^{\theta}$ which is contained in $H_M$,
\begin{equation*}
\left|
P_\omega^z\left(X_{T_M}\in Q\ ; \ A_N\right)
-\BbbP^z\left(X_{T_M}\in Q\ ; \ A_N\right)
\right|
\leq N^{(\theta-1)(d-1)}.
\end{equation*}
Then, for $\theta>\frac{d-1}{d}$,
\[
P\left(B^{\theta}(N)\right) = 1-N^{-\xi(1)}.
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:dpachotechad}]
Fix $\theta$, and let $\frac{d-1}{d}<\theta^\prime<\theta$.
Let
\[
V=\bigl[N^{2\theta^\prime}\bigr].
\]
Fix $\frac 25N^2\leq M\leq N^2$.
Let $v\in H_{M+V}$, and let $\mathcal G$ be the $\sigma$-algebra that is determined by the configuration on
\[
\mathcal P^M(0,N)=\mathcal P(0,N)\cap\{x\,:\,\langle x,e_1\rangle\leq M\}.
\]
We are interested in the quantity
\[
J^{(M)}(v)=E\bigl[
P_\omega(X_{T_{M+V}}=v\,;\, A_N)\, |
\mathcal G
\bigr].
\]
\begin{figure}[h]
\begin{center}
\epsfig{figure=figs/sigalg, width=10cm}
\caption{\sl
The quantity $J^{(M)}(v)$ is the probability of hitting the point $v$, conditioned on the environment in the shaded area, and averaged over the environment elsewhere.
}
\label{fig:sigalg}
\end{center}
\end{figure}
Similar to the proof of Lemma \ref{lem:toch}, we let $\{x_i\}_{i=1}^n$ be a lexicographic ordering of the
vertices in
$
\mathcal P^M(0,N),
$ and let $\{\mathcal F_i\}$ be the $\sigma$-algebra on $J(N)$ which is determined by $\omega|_{x_1,\ldots,x_i}$.
We consider the martingale
$
M_i=E\bigl[
P_\omega(X_{T_{M+V}}=v\,;\,A_N)\, |
\mathcal F_i
\bigr].
$
In order to use Lemma \ref{lem:azuma}, we will need to bound
$
U_i
=\operatorname{esssup}(M_{i}-M_{i-1}\,|\,\mathcal F_{i-1}).
$
Remember that
$x_i$ is the vertex s.t. $\omega_{x_i}$ is measurable with respect to $\mathcal F_i$ but not with respect to $\mathcal F_{i-1}$.
Then we claim that
\begin{equation}\label{eq:main_est_d-1}
U_i
\leq CR(N)E\big[P_\omega(x_i\mbox{ is hit })\,|\,\mathcal F_{i-1}\big]V^{-d/2}.
\end{equation}
We now show the main estimate \eqref{eq:main_est_d-1}.
Let $\omega^\prime$ be an environment that agrees with $\omega$ everywhere except, possibly, $x_i$.
We let ${\bf P}$ be the distribution of a walk that follows the law $\omega$ on $\{x_k:k\leq i\}$ and the annealed distribution on $\mathbb Z^d\setminus\{x_k:k\leq i\}$. Analogously, let ${\bf P}^\prime$ be the distribution of a walk that follows the law $\omega^\prime$ on $\{x_k:k\leq i\}$ and the annealed distribution on $\mathbb Z^d\setminus\{x_k:k\leq i\}$.
More precisely, for an event $B\subseteq (\mathbb Z^d)^\mathbb N$ on the space of possible paths for the walk,
\[
{\bf P}(B)=\Large \texttt{P}(B\times\Omega | \omega_{x_1},\ldots,\omega_{x_i}; A_N),
\]
and analogously for ${\bf P}^\prime$. Then
\begin{equation}\label{eq:uibound}
U_i\leq \sup_{\omega^\prime}\big|{\bf P}^\prime(X_{T_{M+V}}=v)-{\bf P}(X_{T_{M+V}}=v)\big|,
\end{equation}
where the supremum is taken over all environments $\omega^\prime$ that agree with $\omega$ on $\mathbb Z^d\setminus\{x_i\}$.
Note that conditioned on the event that $x_i$ is not visited, the distributions ${\bf P}$ and ${\bf P}^\prime$ are the same.
Now, for both measures ${\bf P}$ and ${\bf P}^\prime$, condition on the event that $x_i$ is visited. Let $u$ be the first
regeneration point after $x_i$. Then, ${\bf P}$- and ${\bf P}^\prime$-almost surely, $\|u-x_i\|_1<dR(N)$; this follows from the conditioning on $A_N$.
Therefore, from Parts \ref{item:first_der} and \ref{item:first_der_ort} of Lemma \ref{lem:ann_der} we get that
\[
|{\bf P}(X_{T_{M+V}}=v | x_i \mbox{ is visited})-\BbbP^{x_i}(X_{T_{M+V}}=v)|<CR(N)V^{-d/2}
\]
and
\[
|{\bf P}^\prime(X_{T_{M+V}}=v| x_i \mbox{ is visited})-\BbbP^{x_i}(X_{T_{M+V}}=v)|<CR(N)V^{-d/2}
\]
Therefore,
\[
U_i\leq CR(N)V^{-d/2}{\bf P}(x_i \mbox{ is visited}).
\]
\eqref{eq:main_est_d-1} follows.
Using \eqref{eq:main_est_d-1}, conditioned on $J(N)$, and based on the same calculation as in \eqref{eq:411a} and \eqref{eq:411b},
\begin{eqnarray*}
U&=&\operatorname{esssup}(\sum_{i=1}^nU_i^2)\\
&\leq& R^6(N)V^{-d}.
\end{eqnarray*}
Therefore, by Lemma \ref{lem:azuma}, for every $v\in H_{M+V}$ and every number $\delta$,
\begin{eqnarray*}
P\left(\left|
E\bigl[
P_\omega(X_{T_{M+V}}=v)\,;\, A_N\, |
\mathcal G
\bigr]-
\BbbP(X_{T_{M+V}}=v\,;\, A_N)\,
\right|>\delta
\right)\\
\leq 2P(J(N)^c) + 2e^{-\frac{\delta^2}{2R^6(N)V^{-d}}}
\end{eqnarray*}
In particular, if
$
\delta=\frac 14N^{1-d}=\frac 14V^{-d/2}V^{\eta},
$
with $\eta=\frac{d+\frac{1-d}{\theta^\prime}}{2}>0$, then we get that
\[
P\left(\left|
E\bigl[
P_\omega(X_{T_{M+V}}=v\,\,;\, A_N)\, |
\mathcal G
\bigr]-
\BbbP(X_{T_{M+V}}=v\,;\, A_N\,)
\right|>\frac 14N^{1-d}
\right)
=N^{-\xi(1)}
\]
and
\begin{eqnarray*}
P\left(\left|
E\bigl[
P_\omega(X_{T_{M+V}}=v) |
\mathcal G
\bigr]-
\BbbP(X_{T_{M+V}}=v)
\right|>\frac 12N^{1-d}
\right)
\leq P\big(\omega\,:\,P_\omega(A_N^c)\geq \frac 14N^{1-d}\big)\\
+
P\left(\left|
E\bigl[
P_\omega(X_{T_{M+V}}=v\,\,;\, A_N)\, |
\mathcal G
\bigr]-
\BbbP(X_{T_{M+V}}=v\,;\, A_N\,)
\right|>\frac 14N^{1-d}
\right)
=N^{-\xi(1)}.
\end{eqnarray*}
Let $T(N)$ be the event that
\[
\left|
E\bigl[
P_\omega(X_{T_{M+V}}=v)\, |
\mathcal G
\bigr]-
\BbbP(X_{T_{M+V}}=v)
\right|\leq\frac 12N^{1-d}
\]
for every $\frac 25N^2\leq M\leq N^2$ and every $v\in H_{M+V}\cap \mathcal P(0,2N)$.
Then $P(T(N))=1-N^{-\xi(1)}$. Now consider $\omega\in T(N)$, and fix $\frac 25N^2\leq M\leq N^2$
and a cube $Q$ of side length $N^{\theta}$ which is contained in $H_M$.
We want to estimate
\begin{equation}\label{eq:hefreshcube}
L(Q)=
\left|
P_\omega^z\left(X_{T_M}\in Q\ ; \ A_N\right)
-\BbbP^z\left(X_{T_M}\in Q\ ; \ A_N\right)
\right|.
\end{equation}
Let $c(Q)$ be the center of the cube $Q$, and let
$
c^\prime(Q)=c(Q)+V\frac{\vartheta}{\langle \vartheta,e_1\rangle}.
$
Then we let
\[
Q^{(1)}=\{v\in H_{V+M}\,:\,\|v-c^\prime(Q)\|_\infty<\frac 12(0.9)^{1/d}N^{\theta}\}
\]
and
\[
Q^{(2)}=\{v\in H_{V+M}\,:\,\|v-c^\prime(Q)\|_\infty<\frac 12(1.1)^{1/d}N^{\theta}\}.
\]
Then by simple annealed estimates,
\begin{equation}\label{eq:qlobndann}
\BbbP^z(X_{T_{V+M}}\in Q^{(1)})<\BbbP^z(X_{T_{M}}\in Q)+N^{-\xi(1)},
\end{equation}
\begin{equation}\label{eq:qupbndann}
\BbbP^z(X_{T_{V+M}}\in Q^{(2)})>\BbbP^z(X_{T_{M}}\in Q)-N^{-\xi(1)},
\end{equation}
\begin{equation}\label{eq:qlobndque}
E\big[P^z_\omega(X_{T_{V+M}}\in Q^{(1)})|\mathcal G\big]<P^z_\omega(X_{T_{M}}\in Q)+N^{-\xi(1)},
\end{equation}
and
\begin{equation}\label{eq:qupbndque}
E\big[P^z_\omega(X_{T_{V+M}}\in Q^{(2)})|\mathcal G\big]>P^z_\omega(X_{T_{M}}\in Q)-N^{-\xi(1)}.
\end{equation}
From the definition of $T(N)$ and \eqref{eq:qlobndann}, \eqref{eq:qupbndann}, \eqref{eq:qlobndque} and \eqref{eq:qupbndque}, it follows that $T(N)\subseteq B^{\theta}(N)$.
Therefore, $P(B^{\theta}(N))\geq P(T(N))=1-N^{-\xi(1)}$.
\end{proof}
Using Lemma \ref{lem:dpachotechad} as a building block, we can get a similar yet weaker result for every choice of
$\theta$.
\begin{lemma}\label{lem:alltheta}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
For every $0<\theta\leq 1$ and $h$ let $\bar{B}^{(\theta,h)}(N)$ be the event that
for every $z\in\tilde{\mathcal P}(0,N)$, every $\frac 12N^2\leq M\leq N^2$ and every cube $Q$ of side length $N^{\theta}$ which is contained in $H_M$,
\begin{equation}\label{eq:alltheta}
P_\omega^z\left(X_{T_M}\in Q\ ; \ A_N\right)
\leq R_h(N) N^{(\theta-1)(d-1)}.
\end{equation}
Then for every $0<\theta\leq 1$ there exists $h=h(\theta)$ such that $P(\bar{B}^{(\theta,h)}(N))=1-N^{-\xi(1)}$
\end{lemma}
\begin{proof}
We prove the lemma by descending induction on $\theta$.
From Lemma \ref{lem:dpachotechad}, $P\big(\bar{B}^{(\theta,1)}(N)\big)=1-N^{-\xi(1)}$ for every
$1\geq\theta>\frac{d-1}{d}.$ For the induction step, fix $\theta$ and assume that the statement of the lemma holds for some $\theta^\prime$ such that $\theta>\frac{d-1}{d}\theta^\prime$, and let $h^\prime=h(\theta^\prime)$. We write $\rho=\theta/\theta^\prime.$ Let $\sigma$ be the natural shift of $\mathbb Z^d$.
Let
\[
L = \bar{B}^{(\rho,1)}(N) \cap \bigcap_{z\in\mathcal P(0,2N)}
\sigma_z\bigl(\bar{B}^{(\theta^\prime,h^\prime)}([N^\rho])\bigr)\cap T(N,\rho)
,
\]
where
\[
T(N,\rho)=\{
\omega\in\Omega\,:\, \forall_{v\in\mathcal P(0,N)}\,,\
P_\omega^v\big(X_{T_{\partial\mathcal P(v,[N^\rho])}}\notin\partial^+\mathcal P(v,[N^\rho])\big)<e^{-R_1(N)}
\}.
\]
Clearly, $P(L)=1-N^{-\xi(1)}$. Therefore, all we need to show is that for some $h$ and all $N$ large enough, we have that $L\subseteq\bar{B}^{(\theta,h)}(N).$
To this end we fix $\omega\in L$, fix $z$, fix $\frac 12N^2\leq M\leq N^2$ and fix a cube $Q$ of side length $N^{\theta}$ in $\mathcal P(0,N)\cap H_M$. Let $x$ be the center of $Q$, let
$V=[N^\rho]^2$ and let $x^\prime=x-V\frac{\vartheta}{\langle \vartheta,e_1\rangle}.$
Since
\[
\omega\in\bigcap_{z\in\mathcal P(0,2N)}
\sigma_z\bigl(\bar{B}^{(\theta^\prime,h^\prime)}([N^\rho])\bigr)
,
\]
we get that for every $v\in H_{M-V}$,
\begin{equation}\label{eq:katan}
P_\omega^v(X_{T_M}\in Q)
<R_{h^\prime}(N) N^{\rho(\theta^\prime-1)(d-1)}=R_{h^\prime}(N) N^{(\theta-\rho)(d-1)}.
\end{equation}
Remember that, by the Markov property and the fact that $\omega\in T(N,\rho)$,
\begin{equation}\label{eq:markov}
P_\omega^z(X_{T_M}\in Q)
=\sum_{v\in H_{M-V}\cap\mathcal P(x^\prime,[N^\rho])}
P_\omega^z(X_{T_{M-V}}=v)
P_\omega^v(X_{T_M}\in Q)+N^{-\xi(1)}
\end{equation}
Now, $H_{M-V}\cap\mathcal P(x^\prime,[N^\rho])$ is the union of $2^{d-1}R_5(N)^{d-1}<R_6(N)$ cubes of side length $N^\rho$.
Since $\omega\in \bar{B}^{(\rho,1)}(N)$, we get that for every cube $Q^\prime$ of side length $N^\rho$ that is contained in $H_{M-V}\cap\mathcal P(0,N)$,
\begin{equation}\label{eq:gadol}
P_\omega^z(X_{T_{M-V}}\in Q^\prime)
<R_1(N)N^{(\rho-1)(d-1)}.
\end{equation}
Combining \eqref{eq:katan}, \eqref{eq:markov} and \eqref{eq:gadol}, we get that
\begin{eqnarray*}
&&P_\omega^z(X_{T_M}\in Q)\\
&\leq& R_6(N)R_{h^\prime}(N) N^{(\theta-\rho)(d-1)}\cdot
R_1(N)N^{(\rho-1)(d-1)}+N^{-\xi(1)}\\
&\leq& R_h(N)N^{(\theta-1)(d-1)}
\end{eqnarray*}
for $h=\max(6,h^\prime)+1.$
\end{proof}
Next we prove a lemma which significantly strengthens Lemma \ref{lem:alltheta}. Its proof uses Lemma \ref{lem:alltheta} together with a more careful application of the technique from the proof of Lemma
\ref{lem:dpachotechad}. We start with the following preliminary lemma:
\begin{lemma}\label{lem:distu}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
Let $\mathcal G$ be the $\sigma$-algebra generated by $\{\omega(z)\,:\,\langle z,e_1\rangle\leq N^2\}.$ Let $\eta>0$, let $V=\big[N^\eta]$ and let $B(N,V)$ be the event that for every $z\in\tilde{\mathcal P}(0,N)$ and every $v\in H_{N^2+V}$,
\[
\left|
E\bigl[P_\omega^z(X_{T_{N^2+V}}=v)\, |\mathcal G\bigr]
-\BbbP^z\bigl[X_{T_{N^2+V}}=v\bigr]
\right|\leq N^{1-d}V^{\frac{1-d}{6}}
.\]
Then $P(B(N,V))=1-N^{-\xi(1)}$.
\end{lemma}
\begin{proof}
Let $v\in H_{N^2+V}$ and let $\theta>0$ be such that $\theta < \frac 1{20}\eta$.
Let $K$ be an integer such that $2^{-K}{N^2}>V\geq 2^{-K-1}{N^2}$, and for $1\leq k< K$ we define
\[
\mathcal P^{(k)}=\mathcal P(0,N)\cap\{x\,:\,2^{-k-1}{N^2}\leq {N^2}-\langle x,e_1\rangle< 2^{-k}{N^2}\}
.\]
In addition we take
\[
\mathcal P^{(K)}=\mathcal P(0,N)\cap\{x\,:\,0\leq {N^2}-\langle x,e_1\rangle< 2^{-K}{N^2}\}
,\]
and
\[
\mathcal P^{(0)}=\mathcal P(0,N)\cap\{x\,:\,{N^2}/2\leq {N^2}-\langle x,e_1\rangle\}
.\]
In addition,
we define
\[
F(v)=\{x\in\mathcal P(0,N)\,:\,\|x-u(v,x)\|\leq|\langle v-x,e_1\rangle|^{1/2}R_2(N)\},
\]
where $u(v,x)$ is as in \eqref{eq:uzx}.
Then, for $0\leq k\leq K$, we define
\[
\mathcal P^{(k)}(v)=\mathcal P^{(k)}\cap F(v),
\]
and
\[
\hat{\mathcal P}^{(k)}(v)=\{y: \exists_{x\in\mathcal P^{(k)}(v)}\,\mbox{s.t.}\,\|x-y\|<R_2(N)\}.
\]
Note that $\mathcal P^{(k)}(v)\subseteq\hat{\mathcal P}^{(k)}(v)$.
\begin{figure}[h]
\begin{center}
\epsfig{figure=figs/withpar, width=12cm}
\caption{\sl
The darker areas are ${\mathcal P}^{(k)}(v)$ for different values of $k$. The environment in the
light-gray area has negligible influence on the probability of hitting $v$.
}
\label{fig:withpar}
\end{center}
\end{figure}
Condition on the event $\bar{B}^{(\theta,h)}(N)$, with $h$ chosen so that, by Lemma \ref{lem:alltheta},
$P\big(\bar{B}^{(\theta,h)}(N)\big)=1-N^{-\xi(1)}$.
For $0\leq k\leq K$ and $\omega\in\bar{B}^{(\theta,h)},$ we want to estimate
\[
V(k)=E_{\omega,\omega}\left[\big[X^{(1)}\big]\cap\big[X^{(2)}\big]\cap\mathcal P^{(k)}(v)\right].
\]
For $k=0$,
\[
V(0)\leq E_{\omega,\omega}\left[\big[X^{(1)}\big]\cap\big[X^{(2)}\big]\cap\mathcal P(0,N)\right]
\leq R_2(N).
\]
For $k>0$,
\begin{eqnarray}
\nonumber
V(k)&=&\sum_{x\in \mathcal P^{(k)}(v)}\big[P^z_\omega(x \mbox{ is visited})\big]^2\\
\nonumber
&\leq& \sum_{x\in \mathcal P^{(k)}(v)}\left[
\sum_{y:\|y-x\|<R(N)}P^z_\omega(X_{T_{\langle y,e_1\rangle}}=y)\right]^2+N^{-\xi(1)}\\
\nonumber
&\leq& R^{2d}(N)
\sum_{y\in\hat{\mathcal P}^{(k)}(v)}\big[P^z_\omega(X_{T_{\langle y,e_1\rangle}}=y)\big]^2+N^{-\xi(1)}\\
&\leq& R_2(N)\sum_{y\in\hat{\mathcal P}^{(k)}(v)}R_h(N)N^{2(1-\theta)(1-d)}\label{eq:bcbt} \\
\nonumber
&\leq&
R_{h+1}(N)N^{2\bigl(\frac{d+1}{2}+(1-\theta)(1-d)\bigr)}2^{-k\left[\frac{d+1}{2}\right]}
\end{eqnarray}
where the inequality \eqref{eq:bcbt} follows from the fact that $\omega\in\bar B^{(\theta,h)}(N)$.
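For clarity, we record the crude counting estimate behind the passage from \eqref{eq:bcbt} to the last line (the exact constants and $R$-factors play no role here). Every $x\in\mathcal P^{(k)}(v)$ satisfies $N^2-\langle x,e_1\rangle\leq 2^{-k}N^2$ and, since $|\langle v-x,e_1\rangle|\leq V+2^{-k}N^2\leq 2^{1-k}N^2$ for $k\leq K$, the definition of $F(v)$ gives $\|x-u(v,x)\|\leq|\langle v-x,e_1\rangle|^{1/2}R_2(N)\leq C2^{-k/2}NR_2(N)$. Therefore
\[
\big|\hat{\mathcal P}^{(k)}(v)\big|
\leq C\,2^{-k}N^2\cdot\big(C2^{-k/2}NR_2(N)\big)^{d-1}
\leq C\,R_2^{d-1}(N)\,N^{d+1}\,2^{-k\frac{d+1}{2}},
\]
and absorbing the $R$-factors as in the display above gives the stated bound.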
As before, we now use the same filtration $\{\mathcal F_i\}$ as in the proof of Lemma \ref{lem:toch}, and consider the martingale
$
M_i=E\bigl[
P_\omega^z(X_{T_{N^2+V}}=v\,;\, A_N)\, |
\mathcal F_i
\bigr].
$
Again, in order to use Lemma \ref{lem:azuma}, we need to bound
$
U_i
=\operatorname{esssup}(|M_{i}-M_{i-1}|\,|\,\mathcal F_{i-1}).
$
Let $x$ be s.t. $\omega_x$ is measurable with respect to $\mathcal F_i$ but not with respect to $\mathcal F_{i-1}$.
Then $U_i=N^{-\xi(1)}$ if $x\notin F(v)$, while if $x\in F(v)$, then
\[
U_i
\leq R(N)
E[P_\omega^z(x\mbox{ is hit })\,|\,\mathcal F_{i-1}]
D(N^2+V-\langle x,e_1\rangle)
\]
where $D(n)$ is the maximal first derivative of the annealed distribution at distance $n$.
By Lemma \ref{lem:ann_der}, $D(N^2+V-\langle x,e_1\rangle)\leq CN^{-d}2^{k\frac{d}{2}}$ for $x\in\mathcal P^{(k)}(v)$.
Therefore,
\begin{eqnarray*}
U&=&\operatorname{esssup}\bigl(\sum_{i}U_i^2\bigr)\\
&\leq& C\sum_{k=0}^K V(k)N^{-2d}2^{kd}+N^{-\xi(1)}\\
&\leq& CR_{h+1}(N)N^{-2d}\\
&+& CR_{h+1}(N)N^{2\bigl(\frac{d+1}{2}+(1-\theta)(1-d)\bigr)-2d}\sum_{k=1}^K2^{kd-k\frac{d+1}{2}}
+N^{-\xi(1)}\\
&\leq& CR_{h+1}(N)\left(N^{-2d}+N^{ 3-3d + 2(d-1)\theta}2^{K\frac{d-1}{2}}\right)\\
&\leq& CR_{h+1}(N)\left(N^{-2d}+N^{ 2-2d + 2(d-1)\theta}V^{-\frac{d-1}{2}}\right)\\
&\leq& CN^{ 2-2d}V^{-\frac{d-1}{6}+\epsilon}
\end{eqnarray*}
for small enough $\epsilon$.
Therefore, using Lemma \ref{lem:azuma}, with probability $1-N^{-\xi(1)}$,
\[
\left|
E\bigl[P_\omega^z(X_{T_{N^2+V}}=v\,;\, A_N)\, |\mathcal G\bigr]
-\BbbP\bigl[X_{T_{N^2+V}}=v\,;\, A_N\bigr]
\right|\leq N^{1-d}V^{\frac{1-d}{6}}
.\]
A simple union bound coupled with the fact that $\BbbP(A_N)=N^{-\xi(1)}$ completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{lem:goodbound}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
For every $0<\theta\leq 1$ let $D^{(\theta)}(N)\subseteq\Omega$ be the event that
for every $z\in\tilde{\mathcal P}(0,N)$ and every cube $Q$ of side length $N^{\theta}$ which is contained in $\partial^+\mathcal P(0,N)$,
\begin{equation}\label{eq:goodbound}
\left|
P_\omega^z\left(X_{T_{\partial \mathcal P(0,N)}}\in Q\ \right)
-\BbbP^z\left(X_{T_{\partial \mathcal P(0,N)}}\in Q\ \right)
\right|
\leq N^{(\theta-1)(d-1)-\theta\big(\frac{d-1}{d+1}\big)}.
\end{equation}
Then $P(D^{(\theta)}(N))=1-N^{-\xi(1)}$.
\end{lemma}
\begin{proof}
Take $\frac 34\theta<\theta^\prime<\theta$ and $V=\left[N^{\frac{8\theta^\prime}{d+1}}\right]$.
Then by Lemma \ref{lem:distu} we know that $P\big(B(N,V)\big)=1-N^{-\xi(1)}$. As before,
all we need to show is that $B(N,V)\subseteq D^{(\theta)}(N)$. The argument is identical to the last step of the proof of Lemma \ref{lem:dpachotechad}. Let $\omega\in B(N,V)$, and let $Q$ be a cube of side length $N^{\theta}$ which is contained in $\partial^+\mathcal P(0,N)$. Let $x$ be the center of $Q$, and let $x^\prime=x+V\frac{\vartheta}{\langle \vartheta,e_1\rangle}$.
Let $Q^{(1)}$ and $Q^{(2)}$ be $(d-1)$-dimensional cubes that are contained in $H_{N^2+V}$ and are centered at $x^\prime$, such that the side length of $Q^{(1)}$ is $N^{\theta}-R_3(N)\sqrt{V}$ and the side length of $Q^{(2)}$ is $N^{\theta}+R_3(N)\sqrt{V}$.
Then, on $B(N,V)$, for $i=1,2$
\begin{equation}
\left|
E\bigl[P_\omega^z(X_{T_{N^2+V}}\in Q^{(i)})\, |\mathcal G\bigr]
-\BbbP\bigl[X_{T_{N^2+V}}\in Q^{(i)} \bigr]
\right|
\leq |Q^{(i)}|N^{1-d}V^{\frac{1-d}{6}}.
\end{equation}
In addition, exactly as in the proof of Lemma \ref{lem:dpachotechad},
\begin{equation}\label{eq:qlobndannad}
\BbbP^z(X_{T_{V+N^2}}\in Q^{(1)})<\BbbP^z(X_{T_{N^2}}\in Q)+N^{-\xi(1)},
\end{equation}
\begin{equation}\label{eq:qupbndannad}
\BbbP^z(X_{T_{V+N^2}}\in Q^{(2)})>\BbbP^z(X_{T_{N^2}}\in Q)-N^{-\xi(1)},
\end{equation}
\begin{equation}\label{eq:qlobndquead}
E\big[P_\omega^z(X_{T_{V+N^2}}\in Q^{(1)})|\mathcal G\big]<P^z_\omega(X_{T_{N^2}}\in Q)+N^{-\xi(1)},
\end{equation}
and
\begin{equation}\label{eq:qupbndquead}
E\big[P_\omega^z(X_{T_{V+N^2}}\in Q^{(2)})|\mathcal G\big]>P^z_\omega(X_{T_{N^2}}\in Q)-N^{-\xi(1)}.
\end{equation}
Therefore, for $\omega\in B(N,V)$,
\begin{eqnarray*}
\left|
P_\omega^z\left(X_{T_{\partial \mathcal P(0,N)}}\in Q\ \right)
-\BbbP^z\left(X_{T_{\partial \mathcal P(0,N)}}\in Q\ \right)
\right|\\
\leq
\big(|Q^{(1)}|+|Q^{(2)}|\big)N^{1-d}V^{\frac{1-d}{6}}
+C\big(|Q^{(2)}|-|Q^{(1)}|\big)N^{1-d}+N^{-\xi(1)}\\
\leq
C\left(
N^{(1-\theta)(1-d)}V^{\frac{1-d}{6}} + R_3(N)N^{(1-d)+(d-2)\theta}\sqrt{V}
\right).
\end{eqnarray*}
The lemma follows from the choice of $V$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:quenched}]
Proposition \ref{prop:quenched} follows from Lemma \ref{lem:toch} and Lemma \ref{lem:goodbound}.
\end{proof}
\subsection{Sums of approximate Gaussians}\label{subsec:sumapprox}
The purpose of this subsection is to prove Lemma \ref{lem:sumapprox} below.
\label{page:defdn}
Let $\mathcal D(N)$ be the annealed distribution starting from zero of
$
X_{T_{\partial\mathcal P(0,N)}}
$
conditioned on $\partial^+\mathcal P(0,N)$.
\begin{lemma}\label{lem:sumapprox} Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
Let $0<\lambda<1$ and $n$ be such that $n<\lambda^{-1}$. Let
$K$ be such that $N>K\geq 1$. Let $h\geq 5$. Assume further that $N>K^4$ and $N>\lambda^{-4}$,
and that $\lambda N>2KnR_{h+1}(N)$.
Let $\{X_i\}_{i=1}^n$ be random variables such that for every $i$,
conditioned on $X_1,\ldots,X_{i-1}$, the distribution of $X_i$
is $(\lambda,K)$-close to $\mathcal D(N)$.
Let $S=\sum_{i=1}^n X_i$. Then the distribution of $S$ is $(\lambda R_{h+1}(N),2nKR_{h+1}(N))$-close to $\mathcal D(N\sqrt{n})$.
\ignore{
If in addition the conditional distribution of $X_i$ is $C$-local to $\mathcal D(N)$ for some constant $C$, then
$S$ is $C/2$-local to $\mathcal D(N\sqrt{n})$.
}
\end{lemma}
{\bf Remark:} We need the assumptions \ref{item:assgamma}--\ref{item:assdim} because they give us some control over the distribution $\mathcal D(N)$.
We use the following simple fact, which follows from the decomposition of the annealed RWRE into regenerations.
\ignore{
\begin{claim}\label{claim:annealerr}
Let $\{U_i\}$ be i.i.d. $\mathcal D(N)$, and let $\bar{U}:=\sum_{i=1}^nU_i.$ Then $\bar{U}$ can be represented as $\bar{U}=\hat{U}+U^\prime$ s.t. $\hat{U}\sim\mathcal D(N\sqrt{n})$ and for every $k$,
\[
{\bf P}\left(\frac{U^\prime}{n}>k\right)<Cke^{-ck^{\gamma}}
\]
for some constants $C$ and $c$.
\end{claim}
}
\begin{claim}\label{claim:annealerr}
Assume the assumptions \ref{item:assgamma}--\ref{item:assdim} from Page \pageref{item:assgamma}.
For $j>1$,
let $\hat{\mathcal D}^{(j)}$ be the convolution of $\mathcal D(N)$ and $\mathcal D(N\sqrt{j-1})$.
Let $U\sim \hat{\mathcal D}^{(j)}$.
Then $U$ can be represented as $U=\hat{U}+U^\prime$ s.t. $\hat{U}\sim\mathcal D(N\sqrt{j})$
and for every $k$,
\begin{equation}\label{eq:bndregsq}
{\bf P}\left(\|U^\prime\|>k\right)<Ce^{-ck^{\gamma}} + N^{-\xi(1)}
\end{equation}
for some constants $C$ and $c$. In particular, there exists some constant $C$, independent
of $N$ and $j$ such that
\begin{equation}\label{eq:bndregsqE}
\|E(U^\prime)\|\leq E(\|U^\prime\|)<C.
\end{equation}
\end{claim}
\begin{proof}
\eqref{eq:bndregsqE} follows immediately from \eqref{eq:bndregsq} (in order to handle the $N^{-\xi(1)}$ error, note that $U^\prime$ is bounded by $3NR_5(N)$), and therefore we shall only prove
\eqref{eq:bndregsq}.
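Explicitly, since $\|U^\prime\|\leq 3NR_5(N)$ almost surely, \eqref{eq:bndregsq} gives
\[
E(\|U^\prime\|)=\int_0^\infty{\bf P}\left(\|U^\prime\|>k\right)dk
\leq\int_0^\infty Ce^{-ck^{\gamma}}\,dk+3NR_5(N)\cdot N^{-\xi(1)}\leq C^\prime
\]
for some constant $C^\prime$ independent of $N$ and $j$.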
We will define a coupling between a random variable $U$ which is approximately $\hat{\mathcal D}^{(j)}$ distributed
and a random variable $\hat U$ which is approximately $\mathcal D(N\sqrt{j})$ distributed
such that
\[
{\bf P}\left(\|U-\hat U\|>k\right)<Ce^{-ck^{\gamma}}+ N^{-\xi(1)}.
\]
We now construct the coupling.
We define an ensemble $\mathbb L=\{\mathbb U,\mathbb T\}$ where $\mathbb U$ is a positive integer, and
$\mathbb T$ is a nearest neighbor path of length $\mathbb U$, taking values in $\mathbb Z^d$ and starting at $0$.
Let $\{\mathbb L_n=\{\mathbb U_n,\mathbb T_n\}\}_{n=1}^\infty$ be i.i.d. ensembles, such that
$\mathbb U_1$ is sampled according to the annealed distribution of $\tau_2-\tau_1$, and the path $\mathbb T_1$ is distributed according to the annealed distribution of $X_{\tau_1+.}-X_{\tau_1}$, run up to time $\tau_2-\tau_1$ and conditioned on $\tau_2-\tau_1=\mathbb U_1$.
Additionally, define $\hat\mathbb L_1=\{\hat\mathbb U_1,\hat\mathbb T_1\}$ and $\hat\mathbb L_2=\{\hat\mathbb U_2,\hat\mathbb T_2\}$ to be two independent and identically distributed ensembles s.t. $\hat\mathbb U_1$ is sampled according to the annealed distribution of $\tau_1$ and
$\hat\mathbb T_1$ is distributed according to the annealed distribution of $X_{.}$, run up to time $\tau_1$ and conditioned on $\tau_1=\hat\mathbb U_1$. In addition, we require that $\hat\mathbb L_1$, $\hat\mathbb L_2$ and $\{\mathbb L_n\}_{n=1}^\infty$ are independent.
In other words, $\hat\mathbb L_1$ and $\hat\mathbb L_2$ are distributed according to the annealed distribution of the first regeneration slab, and $\{\mathbb L_n\}$ are distributed according to the annealed distribution of regeneration slabs that are not the first one.
We now construct paths from the ensembles that we defined. The choice of the distribution of the ensembles will guarantee that the paths are distributed according to the annealed RWRE distribution. The variables $U$ and $\hat U$ will be taken to be certain hitting locations of these paths, and the fact that $U$ and $\hat U$ will be built from the same ensembles will make it easy for us to estimate the difference $U-\hat U$.
Let $\Gamma_n=\hat\mathbb T_1(\hat\mathbb U_1)+\sum_{k=1}^n\mathbb T_k(\mathbb U_k)$, and let
$T_1=\max(h:\langle e_1,\Gamma_h\rangle<N^2j)$. We take
\[
\hat U=\Gamma_{T_1}+\mathbb T_{T_1+1}\big(\min(i:\langle e_1,\mathbb T_{T_1+1}(i)+\Gamma_{T_1}\rangle=N^2j)\big).
\]
Let $T_2=\max(h:\langle e_1,\Gamma_h\rangle<N^2(j-1))$, and
\[
V_1=\Gamma_{T_2}+\mathbb T_{T_2+1}\big(\min(i:\langle e_1,\mathbb T_{T_2+1}(i)+\Gamma_{T_2}\rangle=N^2(j-1))\big)
\]
Let $\Gamma^\prime_n=\hat\mathbb T_2(\hat\mathbb U_2)+\Gamma_{T_2+n}-\Gamma_{T_2}$.
Let $T_3=\max(h:\langle e_1,\Gamma^\prime_h\rangle<N^2)$, and
\[
V_2=\Gamma^\prime_{T_3}+\mathbb T_{T_2+T_3+1}\big(\min(i:\langle e_1,\mathbb T_{T_2+T_3+1}(i)+\Gamma^\prime_{T_3}\rangle=N^2)\big)
\]
We now take $U=V_1+V_2$.
By Lemma \ref{lem:regrad} and Part \ref{item:exit_from_right} of Lemma \ref{lem:ann_der},
up to an error of $N^{-\xi(1)}$, the variables $U$ and $\hat U$ are distributed (respectively) according to
$\hat{\mathcal D}^{(j)}$ and $\mathcal D(N\sqrt{j})$.
The difference $U-\hat U$ is bounded by the sums of the radii of the regeneration slabs $\hat\mathbb L_2$,
$\mathbb L_{T_2}$, and $\mathbb L_{h}$ for $h$ between $T_2+T_3$ and $T_1$. Lemma \ref{lem:regrad} now gives us the desired bound.
\end{proof}
We also use the following lemma, which is nothing but a second order Taylor expansion.
\begin{lemma}\label{lem:arit}
Let $\mu$ be a finite signed measure on $\mathbb Z^d$, and let $f:\mathbb Z^d\to\mathbb R$. Assume that $m$, $k$, $J$, $L\in\mathbb N$ and $\varrho\in\mathbb Z^d$ are such that
\begin{enumerate}
\item for every $x,y$ such that $x-y\in\{\pm e_i\}_{i=1}^d$, we have that $|f(x)-f(y)|<m$.
\item for every $x,y,z,w$ and $1\leq i,j\leq d$ such that $x-y=z-w=e_i$ and $x-z=y-w=e_j$,
we have that $|f(x)+f(w)-f(y)-f(z)|<k$ (note that if $i=j$ then this is the discrete pure second derivative,
and if $i\neq j$ it is the discrete mixed second derivative).
\item $\sum_x\mu(x)=0$.
\item $\big\|\sum_xx\mu(x)\big\|_1<L$.
\item $\sum_x\|x-\varrho\|_1^2|\mu(x)|<J$.
\end{enumerate}
Then
\[
\left|\sum_{x}\mu(x)f(x)\right|\leq Lm+\frac{1}{2}Jk.
\]
\end{lemma}
\begin{proof}
Since $\sum_x\mu(x)=0$, we have $\sum_{x}\mu(x)f(x)=\sum_{x}\mu(x)(f(x)+c)$ for every $c$.
Therefore, without loss of generality we may assume that $f(\varrho)=0$.
Let $g:\mathbb R^d\to\mathbb R$ be the affine function such that $g(\varrho)=f(\varrho)=0$ and $g(\varrho+e_i)=f(\varrho+e_i)$ for $i=1,\ldots,d$. Then
$|f(x)-g(x)|<\frac 12 k\|x-\varrho\|_1^2$ for $x\in\mathbb Z^d$.
\ignore{
why?
look at $h=f-g$. it has the exact second derivatives as $f$, and the maximum directional derivative at
$x$ and along the shortest path o it is bounded by $k\|x\|_1$. Thus, we get a bound
of
$|f(x)-g(x)|<k\|x\|_1^2$ for $x\in\mathbb Z^d$, which is within a constant from what's written.
Whether what's written is really true
}
Note also that since $\sum_x\mu(x)=0$, we get that $\sum_x(x-\varrho)\mu(x)=\sum_xx\mu(x)$ and thus
$\big\|\sum_x(x-\varrho)\mu(x)\big\|_1<L$.
Therefore,
\[
\left|\sum_{x}\mu(x) f(x)-\sum_{x}\mu(x) g(x)\right|\leq
\sum_x |\mu(x)||f(x)-g(x)|\leq\frac 12 Jk.
\]
In addition, since $g$ is affine with $g(\varrho)=0$, we have $g(x)=\nabla g\cdot(x-\varrho)$, where each coordinate of $\nabla g$ is bounded in absolute value by $m$, and therefore
\[
\left|\sum_{x}\mu(x)g(x)\right|=\left|\nabla g\cdot\sum_{x}(x-\varrho)\mu(x)\right|
\leq Lm.
\]
The lemma follows.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:sumapprox}]
For $k=1,\ldots,n$, conditioned on $X_1,\ldots,X_{k-1}$, the distribution of $X_k$ is
$(\lambda,K)$-close to $\mathcal D(N)$. Therefore there exist variables $\{Y_k\}_{k=1}^n$,
playing the role of $Z_0$ in Definition \ref{def:close}, such that for every $k$, conditioned on
$X_1,Y_1,\ldots,X_{k-1},Y_{k-1}$, the following hold:
\begin{enumerate}
\item\label{item:y_masbound} $\sum_x|{\bf P}(Y_k=x)-\mathcal D(N)(x)| \leq \lambda$.
\item\label{item:y_distbound} ${\bf P}(\|Y_k-X_k\|<K)=1$.
\item\label{item:y_stmom} $E(Y_k)=E_{\mathcal D(N)}$.
\item\label{item:y_ndmom} $\sum_{x}|{\bf P}(Y_k=x)-\mathcal D(N)(x)|\|x-E_{\mathcal D(N)}\|_1^2 \leq \lambda N^2$.
\end{enumerate}
What we need to show is that there exists a random variable $Y^\prime$ such that
\begin{enumerate}
\item\label{item:yp_masbound} $\sum_x|{\bf P}(Y^\prime=x)-\mathcal D(\sqrt{n}N)(x)| \leq \lambda R_{h+1}(N)$.
\item\label{item:yp_distbound} ${\bf P}(\|Y^\prime-S\|<2nKR_{h+1}(N))=1$.
\item\label{item:yp_stmom} $E(Y^\prime)=E_{\mathcal D(\sqrt{n}N)}$.
\item\label{item:yp_ndmom} $\sum_{x}|{\bf P}(Y^\prime=x)-\mathcal D(\sqrt{n}N)(x)|\|x-E_{\mathcal D(\sqrt{n}N)}\|_1^2
\leq \lambda nN^2R_{h+1}(N)$.
\end{enumerate}
\ignore{
If in addition for every $k$, conditioned on $X_1,Y_1,\ldots,X_{k-1},Y_{k-1}$,
\begin{equation}\label{eq:localif}
P(Y_k=x)\geq C\mathcal D(N)(x)
\end{equation}
for all $x$ such that $\|x-E_{\mathcal D(N)}\|<N$, then we need to show that
\begin{equation}\label{eq:localthen}
P(Y^\prime=x)\geq \frac{C}{2}\mathcal D(N)(x)
.\end{equation}
for all $x$ such that $\|x-E_{\mathcal D(\sqrt{n}N)}\|<\sqrt{n}N$.
}
To this end, we let
\[
S^{(j)}=\sum_{k=j}^nY_k.
\]
First we will show, using descending induction, that conditioned on $X_1,\ldots,X_{j-1}$, we can represent $S^{(j)}$ as
$S^{(j)}=Y^{(j)}+Z^{(j)}$ such that $\|Z^{(j)}\|\leq (n-j)R_h(N)$ a.s. and $Y^{(j)}\sim(\mathcal D(N\sqrt{n-j+1})+D_2^{(j)})$ where $D_2^{(j)}$ is a signed measure such that
$\|D_2^{(j)}\|\leq\lambda^{(j)}$ with $\lambda^{(n)}=\lambda$ and
$\lambda^{(j)}\leq \lambda^{(j+1)}+\frac{2}{n-j}\lambda\cdot R_5(N)$ for $j<n$.
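Summing this recursion over $j$ is where the factor $R_{h+1}(N)$ in the statement comes from: a crude count gives
\[
\lambda^{(1)}\leq\lambda+2\lambda R_5(N)\sum_{m=1}^{n-1}\frac 1m
\leq\lambda\big(1+2R_5(N)(1+\log n)\big),
\]
and this logarithmic loss is what the factor $R_{h+1}(N)$ in Part \ref{item:yp_masbound} is meant to absorb.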
For $j=n$ the statement clearly holds, with $Z^{(n)}=0$. We now assume that the statement holds for $j+1$, and prove it for $j$.
Let ${\bf P}$ be the joint distribution of $Y_j$ and $Y^{(j+1)}$ conditioned on $X_1,\ldots,X_{j-1}$. Let
$H=Y_j+Y^{(j+1)}$. For each $z$,
\begin{eqnarray*}
{\bf P}(H=z)
=\sum_{x}{\bf P}(Y_j=x){\bf P}\big(Y^{(j+1)}=z-x\big|Y_j=x\big)
\end{eqnarray*}
Let $\mathcal D^{(j)}$ be the convolution of $\mathcal D(N\sqrt{n-j})$ and the ${\bf P}$ distribution of $Y_j$. Then
\begin{eqnarray}
\nonumber
&&\sum_z\big|{\bf P}(H=z)-\mathcal D^{(j)}(z)\big|\\
\nonumber
&\leq&
\sum_z\sum_x {\bf P}(Y_j=x)\left|{\bf P}\big(Y^{(j+1)}=z-x\big|Y_j=x\big)-\mathcal D(N\sqrt{n-j})(z-x)\right|\\
\nonumber
&=&
\sum_{x,y}{\bf P}(Y_j=x)\left|{\bf P}\big(Y^{(j+1)}=y\big|Y_j=x\big)-\mathcal D(N\sqrt{n-j})(y)\right|\\
\label{eq:inhar}
&\leq& \operatorname{esssup}\|D_2^{(j+1)}\|.
\end{eqnarray}
As in Claim \ref{claim:annealerr} let $\hat{\mathcal D}^{(j)}$ be the convolution of $\mathcal D(N)$ and $\mathcal D(N\sqrt{n-j})$.
Then for given $z$,
by Lemma \ref{lem:arit} and Parts \ref{item:second_der} and \ref{item:mix_der} of Lemma \ref{lem:ann_der},
\begin{eqnarray}
\nonumber
&& |\hat{\mathcal D}^{(j)}(z)-\mathcal D^{(j)}(z)|\\
\nonumber
&=&\left|\sum_x \mathcal D(N\sqrt{n-j})(x) \big({\bf P}(Y_j=z-x)-\mathcal D(N)(z-x)\big)\right|\\
\label{eq:compk}
&\leq&
\lambda N^2\cdot N^{-d-1}(n-j)^{\frac{-d-1}{2}}
= \lambda N^{1-d}(n-j)^{\frac{-d-1}{2}}.
\end{eqnarray}
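For the reader's orientation, the application of Lemma \ref{lem:arit} here can be read as follows (a sketch, with constants not optimized): take
\[
f(x)=\mathcal D(N\sqrt{n-j})(x),\qquad
\mu(x)={\bf P}(Y_j=z-x)-\mathcal D(N)(z-x),
\]
and $\varrho$ a lattice point closest to $z-E_{\mathcal D(N)}$. Then $\sum_x\mu(x)=0$, $\sum_xx\mu(x)=0$ by Part \ref{item:y_stmom} (so the first-order contribution is negligible), and $\sum_x\|x-\varrho\|_1^2|\mu(x)|\leq C\lambda N^2$ by Part \ref{item:y_ndmom}, while Parts \ref{item:second_der} and \ref{item:mix_der} of Lemma \ref{lem:ann_der} bound the discrete second differences of $f$ by a constant times $N^{-d-1}(n-j)^{\frac{-d-1}{2}}$, which is exactly the factor appearing in \eqref{eq:compk}.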
Note that for $z$ such that
$\|z-E_{\hat{\mathcal D}^{(j)}}\|_1>R_5(N)N(n-j)^{\frac{1}{2}}$,
both $\hat{\mathcal D}^{(j)}(z)$ and $\mathcal D^{(j)}(z)$ are bounded by
\begin{equation}\label{eq:monst}
\exp\left(-
\frac{1}{N^{2(d-1)}(n-j)^{d-1}}
\left(\frac{\|z-E_{\hat{\mathcal D}^{(j)}}\|_1}{R_1(N)}\right)^2
\right)\leq e^{-R_4(N)}.
\end{equation}
From \eqref{eq:inhar}, \eqref{eq:compk} and \eqref{eq:monst},
we get that the distribution of $H$ can be presented as
$\hat{\mathcal D}^{(j)}+\bar{D}_2^{(j)}$ such that
\[
\|\bar{D}_2^{(j)}\|\leq \|D_2^{(j+1)}\|+ \lambda R_5(N)(n-j)^{-1}.
\]
By Claim \ref{claim:annealerr}, and again conditioned on $X_1,Y_1,\ldots,X_{j-1},Y_{j-1}$, there exists $Z^\prime(j)$ such that
${\bf P}\big(\|Z^\prime(j)\|>R_h(N)\big)<\exp(-R_{h-1}(N))$, and the distribution of $H+Z^\prime(j)$ is
$\mathcal D(N\sqrt{n-j+1})+\bar{D}_2^{(j)}$.
Let
\[
\bar{H}(j)=H+Z^\prime(j)\cdot{\bf 1}_{\|Z^\prime(j)\|<R_h(N)}.
\]
Then the distribution of $\bar{H}(j)$ is $\mathcal D(N\sqrt{n-j+1})+\hat{D}_2^{(j)}$ with
\[
\|\hat{D}_2^{(j)}\|\leq\|\bar{D}_2^{(j)}\|+\exp(-R_{h-1}(N))\leq \|D_2^{(j+1)}\|+ 2\lambda R_5(N)(n-j)^{-1}.
\]
\ignore{
The expectation $E(\bar{H}(j))$ satisfies
\[
E(\bar{H}(j))
=
\sum_{k=j}^{n}E(Y_k)+\sum_{k=j}^{n}E
\]
}
We let
\[
Z^{(j)}=Z^{(j+1)}+Z^\prime(j)\cdot{\bf 1}_{\|Z^\prime(j)\|<R_h(N)},
\]
and $Y^{(j)}=S^{(j)}-Z^{(j)}$.
Then we get that $\|Z^{(j)}\|\leq (n-j)R_{h}(N)$ and the distribution of $Y^{(j)}$ is
$(\mathcal D(N\sqrt{n-j+1})+D_2^{(j)})$, where $D_2^{(j)}$ is a signed measure such that
$\|D_2^{(j)}\|\leq\lambda^{(j)}$ with
\[
\lambda^{(j)}\leq \lambda^{(j+1)}+\frac{2R_5(N)}{n-j}\lambda.
\]
We calculate the expectation of $Y^{(1)}$:
\begin{eqnarray*}
E(Y^{(1)})=E(S^{(1)})-E(Z^{(1)})=nE(Y_1)-E(Z^{(1)})=nE_{\mathcal D(N)}-E(Z^{(1)}).
\end{eqnarray*}
Therefore, again by Claim \ref{claim:annealerr},
\begin{eqnarray*}
\|E(Y^{(1)})-E_{\mathcal D(\sqrt{n}N)}\|\leq Cn+nR_h(N)<nR_{h+1}(N)
\end{eqnarray*}
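Here the term $Cn$ comes from telescoping \eqref{eq:bndregsqE}: for every $2\leq j\leq n$, Claim \ref{claim:annealerr} gives
\[
\big\|E_{\mathcal D(N)}+E_{\mathcal D(N\sqrt{j-1})}-E_{\mathcal D(N\sqrt{j})}\big\|=\|E(U^\prime)\|\leq C,
\]
so that $\big\|nE_{\mathcal D(N)}-E_{\mathcal D(N\sqrt{n})}\big\|\leq C(n-1)$, while the term $nR_h(N)$ bounds $\|E(Z^{(1)})\|$, since $\|Z^{(1)}\|\leq(n-1)R_h(N)$ almost surely.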
As in the proof of Corollary \ref{cor:quenched}, we can find a variable $U$ which is independent of all
of the variables we have seen so far, such that $\|U\|\leq nR_{h+1}(N)+1$ almost surely and
$
E(U)=E_{\mathcal D(\sqrt{n}N)}-E(Y^{(1)}).
$
We define $Y^\prime=Y^{(1)}+U$. By the same calculation as in \eqref{eq:fmqnc}, we get that $Y^\prime$
satisfies Parts \ref{item:yp_masbound}, \ref{item:yp_distbound} and \ref{item:yp_stmom}.
\vspace{0.25cm}
Thus, all that is left is to show that $Y^\prime$ also satisfies Part \ref{item:yp_ndmom}.
To this end, let $D_{2}$ be the signed measure such that
$Y^\prime\sim(\mathcal D(\sqrt{n}N)+D_{2})$.
We are interested in
\[
\sum_{x}|D_2(x)|\|x-E_{\mathcal D(\sqrt{n}N)}\|_1^2.
\]
As a first step, we estimate
\[
{\texttt{var}}(D_{2},i)
:=\sum_z \langle z,e_i\rangle^2D_{2}(z)
\]
for a unit vector $e_i$ with $i\neq 1$.
For $x,y,z\in\mathbb Z^d$, we write $\hat x, \hat y, \hat z$ for their projection on the $e_i$ axis.
Let $W$ be a random variable distributed
according to $\mathcal D(\sqrt{n}N)$. By Claim \ref{claim:annealerr}, there exists another random variable
$W^\prime$ such that $W^\prime\sim\mathcal D(N)^{\star n}$ ($\mathcal D(N)^{\star n}$ is the $n$-fold convolution of $\mathcal D(N)$) and
${\bf P}(\|W-W^\prime\|>nk)<Cn\exp(-ck^{\gamma})$
for every $k$.
By the definition of $Y^\prime$, we know that $U^\prime=Y^\prime-S^{(1)}$ satisfies
$\|U^\prime\|\leq 2nR_{h+1}(N)$.
In addition note that ${\texttt{cov}}(Y_j,Y_k)=0$ for $j\neq k$,
and that for every $j$,
\begin{eqnarray*}
&&|{\texttt{var}}\big(\langle Y_j,e_i\rangle\big)-{\texttt{var}}_{\mathcal D(N)}(\hat x)|\\
&=&\left|\sum_x (\hat x-E_{\mathcal D(N)}(\hat z))^2({\bf P}(Y_j=x)-\mathcal D(N)(x))\right|\\
&\leq& \sum_x (\hat x-E_{\mathcal D(N)}(\hat z))^2|{\bf P}(Y_j=x)-\mathcal D(N)(x)|\leq \lambda N^2.
\end{eqnarray*}
Therefore,
\begin{eqnarray}
\nonumber
\left|{\texttt{var}}\big(\langle S^{(1)},e_i\rangle\big) - {\texttt{var}}\big(\langle W^\prime,e_i\rangle\big)\right|
&=&\left|E\big(\langle S^{(1)},e_i\rangle^2\big) - E\big(\langle W^\prime,e_i\rangle^2\big)\right|\\
\nonumber
&\leq& \sum_{j=1}^n\left|{\texttt{var}} \big(\langle Y_j,e_i\rangle\big)-{\texttt{var}}_{\mathcal D(N)}(\hat x)\right|\\
\label{eq:ywprime}
&\leq& \lambda n N^2.
\end{eqnarray}
Now,
\begin{eqnarray}\label{eq:yprimey}
\nonumber
&&\left|{\texttt{var}}\big(\langle Y^\prime,e_i\rangle\big) - {\texttt{var}}\big(\langle S^{(1)},e_i\rangle\big)\right|\\
\nonumber
&=&\left|{\texttt{var}}\langle S^{(1)}+U^\prime,e_i\rangle - {\texttt{var}}\langle S^{(1)},e_i\rangle\right|\\
\nonumber
&\leq& 2\operatorname{esssup}(\|U^\prime\|)\sqrt{{\texttt{var}}(S^{(1)})}+\operatorname{esssup}(\|U^\prime\|)^2\\
&\leq& 2Cn^{3/2}R_{h+1}(N)N+n^2R_{h+1}^2(N)
\leq 3Cn^{3/2}R_{h+1}(N)N,
\end{eqnarray}
and
\begin{eqnarray}\label{eq:wprimew}
\nonumber
&&\left|{\texttt{var}}\big(\langle W,e_i\rangle\big) - {\texttt{var}}\big(\langle W^\prime,e_i\rangle\big)\right|\\
\nonumber
&\leq& N^2n^2{\bf P}\big(\|W-W^\prime\|>nR_5(N)\big)
+2nR_5(N)\sqrt{{{\texttt{var}}(W^\prime)}}+2n^2R_5(N)^2\\
&\leq& Cn^{3/2}R_5(N)N.
\end{eqnarray}
From \eqref{eq:ywprime}, \eqref{eq:yprimey} and \eqref{eq:wprimew} and the fact that
$E(Y^\prime)=E(W)$, we get that
\begin{eqnarray}\label{eq:vard2}
\nonumber
|{\texttt{var}}(D_2,i)|&=&\left|\sum_{x}\langle x,e_i\rangle^2\big(\mathcal D(\sqrt{n}N)(x)-{\bf P}(Y^\prime=x)\big)\right|\\
\nonumber
&=&\left| E\big(\langle W,e_i\rangle^2\big) - E\big(\langle Y^\prime,e_i\rangle^2\big) \right|\\
\nonumber
&=&\left| {\texttt{var}}\big(\langle W,e_i\rangle\big) - {\texttt{var}}\big(\langle Y^\prime,e_i\rangle\big) \right|\\
&\leq& \lambda n N^2 + 4Cn^{3/2}R_h(N)N
\leq 2\lambda n N^2.
\end{eqnarray}
We now decompose the measure $D_{2}$ into its positive and negative parts
$D_{2}^+$ and $D_{2}^-$.
We need to bound
\[
\sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 |D_{2}|=\sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^+
+ \sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^-.
\]
We know that
\begin{eqnarray}\label{eq:from_var}
\nonumber
\left| \sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^+(x)
- \sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^-(x)\right|\\
=\left| \sum_x \hat x^2 D_{2}^+(x) - \sum_x \hat x^2 D_{2}^-(x)\right|
=|{\texttt{var}}(D_{2},i)|\leq 2\lambda nN^2.
\end{eqnarray}
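The first equality in \eqref{eq:from_var} holds because the lower-order terms cancel: writing $c$ for the $e_i$-coordinate of $E_{\mathcal D(\sqrt{n}N)}$,
\[
\sum_x(\hat x-c)^2D_{2}^{\pm}(x)
=\sum_x\hat x^2D_{2}^{\pm}(x)-2c\sum_x\hat xD_{2}^{\pm}(x)+c^2\sum_xD_{2}^{\pm}(x),
\]
and the last two sums are the same for $D_{2}^+$ and $D_{2}^-$, since $\sum_xD_{2}(x)=0$ and $\sum_x\hat xD_{2}(x)=0$ (the latter because $E(Y^\prime)=E(W)$).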
In addition, note that $D_{2}^-(x)\leq\mathcal D(\sqrt{n}N)(x)$ for all $x$, and therefore
\[
D_{2}^-(x)<e^{-\|x-E_{\mathcal D(\sqrt{n}N)}\|^2/(CnN^2R_1(N))}.
\]
Combined with the fact that $\|D_{2}^-\|\leq\|D_{2}\|\leq\lambda R_{h}(N)$, we get that
\begin{equation}\label{eq:boundforminus}
\sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^- \leq R_2(N)R_h(N)\lambda nN^2
\end{equation}
Thus, by \eqref{eq:from_var} and \eqref{eq:boundforminus} we get that
\begin{eqnarray*}
&&\sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^+ + \sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^-\\
&\leq& 2\sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^- +\big|\sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^+ - \sum_x (\hat x-E_{\mathcal D(\sqrt{n}N)})^2 D_{2}^-\big|\\
&\leq& CR_{h}(N)R_2(N)\lambda nN^2.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\sum_x \|x-E_{\mathcal D(\sqrt{n}N)}\|_1^2|D_{2}(x)|
&\leq& (d-1)\sum_x \| x-E_{\mathcal D(\sqrt{n}N)}\|_2^2|D_{2}(x)|\\
&=& (d-1)\sum_{i=2}^d \sum_x \langle x-E_{\mathcal D(\sqrt{n}N)}, e_i \rangle^2|D_{2}(x)|\\
&\leq& (d-1)^2 R_{h}(N)R_{2}(N)\lambda nN^2\\
&\leq& R_{h+1}(N)\lambda nN^2.
\end{eqnarray*}
\ignore{
Therefore,
\begin{eqnarray*}
&&\sum_{x}|{\bf P}(Y^\prime=x)-\mathcal D(\sqrt{n}N)(x)|\|x\|_1^2 \\
&=& \sum_{x}|{\bf P}(H_n+U=x)-\mathcal D(\sqrt{n}N)(x)|\|x\|_1^2 \\
&\leq& \sum_{x}\left(|{\bf P}(H_n=x)-\mathcal D(\sqrt{n}N)(x)|+\frac{nR_h(N)}{N^{-d}n^{-d/2}}\right)\|x\|_1^2 \\
&\leq& \sum_{x}\left(|{\bf P}(H_n=x)-\mathcal D(N)^{\star n}(x)|+\frac{2nR_h(N)}{N^{d}n^{d/2}}\right)\|x\|_1^2 \\
&\leq& \sum_x \|x\|_1^2|D_{2,n}(x)|+2R_h(N)Nn^{d/2+2}
\leq CR_{h+1}(N)\lambda nN^2.
\end{eqnarray*}
}
Part \ref{item:yp_ndmom} follows.
\end{proof}
\ignore{
\begin{lemma}\label{lem:pntsum}
Let $X_1,\ldots,X_n$ satisfy the assumptions of Lemma \ref{lem:sumapprox}. Let $Y_1,\ldots,Y_n$ be as in the (beginning) of the proof of Lemma \ref{lem:sumapprox}.
Assume further that for some $C>0$ and every $k$, conditioned on $X_1,Y_1,\ldots,X_{k-1},Y_{k-1}$, for every $y\in\partial^+\mathcal P(0,N)$ such that $\|y-E_{\mathcal D(N)}\|\leq N$, we have $P(Y_k=y)>CN^{1-d}$.
Then for every $y\in\partial^+\mathcal P(0,\sqrt{n}N)$ such that $\|y-E_{\mathcal D(\sqrt{n}N)}\|\leq N$, we have that
$P(Y^\prime=y)>\frac{1}{2}CN^{1-d}n^{\frac{1-d}{2}}$.
\end{lemma}
\begin{proof}
We continue with notations as in the proof of Lemma \ref{lem:sumapprox}. Conditioned on $X_1$ and $Y_1$, we have that $Y^{(2)}\sim(\mathcal D(N\sqrt{n-1})+D_2^{(2)})$, with $\|D_2^{(2)}\|\leq\lambda^{(2)}$.
Recall that
\[
\lambda^{(2)}\leq\lambda R_6(N)
\]
\end{proof}
}
\section{Reduction to quenched return probabilities}\label{sec:qrp}
\subsection{Basic calculations}\label{sec:fromsznit}
In this subsection we repeat a calculation from \cite{SznitmanT}. Our main goal is to control the probability of the event $\{\tau_1>u\}$. To this end, we take $L=\left\lceil(\log u)^{\frac{d}{\gamma}}\right\rceil$ and notice that
\[
\BbbP(\tau_1>u)
\leq \BbbP(\tau_1>T_L)+\BbbP(T_L>u)
\leq e^{-(\log u)^d}+\BbbP(T_L>u),
\]
where the last inequality follows from \eqref{eq:regrad}. Let
$
B_L:=[-L,L]\times[-L^2,L^2]^{d-1}.
$
Then, again by \eqref{eq:regrad}, $\BbbP(T_L\neq T_{\partial B_L})\leq e^{-(\log u)^d}$, and thus it is sufficient to show that
\begin{equation*}
\BbbP(T_{\partial B_L}>u)<Ce^{-c(\log u)^\alpha}
\end{equation*}
for appropriate constants $C$ and $c$.
On the event $\{T_{\partial B_L}>u\}$, there exists a point $x\in B_L$ that is visited more than $\frac{u}{|B_L|}$ times before the walk leaves $B_L$. Therefore, it is sufficient to show that
\begin{equation}\label{eq:toshowbl}
\BbbP\left(\exists_{x\in B_L} \mbox{ s.t. } T^{\frac{u}{|B_L|}}_x<T_{\partial B_L}\right)<Ce^{-c(\log u)^\alpha},
\end{equation}
where $T_x^k$ is defined to be the $k^{\mbox{th}}$ hitting time of $x$.
Let $G\subseteq\Omega$ be an event. Then,
\begin{equation}
\label{eq:withstuff}
\BbbP\left(\exists_{x\in B_L} \mbox{ s.t. } T^{\frac{u}{|B_L|}}_x<T_{\partial B_L}\right)
\leq
P(G^c)+\sup_{\omega\in G}
P_\omega\left(\exists_{x\in B_L} \mbox{ s.t. } T^{\frac{u}{|B_L|}}_x<T_{\partial B_L}\right),
\end{equation}
and
\begin{eqnarray*}
P_\omega\left(\exists_{x\in B_L}\mbox{ s.t. } T^{\frac{u}{|B_L|}}_x<T_{\partial B_L}\right)
&\leq&
\sum_{x\in B_L}P_\omega^0
\left(T^{\frac{u}{|B_L|}}_x<T_{\partial B_L}\right)\\
&=&
\sum_{x\in B_L}P_\omega^0
\left(T_x<T_{\partial B_L}\right)
P_\omega^x
\left(T^{\frac{u}{|B_L|}-1}_x<T_{\partial B_L}\right)\\
&\leq&
\sum_{x\in B_L}P_\omega^x
\left(T^{\frac{u}{|B_L|}-1}_x<T_{\partial B_L}\right).
\end{eqnarray*}
Note that due to the strong Markov property,
\begin{eqnarray*}
P_\omega^x\left(T^{\frac{u}{|B_L|}-1}_x<T_{\partial B_{L}}\right)
=
\left[P_\omega^x\left(T_x<T_{\partial B_{L}}\right)\right]^{\frac{u}{|B_L|}-1},
\end{eqnarray*}
and therefore \eqref{eq:toshowbl} will follow
if we find an event $G$ such that $P(G^c)<\frac 12e^{-(\log u)^\alpha}$ and
for some $\epsilon>0$, every $\omega\in G$ and every $x$,
\begin{eqnarray}
\label{eq:toshowomeg1}
P_\omega^x\left(T_{\partial B_{L}}<T_x\right)>u^{\epsilon-1}.
\end{eqnarray}
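To see that \eqref{eq:toshowomeg1} is indeed sufficient for \eqref{eq:toshowbl}, note that it gives, for every $x\in B_L$,
\[
\left[P_\omega^x\left(T_x<T_{\partial B_{L}}\right)\right]^{\frac{u}{|B_L|}-1}
\leq\left(1-u^{\epsilon-1}\right)^{\frac{u}{|B_L|}-1}
\leq\exp\left(-\frac{u^{\epsilon}}{2|B_L|}\right)
\]
for $u$ large enough; since $|B_L|\leq CL^{2d-1}$ and $L=\left\lceil(\log u)^{\frac{d}{\gamma}}\right\rceil$, summing over $x\in B_L$ and adding $P(G^c)<\frac 12e^{-(\log u)^\alpha}$ yields a bound smaller than $e^{-(\log u)^\alpha}$ for $u$ large.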
In turn, we may replace \eqref{eq:toshowomeg1} by
\begin{eqnarray}
\label{eq:toshowomeg}
P_\omega^x\left(T_{\partial B_{2L}(x)}<T_x\right)>u^{\epsilon-1},
\end{eqnarray}
where $B_{2L}(x)$ is the cube of the same dimensions as $B_{2L}$, centered at $x$. The choice of
$B_{2L}(x)$ is slightly more convenient than $B_{L}$ because now the condition is translation invariant with respect to the choice of $x$.
\subsection{Definition of the event $G$}\label{sec:defgood}
We now define the event $G$, and show that $P(G^c)<\frac 12e^{-(\log u)^\alpha}$. In Sections
\ref{sec:auxwalk}, \ref{sec:randirev} and \ref{sec:prfmain} we will show that \eqref{eq:toshowomeg} holds for every
$\omega\in G$.
Let $\epsilon>0$ be so that
\begin{equation}\label{eq:chooseepsilon}
2d\epsilon < d-\alpha.
\end{equation}
Fix $\psi>0$ so that
\begin{equation}\label{eq:choosepsi}
\psi\leq\frac{\gamma\epsilon}{30d}
\end{equation}
and $\chi>0$ so that
\begin{equation}\label{eq:choosechi}
\chi<\frac{\psi^2}{2}\cdot\frac{d-1}{2(d+1)}.
\end{equation}
We say that a basic {block} $\mathcal P(z,N)$ is {\em good} with respect to the environment $\omega$ if the assertion of
Proposition \ref{prop:quenched} holds for every {block} of size at least $N^{\chi}$ that is contained in $\mathcal P(z,N)$, with $\theta=\psi/2$.
Otherwise, we say that $\mathcal P(z,N)$ is {\em bad}.
We define our {\em scales} $N_1,\ldots,N_\iota$ as follows:
\begin{enumerate}
\item
$N_1:= \lceil L^\psi\rceil$
\item
We define $\rho_k=\frac{\chi}{2}+\frac{\chi}{2^k}$.
\item
$N_{k+1}:= N_k\cdot \lceil L^{\rho_k}\rceil$
\item
$\iota$ is defined to be the largest $k$ s.t. $N_k^2<2L$.
\end{enumerate}
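Note that the number of scales is bounded: since $\rho_k\geq\chi/2$ for every $k$, we have $N_k\geq L^{\psi+(k-1)\chi/2}$, and therefore
\[
\iota\leq 1+\frac{2}{\chi}
\]
for $u$ (and hence $L$) large enough. In particular, $\iota$ is bounded by a constant depending only on $\chi$; this is used in the proof of Lemma \ref{lem:probgoodevent} below.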
For every $k=1,\ldots,\iota$, we let $B_{2L}(k)$ be the set of all $z\in\mathcal L_{N_k}$ such that $\mathcal P(z,N_k)\cap B_{2L}\neq\emptyset$. We now define the event $G$:
We say that the environment $\omega$ is in $G$ if for every $k=1,\ldots,\iota$,
\begin{equation}\label{eq:defgood}
\left|
\left\{
z\in B_{2L}(k)\ :\ \mathcal P(z,N_k)\mbox{ is not good w.r.t. } \omega
\right\}
\right|<
(\log u)^{\alpha+\epsilon}
\end{equation}
\begin{lemma}\label{lem:probgoodevent} For $u$ large enough,
$P(G)\geq 1-\frac 12e^{-(\log u)^\alpha}$.
\end{lemma}
\begin{proof}
Let
\[
J_k:=
\left|
\left\{
z\in B_{2L}(k)\ :\ \mathcal P(z,N_k)\mbox{ is not good w.r.t. } \omega
\right\}
\right|.
\]
First we note that
\[
P(G^c)\leq\sum_{k=1}^\iota P\left[
J_k\geq
(\log u)^{\alpha+\epsilon}
\right],
\]
and $\iota$ is bounded. Now by Proposition \ref{prop:quenched} and Corollary \ref{cor:quenched}, for given $k$ and $z\in B_{2L}(k)$,
\[
p_k:=P(\mathcal P(z,N_k)\mbox{ is not good})=N_k^{-\xi(1)}=o\left(\left|B_{2L}\right|^{-1}\right).
\]
By Lemma \ref{lem:parandlat}, we can present $J_k$ as $J_k=J_k^{(1)}+\ldots+J_k^{(9^d)}$, and
\[
J_k^{(h)}\sim \mbox{Bin}(p_k,D_k)
\]
with $D_k<|B_{2L}|$. Thus, for $u$ large enough, $J_k^{(h)}$ is a binomial random variable whose expectation is less than $1$. Therefore, again assuming that $u$ is large enough,
\[
P\left[J_k^{(h)}>\frac{(\log u)^{\alpha+\epsilon}}{9^d }\right]<
\exp\left(-
\frac{(\log u)^{\alpha+\epsilon}}{9^d }
\right).
\]
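Here we used the following elementary binomial tail bound: if $J$ is binomial with $D$ trials, success probability $p$ and $Dp\leq 1$, then for every $t\geq e^2$,
\[
P\big[J> t\big]\leq\binom{D}{\lceil t\rceil}p^{\lceil t\rceil}\leq\frac{(Dp)^{\lceil t\rceil}}{\lceil t\rceil!}\leq\frac{1}{\lceil t\rceil!}\leq e^{-t}.
\]
In our case $D=D_k<|B_{2L}|$, $p=p_k$ and $t=(\log u)^{\alpha+\epsilon}/9^d$, which exceeds $e^2$ once $u$ is large.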
Therefore,
\begin{eqnarray*}
P(G^c)
&\leq& P\left(\bigcup_{k=1}^\iota\bigcup_{h=1}^{9^d}
\left\{J_k^{(h)}>\frac{(\log u)^{\alpha+\epsilon}}{9^d }\right\}\right)\\
&\leq& 9^d\iota\exp\left(-
\frac{(\log u)^{\alpha+\epsilon}}{9^d }
\right)
\leq\frac 12 e^{-(\log u)^\alpha}.
\end{eqnarray*}
\end{proof}
\section{The auxiliary walk}\label{sec:auxwalk}
Fix an environment $\omega\in G$. In this section we define a new random walk $\{Y_n\}$ on the environment $\omega$, whose law is different from that of the quenched random walk $\{X_n\}$ on $\omega$. However, there is a simple relation between the laws of $\{Y_n\}$ and $\{X_n\}$, which we will exploit in Sections \ref{sec:randirev} and \ref{sec:prfmain} in order to prove \eqref{eq:toshowomeg}.
We first give an informal description of the random walk $\{Y_n\}$ in Subsection \ref{sec:auxinf}, then define it properly in Subsection \ref{sec:auxdef}, and then collect some useful facts about it in Subsection \ref{sec:auxprop}.
\subsection{Informal description of $\{Y_n\}$}\label{sec:auxinf}
$\{Y_n\}$ is a quenched random walk on $\omega$, which is forced to ``behave well'' in a number of different ways, which we list below.
\begin{enumerate}
\item\label{item:exright}
Once the walk $\{Y_n\}$ has reached the center of certain basic blocks, it is only allowed to exit them through their right boundaries.
\item
If the walk is in a bad block, then once it exits the block, it is forced to make a number of steps on the right boundary of the block that will force the eventual exit distribution to be similar to the annealed distribution. We use Lemma \ref{lem:sumapprox} to control the number of forced steps that are needed. When the walk exits a good basic block, no
such correction is necessary, because the distribution is already close enough to the annealed.
\item
Upon leaving the origin the walk is forced to make a number of steps to the right. This together with part (\ref{item:exright}) makes sure that $\{Y_n\}$ leaves $B_{2L}$ before returning to the origin.
\end{enumerate}
The resulting random walk $\{Y_n\}$ is a random walk that, most of the time, behaves locally similarly to the quenched random walk, but behaves globally similarly to the annealed random walk. We will quantify and then use those similarities in order to control the behavior of the quenched walk.
\subsection{Definition of $\{Y_n\}$}\label{sec:auxdef}
The process $\{Y_n\}$ is a nearest neighbor random walk, which starts at $0$ and stops when it reaches $\partial^+B_{2L}$. Below we describe its law.
We first need some preliminary definitions.
For every $j=1,2,\ldots$ and every $k=1,2,\ldots,\iota$, we let $U_k(j)$ be the layer
\[
U_k(j)=H_{jN_k^2}=\{x\ :\ \langle x,e_1\rangle=jN_k^2\}.
\]
We define $T^Y_k(j)=\inf\{n:Y_n\in U_k(j)\}.$
For every $x\in B_{2L}$, and for every $k$,
we define $z(x,k)$ as follows: if $\langle x,e_1\rangle$ is divisible by $N_k^2$, then $z(x,k)$ is a point $z\in\mathcal L_{N_k}$ such that $\langle x,e_1\rangle=\langle z,e_1\rangle$ and $x\in\tilde\mathcal P(z,N_k)$. If more than one such point exists, then we choose one according to some arbitrary rule. If $\langle x,e_1\rangle$ is not divisible by $N_k^2$, then we take $z(x,k)$ to be $0$.
For $x\in B_{2L}$, and for every $k$, we define $\mathcal P^{(k)}(x)=\mathcal P(z(x,k),N_k)$.
For every $x\in B_{2L}$, we define
\begin{equation}\label{eq:levelx}
k(x)=\max
\{k\leq\iota\ :\
\langle x,e_1\rangle=\langle z(x,k),e_1\rangle \mbox{ and }
\mathcal P(z(x,k),N_k) \mbox{ is
good}
\},
\end{equation}
and $k(x)=0$ if no such $k$ exists.
In addition, for a random variable $X$, a distribution $\mathcal D$ and a number $\lambda<1$, we define a \underline{$(\lambda,\mathcal D)$-companion} of $X$ as follows:
Let $\nu$ be the distribution of $X$, and let $K$ be the smallest number such that $\nu$ is $(\lambda,K)$-close to $\mathcal D$. Let $\mu$ be an arbitrarily chosen coupling of three variables $Z_0,Z_1,Z_2$ demonstrating, as in Definition \ref{def:close}, that $\nu$ is $(\lambda,K)$-close to $\mathcal D$. The roles of the variables $Z_0,Z_1,Z_2$ are exactly as in Definition \ref{def:close}. In particular, $Z_1\sim\mathcal D$ and $Z_2\sim\nu$.
We say that a variable $Y$ is a \underline{$(\lambda,\mathcal D)$-companion} of $X$ if the joint distribution of $X$ and $Y$ is the same as the $\mu$-joint distribution of $Z_2$ and $Z_0$. For every $X$, $\lambda$ and $\mathcal D$ we can construct such a companion: for every $x$, on the event $\{X=x\}$, we sample $Y$ according to the $\mu$-distribution of $Z_0$ conditioned on the event $\{Z_2=x\}$.
Similarly, we can define the $(\lambda,\mathcal D)$-companion of $X$ conditioned on a $\sigma$-algebra $\mathcal F$: We work with the conditional distribution of $X$ given $\mathcal F$ instead of the (unconditional) distribution, and proceed as before.
Note that $\|Y-X\|<K$ and that by Claim \ref{claim:interm} the distribution of $Y$ is $(\lambda,0)$-close to $\mathcal D$.
We now simultaneously define the walk $\{Y_n\}$, its accompanying sequence of times $\{\zeta_m\}$, and random variables $\beta_{k,j}$.
The precise definition of the variables $\{\beta_{k,j}\}$ is postponed to the end of the subsection. However, we make the following comment on $\{\beta_{k,j}\}$ at this point:
For every $j$ and $k$, a.s. $\langle\beta_{k,j},e_1\rangle=0$.
For $j\leq N_1^2$, we define $Y_j=je_1$. In addition, $\zeta_0=0$ and $\zeta_1=N_1^2$.
Given $\zeta_0,\ldots,\zeta_n$ and $\{Y_\ell\ :\ \ell=0,\ldots,\zeta_n\}$, we define
$x^\prime=Y_{\zeta_n}$. Let $k^\prime$ be the largest $k$ such that $x^\prime\in U_k(j)$ for some $j$. Then we let
$x=x^\prime+\sum_{k=1}^{k^\prime}\beta_{k,j(k)}$, where $j(k)$ is the value of $j$ such that $x^\prime\in U_k(j)$.
We let $\kappa=\|x^\prime-x\|_1+2$ and choose
$\{Y_{\zeta_n},\ldots,Y_{\zeta_n+\kappa-2}\}$ to be a shortest path from $x^\prime$ to $x$.
We then take $Y_{\zeta_n+\kappa-1}=x+e_1$ and $Y_{\zeta_n+\kappa}=x$.
Let $\zeta_n^\prime=\zeta_n+\kappa$.
Let $k=k(x)$. If $k(x)>0$ then $\{Y_\ell\ :\ \ell=\zeta^\prime_n,\ldots,T_{\partial\mathcal P^{(k)}(x)}\}$
is chosen to be a random walk starting at $x$ on the random environment $\omega$ conditioned on the event
$\{T_{\partial\mathcal P^{(k)}(x)}=T_{\partial^+\mathcal P^{(k)}(x)}\}$ and $\zeta_{n+1}=T_{\partial^+\mathcal P^{(k)}(x)}$.
Conditioned on $\omega$, $\zeta^\prime_n$ and $x$,
the path $\{Y_\ell\ :\ \ell=\zeta^\prime_n,\ldots,T_{\partial\mathcal P^{(k)}(x)}\}$ is chosen
independently of the path prior to $\zeta^\prime_n$ and of
$
\big\{\beta_{k,j(k)}\ : \ k \mbox{ and } j \mbox{ are such that }
jN_k^2\leq\langle x,e_1\rangle\big\}.$
If $k=0$ then $\zeta_{n+1}=\zeta^\prime_n+N_1^2$ and for $\zeta^\prime_n<j\leq\zeta_{n+1}$, we take $Y_j=x+(j-\zeta^\prime_n)e_1$.
We define $x^\prime_n=Y_{\zeta_n}$ and $x_n=Y_{\zeta^\prime_n}$.
Note that for every $n$, both $\langle x_n,e_1\rangle$ and $\langle x^\prime_n,e_1\rangle$ are divisible by $N_1^2$ (remember that $\langle \beta_{k,j},e_1\rangle=0$).
All that is now left is to define $\beta_{k,j}$. $\beta_{1,1}$ is simply defined to be the $(0,\mathcal D(N_1))$-companion of the (deterministic) variable $Y_{N_1^2}$.
For $\beta_{k,j}$ for other values of $k$ and $j$,
we first list some conditions under which $\beta_{k,j}$ is zero.
\begin{enumerate}
\item\label{item:1}
If there exists no $n$ such that $\zeta_n=T^Y_{k}(j-1)$, then $\beta_{k,j}=0$.
\item\label{item:infty}
Otherwise, let $n$ be such that $\zeta_n=T^Y_{k}(j-1)$, and let $x=Y_{\zeta^\prime_n}$.
If $\mathcal P^{(k)}(x)$ is good, then $\beta_{k,j}=0$.
\end{enumerate}
Now assume that neither one of conditions \ref{item:1}--\ref{item:infty} holds.
For $k=1,\ldots,\iota$ let $\lambda_k=L^{-\chi}R_{5+k}(L)$.
We define $\beta_{k,j}$ recursively: we use the values of
$\{\beta_{k^\prime,j^\prime}\,:\,k^\prime<k, j^\prime=j\frac{N_k^2}{N_{k^\prime}^2}\}$
in the definition of $\beta_{k,j}$.
Let $x=Y_{\zeta^\prime_n}$, where, as before, $n$ is such that $\zeta_n=T^Y_{k}(j-1)$, and for $k^\prime<k$ let $j(k^\prime)$ be the unique value satisfying $U_{k^\prime}(j(k^\prime))=U_k(j)$.
Let
\[X=Y_{T^Y_k(j)}-x+\sum_{k^\prime=1}^{k-1}\beta_{k^\prime,j(k^\prime)}\]
Recall the definition of $\mathcal D(N)$ from Page \pageref{page:defdn}. Then $\mathcal D(N_k)$ is the annealed distribution of $X_{T_{\partial\mathcal P(x,N_k)}}-x$ for a walk starting at $x$, conditioned on exiting $\mathcal P(x,N_k)$ through the front.
We now take $\hat Z$ to be an (arbitrarily chosen) $(\lambda_k,\mathcal D(N_k))$-companion of $X$, conditioned on
$\{Y_{\ell}:\ell=1,\ldots,\zeta^\prime_n\}$ and $\omega$, and let $\beta_{k,j}=\hat Z - X$.
Thus we defined the process $\{Y_n\}$.
\begin{remark}\label{rem:Y_0}
Note that in our definition, if $\beta_{k,j}\neq 0$ then the distribution of $\hat Z=X+\beta_{k,j}$ is $(\lambda_k,0)$-close to $\mathcal D(N_k)$.
\end{remark}
\subsection{Basic properties of $\{Y_n\}$}\label{sec:auxprop}
We prove a few facts regarding the process $\{Y_n\}$ which we will use in Sections \ref{sec:randirev} and \ref{sec:prfmain}.
\begin{lemma}\label{lem:noreturn}
$\{Y_n\}$ reaches $\partial B_L$ before returning to the origin.
\end{lemma}
\begin{proof}
By the definition, $U_1(1)$ is reached before returning to the origin. Then for every $n$, writing $x=Y_{\zeta^\prime_n}$, the {block} $\mathcal P^{(k(x))}(x)$ is contained in the positive half space and $\{Y_n\}$ exits it through $\partial^+\mathcal P^{(k(x))}(x)$ (and when $k(x)=0$ the walk simply advances $N_1^2$ steps in the direction $e_1$). Therefore $\{Y_n\}$ cannot
return to the origin.
\end{proof}
\begin{lemma}\label{lem:betasmall}
For every $k$ and $j$, with probability $1$,
\begin{equation}\label{eq:betasmall}
\|\beta_{k,j}\|<L^{4\psi}.
\end{equation}
\end{lemma}
\begin{proof}
For $k=1$, the size of the {block} $\mathcal P(0,N_1)$ is less than $L^{4\psi}$, and therefore for every $j$, we have that $\|\beta_{1,j}\|<L^{4\psi}$.
Now assume that $k>1$. In this case, we assume that there exists $n$ such that
$\zeta_n=T^Y_k(j-1)$, because otherwise $\beta_{k,j}=0$. Let $i$ be such that $U_k(j-1)=U_{k-1}(i)$.
Let $x=Y_{\zeta^\prime_n}$. If $\mathcal P^{(k)}(x)$ is good, then $\beta_{k,j}=0$. Therefore we may assume that $\mathcal P^{(k)}(x)$ is not good. In this case there exist $n_0=n, n_1, n_2, n_3, \ldots, n_m$ such that
$m$ satisfies that $U_k(j)=U_{k-1}(i+m)$ and for $h=0, 1, \ldots, m$, we have that
$\zeta_{n_h}=T^Y_{k-1}(i+h)$.
For $1\leq h< m$, let $X_h=Y_{\zeta^\prime_{n_h}}-Y_{\zeta^\prime_{n_{h-1}}}$,
and let $X_m=Y_{\zeta^\prime_{n_m}}-Y_{\zeta^\prime_{n_{m-1}}}-\beta_{k,j}$.
We now claim that for every $1\leq h\leq m$, conditioned on
$X_1,\ldots,X_{h-1}$, the distribution of $X_h$ is $(\lambda_{k-1},N_{k-1}^{\psi/2})$-close to $\mathcal D(N_{k-1})$.
Indeed, if
$\mathcal P^{(k-1)}\big(Y_{\zeta^\prime_{n_{h-1}}}\big)$
is good, then this claim follows from Corollary \ref{cor:quenched}.
Otherwise,
as in Remark \ref{rem:Y_0},
the distribution of $X_h$ is
$(\lambda_{k-1},0)$-close to $\mathcal D(N_{k-1})$ (and in particular
$(\lambda_{k-1},N_{k-1}^{\psi/2})$-close to $\mathcal D(N_{k-1})$).
Therefore, by Lemma \ref{lem:sumapprox}, the distribution of
\[
Y_{T^Y_k(j)}-x+\sum_{k^\prime=1}^{k-1}\beta_{k^\prime,j(k^\prime)}=\sum_{h=1}^mX_h
\]
is $(\lambda_k,R_{k+6}(L)N_{k-1}^{\psi/2}N^2_k/N^2_{k-1})$-close to $\mathcal D(N_k)$. Therefore we get that with probability 1,
\[
\|\beta_{k,j}\|\leq N_{k-1}^{\psi/2}\cdot\frac{R_{k+6}(L)N^2_k}{N^2_{k-1}}\leq L^{4\psi}
.\]
\end{proof}
\begin{lemma}\label{lem:stpt}
For $j$ and $k$, if there exists $n$ such that $x_n\in U_k(j)$, then at least one of the following holds:
\begin{enumerate}
\item There exists $j^\prime$ such that $U_k(j)=U_\iota(j^\prime)$;
\item There exist $k^\prime$ and $j^\prime$ such that $U_k(j)=U_{k^\prime}(j^\prime)$ and
$x_{n-1}\in U_{k^\prime}(j^\prime-1)$, and $x_{n-1}$ is contained in a {block} $\mathcal P(z,N_{k^\prime + 1})$ s.t. $z\in\mathcal L_{N_{k^\prime+1}}$ and $\mathcal P(z,N_{k^\prime + 1})$ is not good;
\item
$j\leq\left(\frac{N_{k+1}}{N_{k}}\right)^2$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that $x_n\in U_k(j)$. Let $k^\prime=k(x_{n-1})$. If $k^\prime=0$ then case (2) holds. Assume
$k^\prime>0$.
Then the intersection of
$U_k(j)$ and $\partial^+\mathcal P^{(k^\prime)}(x_{n-1})$ is not empty. Therefore, by the definition of $k(x)$, we get that there is some $j^\prime$ such that $U_k(j)=U_{k^\prime}(j^\prime)$ and
$x_{n-1}\in U_{k^\prime}(j^\prime-1)$, and that one of the following occurs:
\begin{enumerate}
\item\label{item:iota} $k^\prime=\iota$.
\item\label{item:notgood} There exists $z\in\mathcal L_{N_{k^\prime+1}}$ such that $\langle z,e_1\rangle=\langle x_{n-1},e_1\rangle$, and
$\mathcal P^{(k^\prime+1)}(x_{n-1})$ is not good.
\item\label{item:z} No $z\in\mathcal L_{N_{k^\prime+1}}$ exists with
$\langle z,e_1\rangle=\langle x_{n-1},e_1\rangle$.
\end{enumerate}
In cases \ref{item:iota} and \ref{item:notgood}, the lemma holds. Thus we assume that the case that occurs is \ref{item:z}.
In this case, there exists $n^\prime<n-1$ such that
$\langle x_{n^\prime},e_1\rangle = N_{k^\prime+1}^2\left\lfloor\frac{\langle x_{n-1},e_1\rangle}{N_{k^\prime+1}^2}\right\rfloor$.
Then $x_{n-1}$ is in $\mathcal P^{(k^\prime+1)}(x_{n^\prime})$. If $\mathcal P^{(k^\prime+1)}(x_{n^\prime})$ is not good,
then $x_{n-1}$ is contained in a {block} $\mathcal P(z,N_{k^\prime + 1})$ s.t. $z\in\mathcal L_{N_{k^\prime+1}}$
and $\mathcal P(z,N_{k^\prime + 1})$ is not good. If $\mathcal P^{(k^\prime+1)}(x_{n^\prime})$ is good and
$\langle x_{n^\prime},e_1\rangle\neq 0$ then $\zeta_{n^\prime+1}$ is the exit time from
$\mathcal P^{(k^\prime+1)}(x_{n^\prime})$, which stands in contradiction to the assumptions. If
$\langle x_{n^\prime},e_1\rangle=0$, then $j\leq\left(\frac{N_{k+1}}{N_{k}}\right)^2$.
\end{proof}
We now let $M$ be the number of stopping times $\zeta_n$ in the definition of $\{Y_n\}$.
\begin{lemma}\label{lem:nstp}
Let $[Y]$ be the set of points visited by $\{Y_n\}$. For every $k=1,\ldots,\iota$, let
\[
Q_k(\{Y_n\})=\#
\left\{
z\in\mathcal L_{N_k}\ :\ [Y]\cap\mathcal P(z,N_k)\neq\emptyset
\mbox{ and }
\mathcal P(z,N_k)
\mbox{ is bad }
\right\}.
\]
Then
\[
M\leq \frac{2L}{N_\iota^2} + L^{2\chi}\sum_{k=1}^\iota Q_k(\{Y_n\}) + \iota L^{2\chi}
\leq L^{2\chi}\cdot\left(\iota + 2 + \sum_{k=1}^\iota Q_k(\{Y_n\})\right).
\]
\end{lemma}
\begin{proof}
This follows from Lemma \ref{lem:stpt}. There are at most $\frac{2L}{N_\iota^2}$ stopping times
that are caused by reaching the end of an $N_\iota$ {block}, at most $\iota L^{2\chi}$ stopping times
that occur near the beginning of the walk (the third case in Lemma \ref{lem:stpt}), and at most $L^{2\chi}\sum_{k=1}^\iota Q_k(\{Y_n\})$ stopping times
that are caused by visiting {blocks} that are not good.
\end{proof}
We now draw the connection between the walks $\{Y_n\}$ and $\{X_n\}$.
\begin{lemma}\label{lem:xnyn}
Let $\upsilon=(v_1,v_2,\ldots v_{N_\upsilon})$ ($N_\upsilon$ is the length of the path $\upsilon$) be a nearest-neighbor path starting at the origin, never returning to the origin, and ending at $\partial^+ B_L$. For every $k=1,\ldots,\iota$, let
\[
Q_k(\upsilon)=\#
\left\{
z\in\mathcal L_{N_k}\ :\ \upsilon\cap\mathcal P(z,N_k)\neq\emptyset
\mbox{ and }
\mathcal P(z,N_k)
\mbox{ is bad }
\right\}.
\]
and let
$Q(\upsilon)=L^{2\chi}\cdot\big(\iota + 2 + \sum_{k=1}^\iota Q_k(\upsilon)\big).$
Then
\begin{equation}\label{eq:xnyn}
\frac{
{P_\omega}\left(
X_j=v_j \mbox{ for all } j<N_\upsilon
\right)
}{
{P_\omega}\left(
Y_j=v_j \mbox{ for all } j<N_\upsilon
\right)
}
\geq
\frac{1}{2}\eta^{
Q(\upsilon)\cdot(L^{4\psi}+2)
},
\end{equation}
where $\eta$ is the ellipticity constant, as in \eqref{eq:unifelliptic}.
\end{lemma}
\begin{proof}
First note that due to uniform ellipticity,
\[
P_\omega\left(X_j=v_j \mbox{ for all } j<N_\upsilon\right)>0
\]
for every $\upsilon$. Therefore without loss of generality we can restrict ourselves to considering only $\upsilon$-s such that
\[
{P_\omega}\left(Y_j=v_j \mbox{ for all } j<N_\upsilon\right)>0.
\]
For such
$\upsilon$, we define the sequences of times $\zeta_n$ and $\zeta^\prime_n$ in a fashion that is
very similar to the definition in the construction of $Y$-process:
$\zeta_0=\zeta^\prime_0=0$ and $\zeta_1=N_1^2$. Given $\zeta_0,\ldots,\zeta_n$ and $\zeta^\prime_0,\ldots,\zeta^\prime_{n-1}$, let $x^\prime_n=\upsilon_{\zeta_n}$. Let $\zeta^\prime_n$ be the smallest $\ell>\zeta_n$ such that $\langle \upsilon_{\ell-1},e_1\rangle>\langle x^\prime_n,e_1\rangle$, and let
$x_n=\upsilon_{\zeta^\prime_n}$. Let $k=k(x_n)$. If $k>0$, then we let $\zeta_{n+1}=T_{\partial^+\mathcal P^{(k)}(x_n)}(\upsilon)$. Otherwise, $\zeta_{n+1}=\zeta^\prime_n+N_1^2$.
Then,
\begin{eqnarray*}
&&P_\omega\left(Y_j=v_j \mbox{ for all } j<N_\upsilon\right)\\
&\leq&
\prod_{n:k(x_n)>0} P^{x_n}_\omega
\left(X_\ell=v_{\ell+\zeta^\prime_{n}}\,;\,\ell=1,\ldots,\zeta_{n+1}-\zeta^\prime_n
|T_{\partial\mathcal P^{(k)}(x_n)}=T_{\partial^+\mathcal P^{(k)}(x_n)}
\right),
\end{eqnarray*}
and
\begin{eqnarray}
\nonumber
&&P_\omega\left(X_j=v_j \mbox{ for all } j<N_\upsilon\right)\\
\nonumber
&\geq&
\prod_{n:k(x_n)>0} P^{x_n}_\omega
\left(X_\ell=v_{\ell+\zeta^\prime_{n}}\,;\,\ell=1,\ldots,\zeta_{n+1}-\zeta^\prime_n
|T_{\partial\mathcal P^{(k)}(x_n)}=T_{\partial^+\mathcal P^{(k)}(x_n)}
\right)\\ \label{eq:condit}
&&\cdot
\prod_{n:k(x_n)>0} P^{x_n}_\omega
\left(T_{\partial\mathcal P^{(k)}(x_n)}=T_{\partial^+\mathcal P^{(k)}(x_n)}
\right)\\ \label{eq:ellipt}
&&\cdot
\prod_{n}\eta^{\|x^\prime_n-x_n\|_1+2}
\cdot
\prod_{n:k(x_{n})=0}\eta^{L^{2\psi}}.
\end{eqnarray}
The first inequality follows from the fact that inside the good blocks $\{Y_n\}$ performs a quenched random walk on the environment $\omega$. For the second inequality, the first term and \eqref{eq:condit} count the probability of all steps in the
good blocks. In addition, at each stopping time, the process $\{X_n\}$ has to walk from $x_n^\prime$ to $x_n$, and when $k(x_{n})=0$ it also needs to traverse through an $N_1$ block. In \eqref{eq:ellipt} we bound the probability of all of these steps by ellipticity.
By Proposition \ref{prop:quenched}, the product in \eqref{eq:condit} is no less than a half. By the definitions of $k(x)$ and $\beta_{k,j}$, by Lemma \ref{lem:betasmall}, and by uniform ellipticity with constant $\eta$, the product in \eqref{eq:ellipt} is bounded below by
\[
\eta^{
Q(\upsilon)\cdot(L^{4\psi}+2)
}.
\]
Therefore,
\[
\frac{
{P_\omega}\left(
X_j=v_j \mbox{ for all } j<N_\upsilon
\right)
}{
{P_\omega}\left(
Y_j=v_j \mbox{ for all } j<N_\upsilon
\right)
}
\geq
\frac{1}{2}\eta^{
Q(\upsilon)\cdot(L^{4\psi}+2)
}.
\]
\end{proof}
\ignore{
\begin{lemma}\label{lem:entropy}
For every $j$ and $k$, let $\mathcal D^{\omega}(k,j)$ be the distribution of $Y_{T^Y_k(j)}$. Then if $X\sim\mathcal D^{\omega}(k,j)$ then $X$ can be presented as $X=W+Z$ where $Z\sim\mathcal D(\sqrt{j}N_k)$ and
$\|W\|<L^{20\psi}$. Furthermore, we can present
\[
Y_{T^Y_k(j)}-Y_{T^Y_k(j-1)}=Z^\prime+W^\prime
\]
with $\|W^\prime\|<L^{20\psi}$ and $Z^\prime\sim\mathcal D(N_k)$ and independent of
\[\{Y_j:j=1,\ldots,T^Y_k(j-1)\}.\]
\end{lemma}
}
For $k$ and $j$, we define $T^{\prime Y}_k(j)$ as follows:
If there exists $n$ such that $\zeta_n=T^Y_k(j)$, then
$T^{\prime Y}_k(j)=\zeta^\prime_n$. Otherwise,
$T^{\prime Y}_k(j)=T^Y_k(j)$.
\begin{lemma}\label{lem:entropy}
Conditioned on $\{Y_\ell\, :\, \ell\leq T^{\prime Y}_k(j-1)\}$, the distribution
of $Y_{T^{\prime Y}_k(j)}-Y_{T^{\prime Y}_k(j-1)}$
is $(\lambda_\iota,2L^{4\psi})$-close to $\mathcal D(N_k)$.
\end{lemma}
\begin{proof}
We look into two different cases: If
$\mathcal P^{(k)}\big(Y_{T^{\prime Y}_k(j-1)}\big)$
is good, then it follows from Corollary \ref{cor:quenched}. Otherwise, it follows from the definition of $\beta_{k,j}$.
\end{proof}
From Lemma \ref{lem:entropy}, we get the following useful corollary.
\begin{corollary}\label{cor:entropy} Assume that $u$ is large enough.
Condition on $\{Y_\ell\, :\, \ell\leq T^{\prime Y}_k(j-1)\}$, and let
$\bar Y=Y_{T^{\prime Y}_k(j-1)}+\BbbE(X_{T_{N_k^2}})$. For
every $x\in U_k(j)$ such that $\|x-\bar Y\|<4N_k$,
\begin{equation}\label{eq:ent}
P_\omega\big(
\|Y_{T^{\prime Y}_k(j)} - x \| < N_k
\, \big| \,
Y_\ell\, :\, \ell\leq T^{\prime Y}_k(j-1)
\big) > \rho
\end{equation}
for some constant $\rho>0$.
\end{corollary}
\ignore{
\begin{proof}
If
$\mathcal P^{(k)}\big(Y_{T^{\prime Y}_k(j-1)}\big)$
is not good, then it follows from $(\lambda_k,0)$-closeness to $\mathcal D(N_k)$ and
Lemma \ref{lem:lbound}. If $\mathcal P^{(k)}\big(Y_{T^{\prime Y}_k(j-1)}\big)$ is good, then
it follows from Corollary \ref{cor:quenched}, Lemma \ref{lem:lbound} and uniform ellipticity with
constant $\eta$.
\end{proof}
}
\begin{proof} By Lemma \ref{lem:entropy},
the quenched distribution of
$Y_{T^{\prime Y}_k(j)}-Y_{T^{\prime Y}_k(j-1)}$
conditioned on the history of the walk
is $(\lambda_\iota,2L^{4\psi})$-close to the annealed distribution $\mathcal D(N_k)$.
Therefore,
\[
P_\omega\big(
\|Y_{T^{\prime Y}_k(j)} - x \| < N_k
\, \big| \,
Y_\ell\, :\, \ell\leq T^{\prime Y}_k(j-1)
\big) >
\mathcal D(N_k)(y:\|y-x\|<\frac{N_k}{2})-\lambda_\iota.
\]
By Lemma \ref{lem:lbound}, $\mathcal D(N_k)(y:\|y-x\|<\frac{N_k}{2})$ is bounded away from zero. On the other hand, $\lambda_\iota$ goes to zero as $L$ goes to infinity. The corollary follows.
\end{proof}
\begin{lemma}\label{lem:bjk}
Conditioned on $\{Y_\ell\, :\, \ell\leq T^{\prime Y}_k(j-1)\}$, the (quenched) probability
that $\{Y_\ell\}_{\ell\geq T^{\prime Y}_k(j-1)}$
exits $\mathcal P^{(k)}(Y_{T^{\prime Y}_k(j-1)})$ through
$\partial^+\mathcal P^{(k)}(Y_{T^{\prime Y}_k(j-1)})$
is
$1-L^{-\xi(1)}$.
\end{lemma}
\begin{proof}
We denote by $E$ the event whose probability we are trying to estimate.
If $\mathcal P^{(k)}(Y_{T^{\prime Y}_k(j-1)})$ is good, then the lemma follows by the definition of a good block.
Therefore we may assume that $\mathcal P^{(k)}(Y_{T^{\prime Y}_k(j-1)})$ is a bad block.
In this case we prove the lemma using induction on $k$. For $k=1$ this follows immediately from the definition of
the auxiliary walk on bad $N_1$ blocks.
Now assume $k>1$. We assume that the lemma holds for $Y_{T^{\prime Y}_{k-1}(h)}$ for every $h$.
(if the block
$\mathcal P^{(k-1)}(Y_{T^{\prime Y}_{k-1}(h-1)})$
is good, then this was already proved above; if it is bad, then this is the induction hypothesis).
Let $l$ be such that $lN^2_{k-1}=(j-1)N_k^2$, and let $m$ be such that $(l+m)N^2_{k-1}=jN_k^2$.
For $h=1,\ldots,m$, let
\[
I_h=Y_{T^{\prime Y}_{k-1}(l+h)}-Y_{T^{\prime Y}_{k-1}(l+h-1)}.
\]
Let $A$ be the event that for every $h=1,\ldots,m$, the walk $\{Y_\ell\}$ leaves
$\mathcal P^{(k-1)}\big(Y_{T^{\prime Y}_{k-1}(l+h-1)}\big)$ through its front. Then by the induction hypothesis,
$P_\omega\left(A|Y_\ell\ ;\ \ell=1,\ldots,T^{\prime Y}_{k}(j-1)\right) = 1-L^{-\xi(1)}$.
Now,
\begin{eqnarray}\label{eq:wntbjk}
\nonumber
&&P_\omega\left(\left.
E^c
\right|Y_\ell\ ;\ \ell=1,\ldots,T^{\prime Y}_{k}(j-1)\right) \\
&\leq&
\nonumber
P_\omega\left(A^c|Y_\ell\ ;\ \ell=1,\ldots,T^{\prime Y}_{k}(j-1)\right)\\
&+& P_\omega\left(E^c|A\ ;\ Y_\ell\ ;\ \ell=1,\ldots,T^{\prime Y}_{k}(j-1)\right),
\end{eqnarray}
and
\begin{eqnarray}\label{eq:wntbjk2}
\nonumber
&&P_\omega\left(E^c\left|
\begin{array}{c}A,\\ Y_\ell:\ell=1,\ldots,T^{\prime Y}_{k}(j-1)\end{array}
\right.\right)\\
\nonumber
&\leq&
P_\omega
\left(\left.
\exists_{1\leq h\leq m}
\left\|\sum_{i=1}^h I_i - hN^2_{k-1}\frac{\vartheta}{\langle \vartheta,e_1\rangle}\right\| > \frac 12N_kR_5(N_k)
\right| \mathbb A
\right)\\
&\leq&
\sum_{h=1}^m
P_\omega
\left(\left.
\left\|\sum_{i=1}^h I_i - hN^2_{k-1}\frac{\vartheta}{\langle \vartheta,e_1\rangle}\right\| > \frac 12N_kR_5(N_k)
\right|\begin{array}{c}A,\\ Y_\ell:\ell=1,\ldots,T^{\prime Y}_{k}(j-1)\end{array}
\right)
\end{eqnarray}
It is sufficient to show that for every $h$, the probability in \eqref{eq:wntbjk2} is $L^{-\xi(1)}$.
Fix $h$.
Conditioned on $A$, the variable $J_i=I_i-N^2_{k-1}\frac{\vartheta}{\langle \vartheta,e_1\rangle}$ is bounded
by $2N_{k-1}R_5(N_{k-1})$. Furthermore, the quenched expectation of $J_i$ conditioned on
$A,J_1,\ldots,J_{i-1}$ and $Y_\ell\ ;\ \ell=1,\ldots,T^{\prime Y}_{k}(j-1)$ is bounded by $N_{k-1}^{\psi/2}$ (see \eqref{eq:choosepsi}).
Therefore, using the Azuma--Hoeffding inequality, we get that
\begin{eqnarray*}
&&P_\omega\left(E^c\left|\begin{array}{c}A,\\ Y_\ell:\ell=1,\ldots,T^{\prime Y}_{k}(j-1)\end{array}\right.\right)\\
&\leq&
C\exp\left(
\frac
{-N_k^2R_5^2(N_k)}
{8N_{k-1}^2R_5^2(N_{k-1})\cdot(N_k/N_{k-1})^2}
\right)\\
&=&
C\exp\left(
\frac
{-R_5^2(N_{k})}
{8R_5^2(N_{k-1})}
\right)\\
&\leq&
C_1\exp\left(-C_2
\exp\left([\log(\rho_1+\ldots+\rho_{k-1}+\rho_k)-\log(\rho_1+\ldots+\rho_{k-1})][\log\log L]^5
\right)
\right)\\
&=&L^{-\xi(1)},
\end{eqnarray*}
where the last inequality follows from the definition of $N_k$, the definition of $R_k(N)$, and a first order Taylor approximation.
\end{proof}
\section{The random direction event}\label{sec:randirev}
In this section we consider an event $W^{(w)}$ which we call the random direction event. First we construct the event $W^{(w)}$. Then we show that the probability that $W^{(w)}$ occurs is more than $u^{\epsilon-{\frac 12}}$, and we establish some estimates on the hitting probabilities of the walk conditioned on the occurrence of $W^{(w)}$. In the next section we will show that these estimates
are sufficient for proving \eqref{eq:toshowomeg}, and thus Theorem \ref{thm:main}.
\ignore{
\begin{enumerate}
\item\label{item:probw}
\item\label{item:probifw}
For every $\upsilon\in W^{(w)}$,
\[
\frac{P_\omega(\forall_n\, X_n=\upsilon_n)}{P_\omega(\forall_n\, Y_n=\upsilon_n)}
>u^{-{\frac 12}-\epsilon}.
\]
\end{enumerate}
}
\subsection{Definition of $W^{(w)}$}\label{sec:wdef}
Let $M=\left[(\log u)^{1-\epsilon}\right]$, and for $k=1,\ldots,\iota$ let
${\mathcal E}_k=\BbbE^0(X_{T_{\partial\mathcal P(0,N_k)}}|T_{\partial\mathcal P(0,N_k)}=T_{\partial^+\mathcal P(0,N_k)})$
be the annealed expectation of the exit point of $\mathcal P(0,N_k)$. Let $A_1=1$, and for every $k>1$, let $A_k$ be the smallest integer such that
$
A_k N_k^2 > (M+A_{k-1})N_{k-1}^2.
$
Note that $A_k\leq M$.
For $k=1,\ldots,\iota$ and $j>A_k$, we define
${\mathbb B}_k(j)$ to be the event that $\{Y_n\}$ leaves $\mathcal P^{(k)}(Y_{T^{\prime Y}_k(j-1)})$ through
$\partial^+\mathcal P^{(k)}(Y_{T^{\prime Y}_k(j-1)})$.
Fix $w\in[-1,1]^{d-1}$.
For $j>A_k$ we then define the event $W^{(w)}_k(j)$ as follows:
\[
W^{(w)}_k(j)=\big\{\|Y_{T^{\prime Y}_k(j)} - Y_{T^{\prime Y}_k(A_k)} - (j-A_k){\mathcal E}_k-w(j-A_k)N_k\|<N_k\big\}
.\]
We then set
\[
W^{(w)}_k=\bigcap_{j=A_k+1}^{A_k+M} [W^{(w)}_k(j)\cap {\mathbb B}_k(j)],
\]
and $W^{(w)}$ is defined to be the intersection
\[
W^{(w)}=\bigcap_{k=1}^\iota W^{(w)}_k.
\]
\subsection{The probability of $W^{(w)}$}
In this subsection we bound from below the probability of the event $W^{(w)}$.
\begin{lemma}\label{lem:wkj}
\begin{enumerate}
\item\label{item:wkj}
There exists some $\rho>0$ such that for $1\leq k\leq \iota$ and $A_k<j\leq A_k+M$,
\[
P_{\omega}
\big(
W^{(w)}_k(j)\,
\big|\,W^{(w)}_1,\ldots,W^{(w)}_{k-1},W^{(w)}_k(A_k+1),\ldots,W^{(w)}_k(j-1),
{\mathbb B}_k(A_k+1),\ldots,{\mathbb B}_k(j-1)
\big)>\rho
\]
\item\label{item:bkj}
For $1\leq k\leq \iota$ and $A_k<j\leq A_k+M$,
\[
P_{\omega}
\big(
{\mathbb B}_k(j)\,
\big|\,W^{(w)}_1,\ldots,W^{(w)}_{k-1},W^{(w)}_k(A_k+1),\ldots,W^{(w)}_k(j-1),
{\mathbb B}_k(A_k+1),\ldots,{\mathbb B}_k(j-1)
\big) = 1-o(1).
\]
\end{enumerate}
\end{lemma}
\begin{proof} For Part \ref{item:wkj}: conditioned on
$W^{(w)}_1\cap\ldots\cap W^{(w)}_{k-1}\cap W^{(w)}_k(A_k+1)\cap\ldots\cap W^{(w)}_k(j-1),
{\mathbb B}_k(A_k+1),\ldots,{\mathbb B}_k(j-1)$,
we get that
\[
\|Y_{T^{\prime Y}_k(j-1)} - Y_{T^{\prime Y}_k(A_k)} - (j-1-A_k){\mathcal E}_k-w(j-1-A_k)N_k\|<N_k
.\]
Therefore,
\[
\|Y_{T^{\prime Y}_k(A_k)} + (j-A_k){\mathcal E}_k+w(j-A_k)N_k - (Y_{T^{\prime Y}_k(j-1)}+{\mathcal E}_k)\|<4N_k.
\]
By Corollary \ref{cor:entropy} and the definition of $W^{(w)}_k(j)$,
we get that
\begin{equation*}
P_{\omega}
\big(
W^{(w)}_k(j)\,
\big|\,W^{(w)}_1,\ldots,W^{(w)}_{k-1},W^{(w)}_k(A_k+1),\ldots,W^{(w)}_k(j-1),
{\mathbb B}_k(A_k+1),\ldots,{\mathbb B}_k(j-1)
\big)>\rho
\end{equation*}
as desired.
Part \ref{item:bkj} follows from Lemma \ref{lem:bjk}.
\end{proof}
As a result of Lemma \ref{lem:wkj} and the choice of $M$, we get the following lemma:
\begin{lemma}\label{lem:probw}
The probability of $W^{(w)}$ is bounded from below by $u^{\epsilon-1/2}$.
\end{lemma}
\subsection{Hitting probability estimates}\label{sec:entropy} In this subsection we bound from above the probability, conditioned on $W^{(w)}$, that a given block is hit.
We begin with a simple claim.
\begin{claim}\label{claim:hitpar}
Fix $k$ between $1$ and $\iota$, and let
\[
A_k+M\leq j\leq \left(\frac{N_{k+1}}{N_k}\right)^2(A_{k+1}+M).
\]
Let $z\in\mathcal L_{N_k}\cap U_{k}(j)$.
Then,
\begin{equation}\label{eq:intw}
\mathop\int_{[-1,1]^{d-1}}
P_\omega\left(
\{Y_n\}\cap\mathcal P(z,N_k)\neq\emptyset\, |\, W^{(w)}
\right)dw
\leq (\log u)^{(1-d)(1-2\epsilon)}.
\end{equation}
\end{claim}
\begin{proof}
First note that there exists $A_{k+1}<j^\prime\leq A_{k+1}+M$
and $z^\prime\in\mathcal L_{N_{k+1}}\cap U_{k+1}(j^\prime)$ such that
$\mathcal P(z,N_k)\subseteq\mathcal P(z^\prime,N_{k+1})$.
Then by the definition of $W^{(w)}$ (and using the fact that $W^{(w)}$ implies ${\mathbb B}_{k+1}(j^\prime)$), the probability
\[
P_\omega\left(\{Y_n\}\cap\mathcal P(z,N_k)\neq\emptyset\, |\, W^{(w)}\right)
\]
is positive only if
\[
\|z^\prime - M\mathcal E_{k}-j^\prime \mathcal E_{k+1}
-MwN_k-j^\prime wN_{k+1}
\|<N_{k+1}R_5(N_{k+1})
\]
and in particular $w$ needs to lie in a region whose side length is at most
$\frac{N_{k+1}R_5(N_{k+1})}{MN_k}\leq M^{-1}L^{\chi}$
and thus the integral in \eqref{eq:intw} is bounded by $(\log u)^{(1-d)(1-2\epsilon)}$.
\end{proof}
\begin{lemma}\label{lem:hitpar}
Fix $k$ between $1$ and $\iota$, and let $j>A_k+M$. Let $z\in\mathcal L_{N_k}\cap U_{k}(j)$.
Then,
\[
\int_{[-1,1]^{d-1}}
P_\omega\left(
\{Y_n\}\cap\mathcal P(z,N_k)\neq\emptyset\, |\, W^{(w)}
\right)dw
\leq (\log u)^{(1-d)(1-2\epsilon)}.
\]
\end{lemma}
\begin{proof}
For $j<\left(\frac{N_{k+1}}{N_k}\right)^2(A_{k+1}+M)$, this follows from Claim \ref{claim:hitpar}. If
$j\geq \left(\frac{N_{k+1}}{N_k}\right)^2(A_{k+1}+M)$, then there exists $k^\prime>k$ and
$z^\prime\in\mathcal L_{N_{k^\prime}}$ such that $z^\prime\in U_{k^\prime}(j^\prime)$ with
\[
A_{k^\prime}+M\leq j^\prime\leq \left(\frac{N_{{k^\prime}+1}}{N_{k^\prime}}\right)^2(A_{{k^\prime}+1}+M)
\]
and $\mathcal P(z,N_k)\subseteq\mathcal P(z^\prime,N_{k^\prime})$. Then by Claim \ref{claim:hitpar} applied to $k^\prime$ we get that
\begin{eqnarray*}
&&\int_{[-1,1]^{d-1}}P_\omega\left(
\{Y_n\}\cap\mathcal P(z,N_k)\neq\emptyset\, |\, W^{(w)}
\right)dw\\
&\leq&\int_{[-1,1]^{d-1}}P_\omega\left(
\{Y_n\}\cap\mathcal P(z^\prime,N_{k^\prime})\neq\emptyset\, |\, W^{(w)}
\right)dw\\
&\leq& (\log u)^{(1-d)(1-2\epsilon)}.
\end{eqnarray*}
\end{proof}
\subsection{Expected number of bad {blocks} that are visited}
Fix $k$. Let
\[
\mathcal D(k)=\left\{
z\in\mathcal L_{N_k} \cap B_{2L} \,\left|\,
\mathcal P(z,N_k) \mbox{ is not good}
\right.\right\},
\]
and let
\[
\mathcal B(k)=\#\big\{
z\in\mathcal D(k)\,\left|\,
\{Y_\ell\}\cap \mathcal P(z,N_k) \neq \emptyset
\right.\big\}.
\]
We are interested in the distribution of the variable $\mathcal B(k)$.
\begin{lemma}\label{lem:sizebk}
Fix $k$ and $\omega\in G$. Then
\begin{equation*}
\int_{[-1,1]^{d-1}}E_\omega\left(\left.
\mathcal B(k)\, \right|\,
W^{(w)}
\right)dw
\leq 3(\log u)^{1-\epsilon}.
\end{equation*}
\end{lemma}
\begin{proof}
Let
\[
\mathcal D^{(1)}(k)=\mathcal D(k)\cap\{z\,:\,\langle z,e_1\rangle\leq N_k^2(A_k+M)\}
\]
and
\[
\mathcal D^{(2)}(k)=\mathcal D(k)\cap\{z\,:\,\langle z,e_1\rangle> N_k^2(A_k+M)\},
\]
and for $i=1,2$ let
\[
\mathcal B^{(i)}(k)=\left|\left\{
z\in\mathcal D^{(i)}(k)\,\left|\,
\{Y_\ell\}\cap \mathcal P(z,N_k) \neq \emptyset
\right.\right\}\right|.
\]
Then $\{Y_\ell\}$ visits no more than $A_k+M$ elements of $\mathcal D^{(1)}(k)$, and thus
$\mathcal B^{(1)}(k)\leq A_k+M\leq 2(\log u)^{1-\epsilon}$.
Let $z\in\mathcal D^{(2)}(k)$. Then by Lemma \ref{lem:hitpar},
\[
\int_{[-1,1]^{d-1}}P_\omega\left(
\{Y_n\}\cap\mathcal P(z,N_k)\neq\emptyset\, |\, W^{(w)}
\right)dw
\leq (\log u)^{(1-d)(1-2\epsilon)}.
\]
Therefore, using \eqref{eq:defgood} and \eqref{eq:chooseepsilon}, we get that
\begin{eqnarray*}
\int_{[-1,1]^{d-1}}E_\omega\big(\mathcal B^{(2)}(k)\,|\,W^{(w)}\big)dw
&\leq& (\log u)^{\alpha +\epsilon+(1-d)(1-2\epsilon)}\\
&=& (\log u)^{\alpha-d + 1 + (2d-1) \epsilon}
\ \leq\ (\log u)^{1 - \epsilon}.
\end{eqnarray*}
Combined, we get that
\begin{eqnarray*}
&&\int_{[-1,1]^{d-1}}
E_\omega\left(\left.
\mathcal B(k)\, \right|\,
W^{(w)}
\right)dw\\
&\leq&
\int_{[-1,1]^{d-1}}
E_\omega\left(\left.
\mathcal B^{(1)}(k)\, \right|\,
W^{(w)}
\right)dw\\
&+&\int_{[-1,1]^{d-1}}
E_\omega\left(\left.
\mathcal B^{(2)}(k)\, \right|\,
W^{(w)}
\right)dw\\
&\leq& 3(\log u)^{1-\epsilon}.
\end{eqnarray*}
\end{proof}
\section{Proof of main result}\label{sec:prfmain}
In this section we prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
By Lemma \ref{lem:sizebk},
\begin{equation*}
\int_{[-1,1]^{d-1}}
E_\omega\left(\left.
\sum_{k=1}^\iota \mathcal B(k)\, \right|\,
W^{(w)}
\right)dw
\leq 3\iota(\log u)^{1-\epsilon}.
\end{equation*}
Therefore, there exists $w$ such that
\begin{equation*}
E_\omega\left(\left.
\sum_{k=1}^\iota \mathcal B(k)\, \right|\,
W^{(w)}
\right)
\leq 3\iota(\log u)^{1-\epsilon}.
\end{equation*}
We now fix $w$ to be such a value.
Let
\begin{equation}\label{eq:wbarint}
\bar W=W^{(w)}\bigcap \left\{\sum_{k=1}^\iota \mathcal B(k) \leq 6\iota(\log u)^{1-\epsilon} \right\}.
\end{equation}
Then by Markov's inequality,
$P_\omega(\bar W)\geq 0.5P_\omega(W^{(w)})\geq \frac 12u^{\epsilon - 1/2}.$
Note that there is a set $V$ of paths, such that
\[
\bar W=\big\{ \{Y_n\}\in V \big\},
\]
and for every $\upsilon\in V$, by Lemma \ref{lem:xnyn} and by \eqref{eq:wbarint} and the choice of $\chi$ and $\psi$ (\eqref{eq:choosepsi},\eqref{eq:choosechi}),
\begin{equation*}
\frac{
{P_\omega}\left(
X_j=\upsilon_j \mbox{ for all } j<N_\upsilon
\right)
}{
{P_\omega}\left(
Y_j=\upsilon_j \mbox{ for all } j<N_\upsilon
\right)
}
\geq
\frac{1}{2}\eta^{
(\iota + 2 + 6\iota (\log u)^{1-\epsilon}) L^{3\chi + 4\psi}
}
\geq u^{\epsilon - 1/2}.
\end{equation*}
Therefore,
\[
P_\omega(\{X_n\}\in V)
\geq u^{\epsilon - 1/2} P_\omega(\{Y_n\}\in V)
\geq u^{\epsilon - 1}
\]
Every path in $V$ reaches $\partial^+B_{2L}$ before returning to $0$, and therefore we get
\eqref{eq:toshowomeg}, from which we get Proposition \ref{prop:main} and Theorem \ref{thm:main}.
\end{proof}
\section*{Acknowledgment}
I wish to thank A.-S.~Sznitman for introducing this problem to me, and to thank
G.~Kozma, T.~Schmitz and O.~Zeitouni for useful discussions.
In addition I thank O.~Zeitouni for suggesting that I use the methods of
\cite{gantertzeitouni} in order to
prove Corollary \ref{cor:mainquenched}.
A very careful and detailed referee report contributed significantly to the quality of the presentation, and I am grateful for that.
\chapter{Strong Recurrence of Recurrent RWRE} \label{chap1}
\section{Introduction}
In \cite{CP}, Comets and Popov also consider the return probabilities of the one-dimensional recurrent RWRE on $\Z$. In contrast to our setting, they consider the corresponding jump process in continuous time $(\xi^x_t)_{t \ge 0}$ started at $x \in \Z$ and with jump rates $(\omega_x^+,\omega_x^-)_{x \in \Z}$ to the right and left neighbouring sites. One advantage of this continuous-time process is that, unlike the RWRE in discrete time, it is not periodic.
As one result, they show the following (under two conditions on the environment $(\omega_x^+,\omega_x^-)_{x \in \Z}$):
\textbf{Theorem} (cf.\ Corollary 2.1 and Theorem 2.2 in \cite{CP})\textbf{.} \textit{We have}
\[
\frac{\log P_{\omega}(\xi_t^0 = 0)}{\log t} \xrightarrow{t \to \infty} - \widehat{a}_e
\]
\textit{in law, where $\widehat{a}_e$ has the density}
\[
p(z)=\begin{cases}
2 - z - (z+2) \cdot e^{-2z} & \text{if } z \in (0,1) \\
\big([e^2-1] \cdot z - 2 \big) \cdot e^{-2z} & \text{if } z \ge 1.
\end{cases}
\]
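
As a quick consistency check, which is not needed in what follows, one can verify numerically that $p$ is indeed a probability density. The following minimal Python sketch is purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def p(z):
    # density of hat{a}_e from the theorem of Comets and Popov quoted above
    if 0 < z < 1:
        return 2 - z - (z + 2) * np.exp(-2 * z)
    if z >= 1:
        return ((np.e ** 2 - 1) * z - 2) * np.exp(-2 * z)
    return 0.0

total_mass = quad(p, 0, 1)[0] + quad(p, 1, np.inf)[0]
print(total_mass)   # prints a value very close to 1.0
\end{verbatim}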
Since we can embed the recurrent discrete-time RWRE $(X_n)_{n \in \N_0}$ into the corresponding jump process in continuous time, we can expect the return probabilities to behave similarly to those in the continuous-time setting. In particular, for $\p$-a.e.\ environment $\omega$, we expect
\begin{align*} \label{preI.1}
P_{\omega}(X_{2n} = 0) =: n^{-a(\omega,n)}
\end{align*}
with
\begin{align*}
& \liminf_{n \to \infty} a(\omega,n) = 0,\ \limsup_{n \to \infty} a(\omega,n) = \infty.
\end{align*}
In order to answer the question of recurrence and transience for particular examples of RWRE with different state spaces, one needs to know more about the decay of $(P_{\omega}(X_{2n}=0))_{n \ge 0}$ for fixed environment $\omega$. For the examples included in Section \ref{I-sec1.6}, the following two statements will be helpful (cf.\ Theorems \ref{I-Rthm1} and \ref{I-Rthm2}): For $\p$-a.e.\ environment $\omega$, we have
\begin{alignat*}{4}
& \sum_{n \in \N} P_{\omega}(X_{2n}=0) \cdot n^{-\alpha}= \infty &\qquad & \text{for } 0 \le \alpha < 1 \text{ and} & \\
& \sum_{n \in \N} \Big(P_{\omega}(X_{2n}=0)\Big)^{\alpha} = \infty &\qquad & \text{for } \alpha > 0.&
\end{alignat*}
The structure of this paper is the following: In Section \ref{I-sec1.2}, we introduce the model of a RWRE on $\Z$ together with the notation which we use in this paper. Then we collect some useful equalities and inequalities in the context of RWRE in Section \ref{I-sec1.3} before we state our main result in Section \ref{I-sec1.4}. Section \ref{I-sec1.5} contains the proofs of our main results. The main tool for our proofs is a careful analysis of the corresponding potential of the RWRE
(cf.\ \eqref{I-RWRE.a}). To this end, we introduce favourable ``valleys'' (cf.\ Figure \ref{I-figure1} on page \pageref{I-figure1}), which help us to derive lower bounds for the quenched return probabilities of the RWRE to the origin. In the last Section \ref{I-sec1.6}, we give some examples for recurrent RWRE with a multidimensional state space. In particular Corollary \ref{I-cor5.3a} and Corollary \ref{I-cor5.3} give reason for the part ``strong recurrence'' in the title of this paper when we compare the behaviour of RWRE with the behaviour of a symmetric random walk on $\Z^d$.
\section{Model and Notation} \label{I-sec1.2}
Let us first introduce the notation for a one-dimensional random walk in random environment (RWRE):\\
Let $\omega = (\omega_x)_{x \in \Z}$ be a sequence of i.i.d.\ random variables taking values in $(0,1)$ with respect to some probability measure $\p$. For $i \in \Z$ we define
\[
\rho_i = \rho_i(\omega):= \frac{1-\omega_i}{\omega_i}.
\]
In the following, we will assume that
\begin{align}
& \E[ \log \rho_0]= 0, \label{I-ass1} \vphantom{\int}\\
& \p(\varepsilon \le \omega_0 \le 1 -\varepsilon) = 1 \text{ for some } \varepsilon \in \left(0, \tfrac12 \right), \label{I-ass2} \vphantom{\int}\\
& \mathop{\mathsf{Var}} (\log \rho_0) > 0. \label{I-ass3} \vphantom{\int}
\end{align}
Here, \eqref{I-ass1} ensures that the RWRE is recurrent. The second assumption is a common technical condition in the context of RWRE. Further, the third assumption excludes the case of a symmetric random walk on $\Z$. A simple example of an environment law satisfying \eqref{I-ass1}--\eqref{I-ass3} is $\p(\omega_0 = \tfrac13)=\p(\omega_0 = \tfrac23)=\tfrac12$, for which $\log \rho_0 = \pm \log 2$ with equal probability.
For each environment $\omega$, we can introduce the random walk $(X_n)_{n \in \N_0}$ whose transition probabilities are determined by $(\omega_x)_{x \in \Z}$. More precisely for every $x \in \Z$, $(X_n)_{n \in \N_0}$ is a Markov chain with respect to $P_{\omega}^{x}$ determined by
\begin{align}
& P_{\omega}^{x}(X_0=x) = 1 \nonumber \vphantom{\int},\\
& P_{\omega}^{x}(X_{n+1} = y+1|X_n=y) = \omega_y = 1 - P_{\omega}^{x}(X_{n+1} = y-1|X_n=y) \quad \forall y \in \Z. \label{I-RWRE} \vphantom{\int}
\end{align}
As usual, we use $P_{\omega}^{o}$ instead of $P_{\omega}^{0}$ and will even drop the superscript $o$ where no confusion is to be expected. We can now define the potential $V$ as
\begin{align} \label{I-RWRE.a}
V(x):= \begin{cases} \sum\limits_{i=1}^x \log \rho_i & \text{for }x=1,2,\ldots \\ 0 & \text{for }x=0 \\ \sum\limits_{i=x+1}^{0} \log (\rho_i)^{-1} & \text{for }x=-1,-2,\ldots\ . \end{cases}
\end{align}
Note that $V(x)$ is a sum of i.i.d.\ random variables which are centred and which are bounded in absolute value by $C:=\log (1-\varepsilon) - \log \varepsilon > 0$ due to the assumptions \eqref{I-ass1} and \eqref{I-ass2}. One of the crucial facts for the RWRE is that, for fixed $\omega$, the random walk is a reversible Markov chain and can therefore be described as an electrical network. The conductances are given by
\[
C_{(x,x+1)}(\omega) = e^{-V(x)} = \begin{cases} \prod\limits_{i=1}^{x} (\rho_i)^{-1} & \text{for }x=1,2,\ldots \\ 1 & \text{for }x=0 \\ \prod\limits_{i=x+1}^{0} \rho_i & \text{for }x=-1,-2,\ldots\end{cases}
\]
and the reversible measure which is unique up to multiplication by a constant is given by
\begin{align} \label{I-eq1.1}
\mu_{\omega}(x)= e^{-V(x)} + e^{-V(x-1)} = \begin{cases} \prod\limits_{i=1}^{x-1} \frac{\omega_i}{1-\omega_i} \cdot \frac{1}{1-\omega_x} & \text{for }x=1,2,\ldots \\ \frac{1}{\omega_0} & \text{for } x=0 \\ \prod\limits_{i=x+1}^{0} \frac{1-\omega_i}{\omega_i} \cdot \frac{1}{\omega_x} & \text{for }x=-1,-2,\ldots\ . \end{cases}
\end{align}
As a consequence of the reversibility, we conclude that we have
\begin{equation}
\label{I-eq1}
\mu_{\omega}(x) \cdot P^x_{\omega}(X_n = y) = \mu_{\omega}(y) \cdot P^y_{\omega}(X_n = x)
\end{equation}
for all $n \in \N_0$ and $x,y \in \Z$.
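
To make the objects above concrete, the following Python sketch is a purely illustrative computation; the environment law $\p(\omega_0=\tfrac13)=\p(\omega_0=\tfrac23)=\tfrac12$ and the window size are hypothetical choices satisfying \eqref{I-ass1}--\eqref{I-ass3}. It builds the potential $V$, the conductances and the measure $\mu_\omega$ on a finite window and checks the detailed balance relation underlying \eqref{I-eq1}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 50                                      # window {-M, ..., M}, hypothetical size
omega = rng.choice([1/3, 2/3], size=2*M + 1)    # omega[x + M] corresponds to omega_x
logrho = np.log((1 - omega) / omega)

def V(x):
    # potential as in (I-RWRE.a)
    if x > 0:
        return sum(logrho[i + M] for i in range(1, x + 1))
    if x < 0:
        return -sum(logrho[i + M] for i in range(x + 1, 1))
    return 0.0

# conductances C_{(x,x+1)} = exp(-V(x)) and reversible measure mu_omega
C = {x: np.exp(-V(x)) for x in range(-M, M)}
mu = {x: np.exp(-V(x)) + np.exp(-V(x - 1)) for x in range(-M + 1, M)}

# detailed balance: mu(x) * omega_x = mu(x+1) * (1 - omega_{x+1}) = C_{(x,x+1)}
for x in range(-M + 1, M - 1):
    left = mu[x] * omega[x + M]
    right = mu[x + 1] * (1 - omega[x + 1 + M])
    assert np.isclose(left, right) and np.isclose(left, C[x])
print("detailed balance holds on the whole window")
\end{verbatim}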
\section{Preliminaries} \label{I-sec1.3}
In the following, we collect some useful properties of the RWRE. For the random time of the first arrival in $x$
\begin{equation}\label{I-def1}
\tau(x):= \inf \{n \ge 0:\ X_n=x\},
\end{equation}
the interpretation of the RWRE as an electrical network helps us to compute the following probability for $x < y < z$ (for a proof see for example formula (2.1.4) in \cite{Zei}):
\begin{equation}
\label{I-prel1}
P^y_{\omega}(\tau(z) < \tau(x)) = \frac{\sum\limits_{j=x}^{y-1} e^{V(j)}}{\sum\limits_{j=x}^{z-1} e^{V(j)}}
\end{equation}
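
Before moving on, let us illustrate \eqref{I-prel1}. The following Python sketch is illustrative only; the interval, the starting point and the toy environment law are hypothetical choices. It compares the right-hand side of \eqref{I-prel1} with a direct Monte Carlo estimate of the exit probability.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x, y, z = -5, 0, 7                      # hypothetical choice with x < y < z
omega = {i: rng.choice([1/3, 2/3]) for i in range(x, z + 1)}
logrho = {i: np.log((1 - omega[i]) / omega[i]) for i in omega}

V = {0: 0.0}
for i in range(1, z + 1):
    V[i] = V[i - 1] + logrho[i]
for i in range(-1, x - 1, -1):
    V[i] = V[i + 1] - logrho[i + 1]

# right-hand side of (I-prel1)
formula = sum(np.exp(V[j]) for j in range(x, y)) / \
          sum(np.exp(V[j]) for j in range(x, z))

# Monte Carlo estimate of P^y(tau(z) < tau(x))
hits, trials = 0, 20000
for _ in range(trials):
    pos = y
    while pos not in (x, z):
        pos += 1 if rng.random() < omega[pos] else -1
    hits += (pos == z)
print(formula, hits / trials)           # the two numbers should roughly agree
\end{verbatim}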
Further (cf.\ (2.4) and (2.5) in \cite{SZ} and Lemma 7 in \cite{Gol}), we have for $k \in \N$ and $y < z$
\begin{align}
\label{I-prel2}
& P^y_{\omega}(\tau(z) < k) \le k \cdot \exp \left( - \max_{y \le i < z} \big[V(z-1)-V(i) \big] \right)
\intertext{and similarly for $x < y$}
\label{I-prel3}
& P^y_{\omega}(\tau(x) < k) \le k \cdot \exp \left( - \max_{x < i \le y} \big[V(x+1)-V(i) \big] \right).
\end{align}
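
The bound \eqref{I-prel2} can be explored in the same spirit. The following sketch is again purely illustrative (all parameters and the environment law are hypothetical) and simply compares a Monte Carlo estimate of $P^y_{\omega}(\tau(z)<k)$ with the right-hand side of \eqref{I-prel2}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
y, z, k = 0, 6, 40                      # hypothetical choices with y < z
omega = {i: rng.choice([1/3, 2/3]) for i in range(y - k, z + 1)}

V = {0: 0.0}
for i in range(1, z + 1):
    V[i] = V[i - 1] + np.log((1 - omega[i]) / omega[i])

# right-hand side of (I-prel2)
bound = k * np.exp(-max(V[z - 1] - V[i] for i in range(y, z)))

hits, trials = 0, 20000
for _ in range(trials):
    pos = y
    for _ in range(k - 1):              # tau(z) < k iff z is reached within k-1 steps
        pos += 1 if rng.random() < omega[pos] else -1
        if pos == z:
            hits += 1
            break
print(hits / trials, "<=", min(bound, 1.0))
\end{verbatim}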
To get bounds for large values of $\tau(\cdot)$, we can use that for $x < y < z$ we have (cf.\ Lemma 2.1 in \cite{SZ})
\begin{equation}
\label{I-prel4}
E_{\omega}^y[\tau(z) \cdot \mathds{1}_{\{\tau(z) < \tau(x)\}}] \le (z-x)^2 \cdot \exp \left( \max_{x\le i \le j \le z} \big(V(j) - V(i)\big) \right).
\end{equation}
Further, the Koml{\'o}s-Major-Tusn{\'a}dy strong approximation theorem (cf.\ Theorem 1 in \cite{KMT}, see also formula (2) in \cite{CP}) will help us to compare the shape of the potential with the paths of a two-sided Brownian motion:\\
\begin{thm} \label{I-Komlos}
In a possibly enlarged probability space, there exists a version of our environment process $\omega$ and a two-sided Brownian motion $(B(t))_{t \in \R}$ with diffusion constant $\sigma:=~(\mathop{\mathsf{Var}}(\log \rho_0))^{\frac12}$ (i.e.\ $\mathop{\mathsf{Var}}(B(t))=\sigma^2 |t|$) such that for some $K>0$ we have
\begin{equation}
\label{I-approx}
\p \left( \limsup_{x \to \pm \infty} \frac{|V(x)-B(x)|}{\log |x|} \le K \right) =1.
\end{equation}
\end{thm}
\section{Results} \label{I-sec1.4}
Let us consider a RWRE $(X_n)_{n \in \N_0}$ on $\Z$ where the law of the environment $\omega=(\omega_x)_{x \in \Z}$ fulfils the assumptions $\eqref{I-ass1}, \eqref{I-ass2}$, and $\eqref{I-ass3}$. Then, the following two theorems hold:\\
\begin{thm} \label{I-Rthm1}
For $0 \le \alpha < 1$, we have
\begin{equation}
\label{I-thm1}
\sum_{n \in \N} P_{\omega}(X_{2n}=0) \cdot n^{-\alpha}= \infty
\end{equation}
for $\p$-a.e.\ environment $\omega$.
\end{thm}\vspace{12pt}
\begin{thm} \label{I-Rthm2}
For all $\alpha > 0$, we have
\begin{equation}
\label{I-thm2}
\sum_{n \in \N} \Big(P_{\omega}(X_{2n}=0)\Big)^{\alpha} = \infty
\end{equation}
for $\p$-a.e.\ environment $\omega$.
\end{thm}
\noindent For the last theorem we consider a combination of $d$ environments:\\
\begin{thm} \label{I-Rthm3}
For $d \in \N$, consider $d$ i.i.d.\ environments $\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(d)}$ which all fulfil the assumptions $\eqref{I-ass1}, \eqref{I-ass2}$, and $\eqref{I-ass3}$. Then, we have
\begin{equation}
\label{I-eqcor2}
\sum_{n \in \N} \prod_{k=1}^{d} P_{\omega^{(k)}}(X_{2n}=0) = \infty
\end{equation}
for $\p^{\otimes d}$-a.e.\ environment $(\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(d)})$.
\end{thm}
\begin{rmk}
A proof for Theorem \ref{I-Rthm3} can also be found in \cite{Zei} after Lemma A.2. The proof there uses the Nash-Williams inequality in the context of electrical networks.
\end{rmk}
\section{Proofs} \label{I-sec1.5}
Let us first introduce the sets $\Gamma^{+}(L,\delta)$ and $\Gamma^{-}(L,\delta)$ of environments for $L \in \N$ and \mbox{$0 < \delta < 1$} defined by
\begin{align*}
&\Gamma^{+}(L,\delta):=\{R^{+}_1(L) \le \delta L,\, R^{+}_2(L) \le \delta L,\, T^{+}(L) \le L^2\} \vphantom{\inf_{0 \le k \le T^{+}(L)}},\\
&\Gamma^{-}(L,\delta):=\{R^{-}_1(L) \le \delta L,\, R^{-}_2(L) \le \delta L,\, -T^{-}(L) \le L^2\} \vphantom{\inf_{0 \le k \le T^{+}(L)}},
\end{align*}
where
\begin{align*}
& T^{+}(L):= \inf \{n \ge 0:\ V(n) - \min_{0 \le k \le n } V(k) \ge L\} \vphantom{\inf_{0 \le k \le T^{+}(L)}},\\
& T^{-}(L):= \sup \{n \le 0:\ V(n) - \min_{n \le k \le 0} V(k) \ge L\} \vphantom{\inf_{0 \le k \le T^{+}(L)}},\\
& R^{+}_1(L):= - \min_{0 \le k \le T^{+}(L)} V(k), \\
& R^{-}_1(L):= - \min_{T^{-}(L)\le k \le 0} V(k), \\
& T^{+}_b(L):= \inf \{n \ge 0:\ V(n) = - R^{+}_1(L)\} \vphantom{\inf_{0 \le k \le T^{+}(L)}},\\
& T^{-}_b(L):= \sup \{n \le 0:\ V(n) = - R^{-}_1(L)\} \vphantom{\inf_{0 \le k \le T^{+}(L)}},\\
& R^{+}_2(L):= \max_{0 \le k \le T^{+}_b(L)} V(k), \\
& R^{-}_2(L):= \max_{T^{-}_b(L) \le k \le 0} V(k). \vphantom{\inf_{0 \le k \le T^{+}(L)}}
\end{align*}
Here, the $+$-sign and the $-$-sign indicate whether we deal with properties of the valley on the positive or negative half-line, respectively. Note that the definition of the sets $\Gamma^{+}(L,\delta)$ and $\Gamma^{-}(L,\delta)$ is compatible with the scaling of a Brownian motion in space and time (cf.\ \eqref{I-eq16a}).
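
For concreteness, the quantities $T^{+}(L)$, $T^{+}_b(L)$, $R^{+}_1(L)$ and $R^{+}_2(L)$ can be read off from a realisation of the potential as in the following Python sketch. This is a purely illustrative computation with a hypothetical toy environment and hypothetical parameters; only the positive half-line is treated, the negative half-line being completely analogous.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
L, delta, span = 20, 0.2, 200_000        # hypothetical parameters
# toy environment: log rho_i = +/- log 2 with equal probability
logrho = rng.choice([np.log(2.0), -np.log(2.0)], size=span + 1)
V = np.concatenate(([0.0], np.cumsum(logrho[1:])))   # V(0), ..., V(span)

run_min = np.minimum.accumulate(V)
T_plus = int(np.argmax(V - run_min >= L))  # T^+(L); span is assumed large enough
R1 = -float(run_min[T_plus])               # R_1^+(L)
T_b = int(np.argmin(V[:T_plus + 1]))       # T_b^+(L), first time the minimum is attained
R2 = float(np.max(V[:T_b + 1]))            # R_2^+(L)

print(T_plus, T_b, R1, R2)
print("Gamma^+ condition:", R1 <= delta * L and R2 <= delta * L and T_plus <= L**2)
\end{verbatim}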
\begin{figure}[ht]
\vspace{45pt} \hspace{2cm}
\includegraphics [viewport=105 450 380 675, scale=1.05]{pic9}
\caption{Shape of a valley of an environment in $\Gamma(L,\delta):=\Gamma^{+}(L,\delta) \cap \Gamma^{-}(L,\delta)$} \label{I-figure1}
\end{figure}
\begin{rmk}
We have constructed the valleys in such a way that the return probability of the random walk to the origin is high (or bounded from below as we will see) for even time points as long as the random walk has not left the valley. For $\omega \in \Gamma^{+}(L,\delta) \cap \Gamma^{-}(L,\delta)$, we have the following behaviour for the random walk $(X_n)_{n \in \N_0}$ in the environment $\omega$:
\begin{enumerate}
\item Since we have $V(T^{-}(L)) - V(T^{-}_b(L)) \ge L$ and $V(T^{+}(L)) - V(T^{+}_b(L)) \ge L$, the random walk $(X_n)_{n \in \N_0}$ stays within $\{T^{-}(L), T^{-}(L) + 1, \ldots, T^{+}(L)\}$ for (approximately) at least $\exp(L)$ steps (cf.\ \eqref{I-eq4a}).
\item Within the area $\{T^{-}(L), T^{-}(L) + 1, \ldots, T^{+}(L)\}$, the random walk prefers to stay at positions $x$ with a small potential $V(x)$, i.e.\ at positions close to the bottom points at $T^{-}_b(L)$ and $T^{+}_b(L)$.
\item The return probability for the random walk from the positions $T^{-}_b(L)$ and $T^{+}_b(L)$ to the origin is mainly influenced by the potential differences $R_2^{-}(L)+R_1^{-}(L) \le 2 \delta L$ and $R_2^{+}(L)+R_1^{+}(L) \le 2 \delta L$ respectively, i.e.\ by the height of the climb the random walk has to overcome to get from the bottom points back to the origin (cf.\ \eqref{I-eq2}); a numerical sketch of this mechanism follows the remark.
\end{enumerate}
\end{rmk}
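
The following Python sketch illustrates this behaviour numerically. It is not part of the proof: the environment law is a hypothetical toy choice, and the walk is reflected at the edge of a finite window only so that the transition kernel becomes a finite matrix (for the time horizons shown, the reflection is essentially never felt when the origin sits inside a deep valley).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
M = 60                                    # hypothetical window {-M, ..., M}
omega = rng.choice([1/3, 2/3], size=2*M + 1)   # site x corresponds to index x + M

# transition matrix of the walk, reflected at -M and M for computability
P = np.zeros((2*M + 1, 2*M + 1))
for i in range(1, 2*M):
    P[i, i + 1] = omega[i]
    P[i, i - 1] = 1 - omega[i]
P[0, 1] = 1.0
P[2*M, 2*M - 1] = 1.0

dist = np.zeros(2*M + 1)
dist[M] = 1.0                             # the walk starts at the origin
for n in range(1, 201):
    dist = dist @ P @ P                   # advance two steps at a time
    if n % 50 == 0:
        print(n, dist[M])                 # return probability at time 2n (up to reflection)
\end{verbatim}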
\begin{prop} \label{I-prop1}
For $\omega \in \Gamma(L,\delta):=\Gamma^{+}(L,\delta) \cap \Gamma^{-}(L,\delta)$ with $0 < \delta < \tfrac15$, we have
\begin{equation}
\label{I-lem1}
P_{\omega} (X_{2n}=0) \ge C \cdot \exp(-3\delta L)
\end{equation}
for \[ \exp (3 \delta L ) \le n \le \exp\big((1-2\delta)L\big),\]
where the constant $C=C(\delta)$ does not depend on $L$.
\end{prop}
\begin{pfof}{Proposition \ref{I-prop1}}
The construction of ``valleys'' has been useful for the proofs of many theorems in the context of RWRE. Our construction uses some ideas from \cite{CP}, where it is shown that the transition probabilities of a RWRE in continuous time converge in distribution. Since we deal with a RWRE in discrete time and we want to have lower estimates for the return probabilities for a fixed environment in Proposition \ref{I-prop1}, we will have to adapt the construction to our setting:\\[10pt]
The return probability to the origin for the time points of interest is mainly influenced by the shape of the ``valley'' of the environment $\omega$ between $T^-(L)$ and $T^+(L)$. For the positions of the two deepest bottom points of this valley on the positive and negative side, we write
\[b_+:=T^+_b(L) \quad \text{and} \quad b_-:=T^-_b(L)\]
and we assume for the following proof that we have (cf.\ \eqref{I-def1} for the definition of $\tau(\cdot)$)
\begin{equation}
\label{I-ass4}
P^o_{\omega} \big( \tau(b_+) < \tau(b_-) \big) \ge \frac12.
\end{equation}
(Due to the symmetry of the RWRE, the proof also works in the opposite case if we switch the roles of $b_+$ and $b_-$). We have
\begin{align}
& P^o_{\omega} (X_{2n}=0) \ge P^o_{\omega} \left(X_{2n}=0,\ \tau(b_+) \le \tfrac{2n}{3},\ \tau(b_+) < \tau(b_-) \right) \nonumber \vphantom{\frac{\mu_{\omega}(0)}{\mu_{\omega}(b_+)}}\\
\ge\ & P^o_{\omega} \left(\tau(b_+) \le \tfrac{2n}{3},\ \tau(b_+) < \tau(b_-) \right) \cdot \widehat{\inf}_{\ell \in \big\{\left\lceil \tfrac{4n}{3}\right\rceil,\ldots,2n\big\}} P^{b_+}_{\omega} (X_{\ell} = 0) \nonumber \vphantom{\frac{\mu_{\omega}(0)}{\mu_{\omega}(b_+)}}\\
=\ & P^o_{\omega} \left(\tau(b_+) \le \tfrac{2n}{3},\ \tau(b_+) < \tau(b_-) \right) \cdot \frac{\mu_{\omega}(0)}{\mu_{\omega}(b_+)} \cdot \widehat{\inf}_{\ell \in \big\{\left\lceil \tfrac{4n}{3}\right\rceil,\ldots,2n\big\}} P^o_{\omega} (X_{\ell} = b_+) \label{I-eq2}
\end{align}
where we used \eqref{I-eq1} in the third step. Here, for $x,y \in \Z$,
\[
\widehat{\inf}_{\ell \in \big\{\left\lceil \tfrac{4n}{3} \right\rceil,\ldots,2n\big\}} P^x_{\omega}(X_{\ell}=y)
\]
is the short notation for
\[
\inf_{\ell \in \big\{\left\lceil \tfrac{4n}{3}\right\rceil,\ldots,2n\big\}\cap \big(2\Z+ (x+y)\big)} P^x_{\omega}(X_{\ell}=y)
\]
since we have to take care of the periodicity of the random walk.
\\
Let us now have a closer look at the factors in the lower bound in \eqref{I-eq2} separately:\\[10pt]
\underline{First factor in \eqref{I-eq2}:}
\\We can bound the first factor from below by
\begin{align*}
& P^o_{\omega} \left(\tau(b_+) \le \tfrac{2n}{3},\ \tau(b_+) < \tau(b_-) \right) \vphantom{\exp \left( \max_{b_- \le i \le j \le b_+} \big(V(j) - V(i)\big) \right)}\\
\ge\ & 1 - P^o_{\omega} \left(\tau(b_+) > \tfrac{2n}{3},\ \tau(b_+) < \tau(b_-) \right) - P^o_{\omega} \big( \tau(b_+) \ge \tau(b_-) \big)\vphantom{\exp \left( \max_{b_- \le i \le j \le b_+} \big(V(j) - V(i)\big) \right)}\\
\ge\ & 1 - \tfrac{3}{2n}\cdot E^o_{\omega} \left[\tau(b_+) \cdot \mathds{1}_{\{\tau(b_+) < \tau(b_-)\}} \right] - P^o_{\omega} \big( \tau(b_+) \ge \tau(b_-) \big)\vphantom{\exp \left( \max_{b_- \le i \le j \le b_+} \big(V(j) - V(i)\big) \right)}\\
\ge\ & 1 - \tfrac{3}{2n}\cdot (b_+ - b_-)^2\cdot \exp \left( \max_{b_- \le i \le j \le b_+} \big(V(j) - V(i)\big) \right) - \frac12 \ ,
\end{align*}
where we used \eqref{I-prel4} and assumption \eqref{I-ass4} for the last step.
Therefore, we get for \mbox{$\omega \in \Gamma(L,\delta)$} and $\exp\left(3 \delta L \right) \le n$ that
\begin{align}
& P^o_{\omega} \left(\tau(b_+) \le \tfrac{2n}{3},\ \tau(b_+) < \tau(b_-) \right) \ge \frac12 - \frac{3 \cdot 4 \cdot L^4}{2\cdot \exp(3\delta L)} \cdot \exp(2\delta L)= \frac12 - 6 \cdot L^4 \cdot \exp(-\delta L). \label{I-eq5}
\end{align}
\underline{Second factor in \eqref{I-eq2}:}
\\By using assumption \eqref{I-ass2} and the relation in \eqref{I-eq1.1}, we get for $\omega \in \Gamma(L,\delta)$:
\begin{align}
&\frac{\mu_{\omega}(0)}{\mu_{\omega}(b_+)} = \frac{\tfrac{1}{\omega_0}}{e^{-V(b_+)} + e^{-V(b_+-1)}} = \frac{\tfrac{1}{\omega_0}}{e^{-V(b_+)}\cdot (1+\rho_{b_+})} \nonumber \\
\ge\ & \frac{\tfrac{1}{1-\varepsilon}}{1+\tfrac{1-\varepsilon}{\varepsilon}} \cdot e^{V(b_+)}= \frac{\varepsilon}{1 - \varepsilon} \cdot e^{V(b_+)} \ge \frac{\varepsilon}{1 - \varepsilon} \cdot \exp(-\delta L). \label{I-eq3}
\end{align}
Here we used that $V(b_+) \ge -\delta L$ holds for $\omega \in \Gamma(L,\delta)$.\\[10pt]
\underline{Third factor in \eqref{I-eq2}:}\\
For the last factor in \eqref{I-eq2}, we can compare the RWRE with the process $(\widetilde{X}_n)_{n \in \N_0}$ which behaves as the original RWRE but is reflected at the positions $T^-:=T^-(L)$ and $T^+:=~T^+(L)$, i.e.\ we have for $x \in \{T^-,T^-+1,\ldots,T^+\}$
\begin{align*}
& P_{\omega}^{x}(\widetilde{X}_0=x) = 1 \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}},\\
& P_{\omega}^{x}(\widetilde{X}_{n+1} = y\pm1|\widetilde{X}_n=y) = P_{\omega}^{x}(X_{n+1} = y\pm1|X_n=y) \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}},\\
& & \hspace{-13.98193pt} \forall y \in \{T^-+1,T^-+2,\ldots,T^+-1\} \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}},\\
& P_{\omega}^{x}(\widetilde{X}_{n+1} = y+1|\widetilde{X}_n=y) = 1 \quad & \text{for } y= T^- \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}},\\
& P_{\omega}^{x}(\widetilde{X}_{n+1} = y-1|\widetilde{X}_n=y) = 1 \quad & \text{for } y= T^+. \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}
\end{align*}
Therefore, we have for $\ell \in \big\{\left\lceil \tfrac{4n}{3}\right\rceil,\ldots,2n\big\}\cap \big(2\Z+ b_+\big)$
\begin{align}
&P^{o}_{\omega} (X_{\ell} = b_+) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}\\
\ge\ & P^{o}_{\omega} (X_{\ell} = b_+,\ \min\{\tau(T^-),\tau(T^+)\} > 2n) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}} \\
=\ & P^{o}_{\omega} (\widetilde{X}_{\ell} = b_+) - P^{o}_{\omega} (\widetilde{X}_{\ell} = b_+,\ \min\{\tau(T^-),\tau(T^+)\} \le 2n) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}\\
\ge\ & P^{o}_{\omega} (\widetilde{X}_{\ell} = b_+) - P^{o}_{\omega} ( \min\{\tau(T^-),\tau(T^+)\} \le 2n) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}\\
\ge\ & P^{o}_{\omega} \big(\widetilde{X}_{\ell} = b_+, \ \tau(b_+) \le \tfrac{\ell}{2},\ \tau(b_+) < \tau(b_-)\big) - P^{o}_{\omega} ( \min\{\tau(T^-),\tau(T^+)\} \le 2n) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}\\
\ge\ & P^{o}_{\omega} \big(\tau(b_+) \le \tfrac{\ell}{2},\ \tau(b_+) < \tau(b_-)\big) \cdot \widehat{\inf\limits}_{k \in \big\{\left\lceil \tfrac{\ell}{2}\right\rceil,\ldots,\ell\big\}} P^{b_+}_{\omega} (X_{k} = b_+) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}\\
& - P^{o}_{\omega} ( \min\{\tau(T^-),\tau(T^+)\} \le 2n). \label{I-eq4} \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}
\end{align}
Using \eqref{I-prel2} and \eqref{I-prel3}, we see that the last term in \eqref{I-eq4} with the negative sign decreases exponentially for $n \le \exp\big((1-2\delta)L\big)$, i.e.
\begin{align}
& P^{o}_{\omega} ( \min\{\tau(T^-),\tau(T^+)\} \le 2n) \le P^{o}_{\omega} \left( \min\{\tau(T^-),\tau(T^+)\} \le 2\cdot\exp\big((1-2\delta)L\big) \right) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}\\
\le\ & P^{o}_{\omega} \left( \tau(T^-) \le 2\cdot\exp\big((1-2\delta)L\big) \right) + P^{o}_{\omega} \left( \tau(T^+) \le 2\cdot\exp\big((1-2\delta)L\big)\right) \nonumber \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}\\
\le\ & 4 \cdot \exp\big((1-2\delta)L\big) \cdot \exp\big(-L\big) = 4 \cdot \exp\big(-2\delta L\big) \label{I-eq4a}. \vphantom{\widehat{\inf}_{k \in \big\{\lceil \tfrac{\ell}{2}\rceil,\ldots,\ell\big\}}}
\end{align}
In order to derive a lower bound for the first term in \eqref{I-eq4}, we first notice that the analogous calculation as in \eqref{I-eq5} shows for $\omega \in \Gamma(L,\delta)$ that
\begin{align}
& P^o_{\omega} \left(\tau(b_+) \le \tfrac{\ell}{2},\ \tau(b_+) < \tau(b_-) \right) \ge 1 - \frac{2}{\ell} \cdot 4 \cdot L^4 \cdot \exp(2\delta L) - \frac12 \nonumber\\
\ge\ & \frac12 - 6 \cdot L^4 \cdot \exp(-\delta L) \label{I-eq5.1}
\end{align}
since $\ell \ge \left\lceil \tfrac{4n}{3}\right\rceil \ge \tfrac43 \cdot \exp(3 \delta L)$ for $n \ge \exp(3 \delta L)$. For the second factor, we show the following lemma:\\
\begin{lem} \label{I-lem3}
For $\omega \in \Gamma(L,\delta)$ and for all $\ell \in 2 \N$, we have
\[
P^{b_+}_{\omega}(\widetilde{X}_{\ell} = b_+) \ge \frac12 \cdot \frac{1}{|T^-|+T^+ +1} \cdot \exp\big(-\delta L\big).
\]
\end{lem}
\begin{pfof}{Lemma \ref{I-lem3}}
Using the reversibility (cf.\ \eqref{I-eq1}) of $(\widetilde{X}_{\ell})_{\ell \in \N_0}$, we get
\begin{align}
& P^{b_+}_{\omega}(\widetilde{X}_{\ell} = b_+) \nonumber \vphantom{\sum_{x=T^-}^{T^+}} \allowdisplaybreaks[0]\\
=\ & \sum_{x=T^-}^{T^+} P^{b_+}_{\omega}(\widetilde{X}_{\ell/2} = x) \cdot P^x_{\omega}(\widetilde{X}_{\ell/2} = b_+) \nonumber\\
=\ & \sum_{x=T^-}^{T^+} P^{b_+}_{\omega}(\widetilde{X}_{\ell/2} = x) \cdot \frac{\widetilde{\mu}_{\omega}(b_+)}{\widetilde{\mu}_{\omega}(x)} \cdot P^{b_+}_{\omega}(\widetilde{X}_{\ell/2} = x), \label{I-eq7}
\end{align}
where $\widetilde{\mu}_{\omega}(\cdot)$ denotes a reversible measure of the reflected random walk $(\widetilde{X}_n)_{n \in \N_0}$ which is unique up to multiplication by a constant. To see that $(\widetilde{X}_{\ell})_{\ell \in \N_0}$ is also reversible, it is enough to note that $(\widetilde{X}_{\ell})_{\ell \in \N_0}$ can again be described as an electrical network with the following conductances:
\begin{align*}
\widetilde{C}_{(x,x+1)}(\omega)= \begin{cases}
C_{(x,x+1)}(\omega) = e^{-V(x)} & \text{for } x=T^-, T^-+1,\ldots, T^+-1 \\
0 & \text{for } x = T^- - 1, T^+
\end{cases}
\end{align*}
Therefore, a reversible measure for the reflected random walk is given by (cf.\ \eqref{I-eq1.1})
\[
\widetilde{\mu}_{\omega}(x) = \begin{cases}\mu_{\omega}(x) = e^{-V(x)} + e^{-V(x-1)} & \text{for } x=T^-+1, T^-+2,\ldots, T^+-1, \\
e^{-V(T^-)} & \text{for } x=T^-, \\
e^{-V(T^+-1)} & \text{for } x=T^+.
\end{cases}
\]
This implies, since $0 \le b_+ < T^+$,
\begin{align}
& \frac{\widetilde{\mu}_{\omega}(b_+)}{\widetilde{\mu}_{\omega}(x)} \ge \frac{e^{-V(b_+)}+ e^{-V(b_+-1)}}{e^{-V(x)}+ e^{-V(x-1)}} \nonumber \\
\ge\ & \frac{e^{-V(b_+)}}{2\cdot e^{\left( - \min \{V(b_+),V(b_-) \} \right)}} \ge \frac{1}{2} \cdot \exp(-\delta L) \label{I-eq8}
\end{align}
for $T^- \le x \le T^+$ and for $\omega \in \Gamma(L,\delta)$. By applying \eqref{I-eq8} to \eqref{I-eq7}, we get
\begin{align}
& P^{b_+}_{\omega}(\widetilde{X}_{\ell} = b_+) \nonumber \\
\ge\ &\frac12 \cdot \sum_{x=T^-}^{T^+} \left(P^{b_+}_{\omega}(\widetilde{X}_{\ell/2} = x)\right)^2 \cdot \exp(-\delta L)\nonumber\\
\ge\ &\frac12 \cdot \sum_{x=T^-}^{T^+} \left(\frac{1}{|T^-|+T^+ +1}\right)^2 \cdot \exp(-\delta L) \nonumber\\
=\ & \frac12 \cdot \frac{1}{|T^-|+T^+ +1} \cdot \exp(-\delta L). \label{I-eq9}
\end{align}
Here, we used that we have
\begin{align*}
\sum_{x=T^-}^{T^+} (a_x)^2 \ge \sum_{x=T^-}^{T^+} \left(\frac{1}{|T^-|+T^+ +1}\right)^2
\end{align*}
for every sequence $(a_x)_{x}$ with $\sum\limits_{x=T^-}^{T^+} a_x =1$, which follows from the Cauchy--Schwarz inequality.
\renewcommand{\qedsymbol}{\hfill$\square$\vspace{1ex}}
\end{pfof}
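
As a sanity check on the key step of the preceding proof, one can verify numerically that $\widetilde{\mu}_{\omega}$ is indeed reversible for the reflected walk. The following sketch is purely illustrative; the reflection points and the environment below are hypothetical small-scale stand-ins for $T^-$, $T^+$ and $\omega$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
Tm, Tp = -6, 8                            # hypothetical reflection points
sites = list(range(Tm, Tp + 1))
omega = {x: rng.uniform(0.2, 0.8) for x in sites}

V = {0: 0.0}                              # potential on the window, V(0) = 0
for x in range(1, Tp + 1):
    V[x] = V[x - 1] + np.log((1 - omega[x]) / omega[x])
for x in range(-1, Tm - 1, -1):
    V[x] = V[x + 1] - np.log((1 - omega[x + 1]) / omega[x + 1])

# reversible measure of the reflected walk, as in the proof above
mu = {x: np.exp(-V[x]) + np.exp(-V[x - 1]) for x in range(Tm + 1, Tp)}
mu[Tm] = np.exp(-V[Tm])
mu[Tp] = np.exp(-V[Tp - 1])

def p(x, y):
    # transition probabilities of the reflected walk
    if x == Tm:
        return 1.0 if y == Tm + 1 else 0.0
    if x == Tp:
        return 1.0 if y == Tp - 1 else 0.0
    if y == x + 1:
        return omega[x]
    if y == x - 1:
        return 1 - omega[x]
    return 0.0

for x in sites:
    for y in sites:
        assert np.isclose(mu[x] * p(x, y), mu[y] * p(y, x))
print("mu_tilde is reversible for the reflected walk")
\end{verbatim}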
We can now return to the proof of Proposition \ref{I-prop1} and finish our lower bound for the third factor in \eqref{I-eq2}. By applying \eqref{I-eq4a}, \eqref{I-eq5.1} and Lemma \ref{I-lem3} to \eqref{I-eq4}, we get for $\exp(3\delta L) \le n \le \exp((1-2\delta) L)$ and $\omega \in \Gamma(L,\delta)$, i.e.\ $|T^-|, T^+ \le L^2$,
\begin{align}
& \widehat{\inf}_{ \ell \in \big\{\left\lceil \tfrac{4n}{3}\right\rceil,\ldots,2n\big\}}P^{o}_{\omega} (X_{\ell} = b_+) \nonumber \vphantom{\frac12}\\
\ge\ & \left(\frac12 - 6 \cdot L^4 \cdot \exp(-\delta L)\right) \cdot \frac12 \cdot \frac{1}{2L^2 +1} \cdot \exp(-\delta L) - 4 \cdot \exp\big(-2\delta L\big) \nonumber \\
\ge\ & \exp\left(-\tfrac32\delta L\right)\label{I-eq12} \vphantom{\frac12}
\end{align}
for all $L=L(\delta)$ large enough.\\[10pt]
To finish the proof of Proposition \ref{I-prop1}, we can collect our lower bounds in \eqref{I-eq5}, \eqref{I-eq3}, and \eqref{I-eq12} and conclude with \eqref{I-eq2} that for $\exp\left(3 \delta L \right) \le n \le \exp\big((1-2\delta)L\big)$ and for $\omega \in \Gamma(L,\delta)$ we have
\begin{align*}
& P_{\omega} (X_{2n}=0) \vphantom{\frac12}\\
\ge\ & \left(\frac12 - 6 \cdot L^4 \exp(-\delta L)\right) \cdot \frac{\varepsilon}{1-\varepsilon} \exp(-\delta L) \cdot \exp\left(-\tfrac32\delta L\right)\\
\ge\ & \exp(-3\delta L) \vphantom{\frac12}
\end{align*}
for all $L=L(\delta)$ large enough. This shows \eqref{I-lem1} since we have $P_{\omega}(X_{2n}=0) \ge \varepsilon^{2n} > 0$ for all $n \in \N$ due to assumption \eqref{I-ass2}.
\end{pfof}
\begin{prop} \label{I-lemma2}
For $0 < \delta < 1$, we have
\begin{equation}
\label{I-lem2}
\p (\omega:\ \omega \in \Gamma(L,\delta) \text{ for infinitely many }L )=1.
\end{equation}
\end{prop}
\begin{pfof}{Proposition \ref{I-lemma2}}
Let $(B(t))_{t \in \R}$ be the two-sided Brownian motion from Theorem \ref{I-Komlos} and let us choose some $0 < \delta < \tfrac12$. For $y \in \R$ we define
\begin{align*}
&\widehat{T}^+(y):= \inf\{t \ge 0:\ B(t)= y\},\\
&\widehat{T}^-(y):= \sup\{t \le 0:\ B(t)= y\}
\end{align*}
as the first hitting times of $y$ on the positive and negative side of the origin, respectively. Additionally, for $L \in \N$, $i \in \N$, $y \in \R$, we can introduce the following sets
\begin{align*}
F_L^+(y):=\ & \{\widehat{T}^+ \left(y \cdot L \right) < \widehat{T}^+ \left(- y \cdot L \right)\}, \\
F_L^-(y):=\ & \{\widehat{T}^-\left(y \cdot L \right) < \widehat{T}^-\left(-y \cdot L \right)\}
\end{align*}
on which the Brownian motion reaches the value $y \cdot L$ before $-y \cdot L$. Further we define
\begin{align*}
G_L^+(i):=\ & \left\{B(t) \ge (2i-1) \cdot \tfrac{\delta}{4} \cdot L \quad \text{for} \quad \widehat{T}^+\left(2i \cdot \tfrac{\delta}{4} \cdot L \right) \le t \le \widehat{T}^+\left((2i+2) \cdot \tfrac{\delta}{4} \cdot L \right) \right\}, \\
G_L^-(i):=\ & \left\{B(t) \ge (2i-1) \cdot \tfrac{\delta}{4} \cdot L \quad \text{for} \quad \widehat{T}^-\left((2i+2) \cdot \tfrac{\delta}{4} \cdot L \right) \le t \le \widehat{T}^-\left(2i \cdot \tfrac{\delta}{4} \cdot L \right) \right\}
\intertext{on which the Brownian motion does not decrease much between the first hitting time of the two levels of interest. Using these sets, we can define the sets}
A^+(L,\delta):=\ & F_L^+(\delta) \cap \left\{ \widehat{T}^+(1.1 \cdot L) \le L^2,\ \min_{\widehat{T}^+(\delta \cdot L) \le t \le \widehat{T}^+(1.1 \cdot L)} B(t) \ge \frac{\delta}{4} \cdot L \right\}, \\
A^-(L,\delta):=\ & F_L^-(\delta) \cap \left\{ - \widehat{T}^-(1.1 \cdot L) \le L^2,\ \min_{\widehat{T}^-(1.1 \cdot L) \le t \le \widehat{T}^-(\delta \cdot L)} B(t) \ge \frac{\delta}{4} \cdot L \right\}, \\
D^+(L,\delta):=\ & G_L^+(0) \cap G_L^+(1) \cap G_L^+(2) \allowdisplaybreaks[0]\\
& \cap \left\{ \widehat{T}^+(1.2 \cdot L) \le 0.9 \cdot L^2,\ \min_{\widehat{T}^+ \big(\tfrac{3 \cdot \delta}{2} \cdot L \big) \le t \le \widehat{T}^+(1.2 \cdot L)} B(t) \ge \frac{3\delta}{4} \cdot L \right\}, \\
D^-(L,\delta):=\ & G_L^-(0) \cap G_L^-(1) \cap G_L^-(2) \\
& \cap \left\{ - \widehat{T}^-(1.2 \cdot L) \le 0.9 \cdot L^2,\ \min_{\widehat{T}^-(1.2 \cdot L) \le t \le \widehat{T}^-\big(\tfrac{3\delta}{2} \cdot L \big)} B(t) \ge \frac{3\delta}{4} \cdot L \right\}
\end{align*}
which will be used to approximate the valleys of environments $\omega$ belonging to $\Gamma(L,\delta)$, as illustrated in Figure \ref{I-figure1} on page \pageref{I-figure1}. Here, we added the factors $1.1$, $1.2$, and $0.9$ in contrast to the earlier construction in order to leave some room for the approximation. For the Brownian motion, we can directly compute that we have
\begin{equation}
\label{I-eq15}
\p\big(D^+(1,\delta) \cap D^-(1,\delta)\big) > 0.
\end{equation}
Moreover, the scaling property of the Brownian motion, i.e.\ the property that for $L \in \N$
\begin{equation} \label{I-eq16a}
\left( \frac{1}{L} B(L^2 \cdot t) \right)_{ t \in \R}
\end{equation}
is again a two-sided Brownian motion with diffusion constant $\sigma$, implies
\begin{equation} \label{I-eq16}
\p\big(D^+(L,\delta) \cap D^-(L,\delta)\big) = \p\big(D^+(1,\delta) \cap D^-(1,\delta)\big) > 0
\end{equation}
for all $L \in \N$.
First, we notice that for $L_0 \in \N$ we have
\begin{align} \label{I-eq25}
& \p \left(\, \bigcap_{L=L_0}^{\infty} \Big( A^+(L,\delta) \cap A^-(L,\delta) \Big)^c \right)
\le \p \left(\ \bigcap_{k=\ell+1}^{\infty} \Big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \Big)^c \right)
\end{align}
for arbitrary $\ell \in \N_0$, where we define
\[
L_k := \max \left\{10, \left\lceil \tfrac{2}{\delta} \right\rceil \right\} \cdot (L_{k-1})^2
\]
for $k \in \N$ inductively. Note that for $n > \ell +1$
with
\[
\mathcal{F}_n:=\sigma \left( \big(B(t)\big)_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} \right),
\]
the following holds:
\begin{align}
& \p \left(\ \bigcap_{k=\ell+1}^{n} \Big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \Big)^c \right) \nonumber \vphantom{\E \left[ \prod_{k=\ell+1}^{n-1} \mathds{1}_{\big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \big)^c} \cdot \mathds{1}_{\left\{ \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| \le (L_{n-1})^2 \right\}} \right.}\\
\le\ & \E \left[ \prod_{k=\ell+1}^{n-1} \mathds{1}_{\big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \big)^c} \cdot \mathds{1}_{\left\{ \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| < (L_{n-1})^2 \right\}} \right. \nonumber\allowdisplaybreaks[0]\\
& \hspace{-5.5pt} \cdot \left. \left. \E\left[ \vphantom{\prod_{k=\ell+1}^{n-1}} \mathds{1}_{\left\{ \big(B(t + (L_{n-1})^2) - B((L_{n-1})^2) \big)_{t \in \R} \notin D^+ \left(L_n, \delta \right) \right\} \cup \left\{ \big(B(t - (L_{n-1})^2) - B(-(L_{n-1})^2) \big)_{t \in \R} \notin D^- \left(L_n, \delta \right) \right\}} \right| \hspace{-1pt} \mathcal{F}_n \hspace{-1pt} \right] \hspace{-2pt} \right] \nonumber\allowdisplaybreaks[0]\\
& + \p \left( \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| \ge (L_{n-1})^2 \right) \nonumber \vphantom{\E \left[ \prod_{k=\ell+1}^{n-1} \mathds{1}_{\big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \big)^c} \cdot \mathds{1}_{\left\{ \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| \le (L_{n-1})^2 \right\}} \right.}\\
\le\ & \Big(1 - \p \Big(D^+ \left(L_n , \delta \right) \cap D^- \left(L_n, \delta \right) \Big) \Big) \cdot \p \left(\ \bigcap_{k=\ell+1}^{n-1} \Big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \Big)^c \right) \nonumber \vphantom{\E \left[ \prod_{k=\ell+1}^{n-1} \mathds{1}_{\big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \big)^c} \cdot \mathds{1}_{\left\{ \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| \le (L_{n-1})^2 \right\}} \right.}\\
& + \p \left( \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| \ge (L_{n-1})^2 \right) \nonumber \vphantom{\E \left[ \prod_{k=\ell+1}^{n-1} \mathds{1}_{\big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \big)^c} \cdot \mathds{1}_{\left\{ \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| \le (L_{n-1})^2 \right\}} \right.}\\
\le\ & \Big(1 - \p \Big(D^+ \left(1 , \delta \right) \cap D^- \left(1, \delta \right) \Big) \Big)^{n-\ell} + \sum_{k=\ell+1}^{n} \p \left( \max\limits_{-(L_{k-1})^2 \le t \le (L_{k-1})^2} |B(t)| \ge (L_{k-1})^2 \right) \label{I-eq26} \vphantom{\E \left[ \prod_{k=\ell+1}^{n-1} \mathds{1}_{\big( A^+(L_k,\delta) \cap A^-(L_k,\delta) \big)^c} \cdot \mathds{1}_{\left\{ \max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| \le (L_{n-1})^2 \right\}} \right.}.
\end{align}
To see that the first step holds, note that for
\begin{align}
\omega \in &
\left\{\max\limits_{-(L_{n-1})^2 \le t \le (L_{n-1})^2} |B(t)| < (L_{n-1})^2 \right\} \nonumber\\
& \cap \left\{\big(B(t + (L_{n-1})^2 ) - B((L_{n-1})^2) \big)_{t \in \R} \in D^+ \left(L_n, \delta \right) \right\} \label{I-eq26.x}
\end{align}
we have
\begin{align*}
\min_{0 \le t \le (L_n)^2} B(t) \ge\ & \min_{0 \le t \le (L_{n-1})^2} B(t) + \min_{0 \le t \le (L_n)^2 - (L_{n-1})^2} B(t+(L_{n-1})^2) - B((L_{n-1})^2) \\
>\ & - (L_{n-1})^2 - \frac{\delta}{4} \cdot L_n > - \delta \cdot L_n,
\intertext{and}
\max_{0 \le t \le (L_n)^2} B(t) \ge\ & B\big((L_{n-1})^2\big) + \max_{0 \le t \le (L_n)^2 - (L_{n-1})^2} B(t+(L_{n-1})^2) - B((L_{n-1})^2)\\
\ge\ & - (L_{n-1})^2 + 1.2 \cdot L_n \ge 1.1 \cdot L_n.
\end{align*}
In particular, we have $\widehat{T}^+(\delta \cdot L_n) < \widehat{T}^+(- \delta \cdot L_n)$ and $\widehat{T}^+(1.1 \cdot L_n) \le (L_n)^2$ on the considered set. Similarly, again on the set in \eqref{I-eq26.x}, we see that we have
\begin{align*}
& \widehat{T}^+(\delta \cdot L_n) > \inf\{t \ge (L_{n-1})^2:\ B(t + (L_{n-1})^2 ) - B((L_{n-1})^2) \ge \tfrac{\delta}{2} \cdot L_n\}, \\
& \widehat{T}^+(\delta \cdot L_n) < \inf\{t \ge (L_{n-1})^2:\ B(t + (L_{n-1})^2 ) - B((L_{n-1})^2) \ge \tfrac{3\cdot \delta}{2} \cdot L_n\},
\intertext{which implies}
& \min_{\widehat{T}^+(\delta \cdot L_n) \le t \le \widehat{T}^+(1.1 \cdot L_n)} B(t) \ge \frac{\delta}{4} \cdot L_n
\end{align*}
by construction of $D^+(L_n,\delta)$.
Altogether, we can conclude that $\omega \in A^+(L_n,\delta)$ holds for our choice of $\omega$ in \eqref{I-eq26.x}. The argument for the negative part runs completely analogously. Further in \eqref{I-eq26}, we used the Markov property of the Brownian motion in the second step. Additionally, we iterated the first two steps $n-\ell-1$ times and used \eqref{I-eq16} for the last step. To control the last sum in \eqref{I-eq26}, let us recall the standard upper bound
\[
\p\left( Z \ge x \right) \le \frac{1}{x} \cdot \frac{1}{\sqrt{2\pi}} \cdot \exp \left(- \frac{x^2}{2} \right) \quad \text{for } x > 0
\]
for a random variable $Z \sim \mathcal{N}(0,1)$, which can be found for example in Lemma 12.9 in Appendix B of \cite{mörters}. By using this upper bound, we can conclude that
\begin{align}
& \sum_{k=\ell+1}^{n} \p \left( \max\limits_{-(L_{k-1})^2 \le t \le (L_{k-1})^2} |B(t)| \ge (L_{k-1})^2 \right) \le 4 \cdot \sum_{k=\ell+1}^{n} \p \left( \max\limits_{0\le t \le (L_{k-1})^2} \frac{B(t)}{\sigma \cdot L_{k-1}} \ge \frac{L_{k-1}}{\sigma} \right) \nonumber \\
\le\ & 4 \cdot \sum_{k=\ell+1}^{\infty} \frac{\sigma}{L_{k-1}} \cdot \frac{1}{\sqrt{2 \pi}} \cdot \exp \left(- \frac{(L_{k-1})^2}{2 \sigma^2} \right) \xrightarrow{\ell \to \infty} 0. \label{I-eq27}
\end{align}
Here, we used that
\[
\max\limits_{0\le t \le (L_{k-1})^2} \frac{B(t)}{\sigma \cdot L_{k-1}} \sim |Z|
\]
for all $k \in \N$, where $Z \sim \mathcal{N}(0,1)$. By combining the upper bounds in \eqref{I-eq25}, \eqref{I-eq26}, and \eqref{I-eq27}, we get for all $\ell \in \N_0$
\begin{align*}
& \p \left( \omega \notin \big( A^+(L,\delta) \cap A^-(L,\delta) \big) \text{ for all } L \ge L_0\right) \vphantom{\sum_{k=\ell+1}^{\infty}}\\
\le\ & \lim_{n \to \infty} \Big(1 - \p \Big(D^+ \left(1 , \tfrac{\delta}{2}\right) \cap D^- \left(1,\tfrac {\delta}{2} \right) \Big) \Big)^{n-\ell} \vphantom{\sum_{k=\ell+1}^{\infty}} \\
& + \sum_{k=\ell+1}^{\infty} \p \left( \max\limits_{-(L_{k-1})^2 \le t \le (L_{k-1})^2} |B(t)| \ge (L_{k-1})^2 \right) \xrightarrow{\ell \to \infty} 0.
\end{align*}
Since $L_0 \in \N$ was chosen arbitrarily, we can conclude that for $0 < \delta < \tfrac12$ we have
\[
\p \left(\omega:\ \omega \in \big(A^+(L, \delta) \cap A^-(L, \delta)\big) \text{ for infinitely many } L \right) = 1.
\]
Using the Koml{\'o}s-Major-Tusn{\'a}dy strong approximation Theorem (cf.\ Theorem \ref{I-Komlos}), we see that for $0 < \delta < \tfrac12$ we have
\begin{align*}
& \left\{\omega:\ \omega \in \big(A^+(L, \delta) \cap A^-(L, \delta)\big) \text{ for infinitely many } L \right\} \allowdisplaybreaks[0]\\
\subseteq\ & \left\{\omega:\ \omega \in \Gamma(L,2\delta) \text{ for infinitely many }L \right\},
\end{align*}
which is enough to conclude that \eqref{I-lem2} holds for all $0 < \delta < 1$.
\end{pfof}
With the help of Proposition \ref{I-prop1} and Proposition \ref{I-lemma2}, we can now turn to the proofs of our Theorems \ref{I-Rthm1} -- \ref{I-Rthm3}:
\begin{pfof}{Theorem \ref{I-Rthm1}} For a fixed $0 \le \alpha < 1$, we choose $0 < \delta < \tfrac16$ such that
\begin{equation}
\label{I-prthm1}
\alpha < \frac{1-5\delta}{1-2\delta}\ .
\end{equation}
For $\omega \in \Gamma(L, \delta)$, the inequality in \eqref{I-lem1} implies that
\begin{align}
& \hphantom{\ge}\ \sum_{n \in \N} P_{\omega}(X_{2n}=0) \cdot n^{-\alpha} \ge \sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor} P_{\omega}(X_{2n}=0) \cdot n^{-\alpha} \nonumber\\
&\ge \Big(\exp\big((1-2\delta)L\big) - \exp(3\delta L) - 1 \Big) \cdot C \cdot \exp(-3\delta L) \cdot \left(\exp\big((1-2\delta)L\big)\right)^{-\alpha} \nonumber \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\\
&= C \cdot \exp(-3\delta L) \cdot \exp(3\delta L) \cdot \Big(\exp\big((1-5\delta)L\big) - 1 - \exp(-3\delta L)\Big) \cdot \exp\big(-\alpha(1-2\delta)L\big) \nonumber \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\allowdisplaybreaks[0]\\
& \xrightarrow{L\to\infty} \infty \nonumber \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}.
\end{align}
Since Proposition \ref{I-lemma2} shows that for $\p$-a.e.\ environment $\omega$ we find $L$ arbitrarily large such that $\omega \in \Gamma(L,\delta)$, we can conclude that \eqref{I-thm1} holds for $\p$-a.e.\ environment $\omega$.
\end{pfof}
\begin{pfof}{Theorem \ref{I-Rthm2}}
For fixed $\alpha > 0$, we choose $\delta$ such that
\begin{align*}
0 < \delta < \min \left\{ \frac{1}{2+3\alpha}, \frac{1}{5} \right\}, \intertext{which yields} 1-2\delta -3\alpha \delta > 0 \quad \text{and} \quad 1- 2 \delta > 3 \delta.
\end{align*}
For $\omega \in \Gamma(L,\delta)$, the inequality in \eqref{I-lem1} implies
\begin{align*}
& \hphantom{\ge}\ \sum_{n \in \N} \Big( P_{\omega}(X_{2n}=0)\Big)^{\alpha} \ge \sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor} \Big(P_{\omega}(X_{2n}=0)\Big)^{\alpha} \\
&\ge \Big(\exp\big((1-2\delta)L\big) - \exp(3\delta L) - 1 \Big) \cdot \big(C \cdot \exp(-3\delta L)\big)^{\alpha} \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\\
&= C^{\alpha} \cdot \exp(-3 \alpha \delta L) \cdot\exp(3 \alpha \delta L) \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\\
& \qquad \cdot \Big(\exp\big((1-2\delta -3\alpha \delta)L\big) - \exp\big((3\delta - 3 \alpha \delta) L\big) - \exp( -3\alpha \delta L) \Big) \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\\
&\xrightarrow{L\to\infty} \infty. \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}
\end{align*}
Again since Proposition \ref{I-lemma2} shows that for $\p$-a.e.\ environment $\omega$ we find $L$ arbitrarily large such that $\omega \in \Gamma(L,\delta)$, we can conclude that \eqref{I-thm2} holds for $\p$-a.e.\ environment $\omega$.
\end{pfof}
\begin{pfof}{Theorem \ref{I-Rthm3}}
Due to the independence of the environments $\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(d)}$, we can extend the proof of Proposition \ref{I-lemma2} to get
\begin{equation}
\label{I-eq16.2}
\p^{\otimes d} \left(\text{For infinitely many } L \in \N \text{ we have}:\ \omega^{(i)} \in \Gamma(L,\delta)\ \text{for } i=1,2,\ldots d \right)=1
\end{equation}
for all $0 < \delta < 1$.\\[10pt]
Further, due to Proposition \ref{I-prop1}, we have for $(\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(d)})$ with $\omega^{(i)} \in \Gamma(L,\delta)$ for $i=1,2,\ldots, d$
\begin{align*}
& \hphantom{\ge}\ \sum_{n \in \N} \prod_{k=1}^{d} P_{\omega^{(k)}}(X_{2n}=0) \vphantom{\sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}} \ge \sum_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor} \prod_{k=1}^{d} P_{\omega^{(k)}}(X_{2n}=0) \\
&\ge \Big(\exp\big((1-2\delta)L\big) - \exp(3\delta L) - 1 \Big) \cdot C^d \cdot \exp(-3 \delta d L) \vphantom{\sum^d_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\\
&= C^d \cdot \exp(-3 \delta d L) \cdot \exp(3 \delta d L) \vphantom{\sum^d_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\allowdisplaybreaks[0]\\
& \qquad \cdot \Big(\exp\big((1-2\delta-3 \delta d)L\big) - \exp\big((3\delta -3\delta d) L\big) - \exp(-3 \delta d L) \Big) \vphantom{\sum^d_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}\\
& \xrightarrow{L\to\infty} \infty \vphantom{\sum^d_{\lceil\exp(3\delta L)\rceil \le n \le \lfloor\exp((1-2\delta)L)\rfloor}}
\end{align*}
for
\[
0 < \delta < \frac{1}{2+3d}\ .
\]
Since \eqref{I-eq16.2} holds for arbitrarily small $\delta$, we can conclude that \eqref{I-eqcor2} holds for $\p^{\otimes d}$-a.e.\ environment $(\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(d)})$.
\end{pfof}
\section{Examples of Recurrent Random Walks in Random Environments in Different Dimensions} \label{I-sec1.6}
\begin{rmk}\label{I-cor1}
Consider a RWRE $(X_n)_{n \in \N_0}$ for which the environment $\omega$ fulfils the assumptions $\eqref{I-ass1}$, $\eqref{I-ass2}$, and $\eqref{I-ass3}$. By an application of Theorem~\ref{I-Rthm1} for $\alpha =0$, we get
\[
\sum_{n \in \N} P_{\omega}(X_{2n}=0) = \infty
\]
for $\p$-a.e.\ environment $\omega$. From this, we can conclude that the random walk is recurrent for $\p$-a.e.\ environment $\omega$.
\end{rmk}
\begin{cor}[$d$ particles in the same random environment]\label{I-cor5.3a}
Let us first choose a random environment $\omega=(\omega_x)_{x \in \Z}$ which fulfils the assumptions $\eqref{I-ass1}$, $\eqref{I-ass2}$, and $\eqref{I-ass3}$. For fixed $\omega$, we can now consider $d$ independent random walks $(X^{(i)}_n)_{n \in \N_0}$ for $i=1,2,\ldots,d$ where every random walk $(X^{(i)}_n)_{n \in \N_0}$ is a usual RWRE in the environment $\omega$ in the sense of \eqref{I-RWRE}. Then, for arbitrary $d$, the $d$-dimensional process
\[
\big(X^{(1)}_n,X^{(2)}_n, \ldots, X^{(d)}_n\big)_{n \in \N_0}
\]
is recurrent for $\p$-a.e.\ environment $\omega$.
\end{cor}
\begin{pfof}{Corollary \ref{I-cor5.3a}}
First of all, we notice that for fixed $\omega$
\[
\big(X^{(1)}_n,X^{(2)}_n, \ldots, X^{(d)}_n\big)_{n \in \N_0}
\]
is a Markov chain. For the expected number of returns to $0$, we get by applying Theorem \ref{I-Rthm2} with $\alpha=d$
\begin{align*}
\sum_{n\in\N} P_{\omega} \left(\big(X^{(1)}_{2n},X^{(2)}_{2n}, \ldots, X^{(d)}_{2n}\big)= \big(0,0,\ldots,0\big) \right) = \sum_{n\in\N} \left(P_{\omega} (X^{(1)}_{2n} = 0) \right)^d = \infty
\end{align*}
for $\p$-a.e.\ environment $\omega$. This implies the recurrence.
\renewcommand{\qedsymbol}{$\square$}
\end{pfof}
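As a purely illustrative aside (not part of the argument), the following Python sketch simulates $d$ independent walkers in one fixed environment $\omega$ and counts their simultaneous visits to the origin; the uniform law chosen for $\omega_x$ is an assumption made only for this illustration and is not required by the corollary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_environment(radius):
    # Illustrative assumption: omega_x i.i.d. uniform on [0.2, 0.8].
    return {x: rng.uniform(0.2, 0.8) for x in range(-radius, radius + 1)}

def joint_returns(d=3, steps=10_000):
    omega = sample_environment(steps)        # one environment for all walkers
    pos = np.zeros(d, dtype=int)
    count = 0
    for n in range(1, steps + 1):
        for i in range(d):
            pos[i] += 1 if rng.random() < omega[pos[i]] else -1
        if n % 2 == 0 and np.all(pos == 0):
            count += 1                        # simultaneous return to the origin
    return count

print(joint_returns())
\end{verbatim}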
\begin{cor}[$d$ particles in $d$ i.i.d.\ random environments] \label{I-cor5.3}
For arbitrary $d \in \N$, we choose $d$ i.i.d.\ environments $\omega^{(i)}=(\omega^{(i)}_x)_{x \in \Z}$ which all fulfil the assumptions $\eqref{I-ass1}$, $\eqref{I-ass2}$, and $\eqref{I-ass3}$ for $i=1,2,\ldots,d$. For fixed $\vec{\omega}:=(\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(d)})$, we consider $d$ independent RWRE $(X^{(i)}_n)_{n \in \N_0}$, where $(X^{(i)}_n)_{n \in \N_0}$ is a usual RWRE in the environment $\omega^{(i)}$ in the sense of \eqref{I-RWRE}. In this case, the $d$-dimensional process
\[
\big(X^{(1)}_n,X^{(2)}_n, \ldots, X^{(d)}_n\big)_{n \in \N_0}
\]
is recurrent for $\p^{\otimes d}$-a.e.\ environment $\vec{\omega}$.
\end{cor}
\begin{pfof}{Corollary \ref{I-cor5.3}}
Due to the independence of the processes and the environments in every component, we get
\begin{align*}
\sum_{n\in\N} P_{\vec{\omega}} \left(\big(X^{(1)}_{2n},X^{(2)}_{2n}, \ldots, X^{(d)}_{2n}\big)= \big(0,0,\ldots,0\big) \right) = \sum_{n\in\N} \prod_{i=1}^{d} P_{\omega^{(i)}} (X^{(i)}_{2n}=0) = \infty
\end{align*}
due to Theorem \ref{I-Rthm3} for $\p^{\otimes d}$-a.e.\ environment $\vec{\omega}$.
\renewcommand{\qedsymbol}{$\square$}
\end{pfof}
\begin{rmk}
An alternative proof of Corollary \ref{I-cor5.3} can be found in \cite{Zei} after Lemma A.2. The proof there uses the Nash-Williams inequality in the context of electrical networks.
\end{rmk}
\begin{rmk}
Corollaries \ref{I-cor5.3a} and \ref{I-cor5.3} show that the recurrence of a RWRE is indeed ``stronger'' than the recurrence of the symmetric random walk on $\mathbb{Z}$. Note that $d$ independent particles performing one-dimensional symmetric random walks return to the origin simultaneously only finitely often for $d \ge 3$.
\end{rmk}
\begin{cor}[Symmetric Random Walk combined with RWRE - Version 1]\label{I-cor6}
We first choose an environment $\omega$ which fulfils the assumptions \eqref{I-ass1}, \eqref{I-ass2}, and \eqref{I-ass3}. For a fixed $\omega$, let $(X_n,Y_n)_{n \in \N_0}$ be a 2-dimensional process where the processes $(X_n)_{n\in \N_0}$ and $(Y_n)_{n\in \N_0}$ are independent with respect to $P_{\omega}$, $(X_n)_{n\in \N_0}$ is a RWRE in the sense of \eqref{I-RWRE}, and $(Y_n)_{n\in \N_0}$ is a symmetric random walk on $\Z$. Then, $(X_n,Y_n)_{n \in \N_0}$ is recurrent for $\p$-a.e.\ environment $\omega$.
\end{cor}
\begin{pfof}{Corollary \ref{I-cor6}}
Due to the independence of the two components, we get
\begin{align*}
& \sum_{n \in \N} P_{\omega}\big((X_{2n},Y_{2n})=(0,0)\big) = \sum_{n \in \N} P_{\omega}\big(X_{2n}=0\big) \cdot P_{\omega}\big(Y_{2n}=0\big) \\
\ge\ & C \cdot \sum_{n \in \N} P_{\omega}\big(X_{2n}=0\big) \cdot n^{-\frac12} = \infty.
\end{align*}
Here, we used the lower bound
\begin{align} \label{I-cor1.6.6.1}
P_{\omega}\big(Y_{2n}=0\big) \ge C \cdot n^{-\frac12}
\end{align}
for the return probabilities of the symmetric random walk on $\Z$ with some constant $C>0$ (cf.\ Section 2.18.4 in \cite{Gut}) and Theorem \ref{I-Rthm1} with $\alpha =\tfrac12$ for the last two steps. Again, we can conclude the recurrence of the process $(X_n,Y_n)_{n \in \N_0}$ for $\p$-a.e.\ environment $\omega$.
\renewcommand{\qedsymbol}{$\square$}
\end{pfof}
\begin{cor}[Symmetric Random Walk combined with RWRE - Version 2] \label{I-cor1.5.7}
We first choose an environment $\omega$ which fulfils the assumptions \eqref{I-ass1}, \eqref{I-ass2}, and \eqref{I-ass3} and some $0 < \delta <1$. For a fixed environment $\omega$, let $(X_n,Y_n)_{n \in \N_0}$ be a Markov chain with values in $\Z^2$ which is determined by
\begin{align*}
&P_{\omega}\big((X_0,Y_0)=(0,0)\big) = 1 \vphantom{\frac{1 - \delta}{2}},\\
&P_{\omega}\big((X_{n+1},Y_{n+1})=(x+1,y)\big|(X_{n},Y_{n})=(x,y)\big)=\delta \cdot \omega_x \vphantom{\frac{1 - \delta}{2}},\\
&P_{\omega}\big((X_{n+1},Y_{n+1})=(x-1,y)\big|(X_{n},Y_{n})=(x,y)\big)=\delta \cdot (1-\omega_x) \vphantom{\frac{1 - \delta}{2}},\\
&P_{\omega}\big((X_{n+1},Y_{n+1})=(x,y\pm1)\big|(X_{n},Y_{n})=(x,y)\big)=\frac{1 - \delta}{2}\ .
\end{align*}
Again, $(X_n,Y_n)_{n \in \N_0}$ is recurrent for $\p$-a.e.\ environment $\omega$.
\end{cor}
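For concreteness, a minimal simulation sketch (Python) of the chain defined above follows; it is only an illustration, and the uniform law for $\omega_x$ is again an assumption made for this sketch only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
delta = 0.3
omega = {x: rng.uniform(0.2, 0.8) for x in range(-10_000, 10_001)}

def step(x, y):
    if rng.random() < delta:
        # move in the first component as a RWRE
        return (x + 1 if rng.random() < omega[x] else x - 1), y
    # otherwise move the second component like a symmetric random walk
    return x, (y + 1 if rng.random() < 0.5 else y - 1)

x, y, returns = 0, 0, 0
for n in range(1, 10_001):
    x, y = step(x, y)
    if (x, y) == (0, 0):
        returns += 1
print(returns)
\end{verbatim}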
\begin{figure}[h]
\begin{minipage}[t]{0.35\textwidth}
\begin{rmk}
In the situation of Corollary \ref{I-cor1.5.7}, we first choose the first (or second) component for the next step with probability $\delta$ (or $1-\delta$). If we choose the first component, then we change the first component by $\pm1$ as in the setting of a RWRE, otherwise we change the second component by $\pm1$ with probability $\tfrac12$ as in the case of a symmetric random walk on $\Z$.
\end{rmk}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\vspace{0pt}
\centering
\includegraphics[viewport=320 450 200 800, scale =0.6]{pic8a.pdf}
\caption{Transition probabilities for the considered process in Corollary \ref{I-cor1.5.7}}
\end{minipage}
\end{figure}
\begin{pfof}{Corollary \ref{I-cor1.5.7}}
For the proof, it is enough to look at the process $(X_{n},Y_{n})_{n \in \N_0}$ whenever it has moved in the first component. For this, we define inductively
\begin{align*}
& \tau_0:=0 & \text{and}\\
& \tau_k:= \inf\left\{n > \tau_{k-1}:\ X_n \neq X_{\tau_{k-1}} \right\} & \text{for } k \ge 1.
\end{align*}
\vspace{0pt}
Additionally, we define
\begin{align*}
& \widetilde{X}_n:= X_{\tau_n} \qquad \hphantom{\widehat{Y}_n:= Y_{\tau_n}} \text{for } n \in \N_0,\\
& \widetilde{Y}_n:= Y_{\tau_n} \qquad \hphantom{\widehat{X}_n:= X_{\tau_n}} \text{for } n \in \N_0.
\end{align*}
Note that $(\widetilde{X}_n)_{n \in \N_0}$ is a usual RWRE on $\Z$ for which the environment $\omega$ fulfils our assumptions \eqref{I-ass1}, \eqref{I-ass2}, and \eqref{I-ass3}. Further, we have
\begin{align} \label{I-cor1.6.7.3}
\widetilde{Y}_n \stackrel{d}{=} S(\tau_n - n),
\end{align}
where $(S(n))_{n \in \N_0}$ denotes a symmetric random walk on $\Z$ which is independent of $(\widetilde{X}_n)_{n \in \N_0}$, $(\tau_n)_{n \in \N_0}$, and the environment $\omega$. Note here that we can decompose $\tau_n$ into the increments
\begin{align}\label{I-cor1.6.7.1}
\tau_n = \sum_{i=1}^{n} (\tau_i - \tau_{i-1}),
\end{align}
where $(\tau_i - \tau_{i-1})_{i \in \N}$ is a sequence of i.i.d.\ random variables with a geometric distribution with parameter $\delta$ and expectation $\tfrac{1}{\delta}$.
Let us fix an arbitrary $\gamma > 0$. Due to \eqref{I-cor1.6.7.1}, an application of Cram\'er's theorem implies that we have
\begin{align*}
& P_{\omega} \left(\tau_n > \left(\tfrac{1}{\delta} + \gamma \right) \cdot n \right) \le \exp(-n \cdot I)
\end{align*}
for some constant $I=I(\gamma) > 0$. Therefore, the Borel-Cantelli lemma implies that we have
\begin{align*}
P_{\omega} \left(\liminf_{n\to\infty} \left\{ n \le \tau_n \le \left(\tfrac{1}{\delta} + \gamma \right) \cdot n \right\} \right) = 1
\end{align*}
for every environment $\omega$. Notice here that we have $\tau_n \ge n$ by definition. Due to the continuity of $P_{\omega}$, we can therefore conclude that
\begin{align} \label{I-cor1.6.7.2a}
\lim_{n \to \infty} P_{\omega} \Big( n \le \tau_n \le \left(\tfrac{1}{\delta} + \gamma \right) \cdot n \Big) = 1.
\end{align}
Since we are interested in the returns of the random walk to $0$, we have to distinguish between the cases in which $\tau_n$ is even or odd. Only for even values of $\tau_{2n}$ can our random walk $(\widetilde{X}_{2n}, \widetilde{Y}_{2n}) = (X_{\tau_{2n}}, Y_{\tau_{2n}})$ reach the point $(0,0)$. For this, we note that $\tau_n$ has a negative binomial distribution with parameters $n$ and $\delta$ and therefore has the following properties:
\begin{align}
& P_{\omega} (\tau_n = k) \le P_{\omega} (\tau_n = k + 1) \quad \text{for } n \le k \le \frac{n -1}{\delta} \nonumber,\\
& P_{\omega} (\tau_n = k) \ge P_{\omega} (\tau_n = k + 1) \quad \text{for } k \ge \max \left\{\frac{n - 1}{\delta}, n \right\} \nonumber,\\
& \max_{k \ge n} P_{\omega} (\tau_n = k) \xrightarrow{n \to \infty} 0. \label{I-cor1.6.7.2b}
\end{align}
Thus, a combination of \eqref{I-cor1.6.7.2a} and \eqref{I-cor1.6.7.2b} implies that in the limit, for $n \to \infty$, the probability for the even and odd part is the same, i.e.
\begin{align*}
\lim_{n \to \infty} P_{\omega} \Big( n \le \tau_n \le \left(\tfrac{1}{\delta} + \gamma \right) \cdot n ,\ \tau_n \in 2 \N_0 \Big) = \frac12.
\end{align*}
Since due to our choice $\gamma > 0$ we have
\begin{align}
P_{\omega} \Big(n \le \tau_n \le \left(\tfrac{1}{\delta} + \gamma \right) \cdot n,\ \tau_n \in 2 \N_0 \Big) > 0 \nonumber
\intertext{for all $n \in \N$, a combination of the last two in-/equalities implies that there exists some constant $C_2 > 0$ such that}
P_{\omega} \Big(n \le \tau_n \le \left(\tfrac{1}{\delta} + \gamma \right) \cdot n,\ \tau_n \in 2 \N_0 \Big) \ge C_2 > 0\label{I-cor1.6.7.4}
\end{align}
for all $n \in \N$ and for every environment $\omega$. Using the independence of $(\widetilde{X}_{2n})_{n \in \N_0}$ and $(\widetilde{Y}_{2n})_{n \in \N_0}$, we therefore get the following lower bound:
\begin{align*}
& \sum_{n \in \N} P_{\omega}\big((\widetilde{X}_{2n},\widetilde{Y}_{2n})=(0,0)\big) = \sum_{n \in \N} P_{\omega}(\widetilde{X}_{2n}=0) \cdot P_{\omega}(\widetilde{Y}_{2n}=0) \vphantom{\sum_{\substack{i = 2n \\ i \in 2 \N_0}}^{\lfloor \left(\tfrac{1}{\delta}+ \gamma\right) \cdot2 n \rfloor}} \\
\ge\ & \sum_{n \in \N} P_{\omega}(\widetilde{X}_{2n}=0) \cdot \sum_{\substack{i = 2n \\ i \in 2 \N_0}}^{\left\lfloor \left(\tfrac{1}{\delta} + \gamma \right) \cdot2 n \right\rfloor} P_{\omega}(\widetilde{Y}_{2n}=0, \tau_{2n} = i )\\
\ge\ & \sum_{n \in \N} P_{\omega}(\widetilde{X}_{2n}=0) \cdot \sum_{\substack{i = 2n \\ i \in 2 \N_0}}^{\left\lfloor \left(\tfrac{1}{\delta} + \gamma\right) \cdot 2n \right\rfloor} P_{\omega}\big(S(i - 2n) =0 \big) \cdot P_{\omega}(\tau_{2n} = i )\\
\ge \ & \sum_{n \in \N} P_{\omega}(\widetilde{X}_{2n}=0) \cdot \Bigg(P_{\omega}(\tau_{2n} = 2n ) + \sum_{\substack{i = 2n + 2 \\ i \in 2 \N_0}}^{\left\lfloor \left(\tfrac{1}{\delta} + \gamma\right) \cdot 2n \right\rfloor} C \cdot (i - 2n)^{-\frac12} \cdot P_{\omega}(\tau_{2n} = i) \Bigg)
\intertext{Here, we used \eqref{I-cor1.6.7.3} in the third line and the usual lower bound for the return probabilities of the symmetric random walk on $\Z$ (cf.\ \eqref{I-cor1.6.6.1}), i.e.
\[
P_{\omega}\big(S(i - 2n) =0 \big) \ge C \cdot (i-2n)^{-\frac12}
\]
for $i \in 2\N$, $i \ge 2n+2$ and with some constant $C>0$, in the fourth line. From this, we get}
& \sum_{n \in \N} P_{\omega}\big((\widetilde{X}_{2n},\widetilde{Y}_{2n})=(0,0)\big) \vphantom{\sum_{\substack{i = 2n \\ i \in 2 \Z}}^{\lfloor \left(\tfrac{1}{\delta}+ \gamma\right) \cdot2 n \rfloor}}\\
\ge \ & C \cdot \sum_{n \in \N} P_{\omega}(\widetilde{X}_{2n}=0) \cdot \big(2 \cdot\left( \tfrac{1}{\delta} + \gamma - 1\right)\big)^{-\frac12} \cdot n^{-\frac12} \cdot \sum_{\substack{i = 2n \\ i \in 2 \N_0}}^{\left\lfloor \left(\tfrac{1}{\delta} + \gamma\right) \cdot 2n \right\rfloor} P_{\omega}(\tau_{2n} = i ) \\
\ge\ & C \cdot C_2 \cdot \big(2 \cdot\left( \tfrac{1}{\delta} + \gamma - 1 \right)\big)^{-\frac12} \cdot \sum_{n \in \N} P_{\omega}(\widetilde{X}_{2n}=0) \cdot n^{-\frac12} = \infty \vphantom{\sum_{\substack{i = 2n \\ i \in 2 \Z}}^{\lfloor \left(\tfrac{1}{\delta}+ \gamma\right) \cdot2 n \rfloor}}
\end{align*}
for $\p$-a.e.\ environment $\omega$. Here, we additionally made use of \eqref{I-cor1.6.7.4} and Theorem \ref{I-Rthm1} (applied for $\alpha =\tfrac12$) in the last line. This implies that the process
\[
(\widetilde{X}_{n},\widetilde{Y}_{n})_{n \in \N_0}
\]
is recurrent for $\p$-a.e.\ environment $\omega$. Finally, this obviously implies that our process
\[
(X_{n},Y_{n})_{n \in \N_0}
\]
is also recurrent for $\p$-a.e.\ environment $\omega$ since we can embed the paths of the process $(\hspace{-1pt}\widetilde{X}_{n},\hspace{-2pt}\widetilde{Y}_{n}\hspace{-1pt})_{n \in \N_0}$ into the paths of the process $(X_{n},Y_{n})_{n \in \N_0}$.
\renewcommand{\qedsymbol}{$\square$}
\end{pfof}
\newpage
\bibliographystyle{alpha}
\section{Introduction}
Static and dynamic properties of
a Deep Operator Network~(DeepONet) to approximate the solution operator~$G$ (with $x_0 = 0$) and proved its approximation capacity via~$\tilde{G}$. The input to the proposed DeepONet was the trajectory of the function~$y_{[0, T]}$ discretized using~$m \ge 1$ interpolation points (also known as \textit{sensors} in~\cite{lu2021learning}). However, their proposed DeepONet~\cite{lu2021learning} has the following \textit{two} drawbacks. (i) Their DeepONet can effectively approximate $G$ (with $x_0 = 0$) only for small values of~$T$. For longer time horizons, \emph{i.e., } for $T \gg 1$ second, one must increase the number~$m$ of sensors, which makes the training process challenging. (ii) To predict~$x(t)$ for any $t \in [0,T]$, their DeepONet must take as input the trajectory $u(t)$ within \textit{the whole} interval~$[0,T]$. Such an assumption does not hold in the problem studied in this paper. That is, we only have access to past values of $y(t)$, \emph{i.e., } within the interval $[0,t) \subset [0,T]$. Hence, the DeepONet designed in~\cite{lu2021learning} is not applicable to the problem we are addressing here.
To alleviate the above two drawbacks, we propose in the next section an operator learning framework that first designs and trains a novel DeepONet to approximate~$G$ locally and then uses the trained DeepONet recursively to approximate the generator's dynamic response over the whole interval~$[0, T]$.
\section{Shadowing a Transient Synchronous Generator} \label{appendix:pst-experiment}
This experiment uses the proposed data-driven DeepONet to approximate the dynamic response of a synchronous generator using data collected with the Power System Toolbox~(PST)~\cite{chow1992toolbox}. In particular, the experiment focuses on the PST transient model of a generator with a default exciter connected to the two-area system depicted in Figure~\ref{fig:two-area-system}. Compared to the numerical experiments of Section~\ref{sec:numerical-experiments}, the trained DeepONet of this experiment only ``shadows'' the generator. That is, PST does not use the DeepONet's predicted state $x(t_n +h)$ to solve the stator~\eqref{eq:multimachine-stator} and network~\eqref{eq:multimachine-network} equations. As a result, the DeepONet always observes the correct interface inputs~$y(t_n)$, simplifying the learning/inference task and alleviating the error accumulation. In addition, this scenario's objective is also to represent the problem of learning the response of an actual synchronous generator connected to an actual power grid. Thus, we believe this is a first step towards building a generator digital twin, which we will study in our future work.
\begin{figure}[t!]
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=0.25 ] (74,39) .. controls (74,25.19) and (85.19,14) .. (99,14) .. controls (112.81,14) and (124,25.19) .. (124,39) .. controls (124,52.81) and (112.81,64) .. (99,64) .. controls (85.19,64) and (74,52.81) .. (74,39) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 184; green, 233; blue, 134 } ,fill opacity=1 ] (73,210) .. controls (73,196.19) and (84.19,185) .. (98,185) .. controls (111.81,185) and (123,196.19) .. (123,210) .. controls (123,223.81) and (111.81,235) .. (98,235) .. controls (84.19,235) and (73,223.81) .. (73,210) -- cycle ;
\draw [line width=2.25] (83.17,163.17) -- (112.83,162.83) ;
\draw (98,134) -- (98,185) ;
\draw (98,134) -- (134,134) ;
\draw [line width=2.25] (134,94) -- (134,153) ;
\draw [line width=2.25] (83.17,75.17) -- (112.83,74.83) ;
\draw [line width=2.25] (84.17,95.17) -- (113.83,94.83) ;
\draw (98,115) -- (134,115) ;
\draw (98,64) -- (98,115) ;
\draw [line width=2.25] (163,94) -- (163,153) ;
\draw (134,123.5) -- (163,123.5) ;
\draw (163,144) -- (183,144) ;
\draw (183,144) -- (183,174) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=0.4 ] (182.51,185.02) -- (174,174.03) -- (190.98,174) -- cycle ;
\draw (164,134) -- (214,134) ;
\draw [line width=2.25] (213,94) -- (213,153) ;
\draw (164,114) -- (214,114) ;
\draw (214,134) -- (264,134) ;
\draw (214,114) -- (264,114) ;
\draw [line width=2.25] (263,94) -- (263,153) ;
\draw (243,144) -- (263,144) ;
\draw (243,144) -- (243,174) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=0.4 ] (242.51,185.02) -- (234,174.03) -- (250.98,174) -- cycle ;
\draw [line width=2.25] (173,164) -- (194,164) ;
\draw [line width=2.25] (233,165) -- (254,165) ;
\draw (263,123.5) -- (292,123.5) ;
\draw [line width=2.25] (292,94) -- (292,153) ;
\draw (293,133) -- (329,133) ;
\draw (293,114) -- (329,114) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=0.25 ] (305,38) .. controls (305,24.19) and (316.19,13) .. (330,13) .. controls (343.81,13) and (355,24.19) .. (355,38) .. controls (355,51.81) and (343.81,63) .. (330,63) .. controls (316.19,63) and (305,51.81) .. (305,38) -- cycle ;
\draw [line width=2.25] (314.17,74.17) -- (343.83,73.83) ;
\draw [line width=2.25] (315.17,94.17) -- (344.83,93.83) ;
\draw (329,63) -- (329,114) ;
\draw [line width=2.25] (314.17,162.17) -- (343.83,161.83) ;
\draw (329,133) -- (329,184) ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=0.25 ] (304,209) .. controls (304,195.19) and (315.19,184) .. (329,184) .. controls (342.81,184) and (354,195.19) .. (354,209) .. controls (354,222.81) and (342.81,234) .. (329,234) .. controls (315.19,234) and (304,222.81) .. (304,209) -- cycle ;
\draw [fill={rgb, 255:red, 184; green, 233; blue, 134 } ,fill opacity=0.25 ][dash pattern={on 0.84pt off 2.51pt}] (44,175) -- (160,175) -- (160,264.5) -- (44,264.5) -- cycle ;
\draw (141,52) node [anchor=north west][inner sep=0.75pt] [align=left] {PST two-area system};
\draw (91,202) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle G$};
\draw (92,31) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle G$};
\draw (323,30) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle G$};
\draw (322,201) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle G$};
\draw (50,241) node [anchor=north west][inner sep=0.75pt] [align=left] {DeepONet gen.};
\end{tikzpicture}
\caption{A synchronous (transient) generator model~$G$ that we approximate with a data-driven DeepONet. We trained the DeepONet using data collected from the two-area, four-generator system of the Power System Toolbox~(PST).}
\label{fig:two-area-system}
\end{figure}
\textit{Training data.} We generated the training data $\mathcal{D}_{\text{PST}}$ by simulating $N_\text{exp}=300$ experiments on PST. Each experiment proceeds as follows. (i) We simulated the two-area system on PST using a uniform partition $\mathcal{P} \subset [0.0,5.0]$ (s), which starts at $t_0 = 0$ (s) and has a constant step size of $h = 0.05$. (ii)~We simulated a fault at time $t_f = 0.1 (s).$ (iii)~Finally, we cleared the fault at time $t_f + \Delta t_f$, where $\Delta t_f$ is the fault duration. We uniformly sampled this fault duration from the interval $[0.01, 0.1]$. After each experiment, we collected trajectory data, including the interacting input trajectories $\{y(t_n):t_n \in \mathcal{P}\}$, where $y(t) = (I_d(t), I_q(t))^\top$, the exciter input data $\{u(t_n)\equiv E_\text{fld}(t_n):t_n \in \mathcal{P}\}$, and state trajectory data $\{x(t_n):t_n \in \mathcal{P}\}$, where $x(t) = (\delta(t), \omega(t), E'_d(t), E'_q(t))^\top.$ We constructed our training dataset using this trajectory data. In particular, we used interpolation to discretize the inputs using $m=2$ sensors, \emph{i.e., }
$\tilde{y}^n_m :=\{y(t_n+d_0), y(t_n + d_1)\}$ and $\tilde{u}^n_m :=\{u(t_n+d_0), u(t_n + d_1)\},$ where $d_0=0.0$ and $d_1$ was uniformly sampled from the open interval $(0,h)$. Recall that in Section~\ref{sec:numerical-experiments}, we discretized the inputs using one sensor at time $t_n.$ With these discretized inputs, we generated the final training dataset:
$$\mathcal{D}_\text{PST} = \{x_k(t_n),\tilde{y}^n_{m,k}, \tilde{u}^n_{m,k}, \{0,d_{m,k}\}, h_k, G_{\Delta,k}\}_{k=1}^{N_\text{train}}.$$
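As a rough illustration only (not the authors' actual pipeline), the following Python sketch shows how such samples could be assembled from stored PST trajectories with two sensors per step; the array layout, the linear interpolation, and the use of the next state as the regression target are assumptions made for this sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
h = 0.05  # step size of the uniform partition (s)

def build_samples(x, y, u):
    # x: (N, 4) states, y: (N, 2) interface inputs, u: (N,) exciter input,
    # all sampled on the uniform partition with step h.
    samples = []
    for n in range(len(x) - 1):
        d1 = rng.uniform(0.0, h)                      # second sensor offset in (0, h)
        y_d1 = y[n] + (y[n + 1] - y[n]) * d1 / h      # linear interpolation
        u_d1 = u[n] + (u[n + 1] - u[n]) * d1 / h
        samples.append({
            "x_n": x[n],
            "y_sensors": np.concatenate([y[n], y_d1]),
            "u_sensors": np.array([u[n], u_d1]),
            "sensor_offsets": np.array([0.0, d1]),
            "h": h,
            "target": x[n + 1],                       # assumed regression target
        })
    return samples
\end{verbatim}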
\textit{Training protocol and test results.} We trained the data-driven DeepONet using Adam~\cite{kingma2014adam}. We also designed a simple hyper-parameter optimization routine to find the optimal architectures for the Branch and Trunk nets. Then, we tested the data-driven DeepONet using a PST test trajectory not included in the training dataset. Figure~\ref{fig:pst-results} compares the predicted dynamic response of the proposed data-driven DeepONet with the ground truth. The results show excellent agreement between the predicted and actual response of the generator.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.95\textwidth, height=6.0cm]{figs/fig-angle-pst.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.95\textwidth, height=6.0cm]{figs/fig-velocity-pst.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.95\textwidth, height=6.0cm]{figs/fig-E_d-pst.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=0.95\textwidth, height=6.0cm]{figs/fig-E_q-pst.pdf}
\end{subfigure}
\caption{Comparison of the data-driven DeepONet prediction with the true fault test trajectory of the synchronous generator state $x(t) = (\delta(t), \omega(t), E_d'(t), E_q'(t))^\top$. We simulated the test trajectory using the Power System Toolbox~(PST) over the uniform partition $\mathcal{P} \subset [0,5]$ (s) of constant step size $h=0.05$.}
\label{fig:pst-results}
\end{figure}
\section{Incorporating Power Grid Mathematical Models} \label{sec:residual-DeepONet}
On the one hand, the amount of data collected by utilities has increased during the last few years. On the other hand, the power engineering community has developed and optimized sophisticated first-principle mathematical models for planning, operating, and controlling the power grid. Thus, we believe our proposed DeepONet framework must be able to (i) learn from high-fidelity datasets and (ii) use previously developed mathematical models.
To this end, in this section, we propose a residual DeepONet, a deep operator network that approximates the residual (or error-correction) operator. This residual operator describes the error/residual dynamics between the component's true solution operator~$\tilde{G}$ and the solution operator resulting from the component's mathematical model.
Formally, we consider the following non-autonomous IVP based on the previously derived mathematical model of an SG:
\begin{gather}
\begin{aligned} \label{eq:nonautonomous-known}
\frac{d}{dt}\tilde{x}(t) &= f_\text{approx}(\tilde{x}(t),\tilde{y};\lambda), \qquad t \in [0,T], \\
\tilde{x}(0) &= x_0.
\end{aligned}
\end{gather}
In the above, $f_\text{approx}: \mathcal{X} \times \mathcal{Y} \to \mathcal{X}$ is the known vector field that approximates the true vector field~$f$, \emph{i.e., } $f_\text{approx} \approx f$ of a synchronous generator. The corresponding \textit{local} solution of~\eqref{eq:nonautonomous-known} is
\begin{align*}
\tilde{G}_\text{approx}\left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda \right)(h_n) \equiv \tilde{x}(t_n) + \int_{0}^{h_n} f_\text{approx}(\tilde{x}(t_n+ s),\tilde{y}^n_m;\lambda) ds, \qquad h_n \in (0,h].
\end{align*}
In practice, we may only have access to this approximate representation of the solution operator, which we denote as~$\hat{G}_\text{approx}$. $\hat{G}_\text{approx}$ can be, for example, (i) a step of an integration scheme, \emph{e.g., } Runge-Kutta~\cite{iserles2009first}, with variable step-size~$h_n$, (ii) a physics-informed DeepONet~\cite{wang2021long} trained to satisfy~\eqref{eq:nonautonomous-known} locally, or (iii) a DeepONet (see Section~\ref{sec:DeepONet}) trained using a dataset~$\mathcal{D}$ generated by simulating~\eqref{eq:nonautonomous-known}.
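For instance, one classical fourth-order Runge--Kutta step can play the role of $\hat{G}_\text{approx}$; a minimal Python sketch (assuming, as in our setting, that the discretized input is held constant over the step) is:
\begin{verbatim}
def g_approx_rk4(f_approx, x_n, y_n, lam, h_n):
    # One RK4 step of dx/dt = f_approx(x, y; lam) with y frozen at its sensor value.
    k1 = f_approx(x_n, y_n, lam)
    k2 = f_approx(x_n + 0.5 * h_n * k1, y_n, lam)
    k3 = f_approx(x_n + 0.5 * h_n * k2, y_n, lam)
    k4 = f_approx(x_n + h_n * k3, y_n, lam)
    return x_n + (h_n / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
\end{verbatim}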
Inspired by multi-fidelity schemes~\cite{kim2020multi,wang2020mfpc}, we propose to decompose the local solution operator~$\tilde{G}$ as follows:
\begin{align*}
\tilde{G} \left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda \right)(h_n) &\equiv \tilde{x}(t_n + h_n) \\
&= \tilde{x}(t_n) + \int_0^{h_n}f(\tilde{x}(t_n + s), \tilde{y}^n_m,\lambda)ds \\
&= \tilde{x}(t_n) + \int_0^{h_n}f_\text{approx}(\tilde{x}(t_n + s), \tilde{y}^n_m,\lambda)ds + \text{``residual dynamics''} \\
&= \tilde{G}_\text{approx} \left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda \right)(h_n) + \text{``residual dynamics''}.
\end{align*}
Thus, we define the \textit{residual operator}~$G_{\epsilon}$ as:
$$G_{\epsilon}\left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda \right)(h_n):=[\tilde{G} - \tilde{G}_\text{approx}]\left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda \right)(h_n), \qquad h_n \in (0,h].$$
In the above, we have adopted an affine decomposition of the true solution operator~$\tilde{G}$. Such a decomposition will simplify the mathematical analysis of the cumulative error of the proposed residual DeepONet numerical scheme (see Section~\ref{sub-sec:error-bound}). However, we remark that other decomposition forms are also possible. For example, we can use the multi-fidelity framework~\cite{wang2020mfpc}: $\tilde{G} = G_{\epsilon,\cdot} \circ \tilde{G}_\text{approx} + G_{\epsilon,+}$. Such a framework will be studied in our future work.
\subsection{The Residual DeepONet Design} \label{sub-sec:residual-DeepONet}
To approximate the residual operator~$G_\epsilon$, we design a residual deep operator network~(residual DeepONet) $G_\theta^{\epsilon}$, with trainable parameters~$\theta \in \mathbb{R}^p$. $G_\theta^{\epsilon}$ has the same architecture as the data-driven DeepONet (see Figure~\ref{fig:DeepONet}). That is, $G_\theta^{\epsilon}$ has a Branch Net and a Trunk Net with the same inputs. Furthermore, the residual DeepONet's output, defined as follows:
\begin{align*}
G_{\epsilon}\left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda \right)(h_n) &:= \tilde{e}(t_n + h_n) \in \mathbb{R}^{n_x} \\
&= x(t_n + h_n) - \tilde{x}_\text{approx}(t_n + h_n) \\
&= [\tilde{G} - \tilde{G}_\text{approx}]\left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda \right)(h_n),
\end{align*}
is approximated using the same dot product as for the data-driven DeepONet case.
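A minimal PyTorch sketch of one possible realization of this architecture is given below; the layer widths, the latent dimension $K$, and the use of plain fully connected networks for the Branch and Trunk nets are assumptions made for illustration, not the exact design used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class ResidualDeepONet(nn.Module):
    def __init__(self, n_x=4, m=2, n_p=1, K=64):
        super().__init__()
        branch_in = n_x + m + n_p                       # state, sensors, parameters
        self.branch = nn.Sequential(nn.Linear(branch_in, 128), nn.Tanh(),
                                    nn.Linear(128, K * n_x))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(),
                                   nn.Linear(128, K))   # trunk input: the step h_n
        self.n_x, self.K = n_x, K

    def forward(self, x_n, y_m, lam, h_n):
        b = self.branch(torch.cat([x_n, y_m, lam], dim=-1)).view(-1, self.n_x, self.K)
        t = self.trunk(h_n).unsqueeze(-1)               # (batch, K, 1)
        # dot product over the latent dimension K gives the predicted residual
        return torch.matmul(b, t).squeeze(-1)
\end{verbatim}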
To train the parameters of $G^{\epsilon}_\theta$, we minimize the loss function:
$$\mathcal{L}(\theta;\mathcal{D}_\text{train}) = \frac{1}{N_\text{train}} \sum_{k=1}^{N_\text{train}} \left \|\tilde{e}_k(t_n + h_{n,k}) - G^{\epsilon}_\theta(x_k(t_n), \tilde{y}_{m,k}^n, \lambda_k)(h_{n,k}) \right \|_2^2,$$
using the dataset of $N_\text{train}:=|\mathcal{D}_\text{train}|$ triplets:
$$\mathcal{D}_\text{train}:= \left \{(x_k(t_n), \tilde{y}_{m,k}^n),h_{n,k}, \tilde{e}_k(t_n + h_{n,k}) \right \}_{k=1}^{N_\text{train}}.$$
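A corresponding training-step sketch with Adam, reusing the ResidualDeepONet sketch above, could look as follows (the batching and tensor shapes are assumptions):
\begin{verbatim}
import torch

model = ResidualDeepONet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(batch):
    # batch holds tensors x_n, y_m, lam, h_n and the residual target e,
    # each with a leading batch dimension.
    optimizer.zero_grad()
    pred = model(batch["x_n"], batch["y_m"], batch["lam"], batch["h_n"])
    loss = torch.mean(torch.sum((pred - batch["e"]) ** 2, dim=-1))
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}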
\subsection{Predicting the Dynamic Response over the Interval~$[0,T]$} \label{sub-sec:residual-DeepONet-scheme}
To predict the dynamic response of the generator over the interval~$[0, T]$, we propose the recursive residual DeepONet numerical scheme detailed in Algorithm~\ref{alg:residual-DeepONet}.
\begin{algorithm}[t]
\DontPrintSemicolon
\SetAlgoLined
\textbf{Require:} trained residual DeepONet~$G^{\epsilon}_{\theta^*}$, device's parameters~$\lambda \in \Lambda$, initial state~$x_0 \in \mathcal{X}$, and time partition $\mathcal{P} \subset [0,T]$.\;
Initialize $\tilde{x}(t_0) = x_0$\;
\For{$n = 0,\ldots,M-1$}{
observe the local trajectory of interacting variables~$\tilde{y}_m^n$\;
update the independent variable~$t_{n+1} = t_n + h_n$\;
solve~\eqref{eq:nonautonomous-known} to obtain $\tilde{x}_\text{approx} (t_{n+1})= \hat{G}_\text{approx}\left(\tilde{x}(t_n), \tilde{y}_m^n, \lambda\right)(h_n)$\;
forward pass of the residual DeepONet
$$\tilde{e}(t_{n+1}) = G^{\epsilon}_{\theta^*} \left(\tilde{x}(t_n), \tilde{y}^n_m, \lambda \right)(h_n)$$
\vspace{-1.5em}\;
update the state vector
$$\tilde{x}(t_{n+1}) = \tilde{x}_\text{approx}(t_{n+1}) + \tilde{e}(t_{n+1})$$
\vspace{-1.5em}\;}
\textbf{Return:} simulated trajectory~$\{\tilde{x}(t_n) \in \mathcal{X} : t_n \in \mathcal{P}\}$.\;
\caption{Residual DeepONet Numerical Scheme}
\label{alg:residual-DeepONet}
\end{algorithm}
Algorithm~\ref{alg:residual-DeepONet} takes as inputs (i) the trained residual DeepONet~$G^{\epsilon}_{\theta^*}$, (ii) the device's parameters~$\lambda \in \Lambda$, (iii) the initial condition~$x_0 \in \mathcal{X}$, and (iv) the time partition~$\mathcal{P} \subset [0, T]$. During the $n$th recursive step of the algorithm, we (i) observe the current state~$\tilde{x}(t_n)$ and the local discretized input of interacting variables~$\tilde{y}^n_m$, (ii) solve~\eqref{eq:nonautonomous-known} to obtain the approximate next state vector~$\tilde{x}_\text{approx}(t_{n+1})$ (or use a trained physics-informed neural network), (iii) perform a forward pass of the residual DeepONet~$G^{\epsilon}_{\theta^*}$ to obtain the predicted error~$\tilde{e}(t_{n+1})$, and (iv) update the state vector via $\tilde{x}(t_{n+1}) = \tilde{x}_\text{approx}(t_{n+1}) + \tilde{e}(t_{n+1})$. Finally, Algorithm~\ref{alg:residual-DeepONet} outputs the simulated trajectory $\{\tilde{x}(t_n) \in \mathcal{X} : t_n \in \mathcal{P}\}$. Let us conclude this section by providing an estimate for the cumulative error of Algorithm~\ref{alg:residual-DeepONet}.
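In code, Algorithm~\ref{alg:residual-DeepONet} reduces to the following loop (a sketch; here \texttt{g\_approx} stands for $\hat{G}_\text{approx}$, e.g.\ a wrapper around the RK4 step sketched earlier, \texttt{model} for the trained residual DeepONet, and \texttt{observe\_y} for whatever interface provides the local sensor values):
\begin{verbatim}
def rollout(model, g_approx, observe_y, x0, lam, partition):
    # partition: list of times t_0 < t_1 < ... < t_M
    x = x0
    trajectory = [x0]
    for n in range(len(partition) - 1):
        h_n = partition[n + 1] - partition[n]
        y_m = observe_y(partition[n])          # local discretized input
        x_approx = g_approx(x, y_m, lam, h_n)  # coarse-model prediction
        e = model(x, y_m, lam, h_n)            # predicted residual
        x = x_approx + e                       # corrected state
        trajectory.append(x)
    return trajectory
\end{verbatim}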
\subsection{Error Bound for the residual DeepONet Numerical Scheme} \label{sub-sec:error-bound}
This section provides an estimate for the cumulative error bound between $x(t_n)$ obtained using the \textit{true} solution operator and $\hat{x}(t_n)$ obtained using the residual DeepONet numerical scheme detailed in Algorithm~\ref{alg:residual-DeepONet}. To this end, we start by stating the following assumptions.
\textit{Assumptions.} We assume the \textit{input function} of interacting variables~$y$ belongs to $V \subset C[0,T]$, where $V$ is compact. We also assume the vector field~$f: \mathcal{X} \times \mathcal{Y} \to \mathcal{X}$ is Lipschitz in~$x$ and $y$, \emph{i.e., }
\begin{align*}
\|f(x_1,y) - f(x_2, y) \| &\le L ||x_1 - x_2||, \\
\|f(x,y_1) - f(x, y_2) \| &\le L ||y_1 - y_2||,
\end{align*}
where $L > 0$ is a Lipschitz constant and $x_1, x_2, y_1$ and $y_2$ are in the proper space. Note that such assumptions are generally satisfied by engineering systems as $f$ is often differentiable with respect to~$x$ and $y$. However, we will show empirically (see Section~\ref{sec:numerical-experiments}) that DeepONet can provide an accurate prediction even when the above assumptions fail, \emph{e.g., } when the external power grid to the synchronous generator experiences a disturbance.
We now provide in the following Lemma an estimate for the error bound between~$x(t_n)$ obtained using the true solution operator and $\tilde{x}(t_n)$ obtained using the solution operator of the approximate model~\eqref{eq:nonautonomous-approximate}. For simplicity, we will assume $y$ is a one-dimensional input. Extending our analysis to multiple inputs is straightforward.
\begin{lemma} \label{lemma:error-bound-I}
For any $t_n \in \mathcal{P}$ and $h_n \in (0,h]$, we have
\begin{align} \label{eq:error-bound-I}
\|x(t_n) - \tilde{x}(t_n) \| \le \frac{1-r^n}{1-r} \mathcal{E},
\end{align}
where $r:= e^{Lh}$ and $\mathcal{E} := \max_{n} \{L h_n \kappa_n e^{Lh_n}\}$.
\end{lemma}
\begin{proof}
For any $h_n \in (0,h]$,
\begin{align*}
x(t_{n+1}) &\equiv G(x(t_n),y,\lambda)(h_n) = x(t_n) + \int_{t_n}^{t_n + h_n} f(G(x(t_n),y, \lambda)(s), y(s)) ds, \\
\tilde{x}(t_{n+1}) &\equiv G(\tilde{x}(t_n),\tilde{y},\lambda)(h_n) = \tilde{x}(t_n) + \int_{t_n}^{t_n + h_n} f(G(\tilde{x}(t_n),\tilde{y}, \lambda)(s), \tilde{y}(s)) ds.
\end{align*}
We then have
\begin{align*}
\|x(t_{n+1}) - \tilde{x}(t_{n+1})\| &\le \|x(t_n) - \tilde{x}(t_n)\| + \int_{t_n}^{t_n + h_n} \|f(x(s), y(s)) - f(\tilde{x}(s), \tilde{y}(s)) \| ds \\
&\le \|x(t_n) - \tilde{x}(t_n)\| + L\int_{t_n}^{t_n + h_n} |y(s) - \tilde{y}(s)|ds + L\int_{t_n}^{t_n + h_n} \|x(s) - \tilde{x}(s)\|ds \\
&\le \|x(t_n) - \tilde{x}(t_n)\| + L h_n \kappa_n + L\int_{t_n}^{t_n + h_n} \|x(s) - \tilde{x}(s)\|ds.
\end{align*}
In the above, $\kappa_n$ bounds the local approximation error of the input~$y$ within the interval~$[t_n, t_n + h_n]$:
$$\max_{s \in [t_n, t_n + h_n]}~|y(s) - \tilde{y}(s)| \le \kappa_n$$
such that $\kappa_n \searrow 0$ as the number of sensors $m \nearrow + \infty$. We refer the interested reader to~\cite{lu2021learning} for more details about the above input approximation. Then, using Gronwall's inequality we have
\begin{align*}
\|x(t_{n+1}) - \tilde{x}(t_{n+1})\| \le \|x(t_n) - \tilde{x}(t_n)\|e^{Lh_n} + Lh_n \kappa_n e^{Lh_n}.
\end{align*}
Taking $\mathcal{E} := \max_{n} \{L h_n \kappa_n e^{Lh_n}\}$ gives
\begin{align*}
\|x(t_{n+1}) - \tilde{x}(t_{n+1})\| \le \|x(t_n) - \tilde{x}(t_n)\|e^{Lh_n} + \mathcal{E}.
\end{align*}
The bound~\eqref{eq:error-bound-I} follows by iterating this recursion: since $x(t_0) = \tilde{x}(t_0) = x_0$ and $e^{Lh_n} \le r$, we get $\|x(t_n) - \tilde{x}(t_n)\| \le \mathcal{E}\left(1 + r + \cdots + r^{n-1}\right) = \frac{1-r^n}{1-r}\,\mathcal{E}$.
\end{proof}
Before we estimate the cumulative error between $\tilde{x}(t_n)$ obtained using the solution operator of the approximate model~\eqref{eq:nonautonomous-approximate} and $\hat{x}(t_n)$ obtained using the proposed residual DeepONet numerical scheme, we must first review the universal approximation theorem of neural networks for high-dimensional functions introduced in~\cite{cybenko1989approximation}. For a given $h_n \in (0,h]$, define the following vector-valued continuous function~$\varphi: \mathbb{R}^{n_x} \times \mathbb{R}^m \times \mathbb{R}^{n_p} \to \mathbb{R}^{n_x}$
$$\varphi(z_n, \tilde{y}^{n}_m, \lambda) = [G - G_\text{approx}](z_n, \tilde{y}_{m}^n, \lambda)(h_n),$$
where $z_n \in \mathbb{R}^{n_x}$. Then, by the universal approximation theorem, for any $\epsilon > 0$, there exist $W_1 \in \mathbb{R}^{K \times (n_x + m + n_p)}$, $b_1 \in \mathbb{R}^{K}$, $W_2 \in \mathbb{R}^{n_x \times K}$, and $b_2 \in \mathbb{R}^{n_x}$ such that
\begin{align} \label{eq:nn-approx}
\left\|\varphi(z_n, \tilde{y}^{n}_m, \lambda) -
G^{\epsilon}_{\theta^*}(z_n, \tilde{y}^n_m, \lambda)
\right\| < \epsilon,
\end{align}
where
$$G^{\epsilon}_{\theta^*}(z_n, \tilde{y}^n_m, \lambda) := \left(W_2 \sigma \left(W_1 \cdot \text{col}(z_n, \tilde{y}^n_m, \lambda) +b_1\right) + b_2 \right).$$
We now introduce the following lemma, presented in~\cite{qin2021data}, which provides an alternative form to describe the local solution operator~$G$ of the approximate system~\eqref{eq:nonautonomous-approximate}.
\begin{lemma}
Consider the local solution operator of the approximate model~\eqref{eq:nonautonomous-approximate}, \emph{i.e., } $G(\tilde{x}(t_n), \tilde{y}^n_m, \lambda)(h_n)$. Then, there exists a function $\Phi: \mathbb{R}^{n_x} \times \mathbb{R}^{m} \times \mathbb{R}^{n_p} \times \mathbb{R} \to \mathbb{R}^{n_x}$, which depends on~$f$, such that
$$\tilde{x}(t_{n+1}) = G(\tilde{x}(t_n), \tilde{y}^{n}_m, \lambda)(h_n) = \Phi(\tilde{x}(t_n), \tilde{y}^n_m, \lambda,h_n),$$
for any $t_n \in \mathcal{P}$ and $h_n \in (0,h]$.
\end{lemma}
We are now ready to provide an estimate for the cumulative error between~$\hat{x}(t_n)$ and $\tilde{x}(t_n)$.
\begin{lemma} \label{lemma:error-bound-II}
Assume~$\Phi$ is Lipschitz with respect to the first argument and with Lipschitz constant~$L_{\Phi} > 0$. Suppose the residual DeepONet is well trained so that the neural network architecture satisfies~\eqref{eq:nn-approx}. Then, we have
$$\|\hat{x}(t_n) - \tilde{x}(t_n)\| \le \frac{1 - L_{\Phi}^n}{1 - L_{\Phi}} \epsilon.$$
\end{lemma}
\begin{proof}
Suppose~$\Phi$ is Lipschitz and the residual DeepONet satisfies the universal approximation theorem of neural networks~\eqref{eq:nn-approx}. Then, for any $t_n \in \mathcal{P}$ and $h_n \in (0,h]$, we have
\begin{align*}
\|\hat{x}(t_{n+1}) - \tilde{x}(t_{n+1})\| &= \left \| G^{\epsilon}_{\theta^*}(\hat{x}(t_n), \tilde{y}^n_m, \lambda) + G_{\text{approx}}(\hat{x}(t_n), \tilde{y}^n_m, \lambda)(h_n) - G(\tilde{x}(t_n),\tilde{y}_m^n,\lambda)(h_n) \right \| \\
& \le \left \| G^{\epsilon}_{\theta^*}(\hat{x}(t_n), \tilde{y}^n_m, \lambda) + G_{\text{approx}}(\hat{x}(t_n), \tilde{y}^n_m, \lambda)(h_n) - G(\hat{x}(t_n),\tilde{y}_m^n,\lambda)(h_n) \right \| \\
& \qquad~\qquad~\qquad~\qquad~\qquad + \| G(\hat{x}(t_n),\tilde{y}_m^n,\lambda)(h_n) - G(\tilde{x}(t_n),\tilde{y}_m^n,\lambda)(h_n)\| \\
&\le \|G^{\epsilon}_{\theta^*}(\hat{x}(t_n), \tilde{y}^n_m, \lambda) - \varphi(\hat{x}(t_n), \tilde{y}^{n}_m, \lambda) \| \\ & \qquad~\qquad~\qquad~\qquad~\qquad + \|\Phi(\hat{x}(t_n),\tilde{y}_m^n,\lambda,h_n) - \Phi(\tilde{x}(t_n),\tilde{y}_m^n,\lambda,h_n)\| \\
&\le \epsilon + L_{\Phi} \|\hat{x}(t_n) - \tilde{x}(t_n)\|.
\end{align*}
The Lemma follows then from $\hat{x}(t_0) = \tilde{x}(t_0) = x_0$.
\end{proof}
The following theorem provides the final estimate for the cumulative error between~$x(t_n)$ obtained using the \textit{true} solution operator~$G$ and $\hat{x}(t_n)$ obtained using the proposed residual DeepONet numerical scheme (see Algorithm~\ref{alg:residual-DeepONet}).
\begin{theorem}
For any~$t_n \in \mathcal{P}$ and $h_n \in (0,h]$, we have
$$\|x(t_n) - \hat{x}(t_n)\| \le \frac{1-r^n}{1-r} \mathcal{E} + \frac{1 - L_{\Phi}^n}{1 - L_{\Phi}} \epsilon.$$
\end{theorem}
We conclude from the above theorem that the error accumulates due to (i) the input approximation and (ii) the neural network approximation error. Thus, even if the proposed residual DeepONet generalizes effectively, the final approximation may be inaccurate due to an inadequate input approximation. However, we will show empirically in the next section that for reasonable values of~$h$, the proposed DeepONet framework effectively approximates the dynamic response of power grid devices even in the extreme case when we locally approximate~$y$ using \textit{one} sensor, \emph{i.e., } $m=1$.
\section{Introduction}
The existence of magnetars -- young, isolated, high-magnetic-field neutron
stars -- is now well supported by a variety of independent lines of
evidence. For recent reviews, see \citet{wt04} or \citet{kg04a}. There
appear to be at least two flavors of magnetar: soft-gamma repeaters
(SGRs) and anomalous X-ray pulsars (AXPs). Defining properties of both
are their X-ray pulsations having luminosity in the range $10^{34} -
10^{36}$~erg~s$^{-1}$, periods ranging from 6 -- 12~s, period
derivatives of $10^{-13}-10^{-11}$, and surface dipolar magnetic fields in
the range $0.6 - 7 \times 10^{14}$~G, assuming the vacuum dipole model
formula for magnetic braking\footnote{Throughout the paper, magnetic
fields discussed are calculated via $B\equiv 3.2 \times 10^{19} \sqrt{P
\dot{P}}$~G, where $P$ is the spin period and $\dot{P}$ the period
derivative}. In the magnetar model, the pulsed X-rays are likely the
combined result of surface thermal emission \citep[e.g.][]{oze03,lh03a}, with a
non-thermal high-energy tail resulting from resonant scattering of thermal
photons off magnetospheric currents \citep{tlk02}. The X-rays, in the
magnetar model, are ultimately powered by an internally decaying very
strong magnetic field. Despite numerous attempts, no magnetars have been
detected at radio frequencies \citep{kbhc85,cjl94,llc98,gsg01}, which has been suggested
as implying that pair production ceases above some critical magnetic field
\citep{zh00}.
An open issue in the magnetar model is the connection of these X-ray
sources to radio pulsars. One might expect high-$B$ radio pulsars to be
more X-ray bright than low-$B$ sources, and possibly exhibit magnetar-like
X-ray emission. \citet{pkc00} searched for enhanced X-ray emission from
the high-$B$ ($5.5 \times 10^{13}$~G) radio pulsar PSR~J1814$-$1744, and
placed an upper limit on its X-ray luminosity that was much lower than
those of the five then-known AXPs (4U 0142+61, 1E 1048$-$9537, RXS
1708$-$4009, 1E 1841$-$045, 1E 2259+586). \citet{gklp04} showed that the
nearby radio pulsar PSR~B0154+61 ($B=2.1 \times 10^{13}$~G) has an X-ray
luminosity 2--3 orders of magnitude lower than those of the same five
AXPs. \citet{msk+03} reported on X-ray observations of PSR~J1847$-$0130
($B = 9.4 \times 10^{13}$~G), which has the highest inferred surface
dipolar magnetic field of any known radio pulsar, and calculated an upper limit
on its X-ray luminosity that was lower than those of all but
one of the above five AXPs. \citet{gs03} studied PSR~J1119$-$6127 ($B = 4.4
\times 10^{13}$~G), also finding it to be X-ray underluminous relative to
the standard AXP group.
There are several possible ways to explain these results.
There could exist a well-defined critical $B$ field above which the
magnetar mechanism abruptly turns on. However, that would also require
that $B$ fields inferred from spin-down are unreliable at the factor of
$\ifmmode\stackrel{>}{_{\sim}}\else$\stackrel{>}{_{\sim}}$\fi$2 level, given the overlap in high-$B$ radio pulsar fields and those
of the AXPs (e.g. 1E~2259+586 has $B=6 \times 10^{13}$~G). It could also
be that AXPs and SGRs have higher-order multipole moments that go
undetected in spin-down, such that their true surface fields are orders of
magnitude higher. The recently revealed strong X-ray variability seen in
some AXPs \citep[e.g.][]{ims+04,gk04} suggests that magnetar emission
could be transient in many high-$B$ neutron stars. Of course, which
neutron stars become magnetars could depend on other, currently ``hidden''
neutron-star properties besides $B$ field, such as mass.
\psr\ is a radio pulsar that was recently discovered in the Parkes
Multibeam Survey \citep{hfs+04}. It has spin period $P=3.3$~s and a
spin-down rate of $\dot{P}=1.5\times 10^{-12}$, which imply a
characteristic age $\tau_c \equiv P/2\dot{P} = 34$~kyr, spin-down
luminosity $\dot{E}\equiv 4\pi^2 I \dot{P}/P^3 = 1.6 \times
10^{33}$~erg~s$^{-1}$, and a surface dipolar magnetic field of $7.4 \times
10^{13}$~G. Its inferred magnetic field is the second highest of all known
radio pulsars and is higher than that of the well established AXP
1E~2259+586. Here we report on the first X-ray detection of this pulsar in
a deep {\it Chandra X-ray Observatory} observation of a nearby field.
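The quantities quoted above follow directly from $P$ and $\dot{P}$; a quick numerical check (Python, assuming the standard moment of inertia $I = 10^{45}$~g~cm$^2$; small differences from the quoted values reflect rounding of $P$ and $\dot{P}$) is:
\begin{verbatim}
import math

P, Pdot = 3.3, 1.5e-12                     # spin period (s) and period derivative
I = 1e45                                   # assumed moment of inertia (g cm^2)
yr = 3.156e7                               # seconds per year

tau_c = P / (2.0 * Pdot) / yr              # characteristic age (yr)
Edot = 4.0 * math.pi**2 * I * Pdot / P**3  # spin-down luminosity (erg/s)
B = 3.2e19 * math.sqrt(P * Pdot)           # surface dipole field (G)

print(f"tau_c ~ {tau_c/1e3:.0f} kyr, Edot ~ {Edot:.1e} erg/s, B ~ {B:.1e} G")
# -> tau_c ~ 35 kyr, Edot ~ 1.6e+33 erg/s, B ~ 7.1e+13 G
\end{verbatim}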
\section{Observations and Results}
The position of \psr\ was observed serendipitously by {\it Chandra} in an
ACIS-S Timed Exposure (TE) obtained on 2002 May 13. The observation (PI
P. Slane, Sequence Number 500235) had as its target the unrelated
supernova remnant G347.7+0.2. The nominal telescope pointing was $7.0'$
away from the pulsar's position derived from radio timing. As a result,
the position of \psr\ lies on Chip 6, far from the optical axis, where the
mirror point-spread-function (PSF) is significantly extended and distorted
asymmetrically.
We obtained the public data set using the {\it Chandra} Science Center's
{\it WebChaser} facility, and reduced the data with the CIAO software
package (version 3.1), with calibration database CALDB version 2.28. After
standard filtering using CIAO threads for ACIS-S
data\footnote{http://asc.harvard.edu/ciao/threads/index.html}, the
effective integration time was 55.7~ks.
\subsection{Imaging}
The X-ray emission as seen by {\it Chandra} around the radio position
of \psr\ is shown in Figure~\ref{fig:image}. The source is identified
with CIAO's {\tt celldetect} routine as having a signal-to-noise ratio
of 6.4, for events in the energy range 0.5--3.0~keV. No source is
apparent in images made with events having energies $>3.0$~keV.
Although the source appears extended (Fig.~\ref{fig:image}), given its
large off-axis angle, its extent both in size and morphology, including
the angle of asymmetry, is consistent with the instrumental PSF at
1.5~keV, as determined using the CIAO {\tt mkpsf} routine. Indeed,
using counts in the range 0.5--3.0~keV, {\tt celldetect} run with
default parameters reports a ratio of source to PSF size of 1.01.
Given that the approximate 95\% encircled energy radius for an object
$7.0'$ off axis is $\sim 7''$, we cannot rule out the presence of faint
emission having extent significantly smaller than this. However, as
argued below, the spectrum strongly favors the emission originating
from a point source.
The {\tt celldetect} routine reports a best-fit position for the X-ray
source of (J2000) RA = 17$^{\rm h}$18$^{\rm m}$9$^{\rm s}$.84$\pm$0.02,
DEC = $-37^{\circ}$18$'$51$''$.6$\pm$0.2. These (1$\sigma$) uncertainties are
statistical, and do not include the systematic uncertainty in {\it
Chandra}'s pointing. Note that for sources that are within $3'$ of the
aimpoint, the 90\% uncertainty circle of {\it Chandra}'s absolute pointing
has radius\footnote{http://cxc.harvard.edu/cal/ASPECT/celmon/} 0.6$''$.
For sources, like ours, that are further off-axis, the absolute pointing
uncertainty has not been well determined. This is an important caveat.
A timing analysis of the radio data \citep[see][for a description of the
data and its analysis]{hfs+04} yields a radio timing position of (J2000)
RA = 17$^{\rm h}$18$^{\rm m}$10$^{\rm s}$.162$\pm$0.194, DEC =
$-37^{\circ}$18$'$53$''$.75$\pm$10.0, where the quoted errors are formal
$2\sigma$ uncertainties as reported by {\tt TEMPO}, and 10 months of
additional timing data have been included since the most recently
published result. Doubling the formal {\tt TEMPO} uncertainties when
reporting timing parameter errors is standard practice and is done to account
for likely contamination from timing noise. Indeed, like most young
pulsars, \psr\ exhibits significant timing noise (RMS 74~ms after fitting
for position, $P$ and $\dot{P}$), so the above-quoted
uncertainties are likely to be good approximations to the true $1\sigma$
uncertainties. The formal positional offset in declination is therefore
$2.2''$, or $\sim 0.2\sigma$, while the RA offset is 0$^{\rm
s}$.32, or $\sim 1.6 \sigma$. Note that these numbers do {\it not}
include the unknown {\it Chandra} pointing uncertainty so are lower limits
only. We conclude that the source positions are consistent within the
uncertainties.
However, given the slight possible positional offset, as well as the
absence of unambiguous proof of the association via the detection of X-ray
pulsations at the radio period (not possible with the ACIS-S data because
it has effective time resolution of 3.2~s), it is reasonable to question
if the X-ray source is really associated with the radio pulsar. We can
estimate the probability of chance superposition using a log~N/log~S
relationship for {\it Chandra} sources in the 0.5--2.0~keV band,
appropriate for this source \citep{glv+03}. In this relation, flux is the
unabsorbed value; thus the probability of an X-ray source being near the
pulsar position purely by chance depends strongly on the former's spectral
parameters. As we show below, given only 110 source counts, these
parameters are not well determined. However, even for the lowest
plausible unabsorbed source flux for our source,
the log~N/log~S relation
predicts $\sim$180 sources per square degree. With timing noise so strong
in this pulsar, we would likely consider positional agreement within
$\sim$10$''$ to be a plausible association. In this case, the probability
of a random source in this area of sky is only 1\%. That the offset is
smaller than 10$''$, as well as that the unabsorbed flux is likely
significantly larger than the lowest reasonable value (see below) make
this 1\% probability likely to be a large overestimate. Thus, the
association appears extremely likely. We further note that the nearest
optical counterpart in the uncalibrated plates of the Sloan Digital Sky
Survey \citep{pmh+03}, with a limiting magnitude of $\sim$ 22, is more
than 20$''$ away, well outside of our {\it Chandra} error radius.
\subsection{Spectroscopy}
Counts from the pulsar were extracted using an elliptical extraction region
having semi-major and semi-minor axes of 26 and 18 pixels (13$''$ and 9$''$), respectively,
rotated to angle 308$^{\circ}$ west of north. A nearby, non-overlapping
source-free region having the same elliptical shape and orientation, but
with semi-major and semi-minor axes 40 and 32 pixels (20$''$ and 16$''$), respectively, was
used to estimate the background. The total number of source counts after
background subtraction was 110, implying a count rate of
0.00197$\pm$0.00019~cps.
RMF and ARF files were generated for the source and background using the
CIAO script {\tt psextract}, and spectra grouped by a factor of 8 were fed
into the spectral fitting package {\it XSPEC} (version 11.3.1). Spectral
channels having energies below 0.5~keV and above 3.0~keV were ignored.
The data were well described by an absorbed black-body model; best-fit model
parameters are given in Table~\ref{ta:spectrum}, and the spectrum and
best-fit model with residuals are shown in Figure~\ref{fig:spectrum}.
Although a power-law model yielded a statistically acceptable fit, the
best-fit power-law index was $\sim$8--9, rendering such a model
implausible. This is consistent with the absence of counts above
$\sim$2~keV. Fitting for multi-component models was unreasonable due to
the small number of counts available. However, it is clear that the emission
is dominantly thermal in origin. This argues strongly against our
having detected any
nebular component, as this should have a harder spectrum that
is well characterized by a power-law model with photon index in the range
$\sim$1--3 \citep[see][and references therein]{krh04}.
The absorbed flux of the source in the 0.5--2.0~keV range is
(6.3--6.9)$\times 10^{-15}$~erg~s$^{-1}$~cm$^{-2}$, where the range quoted
corresponds to that implied by the 68\% limits of $N_h$ and $kT$. Thus,
the quoted flux range is an approximate but slightly overestimated 68\%
confidence range. With only 110 source counts, {\tt XSPEC} is unable to
more precisely constrain the true 68\% confidence range for the flux while
simultaneously fitting for $N_h$ and $kT$. The low end of the flux range
corresponds to higher values of $N_h$ and lower values of $kT$; the high
end corresponds to the reverse. The unabsorbed 0.5--2.0~keV flux is
therefore relatively poorly constrained, ranging from $\sim 7 \times
10^{-14}$~erg~s$^{-1}$~cm$^{-2}$ for the high $kT$ end, to $\sim 2 \times
10^{-12}$~erg~s$^{-1}$~cm$^{-2}$ for the low $kT$ end. We note that the
maximum $N_h$ in this direction is $1.81 \times
10^{22}$~cm$^{-2}$, significantly lower than our upper 68\% confidence
limit \citep{dl90}. This suggests that models
having lower values of $N_h$, and hence higher values of $kT$ and low
values of unabsorbed flux, are slightly favored.
\section{Discussion}
The dispersion measure toward the pulsar of 373~pc~cm$^{-3}$ implies a
distance of 4.0 -- 5.0~kpc \citep{cl01}. We assume here a distance of
4.5~kpc. Dispersion-measure distances are notoriously uncertain and an
independent distance estimate is obviously desirable. We do note that the
\citet{tc93} distance estimate for \psr\ is 5.1~kpc, close to that
obtained with the more recent \citet{cl01} model, suggesting our
assumption of 4.5~kpc is not grossly incorrect.
Given the spectrum of the detected X-rays, the emission seems most likely
to be coming from the neutron-star surface. Thermal emission from the
surface can either be from initial cooling, in which case X-rays are
emitted from the entire surface, or from heated polar caps, a by-product
of a higher-energy magnetospheric process \citep[see][for a
review]{krh04}. In the former case, the X-ray energy source is unrelated
to the pulsar's spin-down. In the latter case, the spin-down powers it.
Given the observed spectrum and flux of the X-ray source we detect, we may
ask which of these two mechanisms most likely accounts for the emission.
First, we consider the high-temperature range of parameter space, $kT
\simeq 0.2$~keV. In this case, the unabsorbed flux, given the distance,
requires a source emitting radius of $\sim$1~km. This suggests heated
polar caps, in which case the emission could be strongly pulsed. The
implied bolometric isotropic luminosity would be $2.5 \times
10^{32}$~erg~s$^{-1}$, or 0.16$\dot{E}$. This is uncomfortably high for
polar-cap reheating models \citep{hm01a}. Assuming 1.0~sr beaming, the
efficiency drops to 0.013, still implausibly high for a pulsar having
characteristic age 34~kyr \citep{hm01a}. At the low-temperature range of
parameter space, we have $kT \simeq 0.12$~keV. In this case, for the
observed unabsorbed flux at 4.5~kpc, an effective emitting radius of 22~km
is required, too high for a neutron star, even after correcting for the
gravitational distortion \citep{lp01}. Thus, it seems likely on physical
grounds that even though Table~1 quotes 68\% confidence levels only, the
true spectral parameters are indeed bracketed in this range.
For example, for $kT \simeq 0.13$~keV (corresponding to $N_h \simeq 2
\times 10^{22}$~cm$^{-2}$), the observations can be accounted for if the
effective measured neutron-star radius is $\sim$13~km. In this case, the
unabsorbed bolometric luminosity would be $L_x \simeq 6 \times
10^{33}$~erg~s$^{-1}$ (corresponding to $L_x\simeq 9 \times 10^{29}$~erg~s$^{-1}$ in
the 2--10 keV band), or 4$\dot{E}$. This, to our knowledge, would be the
first case of a radio pulsar having initial cooling emission that
has X-ray luminosity comparable to or greater than its $\dot{E}$. Given
that initial cooling is thought to be unrelated to spin-down, this is not
necessarily surprising. More relevant is whether the effective
temperature is plausible for initial cooling. For commonly assumed
neutron-star equations of state and modified URCA cooling with no exotica,
a temperature as high as 0.13~keV at an age of 34~kyr is reasonable if the
neutron star has accreted a $\sim 10^{-7}$~M$_{\odot}$ hydrogen envelope
\citep{yp04}. In this case, however, because of the hydrogen envelope's
effect on the outgoing radiation, a black-body model as assumed here would
be overestimating the true effective temperature by as much as a factor of
$\sim$2 \citep[see, e.g.][]{pzs+01}. Thus, the true effective temperature
may be much smaller than 0.13~keV, very much in line with predictions for
initial cooling of a neutron star of this age.
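The luminosities quoted in this discussion follow from the black-body relation $L = 4\pi R^2 \sigma T^4$; a rough numerical check of the two ends of the allowed range (Python; the slight differences from the quoted numbers come from rounding the temperatures and radii) is:
\begin{verbatim}
import math

sigma = 5.67e-5       # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)
keV_per_K = 8.617e-8  # Boltzmann constant in keV/K

def bolometric_luminosity(kT_keV, R_km):
    T = kT_keV / keV_per_K            # temperature in K
    R = R_km * 1e5                    # radius in cm
    return 4.0 * math.pi * R**2 * sigma * T**4

print(f"{bolometric_luminosity(0.20, 1.0):.1e}")   # hot polar cap:  ~2e32 erg/s
print(f"{bolometric_luminosity(0.13, 13.0):.1e}")  # full surface:   ~6e33 erg/s
\end{verbatim}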
Even if $L_x > \dot{E}$, as seems likely in the case of \psr, $L_x$
in the 2--10~keV band is $\ifmmode\stackrel{>}{_{\sim}}\else$\stackrel{>}{_{\sim}}$\fi$3 orders of magnitude smaller than
is observed for the five traditionally studied AXPs \citep[see, e.g.,
Table~2 in][]{msk+03}. Its spectral properties are also quite different
from those of the AXPs. This is consistent with the findings for other
high-$B$ radio pulsars \citep{pkc00,gs03,msk+03,gklp04}. With X-ray
observations of five high-magnetic-field radio pulsars revealing luminosities
much smaller than those of the AXPs, it is becoming more difficult
to appeal to small scatter in the true $B$ fields relative to those
inferred from spin-down. Thus, it seems very plausible that the $B$
fields inferred from spin-down for AXPs and high-$B$ radio pulsars are
not reliable estimators of the true surface field, at least to within a
factor of $\sim$2. Alternatively there could be a ``hidden'' parameter,
such as mass, that differentiates between the two populations.
Intriguingly, however, \psr's X-ray luminosity is comparable to that of
the recently identified transient AXP XTE~J1810$-$197 when in quiescence
\citep{ims+04,ghbb04}. Moreover, the quiescent spectrum of
XTE~J1810$-$197 as observed in a serendipitous {\it ROSAT} observation
\citep{ghbb04} is comparable to that seen for \psr, i.e. well described by
a simple absorbed black body of temperature $kT \simeq 0.18$~keV. This
raises the interesting possibility that \psr, and other high-$B$ radio
pulsars, may one day emit transient magnetar-like emission, and conversely
that the transient AXPs might be more likely to exhibit radio pulsations.
Both these possibilities can be tested observationally.
We thank Pat Slane for providing early access to his data set. We thank
Josh Grindlay and Jae Sub Hong for access to their {\it Chandra}
log~N/log~S relationship, and Alice Harding for useful conversations. VMK
was supported by an NSERC Discovery Grant and Steacie Fellowship
Supplement, and by the FQRNT and CIAR.
\newpage
\section{Introduction}
\label{sec:intro}
Trees are connected partial orderings where every element has a linearly ordered set of predecessors (smaller elements). They are ubiquitous structures, naturally arising in a wide variety of contexts in mathematics, computer science, game and decision theory, linguistics, philosophy, etc.; some applications in these fields are mentioned in \cite{GorankoKellermanZanardo2021a}.
This work is a contribution to the general structural theory of trees, and a follow-up to
\cite{GorankoKellermanZanardo2021a} which initiated that study.
We emphasize that in our definition and study of trees we do not assume well-foundedness, which is a standard assumption in the prevailing tradition of set-theoretic studies of trees, generalizing and extending the theory of ordinals, cf.~e.g.~\cite{Jech}, \cite{Todorcevic}. We only point out here that the well-foundedness assumption makes a very substantial difference, both in the general theory and in the particular properties of trees, and without that assumption, the study of trees remains mostly order-theoretic and extends in a quite non-trivial way the theory of linear orderings comprehensively explored in \cite{Rosenstein}.
Furthermore, whereas our general theory covers both finite and infinite trees, the notions of completeness that are considered in this paper become nontrivial only in the case of infinite trees.
In the present paper we define and explore several natural notions of \emph{completeness of trees},
intuitively stating that there are no `gaps' or `missing nodes', in one or another sense.
These notions of completeness of trees naturally extend Dedekind completeness of linear orders and Dedekind-MacNeille completions of partial orders which result in complete lattices.
These are important and well-studied constructions, but there is limited literature on their application specifically to trees, where there are some essential subtleties. Dedekind completeness of linear orders can be equivalently defined in terms of the existence of suprema of all non-empty sets that are bounded above, and in terms of the existence of infima of all non-empty sets that are bounded below. However, in the case of trees these two characterisations differ substantially. Indeed, in a tree, infima of linearly ordered sets of nodes that are bounded below are unique whenever they exist, while a linearly ordered set of nodes that is bounded above may have suprema on some paths in the tree and not on others, and thus may have several suprema (or none at all) in that tree. This leads to a variety of notions of completeness of trees, generally of the following type: consider a (natural) family $\mathcal{F}$ of sets of nodes in the tree, and say that the tree is \emph{$\mathcal{F}$-complete} if every set of nodes in the tree that is bounded below and belongs to $\mathcal{F}$ has an infimum. This generic notion, applied to the family of all subsets, defines a standard generalisation of Dedekind completeness of trees, variations of which have been defined and studied e.g. in \cite{Droste85}, \cite{Rubin93}, \cite{Warren}, and \cite{Barham}. Moreover, the generic notion of $\mathcal{F}$-completeness also makes very good sense when applied to other important families of sets of nodes, such as paths (leading to `pathwise completeness'), antichains (leading to `antichain completeness'), pairs of nodes (leading to `branching completeness'), etc.
\smallskip
\textbf{Contributions of the paper.}
Here we study the notions of absolute and relativised completeness that are mentioned above and that are, in our view, the most natural and important. We show that they are generally different, yet related to each other, and that they can be characterised in a fairly uniform way in terms of suprema and infima of downward closed and upward closed parts of paths.
We then define and study, in a relatively uniform way, several generic constructions of \emph{tree completions}, which extend any tree to a minimal one satisfying the respective completeness property. Each of these transforms any tree, in a `minimal canonical way', into a tree that is complete in the respective sense. In particular, we present alternative constructions for producing trees that are equivalent to the Dedekind-MacNeille completion of \cite{Warren} (applied there to the wider class of `cycle-free partial orders') and the `ramification completion' of \cite{Barham}.
Completions of trees have various applications to the general theory of trees which go beyond the scope of the present paper; we only mention here that both of these completions are used when axiomatising the first-order theories, and other logical theories, of some important classes of trees, and when proving the completeness of such axiomatisations, cf. \cite{GorankoKellerman2021}.
\smallskip
\textbf{Structure of the paper.}
First, we provide the necessary terminology and notation in Section \ref{sec:prelim}. Then
we define, compare and characterise several notions of completeness in Sections \ref{sec:completeness} and \ref{sec:characterisations}. In Section \ref{sec:tree-completions} we introduce several constructions of tree completions that correspond to the respective completeness properties defined in Section \ref{sec:completeness}. We end with concluding remarks and suggestions for further study in Section \ref{sec:concluding}.
\section{Preliminaries}
\label{sec:prelim}
We define here some basic notions on trees, in order to fix notation and terminology. The reader may also consult \cite{Jech}, \cite{KellermanThesis}, and \cite{Kellerman2018} for further details.
The order types of the linear orders $\left(\mathbb{N};<\right)$, $\left(\mathbb{Z};<\right)$, $\left(\mathbb{Q};<\right)$ and $\left(\mathbb{R};<\right)$, where in each instance $<$ denotes the usual ordering of that set, will be denoted as $\omega$, $\zeta$, $\eta$ and $\lambda$ respectively.
An ordered set $\left(A;<\right)$, {with a strict partial ordering $<$},
is \defstyle{downward-linear} if for every $x \in A$ the set $\{ y \in A : y < x \}$ is linear; it
is \defstyle{downward-connected} if, for every $x,y \in A$, there exists $z \in A$ such that $z \leqslant x$ and $z \leqslant y$. A \defstyle{forest} is a downward-linear partial order. A \defstyle{tree} is a downward-connected forest\footnote{Note that we do not assume well-foundedness of trees, nor even existence of a root.}.
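As a purely illustrative aside (not part of the formal development), the two defining conditions can be checked mechanically on small finite examples. The following Python sketch assumes, as an ad hoc convention used in all such sketches below, that a finite strict order is given as an explicit set of pairs $(x,y)$, read as $x<y$, which is irreflexive and transitively closed; the function names are likewise ad hoc. It simply rephrases downward-linearity and downward-connectedness.
\begin{verbatim}
# Illustrative sketch: the forest/tree axioms on a finite strict order.
# The order is a set of pairs (x, y) meaning x < y, assumed to be
# irreflexive and transitively closed.

def is_chain(nodes, lt):
    """True if the given nodes are pairwise comparable."""
    ns = list(nodes)
    return all(a == b or (a, b) in lt or (b, a) in lt
               for a in ns for b in ns)

def is_forest(T, lt):
    """Downward-linear: the strict predecessors of each node form a chain."""
    return all(is_chain({y for y in T if (y, x) in lt}, lt) for x in T)

def is_tree(T, lt):
    """A tree is a downward-connected forest: any two nodes have a
    common lower bound."""
    def leq(a, b):
        return a == b or (a, b) in lt
    down_conn = all(any(leq(z, x) and leq(z, y) for z in T)
                    for x in T for y in T)
    return is_forest(T, lt) and down_conn

# Example: a root r with two incomparable children a and b.
T = {"r", "a", "b"}
lt = {("r", "a"), ("r", "b")}
assert is_tree(T, lt)
\end{verbatim}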
A \defstyle{subtree}
of a forest
$\mathfrak{F} = \left(F;<\right)$
is any substructure $\ensuremath{\mathfrak{T}} = \left(T;<^T\right)$ of $\mathfrak{F}$ which is a tree, i.e., where $T$ is a non-empty downward-connected subset of $F$ and $<^T$ is the restriction of $<$ to $T$.
The elements of a tree $\left(T;<\right)$ are called \defstyle{nodes} or \defstyle{points}.
If a tree has a $<$-minimal node, then it is unique (by downward-connectedness) and is called the \defstyle{root} of the tree. The $<$-maximal nodes (if any exist) are called \defstyle{leaves} of the tree.
We will define various notions and notation in terms of an arbitrarily fixed tree $\ensuremath{\mathfrak{T}} = (T; <)$.
First, for any nodes $t, u \in T$ we define $t \smile u$ to mean that $t < u$ or $t = u$ or $u < t$. If this holds, we say that $t$ and $u$ are \defstyle{comparable} nodes.
If $t < u$,
the intervals $(t,u)$, $(t,u]$, $[t,u)$ and $[t,u]$ are defined as usual.
For instance, if $t < u$ then $(t,u] := \{ x \in T : t < x \leqslant u \}$, etc.
We also define the sets $\tlx{T}{t} := \{ x \in T : x < t\}$, $\tleq{T}{t} := \{ x \in T : x \leqslant t \}$, $\tg{T}{t} := \{ x \in T : t < x \}$ and $\tgeq{T}{t} := \{ x \in T : t \leqslant x \}$.
We will use analogous notation for the respective substructures (as partial orderings) of the tree $\ensuremath{\mathfrak{T}}$ over these sets, for instance, $\tlx{\ensuremath{\mathfrak{T}}}{t}$ denotes $\left(\tlx{T}{t};<\upharpoonright_{\tlx{T}{t}}\right)$, etc.
For non-empty subsets $A, B \subseteq T$ we define $A < B$ (resp. $A \leqslant B$, $A > B$, $A \geqslant B$) iff $x<y$ (resp. $x \leqslant y$, $x > y$, $x \geqslant y$) for all $x \in A$ and $y \in B$. Instead of $\{x\} < B$ we will also write $x < B$, and similarly for other relations and singleton sets.
Then, we define the set $\tlx{T}{A} := \{ x \in T : x < A \}$ and likewise the sets $\tleq{T}{A}$, $\tg{T}{A}$ and $\tgeq{T}{A}$. The substructures of $\mathfrak{T}$ that have these sets as their underlying sets will be denoted as $\mathfrak{T}^{<A}$, $\mathfrak{T}^{\leqslant A}$, $\mathfrak{T}^{>A}$ and $\mathfrak{T}^{\geqslant A}$ respectively.
More generally, given any subset $A$ of $T$, $\mathfrak{T}^A$ will denote the structure $\left(A;<\upharpoonright_{A}\right)$.
Note that, for any $A \not=\emptyset$, $\tlx{\ensuremath{\mathfrak{T}}}{A}$ and $\tleq{\ensuremath{\mathfrak{T}}}{A}$ are linear orders and that $\tg{T}{A}$ and $\tgeq{T}{A}$ are empty when $A$ is not linearly ordered. If $A$ is linearly ordered then $\tg{\ensuremath{\mathfrak{T}}}{A}$ and $\tgeq{\ensuremath{\mathfrak{T}}}{A}$, if non-empty, are forests, while for every node $t$, $\tgeq{\ensuremath{\mathfrak{T}}}{t}$ is a tree that is rooted at $t$.
For ease of readability, the sets $T^{\leqslant A}$ and $T^{\geqslant A}$ of lower bounds and upper bounds of $A$ will sometimes be denoted as $L(A)$ and $U(A)$ respectively. If $A = \left\{x_1,x_2,\ldots,x_k\right\}$ then $L\left(A\right)$ and $U\left(A\right)$ will be written simply as $L\left(x_1,x_2,\ldots,x_k\right)$ and $U\left(x_1,x_2,\ldots,x_k\right)$.
A linearly ordered set of nodes in a tree is called a \defstyle{chain}.
A maximal chain is called a \defstyle{path}.
A set of nodes $\mathsf{A}$ is \defstyle{downward-closed} if $z \in \mathsf{A}$ whenever $y \in \mathsf{A}$ and $z < y$; respectively, $\mathsf{A}$ is \defstyle{upward-closed} if $z \in \mathsf{A}$ whenever $y \in \mathsf{A}$ and $y < z$.
A non-empty downward-closed linearly ordered set of nodes that is bounded above is called a \defstyle{stem}.
A non-empty subset $\mathsf{B}$ of a path $\mathsf{A}$ is called a \defstyle{branch} when it is bounded below and upward-closed within $\mathsf{A}$ (i.e.~if $x \in \mathsf{B}$ and $y \in \mathsf{A}$ with $x < y$ then $y \in \mathsf{B}$). Clearly if $\mathsf{A}$ is a path with $\mathsf{A} = \mathsf{B} \cup \mathsf{C}$, where $\mathsf{B}$ and $\mathsf{C}$ are disjoint, then $\mathsf{B}$ is a stem if and only if $\mathsf{C}$ is a branch. Note that every two distinct paths in a tree intersect in a stem.
The set of paths containing the node $t$ (resp. the stem $\mathsf{S}$) will be denoted by $\mathcal{P}_{t}$ (resp. $\mathcal{P}_{\mathsf{S}})$.
A set of nodes $\mathsf{A}$ is called \defstyle{convex} if $z \in \mathsf{A}$ whenever $x,y \in \mathsf{A}$ and $x < z < y$.
A convex linearly ordered set of nodes is called a \defstyle{segment}.
A \defstyle{bridge} is a non-empty segment $\mathsf{A}$ such that, for every path $\mathsf{P}$, either $\mathsf{A} \subseteq \mathsf{P}$ or $\mathsf{A} \cap \mathsf{P}$ is empty. Note that every singleton set of nodes $\{ t \}$ is a bridge.
For each node $t$ in $\ensuremath{\mathfrak{T}}$, the maximal bridge in $\ensuremath{\mathfrak{T}}$ containing $t$ will be denoted by $\bridge{t}$. A tree in which all maximal bridges are singletons is called a \defstyle{condensed tree}. As shown in \cite{GorankoKellermanZanardo2021a}, the relation between nodes of belonging to the same maximal bridge is an equivalence relation, and the corresponding partition of the tree $\ensuremath{\mathfrak{T}}$ into a family of maximal bridges defines a condensed tree, $\bridge{\ensuremath{\mathfrak{T}}}$, called the \defstyle{condensation} of $\ensuremath{\mathfrak{T}}$. Moreover, a tree is condensed if and only if it is isomorphic to its condensation. Further details on properties of tree condensations and condensed trees can be found in \cite{GorankoKellermanZanardo2021a}.
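As another illustrative aside, on a finite tree the maximal bridges, and hence the condensation, can be computed directly, using the fact that two nodes belong to the same maximal bridge exactly when the same paths pass through them. The Python sketch below uses the same ad hoc conventions as before (the order is an irreflexive, transitively closed set of pairs) and the fact that every path of a finite tree is the down-set of a leaf.
\begin{verbatim}
# Illustrative sketch: maximal bridges and the condensation of a small
# finite tree.  Pairs (x, y) mean x < y (irreflexive, transitively
# closed); every path of a finite tree is the down-set of a leaf.

def paths(T, lt):
    leaves = [x for x in T if not any((x, y) in lt for y in T)]
    return [frozenset({x} | {y for y in T if (y, x) in lt})
            for x in leaves]

def condensation(T, lt):
    """Partition T into maximal bridges: nodes lying on exactly the
    same set of paths form one maximal bridge."""
    ps = paths(T, lt)
    classes = {}
    for t in T:
        key = frozenset(i for i, p in enumerate(ps) if t in p)
        classes.setdefault(key, set()).add(t)
    return list(classes.values())

# Example: r < s, and s branches into a and b.  The maximal bridges are
# {r, s}, {a} and {b}, so the condensation has three nodes.
T  = {"r", "s", "a", "b"}
lt = {("r", "s"), ("r", "a"), ("r", "b"), ("s", "a"), ("s", "b")}
print(condensation(T, lt))
\end{verbatim}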
A set $X$ of nodes is an \defstyle{antichain} if $x \not\smile y$ for all distinct $x$ and $y$ in $X$. Note that the intersection of an antichain $X$ and a linearly ordered set of nodes $Y$ is either a singleton or the empty set. The second alternative is excluded when $X$ is a maximal (by inclusion) antichain and $Y$ is a path.
A \defstyle{lower (respectively, upper) bound} for a set of nodes $X$ in $\mathfrak{T}$ is an element $b$ such that $b \leqslant x$ (respectively, $b \geqslant x$) for every $x \in X$.
An \defstyle{infimum (respectively, supremum)} of $X$ is a greatest lower bound (respectively, least upper bound) of $X$. If it exists, the infimum (respectively, supremum) of $X$ is unique and will be denoted $\inf(X)$ (respectively, $\sup(X)$).
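For concreteness, on a finite tree these notions can be computed by brute force. The following Python sketch (an illustration only, under the same ad hoc conventions: the order is an irreflexive, transitively closed set of pairs) computes $L(X)$, $U(X)$, infima and suprema; the example at the end exhibits a pair of incomparable nodes with an infimum but no supremum.
\begin{verbatim}
# Illustrative sketch: bounds, infima and suprema of a set of nodes in
# a finite tree.  Pairs (x, y) mean x < y, irreflexive and transitively
# closed.

def leq(x, y, lt):
    return x == y or (x, y) in lt

def lower_bounds(X, T, lt):          # the set L(X)
    return {b for b in T if all(leq(b, x, lt) for x in X)}

def upper_bounds(X, T, lt):          # the set U(X)
    return {b for b in T if all(leq(x, b, lt) for x in X)}

def infimum(X, T, lt):
    """Greatest lower bound of X, or None if it does not exist."""
    L = lower_bounds(X, T, lt)
    greatest = [b for b in L if all(leq(c, b, lt) for c in L)]
    return greatest[0] if greatest else None

def supremum(X, T, lt):
    """Least upper bound of X, or None if it does not exist."""
    U = upper_bounds(X, T, lt)
    least = [b for b in U if all(leq(b, c, lt) for c in U)]
    return least[0] if least else None

# Example: in the tree with r < a and r < b (a, b incomparable),
# inf{a, b} = r, while sup{a, b} does not exist.
T, lt = {"r", "a", "b"}, {("r", "a"), ("r", "b")}
assert infimum({"a", "b"}, T, lt) == "r"
assert supremum({"a", "b"}, T, lt) is None
\end{verbatim}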
A linear order $\left(W;<\right)$ is \defstyle{Dedekind complete} when every non-empty subset in $W$ that is bounded below, has an infimum. Equivalently (cf.~\cite{Rosenstein}) $\left(W;<\right)$ is Dedekind complete when every non-empty subset in $W$ that is bounded above, has a supremum. Examples of Dedekind complete linear orders are $\left(\mathbb{N};<\right)$, $\left(\mathbb{Z};<\right)$, and $\left(\mathbb{R};<\right)$; a non-example is $\left(\mathbb{Q};<\right)$.
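To illustrate the failure of Dedekind completeness with a standard example: in $\left(\mathbb{Q};<\right)$ the set
\[
X \ = \ \{\, q \in \mathbb{Q} : q \leqslant 0 \ \text{ or } \ q^{2} < 2 \,\}
\]
is non-empty and bounded above but has no supremum. Indeed, since $\sqrt{2}$ is irrational and the rationals are dense, every rational upper bound $b$ of $X$ satisfies $b > \sqrt{2}$, and then $(b^{2}+2)/(2b)$ is a strictly smaller rational upper bound of $X$, because $\left((b^{2}+2)/(2b)\right)^{2} - 2 = (b^{2}-2)^{2}/(4b^{2}) > 0$.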
\section{Notions of completeness of trees}
\label{sec:completeness}
\subsection{Completeness properties and inter-dependence results}
We define here several natural notions of completeness in a tree. Note that the two equivalent characterisations of Dedekind complete linear orders do not transfer as equivalent properties over trees, because a linearly ordered set of nodes in a tree which is bounded above may have suprema on some paths and not on others, and thus it may have several (or none at all) suprema in the tree.
So a variety of completeness properties emerge here.
For some of the definitions we need the following terminology:
\begin{itemize}
\item a node $t$ in a tree will be called a \defstyle{branching point} when $t = \inf\{u,\ndv\}$ for some incomparable nodes $u$ and $\ndv$;
\item given two distinct paths $\mathsf{P}$ and $\mathsf{Q}$ in the tree, the supremum in $\mathsf{P}$ (or in $\mathsf{Q}$) of the set $\mathsf{P} \cap \mathsf{Q}$, if it exists, will be called a \defstyle{weakly branching point}\footnote{Note that a weakly branching point may also be a true branching point, in the sense of the previous definition.}.
\end{itemize}
The examples of trees in Figures~\ref{Fig:Dependence1} -- \ref{Fig:Dependence3} below will be used throughout the paper for proving independence results. The tree in Fig.~\ref{Fig:Dependence1} consists of the linear order $\omega + \zeta$ with a single leaf node attached to each of its elements. Every node in $\omega + \zeta$ is then a branching point. In Fig.~\ref{Fig:Dependence2}, the tree consists of a copy of $\omega$ with two disjoint copies of $\omega$ appended to it. The first element of each of these two copies is a weakly branching point, but not a branching point. The tree in Fig.~\ref{Fig:Dependence3} consists of a copy of the non-positive rational numbers $\eta^{\leqslant 0}$, with two disjoint copies of the positive rational numbers $\eta^{>0}$ appended to it. The number $0$ in $\eta^{\leqslant 0}$ is both a branching point and a weakly branching point of the tree.
\begin{figure}[tb]
\hspace{-1cm}
\begin{minipage}{0.32\textwidth}
\centering
\begin{picture}(40,100)
\multiput(0,0)(0,12){6}{\circle*{3}}
\put(0,0){\line(0,1){70}}
\multiput(0,0)(0,12){6}{\line(1,1){10}}
\multiput(10,10)(0,12){6}{\circle*{3}}
\multiput(0,77)(0,7){3}{\circle{2}}
\put(-15,40){$\omega$}
%
\multiput(0,108)(0,12){5}{\circle*{3}}
\put(0,98){\line(0,1){68}}
\multiput(0,108)(0,12){5}{\line(1,1){10}}
\multiput(10,118)(0,12){5}{\circle*{3}}
\multiput(0,173)(0,7){3}{\circle{2}}
\put(-14,135){$\zeta$}
\end{picture} \hspace{0.1cm}
\caption{\label{Fig:Dependence1}}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\begin{picture}(50,100)
\put(0,0){\circle*{3}}
\put(0,0){\line(0,1){70}}
\multiput(0,77)(0,7){3}{\circle{2}}
\put(-12,45){$\omega$}
%
\put(5,98){\circle*{3}}
\put(5,98){\line(2,3){40}}
\multiput(49,164)(4,6){3}{\circle{2}}
\put(38,135){$\omega$}
%
\put(-5,98){\circle*{3}}
\put(-5,98){\line(-2,3){40}}
\multiput(-49,164)(-4,6){3}{\circle{2}}
\put(-46,135){$\omega$}
\end{picture}
\caption{\label{Fig:Dependence2}}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}{0.32\textwidth}
\centering
\hspace{1cm}
\begin{picture}(0,100)
\multiput(0,21)(0,-7){3}{\circle{2}}
\put(0,28){\line(0,1){70}}
\put(0,98){\circle*{3}}
\put(4,55){$\eta^{\leqslant 0}$}
%
\multiput(5,105)(4,6){3}{\circle{2}}
\put(17,123){\line(2,3){30}}
\multiput(51,174)(4,6){3}{\circle{2}}
\put(36,138){$\eta^{> 0}$}
%
\multiput(-5,105)(-4,6){3}{\circle{2}}
\put(-17,123){\line(-2,3){30}}
\multiput(-51,174)(-4,6){3}{\circle{2}}
\put(-56,138){$\eta^{> 0}$}
\end{picture} \hspace{1cm}
\caption{\label{Fig:Dependence3}}
\end{minipage}
\hfill
\end{figure}
We call a tree:
\begin{enumerate}
\item[\cmp{1}] \defstyle{(Dedekind) complete} when every non-empty set of nodes that is boun\-ded below, has an infimum.
\noindent {Hereafter, we will usually omit `Dedekind' and will simply write \defstyle{complete}.}
\item[\cmp{2}] \defstyle{pathwise (Dedekind) complete} when for every path $\mathsf{P}$ and each set $X \subseteq \mathsf{P}$ that is bounded below, $X$ has an infimum in $\mathsf{P}$.
\noindent {Again, hereafter, we will usually simply write \defstyle{pathwise complete}.}
\noindent Note that \cmp{2} is equivalent to:
\item[\cmp{$2^{\prime}$}]
for every path $\mathsf{P}$ and each set $X \subseteq \mathsf{P}$ that is bounded above, $X$ has a supremum in $\mathsf{P}$;
\item[\cmp{3}] \defstyle{antichain complete} when every non-empty antichain that is boun\-ded below has an infimum;
\item[\cmp{4}] \defstyle{branching complete}\footnote{In \cite{ZanardoBarcellanReynolds}, branching complete trees are called \emph{jointed} trees, in \cite{Barham} they are called \emph{ramification complete} trees, and in \cite{CourcelleDelhomme} they are called \emph{join-trees}.} when every pair of incomparable nodes (equi\-valently, every finite antichain) has an infimum;
\item[\cmp{5}] \defstyle{weakly branching complete} when for any two distinct paths $\mathsf{P}$ and $\mathsf{Q}$, the set $\mathsf{P} \cap \mathsf{Q}$ has suprema in both $\mathsf{P}$ and $\mathsf{Q}$;
\item[\cmp{6}] \defstyle{weakly branching point complete} when for each path $\mathsf{P}$ and each non-empty set $X$
of weakly branching points in $\mathsf{P}$, if $X$ is bounded below (respectively, above) then $X$ has an infimum (respectively, supremum)\footnote{Note that the two conditions, for the existence of infima and of suprema, are not equivalent. Indeed, consider a tree similar to the one in Figure~\ref{Fig:Dependence1} but with the leaves removed from the terminal $\zeta$-part of the tree. The weakly branching points of this tree are precisely the non-leaf nodes in the initial $\omega$-part of the tree. Clearly each non-empty set of weakly branching points has an infimum, while the set of all weakly branching points is bounded above but has no supremum. } in $\mathsf{P}$.
\end{enumerate}
Note that each of the properties \cmp{1}, \cmp{2}, \cmp{3} and \cmp{4} fits the following generic notion of completeness:
given a family $\mathcal{F}$ of sets of nodes in a tree $\mathfrak{T}$, we say that $\mathfrak{T}$ is \defstyle{$\mathcal{F}$-complete} if every set of nodes in $\mathfrak{T}$ that is bounded below and belongs to $\mathcal{F}$, has an infimum. In the case of Dedekind completeness, take $\mathcal{F}$ to be the collection of all sets of nodes in $\mathfrak{T}$; in the case of pathwise completeness, take $\mathcal{F}$ to be the set of all linearly ordered sets of nodes in $\mathfrak{T}$; in the case of antichain completeness, take $\mathcal{F}$ to be the set of all antichains in $\mathfrak{T}$; and in the case of branching completeness, take $\mathcal{F}$ to be the set of all pairs of incomparable nodes in $\mathfrak{T}$.
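The generic definition can again be illustrated mechanically on a small finite tree; the following Python sketch (an aside, under the same ad hoc conventions: the order is an irreflexive, transitively closed set of pairs) instantiates it with the four families just listed. Since every finite tree satisfies all of \cmp{1} to \cmp{4}, all four checks succeed here; the properties come apart only on infinite trees, as noted above.
\begin{verbatim}
# Illustrative sketch: the generic notion of F-completeness on a finite
# tree.  A family is a collection of node sets; the checker asks whether
# every member that is bounded below has an infimum.

from itertools import chain, combinations

def leq(x, y, lt):
    return x == y or (x, y) in lt

def infimum(X, T, lt):
    L = [b for b in T if all(leq(b, x, lt) for x in X)]
    greatest = [b for b in L if all(leq(c, b, lt) for c in L)]
    return greatest[0] if greatest else None

def is_F_complete(T, lt, family):
    for X in family:
        bounded_below = any(all(leq(b, x, lt) for x in X) for b in T)
        if X and bounded_below and infimum(X, T, lt) is None:
            return False
    return True

def nonempty_subsets(T):
    return [set(s) for s in chain.from_iterable(
        combinations(T, r) for r in range(1, len(T) + 1))]

T, lt = {"r", "a", "b"}, {("r", "a"), ("r", "b")}
all_sets   = nonempty_subsets(T)                               # C1
chain_sets = [X for X in all_sets                              # C2
              if all(leq(x, y, lt) or leq(y, x, lt)
                     for x in X for y in X)]
antichains = [X for X in all_sets                              # C3
              if all(x == y or not (leq(x, y, lt) or leq(y, x, lt))
                     for x in X for y in X)]
pairs      = [X for X in antichains if len(X) == 2]            # C4

for fam in (all_sets, chain_sets, antichains, pairs):
    assert is_F_complete(T, lt, fam)
\end{verbatim}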
\smallskip
Some simple observations:
\begin{itemize}
\item Every branching point is also a weakly branching point, while in branch\-ing complete trees, it also holds that every weakly branching point is a branching point.
\item
In any tree, every branching point is the greatest element of a maximal bridge, and every weakly branching point is either the greatest or the least element of a maximal bridge. The converses of these statements do not hold. For example, consider the tree that resembles the one in Figure~\ref{Fig:Dependence1} but with a single node $u$ inserted between the initial portion $\omega$ and the terminal portion $\zeta$. This node $u$ forms a maximal bridge (i.e.~$\bridge{u} = \{u\}$) and $u$ is both the least and the greatest element of that maximal bridge but $u$ is neither a branching point nor a weakly branching point.
\item In any tree, if $\mathsf{P}$ is a path and $X$ is a non-empty subset of $\mathsf{P}$ with infimum $u$ in $\mathsf{P}$, then $u$ will be the infimum of $X$ in \textit{every} path $\mathsf{Q}$ that contains $X$; in other words, infima of non-empty chains are not path specific.
\item Moreover, in any \textit{branching complete} tree, if $\mathsf{P}$ is a path and $X \subseteq \mathsf{P}$ is a non-empty set with supremum $\ndv$ in $\mathsf{P}$, then $\ndv$ will be the supremum of $X$ in \textit{every} path $\mathsf{Q}$ that contains $X$; in other words, suprema of non-empty chains in a branching complete tree are not path specific.
\end{itemize}
It follows from these observations that in \textit{branching complete} trees, \cmp{6} is equivalent to the following property:
\begin{enumerate}
\item[\cmp{7}]
every non-empty chain of branching points that is bounded below (respectively, bounded above) has an infimum (respectively, supremum).
\end{enumerate}
The following theorem summarises all dependence results that hold between the properties \cmp{1} to \cmp{7}.
\begin{theorem} \label{Thm:Dependences}
The following implications hold between the properties \cmp{1} to \cmp{7} and all implications that hold between \cmp{1} to \cmp{7} follow from these by transitivity.
\begin{enumerate}
\item \cmp{1} implies each of
\text{\cmp{2} to \cmp{7}};
\item \cmp{2} is equivalent to \cmp{$2^{\prime}$};
\item \cmp{2} implies \cmp{5} to \cmp{7};
\item \cmp{3} implies \cmp{4};
\item \cmp{4} implies \cmp{5}.
\end{enumerate}
\end{theorem}
\begin{figure}[th]
\begin{center}
\begin{picture}(300,130)
\put(72,100){\cmp{1}}
\put(0,25){\cmp{2}}
\put(60,25){\cmp{2$'$}}
\put(120,25){\cmp{3}}
\put(180,25){\cmp{4}}
\put(210,59){\cmp{5}}
\put(240,100){\cmp{6}}
\put(0,100){\cmp{7}}
%
\thicklines
\put(68,102){\vector(-1,0){42}}
\put(12,42){\vector(0,1){53}}
\put(68,93){\vector(-1,-1){53}}
\put(88,92){\vector(-1,-4){13}}
\put(95,92){\vector(2,-3){35}}
\put(102,92){\vector(3,-2){77}}
\put(102,99){\vector(3,-1){103}}
\put(102,102){\vector(1,0){130}}
\put(195,40){\vector(1,1){13}}
\put(26,28){\vector(1,0){30}}
\put(56,28){\vector(-1,0){31}}
\put(146,28){\vector(1,0){30}}
\put(12,2){\line(0,1){18}}
\put(12,2){\line(1,0){243}}
\put(255,2){\vector(0,1){90}}
\put(220,2){\vector(0,1){50}}
\end{picture}
\end{center}
\caption{The dependencies that hold between \cmp{1} to \cmp{7}.\label{Fig:Dependencies}}
\end{figure}
\begin{proof}
We first consider the relationships between the properties \cmp{1} to \cmp{6}, after which we will relate them with the auxiliary property \cmp{7}.
1. Most implications from \cmp{1} to the properties {\cmp{2} to \cmp{6}} are trivial. For \cmp{5}, consider the sets of all upper bounds of $\mathsf{P} \cap \mathsf{Q}$ in each of $\mathsf{P}$ and $\mathsf{Q}$; their infima, which exist by \cmp{1}, are the required suprema.
2. Just like the argument for linear orders: to show that \cmp{2} implies \cmp{$2^{\prime}$}, consider the set of all upper bounds of $X$ in $\mathsf{P}$ and take its infimum, and to show that \cmp{$2^{\prime}$} implies \cmp{2}, consider the set of all lower bounds of $X$ in $\mathsf{P}$ and take its supremum.
3. The implication from \cmp{2} to \cmp{5} is like that of \cmp{1} to \cmp{5}; the implication to $\cmp{6}$ is trivial.
4. \cmp{3} implies \cmp{4} is trivial.
5. Take two points, one on $\mathsf{P} \setminus \mathsf{Q}$ and the other on $\mathsf{Q} \setminus \mathsf{P}$. Their infimum is a supremum of $\mathsf{Q} \cap \mathsf{P}$ in each path.
\smallskip
Now, we show that all implications that do not follow from those listed in the proposition, do not hold.
First, consider the tree in Fig.~\ref{Fig:Dependence2}. The properties \cmp{2}, and hence \cmp{5} and \cmp{6}, hold in this tree, while none of \cmp{1}, \cmp{3} and \cmp{4} hold in it. It follows that none of the properties \cmp{2}, \cmp{5} and \cmp{6} imply any of \cmp{1}, \cmp{3} and \cmp{4}.
Next, consider the tree in Fig.~\ref{Fig:Dependence3}. This tree shows that \cmp{3} (hence also \cmp{4}) does not imply \cmp{1}, and none of \cmp{3} to \cmp{6} imply \cmp{2}.
Similar to the previous example, consider the tree that consists of a copy of the rational numbers $\eta$, with two disjoint copies of the rational numbers $\eta$ appended to it. This tree has the same structure as the one in Fig.~\ref{Fig:Dependence3} except that the weakly branching points of that tree are missing from it. The property \cmp{6} holds vacuously in this tree, while \cmp{5} does not hold in it, hence \cmp{6} does not imply \cmp{5}.
To see that \cmp{4} (hence also \cmp{5}) does not imply \cmp{6}, consider the tree in Fig.~\ref{Fig:Dependence1}, which also shows that \cmp{4} does not imply \cmp{3}.
The tree obtained from the one on Fig.~\ref{Fig:Dependence1} by removing the leaf nodes that were attached to the elements of the copy of $\zeta$ in $\omega + \zeta$, shows that \cmp{3} does not imply \cmp{6}.
This concludes the determination of all possible dependencies between the properties \cmp{1} to \cmp{6}.
\smallskip
Now, regarding the dependencies between \cmp{1} to \cmp{6} and \cmp{7} on the class of \textit{all} trees, first note that \cmp{1} and each of the equivalent statements \cmp{2} and \cmp{2'} implies \cmp{7}. Further, none of \cmp{3}, \cmp{4}, \cmp{5} and \cmp{6} imply \cmp{7}. To see that \cmp{3} does not imply \cmp{7}, consider the tree that consists of the path $\omega+\zeta$ with an extra node adjoined to the side of each node in the copy of $\omega$. This tree is antichain complete but does not satisfy \cmp{7}. That neither of \cmp{4} and \cmp{5} implies \cmp{7} can be seen from the tree in Figure~\ref{Fig:Dependence1}. To see that \cmp{6} does not imply \cmp{7}, consider the tree obtained from the one in Figure~\ref{Fig:Dependence2} by adding a leaf to the side of each node in that tree. Each node from the original tree is a branching point (hence also a weakly branching point) in this new tree. This tree satisfies \cmp{6} but not \cmp{7}, as the set of branching points that lie on the initial copy of $\omega$ has no supremum.
In the other direction, \cmp{7} does not imply any of \cmp{1} to \cmp{6}. Indeed, the tree in Figure~\ref{Fig:Dependence2} satisfies \cmp{7} but not \cmp{1}, \cmp{3} or \cmp{4}, while the tree in Figure~\ref{Fig:Dependence3} satisfies \cmp{7} but not \cmp{2}. The tree that resembles the one in Figure~\ref{Fig:Dependence3} but with the greatest node in the initial copy of $\eta^{\leqslant 0}$ removed, vacuously satisfies \cmp{7}, but it does not satisfy \cmp{5}. Finally, the tree that consists of a copy of $\omega \cdot (\zeta + \zeta)$ (i.e.~$\zeta + \zeta$ copies of $\omega$ placed end to end) with another copy of $\omega$ placed aside each copy of $\omega$ in $\omega \cdot (\zeta + \zeta)$, vacuously satisfies \cmp{7}, but does not satisfy \cmp{6}.
\end{proof}
\subsection{Rubin completeness}
\label{sec:Rubin}
A somewhat more restrictive notion of completeness is considered in \cite{Rubin93} (Def.~0.2). We say that a tree $\ensuremath{\mathfrak{T}}$ is \defstyle{Rubin complete} if
\begin{itemize}
\item[(1)] every non-empty set of nodes has an infimum;
\item[(2)] every non-empty chain has a supremum;
\item[(3)] if $\mathsf{A}$ and $\mathsf{B}$ are disjoint non-empty convex chains, then $\inf(\mathsf{A}) \not= \inf(\mathsf{B})$.
\end{itemize}
Then, Rubin complete trees are (Dedekind) complete and hence they can be placed above \cmp{1} in the hierarchy considered in this section. It can also be observed that, by (1), Rubin complete trees are rooted and, by (2), every path in them has a leaf.
\begin{proposition} A complete tree is Rubin complete if and only if:
\begin{itemize}
\item[(a)]
for every set $\{ \mathsf{P}_i : i \in I\}$ of paths, if $\bigcap_{i \in I} \mathsf{P}_i \not= \emptyset$, then this intersection has a maximum,
\item[(b)]
for any two paths $\mathsf{P} \not= \mathsf{Q}$, $\mathsf{P} \setminus \mathsf{Q}$ and $\mathsf{Q} \setminus \mathsf{P}$ have a minimum, and
\item[(c)]
every non-leaf node has an immediate successor on each path passing through it.
\end{itemize}
\end{proposition}
\begin{proof} $(\Rightarrow)$
Let $\ensuremath{\mathfrak{T}}$ be a Rubin complete tree. Consider a set $\{ \mathsf{P}_i : i \in I\}$ of paths in $\ensuremath{\mathfrak{T}}$ and assume that $\mathsf{A} = \bigcap_{i \in I} \mathsf{P}_i \not= \emptyset$. By condition (2), $\mathsf{A}$ has a supremum $t_\mathsf{A}$. If $I$ is the singleton $\{i\}$, then $\mathsf{A}$ is $\mathsf{P}_i$ and $t_\mathsf{A}$ is its leaf. If $|I| \geqslant 2$, assume for reductio that $t_\mathsf{A} \not\in \mathsf{A}$, so that $t_\mathsf{A} \not\in \mathsf{P}_i$ for some $i \in I$. This implies $t_\mathsf{A} \not\leqslant \ndv$ for every $\ndv \in \mathsf{P}_i$ (otherwise $t_\mathsf{A}$ would be comparable with every node of $\mathsf{P}_i$ and, by the maximality of $\mathsf{P}_i$, would belong to it). But $\mathsf{P}_i$ contains upper bounds of $\mathsf{A}$, and $t_\mathsf{A}$, being the least upper bound of $\mathsf{A}$, lies below each of them; a contradiction.
Let now $\mathsf{P} \not= \mathsf{Q}$ be paths in $\ensuremath{\mathfrak{T}}$. Consider the non-empty segments $\mathsf{A} = \mathsf{P} \setminus \mathsf{Q}$ and $\mathsf{B} = \mathsf{Q} \setminus \mathsf{P}$ and let $t_\mathsf{A}$ and $t_\mathsf{B}$ be their infima. Let $t$ be the maximum of $\mathsf{P} \cap \mathsf{Q}$, which exists by condition (a).
Assume for reductio that $\mathsf{A}$ has no minimum, so that $t_\mathsf{A} = t$. The node $t$ is also the infimum of the segment $\mathsf{B}' = \{t\} \cup \mathsf{Q} \setminus \mathsf{P}$ and hence $\inf(\mathsf{A}) = \inf(\mathsf{B}') = t$. But this contradicts (3) because $\mathsf{A}$ and $\mathsf{B}'$ are disjoint.
Finally, consider a non-leaf node $t$ and a path $\mathsf{P}$ passing through it. Let $\mathsf{A}$ be the set of all nodes $u \in \mathsf{P}$ such that $t < u$. Then $\mathsf{A}$ and $\mathsf{B} = \{ t \}$ are disjoint non-empty convex chains. Since $\inf(\mathsf{A}) \not= \inf(\mathsf{B}) = t$, $\inf(\mathsf{A})$ must be the immediate successor of $t$ in $\mathsf{P}$.
\smallskip
$(\Leftarrow)$ Let $\ensuremath{\mathfrak{T}}$ be a complete tree in which (a), (b), and (c) hold. Consider a non-empty chain $\mathsf{A}$ in $\ensuremath{\mathfrak{T}}$ and the set $\{ \mathsf{P}_i : i \in I\}$ of all paths containing $\mathsf{A}$. Call $\mathsf{B}$ the set $\bigcap_{i \in I} \mathsf{P}_i$ and $t_\mathsf{B}$ the maximum of $\mathsf{B}$.
Given any $\mathsf{P}_i$, the pathwise completeness of $\ensuremath{\mathfrak{T}}$ implies that $\mathsf{A}$ has a supremum $u_i$ in $\mathsf{P}_i$. We can observe now that
$t_\mathsf{B}$ is an upper bound of $\mathsf{A}$ and belongs to every $\mathsf{P}_i$. Then $u_i \leqslant t_\mathsf{B}$ for each $i$; since $u_i \leqslant t_\mathsf{B} \in \mathsf{P}_j$, the node $u_i$ is comparable with every node of $\mathsf{P}_j$ and so belongs to $\mathsf{P}_j$, for every $j$. Hence $u_i \leqslant u_j$ and $u_j \leqslant u_i$, i.e.~$u_i = u_j$, for all $i, j$. This means that $u_i$ is the supremum of $\mathsf{A}$ also in $\ensuremath{\mathfrak{T}}$.
Consider now two disjoint segments $\mathsf{A}$ and $\mathsf{B}$ and let $t_\mathsf{A}$ and $t_\mathsf{B}$ be their infima. Two cases can be distinguished.
\\
Case 1: $\mathsf{A} \not< \mathsf{B}$ and $\mathsf{B} \not< \mathsf{A}$. Then there exist two different paths $\mathsf{P}$ and $\mathsf{Q}$ containing $\mathsf{A}$ and $\mathsf{B}$, respectively. Since $\mathsf{A}$ and $\mathsf{B}$ are disjoint, one of the inclusions $\mathsf{A} \subseteq \mathsf{P} \setminus \mathsf{Q}$ and $\mathsf{B} \subseteq \mathsf{Q} \setminus \mathsf{P}$ must hold. Assume, w.l.o.g., the first one. By (b), we can consider the minimum $t$ of $\mathsf{P} \setminus \mathsf{Q}$. Then $t \leqslant t_\mathsf{A} \in \mathsf{P} \setminus \mathsf{Q}$. This implies $t_\mathsf{A}\not= t_\mathsf{B}$ because $t_\mathsf{B} \in \mathsf{Q}$.
\\
Case 2: $\mathsf{A} < \mathsf{B}$ (the case $\mathsf{B} < \mathsf{A}$ is similar). If $\mathsf{A}$ is not a singleton set, then $t_\mathsf{A} \not= t_\mathsf{B}$ trivially holds. Else, assume $\mathsf{A} = \{u\}$ and let $\mathsf{P}$ be any path containing $\mathsf{B}$, so that $u \in \mathsf{P}$. By (c), $u$ has an immediate successor $u'$ in $\mathsf{P}$. Then the claim follows by the inequalities $t_\mathsf{A} \leqslant u < u' \leqslant t_\mathsf{B}$.
\end{proof}
Observe that the strong condition (c) is needed only for dealing with the very particular case $\mathsf{A} = \{u\}< \mathsf{B}$. This can be avoided, for instance, by assuming that $\mathsf{A}$ and $\mathsf{B}$ contain incomparable nodes, or by assuming that chains are not singleton sets. In these cases, condition (c) and the last part of the proof above can be dropped.
\subsection{Weakly branching completeness and partitions}
\label{sec:partitioning}
The property of weakly branching completeness, apart from being a natural notion of completeness in trees, also becomes critical when partitioning a tree along one of its stems. Such partitions are used in \cite{GorankoKellerman2021} for the purpose of approximating trees as coloured linear orders\footnote{There the property of weakly branching completeness is simply called \textit{branching completeness} since no stronger form of branching completeness is used in that context.}.
For $\ensuremath{\mathfrak{T}}$ any tree, $\mathsf{A}$ any path in $\ensuremath{\mathfrak{T}}$ and $s \in \mathsf{A}$, we define the following sets:
\begin{eqnarray*}
\forroml{s} & := & \left\{ \begin{array}{cl}
\emptyset, & \text{when $s$ has an immediate predecessor,} \\ ~ \\
\displaystyle \left(\bigcap_{t<s}T^{>t}\right) \setminus T^{\geqslant s}, & \text{otherwise} \end{array} \right. \\
\text{and} \\
\forromu{s}{A} & := & T^{> s} \setminus \left(\,\bigcup_{t \in \mathsf{A} \cap T^{>s}} \!\!\!\! T^{\geqslant t} \right).
\end{eqnarray*}
Provided that these sets are non-empty, we define the structures $\forfral{s} := \mathfrak{T}^{\forroml{s}}$ and $\forfrau{s}{A} := \mathfrak{T}^{\forromu{s}{A}}$. In general, either or both of the sets $\forroml{s}$ and $\forromu{s}{A}$ may be empty, in which case the corresponding structure is left undefined. The structures $\forfral{s}$ and $\forfrau{s}{A}$, if defined, are forests. They will be called respectively the \defstyle{lower side-forest} of $s$, and the \defstyle{upper side-forest} of $s$ with respect to $\mathsf{A}$. Finally, the \defstyle{side-forest} of $s$ with respect to $\mathsf{A}$ is the forest $\forfra{s}{A} := \left(\mathfrak{T}^{\forrom{s}{A}};s\right)$ where
$$\forrom{s}{A} := \left\{s\right\} \cup \forroml{s} \cup \forromu{s}{A}.$$
The forests $\forfral{s}$ and $\forfrau{s}{A}$ are depicted in Fig.~\ref{Fig:Side-Forests}.
\begin{figure}
\begin{center}
\begin{picture}(0,190)
\put(0,-10){\line(0,1){195}}
\put(0,80){\circle*{3}}
\qbezier[60](5,85)(35,115)(65,145)
\qbezier[60](5,85)(20,125)(35,165)
\qbezier[25](65,145)(50,155)(35,165)
\put(53,159){$\mathfrak{F}_{u}\left(s\backslash\mathsf{A}\right)$}
\qbezier[60](5,75)(35,105)(65,135)
\qbezier[60](5,75)(45,90)(85,105)
\qbezier[25](65,135)(75,120)(85,105)
\put(78,123){$\mathfrak{F}_{l}\left(s\right)$}
\multiput(0,120)(0,10){3}{\circle*{3}}
\multiput(-4,100)(0,5){3}{\circle{2}}
\multiput(-4,150)(0,5){3}{\circle{2}}
\multiput(0,40)(0,-10){3}{\circle*{3}}
\multiput(-4,60)(0,-5){3}{\circle{2}}
\multiput(-4,10)(0,-5){3}{\circle{2}}
\put(-10,77){$s$}
\put(-10,180){$\mathsf{A}$}
\end{picture}
\caption{The side-forests $\forfral{s}$ and $\forfrau{s}{A}$.\label{Fig:Side-Forests}}
\end{center}
\end{figure}
Intuitively, the upper side-forest of $s$ with respect to $\mathsf{A}$ consists of those nodes that sit above $s$ but do not sit above any nodes on $\mathsf{A} \cap T^{>s}$, and the lower side-forest of $s$ consists of those nodes that sit above $T^{<s}$ but are incomparable with $s$, unless $s$ has an immediate predecessor, in which case its lower side-forest is left empty so as not to coincide with the upper side-forest of that predecessor.
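As a computational aside, the definition of the upper side-forest can be evaluated literally on a finite tree; in a finite tree every non-root node has an immediate predecessor, so the lower side-forests are empty there and are omitted from the sketch. The Python code below again uses the ad hoc convention that the order is an irreflexive, transitively closed set of pairs.
\begin{verbatim}
# Illustrative sketch: the upper side-forest of a node s with respect
# to a path A, computed literally from its definition on a finite tree.
# Pairs (x, y) mean x < y; the order is irreflexive and transitively
# closed.  (Lower side-forests are empty on finite trees and omitted.)

def upper_side_forest(s, A, T, lt):
    above_s = {x for x in T if (s, x) in lt}                   # T^{>s}
    covered = set()
    for t in A:
        if (s, t) in lt:                                       # t in A, t > s
            covered |= {x for x in T
                        if x == t or (t, x) in lt}             # T^{>=t}
    return above_s - covered

# Example: r < s, and s branches into a and b; take the path A = {r, s, a}.
T  = {"r", "s", "a", "b"}
lt = {("r", "s"), ("r", "a"), ("r", "b"), ("s", "a"), ("s", "b")}
print(upper_side_forest("s", {"r", "s", "a"}, T, lt))   # {'b'}
\end{verbatim}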
The following result is adapted from \cite{GorankoKellerman2021}.
\begin{proposition}
Let $\mathfrak{T}$ be a weakly branching complete tree and let $\mathsf{A}$ be a path in $\ensuremath{\mathfrak{T}}$. The set $\displaystyle \left\{ \forrom{s}{A} \right\}_{s \in \mathsf{A}}$ forms a partition of $T$.
\end{proposition}
\begin{proof}
Each side-forest $\forrom{s}{A}$ is non-empty since $s \in \forrom{s}{A}$, and different side-forests $\forrom{s}{A}$ and $\forrom{t}{A}$ are disjoint by the way that side-forests are defined. To see that $\displaystyle \left\{ \forrom{s}{A} \right\}_{s \in \mathsf{A}}$ covers $T$, pick any node $u \in T$, and consider the following cases.
Case 1: $u \in \mathsf{A}$. Then $u \in \forrom{u}{A}$.
Case 2: $u \not\in \mathsf{A}$. Then there exists $v \in \mathsf{A}$ such that $\left\{u,v\right\}$ is an antichain. Let $\mathsf{B}$ be a path that contains $u$. By the weakly branching completeness of $\ensuremath{\mathfrak{T}}$, the set $\mathsf{A} \cap \mathsf{B}$ has a supremum $w$ in $\mathsf{A}$.
Case 2.1: $w < u$. Then $u \in \forromu{w}{A}$.
Case 2.2: $w \not< u$. Then $u \in \forroml{w}$.
\noindent This shows that $\bigcup_{s \in \mathsf{A}} \forrom{s}{A} = T$.
\end{proof}
\section{Characterisations of the notions of completeness}
\label{sec:characterisations}
\begin{theorem} \label{Thm:DedekindCompleteness}
A tree is complete \cmp{1} if and only if it is both pathwise complete \cmp{2} and branching complete \cmp{4}.
\end{theorem}
\begin{proof}
($\Rightarrow$) \ Trivial.
($\Leftarrow$) \ Suppose that \cmp{2} and \cmp{4} hold in the tree $\mathfrak{T}$. Consider a non-empty set $X$ of nodes in $\mathfrak{T}$ that is bounded below. Let $\mathsf{A}$ be any path in $\mathfrak{T}$ that intersects $X$. Then $\mathsf{A} \cap X$ is a non-empty subset of $\mathsf{A}$ that is bounded below, hence, by \cmp{2}, it has an infimum on $\mathsf{A}$, call it $\ndv$.
For any path $\mathsf{P}$ that intersects with $X$, again, $\mathsf{P} \cap X$ has an infimum on $\mathsf{P}$, call it $t_{\mathsf{P}}$. By \cmp{4}, the set $\left\{\ndv,t_{\mathsf{P}}\right\}$ has an infimum $u_{\mathsf{P}}$, which is a node on $\mathsf{A}$. Observe that $t_\mathsf{A} = \ndv = u_\mathsf{A}$. Now consider the set
\[ Y := \{ u_{\mathsf{P}} \mid \, \mathsf{P} \mbox{ is any path that intersects with } X \}. \]
For any lower bound $u$ of $X$, we have $u \leqslant \ndv$, $u \leqslant t_{\mathsf{P}}$, and hence $u \leqslant u _{\mathsf{P}}$ for every path $\mathsf{P}$ intersecting $X$.
Then, the set $Y$ is a non-empty subset of the path $\mathsf{A}$ and is bounded below. By property \cmp{$2$} it has an infimum $t$ in $\mathsf{A}$. We claim that $t$ is the infimum of $X$. Indeed, the definition of $Y$ implies that $t$ is a lower bound of $X$, and we have observed above that every lower bound $t'$ of $X$ is also a lower bound of $Y$, and hence $t' \leqslant t$.
\end{proof}
\begin{corollary}
A pathwise complete \cmp{2} tree $\mathfrak{T}$ is antichain complete \cmp{3} if and only if $\mathfrak{T}$ is branching complete \cmp{4}, if and only if $\mathfrak{T}$ is complete \cmp{1}.
\end{corollary}
The next result shows that the property of antichain completeness is equivalent to the conjunction of the properties of branching completeness and ``half'' of weakly branching point completeness (bearing in mind that in branching complete trees, branching points and weakly branching points coincide).
\begin{proposition} \label{Thm:Antichains}
A tree $\mathfrak{T}$ is antichain complete \cmp{3} if and only if $\mathfrak{T}$ is branching complete \cmp{4} and every non-empty chain of branching points that is bounded below, has an infimum.
\end{proposition}
\begin{proof}
($\Rightarrow$) \ Assume that every non-empty antichain in $\mathfrak{T}$ that is bounded below, has an infimum. It is immediate that $\mathfrak{T}$ is branching complete. Let $X$ be a non-empty chain of branching points in $\mathfrak{T}$ that is bounded below. For each $x \in X$, let $u_x$ and $\ndv_x$ be incomparable nodes such that $x = \inf\{u_x,\ndv_x\}$ and let $Y := \bigcup_{x \in X} \{u_x,\ndv_x\}$. It follows from Zorn's Lemma
that there exists $Z \subseteq Y$ that is maximal with respect to being an antichain, and $Z$ must be bounded below since $X$ is bounded below. Let $t := \inf(Z)$.
Observe that $L(X) = L(Y)$. Let $s \in L(X)$. Since $Z \subseteq Y$, we have $L(Y) \subseteq L(Z)$, from which $s \in L(Z)$ and hence $s \leqslant t$.
%
To see that $t \in L(X)$, first note that, by the maximality of $Z$, there exists, for each $y \in Y$, a node $y' \in Z$ such that $y' \smile y$. Let $x \in X$. If $(u_x)' \neq (\ndv_x)'$ then $(u_x)' \not\smile (\ndv_x)'$ (since $Z$ is an antichain) hence $t \leqslant \inf\{(u_x)',(\ndv_x)'\} = \inf\{u_x,\ndv_x\} = x$. If, on the other hand, $(u_x)' = (\ndv_x)'$, then $(u_x)' \smile u_x,\ndv_x$, hence $t \leqslant (u_x)' \leqslant \inf\{u_x,\ndv_x\} = x$, hence $t \in L(X)$, from which it follows that $t = \inf(X)$, as required.
($\Leftarrow$) \ Let $X$ be a non-empty antichain that is bounded below. Let $x_0 \in X$ and let $\mathsf{P}$ be a path that passes through $x_0$. For each $y \in X \setminus \{x_0\}$, let $u_y := \inf\{x_0,y\} \in \mathsf{P}$. The set $\left\{ u_y : y \in X \setminus \{x_0\} \right\}$ is then a chain of branching points that is bounded below, hence this set has an infimum $\ndv$. It is readily verified that $\ndv$ is also the infimum of $X$.
\end{proof}
Completeness issues lead us to consider particular stems and branches.
Recall that $\bridge{t}$ denotes the maximal bridge in the tree containing $t$.
We call a stem $\mathsf{A}$ a \defstyle{trunk} when $\mathsf{A} = \bigcup_{t \in \mathsf{A}} \bridge{t}$
and, similarly, call a branch $\mathsf{B}$ a \defstyle{limb} when $\mathsf{B} = \bigcup_{t \in \mathsf{B}} \bridge{t}$. For a non-example of a trunk and a limb, consider the tree $\mathfrak{T}$ in Figure~\ref{Fig:Dependence2}. For each node $u$ in the initial copy of $\omega$ in $\mathfrak{T}$, the set $T^{\leqslant u}$ forms a stem that is not a trunk, and for each node $\ndv$ in each of the two copies of $\omega$ that are appended atop the initial copy of $\omega$, the set $T^{> \ndv}$ forms a branch that is not a limb.
If $\mathsf{P}$ is a path and $\mathsf{A} \subseteq \mathsf{P}$ then $\mathsf{A}$ is a trunk if and only if $\mathsf{P} \setminus \mathsf{A}$ is a limb. If $\mathsf{A}$ is a trunk and $\mathsf{B}$ is a limb for which $\mathsf{A} \cap \mathsf{B} = \emptyset$ and $\mathsf{A} \cup \mathsf{B}$ is a path, then $\mathsf{A}$ and $\mathsf{B}$ will be called \defstyle{complementary}. In condensed trees, stems and trunks coincide, as do branches and limbs. Note that if a trunk $\mathsf{A}$ and limb $\mathsf{B}$ are complementary, then the supremum of $\mathsf{A}$, if it exists, will be the infimum of $\mathsf{B}$. However, if $\mathsf{B}$ has an infimum, that infimum need not be the supremum of $\mathsf{A}$, in fact, $\mathsf{A}$ need not even have a supremum. The tree in Figure~\ref{Fig:Dependence2} can serve as a counterexample.
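To make the distinction concrete, here is a small Python sketch (an aside, under the same ad hoc finite-tree conventions: the order is an irreflexive, transitively closed set of pairs, and every path of a finite tree is the down-set of a leaf) that decides whether a given stem is a trunk, using the fact that two nodes lie in the same maximal bridge exactly when the same paths pass through them.
\begin{verbatim}
# Illustrative sketch: deciding whether a stem of a finite tree is a
# trunk, i.e. a union of maximal bridges.

def paths(T, lt):
    leaves = [x for x in T if not any((x, y) in lt for y in T)]
    return [frozenset({x} | {y for y in T if (y, x) in lt})
            for x in leaves]

def is_trunk(stem, T, lt):
    ps = paths(T, lt)
    through = {t: frozenset(i for i, p in enumerate(ps) if t in p)
               for t in T}
    # the maximal bridge of t consists of the nodes lying on exactly
    # the same paths as t
    return all(u in stem
               for t in stem for u in T if through[u] == through[t])

# Example: r < s, then s branches into a and b.  The stem {r} is not a
# trunk (the maximal bridge of r is {r, s}), while {r, s} is a trunk.
T  = {"r", "s", "a", "b"}
lt = {("r", "s"), ("r", "a"), ("r", "b"), ("s", "a"), ("s", "b")}
assert not is_trunk({"r"}, T, lt)
assert is_trunk({"r", "s"}, T, lt)
\end{verbatim}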
\begin{theorem} \label{Thm:TrunksForm}
Let $\mathfrak{T}$ be any tree.
\begin{enumerate}
\item
For any non-singleton set $\left\{\mathsf{P}_i\right\}_{i \in I}$ of paths in $\mathfrak{T}$, the set $\displaystyle \bigcap_{i \in I} \mathsf{P}_i$, if non-empty, is a trunk.
\item
For any path $\mathsf{Q}$ and any non-singleton set $\left\{\mathsf{P}_i\right\}_{i \in I}$ of paths in $\mathfrak{T}$, the set $\displaystyle \mathsf{B} := \mathsf{Q} \setminus \bigcup_{i \in I} \mathsf{P}_i$, if non-empty, is a limb (hence $\mathsf{Q} \setminus \mathsf{B}$ is a trunk).
\item
Every trunk in $\mathfrak{T}$ is of one, and possibly both, of the two forms above.
\end{enumerate}
\end{theorem}
\begin{proof}
1. \ Let $\left\{\mathsf{P}_i\right\}_{i \in I}$ be any set of paths in $\mathfrak{T}$ and let $\mathsf{A} := \bigcap_{i \in I} \mathsf{P}_i \neq \emptyset$. Pick any path $\mathsf{P}_0 \in \left\{\mathsf{P}_i\right\}_{i \in I}$. That $\mathsf{A}$ is a stem follows from the fact that $\mathsf{A} \subseteq \mathsf{P}_0$ along with the fact that each path $\mathsf{P}_i$ is downward-closed. To see that $\mathsf{A}$ is a trunk it suffices to show, for all nodes $u$ and $\ndv$ with $u \in \mathsf{A}$ and $\ndv \in \bridge{u}$, that $\ndv \in \mathsf{A}$. Let $u$ and $\ndv$ be any nodes for which $u \in \mathsf{A}$ and $\ndv \in \bridge{u}$. For each $i \in I$, since $u \in \mathsf{P}_i$, we have $\mathsf{P}_i \cap \bridge{u} \neq \emptyset$ and so, since $\bridge{u}$ is a bridge, $\mathsf{P}_i \cap \bridge{u} = \bridge{u}$, hence $\ndv \in \mathsf{P}_i$. It follows that $\ndv \in \mathsf{A}$, as required.
2. \ Let $\mathsf{Q}$ be a path, $\left\{\mathsf{P}_i\right\}_{i \in I}$ be a set of paths in $\mathfrak{T}$, and $\mathsf{B} := \mathsf{Q} \setminus \bigcup_{i \in I} \mathsf{P}_i \neq \emptyset$. Let $u \in \mathsf{B}$ and let $\ndv \in \mathsf{Q}$ with $u < \ndv$. Suppose that $\ndv \not\in \mathsf{B}$. Then $\ndv \in \mathsf{P}_j$ for some $j \in I$. Then $u \in \mathsf{P}_j$, as well, so that $u \not\in \mathsf{B}$, a contradiction. It follows that $\mathsf{B}$ is upward-closed in $\mathsf{Q}$ hence $\mathsf{B}$ is a branch.
To see that $\mathsf{B}$ is a limb it again suffices to show, for all nodes $u$ and $\ndv$ with $u \in \mathsf{B}$ and $\ndv \in \bridge{u}$, that $\ndv \in \mathsf{B}$. So, let $u$ and $\ndv$ be any nodes for which $u \in \mathsf{B}$ and $\ndv \in \bridge{u}$, but suppose, for a contradiction, that $\ndv \not\in \mathsf{B}$. From $\ndv \in \bridge{u}$,
it follows that $\ndv \in \mathsf{P}_j$ for some $j \in I$, hence $\bridge{u} = \bridge{\ndv} \subseteq \mathsf{P}_j$. Then $u \in \mathsf{P}_j$, so that $u \not\in \mathsf{B}$, a contradiction.
3. \ Let $\mathsf{A}$ be a trunk, $\mathsf{Q}$ a path that contains $\mathsf{A}$, and $\mathsf{B} := \mathsf{Q} \setminus \mathsf{A}$. We consider three cases (Cases 1 and 2 may overlap).
Case 1: $\mathsf{A}$ does not contain a greatest maximal bridge. Let $\mathcal{A}$ be the set of maximal bridges in $\mathsf{A}$. Observe that for each $\mathsf{J} \in \mathcal{A}$ there exists a path $\mathsf{P}_{\mathsf{J}}$ and a node $u_{\mathsf{J}} \in \mathsf{A}$ such that $\mathsf{J} \subseteq \mathsf{P}_{\mathsf{J}}$ while $u_{\mathsf{J}} \not\in \mathsf{P}_{\mathsf{J}}$. Then $\mathsf{B} = \mathsf{Q} \setminus \bigcup_{\mathsf{J} \in \mathcal{A}} \mathsf{P}_{\mathsf{J}}$.
Case 2: $\mathsf{B}$ does not contain a least maximal bridge. Let $\mathcal{B}$ be the set of maximal bridges in $\mathsf{B}$. Observe that for each $\mathsf{J} \in \mathcal{B}$ there exists a path $\mathsf{P}_{\mathsf{J}}$ such that $\mathsf{P}_{\mathsf{J}} \cap \mathsf{J} = \emptyset$ while $\mathsf{P}_{\mathsf{J}} \cap \mathsf{B} \neq \emptyset$. Then $\mathsf{A} = \bigcap_{\mathsf{J} \in \mathcal{B}} \mathsf{P}_{\mathsf{J}}$.
Case 3: $\mathsf{A}$ contains a greatest maximal bridge $\mathsf{I}$ and $\mathsf{B}$ contains a least maximal bridge $\mathsf{J}$. Since $\mathsf{I}$ and $\mathsf{J}$ are distinct maximal bridges, there must exist a path $\mathsf{P}$ that contains $\mathsf{I}$ but such that $\mathsf{P} \cap \mathsf{J} = \emptyset$. Then $\mathsf{P} \cap \mathsf{Q} = \mathsf{A}$, as required.
\end{proof}
Trunks that are of the form in Part 1 of Theorem \ref{Thm:TrunksForm} will be called \defstyle{type I trunks}, and trunks that are of the form in Part 2 of the theorem will be called \defstyle{type II trunks}. A limb will be called a \defstyle{type I limb} or \defstyle{type II limb} according to whether the trunk that complements it is a type I or type II trunk. The set of paths $\left\{\mathsf{P}_i\right\}_{i \in I}$ from Theorem \ref{Thm:TrunksForm} will be said to \defstyle{generate} the corresponding trunk or limb as well as its complementary limb/trunk.
A trunk $\mathsf{A}$ will be called \defstyle{finitely generated} when it can be written in the form $\mathsf{A} = \mathsf{P} \cap \mathsf{Q}$ for some paths $\mathsf{P}$ and $\mathsf{Q}$, and a limb $\mathsf{B}$ will be called \defstyle{finitely generated} when it can be written in the form $\mathsf{B} = \mathsf{Q} \setminus \mathsf{P}$ for some paths $\mathsf{Q}$ and $\mathsf{P}$. If $\mathsf{A}$ is a trunk that is contained in a path $\mathsf{P}$ then $\mathsf{A}$ is finitely generated if and only if the limb $\mathsf{P} \setminus \mathsf{A}$ is finitely generated. Finitely generated trunks and limbs are both of type I and of type II, with the generating set of paths $\{\mathsf{P}_i\}_{i \in I}$ being finite.
\begin{lemma} \label{Thm:Trunk}
Let $\mathsf{A}$ be a finitely generated trunk in a tree $\mathfrak{T}$ and let $\mathsf{P}$ be a path such that $\mathsf{A} \subseteq \mathsf{P}$. Then there exists a path $\mathsf{Q}$ such that $\mathsf{A} = \mathsf{P} \cap \mathsf{Q}$.
\end{lemma}
\begin{proof}
Since $\mathsf{A}$ is a finitely generated trunk, there exist paths $\mathsf{B}$ and $\mathsf{C}$ such that $\mathsf{A} = \mathsf{B} \cap \mathsf{C}$.
If $\mathsf{P} \cap \mathsf{B} = \mathsf{A}$ then we are done, so consider the case where $\mathsf{P} \cap \mathsf{B} \supsetneq \mathsf{A}$. Let $u \in (\mathsf{P} \cap \mathsf{B}) \setminus \mathsf{A}$. Suppose, for a contradiction, that $\mathsf{P} \cap \mathsf{C} \supsetneq \mathsf{A}$. Let $\ndv \in (\mathsf{P} \cap \mathsf{C}) \setminus \mathsf{A}$. Then $\min\left\{u,\ndv\right\} \in (\mathsf{B} \cap \mathsf{C}) \setminus \mathsf{A}$, a contradiction. Hence $\mathsf{P} \cap \mathsf{C} = \mathsf{A}$, as required.
\end{proof}
The following result characterises the notions of completeness \cmp{1} to \cmp{6}
in terms of properties involving stems, branches, trunks, and limbs.
\begin{theorem} \label{Thm:CharacterisationOfCompleteness}
Let $\mathfrak{T}$ be any tree. Then each of the following claims holds.
\begin{enumerate}
\item
$\mathfrak{T}$ is complete \cmp{1} if and only if every stem in $\mathfrak{T}$ has a supremum.
\item
$\mathfrak{T}$ is pathwise complete \cmp{2} if and only if every branch in $\mathfrak{T}$ has an infimum.
\item
$\mathfrak{T}$ is antichain complete \cmp{3} if and only if every type I trunk in $\mathfrak{T}$ has a greatest node.
\item
$\mathfrak{T}$ is branching complete \cmp{4} if and only if every finitely generated trunk in $\mathfrak{T}$ contains a greatest node.
\item
$\mathfrak{T}$ is weakly branching complete \cmp{5} if and only if every finitely generated limb in $\mathfrak{T}$ has an infimum.
\item
$\mathfrak{T}$ is branching complete \cmp{4} and weakly branching point complete \cmp{6} if and only if every trunk in $\mathfrak{T}$ has a supremum.
\item
$\mathfrak{T}$ is weakly branching complete \cmp{5} and weakly branching point complete \cmp{6} if and only if every limb in $\mathfrak{T}$ has an infimum.
\end{enumerate}
\end{theorem}
\begin{proof}
Claim 1: ($\Rightarrow$) \ Given a stem $\mathsf{A}$ in $\mathfrak{T}$, the infimum of $T^{\geqslant \mathsf{A}}$ can easily be verified to be the supremum of $\mathsf{A}$.
($\Leftarrow$) \ Given $X \subseteq T$ that is non-empty and bounded below, $\mathsf{B} := T^{\leqslant X}$ will be a stem in $\mathfrak{T}$. Its supremum $u$ can readily be verified to be the infimum of $X$.
Claim 2: straightforward.
Claim 3: ($\Rightarrow$) \ Let $\mathfrak{T}$ be antichain complete and let $\mathsf{A} := \bigcap_{i \in I} \mathsf{P}_i$, where $\{\mathsf{P}_i\}_{i \in I}$ is a set of paths, be a type I trunk. Let $X$ be a maximal antichain in $\left(\bigcup_{i \in I} \mathsf{P}_i\right) \setminus \mathsf{A}$. Then $X$ is bounded below (by $\mathsf{A}$) hence $X$ has an infimum $u$. Then $u$ is the greatest node of $\mathsf{A}$.
($\Leftarrow$) \ Let $X$ be a non-empty antichain that is bounded below. For each $t \in X$, let $\mathsf{P}_{t}$ be a path that passes through $t$. Then $\mathsf{A} := \bigcap_{t \in X} \mathsf{P}_{t}$ is a type I trunk, hence $\mathsf{A}$ has a greatest node $u$. This node $u$ will be the required infimum of $X$.
Claim 4: straightforward.
Claim 5: straightforward (using Lemma \ref{Thm:Trunk}).
Claim 6: ($\Rightarrow$) \ Assume that $\mathfrak{T}$ satisfies \cmp{4} and \cmp{6}, and therefore also \cmp{7}. Let $\mathsf{A}$ be a trunk in $\mathfrak{T}$. First, consider the case where $\mathsf{A}$ is a type I trunk, say $\mathsf{A} := \bigcap_{i \in I} \mathsf{P}_i$ for some set of paths $\{\mathsf{P}_i\}_{i \in I}$. Fix $j \in I$ and for each $i \in I \setminus \{j\}$, let $t_i$ be the greatest node in $\mathsf{P}_j \cap \mathsf{P}_i$, which exists by \cmp{4}. By \cmp{7} the set $\{t_i\}_{i \in I \setminus \{j\}}$ has an infimum $u$, which is readily seen to be the supremum of $\mathsf{A}$. In the case where, instead, $\mathsf{A}$ is a type II trunk, it can similarly be shown that the supremum of the set of branching points that are formed on $\mathsf{A}$ by the paths that generate $\mathsf{A}$ is also a supremum of $\mathsf{A}$ itself.
($\Leftarrow$) \ Suppose that every trunk in $\mathfrak{T}$ has a supremum. To see that \cmp{4} holds, let $u$ and $\ndv$ be incomparable nodes and let $\mathsf{A}$ and $\mathsf{B}$ be paths that contain $u$ and $\ndv$. The supremum of the trunk $\mathsf{A} \cap \mathsf{B}$ will also be the infimum of $\{u,\ndv\}$. To see that \cmp{7}, and therefore also \cmp{6}, holds, let $X$ be a non-empty chain of branching points in $\mathfrak{T}$ and let $\mathsf{P}$ be a path for which $X \subseteq \mathsf{P}$. For each $t \in X$, let $\mathsf{P}_{t}$ be a path for which $t$ is the supremum of the trunk $\mathsf{P} \cap \mathsf{P}_{t}$. If $X$ is bounded below, then the supremum of the type I trunk $\bigcap_{t \in X} \mathsf{P}_{t}$ will be the infimum of $X$, while if $X$ is bounded above, then the supremum of the type II trunk $\mathsf{P} \setminus \bigcup_{t \in X} \mathsf{P}_{t}$ will be the supremum of $X$.
Claim 7: The proof is similar to that of Claim 6.
\end{proof}
\begin{proposition} \label{Thm:PathwiseDedekindComplete}
A tree $\mathfrak{T}$ is pathwise complete \cmp{2} if and only if $\mathfrak{T}$ is weakly branching complete \cmp{5} and weakly branching point complete \cmp{6} and each of the maximal bridges in $\mathfrak{T}$ is complete.
\end{proposition}
\begin{proof}
($\Rightarrow$) \ Immediate.
($\Leftarrow$) \ Let $\mathsf{P}$ be any path in $\mathfrak{T}$ and let $X \subseteq \mathsf{P}$ be non-empty and bounded below.
Case 1: there exists a limb $\mathsf{A}$ in $\mathfrak{T}$ such that $X$ is co-initial in $\mathsf{A}$. Then the infimum of $\mathsf{A}$, which exists by Part 7 of Theorem \ref{Thm:CharacterisationOfCompleteness}, is also the infimum of $X$.
Case 2: there exists a least maximal bridge $\mathsf{B}$ in $\mathfrak{T}$ such that $\mathsf{B} \cap X \neq \emptyset$, and a node $b \in \mathsf{B}$ such that $b < x$ for each $x \in X$. Since $\mathsf{B}$ is complete, the set $\mathsf{B} \cap X$ has an infimum which is also the infimum of $X$.
\end{proof}
\begin{corollary} \label{Thm:CompletenessCondensedTrees}
Let $\mathfrak{T}$ be a condensed tree.
\begin{enumerate}
\item
$\mathfrak{T}$ is pathwise complete \cmp{2} if and only if it is weakly branching complete \cmp{5} and weakly branching point complete \cmp{6}.
\item
$\mathfrak{T}$ is complete \cmp{1} if and only if it is branching complete \cmp{4} and weakly branching point complete \cmp{6}.
\end{enumerate}
\end{corollary}
\begin{proof}
1. This follows from Proposition \ref{Thm:PathwiseDedekindComplete} and the fact that every maximal bridge in a condensed tree is a singleton and hence complete.
2. This follows from Theorem \ref{Thm:Dependences}, Theorem \ref{Thm:DedekindCompleteness} and Part 1 above.
\end{proof}
In what follows we need the notion of $<$-connected components in a forest.
\begin{definition}\label{def:schmerl-comp}
A \defstyle{$<$-connected component}\footnote{This definition comes from \cite{Schmerl}.}
(briefly, \defstyle{$<$-component}) of the forest $\mathfrak{F} = \left(F;<\right)$ is a non-empty
subset $\schm$ of $F$ such that:
(1) if $t \in \schm$, $t' \leqslant t$ and $t' \leqslant u$, then $u \in \schm$;
(2) $\schm$ is minimal (by inclusion) for the property (1).
\end{definition}
It can be verified that the $<$-components of a forest coincide with the maximal subtrees of that forest. The following characterisation will be needed when constructing completions of trees in Section \ref{sec:tree-completions}.
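As a computational aside, on a finite forest the $<$-components can be obtained as the connected components of the relation `having a common lower bound'; the Python sketch below (again with the order given as an irreflexive, transitively closed set of pairs) does exactly this.
\begin{verbatim}
# Illustrative sketch: the <-components of a finite forest, computed as
# the connected components of the relation "u and v have a common lower
# bound".  Pairs (x, y) mean x < y, irreflexive and transitively closed.

def components(F, lt):
    def leq(a, b):
        return a == b or (a, b) in lt
    def linked(u, v):
        return any(leq(z, u) and leq(z, v) for z in F)
    comps, remaining = [], set(F)
    while remaining:
        comp = {remaining.pop()}
        frontier = set(comp)
        while frontier:
            nxt = {v for v in remaining
                   if any(linked(u, v) for u in frontier)}
            comp |= nxt
            remaining -= nxt
            frontier = nxt
        comps.append(comp)
    return comps

# Example: a forest made of two disjoint chains r1 < a and r2 < b.
F  = {"r1", "a", "r2", "b"}
lt = {("r1", "a"), ("r2", "b")}
print(components(F, lt))   # two components: {'r1', 'a'} and {'r2', 'b'}
\end{verbatim}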
\begin{proposition} \label{Thm:CompletenessComponents}
Let $\mathfrak{T}$ be any tree.
\begin{enumerate}
\item
$\mathfrak{T}$ is pathwise complete \cmp{2} if and only if for every stem $\mathsf{X}$ in $\mathfrak{T}$, each $<$-component of $\mathfrak{T}^{\geqslant \mathsf{X}}$ has a root.
%
\item
$\mathfrak{T}$ is weakly branching complete \cmp{5} if and only if for every finitely generated trunk $\mathsf{X}$ in $\mathfrak{T}$, each $<$-component of $\mathfrak{T}^{\geqslant \mathsf{X}}$ has a root.
\end{enumerate}
\end{proposition}
\begin{proof}
1. ($\Rightarrow$) \ Let $\mathsf{X}$ be a stem in $\mathfrak{T}$. If $\mathsf{X}$ has a greatest node $u$ then $u$ is the root of $\mathfrak{T}^{\geqslant \mathsf{X}}$, so let us consider the case where $\mathsf{X}$ has no greatest node. Let $\ensuremath{\mathfrak{T}}'$ be any $<$-component of $\mathfrak{T}^{\geqslant \mathsf{X}}$ and let $\mathsf{A}$ be a path in $\ensuremath{\mathfrak{T}}'$. Then $\mathsf{A}$ is a branch in $\mathfrak{T}$ for which $T^{< \mathsf{A}} = \mathsf{X}$. By Part 2 of Theorem~\ref{Thm:CharacterisationOfCompleteness}, $\mathsf{A}$ has an infimum $\ndv$ in $\mathfrak{T}$. Since $\mathsf{X}$ has no greatest node then $\ndv$ must be the least element of $\mathsf{A}$. It follows that $\ndv$ is the root of $\ensuremath{\mathfrak{T}}'$.
($\Leftarrow$) \ By Part 2 of Theorem \ref{Thm:CharacterisationOfCompleteness}, it suffices to show that each branch in $\mathfrak{T}$ has an infimum. Let $\mathsf{A}$ be such a branch. If $\mathsf{A}$ has a least node then this least node will be its infimum, so let us consider the case where $\mathsf{A}$ has no least node. Let $\mathsf{B} := T^{< \mathsf{A}}$ and let $\ensuremath{\mathfrak{T}}'$ be that $<$-component in $\mathfrak{T}^{\geqslant \mathsf{B}}$ that contains $\mathsf{A}$. By assumption, $\ensuremath{\mathfrak{T}}'$ has a root $u$. Since $\mathsf{A}$ has no least node, $u$ must be the greatest node of $\mathsf{B}$, from which it follows that $u$ is the infimum of $\mathsf{A}$.
2. ($\Rightarrow$) \ By considering a finitely generated trunk $\mathsf{X}$, it can be shown, in a similar manner as in the forward direction of Part 1 of this proof, but using Part 5 rather than Part 2 of Theorem \ref{Thm:CharacterisationOfCompleteness}, that each $<$-component of $\mathfrak{T}^{\geqslant \mathsf{X}}$ has a root.
($\Leftarrow$) \ This direction can be proved similarly to the backward direction of Part 1 of this proof, but again using Part 5, rather than Part 2, of Theorem \ref{Thm:CharacterisationOfCompleteness}, and considering a finitely generated limb $\mathsf{A}$ rather than a branch, from which $\mathsf{B} := T^{<\mathsf{A}}$ will be a finitely generated trunk, rather than a stem.
\end{proof}
\section{Completions of trees}
\label{sec:tree-completions}
We now turn to the problem of constructing completions of trees. Intuitively, this will involve adding the necessary ``missing'' nodes to a tree so that it becomes complete, in one sense or another. The completions that will be considered here are Dedekind completions, antichain completions, branching completions, pathwise Dedekind completions, weakly branching completions, and $\alpha$-fillings. Theorem \ref{Thm:CharacterisationOfCompleteness} and Proposition \ref{Thm:CompletenessComponents} will be key in constructing these completions in intuitive terms.
\subsection{Dedekind completions}
We start by outlining a construction, given in \cite{Warren}, for obtaining the Dedekind completion of a tree (applied there to the wider class of so-called `cycle-free partial orders').
Given a tree $\mathfrak{T}$, a nonempty subset $X$ of $T$ is called a \defstyle{Dedekind ideal} when $X$ is bounded above and $L(U(X)) = X$; $X$ is called a \defstyle{principal ideal} when it has the form $X = T^{\leqslant y}$ for some $y \in T$. Let $\mathcal{I}\left(\mathfrak{T}\right)$ denote the set that consists of all Dedekind ideals of $\mathfrak{T}$ and let us define $\mathfrak{T}^{\mathcal{I}} := \left(\mathcal{I}\left(\mathfrak{T}\right);\subseteq\right)$.
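To make these notions concrete, here is a minimal Python sketch (illustrative only, and only meaningful for finite data) that checks whether a set of nodes is a Dedekind ideal by computing the set $U(X)$ of upper bounds and then the set $L(U(X))$ of lower bounds directly from the definitions.
\begin{verbatim}
def upper_bounds(X, nodes, leq):
    return {u for u in nodes if all(leq(x, u) for x in X)}

def lower_bounds(Y, nodes, leq):
    return {l for l in nodes if all(leq(l, y) for y in Y)}

def is_dedekind_ideal(X, nodes, leq):
    # non-empty, bounded above, and L(U(X)) = X
    X = set(X)
    U = upper_bounds(X, nodes, leq)
    return bool(X) and bool(U) and lower_bounds(U, nodes, leq) == X

def is_principal_ideal(X, nodes, leq):
    # X has the form T^{<= y} for some node y
    return any(set(X) == {t for t in nodes if leq(t, y)} for y in nodes)
\end{verbatim}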
\begin{fact}\cite[Lemma 2.2.5]{Warren}
For any tree $\mathfrak{T}$, the structure $\mathfrak{T}^{\mathcal{I}}$ is a complete tree.\footnote{In \cite{Warren}, Dedekind completeness is defined as the property that each ideal is principal. This amounts to the same as the definition of (Dedekind) completeness that is used in this paper.}
\end{fact}
The tree $\mathfrak{T}^{\mathcal{I}}$ can be related to $\mathfrak{T}$ as follows: define $f : \mathfrak{T} \rightarrow \mathfrak{T}^{\mathcal{I}}$ by $x \mapsto T^{\leqslant x}$, i.e.,~each node in $\mathfrak{T}$ is mapped to the principal ideal that it generates.
\begin{fact}\cite[Lemmas 2.2.3, 2.2.5, 2.2.7]{Warren}
\label{Thm:DedekindCompletionMinimal}
The mapping
$f$ is an isomorphic embedding of $\mathfrak{T}$ into $\mathfrak{T}^{\mathcal{I}}$, and $\mathfrak{T}^{\mathcal{I}}$ is a substructure of every complete tree that extends $f\left[\mathfrak{T}\right]$.
\end{fact}
We note that the above construction is an adaptation of the usual \emph{Dede\-kind-MacNeille completion} (see e.g.,~\cite{MacNeille}, \cite{BurrisSankappanavar}) of a partially ordered set, but with the following differences: the Dedekind-MacNeille completion of a tree $\mathfrak{T}$ takes for its underlying set \textit{all} subsets of $T$ for which $L(U(X)) = X$, not only those sets $X$ that are non-empty and bounded above, and the resulting structure is (up to isomorphism) the smallest complete \textit{lattice} that contains $\mathfrak{T}$.
We now describe how to construct the Dedekind
completion of a tree in more intuitive terms.\footnote{A similar construction is given in \cite{Droste85}, \S5.} Given a tree $\mathfrak{T} := \left(T;<\right)$, let $\mathcal{S}$ denote the set that consists of all stems in $\mathfrak{T}$ that do not have a supremum. For each $\mathsf{S} \in \mathcal{S}$, let $t_{\mathsf{S}}$ denote a new node that is not in $\mathfrak{T}$, and let $T^{DC} := T \cup \left\{t_{\mathsf{S}} : \mathsf{S} \in \mathcal{S}\right\}$. Let $<^{DC}$ be the relation
\begin{multline} \label{Eqn:CompletionOrdering}
\left\{(x,y) \in T \times T: x < y\right\} \cup \left\{\left(t_{\mathsf{R}},t_{\mathsf{S}}\right) : \mathsf{R},\mathsf{S} \in \mathcal{S} \ \text{with} \ \mathsf{R} \subsetneq \mathsf{S} \right\} \cup \\
\bigcup_{\mathsf{S} \in \mathcal{S}}\left\{\left(u,t_{\mathsf{S}}\right) : u \in \mathsf{S} \right\} \cup \bigcup_{\mathsf{S} \in \mathcal{S}}\left\{\left(t_{\mathsf{S}},u\right) : u \ \text{is any upper bound of} \ \mathsf{S} \right\}
\end{multline}
on $T^{DC}$ and let us denote the structure $\left(T^{DC};<^{DC}\right)$ as $\mathfrak{T}^{DC}$.
\begin{proposition} \label{Thm:DedekindCompletion}
The structure $\mathfrak{T}^{DC}$ is a complete tree that contains $\mathfrak{T}$.
\end{proposition}
\begin{proof}
It is straightforward to check that $<^{DC}$ is irreflexive, transitive, downward-linear, and downward-connected, hence $\mathfrak{T}^{DC}$ is a tree.
By Theorem \ref{Thm:CharacterisationOfCompleteness}, we have only to show that every stem in $\mathfrak{T}^{DC}$ has a supremum. Observe first that every stem $\mathsf{S}$ in $\mathfrak{T}^{DC}$ that is also a stem in $\mathfrak{T}$ has a supremum in $\mathfrak{T}^{DC}$, namely either its supremum $\sup(\mathsf{S})$ in $\mathfrak{T}$, or $t_{\mathsf{S}}$.
Next, let $\mathsf{R}$ be any stem in $\mathfrak{T}^{DC}$ that is not a stem in $\mathfrak{T}$. If $\mathsf{R}$ has a greatest element, there is nothing to prove. Otherwise, consider the set $\mathsf{R}^- := \mathsf{R} \cap T$, which is a stem in $\mathfrak{T}$. It suffices to show that $\mathsf{R}^-$ is cofinal in $\mathsf{R}$, that is, for every $u \in \mathsf{R}$, there is a $t \in \mathsf{R}^-$ such that $u \leqslant^{DC} t$; then $t_{\mathsf{R}^-}$, the supremum of $\mathsf{R}^-$ in $\mathfrak{T}^{DC}$, will also be the supremum of $\mathsf{R}$ in $\mathfrak{T}^{DC}$.
Indeed, suppose that $u$ is an element of $\mathsf{R}$ such that $t <^{DC} u$ for all $t \in \mathsf{R}^-$. Then $u \not\in \mathsf{R}^-$, hence $u = t_{\mathsf{S}}$ for some stem $\mathsf{S}$ in $\mathfrak{T}$. Since $\mathsf{S} \subseteq \mathsf{R}$ then $\mathsf{S} \subseteq \mathsf{R}^-$, and it is easily verified that $\mathsf{R}^- \subseteq \mathsf{S}$, hence $\mathsf{S} = \mathsf{R}^-$, so that $u = t_{\mathsf{S}} = t_{\mathsf{R}^-}$. Now consider any node $u'$ in $\mathsf{R}$ for which $u <^{DC} u'$; such $u'$ exists because $\mathsf{R}$ has no greatest element. Then $t <^{DC} u'$ for all $t \in \mathsf{R}^-$ and it again follows that $u' = t_{\mathsf{R}^-}$ hence $u' = u$, which contradicts the irreflexivity of $<^{DC}$.
\end{proof}
The tree $\mathfrak{T}^{DC}$ will be called the \defstyle{Dedekind completion} of $\mathfrak{T}$. By Theorem \ref{Thm:CharacterisationOfCompleteness}, every complete tree must have a supremum for each of its stems. It follows that $\mathfrak{T}^{DC}$ can be embedded in every complete tree that extends $\mathfrak{T}$. Using Fact \ref{Thm:DedekindCompletionMinimal}, it therefore must be the case that $\mathfrak{T}^{DC} \cong \mathfrak{T}^{\mathcal{I}}$.
\subsection{Antichain completions and branching completions}
Given a tree $\mathfrak{T} := \left(T;<\right)$ with Dedekind completion $\mathfrak{T}^{DC}$, the antichain completion and branching completion\footnote{\cite{Barham} defines branching completions, there called \textit{ramification completions}, in this way.} of $\mathfrak{T}$ can be obtained respectively as
$$\bigcap \left\{ \mathfrak{S} : \mathfrak{S} \ \text{is an antichain complete tree such that} \ \mathfrak{T} \subseteq \mathfrak{S} \subseteq \mathfrak{T}^{DC} \right\}$$
and
$$\bigcap \left\{ \mathfrak{S} : \mathfrak{S} \ \text{is a branching complete tree such that} \ \mathfrak{T} \subseteq \mathfrak{S} \subseteq \mathfrak{T}^{DC} \right\}.$$
Theorem \ref{Thm:CharacterisationOfCompleteness} again suggests intuitively simple constructions for these completions, this time in terms of the trunks of $\mathfrak{T}$.
Let $\mathcal{T}_{\mathrm{I}}$ be the set of all type I trunks in $\mathfrak{T}$ that do not contain a greatest node, and let $\mathcal{T}_{\mathrm{fin}}$ be the set of all finitely generated trunks in $\mathfrak{T}$ that do not contain a greatest node. For each trunk $\mathsf{S}$ in $\mathfrak{T}$, let $t_{\mathsf{S}}$ denote a new node that is not in $T$, and let us define
\begin{eqnarray*}
T^{AC} & := & T \cup \left\{t_{\mathsf{S}} : \mathsf{S} \in \mathcal{T}_{\mathrm{I}} \right\}, \\
T^{BC} & := & T \cup \left\{t_{\mathsf{S}} : \mathsf{S} \in \mathcal{T}_{\mathrm{fin}} \right\}.
\end{eqnarray*}
Now, we define the relations $<^{AC}$ and $<^{BC}$ as in (\ref{Eqn:CompletionOrdering}) but respectively with the sets $\mathcal{T}_{\mathrm{I}}$ and $\mathcal{T}_{\mathrm{fin}}$ replacing the set $\mathcal{S}$. Finally, let $\mathfrak{T}^{AC} := \left(T^{AC};<^{AC}\right)$ and $\mathfrak{T}^{BC} := \left(T^{BC};<^{BC}\right)$.
\begin{proposition} \label{Thm:AntichainBranchingCompletion}
Let $\mathfrak{T}$ be any tree.
\begin{enumerate}
\item
$\mathfrak{T}^{AC}$ is an antichain complete tree that contains $\mathfrak{T}$.
\item
$\mathfrak{T}^{BC}$ is a branching complete tree that contains $\mathfrak{T}$.
\end{enumerate}
\end{proposition}
\begin{proof}
1. To show that $\mathfrak{T}^{AC}$ is a tree, we can use an argument identical to the one used in the proof of Proposition \ref{Thm:DedekindCompletion} to show that $\mathfrak{T}^{DC}$ is a tree.
Let $\mathsf{R}$ be a type I trunk in $\mathfrak{T}^{AC}$, say $\mathsf{R} = \bigcap_{i \in I} \mathsf{P}_i$ for some set of paths $\left\{\mathsf{P}_i\right\}_{i \in I}$ in $\mathfrak{T}^{AC}$. Let $\mathsf{R}^- := \mathsf{R} \cap T$ and $\mathsf{P}_i^- := \mathsf{P}_i \cap T$ for each $i \in I$. Then for each $i$, $\mathsf{P}_i^-$ is a path in $\mathfrak{T}$, and $\mathsf{R}^- = \bigcap_{i \in I} \mathsf{P}_i^-$ is a type I trunk in $\mathfrak{T}$.
First consider the case where $\mathsf{R}^-$ has a greatest node $u$. Then $u$ is also the greatest node of $\mathsf{R}$. To see this, suppose, for a contradiction, that there exists $\ndv \in \mathsf{R}$ for which $u <^{AC} \ndv$. Then $\ndv \not\in T$ (else $\ndv \in \mathsf{R}^-$, a contradiction) hence $\ndv = t_{\mathsf{S}}$ for some trunk $\mathsf{S}$ in $\mathcal{T}_{\mathrm{I}}$. Since also $\mathsf{S} \subseteq \mathsf{R}$ (because $\ndv \in \mathsf{R}$) then $\mathsf{S} \subseteq \mathsf{R}^-$, and since $u <^{AC} t_{\mathsf{S}}$ then $\mathsf{R}^- \subseteq \mathsf{S}$, hence $\mathsf{S} = \mathsf{R}^-$. This contradicts the fact that $\mathsf{S}$ does not have a greatest node.
Next, consider the case where $\mathsf{R}^-$ does not have a greatest node. Then $T^{AC}$ contains the node $t_{\mathsf{R}^-}$, and $u := t_{\mathsf{R}^-}$ will be the greatest node of $\mathsf{R}$. To see this, first note, from the way that $<^{AC}$ is defined, that $u \in \mathsf{P}_i$ for each $i \in I$, hence $u \in \mathsf{R}$. Next, suppose again, for a contradiction, that there exists $\ndv \in \mathsf{R}$ for which $u <^{AC} \ndv$. It follows again that $\ndv \not\in T$, for if $\ndv \in T$ then, as above, $\ndv \in \mathsf{R}^-$ hence $\ndv <^{AC} t_{\mathsf{R}^-}$, which contradicts the fact that $u <^{AC} \ndv$. Hence, $\ndv = t_{\mathsf{S}}$ for some trunk $\mathsf{S}$ in $\mathcal{T}_{\mathrm{I}}$. Again, it follows from $\ndv \in \mathsf{R}$ that $\mathsf{S} \subseteq \mathsf{R}$, hence $\mathsf{S} \subseteq \mathsf{R}^-$, while from $t_{\mathsf{R}^-} = u <^{AC} \ndv = t_{\mathsf{S}}$ can be concluded that $\mathsf{R}^- \subsetneq \mathsf{S}$, a contradiction.
The proof of Claim 2 is similar.
%
\end{proof}
The trees $\mathfrak{T}^{AC}$ and $\mathfrak{T}^{BC}$ will be called, respectively, the \defstyle{antichain completion} and \defstyle{branching completion} of $\mathfrak{T}$. Using Theorem \ref{Thm:CharacterisationOfCompleteness}, it follows that $\mathfrak{T}^{AC}$ can be embedded in every antichain complete tree that extends $\mathfrak{T}$, and $\mathfrak{T}^{BC}$ can be embedded in every branching complete tree that extends $\mathfrak{T}$.
\subsection{Pathwise Dedekind completions and weakly branching completions}
The case of constructing a pathwise Dedekind completion of a tree poses a minor complication, in that the construction is not deterministic. In general, there need not be a unique minimal pathwise complete tree that extends a given tree. In terms of Theorem \ref{Thm:CharacterisationOfCompleteness}, to obtain a pathwise complete tree from a given tree it suffices to add either missing least nodes to the tree's branches, or missing greatest nodes to its stems, and there is some freedom in how one can go about doing this.
Instead, we will use Proposition \ref{Thm:CompletenessComponents} as the basis for our construction. The reason for performing the construction using $<$-components, rather than using branches as suggested by Theorem \ref{Thm:CharacterisationOfCompleteness}, is that, if one were to simply add nodes as infima to branches that are without infima, it could happen that multiple nodes get added to the same branch since different branches may have the same set of lower bounds.
The construction presented below will produce, from any tree $\mathfrak{T}$, a minimal pathwise complete tree $\mathfrak{T}^{PDC}$, without introducing any additional bran\-ch\-ing points. Thus, $\mathfrak{T}^{PDC}$ will be the most conservative pathwise Dedekind completion of $\mathfrak{T}$ in the sense that, amongst all pathwise Dedekind completions of $\mathfrak{T}$, $\mathfrak{T}^{PDC}$ will be the one that is furthest away from being branching complete.
Similar observations hold when producing weakly branching completions from a tree, where, again, a tree need not have a unique minimal weakly branching completion. Here we will again base the construction on Proposition \ref{Thm:CompletenessComponents}, rather than on Theorem \ref{Thm:CharacterisationOfCompleteness}, to obtain from a tree $\mathfrak{T}$ a minimal weakly branching complete tree $\mathfrak{T}^{WBC}$ that does not contain any new branching points and which, amongst all weakly branching complete trees that extend $\mathfrak{T}$, will be the one that is furthest away from being branching complete.
\smallskip
Let $\mathfrak{T} := \left(T;<\right)$ be any tree. Let $\mathcal{S}'$ be the set of all stems in $\mathfrak{T}$, and let
\begin{equation} \label{Eqn:ClassC}
\mathcal{C}(\mathcal{S}') := \bigcup_{\mathsf{S} \in \mathcal{S}'}\left\{ \schm : \schm \ \text{is a $<$-component without root in} \ \mathfrak{T}^{\geqslant \mathsf{S}} \right\}.
\end{equation}
For each $\schm \in \mathcal{C}(\mathcal{S}')$, let $t_{\schm}$ denote a new node that is not in $\mathfrak{T}$, and define $T^{PDC} := T \cup \left\{t_{\schm} : \schm \in \mathcal{C}(\mathcal{S}')\right\}$. Let $<^{PDC}$ be the following relation on $T^{PDC}$:
\begin{multline}
\left\{(x,y) \in T \times T: x < y\right\} \cup \left\{\left(t_{\schm},t_{\mathsf{D}}\right) : \schm, \mathsf{D} \in \mathcal{C}(\mathcal{S}') \ \text{with} \ \mathsf{D} \subsetneq \schm \right\} \cup \\
\bigcup_{\schm \in \mathcal{C}(\mathcal{S}')}\left\{\left(u,t_{\schm}\right) : u \ \text{is any lower bound of} \ \schm \right\} \cup \bigcup_{\schm \in \mathcal{C}(\mathcal{S}')}\left\{\left(t_{\schm},u\right) : u \in \schm \right\}. \label{Eqn:RelationPDC}
\end{multline}
Now, we define $\mathfrak{T}^{PDC} := \left(T^{PDC};<^{PDC}\right)$.
\medskip
The tree $\mathfrak{T}^{WBC}:= \left(T^{WBC};<^{WBC}\right)$ is defined similarly, as follows. Let $\mathcal{T}'$ be the set of all finitely generated trunks in $\mathfrak{T}$, and define $\mathcal{C}(\mathcal{T}')$ as in (\ref{Eqn:ClassC}) but using the set $\mathcal{T}'$ in place of $\mathcal{S}'$. For each $\schm \in \mathcal{C}(\mathcal{T}')$, let $t_{\schm}$ again denote a new node that is not in $\mathfrak{T}$, and again define $T^{WBC} := T \cup \left\{t_{\schm} : \schm \in \mathcal{C}(\mathcal{T}')\right\}$. Finally, we define the relation $<^{WBC}$ as in (\ref{Eqn:RelationPDC}), but again using the set $\mathcal{T}'$ instead of $\mathcal{S}'$.
\begin{proposition} \label{Thm:PathwiseDedekindCompletion}
Let $\mathfrak{T}$ be any tree.
\begin{enumerate}
\item
$\mathfrak{T}^{PDC}$ is a pathwise complete tree that contains $\mathfrak{T}$.
\item
$\mathfrak{T}^{WBC}$ is a weakly branching complete tree that contains $\mathfrak{T}$.
\end{enumerate}
\end{proposition}
\begin{proof}
1. The verification that $\mathfrak{T}^{PDC}$ is a tree is straightforward.
Let $\mathsf{R}$ be a stem in $\mathfrak{T}^{PDC}$ and let $\schm$ be a $<^{PDC}$-component in the forest $\left(\mathfrak{T}^{PDC}\right)^{\geqslant^{PDC}\, \mathsf{R}}$. By Proposition \ref{Thm:CompletenessComponents}, it suffices to show that $\schm$ has a root. Let $\schm^- := \schm \cap T$. Two cases will be considered.
Case 1. Suppose that $\schm^-$ has a root $u$. Then $u$ is also a root in $\schm$. To see this, it suffices to show that $u <^{PDC} t_{\mathsf{D}}$ for each node in $\schm$ of the form $t_{\mathsf{D}}$, since it is immediate that $u <^{PDC} \ndv$ for each $\ndv \in \schm^-$. Suppose, for a contradiction, that $u \not\smile^{PDC} t_{\mathsf{D}}$ or $t_{\mathsf{D}} <^{PDC} u$.
%
If $u \not\smile^{PDC} t_{\mathsf{D}}$ then $u \not<^{PDC} \mathsf{D}$ and $u \not\in \mathsf{D}$. It follows that $u \not\smile^{PDC} \ndv$ for each $\ndv \in \mathsf{D}$, which contradicts the fact that $\mathsf{D} \subseteq \schm^-$ while $u$ is the root of $\schm^-$.
%
On the other hand, if $t_{\mathsf{D}} <^{PDC} u$ then $u \in \mathsf{D}$. Since $u$ is the root of $\schm^-$ then $\schm^- \subseteq \mathsf{D}$, and since $t_{\mathsf{D}} \in \schm$ then $\mathsf{D} \subseteq \schm$ hence $\mathsf{D} \subseteq \schm^-$, so that $\mathsf{D} = \schm^-$. Hence $t_{\mathsf{D}} = t_{\schm^-}$, but this contradicts the assumption that $\schm^-$ has a root.
Case 2. Suppose that $\schm^-$ does not have a root. Then the node $t_{\schm^-}$ is the root of $\schm$. That $t_{\schm^-} \in \schm$ follows from $\mathsf{R} \leqslant^{PDC} \ t_{\schm^-} <^{PDC} \schm^-$, along with the fact that $\schm \supseteq \schm^-$ and $\schm$ is a $<^{PDC}$-component in $\left(\mathfrak{T}^{PDC}\right)^{\geqslant^{PDC} \, \mathsf{R}}$. It now suffices to show that $t_{\schm^-} <^{PDC} t_{\mathsf{D}}$ for each $t_{\mathsf{D}} \in \schm$ with $t_{\mathsf{D}} \neq t_{\schm^-}$. Indeed, if $t_{\mathsf{D}} \in \schm$ then $\mathsf{D} \subseteq \schm$ hence $\mathsf{D} \subseteq \schm^-$, and since, in addition, $t_{\mathsf{D}} \neq t_{\schm^-}$, then $\mathsf{D} \subsetneq \schm^-$ from which $t_{\schm^-} <^{PDC} t_{\mathsf{D}}$.
2. The proof is identical to that of Claim 1, except for using finitely generated trunks, rather than stems.
\end{proof}
Using Proposition \ref{Thm:CompletenessComponents}, it follows that $\mathfrak{T}^{PDC}$ is minimal in the class of pathwise complete trees that extend $\mathfrak{T}$, and $\mathfrak{T}^{PDC}$ can be embedded in every pathwise complete tree that extends $\mathfrak{T}$ and has the same branching points as $\mathfrak{T}$. Similarly, $\mathfrak{T}^{WBC}$ is minimal in the class of weakly branching complete trees that extend $\mathfrak{T}$, and $\mathfrak{T}^{WBC}$ can be embedded in every weakly branching complete tree that extends $\mathfrak{T}$ and has the same branching points as $\mathfrak{T}$.
\smallskip
Lastly, we raise the following question, which we leave open: for which families $\mathcal{F}$ of sets of nodes in a tree, does the generic notion of $\mathcal{F}$-completeness give rise to a generic construction of an $\mathcal{F}$-completion of the tree that can be obtained in a way similar to the constructions presented here?
\subsection{$\alpha$-fillings}
The Dedekind completion that was proposed above need not preserve some natural properties, such as denseness. For example, consider a tree $\mathfrak{T}$ that consists of a copy of the rationals $\eta$, on top of which two incomparable copies of $1 + \eta$ are appended. $\mathfrak{T}$ has two paths, each of order type $\eta + 1 + \eta \cong \eta$. Both of these paths become copies of the linear order $\lambda + 2 + \lambda$ in the Dedekind completion of $\mathfrak{T}$. The paths in $\mathfrak{T}$ are thus dense linear orders whereas the paths in the Dedekind completion of $\mathfrak{T}$ are not dense.
The question therefore arises of how to modify the construction so as to produce a dense Dedekind completion. Here we propose an alternative general construction that, in particular, achieves this. The construction is non-deterministic and is defined as follows.
Given a condensed tree $\mathfrak{S} := \left(S;<_{\mathfrak{S}}\right)$, a set of linear orders $\mathcal{A} := \left\{\mathfrak{A}_i\right\}_{i \in I}$, and a function $f : S \rightarrow I$, the \defstyle{$f$-product} $\mathfrak{S} \times_f \mathcal{A} := \left( \lvert \mathfrak{S} \times_f \mathcal{A} \rvert; < \right)$ is the structure that is defined as follows:
\begin{itemize}
\item
$\displaystyle \lvert \mathfrak{S} \times_f \mathcal{A} \rvert := \bigcup_{t \in S} \left(\left\{t\right\} \times \lvert\mathfrak{A}_{f(t)}\rvert\right)$;
\item
for $\left(t_1,x_1\right),\left(t_2,x_2\right) \in \lvert \mathfrak{S} \times_f \mathcal{A} \rvert$,
$$\left(t_1,x_1\right) < \left(t_2,x_2\right) \ \Longleftrightarrow \ t_1 <_{\mathfrak{S}} t_2 \ \text{or} \ \left(t_1 = t_2 \ \text{and} \ x_1 <_{\mathfrak{A}_{f\left(t_1\right)}} x_2\right).$$
\end{itemize}
Informally, the structure $\mathfrak{S} \times_f \mathcal{A}$ is obtained from $\mathfrak{S}$ by replacing each node $t$ in $\mathfrak{S}$ with the linear order $\mathfrak{A}_{f\left(t\right)}$. It is easily seen that $\mathfrak{S} \times_f \mathcal{A}$ is a tree, that the condensation (cf. \cite{GorankoKellermanZanardo2021a})
of $\mathfrak{S} \times_f \mathcal{A}$ is isomorphic to $\mathfrak{S}$, and that the distinct maximal bridges in $\mathfrak{S} \times_f \mathcal{A}$ are, up to isomorphism, precisely the linear orders $\mathfrak{A}_i$ with $i \in f\left(S\right)$. The tree $\mathfrak{S} \times_f \mathcal{A}$ is, strictly speaking, not an extension of $\mathfrak{S}$ itself, because the elements of $\mathfrak{S} \times_f \mathcal{A}$ are ordered pairs while the elements of $\mathfrak{S}$ are not. However, $\mathfrak{S} \times_f \mathcal{A}$ \textit{is} isomorphic to a tree $\mathfrak{S}'$ that extends $\mathfrak{S}$, and $\mathfrak{S} \times_f \mathcal{A}$ will, for the sake of simplicity, be identified with this tree $\mathfrak{S}'$.
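As a small illustration of this definition, the following Python sketch (assuming finite data: a condensed tree given by its strict order \texttt{tree\_lt}, linear orders given as increasingly sorted lists, and an index function \texttt{f}) builds the node set and the strict order of the $f$-product literally from the two clauses above.
\begin{verbatim}
def f_product(tree_nodes, tree_lt, orders, f):
    """f-product of a condensed tree with a family of linear orders
    (illustrative sketch for finite data only)."""
    nodes = [(t, x) for t in tree_nodes for x in orders[f[t]]]

    def lt(p, q):
        (t1, x1), (t2, x2) = p, q
        if tree_lt(t1, t2):          # first clause: t1 < t2 in the tree
            return True
        if t1 == t2:                 # second clause: compare inside A_{f(t1)}
            A = orders[f[t1]]
            return A.index(x1) < A.index(x2)
        return False

    return nodes, lt
\end{verbatim}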
Now, let $\alpha$ be any dense linear order without endpoints and let $\mathfrak{T} = (T;<)$ be a condensed tree. Take $\mathfrak{A}_0 := \alpha$ and $\mathfrak{A}_1 := \alpha+1$. Define an \defstyle{$\alpha$-filling} of $\ensuremath{\mathfrak{T}}$ to be any tree of the form
\begin{equation} \label{Eqn:AlphaFilling}
\mathfrak{T} \times_f \{\mathfrak{A}_0,\mathfrak{A}_1\}
\end{equation}
where $f : T \rightarrow \{0,1\}$ is any function. Intuitively, an $\alpha$-filling of $\mathfrak{T}$ is obtained by replacing each node in $\mathfrak{T}$ with either $\alpha$ or $\alpha+1$. Thus, the tree $\mathfrak{T}$ provides the branching structure, and $\alpha$ provides the filling pattern inside maximal bridges. A tree that is of the form (\ref{Eqn:AlphaFilling}) will also be said to be a \defstyle{locally $\alpha$-tree}. If $f(x) = 0$ for each leaf $x$ in $\mathfrak{T}$, and $f(x) = 1$ otherwise, then $\mathfrak{T} \times_f \{\mathfrak{A}_0,\mathfrak{A}_1\}$ will be called the \defstyle{full $\alpha$-filling} of $\mathfrak{T}$.
In the example of the tree $\mathfrak{T}$ described above, which consists of a copy of $\eta$ with two disjoint copies of $1+\eta$ appended on top of it, producing the full $\eta$-filling from the condensation of $\mathfrak{T}$ has the effect of collapsing the two least elements of the copies of $1+\eta$ in $\mathfrak{T}$ into a single branching node. The resulting $\eta$-filling will be both dense and branching complete.
A linear order is called \defstyle{continuous} when it is dense, without endpoints, and complete. A linear order $\mathfrak{A} := \left(A;<\right)$ is called \defstyle{separable} if there is an at most countable subset $X$ of $A$ that is dense in $\mathfrak{A}$ (i.e.,~whenever $a,b \in A$ with $a < b$, there exists $x \in X$ such that $a < x < b$). Recall the following two facts:
\begin{itemize}
\item
Cantor's Theorem (see e.g.~\cite[Theorem 2.8]{Rosenstein}): every countable dense linear order without endpoints is isomorphic to $\eta$; and
\item
a linear order is isomorphic to $\lambda$ if and only if it is continuous and separable (see e.g.~\cite[Theorem 2.30]{Rosenstein}).
\end{itemize}
The following two special cases of $\alpha$-fillings are now singled out:
\begin{itemize}
\item
When $\alpha = \eta$. Every full $\eta$-filling of a condensed tree $\mathfrak{T}$ has the property that each of its paths is a dense linear order without endpoints. Moreover, if each path in $\mathfrak{T}$ is at most countable then, by Cantor's Theorem, each path in the full $\eta$-filling of $\mathfrak{T}$ will be isomorphic to $\eta$.
\item
When $\alpha = \lambda$. Given a condensed pathwise complete tree $\mathfrak{T}$, each path in the full $\lambda$-filling of $\mathfrak{T}$ will be a continuous linear order. Moreover, if each path in $\mathfrak{T}$ is at most countable then, by the above characterisation of $\lambda$, it follows that each path in the full $\lambda$-filling of $\mathfrak{T}$ will be isomorphic to $\lambda$. Finally, if $\mathfrak{T}$ is branching complete then, by Theorem \ref{Thm:DedekindCompleteness}, its full $\lambda$-filling will be complete.
\end{itemize}
\section{Concluding remarks}
\label{sec:concluding}
This work continues the study of the general theory of trees initiated in
\cite{GorankoKellermanZanardo2021a}.
Further planned work on this project will include:
\begin{itemize}
\item the study of general operations on trees, such as sums and products, thus extending classical studies of ordinal arithmetic (Cantor, Sierpinski, and others) and, more generally, operations on linear orderings
(cf. \cite{Rosenstein});
\item the study of classes of trees generated by applying such operations, and their structural and logical theories.
\end{itemize}
The ultimate goal of this project is a systematic development of a structural theory of trees. One intended application of this study is to obtain structural characterisations of elementary equivalence and other logical equivalences of trees and to use them to obtain new axiomatisations and decidability or undecidability results for logical theories of important classes of trees.
\section*{Acknowledgements}
We thank the referee for the careful reading and helpful comments and suggestions on the paper.
\section{Introduction}
The cores of coronal mass ejection (CME) clouds occasionally show very bright, concentrated patches in white-light coronagraph images.
They are interpreted as cool plasma material from a prominence
that was embedded inside the streamer environment of the CME before the eruption. During the eruption process, the prominence is
then expelled along with the surrounding streamer plasma.
Poland and Munro (1976) report one such observation
made on 21 August 1973, 15:11 UT with the Skylab white-light coronagraph and its HeII 30.4 nm spectroheliograph.
About 18 min before, an H$\alpha$ image taken at the Sacramento
Peak Observatory had shown bright patches extending
out to 1.42 R$_{\odot}$ but fading in intensity with time.
Even though the coronagraph field-of-view was limited to above 1.5 R$_{\odot}$, it was concluded that H$\alpha$ radiation contributed to the white-light image because its signal was less polarised in some bright patches than in the surrounding region.
H$\alpha$ radiation is the result of the electronic $n$=3 $\rightarrow$ $n$=2
transition of the hydrogen atom. In equilibrium at an electron temperature
below 50,000~K, the $n$=3 level is populated much more by absorption of the
ambient Ly$\beta$ radiation than by absorption of photospheric H$\alpha$.
This causes a substantial decrease in polarisation of the emitted H$\alpha$
radiation below the theoretical maximum value of 30\% for pure resonant
scattering (Poland and Munro 1976). Besides, the Hanle effect caused by the
coronal magnetic field (Sahal-Brechot et al. 1977, Heinzel et al. 1996) and
collisional depolarisation (Bommier et al. 1986) reduce the amount of
polarisation even further. As a result, the linear polarisation of H$\alpha$
radiation observed in prominences well above the limb ranges from a fraction of
a percent (Gandorfer, 2000; Wiehr and Bianda, 2003) to a few percent (Leroy et
al., 1984).
The white-light emission of the solar K-corona originates in Thomson-scattering of photospheric light by free electrons. Detailed
description of the Thomson-scattering theory can be found in various papers
(\eg, Minnaert 1930; van de Hulst 1950; Billings 1966). The anisotropy
of the incident light causes the observed scattered radiation to exhibit
a polarisation parallel to the visible limb. The degree of polarisation
depends on the distance from the solar surface and on the scattering angle
to the observer.
Indeed, it has been proposed to use the observed degree of polarisation
to estimate the distance of the coronal scattering volume off the plane
of the sky (POS; \eg, Moran and Davila 2004, Dere et al. 2005,
Vourlidas and Howard 2006). Hence, a reduction of polarisation of a
white-light signal from the corona can in principle also be explained
by a geometric effect, and Poland and Munro's
conclusion only holds if it is assumed that the H$\alpha$ material is
well embedded in the CME cloud of enhanced electron density and is
located close to the POS of the observer.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth,type=eps,ext=.eps,read=.eps]{16295fg1}
\caption{Combined images of EUVI and COR1 on board STEREO A and B. Both A and B images
are synchronous with respect to the time on the Sun.}
\label{plotsmin11aug}
\end{figure*}
In this letter, we report the observation of a similar event with
the COR1 coronagraph on board the two STEREO spacecraft A and B. In addition
to the polarisation measurements from which we determine the azimuthal
barycentre position of the CME plasma, a stereoscopic triangulation of
the low-polarisation patch proves that the prominence material is well
embedded inside the CME.
\section{Observations}
EUVI and COR1 are the multi-wavelength EUV telescope and the innermost
coronagraph of the Sun Earth Connection Coronal and Heliospheric
Investigation (SECCHI) instrument suite (Howard et al. 2008) aboard
the twin Solar Terrestrial Relations Observatory spacecraft (STEREO,
see Kaiser et al. 2008). Each of the COR1/STEREO telescopes has a field of view from 1.4 to 4 R$_\odot$
and observes in a white-light waveband 22.5~nm wide centred at the H$\alpha$
line at 656~nm (Thompson and Reginald 2008). The COR1
coronagraphs take polarised images at three polarisation angles:
0, 120, and 240 degrees. These primary data allow us to derive the Stokes $I$, $Q$, and $U$
components and finally the total ($tB$), polarised ($pB$),
and unpolarised brightness ($uB=tB-pB$) images.
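For orientation, the combination of the three polarised images into $tB$, $pB$, and $uB$ can be sketched as follows; this is a minimal illustration assuming ideal polarisers with the standard single-polariser response $S(\theta)=\tfrac{1}{2}\left(I+Q\cos 2\theta+U\sin 2\theta\right)$, and it ignores the instrument-specific calibration and instrumental polarisation corrections applied to the actual COR1 data.
\begin{verbatim}
import numpy as np

def cor1_brightness(i0, i120, i240):
    """Combine polarised images taken at 0, 120 and 240 degrees into
    total (tB), polarised (pB) and unpolarised (uB) brightness.
    Idealised sketch; real COR1 calibration is not included."""
    I = 2.0 / 3.0 * (i0 + i120 + i240)        # Stokes I  -> tB
    Q = 2.0 / 3.0 * (2.0 * i0 - i120 - i240)  # Stokes Q
    U = 2.0 / np.sqrt(3.0) * (i240 - i120)    # Stokes U
    pB = np.sqrt(Q**2 + U**2)                 # polarised brightness
    tB = I
    uB = tB - pB                              # unpolarised brightness
    return tB, pB, uB
\end{verbatim}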
The start of a prominence eruption was observed in EUVI HeII bandpass
images at 30.4 nm on 31 August 2007, at around 19:00 UT. Approximately eight hours
earlier, a dark filament had been detected about 100,000 km south of an active region
close to the west limb in the H$\alpha$ images of Kanzelh\"ohe Observatory; the filament had disappeared by the following day
(http://cesar.kso.ac.at/halpha2k/archive/2007).
At about 21:00 UT, the HeII prominence had risen to 1.5 R$_\odot$
and appeared co-spatial with the bright core of a structured CME detected in
COR1 images (Fig.~\ref{plotsmin11aug}; see, e.g., Cremades
and Bothmer 2004 for the definition of a structured CME).
Preliminary COR1 images of the event studied here revealed patches of
extremely low polarisation in the bright CME core. These patches
of low polarisation could faintly be detected even out to about
7 R$_\odot$ in COR2 at about 2:00 UT the next day. COR2 is the outer coronagraph
of STEREO/SECCHI, which covers distances from 3 to 15 R$_\odot$. The
outward velocity of these patches started from about 170 km/s in COR1 and
accelerated to 240 km/s in the COR2 field of view.
\section{Data analysis}
For a more quantitative analysis we removed the background
contribution by subtracting a minimum-intensity $pB$ and $tB$
image from each $pB$ and $tB$ image, respectively.
The minimum images were obtained over a time range of 12
hours, centred at the launch time of the CME. As a result,
we obtain the brightness of the CME alone.
In Fig.~\ref{plotpolratio} we display the resulting ratio $pB/uB$ of
polarised to unpolarised image intensities.
From both instruments we see bright patches of extremely low polarisation
(red, $pB/tB$ $\approx$ 0.1) at about 1.5 R$_\odot$ inside the strongly polarised CME cloud (grey, $pB/tB$ $\approx$ 0.5).
The degree of polarisation of Thomson-scattered light by coronal electrons is
a function of the scattering angle between the incident light direction and
the direction towards the observer (Billings, 1966). A scattering location
close to the POS has a 90 degree scattering angle and yields a
high polarisation parallel to the limb. A scattering location far off
the POS corresponds to nearly forward/backward scattering,
which is hardly polarised. This effect allows us to estimate an effective
scattering angle from the observed degree of polarisation for each pixel
that can be related to an effective distance of the scattering location
from the POS.
Moran and Davila (2004) and Dere, Wang, and Howard (2005)
introduced this method, called the polarisation ratio (PR) method,
to construct the azimuthal barycentre plane of a CME.
We used this technique to infer the propagation
direction of the CME discussed here (Mierla et al. 2009).
The method is applied here to polarisation data from COR1/STEREO A and B
taken at around 21:30 UT. At each pixel of the images, the ratio
$pB/uB$ is calculated and converted into the effective scattering distance
along the line-of-sight from the POS. Owing to the forward/backward
symmetry of Thomson scattering, the brightness ratio does not indicate
whether the scatterer is in front of or behind the POS. This ambiguity
can be resolved, however, by matching the scattering locations derived
from STEREO A and B observations.
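The geometry behind this conversion can be sketched in the simplest, point-source approximation (the actual method uses the extended-source scattering coefficients of Billings 1966): light Thomson-scattered through an angle $\chi$ acquires a degree of polarisation
\begin{equation*}
p(\chi)=\frac{\sin^2\chi}{1+\cos^2\chi}, \qquad \text{i.e.} \qquad \cos^2\chi=\frac{1-p}{1+p},
\end{equation*}
and a scatterer at heliocentric distance $d$ seen at projected distance $\rho$ from Sun centre satisfies $\rho=d\sin\chi$, with an offset $z=d\cos\chi$ from the POS along the LOS. The sign of $z$, i.e.\ in front of or behind the POS, is exactly the ambiguity mentioned above.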
\begin{figure}
\centering
\includegraphics[width=.48\textwidth,type=eps,ext=.eps,read=.eps]{16295fg2a}
\\[2mm]
\includegraphics[width=.48\textwidth,type=eps,ext=.eps,read=.eps]{16295fg2b}
\caption{Ratio $pB/uB$ of polarised to unpolarised light from
COR1 of STEREO spacecraft A (upper panel) and B (lower panel).
The ratio is colour-coded and the red patches mark values of low polarisation. The upper left insert is a zoom of this region, with the green line representing the projection of the 3D curve fitted to the patch locations obtained from stereoscopic triangulation.}
\label{plotpolratio}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.48\textwidth,type=eps,ext=.eps,read=.eps,bb=78 100
430 401, clip]{16295fg3}
\caption{Polarisation ratio reconstruction of the barycentre plane of the
CME on 31 August 2007, 21:30 UT (see section 3.1 for details).}
\label{plotreconstr}
\end{figure}
In Fig.~\ref{plotreconstr} we display the result of this analysis as viewed from a direction
along the northward normal of the STEREO mission plane. The green and blue dots
represent the scattering location derived from COR1/STEREO A and B pixels
of the CME cloud, respectively.
Obviously, they match well if
the barycentre of the CME is assumed to lie in front of the
POS of STEREO A but behind the POS of STEREO B.
The only exceptions are the ``horns'' that extend to the far right for
STEREO A and to the far left for STEREO B pixels. These ``horns'' result from the low-polarisation patches observed in each of the
COR1 images. If they were produced by Thomson scattering, the low-polarisation patch would have to be located at the tip of the ``horns'' in Fig.~\ref{plotreconstr}, i.e., far away from the centre of the CME. However, these positions from STEREO A and B are several solar radii apart and are therefore inconsistent.
An independent estimate of the location of the low-polarisation patch can be obtained by stereoscopy, provided we assume that the patches we see in both coronagraphs are the same object. Both in brightness (see Fig.~\ref{plotsmin11aug}) and in polarisation ratio
(see Fig.~\ref{plotpolratio}) the patches displayed a high
contrast and a sharp boundary with respect to the CME environment.
The COR1 instrument of STEREO B seems somewhat less sensitive because
not all features of the STEREO A patch can be matched to an equivalent
signal in the STEREO B image. Yet we could determine a loop-like 3D curve,
the projections of which trace out the patches in COR1/STEREO B and the
major structures in COR1/STEREO A. These projections are drawn in green
into the inserts of Fig.~\ref{plotpolratio}.
The 3D curve is again projected into Fig.~\ref{plotreconstr}, in red. The triangulation of the patches places the curve very close to the barycentre plane of the CME. Note that the cloud in Fig.~\ref{plotreconstr} does not show the full extent of the CME, but is only an approximation of its azimuthal barycentre plane. The PR method cannot resolve the azimuthal extent of a CME (see Mierla et al., 2011).
\section{Discussion}
Low polarisation in coronagraph images can have several explanations:
1) F-corona emission (\eg, Morgan and Habbal 2007),
2) Thomson scattering from enhanced plasma density far away from the POS
(\eg, Billings 1966), and
3) H$\alpha$ emission (\eg, Poland and Munro, 1976).
F-corona contributions can be ruled out because the F-corona forms a diffuse
background and does not vary rapidly in time. The subtraction of the
minimum background intensities from our images should have removed any
F-corona contribution.
The interpretation of the low-polarisation patches by Thomson
scattering can also be ruled out. We showed in the previous
section that this assumption leads to different locations of the
scatterer for STEREO A and B observations, which are both
inconsistent with the results from the stereoscopic triangulation.
We are therefore left with the last explanation. H$\alpha$ patches in
white-light coronagraph images have only
rarely been reported, even though bright amorphous structures are often seen in CME clouds. Coronal mass ejections often appear in coronagraph images
as three-part structures composed of a bright leading edge, a dark cavity, and
a bright core, which are associated with the compressed solar plasma ahead of the
ejecta, the erupting magnetic flux rope, and the cool and dense prominence
plasma, respectively (\eg, Cremades and Bothmer 2004). However, a
convincing identification of the white-light core of the CME with the cool
prominence material is relatively rare.
Even if recent studies have demonstrated a strong connection between
prominence eruptions and CMEs, the correlation is not always one-to-one.
It is not quite clear how much of the emission from the bright core
structures is produced by Thomson scattering from their enhanced plasma
density and how much stems from H$\alpha$ radiation. Because we observed
the patches at an outward velocity above 100 km/s, their H$\alpha$ resonance
is well outside of the H$\alpha$ absorption line from the solar surface
spectrum. Even though the depth of the absorption is 1/7 of the surrounding
continuum, the H$\alpha$ radiation emitted from the rising patch
is only slightly enhanced by this Doppler brightening effect because of
the complicated balance of the electronic level population of hydrogen
atoms exposed to the solar spectrum (\eg, Hyder and Lites 1970).
The observed brightness $B_\mathrm{patch}$ of the H$\alpha$ patch in
the image is in fact a line-of-sight (LOS) superposition of the
radiation from the three different sources: the H$\alpha$ radiation,
$B_{\mathrm{H}\alpha}$, and the Thomson scatter, $B_\mathrm{Th'}$,
from inside the H$\alpha$ cloud and the ambient Thomson scatter,
$B_\mathrm{Th}$, along the remaining part of the LOS through the patch.
The latter contribution can roughly be assumed equal to the Thomson
scatter measured close to, but outside the H$\alpha$ patch.
Both the total and the polarised intensities add (assuming that the
polarisation direction in all three components is the same),
\begin{gather}
tB_\mathrm{patch}=tB_{\mathrm{H}\alpha}+tB_\mathrm{Th'}+tB_\mathrm{Th}
\label{tBpatch}\\
pB_\mathrm{patch}=pB_{\mathrm{H}\alpha}+pB_\mathrm{Th'}+pB_\mathrm{Th}.
\label{pBpatch}
\end{gather}
We find that the total brightness of the H$\alpha$ patch (around 1--5$\cdot10^{-8}$ MSB, where MSB is mean solar brightness) is about
10 times as high as a typical value in the Thomson-scattering area (around 1--5$\cdot10^{-9}$ MSB), i.e.,
$tB_\mathrm{patch} \simeq 10\;tB_{Th}$.
According to the Thomson-scattering theory (Billings 1966), the calibrated value of $tB_\mathrm{Th}$ corresponds to an electron column density along the LOS of $1.8\times10^{17}$ cm$^{-2}$.
For a depth of the CME cloud of about 1 R$_\odot$, this yields
an excess density of the CME over the streamer background of
$2.6\times10^{6}$ cm$^{-3}$.
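For orientation, the quoted excess density is simply this column density divided by the assumed depth of the cloud, with $1\,\mathrm{R}_\odot\approx7.0\times10^{10}$ cm:
\begin{equation*}
n_e \approx \frac{1.8\times10^{17}\ \mathrm{cm^{-2}}}{7.0\times10^{10}\ \mathrm{cm}} \approx 2.6\times10^{6}\ \mathrm{cm^{-3}}.
\end{equation*}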
For the polarisation ratio we find (see Fig.~\ref{plotpolratio})
\begin{equation}
r=\frac{pB}{tB}=\frac{\frac{pB}{uB}}{1+\frac{pB}{uB}}\simeq
\begin{cases}
0.5 & \text{for}\; r_\mathrm{Th}\\
0.1 & \text{for}\; r_\mathrm{patch},
\end{cases}
\nonumber
\end{equation}
where $r=pB/tB$ is the polarisation ratio of the respective component.
For Thomson scatter, $r_\mathrm{Th}$ should depend only on the distance from
the solar surface, hence we can assume $r_\mathrm{Th} \simeq r_\mathrm{Th'}$.
Obviously, $tB_{\mathrm{H}\alpha}$ must contribute significantly to (\ref{tBpatch}),
compared with the Thomson-scatter contributions, because $r_\mathrm{patch}$
differs considerably from $r_\mathrm{Th}$.
Inserting the observed total brightness and polarisation ratios into
(\ref{tBpatch}) and (\ref{pBpatch})
and eliminating $tB_\mathrm{Th}$, we obtain the
relation
\begin{equation}
\frac{tB_{\mathrm{H}\alpha}}{tB_\mathrm{Th'}}=
\frac{8}{1-18\,r_{\mathrm{H}\alpha}}.
\nonumber
\end{equation}
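For completeness, the elimination runs as follows. With the rounded values $r_\mathrm{Th}=r_\mathrm{Th'}\simeq0.5$, $r_\mathrm{patch}\simeq0.1$, and $tB_\mathrm{patch}\simeq10\,tB_\mathrm{Th}$, Eq.~(\ref{tBpatch}) gives $tB_\mathrm{Th}=(tB_{\mathrm{H}\alpha}+tB_\mathrm{Th'})/9$, while Eq.~(\ref{pBpatch}) becomes $tB_\mathrm{Th}=r_{\mathrm{H}\alpha}\,tB_{\mathrm{H}\alpha}+0.5\,tB_\mathrm{Th'}+0.5\,tB_\mathrm{Th}$, i.e.\ $0.5\,tB_\mathrm{Th}=r_{\mathrm{H}\alpha}\,tB_{\mathrm{H}\alpha}+0.5\,tB_\mathrm{Th'}$. Substituting the first relation into the second yields
\begin{equation*}
tB_{\mathrm{H}\alpha}\left(\tfrac{1}{18}-r_{\mathrm{H}\alpha}\right)=\tfrac{8}{18}\,tB_\mathrm{Th'},
\end{equation*}
which is the relation above.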
Judging from the brightness and polarisation ratios observed,
$r_{\mathrm{H}\alpha}$ cannot be higher than 1/18. This
low value of the intrinsic H$\alpha$ polarisation ratio agrees
with the low values obtained for chromospheric measurements
(Gandorfer, 2000; Wiehr and Bianda, 2003).
Moreover, the ratio $tB_{\mathrm{H}\alpha}/tB_{Th'}$
cannot be lower than 8. Consequently a large part of the radiation from
the core patch (at least 88\%) must be H$\alpha$ emission.
Jej\v{c}i\v{c} and Heinzel (2009) have calculated
the ratio $tB_{\mathrm{H}\alpha}/tB_{Th'}$
for various temperatures and densities
(note that they assume a different white-light
bandwidth 10 nm instead of 22.5 nm and a dilution factor $W$=0.416
instead of 0.21 appropriate for 1.5 R$_\odot$. This causes our white-light intensities to be 1.3 times brighter than the model intensities
assumed in their paper. We neglect this factor in view of the approximate
nature of our estimate).
For the comparatively low densities we expect in the corona,
they derive a linear relation,
$tB_{\mathrm{H}\alpha}/tB_\mathrm{Th'}=n_e/10^8 \mathrm{cm}^{-3}$,
which only weakly depends on temperature, provided it is below
15000~K. If these conditions hold in our case, the density in the
H$\alpha$ patch would exceed $8\times10^{8}$ cm$^{-3}$ and would hence be
nearly three orders of magnitude above what we estimated for the CME
cloud outside of the patch.
The brightness from the core patch in the CME is dominated
by H$\alpha$ radiation. If, conversely, we had assumed
$tB_\mathrm{patch}$ to be entirely caused by Thomson scatter, we
would have obtained a gross overestimate of the density in the
CME core. In view of this result, some CME mass estimates in
previous studies, where the contribution of the core brightness
was not negligible, may have to be revised.
\section{Conclusions}
We showed that the white-light core of the 31 August 2007 CME is clearly identified with the eruptive prominence observed in the EUVI 304 images. To our knowledge, we demonstrate for the first time that this core material is located close to the centre of the CME cloud. Moreover, we showed that the major part of the CME core emission, more than 85\% in our case, is H$\alpha$ radiation and only a small fraction is Thomson-scattered light. We made a rough estimate of the electron density, showing that the density in the H$\alpha$ patch exceeds by nearly three orders of magnitude what we estimated for the CME cloud outside of the patch.
\begin{acknowledgements}
MM would like to thank T. Moran and A. Vourlidas for constructive discussions on PR method and electron densities.
The authors acknowledge the use of SECCHI data.
The contributions of IC and BI benefited from the support of the
German Space Agency DLR and the German ministry of economy and
technology under contract 50 OC 0904.
BI thanks A. Gandorfer for enlightening discussions on the nature
of the H$\alpha$ radiation.
\end{acknowledgements}
\section{Introduction}
Recently the LEPS and DIANA collaborations
\cite{Nakano:2003,Barmin:2003} reported the observation of a very
narrow peak in the $K^+n$ and $K^0p$ invariant mass distributions,
whose existence has since been confirmed by several experimental groups
in various reaction channels \cite{2prim}. These experimental
results were motivated by the pioneering paper on the chiral soliton
model \cite{Diakonov:1997}. The reported mass determinations for
the $\Theta$ are very consistent, falling in the range
1540$\pm$10 MeV, with the width smaller than the experimental
resolution of $20$ MeV for the photon and neutrino induced
reactions and of 9 MeV for the ITEP $K^+ \mathrm{Xe} \to K^0 p
\mathrm{Xe}'$ experiment.
From the soliton point of view the $\Theta$ is nothing exotic
compared with other baryons -- it is just a member of
${\bf\overline{10}}_F$ multiplet with $S=+1$. However, in the
sense of the quark model the $\Theta^+(1540)$ baryon, with positive
strangeness, is manifestly exotic -- its minimal
configuration cannot be built from three quarks. The positive
strangeness requires an $\bar s$, while four light quarks $qqqq$ (where $q$ refers to
the lightest quarks $u,d$) are required for the net baryon
number, thus making a pentaquark $uudd\bar s$ state the minimal
``valence'' configuration. Later the NA49 collaboration at CERN SPS
\cite{NA49} announced evidence for an additional narrow, manifestly exotic
$dsds\bar u$ resonance with $I=3/2$, a mass
$1.862\pm 0.002$ GeV and a width below the detector resolution
of about 18 MeV\footnote{NA49 also reports evidence for a
$\Xi^0$(1860) decaying into $\Xi(1320)\pi$.} and H1 collaboration
at HERA \cite{H1} found a narrow resonance
in $D^{*-}p$ and $D^{*+} \bar p$ invariant mass combinations
at $3.099 \pm 0.033_{\text{stat}} \pm 0.005_{\text{syst}}$ GeV and
a measured Gaussian width of $12 \pm 3_{\text{stat}}$ MeV,
compatible with the experimental resolution. The latter resonance
is interpreted as an anti-charmed baryon with a minimal
constituent quark composition of $uudd\bar c$, together with its
charge conjugate. The discoveries of the first manifestly exotic
hadrons mark the beginning of a new and rich spectroscopy in QCD
and provide an opportunity to refine our quantitative
understanding of nonperturbative QCD at low energy.
The $\Theta$-hyperon has hypercharge $Y=2$ and third component of
isospin $I_3=0$. The apparent absence of the $I_{3}=+1$,
$\Theta^{++}$ in $K^{+}p$ argues against $I=1$; therefore the $\Theta$ is
usually assumed to be an isosinglet. The other
quantum numbers have not been established yet.
As to the theoretical predictions, we are faced with a somewhat
ambiguous situation, in which exotic baryons may have been
discovered, but there are important controversies with theoretical
predictions for the masses of pentaquark states. The experimental
results triggered a vigorous theoretical activity and renewed
the urgency of understanding how baryon properties
are obtained from QCD.
All attempts at theoretical estimates of the pentaquark
masses can be subdivided into the following four categories: (i)
dynamical calculations using the sum rules or lattice QCD
\cite{sumrules,Fodor},
(ii)
phenomenological analyses of the hyperfine splitting in the quark
model \cite{stancu,Karliner}, (iii) phenomenological analyses of
the $SU(3)_F$ mass relations, and (iv) dynamical calculations
using the chiral $SU(3)$ quark model \cite{chiral_quark_model}.
The QCD sum rules predict a negative parity $\Theta^+$ of mass
$\simeq 1.5$ GeV, while no positive parity state was
found~\cite{sumrules}. The lattice QCD study also predicts that
the parity of the lowest $\Theta$ hyperon is most likely negative
\cite{Fodor}.
The naive quark models, in which all constituents are in a
relative $S$-wave, naturally predict the ground-state energy of a
$J^P=\frac{1}{2}^-$ pentaquark to be lower than that of a
$J^P=\frac{1}{2}^+$ one. However, based on arguments involving
both the Goldstone boson exchange between constituent quarks and
the color-magnetic exchange, it has been argued that the increase of
hyperfine energy in going from negative- to positive-parity states
can be large enough to compensate for the orbital excitation energy of
$\sim 200$ MeV. Nevertheless, existing dynamical calculations of
pentaquark masses using the chiral $SU(3)$ quark model (see, e.g.,
\cite{chiral_quark_model}) are subject to significant
uncertainties and cannot be considered conclusive.
Pentaquark baryons are unexpectedly light. Indeed, a naive quark
model with quark mass $\sim$ 350 MeV predicts the $\Theta^+$ at about
\(350\times5=1750\) MeV plus $\sim$ 150 MeV for strangeness plus
$\sim$ 200 MeV for the $P$-wave excitation, i.e., at roughly 2100 MeV, well above the observed 1540 MeV. A natural remedy would
be to decrease the number of constituents. This leads one to
consider dynamical clustering into subsystems of diquarks like
$[ud]^2\bar s$ \cite{Jaffe:2003} and/or triquarks like
$[ud][ud\bar s]$ \cite{Karliner} which amplify the attractive
color-magnetic forces. In particular, in \cite{Jaffe:2003} it has
been proposed that the systematics of exotic baryons can be
explained by diquark correlations.
The constituent quark model has not yet been derived from QCD.
Therefore it is tempting to consider the Effective Hamiltonian
(EH) approach in QCD (see, e.g., \cite{lisbon}), which, on the one hand,
can be derived from QCD and, on the other hand, leads to
results for the $\bar q q$ mesons and $3q$ baryons that are
equivalent to the quark-model ones, with some important
modifications. The EH approach contains a minimal number of
input parameters: the current (or pole) quark masses, the string
tension $\sigma$, and the strong coupling constant $\alpha_s$; it
does not contain fitting parameters such as, e.g., an overall
subtraction constant in the Hamiltonian. It is therefore useful and
attractive to consider extending this approach to include
diquark degrees of freedom with appropriate interactions. A
preview of this program was given in \cite{NTS03}. It is based on the
assumption that the chiral and short-range gluon-exchange forces
are responsible for the formation of $ud$ diquarks in the $\Theta$,
while the strings are mainly responsible for binding the constituents
in the $\Theta$. In this paper we review and extend the application of the
EH approach to the Jaffe--Wilczek model of pentaquarks.
In this model, inside the $\Theta(1540)$ and other $q^{4}\bar q$
baryons the four quarks are bound into two scalar, isospin-singlet
diquarks. The two diquarks must couple to ${\bf 3}_c$ in order to
combine with the antiquark into a color-singlet hadron. In the quark model the five quarks are connected by
seven strings. In the diquark approximation the short legs of this
configuration shrink to points and the five-quark system effectively
reduces to the three-body one, studied within the EH approach in
\cite{baryons,trusov}. In total there are six flavor symmetric
diquark pairs $[ud]^2$, $[ud][ds]_+$, $[ds]^2$, $[ds][su]_+$,
$[su]^2$, and $[su][ud]_+$ combining with the remaining antiquark
which give 18 pentaquark states in ${\bf 8}_F$ plus
${\bf\overline{10}}_F$. All these states are degenerate in the
SU(3)$_F$ limit.
\section{The EH approach and the results}
The EH for the three constituents has the form
\begin{equation}
\label{EH} H=\sum\limits_{i=1}^3\left(\frac{m_i^{2}}{2\mu_i}+
\frac{\mu_i}{2}\right)+H_0+V,
\end{equation}
where $H_0$ is the kinetic energy operator, $V$ is the sum of the
perturbative one-gluon exchange potentials and the string
potential $V_{\rm{string}}$.
The dynamical masses $\mu_i$ (analogues of the constituent ones)
are expressed in terms of the current quark masses $m_i$ from the
condition of the minimum of the hadron mass $M_H^{(0)}$ as
function of $\mu_i$ \footnote{~Technically, this is done using the
auxiliary field approach to get rid of the square root term in the
Lagrangian \cite{polyakov,brin77}. Applied to the QCD Lagrangian,
this technique yields the EH for hadrons (mesons, baryons,
pentaquarks) depending on auxiliary fields $\mu_i$. In practice,
these fields are finally treated as $c$-numbers determined from
(\ref{minimum_condition}).}:
\begin{equation} \label{minimum_condition}
\frac{\partial M_H^{(0)}(m_i,\mu_i)}{\partial \mu_i}=0, ~~~
M_H^{(0)}=\sum\limits_{i=1}^3\left(\frac{m_i^{2}}{2\mu_i}+
\frac{\mu_i}{2}\right)+E_0(\mu_i), \end{equation} $E_0(\mu_i)$
being the eigenvalue of the operator $H_0+V$. Quarks acquire
constituent masses $\mu_i\sim\sqrt{\sigma}$ due to the string
interaction in (\ref{EH}). As of today the EH in the form of
(\ref{EH}) does not include chiral symmetry breaking effects. A
possible interplay with these effects should be carefully
clarified in the future.
The physical mass $M_H$ of a hadron is
\begin{equation}\label{self_energy}
M_H=M_H^{(0)}+\sum_i C_i.
\end{equation}
The (negative) constants $C_i$ have the meaning of the constituent
self energies and are explicitly expressed in terms of string
tension $\sigma$ \cite{simonov_self_energy}:
\begin{equation}\label{c_i}
C_i=-\frac{2\sigma}{\pi\mu_i}\eta_i,\end{equation} where
\begin{equation} \label{eta_i} \eta_q=1,~~ \eta_s=0.88, ~~\eta_c=0.234,~~\eta_b=0.052. \end{equation} In Eq.
(\ref{eta_i}) $\eta_s$, $\eta_c$, and $\eta_b$ are the correction
factors due to nonvanishing current masses of the strange, charm
and bottom quarks, respectively. The self-energy corrections are
due to constituent spin interaction with the vacuum background
fields and equal zero for any scalar constituent.
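As an illustration of the size of these corrections, using the string tension $\sigma=0.15$~GeV$^2$ adopted below and an assumed, merely representative dynamical mass $\mu_q\approx0.4$~GeV (not a value taken from Table 1),
\begin{equation*}
C_q=-\frac{2\sigma}{\pi\mu_q}\approx-\frac{2\times0.15}{\pi\times0.4}~\mathrm{GeV}\approx-0.24~\mathrm{GeV},
\end{equation*}
so each light non-scalar constituent lowers the hadron mass by roughly 240 MeV.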
The accuracy of the EH method for three-quark systems is $\sim
100$ MeV or better \cite{baryons,trusov}. One can expect the same
accuracy for the diquark--diquark--(anti)quark system.
Consider a pentaquark consisting of two identical diquarks with
current mass $m_{[ud]}$ and an antiquark with current mass $m_{\bar
q}$ ($q=d,s,c$). In the hyperspherical formalism the wave function
$\psi({\boldsymbol\rho},{\boldsymbol\lambda})$ is expressed in terms of the Jacobi
coordinates $\boldsymbol \rho$ and $\boldsymbol \lambda$ and can be written in
symbolic shorthand as
\begin{equation}\psi(\boldsymbol{\rho},\boldsymbol{\lambda})=\sum\limits_K\psi_K(R)Y_{[K]}(\Omega),
\end{equation}where $Y_{[K]}$ are eigenfunctions (the
hyperspherical harmonics) of the angular momentum operator $\hat
K(\Omega)$ on the 6-dimensional sphere:
$\hat{K}^2(\Omega)Y_{[K]}=-K(K+4)Y_{[K]}$, with $K$
being the
grand orbital momentum. For identical diquarks, like $[ud]^{2}$,
the lightest state must have a wave function antisymmetric under
diquark space exchange. There are two possible pentaquark wave
functions antisymmetric under diquark exchange, the first one
(with lower energy) corresponding to the total orbital momentum
$L=1$, and the second one (with higher energy) corresponding to
$L=0$. For a state with $L=1,~l_{\rho}=1,~l_{\lambda}=0$ the wave
function in the lowest hyperspherical approximation $K=1$ reads:
\begin{equation}
\psi=R^{-5/2}\chi_1(R)u_1(\Omega),~~~
u_1(\Omega)=\sqrt{\frac{8}{\pi^2}}\sin\theta\cdot
Y_{1m}(\Hat{\boldsymbol{\rho}}),
\end{equation} where $R^2=\boldsymbol{\rho}^2+\boldsymbol{\lambda}^2$. Here one
unit of orbital momentum between the diquarks is carried by the
$\boldsymbol{\rho}$ variable, whereas the $\boldsymbol{\lambda}$ variable is in
an $S$-state. The Schr\"odinger equation for $\chi_1(R)$ written
in terms of the variable $x=\sqrt{\mu} R$, where $\mu$ is an
arbitrary scale of mass dimension which drops out of the final
expressions, reads:
\begin{equation} \label{shr}
\frac{d^2\chi_1(x)}{dx^2}+
2\left[E_0+\frac{a_1}{x}-b_1x-\frac{35}{8x^2}\right] \chi_1(x)=0,
\end{equation}
with the boundary condition $\chi_1(x) \sim {\cal O} (x^{7/2})$ as
$x\to 0$ and the asymptotic behavior $\chi_1(x)\sim
{\mathrm{Ai}}((2b_1)^{1/3}x)$ as $x\to \infty$. In Eq. (\ref{shr})
\begin{equation}
\begin{aligned}
a_1&=R\sqrt{\mu}\cdot \int
V_{\text{C}}(\boldsymbol{r}_1,\boldsymbol{r}_2,\boldsymbol{r}_3)\cdot u_1^2\cdot
d\Omega,\\ b_1&=\frac{1}{R\sqrt{\mu}}\cdot\int
V_{\text{string}}(\boldsymbol{r}_1,\boldsymbol{r}_2,\boldsymbol{r}_3)\cdot u_1^2\cdot
d\Omega,
\end{aligned} \label{ab_int}
\end{equation}
where
\begin{equation}
V_{\text{C}}(\boldsymbol{r}_1,\boldsymbol{r}_2,\boldsymbol{r}_3)=
-\frac{2}{3}\alpha_s\cdot\sum\limits_{i<j}\frac{1}{r_{ij}},
\end{equation}
and
\begin{equation}
V_{\text{string}}(\boldsymbol{r}_1,\boldsymbol{r}_2,\boldsymbol{r}_3)=\sigma\cdot
l_{\text{min}}
\end{equation}
is proportional to the total length of the strings, i.e., to the
sum of the distances of the (anti)quark and diquarks from the string
junction point. In the Y-shape, the strings meet at $120^\circ$ in
order to ensure the minimum energy. This shape goes over continuously
into a two-leg configuration where the legs meet at an angle larger
than $120^\circ$. The explicit expression of
$V_{\text{string}}(\boldsymbol{r}_1,\boldsymbol{r}_2,\boldsymbol{r}_3)$ in terms of
Jacobi variables is given in \cite{plekhanov}.
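To illustrate how Eq.~(\ref{shr}) can be handled numerically, the Python sketch
below (not the authors' code) searches for an eigenvalue $E_0$ with a simple
shooting/bisection method; the values of $a_1$ and $b_1$ are placeholders standing
in for the integrals (\ref{ab_int}), and the bisection bracket is assumed to
contain the first sign change of the shooting function:
\begin{verbatim}
# Sketch: shooting method for an eigenvalue E0 of Eq. (shr).
# a1 and b1 are placeholders for the integrals of Eq. (ab_int).
import numpy as np
from scipy.integrate import solve_ivp

a1, b1 = 1.0, 2.0            # hypothetical Coulomb and string coefficients
x_min, x_max = 1e-3, 20.0    # integration range in x = sqrt(mu) R

def rhs(x, y, E0):
    chi, dchi = y
    # chi'' = -2 [E0 + a1/x - b1 x - 35/(8 x^2)] chi
    return [dchi, -2.0 * (E0 + a1 / x - b1 * x - 35.0 / (8.0 * x**2)) * chi]

def chi_end(E0):
    y0 = [x_min**3.5, 3.5 * x_min**2.5]   # regular behaviour chi ~ x^{7/2}
    sol = solve_ivp(rhs, (x_min, x_max), y0, args=(E0,),
                    rtol=1e-8, atol=1e-12)
    return sol.y[0, -1]

lo, hi = 0.5, 15.0           # assumed to bracket a sign change of chi_end
for _ in range(60):          # bisection on E0
    mid = 0.5 * (lo + hi)
    if chi_end(lo) * chi_end(mid) < 0.0:
        hi = mid
    else:
        lo = mid
print("eigenvalue E0 (dimensionless units):", 0.5 * (lo + hi))
\end{verbatim}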
The mass of the $\Theta^+$ obviously depends on $m_{[ud]}$ and
$m_s$. The current masses of the light quarks are relatively
well-known: $m_{u,d}\approx 0$, $m_s\approx 170$ MeV. The only
other parameter of strong interactions is the effective mass of
the diquark $m_{[ud]}$. In principle, this mass could be computed
dynamically. Instead, one can tune $m_{[ud]}$ (as well as
$m_{[us]}$ and $m_{[ds]}$) to obtain the baryon masses in the
quark--diquark approximation. We shall comment on this point later
on.
In what follows, we use $\sigma=0.15\text{~GeV}^2$, and explicitly
include the Coulomb-like interaction between quark and diquarks
with $\alpha_s=0.39$.
For pedagogical purposes, let us first assume $m_{[ud]}=0$. This
assumption leads to the lowest $uudd\bar d$ and $uudd\bar s$
pentaquarks. If the current diquark masses vanish, then the
$[ud]^2\bar d$ pentaquark is dynamically exactly analogous to the
$J^P=\frac{1}{2}^-$ nucleon resonance and $[ud]^2\bar s$
pentaquark is an analogue of the $J^P=\frac{1}{2}^-$ $\Lambda$
hyperon, with one important exception. The masses of $P$-wave
baryons calculated using the EH method acquire the (negative)
contribution $3C_q$ for $J^P=\frac{1}{2}^-$ nucleons and
$2C_q+C_s$ for the $J^P=\frac{1}{2}^-$ hyperons. These
contributions are due to the interaction of constituent spins
with the vacuum chromomagnetic field. Using the results of Table 1
below we obtain a mass of 1600 MeV for the $P$-wave nucleon resonance with
the orbital $\rho$ excitation and 1600 MeV for the $\Lambda$
hyperon, which agree within 100 MeV with the known
$P$-wave $N$ and $\Lambda$ resonances.
However, the above discussion shows that the self-energies
$C_{[ud]}$ equal zero for the scalar diquarks. This means that
introducing any scalar constituent increases the pentaquark energy
(relative to the $N$ and $\Lambda$ $P$-wave resonances) by
$2|C_q|\sim 200-300$ MeV. Therefore, prior to any calculation, we can
place a lower bound on the pentaquark mass in the Jaffe--Wilczek
approximation, $M(\Theta)\ge 2\text{~GeV}$.
The numerical calculation for $m_{[ud]}=0$ yields the mass of
$[ud]^2\bar s$ pentaquark $\sim$ 2100 MeV (see Table
\ref{table1}). Similar calculations yield the mass of the
$[ud]^2\bar c$ pentaquark $\sim$ 3250 MeV (for $m_c=1.4$ GeV) and of the
$[ud]^2\bar b$ pentaquark $\sim$ 6509 MeV (for $m_b=4.8$
GeV)\cite{veselov}. To illustrate the accuracy of the auxiliary
field (AF) formalism, Table \ref{table1} also shows the
masses of the $[ud]^2\bar d$ and $[ud]^2\bar s$ pentaquarks calculated
using the spinless Salpeter equation (SSE):
\begin{eqnarray*} H_S
&=& \sum_{i=1}^3 \sqrt{\boldsymbol{p}_i^2+ m_i^2} + V, \\ M &=& M_0 -
\frac{2\sigma}{\pi} \sum_{i=1}^3 \frac{\eta_i}{\left<
\sqrt{\boldsymbol{p}_i^2+ m_i^2} \right>},
\end{eqnarray*}
where $V$ is the same as in Eq. (\ref{EH}), $M_0$ is the
eigenvalue of $H_S$,
$\left<\sqrt{\boldsymbol{p}_{[ud]}^2+m_{[ud]}^2}\right>$,
$\left<\sqrt{\boldsymbol{p}_{\bar q}^2+m_{\bar q}^2}\right>$ are the
average kinetic energies of diquarks and an antiquark and $\eta_i$
are the correction factors given in (\ref{eta_i}). The numerical
algorithm to solve the three-body SSE is based on an expansion of
the wave function in terms of harmonic oscillator functions with
different sizes \cite{nunb77}. In fact, to apply this technique to
the three-body SSE we need to approximate the
three-body potential $V_{{\rm string}}$ by a sum of two- and
one-body potentials, see \cite{NSSB}. This approximation, however,
introduces only a marginal correction to the energy eigenvalues. The
quantities $\mu_{[ud]}$ and $\mu_q$ denote either the constituent
masses calculated in the AF formalism using Eq. (\ref{c_i}) or
$\left<\sqrt{\boldsymbol{p}_{[ud]}^2+m_{[ud]}^2}\right>$,
$\left<\sqrt{\boldsymbol{p}_{\bar q}^2+m_{\bar q}^2}\right>$ found from
the solution of the SSE. It is seen from Table 1 that these quantities
agree to better than 5$\%$. The pentaquark masses
calculated by the two methods differ by 100 MeV for
$[ud]^2\bar{s}$ and 160 MeV for $[ud]^2\bar{d}$. The
approximation of $V_{{\rm string}}$ mentioned above introduces the
correction to the energy eigenvalues $\le 30$ MeV, so we conclude
that the results obtained using the AF formalism and the SSE agree
within $\sim 5\%$, i.e., the accuracy of the AF results for
pentaquarks is the same as for the $q{\bar q}$ system (see, e.g.,
\cite{MNS}).
If we relax the assumption $m_{[ud]}=0$, then a possible way to
estimate the current diquark masses is to tune $m_{[ud]}$,
$m_{[us]}$ and $m_{[ds]}$ from the fit to the nucleon and hyperon
masses (in the quark--diquark approximation). In this way one
naturally obtains larger pentaquark masses. We have performed
such calculations using the SSE. We briefly investigated the
sensitivity of the pentaquark mass predictions to the choice of
$\sigma$, the strange quark mass $m_s$ and diquark masses
$m_{[ud]}$ and found $M([ud]^2\bar d)$ in the range 2.2--2.4 GeV,
$M([ud]^2\bar s)$ $\sim$ 2.4 GeV and $M([us]^2\bar d)$ $\sim$ 2.5
GeV.
Increasing $\alpha_s$ up to $0.6$ (the value used in the
Capstick--Isgur model \cite{capstick-isgur}) decreases the
$[ud]^2\bar s$ mass by $\sim$ $120$ MeV (see Table \ref{table2}).
We have briefly investigated the effect of the hyperfine
interaction due to the $\sigma$ meson exchange between diquarks
and strange antiquark and found that it lowers the $\Theta^+$
energy by $\sim 180$ MeV for $g^2_{\sigma}/4\pi\sim 1$. As a
result we obtain a lower bound for the $[ud]^2\bar s$ pentaquark mass,
$M([ud]^2\bar s)=1740$ MeV (for $m_{[ud]}=0$), which is still
$\sim$ 200 MeV above the experimental value.
\section{Conclusions}
We therefore conclude that string dynamics alone, in its
simplified form, predicts pentaquark masses that are too high. This may
indicate a large role of chiral symmetry breaking effects
in light pentaquark systems. The opposite, ``extremal'' approach of the chiral
soliton model totally neglects confinement effects and
concentrates on the purely chiral properties of baryons. Therefore
the existence of the $\Theta$, if confirmed, provides a unique
opportunity to clarify the interplay between the quark and chiral
degrees of freedom in light baryons.
This work was supported by RFBR grants No. 03-02-17345 and
04-02-17263, and by the grant for leading scientific schools No.
1774.2003.2. NATO is also gratefully acknowledged for grant
No. PST.CLG.978710.
\section*{Content}
\section*{Appendix}
\input{sec6_appendix}
\begin{backmatter}
\section*{Acknowledgements}
Luca Remaggi, Marco Crocco, Alessio Del Bue, and Robin Scheibler are thanked for help during experimental design.
\section*{Availability of data and materials}
The database is publicly available at \href{www.zenodo.com/SOMETHING}{zenodo} and \href{www.github.com/Chutlhu/dEchorate}{github}.
\section*{Abbreviations}
\begin{acronym}[UMLX]
\acro{AE}{Angular Error}
\acro{AER}{Acoustic Echo Retrieval}
\acro{ASR}{Automatic Speech Recognition}
\acro{DE}{Distance Error}
\acro{DER}{Direct-to-Early Ratio}
\acro{DRR}{Direct-to-Reverberant ratio}
\acro{DOA}{Direction of Arrival}
\acro{ESS}{Exponentially Swept-frequency Sine}
\acro{GEVD}{Generalized Eigenvector Decomposition}
\acro{GoM}{Goodness of Match}
\acro{MDS}{Multi-Dimensional Scaling}
\acro{MVDR}{Minimum Variance Distortionless Response}
\acro{nULA}{non-Uniform Linear Array}
\acro{PESQ}{Perceptual Evaluation of Speech Quality}
\acro{RIR}{Room Impulse Response}
\acro{ReTF}{Relative Transfer Function}
\acro{TOA}{Time of Arrival}
\acro{TDOA}{Time Difference of Arrival}
\acro{ISM}{Image Source Method}
\acro{SE}{Speech Enhancement}
\acro{SNRR}{Signal-to-Noise plus Reverberation Ratio}
\acro{iPESQ}{Perceptual Evaluation of Speech Quality improvement}
\acro{iSNRR}{Signal-to-Noise plus Reverberation Ratio improvement}
\acro{iSRMR}{Speech-to-Reverberation energy Modulation Ratio improvement}
\acro{RooGE}{Room Geometry Estimation}
\acro{WSJ}{Wall Street Journal}
\end{acronym}
\bibliographystyle{bmc-mathphys}
\section{Introduction}\label{sec:intro}
When sound travels from a source to a microphone in an indoor space, it interacts with the environment: it is delayed and attenuated due to the distance, and reflected, absorbed and diffracted by the surfaces.
The \ac{RIR} represents this phenomenon as a linear and causal time-domain filter.
As depicted in~\Cref{fig:rir}, \acp{RIR} are commonly subdivided into 3 parts:
the \textit{direct-path}, corresponding to the line-of-sight propagation; the \textit{early echoes}, stemming from a few disjoint reflections on the closest reflectors; and the \textit{late reverberation}, comprising the dense accumulation of later reflections and \textit{scattering} effects.
The late reverberation is indicative of the environment size and reverberation time, producing the so-called \textit{listener envelopment}, \textit{i.e\onedot}, the degree of immersion in the sound field~\cite{griesinger1997psychoacoustics}.
In contrast, the direct path and the early echoes carry precise information on the
scene's geometry, such as the position of the source and room surfaces relative to the receiver position~\cite{kuttruff2009room}, and on the surfaces' reflectivity.
This relation is well explained by the \ac{ISM} \cite{allen1979image}, in which the echoes are associated with the contribution of virtual sound sources lying outside the real room.
Therefore, one may consider early echoes as ``spatialized'' copies of the source signal, whose \acp{TOA} are related to the source and reflector positions.
\begin{figure}
\centering
[width=0.95\linewidth]{rirs_measured.pdf}
\caption{Depiction of a measured room impulse response from the database.}
\label{fig:rir}
\end{figure}
Based on this idea, so-called \textit{echo-aware} methods were introduced a few decades ago, where \textit{matched filters} (or \textit{rake receivers}) are used to constructively sum the sound reflections \cite{flanagan1993spatially,jan1995matched,affes1997signal} and to build beamformers achieving much better sound quality \cite{gannot2001signal}.
These methods have recently regained interest, as manifested by the European project SCENIC~\cite{annibale2011scenic} and the UK research project S$^3$A\footnote{\url{http://www.s3a-spatialaudio.org/}}.
Later, a few studies showed that knowing the properties of a few early echoes could boost the performance of typical indoor audio inverse problems such as \ac{SE} \cite{dokmanic2015raking,kowalczyk2019raking}, sound source localization \cite{ribeiro2010turning,salvati2016sound,dicarlo2019mirage,daniel2020time} and separation \cite{asaei2014structured,leglaive2016multichannel,scheibler2017separake,remaggi2019modeling}, and speaker verification~\cite{al2019early}.
Another fervent area of research spanning transversely the audio signal processing field is estimating the room geometry blindly from acoustic signals~\cite{antonacci2012inference,dokmanic2013acoustic,crocco2017uncalibrated,remaggi2017acoustic}.
As recently reviewed by Crocco \etal{} in \cite{crocco2017uncalibrated}, end-to-end \ac{RooGE} involves a number of subtasks:
\ac{RIR} estimation, peak picking, microphone calibration, echo labeling and reflector position estimation. As interesting applications, these methods have recently been used in an active setting (\textit{i.e\onedot}, knowing the transmitted signals) on unmanned aerial vehicles (UAVs, a.k.a. drones) \cite{jensen2019method,boutin2020drone} and on mobile phones \cite{shih2019phone}.
The lowest common denominator of all these tasks is \ac{AER}, that is, estimating the properties of early echoes, such as their \acp{TOA} and energies. Estimating the former is typically referred to as \ac{TOA} estimation, or \ac{TDOA} estimation when the direct path is taken as reference.
\input{table_rirdb.tex}
As listed in \cite{szoke2018building} and in \cite{genovese2019blind}, a number of recorded \acp{RIR} corpora are freely available online, each of them meeting the demands of certain applications. \Cref{tab:rir_db} summarizes the main characteristics of some of them.
One can broadly identify two main classes of echo-aware \ac{RIR} datasets in the literature: \ac{SE}/\ac{ASR}-oriented datasets, \textit{e.g\onedot}~\cite{szoke2018building,bertin2019voice,cmejla2021mirage}, and \ac{RooGE}-oriented datasets, \textit{e.g\onedot}{}~\cite{dokmanic2013acoustic,crocco2017uncalibrated,remaggi2017acoustic}.
The former regards acoustic echoes as highly correlated interfering sources coming from close reflectors, such as a table in a meeting room or a nearby wall. This typically presents a challenge in estimating the correct source's \ac{DOA}, with further consequences for \ac{DOA}-based enhancement algorithms, \textit{e.g\onedot}, beamformers.
Although this factor is taken into account, such datasets lack proper annotation of these echoes in the \acp{RIR} or of the absolute positions of objects inside the room.
The latter group typically features design choices, such as microphones scattered across the room, which are not suitable for \ac{SE} applications. Indeed, these typically involve compact or ad hoc arrays.
The main common drawback of these datasets is that they cannot easily be used for tasks other than the ones they were designed for.
To bypass the complexity of recording and annotating real \ac{RIR} datasets, acoustic simulators based on the \ac{ISM} are extensively used instead~\cite{gaultier2017vast,kim2017generation,perotin2018crnn,dicarlo2020blaster}.
While such data are more versatile, simpler and quicker to obtain, they fail to fully capture the complexity and richness of real acoustic environments.
Due to this, methods trained, calibrated, or validated on them may fail to generalize to real conditions, as will be shown in this paper.
Interestingly, in the context of learning-based blind room volume estimation, the authors of \cite{genovese2019blind} combined multiple real and synthetic \ac{RIR} datasets in order to find a balance between the amount of training data and realism.
A good echo-oriented \ac{RIR} dataset should include a variety of environments (room geometries and surface materials), of microphone placements (close to or away from reflectors, scattered or forming ad-hoc arrays) and, most importantly, precise annotations of the scene's geometry and echo timings in the \acp{RIR}.
Moreover, in order to be versatile and used in both \ac{SE} and \ac{RooGE} applications, geometry and timing annotations should be fully consistent.
Such data are difficult to collect since this involves precise measurements of the positions and orientations of all the acoustic emitters, receivers and reflective surfaces inside the environment with dedicated planimetric equipment.
To fill this gap, we present the \texttt{dEchorate}{} dataset: a fully calibrated multichannel \ac{RIR} database with accurate annotation of the geometry and echo timings in different configurations of a cuboid room with varying wall acoustic profiles.
The database currently features 1800 annotated \acp{RIR} obtained from 6 arrays of 5 microphones each, 6 sound sources and 11 different acoustic conditions.
All the measurements were carried out at the acoustic lab at Bar-Ilan University following a consolidated protocol previously established for the realization of two other multichannel \acp{RIR} databases: the BIU's Impulse Response Database \cite{hadad2014multichannel} gathering \acp{RIR} of different reverberation levels sensed by uniform linear arrays (ULAs); and \texttt{MIRaGE}~\cite{cmejla2021mirage} providing a set of measurements for a source placed on a dense position grid.
The \texttt{dEchorate}{} dataset is designed for AER with linear arrays, and is more generally aimed at analyzing and benchmarking \ac{RooGE} and echo-aware signal processing methods on real data.
In particular, it can be used to assess robustness against the number of reflectors, the reverberation time, additive spatially-diffuse noise and non-ideal frequency and directive characteristics of microphone-source pairs and surfaces in a controlled way.
Due to the amount of data and recording conditions, it could also be used to train machine learning models or as a reference to improve \ac{RIR} simulators.
The database is accompanied with a Python toolbox that can be used to process and visualize the data, perform analysis or annotate new datasets.
\begin{figure*}
\centering
[trim={0 0 0 10em},clip,width=0.95\textwidth]{fornitures.jpg}
\caption{Broad-view picture of the acoustic lab at Bar-Ilan University.}
\label{fig:fornitures}
\end{figure*}
The remainder of the paper is organized as follows.
\Cref{sec:description} describes the construction and the composition of the dataset, while \Cref{sec:analysis} provides an overview of the data, studying the variability of typical acoustic parameters.
To validate the data, two echo-aware applications are presented in \Cref{sec:applications}, one in speech enhancement and one in room geometry estimation.
Finally, \Cref{sec:conclusion} closes the paper with the conclusions and offers leads for future work.
\section{Database Description}\label{sec:description}
\begin{table}[]
\caption{\label{tab:room_equipment} Measurement and recording equipment.}
\centering
\small
\begin{tabular}{ll}
\toprule
Loudspeakers & (directional, direct) $4 \times$ Avanton\\
& (directional, indirect) $2 \times$ Avanton\\
& (omnidirectional) $1 \times$ B\&G\\
& (babble noise) $4 \times$ 6301bx Fostex\\
\hline
Microphones & $30 \times$ AKG CK32\\
Array & $6 \times$ nULA (5 mics each, handcrafted)\\
\hline
A/D Converter & ANDIAMO.MC\\
\hline
Indoor Positioning & Marvelmind Starter Set HW v4.9\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Recording setup}
The recording setup is placed in a cuboid room with dimensions 6 m $\times$ 6 m $\times$ 2.4 m.
The 6 facets of the room (walls, ceiling, floor) are covered by acoustic panels allowing controllable reverberation time (\ensuremath{\mathtt{RT}_{60}}).
We placed 4 directional loudspeakers (direct sources) facing the center of the room and 30 microphones mounted on 6 static linear arrays parallel to the ground.
An additional channel is used for the loop-back signal, which serves to compute the time of emission and detect errors.
Each loudspeaker and each array is positioned close to one of the walls in such a way that the source of the strongest echo can be easily identified.
Moreover, their positioning was chosen to cover a wide distribution of source-to-receiver distances, hence, a wide range of \acp{DRR}.
Further, 2 more loudspeakers were positioned pointing towards the walls (indirect sources).
This was done to study the case of early reflections being stronger than the direct-path.
Each linear array consists of 5 microphones with non-uniform inter-microphone spacings of $[4, 5, 7.5, 10]$ cm\footnote{%
\footnotesize
that is, $[-12.25, -8.25, -3.25, 3.25, 13.25]$ cm w.r.t\onedot the barycenter}.
Hereinafter we will refer to these elements as \acp{nULA}.
\begin{figure}[t]
\centering
[width=0.95\linewidth]{positioning2D_xy.pdf}
\caption{Illustration of the recording setup - top view.}
\label{fig:2D}
\end{figure}
\begin{figure*}[h]
\hfill
\subfigure{
[width=0.31\textwidth]{recording_setup.jpg}}
\hfill
\subfigure{
[width=0.31\textwidth]{mic.jpg}}
\hfill
\subfigure{
[width=0.31\textwidth]{panels.jpg}}
\caption{Picture of the acoustic lab. From left to right: the overall setup, one microphone array, the setup with revolved panels.}
\label{fig:setup}
\end{figure*}
\subsection{Measurements}
The main feature of this room is the possibility to change the acoustic profile of each of its facets by flipping double-sided panels with one reflective face (made of Formica Laminate sheets) and one absorbing face (made of perforated panels filled with rock wool).
A complete list of the materials of the room is available in \Cref{app:materials}.
This makes it possible to achieve diverse values of $\ensuremath{\mathtt{RT}_{60}}$, ranging from 0.1 to almost 1~second.
In this dataset, the panels of the floor were always kept absorbent.
Two types of measurement sessions were considered, namely, \textit{one-hot} and \textit{incremental}.
For the first type, a single facet was placed in reflective mode while all the others were kept absorbent.
For the second type, starting from the fully-absorbent mode, facets were progressively switched to reflective one after the other until all but the floor were reflective, as shown in~\Cref{tab:wallcoding}.
The dataset features an extra recording session.
For this session, office furniture (chairs, a coat hanger and a table) was positioned in the room to simulate a typical meeting room (see~\Cref{fig:fornitures}).
These recordings may be used to assess the robustness of echo-aware methods in a more realistic scenario.
\begin{table}[]
\small
\caption{\label{tab:wallcoding} Surface coding in the dataset: each binary digit indicates whether the surface is absorbent ($\mathtt{0}$, \xmark ) or reflective ($\mathtt{1}$, \cmark).}
\begin{tabular}{p{.5cm}p{1.1cm}|p{.5cm}p{.5cm}p{.5cm}p{.5cm}p{.5cm}c}
\toprule
& Surfaces: & Floor & Ceil & West & South & East & North\\
\hline
\multicolumn{1}{c}{\multirow{5}{*}{\rotatebox{90}{one-hot}}} & $\mathtt{010000}$ & \xmark & \cmark & \xmark & \xmark & \xmark & \xmark \\
& $\mathtt{001000}$ & \xmark & \xmark & \cmark & \xmark & \xmark & \xmark \\
& $\mathtt{000100}$ & \xmark & \xmark & \xmark & \cmark & \xmark & \xmark \\
& $\mathtt{000010}$ & \xmark & \xmark & \xmark & \xmark & \cmark & \xmark \\
& $\mathtt{000001}$ & \xmark & \xmark & \xmark & \xmark & \xmark & \cmark \\
\hdashline
\multicolumn{1}{c}{\multirow{6}{*}{\rotatebox{90}{incremental}}} & $\mathtt{000000}$ & \xmark & \xmark & \xmark & \xmark & \xmark & \xmark \\
& $\mathtt{010000}$ & \xmark & \cmark & \xmark & \xmark & \xmark & \xmark \\
& $\mathtt{011000}$ & \xmark & \cmark & \cmark & \xmark & \xmark & \xmark \\
& $\mathtt{011100}$ & \xmark & \cmark & \cmark & \cmark & \xmark & \xmark \\
& $\mathtt{011110}$ & \xmark & \cmark & \cmark & \cmark & \cmark & \xmark \\
& $\mathtt{011111}$ & \xmark & \cmark & \cmark & \cmark & \cmark & \cmark \\
\hdashline
\multicolumn{1}{c}{\multirow{1}{*}{\rotatebox{90}{f.}}}
& $\mathtt{010001}^{*}$ & \xmark & \cmark & \xmark & \xmark & \xmark & \cmark \\
\bottomrule
\end{tabular}
\end{table}
For each room configuration and loudspeaker, three different excitation signals were played and recorded in sequence: chirps, white noise and speech utterances.
The first consists of a repetition of 3 \ac{ESS} signals of 10~seconds duration, spanning the frequency range from 100 Hz to 14 kHz, interspersed with 2 seconds of silence.
Such frequency range was chosen to match the characteristics of the loudspeakers.
To prevent rapid phase changes and ``popping'' effects, the signals were linearly faded in and out over 0.2 seconds with a Tukey taper window.\footnote{\label{fn:pyrir}%
\footnotesize The code to generate the reference signals and to process them is available together with the data.
The code is based on the \href{https://github.com/maj4e/pyrirtool}{\codeLibrary{pyrirtool}} Python library.}
Second, 10-second bursts of white noise and 3 anechoic speech utterances from the \ac{WSJ} dataset~\cite{paul1992design} were played in the room.
Throughout all recordings, at least 40 dB of sound dynamic range compared to the room silence was verified, and a room temperature of $\ang{24} \pm \ang{0.5}$C and $80\%$ relative humidity were registered. In these conditions the speed of sound is $c_\text{air} = 346.98$ m/s.
In addition, 1 minute of \textit{room tone} (\textit{i.e\onedot}, silence) and 4~minutes of diffuse babble noise were recorded for each session. The latter was simulated by transmitting different chunks of the same single-channel babble noise recording from additional loudspeakers facing the four corners of the room.
All microphone signals were synchronously acquired and digitally converted to 48~kHz with 32~bit/sample using the equipment listed in~\Cref{tab:room_equipment}.
The polarity of each microphone was checked by clapping a book in the middle of the room, and the microphone gains were corrected using the room tone.
Finally, \acp{RIR} are estimated with the \ac{ESS} technique~\cite{farina2007advancements}, where an exponentially time-growing frequency sweep is used as the probe signal. The \ac{RIR} is then estimated by deconvolving the microphone signal, implemented as a division in the frequency domain (the authors used the same code mentioned in \Cref{fn:pyrir}).
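For reference, this spectral-division step can be sketched in a few lines of Python (an assumed, simplified implementation, not the toolbox code shipped with the dataset; the regularization constant is an illustrative choice):
\begin{verbatim}
# Sketch: RIR estimation by frequency-domain deconvolution of the ESS probe.
import numpy as np

def estimate_rir(mic_signal, probe_signal, eps=1e-8):
    """Divide the spectra of the recording and of the probe (ESS) signal."""
    n = len(mic_signal) + len(probe_signal) - 1
    Y = np.fft.rfft(mic_signal, n)
    X = np.fft.rfft(probe_signal, n)
    H = Y / (X + eps)              # simple regularized spectral division
    return np.fft.irfft(H, n)

# usage: rir = estimate_rir(recorded_sweep, reference_sweep)
\end{verbatim}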
\subsection{Dataset annotation}\label{subsec:annotation}
\subsubsection{RIRs annotation}
The objective of this database is to feature annotations in the ``geometrical space'', namely the microphone, facet and source positions, that are \textit{fully consistent} with annotations in the ``signal space'', namely the echo timings within the \acp{RIR}.
This is achieved as follows:
\begin{enumerate}[label=(\roman*)]
\item \label{it:decharate:ips} First, the ground-truth positions of the array and source centres are acquired via a Beacon indoor positioning system ($\ensuremath{\mathtt{bIPS}}$).
This system consists of 4 stationary bases positioned at the corners of the ceiling and a movable probe used for measurements, which can be located with an error of $\pm2$~cm.
\item \label{it:decharate:not} The estimated \acp{RIR} are superimposed on synthetic \acp{RIR} computed with the \acf{ISM} from the geometry obtained in the previous step.
A Python GUI\footnote{\footnotesize This GUI is available in the dataset package.} (shown in~\Cref{fig:labelling_tools}) is used to manually tune a peak finder and label the echoes corresponding to the found peaks, that is, to annotate their timings and their corresponding image source position and room facet label.
\item \label{it:decharate:mds} By solving a simple \acf{MDS} problem \cite{dokmanic2015relax,crocco2016estimation,plinge2016acoustic}, refined microphone and source positions are computed from echo timings.
The non-convexity of the problem is alleviated by using a good initialization (obtained at the previous step), by the high SNR of the measurements and, later, by including additional image sources in the formulation.
The prior information about the arrays' structures reduced the number of variables of the problem, leaving the 3D positions of the sources and of the arrays' barycenters in addition to the arrays' tilt on the azimuthal plane.
\item \label{it:decharate:lat} By employing a multilateration algorithm \cite{beck2008exact}, where the positions of one microphone per array serve as anchors and the \acp{TOA} are converted into distances, it is possible to localize image sources alongside the real sources.
This step will be further discussed in~\Cref{sec:applications}; a minimal numerical sketch of this multilateration step is given right after this list.
\end{enumerate}
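A minimal sketch of the multilateration step \ref{it:decharate:lat} is given below (an assumed least-squares formulation, not the exact algorithm of \cite{beck2008exact}); the anchors are the reference microphone positions and the \acp{TOA} are converted into distances using the speed of sound reported above:
\begin{verbatim}
# Sketch: localize a (real or image) source from TOAs at anchor microphones.
import numpy as np
from scipy.optimize import least_squares

def multilaterate(anchors, toas, c=346.98, x0=None):
    """anchors: (M, 3) microphone positions [m]; toas: (M,) arrival times [s]."""
    dists = c * np.asarray(toas)                   # TOAs converted to distances
    x0 = anchors.mean(axis=0) if x0 is None else x0
    fun = lambda x: np.linalg.norm(anchors - x, axis=1) - dists
    return least_squares(fun, x0).x                # estimated 3D position
\end{verbatim}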
Knowing the geometry of the room, in step \ref{it:decharate:ips} we were able to make an initial guess of the positions of the echoes in the \ac{RIR}. Then, by iterating through steps \ref{it:decharate:not}, \ref{it:decharate:mds} and \ref{it:decharate:lat}, the positions of the echoes are refined to be consistent under the \ac{ISM}.
The final geometrical and signal annotation was chosen as a compromise between the $\ensuremath{\mathtt{bIPS}}$ measurements and the $\ensuremath{\mathtt{MDS}}$ output.
While the former are noisy but consistent with the scene's geometry, the latter match the \acp{TOA} but not necessarily the physical world.
In particular, geometrical ambiguities such as global rotation, translation and up-down flips were observed.
Instead of manually correcting these errors, we modified the original problem from using only the direct-path distances ($\ensuremath{\mathtt{dMDS}}$) to also including the \acp{TOA} of the ceiling image sources in the cost function ($\ensuremath{\mathtt{dcMDS}}$).
\Cref{tab:res_mds} shows numerically the \textit{mismatch} (in cm) between the geometric space (defined by the $\ensuremath{\mathtt{bIPS}}$ measurements) and the signal space (the one defined by the echo timings, converted to cm based on the speed of sound).
To better quantify it, we introduce here a \textit{\ac{GoM}} metric: it measures the fraction of (first-order) echo timings annotated in the \acp{RIR} matching the annotation produced by the geometry within a threshold.
Including the ceiling information, $\ensuremath{\mathtt{dcMDS}}$ produces a geometrical configuration with a small mismatch (0.4~cm on average, 1.86~cm max) in both the signal \textit{and} geometric spaces, with $98.1\%$ of all first-order echoes matched within a 0.5~ms threshold (\textit{i.e\onedot}, the positions of all the image sources recovered within about 17~cm error).
It is worth noting that the $\ensuremath{\mathtt{bIPS}}$ measurements produce a significantly less consistent annotation with respect to the signal space.
\begin{table}[]
\centering
\caption{\label{tab:res_mds} Mismatch between geometric measurements and signal measurements in terms of maximum (Max.), average (Avg.) and standard deviation (Std) of absolute mismatch in centimeters.
The goodness of match (GoM) between the signal and geometrical measurements is reported as the fraction of matching echo timings for different thresholds in milliseconds.}
\begin{tabular}{lllll}
\toprule
& Metrics & $\ensuremath{\mathtt{bIPS}}$ & $\ensuremath{\mathtt{dMDS}}$ & $\ensuremath{\mathtt{dcMDS}}$ \\
\midrule
\multicolumn{1}{c}{\multirow{2}{*}{\rotatebox{90}{\scriptsize Geom.}}}
& Max. & 0 & $6.1$ & $1.07$ \\
& Avg.$\pm$Std. & 0 & $1.8\pm1.4$ & $0.39\pm0.2$ \\
\midrule
\multicolumn{1}{c}{\multirow{2}{*}{\rotatebox{90}{\scriptsize Signal}}}
& Max. & $5.86$ & $1.20$ & $1.86$ \\
& Avg.$\pm$Std. & $1.85\pm 1.5$ & $0.16\pm0.2$ & $0.41\pm0.3$ \\
\midrule
\multicolumn{1}{c}{\multirow{3}{*}{\rotatebox{90}{\scriptsize GoM}}}
& GoM (0.5 ms) & $97.9 \%$ & $93.4 \%$ & $98.1 \%$ \\
& GoM (0.1 ms) & $26.6 \%$ & $44.8 \%$ & $53.1 \%$ \\
& GoM (0.05 ms) & $12.5 \%$ & $14.4 \%$ & $30.2 \%$ \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Other tools for RIRs annotation}
Finally, we would like to add that the following tools and techniques were found useful in annotating the echoes.
\paragraph{The ``skyline'' visualization} consists of presenting the intensity of multiple \acp{RIR} as an image, such that the wavefronts corresponding to echoes can be highlighted \cite{baba2018b}.
Let $h_{n}(l)$ be an \ac{RIR} from the database, where $l = 0, \ldots, L-1$ denotes sample index and $n = 0, \ldots, N-1$ is an arbitrary indexing of all the microphones for a fixed room configuration.
Then, the \textit{skyline} is the visualization of the $L \times N$ matrix $\mathbf{H}$ created by stacking column-wise $N$ normalized \textit{echograms}\footnote{
\footnotesize
The echogram is defined either as the absolute value or as the squared value of the \ac{RIR}.
}, that is
\begin{equation}
\mathbf{H}_{l, n} = |h_{n}(l)| \, / \, \max_{l} |h_{n}(l)|,
\end{equation}
where $|\cdot|$ denotes the absolute value.
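As an illustration, the skyline matrix $\mathbf{H}$ can be computed with a few lines of Python (a minimal sketch, assuming the \acp{RIR} are given as an $L \times N$ array; this is not the exact code of the accompanying toolbox):
\begin{verbatim}
# Sketch: build the RIR "skyline" by stacking N normalized echograms column-wise.
import numpy as np

def skyline(rirs):
    """rirs: array of shape (L, N), one RIR per column."""
    H = np.abs(rirs)                            # echogram as the absolute value
    return H / H.max(axis=0, keepdims=True)     # normalize each column to 1

# e.g. matplotlib.pyplot.imshow(skyline(rirs), aspect="auto", origin="lower")
\end{verbatim}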
\Cref{fig:skyline} shows an example of skyline for 120 \acp{RIR} corresponding to 4 directional sources, 30 microphones and the most reflective room configuration, stacked horizontally, preserving the order of microphones within the arrays.
One can notice several clusters of 5 adjacent bins of similar color (intensity) corresponding to the arrivals at the 5 sensors of each \ac{nULA}.
Thanks to the usage of linear arrays, this visualization allowed us to identify both the \acp{TOA} and their labels.
\begin{figure*}[h]
\centering
\small
[width=0.95\textwidth]{labeling_tool.pdf}
\caption{
Detail of the GUI used to manually annotate the \acp{RIR}.
For a given source and a microphone in an \ac{nULA},
a) and b) each shows 2 \acp{RIR} for 2 different room configurations (blue and orange) before and after the direct path deconvolution.
c) shows the results of the peak finder for one of the deconvolved RIRs, and d) is a detail on the \ac{RIR} skyline (See \Cref{fig:skyline}) on the corresponding \ac{nULA}, transposed to match the time axis.
} \label{fig:labelling_tools}
\end{figure*}
\begin{figure*}
\centering
[trim={15em 15em 2em 0},clip,width=0.95\textwidth]{rir_skyline_final_mod4paper.pdf}
\caption{
The \ac{RIR} skyline annotated with observed peaks ($\times$) together with their geometrically-expected positions ($\circ{}$) computed with the Pyroomacoustics simulator.
As specified in the legend, markers of different colors are used to indicate the room facets responsible for the reflection: direct path ($\mathtt{d}$), ceiling ($\mathtt{c}$), floor ($\mathtt{f}$), west wall ($\mathtt{w}$), $\dots$, north wall ($\mathtt{n}$).
}\label{fig:skyline}
\end{figure*}
\paragraph{Direct path deconvolution/equalization} was used to compensate for the frequency response of the source loudspeaker and microphone \cite{antonacci2012inference,eaton2016estimation}.
In particular, the direct path of the \ac{RIR} was manually isolated and used as an equalization filter to enhance early reflections from their superimposition before proceeding with peak picking.
Each \ac{RIR} was equalized with its respective direct path.
As depicted in~\Cref{fig:labelling_tools}, in some cases this process was required to correctly identify the peaks of the underlying \acp{TOA}.
\paragraph{Different facet configurations} for the same geometry influenced the peaks' predominance in the \ac{RIR}, hence facilitating its echo annotation.
An example of \acp{RIR} corresponding to 2 different facet configurations is shown in~\Cref{fig:labelling_tools}: the reader can notice how the peak predominance changes for the different configurations.
\paragraph{An automatic peak finder} was used on equalized echograms $\bar{\eta}_{n}(l)$ to provide an initial guess on the peak positions.
In this work, peaks are found using the Python library \href{https://bitbucket.org/lucashnegri/peakutils/}{\codeLibrary{peakutils}}{} whose parameters were manually tuned.
\subsection{Limitations of current annotation}
As stated in \cite{defrance2008finding}, we want to emphasize that annotating the correct \acp{TOA} of echoes and even the direct path in ``clean'' real \acp{RIR} is far from straightforward.
The peaks can be blurred out by the loudspeaker characteristics or the concurrency of multiple reflections.
Nevertheless, as shown in~\Cref{tab:res_mds}, the proposed annotation was found to be sufficiently consistent both in the geometric and in the echo/signal space.
Thus, no further refinement was done.
This database can be used as a first basis to develop better \ac{AER} methods which could be used to iteratively improve the annotation, for instance including 2$^\text{nd}$ order reflections.
\subsection{The \texttt{dEchorate}~package}
The dataset comes with both data and code to parse and process it.
The data are presented in 2 modalities: the \texttt{raw} data, that is, the collection of recorded wave files, are organized in folders and can be retrieved by querying a simple database table; the \texttt{processed} data, which comprise the estimated \acp{RIR} and the geometrical and signal annotations, are organized in tensors directly importable in Matlab or Python (\textit{e.g.} all the \acp{RIR} are stored in a tensor of dimension $L \times I \times J \times D$, respectively corresponding to the RIR length in samples, the number of microphones, of sources and of room configurations).
\\Together with the data a Python package is available on the same website.
This includes wrappers, GUI, examples as well as the code to reproduce this study.
In particular, all the scripts used for estimating the \acp{RIR} and annotating them are available and can be used to further improve and enrich the annotation or as baselines for future works.
\section{Analysing the Data}\label{sec:analysis}
In this section we illustrate some characteristics of the collected data in terms of acoustic descriptors, namely the $\ensuremath{\mathtt{RT}_{60}}$, the \ac{DRR} and the \ac{DER}. While the former two are classical acoustic descriptors used to evaluate \ac{SE} and \ac{ASR} technologies~\cite{eaton2015ace}, the latter is less common and is used in strongly echoic situations~\cite{eargle1996characteristics,naylor2010speech}.
\subsection{Reverberation Time}
The $\ensuremath{\mathtt{RT}_{60}}$ is the time required for the sound level in a room to decrease by 60 dB after the source is turned off; thus, it measures the amount of reverberation. It is one of the most common acoustic descriptors in room acoustics. Moreover, since reverberation detrimentally affects the performance of speech processing technologies, robustness against $\ensuremath{\mathtt{RT}_{60}}$ has become a common evaluation criterion in \ac{SE} and \ac{ASR}.
\Cref{tab:rtsixty} reports estimated $\ensuremath{\mathtt{RT}_{60}}(b)$ values per octave band $b\in \{ 500,1000,2000,4000 \}$ (Hz) for each of the rooms in the dataset. These values were estimated using Schroeder's integration method~\cite{schroeder1965new,chu1978comparison,xiang1995evaluation} in each octave band. For the octave bands centred at 125 Hz and 250 Hz, the measured \acp{RIR} did not exhibit sufficient power for a reliable estimation. This observation is consistent with the frequency response provided by the loudspeakers' manufacturer, which decays exponentially below 300 Hz.
Ideally, for the $\ensuremath{\mathtt{RT}_{60}}$ to be reliably estimated, the Schroeder curve, \textit{i.e.} the log of the backward-integrated squared, octave-band-passed \ac{RIR}, would need to feature a linear decay over 60 dB of dynamic range, which would occur in an ideal diffuse sound regime. However, such a range is never observable in practice, due to the presence of noise and possible non-diffuse effects. Hence, a common technique is to compute, \textit{e.g.}, the $\mathtt{RT}_{10}$ on the range $[-5,-15]$ dB of the Schroeder curve and to extrapolate the $\ensuremath{\mathtt{RT}_{60}}$ by multiplying it by 6. We visually inspected all the \acp{RIR} of the dataset corresponding to directional sources 1, 2 and 3, \textit{i.e.}, 90 \acp{RIR} in each of the 10 rooms. Then, two sets were created. Set $\mathcal{A}$ contains all the Schroeder curves exhibiting linear log-energy decays, allowing for reliable $\mathtt{RT}_{10}$ estimates.
Set $\mathcal{B}$ contains all the other curves. In practice, $49\%$ of the 3600 Schroeder curves were placed in the set $\mathcal{B}$. These mostly correspond to the challenging measurement conditions purposefully included in our dataset, \textit{i.e.}, strong early echoes, loudspeakers facing towards reflectors or receivers close to reflectors. Finally, the $\ensuremath{\mathtt{RT}_{60}}$ value of each room and octave band was calculated from the median of $\mathtt{RT}_{10}$ corresponding to Schroeder curves in $\mathcal{A}$ only.
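For clarity, the estimation procedure just described can be summarized by the following Python sketch (an assumed, simplified implementation; octave-band filtering and the selection of set $\mathcal{A}$ are performed beforehand):
\begin{verbatim}
# Sketch: Schroeder backward integration, RT10 fit on [-5, -15] dB,
# and extrapolation to 60 dB of decay.
import numpy as np

def rt60_from_rir(h, fs):
    edc = np.cumsum(h[::-1] ** 2)[::-1]              # Schroeder energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)
    t = np.arange(len(h)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -15.0)      # RT10 fitting range
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second (negative)
    rt10 = -10.0 / slope
    return 6.0 * rt10                                # extrapolate to 60 dB

# h is assumed to be an octave-band-filtered RIR (e.g. via scipy.signal).
\end{verbatim}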
As can be seen in \Cref{tab:rtsixty}, the obtained reverberation values are consistent with the room progressions described in \cref{sec:description}. Considering the 1000 Hz octave band, the $\ensuremath{\mathtt{RT}_{60}}$ ranges from 0.14 s for the fully absorbent room ($\mathtt{000000}$) to 0.73 s for the most reflective room ($\mathtt{011111}$). When only one surface is reflective, the $\ensuremath{\mathtt{RT}_{60}}$ value remains around 0.19 s.
\begin{table*}[t!]
\caption{\label{tab:rtsixty} Reverberation time per octave bands $\ensuremath{\mathtt{RT}_{60}}(b)$ calculated in the 10 room configurations. For each coefficient, the number of corresponding Schroeder curves in $\mathcal{A}$ used to compute the median estimate is given in parentheses.}
\resizebox{1 \linewidth}{!}{%
\begin{tabular}{l|cccccccccc}
\toprule
& Room 1 & Room 2 & Room 3 & Room 4 & Room 5 & Room 6 & Room 7 & Room 8 & Room 9 & Room 10 \\
& $\mathtt{000000}$ & $\mathtt{011000}$ & $\mathtt{011100}$ & $\mathtt{011110}$
& $\mathtt{011111}$ & $\mathtt{001000}$ & $\mathtt{000100}$ & $\mathtt{000010}$ & $\mathtt{000001}$ & $\mathtt{010001}^*$ \\
\midrule
500 Hz & 0.18 (11) & 0.40 (7) & 0.46 (20) & 0.60 (51) & 0.75 (48) & 0.22 (8) & 0.21 (5) & 0.21 (8) & 0.22 (7) & 0.37 (12) \\
1000 Hz & 0.14 (62) & 0.33 (83) & 0.34 (86) & 0.56 (89) & 0.73 (90) & 0.19 (79) & 0.19 (74) & 0.18 (69) & 0.19 (70) & 0.26 (72) \\
2000 Hz & 0.16 (65) & 0.25 (81) & 0.30 (86) & 0.48 (82) & 0.68 (88) & 0.18 (74) & 0.20 (64) & 0.18 (66) & 0.18 (67) & 0.24 (69) \\
4000 Hz & 0.22 (15) & 0.25 (17) & 0.37 (22) & 0.55 (16) & 0.81 (29) & 0.22 (17) & 0.23 (12) & 0.26 (14) & 0.24 (18) & 0.28 (14) \\
\bottomrule
\end{tabular}}
\end{table*}
\subsection{Direct To Early and Reverberant Ratio}
In order to characterize an acoustic environment, it is common to provide the ratio between the energy of the direct and the indirect propagation paths.
In particular, one can compute the so-called \ac{DRR} directly from a measured \ac{RIR} $h(l)$~\cite{eaton2015ace} as
\begin{equation}
\mathtt{DRR} = 10 \log_{10} \frac{\sum_{l \in \mathcal{D}} h^2(l) }{\sum_{l \in \mathcal{R}} h^2(l)} \quad [\text{dB}],
\end{equation}
where $\mathcal{D}$ denotes the time support comprising the direct propagation path (set to $\pm$120 samples around its time of arrival, blue part in \Cref{fig:rir}), and $\mathcal{R}$ comprises the remainder of the \ac{RIR}, including both echoes and late reverberation (orange and green parts in \Cref{fig:rir}).
Similarly, the \ac{DER} defines the ratio between the energy of the direct path and the early echoes only, that is,
\begin{equation}
\mathtt{DER} = 10 \log_{10} \frac{\sum_{l \in \mathcal{D}} h^2(l) }{\sum_{l \in \mathcal{E}} h^2(l)} \quad [\text{dB}],
\end{equation}
where $\mathcal{E}$ is the time support of the early echoes only (green part in \Cref{fig:rir}).
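Both ratios are straightforward to compute from an annotated \ac{RIR}; the following Python sketch (an assumed implementation; the 50~ms early-echo window is an illustrative choice, not a value prescribed by the dataset) mirrors the two definitions above:
\begin{verbatim}
# Sketch: DRR and DER from a measured RIR h and the direct-path sample index.
import numpy as np

def drr_der(h, toa, fs, half=120, early_ms=50):
    idx = np.arange(len(h))
    d = (idx >= toa - half) & (idx <= toa + half)   # direct path (+-120 samples)
    r = ~d                                          # echoes + late reverberation
    e = r & (idx > toa) & (idx < toa + int(early_ms * 1e-3 * fs))  # early echoes
    direct = np.sum(h[d] ** 2)
    drr = 10.0 * np.log10(direct / np.sum(h[r] ** 2))
    der = 10.0 * np.log10(direct / np.sum(h[e] ** 2))
    return drr, der
\end{verbatim}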
Unlike the $\ensuremath{\mathtt{RT}_{60}}$, which mainly describes the diffuse regime, both the \ac{DER} and the \ac{DRR} are highly dependent on the positions of the source and receiver in the room. Therefore, for each room, wide ranges of these parameters were registered. For the loudspeakers facing the microphones, the \ac{DER} ranges from 2 dB to 6 dB in one-hot room configurations and from -2 dB to 6 dB in the most reverberant rooms.
The \ac{DRR} follows a similar trend with lower values, around -2 dB in one-hot rooms and down to -7.5 dB for the most reverberant ones. A complete annotation of these metrics is available in the database.
\section{Using the Data}\label{sec:applications}
The dEchorate database
is now used to investigate the performance of state-of-the-art methods on two echo-aware acoustic signal processing applications on both synthetic and measured data, namely, spatial filtering and room geometry estimation.
\subsection{Application: Echo-aware Beamforming}
Let $I$ microphones capture the signal of a single static point sound source, contaminated by noise sources.
In the short-time Fourier transform (STFT) domain, we stack the $I$ complex-valued microphone observations at frequency $f$ and time $t$ into a vector $\ensuremath{\mathbf{x}}(f,t) \in \bb{C}^I$.
Let us denote $\ensuremath{s}(f,t) \in \bb{C}$ and $\ensuremath{\mathbf{n}}(f,t) \in \bb{C}^{I}$ the source signal and the noise signals at microphones, which are assumed to be statistically independent.
Denoting by $\ensuremath{\mathbf{h}}(f) \in \bb{C}^I$ the Fourier transforms of the \acp{RIR}, the observed microphone signals in the STFT domain can be expressed as follows:
\begin{equation}
\ensuremath{\mathbf{x}} (f,t) = \ensuremath{\mathbf{h}} (f) \ensuremath{s}(f,t) + \ensuremath{\mathbf{n}}(f,t).
\end{equation}
Here, the STFT windows are assumed long enough so that the discrete convolution-to-multiplication approximation holds well.
Beamforming is one of the most widely used techniques for enhancing multichannel microphone recordings. The literature on this topic spans several decades of array processing and a recent review can be found in~\cite{gannot2017consolidated}.
In the frequency domain, the goal of beamforming is to estimate a set of coefficients $\ensuremath{\mathbf{w}}(f) \in \bb{C}^{I}$ that are applied to $\ensuremath{\mathbf{x}}(f,t)$, such that $s(f,t) \approx \ensuremath{\mathbf{w}}^{H} \ensuremath{\mathbf{x}}(f,t)$.
Hereinafter, we will consider only the \textit{distortionless} beamformers aiming at retrieving the clean target speech signal, as it is generated at the source position.
As mentioned throughout the paper, the knowledge of early echoes is expected to boost spatial filtering performances. However, estimating these elements is difficult in practice.
To quantify this, we compare \textit{echo-agnostic} and \textit{echo-aware} beamformers.
In order to study their empirical potential, we will evaluate their performance using both synthetic and measured data, as available in the presented dataset.
Echo-agnostic beamformers do not need any echo-estimation step: they either ignore their contributions, as in the direct-path delay-and-sum beamformer ($\ensuremath{\mathtt{DS}}$)~\cite{vantrees2004optimum}, or they consider coupling filters between pairs of microphones, called \acp{ReTF}~\cite{gannot2001signal}.
Note that contrary to \acp{RIR}, there exist efficient methods to estimate \acp{ReTF} from multichannel recordings of unknown sources (see~\cite[Section VI.B]{gannot2017consolidated} for a review).
The \acp{ReTF} can then be naturally incorporated in powerful beamforming algorithms achieving speech dereverberation and noise reduction in static~\cite{schwartz2014multi} and dynamic scenarios~\cite{kodrasi2017evd}.
In this work, \acp{ReTF} are estimated with the \ac{GEVD} method~\cite{markovich2009multichannel}, following the approach illustrated in~\cite{markovich2018performance}.
Echo-aware beamformers fall in the category of \textit{rake receivers}, borrowing the idea from telecommunication where an antenna \textit{rakes} (\textit{i.e\onedot}, combines) coherent signals arriving from different propagation paths~\cite{flanagan1993spatially,jan1995matched,affes1997signal}.
To this end, they typically consider that for each \ac{RIR} $i$, the delays and frequency-independent attenuation coefficients of $R$ early echoes are known, denoted here as $\tau_i^{(r)}$ and $\alpha_i^{(r)}$.
In the frequency domain, this translates into the following:
\begin{equation}\label{sec:appl:echomodel}
\ensuremath{\mathbf{h}}(f) = \left[ \sum_{r=0}^{R-1} \alpha_i^{(r)} \, \exp \left( 2\pi j f \tau_{i}^{(r)} \right) \right]_i,
\end{equation}
where $r = 0, \ldots, R - 1$ denotes the reflection order.
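For concreteness, such echo-aware steering vectors can be assembled directly from annotated delays and attenuations; the Python sketch below (a minimal illustration, not the evaluation code used later) follows the model above with the same sign convention:
\begin{verbatim}
# Sketch: frequency-domain rake steering vectors from known echo parameters.
import numpy as np

def rake_steering(freqs, taus, alphas):
    """freqs: (F,) Hz; taus, alphas: (I, R) per-mic echo delays [s] and gains."""
    # h[f, i] = sum_r alpha[i, r] * exp(2j*pi*f*tau[i, r]) as in the echo model
    phases = np.exp(2j * np.pi * freqs[:, None, None] * taus[None, :, :])
    return np.einsum("ir,fir->fi", alphas, phases)

# With R = 1 (direct path only) this reduces to the delay-and-sum (DS)
# steering vector; larger R "rakes" the annotated early echoes.
\end{verbatim}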
Recently, these methods have been used for noise and interferer suppression in~\cite{dokmanic2015raking,scheibler2015raking} and for noise and reverberation reduction in~\cite{javed2016spherical,kowalczyk2019raking}.
The main limitation of these works is that echo properties, or alternatively the position of image sources, must be known \textit{a priori}.
Hereafter, we will assume these properties known by using the annotations of the dEchorate dataset, as described in~\Cref{subsec:annotation}. In particular, we will assume that the \acp{RIR} follow the echo model~(\ref{sec:appl:echomodel}).
yshift=5pt]{ $\shape$ }
([xshift=+10pt, yshift=-24pt]\tikztostart.south)
--
node[below, sloped]{ \scalebox{.7}{$\Discrete$} }
([xshift=+10pt]\tikztotarget.south)
}
]
&[9pt]
\!\!\!\!\!\!\!\!\!\!
\categorybox{
\!\!
\GEquivariant\SmoothInfinityGroupoids
\!\!
}
\!\!\!\!\!\!\!\!\!\!\!
\ar[d, "\Shape"{sloped}]
\\[+3pt]
&
&
&
\categorybox{
\!\!
\Slice{\GloballyEquivariant\InfinityGroupoids}{\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}}
\!\!
}
\ar[r, "G\Orbi\Smooth"{swap}]
&
\!\!\!
\categorybox{
\!\!
\GEquivariant\InfinityGroupoids
\!\!
}
\!\!\!
\\[-24pt]
&&&
\mathpalette\mathclapinternal{
\mbox{
\tiny
\bf
\begin{tabular}{c}
\color{gray}
slice of base $\infty$-topos of
\\
\color{darkblue}
global equivariant
homotopy theory
\end{tabular}
}
}
&
\mathpalette\mathclapinternal{
\mbox{
\tiny
\bf
\begin{tabular}{c}
\color{gray}
sub-$\infty$-topos of
\\
\color{darkblue}
proper $G$-equivariant
\\
\color{darkblue}
homotopy theory
\end{tabular}
}
}
\end{tikzcd}
$$
\medskip
\noindent
{\bf Elmendorf's theorem as a generalized equivariant Oka principle.}
On the right, $\GEquivariant\InfinityGroupoids$
(Ex. \ref{ClassicalEquivariantHomotopyTheory})
denotes the $\infty$-presheaves over the {\it category of $G$-orbits} (Def.
\ref{GOrbitCategory}). This is the modern context of $G$-equivariant homotopy theory, traditionally motivated by {\it Elmendorf's theorem} (\cite{Elmendorf}\cite{DwyerKan84}, recalled as Prop. \ref{ElmendorfDwyerKanTheorem} below). With the hindsight of cohesive homotopy theory, this classical theorem may conceptually be understood as {\it enforcing} an ``equivariant Oka principle'': For $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \TopologicalSpace$ a $G$-CW-complex (Ex. \ref{GCWComplexesAreCofibrantObjectsInProperEquivariantModelcategory}) and $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} Y$ any topological $G$-space, the simplicial enhancement of Elmendorf's theorem (\cite[Thm. 3.1]{DwyerKan84}, see also \cite{CordierPorter96}\cite[Thm. 1.3.8]{Blu17}) says, after the above embedding into singular-cohesive homotopy theory (by Prop. \ref{OrbiSpaceIncarnationOfGSpaceIsOrbisingularizationOfHomotopyQuotient} below), that one may take the shape operation inside the equivariant mapping stack {\it if}
in the process one enhances $G$-orbifolds $\HomotopyQuotient{\mathrm{X}}{G}$ to
their {\it orbi-singularization}
$\rotatebox[origin=c]{70}{$\prec$}(\HomotopyQuotient{\mathrm{X}}{G})$
(Ex. \ref{CohesiveFormulationOfEDKTheoremForDiscreteEquivarianceGroups} below):
\begin{equation}
\label{ElmendorfTheorem}
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\def.9{.9}
\begin{tabular}{c}
\hspace{-.2cm}
shape of equivariant
mapping stack
\end{tabular}
}
}
}{
\shape
\,
\SliceMaps{\big}{\mathbf{B}G}
{
\HomotopyQuotient
{ \mathrm{X} }
{ G }
}
{
\HomotopyQuotient
{ \mathrm{Y} }
{ G }
}
}
\;\;\;
\underset{
\mathpalette\mathclapinternal{
\raisebox{-12pt}{
\tiny
\color{greenii}
\bf
\def.9{.9}
\begin{tabular}{c}
Elmendorf-Dwyer-Kan
\\
theorem
\end{tabular}
}
}
}{
\simeq
}
\;\;\;
\rotatebox[origin=c]{70}{$\subset$}
\overset{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
mapping space of
equivariant shapes
\end{tabular}
}
}{
\SliceMaps{\big}{\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}}
{
\shape
\rotatebox[origin=c]{70}{$\prec$}
(
\HomotopyQuotient
{ \mathrm{X} }
{ G }
)
}
{
\shape
\rotatebox[origin=c]{70}{$\prec$}
(
\HomotopyQuotient
{ \mathrm{Y} }
{ G }
)
}
\mathpalette\mathrlapinternal{\,.}
}
\end{equation}
\medskip
\noindent
{\bf Proper equivariant classifying spaces.}
Therefore, chasing a topological $G$-space
through the above diagram produces its
usual proper-equivariant homotopy type as a
$G$-orbi-space (Prop. \ref{OrbiSpaceIncarnationOfGSpaceIsOrbisingularizationOfHomotopyQuotient}).
But we may now also feed the above moduli stack
$\HomotopyQuotient{\mathbf{B}\Gamma}{G}$ of $G$-equivariant $\Gamma$-principal
bundles through this machine, and we find
(Thm. \ref{MurayamaShimakawaGroupoidIsEquivariantModuliStack})
that the resulting
equivariant homotopy type is that of the Murayama-Shimakawa construction
(\cite{MurayamaShimakawa95}\cite{GuillouMayMerling17}, \cref{ConstructionOfUniversalEquivariantPrincipalBundles}):
$$
\overset{
\mathpalette\mathclapinternal{
\raisebox{6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
proper equivariant classifying space
\\
for equivariant principal bundles
\end{tabular}
}
}
}{
\EquivariantClassifyingShape{G}{\Gamma}
\;\;\coloneqq\;\;
\smoothrelativeG
\;\,
\shape
\,\,
\rotatebox[origin=c]{70}{$\prec$}
\,\,
(\HomotopyQuotient{\mathbf{B}\Gamma}{G})
}
\;\;\;
:
\;\;\;
G/H
\;\;\;
\longmapsto
\;\;\;\;
\overset{
\mathpalette\mathclapinternal{
\raisebox{6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
concordances of equivariant
\\
bundles over the point
\end{tabular}
}
}
}{
\shape
\,
\SliceMaps{}{\mathbf{B}G}
{ \mathbf{B}H }
{ \HomotopyQuotient{\mathbf{B}\Gamma}{G} }
}
\;\;\;
\simeq
\;\;\;
\overset{
\mathpalette\mathclapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
Murayama-Shimakawa construction
\end{tabular}
}
}
}{
\SingularSimplicialComplex
\,
\TopologicalRealization{}
{
\Maps{}
{ \mathbf{E}G }
{ \mathbf{B}\Gamma }
}^H
}
$$
In the special case when
($G$-singularities are resolvable and)
$\shape \, \Gamma$ is truncated, the
above orbi-smooth Oka principle
\eqref{OrbiSmoothOkaPrincipleInIntroduction}
applies
to these values of the equivariant classifying space and gives
(Prop. \ref{EquivariantClassifyingShapeOfTruncatedTopologicalGroupsCoincidesWithThatOftheirShape}):
$$
\hspace{-2mm}
\left.
\arraycolsep=2pt
\begin{array}{clll}
&
\mbox{\small $G$ discrete with resolvable singularities}
\\
\mbox{\small \&}
&
\mbox{\small $\Gamma$ topological of truncated shape}
\\
\mbox{\small \&}
&
\mbox{\small $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \SmoothManifold$ a proper smooth $G$-manifold}
\end{array}
\right\}
\;
\vdash
\quad
\EquivariantClassifyingShape{G}{\Gamma}
\;\simeq\;
\EquivariantClassifyingShape{G}{(\shape \, \Gamma)}
\;
:
\;
G/H
\;
\longmapsto
\quad
\def.9{0}
\begin{aligned}
& \SliceMaps{}{B G}
{ B H }
{ \HomotopyQuotient{B \Gamma}{G} }
\\
&
\phantom{AAA}
\simeq
\;
\SingularSimplicialComplex
\,
\TopologicalRealization{\big}
{
\Maps{}
{ \TopologicalRealization{}{\mathbf{E}G} }
{ \TopologicalRealization{}{\mathbf{B}\Gamma} }
}
^H
.
\end{aligned}
$$
\medskip
\noindent
{\bf Classification statement in proper-equivariant cohomology.}
These proper-equivariant homotopy types are the coefficient
systems that represent Borel-equivariant cohomology inside proper-equivariant cohomology
(Prop. \ref{ProperEquivariantCohomologySubsumesBorelEquivariantCohomology}),
so that our classification result may be re-stated in the following
equivalent forms (Thm. \ref{ProperClassificationOfEquivariantBundlesForResolvableSingularitiesAndEquivariantStructure}):
$$
\hspace{-1mm}
\left.
\arraycolsep=2pt
\begin{array}{clll}
&
\small \mbox{$G$ discrete with resolvable singularities}
\\
\mbox{\small \&}
&
\small \mbox{$\Gamma$ Hausdorff of truncated shape}
\\
\mbox{\small \&}
&
\mbox{\small $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \SmoothManifold$ a proper smooth $G$-manifold}
\end{array}
\right\}
\;
\vdash
\;
\left\{
\hspace{-.6cm}
\def.9{1.6}
\begin{array}{lll}
&
\IsomorphismClasses{
\EquivariantPrincipalFiberBundles{G}{\Gamma}
(\DTopologicalSpaces)_{\SmoothManifold}^{\stable}
}
&
\hspace{-.4cm}
\raisebox{1pt}{
\tiny
\color{darkblue}
\bf
\def.9{.9}
\begin{tabular}{l}
isom. classes of
equivariantly locally trivial
\\
stable equivariant principal topol. bundles
\end{tabular}
}
\\
&
\;\;
\simeq
\;\;
H^1_G
(X;\, \shape\,\Gamma)
\;
=
\;
H^0_G
(X;\, B \Gamma)
&
\hspace{-.35cm}
\def\arraystretch{.9}
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{l}
Borel-equivariant cohomology with
\\
coefficients in classifying space
\end{tabular}
}
\\
&
\;\;
\simeq
\;\;
H^0_{\scalebox{.7}{$\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}$}}
\left(X;\, \EquivariantClassifyingShape{G}{(\shape\, \Gamma)}\right)
\;
\simeq
\;
H^0_{\scalebox{.7}{$\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}$}}
\left(
X;\,
(\EquivariantClassifyingShape{G}{\Gamma})^{\stable}
\right)
&
\hspace{-.4cm}
\raisebox{1pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{l}
proper-equivariant cohomology with
\\
coefficients in equivariant classifying space.
\end{tabular}
}
\end{array}
\right.
$$
Specialized to trivial $G$-action on $\Gamma$ and the
cases where
$\Gamma$ is either a 1-truncated compact Lie group or
the projective unitary group $\PUH$,
this theorem reproduces
(when $\HomotopyQuotient{\SmoothManifold}{G}$ is a good orbifold with resolvable
singularities)
a series of statements found in the literature
(Ex. \ref{TruncatedStructureGroupsAndTheirEquivariantClassificationResults},
Ex. \ref{EquivariantBundlesServingAsGeoemtricTwistsOfEquivariantKTheory}).
\medskip
\medskip
\noindent
{\bf Outline.}
In order to make the presentation of these results reasonably self-contained
also for the non-expert reader, we lay out a fair bit of the required
background in \cref{EquivariantTopology} (equivariant differential topology)
and \cref{Generalities} (equivariant cohesive homotopy theory).
The new constructions and results are the content of
\cref{EquivariantPrincipalTopologicalBundles} (equivariant topological bundles)
and, particularly, of \cref{EquivariantInfinityBundles}
(equivariant $\infty$-bundles).
\medskip
\medskip
\noindent
{\bf Conclusion and outlook.}
In conclusion, we find that the unifying mechanism
behind a large class of equivariant classification results and their
generalization to higher truncated structure groups is a smooth Oka principle
in cohesive homotopy theory, which seamlessly embeds the theory of equivariant bundles
into a transparent modal homotopy theory of higher geometry,
in particular into the context of higher principal bundles over orbifolds
and more general cohesive orbispaces.
\medskip
By way of outlook, notice that
there is a large supply of equivariance groups with resolvable singularities
(Prop. \ref{ExistenceOfSmoothSphericaSpaceForms})
including key examples of interest in applications
(Ex. \ref{ADEGroupsHaveSphericalSpaceForms});
and
there is a large supply of truncated structure groups,
as every $\infty$-group is the shape
of some Hausdorff topological group (Prop. \ref{AllBareInfinityGroupsAreShapesOfTopologicalGroups}).
In particular, for $R$ a ring spectrum and $\mathrm{GL}(1,R)$ its
$\infty$-group of units (see \cite[Ex. 2.37]{FSS20CharacterMap} for pointers),
there is for every $n$ a
Hausdorff group $\Gamma$ with shape its $n$-truncation $\shape \, \Gamma \;\simeq\;
\Truncation{n} \mathrm{GL}(1,R)$. The corresponding $G$-equivariant $\Gamma$-principal
bundles are candidate geometric twists for equivariant $R$-cohomology theory,
generalizing the archetypical case of twisted complex K-theory
(where $R = \KU$, $\Gamma = \GradedPUH$
with $\shape \, \GradedPUH \,\simeq\, \Truncation{2} \, \mathrm{GL}(1,\KU)$).
\medskip
In all these cases, the classification theorem of \cref{EquivariantModuliStacks}
shows, in particular, that the equivariant classifying spaces of twists are, generically,
equivariantly non-simply connected,
which means that many traditional tools, notably
of rational homotopy theory, do not apply without extra care.
For example, in the case of twisted complex K-theory,
this fact
(see Ex. \ref{OrbiSmoothOkaPrincipleForPUHCoefficientsOverThePoint}
and \eqref{EquivariantHomotopyGroupsOfClassifyingShapeOfStableEquivariantPUHBundles}),
explains, we claim,
the otherwise somewhat unexpected
(cf. \cite[(3.22)]{FreedHopkinsTeleman02ComplexCoefficients})
appearance of local systems in the twisted equivariant Chern character
(\cite[Def. 3.10]{TuXu06}). Generally, one may use the equivariant
classifying theory developed here to give a general construction of
twisted equivariant Chern-Dold character maps and hence of
twisted equivariant differential cohomology theories, in
equivariant generalization (along the lines of \cite[\S 3]{SS20EquivariantTwistorial})
of the construction in \cite[Def. 5.4]{FSS20CharacterMap}:
$$
\begin{tikzcd}[column sep=huge]
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
universal local coefficient bundle
\\
for $\Gamma$-twisted $G$-equivariant
\\
$A$-cohomology theory
\end{tabular}
}
}
}{
\shape
\,
\rotatebox[origin=c]{70}{$\prec$}
\,
\left(
\HomotopyQuotient{A}{(\HomotopyQuotient{\Gamma}{G})}
\right)
}
\ar[
rr,
"{
\mbox{\hspace{-7mm}
\tiny
\begin{tabular}{c}
\color{greenii}
\bf
proper equivariant sliced rationalization
\\
representing the
\\
\color{greenii}
\bf
twisted equivariant $A$-character map
\end{tabular}
}
}"{swap, yshift=-7.5pt, xshift=12pt}
]
\ar[d]
&&
L_{\mathbb{Q}}
\left(
\shape
\,
\rotatebox[origin=c]{70}{$\prec$}
\,
\left(
\HomotopyQuotient{A}{(\HomotopyQuotient{\Gamma}{G})}
\right)
\right)
\ar[d]
\\
\underset{
\mathpalette\mathclapinternal{
\def\arraystretch{.9}
\raisebox{-5pt}{
\tiny
\color{darkblue}
\begin{tabular}{c}
\bf
equivariant classifying
\\
{\bf
space of twists }
{\color{black}
(\cref{EquivariantModuliStacks})
}
\end{tabular}
}
}
}{
\EquivariantClassifyingShape{G}{\Gamma}
}
\ar[rr]
&&
\underset{
\raisebox{-3pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
\end{tabular}
}
}{
L_{\mathbb{Q}}
\left(
\EquivariantClassifyingShape{G}{\Gamma}
\right)
}
\end{tikzcd}
$$
The homotopy pullback
(formed in $\SingularSmoothInfinityGroupoids$)
of stacks of flat equivariant $L_\infty$-algebra valued differential forms
(\cite[Def. 3.57]{SS20EquivariantTwistorial})
along this rationalization operation
(in direct equivariant generalization of \cite[Def. 4.38]{FSS20CharacterMap})
solves the open problem of providing a general construction of
$G$-equivariant $\Gamma$-twisted {\it differential} $A$-cohomology.
This application will be discussed in detail in
\cite{TwistedEquivariantChernCharacter}\cite{TwistedEquivariantDifferentialCohomology}.
\newpage
\section{Tools and Techniques}
\label{ToolsAndTechniques}
\noindent
{\bf Higher cohesive geometry as intrinsically equivariant geometry.}
The point of
{\it higher homotopical geometry}
is (see \cite[p. 4-5]{SS20OrbifoldCohomology})
that the notion and presence of {\it gauge transformations} (homotopies) and
{\it higher gauge-of-gauge transformations}
is natively built into
the theory, so that absolutely every concept formulated in higher
geometry is {\it intrinsically} equivariant with respect to all relevant
symmetries.
This makes higher geometry the natural context for laying foundations
for equivariant algebraic topology.
\vspace{-3mm}
\begin{equation}
\label{GaugeTransformations}
\overset{
\mathpalette\mathclapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
$\Sigma$-shaped probes of
\\
higher geometric space $\mathcal{X}$
\\
\phantom{a}
\end{tabular}
}
}
}{
\underset{
\mathpalette\mathclapinternal{
\raisebox{-4pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
\\
value of $\infty$-stack $\mathcal{X}$
\\
on site-object $\Sigma$
\end{tabular}
}
}
}{
\mathcal{X}(\Sigma)
}
}
\qquad
=
\;\;
\left\{\quad
{\phantom{\mbox{\tiny\bf domain}}}
\begin{tikzcd}[column sep=large]
\mathpalette\mathllapinternal{
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
probe
\\
space
\\
(`` brane")
\end{tabular}
}
\!\!\!\!
}
\Sigma
\ar[
rr,
bend left=60,
"\mbox{\tiny\color{greenii}\bf configuration}"{description},
"\ "{name=sl, xshift=-12pt,below},
"\ "{name=sr, xshift=+12pt,below}
]
\ar[
rr,
bend right=60,
"\mbox{\tiny\color{greenii}\bf configuration}"{description},
"\ "{name=tl, xshift=-12pt, above},
"\ "{name=tr, xshift=+12pt, above}
]
\ar[
from=sr,
to=tr,
Rightarrow,
bend left=50,
"\mbox{\tiny\color{orangeii}\bf gauge transf.}"{sloped, description},
"\ "{name=t2, left}
]
\ar[
from=sl,
to=tl,
Rightarrow,
bend right=50,
"\rotatebox{0}{\tiny\color{orangeii}\bf gauge transf.}"{sloped, description},
"\ "{name=s2, right}
]
\ar[
from=s2,
to=t2,
dashed,
Rightarrow,
"\mbox{\tiny\color{purple}\bf gauge-of-gauge}"{above},
"\mbox{\tiny\color{purple}\bf transformations}"{below}
]
\ar[
from=s2,
to=t2,
-,
dashed
]
&{\phantom{A}}&
\mathcal{X}
\mathpalette\mathrlapinternal{
\!\!\!\!
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
field
\\
space
\end{tabular}
}
}
\end{tikzcd}
{\phantom{\mbox{\tiny\bf codomain}}}
\!\!\!\!\!\!\right\}
\;\;
\;\in\; \InfinityGroupoids
\,.
\end{equation}
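\noindent
For instance (a standard special case, recalled only for orientation):
in the smooth higher geometry of \cref{GeneralCohesion} below, where the probe spaces
$\Sigma$ are the Cartesian spaces $\mathbb{R}^n$, and for
$\mathcal{X} \,=\, \mathbf{B}G$ the moduli stack of principal bundles of a Lie group $G$,
the $\infty$-groupoid of probes \eqref{GaugeTransformations}
is equivalent to the groupoid with a single configuration
whose gauge transformations are the smooth $G$-valued functions:
\vspace{-1mm}
$$
  \mathbf{B}G(\mathbb{R}^n)
  \;\;\simeq\;\;
  \ast \!\sslash\! C^\infty(\mathbb{R}^n,\, G)
  \,.
$$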
\medskip
\noindent
{\bf The machinery of $\infty$-category theory.}
While higher stacks and their higher geometry are often perceived as an
esoteric and convoluted mathematical subject, this is rather a property
of their {presentation} by models in simplicial homotopy theory
(reviewed in \cref{AbstractHomotopyTheory}, \cite[\S A]{FSS20CharacterMap})
and must be understood as a reflection of
their richness, not of their intractability.
Indeed, with
{\it $\infty$-category theory}
\cite{Joyal08Notes}\cite{Joyal08Theory}\cite{Lurie09HTT}\cite{Cisinski19}\cite{RiehlVerity21}
(see \cref{AbstractHomotopyTheory}),
and specifically with
{\it $\infty$-topos theory}
\cite{Simpson99}\cite{Lurie03}\cite{ToenVezzosi05}\cite{Joyal08Logoi}\cite{Lurie09HTT}\cite{Rezk10}
(see \cref{ToposTheory}),
there is a high-level language,
abstracting away from the zoo of models (e.g. \cite{Bergner07Survey}),
which admits efficient reasoning about higher stacks via elementary categorical logic.
This point is made fully manifest
by the existence of an elementary internal logic of $\infty$-toposes \cite{Shulman19},
now known as Homotopy Type Theory \cite{UFP13}
(in our context see \cite{SchreiberShulman14}\cite[p. 5]{SS20OrbifoldCohomology})
which condenses all such reasoning to coding in a kind of programming language.
\medskip
For example, once Cartesian (pullback) squares are understood as
homotopy Cartesian squares, namely as squares filled by a homotopy which
exhibits the expected factorization property uniquely up to homotopy, in the sense of
a contractible space of homotopy-factorizations:
\begin{equation}
\label{HomotopyCartesianSquare}
\begin{tikzcd}
X \times_B Y
\ar[
r,
"\ "{swap, pos=.8, name=s}
]
\ar[d]
&
Y
\ar[d]
\\
X
\ar[
r,
"\ "{pos=.2, name=t},
"{
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
homotopy Cartesian square
{\color{black}/}
\\
homotopy pullback square
\end{tabular}
}
}"{swap, yshift=-6pt}
]
&
B
%
\ar[
from=s,
to=t,
Rightarrow,
"{ \mbox{\tiny\rm(pb)} }"
]
\end{tikzcd}
\;\;\;
\Rightarrow
\qquad
\left(
\begin{tikzcd}
Q
\ar[
drr,
bend left=30,
"\ "{swap, name=s2, pos=.7}
]
\ar[
ddr,
bend right=30,
"\ "{name=t2, pos=.7}
]
\\[-15pt]
&[-15pt]
&
Y
\ar[d]
\\
&
X
\ar[
r,
"\ "{pos=.2, name=t}
]
&
B
%
\ar[
from=s2,
to=t2,
Rightarrow
]
\end{tikzcd}
\;\;\;
\Leftrightarrow
\;\;\;
\begin{tikzcd}
Q
\ar[
drr,
bend left=30,
"\ "{swap, name=s2, pos=.4}
]
\ar[
ddr,
bend right=30,
"\ "{name=t2, pos=.35}
]
\ar[
dr,
dashed,
"\ "{name=t3},
"\ "{swap, name=s3},
]
\\[-10pt]
&[-15pt]
X \times_B Y
\ar[
r,
"\ "{swap, pos=.4, name=s}
]
\ar[d]
&
Y
\ar[d]
\\
&
X
\ar[
r,
"\ "{pos=.2, name=t}
]
&
B
%
\ar[
from=s2,
to=t3,
Rightarrow
]
\ar[
from=s3,
to=t2,
Rightarrow
]
\ar[
from=s,
to=t,
Rightarrow
]
\end{tikzcd}
\right)
\,,
\end{equation}
then they follow patterns familiar from 1-category theory:
For instance, the {\it pasting law} in 1-categories (recalled as Prop. \ref{PastingLaw} below) continues to hold verbatim in any $\infty$-category \cite[Lem. 4.4.2.1]{Lurie09HTT}:
\begin{equation}
\label{HomotopyPastingLaw}
\begin{tikzcd}
X
\ar[
r,
"\ "{swap, name=s1, pos=.9}
]
\ar[d]
&
Y
\ar[d]
\ar[
r,
"\ "{swap, name=s2, pos=.9}
]
&
Z
\ar[d]
\\
A
\ar[
r,
"\ "{name=t1, pos=.1}
]
&
B
\ar[
r,
"\ "{name=t2, pos=.1}
]
&
C
%
\ar[
from=s1,
to=t1,
Rightarrow,
"\mbox{\tiny\rm(pb)}"{description}
]
\ar[
from=s2,
to=t2,
Rightarrow,
]
\end{tikzcd}
\quad \Rightarrow\quad
\left(
\begin{tikzcd}
X
\ar[
rr,
"\ "{swap, name=s, pos=.9}
]
\ar[d]
&
&
Z
\ar[d]
\\
A
\ar[
rr,
"\ "{name=t, pos=.1}
]
&
&
C
%
\ar[
from=s,
to=t,
Rightarrow,
"\mbox{\tiny\rm(pb)}"{description}
]
\end{tikzcd}
\;\;\Leftrightarrow\;\;
\begin{tikzcd}
X
\ar[
r,
"\ "{swap, name=s1, pos=.9}
]
\ar[d]
&
Y
\ar[d]
\ar[
r,
"\ "{swap, name=s2, pos=.9}
]
&
Z
\ar[d]
\\
A
\ar[
r,
"\ "{name=t1, pos=.1}
]
&
B
\ar[
r,
"\ "{name=t2, pos=.1}
]
&
C
%
\ar[
from=s1,
to=t1,
Rightarrow,
"\mbox{\tiny\rm(pb)}"{description}
]
\ar[
from=s2,
to=t2,
Rightarrow,
"\mbox{\tiny\rm(pb)}"{description}
]
\end{tikzcd}
\right)
\,;
\end{equation}
and it still makes sense, for instance, to say that a morphism $f$ is a {\it monomorphism}
if and only if its homotopy fiber product with itself is equivalently its domain
(\cite[p. 575]{Lurie09HTT}\cite[p. 10]{Rezk19}, see also Ex. \ref{MonomorphismsOfInfinityGroupoids}):
\vspace{-.3cm}
\begin{equation}
\label{InfinityMonomorphism}
\begin{tikzcd}
X
\ar[
r,
hook,
"f",
"{
\mbox{
\tiny
\rm
\color{greenii}
monomorphism
}
}"{swap}
]
&[+12pt]
Y
\end{tikzcd}
\;\;\;\;\;\;\;\;
\Leftrightarrow
\;\;\;\;\;\;\;\;
\begin{tikzcd}[column sep=large]
X
\ar[r, "\mathrm{id}", "\ "{swap, pos=.9, name=s}]
\ar[d, "\mathrm{id}"{swap}]
&
X
\ar[d, "f"]
\\
X
\ar[r, "f"{swap}, "\ "{pos=.1, name=t}]
&
Y
\mathpalette\mathrlapinternal{\,.}
%
\ar[
from=s,
to=t,
Rightarrow,
"{ \mbox{\tiny\rm(pb)} }"{description}
]
\end{tikzcd}
\end{equation}
\noindent
(In the following we will leave the homotopies filling these squares notationally implicit.)
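\medskip
For example, in the classical homotopy theory of topological spaces, the strict pullback
$X \times_B Y$ already computes the homotopy pullback \eqref{HomotopyCartesianSquare}
whenever one of the two maps, say $f \colon Y \to B$, is a Serre fibration;
in particular, the homotopy fiber of a fibration over a point $b \in B$
is its ordinary fiber:
\vspace{-1mm}
$$
  \mathrm{fib}_b(f)
  \;\simeq\;
  f^{-1}(b)
  \;=\;
  \{b\} \times_B Y
  \,.
$$
In general, homotopy pullbacks of topological spaces are computed only after
first replacing one of the two maps by a fibration.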
\medskip
As another example: For any $\infty$-category $\mathbf{C}$ and any pair
$X, A$ of its objects, we have their {\it hom(omorphism) $\infty$-groupoids}
(see \cite{DuggerSpivak11})
\vspace{-2mm}
\begin{equation}
\label{HomSpace}
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
hom-$\infty$-groupoid
\\
from $X$ to $A$
\\
in $\infty$-category $\mathbf{C}$
\\
\phantom{a}
\end{tabular}
}
}
}{
\mathbf{C}(X,A)
}
\quad
=
\;\;
\left\{
{\phantom{\mbox{\tiny\bf domain}}}
\begin{tikzcd}[column sep=large]
\mathpalette\mathllapinternal{
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
domain
\\
object
\end{tabular}
}
\!\!\!\!
}
X
\ar[
rr,
bend left=60,
"\mbox{\tiny\color{greenii}\bf homomorphism}"{description},
"\ "{name=sl, xshift=-8pt,below},
"\ "{name=sr, xshift=+8pt,below}
]
\ar[
rr,
bend right=60,
"\mbox{\tiny\color{greenii}\bf homomorphism}"{description},
"\ "{name=tl, xshift=-8pt, above},
"\ "{name=tr, xshift=+8pt, above}
]
\ar[
from=sr,
to=tr,
Rightarrow,
bend left=50,
"\rotatebox{180}{\tiny\color{orangeii}\bf homotopy}"{sloped, description},
"\ "{name=t2, left}
]
\ar[
from=sl,
to=tl,
Rightarrow,
bend right=50,
"\rotatebox{0}{\tiny\color{orangeii}\bf homotopy}"{sloped, description},
"\ "{name=s2, right}
]
\ar[
from=s2,
to=t2,
dashed,
Rightarrow,
"\mbox{\tiny\color{purple}\bf higher}"{above},
"\mbox{\tiny\color{purple}\bf homotopies}"{below}
]
\ar[
from=s2,
to=t2,
-,
dashed
]
&{\phantom{A}}&
A
\mathpalette\mathrlapinternal{
\!\!\!\!
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
codomain
\\
object
\end{tabular}
}
}
\end{tikzcd}
{\phantom{\mbox{\tiny\bf codomain}}}
\right\}
\;\;
\;\in\; \InfinityGroupoids
\,,
\end{equation}
which are well-defined (model-independent) up to
(weak homotopy-)equivalence,
and homotopy functorial in the two arguments in the expected way.
\medskip
For instance, an $\infty$-functor is {\it fully faithful} if it
induces natural equivalences of hom-$\infty$-groupoids \eqref{HomSpace}:
\vspace{-2mm}
\begin{equation}
\label{FullyFaithfulInfinityFunctor}
\begin{tikzcd}
\mathbf{C}
\ar[
rr,
hook,
"{F}",
"{
\mbox{
\tiny
\color{greenii}
\bf
fully faithful
}
}"{swap}
]
&&
\mathbf{D}
\end{tikzcd}
\hspace{1cm}
\Leftrightarrow
\hspace{1cm}
\begin{tikzcd}
\mathbf{C}(-,\,-)
\ar[
rr,
"F_{(-,-)}",
"\sim"{swap}
]
&&
\mathbf{D}
\left(
F(-)
,\,
F(-)
\right)\,.
\end{tikzcd}
\end{equation}
\vspace{-1mm}
\noindent
Moreover, the hom $\infty$-functor satisfies the expected category-theoretic properties
in homotopy-theoretic form:
\vspace{1mm}
\noindent
{\bf (i)}
hom-$\infty$-groupoids respect homotopy (co)-limits via natural equivalences of the form
\vspace{-1mm}
\begin{equation}
\label{HomFunctorRespectsLimits}
\mathbf{C}
\Big(
\underset{\underset{i \in \mathcal{I}}{\longrightarrow}}{\lim}
\,
X_i,
\,
\underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim}
\,
A_j
\Big)
\;\;
\simeq
\;\;
\underset{\underset{i \in \mathcal{I}}{\longleftarrow}}{\lim}
\;
\underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim}
\;
\mathbf{C}
\left(
X_i,
\,
A_j
\right)
\,;
\end{equation}
\vspace{-2mm}
\noindent
{\bf (ii)}
for a pair of {\it adjoint functors} between $\infty$-categories,
there is a natural equivalence between these hom-$\infty$-groupoids \eqref{HomSpace},
of the usual form
\cite[p. 159]{Joyal08Theory}\cite[Def. 5.2.2.7]{Lurie09HTT}\cite[Prop. F.5.6]{RiehlVerity21}:
\vspace{-2mm}
\begin{equation}
\label{AdjunctionAndHomEquivalence}
\begin{tikzcd}[column sep=large]
\mathbf{D}
\ar[
rr,
shift right=7pt,
"R"{description},
"\mbox{\tiny\color{greenii}\bf right adjoint}"{below,yshift=-1pt}
]
\ar[
rr,
phantom,
"\scalebox{.7}{$\bot$}"
]
&&
\mathbf{C}
\ar[
ll,
shift right=7pt,
"L"{description},
"\mbox{\tiny\color{greenii}\bf left adjoint}"{above,yshift=1pt}
]
\end{tikzcd}
\;\;\;\;\;\;\;\;
\Leftrightarrow
\;\;\;\;\;\;\;\;
\underset{
\raisebox{-2pt}{
\tiny
\color{darkblue}
\bf
natural equivalence of hom $\infty$-groupoids
}
}{
\mathbf{C}
\left(
C, R(D)
\right)
\;\;
\simeq
\;\;
\mathbf{D}
\left(
L(C), D
\right)
}
\,;
\end{equation}
\noindent
{\bf (iii)} as the usual consequence of {\bf (i)} and {\bf (ii)}, right (left) $\infty$-adjoint
$\infty$-functors
preserve
$\infty$-limits ($\infty$-colimits) via natural equivalences:
\begin{equation}
\label{InfinityAdjointPreservesInfinityLimits}
R
\big(\,
\underset{
\underset{i \in \mathcal{I}}{\longleftarrow}
}{\mathrm{lim}}
\,
X_i
\big)
\;\;
\simeq
\;\;
\underset{
\underset{i \in \mathcal{I}}{\longleftarrow}
}{\mathrm{lim}}
\;
R
(
X_i
)
\,,
\phantom{AAAAA}
L
\big(\,
\underset{
\underset{i \in \mathcal{I}}{\longrightarrow}
}{\mathrm{lim}}
\,
X_i
\big)
\;\;
\simeq
\;\;
\underset{
\underset{i \in \mathcal{I}}{\longrightarrow}
}{\mathrm{lim}}
\;
L
(
X_i
)
\,.
\end{equation}
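For example, \eqref{HomFunctorRespectsLimits} applied to a coproduct in the first argument,
and \eqref{InfinityAdjointPreservesInfinityLimits} applied to the same coproduct, give natural equivalences
\vspace{-1mm}
$$
  \mathbf{C}(X_1 \sqcup X_2,\, A)
  \;\simeq\;
  \mathbf{C}(X_1,\, A)
  \times
  \mathbf{C}(X_2,\, A)
  \,,
  \phantom{AAAA}
  L(X_1 \sqcup X_2)
  \;\simeq\;
  L(X_1) \sqcup L(X_2)
  \,.
$$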
Therefore, with a good supply of systems of adjoint
$\infty$-functors \eqref{AdjunctionAndHomEquivalence}
--
which we gain by invoking {\it modal} and specifically
{\it cohesive} homotopy theory
(\cite[\S 3.1]{SSS09}\cite{dcct}\cite{SS20OrbifoldCohomology},
see \cref{CohesiveHomotopyTheory} below)
--
many proofs in higher geometry,
which may be formidable when done in simplicial components,
are reduced to formal manipulations
yielding strings of such natural equivalences.
This is how we prove the main theorems in
\cref{EquivariantInfinityBundles}.
\medskip
\noindent
{\bf Homotopy fibers and $\infty$-groups.}
While $\infty$-category theory thus parallels category theory in the abstract,
a key difference is that fiber sequences in $\infty$-categories
(whenever they exist)
are, generically, {\it long}:
\vspace{-2mm}
\begin{equation}
\label{GenericHomotopyFiberSequence}
\begin{tikzcd}[row sep=large, column sep=huge]
&
\cdots
\ar[r, "\mathrm{fib}^6(f)"]
\ar[
d,
phantom,
""{coordinate, name=t1}
]
&
\Omega^2 C
\ar[
dll,
rounded corners,
"\mathrm{fib}^5(f)"{description},
to path={
-- ([xshift=8pt]\tikztostart.east)
|- (t1) [pos=1]\tikztonodes
-| ([xshift=-8pt]\tikztotarget.west)
-- (\tikztotarget)
}
]
&
\\
\Omega A
\ar[r, "\mathrm{fib}^4(f)"]
&
\Omega B
\ar[r, "\mathrm{fib}^3(f)"]
\ar[
d,
phantom,
""{coordinate, name=t}
]
&
\Omega C
\ar[
dll,
rounded corners,
"\mathrm{fib}^2(f)"{description},
to path={
-- ([xshift=8pt]\tikztostart.east)
|- (t) [pos=1]\tikztonodes
-| ([xshift=-8pt]\tikztotarget.west)
-- (\tikztotarget)
}
]
\\
A
\ar[r, "\mathrm{fib}(f)"]
&
B
\ar[r, "f"]
&
C
\,.
\end{tikzcd}
\end{equation}
In particular, the (homotopy-)fiber of a point inclusion is not
in general trivial (as it necessarily is in 1-category theory),
but is the {\it looping}
\vspace{-2mm}
\begin{equation}
\label{LoopingInIntroduction}
\begin{tikzcd}[column sep=large]
\Omega_x X
\ar[r]
\ar[d]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\ast
\ar[
d,
"x"{right}
]
\\
\ast
\ar[
r,
"x"{below}
]
&
X
\mathpalette\mathrlapinternal{\,,}
\end{tikzcd}
\end{equation}
whence the long homotopy fiber sequences
\eqref{GenericHomotopyFiberSequence}
follow by the pasting law \eqref{HomotopyPastingLaw}:
\vspace{-1mm}
$$
\begin{tikzcd}[row sep=25pt, column sep=60pt]
\Omega^2 C
\ar[r]
\ar[
d,
"\scalebox{.85}{$\mathrm{fib}^5(f)$}"{description}
]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\ast
\ar[d]
\\
\Omega A
\ar[d]
\ar[
r,
"\scalebox{.85}{$\mathrm{fib}^4(f)$}"{description}
]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\Omega B
\ar[r]
\ar[
d,
"\scalebox{.85}{$\mathrm{fib}^3(f)$}"{description}
]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\ast
\ar[d]
\\
\ast
\ar[r]
&
\Omega C
\ar[
r,
"\scalebox{.85}{$\mathrm{fib}^2(f)$}"{description}
]
\ar[d]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
A
\ar[r]
\ar[
d,
"\scalebox{.85}{$\mathrm{fib}(f)$}"{description}
]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\ast
\ar[d]
\\
&
\ast
\ar[
r
]
&
B
\ar[
r,
"\scalebox{.85}{$f$}"{description}
]
&
C
\mathpalette\mathrlapinternal{\,.}
\end{tikzcd}
$$
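\noindent
For example, in the base $\infty$-topos $\InfinityGroupoids$ and for pointed objects,
passing to homotopy groups in such a long fiber sequence
\eqref{GenericHomotopyFiberSequence}
yields the classical long exact sequence of a fibration:
\vspace{-1mm}
$$
  \cdots
  \longrightarrow
  \pi_{n+1}(C)
  \longrightarrow
  \pi_{n}(A)
  \longrightarrow
  \pi_{n}(B)
  \longrightarrow
  \pi_{n}(C)
  \longrightarrow
  \pi_{n-1}(A)
  \longrightarrow
  \cdots
  \,.
$$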
\medskip
\noindent
{\bf $\infty$-Toposes.}
In $\infty$-categories $\Topos$ of higher stacks \eqref{GaugeTransformations},
namely in {\it $\infty$-toposes}
\cite{Lurie03}\cite{ToenVezzosi05}\cite{Lurie09HTT}\cite{Rezk10},
every group object
($\infty$-group) $\mathcal{G} \,\in\, \Groups(\Topos)$ arises,
uniquely up to equivalence, as the looping \eqref{LoopingInIntroduction}
of its {\it delooping stack} $\mathbf{B}\mathcal{G} \,\in\, \Topos$
(recalled in Prop. \ref{GroupsActionsAndFiberBundles}):
\vspace{-1mm}
\begin{equation}
\label{LoopingDeloopingInIntroduction}
\mathcal{G} \;\simeq\; \Omega_\ast \mathbf{B} \mathcal{G}
\,.
\end{equation}
Accordingly,
the connected components $\Truncation{0}(-)$ (see Prop. \ref{nTruncation} below)
of hom-$\infty$-groupoids
\eqref{HomSpace}
in $\infty$-toposes $\Topos$
may be understood
as (non-abelian, generalized) {\it cohomology theories}
(\cite[\S 2]{FSS20CharacterMap}\cite[p. 6]{SS20OrbifoldCohomology}):
\vspace{0mm}
$$
\underset{
\raisebox{-5pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
domain
\\
space
\end{tabular}
}
}{
X
},
\underset{
\raisebox{-5pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
coefficient stack{\color{black}/}
\\
classifying stack
\end{tabular}
}
}{
A
}\,\in\, \Topos
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\vdash
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\underset{
\mathpalette\mathclapinternal{
\raisebox{-4pt}{
\tiny
\color{orangeii}
\bf
\begin{tabular}{c}
$A$-cohomology of $X$
\\
in degree $\bullet$
\end{tabular}
}
}
}{
H^{\bullet}
\scalebox{1.1}{$($}
X,
\,
A
\scalebox{1.1}{$)$}
}
\;\;\coloneqq\;\;
\tau_0
\,
\Topos
\scalebox{1.1}{$($}
X,
\,
\Omega^{-\bullet} A
\scalebox{1.1}{$)$}
\,.
$$
Specifically, for $\mathcal{G} \,\in\, \Groups(\Topos)$
we have {\it first non-abelian cohomology sets}:
$$
H^1(X;\, \mathcal{G})
\;=\;
\Truncation{0}
\Topos(X; \mathbf{B}\mathcal{G})
\,.
$$
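For instance, in the base $\infty$-topos $\InfinityGroupoids$ and for $\mathcal{G}$
an ordinary discrete group, this reproduces, for connected $X$, the classical identification
\vspace{-1mm}
$$
  H^1(X;\, \mathcal{G})
  \;\simeq\;
  \mathrm{Hom}\big(\pi_1(X),\, \mathcal{G}\big)\big/\mathcal{G}
  \,,
$$
the set of group homomorphisms modulo conjugation, which classifies the
$\mathcal{G}$-covering spaces of $X$.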
Moreover, the {\it fundamental theorem of $\infty$-topos theory}
(\cite[Prop. 6.5.3.1]{Lurie09HTT}, see around Prop. \ref{SliceInfinityTopos} below) says that for every object
$B \,\in\, \Topos$ in an $\infty$-topos, the slice $\infty$-category
$\SliceTopos{B}$, with
\begin{equation}
\label{SliceHomInIntroduction}
\overset{
\mathpalette\mathclapinternal{
\raisebox{6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
slice hom $\infty$-groupoid
\\
in $\Topos$ over $B$
\end{tabular}
}
}
}{
\SlicePointsMaps{\big}{B}
{ (X_1, p_1) }
{ (X_2, p_2) }
}
\;\;
=
\;\;
\left\{
\begin{tikzcd}
X_1
\ar[
dr,
" "{name=t2, pos=.8},
" "{name=t2prime, pos=.6},
"p_1"{swap},
shorten=-4pt
]
\ar[
rr,
bend left=30pt,
"\ "{name=s, swap},
"\ "{name=sprime, pos=.7, swap},
shorten=-4pt
]
\ar[orangeii,
from=sprime,
to=t2prime,
dashed,
Rightarrow,
bend left=15pt,
shorten=-3pt
]
\ar[
rr,
bend right=30pt,
"\ "{name=t},
"\ "{name=s2,
pos=.65, swap},
crossing over,
shorten=-4pt
]
&&
X_2
\ar[
dl,
"p_2",
shorten=-4pt
]
\\[+16pt]
&
B
\ar[
from=ul,
shorten=-4pt
]
%
\ar[orangeii,
from=s,
to=t,
Rightarrow,
bend left=50pt,
shift left=7pt,
crossing over,
shorten=-1pt,
"\ "{name=s3, swap}
]
\ar[orangeii,
from=s,
to=t,
Rightarrow,
bend right=50pt,
shift right=7pt,
crossing over,
shorten=-1pt,
"\ "{name=t3}
]
\ar[
from=t3,
to=s3,
Rightarrow,
shorten=-3pt,
crossing over
]
\ar[
from=t3,
to=s3,
Rightarrow,
shorten=-3pt,
crossing over
]
\ar[
from=t3,
to=s3,
-,
shorten=-3pt
]
\ar[orangeii,
from=s2,
to=t2,
Rightarrow,
bend left=20pt,
crossing over,
shorten=-3pt
]
\end{tikzcd}
\right\}
\;\;\;\;
\in
\;
\InfinityGroupoids
\,,
\end{equation}
is itself an $\infty$-topos.
When $B \,=\, \mathbf{B}\mathcal{G}$, the cohomology in the
slice $\SliceTopos{\mathbf{B}\mathcal{G}}$ is
{\it Borel-$\mathcal{G}$-equivariant cohomology} in $\Topos$
(see Def. \ref{BorelEquivariantAndProperEquivariantCohomologyInCohesiveInfinityTopos} below):
\vspace{-1mm}
$$
\underset{
\mathpalette\mathclapinternal{
\raisebox{-3pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
Borel-equivariant
\\
$A$-cohomology of $X$
\end{tabular}
}
}
}{
H^\bullet_{\mathcal{G}}(X;\, A)
}
\;\;
\coloneqq
\;\;
\Truncation{0}
\,
\SlicePointsMaps{\big}{\mathbf{B}\mathcal{G}}
{ \HomotopyQuotient{X}{\mathcal{G}} }
{
\Omega_{\mathbf{B}\mathcal{G}}^{-\bullet}
(
\HomotopyQuotient{A}{\mathcal{G}}
)
}
\,.
$$
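For instance, for $X = \ast$ the point and for trivial $\mathcal{G}$-action on the
coefficient $A$, this Borel-equivariant cohomology reduces to the plain cohomology
of the delooping,
\vspace{-1mm}
$$
  H^\bullet_{\mathcal{G}}(\ast;\, A)
  \;\simeq\;
  H^\bullet(\mathbf{B}\mathcal{G};\, A)
  \,,
$$
which, for discrete $\mathcal{G}$ and Eilenberg-MacLane coefficients,
is ordinary group cohomology.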
This identification of Borel-equivariant cohomology with cohomology in the slice is a consequence of the following fact:
\medskip
\noindent
{\bf Transformation groups in an $\infty$-topos.}
The key fact which propels our general theory in
\cref{EquivariantInfinityBundles}
is
(following \cite{NSS12a} and \cite[\S 2.2]{SS20OrbifoldCohomology}):
that the notions of {\it groups},
of {\it group actions},
of {\it principal bundles}
and their {\it moduli stacks}
and associated {\it fiber bundles}
are all {\it native to $\infty$-topos theory}, in that these concepts and
their pertinent properties are available internal to any $\infty$-topos
without needing further axiomatization:
\begin{proposition}[Groups, actions and principal bundles in any $\infty$-topos
({Props. \ref{LoopingAndDeloopingEquivalence},
\ref{HomotopyQuotientsAndPrincipaInfinityBundles},
Thm. \ref{DeloopingGroupoidsAreModuliInfinityStacksForPrincipalInfinityBundles}
})]
\label{GroupsActionsAndFiberBundles}
$\,$
\noindent
Let $\Topos$ be an $\infty$-topos.
\noindent
{\bf (i)} The operation
of forming loop space objects \eqref{LoopingInIntroduction}
constitutes an equivalence\footnote{This is the \emph{May recognition theorem} \cite{May72}
generalized from $\InfinityGroupoids$ to $\infty$-toposes by
\cite[7.2.2.11]{Lurie09HTT}\cite[6.2.6.15]{Lurie17}.}
of
group objects with pointed connected objects in $\Topos$ (Ntn. \ref{ConnectedObject}):
\vspace{-6mm}
\begin{equation}
\label{LoopingAndDelooping}
\begin{tikzcd}[row sep=0pt]
\Groups(\Topos)
\ar[
rr,
shift right=5pt,
"\mathbf{B}"{below}
]
\ar[
rr,
phantom,
"{\scalebox{.8}{$\simeq$}}"
]
&&
\Topos^{\ast/}_{\geq 1}
\ar[
ll,
shift right=5pt,
"\Omega"{above}
]
\\
\scalebox{0.7}{$ \mathcal{G} $}
\ar[
rr,
|->
]
&&
\scalebox{0.7}{$ \ast \!\sslash\! \mathcal{G} $}
\end{tikzcd}
{\phantom{AAAA}}
\mbox{\rm i.e.}
{\phantom{AAAAA}}
\begin{tikzcd}[column sep=huge]
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
group stack
}
}
}{
\mathcal{G}
}
\ar[r]
\ar[d]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\overset{
\mathpalette\mathclapinternal{
\raisebox{5pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
base
\\
point
\end{tabular}
}
}
}{
\ast
}
\ar[
d,
"\mathrm{pt}_{\mathbf{B}G}"
]
\\
\ast
\ar[
r,
"\mathrm{pt}_{\mathbf{B}G}"
]
&
\mathbf{B}\mathcal{G}
\end{tikzcd}
\end{equation}
hence
\begin{equation}
\label{PointedConnectedObjectEquivalentToDeloopingOfItsLoopSpaceObject}
\mbox{
$
X
\,\in\,
\Topos^{\ast/}
$
is connected
}
{\phantom{AAAA}}
\Leftrightarrow
{\phantom{AAAA}}
X
\;\simeq\;
\mathbf{B} \Omega X
\,.
\end{equation}
\vspace{-1mm}
\noindent
{\bf (ii)}
For $\mathcal{G} \,\in\, \Groups(\Topos)$,
the $\mathcal{G}$-actions (Def. \ref{ActionObjectsInAnInfinityTopos})
and $\mathcal{G}$-principal bundles (Def. \ref{PrincipalInfinityBundles})
are both identified with the slice objects \eqref{SliceHomInIntroduction}
over the delooping $\mathbf{B} \mathcal{G}$ \eqref{GroupsActionsAndFiberBundles},
as follows:
\vspace{-2mm}
\begin{equation}
\label{EquivalenceBetweenActionsAndPrincipalBundlesAndSlices}
\hspace{0cm}
\begin{tikzcd}[row sep=-1pt, column sep=8pt]
\Actions{\mathcal{G}}(\Topos)
\ar[
rr,
"\sim"{above, yshift=1pt},
"(-) \!\sslash\! G"{below}
]
&&
\PrincipalBundles{\mathcal{G}}(\Topos)
&&
\Topos_{/\mathbf{B}\mathcal{G}}
\ar[
ll,
"\sim"{above, yshift=1pt},
"\mathrm{fib}"{below}
]
\\
\scalebox{0.7}{$ G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, P $}
&\longmapsto&
\scalebox{.7}{$ \left(\!\!\!\!
\def\arraystretch{.9}
\begin{array}{c}
P
\\
\downarrow
\\
P \!\sslash\! \mathcal{G}
\end{array}
\!\!\!\! \right)
$}
&\longmapsfrom&
\scalebox{.7}{$ \left(\!\!\!\!
\def\arraystretch{.9}
\begin{array}{c}
P \!\sslash\! \mathcal{G}
\\
\downarrow
\\
\mathbf{B}\mathcal{G}
\end{array}
\!\!\!\! \right)
$}
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent i.e.,
\vspace{-2mm}
\begin{equation}
\begin{tikzcd}[column sep=80pt]
\mathpalette\mathllapinternal{
\mbox{
\tiny
\color{greenii}
\bf
action
}
\;\;\;\;\;\;
}
\overset{
\mathpalette\mathclapinternal{
\raisebox{6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
$\mathcal{G}$-principal
\\
bundle
\\
\phantom{.}
\end{tabular}
}
}
}{
P
}
\ar[out=180-60+90, in=60+90, looseness=3.8, "\scalebox{.77}{$\mathpalette\mathclapinternal{
\mathcal{G}
}$}"{pos=.41, description},shift right=1]
\ar[r]
\ar[
d
]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\overset{
\mathpalette\mathclapinternal{
\raisebox{6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
universal
\\
$\mathcal{G}$-principal bundle
\end{tabular}
}
}
}{
\ast
\mathpalette\mathrlapinternal{
\;
\simeq \mathcal{G} \!\sslash\! \mathcal{G}
}
}
\ar[
d,
"\mathrm{pt}_{\mathbf{B}\mathcal{G}}"{right}
]
\\
\mathpalette\mathllapinternal{
\mathpalette\mathllapinternal{
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
action quotient-
\\
/ base-stack
\end{tabular}
}
\!\!\!\!\!
}
}
P \!\sslash\! \mathcal{G}
\ar[
r,
"\vdash P"{above},
"\mbox{\tiny \color{greenii}\bf cocycle}"{below}
]
&
\mathbf{B}\mathcal{G}
\mathpalette\mathrlapinternal{
\!\!\!\!\!\!
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
universal
\\
moduli stack
\end{tabular}
}
}
\end{tikzcd}
\end{equation}
\end{proposition}
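\noindent
For example, in the base $\infty$-topos $\InfinityGroupoids$ and for $A$ a discrete
abelian group, iterating the delooping of Prop. \ref{GroupsActionsAndFiberBundles}
yields the Eilenberg-MacLane spaces, and the looping-delooping equivalence
\eqref{LoopingDeloopingInIntroduction} reduces to the classical equivalences
\vspace{-1mm}
$$
  \mathbf{B}^n A \;\simeq\; K(A,\, n)
  \,,
  \phantom{AAAA}
  \Omega \, K(A,\, n) \;\simeq\; K(A,\, n\!-\!1)
  \,.
$$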
\medskip
\noindent
{\bf Equivariant principal $\infty$-bundles.} With the conceptualization
of Prop. \ref{GroupsActionsAndFiberBundles} in hand, there is an
evident general-abstract definition of $G$-equivariant principal $\infty$-bundles
(Def. \ref{GEquivariantGammaPrincipalBundles}):
These must simply be the principal $\infty$-bundles internal to
a slice $\infty$-topos $\SliceTopos{\mathbf{B}G}$ over the delooping $\mathbf{B}G$:
\vspace{-.2cm}
$$
\overset{
\mathpalette\mathclapinternal{
\raisebox{7pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
$G$-equivariant $\Gamma$-principal
$\infty$-bundles in $\Topos$
\end{tabular}
}
}
}{
\EquivariantPrincipalBundles{G}{\Gamma}(\Topos)_X
}
\quad
\coloneqq
\;\;
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
$\HomotopyQuotient{\Gamma}{G}$-principal
$\infty$-bundles in $\SliceTopos{\mathbf{B}G}$
\\
\phantom{.}
\end{tabular}
}
}
}{
\PrincipalBundles{ (\HomotopyQuotient{\Gamma}{G}) }
(
\SliceTopos{\mathbf{B}G}
)_{ \HomotopyQuotient{X}{G} }
}
\,.
$$
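For instance, for trivial equivariance group $G = 1$ the delooping
$\mathbf{B}G \,\simeq\, \ast$ is the terminal object, so that
$\SliceTopos{\mathbf{B}G} \,\simeq\, \Topos$ and this definition reduces
to that of plain $\Gamma$-principal $\infty$-bundles (Def. \ref{PrincipalInfinityBundles}):
\vspace{-1mm}
$$
  \EquivariantPrincipalBundles{1}{\Gamma}(\Topos)_X
  \;\simeq\;
  \PrincipalBundles{\Gamma}(\Topos)_{X}
  \,.
$$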
We prove
(in \eqref{EquivalenceOfGroupoidOfEquivariantCechCocyclesIntoGroupoidOfEquivariantPrincipalBundles} of Thm. \ref{BorelClassificationOfEquivariantBundlesForResolvableSingularitiesAndEquivariantStructure})
that, for the case $\Topos \,\coloneqq\,\SmoothInfinityGroupoids$
and restricted to topological $G$-spaces $\TopologicalSpace$
and topological structure groups $\Gamma$,
this canonical $\infty$-topos-theoretic definition
recovers the traditional definition
of topological equivariant principal bundles,
including their equivariant local triviality property
(all of which is reviewed and developed in \Cref{EquivariantPrincipalTopologicalBundles}).
\medskip
By extension, this means that for more general $\Topos$ and/or more general
$X, \Gamma \,\in\, \SliceTopos{\mathbf{B}G}$, we obtain sensible generalizations
of these classical definitions.
For example, by taking $\Topos$ to be the cohesive $\infty$-topos of super-geometric $\infty$-groupoids
(\cite[\S 3.1.3]{SS20OrbifoldCohomology}) the general theory
developed here immediately produces a good theory of equivariant
higher super-gerbes (as needed, e.g., in super-string theory
on super-orbifolds \cite{FSS13}\cite{HSS18}).
\medskip
Finally, this abstract definition in combination with the orbi-smooth Oka principle
\eqref{OrbiSmoothOkaPrincipleInIntroduction} implies the classification
of stable $G$-equivariant $\Gamma$-principal bundles,
at least in the case that $G$-singularities are cover-resolvable
(Ntn. \ref{ResolvableOrbiSingularities})
and $\Gamma$ is a truncated Hausdorff group.
This is the content of our main
Theorems \ref{ClassificationOfPrincipalBundlesAmongPrincipalInfinityBundles} and \ref{ProperClassificationOfEquivariantBundlesForResolvableSingularitiesAndEquivariantStructure} below.
\medskip
Even though this classification result concerns only equivariant ``1-bundles''
instead of more general equivariant $\infty$-bundles,
the cohesive $\infty$-topos theory drives the proof:
For example, our generalization of the
existing classification results to non-trivial discrete $G$-action on the structure group
$\Gamma$
is a direct consequence
(see the end of the proof of Thm. \ref{ShapeOfMappingStackOutOfOrbiSingularityIsMappingStackIntoShape})
of the general fact (Prop. \ref{ShapeFunctorPreservesHomotopyFibersOverDiscreteObjects})
that the shape operation in
cohesive $\infty$-toposes preserves homotopy fiber products over geometrically
discrete groupoids (such as $\mathbf{B}G$ for discrete $G$).
\newpage
\section{Notation and Terminology}
\phantom{AA} {\bf Categories and functors.}
\vspace{1mm}
\def\arraystretch{1.3}
\begin{tabular}{lll}
\hline
{\bf Category} & {\bf of}
\\
\hline
\hline
\rowcolor{lightgray}
$\kTopologicalSpaces$ & cg topological spaces
&
Ntn. \ref{CompactlyGeneratedTopologicalSpaces}
\\
$\kHausdorffSpaces$ & cg Hausdorff spaces & Ntn. \ref{CompactlyGeneratedTopologicalSpaces}
\\
\rowcolor{lightgray}
$\Actions{G}(\kTopologicalSpaces)$ & topological $G$-spaces
&
Ntn. \ref{GActionOnTopologicalSpaces}
\\
$\Groups(\Sets)$ & discrete groups
\\
\rowcolor{lightgray}
$\Groups(\kHausdorffSpaces)$ & Hausdorff groups &
\\
$\FormallyPrincipalBundles{\Gamma}(\mathcal{C})$
&
formally principal internal bundles
& Ntn. \ref{InternalizationOfPrincipalBundleTheory}
\\
\rowcolor{lightgray}
$\EquivariantPrincipalBundles{G}{\Gamma}$ & equivariant principal bundles
&
Def. \ref{EquivariantPrincipalBundle}
\\
$\EquivariantPrincipalFiberBundles{G}{\Gamma}$ & ... equivariantly locally trivial
&
Def. \ref{TerminologyForPrincipalBundles}
\\
\rowcolor{lightgray}
\hline
$\CartesianSpaces$ & Cartesian spaces &
\\
$\DiffeologicalSpaces$ & diffeological spaces
& Ntn. \ref{CartesianSpacesAndDiffeologicalSpaces}
\\
\rowcolor{lightgray}
$\DTopologicalSpaces$ & D-topological spaces
&
Ntn. \ref{DeltaGeneratedTopologicalSpaces}
\\
\hline
\hline
$\SimplicialSets$ & simplicial sets
&
Ntn. \ref{SimplicialSets}
\\
\rowcolor{lightgray}
$\SimplicialCategories$
& simplicial categories
& Ntn. \ref{SimplicialCategories}
\\
$\SimplicialPresheaves$
& simplicial presheaves
& Ntn. \ref{ModelCategoriesOfSimplicialPresheaves}
\\
\hline
\hline
{\bf 2-category} & {\bf of}
\\
\rowcolor{lightgray}
\hline
$\Groupoids$ & groupoids &
\\
$\TopologicalGroupoids$
& topological groupoids
& Ntn. \ref{TopologicalGroupoids}
\\
\rowcolor{lightgray}
$\HomotopyTwoCategory(\PresentableInfinityCategories)$
&
presentable $\infty$-categories
&
Prop. \ref{HomotopyCategoryOfPresInfinityCategoriesIsThatOfCombinatorialModelCategories}
\\
\hline
\hline
{\bf $\infty$-category} & {\bf of}
\\
\hline
\hline
$\InfinityGroupoids$
& $\infty$-groupoids
&
Ntn. \ref{SimplicialSetsAndInfinityGroupoids}
\\
\rowcolor{lightgray}
$\SmoothInfinityGroupoids$
& smooth $\infty$-groupoids
& Ntn. \ref{SmoothInfinityGroupoids}
\\
$\SingularSmoothInfinityGroupoids$
& singular smooth $\infty$-groupoids
& Ntn. \ref{SingularSmoothInfinityGroupoids}
\\
\hline
\hline
{\bf Functor} & {\bf producing}
\\
\rowcolor{lightgray}
\hline
$N$ & simplicial nerve &
Ntn. \ref{NerveOfTopologicalGroupoids}
\\
$\TopologicalRealization{}{-}$
&
topological realization
&
Ntn. \ref{TopologicalRealizationFunctors}
\\
\rowcolor{lightgray}
$\ContinuousDiffeology$
& continuous diffeology
& Ex. \ref{ContinuousDiffeologyAndDTopology}
\\
$\DTopology$
& D-topology
& Ex. \ref{ContinuousDiffeologyAndDTopology}
\\
\hline
\end{tabular}
\vspace{5mm}
\phantom{.} {\bf Types of groups.}
\vspace{1mm}
\def\arraystretch{1.4}
\begin{tabular}{llll}
\hline
\hline
\rowcolor{lightgray}
$G$
& $\in \Groups(\Sets) \xhookrightarrow{\;} \Groups(\Topos)$
& equivariance group
& Ntn. \ref{GActionOnTopologicalSpaces}
\\
\hline
$G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma$
&
$\in \Groups(\kTopologicalSpaces)$
&
equivariant structure group
&
Def. \ref{EquivariantTopologicalGroup}, Lem. \ref{EquivariantTopologicalGroupsAreSemidirectProductsWithG}
\\
\hline
\rowcolor{lightgray}
$\mathcal{G}$
& $\in \Groups(\Topos)$
& structure $\infty$-group
& Prop. \ref{GroupsActionsAndFiberBundles}
\\
\hline
$G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma$
& $\in \Groups \left( \Actions{G}(\Topos)\right)$
&
\hspace{-2pt}
\multirow{2}{*}{
equivariant structure $\infty$-group
}
&
\multirow{2}{*}{
Def. \ref{GGroups}
}
\\
\cline{1-2}
$\Gamma \!\sslash\! G$
& $\in \Groups(\Topos_{/\mathbf{B}G})$
&
&
\\
\hline
\end{tabular}
\newpage
\noindent
{\bf Ambient $\infty$-toposes.}
\vspace{.2cm}
$$
\begin{tikzcd}[row sep=50pt, column sep=large]
&
&[+20pt]
\mathpalette\mathllapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
slice over
\\
$G$-orbi-singularity
\end{tabular}
}
\hspace{+1pt}
}
\categorybox{
\SliceTopos{\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}}
}
\ar[
rr,
<->
]
\ar[
dl,
<->,
end anchor={[xshift=+2pt, yshift=+2pt]}
]
\ar[
dd,
<->,
crossing over,
"{
\def\arraystretch{.6}
\begin{array}{c}
G\Conical
\\
\bot
\\
G\Space
\\
\bot
\\
G\Smooth
\\
\bot
\\
G\Orbisingular
\end{array}
}"{swap, pos=.6, xshift=4pt}
]
&&
\categorybox{
\GloballyEquivariant\InfinityGroupoids{}_{/\scalebox{.7}{$\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}$}}
}
\ar[
dd,
<->
]
\ar[
dl,
<->
]
\\[-45pt]
&
\mathpalette\mathllapinternal{
\mbox{
\tiny
\cref{GeneralSingularCohesion}
\color{darkblue}
\bf
\begin{tabular}{c}
singular cohesive
\\
$\infty$-topos
\end{tabular}
}
\hspace{+0pt}
}
\categorybox{
\Topos
}
\ar[
rr,
<->,
crossing over
]
\ar[
dd,
<->,
"{
\def\arraystretch{.6}
\begin{array}{c}
\Conical
\\
\bot
\\
\Space
\\
\bot
\\
\Smooth
\\
\bot
\\
\Orbisingular
\end{array}
}"{swap, xshift=5pt}
]
&&
\categorybox{
\GloballyEquivariant\InfinityGroupoids
}
\mathpalette\mathrlapinternal{
\hspace{-3pt}
\mbox{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
global
\\
homotopy theory
\end{tabular}
}
}
\\[+20pt]
&
&
\categorybox{
\GEquivariant\ModalTopos{\rotatebox[origin=c]{70}{$\subset$}}
}
\ar[
dl,
<->,
end anchor={[xshift=+2pt, yshift=+2pt]}
]
\ar[
rr,
<->,
crossing over
]
&&
\categorybox{
\GEquivariant\InfinityGroupoids
}
\mathpalette\mathrlapinternal{
\hspace{-3pt}
\mbox{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
$G$-equivariant
\\
homotopy theory
\end{tabular}
}
}
\ar[
dl,
<->
]
\\[-45pt]
&
\mathpalette\mathllapinternal{
\mbox{
\tiny
\cref{GeneralCohesion}
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
smooth cohesive
\\
$\infty$-topos
\end{tabular}
}
\hspace{+1pt}
}
\categorybox{
\ModalTopos{\rotatebox[origin=c]{70}{$\subset$}}
}
\ar[
rr,
<->,
"{
\Shape
\;\dashv\;
\Discrete
\;\dashv\;
\Points
\;\dashv\;
\Chaotic
}"{swap}
]
&&
\categorybox{
\InfinityGroupoids
}
\mathpalette\mathrlapinternal{
\hspace{-3pt}
\raisebox{-3pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
base
\\
$\infty$-topos
\end{tabular}
}
}
\ar[
from=uu,
<->,
crossing over
]
\end{tikzcd}
$$
\vspace{2mm}
Here:
\vspace{-2mm}
$$
\def\arraystretch{1.5}
\begin{array}{lcrcll}
\ModalTopos{\rotatebox[origin=c]{70}{$\subset$}}
&:=&
\SmoothInfinityGroupoids
&:=&
\InfinitySheaves(\CartesianSpaces)
&
\proofstep{(Ntn. \ref{SmoothInfinityGroupoids},
{\cite[Ex. 3.18]{SS20OrbifoldCohomology}}
\!)},
\\
\Topos
&:=&
\SingularSmoothInfinityGroupoids
&:=&
\InfinitySheaves(\CartesianSpaces \times \Singularities)
&
\proofstep{(Ntn. \ref{SingularSmoothInfinityGroupoids},
{\cite[Ex. 3.56]{SS20OrbifoldCohomology}}
\!)}.
\end{array}
$$
\vspace{5mm}
\noindent {\bf Modalities.}
\vspace{2mm}
\begin{center}
\def\arraystretch{3}
\begin{tabular}{|c||l|l|l|}
\hline
\def\arraystretch{1}
\begin{tabular}{c}
{\bf Cohesion}
\\
Def. \ref{CohesiveInfinityTopos}
\end{tabular}
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
shape
}
}
}{
\shape
\,\coloneqq\,
}
\Discrete \circ \Shape
$
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
discrete
}
}
}{
\flat
\,\coloneqq\,
}
\Discrete \circ \Points
$
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
sharp
}
}
}{
{\rm chaotic}
\,\coloneqq\,
}
\Chaotic \circ \Points
$
\\
\hline
\def\arraystretch{1}
\begin{tabular}{c}
{\bf Singularities}
\\
Def. \ref{GEquivariantAndGloballyEquivariantHomotopyTheories}
\end{tabular}
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
conical
}
}
}{
\rotatebox[origin=c]{70}{$<$}
\,\coloneqq\,
}
\Space \circ \Conical
$
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
smooth
}
}
}{
\rotatebox[origin=c]{70}{$\subset$}
\,\coloneqq\,
}
\Space \circ \Smooth
$
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
orbisingular
}
}
}{
\rotatebox[origin=c]{70}{$\prec$}
\,\coloneqq\,
}
\Singularity \circ \Smooth
$
\\
\hline
\def\arraystretch{1}
\begin{tabular}{c}
{\bf $G$-singularities }
\\
Def. \ref{ModalitieswithrespecttoGOrbiSingularities}
\end{tabular}
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
$G$-conical
}
}
}{
\conicalrelativeG
\,\coloneqq\,
}
G\Space \circ G\Conical
$
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
$G$-smooth
}
}
}{
\smoothrelativeG
\,\coloneqq\,
}
G\Space \circ G\Smooth
$
&
$
\overset{
\mathpalette\mathrlapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
$G$-orbisingular
}
}
}{
\orbisingularrelativeG
\,\coloneqq\,
}
G\Singularity \circ G\Smooth
$
\\
\hline
\end{tabular}
\end{center}
\def\arraystretch{1}
\vspace{5mm}
\noindent {\bf Notions of space.}
\vspace{-.1cm}
$$
\begin{tikzcd}[row sep=17pt, column sep=23pt]
\overset{
\mathpalette\mathclapinternal{
\raisebox{4pt}{
\tiny
Ntn. \ref{CompactlyGeneratedTopologicalSpaces}
}
}
}{
\categorybox{\kTopologicalSpaces}
}
\ar[
rr,
hook
]
\ar[
ddr,
"\SingularSimplicialComplex"{swap, pos=.65},
"{
\mbox{
\tiny
\begin{tabular}{c}
Ntn. \ref{DiffeologicalSingularComplex}
\\
\color{greenii}
\bf
sing. simp. complex
\end{tabular}
}
}"{sloped, pos=.54 }
]
&&
\overset{
\raisebox{4pt}{
\tiny
Ntn. \ref{GActionOnTopologicalSpaces}
}
}{
\categorybox{
\Actions{G}(\kTopologicalSpaces)
}
}
\ar[
ddr,
"{
\shape \rotatebox[origin=c]{70}{$\prec$} (\HomotopyQuotient{-}{G})
}",
"\mbox{
\tiny
\begin{tabular}{c}
\color{greenii}
\bf
equivariant shape
\\
\eqref{EquivariantShape}
\end{tabular}
}"{sloped, swap, pos=.4}
]
\ar[
rr,
"{
\rotatebox[origin=c]{70}{$\prec$} (\HomotopyQuotient{-}{G})
}",
"{
\mbox{
\tiny
\color{greenii}
\bf
orbi-singularized homotopy quotient
}
}"{swap}
]
&&
\overset{
\raisebox{4pt}{
\tiny
Ntn. \ref{SingularSmoothInfinityGroupoids}
}
}{
\categorybox{
(
\SingularSmoothInfinityGroupoids
)_{/\scalebox{.7}{$\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}$}}
}
}
\ar[
ddl,
"{
\mbox{
\tiny
\begin{tabular}{c}
\color{greenii}
\bf
shape
\end{tabular}
}
}"{swap, sloped},
"{
\shape
}"{swap}
]
\\[-22pt]
\hspace{-5mm}
\underset{
\mathpalette\mathclapinternal{
\raisebox{-6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
topological
\\
space
\end{tabular}
}
}
}{
\scalebox{.93}{$\TopologicalSpace$}
}
&&
\hspace{-1cm}
\underset{
\mathpalette\mathclapinternal{
\raisebox{-6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
topological
\\
$G$-space
\end{tabular}
}
}
}{
\scalebox{.93}{$G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace$}
}
&&
\underset{
\mathpalette\mathclapinternal{
\raisebox{-6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
orbi-stack {\color{black}/}
\\
cohesive orbi-space
\end{tabular}
}
}
}{
\mathcal{X}
}
\\
&
\mathpalette\mathllapinternal{
\mbox{ \tiny
Ntn. \ref{SimplicialSetsAndInfinityGroupoids}
}
\;\;\;}
\categorybox{\InfinityGroupoids}
\ar[
rr,
hook
]
&&
\categorybox{
(
\SingularInfinityGroupoids
)_{/\scalebox{.7}{$\raisebox{-3pt}{$\orbisingular^{\hspace{-5.7pt}\raisebox{2pt}{\scalebox{.83}{$G$}}}$}$}}
}
\\[-20pt]
&
\underset{
\mathpalette\mathclapinternal{
\raisebox{-6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
(shape of)
\\
space
\end{tabular}
}
}
}{
X
}
&&
\underset{
\mathpalette\mathclapinternal{
\raisebox{-6pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
orbi-
\\
space
\end{tabular}
}
}
}{
\scalebox{1.06}{$\mathscr{X}$}
}
\end{tikzcd}
$$
\newpage
\part{In topological spaces}
\label{InTopologicalSpaces}
\chapter{Equivariant topology}
\label{EquivariantTopology}
We recall and develop basics of equivariant algebraic topology
(see \cite{Illman72}\cite{Bredon72}\cite{tomDieck87}\cite{May96}\cite{Blu17})
that we invoke below in \cref{EquivariantPrincipalTopologicalBundles}.
The expert reader may want to skip this chapter and refer back to it as needed.
\begin{itemize}
\vspace{-.2cm}
\item[{\cref{TopologicalGActions}}] recalls basics of equivariant point-set topology,
highlighting how the change-of-group adjoint triple governs the theory,
and establishing lemmas needed in \cref{EquivariantPrincipalTopologicalBundles}.
\vspace{-.2cm}
\item[{\cref{GActionsOnTopologicalGroupoids}}] recalls basics of topological groupoids,
generalizing to equivariant topological groupoids,
and establishing lemmas needed in \cref{ConstructionOfUniversalEquivariantPrincipalBundles}.
\vspace{-.2cm}
\item[{\cref{GEquivariantHomotopyTypes}}] recalls basics of proper equivariant homotopy theory,
recording lemmas needed in \cref{ConstructionOfUniversalEquivariantPrincipalBundles}
and \cref{EquivariantLocalTrivializationIsImplies}.
\end{itemize}
Throughout, we make extensive use of
a hierarchy of {\it internalizations}
of mathematical structures (Ntn. \ref{Internalization})
into {\it categories with pullbacks} (Ntn. \ref{CartesianSquares}),
starting in a {\it convenient category of topological spaces}
(Ntn. \ref{CompactlyGeneratedTopologicalSpaces}).
Here is the basic terminology and notation that we are using:
\medskip
\noindent
{\bf Categories.}
In \cref{TopologicalGActions} and \cref{EquivariantPrincipalTopologicalBundles}
we need just basic notions of (co)limits and adjoint functors
in plain category theory (e.g., \cite{AHS90}\cite{Borceux94I}\cite{Borceux94II}),
while in \cref{GActionsOnTopologicalGroupoids}
and \cref{EquivariantLocalTrivializationIsImplies}
we need these notions in their $\Groupoids$-enriched enhancement
(e.g., \cite[\S 6]{Borceux94II}\cite[\S 3]{Riehl14}\cite[\S 1.3]{JohnsonYau21})
--
but we only need the most basic concepts:
enriched adjunctions and conical (i.e., non-weighted) enriched (co-)limits,
and here mostly just finite ones.
\begin{notation}[Basic categories]
\label{CategoryOfGroupoids}
We write
\noindent
{\bf (i)} $\Sets$ for the category of sets with functions between them;
\noindent
{\bf (ii)}
$\Groupoids \;\coloneqq\; \Groupoids(\Sets)$
for the category of small groupoids with functors between them.
\end{notation}
\begin{notation}[Morphisms]
\label{BasicNotationForCategories}
Let $\mathcal{C}$ be a category and
$X, Y \,\in\, \mathcal{C}$
a pair of its objects. We write
\noindent
{\bf (i)}
$\mathcal{C}(X,Y) \,\in\, \Sets$
for the set of morphisms $X \to Y$ in $\mathcal{C}$ (the {\it hom-set});
\noindent
{\bf (ii)}
$X \xrightarrow{\;\; \sim \;\;} Y$
to indicate {\it isomorphisms}.
\end{notation}
\begin{notation}[$\Groupoids$-enriched categories]
\label{Strict2Categories}
With $\Groupoids$ (Ntn. \ref{CategoryOfGroupoids}) regarded as a
cartesian monoidal category,
a
{\it strict (2,1)-category},
namely a
{\it $\Groupoids$-enriched category}
\cite{FanthamMoore83},
has for each pair of objects $X , Y \,\in\, \mathcal{C}$
a {\it hom-groupoid}
$
\mathcal{C}(X,Y)
\;\in\;
\Groupoids
$.
Equivalently, this is a
{\it strict 2-category} or
{\it $\mathrm{Cat}$-enriched category}
(e.g. \cite[\S 2.3]{JohnsonYau21}\cite[\S 9.5]{Richter20})
whose hom-categories happen to be groupoids.
\end{notation}
\begin{notation}[Adjoint functors]
\label{AdjointFunctors}
We denote pairs of adjoint functors as shown on the left here:
\vspace{-2mm}
\begin{equation}
\label{FormingAdjuncts}
\begin{tikzcd}
\mathcal{D}
\ar[
rr,
"R"{below},
shift right=5.5pt
]
\ar[
rr,
phantom,
"\scalebox{.7}{$\bot$}"{description}
]
&&
\mathcal{C}
\ar[
ll,
"L"{above},
shift right=5pt
]
\end{tikzcd}
{\phantom{AAAA}}
\Leftrightarrow
{\phantom{AAAA}}
\begin{tikzcd}[row sep=-4pt, column sep=1pt]
\mathcal{C}
\left(
c, R(d)
\right)
&
\simeq
&
\mathcal{D}
\left(
L(c),\, d
\right)
\\
c \xrightarrow{f} R(d)
&\leftrightarrow&
L(c) \xrightarrow{\tilde f} d
\end{tikzcd}
\end{equation}
\vspace{-3mm}
\noindent
meaning that for all objects $c \in \mathcal{C}$ and $d \in \mathcal{D}$ there is
a natural isomorphism (``forming adjuncts'')
between the hom-objects
(Ntn. \ref{BasicNotationForCategories}, \ref{Strict2Categories})
out of the image of the left adjoint functor $L$
and into the image of the right adjoint functor $R$,
as shown on the right.
\end{notation}
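For example, in the convenient category $\kTopologicalSpaces$ of compactly generated
topological spaces (Ntn. \ref{CompactlyGeneratedTopologicalSpaces} below), forming the
Cartesian product with any fixed space $Y$ is left adjoint to forming the mapping space
out of it, via the exponential adjunction
\vspace{-1mm}
$$
  \kTopologicalSpaces(X \times Y,\, Z)
  \;\simeq\;
  \kTopologicalSpaces\big(X,\, \Maps{}{Y}{Z}\big)
  \,;
$$
this Cartesian closure is one reason for working in this convenient category.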
\begin{notation}[Cartesian/pullback squares]
\label{CartesianSquares}
We indicate that a commuting square of morphisms
in some category $\mathcal{C}$ (Ntn. \ref{BasicNotationForCategories})
-- here typically in the category of
$\GActionsOnTopologicalSpaces$ \eqref{GActionsOnTopologicalSpaces} --
is a {\it pullback square} (also: {\it Cartesian square} or {\it fiber product})
by putting the symbols ``{\color{orangeii} (pb)}'' at its center.
This means that each pair of morphisms forming another commuting square with its
right and bottom morphism (a ``cone'') factors uniquely through its
top left object
such that the resulting triangles commute:
\vspace{-3mm}
$$
\begin{tikzcd}
Q
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[
drr,
bend left=20,
"{ \forall }"{above}
]
\ar[
ddr,
bend right=20,
"{ \forall }"{left, xshift=-2pt}
]
\ar[
dr,
dashed,
"\exists !"{description}
]
\\[-6pt]
&[-6pt]
\TopologicalSpace \times_{\mathrm{B}} \mathrm{P}
\ar[r]
\ar[d]
\ar[
dr,
phantom,
"\mbox{\color{orangeii} \tiny\rm(pb)}"{description}
]
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
&
\mathrm{P}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[d]
\\
&
\TopologicalSpace
\ar[r]
\ar[out=-180+66, in=-66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\ar[
r,
phantom,
"{
\mbox{
\tiny
\color{darkblue}
\bf
fiber product {\color{black}/}
pullback
}
}"{below, yshift=-15pt}
]
&
\mathrm{B}
\ar[out=-180+66, in=-66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\end{tikzcd}
{\phantom{AAAAA}}
\mbox{e.g.}
{\phantom{AAAAA}}
\begin{tikzcd}
Q
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[
drr,
bend left=20,
"{ \forall }"{above}
]
\ar[
ddr,
bend right=20,
"{ \forall }"{left, xshift=-2pt}
]
\ar[
dr,
dashed,
"\exists !"{description}
]
\\[-6pt]
&[-6pt]
\TopologicalSpace \times \mathrm{P}
\ar[r]
\ar[d]
\ar[
dr,
phantom,
"\mbox{\color{orangeii} \tiny\rm(pb)}"{description}
]
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
&
\mathrm{P}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[d]
\\
&
\TopologicalSpace
\ar[r]
\ar[out=-180+66, in=-66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\ar[
r,
phantom,
"{
\mbox{
\tiny
\color{darkblue}
\bf
product
}
}"{below, yshift=-15pt}
]
&
\ast
\ar[out=-180+66, in=-66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
Over the {\it terminal object}, denoted by a point:
$
\begin{tikzcd}
Q
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[
r,
dashed,
"{ \exists ! }"
]
&
\ast
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\end{tikzcd}
$,
a fiber product is a plain {\it product}, as shown on the right.
\end{notation}
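For example, in $\kTopologicalSpaces$ the pullback of a map
$f \colon \mathrm{P} \to \mathrm{B}$ along a subspace inclusion
$\TopologicalSpace \hookrightarrow \mathrm{B}$ is its preimage,
\vspace{-1mm}
$$
  \TopologicalSpace \times_{\mathrm{B}} \mathrm{P}
  \;\cong\;
  f^{-1}(\TopologicalSpace)
  \;\subset\;
  \mathrm{P}
  \,,
$$
and if $f$ is a $G$-equivariant map and $\TopologicalSpace \subset \mathrm{B}$ is a
$G$-invariant subspace, then this preimage inherits a $G$-action, as indicated
in the diagrams above.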
\begin{proposition}[Finite limits, {e.g. \cite[Prop. 2.8.2]{Borceux94I}}]
\label{FiniteLimits}
In the presence of a terminal object,
every {\it finite limit} may be computed as an iteration of
pullbacks (Ntn. \ref{CartesianSquares}).
\end{proposition}
\begin{example}[Pullback preserves isomorphisms]
\label{PullbackPreservesIsomorphisms}
A commuting square with a bottom isomorphism
(Ntn. \ref{BasicNotationForCategories})
is a pullback square
(Ntn. \ref{CartesianSquares}) if and only if the top morphism is also an
isomorphism:
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=small, column sep=large]
\mathrm{A}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[
r,
"f"
]
\ar[d]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"{description}
]
&
\mathrm{P}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[d]
\\
\TopologicalSpace
\ar[out=-180+66, in=-66, looseness=4.2, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\ar[
r,
"\sim"{below}
]
&
\mathrm{B}
\ar[out=-180+66, in=-66, looseness=4.2, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\end{tikzcd}
{\phantom{AAA}}
\Leftrightarrow
{\phantom{AAA}}
\begin{tikzcd}
\mathrm{A}
\ar[
r,
"f"{above},
"\sim"{below}
]
&
\mathrm{P}\;.
\end{tikzcd}
$$
\end{example}
\begin{proposition}[Right adjoint functors preserve limits]
\label{RightAdjointFunctorsPreserveFiberProducts}
A right adjoint functor $R$ (Ntn. \ref{AdjointFunctors}) preserves all limits,
in particular it preserves all finite limits (Prop. \ref{FiniteLimits})
and hence terminal objects and
pullbacks (Ntn. \ref{CartesianSquares}):
\vspace{-2mm}
$$
R
(
\TopologicalSpace \times_{\mathrm{B}} \mathrm{P}
)
\;\simeq\;
R(\TopologicalSpace)
\times_{R(\mathrm{B})}
R(\mathrm{P})
\,.
$$
\end{proposition}
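For example, since forming mapping spaces out of any fixed space $Y$ is right adjoint
to forming the Cartesian product with $Y$ (by the exponential adjunction in the
convenient category $\kTopologicalSpaces$, Ntn. \ref{CompactlyGeneratedTopologicalSpaces}),
it preserves fiber products:
\vspace{-1mm}
$$
  \Maps{}{Y}{\TopologicalSpace \times_{\mathrm{B}} \mathrm{P}}
  \;\cong\;
  \Maps{}{Y}{\TopologicalSpace}
  \times_{\Maps{}{Y}{\mathrm{B}}}
  \Maps{}{Y}{\mathrm{P}}
  \,.
$$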
\begin{proposition}[Pasting law (e.g. {\cite[Prop. 11.10]{AHS90}})]
\label{PastingLaw}
Given two adjacent commuting squares whose right square is a pullback
(Ntn. \ref{CartesianSquares}),
the left square is a pullback if and
only if the total rectangle is:
\vspace{-3mm}
$$
\begin{tikzcd}[row sep=small, column sep=large]
\mathrm{P}_1
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[r]
\ar[d]
&
\mathrm{P}_2
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[r]
\ar[d]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\mathrm{P}_3
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[d]
\\
\TopologicalSpace_1
\ar[out=-180+66, in=-66, looseness=4.2, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\ar[r]
&
\TopologicalSpace_2
\ar[out=-180+66, in=-66, looseness=4.2, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\ar[r]
&
\TopologicalSpace_3
\ar[out=-180+66, in=-66, looseness=4.2, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\end{tikzcd}
$$
\end{proposition}
\begin{notation}[Effective epimorphism {\cite[p. 101]{Grothendieck61}\cite[Def. 2.5.3]{Borceux94II}}]
\label{EffectiveEpimorphism}
A morphism $p$ is called a {\it regular epimorphism}
if it is the coequalizer of \emph{some} parallel pair of morphisms,
and an {\it effective epimorphism},
to be denoted by double-headed arrows,
if it is the
coequalizer specifically of the two projections out of
its pullback (Ntn. \ref{CartesianSquares}) along itself:
\vspace{-2mm}
$$
\begin{tikzcd}
\mathrm{P}
\times_{\TopologicalSpace}
\mathrm{P}
\ar[
rr,
shift left=3pt,
"\mathrm{pr}_1"{above}
]
\ar[
rr,
shift right=3pt,
"\mathrm{pr}_2"{below}
]
&&
\mathrm{P}
\ar[
rr,
->>,
"p"{above},
"\mathrm{coeq}"{below}
]
&&
\TopologicalSpace \;.
\end{tikzcd}
$$
\end{notation}
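For orientation, we recall the archetypical example: in $\Sets$ every surjection
$p : \mathrm{P} \twoheadrightarrow \mathrm{X}$ is an effective epimorphism, since
$\mathrm{X}$ is recovered as the quotient of $\mathrm{P}$ by the equivalence relation
given by the kernel pair of $p$:
\vspace{-2mm}
$$
\mathrm{P} \times_{\mathrm{X}} \mathrm{P}
\;=\;
\big\{
(p_1, p_2)
\,\big\vert\,
p(p_1) = p(p_2)
\big\}
\;\rightrightarrows\;
\mathrm{P}
\;\xrightarrow{\;\mathrm{coeq}\;}\;
\mathrm{P}/(\mathrm{P} \times_{\mathrm{X}} \mathrm{P})
\;\simeq\;
\mathrm{X}
\,.
$$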
\begin{definition}[Regular categories ({\cite{BarrGrilletyOsdol}\cite[\S 2]{Borceux94II}\cite{Gran21}})]
\label{RegularCategory}
A category is called {\it regular} if
\noindent {\bf (i)}
for every morphism $X \to Y$,
\vspace{-3mm}
\begin{itemize}
\setlength\itemsep{-2pt}
\item[{\bf (a)}]
the fiber product
$X \times_Y X$ exists (the ``kernel pair'');
\item[{\bf (b)}]
the coequalizer
$
X \times_Y X \rightrightarrows X \xrightarrow{\;\mathrm{coeq}\;} X/(X\times_Y X)
$
exists (the {\it image});
\end{itemize}
\vspace{-2mm}
\noindent {\bf (ii)}
pullbacks of regular epimorphisms (Ntn. \ref{EffectiveEpimorphism})
exist and are again regular epimorphisms.
\end{definition}
In this Part I we are interested in the exceptional example of the category of compactly-generated topological spaces (Prop. \ref{CompactyGeneratedTopologicalSpacesFormARegularCategory} below). A generic class of examples of regular categories are toposes (such as the category of simplicial sets, which is of interest in Part II, see Ntn. \ref{SimplicialSets} below):
\begin{example}[Toposes are regular {\cite[p. 17]{BarrGrilletyOsdol}\cite[Prop. 3.4.14]{Borceux94III}\cite[p. 92]{Johnstone02a}}]
\label{ToposesAreRegular}
Every topos (hence in particular every category of presheaves) is a regular category (Def. \ref{RegularCategory}). Moreover, in toposes the classes of (i) epimorphisms, (ii) regular epimorphisms and (iii) effective epimorphisms
(Ntn. \ref{EffectiveEpimorphism})
all coincide (e.g. \cite[\S IV.7, Thm. 8 (p. 197)]{MacLaneMoerdijk92}\cite[Prop. 3.4.13, 3.4.15]{Borceux94III}).
\end{example}
\begin{lemma}[Effective epimorphisms in regular categories
(e.g. {\cite[Prop. 2.5.7]{Borceux94I}\cite[Prop. 2.3.3]{Borceux94I}})]
\label{EffectiveEpimorphismsArePreservedByPullbackInRegularCategories}
In a regular category (Def. \ref{RegularCategory}),
the notions of regular and of effective epimorphisms (Ntn. \ref{EffectiveEpimorphism})
coincide, and pullback along any morphism $f$ preserves effective epimorphisms $p$
together with their coequalizer diagrams:
$$
\begin{tikzcd}[column sep=large]
(f^\ast\TopologicalPrincipalBundle)
\times_{\TopologicalSpace'}
(f^\ast\TopologicalPrincipalBundle)
\ar[d, shift right=4pt, "\mathrm{pr}_1"{swap}]
\ar[d, shift left=4pt, "\mathrm{pr}_2"]
\ar[r]
\ar[dr, phantom, "\mbox{\tiny\rm(pb)}"{pos=.4}]
&
\TopologicalPrincipalBundle
\times_{\TopologicalSpace}
\TopologicalPrincipalBundle
\ar[d, shift right=4pt, "\mathrm{pr}_1"{swap}]
\ar[d, shift left=4pt, "\mathrm{pr}_2"]
\\
f^\ast\TopologicalPrincipalBundle
\ar[d, ->>, "f^\ast p", "\mathrm{coeq}"{swap}]
\ar[r]
\ar[
dr,
phantom,
"\mbox{\tiny\rm (pb)}"
]
&
\TopologicalPrincipalBundle
\ar[d, ->>, "p", "\mathrm{coeq}"{ swap}]
\\[+10pt]
\TopologicalSpace'
\ar[r, "f"{swap}]
&
\TopologicalSpace
\end{tikzcd}
$$
\end{lemma}
In regular categories, there are partial reverses
to the implications of
Ex. \ref{PullbackPreservesIsomorphisms}
and Prop. \ref{PastingLaw} (see also the $\infty$-category theoretic version in Lem. \ref{ReversePastingLawForInfinityPullbacks} below):
\begin{lemma}[Reverse pasting law in regular categories (e.g. {\cite[Lem. 1.15]{Gran21}})]
\label{ReversePastingLawInRegularCategories}
Given a commuting diagram
in a regular category (Def. \ref{RegularCategory})
of the form
\vspace{-1mm}
$$
\begin{tikzcd}
{}
\ar[r]
\ar[d]
\ar[dr, phantom, "\mbox{\tiny \rm (pb)}"]
&
{}
\ar[r]
\ar[d]
&
{}
\ar[d]
\\
{}
\ar[r, ->>]
&
{}
\ar[r]
&
{}
\end{tikzcd}
$$
\vspace{-1mm}
\noindent
where the left square is Cartesian (Ntn. \ref{CartesianSquares})
and the bottom left morphism is an effective epimorphism
(Ntn. \ref{EffectiveEpimorphism}), then
the right square is Cartesian if and only if the total rectangle is
Cartesian.
\end{lemma}
\begin{lemma}[Local recognition of isomorphisms in regular categories]
\label{IsomorphismsOnRegularCategoriesAreDetectedOnEffectiveCovers}
In a regular category (Def. \ref{RegularCategory}),
if the pullback $p^\ast f$ of a morphism $f$
along an effective epimorphism
$p$ (Ntn. \ref{EffectiveEpimorphism})
is an isomorphism, then $f$ is itself already an isomorphism.
\end{lemma}
\begin{proof}
By assumption, we have a pullback square as on the bottom of the following diagram:
\vspace{-1mm}
$$
\begin{tikzcd}[column sep=large]
\widehat X
\times_X
\widehat X
\ar[r, "\sim"]
\ar[d, shift left=3pt]
\ar[d, shift right=3pt]
\ar[dr, phantom, "\mbox{\tiny\rm (pb)}"{pos=.4}]
&
\widehat Y
\times_Y
\widehat Y
\ar[d, shift left=3pt]
\ar[d, shift right=3pt]
\\
\widehat{X}
\ar[r, "\sim"{swap}, "p^\ast f"]
\ar[d, ->>, "f^\ast p"{swap}]
\ar[dr, phantom, "\mbox{\tiny\rm (pb)}"]
&
\widehat{Y}
\ar[d, ->>, "p"]
\\
X
\ar[r, "f"{below}]
&
Y
\end{tikzcd}
$$
\vspace{-1mm}
\noindent
Here the bottom left morphism is an effective epimorphism by
Lem. \ref{EffectiveEpimorphismsArePreservedByPullbackInRegularCategories},
since $p$ is so by assumption.
Since limits commute with each other,
we get the top pullback squares, where the topmost morphism
is an isomorphism as the pullback of the isomorphism $p^\ast f$
(by Ex. \ref{PullbackPreservesIsomorphisms}).
By the nature of effective epimorphisms (Ntn. \ref{EffectiveEpimorphism}),
this now exhibits $f$ as the
image under passage to coequalizers of an isomorphism of coequalizer
diagrams, hence as an isomorphism.
\end{proof}
\medskip
\noindent
{\bf Topological spaces.}
We use the following {\it convenient category of topological spaces}
\cite{Steenrod67} which has become the standard foundation for algebraic topology:
\begin{notation}[Category of compactly-generated topological spaces]
\label{CompactlyGeneratedTopologicalSpaces}
We write
\vspace{-3mm}
\begin{equation}
\label{CategoryOfTopologicalSpaces}
\begin{tikzcd}
\overset{
\mathpalette\mathclapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
topological spaces
\end{tabular}
}
}
}{
\categorybox{\TopologicalSpaces}
}
\;\;
\ar[
r,
phantom,
"{ \scalebox{.7}{$\bot$} }"
]
\ar[
r,
shift right=5.5pt,
"{ k }"{below}
]
&
\;\;
\overset{
\mathpalette\mathclapinternal{
\raisebox{4pt}{
\tiny
\color{orangeii}
\bf
\begin{tabular}{c}
topological k-spaces
\end{tabular}
}
}
}{
\categorybox{\kTopologicalSpaces}
}
\ar[
l,
hook',
shift right=5.5pt
]
&
\;\;
\overset{
\mathpalette\mathclapinternal{
\raisebox{4pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
Hausdorff k-spaces
\end{tabular}
}
}
}{
\categorybox{\kHausdorffSpaces}
}
\;\;
\ar[
l,
hook'
]
&
\;\;
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
locally compact
\\
Hausdorff spaces
\end{tabular}
}
}
}{
\categorybox{\LocallyCompactHausdorffSpaces}
}
\ar[
l,
hook'
]
\end{tikzcd}
\end{equation}
\vspace{-1mm}
\noindent
for
the coreflective subcategory of
those topological spaces
which are the colimits of all images of compact spaces inside them
({\it k-spaces} \cite[\S 1]{Gale50}),
and its full subcategory of
Hausdorff spaces among these
(subsuming all locally compact Hausdorff spaces),
hence of {\it compactly generated} topological spaces
(e.g. \cite[\S XI.9]{Dugundji66}\cite{Steenrod67}\cite{Lewis78} \cite[\S 3.4]{HerrlichStrecker97},
concise practical review is in \cite[\S 0]{FHT00},
and specifically for the equivariant context in \cite[\S 16]{LueckUribe14}).
\end{notation}
\begin{remark}[Mapping spaces]
In particular, the category of k-spaces is Cartesian closed,
which means that, for $\TopologicalSpace, \mathrm{Y} \,\in\, \kTopologicalSpaces$
\eqref{CategoryOfTopologicalSpaces},
the {\it mapping space}
\vspace{-2mm}
\begin{equation}
\label{MappingSpace}
\mathrm{Maps}(\TopologicalSpace,\mathrm{Y})
\;\;\;
\in
\;
\kTopologicalSpaces
\end{equation}
\vspace{-1mm}
\noindent
(namely the set of continuous functions $\TopologicalSpace \xrightarrow{\;} \mathrm{Y}$
equipped with the $k$-ified compact-open topology) serves as an
{\it exponential object}
or
{\it cartesian internal hom},
(e.g., \cite[\S 9]{Niefield78}\cite[Thm. B.10]{Piccinini92}\cite[\S 7.1-7.2]{Borceux94II}).
That is,
$\mathrm{Maps}(\TopologicalSpace,-)$ is right adjoint (Ntn. \ref{AdjointFunctors})
to forming the categorical product (the k-ified product topological space) with $\TopologicalSpace$:
\vspace{-5mm}
\begin{equation}
\label{MappingSpaceAdjunction}
\begin{tikzcd}[column sep=large]
\kTopologicalSpaces
\ar[
rr,
shift right=5pt,
"{
\mathrm{Maps}(\TopologicalSpace,\,-)
}"{below}
]
\ar[
rr,
phantom,
"{
\scalebox{.7}{$\bot$}
}"
]
&&
\kTopologicalSpaces
\mathpalette\mathrlapinternal{\,.}
\ar[
ll,
shift right=5pt,
"{
\TopologicalSpace \times (-)
}"{above}
]
\end{tikzcd}
\end{equation}
\vspace{-.4cm}
\end{remark}
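Concretely, on hom-sets the adjunction \eqref{MappingSpaceAdjunction} is the classical
{\it exponential law}: for $\TopologicalSpace, \mathrm{Y}, \mathrm{Z} \,\in\, \kTopologicalSpaces$,
a continuous function $\TopologicalSpace \times \mathrm{Y} \xrightarrow{\;} \mathrm{Z}$
is equivalently a continuous function
$\mathrm{Y} \xrightarrow{\;} \mathrm{Maps}(\TopologicalSpace, \mathrm{Z})$,
via $f \,\mapsto\, \big( y \mapsto f(-, y) \big)$;
and by Cartesian closure this bijection refines to a homeomorphism of mapping spaces
\vspace{-2mm}
$$
\mathrm{Maps}
(
\TopologicalSpace \times \mathrm{Y}
,\,
\mathrm{Z}
)
\;\simeq\;
\mathrm{Maps}
\big(
\mathrm{Y}
,\,
\mathrm{Maps}(\TopologicalSpace, \mathrm{Z})
\big)
\,.
$$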
Besides making the adjunction \eqref{MappingSpaceAdjunction} work,
compactly generated topological spaces
behave essentially like plain topological spaces:
\begin{remark}[Colimits of compactly generated topological spaces]
Colimits of k-spaces (Ntn. \ref{CompactlyGeneratedTopologicalSpaces})
are computed as
usual colimits of topological spaces.
For instance:
\noindent
{\bf (i)}
Orbit spaces --
i.e., the usual quotient topological spaces of continuous group actions
(see Lem. \ref{HausdorffQuotientSpaces} and Cor. \ref{QuotientCoprojectionOfFreeProperActionIsLocallyTrivial} below)
--
are (split) coequalizers in $\kTopologicalSpaces$ \eqref{CategoryOfTopologicalSpaces}.
\noindent
{\bf (ii)}
It is still true that
the functor
$\pi_0 \,\colon\, \kTopologicalSpaces \xrightarrow{\;} \Sets$
assigning sets of path-connected components
preserves coequalizers
(quotients by relations) and
it preserves finite products:
\vspace{-1mm}
\begin{equation}
\label{PathConnectedComponentsPreserveQuotientsAndFiniteProducts}
\TopologicalSpace, \mathrm{Y}
\;\;
\in
\;
\kTopologicalSpaces
\;\;\;\;
\Rightarrow
\;\;\;\;
\left\{\!\!\!\!
\begin{array}{rcl}
\pi_0
\left(
\mathrm{coeq}
(\mathrm{R} \rightrightarrows \TopologicalSpace)
\right)
&
\;\simeq\;
&
\mathrm{coeq}
\left(
\pi_0(\mathrm{R}) \rightrightarrows \pi_0(\TopologicalSpace)
\right)
\\
\pi_0
(
\TopologicalSpace
\times
\mathrm{Y}
)
&
\;
\simeq
\;
&
\pi_0(\TopologicalSpace)
\times
\pi_0(\mathrm{Y})
\end{array}
\right.
\;\;\;\;
\in
\;\;
\Sets
\,.
\end{equation}
\end{remark}
\begin{remark}[Relation to locally compact Hausdorff spaces]
$\,$
\noindent
{\bf (i)}
If $\TopologicalSpace \,\in\,
\LocallyCompactHausdorffSpaces \xhookrightarrow{\;} \kTopologicalSpaces$
\eqref{CategoryOfTopologicalSpaces}
then, for any $\mathrm{Y} \,\in\, \kTopologicalSpaces$,
the usual product topological space is the category-theoretic product
$\TopologicalSpace \!\times\! \mathrm{Y} \,\in\, \kTopologicalSpaces$
in k-spaces
(\cite[Lem. 2.4]{Lewis78}\cite[Thm. B.6]{Piccinini92}).
\noindent
{\bf (ii)}
A topological space is a k-space if and only if it is a
quotient topological space of a locally compact Hausdorff space
(\cite[Lem. 3.2 (v)]{EscardoLawsonSimpson04}, strengthening \cite[\S XI, Thm. 9.4]{Dugundji66}\cite[Thm. B.4]{Piccinini92}).
\end{remark}
Moreover:
\begin{proposition}[Compactly-generated topological spaces form a regular category
{\cite[p. 3]{CagliariMantovaniVitale95}}]
\label{CompactyGeneratedTopologicalSpacesFormARegularCategory}
$\,$
\noindent
The categories $\kTopologicalSpaces$
and $\kHausdorffSpaces$ (Ntn. \ref{CompactlyGeneratedTopologicalSpaces})
are regular (Def. \ref{RegularCategory}).
\end{proposition}
\begin{example}[Open covers are effective epimorphisms]
\label{OpenCoversAreEffectiveEpimorphisms}
For $\TopologicalSpace \,\in\, \kTopologicalSpaces$
and $\{ \TopologicalPatch_i \xhookrightarrow{\;} \TopologicalSpace \}_{i \in I}$
an open cover, then the canonical map
$\!\!
\begin{tikzcd}
\underset{i \in I}{\sqcup}
\TopologicalPatch_i
\ar[r, ->>]
&
\TopologicalSpace
\end{tikzcd}
\!\!$
is an effective epimorphism (Ntn. \ref{EffectiveEpimorphism}), in that
the canonical maps
\vspace{-3mm}
$$
\begin{tikzcd}
\underset{j_1, j_2 \in I}{\sqcup}
\TopologicalPatch_{j_1} \cap \TopologicalPatch_{j_2}
\ar[r, shift left=3pt]
\ar[r, shift right=3pt]
&
\underset{i \in I}{\coprod} \TopologicalPatch_i
\ar[r]
&
\TopologicalSpace
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
make a coequalizer diagram.
(Namely, this is the case on underlying sets, by the fact that the
covering is surjective;
hence the remaining condition
is that a subset of $\TopologicalSpace$ is open precisely if its intersection with
each $\TopologicalPatch_i$ is open, which is the case by the fact that the covering is
by open subsets.)
\end{example}
\begin{example}[Isomorphism of bundles is detected on covers]
\label{IsomorphismOfBundlesDetectedOnOpenCovers}
Given a commuting diagram in $\kTopologicalSpaces$ (Ntn. \ref{CompactlyGeneratedTopologicalSpaces})
of the form
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=small]
\TopologicalPrincipalBundle
\ar[rr, "f"]
\ar[dr]
&&
\TopologicalPrincipalBundle'
\ar[dl]
\\
&
\TopologicalSpace
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
then $f$ is an isomorphism as soon as it is locally so, hence
as soon as its pullback to some open cover
$\widehat X \,\coloneqq\, \underset{i \in I}{\coprod} \TopologicalPatch_i$
is an isomorphism:
\vspace{-1mm}
$$
\begin{tikzcd}[column sep={between origins, 60pt}]
\TopologicalPrincipalBundle|_{\widehat X}
\ar[rr]
\ar[dr, "\sim"{sloped}]
\ar[ddr, bend right=24]
\ar[drrr, phantom, "\mbox{\tiny\rm(pb)}"{pos=.4}]
&&
\TopologicalPrincipalBundle
\ar[dr, "f"]
\ar[ddr, bend right=24]
\\
&
\TopologicalPrincipalBundle'|_{ \widehat \TopologicalSpace}
\ar[rr, ->>, crossing over]
\ar[d]
\ar[drr, phantom, "\mbox{\tiny\rm(pb)}"{pos=.3}]
&{}&
\TopologicalPrincipalBundle'
\ar[d]
\\
&
\widehat X
\ar[rr, ->>]
&&
\TopologicalSpace
\,.
\end{tikzcd}
$$
\vspace{0mm}
\noindent
Namely,
here the bottom morphism is an effective epimorphism by
Ex. \ref{OpenCoversAreEffectiveEpimorphisms},
hence the middle morphism is an effective epimorphism by
regularity of $\kTopologicalSpaces$ (Prop. \ref{CompactyGeneratedTopologicalSpacesFormARegularCategory}).
Since the rear square is Cartesian as well, the
pasting law (Prop. \ref{PastingLaw}) implies that also the top square is Cartesian,
whence the top square exhibits the pullback of $f$ along
an effective epimorphism as an isomorphism, so that
Lem. \ref{IsomorphismsOnRegularCategoriesAreDetectedOnEffectiveCovers}
implies that $f$ is itself an isomorphism.
\end{example}
\medskip
\noindent
{\bf Internal mathematical structures.} We find below that equivariant topology
is both a beautiful example of and is itself beautified by a systematic use
of {\it internalization} of mathematical structures into ambient categories
other than $\Sets$; a basic point that seems not to have received due attention.
\begin{notation}[Internal mathematical structures]
\label{Internalization}
For $S$ a mathematical structure
expressible in terms of finite limits
(a ``finite limit sketch''
\cite{BastianiEhresmann72}\cite[\S 4]{BarrWells83}\cite[\S 1.49]{AdamekRosicky94}),
hence by operations on fiber products (Prop. \ref{FiniteLimits}),
and for $\mathcal{C}$ any category,
\noindent {\bf (i)} we write
$S(\mathcal{C})$ for the category of $S$-models in $\mathcal{C}$,
hence the category of $S$-structures
{\it internal} to $\mathcal{C}$ in the
original sense of \cite[p. 370]{Grothendieck60II}.
\noindent {\bf (ii)} For $F$ a functor that preserves finite limits (denoted {\it lex}),
there is the evident induced functor on $S$-structures, which we denote as follows:
\vspace{-2mm}
\begin{equation}
\label{FunctorOnStructuresInducedFromLexFunctor}
F \,:\, \mathcal{C} \xrightarrow{\mathrm{lex}} \mathcal{D}
\qquad
\vdash
\qquad
S(F)
\;:\;
S(\mathcal{C})
\longrightarrow
S(\mathcal{D})
\,.
\end{equation}
\end{notation}
\begin{example}[Archetypical examples of internal structures]
We have the categories:
\vspace{-.1cm}
\begin{itemize}
\vspace{-.2cm}
\item
$\Groups(\mathcal{C})$
of internal groups
(e.g., \cite{EckmannHilton61}\cite{EckmannHilton62}\cite{EckmannHilton63}\cite[\S 4.1]{BarrWells83},
see Def. \ref{EquivariantTopologicalGroup} below);
\vspace{-.2cm}
\item
$\Actions{G}(\mathcal{C})$
of internal group actions,
(e.g., \cite[\S 7]{Boardman95}\cite[p. 8]{BorceuxJanelidzeKelly05}, see Def. \ref{EquivariantPrincipalBundle} below);
\vspace{-.2cm}
\item
$\Groupoids(\mathcal{C})$
of internal groupoids
(e.g., \cite[\S 8]{Borceux94I}\cite[\S 1]{NiefieldPronk19}, see Ntn. \ref{TopologicalGroupoids} below);
\end{itemize}
\vspace{-.2cm}
\noindent
all originally due to \cite[\S 4]{Grothendieck61}.
We consider these notions mainly internal to the category
$\mathcal{C} = \GActionsOnTopologicalSpaces$ \eqref{GActionsOnTopologicalSpaces}
of, in turn, group actions internal to
topological spaces \eqref{CategoryOfTopologicalSpaces}.
\end{example}
\begin{notation}[Internalization of principal bundle theory]
\label{InternalizationOfPrincipalBundleTheory}
The key example of internal structures for our purposes is the category
\vspace{-.1cm}
\begin{itemize}
\vspace{-.2cm}
\item
$\FormallyPrincipalBundles{\Gamma}(\mathcal{C})$
of
{\it formally principal bundles}
(\cite[p. 312 (15 of 30)]{Grothendieck60}\cite[p. 9 (293)]{Grothendieck71},
\\
\phantom{$\FormallyPrincipalBundles{\Gamma}(\mathcal{C})$}
also: {\it pseudo-torsors} \cite[\S 16.5.15]{Grothendieck67}),
\end{itemize}
\vspace{-.3cm}
\noindent
which is the subcategory of $\Actions{\Gamma}(\mathcal{C})$
(Ntn. \ref{Internalization})
on those actions that are
either fiberwise principal or empty \eqref{PrincipalityConditionAsShearMapBeingAnIsomorphism},
see Rem. \ref{PseudoTorsorCondition} below.
\end{notation}
We observe in Cor. \ref{InternalDefinitionOfGPrincipalBundlesCoicidesWithtomDieckDefinition}
that these (formally) principal bundles, when internalized in
$\mathcal{C} \,=\, \Actions{G}(\kTopologicalSpaces)$,
are equivalently equivariant principal bundles
in the original and general sense of \cite{tomDieck69},
see Rem. \ref{LiteratureOnEquivariantPrincipalBundles} below.
While this might not be a surprising observation for experts with the relevant background,
it is, we find, absolutely foundational to the subject of equivariant bundle theory,
and it seems not to have been made before in the existing literature.
\newpage
\section{$G$-Actions on topological spaces}
\label{TopologicalGActions}
\begin{notation}[Equivariant topology (``transformation groups'', ``$G$-spaces'', e.g.{\cite{Bredon72}\cite{tomDieck79}\cite{tomDieck87}})]
\label{GActionOnTopologicalSpaces}
$\,$
\noindent Throughout:
\vspace{-3mm}
\begin{itemize}
\setlength\itemsep{-2pt}
\item
$
G \;\in\;
\mathrm{Grps}
(
\kHausdorffSpaces
)
$
denotes a Hausdorff topological group, with group operation denoted $(-)\cdot (-)$;
\item
$H \subset G$ denotes a topological subgroup
(necessarily Hausdorff, since $G$ is);
\item
$N\!(H) \subset G$ denotes its {\it normalizer subgroup}, and
\item $W\!(H) \coloneqq N\!(H)\!/H$ its {\it Weyl group} (e.g. \cite[p. 13]{May96}):
\vspace{-5mm}
\begin{equation}
\label{NormalizerAndWeylGroup}
\begin{tikzcd}
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
normalizer
\\
subgroup
\end{tabular}
}
}
\qquad
{
N(H)
}
\ar[
r,
->>
]
&
N(H)/H
\,=:\,
W(H)
\qquad
\mathpalette\mathclapinternal{
\raisebox{1pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
Weyl group
\end{tabular}
}
}
\end{tikzcd}
\end{equation}
\vspace{-3mm}
\item
We write
\vspace{-1mm}
\begin{equation}
\label{GActionsOnTopologicalSpaces}
\hspace{-1cm}
\GActionsOnTopologicalSpaces
\;\coloneqq\;
\left\{\!\!
\left(
\arraycolsep=1pt
\begin{array}{ccl} \small
\TopologicalSpace &\in& \kTopologicalSpaces \,,
\\
\rho &:& G \xrightarrow{\rho} \mathrm{Aut}(\TopologicalSpace)
\end{array}
\right)
\!\!\right\}
\;=\;
\Big\{
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace
\;\coloneqq\;
\big(\!\!\!
\begin{tikzcd}[column sep=40pt]
G \times \TopologicalSpace
\ar[
r,
"
\scalebox{.7}{$ (-)\cdot(-) $}
"{
above
},
"
\mbox{
\tiny
\color{greenii}
\bf
\begin{tabular}{c}
continuous
\\
actions
\end{tabular}
}
"{
below
}
]
&\TopologicalSpace
\end{tikzcd}
\!\!\!
\big)
\!
\Big\}
\end{equation}
\vspace{-3mm}
\noindent
for the category whose objects
$G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace$ are topological spaces $\TopologicalSpace$
equipped with continuous left $G$-actions
and
whose morphisms are $G$-equivariant continuous functions between these
(often: ``$G$-spaces'', for short).
\item
For $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \in \GActionsOnTopologicalSpaces$,
we write
\vspace{-.1cm}
\begin{itemize}
\vspace{-.2cm}
\item[$\circ$]
$
\TopologicalSpace^G
\,\coloneqq\,
\big\{
x \in \TopologicalSpace
\,\vert\,
\underset{g \in G}{\forall}
\;
g \cdot x \,=\, x
\big\}
\xhookrightarrow{\;}
\TopologicalSpace
\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\in
\;
\kTopologicalSpaces
$
for the {\it $G$-fixed subspace};
\vspace{0mm}
\item[$\circ$]
$
\TopologicalSpace
\relbar\joinrel\twoheadrightarrow
X_{G}
\,\coloneqq\,
X/G
\,\coloneqq\,
\left\{
[x] \coloneqq G \cdot x
\;\vert\;
x \in \TopologicalSpace
\right\}
\;\;
\in
\;
\kTopologicalSpaces
$
for the {\it $G$-quotient space} ({\it $G$-orbit space}).
\end{itemize}
\vspace{-.1cm}
\item
For $G\raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \in \GActionsOnTopologicalSpaces$ and $x \in \TopologicalSpace$,
its {\it isotropy subgroup} is denoted
\vspace{-2mm}
\begin{equation}
\label{StabilizerSubgroupInEquivarianceGroup}
G_x
\;\coloneqq\;
\mathrm{Stab}_{{}_{G}}(x)
\;\coloneqq\;
\{
g \in G \,\vert\, g \cdot x = x
\}
\;\subset\;
G\;.
\end{equation}
\end{itemize}
\end{notation}
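To illustrate these notions in a simple example (not needed later):
for $G = \mathbb{Z}/2$ acting on the circle $\TopologicalSpace = S^1 \subset \mathbb{C}$
by complex conjugation, the fixed subspace is the pair of points
$\TopologicalSpace^G = \{\pm 1\}$, the quotient space is an interval,
$\TopologicalSpace/G \,\simeq\, [-1,1]$ (via the real-part projection, which identifies $z$ with $\overline{z}$),
and the isotropy groups \eqref{StabilizerSubgroupInEquivarianceGroup} are
$G_{\pm 1} = \mathbb{Z}/2$ at the two fixed points and trivial at all other points.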
From \cref{NotionsOfEquivariantLocalTrivialization} on, we make the
following further assumptions on the equivariance group:
\begin{assumption}[Proper equivariant topology (following {\cite{DHLPS19}\cite{SS20OrbifoldCohomology}})]
\label{ProperEquivariantTopology}
We speak of {\it proper equivariant topology} if:
\vspace{-4mm}
\begin{itemize}
\setlength\itemsep{-2pt}
\item
equivariance groups $G$ are Lie groups with compact connected components;
\item
subgroups $H \subset G$ are compact;
\item
domain spaces $\TopologicalSpace$ are locally compact and Hausdorff;
\item
equivariance actions $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace$ are proper. \footnote{
Under the previous assumption that domain spaces are locally compact and
Hausdorff, all notions of proper actions agree \cite[Thm. 1.2.9]{Palais61};
see also \cite[Rem. 5.2.4]{Karppinen16}.
}
\end{itemize}
\end{assumption}
\begin{lemma}[Equivariance subgroups in proper equivariant topology]
\label{EquivarianceSubgroupsInProperEquivariantTopology}
Under Assumption \ref{ProperEquivariantTopology},
\vspace{-4mm}
\begin{enumerate}[{\bf (i)}]
\setlength\itemsep{-2pt}
\item
every
$H \subset G$
(namely every compact subgroup of a Lie group with compact connected components)
is:
\vspace{-4mm}
\begin{enumerate}[{\bf (a)}]
\setlength\itemsep{-2pt}
\item
a closed subgroup;
\item
a compact Lie group;
\end{enumerate}
\vspace{-3mm}
\item
every $G_x \subset G$
(namely the isotropy subgroup \eqref{StabilizerSubgroupInEquivarianceGroup}
of a proper action at any point $x$)
is of this form.
\end{enumerate}
\vspace{-.2cm}
\end{lemma}
\begin{proof}
For the first statement,
it is sufficient to consider the connected component of the neutral element.
Here statement (a) follows since Lie groups are Hausdorff spaces
and compact subspaces of Hausdorff spaces are closed subspaces.
With this, statement (b) follows from Cartan's closed-subgroup theorem
(e.g. \cite[Thm. 10.12]{Lee12}).
The assumption that $\TopologicalSpace$ is locally compact and Hausdorff
ensures that all notions of proper action agree, and it follows that
all stabilizer subgroups of points are compact.
With this, the second statement follows from the first.
\end{proof}
\medskip
\noindent
{\bf Basic examples of $G$-actions.} To fix notation and conventions,
we make explicit the following basic $G$-actions.
\begin{example}[Left and right-inverse multiplication action]
\label{LeftAndInverseRightMultiplicationAction}
Each $G \,\in\, \Groups(\kTopologicalSpaces)$
carries canonical left $G$-actions \eqref{GActionsOnTopologicalSpaces},
by left multiplication
and by inverse right multiplication, respectively,
\vspace{-.3cm}
\begin{equation}
\label{LeftMultiplicationAndInverseRightMultiplicationActionsOnATopologicalGroup}
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, G^L \,\in\, \GActionsOnTopologicalSpaces
{\phantom{AAAAAAA}}
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, G^R \,\in\, \GActionsOnTopologicalSpaces
\end{equation}
\vspace{-.5cm}
$$
\begin{tikzcd}[row sep=-5pt, column sep=4pt]
\scalebox{0.8}{$ G \times G^L $}
\ar[rr]
&&
\scalebox{0.8}{$ G^L $}
\\
\scalebox{0.8}{$ (g,h) $} &\longmapsto& \scalebox{0.8}{$ g \cdot h $}
\end{tikzcd}
{\phantom{AAAAAAA}}
\begin{tikzcd}[row sep=-5pt, column sep=4pt]
\scalebox{0.8}{$ G \times G^R $}
\ar[rr]
&&
\scalebox{0.8}{$ G^R $}
\\
\scalebox{0.8}{$ (g,h) $} &\longmapsto& \scalebox{0.8}{$ h \cdot g^{-1}$}
\mathpalette\mathrlapinternal{\,.}
\end{tikzcd}
$$
\vspace{-1mm}
\noindent Under inversion, these two actions are isomorphic, since $(g \cdot h)^{-1} = h^{-1} \cdot g^{-1}$:
\vspace{-.4cm}
\begin{equation}
\label{IsomorphismBetweenLeftMultiplicationAndInverseRightMultiplicationAction}
\begin{tikzcd}
G^L
\ar[
r,
"{ (-)^{-1} }"{above}
"{\sim}"{below, yshift=+1pt}
]
&
G^R
\;\;\;
\in
\;
\GActionsOnTopologicalSpaces \;.
\end{tikzcd}
\end{equation}
\vspace{-.3cm}
\end{example}
\begin{example}[Diagonal action]
\label{DiagonalActionOnProductGSpaces}
For
$
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace_1, \, G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace_2
\,\in\,
\GActionsOnTopologicalSpaces
$,
one can consider the {\it diagonal action}
$G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, (\TopologicalSpace_1 \times \TopologicalSpace_2)$
on the product space of the underlying spaces:
\vspace{-3mm}
$$
\begin{tikzcd}[row sep=-5pt]
G \times (\TopologicalSpace_1 \times \TopologicalSpace_2)
\ar[
rr
]
&&
\scalebox{0.8}{$ \TopologicalSpace_1 \times \TopologicalSpace_2$}
\\
\scalebox{0.8}{$ \left(g, (x_1, x_2)\right) $}
&\longmapsto&
\scalebox{0.8}{$ \left( g\cdot x_1, \, g \cdot x_2\right)$}.
\end{tikzcd}
$$
\end{example}
\begin{example}[Conjugation action on mapping space (e.g. {\cite[p. 5]{GuillouMayRubin13}})]
\label{ConjugationActionOnMappingSpaces}
Let $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace_1, \, G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace_2 \,\in \, \GActionsOnTopologicalSpaces$.
\noindent
{\bf (i)}
The mapping space \eqref{MappingSpace} of the underlying topological spaces
carries a $G$-action
given by {\it conjugation}:
\vspace{-.3cm}
\begin{equation}
\label{ConjugationActionOnMappingSpace}
G
\raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \;
\mathrm{Maps}(\TopologicalSpace_1, \TopologicalSpace_2)
\;\;\;
\in
\;
\GActionsOnTopologicalSpaces
\end{equation}
\vspace{-.6cm}
\vspace{-.5cm}
\begin{equation}
\label{ConjugationActionOnMapsBetweenGSpaces}
\hspace{-2mm}
f \,\in\, \mathrm{Maps}(\TopologicalSpace_1, \TopologicalSpace_2)
\;\;\;\;\;\;
\vdash
\;\;\;\;\;\;
\underset{g \in G}{\forall}
\quad
\begin{tikzcd}
\TopologicalSpace_1
\ar[
rr,
"{
g \cdot f
}"
]
\ar[
d,
"{
g^{-1}\cdot(-)
}"{left}
]
&&
\TopologicalSpace_2
\\
\TopologicalSpace_1
\ar[
rr,
"{
f
}"{below}
]
&&
\TopologicalSpace_2
\ar[
u,
"{
g \cdot (-)
}"
]
\end{tikzcd}
{\phantom{AA}}
\mbox{i.e.,}
\;\;\;\;
\underset{x \in \TopologicalSpace_1}{\forall}
\;
(g \cdot f)(x)
\;=\;
g
\cdot
(
f
(
g^{-1} \cdot x
)
)\;.
\end{equation}
\vspace{-.4cm}
\noindent
{\bf (ii)} The fixed locus of the conjugation action
\eqref{ConjugationActionOnMapsBetweenGSpaces} is the subspace of
{\it $G$-equivariant functions}
\vspace{-3mm}
\begin{equation}
\label{EquivariantFunctions}
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
subspace of $G$-equivariant maps
\end{tabular}
}
}
}{
\big\{
f \,\in\,
\mathrm{Maps}(\TopologicalSpace_1, \TopologicalSpace_2)
\,\big\vert\,
f(-) = g^{-1}\cdot f(g\cdot -)
\big\}
}
\;\;
=
\;\;
\overset{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
$G$-fixed subspace
\\
of conjugation action
\end{tabular}
}
}{
\mathrm{Maps}
(
\TopologicalSpace_1,
\,
\TopologicalSpace_2
)^G
}
\;\; \xhookrightarrow{\quad}
\;\;
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{.9}
\begin{tabular}{c}
space of all
\\
continuous maps
\end{tabular}
}
}
}{
\mathrm{Maps}
(
\TopologicalSpace_1,
\,
\TopologicalSpace_2
)\;.
}
\end{equation}
\vspace{-.4cm}
\noindent
{\bf (iii)}
This construction \eqref{ConjugationActionOnMappingSpace} is functorial in both
arguments, contravariantly in the first.
Together with \eqref{IsomorphismBetweenLeftMultiplicationAndInverseRightMultiplicationAction}
this means, in particular, for $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \in \GActionsOnTopologicalSpaces$ that
\vspace{-3mm}
\begin{equation}
\mathrm{Maps}
(
G^L,
\TopologicalSpace
)
\;\simeq\;
\mathrm{Maps}
(
G^R,
\TopologicalSpace
)
\;\;\;
\in
\;
\GActionsOnTopologicalSpaces \,.
\end{equation}
\vspace{-1mm}
\noindent
{\bf (iv)}
With the first argument fixed, this construction
\eqref{ConjugationActionOnMappingSpace} is a right adjoint to the
product operation from Ex. \ref{DiagonalActionOnProductGSpaces}:
\vspace{-3mm}
\begin{equation}
\label{InternalHomInGSpaces}
\begin{tikzcd}[column sep=35pt]
\Actions{G}(\kTopologicalSpaces)
\ar[
rr,
shift right=6pt,
"{
\scalebox{.7}{$
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \mathrm{Maps}(\TopologicalSpace,\, -)
$}
}"{below}
]
\ar[
rr,
phantom,
"\scalebox{.7}{$\bot$}"
]
&&
\Actions{G}(\kTopologicalSpaces) \;.
\ar[
ll,
shift right=6pt,
"{
\scalebox{.7}{$
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \,\times\, (-)
$}
}"{above}
]
\end{tikzcd}
\end{equation}
\vspace{-.4cm}
\end{example}
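As an illustration of the fixed subspace \eqref{EquivariantFunctions}: for
$\TopologicalSpace_1 = G^L$
\eqref{LeftMultiplicationAndInverseRightMultiplicationActionsOnATopologicalGroup},
a $G$-equivariant map $G^L \to \TopologicalSpace_2$ is uniquely determined by its value
on the neutral element, $f(g) = g \cdot f(\NeutralElement)$, so that evaluation at
$\NeutralElement$ exhibits a natural isomorphism
\vspace{-2mm}
$$
\mathrm{Maps}
(
G^L
,\,
\TopologicalSpace_2
)^G
\;\xrightarrow{\;\; \mathrm{ev}_{\NeutralElement} \;\;}\;
\TopologicalSpace_2
\,,
\qquad
f \,\longmapsto\, f(\NeutralElement)
\,,
$$
\vspace{-3mm}
\noindent
with inverse given by $x \mapsto \big( g \mapsto g \cdot x \big)$.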
\medskip
\noindent
{\bf Change of equivariance group.}
Much of our formulation of equivariant topology proceeds by applying the
{\it change of equivariance group}
adjoint triple from the following Lem. \ref{InducedAndCoinducedActions}, in numerous ways.
\begin{lemma}[Change of equivariance group (e.g. {\cite[\S I.1]{May96}\cite[p. 9]{DHLPS19}})]
\label{InducedAndCoinducedActions}
Given a continuous homomorphism of topological groups
\vspace{-2mm}
$$
\begin{tikzcd}
G_1
\ar[
r,
"\phi"
]
&
G_2\;,
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
we have a triple of adjoint functors (Ntn. \ref{AdjointFunctors})
between their categories of
continuous actions (Ntn. \ref{GActionOnTopologicalSpaces}):
\vspace{-3mm}
\begin{equation}
\label{AdjointTripleOfChangeOfEquivarianceGroup}
\begin{tikzcd}[column sep=55pt]
G_1\mathrm{Act}
(
\mathrm{TopSp}
)
\ar[
rrr,
shift right=13pt,
"{
\mathrm{Maps}
(
G_2,
\,
-
)^{G_1}
\;\coloneqq\;
\mathrm{Maps}
\left(
\phi^\ast
(
G^L_2
)
,\,
-
\right)^{G_1}
}"{
below
},
"\scalebox{.7}{$\bot$}"{above}
]
\ar[
rrr,
shift left=13pt,
"{
G_2 \times_{{}_{G_1}} (-)
\;\coloneqq\;
\left(
\phi^\ast
(
G^R_2
)
\times
(-)
\right)_{G_1}
}"{
above
},
"\scalebox{.7}{$\bot$}"{below}
]
&&&
G_2\mathrm{Act}
(
\mathrm{TopSp}
)
\mathpalette\mathrlapinternal{\,,}
\ar[
lll,
"\;\phi^\ast"{description}
]
\end{tikzcd}
\end{equation}
\vspace{-.4cm}
\noindent
where:
-- the $G_1$-{\rm pullback action} on $\phi^\ast Y$ is through $\phi$,
on the same underlying topological space;
-- the {\rm induced $G_2$-action} on $G_2 \times_{G_1} X$ is
that given by left multiplication of $G_2$ on the $G_2$-factor;
-- the {\rm co-induced $G_2$-action} on $\mathrm{Maps}(G_2,X)^{G_1}$
is given by right multiplication on the $G_2$-argument.
\end{lemma}
\begin{example}[Quotient spaces, fixed loci and trivial action]
\label{QuotientAndFixedLociFromChangeOfGroupAdjunction}
For $G \xrightarrow{\;} 1$ the unique group homomorphism to the trivial group, the corresponding pullback action (Lemma \ref{InducedAndCoinducedActions}) is the trivial $G$-action, whose adjoints \eqref{AdjointTripleOfChangeOfEquivarianceGroup}
form the quotient space $(-)_G$ and the $G$-fixed space
$(-)^G$ (Ntn. \ref{GActionOnTopologicalSpaces}), respectively:
\begin{equation}
\hspace{-.7cm}
\begin{tikzcd}[column sep=50pt]
\Actions{G}(\kTopologicalSpaces)
\ar[
rr,
shift left=16pt,
"{
\mbox{
\tiny
\color{greenii}
\bf
quotient space
}
}"{description},
"{
(-)/G
}"{yshift=1pt}
]
\ar[
from=rr,
"{
\mbox{
\tiny
\color{greenii}
\bf
trivial $G$-action
}
}"{description}
]
\ar[
rr,
shift right=16pt,
"{
\mbox{
\tiny
\color{greenii}
\bf
fixed locus
}
}"{description},
"{
(-)^G
}"{swap,yshift=-1pt}
]
\ar[
rr,
phantom,
"{\scalebox{.75}{$\bot$}}",
shift left=8pt
]
\ar[
rr,
phantom,
"{\scalebox{.75}{$\bot$}}",
shift right=8pt
]
&&
\kTopologicalSpaces
\mathpalette\mathrlapinternal{\,.}
\end{tikzcd}
\end{equation}
\end{example}
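Spelled out on hom-sets (writing, just for this display, $\mathrm{Hom}_G$ and $\mathrm{Hom}$
for morphisms of $G$-actions and of underlying spaces, respectively, and
$(-)^{\mathrm{triv}}$ for a space regarded with its trivial $G$-action), these two
adjunctions say that equivariant maps into trivial actions factor uniquely through the
quotient space, while equivariant maps out of trivial actions factor uniquely through
the fixed locus:
\vspace{-2mm}
$$
\mathrm{Hom}_G
\big(
\TopologicalSpace
,\,
\mathrm{Y}^{\mathrm{triv}}
\big)
\;\simeq\;
\mathrm{Hom}
\big(
\TopologicalSpace/G
,\,
\mathrm{Y}
\big)
\,,
\qquad\quad
\mathrm{Hom}_G
\big(
\mathrm{Y}^{\mathrm{triv}}
,\,
\TopologicalSpace
\big)
\;\simeq\;
\mathrm{Hom}
\big(
\mathrm{Y}
,\,
\TopologicalSpace^{G}
\big)
\,.
$$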
\begin{example}[Underlying topological spaces and (co-)free actions]
\label{ForgettingGActionsAsPullbackAction}
For $1 \xhookrightarrow{\;} G$ the unique inclusion of the trivial group,
the corresponding pullback action (Lemma \ref{InducedAndCoinducedActions})
is the forgetful functor from continuous $G$-actions to their underlying
topological spaces, whose adjoints \eqref{AdjointTripleOfChangeOfEquivarianceGroup}
form the free action and cofree action, respectively:
\vspace{-2mm}
\begin{equation}
\label{FreeForgetfulAdjunctionForGAction}
\hspace{.7cm}
\begin{tikzcd}[column sep=50pt]
\kTopologicalSpaces
\ar[
rr,
shift left=2*8pt,
"{
\mbox{
\tiny
\color{greenii}
\bf
free action
}
}"{description},
"{
G \times (-)
}"{above, yshift=+1pt}
]
\ar[
rr,
shift right=2*8pt,
"{
\mbox{
\tiny
\color{greenii}
\bf
co-free action
}
}"{description},
"{
\mathrm{Maps}(G,-)
}"{below, yshift=-1pt}
]
\ar[
rr,
shift left=1*8pt,
phantom,
"\scalebox{.7}{$\bot$}"{description}
]
\ar[
rr,
shift right=1*8pt,
phantom,
"\scalebox{.7}{$\bot$}"{description}
]
\ar[
rr,
shift right=1*8pt,
phantom,
"\scalebox{.7}{$\bot$}"{description}
]
&&
\GActionsOnTopologicalSpaces
\mathpalette\mathrlapinternal{\;.}
\ar[
ll,
"\mbox{\tiny\color{greenii} \bf forget $G$-action}"{description}
]
\end{tikzcd}
\end{equation}
\end{example}
\begin{lemma}[Forgetting $G$-action creates limits and colimits]
\label{ForgetfulFunctorFromTopologicalGSpacesToGSpaces}
The forgetful functor from topological $G$-actions
to underlying topological spaces
(Example \ref{ForgettingGActionsAsPullbackAction})
{\it creates limits and colimits}, in that a diagram of
topological $G$-actions is a (co)limiting (co)cone diagram precisely if
its underlying diagram of topological spaces is:
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=-2pt]
\GActionsOnTopologicalSpaces
\ar[
rr,
"
\mbox{
\tiny
\rm
\color{greenii}
\bf forget $G$-action
}
"
]
&&
\kTopologicalSpaces
\\
\scalebox{0.8}{$
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace
\;\simeq\;
\limit{i}
(
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace_i
)
$}
&\Leftrightarrow&
\scalebox{0.8}{$
\TopologicalSpace
\;\simeq\;
\limit{i}
(
\TopologicalSpace_i
)
$}
\\
\scalebox{0.8}{$
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace
\;\simeq\;
\colimit{i}
(
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace_i
)
$}
&\Leftrightarrow&
\scalebox{0.8}{$
\TopologicalSpace
\;\simeq\;
\colimit{i}
(
\TopologicalSpace_i
)
$}
\;.
\end{tikzcd}
$$
\end{lemma}
\begin{proof}
That the forgetful functor {\it preserves} all limits
and colimits
follows
(Prop. \ref{RightAdjointFunctorsPreserveFiberProducts})
from it being a right and a left adjoint \eqref{FreeForgetfulAdjunctionForGAction}.
A general abstract way to see that it also reflects
and hence creates all limits and colimits is to notice that $G$-actions
are algebras for the monad $G \times (-)$, and that monadic functors
create all limits which exist in their codomain, and create all colimits which
exist and are preserved by the monad
(e.g., \cite[pp. 137-138]{MacLane70}). But the monad here is the composite
of the two left adjoints in the change of group adjoint triple
\eqref{AdjointTripleOfChangeOfEquivarianceGroup}
along $1 \to G$
and hence preserves all colimits.
The resulting claim also appears in \cite[\S B]{Schwede18}.
For the record, we spell out the reflection of pullbacks/fiber products
(Ntn. \ref{CartesianSquares}, the general proof is directly analogous),
which is the main case of interest in \cref{EquivariantPrincipalTopologicalBundles}:
Consider a commuting square of topological $G$-actions whose
underlying square of topological spaces is a pullback, and consider
a cone with tip $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, Q$ over this square, as shown on the left here:
\vspace{-2mm}
\begin{equation}
\label{ReflectedPullbackOfGActions}
\hspace{-8mm}
\begin{tikzcd}
Q
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[
drr,
bend left=20
]
\ar[
ddr,
bend right=20
]
\ar[
dr,
dashed
]
\\
&
\TopologicalSpace \times_{\mathrm{B}} \mathrm{P}
\ar[r]
\ar[d]
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
&
\mathrm{P}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[d]
\\
&
\TopologicalSpace
\ar[r]
\ar[out=-180+66, in=-66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
&
\mathrm{B}
\ar[out=-180+66, in=-66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift left=1]
\end{tikzcd}
{\phantom{AAAAAAAA}}
\begin{tikzcd}[column sep=tiny, row sep=-4pt]
& &[10pt]
& &
G \times \mathrm{P}
\ar[ddr]
\ar[dddd]
\\
G
\times
\mathrm{Q}
\ar[
rr,
dashed,
"\mathrm{id} \times f"
]
\ar[
dddd
]
&&
G \times
(
\TopologicalSpace \times_{\mathrm{B}} \mathrm{P}
)
\ar[
dddd
]
\ar[urr]
\ar[ddr]
\\
& & & & &
G \times \mathrm{B}
\ar[dddd, "\mbox{\tiny\color{greenii} group action}"{sloped}]
\\
& & &
G \times \TopologicalSpace
\ar[
urr,
crossing over
]
\\
& & & &
\mathrm{P}
\ar[ddr]
\\
\mathrm{Q}
\ar[
rr,
dashed,
"f"
]
&&
\TopologicalSpace \times_{\mathrm{B}} \mathrm{P}
\ar[
urr
]
\ar[ddr]
\\
&& &&&
\mathrm{B}
\\
& & &
\TopologicalSpace
\ar[urr]
\ar[
from=uuuu,
crossing over
]
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent
We need to show that there exists a unique dashed morphism making the
full diagram on the left commute. But since the underlying square of topological spaces
is a pullback, there exists a unique such continuous function,
in the bottom part of the diagram on the right of
\eqref{ReflectedPullbackOfGActions}.
Hence it remains to show that this unique function is necessarily $G$-equivariant.
But the functor $G \times (-)$ preserves limits,
since limits commute with each other (the product with $G$ being itself a limit),
so that also the top square on the right is a pullback.
Therefore also the top dashed morphism exists uniquely, and makes
the square commute as shown, by functoriality of limits.
The argument for colimits is analogous, now using that
$G \times (-)$ also preserves colimit diagrams, by \eqref{MappingSpaceAdjunction}.
\end{proof}
\begin{proposition}[Compactly generated topological $G$-actions form a regular category]
\label{CompactlyGeneratedTopologicalGActionsFormARegularCategory}
For $G \,\in\, \Groups(\kTopologicalSpaces)$,
the category $\Actions{G}(\kTopologicalSpaces)$
\eqref{GActionOnTopologicalSpaces}
is regular (Def. \ref{RegularCategory}).
\end{proposition}
\begin{proof}
Since regularity is entirely a condition on limits and colimits of a category,
it transfers through any forgetful functor which creates all limits and colimits.
Therefore the statement follows by the combination of
Lem. \ref{ForgetfulFunctorFromTopologicalGSpacesToGSpaces}
with Prop. \ref{CompactyGeneratedTopologicalSpacesFormARegularCategory}.
\end{proof}
In generalization of Example \ref{ForgettingGActionsAsPullbackAction}, we have:
\begin{example}[Restricted actions]
\label{RestrictedActions}
Let $H \xhookrightarrow{\;} G$ be any subgroup inclusion. Then
the corresponding pullback $H$-action (Lemma \ref{InducedAndCoinducedActions})
of a $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \mathrm{Y} \in \GActionsOnTopologicalSpaces$
is just its restriction to the action of $H \subset G$
\vspace{-2mm}
\begin{equation}
\label{InducedRestrictedActionAdjunction}
\begin{tikzcd}[row sep=0pt]
H\mathrm{Act}
(
\mathrm{TopSp}
)
\ar[
rr,
shift left=5pt,
"G \times_H (-)"{above}
]
\ar[
rr,
phantom,
"\scalebox{.7}{$\bot$}"
]
&&
G\mathrm{Act}
(
\mathrm{TopSp}
)
\ar[
ll,
shift left=5pt,
"\mbox{\tiny\color{greenii} \bf restricted action}"
]
\\
\scalebox{0.7}{$ H \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \mathrm{Y} $}
&\longmapsfrom&
\scalebox{0.7}{$ G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \mathrm{Y} $}
\mathpalette\mathrlapinternal{\,,}
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent
and the adjunction \eqref{AdjointTripleOfChangeOfEquivarianceGroup}
corresponds to a natural bijection \eqref{FormingAdjuncts} of hom-sets of the following form:
\vspace{-2mm}
\begin{equation}
\label{HomIsomorphismForRestrictedActionAndInducedAction}
\bigg\{
\begin{tikzcd}[row sep=-3pt]
\TopologicalSpace
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
H
}$}"{description},shift right=1]
\ar[r]
&
\mathrm{Y}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
H
}$}"{description},shift right=1]
\\
\scalebox{.7}{$ x $}
\ar[
r,
phantom,
"{\overset{}{\longmapsto}}"{description}
]
&
\scalebox{.7}{$ f(x) $}
\end{tikzcd}
\bigg\}
\qquad
\mathrel{\mathop{
\xleftrightarrow{
\qquad \quad
\widetilde{(-)}
\qquad \quad
}}_{\scalebox{0.6}{\bf \color{greenii} induction/restriction}}
}
\qquad
\bigg\{\!\!\!
\begin{tikzcd}[row sep=-3pt]
G \times_H
\TopologicalSpace
\ar[out=180-60, in=60, looseness=3.0, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\ar[r]
&
\mathrm{Y}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description},shift right=1]
\\
\scalebox{.7}{$ {[g,x]} $}
\ar[
r,
phantom,
"{\overset{}{\longmapsto}}"{description}
]
&
\scalebox{.7}{$ g \cdot f(x) $}
\end{tikzcd}
\!\!\! \bigg\}.
\end{equation}
\end{example}
\begin{example}[Fixed loci with residual Weyl-group action]
\label{FixedLociWithResidualWeylGroupAction}
Let $H \subset G$ be a subgroup inclusion. Consider the functor which
sends a $G$-action to its $H$-fixed subspace equipped with its
residual Weyl group action (Ntn. \ref{GActionOnTopologicalSpaces})
\vspace{-2mm}
\begin{equation}
\label{HFixedLocusFunctor}
\hspace{-.6cm}
\begin{tikzcd}[row sep=-4pt, column sep=large]
N(H)
\mathrm{Act}
(
\mathrm{TopSp}
)
\ar[
rr,
"{
\scalebox{0.75}{$ (-)^H
\;\coloneqq\;
\mathrm{Maps}
\left(
N(H)/H, \,
-
\right)^{N(H)}
$}
}"{above}
]
&&
W(H)
\mathrm{Act}
(\mathrm{TopSp})
\\
\scalebox{0.7}{$ N(H) \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace $}
&\longmapsto&
\scalebox{0.7}{$ W(H)
\; \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt}\;
\TopologicalSpace^H
\,=:\,
\big\{
x \in \TopologicalSpace
\; \big\vert \;\;
\underset{
h \in H \subset G
}{\forall} \;
h \cdot x = x
\big\}.
$}
\end{tikzcd}
\end{equation}
\vspace{-3mm}
\noindent
This fixed locus functor $(-)^H$ is, equivalently, the pull-push of change-of-equivariance-groups
(Lemma \ref{InducedAndCoinducedActions}) through the normalizer correspondence
\vspace{-4mm}
$$
\begin{tikzcd}[row sep=-4pt]
&
N\!(H)
\ar[dl, hook]
\ar[dr,->>]
\\
G
&&
W\!(H)\;,
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
in that it is the composite right adjoint
in the following composite of
change-of-equivariance-group adjunctions \eqref{AdjointTripleOfChangeOfEquivarianceGroup}:
\vspace{-.5cm}
\begin{equation}
\label{RightAdjointWeylGroupValuedFixedLocusFunctor}
\hspace{-5mm}
\begin{tikzcd}[column sep=7em]
G\mathrm{Act}
\ar[
rr,
shift right=7pt,
"(N(H) \hookrightarrow G)^\ast"{below}
]
\ar[
rr,
phantom,
"\scalebox{.7}{$\bot$}"
]
\ar[
rrrr,
rounded corners,
to path={
-- ([yshift=-15pt]\tikztostart.south)
--node[below]{\scalebox{.7}{$(-)^H$}} ([yshift=-12pt]\tikztotarget.south)
-- (\tikztotarget.south)}
]
&&
N\!(H)\mathrm{Act}
\ar[
ll,
shift right=7pt,
"G \times_{N\!(H)} (-)"{above}
]
\ar[
rr,
shift right=7pt,
"{
\mathrm{Maps}
(
N\!(H)\!/H,
-
)^{N\!(H)}
}"{below}
]
\ar[
rr,
phantom,
"\scalebox{.7}{$\bot$}"
]
&&
\big(
N\!(H)\!/H
\big)\mathrm{Act}
\mathpalette\mathrlapinternal{\,.}
\ar[
ll,
shift right=7pt,
"{
\left( N\!(H)\, \twoheadrightarrow N\!(H)\!/H \right)^\ast
}"{above}
]
\ar[
llll,
rounded corners,
to path={
-- ([yshift=+13pt]\tikztostart.north)
--node[above]{\scalebox{.7}{$G/H \times_{N\!(H)\!/H}(-)$}} ([yshift=+15pt]\tikztotarget.north)
-- (\tikztotarget.north)}
]
\end{tikzcd}
\end{equation}
\vspace{-.4cm}
\end{example}
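For orientation: if $H \subset G$ is a {\it normal} subgroup, then $N\!(H) = G$ and
$W\!(H) = G/H$, so that the composite \eqref{RightAdjointWeylGroupValuedFixedLocusFunctor}
is simply the functor sending a $G$-action to its $H$-fixed subspace equipped with its
residual $G/H$-action:
\vspace{-2mm}
$$
(-)^H
\;:\;
G\mathrm{Act}
(
\mathrm{TopSp}
)
\xrightarrow{\;\;\;\;}
(G/H)\mathrm{Act}
(
\mathrm{TopSp}
)
\,.
$$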
\begin{example}[Coset spaces (e.g. {\cite[p. 34]{Bredon72}})]
\label{CosetSpacesAsActions}
For $H \hookrightarrow G$ a subgroup inclusion,
the induced $G$-action \eqref{InducedRestrictedActionAdjunction}
of the unique trivial $H$-action on the point
is the coset space
\vspace{-2mm}
\begin{equation}
\label{CosetSpace}
G \times_H \ast
\;=\;
G/H
\;\coloneqq\;
\{ g H \subset G \,\vert\, g \in G \}
\;\in\;
\GActionsOnTopologicalSpaces
\end{equation}
\vspace{-2mm}
\noindent
equipped with its $G$-action by left multiplication of representatives in $G$.
\end{example}
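For instance (a standard example, included only for illustration): for
$G = \mathrm{SO}(3)$ and $H = \mathrm{SO}(2)$ the subgroup of rotations about a fixed axis,
the coset space \eqref{CosetSpace} is the 2-sphere with its rotation action,
\vspace{-2mm}
$$
\mathrm{SO}(3)/\mathrm{SO}(2)
\;\simeq\;
S^2
\,,
$$
\vspace{-4mm}
\noindent
and the isotropy group \eqref{StabilizerSubgroupInEquivarianceGroup} of the base coset
$\NeutralElement H \,\in\, G/H$ is $H$ itself.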
\medskip
\noindent
{\bf Quotient spaces.} In generalization of Example \ref{QuotientAndFixedLociFromChangeOfGroupAdjunction} we have:
\begin{example}[Partial quotient spaces]
\label{QuotientSpaces}
For
$G, G' \in \mathrm{Grps}(\mathrm{HausSp})$
and $G \times G' \xrightarrow{\mathrm{pr}_2} G'$ the projection
homomorphism out of their direct product,
the corresponding pullback action in Lemma \ref{InducedAndCoinducedActions}
assigns trivial $G$-actions
\vspace{-2mm}
$$
\begin{tikzcd}[column sep=large]
(
G \times G'
)
\mathrm{Act}
(
\mathrm{TopSp}
)
\ar[
rr,
shift left=5pt,
"(-)/G"
]
\ar[
rr,
phantom,
"\scalebox{.7}{$\bot$}"
]
&&
G'\mathrm{Act}
(
\mathrm{TopSp}
)
\ar[
ll,
shift left=5pt,
"\mbox{\tiny\color{greenii}\bf trivial $G$-action}"
]
\end{tikzcd}
$$
\vspace{0mm}
\noindent
and its left adjoint \eqref{AdjointTripleOfChangeOfEquivarianceGroup}
forms
$G$-quotients
$
G' \times_{G \times G'} (-)
\;=\;
(-)/G
\,.
$
The unit of this adjunction is the natural transformation which sends
any $G \times G'$-action to the coprojection
$q_{\TopologicalSpace} : \TopologicalSpace \xrightarrow{\;} \TopologicalSpace/G$
onto its $G$-quotient space, and whose naturality sends
any $(G \times G')$-equivariant continuous function $f : \TopologicalSpace \xrightarrow{\;} \mathrm{Y}$
to a commuting square of $G'$-actions:
\vspace{-2mm}
\begin{equation}
\label{QuotientSpaceNaturalitySquare}
\begin{tikzcd}[column sep=large, row sep=small]
\TopologicalSpace
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G'
}$}"{description},shift right=1]
\ar[
r,
"f"{above}
]
\ar[
d,
"q_{\TopologicalSpace}"{left}
]
&
\mathrm{Y}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G'
}$}"{description},shift right=1]
\ar[
d,
"q_{\mathrm{Y}}"
]
\\
\TopologicalSpace/G
\ar[out=-180+66, in=-66, looseness=3.6, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G'
}$}"{description},shift left=1]
\ar[
r,
"f/G"{below}
]
&
\mathrm{Y}/G
\ar[out=-180+66, in=-66, looseness=3.6, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G'
}$}"{description},shift left=1]
\end{tikzcd}
\end{equation}
\end{example}
\begin{lemma}[Hausdorff quotient spaces (e.g. {\cite[Thm. 3.1]{Bredon72}})]
\label{HausdorffQuotientSpaces}
If the equivariance group $G$ is compact
and the topological space underlying
$\TopologicalSpace \,\in\, \GActionsOnTopologicalSpaces$ (Ntn. \ref{GActionOnTopologicalSpaces})
is Hausdorff, then the quotient space $X/G$ is Hausdorff.
\end{lemma}
\begin{remark}[Recognition of pullbacks of quotient coprojections]
{\bf (i)} The quotient coprojection squares \eqref{QuotientSpaceNaturalitySquare}
are not in general pullbacks (Ntn. \ref{CartesianSquares}); and it is important
to recognize those situations
in which they are.
\vspace{-1mm}
\noindent {\bf (ii)}
A general recognition principle applies to
{\it compact} quotient groups (Lemma \ref{RecognitionOfCartesianQuotientProjections} below)
which is however of little value in the applications
to twisted cohomology theory, where the quotient groups
generically are topological representatives of general $\infty$-groups
(topological realizations of general simplicial groups) and thus rarely compact.
\vspace{-1mm}
\noindent {\bf (iii)}
Without assuming compactness of the quotient group we
may still recognize pullbacks of {\it free} (principal) quotients
(Lemma \ref{HomomorphismsOfLocallyTrivialPrincipalBundlesArePullbackSquares} below)
from just the fact that the domain bundle (the one on the left in \eqref{MorphismOfPrincipalBundlesIsPullback}) is locally trivial
(compare Ntn. \ref{TerminologyForPrincipalBundles}).
This is a basic fact of principal bundle theory, but rarely, if
ever, stated in the general form of Lemma
\ref{HomomorphismsOfLocallyTrivialPrincipalBundlesArePullbackSquares}
in which it drives much of the proofs in \cref{NotionsOfEquivariantLocalTrivialization}.
\end{remark}
\begin{lemma}[Recognition of pullbacks of compact group quotients ({\cite[Prop. 4.1]{BykovFlores15}})]
\label{RecognitionOfCartesianQuotientProjections}
If $G$ is compact and the underlying topological spaces of
$\TopologicalSpace, \mathrm{Y} \,\in\, \GActionsOnTopologicalSpaces$
are Hausdorff
then, for any morphism $f : \TopologicalSpace \xrightarrow{\;} \mathrm{Y}$,
its quotient naturality square
\eqref{QuotientSpaceNaturalitySquare}
is a pullback square (Ntn. \ref{CartesianSquares})
if and only if $f$ preserves isotropy groups (as subgroups of $G$):
\vspace{-1mm}
$$
\begin{tikzcd}[column sep=large]
\TopologicalSpace
\ar[
r,
"f"{above}
]
\ar[
d,
"q_{\TopologicalSpace}"{left}
]
\ar[
dr,
phantom,
"\mbox{\tiny\rm(pb)}"
]
&
\mathrm{Y}
\ar[
d,
"q_{\mathrm{Y}}"
]
\\
\TopologicalSpace/G
\ar[
r,
"f/G"{below}
]
&
\mathrm{Y}/G
\end{tikzcd}
{\phantom{AAA}}
\Leftrightarrow
{\phantom{AAA}}
\underset{x \in \TopologicalSpace}{\forall}
\left(
G_x \simeq G_{f(x)}
\right)
\,.
$$
\end{lemma}
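A minimal example of failure, for orientation: let $G = \mathbb{Z}/2$ act on itself by
left multiplication and let $f$ be the unique morphism to the point with its trivial action.
All isotropy groups in the domain are trivial while that of the point is all of $G$, and
correspondingly the quotient naturality square \eqref{QuotientSpaceNaturalitySquare},
\vspace{-2mm}
$$
\begin{tikzcd}
\mathbb{Z}/2
\ar[r]
\ar[d, "q"{left}]
&
\ast
\ar[d, "\mathrm{id}"]
\\
\ast
\ar[r, "\mathrm{id}"{below}]
&
\ast
\mathpalette\mathrlapinternal{\,,}
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
is not a pullback: the pullback of the right and bottom morphisms is the point, not $\mathbb{Z}/2$.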
\begin{lemma}[Recognition of pullbacks of principal quotients]
\label{HomomorphismsOfLocallyTrivialPrincipalBundlesArePullbackSquares}
A homomorphism of $G$-principal fibrations
(covering any morphism of base spaces)
is a pullback square
(Ntn. \ref{CartesianSquares})
as soon as the domain is a locally trivializable fiber bundle:
\vspace{-7mm}
\begin{equation}
\label{MorphismOfPrincipalBundlesIsPullback}
{
\begin{tikzcd}[column sep=large, row sep=20pt]
\\
\mathrm{P}_1
\ar[
rr,
"f"
]
\ar[
d,
"p_1"{left}
]
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description}, shift right=1]
&&
\mathrm{P}_2
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\mathpalette\mathclapinternal{
G
}$}"{description}, shift right=1]
\ar[
d,
"\,p_2"
]
\\
\underset{
\mathpalette\mathclapinternal{
\raisebox{-4pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{1}
\begin{tabular}{c}
$G$-principal \&
\\
locally trivial
\end{tabular}
}
}
}{
\TopologicalSpace_1
}
\ar[
rr,
"f/G"
]
&&
\underset{
\mathpalette\mathclapinternal{
\raisebox{-4pt}{
\tiny
\color{darkblue}
\bf
\def\arraystretch{1}
\begin{tabular}{c}
$G$-principal
\end{tabular}
}
}
}{
\TopologicalSpace_2
}
\end{tikzcd}
}
{\phantom{AAAA}}
\Rightarrow
{\phantom{AAAA}}
\begin{tikzcd}[column sep=large]
\mathrm{P}_1
\ar[
rr
]
\ar[
d,
"p_1"{left}
]
\ar[
drr,
phantom,
"\mbox{\tiny\rm(pb)}"{description}
]
&&
\mathrm{P}_2
\ar[
d,
"\,p_2"
]
\\
\TopologicalSpace_1
\ar[rr]
&&
\TopologicalSpace_2
\end{tikzcd}
\end{equation}
\end{lemma}
\begin{proof}
First, consider the special case when the domain bundle is
actually trivial and the base morphism is the identity
\vspace{-2mm}
$$
\begin{tikzcd}[column sep=large, row sep=small]
G \times \TopologicalSpace
\ar[
d,
"\; \mathrm{pr}_2"{left}
]
\ar[
r,
"\sigma"
]
&
\mathrm{P}
\ar[
d,
"\; p"
]
\\
\TopologicalSpace
\ar[r,-,shift left=1pt]
\ar[r,-,shift right=1pt]
&
\TopologicalSpace
\end{tikzcd}
$$
(such a morphism is equivalently a global section $\sigma(\NeutralElement, -)$ of $p$).
In that case, a continuous inverse of $\sigma$ is given by the composite
\vspace{-2mm}
$$
\begin{tikzcd}
G \times \TopologicalSpace
&
G \times \mathrm{P}
\ar[
l,
"{\mathrm{id} \times p}"{above}
]
&
\mathrm{P} \times_{\TopologicalSpace} \mathrm{P}
\ar[
l,
"{\sim}"{above}
]
&&&
\mathrm{P}
\mathrm{\,,}
\ar[
lll,
"{
\left(
\sigma(\NeutralElement,\,p(-))
,\,
\mathrm{id}
\right)
}"{above}
]
\end{tikzcd}
$$
\vspace{-1mm}
\noindent
where the second map is the inverse of the shear map (see
\eqref{PrincipalityConditionAsShearMapBeingAnIsomorphism}) of the codomain bundle,
and where $\NeutralElement \in G$ denotes the neutral element.
From this it follows
that morphisms out of any locally trivial principal bundle
over the identity base morphism are isomorphisms,
by local recognition of homeomorphisms
(Ex. \ref{IsomorphismOfBundlesDetectedOnOpenCovers}).
Finally, in the general case the universal comparison morphism
\vspace{-3mm}
$$
\begin{tikzcd}[row sep=small]
\mathrm{P}_1
\ar[
r,
dashed,
"\sim"
]
\ar[dr]
&
\mathrm{P}_2 \times_{{}_{\TopologicalSpace_2}} \TopologicalSpace_1
\ar[rr]
\ar[d]
\ar[
drr,
phantom,
"\mbox{\tiny\rm(pb)}"{description}
]
&&
\mathrm{P}_2
\ar[d]
\\
&
\TopologicalSpace_1
\ar[rr]
&&
\TopologicalSpace_2
\end{tikzcd}
$$
\vspace{-2mm}
\noindent from $\mathrm{P}_1$ to the pullback of $\mathrm{P}_2$ in
\eqref{MorphismOfPrincipalBundlesIsPullback}
is such a homomorphism over a common
base space $\TopologicalSpace_1$, hence is an isomorphism, thus exhibiting
$\mathrm{P}_1$ as a pullback.
\end{proof}
\medskip
\noindent
{\bf Slices of $G$-orbits.} The existence of local
{\it slices through families of orbits} of group actions
is guaranteed by the Slice Theorem (Prop. \ref{SliceTheorem})
below and serves to ensure or detect local triviality of
plain principal bundles (Cor. \ref{QuotientCoprojectionOfFreeProperActionIsLocallyTrivial}) below
and of equivariant principal bundles (around Ntn. \ref{SlicesInsideBierstonePatches} below).
\begin{definition}[Slices of $G$-orbits]
\label{SliceOfTopologicalGSpace}
For $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \; \mathrm{U} \in \GActionsOnTopologicalSpaces$
and $H \subset G$ a subgroup, an $H$-subspace
\vspace{-2mm}
\begin{equation}
\label{SliceSubspace}
\begin{tikzcd}
\mathrm{S}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\phantom{}\mathpalette\mathclapinternal{
H
}\phantom{}$}"{description}, shift right=1]
\ar[
r,
hook,
"\iota"
]
&
\mathrm{U}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\phantom{}\mathpalette\mathclapinternal{
H
}\phantom{}$}"{description}, shift right=1]
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent
is called a {\it slice} through its $G$-orbit modulo $H$
if its
induction/restriction-adjunct \eqref{HomIsomorphismForRestrictedActionAndInducedAction}
is an isomorphism
\vspace{-3mm}
\begin{equation}
\label{SliceIsomorphism}
\begin{tikzcd}[row sep=-2pt]
G \times_H \mathrm{S}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\phantom{}\mathpalette\mathclapinternal{
G
}\phantom{}$}"{description}, shift right=1]
\ar[
r,
"\sim"{below},
"\tilde \iota"{above}
]
&
\mathrm{U}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\phantom{}\mathpalette\mathclapinternal{
G
}\phantom{}$}"{description}, shift right=1]
\\
\scalebox{.7}{$ {[g , s]} $}
\;\; \ar[
r,
phantom,
"\longmapsto"{description}
]
&
\scalebox{.7}{$ g \cdot s $}
\,.
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent
Specifically, for
$G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \in \GActionsOnTopologicalSpaces$
and $x \in \TopologicalSpace$ a point,
by a {\it slice through $x$} one means
(e.g. {\cite[\S II, Def. 4.1]{Bredon72}})
a slice \eqref{SliceSubspace}
relative to the isotropy group $G_x$ \eqref{StabilizerSubgroupInEquivarianceGroup}
through $x$ of an open $G$-neighborhood $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \; \mathrm{U}_x$ of $x$
\vspace{-2mm}
\begin{equation}
\label{ASliceThroughAPoint}
\begin{tikzcd}[column sep=-1]
x
&\in
&
\mathrm{S}_x
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\phantom{}\mathpalette\mathclapinternal{
G_x
}\phantom{\cdot}$}"{description}, shift right=1]
\ar[
rr,
hook
]
&{\phantom{AA}}&
\mathrm{U}_x
\mathpalette\mathrlapinternal{\,.}
\ar[out=180-66, in=66, looseness=3.5, "\scalebox{.77}{$\phantom{}\mathpalette\mathclapinternal{
G_x
}\phantom{\cdot}$}"{description}, shift right=1]
\end{tikzcd}
\end{equation}
\end{definition}
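\noindent
For orientation, a simple example (not needed in the following): for the rotation action of $SO(2)$ on $\mathbb{R}^2$, any open disk around the origin is a slice through $0$ (there the isotropy group is all of $SO(2)$ and \eqref{SliceIsomorphism} is an identity), while through a point $x \neq 0$, where the isotropy group is trivial, an open radial segment
$\mathrm{S}_x \,=\, \big\{ t \cdot x \,\big\vert\, \vert t - 1 \vert < \epsilon \big\}$ (for $0 < \epsilon < 1$)
is a slice of the open annulus $\mathrm{U}_x \,=\, SO(2) \cdot \mathrm{S}_x$, since passage to polar coordinates exhibits the required equivariant homeomorphism
$SO(2) \times \mathrm{S}_x \xrightarrow{\;\sim\;} \mathrm{U}_x$.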
\begin{proposition}[Slice Theorem ({\cite[Prop. 2.3.1]{Palais61}\cite[Thm. 6.2.7]{Karppinen16}})]
\label{SliceTheorem}
Under the assumption
\ref{ProperEquivariantTopology}
of proper equivariance,
given $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \in \GActionsOnTopologicalSpaces$
then for every $x \in \TopologicalSpace$
there exists a slice through $x$ (Def. \ref{SliceOfTopologicalGSpace}).
\end{proposition}
\begin{remark}[Technical conditions in the slice theorem]
\label{TechnicalConditionsInTheSliceTheorem}
The slice theorem
for {\it compact} Lie group actions
is due to \cite[Thm. 2.1]{Mostow57}\cite[Cor. 1.7.19]{Palais60},
and for proper actions of general Lie groups it is due to \cite{Palais61},
reviewed in \cite{Karppinen16}.
Beware that \cite{Palais61} goes to some length
to further generalize beyond proper actions, which leads to a wealth of technical
conditions that, it seems, have been of rare use in practice.
But, under the assumption \ref{ProperEquivariantTopology}
that all $G$-spaces are locally compact, all these conditions reduce to
properness \cite[Thm 1.2.9]{Palais61}\cite[Rem. 5.2.4]{Karppinen16},
and thus the theorem reduces to the statement in Prop. \ref{SliceTheorem}.
\end{remark}
\begin{corollary}[Quotient coprojection of free proper action is locally trivial
{\cite[\S 4.1]{Palais61}}]
\label{QuotientCoprojectionOfFreeProperActionIsLocallyTrivial}
Under the assumption \ref{ProperEquivariantTopology} of proper equivariance,
the quotient space coprojection
$P \xrightarrow{q} P/G$
of a \emph{free} action $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, P$ admits local sections.
\end{corollary}
\medskip
\noindent
{\bf Equivariant open covers.}
\begin{definition}[Properly equivariant open cover]
\label{ProperEquivariantOpenCover}
Given $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \,\in\, \Actions{G}(\kTopologicalSpaces)$
\eqref{GActionOnTopologicalSpaces},
we say that an open cover
(Ex. \ref{OpenCoversAreEffectiveEpimorphisms}) of the underlying space
\vspace{-1mm}
\begin{equation}
\label{OpenCoverToBeEquivariant}
\widehat{\TopologicalSpace}
\,=\,
\underset{i \in I}{\sqcup}
\TopologicalPatch_i
\twoheadrightarrow
\TopologicalSpace
\end{equation}
\vspace{-1mm}
\noindent
is
\noindent
{\bf (i)}
{\it equivariant} if the $G$-action on $\TopologicalSpace$ pulls back to
$\widehat{\TopologicalSpace}$
\vspace{-2mm}
\begin{equation}
\label{AProperEquivariantOpenCover}
\begin{tikzcd}
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \; \widehat{\TopologicalSpace}
\ar[r, ->>, "p"]
&
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace
\end{tikzcd}
\;\;\;
\in
\;
\Actions{G}(\kTopologicalSpaces) \;;
\end{equation}
\vspace{-2mm}
\noindent
{\bf (ii)}
{\it regular}
if there is a $G$-action on the index set such that
\begin{equation}
\label{RegularityConditionsOnEquivariantOpenCover}
\begin{aligned}
{\bf (a)}
\quad
&
\underset{i,j \in I}{\forall}
\Bigg(
\begin{tikzcd}[row sep=7pt]
\TopologicalPatch_i
\ar[r, "\sim"{swap}]
\ar[d, hook]
&
\TopologicalPatch_{g \cdot j}
\ar[d, hook]
\\
\TopologicalSpace
\ar[r, -, shift left=1pt]
\ar[r, -, shift right=1pt]
&
\TopologicalSpace
\end{tikzcd}
\;\;\;\;
\Rightarrow
\;\;\;\;
i \,=\, j
\Bigg);
\\
{\bf (b)}
\quad
&
\underset{i \in I}{\forall}
\,\,
\underset{g \in G}{\forall}
\;
\bigg(
U_i
\cap
g \cdot U_i
\,\neq\,
\varnothing
\;\;\Rightarrow\;\;
\begin{tikzcd}[row sep=7pt]
\TopologicalPatch_i
\ar[r, "\sim"{swap}]
\ar[d, hook]
&
\TopologicalPatch_{i}
\ar[d, hook]
\\
\TopologicalSpace
\ar[r, "g", "\sim"{swap}]
&
\TopologicalSpace
\end{tikzcd}
\bigg);
\\
{\bf (c)}
\quad
&
\underset{n \in \mathbb{N}}{\forall}
\;
\underset{
{i_0, \cdots, i_n \in I}
\atop
{g_0, \cdots, g_n \,\in\, G }
}{\forall}
\left(
\begin{array}{rcl}
U_{i_0} \cap \cdots \cap U_{i_n}
& \neq
& \varnothing,
\\
g_0 \cdot U_{i_0} \cap \cdots \cap g_n \cdot U_{i_n}
& \neq
& \varnothing
\end{array}
\;\;\Rightarrow\;\;
\underset{g \in G}{\exists}
\;
\underset{0 \leq k \leq n}{\forall}
\;
g \cdot U_{i_k} \,=\, g_k \cdot U_{i_k}
\right);
\end{aligned}
\end{equation}
\noindent
{\bf (iii)}
{\it properly equivariant}
if, in addition, each $H$-fixed locus of $\widehat{\TopologicalSpace}$
is an open cover of that of $\TopologicalSpace$:
\vspace{-2mm}
\begin{equation}
\label{EquivariantOpenCoverRestrictedToFixedLoci}
\underset{
H
\underset{\mathpalette\mathclapinternal{\mathrm{clsd}}}{\subset}
G
}{\forall}
\;\;\;
\begin{tikzcd}
\widehat{\TopologicalSpace}^H
\ar[rr, ->>, "p^H", "\mbox{\tiny \rm open cover}"{swap}]
&&
\TopologicalSpace^H
;
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent
{\bf (iv)}
properly equivariantly {\it good}
if all the restrictions
\eqref{EquivariantOpenCoverRestrictedToFixedLoci}
are good open covers
(Def. \ref{GoodOpenCovers}).
\end{definition}
\begin{proposition}[Smooth $G$-manifolds admit properly equivariant regular good open covers]
\label{SmoothGManifoldsAdmitProperlyEquivariantGoodOpenCovers}
At least for
\noindent $G \,\in\, \Groups(\FiniteSets) \xhookrightarrow{\Groups(\Discrete)}
\Groups(\kTopologicalSpaces)$,
every smooth $G$-manifold
$
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace
\,\in\,
\Actions{G}(\SmoothManifolds)
\xhookrightarrow{\;}
\Actions{G}(\kTopologicalSpaces)
$
admits a regular properly equivariant and good open cover
(Def. \ref{ProperEquivariantOpenCover}).
\end{proposition}
\begin{proof}
This follows with the
equivariant triangulation theorem
\cite[Thm. 3.1]{Illman72}\cite{Illman83}; see \cite[Thm. 2.11]{Yang14}.
\end{proof}
\begin{remark}
\label{StabilizerSubgroupOfPointAlsoFixedIndexOfPatchInRegularEquivariantOpenCover}
If an equivariant open cover \eqref{EquivariantOpenCover}
is regular \eqref{RegularityConditionsOnEquivariantOpenCover}
then for $x \,\in\, \TopologicalPatch_i \xhookrightarrow{\;} \TopologicalSpace$
the
stabilizer group $G_x \,\coloneqq\, \mathrm{Stab}_G(x)$
of $x \,\in\, X$ also fixes the index $i$:
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=small, column sep=large]
\mathpalette\mathllapinternal{x \in \;}
\TopologicalPatch_i
\ar[r, "\sim"{swap}]
\ar[d, hook]
&
\TopologicalPatch_i
\ar[d, hook]
\\
\TopologicalSpace
\ar[r, "g \,\in\, G_x"]
&
\TopologicalSpace
\,.
\end{tikzcd}
$$
\end{remark}
\section{$G$-Actions on topological groupoids}
\label{GActionsOnTopologicalGroupoids}
The theory of {\it universal} equivariant bundles turns out to
be most naturally formulated
in the language not just of topological spaces equipped with
$G$-actions, but of topological {\it groupoids} equipped with $G$-actions.
This observation, which is due to \cite{MurayamaShimakawa95}
and was amplified again more recently in \cite{GuillouMayMerling17},
is one that we turn to in \cref{ConstructionOfUniversalEquivariantPrincipalBundles} and then especially in \cref{InCohesiveInfinityStacks} below.
Here we recall and develop some basics of equivariant topological groupoids
that will make the theory of universal equivariant bundles
in \cref{ConstructionOfUniversalEquivariantPrincipalBundles}
transparent and let it run smoothly.
(The material here is not needed for \cref{EquivariantPrincipalTopologicalBundles}.)
\medskip
\noindent
{\bf Topological groupoids.}
\begin{notation}[Topological groupoids]
\label{TopologicalGroupoids}
We write
$\TopologicalGroupoids$ for the
strict (2,1)-category (Ntn. \ref{Strict2Categories})
of groupoid objects internal (Ntn. \ref{Internalization})
to
$\kTopologicalSpaces$ \eqref{CategoryOfTopologicalSpaces},
hence of {\it topological groupoids}
(\cite{Ehresmann59}, survey in \cite[\S II]{Mackenzie87},
exposition in \cite[p. 6]{Weinstein96}):
\noindent
{\bf (i)} Its objects are diagrams of topological spaces
\vspace{-4mm}
\begin{equation}
\label{DiagramForTopologicalGroupoid}
\begin{tikzcd}[row sep=1pt, column sep=30pt]
\mathpalette\mathclapinternal{
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
space of composable
\\
pairs of morphisms
\end{tabular}
}
}
&[-16pt]
\mathpalette\mathclapinternal{
\mbox{
\tiny
\color{greenii}
\bf
\begin{tabular}{c}
composition
\\
map
\end{tabular}
}
}
&
\quad
\mathpalette\mathclapinternal{
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
space of
\\
morphisms
\end{tabular}
}
}
\quad
&
\mathpalette\mathclapinternal{
\mbox{
\tiny
\color{greenii}
\bf
\begin{tabular}{c}
source, target \& unit
\\
maps
\end{tabular}
}
}
&
\quad
\mathpalette\mathclapinternal{
\mbox{
\tiny
\color{darkblue}
\bf
\begin{tabular}{c}
space of
\\
objects
\end{tabular}
}
}
\\
(\TopologicalSpace_1)
\,
{}_{t}\!\!\underset{\TopologicalSpace_0}{\times_s}
(\TopologicalSpace_1)
\ar[
rr,
"\circ"{pos=.4}
]
&&
\quad
\TopologicalSpace_1
\ar[out=180-60+180, in=60+180, looseness=3.0, "\scalebox{.77}{$\mathpalette\mathclapinternal{
\mathpalette\mathllapinternal{
\mbox{
\tiny
\color{greenii}
\bf
\begin{tabular}{c}
inversion
\\
map
\end{tabular}
}
\!\!\!
}
(-)^{-1}
}$}"{below}]
\ar[
rr,
shift left=5pt,
"{s}"{above}
]
\ar[
rr,
shift right=5pt,
"{t}"{below}
]
\quad
&&
\quad
\TopologicalSpace_0
\ar[
ll,
"{\mathrm{e}}"{description}
]
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent
such that the composition operation $\circ$ is associative, unital
with respect to $\mathrm{e}$ and with inverses given by $(-)^{-1}$.
We will mostly denote such an object by
\vspace{-.4cm}
$$
\TopologicalSpace_1
\rightrightarrows
\TopologicalSpace_0
\;\;\;
\in
\;
\TopologicalGroupoids
\,,
$$
\vspace{-1mm}
\noindent
with the rest of the structure understood from the given context.
\vspace{0mm}
\noindent
{\bf (ii)} Its morphisms are
{\it continuous functors}
hence continuous functions $F_0$, $F_1$
compatible with all this structure:
\vspace{-2mm}
\begin{equation}
\label{ContinuousFunctors}
\begin{tikzcd}[row sep=14pt, column sep=30pt]
(\TopologicalSpace_1)
\,
{}_{t}\!\!\underset{\TopologicalSpace_0}{\times_s}
(\TopologicalSpace_1)
\;\;
\ar[
r,
"\circ"
]
\ar[
d,
"\scalebox{0.8}{$
F_1 \, \underset{F_0}{{}_t\!\!\times_s} \, F_1
$}"{left}
]
&
\quad
\TopologicalSpace_1
\ar[out=180-60, in=60, looseness=3.0, "\scalebox{.77}{$\mathpalette\mathclapinternal{
(-)^{-1}
}$}"{above}]
\quad
\ar[
rr,
shift left=5pt,
"{s}"{above}
]
\ar[
rr,
shift right=5pt,
"{t}"{below}
]
\ar[
d,
"{F_1}"
]
&&
\;\;
\TopologicalSpace_0
\ar[
ll,
"{\mathrm{e}}"{description}
]
\ar[
d,
"F_0"
]
\\
(\mathrm{Y}_1)
\,
{}_{t}\!\!\underset{\TopologicalSpace_0}{\times_s}
(\mathrm{Y}_1)
\;\;
\ar[
r,
"\circ"
]
&
\quad
\mathrm{Y}_1
\ar[in=180-60+180, out=60+180, looseness=3.0, "\scalebox{.77}{$\mathpalette\mathclapinternal{
(-)^{-1}
}$}"{below}]
\quad
\ar[
rr,
shift left=5pt,
"{s}"{above}
]
\ar[
rr,
shift right=5pt,
"{t}"{below}
]
&&
\;\;
\mathrm{Y}_0
\ar[
ll,
"{\mathrm{e}}"{description}
]
\end{tikzcd}
\end{equation}
\vspace{-3mm}
\noindent
{\bf (iii)}
Its {\it 2-morphisms} $\eta \,\colon\, F \Rightarrow F'$
are {\it continuous natural transformations}, hence
continuous functions
$\eta(-) \,\colon\, \TopologicalSpace_0 \xrightarrow{\;} \mathrm{Y}_1$
making all the naturality squares commute:
\vspace{-2mm}
\begin{equation}
\label{NaturalitySquareForTopologicalGroupoids}
\begin{tikzcd}
(
\TopologicalSpace_1
\rightrightarrows
\TopologicalSpace_0
)
\ar[
rr,
bend left=30,
"{F}"{above},
" "{below,name=s}
]
\ar[
rr,
bend right=30,
"{F'}"{below},
" "{above,name=t}
]
&&
(
\mathrm{Y}_1
\rightrightarrows
\mathrm{Y}_0
)
%
\ar[
from=s,
to=t,
Rightarrow,
"\eta"{xshift=1pt}
]
\end{tikzcd}
{\phantom{AA}}
\colon
{\phantom{AA}}
\begin{tikzcd}[row sep=16pt]
x
\ar[
dd,
"{\gamma}"
]
&[-15pt]&[-15pt]
F(x)
\ar[
rr,
"{ \eta(x) }"
]
\ar[
dd,
"F(\gamma)"
]
&&
F'(x)
\ar[
dd,
"F'(\gamma)"
]
\\[-16pt]
&
\overset{
\mbox{
\tiny
\rm
cts.
}
}{\longmapsto}
&
\\[-16pt]
x'
&&
F(x')
\ar[
rr,
"\eta(x')"
]
&&
F'(x')
\mathpalette\mathrlapinternal{\,.}
\end{tikzcd}
\end{equation}
\vspace{-2mm}
\noindent
{\bf (iv)}
An {\it isomorphism of topological groupoids}
$(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0) \;\simeq\; (\mathrm{Y}_1 \rightrightarrows \mathrm{Y}_0)$
is an isomorphism in the underlying 1-category (ignoring the 2-morphisms).
\noindent
{\bf (v)}
An {\it equivalence of topological groupoids} is a pair of
morphisms going back and forth between them, together with
2-morphisms \eqref{NaturalitySquareForTopologicalGroupoids}
relating their composites to the identity morphism:
\vspace{-2mm}
\begin{equation}
\label{EquivalenceOfTopologicalGroupoids}
\begin{tikzcd}
(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0)
\,\underset{\mathrm{hmtpy}}{\simeq}\,
(\mathrm{Y}_1 \rightrightarrows \mathrm{Y}_0)
\qquad
\Leftrightarrow
\qquad
(
\TopologicalSpace_1
\rightrightarrows
\TopologicalSpace_0
)
\ar[
r,
shift right=2pt,
"R"{below}
]
&
(
\mathrm{Y}_1
\rightrightarrows
\mathrm{Y}_0
)
\mathpalette\mathrlapinternal{\,,}
\ar[
l,
shift right=2pt,
"L"{above}
]
\end{tikzcd}
\;\;\;
L \circ R \Rightarrow \mathrm{id}\,,
\;\;\;\;\;
\mathrm{id} \Rightarrow R \circ L
\,.
\end{equation}
\end{notation}
\begin{example}[Topological spaces as constant topological groupoids]
\label{TopologicalSpacesAsTopologicalGroupoids}
Each $\TopologicalSpace \,\in\, \kTopologicalSpaces$ \eqref{CategoryOfTopologicalSpaces}
becomes a topological groupoid (Ntn. \ref{TopologicalGroupoids})
\vspace{-3mm}
$$
\ConstantGroupoid(\TopologicalSpace)
\;\coloneqq\;
\big(
\TopologicalSpace
\underoverset
{\mathrm{id}}
{\mathrm{id}}
{\rightrightarrows}
\TopologicalSpace
\big)
\;\;\;
\in
\;
\TopologicalGroupoids
$$
\vspace{-2mm}
\noindent by taking all structure maps \eqref{DiagramForTopologicalGroupoid}
to be the identity on $\TopologicalSpace$. This construction
constitutes a full subcategory inclusion
\vspace{-2mm}
\begin{equation}
\label{FullInclusionOfTopologicalSpacesIntoTopologicalGroupoids}
\kTopologicalSpaces \;
\xhookrightarrow{\;\; \ConstantGroupoid \;\;}
\;
\Groupoids(\kTopologicalSpaces)
\mathpalette\mathrlapinternal{\,.}
\end{equation}
\end{example}
\begin{example}[Topological pair groupoid]
\label{TopologicalPairGroupoid}
For $\TopologicalSpace \,\in\, \kTopologicalSpaces$, its
{\it chaotic groupoid} or
{\it pair groupoid}
is the topological groupoid (Ntn. \ref{TopologicalGroupoids})
whose space of morphisms is the product of $\TopologicalSpace$ with itself
(the space of pairs of elements of $\TopologicalSpace$),
with source and target given by the two canonical projection maps
\vspace{-2mm}
$$
\CodiscreteGroupoid(\TopologicalSpace)
\;\coloneqq\;
\big(
\TopologicalSpace \times \TopologicalSpace
\underoverset
{\mathrm{pr}_2}
{\mathrm{pr}_1}
{\rightrightarrows}
\TopologicalSpace
\big)
\;\;\;
\in
\;
\TopologicalGroupoids
$$
\vspace{-2mm}
\noindent
and equipped with the unique admissible composition operation:
\vspace{-2mm}
$$
(\TopologicalSpace \times \TopologicalSpace)
\underset{
\TopologicalSpace
}{
{}_{t}\!\times_s
}
(\TopologicalSpace \times \TopologicalSpace)
\;=\;
\TopologicalSpace \times \TopologicalSpace \times \TopologicalSpace
\xrightarrow{ ( \mathrm{pr}_1 , \, \mathrm{pr}_3 ) }
\TopologicalSpace \times \TopologicalSpace
\,.
$$
\vspace{-2mm}
\noindent
This construction constitutes {\it another} full subcategory inclusion
\vspace{-2mm}
$$
\kTopologicalSpaces \;
\xhookrightarrow{\; \CodiscreteGroupoid \;} \;
\Groupoids(\kTopologicalSpaces)
\,.
$$
\end{example}
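\noindent
Note that, for inhabited $\TopologicalSpace$, the pair groupoid is equivalent, in the sense of \eqref{EquivalenceOfTopologicalGroupoids}, to the constant groupoid (Ex. \ref{TopologicalSpacesAsTopologicalGroupoids}) on the point: the unique morphism $\CodiscreteGroupoid(\TopologicalSpace) \xrightarrow{\;} \ConstantGroupoid(\ast)$ and the morphism $\ConstantGroupoid(\ast) \xrightarrow{\;} \CodiscreteGroupoid(\TopologicalSpace)$ given by any choice of point $x_0 \in \TopologicalSpace$ constitute such a pair; one of the required 2-morphisms \eqref{NaturalitySquareForTopologicalGroupoids} is the identity, and the other is the continuous function sending $x$ to the unique morphism between $x_0$ and $x$, whose naturality is automatic since any two objects of the pair groupoid are connected by a unique morphism.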
\begin{definition}[Space of components of a topological groupoid]
\label{SpaceOfConnectedComponentsOfTopologicalGroupoid}
For $(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0)$
a topological groupoid (Ntn. \ref{TopologicalGroupoids}),
its {\it space of connected components}
(or: {\it 0-truncation})
\vspace{-2mm}
$$
\tau_0
(
\TopologicalSpace_1
\rightrightarrows
\TopologicalSpace_0
)
\;\;\;
\in
\kTopologicalSpaces
$$
\vspace{-2mm}
\noindent is the quotient space by the source/target relation, hence
the coequalizer of its source and target maps:
\vspace{-2mm}
\begin{equation}
\begin{tikzcd}
\TopologicalSpace_1
\ar[
r,
shift left=3pt,
"{s}"{above}
]
\ar[
r,
shift right=3pt,
"{t}"{below}
]
&
\TopologicalSpace_0
\ar[
rr,
"{\mathrm{coeq}(s,t)}"{above}
]
&&
\tau_0(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0)
\,.
\end{tikzcd}
\end{equation}
\end{definition}
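\noindent
For instance, unwinding the coequalizer: the space of components of a constant groupoid (Ex. \ref{TopologicalSpacesAsTopologicalGroupoids}) is the underlying space itself,
$\tau_0\big(\ConstantGroupoid(\TopologicalSpace)\big) \,\simeq\, \TopologicalSpace$,
while for inhabited $\TopologicalSpace$ the pair groupoid (Ex. \ref{TopologicalPairGroupoid}) has a single connected component,
$\tau_0\big(\CodiscreteGroupoid(\TopologicalSpace)\big) \,\simeq\, \ast$.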
All these basic notions are unified as follows:
\begin{proposition}[Adjunctions between topological groupoids and topological spaces]
\label{AdjunctionBetweenTopologicalGroupoidsAndTopologicalSpaces}
The 1-category of topological groupoids (Ntn. \ref{TopologicalGroupoids})
is related to that of topological spaces (Ntn. \ref{CompactlyGeneratedTopologicalSpaces})
by a quadruple of adjoint functors (Ntn. \ref{AdjointFunctors})
\vspace{-2mm}
$$
\begin{tikzcd}
\Groupoids(\kTopologicalSpaces)
\ar[
rr,
shift left=4*8pt,
"{
\ConnectedGroupoidComponents
}"{description}
]
\ar[
rr,
"{
\SpaceOfObjects
}"{description}
]
&&
\kTopologicalSpaces
\mathpalette\mathrlapinternal{\,,}
\ar[
ll,
hook',
shift right=2*8pt,
"{
\ConstantGroupoid
}"{description}
]
\ar[
ll,
hook',
shift left=2*8pt,
"{
\CodiscreteGroupoid
}"{description}
]
\ar[
ll,
phantom,
shift right = 3*8pt,
"{\scalebox{.6}{$\bot$}}"
]
\ar[
ll,
phantom,
shift right = 1*8pt,
"{\scalebox{.6}{$\bot$}}"
]
\ar[
ll,
phantom,
shift left = 1*8pt,
"{\scalebox{.6}{$\bot$}}"
]
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
where
\vspace{-4mm}
\begin{itemize}
\setlength\itemsep{-4pt}
\item
$\ConstantGroupoid$ assigns constant groupoids
in the sense of
Ex. \ref{TopologicalSpacesAsTopologicalGroupoids};
\item
$\CodiscreteGroupoid$ assigns pair groupoids
in the sense of
Ex. \ref{TopologicalPairGroupoid};
\item
$\SpaceOfObjects$ assigns spaces of objects \eqref{DiagramForTopologicalGroupoid};
\item
$\ConnectedGroupoidComponents$ assigns spaces of
{\it connected components} (Def. \ref{SpaceOfConnectedComponentsOfTopologicalGroupoid}).
\end{itemize}
\vspace{-.2cm}
\end{proposition}
\begin{proof}
The hom-isomorphisms \eqref{FormingAdjuncts}
are readily seen by unwinding the definitions:
\noindent
{\bf (1)}
For $\ConnectedGroupoidComponents \dashv \ConstantGroupoid$,
the natural bijection
\vspace{-2mm}
$$
\kTopologicalSpaces
\big(
\ConnectedGroupoidComponents(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0)
,\,
\mathrm{Y}
\big)
\;\simeq\;
\Groupoids(\kTopologicalSpaces)
\big(
(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0)
,\,
\ConstantGroupoid(\mathrm{Y})
\big)
$$
\vspace{-2mm}
\noindent
exhibits the universal property of the coequalizer:
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=14pt]
\TopologicalSpace_1
\ar[rr]
\ar[
d,
shift left=3pt,
"{s}"{right}
]
\ar[
d,
shift right=3pt,
"{t}"{left}
]
&&
\mathrm{Y}
\ar[
d,-,
shift left=1pt
]
\ar[
d,-,
shift right=1pt
]
\\
\TopologicalSpace_0
\ar[rr]
\ar[
d,
"{\mathrm{coequ}(s,t)}"{left}
]
&&
\mathrm{Y}
\\
\ConnectedGroupoidComponents(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0)
\ar[
urr,
dashed,
"{\exists !}"{below}
]
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
{\bf (2)} For $\ConstantGroupoid \dashv \SpaceOfObjects$,
the natural bijection
\vspace{-2mm}
$$
\Groupoids(\kTopologicalSpaces)
\big(
\ConstantGroupoid(\TopologicalSpace)
,\,
(\mathrm{Y}_1 \rightrightarrows \mathrm{Y}_0)
\big)
\;\simeq\;
\kTopologicalSpaces
(
\TopologicalSpace
,\,
\mathrm{Y}_0
)
$$
\vspace{-2mm}
\noindent
reflects the unitality of the groupoid composition:
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=small]
\TopologicalSpace
\ar[
rr,
dashed,
"{\exists !}"{above}
]
&&
\mathrm{Y}_1
\\
\TopologicalSpace
\ar[
rr
]
\ar[
u,-,
shift left=1pt,
"{\mathrm{e}}"{left}
]
\ar[
u,-,
shift right=1pt
]
&&
\mathrm{Y}_0
\ar[
u,
"{e}"
]
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
{\bf (3)}
For $\SpaceOfObjects \dashv \CodiscreteGroupoid$, the natural bijection
\vspace{-2mm}
$$
\kTopologicalSpaces
(
\TopologicalSpace_0
,\,
\mathrm{Y}
)
\;\simeq\;
\Groupoids(\kTopologicalSpaces)
\big(
(\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0)
,\,
\CodiscreteGroupoid(\mathrm{Y})
\big)
$$
\vspace{-2mm}
\noindent
reflects the universal property of the Cartesian product:
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=small]
\TopologicalSpace_1
\ar[
rr,
dashed,
"{\exists !}"
]
\ar[
d,
shift right=3pt,
"{s}"{left}
]
\ar[
d,
shift left=3pt,
"{\,t}"{right}
]
&&
\mathrm{Y} \times \mathrm{Y}
\ar[
d,
shift right=3pt,
"{\mathrm{pr}_1}"{left}
]
\ar[
d,
shift left=3pt,
"{\, \mathrm{pr}_2}"{right}
]
\\
\TopologicalSpace_0
\ar[
rr
]
&&
\mathrm{Y}
\mathpalette\mathrlapinternal{\,.}
\end{tikzcd}
$$
$\,$
\vspace{-1cm}
\end{proof}
We continue to list some classes of examples of topological groupoids
that we need later on.
\begin{example}[Topological action groupoid]
\label{TopologicalActionGroupoid}
For $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \TopologicalSpace \,\in\, \GActionsOnTopologicalSpaces$
\eqref{GActionsOnTopologicalSpaces},
the corresponding {\it action groupoid} is the topological groupoid
(Ntn. \ref{TopologicalGroupoids}) given by
\vspace{-3mm}
\begin{equation}
\label{LeftActionGroupoid}
\hspace{-5mm}
\begin{tikzcd}[row sep=0pt]
\TopologicalSpace \times G^{\mathrm{op}} \times G^{\mathrm{op}}
\ar[
rr,
"{
\mathrm{id}
\times
\scalebox{.8}{$($}(-) \cdot (-) \scalebox{1.1}{$)$}
}"
]
&&
\;\;
\TopologicalSpace \times G^{\mathrm{op}}
\;\;
\ar[
rr,
shift left=13pt,
"{
\mathrm{pr}_1
}"
]
\ar[
rr,
shift right=5pt,
"{
\scalebox{0.7}{$(-) \cdot (-) $}
}"{description}
]
&&
\quad \TopologicalSpace
\ar[
ll,
shift right=5pt,
"{
\mathrm{id} \times e
}"{description}
]
\\
\scalebox{0.8}{$ (x, g_1, g_2) $}
&\longmapsto&
\scalebox{0.8}{$ (x, g_2 \cdot g_1) $}
&\;\;\;\;\; \longmapsto&
\quad \scalebox{0.8}{$ (g_2 \cdot g_1 \cdot x) $}
\end{tikzcd}
\quad
\in
\;
\TopologicalGroupoids
\end{equation}
\vspace{-1mm}
\noindent
with composition given by the {\it reverse} of the group operation in $G$.
\end{example}
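\noindent
In particular, by Def. \ref{SpaceOfConnectedComponentsOfTopologicalGroupoid}, the space of connected components of an action groupoid is the orbit space with its quotient topology,
\vspace{-2mm}
$$
\tau_0
\big(
\TopologicalSpace \times G^{\mathrm{op}}
\rightrightarrows
\TopologicalSpace
\big)
\;\simeq\;
\TopologicalSpace/G
\,,
$$
\vspace{-2mm}
\noindent
since coequalizing the projection with the action map identifies each point with every other point on its $G$-orbit.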
\begin{example}[Topological delooping groupoid]
\label{TopologicalDeloopingGroupoid}
For $\Gamma \,\in\, \Groups(\kTopologicalSpaces)$,
its {\it delooping groupoid} is the
topological left action groupoid (Ex. \ref{TopologicalActionGroupoid})
of the unique $\Gamma^{\mathrm{op}}$-action
$\Gamma^{\mathrm{op}} \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \ast \,\in\, \Actions{\Gamma}(\kTopologicalSpaces)$
on the point space:
\vspace{-1mm}
$$
\mathbf{B}\Gamma
\;\coloneqq\;
(
\ast \times \Gamma
\rightrightarrows
\ast
)
\;=\;
(
\Gamma
\rightrightarrows
\ast
)
\;\;\;
\in
\;
\TopologicalGroupoids
\,,
$$
\vspace{-1mm}
\noindent
with composition given by the group operation in $\Gamma$:
\vspace{-2mm}
$$
\begin{tikzcd}
\bullet
\ar[
r,
"{g_1}"
]
\ar[
rr,
rounded corners,
to path={
-- ([yshift=+5pt]\tikztostart.north)
--node[above]{
\scalebox{.7}{$g_1 \cdot g_2$}
}
([yshift=+5pt]\tikztotarget.north)
-- (\tikztotarget.north)}
]
&
\bullet
\ar[
r,
"{g_2}"
]
&
\bullet
\end{tikzcd}
\;\;\;
\in
\;
\mathbf{B}\Gamma
\,.
$$
\end{example}
\begin{example}[Action groupoid of group multiplication is pair groupoid]
\label{ActionGroupoidOfLeftGroupMultiplicationIsPairGroupoid}
For $G \,\in\, \Groups(\kTopologicalSpaces)$,
the topological pair groupoid (Ex. \ref{TopologicalPairGroupoid})
on its underlying topological space
is
isomorphic to the action groupoid (Ex. \ref{TopologicalActionGroupoid})
of both the left and inverse-right action of $G^{\mathrm{op}}$
on itself \eqref{LeftMultiplicationAndInverseRightMultiplicationActionsOnATopologicalGroup}
\vspace{-2mm}
$$
\begin{tikzcd}[row sep=-5pt]
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\def.9{.9}
\begin{tabular}{c}
left multiplication
\\
action groupoid
\end{tabular}
}
}
}{
(
G \times G
\rightrightarrows G
)
}
&&
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\def.9{.9}
\begin{tabular}{c}
pair groupoid
\end{tabular}
}
}
}{
(
G \times G
\rightrightarrows G
)
}
\ar[
ll,
"\sim"{above, yshift=-1pt}
]
\ar[
rr,
"\sim"{above, yshift=-1pt}
]
&&
\overset{
\mathpalette\mathclapinternal{
\raisebox{3pt}{
\tiny
\color{darkblue}
\bf
\def.9{.9}
\begin{tabular}{c}
right multiplication
\\
action groupoid
\end{tabular}
}
}
}{
(
G \times G
\rightrightarrows G
)
}
\\
\scalebox{0.7}{$ \big(
g_1
\xrightarrow{ ( g_1, \, g_1 \cdot g_2^{-1} ) }
g_2
\big)
$}
&\longmapsfrom&
\scalebox{0.7}{$ \big(
g_1
\xrightarrow{ (g_1, g_2) }
g_2
\big)
$}
&\longmapsto&
\scalebox{0.7}{$ \big(
g_1
\xrightarrow{ ( g_1, \, g_1^{-1} \cdot g_2 ) }
g_2
\big)
$}
\end{tikzcd}
$$
\end{example}
\begin{example}[Topological mapping groupoid]
\label{TopologicalFunctorGroupoid}
Given
a pair of topological groupoids (Ntn. \ref{TopologicalGroupoids}),
their {\it mapping groupoid} or {\it functor groupoid}
(e.g. \cite[\S 2]{NiefieldPronk19})
\vspace{-2mm}
\begin{equation}
\label{MappingGroupoidOfTopologicalGroupoids}
\mathrm{Maps}
(
\TopologicalSpace_1 \rightrightarrows \TopologicalSpace_0,
\,
\mathrm{Y}_1 \rightrightarrows \mathrm{Y}_0
)
\;\;\;
\in
\;
\TopologicalGroupoids
\end{equation}
\vspace{-2mm}
\noindent
has as object space the subspace of the
product of mapping spaces
$\mathrm{Maps}(\TopologicalSpace_0,\mathrm{Y}_0) \times \mathrm{Maps}(\TopologicalSpace_1, \mathrm{Y}_1)$
\eqref{MappingSpace}
on the elements that satisfy the functoriality condition \eqref{ContinuousFunctors},
and
as morphism space the subspace on the product of that space with the
mapping space $\mathrm{Maps}(\TopologicalSpace_0, \mathrm{Y}_1)$
on those elements which satisfy the naturality condition \eqref{NaturalitySquareForTopologicalGroupoids}.
This construction is a $\Groupoids$-enriched functor in both arguments,
contravariantly so in the first:
\vspace{-2mm}
$$
\mathrm{Maps}
(- ,\, - )
\;:\;
\begin{tikzcd}
\Groupoids(\kTopologicalSpaces)^{\mathrm{op}}
\times
\Groupoids(\kTopologicalSpaces)
\ar[r]
&
\Groupoids(\kTopologicalSpaces)
\,.
\end{tikzcd}
$$
\vspace{-2mm}
\noindent
With the first argument fixed, it constitutes a
$\Groupoids$-enriched
right adjoint (Ntn. \ref{AdjointFunctors})
to the product functor (e.g. \cite[Prop. 3.1]{NiefieldPronk19}):
\vspace{-2mm}
\begin{equation}
\label{InternalHomAdjunctionForTopologicalGroupoids}
\begin{tikzcd}[column sep=40pt]
\Groupoids(\kTopologicalSpaces)
\ar[
rr,
shift right=6pt,
"{
\scalebox{.7}{$ \mathrm{Maps}
\left(
(\TopologicalSpace_1 \rightrightarrows \, \TopologicalSpace_0)
,\,
-
\right)
$}
}"{below}
]
\ar[
rr,
phantom,
"{\scalebox{.7}{$\bot$}}"
]
&&
\Groupoids(\kTopologicalSpaces)
\mathpalette\mathrlapinternal{\,.}
\ar[
ll,
shift right=6pt,
"{
\scalebox{.7}{$ (\TopologicalSpace_1 \rightrightarrows \, \TopologicalSpace_0)
\times
(-)$}
}"{above}
]
\end{tikzcd}
\end{equation}
\end{example}
\begin{example}[Mapping groupoid between delooping groupoids]
\label{MappingGroupoidBetweenDeloopingGroupoids}
For $G, \Gamma \,\in\, \Groups(\kTopologicalSpaces)$,
the mapping groupoid (Ex. \ref{TopologicalFunctorGroupoid})
between their topological delooping groupoids
(Ex. \ref{TopologicalDeloopingGroupoid})
is isomorphic to the topological action groupoid
(Ex. \ref{TopologicalActionGroupoid})
of the adjoint action of $\Gamma$
on the hom-set of group homomorphisms $G \xrightarrow{\;} \Gamma$
(topologized as a subspace of $\mathrm{Maps}(G,\Gamma)$):
\vspace{-5mm}
\begin{equation}
\label{MappingGroupoidOfDeloopingGroupoidsIsAdjointActionGroupoid}
\hspace{2cm}
\mathrm{Maps}
(
G \rightrightarrows \ast
,\,
\Gamma \rightrightarrows \ast
)
\;\;
\simeq
\;\;
\big(
\Groups(G,\, \Gamma)
\times
\Gamma^{\mathrm{op}}
\; \rightrightarrows \;
\Groups(G,\, \Gamma)
\big)
\,.
\end{equation}
\vspace{-6mm}
$$
\hspace{-2cm}
\begin{tikzcd}[column sep=large]
\bullet
\ar[
d,
"g_1"
]
\ar[
dd,
rounded corners,
to path={
-- ([xshift=-10pt]\tikztostart.west)
--node[below, sloped]{\rotatebox{0}{\scalebox{.7}{$
g_1 \cdot g_2
$}}} ([xshift=-10pt]\tikztotarget.west)
-- (\tikztotarget.west)}
]
&[-10pt]&[-10pt]
\bullet
\ar[
d,
"\phi(g_1)"
]
\ar[
dd,
rounded corners,
to path={
-- ([xshift=-10pt]\tikztostart.west)
--node[below, sloped]{\rotatebox{0}{\scalebox{.7}{$
\phi(g_1 \cdot g_2)
$}}} ([xshift=-10pt]\tikztotarget.west)
-- (\tikztotarget.west)}
]
\ar[
rr,
"\gamma"
]
&&
\bullet
\ar[
d,
"\phi'(g_1)"{left}
]
\\
\bullet
\ar[
d,
"g_2"
]
&&
\bullet
\ar[
d,
"\phi(g_2)"
]
\ar[
rr,
"\gamma"
]
&&
\bullet
\ar[
d,
"\phi'(g_2)"{left}
]
\\
\bullet
&&
\bullet
\ar[
rr,
"\gamma"
]
&&
\bullet
\end{tikzcd}
\;\;\;\;\;\;\longmapsto\;\;\;\;\;\;
\scalebox{0.7}{$ \big(
\phi
\xrightarrow{ (\gamma,\phi) }
\phi'
=
\mathrm{ad}_\gamma \circ \phi
\big)
{\phantom{AAAAAAAAAAAAA}}
$}
$$
\end{example}
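\noindent
In view of Def. \ref{SpaceOfConnectedComponentsOfTopologicalGroupoid}, it follows in particular that the space of connected components of this mapping groupoid is the quotient space of $\Groups(G,\,\Gamma)$ by the conjugation action of $\Gamma$:
\vspace{-2mm}
$$
\tau_0\,
\mathrm{Maps}
(
G \rightrightarrows \ast
,\,
\Gamma \rightrightarrows \ast
)
\;\simeq\;
\Groups(G,\,\Gamma)/\Gamma
\,.
$$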
\medskip
\noindent
{\bf Crossed homomorphisms and first non-abelian group cohomology.}
The classical notion of {\it crossed homomorphisms}, recalled as
Def. \ref{CrossedHomomorphismsAndFirstNonAbelianGroupCohomology} below,
turns out to play a pivotal role in equivariant bundle theory
(e.g., Lem. \ref{EquivariantPrincipalTwistedProductBundles}
and Prop. \ref{FixedLociOfBaseOfUniversalEquivariantPrincipalGroupoid}),
often secretly so (Rem. \ref{GraphsOfCrossedHomomorphismsInTheLiterature} below).
Here we highlight a transparent groupoidal understanding
of crossed homomorphisms with crossed conjugations
between them (Prop. \ref{ConjugationGroupoidOfCrossedHomomorphismsIsSectionsOfDeloopedSemidirectProductProjection} below).
\begin{definition}[Crossed homomorphisms and first non-abelian group cohomology]
\label{CrossedHomomorphismsAndFirstNonAbelianGroupCohomology}
Let $G \,\in\, \Groups(\kTopologicalSpaces)$
and $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma \,\in\, \Actions{G}(\kTopologicalSpaces)$,
with
$\alpha : G \xrightarrow{\;} \mathrm{Aut}_{\mathrm{Grp}}(\Gamma)$
the underlying automorphism action (Lem. \ref{EquivariantTopologicalGroupsAreSemidirectProductsWithG}).
\noindent
{\bf (i)} A continuous {\it crossed homomorphism} from $G$ to $\Gamma$
is a continuous map $\phi \colon G \xrightarrow{\;} \Gamma$
which satisfies the following {\it $G$-crossed} homomorphism property:
\vspace{-2mm}
\begin{equation}
\label{GCrossedHomomorphismProperty}
\underset{
g_1, g_2 \in G
}{\forall}
\;\;\;
\phi(g_1 \cdot g_2)
\;=\;
\phi(g_1)
\cdot
\alpha(g_1)
\left(
\phi(g_2)
\right)
\,.
\end{equation}
\vspace{-2mm}
\noindent
We write
\begin{equation}
\label{SpaceOfCrossedHomomorphisms}
\CrossedHomomorphisms(G, \, G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma)
\;\subset\;
\mathrm{Maps}(G,\Gamma)
\;\;\;
\in
\;
\kTopologicalSpaces
\end{equation}
for the subspace of the mapping space \eqref{MappingSpace}
on the crossed homomorphisms.
\noindent
{\bf (ii)} A {\it crossed conjugation} between two crossed homomorphisms
$\phi \xrightarrow{\;} \phi'$ is an element $\gamma \,\in\, \Gamma$ such that
\vspace{-2mm}
\begin{equation}
\label{CrossedConjugationAction}
\underset{g \in G}{\forall}
\;\;\;
\phi'(g)
\;=\;
\gamma^{-1}
\cdot
\phi(g)
\cdot
\alpha(g)(\gamma)
\,.
\end{equation}
\vspace{-2mm}
\noindent We denote the continuous $\Gamma$-action by crossed conjugation by
\vspace{-2mm}
\begin{equation}
\label{ActionOnSpaceOfCrossedHomomorphisms}
\Gamma
\raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \;
\CrossedHomomorphisms(G,\, G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma)
\;\;\;
\in
\;
\Actions{\Gamma}(\kTopologicalSpaces)
\,.
\end{equation}
\vspace{-2mm}
\noindent We write $\phi \sim_{\mathrm{ad}} \phi'$
for the corresponding equivalence relation.
\noindent
{\bf (iii)} The {\it non-abelian group cohomology} of $G$
in degree 1
with coefficients
in $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma$ is
-- at least when $G$ is discrete\footnote{
For non-discrete domain groups
the notion of crossed homomorphisms need no longer capture all
1-cocycles in non-abelian group cohomology, when the latter is formulated in
proper stacky generality; see \cite{WagemannWockel11} for pointers.} --
the set of connected components\footnote{
The passage to connected components in \eqref{NonAbelianGroup1Cohomology}
seems not to be considered in existing literature.
It makes no difference when the coefficient group is discrete,
(which tends to be tacitly understood in this context,
but is not the most general case of interest),
as well as under other sufficient conditions discussed in
Prop. \ref{DiscreteSpacesOfCrossedConjugacyClassesOfCrossedHomomorphisms} below.
But in general
the correct homotopy-meaningful definition of non-abelian group 1-cohomology
(see Rem. \ref{GeneralAbstractPerspectiveOnNonAbelianGroup1Cohomology} below)
is
only obtained with passage to connected components included.
(The statement in \cite[\S 4.3]{GuillouMayMerling17},
that any groupoid is equivalent to the coproduct of its automorphism sub-groupoids,
is patently false for topological groupoids, in general.
It does hold under suitable extra conditions, such
as in Prop. \ref{DiscreteSpacesOfCrossedConjugacyClassesOfCrossedHomomorphisms} below).
For further discussion of this point see the companion article (cite).
}
of the quotient space by crossed conjugation classes \eqref{CrossedConjugationAction}
of the space \eqref{SpaceOfCrossedHomomorphisms}
of crossed homomorphisms \eqref{GCrossedHomomorphismProperty}:
\vspace{-2mm}
\begin{equation}
\label{NonAbelianGroup1Cohomology}
H^1_{\mathrm{Grp}}
(
G
,\,
G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma
)
\;\;
\coloneqq
\;\;
\pi_0
\big(
\CrossedHomomorphisms(G, \, G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma)
/\!\sim_{\mathrm{ad}}
\!\!\big)
\;\;\;\;
\in
\;
\Sets^{\ast/}
\,.
\end{equation}
\end{definition}
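\noindent
For orientation: if the $G$-action on $\Gamma$ is trivial, so that $\alpha(g) = \mathrm{id}_\Gamma$ for all $g \in G$, then the crossed homomorphism property \eqref{GCrossedHomomorphismProperty} reduces to the ordinary homomorphism property, crossed conjugation \eqref{CrossedConjugationAction} reduces to ordinary conjugation, and \eqref{NonAbelianGroup1Cohomology} becomes the set of connected components of the space of conjugacy classes of continuous group homomorphisms $G \xrightarrow{\;} \Gamma$.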
\begin{remark}[Crossed homomorphisms in the literature]
\label{CrossedHomomorphismsInTheliterature}
Since the notion of crossed homomorphisms,
in the generality that we need them here
(Def. \ref{CrossedHomomorphismsAndFirstNonAbelianGroupCohomology}),
tends to be neglected in the literature, we record some pointers:
Crossed homomorphisms \eqref{GCrossedHomomorphismProperty}
appear first, already in full non-abelian generality,
in \cite[(3.1)]{Whitehead49}.
Much later, following \cite[\S IV.2]{MacLane75},
they became widely appreciated only in the special case when $\Gamma$ is an abelian group,
as a tool in ordinary group cohomology (e.g. \cite[p. 45]{Brown82}).
Crossed homomorphisms in their non-abelian generality
appear again
in \cite[\S 2.1]{tomDieck69} (not using the ``crossed'' terminology, though)
and in \cite[p. 2]{MurayamaShimakawa95}\cite[Def. 4.1]{GuillouMayMerling17},
all in the context of equivariant bundle theory
(in which we consider them in \cref{ConstructionOfUniversalEquivariantPrincipalBundles}).
Textbook accounts in this generality are in \cite[p. 16]{NSW08}\cite[\S 15.a-b]{Milne17}.
The corresponding definition
\eqref{NonAbelianGroup1Cohomology}
of non-abelian group 1-cohomology is
rarely made explicit;
exceptions are
\cite[Def. 2.3.2]{GilleSzamuely06}\cite[Def. 4.17]{GuillouMayMerling17}\cite[\S 3.k]{Milne17}\footnote{
These are sections 16.a-b \& 27.a in the expanded version of Milne's book at
\href{https://www.jmilne.org/math/CourseNotes/iAG200.pdf}{\tt www.jmilne.org/math/CourseNotes/iAG200.pdf}}.
\end{remark}
We also need to recall the following standard fact (e.g. \cite[Ex. 15.1]{Milne17}):
\begin{lemma}[Crossed homomorphisms are sections of the semidirect product projection]
\label{CrossedHomomorphismsAreSectionsOftheSemidirectProductProjection}
{\bf (i)}
Crossed homomorphisms $\phi \colon G \to \Gamma$ (Def. \ref{CrossedHomomorphismsAndFirstNonAbelianGroupCohomology})
are in bijective correspondence to homomorphic sections
of the semidirect group projection \eqref{SplitGroupExtensionOfGByGamma}:
\vspace{-2mm}
$$
\begin{tikzcd}[column sep=60pt, row sep=small]
&
\Gamma \rtimes G
\ar[
d,
"\; \mathrm{pr}_2"
]
\\
G
\ar[
r,-,
shift left=1pt
]
\ar[
r,-,
shift right=1pt
]
\ar[
ur,
dashed,
"{
g \,\mapsto\,
(
\phi(g)
\,,
g
)
}"{above, yshift=1pt, sloped}
]
&
G
\end{tikzcd}
$$
\vspace{-2mm}
\noindent {\bf (ii)} Under this identification, crossed conjugations \eqref{CrossedConjugationAction}
are equivalently plain conjugations with elements in
$\Gamma \xrightarrow{i} \Gamma \rtimes G$.
\end{lemma}
\begin{proof}
Having a section means that
$g \,\mapsto\, ( \phi(g),\, g ) \,\in\, \Gamma \rtimes G$,
and this being a homomorphism means that
\vspace{-1mm}
\begin{equation}
\label{CrossedHomomorphismAsPlainHomomorphismsToSemidirectProductGroup}
(
\phi(g_1 \cdot g_2)
,\,
g_1 \cdot g_2
)
\;=\;
(
\phi(g_1)
,\,
g_1
)
\cdot
(
\phi(g_2)
,\,
g_2
)
\;=\;
\scalebox{1.3}{$($}
\phi(g_1)
\cdot
\alpha(g_1)
\scalebox{1.15}{$($}
\phi(g_2)
\scalebox{1.15}{$)$}
,\,
g_1 \cdot g_2
\scalebox{1.3}{$)$}
\,,
\end{equation}
\vspace{-1mm}
\noindent
where the second equality on the right is the definition of the semidirect product group operation,
evidently reproducing the defining condition \eqref{GCrossedHomomorphismProperty}
in the first argument.
Analogously, plain conjugation in the semidirect product group with elements of the form
$(\gamma, \mathrm{e}) \,\in\, \Gamma \rtimes G$
gives
\vspace{-2mm}
\begin{equation}
\label{CrossedConjugationAsPlainConjugationInSemidirectProductGroup}
(\gamma, \mathrm{e})^{-1}
\cdot
\left(
\phi(g) ,\, g
\right)
\cdot
(\gamma, \mathrm{e})
\;\;
=
\;\;
\big(
\gamma^{-1}
\cdot
\phi(g)
\cdot
\alpha(g)(\gamma)
,\,
g
\big)
\mathpalette\mathrlapinternal{\,,}
\end{equation}
reproducing the formula
\eqref{CrossedConjugationAction} in the first argument.
\end{proof}
Similarly elementary, but maybe less widely appreciated
(see Rem. \ref{GraphsOfCrossedHomomorphismsInTheLiterature}),
is the following:
\begin{lemma}[Graphs of crossed homomorphisms {\cite[\S 2.1]{tomDieck69}\cite[Lem. 4.5]{GuillouMayMerling17}}]
\label{CrossedHomomorphismsAreEquivalentlyMaySubgroupsOfSemidirectProducts}
$\,$
\noindent
{\bf (i)}
The graph of a crossed homomorphism $\phi \;:\; G \xrightarrow{\;} \Gamma$
(Def. \ref{CrossedHomomorphismsAndFirstNonAbelianGroupCohomology})
is a subgroup
\vspace{-1mm}
\begin{equation}
\label{MaySubgroupsOfSemidirectProducts}
\widehat{G}
\;\subset\;
\Gamma \rtimes G
\,,
\;\;\;\;\;\;\;
\mbox{\rm such that}
\;\;\;\;\;
\mathrm{pr}_2
(
\widehat{G}
)
\,\simeq\,
G
\;\;\;\;
\mbox{\rm and}
\;\;\;\;
\widehat{G}
\cap
i(\Gamma)
\,\simeq\,
\{
(\mathrm{e},\,\mathrm{e})
\}
\,,
\end{equation}
\vspace{-2mm}
\noindent
where $i : \Gamma \xhookrightarrow{\;} \Gamma \rtimes G$ is the canonical
subgroup inclusion \eqref{SplitGroupExtensionOfGByGamma}.
\noindent
{\bf (ii)}
Every such subgroup \eqref{MaySubgroupsOfSemidirectProducts}
is the graph of a unique crossed homomorphism.
\end{lemma}
\begin{proof}
The first statement is immediate from the definitions.
For the converse statement (ii),
consider a subgroup $\widehat{G}$ as in \eqref{MaySubgroupsOfSemidirectProducts}.
Then the subgroup property implies that
\vspace{-2mm}
$$
(\gamma,\,g)
,\,
(\gamma{\;}',\,g)
\;\in\;
\widehat{G}
\;\;\;\;\;\;\;
\Rightarrow
\;\;\;\;\;\;\;
(\gamma{\;}',\, g)
\cdot
(\gamma,\, g)^{-1}
\;=\;
(\gamma{\;}',\, g)
\cdot
\left(
\alpha(g^{-1})(\gamma^{-1})
,\,
g^{-1}
\right)
\;=\;
\big(
\gamma{\;}' \cdot \gamma^{-1}
,\,
\mathrm{e}
\big)
\;\;\;
\in
\;
\widehat{G}
\,.
$$
\vspace{-2mm}
\noindent
From this, the second condition in \eqref{MaySubgroupsOfSemidirectProducts}
implies that
\vspace{-2mm}
$$
(\gamma,\,g)
,\,
(\gamma{\;}',\,g)
\;\in\;
\widehat{G}
\;\;\;\;\;\;\;
\Rightarrow
\;\;\;\;\;\;\;
\gamma \,=\, \gamma{\;}'
.
$$
\vspace{-2mm}
\noindent
Together with the first condition in \eqref{MaySubgroupsOfSemidirectProducts},
this implies that
$\widehat G$ is the graph of a function $\phi \,\colon\, G \xrightarrow{\;} \Gamma$.
From this the claim follows by
Lem. \ref{CrossedHomomorphismsAreSectionsOftheSemidirectProductProjection}.
\end{proof}
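\noindent
For instance, the constant crossed homomorphism $\phi \,=\, \mathrm{const}_{\mathrm{e}}$ corresponds, under Lem. \ref{CrossedHomomorphismsAreEquivalentlyMaySubgroupsOfSemidirectProducts}, to the subgroup $\{\mathrm{e}\} \times G \,\subset\, \Gamma \rtimes G$, which is the image of the canonical splitting of \eqref{SplitGroupExtensionOfGByGamma}.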
\begin{remark}[Graphs of crossed homomorphisms in the literature]
\label{GraphsOfCrossedHomomorphismsInTheLiterature}
Subgroups of the form \eqref{MaySubgroupsOfSemidirectProducts}
were used in early articles on equivariant bundle
theory (e.g. \cite[Thm. 10]{LashofMay86}\cite[Thm. 7]{May90}).
That these are equivalently (graphs of) crossed homomorphisms
(Lem. \ref{CrossedHomomorphismsAreEquivalentlyMaySubgroupsOfSemidirectProducts})
and hence homomorphic sections of the semidirect product group
(Lem. \ref{CrossedHomomorphismsAreSectionsOftheSemidirectProductProjection})
may have been
(in view of
Prop. \ref{ConjugationGroupoidOfCrossedHomomorphismsIsSectionsOfDeloopedSemidirectProductProjection}
below)
one of the key observations that led to the construction in
\cite{MurayamaShimakawa95}
(discussed in \cref{ConstructionOfUniversalEquivariantPrincipalBundles} below);
however, Lem. \ref{CrossedHomomorphismsAreEquivalentlyMaySubgroupsOfSemidirectProducts}
is still not made explicit there.
\end{remark}
\begin{notation}[Conjugation groupoid of crossed homomorphisms]
\label{ConjugationGroupoidOfCrossedHomomorphisms}
We write
\vspace{-2mm}
$$
\CrossedHomomorphisms(G,\,G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma) \sslash_{\!\mathrm{ad}} \Gamma
\;\;\coloneqq\;\;
\big(
\CrossedHomomorphisms(G,\, G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma)
\times
\Gamma
\;\; \rightrightarrows \;\;
\CrossedHomomorphisms(G ,\, G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma)
\big)
$$
\vspace{-2mm}
\noindent
for the topological action groupoid (Ex. \ref{TopologicalActionGroupoid})
of crossed conjugations \eqref{CrossedConjugationAction}
acting on the space
\eqref{SpaceOfCrossedHomomorphisms} of
crossed homomorphisms \eqref{GCrossedHomomorphismProperty}.
\end{notation}
\begin{definition}[Topological groupoid of sections of delooped semidirect product projection]
\label{GroupoidOfSectionsOfDeloopedSemidirectProductProjection}
For $G \,\in\, \Groups(\kTopologicalSpaces)$
and $G \raisebox{1.4pt}{\;\rotatebox[origin=c]{90}{$\curvearrowright$}}\hspace{.5pt} \, \Gamma \,\in\, \Actions{G}( \kTopologicalSpaces)$,
consider the topological mapping groupoid (Ex. \ref{TopologicalFunctorGroupoid})
from the delooping groupoid (Ex. \ref{TopologicalDeloopingGroupoid})
of $G$ into that of the semidirect product group $\Gamma \rtimes G$
(Lem. \ref{EquivariantTopological
angle and the assumed
source spectra (which are unknown since the HRC-I does not currently
allow the extraction of energy information). The count rates we
quote are therefore the uncorrected count rates and are lower limits
on the count rates the sources would have had if they were located
on-axis. Depending on the off-axis angles and the source spectra, the
on-axis count rates could have been larger by a factor of a few.
Again, if the sources were detected in multiple observations, we only
give the count rates from the data set in which the sources had the
smallest offset to minimize the systematic errors on the derived
fluxes.
\section{Results}
In total we have detected 21 sources so far. Two sources (the Sgr A*
complex and the Arches cluster) are known to comprise a complex of point
sources together with strong diffuse emission
(\cite{2003ApJ...589..225M, 2002ApJ...570..665Y}). The analysis of
these complex regions is still in progress. Ten of the remaining
sources can be identified with known stars (e.g., HD 316314, HD
316224, HD 161274, TYC 06840-38-1, ALS 4400; Fig.~\ref{fig:images}
left) or have clear counterparts in the Digital Sky Survey images
indicating that they are foreground objects (and hence have relatively
low X-ray luminosities). We will not discuss the detections of the
foreground objects further in this paper; instead, we focus on the
detected X-ray binaries.
\subsection{The persistent sources}
We detected the two persistent X-ray binaries known to be present in
the surveyed region: 1E 1743.1--2843 and 1A 1742--294.
\subsubsection{1E 1743.1--2843}
1E 1743.1--2843 is a persistent X-ray binary for which the type of
accreting object is not yet known. The source was in the FOV of two of
the seven HRC-I pointings and was detected during both observations,
but we detected no bursts from the source. The position obtained from
our {\it Chandra}/HRC-I data is consistent with that derived from a
previous {\it XMM-Newton} observation (\cite{2003A&A...406..299P}),
although, due to the systematic uncertainties in our positional
errors, it cannot currently be determined if our position is better
than the {\it XMM-Newton} one. We used PIMMS to convert the obtained
count rate (see Tab.~\ref{table:binaries}) into fluxes. We assumed an absorbed
power-law model similar to what was found by
\cite{2003A&A...406..299P} when fitting the {\it XMM-Newton}
observation of the source (they obtained an equivalent hydrogen column
density $N_{\rm H}$ of $2\times10^{23}$ cm$^{-2}$ and a photon index
of 1.8). This results in unabsorbed fluxes of $1.8\times 10^{-10}$
(2--10 keV) and $3\times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ (0.5--10
keV) and X-ray luminosities of 1.4 and $2.3\times 10^{36}$ erg
s$^{-1}$, respectively. These X-ray luminosities are very
similar to what has been seen before for this source (e.g.,
\cite{2003A&A...406..299P}).
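For reference, the flux-to-luminosity conversion used here and below amounts to $L = 4\pi d^2 F$. The following minimal sketch (not part of our analysis pipeline; it assumes a source distance of $\sim$8 kpc, i.e., that the sources lie near the Galactic center) reproduces the numbers quoted above for 1E 1743.1--2843:
\begin{verbatim}
# Minimal sketch: convert an unabsorbed flux (erg cm^-2 s^-1) into a
# luminosity (erg s^-1) via L = 4*pi*d^2*F, assuming d ~ 8 kpc.
import math

KPC_CM = 3.086e21            # 1 kpc in cm
d_cm = 8.0 * KPC_CM          # assumed Galactic-center distance

def luminosity(flux_cgs):
    return 4.0 * math.pi * d_cm**2 * flux_cgs

print(luminosity(1.8e-10))   # ~1.4e36 erg/s (2--10 keV)
print(luminosity(3.0e-10))   # ~2.3e36 erg/s (0.5--10 keV)
\end{verbatim}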
\subsubsection{1A 1742--294}
1A 1742--294 is a persistent X-ray binary harboring a neutron-star
accretor as evidenced by the type-I X-ray bursts observed from this
system (see, e.g., \cite{1994ApJ...425..110P}). We detected this
source during both HRC-I pointings in which the source was in the
FOV. During the GC-10 pointing we detected an X-ray burst. Our {\it
Chandra} position is fully consistent with the best position so far
reported on this source (using {\it ROSAT};
\cite{2001A&A...368..835S}) and despite the possible unknown
systematic uncertainty in our errors, our position is better. We
again used PIMMS to convert the obtained count rate (see
Tab.~\ref{table:binaries}) and used the absorbed power-law model
($N_{\rm H} \sim$$6\times10^{22}$ cm$^{-2}$; photon index $\sim$1.8)
found when fitting the {\it BeppoSAX} and {\it ASCA} data of the
source (\cite{1999ApJ...525..215S,2002ApJS..138...19S}). This results
in unabsorbed fluxes of $2.3\times 10^{-10}$ (2--10 keV) and
$3.7\times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ (0.5--10 keV). The
corresponding X-ray luminosities are 1.8 and $2.8\times 10^{36}$ erg
s$^{-1}$, consistent with what has been observed before for this
source (e.g., \cite{1999ApJ...525..215S}).
\subsection{The transient sources}
Two transients were clearly visible during our observations: GRS
1741.9--2853 and XMM J174457--2850.3. We made a preliminary
announcement of the detection of these new outbursts on 6 June 2005
(\cite{2005ATel..512....1W}). Following these detections, we obtained
an additional {\it Chandra} observation of both sources (using the
ACIS-I detector) on 1 July 2005 (see Tab.~\ref{table:observations} for
details). Because the two transients were only $\sim$4.6$'$ away from
each other, we could observe both sources with only one ACIS-I
pointing. We placed both sources at an off-axis angle of 7$'$ in order
to limit pile-up in case the sources were as bright as seen during the
HRC-I observations. The ACIS-I data were also analyzed using CIAO and
the standard threads. Again, all data could be used since no episodes
of high background emission occurred during our observation.
\subsubsection{GRS 1741.9--2853}
GRS 1741.9--2853 is a neutron star X-ray transient (it exhibits type-I
bursts; e.g., \cite{1999A&A...346L..45C}) which has been detected
several times in outburst since its original discovery in 1990
(\cite{1990IAUC.5104....1S}). Its peak luminosity is typically a few
times $10^{36}$ erg s$^{-1}$ making it a faint X-ray transient (see
\cite{2003ApJ...598..474M} for more details). This source was detected
during two of our pointings (Tab.~\ref{table:binaries}) but we
detected no bursts. The position of the source was consistent with,
but not better than, the one obtained by \cite{2003ApJ...598..474M}.
The observed count rate was converted into fluxes using PIMMS and
assuming an absorbed power-law with $N_{\rm H} = 9.7\times10^{22}$
cm$^{-2}$ and a photon index of 1.88
(\cite{2003ApJ...598..474M}). This results in unabsorbed fluxes of
$1.1\times 10^{-10}$ (2--10 keV) and $1.8\times 10^{-10}$ erg
cm$^{-2}$ s$^{-1}$ (0.5--10 keV), yielding X-ray luminosities of 0.8
and $1.4\times 10^{36}$ erg s$^{-1}$, respectively (for comparison
with previous {\it Chandra} data on this source reported by
\cite{2003ApJ...598..474M}, we also list the 2--8 keV luminosity of
$7.0\times10^{35}$ erg s$^{-1}$). GRS 1741.9--2853 was also detected
during the additional {\it Chandra}/ACIS-I observation
(Fig.~\ref{fig:extra_image}). We extracted the source spectrum using a
source extraction region of 10$''$ and a background extraction circle
of 50$''$ from a source-free region close to GRS 1741.9--2853. The
spectrum was rebinned to have at least 15 counts per bin to allow the
use of the $\chi^2$ fitting method. The resulting spectrum is shown in
Figure~\ref{fig:spectra}. We used XSPEC to fit the spectrum and the
fit results obtained are listed in
Table~\ref{table:spectral_fits}. Clearly, the source flux had
decreased by almost an order of magnitude within about a month (i.e.,
since 5 June 2005). The long-term light curve of the source is plotted
in Figure~\ref{fig:lc} showing the multiple outbursts of the source in
the last 15 years.
\subsubsection{XMM J174457--2850.3}
XMM J174457--2850.3 is also clearly detected during our HRC-I
observations (Fig.~\ref{fig:images}). This source has been detected
only once before in outburst in 2001 (using {\it XMM-Newton};
\cite{2005MNRAS.357.1211S}). During that outburst the source was
seen at a peak luminosity of $5 \times 10^{34}$ erg s$^{-1}$,
justifying a classification as a VFXT. We detected it during two of
our pointings (Tab.~\ref{table:binaries}) but saw no bursts. Our
source position is consistent with that obtained by
\cite{2005MNRAS.357.1211S}, although the exact uncertainty on our
HRC-I position is currently unclear. However, the source was also
detected during our additional ACIS-I observation yielding a more
reliable position even though the source was relatively weak (see
Tab.~\ref{table:binaries}). This position is significantly better than
the {\it XMM-Newton} one. The observed HRC-I count rate was converted
into fluxes using PIMMS and assuming an absorbed power-law with
$N_{\rm H} = 6\times10^{22}$ cm$^{-2}$ and a photon index of 1.0
(\cite{2005MNRAS.357.1211S}). This resulted in unabsorbed fluxes of
$1.1\times 10^{-10}$ (2--10 keV) and $1.3\times 10^{-10}$ erg
cm$^{-2}$ s$^{-1}$ (0.5--10 keV) and in X-ray luminosities of 0.8 and
$1.0\times 10^{36}$ erg s$^{-1}$, respectively. This is significantly
brighter than what was previously found for the source and makes it a
borderline case as a VFXT. As stated above, XMM J174457--2850.3 was
also detected during the additional {\it Chandra}/ACIS-I observation
(Fig.~\ref{fig:extra_image}). We extracted the source spectrum using a
source extraction region of 5$''$. Due to the rather low number of
source photons (26 counts in the 0.3--7.0 keV energy range) we did not
rebin the spectrum or subtract the background (which was $<$0.3 photon
in the source region and therefore negligible) so that we could use
the Cash statistics (\cite{1979ApJ...228..939C}) when fitting the
spectrum in XSPEC. The fit results obtained for this observation are
also listed in Table~\ref{table:spectral_fits} and the resulting
spectrum is shown in Figure~\ref{fig:spectra}. Clearly, the source
flux had decreased by nearly three orders of magnitude within
approximately a month (i.e., since 5 June 2005). The long-term light
curve of the source is plotted in Figure~\ref{fig:lc}.
\subsubsection{A possible new VFXT}
None of the other known transients in the FOV of our observations (see
Tab.~\ref{table:sources_in_FOV}) were conclusively detected in our
HRC-I data. The upper limits on their luminosities depend strongly on
their spectral shape and their off-axis positions, with a rough
estimate of $\sim$$10^{34}$ erg s$^{-1}$. Several additional weak
sources were detected during our observations which could not be
identified with a star in the Digital Sky Survey database. Only one of
these had a large enough count rate (see Tab.~\ref{table:binaries})
that its X-ray luminosity exceeded $10^{34}$ erg s$^{-1}$ if it had a
'prototypical X-ray binary' spectrum (power-law model with photon
index of 1.8 and a typical $N_{H}$ of $6\times10^{22}$
cm$^{-2}$). Using such a spectral shape, the source had unabsorbed
X-ray fluxes of 1.9 and $3.1\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ for 2--10 keV
and 0.5--10 keV, respectively, and thus luminosities of $1.5\times
10^{34}$ erg s$^{-1}$ (2--10 keV) and $2.4\times10^{34}$ erg s$^{-1}$
(0.5--10 keV). We note that we
do not know the intrinsic source
spectrum and therefore these fluxes and luminosities could be
significantly off if the real source spectrum is considerably
different. We investigated the {\it Chandra} and {\it XMM-Newton}
archives and found that the source was in the FOV of one previous {\it
XMM-Newton} observation. The source was not detected during this {\it
XMM-Newton} observation, but it was at the edge of its FOV, making it
difficult to obtain a reliable upper limit on the flux, especially
because we do not know the spectral shape of the source. We estimate
that the luminosity of the source was at least a factor of a few
fainter during the {\it XMM-Newton} observation compared with our
HRC-I data. Although this is suggestive of a transient nature for this
source, it could also be a highly variable persistent
source. For now, we refer to this source as a possible new
VFXT.
\subsubsection{Observations at other wavelengths}
We obtained VLA observations at 4 and 6 cm on 8--9 June 2005 of GRS
1741.9--2853, XMM J174457--2850.3, and the possible new VFXT. The
analysis of these radio data is complicated by the strong side-lobes
of Sgr A* and we are still in the process of fully analyzing these
data. A preliminary analysis of the 4 cm data shows that none of the
sources were conclusively detected, with radio fluxes of
$0.003\pm0.060$, $-0.002\pm0.046$, and $0.032\pm0.043$ mJy/beam,
respectively. On 8 June 2005, \cite{2005ATel..522....1L} obtained
I-band images of GRS 1741.9--2853 and XMM J174457--2850.3 using the
Magellan-Baade telescope but could not detect the I-band counterparts
of the sources. This is not surprising when considering the high
absorption column in front of both sources.
\section{Discussion}
We have presented our initial results of the first observations taken
as part of our {\it XMM-Newton}/{\it Chandra} monitoring campaign of
the inner region of our Galaxy. Using our {\it Chandra}/HRC-I
observations we detected mostly foreground objects (like X-ray active
stars), but we also detected two persistent X-ray binaries, two X-ray
transients, and one possible very faint X-ray transient (but its
transient nature requires further confirmation). Clearly, our
monitoring {\it XMM-Newton}/{\it Chandra} campaign is detecting
transients in outburst which are being missed by the other monitoring
instruments in orbit. Our campaign therefore complements, as designed,
other monitoring campaigns using satellites currently in orbit (e.g.,
\cite{1996ApJ...469L..33L, 2001ASPC..251...94S,
2004AstL...30..382R,2005ATel..438....1K}). These programs find mainly
the brighter transients or the faint transients far away from the
crowded fields near Sgr A*.
The faint X-ray transient GRS 1741.9--2853 was detected at a level of
$\sim$$10^{36}$ erg s$^{-1}$, very similar to what has been observed
previously for this source. A month after our initial HRC-I
observations this source could still be detected at $\sim$$10^{35}$
erg s$^{-1}$ with the ACIS-I. The parameters obtained for the source
spectrum during this observation were consistent with those found by
\cite{2003ApJ...598..474M} when the source was an order of magnitude
brighter, indicating that the source spectrum is not very dependent on
source luminosity. Although we did not detect X-ray bursts during our
observations, this source is known to exhibit such phenomena, which
suggests that it is a neutron star accreting from a low-mass companion
star. Even though no optical/infrared counterpart has so far been
found for this source, type-I X-ray bursts have only been seen from
low-mass X-ray binaries, making it very likely that GRS 1741.9--2853 is
also such a system. The fact that GRS 1741.9--2853 harbors a neutron
star is also consistent with the non-detection of the source in our
radio data since neutron-star low-mass X-ray binaries are known to
exhibit very low radio luminosities (e.g., \cite{2001MNRAS.324..923F,
2005ApJ...626.1020M}).
Figure~\ref{fig:lc} shows that the source has been seen to be in
outburst at least 5 times with X-ray luminosities above $10^{34}$ erg
s$^{-1}$. Its recurrence time can be estimated to be between 2 and 5
years, making GRS 1741.9--2853 one of the most active transients in
our FOV, with a duty cycle of about 50\% (as estimated from
Fig.~\ref{fig:lc})\footnote{We note that this is likely an upper limit
on the duty cycle since actual source detections are more frequently
reported in the literature than non-detections which will skew the
data toward detections. For example, the data presented by
\cite{2003ApJ...598..474M} for GRS 1741.9--2853 (as used in
Fig.~\ref{fig:lc}) do not include the non-detections of the source as
seen with {\it ROSAT} (\cite{2001A&A...368..835S}) or {\it BeppoSAX}
(\cite{1999ApJ...525..215S}; using the Narrow Field
Instruments). Since no upper limits on the source flux are given in
these papers, we also do not include these non-detections in
Fig.~\ref{fig:lc}.\label{footnote:duty}}. Its peak luminosity is
very similar to the accreting millisecond X-ray pulsar SAX
J1808.4--3658. For that system and the other accreting millisecond
X-ray pulsars, it has been suggested that their pulsating nature is
related to their rather low time-averaged accretion rates (e.g.,
\cite{2001ApJ...557..958C}). Although the time-averaged accretion rate
of GRS 1741.9--2853 seems to be higher than for the accreting
millisecond pulsars due to its higher duty cycle, GRS 1741.9--2853
could still be a millisecond X-ray pulsar as well (see also
\cite{2003ApJ...598..474M}), especially if its duty cycle has been
overestimated (see footnote~\ref{footnote:duty}). Unfortunately, its
faintness and its location in the Sgr A* region make it very difficult
to detect these pulsations using {\it RXTE} because of the significant
contribution to the detected count rate from other sources in the
FOV. However, with {\it XMM-Newton} pulsations could be detected
within several tens of ksec (depending on the actual fluxes of the
source) if they have strengths similar to those of the pulsations seen in the
known accreting millisecond pulsars.
The VFXT XMM J174457--2850.3 was also detected during our HRC-I
observations at an X-ray luminosity close to $10^{36}$ erg
s$^{-1}$. This is about a factor of 20 higher than what was previously
seen for this source (\cite{2005MNRAS.357.1211S}). This demonstrates
that VFXTs can exhibit a large range of X-ray luminosities (similar to
what has been observed for the brighter systems) and XMM
J174457--2850.3 is at the border between faint and very faint X-ray
transients, clearly demonstrating that our luminosity boundaries are
somewhat arbitrary as discussed in the introduction. It is possible
that the previous detection of this source was made either during the
rise or decay of a full outburst and that the maximum luminosity
reached at the time was closer to what we have observed for the source
during our HRC-I observations. Within a month the source luminosity
had decreased by nearly three orders of magnitude. Its X-ray spectrum at
this low X-ray luminosity was consistent with that found by
\cite{2005MNRAS.357.1211S} demonstrating that for this source the
shape of its spectrum is not strongly dependent on luminosity for
luminosities below $5\times 10^{34}$ erg s$^{-1}$. Since we cannot
extract any spectral information from the HRC-I data, we cannot
determine if the spectrum was significantly different at times when
the source had X-ray luminosities close to $10^{36}$ erg s$^{-1}$.
Since only very few observations have been performed of this source
(see Fig.~\ref{fig:lc}), it is difficult to estimate its recurrence
time (at most of order 3 years according to Fig.~\ref{fig:lc}) and its
time-averaged accretion rate. The non-detection at radio wavelengths
might indicate that the source harbors a neutron-star accretor:
according to the radio--X-ray correlation found for low-luminosity
black-hole binaries (\cite{2003MNRAS.344...60G}), a black-hole accretor
accreting at $\sim$$10^{36}$ erg s$^{-1}$ should have had a radio flux
density of $\sim$1 mJy, significantly higher than our radio upper
limit. Alternatively, XMM
J174457--2850.3 could still harbor a black hole, but one which does
not follow this correlation.
We did not detect any unambiguous new VFXTs during our observations,
although we detected a possible new VFXT whose transient nature must
be confirmed. This will be possible with our next sets of monitoring
observations. Our three additional epochs will also be very important
to find further VFXTs, either previously unknown transients or
recurrent ones. Our observations will allow us to set tighter
constraints on the time averaged accretion rates of these systems than
is currently possible with the available data. Such constraints are
especially important for the low-mass X-ray binaries among the VFXTs
because if their time-averaged accretion rates are very low, then our
theories of the evolution of such systems will have a very hard time
explaining their existence without invoking exotic scenarios, such as
accretion from a brown dwarf or a planet, or intermediate-mass black-hole
accretors (e.g., \cite{kingwijnands2005}). The latter option cannot be
invoked if type-I X-ray bursts have been observed for these systems
since this establishes the existence of a neutron star
accretor. Potential candidates for such systems are the burst-only
sources mentioned in the introduction. Monitoring observations of
these sources with sensitive X-ray telescopes would be very useful to
constrain their time-averaged accretion rates to determine if indeed
these rates are very low for these systems.
Finding new VFXTs and determining their time-averaged accretion rates
using our monitoring campaign is only one way forward to increase our
understanding of these enigmatic transients. We now discuss other
avenues that can be explored as well to achieve that goal. First, a
search in the data archives for previously unnoticed VFXTs (e.g., by
comparing different exposures of the same fields) might lead to the
detection of several more systems. Second, larger regions of our
Galaxy need to be monitored at the desired sensitivity to detect the
low fluxes observed from VFXTs. It is especially important to
determine if a large number of VFXTs also exist outside the inner
region of our Galaxy. \cite{2005ApJ...622L.113M} found that the excess
of VFXTs within 10 arcminutes of Sgr A* is significant and might point
to an unusual formation history of these systems. However, if a large
number of VFXTs are also found further away from Sgr A* (e.g., systems
like XMM J174716--2810.7 or SAX J1828.5--1037;
\cite{2003ATel..147....1S, 2004MNRAS.351...31H}), then
any production mechanism which requires the high stellar density near
Sgr A* cannot be invoked for these VFXTs. There is currently no
monitoring satellite in orbit that can perform this task, mainly
because of the limited sensitivity and angular resolution of the
instruments. However, it is possible to derive a first approximation
to the number density of VFXTs at large distances from the center of a
spiral galaxy by performing several deep pointings of the core of
other spiral galaxies. The most obvious choice is the nearest large
spiral galaxy to our own, M31. Within a $\sim$250 ksec exposure it is
possible to observe a 3.7 kpc $\times$ 3.7 kpc region of M31 using the
{\it Chandra}/ACIS-I detector with a limiting sensitivity of $1-4 \times
10^{34}$ erg s$^{-1}$ (depending on the spectral properties of the
sources). Several such deep pointings would detect all but the
faintest X-ray transients in a large region of M31. Alternatively,
such programs can also be performed for the smaller spiral galaxy M33
or for galaxies further away. In the latter case, the limiting
sensitivity will of course be worse.
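The field size quoted above for M31 can be checked with a short estimate; the sketch below assumes the commonly adopted M31 distance of $\sim$780 kpc (the only input not stated in the text):
\begin{verbatim}
import math

D_M31_KPC = 780.0     # assumed distance to M31 (kpc)
side_kpc = 3.7        # side of the region quoted in the text (kpc)

# small-angle approximation: angle = size / distance
theta_arcmin = math.degrees(side_kpc / D_M31_KPC) * 60.0
print("3.7 kpc at 780 kpc subtends ~%.1f arcmin" % theta_arcmin)
# ~16.3 arcmin, i.e. roughly the ~17 arcmin width of the ACIS-I array
\end{verbatim}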
\begin{acknowledgements}
RW thanks Michael Muno and Andrew King for useful discussions about
very faint X-ray transients. We also thank Michael Muno for the
information on the absence of eclipses in CXOGC J17535.5--290124.
\end{acknowledgements}
\section*{Introduction and motivations}
The torsional completion of gravity is essentially the result of not neglecting the torsion tensor within the most general connection of the spacetime \cite{sa-si,h-o,h}; once torsion is allowed to be non-zero in a geometrical setting in which the metric is non-flat, or equivalently when torsion is allowed to be present beside the curvature tensor, one gets the Cartan enlargement of Riemannian geometry called Riemann-Cartan geometry: such an extension of the underlying structure of the geometrical spacetime might well be justified in terms of generality arguments, but it is in its physical effects that it is most important.
In fact, if gravity is derived as a gauge theory, the gravitational field is interpreted as the strengths of the potentials that arise when making local some continuum spacetime transformation, and so it is all too natural that there be two basic quantities, torsion and curvature, as there are two fundamental spacetime transformations, translations and rotations \cite{Capozziello:2011et, Shapiro:2001rz}: so when gauging the entire Poincar\'{e} group with its full rototranslations, translations give rise to torsion in the same way in which rotations give rise to curvature, according to the approach that was followed by Sciama and Kibble; in the Sciama-Kibble picture torsion turns out to be coupled to the spin in a way that is analogous to the way in which in Einstein gravity curvature is coupled to the energy, so that the Sciama-Kibble scheme is simply the most general expression of the Einsteinian spirit of geometrization of physics, the one in which the spin-torsion coupling is included beside the usual curvature-energy coupling, in what can thus be reasonably called the Sciama-Kibble completion of Einstein gravitation. That this is not only an extension but a true completion comes from the fact that such an enlargement is the most general, first because spacetime rototranslations are all continuum transformations we may gauge and then because spin and energy are all the conserved quantities we may have, according to Wigner classification of elementary particles as irreducible representations of the Poincar\'{e} group; if on the one hand including spin beside energy is the most we can actually do, on the other hand leaving it behind will never permit us to discuss the whole particle content that we know to exist in nature instead. As a consequence of this situation, torsion as well as curvatures are both necessary in order to couple all the conserved quantities of a general matter field precisely because the spin density as well as the energy density are the two conserved quantities that pertain to the most general matter field we may define.
Nevertheless, if from a general point of view we have the prescription for which torsion is coupled to spin and curvature to energy, there is no unique way in which such coupling protocols can indeed be realized unless one fixes additional conditions. For example, what is historically known to be the Sciama-Kibble-Einstein gravitation is based only on the simplest dynamical action one may write, that is the one in which the torsionless Ricci scalar given by $R(g)$ is generalized up to its torsionfull counterpart given by $G(g,Q)$ where $g$ and $Q$ are the metric and torsion tensors; however, because torsion is a tensor then it is not necessary to add it only implicitly through the curvature but it is also possible to add it explicitly as squared torsion contributions \cite{Baekler:2011jt}, and if one relaxes the assumption of parity invariance then also parity-odd terms may be added too \cite{Hojman:1980kv}. And again, these contributions are linear in the curvature and quadratic in torsion, but more terms may be possible if one decides to allow higher-order derivative terms in the Lagrangian of the model: the difference between the aforementioned least-order derivative case \cite{Baekler:2011jt,Hojman:1980kv} and any of the infinite higher-order derivative cases is that while in the former the torsion-spin coupling is algebraic in the latter the torsion-spin coupling equations are differential equations; in the former the torsion-spin coupling is a constraint while in the latter the torsion-spin coupling is a real field equation, so that in the first case torsion is zero when the spin vanishes while in the second case torsion can be present even in the absence of spin. This property is encoded in the statement that in the least-order derivative models torsion does not propagate out of matter while in all higher-order derivative models it propagates out of matter.
As it is quite clear, all higher-order derivative models may be at risk whenever experimental limits for torsion in vacuum are very strict: this is so because constraining torsion in vacuum means constraining it also inside matter, and therefore totally, with the consequence that torsion might turn out to be zero after all; and because experimental limits for torsion in vacuum are indeed very strict \cite{l,k-r-t, Kostelecky:2008ts}, then either torsion, although non-zero in principle, may be equal to zero because of observational evidence, or there are no higher-order derivative models in the first place. Least-order derivative models are still safe in this case, because torsion is identically zero in vacuum, and then it is compatible with all experimental limits no matter how strict they are outside matter, so that if we wish to have torsion restricted within matter then such constraints can only come from in-matter experiments; very recently \cite{Lehnert:2013jsa}, these experiments have been performed. Our purpose is to discuss what this will imply for the least-order derivative models we consider.
As mentioned above, we will employ the torsional completion of gravity for a geometrical spacetime filled with the most general form of matter accounting for both spin and energy, and so far as we know such a form of matter with both spin and energy is realized in nature by the Dirac field solely; and again, as we have just discussed, we will take the torsion-gravity for Dirac fields as described by Lagrangians at the least-order derivative, in their most general form: the resulting model will be what in this paper we will call Sciama-Kibble-Einstein-Dirac theory, or SKED theory for short. Because the SKED theory is a least-order derivative theory its torsion-spin coupling is algebraic, but because the Dirac field is a least-spin spinorial fermion then it has a completely antisymmetric spin density; this additional feature is important because, since the torsion-spin field equations are algebraic and as the spin is completely antisymmetric, then torsion turns out to be completely antisymmetric itself. What this implies is that, since in least-order derivative models torsion is identically zero in vacuum and therefore it can only be constrained by in-matter experiments, least-order derivative models in which torsion is completely antisymmetric are such that two of the three irreducible decompositions of torsion are identically zero and therefore they cannot be constrained by in-matter experiments however stringent they are, and only the completely antisymmetric irreducible decomposition of torsion can be constrained.
Luckily enough, in the above-mentioned in-matter experiments the completely antisymmetric part of torsion is exactly the one on which restrictions are placed, and thus it is essential to perform such an investigation so as to assess whether torsion will really be constrained, and by how much.
Eventually, we will speculate about the possible outlooks, especially about hints for new physics.
\section{The SKED Theory}
As specified in the introduction, any higher-order theory of gravity has in general a torsion-spin coupling differential field equation, so that whether or not spin is present, torsion is non-zero in general; in \cite{k-r-t,Kostelecky:2008ts} the authors deal precisely with this type of situation by considering a very general Lagrangian for torsion in interaction with spinorial matter, so that their results are relatively model independent, and capable of including propagating torsion as well: since their results place stringent limits on torsion in vacuum, then they can be interpreted by stating that torsion can be assumed not to exist out of spinorial matter. But in theories in which torsion can propagate, it may be present even in absence of its spin source: then constraining torsion in vacuum signifies constraining torsion entirely, and these results can be interpreted by stating that torsion cannot be a propagating field whatsoever. Or equivalently, since propagating torsion comes from higher-order field equations, they may be interpreted by stating that torsion cannot be described in terms of higher-order Lagrangians in general at all.
Therefore in the present paper we will focus on the least-order Lagrangian, generating torsion-spin coupling algebraic field equations, for which torsion may have whatever value can be assigned in spinorial matter without having to be different from zero also in vacuum, so that torsion may still be present inside matter even if it is always zero outside matter; in \cite{l} the author considers such a theory, so that his results are specific to this model, which describes torsion as a non-linear short-range interaction: effects on the energy levels of atoms can be tested by means of the Hughes-Drever experiment, constraining this specific type of torsion for short-range potentials.
The Lagrangian of \cite{k-r-t,Kostelecky:2008ts} is the starting point also for the results discussed in \cite{Lehnert:2013jsa} although in this last reference the authors discuss in-matter experiments; the results that have been exhibited in \cite{l} about short-range interactions place bounds that in \cite{Lehnert:2013jsa} are improved: therefore we may see the results of \cite{Lehnert:2013jsa} as what condenses and improves all previous results about limits on torsion, placing strong bounds also on the last theory that was still compatible with present experiments, the SKED theory.
Our purpose is to consider the SKED theory, studying how the torsionally-induced spin-contact interactions influence in-matter dynamics, to see whether they really are incompatible with in-matter experiments or not.
So to begin, we will introduce very briefly the formalism we intend to employ, exposed in \cite{Fabbri}, and where here we recall the most important notation: all along this paper we will work in a $(1\!+\!3)$-dimensional space-time with Riemann-Cartan geometry, described in terms of a metric tensor $g_{\mu\nu}$ and a torsion tensor $Q^{\alpha}_{\phantom{\alpha}\mu\nu}$ which will be taken to be completely antisymmetric without any loss of generality for the reasons that were explained in the introduction here above; the metric and torsion tensors will construct the connection in terms of which we define the covariant derivatives $D_{\mu}$ and $\nabla_{\mu}$ in the most general case and in the torsionless case, respectively, and where we have that metric-compatibility holds; then the curvature tensors $G^{\rho}_{\phantom{\rho}\xi\mu\nu}$ and $R^{\rho}_{\phantom{\rho}\xi\mu\nu}$ are defined as usually done in the most general case and in the torsionless case, respectively, and because of their symmetry properties we may also define $G^{\rho}_{\phantom{\rho}\mu\rho\nu}\!=\!G_{\mu\nu}$ with contraction given in terms of $G_{\eta\nu}g^{\eta\nu}\!=\!G$ and $R^{\rho}_{\phantom{\rho}\mu\rho\nu}\!=\!R_{\mu\nu}$ with contraction given by $R_{\eta\nu}g^{\eta\nu}\!=\!R$ called Ricci tensor and scalar and torsionless Ricci tensor and scalar. In Lorentz formalism, the metric is $g_{\alpha\nu}\!=\!e_{\alpha}^{p} e_{\nu}^{i} \eta_{pi}$ in terms of the basis of tetrad fields $e_{\alpha}^{i}$ and the constant metric $\eta_{ij}$ with Minkowskian structure and where $\omega^{ip}_{\phantom{ip}\alpha}$ is the spin-connection; this formalism is equivalent to the previous one, but it allows the possibility to introduce spinor fields. Here, the spinorial transformation will be taken in $\frac{1}{2}$-spin representation, obtained after introduction of the $\boldsymbol{\gamma}_{a}$ matrices verifying the Clifford algebra $\{\boldsymbol{\gamma}_{a},\boldsymbol{\gamma}_{b}\}\!=\!
2\boldsymbol{\mathbb{I}}\eta_{ab}$ from which one may define the matrices $\frac{1}{4}[\boldsymbol{\gamma}_{a},\boldsymbol{\gamma}_{b}]\!=\!\boldsymbol{\sigma}_{ab}$ such as they verify the condition $\{\boldsymbol{\gamma}_{i},\boldsymbol{\sigma}_{jk}\}\!=\!i\varepsilon_{ijkq}
\boldsymbol{\pi}\boldsymbol{\gamma}^{q}$ implicitly defining the matrix $\boldsymbol{\pi}$ and where the matrices $\boldsymbol{\sigma}_{ij}$ are the infinitesimal generators of the spinorial transformation, while the spinorial connection $\boldsymbol{\Omega}_{\rho}\!=\!
\frac{1}{2}\omega^{ij}_{\phantom{ij}\rho}\boldsymbol{\sigma}_{ij}$ defines spinorial covariant derivatives $\boldsymbol{D}_{\rho}$ and $\boldsymbol{\nabla}_{\rho}$ in the general and torsionless case, respectively, thus completing the list of conventions we wanted to recall for the sake of clarity.
With this kinematic background, we proceed by defining the most general least-order derivative Lagrangian
\begin{eqnarray}
\nonumber
&L\!=\!(\frac{k-1}{4k})Q_{\alpha\nu\sigma}Q^{\alpha\nu\sigma}\!+\!G-\\
&-\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\mu}\psi
\!-\!\boldsymbol{D}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)\!+\!m\overline{\psi}\psi
\label{actionleast}
\end{eqnarray}
where $k$ is the torsional constant while the gravitational constant has been normalized to unity, and $m$ is the mass of the matter field: in the most general circumstance, torsion enters not only implicitly within the curvature but also explicitly as a quadratic term, both instances having their own coupling constant. In the most general case, as the torsion-squared term is independent of the linear curvature term, the torsional coupling constant is independent of the gravitational Newton constant.
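For orientation, note that the historical single-constant case is recovered as a special instance of (\ref{actionleast}): setting $k\!=\!1$ makes the explicit torsion-squared term drop out, leaving
\begin{eqnarray}
\nonumber
&L\big|_{k=1}\!=\!G
\!-\!\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\mu}\psi
\!-\!\boldsymbol{D}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)\!+\!m\overline{\psi}\psi
\end{eqnarray}
in which the only constant left is the (here normalized) gravitational one; in what follows we keep $k$ arbitrary and work with the full Lagrangian (\ref{actionleast}).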
Variation of this Lagrangian with respect to all fields involved yields the corresponding field equations, starting from the completely antisymmetric torsion-spin coupling field equations that are given in the following form
\begin{eqnarray}
&Q^{\rho\mu\nu}\!=\!-k\frac{i}{4}
\overline{\psi}\{\boldsymbol{\gamma}^{\rho}\!,\!\boldsymbol{\sigma}^{\mu\nu}\}\psi
\label{torsion-spin}
\end{eqnarray}
which come together with the non-symmetric curvature-energy coupling field equations given by
\begin{eqnarray}
\nonumber
&\left(\frac{1-k}{2k}\right)(D_{\mu}Q^{\mu\rho\alpha}
\!-\!\frac{1}{2}Q^{\theta\sigma\rho}Q_{\theta\sigma}^{\phantom{\theta\sigma}\alpha}
\!+\!\frac{1}{4}Q^{\theta\sigma\pi}Q_{\theta\sigma\pi}g^{\rho\alpha})+\\
&+(G^{\rho\alpha}\!-\!\frac{1}{2}Gg^{\rho\alpha})\!=\!\frac{i}{4}(\overline{\psi}\boldsymbol{\gamma}^{\rho}\!\boldsymbol{D}^{\alpha}\psi
\!-\!\boldsymbol{D}^{\alpha}\overline{\psi}\!\boldsymbol{\gamma}^{\rho}\psi)
\label{curvature-energy}
\end{eqnarray}
complemented by the fermionic field equations
\begin{eqnarray}
&i\boldsymbol{\gamma}^{\mu}\!\boldsymbol{D}_{\mu}\psi\!-\!m\psi\!=\!0
\label{fermionic}
\end{eqnarray}
as the most general system of field equations given in terms of the torsional coupling constant and the mass of the matter field as the only unknown parameters.
As anticipated, the assumption of having torsion completely antisymmetric does not require a loss of generality since the spin is completely antisymmetric and the torsion-spin coupling is algebraic; this circumstance also allows us, after all torsionfull quantities have been decomposed in terms of the corresponding torsionless quantities plus torsional contributions, to employ such torsion-spin coupling equations in order to have torsion substituted in terms of the spin of the spinor matter fields, either within the Lagrangian or within the field equations.
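To make the decomposition explicit, for a completely antisymmetric torsion the contorsion is simply half of the torsion, so that (as a sketch, with signs and factors fixed by the conventions of \cite{Fabbri})
\begin{eqnarray}
\nonumber
&\Gamma^{\rho}_{\phantom{\rho}\mu\nu}\!=\!\Lambda^{\rho}_{\phantom{\rho}\mu\nu}
\!+\!\frac{1}{2}Q^{\rho}_{\phantom{\rho}\mu\nu}\\
&\boldsymbol{D}_{\mu}\psi\!=\!\boldsymbol{\nabla}_{\mu}\psi
\!+\!\frac{1}{4}Q^{ij}_{\phantom{ij}\mu}\boldsymbol{\sigma}_{ij}\psi
\end{eqnarray}
where $\Lambda^{\rho}_{\phantom{\rho}\mu\nu}$ denotes the Levi-Civita connection and $\boldsymbol{\nabla}_{\mu}$ the associated torsionless spinorial derivative.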
When this is done in the Lagrangian we get
\begin{eqnarray}
\nonumber
&L=R\!-\!\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi
\!-\!\boldsymbol{\nabla}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)-\\
&-\frac{3k}{32}\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi
\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\psi\!+\!m\overline{\psi}\psi
\label{actionleastdecomposed}
\end{eqnarray}
whose variation will yield the system of gravitational and material field equations already in the decomposed form.
So by varying this action with respect to the metric tensor and the Dirac field or equivalently by substituting torsion in terms of the spin of fermionic fields, we get the symmetric curvature-energy coupling field equations usually known with the name of Einstein field equations
\begin{eqnarray}
\nonumber
&R^{\rho\alpha}\!-\!\frac{1}{2}Rg^{\rho\alpha}
\!=\!\frac{i}{8}(\overline{\psi}\boldsymbol{\gamma}^{\rho}\boldsymbol{\nabla}^{\alpha}\psi
\!-\!\boldsymbol{\nabla}^{\alpha}\overline{\psi}\boldsymbol{\gamma}^{\rho}\psi+\\
\nonumber
&+\overline{\psi}\boldsymbol{\gamma}^{\alpha}\boldsymbol{\nabla}^{\rho}\psi
\!-\!\boldsymbol{\nabla}^{\rho}\overline{\psi}\boldsymbol{\gamma}^{\alpha}\psi)+\\
&+\frac{3k}{64}\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi
\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\psi g^{\alpha\rho}
\label{gravitational}
\end{eqnarray}
together with the Dirac field equations
\begin{eqnarray}
&i\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi
\!+\!\frac{3k}{16}\overline{\psi}\boldsymbol{\gamma}_{\rho}\boldsymbol{\pi}\psi
\boldsymbol{\gamma}^{\rho}\boldsymbol{\pi}\psi\!-\!m\psi\!=\!0
\label{fermionical}
\end{eqnarray}
as the most general system of field equations with torsion replaced by spin-spin contact fermionic interactions of the Nambu--Jona-Lasinio structure and in which the torsional coupling constant has the role of coupling constant giving the strength of these interactions.
By applying to the Dirac equation another Dirac operator and taking advantage of some Fierz rearrangement, it is possible to obtain a Klein-Gordon field equation
\begin{eqnarray}
\nonumber
&\boldsymbol{\nabla}^{2}\psi\!+\!\frac{3k}{8}
\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi i\boldsymbol{\nabla}_{\mu}\psi
\!+\!\frac{3k}{8}\boldsymbol{\nabla}_{\mu}(\overline{\psi}\boldsymbol{\gamma}_{\rho}\psi)
i\boldsymbol{\sigma}^{\mu\rho}\psi-\\
&-\frac{3k}{16}\left(\frac{3k}{16}\!+\!\frac{1}{8}\right) \overline{\psi}\boldsymbol{\gamma}_{\rho}\psi\overline{\psi}\boldsymbol{\gamma}^{\rho}\psi\psi
\!+\!\frac{1}{8}m\overline{\psi}\psi\psi\!+\!m^{2}\psi=0
\end{eqnarray}
in which we notice that even in the absence of torsion, encoded by taking $k$ to vanish, gravitationally-induced non-linear terms are present, rendering the dynamics of spinor fields non-trivial; on the other hand, if we keep torsion while neglecting gravity, then we may take $k$ to be much larger than unity, so that we have the following
\begin{eqnarray}
\nonumber
&\boldsymbol{\nabla}^{2}\psi\!+\!\frac{3k}{8}
\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi i\boldsymbol{\nabla}_{\mu}\psi
\!+\!\frac{3k}{8}\boldsymbol{\nabla}_{\mu}(\overline{\psi}\boldsymbol{\gamma}_{\rho}\psi)
i\boldsymbol{\sigma}^{\mu\rho}\psi-\\
&-\frac{9k^{2}}{256}\overline{\psi}\boldsymbol{\gamma}_{\rho}\psi
\overline{\psi}\boldsymbol{\gamma}^{\rho}\psi\psi
\!+\!\frac{1}{8}m\overline{\psi}\psi\psi\!+\!m^{2}\psi=0
\end{eqnarray}
and showing that the non-linearity given by torsion is much more relevant: in this weak-gravity limit we may take stationary configurations of energy $E$ subject to the low-speed regime $E^{2}\!-\!m^{2}\!\approx\!2m(E\!-\!m)$ and hence by writing everything in standard representation we have that the non-relativistic approximation is accomplished by the condition $\overline{\psi} \!\approx\!(\phi^{\dagger},0)$ in terms of which we get
\begin{eqnarray}
\nonumber
&\frac{1}{2m}\!\!\vec{\boldsymbol{\nabla}}\!\cdot\!\vec{\boldsymbol{\nabla}}\phi
\!+\!\frac{9k^{2}}{512m}|\phi^{\dagger}\phi|^{2}\phi-\\
&-\frac{1}{16}|\phi^{\dagger}\phi|\phi\!+\!(E\!-\!m)\phi\!=\!0
\end{eqnarray}
as Pauli-Schr\"{o}dinger field equations for a non-relativistic semi-spinor matter field. But on the other hand, we also have to notice that the absence of the Pauli matrices means that the Pauli-Schr\"{o}dinger field equations decouple into one Schr\"{o}dinger field equation for each of the two components of the semi-spinor field, which can then be taken as independent, and so we actually have
\begin{eqnarray}
\nonumber
&\frac{1}{2m}\!\!\vec{\boldsymbol{\nabla}}\!\cdot\!\vec{\boldsymbol{\nabla}}u
\!+\!\frac{9k^{2}}{512m}|u^{*}u|^{2}u-\\
&-\frac{1}{16}|u^{*}u|u\!+\!(E\!-\!m)u\!=\!0
\label{equation}
\end{eqnarray}
as a Schr\"{o}dinger field equation for a non-relativistic complex scalar field: the absence of Pauli terms in the non-linearities for the single Dirac field encodes the fact that there is a complete isotropy in the self-interaction of the single matter field. Such self-interactions cannot be detected in the type of experiments we are considering.
If in the Lagrangian beside the initial matter field we were to include a second matter field then we would get
\begin{eqnarray}
\nonumber
&L\!=\!(\frac{k-1}{4k})Q_{\alpha\nu\sigma}Q^{\alpha\nu\sigma}\!+\!G-\\
\nonumber
&-\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\mu}\psi
\!-\!\boldsymbol{D}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)-\\
&-\frac{i}{2}(\overline{\chi}\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\mu}\chi
\!-\!\boldsymbol{D}_{\mu}\overline{\chi}\boldsymbol{\gamma}^{\mu}\chi)
\!+\!m\overline{\psi}\psi\!+\!M\overline{\chi}\chi
\end{eqnarray}
so that torsion would now be given in terms of the total spin, accounting for both the initial fermion and the supplementary fermion; the effective Lagrangian is thus
\begin{eqnarray}
\nonumber
&L=R\!-\!\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi
\!-\!\boldsymbol{\nabla}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)-\\
\nonumber
&-\frac{i}{2}(\overline{\chi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\chi
\!-\!\boldsymbol{\nabla}_{\mu}\overline{\chi}\boldsymbol{\gamma}^{\mu}\chi)-\\
\nonumber
&-\frac{3k}{16}\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi
\overline{\chi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\chi
\!-\!\frac{3k}{32}\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi
\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\psi-\\
&-\frac{3k}{32}\overline{\chi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\chi
\overline{\chi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\chi
\!+\!m\overline{\psi}\psi\!+\!M\overline{\chi}\chi
\end{eqnarray}
perfectly symmetric in the two fermions: however, if the initial fermion is kept as dynamical while the second is taken as fixed, the effective Lagrangian is simply
\begin{eqnarray}
\nonumber
&L=R\!-\!\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi
\!-\!\boldsymbol{\nabla}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)-\\
&-\frac{3k}{16}\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi
\overline{\chi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\chi
\!-\!\frac{3k}{32}\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi
\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\psi
\!+\!m\overline{\psi}\psi
\end{eqnarray}
in which, as is straightforward to see, an asymmetry has appeared between the two fields. With this Lagrangian we describe the dynamics of the initial fermion in self-interaction and in interaction with the additional fermion, taken to represent a non-dynamical background, and because the self-interactions cannot be detected in the type of experiments we are considering, the reduced Lagrangian
\begin{eqnarray}
\nonumber
&L=R\!-\!\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi
\!-\!\boldsymbol{\nabla}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)-\\
&-\frac{3k}{16}\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi
\overline{\chi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\chi
\!+\!m\overline{\psi}\psi
\end{eqnarray}
will provide the same amount of information about the observables of the system. And this is the Lagrangian we will investigate in the following of the paper.
This Lagrangian is a generalization of the Lagrangian that was obtained in \cite{Fabbri}; in the following it will be compared to the results obtained in \cite{Lehnert:2013jsa}: the idea is to assess how, in a model described by such a Lagrangian, the completely antisymmetric part of torsion could escape the constraints imposed in non-relativistic regimes by in-matter experiments.
To do that, we consider that in \cite{Lehnert:2013jsa}, the authors start from a model-independent Lagrangian that nevertheless has to be applied to the case of polarized slow neutrons through a condensed state of liquid $^{4}\mathrm{He}$ taken as a background field distribution: first of all, as it has been discussed in the above reference, the geometrical properties of such a system are such that the torsional effects have to be isotropic, and the authors go ahead in explaining what restrictions on the parameters are allowed, so as to simplify the Lagrangian function; then, because all fermions involved are Dirac fields, their spin is completely antisymmetric and thus they can only generate a torsion that is completely antisymmetric, its dual indicated in terms of the fixed axial vector $A_{\mu}$ to follow the notation of the above paper, and this also amounts to additional simplifications in the Lagrangian; finally, as we intend to compare this Lagrangian to the one we have in the present model, the highest-order derivatives are to be present in the kinetic term alone, for further simplification in their Lagrangian once applied to the SKED theory: when all these requirements are implemented, all parameters can be taken to vanish except a single one, so that
\begin{eqnarray}
\nonumber
&L=-\frac{i}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi
\!-\!\boldsymbol{\nabla}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)+\\
&+\xi_{4}^{(4)}A_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\psi\!+\!m\overline{\psi}\psi
\end{eqnarray}
in terms of the single $\xi_{4}^{(4)}$ parameter is the Lagrangian we will have to employ to fit our model; this Lagrangian gives rise to the non-relativistic Hamiltonian given by
\begin{eqnarray}
&H\!\approx\!\frac{P^{2}}{2m}
\!+\!\frac{\vec{P}}{m}\!\cdot\!\frac{\vec{\boldsymbol{\sigma}}}{2}(-2\xi_{4}^{(4)}A^{0})
\label{function}
\end{eqnarray}
placing bounds on the $-2\xi_{4}^{(4)}A^{0}$ term. It is worth noticing that not the individual factors but only the entire term will be constrained by experimental measurements.
We recall that such a Hamiltonian is the one describing the dynamics of the slow neutrons in a background in which the torsion is generated inside the liquid $^{4}\mathrm{He}$.
We are now able to compare the results, knowing that what in our model was the initial fermion is identified with the neutron while the additional fermion is identified with the liquid $^{4}\mathrm{He}$, and every neutron has self-interactions and interactions with liquid $^{4}\mathrm{He}$ in the most general circumstances: however, the complete isotropy of the self-interaction for a single matter field means that the contribution given by $\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\pi}\psi \overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\psi$ is not going to give rise to any correction to the Hamiltonian in the form that is given in expression (\ref{function}), so that we may neglect the self-interaction for each neutron field, and thus
\begin{eqnarray}
&-\frac{3k}{16}\overline{\chi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\pi}\chi
\!=\!\xi_{4}^{(4)}A^{\mu}
\end{eqnarray}
as it is easy to check; in the standard representation writing the fermion according to $\overline{\chi}\!\approx\!(a^{\dagger},-b^{\dagger})$ we finally have
\begin{eqnarray}
&\frac{3}{8}k(a^{\dagger}b\!+\!b^{\dagger}a)\!=\!-2\xi_{4}^{(4)}A^{0}
\end{eqnarray}
and the bounds on $-2\xi_{4}^{(4)}A^{0}$ are on $k(a^{\dagger}b\!+\!b^{\dagger}a)$ instead, but in the standard representation $b$ is the small-valued semi-spinorial component, the one that vanishes in the non-relativistic limit. Such a limit is certainly applicable in this case since the liquid $^{4}\mathrm{He}$ is static, with the consequence that the mixed term $k(a^{\dagger}b
\!+\!b^{\dagger}a)$ vanishes because of the vanishing of the small component, therefore showing that this term is compatible, regardless of the actual value of the constant $k$, with any constraint placed by the experiment.
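For completeness, this last relation can be verified directly: writing $\chi$ in two-component blocks as $\chi\!=\!(a,b)$, so that $\overline{\chi}\!=\!\chi^{\dagger}\boldsymbol{\gamma}^{0}\!\approx\!(a^{\dagger},-b^{\dagger})$, and taking $\boldsymbol{\pi}$ to be the usual off-diagonal parity-odd matrix of the standard representation (a sketch, up to representation-dependent signs), the time component of the axial bilinear is
\begin{eqnarray}
\nonumber
&\overline{\chi}\boldsymbol{\gamma}^{0}\boldsymbol{\pi}\chi
\!=\!\chi^{\dagger}\boldsymbol{\gamma}^{0}\boldsymbol{\gamma}^{0}\boldsymbol{\pi}\chi
\!=\!\chi^{\dagger}\boldsymbol{\pi}\chi
\!=\!a^{\dagger}b\!+\!b^{\dagger}a
\end{eqnarray}
so that multiplying the $\mu\!=\!0$ component of the matching condition by $-2$ reproduces the combination bounded in \cite{Lehnert:2013jsa}, and this combination indeed vanishes whenever the small component $b$ does.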
As a consequence of this fact, we have that the SKED theory we have introduced above is the only gravitational theory which, even in its most general instance given when the torsional coupling constant is completely undetermined, is compatible with all in-matter experiments.
Finally we will discuss some of the consequences.
\section{Effective Interactions}
Up to now we have discussed that the theory in \cite{Fabbri} is, among all the least-order derivative dynamical theories, the most general one coupling the torsional completion of gravity to spinorial matter fields: it gives Dirac matter field equations with non-linear potentials in which the torsional coupling constant is not determined by any empirical results; in particular, its non-relativistic limit results in Schr\"{o}dinger field equations that contain no Pauli contribution. This situation holds especially for polarized slow neutrons; experiments such as those involving polarized slow neutrons in interaction with a condensed state of liquid helium, which is static, will have no Pauli term as a correction to the Hamiltonian of the system, and therefore the system is not constrained by any measurement. The SKED theory remains the only gravitational theory compatible with observations.
To probe torsionally-induced non-linear interactions one is compelled to study in-matter experiments performed in relativistic regimes, involving the high-energy scattering of many particles: such high-energy scattering experiments can probe models only up to a few TeV for the time being. Thus the spectrum ranging from the present scales to the Planck scale is still largely unconstrained.
Nevertheless, this opens an interesting question about torsion, but before dealing with that, we would like to spend some words in order to clarify a misconception that is unfortunately quite widespread: the torsional completion of gravity is achieved by not neglecting torsion beside curvature, as the two fundamental objects describing the character of the spacetime; however, because torsion is a tensor on its own, the action should not only have a curvature tensor implicitly containing torsion but torsion should also be explicitly present in terms of squared contributions, thus accounting for an independent constant that is different from the gravitational constant in general circumstances. Because Einstein gravity can be obtained variationally from the Lagrangian that is given by the Ricci torsionless curvature scalar $R$ people initially obtained the torsional completion of Einstein gravity from the variation of the Lagrangian that is given by the Ricci torsionfull curvature scalar $G$, but then an action containing only $G$ has only one term, and therefore it cannot have more than one constant, which must be nothing else but the Newton constant in order to recover Newtonian dynamics in the weak-gravitational low-speed static configurations: overlooking the fact that more general Lagrangians were possible has laid the basis for the misconception that the torsional constant had to be the Newton constant, and such a misconception was eventually cemented along the decades. Hence we would like to take the opportunity here to clearly stress the fact that the Lagrangian given by $G$ is certainly the most straightforward but nevertheless not the most general Lagrangian, which is given by the Ricci torsionfull curvature scalar accompanied by quadratic torsion terms, therefore given in terms of two different constants, the gravitational one being the Newton constant but the torsional one being completely undetermined. As a consequence of the fact that the torsional constant might be much larger than the Newton constant, we have that the torsionally-induced non-linear terms within the Dirac matter field equations might be relevant much before the Planck scale.
As a matter of fact, it may happen that the torsional constant is not much larger than the Newton constant, and the torsionally-induced non-linear terms of the Dirac matter field equations are relevant only at the Planck scale after all, but this is not a necessity; now back to the problem of the boundaries on torsion, we have just recalled that, at the other end of the allowed spectrum, the torsional constant cannot be larger than the Fermi constant, or else the torsionally-induced non-linear terms of the Dirac matter field equations would have been relevant before the Higgs scale, but we have never detected them at those distances: this places the torsional constant between the Newton and Fermi constants, so not much of a constraint. The torsional constant must be smaller than the Fermi constant, but because it does not need to be as small as the Newton constant, it might happen that the torsional constant is just a little smaller than the Fermi constant: if the torsional constant were just a little smaller than the Fermi constant, then what would the consequences be? Would this be of any help in addressing open problems in physics or in constituting evidence suggesting the appearance of new physics, right beyond what can be probed at the moment?
For instance, in field theory, computing some quantities may lead to divergences unless a cut-off is introduced by hand, and even so it may well happen that a reasonable cut-off may still give exceedingly large results compared to observations; the whole idea of placing a cut-off beyond which computations cannot be done is interpretable by thinking that there is a limit beyond which new effects change the physics in such a way that the same computations done in terms of this new physics would give finite results: a theory with a torsional coupling constant that happens to be just a little smaller than the Fermi constant, so that the torsionally-induced non-linear interactions happen to become relevant a little beyond these scales, does precisely this. If we interpret the torsional coupling constant as the effective limit encoded by the cut-off of the theory and the torsionally-induced non-linear terms as new physics, all computations in the standard context would happen to work properly up to the scale at which there is the cut-off because the torsional effects are negligible, and beyond such scales calculations would no longer be reliable because torsional effects would change the effective phenomenology; if the torsional coupling constant were to be a little smaller than the Fermi constant the cut-off would be just beyond the present scales, and problems related to divergences would not necessarily appear beyond this boundary.
Thus, if the torsional coupling constant happened to be tuned a little beyond the Fermi constant, it would mean that all the torsionally-induced non-linear interactions in high-energy scattering would become manifest soon after the scales we are probing in today's accelerators, with the interesting consequence that such non-linearities might soon let new physics arise; this might be of some help in addressing problems that can be solved only when new physics is invoked. Again, it may well be that after all the torsional coupling constant will be measured to be much smaller, no torsionally-induced non-linear term will be relevant and no new physics will be possible along this avenue, but for the moment this is a viable possibility.
Even if we cannot yet be sure that torsional effects will be relevant a little beyond the Fermi scale, nevertheless a situation in which this could happen is better than the situation in which torsional effects were not thought to be possible anywhere before the Planck scale.
More information about such effects may only come from high-energy physics experiments.
\section*{Conclusion}
In this paper, we considered the torsional completion of gravitation in a spacetime filled with Dirac fields, specifying that we had taken least-order derivative dynamics, which we called SKED theory, and we discussed SKED models in two situations: one in which there was a single matter field, for which we have shown that the non-linear potentials were isotropic; and another in which a matter field was sent to probe a non-dynamical static matter field distribution, for which we have shown that the non-linear potentials were vanishing. We have discussed that when such models are used to describe neutrons in interaction with static liquid $^{4}\mathrm{He}$ as in recent in-matter experiments, the non-linear potentials account for either a self-interaction that cannot be detected by such experiments or for mutual interactions that nevertheless are equal to zero, and so all these torsionally-induced non-linear potentials are compatible with all limits that are set by the type of in-matter experiments discussed in the recent literature; furthermore, we have remarked that these results are true regardless of the value of the torsional coupling constant. Therefore torsion in gravity for least-spin spinor fields in the least-order derivative action in its most general case is at the same time the simplest and yet the most general theory that is still compatible with all experimental constraints we know at the moment.
In the second part of the paper, we have discussed that the only way we may have to detect the torsional effects is by studying anisotropies in relativistic scattering, commenting that this type of experiment may take place at the LHC, although we have specified that beyond the Fermi scale there is no constraint that has been placed yet; then we went on to discuss that in a situation in which the torsional coupling constant may be anywhere between the Fermi and the Planck scales, such a constant might happen to be just a little smaller than the Fermi constant, and we have stressed that interesting consequences might follow. As we have thoroughly specified in the final part of the paper, all this does not mean that having torsionally-induced non-linear interactions relevant right beyond the Fermi scale is what will actually happen, but at least the possibility of this occurrence may be viewed as an opportunity to consider new physics right beyond what we observe today.
Then only accelerators may tell.
\section{Introduction}
Intuition and evidence suggest that many individuals are time-inconsistent; at any particular point in time the (near) present gets an additional weight in intertemporal tradeoffs \citep[e.g.][]{strotz,frederickloewenstein2,augenblickniederle,augenblickrabin}. Especially when individuals fail to fully anticipate their predictable preference changes, such present-focused individuals tend to procrastinate \citep{akerlofobedience,odonoghuerabindoing,odonoghuerabinchoice}: they will often excessively delay the completion of tedious tasks such as filing taxes or paying parking tickets. And when facing a gratifying task---such as taking a day off---present-focused individuals often precrastinate. To model the resulting intrapersonal conflict of preference changes in a simple and tractable way, \cite{laibson} adapted intergenerational discounting models \citep{phelpspollak} to individual decision-making. His quasi-hyperbolic discounting model captures the present-focus of individuals by introducing an additional present-bias parameter that discounts all future utility into \cite{samuelsondu}'s time-separable exponential-discounting model. \cite{odonoghuerabindoing,odonoghuerabinchoice} extend this framework by introducing (partial) naivete and illustrating such individuals' tendency to delay unpleasant tasks. Since excessive procrastination is a robust prediction of (naive) hyperbolic discounting models, it seems natural to use task-completion data to identify time-inconsistent preferences from the pattern of completion times. In line with this idea, previous research classifies individuals as time-inconsistent if they complete tasks at or close to the deadline \citep{brownprevitero,frakeswasserman} or estimates the degree of time-inconsistency from completion times under parametric assumptions \citep{martinezmeier}.\footnote{\cite{brownprevitero} classify individuals that select their health care plan close to the deadline as procrastinators and look for correlated behavior in other financial domains. \cite{frakeswasserman} investigate the behavior of patent officers who have to complete a given quota of applications, supposing that the costs of working on a patent are deterministic and identical across days. In their model, for conventional discount rates the empirically observed bunching close to the deadline is inconsistent with exponential discounting. While earlier papers do not address the concern of unobservable and random opportunity cost, \cite{martinezmeier} allow for random opportunity costs and use a parametric approach to identify time preferences.}
In this paper, we ask whether time preferences can be inferred by an outside observer---referred to as the analyst---when \textsl{only} task completion is observed \textsl{absent} parametric assumptions on the (unobservable) cost and benefit of task completion. A key difficulty in doing so is to separate naivete or time-preference-based explanations of delay from those due to the option value of waiting \citep{wald,weisbrod,dixitpindyck}: whenever the cost of doing a certain task is stochastic, a time-consistent individual may wait in the hope of getting a lower cost draw tomorrow.\footnote{Throughout, we abstract from another reason that tasks may not be completed: forgetting. Conceptually, one can think of the agent in our analysis as getting a non-intrusive reminder at the beginning of every period. This is not to say that limited memory and the strategic response to it are unimportant in determining task completion behavior in the field. See, for example, \cite{heffetzodonoghue} for how reminders determine when parking fines are paid, \cite{altmannetraxler} for how deadlines and reminders determine the probability of making a check-up appointment at the dentist, and \cite{ericson2} for how time-inconsistency and limited memory interact.}
Section \ref{sec:setup} introduces our task-completion model. We consider an analyst who, from observing task completion times of a partially naive quasi-hyperbolic discounter, tries to learn about some or all of the following parameters: the long-run discount factor $\delta$, the present-bias parameter $\beta$, or the degree of sophistication $\hat{\beta}$. To facilitate learning by the analyst, we assume that the agent's task-completion payoffs are drawn each period from the same underlying payoff distribution. Absent any such a priori restriction, it is straightforward to rationalize any observed stopping behavior independently of the agent's taste for immediate gratification and degree of sophistication, leaving no hope for identification thereof.\footnote{For example, suppose in every period the cost of doing the task is either one or zero, allowing for a time-varying probability that the cost is zero. Simply setting the probability that the cost is zero in each period equal to that period's observed task completion probability rationalizes the data for any time-separable utility function.} Furthermore, to make identification easier, we suppose that the analyst can observe the individual's exact stopping probability in each period. Intuitively, one may think of the analyst as having access to an ideal data set with (infinitely) many observations of either the same individual in identical situations or a homogeneous group of individuals. Again, this assumption strongly favors the analyst's ability to learn about underlying parameters. Finally, we impose that individuals can be described as (partially) naive quasi-hyperbolic discounters. We are agnostic as to the nature of the task, so our analysis applies when task-completion leads to immediate benefits, immediate costs, or both.
In Section \ref{sec:example}, we introduce two motivating examples. The first highlights that, even when the parametric form of the underlying unobservable payoff distribution is known, bunching at the deadline is insufficient to distinguish a time-consistent from a time-inconsistent agent. In the example, the costs of completing the task are drawn from a log-normal distribution and in every period the stopping behavior of a time-consistent agent looks almost identical to that of an agent with a present-bias parameter $\beta = 0.7$, whose costs are drawn from a different log-normal distribution. The second example illustrates how the estimated present-bias can depend crucially on common parametric assumptions about the unobservable payoff distribution---even when the analyst knows (or guesses correctly) the long-run discount factor, as well as the mean and variance of the underlying stationary payoff distribution. While we suppose that in reality payoffs are drawn from a uniform distribution and the agent is time-consistent ($\beta=\hat{\beta}=1$), when the analyst supposes costs are drawn either from a normal, log-normal, extreme value, or logistic distribution, her squared-distance-minimizing or likelihood-maximizing estimate of $\beta$ varies between $0.561$ and $0.819$, with the exact value depending on the parametric family (and the degree of sophistication) the analyst imposes.
Furthermore, the squared error associated with some of these incorrect estimates is below $0.232$\%---suggesting that with finite noisy data it is difficult for the analyst to realize when she picks an incorrect functional form.
Motivated by the importance of the parametric assumptions in the example, we turn to the main focus of the paper: what lessons about time-inconsistent preferences and naivete thereof can be learned non-parametrically?
As a useful preliminary step, Section \ref{sec:recursive} establishes that the agent's perceived continuation value is characterized by a simple recursive equation. Section \ref{sec:task_completition} establishes that for any quasi-hyperbolic discounter---independently of whether she is sophisticated or (partially) naive and of her degree of impatience---the subjective continuation value decreases the closer the agent gets to the deadline. To see the intuition behind the theorem, consider first the case in which the task always generates a net benefit. Then from the perspective of Self 1, all future selves are too impatient, and hence tend to perform the task too early. By extending the deadline, the formerly last period's self can now decide to perform the task later. As from any earlier self's perspective she is too eager to complete the task, the direct effect of additional delay on any earlier self is positive. Now consider the former penultimate self; her perceived continuation value of waiting increases because she strictly prefers future selves to wait whenever they choose to do so. This, in turn, induces her to act more patiently, benefiting all earlier selves, and so forth. Hence, in the case of net benefits, a quasi-hyperbolic discounter does not want to impose an earlier deadline.
Consider next the case in which completing the task is always costly. When comparing a $(T-1)$-period to a $T$-period deadline, Self 1 realizes that if she does not engage in the task in the $T$-period problem, Self 2 will face a $(T-1)$-period problem. That subgame is identical to the one she faces in the $(T-1)$-period problem, and future selves who are $s$ periods away from the deadline will therefore behave identically in the two problems. Hence for $s \in \{1,\cdots, T-1\}$, the task completion probability $s$ periods before the deadline is identical, and due to discounting of future costs, Self 1 is strictly better off selecting the $T$-period problem and not doing the task in the first period. The formal proof extends these intuitions to the case in which the support of the net benefit distribution can contain positive and negative payoffs.
Because the agent in our model completes the task when the current benefit is greater than her subjective continuation value, Theorem \ref{prop:monotone-values} implies that a quasi-hyperbolic discounter becomes more and more likely to complete the task the closer she is to the deadline. This, therefore, provides another simple testable prediction, which also implies that the agent never wants to impose a shorter deadline.\footnote{Hence, despite her tendency to procrastinate, when the payoffs are independently drawn from a stationary distribution, a quasi-hyperbolic discounter's willingness to pay for an earlier deadline is always non-positive. This is noteworthy as self-imposed deadlines by students have been used to identify sophisticated procrastinators \citep[e.g.][]{arielywertenbroch,bisinhyndman}; our result suggests that these students either do not have quasi-hyperbolic preferences or that they must foresee a non-stationary environment,
which induces them to impose an earlier deadline. A self-imposed-deadline-based classification, hence, is conservative in identifying agents who are aware of their time-inconsistent preferences.} Through a simple counterexample, however, we also highlight that this result relies on payoffs each period being drawn from the same underlying distribution.\footnote{Furthermore, in Section \ref{sec:discussion} we note that the prediction need not hold for a heterogeneous population of time-consistent individuals each of whom faces a stationary payoff distribution.}
Section \ref{sec:unidentified} establishes our main result: if the agent is either sophisticated ($\hat{\beta} = \beta$) or fully naive ($\hat{\beta} =1$), for {\it any given} long-run discount factor $\delta$ and present-bias parameter $\beta$, any given penalty of not completing the task, and \textsl{any} weakly increasing profile of task completion, there exists a stationary payoff distribution that rationalizes the agent's behavior (Theorems \ref{thm:non-identifiability-sophisticate} and \ref{thm:non-identifiability}, respectively). This implies that for \textsl{any} data set the analyst may observe, absent parametric assumptions it is impossible for her to learn {\it anything} about the agent's degree of time-inconsistency or level of sophistication. Importantly, this absence of even partial identification continues to hold even if the analyst imposes a priori restrictions on permissible long-run discount factors.
A very rough intuition for this fact is as follows: whether a self prefers to do a task today or tomorrow depends on her time preferences and on the perceived option value of waiting. The option value of waiting, in turn, depends on the payoff distribution. Through changing the unobservable payoff distribution, we can hence undo a change in the present bias or long-run discount factor of the agent.
Technically, however, a local change in the payoff distribution changes continuation values in every period in a highly non-linear way, so to establish that we can construct an appropriate payoff distribution, we need a non-local argument. This is where we use the assumption that the agent is either sophisticated or fully naive. The fact that a sophisticated agent makes no forecast error enables us to rewrite the recursive equations determining the perceived continuation values in a simple manner. Based on this rewrite, we reduce the search for an appropriate distribution to solving a system of linear equations. This proof method, however, cannot be used if the agent is partially naive, as the corresponding system becomes non-linear.
For a fully naive agent the problem becomes tractable for a different reason. Because a fully naive agent believes herself to be time-consistent, we can establish that a first-order stochastic increase in the stationary payoff distribution increases the agent's subjective continuation value in every period (Lemma \ref{lem:aux-properties-naive}). In addition, we establish that we can map subjective continuation values into a payoff distribution that gives rise to the desired completion times in such a way that greater subjective continuation values lead to a first-order stochastic increase in the stationary payoff distribution. The combination of these two steps leads to a monotone operator on subjective continuation values to which we can apply Tarski's Theorem, and thereby establish the existence of a payoff distribution that gives rise to the data's stopping probabilities. We also, however, provide a simple example in which a first-order stochastic dominance increase in the stationary payoff distribution makes a sophisticated quasi-hyperbolic discounter worse off. In the example, the agent prefers to pay a fixed utility-tax immediately upon completing the task. This tax reduces her temptation to stop even after a low payoff realization, and the more virtuous behavior this induces in future selves more than compensates for the direct payoff loss due to the tax. The example highlights why our proof technique does not cover the more general case of a partially naive agent.
In our proofs of Theorems \ref{thm:non-identifiability-sophisticate} and \ref{thm:non-identifiability}, we freely construct a stationary net-benefit distribution. One may hope to identify present-bias through economically meaningful restrictions on this distribution. Arguably, the most natural assumptions are those regarding the moments of the net-benefit distribution; for example, an analyst may have an idea regarding the possible expected net benefit of doing the task---that is, regarding the mean of $F$---or may be willing to impose that net benefits do not vary too much between periods (restricting the variance of $F$). Our example in Section \ref{sec:example}, however, already highlights that even fixing these moments, common parametric assumptions can lead to widely varying estimates of the agent's time preferences. To expand on this point, in Section \ref{subsec:moments} we establish that as long as the penalty is unobservable or the task is mandatory, we can find a net benefit distribution with \textsl{any} given mean and non-zero variance that rationalizes the observed stopping behavior for a time-consistent agent with $\delta =1$. Any identification of the present-bias parameter $\beta$ in this case, therefore, must follow from parametric restrictions on higher-order moments of the distribution, for which we see no convincing economic motivation in most contexts.
Section \ref{sec:rich data} asks whether non-parametric identification is feasible with richer data in which the analyst not only observes the stopping probabilities but, in addition, observes the agent's willingness to pay for continuing with the stopping problem in each period. In the case of tax-filing, for example, this amounts to eliciting the willingness to pay for having someone else file one's taxes immediately with zero hassle.\footnote{As we explain carefully in Section \ref{sec:rich data}, our procedure does not explicitly or implicitly rely on the agent comparing monetary rewards at different points in time, so it is robust to standard critiques of eliciting time-preference via monetary rewards \citep{augenblickniederle,ericsonlaibsonreview,ramsey}.} For the case of a sophisticated agent whose contemporaneous utility function is quasi-linear in money, Theorem \ref{thm:non-para_identification} provides an analytical answer in closed form. Indeed, to check whether or not the data is consistent with a given pair of parameters $\beta,\delta$, the analyst only needs to verify a simple set of inequalities. The key analytical insight is contained in Lemma \ref{lem:mass_points_sufficient}, which establishes that it suffices to consider distributions that have $T+1$ mass points. Intuitively, the option value of waiting is determined by the probability with which the agent stops at a given future point in time and the expected payoff conditional on doing so. Hence, moving the probability mass between any two continuation values to the expected payoff conditional on falling between these two values leaves the agent's continuation values and stopping probabilities unaltered. Therefore, the analyst can restrict attention to such relatively simple distributions.
Economically, observing the continuation values allows the analyst to distinguish between a taste for immediate gratification and option-value-of-waiting-based delays because a high option value requires the unobservable payoffs to differ significantly. As a consequence, as the deadline approaches and the agent foresees fewer future draws, the option value must decrease quickly. In contrast, a present-biased agent's continuation value decreases at a slower rate. We also argue that at the cost of relying on numerical techniques commonly used in applied work, our set-identification result can be extended straightforwardly to cover partial naivete and non-linear utility in money.
Applying our Theorem \ref{thm:non-para_identification} to the example introduced in Section \ref{sec:example}, however, illustrates that the analyst may need to observe a large number of continuation values to be able to tightly identify the present-bias parameter. In the example, there is no meaningful identification with $5$ periods of data, but $20$ periods are enough to tightly identify $\beta$ when $\delta=1$ is known to the analyst. Given that we made a number of assumptions facilitating identification---such as that the exact stopping probabilities and continuation values are observable to the analyst---we think that the overall message of our analysis suggests that a substantial amount of additional data is needed to empirically identify a taste for immediate gratification or the degree of sophistication without relying on parametric assumptions. In Section \ref{sec:discussion}, we point out that much richer stopping patterns can be explained if the analyst observes a heterogeneous population, and we conclude by discussing some broader implications of our analysis.
\section{Setup}
\label{sec:setup}
Let time $t=1,2,\cdots, T+1$ be discrete. We consider an agent with quasi-hyperbolic preferences who can choose when and whether to complete a single given task before some deadline $T$. More precisely, we suppose that the agent's utility is time-separable, and denote the instantaneous utility the agent receives in period $t$ by $u_t$; let
\begin{equation}
\label{eq:true_pref}
U^t = u_t + \beta \, \sum_{s=t+1}^{T+1} \delta ^{s-t} \, u_s,
\end{equation}
denote Self $t$'s utility over the sequence $(u_t,\cdots, u_{T+1})$. Following \cite{odonoghuerabindoing}, we allow the agent to have incorrect beliefs regarding future selves' behavior. The agent believes that all future selves $r>t$ maximize
\begin{equation}
\label{eq:sub_pref}
\hat{U}^r = u_r + \hat{\beta} \, \sum_{s=r+1}^{T+1} \delta ^{s-r} \, u_s.
\end{equation}
We allow for any vector of preference and belief parameters $(\delta,\beta,\hat{\beta}) \in (0,1]^3$. In case $\hat{\beta} =\beta =1$, the agent has time-consistent preferences with an exponential discount factor $\delta$. In case $\beta < 1$, she has a taste for immediate gratification. We say she is sophisticated---i.e., perfectly predicts her future behavior---when $\hat{\beta} =\beta$, she is fully naive---i.e., believes that her future selves behave according to her current preference---if $\hat{\beta} =1$, and otherwise say that she is partially naive. Our setup covers the case in which the agent overestimates her own future taste for immediate gratification ($\hat{\beta}<\beta$) as well as the case in which she underestimates it ($\hat{\beta}>\beta$).
The agent can complete the task once during the periods $t=1,\cdots,T$, so that $T$ is the deadline before which the task needs to be completed. If the agent does not undertake the task in a given period $t=1,\cdots,T$, we normalize her instantaneous utility $u_t$ to zero. If she completes the task, she gets an instantaneous utility of zero in period $T+1$, while if she did not complete the task by the end of period $T$, the agent gets a (utility) penalty of $\underline{y}/ (\beta \delta) \in \mathbb{R}_- \cup \{-\infty\}$ in period $T+1$.\footnote{In other words, $\underline{y}$ is Self $T$'s continuation value when not completing the task. Expressing the penalty in this way simplifies the exposition below.} Setting $\underline{y}=-\infty$, this encompasses the case where the task is \textsl{mandatory} so that the agent is forced to complete the task by the end of period $T$; and setting $\underline{y}=0$, this encompasses the case in which the task is \textsl{optional} so the agent only completes the task if her active self decides to do so. Finally, we suppose that in every period $t$ the instantaneous utility of completing the task is drawn independently from a given payoff distribution $F$, which is known to the agent.
We look for \emph{perception-perfect equilibria} \citep{odonoghuerabindoing,odonoghuerabinchoice} in which each self $t$ chooses an optimal strategy given its prediction of future selves' behavior, and a self $t$'s prediction of future selves' behavior is consistent with how a future self with preference parameter $\hat{\beta}$ would optimally behave. More formally, let $Y^t=(y_1, \cdots,y_t)$ be the history of payoff realizations up to time $t$. A pure strategy for Self $t$ is a mapping $\sigma_t(Y^{t-1},y_t) \rightarrow \{0,1\}$, with the interpretation that $1$ means Self $t$ completes the task.
A perception-perfect equilibrium is a pair of strategies $(\sigma_1,\cdots,\sigma_T)$
and $(\hat{\sigma}_2, \cdots,\hat{\sigma}_T)$ such that for all
$t \in \{1,\cdots,T\}$, $\sigma_t$ maximizes $U^t$ under the assumption that selves $r>t$ use strategy
$\hat{\sigma_r}$, and for all $t \in \{2,\cdots,T\}$, the strategy $\hat{\sigma}_t$ maximizes $\hat{U}^t$ under the assumption that selves $r>t$ use strategy $\hat{\sigma}_r$. In addition, we restrict attention to perception-perfect equilibria in which all selves that are indifferent between completing the task and waiting choose to wait.\footnote{Without a given tie-breaking assumption, we could rationalize any behavior by simply assuming that the payoff of completing the task is $0$ with certainty in all periods. In that case, any stopping probability in any period is trivially optimal, independently of the agent's time preferences. All our results below extend to the case in which the agent completes the task with some given positive probability when indifferent. Furthermore, in case the agent's benefit distribution admits a density, the tie-breaking assumption is obviously immaterial. And even otherwise, the case in which there is a mass-point at a payoff at which the agent is indifferent between completing the task and waiting is knife-edge.}
\section{Examples on the Influence of Parametric Assumptions}
\label{sec:example}
\begin{example}\label{ex:bar_plot}
To illustrate the difficulty of identifying time-inconsistency from an agent's stopping behavior, consider the following stylized example. A sophisticated agent receives a parking fine, which has to be paid within ten days of receiving it. In case she does not pay the fine, she incurs a known cost of $\$5$ in addition to the fine. Furthermore, the agent's long-run (daily) discount factor is (well approximated by) $\delta =1$.
\renewcommand{\baselinestretch}{1}
\begin{figure}
\begin{center}
\includegraphics[width=3.8in]{bar-plot-stopping-probabilities.pdf}
\end{center}
\caption{Observed Task Completion Times. The above graphs illustrates the observed stopping times. In both cases $\delta =1$, and the penalty for not doing the task is $-5$. The red bar plot shows the distribution of task completion times of a time-consistent agent whose cost of completing the task are drawn from a log-normal distribution, whose underlying normal distribution has mean $\mu=1$ and variance $ \eta=1$. The blue bar plot that of a sophisticated time-inconsistent agent with $\beta = 0.7$ whose cost are drawn from a log-normal distribution with parameters $\mu=0, \eta=2.3$.}
\label{fig:stopping_example}
\end{figure}
\renewcommand{\baselinestretch}{1.5}
Figure \ref{fig:stopping_example} compares the stopping behavior of a time-consistent agent who draws the cost of completing the task from a log-normal distribution whose underlying normal distribution has mean $\mu =1$ and variance $\eta = 1$ (red bar plot) to that of a sophisticated time-inconsistent one with a present-bias parameter $\beta =0.7$ who draws the cost from a log-normal distribution with parameters $\mu=0, \eta=2.3$ (blue bar plot).
\end{example}
An obvious first lesson from the example is that bunching at the deadline is no reliable guide to identifying time-inconsistency: both agents' probabilities of completing the task in the final period are just above 50\%. Indeed, both agents' stopping behavior is remarkably similar throughout, and the observed stopping probabilities differ by less than $1\%$ in any period, suggesting that even an analyst who wants to test only between these two possible types faces a difficult problem in practice.\footnote{Independently of our work, \citeauthor*{heffetzodonoghue} observe that substantially different values of $\beta$ can explain the parking-ticket payment behavior in New York City, which they analyze in \cite{heffetzodonoghue}. They illustrate this supposing that the cost for paying the parking ticket is drawn from the small parametric family of distributions that has a mass point at zero and admits a constant density on an interval above zero. Their real-world data nicely demonstrates the practical importance of the identification challenge we illustrate in Example \ref{ex:bar_plot} with synthetic data. We are very grateful to these authors for sharing their example with us during private communication.}
In the above illustrative example, the analyst knows or correctly guesses the parametric class of distributions (log-normal) from which the payoffs are drawn. Even then, the example suggests, it is hard to identify the time-preference parameters correctly without knowing the distribution's exact parameters. In reality, however, payoffs are drawn from an unobservable payoff distribution, and for typical field data---such as parking tickets---an analyst does \textsl{not} know the parametric form of the payoff distribution. The following example highlights how crucial common functional form assumptions routinely imposed in applied papers can be in determining the analyst's findings. For this example, we suppose that the analyst has precise prior knowledge about the mean and the variance of the unobservable payoff distribution but is unsure as to the exact parametric family from which these payoffs are drawn. Indeed, it strikes us as implausible that an analyst would have prior knowledge beyond some (typically vague) ideas about the first two moments of this distribution.
\begin{example} \label{ex:parametric form} We suppose that the agent has $5$ periods to complete the task and that the agent's value of completing the task is drawn in each period from a uniform distribution over $[-1,1]$; in reality the agent is time-consistent with $\beta=\delta=1$.\footnote{Think of a parent who promised their kid to see a theatre play that shows for seven more days. The parent is self-employed and needs to complete tasks at work as they come in. When not being very busy, the parent enjoys the joint activity. When very busy, however, they are distracted during the play and need to stay up late afterwards completing their work tasks. Not going to the play after having promised to do so, however, is not a possibility.} The corresponding stopping probabilities are $0.25827, 0.304687, 0.375, 1/2, 1$, which we suppose the analyst can observe exactly. In addition, we assume the analyst knows the true mean ($0$) and standard deviation ($0.577$) of the stationary payoff distribution $F$ but not its exact functional form. Furthermore, suppose the analyst correctly imposes that $\delta =1$ when analyzing the data. Let the analyst consider four standard parametric families of distributions: normal, log-normal, extreme value, and logistic. For each of these families, the analyst selects the parameter $\beta$ that best fits---in the sense of squared distance or log-likelihood---the observed stopping probabilities allowing the agent to be either naive or sophisticated. Table \ref{tab:table1} reports the parameter estimates for $\beta$ and the squared distance/log-likelihood for the different parameterizations of the error distribution.\footnote{The estimates are computed using grid search with a distance of $0.0005$ between grid points.}
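For concreteness, the stopping probabilities quoted above can be reproduced with a short backward recursion. The following sketch is our own illustrative code (not part of the paper's estimation procedure); it uses the fact, derived in Section \ref{subsec:moments}, that for a fully patient time-consistent agent the continuation value satisfies $v_t = \mathbb{E}[\max\{y_{t+1},v_{t+1}\}]$, and it treats the task as mandatory.
\begin{verbatim}
# Time-consistent agent of this example: mandatory task, T = 5,
# payoffs drawn i.i.d. from Uniform[-1, 1], beta = delta = 1.
T = 5

def e_max_uniform(c):
    """E[max(y, c)] for y ~ Uniform[-1, 1]."""
    if c <= -1.0:
        return 0.0                      # equals E[y]
    if c >= 1.0:
        return c
    return c * (c + 1.0) / 2.0 + (1.0 - c * c) / 4.0

v = [None] * (T + 1)
v[T] = float("-inf")                    # mandatory: waiting in period T is not an option
for t in range(T - 1, 0, -1):
    v[t] = e_max_uniform(v[t + 1])      # v_t = E[max(y, v_{t+1})]

p = [1.0 if v[t] == float("-inf") else (1.0 - v[t]) / 2.0 for t in range(1, T + 1)]
print([round(x, 6) for x in p])         # 0.25827, 0.304687..., 0.375, 0.5, 1 (up to rounding)
\end{verbatim}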
\begin{table}
\begin{center}
\begin{tabular}{@{}lllcll@{}}
\toprule
\multirow{2}{*}{\textsl{Parametric Family}} & \multicolumn{2}{c}{\textsl{Sq. Distance Minimization}} && \multicolumn{2}{c}{\textsl{Likelihood Maximization}} \\
\cmidrule{2-3} \cmidrule{5-6}
&$\beta$ & Distance && $\beta$ & Log-Likelihood \\
\hline
\text{Normal Sophisticate} & 0.819 & 0.0026777 && 0.818 & 1.59188 \\
\text{Normal Naive} & 0.817 & 0.00231803 && 0.816 & 1.59187 \\
\text{Extreme Value Sophisticate} & 0.57 & 0.0402888 && 0.5705 & 1.59638 \\
\text{Extreme Value Naive} & 0.561 & 0.0396802 && 0.562 & 1.59627 \\
\text{Logistic Sophisticate} & 0.7605 & 0.00331235 && 0.7595 & 1.59189 \\
\text{Logistic Naive} & 0.7565 & 0.00267175 && 0.7555 & 1.59188 \\
\bottomrule
\end{tabular}
\caption{Parameter estimates of $\beta$ and the associated squared distance and log-likelihood.}\label{tab:table1}
\end{center}
\end{table}
The analyst's estimates of $\beta$ range between $0.561$ and $0.819$ even in this idealized situation in which she has infinite data, actually knows the mean and standard deviation of $F$, and knows the long-run discount factor $\delta$. And if the analyst engaged in model testing, selecting the model on the basis of minimizing squared distance or maximizing log-likelihood, she would conclude that the agent is a naive time-inconsistent agent with $\beta =0.817$ or $0.816$, respectively, while in truth the agent is time-consistent and $\beta=1$. Furthermore, for the normal distribution the squared difference in stopping probabilities in the sophisticated and naive cases is remarkably small (less than $0.232\%$), so (in a finite data set analogue) nothing would indicate to the analyst that these are bad distributional choices to model the unobservable shocks.\footnote{If the analyst does not know the mean and standard deviation of the shock distribution and thus needs to estimate these parameters as well, she is able to fit the data even better, making it even harder to detect her misspecification.}
\end{example}
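To indicate how estimates like those in Table \ref{tab:table1} can be obtained, the sketch below implements the squared-distance criterion for the normal family and a sophisticated agent. It is our own illustrative code: the grid, the function names, and the treatment of the task as mandatory are our assumptions, and we do not claim that this sketch reproduces the table's entries digit for digit.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

p_obs = np.array([0.25827, 0.304687, 0.375, 0.5, 1.0])   # data from the example
T = len(p_obs)

def predicted_probs(beta, delta=1.0, mu=0.0, sigma=0.577):
    """Stopping probabilities of a sophisticated agent under a Normal(mu, sigma)
    payoff distribution and a mandatory task (v_T = -infinity)."""
    v_next, probs = -np.inf, []
    for _ in range(T - 1):                                # periods T-1, ..., 1
        Fc = norm.cdf(v_next, mu, sigma)                  # F(v_{t+1})
        # E[y ; y > v_{t+1}] for a normal distribution (truncated first moment)
        ey_above = mu * (1 - Fc) + sigma * norm.pdf((v_next - mu) / sigma)
        v_t = beta * delta * ey_above + (Fc * delta * v_next if Fc > 0 else 0.0)
        probs.append(1 - norm.cdf(v_t, mu, sigma))        # p_t = 1 - F(v_t)
        v_next = v_t
    return np.array(probs[::-1] + [1.0])                  # p_1, ..., p_{T-1}, p_T = 1

grid = np.arange(0.4, 1.0005, 0.0005)
best_beta = min(grid, key=lambda b: np.sum((predicted_probs(b) - p_obs) ** 2))
print(best_beta)   # noticeably below 1, although the data come from beta = 1
\end{verbatim}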
\medskip
Our general results below, which establish that non-parametrically the degree of time-inconsistency is never identified from task completion data, prove that the above examples are not artefacts of the numbers we have chosen. For every set of model parameters $\delta,\beta,\hat{\beta}$ and any given dataset, there exists some unobserved stationary payoff distribution that perfectly fits the data. Thus, the analyst can rule out parameter values for $\delta,\beta,\hat{\beta}$ only by assuming, ad hoc, a specific parametric family of distributions. As a consequence, the analyst's conclusions are---in line with Example \ref{ex:parametric form}---solely determined by her parametric choice for the unobservable payoff distribution.
\section{Preliminary Analysis: Recursive Structure}
\label{sec:recursive}
We begin by establishing that the agent's problem has a simple recursive structure. A strategy $\sigma_t(\cdot,\cdot; z)$ is a cutoff strategy with cutoffs $z=(z_1,\ldots,z_T)$ if
\[
\sigma_t(Y^{t-1},y_t;z) = \begin{cases}
0 & \text{ if } y_t \leq z_t \\
1 & \text{ if } y_t > z_t
\end{cases} \,.
\]
Self $T$ completes the task if and only if her realized payoff is strictly greater than $\underline{y}$. Furthermore, selves $t<T$ believe that Self $T$ will complete the task if and only if her realized payoff is strictly greater than $(\nicefrac{\hat{\beta}}{\beta}) \underline{y}$. Hence both the perceived and actual strategy in the final period are cutoff strategies. Similarly, if all future selves are perceived to use cutoff strategies, Self $t$ can calculate the perceived continuation value of waiting, and will complete the task if and only if her current payoff is greater than this perceived continuation value. Hence, by induction, all selves use a cutoff strategy and perceive their future selves to use a cutoff strategy.
For a partially-naive quasi-hyperbolic discounter, the time $t$ and time $t'$ selves have the same beliefs about the strategy future selves---i.e. selves active after time $\max\{t,t'\}$---use.
Self $t$ thus believes that if she does not complete the task at time $t$, the task will be completed at the (random) time
\[
\hat{\tau}_t = \min\{s>t\colon y_s > c_s \} \,,
\]
where $c_s$ is the perceived cutoff that selves $t<s$ believe Self $s$ will use. Trivially, for all $s>t$ the stopping time $\hat{\tau}_{s}$ equals $\hat{\tau}_{t}$ conditional on not stopping before time $s+1$,
\[
\mathbb{P}[\hat{\tau}_s = \hat{\tau}_t \mid \hat{\tau}_t > s] = 1\,.
\]
Hence, Self $t$ believes that her \textsl{perceived continuation utility} $v_t$ if she does not complete the task at time $t$ is given by
\[
v_t = \beta \, \mathbb{E} \left[ \delta^{\hat{\tau}_t-t} y_{\hat{\tau}_t} \right] \,.
\]
Since Self $t$ stops whenever the value of completing the task immediately is greater than her subjective continuation value, the time $\tau_t$ at which the task is completed conditional on not having been completed before time $t$ is given by
\[
\tau_t = \min \{s>t \colon y_s > v_s \} \,.
\]
We first show that the perceived continuation values satisfy a recursive equation.\footnote{Throughout this paper, $\int \cdot \, dF$ denotes the Riemann--Stieltjes integral.}
\begin{lemma}[Recursive Characterization]\label{lem:rec-representation}
A pair of strategies $(\sigma,\hat{\sigma})$ constitute a perception-perfect equilibrium if and only if both are cut-off strategies with cutoffs $(v,c) \in \mathbb{R}^T \times \mathbb{R}^T$ that satisfy the equations
\begin{equation}\label{eq:rec-representation}
v_t = \begin{cases} \beta \,\delta \int_{\nicefrac{\hat{\beta}}{\beta}\,\,v_{t+1}}^\infty z \,d F(z) + F(\nicefrac{\hat{\beta}}{\beta}\,\,v_{t+1}) \, \delta \, v_{t+1} & \text{ for } t<T\\
\underline{y} &\text{ for } t=T\end{cases}
\end{equation}
and $c_t = \left(\nicefrac{\hat{\beta}}{\beta}\right) \,\,v_t$.
\end{lemma}
\begin{proof}
We first show that the conditions are necessary for a perception-perfect equilibrium.
We already argued that any equilibrium must be in cutoff strategies and that the cutoffs used by each self must equal their perceived continuation value $v$.
We can rewrite the perceived continuation utility by considering the event that the task is completed in period $t+1$ as well as the complementary event that it is completed later
\begin{align*}
v_t &= \beta \, \, \mathbb{E} \left[ \delta^{\hat{\tau}_t-t} y_{\hat{\tau}_t} \right] = \beta \, \, \mathbb{E} \left[ \mathbf{1}_{\hat{\tau}_t = t+1} \delta^{\hat{\tau}_t-t} y_{\hat{\tau}_t} + \mathbf{1}_{\hat{\tau}_t > t+1} \delta^{\hat{\tau}_t-t} y_{\hat{\tau}_t} \right] .\\
\intertext{Because Self $t$ believes the task is completed in period $t+1$ if and only if the benefit is greater than the subjective cutoff $y_{t+1} > c_{t+1}$, this equals}
v_t &= \beta \, \, \mathbb{E} \left[ \mathbf{1}_{y_{t+1}> c_{t+1}} \delta y_{\hat{\tau}_t} + \mathbf{1}_{y_{t+1}\leq c_{t+1}} \delta^{\hat{\tau}_t-t} y_{\hat{\tau}_t} \right] \,.
\end{align*}
Since $y_{t+1}$ is distributed according to $F$ and $\hat{\tau}_t = \hat{\tau}_{t+1}$ conditional on not stopping in period $t+1$, we can use the definition of a Riemann--Stieltjes integral to rewrite the above as
\begin{align*}
v_t &= \beta \delta \int_{c_{t+1}}^\infty z \,d F(z) + F(c_{t+1}) \, \beta \delta \, \mathbb{E} \left[ \delta^{\hat{\tau}_{t+1}-(t+1)} y_{\hat{\tau}_{t+1}} \right].
\end{align*}
Using the definition of $v_{t+1}$ to rewrite the last summand above, we therefore have that
\begin{equation}\label{eq:v-c-dynamic}
v_t = \beta \delta \int_{c_{t+1}}^\infty z \,d F(z) + F(c_{t+1}) \, \delta \, v_{t+1}\,.
\end{equation}
Here, $v_t$ is the cutoff that Self $t$ actually uses. Prior selves, however, believe that Self $t$ discounts with hyperbolic weight $\hat{\beta}$, so the perceived cutoff $c_t$ they think Self $t$ uses solves
\begin{align*}
c_{t} &= \hat{\beta} \, \mathbb{E} \left[ \delta^{\hat{\tau}_t-t} y_{\hat{\tau}_t} \right] = \left(\nicefrac{\hat{\beta}}{\beta}\right)\, \beta\, \mathbb{E} \left[ \delta^{\hat{\tau}_t-t} y_{\hat{\tau}_t} \right] = \left(\nicefrac{\hat{\beta}}{\beta}\right) \,\,v_t \,.
\end{align*}
\noindent Using this equation to replace $c_{t+1}$ in \eqref{eq:v-c-dynamic} establishes that the continuation values $v_1,\ldots,v_{T-1}$ satisfy the recursive equation
\[
v_t = \beta \, \delta \int_{ \left(\nicefrac{\hat{\beta}}{\beta}\right)\,\,v_{t+1}}^\infty z \,d F(z) + F( \left(\nicefrac{\hat{\beta}}{\beta}\right)\,\,v_{t+1}) \, \delta \, v_{t+1}\,.
\]
That any such pair of cutoff strategies constitutes a perception-perfect equilibrium follows from checking the (perceived) optimality conditions inductively starting from the last period.
\end{proof}%
To see the intuition behind Equation \eqref{eq:rec-representation}, suppose first that the agent is sophisticated ($\hat{\beta}=\beta$), in which case $\nicefrac{\hat{\beta}}{\beta}=1$. Then the first term is the discounted benefit of stopping tomorrow, which the agent does whenever the benefit of stopping lies above the continuation value of tomorrow's self. This payoff is discounted according to Self $t$'s short-term discount factor $\beta \delta$. The second term captures the fact that with probability $F(v_{t+1})$ tomorrow's self continues because it prefers its perceived continuation value $v_{t+1}$. As today's self discounts payoffs that realize after period $t+1$ by a factor of $\delta$ more than tomorrow's self, this term is discounted with $\delta$. When predicting future behavior, a partially naive agent uses the perceived cutoffs $c_{t} = (\nicefrac{\hat{\beta}}{\beta})\, v_t $ determined by the continuation value a former time $s<t$ self believes Self $t$ has. If $\hat{\beta}>\beta$, current selves overestimate future selves' patience and, hence, the cutoff they use. If $\hat{\beta}<\beta$, current selves underestimate future selves' patience and, hence, their cutoffs.
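To make the recursion operational, the following minimal sketch (our own illustration, for a payoff distribution with finite support; the function name and interface are not from the paper) computes the perceived continuation values of Lemma \ref{lem:rec-representation} and the implied conditional stopping probabilities. Setting \texttt{beta\_hat} equal to \texttt{beta} gives the sophisticated case, and setting it to $1$ the fully naive case.
\begin{verbatim}
import numpy as np

def continuation_values(support, probs, beta, delta, beta_hat, y_low, T):
    """Perceived continuation values v_1,...,v_T and actual conditional stopping
    probabilities p_1,...,p_T for a finite-support payoff distribution.
    y_low plays the role of the penalty parameter (the underlined y in the text);
    for a mandatory task, pass a very negative but finite y_low."""
    support = np.asarray(support, dtype=float)
    probs = np.asarray(probs, dtype=float)
    v = np.empty(T + 1)                          # v[1], ..., v[T]; v[0] is unused
    v[T] = y_low
    for t in range(T - 1, 0, -1):
        c_next = (beta_hat / beta) * v[t + 1]    # perceived cutoff of Self t+1
        stop = support > c_next                  # ties are broken towards waiting
        v[t] = (beta * delta * np.sum(support[stop] * probs[stop])
                + np.sum(probs[~stop]) * delta * v[t + 1])
    p = np.array([np.sum(probs[support > v[t]]) for t in range(1, T + 1)])
    return v[1:], p
\end{verbatim}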
\section{Rate of Task Completion Increases Over Time}
\label{sec:task_completition}
Building on this recursive formulation, this section establishes that a partially-naive quasi-hyperbolic agent is (weakly) more likely to stop and complete the task the closer she is to the deadline $T$. In other words, the farther away the deadline, the higher the perceived continuation value of the current self. Because the payoff distribution is stationary, comparing the perceived continuation value of period $t$ to that of period $t+1$ is equivalent to comparing the perceived continuation value in the first period of a task-completion problem
with a deadline of $T-t$ to that with a deadline of $T-(t+1)$. Interestingly, since the perceived continuation value increases in the distance to the deadline, a quasi-hyperbolic agent would never want to impose an earlier deadline to keep herself from procrastinating excessively. While obvious for an exponential discounter---adding an extra period simply increases her choice set and hence makes her better off---the question of whether to limit future selves' delay possibilities is much more subtle when the agent is a quasi-hyperbolic discounter. Indeed, when the distribution of net benefits is not stationary, it is easy to construct counterexamples in which Self 1 would want to impose an early deadline on future selves.
\begin{example}[Self 1 wants to impose a deadline with a time-dependent payoff distribution]\label{ex:deadlines-help} Consider a sophisticated agent with $\delta =1, \beta =1/2$ who has two periods to complete a mandatory task, and who has a deterministic cost of $0.9$ in the first and $1$ in the second period. Due to her present bias, the agent will complete the task in period 2, giving Self 1 a utility of $-1/2$. Now add the chance to complete the task in a third period at a cost of $1.5$. Then Self 2 strictly prefers to procrastinate, and if Self 1 waits, her utility is $-3/4$. Thus, adding another period in which the task can be completed makes Self 1 worse off. As a result, Self 1 would be willing to impose a two-period deadline.
\end{example}
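The example can be verified with a few lines of code; the sketch below is our own check (the function name and interface are ours) and computes the equilibrium completion period and Self 1's utility for a mandatory task with deterministic, time-varying costs.
\begin{verbatim}
def completion_and_u1(costs, beta=0.5, delta=1.0):
    """Equilibrium completion period and Self 1's utility for a sophisticated agent
    facing a mandatory task with deterministic costs costs[0], costs[1], ..."""
    T = len(costs)
    comp = T                                           # Self T must complete the task
    for t in range(T - 1, 0, -1):                      # t = T-1, ..., 1
        wait_value = -beta * delta ** (comp - t) * costs[comp - 1]
        if -costs[t - 1] > wait_value:                 # waits when indifferent
            comp = t
    u1 = -costs[0] if comp == 1 else -beta * delta ** (comp - 1) * costs[comp - 1]
    return comp, u1

print(completion_and_u1([0.9, 1.0]))       # (2, -0.5):  Self 1 waits, Self 2 completes
print(completion_and_u1([0.9, 1.0, 1.5]))  # (3, -0.75): the extra period makes Self 1 worse off
\end{verbatim}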
Intuitively, because preferences between today's self and future selves are not aligned, if payoffs depend on time, restricting future selves' choices by imposing a deadline can be beneficial to today's self. \cite{bisinhyndman} provide further examples in which a sophisticated quasi-hyperbolic agent benefits from imposing a deadline when costs of doing a mandatory task follow a Markov process in which higher costs today are associated with higher costs tomorrow.\footnote{While in our simple example the state changes deterministically, continuity of payoffs implies that the example also holds if with a small probability the costs are redrawn from a uniform distribution over $\{0.9,1,1.5\}$ and otherwise move up deterministically towards the state $1.5$ as in our example.} What is perhaps surprising is that if costs---or net benefits in our setup---are uncorrelated over time, a sophisticated quasi-hyperbolic agent \textsl{never} wants to impose a deadline.
Indeed, when the payoff distribution is the same across periods, we have:
\begin{theorem}[Monotonicity of the Continuation Value]\label{prop:monotone-values} Let $\delta \leq 1$.
\begin{compactenum}[i)]
\item The subjective continuation values are non-increasing over time
\[
v_1 \geq v_2 \geq \ldots \geq v_T \,.
\]
\item Every self $t$ prefers a later deadline.
\end{compactenum}
\end{theorem}
Parts $i)$ and $ii)$ are equivalent since when the payoff distribution is identical across periods, the subjective continuation value in a given period $t$ equals the value in the problem with a deadline of $T-t$ periods. To understand intuitively why a quasi-hyperbolic agent's Self $1$ does not want to impose a deadline with a stationary payoff distribution, consider first the case in which doing the task is always costly---i.e., where the support of $F$ is a subset of $\mathbb{R}_-$. When comparing a $(T-1)$-period to a $T$-period deadline, Self 1 realizes that if she does not engage in the task in the $T$-period problem, Self 2 will face a $(T-1)$-period problem. That subgame is identical to the one she faces in the $(T-1)$-period problem, and future selves who are $s$ periods away from the deadline will behave identically in the two problems. Hence for $s \in \{1,\cdots, T-1\}$, the task completion probability $s$ periods before the deadline is identical, and due to discounting of future costs, Self 1 is strictly better off selecting the $T$-period problem and not doing the task in the first period.
Suppose now instead that the agent is sophisticated with quasi-hyperbolic parameter $\beta <1$ and that the payoff of completing the task is always positive---i.e., the support of $F$ is a subset of $\mathbb{R}_+$. From the perspective of Self $t$, future selves are too impatient, and therefore too willing to cash in the positive benefit in every future period. Suppose now that Self 1 can extend the deadline from $T-1$ to $T$ periods. In this case, Self $T-1$ will wait for sufficiently low net benefits. Because the time $T-1$ self is more impatient than Self 1 would want it to be, whenever the impatient Self $T-1$ chooses to wait, Self 1's expected payoff from waiting increases. Thus, conditional on reaching period $T-1$, the longer deadline benefits Self 1. Now consider Self $T-2$. With the longer deadline, Self $T-2$'s benefit from waiting increases because she always prefers her future selves to wait whenever they choose to do so. Hence, Self $T-2$ will also act less impatiently, which again benefits Self 1 conditional on reaching period $T-2$. By induction, hence, in expectation Self 1 benefits from the deadline extension in every future period.
Because a partially naive Self 1 thinks that she is sophisticated, and because in either case a sophisticated agent's Self 1 does not want to impose a deadline, a partially naive agent will not want to do so either. Hence, the perceived continuation values of a partially naive agent also increase in the distance to the deadline.
Our proof studies properties of solutions to the recursive equation \eqref{eq:rec-representation} to extend the above intuitions to cases in which the support of the payoff distribution may contain positive and negative elements, and hence some future selves can be a priori too eager and others not eager enough to complete the task.
We now turn to an immediate implication of Theorem \ref{prop:monotone-values}.
Note that the probability $p_t = \mathbb{P}[\tau_{t-1} = t]$ that the agent stops in period $t$ conditional on not having stopped before is the probability that the value of completing the task $y_t$ is above the subjective continuation value $v_t$; i.e.
\[
p_t = \mathbb{P}[ y_t > v_t ] = 1-F(v_t)\,.
\]
As the subjective continuation value $v_t$ is non-increasing, we have that the objective probability $p_t$ that the agent stops in period $t$ is non-decreasing.
\begin{corollary}\label{cor:monotone-stopping} Let $\delta \leq 1$. For any given benefit distribution $F$ and in every perception-perfect equilibrium, the objective probability with which the agent completes the task conditional on not having completed it before is non-decreasing towards the deadline, i.e.
\[
p_1 \leq p_2 \leq \ldots \leq p_T\,.
\]
\end{corollary}
Independently of the naivete and preference-parameters of a hyperbolic discounter, Corollary \ref{cor:monotone-stopping} provides a simple testable prediction about her task-completion behavior when payoffs are independently and identically distributed over time: the likelihood of completing the task is increasing over time. Section \ref{sec:discussion}, however, emphasizes that researchers need individual rather than group data to test this prediction.\footnote{Interestingly, this result holds independently of whether the agent over- or underestimates $\beta$, i.e. whether $\hat{\beta}<\beta$ or $\hat{\beta}\geq\beta$.}
\begin{remark}
Corollary \ref{cor:monotone-stopping} establishes that the probabilities of stopping \textsl{conditional} on not having stopped previously increase over time. The unconditional stopping probability, however, may either increase or decrease. This difference is of practical relevance: for example, the conditional stopping probabilities increase over time in the tax-filing data of \cite{martinezmeier} while the unconditional stopping probabilities decrease.\footnote{See Figures 1 and 2 in \cite{martinezmeier}.}
\end{remark}
\section{Time-Preferences are Unidentifiable from Task Completion}
\label{sec:unidentified}
In this section, we identify a strong sense in which time-preferences are unidentifiable from task completion choices. Recall that we established that for any arbitrary preference profile $\beta,\delta$ and any belief $\hat{\beta}$, the profile of stopping probabilities is non-decreasing.
In this section we establish the converse: absent (parametric) restrictions on the payoff distribution $F$, we show that any non-decreasing profile of stopping probabilities is consistent with any arbitrary preference profile $\beta,\delta$ in case the agent is either sophisticated ($ \hat{\beta} = \beta$) or fully naive ($\hat{\beta} =1$). Hence, it is impossible, for example, to distinguish a naive time-inconsistent agent from a time-consistent one based on their task-completion behavior. Importantly, this impossibility continues to hold even if a researcher is willing to exogenously impose that the ``long-run discount factor'' $\delta$ equals $1$, as is plausible in many applications in which one observes task completion on a frequent (e.g. daily) basis. Similarly, even if the researcher is willing to impose a priori restrictions on plausible levels of $\beta$---including the strong requirement that the agent is time-consistent---absent exogenous restrictions on $F$, no information on $\delta$ or $\beta$ can be inferred from the task-completion data.
Intuitively, whether a self prefers to do a task today or tomorrow depends on her time preferences (as well as beliefs about future selves' time preferences) and on the perceived option value of waiting. The option value of waiting, in turn, depends on the payoff distribution. Through changing the unobservable payoff distribution, we can hence undo a change in the present bias or long-run discount factor of the agent. Technically, however, a local change in the payoff distribution affects continuation values in every period in a highly non-linear way, so to establish that we can construct an appropriate payoff distribution, we need a non-local argument. When the agent is either sophisticated or fully naive---for different technical reasons that we explain below---the analysis simplifies and allows us to establish that we can indeed rationalize the stopping behavior for any arbitrarily chosen $\beta,\delta$.
For the case in which the penalty is unobservable, we furthermore illustrate that the data is rationalizable as the optimal behavior of a fully patient time-consistent agent $(\hat{\beta}=\beta=\delta=1)$ facing an unobservable payoff distribution $F$ with {\it any given} expected value and (non-zero) variance; any parametric identification of present bias in such a task-completion setting, therefore, must be based on prior knowledge of higher-order moments of the benefit distribution.
\subsection{Time-Preferences are Unidentifiable: Sophisticated Case}
In this subsection, we establish that absent (parametric) restrictions on the payoff distribution $F$, any non-decreasing profile of stopping probabilities is consistent with any arbitrary preference profile $\beta,\delta$ of a sophisticated quasi-hyperbolic discounter. In particular, we have:
\begin{theorem}[Non-identifiability]\label{thm:non-identifiability-sophisticate}
Suppose the agent is sophisticated $\hat{\beta}=\beta$. For every non-decreasing sequence of stopping probabilities $0 < p_1 \leq p_2 \leq \ldots \leq p_T < 1$, every $(\delta,\beta) \in (0,1] \times (0,1]$, and every penalty $\nicefrac{\underline{y}}{\beta \delta} \in \mathbb{R}$, there exists a distribution $F$ that rationalizes the agent's stopping probabilities as the (unique) outcome of a perception perfect equilibrium.
\end{theorem}
Technically, to prove the theorem, we construct a distribution with $T+2$ mass points, where each of the non-extreme values equals the agent's (correctly perceived) continuation value in a given period $t \in \{1,\ldots,T\}$; i.e. the second lowest mass point is set at the value $v_T = \underline{y}$, and so on. The probability on each mass point is chosen so that the agent---who waits if and only if $y_t \leq v_t$---selects the exogenously given stopping probability. The construction is feasible since when $\hat{\beta} =\beta$, the recursive representation (Lemma \ref{lem:rec-representation}) takes a particularly simple form, and together with the chosen construction of the distribution gives rise to a system of linear equations, which can be solved forward.
\subsection{Time-Preferences are Unidentifiable: Naive Case}
We now turn to the case in which the agent believes herself to be time-consistent and establish that for every chosen non-decreasing sequence of stopping probabilities and every chosen preference profile $\beta,\delta$, there exists a payoff distribution $F$ that admits a piecewise constant density and induces the agent to choose the stopping behavior given by the data.
\begin{theorem}[Non-identifiability]\label{thm:non-identifiability}
Suppose the agent believes herself to be time-consistent ($\hat{\beta}=1$). For every non-decreasing sequence of stopping probabilities $0 < p_1 \leq p_2 \leq \ldots \leq p_T < 1$, every $(\delta,\beta) \in (0,1) \times (0,1]$, and every penalty $\nicefrac{\underline{y}}{\beta \delta}<0$, there exists a distribution $F$ that rationalizes the agent's stopping probabilities as the unique outcome of any perception perfect equilibrium.
\end{theorem}
Our formal proof in the appendix proceeds roughly as follows. Step (i). Fix the agent's time preference as well as period $T$'s continuation value (which equals $\underline{y}$). Step (ii). Take an arbitrary $(T-1)$-element vector of non-increasing continuation values $v_1 \geq v_2 \geq \ldots \geq v_{T-1}$. Step (iii). Generate a payoff distribution for these continuation values that gives the desired stopping probabilities. In particular, between the perceived continuation values corresponding to periods $t$ and $t+1$ we place a probability mass equal to the difference between the exogenously given stopping probabilities of these two periods, for simplicity using a uniform density. This step, hence, amounts to mapping continuation values into distributions that lead to the correct stopping probabilities.
Step (iv). Calculate the actual continuation values that the new payoff distribution from the third step gives rise to. This maps the set of distributions back into the vector of continuation values. By Theorem \ref{prop:monotone-values}, these continuation values are again non-increasing, and thus the combined function maps a non-increasing sequence of continuation values into a non-increasing sequence of continuation values. Step (v). We show that this function is bounded and maps sequences from an appropriately chosen interval into itself. Furthermore, the function is monotone as higher continuation values lead to a better distribution (in the sense of first-order stochastic dominance) and a better distribution increases the subjective continuation values for an agent who believes herself to be time-consistent (established in Lemma \ref{lem:aux-properties-naive} $ii)$ below). Thus, the mapping from continuation values into continuation values is a monotone mapping from a complete lattice into a complete lattice, and by Tarski's Theorem admits at least one fixed point.
Any fixed point gives the desired distribution, since by Step (iii) the stopping probabilities are correct and by Step (iv) the continuation values are those consistent with the limit distribution. Furthermore, because by Lemma \ref{lem:aux-properties-naive} $i)$ below, the continuation values are strictly decreasing when $F(\underline{y})>0$ and $\underline{y}<0$, the limit distribution that we construct is continuous, so that the agent's stopping behavior is unique.
As explained in the above sketch, the proof of Theorem \ref{thm:non-identifiability} relies on the following Lemma.
\begin{lemma}\label{lem:aux-properties-naive}
Suppose $\delta<1$ and the agent believes herself to be time-consistent ($\hat{\beta}=1$).
\begin{compactenum}[i)]
\item For every distribution $F$ with $F(\underline{y})>0$ and $\underline{y}<0$, the continuation values are strictly decreasing $v_1 > v_2 > \ldots > v_T$.
\item {A first-order stochastic dominance increase in the payoff distribution $F$ increases the vector of subjective continuation values point-wise.}
\end{compactenum}
\end{lemma}
Part $i)$ shows that whenever there is a positive probability that the utility from completing the task in the final period before the deadline is less than the continuation value $\underline{y}$ of not completing it, an agent who believes herself to be time-consistent (i.e. who has beliefs $\hat{\beta}=1$) has a {\it strictly} positive willingness to pay for extending the deadline. Here, the assumption that $F(\underline{y})>0$ and $\underline{y}<0$ rules out that it is optimal for the agent to always complete the task immediately.\footnote{As a trivial counterexample to the finding when the assumption is dropped, suppose the task yields a (net) positive deterministic payoff above $\underline{y}$. Then the agent would always complete the task immediately, and hence is unwilling to pay for extending the deadline.} Thereby, it allows us to strengthen the finding of Theorem \ref{prop:monotone-values} for the case of $\hat{\beta}=1$.
The second part of the Lemma shows that any improvement in the payoff distribution weakly increases the subjective continuation values in all periods. Obviously, for a time-consistent agent an improvement in the payoff distribution raises the second-to-last period's continuation payoff. Furthermore, from the third-to-last period's perspective, the increase in the payoff distribution and in the penultimate period's continuation value makes it more desirable to reach the second-to-last period, that is, it increases the third-to-last period's continuation value, and so on. And because an agent with beliefs $\hat{\beta}=1$ thinks she is time-consistent from tomorrow on, the improvement similarly increases her perceived continuation values.
While economically we do not believe that the restriction to fully naive or actually time-consistent agents (with $\hat{\beta} =1$) is important for Theorem \ref{thm:non-identifiability} to hold, our mathematical proof uses this assumption when arguing that subjective continuation values increase after a first-order stochastic dominance shift in the payoff distribution, which in turn allows us to use Tarski's Theorem. In general, due to the conflict of interest between a time-inconsistent agent's different selves, a first-order stochastic dominance improvement of her payoffs need not raise subjective continuation values, as the following example highlights.
\begin{example}[A sophisticated $\beta,\delta$-agent can prefer a fixed uniformly payoff-reducing tax]\label{ex:tax-sophisticate}
Let $\beta = 1/8$ and the agent be sophisticated ($\hat{\beta}=\beta$). To simplify the calculation, we set $\delta =1$, but the argument obviously extends to $\delta$ sufficiently close to $1$. We compare the agent's expected welfare and (subjective) continuation values in a three-period voluntary-task-completion problem across two scenarios.\footnote{Because even the lowest payoff from completing the task is positive, the agent always completes the task voluntarily. Our results, thus, remain unchanged if task completion becomes mandatory.} One scenario has no tax; in the other, the agent has to pay a fixed utility tax of $1/8$ in the period in which she completes the task. Let the distribution $F$ of payoffs absent a tax be such that with probability $3/4$ the agent receives a payoff of $3/2$, and with the remaining probability of $1/4$ the agent receives a payoff of $1/4$. Straightforward calculations (see the Supplementary Appendix) establish that the agent strictly prefers the tax to the no-tax situation and that the tax increases the first-period continuation value.
\end{example}
Note that the tax introduced in Example \ref{ex:tax-sophisticate} is the same independently of when the agent completes the task and in that sense is not tailored to punish an agent for giving in to early temptations. Intuitively, nevertheless, the tax in the above example lowers the temptation to stop immediately in period 2, as it reduces the benefits from doing so. As a result, the agent obtains a commitment device to stop only when payoffs are high in the second or first period. The benefits thereof more than compensate for the direct payoff reduction through the tax, and thereby raise earlier periods' continuation values.
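The following sketch (our own check; the exact calculations are in the Supplementary Appendix) verifies numerically that the tax in Example \ref{ex:tax-sophisticate} raises Self 1's continuation value.
\begin{verbatim}
def v1_sophisticated(values, probs, beta=0.125, delta=1.0, y_low=0.0, T=3):
    """Self 1's continuation value for a sophisticated agent facing an optional
    task (y_low = 0) and a finite-support payoff distribution."""
    v = y_low
    for _ in range(T - 1):                               # compute v_{T-1}, ..., v_1
        stop = [y > v for y in values]                   # Self t+1 stops iff y > v_{t+1}
        ev_stop = sum(y * q for y, q, s in zip(values, probs, stop) if s)
        prob_wait = sum(q for q, s in zip(probs, stop) if not s)
        v = beta * delta * ev_stop + prob_wait * delta * v
    return v

tax = 0.125
print(v1_sophisticated([1.5, 0.25], [0.75, 0.25]))              # ~0.1484 without the tax
print(v1_sophisticated([1.5 - tax, 0.25 - tax], [0.75, 0.25]))  # ~0.1621 with the tax
\end{verbatim}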
Lemma \ref{lem:aux-properties-naive} and Example \ref{ex:tax-sophisticate} jointly imply that one can (sometimes) identify agents who believe they have self-control problems ($\hat{\beta}<1$): such an agent can have a strictly positive willingness to pay to make her payoff distribution strictly worse. In contrast, an agent who believes herself to be time-consistent ($\hat{\beta} =1$), and hence does not foresee future self-control problems, will never want to do so.
\subsection{Known Expected Value and Variance}
\label{subsec:moments}
For our very general results, we have not restricted the class of permissible distribution functions. One may hope to rule out time-consistency, and thereby find evidence of present bias, by restricting features of the distribution. Perhaps the most natural way of doing so would be to make restrictions regarding the moments of $F$; for example, an analyst may have an idea regarding the possible expected net benefit of doing the task---that is, regarding the mean of $F$---or may be willing to impose that net benefits do not vary too much between periods (restricting the variance of $F$).
We now briefly observe that if the penalty is unobservable, even with a priori knowledge of the mean and variance of $F$ it is impossible to rule out time-consistent behavior. To see this, consider an agent for whom $\beta =\delta =1$. Theorem \ref{thm:non-identifiability-sophisticate} implies that there exists a net benefit distribution $F$ that rationalizes any non-decreasing profile of stopping probabilities. Furthermore, in this case the recursive formulation of the problem in Lemma \ref{lem:rec-representation} simplifies to
$$
v_t = \mathbb{E} \left[ \max \{ y_{t+1},v_{t+1} \} \right] \ \ \ \ \text{ for all } t <T.
$$
Hence, if the distribution $F$ together with the penalty $\underline{y}$ rationalize the data, so does the distribution $F + \kappa$ together with the penalty $\underline{y} +\kappa$ for any $\kappa \in \mathbb{R}$. In other words, we can always select a net benefit distribution with a given expected value. Furthermore for any $\kappa_2 >0$, the stopping behavior remains optimal if we scale the net-benefits and $\underline{y}$ by $\kappa_2$. This implies that we can not only select a distribution with a given mean but that we can at the same time select any desired variance and explain the observed stopping behavior.\footnote{Indeed, since the construction of $F$ in the proof of Theorem \ref{thm:non-identifiability-sophisticate} uses bounded support, we can rationalize the observed stopping behavior as resulting from a patient agent ($\beta =\delta =1$) whose net benefits vary arbitrarily little.}
\begin{corollary}
Suppose the agent is time-consistent and fully patient $\hat{\beta}=\beta=\delta=1$. For every non-decreasing sequence of stopping probabilities $0 < p_1 \leq p_2 \leq \ldots \leq p_T < 1$, and every $\mu \in \mathbb{R}$ and $\sigma^2>0$, there exists a distribution $F$ with mean $\mu$ and variance $\sigma^2$ and a penalty $\underline{y}$ that rationalizes the agent's stopping probabilities as the (unique) outcome of a perception perfect equilibrium.
\end{corollary}
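The shift-and-scale argument behind the corollary is easy to verify numerically. The sketch below (our own illustration, with arbitrary example numbers) confirms that the conditional stopping probabilities of a fully patient time-consistent agent are unchanged when the payoff distribution and the penalty are jointly shifted or scaled.
\begin{verbatim}
def stopping_probs(values, probs, y_low, T):
    """Conditional stopping probabilities of a time-consistent, fully patient agent
    (beta = delta = 1) facing a finite-support payoff distribution and penalty y_low."""
    v = [None] * (T + 1)
    v[T] = y_low
    for t in range(T - 1, 0, -1):
        v[t] = sum(q * max(y, v[t + 1]) for y, q in zip(values, probs))  # E[max(y, v_{t+1})]
    return [sum(q for y, q in zip(values, probs) if y > v[t]) for t in range(1, T + 1)]

vals, pr, y_low, T = [-1.0, 0.5, 1.0, 2.0], [0.25, 0.25, 0.25, 0.25], -0.5, 4
base    = stopping_probs(vals, pr, y_low, T)
shifted = stopping_probs([y + 7.0 for y in vals], pr, y_low + 7.0, T)
scaled  = stopping_probs([3.0 * y for y in vals], pr, 3.0 * y_low, T)
print(base, base == shifted == scaled)     # [0.25, 0.25, 0.5, 0.75] True
\end{verbatim}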
\section{Non-Parametric Identification with Richer Data}
\label{sec:rich data}
Above, we established that stopping data by itself is insufficient to test for time preferences. A natural question is whether richer data allows the analyst to learn about the agent's time-preferences. To do so, the analyst needs to disentangle whether the stopping behavior is driven by a desire to delay incurring costs or by the option value of drawing a better payoff in the future. Observe that in the latter case, a considerable option value requires payoffs to differ significantly. Hence, as the deadline approaches and a waiting agent faces fewer future draws, the continuation value should drop considerably. In contrast, even with a (relatively) constant option value, an agent who is present-biased is willing to delay a costly activity to the last minute. Thus, observing continuation values directly, in addition to task-completion times, should facilitate the non-parametric identification of $\delta,\beta,\hat{\beta}$. We therefore analyze how much the analyst can learn when also observing the continuation values.
More formally, consider the case in which the analyst observes the agent's stopping behavior (infinitely often) as well as her exact willingness to pay for continuing with the task. Conceptually, the analyst could elicit this information by selecting some stopping problems in which she offers the agent a mechanism at the end of period $t$ that truthfully elicits her willingness to pay for continuing with the task from $t+1$ onwards.\footnote{If the analyst sees infinitely many identical agents, she can randomly select $T$ agents. Label these agents $k=1, \ldots, T$. At the end of period $k$, the analyst then elicits agent $k$'s willingness to pay for facing the task-completion problem from period $k+1$ to $T$. She can do so using a standard Becker-De Groot-Marschak mechanism \citep{beckerdegroot}.} Denote the amount she is willing to pay at the end of period $t$ by $m_t$. If the agent's utility is quasi-linear in money, which is a good approximation in the standard hyperbolic discounting model whenever the involved stakes are relatively small---as in the case of parking tickets---then observing $m_t$ is equivalent to observing the continuation value $v_t$; otherwise, $v_t = u(m_t)$ for some monotonically increasing utility function $u$. We provide an exact analytical result regarding partial identification for the case of linear utility in money and a sophisticated agent. But we also highlight that---at the cost of having to use numerical methods common in empirical work to solve for the admissible parameter range---our results can be readily extended in multiple directions, including partial naivete and non-linear utility in money. Importantly, below we also point out that our procedure identifies the time-preferences over effort even if the agent discounts money---due to time-preferences or the ability to borrow or save---differently than effort, which implies that our time-preference identification is robust to standard criticisms of eliciting time preferences using monetary choices \citep{augenblickniederle,ericsonlaibsonreview,ramsey}.
As a preliminary observation, recall that Theorem \ref{prop:monotone-values} and Corollary \ref{cor:monotone-stopping} imply that the elicited continuation values must be non-increasing and the observed stopping probabilities non-decreasing. We refer to data $v,p$ that has these properties as {\it plausible}.\footnote{If $\bar{y}$ is observable then in addition we require that $v_T = \bar{y}$.} Any data that is not plausible cannot be justified by our quasi-hyperbolic setup.
Imposing that the agent is sophisticated, we now show how to non-parametrically identify the set of $\beta,\delta$ that are consistent with the observed data. Using Lemma \ref{lem:rec-representation} and the fact that an agent stops whenever his payoff is strictly above the continuation value, for a sophisticate the continuation values $v$ and conditional stopping probabilities $p$ must satisfy
\begin{align}\label{eq:constraints-sophisticate}
\begin{aligned}
v_t &=u(m_t) \ \ &&\text{ for all } t \in \{ 1,\ldots, T\}\, ,\\
\int_{v_{t+1}}^\infty z \,d F(z) &= \frac{\delta^{-1}\,v_t - (1-p_{t+1}) \, v_{t+1}}{\beta} \ \ \ &&\text{ for all } t \in \{ 1,\ldots, T-1\}\, ,\\
1 - F(v_t) &= p_t \ \ &&\text{ for all } t \in \{ 1,\ldots, T\}\,.
\end{aligned}
\end{align}
Conversely, if a pair $u,F$ satisfies \eqref{eq:constraints-sophisticate} for a given plausible data set, then Lemma \ref{lem:rec-representation} implies that it gives rise to a perception perfect equilibrium for a sophisticated agent.
Note that the right-hand-side of \eqref{eq:constraints-sophisticate} is given by the data and hypothesized values of $\beta$ and $\delta$. Thus, the data is consistent with a given pair $\beta,\delta$ if and only if there exists a distribution $F$ that solves \eqref{eq:constraints-sophisticate}. As a preliminary step, we show that whenever \eqref{eq:constraints-sophisticate} admits a solution, it also admits a solution that is a distribution consisting of $T+1$ mass points.
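Before turning to this lemma, the following small sketch (written in Python purely for illustration; the data and variable names are hypothetical and not part of the formal analysis) computes, from plausible data $(v,p)$ and hypothesized values of $\beta$ and $\delta$, the right-hand sides of the second line of \eqref{eq:constraints-sophisticate}, i.e., the partial means $\int_{v_{t+1}}^\infty z \,d F(z)$ that any rationalizing distribution $F$ would have to match.
\begin{verbatim}
# Illustrative sketch: partial means that a rationalizing F must match,
#   M_t = ( v_t/delta - (1 - p_{t+1}) v_{t+1} ) / beta,   t = 1,...,T-1.
# v and p are Python lists of length T holding v_1,...,v_T and p_1,...,p_T.
def implied_partial_means(v, p, beta, delta):
    T = len(v)
    return [(v[t] / delta - (1 - p[t + 1]) * v[t + 1]) / beta
            for t in range(T - 1)]

# Hypothetical data for T = 3, used only to show the call:
v = [0.40, 0.25, -0.10]
p = [0.20, 0.35, 0.60]
print(implied_partial_means(v, p, beta=0.9, delta=1.0))
\end{verbatim}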
\begin{lemma}\label{lem:mass_points_sufficient}
Whenever \eqref{eq:constraints-sophisticate} admits a solution for a plausible data set, there exists a solution $F$ that consists of exactly $T+1$ mass points located at $(\pi_0, \ldots,\pi_T)$ that satisfy
$$
\pi_0 \leq v_T < \pi_1 \leq v_{T-1} < \ldots \leq \pi_{T-1} \leq v_1 < \pi_T,
$$
with associated probabilities $f_k=\mathbb{P}[y = \pi_k] $ given by
\begin{equation*}
f_k = \begin{cases} 1-p_T &\text{ if } k = 0\\
p_{T-k+1} - p_{T-k} &\text{ if } k \in \{ 1,\ldots,T-1\}\\
p_1 &\text{ if } k = T
\end{cases}\,.\
\end{equation*}
\end{lemma}
Intuitively, two distributions give rise to the same stopping probabilities when the probability mass above the continuation values is the same. And the only thing that matters for the option value of waiting is the probability with which the agent stops at a given future point in time and the expected payoff conditional on doing so. Moving the probability mass between any two continuation values to the expected payoff conditional on falling between these two values, thus, leaves the incentives to wait unaltered. Furthermore, because the observed stopping probabilities determine the probability mass between any two continuation values, the question of whether the analyst can non-parametrically match the observed data for a given $\beta, \delta$ boils down to the question of whether she can do so by choosing a distribution consisting of $T+1$ mass points in the appropriate intervals.
Conceptually, Lemma \ref{lem:mass_points_sufficient} hence allows the analyst to search over a finite-dimensional rather than an infinite-dimensional space of possible distributions. Indeed, under the distributional restriction given by the lemma, \eqref{eq:constraints-sophisticate} becomes a non-linear system with finitely many real-valued unknowns. Theorem \ref{thm:non-para_identification}, which we prove in the Appendix, shows that this system can be transformed into a simple set of transparent inequalities that identify the values of $\delta$ and $\beta$ that are consistent with the observed stopping behavior and elicited continuation values.
\begin{theorem}[Non-Parametric Identification] \label{thm:non-para_identification} Suppose $u(m_t) =m_t$ for all $t$ and that $p_1 >0$.\footnote{We require $p_1>0$ only to simplify the statement.} Plausible data $(v,p)$ is consistent with $\beta,\delta$ and sophistication $\hat{\beta}=\beta$ if and only if (i)
\begin{equation}\label{eq:beta-at}
\beta < \frac{\delta^{-1}\,v_1 - (1-p_{2}) \, v_{2}}{v_{2} (p_2 - p_1) + v_1 p_1}
\end{equation}
and (ii) $ v_{t+1} \beta < v_{t+1} a(\delta,t) \leq v_t \beta$ for all $t \in \{2, \ldots, T-1\}$, where
$$
a(\delta,t) = 1 - \frac{\delta^{-1} (v_{t-1} -v_t) - (1-p_t) (v_t - v_{t+1}) }{ v_{t +1}(p_{t+1} - p_t)}.
$$
\end{theorem}
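To illustrate how the characterization can be taken to data, the following sketch (our own Python illustration with hypothetical names, not a reference implementation) checks conditions (i) and (ii) for a hypothesized pair $(\beta,\delta)$; scanning it over a grid of candidate values traces out an identified set of the kind plotted in Figure \ref{fig:example_np}.
\begin{verbatim}
# Illustrative check of conditions (i) and (ii) of the theorem.
# v and p are lists of length T with v_1,...,v_T and p_1,...,p_T (plausible data).
def consistent(v, p, beta, delta):
    T = len(v)
    # condition (i)
    bound = (v[0] / delta - (1 - p[1]) * v[1]) \
            / (v[1] * (p[1] - p[0]) + v[0] * p[0])
    if not beta < bound:
        return False
    # condition (ii), for periods t = 2,...,T-1 (1-based indexing as in the text)
    for t in range(2, T):
        a = 1 - ((v[t - 2] - v[t - 1]) / delta
                 - (1 - p[t - 1]) * (v[t - 1] - v[t])) \
                / (v[t] * (p[t] - p[t - 1]))
        if not (v[t] * beta < v[t] * a <= v[t - 1] * beta):
            return False
    return True
\end{verbatim}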
The theorem provides an exact characterization of what time-preference parameters are consistent with the observed rich data. To illustrate its implications, consider the example from Section \ref{sec:example} in which $T=5$, the agent's payoffs from completing the task are uniformly distributed over $[-1,1]$, and the agent is time-consistent with $\beta=\delta=1$ (this is the setup of Example 2). We illustrate the set of parameters the analyst can identify non-parametrically for $T=5$ and $T=20$ in Figure \ref{fig:example_np}. It is immediate that---in contrast to the case of unobservable continuation values---not all parameter combinations $\beta, \delta$ are consistent with the data.
Figure \ref{fig:example_np}, however, also illustrates that even if the analyst correctly imposes that $\delta =1$, she cannot make precise inference in the case where $T=5$. Indeed, in the example any $\beta$ between $0.82$ and $1.28$ is consistent with the data.
This changes drastically for $T=20$ in which case $\beta$ is tightly identified once $\delta =1$ is imposed. Without imposing $\delta=1$, however, the inference about $\beta$ remains imprecise even in the case of $T=20$, as it is impossible to reject $\beta=0.84$. Overall, the example suggests that rich data---including a significant number of continuation values---are needed for tight parameter estimates.
What allows the analyst to separate the option-value-from-waiting based reason for delaying the task from time-preference-based ones with a rich enough data set? If the agent is patient, he will delay completing the task with high probability only if he expects a better draw with high probability. This implies that there needs to be considerable variation in the underlying payoff distribution. But then, as the deadline moves closer, the agent foresees getting fewer and fewer draws, which means the option value quickly drops. In contrast, if time preferences are the underlying reason for delaying, the continuation value will drop much more slowly as the deadline approaches. The additional data on continuation values, hence, allows for set identification of the preference parameters.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{consistent-parameters-crop.pdf}
\caption{\label{fig:example_np}The above figure illustrates the set of parameters $\beta,\delta$ that the analyst can non-parametrically identify if she correctly imposes that the agent's instantaneous utility is linear in money. The agent's true values of completing the task are uniformly drawn from $[-1,1]$ and she is time-consistent with $\beta=\delta=1$. In yellow is the case of $T=5$ periods of data and in blue that of $T=20$ periods of data.}
\end{center}
\end{figure}
Since it is the change in option value that allows identification, one can also use other related data. For example, the willingness to pay for extending the deadline reflects the drop in continuation value, and therefore would also give rise to a rich data set that would allow non-parametric set identification. Again, however, our example suggests that many such observations are needed, suggesting that a tight estimation of agents' time inconsistency requires ``extremely rich'' task-completion data.
\paragraph{Generalizations of this Methodology} We think of Theorem \ref{thm:non-para_identification} as a proof of concept, and analysts can adapt it to the data at hand and the assumptions they are willing to make. For example, it is in principle straightforward to adapt the above analysis to allow for partial naivete. In that case, however, one needs to be careful to account not only for the probability mass and expectation of falling between two actual continuation values but also to differentiate whether a given probability mass falls above or below the anticipated continuation values $c_t$. An analog to Lemma \ref{lem:mass_points_sufficient} implies that this can be done with $2T+1$ mass points. In this case, however, for intervals that are bounded by anticipated and not actual continuation values, the probability that $y_t$ falls into this interval is unknown. As a result, the analyst needs to choose both the mass point and the weight on it (with the appropriate constraints from the observed stopping behavior), giving rise to quadratic constraints. While this can be solved numerically using standard techniques, a simple transparent closed-form solution as in the case of Theorem \ref{thm:non-para_identification} is unavailable. Similarly, because we only need to consider a finite number of mass points, one can allow for non-linear utility in money, which---imposing that utility is increasing in money---requires the analyst to choose increasing utility values $u(m_t)$ in addition to the mass points.\footnote{If the analyst wants to impose risk-aversion in money, this adds simple (linear) constraints that ensure that the slope of $u$ is non-increasing in $m_t$. Again, this can be solved using standard numerical techniques.}
\paragraph{Time-Preferences over Money} One important aspect of our procedure is that it does not (explicitly or implicitly) impose constraints on how the agent handles monetary payments at different points in time. It is sufficient for contemporaneous utility to be separable in money, and the marginal utility of receiving money to be the same across periods. This assumption is consistent with an intertemporal set-up in which the agent can borrow and lend at given interest rates---in which case the interest rate determines how she trades off monetary payments at different points in time \citep{ericsonlaibsonreview,ramsey}. But it is also consistent with an agent narrow bracketing and consuming small monetary payments immediately---or reasoning as if she does so---so long as she trades off money and effort consistently over time. The procedure outlined in this section thus works for either specification of the agent's time preferences over monetary payments.
\section{Discussion}
\label{sec:discussion}
Our results establish a strong form of non-identifiability in that---absent data on continuation values---even with ideal stopping data in which the analyst observes the exact stopping probability for each individual separately, without parametric assumptions nothing can be learned regarding the agent's discount factor, taste for immediate gratification, or degree of sophistication. In reality, an analyst is likely to observe a large group of agents and infer their average stopping probability; if the group is homogeneous our analysis applies. If individuals, however, in addition differ in their unobservable payoff distribution or time preferences, the analyst's problem becomes even more difficult. In that case, for example, it is easy to generate non-monotone stopping probabilities for the overall population. As a simple example, suppose there are two types of agents in the population that face a three-period mandatory task-completion problem. The first type stops in each period with probability 1, while the second type only stops in the final period. If $\alpha >0$ is the fraction of the first type, then the aggregate stopping probability is $\alpha$ in the first period, $0$ in the second, and $1- \alpha$ in the final period, which is clearly non-monotone.\footnote{See \cite{heffetzodonoghue} for a more detailed discussion of heterogeneity as well as empirical evidence on its importance in determining when individuals pay their parking fines.}
Importantly, we establish our formal results for the specific task-completion setting analyzed, and they should not be misconstrued as implying complete non-identifiability of the quasi-hyperbolic discounting model in other settings. In richer and different datasets, it is possible to identify $\beta,\hat{\beta}$ more directly. For example, lotteries (or contracts) that pay off differently depending on the agent's own future behavior can be used to reveal whether the agent misperceives her own future behavior and, hence, whether she is (partially) naive in the quasi-hyperbolic discounting model \citep[see, for example,][]{dellavignamalmendier,Spiegler_2011_book}. Similarly, if the agent is willing to pay for reducing her choice set or for imposing a fine for certain future actions, she values commitment and---within the quasi-hyperbolic discounting framework---must be time-inconsistent \citep[see, for example,][]{strotz}. Such identification strategies, however, rely on data that is fundamentally different from the task-completion data for which we establish the impossibility of non-parametric identification.
Indeed, even in the closely related, but different, problem of task-timing \citep{carrollchoi,laibson3}, in which the benefits from doing the task start accumulating as soon as the agent finishes it, it is possible to construct examples in which an agent wants to commit to an earlier deadline, implying that at least partial identification of perceived present-bias ($\hat{\beta} \neq 1$) is theoretically feasible. While agents may theoretically benefit from imposing a deadline in such task-timing problems, the calibration of the example in \cite{laibson} suggests that their willingness to do so is small, indicating that identifying time-inconsistency may nevertheless be challenging in real-world data.
The broader economic lesson from our analysis is that conclusions about time-preferences can quickly be driven by seemingly innocuous parametric assumptions. Our results on set-identification with richer data illustrate, however, that it is possible---and in our setting surprisingly easy---to avoid functional form assumptions. We, thus, think of these results as a proof of concept for the feasibility of non-parametric analysis within the quasi-hyperbolic discounting framework.
Finally, let us emphasize the obvious fact: even though present-bias is non-identifiable in our task-completion settings absent data on continuation values, present-bias may still be a major driver of the widespread observation that agents complete tasks at the last minute. Our results simply caution that the observed task-completion behavior in these settings on its own is not enough to conclude that present-bias is widespread.
\section*{Appendix}
Define the function $g:\mathbb{R} \to \mathbb{R}$ as
\begin{equation}\label{eq:def-g}
g(w) = \hat{\beta} \, \delta \int_{w}^\infty z \,d F(z) + F(w) \, \delta \, w\,.
\end{equation}
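For a payoff distribution with finitely many atoms, $g$ can be evaluated directly; the following minimal sketch (an illustration only, which adopts the convention that atoms located exactly at $w$ are counted in $F(w)$, so that the integral runs over $(w,\infty)$) makes the definition concrete.
\begin{verbatim}
# Minimal sketch of g for a discrete F with atoms `atoms` and weights `weights`.
def g(w, atoms, weights, beta_hat, delta):
    tail = sum(f * z for z, f in zip(atoms, weights) if z > w)   # int_w^inf z dF(z)
    Fw = sum(f for z, f in zip(atoms, weights) if z <= w)        # F(w)
    return beta_hat * delta * tail + Fw * delta * w
\end{verbatim}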
As the following lemma formally establishes, $g$ has a number of convenient properties.
\begin{lemma}\label{lem:prop-g} The function $g$ has the following properties:
\begin{compactenum}[i)]
\item For all $t \in \{1,\ldots,T-1\}$, the perceived continuation values satisfy $ \left(\nicefrac{\hat{\beta}}{\beta}\right)\,\,v_{t} = g\left( \left(\nicefrac{\hat{\beta}}{\beta}\right)\,\,v_{t+1} \right)$.
\item $g(w)$ is non-decreasing for $w \geq 0$, is right-continuous, and has only upward jumps.
\end{compactenum}
Let $\delta<1$. Then $g$ has the following additional properties:
\begin{compactenum}
\item[iii)] $g(w) > w$ for all $w < 0$ and there exists $\bar{w}>0$ such that $g(w)< w$ for all $w>\bar{w}$.
\item[iv)] Let $w^\star = \inf \{ w \in \mathbb{R} \colon g(w) \leq w\}$. Then $w^\star$ satisfies $g(w^\star)=w^\star$ and $w^\star \geq 0$.
\item[v)] If $w' \geq 0 > w$, then $g(w') \geq g(w).$
\end{compactenum}
\end{lemma}
\noindent {\bf Proof of Lemma \ref{lem:prop-g}:} $i)$ follows immediately from Lemma \ref{lem:rec-representation}. To see that $ii)$ holds, observe that we can rewrite $g$ as
\begin{align}
g(w) &= \hat{\beta}\, \delta \int_{w}^\infty z \,d F(z) + \hat{\beta} F(w) \, \delta \, w + (1-\hat{\beta}) F(w) \, \delta \, w \nonumber \\
&= \hat{\beta}\, \delta \int_{-\infty}^\infty \max \{ z, w\} \,d F(z) + (1-\hat{\beta}) F(w) \, \delta \, w \label{eq:alt-representation}\,.
\end{align}
Note that both the first and the second summand are non-decreasing for $w \geq 0$, and that the first summand is continuous in $w$ while the second is right-continuous and has only upward jumps as $F$ is a CDF.
To see that $iii)$ holds, observe that the integral in the first summand of \eqref{eq:alt-representation} is bounded from below by $w$ and, thus, for $w<0$
\[
g(w) \geq \hat{\beta}\, \delta \, w + (1-\hat{\beta}) F(w) \, \delta \, w = \delta w - (1-\hat{\beta})(1-F(w))\,\delta\,w \geq \delta w> w \,.
\]
For establishing the second part, note that
\begin{align*}
\lim_{w \nearrow \infty}\frac{g(w)}{w} &=\lim_{w \nearrow \infty} \left \{ \hat{\beta}\, \delta \int_{-\infty}^\infty \max \left\{ \frac{z}{w}, 1\right\} \,d F(z) + (1-\hat{\beta}) F(w) \, \delta \right\} = \hat{\beta}\, \delta + (1-\hat{\beta}) \, \delta = \delta\, <1.
\end{align*}
We next argue that this implies that there exists a $\bar{w}$ such that for all $w> \bar{w}$, $g(w) <w$. Suppose otherwise; then there exists a sequence $w_k \nearrow \infty$ such that $g(w_k) \geq w_k$ and, thus, $\liminf_{k \to \infty} \frac{g(w_k)}{w_k} \geq 1$, contradicting the fact that this limit equals $\delta<1$.
We now show $iv)$. Observe that since $g$ has only upward jumps, $w\mapsto g(w)-w$ has only upward jumps. Because by $iii)$ the set $\{ w \in \mathbb{R} \colon g(w) \leq w\}$ is non-empty, the fact that $w\mapsto g(w)-w$ has only upward jumps implies that $w^\star = \inf \{ w \in \mathbb{R} \colon g(w) \leq w\}$ satisfies $g(w^\star)=w^\star$. Furthermore, it follows immediately from $iii)$ that the set $\{ w \in \mathbb{R} \colon g(w) \leq w\}$ contains only $w \geq 0$, and hence that $w^\star \geq 0$.
To show $v)$, note that for $0 \geq w$ Equation \ref{eq:alt-representation} together with $w' \geq 0$ implies that
\begin{align}
g(w') -g(w) = & \hat{\beta}\, \delta \left[ \int_{-\infty}^w (w' -w ) dF(z) + \int_w^{w'} (w' -z) dF(z) \right] \nonumber \\
& + (1- \hat{\beta}) \delta \left[ F(w') w' - F(w) w \right] \geq 0,\nonumber
\end{align}
where the inequality follows from the facts that $w' \geq 0$ and $w \leq 0$. \qed
\bigskip
\noindent {\bf Proof of Theorem \ref{prop:monotone-values}:} That statements i) and ii) of the theorem are equivalent is argued in the main text. Here, we prove statement i).
We begin by establishing the result for $\delta<1$. Trivially, Self $T$'s perceived continuation value is $v_T = \underline{y} \leq 0$. Define $w^\star = \min \{ w \in \mathbb{R} \colon g(w) \leq w\}\geq 0$, which is well defined by Lemma \ref{lem:prop-g}, $iii) $ and $iv)$. By Lemma \ref{lem:prop-g}, $i)$ and $iv)$, we have that
\begin{equation}
w^\star - \left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_t = g(w^\star) - g\left(\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1} \right) \,.
\label{eq_something}
\end{equation}
As $v_T = \underline{y} \leq 0$ and $w^\star \geq 0$ (by Lemma \ref{lem:prop-g}, $iv$)), we have that $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_T \leq w^\star$. We now proceed by induction to show that $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_t\leq w^\star$ for all $t$.
We distinguish two cases. First, suppose $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1} \geq 0$. In this case the monotonicity of $g$, established in Lemma \ref{lem:prop-g}, $ii)$, together with Equation \ref{eq_something} implies that $w^\star - \left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t}$ is non-negative whenever $w^\star - \left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1}$ is non-negative and, thus, by induction $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_t \leq w^\star$. Second, if $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1} < 0$, then by Lemma \ref{lem:prop-g}, $v)$, $g(w^\star) \geq g(\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1}) $ and hence it follows from Equation \ref{eq_something} that $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_t \leq w^\star$. We conclude that $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_t \leq w^\star$ for all $t\in\{1,\ldots,T\}$.
Hence, since $\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1} \leq w^\star$, we have
\[
\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1} \leq g(\left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t+1}) = \left(\nicefrac{\hat{\beta}}{\beta}\right) \, v_{t} \Rightarrow v_{t+1} \leq v_{t}\,.
\]
Finally, we establish the result for $\delta=1$. First, note that the right-hand-side of \eqref{eq:def-g} is continuous in $\delta$ and as $\left(\nicefrac{\hat{\beta}}{\beta}\right)\,\,v_{t} = g\left( \left(\nicefrac{\hat{\beta}}{\beta}\right)\,\,v_{t+1} \right)$ by Lemma \ref{lem:prop-g} $i)$, it follows that the continuation values $v_1,\ldots,v_T$ are continuous in $\delta$. Let $v_t^\delta$ be the continuation value in period $t$ as a function of $\delta$. We already established that $v_t^\delta - v_{t+1}^\delta \geq 0$ for all $\delta<1$. By continuity, we have that $v_t^1 - v_{t+1}^1 = \lim_{\delta \nearrow 1} v_t^\delta - v_{t+1}^\delta \geq 0$.
\qed
\bigskip
\noindent {\bf Proof of Theorem \ref{thm:non-identifiability-sophisticate}:} Fix a non-decreasing sequence of stopping probabilities $0 < p_1 \leq p_2 \leq \ldots \leq p_T < 1$, $(\delta,\beta) \in (0,1] \times (0,1]$, and a penalty $\nicefrac{\underline{y}}{\beta \delta} \in \mathbb{R}$. We will construct a distribution $F$ that implies the stopping probabilities $p$ for a sophisticate.
Pick any perceived first-period cutoff $c_1 >0$ such that
\[
c_1 > \max \left\{0, -(1 - \beta) \,\delta \underline{y} \,\,\frac{1-(\delta \, \frac{1-p_T}{2})^{T-1}}{(1-(\delta \, \frac{1-p_T}{2})) (\delta \, \frac{1-p_T}{2})^{T-1}} \right\}.
\]
Using $\hat{\beta}=\beta$ in Lemma \ref{lem:rec-representation}, the perceived continuation values satisfy
\begin{equation}
v_t = \begin{cases} \beta \,\delta \int_{v_{t+1}}^\infty z \,d F(z) + F(v_{t+1}) \, \delta \, v_{t+1} & \text{ for }t \in \{1,\ldots,T-1\}\\
\underline{y} &\text{ for } t=T\end{cases}\,.
\end{equation}
Let $F$ be the sum of $T+2$ Dirac measures
\begin{align}\label{eq:def-F-sophisticate}
F(x; v) &= \sum_{k=0}^{T+1} f_k\, \mathbf{1}_{\pi_k(v) \leq x} ,
\end{align}
at the mass points $\pi_0,\ldots,\pi_{T+1}$, where for $k \in \{0,\ldots,T\}$
\begin{equation*}
\pi_k(v) = \begin{cases} \underline{y} - c_1 &\text{ if } k =0\\
\underline{y} &\text{ if } k =1\\
v_{T-k+1} &\text{ if } k \in \{ 2,\ldots,T\}
\end{cases}\,.
\end{equation*}
Let the probability of each mass point be given by
\begin{equation*}
f_k = \begin{cases} (1-p_T)/2 &\text{ if } k = 0,1\\
p_{T-k+2} - p_{T-k+1} &\text{ if } k \in \{ 2,\ldots,T\}\\
p_1 &\text{ if } k = T+1
\end{cases}\,.
\end{equation*}
Note that $f_0>0$ as $p_T<1$. Since the mass points of $F$ are exactly at the continuation values, we get that for $t \in \{1,\ldots,T-1\}$ the recursive equation for the continuation values $v$ simplifies to a recursive equation for the mass points $\pi$; i.e.
\begin{align}
\pi_{T+1-t} &= \beta \,\delta \int_{v_{t+1}}^\infty z \,d F(z) + F(v_{t+1}) \, \delta \, v_{t+1} = \beta \,\delta \sum_{j=T-t+1}^{T+1} f_j \pi_j + \delta \, \left( \sum_{j=0}^{T-t} f_j \right) \pi_{T-t} \label{eq:cont_value_mass} \\
\Rightarrow \pi_k &= \beta \,\delta \sum_{j=k}^{T+1} f_j \pi_j + \delta \,\left( \sum_{j=0}^{k-1} f_j \right) \pi_{k-1} \hspace{3mm}\text{ for } k \in \{2,\ldots,T\} \label{eq:pik}\,.
\end{align}
We furthermore restrict attention to distributions for which Equation \eqref{eq:cont_value_mass} is also satisfied for $t=T$, i.e. for which $\pi_1$ satisfies Equation \eqref{eq:pik} evaluated at $k=1$. In that case, \eqref{eq:pik} implies that for $k \in \{2,\ldots,T\}$,
\begin{align}\label{eq:pi-forward}
\left( \pi_k - \pi_{k-1} \right) &= (1 - \beta) \,\delta f_{k-1} \pi_{k-1} + \delta \,\sum_{j=0}^{k-2} f_j \left( \pi_{k-1} - \pi_{k-2} \right) \,.
\end{align}
As \eqref{eq:pi-forward} can be solved forward and $\pi_0,\pi_1$ are known, we can use it to determine $\pi_2 , \ldots, \pi_T$. Given the values $\pi_0, \ldots, \pi_T $, we can determine $\pi_{T+1}$ by solving \eqref{eq:pik} for $k=T$
\begin{align*}
\pi_T &= \beta \delta (f_T \pi_T + f_{T+1} \pi_{T+1}) + \delta \pi_{T-1} \left( \sum_{j=0}^{T-1} f_j \right)\,.
\end{align*}
Denote this solution by $\pi^\star$. If $\pi^\star$ is strictly increasing then the distribution defined in \eqref{eq:def-F-sophisticate} has mass points exactly at the continuation values $v$ and leads to the given stopping probabilities $p$.
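The construction can also be carried out numerically. The following sketch (our own illustration with hypothetical names) forward-solves \eqref{eq:pi-forward} starting from $\pi_0$ and $\pi_1$ and then pins down $\pi_{T+1}$ from \eqref{eq:pik} evaluated at $k=T$, exactly as described above; the variable \texttt{y\_pen} stands for the penalty $\underline{y}$.
\begin{verbatim}
# Sketch of the construction in the proof: mass points pi_0,...,pi_{T+1} and
# weights f_0,...,f_{T+1}, given stopping probabilities p (list of p_1,...,p_T),
# a penalty y_pen, a constant c1 > 0 and parameters (beta, delta).
def construct_mass_points(p, y_pen, c1, beta, delta):
    T = len(p)
    f = [(1 - p[T - 1]) / 2, (1 - p[T - 1]) / 2]                 # f_0, f_1
    f += [p[T - k + 1] - p[T - k] for k in range(2, T + 1)]      # f_k, k = 2,...,T
    f.append(p[0])                                               # f_{T+1} = p_1
    pi = [y_pen - c1, y_pen]                                     # pi_0, pi_1
    for k in range(2, T + 1):                                    # recursion for pi_k - pi_{k-1}
        diff = (1 - beta) * delta * f[k - 1] * pi[k - 1] \
               + delta * sum(f[:k - 1]) * (pi[k - 1] - pi[k - 2])
        pi.append(pi[k - 1] + diff)
    pi_last = (pi[T] * (1 - beta * delta * f[T])                 # pin down pi_{T+1}
               - delta * pi[T - 1] * sum(f[:T])) / (beta * delta * f[T + 1])
    pi.append(pi_last)
    return pi, f
\end{verbatim}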
We are thus left to show that the resulting solution $\pi_0^\star, \pi_1^\star, \ldots, \pi_{T+1}^\star$ is increasing. We will show that $\pi_k^\star - \pi_{k-1}^\star >0$ by induction for $k \in \{1,\ldots,T\}$. $\pi_0^\star < \pi_1^\star$ by construction as $c_1 > 0$. We next do the induction step and assume that $\pi^\star_0 < \pi_1^\star < \ldots < \pi_{k-1}^\star$. Since for $k \geq 2$ one has $\pi^\star_{k-1} \geq \pi_1^\star = \underline{y}$, \eqref{eq:pi-forward} implies that
\begin{align}
\left( \pi_k - \pi_{k-1} \right) &\geq (1 - \beta) \,\delta \underline{y} + \delta \, f_0 \left( \pi_{k-1} - \pi_{k-2} \right) \nonumber \\
&= \alpha + \gamma \left( \pi_{k-1} - \pi_{k-2} \right)\,,
\end{align}
where $\alpha=(1 - \beta) \,\delta\, \underline{y}$ and $\gamma = \delta \, f_0 \in (0,1)$. Since $\alpha \geq 0$ whenever $\underline{y}\geq 0$, the above inequality implies that $\pi^\star$ is increasing in this case. We are left to show the result for $\underline{y}<0$, in which case $\alpha<0$. Iterating the above inequality yields
\begin{align}
\left( \pi_k^\star - \pi_{k-1}^\star \right) &\geq \alpha \sum_{j=0}^{k-2} \gamma^j + \gamma^{k-1} \left( \pi_1^\star - \pi_{0}^\star \right) \nonumber \\
&= \alpha \frac{1-\gamma^{k-1}}{1-\gamma} + \gamma^{k-1} c_1 \nonumber \\
&\geq \alpha \frac{1-\gamma^{T-1}}{1-\gamma} + \gamma^{T-1} c_1 \nonumber \\
&= \gamma^{T-1} \left( c_1 - |\alpha| \frac{1-\gamma^{T-1}}{(1-\gamma) \gamma^{T-1}} \right) > 0\,.
\end{align}
The last inequality here follows from our choice of $c_1$. We have thus shown that $\pi_0^\star < \pi_1^\star < \ldots < \pi_T^\star$. It is left to show that $\pi_T^\star < \pi_{T+1}^\star$. By choosing $c_1$ large enough, we can without loss of generality assume that $\pi_T^\star>0$. If $\pi_T^\star > 0$ and $\pi_{T+1}^\star \leq \pi_T^\star$, we have that
\begin{align*}
\pi_T^\star &= \beta \delta (f_T \pi_T^\star + f_{T+1} \pi_{T+1}^\star) + \delta \pi_{T-1}^\star \left( \sum_{j=0}^{T-1} f_j \right)\\
& \le \pi_T^\star \beta \delta f_T + \pi_{T}^\star\beta \delta f_{T+1} + \pi_{T-1}^\star \delta \left( \sum_{j=0}^{T-1} f_j \right) \\
\Leftrightarrow 1 & \le \beta \delta f_T + \beta \delta f_{T+1} + \frac{ \pi_{T-1}^\star}{\pi_{T}^\star} \delta \left( \sum_{j=0}^{T-1} f_j \right)\,.
\end{align*}
As $f_T+ f_{T+1}+ \left( \sum_{j=0}^{T-1} f_j \right)=1$, $f_0 >0$, and $\pi_{T-1}^\star <\pi_{T}^\star$, this is a contradiction and completes the proof. \qed
\bigskip
\noindent {\bf Proof of Lemma \ref{lem:aux-properties-naive}:}
\textbf{ $i)$:} By Theorem \ref{prop:monotone-values}, the subjective continuation values are weakly decreasing. For the sake of a contradiction, suppose the subjective continuation value is constant across two periods.
Denote by $\underline{m}=\min \big(\text{supp}\, F \big)$ the left end-point of the support of $F$. By assumption $\underline{m}\leq \underline{y} < 0$. By Lemma \ref{lem:prop-g} $i)$, we have that $\nicefrac{v_t}{\beta}=g(\nicefrac{v_{t+1}}{\beta})$ for all $t\in \{1,\ldots,T-1\}$, where, by Equation \ref{eq:alt-representation}, $g(x)=\delta \int_{-\infty}^\infty \max\Big\{z,x\Big\} \,d F(z)$. Note that $g$ is non-decreasing, strictly increasing for all $x\geq \underline{m}$, and that $g(x)=\delta \int_{-\infty}^\infty z \,d F(z)\geq \delta \underline{m} > \underline{m}$ for $x<\underline{m}$. Suppose that $v_{t-1}=v_{t}$ for some $t\in \{2,\ldots,T\}$. This implies that $\nicefrac{v_t}{\beta}=\nicefrac{v_{t-1}}{\beta}=g(\nicefrac{v_{t}}{\beta})$, so that $\nicefrac{v_t}{\beta}$ is a fixed point of $g$. Hence, $\nicefrac{v_t}{\beta} > \underline{m}$ and, as $g$ is strictly increasing for $x\geq \underline{m}$, there cannot exist a $\tilde{v}\neq v_t$ such that $\nicefrac{v_t}{\beta}=g(\nicefrac{\tilde{v}}{\beta})$. Hence, $v_s = v_t$ for all $s,t \in \{1,\ldots,T\}$. As $v_T = \underline{y}$, this implies that $v_t = \underline{y}$ for all $t$. By Lemma \ref{lem:prop-g} $iii)$, however, any fixed point of $g$ is non-negative, so that $\underline{y}\geq 0$, contradicting the assumption that $\underline{y} < 0$.\\
\noindent We now show $ii)$: Let $v$ be the continuation values associated with $F$ and $\tilde{v}$ the continuation values associated with $\tilde{F} \prec_{FOSD} F$. We want to show that $v_t \geq \tilde{v}_t$ for every $t\in \{1,\ldots,T\}$. We show the result by backward induction over $T$. The start of the induction is that $v_T = \tilde{v}_T =\underline{y}$. In the induction step, we show that $v_{t+1}\geq \tilde{v}_{t+1}$ implies $v_t\geq \tilde{v}_t$
\begin{align*}
\nicefrac{v_t}{\beta}&=\delta \int_{-\infty}^\infty \max\Big\{z,\nicefrac{v_{t+1}}{\beta}\Big\} \,d F(z) \geq \delta \int_{-\infty}^\infty \max\Big\{z,\nicefrac{\tilde{v}_{t+1}}{\beta}\Big\} \,d F(z) \\
&\geq \delta \int_{-\infty}^\infty \max\Big\{z,\nicefrac{\tilde{v}_{t+1}}{\beta}\Big\} \,d \tilde{F}(z) = \nicefrac{\tilde{v}_t}{\beta} \,. \ \ \ \ \ \ \ \ \ \ \qed
\end{align*}
\noindent We are now ready to prove Theorem \ref{thm:non-identifiability}.
\bigskip
\noindent {\bf Proof of Theorem \ref{thm:non-identifiability}:}
Let $G_{a,b}(x) = \max \{ \min \left\{ \frac{x - a}{b-a}, 1 \right\} ,0 \}$ be the uniform CDF on $[a,b]$ for $a<b$ and a Dirac measure $G_{a,a}(x)=\mathbf{1}_{a\leq x}$ for $a=b$. Fix some $c_1,c_2 > 0$.
Consider a non-decreasing sequence of stopping probabilities $0 < p_1 \leq \ldots \leq p_T < 1$ and for every non-increasing sequence of continuation values $v_1 \geq \ldots \geq v_{T-1}$ with $ v_{T-1} \geq \underline{y}$, define the function $F$
\begin{align*}
F(x; v) &= \sum_{k=0}^T f_k\, G_{\pi_k(v),\pi_{k+1}(v)} (x) ,
\end{align*}
where
\begin{equation*}
\pi_k(v) = \begin{cases} \underline{y} - c_1 &\text{ if } k =0\\
\underline{y} &\text{ if } k =1\\
v_{T-k+1} &\text{ if } k \in \{ 2,\ldots,T\}\\
v_{1} + c_2 &\text{ if } k = T+1
\end{cases}\,,
\end{equation*}
and
\begin{equation*}
f_k = \begin{cases} 1-p_T &\text{ if } k = 0\\
p_{T-k+1} - p_{T-k} &\text{ if } k \in \{ 1,\ldots,T-1\}\\
p_1 &\text{ if } k = T
\end{cases}\,.\
\end{equation*}
\noindent \textsl{$F$ is a distribution:} We begin by showing that $F$ is a cumulative distribution function. Note that $f_k\geq 0$ and that for $k<T$,
\begin{equation}\label{eq:sum-f}
\sum_{j=0}^k f_j = 1-p_{T-k}
\end{equation} and $\sum_{j=0}^T f_j = 1$. For every $v$, the function
$F(\cdot;v)$ is non-decreasing and non-negative as the CDF $G$ is non-decreasing and non-negative. It thus follows that $F$ is a well defined CDF with support $[\pi_0,\pi_{T+1}]=[\underline{y}-c_1, v_1 + c_2]$.\\
\noindent \textsl{Continuation values induced by $F$:} Consider now the continuation values $w$ induced by $F(\cdot;v)$. By Lemma \ref{lem:rec-representation}, they solve the equation
\begin{equation}\label{eq:fixedpoint-representation}
\frac{w_{t}}{\beta} = \delta \int_{-\infty}^\infty \max\Big\{z,\frac{w_{t+1}}{\beta}\Big\} \,d F(z;v) \,\,\,\,\, \text{ for }t \in \{1,\ldots,T-1\}\,,
\end{equation}
with $w_T=\underline{y}$. Denote by $L:\mathbb{R}^{T-1} \to \mathbb{R}^{T-1}$ the function mapping $v$ to $w$ using \eqref{eq:fixedpoint-representation}. By Theorem \ref{prop:monotone-values}, $w = L(v)$ is non-increasing. As $w$ is non-increasing and $w_T=\underline{y}$, it follows that $(Lv)_t \geq \underline{y}$ for all $t\in \{1,\ldots,T-1\}$. Furthermore, as $\text{supp} F(\cdot;v) \subseteq [\underline{y}-c_1, v_1 + c_2]$
\begin{align*}
w_1 &= \beta \delta \int_{-\infty}^\infty \max\Big\{z,\frac{w_{2}}{\beta}\Big\} \,d F(z;v) \leq \beta \delta \int_{-\infty}^\infty \max\Big\{v_1+c_2,\frac{w_{1}}{\beta}\Big\} \,d F(z;v)\\
&= \delta \beta \max\Big\{(v_1 + c_2), \frac{w_1}{\beta}\Big\} \leq \delta \beta (v_1 + c_2) \leq \delta (v_1 + c_2) \,.
\end{align*}
Thus, if $v_1 \leq \frac{\delta}{1-\delta} c_2$, we have that
\[
w_t\leq w_1 \leq \delta (v_1 + c_2) \leq \frac{\delta}{1-\delta} c_2 \,.
\]
Consequently, $L$ maps $M$ into itself, where $M$ is the set of non-increasing sequences contained in $[\underline{y},\frac{\delta}{1-\delta} c_2]^{T-1}$, i.e.
\[
M=\left\{m \in \Big[\underline{y},\frac{\delta}{1-\delta} c_2\Big]^{T-1}: m_1 \geq m_2 \geq \ldots \geq m_{T-1} \right\}\,.
\]
\noindent \textsl{Any fixed-point of $L$ induces a solution:} We next argue that if $w^\star \in \mathbb{R}^{T-1}$ is a fixed point of $L$ then the distribution $F(\cdot ;w^\star)$ induces the stopping probabilities $p$ and thus solves our problem. By Lemma \ref{lem:aux-properties-naive} $i)$, any fixed-point must be strictly decreasing
$w_1^\star > w_2^\star > \ldots > w_{T-1}^\star$. As $w^\star$ is a fixed point of $L$, the agent stops in period $t$ if and only if $y_t \geq w_t^\star$, which happens with probability
\begin{align*}
\mathbb{P}[ y > w_t^\star ] &= 1-F(w_t^\star;w^\star) = 1- \sum_{k=0}^T f_k\, G_{\pi_k(w^\star),\pi_{k+1}(w^\star)} (w_t^\star) = 1- \sum_{k=0}^T f_k\, \ind{\pi_{k+1}(w^\star)\leq w_t^\star} \\
&= 1- \sum_{k=1}^{T-1} f_k\, \ind{w_{T-k}^\star \leq w_t^\star} - f_0\ind{\underline{y}-c_1 \leq w_t^\star} - f_T \ind{w_1^\star + c_2 \leq w_t^\star}\\
&= 1- \sum_{k=0}^{T-t} f_k\, = 1- (1-p_t) = p_t \,.
\end{align*}
Here, the second-to-last equality uses \eqref{eq:sum-f}. Thus, any distribution associated with a fixed point of $L$ induces the correct stopping probabilities. \\
\noindent \textsl{$L$ has a fixed-point:} It remains for us to argue that $L$ has a fixed point. We note that $M$ is a complete bounded lattice, as the point-wise maximum (minimum) over non-increasing sequences is again non-increasing.\footnote{To see this, note that $(\underline{y},\ldots,\underline{y})$ is a minimal element and $(\frac{\delta}{1-\delta}c_2,\ldots,\frac{\delta}{1-\delta}c_2)$ is a maximal element. Furthermore, the point-wise infimum and supremum over any subset of $M$ lie in $M$.} We next note that $F$ respects first order stochastic dominance (FOSD), i.e., if $v \geqq w$ then $F(\cdot; v)$ is greater than $F(\cdot; w)$ in FOSD.\footnote{We use the notation $\geqq$ for point-wise comparisons.} By Lemma \ref{lem:aux-properties-naive} $ii)$, increasing the distribution of payoffs in FOSD will (weakly) increase the subjective continuation values.
As a consequence $L$ is a monotone operator, i.e. $L (v) \geqq L(w)$ if $v \geqq w$. By Tarski's fixed point theorem, $L$ thus has a fixed point on the lattice $M$.\\
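Numerically, the fixed point can be obtained by iterating $L$ from the maximal element of $M$; by monotonicity the iterates decrease and, upon (numerical) convergence, approximate a fixed point. The following sketch (purely illustrative, with hypothetical names; it assumes $\delta<1$ and represents $F(\cdot;v)$ as the mixture of uniform distributions defined above) implements this iteration.
\begin{verbatim}
# E[max(z, c)] for z uniform on [a, b]; a Dirac measure at a when a == b.
def emax_uniform(c, a, b):
    if a == b:
        return max(a, c)
    if c <= a:
        return (a + b) / 2.0
    if c >= b:
        return c
    return (c * (c - a) + (b * b - c * c) / 2.0) / (b - a)

# One application of the operator L to a candidate vector v = (v_1,...,v_{T-1}).
def apply_L(v, p, y_pen, c1, c2, beta, delta):
    T = len(p)
    pi = [y_pen - c1, y_pen] + [v[T - k] for k in range(2, T + 1)] + [v[0] + c2]
    f = [1 - p[T - 1]] + [p[T - k] - p[T - k - 1] for k in range(1, T)] + [p[0]]
    w = [None] * (T + 1)
    w[T] = y_pen
    for t in range(T - 1, 0, -1):
        c = w[t + 1] / beta
        w[t] = beta * delta * sum(fk * emax_uniform(c, pi[k], pi[k + 1])
                                  for k, fk in enumerate(f))
    return w[1:T]

# Iterate from the maximal element of M until numerical convergence (delta < 1).
def fixed_point(p, y_pen, c1, c2, beta, delta, tol=1e-12, max_iter=10000):
    T = len(p)
    v = [delta / (1 - delta) * c2] * (T - 1)
    for _ in range(max_iter):
        w = apply_L(v, p, y_pen, c1, c2, beta, delta)
        if max(abs(a - b) for a, b in zip(v, w)) < tol:
            return w
        v = w
    return v
\end{verbatim}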
\noindent \textbf{Uniqueness:} Finally, we note that as the subjective continuation values $w^\star$ are strictly decreasing, $F(\cdot;w^\star)$ has no mass points. Consequently, the probability that the agent is ever indifferent between stopping and continuing equals zero. Thus, any perception perfect equilibrium leads to the same stopping probabilities $p$. \qed
\bigskip
\noindent {\bf Proof of Lemma \ref{lem:mass_points_sufficient}:} Let the pair $u,G$ solve \eqref{eq:constraints-sophisticate}. From now on, fix $u$. Let $\mathbb{E}_G$ denote the expectation taken with respect to the cumulative distribution function $G$, and $\mathbb{P}_G$ the probability mass with respect to $G$.
We now specify a distribution $F$ that has the properties specified in the Lemma. The $T+1$ mass points $(\pi_0, \ldots,\pi_T)$ are located at
\begin{equation*}
\pi_k = \begin{cases} \mathbb{E}_G[y|y \leq v_T] &\text{ if } k = 0\\
\mathbb{E}_G[y|v_{T-k+1} < y \leq v_{T-k}] &\text{ if } k \in \{ 1,\ldots,T-1\}\\
\mathbb{E}_G[y|v_1 < y] &\text{ if } k = T
\end{cases}\,.\
\end{equation*}
and their probability mass is given by $f_k$ as specified in the Lemma. Observe that by construction, we have
$$
\pi_0 \leq v_T < \pi_1 \leq v_{T-1} < \ldots \leq \pi_{T-1} \leq v_1 < \pi_T.
$$
Since $G$ solves \ref{eq:constraints-sophisticate} and $1 - F(v_t) = p_t$ for all $t\in \{1, \ldots,T \}$ by construction, one has
$$
1 - F(v_t) = 1-G(v_t) \ \ \forall t \in \{1, \ldots, T\}.
$$
Furthermore,
\begin{align}
\nonumber
\int_{v_{t+1}}^\infty z \,d G(z) & = \sum_{k=T - t}^{T-1} \mathbb{E}_G[y|v_{T-k+1} < y \leq v_{T-k}] \,\mathbb{P}_G[v_{T-k+1} < y \leq v_{T-k}] + \mathbb{E}_G[y|v_1 < y] \,\mathbb{P}_G[v_1 < y] \\ \nonumber
&= \sum_{k=T - t}^{T} f_k \pi_k\\ \nonumber
& = \int_{v_{t+1}}^\infty z \,d F(z) \,. \nonumber
\end{align}
Thus, since $u,G$ solve \eqref{eq:constraints-sophisticate}, so do $u,F$. \qed
\bigskip
\noindent {\bf Proof of Theorem \ref{thm:non-para_identification}:} Lemma \ref{lem:mass_points_sufficient} implies for a plausible data set that \eqref{eq:constraints-sophisticate} admits a solution if and only if there exists $\pi \in \mathbb{R}^{T+1}, f \in \Delta^{T+1}$ and a monotone function $u$ such that
\begin{align}
v_t &=u(m_t) \ \ & \forall t \in \{ 1,\ldots, T\}\, ,\\
\pi_0 \leq &v_T < \pi_1 \leq v_{T-1} < \ldots \leq \pi_{T-1} \leq v_1 < \pi_T, \label{eq:order-pi}\\
\sum_{k=T-t}^T\pi_k f_k &= \frac{\delta^{-1}\,v_t - (1-p_{t+1}) \, v_{t+1}}{\beta} \ \ \ & \forall t \in \{ 1,\ldots, T-1\}\, ,\label{eq:rec-eq-mass}\\
\sum_{k=T-t+1}^T f_k &= p_t \ \ , & \forall t \in \{ 1,\ldots, T\}\label{eq:f-equal-p}\,.
\end{align}
Equation \ref{eq:f-equal-p} is equivalent to $f_T = p_1, f_0 = 1-p_T$ and for all $t \in \{2,\ldots,T\}$
\begin{align*}
p_t - p_{t-1} = \sum_{k=T-t+1}^T f_k - \sum_{k=T-t+2}^T f_k = f_{T-t+1} \,,
\end{align*}
and thus completely determines $f$.
From now on we thus consider $f$ as given.
Equation \ref{eq:rec-eq-mass} for $t=1$ is equivalent to
\[
\pi_{T-1} f_{T-1} + \pi_{T} f_{T} = \frac{\delta^{-1}\,v_1 - (1-p_{2}) \, v_{2}}{\beta} \,\,.
\]
We note that there exists $\pi$ satisfying the above equation and \eqref{eq:order-pi} if and only if
\begin{equation}
v_{2} f_{T-1} + v_1 f_{T} < \frac{\delta^{-1}\,v_1 - (1-p_{2}) \, v_{2}}{\beta} \,\,.
\end{equation}
That this is necessary follows as \eqref{eq:order-pi} provides a lower bound on $\pi_{T-1}$ and $\pi_T$. Since $f_T= p_1>0$, this is also sufficient, as one can always choose $\pi_T$ arbitrarily large. Rearranging for $\beta$ and plugging in $f$ yields
\begin{equation}
\beta < \frac{\delta^{-1}\,v_1 - (1-p_{2}) \, v_{2}}{v_{2} (p_2 - p_1) + v_1 p_1} \,.
\end{equation}
Next, we consider \eqref{eq:rec-eq-mass} for $t \in \{2,\ldots,T-1\}$. Subtracting \eqref{eq:rec-eq-mass} evaluated at $t-1$ from \eqref{eq:rec-eq-mass} evaluated at $t$ yields
\begin{align*}
\pi_{T-t} f_{T-t} &= \sum_{k=T-t}^T\pi_k f_k - \sum_{k=T-t+1}^T\pi_k f_k = \frac{\delta^{-1}\,v_t - (1-p_{t+1}) \, v_{t+1}}{\beta} - \frac{\delta^{-1}\,v_{t-1} - (1-p_{t}) \, v_{t}}{\beta}\, ,
\end{align*}
which is equivalent to
$$
\pi_{T-t} = \frac{v_{t+1} (p_{t+1} - p_t) - \delta^{-1} (v_{t-1} - v_t) + (1-p_t) (v_t - v_{t+1})}{\beta (p_{t+1} - p_t)}.
$$
The above equation admits a solution satisfying \eqref{eq:order-pi} if and only if for all $t \in \{2, \ldots, T-1\}$, $v_{t+1} <\pi_{T-t} \leq v_t$. Rewriting using the definition of $a(\delta, t)$ from the statement of the theorem, \eqref{eq:rec-eq-mass} admits a solution satisfying \eqref{eq:order-pi} if and only if for all $t \in \{2, \ldots, T-1\}$ both $ v_{t+1} \beta < v_{t+1} a(\delta,t) $ and $v_{t} \beta \geq v_{t+1} a(\delta,t)$ hold, and in addition \eqref{eq:beta-at} is satisfied.
This completes the proof.\qed
\renewcommand{\baselinestretch}{1} \normalsize
\section{Introduction}
In this paper we study the number of solutions
of the diophantine equation
\begin{equation}{\label{eq:main}}
\sum_{i=1}^k \frac{1}{x_i}=1,
\end{equation}
in particular, where the $x_i$ have some restrictions, such as all $x_i$ are
distinct odd positive integers.
Let us first review what is known for distinct positive integers,
without further restriction:
Let
\[{\cal X}_k=\{(x_1, x_2, \ldots,x_k): \sum_{i=1}^k \frac{1}{x_i}=1, \quad
0<x_1<x_2< \cdots < x_k\}.\]
It is known that
\begin{equation}{\label{eq:bounds}}
\exp \left( \exp \left(((\log 2)(\log 3) +o(1))\frac{k}{\log k}\right)\right)
\leq |{\cal X}_k| \leq c_0^{(\frac{5}{3}+\varepsilon) \, 2^{k-3} },
\end{equation}
where $c_0=1.264\ldots$ is $\lim_{n \rightarrow \infty} u_n^{1/2^n}$,
$u_1=1$, $u_{n+1}=u_n (u_n+1)$.
The lower bound is due to Konyagin \cite{konyagin},
the upper bound due to Browning and
Elsholtz \cite{browning-elsholtz}.
Earlier results on the upper and lower bounds were
due to S\'{a}ndor \cite{sandor}
and Erd\H{o}s, Graham and Straus (see \cite{erdosandgraham}, page 32).
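For small $k$, the elements of ${\cal X}_k$ can be enumerated directly by a bounded recursive search. The following short sketch (given only as an illustration, in Python with exact rational arithmetic) counts them; for instance, it reproduces the single solution $(2,3,6)$ for $k=3$.
\begin{verbatim}
from fractions import Fraction

def count_X(k, target=Fraction(1), prev=0):
    """Number of integer tuples prev < x_1 < ... < x_k with 1/x_1+...+1/x_k = target."""
    if k == 1:
        return int(target.numerator == 1 and target.denominator > prev)
    total = 0
    x = max(prev + 1, int(1 / target) + 1)     # need 1/x < target
    while Fraction(k, x) > target:             # need target < k/x
        total += count_X(k - 1, target - Fraction(1, x), x)
        x += 1
    return total

# count_X(3) == 1 (the solution 2, 3, 6); count_X(4) == 6
\end{verbatim}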
The set of solutions has also been investigated with various restrictions
on the variables $x_i$.
A quite general and systematic investigation
of expansions of $\frac{a}{b}$ as a sum of unit fractions with restricted
denominators is due to Graham \cite{graham}.
Elsholtz, Heuberger, Prodinger \cite{elsholtz-heuberger-prodinger}
gave an asymptotic formula for the number of solutions of
({\ref{eq:main}}), with two main terms, when the $x_i$ are
(not necessarily distinct) powers of a fixed integer $t$.
Another prominent case is when all denominators $x_i$ are odd.
Sierpi\'{n}ski \cite{sierpinski} proved that a nontrivial solution exists.
It is known
that for $k=9$ there are exactly 5 solutions, and for $k=11$, there are exactly
379,118 solutions (see \cite{shiu, arce-nazario-castro-figueroa}).
\section{Introduction}
\label{intro-midpointrule}
\subsection{Preliminary remarks}
In this paper we consider linear \genabelinteqspur of the following form,
\begin{eqnarray}
\klasm{Au}\klasm{\varx} =
\mfrac{1}{\Gamma\klami{\alpha}}
\ints{0}{\varx}
{(\varx-\vary)^{\alp-1} k\kla{\varx,\vary} u\klami{\vary} }
{d \vary}
= f\klasm{\varx}
\for \intervalarg{\varx}{0}{\xmax},
\label{eq:weaksing-inteq}
\end{eqnarray}
with $ 0 < \alpha < 1 $
and $ \xmax > 0 $,
and with a sufficiently smooth kernel function
$ k: \inset{(x,y) \in \reza^2 \ \mid \ 0 \le y \le x \le \xmax } \to \reza $,
and $ \Gamma $ denotes Euler's gamma function.
Moreover, the function $ f: \interval{0}{\xmax} \to \reza $
is supposed to be approximately given,
and a function $ u: \interval{0}{\xmax} \to \reza $
satisfying equation \refeq{weaksing-inteq} is to be determined.
In the sequel we suppose that the kernel function does not vanish on the
diagonal $ 0 \le \varx = \vary \le \xmax $, and
\mywlog we may assume that
\begin{align}
k\klasm{\varx,\varx} = 1 \for
\intervalarg{\varx}{0}{\xmax}
\label{eq:k_eq_one}
\end{align}
holds.
For the approximate solution of equation \refeq{weaksing-inteq} with an exactly given \rhs $ f $, there exist many quadrature methods,
see \eg \mycitebtwo{Brunner}{van der Houwen}{Brunner_Houwen[86]}, \mycitea{Linz}{85},
and \mycitea{Hackbusch}{95}.
One of these methods is the \repmidrule which is considered in detail, \eg in
\myciteb{Weiss}{Anderssen}{72} and in \mycitea{Eggermont}{79}, see also
\cite[Section 10.4]{Linz[85]}.
In the present paper we investigate,
for perturbed \rhss in equation \refeq{weaksing-inteq},
the regularizing properties of the \repmidrule.
The smoothness of the solution is classified in terms of H\"older continuity of the function and of its derivatives. We also give a new proof of the inverse stability of the quadrature weights which
relies on Banach algebra techniques and may be of independent interest.
Finally, some numerical illustrations are presented.
\subsection{The Abel integral operator}
As a first step we consider in \refeq{weaksing-inteq}
the special situation $ k \equiv 1 $. On the other hand, for technical reasons we allow arbitrary intervals $ \interval{0}{b} $ with $ 0 < b \le a $ instead of the fixed interval $ \interval{0}{a} $ which allows to extend the obtained results for arbitrary kernels $ k $.
The resulting integral operator is the Abel integral operator
\begin{align}
\Ialp{\myfun}{\varx}
=
\mfrac{1}{\Gamma\klami{\alpha}}
\ints{0}{\varx}
{ \klasm{\varx-\vary}^{\alpha-1} \myfun\klami{\vary} }
{d \vary}
\for \intervalarg{\varx}{0}{\myb},
\label{eq:ialp_def}
\end{align}
where $ \myfun: \interval{0}{\myb} \to \reza $ is supposed to be a piecewise \cont function. One of the basic properties of the Abel integral operator
is as follows,
\begin{align}
\Ialp{\mon{q}}{x}
=
\tfrac{\Gamma\klafn{q+1}}{\Gamma\klafn{q+1+\alpha}}
\cdott x^{q+\alpha}
\for x \ge 0
\qquad \kla{q \ge 0},
\label{eq:ialp_monom}
\end{align}
where $ \mon{q} $ is short notation for the mapping $ y \mapsto y^q $.
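The identity \refeq{ialp_monom} can be checked numerically. In the following sketch (purely illustrative), the substitution $ w = (\varx-\vary)^{\alpha} $ removes the weak endpoint singularity, so a plain composite midpoint rule in $ w $ suffices; the result is compared with the closed form.
\begin{verbatim}
from math import gamma

def abel_of_monomial(x, q, alpha, m=20000):
    # 1/Gamma(alpha) * int_0^x (x-y)^(alpha-1) y^q dy, rewritten via w = (x-y)^alpha
    # as 1/(alpha*Gamma(alpha)) * int_0^{x^alpha} (x - w^(1/alpha))^q dw.
    W = x ** alpha
    h = W / m
    s = sum((x - ((j + 0.5) * h) ** (1.0 / alpha)) ** q for j in range(m))
    return h * s / (alpha * gamma(alpha))

x, q, alpha = 1.5, 2.0, 0.5
print(abel_of_monomial(x, q, alpha))                              # quadrature value
print(gamma(q + 1) / gamma(q + 1 + alpha) * x ** (q + alpha))     # closed form
\end{verbatim}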
In the sequel, frequently we make use of the following elementary estimate:
\begin{align}
\sup_{0 \le x \le \myb}
\modul{\Ialp{\myfun}{x}}
\le \mfrac{\myb^\alpha}{\Gamma(\alpha+1)}
\sup_{0 \le y \le \myb}
\modul{\myfun\klasm{y}} \qquad
\kla{\myfun : \interval{0}{\myb} \to \reza \textup{\ piecewise continuous}}.
\label{eq:ialp_norm}
\end{align}
Other basic properties of the Abel integral operator can be found
\eg~in \myciteb{Gorenflo}{Vessella}{91}
or \mycitea{Hackbusch}{95}.
\section{The \repmidrule for Abel integrals}
\label{midpointrule-basics}
\subsection{The method}
For the numerical approximation of the Abel integral operator \refeq{ialp_def}
we introduce equidistant \gridpoints
\begin{align}
\xn = \n \mydeltax, \qquad \n = k/2, \quad k = 0, 1, \ldots,2\N,
\with \mydeltax = \frac{\xmax}{\N},
\label{eq:grid-points}
\end{align}
where $ \N $ is a positive integer.
For a given \cont function $ \myfun: \interval{0}{\xn} \to \reza
\ (\n \in \inset{\myseqq{1}{2}{\N} }) $, the \repmidrule for the numerical approximation of the Abel
integral $ \Ialp{\myfun}{\xn} $
is obtained by replacing
the function $ \myfun $ on each subinterval
$ \interval{\xjm}{\xj}, \ j = 1,2,\ldots,\n $,
by the constant term $ \myfunjb $, respectively:
\begin{align}
\Ialp{\myfun}{\xn}
& \approx
\gammaalpinv
\mysum{\jod=1}{\n}
\schweifklala{
\ints{\varx_{\jod-1}}{\varx_\jod} {\klasm{\xn -\vary}^{\alpha-1} }{d \vary}
}
\myfunjb
\label{eq:midpoint-rule-start}
\\
&=
\gammaalpinvone
\mysum{\jod=1}{\n}
\schweifklabi{ \kla{\xn - \xjm}^{\alpha} - \kla{\xn - \xj}^{\alpha} }
\myfunjb
\nonumber
\\
&=
\mfrac{\halp}{\gammaalpone}
\mysum{\jod=1}{\n}
\schweifklabi{
\kla{\n - \jod + 1}^{\alpha} \minus \kla{\n - \jod}^{\alpha}
}
\myfunjb
\nonumber
\\[-1mm]
&=
\mydeltaxalp \mysum{\jod=1}{\n} \an[\n-\jod] \myfunjb
=:
\Ialph{\myfun}{\xn},
\label{eq:midpoint-rule}
\end{align}
where the quadrature weights $ \an[0], \an[1], \ldots $ are given by
\begin{align}
\an[s] & =
\gammaalpinvone
\schweifklabi{\klasm{s+1}^{\alp} - s^{\alp}}
\for s = 0, 1, \ldots \ .
\label{eq:omegan-def}
\end{align}
The weights
have the asymptotic behavior
$ \myomegan = \frac{1}{\gammaalp} \n^{\alpha-1} \plus \Landauno{\n^{\alpha-2}} $
as $ \n \to \infty $.
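For completeness, the following sketch (a Python illustration under our own naming conventions, not a reference implementation) computes the weights \refeq{omegan-def} and the approximations \refeq{midpoint-rule} on the full grid; for $ \myfun(y)=y^2 $ the value at the final grid point can be compared with the closed form \refeq{ialp_monom}.
\begin{verbatim}
from math import gamma

def midpoint_weights(N, alpha):
    return [((s + 1) ** alpha - s ** alpha) / gamma(alpha + 1) for s in range(N)]

def abel_midpoint(phi, a, N, alpha):
    # returns approximations of (I^alpha phi)(x_n) for n = 1,...,N, with x_n = n*h
    h = a / N
    w = midpoint_weights(N, alpha)
    vals = [phi((j - 0.5) * h) for j in range(1, N + 1)]   # phi at the midpoints
    return [h ** alpha * sum(w[n - j] * vals[j - 1] for j in range(1, n + 1))
            for n in range(1, N + 1)]

approx = abel_midpoint(lambda y: y ** 2, 1.5, 200, 0.5)
print(approx[-1], gamma(3) / gamma(3.5) * 1.5 ** 2.5)      # approximation vs. exact
\end{verbatim}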
\subsection{The integration error -- preparations}
In the sequel, we consider the integration error
\begin{align}
\enn{\myfun}{\xn} = \Ialp{\myfun}{\xn} - \Ialph{\myfun}{\xn}
\label{eq:midpoint-rule-error-def}
\end{align}
under different smoothness assumptions on the function $ \myfun $.
As a preparation, for $ c < d, L \ge 0, m = 0, 1,
\ldots $ and $ 0 < \beta \le 1 $, we introduce the space $ \HLc{m+\beta}{c}{d} $
of all functions $ \myfun : \interval{c}{d} \to \reza $ that are continuously
differentiable up to order $m$, and the derivative $ \ableit{\myfun}{m} $ of order
$ m $ is H\"older continuous of order $ \beta $ with H\"older constant $ L \ge 0
$, \ie
\begin{align}
\HLc{m+\beta}{c}{d}
= \inset{\varphi \in C^m\interval{c}{d}
\mid
\modul{\ableit{\varphi}{m}(x) - \ableit{\varphi}{m}(y)} \le L \modul{x-y}^\beta
\for x, y \in \interval{c}{d}}.
\label{eq:hoelder-with-L}
\end{align}
The space of H\"older continuous functions of order $ m + \beta $ on the interval
$ \interval{c}{d} $ is then given by
\begin{align*}
\Hspc{m+\beta}{c}{d}
= \inset{\varphi: \interval{c}{d} \to \reza
\mid
\myfun \in \HLc{m+\beta}{c}{d} \text{ for some constant } L \ge 0}.
\end{align*}
Other notations for those spaces are quite common, \eg $ C^{m,\beta}\interval{c}{d} $,
\cf \cite[section 2]{Brunner[04]}.
As a preparation, for $ n \in \inset{\myseqq{1}{2}{N}} $ and
$ \myfun: \interval{0}{\xn} \to \reza $
we introduce the corresponding piecewise constant interpolating spline
$ \ph \myfun: \interval{0}{\xn} \to \reza $, \ie
\begin{align}
(\ph \myfun)(\vary) \equiv \myfunjb
\for \xjm \le \vary < \xj \qquad \kla{\jod = 1,2,\ldots,\n},
\label{eq:ph-def}
\end{align}
and in the latter case $ j = \n $, this setting is also valid for $ \vary = \xn $.
For $ \myfun \in \Hsp{\gamma}{\xn} $ with $ 0 < \gamma \le 1 $,
it follows from zero order Taylor expansions at the \gridpoints
that
\begin{align}
\myfun(y) = (\ph \myfun)(y) + \Landauno{h^\gamma},
\quad 0 \le y \le \xn,
\label{eq:interpol-error-1}
\end{align}
uniformly both on $ \interval{0}{\xn} $ and for $ \myfun \in \HLc{\gamma}{0}{\xn} $, with any
arbitrary but fixed constant $ L \ge 0 $, and also uniformly for $ n = 1,2,\ldots, \N $.
We consider the smooth case $ \myfun \in C^1\interval{0}{\xn},
\ \n \in \inset{\myseqq{1}{2}{\N} } $, next.
Let
$ \qh \myfun: \interval{0}{\xn} \to \reza $ be given by
\begin{align}
(\qh \myfun)(\vary) = \myfunjb + (\vary-\xjb)\myfunpjb
\for \xjm \le \vary < \xj \quad \kla{\jod = 1,2,\ldots, \n},
\label{eq:qh-def}
\end{align}
and
in the latter case $ j = \n $, this definition is extended to the case $ \vary = \xn $.
For $ \myfun \in \Hsp{\gamma}{\xn} $ with $ 1 < \gamma \le 2 $,
first order Taylor expansions at the \gridpoints
yield
\begin{align}
\myfun(y) = (\qh \myfun)(y) + \Landauno{h^\gamma},
\quad 0 \le y \le \xn,
\label{eq:interpol-error-2}
\end{align}
uniformly in the same manner as for \refeq{interpol-error-1}.
\subsection{The integration error}
We are now in a position to consider, under different smoothness conditions on the function $ \varphi $, representations for the integration errors $ \enn{\myfun}{\xn} $ introduced in \refeq{midpoint-rule-error-def}.
\begin{lemma}
Let $ n \in \inset{\myseqq{1}{2}{N}} $, and moreover let
$ \myfun: \interval{0}{\xn} \to \reza $ be a continuous function.
We have the following representations for the quadrature error
$ \enn{\myfun}{\xn} $ introduced in \refeq{midpoint-rule-error-def}:
\begin{myenumerate_indent}
\item
We have
\begin{align}
\enn{\myfun}{\xn} = \Ialpkla{\myfun- \ph \myfun}{\xn}.
\label{eq:midpoint_error_0}
\end{align}
\item
For $ \myfun \in C^1\interval{0}{\xn} $
we have
\begin{align}
\enn{\myfun}{\xn} =
\halpone\mysum{\jod=1}{\n} \tau_{\n-\jod} \myfunpjb
+ \Ialpkla{\myfun- \qh \myfun}{\xn},
\label{eq:midpoint_error_1}
\end{align}
where
\begin{align}
\tau_n
& =
\mfrac{1}{\gammaalptwo}
\schweifkla{\klasm{\n+1}^{\alpone} \minus \n^{\alpone }}
-\mfrac{1}{2\gammaalpone}
\schweifkla{\klasm{\n+1}^{\alp} \plus \n^{\alp}}
\forsm \n = 0, 1, \ldots \ .
\label{eq:interr-beta+1-c}
\end{align}
\end{myenumerate_indent}
\label{th:midpoint-error}
\end{lemma}
\proof
The error representation \refeq{midpoint_error_0}
is an immediate consequence of the identities \refeq{midpoint-rule-start} and \refeq{midpoint-rule}.
For the verification of the second error representation \refeq{midpoint_error_1},
we use the decomposition
\begin{align*}
\enn{\myfun}{\xn} = \Ialpkla{\myfun- \ph \myfun}{\xn} =
\Ialpkla{\qh \myfun - \ph \myfun}{\xn} + \Ialpkla{\myfun -\qh \myfun}{\xn},
\end{align*}
and we have to consider the first term on the \rhs in more detail.
Elementary computations show that
\begin{align}
\gammaalpinv \ints{\xjm}{\xj}{ \klasm{\xn - \vary}^{\alpha-1} \klasm{\vary \minus \varxjb}}
{d y}
= \halpone \tau_{\n-\jod}
\for \jod = \myseqq{1}{2}{\n}.
\label{eq:interr-beta+1-b}
\end{align}
From \refeq{interr-beta+1-b}, the second error representation \refeq{midpoint_error_1} already follows.
\proofendspruch[ of the lemma]
\bn
A Taylor expansion of the \rhs of \refeq{interr-beta+1-c}
shows that the coefficients $ \tau_{\myl} $ have the following
asymptotic behavior:
\begin{align}
\tau_\myl &=
\mfrac{1-\alp}{12 \gammaalp} \myl^{\alpha-2} \plus \Landauno{\myl^{\alpha-3} }
\as \myl \to \infty.
\label{eq:taul_asymp}
\end{align}
Lemma \ref{th:midpoint-error} is needed in the proof of our main theorem. It is stated in explicit form here since it immediately becomes clear from this lemma that,
for each $ \myfun \in \Hsp{\gamma}{a} $ with $ 0 < \gamma \le \alpone $,
the interpolation error satisfies
\begin{align*}
\enn{\myfun}{\xn} = \Landauno{h^\gamma} \as h \to 0
\end{align*}
uniformly for $ n = 0, 1, \ldots, \N $. This follows from
\refeq{interpol-error-1} and \refeq{interpol-error-2}, and from the absolute summability
$ \mysumtxt{n=0}{\infty}\modul{\taun} \allowbreak < \infty $,
\cf \refeq{taul_asymp}.
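The asymptotic behaviour \refeq{taul_asymp} is also easily confirmed numerically; the following small sketch (purely illustrative) evaluates $ \tau_\myl $ from \refeq{interr-beta+1-c} together with the leading term for a few values of $ \myl $.
\begin{verbatim}
from math import gamma

def tau(n, alpha):
    return ((n + 1) ** (alpha + 1) - n ** (alpha + 1)) / gamma(alpha + 2) \
           - ((n + 1) ** alpha + n ** alpha) / (2 * gamma(alpha + 1))

alpha = 0.5
for n in (10, 100, 1000):
    print(n, tau(n, alpha), (1 - alpha) / (12 * gamma(alpha)) * n ** (alpha - 2))
\end{verbatim}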
\section{The \repmidrule for \weaklysingular first-kind Volterra integral equations with perturbations}
\label{quad-error}
\subsection{Some preparations}
We now return to the \weaklysingular integral equation \refeq{weaksing-inteq}. For the numerical approximation we consider this equation at grid points
$ \xn = \n \mydeltax, n = 1, 2, \ldots,\N $ with
$ \mydeltax = \lfrac{\xmax}{\N} $, \cf\refeq{grid-points}.
The resulting integrals are approximated by the \repmidrule, respectively, see \refeq{midpoint-rule} with $ \myfun(y) = k\klasm{\xn,y} u\klasm{y} $ for
$ \intervalargno{y}{0}{\xn} $.
\Inthesequel we suppose that the \rhs of equation \refeq{weaksing-inteq} is only approximately given with
\begin{align}
\modul{ \fndelta - f\klasm{\xn} } \le \delta
\for n = \myseqq{1}{2}{\nmax},
\label{eq:rhs-assump}
\end{align}
where $ \delta > 0 $ is a known noise level. For this setting, the \repmidrule for the numerical solution of equation \refeq{weaksing-inteq} looks as follows:
\begin{align}
\halp \mysum{j=1}{n} \an[n-j] \cdott k\kla{\xn,\varxjb} \cdott \undeltab[j]
=
\fndelta, \qquad n = 1, 2, \ldots, \nmax.
\label{eq:midpoint-noise-rule}
\end{align}
The approximations $ \undeltab \approx u(\varxnb) $ for $ j = 1, 2, \ldots, \nmax $
can be determined recursively by using scheme \refeq{midpoint-noise-rule}.
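Since the scheme \refeq{midpoint-noise-rule} is lower triangular, these approximations can be computed by forward substitution. The following sketch (an illustration with hypothetical names, not a reference implementation) makes this recursion explicit; fdelta holds the perturbed values $ \fndelta $, and k is the kernel function.
\begin{verbatim}
from math import gamma

def solve_midpoint_scheme(fdelta, k, a, alpha):
    # returns approximations of u at the midpoints x_{1/2}, x_{3/2}, ..., x_{N-1/2}
    N = len(fdelta)
    h = a / N
    w = [((s + 1) ** alpha - s ** alpha) / gamma(alpha + 1) for s in range(N)]
    u = []
    for n in range(1, N + 1):
        xn = n * h
        acc = sum(w[n - j] * k(xn, (j - 0.5) * h) * u[j - 1] for j in range(1, n))
        u.append((fdelta[n - 1] / h ** alpha - acc) / (w[0] * k(xn, (n - 0.5) * h)))
    return u
\end{verbatim}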
For the main error estimates, we impose the following conditions.
\myassump{
\begin{myenumerate}
\item
\label{item:assump-u}
There exists a solution $ u: \interval{0}{\xmax} \to \reza $ to the integral equation
\refeq{weaksing-inteq} with $ u \in \Hsp{\gamma}{\xmax} $, where
$ \calp \defeq \min\{\alp,1-\alp\} < \gamma \le 2 $.
\item
\label{item:assump-k=1}
There holds $ k\klasm{\varx,\varx} = 1 $ for each $ \intervalarg{\varx}{0}{\xmax}
$.
\item
\label{item:assump-k-smooth}
The kernel function $ k $ has Lipschitz continuous partial derivatives up to the order 2.
\item
The \gridpoints $ \xn $ are given by \refeq{grid-points}.
\item
The values of the \rhs of equation \refeq{weaksing-inteq}
are approximately given at the \gridpoints, \cf\refeq{rhs-assump}.
\end{myenumerate}
}{midpoint-assump}
\subsection{\Fps}
As a preparation for the proof of the main stability result of the present paper,
\cf{}Theorem \ref{th:main-midpoint},
we next consider \powser. \Inthesequel we identify sequences $ (b_\myl)_{\myl \ge 0} $ of complex numbers with their (formal) \fps
$ b\klasm{\myxi} = \sum_{\myl=0}^{\infty} b_\myl \myxi^\myl $,
with $ \myxi \in \koza $.
Pointwise multiplication of two \powser
\begin{align*}
\klala{\mysum{\kaa=0}{\infty} b_\kaa \myxi^\kaa}
\cdot
\klala{\mysum{j=0}{\infty} c_j \myxi^j}
=
\mysum{n=0}{\infty} d_n \myxi^n,
\with
d_n \defeq
\mysum{\kaa=0}{n} b_\kaa c_{n-\kaa}
\fortwo n = 0, 1, \ldots
\end{align*}
makes the set of \fps
into a complex commutative algebra with unit element $ 1 + 0 \cdot \myxi + 0 \cdot \myxi^2 + \cdots $~.
For any \fps
$ b\klasm{\myxi} = \sum_{\myl=0}^{\infty} b_\myl \myxi^\myl $
with $ b_0 \neq 0 $, there exists a
\fps which inverts the \fps $ b $ with respect to pointwise multiplication,
and it is denoted by
$ 1/b\klasm{\myxi} $ or by
$ \powinv{b\klasm{\myxi}} $. For a thorough introduction to formal \fps see,
\eg~\mycitea{Henrici}{74}.
\Inthesequel we consider the \inverse
\begin{align}
\powinv{\myomega\klasm{\myxi}}
=
\mysum{n=0}{\infty} \aninv \myxi^n
\label{eq:ainv-def}
\end{align}
of the \genfunc
$ \myomega\klasm{\myxi} =
\sum_{n=0}^{\infty} \an \cdott \myxi^n $, with $ \an $ as
in \refeq{omegan-def}.
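The coefficients of the inverse series can be computed recursively from $ a_0 \omega_0 = 1 $ and $ \sum_{k=0}^{n} a_k \omega_{n-k} = 0 $ for $ n \ge 1 $, where $ a $ denotes the inverse series. The following short Python sketch (our illustration only) implements this recursion; with the weights $ \an $ from \refeq{omegan-def} as input it produces the coefficients $ \aninv $ of \refeq{ainv-def}, so that the sign pattern and decay stated in the following lemma can be checked numerically.
\begin{verbatim}
import numpy as np

def inverse_power_series(b, N):
    """Coefficients a[0..N] of the formal power series 1/b(xi),
    given b[0..N] with b[0] != 0."""
    a = np.zeros(N + 1)
    a[0] = 1.0 / b[0]
    for n in range(1, N + 1):
        a[n] = -sum(b[j] * a[n - j] for j in range(1, n + 1)) / b[0]
    return a
\end{verbatim}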
\begin{lemma}
The coefficients in \refeq{ainv-def} have the following properties:
\begin{align}
& \aninv[0] > 0,
\qquad
\aninv < 0 \for n = 1,2,\ldots,
\label{eq:omeganinv-negative} \\
& \aninv[0] = \Gamma(\alpone)
= \mysum{n=1}{\infty} \modul{\aninv},
\label{eq:omeganinv-sum} \\
& \aninv = \Landauno{n^{\malpone}} \as n \to \infty.
\label{eq:omeganinv-decay}
\end{align}
\label{th:omeganinv-props}
\end{lemma}
Estimate \refeq{omeganinv-decay} can be found in \mynocitea{Eggermont}{79}.
Another proof of \refeq{omeganinv-decay} which uses Banach algebra theory and may be of independent interest is given in section \ref{abel-stability} of the present paper.
Section \ref{abel-stability} also contains proofs of the other statements in Lemma \ref{th:omeganinv-props}.
Lemma \ref{th:omeganinv-props} is needed in the proof of our main result, \cf Theorem
\ref{th:main-midpoint} below and section \ref{appendix_b}.
We state the lemma here in explicit form since it is fundamental in the stability estimates.
\subsection{The main result}
We next present the first main result of this paper, cf.~the following theorem, where
different smoothness assumptions on the solution $ u $ are considered. For comments on the estimates presented
in the theorem, see Remark \ref{th:main-midpoint-remark} below.
\begin{theorem}
\label{th:main-midpoint}
Let the conditions of Assumption \ref{th:midpoint-assump} be
satisfied, and consider the approximations $ \myseqq{\undelta[1/2]}{\undelta[3/2]}{\undeltab[\nmax]} $
determined by scheme \refeq{midpoint-noise-rule}.
Let $ \calp \defeq \min\{\alp,1-\alp\} $.
\begin{myenumerate}
\item
If $ \calp < \gamma \le 1 + \calp $, then we have
\begin{align}
\max_{\mj=\myseqq{1}{2}{\nmax}}
\modul{\undeltab - u\klasm{\varxnb} }
= \Landauno{h^{\gamma-\calp} + \mfrac{\delta}{\halp}} \as \hdeltatonull.
\label{eq:th-main-midpoint-a}
\end{align}
\item Let $ 2-\alp < \gamma \le 2 $, and in addition let $ u(0) = \prim{u}(0) = 0 $ be satisfied.
Then
\begin{align}
\max_{\mj=\myseqq{1}{2}{\nmax}}
\modul{\undeltab - u\klasm{\varxnb} }
=
\Landauno{h^{\gamma-1+\alp} + \mfrac{\delta}{\halp}} \as \hdeltatonull.
\label{eq:th-main-midpoint-c}
\end{align}
\end{myenumerate}
\end{theorem}
The proof of Theorem \ref{th:main-midpoint} is given in section \ref{appendix_b}.
Below we give some comments on Theorem \ref{th:main-midpoint}.
\begin{remark}
\label{th:main-midpoint-remark}
\begin{myenumerate}
\item
\label{it:alp_le_1_2}
In the case $ \alp \le \tfrac{1}{2} $ we have the following estimates:
\begin{align*}
\max_{\mj=\myseqq{1}{2}{\nmax}} \modul{\undelta - u\klasm{\varxnb} }
= \left\{ \begin{array}{rl}
\Landauno{h^{\gamma-\alp} + \mfrac{\delta}{\halp}}, & \textup{if} \ \alp < \gamma \le \alpone, \\
\Landauno{h^{\gamma-1 +\alp} + \mfrac{\delta}{\halp}}, & \textup{if} \ 2-\alp < \gamma \le 2,
\ u(0) = \prim{u}(0) = 0.
\end{array}
\right.
\end{align*}
\item
\label{it:alp_ge_1_2}
In the case $ \alp \ge \tfrac{1}{2} $ the following estimates hold:
\begin{align*}
\max_{\mj=\myseqq{1}{2}{\nmax}} \modul{\undelta - u\klasm{\varxnb} }
=
\Landauno{h^{\gamma- 1 + \alp} + \mfrac{\delta}{\halp}}, & \textup{ if } \ 1-\alp < \gamma \le 2-\alp, \\[-3mm]
& \textup{ or if } \ 2-\alp < \gamma \le 2, \ u(0) = \prim{u}(0) = 0.
\end{align*}
\item
\label{it:weiss_eggermont}
The noise-free rates, obtained for $ \gamma = 1 $ and $ \gamma = 2 $,
basically coincide with those given in the papers by
\myciteb{Weiss}{Anderssen}{72} and by Eggermont \mynocitea{Eggermont}{81}.
\item
\label{it:max_rate}
The maximal rate in the noise-free case $ \delta = 0 $ is
$ \Landauno{h} $ without initial conditions, and it is
obtained for $ \gamma = 1 +\calp $. This rate is indeed maximal, which can be seen by considering
the error at the first \gridpoint $ x_{1/2} $, obtained for the function $ u(y) = y $,
\cf \myciteb{Weiss}{Anderssen}{72}.
Under the additional assumption $ u(0) = \prim{u}(0) = 0 $, the maximal rate
is $ \Landauno{h^{\alpone}} $, obtained for $ \gamma = 2 $.
\item It is not clear whether the presented rates are optimal.
\remarkend
\end{myenumerate}
\end{remark}
\Inthesequel for \stepsizes $ h = \xmax/N $ we write, with a slight abuse of notation,
$ h \sim \delta^\beta $ as $ \delta \to 0 $,
if there exist real constants $ c_2 \ge c_1 > 0 $ such that
$ c_1 h \le \delta^{\beta} \le c_2 h $ holds for all sufficiently small $ \delta > 0 $.
As an immediate consequence of Theorem \ref{th:main-midpoint}
we obtain the following main result of this paper.
\begin{corollary}
Let Assumption \ref{th:midpoint-assump} be satisfied.
\begin{mylist_indent}
\item
Let $ \alpha \le 1/2 $ and $ \alpha < \gamma \le \alpone $. For
$ h = h(\delta) \sim \delta^{1/\gamma}$ we have
\begin{align*}
\max_{\mj=1,2, \ldots, \nmax} \modul{\undeltab \minus u\klasm{\varxnb}}
= \Landauno{\delta^{1-\lfrac{\alp}{\gamma}}}
\as \delta \to 0.
\end{align*}
\item
Let one of the following two conditions be satisfied:
(a) $ \alpha \ge 1/2, \, 1- \alpha < \gamma \le 2-\alp $, or (b)
$ \gamma > 2-\alp, \, u(0) = \prim{u}(0) = 0 $. Then for $ h = h(\delta) \sim \delta^{1/(\gamma-1+2 \alp)} $ we have
\begin{align*}
\max_{\mj=1,2, \ldots, \nmax} \modul{\undeltab \minus u\klasm{\varxnb}}
= \Landaubi{\delta^{1- \tfrac{\alp}{\noklafn{\gamma-1+2\alp}}}}
\as \delta \to 0.
\end{align*}
\end{mylist_indent}
\label{th:noise-corollary}
\end{corollary}
Note that in the case $ \alpha < \tfrac{1}{2} $, for the class of functions satisfying the initial conditions $ u(0) = \prim{u}(0) = 0 $,
there is a gap for $ \alpone < \gamma \le 2 - \alp $ where no improvement in the rates is obtained, \ie we have piecewise saturation $ \Landauno{\delta^{1-\lfrac{\alp}{\gamma}}} $ for this range of $ \gamma $. This is due to the different techniques used in the proof of Theorem \ref{th:main-midpoint}.
We conclude this section with some more remarks.
\begin{remark}
\begin{myenumerate}
\item
We mention some results on other quadrature schemes for the approximate solution of \weaklysingular integral equations of the first kind. The product trapezoidal method is considered, e.g., in
\mycitea{Weiss}{72}, \mycitea{Eggermont}{81}, and in
\mynocitea{Plato}{12}. Fractional multistep methods are treated in Lubich
\cite{Lubich[86b], Lubich[87]}
and in \mynocitea{Plato}{05}.
Backward difference product integration methods are considered in
Cameron and McKee~\cite{Cameron_McKee[84],Cameron_McKee[85]}.
Galerkin methods for Abel-type integral equations are considered, e.g., in
\mycitea{Eggermont}{81} and in V\"ogeli, Nedaiasl and Sauter~\cite{Voegeli_Nedaiasl_Sauter[72]}.
Some general references are already given in the beginning of this paper.
\item
For other special regularization methods for the approximate solution of
Volterra integral equations of the first kind with perturbed \rhss
and
with possibly algebraic-type weakly singular kernels, see \eg
\mycitea{Bughgeim}{99},
\myciteb{Gorenflo}{Vessella}{91},
and the references therein.
\end{myenumerate}
\label{th:main-remark}
\end{remark}
\begin{remark}
The results of Theorem \ref{th:main-midpoint} and Corollary
\ref{th:noise-corollary} can be extended to linear Volterra integral equations of the first kind with smooth kernels, that is, for $ \alpha = 1 $. The resulting method is in fact the classical midpoint rule, and the main error estimate is as follows: if $ 0 < \gamma \le 2 $, then we have
\begin{align*}
\max_{\mj=\myseqq{1}{2}{\nmax}}
\modul{\undeltab - u\klasm{\varxnb} }
= \Landauno{h^{\gamma} + \mfrac{\delta}{h}} \as \hdeltatonull,
\end{align*}
and no initial conditions are required in this case. The choice $ h = h(\delta) \sim \delta^{1/(\gamma+1)}$ then gives
\begin{align*}
\max_{\mj=1,2, \ldots, \nmax} \modul{\undeltab \minus u\klasm{\varxnb}}
= \Landauno{\delta^{\gamma/(\gamma+1)}}
\as \delta \to 0.
\end{align*}
The proof follows the lines of the present paper, with considerable simplifications. In particular, the inverse stability results derived in section \ref{abel-stability} are not needed in this case.
We leave the details to the reader and only indicate the basic ingredients:
in this case $ \an = 1 $ and $ \taun = 0 $ for $ n = 0,1,\ldots $, and in addition $ \aninv[0] = 1, \aninv[1] = -1 $, and $ \aninv = 0 $ for $ n = 2,3,\ldots $.
For other results on the regularizing properties of the midpoint rule for solving linear Volterra integral equations of the first kind, see
\mynocitea{Plato}{17} and \mycitea{Kaltenbacher}{10}.
\label{th:alp=1}
\end{remark}
\section{Modified starting weights}
\label{start_weights}
For the \repmidrule \refeq{midpoint-rule},
applied to a continuous function $ \myfun: \interval{0}{a} \to \reza $,
and with \gridpoints as in \refeq{grid-points}, with $ 1 \le \n \le \N $ and $ \N \ge 2 $,
we now would like to dispense with the conditions
$ \myfun(0) = \prim{\myfun}(0) = 0 $.
For this purpose we consider the modification
\begin{align}
\mymetmod{h}{\myfun}{\xn} \Defeq
\myoverbrace{\halp \mysum{j=1}{n} \myomega_{n-j} \cdott \myfunjb }%
{\dis = \mymetno{h}{\myfun}{\xn}}
\plus
\halp \mysum{j=1}{2} \w{n}{j} \cdott \myfunjb
\qquad
\qquad \label{eq:midpoint-mod}
\end{align}
as an approximation to the fractional integral $ \Ialp{\myfun}{\xn} $
at each of the considered grid points $ \xn $.
See Lubich \cite{Lubich[86b], Lubich[87]}
and \mynocitea{Plato}{05}
for a similar approach for fractional multistep methods.
In \refeq{midpoint-mod},
$ \w{n}{1} $ and $ \w{n}{2} $
are correction weights for the starting values that are specified
in the following.
In fact, for each $ n = \myseqq{1}{2}{\nmax} $ the correction weights are chosen
such that the modified \repmidrule \refeq{midpoint-mod} is exact at $ \xn = nh $ for polynomials of degree $ \le 1 $,
\ie
\begin{align}
\mymetmod{h}{\mon{q}}{\xn} \eq
\Ialp{\mon{q}}{\xn}
\for q \eq 0,1.
\label{eq:startweights_ansatz}
\end{align}
\subsection{Computation of the correction weights}
For each $ n = \myseqq{1}{2}{\nmax} $, a reformulation of \refeq{startweights_ansatz} gives the following linear system of two equations for the
starting weights $ \w{n}{j}, \ j = 1,2 $:
\begin{align*}
\halp \kla{\w{n}{1} + \w{n}{2}} & = \enn{1}{\xn}, \qquad
\halpone \kla{\tfrac{1}{2}\w{n}{1} + \tfrac{3}{2}\w{n}{2}} = \enn{y}{\xn},
\end{align*}
\cf \refeq{midpoint-rule-error-def} for the introduction of $ \ennsym $.
On the other hand we have
\begin{align*}
\enn{1}{\xn} = 0, \qquad
\enn{y}{\xn} = \halpone \mysum{\jod=0}{\n-1} \tau_{\jod}.
\end{align*}
Those identities follow from representations
\refeq{midpoint_error_0} and \refeq{midpoint_error_1}, respectively.
From this we obtain
\begin{align}
-\w{n}{1} = \w{n}{2} =
\mysum{\jod=0}{\n-1} \tau_{\jod}.
\label{eq:wnj-rep}
\end{align}
This in particular means that the correction weights are independent of $ h $.
We finally note that the asymptotic behavior
of the coefficients $ \tau_{\jod} $,
\cf \refeq{taul_asymp}, implies
\begin{align}
\w{n}{j} = \Landauno{1}
\as n \to \infty \qquad
\text{for} \quad j = 1,2.
\label{eq:wnj-stable}
\end{align}
\subsection{Integration error of the \modquamet}
We now consider, for each $ n = \myseqq{1}{2}{\nmax} $, the error of the \modrepmidrule,
\begin{align}
\ennmod{\myfun}{\xn} = \Ialp{\myfun}{\xn} - \mymetmod{h}{\myfun}{\xn},
\label{eq:midpoint-rule-error-def-mod}
\end{align}
where $ \myfun: \interval{0}{a} \to \reza $ denotes a continuous function.
\begin{lemma}
Let $ n \in \inset{\myseqq{1}{2}{N}} $, and moreover let
$ \myfun \in \Hsp{\gamma}{a} $, with $ 0 < \gamma \le 2 $.
We have the following representations of the modified quadrature error
$ \ennmod{\myfun}{\xn} $ introduced in \refeq{midpoint-rule-error-def-mod}:
\begin{myenumerate_indent}
\item
In the case $ 0 < \gamma \le 1 $ we have
$ \ennmod{\myfun}{\xn} = \enn{\myfun}{\xn}
+ \Landauno{h^{\gamma + \alp}} $ as $ h \to 0 $.
\item
In the case $ 1 < \gamma \le 2 $ we have, with
$ \myfuntil(y) \defeq \myfun(y) - \myfun(0) - \prim{\myfun}(0) y $
for $ 0 \le y \le \xmax $,
\begin{align*}
\ennmod{\myfun}{\xn} = \enn{\myfuntil}{\xn}
+ \Landauno{h^{\gamma + \alp}} \as h \to 0.
\end{align*}
\end{myenumerate_indent}
Both statements hold uniformly for $ n = 1,2,\ldots, N $,
and for $ \myfun \in \HLc{\gamma}{0}{a} $, with any
constant $ L \ge 0 $.
\label{th:midpoint-error-mod}
\end{lemma}
\proof
\begin{myenumerate}
\item This follows immediately from \refeq{midpoint-mod} and
\refeq{wnj-rep}--\refeq{midpoint-rule-error-def-mod}:
\begin{align*}
\ennmod{\myfun}{\xn} =
\enn{\myfun}{\xn} + \halp \w{n}{1} \klabi{\myfunjb[x_{3/2}] - \myfunjb[x_{1/2}]}
= \enn{\myfun}{\xn} + \Landauno{h^{\gamma + \alp}} \as h \to 0.
\end{align*}
\item
Using the notation $ \mytp(y) \defeq \myfun(0) + \prim{\myfun}(0) y $, we have
$ \myfun = \myfuntil + \mytp $, and the linearity of the modified error functional gives
\begin{align*}
\ennmod{\myfun}{\xn} & =
\ennmod{\myfuntil}{\xn} + \myoverbrace{\myerrmod{h}{\mytp}{\xn}}{=0}
=
\enn{\myfuntil}{\xn}
\minus
\halp \mysum{j=1}{2} \w{nj} \cdott \myfuntil\kla{\xjb}
\\[-1mm]
& =
\enn{\myfuntil}{\xn}
+ \Landauno{h^{\gamma + \alp }},
\end{align*}
where
$ \myfuntil(y) = \Landauno{y^\gamma} $ as $ y \to 0 $ has been used,
and the boundedness of the correction weights, \cf \refeq{wnj-stable}, is also taken into account.
\proofend
\end{myenumerate}
\subsection{Application to the Abel-type first kind integral equation }
In what follows, the modified \repmidrule \refeq{midpoint-mod} is applied to numerically solve the algebraic-type weakly singular integral equation \refeq{weaksing-inteq},
with noisy data as in \refeq{rhs-assump}.
In order to make the starting procedure applicable, in the sequel we assume that the kernel $ k $ can be smoothly extended beyond the triangle $ \inset { 0 \le y \le x \le \xmax } $. For simplicity we assume that the kernel is defined on the whole square.
\myassump{
The kernel function $ k $ has Lipschitz continuous partial derivatives up to the order 2
on $ \interval{0}{\xmax} \times \interval{0}{\xmax} $.}
{kernel-square-cont}
For each $ n = 1, 2, \ldots, \nmax $, we consider the modified \repmidrule \refeq{midpoint-mod} with $ \myfun(y) = k\klasm{\xn,y} u\klasm{y} $ for $ \intervalargno{y}{0}{a} $. This results in the following modified scheme:
\begin{align}
\halp \mysum{j=1}{n} \an[n-j] k\kla{\xn,\varxjb} \undeltabmod[j]
+ \halp \mysum{j=1}{2} \w{n}{j} k\kla{\xn,\varxjb} \undeltabmod[j]
=
\fndelta, \quad n = 1, 2, \ldots, \nmax.
\label{eq:midpoint-noise-rule-mod}
\end{align}
This scheme can be realized
by first solving a linear system of two equations for the approximations
$ \undeltabmod[n] \approx u(\varxnb), \ n = 1, 2 $.
The approximations $ \undeltabmod[n] \approx u(\varxnb) $ for $ n = 3, 4, \ldots, \nmax $
can then be determined recursively by using scheme \refeq{midpoint-noise-rule-mod}.
\subsection{Uniqueness, existence and approximation properties
of the starting values}
\label{integsolvec}
We next consider uniqueness, existence and the approximation
properties of the two starting values $ \undeltamod[1/2] $ and $ \undeltamod[3/2] $.
They in fact satisfy the linear system of equations
\begin{align}
\halp \mysum{j=1}{2}
\kla{\myunderbrace{\myomega_{n-j} + \w{n}{j}}{\dis =: \ \omegbar{n}{j} }}
k\klasm{\xn,\xjb} \undeltabmod
=
\fndelta
\for
n = 1, 2,
\label{eq:midpoint_mod_eqsolve-start}
\end{align}
with the notation $ \myomega_{-1} = 0 $.
In matrix notation this linear system of equations
can be written as
\begin{align}
& & \halp
\myoverbrace{
\left( \begin{array}{@{\cdott}c@{\quad }c@{\cdott}}
\omegbar{1}{1} \cdott k\klasm{x_1,x_{1/2}} &
\omegbar{1}{2} \cdott k\klasm{x_1,x_{3/2}}
\\ [6mm]
\omegbar{2}{1} \cdott k\klasm{x_2,x_{1/2}} &
\omegbar{2}{2} \cdott k\klasm{x_2,x_{3/2}}
\end{array} \right)
}{=\Sh}
\left( \begin{array}{@{\ }c@{\ }}
\undeltamod[1/2] \\[4mm] \undeltamod[3/2]
\end{array} \right)
=
\left( \begin{array}{@{\ }c@{\ }}
\fndelta[1] \\[4mm]
\fndelta[2]
\end{array} \right).
\label{eq:S-regular}
\end{align}
\begin{lemma}
The matrix $ \Sh \in \myrnn[2] $ in \refeq{S-regular} is regular
for sufficiently small values of $ h $, and
$ \maxnorm{\Sh^{-1}} = \Landausm{1} $ as $ h \to 0 $, where
$ \maxnorm{\cdot} $ denotes the matrix norm induced by the maximum vector norm on $ \reza^2 $.
\label{th:S-regular}
\end{lemma}
\proof
We first consider the situation $ k \equiv 1 $ and denote the matrix $ \Sh $ by
$ T $ in this special case.
From
\refeq{ialp_monom}
and
\refeq{startweights_ansatz}
it follows that
\begin{align*}
\omegbar{n}{1} + \omegbar{n}{2} = \mfrac{n^\alp}{\gammaalpone}
\qquad
\tfrac{1}{2} \omegbar{n}{1} + \tfrac{3}{2}\omegbar{n}{2} = \mfrac{n^{\alpone}}{\gammaalptwo},
\quad n = 1,2.
\end{align*}
Hence the matrix $ T $ is regular and does not depend on $ h $.
We next consider the general case for $ k $. Since
$ k\klasm{x,x} = 1 $, we have $ k\klasm{x_n,x_m} \to 1 $ as $ h \to 0 $ uniformly for the four values of $ k $ considered
in the matrix $ \Sh $. This shows $ \Sh = T + \Delta $ with $ \maxnorm{\Delta}
\to 0 $ as $ h \to 0 $
so that the matrix $ \Sh $ is regular
for sufficiently small values $ h $,
with $ \maxnorm{\Sh^{-1}} $ being bounded as $ h \to 0 $.
\proofendspruch[ of the lemma]
\bn
We next consider the error of the modified \repmidrule at the first two grid
points $ x_{1/2} $ and $ x_{3/2} $.
\begin{proposition}
Let the conditions of Assumption \ref{th:midpoint-assump} and
Assumption \ref{th:kernel-square-cont} be
satisfied.
Consider the approximations
$ \undeltamod[1/2] $ and $ {\undeltamod[3/2]} $
determined by scheme \refeq{midpoint-noise-rule-mod} for $ n = 1,2 $.
Then we have
\begin{align*}
\max_{\mj=1,2}
\modul{\undeltabmod - u\klasm{\varxnb} }
= \Landauno{h^{\gamma} + \mfrac{\delta}{\halp}} \as \hdeltatonull.
\end{align*}
\label{th:midpoint-mod-starting-values}
\end{proposition}
\begin{proof}
From \refeq{midpoint-mod}, \refeq{midpoint-rule-error-def-mod}
and Lemma \ref{th:midpoint-error-mod},
applied with
$ \myfun_n(y) = k\klasm{\xn,y} u\klasm{y} $ for $ 0 \le y \le a $,
we obtain the representation
\begin{align*}
\halp \mysum{j=1}{2} \omegbar{n}{j} \cdott k\klasm{\xn,\xn[j-1/2]} \cdott
\enndeltamod[j-1/2]
= \ennmod{\myfun_n}{\xn}
+ \fndelta - f(\xn)
= \Landauno{h^{\gamalp} + \delta}
\for n = 1, 2
\end{align*}
as $ \hdeltatonull $, where $ \enndeltamod = \undeltamod[j-1/2] -
u\klasm{x_{j-1/2}}, \ j = 1,2 $, and the weights
$ \omegbar{n}{j} $ are introduced in \refeq{midpoint_mod_eqsolve-start}.
Note that
Lemma \ref{th:midpoint-error} and Lemma \ref{th:midpoint-error-mod}
imply, for the two integers $ n = 1, 2 $, that
$ \ennmod{\myfun_n}{\xn} = \Landauno{h^{\gamma+ \alp}} $ as $ h \to 0 $.
The proposition now follows from Lemma \ref{th:S-regular}.
\end{proof}
\subsection{The regularizing properties of the modified scheme}
\begin{theorem}
Let the conditions of Assumption \ref{th:midpoint-assump} and
Assumption \ref{th:kernel-square-cont} be satisfied.
\begin{myenumerate}
\item
In the case $ \alp \le 1/2 $ we have
\begin{align*}
\max_{\mj=\myseqq{1}{2}{\nmax}}
\modul{\undeltamod[\mj-1/2] - u\klasm{\varxnb} }
= \left\{ \begin{array}{rl}
\Landauno{h^{\gamma-\alp} + \mfrac{\delta}{\halp}}
& \textup{if} \ \alp < \gamma \le \alpone, \\[1mm]
\Landauno{h^{\gamma-1 +\alp} + \mfrac{\delta}{\halp}} & \textup{if} \ 2-\alp < \gamma \le 2.
\end{array}
\right.
\end{align*}
\item
In the case $ \alp \ge 1/2, \, 1-\alpha < \gamma \le 2 $ we have
\begin{align*}
\max_{\mj=\myseqq{1}{2}{\nmax}}
\modul{\undeltamod[\mj-1/2] - u\klasm{\varxnb} }
= \Landauno{h^{\gamma-1+\alp} + \mfrac{\delta}{\halp}} \as \hdeltatonull.
\end{align*}
\end{myenumerate}
\label{th:main-midpoint-mod}
\end{theorem}
\begin{proof}
Let $ \enndeltamod = \undeltamod[j-1/2] - u\klasm{x_{j-1/2}} $ for $ j = 1,2, \ldots, \N $.
From \refeq{midpoint-mod}, \refeq{wnj-stable}, \refeq{midpoint-rule-error-def-mod},
Lemma \ref{th:midpoint-error-mod} and Proposition \ref{th:midpoint-mod-starting-values}
we obtain the representation
\begin{align*}
\halp \mysum{j=1}{n} \omega_{n-j} k\klasm{\xn,\xn[j-1/2]}
\enndeltamod[j-1/2]
& =
\ennmod{\myfun_n}{\xn}
+ f(\xn) - \fndelta
- \halp \mysum{j=1}{2} \w{n}{j} k\klasm{\xn,\xn[j-1/2]} \enndeltamod[j-1/2]
\\
&=
\ennmod{\myfunn}{\xn}
+ \Landauno{h^{\gamalp} + \delta}
= \enn{\myfunntil}{\xn}
+ \Landauno{h^{\gamalp} + \delta}
\end{align*}
as $ \hdeltatonull $, uniformly for $ n = \myseqq{1}{2}{\nmax} $,
where $ \myfunntil = \myfunn $, if $ \gamma \le 1 $, and
$ \myfunntil(y) = \myfunn(y) - \myfunn(0) - \prim{\myfun}_n(0)y $ for $ \gamma > 1 $.
The theorem now follows by performing the same steps as in the proof of
Theorem~\ref{th:main-midpoint}.
\end{proof}
\bn
As an immediate consequence of Theorem \ref{th:main-midpoint-mod},
we can derive regularizing properties of the modified scheme.
\begin{corollary}
Let both Assumption \ref{th:midpoint-assump} and Assumption \ref{th:kernel-square-cont} be satisfied.
\begin{mylist_indent}
\item
If $ \alpha \le 1/2 $ and $ \alpha < \gamma \le \alpone $, then
choose $ h = h(\delta) \sim \delta^{1/\gamma}$. The resulting error estimate is
\begin{align*}
\max_{\mj=1,2, \ldots, \nmax} \modul{\undeltabmod \minus u\klasm{\varxnb}}
= \Landauno{\delta^{1-\lfrac{\alp}{\gamma}}}
\as \delta \to 0.
\end{align*}
\item
Let one of the following two conditions be satisfied:
(a) $ \alpha \ge 1/2, \, 1- \alpha < \gamma \le 2-\alp $, or (b)
$ 2-\alp < \gamma \le 2 $.
For $ h = h(\delta) \sim \delta^{1/(\gamma-1+2 \alp)} $ we then have
\begin{align*}
\max_{\mj=1,2, \ldots, \nmax} \modul{\undeltabmod \minus u\klasm{\varxnb}}
= \Landauno{\delta^{1- \tfrac{\alp}{\noklafn{\gamma-1+2\alp}}}}
\as \delta \to 0.
\end{align*}
\end{mylist_indent}
\label{th:noise-corollary-mod}
\end{corollary}
\section{Numerical experiments}
\label{num_exps}
We next present results of some numerical experiments with the linear \weaklysingular Volterra integral equation
of the first kind \refeq{weaksing-inteq}. The following example is considered
(for different values of $ 0 < \alpha < 1 $ and $ 0 < \myq \le 2 $):
\begin{align}
k(x,y) = \mfrac{1+xy}{1+x^2}, \qquad
f(x) = \mfrac{1}{\Gamma(q+2+\alp)}\mfrac{x^{\myq+\alp}}{1+x^2} \kla{\myq + 1+\alp + (\myq+1)x^2 }
\for \intervalarg{x,y}{0}{1},
\label{eq:numeric-a}
\end{align}
with exact solution
\kla{\cf\refeq{ialp_monom}}
\begin{align}
u\klasm{y} \eq \tfrac{1}{\Gamma(q+1)} y^\myq
\for \intervalarg{y}{0}{1},
\label{eq:numeric-b}
\end{align}
so that the conditions in \ref{item:assump-u}--\ref{item:assump-k-smooth}
of Assumption \ref{th:midpoint-assump} are satisfied with $ \gamma = q $.
We present experiments for different values of $ \alpha $ and $ q $, sometimes with correction weights, sometimes without, in order to cover all variants in Corollaries \ref{th:noise-corollary} and \ref{th:noise-corollary-mod}.
Here are additional remarks on the numerical tests.
\begin{mylist_indent}
\item
Numerical experiments are performed with \stepsizes
$ h = 1/2^m $ for $ m = \myseqq{5}{6}{11} $.
\item
For each considered \stepsize $ h $, we consider the noise level
$ \delta = \delta(h) = c h^{p + \alp} $, where $ c = 0.3 $, and
$ \Landauno{h^p} $ is the rate for exact data, supplied by
Theorems \ref{th:main-midpoint} and \ref{th:main-midpoint-mod},
with $ p = p(\alpha,\myq) $.
The expected error is then of the form $ \max_{n} \modul{\undelta - u\klasm{\xn}} = \Landauno{h^p} = \Landauno{\delta^{p/(p+\alpha)}} $ as $ h \to 0 $.
\item
In the numerical experiments, the perturbations are of the
form $ \fndelta = \fxn + \Delta_n $
with uniformly distributed random values $ \Delta_n $ with
$ \modul{\Delta_n} \le \delta $.
\item In all tables, $ \maxnorm{f} $ denotes the maximum norm of the function $ f $.
\item
The experiments are performed with the program system \octave \kla{http://www.octave.org}; a schematic Python reconstruction of one run is sketched after this list.
\end{mylist_indent}
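To indicate how such a run can be reproduced, we sketch the setting of Example \ref{th:example1} below in Python. This is our reconstruction for orientation only: the original computations were done in \octave, the weight formula for $ \an $ is assumed as in the earlier sketches, and the random seed is arbitrary.
\begin{verbatim}
import numpy as np
from math import gamma

alpha, q, xmax, N = 0.5, 2.0, 1.0, 256
h = xmax / N
p = 1.5                                   # noise-free rate for this example
delta = 0.3 * h ** (p + alpha)            # delta(h) = c * h^(p+alpha), c = 0.3

def k(x, y):
    return (1.0 + x * y) / (1.0 + x ** 2)

def f(x):
    return (x ** (q + alpha) / gamma(q + 2 + alpha) / (1.0 + x ** 2)
            * (q + 1 + alpha + (q + 1) * x ** 2))

def u_exact(y):
    return y ** q / gamma(q + 1)

m = np.arange(N + 1)
omega = ((m + 1.0) ** alpha - m.astype(float) ** alpha) / gamma(1.0 + alpha)
x = h * np.arange(1, N + 1)               # grid points x_n
xm = x - 0.5 * h                          # midpoints x_{n-1/2}
rng = np.random.default_rng(0)
f_delta = f(x) + rng.uniform(-delta, delta, size=N)   # perturbed right-hand side

u = np.zeros(N)
ha = h ** alpha
for n in range(1, N + 1):                 # forward substitution of the scheme
    s = sum(omega[n - j] * k(x[n - 1], xm[j - 1]) * u[j - 1] for j in range(1, n))
    u[n - 1] = (f_delta[n - 1] - ha * s) / (ha * omega[0] * k(x[n - 1], xm[n - 1]))

err = np.max(np.abs(u - u_exact(xm)))
print(N, delta, err, err / delta ** 0.75)
\end{verbatim}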
\begin{example}
\label{th:example1}
We first consider the situation \refeq{numeric-a}--\refeq{numeric-b},
with $ \alp = \tfrac{1}{2} $ and $ \myq = 2 $. The conditions in \ref{item:assump-u}--\ref{item:assump-k-smooth} of Assumption \ref{th:midpoint-assump} are satisfied with $ \gamma = 2 $ (also for any $ \gamma > 2 $ in fact, but then we have saturation).
We have $ u(0) = \prim{u}(0) = 0 $, so correction weights are not required here. The expected error estimate, with the choice of $ \delta = \delta(h) $ considered in the beginning of this section, is
$ \max_{n} \modul{\undelta - u\klasm{\xn}} = \Landauno{\delta^{3/4}} = \Landauno{h^{3/2}} $. The numerical results are shown in Table \ref{tab:num1}.
\begin{table}
\hfill
\begin{tabular}{|| r | c |@{\hspace{5mm} } l | c | c ||}
\hline
\hline
$ N $
& $ \delta $
& $ 100 \myast \delta/\maxnorm{f} $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ / \delta^{3/4} \ $
\\ \hline \hline
$ 32$ & $2.9 \myast 10^{-4}$ & $9.74 \myast 10^{-2}$ & $2.84 \myast 10^{-3}$ & $1.27$ \\
$ 64$ & $7.3 \myast 10^{-5}$ & $2.43 \myast 10^{-2}$ & $1.12 \myast 10^{-3}$ & $1.41$ \\
$ 128$ & $1.8 \myast 10^{-5}$ & $6.09 \myast 10^{-3}$ & $3.77 \myast 10^{-4}$ & $1.35$ \\
$ 256$ & $4.6 \myast 10^{-6}$ & $1.52 \myast 10^{-3}$ & $1.37 \myast 10^{-4}$ & $1.38$ \\
$ 512$ & $1.1 \myast 10^{-6}$ & $3.80 \myast 10^{-4}$ & $5.20 \myast 10^{-5}$ & $1.48$ \\
$1024$ & $2.9 \myast 10^{-7}$ & $9.51 \myast 10^{-5}$ & $1.89 \myast 10^{-5}$ & $1.53$ \\
$2048$ & $7.2 \myast 10^{-8}$ & $2.38 \myast 10^{-5}$ & $6.55 \myast 10^{-6}$ & $1.50$ \\
\hline
\hline
\end{tabular}
\hfill
\caption{Numerical results for Example \ref{th:example1}}
\label{tab:num1}
\end{table}
\end{example}
\begin{example}
\label{th:example2}
We next consider the situation \refeq{numeric-a}--\refeq{numeric-b},
with $ \alp = 0.9 $ and $ \myq = 0.4 $. The conditions in \ref{item:assump-u}--\ref{item:assump-k-smooth} of Assumption \ref{th:midpoint-assump} are satisfied with $ \gamma = 0.4 $. Since $ \gamma \le 1 $, correction weights are not needed here. The expected error estimate, with $ \delta = \delta(h) $ as in the beginning of this section, is
$ \max_{n} \modul{\undelta - u\klasm{\xn}} = \Landauno{\delta^{1/4}} = \Landauno{h^{0.3}} $.
The numerical results are shown in Table \ref{tab:num2}.
\begin{table}
\hfill
\begin{tabular}{|| r | c |@{\hspace{5mm} } l | c | c ||}
\hline
\hline
$ N $
& $ \delta $
& $ 100 \myast \delta/\maxnorm{f} $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ / \delta^{1/4} \ $
\\ \hline \hline
$ 32$ & $4.7 \myast 10^{-3}$ & $6.80 \myast 10^{-1}$ & $1.88 \myast 10^{-1}$ & $0.72$ \\
$ 64$ & $2.0 \myast 10^{-3}$ & $2.96 \myast 10^{-1}$ & $1.32 \myast 10^{-1}$ & $0.62$ \\
$ 128$ & $8.9 \myast 10^{-4}$ & $1.29 \myast 10^{-1}$ & $1.23 \myast 10^{-1}$ & $0.71$ \\
$ 256$ & $3.9 \myast 10^{-4}$ & $5.61 \myast 10^{-2}$ & $9.61 \myast 10^{-2}$ & $0.69$ \\
$ 512$ & $1.7 \myast 10^{-4}$ & $2.44 \myast 10^{-2}$ & $8.12 \myast 10^{-2}$ & $0.71$ \\
$1024$ & $7.3 \myast 10^{-5}$ & $1.06 \myast 10^{-2}$ & $6.77 \myast 10^{-2}$ & $0.73$ \\
$2048$ & $3.2 \myast 10^{-5}$ & $4.62 \myast 10^{-3}$ & $5.43 \myast 10^{-2}$ & $0.72$ \\
\hline
\hline
\end{tabular}
\hfill
\caption{Numerical results for Example \ref{th:example2}}
\label{tab:num2}
\end{table}
\end{example}
\begin{example}
\label{th:example3}
We next consider the situation \refeq{numeric-a}--\refeq{numeric-b},
with $ \alp = 0.2 $ and $ \myq = 0.5 $.
The conditions in \ref{item:assump-u}--\ref{item:assump-k-smooth} of Assumption \ref{th:midpoint-assump} are satisfied with $ \gamma = 0.5 $ then,
and the expected error estimate
is
$ \max_{n} \modul{\undelta - u\klasm{\xn}} = \Landauno{\delta^{0.6}} = \Landauno{h^{0.3}} $.
The numerical results are shown in Table \ref{tab:num3}.
\begin{table}
\hfill
\begin{tabular}{|| r | c |@{\hspace{5mm} } l | c | c ||}
\hline
\hline
$ N $
& $ \delta $
& $ 100 \myast \delta/\maxnorm{f} $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ / \delta^{0.6} \ $
\\ \hline \hline
$ 32$ & $5.3 \myast 10^{-2}$ & $5.12 \myast 10^{0}$ & $1.18 \myast 10^{-1}$ & $0.69$ \\
$ 64$ & $3.8 \myast 10^{-2}$ & $3.62 \myast 10^{0}$ & $8.52 \myast 10^{-2}$ & $0.61$ \\
$ 128$ & $2.7 \myast 10^{-2}$ & $2.56 \myast 10^{0}$ & $7.78 \myast 10^{-2}$ & $0.69$ \\
$ 256$ & $1.9 \myast 10^{-2}$ & $1.81 \myast 10^{0}$ & $5.89 \myast 10^{-2}$ & $0.64$ \\
$ 512$ & $1.3 \myast 10^{-2}$ & $1.28 \myast 10^{0}$ & $5.19 \myast 10^{-2}$ & $0.69$ \\
$1024$ & $9.4 \myast 10^{-3}$ & $9.05 \myast 10^{-1}$ & $4.20 \myast 10^{-2}$ & $0.69$ \\
$2048$ & $6.6 \myast 10^{-3}$ & $6.40 \myast 10^{-1}$ & $3.33 \myast 10^{-2}$ & $0.68$ \\
\hline
\hline
\end{tabular}
\hfill
\caption{Numerical results for Example \ref{th:example3}}
\label{tab:num3}
\end{table}
\end{example}
\begin{example}
\label{th:example4}
We finally consider the situation \refeq{numeric-a}--\refeq{numeric-b},
with $ \alp = 0.5 $ and $ \myq = 1 $.
Then the conditions in \ref{item:assump-u}--\ref{item:assump-k-smooth} of Assumption \ref{th:midpoint-assump} are satisfied for each $ \calp < \gamma \le 2 $, and the initial conditions $ u(0) = \prim{u}(0) = 0 $ are not satisfied in this case.
The presented theory for the \repmidrule without correction weights suggests
that we have $ \max_{n} \modul{\undelta - u\klasm{\xn}} = \Landauno{\delta^{2/3}} = \Landauno{h} $.
The corresponding numerical results are shown in Table \ref{tab:num4}.
\begin{table}
\hfill
\begin{tabular}{|| r | c |@{\hspace{5mm} } l | c | c ||}
\hline
\hline
$ N $
& $ \delta $
& $ 100 \myast \delta/\maxnorm{f} $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ / \delta^{2/3} \ $
\\ \hline \hline
$ 32$ & $1.7 \myast 10^{-3}$ & $2.20 \myast 10^{-1}$ & $1.26 \myast 10^{-2}$ & $0.90$ \\
$ 64$ & $5.9 \myast 10^{-4}$ & $7.79 \myast 10^{-2}$ & $6.47 \myast 10^{-3}$ & $0.92$ \\
$ 128$ & $2.1 \myast 10^{-4}$ & $2.75 \myast 10^{-2}$ & $3.27 \myast 10^{-3}$ & $0.94$ \\
$ 256$ & $7.3 \myast 10^{-5}$ & $9.74 \myast 10^{-3}$ & $1.57 \myast 10^{-3}$ & $0.89$ \\
$ 512$ & $2.6 \myast 10^{-5}$ & $3.44 \myast 10^{-3}$ & $7.72 \myast 10^{-4}$ & $0.88$ \\
$1024$ & $9.2 \myast 10^{-6}$ & $1.22 \myast 10^{-3}$ & $3.95 \myast 10^{-4}$ & $0.90$ \\
$2048$ & $3.2 \myast 10^{-6}$ & $4.30 \myast 10^{-4}$ & $2.06 \myast 10^{-4}$ & $0.94$ \\
\hline
\hline
\end{tabular}
\hfill
\caption{Numerical results for Example \ref{th:example4}, without correction weights}
\label{tab:num4}
\end{table}
We also consider the \modrepmidrule for the same problem, \ie correction weights are used this time. The presented theory then yields
$ \max_{n} \modul{\undelta - u\klasm{\xn}} = \Landauno{\delta^{3/4}} = \Landauno{h^{3/2}} $.
The related numerical results are shown in Table \ref{tab:num5}.
\begin{table}
\hfill
\begin{tabular}{|| r | c |@{\hspace{5mm} } l | c | c ||}
\hline
\hline
$ N $
& $ \delta $
& $ 100 \myast \delta/\maxnorm{f} $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ $
& $ \ \max_{n} \modul{\undelta - u\klasm{\xn}} \ / \delta^{3/4} \ $
\\ \hline \hline
$ 32$ & $2.9 \myast 10^{-4}$ & $3.89 \myast 10^{-2}$ & $2.10 \myast 10^{-3}$ & $0.94$ \\
$ 64$ & $7.3 \myast 10^{-5}$ & $9.74 \myast 10^{-3}$ & $6.56 \myast 10^{-4}$ & $0.83$ \\
$ 128$ & $1.8 \myast 10^{-5}$ & $2.43 \myast 10^{-3}$ & $2.88 \myast 10^{-4}$ & $1.03$ \\
$ 256$ & $4.6 \myast 10^{-6}$ & $6.09 \myast 10^{-4}$ & $8.66 \myast 10^{-5}$ & $0.87$ \\
$ 512$ & $1.1 \myast 10^{-6}$ & $1.52 \myast 10^{-4}$ & $3.46 \myast 10^{-5}$ & $0.99$ \\
$1024$ & $2.9 \myast 10^{-7}$ & $3.80 \myast 10^{-5}$ & $1.22 \myast 10^{-5}$ & $0.99$ \\
$2048$ & $7.2 \myast 10^{-8}$ & $9.51 \myast 10^{-6}$ & $4.31 \myast 10^{-6}$ & $0.99$ \\
\hline
\hline
\end{tabular}
\hfill
\caption{Numerical results for Example \ref{th:example4}, with correction weights}
\label{tab:num5}
\end{table}
The last column in each table shows that the theory is confirmed in each of the five numerical experiments.
\end{example}
\section{Appendix A: Proof of Lemma \ref{th:omeganinv-props}}
\label{abel-stability}
We now present a proof of \refeq{omeganinv-decay} for the coefficients of the inverse of
the considered generating \powser $ \sum_{\n=0}^{\infty} a_\n \myxi^\n $ which differs from that given by
Eggermont \mynocitea{Eggermont}{81}. Our proof uses Banach algebra theory and may be of independent interest.
\subsection{Special sequence spaces and Banach algebra theory}
We start with the consideration of some sequence spaces in a Banach
algebra framework. For an introduction to Banach algebra theory see, \eg
\mycitea{Rudin}{91}. The following results can be found in Rogozin~\cite{Rogozin[73], Rogozin[76]},
and for completeness they are recalled here.
For a sequence of positive real weights
$ (\mytau_n)_{n \ge 0} $, consider the following norms,
\begin{align*}
\normlinfomeg{a} = \sup_{m \ge 0} \modul{a_m} \mytau_m
+ \mysum{n=0}{\infty} \modul{a_n},
\qquad
\quad
\normlone{a} = \mysum{n=0}{\infty} \modul{a_n},
\qquad a = (a_n)_{n \ge 0} \subset \koza,
\end{align*}
and the spaces
\begin{align*}
\lone & = \inset{a = (a_n)_{n \ge 0} \subset \koza \mid \normlone{a} < \infty },
\qquad \linfomeg = \inset{a = (a_n)_{n \ge 0} \subset \koza \mid \normlinfomeg{a} < \infty },
\\
\czeromeg & = \inset{a \in \linfomeg \mid a_n \mytau_n \to 0 \assh n \to \infty}.
\end{align*}
We obviously have $ \czeromeg \subset \linfomeg \subset \lone $.
By using the canonical identification
$ a(\myt) = \mysumtxt{n=0}{\infty} a_n \myt^n $,
the spaces $ \czeromeg, \linfomeg $ and $ \lone $ can be considered as function algebras on
\begin{align*}
\Dr = \inset{\xi \in \koza \mid \modul{\xi} \le 1 },
\end{align*}
the closed unit disc with center 0 and radius $ 1 $.
We are mainly interested in positive weights
$ (\mytau_n)_{n \ge 0} $ which satisfy
$ \mysumtxt{n=0}{\infty} \mytau_n^{-1} < \infty $. In that case,
$ \sup_{m \ge 0} \modul{a_m} \mytau_m $ for
$ (a_n)_{n \ge 0} \in \linfomeg $ defines a norm on
$ \linfomeg $ which is equivalent to the given norm
$ \normlinfomeg{\cdot} $. In particular, if $ \mytau_0 = 1 $ and
$ \mytau_n = n^\beta $ for $ n = 1,2, \ldots \ (\beta>1) $,
then $ \linfomeg $ is the space of
sequences $ (a_n)_{n \ge 0} $ satisfying
$ a_n = \Landauno{n^{-\beta}} $ as $ n \to \infty $.
In the sequel we assume
that
\begin{align}
\mytau_n \le c \mytau_j, \quad \tfrac{n}{2} \le j \le n, \quad n \ge 0,
\label{eq:omegan_bound}
\end{align}
holds for some finite constant $ c > 0 $.
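For instance, the weights $ \mytau_0 = 1 $ and $ \mytau_n = n^\beta $ for $ n \ge 1 $ (with $ \beta > 1 $) considered above satisfy \refeq{omegan_bound} with $ c = 2^\beta $, since $ \tfrac{n}{2} \le j \le n $ with $ j \ge 1 $ implies $ \mytau_n = n^\beta \le (2j)^\beta = 2^\beta \mytau_j $.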
We state without proof the following elementary result
(cf.~\mynocitea{Rudin}{91} for part (a) of the proposition, and
\cite{Rogozin[73], Rogozin[76]} for parts (b) and (c)).
\begin{proposition}
Let
$ \mytau_0, \mytau_1, \ldots $
be positive weights satisfying \refeq{omegan_bound}.
\begin{myenumerate}
\item
The space $ \lone $, equipped with convolution
$ (a*b)_n = \mysumtxt{j=0}{n} a_{n-j} b_j, n \ge 0 $, for $ a,b \in \lone $,
is a commutative complex Banach algebra, with unit $ e = (1,0,0,\ldots) $.
\item
The space $ \linfomeg $ is a subalgebra of $ \lone $, \ie it is closed \wrt
addition, scalar multiplication and convolution.
The norm $ \normlinfomeg{\cdot} $ is complete on $ \linfomeg $
and satisfies
\begin{align}
\normlinfomeg{a*b} \le (2c+1) \normlinfomeg{a} \cdot \normlinfomeg{b},
\qquad
a, b \in \linfomeg,
\label{eq:normlinfomegr_convolut}
\end{align}
where $ c $ is taken from estimate \refeq{omegan_bound}.
\item The statements of (b) are also valid for the space
$ \czeromeg $ (instead of $ \linfomeg $), supplied with the norm $ \normlinfomeg{\cdot} $.
\end{myenumerate}
\label{th:linf_czer_algebra}
\end{proposition}
The following proposition is based on the fact that the subalgebra generated by
$ a(\xi) = \
|
{cum}(C_j')$
does not contain variables $x{\restriction}_\lambda$ with
$\lambda\in\Lambda\backslash \Lambda_j$.
To finish the proof it is enough to observe that the concatenation of the lists $\mathit{tr}_\mathit{cum}(D_{\lambda_{j,1}});\dots;\mathit{tr}_\mathit{cum}(D_{\lambda_{j,k_j}})$ for $j\in\{0,\dots,n\}$ is $\mathit{merge}$-equivalent to $\mathit{tr}_\mathit{cum}(D_{\lambda_1});\dots;\mathit{tr}_\mathit{cum}(D_{\lambda_k})$.
Indeed, for $\lambda\in\Lambda_\emptyset$ by Lemma
\ref{lem:s-empty} $\mathit{tr}_\mathit{cum}(D_\lambda)$ is empty, and, as we have already shown, every
$\lambda\in\Lambda\setminus\Lambda_\emptyset$ belongs to
exactly one $\Lambda_j$.
\end{proof}
\begin{restatable}{lemma}{lemmacstep}
\label{lem:c-step}
Let $D'$ be a maximal derivation for $\vdash L':(S,\r)$, and let $L$ be a term that does not contain the initial nonterminal of $\mathcal S$ and such that $L\to_\mathcal S L'$.
Then there exists a maximal derivation $D$ for $\vdash L:(S,\r)$ and a term $P$ that is $\mathit{merge}$-equivalent to $\mathit{tr}_\mathit{cum}(D')$ and such that $\mathit{merge}(\mathit{tr}_\mathit{cum}(D))\to_{\mathcal S'}P$.
\end{restatable}
\noindent
The lemma is proved by induction on the structure of $L$; cf. App.~\ref{app:lem:c-step}.
The case when $L$ starts with a nonterminal uses Lemma~\ref{lem:c-subst}.
\begin{corollary}\label{cor:compl2}
Let $L$ be a term that is of sort $o$ and does not contain the initial nonterminal of $\mathcal S$, and let $M$ be an $S$-narrow tree generated by $\mathcal S$ from $L$.
Then there exists a maximal derivation $D$ for $\vdash L:(S,\r)$ such that a tree equivalent to $M$ can be generated by $\mathcal S'$ from $\mathit{merge}(\mathit{tr}_\mathit{cum}(D))$.
\end{corollary}
\begin{proof}
We proceed by induction on the smallest length of the sequence of reductions $L\to_\mathcal S^* M$.
If $L=M$, we just apply Lemma \ref{lem:c-start}.
Suppose that the length is positive, and write $L\to_\mathcal S L'\to_\mathcal S^* M$.
The initial nonterminal does not appear in $L'$ since by assumption it does not appear on the right side of any rule.
By induction we obtain a maximal derivation $D'$ for $\vdash L':(S,\r)$ such that a tree $Q$ equivalent to $M$ can be generated by $\mathcal S'$ from $\mathit{merge}(\mathit{tr}_\mathit{cum}(D'))$.
Then, from Lemma \ref{lem:c-step} we obtain a maximal derivation $D$ for $\vdash L:(S,\r)$ and a term $P$ that is $\mathit{merge}$-equivalent to $\mathit{tr}_\mathit{cum}(D')$ and such that $\mathit{merge}(\mathit{tr}_\mathit{cum}(D))\to_{\mathcal S'}P$.
By Lemma \ref{lem:s-merge-equiv} a tree equivalent to $Q$ (and hence to $M$) can be generated by $\mathcal S'$ from $P$, and hence also from $\mathit{merge}(\mathit{tr}_\mathit{cum}(D))$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:compl}]
Consider a tree $M$ generated by $\mathcal S$, and a sequence of reductions of $\mathcal S$ leading to $M$.
In the first step the initial nonterminal reduces to $A\,e_1^0\,\dots\,e_{|\Delta|}^0$.
Corollary \ref{cor:compl2} gives us a derivation $D$ for $\vdash A\,e_1^0\,\dots\,e_{|\Delta|}^0:(\Delta,\r)$ such that $\mathit{merge}(\mathit{tr}_\mathit{cum}(D))$ generates a tree equivalent to $M$.
Necessarily $\mathit{tr}_\mathit{cum}(D)=(A_{\tau_0};e_1^0;\dots;e_{|\Delta|}^0)$, so $\mathit{merge}(\mathit{tr}_\mathit{cum}(D))$ is obtained as the result of the initial rule of $\mathcal S'$.
\end{proof}
\section{Narrowing the HORS}\label{sec:narrowing}
The first step in our proof of Theorem~\ref{thm:main} is to convert a scheme to a narrow
scheme. The property of being narrow is essential for the second step, as lowering
the order of a scheme works only for narrow schemes. This approach
through narrowing has been used by Hague et
al.~\cite{DBLP:conf/popl/HagueKO16} for
higher-order pushdown automata. Here we deal with recursion schemes,
which are equivalent to higher-order pushdown automata with collapse.
\ignore{
The idea behind narrowing is quite intuitive.
Consider a tree $t$ having $n$ occurrences of a certain symbol $b$.
We consider the branch in $t$ where, at each step, we go to the subtree having the maximal number of occurrences of $b$.
If $k$ is the maximal rank, at each step we thus retain at least $1/k$ of the total number of remaining $b$'s,
and therefore on the chosen branch we see at least $\log_k(n)$ total occurrences of $b$.
We can check that we are on such a maximal branch by noticing that it satisfies the following property:
Thus, we can effectively linearize a tree into a single branch by preserving unboundedness of a fixed symbol $b$.
When we consider multiple symbols $\S$, the situation is a bit more complicated,
since it might be the case that we need to choose different branches for different symbols.
However, this can be done by generating independently a branch for each symbol in $\S$,
and we thus obtain narrow trees with at most $|\S|$ branches.
This observation implies that for our purposes it is enough to convert a scheme $\mathcal S$ generating finite trees
into a scheme $\mathcal S'$ generating all paths in the trees generated by $\mathcal S$
with an additional labeling expressing the above property.
Then there will be a set $\S'$ of labels such that $\mathit{Diag}_{\set{b}}(\mathcal S)$ is
equivalent to $\mathit{Diag}_{\S'}(\mathcal S')$.
}
\ignore{
The idea behind narrowing is quite intuitive.
%
Consider a binary tree, and suppose that we are interested in the number
of occurrences of a certain symbol $b$ in it.
%
Consider a path that, at each node, selects the subtree containing the larger number of $b$'s,
and label the node by $b'$ if either: (i) it has already label $b$,
or (ii) the successor of the node that is not on the path has a descendant labeled $b$.
%
Then, the tree has $n$ occurrences of $b$, if, and only if, we encounter at least $\log n$ labels $b'$ on this path.
This observation implies that for our purposes it is enough to convert
a scheme $\mathcal S$
generating trees to a scheme $\mathcal S'$ generating all paths in the trees
generated by $\mathcal S$ with the additional labeling.
Then $\mathit{Diag}_{\set{b}}(\mathcal S)$ will be equivalent to $\mathit{Diag}_{\set{b'}}(\mathcal S')$.
The general situation is a bit more complicated since we are
interested in the unboundedness problem not just for a single letter,
but for a set of letters $\S$.
In this case, different letters may have different witnessing paths,
so $\mathcal S'$ should generate not a single path but a narrow tree.
The number of paths in the narrow tree will be bounded by the size of $\S$.
%
The next lemma describes how to create an additional labeling of a
scheme.
It relies on the reflection
operation~\cite{broadbent10:_recur_schem_logic_reflec}.
Later we
will show how to use the lemma to realize the labeling described above.
The proof is presented in Appendix~\ref{app:lem:prod-aut}.
It is rather long but relatively standard.
%
%
\begin{restatable}{lemma}{lemmaprodaut}
\label{lem:prod-aut}
Let $\mathcal S$ be a HORS, let $\mathcal A$ be a non-deterministic finite tree automaton (reading trees generated by $\mathcal S$), and let $Q'$ be a subset of its set of states.
We can create a HORS $\mathcal S'$ of the same order as $\mathcal S$ generating trees obtained from run trees of $\mathcal A$ on trees generated by $\mathcal S$
(trees generated by $\mathcal S$ with labels replaced by states of an accepting run of $\mathcal A$),
by restricting those run trees to nodes labeled by states in $Q'$.
\end{restatable}
%
Using Lemma \ref{lem:prod-aut}, we can implement the above idea of restricting
trees generated by $\mathcal S$ to $|\Sigma|$ paths.
The resulting HORS will be narrow.
\begin{corollary}\label{coro:narrowing}
For a HORS $\mathcal S$ and a set of symbols $\S$, one can construct a
narrow HORS $\mathcal S'$ of the same order as $\mathcal S$, and sets of symbols
$\S_1,\dots,\S_k$ such that $\mathit{Diag}_\S(\mathcal S)$ holds iff there is
$i\in \{1,\dots,k\}$ for which $\mathit{Diag}_{\S_i}(\mathcal S')$ holds.
\end{corollary}
\todo[inline]{L: we should note that $k$ is exponential in $|\Sigma|$.}
\begin{proof}
First, for a technical reason that will be clear towards the end of
the proof, we assume that all trees generated from $\mathcal S$ have at
least $|\S|$ leaves. If it is not the case, we can always add a
new initial nonterminal from which we generate a tree with a root having to
one side a fixed tree with $|\S|-1$ leaves, and to the other side a tree
generated by the original scheme.
For every letter $b\in \S$ consider a finite tree automaton $\mathcal A_b$
that chooses a path in a tree, and on this path assumes a special
state $q_b$ in a node if the node is labeled by $b$, or if $b$ appears
in the subtree rooted in one of the successors of the node that are
not on the chosen path.
Let $Q'_b$ be the set of states used by the automaton $\mathcal A_b$ on the chosen
path; in particular, $q_b\in Q'_b$.
Consider now the product automaton $\mathcal A$ constructed from all $\set{\mathcal A_b}_{b\in \S}$.
States of $\mathcal A$ are tuples of states, and the transitions are done
independently on each coordinate.
This product chooses $|\S|$ paths in a tree, in the sense that given a
run of $\mathcal A$ its restriction to nodes labeled with states where
at least one component is in $\bigcup\set{Q'_b : b\in \S}$ is a
tree with at most $|\Sigma|$ leaves.
Suppose that every symbol from $\S$ appears at least $n$ times in a tree $t$.
In this case $\mathcal A$ has a run where for every $b\in\S$ the state $q_b$ appears
at least $\log_r n$ times, where $r$ is the maximal rank of symbols used by $\mathcal S$.
States of $\mathcal A$ are tuples of states, and $q_b$ may appear in
different tuples, but then there is a tuple appearing in the run at
least $(\log_r n)/|Q|$ times where $Q$ is the set of states of $\mathcal A$.
Thus we have a function $\sigma:\S\to Q$ such that $\mathcal A$ has a run on
$t$ where each state $\sigma(b)$, for $b\in\S$, appears at least
$(\log_r n)/|Q|$ times in the tree.
We call such a function $\sigma$, \emph{$n$-correct} for $t$.
Observe that for every $b$, the state $\sigma(b)$ is a tuple
containing the state $q_b$.
A function with this property is called \emph{choice function}.
To summarize, we have shown that $\mathit{Diag}_\S(\mathcal S)$ holds iff there is a choice function
$\sigma$ such that for arbitrary $n\in \mathbb{N}$ there is a $t$ generated
by $\mathcal S$ and a run of $\mathcal A$ on $t$ for which $\sigma$ is $n$-correct.
The last step in the reduction is to take care of the leaves of runs
of $\mathcal A$. The definition of a narrow scheme requires that there should be
an alphabet $\S'_0$ of nullary symbols such that every symbol from
$\S'_0$ appears at precisely one leaf.
When we look at a run of $\mathcal A$, every path chosen by the automaton
ends in a different state.
It may happen, though, that there are less than $|\S|$ paths in the chosen
restriction, since the paths for some two letters may turn out to
be the same.
As we have assumed that every tree generated by $\mathcal S$ has at least
$|\S|$ leaves, we can make automaton $\mathcal A$ choose some additional
dummy paths if needed.
Moreover, we can make $\mathcal A$ guess the positions of the leaves, and finish its
run in the $i$-th leaf of the tree, counting from left to right, in the special
state $q^e_i$. Let $\mathcal A'$ be the automaton obtained from $\mathcal A$ after
these modifications, and let $\S'_0=\set{q^e_i \mid i\in\{1,\dots,|\S|\}}$.
Take $\mathcal S'$ to be the scheme obtained by Lemma~\ref{lem:prod-aut} from
$\mathcal S$, $\mathcal A'$, and the set $Q'$ consisting of $\S'_0$ and all the
states of $\mathcal A'$ having at least
one state in $\bigcup\set{Q'_b : b\in \S}\cup\S'_0$.
Thus, $\mathcal S'$ generates narrow trees where leaves are uniquely labeled
by symbols from $\S'_0$. We have: $\mathit{Diag}_\S(\mathcal S)$ holds iff there
is some choice function $\sigma:\S\to Q$ such that
$\mathit{Diag}_{\set{\sigma(b) :b\in \S}}(\mathcal S')$ holds.
\end{proof}
}
The idea behind narrowing is quite intuitive.
Consider a binary tree, and suppose that we are interested in the number
of occurrences of a certain letter $a$, which may appear only in leaves.
Consider a path that, at each node, selects the subtree containing the larger number of $a$'s,
and label a node of the path by $a$ if its successor that is not on the path has an $a$-labeled descendant.
If the original tree has $n$ occurrences of $a$, then the selected path carries between $\log n$ and $n$ labels $a$.
The lower bound holds because at each step at most half of the remaining $a$'s are discarded
(namely those in the non-selected subtree), the number of remaining $a$'s decreases only at labeled nodes,
and at the end of the path at most one $a$ remains.
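For instance, if $a$ labels all $n=2^k$ leaves of a complete binary tree of depth $k$, then every node of the selected path gets labeled, and the path carries exactly $k=\log n$ labels $a$, matching the lower bound.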
This observation implies that it suffices to convert a scheme $\mathcal S$ generating trees
to a scheme $\mathcal S'$ generating all paths (words) in the trees generated by $\mathcal S$ with the additional labeling.
Then $\mathit{Diag}_{\set{a}}(\mathcal S)$ will be equivalent to $\mathit{Diag}_{\set{a}}(\mathcal S')$.
The general situation is a bit more complicated
since we are interested in the diagonal problem not just for a single letter,
but for a set of letters $\S$.
In this case, different letters may have different witnessing paths,
so $\mathcal S'$ should generate not a single path but a narrow tree
whose number of paths is bounded by $|\S|$.
\begin{theorem}
\label{thm:narrowing}
For a HORS $\mathcal S$ and a set of letters $\S$,
one can construct a set of nullary symbols $\Delta$ of size $|\S|$
and a $\Delta$-\emph{narrow} HORS $\mathcal S'$ of the same order as $\mathcal S$,
such that $\mathit{Diag}_\S(\mathcal S)$ holds if, and only if, $\mathit{Diag}_{\S}(\mathcal S')$ holds.
\end{theorem}
\begin{proof}
We start by assuming that $\mathcal S$ uses only symbols of rank $2$
and $0$, where additionally letters from $\Sigma$ appear only
in leaves.
The general situation can be easily reduced to this one, by applying a tree transduction that replaces every node by a small fragment of a tree built of binary symbols, with the original label in a leaf.
Then, we consider a linear bottom-up transducer $\mathcal A$ from trees produced by $\mathcal S$
to narrow trees.
As labels in the resulting trees we use:%
\begin{inparaenum}[(i)]
\item new leaf symbols $\Delta = \set{e_1^0, \dots, e_{|\S|}^0}$,
\item unary symbols $a^1$ for all $a\in\Sigma$, and
\item new auxiliary symbols $\bullet^k$ (of rank $k\geq 1$).
\end{inparaenum}
%
For each set of letters $\Gamma \subseteq \Sigma$,
$\mathcal A$ contains a state $p_\Gamma^?$ making sure that each letter from $\Gamma$ occurs at least once in the input tree.
%
Moreover, for each nonempty set of leaf labels $\Delta' \subseteq \Delta$,
$\mathcal A$ contains a state $p_{\Delta'}$ that outputs only $\Delta'$-narrow trees.
%
The final state of $\mathcal A$ is $p_\Delta$.
%
Transitions are as follows:
%
\begin{align*}
\textrm{(Branch)} && a^2 \, (p_{\Delta_1}, x_1) \, (p_{\Delta_2}, x_2)
&\goesto {} p_{\Delta_1 \cup \Delta_2}, \bullet^2 \, x_1\, x_2\,, \\
\textrm{(Leaf)} && &\hspace{-3em}a^0
\goesto {} p_{\set {e_{i_1},\dots,e_{i_k}}}, \bullet^k\,e_{i_1}\,\dots\,e_{i_k}\,, \\
%
\textrm{(Choose${}_1$)} && a^2 \, (p_{\Delta_1}, x_1) \, (p_{\Gamma}^?, x_2)
&\goesto {} p_{\Delta_1}, a_1^1 (\cdots (a_k^1 \, x_1))\,, \\
%
\textrm{(Choose${}_2$)} && a^2 \, (p_{\Gamma}^?, x_1) \, (p_{\Delta_2}, x_2)
&\goesto {} p_{\Delta_2}, a_1^1 (\cdots (a_k^1 \, x_2))\,.
\end{align*}
%
Here $\Delta_1$ and $\Delta_2$ are \emph{disjoint} subsets of $\Delta$, $i_1<\dots<i_k$,
and $ \Gamma = \set{a_1, \dots, a_k}\subseteq\Sigma$.
%
%
Intuitively, rules of types (Branch) and (Leaf) make sure that we output narrow trees,
and rules of types (Choose${}_i$) select a branch
and output (only) letters that appear at least once in the discarded subtree.
%
States $p_\Gamma^?$ check that each letter in $\Gamma$ occurs at least once, as follows:
%
\begin{align*}
%
\textrm{(Check${}_2$)} && a^2 \, (p_{\Gamma_1}^?, x_1) \, (p_{\Gamma_2}^?, x_2)
&\goesto {} p_{\Gamma_1 \cup \Gamma_2}^?, e_1^0 \\
%
\textrm{(Check${}_0$)} && a^0
&\goesto {} p_{\set a}^?, e_1^0
\end{align*}
%
The set $\trans{p_\Gamma^?}(\set t)$ is either a single leaf or $\emptyset$,
depending on whether $t$ satisfies the condition or not.
%
The choice of $e_1^0$ on the right side of the transitions is not important,
since, in the way states $p_\Gamma^?$ are used,
it only matters whether the input can be successfully parsed,
and not what the output actually is.
It is clear that the image of state $p_{\Delta'}$ is always a language of $\Delta'$-narrow trees.
Correctness follows from the following claim.
%
\begin{claim}
Let $t$ be an input tree.
Then,%
\begin{inparaenum}[(i)]
\item if $t$ has at least $n$ occurrences of every letter $a \in \Sigma$, then
$\trans{\mathcal A}(t)$ contains a tree with at least $\log n$ occurrences of every letter $a \in \Sigma$, and
\item if $\trans{\mathcal A}(t)$ contains a tree with at least $n$ occurrences of every letter $a \in \Sigma$, then
$t$ has at least $n$ occurrences of every letter $a \in \Sigma$.
\end{inparaenum}
\end{claim}
%
%
\ignore{
\begin{proof}[Proof of the claim]
Suppose that every letter $a$ from $\S$ appears at least $n$ times in the tree $t$.
%
In this case, by selecting at each step the branch containing the larger number of $a$'s,
we see that $a$ is output at least $\log n$ times by $\mathcal A$.
%
The other direction is immediate, since every time $\mathcal A$ outputs a letter $a$,
an occurrence of this letter appears somewhere in the input tree $t$.
\end{proof}
%
}
To conclude the proof, let $T$ be the transduction $\trans \mathcal A$ realized by $\mathcal A$.
%
By Theorem~\ref{thm:HORS:transd},
there exists a HORS $\mathcal S'$ of the same order as $\mathcal S$ with $\lang {\mathcal S'} = T(\lang \mathcal S)$.
%
First, it is clear that $\lang {\mathcal S'}$ is a language of $\Delta$-narrow trees.
%
Second, thanks to the claim above, $\mathit{Diag}_\S(\mathcal S)$ holds if, and only if, $\mathit{Diag}_\S(\mathcal S')$ holds.
%
\end{proof}
\section{Preliminaries}
\label{sec:preliminaries}
\paragraph{Higher-order recursion schemes.}
We use the name ``sort'' instead of ``simple type'' or ``type''
to avoid confusion with the types introduced later.
The set of \emph{sorts} is constructed from a unique basic sort $o$ using a binary operation $\to$.
Thus $o$ is a sort, and if $\alpha,\beta$ are sorts, so is $\alpha\to\beta$.
The order of a sort is defined by: $\mathit{ord}(o)=0$, and $\mathit{ord}(\alpha\to\beta)=\max(1+\mathit{ord}(\alpha),\mathit{ord}(\beta))$.
By convention, $\to$ associates to the right, i.e., $\alpha\to\beta\to\gamma$ is understood as $\alpha\to(\beta\to\gamma)$.
Every sort $\alpha$ can be uniquely written as $\alpha_1\to\alpha_2\to\ldots\to\alpha_n\to o$.
The sort $o\to\dots\to o\to\alpha$ with $r$ occurrences of $o$ is denoted $o^r\to \alpha$, where $o^0\to \alpha$ is simply $\alpha$.
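For example, $\mathit{ord}(o\to o)=1$ and $\mathit{ord}((o\to o)\to o)=2$, while $\mathit{ord}(o^r\to o)=1$ for every $r\ge 1$.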
The set of \emph{terms} is defined inductively as follows.
For each sort $\alpha$ there is a countable set of \emph{variables} $x^\alpha,y^\alpha,\dots$ and a countable set of \emph{nonterminals} $A^\alpha,B^\alpha,\dots$; all of them are terms of sort $\alpha$.
There is also a countable set of \emph{letters} $a,b,\dots$; out of a letter $a$ and a sort $\alpha$ of order at most $1$ one can create a \emph{symbol} $a^\alpha$ that is a term of sort $\alpha$.
Moreover, if $K$ and $L$ are terms of sort $\alpha\to\beta$ and
$\alpha$, respectively, then $(K\,L)^\beta$ is a term of sort $\beta$.
For $\alpha=(o^r\to o)$ we often shorten $a^\alpha$ to $a^r$, and we call $r$ the \emph{rank} of $a^r$.
Moreover, we omit the sort annotation of variables, nonterminals, or terms,
but note that each of them is implicitly
assigned a particular sort.
We also omit some parentheses when writing terms and denote $(\dots
(K\,L_1) \dots L_n)$ simply by $K L_1\dots L_n$.
A term is called \emph{closed} if it uses no variables.
We deviate here from the usual definitions in that letters themselves are unranked, and thus out of a single letter $a$ one may create a symbol $a^r$ for every rank $r$.
This is convenient for us, as during the transformations of HORSes described in Sections \ref{sec:narrowing} and \ref{sec:lowering} we need to change the rank of tree nodes, without changing their labels.
Notice, however, that in terms a letter is used always with a particular rank.
A \emph{higher-order recursion scheme} (HORS for
short) is a pair $\mathcal S=(A_\mathit{init},\mathcal R)$, where $A_\mathit{init}$
is the \emph{initial nonterminal} that is of sort $o$,
and $\mathcal R$ is a finite set of rules of the form $A^\alpha\,x_1^{\alpha_1}\,\dots\,x_k^{\alpha_k}\to K^o$
where $\alpha = \alpha_1\to\dots\to\alpha_k\to o$
and $K$ is a term that uses only variables from the set $\{x_1^{\alpha_1},\dots,x_k^{\alpha_k}\}$.
The order of $\mathcal S$ is defined as the highest order of a nonterminal for which there is a rule in $\mathcal S$.
We write $\mathcal R(\mathcal S)$ to denote the set of rules of a HORS
$\mathcal S$.
Observe that our schemes are \emph{non-deterministic} in the sense that
$\mathcal R(\mathcal S)$ can have many rules with the same
nonterminal on the left side. A scheme with at most one rule for
each nonterminal is called \emph{deterministic}.
Let us now describe the dynamics of HORSes.
Substitution is defined as expected:
\begin{mathpar}
A[M/x]=A,\and
a^r[M/x]=a^r,\and
x[M/x]=M,\and
y[M/x]=y\mbox{ if }y\neq x,\and
(K\,L)[M/x]=K[M/x]\,L[M/x].
\end{mathpar}
We shall use the substitution only when $M$ is closed, so there is no need to perform $\alpha$-conversion.
We also allow simultaneous substitutions: we write
$K[M_1/x_1,\dots,M_k/x_k]$ to denote the simultaneous substitution of
$M_1$, \dots, $M_k$ respectively for $x_1$, \dots, $x_k$.
We notice that when the terms $M_i$ are closed, this amounts to applying
the substitutions $[M_i/x_i]$ (with $i\in\{1,\dots,k\}$) in any order.
A HORS $\mathcal S$ defines a reduction relation $\to_\mathcal S$ on closed terms:
\begin{mathpar}
\inferrule{(A\,x_1\,\dots\,x_k\to K)\in\mathcal R(\mathcal S)}{A\,M_1\,\dots\,M_k\to_\mathcal S K[M_1/x_1,\dots,M_k/x_k]}
\and
\inferrule{
K_l\to_\mathcal S K_l'\mbox{ for some }l\in\{1,\dots,r\}
\\
K_i=K_i'\mbox{ for all }i\neq l
}{
a^r\,K_1\,\dots\,K_r\to_\mathcal S a^r\,K_1'\,\dots\,K_r'
}
\end{mathpar}
We thus apply some rule of $\mathcal S$ to one of the outermost nonterminals in the term.
We are interested in finite trees generated by HORSes.
A closed term $L$ of sort $o$ is a \emph{tree} if it does not contain any nonterminal.
A HORS $\mathcal S$ \emph{generates} a tree $L$ from a term $K$ if $K\to_\mathcal S^* L$;
when we do not mention the term $K$ we mean generating from the
initial nonterminal of $\mathcal S$. Since a scheme may have more than one
rule for some nonterminals, it may generate more than one tree.
We can view a HORS of order $0$ essentially as a finite tree automaton,
thus a HORS of order $0$ generates a regular language of finite trees.
Let $\Delta$ be a finite set of symbols of rank $0$ (called also \emph{nullary} symbols).
A tree $K$ is \emph{$\Delta$-narrow} if it has exactly $|\Delta|$ leaves, each of them labeled by a different symbol from $\Delta$.
A HORS is called \emph{$\Delta$-narrow} if it generates only $\Delta$-narrow trees, and it is called \emph{narrow} if it is $\Delta$-narrow for some $\Delta$.
We are particularly interested in $\Delta$-narrow HORSes for
$|\Delta|=1$; trees generated by them consist of a single branch
and thus can be seen as words.
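As a small illustrative example (ours, not part of the formal development), consider the order-$1$ scheme $\mathcal S_0$ with a nonterminal $A$ of sort $o\to o$, an initial nonterminal $A_\mathit{init}$ of sort $o$, and the rules
\[ A_\mathit{init}\to A\,e^0, \qquad A\,x\to b^1\,(A\,x), \qquad A\,x\to x. \]
A sample reduction is $A_\mathit{init}\to_{\mathcal S_0} A\,e^0\to_{\mathcal S_0} b^1\,(A\,e^0)\to_{\mathcal S_0} b^1\,e^0$, and in general $\mathcal S_0$ generates exactly the trees $b^1(\cdots(b^1\,e^0)\cdots)$; it is thus $\set{e^0}$-narrow and its trees can be read as the words $b^ne$.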
\paragraph{Transductions.}
A (bottom-up, nondeterministic) \emph{finite tree transducer} (FTT) is a tuple $\mathcal A = (Q, Q_F, \delta)$,
where $Q$ is a finite set of control states,
$Q_F \subseteq Q$ is the set of final states,
and $\delta$ is a finite set of transitions of the form
\begin{align*}
&a^r\, (p_1, x_1)\, \dots\, (p_r, x_r) \goesto {} q, t
\quad \textrm { or } \\%quad
&p, x_1 \goesto {} q, t \qquad (\textrm{\it $\varepsilon$-transition})
\end{align*}
where $a$ is a letter, $p, q, p_1, \dots, p_r$ are states,
$x_1,\dots,x_r$ are variables of sort $o$,
and $t$ is a term built of variables from $\set{x_1, \dots, x_r}$ ($\set { x_1}$, respectively) and symbols, but no nonterminals.
An FTT $\mathcal A$ defines in a natural way a binary relation $\trans \mathcal A$ on trees \cite{tata2007}.
We say that an FTT is \emph{linear} if no term $t$ on the right of transitions contains more than one occurrence of the same variable.
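For example (our illustration), a transition $a^2\,(p,x_1)\,(p,x_2)\goesto{} q, b^2\,x_2\,x_1$ is linear, since it merely permutes the subtrees, whereas $a^1\,(p,x_1)\goesto{} q, b^2\,x_1\,x_1$ is not, since it duplicates the subtree bound to $x_1$.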
We show that HORSes are closed under linear transductions.
The construction relies on the reflection operation~\cite{broadbent10:_recur_schem_logic_reflec},
in order to detect unproductive subtrees.
\begin{restatable}{theorem}{thmtransd}
\label{thm:HORS:transd}
HORSes are effectively closed under linear tree transductions.
\end{restatable}
A family of word languages is a \emph{full trio} if it is effectively closed under rational (word) transductions.
Since rational transductions on words are a special case of linear tree transductions,
we obtain the following corollary of Theorem~\ref{thm:HORS:transd}.
\begin{corollary}
\label{cor:HORS:trio}
Languages of finite words recognized by HORSes form a full trio.
\end{corollary}
\section{The Main Result}
\label{sec:result}
We formulate the main result and state some of its consequences.
\begin{definition}[Diagonal problem]
For a higher-order recursion scheme $\mathcal S$, and a set of letters
$\S$, the predicate $\mathit{Diag}_\S(\mathcal S)$ holds if for every $n\in \mathbb{N}$
there is a tree $t$ generated by $\mathcal S$ with at least $n$ occurrences
of every letter from $\S$. The \emph{diagonal problem} for schemes is to
decide whether $\mathit{Diag}_\S(\mathcal S)$ holds for a given scheme $\mathcal S$ and a set $\S$.
\end{definition}
\begin{theorem}\label{thm:main}
The diagonal problem for higher-order recursion schemes is decidable.
\end{theorem}
\begin{proof}
The proof is by induction on the order of a HORS $\mathcal S$. It relies on
results from the next two sections.
If $\mathcal S$ has order $0$, then $\mathcal S$ can be converted to an equivalent
finite automaton on trees,
for which the diagonal problem can be solved by direct inspection.
For $\mathcal S$ of order greater than $0$, we first convert $\mathcal S$ to a narrow
HORS $\mathcal S'
such that $\mathit{Diag}_\S(\mathcal S)$ holds iff $\mathit{Diag}_{\S}(\mathcal S')$ holds
(Theorem~\ref{thm:narrowing}).
Then,
we employ the construction from
Section~\ref{sec:lowering} and obtain a HORS $\mathcal S''$ of order smaller
by $1$ than the order of $\mathcal S'$. By Lemmata~\ref{lem:sound}
and~\ref{lem:compl}:
$\mathit{Diag}_{\S}(\mathcal S')$ holds iff $\mathit{Diag}_{\S}(\mathcal S'')$ holds.
\end{proof}
The main theorem allows us to solve some other problems for
higher-order schemes. The \emph{downward closure} of a language of words is the set
of its (scattered) subwords.
Since the
subword relation is a well quasi-order~\cite{Higman:1952}, the downward closure of any
language of words is regular. The main theorem implies that the downward
closure can be computed for HORSes generating languages of finite words,
or, in our terminology, $\set{e^0}$-narrow HORSes, where $e^0$ is a
nullary symbol acting as an end-marker.
\begin{corollary}
There is an algorithm that given an $\set{e^0}$-narrow HORS $\mathcal S$
computes a regular expression for the downward closure of the
language generated by $\mathcal S$.
\end{corollary}
\begin{proof}
By Corollary~\ref{cor:HORS:trio},
word languages generated by schemes are closed under rational transductions.
Hence, Theorem~\ref{thm:main} together with a result of Zetzsche~\cite{Zetzsche:ICALP:2015}
can be used to compute the downward closure of a language generated by a HORS.
\end{proof}
Piecewise testable languages of words are boolean combinations of languages of
the form $\S^*a_1\S^*a_2\dots\S^*a_k\S^*$ for some
$a_1,\dots,a_k\in\S$.
Such languages talk about possible orders of occurrences of
letters. The problem of separability by piecewise testable languages
asks, for two given languages of words, whether there is a piecewise testable
language of words containing one language and disjoint from the other. A
separating language provides a simple explanation of the disjointness
of the two languages~\cite{hofman_et_al:LIPIcs:2015:4987}.
\begin{corollary}
There is an algorithm that given two $\set{e^0}$-narrow HORSes
decides whether there is a piecewise testable language separating the
languages of the two HORSes.
\end{corollary}
\begin{proof}
This is an immediate consequence of a result of Czerwi\'nski et
al.~\cite{CzerwinskiMartensRooijenZeitounZetzsche}
who show that for any class of languages effectively
closed under rational transductions, the problem reduces to solving the
diagonal problem.
\end{proof}
The final example concerns deciding reachability in parameterized
asynchronous shared-memory systems~\cite{DBLP:conf/fsttcs/Hague11}.
In this model one instance of a process, called leader, communicates with
an undetermined number of instances of another process, called
contributor.
The communication is implemented by common registers on which the
processes can perform read and write operations; however, operations of
the kind of test-and-set are not possible.
The reachability problem asks if for some number of instances of the
contributor the system has a run writing a designated value to a register.
\begin{corollary}
The reachability problem for parameterized asynchronous shared-memory
systems is decidable for systems where leaders and contributors are
given by $\set{e^0}$-narrow HORSes.
\end{corollary}
\begin{proof}
La Torre et al.~\cite{LaTorreMuschollWalukiewicz:2015}
show how to use the downward closure of the
language of the leader to reduce the reachability problem for a
parameterized system to the
reachability problem for the contributor. Being a full trio
is sufficient for this reduction to work.
\end{proof}
\section{Strategy}
For a HORS $\mathcal S$ and a set of symbols $\Sigma$, let $\mathit{Diag}_\Sigma(\mathcal S)$ denote the property that holds if for every $n\in\mathbb{N}$ there is a tree generated by $\mathcal S$ in which every symbol from $\Sigma$ appears at least $n$ times.
We want to give an algorithm deciding whether $\mathit{Diag}_\Sigma(\mathcal S)$ for a given $\Sigma$ and $\mathcal S$.
When $\mathcal S$ is of order $0$, it is just (almost) a finite tree automaton, and thus deciding whether $\mathit{Diag}_\Sigma(\mathcal S)$ holds in this case is easy.
For HORSes of higher order we repeatedly perform a transformation that decreases the order by one, finally reducing the situation to the easy case of order $0$.
This decreasing of order is done in two steps.
First, we ensure that the HORS is narrow: we create a HORS $\mathcal S'$ that is narrow, of the same order as $\mathcal S$, and such that $\mathit{Diag}_\Sigma(\mathcal S)$ holds if and only if $\mathit{Diag}_{\Sigma'}(\mathcal S')$ holds.
Then, in the narrow HORS $\mathcal S'$ we lower the order by one:
we create a HORS $\mathcal S''$ that is of order smaller by one than $\mathcal S'$ (but is no longer narrow), and such that $\mathit{Diag}_{\Sigma'}(\mathcal S')$ holds if and only if $\mathit{Diag}_{\Sigma''}(\mathcal S'')$ holds.
The two steps are described in the next two sections.
\section{Closure under linear transductions and full trio}
In this section we prove that finite tree languages generated by HORSes are closed under \emph{linear} bottom-up tree transductions.
An FTT is \emph{complete} if every variable $x_i$ appearing on the left side of any transition also appears in the term $t$ on the right side of the transition,
i.e., no subtree is discarded.
A \emph{restriction} is a special case of an FTT where there is only one control state, and where every transition is of the form
$a^r\,(q,x_1)\,\dots\,(q,x_r)\goesto{}q,b^n\,x_{i_1}\,\dots\,x_{i_n}$ with $1 \leq i_1 < \cdots < i_n \leq r$,
i.e., it relabels the tree and discards some of its subtrees.
Clearly, every FTT is the composition of a complete FTT with a restriction.
A \emph{higher-order recursion scheme with states} (HORSS)
is a triple $\mathcal H = (Q, (q_\mathit{init}, A_\mathit{init}), \mathcal R)$,
where $Q$ is a finite set of control states,
$(q_\mathit{init}, A_\mathit{init})$ is the \emph{initial process}
with $q_\mathit{init}$ the \emph{initial control state} and $A_\mathit{init}$ the \emph{initial nonterminal} that is of sort $o$,
and $\mathcal R$ is a finite set of rules of the form
\begin{align*}
\textrm{(I)} &&
p, A^{\alpha_1\to\dots\to\alpha_k\to o}\,x_1^{\alpha_1}\,\cdots\,x_k^{\alpha_k} &\to q, K^o \\
\textrm{(II)} &&
p, a^r \, x_1^o \, \cdots \, x_r^o &\to a^r \, (p_1, x_1) \, \cdots \, (p_r, x_r)
\end{align*}
where the term $K$ uses only variables from the set $\{x_1^{\alpha_1},\dots,x_k^{\alpha_k}\}$.
Rules of type (I) are as in standard HORS except that they are guarded by control states.
Rules of type (II) correspond to a finite top-down tree automaton reading the tree produced by the HORS.
The order of $\mathcal H$ is defined as the highest order of a nonterminal for which there is a rule in $\mathcal H$.
Let us now describe the dynamics of HORSSes.
A \emph{process} is a pair $(p, M)$ where $M$ is a closed term of sort $o$ and $p$ is a state in $Q$.
A \emph{process tree} is a tree built of symbols and processes, where the latter are seen as symbols of rank $0$.
A HORSS $\mathcal H$ defines a reduction relation $\to_\mathcal H$ on process trees:
\begin{mathpar}
\inferrule
{(p, A\,x_1\,\dots\,x_k \to q, K)\in\mathcal R(\mathcal H)}
{(p, A\,M_1\,\dots\,M_k) \to_\mathcal H (q, K[M_1/x_1,\dots,M_k/x_k])}
\and
\inferrule
{(p, a^r \, x_1 \, \cdots \, x_r \to a^r \, (p_1, x_1) \, \cdots \, (p_r, x_r)) \in \mathcal R(\mathcal H)}
{(p, a^r \, M_1 \cdots M_r) \to_\mathcal H a^r \, (p_1, M_1) \cdots (p_r, M_r)}
\and
\inferrule{
K_l\to_\mathcal H K_l'\mbox{ for some }l\in\{1,\dots,r\}
\\
K_i=K_i'\mbox{ for all }i\neq l
}{
a^r\,K_1\,\dots\,K_r\to_\mathcal H a^r\,K_1'\,\dots\,K_r'
}
\end{mathpar}
We are interested in finite trees generated by HORSSes.
A process tree $T$ is a \emph{tree} if it does not contain any process.
A HORSS $\mathcal H$ \emph{generates} a tree $T$ from a process $(p, M)$ if $(p, M)\to_\mathcal H^* T$.
The language $\lang \mathcal H$ is the set of trees generated by the initial process $(q_\mathit{init}, A_\mathit{init})$.
A HORS can be seen as a special case of a HORSS where $Q$ has only one state $\hat p$,
with the trivial rule $\hat p, a^r \, x_1 \cdots x_r \to a^r \, (\hat p, x_1) \cdots (\hat p, x_r)$ for every symbol $a^r$.
It is well known that this extension does not increase the expressive power of HORSes,
in the sense that given a HORSS $\mathcal H$
it is possible to construct a (standard) HORS $\mathcal S$ of the same order as $\mathcal H$ (but where the arity of nonterminals is increased)
such that $\lang \mathcal H = \lang \mathcal S$ \cite{HagueMurawskiOngSerre:Collapsible:2008}.
However, while combining a HORS with an FTT it is convenient to create a HORSS, as its states can be used to simulate states of the FTT.
On the other hand, it is also useful to have the input HORS in a special normalized form, defined next.
We say that a HORS is \emph{normalized} if each of its rules is of the form
\[ A\, x_1\,\dots\, x_p \to h\, (B_1\, x_1\,\dots\, x_p)\, \dots\, (B_r\, x_1\,\dots\, x_p)\ , \]
where $r \geq 0$,
$h$ is either one of the $x_i$'s, a nonterminal, or a symbol,
and the $B_j$'s are nonterminals.
The arity $p$ may be different in each rule.
We will not detail the rather standard procedure of transforming any HORS into a normalized HORS without increasing the order.
It amounts to splitting every rule into multiple rules, using fresh nonterminals in the cut points.
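To illustrate (an example of ours), the single rule $A\,x\to a^2\,(b^1\,x)\,(A\,(c^1\,x))$ can be normalized into
\[ A\,x\to a^2\,(B_1\,x)\,(B_2\,x),\quad B_1\,x\to b^1\,(B_3\,x),\quad B_2\,x\to A\,(B_4\,x),\quad B_4\,x\to c^1\,(B_3\,x),\quad B_3\,x\to x, \]
where $B_1,\dots,B_4$ are fresh nonterminals of sort $o\to o$; the order of the scheme is unchanged.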
\begin{lemma}
\label{lemma:HORS:transd:complete}
HORSes are effectively closed under complete linear tree transductions.
\end{lemma}
\begin{proof}
Let $\mathcal S$ be a HORS and let $\mathcal A$ be a linear FTT.
We construct a HORSS $\mathcal H$ s.t. $\lang \mathcal H = \trans \mathcal A (\lang \mathcal S)$.
%
The set of control states of $\mathcal H$ is taken to be the set of control states of the FTT $\mathcal A$.
%
As noted above, we can assume w.l.o.g.~that $\mathcal S$ is normalized.
First, if $\mathcal S$ contains a rule $A\, \vec x \to h\, M_1 \cdots M_r$ with $h$ not a symbol,
then $\mathcal H$ contains the rule $p, A\, \vec x \to p, h\, M_1 \cdots M_r$ for every control state $p$.
Next, for every such rule with $h$ being a symbol $a^r$, and for every transition of $\mathcal A$ having $a^r$ on the left side,
we add to $\mathcal H$ one rule, illustrated here by a representative example:
if $\mathcal A$ contains a transition
\[ a^2 \, (p_1, x_1) \, (p_2, x_2) \goesto {} p, b^2 \, (c^1 \, x_1) \, x_2 \]
and $\mathcal S$ contains a rule
$A \, \vec y \to a^2 \, (B_1 \, \vec y) \, (B_2 \, \vec y)$,
then $\mathcal H$ contains the rule
%
\[ p, A \, \vec y \to b^2 \, (c^1 \, (p_1, B_1 \, \vec y))\, (p_2, B_2 \, \vec y) \]
%
Technically speaking, this is not a HORSS rule,
but it can be turned into one type (I) rule
and several type (II) rules by adding new states.
Finally, we also add rules corresponding to $\varepsilon$-transitions of $\mathcal A$, which is again illustrated by an example:
%
if $\mathcal A$ contains a transition
%
\[ p , x_1 \goesto {} q, a^1 \, x_1 \]
%
then, for every nonterminal $A$ of $\mathcal S$,
$\mathcal H$ contains the rule
\[ q, A \, \vec y \to a^1 \, (p, A \, \vec y) \]
%
The two inclusions needed to show that $\lang \mathcal H = \trans \mathcal A (\lang \mathcal S)$
can be proved straightforwardly by induction on the length of derivations.
%
\end{proof}
The difficulty in proving closure under possibly non-complete FTTs
is that when combining a (non-complete) FTT transition of the form e.g.~$a^2 \, (p,x_1) \, (p,x_2) \goesto {} p, b^1 \, x_1$
with a HORS rule of the form e.g.~$A \, \vec y \to a^2 \, (B_1 \, \vec y) \, (B_2 \, \vec y)$,
we cannot simply discard the subterm $B_2 \, \vec y$; we have to make sure that it generates at least one tree on which the FTT has some run.
When concentrating on closure under restrictions only, one thing becomes easier: a restriction has a run on almost every tree.
There is, however, one exception: a restriction $\mathcal A$ does not have a run on a tree that uses a symbol for which $\mathcal A$ has no transition.
We deal with this in Lemma \ref{lemma:HORS:restrict-to-symbols}, below.
However, knowing that on every tree there is a run of $\mathcal A$ is not enough;
we also need to know that $B_2 \, \vec y$ generates at least one tree.
This problem is resolved by Lemma \ref{lemma:HORS:productive}.
\begin{lemma}
\label{lemma:HORS:restrict-to-symbols}
For every set of (ranked) symbols $\Theta$ and every HORS $\mathcal S$ we can build a HORS $\mathcal S'$ of the same order, such that $\mathcal L(\mathcal S')$ consists of exactly those trees from $\mathcal L(\mathcal S)$ which use only symbols from $\Theta$.
\end{lemma}
\begin{proof}
We start by assuming w.l.o.g.~that $\mathcal S$ is normalized.
We then simply remove from $\mathcal S$ all rules that use symbols not in $\Theta$.
Clearly, trees in $\mathcal L(\mathcal S')$ use only symbols from $\Theta$.
On the other hand, since $\mathcal S$ was normalized, every removed rule was of the form
$A \, \vec y \to a^r \, (B_1 \, \vec y)\,\dots \, (B_r \, \vec y)$ (with $a^r\not\in\Theta$),
so whenever such a rule was used, an $a^r$-labeled node was created.
In consequence, removing these rules has no influence on generating trees that use only symbols from $\Theta$.
\end{proof}
A HORS $\mathcal S=(A_\mathit{init},\mathcal R)$ is \emph{productive}
if, whenever we can reduce $A_\mathit{init}$ to a term $M$ (which may contain nonterminals),
then $M$ can be reduced to some finite tree.
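For example (our illustration), a scheme with the rules $A_\mathit{init}\to a^1\,B$ and $B\to b^1\,B$ (and no other rule for $B$) is not productive: $A_\mathit{init}$ reduces to $a^1\,B$, but no reduction from $a^1\,B$ ever reaches a finite tree.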
By using the reflection operation~\cite{broadbent10:_recur_schem_logic_reflec},
we can easily turn a HORS into a productive one.
\begin{lemma}
\label{lemma:HORS:productive}
For every HORS $\mathcal S$ we can build a productive HORS $\mathcal S'$ of the same order generating the same trees.
\end{lemma}
\begin{proof}
First, we construct a deterministic scheme $\mathcal{T}$ from
the non-deterministic scheme $\mathcal{S}$.
We will then be able to apply a reflection transformation to $\mathcal T$.
We use a letter $+$ to eliminate
non-determinism.
For every nonterminal $A$ of $\mathcal S$ we collect all its rules:
$A\, x_1\,\dots\, x_p \to K_1, \dots, A\, x_1\,\dots\, x_p\to K_m$, and add to
$\mathcal{T}$ the single rule:
$$A\,x_1\,\dots\, x_p \to +^2\, K_1\, (+^2\, K_2\, (\dots\, (+^2\, K_{m-1}\, K_m)\dots))\,.$$
%
The (possibly infinite) tree generated by $\mathcal{T}$ represents the
language of trees generated from $\mathcal{S}$ since the
non-deterministic choices that can be made in $\mathcal{S}$ are
represented by nodes labeled by $+$ in the tree generated by
$\mathcal T$.
In this latter tree, we can find every tree generated by $\mathcal S$ using a
finite number of rewriting steps consisting of replacing a subtree
rooted in $+$ by one of its children.
We now take the monotone applicative structure
(see~\cite{salvati15:_using,kobele2015}) $\mathcal{M} =
(\mathcal{M}_\alpha)_{\alpha\in\mathrm{Sorts}}$ where $\mathcal{M}_o$
is the two element lattice, with maximal element $\top$ and minimal
element $\bot$.
%
Intuitively, $\top$ means nonempty language and $\bot$ means empty language.
%
We interpret $+^2$ as the join (max) of its arguments, and every other symbol
$a^r$ as the meet (min) of its arguments; in particular symbols of rank $0$ are
interpreted as $\top$.
This allows us to define the semantics $\sem{M,\chi,\nu}$ of a
term given a valuation $\chi$ for nonterminals and $\nu$
for variables (these valuations assign to a variable/nonterminal a value in $\mathcal M$ of
an appropriate sort).
The definition of $\sem{M,\chi,\nu}$ is standard, in particular $\sem{K\,L,\chi,\nu} = \sem{K,\chi,\nu}(\sem{L,\chi,\nu})$.
The meaning of nonterminals in $\mathcal T$ is given by the least fixpoint
computation.
For a valuation $\chi$ of the nonterminals of $\mathcal{T}$, we write
$\mathcal{T}(\chi)$ for the valuation $\chi'$ such that
$\chi'(A)=\lambda g_1.\cdots.\lambda g_p.\sem{K,\chi,[g_1/x_1,\dots,g_p/x_p]}$
where $A\, x_1\,\dots\, x_p \to K$ is the rule for $A$ in $\mathcal{T}$.
Then the meaning of nonterminals is given by the valuation that is the
least fixpoint of this operator: $\chi_\mathcal T=\bigwedge\set{\chi :
\mathcal T(\chi)\subseteq \chi}$.
Having $\chi_\mathcal T$ we can define the semantics of a term $M$ in a valuation
$\nu$ of its free variables as $\sem{M,\nu}=\sem{M,\chi_\mathcal T,\nu}$.
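As a small sanity check (our example), suppose $\mathcal T$ contains the rules $A\to +^2\,(a^1\,A)\,e^0$ and $B\to b^1\,B$ for nonterminals $A,B$ of sort $o$. Then $\mathcal T(\chi)(A)=\max(\chi(A),\top)=\top$ and $\mathcal T(\chi)(B)=\chi(B)$, so the least fixpoint assigns $\chi_\mathcal T(A)=\top$ and $\chi_\mathcal T(B)=\bot$: $A$ generates at least one finite tree (namely $e^0$), while $B$ is unproductive.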
Least fixed point models of schemes induce an interpretation on
infinite trees by finite approximations. An infinite tree has value $\top$
iff it represents a non-empty
language~\cite{kobele2015}.
The important point is that the semantics of a term and that of the
infinite tree generated from the term coincide.
We can now apply to $\mathcal T$ the reflection
operation~\cite{broadbent10:_recur_schem_logic_reflec}
with respect to the above interpretation $\mathcal{M}$.
The result is a scheme $\mathcal T'$ that generates the same tree as
$\mathcal{T}$ but where every node is additionally marked by a tuple
$(a_1,\dots, a_r,b)$ where $a_1$, \dots, $a_r$ is the semantics of the
arguments of that node (i.e., subtrees rooted at its children) and $b$ is the semantics of the subtree rooted
at that node.
What is important here is that $\mathcal{T'}$ has the same order as
$\mathcal{T}$ which is the same as that of $\mathcal{S}$.
The additional labels allow us to remove unproductive parts of the
tree generated by $\mathcal T'$.
For this we introduce two more nonterminals $\Pi_1$ and
$\Pi_2$ of sort $o\to o \to o$.
We then add the rules $\Pi_1\, x_1\, x_2 \to x_1$,
$\Pi_2\, x_1\, x_2 \to x_2$.
Now we
replace every occurrence of $+^2$ labeled by $(\top,\bot,\top)$ by
$\Pi_1$, and every occurrence of $+^2$ labeled by $(\bot,\top,\top)$ by
$\Pi_2$.
After these transformations we obtain a scheme $\mathcal{T}''$
generating a tree which contains exactly those nodes of $\mathcal{T}'$
that are labeled with $(\top,\dots,\top,\top)$.
We convert
$\mathcal{T}''$ into a HORS $\mathcal S'$
whose language is the same as that of $\mathcal{S}$.
%
For this we replace every remaining occurrence of $+^2$ (thus labeled by $(\top,\top,\top)$) by a nonterminal $C$ of sort
$o\to o\to o$, and we add two rewrite rules $C\, x\, y \to x$ and
$C\, x\, y\to y$. We also remove the additional labels from symbols.
By construction, $\mathcal S'$ is productive and $\lang{\mathcal S'} \subseteq \lang{\mathcal S}$.
Moreover, since we only eliminated non-productive nonterminals,
$\lang{\mathcal S'} = \lang{\mathcal S}$.
%
\end{proof}
\begin{lemma}
\label{lem:HORS:restriction}
Let $\mathcal S$ be a productive HORS, and $\mathcal A$ a restriction such that for every symbol $a^r$ appearing in any tree generated by $\mathcal S$ there is a transition of $\mathcal A$ having $a^r$ on the left side.
Then we can build a HORS $\mathcal S'$ of the same order whose language is $\trans{\mathcal A}(\lang{\mathcal S})$.
\end{lemma}
\begin{proof}
First, w.l.o.g.~we assume that $\mathcal S$ is normalized (notice that while converting a productive HORS to a normalized one, it remains productive).
Every rule $A\,\vec y\to h\,(B_1\,\vec y)\,\dots\,(B_r\,\vec y)$ of $\mathcal S$ in which $h$ is not a symbol is also taken to $\mathcal S'$.
If $h=a^r$ is a symbol, we consider every transition of $\mathcal A$ having $a^r$ on the left side.
Since $\mathcal A$ is a restriction, this transition is of the form
\[ a^r\,(p,x_1)\,\dots\,(p,x_r) \goesto {} p,b^n \, x_{i_1} \cdots x_{i_n}\,, \]
where $1 \leq i_1 < \cdots < i_n \leq r$.
%
Then, to $\mathcal S'$ we take the rule
\[ A \, \vec y \to b^n \, (B_{i_1} \, \vec y) \cdots (B_{i_n} \, \vec y)\,. \]
In general, $\trans{\mathcal A}(\lang \mathcal S) \subseteq \lang{\mathcal S'}$.
Since $\mathcal S$ is productive,
the subterms $B_i \, \vec y$ obtained by rewriting the initial nonterminal $A_\mathit{init}$
produce at least one tree, and since for every symbol in this tree there is a transition of $\mathcal A$ having this symbol on the left side, $\mathcal A$ has some run on this tree.
Thus $\trans{\mathcal A}(\lang \mathcal S) = \lang{\mathcal S'}$.
\end{proof}
\thmtransd*
\begin{proof}
A transduction $\mathcal A$ realized by an FTT is the composition of a complete FTT $\mathcal B$ and a restriction $\mathcal C$.
%
We first apply Lemma~\ref{lemma:HORS:transd:complete} to the complete transduction realized by $\mathcal B$.
%
Then, using Lemma \ref{lemma:HORS:restrict-to-symbols} we remove from the generated language all trees that use symbols not appearing on the left side of any transition of $\mathcal C$.
Next, we turn the resulting HORS into a productive one by Lemma~\ref{lemma:HORS:productive},
and, finally, we apply Lemma~\ref{lem:HORS:restriction} to the resulting productive HORS and the restriction realized by $\mathcal C$.
We end up with a HORS of the same order as the original one, generating the image under $\mathcal A$ of the language of the original HORS.
\end{proof}
\section{Introduction}\label{sec:introduction}
Deep learning has accelerated progress in Object Detection research \cite{girshick2015fast,ren2015faster,he2017mask,lin2017focal,redmon2016you}, where a model is tasked to identify and localise objects in an image.
All existing approaches work under the strong assumption that all the classes to be detected are available at the training phase. Two challenging scenarios arise when we relax this assumption:
1) A test image might contain objects from unknown classes, which should be classified as \emph{unknown}.
2) As and when information (labels) about such identified unknowns become available, the model should be able to incrementally learn the new class.
Research in developmental psychology \cite{meacham1983wisdom,livio2017makes} finds that the ability to identify what one does not know is key to cultivating curiosity. Such curiosity fuels the desire to learn new things \cite{engel2011children,grazer2016curious}.
This motivates us to propose a new problem where a model should be able to identify instances of unknown objects as unknown and subsequently learn to recognise them when training data progressively arrives, in a \textit{unified} way. We call this problem setting \textit{Open World Object Detection\xspace}.
The number of classes annotated in standard vision datasets like Pascal VOC \cite{everingham2010pascal} and MS-COCO \cite{lin2014microsoft} is very low (20 and 80 respectively) when compared to the infinite number of classes present in the open world. Recognising an unknown as an unknown requires strong generalization. Scheirer \etal \cite{scheirer2012toward} formalise this as the \textit{Open Set\xspace} classification problem. Since then, various methodologies (using 1-vs-rest SVMs and deep learning models) have been formulated to address this challenging setting.
Bendale \etal \cite{bendale2015towards} extend Open Set\xspace to an \textit{Open World\xspace} classification setting by additionally updating the image classifier to recognise the identified new unknown classes.
Interestingly, as seen in Fig.~\ref{fig:related_works}, Open World object detection is unexplored, owing to the difficulty of the problem setting.
\begin{figure}
\includegraphics[width=\columnwidth]{images/OWOD.pdf}
\caption{Open World Object Detection\xspace ({\color{Orange}$\bigstar$}) is a novel problem that has not been formally defined and addressed so far. Though related to the Open Set\xspace and Open World\xspace classification, Open World Object Detection\xspace offers its own unique challenges, which when addressed, improves the practicality of object detectors.}
\label{fig:related_works}
\end{figure}
The advances in Open Set\xspace and Open World\xspace image classification cannot be trivially adapted to Open Set\xspace and Open World\xspace object detection, because of a fundamental difference in the problem setting:
\textit{The object detector is trained to detect unknown objects as background.} Instances of many unknown classes would have been already introduced to the object detector along with known objects. As they are not labelled, these unknown instances would be explicitly learned as background, while training the detection model.
Dhamija \etal \cite{dhamija2020overlooked} find that even with this extra training signal, the state-of-the-art object detectors result in false positive detections, where the unknown objects end up being classified as one of the known classes, often with very high probability. Miller \etal \cite{miller2018dropout} propose to use dropout sampling to get an estimate of the uncertainty of the object detection prediction. This is the only peer-reviewed research work in the open set\xspace object detection literature.
Our proposed Open World Object Detection\xspace goes a step further to incrementally learn the new classes, once they are detected as unknown and an oracle provides labels for the objects of interest among all the unknowns. To the best of our knowledge this has not been tried in the literature.
The Open World Object Detection\xspace setting is much more natural than the existing closed-world, static-learning setting. The world is diverse and dynamic in the number, type and configurations of novel classes. It would be naive to assume that all the classes to expect at inference are seen during training. Practical deployments of detection systems in robotics, self-driving cars, plant phenotyping, healthcare and surveillance cannot afford to have complete knowledge on what classes to expect at inference time, while being trained in-house. The most natural and realistic behavior that one can expect from an object detection algorithm deployed in such settings would be to confidently predict an unknown object as unknown, and known objects into the corresponding classes. As and when more information about the identified unknown classes becomes available, the system should be able to incorporate them into its existing knowledge base. This would define a smart object detection system, and ours is an effort towards achieving this goal.
\noindent The key contributions of our work are:
\begin{itemize}[leftmargin=*,topsep=0pt, noitemsep]
\item We introduce a novel problem setting, Open World Object Detection\xspace, which models the real-world more closely.
\item We develop a novel methodology, called ORE\xspace, based on contrastive clustering, an unknown-aware proposal network and energy based unknown identification to address the challenges of open world\xspace detection.
\item We introduce a comprehensive experimental setting, which helps to measure the open world\xspace characteristics of an object detector, and benchmark ORE\xspace on it against competitive baseline methods.
\item As an interesting by-product, the proposed methodology achieves state-of-the-art performance on Incremental Object Detection, even though not primarily designed for it.
\end{itemize}
\section{Related Work}\label{sec:related_works}
\noindent\textbf{Open Set\xspace Classification:} The open set\xspace setting considers knowledge acquired through training set to be incomplete, thus new unknown classes can be encountered during testing.
Scheirer \etal \cite{scheirer2013toward} developed open set classifiers in a one-vs-rest setting to balance the performance and the risk of labeling a sample far from the known training examples (termed as open space risk). Follow up works \cite{jain2014multi, scheirer2014probability} extended the open set framework to multi-class classifier setting with probabilistic models to account for the fading away classifier confidences in case of unknown classes.
Bendale and Boult \cite{bendale2016towards} identified unknowns in the feature space of deep networks and used a Weibull distribution to estimate the set risk (called the OpenMax classifier). A generative version of OpenMax was proposed in \cite{ge2017generative} by synthesizing novel class images. Liu \etal \cite{liu2019large} considered a long-tailed recognition setting where majority, minority and unknown classes coexist. They developed a metric learning framework to
identify unseen classes as unknown. In a similar spirit, several dedicated approaches target detecting out-of-distribution samples \cite{liang2018enhancing} or novelties \cite{pidhorskyi2018generative}. Recently, self-supervised learning \cite{Perera_2020_CVPR} and unsupervised learning with reconstruction \cite{Yoshihashi_2019_CVPR} have been explored for open set recognition. However, while these works can recognize unknown instances, they cannot dynamically update themselves in an incremental fashion over multiple training episodes. Further, our energy based unknown detection approach has not been explored before.
\noindent\textbf{Open World\xspace Classification:} \cite{bendale2015towards} first proposed the open world setting for image recognition. Instead of a static classifier trained on a fixed set of classes, they proposed a more flexible setting where knowns and unknowns both coexist. The model can recognize both types of objects and adaptively improve itself when new labels for unknowns are provided. Their approach extends the Nearest Class Mean classifier to operate in an open world setting by re-calibrating the class probabilities to balance open space risk. \cite{pernici2018memory} studies open world face identity learning, while \cite{xu2019open} proposed to use an exemplar set of seen classes to match against a new sample, rejecting it in case of a low match with all previously known classes.
However, they do not test on image classification benchmarks and instead study product classification in e-commerce applications.
\noindent\textbf{Open Set\xspace Detection:} Dhamija \etal \cite{dhamija2020overlooked} formally studied the impact of the open set setting on popular object detectors. They noticed that state of the art object detectors often classify unknown objects with high confidence into seen classes. This is despite the fact that the detectors are explicitly trained with a background class \cite{ren2016faster, girshick2015fast, liu2016ssd} and/or apply one-vs-rest classifiers to model each class \cite{girshick2014rich, lin2017focal}. A dedicated body of work \cite{miller2018dropout, miller2019evaluating, hall2020probabilistic} focuses on developing measures of (spatial and semantic) uncertainty in object detectors to reject unknown classes. E.g., \cite{miller2018dropout,miller2019evaluating} use Monte Carlo Dropout \cite{gal2016dropout} sampling in an SSD detector to obtain uncertainty estimates. These methods, however, cannot incrementally adapt their knowledge in a dynamic world.
\section{Open World Object Detection\xspace}\label{sec:prob_definition}
\begin{figure*}[t]
\centering
\begin{minipage}[]{0.75\textwidth}
\includegraphics[width=1\linewidth]{images/arch_jkj.pdf}
\end{minipage}
\begin{minipage}[]{0.24\textwidth}
\captionof{figure}{\emph{Approach Overview:} \emph{Top row:} At each incremental learning step, the model identifies unknown objects (denoted by `?'), which are progressively labelled (as blue circles) and added to the existing knowledge base (green circles). \emph{Bottom row:} Our open world object detection model identifies potential unknown objects using an energy-based classification head and the unknown-aware RPN. Further, we perform contrastive learning in the feature space to learn discriminative clusters and can flexibly add new classes in a continual manner without forgetting the previous classes.
}
\label{fig:pipeline}
\end{minipage}
\vspace{-5pt}
\end{figure*}
Let us formalise the definition of Open World Object Detection\xspace in this section.
At any time $t$, we consider the set of known object classes as $\mathcal{K}^t = \{1, 2, .. , \nC\} \subset \mathbb{N}^+$ where $\mathbb{N}^+$ denotes the set of positive integers. In order to realistically model the dynamics of the real world, we also assume that there exists a set of unknown classes $\mathcal{U} = \{\nC + 1, ... \}$, which may be encountered during inference. The known object classes $\mathcal{K}^t$ are assumed to be labeled in the dataset $\mathcal{D}^t = \{ \mathbf{X}^t, \mathbf{Y}^t\}$ where $\mathbf{X}$ and $\mathbf{Y}$ denote the input images and labels respectively. The input image set comprises $M$ training images, $\mathbf{X}^t = \{\bm{I}_1, \ldots, \bm{I}_{M} \}$, and the associated object labels for each image form the label set $\mathbf{Y}^t = \{\bm{Y}_1, \ldots, \bm{Y}_{M} \}$. Each $\bm{Y}_i = \{\bm{y}_1, \bm{y}_2, .., \bm{y}_K \} $ encodes a set of $K$ object instances with their class labels and locations, i.e., $\bm{y}_k = [l_k, x_k, y_k, w_k, h_k]$, where $l_k \in \mathcal{K}^t$ and $x_k, y_k, w_k, h_k$ denote the bounding box center coordinates, width and height respectively.
The \textit{Open World Object Detection\xspace} setting considers an object detection model $\mathcal{M}_{\nC}$ that is trained to detect all the previously encountered $\nC$ object classes. Importantly, the model $\mathcal{M}_{\nC}$ is able to identify a test instance belonging to any of the known $\nC$ classes, and can also recognize a new or unseen class instance by classifying it as an \emph{unknown},
denoted by a label zero (0). The unknown set of instances $\mathbf{U}^t$ can then be forwarded to a human user who can identify $n$ new classes of interest (among a potentially large number of unknowns) and provide their training examples. The learner incrementally adds the $n$ new classes and updates itself to produce an updated model $\mathcal{M}_{\nC+n}$ without retraining from scratch on the whole dataset. The known class set is also updated: $\mathcal{K}^{t+1} = \mathcal{K}^t \cup \{\nC + 1, \ldots, \nC + n\}$. This cycle continues over the life of the object detector, where it adaptively updates itself with new knowledge.
The problem setting is illustrated in the top row of Fig.~\ref{fig:pipeline}.
\section{ORE\xspace: \underline{O}pen Wo\underline{r}ld Object D\underline{e}tector}\label{sec:ore_methodology}
A successful approach for Open World Object Detection\xspace should be able to identify unknown instances without explicit supervision, and should not forget earlier classes when labels of the identified novel instances are presented to the model for a knowledge update (without retraining from scratch). We propose a solution, ORE\xspace,
which addresses both these challenges in a unified manner.
Neural networks are universal function approximators \cite{hornik1989multilayer}, which learn a mapping between an input and the output through a series of hidden layers. The latent representation learned in these hidden layers directly controls how each function is realised. We hypothesise that learning clear discrimination between classes in the latent space of object detectors could have a two-fold effect. \emph{First}, it helps the model to identify how the feature representation of an unknown instance differs from the other known instances, which helps identify an unknown instance as a novelty. \emph{Second}, it facilitates learning feature representations for the new class instances without overlapping with the previous classes in the latent space, which helps towards incrementally learning without forgetting.
The key component that helps us realise this is our proposed \textit{contrastive clustering} in the latent space, which we elaborate in Sec. \ref{sec:contrastive_clustering}.
To optimally cluster the unknowns using contrastive clustering, we need to have supervision on what an unknown instance is. It is infeasible to manually annotate even a small subset of the potentially infinite set of unknown classes. To counter this, we propose an auto-labelling mechanism based on the Region Proposal Network \cite{ren2015faster} to pseudo-label unknown instances, as explained in Sec. \ref{sec:autolabelling_unknown}.
The inherent separation of auto-labelled unknown instances in the latent space helps our energy based classification head to differentiate between the known and unknown instances. As elucidated in Sec. \ref{sec:energy_based_unk_identification}, we find that Helmholtz free energy is higher for unknown instances.
Fig.~\ref{fig:pipeline} shows the high-level architectural overview of ORE\xspace. We choose Faster R-CNN \cite{ren2015faster} as the base detector, as Dhamija \etal \cite{dhamija2020overlooked} found that it has better open set\xspace performance when compared against the one-stage RetinaNet detector \cite{lin2017focal} and the objectness based YOLO detector \cite{redmon2016you}.
Faster R-CNN \cite{ren2015faster} is a two stage object detector. In the first stage, a class-agnostic Region Proposal Network (RPN) proposes potential regions which might contain an object, based on the feature maps coming from a shared backbone network. The second stage classifies and adjusts the bounding box coordinates of each proposed region.
The features that are generated by the residual block in the Region of Interest (RoI) head are contrastively clustered.
The RPN and the classification head are adapted to auto-label and identify unknowns, respectively. We explain each of these constituent components in the following subsections:
\subsection{Contrastive Clustering} \label{sec:contrastive_clustering}
Class separation in the latent space would be an ideal characteristic for an Open World\xspace methodology to identify unknowns. A natural way to enforce this would be to model it as a contrastive clustering problem, where instances of the same class are forced to remain close by, while instances of dissimilar classes are pushed far apart.
For each known class $i \in \mathcal{K}^t$, we maintain a prototype vector $\bm{p}_i$. Let $\bm{f}_c \in \mathbb{R}^d$ be a feature vector that is generated by an intermediate layer of the object detector, for an object of class $c$. We define the contrastive loss as follows:
\begin{align}\label{eqn:clustering_loss}
\mathcal{L}_{cont}(\bm{f}_c) & = \sum_{i = 0}^{\nC} \ell(\bm{f}_c, \bm{p}_i), \text{ where,} \\
\ell(\bm{f}_c, \bm{p}_i) & = \begin{cases} \mathcal{D}(\bm{f}_c, \bm{p}_i) & i = c\\ \max\{0, \Delta - \mathcal{D}(\bm{f}_c, \bm{p}_i)\} & \text{otherwise} \end{cases} \notag
\end{align}
where $\mathcal{D}$ is any distance function and $\Delta$ is a margin defining how close similar and dissimilar items are allowed to be. Minimizing this loss ensures the desired class separation in the latent space.
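As a concrete illustration, the following is a minimal PyTorch-style sketch of Eqn.~\ref{eqn:clustering_loss}; the function and variable names are ours and need not match the released implementation, and we assume the Euclidean distance for $\mathcal{D}$.
\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_loss(f, c, prototypes, delta=1.0):
    # f          : (d,) RoI feature of an object of class c
    # c          : int class index (0 denotes the unknown class)
    # prototypes : (C+1, d) tensor holding p_0 ... p_C
    # delta      : margin for dissimilar classes
    dist = torch.norm(prototypes - f, dim=1)   # D(f, p_i) for every class i
    hinge = F.relu(delta - dist)               # max(0, delta - D(f, p_i))
    mask = torch.ones_like(dist)
    mask[c] = 0.0                              # the true class uses the pull term instead
    return dist[c] + (hinge * mask).sum()
\end{verbatim}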
The mean of the feature vectors corresponding to each class is used to create the set of class prototypes: $\mathcal{P} = \{\bm{p}_0 \cdots \bm{p}_\nC\}$. Maintaining each prototype vector is a crucial component of ORE\xspace. As the whole network is trained end-to-end, the class prototypes should also evolve gradually, as the constituent features change gradually (since stochastic gradient descent updates weights by a small step in each iteration). We maintain a fixed-length queue $\bm{q}_i$ per class for storing the corresponding features. A feature store $\mathcal{F}_{store} = \{\bm{q}_0 \cdots \bm{q}_\nC\}$ stores the class specific features in the corresponding queues. This is a scalable approach for keeping track of how the feature vectors evolve with training, as the number of feature vectors that are stored is bounded by $\nC \times \nQ$, where $\nQ$ is the maximum size of the queue.
Algorithm \ref{algo:get_clustering_loss} provides an overview of how class prototypes are managed while computing the clustering loss. We start computing the loss only after a certain number of burn-in iterations ($I_b$) have been completed. This allows the initial feature embeddings to mature enough to encode class information. Thereafter, we compute the clustering loss using Eqn.~\ref{eqn:clustering_loss}. After every $I_p$ iterations, a set of new class prototypes $\mathcal{P}_{new}$ is computed (line 8). Then the existing prototypes $\mathcal{P}$ are updated by weighing $\mathcal{P}$ and $\mathcal{P}_{new}$ with a momentum parameter $\eta$. This allows the class prototypes to evolve gradually while keeping track of the previous context. The computed clustering loss is added to the standard detection loss and back-propagated to learn the network end-to-end.
\begin{algorithm}\small
\caption{Algorithm \textsc{ComputeClusteringLoss}}
\label{algo:get_clustering_loss}
\begin{algorithmic}[1]
\Require{Input feature for which loss is computed: $\bm{f}_c$; Feature store: $\mathcal{F}_{store}$; Current iteration: $i$; Class prototypes: $\mathcal{P} = \{\bm{p}_0 \cdots \bm{p}_\nC\}$; Momentum parameter: $\eta$.
}
\State Initialise $\mathcal{P}$ if it is the first iteration.
\State $\mathcal{L}_{cont}$ $\leftarrow$ 0
\If{$i== I_b$}
\State $\mathcal{P} \leftarrow$ class-wise mean of items in $\mathcal{F}_{Store}$.
\State $\mathcal{L}_{cont}$ $\leftarrow$ Compute using $\bm{f}_c$, $\mathcal{P}$ and Eqn. \ref{eqn:clustering_loss}.
\ElsIf{$i> $ $I_b$}
\If{$i\% I_p == 0$}
\State $\mathcal{P}_{new} \leftarrow$ class-wise mean of items in $\mathcal{F}_{Store}$.
\State $\mathcal{P} \leftarrow \eta \mathcal{P} + (1-\eta)\mathcal{P}_{new}$
\EndIf
\State $\mathcal{L}_{cont}$ $\leftarrow$ Compute using $\bm{f}_c$, $\mathcal{P}$ and Eqn. \ref{eqn:clustering_loss}.
\EndIf
\State \Return $\mathcal{L}_{cont}$
\end{algorithmic}
\end{algorithm}
\subsection{Auto-labelling Unknowns with RPN}\label{sec:autolabelling_unknown}
While computing the clustering loss with Eqn.~\ref{eqn:clustering_loss}, we contrast the input feature vector $\bm{f}_c$ against prototype vectors, which include a prototype for unknown objects too ($c\in \{0,1,..,\nC\}$ where $0$ refers to the unknown class). This would require unknown object instances to be labelled with \cls{unknown} ground truth class, which is not practically feasible owing to the arduous task of re-annotating \underline{all} instances of each image in already annotated large-scale datasets.
As a surrogate, we propose to automatically label some of the objects in the image as potential unknown objects. For this, we rely on the fact that the Region Proposal Network (RPN) is class agnostic.
Given an input image, the RPN generates a set of bounding box predictions for foreground and background instances, along with the corresponding objectness scores.
We label those proposals that have a high objectness score but do not overlap with a ground-truth object as potential unknown objects. Simply put, we select the top-k background region proposals, sorted by their objectness scores, as unknown objects.
This seemingly simple heuristic achieves good performance as demonstrated in Sec.~\ref{sec:expr_and_results}.
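The selection heuristic can be sketched as follows (our simplified version; the exact thresholds and bookkeeping in the actual implementation may differ, and \texttt{box\_iou} is taken from \texttt{torchvision.ops}).
\begin{verbatim}
import torch
from torchvision.ops import box_iou

def pseudo_label_unknowns(proposals, objectness, gt_boxes, k=1, iou_thresh=0.5):
    # proposals  : (N, 4) boxes predicted by the class-agnostic RPN
    # objectness : (N,)   RPN objectness scores
    # gt_boxes   : (G, 4) annotated boxes of the known classes
    if gt_boxes.numel() == 0:
        background = torch.ones(proposals.size(0), dtype=torch.bool)
    else:
        overlap = box_iou(proposals, gt_boxes).max(dim=1).values
        background = overlap < iou_thresh       # proposals matching no ground truth
    scores = objectness.detach().clone()
    scores[~background] = float('-inf')         # consider only background proposals
    return scores.topk(k).indices               # indices of potential unknown objects
\end{verbatim}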
\subsection{Energy Based Unknown Identifier}\label{sec:energy_based_unk_identification}
Given the features ($\bm{f} \in F$) in the latent space $F$ and their corresponding labels $l \in L$, we seek to learn an energy function $E(F,L)$. Our formulation is based on Energy Based Models (EBMs) \cite{lecun2006tutorial}, which learn a function $E(\cdot)$ that estimates the compatibility between the observed variables $F$ and the possible set of output variables $L$ using a single output scalar, i.e., $E(\bm{f}): \mathbb{R}^d \rightarrow \mathbb{R}$.
The intrinsic capability of EBMs to assign low energy values to in-distribution data and vice-versa motivates us to use an energy measure to characterize whether a sample is from an unknown class.
Specifically, we use the Helmholtz free energy formulation where energies for all values in $L$ are combined,
\begin{equation}
E(\bm{f}) = -T \log \int_{l'} \exp\bigg({-\frac{E(\bm{f}, l')}{T}} \bigg),
\label{eqn:free_energy}
\end{equation}
where $T$ is the temperature parameter. There exists a simple relation between the network outputs after the softmax layer and the Gibbs distribution of class specific energy values \cite{liu2020energy}. This can be formulated as,
\begin{equation}
p(l | \bm{f}) = \frac{ \exp(\frac{g_{l}(\bm{f})}{T}) }{\sum_{i=1}^{\nC}
\exp(\frac{g_{i}(\bm{f})}{T})} =
\frac{ \exp(-\frac{E(\bm{f}, l)}{T})}{ \exp (- \frac{E(\bm{f})}{T})}
\end{equation}
where $p(l | \bm{f})$ is the probability density for a label $l$, $g_l(\bm{f})$ is the $l^{th}$ classification logit of the classification head $g(.)$. Using this correspondence, we define free energy of our classification models in terms of their logits as follows:
\vspace{-5pt}
\begin{equation}
E(\bm{f}; g) = -T \log \sum_{i = 1}^{\nC} \exp(\frac{g_i(\bm{f})}{T}).
\vspace{-5pt}
\label{eqn:energy}
\end{equation}
The above equation provides us a natural way to transform the classification head of the standard Faster R-CNN~\cite{ren2015faster} into an energy function. Due to the clear separation that we enforce in the latent space with the contrastive clustering, we see a clear separation in the energy level of the known class data-points and unknown data-points, as illustrated in Fig. \ref{fig:energy_plots}. In light of this trend, we model the energy distributions of the known and unknown energy values, $\xi_{kn}(\bm{f})$ and $\xi_{unk}(\bm{f})$, with a set of shifted Weibull distributions. These distributions were found to fit the energy data of a small held out validation set (with both known and unknown instances) very well, when compared to Gamma, Exponential and Normal distributions. The learned distributions can be used to label a prediction as unknown if $\xi_{kn}(\bm{f}) < \xi_{unk}(\bm{f})$.
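For illustration, the energy of Eqn.~\ref{eqn:energy} and the Weibull-based decision rule can be sketched as follows (our simplification; the function names are ours, and the exact fitting procedure on the held-out validation set may differ).
\begin{verbatim}
import torch
from scipy.stats import weibull_min

def free_energy(logits, T=1.0):
    # logits : (..., C) classification logits over the known classes
    # Lower energy is expected for known objects, higher for unknowns.
    return -T * torch.logsumexp(logits / T, dim=-1)

def fit_energy_models(energies_known, energies_unknown):
    # 1-d arrays of energies computed on a small held-out validation set
    # containing both known and unknown instances.
    return weibull_min.fit(energies_known), weibull_min.fit(energies_unknown)

def is_unknown(energy, params_known, params_unknown):
    xi_kn = weibull_min.pdf(energy, *params_known)
    xi_unk = weibull_min.pdf(energy, *params_unknown)
    return xi_kn < xi_unk
\end{verbatim}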
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{images/energy_values_1.png}
\vspace{-13pt}
\caption{\small
The energy values of the known and unknown data-points exhibit clear separation as seen above. We fit a Weibull distribution to each of them and use these distributions to identify novel test samples as known or unknown, as explained in Sec.~\ref{sec:energy_based_unk_identification}.}
\label{fig:energy_plots}
\end{figure}
\subsection{Alleviating Forgetting}
After the identification of unknowns, an important requisite for an open world\xspace detector is to be able to learn new classes, when the labeled examples of some of the unknown classes of interest are provided.
Importantly, the training data for the previous tasks will not be present at this stage since retraining from scratch is not a feasible solution. Training with only the new class instances will lead to catastrophic forgetting \cite{mccloskey1989catastrophic,french1999catastrophic} of the previous classes.
We note that a number of involved approaches have been developed to alleviate such forgetting, including methods based on parameter regularization \cite{aljundi2018memory,kirkpatrick2017overcoming,li2018learning,zenke2017continual}, exemplar replay \cite{AGEM,rebuffi2017icarl,lopez2017gradient,castro2018end}, dynamically expanding networks \cite{mallya2018packnet,serra2018overcoming,rusu2016progressive} and meta-learning \cite{rajasegaran2020itaml,kj2020meta}.
We build on the recent insights from \cite{prabhu2020gdumb,knoblauch2020optimal,wang2020frustratingly}, which compare the importance of example replay against other more complex solutions. Specifically, Prabhu \etal \cite{prabhu2020gdumb} retrospect the progress made by complex continual learning methodologies and show that a greedy exemplar selection strategy for replay in incremental learning consistently outperforms the state-of-the-art methods by a large margin. Knoblauch \etal \cite{knoblauch2020optimal} develop a theoretical justification for the unwarranted power of replay methods: they prove that an optimal continual learner solves an NP-hard problem and requires infinite memory. Storing a few examples and replaying them has also been found effective in the related few-shot object detection setting by Wang \etal \cite{wang2020frustratingly}. These findings motivate us to use a relatively simple methodology in ORE\xspace to mitigate forgetting: we store a balanced set of exemplars and finetune the model on them after each incremental step. At each point, we ensure that a minimum of $N_{ex}$ instances for each class are present in the exemplar set.
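A minimal sketch of the exemplar selection we have in mind is given below (our illustration; the paper only specifies that a balanced set with at least $N_{ex}$ instances per class is stored and replayed).
\begin{verbatim}
import random
from collections import defaultdict

def build_exemplar_set(annotated_images, n_ex=50, seed=0):
    # annotated_images : list of (image_id, class_labels) pairs
    # Greedily keep images until every class has at least n_ex instances.
    rng = random.Random(seed)
    shuffled = annotated_images[:]
    rng.shuffle(shuffled)
    per_class = defaultdict(int)
    exemplars = []
    for image_id, labels in shuffled:
        if any(per_class[c] < n_ex for c in labels):
            exemplars.append(image_id)
            for c in labels:
                per_class[c] += 1
    return exemplars
\end{verbatim}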
\section{Experiments and Results} \label{sec:expr_and_results}
We propose a comprehensive evaluation protocol to study how well an open world detector identifies unknowns, detects known classes, and progressively learns new classes when labels are provided for some of the unknowns.
\subsection{Open World\xspace Evaluation Protocol} \label{sec:evaluation_protocol}
\noindent \textbf{Data split:} We group classes into a set of tasks $\mathcal{T} = \{T_1, \cdots T_t, \cdots\}$. All the classes of a specific task will be introduced to the system at a point of time $t$. While learning $T_t$, all the classes of $\{T_\tau:\tau {<} t\}$ will be treated as known and $\{T_\tau:\tau {>} t\}$ would be treated as unknown. For a concrete instantiation of this protocol, we consider classes from Pascal VOC \cite{everingham2010pascal} and MS-COCO \cite{lin2014microsoft}. We group all VOC classes and data as the first task $T_1$. The remaining $60$ classes of MS-COCO \cite{lin2014microsoft} are grouped into three successive tasks with semantic drifts (see Tab. \ref{tab:data_split}).
All images which correspond to the above split from Pascal VOC and MS-COCO train-sets form the training data.
For evaluation, we use the Pascal VOC test split and the MS-COCO val split. $1$k images from the training data of each task are kept aside for validation.
Data splits and codes can be found at {\small\url{https://github.com/JosephKJ/OWOD}}.
\noindent \textbf{Evaluation metrics:}
Since an unknown object easily gets confused with a known object, we use the Wilderness Impact (WI) metric \cite{dhamija2020overlooked} to explicitly characterise this behaviour.
\begin{equation}
\text{Wilderness Impact} \, (WI) = \frac{P_{\mathcal{K}}}{P_{\mathcal{K} \cup \mathcal{U}}} - 1,
\end{equation}
where $P_{\mathcal{K}}$ refers to the precision of the model when evaluated on known classes and $P_{\mathcal{K} \cup \mathcal{U}}$ is the precision when evaluated on known and unknown classes, measured at a recall level $R$ (0.8 in all experiments). Ideally, WI should be low, as the precision should not drop when unknown objects are added to the test set. Besides WI, we also use the Absolute Open-Set Error (A-OSE) \cite{miller2018dropout}
to report the number of unknown objects that get wrongly classified as any of the known classes.
Both WI and A-OSE implicitly measure how effective the model is in handling unknown objects.
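As an illustration with made-up numbers: if the precision on known classes at recall $0.8$ is $0.80$, but drops to $0.72$ once unknown objects are added to the test set, then $WI = 0.80/0.72 - 1 \approx 0.11$; a perfect open world detector would keep $WI$ at $0$.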
In order to quantify incremental learning capability of the model in the presence of new labeled classes, we measure the mean Average Precision (mAP) at IoU threshold of 0.5 (consistent with the existing literature \cite{shmelkov2017incremental,PENG2020109}).
\begin{table}
\centering
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{@{}l|cccc@{}}
\toprule
& Task 1 & Task 2 & Task 3 & Task 4 \\ \midrule
Semantic split & \begin{tabular}[c]{@{}c@{}}VOC \\ Classes\end{tabular} & \begin{tabular}[c]{@{}c@{}}Outdoor, Accessories, \\ Appliance, Truck\end{tabular} & \begin{tabular}[c]{@{}c@{}}Sports, \\ Food\end{tabular} & \begin{tabular}[c]{@{}c@{}}Electronic, Indoor, \\ Kitchen, Furniture\end{tabular} \\\midrule
\# training images & 16551 & 45520 & 39402 & 40260 \\
\# test images & 4952 & 1914 & 1642 & 1738 \\
\# train instances & 47223 & 113741 & 114452 & 138996 \\
\# test instances & 14976 & 4966 & 4826 & 6039 \\ \bottomrule
\end{tabular}%
}\vspace{-0.5em}
\caption{The table shows task composition in the proposed Open World\xspace evaluation protocol. The semantics of each task and the number of images and instances (objects) across splits are shown.}
\label{tab:data_split}
\end{table}
\begin{table*}[!htp]\setlength{\tabcolsep}{2pt}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}l|c|c|c|c|c|ccc|c|c|ccc|ccc@{}}
\toprule
Task IDs ($\rightarrow$)& \multicolumn{3}{c|}{Task 1} & \multicolumn{5}{c|}{Task 2} & \multicolumn{5}{c|}{Task 3} & \multicolumn{3}{c}{Task 4} \\ \midrule
& \cellcolor[HTML]{F3F3F3}WI & \cellcolor[HTML]{F3F3F3}A-OSE & \multicolumn{1}{c|}{mAP ($\uparrow$)} & \cellcolor[HTML]{F3F3F3}WI & \cellcolor[HTML]{F3F3F3}A-OSE & \multicolumn{3}{c|}{mAP ($\uparrow$)} & \cellcolor[HTML]{F3F3F3}WI & \cellcolor[HTML]{F3F3F3}A-OSE & \multicolumn{3}{c|}{mAP ($\uparrow$)} & \multicolumn{3}{c}{mAP ($\uparrow$)} \\ \cmidrule(lr){4-4} \cmidrule(lr){7-9} \cmidrule(lr){12-14}\cmidrule(lr){15-17}
& \cellcolor[HTML]{F3F3F3}($\downarrow$) & \cellcolor[HTML]{F3F3F3}($\downarrow$) & \begin{tabular}[c]{@{}c}Current \\ known\end{tabular} & \cellcolor[HTML]{F3F3F3}($\downarrow$) & \cellcolor[HTML]{F3F3F3}($\downarrow$) & \begin{tabular}[c]{@{}c@{}}Previously\\ known\end{tabular} & \begin{tabular}[c]{@{}c@{}}Current \\ known\end{tabular} & Both & \cellcolor[HTML]{F3F3F3}($\downarrow$) & \cellcolor[HTML]{F3F3F3}($\downarrow$) & \begin{tabular}[c]{@{}c@{}}Previously \\ known\end{tabular} & \begin{tabular}[c]{@{}c@{}}Current \\ known\end{tabular} & Both & \begin{tabular}[c]{@{}c@{}}Previously \\ known\end{tabular} & \begin{tabular}[c]{@{}c@{}}Current \\ known\end{tabular} & Both \\ \midrule
Oracle & \cellcolor[HTML]{F3F3F3}0.02004 & \cellcolor[HTML]{F3F3F3}7080 & 57.76 & \cellcolor[HTML]{F3F3F3}0.0066 & \cellcolor[HTML]{F3F3F3}6717 & 54.99 & 30.31 & 42.65 & \cellcolor[HTML]{F3F3F3}0.0038 & \cellcolor[HTML]{F3F3F3}4237 & 40.23 & 21.51 & 30.87 & 32.52 & 19.27 & 31.71 \\ \midrule
Faster-RCNN & \cellcolor[HTML]{F3F3F3}0.06991 & \cellcolor[HTML]{F3F3F3}13396 & 56.16 & \cellcolor[HTML]{F3F3F3}0.0371 & \cellcolor[HTML]{F3F3F3}12291 & 4.076 & 25.74 & 14.91 & \cellcolor[HTML]{F3F3F3}0.0213 & \cellcolor[HTML]{F3F3F3}9174 & 6.96 & 13.481 & 9.138 & 2.04 & 13.68 & 4.95 \\ \midrule
\begin{tabular}[c]{@{}l@{}}Faster-RCNN\\+ Finetuning\end{tabular} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c}Not applicable as incremental\\component is not present in Task 1\end{tabular}} & \cellcolor[HTML]{F3F3F3}0.0375 & \cellcolor[HTML]{F3F3F3}12497 & 51.09 & 23.84 & 37.47 & \cellcolor[HTML]{F3F3F3}0.0279 & \cellcolor[HTML]{F3F3F3}9622 & 35.69 & 11.53 & 27.64 & 29.53 & 12.78 & 25.34 \\ \midrule
ORE & \cellcolor[HTML]{F3F3F3}\textbf{0.02193} & \cellcolor[HTML]{F3F3F3}\textbf{8234} & \textbf{56.34} & \cellcolor[HTML]{F3F3F3}\textbf{0.0154} & \cellcolor[HTML]{F3F3F3}\textbf{7772} & 52.37 & 25.58 & \textbf{38.98} & \cellcolor[HTML]{F3F3F3}\textbf{0.0081} & \cellcolor[HTML]{F3F3F3}\textbf{6634} & 37.77 & 12.41 & \textbf{29.32} & 30.01 & 13.44 & \textbf{26.66} \\ \bottomrule
\end{tabular}%
}\vspace{-0.5em}
\caption{Here we showcase how ORE\xspace performs on Open World Object Detection\xspace. Wilderness Impact (WI) and Absolute Open-Set Error (A-OSE) quantify how ORE\xspace handles the unknown classes (\colorbox{ gray!15}{gray} background), whereas Mean Average Precision (mAP) measures how well it detects the known classes (white background). We see that ORE\xspace consistently outperforms the Faster R-CNN based baseline on all the metrics. Kindly refer to Sec.~\ref{sec:main_results} for more detailed analysis and an explanation of the evaluation metrics.
}\vspace{-0.8em}
\label{tab:main_table}
\end{table*}
\subsection{Implementation Details} \label{sec:impl_details}
ORE\xspace re-purposes the standard Faster R-CNN \cite{ren2015faster} object detector with a ResNet-50 \cite{he2016deep} backbone. To handle a variable number of classes in the classification head, following incremental classification methods \cite{rajasegaran2020itaml, kj2020meta,AGEM,lopez2017gradient}, we assume a bound on the maximum number of classes to expect, and modify the loss to take into account only the classes of interest. This is done by setting the classification logits of the unseen classes to a large negative value $-v$, thus making their contribution to the softmax negligible ($e^{-v} \rightarrow 0$).
The $2048$-dim feature vector from the last residual block in the RoI Head is used for contrastive clustering. The contrastive loss (defined in Eqn. \ref{eqn:clustering_loss}) is added to the standard Faster R-CNN classification and localization losses, and all three are jointly optimised.
While learning a task $T_i$, only the classes that are part of $T_i$ are labelled. While testing $T_i$, all the classes that were previously introduced are labelled along with the classes in $T_i$, and all classes of future tasks are labelled `\cls{unknown}'. For the exemplar replay, we empirically choose $N_{ex}= 50$. We do a sensitivity analysis on the size of the exemplar memory in Sec.~\ref{sec:discussions_and_analysis}. Further implementation details are provided in the supplementary material.
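As an illustration of the logit-masking step described above, the following sketch (tensor layout and names are our assumptions, not the released implementation) suppresses the softmax contribution of class slots that have not yet been introduced.
\begin{verbatim}
import torch

def mask_unseen_logits(logits, num_seen_classes, v=1e9):
    # logits: (num_rois, max_classes + 1); the last column is background.
    # Slots for classes not yet introduced are set to -v, so exp(-v) ~ 0
    # and they contribute nothing to the softmax.
    masked = logits.clone()
    masked[:, num_seen_classes:-1] = -v
    return masked

# Example: 3 RoIs, capacity for 80 classes + background, 20 classes seen so far.
probs = torch.softmax(mask_unseen_logits(torch.randn(3, 81), 20), dim=1)
\end{verbatim}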
\subsection{Open World Object Detection\xspace Results}\label{sec:main_results}
Table \ref{tab:main_table} shows how ORE\xspace compares against Faster R-CNN on the proposed open world evaluation protocol.
An `Oracle' detector has access to all known and unknown labels at any point, and serves as a reference.
After learning each task, WI and A-OSE metrics are used to quantify how unknown instances are confused with any of the known classes. We see that ORE\xspace has significantly lower WI and A-OSE scores, owing to an explicit modeling of the unknown.
When unknown classes are progressively labelled in Task 2, the performance of the baseline detector on the previously known set of classes (quantified via mAP) deteriorates drastically from $56.16 \%$ to $4.076 \%$. The proposed balanced finetuning restores the previous-class performance to a respectable level ($51.09\%$), but at the cost of increased WI and A-OSE, whereas ORE\xspace achieves both goals: it detects known classes well and reduces the confusion caused by unknowns. A similar trend is seen when Task 3 classes are added. WI and A-OSE scores cannot be measured for Task 4 because no unknown ground-truths remain. We report qualitative results in Fig.~\ref{fig:qual_results} and the supplementary section, along with a failure case analysis.
We conduct extensive sensitivity analysis in Sec.~\ref{sec:discussions_and_analysis} and supplementary section.
\subsection{Incremental Object Detection Results}\label{sec:incr_OD_results}
\begin{table*}
\centering\setlength{\tabcolsep}{3pt}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}llllllllllllllllllllll@{}}
\toprule
{\color[HTML]{009901} \textbf{10 + 10 setting}} & aero & cycle & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & bike & person & plant & sheep & sofa & train & tv & mAP \\ \midrule
All 20 & 68.5 & 77.2 & 74.2 & 55.6 & 59.7 & 76.5 & 83.1 & 81.5 & 52.1 & 79.8 & \cellcolor[HTML]{DAE8FC}55.1 & \cellcolor[HTML]{DAE8FC}80.9 & \cellcolor[HTML]{DAE8FC}80.1 & \cellcolor[HTML]{DAE8FC}76.8 & \cellcolor[HTML]{DAE8FC}80.5 & \cellcolor[HTML]{DAE8FC}47.1 & \cellcolor[HTML]{DAE8FC}73.1 & \cellcolor[HTML]{DAE8FC}61.2 & \cellcolor[HTML]{DAE8FC}76.9 & \cellcolor[HTML]{DAE8FC}70.3 & 70.51\\
First 10 & 79.3 & 79.7 & 70.2 & 56.4 & 62.4 & 79.6 & 88.6 & 76.6 & 50.1 & 68.9 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & 35.59 \\
New 10 & 7.9 & 0.3 & 5.1 & 3.4 & 0 & 0 & 0.2 & 2.3 & 0.1 & 3.3 & \cellcolor[HTML]{DAE8FC}65 & \cellcolor[HTML]{DAE8FC}69.3 & \cellcolor[HTML]{DAE8FC}81.3 & \cellcolor[HTML]{DAE8FC}76.4 & \cellcolor[HTML]{DAE8FC}83.1 & \cellcolor[HTML]{DAE8FC}47.2 & \cellcolor[HTML]{DAE8FC}67.1 & \cellcolor[HTML]{DAE8FC}68.4 & \cellcolor[HTML]{DAE8FC}76.5 & \cellcolor[HTML]{DAE8FC}69.2 & 36.31 \\ \midrule
ILOD \cite{shmelkov2017incremental} & 69.9 & 70.4 & 69.4 & 54.3 & 48 & 68.7 & 78.9 & 68.4 & 45.5 & 58.1 & \cellcolor[HTML]{DAE8FC}59.7 & \cellcolor[HTML]{DAE8FC}72.7 & \cellcolor[HTML]{DAE8FC}73.5 & \cellcolor[HTML]{DAE8FC}73.2 & \cellcolor[HTML]{DAE8FC}66.3 & \cellcolor[HTML]{DAE8FC}29.5 & \cellcolor[HTML]{DAE8FC}63.4 & \cellcolor[HTML]{DAE8FC}61.6 & \cellcolor[HTML]{DAE8FC}69.3 & \cellcolor[HTML]{DAE8FC}62.2 & 63.15 \\
ILOD + Faster R-CNN & 70.5 & 75.6 & 68.9 & 59.1 & 56.6 & 67.6 & 78.6 & 75.4 & 50.3 & 70.8 & \cellcolor[HTML]{DAE8FC}43.2 & \cellcolor[HTML]{DAE8FC}68.1 & \cellcolor[HTML]{DAE8FC}66.2 & \cellcolor[HTML]{DAE8FC}65.1 & \cellcolor[HTML]{DAE8FC}66.5 & \cellcolor[HTML]{DAE8FC}24.3 & \cellcolor[HTML]{DAE8FC}61.3 & \cellcolor[HTML]{DAE8FC}46.6 & \cellcolor[HTML]{DAE8FC}58.1 & \cellcolor[HTML]{DAE8FC}49.9 & 61.14 \\
Faster ILOD \cite{PENG2020109} & 72.8 & 75.7 & 71.2 & 60.5 & 61.7 & 70.4 & 83.3 & 76.6 & 53.1 & 72.3 & \cellcolor[HTML]{DAE8FC}36.7 & \cellcolor[HTML]{DAE8FC}70.9 & \cellcolor[HTML]{DAE8FC}66.8 & \cellcolor[HTML]{DAE8FC}67.6 & \cellcolor[HTML]{DAE8FC}66.1 & \cellcolor[HTML]{DAE8FC}24.7 & \cellcolor[HTML]{DAE8FC}63.1 & \cellcolor[HTML]{DAE8FC}48.1 & \cellcolor[HTML]{DAE8FC}57.1 & \cellcolor[HTML]{DAE8FC}43.6 & 62.16 \\ \midrule
ORE - (CC + EBUI) & 53.3 & 69.2 & 62.4 & 51.8 & 52.9 & 73.6 & 83.7 & 71.7 & 42.8 & 66.8 & \cellcolor[HTML]{DAE8FC}46.8 & \cellcolor[HTML]{DAE8FC}59.9 & \cellcolor[HTML]{DAE8FC}65.5 & \cellcolor[HTML]{DAE8FC}66.1 & \cellcolor[HTML]{DAE8FC}68.6 & \cellcolor[HTML]{DAE8FC}29.8 & \cellcolor[HTML]{DAE8FC}55.1 & \cellcolor[HTML]{DAE8FC}51.6 & \cellcolor[HTML]{DAE8FC}65.3 & \cellcolor[HTML]{DAE8FC}51.5 & 59.42 \\
ORE & 63.5 & 70.9 & 58.9 & 42.9 & 34.1 & 76.2 & 80.7 & 76.3 & 34.1 & 66.1 & \cellcolor[HTML]{DAE8FC}56.1 & \cellcolor[HTML]{DAE8FC}70.4 & \cellcolor[HTML]{DAE8FC}80.2 & \cellcolor[HTML]{DAE8FC}72.3 & \cellcolor[HTML]{DAE8FC}81.8 & \cellcolor[HTML]{DAE8FC}42.7 & \cellcolor[HTML]{DAE8FC}71.6 & \cellcolor[HTML]{DAE8FC}68.1 & \cellcolor[HTML]{DAE8FC}77 & \cellcolor[HTML]{DAE8FC}67.7 & \textbf{64.58} \\ \midrule
\midrule
{\color[HTML]{009901} \textbf{15 + 5 setting}} & aero & cycle & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & bike & person & plant & sheep & sofa & train & tv & mAP \\ \midrule
First 15 & 74.2 & 79.1 & 71.3 & 60.3 & 60 & 80.2 & 88.1 & 80.2 & 48.8 & 74.6 & 61 & 76 & 85.3 & 78.2 & 83.4 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & \cellcolor[HTML]{DAE8FC}0 & 55.03 \\
New 5 & 3.7 & 0.5 & 6.3 & 4.6 & 0.9 & 0 & 8.8 & 3.9 & 0 & 0.4 & 0 & 0 & 16.4 & 0.7 & 0 & \cellcolor[HTML]{DAE8FC}41 & \cellcolor[HTML]{DAE8FC}55.7 & \cellcolor[HTML]{DAE8FC}49.2 & \cellcolor[HTML]{DAE8FC}59.1 & \cellcolor[HTML]{DAE8FC}67.8 & 15.95 \\ \midrule
ILOD \cite{shmelkov2017incremental} & 70.5 & 79.2 & 68.8 & 59.1 & 53.2 & 75.4 & 79.4 & 78.8 & 46.6 & 59.4 & 59 & 75.8 & 71.8 & 78.6 & 69.6 & \cellcolor[HTML]{DAE8FC}33.7 & \cellcolor[HTML]{DAE8FC}61.5 & \cellcolor[HTML]{DAE8FC}63.1 & \cellcolor[HTML]{DAE8FC}71.7 & \cellcolor[HTML]{DAE8FC}62.2 & 65.87 \\
ILOD + Faster R-CNN & 63.5 & 76.3 & 70.7 & 53.1 & 55.8 & 67.1 & 81.5 & 80.3 & 49.6 & 73.8 & 62.1 & 77.1 & 79.7 & 74.2 & 73.9 & \cellcolor[HTML]{DAE8FC}37.1 & \cellcolor[HTML]{DAE8FC}59.1 & \cellcolor[HTML]{DAE8FC}61.7 & \cellcolor[HTML]{DAE8FC}68.6 & \cellcolor[HTML]{DAE8FC}61.3 & 66.35 \\
Faster ILOD \cite{PENG2020109} & 66.5 & 78.1 & 71.8 & 54.6 & 61.4 & 68.4 & 82.6 & 82.7 & 52.1 & 74.3 & 63.1 & 78.6 & 80.5 & 78.4 & 80.4 & \cellcolor[HTML]{DAE8FC}36.7 & \cellcolor[HTML]{DAE8FC}61.7 & \cellcolor[HTML]{DAE8FC}59.3 & \cellcolor[HTML]{DAE8FC}67.9 & \cellcolor[HTML]{DAE8FC}59.1 & 67.94 \\ \midrule
ORE - (CC + EBUI) & 65.1 & 74.6 & 57.9 & 39.5 & 36.7 & 75.1 & 80 & 73.3 & 37.1 & 69.8 & 48.8 & 69 & 77.5 & 72.8 & 76.5 & \cellcolor[HTML]{DAE8FC}34.4 & \cellcolor[HTML]{DAE8FC}62.6 & \cellcolor[HTML]{DAE8FC}56.5 & \cellcolor[HTML]{DAE8FC}80.3 & \cellcolor[HTML]{DAE8FC}65.7 & 62.66 \\
ORE & 75.4 & 81 & 67.1 & 51.9 & 55.7 & 77.2 & 85.6 & 81.7 & 46.1 & 76.2 & 55.4 & 76.7 & 86.2 & 78.5 & 82.1 & \cellcolor[HTML]{DAE8FC}32.8 & \cellcolor[HTML]{DAE8FC}63.6 & \cellcolor[HTML]{DAE8FC}54.7 & \cellcolor[HTML]{DAE8FC}77.7 & \cellcolor[HTML]{DAE8FC}64.6 & \textbf{68.51} \\ \midrule
\midrule
{\color[HTML]{009901} \textbf{19 + 1 setting}} & aero & cycle & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & bike & person & plant & sheep & sofa & train & tv & mAP \\ \midrule
First 19 & 77.8 & 81.7 & 69.3 & 51.6 & 55.3 & 74.5 & 86.3 & 80.2 & 49.3 & 82 & 63.6 & 76.8 & 80.9 & 77.5 & 82.4 & 42.9 & 73.9 & 70.4 & 70.4 & \cellcolor[HTML]{DAE8FC}0 & 67.34 \\
Last 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[HTML]{DAE8FC}64 & 3.2 \\ \midrule
ILOD \cite{shmelkov2017incremental} & 69.4 & 79.3 & 69.5 & 57.4 & 45.4 & 78.4 & 79.1 & 80.5 & 45.7 & 76.3 & 64.8 & 77.2 & 80.8 & 77.5 & 70.1 & 42.3 & 67.5 & 64.4 & 76.7 & \cellcolor[HTML]{DAE8FC}62.7 & 68.25 \\
ILOD + Faster R-CNN & 60.9 & 74.6 & 70.8 & 56 & 51.3 & 70.7 & 81.7 & 81.5 & 49.45 & 78.3 & 58.3 & 79.5 & 79.1 & 74.8 & 75.7 & 42.8 & 74.7 & 61.2 & 67.2 & \cellcolor[HTML]{DAE8FC}65.1 & 67.72 \\
Faster ILOD \cite{PENG2020109} & 64.2 & 74.7 & 73.2 & 55.5 & 53.7 & 70.8 & 82.9 & 82.6 & 51.6 & 79.7 & 58.7 & 78.8 & 81.8 & 75.3 & 77.4 & 43.1 & 73.8 & 61.7 & 69.8 & \cellcolor[HTML]{DAE8FC}61.1 & 68.56 \\ \midrule
ORE - (CC + EBUI) & 60.7 & 78.6 & 61.8 & 45 & 43.2 & 75.1 & 82.5 & 75.5 & 42.4 & 75.1 & 56.7 & 72.9 & 80.8 & 75.4 & 77.7 & 37.8 & 72.3 & 64.5 & 70.7 & \cellcolor[HTML]{DAE8FC}49.9 & 64.93 \\
ORE & 67.3 & 76.8 & 60 & 48.4 & 58.8 & 81.1 & 86.5 & 75.8 & 41.5 & 79.6 & 54.6 & 72.8 & 85.9 & 81.7 & 82.4 & 44.8 & 75.8 & 68.2 & 75.7 & \cellcolor[HTML]{DAE8FC}60.1 & \textbf{68.89} \\ \bottomrule
\end{tabular}%
}\vspace{-0.5em}
\caption{
We compare ORE\xspace against state-of-the-art incremental Object Detectors on three different settings. $10$, $5$ and the last class from the Pascal VOC 2007 \cite{everingham2010pascal} dataset are introduced to a detector trained on $10$, $15$ and $19$ classes respectively (shown in \colorbox{ iODBlue}{blue} background). ORE\xspace is able to perform favourably on all the settings with no methodological change. Kindly refer to Sec.~\ref{sec:incr_OD_results} for more details.}
\label{tab:iOD}
\vspace{-13pt}
\end{table*}
We find an interesting consequence of ORE\xspace's ability to distinctly model unknown objects: it performs favourably on the incremental object detection (iOD) task against the state-of-the-art (Tab.~\ref{tab:iOD}). This is because ORE\xspace reduces the confusion of an unknown object being classified as a known object, which lets the detector incrementally learn the true foreground objects. We use the standard protocol \cite{shmelkov2017incremental,PENG2020109} from the iOD literature to evaluate ORE\xspace, where groups of classes ($10$, $5$ and the last class) from Pascal VOC 2007 \cite{everingham2010pascal} are incrementally learned by a detector trained on the remaining set of classes. Remarkably, ORE\xspace is used as is, without any change to the methodology introduced in Sec.~\ref{sec:ore_methodology}. Ablating contrastive clustering (CC) and energy based unknown identification (EBUI) results in reduced performance compared with standard ORE\xspace.
\section{Discussions and Analysis}\label{sec:discussions_and_analysis}
\customsubsection{Ablating ORE\xspace Components:}\label{sec:ablation} To study the contribution of each of the components in ORE\xspace, we design careful ablation experiments (Tab.~\ref{tab:ablation}). We consider the setting where Task 1 is introduced to the model. The auto-labelling methodology (referred to as ALU), combined with energy based unknown identification (EBUI), performs better (row $5$) than using either of them separately (rows $3$ and $4$). Adding contrastive clustering (CC) to this configuration gives the best performance in handling unknowns (row $7$), measured in terms of WI and A-OSE. There is no severe drop in known-class detection (mAP metric) as a side effect of unknown identification. In row $6$, we see that EBUI is a critical component whose absence increases the WI and A-OSE scores. Thus, each component in ORE\xspace has a critical role to play in unknown identification.
\begin{table}
\centering
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{@{}c|ccc|ccc@{}}
\toprule
Row ID & CC & ALU & EBUI & WI ($\downarrow$) & A-OSE ($\downarrow$) & mAP ($\uparrow$) \\ \midrule
1 & & Oracle & & 0.02004 & 7080 & 57.76 \\ \midrule
2 & $\times$ & $\times$ & $\times$ & 0.06991 & 13396 & {56.16} \\
3 & $\times$ & $\times$ & \checkmark & 0.05932 & 12822 & 56.21 \\
4 & $\times$ & \checkmark & $\times$ & 0.05542 & 12111 & 56.09 \\
5 & $\times$ & \checkmark & \checkmark & 0.04539 & 9011 & 55.95 \\
6 & \checkmark & \checkmark & $\times$ & 0.05614 & 12064 & \textbf{56.36} \\
7 & \checkmark & \checkmark & \checkmark & \textbf{0.02193} & \textbf{8234} & 56.34 \\ \bottomrule
\end{tabular}%
}\vspace{-0.5em}
\caption{We carefully ablate each of the constituent component of ORE\xspace. CC, ALU and EBUI refers to `Contrastive Clustering', `Auto-labelling of Unknowns' and `Energy Based Unknown Identifier' respectively. Kindly refer to Sec.~\ref{sec:ablation} for more details.}
\label{tab:ablation}
\end{table}
\customsubsection{Sensitivity Analysis on Exemplar Memory Size:}
Our balanced finetuning strategy requires storing exemplar images with at least $N_{ex}$ instances per class. We vary $N_{ex}$ while learning Task 2 and report the results in Table \ref{tab:memory_size}. We find that balanced finetuning is very effective in improving the accuracy of the previously known classes, even with as few as $10$ instances per class. However, increasing $N_{ex}$ to large values does not help further and at the same time adversely affects how unknowns are handled (evident from the WI and A-OSE scores). Hence, by validation, we set $N_{ex}$ to $50$ in all our experiments, which is a sweet spot that balances performance on known and unknown classes.
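A minimal sketch of such a balanced exemplar selection step is given below (our own illustration under an assumed, simplified dataset format; the released code may implement this differently).
\begin{verbatim}
import random

def build_exemplar_set(dataset, known_classes, n_ex=50, seed=0):
    # dataset: list of (image_id, set_of_class_ids) pairs (assumed format).
    # Greedily pick images until every known class has at least n_ex instances.
    random.seed(seed)
    counts = {c: 0 for c in known_classes}
    selected = []
    for image_id, classes in random.sample(dataset, len(dataset)):
        if any(c in counts and counts[c] < n_ex for c in classes):
            selected.append(image_id)
            for c in classes:
                if c in counts:
                    counts[c] += 1
        if all(v >= n_ex for v in counts.values()):
            break
    return selected
\end{verbatim}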
\begin{table}
\centering
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{@{}c|cc|ccc@{}}
\toprule
$N_{ex}$ & WI & A-OSE & \multicolumn{3}{c}{mAP ($\uparrow$)} \\ \midrule
& ($\downarrow$) & ($\downarrow$) & \begin{tabular}[c]{@{}c@{}}Previously known\end{tabular} & \begin{tabular}[c]{@{}c@{}}Current known\end{tabular} & Both \\ \cmidrule(l){4-6}
0 & 0.0406 & 9268 & 8.74 & 26.81 & 17.77 \\
10 & 0.0237 & 8211 & 46.78 & 24.32 & 35.55 \\
20 & 0.0202 & 8092 & 48.83 & 25.42 & 37.13 \\
\cellcolor[HTML]{E6FFE1}50 & \cellcolor[HTML]{E6FFE1}0.0154 & \cellcolor[HTML]{E6FFE1}7772 & \cellcolor[HTML]{E6FFE1}52.37 & \cellcolor[HTML]{E6FFE1}25.58 & \cellcolor[HTML]{E6FFE1}38.98 \\
100 & 0.0410 & 11065 & 52.29 & 26.21 & 39.24 \\
200 & 0.0385 & 10474 & 53.41 & 26.35 & 39.88 \\
400 & 0.0396 & 11461 & 53.18 & 26.09 & 39.64 \\ \bottomrule
\end{tabular}%
}\vspace{-0.5em}
\caption{The table shows the sensitivity analysis on exemplar memory size. Increasing $N_{ex}$ to a large value hurts performance on unknowns, while a small set of exemplar images is essential to mitigate forgetting (best row in \colorbox{ green!10}{green}).}
\label{tab:memory_size}
\end{table}
\customsubsection{Comparison with an Open Set\xspace Detector:}
The mAP values of the detector when evaluated on closed set data (trained and tested on Pascal VOC 2007) and on open set\xspace data (where the test set contains an equal number of unknown images from MS-COCO) help to measure how the detector handles unknown instances. Ideally, there should be no performance drop. We compare ORE\xspace against the recent open set\xspace detector proposed by Miller \etal \cite{miller2018dropout}.
We find from Tab.~\ref{tab:comparison_with_OS} that the drop in performance of ORE\xspace is much lower than that of \cite{miller2018dropout}, owing to its effective modelling of the unknown instances.
\begin{table}
\centering\setlength{\tabcolsep}{8pt}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{@{}l|cc@{}}
\toprule
Evaluated on $\rightarrow$ & VOC 2007 & VOC 2007 + COCO (WR1) \\ \midrule
Standard Faster R-CNN & 81.86 & 77.09 \\
Dropout Sampling \cite{miller2018dropout} & 78.15 & 71.07 \\
ORE & \textbf{81.31} & \textbf{78.16} \\ \bottomrule
\end{tabular}%
}\vspace{-0.5em}
\caption{Performance comparison with an Open Set\xspace object detector. ORE\xspace is able to reduce the fall in mAP values considerably.}
\label{tab:comparison_with_OS}\vspace{-0.2em}
\end{table}
\customsubsection{Clustering loss and t-SNE \cite{maaten2008visualizing} visualization:} We visualise the quality of the clusters that are formed while training with the contrastive clustering loss (Eqn.~\ref{eqn:clustering_loss}) for Task 1. We see nicely formed clusters in Fig.~\ref{fig:tsne_loss} (a). Each number in the legend corresponds to one of the $20$ classes introduced in Task 1; label $20$ denotes the unknown class. Importantly, we see that the unknown instances also get clustered, which reinforces the quality of the auto-labelled unknowns used in contrastive clustering. In Fig.~\ref{fig:tsne_loss} (b), we plot the contrastive clustering loss against training iterations, where we see a gradual decrease, indicative of good convergence.
\begin{figure}
\includegraphics[width=\columnwidth]{images/qualitative_result.pdf}
\vspace{-18pt}
\caption{Predictions from ORE\xspace after being trained on Task 1. `\cls{elephant}', `\cls{apple}', `\cls{banana}', `\cls{zebra}' and `\cls{giraffe}' have not been introduced to the model, and hence are successfully classified as `\cls{unknown}'. The approach misclassifies one of the `\cls{giraffe}' instances as a `\cls{horse}', showing a limitation of ORE\xspace. }
\label{fig:qual_results}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{images/tsne_loss_2.pdf}
\vspace{-25pt}
\caption{(a) Distinct clusters in the latent space. (b) Our contrastive loss which ensures such a clustering steadily converges.}
\label{fig:tsne_loss}\vspace{-0.3em}
\end{figure}
\section{Conclusion}\label{sec:conclusion}\vspace{-0.5em}
The vibrant object detection community has pushed the performance benchmarks on standard datasets by a large margin.
The closed-set nature of these datasets and evaluation protocols hampers further progress. We introduce Open World Object Detection\xspace, where the object detector is able to label an unknown object as unknown and gradually learn the unknown as the model gets exposed to new labels. Our key novelties include an energy-based classifier for unknown detection and a contrastive clustering approach for open world learning. We hope that our work will kindle further research along this important and open direction.
\vspace{-10pt}
\section*{Acknowledgements}\label{sec:Acknowledgements}\vspace{-0.5em}
\noindent We thank TCS for supporting KJJ through its PhD fellowship; MBZUAI for a start-up grant; VR starting grant (2016-05543) and DST, Govt of India, for partly supporting this work through IMPRINT program (IMP/2019/000250). We thank our anonymous reviewers for their valuable feedback.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Wilker \cite{Wilker.AMM.96.1.1989} proposed two open problems, the first of
which states that if $x\in \left( 0,\pi /2\right) $ then
\begin{equation}
\left( \frac{\sin x}{x}\right) ^{2}+\frac{\tan x}{x}>2, \label{W}
\end{equation}
which was proved by Sumner et al. in \cite{Sumner.AMM.98.1991}.
The Wilker inequality (\ref{W}) and the second one have attracted great interest among mathematicians and have generated a number of Wilker-type inequalities through various generalizations, improvements, and different methods and ideas (see \cite{Baricz.JMI.2.3.2008}, \cite{Chen.in press}, \cite{Chen.JIA.accepted}, \cite{Mortitc.MIA.14.3.2011}, \cite{Neuman.MIA.13.4.2010}, \cite{Neuman.MIA.15.2.2012}, \cite{Wu.39(2009)}, \cite{Wu.AML.2011}, \cite{Wu.ITSF.18.2007}, \cite{Wu.ITSF.19.2008}, \cite{Zhang.MIA.11.2008}, \cite{Zhu.MIA.10.2007}, \cite{Zhu.AAA.2009}, \cite{Zhu.CMA.58.2009}, \cite{Zhu.JIA.2010.130821} and related references therein).
In \cite{Wu.ITSF.18.2007}, Wu and Srivastava established another Wilker type inequality
\begin{equation}
\left( \frac{x}{\sin x}\right) ^{2}+\frac{x}{\tan x}>2\text{ \ for }x\in \left( 0,\pi /2\right) ,  \label{Wu1}
\end{equation}
and proved a weighted and exponential generalization of Wilker inequality.
\noindent \textbf{Theorem Wu (\cite[Theorem 1]{Wu.ITSF.18.2007}).} \emph{Let }$\lambda >0$, $\mu >0$\emph{\ and }$p\leq 2q\mu /\lambda $.\emph{\ If }$q>0$\emph{\ or }$q\leq \min \left( -1,-\lambda /\mu \right) $\emph{, then}
\begin{equation}
\frac{\lambda }{\lambda +\mu }\left( \frac{\sin x}{x}\right) ^{p}+\frac{\mu }{\lambda +\mu }\left( \frac{\tan x}{x}\right) ^{q}>1  \label{Wu2}
\end{equation}
\emph{holds for }$x\in \left( 0,\pi /2\right) $\emph{.}
As an application of the inequality (\ref{Wu2}), an open problem posed by S\'{a}ndor and Bencze in \cite{Sandor.RGMIA.8.3.2005} was solved and improved. Recently, the inequality (\ref{Wu2}) and all results in \cite{Wu.ITSF.18.2007} were extended to Bessel functions in \cite{Baricz.JMI.2.3.2008}. A hyperbolic version of Theorem Wu has been presented very recently in \cite{Wu.AML.2011}.
In 2009, Zhu \cite{Zhu.AAA.2009} gave another exponential
generalization of Wilker inequality (\ref{W}) as follows.
\noindent \textbf{Theorem Zh1 (\cite[Theorem 1.1, 1.2]{Zhu.AAA.2009}).}
\emph{Let }$0<x<\pi /2$\emph{. Then the inequalities}
\begin{equation}
\left( \frac{\sin x}{x}\right) ^{2p}+\left( \frac{\tan x}{x}\right)
^{p}>\left( \frac{x}{\sin x}\right) ^{2p}+\left( \frac{x}{\tan x}\right)
^{p}>2 \label{Zhwt}
\end{equation}
\emph{hold if }$p\geq 1$,\emph{\ while the first one in (\ref{Zhwt}) holds
if and only if }$p>0.$
\noindent \textbf{Theorem Zh2 (\cite[Theorem 1.3, 1.4]{Zhu.AAA.2009}).}
\emph{Let }$x>0$\emph{. Then the inequalities}
\begin{equation}
\left( \frac{\sinh x}{x}\right) ^{2p}+\left( \frac{\tanh x}{x}\right)
^{p}>\left( \frac{x}{\sinh x}\right) ^{2p}+\left( \frac{x}{\tanh x}\right)
^{p}>2 \label{Zhwh}
\end{equation}
\emph{hold if }$p\geq 1$\emph{, while the first one in (\ref{Zhwh}) holds if
and only if }$p>0.$
At the end of the same paper, Zhu posed two open problems: find the respective largest ranges of $p$ such that the inequalities (\ref{Zhwt}) and (\ref{Zhwh}) hold. They have been solved by Mateji\v{c}ka in \cite{Matejicka.IJOPCM.4.1.2011}.
Another inequality associated with the Wilker inequality is
\begin{equation}
2\frac{\sin x}{x}+\frac{\tan x}{x}>3  \label{H}
\end{equation}
for $x\in \left( 0,\pi /2\right) $, which is known as the Huygens inequality \cite{Huygens.1888-1940}. The following refinement of the Huygens inequality is due to Neuman and S\'{a}ndor \cite{Neuman.MIA.13.4.2010}:
\begin{equation}
2\frac{\sin x}{x}+\frac{\tan x}{x}>2\frac{x}{\sin x}+\frac{x}{\tan x}>3,
\label{Nueman}
\end{equation}
where $x\in \left( 0,\pi /2\right) $. Very recently, generalizations of (\ref{Nueman}), similar to (\ref{Zhwt}), have been derived by Neuman in \cite{Neuman.MIA.13.4.2010}. In \cite{Zhu.CMA.58.2009.a}, Zhu proved that for $x\in \left( 0,\pi /2\right) $
\begin{eqnarray}
\left( 1-\xi _{1}\right) \frac{\sin x}{x}+\xi _{1}\frac{\tan x}{x}
&>&1>\left( 1-\eta _{1}\right) \frac{\sin x}{x}+\eta _{1}\frac{\tan x}{x},
\label{Zhth1} \\
\left( 1-\xi _{2}\right) \frac{x}{\sin x}+\xi _{2}\frac{x}{\tan x}
&>&1>\left( 1-\eta _{2}\right) \frac{x}{\sin x}+\eta _{2}\frac{x}{\tan x}
\label{Zhth2}
\end{eqnarray}
with the best constants $\xi _{1}=1/3$, $\eta _{1}=0$, $\xi _{2}=1/3$, $\eta
_{2}=1-2/\pi $. Later, in \cite{Zhu.CMA.58.2009}, he generalized the inequalities (\ref{Zhth1}) and (\ref{Zhth2}) to exponential form, which is stated as follows.
\noindent \textbf{Theorem Zh3 (\cite[Theorems 1.1, 1.2]{Zhu.CMA.58.2009})}.
\emph{Let }$0<x<\pi /2$\emph{. Then we have}
\emph{(i) when }$p\geq 1$\emph{, the double inequality}
\begin{equation}
\left( 1-\lambda \right) \left( \frac{x}{\sin x}\right) ^{p}+\lambda \left( \frac{x}{\tan x}\right) ^{p}<1<\left( 1-\eta \right) \left( \frac{x}{\sin x}\right) ^{p}+\eta \left( \frac{x}{\tan x}\right) ^{p}  \label{Zhth3}
\end{equation}
\emph{holds if and only if }$\eta \leq 1/3$\emph{\ and }$\lambda \geq
1-\left( 2/\pi \right) ^{p}$\emph{.}
\emph{(ii) when }$0\leq p\leq 4/5$\emph{, the double inequality (\ref{Zhth3}) holds if and only if }$\lambda \geq 1/3$\emph{\ and }$\eta \leq 1-\left( 2/\pi \right) ^{p}$\emph{.}
\emph{(iii) when \thinspace }$p<0$\emph{, the second one in (\ref{Zhth3})
holds if and only if }$\eta \geq 1/3$\emph{.}
The hyperbolic version of the inequalities (\ref{Nueman}) was given in \cite{Neuman.MIA.13.4.2010} by Neuman and S\'{a}ndor. In the same year, Zhu showed the following.
\noindent \textbf{Theorem Zh4 (\cite[Theorem 4.1]{Zhu.JIA.2010.130821})}.
\emph{Let }$x>0$\emph{. Then }
\emph{(i) when }$p\geq 4/5$\emph{, the double inequality}
\begin{equation}
\left( 1-\lambda \right) \left( \frac{x}{\sinh x}\right) ^{p}+\lambda \left( \frac{x}{\tanh x}\right) ^{p}<1<\left( 1-\eta \right) \left( \frac{x}{\sinh x}\right) ^{p}+\eta \left( \frac{x}{\tanh x}\right) ^{p}  \label{Zhhh1}
\end{equation}
\emph{holds if and only if }$\eta \geq 1/3$\emph{\ and }$\lambda \leq 0$\emph{;}
\emph{(ii) when }$p<0$\emph{, the inequality}
\begin{equation}
\left( 1-\eta \right) \left( \frac{x}{\sinh x}\right) ^{p}+\eta \left( \frac{x}{\tanh x}\right) ^{p}>1  \label{Zhhh2}
\end{equation}
\emph{holds if and only if }$\eta \leq 1/3$\emph{.}
The aim of this paper is to find the best $p$ such that the inequalities
\begin{eqnarray}
\frac{2}{k+2}\left( \frac{\sin x}{x}\right) ^{kp}+\frac{k}{k+2}\left( \frac{\tan x}{x}\right) ^{p} &>&1\text{ for\ }x\in \left( 0,\pi /2\right) ,  \label{Mt} \\
\frac{2}{k+2}\left( \frac{\sinh x}{x}\right) ^{kp}+\frac{k}{k+2}\left( \frac{\tanh x}{x}\right) ^{p} &>&1\text{ for }x\in \left( 0,\infty \right)   \label{Mh}
\end{eqnarray}
or their reverses hold for certain fixed $k$ with $k\left( k+2\right) \neq 0$. In Section 2, some useful lemmas are proved. Necessary and sufficient conditions for (\ref{Mt}) or its reverse and for (\ref{Mh}) or its reverse to hold are presented in Section 3. Some applications of our main results are given in Section 4.
\section{Lemmas}
The following lemmas are very important in the sequel.
\begin{lemma}
\label{Lemma AB/C}Let $A$, $B$ and $C$ be defined on $\left( 0,\pi /2\right)
$ by
\begin{eqnarray}
A &=&A\left( x\right) =\left( \cos x\right) \left( \sin x-x\cos x\right)
^{2}\left( x-\cos x\sin x\right) , \label{A} \\
B &=&B\left( x\right) =\left( x-\cos x\sin x\right) ^{2}\left( \sin x-x\cos
x\right) , \label{B} \\
C &=&C\left( x\right) =x\left( \sin ^{2}x\right) \left( -2x^{2}\cos x+x\sin
x+\cos x\sin ^{2}x\right) . \label{C}
\end{eqnarray}
Then for fixed $k\geq 1$ the function $x\mapsto C\left( x\right) /\left( kA\left( x\right) +B\left( x\right) \right) $ is increasing on $\left( 0,\pi /2\right) $. Moreover, we have
\begin{equation}
\tfrac{12}{5\left( k+2\right) }<\dfrac{C\left( x\right) }{kA\left( x\right) +B\left( x\right) }<1.  \label{RAB/C}
\end{equation}
\end{lemma}
\begin{proof}
Evidently, $A$, $B>0$ for $x\in \left( 0,\pi /2\right) $ due to $\left( \sin x-x\cos x\right) >0$ and $\left( x-\cos x\sin x\right) =\left( 2x-\sin 2x\right) /2>0$, while $C>0$ because
\begin{equation*}
\left( -2x^{2}\cos x+x\sin x+\cos x\sin ^{2}x\right) =x^{2}\left( \cos x\right) \left( \left( \frac{\sin x}{x}\right) ^{2}+\frac{\tan x}{x}-2\right) >0
\end{equation*}
by Wilker inequality (\ref{W}).
Denote $C/\left( kA+B\right) $ by $D$ and factoring yields
\begin{eqnarray*}
D\left( x\right) &=&\tfrac{x\left( \sin ^{2}x\right) \left( -2x^{2}\cos
x+x\sin x+\cos x\sin ^{2}x\right) }{\left( \sin x-x\cos x\right) \left(
x-\cos x\sin x\right) \left( \left( 1-k\cos ^{2}x\right) x+\left( k-1\right)
\cos x\sin x\right) } \\
&=&\tfrac{-2x^{2}\cos x+x\sin x+\cos x\sin ^{2}x}{\left( \sin x-x\cos
x\right) \left( x-\cos x\sin x\right) }\times \tfrac{x\sin ^{2}x}{k\left(
\sin x-x\cos x\right) \cos x+\left( x-\cos x\sin x\right) } \\
&:&=D_{1}\left( x\right) \times D_{2}\left( x\right) .
\end{eqnarray*}
It is known from \cite[Proof of Lemma 2.9]{Zhu.AAA.2009} that the function $D_{1}$ (which equals the function $G$ there) is positive and increasing on $\left( 0,\pi /2\right) $, and it remains to prove that the function $D_{2}$ is also positive and increasing.
Clearly, $D_{2}\left( x\right) >0$; we only need to show that $D_{2}^{\prime }\left( x\right) >0$ for $x\in \left( 0,\pi /2\right) $. Indeed, differentiation and simplification yield
\begin{eqnarray*}
D_{2}^{\prime }\left( x\right) &=&\left( k-1\right) \left( \sin x\right)
\frac{\left( -2x^{2}\cos x+\cos x\sin ^{2}x+x\sin x\right) }{\left( k\left(
\sin x-x\cos x\right) \cos x+\left( x-\cos x\sin x\right) \right) ^{2}} \\
&=&\frac{\left( k-1\right) x^{2}\sin x\cos x}{\left( k\left( \sin x-x\cos
x\right) \cos x+\left( x-\cos x\sin x\right) \right) ^{2}}\left( \left(
\frac{\sin x}{x}\right) ^{2}+\frac{\tan x}{x}-2\right) ,
\end{eqnarray*}
which is clearly positive due to Wilker inequality (\ref{W}). Hence, $C/\left( kA+B\right) $ is increasing on $\left( 0,\pi /2\right) $, and it is deduced that
\begin{equation*}
\tfrac{12}{5\left( k+2\right) }=\lim_{x\rightarrow 0}\dfrac{C\left( x\right) }{kA\left( x\right) +B\left( x\right) }<D\left( x\right) <\lim_{x\rightarrow \pi /2^{-}}\dfrac{C\left( x\right) }{kA\left( x\right) +B\left( x\right) }=1.
\end{equation*}
This completes the proof.
\end{proof}
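\begin{remark}
As a quick check of the limiting value used above, the leading terms of the Maclaurin expansions give
\begin{equation*}
\sin x-x\cos x=\tfrac{1}{3}x^{3}+O\left( x^{5}\right) ,\quad x-\cos x\sin x=\tfrac{2}{3}x^{3}+O\left( x^{5}\right) ,\quad -2x^{2}\cos x+x\sin x+\cos x\sin ^{2}x=\tfrac{8}{45}x^{6}+O\left( x^{8}\right) ,
\end{equation*}
so that $A=\tfrac{2}{27}x^{9}+O\left( x^{11}\right) $, $B=\tfrac{4}{27}x^{9}+O\left( x^{11}\right) $, $C=\tfrac{8}{45}x^{9}+O\left( x^{11}\right) $, and hence
\begin{equation*}
\lim_{x\rightarrow 0}\frac{C\left( x\right) }{kA\left( x\right) +B\left( x\right) }=\frac{8/45}{\left( 2k+4\right) /27}=\frac{12}{5\left( k+2\right) }.
\end{equation*}
\end{remark}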
\begin{lemma}
\label{Lemma EF/G}Let $E$, $F$ and $G$ be defined on $\left( 0,\infty \right) $ by
\begin{eqnarray}
E &=&E\left( x\right) =\left( \cosh x\right) \left( \sinh x-x\cosh x\right) ^{2}\left( x-\cosh x\sinh x\right) ,  \label{E} \\
F &=&F\left( x\right) =\left( \sinh x-x\cosh x\right) \left( x-\cosh x\sinh x\right) ^{2},  \label{F} \\
G &=&G\left( x\right) =x\left( \sinh ^{2}x\right) \left( 2x^{2}\cosh x-x\sinh x-\cosh x\sinh ^{2}x\right) .  \label{G}
\end{eqnarray}
Then for fixed $k\geq 1$ (or $k<-2$) the function $x\mapsto G\left( x\right) /\left( kE\left( x\right) +F\left( x\right) \right) $ is decreasing (increasing) on $\left( 0,\infty \right) $. Moreover, we have
\begin{equation}
\min \left( 0,\dfrac{12}{5\left( k+2\right) }\right) <\dfrac{G(x)}{kE\left( x\right) +F\left( x\right) }<\max \left( 0,\dfrac{12}{5\left( k+2\right) }\right) .  \label{REF/G}
\end{equation}
\end{lemma}
\begin{proof}
It is easy to verify that $E$, $F<0$ for $x\in \left( 0,\infty \right) $ due to
\begin{eqnarray*}
\left( x-\cosh x\sinh x\right) &=&\left( 2x-\sinh 2x\right) /2<0, \\
\left( \sinh x-x\cosh x\right) &=&x\left( \frac{\sinh x}{x}-\cosh x\right) <0,
\end{eqnarray*}
while $G<0$ because
\begin{equation*}
\left( 2x^{2}\cosh x-x\sinh x-\cosh x\sinh ^{2}x\right) =-x^{2}\left( \cosh x\right) \left( \left( \frac{\sinh x}{x}\right) ^{2}+\frac{\tanh x}{x}-2\right) <0
\end{equation*}
by Wilker inequality (\ref{Zhwh}).
Denote $G/\left( kE+F\right) $ by $H$ and factoring gives
\begin{eqnarray*}
H\left( x\right) &=&\tfrac{x\left( \sinh ^{2}x\right) \left( 2x^{2}\cosh
x-x\sinh x-\cosh x\sinh ^{2}x\right) }{\left( \cosh x\right) \left( \sinh
x-x\cosh x\right) ^{2}\left( x-\sinh x\cosh x\right) k+\left( \sinh x-x\cosh
x\right) \left( x-\sinh x\cosh x\right) ^{2}} \\
&=&\tfrac{-2x^{2}\cosh x+x\sinh x+\cosh x\sinh ^{2}x}{\left( x\cosh x-\sinh
x\right) \left( \sinh x\cosh x-x\right) }\times \tfrac{x\left( \sinh
^{2}x\right) }{\left( k\left( x\cosh x-\sinh x\right) \cosh x+\sinh x\cosh
x-x\right) } \\
&:&=H_{1}\left( x\right) \times H_{2}\left( x\right) .
\end{eqnarray*}
Clearly, $H_{1}\left( x\right) >0$, and it has been shown in \cite[Proof of Lemma 2.2]{Matejicka.IJOPCM.4.1.2011} that $H_{1}$ (that is, the function $s$ in \cite[Proof of Lemma 2.2]{Matejicka.IJOPCM.4.1.2011}) is decreasing on $\left( 0,\infty \right) $. In order to prove the monotonicity of $H$, we also need to deal with the sign and monotonicity of $H_{2}$.
(i) Clearly, $H_{2}\left( x\right) >0$ for $k\geq 1$. We claim that $H_{2}$ is also decreasing on $\left( 0,\infty \right) $. Indeed, differentiation and simplification yield
\begin{eqnarray*}
H_{2}^{\prime }\left( x\right) &=&-\left( k-1\right) \sinh x\frac{\left(
-2x^{2}\cosh x+\cosh x\sinh ^{2}x+x\sinh x\right) }{\left( x\cosh x-\sinh
x\right) ^{2}\left( \cosh x\sinh x-x\right) ^{2}} \\
&=&-\frac{\left( k-1\right) x^{2}\sinh x\cosh x}{\left( x\cosh x-\sinh x\right) ^{2}\left( \cosh x\sinh x-x\right) ^{2}}\left( \left( \frac{\sinh x}{x}\right) ^{2}+\frac{\tanh x}{x}-2\right) <0.
\end{eqnarray*}
Consequently, $H=H_{1}\times H_{2}$ is positive and decreasing on $\left( 0,\infty \right) $, and so
\begin{equation*}
0=\lim_{x\rightarrow \infty }\dfrac{G(x)}{kE\left( x\right) +F\left( x\right) }<\dfrac{G(x)}{kE\left( x\right) +F\left( x\right) }<\lim_{x\rightarrow 0}\dfrac{G(x)}{kE\left( x\right) +F\left( x\right) }=\tfrac{12}{5\left( k+2\right) }.
\end{equation*}
(ii) For $k<-2$, from the expression for $H_{2}^{\prime }$ above we see that $H_{2}^{\prime }\left( x\right) >0$, so that $-H_{2}$ is decreasing on $\left( 0,\infty \right) $, and so
\begin{equation*}
0<-\frac{1}{k}=\lim_{x\rightarrow \infty }\left( -H_{2}\left( x\right) \right) <-H_{2}\left( x\right) <\lim_{x\rightarrow 0}\left( -H_{2}\left( x\right) \right) =-\frac{3}{k+2}.
\end{equation*}
It is implied that $-H_{2}$ is positive and decreasing on $\left( 0,\infty \right) $, and so is the function $-H=H_{1}\times \left( -H_{2}\right) $. That is, $H$ is negative and increasing on $\left( 0,\infty \right) $, and (\ref{REF/G}) naturally holds.
This completes the proof.
\end{proof}
\begin{remark}
It should be noted that $kE\left( x\right) +F\left( x\right) <0$ for $k\geq
1 $ and $kE\left( x\right) +F\left( x\right) >0$ for $k<-2$. In fact, it
suffices to notice (\ref{REF/G}) and $G(x)<0$ for $x\in \left( 0,\infty
\right) $.
\end{remark}
\begin{lemma}
\label{Lemma 3}For $k\geq 1$, we have
\begin{equation*}
1>\frac{\ln \left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }>\frac{12}{5\left( k+2\right) }.
\end{equation*}
\end{lemma}
\begin{proof}
It suffices to show that
\begin{eqnarray*}
\delta _{1}\left( k\right) &=&\frac{\ln \left( k+2\right) -\ln 2}{\ln \pi
-\ln 2}-k<0, \\
\delta _{2}\left( k\right) &=&\frac{\ln \left( k+2\right) -\ln 2}{\ln \pi
-\ln 2}-\frac{12k}{5\left( k+2\right) }>0
\end{eqnarray*}
where $k\geq 1$.
Differentiation gives
\begin{eqnarray*}
\delta _{1}^{\prime }\left( k\right) &=&\frac{1}{\left( \ln \pi -\ln
2\right) \left( k+2\right) }-1<0, \\
\delta _{2}^{\prime }\left( k\right) &=&\frac{1}{5}\frac{5k+24\ln 2-24\ln
\pi +10}{\left( k+2\right) ^{2}\left( \ln \pi -\ln 2\right) }>0
\end{eqnarray*}
for $k\geq 1$. It follows that $\delta _{1}\left( k\right) \leq \delta _{1}\left( 1\right) =\left( \ln 3-\ln \pi \right) /\left( \ln \pi -\ln 2\right) <0$ and $\delta _{2}\left( k\right) \geq \delta _{2}\left( 1\right) =\left( \ln 3-\ln 2\right) /\left( \ln \pi -\ln 2\right) -4/5>0$, which proves the lemma.
\end{proof}
\section{Main results}
\begin{theorem}
\label{Main 1}For fixed $k\geq 1$, the inequality (\ref{Mt}) holds for $x\in
\left( 0,\pi /2\right) $ if and only if $p>0$ or $p\leq -\frac{\ln \left(
k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }$.
\end{theorem}
\begin{proof}
The inequality (\ref{Mt}) is equivalent to
\begin{equation}
f\left( x\right) =\frac{2}{k+2}\left( \frac{\sin x}{x}\right) ^{kp}+\frac{k}{k+2}\left( \frac{\tan x}{x}\right) ^{p}-1>0  \label{f}
\end{equation}
for $x\in \left( 0,\pi /2\right) $. Differentiation yields
\begin{eqnarray}
f^{\prime }\left( x\right) &=&-\tfrac{2kp}{k+2}\frac{\sin x-x\cos x}{x^{2}}\left( \frac{\sin x}{x}\right) ^{kp-1}+\tfrac{kp}{k+2}\frac{x-\sin x\cos x}{x^{2}\cos ^{2}x}\left( \frac{\tan x}{x}\right) ^{p-1}  \notag \\
&=&\tfrac{kp}{k+2}\frac{x-\sin x\cos x}{x^{2}\cos ^{2}x}\left( \frac{\tan x}{x}\right) ^{p-1}g(x),  \label{df}
\end{eqnarray}
where
\begin{equation}
g(x)=1-4\frac{\sin x-x\cos x}{2x-\sin 2x}\left( \frac{\sin x}{x}\right)
^{\left( k-1\right) p}\left( \cos x\right) ^{p+1}. \label{g}
\end{equation}
Simple computation leads to $g(0^{+})=0$.
Differentiating again and simplifying gives
\begin{equation}
g^{\prime }\left( x\right) =8\frac{\left( \frac{\sin x}{x}\right) ^{\left( k-1\right) p}\left( \cos x\right) ^{p}}{x\left( \sin x\right) \left( 2x-\sin 2x\right) ^{2}}h\left( x\right) ,  \label{dg}
\end{equation}
where
\begin{eqnarray}
h\left( x\right) &=&\left( \cos x\right) \left( \sin x-x\cos x\right)
^{2}\left( x-\cos x\sin x\right) kp \notag \\
&&+\left( x-\cos x\sin x\right) ^{2}\left( \sin x-x\cos x\right) p \notag \\
&&+x\left( \sin ^{2}x\right) \left( -2x^{2}\cos x+x\sin x+\cos x\sin
^{2}x\right) \notag \\
&=&kpA\left( x\right) +pB\left( x\right) +C\left( x\right) \label{h} \\
&=&\left( kA+B\right) \left( p+\frac{C}{kA+B}\right) , \notag
\end{eqnarray}
where $A\left( x\right) $, $B\left( x\right) $, $C\left( x\right) $ are defined by (\ref{A}), (\ref{B}), (\ref{C}), respectively.
By (\ref{df}), (\ref{dg}) we easily get
\begin{eqnarray}
\limfunc{sgn}f^{\prime }\left( x\right) &=&\limfunc{sgn}p\limfunc{sgn}g(x),
\label{sgn(df)} \\
\limfunc{sgn}g^{\prime }\left( x\right) &=&\limfunc{sgn}h\left( x\right) .
\label{sgn(dg)}
\end{eqnarray}
\textbf{Necessity}. We first present two limit relations:
\begin{eqnarray}
\lim_{x\rightarrow 0^{+}}x^{-4}f\left( x\right) &=&\frac{kp}{36}\left( p+\frac{12}{5\left( k+2\right) }\right) ,  \label{Limit1} \\
\lim_{x\rightarrow \left( \pi /2\right) ^{-}}f\left( x\right) &=&\left\{
\begin{array}{cc}
\infty & \text{if }p>0, \\
\frac{2}{k+2}\left( \frac{2}{\pi }\right) ^{kp}-1 & \text{if }p<0.
\end{array}
\right.  \label{Limit2}
\end{eqnarray}
In fact, using power series expansions yields
\begin{equation*}
f\left( x\right) =\frac{kp}{36}\frac{kp+2p+12/5}{k+2}x^{4}+O\left( x^{6}\right) ,
\end{equation*}
which implies the first limit relation (\ref{Limit1}). From the fact $\lim_{x\rightarrow \pi /2^{-}}\tan x=\infty $ the second one (\ref{Limit2}) easily follows.
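The expansion itself follows from the elementary Maclaurin series: keeping terms through $x^{4}$,
\begin{eqnarray*}
\left( \frac{\sin x}{x}\right) ^{kp} &=&1-\frac{kp}{6}x^{2}+\left( \frac{k^{2}p^{2}}{72}-\frac{kp}{180}\right) x^{4}+O\left( x^{6}\right) , \\
\left( \frac{\tan x}{x}\right) ^{p} &=&1+\frac{p}{3}x^{2}+\left( \frac{p^{2}}{18}+\frac{7p}{90}\right) x^{4}+O\left( x^{6}\right) ,
\end{eqnarray*}
so the $x^{2}$ terms cancel in $f\left( x\right) $ and the coefficient of $x^{4}$ equals $\frac{1}{k+2}\left( \frac{k^{2}p^{2}}{36}+\frac{kp^{2}}{18}+\frac{kp}{15}\right) =\frac{kp}{36}\frac{kp+2p+12/5}{k+2}$.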
Now we can derive the necessary condition for (\ref{Mt}) to hold for $x\in \left( 0,\pi /2\right) $ from the simultaneous inequalities $\lim_{x\rightarrow 0^{+}}x^{-4}f\left( x\right) \geq 0$ and $\lim_{x\rightarrow \left( \pi /2\right) ^{-}}f\left( x\right) \geq 0$. Solving for $p$ yields $p>0$ or
\begin{equation*}
p\leq \min \left( -\frac{12}{5\left( k+2\right) },-\frac{\ln \left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }\right) =-\frac{\ln \left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) },
\end{equation*}
where the equality is due to Lemma \ref{Lemma 3}.
\textbf{Sufficiency}. We prove the condition $p>0$ or $p\leq -\frac{\ln
\left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }$ is sufficient. We
distinguish three cases.
Case 1: $p>0$. Clearly, $h\left( x\right) >0$, then $g^{\prime }\left(
x\right) >0$, and then $g\left( x\right) >g\left( 0^{+}\right) =0$, which
together with $\func{sgn}p=1$ yields $f^{\prime }\left( x\right) >0$. Then
$f\left( x\right) >f\left( 0^{+}\right) =0$.
Case 2: $p\leq -1$. By Lemma \ref{Lemma AB/C} it is easy to get
\begin{equation*}
p+\frac{C}{kA+B}<p+1\leq 0,
\end{equation*}
which reveals that $h\left( x\right) <0$, then $g^{\prime }\left( x\right)
<0 $, and then $g\left( x\right) <g\left( 0^{+}\right) =0$, which in
combination with $\func{sgn}p=-1$ implies $f^{\prime }\left( x\right) >0$.
Then $f\left( x\right) >f\left( 0^{+}\right) =0$.
Case 3: $-1<p\leq -\frac{\ln \left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }$. Lemma \ref{Lemma AB/C} reveals that $\frac{C}{kA+B}$ is increasing on $\left( 0,\pi /2\right) $, and so is the function $x\mapsto p+\frac{C}{kA+B}:=\lambda \left( x\right) $. Since
\begin{equation*}
\lambda \left( 0^{+}\right) =p+\frac{12}{5\left( k+2\right) }<0\text{, \ }\lambda \left( \frac{\pi }{2}^{-}\right) =p+1>0,
\end{equation*}
there is a unique $x_{1}\in \left( 0,\pi /2\right) $ such that $\lambda
\left( x\right) <0$ for $x\in \left( 0,x_{1}\right) $ and $\lambda \left(
x\right) >0$ for $x\in \left( x_{1},\pi /2\right) $, and so is $g^{\prime
}\left( x\right) $. Therefore, $g\left( x\right) <g\left( 0^{+}\right) =0$
for $x\in \left( 0,x_{1}\right) $ but $g\left( \pi /2^{-}\right) =1$, which
implies that there is a unique $x_{0}\in \left( x_{1},\pi /2\right) $ such
that $g\left( x\right) <0$ for $x\in \left( 0,x_{0}\right) $ and $g\left(
x\right) >0$ for $x\in \left( x_{0},\pi /2\right) $. Due to $\func{sgn}p=-1$
it is deduced that $f^{\prime }\left( x\right) >0$ for $x\in \left(
0,x_{0}\right) $ and $f^{\prime }\left( x\right) <0$ for $x\in \left(
x_{0},\pi /2\right) $, which reveals that $f$ is increasing on $\left(
0,x_{0}\right) $ and decreasing on $\left( x_{0},\pi /2\right) $. It follows
that
\begin{eqnarray*}
0 &=&f\left( 0^{+}\right) <f\left( x\right) <f\left( x_{0}\right) \text{ for }x\in \left( 0,x_{0}\right) , \\
f\left( x_{0}\right) &>&f\left( x\right) >f\left( \pi /2^{-}\right) =\tfrac{2}{k+2}\left( \tfrac{2}{\pi }\right) ^{kp}-1\geq 0\text{ for }x\in \left( x_{0},\pi /2\right) ,
\end{eqnarray*}
that is, $f\left( x\right) >0$ for $x\in \left( 0,\pi /2\right) $.
This completes the proof.
\end{proof}
\begin{theorem}
\label{Main 2}For fixed $k\geq 1$, the reverse of (\ref{Mt}), that is,
\begin{equation}
\frac{2}{k+2}\left( \frac{\sin x}{x}\right) ^{kp}+\frac{k}{k+2}\left( \frac{\tan x}{x}\right) ^{p}<1  \label{Mtr}
\end{equation}
holds for $x\in \left( 0,\pi /2\right) $ if and only if $-\frac{12}{5\left(
k+2\right) }\leq p<0$.
\end{theorem}
\begin{proof}
\textbf{Necessity}. If inequality (\ref{Mtr}) holds for $x\in \left( 0,\pi
/2\right) $, then we have
\begin{equation*}
\lim_{x\rightarrow 0^{+}}\frac{f\left( x\right) }{x^{4}}=\frac{kp}{36}\left(
p+\frac{12}{5\left( k+2\right) }\right) \leq 0.
\end{equation*}
Solving the inequality for $p$ yields $-\frac{12}{5\left( k+2\right) }\leq
p<0$.
\textbf{Sufficiency}. We prove the condition $-\frac{12}{5\left( k+2\right) }\leq p<0$ is sufficient. It suffices to show that $f\left( x\right) <0$ for $x\in \left( 0,\pi /2\right) $. By Lemma \ref{Lemma AB/C} it is easy to get
\begin{equation*}
p+\frac{C}{kA+B}>p+\frac{12}{5\left( k+2\right) }\geq 0,
\end{equation*}
which reveals that $h\left( x\right) >0$, then $g^{\prime }\left( x\right)
>0 $, and then $g\left( x\right) >g\left( 0^{+}\right) =0$. This, in combination with $\func{sgn}p=-1$, implies $f^{\prime }\left( x\right) <0$.
Thus, $f\left( x\right) <f\left( 0^{+}\right) =0$, which proves the
sufficiency and the proof is complete.
\end{proof}
\begin{theorem}
\label{Main 3}For fixed $k\geq 1$, the inequality (\ref{Mh}) holds for $x\in
\left( 0,\infty \right) $ if and only if $p>0$ or $p\leq -\frac{12}{5\left(
k+2\right) }$.
\end{theorem}
\begin{proof}
We define
\begin{equation}
u\left( x\right) =\frac{2}{k+2}\left( \frac{\sinh x}{x}\right) ^{kp}+\frac{k}{k+2}\left( \frac{\tanh x}{x}\right) ^{p}-1.  \label{u}
\end{equation}
Then inequality (\ref{Mh}) is equivalent to $u\left( x\right) >0$. Differentiation leads to
\begin{equation}
u^{\prime }\left( x\right) =-\frac{kp}{2\left( k+2\right) }\frac{\sinh 2x-2x}{x^{2}\cosh ^{2}x}\left( \frac{\tanh x}{x}\right) ^{p-1}v\left( x\right) ,
\label{du}
\end{equation}
where
\begin{equation}
v\left( x\right) =1-4\frac{\sinh x-x\cosh x}{2x-\sinh 2x}\left( \frac{\sinh x}{x}\right) ^{kp-p}\left( \cosh x\right) ^{p+1}.  \label{v}
\end{equation}
Differentiation again gives
\begin{equation}
v^{\prime }\left( x\right) =\frac{2\left( \cosh ^{p}x\right) \left( \frac{\sinh x}{x}\right) ^{kp-p}}{\left( x\sinh x\right) \left( x-\cosh x\sinh x\right) ^{2}}w\left( x\right) ,  \label{dv}
\end{equation}
where
\begin{eqnarray}
w\left( x\right) &=&\left( \cosh x\right) \left( \sinh x-x\cosh x\right)
^{2}\left( x-\cosh x\sinh x\right) kp \notag \\
&&+\left( \sinh x-x\cosh x\right) \left( x-\cosh x\sinh x\right) ^{2}p
\notag \\
&&+x\left( \sinh ^{2}x\right) \left( 2x^{2}\cosh x-x\sinh x-\cosh x\sinh ^{2}x\right)   \notag \\
&=&kpE\left( x\right) +pF\left( x\right) +G\left( x\right) =\left( kE+F\right) \left( p+\frac{G}{kE+F}\right) ,  \label{w}
\end{eqnarray}
where $E\left( x\right) $, $F\left( x\right) $, $G\left( x\right) $ are defined by (\ref{E}), (\ref{F}), (\ref{G}), respectively.
By (\ref{du}), (\ref{dv}) we easily get
\begin{eqnarray}
\limfunc{sgn}u^{\prime }\left( x\right) &=&-\limfunc{sgn}\frac{k}{k+2}\limfunc{sgn}p\limfunc{sgn}v(x),  \label{sgn(du)} \\
\limfunc{sgn}v^{\prime }\left( x\right) &=&\limfunc{sgn}w\left( x\right) .
\label{sgn(dv)}
\end{eqnarray}
\textbf{Necessity}. If inequality (\ref{Mh}) holds for $x\in \left( 0,\infty
\right) $, then we have $\lim_{x\rightarrow 0^{+}}x^{-4}u\left( x\right)
\geq 0$. Expanding $u\left( x\right) $ in power series gives
\begin{equation*}
u\left( x\right) =\frac{k}{36}p\left( p+\frac{12}{5\left( k+2\right) }\right) x^{4}+O\left( x^{6}\right) .
\end{equation*}
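Here, as in the trigonometric case, the expansion follows from
\begin{eqnarray*}
\left( \frac{\sinh x}{x}\right) ^{kp} &=&1+\frac{kp}{6}x^{2}+\left( \frac{k^{2}p^{2}}{72}-\frac{kp}{180}\right) x^{4}+O\left( x^{6}\right) , \\
\left( \frac{\tanh x}{x}\right) ^{p} &=&1-\frac{p}{3}x^{2}+\left( \frac{p^{2}}{18}+\frac{7p}{90}\right) x^{4}+O\left( x^{6}\right) ,
\end{eqnarray*}
whose $x^{2}$ terms again cancel in $u\left( x\right) $.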
Hence we get
\begin{equation*}
\lim_{x\rightarrow 0^{+}}x^{-4}u\left( x\right) =\frac{k}{36}p\left( p+\frac{12}{5\left( k+2\right) }\right) \geq 0.
\end{equation*}
Solving the inequality for $p$ yields $p>0$ or $p\leq -\frac{12}{5\left(
k+2\right) }$.
\textbf{Sufficiency}. We prove the condition $p>0$ or $p\leq -\frac{12}{5\left( k+2\right) }$ is sufficient for (\ref{Mh}) to hold.
If $p>0$, then $w\left( x\right) <0$ due to $E,F,G<0$. Hence, from (\ref{sgn(dv)}) we have $v^{\prime }\left( x\right) <0$, and then $v\left( x\right) <\lim_{x\rightarrow 0^{+}}v\left( x\right) =0$. It is derived by (\ref{sgn(du)}) that $u^{\prime }\left( x\right) >0$, and so $u\left( x\right) >\lim_{x\rightarrow 0^{+}}u\left( x\right) =0$.
If $p\leq -\frac{12}{5\left( k+2\right) }$, then by Lemma \ref{Lemma EF/G}
we have
\begin{equation*}
p+\frac{G}{kE+F}\leq -\frac{12}{5\left( k+2\right) }+\frac{G}{kE+F}<0,
\end{equation*}
and then
\begin{equation*}
w\left( x\right) =\left( kE+F\right) \left( p+\frac{G}{kE+F}\right) >0.
\end{equation*}
From (\ref{sgn(dv)}) we have $v^{\prime }\left( x\right) >0$, and then $v\left( x\right) >\lim_{x\rightarrow 0^{+}}v\left( x\right) =0$. It follows by (\ref{sgn(du)}) that $u^{\prime }\left( x\right) >0$, which implies that $u\left( x\right) >\lim_{x\rightarrow 0^{+}}u\left( x\right) =0$.
This completes the proof.
\end{proof}
\begin{remark}
For $k\geq 1$, since $\lim_{x\rightarrow \infty }u\left( x\right) =\infty $ for $p\neq 0$ and $\lim_{x\rightarrow \infty }u\left( x\right) =0$ for $p=0$, there is no $p$ such that the reverse inequality of (\ref{Mh}) holds for all $x>0$. But we can show that, for $-\frac{12}{5\left( k+2\right) }<p<0$, there is a unique $x_{0}\in \left( 0,\infty \right) $ such that $u\left( x\right) <0$, that is, the reverse inequality of (\ref{Mh}) holds, for $x\in \left( 0,x_{0}\right) $. The details of the proof are omitted.
\end{remark}
\begin{theorem}
\label{Main 4}For fixed $k<-2$, the reverse of (\ref{Mh}), that is
\begin{equation}
\frac{2}{k+2}\left( \frac{\sinh x}{x}\right) ^{kp}+\frac{k}{k+2}\left( \frac{\tanh x}{x}\right) ^{p}<1  \label{Mhr}
\end{equation}
holds for $x\in \left( 0,\infty \right) $ if and only if $p<0$ or $p\geq -\frac{12}{5\left( k+2\right) }$.
\end{theorem}
\begin{proof}
\textbf{Necessity}. If inequality (\ref{Mhr}) holds for $x\in \left( 0,\infty \right) $, then we have
\begin{equation*}
\lim_{x\rightarrow 0^{+}}\frac{u\left( x\right) }{x^{4}}=\frac{k}{36}p\left( p+\frac{12}{5\left( k+2\right) }\right) \leq 0.
\end{equation*}
Solving the inequality for $p$ yields $p<0$ or $p\geq -\frac{12}{5\left(
k+2\right) }$.
\textbf{Sufficiency}. We prove the condition $p<0$ or $p\geq -\frac{12}{5\left( k+2\right) }$ is sufficient for (\ref{Mhr}) to hold.
If $p<0$, then $w\left( x\right) =\left( kE+F\right) \left( p+\frac{G}{kE+F}\right) <0$ due to $kE+F>0$ and $G<0$. Hence, from (\ref{sgn(dv)}) we have $v^{\prime }\left( x\right) <0$, and then $v\left( x\right) <\lim_{x\rightarrow 0^{+}}v\left( x\right) =0$. It is derived by (\ref{sgn(du)}) that $u^{\prime }\left( x\right) <0$, and so $u\left( x\right) <\lim_{x\rightarrow 0^{+}}u\left( x\right) =0$.
If $p\geq -\frac{12}{5\left( k+2\right) }$, then by Lemma \ref{Lemma EF/G}
we have
\begin{equation*}
p+\frac{G}{kE+F}>p+\frac{12}{5\left( k+2\right) }\geq 0,
\end{equation*}
and then
\begin{equation*}
w\left( x\right) =\left( kE+F\right) \left( p+\frac{G}{kE+F}\right) >0.
\end{equation*}
From (\ref{sgn(dv)}) we have $v^{\prime }\left( x\right) >0$, and then $v\left( x\right) >\lim_{x\rightarrow 0^{+}}v\left( x\right) =0$. It follows by (\ref{sgn(du)}) that $u^{\prime }\left( x\right) <0$, which implies that $u\left( x\right) <\lim_{x\rightarrow 0^{+}}u\left( x\right) =0$.
This completes the proof.
\end{proof}
\section{Applications}
\subsection{Huygens type inequalities}
Letting $k=1$ in Theorems \ref{Main 1} and \ref{Main 2}, we have
\begin{proposition}
\label{Ptk=1}For $x\in \left( 0,\pi /2\right) $, the inequalities
\begin{equation}
\frac{2}{3}\left( \frac{\sin x}{x}\right) ^{p}+\frac{1}{3}\left( \frac{\tan x}{x}\right) ^{p}>1>\frac{2}{3}\left( \frac{\sin x}{x}\right) ^{q}+\frac{1}{3}\left( \frac{\tan x}{x}\right) ^{q}  \label{Pt1}
\end{equation}
hold if and only if $p>0$ or $p\leq -\frac{\ln 3-\ln 2}{\ln \pi -\ln 2}\approx -0.898$ and $-4/5\leq q<0$.
\end{proposition}
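As a purely numerical illustration of Proposition \ref{Ptk=1} (not part of the proof; the helper names below are ours), one may sample the interval and check both directions of (\ref{Pt1}):
\begin{verbatim}
import math

def lhs(p, x):
    # (2/3)(sin x / x)^p + (1/3)(tan x / x)^p
    return (2/3) * (math.sin(x)/x)**p + (1/3) * (math.tan(x)/x)**p

xs = [0.1 + i * 0.0145 for i in range(101)]   # grid inside (0, pi/2)
p_star = -(math.log(3) - math.log(2)) / (math.log(math.pi) - math.log(2))

for p in (1.0, p_star - 0.05):                # admissible p: values stay above 1
    print("p =", round(p, 3), "min =", min(lhs(p, x) for x in xs))
for q in (-0.8, -0.4):                        # admissible q: values stay below 1
    print("q =", round(q, 3), "max =", max(lhs(q, x) for x in xs))
\end{verbatim}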
Let $M_{r}\left( a,b;w\right) $ denote the $r$-th weighted power mean of
positive numbers $a,b>0$ defined by
\begin{equation}
M_{r}\left( a,b;w\right) :=\left( wa^{r}+\left( 1-w\right) b^{r}\right)
^{1/r}\text{ if }r\neq 0\text{ and }M_{0}\left( a,b;w\right) =a^{w}b^{1-w},
\label{M_r}
\end{equation}
where $w\in \left( 0,1\right) $.
Since
\begin{equation*}
\frac{2}{3}\left( \frac{\sin x}{x}\right) ^{p}+\frac{1}{3}\left( \frac{\tan x}{x}\right) ^{p}=\frac{\frac{2}{3}+\frac{1}{3}\left( \cos x\right) ^{-p}}{\left( \frac{\sin x}{x}\right) ^{-p}},
\end{equation*}
by Proposition \ref{Ptk=1} the inequality
\begin{equation*}
\frac{\sin x}{x}>\left( \frac{2}{3}+\frac{1}{3}\left( \cos x\right) ^{-p}\right) ^{-1/p}=M_{-p}\left( 1,\cos x;\tfrac{2}{3}\right)
\end{equation*}
holds for $x\in \left( 0,\pi /2\right) $ if and only if $-p\leq 4/5$. Similarly, its reverse holds if and only if $-p\geq \frac{\ln 3-\ln 2}{\ln \pi -\ln 2}$. These facts can be stated as a corollary.
\begin{corollary}
\label{Ctyang1}Let $M_{r}\left( a,b;w\right) $ be defined by (\ref{M_r}).
Then for $x\in \left( 0,\pi /2\right) $, the inequalities
\begin{equation}
M_{\alpha }\left( 1,\cos x;\tfrac{2}{3}\right) <\frac{\sin x}{x}<M_{\beta
}\left( 1,\cos x;\tfrac{2}{3}\right) \label{Yang1t}
\end{equation}
hold if and only if $\alpha \leq 4/5$ and $\beta \geq \frac{\ln 3-\ln 2}{\ln \pi -\ln 2}\approx 0.898$.
\end{corollary}
\begin{remark}
The Cusa--Huygens inequality \cite{Huygens.1888-1940} refers to the fact that
\begin{equation}
\frac{\sin x}{x}<\tfrac{2}{3}+\tfrac{1}{3}\cos x  \label{Cusa}
\end{equation}
holds for $x\in \left( 0,\pi /2\right) $, which is equivalent to the second inequality in (\ref{Nueman}). As an improvement and generalization, Corollary \ref{Ctyang1} was proved in \cite{Yang.MIA.2013.inprint} by Yang.
Here we provide a new proof.
\end{remark}
\begin{remark}
Let $a>b>0$ and let $x=\arcsin \frac{a-b}{a+b}\in \left( 0,\pi /2\right) $.
Then $(\sin x)/x=P/A$, $\cos x=G/A$, and then inequalities (\ref{Yang1t})
can be changed into
\begin{equation}
M_{\alpha }\left( A,G;\tfrac{2}{3}\right) <P<M_{\beta }\left( A,G;\tfrac{2}{3}\right) ,  \label{P-A-G}
\end{equation}
where $P$ is the first Seiffert mean \cite{Seiffert.EM.42.1987} defined by
\begin{equation*}
P=P\left( a,b\right) =\frac{a-b}{2\arcsin \frac{a-b}{a+b}},
\end{equation*}
$A$ and $G$ denote the arithmetic and geometric means of $a$ and $b$,
respectively.
Let $x=\arctan \frac{a-b}{a+b}$. Then $(\sin x)/x=T/Q$, $\cos x=A/Q$, and
then inequalities (\ref{Yang1t}) can be changed into
\begin{equation}
M_{\alpha }\left( Q,A;\tfrac{2}{3}\right) <T<M_{\beta }\left( Q,A;\tfrac{2}{3}\right) ,  \label{T-A-Q}
\end{equation}
where $T$ is the second Seiffert mean \cite{Seiffert.DW.29.1995} defined by
\begin{equation*}
T=T\left( a,b\right) =\frac{a-b}{2\arctan \frac{a-b}{a+b}},
\end{equation*}
$Q$ denotes the quadratic mean of $a$ and $b$.
Obviously, by Corollary \ref{Ctyang1}, both of the double inequalities (\ref{P-A-G}) (see \cite{Yang.MIA.2013.inprint}) and (\ref{T-A-Q}) hold if and only if $\alpha \leq 4/5$ and $\beta \geq \frac{\ln 3-\ln 2}{\ln \pi -\ln 2}\approx 0.898$, of which (\ref{T-A-Q}) seems to be new.
\end{remark}
In the same way, taking $k=1$ in Theorem \ref{Main 3}, we have
\begin{proposition}
\label{Phk=1}For $x\in \left( 0,\infty \right) $, the inequality
\begin{equation}
\frac{2}{3}\left( \frac{\sinh x}{x}\right) ^{p}+\frac{1}{3}\left( \frac{\tanh x}{x}\right) ^{p}>1  \label{Ph1}
\end{equation}
holds if and only if $p>0$ or $p\leq -\frac{4}{5}$.
\end{proposition}
Similar to Corollary \ref{Ctyang1}, we have
\begin{corollary}
\label{Chyang1}Let $M_{r}\left( a,b;w\right) $ be defined by (\ref{M_r}).
Then for $x\in \left( 0,\infty \right) $, the inequalities
\begin{equation}
M_{\alpha }\left( 1,\cosh x;\tfrac{2}{3}\right) <\frac{\sinh x}{x}<M_{\beta
}\left( 1,\cosh x;\tfrac{2}{3}\right) \label{Yang1h}
\end{equation}
hold if and only if $\alpha \leq 0$ and $\beta \geq 4/5$.
\end{corollary}
\begin{remark}
Let $a>b>0$ and let $x=\ln \sqrt{a/b}$. Then $\left( \sinh x\right) /x=L/G$,
$\cosh x=A/G$, and then (\ref{Yang1h}) can be changed into
\begin{equation}
M_{\alpha }\left( G,A;\tfrac{2}{3}\right) <L<M_{\beta }\left( G,A;\tfrac{2}{3}\right) ,  \label{L-A-G}
\end{equation}
where $L$ is the logarithmic mean of $a$ and $b$ defined by
\begin{equation*}
L=L\left( a,b\right) =\frac{a-b}{\ln a-\ln b}.
\end{equation*}
Making the change of variable $x=\func{arcsinh}\frac{a-b}{a+b}$ yields $(\sinh x)/x=NS/A$, $\cosh x=Q/A$, where $NS$ is the Neuman--S\'{a}ndor mean defined by
\begin{equation*}
NS=NS\left( a,b\right) =\frac{a-b}{2\func{arcsinh}\frac{a-b}{a+b}}.
\end{equation*}
Thus, (\ref{Yang1h}) is equivalent to
\begin{equation}
M_{\alpha }\left( A,Q;\tfrac{2}{3}\right) <NS<M_{\beta }\left( A,Q;\tfrac{2}{3}\right) .  \label{NS-A-Q}
\end{equation}
Corollary \ref{Chyang1} implies that the inequalities (\ref{L-A-G}) and (\ref{NS-A-Q}) hold if and only if $\alpha \leq 0$ and $\beta \geq 4/5$. The
second one in (\ref{NS-A-Q}) is a new one.
\end{remark}
\begin{remark}
It should be pointed out that all inequalities involving $(\sin x)/x$ and
$\cos x$ or $(\sinh x)/x$ and $\cosh x$ in this paper can be changed into
the equivalent ones for means by the variable substitutions mentioned
previously. In what follows we shall not mention this again.
\end{remark}
\subsection{Wilker-Zhu type inequalities}
Letting $k=2$ in Theorem \ref{Main 1} and \ref{Main 2}, we have
\begin{proposition}
\label{Ptk=2}For $x\in \left( 0,\pi /2\right) $, inequality
\begin{equation}
\left( \frac{\sin x}{x}\right) ^{2p}+\left( \frac{\tan x}{x}\right)
^{p}>2>\left( \frac{\sin x}{x}\right) ^{2q}+\left( \frac{\tan x}{x}\right)
^{q} \label{Pt2}
\end{equation}
holds if and only if $p>0$ or $p\leq -\frac{\ln 2}{2\left( \ln \pi -\ln
2\right) }\approx -0.767$ and $-3/5\leq q<0$.
\end{proposition}
Note that
\begin{equation*}
\frac{\left( \frac{\sin x}{x}\right) ^{2p}+\left( \frac{\tan x}{x}\right)
^{p}-2}{\left( \frac{\sin x}{x}\right) ^{p}+\frac{\sqrt{8+\cos ^{-2p}x}+\cos
^{-p}x}{2}}=\left( \frac{x}{\sin x}\right) ^{-p}-\frac{\sqrt{8+\cos ^{-2p}x}-\cos ^{-p}x}{2},
\end{equation*}
by Proposition \ref{Ptk=2} the inequality
\begin{equation*}
\frac{x}{\sin x}>\left( \frac{\sqrt{8+\cos ^{-2p}x}-\cos ^{-p}x}{2}\right)
^{-1/p}
\end{equation*}
or
\begin{equation*}
\frac{\sin x}{x}<\left( \frac{\sqrt{8+\cos ^{-2p}x}+\cos ^{-p}x}{4}\right)
^{-1/p}:=H_{-p}\left( \cos x\right)
\end{equation*}
holds for $x\in \left( 0,\pi /2\right) $ if and only if $-p\geq \frac{\ln 2}{2\left( \ln \pi -\ln 2\right) }$, where $H_{r}$ is defined on $\left(
0,\infty \right) $ by
\begin{equation}
H_{r}\left( t\right) =\left( \frac{\sqrt{8+t^{2r}}+t^{r}}{4}\right) ^{1/r}
\text{ if }r\neq 0\text{ and }H_{0}\left( t\right) =\sqrt[3]{t}\text{.}
\label{H_r}
\end{equation}
Likewise, its reverse holds if and only if $-p\leq 3/5$. This result can
be stated as a corollary.
\begin{corollary}
\label{Ctyang2}Let $H_{r}\left( t\right) $ be defined by (\ref{H_r}). Then
for $x\in \left( 0,\pi /2\right) $, the inequalities
\begin{equation}
H_{\alpha }\left( \cos x\right) <\frac{\sin x}{x}<H_{\beta }\left( \cos
x\right) \label{Yang2t}
\end{equation}
are true if and only if $\alpha \leq 3/5$ and $\beta \geq \frac{\ln 2}{2\left( \ln \pi -\ln 2\right) }\approx 0.767$.
\end{corollary}
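A completely analogous finite-grid check (again purely illustrative and assuming standard NumPy) can be run for (\ref{Yang2t}) with the critical exponents $\alpha =3/5$ and $\beta =\ln 2/\left[ 2\left( \ln \pi -\ln 2\right) \right] $:
\begin{verbatim}
# Illustrative numerical check of (Yang2t); not a proof.
import numpy as np

def H(r, t):
    # H_r(t) = ((sqrt(8 + t**(2r)) + t**r) / 4)**(1/r) for r != 0
    return ((np.sqrt(8.0 + t ** (2 * r)) + t ** r) / 4.0) ** (1.0 / r)

alpha = 3.0 / 5.0
beta = np.log(2) / (2 * (np.log(np.pi) - np.log(2)))   # ~0.767

x = np.linspace(0.05, np.pi / 2 - 0.01, 2000)
s, t = np.sin(x) / x, np.cos(x)
print(np.all(H(alpha, t) < s))   # lower bound: expect True
print(np.all(s < H(beta, t)))    # upper bound: expect True
\end{verbatim}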
Taking $k=2$ in Theorem \ref{Main 3}, we have
\begin{proposition}
\label{Phk=2}For $x\in \left( 0,\infty \right) $, the inequality
\begin{equation*}
\left( \frac{\sinh x}{x}\right) ^{2p}+\left( \frac{\tanh x}{x}\right) ^{p}>2
\end{equation*}
holds if and only if $p>0$ or $p\leq -3/5$.
\end{proposition}
In a similar way, we get
\begin{corollary}
\label{Chyang2}Let $H_{r}\left( t\right) $ be defined by (\ref{H_r}). Then
for $x\in \left( 0,\infty \right) $, the inequalities
\begin{equation}
H_{\alpha }\left( \cosh x\right) <\frac{\sinh x}{x}<H_{\beta }\left( \cosh
x\right) \label{Yang2h}
\end{equation}
are true if and only if $\alpha \leq 0$ and $\beta \geq 3/5$.
\end{corollary}
Now we give a generalization of inequalities (\ref{Zhwt}) given by Zhu \cite{Zhu.CMA.58.2009}.
\begin{proposition}
\label{PtZhug}For fixed $k\geq 1$, both the chains of inequalities
\begin{eqnarray}
\tfrac{2}{k+2}\left( \tfrac{\sin x}{x}\right) ^{kp}+\tfrac{k}{k+2}\left(
\tfrac{\tan x}{x}\right) ^{p} &\geq &\tfrac{k}{k+2}\left( \tfrac{\sin x}{x}\right) ^{kp}+\tfrac{2}{k+2}\left( \tfrac{\tan x}{x}\right) ^{p}
\label{Yang3t} \\
&>&\tfrac{2}{k+2}\left( \tfrac{x}{\sin x}\right) ^{kp}+\tfrac{k}{k+2}\left(
\tfrac{x}{\tan x}\right) ^{p}>1, \notag \\
\tfrac{2}{k+2}\left( \tfrac{\sin x}{x}\right) ^{kp}+\tfrac{k}{k+2}\left(
\tfrac{\tan x}{x}\right) ^{p} &>&\tfrac{2}{k+2}\left( \tfrac{x}{\tan x}\right) ^{p}+\tfrac{k}{k+2}\left( \tfrac{x}{\sin x}\right) ^{kp}
\label{Yang4t} \\
&\geq &\tfrac{2}{k+2}\left( \tfrac{x}{\sin x}\right) ^{kp}+\tfrac{k}{k+2}\left( \tfrac{x}{\tan x}\right) ^{p}>1 \notag
\end{eqnarray}
hold for $x\in \left( 0,\pi /2\right) $ if and only if $k\geq 2$ and $p\geq
\frac{\ln \left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }$.
\end{proposition}
\begin{proof}
The first inequality in (\ref{Yang3t}) is equivalent to
\begin{eqnarray*}
&&\frac{2}{k+2}\left( \frac{\sin x}{x}\right) ^{kp}+\frac{k}{k+2}\left(
\frac{\tan x}{x}\right) ^{p}-\frac{k}{k+2}\left( \frac{\sin x}{x}\right)
^{kp}-\frac{2}{k+2}\left( \frac{\tan x}{x}\right) ^{p} \\
&=&\frac{k-2}{k+2}\left( \left( \frac{\tan x}{x}\right) ^{p}-\left( \frac{\sin x}{x}\right) ^{kp}\right) >0.
\end{eqnarray*}
Due to $\frac{\tan x}{x}>1$ and $\frac{\sin x}{x}<1$, it holds for $x\in
\left( 0,\pi /2\right) $ if and only if
\begin{equation*}
\left( k,p\right) \in \{k\geq 2,p>0\}\cup \{1\leq k\leq 2,p<0\}:=\Omega _{1}.
\end{equation*}
The second one is equivalent to
\begin{equation*}
\frac{\frac{k}{k+2}\left( \frac{\sin x}{x}\right) ^{kp}+\frac{2}{k+2}\left(
\frac{\tan x}{x}\right) ^{p}}{\frac{2}{k+2}\left( \frac{x}{\sin x}\right)
^{kp}+\frac{k}{k+2}\left( \frac{x}{\tan x}\right) ^{p}}>1,
\end{equation*}
which can be simplified to
\begin{equation*}
\left( \frac{\sin x}{x}\right) ^{kp}\left( \frac{\tan x}{x}\right)
^{p}=\left( \left( \frac{\sin x}{x}\right) ^{k+1}\frac{1}{\cos x}\right)
^{p}>1.
\end{equation*}
It is true for $x\in \left( 0,\pi /2\right) $ if and only if $\left(
k,p\right) \in \{k+1\geq 3,p\geq 0\}:=\Omega _{2}$.
By Theorem \ref{Main 1}, the third one in (\ref{Yang3t}) holds for $x\in
\left( 0,\pi /2\right) $ if and only if
\begin{equation*}
\left( k,p\right) \in \{k\geq 1,-p>0\}\cup \{k\geq 1,-p\leq -\frac{\ln
\left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }\}:=\Omega _{3}.
\end{equation*}
Hence, inequalities (\ref{Yang3t}) hold for $x\in \left( 0,\pi /2\right) $
if and only if
\begin{equation*}
\left( k,p\right) \in \Omega _{1}\cap \Omega _{2}\cap \Omega _{3}=\{k\geq
2,p\geq \frac{\ln \left( k+2\right) -\ln 2}{k\left( \ln \pi -\ln 2\right) }\},
\end{equation*}
which proves (\ref{Yang3t}).
In the same way, we can prove (\ref{Yang4t}); the details are omitted.
\end{proof}
Letting $k=2$ in Proposition \ref{PtZhug} we have
\begin{corollary}
\label{Ctyang3}For $x\in \left( 0,\pi /2\right) $, the inequalities (\ref{Zhwt}) hold if and only if $p\geq \frac{\ln 2}{2\left( \ln \pi -\ln
2\right) }\approx 0.767$.
\end{corollary}
Similarly, using Theorem \ref{Main 3} we easily prove the following
\begin{proposition}
\label{PhZhug}For fixed $k\geq 1$, the inequalities
\begin{equation}
\tfrac{k}{k+2}\left( \tfrac{\sinh x}{x}\right) ^{kp}+\tfrac{2}{k+2}\left(
\tfrac{\tanh x}{x}\right) ^{p}>\tfrac{2}{k+2}\left( \tfrac{x}{\sinh x}\right) ^{kp}+\tfrac{k}{k+2}\left( \tfrac{x}{\tanh x}\right) ^{p}>1
\label{Yang3h}
\end{equation}
hold for $x\in \left( 0,\infty \right) $ if and only if $k\geq 2$ and $p\geq
\frac{12}{5\left( k+2\right) }$.
\end{proposition}
Letting $k=2$ in Proposition \ref{PhZhug} we have
\begin{corollary}
\label{Chyang3}For $x\in \left( 0,\infty \right) $, the inequalities (\ref{Zhwh}) hold if and only if $p\geq 3/5$.
\end{corollary}
\begin{remark}
Clearly, Corollaries \ref{Ctyang3} and \ref{Chyang3} offer another method
for solving the problems posed by Zhu in \cite{Zhu.AAA.2009}.
\end{remark}
\subsection{Other Wilker type inequalities}
Taking $k=3,4$ in Theorems \ref{Main 1} and \ref{Main 2}, we obtain the
following
\begin{proposition}
\label{Ptk=3}For $x\in \left( 0,\pi /2\right) $, inequality
\begin{equation}
\frac{2}{5}\left( \frac{\sin x}{x}\right) ^{3p}+\frac{3}{5}\left( \frac{\tan
x}{x}\right) ^{p}>1 \label{Pt3}
\end{equation}
holds if and only if $p>0$ or $p\leq -\frac{\ln 5-\ln 2}{3\left( \ln \pi
-\ln 2\right) }\approx -0.676$. It is reversed if and only if $-12/25\leq
p<0 $.
\end{proposition}
\begin{proposition}
\label{Ptk=4}For $x\in \left( 0,\pi /2\right) $, inequality
\begin{equation}
\frac{1}{3}\left( \frac{\sin x}{x}\right) ^{4p}+\frac{2}{3}\left( \frac{\tan
x}{x}\right) ^{p}>1 \label{Pt4}
\end{equation}
holds if and only if $p>0$ or $p\leq -\frac{\ln 3}{4\left( \ln \pi -\ln
2\right) }\approx -0.608$. It is reversed if and only if $-2/5\leq p<0$.
\end{proposition}
Putting $k=-3,-4$ in Theorem \ref{Main 3} we get
\begin{proposition}
\label{Phk=-3}For $x\in \left( 0,\infty \right) $, inequality
\begin{equation}
\left( \frac{\tanh x}{x}\right) ^{p}<\frac{2}{3}\left( \frac{x}{\sinh x}\right) ^{3p}+\frac{1}{3} \label{Ph-3}
\end{equation}
holds if and only if $p<0$ or $p\geq 12/5$.
\end{proposition}
\begin{proposition}
\label{Phk=-4}For $x\in \left( 0,\infty \right) $, inequality
\begin{equation}
2\left( \frac{\tanh x}{x}\right) ^{p}<\left( \frac{x}{\sinh x}\right) ^{4p}+1
\label{Ph-4}
\end{equation}
holds if and only if $p<0$ or $p\geq 6/5.$
\end{proposition}
\section{Introduction}
Since their discovery in 1991 \cite{Iijima1991}, carbon nanotubes (CNTs) have quickly developed into the workhorse of nanotechnology, mostly owing to their remarkable electronic, mechanical and thermal properties, which enable a whole plethora of applications. Among them are new composite materials synthesized by adding CNTs to various materials such as alloys, polymers, and metals. Such composites constitute an extraordinary class of materials that are very light while simultaneously exhibiting enhanced mechanical strength, electrical and thermal conductivity, and chemical stability \cite{Terrones2003,book1}. However, the fabrication of such nano-composites is hindered by the fact that pristine CNTs are not soluble in water or in organic solvents and tend to form bundles. The common remedy for these problems is functionalization of the CNTs, in particular covalent functionalization with simple organic molecules (such as -CH$_n$ and -NH$_n$ fragments, and -OH groups). These molecules, adsorbed at the surface of the CNTs, allow the functionalized CNTs to bind strongly to the matrix material, typically a polymer or a metal \cite{book1, amr2011, steiner2012, lachman2010, gojny2005}.
On the other hand, functionalization of the side walls of CNTs changes their morphology, generates defects \cite{app1, prof1, diamond, condmat}, and could decrease the strength of the structure in comparison to the pristine CNTs. Therefore, it is very important to investigate the elastic properties of the functionalized CNTs. This is all the more relevant given the broad range of CNT applications in fields such as nanoelectronics or medicine. Generally, the elastic properties of functionalized CNTs are rather poorly known, in contrast to those of the pristine ones \cite{lier2000, li2003, hernandez1998, govindjee1999, chang2006, yao1998, xin2000, Terrones2003, kudin2001, sanchez1999, lu1997, popov2000, krishna1998, shokrieh2010}. To close this gap, we have undertaken extensive and systematic {\sl ab initio} studies of the elastic properties of functionalized CNTs. The stability of functionalized CNTs has also been studied previously in a series of publications \cite{Li2004, pup, Rosi2007, Shirvani, Veloso2006, wang, Strano, app1, diamond, prof1, condmat}.
In this paper, we consider prototypes of the CNTs, namely single-wall (9,0), (10,0) and (11,0) CNTs, covalently functionalized with the simple organic fragments -NH, -NH$_2$, -CH$_2$, -CH$_3$ and -OH. The molecules are attached at various concentrations to the side walls of the CNTs and are evenly distributed over the CNT surfaces. For the functionalized systems, we first calculate their equilibrium geometry and then their elastic moduli.
The paper is organized as follows. In Section~\ref{sec:det}, we present the calculation details. The results of the calculations are described and discussed in the third section, 'Results and discussion'. There we present: (i) how the functionalization procedure changes the equilibrium geometry of the functionalized systems, (ii) how the elastic moduli of the covalently functionalized CNTs deviate from those of the pristine ones, and (iii) how these deviations depend on the concentration of the functionalizing molecules. Finally, the paper is concluded in the section 'Conclusions'.
\section{\label{sec:det}Calculation details}
We consider three types of CNTs: nominally metallic (9,0), and semiconducting (10,0) and (11,0). All of the CNTs have been covalently functionalized by attaching simple organic groups, such as -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH, to their lateral surface. We examine these systems at various concentrations reaching up to 4.6$\cdot$10$^{14}$ adsorbed molecules per cm$^2$ of the CNT surface (see Fig.~\ref{fig:Fig1}).
However, in the present paper, we follow the convention of other authors and measure the concentration of the adsorbents as the number of attached molecules $n_A$ per doubled unit cell of the pristine CNTs, i.e., per the number of carbon atoms in the doubled unit cell $n_{cell}$, equal to 72, 80, and 88 carbon atoms for the (9,0), (10,0), and (11,0) CNTs, respectively. To facilitate comparison between different CNTs, we also express the concentration of the adsorbed molecules as $\frac{n_A}{n_{cell}} 100\% $. We have considered all possible positions of the adsorbed fragments and determined those positions that lead to the minimum of the total energy of the functionalized CNTs, i.e., the equilibrium geometry. Only these positions are depicted in Fig.~\ref{fig:Fig1}.
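For concreteness, the relation between the two concentration measures can be sketched as follows (an illustrative Python estimate, using the pristine (9,0) geometry quoted later in the text; it is not part of the production workflow):
\begin{verbatim}
# Convert "n_A molecules per doubled unit cell" into molecules per cm^2,
# using the pristine (9,0) CNT geometry quoted in the text.
import math

r_angstrom = 3.592    # CNT radius (A)
l_angstrom = 8.590    # doubled unit cell length along the axis (A)
n_cell = 72           # C atoms in the doubled cell of the (9,0) CNT
n_A = 9               # adsorbed molecules per doubled cell

area_cm2 = 2 * math.pi * r_angstrom * l_angstrom * 1e-16  # 1 A^2 = 1e-16 cm^2
print("coverage: %.1f%%" % (100.0 * n_A / n_cell))        # 12.5%
print("areal density: %.2e cm^-2" % (n_A / area_cm2))     # ~4.6e14 cm^-2
\end{verbatim}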
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig1a}
\caption{\label{fig:Fig1} (color online) Exemplary structures of CNTs functionalized with the simple organic fragments -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH. For the -NH$_2$ and -CH$_2$ fragments, in addition to the cross-sectional view, the top views of the local arrangement of the adsorbents and the surrounding C atoms are also presented. This shows that, depending on the electronic configuration, the fragments form either a single chemical bond to a C atom of the CNT backbone (-NH$_2$, -OH, and -CH$_3$), occupying the so-called top position, or a double bond (-CH$_2$ and -NH), with the fragment placed in the bridge position.}
\end{figure}
The total energies and the components of the stress tensor are obtained from \textit{ab initio} calculations in the framework of density functional theory \cite{Hohenberg1965,kohn} in the following realization. We use the generalized gradient approximation (GGA) for the exchange-correlation density functional \cite{Perdew1996} and supercell geometry within the numerical package SIESTA \cite{siesta1, siesta2}. Since in many cases we deal with systems with an odd number of electrons, we employ the spin-polarized version of the GGA functional. A kinetic energy cut-off (parameter MeshCutoff in the SIESTA code) of 300 Ry and a split double-zeta basis set with spin polarization have been used in all calculations. Each supercell contains two primitive unit cells along the CNT symmetry axis. The lateral separation (i.e., the lateral lattice constants in the directions perpendicular to the symmetry axis) has been set to 30 \AA\ to completely eliminate the spurious interaction between neighboring cells. We use a self-consistency mixing rate of 0.1, a convergence criterion for the density matrix of 10$^{-5}$, a maximum force tolerance of 0.01 eV/\AA, and 1$\times$1$\times$10 k-point sampling in the Monkhorst-Pack scheme.
The stability of the functionalized structures can be assessed by considering the adsorption energy $E_{ads}$ defined below (and sometimes called packing energy \cite{Coto2011}).
\begin{equation}
\label{eee}
E_{ads} = \frac{1}{N} ( E_{CNT + groups} - (E_{CNT} + N \cdot E_{group} )),
\end{equation}
where $E_{CNT + groups}$, $E_{CNT}$, and $E_{group}$ are the total energies of the functionalized CNTs with the optimized unit cell lengths and atomic geometry, pristine one, and one functionalizing molecule, respectively.
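A minimal sketch of this bookkeeping (with hypothetical total energies used purely for illustration) is:
\begin{verbatim}
# Adsorption energy per molecule, Eq. (eee); input energies are placeholders.
def adsorption_energy(e_cnt_groups, e_cnt, e_group, n):
    # E_ads = (E_{CNT+groups} - (E_CNT + N*E_group)) / N
    return (e_cnt_groups - (e_cnt + n * e_group)) / n

print(adsorption_energy(e_cnt_groups=-11010.0, e_cnt=-11000.0,
                        e_group=-3.0, n=2))   # -> -2.0 eV per molecule (bound)
\end{verbatim}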
The functionalized CNTs change their lattice constants $l$ along the symmetry axis and their radii $r$ in comparison to the pristine ones. These parameters take the values that minimize $E_{CNT + groups}$ with the optimized positions of all atoms in the supercell (i.e., with all forces on the atoms vanishing). Since the cross-sections of the functionalized CNTs in the plane perpendicular to the CNT symmetry axis are no longer circular, we determine the average radius $r$ of the functionalized nanotubes as the geometrical average of the carbon atom positions on the CNT surface.
Since we use the numerical code with the localized basis, the basis set superposition error (BSSE) correction should be taken into account. We have calculated this correction following the well established procedure \cite{bsse, bsse1, bsse2}
\begin{equation}
E_{cc} = \frac{1}{N} \left( E_{CNT + ghost} - E_{CNT} + E_{ghost + groups} - N \cdot E_{group} \right),
\label{eq:bsse}
\end{equation}
where $E_{CNT + ghost}$ and $E_{ghost + groups}$ are the Kohn-Sham energies of the functionalized system in which the adsorbents or the nanotube, respectively, are replaced by their ghosts \cite{siesta2}. These calculations have been performed with the atomic sites fixed at their equilibrium positions. $E_{cc}$ corrects the values of $E_{ads}$ by approximately 11$\%$ to 44$\%$ and does not change the conclusions about the stability of the functionalized systems \cite{condmat}. We would like to stress that the BSSE correction to the adsorption energy originates mostly from the calculation of the total energies of the free groups $E_{group}$. These energies, when calculated with the basis functions attached to only a few atoms, differ considerably from the energies calculated employing the full basis of the whole functionalized system. The role of the BSSE correction becomes completely negligible when one calculates the equilibrium geometry of the functionalized systems or the elastic properties, since these quantities are determined from total energies of the whole functionalized systems for which the bases are identical (up to atomic positions).
This has been confirmed by calculating $dE_{cc}$/$dl$ according to the procedure described in the Ref.\cite{bsse2}.
Having determined the equilibrium geometry of the functionalized CNTs, we are in a position to calculate their elastic moduli. To do so, we strain the functionalized CNTs (usually applying tensile strain) along the symmetry axis by $\Delta l$ and calculate the response.
The most interesting quantity, Young's modulus, has been determined in two ways:
(i) - by comparing the total energy of unstrained ($E_l$) and strained ($E_{l+\Delta l}$) systems
\begin{equation}
\label{e1}
\nonumber Y = \frac{1}{{V_o }}\frac{{\partial ^2 E_{strain} }}{{\partial \varepsilon _{ii} ^2 }},
\quad
E_{strain} = E_{l+\Delta l } - E_l, \\
\quad
\nonumber \varepsilon_{ii}=\frac{\Delta l}{l},
\end{equation}
where $l$ is the lattice constant along the axis of the functionalized tube, $\Delta l$ is the elongation in the chosen direction, and $V_o$ is the volume of the unstressed system; and (ii) from the components ($\sigma_{ii}$) of the stress tensor, $Y = \sigma_{ii} / \varepsilon_{ii} $.
The volume of the pristine CNT has been calculated using the following relation, $V_o = 2 \cdot \pi \cdot r \cdot l \cdot t$,
where the thickness $t$ has been chosen as twice the van der Waals radius of the C atom (equal to 0.34 nm) \cite{lu1997,hernandez1998, sanchez1999, harik2002, li2003, chang2006}. In the case of the functionalized CNTs, we neglect the volume of the attached molecules.
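Route (i) can be summarized by the following sketch (placeholder strain energies and the geometry conventions above; the quadratic fit plays the role of the second derivative in Eq.~\ref{e1}):
\begin{verbatim}
# Young's modulus from the curvature of the strain energy (illustrative only).
import numpy as np

eps = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])      # axial strain
e_strain = np.array([0.80, 0.20, 0.0, 0.20, 0.80])   # E(l+dl)-E(l) in eV (placeholder)

r, l, t = 3.592, 8.590, 3.4          # radius, doubled-cell length, thickness (A)
v0 = 2.0 * np.pi * r * l * t         # V0 = 2*pi*r*l*t in A^3

a = np.polyfit(eps, e_strain, 2)[0]  # coefficient of eps^2
d2e = 2.0 * a                        # d^2 E / d eps^2 in eV
ev_A3_to_GPa = 160.2177              # 1 eV/A^3 = 160.2 GPa
print("Y = %.2f TPa" % (d2e / v0 * ev_A3_to_GPa / 1000.0))   # ~1 TPa here
\end{verbatim}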
The Bulk and Shear moduli have also been calculated according to the formulas:
\begin{equation}
\label{e5}
K = \frac{Y}{3(1-2 \nu)},
\quad
G = \frac{Y}{2(1+ \nu)}.
\end{equation}
We have also calculated the BSSE corrections to the elastic moduli. These corrections modify the values of the Young's moduli by at most 10$\%$ and do not change the conclusions presented in this article.
Finally, we compute the Poisson's ratio as $\nu = - (\Delta r / r)(l / \Delta l)$,
where $\Delta r$ is the change of the average radius of the functionalized CNT caused by the applied elongation $\Delta l$.
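These post-processing steps can be sketched as follows (the value of $\Delta r$ is a placeholder chosen only for illustration, and the Young's modulus is a representative value of about 1~TPa):
\begin{verbatim}
# Poisson's ratio, Bulk and Shear moduli from Eq. (e5); inputs are placeholders.
def poisson_ratio(delta_r, r, delta_l, l):
    # nu = -(delta_r / r) * (l / delta_l)
    return -(delta_r / r) * (l / delta_l)

def bulk_shear(young, nu):
    # K = Y / (3(1 - 2 nu)),  G = Y / (2(1 + nu))
    return young / (3.0 * (1.0 - 2.0 * nu)), young / (2.0 * (1.0 + nu))

nu = poisson_ratio(delta_r=-0.0079, r=3.592, delta_l=0.0859, l=8.590)
k, g = bulk_shear(young=1.02, nu=nu)          # Y in TPa
print("nu=%.2f  K=%.2f TPa  G=%.2f TPa" % (nu, k, g))
\end{verbatim}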
\section{\label{sec:res}Results and discussion}
\subsection{Influence of functionalization on the structure}
We have studied the stability and electronic structure of covalently functionalized CNTs in previous works \cite{app1, diamond, prof1, condmat}. We have also shown there how the functionalization induces changes in the morphology of the functionalized systems and leads to a redistribution of electronic charge. All of the functionalizing fragments considered in the present study induce rehybridization from sp$^2$ to sp$^3$ of the C-C bonds in the neighborhood of the attachment, but in many cases we have found that some of the adsorbed molecules also cause a stronger deformation of the CNT backbone structure. The pronounced changes we observed in the morphology of the functionalized CNTs motivated us to study the global strength of the functionalized CNTs as expressed by the elastic moduli.
Before we turn to the discussion of the elastic properties, we would like to briefly present the stability of the functionalized CNTs and the change of geometry (lattice constant and radius) caused by the functionalization.
The adsorption energy (per adsorbed molecule) for all considered functionalizing molecules is shown in Fig.~\ref{fig:Fig2} for the prototypical metallic (9,0) CNT. It is seen that all the considered molecules bind to the surface of the (9,0) CNT (i.e., the adsorption energy is negative). However, the strength of the bonding is larger for the typical radicals (-NH and -CH$_2$) than for the non-radicals (-NH$_2$, -CH$_3$, and -OH). We will correlate this with the induced changes of geometry and elastic moduli later on.
As can be seen in Fig. ~\ref{fig:Fig2}, generally, the adsorption energy per molecule remains nearly constant with increasing number of attached molecules. Only for the strong radical -NH, the adsorption energy per molecule gets less negative (indicating that the bonding weakens) with increasing number of attached fragments.
This trend also holds for the semiconducting (10,0) and (11,0) CNTs, as illustrated for the -CH$_2$ and -OH adsorbents in Fig.~\ref{fig:Fig3}. At least for these CNTs of rather similar diameter and for the considered concentrations of the adsorbed molecules, the adsorption energy depends rather weakly on the metallic or semiconducting character of the functionalized CNTs and on their radius.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig2}
\caption{\label{fig:Fig2}(color online) Adsorption energy per molecule of the (9,0) CNT functionalized with -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH groups as a function of the number of adsorbed fragments per CNT unit cell, i.e., per 72 carbon atoms. On the top axis, the universal percentage scale is depicted.}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig3}
\caption{\label{fig:Fig3} (color online) Adsorption energy per molecule for (9,0), (10,0), and (11,0) CNT functionalized with -CH$_2$ and -OH fragments as a function of the density of attached molecules.}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig4}
\caption{\label{fig:Fig4} (color online) The equilibrium lattice constant ($l$) along symmetry axis of the functionalized nanotubes as a function of the number of covalently bound fragments to the sidewall of (9,0) CNTs for -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH functionalizing molecules. Top axis gives the concentrations of adsorbed molecules in \%.}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig5}
\caption{\label{fig:Fig5} (color online) Radius of (9,0) CNT functionalized with -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH groups as a function of number of covalently bound fragments to sidewall of the tube. Top axis gives the concentrations of adsorbed molecules in \%.}
\end{figure}
The functionalization changes the parameters characterizing the backbone of the functionalized CNTs, namely the longitudinal lattice constant $l$ and the radius $r$. The longitudinal lattice constants and radii of the functionalized CNTs are larger than those of the pristine ones and change rather strongly with the number of attached molecules. This is depicted for the (9,0) CNT in Figs.~\ref{fig:Fig4} and ~\ref{fig:Fig5}. For example, the radius and lattice constant of the pristine (9,0) CNT equal 3.592 \AA $ $ and 8.590 \AA, respectively. For the (9,0) CNT functionalized with 9 -CH$_2$ molecules per unit cell, the radius increases to 3.715 \AA $ $ (by 3.31$\%$) and the lattice constant reaches 8.726 \AA $ $ (an increase of 1.58$\%$).
Generally, one can say that functionalization of the (9,0) CNT acts as an effective tensile strain, which expands the pristine CNT. The effect is more pronounced for molecules that build strong covalent bonds to the CNT walls.
The largest changes of the lattice constant $l$ and radius $r$ have been observed for CNT functionalized with -CH$_2$ radicals. For maximal considered concentration of 12.5\%, the relative changes of $l$ and $r$ in comparison to the length and radius of the pristine CNTs are equal to 1.56\% and 3.31\%, respectively. This effect is much weaker for -CH$_3$ functionalized CNT, where percentage change of $l$ is equal to 0.34\%, whereas the change of $r$ equals 0.92\%.
The relative changes in the $l$ and $r$ parameters induced by functionalization depend rather weakly on the metallic or semiconducting character of the CNTs and on their diameter. The radius of the (9,0), (10,0), and (11,0) CNTs functionalized with -CH$_2$ and -OH molecules is depicted in Fig.~\ref{fig:Fig6}. As determined previously \cite{condmat}, -CH$_2$ radicals bind strongly to the CNT surfaces and at higher concentrations can lead to some local structural defects (so-called 5-7 defects). On the other hand, functionalization with -OH groups slightly changes the cross-section of the CNT from a circle to an ellipse. Therefore, we have decided to compare (see Fig.~\ref{fig:Fig6}) both types of attachments for all of the CNTs studied: (9,0), (10,0) and (11,0). We have noticed, for the largest considered concentration of 12.5$\%$, that the (9,0) CNT functionalized with -CH$_2$ shows the largest percentage change (3.31$\%$) of the radius in comparison to the (10,0) and (11,0) CNTs (where the percentage changes are equal to 2.83$\%$ and 1.81$\%$, respectively). The -OH groups follow a similar trend; however, the functionalization-induced changes of the radius are weaker. The relative changes of the radius are 0.64$\%$, 0.43$\%$ and 0.39$\%$ for the (9,0), (10,0) and (11,0) CNT, respectively. Therefore, one can say that functionalization of nanotubes with a larger original radius has less influence on their structure than functionalization of CNTs with a smaller diameter.
Having described the equilibrium geometry of the functionalized CNTs, we are now in a position to discuss their elastic properties.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig6a}
\caption{\label{fig:Fig6} (color online) Radius of (9,0), (10,0), and (11,0) CNTs functionalized with -CH$_2$ and -OH as a function of the number of attached fragments per supercell. Since (9,0), (10,0), and (11,0) CNTs contain different number of atoms in the supercell, the percentage concentrations of the attached fragments are also depicted above top axes of each panel for better comparison.}
\end{figure}
\subsection{Elastic properties of pure CNT}
Before we turn to the elastic moduli of the functionalized CNTs, we would like to present our results for pristine (9,0), (10,0), and (11,0) ones. This allows for comparison with previous works and provides the reference to the case with functionalization.
\begin{table}[h!tb] \centering
\caption{\small{Elastic moduli and Poisson's ratio of (9,0), (10,0), and (11,0) pristine CNTs.}}
\vspace{0.06 in}
\begin{tabular}{|l|c|c|c|}
\hline
\small{\textbf{Property}} & \small{\textbf{(9,0)}} & \small{\textbf{(10,0)}} & \small{ \textbf{(11,0)}} \\
\hline
\hline
Y (TPa) & 1.02 & 1.03 & 1.02 \\
\hline
K (TPa) & 0.61 & 0.57 & 0.54 \\
\hline
G (TPa) & 0.41 & 0.43 & 0.43 \\
\hline
$\nu$ & 0.22 & 0.20 & 0.18 \\
\hline
\end{tabular}
\label{tab:tab1}
\end{table}
Young's, Shear, and Bulk moduli, and also Poisson's ratios for (9,0), (10,0) and (11,0) pristine CNTs are gathered in Tab.\ref{tab:tab1}. The calculated Young's moduli of the pure CNT compare excellently to experimental findings (0.32-1.80 TPa) \cite{lu1997, krishna1998, Terrones2003} and previous theoretical works (0.8-1.5 TPa) \cite{lier2000, li2003, hernandez1998, govindjee1999, chang2006, yao1998, xin2000, Terrones2003, kudin2001, sanchez1999}. Calculated Poisson's ratios are identical to the experimental ones and also very close to previous theoretical predictions (0.19-0.34) \cite{lu1997, popov2000, chang2006, Terrones2003, sanchez1999}. Also calculated values of the Shear and Bulk moduli agree fairly well with previously obtained theoretical and experimental values, lying in the range of 0.45-0.58 TPa \cite{li2003, Terrones2003, krishna1998, lu1997, chang2006, popov2000} and 0.50-0.78 TPa \cite{Terrones2003, lu1997}, respectively.
We have also calculated the elastic properties for a wider range of pristine zigzag CNTs. Only for small CNTs, such as (4,0) and (5,0), are all the values of the elastic moduli smaller.
For CNTs of larger diameter, up to (20,0), the values are very similar to those shown in Tab.\ref{tab:tab1}. Starting from the (6,0) CNT, all of the elastic moduli seem to depend rather weakly on the CNT diameter. Such behavior has been noticed in previous studies of the Young's \cite{popov2000, chang2006, sanchez1999, li2003, shokrieh2010, lu1997} and Shear \cite{popov2000, li2003, lu1997} moduli.
\subsection{Elastic properties of functionalized CNT}
Let us now present the theoretical predictions for the elastic moduli of the functionalized CNTs. We start the presentation of our results with the Young's modulus of the (9,0) CNT functionalized with -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH groups (Fig.~\ref{fig:Fig7}). For all considered groups, the Young's modulus decreases with increasing density of the attachments. However, for the radicals -CH$_2$ and -NH, the trend is much more pronounced than for the other groups. For the CNT functionalized with 9 -CH$_2$ fragments (i.e., a concentration of 12.5\%), the Young's modulus decreases by 28.41\%, whereas the CNT with 9 functionalizing -CH$_3$ groups exhibits a reduction in the Young's modulus equal to 13.52\%. This confirms the tendency already described: the molecules with stronger binding to the CNT surface modify the properties of the functionalized CNTs more strongly.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig7}
\caption{\label{fig:Fig7}(color online) The Young's modulus of the (9,0) CNT functionalized with -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH groups as a function of the density of attached molecules, given as the number of attached molecules per unit cell (lower x-axis) or the ratio of adsorbents to the number of atoms in the unit cell (upper x-axis). On the right axis we have depicted percentage change of Young's modulus relative to the pristine CNT.}
\end{figure}
To compare how the Young's modulus depends on the diameter of the tubes, we have chosen the -OH groups and -CH$_2$ fragments as examples.
In Fig.~\ref{fig:Fig8}, we plot the dependence of Young's modulus for (9,0), (10,0) and (11,0) CNTs functionalized with -OH and -CH$_2$ molecules on the density of adsorbents. It is seen that -OH groups represent behavior typical for non-radical adsorbents (which generally cause small deformation of CNTs), and one observes practically no difference between tubes. Even in the case of -CH$_2$ radical (that causes typically rather large deformations of CNTs), one can only weakly differentiate between the types of the functionalized nanotubes.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig8}
\caption{\label{fig:Fig8}(color online) The Young's modulus of (9,0), (10,0), and (11,0) CNTs functionalized with -OH and -CH$_2$ fragments as a function of the density of attached molecules.}
\end{figure}
Our calculations show that the Poisson ratio for structures functionalized by all considered fragments always lies between 0.17 and 0.24. For the studied range of adsorbent concentrations, this quantity neither exhibits a clear dependence on the type of functionalizing molecule nor allows one to distinguish between the (9,0), (10,0), and (11,0) CNTs.
We complete the discussion of the elastic moduli of the functionalized CNTs with the presentation of the results for the Shear and Bulk moduli, which can be easily calculated from the Young's modulus and the Poisson ratio employing Eq.~\ref{e5}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig9}
\caption{\label{fig:Fig9} (color online) The Shear modulus of (9,0) CNT functionalized with -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH groups as a function of the density of attached molecules, given as the number of attached molecules per unit cell (lower x-axis) or the ratio of adsorbents to the number of atoms in the unit cell (upper x-axis). On the right axis we have depicted the relative changes of Shear modulus with respect to pristine CNT.}
\end{figure}
The Shear modulus as a function of the concentration of attached molecules is depicted in Fig.~\ref{fig:Fig9}. Generally, the Shear modulus drops with the increasing density of the attached molecules. This decrease is stronger for -CH$_2$ radical than for non-radical groups such as -OH .
For the highest considered concentration of the -CH$_2$ radicals, the Shear modulus is smaller by roughly 25\%, and even for the non-radical functionalizing groups the decrease is of the order of 10\%. Therefore, our studies do not corroborate Frankland's \cite{frankland2002} suggestion that functionalization has a tiny influence on the Shear modulus (less than 4.63 $\%$).
The Bulk modulus as a function of the concentration of attached molecules is shown in Fig.~\ref{fig:Fig10}.
The Bulk modulus behaves similarly to other elastic moduli and decreases with the growing concentration of functionalizing molecules, with the strongest effect observed for functionalization with -CH$_2$ radical.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig10}
\caption{\label{fig:Fig10}(color online) The Bulk modulus of (9,0) CNT functionalized with -NH, -NH$_2$, -CH$_2$, -CH$_3$, and -OH groups as a function of the density of attached molecules, given as the number of attached molecules per unit cell (lower x-axis) or the ratio of adsorbed molecules to the number of atoms in the unit cell (upper x-axis). On the right axis we have depicted percentage change of the Bulk modulus with respect to pristine CNT.}
\end{figure}
Generally, our studies provide theoretical predictions for the elastic moduli of covalently functionalized CNTs. These moduli diminish with the concentration of the functionalizing molecules. In the absence of experimental data, the obtained values should facilitate the understanding and design of composite materials. First of all, the decrease of the elastic moduli is quite modest, particularly for the non-radical -OH, -NH$_2$, and -CH$_3$ groups. Therefore, the functionalized CNTs should still be a good reinforcement in composites employing polymers or metals as matrices. On the other hand, the functionalization of CNTs is necessary to bind the CNTs to the polymer matrix and significantly improves the homogeneous dispersion and integration of CNTs into polymers, simultaneously reducing the tendency of pristine CNTs to re-agglomerate. This feature substantially enhances the elastic strength of polymer matrices with incorporated CNTs. This effect has been confirmed in a series of experiments studying the Young's modulus of composites with amines \cite{gojny2005, wang, lachman2010}, amides \cite{steiner2012}, and carboxylic groups \cite{lachman2010, amr2011}. All these studies are in agreement with our findings. However, for CNTs with hydroxyl groups dispersed into a polymer matrix, Wang \cite{wang2011} reported a reduction in the Young's modulus in comparison to the pure matrix. Unfortunately, the issue of the elastic properties of composites is beyond the scope of our atomistic approach and would require a study based on continuum methods.
\section{\label{sec:con}Conclusions}
We have performed extensive and systematic {\sl ab initio} studies of the elastic properties of the (9,0), (10,0), and (11,0) CNTs functionalized with -NH, -NH$_2$, -CH$_2$, -CH$_3$, -OH molecules covalently bound to the CNT walls at concentrations reaching up to 4.6$\cdot$10$^{14}$ molecules per cm$^2$. Our studies provide valuable theoretical quantitative predictions for elastic moduli (Young's , Shear, Bulk moduli, and Poisson ratio) of functionalized CNTs, demonstrate clear chemical trends in the elastic moduli, and shed light on physical mechanisms governing these trends. These results are of importance for design of composite materials employing carbon nanotubes.
We have shown that considered molecules form covalent bonds to the CNT surfaces and cause local and global changes in the morphology of the CNT that are generally proportional to the density of the attached molecules.
The local deformations include rehybridization of the C-C bonds and defects that influence strength of the functionalized systems. Functionalization of CNTs causes expansion of the functionalized CNTs, i.e., increase of longitudinal lattice constant and radius in comparison to the pristine CNTs. This expansion is proportional to the density of the adsorbed molecules.
We observe general trend that the molecules forming the stronger bonds to CNTs cause larger deformations of the functionalized systems (i.e., the larger changes of the lattice constants $l$ and radii $r$) and larger reduction of the elastic moduli (Young's, Shear, and Bulk). All moduli decrease with concentration of the adsorbed molecules.
While the Young's, Shear, and Bulk moduli reflect the changes in the CNT morphology caused by functionalization, the Poisson's ratio remains almost unchanged.
In a few cases when comparison with experimental or other theoretical studies is possible, we observe reasonable agreement with results of our calculations.
In spite of the fact that the functionalization diminishes elastic moduli of CNTs and this effect generally cannot be neglected, the elastic moduli remain large enough to guarantee successful employment of functionalized CNTs for reinforcement of composite materials.
\section{Acknowledgement}
The authors gratefully acknowledge financial support of the Polish Council for Science through the Development Grants for the years 2008-2011 (NR. 15-0011-04/2008, NR. KB/72/13447/IT1-B/U/08) and the SiCMAT Project financed under the European Funds for Regional Development (Contract No. UDA-POIG.01.03.01-14-155/09). We also thank the PL-Grid Infrastructure and the Interdisciplinary Centre for Mathematical and Computational Modeling of the University of Warsaw (Grant No. G47-5) for providing computer facilities.
\section{Introduction}
\label{sec:intro}
Since\footnote{While finalizing this manuscript, we became aware of another work applying the Markov Chain Monte-Carlo technique in quantum algorithms \cite{Mazzola-mcmc-2021}. However, we differentiate our work by targeting near-term quantum algorithms and providing the proof of ergodicity.} the advent of the Variational Quantum Eigensolver (VQE) \cite{mcclean2016theory, peruzzo2014variational} and Quantum Approximate Optimization Algorithm (QAOA) \cite{farhi2014quantum}, quantum algorithms that function in tandem with classical machine learning have garnered great interest. These variational quantum algorithms (VQAs) typically harness some form of classical gradient descent to tackle a large-scale optimization problem on the exponential state space of quantum hardware \cite{Cerezo2021, lavrijsen2020classical, cerezo2021variational}. Applications of these methods have included the optimization of NP-hard combinatorial problems \cite{garey2002computers, Nannicini2019, Braine2021, Patti2021, Fuller2021}, the identification of eigenstates and energies in quantum chemistry applications \cite{mcardle2018quantum, kandala2017hardware, Grimsley2019}, and the study of condensed matter systems \cite{ritter2019near, vogt2020preparing, Zhang2021}. Much like their classical counterparts, the above near-term quantum algorithms can be plagued by nonconvex optimization landscapes, causing them to converge to suboptimal minima \cite{Lee2021}. A variety of techniques have been suggested to address this issue in NP-hard combinatorial optimization problems, such as: ``warm starting'' procedures \cite{beaulieu2021max, egger2021warm, van2021}, composition with classical neural networks \cite{Rivera-Dean2021}, multibasis encodings with bistable convergence \cite{Patti2021}, and other techniques \cite{Fuller2021, harwood2021improving, shehab2019noise}. However, these methods offer few provable optimization guarantees of practical utility. While optimization landscapes are known to become more convex with high depth \cite{Lee2021}, the adverse effect of quantum noise \cite{bravyi2018quantum} and barren plateaus \cite{mcclean2018barren, Patti2020,Marrero2020,holmes2021connecting,Cerezo2020} on deep quantum networks is well-documented.
In order to avoid the local minima convergence that plagues VQAs, we introduce MCMC-VQA, a technique that adapts the ergodic exploration of classical Markov chain Monte Carlo (MCMC) to guarantee the global convergence of quantum algorithms. As samples of ergodic systems are representative of their underlying probability distribution, an ergodic VQA necessarily yields a sample that contains states near the global minimum. In this work, we focus on the Metropolis-Hastings algorithm due to its success in high-dimensional spaces and suitability for unnormalized probability distributions \cite{Metropolis1953}. MCMC-VQA utilizes modified VQAs and their statistics as the Metropolis-Hastings transition kernels and quantum state energies as state likelihoods. These quantities are then used to determine the viability of parameter updates. Our algorithm requires no increase in quantum overhead and only a minimal increase in classical overhead. MCMC-VQA represents a time-discrete, space-continuous Markov chain, as the algorithm progresses in discrete VQA epochs while training a continuous-parameter quantum circuit. It can also be classified as a form of Stochastic Gradient Descent MCMC \cite{Robbins1951, Nemeth2021}. Although in this work we focus on VQE \cite{peruzzo2014variational}, our techniques are readily applicable to a wide array of quantum machine learning applications.
While other works have introduced quantum subroutines for classical MCMC methods that offer a quadratic speedup for random walks \cite{Szegedy2004,Temme2011, Lemieux2020} and sampling \cite{Montanaro2015, Cornelissen2021}, this manuscript takes the opposite approach by designing a classical MCMC subroutine for quantum algorithms. Likewise, while classical MCMC methods have been used to \textit{simulate} quantum computing routines \cite{Wang2016, Medvidovic2021}, our work uses classical MCMC to \textit{enhance} quantum algorithms on quantum hardware.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{MCMC-VQA.eps}
\caption{Diagram of a random graph for MaxCut, VQE, and MCMC-VQA. \textbf{Random graphs} (black, Secs.\ \ref{sec:intro} and \ref{sec:methods}) in this work are generated with normally distributed edge weights $\omega_i$. The objective is to maximize Eq.\ \ref{eq:maxcut} by optimally assigning each pair of vertices $v_{ia}$, $v_{ib} \in \{-1,1\}$. MaxCut can be solved on a quantum computer by mapping $v_{ia}$, $v_{ib} \rightarrow \sigma_{ia}$, $\sigma_{ib}$ and minimizing the corresponding $H$. See Sec.\ \ref{sec:methods} for graph details. \textbf{VQE} (gray, Sec.\ \ref{sec:intro}) minimizes the loss function for each $\hat{\theta}$ by calculating the expectation value $\Lambda(\hat{\theta})$ and updating $\hat{\theta}$ with gradient descent using $\nabla \Lambda(\hat{\theta})$. \textbf{MCMC-VQA} (blue, Sec.\ \ref{sec:results}) uses gradient descent with $\nabla \Lambda(\hat{\theta})$ and random noise $\xi \Theta_r$ to produce candidate state $\hat{\theta}'$, but also calculates probability distributions $P(\hat{\theta})$ and $P(\hat{\theta}')$, as well as proposal distributions $G(\hat{\theta}'|\hat{\theta})$ and $G(\hat{\theta}|\hat{\theta}')$. Using these distributions, the acceptance distribution $A(\hat{\theta}'|\hat{\theta})$ is calculated and compared to a random uniform sample $u \sim U(0,1)$. If $A(\hat{\theta}'|\hat{\theta}) > u$, then $\hat{\theta}' \rightarrow \hat{\theta}$. Otherwise, the MCMC-VQA algorithm restarts with the original $\hat{\theta}$. (Red) After the maximum number of MCMC-VQA epochs $T_\text{MC}$ have occurred, the sampled parameters with the lowest loss, $\hat{\theta}_\text{min}$, are selected and the optimization completes with a closing sequence of VQE epochs.}
\label{fig:1}
\end{figure*}
We briefly review VQAs, focusing on VQE (Fig.\ \ref{fig:1}, gray) for quantum optimization for MaxCut problems. This choice of application is motivated by the ample nonconvexity of the corresponding quadratic loss functions \cite{Patti2021, Lee2021}. VQAs are parameterized by input states $|\psi \rangle$ and quantum circuit unitaries $U_t = U(\hat{\theta}_t)$, where $\hat{\theta}_t$ are the variable parameters learned during epoch $t-1$. Without loss of generality, we choose the $n$-qubit input state as $|\mathbf{0} \rangle = \prod_{i=0}^{n-1} |0\rangle$ such that the output state is entirely defined by $\hat{\theta}$ and assume that the initial parameters $\hat{\theta}_0$ are randomly selected at the start of each new sequence of epochs.
MaxCut is a partitioning problem on undirected graphs $G$ (Fig.\ \ref{fig:1}, black), where edges with weights $\omega_i$ connect pairs of vertices $v_{ia}$, $v_{ib}$ \cite{Commander2009}. The goal is to optimally assign all vertices $v_{ia}$, $v_{ib} \in \{-1,1\}$, so as to maximize the objective function
\begin{equation}
\textrm{maximize} \hspace{0.4cm} \frac{1}{2} \sum_{i} \omega_{i} \left(1-v_{ia} v_{ib} \right).
\label{eq:maxcut}
\end{equation}
\noindent In this work, we will consider a generalized form of the problem known as \textit{weighted} MaxCut, in which $\omega_i$ take arbitrary real values.
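As a minimal illustration (the three-vertex graph and the partition below are hypothetical), the value of the objective in Eq.\ \ref{eq:maxcut} for a given assignment can be evaluated as:
\begin{verbatim}
# Weighted MaxCut objective for a fixed partition (illustrative toy graph).
edges = [(0, 1, 0.7), (1, 2, -0.3), (0, 2, 1.1)]  # (vertex a, vertex b, weight)
assignment = [1, -1, 1]                           # hypothetical v_i in {-1,+1}

cut = 0.5 * sum(w * (1 - assignment[a] * assignment[b]) for a, b, w in edges)
print("cut value:", cut)   # 0.5*(0.7*2 + (-0.3)*2 + 1.1*0) = 0.4
\end{verbatim}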
To solve MaxCut via VQE, a graph $G$ is encoded in the Ising model Hamiltonian
\begin{equation}
H = \sum_i \omega_i \sigma_{ia} \sigma_{ib},
\label{eq:H}
\end{equation}
\noindent where $\omega_i$ remains unchanged from the MaxCut objective function and $v_{ia}, v_{ib} \rightarrow \sigma_{ia}, \sigma_{ib}$ for Pauli-Z spin operators $\sigma_{ia}, \sigma_{ib}$. Maximizing the cut of $G$ is then equivalent to minimizing the loss function
\begin{align}
& \Lambda_t = \Lambda(\hat{\theta}_t) = \langle \mathbf{0}| U_t^\dagger H U_t | \mathbf{0} \rangle \\
& = \sum_i \omega_i \langle \sigma_{ia} \sigma_{ib} \rangle_t = \sum_i \mu^i_t, \nonumber
\end{align}
\noindent where $\mu^i_t$ are the expectation values of the quadratic MaxCut terms. VQE circuit training updates parameters $\hat{\theta}$ via gradient descent on $\Lambda_t$ (Fig.\ \ref{fig:1}), where the gradient of any $\theta_t^k \in \hat{\theta}_t$ can be calculated as $\nabla_k \Lambda(\hat{\theta}_t) = \left( \Lambda(\hat{\theta}_t + \epsilon \hat{k}) - \Lambda(\hat{\theta}_t - \epsilon \hat{k}) \right) / 2 \epsilon $ by finite difference. As $\nabla \Lambda(\hat{\theta}_t) \rightarrow 0$ in the vicinity of both global \textit{and local} minima, VQE training is prone to stagnation at suboptimal solutions.
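For concreteness, this finite-difference update can be sketched as follows (the toy loss is a stand-in assumption for the measured expectation value $\Lambda(\hat{\theta})$, not the circuit used in this work):
\begin{verbatim}
# Finite-difference gradient and plain gradient-descent epochs (illustrative).
import numpy as np

def finite_difference_grad(loss, theta, eps=1e-3):
    grad = np.zeros_like(theta)
    for k in range(theta.size):
        shift = np.zeros_like(theta)
        shift[k] = eps
        grad[k] = (loss(theta + shift) - loss(theta - shift)) / (2.0 * eps)
    return grad

def toy_loss(theta):                  # placeholder for a measured <H>
    return float(np.sum(np.cos(theta)))

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=4)
eta = 0.1                             # learning rate
for _ in range(300):
    theta = theta - eta * finite_difference_grad(toy_loss, theta)
print("final loss:", toy_loss(theta))  # settles in a (possibly local) minimum
\end{verbatim}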
\section{Results}
\label{sec:results}
In this section, we present our novel method for enhancing the performance of VQAs with classical MCMCs, a technique that we dub MCMC-VQA. We start by briefly reviewing traditional MCMC, focusing on the Metropolis-Hastings algorithm. Then, we introduce MCMC-VQA, derive its behavior, and verify our findings with numerical simulations.
\subsection{MCMC-VQA Method}
\label{subsec:MCMC-VQ}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{trajectories.eps}
\caption{Example trajectories with inverse thermodynamic temperature $\beta=0.8$ (left) and $\beta=0.2$ (right). Four-hundred MCMC-VQA epochs (Markovian epochs) are followed by a closing sequence of VQE epochs (beginning at red dashed line), which is initialized with the best parameters $\hat{\theta}_\text{min}$ found during the Markov process. At lower temperature ($\beta=0.8$), trajectories become trapped in local minima and reaching ergodicity is a lengthy process. Conversely, the high-temperature ($\beta=0.2$) trajectories rapidly reach burn-in, generating $\hat{\theta}_\text{min}$ that lead to near perfect convergence during the VQE closing sequence. See Sec.\ \ref{sec:methods} for simulation details.}
\label{fig:2}
\end{figure*}
MCMC algorithms, such as Metropolis-Hastings, combine the randomized sampling of Monte-Carlo methods with the Markovian dynamics of a Markov chain in order to randomly sample from a distribution that is difficult to characterize deterministically \cite{Metropolis1953}. MCMC is particularly useful for approximations in high-dimensional spaces, where the so-called ``curse of dimensionality'' can make techniques such as random sampling prohibitively slow \cite{Geyer1992}. The core merit of MCMC techniques is their ergodicity, which guarantees that all states of the distribution are eventually sampled in a statistically representative way, regardless of which initial point is chosen. This representative sample is known as the unique stationary distribution $\pi$. In particular, any Markov chain that is both irreducible (each state has a non-zero probability of transitioning to any other state) and aperiodic (not partitioned into sets that undergo periodic transitions) will provably converge to its unique stationary distribution $\pi$, from which it samples ergodically \cite{Brooks1998}. The mathematical properties of ergodic Markov chains are well-studied, including analytic bounds for solution quality and mixing time (number of epochs) \cite{Montenegro2006, McNew2011}.
In order to obtain $\pi$ for a distribution of interest, Metropolis-Hastings specifies the transition kernel $P(x'|x)$, which is the probability that state $x$ transitions to state $x'$. Typically, the Markov process is defined such that transitions satisfy the detailed balance condition:
\begin{equation}
P(x) P(x'|x) = P(x') P(x|x').
\label{eq.detailed_balance}
\end{equation}
\noindent When Eq.\ \ref{eq.detailed_balance} holds, the chain is said to be reversible and is guaranteed to converge to a stationary distribution. $P(x'|x)$ can be factored into two quantities
\begin{equation}
P(x'|x) = G(x'|x) A(x'|x),
\end{equation}
\noindent where $G(x'|x)$ is the proposal distribution, or the conditional probability of proposing state $x'$ given state $x$, and $A(x'|x)$ is the acceptance distribution, or the probability of accepting the new state $x'$ given state $x$. To satisfy Eq.\ \ref{eq.detailed_balance}, the acceptance distribution is defined as
\begin{equation}
A(x'|x) = \min \left(1, \frac{P(x')G(x|x')}{P(x)G(x'|x)} \right).
\end{equation}
\noindent Note that as only the ratio $P(x') / P(x)$ is considered, the probability distribution need not be normalized. To determine whether the candidate state $x'$ or the current state $x_t$ should be used as the future state $x_{t+1}$, a sample $u$ is drawn from the uniform distribution $U(0,1)$. If $A(x'|x_t) \geq u$, then $x_{t+1} = x'$ and we say that the candidate state $x'$ is accepted. Otherwise, $x_{t+1}=x_t$ and we say that $x'$ is rejected.
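For reference, a minimal classical Metropolis-Hastings sketch with a symmetric Gaussian proposal (for which the $G$ factors cancel and only the ratio $P(x')/P(x)$ matters) is:
\begin{verbatim}
# Generic Metropolis-Hastings with a symmetric proposal (illustrative sketch).
import numpy as np

def metropolis_hastings(log_p, x0, n_steps=10000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_prop = x + step * rng.standard_normal(size=np.shape(x))
        log_a = log_p(x_prop) - log_p(x)        # log of P(x')/P(x)
        if np.log(rng.uniform()) < min(0.0, log_a):
            x = x_prop                          # accept candidate
        samples.append(x)                       # otherwise keep current state
    return np.array(samples)

# Unnormalized log-density of a standard normal target:
chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), x0=np.zeros(2))
print(chain[5000:].mean(axis=0))                # close to the target mean (0, 0)
\end{verbatim}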
We now present the MCMC-VQA method. Fig.\ \ref{fig:1} contains a diagram of the algorithm (blue). In particular, we focus on an ergodic Metropolis-Hastings algorithm, which is guaranteed to sample states near global minima. We outline the algorithm both idealistically and experimentally, prove its ergodicity and convergence, and verify these findings with numerical simulations.
As we seek the lowest energy eigenstate when solving MaxCut via VQE, we define $P(\hat{\theta})$ as the Boltzmann distribution
\begin{equation}
P(\hat{\theta}_a) = \exp \left( -\beta \Lambda_a \right) / Z, \hspace{0.8cm} Z = \sum_i \exp \left( -\beta \Lambda_i \right),
\label{eq:P}
\end{equation}
\noindent such that a state's probability increases exponentially with decreasing loss function.
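In practice, since only ratios of Eq.\ \ref{eq:P} enter the acceptance rule (cf.\ the remark above), one can work directly with the log-weights $-\beta \Lambda$; a minimal sketch with hypothetical loss values is:
\begin{verbatim}
# The normalization Z cancels in acceptance ratios; log-weights suffice.
import numpy as np

beta = 0.2
loss_current, loss_candidate = -3.1, -3.4              # hypothetical Lambda values
log_ratio = -beta * (loss_candidate - loss_current)    # log[P(theta')/P(theta)]
print(np.exp(log_ratio))    # > 1, so the lower-loss candidate is favored
\end{verbatim}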
To calculate the proposal distribution $G(\hat{\theta}'|\hat{\theta}_t)$, we must consider the sampling statistics of VQAs. Due to quantum uncertainty, a measurement $m_i^r(\hat{\theta}_t)$ of operators $\omega_i \sigma_{ia} \sigma_{ib}$ from Eq.\ \ref{eq:H} is
a sample from a distribution with mean $\mu^i_t$ and variance
\begin{equation}
(\Delta^i_t)^2 = \omega_i^2 [\langle (\sigma_{ia} \sigma_{ib})^2 \rangle_t - \langle \sigma_{ia} \sigma_{ib} \rangle_t^2 ] = \omega_i^2 [1 - (\mu^i_t)^2].
\end{equation}
\noindent The Central Limit Theorem asserts that, assuming at least $M \gtrsim 30$ independent and identically distributed measurements $m_i^r(\hat{\theta}_t)$, an estimate of the loss function $\Lambda_t$ is the statistic $l_t \sim \mathcal{N}\left(\Lambda_t, \hspace{0.05cm} (\Delta^\Lambda_t)^2 \right)$, where $(\Delta^\Lambda_t)^2 = \sum_i (\Delta^i_t)^2 / M$ \cite{kim2015t, Kwak2017}. Similarly, $\forall \theta_t^k \in \hat{\theta}_t$ and assuming small parameter shifts $\epsilon$, the gradient $\nabla_k \Lambda_t = \left( \Lambda(\hat{\theta}_t + \epsilon \hat{k}) - \Lambda(\hat{\theta}_t - \epsilon \hat{k}) \right) / 2 \epsilon $ is the statistic
$d_k l_t \sim \mathcal{N}\left( \nabla_k \Lambda_t , \hspace{0.2cm} [ \Delta_{\Lambda}^2(\hat{\theta}_t + \epsilon \hat{k}) + \Delta_{\Lambda}^2(\hat{\theta}_t - \epsilon \hat{k}) ] / 4 \epsilon^2 \right)$. The variance of this distribution can be simplified by noting that to first order in $\epsilon$, the parameter shifted Pauli operators are $\sigma_{ia}^{\pm k} = \sigma_{ia}(\hat{\theta}\pm \epsilon \hat{k}) = \sigma_{ia} \pm \iota_{iak}$, where $\sigma_{ia} = \sigma_{ia}(\hat{\theta})$ and $\iota_{iak} = (\partial \sigma_{ia} / \partial \theta^k) \epsilon$. We can then simplify the sum $\Delta_i(\hat{\theta}_t + \epsilon \hat{k})^2 + \Delta_i(\hat{\theta}_t - \epsilon \hat{k})^2 = 2 \Delta_i(\hat{\theta}_t)^2$ by noting that
\begin{subequations}
\begin{align}
& \Delta_i(\hat{\theta}_t \pm \epsilon \hat{k})^2 = \langle (\omega_i \sigma_{ia}^{\pm k} \sigma_{ib}^{\pm k})^2 \rangle - \langle \omega_i \sigma_{ia}^{\pm k} \sigma_{ib}^{\pm k} \rangle^2, \\
& \langle (\sigma_{ia}^{+ k} \sigma_{ib}^{+ k})^2 \rangle + \langle (\sigma_{ia}^{- k} \sigma_{ib}^{- k})^2 \rangle = 2 + \mathcal{O}(\iota^2), \label{eq1} \\
& \langle \sigma_{ia}^{+k} \sigma_{ib}^{+k} \rangle^2 + \langle \sigma_{ia}^{-k} \sigma_{ib}^{-k} \rangle^2 = 2 \langle \sigma_{ia} \sigma_{ib} \rangle^2 + \mathcal{O}(\iota^2). \label{eq2}
\end{align}
\end{subequations}
\noindent Now, up to first order in $\iota$, we can derive the gradient's distribution
\begin{equation}
d_k l_t \sim \mathcal{N}\left( \nabla_k \Lambda_t , \hspace{0.1cm} \Delta_{\Lambda}^2(\hat{\theta}_t) / 2 \epsilon^2 \right).
\end{equation}
Standard gradient descent would propose the candidate state $\hat{\theta}' = \hat{\theta}_t - \eta \nabla \Lambda_t$; however, MCMC-VQA adds a normally distributed random noise term $\Theta_r \sim \mathcal{N}(0,1)$ with scale parameter $\xi$ in order to expand the support of the proposal distribution $G(\hat{\theta}'|\hat{\theta}_t)$. This specifies
\begin{widetext}
\begin{equation}
G(\hat{\theta}'|\hat{\theta}_t) = \prod_k G(\hat{\theta}'|\hat{\theta}_t)_k, \hspace{0.4cm} G(\hat{\theta}'|\hat{\theta}_t)_k = \text{pdf}\left[\mathcal{N} \left(\eta \nabla_k \Lambda (\hat{\theta}_t), \hspace{0.2cm} \xi^2 + \eta^2 \frac{(\Delta^\Lambda_t)^2}{2 \epsilon^2} \right) \right] \left( \hat{\theta}_t - \hat{\theta}' \right),
\label{eq:proposal_distribution}
\end{equation}
\end{widetext}
\noindent where the notation $\text{pdf}\left[\mathcal{N}\left(\mu, \sigma^2 \right) \right] (x)$ denotes the probability density function at point $x$ of a normal distribution with mean $\mu$ and variance $\sigma^2$. It follows that the acceptance distribution is given by
\begin{equation}
A(\hat{\theta}'|\hat{\theta}_t) = \min \left(1, \frac{P(\hat{\theta}')G(\hat{\theta}_t|\hat{\theta}')}{P(\hat{\theta}_t)G(\hat{\theta}'|\hat{\theta}_t)} \right).
\end{equation}
\noindent We note that $G(\hat{\theta}_t|\hat{\theta}')$ is obtained by simply exchanging $\hat{\theta}_t$ and $\hat{\theta}'$ in Eq.\ \ref{eq:proposal_distribution}. A random uniform sample $u \sim U(0,1)$ is then drawn for comparison, such that $\hat{\theta}_{t+1} = \hat{\theta}'$ if $A(\hat{\theta}'|\hat{\theta}_t) > u$ and $\hat{\theta}_{t+1} = \hat{\theta}_t$ otherwise.
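A compact sketch of a single Markovian epoch is given below (illustrative only: the loss and gradient callables stand in for hardware estimates, the proposal variance is lumped into $\sigma ^2=\xi ^2+\eta ^2 (\Delta^\Lambda_t)^2/2\epsilon ^2$ in the spirit of Eq.\ \ref{eq:proposal_distribution}, and the toy loss merely exercises the code):
\begin{verbatim}
# One MCMC-VQA Markovian epoch (illustrative sketch, not the full pipeline).
import numpy as np

def gauss_logpdf(x, mu, sigma2):
    return float(np.sum(-0.5 * ((x - mu)**2 / sigma2 + np.log(2 * np.pi * sigma2))))

def mcmc_vqa_step(theta, loss, grad, rng, eta=0.1, xi=0.3, beta=0.2, var_grad=0.0):
    sigma2 = xi**2 + eta**2 * var_grad
    theta_prop = theta - eta * grad(theta) + xi * rng.standard_normal(theta.shape)
    # G(theta'|theta): Gaussian with mean eta*grad(theta), evaluated at theta-theta'
    log_g_fwd = gauss_logpdf(theta - theta_prop, eta * grad(theta), sigma2)
    log_g_rev = gauss_logpdf(theta_prop - theta, eta * grad(theta_prop), sigma2)
    log_a = -beta * (loss(theta_prop) - loss(theta)) + log_g_rev - log_g_fwd
    return theta_prop if np.log(rng.uniform()) < min(0.0, log_a) else theta

loss = lambda th: float(np.sum(np.cos(th)))   # toy stand-in for Lambda(theta)
grad = lambda th: -np.sin(th)
rng = np.random.default_rng(1)
theta = np.zeros(4)
for _ in range(200):
    theta = mcmc_vqa_step(theta, loss, grad, rng)
print("loss after 200 Markovian epochs:", loss(theta))
\end{verbatim}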
After $T_\text{MC}$ epochs of the above Markovian process, MCMC-VQA implements a short series of traditional VQA epochs for rapid convergence to the nearest minimum. In particular, these closing VQA epochs are initialized with $\hat{\theta}_\text{min}$, the parameter set of lowest eigenvalue $\Lambda_\text{min}$ found during the Metropolis-Hastings phase. In this manner, MCMC-VQA can be considered a ``warm starting'' procedure \cite{beaulieu2021max, egger2021warm, van2021}, but with ergodic guarantees.
Example MCMC-VQA trajectories are shown in Fig.\ \ref{fig:2} with inverse thermodynamic temperatures $\beta=0.8$ and $\beta=0.2$. The details of all simulations are given in Sec.\ \ref{sec:methods}. Our algorithm combines the gradient descent-based optimization of VQE with a Markovian process that escapes local minima. Such exploration is significantly greater at the higher-temperature $\beta=0.2$, where rather than settling into distinct loss function basins from which escape is relatively rare, the trajectories display the trademark ``burn-in'' behavior of ergodic Markov chains. By the time that the closing VQE epochs are applied, the ergodic $\beta=0.2$ MCMC-VQA chains have sampled states sufficiently near the global minimum and converge to the groundtruth nearly uniformly.
Fig.\ \ref{fig:3} (left) displays the average accuracy $1-\alpha$ (where $\alpha$ is the average error, blue), and standard deviation (gray) of MaxCut solutions with MCMC-VQA as a function of $\beta$. Dashed lines represent the performance of traditional VQE on the same set of graphs and circuit ansatz. We note that all simulated $\beta$ values outperform traditional VQE. Until $\beta \sim 0.2$, higher temperature MCMC-VQA chains have higher accuracy and better convergence, as their more permissive temperature parameter biases the acceptance distribution towards accepting the candidate states. However, performance decreases at very high temperatures, for which the MCMC-VQA chains are no longer appreciably biased towards energy minimization and the algorithm becomes more like random sampling than intrepid gradient descent. Likewise, the optimal amount of parameter update noise $\xi$ is inversely proportional to $\beta$ (Fig.\ \ref{fig:3}, right), as higher temperatures permit more radical deviations from standard gradient descent.
\subsection{Implementation of MCMC-VQA on Quantum Hardware}
\label{subsec:experimental_considerations}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{betaVS.eps}
\caption{(Left, blue) Average MCMC-VQA accuracy ($1-\alpha$, for average error $\alpha$) vs inverse thermodynamic temperature $\beta$. Nearly perfect average accuracy is obtained for properly tuned hyperparameter $\beta$ (here, $\beta \approx 0.2$). At low temperature (large $\beta$), the algorithm mixes slowly, only partially approximating ergodicity in $T_\text{MC}=400$ Markovian epochs. This partial convergence results in lower accuracy, which approaches that of traditional VQE (blue dashed line) in the limit of large $\beta$. Conversely, for high temperature (small $\beta$), the algorithm is insufficiently biased towards low-energy solutions, which renders its gradient descent inefficient and reduces its accuracy. (Left, gray) The standard deviation of MCMC-VQE accuracy vs $\beta$. Higher standard deviation directly corresponds with lower accuracy. As discussed above, at high $\beta$, this is due to runs trapped in local minima (see Fig.\ \ref{fig:2}), while at low $\beta$, this stems from the lack of energy-preferred convergence. (Left) Optimal value of $\xi$ vs $\beta$, where $\xi$ is the gradient descent noise parameter ($\hat{\theta}' = \hat{\theta} - \eta \nabla \Lambda_t + \xi \Theta_r$) and each trajectory undergoes $T_\text{MC}=400$ Markovian epochs. As larger temperatures generate more permissive acceptance distributions $A(\hat{\theta}'|\hat{\theta})$, higher $\xi$
values lead to more efficient mixing in the low-$\beta$ limit. See Sec.\ \ref{sec:methods} for simulation details.}
\label{fig:3}
\end{figure*}
As discussed above, the loss function $\Lambda_t$ is not precisely determined on actual quantum hardware, but rather estimated as a statistic $l_t = \sum_i q^i_t$, where $q^i_t = \frac{1}{M} \sum_{r=1}^{M} m_i^r(\hat{\theta}_t)$. As a result, the variance of a single observable measurement $(\Delta^i_t)^2$ is estimated by $(\delta^i_t)^2 = \omega_i^2 [1 - (q^i_t)^2]$, while that of the total loss function $(\Delta^\Lambda_t)^2$ is estimated by $(\delta^\Lambda_t)^2 = \sum_i (\delta^i_t)^2 / M = \sum_i \omega_i^2 [1 - (q^i_t)^2] / M$, for $M$-measurements per observable. Alternatively, the variances could be directly estimated from the standard deviations of expectation value statistics. We then define $a(\hat{\theta}'|\hat{\theta}_t)$, the acceptance distribution on quantum hardware, as
\begin{subequations}
\begin{align}
& a(\hat{\theta}'|\hat{\theta}_t) = \min \left(1, \frac{p(\hat{\theta}')g(\hat{\theta}_t|\hat{\theta}')}{p(\hat{\theta}_t)g(\hat{\theta}'|\hat{\theta}_t)} \right), \\
& p(\hat{\theta}) \propto \exp(- \beta l_t), \\
& g(\hat{\theta}'|\hat{\theta}_t) = \prod_k g(\hat{\theta}'|\hat{\theta}_t)_k, \\
& g(\hat{\theta}'|\hat{\theta}_t)_k = \text{pdf}\left[\mathcal{N} \left(\eta d_k l_t, \hspace{0.2cm} \xi^2 + \eta^2 \frac{(\delta^\Lambda_t)^2}{2 \epsilon^2} \right) \right] \left( \hat{\theta}_t - \hat{\theta}' \right).
\end{align}
\end{subequations}
\noindent MCMC-VQA does not increase the quantum complexity of VQAs (number of operations carried out on quantum hardware), as the measurements to estimate $\Lambda(\hat{\theta})$ are carried out in the typical way. Moreover, the acceptance distribution and its components are computed classically with simple arithmetic.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Epochs_vs_Accuracy.eps}
\caption{Average accuracy vs Markovian epochs for three different $\beta$ values. Gray dots are the average MCMC-VQA accuracy $1-\alpha$, and blue curves are a least squares fit of this data to the analytical accuracy of an ergodic Markov chain $1 - \alpha_\text{MC}(\tau)$, with theoretical mixing time $\tau$ (see Eq.\ \ref{eq:mixing_time}). The analytical time-dependence of $\alpha_\text{MC}$ matches the observed scaling of $\alpha$, affirming that MCMC-VQA is an ergodic Markov chain, and thus guaranteeing convergence to the global minimum. Furthermore, the ratio of observed scale parameters between MCMC-VQA simulations with different $\beta$ values is consistent with the analytic dependence $\tau \propto \ln(1/\sqrt{\pi^*})$ (Eq.\ \ref{eq:mixing_time}) on the least likely state $\pi^* \propto \exp(-\beta \Lambda_\text{max})$ (Eq.\ \ref{eq:P}). This functional dependence on temperature further supports our claims of ergodically sampling from $P(\hat{\theta})$ and thus deterministically converging to the global minimum.}
\label{fig:4}
\end{figure*}
\subsection{Proof of Ergodicity}
\label{subsec:ergodicity}
If a Metropolis-Hastings algorithm is \textit{irreducible} and \textit{aperiodic}, then the resulting Markov chain is provably ergodic \cite{Brooks1998}. That is, it will explore all areas of the probability distribution, converging on average to the Markov process' unique stationary distribution, which includes the global minimum of the solution space. Moreover, as we have chosen to sample from the Boltzmann distribution of the loss function, we sample from states near optimal solutions with exponentially higher probability.
\subsubsection{Irreducibility}
The VQA Metropolis-Hastings Markov chain is irreducible if $\forall \hat{\theta}_a, \hat{\theta}_b, \hspace{0.2cm} \exists T, \{\hat{\theta}_1, \hat{\theta}_2, ....., \hat{\theta_T} \}$ such that
\begin{equation}
p(\hat{\theta}_1|\hat{\theta}_a) p(\hat{\theta}_b|\hat{\theta}_T) \prod_{i=1}^{T-1} p(\hat{\theta}_{i+1}|\hat{\theta}_i) > 0.
\label{eq:irreducible}
\end{equation}
\noindent That is, the Markov chain is irreducible if, for any two points in parameter space $\hat{\theta}_a
|
c|c|c|c|c|}
\hline
&
\multicolumn{2}{|c}{\textbf{seq\_hotel}} &
\multicolumn{2}{|c}{\textbf{seq\_eth}} &
\multicolumn{2}{|c}{\textbf{zara01}} &
\multicolumn{2}{|c|}{\textbf{zara02}} \\
\cline{2-9}
& ST & IS & ST & IS & ST & IS & ST & IS \\
\hline
LIN & 182 & 92 & 187 & 58 & 51 & 27 & 49 & 27 \\
\hline
Boids & 192 & 78 & 202 & 59 & 52 & 27 & 54 & 26 \\
\hline
Helbing & 221 & 73 & 232 & 48 & 54 & 26 & 55 & 25 \\
\hline
LTA & 238 & 70 & 249 & 42 & 60 & 24 & 62 & 25 \\
\hline
RVO & 241 & 71 & 258 & 37 & 61 & 22 & 65 & 23 \\
\hline
MeanShift & 98 & 171 & 112 & 139 & 32 & 41 & 33 & 39 \\
\hline
\rowcolor[HTML]{FFCCC9}MMM & 252 & 68 & 267 & 34 & 63 & 20 & 68 & 21 \\
\hline
\end{tabular}
}
\caption{We compare the percentage of successful tracks (ST) and ID switches (IS) of our mix motion model algorithm (MMM) with homogeneous motion models - LIN, Boids, Helbing, LTA, RVO and a baseline mean-shift tracker with standard datasets - seq\_hotel , seq\_eth , zara01 , zara02 ~\cite{pellegrini2010improving}.}
\label{tb:tablescore2}
\end{table*}
\begin{table*}[ht]
\centering
\scalebox{1.0}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
&
\multicolumn{8}{|c}{High Density} &
\multicolumn{6}{|c|}{Medium Density} \\
\hline
&
\multicolumn{2}{|c}{\textbf{NDLS-1}} &
\multicolumn{2}{|c}{\textbf{IITF-1}} &
\multicolumn{2}{|c}{\textbf{IITF-3}} &
\multicolumn{2}{|c}{\textbf{IITF-5}} &
\multicolumn{2}{|c}{\textbf{NPLC-1}} &
\multicolumn{2}{|c}{\textbf{NPLC-3}} &
\multicolumn{2}{|c|}{\textbf{IITF-2}} \\
\cline{2-15}
& ST & FPS & ST & FPS & ST & FPS & ST & FPS & ST & FPS & ST & FPS & ST & FPS \\
\hline
MMM-C & 63 & 11 & 74 & 12 & 57 & 11 & 67 & 12 & 78 & 14 & 71 & 13 & 46 & 13 \\
\hline
\rowcolor[HTML]{FFCCC9} MMM & 63 & 27 & 73 & 28 & 57 & 26 & 67 & 26 & 77 & 28 & 71 & 26 & 44 & 26 \\
\hline
\end{tabular}
}
\vspace*{0.5 cm}
\scalebox{1.0}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
&
\multicolumn{2}{|c}{Medium Density} &
\multicolumn{12}{|c|}{Low Density} \\
\hline
&
\multicolumn{2}{|c}{\textbf{IITF-4}} &
\multicolumn{2}{|c}{\textbf{NDLS-2}} &
\multicolumn{2}{|c|}{\textbf{NPLC-2}} &
\multicolumn{2}{|c}{\textbf{seq\_hotel}} &
\multicolumn{2}{|c}{\textbf{seq\_eth}} &
\multicolumn{2}{|c}{\textbf{zara01}} &
\multicolumn{2}{|c|}{\textbf{zara02}} \\
\cline{2-15}
& ST & FPS & ST & FPS & ST & FPS & ST & FPS & ST & FPS & ST & FPS & ST & FPS \\
\hline
MMM-C & 63 & 11 & 80 & 12 & 78 & 11 & 254 & 11 & 267 & 16 & 63 & 14 & 69 & 15 \\
\hline
\rowcolor[HTML]{FFCCC9} MMM & 63 & 27 & 79 & 28 & 78 & 26 & 252 & 28 & 267 & 29 & 63 & 27 & 68 & 28 \\
\hline
\end{tabular}
}
\caption{
We compare the percentage of successful tracks (ST) and average tracking frames per second (FPS) of our mixture of motion models algorithm adaptive particle filtering (MMM) and with constant particle numbers (MMM-C).
}
\label{tb:tablefps}
\end{table*}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{cost.png}
\caption{
Computation cost comparison between the particle filter system and the optimization framework. The x-axis represents number of people tracked and the y-axis represent the computation time (in milliseconds) }
\label{fig:compcost}
\end{figure}
\section{Implementation and Results}\label{section:results}
In this section we present our implementation details and highlight the performance on 10 different crowd video datasets.
\subsection{Evaluation}
We use the \textbf{CLEAR MOT}~\cite{keni2008evaluating} evaluation metrics to analyze the performance analytically. We use the \textbf{MOTP} and the \textbf{MOTA} metrics. \textbf{MOTP} evaluates the alignment of tracks with the ground truth while \textbf{MOTA} produces a score based on the amount of false positives, missed detections, and identity switches. These metrics have become standard for evaluation of detection and tracking algorithms in the computer vision community, and we refer the interested reader to ~\cite{keni2008evaluating} for more a detailed explanation.
We analyze these metric across the density groups and the different motion models (Table~\ref{CLEAR}).
\subsection{Tracking Results}
We highlight the performance of our algorithm based on a mixture of motion models on different benchmarks, comparing the performance of our algorithm with single, homogeneous motion model methods: constant velocity model (LIN), LTA~\cite{pellegrini2009you}, Social Forces~\cite{yamaguchi2011you}, Boids~\cite{Reynolds1999} and RVO~\cite{van2011reciprocal}.
LIN models the velocities of pedestrians as constant, and is the underlying motion model frequently used in the standard particle filter.
The other four models compute the pedestrian states based on optimizing functions, which model collision avoidance, destinations of pedestrians, and the desired speed.
In our implementation, we replace the state transition process of a standard particle filtering algorithm with different motion models.
We evaluate on some challenging datasets~\cite{bera2014} which are available publicly and also some standard datasets from the pedestrian tracking community. These videos were recorded at 24-30 fps. We manually annotated these videos and corrected the perspective effect by camera calibration.
We also compare our performance compared to a baseline mean-shift tracker (Table~\ref{tb:tablescore}). We also compare the computational overhead of our optimization framework compared the particle filter system in terms of computation time. (Refer Figure~\ref{fig:compcost})
For our evaluation, we have divided our system into two phases:
\emph{Initialization:} Here we initialize the motion model estimation and parameter-optimization system with hand-drawn or ground truth data for a few initial frames, which is computed offline. For our experiments, we've used the first 10 frames. We compute a score that is used to choose the best-fit model from our motion model set and the associated parameters.
\emph{Prediction:} After learning from the initial data, we use the predicted set of parameters to model the state transition part of the
standard Bayesian inference framework. We iteratively and incrementally recompute the score and update the motion model. This computation is performed in realtime.
We show the number of correctly tracked pedestrians and the number of ID switches. A track is counted as ``successful'' when the estimated mean error between the tracking result and the ground-truth value is less than 0.8 meter in groundspace. The average human stride length is about 0.8 meter and we consider the tracking to be incorrect if the mean error is more than this value.
Our method provides 9-18\% higher accuracy over LIN for medium density crowds (Table~\ref{tb:tablescore}).
Moreover, we compare the performance of our adaptive particle tracking algorithm with a particle filter that uses constant number of particles (Table~\ref{tb:tablefps}).
\begin{figure*}[!htb]
\centering
\includegraphics[width=1\textwidth]{bar_track.jpg}
\label{fig:9}
\caption{
The results of our approach on some challenging datasets.
From top to bottom, left to right: IITF-1, IITF-2, NPLC-1, IITF-3, NDLS-2, NDLS-1, NLPC-2, IITF-4, IITF-5.
We are able to achieve a 4-12\% increase in accuracy over homogeneous motion models at interactive framerates.
}
\vspace*{-0.13in}
\end{figure*}
\section{Introduction}\label{section:introduction}
The tracking of human crowd motion is becoming increasingly ubiquitous.
It is a well-studied problem that has many applications in surveillance, behavior modeling, activity recognition, disaster prevention, and the analysis of crowd phenomena.
Despite many recent advances, it is still difficult to accurately track pedestrians in real-world
scenarios, especially as the crowd density increases.
The problem of tracking pedestrians and objects has been studied in computer vision and image processing for three decades.
However, tracking pedestrians in a crowded scene is regarded as a hard problem due to the following reasons: intra-pedestrian occlusion (one pedestrian blocking another), changes in lighting and pedestrian appearance, and the difficulty of modeling human behavior or the intent of each pedestrian.
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{Page1.jpg}
\caption{{
Our mixture motion model can accurately compute the trajectories in real time.
We highlight different motion models (Boids, Helbing's Social Forces, or RVO) used for the same pedestrian (marked in red) over different frames. We adaptively choose the best-fit model for every pedestrian in the scene.
This increases the accuracy by 4\%-18\% of our adaptive tracking algorithm.
}}
\label{fig:1}
\end{figure*}
One approach that improves the accuracy of tracking algorithms is the use of realistic crowd motion models.
These motion models simulate the current behavior of each pedestrian in the crowd in order to predict the pedestrians' possible future positions.
There has been considerable work on developing crowd motion models for pedestrians in the areas of computer graphics, robotics, computer animation, and pedestrian dynamics. Many approaches have been investigated that suggest different \textit{principles} to model crowds.Most of these models use kind of parameters to describe the shape and trajectory of each agent. While many approaches have been investigate to model the motion of the agents, there is relatively less effort to estimating model parameters based on available data, evaluating and comparing the effects of these parameters, and quantifying the improvements that can result from parameter optimization.
Prior realtime or online crowd-tracking algorithms use a single, homogeneous motion model.
Every motion model is unique and generally relies upon one or more assumptions: these include the assumption of highly coherent motion in terms of velocity or acceleration, or assumptions about how pedestrian trajectories will change in response to other agents or obstacles.
The simpler motion models assume that agents will ignore any interactions with other pedestrians, instead assuming that they will follow ``constant-speed'' or ``constant-acceleration'' paths to their immediate destinations.
However, the accuracy of this assumption decreases as crowd density in the environment increases (e.g. to 2-4 pedestrians per square meter).
More sophisticated pedestrian motion models take into account interactions between pedestrians, formulated either in terms of attraction or repulsion forces or collision-avoidance constraints.
In real-world scenarios, the trajectory of each pedestrian is governed by its intermediate goal location, intrinsic behaviors, as well as local interactions with other pedestrians and obstacles in the scene.
In a dense crowd setting, the behavior of each pedestrian changes in response to the environment, the overall crowd density and flow, and the behavior of other pedestrians.
It may not be possible, therefore, to model the overall behavior of each pedestrian with a single, homogeneous motion model.
Furthermore, each of these homogeneous models is described using some parameters that may correspond to the size, speed, anticipation period, or local navigation constraints of each pedestrian.
The accuracy of each motion model is governed by the choice of these parameters.
As the behavior of each pedestrian responds to changes in a dynamic environment, these model parameters should be recomputed or updated to improve the resulting motion model's accuracy.
Overall, we need efficient techniques that can take into account heterogeneous behaviors based on constantly changing models and underlying parameters.
\\\\
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{comp.png}
\caption{{
(a) Our tracking on the \textit{student003} dataset~\cite{student003}.
(b) Comparing ground truth (in red) with with prediction by Social Forces by Helbing et al.~\cite{helbing1995social}(in blue).
(c) Comparing ground truth (in red) with with prediction by our motion model mixture (in blue).
The distance between the red and blue points denote the error in prediction. We can see that the error with our approach is considerably lower.
}}
\label{fig:1}
\end{figure*}
{\bf Main results:}
We present a method that uses particle filters to perform realtime pedestrian tracking in moderately crowded scenes.
Our formulation computes the best-fit or mixture motion model for each pedestrian based on prior tracked data.
In order to characterize the heterogeneous, dynamic behavior of each agent, we use an optimization based scheme to perform the following steps:
\begin{itemize}
\item Choose, every few frames, the new motion model that best describes the local behavior of each pedestrian based on tracked data.
\item Compute the optimal set of parameters for that motion model that best fit this tracked data.
\item Computing the adaptive number of particles for each pedestrian based on a combination of metrics for optimizing performance.
\end{itemize}
We compute the locally-optimal motion model for the pedestrians in realtime and use that, along with a particle-filter based tracker, to compute their trajectories.
In our approach, we consider a variety of possible motion models to characterize pedestrian motion during each frame: Boids~\cite{Reynolds1987}, Social Forces~\cite{helbing1995social} or reciprocal velocity obstacles~\cite{van2011reciprocal} as possible models to characterize the motion of a pedestrian during each frame. For videos with high fps(over ~50 fps), a constant velocity model may be sufficient to model the motion prior.
Furthermore, we use our heterogeneous motion model to adaptively choose the number of particles for each pedestrian in our particle-filter.
This adaptive formulation can increase the runtime speed based of our system based on a reliability measure computed using mixture motion model.
We evaluate our method in comparison with homogeneous motion models on high definition crowd datasets that include both indoor and outdoor scenes recorded at different locations with 30 - 150 pedestrians and also standard datasets used in the pedestrian tracking community.
In practice, our adaptive particle-filter tracker with adaptive motion model is about 4-18\% more accurate than prior interactive tracking algorithms that use homogeneous or simple motion models.
Moreover, as the crowd density increases, we observe increased improvements in the level of accuracy.
Moreover, the adaptive particle selection can increase the runtime frame rate by 2-2.5 times as compared to algorithms that use a constant high number of particles.
Overall, our algorithm can track tens of pedestrians at realtime rates (i.e. more than 25fps) on a multi-core CPU.
The rest of the paper is organized as follows.
In Section~\ref{section:related_work}, we give an overview of prior work related to online pedestrian tracking.
Section~\ref{section:overview} gives an overview of our approach, and Section~\ref{section:mixture_motion_model} describes our multi-agent heterogeneous motion model.
Section~\ref{section:results} evaluates the different components of our algorithm and compares it with other online tracking methods.
\section{Mixture Motion Model}\label{section:mixture_motion_model}
In this section, we introduce the notion of a parameterized motion model.
We then describe the different parameterized motion models that form the basis for the mixture motion model.
Finally, we describe the mixture motion model itself.
\subsection{Parameterized Motion Model}
A motion model is defined as an algorithm $f$ which, from a collection of agent states $\mathbf{X}_t$, derives new states $\mathbf{X}_{t+1}$ for these agents, representing their motion over a timestep towards the agents' immediate goals $\mathbf{G}$:
\begin{align}
\mathbf{X}_{t+1} = f(\mathbf{X}_t,\mathbf{G}).
\end{align}
Motion algorithms usually have several parameters that can be
tuned in order to change the agents' behaviors.
We assume that each parameter can have a different value for each pedestrian.
By changing the value of these parameters, we get some variation in the resulting trajectory prediction algorithm.
We use $\mathbf{P}$ to denote all the parameters of all the pedestrians.
Typically, for a crowd of 50 pedestrians, the dimension of $\mathbf{P}$ could be anywhere in the range 150-300 depending on the motion model.
In our formulation, we denote the resulting parameterized motion model as:
\begin{align}
\mathbf{X}_{t+1} = f(\mathbf{X}_t,\mathbf{G},\mathbf{P}).
\label{eqn:crowdSim}
\end{align}
\subsection{Motion Models}
Our mixture motion model can include any generic motion model that conforms to Equation~(\ref{eqn:crowdSim}).
Here we describe the three component motion models that currently make up the mixture motion model in our current implementation.
\subsubsection{Reciprocal Velocity Obstacles}
RVO is a local collision-avoidance and navigation algorithm.
Given each agent's state at a certain timestep, it computes a collision-free state for the next timestep\cite{van2011reciprocal}.
Each agent is represented as a 2D circle in the plane, and the parameters (used for optimization) for each agent consist of the representative circle's radius, maximum speed, neighbor distance, and time horizon (only future collisions within this time horizon are considered for local interactions).
Let $V_{pref}$ be the preferred velocity for a pedestrian that is based on the immediate goal location. The RVO formulation takes into account the position and velocity of each neighboring pedestrian to compute the new velocity. The velocity of the neighbors is used to formulate the ORCA constraints for local collision avoidance~\cite{van2011reciprocal}. The computation of the new velocity is expressed as an
optimization problem for each pedestrian.
If an agent's preferred velocity is forbidden by the ORCA constraints, that agent chooses the closest velocity that lies in the feasible region:
\begin{equation} \label{eqn:ORCA}
V_{RVO} = \underset{V \notin ORCA}{\arg\max} \|V - V_{pref}\|.
\end{equation}
More details and mathematical formulations of the ORCA constraints are given in~\cite{van2011reciprocal}.
As per Equation~(\ref{eqn:crowdSim}), $f$ returns the states obtained with the admissible velocity that is closest to the preferred velocity.
\subsubsection{The Boids Model}
Initially developed to simulate the flocking behavior of birds, this model has later been extended to pedestrian motion in a crowd.
Broadly, three rules are enforced on Boids agents:
\begin{itemize}
\item \textbf{Separation}: steer to avoid crowding local agents
\item \textbf{Alignment}: steer towards the average heading of local agents
\item \textbf{Cohesion}: steer to move toward the average position (center of mass) of local
|
agents
\end{itemize}
Thus, as per Equation~(\ref{eqn:crowdSim}), $f$ is a function of agents' positions at some specified future time (current time plus constant).
When the predicted distance between the pedestrians gets too low, a separation force is computed and added to the attraction force that is pulling the agents toward their goal.
The parameters are radius (size of 2D circle agents) and comfort speed (i.e., speed when no interactions occur).
\subsubsection{Social Forces Model}
The social forces model is defined by the combination of three different forces: the personal motivation force, social forces, and physical constraints:
\begin{itemize}
\item \textbf{Personal Motivation force} ($F^{M}$): This is the incentive to move at a certain preferred velocity in a certain direction.
\item \textbf{Social forces} ($F^{S}$): These are the repulsive forces from other agents and obstacles.
\item \textbf{Physical Constraints} ($F^{P}$): These are the hard constraints other than the environment and other agents.
\end{itemize}
The net force $F^{C} = F^{M} + F^{S} + F^{P}$ then defines
an agent's chosen new velocity. For a detailed explanation of the method, refer to~\cite{helbing1995social}.
As per Equation~(\ref{eqn:crowdSim}), $f$ is a function of the agents' positions from which all computed forces are derived. The parameters are radius and comfort speed.
\subsection{Mixture of motion models}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{optimization.png}
\caption{Our parameter optimization algorithm used in Figure~\ref{fig:unified}. Based on the error metric, we compute optimal parameters for each motion model. The best motion model (from RVO, Social Forces, Boids or LIN) is used for trajectory extraction and predicting the next state.}
\label{fig:algo}
\end{figure*}
\vspace{-0.25cm}
We now present the algorithm to compute the mixture motion model, which essentially corresponds to computing the ``best'' motion model at any given timestep.
In this case, the ``best'' motion model is the one that most accurately matches agents' immediately past states, as per a given error metric.
This ``best'' motion model is determined by an optimization framework, which automatically finds the parameters $\mathbf{P}$ that minimize the error metric. Wolinski et al. ~\cite{Wolinski2014} designed an optimization framework for evaluating crowd motion models but it computes the optimal parameters in an offline manner for a single homogenous simulation model. Our framework is online and iteratively computes the best heterogeneous motion every few frames and chooses the most optimized crowd parameters at a given time. The computation cost is considerably lower and hence useable for real-time tracking.
\subsubsection{Formalization}
Formally, at any timestep $t$, we define the agents' (k+1)-states (as computed by the tracker) $\mathbf{S}_{t-k:t}$:
\begin{align}
\mathbf{S}_{t-k:t} = \bigcup_{i=t-k}^{t} \mathbf{S}_i.
\end{align}
Similarly, a motion model's corresponding computed agents' states $f(\mathbf{S}_{t-k:t}, \mathbf{P})$ can be defined as:
\begin{align}
f(\mathbf{S}_{t-k:t}, \mathbf{P}) = \bigcup_{i=t-k}^t f(\mathbf{X}_i, \mathbf{G}, \mathbf{P}),
\end{align}
initialized with $\mathbf{X}_{t-k} = \mathbf{S}_{t-k}$ and $\mathbf{G} = \mathbf{S}_t$.
At timestep $t$, considering the agents' k-states $\mathbf{S}_{t-k:t}$, computed states $f(\mathbf{S}_{t-k:t}, \mathbf{P})$ and a user-defined error metric $error()$, our algorithm computes:
\begin{align}
\mathbf{P}^{opt, f}_t
& = \argmin_{\mathbf{P}} error(f(\mathbf{S}_{t-k:t},\mathbf{P}),\mathbf{S}_{t-k:t}),
\label{eqn:optimize}
\end{align}
where $\mathbf{P}^{opt, f}_t$ is the parameter set which, at timestep $t$, leads to the closest match between the states computed by the motion algorithm $f$ and the agents' k-states.
For several motion algorithms $\{ f1, f2, ... \}$, we can then compute the algorithm which best matches
the agents' k-states $\mathbf{S}_{t-k:t}$ at timestep $t$:
\begin{align}
m_t = f^{opt}_t = \argmin_{f} error(f(\mathbf{S}_{t-k:t},\mathbf{P}^{opt, f}_t),\mathbf{S}_{t-k:t}),
\label{eqn:optmodel}
\end{align}
and consequently, the best (as per the error in the $error()$ metric itself) prediction for the agents' next state
obtainable from the motion algorithms for timestep $t+1$ is:
\begin{align}
\mathbf{X}_{t+1} = m_t(\mathbf{S}_t).
\label{eqn:bestPred}
\end{align}
\subsubsection{Optimization Algorithm and Error Metric}
Optimizing crowd parameters is a unique and challenging problem. Because most simulation methods have several parameters to tune for each agent, even moderately sized scenarios with a few dozen agents can become a hundred-dimensional optimization problem.
In total we tested three global optimization approaches:\textit{ Greedy algorithm}, \textit{Simulated Annealing}, and \textit{Genetic Algorithm}.
For the greedy approach we start by choosing random parameters for every agent. The chosen data similarity metric is then evaluated to establish a baseline measure of how well the simulation matches the data. After several iterations, where in each iteration starts with the best set of simulation parameter seen so far. This new set of parameters is evaluated, whichever set of parameters has the lowest error metric over all the iterations is chosen as the optimal parameters for the agents.
The main limitation with a greedy approach is that it will get stuck in local minimum in search space and also the final outcome depends on the starting point. Simulated Annealing addresses this problem. Analogous with thermodynamics, simulated annealing incorporates a `\textit{temperature}' parameter into the
minimization procedure. At high temperatures, we explore the parameter space whereas at lower temperature, we restrict the exploration.
\begin{algorithm}
\DontPrintSemicolon
$k \leftarrow 0$\tcp*{initialize loop counter}
\While{$k<K$}{
$T \leftarrow \operatorname{temperature}(k, K)$\tcp*{compute temperature}
$s_{new} \leftarrow \operatorname{neighborState}(s)$\tcp*{try new neighbor}
$e_{new} \leftarrow \operatorname{cost}(s)$\tcp*{compute cost}
\If(\tcp*[f]{is new state better?}){$\operatorname{move}(e, e_{new}, T)$}{
$s \leftarrow s_{new};\ e \leftarrow e_{new}$\tcp*{yes, change state}
}
\If(\tcp*[f]{did we find a new minimum?}){$e < e_{best}$}{
$s_{best} \leftarrow s;\ e_{best} \leftarrow e$\tcp*{save new optimum}
$k \leftarrow 0$\tcp*{reset loop counter}
}
$k \leftarrow k+1$\tcp*{increase loop counter}
}
\caption{Simulated annealing.}
\label{algo:simanneal}
\end{algorithm}
Algorithm \ref{algo:simanneal} gives the pseudocode for the process where:
\begin{description}
\item [neighborState():] pick a new random value for a random parameter according to the parameter's base distribution
\item [move():] is $True$ iff $e_{new}<e_{old}$, $exp({\frac{e_{old}-e_{new}}{T}})$.
\item [temperature():] is $\frac{K-k}{K}$, $k$ being the number of iterations with no improvement and $K$ the number of such iterations allowed.
\item [cost():] the cost as returned by the currently used metric.
\end{description}
We also use a Genetic algorithm~\cite{holland1992genetic}. The underlying optimization technique as algorithm offers the best compromise between optimization results and speed.
The efficiency component is important as our goal is realtime pedestrian tracking.
Genetic algorithms seek to overcome the problem of local minima in optimization.
This is accomplished by keeping a pool of parameter sets and, during each iteration of the optimization process, creating a new pool of potential solutions by combining and modifying these parameter sets.
\begin{algorithm}
\DontPrintSemicolon
$pop \leftarrow \operatorname{initialize}()$\tcp*{initialize population}
\While{$true$}{
$\operatorname{selection}(pop)$\tcp*{evaluate and select fittest}
\If(\tcp*[f]{should we terminate?}){$\operatorname{termination}()$}{$stop$\tcp*{yes, stop loop}}
$pop \leftarrow \operatorname{reproduction}(pop)$\tcp*{new generation}
}
\caption{Genetic algorithm.}
\label{algo:genetic}
\end{algorithm}
Algorithm \ref{algo:genetic} provides pseudocode for the method given the following functions:
\begin{itemize}
\item \textbf{initialize()}: parameters randomly initialized in accordance with the base distribution for each parameter.
\item \textbf{selection()}: individuals are sorted according to their score and divided into 3 groups: Best, Middle and Worst.
\item \textbf{termination()}: the algorithm is terminated after finding $K$ successive loop iterations without any new optimum.
\item \textbf{reproduction()}: based on which group it belongs to, a parameter set is attributed three probabilities $\alpha$, $\beta$ and $\gamma$. For each parameter of this individual, $\alpha$ decides if the value is changed or not, $\beta$ decides if the value is changed by crossover or mutation and, finally, $\gamma$ decides which type of mutation is done.
\item crossover: a crossover is done by copying a value from an individual belonging to the Best group.
\item mutation: a mutation is done by picking a new value at random based on either the base distribution or the current real distribution of an individual from the Best group (according to $\gamma$).
\end{itemize}
At each iteration, this algorithm evaluates and ranks all possible parameter sets (solutions) currently in the solution pool.
If there have been a certain number of successive iterations without any improvement, the process is terminated.
Otherwise, individual parameter values in each solution have a probability of being modified.
If so, this modification has a probability of being either a crossover or a mutation.
If it is a crossover, a value from the corresponding parameter from a better ranked solution is selected; if it is a mutation, a new value is sampled from a probability distribution.
This probability distribution can either be the one defined by the user (for instance, a preferred velocity could obey a normal law with mean $1.4 m.s^{-1}$ and standard deviation $0.3 m.s^{-1}$) or one that is computed on parameter values from better ranked solutions.
\\\\
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{graph1.png}
\caption{Comparing the score of the different optimization approaches. Each graph is a range of the scores (minimum and maximum) and the black dot is the mean score. We compute the score from the normalized error metric. A lower value indicates better optimization. MMM or the `Motion-Model Mixture' is the our approach.}
\label{fig:graph1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{graph2.png}
\caption{This graph shows the time taken for each computing the every set of optimal parameters corresponding to each motion model. MMM is our approach. Time computed is in miliseconds. Each graph is a range of the scores (minimum and maximum) and the black dot is the mean score. We compute the score from the normalized error metric. }
\label{fig:graph2}
\end{figure}
An error metric is also needed to compute the term in Equation~(\ref{eqn:optimize}).
In our case, we've chosen a metric that simply computes the average 2-norm between the observed agent positions and the tracker-computed positions.
Formally, this metric is defined at timestep $t$ as follows:
\begin{align}
error &= \sum_{i=t-k}^t \norm{\mathbf{S}_i - \mathbf{X}_i}.
\label{eqn:totaldist}
\end{align}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{error.png}
\caption{{
This is the RMS error in the predicted position compared to the ground truth. For an unbiased comparison, all measurements are in ground-space (meters). We have divided our dataset into 3 categories (Refer table~\ref{tb:tablediv})
(a) Low-density datasets
(b) Medium-density datasets
(c) High-density datasets
We find that our approach considerably lower error for future-state prediction in medium-density crowds.
}}
\label{fig:error}
\end{figure*}
\subsection{Adaptive Particle Selection}\label{sec:adaptive_particle_selection}
The performance of a particle filter is proportional to the number of particles used for each pedestrian, and the process can be expensive for a high number of particles.
However, with more particles, the probability that a pedestrian will be tracked accurately is higher; fewer particles, though computationally less expensive, actually lowers the tracking accuracy.
As a result, we need to use an appropriate number of particles to balance the tradeoffs between computation cost and accuracy.
Ideally, one would use fewer particles most of the time, increasing their number only when needed: when there is a large change in motion trajectory, lighting, appearance or partial occlusions, for example.
To this end, we estimate tracker confidence and particle selection by using the motion model.
We analyze the confidence of our tracker given the number of particles based on combining various metrics to measure the propagation and motion model reliability.
The propagation reliability is a measure of how well the object matches the initial target candidate and also the last tracked object:
\begin{equation}
pr_{t} = g(\norm{O_{t}-O_{t-1}}, \norm{O_{t}-O_{0}}),
\end{equation}
\noindent where $pr_{t}$ is the propagation reliability at time $t$ and $O_{t}$ denotes the object representation at time $t$. Motion model reliability is a normalized difference measure between the tracked state and the predicted state $f(X_{t-1})$ given by the motion model $f$:
\begin{equation}
mmr_{t+1} = h(\norm{f(X_{t-1}) - S_t}),
\end{equation}
\noindent where $mmr_{t+1}$ is the motion model reliability at timestep $t+1$ and $h$ is function varying linearly to the norm difference of the actual and simulated trajectories.
The combination of these metrics helps us in optimizing the number of active particles needed in the system.
In our mixture of motion models, our system chooses the optimal motion algorithm $m_t$ from all possible motion models $\{ f1, f2, ... \}$ (Equation~(\ref{eqn:optmodel})) with the optimal parameter set.
Hence the motion model reliability is always higher compared to systems with homogeneous or non-varying motion models.
\begin{equation}
mmr_{t+1}^{opt} = h(\norm{m_t(X_{t-1}) - S_t}).
\end{equation}
\section{Related work}\label{section:related_work}
In this section, we briefly review some prior work on pedestrian tracking and motion models.
Multi-pedestrian tracking has attracted a lot of research attention in recent years.
We refer the reader to some excellent surveys~\cite{wuonline, enzweiler2009monocular,yilmaz2006object}.
At a broad level, pedestrian tracking algorithms can be classified as either online or offline trackers.
Online trackers use only the present or previous frames for realtime tracking.
Zhang et al.~\cite{zhang2012real} proposed an approach that uses non-adaptive random projections to model the structure of the image feature space of objects, and Tyagi et al.~\cite{tyagi2008context} described a technique to track pedestrians using multiple cameras.
Offline trackers, on the other hand, use data from future frames as well as current and past data~\cite{sharma2012unsupervised, rodriguez2011density}.
These methods, however, require future-state information; they are therefore not useful for realtime applications.
In addition to the online/offline classifications, tracking algorithms can also be classified based on their underlying search mechanisms: as either deterministic or probabilistic trackers.
Deterministic trackers iteratively attempt to search for the local maxima of a similarity measure between the target candidate (the location of the pedestrian in a frame) and the object model (the initial state of the pedestrian).
The most commonly used deterministic trackers are the mean-shift algorithm~\cite{yilmaz2007object} and the Kanade-Lucas-Tomasi algorithm~\cite{lucas1981iterative}.
In probabilistic trackers, the movement of the object is modeled based on its underlying dynamics.
Two well-known probabilistic trackers are the Kalman filter and the particle filter.
Particle filters are more frequently used than Kalman filters in pedestrian tracking, since particle filters are multi-modal and can represent any shape using a discrete probability distribution.
\emph{Motion Models:}
The problem of modeling crowd behaviors and motions has received significant attention in various disciplines.
This attention has resulted in a high number of simulation models based on microscopic or macroscopic principles.
Several of the proposed motion models represent each individual or pedestrian in a crowd as particles (or as 2D circles in a plane), then model the interactions between these particles.
Reynolds'~\cite{Reynolds1987, Reynolds1999} seminal approach is representative of such models: local interactions, matching an agent's speed and orientation to those of its neighbors, determine agents' motions and lead to emergent behaviors.
Many popular algorithms model agents as particles which are subjected to repulsive forces~\cite{helbing1995social} and additional behavior-improving rules.
More recently, velocity-based algorithms~\cite{van2011reciprocal, Pettre2009, Karamouzas2009} have been developed, which model agents' motions in velocity-space to ensure collision-free trajectories over short future time windows.
Other approaches that have recently been developed are based on cognitive models~\cite{chung2010mobile}, affordance~\cite{Fajen2007}, short-term planning using a discrete approach~\cite{Antonini2006} or Linear Trajectory Avoidance (LTA)~\cite{pellegrini2009you}.
A final recent approach uses the virtual optic flow of agents to derive perceptual variables in order to compute collision-free motions~\cite{Ondvrej2010}.
A few tracking algorithms use the Reciprocal Velocity Obstacle (RVO) model as motion prior~\cite{Liu2014}~\cite{bera2014}.
Many non-particle-based motion modeling techniques have also been proposed; these techniques are useful mainly for crowded scenes in which pedestrians display similar motion patterns.
Song et al.~\cite{song2013fully} proposed an approach that clusters pedestrian trajectories based on the assumption that ``persons only appear/disappear at entry/exit.''
Ali et al.~\cite{ali2008floor} presented a floor-field based method to determine the probability of motion in densely crowded scenes.
Rodriguez et al.~\cite{rodriguez2011data} used a large collection of public crowd videos and learned about crowd motion patterns by extracting global video features.
Kratz et al.~\cite{kratz2012going} and Zhao et al.~\cite{zhao2012tracking} used local motion patterns in dense videos for pedestrian tracking.
Shu et al.~\cite{kratz2012going} proposed an approach that learns part-based person-specific SVM classifiers which capture dynamically changing pedestrian appearance. Zamri et al.~\cite{kratz2012going} used generalized minimum clique graphs for multiple-person tracking. Leal-Taix\'{e} et al.~\cite{leal2012exploiting} used a social and grouping behavior as a physical model in their tracking system. Burgos-Artizzu et al.~\cite{burgos2012social} presented a novel method for analyzing social behavior, particularly in mice videos, where the continuous videos are segmented into action `bouts' by building a temporal context model.
These methods are well-suited for modeling motion in dense crowds with few distinct motion patterns; however, they may not work in heterogeneous crowds.
|
\subsection*{Abstract}
Today's increasing demand for wirelessly uploading a large volume of User Generated Content (UGC) is still significantly limited by the throttled backhaul
of residential broadband (typically between 1 and 3Mbps).
We propose \mytit, a carefully designed system with implementation for bunching WiFi access points'
backhaul to achieve a high aggregated throughput. \mytit is inspired by a decade of networking design principles and
techniques to enable efficient TCP over wireless links and multipath.
\mytit aims to achieve two major goals:
1) requires \emph{no client modification} for easy incremental adoption;
2) supports \emph{not only} UDP, but also TCP traffic to greatly extend its applicability to a broad class of popular applications such as HD streaming or large file transfer. We prototyped \mytit with commodity hardware. Our extensive experiments shows that despite TCP's sensitivity to typical channel factors such as high wireless packet loss, out-of-order packets arrivals due to multipath, heterogeneous backhaul capacity, and dynamic delays, \mytit achieves a backhaul aggregation up to 95\% of the theoretical maximum throughput for UDP and 88\% for TCP. We also empirically estimate the potential idle bandwidth that can be harnessed from residential broadband.
\section{Bapu}
\label{sec:bapu}
In this section, we describe the whole \mytit system in details.
We discuss technical challenges arising and propose solutions to
achieve an efficient and practical aggregation system. We remark that \mytit shares some similarities
in the high-level architecture with prior work (e.g., Link-alike~\cite{link-alike}, FatVAP~\cite{KandulaLBK2008}),
which presented neat systems for aggregating the bandwidth among APs. However, from the practicality
aspects, the applicability of those systems is still limited due to constraints such as heavy modification of
client devices or support for only specific applications (e.g., large file transfer).
Yet our ultimate goals of the \emph{transparency} for the users, and the \emph{high-throughput} transmission for all kinds of user applications
require a new solution with unique characteristics.
\subsection{Network Unicast}
First, the transparency goal requires that legacy transport protocols be usable for
data transmission from \s to \d. Accordingly, the \s must be able to transmit data to the \d
via \emph{network unicast} through its \home.
The second reason for the need of network unicast is to increase the reliability of the transmission,
because \mytit supports TCP, whose performance depends on
the reliability of the underlying MAC layer. To be clearer, according to the IEEE 802.11 standard,
a packet with a broadcast destination address is transmitted only once by the WiFi
device, whereas up to 12 MAC-layer retransmissions are tried for a failed unicast destination address,
therefore a unicast is much more reliable than a broadcast.
Consequently, supporting network unicast is an essential requirement in \mytit, while in prior work~\cite{link-alike},
broadcast is preferred due to the simplicity goal of their system.
\vskip 1eX\noindent{\bf Packet Overhearing:}
In WiFi network, although both network unicast and network broadcast use the same method of wireless broadcast
to transfer data, the difference lies in the MAC layer, where the next-hop physical
address is specified to the unicast address or broadcast address.
This complicates the packet overhearing capability at \mon{}s.
As \home is the first hop in the transmission path, the \s, according to the
underlying routing protocol, has to use the \home's physical address
as the next-hop address in the 802.11 header. While \home as a next hop can receive the packet,
\mon{}s automatically discard the packet due to mismatched physical address.
Therefore, barely relying on the default network behavior does not let \mon{}s capture
packets sent by \s{}s in other WLANs.
\mytit's solution is to configure \ap{}s to operate simultaneously
in two modes: \emph{AP mode} and \emph{monitor mode}. The former mode is used for
serving clients in the AP's own WLAN, whereas the latter is used for overhearing packets
in the air. In monitor mode, packets are
captured in raw format via the use of {\tt libpcap}.
\vskip 1eX\noindent{\bf Packet Identification:}
Each packet sent from the \s (step~\ref{step:init}) contains the session information in the packet's IP header such as
the protocol identifier, the source and destination IP addresses and ports. With this information,
\home can uniquely identify the \s (step~\ref{step:identify}). In contrast, \mon{}s may have ambiguity in identifying the \s,
as \s{}s from different WLANs may (legally) use the same IP address.
To resolve such conflict, we write a frame parser for the packet's MAC header
to obtain the {\tt BSSID} that identifies the WLAN
the session belongs to. Therefore, any session in \mytit is now uniquely determined on the following 6-tuple
{\tt <BSSID, proto, srcIP, dstIP, srcPort, dstPort>}.
\vskip 1eX\noindent{\bf Duplicate Elimination:}
As mentioned earlier, unicasting a packet may involve a number of (MAC-layer) retransmissions due to
wireless loss occurred between the \s and its \home. This benefits the data transmission between them.
Nevertheless, it is possible that a nearby \mon can overhear more than one (re)transmission
of the same packet, which creates duplicates and floods the \mon's uplink if all the retransmitted packets
get scheduled. To identify the duplicate packets, we keep records of {\tt IPID} field in the
IP header of each overheard packet. Since {\tt IPID} remains the same value for each MAC-layer retransmission,
it allows \mon{}s to identify and discard the same packet. It is worth noting that in TCP transmission,
the TCP sequence number is not a good indicator to identify the duplicate packets, as it is unique for
TCP-layer retransmitted packets, but not unique for MAC-layer retransmissions.
\subsection{Tunnel Forwarding}
The transparency goals requires that the \s's data transfer session is unaware of
the aggregation protocol in \mytit. A seemingly straightforward solution
is that \home and \mon{}s forward the \s's packets with spoofed IP addresses.
It is, however, impractical for two reasons:
1) many ISPs block spoofed IP packets;
2) forwarded packets by \mon{}s are unreliable, because they are raw packets
overheard from the air. Our approach is that each \ap
conveys the \s's data via a separate TCP tunnel.
Since we support a transparency for aggregation over multiple paths,
the techniques for tunnelling and address resolving in each single path require
a careful design at both \ap{}s and \gw.
\vskip 1eX\noindent{\bf Tunnel Connection:}
Once a \ap identifies a new \s-\d session (step~\ref{step:register}) based on the 6-tuple, it establishes a tunnel connection
to \gw. Regardless of the session protocol, a tunnel connection between the
\ap and \gw is always a TCP connection. The choice of TCP tunnel is partially
motivated by the \emph{TCP-friendliness}. We desire to aggregate the idle bandwidth
of \ap{}s without overloading the ISP networks. Besides, since TCP tunnel can provide a reliable
channel, it helps keep a simple logic for handling a reliable aggregated transmission.
\vskip 1eX\noindent{\bf Forwarding:}
In the registration (step~\ref{step:register}) to \gw, the \ap receives an {\tt APID} as its ``contributor'' identifier for
the new session. The {\tt APID} is used in all messages in the protocol.
Both control messages (registration, report, scheduling) and data messages are exchanged via
the TCP tunnel, which ensures reliable transmissions.
On reception of a scheduling message with matching {\tt APID}, the \mon encapsulates
the corresponding \s's packet in a \mytit data message and sends it to \gw (step~\ref{step:forward}), which then
extracts the original data packet, delivers to the \d.
\ignore{At the same time, \gw broadcasts back to all \ap{}s
a \mytit acknowledgement for reception of the data. The \mon{}s, who are not
the selected forwarder for the corresponding packet, keep the captured packet
in their buffer until a timeout or a \mytit acknowledgement is received.
This helps the \gw to schedule to another \ap in case the selected \ap is suddenly offline.
In TCP session, if no other \ap{}s have the corresponding data packet, the \d
will automatically trigger the TCP retransmission algorithm. The TCP semantics will
be discussed in more details in Section~\ref{sec:proack}.}
In \mytit, the control messages are short, thus introducing only a small overhead in the backhaul.
\vskip 1eX\noindent{\bf NAT:}
In WiFi network, the \s is behind the \home and the \s might also reside behind
a gateway. By default, a NAT (network address translation) is performed
for the session between the \s and the \d.
In \mytit, the \s's data are conveyed to the \d via separate tunnels from
each participating \ap. Therefore, different from the address translation
in a traditional network, \mytit requires that the NAT mapping information of the end-to-end
session must be known
to transfer the embedded data to the desired \d. Consequently, in the registration step, each \ap,
besides {\tt APID}, also receives the NAT mapping records from \gw.
Besides, since the downlink capacity is enormous, we allow all reverse (downlink) traffic from \d to \s to traverse along the \emph{default downlink path}. In addition, as there might be multiple tiers of NAT boxes in the middle, we must ensure that the NAT mapping for a session is properly installed on all NAT boxes along the path between \s and \d in order for the returning traffic to traverse the NAT boxes properly. Therefore, the first few packets in a session must go along the \emph{default uplink path}. This means the first packet in UDP sessions or the 3-way handshake traffic in TCP sessions are not tunnelled.
\subsection{TCP with Proactive-ACK}
\label{sec:proack}
TCP ensures successful and in-order data delivery between the
\s and the \d.
\ignore{TCP relies on two major mechanisms, flow control and congestion control.
The former one prevents the sender from overrunning the receiver buffer. The latter one prevents the sender from overrunning the network path between the sender and the receiver.}
In TCP, each packet is
identified with a sequence number and must be acknowledged by the \d
to indicate the proper delivery. The \s maintains a dynamic CWND (congestion window)
during the on-going session, which indicates the maximum number of packets that can be sent on
the fly, therefore determines the TCP throughput.
The \s's CWND size is governed by the acknowledgements received from the \d.
First, the growth rate of the CWND depends on how fast acknowledgements arrive, i.e., on the link latency.
Second, a missing acknowledgement within an RTO (retransmission timeout) causes
the \s to issue a \emph{retransmission}. On the receiver side, if the \d observes an out-of-order sequence,
it sends a DUPACK (duplicate acknowledgement)
to inform the \s of the missing packet. By default~\cite{Allman:2009:TCC:RFC5681}, the \s issues
a \emph{fast retransmission} upon receiving 3 consecutive DUPACKs.
Both retransmission and fast retransmission cause the \s to cut
the CWND so as to slow down the sending rate and adapt to a congested
network or a slow receiver.
\vskip 1eX\noindent{\bf Performance issues with aggregation:}
TCP was designed under the assumption that out-of-order
delivery is generally a good indicator of lost packets or a congested network.
However, this assumption no longer holds
in \mytit.
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item \emph{Out-of-order packets:} In \mytit, packets belonging to the same TCP session are \emph{intentionally} routed through
multiple \ap{}s via backhaul connections that differ in capacity,
latency, traffic load, etc. This results in \emph{severe} packet reordering at
\gw, which eventually injects the out-of-order packets to the \d.
\item \emph{Double RTT:}
Due to the aggregation protocol, data packets in \mytit reach the \d after roughly twice the round-trip time (RTT)
of a regular link. This causes the \s's CWND to grow more slowly and to peak at lower values (see the relation below).
\end{itemize}
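To make the impact of the extra round trip concrete, recall the standard window-limited bound on TCP throughput (a generic back-of-the-envelope relation, not a property specific to \mytit):
\begin{displaymath}
\mbox{throughput} \;\lesssim\; \frac{\mathrm{CWND}\times\mathrm{MSS}}{\mathrm{RTT}},
\end{displaymath}
so roughly doubling the RTT experienced by the \s halves the rate achievable with a given window.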
Consequently, with an \emph{unplanned} aggregation method, the TCP congestion control mechanism is \emph{falsely} triggered,
resulting in considerably low throughput. As we show later in Section~\ref{sec:evaluation}, a simplified prototype of
our system, which shares similarities with the system in~\cite{link-alike}, gives poor TCP throughput.
\vskip 1eX\noindent{\bf Solution:}
To address the performance issue, we first investigated a simple
approach:
data packets forwarded by \ap{}s are buffered at \gw until
a continuous sequence is received, before being injected to the \d.
This solution, however, encounters the following issues:
1) \emph{Efficiency:} Introducing a buffer for each session at \gw
is wasteful of memory, since the \d already maintains
a TCP buffer for its own session. Furthermore, this does not scale well
when more simultaneous sessions are established.
2) \emph{Optimality:} Due to the difference in capacity, latency, loss rate
among backhaul uplinks, it is not clear how to determine the optimal buffer size.
3) \emph{Performance:} In fact, we implemented a buffering mechanism at \gw, and
the results (Section~\ref{sec:eval-buffer}) show that buffering
\emph{does not} help improve the TCP throughput.
Now we introduce a novel mechanism called \emph{Proactive-ACK}, which is
used in step~\ref{step:report-tcp} of the \mytit protocol.
The principle of the Proactive-ACK mechanism is to actively control
the exchange of acknowledgements instead of relying on the default
behaviour of the end-to-end session. With Proactive-ACK, we address
both the \emph{out-of-order packet} and the \emph{double RTT} issues.
In the following paragraphs, we call acknowledgements
actively sent by \gw \emph{spoofed} acknowledgements, while the ones
sent by the \d are \emph{real} acknowledgements.
\vskip 1eX\noindent{\bf Managing DUPACK:}
In \mytit, most out-of-order packets are caused by the aggregation mechanism
across multiple \ap{}s. To avoid the \s cutting its CWND, we intentionally
discard all DUPACKs received from the \d, since we observed that most DUPACKs
generated by the \d in \mytit are due to the multi-path aggregation.
However, by dropping DUPACKs from the \d, we need to handle the case of packets actually lost in the air
between the \s and the \ap{}s.
Concretely, if the report for the expected TCP sequence number is not received within a certain time window,
it is implied that this sequence is lost on all participating \ap{}s.
In that case, \gw sends a spoofed DUPACK back to
the \s in order to mimic the TCP fast retransmission mechanism and recover quickly from the packet loss.
\vskip 1eX\noindent{\bf Managing ACK:}
Besides the effect of DUPACKs, the \s's CWND size is also strongly affected by
the double RTT introduced by the protocol. Not only does the CWND grow slowly, but
the chance of the CWND being cut is also higher.
With the Proactive-ACK mechanism, in step~\ref{step:report-tcp}, \gw sends a spoofed ACK back to the \s after
receiving the reports from the \ap{}s.
The intuition is that every packet reported by some \ap is currently stored in
that \ap's buffer. Due to the reliability of the TCP tunnel between the \ap{}s and \gw, the
reported packets will eventually be forwarded to \gw. Therefore, as long as \gw
identifies a continuous range of reported TCP sequence numbers, immediately sending a spoofed ACK back to
the \s helps maintain a high and stable throughput, as the RTT seen by the \s is reduced
to roughly the real RTT.
This approach prevents the \s from cutting its CWND.
Since \gw already sends spoofed ACKs to the \s, on reception of the real ACKs from the \d,
\gw simply discards them.
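To make the bookkeeping concrete, the sketch below outlines the per-session state a gateway could keep for this purpose. It is only an illustration: the class and parameter names ({\tt SessionState}, {\tt REPORT\_TIMEOUT}) and the callbacks that emit the spoofed (DUP)ACKs are hypothetical, and sequence-number wrap-around and SACK handling are omitted.
\begin{verbatim}
import time

REPORT_TIMEOUT = 0.2   # hypothetical time window for a missing report (seconds)

class SessionState:
    """Per-TCP-session bookkeeping at the gateway (illustrative only)."""

    def __init__(self, initial_seq):
        self.expected = initial_seq   # next sequence number still unacknowledged
        self.reported = set()         # (seq, length) segments reported by some AP
        self.last_progress = time.time()

    def on_report(self, seq, length, send_spoofed_ack):
        """An AP reported reception of segment [seq, seq+length)."""
        self.reported.add((seq, length))
        advanced = False
        # Advance over every contiguous prefix of reported segments.
        while True:
            nxt = next((s for s in self.reported if s[0] == self.expected), None)
            if nxt is None:
                break
            self.expected = nxt[0] + nxt[1]
            self.reported.discard(nxt)
            advanced = True
        if advanced:
            # Safe to acknowledge: the reported data sits in some AP's buffer and
            # will arrive at the gateway over the reliable AP-gateway tunnel.
            send_spoofed_ack(ack=self.expected)
            self.last_progress = time.time()

    def on_timer(self, send_spoofed_dupack):
        """No AP reported the expected sequence in time: mimic fast retransmit."""
        if time.time() - self.last_progress > REPORT_TIMEOUT:
            for _ in range(3):   # three DUPACKs trigger fast retransmit by default
                send_spoofed_dupack(ack=self.expected)
            self.last_progress = time.time()

    def on_real_ack(self, ack_packet):
        """Real (DUP)ACKs from the destination are swallowed by the gateway."""
        return None   # drop: the sender was already acknowledged proactively
\end{verbatim}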
\vskip 1eX\noindent{\bf TCP semantics:}
We have two important remarks on the TCP semantics:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Immediately sending the spoofed ACKs after receiving the reports
may result in spoofed ACKs arriving at the \s before the data packets have been
forwarded to the \d. This grows the CWND
in a more aggressive manner than the standard mechanism.
\item Dropping real DUPACKs and sending spoofed DUPACKs can increase the recovery time
for an \emph{actual} packet loss, because the loss observed
by the \d is not immediately signalled to the \s.
For example, if an AP that has been scheduled to forward the selected packet
suddenly goes offline, it takes longer for the packet to be rescheduled after a timeout
and then forwarded to the \d.
\end{itemize}
Despite the slight difference in TCP semantics, the Proactive-ACK mechanism
yields a significant improvement in TCP throughput.
We present these results in Section~\ref{sec:evaluation}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/bapu-exp-setup}
\caption{\mytit Experiment Setup. 7 \mytit-APs and 1 \mytit-Gateway are
inter-connected. A traffic shaping box is set up in between to emulate
a typical residential network setting. }
\label{fig:bapu-exp-setup}
\end{figure}
\begin{figure*}
\centering
\subfloat[\label{fig:basic-total-32ms}UDP and TCP throughput]{
\includegraphics[width=0.9\columnwidth]{figures/plot32ms-basic-total}
}
\subfloat[\label{fig:basic-total-percent-32ms}UDP and TCP aggregation efficiency]{
\includegraphics[width=0.9\columnwidth]{figures/plot32ms-basic-total-percent}
}
\caption{\basic aggregation for UDP and TCP with 2Mbps 32ms RTT uplinks.}
\end{figure*}
\subsection{Scheduling}
\label{sec:schedule}
The bandwidth aggregation performance depends on how efficiently data are multiplexed
among the \ap{}s to best utilize the idle uplink bandwidth.
In \mytit, we adopt a \emph{centralized scheduler} at \gw.
Two main factors motivate this design.
First, a centralized design not only simplifies the implementation,
but also allows the scheduler to be easily extended with extra logic to further optimize
the scheduling strategy. Second, scheduling requires non-trivial processing
and memory, which might overload the much less capable \ap{}s
if the scheduling decisions were migrated to them.
The scheduling strategy is based on the reports received
in steps~\ref{step:report-udp} and~\ref{step:report-tcp} of the protocol. Each report from an \ap contains
the send buffer size obtained from the Linux kernel via an {\tt ioctl()} call.
This value specifies how much an \ap can contribute to the aggregation.
Based on these reports, \gw applies a First-Come-First-Served strategy to select
a forwarder among the \ap{}s that have captured the same packet. This approach is
similar to those applied in~\cite{link-alike,KandulaLBK2008}.
The rationale for choosing this approach is twofold:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item \emph{Fairness:}
Sharing bandwidth for aggregation takes into account the available bandwidth
of the participating \ap{}s, since AP owners have
different subscription plans.
\item \emph{Protocol independence:} The scheduling decision is based on
the \ap{}s' sharing capability, not on the particular transport protocol.
\end{itemize}
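As an illustration of the decision itself, the following sketch selects a forwarder in the order the reports arrived, using the reported send-buffer headroom as the admission test. The report structure and the {\tt MIN\_HEADROOM} threshold are our own illustrative assumptions, not the exact implementation.
\begin{verbatim}
MIN_HEADROOM = 4096   # hypothetical minimum spare tunnel send-buffer (bytes)

def pick_forwarder(reports, packet_len):
    """FCFS: reports is a list of (ap_id, free_sndbuf) in arrival order.
    Return the first AP with enough spare uplink capacity, or None."""
    for ap_id, free_sndbuf in reports:
        if free_sndbuf >= max(MIN_HEADROOM, packet_len):
            return ap_id
    return None

# Example: the first reporter (AP 3) is saturated, so AP 1 is selected.
print(pick_forwarder([(3, 1200), (1, 20000), (5, 50000)], 1400))   # -> 1
\end{verbatim}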
\ignore{
\subsection{Embedded devices}
\fixme{Challenges in programming with OpenWRT, cross-compile, reflash}
\fixme{Memory constraints lead to what difficulties?}
\subsection{Discussion}
\vskip 1eX\noindent{\bf Availability of APs:} from reviewer: "80\% of them are secure" and "operate on other channels"
\vskip 1eX\noindent{\bf Double latency issue:} latency aware applications
\vskip 1eX\noindent{\bf TCP semantics:} what if an AP who got scheduled but fails before forwarding?
\vskip 1eX\noindent{\bf Security and Privacy issues:} end-to-end encryption should be enough
\vskip 1eX\noindent{\bf Packet loss:} strawman approach: coding or modulo scheduling do not work
\vskip 1eX\noindent{\bf Out-of-order packets:} strawman approach: buffering do not work
(mention the above strawman approaches very briefly, leave the details in the evaluation section where we can show the charts)
}
\section{BaPu-Pro}
\label{sec:bapu-pro}
Inspired by our analysis in Section~\ref{sec:bapubasic-tcp}, we propose BaPu-Pro to address
the TCP challenges. BaPu-Pro shares the same architecture as BaPu-Basic, and employs
a mechanism called \emph{Proactive-ACK} to mitigate the out-of-order TCP sequence issue.
BaPu-Pro maintains a healthy CWND growth at the sender side, and therefore achieves high throughput.
\subsection{Proactive-ACK}
The protocol flow of BaPu-Pro is depicted in Figure~\ref{fig:bapu-pro-flow}. At a high level,
BaPu-Pro works as follows:
\begin{enumerate}
\item BaPu-APs send packet reception reports to BaPu-Gateway. The only difference from BaPu-Basic at this step is
that each report includes the TCP sequence number of the reported packet.
\item BaPu-Gateway maintains a lookup table to record all the reported TCP sequence numbers.
\item On receiving a new TCP sequence number, BaPu-Gateway scans the table to check whether a continuous sequence has been reported. If so, BaPu-Gateway generates a series of spoofed TCP ACKs and sends them to the sender. The intuition is that all the reported sequence numbers are currently buffered at some BaPu-AP and, due to the reliability of the TCP tunnel between the BaPu-AP and the BaPu-Gateway, will ultimately be forwarded to the BaPu-Gateway. Therefore, as long as BaPu-Gateway identifies a continuous range of reported TCP sequence numbers, it is safe to send an acknowledgement back to the sender. Such spoofed ACKs may be sent even before the real TCP segments reach the BaPu-Gateway. In this way, the TCP sequence window at the sender side keeps sliding, which prevents ACK timeouts.
\item If the report for the next expected TCP sequence number is not received within a certain time window, it is likely that this sequence is lost on all BaPu-APs. In this case, BaPu-Gateway sends a spoofed DUPACK back to the sender. This operation mimics the TCP Fast Retransmission mechanism for fast recovery from potential packet loss.
\item As BaPu-Gateway forwards the received TCP segments to the destination, the destination sends real TCP ACKs to the sender. Since BaPu-Gateway resides on the path between the sender and the destination, it captures these real TCP ACKs and discards them, because a spoofed ACK has already been sent to the sender.
\item The destination may also send TCP data segments to the sender. BaPu-Gateway simply lets those packets travel along the default network path.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/bapu-pro-flow-compact}
\vspace{-0.1in}
\caption{BaPu-Pro Protocol Traffic Flow}
\label{fig:bapu-pro-flow}
\end{figure}
TCP is a two-way communication: both ends may send TCP data packets to each other, so the sequence windows at both ends must be properly synchronized. Aside from the sequence number, other TCP header fields are also important; for example, the advertised receiver window size governs flow control between sender and receiver. To honour the TCP header fields generated by the receiver, BaPu-Gateway monitors the real TCP ACKs generated by the destination, extracts their TCP headers, and uses those values to prepare the spoofed ACKs. While the Proactive-ACK mechanism breaks the end-to-end semantics of TCP, BaPu can be limited to operate on HTTP upload (POST) traffic, which has an explicit application-layer acknowledgement mechanism (i.e., HTTP 200 OK).
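For illustration, the snippet below shows one way such a spoofed ACK could be crafted from user space, mirroring the header fields of the destination's most recent real ACK. It uses Scapy purely as an example packet-crafting library; it is a sketch, not the actual gateway implementation.
\begin{verbatim}
from scapy.all import IP, TCP, send   # packet crafting, used purely for illustration

def spoof_ack(last_real_ack, ack_seq):
    """Build a spoofed ACK towards the sender, reusing the header fields of the
    destination's most recent real ACK (ports, advertised window, sequence
    number) but acknowledging the gateway-chosen sequence ack_seq."""
    ip = IP(src=last_real_ack[IP].src, dst=last_real_ack[IP].dst)
    tcp = TCP(sport=last_real_ack[TCP].sport, dport=last_real_ack[TCP].dport,
              flags="A",
              seq=last_real_ack[TCP].seq,        # receiver's own sequence number
              ack=ack_seq,                       # acknowledge the reported data
              window=last_real_ack[TCP].window)  # honour the receiver's flow control
    send(ip / tcp, verbose=False)                # needs raw-socket privileges
\end{verbatim}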
\ignore{
\paragraph{Implementation}
Based on the implementation of BaPu-Basic described in section~\ref{sec:implement}, we integrate
the Proactive-ACK mechanism at BaPu-Gateway.
BaPu-Gateway uses \emph{raw socket} to spoof the TCP ACKs. We also install proper iptables rules at BaPu-Gateway
and use \emph{iptables-queue} to capture the returning TCP traffic from the destination to the sender.
BaPu-AP logic remains unmodified. }
\section{Conclusion}
\label{sec:conclusion}
In this work, we present the design and implementation of BaPu, a complete software-based solution on WiFi APs for aggregating multiple broadband uplinks. First, with large-scale wardriving data and a long-term measurement of Boston residential broadband, we show that the high AP density and under-utilized broadband uplinks call for a solution that harnesses the idle bandwidth for a boosted uplink throughput.
With our client-transparent design, BaPu offers generic support for legacy clients and a large variety of network applications. However, the client transparency raises many new technical challenges. In particular, we propose a novel mechanism, called Proactive-ACK, to address the challenge of multiplexing a single TCP session through multiple paths without degrading performance. The benefit of this mechanism is analysed with experimental data. We carry out an extensive set of experiments to evaluate BaPu's throughput performance for both UDP and TCP in a variety of realistic network settings. BaPu achieves over 95\% aggregation efficiency for UDP and over 88\% for TCP, even in lossy wireless environments.
Besides, to further justify the feasibility of BaPu as a crowd-sourcing mechanism, we empirically show the potential idle uplink bandwidth that can be harnessed from residential broadband networks. We also provide a design guideline for such a bandwidth sharing system to eliminate the negative impact on home users. Finally, the software-based solution makes BaPu easy to deploy incrementally, especially as APs are becoming social and cloud-managed.
\section{BaPu}
\label{sec:design}
BaPu is a suite of software solutions running on the home WiFi APs
and gateway servers. BaPu consists of two major components:
\begin{itemize}
\item \textit{BaPu-AP}: In BaPu, each home WiFi AP is configured in both AP
mode and monitor mode. In an upload session, the sender unicasts to
its own home AP via a high-bandwidth wireless link. The proximate BaPu-APs
overhear the communication in monitor mode.
This group of BaPu-APs, including both the home-AP and the monitor-APs,
communicates with the far-end BaPu-Gateway to determine which AP forwards which
IP packet.
\ignore{The scheduled IP packets are forwarded from the BaPu-AP to the
BaPu-Gateway through a TCP tunnel connection. In this way, the \textbf{``fat"}
wireless data flow is forwarded to the destination utilizing multiple APs'
backhaul uplinks. }
\item \textit{BaPu-Gateway}: To make traffic multiplexing transparent to
both sender and destination, we design BaPu-Gateway,
which resides on the network path between the BaPu-APs and the destination.
BaPu-Gateway is mainly responsible for scheduling and load balancing on
BaPu-APs, and forwarding the traffic to the destination.
BaPu-Gateway may be deployed in the following 3 scenarios:
1) The destination is an end device in a WLAN. The BaPu-Gateway runs on the WiFi
AP of the same WLAN; 2) The destination is a server where the data is uploaded
to. The BaPu-Gateway is a gateway server running in front of it; 3) The
BaPu-Gateway and the destination reside on the same physical machine.
The destination is a service process which receives the uploaded data, while
the BaPu-Gateway is a process intercepting traffic on the data propagation path to
the destination.
\end{itemize}
The main idea of our BaPu system is inspired by the work from Jakubczak et al.
\cite{link-alike}. Our preliminary experiments show that we can achieve high
aggregated throughput with efficiency over 95\% for UDP. However,
the basic design performs poorly for TCP due to TCP's
inherent characteristics. We therefore call this basic solution ``BaPu-Basic''. In contrast,
we propose a novel mechanism, \emph{Proactive-ACK}, to improve the TCP performance,
and call the resulting solution ``BaPu-Pro''. In the following
section, we first present the BaPu-Basic mechanisms in detail together with our preliminary evaluation
of BaPu-Basic. Then we present
an analysis of why BaPu-Basic is not adequate for TCP.
\subsection{Client Transparent Tunnel Forwarding}
Figure \ref{fig:bapu-basic-flow} describes the BaPu-Basic protocol traffic flow.
As the sender device transfers data to its home-AP $H$, the proximate monitor-AP(s),
$M_1$, overhears the wireless communication. BaPu-AP identifies a
transport session based on the following tuple,
$\langle \mathit{BSSID}, \mathit{protocol}, \mathit{srcIP}, \mathit{dstIP}, \mathit{srcPort}, \mathit{dstPort}\rangle$\\
Here $\mathit{protocol}$ indicates whether it is a TCP or UDP session. The latter five items
form the typical 5-tuple that identifies a transport-layer session. However, in neighboring
WLANs, clients may have conflicting (private) IP addresses. To resolve such conflicts, we
use the $\mathit{BSSID}$ in the 802.11 header to identify which WLAN the session belongs to.
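A minimal sketch of such a session key is shown below; the type and helper names are ours and purely illustrative.
\begin{verbatim}
from collections import namedtuple

# Key used to demultiplex overheard frames into sessions.  BSSID is included
# because clients in neighbouring WLANs may reuse the same private IP addresses.
SessionKey = namedtuple(
    "SessionKey",
    ["bssid", "protocol", "src_ip", "dst_ip", "src_port", "dst_port"])

def make_key(bssid, ip_proto, src_ip, src_port, dst_ip, dst_port):
    proto = "TCP" if ip_proto == 6 else "UDP"   # 6 and 17 are the IP protocol numbers
    return SessionKey(bssid, proto, src_ip, dst_ip, src_port, dst_port)

# e.g. an overheard TCP segment from 192.168.1.23:50512 to 93.184.216.34:8080
key = make_key("aa:bb:cc:dd:ee:ff", 6, "192.168.1.23", 50512, "93.184.216.34", 8080)
\end{verbatim}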
Once a BaPu-AP identifies a new session, it establishes a TCP connection to the $\mathit{dstIP}$,
which is the IP address of the BaPu-Gateway $G$. The BaPu-AP registers itself at the
BaPu-Gateway for the newly identified session. On registration, BaPu-Gateway assigns
each BaPu-AP an AP ID. Later on, the tunnelled IP packets and the exchanged control
information are all carried through this TCP tunnel.
At $H$, instead of directly forwarding the sender's packets to the Internet,
the AP stores the packets in a session buffer. A session buffer is a memory space allocated
for each BaPu session and is released when the session ends.
Similarly, the monitor-APs each maintain a session buffer to store the overheard packets.
In order to harness the idle bandwidth of multiple APs, the BaPu-APs and the BaPu-Gateway
coordinate to determine how to split the traffic load. The bandwidth
aggregation is thus transformed into the problem of how to schedule the packets
among BaPu-APs. BaPu adopts a centralized scheduling mechanism at the BaPu-Gateway.
For example, $H$ and $M_1$ both
receive packet $p$, and each sends a packet reception report to $G$. Based on our
scheduling method, $G$ schedules $M_1$ to forward IP packet $p$; we explain the scheduling mechanism in the next section.
$G$ informs all BaPu-APs of this scheduling decision. Upon receiving the schedule,
$H$ releases packet $p$ from its session buffer, and $M_1$ carries out the following two operations:
\begin{itemize}
\item \textbf{NAT}: The sender is behind a home-AP, which generally also acts as a NAT. In this case, the packet overheard by $M_1$ carries the sender's private IP address. $M_1$ first translates the private IP/port pair to the public pair. The home-AP and the BaPu-Gateway communicate to obtain the NAT mapping record for this session, and the monitor-AP retrieves this record when it establishes the tunnel connection with the BaPu-Gateway. Note that the NAT operation is not mandatory at the AP side; it can be shifted to the BaPu-Gateway.
\item \textbf{Encapsulation}: $M_1$ encapsulates the NAT'ed IP packet in our BaPu protocol message and sends it to the BaPu-Gateway.
\end{itemize}
When the BaPu-Gateway receives the tunnelled packets, it simply injects them into the network with a raw socket. The packets are then delivered to the destination.
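The following sketch illustrates this kind of re-injection on Linux; it assumes the tunnelled payload already carries a complete IP header (with {\tt IPPROTO\_RAW} the kernel does not prepend one) and requires root privileges. The function name is ours.
\begin{verbatim}
import socket

def inject(ip_packet_bytes, dst_ip):
    """Re-inject a decapsulated packet at the gateway.  The payload already
    contains a complete IP header; with IPPROTO_RAW the kernel leaves it intact
    (IP_HDRINCL is implied)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    try:
        s.sendto(ip_packet_bytes, (dst_ip, 0))
    finally:
        s.close()
\end{verbatim}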
A few points are worth noting:
\begin{enumerate}
\item In BaPu, we assume that the downlink capacity has no limit. Thus, all the downlink traffic from the destination to the sender still goes along the default path. Since the sender may reside behind multiple tiers of NAT boxes, we must ensure that the NAT mapping for the session is properly installed on all NAT boxes along the path between sender and destination, so that the returning traffic can traverse the NAT boxes properly. Therefore, the first few packets in a session must go along the default path; in practice, this means the first packet of a UDP session or the 3-way handshake traffic of a TCP session.
\item Compared with UDP, TCP obviously has higher overhead. However, the choice of TCP tunnels is partially motivated by TCP-friendliness: we want to aggregate the idle bandwidth without overloading the ISPs' networks. Besides, since a TCP tunnel provides a reliable channel, it also keeps the control and data communication logic between BaPu-AP and BaPu-Gateway simple.
\item In theory, the monitor-APs could send the NAT'ed IP packets with a raw socket, instead of forwarding them through the tunnel. This seems more straightforward, but it is inapplicable for two reasons: 1) many ISPs block spoofed IP packets; 2) sending TCP segments over a raw socket provides no reliability.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/bapu-basic-flow-compact}
\vspace{-0.1in}
\caption{BaPu-Basic Protocol Traffic Flow}
\label{fig:bapu-basic-flow}
\end{figure}
\subsection{Scheduling}
\label{sec:schedule}
As mentioned above, the bandwidth aggregation is transformed into the problem of how to schedule IP packet forwarding among BaPu-APs to make the best use of the limited uplink capacity. In designing the scheduler, we explored 3 mechanisms: the \textbf{modulo scheduler}, \textbf{modulo+redundancy}, and the \textbf{centralized scheduler}.
In the \textbf{modulo scheduler}, each BaPu-AP computes a hash value of the IP ID; the hash function is chosen to have a sufficiently low collision probability. If the IP ID, taken modulo the AP ID, is 0, the BaPu-AP forwards this packet; otherwise, it discards it.
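One plausible concrete reading of this rule is sketched below, where the participating APs partition the IP ID space into residue classes; the exact partitioning (IP ID modulo the number of APs versus modulo the AP ID) is our assumption for illustration only.
\begin{verbatim}
def should_forward(ip_id, ap_index, num_aps):
    """Residue-class rule: AP number ap_index (0..num_aps-1) forwards exactly
    the packets whose IP ID falls into its own residue class.  This is one
    plausible reading of the modulo rule; no per-packet coordination is needed."""
    return ip_id % num_aps == ap_index

# With 5 participating APs, AP 2 forwards IP IDs 2, 7, 12, ...
print([i for i in range(15) if should_forward(i, 2, 5)])   # -> [2, 7, 12]
\end{verbatim}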
The big advantage of the modulo scheduler is zero coordination overhead. Its major downside, however, is unreliability. Due to WiFi packet loss, the BaPu-AP that should forward a certain packet may not have received it; in this case, the packet is lost for sure, which degrades the throughput. Such packet loss hurts TCP throughput even more.
In order to mitigate the packet loss of the modulo scheduler, we attempted to introduce some redundancy. We examined one simple strategy, \textbf{P-based redundancy}: besides the packets selected by the modulo function, each BaPu-AP also forwards the received packets with some pre-determined probability $P$. Although such redundant forwarding can mitigate packet loss, it caps the maximum achievable throughput, because each BaPu-AP forwards extra packets with probability $P$, which wastes bandwidth.
In our design, we adopt a \textbf{centralized scheduler} at the BaPu-Gateway. Each BaPu-AP sends a packet reception report to the BaPu-Gateway for every received packet. With complete knowledge of the packet reception at all BaPu-APs, the BaPu-Gateway determines which AP should forward which packet. Also, with this centralized design, it is easy to employ extra logic to further optimize the scheduling strategy, such as the load balancing method which we present in the next section.
At the centralized scheduler, we choose a First-Come First-Served (FCFS) strategy to select the BaPu-AP that forwards each packet. The intuition behind FCFS is that, among the APs which have enough bandwidth to transmit the data, the AP that reports first should forward the packet, so that the reported packet is delivered as soon as possible. Our design matches the approach taken in Link-alike \cite{link-alike}. However, Link-alike is designed specifically for UDP-based large file transfer; in that case, a greedy centralized scheduler is sufficient, because all UDP chunks merely need to be delivered to the destination, without concern for the order or timing of chunk arrivals. In designing BaPu, we realized that our goal of supporting TCP raises new technical challenges. Due to TCP's inherent nature, TCP performance is closely related to whether the TCP sequence can be delivered efficiently and in order, and a basic centralized scheduler is no longer adequate to handle TCP transmission. In Section~\ref{sec:bapu-pro}, we propose a new mechanism called \textit{Proactive-ACK} to address the challenges of supporting TCP transmission.
\subsection{Load Balancing}
One of our design goals for BaPu is ``fair'' sharing of backhaul bandwidth. In our design,
``fairness'' stands for the following two things:
\begin{itemize}
\item APs are equipped with different uplink bandwidth because they run in different ISP networks,
and the end users have different subscription plans. Thus, the BaPu scheduling must be
``fair'' to all the participating BaPu-APs by taking into
account the available bandwidth of each AP, instead of evenly splitting the traffic among all APs.
\item BaPu is essentially a community-based approach that pools everyone's
idle bandwidth. We must ensure ``fairness'' to the home LAN users and prioritize
their regular network usage.
\ignore{The home LAN traffic load may vary depending on how the home LAN users use the network. }
In order not to disturb regular network usage, we continuously monitor the ``idle'' bandwidth of each BaPu-AP and avoid
overrunning the uplink. We believe this is critical for a community-based approach like BaPu to be
adopted.
\end{itemize}
\subsubsection{Why Is FCFS Not Good Enough?}
The FCFS scheduling described in the previous section is, as its name suggests,
closely tied to the latency between BaPu-AP and BaPu-Gateway.
A BaPu-AP with low latency to the BaPu-Gateway is more likely to be assigned
more IP packets because its packet reception reports arrive earlier.
However, the low-latency BaPu-AP might have less uplink bandwidth than
other APs. As a result, FCFS scheduling might place too much traffic load on the APs
with low latency, and waste a lot of idle bandwidth on APs with higher latency and
higher bandwidth. Therefore, FCFS scheduling may result in unfairness among APs.
\subsubsection{Uplink Capacity Estimation}
To avoid this situation, BaPu employs a load balancing mechanism on top of FCFS scheduling.
The core idea is to combine the estimated available uplink capacity with FCFS
in scheduling.
When a BaPu-AP sends a packet reception report, it piggybacks its uplink capacity estimate on the report.
Since all IP packets are forwarded through a TCP tunnel connection, the TCP tunnel throughput
reflects the uplink capacity. We estimate the capacity from the currently available send buffer size of the
TCP tunnel socket: the available send buffer size indicates how fast the kernel can drain the send buffer, which
in turn reflects the uplink capacity. A BaPu-AP obtains the available send buffer size by querying the Linux kernel with
an {\tt ioctl()} call. Upon receiving such an estimate, BaPu-Gateway decides whether or not to schedule IP packets to the
reporting AP. Even though the TCP socket buffer occupancy at the Linux kernel level does not represent the
exact link capacity, this parameter works very well in our practice; it efficiently balances the load among APs and adapts well to
changes in capacity. Our capacity estimation matches the method taken in Link-alike \cite{link-alike}.
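As an illustration of how such an estimate can be obtained on Linux (the text does not name the exact {\tt ioctl}; this sketch uses {\tt SIOCOUTQ}, which returns the number of bytes still queued in the send buffer, and the function name is ours):
\begin{verbatim}
import fcntl, socket, struct

SIOCOUTQ = 0x5411   # Linux: bytes still queued in the socket send buffer

def free_sndbuf_bytes(tunnel_sock):
    """Rough spare-capacity estimate for the AP-to-gateway TCP tunnel:
    configured send-buffer size minus the bytes still sitting in it."""
    sndbuf = tunnel_sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    queued = struct.unpack("i",
        fcntl.ioctl(tunnel_sock, SIOCOUTQ, struct.pack("i", 0)))[0]
    return max(sndbuf - queued, 0)
\end{verbatim}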
\input{prelim_exp}
\section{Evaluation}
\label{sec:evaluation}
In this section, we conduct a comprehensive set of experiments to evaluate the performance
of BaPu-Pro. First, we validate our Proactive-ACK mechanism by comparing
the TCP throughput of BaPu-Pro against that of BaPu-Basic. Second, we
measure the performance of BaPu-Pro under a variety of network settings (e.g., network latency,
background traffic load, wireless link quality, etc.). Finally, we demonstrate that BaPu-Pro is
feasible for both streaming and large file transfer applications. We conduct
all the experiments in the same setup as described in Section~\ref{sec:prelim-exp}.
\subsection{TCP Throughput -- \proack vs. \basic}
\begin{figure*}
\centering
\subfloat[\label{fig:proack_tcp_throughput} \proack vs. \basic, 32ms RTT, 2Mbps uplink
capacity. ]{\includegraphics[width=\columnwidth]{figures/plot32ms-tcp-compare}\vspace{-0.5cm}}
\subfloat[\label{fig:proack_tcp_th_vs_upper_limit} Efficiency of
Aggregation, BaPu-Basic
vs. BaPu-Pro.]{\includegraphics[width=\columnwidth]{figures/plot32ms-tcp-compare-percent}\vspace{-0.5cm}}
\vspace{-0.2cm}
\caption{TCP aggregated throughput}
\end{figure*}
\ignore{
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot32ms-tcp-compare}
\caption{TCP aggregated throughput - \proack vs. \basic, 32ms RTT, 2Mbps uplink capacity. }
\label{fig:proack_tcp_throughput}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot32ms-tcp-compare-percent}
\caption{Efficiency of Aggregation, BaPu-Basic vs. BaPu-Pro.}
\label{fig:proack_tcp_th_vs_upper_limit}
\end{figure}
}
We carry out the same TCP iperf throughput test as
described in Section~\ref{sec:prelim-exp} with BaPu-Pro.
As shown in Figure~\ref{fig:proack_tcp_throughput},
the aggregated TCP throughput of BaPu-Pro significantly outperforms
that of BaPu-Basic. With 7 BaPu-APs, BaPu-Pro achieves 11.04Mbps, which
translates to a 62\% improvement over BaPu-Basic. Furthermore, Figure~\ref{fig:proack_tcp_th_vs_upper_limit} shows that
BaPu-Pro achieves at least 88\% aggregation efficiency in our setup, and
achieves at least 83\% of the upper limit of standard TCP throughput.
Our experiment results demonstrate that BaPu-Pro can achieve high
aggregated throughput with high aggregation efficiency for both UDP and TCP.
\subsection{Proactive-ACK Benefit}
To justify the design considerations of the Proactive-ACK mechanism, we adopt
the same method as in Section~\ref{sec:bapubasic-tcp} to examine the TCP
CWND growth in BaPu-Basic, BaPu-Pro and normal TCP (Figure
\ref{fig:tcp_info_basic_vs_proack}).
In BaPu-Basic, the CWND remains very small during the whole TCP session. In contrast,
BaPu-Pro allows the CWND to grow to a very high value, which contributes to the resulting high throughput.
For reference, we also run a normal TCP session with the bandwidth throttled to 11Mbps
(similar to the throughput BaPu-Pro achieves). The CWND growth patterns of BaPu-Pro and
normal TCP are very close, which also implies that our BaPu-Pro design and implementation can efficiently
and transparently aggregate multiple slow uplinks.
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{figures/plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd}
\caption{TCP sender CWND growth comparison: \proack vs. \basic vs. normal TCP}
\label{fig:tcp_info_basic_vs_proack}
\end{figure*}
\subsection{Impact of Network Latency}
In TCP transmission, the round-trip time (RTT) is an important factor that affects
the throughput. In this experiment, we measure the performance of BaPu-Pro with the 4 different network
latency settings listed in Table~\ref{tab:latency}. Each latency represents a certain application scenario. For
example, when users upload HD video with BaPu to CDN edge servers, the latency to the CDN server is typically
a regional latency (32ms). When users upload data to friends on another continent, the RTT is on average 192ms.
Besides, according to our measurements on a residential WiFi testbed, the latency between broadband
subscribers varies a lot, ranging between 20ms and 80ms, depending on whether the two users
are in the same ISP or in different ISPs. Consider the case where a user uploads data with BaPu through
neighbors' APs to another user in the same city: the latency between the BaPu-APs and the end user can be quite
different. We would like to study the potential impact of this latency diversity on the performance of BaPu-Pro.
Given a certain number of APs, we assign each BaPu-AP a random RTT value between 20ms and 80ms. We carry out
this test for 10 runs and report the average throughput. As shown in Figure~\ref{fig:latency-pro}, the BaPu-Pro throughput
declines as the network latency increases, but the reduction is limited. In the random latency setting, the
resulting throughput shows no significant difference.
\begin{table}
\centering
\caption{\label{tab:latency} Network RTT Latency. Inter-AP RTT
measured by our Open Infrastructure WiFi testbed in Boston area,
representing typical
RTT between home APs in Boston, covering Comcast, RCN, and Verizon.}
\begin{tabular}{|l|c|}
\hline
Distance & RTT\\
\hline\hline
Regional: 500 - 1,000 mi& 32ms~\cite{akamai:hd}\\
\hline
Cross-continent: $\sim$ 3,000 mi& 96ms~\cite{akamai:hd}\\
\hline
Multi-continent: $\sim$ 6,000 mi& 192ms~\cite{akamai:hd}\\
\hline
inter-AP in Boston& 20ms $\sim$ 80ms\\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\subfloat[\label{fig:latency-pro} Different RTT]{\includegraphics[width=\columnwidth]{figures/plot-proack-tcp-latency-compare}}
\subfloat[\label{fig:diversity-pro} Different packet loss rate $P$ on monitor-APs.]{\includegraphics[width=\columnwidth]{figures/plot-diversity-tcp-compare}}
\caption{BaPu-Pro TCP throughput}
\end{figure*}
\ignore{\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot-proack-tcp-latency-compare}
\caption{BaPu-Pro TCP throughput with different RTT}
\label{fig:latency-pro}
\end{figure}
}
\subsection{Impact of Lossy Wireless Links}
The wireless links in a real neighbourhood can be very lossy for a variety
of reasons, such as cross-channel interference and distant neighboring APs. Besides, since monitor-APs
switch between transmit and receive modes, they cannot overhear all the wireless communication
happening in the background.
To estimate the potential of BaPu-Pro in a highly lossy wireless environment, we
emulate packet loss at the monitor-APs by dropping the received packets with probability $P$.
No losses are inflicted on the home-AP,
because the sender unicasts to the home-AP, and the 802.11 MAC takes care of packet loss
and retransmission. We conduct the experiment with 3 values of $P$: 20\%, 40\% and 60\%.
As indicated by Figure~\ref{fig:diversity-pro}, the throughput reduction on lossy
wireless links is very limited in all cases. We believe this good
performance can be explained by the link diversity combined with the
centralized scheduling mechanism. The probability that some packet is not overheard by
any monitor-AP is slim, especially when the number of APs is high. This also explains
why 7 BaPu-APs achieve higher throughput with $P=60\%$ than with $P=20\%$: even with a higher $P$,
there is still a good chance that some monitor-AP overhears a packet and shares the traffic
load of the home-AP, while fewer packets are reported to the BaPu-Gateway. The
reduced control overhead therefore leads to higher effective throughput.
\ignore{
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot-diversity-tcp-compare}
\caption{BaPu-Pro TCP throughput with different packet loss rate $P$ on monitor-APs.}
\label{fig:diversity-pro}
\end{figure}
}
\subsection{Streaming vs. Large File Transfer}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot-bursty}
\caption{Instantaneous receiver end throughput. Streaming with 11Mbps transmitter rate vs. iperf }
\label{fig:bursty}
\end{figure}
During our iperf measurements, even though the average TCP throughput is quite stable in various settings, we find that
the receiver-end instantaneous throughput fluctuates a lot (see Figure~\ref{fig:bursty}).
Our packet trace inspection reveals that, due to the latency differences
among APs, the arrival order of the scheduled TCP segments is uncertain. Therefore, the BaPu-Gateway sometimes must buffer
the out-of-order segments until the expected ones arrive. Besides, since iperf always tries to saturate the link, the out-of-order segment arrival becomes more severe once the throughput on the WiFi link overruns the aggregated backhaul uplink capacity. Both effects result in the bursty receiver-end throughput.
The iperf throughput only indicates that BaPu is suitable for applications like instant backup of large files in the cloud, which aligns with the findings of prior work. However, it tells only one side of the story. In designing BaPu, the other important goal is to support instant sharing of high-bitrate HD video directly from users' home WiFi, in streaming mode. The motivation behind this goal is that today's mainstream online streaming services (e.g., Netflix) run on TCP-based streaming technologies, such as HTTP-based Adaptive Bitrate Streaming. Real-time streaming generally requires a stable instantaneous throughput. In this experiment, we study the potential of BaPu as a solution for high-bitrate real-time streaming.
Unlike file uploading, a streaming application generally has a fixed transmitter rate, determined by the codec bit rate. To emulate streaming traffic, we issue the TCP flow with a tool called \emph{nuttcp}, which can rate-limit the transmitter. Figure~\ref{fig:bursty} shows the receiver-end instantaneous throughput per second in a 100-second session. In the streaming flow with an 11Mbps fixed transmitter rate, the receiver end achieves reasonably stable throughput over the whole session, indicating that BaPu can sustain high-bitrate streaming through aggregated links. In comparison, the iperf flow with an unlimited transmitter rate shows much higher fluctuation.
\section{Introduction}
\label{sec:introduction}
Nowadays, mobile devices are equipped with high-resolution cameras
and a variety of sensors, and are quickly becoming the primary devices for
generating personal multimedia content. Both the quality and the quantity of User
Generated Content (UGC) grow continuously. This naturally leads to end users' ever
increasing demand for sharing this high volume of UGC with others
instantly. Prominent examples of services allowing multimedia content sharing are YouTube, Dailymotion, and various social networking platforms
like Facebook and Google+. In addition, there is also a trend of instantly
backing up personal files in ``Cloud Storage'', such as Dropbox and iCloud.
To obtain a satisfactory user experience, users need sufficient uplink bandwidth for
fast bulk data transfer to the Internet. However, today's ISPs generally offer
highly throttled uplink bandwidth of around 1 to 3Mbps.
As a result, instant sharing of HD content or fast data backup in the ``Cloud''
is generally infeasible in today's residential broadband.
For example, the iPhone 5 records video at 1080p and 30fps, which translates to
around 200MB per minute. With a 3Mbps uplink, it takes over an hour to
upload a 10 minute video clip to iCloud! These limitations are
even more critical for users who desire to retain control over
their content and intend to share it directly from their homes.
This calls for solutions that scale the backhaul uplink.
In this work, we propose a complete software-based solution on WiFi Access Points for aggregating multiple broadband uplinks, with the assistance of the WiFi infrastructure in the same neighborhood. Our solution features complete transparency to client devices and high aggregated throughput for both TCP and UDP, even in lossy wireless environments. Our work is primarily motivated by the following observations:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item {\bf Asymmetric WiFi and broadband capacity:} In
contrast to the broadband uplink, WiFi has a much higher bandwidth; 802.11n
supports data rates of up to 600Mbps. With sufficiently high WiFi
bandwidth, it is beneficial to wirelessly communicate with multiple proximate APs
and ``harness'' the idle broadband uplink bandwidth behind them.
\item {\bf Mostly idle broadband uplinks}: Since February 2011, we have
developed and deployed a WiFi testbed \cite{open-infrastructure} in the Boston
urban area, aiming to monitor the usage pattern of residential broadband networks.
As shown in Table~\ref{table:hbt_summary}, this testbed consists of 30 home WiFi
APs running customized firmware based on OpenWRT \cite{openwrt}.
Each AP reports its network statistics
every 10 seconds. During an 18-month period, we have collected over 70 million
records. We observe that the broadband uplink utilization is very low. Figure~\ref{fig:idle}
shows the probability that the consumed uplink bandwidth stays below a
certain value during a 24-hour time window. Throughout the day, there is at least
a 50\% chance that the uplink is completely idle. Even
during peak hours, there is over a 90\% chance that the uplink
bandwidth usage is below 100Kbps. This implies that there exists a considerable
amount of idle uplink bandwidth, which makes bandwidth harnessing
through neighboring APs a viable approach for scaling the uplink
capacity.
\begin{table}
\centering
\begin{tabular}{lc}
\hline
\textbf{Location}&Boston urban area\\
\textbf{Home APs}&Comcast (26), RCN (4)\\
\textbf{Data collection time}& Feb. 2011 $\sim$ Dec. 2012\\%now\\
\textbf{Network stats samples}& 70 million\\
\hline
\end{tabular}
\caption{Data summary of Broadband usage statistics collected from residential WiFi testbed}
\label{table:hbt_summary}
\end{table}
\item {\bf WiFi densely deployed in residential areas:} The density
of WiFi APs is very high in residential areas. Already in 2005, the authors of~\cite{AkellaJSS2007} measured more than 10 APs per geographical
location. Recently, we conducted wardriving measurements
in 4 urban residential areas in Boston. Our results (Table~\ref{table:wardrv_summary})
indicate 3,192 unencrypted WiFi APs, accounting for 14.2\% of all APs detected during our wardriving. As shown in Figure~\ref{fig:wardriving_ap_density_per_channel}, there are on average 17 APs available
at each location, with on average 7 to 12 APs on each channel. This enormous
presence of WiFi APs also justifies the feasibility of the concept of bandwidth
aggregation via open APs.
\begin{table}[ht]
\centering
\begin{tabular}{lc}
\hline
\textbf{Total APs}&22,475 (100\%)\\
\textbf{Unencrypted APs}& 3,192 (14.2\%)\\
\hline
\end{tabular}
\caption{Boston Wardriving Data Summary}
\label{table:wardrv_summary}
\end{table}
\vspace{-10pt}
\item {\bf WiFi becoming open and social:}
Nowadays, end users have an ever increasing demand for ubiquitous Internet access. Driven by such demand, there is an emerging model in which broadband subscribers host two WiFi SSIDs: one encrypted for private use, and the other unencrypted, sharing part of their bandwidth as a public WiFi signal to mobile users, for free or for some payment in return. An unencrypted guest WLAN is now a standard feature in mainstream home WiFi AP products such as LinkSys and D-Link. FON \cite{fon}, a leading company in this area, claims to have over 7 million hotspots worldwide. In addition, WiFi APs are quickly becoming cloud-managed devices, such as FON and Meraki \cite{meraki}; AP firmware updates are regularly pushed from the cloud. Given this trend of WiFi quickly becoming social and cloud-powered, we believe that a software-based solution on WiFi APs allows easy incremental adoption of our technology.
\item {\bf Lack of efficient and practical solutions:} Despite a body of prior work
exploring how to aggregate wired bandwidth through WiFi, existing solutions either require heavy modification of the client, or support only specific applications, such as UDP-based bulk file transfer~\cite{link-alike}. Our goal is to design a client-transparent, software-based solution, which is easy to deploy and offers generic support for both TCP and UDP applications.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{figures/bw_pdf_tod_ul}
\vspace{-7pt}
\caption{\label{fig:idle} CDF of uplink bandwidth usage (per household) in residential broadband.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{figures/wardriving_ap_density_per_channel}
\vspace{-7pt}
\caption{\label{fig:wardriving_ap_density_per_channel}
Available APs per scanning in Wardriving.}
\end{figure}
With the above motivations and goals in mind, we design our \mytit system. Our major contributions in this work are summarized as follows:
\vskip 1eX
\textbf{Transparency to client}: \mytit does not require any modification to client devices. The client device, running in Station mode, transfers data via a unicast link to its ``home'' AP. Given the broadcast nature of wireless communication, such unicast packets can be ``heard'' by both the ``home'' AP and the ``neighboring'' APs on the same channel. They each upload a share of the
received (overheard) packets to the destination in a collaborative manner. Such
transparency to client devices allows all kinds of wireless client devices and a broad class of legacy network applications, such as streaming and large file transfer, to seamlessly utilize the \mytit system.
\textbf{Efficient aggregation for both TCP and UDP}:
Given our design goal of client transparency, some commonly adopted techniques in existing bandwidth aggregation solutions, such as parallel TCP flows \cite{KandulaLBK2008}, are no longer applicable, because they require client applications to intentionally establish multiple connections through different APs and transfer data in parallel. Multiplexing a single TCP flow through multiple paths raises many technical challenges, which makes efficient aggregation non-trivial. Our initial approach relied on coding across paths; however, we could show that a conceptually simpler mechanism, which we call \emph{Proactive-ACK}, combined with reliable 802.11 unicast to the ``home'' AP and adequate scheduling, is sufficient.
\textbf{Prototype with commodity hardware}: We prototyped our
complete \mytit system on commodity WiFi APs. We flash Buffalo 802.11n APs with Linux based OpenWRT~\cite{openwrt} firmware. As of today, OpenWRT supports devices by more than 50 manufacturers and hundreds of models. This gives us a great selection of compatible devices to deploy \mytit.
\textbf{Evaluation}: We have conducted an extensive set of experiments to evaluate \mytit in various realistic network settings. Our results show that \mytit achieves high
aggregated throughput in UDP transmission, harnessing over 95\% of total uplink bandwidth. With the \emph{Proactive-ACK} mechanism, \mytit harnesses over 88\% of total uplink bandwidth in TCP transmission.
\textbf{Design guideline for bandwidth sharing}: We propose a simple traffic shaping method on AP, which allows us to harness as much idle bandwidth as possible without affecting home users' regular network usage. We also give an estimation of idle uplink bandwidth under a typical residential broadband usage pattern.
Our paper is organized as follows. We first present an overview of the \mytit system.\ignore{ along with
two typical application scenarios.} The details of our design are discussed in
Section~\ref{sec:bapu}. We evaluate the performance of \mytit in Section~\ref{sec:evaluation}. In Section~\ref{sec:uplink}, we quantitatively evaluate the potential impact of uplink sharing on home users' regular network usage. We discuss related work in Section~\ref{sec:related} and conclude the paper in Section~\ref{sec:conclusion}.
\ignore{
There exists a plethora of previous research that considers
cooperation of APs to scale the quality and capacity of wired
backhauls, cf.~\citet{GiustinianoGLR2009, GiustinianoGTLDMR2010,
fatvap, NicholsonWN2009}. Yet, previous research focuses primarily
on using one physical WiFi card seamlessly to switch among multiple
APs in order to harness idle bandwidth -- while ascertaining fairness
among APs. However, such solutions may boost only \emph{download}
throughput. Upload of a TCP or UDP flow is usually assigned to a
single AP, therefore it is not clear that those solutions could lend themselves to overcoming the uplink
bottleneck.
A more related solution to our work is ``link-alike''~\cite{link-alike}, which
also suggests uplink aggregation via multiple APs, but it is designed specifically for UDP
based large file transfer only. We have built a ``link-alike'' based prototype and
evaluated it for TCP transfer experiments. The aggregated TCP throughput was much poorer
compared to the efficiency of UDP transmission. The degradation of TCP throughput was due
to the wireless lossy links between the client and the participating APs, and the ``uncontrolled''
order of packets forwarded by the APs. Simple aids based on coding or buffering schemes, however,
do not solve the issues (see Section~\ref{}), which are inherently rooted from the \emph{unreliable} broadcast
link between the client and the APs. Furthermore, ``link-alike'' requires heavy modifications of
not only the APs and destination, but the clients as well, therefore renders itself hard to
deploy in practical scenarios where client devices such as mobile phones are not desired
to be ``touched''.
}
\ignore{
While contemporary broadband downlinks offer sufficient bandwidth for
most applications, the protocol presented in this paper, ``\mytit'',
specifically addresses aggregation of highly limited uplinks.
Besides, for the sake of the applicability and ease of adoption,
\mytit targets a solution \emph{transparent} to clients, requiring
only minimal modifications. Moreover, \mytit targets a generic
support for both existing transport layer protocols, i.e., UDP
\emph{and} TCP, to support a wide range of popular applications such
as large file transfer, streaming, etc.}
\ignore{To have a better understanding of our design considerations and
motivations, we first present an overview of the BaPu architecture and
two example application scenarios. We will also summarize the key
features of BaPu and our experimental results.}
\ignore{
\textit{TCP friendly}: Contrary to previous work, \mytit allows
not only UDP, but also TCP multiplexing by employing a novel
mechanism, \emph{Proactive-ACKs}. This renders uplink aggregation
TCP friendly. Based on this technique, \mytit supports not only
large file upload, but also, e.g., HD video streaming.
}
\ignore{\item \textit{Support large file transfer and streaming}:
BaPu can work with a large set of
applications out-of-the-box. Prior work only supports UDP based large file transfer.
BaPu also supports HD video streaming. }
\section{Evaluation}
\label{sec:evaluation}
\begin{table}
\centering
\begin{tabular}{|l|c|}
\hline
Distance & RTT\\
\hline\hline
Regional: 500 - 1,000 mi& 32ms~\cite{akamai:hd}\\
\hline
Cross-continent: $\sim$ 3,000 mi& 96ms~\cite{akamai:hd}\\
\hline
Multi-continent: $\sim$ 6,000 mi& 192ms~\cite{akamai:hd}\\
\hline
inter-AP in Boston& 20ms $\sim$ 80ms\\
\hline
\end{tabular}
\caption{\label{tab:latency} Network RTT Latency. Inter-AP RTT
measured by our Open Infrastructure WiFi testbed in greater Boston,
representing typical
RTT between home APs, covering Comcast, RCN, and Verizon~\cite{open-infrastructure}.}
\end{table}
In this section, we evaluate the performance of \mytit for UDP
and TCP in various system settings.
\vskip 1eX\noindent{\bf Experiment Setup:} Our experiment setup is
shown in Figure~\ref{fig:bapu-exp-setup}. Our testbed consists of a
\s, 7 \ap{}s, a \gw, a \d node, and a traffic shaping box. All APs are
Buffalo WZR-HP-G300NH 802.11n wireless routers. This router model has
a 400MHz CPU with 32MB RAM. We reflashed the APs with
OpenWRT firmware, running Linux kernel 2.6.32 and
{\texttt{ath9k}} WiFi driver. In our experiments, we select one \ap to
act as a \home which the \s is always associated to. The other
6 \ap{}s act as \mon{}s to capture the traffic in monitor mode. The
\gw runs on a Linux PC, and the \d runs behind the \gw.
The \s and the \d are both laptops with 802.11n WiFi card,
running the standard Linux TCP/IP stack.
To emulate the traffic shaping of residential broadband, we place the
traffic shaping box between the \ap{}s and \gw. We use Linux {\tt
iptables} and {\tt tc} with the {\tt htb} module to shape the
downlink bandwidth to 20Mbps and the uplink to 2Mbps. Also,
to emulate the network latency between \ap{}s and \gw, we use {\tt
netem} to set the RTT to different values. The bandwidth
and latency parameters are selected to represent the typical bandwidth
capacity and regional latency of residential cable broadband that we
have measured in Boston's urban area (Table~\ref{tab:latency}).
\begin{table}
\centering
\begin{tabular}{|l|c|}
\hline
max UDP & 1.94 Mbps\\
\hline
max \mytit UDP & 1.82 Mbps\\
\hline
max TCP & 1.9 Mbps\\
\hline
max \mytit TCP & 1.8 Mbps\\
\hline
\end{tabular}
\caption{Maximum theoretical goodput for UDP and TCP with and without \mytit overhead. Data payload size is 1350Bytes. Uplink capacity is 2Mbps.}
\label{tab:limit}
\end{table}
In our experiments, we issue long-lived 30-minute {\tt iperf} flows
(both TCP and UDP) from \s to \d. We choose 1350 Bytes as the TCP/UDP
payload size in our {\tt iperf} tests to make sure that the whole
client IP packet can be encapsulated in a single IP packet when an \ap{}
sends it through its TCP tunnel. All throughput values reported in
our experiments are the {\tt iperf} throughput, i.e., the
\emph{goodput}.
In the evaluation, we compare throughput of UDP and TCP in different
system scenarios. More precisely, we evaluate the following scenarios:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item \basic: \mytit system without any buffering or Proactive-ACK
mechanism.
\item \buffering: \mytit system without Proactive-ACK mechanism, but
enhanced by buffering at \gw.
\item \proack: this is the full \mytit system.
\end{itemize}
\ignore{
In the above scenarios, in order to have a practical testbed environment,
we keep other features in \mytit such as network unicast, tunnel forwarding
with NAT-ing, centralized scheduler at \gw.}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}
\caption{Sender's TCP CWND growth compared between \basic and regular
single AP with 32ms RTT and total 14Mbps uplink.}
\label{fig:plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}
\end{figure}
\subsection{{\Large{\basic}}: Efficient UDP, Poor TCP}
\label{sec:eval-basic}
\ignore{
We evaluate the performance of \basic by deploying the testbed
with number of \ap{}s increasing from 1 to 7.}
\subsubsection{System efficiency with UDP throughput}
The practicality of \mytit lies in its efficiency. In contrast to
related work, \mytit's transparency goal, not requiring any
modifications at the client side, has motivated the design of \mytit's
underlying technical details. We now first measure \mytit's efficiency
by the throughput with UDP, as it provides a light-weight end-to-end
transmission between \s and \d. Figure~\ref{fig:basic-total-32ms}
shows the achieved aggregated UDP throughput with the number of
participating \ap{}s increasing from 1 to 7. We observe that the aggregated UDP throughput
increases proportionally with the number of \ap{}s, and reaches 12.4Mbps with 7 \ap{}s.
To put this figure into
perspective, note that related work by~\citet{link-alike} achieves
similar UDP throughput but without support for TCP or client transparency.
\subsubsection{Low TCP throughput}
We conduct the same experiments also for TCP
transmission. Figure~\ref{fig:basic-total-32ms} shows that the
aggregated TCP throughput does not benefit much when the number of
\ap{}s increases. The TCP aggregated throughput is always lower than
the UDP's in the same setup, and the gap between UDP and TCP
performance increases along with the number of \ap{}s. For example, we
achieve only 6.83Mbps with 7~\ap{}s.
\subsubsection{Aggregation efficiency}
In addition to measuring aggregated throughput, we evaluate our system
based on another metric, the \emph{aggregation efficiency}. We define
\emph{aggregation efficiency} as the ratio of the measured
throughput to the maximum theoretical goodput. Due to the TCP/IP
header and \mytit protocol overhead, the achievable goodput is less than
the uplink capacity. Accounting for all protocol header overhead, we
derive the maximum theoretical goodput for the given backhaul capacity
of 2Mbps. Table~\ref{tab:limit} lists the maximum goodput when
data is transmitted via standard UDP/TCP and via \mytit.
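As a rough illustration of how such bounds arise (counting only IP and transport headers on a 1350-Byte payload, and neglecting link-layer framing and \mytit's own encapsulation, so the values are slightly more optimistic than those in Table~\ref{tab:limit}):
\begin{displaymath}
\mbox{UDP: } \frac{1350}{1350+20+8}\times 2\,\mbox{Mbps}\approx 1.96\,\mbox{Mbps},
\qquad
\mbox{TCP: } \frac{1350}{1350+20+20}\times 2\,\mbox{Mbps}\approx 1.94\,\mbox{Mbps}.
\end{displaymath}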
As shown in Figure~\ref{fig:basic-total-percent-32ms}, \basic UDP can
harness close to 100\% of the idle bandwidth. Even if we consider the extra
overhead incurred by \mytit protocol messages, UDP aggregation
efficiency is still over 90\% in all cases. In contrast, the
aggregation efficiency for TCP degrades quickly as more \ap{}s join
the cooperation. With 7 \ap{}s, \basic transforms only 50\% of the idle
bandwidth into effective throughput.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/plot32ms-compare-basic-buffering}
\caption{\basic vs. \buffering comparison in TCP throughput with 2Mbps
32ms RTT uplinks.}
\label{fig:plot32ms-compare-basic-buffering}
\end{figure}
\begin{figure*}
\centering
\subfloat[\label{fig:proack_tcp_throughput}
Aggregated TCP throughput]{\includegraphics[width=0.9\columnwidth]{figures/plot32ms-tcp-compare}\vspace{-0.5cm}}
\subfloat[\label{fig:proack_tcp_th_vs_upper_limit} Aggregation efficiency]{\includegraphics[width=0.9\columnwidth]{figures/plot32ms-tcp-compare-percent}\vspace{-0.5cm}}
\vspace{-0.2cm}
\caption{\proack vs. \basic: comparison with 2Mbps 32ms RTT uplinks.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.9\columnwidth]{figures/plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd}
\caption{TCP sender CWND growth comparison: \proack vs. \basic vs. normal TCP.}
\label{fig:tcp_info_basic_vs_proack}
\end{figure*}
\subsubsection{Discussion: \basic's poor TCP performance}
\label{sec:eval-basic-cwnd}
Section~\ref{sec:proack} identified several factors that
decrease the aggregated TCP throughput. In this section, we analyze
the \s's CWND size in \basic. To support our analysis,
we inspect the TCP behavior by examining the Linux kernel TCP stack
variables. We call {\tt getsockopt()} to query the {\tt TCP\_INFO}
Linux kernel data structure. {\tt TCP\_INFO} includes the system time
stamp, the \s's CWND, number of retransmissions, etc. We have
modified the {\tt iperf} code to log {\tt TCP\_INFO} each time {\tt
iperf} calls {\tt send()} to write the application data to the TCP
socket buffer.
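A minimal sketch of this kind of instrumentation (illustrative only, not our exact {\tt iperf} patch) samples the congestion window and retransmission counters using only the standard Linux socket API:
{\tt \small
\begin{verbatim}
/* Minimal sketch: query TCP_INFO on a connected socket and print
 * the fields we track in the experiments. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>      /* struct tcp_info, TCP_INFO */

void log_tcp_info(int sockfd)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);

    memset(&info, 0, sizeof(info));
    if (getsockopt(sockfd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0) {
        /* tcpi_snd_cwnd is in units of MSS; tcpi_total_retrans counts
         * retransmitted segments over the connection's lifetime. */
        printf("%ld cwnd=%u rtt=%uus retrans=%u\n",
               (long)time(NULL), info.tcpi_snd_cwnd,
               info.tcpi_rtt, info.tcpi_total_retrans);
    }
}
\end{verbatim}
}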
Figure~\ref{fig:plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}
shows the CWND growth in a 120-second {\tt iperf} test with 7~\ap{}s.
The theoretical throughput here is $2\,\mbox{Mbps}\times7 = 14\,\mbox{Mbps}$.
In comparison, we carry out another {\tt iperf} test with standard TCP
through a single, regular AP with 14Mbps uplink capacity. The CWND
growth in a normal TCP connection is also shown in
Figure~\ref{fig:plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}.
As shown, the \s's CWND remains at a very low level. Our captured
packet trace at the \s shows that frequent DUPACKs and RTOs trigger a
large number of retransmissions, which results in very low TCP throughput.
\subsection{Does {\Large\buffering} help?}
\label{sec:eval-buffer}
As discussed in Section~\ref{sec:proack}, a simple \emph{buffering}
mechanism does \emph{not} solve the TCP performance issue due to
differences in \ap{} uplink characteristics (latency, packet loss). In
this section, we show experimentally that buffering alone does not
improve the TCP throughput. The experiment is performed with
equal uplink capacity and latency, i.e., we eliminate external factors
such as asymmetric links among \ap{}s.
Figure~\ref{fig:plot32ms-compare-basic-buffering} depicts the
throughput comparison between \basic and \buffering. Surprisingly,
the throughput is \emph{worse} with \buffering. We have also adjusted
the buffer size at \gw, but the throughput still remains as low as
shown in Figure~\ref{fig:plot32ms-compare-basic-buffering}. We have
investigated \s's CWND, and we have seen that it peaks at low values,
similarly to the behavior in \basic. The packet trace also shows a
lot of retransmissions.
\subsection{{\Large\proack} Performance}
\label{sec:eval-proack}
Now, we conduct a comprehensive set of experiments to evaluate the
performance of \proack. First, we validate our Proactive-ACK mechanism
by comparing \proack against \basic. Second, we measure the
performance of \proack under a variety of network settings (network
latency, wireless link quality, etc.). Finally, we demonstrate that
\proack is feasible for both streaming and large file transfer
applications.
\subsubsection{TCP Throughput -- \proack vs. \basic}
We carry out the same {\tt iperf} test as described in
Section~\ref{sec:eval-basic} with \proack. As shown in
Figure~\ref{fig:proack_tcp_throughput}, the aggregated TCP throughput
of \proack significantly outperforms that of \basic. With
7~\ap{}s, \proack achieves 11.04Mbps, i.e., a 62\% improvement over
\basic. Furthermore, Figure~\ref{fig:proack_tcp_th_vs_upper_limit}
shows that \proack achieves at least 88\% aggregation efficiency in
our setup, and it achieves at least 83\% of the upper limit of
standard TCP throughput. These results demonstrate that \proack can
achieve high aggregated throughput with high aggregation efficiency
for TCP in practical settings.
\begin{figure*}
\centering
\subfloat[\label{fig:latency-pro} Different RTT]{\includegraphics[width=0.9\columnwidth]{figures/plot-proack-tcp-latency-compare}}
\subfloat[\label{fig:diversity-pro} Different packet loss rate $P$ on \mon{}s]{\includegraphics[width=0.9\columnwidth]{figures/plot-diversity-tcp-compare}}
\caption{\proack TCP throughput.}
\end{figure*}
\subsubsection{Proactive-ACK benefit}
To justify our Proactive-ACK mechanism, we adopt the same method as in
Section~\ref{sec:eval-basic-cwnd} to examine the TCP CWND
growth. Figure~\ref{fig:tcp_info_basic_vs_proack} shows that \proack
allows the CWND to grow to very high values, contributing to the high
throughput. For comparison, we also run a regular TCP session with
bandwidth throttled to 11Mbps (similar to \proack's resulting
throughput). The CWND growth for \proack and regular TCP shares a
similar pattern, which implies that our design and implementation can
efficiently and transparently aggregate multiple slow uplinks.
\subsubsection{Impact of network latency}
For TCP transmissions, RTT is an important factor affecting
the throughput. In another experiment, we measure the performance of
\mytit with 4 different network latency settings listed in
Table~\ref{tab:latency}. Each latency represents a particular application
scenario. For example, when users upload HD video with \mytit to CDN
edge servers, the latency to CDN servers is generally the regional
latency (32ms). When users upload data to their friends in another
continent, the RTT is on average 192ms. Moreover, according to our
measurements in a residential WiFi testbed~\cite{open-infrastructure},
the latency between broadband subscribers may vary
considerably, ranging between 20ms and 80ms, depending on whether the
users are with the same or a different ISP. Thus, when a user
uploads data with \mytit through neighboring APs to another user
in the same city, the latency between the individual \ap{}s and the end
user can differ considerably.
Given a certain number of APs, we assign to each \ap a random RTT
value between 20ms and 80ms. We carry out this test for 10 runs and
report the average throughput. As shown in
Figure~\ref{fig:latency-pro}, \proack throughput slightly declines as
network latency increases. In the random-latency setting, the resulting
throughput shows no significant difference.
\subsubsection{Impact of lossy wireless links}
The wireless links in a real neighbourhood can be very lossy for a
variety of reasons, such as cross channel interference and distant
neighboring APs. Besides, since \mon{}s switch between transmit and
receive mode, they cannot overhear all transmitted packets. To
estimate the potential of \mytit in highly lossy wireless environments,
we emulate packet loss at \mon{}s by dropping received packets with a
probability $P$. No losses were inflicted on \home, because \s
carries out unicast to \home, and 802.11 MAC already handles packet
loss and retransmissions automatically. We conduct the experiment with
3 values of $P$: 20\%, 40\%, and 60\%.
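A minimal sketch of this loss emulation (illustrative only; the hook into the actual capture path is omitted):
{\tt \small
\begin{verbatim}
/* Emulate a lossy overhearing link at a Monitor-AP by discarding
 * each captured frame with probability P before it is buffered or
 * reported to the gateway. */
#include <stdlib.h>

static double loss_prob = 0.60;            /* configured value of P */

static int frame_is_dropped(void)
{
    /* returns 1 with probability P: pretend the frame was never overheard */
    return ((double)rand() / RAND_MAX) < loss_prob;
}
\end{verbatim}
}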
As indicated in Figure~\ref{fig:diversity-pro}, the throughput
reduction on lossy wireless links is very limited in all cases. The
good performance can be explained by the link diversity combined with
the centralized scheduling mechanisms. The probability that a packet
is not overheard by \emph{at least one} \mon is negligibly small,
especially for a high number of participating APs. This also
explains why 7~\ap{}s achieve higher throughput with $P=60\%$ than
with $P=20\%$.
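A back-of-the-envelope estimate supports this (assuming, for simplicity, that the \mon{}s miss packets independently): with $m$ participating \mon{}s, the probability that a given packet is overheard by none of them is $P^{m}$. For the 7-\ap{} setup ($m=6$ \mon{}s plus \home), this is $0.6^{6}\approx 5\%$ at $P=60\%$ and below $10^{-4}$ at $P=20\%$; moreover, \home{} always receives the packet over its unicast link.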
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/plot-bursty}
\caption{Instantaneous received-throughput comparison: 11Mbps
streaming vs. unlimited sending rate.}
\label{fig:bursty}
\end{figure}
\subsubsection{Streaming vs. large file transfer}
One important goal in \mytit's design is to support instant sharing of
high-bitrate HD videos directly between users using streaming. The
motivation is that today's major online streaming services
(e.g., Netflix) run on TCP-based streaming technologies, such as
HTTP-based Adaptive Bitrate Streaming. Real-time streaming generally
requires \emph{stable} instantaneous throughput. In this experiment,
we study the potential of \mytit as a solution to high-bitrate
real-time streaming.
To emulate the streaming traffic, we use {\tt nuttcp} to issue a TCP
flow with a fixed 11Mbps sending rate. Figure~\ref{fig:bursty} shows
\d's instantaneous throughput in a 100-second session. \d achieves
reasonably stable throughput over the whole session. This indicates that
\mytit can sustain high-bitrate streaming through aggregated
uplinks. In comparison, the {\tt iperf} flow with unlimited sending
rate shows much higher fluctuation.
\ignore{
During our {\tt iperf} measurement, even though the average TCP throughput is quite stable in various settings, we find that
the \d's instantaneous throughput fluctuates considerably (Figure~\ref{fig:bursty}).
Our packet trace inspection reveals that due to the latency difference
among APs, the arrival order of the scheduled TCP segments is uncertain. Therefore, the BaPu-Gateway sometimes must buffers
the out-of-order segments until the expected ones arrive. Besides, since iperf always tries to saturate the link, as the throughput on the WiFi link overruns the aggregated backhual uplink capacity, the out-of-order segment arrival becomes more severe. Both reasons result in the bursty receiver end throughput.
The iperf throughput only indicates that BaPu is suitable for some applications like instant backup of large files in the cloud. This aligns with the findings of prior work. However, it tells only one side of the story. As we design BaPu, the other important goal is to support instant sharing of high bitrate HD video directly from users' home WiFi, in streaming mode. The motivation behind such goal is that today's main stream online streaming services (e.g. Netflix) run on TCP based streaming technologies, such as HTTP based Adaptive Bitrate Streaming. Real time streaming generally requires stable instantaneous throughput. In this experiment, we would like to study the potential of BaPu as a solution to high bitrate real time streaming.
Unlike file uploading, streaming application generally has fixed transmitter rate, determined by the codec bit rate. To emulate the streaming traffic, we issue the TCP flow with a tool called \emph{nuttcp}. nuttcp can do rate limiting at transmitter. Figure \ref{fig:bursty} shows the receiver end instantaneous throughput every second in a 100 second session. In the streaming flow with 11Mbps fixed transmitter rate, the receiver end achieves reasonably stable throughput in the whole session. It indicates that BaPu can sustain high bit rate streaming through aggregated links. In comparison, the iperf flow with unlimited transmitter rate shows much higher fluctuation. }
\section{System Overview}
\subsection{Application Scenarios}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/bapu-2scenarios}
\caption{\label{fig:bapu-arch} \mytit system architecture and two
example application scenarios.
Scenario 1 (left): Sender~1 shares an HD video
with a remote end user through a \mytit-enabled Home-AP and
neighboring Monitor-APs.
Scenario 2 (right): Sender 2 backs up a large file
to iCloud through a \mytit-enabled Home-AP and Monitor-APs.
}
\end{figure}
For ease of understanding, we first introduce two typical application
scenarios that benefit from \mytit -- see Figure~\ref{fig:bapu-arch}.
\textbf{Scenario 1: Instant Sharing of HD Video:} In order to
retain control of his personal content, Sender~1 shares his HD
video directly from his hard drive and streams it instantly, i.e., in
real-time, to the other user -- Destination~1. Both users are
connected to their Home-APs, with an uplink connection from Sender~1
to Destination~1 throttled to 1 $\sim$ 3Mbps by Sender~1's ISP. The HD
video has an 8Mbps playback rate (a standard 1080p video bit rate), so
Sender~1's single uplink cannot handle this content in
real-time. With \mytit, however, the idle uplinks of the neighboring
Monitor-APs are exploited to boost the uplink throughput. The
\gw, the Home-AP of Destination 1, acts as
the \emph{aggregator} that receives and forwards multiplexed traffic to
Destination~1.
\textbf{Scenario 2: Instant Backup of Large File:} Sender~2
wishes to back up his HD video clip to some cloud storage service such
as iCloud. With a 3Mbps uplink rate, it takes over an hour to
upload a 30-minute HD video. With \mytit, neighboring Monitor-APs and
Home-AP upload data in parallel. iCloud just needs to deploy a
gateway server in front of the cloud storage servers. This gateway
server runs the \gw software to handle parallel uploading
from multiple paths. Using \mytit, file uploading time is greatly
reduced.
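As a rough check (assuming the 8Mbps clip of Scenario 1), a 30-minute recording amounts to $30\times 60\,\mbox{s} \times 8\,\mbox{Mbps} = 14400\,\mbox{Mb} \approx 1.8\,\mbox{GB}$; uploading it at 3Mbps takes $14400/3 = 4800$ seconds, i.e., 80 minutes, whereas $n$ aggregated uplinks of similar capacity cut this time roughly by a factor of $n$.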
\textbf{Security and Privacy:} In both application scenarios, the APs are configured
to have two SSIDs: an encrypted SSID (e.g., WPA) for private use, and an unencrypted (open) SSID. The \mytit traffic is carried over the neighboring APs' unencrypted SSIDs with end-to-end security (e.g., SSL/TLS), while the main SSID of participating APs remains encrypted.
\subsection{{\large\mytit} Description}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/bapuap.png}
\caption{\label{fig:bapu-ap}\mytit-AP Building components.}
\end{figure}
First, we introduce the notation used in this paper:
\noindent
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{\s}: device uploading data to a remote destination.
\item{\d}: device receiving uploaded data from a remote sender.
\item{\home}: AP which \s is associated to.
\item{\mon}: a set of APs in physical proximity to \s
and \home; these neighboring APs, the \mon{}s, run in
\emph{monitor mode} on the same WiFi channel as \home.
\item{\gw}: gateway device connected to
\d. As shown in Figure~\ref{fig:bapu-arch}, \gw can
be the WiFi AP of the \d or the gateway server at the edge of
some cloud data center.
\item{\ap}: for abstraction, \home and
\mon will be called \ap, thereby
representing the role that APs play in a \mytit data upload session.
\end{itemize}
In \mytit, \s is associated with its \home, and the uploaded data
are aggregated via an unencrypted wireless link. The data, however, are
protected by some end-to-end security mechanism (e.g., SSL/TLS).
\home{} and \mon{} are configured
to run in \emph{both} WiFi AP mode and WiFi monitor
mode\footnote{Modern WiFi drivers, such as the prominent
{\texttt{Ath9k}} family, allow one physical WiFi interface to
support running in multiple modes}. The general \ap{} setup is
illustrated in Figure~\ref{fig:bapu-ap}.
The WiFi link between a \s{} and its \home{} generally provides high
bandwidth, up to hundreds of Mbps with 802.11n. The link between a
\ap{} and the \d{}, however, is throttled by the ISP. At the remote
end, we place a \gw{} immediately before the \d{}. The connection
between the \gw{} and the \d is a wired or wireless
high-speed link. Note that being in physical proximity, unicasts
between \s{} and \home{} (AP mode) can be overheard by (some of) the
neighboring \mon{}s (monitor mode).
At a high level, \mytit is a centralized system with the controller
residing at \gw. \mytit aggregates uplink bandwidth
as follows.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/bapu-pro-flow-compact}
\vspace{-0.1in}
\caption{\mytit Protocol Traffic Flow. The ACKs (red color) are managed for TCP only.}
\label{fig:bapu-pro-flow}
\end{figure}
\begin{enumerate}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item \label{step:init}\s{} starts a TCP or UDP upload session to \d{} through its
\home{} via WiFi.
\item \label{step:identify}\home{} and \mon{} overhear WiFi packets and identify whether this
could be a ``\mytit'' session by checking the destination IP and
port. In our prototype described in Section~\ref{sec:evaluation}, we choose a
specific UDP/TCP port for all traffic that allows bandwidth
aggregation.
\item \label{step:register}\ap{}s register themselves as a \emph{contributor} to \gw.
\item In \mytit, \home{} and \mon{} collaborate to upload data for
\s{}, following a schedule that is determined by \gw{}. We will
explain this scheduling mechanism and protocol details later in
Section~\ref{sec:bapu}.
\ignore{Once \home{} identifies a \mytit
session for \s{}, it dynamically updates its {\tt iptables} rules to
prevent the \mytit session from being forwarded to WAN along the
default route.}
Practically speaking, \home{} and \mon{} capture \s{}'s packets
with {\tt libpcap} via the monitor-mode interface, and store them in a
buffer.
\item \label{step:overhear}For each packet overheard, \home{} and \mon{} send packet
reception reports to the \gw{}.
\item \label{step:report-udp}For a \emph{UDP} session, on reception of the reports, \gw{}
determines which \ap{} will forward the captured packet in step~\ref{step:forward}.
A scheduling message is then prepared that includes
the selected \ap{}'s identity, and this scheduling message is
\emph{broadcast} back to all \ap{}s participating in the current
session (an illustrative sketch of these messages is given after the protocol description).
\item \label{step:report-tcp}A \emph{TCP} session is much more challenging to support than
UDP. To properly multiplex \s{}'s single TCP flow through multiple
paths, \gw{} adopts a mechanism we call \emph{Proactive-ACK}: \gw{}
sends spoofed TCP ACKs to \s{} as the \mytit session goes
on. \emph{Proactive-ACK} is designed to make \mytit work
efficiently with legacy TCP congestion control. We explain the
details in Section~\ref{sec:bapu}.
\item \label{step:forward}The scheduled AP forwards the buffered packets to \gw{}, which
forwards to \d{}.
\item \label{step:downlink}Any downlink traffic from \d{} to \s{} just follows the default
network path, i.e., from \d over \gw and \home to \s.
\end{enumerate}
Figure~\ref{fig:bapu-pro-flow} shows \mytit's protocol flow. In the
following section, we discuss \mytit's research challenges in
detail and give insight into our design decisions at each step of the
protocol.
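To make the control messages in steps~\ref{step:overhear} and~\ref{step:report-udp} more concrete, the sketch below shows one plausible layout of a reception report and a scheduling message; the field names and sizes are illustrative assumptions and not \mytit's actual wire format.
{\tt \small
\begin{verbatim}
/* Hypothetical message layouts, for illustration only. A reception
 * report tells the gateway that an AP has buffered a given packet;
 * a scheduling message tells all participating APs which one of them
 * should forward that packet. */
#include <stdint.h>

struct bapu_reception_report {
    uint32_t session_id;    /* identifies the client upload session  */
    uint32_t ap_id;         /* reporting Home-AP or Monitor-AP       */
    uint32_t packet_id;     /* e.g. TCP sequence number or UDP index */
};

struct bapu_schedule_msg {
    uint32_t session_id;
    uint32_t packet_id;     /* packet to be forwarded                */
    uint32_t chosen_ap_id;  /* AP selected by the gateway scheduler  */
};
\end{verbatim}
}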
\subsection{Preliminary Evaluation}
\label{sec:prelim-exp}
To demonstrate the feasibility and performance of BaPu, we
first present a preliminary experimental evaluation of BaPu-Basic.
We describe our prototype implementation of BaPu and the
experiment setup. Then, with a set of experimental results, we show
that BaPu-Basic achieves high aggregated throughput for UDP, but low
throughput for TCP. Our analysis sheds light on the design of
BaPu-Pro, which significantly improves the aggregated TCP throughput.
\subsubsection{Implementation}
\label{sec:implement}
Both BaPu-AP and BaPu-Gateway are implemented as Linux user-space programs
comprising about 8000 lines of C code. Each BaPu-AP has an Atheros chipset running the ath9k driver,
and is configured in both AP mode and monitor mode.
On the BaPu-AP, we use {\tt libpcap} to
capture the raw 802.11 traffic from the sender. Since the home-AP also forwards received packets following the
schedule decided by the BaPu-Gateway, we install appropriate {\tt iptables} rules to disable the default packet routing through
the home-AP.
We wrote an 802.11 frame
parser and a TCP/IP header parser to extract the raw packet information, including
BSSID, IP addresses, ports, etc.
At the BaPu-Gateway, we use
a \emph{Linux raw socket} to inject the tunnelled IP packets towards the actual
destination.
In our prototype, the BaPu-AP program is cross-compiled to run on
commodity home WiFi APs, reflashed with OpenWRT firmware.
The BaPu-Gateway runs on a Linux PC.
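As a concrete illustration of this capture path (a minimal sketch under stated assumptions: the monitor interface name {\tt mon0}, the port number, and the parser stub are placeholders, and error handling is abbreviated), the monitor-mode capture loop can be set up with {\tt libpcap} roughly as follows:
{\tt \small
\begin{verbatim}
/* Sketch of a BaPu-AP's monitor-mode capture loop using libpcap. */
#include <pcap.h>
#include <stdio.h>

static void on_frame(u_char *user, const struct pcap_pkthdr *h,
                     const u_char *bytes)
{
    /* Placeholder for the 802.11 + TCP/IP parser described above: it
     * would strip the radiotap/802.11 headers, extract BSSID, addresses
     * and ports, and buffer packets that belong to a BaPu session. */
    (void)user; (void)h; (void)bytes;
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("mon0", 65535, 1 /* promisc */, 10, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap: %s\n", errbuf);
        return 1;
    }

    /* Only the agreed BaPu port needs inspection; 5001 is an assumed
     * value, and the filter applies to unencrypted overheard frames. */
    struct bpf_program fp;
    if (pcap_compile(p, &fp, "tcp port 5001 or udp port 5001", 1,
                     PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(p, &fp);

    pcap_loop(p, -1, on_frame, NULL);
    pcap_close(p);
    return 0;
}
\end{verbatim}
}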
\subsubsection{Experiment Setup}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/bapu-exp-setup}
\caption{BaPu experiment setup. 7 BaPu-APs and 1 BaPu-Gateway are interconnected.
A traffic shaping box is set up in between to emulate typical residential
network settings.}
\label{fig:bapu-exp-setup}
\end{figure}
Our experiment setup is shown in Figure~\ref{fig:bapu-exp-setup}.
Our testbed consists of 1 client, 7 BaPu-APs, 1 BaPu-Gateway,
1 destination node, and a traffic shaping box. Each AP is a Buffalo WZR-HP-G300NH
802.11n wireless router with a 400MHz CPU and 32MB RAM.
We have reflashed the APs with OpenWRT firmware, with Linux kernel 2.6.32
and the ath9k WiFi driver. In our experiments, 1 BaPu-AP acts as the home-AP with which the client
is always associated. The other 6 BaPu-APs act as neighbouring monitor-APs that
capture the traffic in monitor mode. The BaPu-Gateway runs on a
Linux PC, and the destination node runs behind the Gateway
as an in-LAN node. The legacy client and the destination are both laptops with an 802.11n WiFi card,
running Ubuntu Linux.
To emulate the traffic shaping in residential broadband, we install a Linux PC
between the BaPu-APs and the BaPu-Gateway. We use {\tt tc} (with {\tt htb}) to shape the bandwidth to
20Mbps downlink and 2Mbps uplink.
Also, to emulate the network latency between BaPu-APs and BaPu-Gateway, we use the {\tt netem} module of {\tt tc} to
set the latency to a 32ms RTT.
The bandwidth and latency parameters are selected based on our measurements on the Open Infrastructure urban WiFi testbed,
and represent typical bandwidth capacity and regional latency of residential cable broadband in the Boston urban area.
In all our experiments, we issue long-lived {\tt iperf} flows
(TCP or UDP) from the client to the destination. We choose a 1350-byte TCP/UDP
payload size in our {\tt iperf} tests to make sure that the whole client IP packet can be
encapsulated in one IP packet when the BaPu-AP sends it through the TCP tunnel.
Each {\tt iperf} test lasts for 30 minutes.
All throughput values reported in our experiments are the {\tt iperf} throughput,
which is the ``goodput''.
\subsubsection{Aggregated Throughput: UDP and TCP}
We evaluate the performance of BaPu-Basic by deploying (up to) 7 BaPu-APs.
Figure~\ref{fig:basic-total-32ms} shows the achieved aggregated throughput with different numbers of participating APs.
For UDP, the aggregated throughput increases proportionally with the number of APs. With
7 APs, we achieve 12.4Mbps aggregated UDP throughput. In contrast, the aggregated TCP throughput does not benefit much when the number of APs increases. The aggregated TCP throughput is always lower than the corresponding UDP value in the same setup. With 7 APs, we achieve only 6.83Mbps throughput.
\begin{figure*}
\centering
\subfloat[Throughput for UDP and TCP. 32ms RTT, 2Mbps uplink
\label{fig:basic-total-32ms}]{\includegraphics[width=\columnwidth]{figures/plot32ms-basic-total}}
\subfloat[\label{fig:basic-total-percent-32ms} Efficiency
for UDP and TCP. 32ms RTT, 2Mbps uplink
]{\includegraphics[width=\columnwidth]{figures/plot32ms-basic-total-percent}}
\caption{BaPu-Basic Aggregation}
\end{figure*}
\ignore{
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot32ms-basic-total}
\caption{BaPu-Basic Aggregated Throughput for UDP and TCP. 32ms RTT, 2Mbps uplink}
\label{fig:basic-total-32ms}
\end{figure}
}
\subsubsection{Aggregation Efficiency}
Aside from the aggregated throughput, we use another metric, \emph{aggregation efficiency}, to evaluate the performance of BaPu. We define the aggregation efficiency as the ratio of the resulting throughput to the maximum theoretical goodput. Due to the TCP/IP header and BaPu protocol overhead, the actual goodput is less than the uplink capacity. Accounting for all protocol header overhead, we derive the maximum theoretical goodput for the given backhaul capacity of 2Mbps.
Table~\ref{tab:limit} lists the maximum throughput when data is transmitted via standard UDP/TCP and via our BaPu system.
\begin{table}
\centering
\begin{tabular}{|l|c|}
\hline
Uplink Capacity & 2 Mbps\\
\hline \hline
max UDP & 1.94 Mbps\\
\hline
max \mytit UDP & 1.82 Mbps\\
\hline
max TCP & 1.9 Mbps\\
\hline
max \mytit TCP & 1.8 Mbps\\
\hline
\end{tabular}
\caption{Maximum theoretical goodput for TCP and UDP. The data payload size is 1350 bytes.}
\label{tab:limit}
\end{table}
As shown in Figure~\ref{fig:basic-total-percent-32ms}, BaPu-Basic UDP can harness close to 100\% of the idle
bandwidth. Even if we consider the extra overhead incurred by BaPu protocol messages, the UDP aggregation
efficiency is still over 90\% in all cases. In contrast, the aggregation efficiency for TCP degrades
quickly as more APs join the cooperation. With 7 BaPu-APs, we transform only 50\% of the idle bandwidth
into effective throughput.
This shows that 1) the BaPu-Basic mechanism design is efficient for UDP, and 2)
our prototype implementation is efficient from an engineering perspective.
Next, we discuss the reasons for the poor TCP performance of BaPu-Basic.
\ignore{
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot32ms-basic-total-percent}
\caption{BaPu-Basic Efficiency of Aggregation for UDP and TCP. 32ms RTT, 2Mbps Uplink Capacity}
\label{fig:basic-total-percent-32ms}
\end{figure}
}
\subsubsection{A Look inside Poor TCP Performance}
\label{sec:bapubasic-tcp}
In this section we carry out an in-depth analysis of TCP behaviour in BaPu-Basic.
Our analysis reveals that some inherent properties of TCP
raise challenges and limit the throughput achievable with
the simple design of BaPu-Basic.
TCP ensures successful and in-order data delivery between the
sender and the receiver.
\ignore{TCP relies on two major mechanisms, flow control and congestion control.
The former one prevents the sender from overrunning the receiver buffer. The latter one prevents the sender from overrunning the network path between the sender and the receiver.}
In TCP, each packet is
identified by a sequence number and must be acknowledged by the receiver
to indicate proper delivery. The sender maintains a dynamic CWND (congestion window)
as the communication goes on, which bounds the number of packets in
flight. If a packet is not ACK'ed before an RTO (retransmission timeout) occurs,
the sender issues a retransmission. If the receiver observes an out-of-order sequence, this generally implies that the missing sequence number is lost or delayed due
to a congested network. In this case, the receiver sends a DUPACK (duplicate ACK)
to inform the sender of the missing sequence. By default, the sender issues
a fast retransmission upon receiving 3 DUPACKs. In either of these two cases, the sender
reduces the CWND accordingly to slow down the sending rate and adapt to the congested
network or slow receiver.
TCP was designed under the assumption that out-of-order
delivery is generally a good indicator that a packet has been lost or that the network path is
experiencing congestion. However, this assumption no longer holds
in BaPu. In BaPu, the packets belonging to the same TCP session are intentionally routed through
multiple APs, which have diverse backhaul connections in terms of capacity,
latency, traffic load, etc. This results in severe packet reordering at
the BaPu-Gateway. As the BaPu-Gateway forwards the out-of-order packets to the
destination, TCP's congestion control mechanism limits the sender's
CWND to a very low value. The throughput is consequently severely limited.
To further justify our analysis, we inspect the TCP behaviour by examining
the Linux kernel TCP stack variables. We use the {\tt getsockopt()}
call to query the {\tt TCP\_INFO} Linux kernel data structure.
{\tt TCP\_INFO} includes the system time stamp, the sender's
CWND, retransmission counters, etc. We have modified the iperf client to log
{\tt TCP\_INFO} each time
iperf calls {\tt send()} to write the application buffer data to the TCP
socket buffer.
Figure~\ref{fig:plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}
shows how the TCP CWND grows in a 120-second iperf test with 7 BaPu-APs.
With 7 APs, we expect to obtain $2\,\mbox{Mbps}\times7 = 14\,\mbox{Mbps}$ throughput.
In comparison, we carry out another iperf test with normal TCP through a
regular AP with 14Mbps uplink capacity. The CWND growth of the normal TCP
connection is also shown in Figure~\ref{fig:plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}.
In the BaPu-Basic TCP session, the sender-side congestion window remains
at a very low level. Our captured packet trace at the client shows that frequent
DUPACKs and RTO timeouts incur a large number of retransmissions, which results in
very low TCP throughput.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}
\caption{TCP sender CWND growth, BaPu-Basic vs. normal TCP. 32ms RTT and 2Mbps uplink }
\label{fig:plot-basic-7ap-32msrtt-5min-iperf-tcpinfo-cwnd-retrans}
\end{figure}
\section{Related Work}
\label{sec:related}
The \mytit system is inspired by design principles of several earlier protocols; however, it addresses unique constraints and goals, resulting in a novel combination of techniques that achieves high efficiency. Several earlier works improve the performance of TCP over wireless links by using intermediate nodes to assist in the recovery of lost packets, including Snoop TCP~\cite{BalakrishnanSAK1995} and Split TCP for ad hoc networks~\cite{KoppartyKFT2002}. Using multiple radio links to improve device throughput has also been explored
from several perspectives, including traffic aggregation~\cite{KandulaLBK2008}, multipath
forwarding~\cite{link-alike}, and mitigation of wireless losses~\cite{MiuBK2005,miu2004divert}. In
addition to systems that rely on multiple radio interfaces~\cite{BahlAPW2004},
many solutions and algorithms were proposed for a single radio interface that
carefully switches across multiple access points while providing the upper
layers of the network stack transparent access through a virtual
interface~\cite{VirtualWiFi,ChandraB2004,XingML2010, KandulaLBK2008}. Solutions that overcome
the APs' limited backhaul through aggregation using a virtualized radio interface
include the initial Virtual-WiFi system, where two TCP connections might be
serviced through two different APs~\cite{VirtualWiFi}; FatVAP, which achieves
fast switching and smart AP selection~\cite{KandulaLBK2008}; ARBOR, which adds
security support~\cite{XingML2010}; and Fair WLAN, which provides
fairness~\cite{GiustinianoGTLDMR2010}. Many of these systems require techniques
for fast switching across access points to reduce the impact on TCP performance
in terms of delay and packet loss, as proposed in Juggler~\cite{NicholsonWN2009} and
WiSwitcher~\cite{GiustinianoGLR2009}. In \cite{Soroush:2011ki}, an analytical model
is proposed to optimize concurrent AP connections for highly mobile clients. The authors also implement
Spider, a multi-AP driver using optimal AP and channel scheduling to improve the aggregated throughput.
Unlike BaPu, these works do not aggregate throughput for a single transport-layer connection, which is critical for client transparency.
Divert~\cite{miu2004divert} proposes a central controller that selects the optimal AP across multiple BSes
in WLANs in order to reduce the path-dependent downlink loss from AP to client.
Rather than improving the wireless link quality, BaPu aims to aggregate the wired capacity behind the APs.
In BaPu, the sender communicates with its home AP as usual.
However, BaPu does benefit from the link diversity across APs while aggregating the backhaul capacity.
ViFi~\cite{Balasubramanian:2008vo} proposes a probabilistic algorithm for coordination between base stations to
improve the opportunistic connectivity of the client--BS communication. Similar to Divert, ViFi does not aggregate throughput. Also, in Section~\ref{sec:schedule}, we show that such probabilistic solutions limit throughput aggregation.
The closest approach to our work is the Link-alike system, where
access points coordinate to opportunistically schedule traffic over their backhaul links~\cite{link-alike}.
Our approach differs from previous work in that it requires the client devices to remain
unchanged (a unicast connection to the AP) and transparently supports protocols like TCP.
Being completely transparent to the clients and constraining each
AP--Destination flow to be TCP-friendly makes efficient multipath transport a
key component of our system. There has been a significant amount of work in this
area for quite some time, from various perspectives that are fairly different
from our setup. Previous work identified the issues with differential delays and
rates, and mostly focused on providing reliability, flow balancing, and
fairness. The proposed solutions require modification of
the client devices' network stacks, and usually do not aim at increasing capacity
through simultaneous use of all paths. For example, the IETF has two standards
for transport protocols using multiple paths. The Stream Control Transmission Protocol
(SCTP) was primarily designed for multi-homed devices that require fail-over
support~\cite{SCTP}, while the more recent Multi-Path TCP (MPTCP) is a TCP extension
that aims at enabling nodes to communicate efficiently over multiple
parallel paths~\cite{MPTCP}. In recent work, \cite{WischikRGH2011} proposed a
congestion control mechanism for MPTCP with the objective of reliability and
fairness. Other transport protocols that require modification of the client
devices include pTCP~\cite{HsiehS2002}, an end-to-end transport protocol that
achieves bandwidth aggregation on multi-homed mobile hosts;
RCP~\cite{HsiehKZS2003}, a receiver-centric protocol that aggregates
traffic across heterogeneous interfaces and carries out link-specific congestion control;
R-MTP~\cite{MagalhaesK2001}, which balances and coordinates traffic over wireless
links with varying characteristics; and Horizon~\cite{RadunovicGG2008}, which uses a
back-pressure technique to balance the traffic across forwarding paths. Beyond
mobile communication environments, multipath TCP features have also been finding
applications in various networking environments such as data
centers~\cite{RaiciuBPGW2011, BarrePB2011}. The distinguishing element of \mytit is that
it aims at transparently supporting unmodified client devices and TCP/IP stacks while efficiently
aggregating the APs' backhauls.
\section{Problem Statement and \\Architectural Challenges}
Our system setup is the following. In a residential area, a
client device, the ``source'' $S$, sends large amounts of data
wirelessly to a destination $D$. As an example scenario, imagine a
user with a handheld device wirelessly uploading a large video to
YouTube. Client $S$ connects to its home access point
$\hap{H}$, which is connected to the Internet and forwards data to
$D$. However, it turns out that the performance bottleneck in this
scenario is the throttled backhaul of $\hap{H}$. Typically, the
wireless channel between $S$ and $\hap{H}$ offers a higher throughput
than $\hap{H}$'s Internet backhaul. For example, 802.11n provides
$T_{\mathrm{Wifi}}=54$ Mbps to $T_{\mathrm{Wifi}}=600$ Mbps while,
today, typical cable backhauls are throttled by Internet providers to
a throughput between $T_\mathrm{backhaul}= 1$ Mbps and
$T_\mathrm{backhaul}= 3$ Mbps.
Yet, in residential areas, there is often more than just the home
access point $\hap{H}$ within communication range of $S$ (typically tens of APs). That is, a list
of $n$ access points $\{\hap{1},\ldots{},\hap{n}\}$ can overhear (some
of the) packets sent from $S$ to $\hap{H}$. If these ``neighboring''
access points are idle, it seems straightforward to utilize their
spare backhaul capacity and assist $S$ in delivering its data more
quickly to $D$. The main rationale is to \emph{split} the burden of
delivering $S$'s packets among all APs in communication range.
However, such a system faces several challenges.
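Ignoring protocol overhead and coordination cost (a simple upper-bound sketch rather than a performance claim), the attainable aggregate upload rate with $n$ participating APs is bounded by
\[
T_{\mathrm{agg}} \;\le\; \min\Bigl(T_{\mathrm{Wifi}},\; \sum_{i=1}^{n} T_{\mathrm{backhaul},i}\Bigr),
\]
so for typical residential numbers the WiFi link is far from being the limiting factor.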
\paragraph{Challenges}
\vskip 2eX \noindent{\bf Client Transparency:} First of all, for
client $S$, the system should work transparently. There should be no
requirement to change $S$'s network stack. For $S$, the system should
look like standard (e.g., TCP) communication with $D$. This is
motivated by the enormous heterogeneity of possible end-user
devices. Changes to the network stack would need to be
implemented and maintained on many different architectures. Instead,
for a massive deployment of such a system, we argue that only the
APs' and destination $D$'s network stacks should undergo changes. In
the real world, we expect this to be much more feasible. For example,
Internet service providers and AP manufacturers can cooperate with
content providers such as YouTube.
\vskip 1eX\noindent{\bf Lightweight Coordination:} As backhauls
represent a major bottleneck for throughput, the system must refrain
as much as possible from inter-AP coordination. If APs communicate
with each other to, e.g., schedule packet delivery towards $S$, this
puts additional load on backhauls degrading the overall throughput.
\vskip 1eX\noindent{\bf TCP and Packet Loss:} The scenario requires
reliable end-to-end data delivery. Although not primarily designed for
wireless environments, TCP has to be employed for the communication
from $S$ to $D$. This preserves client transparency and allows system
usage with legacy protocols. Communication between APs and $D$ traverses
the Internet and has to be
``TCP-friendly''~\cite{tcp-friendly,tcp-friendly2}. \ignore{While UDP-based
systems can be envisioned that emulate congestion control mechanisms,
TCP must also be used for Internet communication between the $\hap{i}$
and $D$.}
Yet, with TCP-based communication, packet loss in our wireless
environment has a severe impact on throughput. Packet loss results in
either a time-out or out-of-order TCP delivery. This causes the TCP
window size to be cut in half. While the \emph{primary} wireless link
between $S$ and $\hap{H}$ employs ARQ for reliable packet delivery at
$\hap{H}$, the \emph{secondary} wireless link to neighbors $\hap{i}$
is prone to packet loss. Due to the space diversity of the APs, the
packet loss at each AP is different. Consequently, besides $\hap{H}$,
neighboring APs receive different subsets of the sent packets. As APs
should not exchange information, it is difficult to coordinate which
AP has received which packet. A system has to take this into account.
\vskip 1eX\noindent{\bf Out-of-Order Packets:} All APs use standard
TCP communication towards $D$. Consequently, packet loss is not the only
reason for out-of-order packet receipt at $D$; the difference in
latencies between the APs and $D$ is another one. A low-latency AP can
deliver its packets before a high-latency AP. At the receiver $D$, TCP
packets will arrive with out-of-order sequence numbers. Assuming
dropped packets, $D$ typically responds with duplicate
acknowledgments (``DUPACK''), requesting the client to reduce its
sending rate. After $S$ has received 3 DUPACKs, it will start a fast
retransmit, hurting overall throughput.
In our scenario however, out-of-order packets are main\-ly caused by
backhaul latency differences among APs and, generally, the multipath
setup with different arrival characteristics. The system needs to
avoid generating duplicate ACKs (and dividing TCP's window size in
half).
\vskip 1eX\noindent{\bf Embedded Devices:} While many complex systems
for bunching neighboring access points can be envisioned, we have to
take the extreme resource restrictions of cheap, commodity hardware
APs into account. APs are built on embedded hardware,
featuring only a few MB of main memory and comparably lightweight
CPUs. Typically, merely forwarding packets from the
wireless interface to the backhaul already uses an AP's CPU to full
capacity. Consequently, the system must be restricted to simple operations, as
complex operations would overburden the APs.
We will now present the design of \mytit and explain how \mytit
addresses the individual research challenges.
\section{Impact of Uplink Sharing}
\label{sec:uplink}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/comcast_bg_dl_user_ul}
\caption{Regular user download behaviour with and without background upload traffic. No impact is
apparent in the presence of competing background traffic.}
\label{fig:dl-backoff}
\end{figure}
\mytit is essentially a crowd-sourcing approach that shares users' idle bandwidth to help others. The goal is to harness as much idle bandwidth as possible with minimal impact on home users' network usage. We first show that with standard Linux traffic shaping tools, bandwidth sharing has minimal impact on regular traffic. Next, we study how much bandwidth can be harnessed with a testbed in residential broadband.
\subsection{Prioritizing Home User's Traffic}
Techniques and tools for traffic shaping, such as Linux's {\tt tc}, are widely available. While a full-fledged system may use complicated traffic prioritization, it is sufficient for \mytit to simply classify traffic into two classes: regular user traffic with the highest priority, and background sharing traffic with a minimum bandwidth guarantee that is allowed to grow if more bandwidth is available. To implement this, we use the \textit{Hierarchical Token Bucket} and
\textit{Stochastic Fair Queuing} modules within {\tt tc} to fairly distribute traffic belonging to
the same class.
We set up the traffic shaping on one OpenWRT home router, and validate its correctness with two tests. In the first test, we
generate regular download traffic for 10 minutes to obtain a baseline of the AP's download throughput. After the baseline measurement, we start the background upload for 20 minutes, emulating the uplink sharing. As the upload goes on, we relaunch the regular download traffic. As shown in Figure~\ref{fig:dl-backoff}, the user's regular download throughput is not affected, because the TCP ACKs related to the user download are prioritized. Also, TCP ACKs consume little
uplink bandwidth, and thus have negligible impact on the background upload. In the second test, we examine the analogous case for regular upload traffic. We first start the emulated background upload with a minimum bandwidth guarantee of 500Kbps. During the background upload, we start the regular user upload. As shown in Figure~\ref{fig:ul-backoff}, the regular upload traffic takes up more bandwidth, while the background upload backs off properly (but not below 500Kbps). We conclude that with proper traffic shaping, \mytit can provide uplink sharing with minimal impact on users' regular traffic.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/rcn_bg_ul_user_ul}
\caption{Regular uplink observation with and without background traffic.
The low-priority background traffic backs off as soon as regular traffic starts.}
\label{fig:ul-backoff}
\end{figure}
\subsection{Push Uplink Sharing to the Limit}
To find out how much idle uplink bandwidth can be harnessed, we run an uplink throughput experiment on 17 residential APs, covering 2 major ISPs in Boston (Table~\ref{table:ul_exp_summary}). Each AP is configured with proper traffic shaping. With a constant background upload, each AP reports its background upload throughput every 10 seconds, over a period of 16 days.
\begin{table}[ht]
\caption{Uplink experiment data summary}
\label{table:ul_exp_summary}
\centering
\begin{tabular}{lc}
\hline
\textbf{Home APs}&Comcast (14), RCN (3)\\
\textbf{Data collection time}& May 22 -- Jun 13, 2012\\
\textbf{Mean AP online time}&381 hours ($\sim$ 16 days)\\
\textbf{Throughput samples}&2.3 million\\%2,330,580\\
\hline
\end{tabular}
\end{table}
As shown in Figure~\ref{fig:exp_ul_bw_breadown_per_ap}, all APs can share 1--3Mbps of uplink throughput during most of the experiment time. Each AP's mean shared throughput is very close to the uplink limit set by its ISP. Also, we investigate how long a certain upload throughput can last. Figure~\ref{fig:exp_ul_flow_length_cdf-peakhour} shows the CDF of the duration for which the background upload can sustain a certain throughput during peak hours (18:00$\sim$23:00). We see that there is an over 80\% chance that the background upload throughput stays above 2Mbps for over 30 minutes. To conclude, there is abundant idle uplink bandwidth in residential broadband that can be exploited for upload sharing.
\begin{figure}
\center
\includegraphics[width=0.9\linewidth]{figures/exp_ul_bw_breadown_per_ap}
\caption{Fraction of time spent by each AP with certain background upload throughput over the AP's total online time.}
\label{fig:exp_ul_bw_breadown_per_ap}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/exp_ul_flow_length_cdf-peakhour}
\caption{Duration of background traffic flow of certain throughput classes.}
\label{fig:exp_ul_flow_length_cdf-peakhour}
\end{figure}
\newcommand{\textup{\protect\scalebox{-1}[1]{L}}}{\textup{\protect\scalebox{-1}[1]{L}}}
\newcommand{\Comment}[1]{{\color{blue} \sf ($\clubsuit$ #1 $\clubsuit$)}}
\newcommand{\margin}[1]{%
\marginpar[{\raggedleft\smaller[3]#1}]{\raggedright\smaller[3]#1}}
\newcommand{\mathbb{A}}{\mathbb{A}}
\newcommand{\mathbb{B}}{\mathbb{B}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{D}}{\mathbb{D}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{F}}{\mathbb{F}}
\newcommand{\mathbb{G}}{\mathbb{G}}
\newcommand{\mathbb{H}}{\mathbb{H}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\mathbb{J}}{\mathbb{J}}
\newcommand{\mathbb{K}}{\mathbb{K}}
\newcommand{\mathbb{L}}{\mathbb{L}}
\newcommand{\mathbb{M}}{\mathbb{M}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{O}}{\mathbb{O}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{S}}{\mathbb{S}}
\newcommand{\mathbb{T}}{\mathbb{T}}
\newcommand{\mathbb{U}}{\mathbb{U}}
\newcommand{\mathbb{V}}{\mathbb{V}}
\newcommand{\mathbb{W}}{\mathbb{W}}
\newcommand{\mathbb{X}}{\mathbb{X}}
\newcommand{\mathbb{Y}}{\mathbb{Y}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mf}[1]{\mathfrak{#1}}
\newcommand{\mathfrak{I}}{\mathfrak{I}}
\newcommand{\mathfrak{G}}{\mathfrak{G}}
\newcommand{\mathfrak{S}}{\mathfrak{S}}
\newcommand{\mathfrak{p}}{\mathfrak{p}}
\newcommand{\tilde{X}}{\tilde{X}}
\renewcommand{\aa}{\alpha}
\newcommand{\beta}{\beta}
\newcommand{\lambda}{\lambda}
\newcommand{\Lambda}{\Lambda}
\newcommand{\partial}{\partial}
\newcommand{\vspace{.4cm}}{\vspace{.4cm}}
\newcommand{\mathbb{C} \mathbb{P}}{\mathbb{C} \mathbb{P}}
\newcommand{\overline}{\overline}
\newcommand{\epsilon}{\epsilon}
\newcommand{\textnormal{GL}}{\textnormal{GL}}
\newcommand{\textnormal{SL}}{\textnormal{SL}}
\newcommand{\textnormal{Conf}}{\textnormal{Conf}}
\newcommand{\textnormal{eval}}{\textnormal{eval}}
\newcommand{\textnormal{Sym}}{\textnormal{Sym}}
\newcommand{\textnormal{sign}}{\textnormal{sign}}
\newcommand{\textnormal{SYT}}{\textnormal{SYT}}
\newcommand{\downarrow}{\downarrow}
\newcommand{\uparrow}{\uparrow}
\newcommand{\textnormal{hex}}{\textnormal{hex}}
\newcommand{\textnormal{Inv}}{\textnormal{Inv}}
\newcommand{\textnormal{Web}}{\textnormal{Web}}
\newcommand{\hat{W}}{\hat{W}}
\newcommand{\overleftarrow}{\overleftarrow}
\newcommand{\langle}{\langle}
\newcommand{\rangle}{\rangle}
\newcommand{\overrightarrow}{\overrightarrow}
\newcommand{{\rm Gr}}{{\rm Gr}}
\newcommand{\textnormal{star}}{\textnormal{star}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\def\widetilde \Gr{\widetilde {\rm Gr}}
\def\widetilde \Pi{\widetilde \Pi}
\def\tilde{M}{\tilde{M}}
\newcommand{\textnormal{sign}}{\textnormal{sign}}
\newcommand{\remind}[1]{{\bf ** #1 **}}
\newcommand{\MSB}[1]{\textcolor{blue}{[MSB: #1]}}
\newcommand{\KS}[1]{\textcolor{red}{[KS: #1]}}
\newcommand{\KSlook}[1]{\textcolor{purple}{[MSB: #1]}}
\newcommand{\cev}[1]{\reflectbox{\ensuremath{\vec{\reflectbox{\ensuremath{#1}}}}}}
\DeclareMathOperator{\Aut}{Aut}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\Ext}{Ext}
\DeclareMathOperator{\End}{End}
\DeclareMathOperator{\Tor}{Tor}
\DeclareMathOperator{\Ker}{Ker}
\DeclareMathOperator{\CoKer}{CoKer}
\DeclareMathOperator{\Spec}{Spec}
\DeclareMathOperator{\Proj}{Proj}
\DeclareMathOperator{\Trop}{Trop}
\DeclareMathOperator{\Ind}{Ind}
\DeclareMathOperator{\Des}{Des}
\DeclareMathOperator{\Imm}{Imm}
\DeclareMathOperator{\Piv}{minVal}
\DeclareMathOperator{\Ing}{Ing}
\DeclareMathOperator{\Lec}{Lec}
\DeclareMathOperator{\Lus}{Lus}
\DeclareMathOperator{\obj}{obj}
\DeclareMathOperator{\add}{add}
\DeclareMathOperator{\Spr}{Spr}
\newcommand{\hookrightarrow}{\hookrightarrow}
\newcommand{\twoheadrightarrow}{\twoheadrightarrow}
\newcommand{\textnormal{span}}{\textnormal{span}}
\newcommand{\overleftarrow}{\overleftarrow}
\newcommand{\overrightarrow}{\overrightarrow}
\newcommand{\pds}[1]{\textbf{#1}^+}
\newcommand{\mathbf{w}}{\mathbf{w}}
\newcommand{\mathbf{u}}{\mathbf{u}}
\newcommand{\mathbf{v}}{\mathbf{v}}
\newcommand{\mathbf{x}}{\mathbf{x}}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\pth}[1]{L_{#1}}
\newcommand{\young}[1]{\lambda_{#1}}
\def\skew_#1^#2{#1/#2}
\newcommand{\Jo}{J^{\bullet}_{\textbf{v}}}
\newcommand{\Jp}{J^{+}_{\textbf{v}}}
\newcommand{\rmin}[1]{\Delta^{\rho}_{#1}}
\newcommand{\lmin}[1]{\Delta^{\lambda}_{#1}}
\newcommand{\ch}{\chi}
\newcommand{\wl}[1]{w_{(#1)}}
\newcommand{\wu}[1]{w^{(#1)}}
\newcommand{\vl}[1]{v_{(#1)}}
\newcommand{\vu}[1]{v^{(#1)}}
\newcommand{\ul}[1]{u_{(#1)}}
\newcommand{\uu}[1]{u^{(#1)}}
\newcommand{\jhol}[1]{J^{+}_{\mathbf{#1}}
\newcommand{\jsol}[1]{J^\bullet_{\mathbf{#1}}}
\newcommand{\wiring}[1]{W_{#1}}
\newcommand{\jc}[1]{\Spr(#1)}
\newcommand{\iquiv}[1]{Q_{#1}^{\Ing}}
\newcommand{\lquiv}[1]{Q_{#1}^{\Lec}}
\newcommand{\wquiv}[1]{Q^W_{#1}}
\newcommand{\iclus}[1]{\mathbf{A}_{#1}}
\newcommand{\lclus}[1]{\mathbf{B}_{#1}}
\newcommand{\ivar}[1]{A_{#1}}
\newcommand{\lvar}[1]{B_{#1}}
\newcommand{\iseed}[1]{\Sigma_{#1}^{\Ing}}
\newcommand{\lseed}[1]{\Sigma_{#1}^{\Lec}}
\newcommand{v, \bw}{v, \mathbf{w}}
\newcommand{\rich}[1]{\mathring{\mathcal{R}}_{#1}}
\def\mcf\ell{\mathcal{F}\ell}
\newcommand{\mtx}[1]{m(#1)}
\newcommand{\minor}[1]{\Delta_{#1}}
\newcommand{\llabel}[1]{L(#1)}
\newcommand{\rlabel}[1]{R(#1)}
\def\mcg_n{\mathcal{G}_n}
\def\mca_{v, w}^{\Lec}{\mathcal{A}_{v, w}^{\Lec}}
\usepackage{subfiles}
\title{Leclerc's conjecture on a cluster structure for type A Richardson varieties}
\author{Khrystyna Serhiyenko}
\author{Melissa Sherman-Bennett}
\thanks{KS and MSB were supported by the National Science Foundation under Award No.~DMS-2054255 and Award No.~DMS-2103282 respectively. Any opinions, findings, and conclusions or recommendations expressed in this material are
those of the authors and do not necessarily reflect the views of the National Science
Foundation.}
\begin{document}
\maketitle
\begin{abstract}Leclerc \cite{Leclerc} constructed a conjectural cluster structure on Richardson varieties in simply laced types using cluster categories.
We show that in type A, his conjectural cluster structure is in fact a cluster structure. We do this by comparing Leclerc's construction with another cluster structure on type A Richardson varieties due to Ingermanson \cite{gracie}. Ingermanson's construction uses the combinatorics of wiring diagrams and the Deodhar stratification. Though the two cluster structures are defined very differently, we show that the quivers coincide and clusters are related by the twist map for Richardson varieties, recently defined by Galashin--Lam \cite{GLRichardson}.
\end{abstract}
\section{Introduction}
\subfile{sections/introduction}
\section{Background} \label{sec:background}
\subfile{sections/background}
\subfile{sections/LeclercBackground}
\section{Correspondence between cluster variables} \label{sec:clusterVar}
\subfile{sections/LeclercVar}
\subfile{sections/IngermansonVar}
\section{Leclerc's quiver in terms of wiring diagrams}\label{sec:LQuivFromWiring}
\subfile{sections/LeclercMaps}
\section{Correspondence between quivers}\label{sec:quiversEqual}
\subfile{sections/quiversEqual}
\section{Finishing up proofs}\label{sec:LSeedsMutationEquiv}
\subfile{sections/LecSeedMutation}
\bibliographystyle{alpha}
\subsection{Ingermanson's quiver}
Throughout this section, we fix $v \leq \mathbf{w}$ with $\mathbf{w}$ unipeak.
The definition of Ingermanson's quiver (c.f. \cref{def:IngQuiv}) involves a lot of cancellation. In this section, we give a ``cancellation-free" description of Ingermanson's quiver in terms of the wiring diagram quiver, so that we can compare with \cref{prop:Lec_arrows}.
\begin{defn} Let $S$ and $T$ be disjoint subsets of the cluster of $\Sigma_{v, \mathbf{w}}$. A collection $\mathcal{C}$ of arrows in the wiring diagram is a \emph{witnessing collection} for $(S,T)$ if for every $A_s \in S$ and $A_t \in T$, the number of arrows $A_s \to A_t$ in $\iquiv{v, \bw}$ is the same as the number of arrows in $\mathcal{C}$ which point from a chamber in $\jc{s}$ to a chamber in $\jc{t}$ (this number may be negative).
That is to say, one can compute the arrows in $\iquiv{v, \mathbf{w}}$ between $S$ and $T$ just by considering the contributions of $\mathcal{C}$ and ignoring all other arrows in the wiring diagram.
\end{defn}
Let $A_d$ be a mutable cluster variable and let
\[\mathcal{S}_d=\{A_i: i \in [d-1], A_i \text{ does not appear in }\jc{d}\}.\] In this section, we will find a witnessing collection $\mathcal{C}_d$ for $(A_d, \mathcal{S}_d)$. The union of the witnessing collections will clearly be equal to Leclerc's quiver $\lquiv{v, \bw}$; we will later show that the union of witnessing collections is equal to Ingermanson's quiver $\iquiv{v, \bw}$ as well.
Before finding a witnessing collection for $(\ivar{d}, \mathcal{S}_d)$, we present an alternate definition of $\iquiv{v, \bw}$.
\begin{defn}[Crossing monomial]
Let $c$ be a crossing in $\wiring{v, \bw}$. Say $\ch_{c^{\uparrow}}, \ch_{c^{\downarrow}}, \ch_{c^{\leftarrow}}, \ch_{c^{\rightarrow}}$ are the chambers above, below, to the left, and to the right of $c$, respectively. The \emph{crossing monomial} of $c$ is defined as
\[t_c:= \frac{\lmin{c^\uparrow} \lmin{c^\downarrow}}{\lmin{c^\leftarrow} \lmin{c^\rightarrow}}.\]
\end{defn}
\begin{rmk} \label{rmk:crossingMonoFacts}\
\begin{enumerate}
\item \label{itm:crossingMonoOne} If $c \in \jhol{v}$, then $t_c =1$ (c.f. \cref{lem:hollowRel}).
\item \label{itm:crossingMonoYHat} Say a chamber $\ch$ is bounded on the left by crossing $a$ and on the right by crossing $b$. It is not hard to check that
\begin{equation} \label{eq:yHatinCrossing}
\hat{y}^W_\ch=\frac{t_a}{t_b}.
\end{equation}
\end{enumerate}
\end{rmk}
Recall from \cref{def:ends} the notion of left ends, right ends, and cusps of $\jc{j}$.
\begin{lem}\label{lem:onlyLeftEnds}
Let $c$ be a solid crossing. Then
\[\hat{y}_c = \frac{1}{t_c} \cdot \prod_{d} t_d\]
where the product is over all $d$ that are left ends of $\jc{c}$.
\end{lem}
\begin{proof}
Combine \cref{eq:yHats,eq:yHatinCrossing} and note that all crossing monomials for crossings in the interior of $\jc{c}$ cancel. Then apply \cref{rmk:crossingMonoFacts,cor:rightEndHollow}.
\end{proof}
\begin{thm}\label{thm:witnessingCollections}
Consider $c \in \jsol{v}$ a mutable solid crossing. To obtain a witnessing collection $\mathcal{C}_c$ for $(A_c, \mathcal{S}_c)$, take the arrows indicated in \cref{fig:specialArrows}, with the following exception: in the final two cases, if $d$ is hollow and, traveling down the falling strand of $d$, one passes through only hollow left ends of $\jc{c}$ before reaching a cusp, then do not include the arrow for $d$ or for the cusp.
\begin{figure}[h]
\includegraphics[width=\textwidth]{witnessingCollection}
\caption{\label{fig:specialArrows} A witnessing collection for all arrows between $A_c$ and $A_d$ ($d<c$) in $\iquiv{v, \mathbf{w}}$. If $d$ is hollow $A_d$ should be interpreted as $A_{d'}$ where $d'$ is the first solid crossing along the falling strand of $d$. The shaded chambers are in $\jc{c}$, white regions are not in $\jc{c}$, and dotted regions can be either.}
\end{figure}
\end{thm}
Comparing \cref{thm:witnessingCollections,prop:Lec_arrows}, we have an immediate corollary.
\begin{cor}\label{cor:unionOfWitnessingEqualsLeclerc}
The union of the witnessing collections for Ingermanson's quiver
\[\bigcup_{c \in \jsol{v}} \mathcal{C}_c\]
is equal to Leclerc's quiver $\lquiv{v, \bw}$ (where vertices of both quivers are labeled by solid crossings). In particular, $\lquiv{v, \bw}$ is a subquiver of $\iquiv{v, \bw}$.
\end{cor}
\begin{proof}[Proof of \cref{thm:witnessingCollections}] Using \cref{lem:IRightStable}, without loss of generality we may assume $c=\ell$ is the final crossing.
By \cref{lem:onlyLeftEnds}, the arrows from $A_c$ to $A_a$ for $a<c$ are determined by the product of crossing monomials over the left ends of $\jc{c}$. Recall that we only concern ourselves with $A_a$ not appearing in $\jc{c}$, so in fact we can consider the product of the modified crossing monomials
\[t'_d:= \frac{\lmin{d^\uparrow} \lmin{d^\downarrow}}{\lmin{d^\leftarrow}}\]
over the left ends $d$ of $\jc{c}$.
The modified crossing monomial encodes the arrows
\begin{center}
\includegraphics[width=0.1\textwidth]{leftEndArrows}
\end{center}
between chambers around the left end $d$, which we call $d$-arrows.
First, we analyze which chamber minors cancel in the product of modified crossing monomials, or, equivalently, which arrows around left ends cancel. Again, we only care about chambers which are not in $\jc{c}$. Suppose a chamber $\chi$ not in $\jc{c}$ contributes to $t'_d$. Whether or not the $d$-arrow involving $\chi$ cancels with another $d'$-arrow depends on whether there is a nearby left end along one of the strands of $d$. The cases are summarized in \cref{fig:arrows-cancel-cases}.
The situations in which the $d$-arrow involving $\chi$ does not cancel are exactly those in which $\chi$ and the arrow are as pictured in \cref{fig:specialArrows}. We call such chambers $\chi$ and arrows \emph{special}.
Now, we show that each special chamber $\chi$ contains a unique cluster variable which is not in $\jc{c}$, which we denote $A_\chi$. Let $a$ be the crossing to the right of $\chi$. In the four leftmost cases of \cref{fig:specialArrows}, $a$ is solid, so $A_a$ appears in $\chi$. On the other hand, in all cases but the one on the far left of \cref{fig:specialArrows}, a chamber of $\jc{c}$ lies to the right or above $a$. Using \cref{lem:propogateRightAndUp}, all other cluster variables $A_r$ appearing in $\chi$ appear in an adjacent chamber in $\jc{c}$. Thus $A_a$ is the only candidate for $A_\chi$.
If we are in the far left case of \cref{fig:specialArrows}, the falling strand $\alpha$ of $a$ is also the falling strand of crossing $c$. Indeed, following $\alpha$ to the right of $a$, by unipeakness $\alpha$ continues to travel down and eventually leaves the boundary of $\jc{c}$, say immediately after crossing $b$. The crossing $b$ must be a cusp, and is thus solid. \cref{cor:forbidden-cusp} implies that $b$ is in fact a right end of $\jc{c}$. Then by \cref{cor:rightEndHollow}, $b=c$. Also, all crossings $a_1, \dots,a_k$ along $\alpha$ between $a$ and $b$ are right ends of $\jc{c}$ and so are hollow by \cref{cor:rightEndHollow}. So the product of crossing monomials $t_{a_1} \dots t_{a_k}$ is equal to $1$. We also have
\[t_{a_1} \dots t_{a_k}= \frac{\lmin{a_1^\uparrow} \lmin{a_k^\downarrow}}{\lmin{a_1^\leftarrow}\lmin{a_k^\rightarrow}}\]
where $\chi_{a_k^\downarrow}$ is the chamber below $a_k$, etc.
The chamber $\chi_{a_k^\rightarrow}$ is also the chamber above crossing $c$. Because $c=\ell$ is the last crossing, the chamber minor of this chamber is equal to $1$. The chamber $\chi_{a_1^\leftarrow}$ is in $\jc{c}$, so the above equality implies in particular that all cluster variables appearing in $\chi_{a_1^\uparrow}$ also appear in $\jc{c}$. By \cref{lem:propogateRightAndUp}, all cluster variables $A_r \neq A_a$ appearing in $\chi$ also appear in $\chi_{a_1^\uparrow}$, which is the chamber to the right of $a$. So in this case also, the only candidate for $A_\chi$ is $A_a$.
For the two rightmost cases of \cref{fig:specialArrows}, the crossing $a$ may or may not be solid. If it is solid, the same argument as above shows that $A_a$ is the only candidate for $A_\chi$. If it is hollow, follow the falling strand $\alpha$ of $a$ to the right of $a$ until it hits a solid crossing $b$. This solid crossing is either a left end of $\jc{c}$ or a cusp, and is guaranteed to exist because $\alpha$ must leave the boundary of $\jc{c}$ eventually. By \cref{rmk:crossingMonoFacts}(1) and the fact that all crossings along $\alpha$ between $a$ and $b$ are hollow, $A_b$ appears in $\chi$ and all other cluster variables appearing in $\chi$ also appear in $\jc{c}$. So the only candidate for $A_\chi$ is $A_b$. Note that if $b$ is a cusp, then $A_b$ is also the candidate for $A_{\chi'}$ where $\chi'$ is the special chamber to the left of the cusp. The special arrows involving $\chi$ and $\chi'$ contribute a 2-cycle between $A_b$ and $A_c$ in $\iquiv{v, \mathbf{w}}$. So we exclude this case from consideration.
Now, we need to verify that the candidate $A_a$ or $A_b$ for $A_\chi$ does not appear in $\jc{c}$ somewhere else. So long as we are not in the exception described in the theorem, this follows from \cref{thm:factorizationsAgree} and \cref{prop:Lec_arrows}, as there is an irreducible map involving the modules $M_c$ and $M_a$ (or $M_b$). This implies the contents of $M_c$ and $M_a$ (or $M_b$) overlap, so they cannot be summands of the same chamber module.
The special arrows are the only arrows of the wiring diagram quiver which can contribute an arrow between $\mathcal{S}_c$ and $A_c$ in $\iquiv{v, \mathbf{w}}$. We have already identified pairs of special arrows which give a 2-cycle in $\iquiv{v, \mathbf{w}}$, and do not include these arrows in our collection $\mathcal{C}_c$. For all other pairs of special chambers $\chi, \chi'$, it is easy to see that $A_{\chi} \neq A_{\chi'}$, and so no arrows in $\iquiv{v, \mathbf{w}}$ coming from the special arrows for $\chi$, $\chi'$ can cancel. This shows $\mathcal{C}_c$ is indeed a witnessing collection.
\end{proof}
\begin{figure}
\includegraphics[width=0.7\textwidth]{arrows-cancel-cases}
\caption{\label{fig:arrows-cancel-cases} The cases showing when left-end-arrows to chamber $\ch$ cancel or don't cancel in the proof of \cref{thm:witnessingCollections}. }
\label{fig:jcj}
\end{figure}
\subsection{Equality of quivers} \label{sec:quiversEqualProof}
We have shown that each arrow of Leclerc's quiver $\lquiv{v, \bw}$ is an arrow of Ingermanson's quiver $\iquiv{v, \bw}$. Moreover, we know that Leclerc's cluster algebra $\mathcal{A}(\lseed{v, \bw})$ is a subalgebra of $\mathbb{C}[\rich{v,w}]$ and Ingermanson's cluster algebra $\mathcal{A}(\iseed{v, \bw})$ is equal to $\mathbb{C}[\rich{v,w}]$. Finally, we have an automorphism $\tau_{v, w}^*$ of $\mathbb{C}[\rich{v,w}]$ which takes Leclerc's cluster $\mathbf{B}$ to Ingermanson's cluster $\mathbf{A}$. We will now use these facts to show that in fact, Ingermanson's quiver cannot have any additional arrows.
\begin{lem}\label{lem:quiverEqual}
Consider two seeds $(\mathbf{x}, Q)$ and $(\mathbf{x}, Q')$ in the field of rational functions $\mathbb{C}(\mathbf{x})$ with the same cluster but possibly different quivers. Let $\mathcal{A}:=\mathcal{A}(\mathbf{x}, Q)$ be the cluster algebra of the first seed and $\mathcal{A}':=\mathcal{A}(\mathbf{x}, Q')$ be the cluster algebra of the second.
Suppose that $\mathcal{A}' \subset \mathcal{A}$ and that $Q'$ is a subquiver of $Q$ (identifying a vertex of $Q'$ with the vertex of $Q$ labeled by the same cluster variable). Then in fact $Q=Q'$.
\end{lem}
\begin{proof}
Consider a mutable vertex $k$ of $Q$ and $Q'$. We will argue that for all $j$, $\#(\text{arrows }k \to j)$ is the same in $Q$ and $Q'$.
In $(\mathbf{x}, Q')$, mutating at $k$ gives the cluster variable
\[\tilde{x}_k'= \frac{M_+ + M_-}{x_k}\]
where $M_+, M_-$ are monomials in $\mathbf{x}$. Since $Q'$ is a subquiver of $Q$, mutating at $k$ in $(\mathbf{x}, Q)$ gives the cluster variable
\[\tilde{x}_k= \frac{M_+N_+ + M_-N_-}{x_k}\]
for some monomials $N_+, N_-$ in the cluster variables other than $x_k$.
By assumption, $\mathcal{A}' \subset \mathcal{A}$, so in particular $\tilde{x}'_k$ is a Laurent polynomial in $\mu_k(\mathbf{x}, Q)$, say
\[\frac{M_+ + M_-}{x_k}= \sum_{\textbf{a}\in \mathbb{Z}^n} c_{\textbf{a}} x_1^{a_1} \dots \tilde{x}_k^{a_k} \dots x_n^{a_n}.\]
Clearing the denominator on the left and writing $\tilde{x}_k$ in terms of the cluster $\mathbf{x}$, we have
\[M_+ + M_-=x_k \cdot \sum_{\textbf{a}\in \mathbb{Z}^n} c_{\textbf{a}} x_1^{a_1} \dots ( M_+N_+ + M_-N_- )^{a_k} x_k^{-a_k} \dots x_n^{a_n}.\]
Note that the left hand side is a polynomial with degree $0$ in $x_k$, so the same must be true of the right hand side. Because the binomial $M_+N_+ + M_-N_-$ is also degree 0 in $x_k$, this implies that $a_k=1$ for all nonzero $c_\textbf{a}$. So we have
\[M_+ + M_-= ( M_+N_+ + M_-N_- ) \cdot \sum_{\substack{\textbf{a}\in (\mathbb{Z}_{\geq 0})^n \\ a_k=1} }c_{\textbf{a}} x_1^{a_1} \dots x_{k-1}^{a_{k-1}} x_{k+1}^{a_{k+1}} \dots x_n^{a_n}.\]
Comparing degrees, we see that $N_+=N_-=1$ and that the remaining sum is the constant $1$. That is, $\tilde{x}'_k=\tilde{x}_k$, which implies $\#(\text{arrows }k \to j)$ is the same in $Q$ and $Q'$ for all $j$.
\end{proof}
\begin{cor}\label{cor:quivEqual}
Choose $v \leq \mathbf{w}$ with $\mathbf{w}$ unipeak. Label the vertices of both Ingermanson's quiver $\iquiv{v, \bw}$ and Leclerc's quiver $\lquiv{v, \bw}$ by the set of solid crossings $\jsol{v}$. Then $\iquiv{v, \bw}=\lquiv{v, \bw}$.
\end{cor}
\begin{proof}
This follows directly from \cref{lem:quiverEqual}, with $(\mathbf{x}, Q)=(\iclus{v, \bw}, \iquiv{v, \bw})=:\iseed{v, \bw}$ and $(\mathbf{x}, Q')=(\lclus{v, \bw} \circ \tau_{v,w}, \lquiv{v, \bw})=:\tau_{v,w}^*(\lseed{v, \bw})$. Indeed, \cref{thm:Gracie-clusterstruc} and \cref{Lec-seed} together imply that $\mathcal{A}(\lseed{v, \bw})$ is a subalgebra of $\mathcal{A}(\iseed{v, \bw})= \mathbb{C}[\rich{v, w}]$. Because $\tau_{v,w}$ is a regular automorphism of $\rich{v,w}$, its pullback $\tau_{v,w}^*$ is an automorphism of $\mathcal{A}(\iseed{v, \bw})=\mathbb{C}[\rich{v, w}]$, so we also have that $\mathcal{A}(\tau_{v,w}^*(\lseed{v, \bw})) \subset \mathcal{A}(\iseed{v, \bw})$. By \cref{thm:varCorrespondence}, $\tau_{v,w}^*(\lvar{c})=\ivar{c}$, so the clusters of $\iseed{v, \bw}$ and $\tau_{v,w}^*(\lseed{v, \bw})$ are equal. \cref{cor:unionOfWitnessingEqualsLeclerc} shows that $\lquiv{v, \bw}$ is a subquiver of $\iquiv{v, \bw}$. So the assumptions of \cref{lem:quiverEqual} are satisfied.
\end{proof}
\subsection{Background on Richardson varieties}
Let $G=SL_n(\mathbb{C})$ and let $B, B_- \subset G$ denote the Borel subgroups of upper and lower triangular matrices, respectively. Let $N, N_-$ denote the corresponding unipotent subgroups of upper and lower unitriangular matrices, respectively. For $g \in G$, let $g_i$ denote the $i$th column of $g$. We denote the minor of $g$ on rows $R$ and columns $C$ by $\minor{R, C}(g)$.
For $w \in S_n$, we choose a distinguished lift $\dot{w}$ of $w$ to $G$. The lift satisfies
\[\dot{w}_{ij}=\begin{cases}
\pm 1 &\text{ if } i=w(j)\\
0 & \text{ else }
\end{cases}\]
and the signs of entries are determined by the condition that $\minor{w[j], [j]}(\dot{w})=1$ for all $j \in [n]$. If the particular lift of $w$ to $G$ does not matter, we also write $w$ for the lift (e.g. we write $B w B$ rather than $B \dot{w}B$).
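For concreteness, the sign condition can be unwound column by column: the minor $\minor{w[j], [j]}$ of a monomial matrix supported on $w$ is the product of the signs in its first $j$ columns times the sign of the pattern they form, so the signs are determined recursively. The following Python sketch (purely illustrative; the function name and conventions are ours) produces the resulting matrix from the one-line notation of $w$.
\begin{verbatim}
def distinguished_lift(w):
    """Signed permutation matrix with entries +-1 at (w(j), j) and all minors
    Delta_{w[j],[j]} equal to 1.  w is given in one-line notation, 1-indexed."""
    n = len(w)
    def eps(j):
        # sign of the pattern formed by the first j columns
        inv = sum(1 for a in range(j) for b in range(a + 1, j) if w[a] > w[b])
        return (-1) ** inv
    M = [[0] * n for _ in range(n)]
    for j in range(1, n + 1):
        M[w[j - 1] - 1][j - 1] = eps(j) * eps(j - 1)
    return M

print(distinguished_lift((2, 1, 3)))   # [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
\end{verbatim}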
We identify the flag variety $\mcf\ell_n$ with the quotient $G/B$. Concretely, a matrix $g \in G$ represents the flag $V_\bullet=(V_1 \subset V_2 \subset \cdots \subset V_n=\mathbb{C}^n)$ where $V_i$ is the span of $g_1, \dots, g_i$.
The flag variety has two well-known decompositions into cells, the Schubert decomposition
\[G/B = \bigsqcup_{w \in S_n} B w B/B = \bigsqcup_{w \in S_n} C_w\]
and the opposite Schubert decomposition
\[G/B = \bigsqcup_{w \in S_n} B_- w B/B= \bigsqcup_{w \in S_n} C^w.\]
The stratum $C_w$ is a \emph{Schubert cell} and is isomorphic to $\mathbb{C}^{\ell(w)}$. The stratum $C^w$ is an \emph{opposite Schubert cell} and is isomorphic to $\mathbb{C}^{\ell(w_0)-\ell(w)}$. For a fixed lift $w$, it is well-known that the projection map $G \to G/B$ restricts to isomorphisms
\begin{equation}\label{eq:schub-iso}
N w\cap w N_- \xrightarrow{\sim} C_w \qquad \qquad N_-w \cap wN_- \xrightarrow{\sim} C^w.
\end{equation}
Or, more concretely, each coset in $C_w$ (resp. $C^w$) has a unique representative matrix which differs from $w$ only in entries that lie both above and to the left (resp. both below and to the left) of a nonzero entry of $w$ (see e.g. \cite{Fulton}).
We are concerned with the intersection of an opposite Schubert cell and a Schubert cell
\[\rich{v, w}:= C^v \cap C_w \]
which is called an \emph{(open) Richardson variety}. We usually drop the adjective ``open." The Richardson variety $\rich{v,w}$ is nonempty if and only if $v \leq w$, in which case it is a smooth irreducible affine variety of dimension $\ell(w)-\ell(v)$ \cite{Deodhar}.
Open Richardson varieties were studied in the context of Kazhdan-Lusztig polynomials \cite{KL79}; the number of $\mathbb{F}_q$ points of $\rich{v,w}$ is exactly the $R$-polynomial indexed by $(v, w)$ \cite{Deodhar}, which can be used to recursively compute Kazhdan-Lusztig polynomials. The $\mathbb{F}_q$-point counts and more generally the cohomology of $\rich{v,w}$ are also related to knot homology \cite{GLCohom}. Real points of $\rich{v,w}$, and in particular \emph{positive points}, feature in work of Lusztig and Rietsch \cite{LusTPGr,Rietsch} on total positivity. Special cases of Richardson varieties include the \emph{(open) positroid varieties} of \cite{KLS}, which are Richardson varieties $\rich{v,w}$ where $w$ has a single descent. Richardson varieties themselves are special cases of \emph{braid varieties} (see e.g. \cite{CGGS}).
We identify $\rich{v, w}$ with two different subsets of $G$, one for Ingermanson's construction and one for Leclerc's. We will later use these identifications to define functions on $\rich{v, w}$. Below, we use the involutive automorphism $g \mapsto g^\theta$ of $G$ from \cite[(1.11)]{FZ99}; the $(i,j)$ entry of $g^\theta$ is the minor of $g$ obtained by deleting the $i$th row and $j$th column. It is not hard to check that $B^{\theta}=B_-$, $N^{\theta}=N_-$, and $\dot{v}^\theta$ is another lift of $v$ to $G$.
\begin{lem}\label{lem:richIso}
For $v\leq w$, let
\[N_{v,w}:= N \cap \dot{w}N_- \dot{w}^{-1} \cap B_- vB \dot{w}^{-1} \quad \text{and} \quad N'_{v,w}:= N \cap \dot{v}^{-1}N \dot{v} \cap \dot{v}^{-1} B_- w B_-. \]
Also, let $D: N_{v, w} \to G$ be the renormalization map sending $g \mapsto g d_g$, where $d_g$ is the unique diagonal matrix so that $\minor{v[j], w[j]}(gd_g)=1$ for all $j$.
We have isomorphisms
\begin{align*}
\alpha: D(N_{v, w}) &\to \rich{v,w} &\quad \beta: N'_{v, w} & \to \rich{v, w}\\
g d_g &\mapsto g d_g \dot{w} B& \quad g &\mapsto (\dot{v} g)^{\theta} B.
\end{align*}
\end{lem}
\begin{proof}
If $g \in N_{v, w}$, then $g \dot{w}$ is in $B_- v B$. In particular, the minors $\minor{v[j], [j]}(g \dot{w})= \minor{v[j], w[j]}(g)$ are nonzero. This implies the map $D$ is well-defined. It is also an isomorphism onto its image.
The map $\alpha$ can be written as a composition of two maps
\begin{alignat*}{3}
D(N_{v, w}) & \xrightarrow{D^{-1}}& N_{v, w} &\xrightarrow{\alpha'} \rich{v, w}\\
g d_g& \longmapsto &g & \longmapsto g \dot{w} B
\end{alignat*}
since $g d_g \dot{w}B$ is equal to $g \dot{w} B$.
Now, it follows easily from \eqref{eq:schub-iso} that $\alpha'$ and $\beta$ are both isomorphisms, noting in the first case that $g \dot{w}$ is in $N \dot{w}\cap \dot{w}N_- \cap B_- vB$ and in the second that $(\dot{v} g)^\theta$ is in $N_- \tilde{v} \cap \tilde{v}N_- \cap B w B$ where $\tilde{v}=\dot{v}^\theta$.
\end{proof}
\begin{rmk}\label{rmk:differencesLeclerc}
Leclerc identifies the flag variety with $B_- \backslash G$ rather than $G/B$, and so considers the variety
\[^- \rich{v, w}:=B_-\backslash {(B_- v B \cap B_- w B_-)}\]
which is different from, though isomorphic to, $\rich{v, w}$. We fix an isomorphism so that we can pull back functions on $^-\rich{v, w}$ to functions on $\rich{v,w}$. The isomorphism we choose is
\[
\rich{v, w}\xrightarrow{\Theta} {(B v B_- \cap B_- w B_-)}/B_- \xrightarrow{\delta_1} \dot{v} N'_{v, w} \xrightarrow{\delta_2} {^-\rich{v, w}}.
\]
The map $\Theta: g B\mapsto g^{\theta} B_-$ is induced by the involution $g \mapsto g^\theta$ on $G$; from \cite[Section 2]{FZ99}, one can see that $B^{\theta}=B_-$ and that $\dot{w}^\theta$ is another lift of $w$ so it is indeed an isomorphism. The maps $(\delta_1)^{-1}$ and $\delta_2$ are the natural projections from $\dot{v} N'_{v,w}$ to $G/B_-$ and $B_-\backslash G$ respectively; these are isomorphisms using the appropriate analogue of \eqref{eq:schub-iso}. The composition $\delta= \delta_2 \circ \delta_1$ is called the \emph{left chiral map} in \cite[Definition 2.2]{GLRichardson}.
\end{rmk}
\subsection{Background on wiring diagrams and chamber minors}
Before describing Ingermanson's and Leclerc's seeds, we need some combinatorial background.
Given $w\in S_n$, a \emph{reduced expression} for $w$ is an expression $\mathbf{w}=s_{h_1} \dots s_{h_\ell}$ of $w$ as a product of simple transpositions with $\ell$ as small as possible. The number $\ell$ is the \emph{length} of $w$, denoted $\ell(w)$. We use the notation
\[\wl{i}:=s_{h_1} \dots s_{h_{i-1}} \quad \text{ and } \quad \wu{i}:=s_{h_\ell} \dots s_{h_{i}} = w^{-1}\wl{i}\]
for prefixes of $\mathbf{w}$ and prefixes of $\mathbf{w}^{-1}$, setting $\wl{1}=e$.
As a shorthand, we write $v \leq \mathbf{w}$ to indicate a pair of permutations $v \leq w$ and a choice of reduced expression $\mathbf{w}$ for $w$.
\begin{defn}\label{defn:PDS}
Let $v \leq \mathbf{w}=s_{h_1} \dots s_{h_\ell}$. A \emph{subexpression} for $v$ in $\mathbf{w}$ is an expression for $v$ of the form $v=s_{h_1}^v \dots s_{h_\ell}^v$ where $s_{h_i}^v \in \{e, s_{h_i}\}$.
As for $\mathbf{w}$, we define
\[\vl{i}:=s_{h_1}^v \dots s_{h_{i-1}}^v \quad \text{ and } \quad \vu{i}:=s_{h_{\ell}}^v \dots s_{h_{i}}^v.\]
The set of indices $i$ with $s_{h_i}^v \neq e$ is the \emph{support} of the subexpression. The subexpression is \emph{reduced} if the support has size $\ell(v)$. The \emph{positive distinguished subexpression} (PDS) for $v$ in $\mathbf{w}$ is the reduced subexpression whose support is lexicographically largest. If $\mathbf{w}$ is fixed, we denote the PDS for $v$ by $\mathbf{v}$.
We denote the support of the PDS by $\jhol{v}$, and call these the \emph{hollow crossings} of $\mathbf{w}$. The complement of the support is $\jsol{v}$; we call these the \emph{solid crossings} of $\mathbf{w}$. Note that $|\jsol{v}|= \ell(w)-\ell(v)=\dim \rich{v, w}$.
\end{defn}
\begin{example}
Let $\mathbf{w}=s_1 s_2 s_1 s_3 s_2 s_1$ and let $v=3214$. Reduced subexpressions for $v$ in $\mathbf{w}$ include
\[ e e s_1 e s_2 s_1 \qquad e s_2 s_1 e s_2 e \qquad s_1 s_2 e e e s_1 \qquad s_1s_2s_1eee.\]
The first subexpression has support $\{3, 5, 6\}$ and is the PDS for $v$ in $\mathbf{w}$. So the hollow crossings are $\jhol{v}=\{3, 5, 6\}$ and the solid crossings are $\jsol{v}=\{1, 2, 4\}$.
\end{example}
\begin{rmk}
Alternatively, the PDS for $v$ can be defined using a greedy procedure, moving from right to left. Set $\vl{\ell+1}=v$. If $\vl{i+1}$ is already determined, then $\vl{i}$ is equal to either $\vl{i+1}$ or $\vl{i+1}s_{h_{i}}$, whichever is smaller. In the first case, $s_{h_i}^v=e$; in the second, $s_{h_i}^v=s_{h_i}$.
\end{rmk}
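The greedy description can be implemented directly; the following Python sketch (included only as an illustration, with ad hoc function names) computes the support of the PDS, i.e.\ the hollow crossings, from a reduced word for $w$ and the one-line notation of $v$. On the example above it returns $\{3,5,6\}$.
\begin{verbatim}
def pds_support(w_word, v, n):
    """Support (hollow crossings) of the PDS of v in the reduced word w_word.

    w_word = [h_1, ..., h_l] encodes w = s_{h_1} ... s_{h_l}; v is a permutation
    of [n] in one-line notation.  Moving right to left, use s_{h_i} exactly when
    doing so shortens the partial product, as in the greedy description above."""
    def times_s(u, h):
        u = list(u)
        u[h - 1], u[h] = u[h], u[h - 1]   # right multiplication by s_h
        return tuple(u)

    def length(u):
        return sum(1 for a in range(n) for b in range(a + 1, n) if u[a] > u[b])

    support = []
    current = tuple(v)                    # v_(i+1), starting from v_(l+1) = v
    for pos in range(len(w_word), 0, -1):
        candidate = times_s(current, w_word[pos - 1])
        if length(candidate) < length(current):
            support.append(pos)           # position pos is a hollow crossing
            current = candidate
    return sorted(support)

# Example from the text: w = s1 s2 s1 s3 s2 s1 and v = 3214
print(pds_support([1, 2, 1, 3, 2, 1], (3, 2, 1, 4), 4))   # [3, 5, 6]
\end{verbatim}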
\begin{rmk}
The notion of positive distinguished subexpressions (and more generally, distinguished subexpressions) is due to Deodhar \cite{Deodhar}. Our notation for the support and complement of the support is inspired by \cite{MR}, as is the terminology ``solid" and ``hollow" crossing. The $+$ in the superscript of $\jhol{v}$ is to indicate that $\jhol{v}$ records where the length of $\vl{i}$ increases. The $\bullet$ in the superscript of $\jsol{v}$ is to remind the reader that these are the solid crossings.
\end{rmk}
For $v\leq \mathbf{w}$, we will draw both the reduced expression $\mathbf{w}$ and the PDS $\mathbf{v}$ in the plane as wiring diagrams. Since $\mathbf{w}$ is itself the PDS for $w$ in $\mathbf{w}$, it suffices to make all definitions for the PDS $\mathbf{v}$.
\begin{defn}
The \emph{wiring diagram} $\wiring{\mathbf{v}}$ is obtained by replacing each simple transposition $s_{i}$ in $\mathbf{v}$ with the configuration of strands on the left, and each $e$ in $\mathbf{v}$ with the configuration of strands on the right.
\begin{center}
\includegraphics[width=0.4\textwidth]{transpositionWires}
\end{center}
\noindent We label the crossings of $\wiring{\mathbf{v}}$ by their positions in $\mathbf{v}$ in the natural way. We also label the endpoints of the strands from $1$ to $n$, going from bottom to top. Each crossing $c$ has a \emph{rising strand}, whose height immediately to the right of $c$ is higher than immediately to the left of $c$, and a \emph{falling strand}.
\end{defn}
If a strand $\gamma$ in $\wiring{\mathbf{v}}$ has right endpoint $h$, it has left endpoint $v(h)$. Since $\mathbf{v}$ is reduced, no two strands cross more than once.
A \emph{chamber} of a wiring diagram is a connected component of the complement of the strands. We denote chambers by $\chi$; the chamber to the left of crossing $c$ is $\chi_c$. We can label each chamber with a subset of $[n]$.
\begin{defn}[Right and left labeling of chambers]
Let $\chi_c$ be a chamber of $\wiring{\mathbf{v}}$. The \emph{left label} of $\chi_c$ is
\[ \vl{c}[h_c] = \{i \in [n]: i \text{ is the left endpoint of a strand }\gamma \text{ below }\chi_c\}\]
and the \emph{right label} is
\[ \vu{c}[h_c] = \{i \in [n]: i \text{ is the right endpoint of a strand }\gamma \text{ below }\chi_c\}.\]
\end{defn}
The equalities above are easy to see by induction on $\ell$. Since $\mathbf{v}$ is reduced, the left label can be obtained from the right label by applying $v$. See \cref{fig:wiringEx} for an example of a wiring diagram and its right and left labels.
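These labels are also easy to compute mechanically: maintain the running prefix (resp.\ suffix) product and read off the first $h_c$ entries of its one-line notation. The following Python sketch (illustrative only; not used in any argument) does this for all chambers at once.
\begin{verbatim}
def chamber_labels(word, used, n):
    """Left labels v_(c)[h_c] and right labels v^(c)[h_c] of the chambers chi_c.

    word = [h_1, ..., h_l]; used[c] is True when the subexpression uses s_{h_c}.
    Permutations are stored in one-line notation; right multiplication by s_h
    swaps the entries in positions h and h+1."""
    l = len(word)
    left, right = [None] * l, [None] * l
    u = list(range(1, n + 1))              # running prefix v_(c)
    for c in range(l):
        h = word[c]
        left[c] = sorted(u[:h])
        if used[c]:
            u[h - 1], u[h] = u[h], u[h - 1]
    u = list(range(1, n + 1))              # running suffix v^(c)
    for c in range(l - 1, -1, -1):
        h = word[c]
        if used[c]:
            u[h - 1], u[h] = u[h], u[h - 1]
        right[c] = sorted(u[:h])
    return left, right

L, R = chamber_labels([2, 1, 3, 2, 1], [True] * 5, 4)
print(L)   # [[1, 2], [1], [1, 2, 3], [1, 3], [3]]
print(R)   # [[3, 4], [3], [2, 3, 4], [2, 3], [2]]
\end{verbatim}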
\begin{figure}
\includegraphics[width=0.6\textwidth]{wiringEx}
\caption{ \label{fig:wiringEx} A wiring diagram $\wiring{\mathbf{v}}$ for $\mathbf{v}=s_2 s_1 s_3 s_2 s_1$. The left and right labels of chambers are shown respectively in the left and right of each chamber.}
\end{figure}
The following combinatorial object will encode two seeds for $\rich{v, w}$, one in Ingermanson's cluster algebra and one in Leclerc's.
\begin{defn}
Let $v \leq \mathbf{w}$ and let $\mathbf{v}$ be the PDS for $v$ in $\mathbf{w}$. The \emph{stacked wiring diagram} $\wiring{v, \mathbf{w}}$ is the union of the two wiring diagrams $\wiring{\mathbf{w}}$ and $\wiring{\mathbf{v}}$. We emphasize that the crossings of $\wiring{\mathbf{v}}$ are drawn directly on top\footnote{This is in contrast to the ``double wiring diagrams" of \cite{FZ99}.} of the corresponding crossings of $\wiring{\mathbf{w}}$. We call the strands of $\wiring{\mathbf{w}}$ the \emph{$w$-strands} of $\wiring{v, \mathbf{w}}$, and the strands of $\wiring{\mathbf{v}}$ the \emph{$v$-strands}. We sometimes also call $w$-strands just ``strands". A \emph{chamber} of $\wiring{v, \mathbf{w}}$ is a chamber of $\wiring{\mathbf{w}}$. For $c \in [\ell]$, we denote by $\chi_c$ the chamber of $\wiring{v, \mathbf{w}}$ which is to the left of crossing $c$. We call a chamber \emph{frozen} if it is open on the left.
\end{defn}
\begin{figure}
\includegraphics[width=\textwidth]{chamberMinorEx}
\caption{\label{fig:chamberMinorEx}The stacked wiring diagram $\wiring{v, \mathbf{w}}$ for $\mathbf{w}=s_4s_3s_2s_1s_4s_3s_2s_3s_4$ and $v=12534$. The $w$-strands are solid black; the $v$-strands are dashed green. Left and right chamber minors (c.f. \cref{def:chamberMinor}) are black and blue, respectively.}
\end{figure}
See \cref{fig:chamberMinorEx} for an example of a stacked wiring diagram. Note that a crossing $c$ of $\wiring{v, \mathbf{w}}$ is hollow (c.f. \cref{defn:PDS}) if it is a crossing of both $\wiring{\mathbf{v}}$ and $\wiring{\mathbf{w}}$; it is solid if it is only a crossing of $\wiring{\mathbf{w}}$.
Each chamber $\chi$ of $\wiring{v, \mathbf{w}}$ has two left labels, one from the $w$-strands passing below $\chi$ and one from the $v$-strands. Similarly, $\chi$ has two right labels. We use these labels to define two regular functions for each chamber.
\begin{defn}\label{def:chamberMinor}
Fix $v \leq \mathbf{w}$ and stacked wiring diagram $\wiring{v, \bw}$. Let $c \in [\ell]$ and suppose $s_{h_c}=s_j$. For $gB\in \rich{v, w}$, let $x:= \alpha^{-1}(gB)$ and let $y := \beta^{-1}(gB)$, where $\alpha, \beta$ are the isomorphisms from \cref{lem:richIso}. The \emph{left chamber minor} of $\chi_c$ is the function
\begin{align*}
\lmin{c} : \rich{v, w}& \to \mathbb{C}\\
gB &\mapsto \minor{\vl{c}[j], \wl{c}[j]} (x)
\end{align*}
and the \emph{right chamber minor} is the function
\begin{align*}
\rmin{c} : \rich{v, w}& \to \mathbb{C}\\
gB &\mapsto \minor{\vu{c}[j], \wu{c}[j]} (y).
\end{align*}
\end{defn}
See \cref{fig:chamberMinorEx} for an example of the left and right chamber minors. Note that chamber minors are defined only for chambers which are to the left of some crossing. This is because the analogous minors for the chambers which are open to the right are equal to $1$ on $\rich{v, w}$.
\begin{rmk}
Left chamber minors were introduced by \cite{MR} in their study of the Deodhar stratification of $\rich{v, w}$ (in fact, their chamber minors were evaluated on elements of $N_{v,w}$ rather than elements of $D(N_{v,w})$, so they are monomially related to the left chamber minors defined here). They showed that the subset of $\rich{v, w}$ where the left chamber minors are nonzero is an algebraic torus, called the \emph{Deodhar torus}. This torus will be a cluster torus in Ingermanson's cluster structure.
\end{rmk}
\begin{rmk}\label{rmk:our-minor-vs-leclerc}
Leclerc uses functions $f_c: {^-\rich{v, w}} \to \mathbb{C}$ defined by $f_c: B_- \dot{v} g \mapsto \minor{\vu{c}[h_c], \wu{c}[h_c]} (g)$, where $g \in N'_{v,w}$. This is related to the right chamber minor defined above by
\[\rmin{c}= f_c \circ \delta \circ \Theta\]
where $\delta$ and $\Theta$ are as in \cref{rmk:differencesLeclerc}.
\end{rmk}
We will later use the left (resp. right) chamber minors to define cluster variables in Ingermanson's seed (resp. Leclerc's seed). The chamber minors are not algebraically independent; the chamber minors around a hollow crossing satisfy a binomial relation.
\begin{lem}\label{lem:hollowRel}
Fix $v \leq \mathbf{w}$ and a hollow crossing $c \in \jhol{v}$. Say the chambers surrounding $c$ are $\chi_{c^{\uparrow}}, \chi_{c^{\rightarrow}}, \chi_{c^{\downarrow}}, \chi_{c^{\leftarrow}}$. Then the chamber minors in those chambers satisfy
\[\frac{\lmin{c^\uparrow} \lmin{c^\downarrow}}{\lmin{c^\leftarrow} \lmin{c^\rightarrow}} = 1 \quad \text{ and } \quad \frac{\rmin{c^\uparrow} \rmin{c^\downarrow}}{\rmin{c^\leftarrow} \rmin{c^\rightarrow}} =1.\]
\end{lem}
\begin{proof}
See the discussion in \cite{gracie} below Formula III.25 for the first relation; it follows from \cite{MR} and the Desnanot-Jacobi identity.
For the second relation, let $u:= \vu{c+1}$, $x:=\wu{c+1}$ and $i:=h_c$. Then the Desnanot-Jacobi relation implies
\[\minor{u[i+1], x[i+1]}\minor{u[i-1], x[i-1]}= \minor{u[i], x[i]}\minor{us_i[i], xs_i[i]}-\minor{us_i[i], x[i]}\minor{u[i], xs_i[i]}\]
on $G$. Since $c \in \jhol{v}$ and $\mathbf{v}$ is the rightmost subexpression for $v$ in $\mathbf{w}$, we conclude that $us_i \nleq x$. Because $u \leq x$, this means that $us_i[i] \nleq x[i]$ in the Gale order (see \cref{defn:latticePath}). So $\minor{us_i[i], x[i]}$ vanishes identically on $B$. Noting that
\[\minor{u[i+1], x[i+1]}= \rmin{c^\uparrow}, \quad \minor{u[i-1], x[i-1]}=\rmin{c^\downarrow}, \quad \minor{u[i], x[i]}=\rmin{c^\rightarrow}, \quad \minor{us_i[i], xs_i[i]}=\rmin{c^\leftarrow},\]
this gives the desired relation.
\end{proof}
The left and right chamber minors are related by a \emph{twist automorphism} of $\rich{v, w}$, recently defined in \cite{GLRichardson}. The precise definition of the twist will not be needed, so we omit it.
\begin{prop}\cite[Theorem 11.6]{GLRichardson}\label{prop:twist}
Fix $v \leq w$. There is a regular automorphism $\tau_{v,w}: \rich{v, w} \to \rich{v, w}$ such that for all $c \in [\ell]$,
\[\lmin{c}= \rmin{c} \circ \tau_{v,w}.\]
\end{prop}
\begin{proof}
We translate \cite[Theorem 11.6]{GLRichardson} into our conventions. Recall the maps $\delta, \Theta$ from \cref{rmk:differencesLeclerc}. Galashin-Lam identify the flag variety with $G/B_-$ rather than $G/B$. Let $\rich{v,w}^-:=(B v B_- \cap B_- w B_-)/B_-$ denote the Richardson variety in $G/B_-$. They define an isomorphism $\vec{\tau}^{\text{pre}}_{v,w}: \rich{v, w}^- \xrightarrow{\sim} {^-\rich{v, w}}$, and set $\vec{\tau}_{v,w}:= \delta^{-1} \circ \vec{\tau}^{\text{pre}}_{v,w}.$
Theorem 11.6 of \cite{GLRichardson} shows that
\[f_c \circ \vec{\tau}^{\text{pre}}_{v,w} = \lmin{c} \circ \Theta^{-1} \]
as maps on $\rich{v, w}^-$, where $f_c$ are the functions in \cref{rmk:our-minor-vs-leclerc}. So we see that
\begin{align*}
\lmin{c}&= f_c \circ \delta \circ \Theta \circ \Theta^{-1} \circ \delta^{-1} \circ \vec{\tau}^{\text{pre}}_{v,w} \circ \Theta\\
&= \rmin{c} \circ \Theta^{-1} \circ \vec{\tau}_{v,w} \circ \Theta
\end{align*}
where the second equality uses \cref{rmk:our-minor-vs-leclerc}. So the automorphism in the theorem statement is $\Theta^{-1} \circ \vec{\tau}_{v,w} \circ \Theta$.
\end{proof}
\subsection{Background on lattice paths}
Let $\mcg_n$ denote the $n \times n$ grid with rows and columns indexed as in a matrix. A \emph{lattice path} in $\mcg_n$ is a path which begins at the upper right corner of the grid, takes unit length steps down or left, and ends on the left edge of the grid. We label the steps of the lattice path with $1, 2, \dots$ so the labels increase from the beginning of the path to the end (see \cref{fig:latticePathEx} for examples).
\begin{defn}\label{defn:latticePath}
Let $I \in \binom{[n]}{h}$. We denote by $\pth{I}$ the lattice path of length $n+h$ in $\mcg_n$ whose vertical steps are labeled with $I$. We denote by $\young{I}$ the Young diagram (in English notation) whose lower boundary is $\pth{I}$.
For $I, J \in \binom{[n]}{h}$, $I \leq J$ in the Gale order if $\pth{I}$ is weakly below $\pth{J}$; or equivalently $\young{J} \subset \young{I}$; or equivalently, writing $I=\{i_1 < \cdots < i_h\}$ and $J = \{ j_1 < \cdots < j_h\}$, we have $i_a \leq j_a$ for $a=1, \dots ,h$.
\end{defn}
Note that for $I \in \binom{[n]}{h}$, the steps $n+1, \dots, n+h$ of $\pth{I}$ are horizontal and the Young diagram $\young{I}$ has $h$ parts, which are all at least $h$.
For $I \leq J$, we abuse notation and denote by $\skew_I^J$ the skew shape $\young{I}/\young{J}$. We will need to keep track of the ``connected components" of this skew shape.
\begin{defn}\label{defn:content}
Consider a box $b$ in row $r$ and column $c$ of $\mcg_n$. The \emph{content} of $b$ is $r-c+n$. So the content of the box in the upper right corner is $1$, and content increases moving down or left.
For a skew shape $\lambda / \mu$ in $\mcg_n$, the \emph{content} of $\lambda / \mu$ is
\[\{i:\lambda / \mu \text{ has a box of content }i\}.\]
A \emph{component} of $\lambda / \mu$ is a maximal-by-inclusion element of
\[\{\nu/\rho \subset \lambda/\mu: \text{the content of }\nu/\rho \text{ is an interval}\}.\]
\end{defn}
See \cref{fig:latticePathEx} for examples of these definitions.
\begin{figure}
\includegraphics[width=0.3\textwidth]{fig1}
\caption{\label{fig:latticePathEx} Let $I:=\{1,3,4\}$ and $J:=\{2,3,7\}$. The path $\pth{I}$ is shown in blue and $\pth{J}$ is shown in purple in $\mcg_n$. The skew shape $\skew_I^{J}$ has two components, one with content $\{1\}$ and one with content $\{4,5,6\}.$}
\end{figure}
Note that for $I \leq J \subset [n]$, the content of $\skew_I^{J}$ is contained in $[n-1]$. The components of $\skew_I^{J}$ are the connected components of $(\skew_I^{J}) \setminus (\pth{I} \cap \pth{J})$.
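The passage from a pair $I \leq J$ to the skew shape $\skew_I^{J}$ and its components is easy to automate. The following Python sketch (ours, purely for illustration) recovers the component contents $\{1\}$ and $\{4,5,6\}$ of the example in \cref{fig:latticePathEx}.
\begin{verbatim}
def young_rows(I, n):
    """Row lengths of lambda_I: the r-th vertical step of L_I is preceded by
    i_r - r horizontal steps, so row r has n - (i_r - r) boxes."""
    return [n - (i - r) for r, i in enumerate(sorted(I), start=1)]

def skew_boxes(I, J, n):
    """Boxes (row, col) of lambda_I / lambda_J, assuming I <= J in the Gale order."""
    boxes = []
    for r, (a, b) in enumerate(zip(young_rows(I, n), young_rows(J, n)), start=1):
        boxes.extend((r, c) for c in range(b + 1, a + 1))
    return boxes

def component_contents(I, J, n):
    """Content sets of the connected components of lambda_I / lambda_J."""
    remaining = set(skew_boxes(I, J, n))
    comps = []
    while remaining:
        stack = [remaining.pop()]
        comp = set(stack)
        while stack:
            r, c = stack.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    comp.add(nb)
                    stack.append(nb)
        comps.append(sorted(r - c + n for (r, c) in comp))
    return comps

print(component_contents({1, 3, 4}, {2, 3, 7}, 7))   # [[1], [4, 5, 6]] (in some order)
\end{verbatim}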
The following easy lemma will be useful to us.
\begin{lem}\label{lem:factors-minors-on-N}
Consider $I \leq J$. In $\mathbb{C}[N]$,
\begin{enumerate}
\item if the components of $\skew_I^{J}$ are $\skew_{I_1}^{J_1}, \dots, \skew_{I_r}^{J_r}$, then
\[\minor{I,J}= \prod_{k=1}^r \minor{I_k, J_k}\]
and each minor on the right hand side is irreducible.
\item if $R \leq S$ is a pair of subsets such that $\skew_R^{S}$ is a translation of $\skew_I^{J}$ parallel to the line $y=-x$, then $\minor{R, S}=\minor{I,J}$.
\end{enumerate}
\end{lem}
\begin{proof} For a pair of subsets $A \leq B$, say $\pth{A}$ and $\pth{B}$ intersect in steps $C \subset [n]$. Let $A':= A \setminus C$ and let $B':= B\setminus C$.
For (1): For $g \in N$, we have $\minor{I, J}(g)= \minor{I',J'}(g)$. The submatrix of $g$ on rows $I'$ and columns $J'$ is block upper triangular. The blocks intersecting the diagonal are on rows $I_k'$ and columns $J_k'$. So we have
\[\minor{I,J}= \minor{I', J'}= \prod_{k=1}^r \minor{I'_k, J'_k}= \prod_{k=1}^r \minor{I_k, J_k}.\]
Irreducibility follows from \cite[Lemma 3.3]{GLPositroid}, since the skew shapes $\skew_{I_k}^{J_k}$ are connected by construction.
For (2): the content of a box in $\skew_I^{J}$ is the same as the content of the corresponding box of $\skew_R^{S}$. The right edge of a content $k$ box in $\mcg_n$ is $k$ steps from the upper right corner of $\mcg_n$. This implies $I'=R'$ and $J'=S'$, which gives the desired equality of minors.
\end{proof}
\begin{cor}\label{cor:irred-factors-skew-shapes}
The irreducible factors of $\rmin{c}$ (as an element of $\mathbb{C}[N]$) are the minors corresponding to the components of $\skew_{\vu{c}[h_c]}^{\wu{c}[h_c]}$.
\end{cor}
\begin{rmk}
In light of \cref{lem:factors-minors-on-N}, we consider skew shapes in $\mcg_n$ only up to translation parallel to $y=-x$.
\end{rmk}
\section{Ingermanson's and Leclerc's cluster algebras}\label{sec:background2}
We will define Ingermanson's cluster algebra and Leclerc's in parallel, using similar symbols for objects in each construction. ``Ingermanson" is before ``Leclerc" in alphabetical order, so the symbols will follow the same rule (e.g., Ingermanson's cluster variables will be $A_d$, while Leclerc's will be $B_d$).
\subsection{Cluster algebras}
In this section, we set conventions and notation for cluster algebras and related concepts. We refer the reader to e.g. \cite{CAbook} for most definitions.
An \emph{ice quiver} is a directed graph with no loops or 2-cycles, each vertex of which is either \emph{mutable} or \emph{frozen}. A \emph{seed} $\Sigma=(\mathbf{A}, Q)$ in a field $\mathcal{F}$ consists of an ice quiver $Q$ together with a tuple $\mathbf{A}$ of elements of $\mathcal{F}$, called \emph{cluster variables}, which are indexed by the vertices of the quiver. Cluster variables indexed by mutable vertices are \emph{mutable}; the others are \emph{frozen}. The tuple $\mathbf{A}$ is the \emph{cluster} of $\Sigma$. For each mutable cluster variable $A_i$, we have the corresponding \emph{exchange ratio}
\[\hat{y}_i := \prod_{j \in Q} A_j ^{\# \text{ arrows } A_j \to A_i \text{ in }Q}.\]
By convention, if there are $b$ arrows from $A_i$ to $A_j$ in $Q$, then there are $-b$ arrows from $A_j$ to $A_i$.
There is an involutive operation called \emph{mutation} which can be performed at any mutable vertex of $Q$; this produces a new seed $\Sigma'=(\mathbf{A}', Q')$. The collection of all seeds which can be obtained from $\Sigma$ by a sequence of mutations is the \emph{seed pattern} of $\Sigma$. The \emph{cluster algebra} $\mathcal{A}(\Sigma) \subset \mathcal{F}$ is the $\mathbb{C}$-algebra generated by all mutable variables in the seed pattern of $\Sigma$, the frozen variables, and the inverses of the frozen variables.
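For readers who want to experiment with these operations, the following Python sketch (using SymPy, in one common sign convention; it does not distinguish frozen vertices and is not used anywhere in our arguments) spells out a single mutation step.
\begin{verbatim}
import sympy

def mutate(B, x, k):
    """One seed mutation at index k (illustrative sketch).

    B[i][j] is the signed number of arrows i -> j (negative entries mean arrows
    j -> i); x is the list of cluster variables as SymPy expressions."""
    n = len(B)
    Bp = [[-B[i][j] if k in (i, j)
           else B[i][j] + max(B[i][k], 0) * max(B[k][j], 0)
                        - max(-B[i][k], 0) * max(-B[k][j], 0)
           for j in range(n)] for i in range(n)]
    m_in = sympy.prod([x[j] ** max(B[j][k], 0) for j in range(n)])    # arrows into k
    m_out = sympy.prod([x[j] ** max(-B[j][k], 0) for j in range(n)])  # arrows out of k
    xp = list(x)
    xp[k] = sympy.cancel((m_in + m_out) / x[k])
    return Bp, xp

x1, x2 = sympy.symbols("x1 x2")
B = [[0, 1], [-1, 0]]               # one arrow from vertex 0 to vertex 1
print(mutate(B, [x1, x2], 0)[1])    # [(x2 + 1)/x1, x2]
\end{verbatim}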
Let $V$ be an affine variety. We say $\mathcal{A}(\Sigma)$ is a \emph{cluster structure} on $V$ if $\mathcal{A}(\Sigma)=\mathbb{C}[V]$. If $\mathcal{A}(\Sigma)$ and $\mathcal{A}(\Sigma')$ are cluster structures on $V$, then of course $\mathcal{A}(\Sigma)$ and $\mathcal{A}(\Sigma')$ are equal as rings. However, their seeds may differ. Two cluster structures $\mathcal{A}(\Sigma)$ and $\mathcal{A}(\Sigma')$ are equal if $\Sigma$ and $\Sigma'$ are related by a sequence of mutations; they are \emph{quasi-equivalent} if $\Sigma$ and $\Sigma'$ are related by a sequence of mutations and rescalings by Laurent monomials in frozens which preserve all exchange ratios (see \cite{Fraser} for additional details).
We emphasize that a variety $V$ may have many different cluster structures, which may be quasi-equivalent or not; indeed, Richardson varieties which are open positroid varieties are known to have many cluster structures \cite{FSB}.
\subsection{Ingermanson's cluster structure}
Fix a Richardson variety $\rich{v, w}$. In \cite{gracie}, Ingermanson defined a seed $\iseed{v, \mathbf{w}}$ for $\rich{v, w}$ using a \emph{unipeak} expression for $w$. We review her results in this section.
\begin{defn}
Let $w \in S_n$. A reduced expression $\mathbf{w}$ is \emph{unipeak} if in $\wiring{\mathbf{w}}$, no strand travels down and then up.
\end{defn}
The unipeak expressions for $w$ form a nonempty commutation class of reduced expressions for $w$; in particular, every permutation has a unipeak expression \cite{KLR}. \cref{fig:wiringEx} shows a non-unipeak expression; \cref{fig:IngEx} shows a unipeak expression.
For the remainder of this section, let $\mathbf{w}$ denote a unipeak expression for $w$. We will define Ingermanson's seed $\iseed{v, \mathbf{w}}=(\mathbf{A}_{v, \mathbf{w}}, \iquiv{v, \mathbf{w}})$ in $\mathbb{C}[\rich{v, w}]$. The cluster variables $\mathbf{A}_{v, \mathbf{w}}$ are indexed by the solid crossings $\jsol{v}$ of $\wiring{v, \bw}$. We define the cluster variables by giving a monomial map from the left chamber minors to the set of cluster variables. The reader should look ahead to \cref{ex:Ing} for an example.
\begin{defn}\cite[Definition IV.6, Proposition IV.7]{gracie}
Let $J \subset [n]$ and $u \in S_n$. We define $$\Piv_J(u) := \min_{I \leq J} u(I)$$ where the minimum is taken in the Gale order\footnote{The collection $\{I : I \leq J\}$ is a matroid, so it follows from the maximality property for matroids that this minimum is unique (see \cite[Section 1.3]{CoxMatroid} and replace maximum with minimum everywhere). Ingermanson used the notation Pivots$_J(u)$.}.
\end{defn}
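Since $\Piv_J(u)$ is used repeatedly below, we record a brute-force Python sketch (illustrative only; the function names are ours) that computes it directly from the definition.
\begin{verbatim}
from itertools import combinations

def gale_leq(I, J):
    """I <= J in the Gale order: compare the sorted elements entrywise."""
    return all(i <= j for i, j in zip(sorted(I), sorted(J)))

def minval(J, u, n):
    """Piv_J(u): the Gale-minimum of { u(I) : I <= J }, by brute force over all I.

    u is a permutation of [n] in one-line notation (u[k-1] = u(k)).  The minimum
    exists because { I : I <= J } is a matroid (see the footnote above)."""
    h = len(J)
    images = [sorted(u[i - 1] for i in I)
              for I in combinations(range(1, n + 1), h) if gale_leq(I, J)]
    best = images[0]
    for cand in images[1:]:
        if gale_leq(cand, best):
            best = cand
    return best

print(minval((1, 3, 4), (1, 2, 5, 3, 4), 5))   # [1, 2, 3]
\end{verbatim}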
For $1 \leq c\leq d \leq \ell$, let $L(c,d):=s_{h_{d-1}} s_{h_{d-2}} \cdots s_{h_c}[h_c]$; that is, $L(c,d)$ records the heights of the $w$-strands below $\chi_c$ immediately before crossing $d$. For example, $L(c,c)=[h_c]$. Let $M=(m_{c,d})$ be the matrix whose rows are indexed by $[\ell]$ and whose columns are indexed by $\jsol{v}$, with entries
\begin{equation}\label{eq:is-variable-in-chamber}
m_{c,d}=\begin{cases}
0 &\text{ if } c>d\\
1 &\text{ if } \Piv_{L(c,d)}(\vl{d}s_{h_d})>\Piv_{L(c,d)}(\vl{d}) \\
0 &\text{ if } \Piv_{L(c,d)}(\vl{d}s_{h_d})=\Piv_{L(c,d)}(\vl{d}).
\end{cases}\end{equation}
Deleting the rows of $M$ indexed by hollow crossings gives a square matrix, with rows and columns indexed by $\jsol{v}$, which is upper unitriangular with 0/1 entries; we denote its inverse by $P=(p_{d,c})$.
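The matrix $P$ can be computed by back-substitution; the following Python sketch (illustrative only) inverts an upper unitriangular integer matrix such as the one just described.
\begin{verbatim}
def unitriangular_inverse(M):
    """Inverse of an upper unitriangular integer matrix, by back-substitution.

    Row i of the inverse is e_i minus sum_{j > i} M[i][j] * (row j of the inverse)."""
    n = len(M)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if M[i][j]:
                for k in range(n):
                    P[i][k] -= M[i][j] * P[j][k]
    return P

print(unitriangular_inverse([[1, 1, 0], [0, 1, 1], [0, 0, 1]]))
# [[1, -1, 1], [0, 1, -1], [0, 0, 1]]
\end{verbatim}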
\begin{defn} \label{def:Ivar}
For $d \in \jsol{v}$, we define the cluster variable
\[A_d := \prod_{c \in \jsol{v}} (\lmin{c})^{p_{d,c}}.\]
\end{defn}
Using \cref{lem:hollowRel} we may express all left chamber minors in terms of cluster variables.
\begin{prop}\cite[Proposition V.1]{gracie}\label{prop:chamber-mono-in-var-gracie}
For $c \in [\ell]$, we have
\[\lmin{c}= \prod_{d \in \jsol{v}} (A_d)^{m_{c,d}} .\]
\end{prop}
\begin{defn}
We say that a cluster variable $A_d$ \emph{appears} in chamber $\chi_c$ of $\wiring{v, \bw}$ if $m_{c,d}=1$. We denote the closure of the union of chambers in which $A_d$ appears by $\jc{d}$; ``$\Spr$" stands for ``spread."\footnote{Ingermanson used the notation $JC(j)$ instead; the ``JC" stands for ``jump chambers" since in these chambers, there is a ``jump" between the two pivot sets used to compute $m_{c,d}$.}
The cluster variable $A_d$ is frozen if $A_d$ appears in a chamber which is open on the left.
\end{defn}
\begin{example}\label{ex:Ing}
Let $v=12534$ and $\mathbf{w}=s_4s_3s_2s_1\underline{s_4}s_3s_2\underline{s_3}s_4$; the hollow crossings are underlined. See \cref{fig:IngEx} for the stacked wiring diagram. We have
\[L(6, 9)=s_{h_8}s_{h_7}s_{h_6}[h_6]=s_3 s_2 s_3[3]=\{1,3,4\} \qquad \text{ and } \qquad \vl{9}=s_4s_3.\]
We compute $m_{6,9}$, which determines if $A_9$ appears in $\ch_6$.
\[
\Piv_{L(6,9)} \vl{9}= \Piv_{134} s_4s_3= \min_{I \leq 134} s_4s_3(I)=123
\]
and
\[\Piv_{L(6,9)} \vl{9} s_4= \Piv_{134} s_4s_3 s_4=\min_{I \leq 134} s_4s_3s_4(I)= 124.
\]
Since $123<124$, $m_{6,9}=1$ and by \cref{prop:chamber-mono-in-var-gracie} $A_9$ appears in $\ch_6$.
\end{example}
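The pivot computations in this example can be checked mechanically. The following self-contained Python sketch (illustrative only) reproduces the two values above by brute force.
\begin{verbatim}
from itertools import combinations

def gale_leq(I, J):
    return all(i <= j for i, j in zip(sorted(I), sorted(J)))

def minval(J, u, n):
    """Brute-force Piv_J(u), as in the sketch after the definition of minVal."""
    images = [sorted(u[i - 1] for i in I)
              for I in combinations(range(1, n + 1), len(J)) if gale_leq(I, J)]
    best = images[0]
    for cand in images[1:]:
        if gale_leq(cand, best):
            best = cand
    return best

# One-line notation in S_5: s4*s3 = (1,2,5,3,4) and s4*s3*s4 = (1,2,5,4,3).
L = (1, 3, 4)                                    # L(6,9) = {1,3,4}
print(minval(L, (1, 2, 5, 3, 4), 5))             # [1, 2, 3]
print(minval(L, (1, 2, 5, 4, 3), 5))             # [1, 2, 4]
# 123 < 124 in the Gale order, so m_{6,9} = 1 and A_9 appears in chi_6.
\end{verbatim}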
To summarize, so far we have a labeling of chambers of $\wiring{v, \bw}$ by monomials in cluster variables $A_d$ (see \cref{fig:IngEx} for an example). Moving from right to left, $A_d$ first appears in $\ch_d$, and then spreads to other chambers. The appearance of $A_d$ in $\chi_c$ is governed by the two pivot sets in \eqref{eq:is-variable-in-chamber}.
To define $\iquiv{v, \bw}$, we first draw a quiver on the wiring diagram, following \cite{CA3}.
\begin{defn}
The \emph{wiring diagram quiver }$\wquiv{v, \bw}$ has vertices labeled by left chamber minors of $\wiring{v, \bw}$. The chamber minors in frozen chambers of $\wiring{v, \bw}$ are frozen; all others are mutable. To determine the arrows, place the configuration of half arrows in \cref{fig:halfArrows} around each crossing of $\wiring{\mathbf{w}}$ and sum up the contributions\footnote{That is, to determine the number of arrows from $\lmin{c}$ to $\lmin{d}$: count the number of half-arrows from $\lmin{c}$ to $\lmin{d}$, subtract the number of half-arrows from $\lmin{d}$ to $\lmin{c}$ and divide by 2.}. Delete all arrows between frozen variables.
\end{defn}
\begin{figure}
\includegraphics[width=0.1\textwidth]{halfArrows}
\caption{\label{fig:halfArrows} The half arrow configuration used to define the wiring diagram quiver. The horizontal arrow is two half-arrows.}
\end{figure}
\begin{defn} \label{def:IngQuiv}
Let $B$ denote the square signed adjacency matrix of $\wquiv{v,\mathbf{w}}$, with rows and columns indexed by $[\ell]$. For $c, d \in \jsol{v}$ with $c \neq d$,
in $\iquiv{v, \bw}$ we have
\begin{align*}\#(\text{arrows }A_c \to A_d)&=\sum_{\chi_a \ni A_c, \chi_b \ni A_d} B_{a, b}\\
&=\sum_{a, b \in[\ell]} m_{a, c} B_{a,b} m_{b, d}\\
&= (M^t B M)_{c,d}.
\end{align*}
Equivalently, let $\hat{y}^W_\ch$ denote the $\hat{y}$-variable for a mutable vertex in $\wquiv{v, \bw}$ (this is a ratio of left chamber minors). Then
\begin{equation}\label{eq:yHats}
\hat{y}_c= \prod_{\ch \ni A_c} \hat{y}^W_\ch.
\end{equation}
\end{defn}
In words, for each arrow in $\wquiv{v, \bw}$ between chambers containing $A_c$ and $A_d$, put an arrow between $A_c$ and $A_d$ in $\iquiv{v, \bw}$. Then delete 2-cycles.
\begin{figure}
\includegraphics[width=\textwidth]{IngEx}
\caption{\label{fig:IngEx} Left: a stacked wiring diagram $\wiring{v, \mathbf{w}}$ for $v=12534$ and $\mathbf{w}=s_4s_3s_2s_1s_4s_3s_2s_3s_4$. Chambers are labeled by the cluster monomials from \cref{prop:chamber-mono-in-var-gracie}. The wiring diagram quiver is drawn on top. Right: the seed $\iseed{v, \mathbf{w}}.$}
\end{figure}
Ingermanson showed that the upper cluster algebra $\mathcal{U}(\iseed{v, \bw})$ is equal to $\mathbb{C}[\rich{v,w}]$. Further, \cite[Corollary 5.8, Remark 7.18]{GLSBS1} shows that the cluster algebra $\mathcal{A}(\iseed{v, \bw})$ is locally acyclic, which implies by work of Muller \cite{Muller} that $\mathcal{A}(\iseed{v, \bw})= \mathcal{U}(\iseed{v, \bw})$. So we have the following theorem.
\begin{thm}\cite{gracie,GLSBS1} \label{thm:Gracie-clusterstruc}
Fix $v \leq w$ and let $\mathbf{w}$ be a unipeak expression for $w$. Then $\mathcal{A}(\iseed{v, \bw})= \mathbb{C}[\rich{v, w}]$.
\end{thm}
\subsection{Leclerc's cluster structure}
We recall Leclerc's construction of a conjectural cluster structure on $\mathbb{C}[\rich{v,w}]$ \cite{Leclerc}. One of the main results of this paper is that his construction does in fact yield a cluster structure.
Leclerc defines a cluster category inside the module category of a preprojective algebra. Via the cluster character map, this gives rise to a cluster subalgebra of $\mathbb{C}[\rich{v,w}]$. For a more detailed exposition of the representation theoretic construction we refer to \cite[Section 5]{SSBW}.
The preprojective algebra $\Lambda_{n-1}$ of type $A$ is the path algebra of the quiver
$$P= \xymatrix@!{1 \ar@/^/[r]^{\alpha_1}& \ar@/^/[l]^{\alpha_1^*} 2 \ar@/^/[r]^{\alpha_2}& \ar@/^/[l]^{\alpha_2^*} 3 \ar@/^/[r]^{\alpha_3} &\cdots \ar@/^/[l]^{\alpha_3^*} \ar@/^/[r]^{\alpha_{n-2}} & \ar@/^/[l]^{\alpha_{n-2}^*} n-1}$$
with relations
$$\sum_{i} \alpha_i \alpha_i^*-\alpha_i^*\alpha_i=0.$$
A module $U$ over $\Lambda_{n-1}$ is obtained by placing a $\mathbb{C}$-vector space $U_i$ at each vertex $i$ of $P$ and linear maps between these vector spaces $\phi_{\alpha_i}:U_i\to U_{i+1}$ and $\phi_{\alpha_i^*}:U_{i+1}\to U_{i}$ for each arrow of $P$, such that the maps satisfy the relations given above. Let $\dim\, U:= (\dim\, U_{i})_{i \in [n-1]}$ denote the dimension vector of $U$. The support of $U$ is the set of all vertices $i$ in the quiver such that $U_i\not=0$. Let $\left|U\right|$ denote the number of pairwise non-isomorphic indecomposable direct summands of $U$, and let $\add U$ denote the full subcategory of the module category whose objects are direct sums of summands of $U$.
We will be interested in a special type of $\Lambda_{n-1}$-modules, which correspond to skew shapes in $\mcg_n$. Let $\lambda/\mu \subset \mcg_n$ be a skew shape with content in $[n-1]$. The $\Lambda_{n-1}$-module $U_{\lambda/\mu}$ is as follows. Recall the boxes of $\mcg_n$ are indexed as in a matrix. Each box $b=b_{i,j}$ of $\lambda/\mu$ with content $c$ yields a basis vector $e_{i,j}$ of $(U_{\lambda/\mu})_{c}$ and the maps are defined as follows.
\[ \phi_{\alpha_{c}}(e_{i,j}) = \begin{cases}
e_{i+1,j} & \text{if } b_{i+1,j}\in \lambda/\mu \\
0 & \text{else}
\end{cases} \hspace{1cm}
\phi_{\alpha^*_{c-1}}(e_{i,j}) = \begin{cases}
e_{i,j+1} & \text{if } b_{i,j+1}\in \lambda/\mu \\
0 & \text{else}
\end{cases} \]
For example, if $\lambda/\mu$ is the $(n-k)\times k$ rectangle whose lower right corner has content $k$, then $U_{\lambda/\mu}$ is the indecomposable injective $\Lambda_{n-1}$-module at vertex $k$, and if $\lambda/\mu$ is a single content $k$ box then $U_{\lambda/\mu}$ is the simple $\Lambda_{n-1}$-module at vertex $k$, which we will denote by $S(k)$.
From this description it follows that the top of $U_{\lambda/\mu}$ is a direct sum of simple modules $S(k)$, one for each box $b_{i,j} \in \lambda/\mu$ with content $k$ such that $b_{i-1,j}, b_{i,j-1}\not\in \lambda/\mu$. Similarly, the socle of $U_{\lambda/\mu}$ is a direct sum of simple modules $S(k)$, one for each box $b_{i,j} \in \lambda/\mu$ with content $k$ such that $b_{i+1,j}, b_{i,j+1}\not\in \lambda/\mu$. That is, $b_{i,j}$ is a content $k$ corner of the northwest or southeast boundary of $\lambda/\mu$ respectively.
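The dimension vector, top, and socle of $U_{\lambda/\mu}$ can be read off from the boxes exactly as just described; the following Python sketch (illustrative only) does so, and on the skew shape of \cref{fig:latticePathEx} it exhibits the two indecomposable summands, namely $S(1)$ and a module with top $S(6)$ and socle $S(4)$.
\begin{verbatim}
def skew_module_data(boxes, n):
    """Dimension vector, top, and socle of U_{lambda/mu}, read off from the boxes.

    boxes: the (row, col) pairs of a skew shape in the n x n grid; the content of
    (r, c) is r - c + n.  Top boxes have no neighbor above or to the left in the
    shape; socle boxes have no neighbor below or to the right."""
    boxes = set(boxes)
    content = lambda b: b[0] - b[1] + n
    dim = [sum(1 for b in boxes if content(b) == k) for k in range(1, n)]
    top = sorted(content((r, c)) for (r, c) in boxes
                 if (r - 1, c) not in boxes and (r, c - 1) not in boxes)
    socle = sorted(content((r, c)) for (r, c) in boxes
                   if (r + 1, c) not in boxes and (r, c + 1) not in boxes)
    return dim, top, socle

# The skew shape with I = {1,3,4}, J = {2,3,7}, n = 7 from the background section:
boxes = [(1, 7), (3, 4), (3, 5), (3, 6)]
print(skew_module_data(boxes, 7))
# ([1, 0, 0, 1, 1, 1], [1, 6], [1, 4])
\end{verbatim}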
A module $U_{\lambda'/\mu'}$ is a submodule of $U_{\lambda/\mu}$ if $\lambda'/\mu' \subseteq \lambda/\mu$ and, for every $b_{i,j}\in \lambda'/\mu'$, whenever $b_{i+1,j} \in \lambda/\mu$ (resp. $b_{i,j+1} \in \lambda/\mu$) then also $b_{i+1,j} \in \lambda'/\mu'$ (resp. $b_{i,j+1} \in \lambda'/\mu'$). On the other hand, a module $U_{\lambda'/\mu'}$ is a quotient of $U_{\lambda/\mu}$ if $\lambda'/\mu' \subseteq \lambda/\mu$ and, for every $b_{i,j}\in \lambda'/\mu'$, whenever $b_{i-1,j} \in \lambda/\mu$ (resp. $b_{i,j-1} \in \lambda/\mu$) then also $b_{i-1,j} \in \lambda'/\mu'$ (resp. $b_{i,j-1} \in \lambda'/\mu'$). Any map of two modules $f: U_{\lambda/\mu}\to U_{\nu/\rho}$ is determined by its image $\text{im}\,f$, where $\text{im}\,f=U_{\lambda'/\mu'}$ is a quotient of $U_{\lambda/\mu}$ and a submodule of $U_{\nu/\rho}$.
\begin{rmk}\label{rmk:components-give-indec-summands}
Consider a skew shape $\lambda/\mu$ in $\mcg_n$ with content contained in $[n-1]$. The components of $\lambda/\mu$ (c.f. \cref{defn:content}) give the indecomposable summands of $U_{\lambda/\mu}$.
\end{rmk}
Recall that a pair of subsets $I \leq J \in \binom{[n]}{h}$ determines a pair of Young diagrams $\young{I} \supset \young{J}$ and a skew shape $\skew_I^{J}$ (c.f. \cref{defn:latticePath}). So the pair $I \leq J$ also determines a $\Lambda_{n-1}$-module, which we denote $U_{I,J}$.
Given $v\leq w$, Leclerc defines a certain subcategory $\mathcal{C}_{v,w}$ of the module category of $\Lambda_{n-1}$ which he showed admits a cluster structure in the sense of \cite{BIRS}. The category $\mathcal{C}_{v,w}$ is equipped with a cluster character map
\begin{align*}
\varphi : \obj \mathcal{C}_{v,w}&\to \mathbb{C}[\rich{v,w}]\\
U & \mapsto \varphi_U
\end{align*}
satisfying $U \oplus U' \mapsto \varphi_U \varphi_{U'}$. Each (reachable) \emph{cluster tilting module} $U$ of $\mathcal{C}_{v,w}$ corresponds to a seed in $\mathbb{C}(\rich{v,w})$; the cluster variables are the images of the indecomposable summands of $U$ under $\varphi$.
To obtain a cluster tilting module, we define a module for each chamber of the wiring diagram.
\begin{defn}
Fix $v \leq \mathbf{w}$. For $c \in [\ell]$, the \emph{chamber module} is
\begin{equation}\label{eq:chamberModule}
U_c :=U_{\vu{c}[h_c],\wu{c}[h_c]}.
\end{equation}
\end{defn}
See \cref{fig:LecEx} for an example of a stacked wiring diagram with chambers labeled by chamber modules. By results of Leclerc, $\varphi$ maps $U_c$ to $\rmin{c}$.
The main result of \cite{Leclerc} can be formulated as follows.
\begin{theorem}\cite[Theorem 4.5]{Leclerc}\label{Lec-seed} Fix $v \leq \mathbf{w}$.
$$U_{v,{\bf w}}:=
\bigoplus_{c\in \jsol{v}} U_c$$ is a cluster tilting object in $\mathcal{C}_{v,w}$. The corresponding seed $\lseed{v, \bw}=(\mathbf{B}_{v, \bw}, \lquiv{v, \bw})$ in $\mathbb{C}[\rich{v, w}]$ can be described as follows.
\begin{itemize}
\item[(a)] The cluster variables are the $\ell(w)-\ell(v)$ irreducible factors\footnote{We mean the irreducible factors of $\rmin{c}$ as a function on $N$.} of
\[\prod_{c \in \jsol{v}} \varphi_{U_c} = \prod_{c \in \jsol{v}} \rmin{c}.\]
The set of cluster variables is the $\varphi$-image of the set of indecomposable summands of the $U_c$.
\item[(b)] A cluster variable is frozen if it is a factor of the right chamber minor of a frozen chamber in $\wiring{v, \mathbf{w}}$. The frozen variables are $\varphi$-images of the
indecomposable summands of $\bigoplus_{i\in I} U_{v^{-1}([i]),w^{-1}([i])}$ (which
are the projective-injective objects).
\item[(c)] The quiver $ \lquiv{v, \mathbf{w}}$ is the endomorphism quiver of the cluster tilting module. In particular, the vertices of the quiver are nonisomorphic indecomposable summands of $U_{v,{\bf w}}$ and the arrows are irreducible morphisms in $\add U_{v,{\bf w}}$ between these summands.
\end{itemize}
Finally, the cluster algebra
$\mathcal{A}(\lseed{v, \mathbf{w}})$
is a subalgebra of
$\mathbb{C}[\rich{v, w}]$.
\end{theorem}
\begin{rmk}
Analogously to Ingermanson's construction, in Leclerc's construction the right chamber minors of $\wiring{v, \bw}$ are cluster monomials. This is clear for chambers to the left of a solid crossing by construction; for chambers to the left of a hollow crossing, this follows from \cref{lem:hollowRel}. We say that a cluster variable $B \in \mathbf{B}_{v, \bw}$ \emph{appears} in a chamber $\chi_c$ if $B$ is an irreducible factor of the right chamber minor $\rmin{c}$.
\end{rmk}
\begin{figure}
\includegraphics[width=\textwidth]{LecEx}
\caption{\label{fig:LecEx} The stacked wiring diagram $\wiring{v, \mathbf{w}}$ for $\mathbf{w}=s_4s_3s_2s_1s_4s_3s_2s_3s_4$ and $v=12534$, with chambers labeled by chamber modules (or by skew shapes).}
\end{figure}
In certain special cases, Leclerc showed that $\mathcal{A}(\lseed{v, \mathbf{w}}) = \mathbb{C}[\rich{v, w}]$. He conjectured that this equality holds in general.
\begin{conj}\label{conj:leclerc} \cite{Leclerc}
The cluster algebra $\mathcal{A}(\lseed{v, \bw})$ is equal to $\mathbb{C}[\rich{v,w}]$.
\end{conj}
Our main result is that this conjecture is true. Moreover Leclerc's seeds for different reduced expressions $\mathbf{w}, \mathbf{w}'$ are related by mutation (c.f. \cref{prop:LecSeedMutationEquiv}), so Leclerc's construction gives a single cluster structure on $\mathbb{C}[\rich{v, w}]$.
\subsection{Base case} In this section we show the following lemma.
\begin{lemma}\label{lem:base_case}
Suppose the final crossing $\ell$ is solid. Then $A_\ell$ appears in $\lmin{1}$ if and only if $B_\ell$ appears in $\rmin{1}$.
\end{lemma}
\begin{proof}
Since $\ell\in \jsol{v}$, $s_{i_\ell}$ is not in the PDS for $v$ in $\mathbf{w}$. Set $j:=i_{\ell}$ and $k=i_1$. Then $B_\ell = \minor{j, j+1}$ is a one-by-one minor; the corresponding skew shape is a single box with content $j$. It follows that $B_\ell$ appears in $\rmin{1}$ if and only if all of the following conditions hold:
\begin{itemize}
\item[(1)] $j\not\in w^{-1}[k]$ and $j+1\in w^{-1}[k]$;
\item[(2)] $j\in v^{-1}[k]$ and $j+1\not\in v^{-1}[k]$;
\item[(3)] $\left| w^{-1}[k]\cap [j-1] \right| = \left| v^{-1}[k]\cap [j-1] \right|$.
\end{itemize}
Now let $w'=ws_{j}$. On the other hand, $A_\ell$ appears in $\Delta_1^\lambda$ if and only if $\Piv_L(vs_{j})>\Piv_L(v)$ where $L=(w')^{-1}[k]$. First, $v^{-1}\leq (w')^{-1}$ implies that $v^{-1}[k] \leq L$, so we have $\Piv_L(v)=[k]$. Then the same argument implies that $\Piv_L(vs_{j})>[k]$ if and only if $s_{j}v^{-1}[k]\not\leq (w')^{-1}[k]$.
To prove the lemma it suffices to show that conditions (1)-(3) above are equivalent to the condition that $s_{j}v^{-1}[k]\not\leq (w')^{-1}[k]$. Since $v\leq w'$, it is easy to see that $s_{j}v^{-1}[k]\not\leq (w')^{-1}[k]$ if and only if all of the following conditions hold:
\begin{itemize}
\item[(1')] $j\in (w')^{-1}[k]$ and $j+1\not\in (w')^{-1}[k]$;
\item[(2)] $j\in v^{-1}[k]$ and $j+1\not\in v^{-1}[k]$;
\item[(3')] $\left| (w')^{-1}[k]\cap [j-1] \right| = \left| s_{j}v^{-1}[k]\cap [j-1] \right|$.
\end{itemize}
From $w=w's_{j}$, (1) and (1') are equivalent, as are (3) and (3').
\end{proof}
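The final equivalence used in the proof above is also easy to confirm by brute force on small cases. The following sketch (written in Python purely as a sanity check, not as part of the proof; the helper names are ours) enumerates pairs of equal-size subsets $V=v^{-1}[k]$ and $W=(w')^{-1}[k]$ with $V\leq W$ in the componentwise order and compares conditions (1'), (2), (3') with the condition $s_{j}V\not\leq W$:
\begin{verbatim}
from itertools import combinations

def gale_leq(A, B):
    # Componentwise order on equal-size subsets of [n].
    return all(a <= b for a, b in zip(sorted(A), sorted(B)))

def swap(A, j):
    # Apply s_j to a subset: exchange j and j+1.
    return {j + 1 if a == j else j if a == j + 1 else a for a in A}

n, failures = 7, 0
for k in range(1, n):
    for W in combinations(range(1, n + 1), k):       # plays the role of (w')^{-1}[k]
        for V in combinations(range(1, n + 1), k):   # plays the role of v^{-1}[k]
            if not gale_leq(V, W):
                continue                             # the lemma assumes v <= w'
            for j in range(1, n):
                c1 = (j in W) and (j + 1 not in W)   # condition (1')
                c2 = (j in V) and (j + 1 not in V)   # condition (2)
                sjV = swap(set(V), j)
                c3 = (len([x for x in W if x < j]) ==
                      len([x for x in sjV if x < j]))  # condition (3')
                if (c1 and c2 and c3) != (not gale_leq(sjV, W)):
                    failures += 1
print("counterexamples found:", failures)
\end{verbatim}
Running the script for, e.g., $n=7$ should report no counterexamples.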
\subsection{Leclerc's factorizations are stable under left and right multiplication}
In this section, we show that the appearance of $B_d$ in a chamber $\ch$ does not change under removing prefixes and suffixes from $\mathbf{w}$.
We need some definitions involving wiring diagrams which differ by a single crossing at the left or right. Notice that if $\mathbf{w} < \mathbf{w} \cdot s_i$, each chamber $\chi$ of $\wiring{\mathbf{w}}$ corresponds naturally to a chamber of $\wiring{\mathbf{w} \cdot s_i}$, which we also denote by $\chi$, and similarly if $\mathbf{w} < s_i \cdot \mathbf{w}$.
\begin{defn}\label{def:right}
Let $\mathbf{w}'=\mathbf{w}_{(\ell-1)}$ and $v'=v_{(\ell-1)}$. Note that $\jsol{v}\setminus \{\ell\}=\jsol{v'}$. We denote cluster variables in $\iseed{v', \mathbf{w}'}$ and $\lseed{v', \mathbf{w}'}$ by $A'_j$ and $B_j'$, respectively. For $d \in \jsol{v'}$, the appearance of $A_d'$ (resp. $B_d'$) is \emph{stable under right multiplication} if for all chambers $\chi$ of $\wiring{v', \mathbf{w}'}$, $A'_d$ (resp. $B_d'$) appears in $\chi$ in $\wiring{v', \mathbf{w}'}$ if and only if $A_d$ (resp. $B_d$) appears in $\ch$ in $\wiring{v, \mathbf{w}}$.
\end{defn}
We make exactly analogous definitions for left multiplication.
\begin{defn}\label{def:left}
Let $\mathbf{w} = s_{i_1} \dots s_{i_\ell}$ and $v\leq w$. Let $\mathbf{w}'=s_{i_2} \dots s_{i_\ell}$ and $v'=s^v_{i_2} \dots s^v_{i_\ell}$. Note that the crossings of $\mathbf{w}'$ are indexed by $2, \dots, \ell$, and that $\jsol{v}\setminus \{1\}=\jsol{v'}$. We denote cluster variables in $\iseed{v', \mathbf{w}'}$ and $\lseed{v', \mathbf{w}'}$ by $A'_j$ and $B_j'$, respectively. For $d \in \jsol{v'}$, the appearance of $A_d'$ (resp. $B_d'$) is \emph{stable under left multiplication} if for all chambers $\chi$ of $\wiring{v', \mathbf{w}'}$, $A'_d$ (resp. $B_d'$) appears in $\chi$ in $\wiring{v', \mathbf{w}'}$ if and only if $A_d$ (resp. $B_d$) appears in $\ch$ in $\wiring{v, \mathbf{w}}$.
\end{defn}
\begin{lem}\label{lem:Lec_left}
In the setup of \cref{def:left}, let $j \in \jsol{v'}$. The appearance of $B'_j$ is stable under left multiplication.
\end{lem}
\begin{proof}
Notice that all right chamber minors ${(\rmin{c})}'$ of $\wiring{v', \mathbf{w}'}$ are equal to the corresponding right chamber minor $\rmin{c}$ of $\wiring{v, \mathbf{w}}$. Also, $B'_j=B_j$. The claim follows.
\end{proof}
\begin{prop}\label{lem:Lec_right}
In the setup of \cref{def:right}, let $j \in \jsol{v'}$. The appearance of $B'_j$ is stable under right multiplication.
\end{prop}
\begin{proof}
Fix a chamber $\ch_c$. Note that the appearance of a cluster variable $B'_j$ in a right chamber minor $\rmin{c}$ does not depend on the prefix $s_{i_1} \dots s_{i_{c-1}}$. So we may assume without loss of generality that $c=1$ is the first crossing of $\mathbf{w}'$. Also, we set $k:=i_\ell$.
First, suppose that $B_j'$ appears in $(\rmin{1})'=\Delta_{v'^{-1}[i_1],{w'}^{-1}[i_1]}$. Then $B_j' =\Delta'_{R,S}$ for some $R \subset [r, s]$ and $S \subset [r+1, s+1]$ with $s\geq r$, where $r\in R$ and $s+1\in S$. The corresponding lattice paths for $B_j'$ and $(\Delta_1^\rho)'$ are shown in Figure~\ref{LecFact} on the left. In what follows, it will be more convenient to work with modules and lattice paths instead of minors. Let $M_j', M_j$ be the indecomposable modules that correspond to $B_j', B_j$ respectively. Also, let $X,Y$ be the summands of $U_1'$ that are adjacent to $M_j'$ as in the figure. To prove the proposition, we will compare the indecomposable module $M_j'$ with $M_j$, and also the chamber module $U'_1$ with $U_1$ (c.f. \eqref{eq:chamberModule}).
\begin{figure}
\scalebox{.6}{\Large\input{fig2.pdf_tex}}
\vspace*{-4cm}
\caption{The relation between $U_1$ and $U_1'$ in the proof of Proposition~\ref{lem:Lec_right}.}
\label{LecFact}
\end{figure}
If $k<r-1$ or $k>s+1$, then $M_j'=M_j$ and $U_1$ is obtained from $U'_{1}$ by possibly adding and/or removing a content $k$ box. In particular, we see that $M_j$ is a summand of $U_1$ as desired. Similarly, if $k\in[r,s]$ then $M_j$ is obtained from $M_j'$ by possibly adding and/or removing a content $k$ box, while $U_1$ is obtained from $U_{1}'$ in the exact same way by adding and/or removing the corresponding box to the summand $M_j'$ of $U'_{1}$. Hence, we conclude again that $M_j$ is a summand of $U_1$.
Next, suppose that $k=r-1$. There are several cases to consider (see Figure~\ref{LecFact}, right). Note that by assumption we have $r\not\in {w'}^{-1}[i_1]$ and $r\in (v')^{-1}[i_1]$.
(1) Suppose that $k=r-1 \in {w'}^{-1}[i_1]$ and $r-1\in (v')^{-1}[i_1]$. Then $U_1$ is obtained from $U'_1$ by adding a content $r-1$ box, and the summands $X,Y$ do not change. Similarly, since $M'_j, M_j$ are the topmost summands in $U'_j, U_j$ respectively, $M_j$ is obtained from $M'_j$ by adding a content $r-1$ box. Hence, we conclude that $M_j$ is a summand of $U_1$ as desired.
(2) Suppose that $k=r-1 \in {w'}^{-1}[i_1]$ and $r-1\not \in (v')^{-1}[i_1]$. Since $r\in (v')^{-1}[i_1]$ and $r-1\not \in (v')^{-1}[i_1]$, we conclude that $s_{r-1}(v')^{-1}[i_1] < (v')^{-1}[i_1]$, so $v's_{r-1}<v'$. Thus $v=v'$, but in the expression for $w=w's_{r-1}$ the reduced expression for $v'=v$ contains $s_{r-1}$. This contradicts the assumption in Definition~\ref{def:right} that $v'=v_{(\ell-1)}$.
(3) Suppose $k=r-1\not\in (w')^{-1}[i_1]$. Then $r-1\not\in(v')^{-1}[i_1]$. In this case, it may happen that $M_j$ is obtained from $M_j'$ by adding a content $r-1$ box. However, $U'_1=U_1$, so we observe that $M_j'$, and not $M_j$, is a summand of $U_1$. We will show that this leads to a contradiction. Let $U_p$ contain $M_j'$ as a summand with $p$ maximal, that is, $M_j'=M_p$. By \cref{lem:LecVar} the module $M_j'$ is the topmost summand in $U_p$. Note that $p\not=j$ since $M_j\not=M_p$. Then $p\in \jsol{v'}$ and $M_j'$ is also the topmost summand of $U'_p$. Again \cref{lem:LecVar} implies that $p$ is the maximal index such that $M_j'$ is a summand of $U'_p$. Therefore, $p=j$, which is a contradiction.
This completes the proof in the case $k=r-1$ and the other remaining situation is when $k=s+1$, which can be shown in a similar way. This shows that if $M_j'$ is a summand of $U'_1$ then $M_j$ is a summand of $U_1$. It remains to show the converse.
Now, suppose that $M_j$ is a summand of $U_1$, and we want to show that $M_j'$ is a summand of $U'_1$. Again let $i_{\ell}=k$ and we consider several cases.
If $M_j = M_j'$ and $M_j$ is a summand of $U_1$ then one cannot add a content $k$ box to the top of $M_j'$ in $U_1$ or remove a content $k$ box from the bottom of $M_j'$ in $U_1$. Then $M_j'$ is also a summand of $U'_1$ because $U'_1$ is obtained from $U_1$ by possibly removing a content $k$ box from the top and/or adding it to the bottom. Hence, it remains to show the case when $M_j\not=M_j'$.
If $M_j = S(k)$ is a simple module represented by a single box and $M_j'=0$, then $s_{k}=s_{i_{\ell}}$ at the end of $\mathbf{w}$ is not part of the reduced expression for $v$ inside $\mathbf{w}$, as otherwise $M_j$ would be zero. But then $M_{\ell} = S(k)$ so $\ell=j$ which is a contradiction, since we assume $j<\ell$.
Now suppose that $M_j'$ is obtained from $M_j$ just by removing a content $k$ box from the top. If in addition $U_1\not= U'_1$, then $U'_1$ is similarly obtained from $U_1$ by removing a content $k$ box from the top. Hence, $M_j'$ is a summand of $U'_1$ as desired. Otherwise, if $U_1= U'_1$, then $M_j$, and not $M'_j$, is a summand of $U'_1$. But then $M_j=M'_p$ for some $p\not=j$, and so $M_j=M'_p=M_p$, where the last equality holds since $M'_p$ already has a content $k$ box at the top. Hence, $j=p$, which is a contradiction.
The same argument applies if $M_j'$ is obtained from $M_j$ just by adding a box with content $k$ to the bottom.
Finally, suppose that $M_j'$ is obtained from $M_j$ by both removing a content $k$ box from the top and adding a content $k$ box to the bottom. Then $k$ would be a vertical step while $k+1$ would be a horizontal step for both the top and the bottom contour of $M_j'$. This implies that $M_j'$ and $M_j$ both contain a box with content $k-1$ and a box with content $k+1$. Hence, $k$ is neither a minimal nor a maximal content of a box in $M_j'$ and $M_j$. This implies that if $M_j$ is a summand of $U_1$ then $M'_j$ is a summand of $U'_1$ as desired.
This completes all the cases and proves the proposition.
\end{proof}
\subsection{Morphisms coming from neighboring chambers}\label{sec:map-descripiton}
Let $\chi$ and $\chi'$ be chambers adjacent to a solid crossing $i \in \jsol{v}$, and let $U_\ch$ and $U_{\ch'}$ be the corresponding chamber modules. Let $\alpha$ be the falling $w$-strand at the crossing $i$ with right endpoint $a$, and let $\alpha'$ be the rising $w$-strand at $i$ with right endpoint $a'$. Let $\Delta_{I,J}, \Delta_{I',J'}$ denote the right chamber minors for the chambers $\chi,\chi'$ respectively. We will define explicit morphisms between the modules $U_{\chi}$ and $U_{\chi'}$. There are three cases depending on the relative positions of the chambers $\chi$ and $\chi'$. Recall the notation $\chi_{i^{\uparrow}}, \chi_{i^{\rightarrow}}, \chi_{i^{\downarrow}}, \chi_{i^{\leftarrow}}$ for the chambers above, to the right of, below, and to the left of $i$, respectively.
First, suppose that $\chi = \chi_{i^{\rightarrow}}$ and $\chi'=\chi_{i^{\leftarrow}}$ (Figure~\ref{Fig:fgh}, left). Since $i\in \jsol{v}$, we see that $I=I'$ and $J'=J\set
-face images of 38 human subjects. There are 64 images, each of size $192 \times 168$ pixels, per individual. The face images were captured under various lighting conditions. Similar to~\cite{elhamifar2013sparse}, the images were downsampled to $48\times42$ pixels. For LG-SSC, we set $p=4$ and $s=2$. In order to study the effect of the number of clusters on the clustering performance, we consider two different experimental settings: 1) We follow the setting used in~\cite{elhamifar2013sparse}, which has implicitly become the standard setting for reporting the performance of subspace clustering algorithms on this database over the past years. In particular, for $n \in \{2,3,5,8,10\}$ clusters, the images of the 38 subjects are divided into 4 groups, [1-10], [11-20], [21-30] and [31-38]. For $n \in \{2,3,5,8\}$ clusters, all possible choices of $n$ subjects within each group are considered, and for $n=10$ subjects, only the first three groups are considered. The subspace clustering algorithms are applied to the corresponding subsets of images, and the average ACC, NMI and ARI values over these subsets are reported in Table~\ref{tabyale}. The numbers indicated with * are taken from the corresponding papers. 2) For $n \in \{15, 20, 30, 38\}$, we simply select the images of the first $n$ subjects of the database and apply the subspace clustering algorithms. The values of the three metrics ACC, NMI and ARI for each subspace clustering algorithm are reported in Table~\ref{tabyale2}.
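For completeness, the three reported metrics can be computed from the predicted and ground-truth labels as in the following sketch (a minimal illustration assuming scikit-learn and SciPy are available, with integer labels in $0,\dots,K-1$; the function name is ours and this is not the exact evaluation script of the compared methods):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_scores(y_true, y_pred):
    """Return (ACC, NMI, ARI) in percent for two integer label vectors."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    # ACC: best one-to-one matching of predicted clusters to classes.
    D = max(y_pred.max(), y_true.max()) + 1
    cost = np.zeros((D, D), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)      # maximize matched samples
    acc = cost[row, col].sum() / y_true.size
    nmi = normalized_mutual_info_score(y_true, y_pred)
    ari = adjusted_rand_score(y_true, y_pred)
    return 100 * acc, 100 * nmi, 100 * ari
\end{verbatim}
ACC relies on the Hungarian algorithm to match predicted clusters to classes, whereas NMI and ARI are invariant under label permutations by construction.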
We observe that:
\begin{itemize}
\item LG-SSC and MG-SSC significantly outperform the other approaches in all cases. Specifically, for $n \geq 15$, the accuracy of LG-SSC is more than 20\% higher than that of SSC, which is the basic foundation of this approach. The results indicate the efficiency of the hierarchical structure of LG-SSC in dealing with severe illumination effects.
\item MG-SSC performs slightly better than LG-SSC when the number of clusters is low. However, when the number of clusters increases to 20, LG-SSC outperforms MG-SSC. This confirms that gradually feeding summarized information from local patches into a hierarchical framework can increase robustness, especially in more challenging cases.
\item The performance of LG-SSC is quite stable with respect to the number of clusters.
\item The performance of SSC, LRR, $S^3C$ and LRSC decreases significantly as the number of clusters increases.
\item Sparse-based approaches, in particular SSC and $S^3C$, generally perform better than the low-rank based approaches LRR and LRSC.
\item EDSC benefits from a specific post-processing step that the other six approaches do not use, and this post-processing of the affinity matrix plays a major role in the accuracy of clustering. We noted that without this post-processing step, the quality of the obtained coefficient matrix of EDSC is similar to that of LRR and LRSC. This makes sense, as EDSC regularizes the coefficient matrix using the Frobenius norm, which exhibits characteristics similar to those of the nuclear norm in subspace clustering.
\end{itemize}
\begin{center}
\begin{table*}[!htbp]
\begin{center}
\caption{Parameters of the compared approaches.}
\label{tabpar}
\small\addtolength{\tabcolsep}{-1pt}
\begin{tabular}{c||ccc}
\hline
\multicolumn{1}{c||}{Approach} & \multicolumn{3}{c}{Parameters} \\
& Extended Yale B & AR &Coil 20\\
\hline
LG-SSC & \begin{tabular}{@{}c@{}}$\alpha = 20$, $\lambda_1 = 1$, $\lambda_2 = 10$ \\ $p=4$, $s=2$\end{tabular} & \begin{tabular}{@{}c@{}}$\alpha = 100$, $\lambda_1 = 5$, $\lambda_2 = 10$\\ $p=4$, $s=3$\end{tabular} & \begin{tabular}{@{}c@{}}$\alpha = 20$, $\lambda_1 = 2$, $\lambda_2 = 10$\\ $p=4$, $s=2$\end{tabular}\\
MG-SSC & $\alpha = 20$, $p=4$, $s=3$ & $\alpha = 100$, $p=9$, $s=3$ & $\alpha = 20$, $p=4$, $s=2$\\
SSC & $\alpha = 20$ & $\alpha = 100$ & $\alpha =20$\\
LRR & $\lambda = 0.009$ & $\lambda = 0.095$ & $\lambda = 0.0092$\\
EDSC & \begin{tabular}{@{}c@{}}$\lambda_1 = 0.06$, $\lambda_2 = 0.01$ \\ dim = 10, $\alpha = 4$\end{tabular} & \begin{tabular}{@{}c@{}}$\lambda_1 = 0.06$, $\lambda_2 = 0.01$ \\ dim = 10, $\alpha = 4$\end{tabular} & \begin{tabular}{@{}c@{}}$\lambda_1 = 0.06$, $\lambda_2 = 0.01$ \\ dim = 12, $\alpha = 8$\end{tabular}\\
$S^3C$ & $\gamma = 1$, $\alpha = 20$ & $\gamma = 1$, $\alpha = 100$ & $\gamma = 1$, $\alpha = 20$\\
LRSC & $\tau = 0.045$,$\alpha=10^5$ & $\tau = 0.07$,$\alpha=0.1$ & $\tau = 0.045$,$\alpha=0.07$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
\end{center}
\begin{center}
\begin{table*}[!htbp]
\begin{center}
\caption{Average performance on the Extended Yale B data set with different numbers of subjects. The best performance is indicated in bold.}
\label{tabyale}
\begin{tabular}{|cccccccc|}
\hline
Algorithm & \multicolumn{1}{|c|}{LG-SSC} & \multicolumn{1}{|c|}{MG-SSC} & \multicolumn{1}{|c|}{SSC} & \multicolumn{1}{|c|}{LRR} & \multicolumn{1}{|c|}{EDSC} & \multicolumn{1}{|c|}{$S^3C$} &\multicolumn{1}{|c|}{LRSC} \\
\hline
\multicolumn{1}{c}{2 subjects} & &&&& \multicolumn{1}{c}{}\\
\hline
ACC & \multicolumn{1}{|c|}{\textbf{99.92}} & \multicolumn{1}{|c|}{99.91} & \multicolumn{1}{|c|}{98.14} & \multicolumn{1}{|c|}{89.69} & \multicolumn{1}{|c|}{97.35$^*$} &\multicolumn{1}{|c|}{99.48$^*$} &\multicolumn{1}{|c|}{96.23} \\
NMI & \multicolumn{1}{|c|}{\textbf{99.54}} & \multicolumn{1}{|c|}{99.38} & \multicolumn{1}{|c|}{93.16} & \multicolumn{1}{|c|}{66.69} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{82.05}\\
ARI & \multicolumn{1}{|c|}{\textbf{99.70}} & \multicolumn{1}{|c|}{99.66} & \multicolumn{1}{|c|}{94.29} & \multicolumn{1}{|c|}{68.13} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-}&\multicolumn{1}{|c|}{86.23}\\
\hline
\multicolumn{1}{c}{3 subjects} & &&&& \multicolumn{1}{c}{}\\
\hline
ACC & \multicolumn{1}{|c|}{99.42} & \multicolumn{1}{|c|}{\textbf{99.87}} & \multicolumn{1}{|c|}{96.70} & \multicolumn{1}{|c|}{79.09} & \multicolumn{1}{|c|}{96.35$^*$} &\multicolumn{1}{|c|}{99.11$^*$} &\multicolumn{1}{|c|}{93.55} \\
NMI & \multicolumn{1}{|c|}{99.04} & \multicolumn{1}{|c|}{\textbf{99.43}} & \multicolumn{1}{|c|}{92.75} & \multicolumn{1}{|c|}{59.61} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{81.06}\\
ARI & \multicolumn{1}{|c|}{98.99} & \multicolumn{1}{|c|}{\textbf{99.62}} & \multicolumn{1}{|c|}{92.61} & \multicolumn{1}{|c|}{53.22} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-}&\multicolumn{1}{|c|}{82.61}\\
\hline
\multicolumn{1}{c}{5 subjects} & &&&& \multicolumn{1}{c}{}\\
\hline
ACC & \multicolumn{1}{|c|}{99.35} & \multicolumn{1}{|c|}{\textbf{99.78}} & \multicolumn{1}{|c|}{95.68} & \multicolumn{1}{|c|}{65.46} & \multicolumn{1}{|c|}{94.89$^*$} &\multicolumn{1}{|c|}{98.49$^*$} &\multicolumn{1}{|c|}{90.46} \\
NMI & \multicolumn{1}{|c|}{99.02} & \multicolumn{1}{|c|}{\textbf{99.32}} & \multicolumn{1}{|c|}{91.56} & \multicolumn{1}{|c|}{54.53} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{80.74}\\
ARI & \multicolumn{1}{|c|}{98.85} & \multicolumn{1}{|c|}{\textbf{99.46}} & \multicolumn{1}{|c|}{90.17} & \multicolumn{1}{|c|}{39.07} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-}&\multicolumn{1}{|c|}{78.99}\\
\hline
\multicolumn{1}{c}{8 subjects} & &&&& \multicolumn{1}{c}{}\\
\hline
ACC & \multicolumn{1}{|c|}{99.41} & \multicolumn{1}{|c|}{\textbf{99.72}} & \multicolumn{1}{|c|}{94.13} & \multicolumn{1}{|c|}{59.02} & \multicolumn{1}{|c|}{93.93$^*$} &\multicolumn{1}{|c|}{97.69$^*$} &\multicolumn{1}{|c|}{76.36} \\
NMI & \multicolumn{1}{|c|}{98.54} & \multicolumn{1}{|c|}{\textbf{99.35}} & \multicolumn{1}{|c|}{90.58} & \multicolumn{1}{|c|}{56.34} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{70.71}\\
ARI & \multicolumn{1}{|c|}{98.65} & \multicolumn{1}{|c|}{\textbf{99.38}} & \multicolumn{1}{|c|}{86.44} & \multicolumn{1}{|c|}{36.27} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-}&\multicolumn{1}{|c|}{59.15}\\
\hline
\multicolumn{1}{c}{10 subjects} & &&&& \multicolumn{1}{c}{}\\
\hline
ACC & \multicolumn{1}{|c|}{\textbf{99.68}} & \multicolumn{1}{|c|}{\textbf{99.68}} & \multicolumn{1}{|c|}{92.60} & \multicolumn{1}{|c|}{60.42} & \multicolumn{1}{|c|}{92.76$^*$} &\multicolumn{1}{|c|}{97.19$^*$} &\multicolumn{1}{|c|}{66.56} \\
NMI & \multicolumn{1}{|c|}{\textbf{99.33}} & \multicolumn{1}{|c|}{\textbf{99.33}} & \multicolumn{1}{|c|}{89.37} & \multicolumn{1}{|c|}{59.79} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{66.27}\\
ARI & \multicolumn{1}{|c|}{\textbf{99.31}} & \multicolumn{1}{|c|}{\textbf{99.31}} & \multicolumn{1}{|c|}{82.72} & \multicolumn{1}{|c|}{38.13} & \multicolumn{1}{|c|}{-} &\multicolumn{1}{|c|}{-}&\multicolumn{1}{|c|}{49.23}\\
\hline
\end{tabular}
\end{center}
\end{table*}
\end{center}
\begin{center}
\begin{table*}[!htbp]
\begin{center}
\caption{Performance on the Extended Yale B data set with different numbers of subjects. The best performance is indicated in bold.}
\label{tabyale2}
\begin{tabular}{c|c||ccccccc}
\hline
\#subjects & Metric & LG-SSC & MG-SSC & SSC & LRR & EDSC & S$^3$C & LRSC \\
\hline
15 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}99.47 \\ 99.10 \\ 98.87\end{tabular}&\begin{tabular}{@{}c@{}c@{}} \textbf{100} \\ \textbf{100} \\ \textbf{100}\end{tabular}&\begin{tabular}{@{}c@{}c@{}}78.81 \\ 79.14 \\ 60.89\end{tabular} &\begin{tabular}{@{}c@{}c@{}}64.30 \\ 66.18 \\ 41.18\end{tabular} &\begin{tabular}{@{}c@{}c@{}}86.44 \\ 88.97 \\ 80.13 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}88.24 \\ 91.25 \\ 84.94 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}68.75 \\ 71.57 \\ 51.65\end{tabular}\\
\hline
20 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{98.73} \\ \textbf{98.05} \\ \textbf{97.31}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 98.65\\ 98.02 \\ 97.04\end{tabular}&\begin{tabular}{@{}c@{}c@{}}73.61 \\ 76.67 \\ 54.50\end{tabular} &\begin{tabular}{@{}c@{}c@{}}68.07 \\ 70.68 \\ 42.99 \end{tabular} &\begin{tabular}{@{}c@{}c@{}}88.51 \\ 90.79 \\ 81.98 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}85.73 \\ 91.15 \\ 82.79 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 71.08 \\ 75.49 \\ 52.90\end{tabular}\\
\hline
30 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{98.69} \\ \textbf{98.09} \\ \textbf{97.27}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 92.43\\ 94.57 \\ 88.35\end{tabular}&\begin{tabular}{@{}c@{}c@{}}74.66 \\ 77.69 \\ 51.20\end{tabular} &\begin{tabular}{@{}c@{}c@{}} 71.50 \\ 75.35 \\ 43.90\end{tabular} &\begin{tabular}{@{}c@{}c@{}}87.22 \\ 91.22 \\ 79.46 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 84.91\\ 90.48 \\ 80.63 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 71.24 \\ 75.19 \\ 52.90\end{tabular}\\
\hline
38 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{93.37}\\\textbf{ 94.91} \\ \textbf{86.03}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 90.27\\ 91.47 \\ 78.99\end{tabular}&\begin{tabular}{@{}c@{}c@{}}70.67 \\ 75.44 \\ 40.52\end{tabular} &\begin{tabular}{@{}c@{}c@{}}66.28 \\ 72.19 \\ 45.99\end{tabular} &\begin{tabular}{@{}c@{}c@{}}85.29 \\ 90.08 \\ 72.67 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}78.71 \\ 86.78 \\ 68.16 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}70.17 \\ 75.19 \\ 52.46\end{tabular}\\
\hline
\end{tabular}
\end{center}
\end{table*}
\end{center}
\subsection{AR face data set}
The AR database~\cite{martinez1998ar} contains frontal face images of 100 individuals (50 men and 50 women). There are 26 color pictures collected for each person. The images include facial variations such as illumination changes, different expressions and facial disguises using sunglasses and scarves. Compared to Extended Yale B, this database is more challenging because of the occlusions and the smaller number of images per individual. We downsampled each image to $48 \times 42$ pixels and converted it to grayscale. We set $p=4$ and $s=3$. The performance of LG-SSC with respect to different numbers of clusters is compared with that of MG-SSC, SSC, LRR, LRSC, EDSC and $S^3C$ in Table~\ref{tabar}. We observe that:
\begin{itemize}
\item The performance of almost all approaches (except LRR, MG-SSC and LG-SSC) degrades compared to the Extended Yale B database. This is expected, as the AR database is more challenging and involves a larger number of clusters.
\item LG-SSC outperforms the other approaches in all cases by a large margin. The robustness of LG-SSC is clearly evident for this database: the patch-based representations guide the global self-expressive representation toward a more robust clustering segmentation.
\item LG-SSC consistently performs better than MG-SSC, which further highlights the efficiency of LG-SSC in combining the information of local patches with the computation of a robust global self-expressive representation.
\item The occlusions degrade the performance of SSC, EDSC, $S^3C$ and LRSC even in the simplest case of 5 clusters. This shows the sensitivity of these approaches to occlusions and contiguous corruptions.
\item LRSC attempts to recover a clean dictionary by optimizing a nonconvex problem; however, this approach assumes that the data are contaminated by sparse errors, an assumption that is clearly violated for this database.
\item The post-processing step of EDSC cannot improve the performance in this case. This is due to the severely corrupted global representation that makes the \emph{correction} difficult (if not impossible).
\item The third best performance is achieved by LRR. The LRR approach can be considered the extension of RPCA~\cite{candes2011robust} to a union of subspaces. The dense representations induced by the nuclear norm appear to be more suitable than sparse representations for data with complex noise structures.
\end{itemize}
For better visualization and comparison, the coefficient matrices obtained by each algorithm for the first 5 individuals are plotted in Figure~\ref{AR_C}. The block-diagonal structure of LG-SSC's coefficient matrix is clearly evident. The two major components for the success of a subspace clustering algorithm, namely (i) subspace-preserving connections and (ii) strong connectivity within each subspace, are both present for LG-SSC. In contrast, the coefficient matrices of the other approaches are contaminated by illumination effects and disguises, which also affects the clustering performance. Note that MG-SSC does not output a final coefficient matrix, hence the comparison is done with the other 5 approaches.
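The coefficient matrices in Figure~\ref{AR_C} are turned into cluster labels by the spectral clustering step that all self-expressive methods share. A minimal sketch of this generic step (assuming scikit-learn; it omits any method-specific post-processing such as that of EDSC, and the function name is ours) is:
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering

def labels_from_coefficients(C, n_clusters):
    """Symmetrize a self-expressive coefficient matrix and cluster its affinity graph."""
    W = 0.5 * (np.abs(C) + np.abs(C).T)     # affinity from |C|
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                               assign_labels="kmeans", random_state=0)
    return model.fit_predict(W)
\end{verbatim}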
\begin{figure*}[htb]
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/ssc-ar-5}}
\centerline{(a) SSC}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/lrr-ar-5}}
\centerline{(b) LRR}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/edsc-ar-5}}
\centerline{(c) EDSC}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/s3c-ar-5}}
\centerline{(d) $S^3C$}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/lrsc-ar-5}}
\centerline{(e) LRSC}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/lg-ssc-ar-5}}
\centerline{(f) LG-SSC}\medskip
\end{minipage}
\caption{Coefficient matrices of (a) SSC, (b) LRR, (c) EDSC, (d) $S^3C$, (e) LRSC and (f) LG-SSC for the first 5 individuals of AR database.}
\label{AR_C}
\end{figure*}
\begin{center}
\begin{table*}[!htbp]
\begin{center}
\caption{Performance on the AR data set with different numbers of subjects. The best accuracy is indicated in bold.}
\label{tabar}
\begin{tabular}{c|c||ccccccc}
\hline
\#subjects & Metric & LG-SSC & MG-SSC & SSC & LRR & EDSC & S$^3$C & LRSC \\
\hline
5 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{100} \\ \textbf{100} \\ \textbf{100}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} \textbf{100} \\ \textbf{100} \\\textbf{100} \end{tabular}&\begin{tabular}{@{}c@{}c@{}}76.92 \\ 62.79 \\ 48.32\end{tabular} &\begin{tabular}{@{}c@{}c@{}}82.31 \\ 71.57 \\ 62.33\end{tabular} &\begin{tabular}{@{}c@{}c@{}} 75.38 \\ 64.74 \\ 56.17 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}76.92 \\ 64.04 \\ 48.55 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}63.08 \\ 47.87 \\ 35.55\end{tabular}\\
\hline
10 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{100} \\ \textbf{100} \\ \textbf{100}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} \textbf{100} \\ \textbf{100} \\\textbf{100} \end{tabular}&\begin{tabular}{@{}c@{}c@{}}66.54 \\ 65.34 \\ 41.97\end{tabular} &\begin{tabular}{@{}c@{}c@{}}68.08 \\ 72.27 \\ 55.13\end{tabular} &\begin{tabular}{@{}c@{}c@{}} 73.46 \\ 81.29 \\ 65.89 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}71.15\\ 73.19 \\ 50.13 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}67.69 \\ 68.64 \\ 52.11\end{tabular}\\
\hline
20 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{100} \\ \textbf{100} \\ \textbf{100}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 90.00 \\ 92.82 \\86.38 \end{tabular}&\begin{tabular}{@{}c@{}c@{}}59.42 \\ 68.86 \\ 40.41\end{tabular} &\begin{tabular}{@{}c@{}c@{}}80.58 \\ 86.14 \\ 73.89\end{tabular} &\begin{tabular}{@{}c@{}c@{}} 66.16 \\ 75.61 \\ 51.78 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}60.00\\ 69.25 \\ 37.84 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}72.69 \\ 77.42 \\ 55.68\end{tabular}\\
\hline
50 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{96.15} \\ \textbf{97.71} \\ \textbf{94.28}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 86.15 \\ 90.33 \\ 75.09 \end{tabular}&\begin{tabular}{@{}c@{}c@{}}67.31 \\ 78.21 \\ 47.21\end{tabular} &\begin{tabular}{@{}c@{}c@{}}87.31 \\ 91.79 \\ 75.85\end{tabular} &\begin{tabular}{@{}c@{}c@{}} 65.76 \\ 79.95 \\ 51.56 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 58.08\\ 72.90 \\ 31.95 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 69.15 \\ 79.76 \\ 56.30\end{tabular}\\
\hline
75 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{94.97} \\ \textbf{96.93} \\ \textbf{92.69}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 87.08 \\ 91.23 \\ 70.93 \end{tabular}&\begin{tabular}{@{}c@{}c@{}}67.64 \\ 82.11 \\ 53.34\end{tabular} &\begin{tabular}{@{}c@{}c@{}}84.26 \\ 91.39 \\ 75.85\end{tabular} &\begin{tabular}{@{}c@{}c@{}} 67.69 \\ 83.69 \\ 55.32 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 61.49\\ 78.17 \\ 38.81 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 69.49 \\ 81.55 \\ 59.13 \end{tabular}\\
\hline
100 & \begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{90.00} \\ \textbf{93.74} \\ \textbf{83.06}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 83.27 \\ 90.98 \\ 73.02 \end{tabular}&\begin{tabular}{@{}c@{}c@{}}68.15 \\ 82.87 \\ 52.52\end{tabular} &\begin{tabular}{@{}c@{}c@{}}79.92 \\ 90.01 \\ 71.06\end{tabular} &\begin{tabular}{@{}c@{}c@{}} 67.54 \\ 82.77 \\ 52.29 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 60.54\\ 79.89 \\ 43.09 \end{tabular} & \begin{tabular}{@{}c@{}c@{}} 67.23 \\ 82.34 \\ 57.25 \end{tabular}\\
\hline
\end{tabular}
\end{center}
\end{table*}
\end{center}
\subsection{Coil-20 data set}
The Columbia Object Image Library (COIL-20)~\cite{nene1996columbia} contains 1440 gray-scale images of 20 objects in different poses. The images of the objects were taken by placing the objects on a turntable against a black background. There are 72 images per object, each of size $128\times128$ pixels. We downsampled each image to $32 \times 32$ pixels. Even though this database is clean, with no noise or occlusions, it is still interesting to observe the performance of LG-SSC when dealing with data from clean subspaces. We divided each image into 9 overlapping patches and set $s=2$; this choice reflects the fact that, for images of objects, the patches closer to the center of the image contain more meaningful information than the other patches. The performance of our proposed LG-SSC is compared with the other subspace clustering algorithms in Table~\ref{tabcoil}. We can conclude that:
\begin{itemize}
\item Even though MG-SSC fails to increase the accuracy in dealing with clean images of objects, LG-SSC is able to increase the clustering accuracy by more than 10\%. This suggests that locally guided self-expressiveness might improve the quality of clustering in the challenging case of close subspaces.
\item EDSC enjoys the benefits of post-processing the coefficient matrix and has the second best performance.
\item Sparse-based approaches (SSC and $S^3C$) perform better than the low-rank based algorithms (LRR and LRSC). In general, sparse-based approaches have stronger theoretical guarantees than their low-rank based alternatives and are hence usually expected to perform better on clean data sets.
\end{itemize}
\begin{center}
\begin{table*}[!htbp]
\begin{center}
\caption{Performance on the COIL-20 data set with different number of clusters. The best accuracy is indicated in bold.}
\label{tabcoil}
\begin{tabular}{c||ccccccc}
\hline
Metric & LG-SSC & MG-SSC & SSC & LRR & EDSC & S$^3$C & LRSC \\
\hline
\begin{tabular}{@{}c@{}c@{}}ACC \\ NMI \\ ARI\end{tabular} & \begin{tabular}{@{}c@{}c@{}}\textbf{89.58} \\ \textbf{95.34} \\ \textbf{85.53}\end{tabular}&\begin{tabular}{@{}c@{}c@{}} 78.26 \\ 87.92 \\ 72.30\end{tabular}&\begin{tabular}{@{}c@{}c@{}}78.68 \\ 90.39 \\74.89 \end{tabular} &\begin{tabular}{@{}c@{}c@{}}54.86 \\ 70.03 \\ 42.19 \end{tabular} &\begin{tabular}{@{}c@{}c@{}} 84.51 \\ 93.52 \\ 81.54 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}74.86 \\ 88.28 \\ 66.88 \end{tabular} & \begin{tabular}{@{}c@{}c@{}}64.09 \\ 72.29 \\ 52.22\end{tabular}\\
\hline
\end{tabular}
\end{center}
\end{table*}
\end{center}
\subsection{Parameter Analysis}
In LG-SSC, there are three parameters in the optimization problem~(\ref{final}), which control the trade-off between four qualities: (i) sparsity, (ii) ignoring the cannot-links, (iii) respecting the recommended-links and (iv) the self-expressive reconstruction error. Following the methodology in~\cite{elhamifar2013sparse}, we set the regularization parameter $\mu$ to $\frac{\alpha}{\max_{j\neq i} |x_i^Tx_j|}$, where $\alpha >1$ is tuned for each dataset. In this paper, we set this parameter to the values that are commonly used and reported in the subspace clustering literature.
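Concretely, with the data points stored as the columns of $X$, this rule for $\mu$ can be written as in the following sketch (variable and function names are ours):
\begin{verbatim}
import numpy as np

def mu_from_alpha(X, alpha):
    """mu = alpha / max_{j != i} |x_i^T x_j|, with data points as the columns of X."""
    G = np.abs(X.T @ X)
    np.fill_diagonal(G, 0.0)        # exclude the i = j terms
    return alpha / G.max()
\end{verbatim}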
The behavior of LG-SSC with respect to $\lambda_1$ and $\lambda_2$ is empirically validated on all three databases (AR, Extended Yale B and Coil-20). We consider the clustering performance for the first 10 subjects of AR and Yale B and the 20 objects of Coil-20. The clustering accuracy with respect to different values of these parameters is illustrated for each database in Figure~\ref{param}. It can be seen that for values of $\lambda_1 \in [2:10]$ and $\lambda_2 \in [0.5:2]$, the accuracy is quite stable in all three cases. In particular, Yale B is the least sensitive to the values of $\lambda_1$ and $\lambda_2$, and Coil-20 is the most sensitive database. For the Yale B database, as long as $\lambda_1$ and $\lambda_2$ are not too small, the accuracy is almost 100\%. Interestingly, by setting $\lambda_1 \in [2:20]$ and $\lambda_2 = 0$, the accuracy is still 100\%. This suggests that for this database, the ``cannot-links'' information is more important than the ``recommended-links'' information. However, for Coil-20, the recommended-links information plays an important role in boosting the accuracy of the basic SSC (which is around 78.68\%).
We also evaluate the effect of the patch sizes ($p$) and the number of levels ($s$) on the clustering accuracy. We consider $s \in \{2,3,4\}$ and $p \in \{2,3,4\}$. The performance of LG-SSC for different values of $s$ and $p$ for the first 10 subjects of Yale B and AR and the 20 objects of Coil-20 is reported in Table~\ref{tabblocklevel}. For the AR database, the clustering accuracy is not affected by the patch size as long as $s=3$. This is because for $s=2, p=2$ and $s=2, p=3$, the patches at the coarse level are not robust themselves and contain occluded parts of the image; hence, the robustness is not transferred to the fine scale. This is confirmed by the case $s=2$ and $p=4$: here the accuracy is 100\% because the patches in the second level are small enough to contain robust discriminant information. By increasing the number of levels and patches to 4 ($s=4$ and $p=4$), the accuracy decreases significantly, to 22.31\%. In this case, the patches at the last level are very small and hence neither robust nor discriminant information can be fed into the upper levels. For Yale B, which is relatively less challenging than AR, the accuracy is almost 100\% in all cases except for $s=4$ and $p=4$, where the patches become very small. For this database, $s=2$ is already sufficient to increase robustness to illumination variations. Interestingly, the best accuracy for the Coil-20 database is achieved only for $s=2$ and $p=2$. Note that in this case each image is $32 \times 32$ pixels and hence we do not consider $s=4$. For object clustering, the edges play a critical role; hence, the patches should be chosen such that they contain enough edge information for accurate clustering.
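For concreteness, one level of the patch decomposition can be sketched as follows (a simplified illustration for a $2\times2$ grid of overlapping patches, i.e.\ $p=4$; the amount of overlap and the function name are our choices, not necessarily those of the actual implementation):
\begin{verbatim}
import numpy as np

def split_into_patches(img, overlap=4):
    """Split an image into a 2x2 grid of overlapping patches (p = 4)."""
    h, w = img.shape
    hh, hw = h // 2, w // 2
    return [img[:hh + overlap, :hw + overlap],        # top-left
            img[:hh + overlap, hw - overlap:],        # top-right
            img[hh - overlap:, :hw + overlap],        # bottom-left
            img[hh - overlap:, hw - overlap:]]        # bottom-right
\end{verbatim}
A second level ($s=2$) is obtained by applying the same split to each of the four patches.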
\begin{figure*}[htb]
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/AR_10s_parameters}}
\centerline{(a) AR database}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/YALE_10s_parameters}}
\centerline{(b) YALE B database}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\linewidth}
\centering
\centerline{\includegraphics[width=6cm]{images/COIL_20s_parameters}}
\centerline{(c) COIL-20 database}\medskip
\end{minipage}
\caption{Effect of $\lambda_1$ and $\lambda_2$ on the accuracy of LG-SSC for (a) AR database, (b) Extended Yale B database and (c) COIL-20 database}
\label{param}
\end{figure*}
\begin{center}
\begin{table*}[!htbp]
\begin{center}
\caption{Accuracy of LG-SSC for different values of the number of levels ($s$) and the number of blocks at each level ($p$).}
\label{tabblocklevel}
\small\addtolength{\tabcolsep}{-1pt}
\begin{tabular}{c||ccc|ccc|ccc}
\hline
\multicolumn{1}{c||}{} & \multicolumn{3}{c|}{$2 \times 2$} & \multicolumn{3}{c|}{$3 \times 3$}& \multicolumn{3}{c}{$4 \times 4$} \\
& s=2 & s=3& s=4 & s=2 & s=3 & s=4 & s=2 & s=3& s=4\\
\hline
AR (10) & 60.38 & 100 & 99.61 & 86.54 & 100 & 99.62 & 100 & 100 & 22.31\\
YALE B (10) & 100 & 100 & 99.84 & 99.84 & 100 & 99.68 & 99.68 & 100 & 18.13\\
COIL-20 & 89.44 & 77.84 & - & 77.22 & 74.16 & - &74.37 & 72.5 & -\\
\hline
\end{tabular}
\end{center}
\end{table*}
\end{center}
\subsection{Neither Global Nor Local}
In this section, we discuss the role of the multi-layer graph fusion approach and emphasize that, in general, neither the individual local patches nor the global data alone lead to a robust discriminant representation. However, merging the local representations using their low-dimensional embedding on the Grassmann manifold can provide a summary representation which highlights the information that the majority of local representations agree on. The clustering accuracy of each individual local patch for the three databases (all samples of each database are considered) is plotted in Figure~\ref{local2}. For the AR dataset, the two cases $p=4$ and $p=16$ are considered. As can be seen, none of the local patches reaches an accuracy higher than 65\%, but applying k-means to the summarized low-dimensional embedding of these 16 patches leads to an accuracy near 85\% (the first column from the right). LG-SSC further boosts this robustness to near 90\%. When $p=4$, the same observation as in Table~\ref{tabblocklevel} is repeated: not only does none of the local patches reach an accuracy higher than 65\%, but the merged information of these 4 local patches also does not boost the performance significantly. As mentioned previously, this is because none of these patches has a robust representation. In Yale B, the coefficient matrix corresponding to the 4th patch has the highest clustering accuracy, and LG-SSC improves on this accuracy without being affected by the other patches, e.g.\ the 1st patch.
For the Coil-20 database, not only do the local patches not lead to high clustering accuracy, but the merged low-dimensional embedding also does not increase the clustering accuracy significantly. However, LG-SSC still increases the clustering accuracy. This is due to the fact that the merged information \emph{induces} the global sparse self-expressive representation, and for this database the local representations provide sufficient information to avoid mis-clustering of closely related objects.
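A rough sketch of the merging step discussed above is the following: each local affinity matrix is reduced to a spectral embedding, and a common embedding is obtained by averaging the corresponding projection matrices, which is one standard surrogate for a mean on the Grassmann manifold (the actual fusion used by LG-SSC may differ in its details; the helper names are ours):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def merged_embedding_labels(affinities, n_clusters):
    """Fuse local affinity matrices through their spectral embeddings and cluster the result."""
    n = affinities[0].shape[0]
    P = np.zeros((n, n))
    for W in affinities:
        d = W.sum(axis=1) + 1e-12
        L = np.eye(n) - W / np.sqrt(np.outer(d, d))   # normalized graph Laplacian
        _, vecs = np.linalg.eigh(L)                   # eigenvalues in ascending order
        U = vecs[:, :n_clusters]                      # local spectral embedding
        P += U @ U.T                                  # projection onto the local embedding
    _, vecs = np.linalg.eigh(P)
    V = vecs[:, -n_clusters:]                         # dominant common subspace
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(V)
\end{verbatim}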
\begin{figure*}[htb]
\begin{minipage}[b]{0.2\linewidth}
\centering
\centerline{\includegraphics[width=5cm]{images/AR_Local_vs_global}}
\centerline{(a) AR (16 patches)}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.2\linewidth}
\centering
\centerline{\includegraphics[width=5cm]{images/AR_Local_vs_global_2}}
\centerline{(b) AR (4 patches)}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.2\linewidth}
\centering
\centerline{\includegraphics[width=5cm]{images/YALE_Local_vs_global}}
\centerline{(c) Yale B database}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.2\linewidth}
\centering
\centerline{\includegraphics[width=5cm]{images/COIL_Local_vs_global}}
\centerline{(d) COIL-20 database}\medskip
\end{minipage}
\caption{Clustering accuracy of individual local patches and the corresponding merged low-dimensional embedding on (a) AR (dividing to 16 patches), (b) AR (dividing to 4 patches), (c) Extended Yale B and (d) Coil-20.}
\label{local2}
\end{figure*}
\section {Conclusion} \label{conclusion}
In this paper, we uncovered the importance of local representations in improving the robustness of self-expressive based subspace clustering approaches. The proposed hierarchical approach bridges the gap between robust local representations and their discriminant global alternative in order to obtain a robust, discriminant self-expressive representation of the input data. The approach consists of two key ingredients: 1) efficiently summarizing local representations using a low-rank embedding on a Grassmann manifold to obtain the cannot-links and recommended-links on which the local patches agree; and 2) employing this summarized information in the optimization problem for computing the self-expressive representation at each level via a weighted group lasso regularization. The robustness of the proposed approach to occlusion and complex noise was confirmed by the experimental results.
\small
\bibliographystyle{spmpsci}
\section{Introduction}
A systematic and coherent treatment of mass effects in collider observables (such as event shapes) is mandatory to match
the expected accuracy of upcoming experimental data at the LHC and future linear colliders. This encompasses the production
of top quarks and other massive colored particles (yet to be discovered) at very high energies in hadron-hadron
and lepton-lepton machines, bottom quark production at lower energies (e.g.\ LEP, TASSO, etc\ldots), and also charm production
in lepton-hadron scattering experiments.
Concerning initial state mass effects in inclusive cross sections at hadron colliders, a first description
for arbitrary mass scales has been provided by Aivazis, Collins, Olness and Tung (ACOT) in two seminal
papers~\cite{Aivazis:1993kh,Aivazis:1993pi}. Their results are based on using different renormalization prescriptions
depending on the relation between the quark mass and the hard momentum transfer scale, and lay the foundations for existing
variable flavor number schemes (VFNS). This concept can be incorporated in a convenient way into effective field theories,
as was shown in Ref.~\cite{Pietrulewicz:2014qza,Hoang:2014ira} in the Soft-Collinear Effective Theory (SCET)~\cite{Bauer:2000ew,Bauer:2000yr}
framework.
Based on Refs.~\cite{Gritschacher:2013pha,Pietrulewicz:2014qza} we present here a VFNS applied to
final state jets. We address the description of the production of heavy quarks through the secondary radiation off a virtual
gluon. In contrast to the case of primary production of heavy quarks, in which these are created already in the hard
interaction~\cite{Fleming:2007qr,Fleming:2007xt}, we thus consider here the situation, where massless quarks are
primarily produced in the $e^+e^-$ collision, and one of the subsequent radiated gluons splits into a heavy quark-antiquark
pair before hadronization takes place. The approach we describe here continuously interpolates between the very large mass
limit, where one achieves decoupling of the heavy quark, and the very small mass limit, where one approaches the known
massless quarks results, without upsetting the overall parametric precision of the corresponding massless factorization
theorem. Secondary mass effects are naturally small, since they first arise at $\mathcal{O}(\alpha_s^2)$. However, they are
conceptually valuable, since they clarify a number of nontrivial issues concerning the construction of VFNSs which are also relevant for other applications where mass effects are more prominent.
The case of secondary heavy quark radiation is also specific since there is no kinematical situation for which the production of the massive quarks needs to be treated in the framework of boosted Heavy Quark Effective Theory (bHQET)~\cite{Pietrulewicz:2014qza}.
We will deal with the thrust event-shape distribution in $e^+e^-$ collisions as a specific example,
emphasizing that the factorization setup can be easily adapted to other event shapes, and with some more effort to
the more complicated environment of hadron collisions. We define thrust as:
\begin{equation}\label{eq:thrust-def}
\tau \,=\, 1-T \,=\,
\min_{\vec{n}} \left( 1 -\, \frac{ \sum_i|\vec{n} \cdot \vec{p}_i|}{\sum_j E_j} \right) =
\min_{\vec{n}} \left( 1 -\, \frac{ \sum_i|\vec{n} \cdot \vec{p}_i|}{Q} \right),
\end{equation}
where $\vec{n}$ is referred to as the thrust axis. Eq.~(\ref{eq:thrust-def}) reduces to the familiar definition of thrust
\cite{Farhi:1977sg} for massless particles, which is, however, not convenient for analytic computations within the
factorization approach described below.
The definition in Eq.~(\ref{eq:thrust-def}) maximizes the effect of heavy particles making it more suitable for a
heavy-quark mass determination~\cite{Fleming:2007qr,Fleming:2007xt}.
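As an illustration, the observable in Eq.~(\ref{eq:thrust-def}) can be evaluated exactly for low-multiplicity final states by using the fact that the maximizing axis is aligned with the total three-momentum of the particles in one hemisphere, so that it suffices to scan all hemisphere assignments. A simple brute-force sketch (in Python; adequate only for small particle numbers, and the function name is ours) reads:
\begin{verbatim}
import itertools
import numpy as np

def thrust_tau(momenta, energies):
    """Evaluate tau = 1 - T of Eq. (1) by brute force (exponential in the number of particles)."""
    momenta = np.asarray(momenta, dtype=float)   # shape (n, 3): three-momenta
    Q = float(np.sum(energies))                  # total energy, the normalization in Eq. (1)
    best = 0.0
    # The maximizing axis is aligned with the summed momentum of one hemisphere,
    # so scanning all assignments of particles to hemispheres is exact.
    for signs in itertools.product([1.0, -1.0], repeat=len(momenta)):
        axis = (np.array(signs)[:, None] * momenta).sum(axis=0)
        norm = np.linalg.norm(axis)
        if norm > 0.0:
            best = max(best, np.abs(momenta @ (axis / norm)).sum())
    return 1.0 - best / Q
\end{verbatim}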
The dijet limit is enforced by small values of $\tau$ (which can be thought of as a veto on a third jet), and in this
situation the final state consists of two narrow back-to-back jets plus soft radiation at large angles. One can
identify three physical scales, with which we associate corresponding renormalization scales:
the hard scale $\mu_H\sim Q$ (scale of the short distance interaction), the jet scale $\mu_J\sim Q\,\lambda$ (typical
transverse momentum of a jet) and the soft scale $\mu_S\sim Q \lambda^2$ (energy of soft particles).
The effective field theory correspondingly has $n$-, ${\bar n}$-collinear and ultra-soft modes with momenta (in light-cone
coordinates) \mbox{$p^\mu_n\sim Q\,(\lambda^2,1,\lambda)$}, \mbox{$p^\mu_{\bar n}\sim Q\,(1,\lambda^2,\lambda)$} and
$p_{\rm us}^\mu \sim Q(\lambda^2,\lambda^2,\lambda^2)$, respectively. In this limit the dominant part of the cross section
(which we denote by singular cross section) factorizes into a hard matching coefficient and the convolution of a jet and
soft functions~\cite{Fleming:2007qr,Fleming:2007xt,Schwartz:2007ib,Bauer:2008dt}.
When dealing with secondary production of a pair of quarks with mass $m$ one needs to introduce additional \textit{mass modes}
that have an intrinsic fluctuation scale related to the parameter $\lambda_m\equiv m/Q$ associated with their mass $m$,
in addition to the massless power counting parameter $\lambda\sim \tau^{1/2}$. The corresponding $n$-, ${\bar n}$-collinear
and soft mass modes contain mass-shell fluctuations scaling as \mbox{$p^\mu_n\sim Q\,(\lambda_m^2,1,\lambda_m)$},
\mbox{$p^\mu_{\bar n}\sim Q\,(1,\lambda_m^2,\lambda_m)$} and $p_{\rm s}^\mu \sim Q( \lambda_m,\lambda_m,\lambda_m)$,
respectively. (Note that the soft mass modes have typical momenta scaling linearly with $\lambda_m$.)
In particular the soft and collinear
mass-shell fluctuations have the same virtuality, $p_{\bar n}^2\sim p_{n}^2\sim p_{s}^2\sim m^2$, which leads to the
emergence of rapidity logarithms in the threshold corrections at the mass scale.
\section{Factorization theorem}
Since we are dealing with primary production of massless quarks and secondary production of heavy quarks, we can take the
massless factorization theorem for thrust as our starting point.\footnote{A similar factorization theorem for C-parameter
has been recently derived in Ref.~\cite{Hoang:2014wka} and a general theorem can be found in \cite{Bauer:2008dt}. These
factorization theorems can be easily extended to measure the angle between the thrust axis and the beam
direction~\cite{Mateu:2013gya}.} It reads~\cite{Korchemsky:1999kt,Schwartz:2007ib,Fleming:2007qr}
\begin{align}
\!\!\!\!\!\frac{1}{\sigma_0}\frac{\mathrm{d}\sigma}{\mathrm{d}\tau}\,=\, Q \,H^{(n_{\!f})}(Q,\mu_H)\,
U_H^{(n_{\!f})}(Q,\mu_H,\mu_S)\!\!
\int\! \mathrm{d} s\!\!\int \! \mathrm{d} s^\prime\, J^{(n_{\!f})}(s^\prime,\mu_J)\,U^{(n_{\!f})}_J(s-s^\prime,\mu_S,\mu_J)
\,\, S^{(n_{\!f})}\!\Big(Q\,\tau-\frac{s}{Q},\mu_S\Big) \, .
\label{eq:diffsigma0}
\end{align}
Here $\sigma_0$ denotes the tree-level $e^+ e^- \to q\bar{q}$ total cross section and $H$, $J$ and $S$ are the hard,
jet, and soft functions, respectively; $U_H$ and $U_J$ are the RG factors for the hard and jet functions. For
definiteness we have chosen to evolve the hard and jet functions to the soft scale (although this choice is arbitrary),
and therefore there is no evolution of the soft function. The dependence on the number of light quark flavors $n_f$
relevant for the RG evolution is explicitly indicated in each of the terms in Eq.~(\ref{eq:diffsigma0}). It starts
already at LL order through the RG evolution factors, which depend on $\alpha_s^{(n_f)}$. The matrix elements receive an explicit dependence on $n_f$ from $\mathcal{O}(\alpha_s^2)$.
The soft function can be further factorized into a partonic soft function $\hat S^{(n_f)}$, calculable in perturbation
theory, and a nonperturbative shape function $F$. This separation into partonic and hadronic components is usually performed in a
scheme relying on dimensional regularization, which leads to an $\mathcal{O}(\Lambda_{\rm QCD})$ renormalon
in $\hat S^{(n_f)}$. This problem can be eliminated introducing a gap parameter and gap subtractions (which depend on
the renormalization scale $\mu_S$ and the gap-subtraction scale $R$) in the definition of the leading power correction
for the OPE region~\cite{Hoang:2007vb,Hoang:2008fs,Mateu:2012nk}. These also depend on the number of massless flavors $n_f$.
The quark mass adds a complication to the factorization setup since it can a priori adopt any hierarchy with respect
to the hard, collinear or soft scales, which themselves are functions of $\tau$. As will be described below, this
leads to renormalization group factors with a variable number of active quark flavors. Furthermore, when
crossing the quark mass scale in the evolution, threshold corrections arise that are related to virtual fluctuations
in the hard, collinear and soft sectors as well as to the gap subtractions, in analogy to the threshold
corrections known in the evolution of $\alpha_s$ and the parton distribution functions. Finally, mass-dependent fixed-order
corrections in the hard, jet and soft functions also have to be taken into account.
\section{Mass Mode setup and different scenarios}
In this section we summarize the mass mode setup of Ref.~\cite{Gritschacher:2013pha}, which is based on four different
scenarios. Each scenario corresponds to a different hierarchy between the quark mass and the hard, collinear and soft
scales. Depending on the relative sizes of $\lambda_m$ and $\lambda$ one of the four scenarios has to be employed. We
emphasize that we do not require large hierarchies of the mass with respect to the other three scales for the
characterization of these scenarios, since neighboring scenarios merge continuously into each other.
To discuss the various scenarios we consider a generic setup with {\it one} heavy quark with mass $m$ and $n_l$
light quarks. In the following we denote with $\mu_m\sim m$ the scale at which one switches between the schemes which contain either $n_l$ or $n_l+1$ running flavors. The corresponding explicit calculations and results can be found in
Refs.~\cite{Gritschacher:2013pha,Gritschacher:2013tza,Pietrulewicz:2014qza}.
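Schematically, the choice among the four scenarios described below amounts to comparing the mass with the hard, jet and soft scales. The following sketch (with the canonical choices $\mu_H = Q$, $\mu_J = Q\sqrt{\tau}$, $\mu_S = Q\,\tau$, and ignoring profile functions and any smooth interpolation between neighboring scenarios) merely illustrates the bookkeeping:
\begin{verbatim}
def scenario(Q, tau, m):
    """Return the mass-mode scenario (I-IV) for given hard scale Q, thrust tau and quark mass m."""
    mu_H, mu_J, mu_S = Q, Q * tau**0.5, Q * tau   # canonical hard, jet and soft scales
    if m > mu_H:
        return "I"    # quark integrated out when matching QCD onto SCET
    if m > mu_J:
        return "II"   # virtual mass effects in the hard function
    if m > mu_S:
        return "III"  # real radiation in the jet function
    return "IV"       # real radiation also in the soft function
\end{verbatim}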
\subsection{Scenario I: $\mathbf{m>Q>Q\,\lambda>Q\,\lambda^{\!2}}$}
If $\mu_m > \mu_H$ the massive quark is integrated out already at the level of the matching of QCD onto SCET.
Therefore the massive quark only affects the hard matching
coefficient, but not the jet and soft functions. The factorization theorem is analogous to Eq.~(\ref{eq:diffsigma0}) with
$n_l$ active flavors, except for the hard current matching coefficient which depends on the heavy quark mass through
virtual effects,
\begin{align}
\frac{1}{\sigma_0} \frac{\mathrm{d}\sigma}{\mathrm{d}\tau}= Q\,H^{(n_l)}(Q,m,\mu_H)\,
U^{(n_l)}_H(Q,\mu_H,\mu_S) \int \!\mathrm{d} s\! \int\! \mathrm{d} s'\, J^{(n_l)}(s',\mu_J)\, U^{(n_l)}_J(s-s',\mu_S,\mu_J)\,
S^{(n_l)}\Big(Q\,\tau-\frac{s}{Q},\mu_S\Big).
\label{eq:diffsigmaI}
\end{align}
Here the strong coupling constant in each of the matrix elements or running factors runs with $n_l$ active flavors.
Scenario I shows manifest decoupling in the infinite mass limit:
\begin{align}
\lim_{m \to \infty} H^{(n_l)}(Q,m,\mu) = H^{(n_l)}(Q,\mu) \, .
\end{align}
Hence in this limit the factorization theorem in Eq.~\eqref{eq:diffsigmaI} reduces to the massless one in
Eq.~(\ref{eq:diffsigma0}) with $n_f = n_l$ active flavors. To achieve this desirable property one needs to renormalize
the massive quark bubble contribution to the QCD form factor with the on-shell (OS) subtraction (i.e. with zero-momentum
subtraction) while the $n_l$ massless quark bubble contributions are still renormalized
in the $\overline{\rm MS}$ subtraction as usual. So the massive quark is not an active running flavor.
The factorization formula Eq.~(\ref{eq:diffsigmaI}) is not appropriate for the limit of small $m$ due to large logarithms
that appear in $H^{(n_l)}(Q,m,\mu)$ for $m\to 0$.
\subsection{Scenario II: $\mathbf{Q>m>Q\,\lambda>Q\,\lambda^{\!2}}$}
In this scenario one has $\mu_H > \mu_m > \mu_J>\mu_S$.
Now the mass modes contribute as dynamic degrees of freedom in SCET above $\mu_m$. For the hard matching coefficient,
one needs to sum up large logs that show up when $Q\gg m$, and make sure that the known massless limit can be smoothly
attained for $m/Q\to 0$. The former is achieved by evolving $H$ with $n_l + 1$ active flavors from $\mu_H$ to
the mass scale $\mu_m$, where the heavy quark is integrated out, and further evolving down from $\mu_m$ to $\mu_S$ with
$n_l$ active flavors. Both running factors are mass independent. At the matching scale $\mu_m$ the threshold correction
$\mathcal{M}_H$ has to be included. Mass effects are purely virtual, and their contributions to the jet and soft functions
vanish identically if the OS scheme for $\alpha_s$ and the matrix elements is used. Therefore the jet function (at the scale
$\mu_J$) and the soft function (at the scale $\mu_S$) are identical to the massless case with $n_l$ active flavors. The
factorization theorem reads:
\begin{align}
\frac{1}{\sigma_0}\frac{\mathrm{d}\sigma}{\mathrm{d}\tau} \,=\, & \,Q\, H^{(n_l+1)}(Q,m,\mu_H)\,
U^{(n_l+1)}_{H}(Q,\mu_H,\mu_m) {\mathcal{M}_{H}(Q,m,\mu_m)}\,U^{(n_l)}_{H}(Q,\mu_m,\mu_S) \label{eq:diffsigmaII}\\
& \times \int\! \mathrm{d} s \! \int \!\mathrm{d} s'\,J^{(n_l)}(s',\mu_J) \, U^{(n_l)}_J(s-s',\mu_S,\mu_J)\, S^{(n_l)}\Big(Q\,\tau-\frac{s}{Q},\mu_S\Big).\nonumber
\end{align}
Here $H^{(n_l+1)}$ and $U^{(n_l+1)}_{H}$ depend on $\alpha_s^{(n_l + 1)}$ (i.e. in the ($n_l+1$)-flavor scheme), whereas everything else depends on
$\alpha_s^{(n_l)}$ (i.e. in the ($n_l$)-flavor scheme).\footnote{The threshold correction $\mathcal{M}_{H}$ can be displayed in either the ($n_l$)- or the
($n_l+1$)-flavor scheme.} Note that the difference between $H^{(n_l+1)}(Q,m,\mu_H)$ and $H^{(n_l)}(Q,m,\mu_H)$
is not only the different number of active flavors in $\alpha_s$. In the former there are contributions from non-vanishing
SCET diagrams that take part in the matching computation, and also the $\overline{\rm MS}$ scheme has been used.
The massless limit for $H$ occurs naturally in this renormalization scheme,
\begin{equation}
\lim_{m\to 0} H^{(n_l+1)}(Q,m,\mu_H) = H^{(n_l+1)}(Q,\mu_H)\,,
\end{equation}
where the RHS is the hard function appearing in the massless factorization theorem in Eq.~(\ref{eq:diffsigma0}) with
$n_f = n_l+1$. The decoupling limit for the jet and soft function is trivial. The threshold coefficient
$\mathcal{M}_H$ depends on large rapidity logarithms $\log(m^2/Q^2)$ which have to be considered of order $\alpha_s^{-1}$
in the logarithmic counting. They are known to exponentiate and have been computed explicitly up to three
loops~\cite{Ablinger:2014vwa}, which yields necessary ingredients for an overall N$^3$LL resummation.
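To make the two-step evolution concrete, the following minimal Python sketch (our illustration, not part of the original analysis) runs $\alpha_s$ at one loop with $n_l+1$ active flavors from $\mu_H$ down to the matching scale $\mu_m$ and with $n_l$ flavors from $\mu_m$ down to $\mu_S$; at this order the threshold matching for $\alpha_s$ is trivial, and all numerical inputs are illustrative placeholders rather than fitted values.
\begin{verbatim}
import math

def beta0(nf):
    # One-loop QCD beta-function coefficient: b0 = 11 - 2*nf/3.
    return 11.0 - 2.0 * nf / 3.0

def run_alpha_s(alpha0, mu0, mu, nf):
    # One-loop running: 1/alpha(mu) = 1/alpha(mu0) + b0/(2*pi)*ln(mu/mu0).
    return 1.0 / (1.0 / alpha0 + beta0(nf) / (2.0 * math.pi) * math.log(mu / mu0))

# Illustrative placeholder scales in GeV (not inputs of the original work).
n_l = 4
mu_H, mu_m, mu_S = 91.2, 4.8, 2.0
alpha_H = 0.118                      # assumed value of alpha_s^(n_l+1)(mu_H)

# (n_l+1)-flavor running from mu_H down to mu_m, then n_l-flavor running
# from mu_m down to mu_S; alpha_s is continuous at the threshold at this order.
alpha_m = run_alpha_s(alpha_H, mu_H, mu_m, n_l + 1)
alpha_S = run_alpha_s(alpha_m, mu_m, mu_S, n_l)
print(alpha_m, alpha_S)
\end{verbatim}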
\subsection{Scenario III: $\mathbf{Q>Q\,\lambda>m>Q\,\lambda^{\!2}}$}
Here one has $\mu_H>\mu_J>\mu_m>\mu_S$\,, and therefore there is no change in the hard matching coefficient, its running
factors and the corresponding threshold coefficient compared to the previous scenario. There is also no modification in
the soft sector. However, now one has secondary real radiation of the massive quark pair in the jet function. In this
scenario, the use of the OS subtraction for the virtual secondary massive quark loops in the jet function is not appropriate,
since it does not allow a smooth interpolation to the massless limit, which is desirable in this scenario. Instead one has to use
the $\overline{\rm MS}$ prescription. Therefore the jet function is treated analogously to the hard matching coefficient:
it is evolved with $n_l + 1$ active flavors between $\mu_J$ and $\mu_m$, where a threshold correction has to be included,
and $n_l$-evolution is used between $\mu_m$ and $\mu_S$\,:
\begin{align}
\frac{1}{\sigma_0} \frac{\mathrm{d}\sigma}{\mathrm{d}\tau} \,=\, & \, Q\,H^{(n_l+1)}(Q,m,\mu_H)\,
U^{(n_l+1)}_H(Q,\mu_H,\mu_m) {\mathcal{M}_H(Q,m,\mu_m)}\, U^{(n_l)}_H(Q,\mu_m,\mu_S)\!\!
\int\! \mathrm{d} s \!\!\int\!\! \mathrm{d} s' \!\!\!\int \!\mathrm{d} s''\!\!\!\int\! \mathrm{d} s''' J^{(n_l+1)}(s''',m,\mu_J)\nonumber\\
&\times U^{(n_l+1)}_J(s''-s''',\mu_m,\mu_J) \, \mathcal{M}_J(s'-s'',m,\mu_m) \, U^{(n_l)}_J(s-s',\mu_S,\mu_m) \, S^{(n_l)}\Big(Q\,\tau-\frac{s}{Q},\mu_S\Big).
\label{eq:diffsigmaIII}
\end{align}
Here only the soft function and the evolution factors $U^{(n_l)}_H$ and $U^{(n_l)}_J$ depend on $\alpha_s^{(n_l)}$, whereas the remaining elements
depend on $\alpha_s^{(n_l + 1)}$. The jet function satisfies the massless limit
\begin{align}
\lim_{m\to 0} J^{(n_l+1)}(s,m,\mu) = J^{(n_l+1)}(s,\mu)\,,
\end{align}
as already indicated above, and the decoupling limit of the soft function is again trivial. The jet threshold
function $\mathcal{M}_J$ depends on the
large rapidity logarithm $\log(m^2/\mu_J^2)$, which again exponentiates, and its required perturbative expressions for an
overall N$^3$LL resummation are known.
\subsection{Scenario IV: $\mathbf{Q>Q\,\lambda>Q\,\lambda^{\!2}>m}$}
Now $m$ is below any other renormalization scale, and therefore the hard and jet functions look the same as in Scenario III.
Furthermore their running proceeds with $n_l + 1$ flavors from their respective scales to $\mu_S$, since $\mu_m$ is not
crossed during the evolution. Therefore no threshold coefficients are necessary in this scenario. On the other hand now
we have real radiation effects in the soft function, and in complete analogy to the jet function in Scenario III we switch
to the $\overline{\rm MS}$ scheme, such that the massless limit is manifest. The factorization theorem looks very similar to
Eq.~(\ref{eq:diffsigma0}), with the exception that the matrix elements depend explicitly on $m$:
\begin{align}
\frac{1}{\sigma_0}\frac{\mathrm{d}\sigma}{\mathrm{d}\tau}\,=\, &\, Q\,H^{(n_l+1)}(Q,m,\mu_H)\,
\,U^{(n_l+1)}_H(Q,\mu_H,\mu_S) \int\! \mathrm{d} s\! \int\! \mathrm{d} s' \,J^{(n_l+1)}(s',m,\mu_J) \\
&\times U^{(n_l+1)}_J(s-s',\mu_S,\mu_J)\,S^{(n_l+1)}\Big(Q\,\tau-\frac{s}{Q},m,\mu_S\Big).\nonumber
\label{eq:diffsigmaIV}
\end{align}
Every single element of Eq.~(\ref{eq:diffsigmaIV}) depends on $\alpha_s^{(n_l + 1)}$ and all functions have a smooth massless
limit, in particular
\begin{align}
\lim_{m\to 0} S^{(n_l+1)}(\ell,m,\mu) = S^{(n_l+1)}(\ell,\mu)\,,
\end{align}
so that in this limit Eq.~(\ref{eq:diffsigmaIV}) reduces exactly to Eq.~(\ref{eq:diffsigma0}) with $n_f = n_l + 1$.
Finally, in this scenario the evolution of the gap parameter \cite{Hoang:2008fs} also crosses the mass-mode threshold.
Therefore its running
proceeds in two steps and includes threshold corrections, in a similar way as for the strong coupling constant.
For the gap parameter one does not encounter any rapidity logarithms in the threshold corrections.
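As a schematic summary of the four cases, the following Python sketch (our illustration, with boundary cases glossed over) selects the scenario by comparing the quark mass with the hard, jet, and soft scales $Q$, $Q\lambda$, and $Q\lambda^2$, in line with the subsection titles above.
\begin{verbatim}
def factorization_scenario(Q, m, lam):
    # Compare the quark mass with the hard, jet and soft scales Q, Q*lam,
    # Q*lam^2 to pick the appropriate factorization theorem (boundary
    # cases are glossed over in this illustration).
    if m > Q:
        return "I"    # mass above the hard scale: only virtual effects
    if m > Q * lam:
        return "II"   # threshold crossed between the hard and jet scales
    if m > Q * lam**2:
        return "III"  # threshold crossed between the jet and soft scales
    return "IV"       # mass below the soft scale: (n_l+1) flavors throughout

print(factorization_scenario(Q=14.0, m=4.8, lam=0.3))   # illustrative values
\end{verbatim}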
\section{Numerical Results}
In Fig.~\ref{fig:numerics} we show the effects of the secondary bottom (a) and top (b) quark masses on the thrust distribution,
at $14$~GeV and $500$~GeV, respectively. The plots only show the most singular terms of the distribution, and do not include
an estimate of the perturbative uncertainties. The renormalization scales have been chosen such that no large logs appear,
and their specific form can be found in Ref.~\cite{Pietrulewicz:2014qza}, as well as further plots. Hadronization effects
and renormalon subtractions are included. The different scenarios are indicated by horizontal dashed lines. One can see that
the effects are more visible in the peak of the distribution, and very small in the tail, where measurements of $\alpha_s$
are usually carried out.
\begin{figure}
\captionsetup{type=figure}
\subfloat[][]{
\includegraphics[width=0.49\columnwidth]{figs/Q14}}~~~~
\subfloat[][]
{\includegraphics[width=0.48\columnwidth]{figs/Q500}}
\caption{The thrust distribution for primary massless production at $Q=14$ GeV (a) and $Q = 500$ GeV (b) including secondary
massive bottom (a) and top (b) effects (blue, solid) compared to keeping the bottom/top quark massless (red, dashed).
\label{fig:numerics}}
\end{figure}
\section{Conclusions}
In this work an explicit realization of a VFNS for final jet states has been presented, taking thrust as a case example.
Collinear and soft mass modes are included in the EFT setup to properly account for secondary heavy quark
radiation, which allows us to write down a set of factorization theorems. Depending on the relation between the mass scale and
the other renormalization scales, one of the four different factorization scenarios is applied. The key to achieving a continuous
description of the distribution from infinitely heavy masses (decoupling limit) down to infinitesimally small ones (massless
limit) is to include the mass modes as passive or active running degrees of freedom, corresponding to renormalization
subtractions in the OS or $\overline{\rm MS}$ scheme, respectively, depending on the situation at hand.
Real radiation effects in the jet and soft functions are included whenever they are kinematically allowed.
We have shown that the numerical impact of the secondary quark mass effects on the thrust distribution is small in the
tail region, but sizable at the peak. We plan to employ the VFNS setup for final-state jets discussed here for a number of
other applications where quark mass effects are more sizable.
\begin{theacknowledgments}
We thank the Erwin Schr\"odinger Institute (ESI) for partial support in the framework of the ESI program ``Jets and Quantum
Fields for LHC and Future Colliders''.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Dyadic Subarcs and Diameter Functions} \label{S:Dyadics}
Here we give precise definitions of our model curves, i.e., our model circles. These are given by defining metrics on $\mfS^1$. Since we can restrict attention to 1-bounded turning circles (thanks to \rf{L:BT iff BL}(b,c)), it suffices to only know the diameters of certain subarcs, provided we have a sufficiently plentiful collection of subarcs; for this purpose we use the dyadic subarcs described in \rf{s:DS}. We introduce the notion of a dyadic diameter function in \rf{s:DDF}; these provide a simple method for constructing metrics on $\mfS^1$. Then in \rf{s:BL equiv} we establish a convenient way to detect when two such metrics are bi-Lipschitz equivalent, and also when a given metric Jordan curve is bi-Lipschitz equivalent to $\mfS^1$ with such a metric.
\subsection{Dyadic Subarcs} \label{s:DS}
With our convention that $\mfS^1=[0,1]/\{0\!\!\sim\!\!1\}$, the $n^{\rm th}$-generation dyadic subarcs of $\mfS^1$ (obtained by dividing $\mfS^1$ into $2^n$ subarcs of equal diameter) are the subarcs of the form
\begin{gather*}
I_k^n:=[k/2^n,(k+1)/2^n] \quad\text{where $k\in\{0,1,\dots,2^n-1\}$}. \\
\intertext{Noting that $I^0:=I_0^0:=\mfS^1$, we define}
\mcI^{n}:=\{I_k^{n} \mid k\in\{0,1,\dots,2^n-1\} \} \quad \text{and then} \quad \mcI:=\bigcup_{n=0}^\infty \mcI^{n} .
\end{gather*}
Each dyadic subarc $I^n\in \mcI^n$ contains exactly two
$I^{n+1},\tilde{I}^{n+1}\in \mcI^{n+1}$ that we call the \emph{children} of $I^n$, and then $I^n$ is the
\emph{parent} of each of $I^{n+1}, \tilde{I}^{n+1}$.
It is convenient to introduce some terminology. Often, we denote the \emph{children} or \emph{sibling} or \emph{parent} of a generic $I\in\mcI$ by
$$
I_0 \,, I_1 \;\text{ or }\; \tilde{I} \;\text{ or }\; \hat{I}
$$
respectively; implicit in the use of the latter two notations is the requirement that $I\ne\mfS^1$.
Clearly, $(\mcI^n)_1^\infty$ is a shrinking subdivision for $\mfS^1$ in the sense of \rf{s:shrinking}.
Recall too that a sequence $(I^{n})_{n=0}^\infty$ of dyadic subarcs $I^n\in\mcI^n$ is a \emph{descendant sequence} provided $I^{0}\supset I^{1}\supset I^{2}\supset\dots$; that is, for each $n$, $I^{n+1}$ is a child of $I^{n}$. We note that for each $x\in\mfS^1$ there is a descendant sequence $(I_x^{n})_{n=0}^\infty$ with $\{x\}=\bigcap_{n=0}^\infty I_x^{n}$; such a sequence is unique unless $x$ is a dyadic endpoint in which case there are exactly two such sequences.
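For concreteness, here is a small Python sketch (ours, not part of the formal development) that lists the descendant sequence of dyadic subarcs containing a given point; each arc $I_k^n$ is encoded by the pair $(n,k)$.
\begin{verbatim}
from fractions import Fraction

def descendant_sequence(x, depth):
    # Descendant sequence I_x^0, ..., I_x^depth of dyadic subarcs containing
    # x in [0,1); the arc I_k^n = [k/2^n, (k+1)/2^n] is encoded as (n, k).
    # For a dyadic endpoint x this picks the sequence whose arcs have x as
    # their left endpoint (one of the two possible choices).
    x = Fraction(x)
    return [(n, int(x * 2**n)) for n in range(depth + 1)]

# Example: x = 5/8 lies in [0,1], [1/2,1], [1/2,3/4], [5/8,3/4], [5/8,11/16].
print(descendant_sequence(Fraction(5, 8), 4))
# -> [(0, 0), (1, 1), (2, 2), (3, 5), (4, 10)]
\end{verbatim}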
By connecting each arc to its parent, we can view $\mcI$ as the vertex set of a rooted binary tree. In this connection, we use the following elementary fact on various subtrees.
\theoremstyle{plain}
\newtheorem*{KsL}{K\H{o}nig's Lemma}
\begin{KsL}
A rooted tree with infinitely many vertices, each of finite degree, contains an infinite simple path.
\end{KsL}
In our setting this means that each infinite subtree contains a descendant sequence.
\medskip
In the proof of part (B) of our Theorem it will be convenient to ``do $m$ steps at once''. This means that instead of dividing an arc into two subarcs, we will divide it into $2^m$ subarcs. With this in mind, we also consider the family $\mcJ$ of all $2^m$-adic subarcs; thus
$$
\mcJ := \bigcup_{n=0}^\infty \mcJ^{n} \quad\text{where $\mcJ^{n} = \mcI^{mn}$}.
$$
Each $\mcJ^{n}$ contains the $2^{mn}$ subarcs of the form
$J_k^n:=[k/2^{mn}, (k+1)/2^{mn}]$ in $\mcI^{mn}$ with $k\in\{0,1,\dots,2^{mn}-1\}$. Each such arc $J^n$ has
$2^m$ children, i.e., arcs $J^{n+1}\in \mcI^{m(n+1)}$, all of which are contained in $J^n$.
\subsection{Dyadic Diameter Functions} \label{s:DDF}
A dyadic diameter function $\Del$ assigns a diameter $\Delta(I)$ to each dyadic subarc $I\in \mcI$. More precisely, we call $\Del:\mcI\to(0,1]$ a \emph{dyadic diameter function constructed using the snowflake parameter $\sigma\in[1/2,1]$} provided $\Del(\mfS^1)=1$ and
\begin{gather*}
\forall\; I\in\mcI \;, \quad\text{either}\quad \Del(I_0)=\Del(I_1):=\half\,\Del(I) \quad\text{or}\quad \Del(I_0)=\Del(I_1):=\sigma\,\Del(I) \\
\intertext{where $I_0, I_1$ are the two children of $I$. When $\sigma=1$, we also require}
\lim_{n\to\infty} \max \left\{ \Del(I) \mid I\in\mcI^n \right\} = 0 \,.
\end{gather*}
If $\sigma<1$, this latter condition is automatically true. The snowflake parameter $\sigma$ is kept fixed throughout the construction.
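As a quick illustration (ours; any rule for the choices works equally well), the following Python sketch builds such a $\Del$ by choosing, for every arc, whether its two children receive the factor $1/2$ or $\sigma$.
\begin{verbatim}
import random

def make_diameter_function(sigma, seed=0):
    # Dyadic diameter function: arcs are encoded as (n, k) for
    # I_k^n = [k/2^n, (k+1)/2^n], Delta(S^1) = 1, and both children of an
    # arc get either (1/2)*Delta or sigma*Delta of their parent.  Here the
    # choice is random; for sigma = 1 an arbitrary rule must additionally
    # force the diameters to tend to zero.
    rng = random.Random(seed)
    factor = {}                # (n, k) -> factor assigned to the two children
    value = {(0, 0): 1.0}

    def delta(n, k):
        if (n, k) not in value:
            parent = (n - 1, k // 2)     # children of I_j^{n-1} are k = 2j, 2j+1
            if parent not in factor:
                factor[parent] = rng.choice([0.5, sigma])
            value[(n, k)] = factor[parent] * delta(*parent)
        return value[(n, k)]

    return delta

delta = make_diameter_function(sigma=0.7)
print([round(delta(3, k), 4) for k in range(8)])   # third-generation diameters
\end{verbatim}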
\showinfo{
By taking $\sigma=1/2$ we recover the (normalized) Euclidean arc-length metric $\lam$. With $\sigma=1$ we obtain a \emph{simple} dyadic diameter function that satisfies
\begin{equation}\label{E:simple}
\forall\; I\in\mcI \,, \quad \Del(I)/\Del(I_0)=\Del(I)/\Del(I_1) \in \{1,2\}.
\end{equation}
}
Each dyadic diameter function $\Del$ produces a distance function $d=d_\Del$ on $\mfS^1$ defined by
\begin{equation} \label{E:dist}
d(x,y)=d_\Del(x,y):=\inf \sum_{k=1}^N \Del(I_k)
\end{equation}
where the infimum is taken over all \emph{$xy$-chains} $I_1,\dots,I_N$ in $\mcI$; thus $x$ and $y$ lie in $I_1\cup\dots\cup I_N$, each $I_k$ belongs to $\mcI$, and for all $2\le k \le N$, $I_{k-1}\cap I_k\ne\emptyset$.
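Computing the infimum in \eqref{E:dist} exactly is not needed in what follows, but simple upper bounds are easy to obtain: for any generation $n$, the generation-$n$ arcs meeting the parameter interval between $x$ and $y$ form an $xy$-chain. A short Python sketch (ours, reusing \texttt{make\_diameter\_function} from the previous sketch) is:
\begin{verbatim}
def chain_upper_bound(delta, x, y, n):
    # Upper bound on d_Delta(x, y): the generation-n dyadic arcs that meet
    # the parameter interval [x, y] (not crossing the point 0 ~ 1) form an
    # xy-chain, so the sum of their assigned diameters dominates d(x, y).
    # The true distance is the infimum over all chains of all generations.
    x, y = sorted((x, y))
    k_lo, k_hi = int(x * 2**n), min(int(y * 2**n), 2**n - 1)
    return sum(delta(n, k) for k in range(k_lo, k_hi + 1))

print(chain_upper_bound(delta, 0.10, 0.35, 6))   # delta from the sketch above
\end{verbatim}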
Now we present various properties of this metric. Our `diameter function' terminology is motivated by item (d) below.
\begin{lma} \label{L:dist} %
Let $\mcI\overset{\Del}\to(0,\infty)$ be a dyadic diameter function and define $d:=d_\Del$ as in \eqref{E:dist}. Then:
\begin{enumerate}
\item[\rm(a)] $d$ is a metric on $\mfS^1$.
\item[\rm(b)] The identity map $\id:(\mfS^1,d)\to(\mfS^1,\lam)$ is a 1-Lipschitz homeomorphism; recall that $\lam$ is the normalized length metric on $\mfS^1$; see \rf{s:basic info}.
\item[\rm(c)] $(\mfS^1,d)$ is 1-bounded turning (so $d$ is its own diameter distance).
\item[\rm(d)] The diameter (with respect to $d$) of each dyadic subarc is given by $\Delta$; i.e., for all $n\in\mfN$ and all $I\in\mcI^{n}$, $\diam_d(I)=\Del(I)$.
\item[\rm(e)] If $\Del$ is constructed using a snowflake parameter $\sigma\in[1/2,1)$, then the Assouad dimension of $(\mfS^1,d)$ is at most $\log 2/\log(1/\sigma)$. Equality holds for the ``extremal model'' where we take $\Del(I_0) = \Del(I_1) = \sigma\, \Delta(I)$ for both children $I_0,I_1$ of each $I\in\mcI$.
\end{enumerate}
\end{lma}
\noindent
\begin{proof}%
(a) It is clear that $d$ is non-negative, symmetric, and satisfies the triangle inequality. Given $x\in \mfS^1$ and $n\in \mfN$, let $I^n_x\in \mcI^n$ be a dyadic subarc containing $x$. Since, for each $n$, the single arc $I^n_x$ is an $xx$-chain, $d(x,x)\leq \Delta(I^n_x) \to 0$ (as $n\to \infty$), so $d(x,x)=0$. Since $\Delta(I^n)\geq 2^{-n}=\diam_\lam(I^n)$, it follows that $d(x,y)\geq \lam(x,y)$. Thus $d(x,y)= 0$ if and only if $x=y$.
\smallskip\noindent(b)
This follows from \rf{P:subdiv_homeo} and the penultimate sentence in the proof of (a).
\smallskip\noindent(d)
Fix $I\in\mcI^{n}$ with $n\geq 1$. For all points $x,y\in
I$, $I$ is an $xy$-chain, so $d(x,y)\le \Delta(I)$ and thus $\diam_d(I)\le
\Delta(I)$. The opposite inequality follows from the observation that any
chain joining the endpoints of $I$ must cover either $I$ or its sibling $\tilde{I}$.
\smallskip\noindent(c)
To demonstrate that $(\mfS^1,d)$ is $1$-bounded turning, fix distinct points
$x,y\in\mfS^1$. Let $[x,y]$ and $[y,x]$ be the two closed
arcs on $\mfS^1$ between $x,y$ (i.e., the closures of
the components of $\mfS^1\sm\{x,y\}$). Assume that $\diam_d([x,y])
\leq \diam_d([y,x])$. Next let $I_1,\dots,I_N$ be
any $xy$-chain. Then $I_1\cup\dots\cup I_N\supset A$, where either
$A=[x,y]$ or $A=[y,x]$, so $\diam_d([x,y])\leq \diam_d(A)$.
For any $a,b\in A$, $I_1,\dots, I_N$ is an $ab$-chain; therefore
$$
d(a,b) \le \sum_{n=1}^N \Delta(I_n) \,,\;\text{ and thus }\; \diam_d([x,y]) \le \diam_d(A) \le \sum_{n=1}^N \Delta(I_n) \,.
$$
Taking the infimum over all such $xy$-chains $I_1,\dots,I_N$ yields $$\diam_d([x,y])\le d(x,y)\,.$$
\noindent(e)
First, suppose $\Del$ is constructed using a snowflake parameter $\sigma\in[1/2,1)$. Let $\alpha:=\log 2/\log(1/\sigma)$, so $\sigma^{-\alpha}=2$. Fix an arbitrary $\varepsilon\in(0,1]$. Choose $n\in \mfN$ so that $\sigma^n < \varepsilon \leq \sigma^{n-1}$. Consider a dyadic subarc $I^n\in\mcI^n$. Then
$\diam_d(I^n) =\Delta(I^n) \leq \sigma^n< \varepsilon$.
Now let $A$ be any $\varepsilon$-separated set in $(\mfS^1,d)$. Then $A$ contains at most one point in each dyadic subarc $I^n\in \mcI^n$. Thus
$$
\card(A) \leq 2^n = \lp \sigma^{-\alpha} \rp^n = \sigma^{-\alpha} \lp \sigma^{n-1} \rp^{-\alpha} \le 2\, \varepsilon^{-\alpha} \,.
$$
It follows that the Assouad dimension of $(\mfS^1,d)$ is at most $\alpha$; see \rf{s:assouad-dimension}.
\smallskip
Finally, consider the dyadic diameter function given by setting $\Delta(I^{n+1}) :=\sigma\, \Delta(I^n)$ (for each child $I^{n+1}\in \mcI^{n+1}$ of every $I^n\in \mcI^{n}$) and its corresponding metric $d=d_\Delta$. Then for each $n\in\mfN$, the set $A^n:=\{k/2^n \mid 0\le k < 2^n\}$ of $n^{\rm th}$-generation endpoints is $\sigma^n$-separated in $(\mfS^1,d)$. Assume constants $C>0, \alpha>0$ are given so that the number of $\varepsilon$-separated points is at most $C\varepsilon^{-\alpha}$. Taking $\varepsilon:=\sigma^n$ we obtain
\begin{equation*}
C\varepsilon^{-\alpha} = C (\sigma^n)^{-\alpha} = C (\sigma^{-\alpha})^n \geq \card(A^n) = 2^n \,,\;\text{ so }\; \alpha\geq \frac{\log 2}{\log(1/\sigma)}\,. \qedhere
\end{equation*}
\end{proof}%
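To see the counting in part (e) numerically, a short computation (ours) verifies that for the extremal model the $2^n$ generation-$n$ endpoints, which are $\sigma^n$-separated, respect the bound $\card(A)\le 2\,\varepsilon^{-\alpha}$ with $\alpha=\log 2/\log(1/\sigma)$:
\begin{verbatim}
import math

sigma = 0.7
alpha = math.log(2) / math.log(1 / sigma)     # claimed Assouad dimension
for n in range(1, 7):
    eps = sigma ** n                          # separation of the 2^n endpoints
    print(n, 2 ** n, 2 * eps ** (-alpha))     # card(A^n) vs. the bound 2*eps^(-alpha)
\end{verbatim}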
\showinfo{The following is useful for checking the requirement that the diameters tend to zero.
\begin{lma} \label{L:Itozero} %
Suppose the function $\mcI\overset{\Delta}\to(0,\infty)$ satisfies $\Del(\hat{I})\ge\Del(I)$ for each $I\in\mcI\sm\{\mfS^1\}$. Then the following are equivalent:
\begin{enumerate}
\item[\rm(a)] $\lim_{n\to\infty} \max \{\Delta(I^{n}) \mid I^n \in \mcI^n\}= 0$.
\item[\rm(b)] For every sequence $(I^{n})_0^\infty$ (with $I^{n}\in\mcI^{n}$),
$\lim_{n\to\infty} \Delta(I^{n}) = 0$.
\item[\rm(c)] For every descendant sequence
$(I^{n})_0^\infty$, $\lim_{n\to\infty} \Delta(I^{n}) = 0$.
\end{enumerate}
\end{lma}
\begin{proof}%
The implications (a)$\iff$(b)$\implies$(c) are clear.
Assume (a) is false. Then there is an $\varepsilon>0$ such that the set
$\mcI_\varepsilon:= \{I^n\in \mcI \mid \Delta(I^n)\geq \varepsilon\}$ is
infinite. Note that if $I^n$ is in $\mcI_\varepsilon$, then its parent
belongs to $\mcI_\varepsilon$ as well. Thus $\mcI_\varepsilon$ naturally forms
a graph, where we connect each $I^n\in \mcI_\varepsilon$ to
its parent. This is an infinite tree. By K\H{o}nig's Lemma
there is a descendant
sequence $I^0\supset I^1\supset \dots$ with $\Delta(I^n) \geq
\varepsilon$ for all $n\in \mfN$. Thus (c) is false, so (c)$\implies$(a).
\end{proof}%
}
\medskip
Given $\sigma\in[1/2,1]$, we let $\mcS_\sigma$ be the collection of all metric circles $(\mfS^1,d)$, where the metric $d=d_\Del$ is defined as in \eqref{E:dist} and $\Del:\mcI\to(0,1]$ is any dyadic diameter function constructed using the snowflake parameter $\sigma$. Then
$$
\mcS:=\bigcup_{\sigma\in[1/2,1]} \mcS_\sigma
$$
is our catalog of snowflake type metric circles.
Thanks to the Tukia-\Va\ characterization, \rf{L:dist}(c,e) imply that for $\sigma\in[1/2,1)$, each curve in $\mcS_\sigma$ is a metric quasicircle.
The curves in $\mcS_1$ are bounded turning circles, but need not be metric quasicircles since they may fail to be doubling. There is a simple test for doubling that we give below in Lemma~\ref{L:dblg}.
\subsection{$2^m$-adic Diameter Functions} \label{s:2DF}
We also require $2^m$-adic diameter functions; recall (see the end of \rf{s:DS}) that $\mcJ$ denotes the family of $2^m$-adic subarcs of $\mfS^1$. We call $\Del:\mcJ\to(0,1]$ a \emph{$2^m$-adic diameter function constructed using the snowflake parameter $\tau\in[1/2^m,1]$} provided $\Del(\mfS^1)=1$ and
\begin{align*}
\forall\; J\in\mcJ \;, \quad\text{either}\quad & \Del(J_0)=\Del(J_1)=\dots=\Del(J_{2^m-1}):=\frac{1}{2^m}\,\Del(J) \\
\quad\text{or}\quad & \Del(J_0)=\Del(J_1)=\dots=\Del(J_{2^m-1}):=\tau\,\Del(J)
\end{align*}
where $J_0,\dots,J_{2^m-1}$ are the children of $J$. The
snowflake parameter $\tau$ is fixed throughout the construction. If
$\tau=1$, we also require
$$
\lim_{n\to\infty} \max \left\{ \Del(J) \mid J\in\mcJ^n \right\} = 0 \,.
$$
When $\tau<1$ this latter condition is automatically true.
\smallskip
Just as for dyadic diameter functions, each $2^m$-adic diameter function $\Del$ has an associated distance function $d_\Del$ defined as in \eqref{E:dist} but now we only consider $xy$-chains chosen from $\mcJ$. \rf{L:dist} remains valid for $2^m$-adic diameter functions; however, in part (e) we must take $\sigma=\tau^{1/m}$, where the $2^m$-adic diameter function is constructed using the snowflake parameter $\tau\in[1/2^m,1]$.
\medskip
We note the following
useful fact. For each dyadic arc $I\in\mcI$, there exist $2^m$-adic
arcs $J^n\in\mcJ^n$ and $J^{n+1}\in\mcJ^{n+1}$ such that
\begin{equation} \label{E:I vs J}
J^{n+1} \subset I\subset J^n \,.
\end{equation}
Each $2^m$-adic diameter function $\Del:\mcJ\to(0,1]$, with snowflake parameter $\tau$, has a natural extension to a dyadic diameter function $\Del:\mcI\to(0,1]$, with snowflake parameter $\sigma:=\tau^{1/m}$, that is defined as follows. Fix a subarc $J^n\in\mcJ$ and let $J^{n+1}\subset J^n$ be any child of $J^n$. Let $J^n=:I^{mn}\supset I^{mn+1}\supset \dots \supset I^{m(n+1)}:=J^{n+1}$ be the finite descendant sequence from $\mcI$ determined by $J^{n+1}$ and $J^n$. Set
\begin{gather*}
\rho:=[\Del(J^{n+1})/\Del(J^n)]^{1/m} \quad\text{(so, $\rho\in\{1/2,\tau^{1/m}\}$)} \\
\intertext{and for each $i\in\{0,1,\dots,m\}$ define}
\Del(I^{mn+i}) := \rho^i \Del(J^n) \,.
\end{gather*}
In view of \eqref{E:I vs J}, this procedure defines $\Del(I)$ for each $I\in\mcI$. Note that $\Delta(I^{mn+0})= \Delta(J^n)$ and $\Delta(I^{mn+m})= \Delta(J^{n+1})$, so $\Del:\mcI\to(0,1]$ is an extension of $\Delta\colon \mcJ\to (0,1]$. Clearly this extension is a dyadic diameter function constructed with the snowflake parameter $\sigma=\tau^{1/m}$.
\begin{lma} \label{L:DDF to 2^mDF} %
Let $\Delta\colon\mcJ\to(0,1]$ be a $2^m$-adic diameter function that has been extended to all dyadic subarcs, i.e., to a dyadic diameter function $\Delta\colon \mcI\to (0,1]$, as described above. Let $d_\mcI$ and $d_\mcJ$ be the metrics defined via $\Delta|_{\mcI}$ and $\Delta|_{\mcJ}$ respectively, meaning by \eqref{E:dist} and using chains from $\mcI$ and $\mcJ$ respectively.
Then for all $x,y\in\mfS^1$,
$$
\frac1{2^m} d_\mcJ(x,y) \le d_\mcI(x,y) \le d_\mcJ(x,y) \,.
$$
\end{lma}
\begin{proof}%
The right-hand inequality holds because there are more $xy$-chains available when we use subarcs from $\mcI$. To prove the left-hand inequality, let $I_1,\dots,I_N$ be an $xy$-chain from $\mcI$. Now use \eqref{E:I vs J} to get a corresponding $xy$-chain $J_1,\dots,J_N$ from $\mcJ$ and with $J_k'\subset I_k \subset J_k$ where $J_k'$ is some child of $J_k$. Then for each $k$
$$
\Del(I_k) \ge \Del(J'_k) \ge 2^{-m} \Del(J_k)\,, \quad\text{so}\quad
d_\mcJ(x,y) \le \sum_{k=1}^N \Del(J_k) \le 2^m \sum_{k=1}^N \Del(I_k) \,.
$$
Taking an infimum gives $d_\mcJ(x,y)\le 2^m d_\mcI(x,y)$.
\end{proof}%
The previous lemma and prior discussion reveal that in order to prove that a given metric circle $(\Gamma,\ed)$ is bi-Lipschitz equivalent to a curve in $\mcS_\sigma$, it is sufficient to construct a $2^m$-adic model circle (with snowflake parameter $\tau= \sigma^m$) that is bi-Lipschitz equivalent to $(\Gamma,\ed)$; this will yield a dyadic model circle (with snowflake parameter $\sigma$) bi-Lipschitz equivalent to $(\Gamma,\ed)$.
\begin{rmk}\label{rmk:4m4adic}
Rohde's construction is based on $4$-adic arcs rather than dyadic arcs. Results similar to the above also hold in this case. Namely each $4^m$-adic diameter function $\bigcup_k\mcI^{4^{mk}}\to(0,1]$, with snowflake parameter $\tau$ in $[1/4^m,1]$, has an extension to a $4$-adic diameter function with snowflake parameter $\sigma:=\tau^{1/m}\in [1/4,1]$. The analog of \rf{L:DDF to 2^mDF} holds: the metrics constructed from these two diameter functions are bi-Lipschitz equivalent.
\end{rmk}
\subsection{Bi-Lipschitz Equivalence} \label{s:BL equiv}
Let $(\Gamma,\ed)$ be a bounded turning circle and $(\mfS^1,d_\Del)$ be a model circle where $\Del$ is some dyadic diameter function. In the following we
show that to prove bi-Lipschitz equivalence of $(\Gamma,\ed)$ and $(\mfS^1,d_\Del)$, it is enough to show bi-Lipschitz equivalence for dyadic
subarcs. More precisely, we establish the following result.
\begin{lma} \label{L:d BL_to_d_D_variant2k}
Let $(\Gamma,\ed)$ be a $C$-bounded turning circle and $d=d_\Delta$ a metric on
$\mfS^1$ defined via a $2^m$-adic diameter function $\Delta$.
Let $\varphi\colon \mfS^1 \to \Gamma$ be a homeomorphism.
Suppose there exists a constant $K\geq 1$ such that for all $J\in\mcJ$,
$$
K^{-1}\diam(\varphi(J)) \le \Delta(J) \le K\, \diam(\varphi(J)) \,.
$$
Then $(\mfS^1,d)\overset{\vphi} \to (\Gamma,\ed)$ is $L$-bi-Lipschitz where $L:=2^{m+1}C\,K$.
\end{lma}
Before proving this lemma (see \ref{proof}), we first give a simple way to estimate the diameter of an arc in terms of the diameters of dyadic subarcs.
\begin{lma} \label{L:arc lemma_2k_var} %
Let $\mcJ\overset{\Del}\to(0,1]$ be a $2^m$-adic diameter function with associated metric $d=d_\Delta$.
For each arc $A\subset\mfS^1$, define
\begin{equation*} \label{eq:def_delta}
\D(A)=\D_\Del(A):= \max\{\Delta(I) \mid I\subset A, I \in \mcJ\}.
\end{equation*}
Then for all arcs $A\subset\mfS^1$,
\begin{equation*}
\D(A)\leq \diam_d (A)\leq 2^{m+1}\D(A).
\end{equation*}
In fact, there are $2^m$-adic arcs $I,J\in \mcJ$ such that $I\cup J\subset A \subset \hat{I} \cup \hat{J}$, $\Delta(I)=\D(A)$, and either $I=J$ or $\hat{I},\hat{J}$ are adjacent. Here $\hat{I},\hat{J}\in \mcJ$ are the parents of $I,J$ relative to $\mcJ$.
\end{lma}
\begin{proof}%
Let $A$ be a subarc of $\mfS^1$.
Suppose we have verified the existence of the described $2^m$-adic arcs $I,J\in \mcJ$. Then
\begin{align*}
\D(A) &=\Del(I)=\diam_d(I) \le \diam_d(A) \le \diam_d(\hat{I}\cup\hat{J}) \\
&\le \diam_d(\hat{I})+\diam_d(\hat{J}) = \Del(\hat{I})+\Del(\hat{J}) \\
&\le 2^m[\Del(I)+\Del(J)] \le 2^{m+1} \Del(I) = 2^{m+1} \D(A) \,.
\end{align*}
Thus it suffices to exhibit such $I$ and $J$.
\smallskip
Suppose $\mcF\subset\mcJ$ is some family of $2^m$-adic arcs (e.g.,
defined by certain properties). We say that an arc $I^n\in\mcJ^n$ is
\emph{maximal \wrt $\mcF$} provided $I^n\in\mcF$ and for all $J^l\in
\mcJ^l$ with
$J^l\in\mcF$, either $\Del(J^l)<\Del(I^n)$ or
$$
\Del(J^l)=\Del(I^n) \quad\text{and }\; l\ge n \,.
$$
Thus $I^n$ is the ``largest'' arc in $\mcF$, and when there are
several such large arcs, ``seniority wins''. Note that the parent of
such a maximal $I^n$ will not belong to $\mcF$.
\smallskip
Now assume $A$ is the oriented arc $[a,b]\subset\mfS^1=[0,1]/\!\!\sim$
with $0<a<b<1$. Pick $I=I^n\in \mcJ$ so that $I\subset A$,
$\Del(I)=\D(A)$, and such that $I$ is maximal among all such arcs.
Let $\hat{I}\supset I$ be the $\mcJ$-parent of $I$. If $A\subset
\hat{I}$, then upon setting $J:=I$ we are done.
\smallskip
Assume that $A\not\subset \hat{I}$. The maximality of $I$
ensures that one endpoint of $\hat{I}$, without loss of generality the
left endpoint, is not contained in $A$. Let $y$ be the right endpoint
of $\hat{I}$. Then $[a,y]\subset \hat{I}$.
Now consider subarcs $J\in\mcJ$ that lie in $A$ and to the right of
$y$, and select the largest of these. More precisely, let
$J=J^l\in\mcJ$ be the maximal $2^m$-adic subarc that contains $y$ as
its left endpoint and is contained in $[y,b]$. Note that the
maximality of $I$ implies that
\begin{equation}\label{E:J vs I}
\text{either}\quad l \ge n \qquad\text{or}\quad \Del(J)<\Del(I) \,.
\end{equation}
Consider the parent $\hat{J}$ of $J$. We claim that $\hat{J}$ contains a point to the right of $b$, and then since $A=[a,y]\cup[y,b]\subset\hat{I}\cup\hat{J}$, we are done. If $\hat{J}$ did not contain a point to the right of $b$, then it would have to contain a point to the left of $y$, but as we now show this would lead to a contradiction.
So, suppose $\hat{J}$ contains a point to the left of $y$. Then in particular, $y$ is an interior point of $\hat{J}$. Since $y$ is an endpoint of $\hat{I}$, we cannot have $\hat{I}\supset\hat{J}$ nor $\hat{I}=\hat{J}$, and therefore $\hat{I}\subsetneq\hat{J}$. This implies that $n>l$. However, it also implies that some $2^m$-adic sibling $\tilde{J}$ of $J$ satisfies $\tilde{J}\supset\hat{I}$, and therefore $\Del(I)\le\Del(\hat{I})\le\Del(\tilde{J})=\Del(J)$. In view of \eqref{E:J vs I}, one of these last two implications does not hold, so $\hat{J}$ cannot contain a point to the left of $y$.
\end{proof}%
\begin{pf}{Proof of \rf{L:d BL_to_d_D_variant2k}} \label{proof}%
An appeal to \rf{L:BT iff BL}(b,c) permits us to assume that $(\Gamma,\ed)$ is 1-bounded turning. Write $\Gamma[x,y]$ for the smaller diameter subarc joining points $x,y$ on $\Gamma$; so $\abs{x-y}=\diam(\Gamma[x,y])$. Fix points $s,t$ on $\mfS^1$ and put $x:=\vphi(s), y:=\vphi(t)$. Let $[s,t], [t,s]$ be the two arcs in $\mfS^1$ joining $s,t$ and assume that $\diam_d([t,s])\geq \diam_d([s,t])=d(s,t)$.
\smallskip
First we show that $\abs{x-y}\le2^{m+1}K\,d(s,t)$. Using \rf{L:arc lemma_2k_var} we select $2^m$-adic subarcs $I,J\in \mcJ$ with $I\cup J\subset [s,t] \subset \hat{I} \cup \hat{J}$, $\hat{I}\cap\hat{J}\ne\emptyset$, and
$$
\Del(J)\le \Delta(I)=\D([s,t])\le \diam_d([s,t])=d(s,t) \,.
$$
Here $\hat{I},\hat{J}\in \mcJ$ are the parents of $I,J$ relative to $\mcJ$. Then
\begin{align*}
\abs{x-y} &=\diam(\Gamma[x,y]) = \min\{\diam(\vphi[s,t]),\diam(\vphi[t,s]) \} \le \diam(\vphi[s,t]) \\
&\le \diam(\vphi(\hat{I} \cup \hat{J})) \le K[\Del(\hat{I}) + \Del( \hat{J} ) ] \le 2^m K [\Del({I}) + \Del({J} ) ] \\
&\le 2^{m+1} K \, \Del(I) \le 2^{m+1} K \, d(s,t) \,.
\end{align*}
Next we show that $d(s,t)\le2^{m+1}K\,\abs{x-y}$. Let $A$ be the subarc of $\mfS^1$---either $A=[s,t]$ or $A=[t,s]$---with $\vphi(A)=\Gamma[x,y]$. Again we use \rf{L:arc lemma_2k_var} to pick a subarc $I\in \mcJ$ with $I\subset A$ and $\Delta(I)=\D(A)$. Then $\vphi(I)\subset\vphi(A)=\Gamma[x,y]$, so
\begin{align*}
d(s,t)&\le\diam_d(A)\le 2^{m+1}\D(A)=2^{m+1}\Del(I) \le 2^{m+1}K\,\diam(\vphi(I)) \\
&\le 2^{m+1}K\,\diam(\Gamma[x,y])=2^{m+1}K\,\abs{x-y} \,.
\end{align*}
\vspace*{-5mm}
\end{pf}%
\medskip
We end this subsection with a criterion that describes when a metric circle in $\mcS_1$ is doubling. Roughly speaking, we get doubling if and only if diameters are always at least halved after a fixed number of steps.
\begin{lma} \label{L:dblg} %
Let $\mcI\overset{\Del}\to(0,1]$ be a dyadic diameter function with
snowflake parameter $\sigma=1$ and define $d:=d_\Del$ as in
\eqref{E:dist}. Then $(\mfS^1,d)$ is doubling if and only if there exists an
$n_0\in\mfN$ such that
$$
\forall\; n\in\mfN \,, \forall\; I^{n} \,, \forall\; I^{n+n_0}\subset I^{n} \,, \quad \Del(I^{n+n_0}) \le \half\, \Del(I^{n}) \,.
$$
\end{lma}
\begin{proof}%
Suppose $(\mfS^1,d)$ is doubling. Then there are constants $C\ge1$ and $\alpha\ge1$ such that for each $r$-separated set $E$ in $(\mfS^1,d)$,
$$
\card(E) \le C \lp \diam_d(E)/r \rp^\alpha \,.
$$
Let $I:=I^{n}\in \mcI^n$ be given. Suppose $(I^{m})_{m=n}^{n+k}$ is a
descendant sequence with $\Del(I^{m})\ge r:=\half \Del(I)$ for all
$m\in\{n,n+1,\dots,n+k\}$. Let $E$ be the set of endpoints of all the
subarcs $I^n, \dots, I^{n+k}$. To see that $E$ is $r$-separated, let $e_1,e_2$ be
two distinct points in $E$. We can assume that $e_1$ is an endpoint
of some $I^i$ and $e_2\in I^j\subset I^i$, where $n\leq i< j \leq
n+k$, and that $I^j$ does not contain $e_1$ but $I^{j-1}$ does. Then
the sibling $\tilde{I}^{j}$ of $I^j$ separates $e_1$ and $e_2$. Thus
$d(e_1,e_2)\geq \Delta(\tilde{I}^{j})= \Delta(I^j)\geq r$.
Now $\diam_d(E)=\diam_d(I)=\Del(I)=2\,r$, so by doubling
$$
k \le \card(E) \le C \lp \diam_d(E)/r \rp^\alpha = 2^\alpha C \,.
$$
Therefore $n_0:= \lceil 2^{\alpha}C\rceil +1$ is the desired number.
\medskip
Conversely, suppose there is such an $n_0\in\mfN$. Let $A\subset \mfS^1$ be any arc. Let $I\in\mcI^n$, $J\in \mcI^m$ be dyadic subarcs with parents $\hat{I}\in \mcI^{n-1}$, $\hat{J}\in \mcI^{m-1}$ as in \rf{L:arc lemma_2k_var}; thus $I\cup J \subset A \subset \hat{I}\cup \hat{J}$. Let $I_1,\dots,
I_{2^{n_0+1}}\in \mcI^{n+n_0}$, $J_1,\dots, J_{2^{n_0+1}}\in \mcI^{m+n_0}$ be the dyadic subarcs contained in $\hat{I}$ and $\hat{J}$ respectively. Then for all $1\leq k \leq 2^{n_0+1}$
\begin{equation*}
\diam_d(I_k)=\Del(I_k) \leq \half \diam_d(I) \leq \diam_d(A)
\end{equation*}
and similarly $\diam_d(J_k)\leq(1/2)\diam_d(A)$. Thus we obtain the doubling condition with $N:=2^{n_0+2}$.
\end{proof}%
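The criterion is also easy to test computationally. The following Python sketch (ours) checks it, up to a finite generation, for any diameter function exposed through the $(n,k)$ interface used in the earlier sketches; a full verification would of course require all generations.
\begin{verbatim}
def satisfies_halving(delta, n0, max_gen):
    # Finite-generation check of the lemma's criterion: every dyadic arc
    # lying n0 generations below an arc I must have diameter at most
    # Delta(I)/2.
    for n in range(max_gen - n0 + 1):
        for k in range(2 ** n):
            top = delta(n, k)
            for j in range(k * 2 ** n0, (k + 1) * 2 ** n0):
                if delta(n + n0, j) > 0.5 * top:
                    return False
    return True

# Example with the random diameter function built earlier: since sigma = 0.7,
# two steps shrink diameters by at most 0.7**2 = 0.49 <= 1/2, so n0 = 2 works.
print(satisfies_halving(delta, n0=2, max_gen=6))
\end{verbatim}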
\section{Introduction} \label{S:Intro}
By definition, a \emph{metric quasicircle} is the \qsc\ image of the unit circle $\mfS^1$. (See \rf{S:Prelims} for definitions and basic terminology.) We exhibit a catalog that contains a bi-Lipschitz copy of each metric quasicircle. This is a metric space analog of recent work by Steffen Rohde \cite{Rohde-qcircles-mod-bl}, so we briefly describe his result. He constructed a collection $\mcR$ of snowflake type planar curves with the intriguing property that each planar quasicircle (the image of $\mfS^1$ under a global \qc\ self-homeomorphism of the plane) is bi-Lipschitz equivalent to some curve in $\mcR$.
Rohde's catalog is $\mcR:=\bigcup \mcR_p$, where $p\in[1/4,1/2)$ is a \emph{snowflake parameter}. Each curve in $\mcR_p$ is built in a manner reminiscent of the construction of the von Koch snowflake. Thus, each $R\in\mcR_p$ is the limit of a sequence $(R^n)$ of polygons where $R^{n+1}$ is obtained from $R^n$ by using the replacement rule illustrated in \rf{f:Rohde_snow}: for each of the $4^n$ edges $E$ of $R^n$ we have two choices, either we replace $E$ with the four line segments obtained by dividing $E$ into four arcs of equal diameter, or we replace $E$ by a similarity copy of the polygonal arc $A_p$ pictured at the top right of \rf{f:Rohde_snow}. In both cases $E$ is replaced by four new segments, each of these with diameter $(1/4)\diam(E)$ in the first case or with diameter $p\diam(E)$ in the second case. The second type of replacement is done so that the ``tip'' of the replacement arc points into the exterior of $R^n$. This iterative process starts with $R^1$ being the unit square, and the snowflake parameter, thus the polygonal arc $A_p$, is fixed throughout the construction. See the discussion at the beginning of \rf{s:C} for more details.
The sequence $(R^n)$ of polygons converges, in the Hausdorff metric, to a planar quasicircle $R$ that we call a \emph{Rohde snowflake} constructed with snowflake parameter $p$. Then $\mcR_p$ is the collection of all Rohde snowflakes that can be constructed with snowflake parameter $p$.
Rohde \cite[Theorem~1.1]{Rohde-qcircles-mod-bl} proved the following.
\begin{noname}
A planar Jordan curve is a quasicircle if and only if it is \\ the image of some Rohde snowflake under a bi-Lipschitz \\ self-homeomorphism of the plane.
\end{noname}
\begin{figure}[t]
\centering
\begin{overpic}[width=12cm, tics=10]{Rohde_snow.eps}
\put(9,13.5){An edge $E$ of $R^n$}
\put(71.7,15){The arc $A_p$}
\put(66.5,23){$\scriptstyle{p}$}
\put(91,23){$\scriptstyle{p}$}
\put(74.5,28){$\scriptstyle{p}$}
\put(83.5,28){$\scriptstyle{p}$}
\put(63,3){$\scriptstyle{1/4}$}
\put(72.5,3){$\scriptstyle{1/4}$}
\put(82.3,3){$\scriptstyle{1/4}$}
\put(92.1,3){$\scriptstyle{1/4}$}
\end{overpic}
\caption{Construction of a Rohde-snowflake.}
\label{f:Rohde_snow}
\end{figure}
Thanks to a celebrated theorem of Ahlfors \cite{Ahlfors-QCrflxns}, there is a \emph{simple geometric criterion} that characterizes planar quasicircles: a planar Jordan curve $\Gamma$ is a quasicircle if and only if it satisfies the \emph{bounded turning condition}, which means that there is a constant $C\geq 1$ such that for each pair of points $x,y$ on $\Gamma$, the smaller diameter subarc $\Gamma[x,y]$ of $\Gamma$ that joins $x,y$ satisfies
\begin{equation} \label{eq:bt}
\diam(\Gamma[x,y])\le C\,|x-y| \,. \tag{BT}
\end{equation}
We say $\Gamma$ is \emph{$C$-bounded turning} to emphasize the constant $C$.
Tukia and \Va\ \cite{TV-qs} introduced the notion of a \emph{quasisymmetry} between metric spaces. In this same paper they established the following metric space analog of Ahlfors' result.
\begin{noname}
A metric Jordan curve is a metric quasicircle if and only if it is both bounded turning and doubling (that is, of finite Assouad dimension).
\end{noname}
\medskip
Our catalog $\mcS$ of metric snowflake curves is a collection of metric circles $(\mfS^1,d)$ where the metrics $d$ are given in a simple way by specifying the diameter of each dyadic subarc of $\mfS^1$. See \eqref{E:dist} and the end of \rf{s:DDF} for precise details.
Our catalog is $\mcS:=\bigcup \mcS_\sigma$, and we also employ an auxiliary \emph{snowflake parameter} $\sigma\in[1/2,1]$. Each $(\mfS^1,d_\sigma)$ in $\mcS_\sigma$ has a metric $d_\sigma$ that is obtained by the assignment of diameters to each dyadic subarc of $\mfS^1$. As in Rohde's construction, at each step there are two choices: the diameter (\wrt $d_\sigma$) of a given dyadic subarc is either one-half, or $\sigma$, times the diameter of its parent subarc.
Each $(\mfS^1,d_\sigma)$ is a bounded turning circle. Moreover, when $\sigma<1$, $(\mfS^1,d_\sigma)$ has Assouad dimension $\alpha \leq \log 2/ \log(1/\sigma)<\infty$ (so, $2^{-1/\alpha}\le \sigma<1$), hence $(\mfS^1,d_\sigma)$ is doubling and thus a metric quasicircle; see \rf{L:dist}(e). In fact, each collection $\mcS_\sigma$ (with $\sigma<1$) contains a bi-Lipschitz copy of every metric quasicircle with Assouad dimension strictly less than $\log(2)/\log(1/\sigma)$. In addition, the sub-catalog $\mcS_1$ contains a bi-Lipschitz copy of every bounded turning circle.
Here is our main result.
\begin{thm*}
Let $\Gamma$ be a metric Jordan curve.
\begin{itemize}
\item[(A)] If $\,\Gamma$ is bounded turning, then $\Gamma$ is bi-Lipschitz equivalent to some curve in $\mcS_1$.
\item[(B)] If $\,\Gamma$ is a metric quasicircle with Assouad dimension $\alpha:=\dimA(\Gamma)$ and $\sigma\in(2^{-1/\alpha},1)$, then $\Gamma$ is bi-Lipschitz equivalent to a curve in $\mcS_\sigma$.
\item[(C)] A metric quasicircle is bi-Lipschitz equivalent to a \emph{planar quasicircle} if and only if it has Assouad dimension strictly less than two.
\end{itemize}
\end{thm*}
\noindent
This result is quantitative in that the bi-Lipschitz constants depend only on the given data. For example, if $\Gamma$ is $C$-bounded turning, then the bi-Lipschitz constant in \rm(A) is
$$
L=8\,C \max\{\diam(\Gamma),\diam(\Gamma)^{-1}\}.
$$
Minor modifications to our proofs reveal that the analogous results hold for bounded turning Jordan arcs and metric quasiarcs.
In addition, we explain how to recover Rohde's theorem from our result. This provides an alternative proof of Rohde's result that avoids the technical construction of a ``uniform doubling measure'' appearing in \cite[Theorem~1.2]{Rohde-qcircles-mod-bl}. In view of this, our argument somewhat simplifies the proof of Rohde's theorem.
We mention that Bonk, Heinonen, and Rohde have established a result that gives metric quasicircles as metric boundaries of certain metric disks; see \cite[Lemma~3.7]{BHR-cfml}.
\smallskip
The novel ideas in our approach include the following. We make extensive use of the fact that every bounded turning metric space is bi-Lipschitz equivalent to its associated diameter distance space; see \rf{L:BT iff BL}. In particular, this permits us to restrict attention to 1-bounded turning Jordan curves. In this setting, the metrics are characterized, up to bi-Lipschitz equivalence, by knowledge of the diameters of certain subarcs, provided we have a sufficiently plentiful collection of subarcs; see \rf{L:dist}. Finally, there is a straightforward way to build a bi-Lipschitz homeomorphism from one of our model curves onto such a metric Jordan curve; see \rf{P:subdiv_homeo} and \rf{L:d BL_to_d_D_variant2k}.
This document is organized as follows. \rf{S:Prelims} contains preliminary information including background material on Assouad dimension (in \rf{s:assouad-dimension}) and on \qsc\ homeomorphisms (in \rf{s:QS}). We prove a result about dividing an arc into subarcs of equal diameter (in \rf{s:dividing-arcs}) and (in \rf{s:shrinking}) give a useful tool for constructing homeomorphisms between Jordan curves. We construct our dyadic models in \rf{s:DDF} and prove our Theorem in \rf{S:Proof}.
\section{Preliminaries} \label{S:Prelims}
Here we set forth our (relatively standard) notation and terminology and present fundamental definitions and basic information. First we provide some background on quasisymmetric maps, doubling, and bounded turning. In \rf{s:IDD} we show that we can restrict attention to $1$-bounded turning circles. In \rf{s:dividing-arcs} we prove that one can divide an arc into subarcs of equal diameter. In \rf{s:shrinking} we establish a useful proposition for constructing homeomorphisms between Jordan arcs or curves.
\subsection{Basic Information} \label{s:basic info}
For the record, $\mfN$ denotes the set of natural numbers, i.e., the positive integers.
We view the unit circle $\mfS^1$ as the unit interval with its endpoints identified; that is, $\mfS^1=[0,1]/\{0\!\!\sim\!\!1\}=[0,1]/\!\!\sim$ where $s\sim t$ if and only if either $s=t$ or $\{s,t\}=\{0,1\}$. Then $\lam$ denotes the (normalized) arc-length metric on $\mfS^1$: for $s,t\in\mfS^1$ with say $0\le s\le t\le1$,
$$
\lam(s,t) := \min\{t-s, 1-(t-s)\} \,.
$$
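In code (our illustration), this metric is simply:
\begin{verbatim}
def lam(s, t):
    # Normalized arc-length metric on S^1 = [0,1]/{0 ~ 1}.
    d = abs(s - t) % 1.0
    return min(d, 1.0 - d)

print(lam(0.1, 0.9))   # 0.2: the shorter way passes through the identified point
\end{verbatim}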
A (closed) \emph{Jordan curve} is the \homic\ image of the circle $\mfS^1$ and a \emph{metric Jordan curve} is a Jordan curve with a metric on it. A \emph{Jordan arc} is the \homic\ image of the unit interval $[0,1]$ and a \emph{metric Jordan arc} is a Jordan arc with a metric on it. Thus Jordan curves and arcs are non-degenerate compact spaces, where non-degenerate means not a single point.
Given distinct points $x,y$ on a metric Jordan curve $\Gamma$, we write $\Gamma[x,y]$ to denote the closure of the smaller diameter component of $\Gamma\sm\{x,y\}$; when both components have the same size, we randomly pick one. We often fix an orientation on $\Gamma$, and then $[x,y]$ stands for the subarc of $\Gamma$ that joins $x$ to $y$.
We note the following easy consequence of uniform continuity.
\begin{lma} \label{L:finite} %
Let \,$\Gamma$ be a metric Jordan curve or arc. Then for each $\varepsilon>0$, there are at most finitely many non-overlapping subarcs of \,$\Gamma$ that all have diameter at least $\varepsilon$.
\end{lma}
\begin{proof}%
Suppose $\Gamma=\vphi(\mfS^1)$ for some homeomorphism $\vphi$. Let $\varepsilon>0$ be given. Choose $\del>0$ so that for each subarc $I\subset\mfS^1$ with $\diam_\lam(I)<\del$ we have $\diam(\vphi(I))<\varepsilon/2$. Pick $N\in\mfN$ with $1/N<\del$. Partition $\mfS^1$ into adjacent equal length subarcs $I_1,\dots,I_N$.
Let $A$ be a subarc of $\Gamma$ with $\diam(A)\ge\varepsilon$. Then $A$ must contain at least one of the subarcs $\vphi(I_i)$. Thus there are at most $N$ such subarcs $A$.
A similar argument applies when $\Gamma$ is an arc.
\end{proof}%
Throughout this article we employ the Polish notation $\abs{x-y}$ for the distance between points $x,y$ in a metric space.
The \emph{bounded turning condition} (BT), also called \emph{Ahlfors' three point condition}, makes sense in any connected metric space: this holds whenever points can be joined by continua whose diameters are no larger than a fixed constant times the distance between the original points. To be precise, given a constant $C\ge1$, we say that $X$ has the {\em $C$-bounded turning\/} property if each pair of points $x,y\in X$ can be joined by a continuum $\Gamma[x,y]$ satisfying (BT).
The bounded turning condition has a venerable position in quasiconformal analysis; see for example \cite{TV-qs}, \cite{G-qdisks}, \cite{NV-John}, \cite{Tukia-bt} and the references therein.
A metric Jordan curve that is bounded turning is called a \emph{bounded turning circle}, or a \emph{$C$-bounded turning circle} if we wish to indicate the bounded turning constant $C$.
\subsection{Assouad Dimension} \label{s:assouad-dimension}
A metric space is \emph{doubling} if there is a number $N$ such that every subset of diameter $D$ has a cover that consists of at most $N$ subsets each having diameter at most $D/2$. It follows that every set of diameter $D$ has a cover by (at most) $N^k$ sets each of diameter at most $D/2^k$.
The \emph{Assouad dimension} $\dimA(X)$ of a metric space $X$ is the infimum of all numbers $\alpha>0$ with the property that there exists a constant $C>0$ such that for all $\varepsilon\in(0,1]$ and all $D>0$, each subset of diameter $D$ has a cover consisting of at most $C \varepsilon^{-\alpha}$ sets each of diameter at most $\varepsilon D$.
An equivalent description can be given in terms of separated sets. A subset $S\subset X$ is $r$-\emph{separated} provided it is non-degenerate, meaning $\card(S)>1$, and for all distinct $x,y\in S$, $|x-y|\ge r$; in particular, $\diam(S)\ge r$. Then $\dimA(X)$ is the infimum of all numbers $\alpha>0$ with the property that there exists a constant $C>0$ such that for all $r>0$, each $r$-separated set $S\subset X$ has $\card(S)\le C (\diam(S)/r)^{\alpha}$.
Evidently, a metric space has finite Assouad dimension if and only if it is doubling. The Assouad dimension was introduced by Assouad in \cite{Assouad-thesis} (see also \cite{Assouad-dim}). A comprehensive overview is given in \cite{Luuk-ass-dim}. The role of doubling spaces in the general theory of quasisymmetric maps is explained in \cite{Juha-analysis}. The Assouad dimension of a space is a bi-Lipschitz invariant, and it is always at least the Hausdorff dimension.
\showinfo{
\begin{lma} \label{L:lma} %
Suppose $\dimA(X)<\beta$. Then there is an $\varepsilon_0\in(0,1]$ such that for all $\varepsilon\in(0,\varepsilon_0)$, the cardinality of any $\varepsilon d$-separated set $S\subset X$ with $d=\diam(S)$ satisfies $\card(S) < \varepsilon^{-\beta}$.
\end{lma}
\begin{proof}%
Put $\alpha:=\dimA(X)$ and $\gam:=(\alpha+\beta)/2$. Then $\gam>\dimA(X)$, so there exists a constant $C:=C(\gam)\ge2$ such that for all $d>0$, all $\varepsilon\in(0,1]$, and each $\varepsilon d$-separated $S\subset X$ with $d=\diam(S)$, $\card(S)\le C\, \varepsilon^{-\gam}$. Since $\beta-\gam=(\beta-\alpha)/2>0$, there exists $t_0=t_0(\alpha,\beta,C)>0$ such that for all $t>t_0$, $t^{\beta-\gam}>C$.
Let $\varepsilon_0:=1/t_0$. Then for all $\varepsilon\in(0,\varepsilon_0)$ and all $\varepsilon d$-separated sets $S\subset X$ with $d=\diam(S)$,
$$
\card(S) \le C\, \varepsilon^{-\gam} < \varepsilon^{\gam-\beta} \varepsilon^{-\gam} = \varepsilon^{-\beta} \,.
$$
\flag{So, $\varepsilon_0=\varepsilon_0(\alpha,\beta)$ where $\alpha:=\dimA(X)$, but this is a bit hokey... However, this kind of quantitativeness is needed to bound the BL constant in part (B).}
\end{proof}%
}
\subsection{Quasisymmetric Homeomorphisms} \label{s:QS}
A homeomorphism $X\overset{f}\to Y$ of metric spaces $X,Y$ is called a \emph{quasisymmetry} if there is a homeomorphism $\eta \colon
[0,\infty) \to [0,\infty)$ such that for all distinct $x,y,z \in X$ and $t\in [0,\infty)$,
\begin{equation*}
\frac{\abs{x-y}}{\abs{x-z}} \leq t
\quad\implies\quad
\frac{\abs{f(x)-f(y)}}{\abs{f(x)-f(z)}} \leq \eta(t) .
\end{equation*}
This notion of quasisymmetry was introduced by Tukia and \Va\ in \cite{TV-qs} where they also studied \emph{weak-quasisymmetries}. A homeomorphism $f\colon X\to Y$ is a weak-quasisymmetry if there is a constant $H\geq 1$, such that for all distinct $x,y,z\in X$,
\begin{equation*}
\frac{\abs{x-y}}{\abs{x-z}} \leq 1
\quad\implies\quad
\frac{\abs{f(x)-f(y)}}{\abs{f(x)-f(z)}} \leq H.
\end{equation*}
Clearly every quasisymmetry is a weak-quasisymmetry. Tukia and \Va\ proved that each weak-quasisymmetry from a pseudo-convex space to a doubling space is a quasisymmetry \cite[Theorem~2.15]{TV-qs}; Heinonen has a similar result for maps from a connected doubling space to a doubling space \cite[Theorem~10.19]{Juha-analysis}. In particular, this holds for maps between Euclidean spaces. However, a weak-quasisymmetry may fail to be quasisymmetric if the target space is not doubling, as illustrated by an example in the paper by Tukia and \Va.
As discussed in the Introduction, a \emph{metric quasicircle} is the quasisymmetric image of $\mfS^1$; thanks to work of Tukia and \Va, we know that these are precisely the doubling bounded turning circles. Recently the second author \cite{Meyer-weakQS} established the following characterization of bounded turning circles.
\begin{noname}
A metric Jordan curve is bounded turning if and only if it is a \emph{weak-quasisymmetric} image of the unit circle.
\end{noname}
\end{noname}
\subsection{Diameter Distance} \label{s:IDD}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\setcounter{footnote}{1}
Here we show that we can always restrict attention to $1$-bounded turning circles. More precisely, we show that any bounded turning circle is bi-Lipschitz equivalent to a $1$-bounded turning circle. The relevant tool employed is the notion of \emph{diameter distance}\footnote{This is also called \emph{inner diameter distance}.} $\dia$ that is defined on any path connected metric space $(X,\ed)$ by
$$
\dia(x,y):=\inf\{\diam(\gam) \mid \gam \;\text { a path in $X$ joining $x,y$ }\} \,.
$$
It is not hard to see that $\dia$ is a metric on $X$: symmetry is clear, positivity follows from $\dia(x,y)\ge|x-y|$, and the triangle inequality follows by concatenating paths, since $\diam(\gamma_1\cup\gamma_2)\le\diam(\gamma_1)+\diam(\gamma_2)$ whenever the paths $\gamma_1$ and $\gamma_2$ share a point.
Here are some additional properties of $\dia$.
\begin{lma} \label{L:BT iff BL} %
Let $(\Gamma,\ed)$ be a metric Jordan curve or a metric Jordan arc and let $\dia$ be the associated diameter distance.
\begin{enumerate}
\item[\rm(a)] \label{item:dia3}
The $\dia$-diameter of any subarc $A$ of $\,\Gamma$ equals its diameter with respect to the original metric on $\Gamma$; that is, $\diam_{\dia}(A) = \diam(A)$.
\item[\rm(b)] \label{item:dia4}
For all points $x,y\in\Gamma$, $\diam_{\dia}(\Gamma[x,y])=\dia(x,y)$. In particular, $(\Gamma,\dia)$ is $1$-bounded turning.
\item[\rm(c)] \label{item:dia2}
$(\Gamma,\ed)$ is $C$-bounded turning if and only if the identity map $(\Gamma,\dia)\overset{\id}\to(\Gamma,\ed)$ is $C$-bi-Lipschitz.
\end{enumerate}
\end{enumerate}
\end{lma}
\begin{proof}%
To prove (a), first observe that for all $x,y\in \Gamma$, $|x-y|\le\dia(x,y)$, so $\diam(A)\le\diam_{\dia}(A)$. Next, for all $x,y\in A$, $\dia(x,y)\le\diam(A)$, so $\diam_{\dia}(A)\le\diam(A)$.
Now (b) follows directly from (a) since
$$
\dia(x,y)=\diam(\Gamma[x,y])=\diam_{\dia}(\Gamma[x,y]) \,.
$$
It remains to establish (c). If $(\Gamma,\ed)$ is $C$-bounded turning, then for all $x,y\in\Gamma$
$$
\dia(x,y)=\diam(\Gamma[x,y])\le C\, |x-y| \le C\,\dia(x,y)
$$
so the identity map is $C$-bi-Lipschitz. Conversely, if this map is $C$-bi-Lipschitz, then for all $x,y\in\Gamma$
$$
\diam(\Gamma[x,y])=\diam_{\dia}(\Gamma[x,y])=\dia(x,y)\le C\, |x-y|
$$
and therefore $(\Gamma,\ed)$ is $C$-bounded turning.
\end{proof}%
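To make parts (a)--(c) concrete, here is a minimal numerical sketch, not part of the argument; the sample polygonal arc and the vertex-only evaluation of diameters are assumptions of the illustration. On a Jordan arc the only path joining two points is the subarc between them, so $\dia(x,y)$ is simply the diameter of that subarc, and the largest ratio $\dia(x,y)/|x-y|$ recovers the bounded turning constant, which by part (c) is the bi-Lipschitz constant of the identity map.
\begin{verbatim}
import itertools, math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# A sample polygonal Jordan arc, listed by its vertices (an assumption of this sketch).
arc = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2)]

def subarc_diam(i, j):
    # Diameter of the subarc between vertices i <= j; for a polyline the maximum
    # distance between two points is attained at vertices, so vertex pairs suffice.
    pts = arc[i:j + 1]
    return max((dist(p, q) for p, q in itertools.combinations(pts, 2)), default=0.0)

def dia(i, j):
    # Diameter distance on an arc: the subarc between the two points is the only
    # path joining them, so dia equals the diameter of that subarc (part (b)).
    i, j = min(i, j), max(i, j)
    return subarc_diam(i, j)

# Part (c): the bounded turning constant of the arc equals the bi-Lipschitz
# constant of the identity map (arc, dia) -> (arc, original metric).
C = max(dia(i, j) / dist(arc[i], arc[j])
        for i, j in itertools.combinations(range(len(arc)), 2))
print("bounded turning constant on this sample:", C)
\end{verbatim}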
We remark that in general the identity map $(X,\dia) \xrightarrow{\id}
(X,\ed)$ need not be a homeomorphism. A simple example of this is the planar \emph{comb space}
$$
X:=\lp [0,1]\times\{0\}\rp \cup \lp \{0\}\times[0,1] \rp \bigcup_{n=1}^\infty \lp \{1/n\}\times[0,1] \rp \subset\mfR^2
$$
equipped with Euclidean distance $\ed$. If $z_n:=(1/n,1)$ and $a:=(0,1)$, then $|z_n-a|\to0$ as $n\to\infty$, whereas $\dia(z_n,a)\ge 1$ for all $n$. Also, $(X,\ed)$ is compact but $(X,\dia)$ is not.
\showinfo{In fact, $\id$ will be a homeomorphism if and only if $(X,\ed)$ is locally path connected. This is in my notes, but perhaps need not be mentioned here.}
\subsection{Division of Arcs} \label{s:dividing-arcs}
Here we prove that any metric Jordan arc can be divided into any given number of subarcs each having exactly the same diameter.
The problem of finding points on a metric Jordan arc such that consecutive points are at the same distance is non-trivial. In 1930 Menger gave a proof \cite[p.\ 487]{Menger} that is short, simple, and natural, but wrong. The result was proved for arcs in Euclidean space in \cite{Alt-Beer}, and in the general case (indeed in greater generality) in \cite[Theorem 3]{Schoenberg}; see also \cite{V-dividing}.
For the case at hand, i.e., for bounded turning arcs, it suffices to find adjacent subarcs that have equal diameter. We give the following elementary proof for this problem.
\begin{prop} \label{P:equi_diam}
Let $A$ be a metric Jordan arc and $N\geq 2$ an integer. Then we can divide $A$ into $N$ subarcs of equal diameter.
\end{prop}
\begin{proof}%
We may assume that $A$ is the unit interval $[0,1]$ equipped with some metric $d$. We claim that there are points $0=s_0 < s_1 < \dots < s_{N-1} < s_N=1$ such that
\begin{equation*}
\diam[s_0,s_1] = \diam [s_1,s_2] = \dots = \diam[s_{N-1},s_N]
\end{equation*}
where $\diam$ denotes diameter with respect to the metric $d$. When $N=2$ this follows by applying the Intermediate Value Theorem to the function $[0,1]\ni s\mapsto \diam [0,s] - \diam[s,1]$.
According to \rf{L:BT iff BL}(a), we may replace $d$ by its associated diameter distance; thus we may assume from the start that for any $[s,t]\subset [0,1]$
\begin{equation} \label{eq:d_diam}
d(s,t) = \diam [s,t] \,.
\end{equation}
\smallskip
Next, we modify $d$ to get a metric $d_\varepsilon$ that is \emph{strictly increasing} in the sense that
\begin{equation} \label{eq:de_strictly_inc}
[s,t] \subsetneq [s',t'] \subset[0,1] \implies d_\varepsilon(s,t) < d_\varepsilon(s',t') \,.
\end{equation}
The crucial point here is the \emph{strict} inequality, which need not hold in general.
To this end, fix $\varepsilon>0$ and for all $s,t\in [0,1]$ set
\begin{equation*}
d_\varepsilon(s,t) := d(s,t) +\varepsilon\abs{t-s} \,.
\end{equation*}
Then from \eqref{eq:d_diam} it follows that
\begin{equation*}
\diam_\varepsilon [s,t] = \diam[s,t] + \varepsilon\abs{t-s} =
d_\varepsilon(s,t) \,,
\end{equation*}
where $\diam_\varepsilon$ denotes diameter with respect to
$d_\varepsilon$. This immediately implies \eqref{eq:de_strictly_inc}.
\smallskip
We now show that $[0,1]$ can be divided into $N$ subintervals of
equal $d_\varepsilon$-diameter.
Consider the compact set $S:=\{\mathbf{s}=(s_1,\dots, s_{N-1}) \mid 0\leq s_1 \leq
\dots \leq s_{N-1}\leq 1\}$. Set $s_0:=0, s_N:=1$. The function
$\varphi \colon S\to \mfR$ defined by
\begin{equation*}
\varphi(\mathbf{s}) := \max_{0\leq i\leq N-1} \diam_\varepsilon [s_i,s_{i+1}] -
\min_{0\leq j\leq N-1} \diam_\varepsilon [s_j,s_{j+1}]
\end{equation*}
assumes a minimum on $S$. If this minimum is zero, we are
done. Otherwise, there are adjacent intervals $[s_{i-1}, s_{i}],
[s_{i}, s_{i+1}]$ that have different $d_\varepsilon$-diameter. Using the
Intermediate Value Theorem as before, we can find $s'_i\in [s_{i-1},
s_{i+1}]$ such that $\diam_\varepsilon [s_{i-1},s'_i] =
\diam_\varepsilon[s'_i, s_{i+1}]$. Then from (\ref{eq:de_strictly_inc}) it
follows that
\begin{align*}
\min_{0\leq j < N} \diam_\varepsilon [s_j,s_{j+1}]
& <
\diam_\varepsilon [s_{i-1},s'_i] \\
& =
\diam_\varepsilon[s'_i, s_{i+1}]
<
\max_{0\leq i < N} \diam_\varepsilon [s_i,s_{i+1}].
\end{align*}
Applying this procedure to all subintervals of maximal
$d_\varepsilon$-diameter, we obtain a subdivision at which $\varphi$ takes a
value strictly smaller than its minimum, which is impossible. Thus the minimum must be zero,
and so we can subdivide $[0,1]$ into $N$ subintervals of equal
$d_\varepsilon$-diameter.
\smallskip
Consider now a sequence $\varepsilon_n\searrow 0$, as $n\to
\infty$. Let $s_1^n< \dots < s_{N-1}^n$ be the points that divide
$[0,1]$ into $N$
subintervals of equal diameter with respect to $d_{\varepsilon_n}$. Passing
to a subsequence, we may assume that for all $1\leq j < N$, the points $s^n_j$ converge
to some $s_j$ as $n\to\infty$. It follows that
for all $0\leq i,j < N$,
\begin{equation*}
\diam[s_i,s_{i+1}] = \lim_{n\to\infty} \diam_{\varepsilon_n} [s^n_i,s^n_{i+1}] =
\lim_{n\to\infty} \diam_{\varepsilon_n} [s^n_j,s^n_{j+1}] = \diam[s_j, s_{j+1}]
\end{equation*}
\end{equation*}
as desired.
\end{proof}%
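For instance, the $N=2$ step of the proof is easy to carry out numerically. The sketch below is purely illustrative; the parametrized sample arc and the sampling resolution are assumptions. It locates, by bisection, a cut point at which the two subarcs have equal diameter; iterating this equal-diameter bisection on each piece is exactly what is done in \rf{s:A} below.
\begin{verbatim}
import itertools, math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def gamma(t):
    # A sample planar Jordan arc parametrized by t in [0,1] (an assumption).
    return (math.cos(math.pi * t / 2) ** 3, math.sin(math.pi * t / 2) ** 3)

def arc_diam(a, b, samples=200):
    # Approximate diam(gamma([a,b])) by sampling the subarc.
    pts = [gamma(a + (b - a) * k / samples) for k in range(samples + 1)]
    return max((dist(p, q) for p, q in itertools.combinations(pts, 2)), default=0.0)

# N = 2: the function s -> diam[0,s] - diam[s,1] is <= 0 at s = 0 and >= 0 at s = 1
# and is nondecreasing, so a bisection finds the Intermediate Value Theorem cut point.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if arc_diam(0.0, mid) < arc_diam(mid, 1.0):
        lo = mid
    else:
        hi = mid
s = (lo + hi) / 2
print("cut point:", s, "subarc diameters:", arc_diam(0.0, s), arc_diam(s, 1.0))
\end{verbatim}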
The previous proposition is also true for metric Jordan curves $\Gamma$. In this case we are free to choose any point of $\Gamma$ to be an endpoint of one of the subarcs.
\subsection{Shrinking Subdivisions} \label{s:shrinking}
Here we present a useful tool for constructing homeomorphisms between Jordan curves; see \rf{P:subdiv_homeo}.
We begin with some terminology.
Let $\Gamma$ be a metric Jordan curve or arc.
A sequence $({\mathcal A}^n)_1^\infty$ is a \emph{shrinking subdivision} for $\Gamma$ provided:
\begin{itemize}
\item
Each ${\mathcal A}^n$ is a finite \emph{decomposition} of $\Gamma$ into compact arcs.
Thus each ${\mathcal A}^n$ is a finite set of non-overlapping non-degenerate compact
subarcs of $\Gamma$ that cover $\Gamma$. (Here non-overlapping means disjoint
interiors and non-degenerate means not a single point.)
\item
Each ${\mathcal A}^{n+1}$ is a \emph{subdivision} of ${\mathcal A}^n$; i.e., for each
arc $A$ in ${\mathcal A}^{n+1}$ there is a (unique) arc in ${\mathcal A}^n$, called the \emph{parent of $A$}, that contains $A$.
\item
The subdivisions \emph{shrink}, meaning that $\displaystyle \max_{A\in{\mathcal A}^n} \diam(A) \to 0 \text{ as } n\to \infty$.
\end{itemize}
Assume $({\mathcal A}^n)_1^\infty$ is a shrinking subdivision for $\Gamma$. We call $(A^n)_1^\infty$ a \emph{descendant sequence} if $A^1\supset A^2\supset\dots$ and $A^n\in{\mathcal A}^n$ for all $n\in\mfN$; thus each $A^n$ is the parent of $A^{n+1}$. Note that for any descendant sequence $(A^n)_1^\infty$, $\bigcap_1^\infty A^n$ is a single point. Also, for each point $x\in\Gamma$, there exists a descendant sequence $(A_x^n)_1^\infty$ with $\{x\}=\bigcap_1^\infty A_x^n$; such a descendant sequence need not be unique, but there can be at most two such sequences.
Shrinking subdivisions are useful for constructing homeomorphisms between metric Jordan curves; see \rf{s:A}, \rf{s:B}, \rf{s:C}.
\begin{prop} \label{P:subdiv_homeo} %
Let $\mfA$ and $\mfB$ both be metric Jordan curves or metric Jordan arcs. Suppose $({\mathcal A}^n)_1^\infty$ and $(\mcB^n)_1^\infty$ are shrinking subdivisions for $\mfA$ and $\mfB$ respectively. Assume these subdivisions are \emph{combinatorially equivalent}, meaning that for each $n\in\mfN$ there are bijective maps $\Phi^n\colon {\mathcal A}^n \to \mcB^n$ such that for all $A, \tilde{A}\in {\mathcal A}^n$ and $A_0\in {\mathcal A}^{n+1}$
\begin{align*}
A \cap \tilde{A} = \emptyset &\quad\iff\quad \Phi^n(A) \cap \Phi^n(\tilde{A}) =\emptyset\,, \\
A_0 \subset A &\quad\iff\quad \Phi^{n+1}(A_0) \subset \Phi^{n}(A) \,.
\end{align*}
Then the sequence $(\Phi^n)_1^\infty$ induces a \emph{homeomorphism} $\mfA\overset{\vphi} \to \mfB$ with the property that
$$
\text{for all $n\in\mfN$ and all }\; A\in{\mathcal A}^n \,,\quad \varphi(A) =\Phi^n(A) \,.
$$
\end{prop}
\begin{proof}%
Let $a\in\mfA$ and select a descendant sequence $(A^n)_1^\infty$ with $\{a\}=\bigcap_1^\infty A^n$. Setting $B^n:=\Phi^n(A^n)$ we obtain a descendant sequence $(B^n)_1^\infty$ with, say, $\{b\}:=\bigcap_1^\infty B^n$. Suppose $(\tilde{A}^n)_1^\infty$ is a second descendant sequence with $\{a\}=\bigcap_1^\infty \tilde{A}^n$. Let $\tilde{B}^n:=\Phi^n(\tilde{A}^n)$ and $\{\tilde{b}\}=\bigcap_1^\infty \tilde{B}^n$. Since $A^n\cap\tilde{A}^n\ne\emptyset$, $B^n\cap\tilde{B}^n\ne\emptyset$ and therefore
$$
\abs{b-\tilde{b}} \le \diam(B^n) + \diam(\tilde{B}^n ) \to 0 \quad\text{as $n\to\infty$} \,.
$$
Thus $\tilde{b}=b$ and so there is a well defined map $\vphi:\mfA\to\mfB$ given by setting $\vphi(a):=b$.
Two distinct points $a_1,a_2\in\mfA$ lie in disjoint arcs $A_1,A_2\in{\mathcal A}^n$, for sufficiently large $n\in\mfN$, and then $\vphi(A_1)\cap\vphi(A_2)=\emptyset$, so $\vphi(a_1)\ne\vphi(a_2)$, verifying that $\vphi$ is injective.
Given $b\in\mfB$ and a descendant sequence $(B^n)_1^\infty$ with $\{b\}=\bigcap_1^\infty B^n$, $A^n:=(\Phi^n)^{-1}(B^n)$ defines a descendant sequence $(A^n)_1^\infty$ with, say, $\{a\}:=\bigcap_1^\infty A^n$, and then $\vphi(a)=b$. Thus $\vphi$ is surjective.
Let $\varepsilon>0$ be arbitrary. Fix an $n\in\mfN$ such that $\max\{\diam(B) \mid B\in \mcB^n\}< \varepsilon/2$. Let $\delta:=\min\{\dist(A_1,A_2) \mid A_1,A_2\in{\mathcal A}^n \,;\; A_1\cap A_2=\emptyset\}$. Suppose $a_1,a_2\in\mfA$ with $|a_1-a_2|<\delta$. Pick $A_k\in{\mathcal A}^n$ with $a_k\in A_k$. The definition of $\delta$ ensures that $A_1\cap A_2\ne\emptyset$. Therefore,
$$
|\vphi(a_1)-\vphi(a_2)|\le \diam(\vphi(A_1)) + \diam(\vphi(A_2)) \le \varepsilon
$$
and so $\varphi$ is (uniformly) continuous and hence a homeomorphism.
\end{proof}%
\section{Proof of the Main Theorem} \label{S:Proof}
Here we establish parts (A), (B), (C) of the Theorem stated in the Introduction; see \rf{s:A}, \rf{s:B}, \rf{s:C} respectively. In addition, in \rf{s:C} we explain how to recover Rohde's Theorem.
\showinfo{If the subsection counters are changed to 1,2,3, then we should change our ``parts'' to (1),(2),(3).}
Recall from \rf{s:DDF} that $\mcS_\sigma$ is the collection of all
metric circles $(\mfS^1,d_\sigma)$ where the metrics $d_\sigma=d_{\Del}$
are defined as in \eqref{E:dist} and $\Del:\mcI\to(0,1]$ is any dyadic
diameter function constructed using the snowflake parameter
$\sigma\in[1/2,1]$. Recall too that for $\sigma\in [1/2,1)$ each curve in $\mcS_\sigma$ is a
metric quasicircle that has Assouad dimension at most
$\log2/\log(1/\sigma)$; see \rf{L:dist}(c,e).
\medskip
For the remainder of this section, $(\Gamma,\ed)$ is a bounded turning circle. Our three proofs share the following common theme: We define an appropriate shrinking subdivision for $\Gamma$ and then appeal to \rf{P:subdiv_homeo} and \rf{L:d BL_to_d_D_variant2k} to obtain the necessary bi-Lipschitz homeomorphisms. In each case this involves constructing a dyadic diameter function $\Del$ using some snowflake parameter.
\smallskip
To start, we fix an orientation on $\Gamma$. All subarcs inherit this orientation, and $[a,b]$ denotes the oriented subarc of $\,\Gamma$ with endpoints $a,b$. Next, an appeal to \rf{L:BT iff BL}(b,c) permits us to replace $\ed$ with its associated diameter distance, thereby obtaining a bi-Lipschitz equivalent 1-bounded turning circle; the bi-Lipschitz constant for this change of metric equals the original bounded turning constant. Thus we may, and do, assume that $(\Gamma,\ed)$ is 1-bounded turning. This means that
\begin{equation*} \label{eq:diamgam_gam}
\diam([a,b]) = \abs{a-b} \quad\text{whenever}\quad \diam([a,b])\leq \diam(\Gamma\setminus[a,b]) \,.
\end{equation*}
We also assume that $\diam(\Gamma)=1$; this involves another bi-Lipschitz change of metric with bi-Lipschitz constant $\max\{\diam(\Gamma),\diam(\Gamma)^{-1}\}$.
\subsection{Proof of (A)} \label{s:A}
We assume $(\Gamma,\ed)$ is 1-bounded turning with $\diam(\Gamma)=1$; it need not be doubling. We construct a dyadic diameter function $\Del$ on $\mcI$, using the snowflake parameter $\sigma=1$, so that $(\Gamma,\ed)$ is bi-Lipschitz equivalent to $(\mfS^1,d_\Del)$.
First, we divide $\Gamma$ into two arcs $A^1_0, A^1_1$ that both have
diameter one. Then we inductively divide each arc into two subarcs of
equal diameter. Appealing to \rf{P:equi_diam}, we divide each $A^n_i$
into two subarcs $A^{n+1}_{2i}, A^{n+1}_{2i+1}$ of equal diameter.
This defines subarcs $A^{n}_k$ for each $k\in\{0,1,\dots,2^n-1\}$ and
all $n\in \mfN$. Here we label so that the $A^n_k$ are ordered
successively along $\Gamma$ with the initial point of $A^n_0$ the same
for all $n\in \mfN$.
\showinfo{We could avoid using \rf{P:equi_diam} by selecting midpoints; a \emph{midpoint of an arc} is any point on it that is equi-distant from its endpoints.}
We claim that $\lim_{n\to\infty} \max_k \diam(A^n_k) =0$. For suppose this does not hold. Then there is an $\varepsilon>0$ such that the set $\mathbf{\Gamma}_\varepsilon:= \{A^n_k \mid \diam(A^n_k)\geq \varepsilon\}$ is infinite. Noting that each parent of an arc in $\mathbf{\Gamma}_\varepsilon$ also belongs to $\mathbf{\Gamma}_\varepsilon$, we may appeal to K\H{o}nig's Lemma to obtain a descendant sequence $\Gamma=:A^0\supset A^1\supset A^2\supset \dots$ (where $A^n=A^n_{k_n}$ is some arc in $\mathbf{\Gamma}_\varepsilon$).
By construction $A^n$ is divided into two subarcs $A^{n+1}$ and $B^{n+1}$ of equal diameter, so $\diam(B^{n+1})\geq \varepsilon$. Then $\{B^1,B^2,\dots\}$ is an infinite collection of non-overlapping subarcs of $\Gamma$ each with diameter at least $\varepsilon$. This contradiction to \rf{L:finite} implies that our claim must hold.
\smallskip
By setting ${\mathcal A}^n:=\{A^n_k \mid k\in\{0,1,\dots,2^n-1\} \}$ (for each $n\in \mfN$) we obtain a shrinking subdivision $({\mathcal A}^n)_1^\infty$ for $\Gamma$; see \rf{s:shrinking}. In fact, $(\mcI^n)_1^\infty$ and $({\mathcal A}^n)_1^\infty$ are combinatorially equivalent shrinking subdivisions, and thus by \rf{P:subdiv_homeo} there is an induced homeomorphism $\varphi\colon \mfS^1 \to \Gamma$ with $\vphi(I_k^n)=A_k^n$ for all $n\in\mfN$ and all $k\in\{0,1,\dots,2^n-1\}$.
\smallskip
It remains to construct a dyadic diameter function $\Delta$ using the snowflake parameter $\sigma=1$ and so that $\Del$ also satisfies the following: for all $n\in\mfN$,
\begin{equation} \label{eq:DeltadiamGam}
\text{for all $k\in\{0,1,\dots,2^n-1\}$}\;, \quad
\frac{1}{2}\, \Delta(I^n_k) \leq \diam(A^n_k) \leq 2 \, \Delta(I^n_k) \,.
\end{equation}
Having accomplished this task, we can appeal to \rf{L:d BL_to_d_D_variant2k} (with $C=1$, $m=1$, $K=2$) to assert that $\varphi:(\mfS^1,d_\Del)\to(\Gamma,\ed)$ is $8$-bi-Lipschitz.
\smallskip
We start by setting $\Del(\mfS^1)=\Delta(I^1_0)=\Delta(I^1_1) :=1$ and note that (\ref{eq:DeltadiamGam}) holds for $n=1$. Now assume that for some $n\in\mfN$ and all $k\in\{0,1,\dots,2^n-1\}$, $\Delta(I^n_k)$ has been defined so that (\ref{eq:DeltadiamGam}) holds. Consider a dyadic subarc $I^n=I^n_k$, its two children $I^{n+1}, \tilde{I}^{n+1}\subset I^n$, and its corresponding arc $A^n=A^n_k=\varphi(I^n_k)\subset\Gamma$. We note that by construction each child $A^{n+1}$ of $A^n$ satisfies
\begin{gather*}
\half\,\diam(A^n)\le\diam(A^{n+1})\le\diam(A^n) \,. \\
\intertext{\indent We examine two cases. If $\Delta(I^n) \leq \diam(A^n)$, then we define}
\Delta(I^{n+1}) =\Del(\tilde{I}^{n+1}) := \Delta(I^n)\,. \\
\intertext{We see that \eqref{eq:DeltadiamGam} holds (for $n+1$), since}
\begin{align*}
\frac{1}{2}\Delta(I^{n+1}) &= \frac{1}{2}\Delta(I^n) \leq \frac{1}{2}\diam(A^n) \leq \diam(A^{n+1}) \\
&\leq \diam(A^n) \leq 2\Delta(I^n) = 2 \Delta(I^{n+1})\,.
\end{align*}
\end{gather*}
Here \eqref{eq:DeltadiamGam} was used for $n$ in the last inequality.
\smallskip
If $\Delta(I^n) > \diam(A^n)$, then we define
\begin{gather*}
\Delta(I^{n+1}) =\Del(\tilde{I}^{n+1}) := \frac{1}{2}\Delta(I^n) \,. \\
\intertext{Again one checks that \eqref{eq:DeltadiamGam} holds (for $n+1$), since}
\begin{align*}
\frac{1}{2}\Delta(I^{n+1}) &= \frac{1}{4}\Delta(I^n) \leq \frac{1}{2}\diam(A^n) \leq \diam(A^{n+1}) \\
&\leq \diam(A^n) < \Delta(I^n) = 2 \Delta(I^{n+1}) \,.
\end{align*}
\end{gather*}
Here \eqref{eq:DeltadiamGam} was used for $n$ in the first inequality.
\qed
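The two-case recursion that produced $\Delta$ is simple enough to test mechanically. The following sketch is illustrative only; the randomly generated child diameters merely mimic the property $\frac12\diam(A^n)\le\diam(A^{n+1})\le\diam(A^n)$ coming from equal-diameter bisection. It builds $\Delta$ exactly as in the proof and checks the comparability \eqref{eq:DeltadiamGam} on every dyadic arc of the sample.
\begin{verbatim}
import random

N = 10                                   # number of generations to test
diam = {(1, 0): 1.0, (1, 1): 1.0}        # diam(A^1_0) = diam(A^1_1) = 1
for n in range(1, N):
    for k in range(2 ** n):
        d = diam[(n, k)]
        # the two children of A^n_k share a diameter lying in [d/2, d]
        child = random.uniform(d / 2, d)
        diam[(n + 1, 2 * k)] = diam[(n + 1, 2 * k + 1)] = child

Delta = {(1, 0): 1.0, (1, 1): 1.0}       # Delta(I^1_0) = Delta(I^1_1) = 1
for n in range(1, N):
    for k in range(2 ** n):
        D, d = Delta[(n, k)], diam[(n, k)]
        childD = D if D <= d else D / 2  # the two cases in the proof
        Delta[(n + 1, 2 * k)] = Delta[(n + 1, 2 * k + 1)] = childD

assert all(Delta[key] / 2 <= diam[key] <= 2 * Delta[key] for key in diam)
print("(eq:DeltadiamGam) holds for all", len(diam), "arcs in this sample")
\end{verbatim}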
\subsection{Proof of (B)} \label{s:B}
We assume $(\Gamma,\ed)$ is $1$-bounded turning with $\diam(\Gamma)=1$ and doubling with finite Assouad dimension $\alpha$. Fix any $\sigma\in(2^{-1/\alpha},1)$ (equivalently, $\alpha< \log2/\log(1/\sigma)$). We construct a dyadic diameter function $\Del$ on $\mcI$, using the snowflake parameter $\sigma$, so that $(\Gamma,\ed)$ is bi-Lipschitz equivalent to $(\mfS^1,d_\Del)$. In contrast to our above proof of (A), here we do ``$m$ steps at the same time''; i.e., each arc will be divided into $2^m$ subarcs of the same diameter. That is, we will in fact construct a $2^m$-adic diameter function; see \rf{s:2DF}.
\smallskip
Put $\beta:=\log2/\log(1/\sigma)$, so $\sigma=2^{-1/\beta}$. Then since $\beta>\alpha=\dimA(\Gamma)$, there exists an $\varepsilon_0\in(0,1]$ such that for all $\varepsilon\in(0,\varepsilon_0)$, the cardinality of any $\varepsilon D$-separated set $S\subset\Gamma$ with $D=\diam(S)$ satisfies
\begin{gather}
\notag
\card(S) < \varepsilon^{-\beta} \,. \\
\intertext{Since $\sigma=2^{-1/\beta}<1$, we may select an $m\in\mfN$
so that}
\label{eq:def_m}
\tau:=\sigma^m= \lp 2^{-1/\beta} \rp^m = \lp 2^m\rp ^{-1/\beta} < \varepsilon_0 \,.
\end{gather}
In particular, if $S$ is a $\tau D$-separated subset of $\Gamma$, with $D=\diam(S)$, then $\card(S)< \tau^{-\beta}=2^m=: M$.
It now follows that whenever we divide an arc $A$ of $\Gamma$ into $M$ subarcs $A_k$ all with equal diameters, then
\begin{equation} \label{E:diamG_eps}
M^{-1} \diam(A)\leq \diam A_k \leq \tau \, \diam(A) \,.
\end{equation}
The left-hand inequality follows directly from the triangle inequality, whereas the right-hand inequality holds because there are at least $M$ distinct endpoints of the subarcs $A_k$ (which are separated by $\diam A_k$) and so, by the above, these endpoints cannot be $\tau D$-separated with $D:=\diam(A)$.
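The bookkeeping behind the choice of $m$ and the resulting constants is easy to illustrate. In the following sketch the values of $\alpha$, $\sigma$, and in particular the threshold $\varepsilon_0$ (which comes from \rf{L:lma} and is not computed here) are hypothetical sample values.
\begin{verbatim}
import math

alpha, eps0 = 1.5, 0.1                     # hypothetical sample values
sigma = 0.70                               # any sigma in (2**(-1/alpha), 1)
assert 2 ** (-1 / alpha) < sigma < 1

beta = math.log(2) / math.log(1 / sigma)   # so sigma = 2**(-1/beta) and beta > alpha
m = 1
while (2 ** m) ** (-1 / beta) >= eps0:     # choose m with tau = sigma^m < eps0, cf. (eq:def_m)
    m += 1
tau, M = sigma ** m, 2 ** m
K = tau * M
L = 2 * M * K                              # the bi-Lipschitz constant L = 2*tau*M^2
print("m =", m, " M =", M, " tau =", round(tau, 4),
      " K =", round(K, 3), " L =", round(L, 2))
\end{verbatim}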
\smallskip
We use \rf{P:equi_diam} to divide $\Gamma$ into $M$ arcs $A^1_0, A^1_1,\dots, A^1_{M-1}$ all of equal diameter. We iterate this procedure: assuming that arcs $A^n_k$ (with $k\in\{0,1,\dots,M^n-1\}$) have been so constructed, each arc $A^n_k$ is subdivided into $M$ subarcs $A^{n+1}_{kM +j}$ (with $
j\in\{0,1,\dots,M-1\}$) all of equal diameter; the subarcs $A_{kM+j}^{n+1}$ are labeled successively along $A_k^n$. To avoid possible confusion, we note that all subarcs of the same arc $A^n_k$ have the same diameters, however, subarcs of different arcs $A^n_i, A^n_j$ do not necessarily have the same diameters.
\smallskip
Let $\mcJ=\bigcup_{n=0}^\infty \mcJ^{n}$ be the family of all $M$-adic subarcs of $\mfS^1$; here $M=2^m$ and $\mcJ^{n} = \mcI^{mn}$ consists of the $M^n=2^{mn}$ subarcs of the form $J_k^n:=[k/2^{mn}, (k+1)/2^{mn}]\in \mcI^{mn}$ with $k\in\{0,1,\dots,2^{mn}-1\}$. See the last paragraph of \rf{s:DS}.
Setting ${\mathcal A}^n:=\{A^n_k \mid k\in\{0,1,\dots,M^n-1\}\}$ (for each $n\in \mfN$) we obtain a shrinking subdivision $({\mathcal A}^n)_1^\infty$ for $\Gamma$; see \rf{s:shrinking}. In fact, $(\mcJ^n)_1^\infty$ and $({\mathcal A}^n)_1^\infty$ are combinatorially equivalent shrinking subdivisions, and thus by \rf{P:subdiv_homeo} there is an induced homeomorphism $\varphi\colon \mfS^1 \to \Gamma$ with $\vphi(J_k^n)=A_k^n$ for all $n\in\mfN$ and all $k\in\{0,1,\dots,M^n-1\}$.
\smallskip
Now we construct an $M$-adic diameter function $\mcJ\overset{\Del}\to(0,1]$ using the snowflake parameter $\tau$ and so that $\Del$ also satisfies the following: for all $n\in\mfN$ and for all $k\in\{0,1,\dots,M^n-1\}$,
\begin{equation} \label{E:DeltadiamGam2}
K^{-1} \Delta(J^n_k) \leq \diam(A^n_k) \leq K\, \Delta(J^n_k) \,,
\end{equation}
where $K:=\tau \,M$. Once this task is completed, we can appeal to \rf{L:d BL_to_d_D_variant2k} (with $C=1$ and $2^m=M$) to assert that $\varphi:(\mfS^1,d_\Del)\to(\Gamma,\ed)$ is $L$-bi-Lipschitz with $L=2\,M\,K=2\,\tau\,M^2$.
To start, we set $\Del(\mfS^1):=1$ and then for each $k\in\{0,1,\dots,M-1\}$, we put $\Delta(J^1_k):=\tau$. To check \eqref{E:DeltadiamGam2} for $n=1$ we use \eqref{E:diamG_eps} and the fact that $\diam(\Gamma)=1$ to see that
$$
\frac1{K} \, \Del(J_k^1)= \frac{\tau}{K} = \frac1{M} \le \diam(A_k^1) \le \tau = \Del(J_k^1) \,.
$$
Assume that for some $n\in\mfN$ and all $k\in\{0,1,\dots,M^n-1\}$, $\Delta(J^n_k)$ has been defined so that \eqref{E:DeltadiamGam2} holds. Fix any $M$-adic subarc $J^n=J^n_k$ and let $A^n=A^n_k=\varphi(J^n_k)$ be the corresponding subarc of $\Gamma$. We consider two cases.
First, suppose $\Delta(J^n) \leq \diam(A^n)$. Then we define the diameter of each child $J^{n+1}$ of $J^n$ by
$$
\Delta(J^{n+1}):= \tau \, \Delta(J^n)\,. \\
$$
To confirm that \eqref{E:DeltadiamGam2} is still satisfied for all these children, we observe that
\begin{align*}
\frac{1}{K} \, \Del(J^{n+1}) &= \frac{1}{M} \, \Delta(J^n) \le \frac{1}{M} \, \diam(A^n) \le \diam(A^{n+1}) \\
&\le \tau \, \diam(A^n) \le \tau \,K\, \Del(J^n) = K \,\Del(J^{n+1}) \,.
\end{align*}
Here the initial inequality holds by supposition, the next two inequalities follow from \eqref{E:diamG_eps}, and the induction hypothesis gives the last inequality.
Next, suppose $\Delta(J^n)>\diam(A^n)$. Now we define the diameter of each child $J^{n+1}$ of $J^n$ by
$$
\Delta(J^{n+1}):= \frac1M \, \Delta(J^n) = \frac1{2^m} \, \Del(J^n)\,.
$$
To check that \eqref{E:DeltadiamGam2} holds for all these children, we again observe that
\begin{align*}
\frac{1}{K} \, \Del(J^{n+1}) &= \frac{1}{K\,M} \, \Delta(J^n) \le \frac{1}{M} \, \diam(A^n) \le \diam(A^{n+1}) \\
&\le \tau \, \diam(A^n) \le \tau \, \Del(J^n) = K \,\Del(J^{n+1}) \,.
\end{align*}
Here the initial inequality holds by the induction hypothesis, the next two inequalities follow from \eqref{E:diamG_eps}, and our supposition gives the last inequality.
This finishes the construction of an $M$-adic diameter function $\Delta$ for which \eqref{E:DeltadiamGam2} holds for all $n\in\mfN$ and all $k\in\{0,1,\dots,M^n-1\}$.
\smallskip
Having defined an appropriate $M$-adic diameter function $\Del$ on $\mcJ$, we use \rf{L:d BL_to_d_D_variant2k} to deduce that $\varphi:(\mfS^1,d_\tau)\to(\Gamma,\ed)$ is $L$-bi-Lipschitz, where $d_\tau:=d_{\Del}$. The $M$-adic diameter function $\Del$, constructed using the snowflake parameter $\tau$, can be extended to a dyadic diameter function $\Del$ that is constructed with the snowflake parameter $\sigma=\tau^{1/m}$. See the discussion in \rf{s:2DF}. Let $d_\sigma$ be the metric associated with the dyadic diameter function $\Del$. According to \rf{L:DDF to 2^mDF}, the identity map $\id:(\mfS^1,d_\sigma)\to(\mfS^1,d_\tau)$ is $M$-bi-Lipschitz. It now follows that $(\Gamma,\ed)$ is $(ML)$-bi-Lipschitz equivalent to the metric quasicircle $(\mfS^1,d_\sigma)\in\mcS_\sigma$.
\qed
\begin{rmk}
We can easily adjust the previous proof to obtain a model circle constructed from a $4$-adic diameter function. To do so, we choose $m$ in
\eqref{eq:def_m} to be even; say, $m=2k$, so $M=4^k$. Then we extend the $M$-adic diameter function $\mcJ\to (0,1]$ to a $4$-adic diameter function with snowflake parameter $p:=\tau^{1/k}= \sigma^2\in (4^{-1/\alpha},1)$ as described in Remark~\ref{rmk:4m4adic}. This yields a metric $d$, constructed via the $4$-adic diameter function, such that the original metric quasicircle $(\Gamma,\ed)$ is bi-Lipschitz equivalent to $(\mfS^1,d)$. Thus the following variant of (B) holds.
\end{rmk}
\begin{cor}[(B$'$)] \label{cor:thmB2}
Let $(\Gamma,\abs{\cdot})$ be a metric quasicircle with finite Assouad dimension $\alpha$. Then for each $p\in (4^{-1/\alpha},1)$ there is a $4$-adic diameter function $\Delta$, constructed with snowflake parameter $p$, and an associated metric $d=d_\Delta$, such that $(\mfS^1,d)$ is bi-Lipschitz equivalent to $(\Gamma,\abs{\cdot})$.
\end{cor}
\noindent
Note that $1\leq \alpha<2$ is equivalent to $1/4 \leq 4^{-1/\alpha}<1/2$, so in this case we can choose $p\in(4^{-1/\alpha},1/2)\subset(1/4,1/2)$.
\showinfo{Note that $ML=2\tau M^3=2(8\sigma)^m\le2\varepsilon_0 8^m$. Also, $8\sigma>4$. Finally, if we pick $m$ so that
$$
\sigma^m \le \varepsilon_0 < \sigma^{m-1} \quad\text{then}\quad \frac{\log(\varepsilon_0)}{\log(\sigma)} \le m < 1 + \frac{\log(\varepsilon_0)}{\log(\sigma)} \,.
$$
We can write $\frac{\log(\varepsilon_0)}{\log(\sigma)}$ as $\beta \log_2(1/\varepsilon_0)$, and then I guess the BL constant is not more than $16\, \varepsilon_0^{1-3\beta}$, for whatever that is worth! \\
The real problem here is that I do not see how to get an UPPER bound on $\varepsilon_0$. We seem to lose quantitative control when we pick $\varepsilon_0$. This is related to the proof of \rf{L:lma}.}
\subsection{Planar Quasicircles} \label{s:C}
In \ref{pf of (C)} below we establish part (C) of our Theorem. Then we explain how to recover Rohde's theorem. We begin with a precise description of the construction of Rohde snowflakes that includes some useful geometric estimates.
\smallskip
Everywhere throughout this subsection $\mcJ$ denotes the family of 4-adic subarcs of the circle $\mfS^1$.
\smallskip
Each \emph{Rohde snowflake} $R$, constructed using a parameter $p\in[1/4,1/2)$, is the Hausdorff limit of a sequence $(R^n)_1^\infty$ of polygons where $R^{n+1}$ is obtained from $R^n$ by using the replacement choices illustrated in \rf{f:Rohde_snow}. Both the snowflake parameter $p$ and the polygonal arc $A_p$ are kept fixed throughout the construction.
We start with the unit square $R^1=E^1_0\cup E^1_1\cup E^1_2 \cup E^1_3$, so each $E^1_k$ is a Euclidean line segment of diameter one and these are labeled successively along $R^1$. Suppose we have constructed $R^n$ as a union of $4^n$ Euclidean line segments $E^n_k$, $k\in\{0,1,\dots,4^n-1\}$ (labeled successively along $R^n$). Then for each of the edges $E^n_k$ of $R^n$ we have two choices: either we replace $E^n_k$ with the four line segments obtained by dividing $E^n_k$ into four segments of equal diameter, or we replace $E^n_k$ by a similarity copy of the polygonal arc $A_p$ pictured at the top right of \rf{f:Rohde_snow}. In both cases $E^n_k$ is replaced by four new line segments $E^{n+1}_{4k+j}$ (with $j\in\{0,1,2,3\}$) that we call the \emph{children} of $E^n_k$, so $E^n_k$ is the \emph{parent} of each of $E^{n+1}_{4k},E^{n+1}_{4k+1},E^{n+1}_{4k+2},E^{n+1}_{4k+3}$. Each of these children has Euclidean diameter equal to either $(1/4)\diam(E^n_k)$ in the first case or $p\,\diam(E^n_k)$ in the second case. The second type of replacement is done so that the ``tip'' of the replacement arc points into the exterior of $R^n$. Then $R^{n+1}$ is the union of the $4^{n+1}$ arcs $E^{n+1}_i$ (with $i\in\{0,1,\dots,4^{n+1}-1\}$).
We call the line segments $E^n_k$ the \emph{$4$-adic edges of $R^n$}. We note that different replacement rules can be used for different edges $E^n_i$, $E^n_j$ of $R^n$. Thus, for example, one edge could have diameter $1/4^n$ while an adjacent edge might have diameter $p^n$ (which could be much larger). In any event, for each $n\in \mfN$ there is a natural homeomorphism $\vphi_n:\mfS^1\to R^n$ that is given by mapping each 4-adic subarc $J^n_k\subset\mfS^1$ to the 4-adic edge $E^n_k\subset R^n$. We say that the edge $E^n_k$ \emph{corresponds to} the subarc $J^n_k$.
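The replacement geometry is easy to reproduce. The sketch below is illustrative only: the parameter value, the random replacement choices, and the convention that the exterior of the counterclockwise polygon lies to the right of each directed edge are assumptions of the sketch; in the proof of (C) the replacement choices are dictated by a $4$-adic diameter function. It generates the vertex list of $R^{n+1}$ from $R^n$ using the arc $A_p$, whose tip sits at height $(p-1/4)^{1/2}$ above a base of diameter one.
\begin{verbatim}
import math, random

p = 0.3                             # snowflake parameter in [1/4, 1/2)
h = math.sqrt(p - 0.25)             # height of A_p over a base of diameter 1

def children(a, b, use_Ap):
    # Vertices of the four edges replacing the directed edge a -> b.
    ax, ay = a; bx, by = b
    dx, dy = bx - ax, by - ay
    if not use_Ap:                  # divide into four equal segments
        local = [(0, 0), (0.25, 0), (0.5, 0), (0.75, 0), (1, 0)]
    else:                           # similarity copy of A_p, tip pushed to the right
        local = [(0, 0), (p, 0), (0.5, h), (1 - p, 0), (1, 0)]
    nx, ny = dy, -dx                # right-hand normal of a -> b (assumed exterior side)
    return [(ax + dx * u + nx * v, ay + dy * u + ny * v) for u, v in local]

poly = [(0, 0), (1, 0), (1, 1), (0, 1)]      # R^1: unit square, counterclockwise
for n in range(4):                           # four rounds of replacements
    new = []
    for a, b in zip(poly, poly[1:] + poly[:1]):
        new.extend(children(a, b, random.random() < 0.5)[:-1])
    poly = new
print(len(poly), "edges in R^5")
\end{verbatim}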
\smallskip
Set $\theta=\theta(p):=2\arcsin((2p)^{-1}-1)$; this is the interior angle at the ``tip'' of the arc $A_p$ in \rf{f:Rohde_snow}, but see also the left-most picture in \rf{f:Rohde2}. Also, notice that if $A_p$ is normalized to have diameter one, then its height is $(p-1/4)^{1/2}$.
\begin{figure}[b] %
\begin{overpic}[width=12cm,
tics=10]{Rohde_snow2.eps}
%
\put(9,24){$\scriptstyle{T(E)}$}
\put(20,8){$\scriptstyle{E}$}
\put(20.5,19){$\scriptstyle{\theta}$}
%
\put(55,31){$\scriptstyle{T(E_0)}$}
\put(65,39){$\scriptstyle{T(E_1)}$}
\put(84.5,39){$\scriptstyle{T(E_2)}$}
\put(95,31){$\scriptstyle{T(E_3)}$}
\put(69.7,31){$\scriptstyle{\theta}$}
\put(77.5,35){$\scriptstyle{\theta}$}
\put(85.7,30.6){$\scriptstyle{\theta}$}
%
\put(55,5){$\scriptstyle{T(E_0)}$}
\put(70,6){$\scriptstyle{T(E_1)}$}
\put(81,6){$\scriptstyle{T(E_2)}$}
\put(95,5){$\scriptstyle{T(E_3)}$}
\end{overpic}
\caption{Triangles enclosing an arc.}
\label{f:Rohde2}
\end{figure} %
Let $E$ be one of the 4-adic edges of some $R^n$. We write $T(E)=T_p(E)$ for the closed isosceles triangle with base $E$ and height $\diam(E)(p-1/4)^{1/2}$; we orient $T(E)$ so that it ``points'' into the exterior of the polygon $R^n$. Thus if $E$ were to be replaced by a similarity copy of the arc $A_p$, then $T(E)$ would be the closed convex hull of this affine copy of $A_p$ (see the left-most picture in \rf{f:Rohde2}) and the third vertex of $T(E)$ would correspond to the ``tip'' of this image of $A_p$. We call this third vertex the ``tip'' of $T(E)$.
Next, let $E_0, E_1, E_2, E_3$ be the four children of $E$.
Not only are these children contained in $T(E)$, but elementary geometric considerations reveal that the associated triangles $T(E_0),T(E_1),T(E_2),T(E_3)$ are also contained in $T(E)$. See the two right-most pictures in \rf{f:Rohde2}. A standard argument now reveals that the sequence $(\vphi_n)_1^\infty$ is uniformly Cauchy, and hence it converges to a continuous surjection $\vphi:\mfS^1\to R$ and the planar curve $R$ is the Hausdorff limit of the sequence $(R^n)_1^\infty$.
\showinfo{Also, the Hausdorff distance between $R^m$ and $R^n$ is at most $p^{\min\{m,n\}}$.}
Consider a subcurve $A:=\varphi(J)$ of $R$ where $J$ is some 4-adic subarc of $\mfS^1$. Let $E$ be the 4-adic edge that corresponds to $J$. We see that $A$ is ``built on top of $E$'' in the sense that the replacement choices used to construct $R$, applied to the edge $E$, produce $A$. We write $A:=R(E)$ and call $A$ the \emph{$4$-adic subarc of $R$ corresponding to $E$ (and to $J$)}. (This abuse of notation will be justified below---see \eqref{E:last eqn}---where we prove that $\vphi$ is injective, hence a homeomorphism, so $R$ is a Jordan curve and $A$ is an arc.) By induction, we deduce that $A$ also lies in $T(E)$ and has the same endpoints as $E$, therefore
$$
\diam(A)=\diam(T(E))=\diam(E)\,.
$$
\smallskip
Looking again at the right-most pictures in \rf{f:Rohde2}, and appealing to elementary geometric considerations, we see that the angle between any pair of consecutive triangles $T(E_0), \dots, T(E_3)$ is at least $\theta$. It is also elementary to check that
\begin{equation}\label{E:dist(T0,T2)}\begin{split}
\dist(T(E_0),T(E_3)) &\ge \dist(T(E_1),T(E_3)) \\ &=\dist(T(E_0),T(E_2)) \ge c(p) \, \diam(E)
\end{split}
\end{equation}
where $c(p):=\half-p$.
\smallskip
As final preparation for our proof of part (C), suppose $\hat{I},\hat{J}$ are two adjacent $4$-adic subarcs of $\mfS^1$, say with $\hat{I}\cap\hat{J}=\{\xi\}$. (These arcs might be from different generations; i.e., possibly $\hat{I}=J^n_k$ and $\hat{J}=J^m_\ell$ where $n\ne m$.)
Let $\hat{E},\hat{F}$ be the corresponding $4$-adic edges, so $\hat{E}\cap\hat{F}=\{a\}$ where $a:= \varphi(\xi)$.
It follows from the above remarks that the angle between the two triangles $T(\hat{E})$ and $T(\hat{F})$, at their common vertex $a$, is at least $\theta$. See \rf{f:Rohde3}. More precisely, let $S$ be the closed sector, with vertex at $a$, that contains $T(\hat{E})$ and is such that $\theta$ is the angle between each edge of $\bd S$ and the nearest edge of $T(\hat{E})$. Then $T(\hat{F})$ lies in the closure of $\mfR^2\sm S$.
Now suppose there is a child $E$ of $\hat{E}$ that does not contain $a$. Then $T(E)$ is compactly contained in the sector $S$ and in fact
\begin{equation} \label{E:crucial}
\dist(T(E), T(\hat{F})) \geq \dist(T(E), \partial S) \geq c(p) \diam(E)
\end{equation}
where again $c(p):=\half-p$. This follows from the estimates
$$
\dist(T(E), \partial S) \ge \dist(b,\bd S) \geq c(p) \diam(E)
$$
where $b$ is the ``tip'' of the appropriate $T(E_0)$ as pictured in \rf{f:Rohde3}.
\begin{figure} %
\begin{overpic}[width=12cm,
tics=10]{Rohde_snow3.eps}
\put(46.3,8.5){$\scriptstyle{\theta}$} \put(55,3){$\scriptstyle{\theta}$}
\put(93,34){$S$} \put(80,5){$\bd S$} \put(39,30){$\bd S$}
\put(55,34){${T(\hat{E})}$}
\put(23,29){${T(\hat{F})}$}
\put(58,7){$\scriptstyle{E_{\tiny{0}}}$}
\put(46,23){$\scriptstyle{T({E})}$}
\put(62.4,13){\circle*{1.7}}
\put(50.6,12.6){\circle*{1.7}} \put(47.4,11.8){$\scriptstyle{b}$}
\put(48,0.9){\circle*{1.7}}
\put(46.8,-2.2){$\scriptstyle{a}$}
\end{overpic}
\caption{Separating points.}
\label{f:Rohde3}
\end{figure} %
Finally, fix points $s,t\in\hat{I}\cup\hat{J}$. Suppose there is a child $I$ of $\hat{I}$ whose interior, $\operatorname{int}(I)$, \emph{separates} $s,t$ in $\hat{I}\cup\hat{J}$ (meaning that $s,t$ lie in different components of $(\hat{I}\cup \hat{J}) \setminus \operatorname{int}(I)$). We claim that
\begin{equation} \label{E:last eqn}
\abs{\vphi(s)-\vphi(t)} \geq c(p)\diam(\vphi(I)) \,.
\end{equation}
This follows from \eqref{E:dist(T0,T2)} if both $\vphi(s),\vphi(t)$ lie in $T(\hat{E})$; otherwise it follows from \eqref{E:crucial}. Also, see \rf{f:Rohde3}.
Notice that injectivity of $\vphi$ follows from \eqref{E:last eqn}.
\smallskip
Having established the above terminology and geometric estimates, we now turn to the following.
\begin{pf}{Proof of {\rm(C)}} \label{pf of (C)}%
We use the notation and terminology introduced above.
\smallskip
It is well-known that planar quasicircles have Assouad dimension strictly less than two; see \cite[Lemma~4.1]{Rohde-qcircles-mod-bl} or \cite[Theorem~5.2]{Luuk-ass-dim}. Furthermore, Assouad dimension is unchanged by bi-Lipschitz maps. Thus every metric quasicircle that is bi-Lipschitz equivalent to a planar quasicircle has Assouad dimension strictly less than two.
\smallskip
Let $(\Gamma,\ed)$ be a metric quasicircle with Assouad dimension $\alpha\in[1,2)$.
We prove that $(\Gamma,\ed)$ is bi-Lipschitz equivalent to a planar quasicircle. In fact, we show that it is bi-Lipschitz equivalent to a Rohde snowflake.
Fix $p\in (4^{-1/\alpha},1/2)\subset (1/4,1/2)$. According to part (B) of our Theorem---more precisely, the version (B$'$) stated as Corollary~\ref{cor:thmB2}---there is a $4$-adic diameter function $\Delta$ with snowflake parameter $p$ and associated metric $d_p$ such that $(\Gamma,\abs{\cdot})$ is bi-Lipschitz equivalent to $(\mfS^1,d_p)$.
We use the 4-adic diameter function $\Del$ to construct a Rohde snowflake $R$ with snowflake parameter $p$, and we prove that $(\mfS^1,d_p)$ is bi-Lipschitz equivalent to $R$. Hence $(\Gamma,\ed)$ is bi-Lipschitz equivalent to a planar quasicircle.
Recall that $\mcJ$ is the set of all $4$-adic subarcs of $\mfS^1$; similarly, $\mcJ^n:= \mcI^{2n}$.
\medskip
It is convenient to scale the metric $d_p$---so also the diameter function $\Del$---by the factor $1/p$. This bi-Lipschitz change in our metric means that for each $J^1_k\in\mcJ^1$, $\Del(J^1_k)=1$. See the paragraph immediately following \eqref{E:DeltadiamGam2}.
The desired Rohde snowflake $R$ is the limit of a sequence $(R^n)_1^\infty$ of polygons, and we must describe how to replace each edge of $R^n$ to obtain $R^{n+1}$. Of course, we start with the unit square $R^1:=E^1_0\cup E^1_1 \cup E^1_2 \cup E^1_3$, so each edge $E^1_k$ satisfies $\Del(J^1_k)=1=\diam(E^1_k)$. Now suppose that we have constructed polygons $R^1,R^2, \dots, R^n:=E^n_0\cup\dots\cup E^n_{4^n-1}$ so that
$$
\text{for each }\; k\in\{0,1,\dots,4^n-1\} \,, \quad \Del(J^n_k)=\diam(E^n_k) \,.
$$
Fix any $J=J^n_k$ and consider its four children $J_0,J_1,J_2,J_3$. Since $\Del$ is a 4-adic diameter function (constructed with the snowflake parameter $p$),
\begin{align*}
\quad\text{either}\quad & \Del(J_0)=\Del(J_1)=\Del(J_2)=\Del(J_3):=\frac14\,\Del(J) \\
\quad\text{or}\quad & \Del(J_0)=\Del(J_1)=\Del(J_2)=\Del(J_3):=p\,\Del(J) \,.
\end{align*}
In the first case, we replace the edge $E^n_k$ with the four segments $E^{n+1}_{4k}$, $E^{n+1}_{4k+1}$, $E^{n+1}_{4k+2}$, $E^{n+1}_{4k+3}$ obtained by dividing $E^n_k$ into four line segments of equal diameter. Thus here $\diam(E^{n+1}_j)=(1/4)\diam(E^n_k)$. In the second case, we replace $E^n_k$ by a similarity copy of the polygonal arc $A_p$ pictured at the top right of \rf{f:Rohde_snow}; again $E^n_k$ is replaced by four new segments $E^{n+1}_j$, but now each of these has diameter $\diam(E^{n+1}_j)=p\,\diam(E^n_k)$. The second type of replacement is done so that the ``tip'' of the replacement arc points into the exterior of $R^n$.
It is now straightforward to check that
$$
\text{for each }\; k\in\{0,1,\dots,4^{n+1}-1\} \,, \quad \Del(J^{n+1}_k)=\diam(E^{n+1}_k) \,.
$$
In particular, we can iterate this construction and thus obtain a sequence $(R^n)_1^\infty$ of planar polygons. As explained above, the sequence $(R^n)_1^\infty$ converges, in the Hausdorff metric, to a Rohde snowflake $R\,$ that has been constructed using the snowflake parameter $p$.
\smallskip
Let $\mfS^1\overset{\vphi}\to R$ be the natural homeomorphism induced by the correspondences between the 4-adic subarcs of $\mfS^1$, all 4-adic edges, and the 4-adic subarcs of $R$ (see the paragraphs just before \eqref{E:dist(T0,T2)}). Thus each 4-adic edge $E^n_k$ (of $R^n$) corresponds to a 4-adic subarc $A^n_k=R(E^n_k)=\vphi(J^n_k)$ of $R$ and
$$
\diam(A^n_k)=\diam(T(E^n_k))=\diam(E^n_k) = \Del(J^n_k) \,.
$$
We claim that $(\mfS^1,d_p)\overset{\vphi}\to(R,\ed)$ is bi-Lipschitz with
\begin{equation} \label{E:phi BL}
[c(p)/8] \, d_p(s,t) \le |\vphi(s)-\vphi(t)| \le 8\, d_p(s,t) \quad\text{for all $s,t\in\mfS^1$}
\end{equation}
where $c(p):=\half-p$.
To verify this claim, let $s,t$ be two points in $\mfS^1$ and write $[s,t]$ for the smaller diameter subarc of $\mfS^1$ joining $s,t$. Appealing to \rf{L:arc lemma_2k_var}, we get 4-adic subarcs $I,J$ of $\mfS^1$ such that:
\begin{align*}
& I \cup J \subset [s,t] \subset \hat{I}\cup\hat{J} \,, \\
& \Del(I) \le \diam_{d_p}([s,t])=d_p(s,t) \le 8\, \Del(I) \,, \\
& \text{$\Del(I)$ is maximal among all 4-adic subarcs in $[s,t]$} \,, \\[-1mm]
& \text{either $I=J$ or $\hat{I},\hat{J}$ are adjacent subarcs} \,.
\end{align*}
Here $\hat{I}, \hat{J}$ are the 4-adic parents of $I,J$. Put $x:=\vphi(s), y:=\vphi(t)$. Let $A:=\vphi(I),B:=\vphi(J)$ and $E,F$ be the 4-adic subarcs of $R$ and 4-adic edges (respectively) that correspond to $I,J$; also, $\hat{A}=\vphi(\hat{I}),\hat{B}=\vphi(\hat{J})$ are the parents of $A,B$.
Since $x,y\in \hat{A}\cup \hat{B}$,
\begin{align*}
|x-y| &\le \diam(\hat{A}\cup\hat{B}) \le \diam(\hat{A}) + \diam(\hat{B}) = \Del(\hat{I}) + \Del(\hat{J}) \\
&\le 4 \left[ \Del(I) + \Del(J) \right] \le 8 \,\Del(I) \le 8\,d_p(s,t)
\end{align*}
which establishes the upper estimate in \eqref{E:phi BL}. To prove the lower estimate in \eqref{E:phi BL}, we observe that $\operatorname{int}(I)$ separates $s,t$ in $\hat{I}\cup \hat{J}$ and thus \eqref{E:last eqn} yields
$$
|x-y| \ge c(p) \diam(\vphi(I)) = c(p) \, \Del(I) \ge [c(p)/8] \, d_p(s,t) \,. \qedhere
$$
\vspace*{-5mm}
\end{pf}%
It is worthwhile to observe that the above provides an independent proof that each Rohde snowflake is a quasicircle; in fact, each $R$ in $\mcR_p$ is $C$-bounded turning with $C=C(p):=8/c(p)=16/(1-2p)$.
\medskip
We close this paper by explaining how Rohde's theorem follows from our
Theorem. From the proof of part (C) of our Theorem,
each planar quasicircle is bi-Lipschitz equivalent to a Rohde snowflake.
Therefore, Rohde's theorem follows from the fact that a bi-Lipschitz homeomorphism
between planar quasicircles has a bi-Lipschitz extension to the entire plane.
Below we state this extension theorem, due to Gehring \cite[Theorem~7,
Corollary~2]{G-inj}, as \rf{T:qc_bL_extension}. The construction of
the extension essentially follows from the Beurling-Ahlfors extension
\cite{BA-bdry-corr}. See also \cite[Lemma~3]{Tukia-extension} and
\cite[Theorems~2.12, 2.19]{TV-BL-ext}.
Interestingly, the property of there being such a bi-Lipschitz extension, for every bi-Lipschitz self-homeomorphism, is a characteristic property of quasicircles among all closed (that is, bounded, so compact) planar Jordan curves. See \cite[Theorem~5.1]{G-extQI}.
\begin{thm}[{\cite{G-inj}}] \label{T:qc_bL_extension}
Each bi-Lipschitz homeomorphism
between planar quasicircles extends to a bi-Lipschitz self-homeomorphism of the plane. The bi-Lipschitz constant for the extension depends only on the original bi-Lipschitz constant and the two original bounded turning constants.
\end{thm}
We end by remarking that the previous theorem is false for Jordan curves in general. Namely, a bi-Lipschitz map between planar Jordan curves $\Gamma_1,\Gamma_2$ need not have a bi-Lipschitz extension to the plane. For example, let $\Gamma_1$ be a circle with two outward pointing cusps and let $\Gamma_2$ be a circle with one outward and one inward pointing cusp. It is elementary that $\Gamma_1$ and $\Gamma_2$ are bi-Lipschitz equivalent, but any such map cannot be extended to a bi-Lipschitz map of the whole plane. This example appears already in \cite[p.~388]{Rickman-curves}.
\section*{Acknowledgements}
\label{sec:Acknowledgements}
Saara Lehto and David Freeman helped the authors to understand Steffen Rohde's paper. Jussi \Va\
provided many helpful suggestions and references.
\section{Introduction}
\label{sec:section1}
In this paper we are interested in analyzing the direct and inverse scattering problems for the first-order discrete system
\begin{equation}\label{1.1}
\begin{bmatrix}
\alpha_n\\
\noalign{\medskip}
\beta_n
\end{bmatrix}=
\begin{bmatrix}
z & \left(z-\displaystyle\frac{1}{z}\right)q_n\\
\noalign{\medskip}
z\, r_n &\displaystyle\frac{1}{z}+ \left(z-\displaystyle\frac{1}{z}\right)q_n\,r_n
\end{bmatrix}
\begin{bmatrix}
\alpha_{n+1}\\
\noalign{\medskip}
\beta_{n+1}
\end{bmatrix},\qquad n\in\mathbb{Z},
\end{equation}
where $z$ is the spectral parameter taking values on the unit circle $\mathbb{T}$ in the complex $z$-plane $\mathbb{C},$ $n$ is the discrete independent variable taking values in the set of integers $\mathbb{Z}$, the complex-valued scalar quantities
$q_n$ and $r_n$ correspond to the respective values at
$n$ of the potential pair $(q,r),$ and
$\begin{bmatrix}
\alpha_n\\
\beta_n
\end{bmatrix}$ corresponds to the value of the wavefunction at the spatial location
$n.$
We assume that $q_n$ and $r_n$ are rapidly decaying in the sense that they vanish faster than any negative powers of $|n|$ as $n\to\pm\infty$. We also assume that
\begin{equation}\label{1.1a}
1-q_nr_n\ne 0,\quad 1+q_nr_{n+1}\ne 0, \qquad n\in\mathbb{Z}.
\end{equation}
The complex-valued quantities $\alpha_n$ and $\beta_n$ depend on
the spectral parameter $z,$ but in our notation we usually suppress
that $z$-dependence.
The system in \eqref{1.1} is used as a model for an infinite lattice where
the particle with an internal structure at the lattice point $n$ experiences local forces
from the potential values $q_n$ and $r_n.$ Since we assume
that $q_n$ and $r_n$ vanish sufficiently fast as $n\to\pm\infty$, a scattering scenario can be established for \eqref{1.1}.\par
The direct scattering problem for \eqref{1.1} is described as
the determination of the scattering data set consisting of the scattering coefficients and bound-state information when the potential pair $(q,r)$ is known. The inverse scattering problem for \eqref{1.1} consists of the recovery of the potential pair $(q,r)$ when the scattering data set is given. Since $q_n$ and $r_n$ vanish sufficiently fast as $n\to\pm\infty$, it follows from \eqref{1.1} that any solution to \eqref{1.1} has the asymptotic behavior
\begin{equation}
\label{1.2}
\begin{bmatrix}
\alpha_n\\
\noalign{\medskip}
\beta_n
\end{bmatrix}=
\begin{bmatrix}
a_{\pm}z^{-n}\left[1+o(1)\right]\\
\noalign{\medskip}
b_{\pm}z^{n}\left[1+o(1)\right]
\end{bmatrix}, \qquad n\to\pm\infty,
\end{equation}
for some constants $a_{\pm}$ and $b_{\pm}$
that may depend on $z$ but not on $n$. By choosing two of the four coefficients $a_{+}$, $a_{-}$, $b_{+}$, $b_{-}$ appearing in
\eqref{1.2} in a specific way, we obtain a particular solution to \eqref{1.1}.
Note that \eqref{1.1} has two linearly independent solutions, and its general solution
can be expressed as a linear combination of
any two linearly independent solutions.
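For readers who wish to experiment with \eqref{1.1} numerically, the following minimal Python sketch propagates the recursion from a large positive $n$ down to a large negative $n$ and reads off the constants $a_-$ and $b_-$ appearing in \eqref{1.2}. The potentials used here are hypothetical Gaussian profiles chosen only for illustration; the snippet is a sketch and not part of our analysis.
\begin{verbatim}
import numpy as np

# Hypothetical, rapidly decaying (Gaussian) potentials, for illustration only.
N = 40
def q(n): return 0.3 * np.exp(-n**2 / 4.0)
def r(n): return 0.2 * np.exp(-(n - 1)**2 / 4.0)

def M(z, n):
    # Coefficient matrix of the recursion (1.1) at site n.
    w = z - 1.0 / z
    return np.array([[z,        w * q(n)],
                     [z * r(n), 1.0 / z + w * q(n) * r(n)]], dtype=complex)

z = np.exp(0.7j)                               # spectral parameter on the unit circle
vec = np.array([z**(-N), 0.0], dtype=complex)  # impose a_+ = 1, b_+ = 0 at n = +N

for n in range(N - 1, -N - 1, -1):             # (alpha_n, beta_n) from (alpha_{n+1}, beta_{n+1})
    vec = M(z, n) @ vec

a_minus = vec[0] * z**(-N)                     # alpha_n ~ a_- z^{-n} at n = -N
b_minus = vec[1] * z**(+N)                     # beta_n  ~ b_- z^{+n} at n = -N
print(abs(np.linalg.det(M(z, 0))), a_minus, b_minus)
\end{verbatim}
The first printed value also confirms numerically that the coefficient matrix in \eqref{1.1} has unit determinant, a fact we use below when comparing left and right transmission coefficients.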
The discrete system \eqref{1.1} is related to the integrable semi-discrete system
\begin{equation}\label{1.2a}
\begin{cases}
i\dot{q}_n+\displaystyle\frac{q_{n+1}}{1-q_{n+1}r_{n+1}}-
\displaystyle\frac{q_{n}}{1-q_{n}r_{n}}-\displaystyle\frac{q_{n}}{1+q_{n}r_{n+1}}+
\displaystyle\frac{q_{n-1}}{1+q_{n-1}r_{n}}=0,\\
\noalign{\medskip}
i\dot{r}_n-\displaystyle\frac{r_{n+1}}{1+q_{n}r_{n+1}}+\displaystyle\frac{r_{n}}{1+q_{n-1}r_{n}}
+\displaystyle\frac{r_{n}}{1-q_{n}r_{n}}-\displaystyle\frac{r_{n-1}}{1-q_{n-1}r_{n-1}}=0,
\end{cases}
\end{equation}
which is known as the semi-discrete derivative NLS (nonlinear Schr\"odinger) system or the semi-discrete Kaup-Newell system \cite{tsuchida2002integrable,tsuchida,tsuchida2010new}. From the denominators in \eqref{1.2a} we see why we need the restriction \eqref{1.1a}. Note that an overdot in \eqref{1.2a} denotes the derivative with respect to
the independent variable $t,$ which is interpreted as the time variable
and is suppressed in \eqref{1.2a}.
In our analysis of \eqref{1.1}, without loss of generality
we can either assume that $q_n$ and
$r_n$ are independent of $t$ or that they
contain $t$ as a parameter.
We analyze the direct and inverse scattering problems for \eqref{1.1}
by using the connection to the two first-order discrete systems
\begin{equation}\label{1.2aa}
\begin{bmatrix}
\xi_n\\
\noalign{\medskip}
\eta_n
\end{bmatrix}=
\begin{bmatrix}
z &z\, u_n\\
\noalign{\medskip}
\displaystyle\frac{1}{z}\, v_n &\displaystyle\frac{1}{z}
\end{bmatrix}
\begin{bmatrix}
\xi_{n+1}\\
\noalign{\medskip}
\eta_{n+1}
\end{bmatrix},\qquad n\in\mathbb{Z},
\end{equation}
\begin{equation}\label{1.2ab}
\begin{bmatrix}
\gamma_n\\
\noalign{\medskip}
\epsilon_n
\end{bmatrix}=
\begin{bmatrix}
z &z\,p_n\\
\noalign{\medskip}
\displaystyle\frac{1}{z}\, s_n &\displaystyle\frac{1}{z}
\end{bmatrix}
\begin{bmatrix}
\gamma_{n+1}\\
\noalign{\medskip}
\epsilon_{n+1}
\end{bmatrix},\qquad n\in\mathbb{Z},
\end{equation}
where $u_n$ and $v_n$ are the values for the potential pair $(u,v)$ and $p_n$ and $s_n$ are the values for $(p,s)$. By choosing $(u,v)$ and $(p,s)$ as in \eqref{x3.1}--\eqref{x3.4}, we relate
the relevant quantities for \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} to each other. Such relevant quantities include the Jost solutions, the scattering coefficients, and the bound-state data sets for each of \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab}.
We remark that in the literature it is always assumed that the bound states for \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} are simple. In our paper
we do not make such an artificial
assumption, because we can handle bound states of any multiplicity in a simple and elegant way by using a pair of constant matrix triplets
describing the bound-state values of the spectral parameter $z$ and the corresponding norming constants.
The systems \eqref{1.2aa} and \eqref{1.2ab} are also important in their own right, and they are known as the Ablowitz-Ladik systems or as the discrete AKNS systems. It is
possible \cite{tsuchida2010new}
to transform \eqref{1.2aa} and \eqref{1.2ab} into
\begin{equation}\label{1.2an}
\begin{bmatrix}
\tilde{\xi}_{n+1}\\
\noalign{\medskip}
\tilde{\eta}_{n+1}
\end{bmatrix}=
\begin{bmatrix}
z &u_n\\
\noalign{\medskip}
v_n &\displaystyle\frac{1}{z}
\end{bmatrix}
\begin{bmatrix}
\tilde{\xi}_{n}\\
\noalign{\medskip}
\tilde{\eta}_{n}
\end{bmatrix},\qquad n\in\mathbb{Z},
\end{equation}
\begin{equation}\label{1.2am}
\begin{bmatrix}
\tilde{\gamma}_{n+1}\\
\noalign{\medskip}
\tilde{\epsilon}_{n+1}
\end{bmatrix}=
\begin{bmatrix}
z &p_n\\
\noalign{\medskip}
s_n &\displaystyle\frac{1}{z}
\end{bmatrix}
\begin{bmatrix}
\tilde{\gamma}_{n}\\
\noalign{\medskip}
\tilde{\epsilon}_{n}
\end{bmatrix},\qquad n\in\mathbb{Z}.
\end{equation}
Note that \eqref{1.2aa} and \eqref{1.2an} also differ from each other by the fact that the appearances of the wavefunction values evaluated at $n$ and $n+1$ are switched. The same remark also applies to \eqref{1.2ab} and \eqref{1.2am}.
As already pointed out by Tsuchida \cite{tsuchida2010new}, the analysis of the direct and inverse scattering problems for an Ablowitz-Ladik system written in the form of \eqref{1.2an} and \eqref{1.2am} is unnecessarily complicated. For example, the analysis provided in \cite{ablowitzPrinari2003} for \eqref{1.2an} involves separating the scattering data into two parts containing even and odd integer powers of $z,$ respectively. This unnecessarily makes the analysis cumbersome. Furthermore, if we use \eqref{1.2an} with the roles of $n$ and $n+1$ switched compared to \eqref{1.2aa} and use the scattering coefficients from the right instead of the scattering coefficients from the left as input, then the analysis of the inverse scattering problem for \eqref{1.2an} by the Marchenko method becomes unnecessarily complicated.
Researchers who are mainly interested in nonlinear evolution equations use only the scattering coefficients from the right without referring to the scattering coefficients from the left. In this paper, we are careful in making a distinction between the right and left scattering data sets. The right and left transmission coefficients in a first-order discrete linear system are unequal unless the coefficient matrix in that system has determinant equal to $1.$ One can verify that the coefficient matrix in \eqref{1.1} has its determinant equal to $1,$ whereas the corresponding determinants for \eqref{1.2aa} and \eqref{1.2ab} are given by $1-u_nv_n$ and $1-p_ns_n$, respectively. Thus, the left and right transmission coefficients for each of \eqref{1.2aa} and \eqref{1.2ab} are unequal.
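The determinant statements above are easy to confirm symbolically; the following short sketch, which uses the sympy package purely as an illustration, evaluates the determinants of the coefficient matrices in \eqref{1.1}, \eqref{1.2aa}, and \eqref{1.2ab}.
\begin{verbatim}
import sympy as sp

z, q, r, u, v, p, s = sp.symbols('z q r u v p s')

M_qr = sp.Matrix([[z,   (z - 1/z)*q],
                  [z*r, 1/z + (z - 1/z)*q*r]])   # coefficient matrix in (1.1)
M_uv = sp.Matrix([[z,   z*u],
                  [v/z, 1/z]])                   # coefficient matrix in (1.2aa)
M_ps = sp.Matrix([[z,   z*p],
                  [s/z, 1/z]])                   # coefficient matrix in (1.2ab)

print(sp.simplify(M_qr.det()))   # 1
print(sp.simplify(M_uv.det()))   # 1 - u*v
print(sp.simplify(M_ps.det()))   # 1 - p*s
\end{verbatim}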
The scattering and inverse scattering problems for \eqref{1.1} have partially been analyzed by Tsuchida in \cite{tsuchida2010new}. Our own analysis is complementary to Tsuchida's work in the following sense. Tsuchida's main interest in \eqref{1.1} is confined to its relation to \eqref{1.2a}, and he only deals with the right scattering coefficients. Tsuchida exploits certain gauge transformations to relate \eqref{1.1} to two discrete Ablowitz-Ladik systems, and he
assumes that the bound states are all simple.
Tsuchida's expressions for the scattering coefficients not only involve the Jost solutions to the relevant linear system but also the Jost solutions to the corresponding adjoint system, whereas in our case the scattering coefficients are expressed in terms of the Jost solutions to the relevant linear system only. In our opinion the latter description of the scattering coefficients provides physical insight and intuition into the analysis of direct and inverse problems. Tsuchida
formulates a Marchenko system
given in (4.12c) and (4.12d) of \cite{tsuchida2010new},
somewhat similar to our own alternate
Marchenko system \eqref{6.22d} and \eqref{6.23}, but it lacks the
appropriate symmetries existing in our alternate
Marchenko system.
In formulating his Marchenko system Tsuchida uses a Fourier transformation with respect to $z^2$ and not with respect to $z$. Furthermore, in Tsuchida's formulation it is not quite clear how the scattering data sets for \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} are related to each other.
One of the important accomplishments of our paper is the introduction of a standard Marchenko formalism for
\eqref{1.1} using as input the scattering data from
\eqref{1.1} only. The formulation
of our standard Marchenko system \eqref{Z.0} is a significant
step toward solving inverse problems
for various other discrete and continuous systems for which
a standard Marchenko theory has not yet been formulated.
As mentioned already, we also introduce
an alternate Marchenko formalism for \eqref{1.1}
using as input the scattering data sets
from \eqref{1.2aa} and \eqref{1.2ab}.
Both the standard and alternate Marchenko systems we introduce have the appropriate symmetry properties and resemble the standard Marchenko systems arising for other continuous and
discrete systems.
The alternate Marchenko method in our paper
corresponds to the discrete analog of the systematic approach \cite{AE19} we presented to solve the inverse scattering problem for the energy-dependent AKNS system given in (1.1) of \cite{AE19}. Besides \cite{AE19} the most relevant reference for our current work is the important paper by Tsuchida \cite{tsuchida2010new}.
Our paper is organized as follows. In Section~\ref{sec:section2} we introduce the Jost solutions and the scattering coefficients for each of \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} and we present some relevant properties of
those Jost solutions and scattering coefficients. In that section we also prove that the linear dependence of the appropriate pairs of Jost solutions occurs
at the poles of the corresponding transmission coefficients for each of
\eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab}.
In Section~\ref{sec:section3} when the corresponding potential pairs
are related to each other as in
\eqref{x3.1}--\eqref{x3.4}, we relate the
Jost solutions and scattering coefficients
for \eqref{1.1} to those for \eqref{1.2aa} and \eqref{1.2ab}.
In that section we also present certain relevant properties of
the Jost solutions to \eqref{1.1} and
express the potentials $q_n$ and $r_n$ in terms of the values at $z=1$ of the Jost solutions to \eqref{1.2aa} and \eqref{1.2ab}.
In Section~\ref{sec:section4} we describe the bound-state data sets
for each of \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} in terms of two
matrix triplets, which allows us to handle bound states of any multiplicity
in a systematic manner that can also be used for other systems both in the continuous and discrete cases. In the formulation of the Marchenko method
we show how the Marchenko kernels contain the matrix triplets
in a simple and elegant manner. Also in that section, when
the potential pairs for \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} are related as in
\eqref{x3.1}--\eqref{x3.4}, we show how the corresponding bound-state
data sets are related to each other.
In Section~\ref{sec:section5} we outline the steps to solve the direct
problem for \eqref{1.1}.
In Section~\ref{sec:section6} we introduce the Marchenko system
\eqref{Z.0} using as input the scattering data directly related to
\eqref{1.1} and we describe how the potentials
$q_n$ and $r_n$ are recovered from the solution
to \eqref{Z.0}.
In Section~\ref{sec:section7} we present
our alternate Marchenko system given in \eqref{6.22d} and \eqref{6.23}
using as input the scattering data sets from
\eqref{1.2aa} and \eqref{1.2ab}, and we also show how
$q_n$ and $r_n$ are recovered from the solution to the
alternate Marchenko system.
In Section~\ref{sec:section8} we describe various
methods to solve the inverse problem for
\eqref{1.1} by using as input the scattering data for
\eqref{1.1} and outline how the
potentials $q_n$ and $r_n$ are recovered.
Finally, in Section~\ref{sec:section9}
we provide the solution to the integrable
nonlinear system \eqref{1.2a} via the inverse scattering transform.
This is done by providing the time evolution of the scattering data
for \eqref{1.1} and by determining the corresponding time-evolved
potentials $q_n$ and $r_n.$
In that section we also present some explicit solution formulas for
\eqref{1.2a} corresponding to time-evolved reflectionless
scattering data for
\eqref{1.1}, and such solutions are explicitly expressed in terms of
the two matrix triplets describing the time-evolved bound-state
data for \eqref{1.1}.
\section{The Jost solutions and scattering coefficients}
\label{sec:section2}
In this section we introduce the Jost solutions and the scattering coefficients for each of the linear systems given in \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab}, and we present some of their relevant properties. For clarification, we use the superscript $(q,r)$ to denote the quantities relevant to \eqref{1.1}, use $(u,v)$ for those relevant to \eqref{1.2aa}, and use $(p,s)$ for those relevant to \eqref{1.2ab}. When these three potential pairs decay rapidly in their respective equations as $n\to\pm\infty$, the corresponding coefficient matrices all reduce to the same unperturbed coefficient matrix. In other words, each of \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} corresponds to the
same unperturbed system
\begin{equation*}
\label{x2.1}
\mathring{\Psi}_n=
\begin{bmatrix}
z &0\\
\noalign{\medskip}
0&\displaystyle\frac{1}{z}
\end{bmatrix}
\mathring{\Psi}_{n+1},\qquad n\in\mathbb{Z},
\end{equation*}
where the general solution is a linear combination of the two linearly independent solutions $\begin{bmatrix}
z^{-n}\\0
\end{bmatrix}$ and $\begin{bmatrix}
0\\z^{n}
\end{bmatrix}$, i.e. we have
\begin{equation}\label{x2.2}
\mathring{\Psi}_n=a\begin{bmatrix}
z^{-n}\\
\noalign{\medskip}
0
\end{bmatrix}+b\begin{bmatrix}
0\\
\noalign{\medskip}
z^{n}
\end{bmatrix},\qquad n\in\mathbb{Z},
\end{equation}
with $a$ and $b$ being two complex-valued scalars that are independent of $n$ but may depend on $z.$
There are four Jost solutions for each of \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab}, and they are obtained by assigning specific values to $a$ and $b$ as $n\to+\infty$ or $n\to-\infty$. We uniquely define the four Jost solutions $\psi_n,$ $\phi_n,$ $\bar{ \psi}_n,$ $\bar{ \phi}_n$ to each of \eqref{1.1}, \eqref{1.2aa}, \eqref{1.2ab} so that they satisfy the respective asymptotics
\begin{equation}\label{x2.3}
\psi_n=\begin{bmatrix}
o(1)\\
\noalign{\medskip}
z^n[1+o(1)]
\end{bmatrix} ,\qquad n\to+\infty,
\end{equation}
\begin{equation}\label{x2.4}
\phi_n=\begin{bmatrix}
z^{-n}[1+o(1)]\\
\noalign{\medskip}
o(1)
\end{bmatrix} ,\qquad n\to-\infty,
\end{equation}
\begin{equation}\label{x2.5}
\bar{\psi}_n=\begin{bmatrix}
z^{-n}[1+o(1)]\\
\noalign{\medskip}
o(1)
\end{bmatrix} ,\qquad n\to+\infty,
\end{equation}
\begin{equation}\label{x2.6}
\bar{\phi}_n=\begin{bmatrix}
o(1)\\
\noalign{\medskip}
z^{n}[1+o(1)]
\end{bmatrix} ,\qquad n\to-\infty.
\end{equation}
We remark that an overbar does not denote complex conjugation. We will use the notation $\psi_n^{(q,r)},$ $\phi_n^{(q,r)},$ $\bar{ \psi}_n^{(q,r)},$ $\bar{ \phi}_n^{(q,r)}$ to refer to the respective Jost solutions for \eqref{1.1}; use $\psi_n^{(u,v)},$ $\phi_n^{(u,v)},$ $\bar{ \psi}_n^{(u,v)},$ $\bar{ \phi}_n^{(u,v)}$ for the respective Jost solutions for \eqref{1.2aa}; and use $\psi_n^{(p,s)},$ $\phi_n^{(p,s)},$ $\bar{ \psi}_n^{(p,s)},$ $\bar{ \phi}_n^{(p,s)}$ for the respective Jost solutions for \eqref{1.2ab}.
The asymptotics of the Jost solutions complementary to \eqref{x2.3}--\eqref{x2.6} are used to define the corresponding scattering coefficients compatible with \eqref{x2.2}. We have
\begin{equation}\label{x2.7}
\psi_n=\begin{bmatrix} \displaystyle\frac{L}{T_{\rm l}}\,z^{-n}\left[1+o(1)\right]\\
\noalign{\medskip}
\displaystyle\frac{1}{T_{\rm l}}\,z^{n}\left[1+o(1)\right]
\end{bmatrix}, \qquad n\to-\infty,
\end{equation}
\begin{equation}\label{x2.8}
\phi_n=\begin{bmatrix}
\displaystyle\frac{1}{T_{\rm r}}\,z^{-n}\left[1+o(1)\right]\\
\noalign{\medskip}
\displaystyle\frac{R}{T_{\rm r}}\,z^{n}\left[1+o(1)\right]
\end{bmatrix} , \qquad n\to+\infty,
\end{equation}
\begin{equation}\label{x2.10}
\bar{\psi}_n=\begin{bmatrix}
\displaystyle\frac{1}{\bar{T}_{\rm l}}\,z^{-n}\left[1+o(1)\right]\\
\noalign{\medskip}
\displaystyle\frac{\bar{L}}{\bar{T}_{\rm l}}\,z^{n}\left[1+o(1)\right]
\end{bmatrix}, \qquad n\to-\infty,
\end{equation}
\begin{equation}\label{x2.9}
\bar{\phi}_n=\begin{bmatrix}
\displaystyle\frac{\bar{R}} {\bar{T}_{\rm r}}\,z^{-n}\left[1+o(1)\right]\\\noalign{\medskip}
\displaystyle\frac{1}{\bar{T}_{\rm r}}\,z^{n}\left[1+o(1)\right]
\end{bmatrix}, \qquad n\to+\infty,
\end{equation}
where $T_{\rm l}$ and $\bar{T}_{\rm l}$ are the transmission coefficients from the left, $T_{\rm r}$ and $\bar{T}_{\rm r}$ are the transmission coefficients from the right, $R$ and $\bar{R}$ are the reflection coefficients from the right, and $L$ and $\bar{L}$ are the reflection coefficients from the left. We will also say left scattering coefficients instead of scattering coefficients from the left, and similarly we will use right scattering coefficients and scattering coefficients from the right interchangeably.
Note that we will use $T_{\rm r}^{(q,r)},$ $T_{\rm l}^{(q,r)},$ $R^{(q,r)},$ $L^{(q,r)},$ $\bar{T}_{\rm r}^{(q,r)},$ $\bar{T}_{\rm l}^{(q,r)},$ $\bar{R}^{(q,r)},$ $\bar{L}^{(q,r)}$ to refer to the scattering coefficients for \eqref{1.1}; use $T_{\rm r}^{(u,v)},$ $T_{\rm l}^{(u,v)},$ $R^{(u,v)},$ $L^{(u,v)},$ $\bar{T}_{\rm r}^{(u,v)},$ $\bar{T}_{\rm l}^{(u,v)},$ $\bar{R}^{(u,v)},$ $\bar{L}^{(u,v)}$ for the scattering coefficients for \eqref{1.2aa}; and use $T_{\rm r}^{(p,s)},$ $T_{\rm l}^{(p,s)},$ $R^{(p,s)},$ $L^{(p,s)},$ $\bar{T}_{\rm r}^{(p,s)},$ $\bar{T}_{\rm l}^{(p,s)},$ $\bar{R}^{(p,s)},$ $\bar{L}^{(p,s)}$ for the scattering coefficients for \eqref{1.2ab}.
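As an illustration of how the asymptotics \eqref{x2.3} and \eqref{x2.7} determine the left scattering coefficients, the following Python sketch computes the Jost solution $\psi_n^{(u,v)}$ of \eqref{1.2aa} numerically by iterating the recursion downward from a large site and then reads off $T_{\rm l}^{(u,v)}$ and $L^{(u,v)}$. The potentials are hypothetical and serve only as an example.
\begin{verbatim}
import numpy as np

# Hypothetical rapidly decaying potentials u_n, v_n, for illustration only.
N = 60
def u(n): return 0.25 * np.exp(-n**2 / 6.0)
def v(n): return 0.15 * np.exp(-(n + 2)**2 / 6.0)

def A(z, n):
    # Coefficient matrix in (1.2aa) at site n.
    return np.array([[z,        z * u(n)],
                     [v(n) / z, 1.0 / z]], dtype=complex)

z = np.exp(0.4j)                            # point on the unit circle
psi = np.array([0.0, z**N], dtype=complex)  # asymptotics (x2.3) imposed at n = +N

for n in range(N - 1, -N - 1, -1):          # psi_n from psi_{n+1}
    psi = A(z, n) @ psi

one_over_Tl = psi[1] * z**N                 # eta_n ~ (1/T_l) z^{ n} at n = -N
L_over_Tl   = psi[0] * z**(-N)              # xi_n  ~ (L/T_l) z^{-n} at n = -N
T_l = 1.0 / one_over_Tl
L = L_over_Tl * T_l
print(T_l, L)
\end{verbatim}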
Related to the linear system \eqref{1.2aa}, let us introduce the quantities $D_n^{(u,v)}$ and $D_\infty^{(u,v)}$ as
\begin{equation}\label{x2.D_n}
D_n^{(u,v)}:=\displaystyle\prod_{j=-\infty}^{n}(1-u_j\,v_j),\quad D_\infty^{(u,v)}:=\displaystyle\prod_{j=-\infty}^{\infty}(1-u_j\,v_j).
\end{equation}
From the fact that $u_n$ and $v_n$ are rapidly decaying and that $1-u_nv_n\ne 0$ for $n\in\mathbb{Z},$ it follows that $D_n^{(u,v)}$ and $D_\infty^{(u,v)}$ are each well defined and nonzero.
Similarly, related to the linear system \eqref{1.2ab}, we let
\begin{equation}\label{x2.D_na}
D_n^{(p,s)}:=\displaystyle\prod_{j=-\infty}^{n}(1-p_j\,s_j),\quad D_\infty^{(p,s)}:=\displaystyle\prod_{j=-\infty}^{\infty}(1-p_j\,s_j).
\end{equation}
From the fact that $p_n$ and $s_n$ are decaying rapidly and that $1-p_ns_n\ne 0$ for $n\in\mathbb{Z},$ we see that $D_n^{(p,s)}$ and $D_\infty^{(p,s)}$ are each well defined and nonzero.
In the next theorem we list some relevant analyticity properties of the Jost solutions to \eqref{1.2aa}.
\begin{theorem}
\label{thm:theorem x2.1}
Assume that the potentials $u_n$ and $v_n$ appearing in \eqref{1.2aa} are rapidly decaying and $1-u_nv_n\ne 0$ for $n\in\mathbb{Z}$. Then, the corresponding Jost solutions to \eqref{1.2aa} satisfy the following:
\begin{enumerate}
\item[\text{\rm(a)}] For each $n\in\mathbb{Z}$ the quantities $z^{-n}\,\psi_n^{(u,v)},$ $z^{n}\,\phi_n^{(u,v)},$ $z^{n}\,\bar{\psi}_n^{(u,v)},$ $z^{-n}\,\bar{\phi}_n^{(u,v)}$ are even in $z$ in their respective domains.
\item[\text{\rm(b)}] The quantity $z^{-n}\,\psi_n^{(u,v)}$ is analytic in $|z|<1$ and
continuous in $|z|\le 1$.
\item[\text{\rm(c)}] The quantity $z^{n}\,\phi_n^{(u,v)}$ is analytic in $|z|<1$ and
continuous in $|z|\le 1$.
\item[\text{\rm(d)}] The quantity $z^{n}\,\bar{\psi}_n^{(u,v)}$ is analytic in $|z|>1$ and
continuous in $|z|\ge 1$.
\item[\text{\rm(e)}] The quantity $z^{-n}\,\bar{\phi}_n^{(u,v)}$ is analytic in $|z|>1$ and
continuous in $|z|\ge 1$.
\item[\text{\rm(f)}] The Jost solution $\psi_{n}^{(u,v)}$ has the expansion
\begin{equation}\label{x2.11}
\psi_{n}^{(u,v)}=\sum_{l=n}^{\infty}K_{nl}^{(u,v)}z^l, \qquad |z|\le 1,
\end{equation}
with the double-indexed quantities $K_{nl}^{(u,v)}$ for which we have
\begin{equation}\label{x2.12}
K_{nn}^{(u,v)}= \begin{bmatrix}
0\\
\noalign{\medskip}
1
\end{bmatrix},\quad
K_{n(n+2)}^{(u,v)}= \begin{bmatrix}
u_n\\
\noalign{\medskip}
\displaystyle \sum_{k=n}^{\infty} u_{k+1}\,v_k
\end{bmatrix},
\end{equation}
and that $K_{nl}^{(u,v)}=0 $ when $n+l$ is odd or $l<n$.
\item[\text{\rm(g)}] The Jost solution $\bar{\psi}_{n}^{(u,v)}$ has the expansion \begin{equation}\label{x2.13}
\bar{\psi}_{n}^{(u,v)}=\sum_{l=n}^{\infty}\bar{K}_{nl}^{(u,v)}\displaystyle\frac{1}{z^l}, \qquad |z|\ge 1,
\end{equation}
with the double-indexed quantities $\bar{K}_{nl}^{(u,v)}$ for which we have
\begin{equation}\label{x2.14}
\bar{K}_{nn}^{(u,v)}= \begin{bmatrix}
1\\
\noalign{\medskip}
0
\end{bmatrix},\quad
\bar{K}_{n(n+2)}^{(u,v)}= \begin{bmatrix}
\displaystyle\sum_{k=n}^{\infty} u_k\,v_{k+1}\\
\noalign{\medskip}
v_n
\end{bmatrix},
\end{equation}
and that $\bar{K}_{nl}^{(u,v)}=0 $ when $n+l$ is odd or $l<n$.
\item[\text{\rm(h)}] For the Jost solution $\phi_{n}^{(u,v)}$ we have the expansion
\begin{equation}\label{x2.16}
z^{n}\,\phi_{n}^{(u,v)}=\sum_{l=0}^{\infty}P_{nl}^{(u,v)}\displaystyle z^{l}, \qquad |z|\le 1,
\end{equation}
with the double-indexed quantities $P_{nl}^{(u,v)}$ for which we have
\begin{equation}\label{x2.17}
P_{n0}^{(u,v)}=\displaystyle\frac{1}{D_{n-1}^{(u,v)}}
\begin{bmatrix}
1\\
\noalign{\medskip}
-v_{n-1}
\end{bmatrix},
\end{equation}
\begin{equation*}
P_{n2}^{(u,v)}=\displaystyle\frac{1}{D_{n-1}^{(u,v)}}
\begin{bmatrix}
\displaystyle\sum_{k=-\infty}^{n-2} u_{k+1}\,v_{k}\\
\noalign{\medskip}
-v_{n-2}-v_{n-1}\displaystyle\sum_{k=-\infty}^{n-3} u_{k+1}\,v_{k}
\end{bmatrix},
\end{equation*}
with $D_{n-1}^{(u,v)}$ being the quantity defined in \eqref{x2.D_n} and that $P_{nl}^{(u,v)}=0 $ when $l$ is odd or $l<0$.
\item[\text{\rm(i)}] For the Jost solution $\bar{\phi}_{n}^{(u,v)}$ we have the expansion
\begin{equation*}
z^{-n}\,\bar{\phi}_{n}^{(u,v)}=\sum_{l=0}^{\infty}\bar{P}_{nl}^{(u,v)}\displaystyle\frac{1}{z^{l}}, \qquad |z|\ge 1,
\end{equation*}
with the double-indexed quantities $\bar{P}_{nl}^{(u,v)}$ for which we have
\begin{equation*}
\bar{P}_{n0}^{(u,v)}=\displaystyle\frac{1}{D_{n-1}^{(u,v)}}
\begin{bmatrix}
-u_{n-1}\\
\noalign{\medskip}
1
\end{bmatrix},
\end{equation*}
\begin{equation*}
\bar{P}_{n2}^{(u,v)}=\displaystyle\frac{1}{D_{n-1}^{(u,v)}}
\begin{bmatrix}
-u_{n-2}-u_{n-1}\displaystyle\sum_{k=-\infty}^{n-3} u_{k}\,v_{k+1}\\
\noalign{\medskip}
\displaystyle\sum_{k=-\infty}^{n-2} u_{k}\,v_{k+1}
\end{bmatrix},
\end{equation*}
and that $\bar{P}_{nl}^{(u,v)}=0 $ when $l$ is odd or $l<0$.
\item[\text{\rm(j)}] The scattering coefficients for \eqref{1.2aa} are even in $z$ in their respective domains. The domain for the reflection coefficients
is the unit circle $\mathbb{T}$, and the domains for the transmission coefficients consist of the union of $\mathbb{T}$ and their respective regions of extension.
\item[\text{\rm(k)}] The quantities $1/T_{\rm l}^{(u,v)}$ and $1/T_{\rm r}^{(u,v)}$ have analytic extensions in $z$ from $z\in\mathbb{T}$ to $|z|<1$ and those extensions are continuous for $|z|\le 1.$ Similarly, the quantities $1/\bar{T}_{\rm l}^{(u,v)}$ and $1/\bar{T}_{\rm r}^{(u,v)}$ have extensions from $z\in\mathbb{T}$ so that they are analytic in $|z|>1$ and continuous in $|z|\ge 1.$
\end{enumerate}
\end{theorem}
\begin{proof}
We can write \eqref{1.2aa} for $\psi_n^{(u,v)}$ in the equivalent form
\begin{equation}\label{x.401}
z^{-n}\,\psi_n^{(u,v)}=\begin{bmatrix}
z^{2} &z^{2}\, u_n\\
\noalign{\medskip}
v_n&1
\end{bmatrix}z^{-n-1}\,\psi_{n+1}^{(u,v)}, \qquad n\in \mathbb{Z}.
\end{equation}
From \eqref{x2.3} and the iteration of \eqref{x.401} in $n,$ it follows that $z^{-n}\,\psi_n^{(u,v)}$ is an even function of $z.$ By proceeding in a similar manner for the remaining Jost solutions, we complete the proof of (a). The expansion of $z^{-n}\,\psi_n^{(u,v)}$ obtained in (a) contains only nonnegative integer powers of $z^{2}$ and is uniformly convergent in $z$ for $|z|\le1,$ from which we conclude (b) and (f). The proofs for (c), (d), (e), (g), (h), (i) are obtained in a similar manner. Using (a)--(e) in \eqref{x2.7}--\eqref{x2.9} we establish (j). Finally, using (b)
|
% Figure artwork (overpic) omitted; the annotations in the figure read: ``1) Repeated pilot transmission'', ``2) Switching between different configurations'', ``3) Feedback of preferred configuration'', ``RIS controller'', ``RIS''.
\caption{One approach to configure the RIS is to transmit pilots that the RIS scatters using different configurations. The receiver feeds back a preferred configuration to the RIS.}
\label{figureEstimation}
\end{figure}
Another approach is to alter the passive nature of the RIS by having a few elements with receiver chains
\cite{Taha2019a}, which enables sensing and channel estimation directly at the RIS.
The ability to extrapolate a few measurements to estimate the entire wideband channel requires spatially sparse channels with a known parametrization. This might be reasonable in mmWave or terahertz bands but further work on channel and hardware modeling is required. The sparseness can also make the channels flat over rather wide bandwidths. Learning-based and sparsity-based estimation algorithms were considered in \cite{Taha2019a,Alexandropoulos2020}. Even if the RIS has sensing capabilities, a control loop is needed to jointly select the RIS configuration and the beamforming at the transmitter/receiver.
Estimation algorithms can leverage special channel characteristics to reduce the pilot overhead. For instance, the channel between the BS and RIS is semi-static
and common for all users, which makes the end-to-end channels correlated between users. An estimation algorithm exploiting this correlation was proposed in \cite{wang2019}. The BS-to-RIS channel can contain many coefficients if the BS has many antennas but since this channel is semi-static, it can be estimated less frequently than the RIS-to-user channel, which typically contains fewer coefficients since users have fewer antennas.
There is no doubt that RIS can be used for fixed communication links, but mobile operation requires real-time channel estimation and reconfiguration, even in indoor use cases.
A few millimeters of movement will change the channels in mmWave bands and above.
It remains to be demonstrated if any estimation protocol can enable real-time reconfigurability and under what mobility conditions. Since the array is passive, the RIS technology is potentially more energy-efficient than alternative technologies \cite{Basar2019a} but this remains to be demonstrated quantitatively. The RIS will require a power source for reconfigurability and wireless control channels.
It is likely that the control interface will consume most of the power at the RIS, so one cannot predict the total power consumption until the channel estimation and reconfigurability problems have been solved and validated.
\section*{Summary}
An RIS is a full-duplex transparent relay that synthesizes the scattering behavior of an arbitrarily shaped object. Since the RIS does not amplify the signal, a larger surface area is required to achieve a given SNR than with conventional relays or multi-antenna transceivers. RIS-aided communication is an emerging research topic where the main open problems are to identify convincing use cases and to design practical protocols for reconfigurability.
\bibliographystyle{IEEEtran}
|
\section{Introduction}
The fuzzy partial differential equation (FPDE) is the generalization of the partial differential equation (PDE) in the fuzzy sense. When modeling a real situation in terms of partial differential equations, we see that the variables and parameters involved in the equations are uncertain (in the sense that they are not completely known, or are inexact or imprecise). Often a common initial or boundary condition of ambient temperature is a fuzzy condition, since the ambient temperature is prone to variation within a range. We express this impreciseness and uncertainty in terms of fuzzy numbers, and so we arrive at fuzzy partial differential equations. In \cite{BU}, Buckley and Feuring (1999) proposed a procedure to examine solutions of elementary fuzzy partial differential equations. First they verified whether the Buckley-Feuring (BF) solution exists. If the BF-solution fails to exist, they looked for the Seikkala solution. The solutions are based on the Seikkala derivative introduced in \cite{SE}. Their proposed method works for elementary fuzzy partial differential equations. They assumed that the solution of the FPDE is not defined in terms of a series. \\
\subsection{Brief literature survey}
In \cite{AL}, Allahviranloo (2002) proposed a difference method to solve FPDEs. This method was based on the Seikkala derivative of fuzzy functions. The Adomian method was studied to find the approximate solution of the fuzzy heat equation in \cite{AL1} (2009), while in \cite{AL2}, Allahviranloo and Afshar (2010) presented numerical methods for solving fuzzy partial differential equations. These numerical methods were based on the derivative due to Bede and Gal \cite{BE}. Mahmoud and Iman \cite{MA} (2013) presented a finite volume method that solves some FPDEs such as fuzzy hyperbolic, fuzzy parabolic and fuzzy elliptic equations. They obtained explicit, implicit and Crank-Nicolson schemes for solving the fuzzy heat equation. A study of heat, wave and Poisson equations with uncertain parameters is given in \cite{BE1} (2013). Allahviranloo et al. studied fuzzy solutions of the fuzzy heat equation with fuzzy initial value based on generalized Hukuhara differentiability in \cite{AL3} (2014). Pirzada and Vakaskar discussed the solution of fuzzy heat equations using Adomian decomposition in \cite{PI1} (2015). The solution of the fuzzy heat equation under fuzzified thermal diffusivity is discussed by Pirzada and Vakaskar in \cite{PI2} (2017). The fuzzy solution of the homogeneous heat equation whose solution is in Fourier series form is analyzed in \cite{PI3} (2019).
Applications to FPDEs are presented with a new inference method in \cite{CH} (2009). B.A. Faybishenko \cite{FA} (2004) presented a hydrogeologic system as a fuzzy system. He derived a fuzzy logic form of a parabolic-type partial differential equation and solved it using basic principles of fuzzy arithmetic. The exact solutions of fuzzy wave-like equations with variable coefficients obtained by a variational iteration method are proposed in \cite{AL4} (2011). Series solutions of fuzzy wave-like equations with variable coefficients were presented in \cite{HA} (2013). Biswas and Roy defined a generalization of the Seikkala derivative and solved fuzzy Volterra integro-differential equations using the differential transform method in \cite{BI} (2018).
\subsection{Motivation and novelty of the proposed work}
Limitations in other fuzzy derivatives:
\begin{itemize}
\item In the solution of a fuzzy differential equation, we need differentiability of the level functions of the fuzzy-valued function. For the Seikkala derivative, the level functions are differentiable, but the condition $f_{1}^{\prime}(\alpha) \leq f_{2}^{\prime}(\alpha)$ for all $\alpha$ may not be satisfied for many fuzzy-valued functions.
\item Hukuhara derivatives are based on the Hukuhara difference, which exists only in very restrictive situations.
\item Generalized H-derivatives are less restrictive, but under this notion of differentiability the level functions may not be differentiable.
\end{itemize}
Another motivation for the current study is the following: \\
Buckley and Feuring \cite{BU} introduced the BF-solution of non-homogeneous elementary fuzzy partial differential equations of the form $\phi(D_{t},D_{x})\tilde{U} = \tilde{F}(x,t, \tilde{K})$. If we consider a homogeneous fuzzy partial differential equation, i.e., $\tilde{F}(x, t, \tilde{K}) = \tilde{0}$, then we cannot apply the sufficient condition to find a BF-solution. \\
With the above motivations, we propose a new generalized Seikkala derivative of fuzzy-valued functions which is appropriate for the solution of fuzzy differential equations and is less restrictive. Moreover, we find solutions of the second-order homogeneous fuzzy wave equation based on the Seikkala solution approach. Using the generalized Seikkala derivative, we solve a fuzzy wave equation with specific fuzzy boundary and initial conditions whose crisp solution is expressed as a Fourier series. \\
The paper is organized as follows. \\
The basic concepts of fuzzy numbers are given in Sec. 2. The generalized Seikkala derivative (gS-derivative) of a fuzzy-valued function is proposed in Sec. 3. Properties and the relation between the gS-derivative and other derivatives are discussed in the same section. The generalized Seikkala partial derivatives are also proposed in that section. Sec. 4 deals with the solution of the fuzzy wave equation with specific fuzzy boundary and initial conditions. An analysis of the solution based on Fourier series is given in Sec. 5. We conclude in the last section.
\section{Fuzzy numbers and arithmetics}
We start with some basic definitions.
\label{sec:2}
\begin{definition}\label{def1} A fuzzy set $\tilde{a}$ with membership function $\tilde{a}:\mathbb{R} \to[0,1]$, where $\mathbb{R}$ is the set of real numbers, is called a fuzzy number if it is normal, upper semi-continuous, quasi-concave function and closure of the set $\{x \in \mathbb{R} / \tilde{a}(x) >0\}$ is compact. The set of all fuzzy numbers on $\mathbb{R}$ is denoted by $F(\mathbb{R})$.
\end{definition}
\begin{definition}
For all $\alpha \in (0,1]$, $\alpha$-level set $\tilde{a}_{\alpha}$ of any $\tilde{a}\in F(\mathbb{R})$ is defined as
\begin{eqnarray*}
\tilde{a}_{\alpha} = \{ x \in \mathbb{R}/ \tilde{a}(x)\geq \alpha \} .
\end{eqnarray*}
The 0-level set $\tilde{a}_{0}$ is defined as the closure of the set
\begin{eqnarray*}
\{x \in \mathbb{R} / \tilde{a}(x) >0\}.
\end{eqnarray*}
\end{definition}
The following theorem of Goetschel and Voxman \cite{GO} shows the characterization of a fuzzy number in terms of its $\alpha$-level sets.
\begin{theorem}\label{thm21}
For $\tilde{a} \in F(\mathbb{R})$, define two functions ${a}_{1}(\alpha),{a}_{2}(\alpha): [0,1] \to \mathbb{R}$ by $\tilde{a}_{\alpha} = [{a}_{1}(\alpha), {a}_{2}(\alpha)]$. Then
\begin{enumerate}
\item [(i)] {${a}_{1} (\alpha)$ is bounded left continuous non-decreasing function on (0,1];}
\item [(ii)] {${a}_{2}(\alpha)$ is bounded left continuous non-increasing function on (0,1];}
\item [(iii)] {${a}_{1}(\alpha)$ and ${a}_{2}(\alpha)$ are right continuous at $\alpha = 0$;}
\item [(iv)] {${a}_{1}(\alpha) \leq {a}_{2}(\alpha)$.}
\end{enumerate}
Moreover, if the pair of functions ${a}_{1}(\alpha)$ and ${a}_{2}(\alpha)$ satisfy the conditions (i)-(iv), then there exists a unique $\tilde{a} \in F(\mathbb{R})$ such that $\tilde{a}_{\alpha} = [{a}_{1}(\alpha), {a}_{2}(\alpha)]$, for each $\alpha \in [0,1]$.
\end{theorem}
\begin{definition}\label{def3} According to Zadeh's extension principle, scalar multiplication of fuzzy number $\tilde{a}$ with a scalar $\lambda \in \mathbb{R}$ by its $\alpha$-level sets is defined as follows:
\begin{eqnarray*}
(\lambda \odot \tilde{a})_{\alpha} & = &
[\lambda\cdot {a}_{1}(\alpha),\lambda\cdot {a}_{2}(\alpha)],~if~\lambda \geq 0 \\
& = &
[\lambda\cdot {a}_{2}(\alpha),\lambda\cdot {a}_{1}(\alpha)],~if~\lambda < 0,
\end{eqnarray*}
where the $\alpha$-level set of $\tilde{a}$ is $\tilde{a}_{\alpha} = [{a}_{1}(\alpha), {a}_{2}(\alpha)]$, for $\alpha \in [0,1]$.
\end{definition}
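The scalar multiplication in Definition~\ref{def3} acts endpoint-wise on the $\alpha$-level sets; a minimal Python sketch, using a hypothetical triangular fuzzy number only for illustration, reads as follows.
\begin{verbatim}
# alpha-level endpoints of a hypothetical triangular fuzzy number a = (1, 2, 4)
def a1(alpha): return 1.0 + alpha          # left endpoint, non-decreasing in alpha
def a2(alpha): return 4.0 - 2.0 * alpha    # right endpoint, non-increasing in alpha

def scalar_mult_level(lam, alpha):
    # alpha-level set of lam (.) a, following the definition above
    lo, hi = lam * a1(alpha), lam * a2(alpha)
    return (lo, hi) if lam >= 0 else (hi, lo)

for alpha in (0.0, 0.5, 1.0):
    print(alpha, scalar_mult_level(2.0, alpha), scalar_mult_level(-1.0, alpha))
\end{verbatim}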
The fuzzy-valued function is defined as follows:
\begin{definition} \label{def4}
A function $\tilde{f}: V \to F(\mathbb{R})$ is called a fuzzy-valued function, where $V$ is a real vector space. That is, for each $x \in V$, $\tilde{f}(x)$ is a fuzzy number. Corresponding to $\tilde{f}$ and $\alpha \in [0,1]$, we denote by ${f}_{1}(x,\alpha)$ and ${f}_{2}(x,\alpha)$ the real-valued functions on $V$ such that $(\tilde{f}(x))_{\alpha} = [{f}_{1}(x,\alpha), {f}_{2}(x,\alpha)]$ for all $x \in V$. These functions ${f}_{1}(x,\alpha)$ and ${f}_{2}(x,\alpha)$ are called the lower and upper $\alpha$-level functions of $\tilde{f}$, respectively.
\end{definition}
\section{Generalized Seikkala Derivatives}
Seikkala derivative of fuzzy-valued function is defined as follows. The definition is adopted from Seikkala (1987)\cite{SE}.
\begin{definition} \label{def3.1}
Let $I$ be subset of $\mathbb{R}$ and $\tilde{y}$ be a fuzzy-valued function defined on $I$. The $\alpha$-level sets $\tilde{y}_{\alpha}(t) = [y_{1}(t, \alpha), y_{2}(t, \alpha)]$ for $\alpha \in [0,1]$ and $t \in I$. We assume that derivatives of $y_{i}(t, \alpha)$, $i = 1, 2$ exist for all $t \in I$ and for each $\alpha$.
We define $(\tilde{y}^{\prime}(t))_{\alpha} = [y_{1}^{\prime}(t, \alpha), y_{2}^{\prime}(t, \alpha)]$ for all $t \in I$, all $\alpha$.
If, for each fixed $t \in I$, $(\tilde{y}^{\prime}(t))_{\alpha}$ defines the $\alpha$-level set of a fuzzy number, then we say that Seikkala derivative of $\tilde{y}(t)$ exists at $t$ and it is denoted by fuzzy-valued function $\tilde{y}^{\prime}(t)$.
\end{definition}
The Seikkala derivative involves two steps:
\begin{enumerate}
\item [(1)] Check whether both level functions are differentiable;
\item [(2)] Check whether the level sets of the derivatives define fuzzy numbers.
\end{enumerate}
Sufficient conditions for $(\tilde{y}^{\prime}(t))_{\alpha}$ to define $\alpha$-level sets of a fuzzy number are \cite{BU}:
\begin{enumerate}
\item [(i)] $y_{1}^{\prime}(t, \alpha)$ is an increasing function of $\alpha$ for each $t \in I$;
\item [(ii)] $y_{2}^{\prime}(t, \alpha)$ is a decreasing function of $\alpha$ for each $t \in I$;
\item [(iii)] $y_{1}^{\prime}(t, 1) \leq y_{2}^{\prime}(t, 1)$ for all $t \in I$.
\end{enumerate}
There are certain functions which arise in real situations but whose Seikkala derivatives do not exist. We consider two such examples.
\begin{example}\label{ex3.1}
Consider a fuzzy-valued function $\tilde{g}(t) = \tilde{a} \odot \exp(-t)$, $t \in \mathbb{R}$, where $\tilde{a}$ is a fuzzy number, with $\alpha$-level sets
\[(\tilde{g}(t))_{\alpha} = [g_{1}(t, \alpha), g_{2}(t, \alpha)] = [a_{1}(\alpha) \exp(-t), a_{2}(\alpha) \exp(-t)].\]
To check the Seikkala differentiability of the given fuzzy-valued function, we first check whether both of its level functions are differentiable.
We see that $g_{1}(t, \alpha)$ and $g_{2}(t, \alpha)$ are differentiable for $t \in \mathbb{R}$. Next, we check whether the level sets
\[(\tilde{g}^{\prime}(t))_{\alpha} = [g_{1}^{\prime}(t, \alpha), g_{2}^{\prime}(t, \alpha)] = [-a_{1}(\alpha)\exp(-t), -a_{2}(\alpha)\exp(-t)]\]
define a fuzzy number for each $t \in \mathbb{R}$. Checking the sufficient conditions for $(\tilde{g}^{\prime}(t))_{\alpha}$ to define the $\alpha$-level sets of a fuzzy number, namely,
\begin{enumerate}
\item [(i)] $g_{1}^{\prime}(t, \alpha)$ is an increasing function of $\alpha$ for each $t \in \mathbb{R}$;
\item [(ii)] $g_{2}^{\prime}(t, \alpha)$ is a decreasing function of $\alpha$ for each $t \in \mathbb{R}$; and
\item [(iii)] $g_{1}^{\prime}(t, 1) \leq g_{2}^{\prime}(t, 1)$, for all $t \in \mathbb{R}$,
\end{enumerate}
we see that
\[{{\partial g_{1}^{\prime}(t, \alpha)} / {\partial \alpha}} = -a_{1}^{\prime}(\alpha) \exp(-t) < 0\] as $a_{1}^{\prime}(\alpha) > 0$ and
\[{{\partial g_{2}^{\prime}(t, \alpha)}/ {\partial \alpha}} = -a_{2}^{\prime}(\alpha) \exp(-t) > 0\] as $a_{2}^{\prime}(\alpha) < 0$. Therefore $g_{1}^{\prime}(t, \alpha)$ is not an increasing function and $g_{2}^{\prime}(t, \alpha)$ is not a decreasing function. Hence, Seikkala derivative of $\tilde{g}$ does not exist.
\end{example}
We consider another example of a fuzzy-valued function, arising in the uncertain periodic motion of an object, whose Seikkala derivative does not exist.
\begin{example}\label{ex3.2}
Consider a fuzzy-valued function $\tilde{h}(t) = \tilde{a} \odot \sin(t)$, $t \in [0, \pi]$, where $\tilde{a}$ is a fuzzy number. The $\alpha$-level sets of $\tilde{h}(t)$ are $[a_{1}(\alpha) \sin(t), a_{2}(\alpha) \sin(t)]$. The level functions are differentiable, but their derivatives $h_{1}^{\prime}(t, \alpha) = a_{1}(\alpha) \cos(t)$ and $h_{2}^{\prime}(t, \alpha) = a_{2}(\alpha) \cos(t)$ do not define a fuzzy number for each $t \in [\pi/2, \pi]$, and hence $\tilde{h}$ is not Seikkala differentiable for $t \in [\pi/2, \pi]$.
\end{example}
To overcome this limitation, we define the generalized Seikkala derivative (gS-derivative) of a fuzzy-valued function as follows.
\begin{definition}\label{def3.2}
Let $I$ be a real interval. A fuzzy-valued function $\tilde{f}: I \to F(\mathbb{R})$ with $\alpha$-level sets
\[(\tilde{f}(t))_{\alpha} = [ f_{1}(t, \alpha), f_{2}(t, \alpha)],\]
for $t \in I$ and $ \alpha \in [0, 1]$, is said to have generalized Seikkala derivative $\tilde{f}^{\prime}(t)$ if $f_{1}(t, \alpha)$ and $f_{2}(t, \alpha)$ are differentiable for each $t \in I$ and
\[(\tilde{f}^{\prime}(t))_{\alpha} = [ \min \{f_{1}^{\prime}(t, \alpha), f_{2}^{\prime}(t, \alpha)\}, \max\{ f_{1}^{\prime}(t, \alpha), f_{2}^{\prime}(t, \alpha)\}], \]
for all $\alpha$ defines a fuzzy number for each $t \in I$.
\end{definition}
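To illustrate Definition~\ref{def3.2} numerically, the sketch below takes a hypothetical triangular fuzzy number $\tilde{a}=(1,2,4)$ and the function $\tilde{g}(t)=\tilde{a}\odot\exp(-t)$ of Example~\ref{ex3.1}: the raw pair $[g_1^{\prime},g_2^{\prime}]$ is not correctly ordered (so the Seikkala derivative fails), while the min/max pair of the gS-derivative yields valid level sets. The same computation is carried out analytically in Example~\ref{ex3.3} below.
\begin{verbatim}
import numpy as np

# Hypothetical triangular fuzzy number a = (1, 2, 4), for illustration only.
def a1(alpha): return 1.0 + alpha
def a2(alpha): return 4.0 - 2.0 * alpha

t = 0.5
for alpha in (0.0, 0.5, 1.0):
    g1p = -a1(alpha) * np.exp(-t)    # derivative of the lower level function
    g2p = -a2(alpha) * np.exp(-t)    # derivative of the upper level function
    seikkala_pair = (g1p, g2p)                   # here g1p > g2p: not a valid level set
    gS_pair = (min(g1p, g2p), max(g1p, g2p))     # valid level set of the gS-derivative
    print(alpha, seikkala_pair, gS_pair)
\end{verbatim}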
Biswas and Roy have defined a generalization of the Seikkala derivative in \cite{BI} (2018).
\begin{definition}\label{def3.3}
Let $\tilde{f} : (a, b) \to F(\mathbb{R})$ and $t_{0} \in (a, b)$. Then the generalized Seikkala derivative (gS-derivative) of $\tilde{f}(t)$ at $t_{0}$ is denoted $\tilde{f}^{\prime}(t_{0})$ and defined by
\begin{enumerate}
\item [(i)] if $f_{1}^{\prime}(t_{0}, \alpha)$, $f_{2}^{\prime}(t_{0}, \alpha)$ exist and $f_{1}^{\prime}(t_{0}, \alpha) \leq f_{2}^{\prime}(t_{0}, \alpha)$ then
\[
f_{\alpha}^{\prime}(t_{0}) = [f_{1}^{\prime}(t_{0}, \alpha), f_{2}^{\prime}(t_{0}, \alpha)]
\]
\item [(ii)] if $f_{1}^{\prime}(t_{0}, \alpha)$, $f_{2}^{\prime}(t_{0}, \alpha)$ exist and $f_{1}^{\prime}(t_{0}, \alpha) \geq f_{2}^{\prime}(t_{0}, \alpha)$ then
\[
f_{\alpha}^{\prime}(t_{0}) = [f_{2}^{\prime}(t_{0}, \alpha), f_{1}^{\prime}(t_{0}, \alpha)]
\]
\end{enumerate}
\end{definition}
The relation between Definitions \ref{def3.2} and \ref{def3.3} of generalized Seikkala differentiability is given below.
\begin{theorem}
Definitions \ref{def3.2} and \ref{def3.3} are equivalent.
\end{theorem}
\begin{proof}
The proof is straightforward and therefore omitted. \qed
\end{proof}
\begin{theorem}
If $\tilde{f}$ is Seikkala differentiable by Definition \ref{def3.1} then it is gS-differentiable by Definition \ref{def3.2} and \ref{def3.3}.
\end{theorem}
\begin{proof}
Since $\tilde{f}$ is S-differentiable, by definition the derivatives $f_{1}^{\prime}(t, \alpha)$ and $f_{2}^{\prime}(t, \alpha)$ exist and the set $(\tilde{f}^{\prime}(t))_{\alpha}$ defines a fuzzy number for each $t$ in the domain:
\[
(\tilde{f}^{\prime}(t))_{\alpha} = [f_{1}^{\prime}(t, \alpha), f_{2}^{\prime}(t,\alpha)],
\]
for all $\alpha \in [0,1]$. This satisfies Definition \ref{def3.3}, since $ f_{1}^{\prime}(t, \alpha) \leq f_{2}^{\prime}(t, \alpha)$. Now we write the above equation in the following way:
\[
(\tilde{f}^{\prime}(t))_{\alpha} = [\min\{f_{1}^{\prime}(t, \alpha), f_{2}^{\prime}(t,\alpha)\}, \max\{f_{1}^{\prime}(t, \alpha), f_{2}^{\prime}(t,\alpha)\}],
\]
for all $\alpha \in [0,1]$. Therefore, $\tilde{f}$ is gS-differentiable by Definition \ref{def3.2}. \qed
\end{proof}
\begin{remark}
If $\tilde{f}$ is gS-differentiable by Definition \ref{def3.2}, then it may not be S-differentiable by Definition \ref{def3.1}. For instance, the fuzzy-valued function $\tilde{g}(t) = \tilde{a} \odot \exp(-t)$, $t \in \mathbb{R}$, in Example \ref{ex3.1} is not S-differentiable. The following example shows that it is gS-differentiable by Definition \ref{def3.2}.
\end{remark}
We see that the uncertain functions defined in the above examples are gS-differentiable.
\begin{example} \label{ex3.3}
Consider the fuzzy-valued function $\tilde{g}(t) = \tilde{a} \odot \exp(-t)$, $t \in \mathbb{R}$, defined in Example \ref{ex3.1}. The derivatives of level functions of $\tilde{g}(t)$ are
\[ g_{1}^{\prime}(t, \alpha) = -a_{1}(\alpha) \exp(-t)\] and
\[ g_{2}^{\prime}(t, \alpha) = -a_{2}(\alpha) \exp(-t).\]
By definition of gS-differentiability, $\alpha$-level sets $(\tilde{g}^{\prime}(t))_{\alpha}$ defined as
\[(\tilde{g}^{\prime}(t))_{\alpha} = [\min\{-a_{1}(\alpha) \exp(-t), -a_{2}(\alpha) \exp(-t)\}, \max\{-a_{1}(\alpha) \exp(-t), -a_{2}(\alpha) \exp(-t)\}] \]
which is equal to
\[ (\tilde{g}^{\prime}(t))_{\alpha} = [-a_{2}(\alpha) \exp(-t), -a_{1}(\alpha) \exp(-t)]\]
as $- a_{2}(\alpha) \leq - a_{1}(\alpha)$ and $\exp(-t) \geq 0$, for all $t$. Therefore, $\tilde{g}$ is gS-differentiable with derivative $\tilde{g}^{\prime}(t) = -\tilde{a} \odot \exp(-t)$.
\end{example}
\begin{example}\label{ex3.4}
The fuzzy-valued function $\tilde{h}(t)$ defined in Example \ref{ex3.2} is gS-differentiable with derivative $\tilde{h}^{\prime}(t) = \tilde{a} \odot \cos(t)$. The $\alpha$-level sets of $\tilde{h}^{\prime}(t)$ are $[a_{1}(\alpha) \cos(t), a_{2}(\alpha) \cos(t)]$ for $t \in [0, \pi/2]$ and $[a_{2}(\alpha) \cos(t), a_{1}(\alpha) \cos(t)]$ for $t \in (\pi/2, \pi]$.
\end{example}
Now we define generalized Seikkala partial derivatives of two variables fuzzy-valued function $\tilde{f} : \mathbb{R}^{2} \to F(\mathbb{R})$.
\begin{definition}
Let $X$ be a subset of $\mathbb{R}^{2}$. A fuzzy-valued function $\tilde{f}: X \to F(\mathbb{R})$ with $\alpha$-level sets
\[(\tilde{f}(x, t))_{\alpha} = [ f_{1}(x, t, \alpha), f_{2}(x,t, \alpha)],\]
for $(x, t) \in X$ and $ \alpha \in [0, 1]$, is said to have the generalized Seikkala partial derivative ${\partial \tilde{f}} \over {\partial t}$ if both partial derivatives $ {\partial f_{1}(x, t, \alpha)} \over {\partial t}$ and ${\partial f_{2}(x, t, \alpha)} \over {\partial t}$ exist and are continuous for each $(x, t) \in X$, for all $\alpha$, and
\[\Big ({{\partial \tilde{f}} \over {\partial t}}\Big)_{\alpha} = \Big [ \min \Big \{ {{\partial f_{1}} \over {\partial t}}, {{\partial f_{2}} \over {\partial t}} \Big \}, \max \Big \{ {{\partial f_{1}} \over {\partial t}}, {{\partial f_{2}} \over {\partial t}} \Big \} \Big], \]
for all $\alpha$, defines a fuzzy number for each $(x, t) \in X$. In a similar way, we can define the generalized Seikkala partial derivative ${\partial \tilde{f}} \over {\partial x}$.
\end{definition}
\begin{example}
Consider a fuzzy-valued function $\tilde{f}: X \to F(\mathbb{R})$ by $\tilde{f}(t,x) = \tilde{c} \odot (e^{t}\sin{x})$, for $(x, t) \in \mathbb{R} \times [0, \pi]$. It is easily checked that the gS-partial derivative ${\partial \tilde{f}} \over {\partial x}$ of $\tilde{f}$ exists.
\end{example}
Now we define generalized Seikkala differentiability of fuzzy-valued function $\tilde{f}: X \subset \mathbb{R}^{2} \to F(\mathbb{R})$.
\begin{definition}
A fuzzy-valued function $\tilde{f}: X \to F(\mathbb{R})$ is said to be generalized Seikkala differentiable if both generalized Seikkala partial derivatives ${\partial \tilde{f}} \over {\partial x}$ and ${\partial \tilde{f}} \over {\partial t}$ exist and the fuzzy partial derivatives are continuous.
\end{definition}
The second order generalized Seikkala partial derivatives are defined as follows:
\begin{definition}
If a fuzzy-valued function $\tilde{f}: X \to F(\mathbb{R})$ is generalized Seikkala differentiable, then its second-order generalized Seikkala partial derivative ${\partial^2 \tilde{f}} \over {\partial t^2}$ exists if both partial derivatives $ {\partial^2 f_{1}} \over {\partial t^2}$ and ${\partial^2 f_{2}} \over {\partial t^2}$ exist and are continuous for each $(x, t) \in X$, for all $\alpha$, and
\[\Big ({{\partial^2 \tilde{f}} \over {\partial t^2}}\Big)_{\alpha} = \Big [ \min \Big \{ {{\partial^2 f_{1}} \over {\partial t^2}}, {{\partial^2 f_{2}} \over {\partial t^2}} \Big \}, \max \Big \{ {{\partial^2 f_{1}} \over {\partial t^2}}, {{\partial^2 f_{2}} \over {\partial t^2}} \Big \} \Big], \]
for all $\alpha$, defines a fuzzy number for each $(x, t) \in X$. In a similar way, we can define the other second-order generalized Seikkala partial derivatives.
\end{definition}
\section{Fuzzy wave equation}
\subsection{Fuzzy model}
A second-order partial differential equation that occurs extensively is the wave equation. Examples include the disruption of a body of fluid, the vibration of string instruments, the vibration of a membrane, and pressure perturbations in air. In these cases, if the amplitude of the disturbance is sufficiently small, the perturbation variable characterizing the disturbance will satisfy the wave equation. Under some physically sensible assumptions, the one-dimensional wave equation is derived as
\begin{equation}
{{\partial^2 u}\over{\partial t^2}} = c^2 {{\partial^2 u}\over{\partial x^2}},
\end{equation}
where $c$ is the wave speed and $u(x,t)$ is the displacement of the string. These assumptions make the parameters constant or precise. For instance, we assume that the string is uniform, that is, the mass per unit length is constant. To study the problem in a realistic sense, we consider imprecise variables and parameters. Modeling the wave equation with imprecise parameters and uncertain boundary and initial conditions leads to the fuzzy model of the wave equation:
\begin{equation}\label{eq2.1}
{{\partial^2 \tilde{u}}\over{\partial t^2}} = c^2 \odot {{\partial^2 \tilde{u}}\over{\partial x^2}},
\end{equation}
where $\tilde{u}(x,t)$ is the fuzzy displacement represented by a fuzzy number at each $(x, t) \in [0, L_{1}] \times [0, L_{2}]$, $ L_{1}, L_{2} > 0$. The fuzzy parameters involved in the boundary and initial conditions are also expressed as fuzzy numbers, and ${{\partial^2 \tilde{u}}\over{\partial t^2}}$ and ${{\partial^2 \tilde{u}}\over{\partial x^2}}$ represent second-order fuzzy partial derivatives of the fuzzy-valued function $\tilde{u}(x,t)$.
\subsection{Solution}
Elementary fuzzy partial differential equations were studied by Buckley and Feuring in \cite{BU}. They did not consider solutions in Fourier series form. Some researchers have studied the non-homogeneous fuzzy wave equation involving a constant $K$; the solution of that fuzzy wave equation fully depends on $K$. Here we consider the homogeneous fuzzy partial differential equation in the form of the fuzzy wave equation, which does not involve the constant $K$. It is observed that a BF-solution does not exist for the problem (\ref{eq2.1}), since we cannot apply the existence condition to find a solution to this problem. We find the Seikkala solution (S-solution) of (\ref{eq2.1}) using generalized Seikkala partial derivatives.
\subsection{Seikkala solution approach}
The fuzzy function $\tilde{u}(x,t)$ is an S-solution of problem (\ref{eq2.1}) if its generalized Seikkala partial derivatives exist and satisfy the equation. Let $\tilde{u}_{\alpha}(x,t) = [u_{1}(x,t,\alpha), u_{2}(x,t,\alpha)]$. We rewrite the equation (\ref{eq2.1}) as the system of crisp partial differential equations
\begin{equation}\label{eq4}
{{\partial^2 u_1(x,t,\alpha)}\over{\partial t^2}} = c^2 {{\partial^2 u_{1}(x,t,\alpha)}\over{\partial x^2}}
\end{equation}
\begin{equation}\label{eq5}
{{\partial^2 u_2(x,t, \alpha)}\over{\partial t^2}} = c^2 {{\partial^2 u_{2}(x,t,\alpha)}\over{\partial x^2}}
\end{equation}
for all $(x,t) \in I_{1} \times I_{2}$ and all $\alpha \in [0,1]$. The fuzzy boundary conditions are $\tilde{u} (0,t) = \tilde{C}_{1}$ and $\tilde{u} (L,t) = \tilde{C}_{2}$, $t > 0$ and fuzzy initial conditions are $\tilde{u} (x,0) = \tilde{f}(x)$ and $\tilde{u}_{t}(x,0) = \tilde{g}(x)$, $0 < x < L$, where $\tilde{C}_{1}$, $\tilde{C}_{2}$ are fuzzy numbers and $\tilde{f}(x)$, $\tilde{g}(x)$ are fuzzy-valued functions of $x$. We write boundary conditions in terms of $\alpha$-level sets as
\begin{equation}\label{bc1}
u_{1}(0,t,\alpha) = c_{11}(\alpha),~ u_{2}(0,t,\alpha) = c_{12}(\alpha),
\end{equation}
\begin{equation}\label{bc2}
u_{1}(L,t,\alpha) = c_{21}(\alpha),~ u_{2}(L,t,\alpha) = c_{22}(\alpha).
\end{equation}
The initial conditions are
\begin{equation}\label{ic1}
u_{1}(x,0, \alpha) = {f}_{1}(x, \alpha),~ u_{2}(x,0, \alpha) = {f}_{2}(x, \alpha),
\end{equation}
\begin{equation}\label{ic2}
{{\partial u_{1}(x,0, \alpha)} \over {\partial t}} = {g}_{1}(x, \alpha),~ {{\partial u_{2}(x,0, \alpha)} \over {\partial t}} = {g}_{2}(x, \alpha).
\end{equation}
Let $u_{i}(x,t, \alpha)$ solve equations (\ref{eq4}) and (\ref{eq5}) with boundary conditions (\ref{bc1}) and (\ref{bc2}) and initial conditions (\ref{ic1}) and (\ref{ic2}), $i = 1,2$. If
\begin{equation}
[u_{1}(x,t, \alpha), u_{2}(x,t, \alpha)]
\end{equation}
defines the $\alpha$-level set of a fuzzy number, for each $(x,t) \in I_{1} \times I_{2}$, then $\tilde{u}(x,t)$ is the S-solution.
\begin{remark}
The non-homogeneous elementary fuzzy partial differential equations which are solved in \cite{BU} using the BF-solution and the Seikkala solution (S-solution) can also be solved using gS-derivatives and the S-solution approach, because fuzzy-valued functions which are Seikkala differentiable are also generalized Seikkala differentiable, and therefore the S-solution exists.
\end{remark}
\section{Solution of a fuzzy wave equation involving Fourier series}
In this section, we find the solution of a fuzzy wave equation whose crisp solution is expressed in terms of Fourier series. Consider the fuzzy wave equation given in (\ref{eq2.1}) with fuzzy boundary conditions $\tilde{u}(0,t) = \tilde{u}(1,t) = \tilde{0} $ and fuzzy initial conditions $\tilde{u}(x,0) = \tilde{U_{0}}$, where $\tilde{U_{0}}$ is a fuzzy number, $\tilde{u}_{t}(x,0) = \tilde{0}$. First, we solve equations (\ref{eq4}) and (\ref{eq5}) subject to conditions
\begin{equation}
u_{i}(0,t,\alpha) = u_{i}(1,t, \alpha) = 0
\end{equation}
\begin{equation}
u_{i}(x,0,\alpha) = U_{0i}(\alpha), {{\partial u_{i}(x,0, \alpha)} \over {\partial t}} = 0,
\end{equation}
for $i = 1,2$. The solution is
\begin{equation}\label{sol1}
u_{i} (x,t, \alpha) = U_{0i}(\alpha) {{4} \over {\pi}} \sum_{n = 0}^{\infty} {{\sin{((2n+1)x)}} \over {(2n+1)}} \cos{((2n + 1)t)},
\end{equation}
for $i = 1,2$. The solution (\ref{sol1}) is obtained using the method of separation of variables.\\
If $[u_{1}(x,t, \alpha), u_{2}(x,t, \alpha)]$ defines $\alpha$-level sets of a fuzzy number for $x \in I_{1}$ and $t \in I_{2}$, then it is a fuzzy solution of (\ref{eq2.1}) with specified fuzzy boundary and initial conditions. Since $u_{i}(x,t, \alpha)$ are continuous and $u_{1}(x,t,1) = u_{2}(x,t,1)$, what we need to check is
${{\partial u_{1}} \over {\partial \alpha}} \geq 0 $ and ${{\partial u_{2}} \over {\partial \alpha}} \leq 0 $. Since $\tilde{U_{0}}$ is a fuzzy number, we have $U_{01}^{\prime}(\alpha) > 0 $ and $U_{02}^{\prime}(\alpha) < 0 $ (by Theorem \ref{thm21} and assumption of continuity of $U_{01}(\alpha)$ and $U_{02}(\alpha)$). For the S-solution to exist
\begin{equation}
{{\partial u_{1}} \over {\partial \alpha}} = U_{01}^{\prime}(\alpha){{4} \over {\pi}} \sum_{n = 0}^{\infty} {\sin{((2n+1)x)} \over {(2n+1)}} \cos{((2n + 1)t)}
\end{equation}
should be non-negative, for all $\alpha \in [0,1]$ and
\begin{equation}
{{\partial u_{2}} \over {\partial \alpha}} = U_{02}^{\prime}(\alpha){{4} \over {\pi}} \sum_{n = 0}^{\infty} {\sin{((2n+1)x)} \over {(2n+1)}} \cos{((2n + 1)t)}
\end{equation}
should be non-positive, for all $\alpha \in [0,1]$ and $(x,t) \in I_{1} \times I_{2}$. For that
\begin{equation}\label{e1}
z(x,t) = {{4} \over {\pi}} \sum_{n = 0}^{\infty} {\sin{((2n+1)x)} \over {(2n+1)}} \cos{((2n + 1)t)}
\end{equation}
should be non-negative for $(x,t) \in I_{1} \times I_{2} $.
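The analysis in the next subsection amounts to checking numerically where the partial sums of \eqref{e1} remain non-negative. A minimal Python sketch of this check is given below; the grid resolution is an arbitrary choice, and the printed values are only expected to roughly reproduce the intervals quoted in the following subsection.
\begin{verbatim}
import numpy as np

def z_partial(x, t, n_max):
    # Partial sum of (e1), keeping the terms n = 0, ..., n_max.
    ks = 2 * np.arange(n_max + 1) + 1
    return (4.0 / np.pi) * sum(np.sin(k * x) * np.cos(k * t) / k for k in ks)

pts = np.linspace(0.0, np.pi, 800)          # uniform grid on [0, pi]
X, T = np.meshgrid(pts, pts, indexing='ij')

for n_max in (0, 1, 2, 3):
    Z = z_partial(X, T, n_max)
    b = pts[-1]
    for i in range(1, len(pts)):
        # grow the square [0, pts[i]]^2 one border layer at a time
        if min(Z[i, :i + 1].min(), Z[:i + 1, i].min()) < -1e-12:
            b = pts[i - 1]
            break
    print(n_max, round(float(b), 3))        # largest square on which z >= 0
\end{verbatim}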
\subsection{Analysis of results}
When $n = 0$, i.e., when only the first term in the series is retained, we have
\[ z(x,t) = {{4} \over {\pi}} {\sin{(x)}} \cos{(t)}, \]
which is non-negative (see the surface $z(x,t)$ in \textbf{Fig. 1}). Hence the S-solution of the fuzzy wave equation exists for $x \in [0,\pi]$, $t \in [0,\pi/2]$ and it is given as
\begin{equation}\label{solw0}
\tilde{u}(x,t) = \tilde{U}_{0} \odot {{4} \over {\pi}} {{\sin{(x)}} \cos{(t)}},
\end{equation}
since the generalized Seikkala partial derivatives of $\tilde{u}(x,t)$ exist for $(x,t) \in [0, \pi] \times [0, \pi/2]$, whereas the Seikkala derivatives do not, because the derivative of $\sin(x)$ is negative in the interval $[\pi /2, \pi]$ and the derivative of $\cos(t)$ is also negative in $[0, \pi/ 2]$.
\begin{figure}[h]\label{fig1}
\begin{center}
\includegraphics[width=4.0in]{wave_zerom}
\caption{$z(x,t) = {{4} \over {\pi}} \sin{(x)} \cos{(t)}$}
\end{center}
\end{figure}
Now take $n = 1$; we have
\[
z(x,t) = {{4} \over {\pi}} \sum_{n = 0}^{1}{ {\sin{((2n+1)x)} \cos{((2n + 1)t)} } \over {(2n+1)}},
\]
which is non-negative for $x \in [0,0.78]$ and $t \in [0,0.78]$ (see \textbf{Fig. 2}). Therefore, the S-solution exists in this domain and is given by
\begin{equation}\label{solw1}
\tilde{u}(x,t) = \tilde{U}_{0} \odot {{4} \over {\pi}} \sum_{n = 0}^{1} {{\sin{((2n+1)x)} \cos{((2n+1)t)}} \over {(2n + 1)}}.
\end{equation}
\begin{figure}[h]\label{fig2}
\begin{center}
\includegraphics[width=4.0in]{wave_onem.eps}
\caption{$z(x,t) = {{4} \over {\pi}} \sum_{n=0}^{1}{{\sin{((2n+1)x)} \cos{((2n + 1)t)}}\over {(2n+1)}} $}
\end{center}
\end{figure}
For $n = 2$, we have
\[z(x,t) = {{4} \over {\pi}} \sum_{n = 0}^{2}{ {\sin{((2n+1)x)} \cos{((2n + 1)t)} } \over {(2n+1)}},
\] which is non-negative for $x \in [0,0.525]$, $t \in [0,0.525]$ (see \textbf{Fig. 3}). The S-solution is
\begin{equation}\label{solw2}
\tilde{u}(x,t) = \tilde{U}_{0} \odot {{4} \over {\pi}} \sum_{n = 0}^{2} {{\sin{((2n+1)x)} \cos{((2n+1)t)}} \over {(2n + 1)}}.
\end{equation}
\begin{figure}[h]\label{fig3}
\begin{center}
\includegraphics[width=4.0in]{wave_twom.eps}
\caption{$z(x,t) = {{4} \over {\pi}} \sum_{n=0}^{2}{\sin{((2n+1)x)} \cos{((2n + 1)t)} \over {(2n+1)}} $}
\end{center}
\end{figure}
For $n = 3$,
\[z(x,t) = {{4} \over {\pi}} \sum_{n = 0}^{3}{ {\sin{((2n+1)x)} \cos{((2n + 1)t)} } \over {(2n+1)}}
\]
is non-negative for $x \in [0,0.39996]$, $t \in [0,0.39996]$ (see \textbf{Fig. 4}). The S-solution exists in this domain and reads
\begin{equation}\label{solw3}
\tilde{u}(x,t) = \tilde{U}_{0} \odot {{4} \over {\pi}} \sum_{n = 0}^{3} {{\sin{((2n+1)x)} \cos{((2n+1)t)} \over {(2n + 1)}}}.
\end{equation}
But if we increase $x$ and $t$ slightly, $z(x,t)$ takes some negative values: for instance, consider $x \in [0,0.4]$, $t \in [0,0.4]$ and see the surface in \textbf{Fig. 5}. Therefore, the S-solution of the fuzzy wave equation exists in the domain $x \in [0,0.39996]$, $t \in [0,0.39996]$.
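As an illustration (not part of the original analysis), the largest square domain $[0,b]\times[0,b]$ on which a truncated $z(x,t)$ stays non-negative can be estimated by a simple grid scan, consistent with the domains quoted above; the scan range, grid resolution and tolerance below are arbitrary numerical choices.
\begin{verbatim}
import numpy as np

def z_partial(x, t, n_terms):
    # Partial sum z(x,t) = (4/pi) sum_{n=0}^{n_terms-1}
    #                      sin((2n+1)x) cos((2n+1)t) / (2n+1)
    n = np.arange(n_terms).reshape(-1, 1, 1)
    k = 2 * n + 1
    return 4 / np.pi * np.sum(np.sin(k * x) * np.cos(k * t) / k, axis=0)

def largest_square_domain(n_terms, b_max=1.5, nb=300, ngrid=150):
    # Largest b such that z >= 0 on a grid covering [0,b] x [0,b].
    # Since [0,b]^2 grows with b, the admissible b form an interval [0, b*].
    best = 0.0
    for b in np.linspace(1e-3, b_max, nb):
        x = np.linspace(0.0, b, ngrid)
        xx, tt = np.meshgrid(x, x)
        if z_partial(xx, tt, n_terms).min() >= -1e-12:
            best = b
    return best

for n_terms in (2, 3, 4):   # partial sums up to n = 1, 2, 3
    print(n_terms, largest_square_domain(n_terms))
\end{verbatim}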
\begin{figure}[h]\label{fig4}
\begin{center}
\includegraphics[width=4.0in]{wave_three_po.eps}
\caption{$z(x,t) = {{4} \over {\pi}} \sum_{n=0}^{3}{\sin{((2n+1)x)} \cos{((2n + 1)t)} \over {(2n+1)}} $}
\end{center}
\end{figure}
\begin{figure}[h]\label{fig5}
\begin{center}
\includegraphics[width=4.0in]{wave_three_ne.eps}
\caption{$z(x,t) = {{4} \over {\pi}} \sum_{n=0}^{3}{\sin{((2n+1)x)}\cos{((2n + 1)t)} \over {(2n+1)}} $}
\end{center}
\end{figure}
\section{Conclusions}
We introduced new generalized Seikkala derivatives of fuzzy-valued functions and studied the solution of a fuzzy wave equation whose crisp solution involves a Fourier series. We conclude the following:
\begin{itemize}
\item [(a)] A larger class of fuzzy-valued functions is generalized Seikkala differentiable.
\item [(b)] Homogeneous fuzzy partial differential equations cannot be solved using the Buckley--Feuring approach; the Seikkala solution approach is applicable in this situation.
\item [(c)] Using the Seikkala solution approach and generalized Seikkala partial derivatives, we proposed the solution of a fuzzy wave equation whose crisp solution is expressed in terms of a Fourier series.
\item [(d)] As the number of Fourier-series terms retained in the fuzzy solution increases, the domain of the fuzzy solution shrinks.
\end{itemize}
\section{Derivation of the flow equations}
\label{app_flow}
In this appendix we derive the flow of the non-linear function $A_k(\phi)$, defined in Eq.~(\ref{eq_Ak}). Having in mind this definition, we use the Wetterich equation~(\ref{eq_Wetterich}) to deduce the following equality:
\begin{align}
\begin{split}
&\partial_k \text{FT} \left( \dfonc{\Gamma_k}{\tilde \phi (z)} \right) (\bm p) = -\frac{1}{2} \, \text{Tr} \int_{\bm k_1,\bm q_1,\bm q_2} \partial_k \mathcal{R}_k (\bm k_1) \\
& \cdot G_k(\!-\bm k_1,\!-\bm q_1; \phi ) \cdot \Gamma_{k,\tilde\psi}^{(3)}(\bm q_1,\bm q_2,\bm p) \cdot G_k(-\bm q_2,\bm k_1;\phi ) \, ,
\end{split}
\end{align}
where $\int_{\bm q}\equiv 1/(2\pi)^{d+1} \int_{q,\omega}\mathrm{d}^{d-1}q_\bot \, \mathrm{d} q_\parallel \, \mathrm{d} \omega $, and $\Gamma_{k,\tilde\psi}^{(3)} \equiv \delta \Gamma^{(2)}_k/\delta \tilde\psi$ reads:
\begin{align}
\Gamma_{k,\tilde\psi}^{(3)}(\bm q_1,\bm q_2,\bm p)\!=\!\!\left(
\begin{array}{cc}\!\! p_{\parallel}^2 \, \text{FT}(A_k''(\phi))(\bm q_1+\bm q_2+\bm p) & 0 \\ 0 & 0 \end{array}
\right) \, .
\end{align}
Notice that we keep the same name for a function and its Fourier transform, such that a function $f(\bm q)$ has to be understood as the Fourier transform of $f(\bm x)$, and we recall the convention: $f(\bm q) = \int_{\bm x} f(\bm x) \mathrm{e}^{-i(q x - \omega t)}$.
In order to get the flow of $A_k$, one now has to take the derivative of the previous expression with respect to $p_{\parallel}^2$, and then to evaluate it at $\bm p=0$ and uniform field $\phi$. Since $\Gamma^{(3)}_k(\bm q_1,\bm q_2,\bm p) \propto p_{\parallel}^2$, the whole expression is proportional to $p_{\parallel}^2$, and the only non-vanishing term after differentiation and evaluation at zero external momentum ($\bm p=0$) is the one obtained by differentiating $\Gamma^{(3)}_k(\bm q_1,\bm q_2,\bm p)$ with respect to $p_{\parallel}^2$ and evaluating every other Fourier transform at $\bm p=0$. This means that one can already perform the evaluation at constant field, which drastically simplifies the computation. One therefore gets:
\begin{align}
\begin{split}
\partial_k A_k &= -\frac{1}{2} \, \text{Tr} \int_{\bm q_1} \partial_k \mathcal{R}_k (\bm q_1) \\
&\cdot G_k(-\bm q_1;\phi) \cdot
\left(
\begin{array}{cc} A_k''(\phi) & 0 \\ 0 & 0 \end{array}
\right)
\cdot G_k(\bm q_1;\phi) \, ,
\end{split}
\end{align}
where the full propagator $G_k$ is now evaluated at uniform field and reads:
\begin{align}
G_k(\bm q;\phi) = \left(
\begin{array}{cc} \frac{2 W(\omega)}{P(q^2,\omega)P(q^2,-\omega)} & \frac{1}{P(q^2,-\omega)} \\ \frac{1}{P(q^2,\omega)} & 0 \end{array}
\right) \, ,
\end{align}
with $P(q^2,\omega)=R_k(q_\parallel^2,q_\bot^2)+q_\bot^2+q_\parallel^2 A_k'(\phi)+ i \omega $, and $W(\omega)=1$ for an isotropic noise, and $W(\omega)=\delta(\omega)$ for a static noise. After performing the matrix product and the trace, the integration over the frequencies is straightforward and yields for the flow of $A_k$:
\begin{align}
\begin{split}
&\partial_k A_k = -\frac{(3\kappa-2) K_d}{2} \times \\
&\int_{|q_\bot|=0}^{\infty} \mathrm{d} |q_\bot| \int_{q_\parallel=-\infty}^\infty \mathrm{d} q_\parallel \, \frac{\partial_k R_k(q_\parallel^2,|q_\bot|^2) \, |q_\bot|^{d-2} A_k''(\phi) }{\left( R_k(q_\parallel^2,|q_\bot|^2)+|q_\bot|^2+q_\parallel^2 A_k'(\phi) \right)^{1+\kappa}}
\label{eq_dkAdimensionfull}
\end{split}
\end{align}
where $\kappa=1$ for an isotropic noise, and $\kappa=2$ for a static noise, and where $K_d=(2^{d-1}\pi^{d/2}\Gamma(d/2))^{-1}=S_{d-1}/(2\pi)^d$ with $S_d$ the surface of the $d$-dimensional unit hypersphere. Notice that we have used the rotational invariance in the transverse direction to rewrite the integral over $q_\bot$ as an integral over its norm. Finally, one performs the change of variables $q_\parallel = \sqrt{y}\cos(\theta)$ and $q_\bot = \sqrt{y}\sin(\theta)$ with $y\in [0,\infty[$ and $\theta \in [0,\pi]$. If we furthermore choose the regulator $R_k$ to be a function of $y=q_\bot^2+q_\parallel^2$ only, we can write:
\begin{align}
R_k(q_\parallel^2,|q_\bot|^2)= y k^2 r(y) \, ,\label{eq_regu}
\end{align}
with $r(y)$ the usual momentum regulator, for example an exponential regulator:
\begin{align}
r(y) = \frac{a}{\mathrm{e}^y-1} \, ,
\label{eq_reguExp}
\end{align}
where $a$ is a free parameter. Finally, using the dimensionless variables defined in Eq.~(\ref{eq_adimensionalisation}), the particular form of the regulator~(\ref{eq_regu}), and Eq.~(\ref{eq_dkAdimensionfull}), one obtains the dynamical part of the flow, Eq.~(\ref{eq_flowAdyn}).
\section{Retrieving the one-loop perturbative results}
\label{app_perturbative}
To retrieve the perturbative results from~\cite{pastor-satorras1998,pastor-satorras1998a}, and from~\cite{antonov2017a}, we first evaluate the previous equations at the upper critical dimension $d_c$, which depends on the noise type: $d_c^{\, \text{stat}}=4$ for a static noise, and $d_c^{\, \text{iso}}=2$ for an isotropic noise. We define accordingly $\epsilon=d_c-d$.
\subsection{Pastor-Satorras and Rothman's results}
The equations derived in~\cite{pastor-satorras1998} are retrieved by performing a lowest-order expansion of the function $\hat A(\hat\phi)$:
\begin{align}
\hat A(\hat\phi) = \hat \phi + \frac{\hat a_{3}}{3!} \hat\phi^3 \, , \label{eq_SatorExpansion}
\end{align}
where $\hat a_{1}\equiv 1$ by definition of the anomalous dimension $\eta_A$. Then, taking derivatives of the flow equation~(\ref{eq_flowA}), and evaluating them at $\hat \phi = 0$, one finds:
\begin{align}
\eta_A &= \frac{\epsilon}{2} + \frac{3\pi K_d}{8}\hat{a}_3 \label{eq_SatorEtaA} \, ,\\
k\partial_k \hat{a}_3 &= - \epsilon \hat{a}_3+\frac{3\pi K_d}{2}\hat{a}_3^2 \, . \label{eq_SatorA3}
\end{align}
Notice that at first order in the $\epsilon$-expansion, the integrals of the dynamical part of the flow can be computed analytically at $d=d_c^{\, \text{stat}}$ or $d=d_c^{\, \text{iso}}$. Moreover, at first order in the $\epsilon$-expansion, one notices that the flow equations do not depend on the precise shape of the regulator $r(y)$. Finally, the definition of the coefficient of the cubic term in $\hat A$, $\hat{a}_3$, differs from that of~\cite{pastor-satorras1998}, and the relation between the two is $\hat a_{3}=2\lambda$. Their dimensionless parameter $\bar \lambda$ is also proportional to ours and we have the following relation between the two: $\hat a_3 = 2 (2\pi)^{d-1}/S_{d-1} \bar \lambda$ where $S_d$ is the surface area of a $d$-dimensional unit sphere. Up to these notational differences, and up to a factor $-1$ which comes from the fact that their equations are derived for the real-space variable $l$, whereas ours are derived for the momentum $k$, Eq.~(\ref{eq_SatorA3}) is indeed equivalent to their Eq.~(6) in~\cite{pastor-satorras1998}. We also agree with their results for the roughness (and anisotropy) exponents, and the stable fixed point of Eqs.~(\ref{eq_SatorEtaA}) and (\ref{eq_SatorA3}) indeed yields:
\begin{align}
\alpha \equiv (4\kappa-2d-\eta_A^*)/3 = \frac{5}{12} \epsilon \, .
\end{align}
We still emphasize that this result is not correct, even for $\epsilon \to 0$, because the expansion~(\ref{eq_SatorExpansion}) discards an infinity of equally relevant coupling constants and is thus not valid.
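As a simple algebraic cross-check of Eqs.~(\ref{eq_SatorEtaA})--(\ref{eq_SatorA3}) and of the value $\alpha=5\epsilon/12$ (within this truncation, whose limitations were just emphasized), one can solve for the nontrivial fixed point symbolically; the following sketch is purely illustrative.
\begin{verbatim}
import sympy as sp

# Fixed point of the truncated one-loop flow, Eqs. (SatorEtaA)-(SatorA3),
# and the resulting roughness exponent alpha = (4*kappa - 2*d - eta*)/3
# evaluated at d = d_c - epsilon with d_c = 2*kappa.
eps, Kd, kappa = sp.symbols('epsilon K_d kappa', positive=True)
a3 = sp.symbols('a3')

eta = eps / 2 + 3 * sp.pi * Kd / 8 * a3
beta3 = -eps * a3 + 3 * sp.pi * Kd / 2 * a3 ** 2

a3_star = [r for r in sp.solve(beta3, a3) if r != 0][0]
eta_star = sp.simplify(eta.subs(a3, a3_star))            # -> 3*epsilon/4
d = 2 * kappa - eps
alpha = sp.simplify((4 * kappa - 2 * d - eta_star) / 3)  # -> 5*epsilon/12
print(a3_star, eta_star, alpha)
\end{verbatim}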
\subsection{Antonov and Kakin's results}
Following~\cite{antonov2017a}, we set $\kappa=1$ (isotropic noise), $d_c=d_c^{\, \text{iso}}=2$ and we expand the function $\hat A(\hat\phi)$ as
\begin{align}
\hat A(\hat\phi) = \hat\phi + \sum_{i=2}^\infty \frac{\hat a_{i}}{i!} \hat\phi^i \, .
\end{align}
Notice that $\hat A$ is not an odd function of $\hat\phi$. Again, taking derivatives of the flow equation~(\ref{eq_flowA}), and evaluating them at $\hat \phi = 0$, we are able to retrieve the equations derived in~\cite{antonov2017a}, except that we do not agree on their integration over the momenta. Indeed, in~\cite{antonov2017a}, the integration over the momenta $\int \mathrm{d} \bm k$ seems to be performed as if $\bm k$ were isotropic, yielding a factor $S_d$, whereas we argued it should be a factor $S_{d-1}$. A factor $\pi$ coming from the integration over the angle $\theta$ is also missing. Up to this difference and notational discrepancies, our flow equations are in one-to-one agreement with the $\beta$ functions of~\cite{antonov2017} (those of the first article~\cite{antonov2017a} involved a misprint in the $\beta_2$ function).
Notice also that contrary to what is stated in~\cite{antonov2017a}, taking $\hat a_{i}=0$ for all $i\neq3$ makes the RG equations of~\cite{antonov2017a} boil down to those of~\cite{pastor-satorras1998,pastor-satorras1998a} (up to the factor coming from the momentum integration discussed in the previous paragraph).
\section{Exact solution of the fixed point equation for $\eta_A^*=0$}
\label{app_fixedPoint}
In the special case $\eta_A^*=0$ (which is not interesting for the physics since it means $\zeta=1/3<1$), the fixed point equation associated with the flow~(\ref{eq_flowAlitim}) can be solved exactly. Indeed, one can show that $\hat A(\hat \phi)$ satisfies the simple first-order differential equation:
\begin{align}
4 \left(2 \hat \phi^2+5\right)^2 (\hat A')^3-\left(9 \hat A'+1\right)^2=0 \, ,
\end{align}
which can be solved exactly in terms of an integral over an algebraic integrand. In this special case, we therefore have a proof that a well-defined function exists on the whole real axis.
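This fixed point is also easy to reconstruct numerically (a sketch, with an illustrative grid and root bracket): for each $\hat\phi$ one solves the relation above for $\hat A'(\hat\phi)$, whose unique positive root follows from Descartes' rule of signs, and then integrates.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def A_prime(phi):
    # Unique positive root of 4*(2*phi^2+5)^2 u^3 - (9u+1)^2 = 0
    f = lambda u: 4 * (2 * phi ** 2 + 5) ** 2 * u ** 3 - (9 * u + 1) ** 2
    return brentq(f, 1e-9, 2.0)

phis = np.linspace(0.0, 10.0, 2001)
Ap = np.array([A_prime(p) for p in phis])
# A(phi) by trapezoidal integration, using A(0) = 0
A = np.concatenate(([0.0],
                    np.cumsum(0.5 * (Ap[1:] + Ap[:-1]) * np.diff(phis))))
print(Ap[0])   # = 1, consistent with the normalization A'(0) = 1
\end{verbatim}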
Moreover, this function is in fact also a solution of a \emph{linear} ordinary differential equation of order 4, on which the study of the singularities can be performed. The main singularity lies at $\hat \phi^2 = -5/2$ and not on the real axis. Thus, at least in this case, the series expansion around $\hat\phi=0$ of the fixed point solution coincides with the fixed point solution, although it has a finite radius of convergence, $R=\sqrt{5/2}$.
Although it is difficult to extrapolate this result to the physically interesting values of $\eta_A^*$, we have nonetheless checked that our numerical integration of the fixed point equation for $\eta_A^*=0$ matches this exact result.
\section{Introduction}
Landscapes are known to exhibit scale invariance~\cite{dodds2000}, and Mandelbrot even considered the length of a coastline to introduce his notion of fractal dimension~\cite{mandelbrot1982}: seen from far away, the coast displays bays and peninsulas, and reveals more and more sub-bays and sub-peninsulas as one looks closer and closer at it. The self-similarity of branching river networks -- where brooks merge into creeks that become streams flowing to form rivers -- is also a well-known fact in geomorphology, and has led to several phenomenological scaling laws~\cite{horton1945,rodriguez-iturbe2001,somfai1997}.
Our interest in the following will be erosional landscapes such as mountain ranges, that are also scale invariant~\cite{newman1990}. This scale invariance is made obvious when one studies the roughness of a surface, given by the height-height correlation function:
\begin{align}
C(r) = \sqrt{\left\langle |h(x+r)- h(x)|^2 \right\rangle} \, ,
\end{align}
where $\left\langle \cdot \right\rangle$ denotes a spatial average (over $x$). Various empirical measurements show that this correlation function scales as $C(r) \sim |r|^\alpha$, where $\alpha$ is known as the roughness exponent.
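To make this definition concrete, the roughness exponent can be extracted from height data by a power-law fit of $C(r)$; the following minimal sketch does this on a synthetic one-dimensional self-affine profile generated by Fourier filtering (all parameters are illustrative, and this is not data from the measurements cited below).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, alpha_in = 2 ** 14, 0.7

# Synthetic self-affine profile: a power spectrum S(q) ~ |q|^-(1+2*alpha)
# corresponds to roughness (Hurst) exponent alpha in one dimension.
q = np.fft.rfftfreq(N)
amp = np.zeros_like(q)
amp[1:] = q[1:] ** (-(1 + 2 * alpha_in) / 2)
phases = np.exp(2j * np.pi * rng.random(q.size))
h = np.fft.irfft(amp * phases, n=N)

# Height-height correlation C(r) = sqrt(<|h(x+r)-h(x)|^2>) and its exponent
rs = np.unique(np.logspace(0, np.log10(N // 8), 20).astype(int))
C = np.array([np.sqrt(np.mean((np.roll(h, -r) - h) ** 2)) for r in rs])
alpha_fit = np.polyfit(np.log(rs), np.log(C), 1)[0]
print(alpha_in, round(alpha_fit, 2))
\end{verbatim}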
Although the scaling behaviour of erosional landscapes is a well-documented fact (from field measurements~\cite{mark1984,czirok1993,norton1989,ouchi1992,matsushita1989,chase1992}, laboratory experiments~\cite{hasbargen2000,paola2009} or numerical simulations~\cite{kim2000,caldarelli1997}), an unambiguous and unique value of the roughness exponent $\alpha$ remains elusive. In fact, from the large amount of experimental data available, two features can be extracted: (i) the roughness exponent has a large variability, and it seems to span the whole range between $\alpha \simeq 0.2$ and $\alpha \simeq 1$, and (ii) there is a tendency to find larger values of the roughness exponent ($0.70 \lesssim \alpha \lesssim 0.95$) at intermediate length scales ($\lesssim3$~km), and smaller values ($0.20 \lesssim \alpha \lesssim 0.60$) at larger length scales~\cite{mark1984,chase1992,kalda2003}.
Because of the complexity and variety of the erosion mechanisms (rainfalls and storms, freezing events and changes in temperature, chemical erosion, landslides and avalanches, etc.~\cite{kukal1990}), a model stemming from these mechanisms is out of reach. However, the scale invariance displayed by these systems suggests that the intermediate and large scale physics of these systems is, at least to a large extent, independent of the smallest scale details, and that a simple phenomenological model that would capture the relevant elements could be sufficient to reproduce this power-law behaviour and predict the value of the roughness exponent.
So far, some necessary elements for this self-similarity to emerge have already been identified~\cite{sornette1993}. First, the transport of eroded material by diffusion of the soil must of course be taken into account. In some simple cases such as river deltas, diffusion by itself can be sufficient to explain the delta front profile~\cite{kenyon1985}. However, the nontrivial scaling property of the correlation function $C(r)$ in eroding landscapes is not reproduced with this sole ingredient. Then, a phenomenological noise term taking into account most of the underlying stochastic phenomena contributing to the erosion must be included~\cite{sornette1993,pelletier2007}. Combining diffusion and noise, one gets the Edwards-Wilkinson noisy diffusion equation. Although this equation yields scale invariance in $d=1$ with a nonvanishing value of $\alpha$, this property is lost in higher dimensions since $\alpha=(2-d)/2\leq 0$ in this model for all $d\geq 2$~\cite{edwards1982}. Finally, some nonlinearity in the model is mandatory to explain the occurrence of nontrivial values of $\alpha$~\cite{newman1990,roering1999}. The combination of these three elements is the minimal one required to get scaling features in an erosion model.
Amongst the equations displaying the features highlighted above, the Kardar-Parisi-Zhang (KPZ) equation stands out of the crowd~\cite{kardar1986}. First derived and famous in the context of surface growth, the KPZ equation is also thought to describe isotropic erosion of landscapes~\cite{sornette1993}, and predicts $\alpha \simeq 0.4$ in $d=2$~\cite{kloss2012,kelling2011}.
However, although a description of erosion by the KPZ equation seems satisfactory for large scale landscapes, where erosion is indeed isotropic and where the KPZ prediction for the roughness exponent $\alpha$ seems to meet the experimental data (for which $0.20 \lesssim \alpha \lesssim 0.60$), it is not the case for intermediate length scales, where erosion occurs along a preferred direction (the slope of the mountain), and the KPZ equation -- which is isotropic -- fails to capture this important additional ingredient and underestimates the roughness exponent (which is of order $0.70 \lesssim \alpha \lesssim 0.95$)~\cite{dodds2000}. In addition, the KPZ equation is also nonconservative, a feature that is not realistic for smaller scale erosion~\cite{pelletier2007}.
To bridge this gap, Pastor-Satorras and Rothman suggested a nonlinear yet conservative description, and to add anisotropy on top of the three main ingredients discussed above~\cite{pastor-satorras1998,pastor-satorras1998a}. Their perturbative Renormalization Group (RG) analysis that retains only one coupling constant yields exponents in surprisingly good agreement with field measurements. Unfortunately, a recent paper by Antonov and Kakin~\cite{antonov2017a} revealed a mistake in their analysis, showing that there is not a single but infinitely many relevant coupling constants in the theory, which invalidates their results. Antonov and Kakin are however unable to predict the value of the roughness exponent~$\alpha$, but they suggest that the correct model has a line of fixed points, and therefore possibly a continuous range of values for $\alpha$ if this line is attractive, which they cannot show. Moreover, Antonov and Kakin's paper is focused only on a single type of noise (the isotropic noise, which we describe in more details in the following), while Pastor-Satorras and Rothman studied in addition a more interesting model involving a static noise. In this second model, it is not known whether a line of fixed points also exists.
In this paper, we tackle the anisotropic erosion model with two different kinds of noise using the nonperturbative RG (NPRG)~\cite{berges2002} (for an introduction, see~\cite{gies2012,delamotte2012}), which is perfectly suited for studying a model involving infinitely many coupling constants, since the NPRG is functional in essence. We do agree with Antonov and Kakin about the infinite number of coupling constants involved in the model and with the fact that any truncation retaining only a finite number of them yields wrong predictions in the case of the isotropic noise. We show in addition that this conclusion holds for the two types of noise.
Furthermore, we are able to integrate numerically the flow equation, and find that in the case of the static noise, there indeed exists for this model an interval of stable fixed points in the case of physical interest $d=2$. This interval shrinks to a single fixed point, the trivial Edwards-Wilkinson fixed point, in the case of an isotropic noise. This result is of course in marked disagreement with those of~\cite{pastor-satorras1998,pastor-satorras1998a} and in partial disagreement with those of~\cite{antonov2017a} in which it is argued that the isotropic noise case could yield nontrivial exponents.
Moreover, although we are not able to predict whether the whole line of fixed points can be reached from realistic initial conditions, the very existence of this line of fixed points could be a first step to explain the large variability observed in the experimental values of the roughness exponent $\alpha$. Let us also emphasize that despite the very simple formulation of this erosion model, its RG equation displays very interesting features that we describe in the following.
\section{Anisotropic erosion model}
We briefly recall the main features of the model defined in \cite{pastor-satorras1998a}. Our aim is to describe the erosion -- that is the evolution of the height $h(\vec x,t)$ -- of a surface with a fixed mean tilt (e.g. the slope of a mountain) which introduces an intrinsic anisotropy in the model. This preferred direction is determined by a unit vector that we denote $\vec e_\parallel$. Thus, the $d$-dimensional horizontal position $\vec x$ can be decomposed as $\vec x = \vec x_\bot + x_\parallel \vec e_\parallel$ with $\vec x_\bot \cdot \vec e_\parallel = 0$, and $\vec x_\bot$ is therefore a $(d-1)$-dimensional vector. We also define the derivative in the slope direction as $\partial_\parallel\equiv \partial/\partial x_\parallel$ and in the transverse direction as $\nabla_\bot \equiv (\partial/ \partial x_{\bot,i})$ with $i=1 \dots (d-1)$.
The equation derived by Pastor-Satorras and Rothman in \cite{pastor-satorras1998,pastor-satorras1998a} to describe the evolution of the height profile is a minimal Langevin equation that takes into account diffusion, nonlinearity, noise, and anisotropy. It reads:
\begin{align}
\partial_t h (\bm x)\! = \! \nu_\parallel \partial_\parallel^2 h(\bm x) +\! \nu_\bot \nabla_\bot^2 h(\bm x) +\! \partial_\parallel^2 B(h(\bm x)) +\! \xi(\bm x)
\label{eq_Langevin}
\end{align}
where $\bm x \equiv (\vec x,t)$, the function $B(h)$ is an odd function of the height $h$ and represents the non-linearity, and $\xi$ is a stochastic noise. As usual, the above Langevin equation has to be understood in the It\=o sense. The non-linear function $B(h)$ takes into account the fact that the flow of water carrying the soil -- and thus responsible for the erosion -- increases with the slope, and is therefore stronger in the downhill direction $\vec e_\parallel$.
The noise probability distribution $P(\xi)$ is
\begin{align}
P(\xi ) \propto \mathrm{e}^{-\frac{1}{4D}\int _{\bm x,t'} W(t-t') \xi (x,t) \xi (x,t') }
\end{align}
with $\int _{\bm x} \equiv \int \mathrm{d}^d x \,\mathrm{d} t$ (notice that we now drop the arrow above the spatial vector $\vec x$ to alleviate the notation), and the noise correlations are:
\begin{align}
\left\langle \xi (\bm x) \xi (\bm{x'}) \right\rangle = 2D\, W(t-t') \delta ^d (x-x') \, ,
\end{align}
where $W(t-t')=1$ for a static noise, and $W(t-t')=\delta (t-t')$ for an isotropic noise. In this model, the choice of the noise is paramount~\cite{pelletier2007}, since different noises will lead to different universality classes~\cite{caldarelli1997}, different critical dimensions, and therefore either to a trivial ($\alpha=0$), or non-trivial roughness exponent in $d=2$ as we will see in the following. A static (or quenched) noise $W(t-t')=1$ expresses the fact that different types of soil (with varying erodibility) may be present initially, whereas a thermal (or isotropic) noise $W(t-t')=\delta (t-t')$ is more suited for mimicking the action of rainfalls over the eroding land. As will be shown in the following, the former leads to a nontrivial roughness exponent in $d=2$, whereas the latter results in smooth landscapes.
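Before moving on to the field theory, it may help intuition to note that Eq.~(\ref{eq_Langevin}) can be simulated directly. The sketch below is a minimal explicit Euler--Maruyama scheme in $d=2$, assuming for illustration the simplest odd nonlinearity beyond the linear term, $B(h)=\lambda h^3$, an isotropic (white-in-time) noise, periodic boundaries, and $\nu_\parallel=\nu_\bot=D=1$; it is not the discretization used in the numerical studies cited above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, dx, dt, steps, lam, D = 64, 1.0, 0.01, 2000, 0.1, 1.0
h = np.zeros((L, L))          # h[i_par, i_perp]

def d2_par(f):                # second derivative along the slope direction
    return (np.roll(f, 1, 0) - 2 * f + np.roll(f, -1, 0)) / dx ** 2

def d2_perp(f):               # second derivative in the transverse direction
    return (np.roll(f, 1, 1) - 2 * f + np.roll(f, -1, 1)) / dx ** 2

for _ in range(steps):
    # white noise with <xi xi> = 2D delta^d(x-x') delta(t-t'), discretized
    xi = rng.normal(0.0, np.sqrt(2 * D * dt / dx ** 2), size=h.shape)
    h += dt * (d2_par(h) + d2_perp(h) + d2_par(lam * h ** 3)) + xi

print("interface width:", h.std())
\end{verbatim}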
From the Langevin equation (\ref{eq_Langevin}), an equivalent field theory can be derived using the Martin--Siggia--Rose--de Dominicis--Janssen (MSRDJ) approach \cite{martin1973,janssen1976,dedominicis1976}. In this formalism, the mean value (over the different realizations of the noise) of a given observable $\mathcal{O}[h]$ is given by:
\begin{align}
\left\langle \mathcal{O}[h] \right\rangle _{\xi} &=\int \mathcal{D} h \mathcal{D} \tilde{h} \, \mathrm{e} ^{-\mathcal{S}[h,\tilde{h}]} \mathcal{O}[h] \label{eq_sansJacob}
\end{align}
with the action
\begin{align}
\begin{split}
\mathcal{S}[h,\tilde{h}] = \int _{\bm x} \tilde{h} &
\left( \partial_t h - \nu_\parallel \partial_\parallel^2 h - \nu_\bot \nabla_\bot^2 h - \partial_\parallel^2 B(h) \right) \\
& - \int_{x,t,t'} W(t-t') \tilde{h}(x,t) \tilde h(x,t')
\, .
\label{eq_action}
\end{split}
\end{align}
Notice that within this formalism the functional integral over $\tilde{h}$ (which is called the ``response'' field) is performed along the imaginary axis, whereas $h$ is a real field. Notice also that up to a rescaling of the time $t$, of the longitudinal direction $x_\parallel$, and of the fields $\tilde h$ and $h$, one can set $\nu_\parallel = \nu_\bot= D =1$, which is the normalization we keep in the following and which simplifies the symmetry analysis.
\section{Nonperturbative RG}
In this section we describe briefly the implementation of the nonperturbative RG (NPRG) formalism in the context of a nonequilibrium model~\cite{canet2007,canet2011a}. As in equilibrium statistical physics, the starting point of the field theory is the analog of the partition function associated with the previous action $\mathcal S$ defined in Eq.~(\ref{eq_action}), and which reads:
\begin{align}
\mathcal{Z}[j,\tilde{j}] = \int \mathcal{D} h \mathcal{D}\tilde{h} \,
\mathrm{e}^{-\mathcal{S}+\int_{\bm x}J(\bm x)^T \cdot H (\bm x)}
\end{align}
where we use a matrix notation and define the following vectors
\begin{align}
H (\bm x)=
\left(
\begin{array}{c} h(\bm x) \\ \tilde{h}(\bm x) \end{array}
\right)
\quad \text{and} \quad
J(\bm x)=
\left(
\begin{array}{c} j(\bm x) \\ \tilde{j}(\bm x) \end{array}
\right) \, .
\end{align}
As in equilibrium, the generating functional of the connected correlation and response functions is $\mathcal{W}[J] = \log \mathcal{Z}[J]$. We also introduce its Legendre transform, the generating functional of the one-particle irreducible correlation functions $\Gamma[\Phi]$, where $\Phi= \left\langle H \right\rangle$.
In order to determine the effective action $\Gamma$, we apply the NPRG formalism and write a functional differential equation which interpolates between the microscopic action~$\mathcal{S}$ and the effective action~$\Gamma$. The interpolation is performed through a momentum scale $k$ and by integrating over the fluctuations with momenta $\vert q\vert>k$, while those with momenta $\vert q\vert<k$ are frozen. At scale $k=\Lambda$, where $\Lambda$ is the ultra-violet cutoff imposed by the (inverse) microscopic scale of the model (e.g. the lattice spacing), all fluctuations are frozen and the mean-field approximation becomes exact; at scale $k \to 0$, all the fluctuations are integrated over and the original functional $\mathcal{Z}$ is recovered. The interpolation between these scales is made possible by using a regulator $\mathcal{R}_k(\bm x)$, whose role is to freeze out all the fluctuations with momenta $\vert q\vert<k$. This regulator is introduced by adding an extra term to the action and thus defining a new partition function $\mathcal Z_k$:
\begin{align}
\mathcal{Z}_k[j,\tilde{j}] = \int \mathcal{D} h \mathcal{D}\tilde{h} \,
\mathrm{e}^{-\mathcal{S}-\Delta \mathcal S_k+\int_{\bm x}J(\bm x)^T \cdot H (\bm x)}
\label{eq_partitionFunction}
\end{align}
with
\begin{align}
\Delta\mathcal{S}_k= \frac{1}{2}\int_{\bm x,\bm{x'}} H(\bm x)^T \cdot \mathcal{R}_k(\bm x-\bm{x'}) \cdot H (\bm{x'})
\label{eq:def_regulator}
\end{align}
where $\mathcal{R}_k$ is a $2\times 2$ regulator matrix, depending both on space and time, and whose task is to cancel slow-mode fluctuations.
Let us first recall that the MSRDJ formalism together with It\=o's prescription does not allow for a term in the action not proportional to the response field $\tilde h$. This implies that there is no cutoff term in the $h-h$ direction, and the regulator matrix defined in Eq.~(\ref{eq:def_regulator}) can be written in full generality as
\begin{align}
\mathcal{R}_k(\bm x)=\left(
\begin{array}{cc} 0 & R_{1,k}( x,t) \\ R_{1,k}( x,-t) & 2 R_{2,k}( x,t) \end{array}
\right) \, ,
\end{align}
where the minus sign in $R_{1,k}(x,-t)$ is a consequence of $\Delta \mathcal{S}_k$ being written in a matrix form and the factor 2 in front of $R_{2,k}$ has been included for convenience.
In the following, we only consider a space regulator, that is, a regulator which is trivial in the time direction, and we also discard the noise modification $R_{2,k}$ (see \cite{duclut2017} for further discussion of a frequency regulator), such that
\begin{align}
\mathcal{R}_k(\bm x)=\left(
\begin{array}{cc} 0 & R_k(x) \delta(t) \\ R_k(x) \delta(t) & 0 \end{array}
\right) \, .
\end{align}
In this paper we use the $\Theta$-regulator which allows for an analytic computation of the integrals over momentum, and which is defined in Fourier space as
\begin{align}
R_k(q) &= (k^2-q^2) \Theta (k^2-q^2)
\label{eq_Litim}
\end{align}
where $\Theta (q)$ is the Heaviside step-function ($\Theta (q<0)=0$ and $\Theta (q\geq0)=1$). Notice that we have kept the same name for the function and its Fourier transform, which is defined as:
\begin{align}
f(\bm q) \equiv \int_{\bm x} f(\bm x) \, \mathrm{e}^{-i(q x - \omega t)} \, .
\label{eq_FTconvention}
\end{align}
We also define the effective average action $\Gamma_k$ as a modified Legendre transform of $\mathcal{W}_k[J] = \log \mathcal{Z}_k[J]$~\cite{wetterich1993}:
\begin{align}
\begin{split}
&\Gamma_k[\Phi]+\mathcal{W}_k[J] = \\
&\int_{\bm x} J^T\cdot \Phi - \frac{1}{2} \int_{\bm x,\bm{x'}} \Phi(\bm x)^T \cdot \mathcal{R}_k(\bm x-\bm{x'}) \cdot \Phi (\bm{x'})
\label{eq_LegendreTransform}
\end{split}
\end{align}
in such a way that $\Gamma_k$ coincides with the action at the microscopic scale ($\Gamma_{k=\Lambda}=\mathcal S$) and with $\Gamma$ at $k=0$ ($\Gamma_{k=0}=\Gamma$), when all fluctuations have been integrated over. The evolution of the interpolating functional $\Gamma_k$ between these two scales is given by the Wetterich equation~\cite{wetterich1993,morris1994}:
\begin{align}
\partial_k \Gamma_k [\Phi] &= \frac{1}{2} \text{Tr} \int_{\bm x,\bm{x'}} \partial_k \mathcal{R}_k(\bm x-\bm{x'}) \cdot G_k [\bm x,\bm{x'};\Phi]
\label{eq_Wetterich}
\end{align}
where $G_k [\bm x,\bm{x'};\Phi] \equiv [ \Gamma_k^{(2)}+\mathcal{R}_k]^{-1}[\bm x,\bm{x'};\Phi]$ is the full, field-dependent, propagator and $\Gamma_k^{(2)}$ is the $2 \times 2$ matrix whose elements are the $\Gamma_{k,ij}^{(2)}$ defined such that:
\begin{align}
\Gamma_{k,i_1,\cdots,i_n}^{(n)}[{\bm x_i};\Phi] &= \frac{\delta^n \Gamma_k[\Phi]}{\delta \Phi_{i_1}(\bm x_1)\cdots \delta \Phi_{i_n}(\bm x_n)} \, .
\end{align}
The Wetterich equation~(\ref{eq_Wetterich}) represents an exact flow equation for the effective average action $\Gamma_k$, which we solve approximately by restricting its functional form. We use in the following the derivative
expansion (DE): instead of following the full $\Gamma_k$ along the flow, only the first terms of its series expansion in space and time derivatives of $\Phi$ are retained. This method is very efficient and has led, both at and out of equilibrium, to many accurate and original results \cite{canet2003a,canet2005a,kloss2014,*canet2011,*canet2012,delamotte2004,benitez2008,caffarel2001,*holovatch2004,*peles2004,*delamotte2004a,canet2004,*canet2004a,*canet2003,*canet2005,tissier2010,tissier2008,*tissier2012,*tissier2012a}.
The terms retained in this derivative expansion have to be consistent with the symmetries of the action~$\mathcal S$, and we therefore discuss them before giving an explicit ansatz for $\Gamma_k$.
\section{Symmetries}
In order to find a meaningful and simple ansatz for the effective average action $\Gamma_k$, we start by studying the symmetries of the action. We consider the following gauged shift symmetry:
\begin{align}
\tilde{h}'(\bm x) = \tilde{h}(\bm x) + \varepsilon (x_\bot,t)
\label{eq_symmetry}
\end{align}
where $\varepsilon$ is an arbitrary infinitesimal function. The action~(\ref{eq_action}) is not strictly invariant under the transformation~(\ref{eq_symmetry}), but since the variation of the action under this transformation is linear in the fields, it still yields useful Ward identities~\cite{canet2011,canet2016}. Under the transformation~(\ref{eq_symmetry}), the integral~(\ref{eq_partitionFunction}) remains unchanged, which yields:
\begin{align}
\begin{split}
\int_{\bm x} &\left[ \tilde j \varepsilon - \varepsilon \partial_t \left\langle h \right\rangle + \varepsilon \nabla_\bot^2 \left\langle h \right\rangle - \int_{\bm x'} \varepsilon R_k \left\langle h \right\rangle \right] \\
& + 2 \int_{\bm x,t'} W(t-t')\, \varepsilon(x_\bot,t) \left\langle \tilde h(x,t') \right\rangle = 0 \, .
\end{split}
\end{align}
Notice that we have integrated by parts the terms involving a derivation with respect to $x_\parallel$, and that the boundary terms that result from this integration by parts vanish because of the symmetry $x_\parallel \to - x_\parallel$.
Then, using the definition~(\ref{eq_LegendreTransform}) of the modified Legendre transform to eliminate the external field $\tilde j$, and using the fact that, by definition, $\left\langle h \right\rangle = \phi$ and $\langle \tilde h \rangle = \tilde \phi$, the previous expression becomes:
\begin{align}
\int_{\bm x} \! \left[ \dfonc{\Gamma_k}{\tilde \phi} \!-\! \partial_t \phi \!+\! \nabla_\bot^2 \phi \!+\!2 \!\! \int_{t'} \! W(t-t') \tilde \phi(x,t') \right] \! \varepsilon(x_\bot,t)\! =\! 0\, .
\end{align}
Since this equality is true for any function $\varepsilon(x_\bot,t)$, it means that the Fourier transform [defined in Eq.~(\ref{eq_FTconvention})] of the term inside the brackets vanishes at $q_\parallel=0$. Consequently, at $q_\parallel=0$, the functional
\begin{align}
\Gamma_k\! - \!\! \int_{\bm q} \!\! \tilde \phi (-\bm q) \! \left[ -i\omega +q_\bot^2 \right] \phi(\bm q) +\!\int_{\bm q}\!\! W(\omega) \tilde \phi(-\bm q) \tilde \phi(\bm q)
\end{align}
vanishes under transformation~(\ref{eq_symmetry}). It finally means that only the terms $\int \tilde\phi \, \partial_\parallel^2 \phi$ and $\int \tilde\phi \, \partial_\parallel^2 B(\phi)$ [which are invariant under~(\ref{eq_symmetry})] are renormalized, while the terms $\int \tilde \phi \partial_t \phi$, $\int \tilde \phi \nabla_{\bot}^2 \phi$, and $\int W(t-t') \tilde{\phi}(x,t) \tilde \phi(x,t')$ are not. Thus, at lowest order in the space and time derivatives, the most general ansatz for the effective average action $\Gamma_k[\phi,\tilde \phi]$ reads
\begin{align}
\begin{split}
\Gamma_k[\phi,\tilde \phi] = \int_{x,t} &\tilde \phi(x,t) \left[
\partial_t \phi - \nabla_{\bot}^2 \phi - \partial_{\parallel}^2 A_k(\phi) \right] \\
&- \int_{x,t,t'} W(t-t') \tilde{\phi}(x,t) \tilde \phi(x,t') \, .
\label{eq_Gammak}
\end{split}
\end{align}
We conclude that at this order only one function, $A_k(\phi)$, has a nontrivial renormalization flow that we derive in the following.
\section{Upper critical dimension and controversies}
Before deriving the flow equation and giving the results using the NPRG, we discuss here the upper critical dimension of this model, and try to clarify the misunderstanding about the relevance of some operators. First, depending on the nature of the noise, isotropic or static, the model has different upper critical dimensions. This upper critical dimension is $d_c^{\, \text{stat}}=4$ in the case of a static noise, and $d_c^{\, \text{iso}}=2$ in the case of an isotropic noise, as already stated in~\cite{pastor-satorras1998}.
The computation of the upper critical dimension is made very simple once the model has been cast into its simplest form~(\ref{eq_Gammak}) using symmetry considerations. From this equation, we find that the engineering dimension of the field $\phi$ (expressed in momentum scale) is:
\begin{align}
[\phi] = \frac{2(d-2\kappa)}{3}
\end{align}
where $\kappa = 1$ for an isotropic noise, and $\kappa=2$ for a static noise. Therefore, a coupling constant in front of a $\phi^i$ term is irrelevant for $d>d_c=2\kappa$, which indeed yields the previous upper critical dimensions.
However, the important and surprising feature of this model is that, exactly at the upper critical dimension $d=d_c^{\, \text{stat}}$ or $d=d_c^{\, \text{iso}}$, the dimension of the field $\phi$ vanishes, meaning that all terms $\int_{\bm x} \tilde\phi\,\partial_{\parallel}^2\phi^n$ coming from the expansion of the function $A_k(\phi)$ in Eq.~(\ref{eq_Gammak}) are equally relevant, as pointed out in~\cite{antonov2017a,antonov2017} in the isotropic case. It therefore invalidates the whole approach of~\cite{pastor-satorras1998,pastor-satorras1998a} since infinitely many coupling constants were discarded. We indeed show in the following that truncating the function $A_k$ greatly modifies the physics and the computation of the critical exponent of the model.
\section{Flow equation}
We now compute the flow of the function $A_k(\phi)$, which we define as:
\begin{align}
A_k(\phi) = \frac{1}{\Omega} \left. \left( \partial_{p_{\parallel}^2} \text{FT} \left( \dfonc{\Gamma_k}{\tilde \phi (z)} \right) (\bm p) \right) \right|_{\phi(x,t)=\phi,\bm p=0}
\label{eq_Ak}
\end{align}
where $\Omega$ is the volume of the system, and $\text{FT}(f)(\bm q)$ refers to the Fourier transform of the function $f(\bm x)$ with the convention~(\ref{eq_FTconvention}). Notice that one has to evaluate this expression at constant field \textit{after} having taken the derivative with respect to $p_{\parallel}^2$. This is unusual in the NPRG context, and we therefore give slightly more details on the derivation of the flow in Appendix~\ref{app_flow}. In order to find a fixed point of the RG flow, one has to write the flow equation in terms of dimensionless variables. We define them in the following way:
\begin{subequations}
\begin{empheq}{align}
\hat{x}_\bot &= k \,x_\bot \\
\hat t &= k^{2}\,t \\
\hat{A}(\hat{\phi}) &= \bar{A}_k^{-1} A_k(\phi) \\
\hat{x}_\parallel &= k^{1+(d-2\kappa)/3} \bar{A}_k^{-2/3} \,x_\parallel \\
\hat{\phi} &= k^{(4\kappa-2d)/3} \bar{A}_k^{1/3} \,\phi\\
\hat{\tilde{\phi}} &= k^{2(\kappa-d)/3} \bar{A}_k^{1/3} \,\tilde{\phi} %
\end{empheq}%
\label{eq_adimensionalisation}%
\end{subequations}%
where we define the running coefficient $\bar{A}_k$ such that $\hat A'(\hat \phi=0)\equiv 1$, where the prime denotes differentiation with respect to $\hat\phi$. In the critical regime, this running coefficient is expected to behave as a power law, $\bar{A}_k \sim k^{-\eta_A^*}$, and we therefore define a running exponent $\eta_A(k) = - k \partial_k \ln \bar A_k$ such that $\eta_A(k=0)\equiv \eta_A^*$. The roughness exponent $\alpha$ and the anisotropy exponent $\zeta$ correspond respectively to the anomalous dimension of the field $\phi$ and to that of the longitudinal direction $x_{\parallel}$. They can thus be expressed in terms of the fixed-point value $\eta_A^*$ as
\begin{align}
\alpha &\equiv (4\kappa-2d-\eta_A^*)/3 \, , \label{eq_alpha}\\
\zeta &\equiv 1+(d+2\eta_A^*-2\kappa)/3 \, .\label{eq_zeta}
\end{align}
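For instance, for the static noise ($\kappa=2$) in $d=2$, a fixed point with $\eta_A^*=2$ would correspond to $\alpha=2/3$ and $\zeta=5/3$; the following minimal helper (an illustration, not part of the original analysis) implements Eqs.~(\ref{eq_alpha}) and (\ref{eq_zeta}):
\begin{verbatim}
def exponents(eta_star, kappa, d):
    # Roughness and anisotropy exponents, Eqs. (eq_alpha) and (eq_zeta)
    alpha = (4 * kappa - 2 * d - eta_star) / 3
    zeta = 1 + (d + 2 * eta_star - 2 * kappa) / 3
    return alpha, zeta

print(exponents(2.0, kappa=2, d=2))   # -> (0.666..., 1.666...)
\end{verbatim}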
The flow of the function $\hat A(\hat \phi)$ can be split into two parts:
\begin{align}
k\partial_k \hat{A}(\hat{\phi}) = k\partial_k \hat{A}(\hat{\phi})|_{\text{dim}} + k\partial_k \hat{A}(\hat{\phi})|_{\text{dyn}}
\label{eq_flowA}
\end{align}
where the dimensional part of the flow $k\partial_k \hat{A}(\hat{\phi})|_{\text{dim}}$ directly follows from the previous definitions~(\ref{eq_adimensionalisation}) and reads:
\begin{align}
k\partial_k \hat{A}(\hat{\phi})|_{\text{dim}} &= \eta_A \hat{A}(\hat{\phi}) +\frac{2d+\eta_A-4 \kappa}{3} \hat{\phi} \hat{A}'(\hat{\phi}) \, ,
\end{align}
while the dynamical part of the flow is derived in Appendix~\ref{app_flow} and reads:
\begin{align}
\begin{split}
&k\partial_k \hat{A}(\hat{\phi})|_{\text{dyn}} = \frac{(3\kappa-2) K_d}{2} \times \\ &\int_{y=0}^{\infty} \mathrm{d} y \int_{\theta=0}^\pi \mathrm{d}\theta \, \frac{y^{d/2-\kappa}\sin(\theta)^{d-2} r'(y) \hat{A}''(\hat{\phi})}{\left(r(y)+\sin^2\theta+ \hat{A}'(\hat{\phi})\cos^2\theta \right)^{1+\kappa}}
\label{eq_flowAdyn}
\end{split}
\end{align}
where $K_d=(2^{d-1}\pi^{d/2}\Gamma(d/2))^{-1}=S_{d-1}/(2\pi)^d$ with $S_d$ the surface of the $d$-dimensional unit hypersphere.
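For a given value of $\hat\phi$ (that is, given numbers $\hat A'(\hat\phi)$ and $\hat A''(\hat\phi)$), the double integral in Eq.~(\ref{eq_flowAdyn}) can be evaluated numerically; the sketch below assumes the exponential regulator~(\ref{eq_reguExp}) and truncates the $y$ integral at a large but finite value, which is an arbitrary numerical choice.
\begin{verbatim}
import numpy as np
from math import gamma, pi
from scipy.integrate import dblquad

def flow_dyn(Ap, App, d=2.0, kappa=2.0, a=1.0):
    # Dynamical part of the flow, Eq. (eq_flowAdyn), with r(y) = a/(e^y - 1)
    Kd = 1.0 / (2 ** (d - 1) * pi ** (d / 2) * gamma(d / 2))
    r = lambda y: a / np.expm1(y)
    rp = lambda y: -a * np.exp(y) / np.expm1(y) ** 2
    def integrand(theta, y):      # inner variable first for dblquad
        num = y ** (d / 2 - kappa) * np.sin(theta) ** (d - 2) * rp(y) * App
        den = (r(y) + np.sin(theta) ** 2
               + Ap * np.cos(theta) ** 2) ** (1 + kappa)
        return num / den
    # y integral cut at y = 50 (the regulator decays exponentially);
    # the lower bound is shifted slightly to avoid the y = 0 endpoint
    val, _ = dblquad(integrand, 1e-8, 50.0, 0.0, pi)
    return (3 * kappa - 2) * Kd / 2 * val

print(flow_dyn(Ap=1.0, App=0.5))
\end{verbatim}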
Moreover, the definition of the running anomalous dimension $\eta_A(k)$ provides us with the additional equation $k \partial_k \hat{A}'(0) =0$, which yields:
\begin{align}
\begin{split}
&\eta_A = \kappa -d/2-\frac{3 (3\kappa-2) K_d}{8 \hat{A}'(0)} \times \\
& \int_{y=0}^{\infty} \mathrm{d} y \int_{\theta=0}^\pi \mathrm{d}\theta \, \frac{y^{d/2-\kappa}\sin(\theta)^{d-2} r'(y) \hat{A}'''(0)}{\left(r(y)+\sin^2\theta+ \hat{A}'(0)\cos^2\theta\right)^{1+\kappa}} \, .
\label{eq_flowEtaA}
\end{split}
\end{align}
Notice that in Eqs.~(\ref{eq_flowA}) and (\ref{eq_flowEtaA}) the dimension $d$, as well as the nature of the noise $\kappa$ are real parameters that can be chosen at will. Starting from the flow equations (\ref{eq_flowA}) to (\ref{eq_flowEtaA}), one can easily retrieve the one-loop perturbative results obtained in \cite{antonov2017a}, and the truncated results of~\cite{pastor-satorras1998,pastor-satorras1998a}; this is explained in Appendix~\ref{app_perturbative}.
Notice that in the case of static noise, in $d=2$ and with the $\Theta$ regulator~(\ref{eq_Litim}), the flow equation~(\ref{eq_flowA}) can be rewritten in a much simpler form:
\begin{align}
k\partial_k \hat{A} = \eta_A \hat{A}+ \frac{\eta_A-4}{3}\hat\phi \hat{A}'-\frac{(1+3\hat{A}')\hat{A}''}{4 (\hat{A}')^{3/2}}
\label{eq_flowAlitim}
\end{align}
where we have omitted the argument of $\hat A$ and its $k$-dependence for convenience.
\section{Line of fixed points}
We now study the properties of the flow equation~(\ref{eq_flowA}). Notice that at the fixed point (namely when $\hat{A}(\hat{\phi})=\hat{A}^*(\hat{\phi})$ such that $k\partial_k \hat{A}^*(\hat{\phi})=0$), the flow equation provides us with an iterative scheme for computing the derivatives $\hat{A}^{*(j)}(0)\equiv a_j$ for all $j$. Indeed, at the fixed point and evaluated at $\hat{\phi}=0$, the derivatives of Eq.~(\ref{eq_flowA}) can be rewritten as:
\begin{subequations}%
\begin{empheq}{align}%
& f_3 (\eta_A^*, a_3) = 0 \\
& f_5 (\eta_A^*, a_3, a_5) = 0 \\
& f_7 (\eta_A^*, a_3, a_5, a_7) = 0 \\
& \hspace{1.3cm} \vdots \nonumber
\end{empheq}%
\end{subequations}%
where the ${f_i}$ are linear functions of their last argument. For instance, for the static noise in $d=2$ and with the $\Theta$~regulator~(\ref{eq_Litim}), the previous equations yield:
\begin{subequations}%
\begin{empheq}{align}%
& a_3 = \frac{4}{3}(\eta_A^*-1) \\
& a_5 = \frac{4}{3} (\eta_A^*-1) (5\eta_A^*-7) \label{eq_a5}\\
& \hspace{1.7cm} \vdots \nonumber
\end{empheq}%
\end{subequations}%
Therefore, provided that the Taylor expansion of $\hat A^*(\hat\phi)$ around $\hat\phi=0$ can be analytically continued to the whole real axis, a line of fixed points parametrized by the values of $\eta_A^*$ exists, as claimed in~\cite{antonov2017a}. On the other hand, notice that a truncation of $\hat A$ at any finite order will \emph{not} yield a line of fixed points. For instance, writing $\hat A=\hat \phi + a_3/3! \, \hat \phi^3$ means that the coefficient $a_5$ vanishes, which yields $\eta_A^*=1$ or $\eta_A^*=7/5$ according to Eq.~(\ref{eq_a5}). Instead of improving the accuracy of $\eta_A^*$, increasing the order of the truncation will rather yield more and more (different) fixed points, some stable and some unstable. The correct picture is therefore only accessible when the problem is tackled functionally, that is, with the full function $\hat A(\hat \phi)$.
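The relations quoted above for $a_3$ and $a_5$ are straightforward to reproduce; the following sketch (illustrative, using sympy) expands the right-hand side of Eq.~(\ref{eq_flowAlitim}) around $\hat\phi=0$ with an odd polynomial ansatz and recovers them.
\begin{verbatim}
import sympy as sp

phi, eta = sp.symbols('phi eta')
a3, a5 = sp.symbols('a3 a5')

# Odd ansatz A = phi + a3 phi^3/6 + a5 phi^5/120 plugged into the RHS of
# Eq. (eq_flowAlitim); at a fixed point every Taylor coefficient vanishes.
A = phi + a3 * phi ** 3 / 6 + a5 * phi ** 5 / 120
Ap, App = sp.diff(A, phi), sp.diff(A, phi, 2)
rhs = eta * A + (eta - 4) / 3 * phi * Ap \
      - (1 + 3 * Ap) * App / (4 * Ap ** sp.Rational(3, 2))
poly = sp.expand(sp.series(rhs, phi, 0, 5).removeO())

sol3 = sp.solve(poly.coeff(phi, 1), a3)[0]
sol5 = sp.solve(poly.coeff(phi, 3).subs(a3, sol3), a5)[0]
print(sp.factor(sol3))   # 4*(eta - 1)/3
print(sp.factor(sol5))   # 4*(eta - 1)*(5*eta - 7)/3
\end{verbatim}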
Studying these fixed points and their stability numerically is non-trivial, as we show in the following, but simple physical arguments already allow us some comments: (i)~the line of fixed points is bounded from above in all dimensions because the roughness exponent $\alpha$ is positive, and we therefore deduce from Eq.~(\ref{eq_alpha}) that $\eta_A^* \leq 2 (2\kappa -d)$; (ii) the anisotropy exponent $\zeta$ characterizes the ratio between the roughness exponent in the transverse direction, $\alpha_\bot\equiv \alpha$, and the roughness exponent in the parallel direction $\alpha_\parallel$~\cite{pastor-satorras1998,pastor-satorras1998a}. In our anisotropic model, we expect this ratio to be larger than 1, i.e., $\zeta\geq 1$, which translates for $\eta_A^*$ as [using Eq.~(\ref{eq_zeta})]: $\eta_A^* \geq (2\kappa-d)/2$.
The first inequality is directly encoded in the flow equation since there exists no scaling solution (of the form $\hat A^*(\hat\phi) \sim \hat\phi^\gamma$ at large field) of the fixed point equation~(\ref{eq_flowA}) when $\eta_A^*$ is such that $\alpha<0$. The second inequality also has a signature in the flow equation, more precisely on the scaling form of the fixed point function $\hat A^* (\hat \phi)$: indeed, studying Eq.~(\ref{eq_flowA}) at large field, one finds that the fixed point function should scale as
\begin{align}
\hat{A}^*(\hat{\phi}) \subrel{\hat{\phi}\to \infty}{\sim} \hat{\phi}^{\,\gamma} \quad \text{with} \quad \gamma = \frac{3\eta_A^*}{4\kappa-2d-\eta_A^*}
\label{eq_largePhi}
\end{align}
and the inequality $\zeta\geq 1$ excludes fixed-point functions $\hat{A}^*(\hat{\phi})$ that are sub-linear at large field; such a behaviour is not unphysical, but it simply does not correspond to the model that we study, where we expect the non-linearity to dominate at large field. These considerations allow us to discard the isotropic noise ($\kappa=1$) since in dimension $d=2=d_c^{\, \text{iso}}$ (the physical dimension of our problem), the only value of $\alpha$ that satisfies both inequalities is the trivial Edwards-Wilkinson exponent $\alpha=0$. Within this erosion model, an isotropic noise can therefore not explain the observed landscape roughness; see Fig.~\ref{fig_etaVSd}.
\begin{figure}[t!]
\centering
\subfigure[ ]
{\includegraphics[width=0.45\textwidth]{figure1a.pdf}}
\subfigure[ ]
{\includegraphics[width=0.45\textwidth]{figure1b.pdf}}
\caption{Critical exponent $\eta_A$ for isotropic (a) and static (b) noises as a function of the physical space dimension $d$. Recall that for landscape erosion, the dimension of interest is $d=2$. The upper colored region is unphysical ($\alpha<0$). Its lower boundary is the Edwards-Wilkinson fixed point with $\alpha=0$. The bottom region is the physical yet uninteresting region for which the anisotropy exponent $\zeta$ is lower than 1. In this region, the function behaves like $\hat A_k^*\sim \hat \phi^\gamma$ as $\hat \phi\to\infty$, with $\gamma <1$, and the system does not display the kind of nonlinearity we were looking for. The blank region in between is therefore the interesting region for our model; it shrinks to a single point at the upper critical dimension, $d_c^{\,\text{iso}}=2$ (a), or $d_c^{\,\text{stat}}=4$ (b). In the case of the static noise (b), we see that there is an interval of fixed points (red line) in $d=2$.}
\label{fig_etaVSd}
\end{figure}
\section{Numerical solution}
\begin{figure}[t!]
\centering
\subfigure[ \label{fig_plateaus1a} ]
{\includegraphics[width=0.45\textwidth]{figure2a.pdf}}
\subfigure[ \label{fig_plateaus1b} ]
{\includegraphics[width=0.45\textwidth]{figure2b.pdf}}
\caption{RG flows ($s=\log (k/\Lambda)$ is the RG time) of the exponent $\eta_A$ for two different initial conditions, obtained by integrating numerically the flow equation~(\ref{eq_flowAlitim}). \textbf{(a)} Dotted lines $a$ and $b$: initial condition with a large field behaviour $\hat A^{\text{init}}(\hat\phi) \sim \hat\phi^{8}$ for which we expect from Eq.~(\ref{eq_largePhi}) an exponent $\eta_A^* \simeq 2.91$ which is indeed what is observed on the plateau 1. Solid lines $a'$ and $b'$: same as above with
$\hat A^{\text{init}}(\hat\phi) \sim \hat\phi^{3.5}$ and $\eta_A^*\simeq2.15$ which is observed on the plateau 1'. At large $s$, both flows end on the plateau 2. The curves $b$ and $b'$ are obtained by increasing the size of the box $\hat \phi_{\text{max}}$, which increases the length of the plateaus 1 and 1'. \textbf{(b)} Same initial conditions as for (a), but with improved computation of the derivatives of $\hat A$ around $\hat \phi_{\text{max}}$ (see main text). With this method, the first plateaus 1 and 1' are never left showing that the crossover to plateau 2 is a numerical artifact.}
\label{fig_plateaus1}
\end{figure}
\begin{figure}[t!]
\centering
{\includegraphics[width=0.45\textwidth]{figure3.pdf}}
\caption{Solid line: Fixed point solution $\hat A^*(\hat\phi)$ of Eq.~(\ref{eq_flowA}) for $\eta_A=\eta_A^{\text{plateau}}\simeq2.91$. Dashed line: asymptotic behaviour in $\hat \phi^{\, 3\eta_A/(4-\eta_A)}$ with $\eta_A=\eta_A^{\text{plateau}}$. Dots: plateau solution $\hat A^{\text{plateau}}(\hat\phi)$ for $\hat \phi_{\text{max}}=80$ (blue), 200 (yellow), and 400 (red) taken from the numerical solution of Eq.~(\ref{eq_flowA}) at RG time $s=-5$. The plateau solution converges towards the true fixed point solution as $\hat \phi_{\text{max}}$ is increased.}
\label{fig_Aplateau}
\end{figure}
We are now interested in confirming the existence of the line of fixed points found above from a Taylor expansion around $\hat\phi=0$. We now focus on the case of static noise, in $d=2$ and with the $\Theta$-regulator~(\ref{eq_Litim}), although the method we present remains true for a different noise, dimension or regulator. The flow equation in this case is given by Eq.~(\ref{eq_flowAlitim}).
We thus solve numerically the fixed point equation:
$k\partial_k \hat{A}^*(\hat{\phi}) =0 $
together with the two boundary conditions $\hat A^*(0)=0$ [which follows from the fact that $A(\phi)$ is odd] and $\hat A^*{}'(0)=1$ [which defines $\eta_A(k)$]. The numerical integration is performed on a finite grid $\hat \phi \in [0,\hat \phi_{\text{max}}]$.
The derivatives of $\hat A^*$ are then computed on this grid using the usual ``five-point stencil'' method. At the leftmost part of the grid ($\hat \phi=0$), we use the fact that $\hat A^*(-\hat\phi)=-\hat A^*(\hat\phi)$. On the rightmost part of the grid, we do not impose any boundary condition and the derivatives are computed using only points inside the grid. This simple scheme confirms the existence of a line of fixed points: for any given $\eta_A^*$ (such that $\alpha\geq0$) we find a fixed-point function $\hat A^*$ that solves Eq.~(\ref{eq_flowAlitim}). The precision of each of these solutions is refined when the size of the box $\hat \phi_{\text{max}}$ or the number of discretization points is increased. In particular, the scaling at large field, Eq.~(\ref{eq_largePhi}), is very well reproduced (at least when $\hat \phi_{\text{max}}$ is large enough), which confirms the global existence of the fixed points. Notice that an exact solution of the fixed point equation (\ref{eq_flowAlitim}) for $\eta_A^*=0$ is available (see Appendix~\ref{app_fixedPoint}), which allows for a check of our numerical solution in this particular case.
The stability of these fixed points is a subtler issue. Usually, the stability analysis is simply performed by linearizing the flow around the fixed point, that is, by computing the (discretized) stability matrix and evaluating its eigenvalues. The sign of these eigenvalues then provides the stability of each fixed point. An alternative path consists in perturbing the fixed point solution: $\hat A(\hat \phi) = \hat A^*(\hat \phi) + \varepsilon \mathrm{e}^{\lambda s} g(\hat \phi)$ (where $s=\log(k/\Lambda)$ is the RG time) and then solving the differential equation for $g$ while using a shooting method to find the eigenvalues $\lambda$~\cite{bender1978,bervillier2008,bervillier2008a}. In this model however, none of these methods yield reliable results since we do not observe the convergence of the eigenvalues when the size of the box or the number of discretization points is increased.
To tackle this issue, we perform a numerical integration of the flow equation~(\ref{eq_flowAlitim}) starting with different initial conditions $\hat A^{\text{init}}$. We use a Runge-Kutta scheme and the same discretization for the field $\hat \phi$ as explained above for the fixed point equation. For various initial conditions, we observe that $\eta_A(s)$ reaches a first plateau [see Fig.~\ref{fig_plateaus1a}] which is left after a finite RG time. The flows then reach a second plateau where they stay forever. Whereas the position of the first plateaus depends on the initial condition, the second plateau is the same for all initial conditions; this seems to indicate the existence of a unique fully attractive fixed point, for which $\eta_A^*\simeq 2.29$, whereas all the other fixed points are unstable. However, increasing the size of the box $\hat \phi_{\text{max}}$ increases the length of the first plateaus [see Fig.~\ref{fig_plateaus1a}] and it seems that, up to numerical stability issues, we could virtually extend these plateaus for an arbitrarily long RG time by increasing $\hat\phi_{\rm max}$. We notice that all the plateau functions $\hat A^{\text{plateau}}(\hat\phi)$ match with the fixed point solutions found by integrating Eq.~(\ref{eq_flowAlitim}) directly at the fixed point and for $\eta_A=\eta_A^{\text{plateau}}$; see Fig.~\ref{fig_Aplateau}. This indicates that the first plateaus correspond to fixed points that are (numerically) unstable.
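For concreteness, a schematic version of such a flow integration (explicit Euler in RG time rather than Runge--Kutta, and without the compactification and boundary-fit refinements discussed below) could look as follows; the grid, step size, and initial condition are illustrative, and $\eta_A$ is fixed at each step by the condition $\partial_s \hat A'(0)=0$, which for Eq.~(\ref{eq_flowAlitim}) with the normalization $\hat A'(0)=1$ reduces to $\eta_A=[3\hat A'''(0)+4]/4$.
\begin{verbatim}
import numpy as np

nphi, phi_max, ds, nsteps = 400, 40.0, -1e-4, 50000   # s = log(k/Lambda)
phi = np.linspace(0.0, phi_max, nphi)
dphi = phi[1] - phi[0]
A = phi + 0.1 * phi ** 3 / 6      # illustrative initial condition (~ phi^3)

def derivs(A):
    # odd extension A(-phi) = -A(phi) to differentiate through phi = 0;
    # np.gradient uses one-sided differences at the right edge
    ext = np.concatenate((-A[3:0:-1], A))
    Ap = np.gradient(ext, dphi)[3:]
    App = np.gradient(np.gradient(ext, dphi), dphi)[3:]
    return Ap, App

for _ in range(nsteps):
    Ap, App = derivs(A)
    A3 = (A[2] - 2 * A[1]) / dphi ** 3     # A'''(0), using A(-x) = -A(x)
    eta = (3 * A3 + 4) / 4                 # enforces d_s A'(0) = 0
    A += ds * (eta * A + (eta - 4) / 3 * phi * Ap
               - (1 + 3 * Ap) * App / (4 * Ap ** 1.5))

print("eta_A estimate:", eta)
\end{verbatim}
With an initial condition growing like $\hat\phi^3$ at large field, Eq.~(\ref{eq_largePhi}) suggests a plateau value around $\eta_A\simeq 2$; this sketch is not tuned for quantitative accuracy.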
To cure the sensitive dependence of the numerical flow on the box size $\hat\phi_{\rm max}$, we proceed to a compactification of the field $\hat \phi$ and define $y=\hat \phi^2/(m+\hat\phi^2)$, with $m$ a free parameter. The whole interval $\hat \phi\in [0,\infty[$ is mapped onto $y\in [0,1[$, which can then be discretized. We also compactify the function $\hat A$ and obtain a new function $D(y)$ which remains finite when $y\in[0,1[$. In this compactified version, the flow of $D$ provides us with a boundary condition at the rightmost side of the new box, $y=1$. The numerical integration of this compactified version reveals that each initial condition converges towards a different fixed point, that is, to a single plateau [reminiscent of the plateaus 1 and 1' of Fig.~\ref{fig_plateaus1a} in the noncompactified version]. This qualitative feature is not modified when the number of discretization points is increased or $m$ is varied, and it therefore highlights the fact that the previous stable fixed point, reached at large RG time and observed in Fig.~\ref{fig_plateaus1a} on the plateau 2, is a numerical artifact. However, the quantitative picture, that is, the precise positions of the plateaus, is modified when the number of discretization points is increased. We have not been able to obtain fully converged results by increasing the number of points in the grid, which indicates that the behaviour of $D$ in the vicinity of $y\simeq1$ is not well captured by our numerical scheme in the compactified version.
The final remedy to these numerical hurdles is the following: going back to the noncompact formulation in terms of $\hat \phi$ and $\hat A$, we modify the way the derivatives of $\hat A$ are computed around $\hat \phi_{\rm max}$. Instead of using the ``five-point stencil'' method, we now fit the large-field region by a function $b \, \hat \phi^\gamma$ [where $\gamma$ is given by Eq.~(\ref{eq_largePhi})], and compute the derivatives at the boundary using this fitting function. This fit prevents the numerical drift that eventually leads the flow to leave the plateaus 1 or 1', and confirms that the fixed point $\eta_A^* \simeq 2.29$ is only a numerical artifact; see Fig.~\ref{fig_plateaus1b}.
From this numerical study, we conclude that the whole interval of fixed points with $\alpha \in [0,1[$ is stable, and the convergence to one of these fixed points is determined by the large-field behaviour of the initial condition. The importance of the initial condition due to the existence of this line of fixed points signals the breakdown of universality for this model, although a nontrivial anisotropy exponent $\alpha \neq 0$ is preserved.
\section{Conclusion}
To summarize, in all dimensions $d$ there exists a half-line of stable fixed points which correspond to a positive roughness exponent $\alpha$. In $d=2$ in particular, if one aims to study the effects of anisotropy, then only the fixed points for which $\zeta\geq1$ should be considered, which means that the line of fixed points shrinks to an interval in the case of the static noise ($\kappa=2$), or to a single (trivial) fixed point $\alpha=0$ for the isotropic noise ($\kappa=1$).
In the light of the results on this anisotropic model, it appears that the discussion about the origin of the scaling in erosional landscapes is not completely closed. However, some new elements are now available: anisotropy is indeed a relevant feature in this context, and should not be overlooked when modeling erosion at short length scales. The nature of the noise is also a key characteristic and drastically modifies the scaling behaviour of the model, since it changes its universality class. As a remark, notice that within the NPRG formalism, the noise term could be studied for noninteger values of $\kappa$ between 1 and 2, therefore giving rise to a smaller range of accessible $\alpha$. The status of a noninteger value of $\kappa$ is not mathematically clear, but one can see it as an interpolation between the two meaningful values $\kappa=1$ (isotropic noise) and $\kappa=2$ (static noise).
Moreover, we believe that our results can give some insight into the large dispersion of the values of the roughness exponent $\alpha$ reported in different field measurements: if this model is valid (or, at least, if an interval of fixed points turns out to be generic in more realistic erosion models), then the dispersion of the roughness exponent is a signature of this line of fixed points, each of them corresponding to a different value of the exponent due to the difference in initial conditions, that is, differences in the geological context in the case of real landscapes. Let us also emphasize the surprising yet interesting fact that even though this anisotropic model is rather simple (there is only one renormalized function), it yields very nontrivial RG physics, functional in essence and displaying a line of fixed points.
Finally, although this work was focused on the erosion of landscapes and on the topography itself, continuum models have also been devised and applied to river landscapes~\cite{giacometti1995,banavar1997}. Numerical studies stemming from these models have been carried out, but a theoretical study is still lacking. We believe that our framework could be applied successfully to these models, and this will be the subject of future work.
\section*{Acknowledgements}
We thank Jean-Marie Maillard for useful advice on differentially algebraic equations and for providing us with the exact solution of the fixed point equation given in Appendix~\ref{app_fixedPoint}, and we thank an anonymous referee for indicating Refs.~\cite{giacometti1995,banavar1997} to us. C.D. also thanks F\'elix Rose for useful discussions about the numerical scheme.
|
\section{Introduction}
The interactions between particles in a fluid play a critical role in determining the manner in which the fluid flows. At low densities where particles move ballistically, such as in gases, the conductance of a constriction depends only on the channel width and on particle scattering from the walls, which leads to momentum loss. At higher densities, particle-particle interactions, which preserve momentum, become more frequent and can lead to collective flows that enhance the conductivity through the constriction beyond the ballistic limit.\cite{knudsen1909law} This phenomenon, exhibited by viscous fluids with laminar flow, was predicted by Gurzhi to also occur in electronic systems when the electron-electron (el-el) scattering length $l_{\text{ee}}$ becomes much shorter than the momentum-relaxing scattering length, $l_{\text{mr}}$.\cite{gurzhi1963minimum, gurzhi1968hydrodynamic} This behavior is observable in ultrapure material samples, and has been demonstrated through transport measurements in both PdCoO$_2$\cite{moll2016evidence} and GaAs,\cite{de1995hydrodynamic} where other viscous, fluid-like behaviors have also been observed.\cite{braem2018scanning}
Recently, it has been predicted that such phenomena can also occur in graphene, where strong el-el interactions and low Umklapp scattering rates allow the quasiparticles to form viscous Fermi or Dirac fluids. \cite{guo2017higher, narozhny2017,lucas2018hydrodynamics, levitov2016electron, neto2009electronic,ho2018theoretical,principi2016bulk,torre2015nonlocal} This fluid-like behavior could lead to several interesting phenomena to appear in graphene, including vortex formation, vortex shedding, and perhaps even electronic turbulence. \cite{mendoza2011preturbulent, levitov2016electron} In order to observe these effects, a number of experimental methods have been implemented. Transport measurements through a series of lithographed constrictions and strategically contacted samples have recently observed signatures of superballistic conductance as well as negative backflow - a possible indicator of vortex formation.\cite{berdyugin2019measuring,kumar2017superballistic,bandurin2018fluidity,bandurin2016negative} Meanwhile, scanned single electron transistor and nitrogen vacancy measurements of etched, encapsulated graphene devices have imaged flow profiles with spatial resolutions as small as 50 nm, and observed signatures of Poiseuille flow, one potential sign of viscous flow behavior.\cite{sulpizio2019visualizing,ella2019simultaneous,jenkins2020imaging,ku2020imaging}
In this work we use scanning tunneling potentiometry (STP) to image---with nm-scale spatial resolution---the electrochemical potential profile associated with quasiparticle flow in graphene around electrostatic barriers that are `drawn' using voltage pulses from the tip of the scanning tunneling microscope (STM).\cite{velasco_nanoscale_2016} This methodology allows for the creation of smooth barriers defined by in-plane p-n junctions which confine the particle flow without introducing diffusive scattering, or other momentum-relaxing processes which would occur in lithographed samples.\cite{kiselev2019boundary} Moreover, we are able to vary the width of the conduction channels from $\mu$m-scale to pinch-off (where the barriers form `electrostatic dams' that suppress flow). We also probe graphene/hBN samples that are non-encapsulated, which reduces charge screening, enhancing el-el interactions and allowing viscous flow behavior to be observable at shorter lengthscales.
Our results reveal how quasiparticle flow through constrictions changes as the carrier density, channel width, and temperature are varied. We observe multiple signatures of non-Ohmic behavior, including a small drop in potential across the graphene, mean free paths that exceed 3 $\mu$m, and the formation of Landauer residual resistivity dipoles characterized by charge build-up and depletion on the upstream and downstream sides of barriers, respectively.\cite{landauer1957spatial} At 4.5 K, the flow behaves ballistically and we are able to observe ray-like streams of charge passing through the opened dam, with the channel conductance matching the Sharvin formalism. Meanwhile, at 77 K, we observe a profile that more closely resembles viscous flow behavior, and we measure a channel conductance that is super-ballistic. We find that our observations can be qualitatively described by numerical simulations of a Stokesian fluid. We are able to estimate key parameters of the fluid, including $l_{ee}$, which we measure to be $\sim$100 nm at 77 K, and the kinematic viscosity $\nu$, which we estimate to be $\approx 2.5 \times 10^3$ cm$^2$/s. These measurements represent a fundamentally new way of probing viscous electronic fluids that allows controllable geometric effects on the flow to be observed.
\begin{figure}[t!]
\centering
\includegraphics[width
=6.5in,keepaspectratio]{Figures/OverviewFig.png}
\caption{(A) Schematic of the STP experimental setup. $V_{sd}$ drives current in the sample while $V_s$ determines the difference between the sample and tip electrochemical potentials. The carrier density (and $E_F$) is globally modified through the use of an electrostatic gate electrode $V_g$. (B) An energy diagram depicting how the electrostatic potential and sample electrochemical potential vary along the flow direction across a potential well. (C) Simultaneously acquired topography [upper left] and spatial map of the electronic LDOS [lower left] ($V_s$ = -10 mV) of the electrostatic dam. All scale bars are 100 nm. Example STP map [upper right] ($V_g$ = -2 V), aligned with the current flow direction. Example tunneling I-V curves [lower right] acquired under transport. I = 0 (black dashed line) corresponds to $\mu_{ec} = -V_s$ (orange vertical line); by fitting many such curves (blue solid line) the sample electrochemical potential is mapped spatially. (D) Electrostatic dam shown for different gating conditions, from low to high electron doping (top to bottom). $E_{CNP}$ tracks the electrostatic potential in the graphene sheet (here shown moving across the channel).}
\label{fig:overview}
\end{figure}
A schematic of our STP measurement geometry is shown in Fig. \ref{fig:overview}A. In STP, a source-drain bias is used to drive current laterally through a thin sample, and the subsequent spatially varying electrochemical potential, $\mu_{ec}$, is measured locally using an STM tip.\cite{kirtley1988direct,briner1996local,chu1989scanning,muralt1987scanning,muralt1986scanning, druga2010versatile} To measure $\mu_{ec}$, the feedback of the STM tip is turned off and the tip bias needed to zero the tunneling current is determined by performing a linear fit of the tip-sample I-V curve measured near the zero crossing (Fig. \ref{fig:overview}C). This allows $\mu_{ec}$ to be measured with $\sim$ 10 $\mu$V potential resolution, and with the \AA-scale spatial resolution of standard STM. Fig. \ref{fig:overview}B illustrates how $\mu_{ec}$ varies across the sample under transport conditions. When $V_{sd} = 0$, all local accumulations of charge which affect the chemical potential, $\mu$, are offset by changes in electrostatic potential $\phi$, such that $\mu_{ec}$ is constant across the surface. For $V_{sd} \neq 0$, meanwhile, $\mu_{ec}$ will change continuously across the sample, with a spatially varying slope that depends on the local conductance; meanwhile, any changes in local charge accumulation that are due to active carrier scattering or ballistic transport will affect $\mu_{ec}$ and be visible in STP measurement.\cite{bevan2014first,morr2017scanning}
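A minimal sketch of this zero-crossing fit, using a synthetic I-V curve in place of measured data, is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a measured tip-sample I-V curve near its zero crossing
V_s = np.linspace(-2e-3, 2e-3, 41)                              # tip bias sweep, V
I = 5e-9 * (V_s - 0.35e-3) + rng.normal(0.0, 1e-11, V_s.size)   # tunneling current, A

# Linear fit I = a*V_s + b; the zero crossing gives the bias that nulls the
# tunneling current, and hence mu_ec via the convention mu_ec = -V_s at I = 0
a, b = np.polyfit(V_s, I, 1)
V_zero = -b / a
mu_ec = -V_zero
print(mu_ec)
\end{verbatim}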
Previous STP measurements of graphene devices on SiC substrates have revealed sharp drops in potential associated with monolayer-bilayer boundaries, as well as sub-surface crystal steps. In some cases, Landauer residual-resistivity dipoles could be observed near defect features, which could be used to model the electron-barrier scattering mechanisms.\cite{ji2012atomic,giannazzo2012electronic,wang2013local,willke2015spatial,clark2013spatially, sinterhauf2020substrate} STP measurements have also been performed on graphene nanoribbons on SiC, where signatures of ballistic transport were observed.\cite{de2020non} To the best of our knowledge, all previous STP measurements have been performed on SiC substrates with large dielectric constants, which strongly screen el-el interactions. Those measurements also utilized topographic features to act as scattering barriers, which are known to introduce artifacts in STP measurements due to tip convolution.\cite{pelz1990tip}
In this work, we probe ultraflat graphene/hBN samples with electrostatic barriers that are introduced by `drawing' them with the STM tip using a methodology developed by Velasco et al. \cite{velasco_nanoscale_2016, lee_imaging_2016, velasco2018visualization}. Each individual barrier was created by introducing sub-surface charges in the underlying hBN by applying a 1--2 minute, 5 V pulse with the STM tip which serves to ionize defects in the underlying hBN substrate. Those defects create an electrostatic potential well in the plane of the graphene sheet that scatters incident holes and electrons. Within a suitable range of negative gate voltages (when the graphene is hole doped), a circular p-n (outside-inside) junction forms on the periphery of the potential well, which acts as a reflective boundary. By placing two of these p-n junctions in close proximity, we build a small channel that current can flow through when a source-drain bias is applied. Moreover, as shown in Fig. \ref{fig:overview}D, the width of this current-carrying channel can be tuned by using an electrostatic backgate to adjust the Fermi energy, $E_F$, which alters the radius of the p-n junction barrier; the p-n junctions considered here decrease in radius with higher hole concentrations, which leads to an increased channel width.
\begin{figure}[t!]
\centering
\includegraphics[width
=6.3in,keepaspectratio]{Figures/big_slope.png}
\caption{(A) Topographic STM image of a $1 \times 1$ $\mu$m$^2$ area of graphene on hBN. Scale bars are 200 nm. (B) Simultaneously acquired STP image from the same area obtained with $I_{\text{sd}}$ = 190 $\mu$A across a 30 $\mu$m long sample that has an overall width of 15 $\mu$m. The periodic texture observed in both images is an aliasing effect created by the graphene/hBN Moir\'e potential and the measurement grid. (C) Measured electrochemical potential along the flow direction and dashed line indicated in (B). The best fit line is shown in red (slope = 420 $\pm$ 10 $\mu$V/$\mu$m).}
\label{fig:slope}
\end{figure}
Prior to creating electrostatic barriers on our samples, we first obtain spatial maps of $\mu_{ec}$ when driving $I_{\text{sd}} = 190$ $\mu$A through the bare graphene/hBN sample, as shown in Fig. \ref{fig:slope}, along with a topographic image of the sample acquired simultaneously. These data reveal a potential drop of 420 $\mu$V/$\mu$m corresponding to a mean free path of $l_{\text{mr}} = 3$ $\mu$m at a low carrier density $n = -1.4 \times 10^{11}$ cm$^{-2}$, obtained via the Drude conductivity $\sigma = e^2 v_F l_{\text{mr}} D(E_F)/2$, where $v_F$ is the Fermi velocity and $D(E_F)$ the density of states at the Fermi level. \cite{sarma2011electronic} Some localized deviations in $\mu_{ec}$ are observable, which we attribute to charged defects buried in the hBN substrate.
STP images acquired after the formation of the potential wells at 4.5 K and 77 K are shown in Fig. \ref{fig:stp}, revealing a drastically altered electrochemical landscape. At both 4.5 K and 77 K, $\mu_{ec}$ is observed to increase (decrease) on the upstream (downstream) side of the potential wells, which creates in-plane dipoles across the wells. For the 4.5 K data, we associate these features with Landauer residual resistivity dipoles, which occur in ballistic (or near-ballistic) transport conditions when charge carriers scatter against localized potential barriers and accumulate (or are depleted) against the side of the barrier, which locally increases (decreases) the chemical potential.\cite{landauer1957spatial} The in-plane dipole potentials decay approximately as $r^{-1}$, while the magnitude is determined by the current density.\cite{sorbello1981residual,sorbello1988residual} We observe no significant changes in the cross-well dipole profile as the Fermi level of the device is changed by varying the electrostatic backgate voltage, $V_g$. Within the quantum wells, standing waves associated with circular quasibound states that are excited by carriers are visible in the STP images due to their effect on the local charge density.\cite{bevan2014first} The p-n barrier, meanwhile, can be observed as the bright (dark) ring in the 4.5 K (77 K) measurements. The same ring feature has been determined in previous scanning tunneling spectroscopy and Kelvin probe force microscopy measurements to indicate the position of the classical turning point of the quasibound states, where there is an accumulation of quasiparticle density.\cite{gutierrez2018interaction,quezada2020comprehensive,velasco_nanoscale_2016, lee_imaging_2016,Behn_Krebs_2021} It is not understood why the p-n barrier appears bright for measurements at 4.5 K, and dark at 77 K. This effect was observed over multiple, separate measurements, and we speculate that thermovoltages generated between the tip and the sample play an important role.\cite{sto1990thermopower,druga2010versatile,park2013atomic}
\begin{figure}[t!]
\centering
\includegraphics[width
=\textwidth,keepaspectratio]{Figures/drawing.pdf}
\caption{(A-D) STP maps of an electrostatic dam at T = 4.5 K, V$_{sd}=0.4$ V and four selected gate voltages: -10, -12, -16, and -18 V in order of increasing channel width. (E-H) STP maps of a new electrostatic dam at T = 77 K, V$_{sd}=-0.4$ V and four gate voltages: -2, -4, -6, and -12 V in order of increasing channel width. The scale bar is 250 nm. The black arrows represent the direction of current flow that is incident upon the barriers. (I) Line cuts through the STP maps at T = 4.5 K along the white dashed line in (A). (J) Line cuts through the STP maps at T = 77 K along the white dashed line in (E). Each curve is shifted by a constant offset for clarity. The dashed, vertical grey line marks the halfway point through the channels. }
\label{fig:stp}
\end{figure}
In addition to the features described above, we also observe a drop in $\mu_{ec}$ along a transverse path through the channel between the wells, which represents the central focus of this work. This change in $\mu_{ec}$ is associated with current that flows through the channel, the width of which can be tuned via electrostatic gating (as illustrated in Fig. 1D). In Fig. \ref{fig:stp}, we show how the $\mu_{ec}$ landscape evolves as the channel is varied from as wide as 350 nm, to `pinch off' where it forms an `electrostatic dam', which blocks the incident current. These measurements are performed on separately prepared samples at $T = 4.5$ K (Fig. \ref{fig:stp}a-d) and at $T = 77$ K (Fig. \ref{fig:stp}e-h). For 4.5 K measurements, ray-like `streams' of current are visible emerging from the downstream side of the channel, a property that is consistent with ballistic carriers passing through the gap, and locally increasing $\mu_{ec}$. \cite{morr2017scanning} Such qualitative `streams' are not as apparent for data obtained at 77 K.
\begin{figure}[t!]
\centering
\includegraphics[width
|
=\textwidth,keepaspectratio]{Figures/analysis.pdf}
\caption{(A) Width-dependent channel conductance of electrostatically defined channels at 4.5 K and 77 K. Solid (4.5 K) and dashed (77 K) lines indicate the theoretical ballistic conductance defined by the Sharvin formalism. (B) Carrier density-dependent electron-electron scattering lengths divided by the non-universal factor $C$, extracted from the superballistic conductance in (A).}
\label{fig:conductances}
\end{figure}
In order to quantitatively characterize the carrier flow, we measure the width (and $V_g$)-dependent electrochemical drop through the channels (shown in Fig. \ref{fig:stp}i,j) to calculate the conductance of each channel, $G_{data} = I/\Delta \mu_{ec}$. The current $I$ flowing through each channel is first estimated using the rudimentary assumption that the ratio between the channel width and the width of the graphene flake (15 $\mu$m) is equal to the ratio of $I$ to the current passing through the whole flake, which is measured using a current meter. Conductance values estimated in this way are compared against the values predicted by the Sharvin formula for ballistic transmission through a channel
\begin{equation}
G_{\text{Sh}} = c_{\text{S}} G_{\text{Q}} \frac{\overline{E_F}}{\pi \hbar v_F} w
\end{equation}
where $G_{\text{Q}}=2e^2/\pi\hbar$ is the conductance quantum, $w$ is the width of the narrowest cross-section spanning the channel, $\overline{E_F}$ is the Fermi level (chemical potential) averaged over the narrowest cross-section of the channel, and $c_{\text{S}}$ is a non-universal numerical factor specific to our electrostatic dams. Geometrical factors such as $c_{\text{S}}$ are known to affect current flows in both ballistic and viscous systems, and represent corrections of order 1 that are difficult to calculate accurately in analytic form. We determine $c_{\text{S}}$ by assuming that the transport at 4.5 K is ballistic, and therefore that the conductance depends linearly on $\overline{E_F}$ and $w$ for small channel widths as in Eq. (1). In repeated measurements at 4.5 K using a new electrostatic dam each time, we find excellent agreement between $G_{\text{Sh}}$ and $G_{data}$ across all channel widths when $c_{\text{S}} = 2.8$. The results for a selected measurement are shown in Fig. \ref{fig:conductances}. The fact that $c_S$ is greater than unity can be attributed to the details of the channel geometry and the finite extent of the electrostatic dams, as well as the nonzero transmissibility of the p-n junction barriers. We note that tip-induced doping effects and inhomogeneous doping within the channel are unlikely to significantly affect these conductance measurements, as $G_{data}$ values taken with different tips and separately prepared potential barriers are similar at both 4.5 K and 77 K. In contrast to measurements at 4.5 K, data taken at 77 K demonstrate a channel conductance that is \textit{larger} than the ballistic Sharvin model with $c_{\text{S}}=2.8$, and this deviation increases as the channel is widened. Using a larger value of $c_{\text{S}}$ in the 77 K theory does not produce good agreement across all channel widths. This suggests a viscous Gurzhi contribution to the channel conductance at elevated temperatures, $G_{\text{G}}$, which has a quadratic dependence on the channel width $w$ and must be added to the Sharvin conductance $G_{\text{Sh}}$ to obtain the total conductance. With this addition, to estimate the electron-electron scattering length $l_{\text{ee}}$ that would lead to this enhanced conductance, we turn to \cite{guo2017higher, kumar2017superballistic}
\begin{equation}
G = G_{\text{Sh}} + G_{\text{G}}, \quad G_{\text{G}} = c_{\text{G}} G_{\text{Q}} \frac{\overline{E_F}}{\pi \hbar v_F} \frac{w^2}{l_{\text{ee}}}
\end{equation}
from which we can write
\begin{equation}
l_{\text{ee}} =w\, (c_{\text{G}}/ c_{\text{S}}) \left(\frac{G_{data}}{G_{\text{Sh}}} -1 \right)^{-1}
\end{equation}
where $c_{\text{G}}$ is an additional non-universal geometric numerical factor that is specific to viscous flow around our electrostatic dams. For example, $c_{\text{G}} = \pi^2/16$ for a perfect slit geometry \cite{guo2017higher}; however, the value of $c_{\text{G}}$ also depends on the boundary conditions and differs for flows with no-slip and no-stress conditions.\cite{kiselev2019boundary,Pershoguba2020,Li2021} Our resulting estimates of $l_{\text{ee}}$ are shown in Fig. \ref{fig:conductances}(b), giving values that are broadly consistent with previously published results.\cite{kumar2017superballistic}
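As an illustration of how Eqs.~(1)--(3) are combined, the following minimal sketch evaluates $G_{\text{Sh}}$ and the inferred $l_{\text{ee}}$ for a hypothetical channel; the width, channel-averaged Fermi level, and measured conductance ratio below are placeholder values chosen only to exercise the formulas, with $c_{\text{S}}=2.8$ and the slit-geometry value $c_{\text{G}}=\pi^2/16$.
\begin{verbatim}
import numpy as np

hbar = 1.0546e-34     # J s
e    = 1.602e-19      # C
v_F  = 1.0e6          # m/s, graphene Fermi velocity
G_Q  = 2 * e**2 / (np.pi * hbar)      # conductance quantum as defined in the text

c_S, c_G = 2.8, np.pi**2 / 16.0

# Hypothetical channel parameters (placeholders, not measured values)
w   = 300e-9          # channel width, m
E_F = 0.060 * e       # channel-averaged Fermi level, J (60 meV)

G_Sh = c_S * G_Q * E_F / (np.pi * hbar * v_F) * w       # Eq. (1)

ratio = 1.5           # hypothetical G_data / G_Sh at 77 K
l_ee = w * (c_G / c_S) / (ratio - 1.0)                  # Eq. (3)
print(G_Sh, l_ee)     # l_ee ~ 0.1 um for these placeholder numbers
\end{verbatim}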
\begin{figure}[t!]
\centering
\includegraphics[width
=6in,keepaspectratio]{Figures/thoeryfig.png}
\caption{Finite element models of carrier flow in the Ohmic and viscous regimes. Numerical solutions for the profiles of the current density and electric potential for the hydrodynamic flow through two circular barriers in the viscous (upper panels) and Ohmic (lower panels) regimes. $w$ is the width of the sample. (\textbf{a}) and (\textbf{e}): the arrow plots show the streamlines of the current density and the color plots show the magnitude of the current density; (\textbf{b}) and (\textbf{f}): profiles of $j_x$ at $x=0$; (\textbf{c}) and (\textbf{g}): distribution of the electric potential; (\textbf{d}) and (\textbf{h}): line cuts of the electric potential along $y=0$.}
\label{fig:theory}
\end{figure}
These findings indicate that as the temperature of the graphene is increased from 4.5 K to 77 K, the el-el scattering length ($l_{ee}$) decreases until it is comparable to the width of the channel. Under these conditions, the carrier flow transitions from a Knudsen to a Gurzhi regime, where it behaves as a viscous Fermi liquid, which exhibits a channel conductance that is greater than ballistic. In order to better understand the potential profiles measured using STP and how they relate to viscous flow, we compare our measurements to the following theoretical model. The motion of hydrodynamic electron flow under moderate external drive can be described by the linear Navier-Stokes (NS) equation in the following form,
\begin{equation}
\nu\nabla^2\mathbf{u}-\frac{\mathbf{u}}{\tau_{\text{mr}}}=\frac{e}{m}\nabla\phi, \label{eq:NS-1}
\end{equation}
where $\mathbf{u}$ is the macroscopic flow velocity, $\tau_{\text{mr}}$ is the
momentum relaxation time and $\phi$ is the electric potential. It is evident that the first term on the left hand side of Eq.~\eqref{eq:NS-1} describes the viscous stress while the second term gives the Ohmic loss. In addition, the continuity equation for current conservation is written as
\begin{equation}
\nabla \cdot \mathbf{j}=0. \label{eq:continuity}
\end{equation}
Considering that $\mathbf{j}=ne\mathbf{u}$ and assuming constant electron density $n$, the linear NS equation~\eqref{eq:NS-1} is recast in the form,
\begin{equation}
l^2_{\text{G}}\nabla^2\mathbf{j}-\mathbf{j}=\sigma\nabla\phi, \label{eq:NS-2}
\end{equation}
where $\sigma$ is the Drude conductivity introduced earlier. The interplay of viscous and momentum relaxing terms introduces the natural length scale in the problem, namely the Gurzhi length, $l_{\text{G}}\equiv\sqrt{\nu\tau_{\text{mr}}}=\sqrt{l_{\text{ee}}l_{\text{mr}}}$. If $l_{\text{G}} \ll w$, where $w$ is the typical size of the system such as the width of the channel, the viscous stress in Eq.~\eqref{eq:NS-2} can be neglected and one is in the Ohmic (diffusive) regime. In contrast, if $l_{\text{G}} \gg w$, the Ohmic dissipation in Eq.~\eqref{eq:NS-2} is small and one is in the viscous regime.
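As a rough consistency check, taking $l_{\text{ee}}\approx 100$ nm at 77 K from the estimate above and assuming $l_{\text{mr}}$ remains of order the 3 $\mu$m measured on the bare sheet, the Gurzhi length can be compared with our channel widths:
\begin{verbatim}
import numpy as np

l_ee = 100e-9    # el-el scattering length at 77 K (estimated above)
l_mr = 3e-6      # momentum-relaxing mean free path (bare-sheet value, assumed here)
w    = 350e-9    # widest channel studied in this work

l_G = np.sqrt(l_ee * l_mr)   # Gurzhi length, ~0.55 um for these inputs
print(l_G > w)               # True: l_G exceeds the channel width,
                             # so the viscous term cannot be neglected
\end{verbatim}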
In this framework, Eqs.~\eqref{eq:continuity} and~\eqref{eq:NS-2} lead to the profile of current density and electric potential. Analytical solutions are difficult to obtain for arbitrary geometries; nevertheless, we provide numerical solutions for the hydrodynamic flow bypassing two circular barriers using finite element methods. The major results for the distribution of the current density and the electric potential are shown in Fig.~\ref{fig:theory}. In the numerical simulation, we used a no-slip boundary condition for the flow velocity $\mathbf{u}$. The flow is driven by the bias voltage $V_{\text{sd}}$, applied at the left side of the sample (the right side is grounded to zero). It is interesting to notice the dipole formation in the electric potential profile in the viscous regime, see Fig.~\ref{fig:theory}(c). Such dipole formation or the increase of potential near the edges has been pointed out in Ref.~\cite{guo2017higher}, wherein the potential grows near the slits and diverges at the end points. This behavior can be attributed to the fact that the electric fields near the edges point against the current flow in order to push the electron liquid away from the boundary walls. It is also noteworthy that we have adopted hard wall potentials at the boundaries of the two circular barriers and neglected the nonlinear screening effects of the p-n interface. Accounting for these effects requires solving a self-consistent Poisson equation coupled with the hydrodynamic flow equations~\eqref{eq:continuity} and~\eqref{eq:NS-2}; a simpler description would only involve a single circular p-n junction without considering the hydrodynamic flow.
\begin{figure}[t!]
\centering
\includegraphics[width
=\textwidth,keepaspectratio]{Figures/screening.pdf}
\caption{Regions of perfect screening and Thomas Fermi approximation. $\alpha=e^2/(\kappa\hbar v_F)$ is the interaction constant, $\kappa$ is the dielectric constant. $\rho'$ is the density gradient at the p-n interface.}
\label{fig:screening}
\end{figure}
To assess the validity of electron flow modeling at constant density, and nuances related to the locally measured electrochemical potential, one can benefit from an analysis of the nonlinear screening effects of a circular graphene p-n junction that mimics our electrostatic barriers. The electric potential and charge density can be found self-consistently. We denote the background charge density created by external gates as $\rho_b(r)$, which can be approximated as $\rho_b(r)=\rho'_b\,r$ for the entire p-n junction width (the p-n interface is shifted to the origin $r=0$). The electric potential and the density can be solved from the following coupled equations,
\begin{subequations}
\begin{align}
& \frac{\kappa}{e}V(r)=\int^\infty_0 \frac{4r'dr'}{r+r'}K\left(\frac{2\sqrt{rr'}}{r+r'}\right)\left[\rho_b(r')-\rho(r')\right], \label{eq:Poisson} \\
& \mu[\rho(r)]-eV(r)=0,\quad \mu(\rho)=\sqrt{\pi}\hbar v_F\sqrt{|\rho|}, \label{eq:TF}
\end{align}
\end{subequations}
where $\kappa$ is the dielectric constant and $K(k)$ is the elliptic integral of the first kind with modulus $k$. In Eq.~\eqref{eq:TF}, we assumed the Thomas-Fermi (TF) approximation. It is then evident that within the TF approximation, the charge density satisfies the following equation:
\begin{equation} \label{eq:rho}
\sqrt{\rho(r)}=\frac{\alpha}{\sqrt{\pi}} \int^\infty_0\frac{4r'dr'}{r+r'}K\left(\frac{2\sqrt{rr'}}{r+r'}\right)\left[\rho'_b r'-\rho(r')\right].
\end{equation}
The solution to Eq.~\eqref{eq:rho} gives the universal spatial profile of the charge density/potential, but it requires numerical evaluation. Nevertheless, one is still able to draw several qualitative conclusions from Eq.~\eqref{eq:rho}. Following similar analysis of a planar junction in Ref.~\cite{Fogler2008}, it can be shown that for small interaction constant $\alpha<1$, in the region $|r|\gg l_s\sim (\alpha^2 \rho'_b)^{-1/3}$, one has almost perfect screening, which can be described by TF approximation; in the region $l_{\text{TF}}\sim\sqrt{\alpha}l_s \ll |r|\ll l_s$, the screening effect is poor but the TF approximation still holds; in the immediate vicinity of p-n interface, $0<|r|<l_{\text{TF}}$, the TF approximation breaks down and one needs to compute the electron wavefunction in order to obtain the potential and the charge density.
Given the material parameters, one obtains the estimates $\alpha\approx 0.8$, $l_s\sim 116$~nm, and $l_{\text{TF}}\sim 103$~nm (the junction width is $\sim 1~\mu$m). The various domains and length scales are summarized in Fig.~\ref{fig:screening}. This means that, almost everywhere except in close proximity to the interface, the macroscopic electron-flow description is a reasonably accurate approximation.
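These length scales follow directly from the scaling relations above; a minimal numerical check, using the values of $\alpha$ and $l_s$ quoted in the text, is:
\begin{verbatim}
import numpy as np

alpha = 0.8         # interaction constant estimated above
l_s   = 116e-9      # m, crossover scale l_s ~ (alpha^2 rho_b')**(-1/3) quoted above

l_TF = np.sqrt(alpha) * l_s
print(l_TF)         # ~1.04e-7 m, i.e. ~103 nm, matching the quoted value
\end{verbatim}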
In conclusion, we have shown that scanning tunneling potentiometry can be used to visualize hydrodynamic effects in graphene through direct imaging of the local electrochemical potential while a current is passed through the graphene sheet. This methodology offers superior spatial resolution compared to other scanned-probe measurements, and allows for the creation and analysis of complex flow geometries defined by smooth barriers created by in-plane p-n junctions. In this work, we show that STP can reveal super-ballistic conductance through narrow channels in graphene, as well as local dipoles that form in both ballistic and viscous regimes due to local carrier accumulation. These results provide new insight into carrier transport in graphene, and provide a framework for analyzing more complex flow patterns that are engineered to exhibit exotic effects, such as non-reciprocal flow\cite{geurs2020rectification}, or to measure turbulence, which could occur at timescales and lengthscales that are inaccessible to other local probes.\cite{mendoza2011preturbulent} \AA-scale images, meanwhile, could be used to visualize atomistic transport features that are predicted to occur along grain boundaries and near defects.\cite{bevan2014first}
\section{Acknowledgment}
Work by Z. J. K., S. L., K. J. S., and A. L. was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) Program for Materials and Chemistry Research in Quantum Information Science under Award No. DE-SC0020313. Work by W. A. B. and V. W. B. was supported by the Office of Naval Research under Award No. N00014-20-1-2356. The authors gratefully acknowledge the use of facilities and instrumentation supported by NSF through the University of Wisconsin Materials Research Science and Engineering Center (No. DMR1720415). K.W. and T.T. acknowledge support from the Elemental Strategy Initiative conducted by the MEXT, Japan, Grant Number JPMXP0112101001, JSPS KAKENHI Grant Number 19H05790 and JP20H00354.
|
\section{Introduction} \label{sec:introduction}
The {\it Nancy Grace Roman Space Telescope}\xspace ({\it Roman}\xspace) is an under-construction flagship space telescope designed for coronagraphy and wide-field optical-NIR observations \citep{spergel15}. The \gls{WFI} is the baseline imaging and slitless spectroscopic instrument for {\it Roman}\xspace. The \gls{WFI} will observe 0.28 square degrees per pointing with $0\farcs 11$ pixels in seven moderate-width filters (shown in Figure~\ref{fig:filters}) as well as one very wide filter and grism.
One of the primary goals of {\it Roman}\xspace is to investigate the dynamics of the accelerated expansion of the universe, reaching a 10$\times$ improvement in \gls{FoM} compared to today.\footnote{For example, SNe Ia with external \gls{CMB} measurements should reach a \gls{FoM} of 325 with all uncertainties included (Science Requirement SN 2.0.1).} Such measurements have the potential to revolutionize our understanding of cosmology. To achieve these ambitious science objectives, {\it Roman}\xspace will employ several cosmological probes, including measuring luminosity distances of Type~Ia supernovae (SNe Ia), the technique being studied by the two SN-focused Science Investigation Teams.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{f1.pdf}
\caption{{\bf Left panel}: the effective area of the \gls{WFI} moderate-width imaging filters and the prism (not shown: the wide $F146$ and the grism). {\bf Center panel}: two-pixel dispersion $R$ for the prism. {\bf Right panel}: prism \gls{PSF} full width at half maximum in pixels as a function of wavelength for each of the 18 detectors (evaluated with a 2D Gaussian fit after convolution with the pixel). Generally speaking, the \gls{PSF} is reasonably well-sampled and the two-pixel dispersion is a reasonably accurate representation of the resolution.}
\label{fig:filters}
\end{figure}
In current measurements, the known statistical and systematic uncertainties are generally comparably sized \citep{Brout2022}, so simply increasing the number of SNe will not yield the desired small cosmological uncertainties. Furthermore, there already exist modest tensions between techniques \citep[e.g.,][]{delubac15, hikage18, PlanckCollaboration2020, Riess2021}, pointing either to the need for even more complicated cosmological models or to the presence of undetected systematic errors \citep[e.g.,][]{DiValentino2021}. Thus, control of systematic uncertainties will be crucial for {\it Roman}\xspace cosmology.
This work presents a set of studies addressing the use of the low-dispersion slitless prism on {\it Roman}\xspace for SN spectroscopy as part of the {\it Roman}\xspace \gls{HLTDS}. The value of spectroscopy for observing live SNe comes down to three\xspace major points: redshift measurements, spectroscopic SN classifications, and making spectrophotometric measurements of the SN population.\footnote{This work does not consider two other possible uses of the prism for the \gls{HLTDS}: 1) obtaining redshifts of SN host galaxies and 2) observing brighter spectrophotometric standard stars than are possible to observe with imaging, thus improving the absolute calibration by tying it to more or better-understood standard stars.
} Most high-redshift SN Ia surveys are not able to spectroscopically observe all of their SNe with good light curves (e.g., \citealt{Smith2020}). If a similar survey design is adopted for {\it Roman}\xspace (i.e., a prism survey area smaller than the imaging survey area), then each of these points takes on a dual role: applying both to the subset of SNe observed directly with the prism, and to providing a more detailed dataset for training and validating analyses of SNe observed only with the imaging. We elaborate on these three points below:
\begin{enumerate}[I]
\item Redshift measurements of live SNe to allow accurate placement of SNe on the Hubble diagram. For the training role of the prism, these measurements help assess performance and biases of both photometric redshifts \citep{Roberts2017} and association of transients with nearby galaxies that may be the host \citep{Gupta2016}.
\item Spectroscopic classification of transients. These classifications enable an initial spectroscopic-only cosmology analysis (e.g., \citealt{Abbott2019} for the Dark Energy Survey). They also can validate and train photometric classifiers with a random sample of high-redshift events.
\item Finally, and perhaps most importantly, providing spectrophotometric constraints on the \gls{SED} distribution of SNe at high redshifts. Again, this enables an initial cosmology analysis based on subclassifying SNe~Ia for better distance precision (e.g., \citealt{twins15, Boone2021b}). This sample also provides a high-redshift check on population evolution and other systematic uncertainties in the larger photometric-only sample \citep[e.g.,][]{Sullivan2009}. As we show, prism data can even train any components of \gls{SED} variation that may not be present in current low-redshift spectrophotometric training sets.
\end{enumerate}
Certainly by the time the \gls{HLTDS} is being finalized, one should imagine demonstrating a series of steps on simulated imaging+prism data that will mimic what will be done with actual \gls{HLTDS} data:
\begin{enumerate}
\item Perform a calibration using simulated observations of wavelength and flux standards.
\item Having determined the calibration, simulate SN extractions from realistic imaging+prism data.
\item Assemble a simulated cosmology sample of SNe Ia with redshifts + light curves, investigate which SNe are getting misclassified photometrically (or assigned the wrong galaxy as the host) and improve the photometric classification.
\item Using both simulated spectra and simulated light curves, examine the population distributions of a set of SN parameters as a function of redshift and host-galaxy type, and see what evidence of population drift there is compared to lower redshift.
\item Select a volume-limited prism SN sample covering reasonable S/N, fit it with the \gls{SED} model, and do dimensionality reduction (e.g., principal component analysis) on the residuals to see if there is any evidence that the training of the \gls{SED} model has missed some behavior of SNe.
\item Finally, perform a simulated cosmology analysis using the above analysis products.
\end{enumerate}
The collection of studies presented in this work is an existence proof that the above series of steps is possible and will yield useful data; Section~\ref{sec:analysisoverview} outlines the analyses we present. Section~\ref{sec:prismsurveysim} presents a simple demonstration survey that (although not fully optimized) shows the prism performance that is possible for each tier in a \TierOne/\TierTwo two\xspace-tier survey, examining S/N and numbers of SNe~Ia as a function of redshift. Section~\ref{sec:performance} outlines our evaluation of this survey according to the three\xspace uses of spectroscopy above. These studies show that a prism can produce spectral time series with S/N sufficient for goals I and III listed above (redshift measurements and \gls{SED} constraints), with conclusive results on goal II (typing) requiring further study with a core-collapse SN time-series model. Section~\ref{sec:performance} also performs a simple survey optimization, maximizing the number of SNe~Ia useful for different purposes and forecasting the cosmological constraints possible with those SNe. Section~\ref{sec:performance} ends by describing the optimization of the prism parameters, and shows that the dispersion is high enough that extracting the data should be possible with only modest biases.
Finally, we conclude and provide a glossary in Section~\ref{sec:conclusion}. For the sake of readability, we put some of the technical discussion in appendices: Appendix~\ref{sec:prismsimulations} presents simulation details, Appendix~\ref{sec:analytic} shows a simple analytic optimization of the prism dispersion, and Appendix~\ref{sec:prismvsimaging} evaluates constraints on SN~Ia parameters in prism vs. imaging at fixed total survey time.
\begin{deluxetable}{cc|cc|cc}[htbp]
\rotate
\tablecaption{Overview of Analyses}
\label{table:analysisoverview}
\tablehead{
\colhead{Section(s)} & \colhead{Analysis} & \colhead{Simulation SED Model} & \colhead{Type of Simulation} & \colhead{Fitting SED Model} & \colhead{Type of Inference}}
\startdata
\ref{sec:prismsurveysim} & S/N, Numbers of SNe & SALT2-Extended & 1D Spectra & \nodata & \nodata \\
\ref{sec:redshift} & Redshift Measurements & SN Timeseries & 1D Spectra & SALT2-Extended & Minimization \\
\ref{sec:subclassification} & SN Ia Subclassification & SNEMO15 & 1D Spectra w/ + w/o Stacking & SNEMO15 & Fisher Matrix \\
\ref{sec:missingcomponent} & Missing SED Component & SNEMO15 & 1D Spectra & SNEMO15 & Minimization \\
\ref{sec:surveyoptimization} & Survey Optimization & SALT2-Extended & 1D Spectra & \nodata & Fisher Matrix \\
\ref{sec:prismoptimization} & Optimum Prism Parameters & SNEMO15 & Forward Model & SNEMO15 & Minimization \\
Appendix~\ref{sec:analytic} & Analytic Optimization & Gaussian Feature & 1D Spectrum & Gaussian Feature & Fisher Matrix \\
Appendix~\ref{sec:prismvsimaging} & Prism vs. Imaging Trade & SUGAR & 1D Spectra & SUGAR & Fisher Matrix \\
\enddata
\end{deluxetable}
\section{Overview of Our Analyses and Simulations}
\subsection{Overview of Analyses} \label{sec:analysisoverview}
We perform a series of analyses to evaluate and optimize the performance of the prism; Table~\ref{table:analysisoverview} presents an overview of these analyses. Each analysis generally has two components: the model for simulating the spectra and the model for inference of results. As we show in Section~\ref{sec:prismsurveysim}, the prism can be used to build time series of SNe, so it is important to use full time-series models for both simulation and inference, and we use several different models depending on the goal. In general, having more models provides cross checks and increases the robustness of our results. We describe here some of the considerations for why we selected the models that we did.
\gls{SED} models for simulations:
\begin{itemize}
\item The \gls{SALT2}-Extended model \citep{SALT2, Pierel2018} is a combination of \gls{SALT2} and the \citet{Hsiao2007} SN~Ia template. It spans the largest rest-frame wavelength range (1,000\AA\xspace--18,000\AA\xspace) of any of our \gls{SED} models. This makes it the best choice for spanning large redshift ranges, for example predicting S/N for the whole survey or fitting simulated data to find redshifts. However, \gls{SALT2} only has one intrinsic parameter of SN variability ($x_1$, which has no effect outside the rest-frame optical) and one color parameter ($c$), so it is not the best choice for looking at SN~Ia variability in detail.
\item \gls{SNEMO} is an \gls{SED} model based on a principal-component decomposition of the \gls{SNfactory} spectral time series \citep{Saunders2018}. It only spans a rest-frame wavelength range of 3,300\AA\xspace--8,600\AA\xspace, so it can only fit some of the {\it Roman}\xspace observer-frame wavelength range. It comes in versions with one (SNEMO2), six (SNEMO7), and fourteen principal components (SNEMO15) of variability (plus color). It is possible that these large numbers of linear principal components (especially SNEMO15) may be approximating nonlinear trends in the data with linear components\footnote{This may be related to SNEMO7 performing roughly comparably to \gls{SALT2} in standardization performance \citep{Rose2020}.}; \citet{Rubin2020} and \citet{Boone2021a} suggest that $\sim$~three intrinsic components may be closer to the right number.
\item \gls{SUGAR} \citep{Leget2020} spans a rest-frame wavelength range of 3340\AA\xspace--8580\AA\xspace, comparable to \gls{SNEMO}. Its main advantage over \gls{SNEMO} is that it describes SNe~Ia with three intrinsic parameters (plus extinction).
\item We also directly use SN time series for simulations. We draw on the spectrophotometric \gls{SNfactory} data for this \citep{Aldering2020}, which (at present) is available for a rest-frame wavelength range of 3,300\AA\xspace--8,600\AA\xspace.
\end{itemize}
We use two types of simulations:
\begin{itemize}
\item 1D simulations simply produce spectra with proper wavelength sampling and S/N. Generally speaking, our analyses fit the time series of 1D spectra directly (without stacking). Note that we call these simulations ``1D,'' even though they produce a time series of 1D spectra and thus could be considered 2D (wavelength and time).
\item Forward models produce 2D simulated images (which we generate assuming a four-point $2\times2$ dither pattern) which are optimally extracted using a forward model (e.g., \citealt{Bolton2010, shukla17, Ryan2018}). We only use this more computationally intensive approach when optimizing the prism parameters and looking at robustness to an inaccurate \gls{LSF}. Again, our analyses fit the full time series directly. Note that we call these simulations ``2D,'' even though they produce a time series of 2D images and thus could be considered 3D (along the trace in wavelength, perpendicular to the trace, and time).
\end{itemize}
Finally, we consider two types of inference:
\begin{itemize}
\item We use least-squares minimization to find the best-fit parameters for some of our analyses. For fitting redshifts (a somewhat nonlinear process that generally produces local $\chi^2$ minima), we try many initial starting redshifts to ensure convergence to the global best-fit. When fitting 1D simulations (each epoch represented by a vector of fluxes and a vector of uncertainties), we use a spectral model to generate model values at those specific wavelengths and dates. The 2D simulations (simulated images) are also fit with least squares, with the forward-model code used as a generative model for each epoch of simulated data.
\item We use Fisher-matrix calculations to predict uncertainties (the inverse of the Fisher matrix is the parameter covariance matrix) by linearizing a model in its parameters and approximating the observational uncertainties as Gaussian. We use Fisher-matrix calculations when using \gls{SED} models with limited rest-frame wavelength coverage (\gls{SNEMO} and \gls{SUGAR}). This limited wavelength coverage may require using only modest redshift perturbations around the true redshift to avoid shifting some wavelengths outside the wavelength range of the model, so a Fisher-matrix approach is best. Fisher-matrix calculations are also useful to quickly get uncertainty estimates (but as noted above, they are not fully trustworthy for assessing redshift measurements due to multiple $\chi^2$ minima). A minimal sketch of such a calculation is given after this list.
\end{itemize}
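As an illustration of this machinery (with a toy linear model and placeholder uncertainties, not one of the SN \gls{SED} models), a minimal sketch is:
\begin{verbatim}
import numpy as np

# Toy model: flux(lam; p) = p0 + p1 * (lam - lam0), with Gaussian uncertainties
lam = np.linspace(5000., 6000., 50)      # AA, placeholder wavelength grid
sigma = np.full(lam.size, 0.05)          # placeholder per-pixel uncertainties
p_fid = np.array([1.0, 2e-4])            # fiducial parameters

def model(p):
    return p[0] + p[1] * (lam - lam[0])

# Numerical Jacobian d(model)/d(p_k) around the fiducial point (linearization)
eps = 1e-6
J = np.column_stack([
    (model(p_fid + eps * np.eye(2)[k]) - model(p_fid - eps * np.eye(2)[k])) / (2 * eps)
    for k in range(2)])

F = J.T @ np.diag(1.0 / sigma**2) @ J    # Fisher matrix
cov = np.linalg.inv(F)                   # forecast parameter covariance
print(np.sqrt(np.diag(cov)))             # forecast 1-sigma parameter uncertainties
\end{verbatim}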
We do not consider model-independent spectral-feature measurements (e.g., smoothing and splines in \citealt{Blondin2006}) for two reasons. 1) As we show in Section~\ref{sec:prismsurveysim}, individual visits with even $\sim 1$~hour exposure times only yield low S/N spectra which are only built up over multiple visits to reasonable time series. The SN spectral features evolve over this time, so a parameterized model is the most efficient and unbiased method of inference. 2) The optimal prism dispersion (discussed in Section~\ref{sec:prismoptimization}) samples SN spectral features well, but (unlike for observations with $R\sim1000$ ground-based spectrographs) does not over-resolve so much that the \gls{LSF} does not matter. Once again, a parameterized model (that can be convolved with the \gls{LSF}) will be necessary to perform unbiased inference.
\subsection{Prism Survey Simulation} \label{sec:prismsurveysim}
We present here a simple survey simulation that demonstrates the utility of a slitless prism on {\it Roman}\xspace. We discuss the simulation details in Appendix~\ref{sec:prismsimulations}. Figure~\ref{fig:prismsens} shows our estimated point-source sensitivity of the prism.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95 \textwidth]{f2.pdf}
\caption{{\bf Left panel}: 1$\sigma$ noise level per pixel in wavelength for a one-hour prism exposure of a point source. {\bf Right panel}: The AB magnitude that can be measured at 5$\sigma$ (not 1$\sigma$), shown for each pixel in wavelength.}
\label{fig:prismsens}
\end{figure}
Our simulated survey is similar to the \gls{SDT} survey \citep{spergel15} in that it envisions a two-year mission with 30 hour visits once every 5 days (2--5 days in the rest frame, depending on redshift), with a total integrated time of 6 months (including overheads). Optimizing the split between imaging and prism spectroscopy will be a focus of future work. For now, we simulate a 25\% prism survey (cf. the 10\%, 25\%, 50\%, and 75\% in \citealt{Rose2021}). One can roughly scale our numbers of SNe by the fraction of time used for the prism (i.e., half the time for the prism means that our numbers should be doubled).
The two\xspace prism tiers consist of a 5.32\xspace square degree (19~pointings\xspace) wide field with an exposure time of 600\xspace~s and a 1.12\xspace square degree deep field (4~pointings\xspace) with an exposure time of 3600\xspace~s. Including slew times of 62.15 seconds (22 detector readouts of 2.825~seconds each)\xspace, every five days \widehours hours are spent on the wide tier and \deephours hours are spent on the deep tier. We assume a simple four-point dither for each visit. The longer deep-tier exposures are Poisson-dominated and thus could certainly be broken into more dithers without taking a read-noise penalty, but this is only a minor effect for our simulations.
The prism spectroscopy covers the full \widedegsq square degree and \deepdegsq square degree\xspace areas in each visit. In this way, each prism exposure will frequently contain multiple live SNe (shown in Figure~\ref{fig:multiple}), with varying signal-to-noise per spectrum. Since Figure~\ref{fig:multiple} shows that the multiplex factor is already significant by $z \sim 0.8$, we do not consider targeted prism observations of lower-redshift SNe in this work. Figure~\ref{fig:pointings} shows that the wide and deep tiers cannot be exactly circular, so they will have to be embedded together (deep inside wide) into a larger (roughly circular) area.\footnote{We choose the pointing centers for the wide tier with a simple algorithm that fills a circular area, column by column. The deep tier is more complicated. For simplicity, we choose to implement the four 3600~s deep pointings as twenty-three 600~s pointings (with one pointing going to the wide tier to enable a symmetric pointing pattern). The positions of four of these pointings are fixed to continue the wide-tier pattern. The positions of the other nineteen are chosen with a downhill simplex code \citep{NelderMead} that optimizes the actual depth on the sky compared to a uniformly filled circular field. After solving for all field centers, we solve for the path of shortest slew time with Concorde \citep{Concorde}. Concorde solves the traveling salesman problem, and can do so with non-Euclidean distances between points. For the path shown here, we use a table of slew times as a function of angular size to construct a matrix of slews between every set of points. As the traveling salesman problem solves for a cycle, we add a virtual point that is zero distance from all others, then remove this point to obtain a solution where the starting and ending points are not the same.} This will necessarily blend SNe from the tiers together, as (a fraction of the) SNe rotate between them as {\it Roman}\xspace rotates throughout the year to keep its solar panels pointed at the Sun. We simulate the tiers as though they are completely distinct, as this has sufficient accuracy for our purposes here.
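The open-path construction described in the footnote above can be illustrated with a minimal sketch; a brute-force search over a handful of made-up pointing centers stands in for Concorde, and Euclidean separations stand in for the slew-time table.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(6, 2))          # made-up pointing centers
n = len(pts)

# Pairwise "slew" matrix (Euclidean stand-in), augmented with a virtual node
# that is zero distance from every real node
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
D_aug = np.zeros((n + 1, n + 1))
D_aug[:n, :n] = D

def cycle_length(order):
    return sum(D_aug[order[i], order[(i + 1) % len(order)]] for i in range(len(order)))

# Brute-force the shortest cycle through all nodes, including the virtual one (index n)
best = min((tuple([n] + list(p)) for p in itertools.permutations(range(n))),
           key=cycle_length)

# Cutting the cycle at the virtual node yields the shortest *open* path
open_path = [i for i in best if i != n]
print(open_path)
\end{verbatim}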
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6 \textwidth]{f3.pdf}
\caption{\gls{WFI} multiplex: average number of SNe~Ia per pointing within $\pm$10 rest-frame days of maximum vs. redshift, updated from \citet{Rose2021}. Assuming that the {\it Roman}\xspace SN field(s) are in the continuous-viewing zone so that survey temporal edge effects from stopping and starting are small, the multiplex for other phase ranges scales from this plot (e.g., the multiplex within $\pm5$ rest-frame days is half the plotted values). Multiplex increases rapidly with maximum redshift and is a significant factor in survey design for surveys with maximum redshift $\gtrsim 0.7$. Additional multiplex is possible for cadenced, untargeted survey strategies as SN-free (``reference'') observations do not have to be taken.
}
\label{fig:multiple}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{f4.pdf}
\caption{The idealized circular survey geometry (with no edge effects) assumed in this work overlaid with a realistic survey geometry made from \gls{WFI} pointings (shaded gray). The blue dots mark the centers of \gls{WFI} pointings, and the black lines connect these centers with the shortest total slew path (found with Concorde, \citealt{Concorde}). For illustrative purposes, we do not show chip-gap-spanning dithers. Each cadence step (assumed in this work to be five days), {\it Roman}\xspace rotates to keep its solar panels pointing at the Sun. These 72 discrete roll angles have the fortunate effect of helping to break spatial/spectral degeneracy in the slitless prism spectroscopy.}
\label{fig:pointings}
\end{figure}
We need a way to summarize the S/N of a spectral time series into one number that can reasonably represent the useful S/N if, e.g., the cadence or dispersion changes. We choose to compute the combined S/N of all spectra, integrated over a tophat from 5000\AA\xspace to 6000\AA\xspace in the rest frame (hereafter referred to as the ``$V$'' band):
\begin{equation}
V_{\mathrm{SNR}} \stackrel{\text{def}}{=} \sqrt{ \sum_{\mathrm{Spectrum}\ i} \ \ \sum_{\mathrm{Wavelength}\ j \in V} \left[ \frac{\mathrm{True\ Noiseless\ Flux}_{ij}}{\mathrm{Uncertainty}_{ij}} \right]^2 }
\end{equation}
The rest-frame $V$ is a reasonable choice: it is accessible for most of the relevant redshift range and it is adjacent to the strong Si 6355\,\AA\xspace feature (which is blueshifted around maximum light to $\sim 6150$\,\AA\xspace). Note that the S/N values listed here are thus, for convenience of comparison, based on the total-time-series SN spectra that would be obtained over a period of time, though all of the analyses we have seriously considered would fit the time series of individual spectra (or an individual spectrum near max), not the co-added stack. Not all of these spectra would be around maximum light and therefore their contribution to the total-time-series S/N would be less significant. The contribution of spectra within a given time frame around peak to the overall S/N is presented in Figure~\ref{fig:SNRmax}. Figure~\ref{fig:simspectra} shows simulated SN observations in increasing redshift order.
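A minimal sketch of this summary statistic, operating on placeholder spectra rather than our actual simulations, is:
\begin{verbatim}
import numpy as np

def v_band_snr(spectra, z):
    """Combined rest-frame "V"-band S/N for one SN's spectral time series.

    spectra: list of (wavelength [AA, observer frame], true_flux, uncertainty)
             tuples, one per visit; placeholder inputs, not the simulation
             outputs themselves.
    """
    total = 0.0
    lo, hi = 5000. * (1. + z), 6000. * (1. + z)   # rest-frame 5000-6000 AA tophat
    for wave, flux, err in spectra:
        sel = (wave >= lo) & (wave <= hi)
        total += np.sum((flux[sel] / err[sel])**2)
    return np.sqrt(total)
\end{verbatim}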
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6 \textwidth]{f5.pdf}
\caption{Fraction of the total time series rest-frame $V$-band S/N in spectra around maximum for SNe~Ia. The red confidence interval (encompassing 68.3\% of the SNe) shows the fraction of the S/N in the single highest S/N spectrum. This declines as $\sqrt{1 + z}$, as the fixed rest-frame windows encompass more spectra at higher redshifts due to time dilation, but is generally around 1/3rd (e.g., a S/N 75 SN in Figure~\ref{fig:SNhistNew} has a $V$ S/N of around 25 for the spectrum closest to maximum light). The green interval shows the fraction of the S/N integrating from $-$2.5 to +2.5 rest-frame days. Quantization is visible in certain redshift ranges, e.g., at redshift 0.5, the rest-frame cadence is $3.3\overline{3}$ days, so sometimes a 5 rest-frame-day window contains two spectra, and sometimes just one. Orange and blue show $-$5 to +5 and $-10$ to +10, respectively. Analyses that efficiently use time-series information will be necessary to reach the full potential of the prism data.}
\label{fig:SNRmax}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49 \textwidth]{f6a.pdf}
\includegraphics[width=0.49 \textwidth]{f6b.pdf}
\includegraphics[width=0.49 \textwidth]{f6c.pdf}
\includegraphics[width=0.49 \textwidth]{f6d.pdf}
\caption{Simulated SNe Ia with one hour per visit every five days. Each SN shown has the median S/N out of many simulated at its redshift. We stack the data within five and ten rest-frame days of maximum for visualization purposes (Section~\ref{sec:subclassification} shows that stacking is not an optimal analysis). The wavelengths shown are the native wavelengths of the prism with no smoothing. Redshift $\sim 1.1$ SNe~Ia ({\bf top left}) are very clearly SNe Ia, and subclassification is possible. Redshift $\sim 1.3$ SNe~Ia ({\bf top right}) can generally be recognized as SNe Ia (at a minimum, the Si~6355\,\AA\xspace and 4130\,\AA\xspace features are visible), and probabilistically subclassified. At redshift $\sim 1.6$ ({\bf lower left}), classification may be possible. Finally, at redshift $\sim 2$ ({\bf lower right}), the $\sim 2800$\AA\xspace break and Ca H\&K are visible (with lower S/N for other features), but the strong Si 6355\,\AA\xspace feature is not covered (and would not have enough S/N for a strong detection, even if the prism went redder). This SN is thus recognizable as a $z=2$ SN~I, but probably cannot be absolutely classified as a SN~Ia (cf., \citealt{rubin13, Jones2013}).
\label{fig:simspectra}}
\end{figure}
Figure~\ref{fig:SNhistNew} shows the number of SNe within a certain redshift bin color coded by S/N. Within this figure the wide\xspace prism tier data is shown in the top panel, the deep\xspace tier below that, and the sum of the tiers on the bottom. The left column shows the numbers of SNe assuming a SN-free (``reference'' or ``template'') observation much deeper than the live-SN observations (thus not contributing any correlated noise to the SN time-series when subtracting host-galaxy light). This will require combining reference observations taken over two complete rolls (as the \gls{HLTDS} survey is planned to last for two years and {\it Roman}\xspace must keep its solar panels pointed near the Sun). Each roll angle will sample the spatial and spectral information of nearby objects differently. We thus refer to this as a ``3D'' host-galaxy subtraction, as a 3D model (coordinates on the sky and wavelength) will have to be constructed for the part of the sky that can blend into the spectral time series of each SN. The right column shows the case where observations only at the same roll angle can be used to subtract the host-galaxy light, so the observations decrease in S/N by a factor $1/\sqrt{2}$.
In total, Figure~\ref{fig:SNhistNew} shows that we expect to obtain the redshifts of $\sim 2500\xspace$ Type Ia SNe ($\sim 1500\xspace$ with secure types and redshifts). Of these, 530\xspace--960\xspace will have S/N $> 35$, be sub-typed, and be suitable for population and evolution studies. Finally, 240\xspace--520\xspace SNe Ia will have S/N $> 50$ and will be used for the retraining of the \gls{SED} model at high redshift. Again, these numbers are for a survey with 25\% time devoted to the prism (0.125~years of prism time), and roughly scale linearly with that amount.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45 \textwidth]{f7a.pdf}
\includegraphics[width=0.45 \textwidth]{f7b.pdf}
\caption{SN~Ia yields from our demonstration survey (we average over 25 survey realizations to reduce Poisson noise). {\bf Left column}: numbers of SNe assuming 3D host-galaxy subtraction (or no need of host subtraction). {\bf Right column}: numbers of SNe taking a 2D $\sqrt{2}$ host subtraction noise hit, in which only SN-free references at a given rotation angle are used. The numbers of SNe in the right column are high enough to be useful, but the gains from a 3D host-galaxy subtraction are clear. For the time series of each SN, we compute the total rest-frame $V$-band signal to noise of the full time series as described in Section~\ref{sec:prismsurveysim} and color code accordingly. We shade the redshifts where the rest-frame $V$ band wavelength range is incompletely covered by the prism in gray.}
\label{fig:SNhistNew}
\end{figure}
\clearpage
\section{Evaluating the Scientific Performance of a Prism} \label{sec:performance}
The simulated survey strategy we present is based on the simple optimizations in Section~\ref{sec:surveyoptimization}. Although future work will improve on this study, this survey nevertheless provides a baseline from which to evaluate each of our scientific goals. We discuss the results below, focusing on the results for the deep tier (3600\xspace~s of exposure time per visit), as it provides the best overlap between the prism wavelength coverage and the current set of (rest-frame-optical-focused) models for $z \gtrsim 1$.
\subsection{Redshift Measurements} \label{sec:redshift}
To assess the viability of measuring SN~Ia redshifts with the prism, we simulate time series and fit with \gls{SALT2}-extended. Using a model with wide rest-frame wavelength coverage enables us to initialize the optimization at a wide range of redshifts to ensure the redshift found really is the global optimum (in general, redshift finding can produce local $\chi^2$ minima, especially at low S/N, when the spectral features are hard to uniquely identify). We simulate directly from the \gls{SNfactory} time series \citep{Aldering2020}, using the 39 SNe with the best phase coverage. We interpolate each SN time series to a five-day observer-frame cadence, resample to prism resolution, and add appropriate noise for one-hour visits at each simulated redshift. Each SN is simulated five times, with five different realizations of noise. These time series do not cover the rest-frame UV, where the rapid falloff in flux can help secure the redshift \citep{rubin13}. Moreover, the \gls{SALT2}-extended model used here cannot fit a peculiar 1991T-like SN \citep{Filippenko1992, Phillips1992}, limiting the recovery rate, even at the lower redshift end of our simulations. A parameterization like \citet{Boone2021a} does fit 1991T-like SNe, and may do better if this parameterization can be extended into the UV. For both these reasons, our results should be considered an underestimate of the possible prism performance. Figure~\ref{fig:completeness-snf} shows the results of our simulated redshift measurements. One-hour exposures every five days are sufficient for measuring redshifts to $z \sim 2$ ($V$-band S/N $\sim 15$), a redshift that is difficult to reach from the ground.
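The redshift-recovery procedure can be summarized with the schematic Python sketch below: a coarse grid of initialization redshifts is scanned for the global $\chi^2$ minimum, which is then refined locally. The \texttt{model\_flux} function (returning the \gls{SALT2}-extended spectral time series resampled to the prism pixels at a trial redshift, with the amplitude profiled out) is a hypothetical placeholder, not part of a public API.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def fit_redshift(flux, flux_err, model_flux,
                 z_grid=np.arange(0.2, 2.5, 0.02)):
    """Scan initialization redshifts, then refine the global chi^2 minimum.

    flux, flux_err : observed prism spectral time series and uncertainties
    model_flux(z)  : model fluxes on the same pixels at trial redshift z
    """
    def chi2(z):
        resid = (flux - model_flux(z)) / flux_err
        return np.sum(resid ** 2)

    chi2_grid = np.array([chi2(z) for z in z_grid])   # coarse global scan
    z0 = z_grid[np.argmin(chi2_grid)]                 # best coarse redshift
    res = minimize_scalar(chi2, bounds=(z0 - 0.02, z0 + 0.02),
                          method="bounded")           # local refinement
    return res.x, res.fun
\end{verbatim}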
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\textwidth]{f8a.pdf}
\includegraphics[width=0.95\textwidth]{f8b.pdf}
\caption{The {\bf top left panel} shows the fraction of our simulated SN~Ia prism spectroscopic sample that is recovered near the core of the distribution as a function of simulated SNe redshift. We take the spectral time-series data from the 39 SNe in the Nearby Supernova Factory data set \citep{Aldering2020} having the best temporal sampling, and degrade the wavelength sampling and signal-to-noise to match 1-hour {\it Roman}\xspace prism exposures. Each SN is simulated five times, with five different realizations of noise. To recover the redshifts, we fit the \gls{SALT2}-extended spectral model to the simulated prism data, using a wide range of initialization redshifts to ensure convergence to the global best fit. The plateau in efficiency at low redshift is due to a peculiar 1991T-like SN \citep{Filippenko1992, Phillips1992} in our input data that is not well modeled by \gls{SALT2}. Such SNe are modeled well by, e.g., the \citet{Boone2021a} parameterization, but we leave this for future work. Another future improvement on these results would come from simulating a dataset with bluer rest-frame wavelength coverage, as SNe Ia show a rapid decline in flux in the rest-frame NUV that can yield a secure redshift in this redshift range \citep{rubin13}. The {\bf top right panel} shows the dispersion (Normalized Median Absolute Deviation) in recovered redshifts. The {\bf bottom panels} repeat these summaries, but show the results in bins of rest-frame $V$ S/N. In general, these tests show reasonable performance to $z \lesssim 2$ and S/N $\gtrsim 15$.}
\label{fig:completeness-snf}
\end{figure}
\subsection{Typing of SNe} \label{sec:typing}
Most spectroscopic typing tools are based on a comparison of an unknown-transient spectrum to a set of library spectra (e.g., \gls{SNID}, \citealt{blondin07}). We have experimented with using \gls{SNID} on our simulated time series to see if a majority of the spectra are classified as one type of SN. We see plausible results: most simulated SNe~Ia are classified as SNe~Ia for most of their epochs, with classification efficiency declining as the S/N varies from $\sim 35$ to $\sim 20$. However, the majority of training spectra (and spectral time series) are SNe~Ia, so a careful study of biases would need to be undertaken to come to a secure conclusion. A more promising path forward may be to use the \gls{ParSNIP} model \citep{Boone2021c}, which has a parameterization that spans both core-collapse and SNe~Ia. \gls{ParSNIP} has only been trained so far with imaging data, so we leave this for the future and suggest that rest-frame $V$-band S/N $\sim 25$ is necessary for typing (midway between the $\sim 15$ required for redshifts, and the $\sim 35$ required for sub-typing).
\subsection{Sub-typing of SNe Ia} \label{sec:subclassification}
In general, SNe~Ia are continuously distributed in most parameters \citep[e.g.,][]{Branch2006, Boone2021a}. We thus use ``sub-typing'' in this work to refer to placing SNe into regions of parameter space that are much smaller than the population distribution to obtain smaller distance uncertainties than are possible with a two-parameter \citet{tripp} standardization \citep{Wang2009, twins15, Boone2021b}. We take S/N $\sim 1$ (measurement uncertainty comparable to the population dispersion) as the threshold for useful sub-typing, which may seem like only a weak constraint. However, the \citet{twins15} analysis seems to be only S/N $\sim 1$ \citep{Rubin2020}; furthermore Bayesian Hierarchical Models can provide useful constraints on standardization coefficients and population parameters even when individual SNe are measured to this precision \citep{minka99, Hayden2019}.
To determine the maximum redshift for which we can sub-type, we fit the 15-eigenvector \gls{SNEMO} model to simulated prism time series with 1 hour per visit. Figure~\ref{fig:prism_subtype} shows the \gls{SNEMO}15 uncertainties as a function of redshift. The left panel shows the uncertainty in the rest-frame $V$ magnitude and the extinction $A_V$. The right panel shows the uncertainty on the scaling of each eigenvector. It shows a general trend where higher-numbered coefficients (e.g., $c_{14}$) have larger uncertainties than lower-numbered coefficients (e.g., $c_1$) at any given redshift. The higher-numbered eigenvectors make much smaller contributions to the overall variation of SN fluxes and thus their projections require higher S/N to constrain. Figure~\ref{fig:prism_subtype_stack} shows the same analysis, but stacking the time series into one spectrum. The much larger uncertainties indicate that a time series has much more information than a stacked spectrum at the same total S/N.
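Because \gls{SNEMO} is linear in its eigenvector coefficients, the coefficient uncertainties in Figure~\ref{fig:prism_subtype} behave like those of a weighted linear least-squares fit. The sketch below illustrates this with a hypothetical flattened design matrix of eigenvector time series; our actual fits also determine redshift, phase, and extinction, so this is only a simplified illustration of how the uncertainties scale with S/N.
\begin{verbatim}
import numpy as np

def coefficient_uncertainties(eigenvectors, flux_err):
    """1-sigma uncertainties on linear eigenvector coefficients.

    eigenvectors : (n_coeff, n_data) eigenvector time series, flattened
                   over phase and wavelength
    flux_err     : (n_data,) measurement uncertainties for the SN time series
    """
    design = eigenvectors / flux_err   # weighted design matrix
    fisher = design @ design.T         # Fisher information matrix
    cov = np.linalg.inv(fisher)        # parameter covariance
    return np.sqrt(np.diag(cov))
\end{verbatim}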
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{f9.pdf}
\caption{Prism time series (with one hour per visit) are simulated and fit with SNEMO15. The {\bf left panel} shows the absolute rest-frame $V$-band magnitude and extinction uncertainties; the {\bf right panel} shows the uncertainties on the coefficient for each eigenvector. (There are no published standardization coefficients for SNEMO15, so we cannot show distance-modulus uncertainties.) Generally, the SNEMO15 eigenvector projections are usefully measured to $z \sim 1.3$\xspace, indicating that the prism time series can be used to sub-type SNe Ia.}
\label{fig:prism_subtype}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{f10.pdf}
\caption{As in Figure~\ref{fig:prism_subtype}, but the time series is stacked using inverse-variance weighting into one spectrum before fitting with SNEMO (the SNEMO model is also stacked with the same weighting). As the time-series information is lost, this analysis includes a prior of $\pm 1$ day for the date of maximum. The uncertainties are much larger than the time-series analysis in Figure~\ref{fig:prism_subtype}, indicating that ignoring the time evolution with stacking throws away useful information (and that this is true even though each individual epoch is quite noisy).
\label{fig:prism_subtype_stack}}
\end{figure}
Most of the uncertainties in eigenvector projections cross our S/N $\sim 1$ threshold at $z \sim 1.3$\xspace with one-hour exposures, when the time-series has a total S/N of $\sim 35$. We thus take this as a reasonable criterion for subtyping. A time-series model with standardization coefficients will be necessary to predict distance-modulus uncertainties as a function of redshift, which we unfortunately leave to future work.
\subsection{Population evolution and SED Training} \label{sec:missingcomponent}
The prism can provide time series of a random sample of SNe~Ia. These time series can be used to investigate new, unknown SN behaviors, described as eigenvectors in a \gls{SNEMO}-like framework.\footnote{A similar concept is the intrinsic-scatter matrix \citep{Kessler2013}, which can be thought of as the sum of the outer products of the eigenvectors, with each eigenvector scaled by the width of the population distribution. We pursue an eigenvector-based description here because it is not clear that the population distribution will be the same at all redshifts and eigenvectors provide a natural basis for describing any changes in the mean SN.} We thus examine our ability to recover an eigenvector that is only present in high-redshift prism data. This eigenvector could represent the effect of a physical parameter that moves outside the range observed at low redshift (i.e., where \gls{SNEMO} is trained). It may also represent an effect that shows up (partially or completely) outside the rest-frame wavelength range of \gls{SNEMO}.
We simulate 1 hour per pointing exposures of just 100 SNe at redshift 1 or 1.2. The time series for each SN is drawn from the SNEMO15 coefficient distribution in \gls{SNfactory} data. We get a baseline set of eigenvector projections for each SN by fitting its time series with SNEMO15, assuming knowledge of all 15 eigenvectors (the mean plus 14 components of variation).
Then, we pretend that we do not have knowledge of one of the eigenvectors, and recover this unknown eigenvector using \gls{EMPCA} \citep{EMPCA}. This algorithm alternates between estimating projections (the ``e-step'') and estimating the missing eigenvector(s) (the ``m-step''). We initialize the missing eigenvector with 2D random Gaussian noise (the two dimensions are wavelength and phase, as with the other eigenvectors).\footnote{To impose some regularization on the solution, we use a 2D 3rd-order spline as our basis, with eight nodes in phase and 20 in wavelength. Examining other (combinations of) missing eigenvectors and regularization is left for future work.} Then we estimate all the eigenvector projections, and with the projections fixed, solve for the missing eigenvector. Using this updated eigenvector, we reestimate all of the projections, and iterate until convergence (defined as no eigenvector projection changing by more than 0.01).
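A minimal sketch of this alternating procedure is shown below, assuming for simplicity a single missing eigenvector, uniform weights, and no spline regularization (our actual analysis uses the weighted, spline-regularized \gls{EMPCA} of \citealt{EMPCA}).
\begin{verbatim}
import numpy as np

def empca_one_missing(data, known_evs, n_iter=50, tol=0.01, seed=0):
    """Recover one missing eigenvector by alternating e- and m-steps.

    data      : (n_sne, n_pix) SN time series flattened over phase/wavelength
    known_evs : (n_known, n_pix) eigenvectors assumed to be known
    """
    rng = np.random.default_rng(seed)
    missing = rng.normal(size=data.shape[1])          # random initialization
    proj_old = np.zeros((data.shape[0], known_evs.shape[0] + 1))
    for _ in range(n_iter):
        basis = np.vstack([known_evs, missing])
        # e-step: projections of each SN onto the current basis
        proj = np.linalg.lstsq(basis.T, data.T, rcond=None)[0].T
        # m-step: residual after the known eigenvectors defines the missing one
        resid = data - proj[:, :-1] @ known_evs
        missing = np.linalg.lstsq(proj[:, -1:], resid, rcond=None)[0][0]
        if np.max(np.abs(proj - proj_old)) < tol:     # convergence criterion
            break
        proj_old = proj
    return missing, proj
\end{verbatim}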
Figure~\ref{fig:EMPCA} shows the progression of the iterations in terms of the correlation coefficient between the projections found with the true eigenvector, and the projections found in the e-step with the estimated eigenvector. This figure shows three different eigenvectors (\gls{SNEMO}'s 7, 8, and 9) which are neither the most obvious and easiest to constrain, nor the least obvious (cf. Figure~\ref{fig:prism_subtype}). For the sample of 100 simulated SNe at redshift 1, we see rapid convergence and high correlation coefficients; for the sample of 100 simulated SNe at redshift 1.2, we see worse performance comparing the same eigenvectors.
Figure~\ref{fig:evcorrelation} shows a typical recovery. The top panel shows the projections one finds with the recovered eigenvector plotted against the projections one finds with perfect eigenvector knowledge. The bottom panel shows the residuals. The scaling of the eigenvector is arbitrary, i.e., multiplying the eigenvector by two and dividing all the projections by 2 will leave the fluxes the same. For the purposes of this figure, we scale the eigenvector so that the bottom panel has zero correlation with the projections using the true eigenvector. Note also that there is no noise directly on this figure (although noise does propagate into the recovered eigenvector), as the same simulated time series is fit for both the values on the x-axis and the values on the y-axis.
As shown in Figure~\ref{fig:evcorrelation}, we obtain almost the same eigenvector projections using our recovered eigenvector as the real one. Thus, if an unknown eigenvector appeared only in the prism spectra, we would be able to find it, measure its impact on luminosity, and take it into account in the cosmology analysis. The exact requirements will depend on the eigenvector, but S/N $\sim 50$ is a reasonable threshold for high enough quality data for running this test.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65 \textwidth]{f11.pdf}
\caption{Using six independent analyses of 100 SNe, we investigate the convergence of an \gls{EMPCA} in recovering each of three different unknown eigenvectors (\gls{SNEMO}15's seventh, eighth, and ninth) at each of two different redshifts (1 and 1.2). We monitor the convergence by looking at the correlation coefficient between the eigenvector projections using the true eigenvector, and the eigenvector projections using the best estimate of the missing eigenvector at each iteration. We see high correlations at redshift 1 and somewhat lower correlations for simulations at redshift 1.2.}
\label{fig:EMPCA}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65 \textwidth]{f12.pdf}
\caption{{\bf Top panel}: Estimated eigenvector projections for the 7th \gls{SNEMO}15 eigenvector recovered from 100 simulated prism time series at redshift 1. This is a typical case from Figure~\ref{fig:EMPCA}. The x-axis shows the projections estimated with perfect knowledge of the eigenvector; the y-axis shows the projections when the eigenvector is estimated from an \gls{EMPCA}. This panel shows reasonable agreement. Note that no noise is directly seen in this, as the same simulated time series are used for the projections shown on both axes (the noise does propagate into the recovered eigenvector). {\bf Bottom panel:} the residuals from the top panel, showing that the difference in projections is constrained to somewhat better than the statistical uncertainty. This indicates that an unknown eigenvector can be recognized, and its effect on SN distances measured and mostly taken into account in a cosmology analysis.}
\label{fig:evcorrelation}
\end{figure}
\subsection{Preliminary Prism Survey Optimization} \label{sec:surveyoptimization}
With approximate rest-frame $V$-band S/N targets in hand (S/N 25 for typing, and S/N 35 for subtyping), we perform a simple optimization. Following \citet{Rubin21b}, we solve for the per-epoch exposure times as a function of redshift for both S/N targets (shown in Table~\ref{tab:prismexposure}), then parameterize the number of SNe as a function of redshift (in bins of 0.1). Each survey is represented by a non-negative vector of the relative number of SNe as a function of redshift. The \citet{Rubin21b} optimizer scales the relative numbers of SNe to absolute numbers by finding the minimum number of pointings and exposure times that give those relative numbers, then scaling that survey linearly to take a certain fixed total time. This scaling can produce fractional pointings, so for small surveys with a few pointings it is only an approximation to the optimal survey. As with the \gls{SDT} report, we combine with a 0.2\% CMB shift-parameter constraint (defined in \citealt{Efstathiou1999}) and 800 nearby SNe and assume a flat $w_0$-$w_a$ cosmology to compute a \gls{FoM}. The survey is optimized by adjusting the relative numbers of SNe as a function of redshift to produce the maximum \gls{FoM}.
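The linear-rescaling step of this optimizer can be illustrated with the toy sketch below (all inputs are placeholders, and each redshift bin is treated as its own tier with its own exposure time; the actual optimizer of \citealt{Rubin21b} also handles the cadence structure, tier definitions, and the \gls{FoM} evaluation). As noted above, this scaling can produce fractional pointings.
\begin{verbatim}
import numpy as np

def scale_to_budget(rel_numbers, pointings, exp_times, budget_s,
                    n_visits, overhead_s=62.15):
    """Linearly rescale a trial survey to a fixed prism time budget.

    rel_numbers : (n_z,) relative numbers of SNe per redshift bin
    pointings   : (n_z,) minimum pointings per visit giving those numbers
    exp_times   : (n_z,) per-visit exposure time (s) for each bin's S/N target
    budget_s    : total prism time available (s)
    n_visits    : number of cadence steps over the survey
    """
    time_used = n_visits * np.sum(pointings * (exp_times + overhead_s))
    scale = budget_s / time_used       # linear scaling to the time budget
    return scale * rel_numbers, scale * pointings
\end{verbatim}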
Table~\ref{tab:optimumsurveys} shows our optimized surveys. They have two tiers, which we refer to as ``wide'' and ``deep.'' We vary three sets of input assumptions: the S/N needed in the prism data for a SN to be useful (either S/N~25 for typing, or S/N~35 for subtyping), the Hubble diagram RMS (0.15, 0.1, or 0.075 magnitudes), and the amount of time used for the prism survey (either 0.125~years, 0.25~years, or 0.5~years). \citet{Rose2021} considered prism times amounting to 10\%, 25\%, 50\%, and 75\% of the 0.5 year \gls{HLTDS} survey. We select the middle two options, and also consider 0.5~years not as a serious proposal, but just as a comparison to imaging-only surveys. Appendix~\ref{sec:prismvsimaging} performs a preliminary optimization that suggests $\sim 25\%$ or 0.125 years for the prism is a reasonable option, but this needs further study.
Surprisingly, the optimized surveys are very similar across all of these assumptions: for surveys taking 0.125 years of prism time, the optimum is a $\sim 5$~deg$^2$ wide tier with $\sim 600$~s pointings, with a $\sim 1$~deg$^2$ deep tier with $\sim 1$~hour pointings\xspace (double this for the 0.25-year surveys and quadruple for the 0.5-year surveys). We also note that the statistical-only \gls{FoM} values can be quite high ($\sim 200$--400 for a 0.125 year survey). Depending on the level of systematic uncertainty, an initial cosmology analysis using just SNe observed with both the prism and imaging may be a useful interim step towards the full cosmology analysis that also includes SNe observed in just imaging.
\begin{deluxetable}{c|cccc}[htbp]
\tablehead{
\colhead{Redshift} & \colhead{S/N $7.50\times4\ \mathrm{Dithers}=15.00$} & \colhead{$10.61\times2=15.00$} & \colhead{$12.50\times4=25.00$} & \colhead{$17.50\times4=35.00$}
}
\startdata
0.3 & 186.45 & 121.48 & 276.85 & 367.25\\
0.4 & 180.80 & 118.65 & 268.38 & 355.95\\
0.5 & 203.40 & 132.78 & 302.28 & 403.98\\
0.6 & 245.78 & 161.03 & 372.90 & 505.68\\
0.7 & 296.62 & 197.75 & 457.65 & 638.45\\
0.8 & 347.48 & 237.30 & 553.70 & 793.83\\
0.9 & 418.10 & 288.15 & 686.48 & 1031.12\\
1.0 & 494.38 & 350.30 & 844.68 & 1322.10\\
1.1 & 570.65 & 418.10 & 1025.48 & 1680.88\\
1.2 & 661.05 & 497.20 & 1243.00 & 2118.75\\
1.3 & 768.40 & 598.90 & 1528.33 & 2706.35\\
1.4 & 898.35 & 726.03 & 1884.28 & 3412.60\\
1.5 & 1042.42 & 867.28 & 2288.25 & 4234.68\\
1.6 & 1217.58 & 1048.08 & 2799.58 & 5262.98\\
1.7 & 1460.53 & 1299.50 & 3503.00 & 6667.00\\
1.8 & 1697.83 & 1542.45 & 4186.65 & 8006.05\\
1.9 & 2017.05 & 1881.45 & 5144.33 & 9927.05\\
2.0 & 2387.12 & 2262.83 & 6217.83 & 12057.10\\
\enddata
\caption{The exposure times (in seconds) per visit necessary to reach a given rest-frame $V$ S/N over the full time series of the median SN. We show S/N 15, 25, and 35, with the shorter S/N 15 exposure times with both two dithers (S/N $10.61 \times 2$) and four dithers (S/N $7.50\times 4$) to show the impact of read noise. These times do not include our assumed slew time per pointing of 62.15 seconds (22 detector readouts of 2.825~seconds each)\xspace.}
\label{tab:prismexposure}
\end{deluxetable}
\begin{deluxetable}{cc|ccc}[htbp]
\tablehead{
\colhead{$V$ S/N Target} & \colhead{HD RMS (Mag)} & \colhead{Wide Tier} & \colhead{Deep Tier} & \colhead{Statistical-Only FoM}
}
\startdata
\multicolumn{5}{c}{Surveys using 0.125 Years of Prism} \\
\hline
25 & 0.15 & 5.6 deg$^2$, 457.65 s & 1.5 deg$^2$ 2799.57 s & 228 \\
25 & 0.10 & 6.2 deg$^2$, 457.65 s & 1.5 deg$^2$ 2799.57 s & 383 \\
35 & 0.10 & 4.8 deg$^2$, 638.45 s & 0.8 deg$^2$ 5262.97 s & 292 \\
35 & 0.075 & 6.4 deg$^2$, 505.67 s & 0.7 deg$^2$ 5262.97 s & 402 \\
\hline
\multicolumn{5}{c}{Surveys using 0.25 Years of Prism} \\
\hline
25 & 0.15 & 12 deg$^2$, 457.65 s & 3.0 deg$^2$ 2799.57 s & 335 \\
25 & 0.10 & 13 deg$^2$, 457.65 s & 2.9 deg$^2$ 2799.57 s & 544 \\
35 & 0.10 & 10 deg$^2$, 638.45 s & 1.5 deg$^2$ 5262.97 s & 432 \\
35 & 0.075 & 10 deg$^2$, 638.45 s & 1.6 deg$^2$ 5262.97 s & 584 \\
\hline
\multicolumn{5}{c}{Surveys using 0.5 Years of Prism} \\
\hline
25 & 0.15 & 24 deg$^2$, 553.70 s & 4.0 deg$^2$ 4248.8 s & 462 \\
25 & 0.10 & 29 deg$^2$, 457.65 s & 4.8 deg$^2$ 2799.57 s & 731 \\
35 & 0.10 & 19 deg$^2$, 638.45 s & 3.2 deg$^2$ 5262.97 s & 602 \\
35 & 0.075 & 24 deg$^2$, 505.67 s & 2.8 deg$^2$ 5262.97 s & 806 \\
\enddata
\caption{Simple optimized survey strategies and \gls{FoM} values. Each row shows a set of assumptions ($V$ S/N targets corresponding with Table~\ref{tab:prismexposure}, and Hubble diagram RMS), the optimized strategy, and the statistical-only \gls{FoM}. Surprisingly, the optimum survey strategies look similar (for surveys taking 0.125 years of prism time, the optimum is a $\sim 5$~deg$^2$ wide tier with $\sim 600$~s pointings, with a $\sim 1$~deg$^2$ deep tier with $\sim 1$~hour pointings\xspace), irrespective of assumptions. The statistical-only \gls{FoM} values of just these prism-observed SNe can be quite high, even in the 0.125-year prism survey, implying that an initial cosmology analysis using just SNe observed with both the prism and imaging may be a useful interim step towards the full cosmology analysis including imaging-only SNe.}
\label{tab:optimumsurveys}
\end{deluxetable}
\clearpage
\subsection{Prism Parameter Optimization} \label{sec:prismoptimization}
The optimization of the prism consists of three related parameters (and their interaction with the optical design): the wavelength of the cutoff on the blue side, the cutoff wavelength on the red side, and the overall scaling of the dispersion as a function of wavelength. Widening the spectral range or increasing the dispersion reduces the contrast of faint continuum sources against the background and thus decreases the sensitivity. However, the goals outlined in Section~\ref{sec:introduction} (redshifts, classifications, and sub-classifications) benefit from having a wider spectral range and to some extent benefit from having higher dispersion. We performed a series of analyses varying the prism parameters and investigating the sub-typing performance at fixed exposure time, i.e., remaking Figure~\ref{fig:prism_subtype}. In short, we conclude that the blue cutoff should be set as blue as the prism image quality allows ($\sim 7500$\AA\xspace), the red cutoff should be set to $\sim 18000$\AA\xspace to minimize thermal background, and the dispersion should be $\gtrsim 70$\xspace. (Appendix~\ref{sec:analytic} performs a simple analytic calculation that supports this range of dispersion.) Figure~\ref{fig:filters} shows the current design based on these parameters.
The optimization of the dispersion is involved enough that it is worth describing here. The concern with only marginally resolving the SN spectral features (i.e., minimizing the dispersion to increase S/N per unit time) is that biases will arise in the interpretation of the data if the assumed \gls{LSF} is incorrect. An in-depth investigation of this bias necessarily involves 2D simulations, rather than just 1D S/N calculations, and we describe these 2D simulations in detail in Appendix~\ref{sec:twoDforward}. We evaluate the bias by generating the time series with the \gls{PSF} provided by the Project, and fitting the time series assuming an incorrect \gls{PSF}, in this case scaling the \gls{PSF} by 0.95 in the dispersion direction (resulting in a \gls{LSF} error of 5\%).
Figure~\ref{fig:forwardmodelimage} shows a time series simulated for this purpose at $z=1$. We use the time series without noise added (left column) to directly and precisely evaluate the bias without having to average over many thousands of noisy SNe. Figure~\ref{fig:bias_vs_R} shows the biases one obtains as a function of dispersion fitting with the incorrect \gls{PSF}. For a minimum dispersion $\gtrsim 70$\xspace, the biases are modest.
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.95 \textwidth]{f13.png}
\caption{A simulated prism time series for a SN~Ia at redshift 1 sampled every 5 observer-frame days based on our forward-model code. The {\bf left column} shows the time series without noise added; the {\bf right column} shows noise appropriate for 1 hour visits (900-second exposures with a $2\times2$ dither pattern). (A big$\times$little$\times$little dithering strategy that spans chip gaps is likely more optimal, but $2\times2$ suffices for illustrative purposes.) The row for each date shows the four dithers interlaced. As expected from Figure~\ref{fig:SNRmax}, the S/N in any one visit is modest, but the S/N over the time series is quite reasonable.
\label{fig:forwardmodelimage}}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8 \textwidth]{f14.pdf}
\caption{Biases due to an incorrect \gls{LSF}. We use our forward-model code to generate a simulated $z=1.1$ SN with the correct \gls{PSF}, then fit it using a \gls{PSF} scaled by 0.95 in the dispersion direction (for a 5\% error in the \gls{LSF}). We are only interested in measuring biases, so we do not add noise in the forward-model code, enabling us to measure the bias using only one SN time series at each dispersion value. The biases incurred drop rapidly with the dispersion, as the SN spectral features are increasingly over-resolved and thus our measurements are increasingly insensitive to the assumed \gls{LSF}. The {\bf top panel} shows biases in the recovered absolute magnitude and extinction as a function of the minimum prism dispersion. The {\bf bottom panel} shows biases in the eigenvector projections. For a minimum two-pixel dispersion of $\gtrsim 70$\xspace (a criterion met by the nominal dispersion, shown with a dotted line), the biases are $\lesssim 1$~mmag, and the eigenvector projections are recovered to a few percent of the intrinsic distribution widths ($\sim 1$). Thus with $\sim 5$\% uncertainty in the \gls{LSF}, we can plausibly average over $\sim 1000$ SNe to look for any biases without being limited by systematic uncertainties.}
\label{fig:bias_vs_R}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
This work presents a series of studies investigating the uses of a low-dispersion prism in the {\it Roman}\xspace mission (now baselined). Broadly speaking, many of our studies are idealized, assuming perfect host-galaxy and background subtraction (except for the impact of Poisson noise), relying on existing SN \gls{SED} models (each of which has significant limitations), assuming a good calibration, and ignoring the details of the survey geometry. Future work will introduce more realism in simulation and treatment of the prism data. But these studies do indicate that the prism can produce data with S/N and wavelength sampling that is useful for a broad range of SN investigations.
We find that using such a prism for part of the {\it Roman}\xspace \gls{HLTDS} provides crucial SN data for a significant and representative sample of SNe~Ia that would be difficult to otherwise obtain. We perform a simple survey optimization, and present a toy survey that shows what performance is possible with exposure times in the range 600\xspace--3600\xspace seconds. We show that live-SN redshifts from such a survey extend above redshift 2, SN Ia subclassification is possible to $z \sim 1.3$\xspace, and useful \gls{SED} training information is available at redshift 1--1.2. In short, we find the prism addresses many of the systematic uncertainties that are present in an imaging-only survey. Future work will also seek to continue to optimize the prism component of the \gls{HLTDS}, including the relative amount of time spent performing imaging and spectroscopy.
\clearpage
\printglossaries
\begin{acknowledgments}
This work was supported by NASA through grant NNG16PJ311I (Perlmutter {\it Roman}\xspace Science Investigation Team). This work was also partially supported by the Office of Science, Office of High Energy Physics, of the U.S. Department of Energy, under contract no. DE-AC02-05CH11231. L.G. acknowledges financial support from the Spanish Ministerio de Ciencia e Innovaci\'on (MCIN), the Agencia Estatal de Investigaci\'on (AEI) 10.13039/501100011033, and the European Social Fund (ESF) ``Investing in your future'' under the 2019 Ram\'on y Cajal program RYC2019-027683-I and the PID2020-115253GA-I00 HOSTFLOWS project, from Centro Superior de Investigaciones Cient\'ificas (CSIC) under the PIE project 20215AT016, and the program Unidad de Excelencia Mar\'ia de Maeztu CEX2020-001058-M.
\end{acknowledgments}
\software{
Astropy \citep{Astropy},
Concorde \citep{Concorde},
Matplotlib \citep{matplotlib},
Numpy \citep{numpy},
SciPy \citep{scipy},
SNCosmo \citep{sncosmo},
}
\section{Introduction}
Detecting and localising anomalous findings in medical images (e.g., polyps, malignant tissues, etc.) are of vital importance~\cite{tian2019one,tian2020few,litjens2017survey,baur2020scale,fan2020pranet,lz2020computer,liu2021self,liu2021acpl,liu2021noisy}.
Systems that can tackle these tasks are often formulated with a classifier trained with large-scale datasets annotated by experts.
Obtaining such annotation is often challenging in real-world clinical datasets because the number of normal images from healthy patients tends to overwhelm the number of anomalous images.
Hence, to alleviate the challenges of collecting anomalous images and learning from class-imbalanced training sets, the field has developed unsupervised anomaly detection (UAD) models~\cite{tian2021constrained,chen2021deep} that are trained exclusively with normal images.
Such UAD strategy benefits from the straightforward acquisition of training sets containing only normal images and the potential generalisability to unseen anomalies without collecting all possible anomalous sub-classes.
Current UAD methods learn a one-class classifier (OCC) using only normal/healthy training data, and detect anomalous/disease samples using the learned OCC~\cite{f-AnoGAN,seebock2019exploiting,gong2019memorizing,chen2021deep,liu2019photoshopping,venkataramanan2020attention,pang2019deep,li2021cutpaste,tian2021pixel}.
UAD methods can be divided into: 1) reconstruction methods, 2) self-supervised approaches, and 3) Imagenet pre-trained models.
Reconstruction methods~\cite{f-AnoGAN,gong2019memorizing,chen2021deep,liu2019photoshopping,venkataramanan2020attention} are trained to accurately reconstruct normal images, exploring the assumption that the lack of anomalous images in the training set will prevent a low-error reconstruction of a test image that contains an anomaly.
However, this assumption is not met in general because reconstruction methods are indeed able to successfully reconstruct anomalous images, particularly when the anomaly is subtle.
Self-supervised approaches~\cite{tian2021constrained,tian2021self,sohn2020learning} train models using contrastive learning, where pretext tasks must be designed to emulate normal and anomalous image changes for each new anomaly detection problem.
Imagenet pre-trained models~\cite{reiss2021panda,defard2020padim} produce features to be used by an OCC, but the translation of these models into medical image problems is not straightforward.
Reconstruction methods are able to circumvent the aforementioned challenges posed by self-supervised and Imagenet pre-trained UAD methods, and they can be trained with a relatively small amount of normal samples.
However, their viability depends on an acceptable mitigation of the potentially low reconstruction error of anomalous test images.
In this paper, we introduce a new UAD reconstruction method, the Memory-augmented Multi-level Cross-attention Masked Autoencoder (MemMC-MAE), designed to address the low reconstruction error of anomalous test images.
MemMC-MAE is a transformer-based approach based on the masked autoencoder (MAE)~\cite{he2021masked}, with a novel memory-augmented self-attention encoder and a new multi-level cross-attention decoder.
MemMC-MAE masks large parts of the input image during its reconstruction, and given that the likelihood of masking out an anomalous region is large, then it is unlikely that it will accurately reconstruct that anomalous region.
However, there is still the risk that the anomaly is not masked out; in this case, the normal patterns stored in the encoder's memory, combined with the correlation of multiple normal patterns in the image exploited by the decoder's multi-level cross-attention, explicitly discourage an accurate reconstruction of the anomaly and thus produce a high reconstruction error (high anomaly score).
The encoder's memory is also designed to address the MAE's long-range 'forgetting' issue~\cite{martins2021infty}, which can be harmful for UAD due to the poor reconstruction based on forgotten normality patterns and 'unwanted' generalisability to subtle anomalies during testing.
Our contributions are summarised as:
\begin{itemize}
\item To the best of our knowledge, this is the first UAD method based on MAE~\cite{he2021masked};
\item A new memory-augmented self-attention operator for our MAE transformer encoder to explicitly encode and memorise the normality patterns; and
\item A novel decoder architecture that uses the learned multi-level memory-augmented encoder information as prior features to a cross-attention operator.
\end{itemize}
Our method achieves better anomaly detection and localisation accuracy than most competing approaches on the UAD benchmarks using the public Hyper-Kvasir colonoscopy dataset~\cite{borgli2020hyperkvasir} and Covid-X Chest X-ray (CXR) dataset~\cite{wang2020covid}.
\section{Method}
\begin{figure}[t!]
\centering
\vspace{-10pt}
\includegraphics[width=0.86\textwidth]{images/framework_v3.pdf}
\vspace{-40pt}
\caption{\textbf{Top:} overall MemMC-MAE framework. Yellow tokens indicate the unmasked visible patches, and blue tokens indicate the masked patches. Our memory-augmented transformer encoder only accepts the visible patches/tokens as input, and its output tokens are combined with dummy masked patches/tokens for the missing pixel reconstruction using our proposed multi-level cross-attentional transformer decoder.
\textbf{Bottom-left:} proposed memory-augmented self-attention operator for the transformer encoder, and \textbf{bottom-right:} proposed multi-level cross-attention operator for the transformer decoder.
}
\label{fig:enc_dec_structure}
\end{figure}
\subsection{Memory-augmented Multi-level Cross-attention Masked Autoencoder (MemMC-MAE)}
Our MemMC-MAE, depicted in Fig.~\ref{fig:enc_dec_structure}, is based on the
masked autoencoder (MAE)~\cite{he2021masked} that was recently developed for the pre-training of models to be used in downstream computer vision tasks.
MAE has an asymmetric architecture, with an encoder that takes a small subset of the input image patches and a smaller/lighter decoder that reconstructs the original image based on the input tokens from visible patches and dummy tokens from masked patches.
Our MemMC-MAE is trained with a normal image training set, denoted by $\mathcal{D} = \{ \mathbf{x}_i \}_{i=1}^{|\mathcal{D}|}$, where $\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^{H \times W \times R}$ ($H$: height, $W$: width, $R$: number of colour channels).
Our method first divides the input image $\mathbf{x}$ into non-overlapping patches $\mathcal{P} = \{ \mathbf{p}_i \}_{i=1}^{|\mathcal{P}|}$, where $\mathbf{p} \in \mathbb{R}^{\hat{H} \times \hat{W} \times R}$, with $\hat{H} \ll H$ and $\hat{W} \ll W$. We then randomly mask out 75\% of the $|\mathcal{P}|$ patches, and the remaining visible patches $\mathcal{P}^{(v)} = \{\mathbf{p}_{v}\}_{v=1}^{|\mathcal{P}^{(v)}|}$ (with $|\mathcal{P}^{(v)}| = 0.25\times |\mathcal{P}|$) are used by the MemMC-MAE encoder to encode the normality patterns of those patches. All $|\mathcal{P}^{(v)}|$ encoded visible patches and $|\mathcal{P}|-|\mathcal{P}^{(v)}|$ dummy masked patches are then used as the input of a new multi-level cross-attention decoder to reconstruct the image.
The training of MemMC-MAE is based on the minimisation of the mean squared error (MSE) loss between the input and reconstructed images at the pixels of the masked patches of the training images.
The approach is evaluated on a testing set $\mathcal{T} = \{ (\mathbf{x},y,\mathbf{m})_i \}_{i=1}^{|\mathcal{T}|}$, where $y \in \mathcal{Y} = \{\text{normal}, \text{anomalous} \}$, and $\mathbf{m}\in \mathcal{M} \subset \{0,1\}^{H \times W \times 1}$ denotes the segmentation mask of the lesion in the image $\mathbf{x}$.
When testing, we also mask 75\% of the image; the patch-wise reconstruction error is used for anomaly localisation, and the mean reconstruction error over all patches is used for image-wise anomaly detection.
Below we provide details on the major contributions of MemMC-MAE, which are the memory-augmented transformer encoder that stores the long-term normality patterns of the training samples, and the new multi-level cross-attentional transformer decoder to leverage the correlation of features from the encoder to reconstruct the missing normal pixels.
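As a minimal illustration of this training objective (a sketch only: patch extraction, positional embeddings, and the transformer architecture described below are abstracted behind a generic \texttt{model} callable), the masked-patch MSE loss can be written as:
\begin{verbatim}
import torch

def masked_recon_loss(model, patches, mask_ratio=0.75):
    """MSE loss on masked patches for MAE-style training.

    patches : (batch, n_patches, patch_dim) flattened image patches
    model   : maps (visible patches, visible indices, n_patches) to
              reconstructions of all patches, (batch, n_patches, patch_dim)
    """
    b, n, d = patches.shape
    n_vis = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n, device=patches.device)
    ids_shuffle = noise.argsort(dim=1)            # random permutation per image
    ids_vis = ids_shuffle[:, :n_vis]              # indices of visible patches
    visible = torch.gather(patches, 1,
                           ids_vis.unsqueeze(-1).expand(-1, -1, d))
    recon = model(visible, ids_vis, n)            # reconstruct all patches
    mask = torch.ones(b, n, device=patches.device)
    mask.scatter_(1, ids_vis, 0.0)                # 1 = masked, 0 = visible
    loss = ((recon - patches) ** 2).mean(dim=-1)  # per-patch MSE
    return (loss * mask).sum() / mask.sum()       # average over masked patches
\end{verbatim}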
\subsubsection{Memory-augmented Transformer Encoder
(Fig.~\ref{fig:enc_dec_structure} - bottom left)}
We modify the transformer encoder with a novel memory-augmented self-attention, extending the keys and values of the self-attention operation with learnable memory matrices that store normality patterns and are updated via back-propagation.
To this end, the proposed self-attention (SA) module for layer $l \in \{0,...,L-1\}$ is defined as:
\begin{equation}
\begin{split}
\mathbf{X}^{(l+1)} &=
f_{SA}\big(\mathbf{W}^{(l)}_{Q}\mathbf{X}^{(l)}, [\mathbf{W}^{(l)}_{K}\mathbf{X}^{(l)},\mathbf{M}^{(l)}_{K}], [\mathbf{W}^{(l)}_{V}\mathbf{X}^{(l)},\mathbf{M}^{(l)}_{V}] \big ), \\
\end{split}
\label{eq:X}
\end{equation}
where $\mathbf{X}^{(0)}$
is the encoder input matrix containing $|\mathcal{P}^{(v)}|$ patch tokens formed from the visible image patches transformed through the linear projection $\mathbf{W}^{(0)}$,
with $|\mathcal{P}^{(v)}|$ being the number of visible tokens/patches, $\mathbf{X}^{(l)},\mathbf{X}^{(l+1)}$
are the input and output of layer $l$,
$\mathbf{W}^{(l)}_{Q},\mathbf{W}^{(l)}_{K},\mathbf{W}^{(l)}_{V}$ are the linear projections of the encoder's layer $l$ for query, key and value of the self-attention operator, respectively, and $\textbf{M}^{(l)}_{K},\textbf{M}^{(l)}_{V}$ are the layer $l$ learnable memory matrices that are concatenated with $\mathbf{W}_{K}\mathbf{X}^{(l)}$ and $\mathbf{W}_{V}\mathbf{X}^{(l)}$ using the operator $[.,.]$.
The self-attention operator $f_{SA}(.)$ follows the standard ViT~\cite{dosovitskiy2020image} and transformer~\cite{vaswani2017attention}, which computes a weighted sum of value vectors according to the softmax-normalized scaled dot-product similarity between query and key.
Such memory-augmented self-attention aims to store normal patterns that are not encoded in the feature $\mathbf{X}^{(l)}$,
forcing the decoder to reconstruct anomalous input patches into normal output patches during testing.
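For illustration, a minimal single-head PyTorch-style sketch of the operator in Eq.~\eqref{eq:X} is given below; it omits the multi-head structure, LayerNorm, MLP and residual connections of the full encoder block, and the memory size is an arbitrary example value.
\begin{verbatim}
import torch
import torch.nn as nn

class MemoryAugmentedSelfAttention(nn.Module):
    """Self-attention with learnable memory slots appended to keys/values."""

    def __init__(self, dim, n_memory=256):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.mem_k = nn.Parameter(torch.randn(n_memory, dim) * 0.02)  # M_K
        self.mem_v = nn.Parameter(torch.randn(n_memory, dim) * 0.02)  # M_V
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, n_tokens, dim)
        b = x.shape[0]
        q = self.q(x)
        k = torch.cat([self.k(x), self.mem_k.expand(b, -1, -1)], dim=1)
        v = torch.cat([self.v(x), self.mem_v.expand(b, -1, -1)], dim=1)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v
\end{verbatim}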
\subsubsection{Multi-level Cross-Attention Transformer Decoder (Fig.~\ref{fig:enc_dec_structure} - bottom right).
}
Our transformer decoder computes the cross-attention operation using the outputs from all encoder layers and the decoder layer output from the self-attention operator (see Fig.~\ref{fig:enc_dec_structure} - Bottom right). More formally, the layer $d \in \{0,...,D-1\}$ of our decoder
outputs
\begin{equation}
\begin{split}
\mathbf{Y}^{(d+1)} &= \sum_{l=1}^{L} \alpha^{(d,l)} \times f_{SA}\big(f_{SA}(\mathbf{Y}^{(d)},\mathbf{Y}^{(d)},\mathbf{Y}^{(d)}), \mathbf{W}^{(d)}_{K}\mathbf{X}^{(l)}, \mathbf{W}^{(d)}_{V}\mathbf{X}^{(l)}\big ),\\
\end{split}
\label{eq:Y}
\end{equation}
where $\mathbf{Y}^{(d)}$
and $\mathbf{Y}^{(d+1)}$
represent the input and output of the decoder layer $d$ containing $|\mathcal{P}|$ tokens (i.e., $|\mathcal{P}^{(v)}|$ tokens from the visible patches of the encoder and $|\mathcal{P}| - |\mathcal{P}^{(v)}|$ dummy tokens from the masked patches),
$\mathbf{X}^{(l)}$ denotes the output from encoder layer $l-1$, and $\mathbf{W}^{(d)}_{K},\mathbf{W}^{(d)}_{V}$ are the linear projections of the layer $d$ of the decoder for the key and value of the self-attention operator, respectively. Note that all $|\mathcal{P}|$ input tokens for the decoder are attached with positional embeddings.
The multi-level cross-attention results in~\eqref{eq:Y} are fused together with a weighted sum operation using the weight $\alpha^{(d,l)}$, which is computed based on a linear projection layer and sigmoid function to control the weight of different layers' cross-attention results, as in
\begin{equation}
\begin{split}
\alpha^{(d,l)} &= \sigma \left(\mathbf{W}_{\alpha}^{(d,l)} \left[f_{SA}(\mathbf{Y}^{(d)},\mathbf{Y}^{(d)},\mathbf{Y}^{(d)}), \mathbf{Y}^{(d+1)}\right]\right), \\
\end{split}
\label{eq:alpha}
\end{equation}
where $\sigma(.)$ is the sigmoid function, and
$\mathbf{W}_{\alpha}^{(d,l)}$
denotes a learnable weight matrix.
This fusion mechanism allows the normal patterns encoded at different levels of the encoder to contribute at different decoding layers, adjusting their relative importance using the self-attention output from $f_{SA}(.)$ and the cross-attention output $\mathbf{Y}^{(d+1)}$.
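A minimal single-head sketch of the decoder layer in Eqs.~\eqref{eq:Y} and \eqref{eq:alpha} is given below. For simplicity it gates each level with its own cross-attention output rather than the fused $\mathbf{Y}^{(d+1)}$, and it omits multi-head attention, LayerNorm, MLP and residual connections.
\begin{verbatim}
import torch
import torch.nn as nn

def attention(q, k, v):
    """Standard single-head scaled dot-product attention."""
    scale = q.shape[-1] ** -0.5
    return torch.softmax(q @ k.transpose(1, 2) * scale, dim=-1) @ v

class MultiLevelCrossAttention(nn.Module):
    """Decoder layer attending to all encoder levels with learned fusion."""

    def __init__(self, dim, n_enc_layers):
        super().__init__()
        self.k = nn.Linear(dim, dim, bias=False)   # W_K^(d)
        self.v = nn.Linear(dim, dim, bias=False)   # W_V^(d)
        self.alpha = nn.ModuleList(
            [nn.Linear(2 * dim, 1) for _ in range(n_enc_layers)])

    def forward(self, y, enc_outputs):
        # y: (b, n, dim); enc_outputs: list of (b, m, dim) encoder features
        sa = attention(y, y, y)                    # decoder self-attention
        out = 0.0
        for l, x in enumerate(enc_outputs):
            ca = attention(sa, self.k(x), self.v(x))  # cross-attention, level l
            gate = torch.sigmoid(self.alpha[l](torch.cat([sa, ca], dim=-1)))
            out = out + gate * ca                  # weighted sum over levels
        return out
\end{verbatim}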
\subsection{Anomaly Detection and Segmentation}
We compute the anomaly score~\cite{chen2021deep} with the multi-scale structural similarity (MS-SSIM)~\cite{wang2003multiscale}.
The anomaly scores are pooled over 10 different random seeds for masking image patches with a fixed 75\% masking ratio, which enables more robust anomaly detection and localisation.
The anomaly localisation map is obtained from the per-patch MS-SSIM scores averaged over these random maskings, and image-wise anomaly detection uses the mean MS-SSIM score over all patches~\cite{chen2021deep}.
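A sketch of this scoring procedure is shown below; \texttt{reconstruct} and \texttt{ms\_ssim\_per\_patch} are hypothetical helpers standing in for the trained MemMC-MAE and the MS-SSIM computation of \cite{wang2003multiscale}.
\begin{verbatim}
import numpy as np

def anomaly_scores(image, reconstruct, ms_ssim_per_patch,
                   n_seeds=10, mask_ratio=0.75):
    """Average patch-wise (1 - MS-SSIM) scores over random maskings.

    reconstruct(image, seed, mask_ratio) -> reconstructed image
    ms_ssim_per_patch(image, recon)      -> (n_patches,) scores in [0, 1]
    """
    per_patch = []
    for seed in range(n_seeds):
        recon = reconstruct(image, seed=seed, mask_ratio=mask_ratio)
        per_patch.append(1.0 - ms_ssim_per_patch(image, recon))  # high = anomalous
    localisation = np.mean(per_patch, axis=0)   # anomaly map over patches
    detection = localisation.mean()             # image-wise anomaly score
    return detection, localisation
\end{verbatim}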
\begin{table}[t]
\centering
\resizebox{0.66\textwidth}{!}{
\begin{tabular}{@{}@{\hskip .15in}c@{\hskip .15in}c@{\hskip .15in}c@{\hskip .15in}c@{\hskip .15in}@{}}
\toprule \hline
Methods & Publication & Covid-X (AUC) & Hyper-Kvasir (AUC)\\ \hline\hline
DAE~\cite{masci2011stacked} & ICANN'11 & 0.557 & 0.705 \\
OCGAN~\cite{perera2019ocgan} & CVPR'18 & 0.612& 0.813 \\
F-anoGAN~\cite{f-AnoGAN} & IPMI'17 & 0.669 & 0.907 \\
ADGAN~\cite{liuphotoshopping} & ISBI'19 & 0.659 & 0.913 \\
MS-SSIM~\cite{chen2021deep} & AAAI'22 & 0.634 & 0.917 \\
PANDA~\cite{reiss2021panda} & CVPR'21 & 0.629 & 0.937 \\
PaDiM~\cite{defard2020padim} & ICPR'21 & 0.614 & 0.923 \\
IGD~\cite{chen2021deep} & AAAI'22 & 0.699 & 0.939 \\
CCD+IGD*~\cite{tian2021constrained} & MICCAI'21 & 0.746 & \textbf{0.972} \\ \hline
Ours & & \textbf{0.917} &\textbf{0.972} \\
\hline \bottomrule
\end{tabular}
}
\caption{\textbf{Anomaly detection AUC} test results on Covid-X and Hyper-Kvasir. CCD+IGD*~\cite{tian2021constrained} requires at least $2\times$ longer training time than other approaches in the table because of a two-stage self-supervised pre-training and fine-tuning.}
\vspace{-20pt}
\label{tab:detection_auc}
\end{table}
\section{Experiments and Results}
\subsubsection{Datasets and Evaluation Measures}
Two disease screening datasets
are used in our experiments.
We test anomaly detection on the CXR images of the Covid-X dataset~\cite{wang2020covid}, and both anomaly detection and localisation on the colonoscopy images of the Hyper-Kvasir dataset~\cite{borgli2020hyperkvasir}.
\textbf{Covid-X}~\cite{wang2020covid} has a training set with 1,670 Covid-19 positive and 13,794 Covid-19 negative CXR images, but we only use the 13,794 Covid-19 negative CXR images for training. The test set contains 400 CXR images, consisting of 200 positive and 200 negative images, each image with size 299 $\times$ 299 pixels.
\textbf{Hyper-Kvasir} is a large-scale public gastrointestinal dataset. The images were collected during gastroscopy and colonoscopy procedures at Baerum Hospital in Norway, and were annotated by experienced medical practitioners. The dataset contains 110,079 images from unhealthy and healthy patients, of which 10,662 are labelled.
Following~\cite{tian2021constrained}, 2,100 normal images from `cecum', `ileum' and `bbps-2-3' are selected, from which we use 1,600 for training and 500 for testing. The testing set also contains 1,000 anomalous images with their segmentation masks.
Detection is assessed with area under the ROC curve (AUC)~\cite{masci2011stacked,gong2019memorizing,perera2019ocgan,chen2021deep}, and localisation is evaluated with intersection over union (IoU)~\cite{chen2021deep,defard2020padim,tian2021constrained,venkataramanan2020attention}.
\subsubsection{Implementation Details}
For the transformer,
we follow ViT-B~\cite{dosovitskiy2020image,he2021masked} for designing the encoder and decoder, consisting of stacks of transformer blocks.
Inspired by U-Net~\cite{zhou2018unet++} for medical segmentation, we add residual connections to transfer information from earlier to later blocks for both the encoder and decoder.
Each encoder block contains a memory-augmented self-attention block and an MLP block with LayerNorm (LN). Each decoder block contains a multi-level cross-attention block and an MLP block with LayerNorm (LN).
We also adopt a linear projection layer after the encoder to match the different widths of the encoder and decoder~\cite{he2021masked}.
We add positional embeddings (with the sine-cosine version) to both the encoder and decoder input tokens.
RandomResizedCrop is used for data augmentation during training.
Our method is trained for 2000 epochs in an end-to-end manner using the Adam optimiser~\cite{kingma2014adam} with a weight decay of 0.05 and a batch size of 256.
The learning rate is set to 1.5e-3. We warm up the training process for the first 5 epochs. The method is implemented in PyTorch~\cite{NEURIPS2019_9015} and run on an NVIDIA 3090 GPU. The overall training time is around 22 hours, and the mean inference time is 0.21\,s per image.
\subsubsection{Evaluation on Anomaly Detection on Covid-X and Hyper-Kvasir}
We compare our method with nine competing UAD approaches:
DAE~\cite{masci2011stacked}, OCGAN~\cite{perera2019ocgan}, f-AnoGAN~\cite{f-AnoGAN}, ADGAN~\cite{liuphotoshopping}, MS-SSIM autoencoder~\cite{chen2021deep}, PANDA~\cite{reiss2021panda}, PaDiM~\cite{defard2020padim}, CCD~\cite{tian2021constrained} and IGD~\cite{chen2021deep}.
We apply the same experimental setup (i.e., image pre-processing, training strategy, and evaluation methods) to all of these methods and to our approach for a fair comparison.
The quantitative comparison results for anomaly detection are shown in Table~\ref{tab:detection_auc} for both Covid-X and Hyper-Kvasir benchmarks. Our MemMC-MAE achieves the best AUC results on Covid-X and Hyper-Kvasir datasets with 91.7\% and 97.2\%, respectively.
On Covid-X, our result outperforms all competing methods by a large margin with an improvement of 17.1\% over the second best approach. For Hyper-Kvasir, our result is on par with the best result in the field produced by CCD+IGD~\cite{tian2021constrained}, which has a training time $2\times$ longer than our approach.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{images/segment_results.pdf}
\caption{Segmentation results of our proposed method on Hyper-Kvasir~\cite{borgli2020hyperkvasir}, with our predictions (Pred) and ground truth annotations (GT).
}
\label{fig:qualitative_segmentation}
\vspace{-10pt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{images/recon_examples.pdf}
\caption{Reconstruction of testing images from Covid-X (Top) and Hyper-Kvasir (Bottom). For each triplet, we show the masked image (left), our MemMC-MAE reconstruction (middle), and the ground-truth (right). Normal testing images are marked with green boxes, and anomalous ones are marked with red boxes.
}
\label{fig:qualitative}
\end{figure}
\begin{table}[t]
\scalebox{0.85}{
\parbox{.55\linewidth}{
\centering
\begin{tabular}{ccc|c}
\toprule\hline
MAE & Mem-Enc & MC-Dec & AUC - Covid\\ \hline \hline
\checkmark & & & 0.799 \\
\checkmark & \checkmark & & 0.862 \\\hline
\checkmark & \checkmark & \checkmark & \textbf{0.917} \\ \hline\bottomrule
\end{tabular}%
\caption{\textbf{Ablation study} on Covid-X of the encoder's memory-augmented operator (Mem-Enc) and the decoder's multi-level cross-attention (MC-Dec).}
\label{tab:ablation}
}
\hfill
\parbox{.55\linewidth}{
\centering
\begin{tabular}{@{}cc@{}}
\toprule \hline
Methods & Localisation - IoU \\ \hline \hline
IGD~\cite{chen2021deep} & 0.276 \\
PaDiM~\cite{defard2020padim} & 0.341 \\
CAVGA-$R_{u}$~\cite{venkataramanan2020attention} & 0.349 \\
CCD + IGD~\cite{tian2021constrained} & 0.372 \\
\hline
Ours & \textbf{0.419} \\\bottomrule \hline
\end{tabular}%
\caption{\textbf{Anomaly localisation:} Mean IoU test results on Hyper-Kvasir on 5 groups of 100 images.}
\label{tab:localisation_auc_HK}
}
}
\vspace{-10pt}
\end{table}
\subsubsection{Evaluation on Anomaly Localisation on Hyper-Kvasir}
We compare our anomaly localisation results on Table~\ref{tab:localisation_auc_HK} with four recently proposed UAD baselines: IGD~\cite{chen2021deep}, PaDiM~\cite{defard2020padim}, CCD~\cite{tian2021constrained} and CAVGA-$R_{u}$~\cite{venkataramanan2020attention}.
The results of these methods on Table~\ref{tab:localisation_auc_HK} are from~\cite{tian2021constrained}.
Following~\cite{tian2021constrained}, we randomly sample five groups of 100 anomalous images from the test set and compute the mean segmentation IoU.
The proposed MemMC-MAE surpasses IGD, PaDiM, CAVGA-$R_{u}$ and CCD by a minimum of 4.7\% and a maximum of 14.3\% IoU, illustrating the effectiveness of our model in localising anomalous tissues.
\subsubsection{Visualisation of predicted segmentation.}
The visualisation of polyp segmentation results of MemMC-MAE on Hyper-Kvasir~\cite{borgli2020hyperkvasir} is shown in Fig.~\ref{fig:qualitative_segmentation}. Notice that our model can accurately segment colon polyps of various sizes and shapes.
\subsubsection{Visualisation of Reconstructed Images}
Figure~\ref{fig:qualitative} shows the reconstructions produced by MemMC-MAE on Covid-X (Top) and Hyper-Kvasir (Bottom) testing images.
Notice that our method can effectively reconstruct the anomalous images with
polyps/covid as normal images by automatically removing the polyps or blurring the anomalous regions, leading to larger reconstruction errors for those anomalies.
The normal images are accurately reconstructed with smaller reconstruction errors than the anomalous images.
\subsubsection{Ablation Study}
Tab.~\ref{tab:ablation} shows the contribution of each component of our proposed method on Covid-X testing set. The baseline MAE~\cite{he2021masked} achieves 79.9\% AUC.
Our method obtains a significant performance gain by adding the memory-augmented self-attention operator to the transformer encoder (Mem-Enc).
Adding the proposed multi-level cross-attention operator into the decoder (MC-Dec) further boosts the performance by about 5\% AUC.
\section{Conclusion}
We proposed a new UAD reconstruction method, called MemMC-MAE, for anomaly detection and localisation in medical images, which to the best of our knowledge, is the first UAD method based on MAE.
MemMC-MAE introduced a novel memory-augmented self-attention operator for the MAE encoder and a new multi-level cross-attention for the MAE decoder to address the large reconstruction error of anomalous images that plague UAD reconstruction methods.
The resulting anomaly detector showed SOTA anomaly detection and localisation accuracy on two public medical datasets.
Despite the remarkable performance, the results can potentially improve if we use MemMC-MAE as a pre-training approach for other UAD methods~\cite{chen2021deep,defard2020padim,tian2021constrained,venkataramanan2020attention}, which we plan to explore in the future.
\bibliographystyle{splncs04}
\subsection{Propositional and first-order graphical models}
Probabilistic graphical models such as Bayesian networks, Markov networks and factor graphs compactly represent a joint distribution over a set of randvars ${\mathcal V} = \{V_{1}, \ldots, V_{n}\}$ by factorizing the distribution into a set of local distributions. For example, factor graphs represent the distribution as a product of \emph{factors}:
$\textit{Pr}(V_{1}, \ldots, V_{n}) = \frac{1}{Z} \prod_i \phi_i({\mathcal V_i})$,
where $\phi_i$ is a \emph{potential} function that maps each configuration of ${\mathcal V}_i \subseteq \mathcal{V}$ to a real number and $Z$ is a normalization constant.
Probabilistic logical models use concepts from first-order logic to provide a high-level modeling language for representing propositional graphical models. While many such languages exist (see~\cite{Getoor07:book} for an overview), we focus on \emph{parametric factors} (parfactors)~\cite{Poole2003} that generalize factor graphs.
Parfactors use \emph{parametrized randvars} (PRVs) to represent entire sets of randvars. For example, the PRV $BloodType(X)$, where $X$ is a logvar, represents one $BloodType$ randvar for each object in the domain of $X$ (written $\mathcal{D}(X)$).
Formally, a PRV is of the form $P({\mathbf X}) | C$ where $C$ is a \emph{constraint} consisting of a conjunction of inequalities $X_i \neq t$ where $t \in \mathcal{D}(X_i)$ or $t \in {\mathbf X}$.
It represents the set of all randvars $P({\mathbf x})$ where ${\mathbf x}\in \mathcal{D}({\mathbf X})$ and ${\mathbf x}$ satisfies $C$; this set is
denoted $rv(P({\mathbf X})|C)$.
A {\em parfactor} uses PRVs to compactly encode a set of factors. For example, the parfactor $\phi({\mathit Smoke}(X), {\mathit Friends}(X,Y),{\mathit Smoke}(Y))$ could encode that friends have similar smoking habits.
It imposes a symmetry on the model: in the absence of any other information, the probability that, among two friends, both, one or none smoke is the same for all pairs of friends.
Formally, a parfactor is of the form $\phi({\mathcal A})|C$, where ${\mathcal A} = (A_i)_{i=1}^n$ is a sequence of PRVs, $C$ is a constraint on the logvars appearing in $\mathcal{A}$, and $\phi$ is a potential function. The set of logvars occurring in ${\mathcal A}$ is denoted $logvar({\mathcal A})$.
A {\em grounding substitution} maps each logvar to an object from its domain.
A parfactor $g$ represents the set of all factors that can be obtained by applying a grounding substitution to $g$ that is consistent with $C$; this set is called the grounding of $g$, and is denoted $gr(g)$. A parfactor model is a set $G$ of parfactors. It compactly defines a factor graph $gr(G) = \bigcup_{g \in G} gr(g)$.
Following the literature, we assume that the model is in a \emph{normal form}, such that (i) each pair of logvars has either identical or disjoint domains, and
(ii) for each pair of co-domain logvars $X$, $X'$ in a parfactor $\phi({\mathcal A}) |C$, $(X \neq X') \in C$.
Every model can be transformed into this form in polynomial time~\cite{Poole2011}.
\subsection{Inference}
A typical inference task is to compute the
marginal probability of some variables
by summing out the remaining variables, which can be written as:
$\textit{Pr}({\mathcal V}') = \sum_{{\mathcal V} \setminus {\mathcal V}'} \prod_i \phi_i({\mathcal V}_i)$.
This is an instance of the general sum-product problem~\cite{Bacchus09}. Abusing notation, we write this sum of products as $\sum_{{\mathcal V}\setminus \mathcal{V}'} M(\mathcal{V})$.
\noindent\textbf{Inference by recursive decomposition.}
Inference algorithms exploit the factorization of the model to recursively decompose the original problem into smaller, independent subproblems.
This is achieved by a decomposition of the sum-product, according to a simple {\em decomposition rule}.
\begin{definition}[{\bf The decomposition rule}] Let ${\mathcal P}$ be a sum-product computation ${\mathcal P}: \sum_{{\mathcal V}} M({\mathcal V})$,
and let $\mathbb{M} = \{M_1({\mathcal V_1}), \dots M_k({\mathcal V_k})\}$ be a partitioning (decomposition) of $M({\mathcal V})$.
Then, the {\em decomposition of ${\mathcal P}$, w.r.t.\ ${\mathbb M}$} is an equivalent sum-product formula ${\mathcal P}_{{\mathbb M}}$, defined as follows:
$${\mathcal P}_{{\mathbb M}}: \sum_{{\mathcal V}'} \Big[ \, \big( \sum_{{\mathcal V}'_1} M_1({\mathcal V}_1) \big) \dots \big(\sum_{{\mathcal V}'_k} M_k({\mathcal V}_k) \big) \, \Big]$$\vspace{-0.2cm}
where ${\mathcal V}' = \bigcup_{i,j} {\mathcal V}_i \cap {\mathcal V}_j$, and ${\mathcal V}'_i = {\mathcal V_i} \setminus {\mathcal V}'$.
\end{definition}
Most exact inference algorithms recursively apply this rule
and compute the final result using top-down or bottom-up dynamic programming
\cite{Bacchus09,Darwiche01,Dechter99}. The complexity is then exponential only in the size of the largest sub-problem solved.
Variable elimination (VE) is a bottom-up algorithm that computes the nested sum-product by repeatedly solving an innermost problem $\sum_V M(V,\mathcal{V}')$ to \emph{eliminate} $V$ from the model. At each step, VE eliminates a randvar $V$ from the model by \emph{multiplying} the factors in $M(V,\mathcal{V}')$ into one and \emph{summing-out} $V$ from the resulting factor.
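To make the elimination step concrete, the following minimal Python sketch (our illustration; the dictionary-based factor representation, the restriction to binary randvars, and the helper names are assumptions made for brevity) multiplies the factors that mention a randvar and then sums it out:
\begin{verbatim}
from itertools import product

# A factor is (vars, table): 'vars' is a tuple of randvar names and
# 'table' maps each joint assignment (a tuple of 0/1 values) to a potential.

def multiply(f1, f2):
    """Multiply two factors into one over the union of their randvars."""
    (v1, t1), (v2, t2) = f1, f2
    vs = tuple(dict.fromkeys(v1 + v2))            # ordered union of randvars
    table = {}
    for vals in product([0, 1], repeat=len(vs)):  # binary randvars
        a = dict(zip(vs, vals))
        table[vals] = (t1[tuple(a[v] for v in v1)] *
                       t2[tuple(a[v] for v in v2)])
    return vs, table

def sum_out(factor, var):
    """Sum a randvar out of a factor."""
    vs, t = factor
    keep = tuple(v for v in vs if v != var)
    out = {}
    for vals, val in t.items():
        key = tuple(x for x, name in zip(vals, vs) if name != var)
        out[key] = out.get(key, 0.0) + val
    return keep, out

def eliminate(factors, var):
    """One VE step: multiply all factors mentioning 'var', then sum it out."""
    touching = [f for f in factors if var in f[0]]
    rest = [f for f in factors if var not in f[0]]
    if not touching:                              # nothing mentions 'var'
        return factors
    prod = touching[0]
    for f in touching[1:]:
        prod = multiply(prod, f)
    return rest + [sum_out(prod, var)]
\end{verbatim}
Repeatedly calling \texttt{eliminate} for every randvar in ${\mathcal V} \setminus {\mathcal V}'$ and multiplying the remaining factors yields the unnormalized marginal over ${\mathcal V}'$.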
\noindent\textbf{Decomposition trees.} A single inference problem typically has multiple solutions, each with a different complexity.
A \emph{decomposition tree} (dtree) is a structure that represents the decomposition used by a specific solution and allows us to determine its complexity~\cite{Darwiche01}. Formally, a dtree is a rooted tree in which each leaf represents a factor in the model.\footnote{We use a slightly modified definition for dtrees, which were originally defined as full binary rooted trees.}
Each node in the tree represents a decomposition of the model into the models under its child subtrees. Properties of the nodes can be used to determine the complexity of inference. $Child(T)$ refers to $T$'s child nodes;
$rv(T)$ refers to the randvars under $T$, which are those in its factor if $T$ is a leaf and $rv(T) = \cup_{T' \in Child(T)} rv(T')$ otherwise.
Using these, the important properties of {\em cutset}, {\em context}, and {\em cluster} are defined as follows:
\begin{itemize}
\item $cutset(T) = \cup_{\{T_1, T_2\} \subseteq Child(T)} rv(T_1) \cap rv(T_2) \setminus acutset(T)$, where ${\textit{acutset}}(T)$ is the union of the cutsets associated with the ancestors of $T$
\item $context(T) = rv(T) \cap {\textit{acutset}}(T)$
\item $cluster(T) = rv(T)$, if $T$ is a leaf; otherwise $cluster(T) = cutset(T) \cup context(T)$
\end{itemize}
Figure~\ref{fig:dtree_nb} shows a factor graph model, a dtree for it with its clusters, and the corresponding sum-product factorization. Intuitively, the properties of dtree nodes help us analyze the size
of subproblems solved during inference. In short, the time complexity of inference is $O(n \exp(w))$ where $n$ is the size (number of nodes) of the tree and $w$ is its \emph{width}, i.e., its maximal cluster size minus one.
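As an illustration of these definitions (a sketch of ours; the nested-dictionary dtree representation is an assumption made for brevity), the following Python snippet computes cutset, context, cluster and the resulting width recursively:
\begin{verbatim}
# A dtree node is a dict: a leaf carries 'rv' (the randvars of its factor),
# an internal node carries 'children' (a list of dtree nodes).

def rvs(node):
    """rv(T): the randvars under a dtree node."""
    if 'children' not in node:
        return set(node['rv'])
    return set().union(*(rvs(c) for c in node['children']))

def annotate(node, acutset=frozenset()):
    """Attach context, cutset and cluster to every node, top down."""
    node['context'] = rvs(node) & acutset
    if 'children' not in node:
        node['cluster'] = rvs(node)          # leaf: cluster = rv(T)
        return
    kids = node['children']
    shared = set()                           # randvars shared by two children
    for i in range(len(kids)):
        for j in range(i + 1, len(kids)):
            shared |= rvs(kids[i]) & rvs(kids[j])
    node['cutset'] = shared - acutset
    node['cluster'] = node['cutset'] | node['context']
    for c in kids:
        annotate(c, frozenset(acutset | node['cutset']))

def width(node):
    """Maximal cluster size minus one; inference is O(n exp(width))."""
    return max([len(node['cluster']) - 1] +
               [width(c) for c in node.get('children', [])])
\end{verbatim}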
\begin{figure} [t]
\centering
\includegraphics[height = 3cm]{Figures/dtree_mult_sum.pdf}
\caption{(a) a factor graph model; (b) a dtree for the model, with its node clusters shown as $cutset, [context]$; (c) the corresponding factorization of the sum-product computations.}\vspace{-0.2cm}
\label{fig:dtree_nb}
\end{figure}
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Background}
\input{back}
\section{Lifted inference: Exploiting symmetries}
\input{lifting_symmetry}
\section{First-Order decomposition trees}
\input{fodtrees}
\vspace{-0.4cm}
\section{Liftable FO-dtrees}
\input{liftability}
\vspace{-0.2cm}
\section{Lifted inference based on FO-dtrees}
\input{inference}
\section{Complexity of lifted inference}
\input{complexity}
\section{Conclusion}
\input{conclusion}
\section*{Appendix}
\subsection{Structure}
An FO-dtree provides a compact representation of a propositional dtree, just like a PLM is a compact representation of a propositional model. It does so by explicitly capturing isomorphic decompositions, which in a dtree correspond to a node with isomorphic children. Using a novel node type, called a \emph{decomposition into partial groundings (DPG)} node, an FO-dtree represents the \emph{entire set} of isomorphic child subtrees with a \emph{single representative subtree}. To formally introduce the structure, we first show how a PLM can be decomposed into isomorphic parts by DPG.
\noindent\textbf{DPG of a parfactor model.}
The DPG of a parfactor $g$ is defined w.r.t.\ a $k$-subset ${\mathbf X} = \{X_1, \dots, X_k \}$ of its logvars that all have the same domain $D_{\bf X}$.
For example, the decomposition used in Example~\ref{ex:group-inv}, and shown in Figure~\ref{fig:group-inv}, is the DPG of $\phi(F(X,Y),F(Y,X))|X\neq Y$ w.r.t.\ logvars $\{X,Y\}$.
Formally, $DPG(g,{\bf X})$ partitions the model defined by $g$ into ${|D_{\bf X}| \choose k}$ parts: one part $G_{{\bf x}}$ for each $k$-subset ${\bf x} = \{x_1, \dots, x_k\}$ of the objects in $D_{\bf X}$. Each $G_{\bf x}$ in turn contains all $k!$ (partial) groundings of $g$ that can result from replacing $(X_1, \dots, X_k)$ with a permutation of $(x_1, \dots, x_k)$.
The key intuition behind DPG is that for any ${\bf x},{\bf x}' \subseteq_k \mathcal{D}_{\bf X}$, $G_{{\bf x}}$ is isomorphic to $G_{{\bf x}'}$,
since any bijection from ${\bf x}$ to ${\bf x}'$ yields a bijection from $G_{{\bf x}}$ to $G_{{\bf x}'}$.
$DPG$ can be applied to a whole model $G= \{ g_i\}_{i=1}^m$,
if $G$'s logvars are (re-)named
such that (i) only co-domain logvars share the same name, and (ii) logvars ${\mathbf X}$ appear in all parfactors.
\begin{example}
\label{ex:dpg2}
Consider $G = \{\phi_1(P(X) )$, $\phi_2(A,P(X))\}$. $DPG(G,\{X\}) = \{G_i\}_{i=1}^n$, where each group $G_i = \{\phi_1(P(x_i) )$, $\phi_2(A,P(x_i))\}$ is a grounding of $G$ (w.r.t.\ $X$).
\end{example}
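To illustrate the combinatorics of a DPG (a sketch of ours; the function name and data representation are hypothetical), the following Python snippet enumerates, for a chosen set of logvars over a shared domain, the ${|D_{\bf X}| \choose k}$ groups of $k!$ grounding substitutions described above:
\begin{verbatim}
from itertools import combinations, permutations

def dpg(logvars, domain):
    """Group the partial groundings of a parfactor w.r.t. 'logvars'.

    Returns one group per k-subset of 'domain'; each group holds the k!
    substitutions obtained by permuting the chosen objects over 'logvars'."""
    k = len(logvars)
    return {objs: [dict(zip(logvars, perm)) for perm in permutations(objs)]
            for objs in combinations(domain, k)}

# DPG of phi(F(X,Y), F(Y,X)) | X != Y w.r.t. {X, Y} over the domain {a, b, c}:
# 3 isomorphic groups, each containing 2 partial groundings.
groups = dpg(('X', 'Y'), ('a', 'b', 'c'))
\end{verbatim}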
\begin{figure}[tb]
\centering
\includegraphics[height = 3 cm]{Figures/fodt_examples.pdf}
\caption{(a) dtree (left) and FO-dtree (right) of Example~\ref{ex:dpg2}; (b) FO-dtree of Example~\ref{ex:group-inv}}
\label{fig:simple_fodt}
\end{figure}
FO-dtrees simply add to dtrees special nodes for representing DPGs in parfactor models.
\begin{definition}[DPG node] A DPG node $T_{{\bf X}}$ is a triplet $({\mathbf X}, {\mathbf x}, C)$, where ${\bf X} = \{X_1, \dots X_k \}$ is a set of logvars with the same domain $D_{\mathbf X}$, ${\mathbf x} = \{x_1, \dots, x_k \}$ is a set of {\em representative objects},
and $C$ is a constraint, such that for all $i\neq j$: $x_i \neq x_j \in C$. We denote this node as $\forall {\bf x}:C$ in the tree.
\end{definition}
A representative object is simply a placeholder for a domain object.\footnote{As such, it plays the same role as a logvar.
However, we use both
to distinguish between a whole group of randvars (a PRV $P(X)$), and a representative of this group (a representative randvar $P(x)$).}
The idea behind our FO-dtrees is to use $T_{{\bf X}}$ to graphically indicate a $DPG(G, {\bf X})$.
For this, each $T_{\bf X}$ has a single child distinguished as $T_{{\bf x}}$, under which the model is a representative instance of the isomorphic models $G_{\bf x}$ in the DPG.
\begin{definition}[FO-dtree]
An FO-dtree is a rooted tree in which
\begin{enumerate}
\item non-leaf nodes may be DPG nodes
\item each leaf contains a factor (possibly with representative objects)
\item each leaf with a representative object $x$ is the descendant of exactly one DPG node $T_{\bf X} = ({\mathbf X}, {\mathbf x}, C)$, such that $x \in {\bf x}$
\item each leaf that is a descendant of $T_{\bf X}$ has all the representative objects ${\bf x}$, and
\item for each $T_{{\bf X}}$ with ${\bf X} = \{X_1, \dots, X_k \}$, $T_{\bf x}$ has $k!$ children $\{ T_i \}_{i=1}^{k!}$, which are isomorphic up to a permutation of the representative objects ${\bf x}$.
\end{enumerate}
\end{definition}
\noindent{\bf Semantics.} Each FO-dtree defines a dtree, which can be constructed by recursively \emph{grounding} its DPG nodes.
Grounding a DPG node $T_{{\bf X}}$ yields a (regular) node $T'_{{\bf X}}$ with ${|\mathcal{D}_{\bf X}| \choose k}$
children $\{T_{{\bf x} \rightarrow {\bf x}'}| {\bf x'} \subseteq_k D_{\bf X} \}$, where
$T_{{\bf x} \rightarrow {\bf x}'}$ is the result of replacing ${\bf x}$ with objects ${\bf x'}$ in $T_{{\bf x}}$.
\begin{example} Figure~\ref{fig:simple_fodt} (a) shows the dtree of Example~\ref{ex:dpg2} and its corresponding FO-dtree, which only has one instance $T_x$ of all isomorphic subtrees $T_{x_i}$. Figure~\ref{fig:simple_fodt} (b) shows the FO-dtree for Example~\ref{ex:group-inv}.
\end{example}
\subsection{Properties}
Darwiche~\cite{Darwiche01} showed that important properties of a recursive decomposition are captured in the properties of dtree nodes.
In this section, we define these properties for FO-dtrees.
Adapting the definitions of the dtree properties, such as cutset, context, and cluster, for FO-dtrees requires accounting for the semantics of
an FO-dtree, which uses DPG nodes and representative objects. More specifically, this requires the following
two modifications: (i) use a function $Child_{\theta}(T)$, instead of $Child(T)$, to take into account the semantics of DPG nodes, and (ii) use a function $\cap_{\theta}$ that finds the intersection of two sets of {\em representative} randvars.
First, for a DPG node $T_{{\bf X}} = ({\bf X}, {\bf x}, C)$,
we define: $Child_{\theta}(T_{\bf X}) =
\{ T_{{\bf x} \rightarrow {\bf x}'} | {\bf x}' \subseteq_k {\mathcal D}_{\bf X} \}$.
Second, for two sets $A = \{a_i\}_{i=1}^n$ and $B= \{b_i\}_{i=1}^n$ of (representative) randvars we define:
$A \cap_{\theta} B = \{a_i | \exists \theta \in \Theta: a_i \theta \in B\},$
with $\Theta$ the set of grounding substitutions to their representative objects. Naturally, this provides a basis to define a `$\setminus_{\theta}$' operator as: $A \setminus_{\theta} B = A \setminus (A \cap_{\theta} B)$.
All the properties of an FO-dtree are defined based on their corresponding definitions for dtrees, by replacing $Child$, $\cap$, $\setminus$ with $Child_{\theta}$, $\cap_{\theta}$, $\setminus_{\theta}$.
Interestingly, all the properties can be computed without grounding the model, e.g., for a DPG node $T_X$, we can compute $rv(T_X)$ simply as $rv(T_x) \theta^{-1}_X$, with $\theta^{-1}_X = \{{\mathbf x} \rightarrow \mathbf{X}\}$.\footnote{The only non-trivial property is $cutset$ of DPG nodes.
We can show that $cutset(T_X)$ excludes from $rv(T_X) \setminus acutset(T_X)$ only those PRVs for which ${\bf X}$ is a binding class of logvars~\cite{Jha2010,GuyNips11}.}
Figure~\ref{fig:2lv_fodt_props} shows examples of FO-dtrees with their node clusters.
\begin{figure}[htb]
\centering
\includegraphics[height = 4 cm]{Figures/fodt_props_example.pdf}
\caption{Three FO-dtrees with their clusters (shown as $cutset, [context]$).}
\label{fig:2lv_fodt_props}
\end{figure}
\noindent\textbf{Counted FO-dtrees.}
FO-dtrees capture the first lifting tool, isomorphic decomposition, explicitly in DPG nodes. The second tool, counting,
can be simply captured by rewriting interchangeable randvars in clusters of the tree nodes with counting randvars. This can be done in FO-dtrees similarly to the operation of counting conversion on logvars in LVE. We call such a tree a {\em counted} FO-dtree.
Figure~\ref{fig:fodt_count_ops}(a) shows an FO-dtree (left) and its counted version (right).
\begin{figure}[hbt]
\centering
\includegraphics[height = 3.2 cm]{Figures/fodt_counting_ops.pdf}
\caption{(a) an FO-dtree (left) and its counted version (right);
(b) lifted operations of each node.}
\label{fig:fodt_count_ops}
\end{figure}
\section{Proof of Theorem 1}
\emph{Proof.} Following the discussion in the paper, each subproblem arising during inference requires handling a parfactor involving the randvars and PRVs that appear in the cluster of the node. To prove that each of these problems is liftable (does not require us to ground the PRVs and deal with all their randvars directly), we need to show that the whole group of randvars in each cluster can be partitioned into $m$ groups of interchangeable $k$-tuples of randvars, with $m$ and $k$ independent of the domain size.
We prove this relying on the properties of counting randvars in PLMs, and the correctness of counting conversion in LVE~\cite{Milch2008,Taghipour_StarAI12}.
For simplicity, let us assume that there are no ground randvars in the cluster (the generalization to include ground randvars is trivial).
Then the model can be written as a $1$-logvar parfactor as follows:
$$ \phi(P_{11}(X_{11}), \dots, P_{1,n_1}(X_{1,n_1}), \dots, P_{m1}(X_{m1}), \dots, P_{m,n_m}(X_{m,n_m})) \, | \, C,$$
in which for each $i \in \{1,\dots,m\}$, all $X_{ij}$ are logvars from a distinct domain $D_i$, and $P_{ij}$ is a PRV containing such a logvar---note that for the same $i$ some $X_{ij}$ (and some $P_{ij}$) can have the same name, although the PRVs are distinct.
Since no PRV contains more than one logvar, we can count-convert all the logvars in this model. This merges all distinct PRVs $P_{ij}(X_i)$ into one counting randvar. As such, by applying counting conversion to all the logvars $X_{ij}$ of domain $D_i$, we can rewrite the group of PRVs $P_{i1}(X_{i1}), \dots, P_{i,n_i}(X_{i,n_i})$ in the model as a counting randvar $$\#_{X_i}[P'_{i1}(X_i), \dots, P'_{i,k_i}(X_i)]$$
where $P'_{ij}$ are the distinct predicates among $P_{ij}$, that is:
$$\{P'_{ij}(X_i)\}_{j=1}^{k_i} = \bigcup_{j=1}^{n_i} \{P_{ij}(X_i)\}$$
After counting all the logvars the parfactor becomes of the form
$$\phi' \big( \#_{X_1}[P'_{11}(X_1), \dots, P'_{1,k_1}(X_1)], \, \dots \,, \#_{X_m}[P'_{m1}(X_m), \dots, P'_{m,k_m}(X_m)] \big)$$
This shows that the whole group of randvars in the model can be partitioned into $m$ groups of interchangeable $k$-tuples of randvars: one group of tuples for each counting randvar. Note that here both $k$ and $m$ are independent of the domain size of the logvars: (i) $m$ is the number of distinct domains among the logvars, and (ii) $k$ can be no larger than the number of PRVs with a co-domain logvar in the model, that is, $k \leq \max \{k_i\}_i \leq \max \{n_i\}_i$. It is straightforward to show that this also holds in the general case of a parfactor involving both $1$-logvar and ground randvars.
\section{Proof of Theorem 2}
\emph{Proof.} We prove the theorem by bounding the complexity of each lifted operation performed at each of the $n_T$ nodes of the tree. First consider a lifted elimination performed at some node $T'$. The complexity of this operation is proportional to $|\mathit{range}(cluster(T'))|$, as it needs to deal with a parfactor involving the (counting) randvars in the cluster. Each cluster is a group $\mathcal{A} = \{A_1, A_2, \dots A_{w_{g}'}, \gamma_1, \gamma_2, \dots, \gamma_{w_{\#}'} \}$ of randvars $A_i$, and counting randvars $\gamma_i = \#_{X_i}[P_{i1}(X_i), \dots, P_{ik}(X_{i})]$, where $w_{\#}' \leq w_{\#}$ and $w_{g}' \leq w_{g}$.
Thus $$|range(\mathcal{A})| = \big(\prod_i |range(A_i)|\big) \cdot \big( \prod_j |range(\gamma_j)|\big).$$
For the first product, we have $$\prod_{i=1}^{w_{g}'} |range(A_i)| = O(\exp(w_{g})).$$
Moreover, since for each counting randvar $\gamma_i$, $|range(\gamma_i)| = O(n_i^{r_i})$, where $n_i$ is the domain size of $X_i$, and $r_i$ is the range size of the tuples of PRVs inside $\gamma_i$, for the second product we have
$$\prod_{j=1}^{w_{\#}'} |range(\gamma_j)| = O((n_{\#}^{r_{\#}})^{w_{\#}}) = O(n_{\#} ^ {(w_{\#} \cdot r_{\#})})$$
These two show that
$$|\mathit{range}(\mathcal{A})| = O(\exp(w_{g}) \cdot n_{\#} ^ {(w_{\#} \cdot r_{\#})})$$
This is the complexity of each lifted elimination step.
Building on this, we compute the complexity of the other two lifted operations, aggregation and counting conversion. For each of the $|\mathit{range}(\mathcal{A})|$ entries in the parfactor, these two operations perform an exponentiation, which has complexity $O(\log n)$, where $n$ is the domain size of the logvar. As such, this has complexity $O(\log n \cdot \exp(w_g) \cdot n_{\#} ^ {(w_{\#} \cdot r_{\#})})$. Since there is at most one of each operation performed at each of the $n_T$ nodes, the complexity of the entire inference is
$$O(n_T \cdot \log n \cdot \exp(w_g) \cdot n_{\#} ^ {(w_{\#} \cdot r_{\#})}).$$
\section{Finding corresponding FO-dtrees}
In this section, we provide a simple algorithm that given a model $G$ constructs a corresponding FO-dtree. Our method works in a top-down manner according to a recursive decomposition of $G$ using $DPGs$. We also briefly discuss possible extensions of this simple algorithm, which can transform it into a greedy algorithm for finding `better' trees.
We construct the tree top-down according to a recursive decomposition of $G$, which also employs $DPGs$ (Algorithm~\ref{alg:fodt_simple}). At the beginning we have a single root node $T$ with model $G$. According to a decomposition of $G$ into $\{G_i\}_i$ we add the children $T_i$ of $T$ to the tree, and then recursively build each tree $T_i$ for $G_i$. Under DPG nodes we represent only one instance of the children. DPGs allow us to decompose the model into partial groundings, and recursive application of this tool results in a ground model. This allows us to reduce the problem to finding a dtree for the ground model.
\begin{algorithm}[htb]
\begin{center}
\begin{tabular}{l}
\hline
{\bf FO-dtree}($G$)\\
{\bf if} $G$ is ground\\
\quad {\bf return} \textsc{Dtree}($G$)\\
{\bf if} $\exists {\bf X}$ that allows DPG\\
\quad $T_{\bf X} \leftarrow \textsc{DPG-Node}({\mathbf X}, {\mathbf x}, G)$\\
\quad $G_{\mathbf x} = \{G \theta | \theta \in \Theta_{\mathbf{x}}\}$\\
\quad $T.\textsc{addChild}$({\bf FO-Dtree}($G_{\mathbf x}))$\\
{\bf else}:\\
\quad $T \leftarrow \textsc{newnode}()$\\
\quad choose logvars {\bf X} that co-occur in $G$:\\
\quad (there is always at least one choice ${\bf X} = \{X_i\}$)\\
\quad $G_{{\bf X}} \leftarrow \{g | {\bf X} \in logvar(g)\}$\\
\quad $G_{\neg {\bf X}} \leftarrow G \setminus G_{{\bf X}}$\\
\quad $T.\textsc{addChildren}(${\bf FO-Dtree}$(G_{\mathbf X}), ${\bf FO-Dtree}$(G_{\neg \mathbf X}))$\\
{\bf return} $T$\\
\hline
\end{tabular}
\end{center}
\caption{A simple algorithm for finding a corresponding FO-dtree.}
\label{alg:fodt_simple}
\end{algorithm}
\textbf{Extension to a greedy method for finding FO-dtrees.} The above is a simple algorithm that shows the existence of an FO-dtree for each model, by finding one possible FO-dtree. While it does not consider the quality of the found FO-dtree, it can easily be modified into an algorithm that greedily searches for better trees by performing better DPGs. For this, we need to make two changes in Algorithm~\ref{alg:fodt_simple}: (1) rename the logvars such that the model allows for a DPG, instead of relying on the naming of logvars in the model, and (2) select among the possible DPGs based on some criteria.
The first change requires us to {\em align} the logvars in different parfactors before performing a DPG, that is to rename the logvars properly such that a subset of the logvars allow for DPG. This is a simple generalization of finding an alignment between two parfactors, which is employed in lifted multiplication. This change allows us to consider all possible DPGs of the model in our search, without being restricted by the naming of logvars in the model.
The second change allows us to consider the quality of different DPGs when selecting among them. Here we give each possible DPG a score, which is a greedy measure of the quality of its decomposition. For instance, we can simply consider the cutset size of the decomposition, or the size of its resulting clusters. A straightforward measure is comparing the lifted width of the resulting nodes, which also takes into account the opportunities exploited by counting.
These two changes should be naturally incorporated into one module, which considers possible logvar re-namings (alignments) that enable some DPG, measures the quality of the corresponding DPGs, and selects among them. Search for alignments can be guided by considering the properties of logvars in the model~\cite{Jha2010,GuyNips11}, and our result about computing properties of FO-dtree nodes based on the properties of logvars.
|
\section{Introduction and Related Work}
\label{sec:intro}
Object recognition is one of the most important and challenging problems in
computer vision. The ability to classify objects plays a crucial role in scene
understanding, and is a key requirement for autonomous robots operating in both
indoor and outdoor environments. Recently, computer vision has witnessed
significant progress, leading to impressive performance in various
detection and recognition
tasks~\cite{he2015delving,taigman2014deepface,ioffe2015batchnorm}.
On the one hand, this is partly due to the recent advancements in
machine learning techniques such as deep learning, fueled by a great interest
from the research community as well as a boost in hardware performance.
On the other hand, publicly-available datasets have been a
great resource for bootstrapping, testing, and comparing these techniques.
Examples of popular image datasets include ImageNet, CIFAR, COCO and PASCAL,
covering a wide range of categories including people, animals, everyday
objects, and much
more~\cite{imagenet2009deng,krizhevsky2009cifar,everingham2010pascal,lin2014microsoftcoco}.
Other datasets are tailored towards specific domains such as house numbers extracted from Google Street
View~\cite{netzer2011reading}, face recognition~\cite{huang2007labeled}, scene
understanding and place recognition~\cite{zhou2014learning,
silberman2011indoor}, as well as object recognition, manipulation and pose estimation for
robots~\cite{kasper2012kit, calli2015ycbbenchmarking, rennie2016dataset, hinterstoisser2012model}.
One of the challenging domains where object recognition plays a key role is
service robotics. A robot operating in unstructured, domestic environments
has to recognize everyday objects in order to successfully perform tasks
like tidying up, fetching objects, or assisting elderly
people. For example, a robot should be able to recognize grocery objects in
order to fetch a can of soda or to predict the preferred shelf
for storing a box of cereals~\cite{srinivasa2010herb, abdo2016organizing}. This is
not only challenging due to the difficult lighting conditions and occlusions in
real-world environments, but also due to the large number of everyday objects
and products that a robot can encounter.
\input{figures/example_images}
\input{figures/statistics_vertical}
Typically, service robotic systems address the problem of object recognition
for different tasks by relying on state-of-the-art perception methods.
Those methods leverage existing object models by extracting hand-designed visual and 3D
descriptors in the environment~\cite{hsiao2010making, alexandre20123d, pauwels_simtrack_2015} or by learning new feature representations from raw sensor data~\cite{bo_iser12, eitel15iros}. Others rely on an ensemble of perception techniques and sources of information
including text, inverse image search, cloud data, or images downloaded from online stores
to categorize objects and reason about their relevance for different
tasks~\cite{irosws11germandeli, Kaiser2014, beetz2015robosherlock, kehoe2013ICRA}.
However, leveraging the full potential of machine learning approaches to address
problems such as recognizing groceries and food items remains, to a large
extent, unrealized. One of the main reasons for that is the lack of training
data for this domain. In this paper, we address this issue and present the
Freiburg Groceries Dataset, a rich collection of 5000 images of grocery products (available in
German stores) covering 25 common categories. Our motivation for this is twofold: \emph{i)} to
help bootstrap perception systems tailored for domestic robots and assistive
techn
|
\to D(\mathcal{H})$, where $D(\mathcal H)$ denotes the space of density matrices of states in $\mathcal H$.
Alternatively, the channel can be expressed in the operator-sum representation~\cite{Nielsen2009, Kraus1983}
\begin{equation}\label{eq:kraus_op_rep}
\mathcal E(\rho) = \sum_i K_i \rho K_i^{\dagger}, \; \text{where}\;\sum_i K_i^{\dagger}K_i = \mathbb{I},
\end{equation}
where $\{K_i\}_i$ are Kraus operators~\cite{Kraus1983}.
For example, the depolarizing channel of strength $\gamma$ acting on a one-qubit state $\rho$ has the following Kraus operators,
\begin{align}
K_0 = \sqrt{1 - \frac{3\gamma}{4}} \mathbb I_2,\, K_1 = \sqrt{\frac{\gamma}{4}}X,\, K_2 = \sqrt{\frac{\gamma}{4}}Y,\, K_3 = \sqrt{\frac{\gamma}{4}}Z,
\end{align}
where $\mathbb I_2$ is a $2 \times 2$ identity matrix and $X, Y, Z$ are the Pauli matrices.
Lastly, we assume that the links are independent of one another.
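For concreteness, the following minimal NumPy sketch (our illustration; the function name is ours) applies these Kraus operators and checks trace preservation as well as the fully depolarizing limit:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def depolarize(rho, gamma):
    """Apply the one-qubit depolarizing channel via its Kraus operators."""
    kraus = [np.sqrt(1 - 3 * gamma / 4) * I2,
             np.sqrt(gamma / 4) * X,
             np.sqrt(gamma / 4) * Y,
             np.sqrt(gamma / 4) * Z]
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
assert np.isclose(np.trace(depolarize(rho, 0.3)), 1.0)   # trace preserving
assert np.allclose(depolarize(rho, 1.0), I2 / 2)         # fully depolarizing
\end{verbatim}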
\paragraph{Measurement Nodes}
Measurement nodes receive the incoming qubits and output the corresponding measurement outcomes.
For a node $A_j \subseteq [N_q]$, $j \in [N_m]$, we consider a projection-valued measure (PVM) $\{\Pi^{A_j}_{a_j}\}$ that forms a set of orthogonal projectors satisfying $\sum_{a_j} \Pi^{A_j}_{a_j} = \mathbb{I}^{A_j}$.
The node measures its local qubits $\rho^{A_j}\in D(\mathcal{H}^{A_j})$ that were received from its linked sources.
We assume measurement nodes are independent of one another, and the network applies the projector $\Pi_{\vec{a}} = \bigotimes_{j=1}^{N_m} \Pi^{A_j}_{a_j}$.
Upon measurement, the classical output $\vec{a}$ is obtained with probability,
\begin{equation}\label{eq:quantum-conditional-probabilities-born-rule}
\mathbb P(\vec{a}) = \tr \left( \Pi_{\vec{a}} \,\mathcal E (\rho) \right).
\end{equation}
It is worth noting that any permutations needed to map the joint Hilbert space of the sources to that of the measurement nodes are included implicitly.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1]
\Vertex[x=-5,y=2,shape=rectangle,label=$A_1$]{A1}
\Vertex[x=-3.5,y=2.00,shape=rectangle,label=$A_2$]{A2} \Vertex[x=-2,y=2.00,shape=rectangle,label=$A_3$]{A3}
\Vertex[x=-0.5,y=2.00,shape=rectangle,label=$A_4$]{A4}
\Vertex[x=1,y=2.00,shape=rectangle,label=$A_5$]{A5}
\Vertex[x=-3.5,y=-2,RGB,color={127,201,127},label=$\Lambda_1$]{S1}
\Vertex[x=-2,y=-2,RGB,color={127,201,127},label=$\Lambda_2$]{S2}
\Vertex[x=-0.5,y=-2,RGB,color={127,201,127},label=$\Lambda_3$]{S3}
\Edge[label=1](S1)(A1)
\Edge[label=2](S1)(A2)
\Edge[label=3](S2)(A2)
\Edge[label=4](S2)(A3)
\Edge[label=5](S2)(A4)
\Edge[label=6](S3)(A4)
\Edge[label=7](S3)(A5)
\node[rectangle] (r) at (5.5,2) {$\leftarrow \textnormal{Nodes} \,\, \bm A = \begin{rcases}
\begin{dcases}
A_1=\{1\} , A_2=\{2,3\}, A_3=\{4\}, \\
A_4=\{5,6\}, A_5=\{7\} \\
\end{dcases}
\end{rcases}$};
\node[rectangle] (r) at (5.5,-0.25) {$\leftarrow \textnormal{Links} \,\, \bm L = \begin{rcases}
\begin{dcases}
L_1=(\Lambda_1, A_1), L_2=(\Lambda_1, A_2), L_3=(\Lambda_2, A_2), \\
L_4=(\Lambda_2, A_3), L_5=(\Lambda_2, A_4), \\
L_6=(\Lambda_3, A_4), L_7=(\Lambda_3, A_5) \\
\end{dcases}
\end{rcases}$};
\node[rectangle] (r) at (4.5,-2) {$\leftarrow \textnormal{Sources} \,\, \bm \Lambda = \begin{rcases}
\begin{dcases}
\Lambda_1=\{1,2\} , \Lambda_2=\{3,4,5\}, \Lambda_3=\{6,7\}
\end{dcases}
\end{rcases}$};
\end{tikzpicture}
\caption{A quantum network is composed of sources (green circles), links (edges), and measurement nodes (blue squares). Each link sends one qubit from a source to a node. Viewing the nodes and sources jointly as the vertex set, a quantum network can be interpreted as a bipartite graph.}
\label{fig:network-diagram}
\end{figure}
A quantum network can be concisely interpreted as a directed bipartite graph $G = (\{\bm \Lambda,\bm A\},\bm L)$.
The vertices are partitioned into the sources $\bm \Lambda = \{ \Lambda_i\}_{i=1}^{N_s}$ and measurement nodes $\bm A = \{ A_j\}_{j=1}^{N_m}$.
The edges connect sources to nodes, $\bm L = \{(\Lambda_i, A_j)\}$, and represent the movement of qubits.
See \Cref{fig:network-diagram} for an example quantum network and an enumeration of its respective parts.
\subsection{Entropic quantities on quantum networks}
The paper focuses on two entropic quantities observed on networks: the von Neumann entropy and the measured mutual information.
Both quantities will convey important information about the topology of the network and will be discussed in more detail in the next section.
\paragraph{Von Neumann entropy} The von Neumann entropy for a quantum state $\rho$ is defined as
\begin{align}
S(\rho) = -\tr \left( \rho \log \rho \right)
\end{align}
where the $\log(\cdot)$ above refers to the matrix logarithm and we use the convention that $0 \log 0 = 0$.
So, any pure state $\rho = \ket \psi \bra \psi$ has a von Neumann entropy of zero. Recall that the Shannon entropy of a probability distribution $\mu$ on support $\mathcal X$ is defined as
\begin{align}
H(\mu) = - \sum_{x \in \mathcal X} \mu(x) \log \mu(x).
\end{align}
When measured in the eigenbasis of $\rho$, the von Neumann entropy coincides with the Shannon entropy, with all randomness coming from the superposition of pure states in $\rho$~\cite{Nielsen2009}.
When measured in any other basis, the Shannon entropy calculated from the measurement results is at least as large as the von Neumann entropy, because measurement can only add randomness.
Thus, the von Neumann entropy can be calculated by minimizing the Shannon entropy over the measurement basis, \textit{i.e.}
\begin{align}
S(\rho) = \min_{\{\Pi_{\vec a}\}} H(\mathbb P(\vec a))
\end{align}
where $\{\Pi_{\vec a}\}$ is a complete set of projections and $\mathbb P(\vec a)$ is the probability distribution upon measuring the quantum state in basis $\{\Pi_{\vec a}\}$.
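As a small numerical illustration (ours; the helper names and the eigenvalue cutoff are assumptions), the sketch below computes $S(\rho)$ from the eigenvalues of $\rho$ and the Shannon entropy of the outcome distribution in a given projective basis; for a maximally mixed qubit both quantities equal $1$, whatever the basis:
\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                 # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def measured_shannon_entropy(rho, basis):
    """Shannon entropy of the outcomes of a rank-1 projective measurement."""
    probs = np.array([np.real(b.conj() @ rho @ b) for b in basis])
    probs = probs[probs > 1e-12]
    return float(-np.sum(probs * np.log2(probs)))

rho = np.eye(2) / 2                              # maximally mixed qubit
comp = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
had = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
print(von_neumann_entropy(rho),                  # 1.0
      measured_shannon_entropy(rho, comp),       # 1.0
      measured_shannon_entropy(rho, had))        # 1.0
\end{verbatim}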
\paragraph{Measured mutual information} Intuitively, the mutual information between two random variables quantifies the amount of correlation between them. However, the conventional mutual information defined for quantum systems involves joint measurement between the two parties. Let $A_i$ and $A_j$ be two measurement devices. We introduce the \textit{measured mutual information} as the maximal shared randomness between two nodes using measurements local to each node, calculated as
\begin{align}
\mathcal I_m (A_i; A_j) &= \max_{\{\Pi^{A_i}_{\vec a_i} \tensor \Pi^{A_j}_{\vec a_j}\}} H(\mathbb P (\vec a_i)) + H(\mathbb P (\vec a_j)) - H(\mathbb P (\vec a_i, \vec a_j)). \label{eq:meas-mut-info}
\end{align}
If the two measurement nodes are not correlated, then we can decompose the joint distribution into products. Furthermore, since the Shannon entropy of independent random variables is additive, the measured mutual information will go to zero if no correlation---quantum or classical---is shared.
The measured mutual information is, in a sense, a dual quantity to the von Neumann entropy. Consider the measured mutual information applied onto the same node, \textit{i.e.}
\begin{align}
\mathcal I_m (A_i, A_i) = \max_{\{\Pi^{A_i}_{\vec a_i}\}} H(\mathbb P(\vec a_i)).
\end{align}
Contrary to the von Neumann entropy, which seeks the basis that minimizes the measured Shannon entropy, measured mutual information strives to maximize the entropy. In conjunction, these two quantities can be used to characterize the topology of a quantum network.
\section{Protocol for topology classification and inference}\label{sec:protocol}
We aim to characterize and subsequently infer the topology from local measurements on each node. In particular, we are interested in protocols that can distinguish two network topologies. We define two networks to be the same if they are related by a graph isomorphism, formally defined below.
\begin{definition}[Network Topology] \label{def:isomorphism}
Two quantum networks, $\mathcal N \ind 1$ and $\mathcal N \ind 2$, have the same topology if there exist bijections $\phi: [N_s] \to [N_s], \varphi: [N_m] \to [N_m]$ such that for any edge $L_k \ind 1 = (\Lambda_i \ind 1, A_j \ind 1)$, there is a corresponding $L_k \ind 2 = (\Lambda_{i} \ind 2, A_{j} \ind 2) = (\Lambda_{\phi(i)} \ind 1, A_{\varphi(j)} \ind 1)$.
\end{definition}
Note that in the above definition, we gave two bijections $\phi$ and $\varphi$ separately for nodes and sources.
Conventionally, one bijective map is sufficient to describe the relabeling of vertices.
In the context of quantum networks, the two maps are necessary to ensure that sources and nodes remain distinct.
Yang \textit{et al.} gave a protocol for distinguishing the topology of quantum networks using GHZ states~\cite{yang2022strong}. More specifically, they proved that the von Neumann entropies measured at each node are the same between two networks, up to a permutation of node indices, if and only if the topologies of the two networks are the same. However, this theorem only holds if no two nodes share more than one source. We find this class of networks restrictive. In this section, we will introduce an alternative protocol that distinguishes the topology of two networks in which any pair of nodes can share any number of sources.
\subsection{Topology Classification using von Neumann Entropy}
Consider an $n$-local quantum network $\mathcal N$ where no two nodes share more than one source. Yang \textit{et al.}~constructed the \textit{characteristic vector}
\begin{align}
V_{\mathcal N} = \begin{pmatrix} S(A_1) & S(A_2) & \dots & S(A_{N_m}) \end{pmatrix}
\end{align}
to store the von Neumann entropy measured at each node. Then, a quantum network can be uniquely characterized by its characteristic vector.
\begin{lemma}[Theorem 6 of~\cite{yang2022strong}]\label{thm:Yang-thm}
Let $\mathcal N \ind 1$ and $\mathcal N \ind 2$ be two quantum networks preparing GHZ states such that any two parties $A_i$ and $A_j$ share no more than one source, \textit{i.e.}
\begin{align}
\biggl \vert \bigl \{\Lambda_k \in \bm \Lambda: (\Lambda_k, A_i), (\Lambda_k, A_j) \in \bm L \bigr \} \biggr \vert \leq 1.
\end{align}
Then, $\mathcal N \ind 1$ and $\mathcal N \ind 2$ have the same topology if and only if their characteristic vectors are equal to each other.
\end{lemma}
It is helpful to establish a graphical interpretation of the von Neumann entropy. First, we must establish assumptions on the class of networks considered in this paper, enumerated below.
\begin{assumption}\label{ass:all}
We assume:
\begin{enumerate}[label=\textnormal{\textbf{\Alph*}.},ref=\Alph*]
\item Each source prepares maximally entangled states (GHZ states) up to local unitary transformations. Without loss of generality, assume states are prepared in the form specified in \Cref{eq:ghz}. \label{subass:A1}
\item Each source sends at most one qubit to any given measurement device.\label{subass:A2}
\item Only measurements local to each measurement device can be performed.\label{subass:A3}
\end{enumerate}
\end{assumption}
Following \subassref{all}{A2}, we can interpret the von Neumann entropy as a graph-theoretic quantity.
\begin{lemma} \label{thm:VNE-interpretation}
Let a quantum network satisfy \Cref{ass:all}. Then, for any node $A_i$, $S(A_i) = N_s^{A_i}$, where $N_s^{A_i}$ denotes the number of sources $A_i$ is connected to.
\end{lemma}
\begin{proof}
Since only GHZ states are prepared and each source sends at most one qubit to a node, each qubit received at node $A_i$ is maximally entangled with a qubit that is not present at $A_i$. Thus, the state at node $A_i$ is maximally mixed, and its von Neumann entropy (which, in this case, equals the Shannon entropy) is the number of qubits received, $N_s^{A_i}$.
\end{proof}
Thus, \Cref{thm:Yang-thm} states that knowing the number of sources connected to each node is sufficient for knowing the topology of a quantum network. However, the assumption that two nodes share no more than one source is crucial and restrictive, as emphasized in the following example.
\subsection{Example: triangle networks}\label{sec:tri-ex-1}
\begin{figure}
\centering
\begin{subfigure}{0.35\linewidth}
\centering
\begin{tikzpicture}[scale=0.7]
\Vertex[y=2.46,shape=rectangle]{A1} \Vertex[x=-2,y=-1.00,shape=rectangle]{A2} \Vertex[x=2,y=-1.00,shape=rectangle]{A3}
\Vertex[x=0,y=0,RGB,color={127,201,127}]{S1} \Vertex[x=-0.5,y=0.86,RGB,color={127,201,127}]{S2} \Vertex[x=0.5,y=0.86,RGB,color={127,201,127}]{S3}
\Edge(S1)(A3)
\Edge(S1)(A2)
\Edge(S2)(A1)
\Edge(S2)(A2)
\Edge(S3)(A1)
\Edge(S3)(A3)
\end{tikzpicture}
\caption{Network 1}
\label{fig:tri-net-1}
\end{subfigure}
\begin{subfigure}{0.35\linewidth}
\centering
\begin{tikzpicture}[scale=0.7]
\Vertex[y=2.46,shape=rectangle]{A1} \Vertex[x=-2,y=-1.00,shape=rectangle]{A2} \Vertex[x=2,y=-1.00,shape=rectangle]{A3}
\Vertex[x=-0.5,y=0.66,RGB,color={127,201,127}]{S2} \Vertex[x=0.5,y=0.66,RGB,color={127,201,127}]{S3}
\Edge(S2)(A3)
\Edge(S3)(A2)
\Edge(S2)(A1)
\Edge(S2)(A2)
\Edge(S3)(A1)
\Edge(S3)(A3)
\end{tikzpicture}
\caption{Network 2}
\label{fig:tri-net-2}
\end{subfigure}
\caption{Example triangle networks that are indistinguishable solely from von Neumann entropy.}
\label{fig:network-example}
\end{figure}
Consider the two networks as shown in \Cref{fig:network-example}. The first network satisfies the assumption Yang \textit{et al.}~\cite{yang2022strong} made. Each node receives two qubits, each from two different sources. Since the subsystem of any maximally entangled state is a maximally mixed one, the von Neumann entropy at each node is $2$. On the other hand, the second network consists of only two preparation nodes, each preparing a $3$-qubit GHZ state. Each node in network 2 also receives two qubits, one from each source. Again by property of maximally entangled states, the von Neumann entropy at each node is $2$. One could take a step further and study the von Neumann entropy of the joint state of two measurement nodes only to find out that the two networks yield the same statistics. Thus, observing the von Neumann entropy alone cannot distinguish networks.
{As a solution, Yang \textit{et al.}~\cite{yang2022strong} show that the Shannon mutual information can distinguish between the two networks in \Cref{fig:network-example}. Although the Shannon mutual information is evaluated from classical data, this entropic quantity must be evaluated for all groupings of parties where the number of groupings scales exponentially with the number of parties. Thus, this approach is not practical for large networks.}
We propose the addition of the \textit{measured mutual information}. We claim that the measurement basis that maximizes the measured mutual information is the computational basis, which will be proven formally later. Take any two nodes in network 1. The Shannon entropy at each node will be $2$, since each maximally mixed qubit is information-theoretically equivalent to a fair classical coin flip. The joint state of the nodes can be written as
\begin{align}
\frac{1}{4} \left( \mathbb I_2 \tensor \ket\Phi \bra\Phi \tensor \mathbb I_2 \right),
\end{align}
which, measured in the computational basis, behaves as three independent fair coin flips. This yields a joint Shannon entropy of $3$ with measurements local to each node. Thus, the measured mutual information will be $1$ for all pairs of nodes in network $1$.
The same does not hold for network 2! The joint state of any pair of nodes in network 2
\begin{align}
\frac{1}{4} \left( \ket{00}\bra{00} + \ket{11}\bra{11} \right)^{\tensor 2}
\end{align}
has a Shannon entropy of $2$ upon measuring separately in the respective nodes. Thus, the measured mutual information in network 2 is $2$.
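The numbers above can be verified directly with a short NumPy sketch (our illustration; it fixes the computational basis, which, per the discussion above, attains the maximum in \Cref{eq:meas-mut-info} for these states, and the qubit orderings below are chosen by us):
\begin{verbatim}
import numpy as np

def shannon(p):
    """Shannon entropy (base 2) of a probability vector."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def measured_mi(rho, qubits_i, qubits_j):
    """Computational-basis I_m for a state over qubits_i and qubits_j."""
    n = len(qubits_i) + len(qubits_j)
    p = np.real(np.diag(rho)).reshape((2,) * n)
    p_i = p.sum(axis=tuple(qubits_j))      # marginal of node i's outcomes
    p_j = p.sum(axis=tuple(qubits_i))      # marginal of node j's outcomes
    return shannon(p_i.ravel()) + shannon(p_j.ravel()) - shannon(p.ravel())

I2 = np.eye(2)
phi = np.zeros(4); phi[[0, 3]] = 1 / np.sqrt(2)     # |Phi>
bell = np.outer(phi, phi)                           # |Phi><Phi|
corr = np.diag([0.5, 0, 0, 0.5])                    # two-qubit GHZ marginal

# Network 1: qubit order (free, Bell half, Bell half, free);
# node i holds qubits {0,1}, node j holds qubits {2,3}.
rho1 = np.kron(np.kron(I2 / 2, bell), I2 / 2)
# Network 2: qubit order (source-1 pair, source-2 pair);
# node i holds qubits {0,2}, node j holds qubits {1,3}.
rho2 = np.kron(corr, corr)

print(measured_mi(rho1, (0, 1), (2, 3)))   # 1.0
print(measured_mi(rho2, (0, 2), (1, 3)))   # 2.0
\end{verbatim}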
We can interpret the calculation of this example from a graph-theoretic perspective. The von Neumann entropy of a node gives the number of sources the node is connected to, whereas the measured mutual information gives the number of sources the two nodes share. We prove this fact in \Cref{thm:MMI-interpretation}, using established results reviewed in \Cref{thm:ent-lb} and \Cref{thm:ent-lb-2}.
\begin{lemma}\label{thm:ent-lb}
Let $\sigma_n$ be an $n$-qubit shared random bit, and let the classical probability distribution upon measuring $\sigma_n$ be $\mathbb P(\vec a)$. Then, the following inequality is true for any measurement basis,
\begin{align}
H(\mathbb P(\vec a)) \geq 1
\end{align}
with equality attained when measured in the computational basis.
\end{lemma}
\begin{proof}
Recall that the entropy of the measurement of a state is minimized when measured in its eigenbasis~\cite{Nielsen2009}. The eigenbasis of $\sigma_n$ is the computational basis, which behaves like a classical coin flip upon measuring. Thus, the Shannon entropy is lower bounded by $1$.
\end{proof}
\begin{lemma}\label{thm:ent-lb-2}
Consider the distribution acquired through local measurements on a Bell state, \textit{i.e.}
\begin{align}
\mathbb P(\vec a) = \bra \Phi \Pi_{a_1} \tensor \Pi_{a_2} \ket \Phi
\end{align}
for projective operators $\{\Pi_{a_1}\}$ and $\{\Pi_{a_2}\}$. Then, for any choice of $\Pi_{a_1}$ and $\Pi_{a_2}$,
\begin{align}
H(\mathbb P(\vec a)) \geq 1.
\end{align}
where the equality is attained when measured in the computational basis.
\end{lemma}
\begin{proof}
Recall that the joint Shannon entropy of measurement outcomes is at least the entropy of either marginal distribution~\cite{Nielsen2009}. Each single-qubit marginal of $\ket \Phi$ is $\sigma_1$, which behaves as a fair classical coin flip under any projective measurement; hence its entropy is $1$, and the entropy of measuring $\ket \Phi$ is lower bounded by $1$.
\end{proof}
\begin{lemma} \label{thm:MMI-interpretation}
Let a quantum network satisfy \Cref{ass:all}. Then, for any two measurement nodes $A_i$ and $A_j$, $\mathcal I_m (A_i;A_j) = N_s^{A_i,A_j}$, where $N_s^{A,B}$ denotes the number of sources shared by $A$ and $B$.
\end{lemma}
\begin{proof}
Recall the definition of measured mutual information in \Cref{eq:meas-mut-info}. We must first determine the basis to measure in at each node, \textit{i.e.} $\Pi^{A_i}_{\vec a_i}$ and $\Pi^{A_j}_{\vec a_j}$.
Moreover, for any two nodes $A_i$ and $A_j$, the qubits received can be either maximally entangled or independent (maximally mixed). By \subassref{all}{A1}, the measurement basis does not influence the entropies at each node. Thus, we can achieve the lower bounds in \Crefrange{thm:ent-lb}{thm:ent-lb-2} by measuring in the computational basis.
Knowing the basis of choice, we proceed to understand the graph-theoretic properties of the mutual information. Let $N_s^{A_i}$ be the number of sources connected to device $A_i$. Then, we know that $H(\mathbb P(\vec a_i)) = N_s^{A_i}$ and similarly with $H(\mathbb P(\vec a_j))$ where $\mathbb P(\vec a_i)$ and $\mathbb P(\vec a_j)$ are probability distributions upon measuring at node $A_i$ and $A_j$ respectively.
For the joint entropy $H(\mathbb P(\vec a_i, \vec a_j))$, partition $A_i \cup A_j$ into sets:
\begin{align}
S_1 &= \{(q_k, q_\ell) : q_k \in A_i, q_\ell \in A_j, \{q_k, q_\ell\} \subseteq \Lambda_i \text{ for some } i\} \\
S_2 &= \{q_k : \forall q_\ell, \{q_k, q_\ell\} \not \in S_1\}
\end{align}
Each pair of qubits in $S_1$ is entangled and acts jointly as a fair coin flip. On the other hand, each qubit in $S_2$ acts independently, also like a fair coin flip. Moreover, the elements of $S_1$ and $S_2$ are mutually independent, and Shannon entropy is additive over independent variables. Thus, $H(\mathbb P(\vec a_i, \vec a_j)) = |S_1| + |S_2|$. Since the total number of qubits is $2|S_1| + |S_2| = H(\mathbb P(\vec a_i)) + H(\mathbb P(\vec a_j))$, the joint entropy can be expressed as
\begin{align}
H(\mathbb P(\vec a_i, \vec a_j)) = H(\mathbb P(\vec a_i)) + H(\mathbb P(\vec a_j)) - |S_1|.
\end{align}
By definition of $S_1$, we find the measured mutual information to be
\begin{align}
\mathcal I_m (A_i;A_j) = |S_1| = N_s^{A_i,A_j}.
\end{align}
\end{proof}
The graph-theoretic interpretations shown in \Cref{thm:VNE-interpretation} and \Cref{thm:MMI-interpretation} will be useful for proving the correctness of our protocol. Furthermore, in the spirit of the characteristic vector of~\cite{yang2022strong}, we define the \textit{characteristic matrix} of a quantum network to be
\begin{equation}
M_{\mathcal N} = \begin{pmatrix}
S(A_1) & \mathcal I_m (A_1; A_2) & & \dots & & \mathcal I_m (A_1; A_{N_m}) \\
\mathcal I_m (A_2; A_1) & S(A_2) & \mathcal I_m (A_2; A_3) &\dots& & \vdots \\
\vdots & & &\dots& & \mathcal I_m (A_{N_m-1}; A_{N_m}) \\
\mathcal I_m (A_{N_m}; A_1) & & & \dots& \mathcal I_m (A_{N_m}; A_{N_m -1}) & S(A_{N_m})
\end{pmatrix}
\end{equation}
where the diagonal is the characteristic vector $V_{\mathcal N}$ containing the von Neumann entropy and the off-diagonals are the measured mutual information.
Note that the matrix is symmetric, $M_{\mathcal N} = M_{\mathcal N}^\intercal$. By introducing the off-diagonal terms, we can quantify the number of sources two nodes share. This addition allows us to extend the previous network classification protocol to include cases where more than one entanglement is shared between nodes.
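As a concrete illustration, the two triangle networks of \Cref{fig:network-example} share the same characteristic vector $\begin{pmatrix} 2 & 2 & 2 \end{pmatrix}$ computed in \Cref{sec:tri-ex-1}, yet their characteristic matrices differ,
\begin{align}
M_{\mathcal N \ind 1} = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix},
\qquad
M_{\mathcal N \ind 2} = \begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{pmatrix},
\end{align}
since every pair of nodes shares one source in network 1 and two sources in network 2.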
{We note that the characteristic matrix bears resemblance to the covariance matrix used in the semidefinite tests for network compatibility~\cite{Kela2020_semidefinite_tests,Aberg2020_semidefinite_tests, Kraft2021_characterizing_quantum_networks} where the off-diagonals of the covariance matrix are nonzero if and only if a source correlates two measurements.
A key distinction is that the covariance matrix is evaluated from the classical data sampled from the network, whereas the characteristic matrix is evaluated by optimizing the measurements with respect to von Neumann entropy and measuring mutual information.
Indeed, it is significantly more efficient to evaluate the covariance matrix; however, having control over the measurement apparatus improves our ability to probe the strength of the correlation between measurement devices. Thus, the characteristic matrix may give a more detailed view of the network's topology.}
\begin{remark}
The term ``computational basis'' used thus far can be misleading. As mentioned previously, the basis of choice at the sources can differ from the choice at the measurement nodes. However, since calculating the von Neumann entropy and the measured mutual information requires optimization over basis sets at each measurement node, we expect to recover the reference frame of the sources. Implementation of such a procedure can be achieved via differential programming \cite{Doolittle2022}. Thus, without loss of generality, the term ``computational basis'' will be used synonymously with ``the source's reference frame.''
\end{remark}
\subsection{Distinguishing network topology}
We now show that the topology of an $n$-local quantum network can be fully characterized by the characteristic matrix $M_{\mathcal N}$, which specifies the von Neumann entropy at each node and the measured mutual information of each pair of nodes.
\begin{theorem} \label{thm:noiseless-classifier}
Let two quantum networks, $\mathcal N \ind 1$ and $\mathcal N \ind 2$, satisfy \Cref{ass:all}. Then $\mathcal N \ind 1$ and $\mathcal N \ind 2$ have the same topology (c.f.\ \Cref{def:isomorphism}) if and only if $S(A_i \ind 1) = S(A_i \ind 2)$ for all nodes $A_i$ and $\mathcal I_m(A_i \ind 1;A_j \ind 1) = \mathcal I_m(A_i \ind 2; A_j \ind 2)$ for all pairs of nodes $A_i, A_j$.
\end{theorem}
\begin{proof}
For sufficiency, observe that if two networks have the same topology, then the number of sources connected to $A_i \ind 1$ is the same as $A_i \ind 2$, and the number of sources shared between $A_i \ind 1$ and $A_j \ind 1$ is the same as $A_i \ind 2$ and $A_j \ind 2$. By \Cref{thm:VNE-interpretation,thm:MMI-interpretation}, the von Neumann entropy of each node and measured mutual information of pairs of nodes are the same.
We show necessity by way of contradiction. Suppose the von Neumann entropy at each measurement node and the measured mutual information between any pair of measurement nodes for the two networks are identical, but the two networks have different topologies.
First, we note that the two networks must have the same number of nodes; an immediate contradiction is reached otherwise. On the other hand, the two networks will also have the same number of links, which can be written alternatively using the von Neumann entropy as $N_\ell = \sum_{i} S(A_i)$
using \subassref{all}{A1}. Lastly, the number of sources must also be the same. Suppose $N_s \ind 1 < N_s \ind 2$. Since the number of links present in either network is the same, there must be at least one link $\ell$ connected to a preparation node in $\mathcal N \ind 1$ that is not connected to the same preparation node in $\mathcal N \ind 2$. This means that the measurement device $A_i$ that was connected to $A_j$ via $\ell$ in $\mathcal N \ind 1$ must have lost its connection to $A_j$ in $\mathcal N \ind 2$. Thus, $\mathcal I_m (A_i \ind 1;A_j \ind 1) > \mathcal I_m (A_i \ind 2; A_j \ind 2)$ and we have reached a contradiction.
Now, focus on the case where the two networks have the same number of sources, links, and nodes. Define a \textit{triplet} in a network to be a tuple of three elements, $t_n = (A_i, \Lambda_k, A_j)$, such that $A_i$ and $A_j$ share $\Lambda_k$. We distinguish the multiplicity of the triplets; that is, it is possible that $t_n = t_m$ for $n \neq m$, so long as the number of duplicated triplets is consistent with the number of node pairs sharing the same source. Two networks have different topologies if and only if there is at least one triplet $t_n$ that is in $\mathcal N \ind 1$ but not in $\mathcal N \ind 2$. In particular, if the two networks differ by exactly one triplet, then $\mathcal I_m(A_i \ind 1, A_j \ind 1) = \mathcal I_m (A_i \ind 2, A_j \ind 2) + 1$ and we reach a contradiction.
Now, suppose there are $d$ triplets present in $\mathcal N \ind 1$ and not in $\mathcal N \ind 2$. Construct a map $\xi$ that satisfies:
\begin{enumerate}[label=(\roman*)]
\item $\xi \left( t_n \right) = t_{n'} = (A_i', \Lambda_k', A_j')$ where $t_n$ is in $\mathcal N \ind 1$ and $t_{n'}$ is in $\mathcal N \ind 2$,
\item Performing the map $\xi$ on all triplets $t_n$ in $\mathcal N \ind 1$ yields $\mathcal N \ind 2$.
\end{enumerate}
Think of $\xi$ as the map that ``moves'' $\mathcal N \ind 1$ to $\mathcal N \ind 2$. When a triplet is present in both networks, $\xi(t_n) = t_n$; when a triplet is only present in one network, or one triplet is ``moved'' from its original position, $\xi(t_n) = t_{n'} \neq t_n$. If there exists a $t_n^* = (A_i^*, \Lambda_k^*, A_j^*)$ such that $\xi(t_n^*)$ is not in $\mathcal N \ind 1$ either, then $\mathcal I_m (A_i^*, A_j^*) \leq \mathcal I_m (A_i', A_j') - 1$, and we have reached a contradiction. If, on the other hand, $\xi(t_n)$ is also in $\mathcal N \ind 1$ for all $t_n$ in $\mathcal N \ind 1$, then the topology has not changed, contradicting our initial supposition.
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}
\Vertex[x=-10,y=1,shape=rectangle,label=$A_1$]{A1}
\Vertex[x=-8.5,y=1.00,shape=rectangle,label=$A_2$]{A2} \Vertex[x=-7,y=1.00,shape=rectangle,label=$A_3$]{A3}
\Vertex[x=-5.5,y=1.00,shape=rectangle,label=$A_4$]{A4}
\Vertex[x=-4,y=1.00,shape=rectangle,label=$A_5$]{A5}
\Vertex[x=-8.5,y=-1,RGB,color={127,201,127},label=$\Lambda_1$]{S1}
\Vertex[x=-7,y=-1,RGB,color={127,201,127},label=$\Lambda_2$]{S2}
\Vertex[x=-5.5,y=-1,RGB,color={127,201,127},label=$\Lambda_3$]{S3}
\Edge[label=1](S1)(A1)
\Edge[label=2](S1)(A2)
\Edge[label=3](S2)(A2)
\Edge[label=4](S2)(A3)
\Edge[label=5](S2)(A4)
\Edge[label=6](S3)(A4)
\Edge[label=7](S3)(A5)
\node[rectangle] (r) at (-2.3,0) {$M_{\mathcal N} =$};
\matrix (m)[matrix of math nodes,left delimiter=(,right delimiter=)]
{
1 & 1 & 0 & 0 & 0 \\
1 & 2 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 2 & 1 \\
0 & 0 & 0 & 1 & 1 \\
};
\end{tikzpicture}
\caption{Example of a quantum network and its respective characteristic matrix $M_{\mathcal N}$. \Cref{thm:noiseless-classifier} showed that $M_{\mathcal N}$ uniquely characterizes a quantum network. However, inferring the network from $M_{\mathcal N}$ is nontrivial.}
\label{fig:meas-infer-ex}
\end{figure}
The theorem states that one can uniquely determine the topology of a network from its characteristic matrix. Assuming the entropic quantities can be reliably estimated, verifying whether two networks have the same topology requires a number of queries that grows only polynomially with the number of nodes.
Inferring the topology from the characteristic matrix remains a difficult task. See \Cref{fig:meas-infer-ex} for an example network and its corresponding characteristic matrix $M_{\mathcal N}$. Knowing the topology, $M_{\mathcal N}$ can be straightforwardly obtained using \Cref{thm:VNE-interpretation} and \Cref{thm:MMI-interpretation}. However, we encourage the reader to try the other direction. Even though $M_{\mathcal N}$ indicates whether $A_i$ and $A_j$ share one or more sources, finding the correct number of sources $N_s$ and assigning nodes to the respective sources appears to be highly nontrivial. Naively, one could search through all possible quantum networks with $N_s$ sources, for all possible $N_s$, but the search space grows exponentially and would not be tractable for large networks. Thus, we leave the existence of a polynomial-time inference algorithm as a future direction.
\subsection{Inferring topology from qubit measurements}
However, the concerns described above are non-existent when measurements can be done qubit-wise!
So far, qubits have been assumed to be received in a black-box manner, where the qubits are indistinguishable and any joint measurement can be performed so long as it is local to the measurement node.
This assumption is often made for generality.
However, we can show that if qubits in each node can be measured individually, the quantum network can be inferred from the characteristic matrix in polynomial time.
\begin{theorem}\label{thm:qubit_network_characterization}
Consider an $n$-local network $\mathcal N$ measured using local qubit projectors $\Pi^{\mathcal N}_{\vec{a}} = \bigotimes_{j=1}^m \Pi_{a_j}^{q_j}$ where $a_j\in\{0,1\}$ and $q_j\in[N_q]$ index the measured qubit.
The network's topology is completely characterized by the matrix
\begin{equation}
Q_{\mathcal N} = \begin{pmatrix}
S(q_1) & \mathcal I_m (q_1; q_2) & & \dots & & \mathcal I_m (q_1; q_{N_q}) \\
\mathcal I_m (q_2; q_1) & S(q_2) & \mathcal I_m (q_2; q_3) &\dots& & \vdots \\
\vdots & & &\dots& & \mathcal I_m (q_{N_q-1}; q_{N_q}) \\
\mathcal I_m (q_{N_q}; q_1) & & & \dots& \mathcal I_m (q_{N_q}; q_{N_q -1}) & S(q_{N_q})
\end{pmatrix}
\end{equation}
where the $i^{th}$ row lists the qubits entangled with qubit $q_i$, and the number of sources $N_s$ equals the number of unique rows (or columns) of $Q_{\mathcal N}$.
\end{theorem}
\begin{figure}
\centering
\begin{tikzpicture}
\Vertex[x=-8, y=2.46,shape=rectangle,label=$A_1$]{A1}
\Vertex[x=-10,y=-1.00,shape=rectangle,label=$A_2$]{A2}
\Vertex[x=-6,y=-1.00,shape=rectangle,label=$A_3$]{A3}
\Vertex[x=-8.5,y=0.66,color=Periwinkle]{S2}
\Vertex[x=-7.5,y=0.66,color=Green]{S3}
\Edge[label=$q_4$](S2)(A2)
\Edge[label=$q_5$](S3)(A3)
\Edge[label=$q_1$](S2)(A1)
\Edge[label=$q_6$](S2)(A3)
\Edge[label=$q_2$](S3)(A1)
\Edge[label=$q_3$](S3)(A2)
\node[rectangle] (r) at (-2.3,0) {$Q_{\mathcal N} =$};
\matrix (m)[matrix of math nodes,left delimiter=(,right delimiter=)]
{
1 & 0 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 & 1 \\
};
\draw[draw=black,dashed] (-1.2,-1.5) rectangle ++(0.75,3);
\draw[draw=black,dashed] (-0.35,-1.5) rectangle ++(0.72,3);
\draw[draw=black,dashed] (0.47,-1.5) rectangle ++(0.75,3);
\Vertex[x=-0.8,y=1.9,shape=rectangle,label=$A_1$]{A1l}
\Vertex[x=0,y=1.9,shape=rectangle,label=$A_2$]{A1l}
\Vertex[x=0.8,y=1.9,shape=rectangle,label=$A_3$]{A1l}
\draw[fill=Periwinkle, opacity=0.4] (-1.3,-1.3) rectangle ++(2.61,0.3);
\draw[fill=Green, opacity=0.4] (-1.3,-0.83) rectangle ++(2.61,0.3);
\draw[fill=Periwinkle, opacity=0.4] (-1.3,-0.36) rectangle ++(2.61,0.3);
\draw[fill=Green, opacity=0.4] (-1.3,0.10) rectangle ++(2.61,0.3);
\draw[fill=Green, opacity=0.4] (-1.3,0.56) rectangle ++(2.61,0.3);
\draw[fill=Periwinkle, opacity=0.4] (-1.3,1.01) rectangle ++(2.61,0.3);
\end{tikzpicture}
\caption{Application of \Cref{thm:qubit_network_characterization} on network in \Cref{fig:tri-net-2}. The columns of $Q_{\mathcal N}$ are organized by the nodes each qubit is from, and unique rows are grouped together. Each group of unique rows corresponds to a preparation node (purple or green) and the connectivity can be found by observing the non-zero entries in each row.}
\label{fig:qubit-infer-ex}
\end{figure}
The above theorem gives an algorithm for reconstructing the network topology given the qubit-wise characteristic matrix $Q_{\mathcal N}$. First, assume knowledge of the measurement node that each qubit is sent to. The columns of $Q_{\mathcal N}$, each representing a qubit, can then be grouped into its respective measurement node. On the other hand, the rows of $Q_{\mathcal N}$ can be partitioned into sets of indices with identical rows, that is, let $\Lambda_i$ be a set such that for all $j,k \in \Lambda_i$, $Q_{\mathcal N, j,*} = Q_{\mathcal N, k,*}$. As the notation suggests, this set of indices is the set of qubits from the source $\Lambda_i$. Lastly, if $Q_{\mathcal N,r,s} = 1$, qubits $r$ (in node $A_j$) and $s$ (in node $A_k$) share a source $\Lambda_i$ and the triplet $(A_j,\Lambda_i,A_k)$ exists in the network. In \Cref{fig:qubit-infer-ex}, we give an elementary demonstration of the algorithm on the triangle network presented in \Cref{fig:tri-net-2}.
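As a minimal illustration of this procedure, the sketch below (with a hypothetical helper name \texttt{infer\_topology}, an assumed qubit-to-node assignment \texttt{node\_of}, and entries of $Q_{\mathcal N}$ taken to be already thresholded to 0/1) groups identical rows into sources and reads off the connectivity:
\begin{verbatim}
import numpy as np

def infer_topology(Q, node_of):
    # Q       : N_q x N_q qubit-wise characteristic matrix (0/1 entries)
    # node_of : node_of[q] is the measurement node receiving qubit q
    N_q = Q.shape[0]
    sources = []                      # each source is a set of qubit indices
    for q in range(N_q):
        for s in sources:
            rep = next(iter(s))       # any representative of this group
            if np.array_equal(Q[q], Q[rep]):
                s.add(q)              # identical row -> same source
                break
        else:
            sources.append({q})       # new unique row -> new source
    # each source connects the measurement nodes its qubits are sent to
    return [{node_of[q] for q in s} for s in sources]
\end{verbatim}
Each qubit is compared against a single representative row per existing group, so the reconstruction remains polynomial in the number of qubits.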
Performing joint measurements, even for qubits received in the same measurement node, can be experimentally demanding. Thus, limiting ourselves to qubit-wise measurements actually enhances the practicality of the protocol. The fine-grained measurements provide a simple algorithm for determining the topology in time quadratic in the number of qubits. Moreover, sources are not restricted to preparing only GHZ states; any entangled state will do. Given infinite precision and noiseless channels, each entry of $Q_{\mathcal N}$ records whether correlations exist between the corresponding qubits. As such correlations can only arise from a shared source, two qubits are from the same source if and only if a non-zero correlation is observed. Because of this binary nature (zero versus non-zero correlation), we will show in the next section that this protocol is entirely robust to depolarizing noise.
\section{Inferring topology from noisy channels}\label{sec:noisy}
Maximally entangled states such as GHZ states are fragile and easily corrupted by noise. In this section, we establish the robustness of the classification protocol introduced above when it is subjected to depolarizing noise, that is, for a quantum state $\rho$, a \textit{depolarizing channel} $\mathcal E_\gamma$ performs the map
\begin{align}
\mathcal E_\gamma (\rho) = (1-\gamma)\rho + \frac{\gamma}{2^n} \mathbb I_{2^n}
\end{align}
where $\gamma \in [0,1]$ is the parameter of the channel, and $n$ is the number of qubits involved in state $\rho$. In the quantum network setting, depolarizing noise acts jointly on qubits prepared by a source, and sources are affected independently.
\subsection{Example: triangle network revisited}
Consider the triangle network 1 shown in \Cref{fig:tri-net-1}. Like in the noiseless case, the Shannon entropy at each measurement device is independent of the choice of measurement basis, and $H(A) = S(A) = 1$. In an attempt to characterize the topology, we look at the measured mutual information between devices $A_i$ and $A_j$, where the joint state is
\begin{align}
\rho_{A_i \cup A_j} = \frac{\mathbb I_2}{2} \tensor \ket{\Phi} \bra \Phi \tensor \frac{\mathbb I_2}{2}.
\end{align}
If the state prepared by each source is sent through a depolarizing channel with the same noise parameter, then the received joint state $\mathcal E_\gamma (\rho_{A_i \cup A_j})$ becomes
\begin{align}
\mathcal E_\gamma (\rho_{A_i \cup A_j}) = \frac{\mathbb I_2}{2} \tensor \left( (1-\gamma) \ket \Phi \bra \Phi + \frac{\gamma}{4} \mathbb I_4 \right) \tensor \frac{\mathbb I_2}{2},
\end{align}
which yields the Shannon entropy of
\begin{align}\label{eq:noisy-type-one}
-H(\mathcal E_\gamma (\rho_{A_i \cup A_j})) = \frac{2-\gamma}{2} \log \left(2-\gamma\right) + \frac{\gamma}{2} \log \gamma - 4
\end{align}
when measured using the computational basis for both devices. When the channel is noiseless, \textit{i.e.} $\gamma = 0$, then $\mathcal I_m(A_i;A_j) = 1$ and we recover the results shown in \Cref{sec:tri-ex-1}. However, if the channel is completely noisy, meaning that $\gamma = 1$, then $H(\mathcal E_\gamma(\rho_{A_i \cup A_j})) = 4$ and the measured mutual information is zero, which can lead the receivers to conclude that the qubits received at $A_i$ and $A_j$ are independent of one another.
The same calculation can be applied to network 2 in \Cref{fig:network-example}. Let $\sigma_2 = (\ket{00}\bra{00} + \ket{11}\bra{11})/2$. Since the joint system is $\rho_{A_i \cup A_j} = \sigma_2 \tensor \sigma_2$, we focus on the behavior of a single $\sigma_2$; the remaining system behaves identically and independently. The noisy state of a single $\sigma_2$ is
\begin{align}
\mathcal E_\gamma (\sigma_2) = \frac{\gamma}{4} \mathbb I_4 + (1-\gamma)\sigma_2
\end{align}
The above state yields the following Shannon entropy when measured in the computational basis.
\begin{align}
-H(\mathcal E_\gamma (\sigma_2)) = \frac{2-\gamma}{2} \log \left(2-\gamma\right) + \frac{\gamma}{2} \log \gamma - 2
\end{align}
Since $\rho_{A_i \cup A_j} = \sigma_2 \tensor \sigma_2$, the entropies of the two independent pairs simply add:
\begin{align}\label{eq:noisy-type-two}
-H(\mathcal E_\gamma (\rho_{A_i \cup A_j})) = (2-\gamma) \log \left(2-\gamma\right) + \gamma \log \gamma - 4
\end{align}
Again, we recover the noiseless and completely random behavior when $\gamma = 0$ or $\gamma = 1$, respectively. However, we are interested in whether the two systems are ever indistinguishable due to noise.
Taking the computational basis to again be the basis that maximizes the mutual information, \Cref{eq:noisy-type-one,eq:noisy-type-two} give the measured mutual information as, respectively,
\begin{align}
\mathcal I_m \ind 1 (A_i;A_j) &= \frac{2-\gamma}{2} \log \left(2-\gamma\right) + \frac{\gamma}{2} \log \gamma, \\
\mathcal I_m \ind 2 (A_i;A_j) &= (2-\gamma) \log \left(2-\gamma\right) + \gamma \log \gamma.
\end{align}
We can see that the measured mutual information of the first network is half that of the second. Consequently, for depolarizing noise of any strength $\gamma \in [0, 1)$, the measured mutual information retains a non-zero gap, and we will be able to distinguish the two networks through local measurements.
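For concreteness, evaluating both expressions at an illustrative value, say $\gamma = 1/2$ (not a value used elsewhere in this work), with logarithms taken base 2, gives
\begin{align}
\mathcal I_m \ind 1 (A_i;A_j) &= \frac{3}{4} \log \frac{3}{2} + \frac{1}{4} \log \frac{1}{2} \approx 0.19, \\
\mathcal I_m \ind 2 (A_i;A_j) &= \frac{3}{2} \log \frac{3}{2} + \frac{1}{2} \log \frac{1}{2} \approx 0.38,
\end{align}
so the factor-of-two separation between the two networks persists at finite noise.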
\subsection{Inferring topology of noisy networks}
We now extend \Cref{thm:noiseless-classifier} to the case of uniform depolarizing noise applied to every source. To do so, we first extend \Cref{thm:ent-lb,thm:ent-lb-2} to the noisy case.
\begin{lemma}
Consider a quantum network satisfying \Cref{ass:all} and two measurement nodes $A_i$ and $A_j$, and suppose the network is made up of depolarizing channels that act on each qubit with strength $\gamma$. Then, the local measurement basis that maximizes the Shannon mutual information, \textit{i.e.}
\begin{align}
\underset{\{\Pi^{A_i}_{\vec a_i}\}, \{\Pi^{A_j}_{\vec a_j}\}}{\textnormal{argmax}} H(\mathbb P (\vec a_i)) + H(\mathbb P (\vec a_j)) - H(\mathbb P (\vec a_i, \vec a_j))
\end{align}
is the computational basis.
\end{lemma}
\begin{proof}
Let $\rho$ be either a Bell pair $\ket \Phi \bra \Phi$ or shared random bits $\sigma_2$. Under depolarizing noise on the source, the state becomes
\begin{align}
\mathcal E_\gamma(\rho) = (1-\gamma) \rho + \frac{\gamma}{4} \mathbb I_4.
\end{align}
Note that for any unitary $U$ applied onto the noisy state, the effect of the noise stays unchanged, \textit{i.e.}
\begin{align}
U \mathcal E_\gamma(\rho) U^\dagger = (1-\gamma) U \rho U^\dagger + \frac{\gamma}{4} \mathbb I_4.
\end{align}
Thus, the measurement basis that maximizes the Shannon mutual information of $\rho$ remains the same for all $\gamma \in [0,1)$.
\end{proof}
We are interested in whether there exists a $\gamma \in [0,1)$ such that the measured mutual information of two networks is the same. If such a $\gamma$ exists, denote it $\gamma^*$; then the two networks are indistinguishable to the protocol within some small neighborhood of $\gamma^*$. On the other hand, if the measured mutual information remains distinct for all $\gamma$ and all pairs of nodes across the two networks, then the protocol remains applicable at any noise level that is not completely depolarizing, assuming an infinite number of samples can be taken. Below, we provide one condition that guarantees such robustness to noise.
\begin{theorem} \label{thm:noisy-classifier}
Consider two quantum networks $\mathcal N \ind 1$ and $\mathcal N \ind 2$ satisfying \Cref{ass:all}. For depolarizing channels with known strength $\gamma \in [0,1)$, we can distinguish the topology of $\mathcal N \ind 1$ from $\mathcal N \ind 2$.
\end{theorem}
\begin{proof}
Under the assumption that each preparation node sends at most one qubit to any given measurement node, for any two measurement nodes $A_i$ and $A_j$, the joint state $\rho_{A_iA_j}$ will be a tensor product of $\mathbb I_2$, $\sigma_2$, and Bell pairs. In particular, the von Neumann entropy of each device is invariant with respect to noise. Moreover, the measured mutual information of $\sigma_2$ and $\ket \Phi \bra \Phi$ is the same for all $\gamma \in [0,1]$. Letting $I(\gamma) = \mathcal I_m (\mathcal E_\gamma (\ket \Phi \bra \Phi)) = \mathcal I_m (\mathcal E_\gamma (\sigma_2))$, the measured mutual information as a function of $\gamma$ is $\mathcal I_m (A_i;A_j) = N_s^{A_i,A_j} I(\gamma)$.
If $\mathcal N \ind 1$ and $\mathcal N \ind 2$ have different topologies, there exists at least one pair of nodes $A_i$ and $A_j$ such that (without loss of generality) $N_s^{A_i \ind 1, A_j \ind 1} > N_s^{A_i \ind 2, A_j \ind 2}$. Since $I(\gamma) > 0$ for all $\gamma \in [0,1)$, it follows that $\mathcal I_m (A_i \ind 1; A_j \ind 1) > \mathcal I_m (A_i \ind 2; A_j \ind 2)$. Hence, by \Cref{thm:noiseless-classifier}, the two networks can be distinguished given sufficiently many shots to estimate each entropic quantity.
\end{proof}
Note that \Cref{thm:noisy-classifier} provides only a limited sense of noise robustness: it requires a priori knowledge of $\gamma$. The theorem remains useful for comparing two quantum networks with unknown topologies that are subject to depolarizing noise of the same strength, or for verifying the topology of a single noisy network. Recall that there are no straightforward algorithms for deriving the topology from the characteristic matrix even in the noiseless case. Moreover, inferring the topology only becomes more difficult when the two networks are exposed to noise of different strengths.
Nonetheless, if measurements can be done on the level of qubits, the topology can be deduced in polynomial time without any knowledge of $\gamma$!
\begin{theorem}\label{thm:qubit-network-characterization-noisy}
Consider a noisy network $\mathcal N$ that is measured using local qubit measurements and whose state is $\rho^{\Net} = \bigotimes_{i=1}^n \rho^{\Lambda_i}$.
Its topology is completely characterized by the matrix $Q_{\mathcal N}$ as described in \Cref{thm:qubit_network_characterization}.
\begin{proof}
Let $\rho_i = \tr_{j\neq i}[\rho^{\Net}]$ for each $i\in[N_q]$. Then, if $S(\rho_i) > 0$, a source may exist that correlates the qubit with other qubits.
Next, the measured mutual information satisfies $\mathcal I_m(q_i,q_j) = 0$ if and only if the joint state of the two qubits is the product $\rho_i\otimes\rho_j$. This implies that when $\mathcal I_m(q_i,q_j) > 0$, a source must be present to correlate the two independently measured qubits.
Therefore, the matrix $Q_{\mathcal N}$ only has nonzero elements on its off-diagonal if there exist sources to correlate the qubits.
In practice, finite samples are taken and the scalar value of $\mathcal I_m(q_i,q_j)$ should only be counted as nonzero if it is sufficiently larger than the statistical fluctuations of uncorrelated qubits.
\end{proof}
\end{theorem}
Note that in the above Theorem, a source can be so noisy that it separates as $\bigotimes_j \rho_j$.
We argue that the source no longer qualifies as such precisely because it no longer distributes shared randomness.
Otherwise, so long as a sufficient number of measurements is taken and the state prepared at each source does not separate into a product, the qubit-wise characteristic matrix $Q_{\mathcal N}$ is sufficient for determining the network topology.
On a different note, for certain choices of noise, such as colored noise and qubit dephasing noise, the characteristic matrix is preserved; thus, both \Cref{thm:noisy-classifier} and \Cref{thm:qubit-network-characterization-noisy} will hold under these noise models.
\section{Conclusion}
In this work, we introduced protocols for inferring the topology of $n$-local quantum networks. The protocol constructs characteristic matrices of a quantum network, storing the von Neumann entropies on the diagonal entries and the measured mutual information on the off-diagonal entries. These information-theoretic measures use only local measurements, which allows for easy implementation on quantum hardware. Assuming sources prepare maximally entangled states, the characteristic matrix can uniquely determine the topology of a quantum network. Moreover, if one is capable of making qubit-wise measurements, the topology of the network can be inferred in polynomial time from the characteristic matrix and is entirely robust to noise.
Furthermore, our approach is well suited to the variational quantum optimization methods for quantum networks described in Ref.~\cite{Doolittle2022}.
It is worth noting that the characteristic matrix cannot distinguish between quantum entanglement and shared randomness.
However, the characteristic matrix does indicate which qubits are correlated.
Thus, an entanglement witness can be tailored to the network's topology.
One approach might be to then test each source independently using an entanglement witness of choice~\cite{Terhal2002detecting,Guhne2009entanglement_detection}.
Several assumptions made in this paper could be relaxed to fit more realistic scenarios. For example, we assumed that all measurement nodes are observed and that measurements can be performed on every node. In reality, the known set of measurement nodes may be only a subset of the entire network. Exploring the limits and extensions of our protocol for partially observed networks would be a fruitful future direction.
\section*{Acknowledgements}
This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers and
the Office of Advanced Scientific Computing Research, Accelerated Research for Quantum Computing program under contract number DE-AC02-06CH11357.
\printbibliography
\vfil
\framebox{\parbox{.90\linewidth}{\scriptsize The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (``Argonne''). Argonne, a U.S.\ Department of Energy Office of Science laboratory, is operated under Contract No.\ DE-AC02-06CH11357. The U.S.\ Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan \url{http://energy.gov/downloads/doe-public-access-plan}.}}
\end{document}
\section{Introduction}\label{Sec:Introduction}
X-ray bursts (XRBs) are thermonuclear explosions in an accreted H or
He layer on the surface of a neutron star (see~\citealt{galloway:2017}
for a review). Observations of bursts can assist in constraining the
properties of the underlying neutron star, helping to illuminate the
nuclear equation of state~\citep{steiner:2010,ozel2016masses}. Extensive
observations of brightness oscillations during the rise (the initial phase
of the burst where the observed flux rapidly increases) have provided
evidence that the burning begins in a localized region and spreads
over the surface of the neutron
star~\citep{bhattacharyya:2006,bhattacharyya:2007,chakraborty:2014}.
One dimensional studies of XRBs have been very successful in predicting the
lightcurves and recurrence times (see, e.g., \citealt{woosley-xrb}).
These assume spherical symmetry and thus cannot capture the effects of
localized burning spreading across the neutron star. These studies
have also been used to explore the sensitivity of the burst
observables to accretion and reaction
rates~\citep{cyburt:2010,Jose2010a,Lampe2016}, and to model individual
bursts~\citep{johnston:2019}.
Multidimensional simulations of burning on a neutron star are more
difficult, with both the temporal and spatial scales presenting
challenges (see \citealt{astronum_2018} for an overview). For the spatial scales, we need to
resolve the reaction zone, $\mathcal{O}(10~\mbox{cm})$ or smaller, the
scale height of the atmosphere, $\mathcal{O}(500~\mbox{cm})$, and the
Rossby scale where the Coriolis force balances the lateral pressure
gradient, $\mathcal{O}(10^5~\mbox{cm})$~\citep{spitkovsky2002}. For
the temporal scales, capturing the rise, $\mathcal{O}(1~\mbox{s})$,
and the decay of the burst, $\mathcal{O}(10~\mbox{s})$, as well as the
accretion period between bursts, $\mathcal{O}(10^4~\mbox{s})$, is
currently beyond the ability of multidimensional hydro codes.
Nevertheless, significant progress has been made in understanding the
multidimensional nature of XRBs, through various approximations.
Laterally propagating detonations were modeled by
\citet{fryxellwoosley82} and \citet{hedet}. However, since it is difficult to detonate helium at the
densities found in normal XRBs, and even harder to detonate hydrogen
because of the waiting times for weak reactions, these would only occur at
very high densities. This means that
detonations may only really be applicable to superbursts (where carbon
is the reactant)~\citep{Weinberg2006b,Weinberg2007}.
Global multidimensional studies were performed by
\citet{spitkovsky2002}, where it was demonstrated that the Coriolis
force plays an important role in confining the burning as it spreads
across the neutron star surface. These calculations used the
shallow-water approximation, so the vertical details of the
atmosphere's structure were not captured. Their model showed that the
horizontal pressure gradient between the ash and fuel can be important
in accelerating the burning front.
Small-domain studies of convective burning in XRBs preceding flame
development have been done in two-dimensions \citep{lin:2006,xrb,xrb2}
and three-dimensions \citep{xrb3d}. These calculations used low Mach
number methods, which approximate the hydrodynamics equations to filter
soundwaves, enabling large timesteps and efficient modeling of
subsonic convection. While these calculations could not support
the lateral differences needed for flame spreading, they can help
understand the role that convection plays in distributing the initial
burning products vertically throughout the neutron star atmosphere as well
as the nature of any turbulence the burning front might encounter as it propagates
through the atmosphere.
The first vertically resolved simulations of lateral deflagrations
were obtained by \citet{cavecchi:2013}, who showed how the Coriolis
force creates a geometrical configuration that increases the flame
speed set by conduction by a factor $\sim L_R / H$, where $L_R$ is the
Rossby radius and $H$ is the scale height of the burning layer. The
effect of changing Coriolis confinement across the surface on the flame
propagation was explored in \citet{art-2015-cavecchi-etal},
while \citet{art-2016-cavecchi-etal} showed how magnetic field tension,
opposing the Coriolis force, can either speed up or slow down the flame by
changing the horizontal extent of the flame front. Finally, \citet{Cavecchi2019}
studied the effects in 3D of the baroclinic instability at the flame
front, measuring flames up to 10 times faster than in the 2D case.
For deflagrations, we either need to resolve the structure of the
reaction zone or use a flame model. Flame models usually assume that
the flame structure is thin compared to the size of the system (see,
e.g., \citet{Ropke2007} for applications to Type Ia supernovae). For
XRBs however, the flame thickness is comparable to the scale height of
the atmosphere, so we cannot use these approximations. Accurate
models of flames in XRBs therefore require that we resolve the thermal width,
which is $\mathcal{O}(10~\mbox{cm})$ for helium flames~\citep{Timmes00}.
The goal of this study is to understand what numerical and physical
approximations are required to perform a full hydrodynamical,
multidimensional simulation of flame propagation through the
atmosphere of a neutron star. For simplicity in this first set of
calculations, we will use a pure helium composition. These studies
complement the prior multidimensional studies described above in helping
us to build a picture of the dynamics of X-ray bursts.
\section{Numerical Approach}\label{Sec:numerics}
All simulations are performed with the {\sf Castro}\ hydrodynamics
code~(\citealt{castro}; see also \citealt{astronum:2017} for a recent
description). We evolve the system of fully compressible Euler
equations for reacting flow:
\begin{eqnarray}
\frac{\partial( \rho X_k)}{\partial t} &=& - \nabla\cdot (\rho {\bf{U}} X_k) + \rho \dot{\omega}_k \\
\frac{\partial (\rho {\bf{U}})}{\partial t} &=& - \nabla\cdot (\rho {\bf{U}} {\bf{U}}) - \nabla p +
\rho {\bf{g}} \nonumber \\
&&-2 \rho {\bf{\Omega}}\times {\bf{U}} - \rho {\bf{\Omega}} \times ({\bf{\Omega}} \times {\bf{r}}) \\
\frac{\partial (\rho E)}{\partial t} &=& - \nabla \cdot (\rho {\bf{U}} E + p{\bf{U}}) +
\nabla \cdot k_{\rm th} \nabla T + \rho \dot{\epsilon} +\nonumber\\
&& \rho {\bf{U}} \cdot {\bf{g}} - \rho ({\bf{\Omega}} \cdot {\bf{r}})({\bf{\Omega}} \cdot {\bf{U}}) + \rho |{\bf{\Omega}}|^2 ({\bf{U}} \cdot {\bf{r}})
\end{eqnarray}
Here, $\rho$ is the mass density, ${\bf{U}}$ is the velocity, $p$ is the
pressure and $E$ is the specific total energy, which is related to the
specific internal energy as $e = E - |{\bf{U}}|^2/2$. The forcing in the
momentum equation includes gravity, described by gravitational
acceleration ${\bf{g}}$, and rotational forces, described by angular velocity
${\bf{\Omega}}$, with ${\bf{r}}$ the position vector from the origin.
Species are described by mass fractions, $X_k$ (such that $\sum_k X_k
= 1$), and creation rates, $\dot{\omega}_k$, which are related to the total
specific energy generation rate, $\dot{\epsilon}$. The total mass
conservation,
\begin{equation}
\frac{\partial \rho}{\partial t} = - \nabla\cdot (\rho {\bf{U}})
\end{equation}
implies $\sum_k \dot{\omega}_k = 0$. An equation of state of the form $p = p(\rho, e, X_k)$ completes the
thermodynamic description of the system. Thermal diffusion is
described by a thermal conductivity $k_{\rm th}$ and temperature $T$.
{\sf Castro}\ uses an unsplit piecewise
parabolic method (PPM) with characteristic tracing for solving the
hydrodynamics~\citep{ppm,millercolella:2002}, generalized to an
arbitrary equation of state~\citep{zingalekatz}. Reactions are
incorporated via Strang splitting~\citep{strang:1968}, giving a method
that is overall second-order accurate in space and time.
{\sf Castro}\ uses the {\sf AMReX}\ adaptive mesh refinement
library~\citep{amrex_joss} to manage a hierarchy of grids at different
resolutions.
Since the neutron star rotates, we work in a corotating frame, taking
the angular velocity ${\bf{\Omega}}$ to be constant. Further, for the two-dimensional
simulations presented here, we work in axisymmetric coordinates,
but we advect a third component of velocity, coming out of the
simulation plane, that participates in the Coriolis force (sometimes
described as a 2.5D simulation). We will take the
{\sf Castro}\ $x$-coordinate to be the cylindrical radial coordinate with
corresponding velocity $u$, the {\sf Castro}\ $y$-coordinate to be the
cylindrical vertical coordinate with corresponding velocity $v$, and the
{\sf Castro}\ $z$-coordinate to be the cylindrical azimuthal coordinate,
with corresponding velocity $w$. A right-handed coordinate system has
positive $w$ pointing out of the page. We will take ${\bf{\Omega}} =
\Omega_0 \hat{\bf y}$ for the angular rotation rate, and ${\bf{g}} = -g
\hat{\bf y}$ for the gravitational acceleration, with $g$ constant.
With these choices, the Coriolis force is:
\begin{equation}
-2\rho {\bf{\Omega}} \times {\bf{U}} =
-2\rho \left ( \Omega_0 w \hat{x} - \Omega_0 u \hat{z} \right )
\end{equation}
We will neglect the centrifugal force---with our plane-parallel
geometry, this will act only in the lateral direction, and is not
expected to greatly affect the dynamics. Carrying the Coriolis force
allows us to capture the geostrophic balance that sets up via lateral
hydrostatic equilibrium \citep{spitkovsky2002}. In the discussions
below, we will use the {\sf Castro}\ coordinate names, $(x, y)$, in our
notation.
Writing the momentum equation in terms of the $u$, $v$, and $w$
components, and neglecting the centrifugal force, we have:
\begin{align}
\frac{\partial (\rho u)}{\partial t} + \nabla \cdot (\rho u {\bf{U}}) +
\frac{\partial p}{\partial x} &= -2\rho \Omega_0 w \\
\frac{\partial (\rho v)}{\partial t} + \nabla \cdot (\rho v {\bf{U}}) +
\frac{\partial p}{\partial y} &= -\rho g \\
\frac{\partial (\rho w)}{\partial t} + \nabla \cdot (\rho w {\bf{U}}) +
\cancelto{0}{\frac{\partial p}{\partial z}} &=
2\rho \Omega_0 u
\end{align}
where we cancel $\partial p/\partial z$ because there are no
variations in the azimuthal direction. This allows us to recast the
$w$-velocity equation as a simple advection equation:
\begin{equation}
\frac{\partial w}{\partial t} + {\bf{U}} \cdot \nabla w = 2\Omega_0 u
\end{equation}
In our geometry, the flame will propagate from left to right, so $u$
will be positive and the Coriolis force results in $w > 0$ (out of the
simulation plane). The algorithmic implementation of rotation in
{\sf Castro}\ is described in \cite{wdmergerI}.
We use a general stellar equation of state with nuclei (treated as an
ideal gas), photons, and degenerate/relativistic electrons, as
described in \cite{timmes_swesty:2000}. To model reactions, we use a
13-isotope alpha chain network derived from the {\tt aprox13}
network~\citep{timmes_aprox13}. For one of our runs, we use the
smaller 7-isotope network described in \citet{iso7}. We integrate the
network using the VODE integration package~\citep{vode}, and our
implementation is provided in the StarKiller Microphysics
source~\citep{starkiller}.
We note that we do not explicitly model viscosity. The reactions will
provide the small-scale cutoff to the instabilities and turbulence at
the flame front. We also do not include species
diffusion---astrophysical flames tend to have large Lewis numbers, so
this is not expected to be important~\citep{timmeswoosley:1992}.
Finally, we use the thermal conductivities described in
\citet{Timmes00}.
All simulations use adaptive mesh refinement to refine on the
atmosphere (leaving the space between the top of the atmosphere and
upper boundary at low resolution). As seen in Figure \ref{fig:grids},
we use up to 3 refinement levels in addition to the base grid, the first
a factor of 4 finer than the base grid and each remaining level a
factor of 2 finer than the previous one. {\sf Castro}\ subcycles in time, so the
finer grids are evolved with a finer timestep than the coarse grids.
Occasionally, the timestep chosen at the start of a cycle will violate
the CFL condition during the advancement of the finer grids. In this
case, we restart the finer grid evolution with a smaller timestep,
subcycling within the larger timestep hierarchy. We use a CFL number
of 0.8 for our simulations.
The base grid for our standard
simulations is $768\times 192$ zones, and the equivalent finest grid
when 3 refinement levels are added would be $12288\times 3072$ zones.
Our standard domain is $1.2288\times 10^5~\mathrm{cm} \times
3.072\times 10^4~\mathrm{cm}$, corresponding to $10~\mathrm{cm}$ resolution
on the finest grid. We only refine the fuel layer in the left half of
the domain at the highest resolution (and only down to densities of
$2.5\times 10^4~\mathrm{g~cm^{-3} }$), since this is where we expect the flame to
propagate. At the start of the simulation, $3.4\%$ of the domain is
at the finest resolution. This increases to $7.4\%$ by the end of the
simulation, because of the increase in the scale height of the atmosphere
behind the flame.
\begin{figure}[t]
\plotone{grids.pdf}
\caption{\label{fig:grids}A section of a 2D simulation showing the four-level
grid structure. Note that the boxes shown are not the simulation zones, which span
$10~\mathrm{cm}$ at the finest level, but subdomains containing approximately equal
numbers of zones that are distributed across MPI processes. A coarse base grid
extending to the upper boundary is drawn in white, with the magenta, green, and
blue grids showing the jumps in refinement needed to fully resolve the fuel layer
and underlying neutron star.}
\end{figure}
Thermal diffusion is modeled explicitly, using a predictor-corrector
scheme to achieve second-order accuracy. A verification test of the
diffusion scheme is shown in Appendix~\ref{app:diffusion}. The
explicit thermal diffusion requires a timestep limiter of the form:
\begin{equation}
\Delta t_\mathrm{diff} \le \frac{1}{2} \frac{\Delta x^2}{\mathcal{D}}
\end{equation}
where $\mathcal{D} = k_{\rm th}/(\rho c_v)$ is the thermal diffusivity. The
diffusivity increases rapidly at the top of the atmosphere, causing
these low density regions to determine the overall timestep for the
simulations. Therefore, we disable thermal conduction at low
densities where it is not expected to be important.
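As a simple illustration of this limiter, a sketch of the resulting timestep over a set of zones is given below; the helper name, argument layout, and the density cutoff value are assumptions for the example rather than the values used in the simulations:
\begin{verbatim}
import numpy as np

def diffusive_timestep(dx, k_th, rho, c_v, rho_cutoff=1.0e3):
    # explicit-diffusion stability limit: dt <= dx**2 / (2 D), with
    # D = k_th / (rho c_v); conduction is disabled below rho_cutoff,
    # so those zones place no constraint on the timestep
    D = np.where(rho < rho_cutoff, 0.0, k_th / (rho * c_v))
    with np.errstate(divide="ignore"):
        dt = 0.5 * dx**2 / D           # infinite where conduction is off
    return dt.min()
\end{verbatim}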
We use hydrostatic boundary conditions on the lower boundary, using
a discretized hydrostatic equilibrium equation of the form:
\begin{equation}
\label{eq:hse}
p_i = p_{i-1} + \frac{1}{2} \Delta y (\rho_i + \rho_{i-1}) {\bf{g}} \cdot \hat{\bf y}
\end{equation}
and holding the temperature constant in the ghost cells. This is solved
together with the equation of state. The velocity is reflected at this boundary.
This procedure follows the form described in \citet{ppm-hse}. The
left boundary is reflecting and the right boundary is a zero-gradient
outflow. The top boundary sets the state to simply the conditions
in our outer buffer region of the initial model (see below), with the
normal velocity set to the larger of zero or the velocity at the top
of the domain (this prevents incoming velocities at the top) and the
transverse velocities set to zero.
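A minimal sketch of this lower-boundary fill is shown below. It uses an isothermal ideal-gas relation as a stand-in for the general stellar equation of state used in the simulations, and the helper name and default parameters are illustrative assumptions:
\begin{verbatim}
import numpy as np

k_B = 1.380649e-16      # erg / K
m_u = 1.66053907e-24    # g

def fill_lower_ghost(rho_int, T_int, dy, g, mu=4.0, nghost=4):
    # integrate the discrete HSE relation (Eq. hse) downward at constant
    # temperature; g is the magnitude of the gravitational acceleration.
    # The unknown ghost density appears on both sides, so iterate.
    def pres(rho):                     # stand-in EOS p(rho, T_int)
        return rho * k_B * T_int / (mu * m_u)

    rho_above, p_above = rho_int, pres(rho_int)
    ghosts = []
    for _ in range(nghost):
        rho = rho_above                # initial guess
        for _ in range(100):
            p_new = p_above + 0.5 * dy * (rho + rho_above) * g
            rho_new = p_new * mu * m_u / (k_B * T_int)
            if abs(rho_new - rho) < 1.e-10 * rho:
                rho = rho_new
                break
            rho = rho_new
        ghosts.append(rho)
        rho_above, p_above = rho, pres(rho)
    return ghosts                      # ghost-cell densities below the boundary
\end{verbatim}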
When we begin the simulation, there is a transient phase as the flame
gets established. Material that is forced upward will encounter the
steep density gradient at the top of the atmosphere and accelerate as
it is blown out of the atmosphere. Eventually this material will fall
back to the top of the atmosphere. In this paper, we are mostly
concerned with the behavior of the flame and not any material that is
violently blown out of the top of the atmosphere, so we apply a sponge
to this region. This is similar to the method we previously used in \cite{xrb2}, and takes
the form of a source term to the momentum and energy equations of the form:
\begin{align}
{\bf S}_{\rho {\bf{U}}} &= \rho {\bf{U}} \frac{f}{\Delta t} \\
S_{\rho E} &= {\bf{U}} \cdot {\bf S}_{\rho {\bf{U}}}
\end{align}
with the sponge forcing $f$ dependent on the density. We define the sponge
shape as:
\begin{equation}
s = \left \{
\begin{array}{cc}
0 & \rho > \rho_\mathrm{upper} \\
\frac{1}{2}
\left [ 1 - \cos \left ( \frac{\pi (\rho - \rho_\mathrm{upper})}{\Delta \rho} \right ) \right ] & \rho_\mathrm{upper} \ge \rho > \rho_\mathrm{lower} \\
1 & \rho < \rho_\mathrm{lower}
\end{array} \right .
\end{equation}
Here $\rho_\mathrm{upper}$ and $\rho_\mathrm{lower}$ are the densities where the sponge transitions to being fully applied. We take $\rho_\mathrm{upper} = 10^2~\mathrm{g~cm^{-3} }$ and $\rho_\mathrm{lower} = 1~\mathrm{g~cm^{-3} }$, with $\Delta \rho = \rho_\mathrm{upper} - \rho_\mathrm{lower}$.
The sponge update is done implicitly to get the effective forcing, $f$:
\begin{equation}
f = -\left [ 1 - \frac{1}{1 + \alpha s} \right ]
\end{equation}
with $\alpha = \Delta t/\tau_\mathrm{sponge}$. Here
$\tau_\mathrm{sponge}$ is the timescale over which the sponge acts.
We take $\tau_\mathrm{sponge} = 10^{-7}~\mathrm{s}$.
The sponge drives the velocity of the material in the low
density regions at the top
of the atmosphere to zero. This sponging helps increase our timestep
as well.
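The sponge factor itself is simple to evaluate; the sketch below (hypothetical function name, scalar density for clarity) collects the pieces above:
\begin{verbatim}
import numpy as np

def sponge_factor(rho, dt, rho_upper=1.0e2, rho_lower=1.0, tau=1.0e-7):
    # returns f such that the damped momentum is
    #   (rho U)(1 + f) = (rho U) / (1 + alpha s)
    drho = rho_upper - rho_lower
    if rho > rho_upper:
        s = 0.0
    elif rho > rho_lower:
        s = 0.5 * (1.0 - np.cos(np.pi * (rho - rho_upper) / drho))
    else:
        s = 1.0
    alpha = dt / tau
    return -(1.0 - 1.0 / (1.0 + alpha * s))
\end{verbatim}
Because $1 + f = 1/(1+\alpha s) \in (0,1]$, the implicit form damps the momentum monotonically and can never reverse it, no matter how small $\tau_\mathrm{sponge}$ is.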
\section{Flame Properties}\label{Sec:Flame}
The speed and thickness of a laminar helium flame are determined by the
energy generation rate and conductivity, and scale roughly as
\begin{equation}
\label{eq:flame_scaling}
s_L \approx \sqrt{k_{\rm th} \dot{\epsilon}} \qquad
\lambda_L \approx \sqrt{\frac{k_{\rm th}}{\dot{\epsilon}}}
\end{equation}
\citep{orourke:1979,khokhlov:1993}.
At the densities we consider in this simulation, the pure He flame
speed is quite slow and would require long integration
times to see significant evolution of the burning. We therefore
consider boosted flames in this first paper, to accelerate the burning
and allow us to understand the qualitative effects of laterally
propagating flames. To boost the flame while keeping the thickness
the same, we can multiply both the burning rate and the conductivity
by the same factor. For our standard calculations, we choose 10 for
each, to give a 10$\times$ faster flame speed. We call this the ``10/10''
flame. We will also do a simulation with the reactions and
conductivity both boosted by 5, the ``5/5'' flame.
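Under the scalings of Eq.~\ref{eq:flame_scaling}, boosting both the burning rate and the conductivity by a common factor $b$ behaves as intended:
\begin{equation}
s_L \rightarrow \sqrt{(b\, k_{\rm th}) (b\, \dot{\epsilon})} = b\, s_L , \qquad
\lambda_L \rightarrow \sqrt{\frac{b\, k_{\rm th}}{b\, \dot{\epsilon}}} = \lambda_L ,
\end{equation}
so the 10/10 flame should be $10\times$ faster than the unboosted flame while retaining the same thermal width.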
To understand the time and length scales involved in flame
propagation, we do a 1D simulation of a laminar flame using our
microphysics. Figure~\ref{fig:flame} shows the flame thermodynamic
profile and properties for the 10/10 flame using
our conductivities and the {\tt aprox13} reaction network. This flame
had a density of $2\times 10^6~\mathrm{g~cm^{-3} }$ and
temperature of $5\times 10^7$~K. We observe that this flame speed is
about $10^5~\mathrm{cm~s^{-1} }$, the flame width is about 40~cm, and it
takes about 3~ms to settle into a sustained flame. Note that this speed
is quite small compared to the speeds of $\sim 10^6~\mathrm{cm~s^{-1} }$ estimated in
\citet{spitkovsky2002}. Table~\ref{table:flame_speeds_1d} gives the properties
of the 10/10 and 5/5 laminar flames. We expect a multidimensional flame to
accelerate due to hydrodynamics interactions (wrinkling, turbulence
interactions, directed flows feeding fuel into the flame, etc.).
\begin{figure*}[t]
\plottwo{flame.pdf}{speed.pdf}
\caption{\label{fig:flame} Time-evolution of the 10$\times$ boosted 1D
laminar flame. The left plot shows temperature and nuclear energy
generation profiles at 11 different times, while the right plot
shows flame propagation speed and flame thickness as functions of time.}
\end{figure*}
We measure the laminar flame width as:
\begin{equation}
\lambda_L \equiv \frac{\Delta T}{\max\{|\nabla T|\}}
\end{equation}
Experience with modeling resolved flames suggests that we need a
spatial resolution, $\Delta x$, such that $\lambda_L/\Delta x \sim 5$~\citep{SNld}. These
conditions represent the bottom of the He layer. As the density
decreases with altitude, the flame thickness increases and the flame
speed decreases, so we will easily resolve the flame structure throughout
the rest of the atmosphere.
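As an aside, this diagnostic is straightforward to apply to a discrete profile; a minimal numpy sketch (illustrative helper name, with $\Delta T$ taken as the full temperature jump across the profile) is:
\begin{verbatim}
import numpy as np

def flame_width(x, T):
    # lambda_L = Delta T / max|dT/dx| from a 1D temperature profile T(x)
    dTdx = np.gradient(T, x)
    return (T.max() - T.min()) / np.abs(dTdx).max()
\end{verbatim}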
\begin{deluxetable}{lc}
\tablecaption{\label{table:flame_speeds_1d} Laminar flame speeds.}
\tablehead{\colhead{run} & \colhead{$s_L$ (km s$^{-1}$)}}
\startdata
10/10 boost & $1.06 \pm 0.01$ \\
5/5 boost & $0.56 \pm 0.01$ \\
\enddata
%
\end{deluxetable}
\section{Initial Model}\label{Sec:inital_model}
We wish to create an initial atmosphere consisting of a hot
``post-flame'' region and a cooler atmosphere that the flame will
laterally propagate into. We put the hot region at the very left of
the domain (the origin of the axisymmetric coordinates). To create
these initial conditions, we produce two different hydrostatic models,
a ``hot'' model that will represent the perturbation that drives the
flame and a ``cool'' model that will represent the state ahead of the
flame. These will have different scale heights. To create these
models, we break the vertical structure of the atmosphere into four
layers: (1) the underlying neutron star, (2) a ramp-up to the base of the
accreted atmosphere, (3) a fuel layer representing the bulk of the
atmosphere where the flame will propagate, and (4) an outer, low density,
isothermal buffer above the atmosphere that allows us
to capture expansion and explosive dynamics.
The temperature profile in the star and ramp region is given as:
\begin{equation}
T(y) = T_\star + \frac{1}{2} (T_\mathrm{hi} - T_\star) \left [ 1 + \tanh\left( \frac{\tilde{y}}{2 \delta_\mathrm{atm}} \right ) \right ]
\end{equation}
with
\begin{equation}
\tilde{y} = y - H_\star - \frac{3}{2} \delta_\mathrm{atm}
\end{equation}
Here, $\delta_\mathrm{atm}$ is a characteristic width of the
transition ramp, $T_\star$ is the temperature of the neutron star, and
$T_\mathrm{hi}$ is the highest temperature in the HSE model---it will
represent the base of the fuel layer.
The species mass fractions use this same profile, switching from a set
describing the underlying star, ${X_k}_\star$, to the set for the
accreted material, ${X_k}_\mathrm{atm}$, which is used in the
isentropic and outer regions. Note that since the profile is
linear in the mass fractions, if each initial set of mass fractions sums to one, then the blended
mass fractions in the ramp region also sum to one.
We specify the density, $\rho_\mathrm{int} = \rho(y = H_\star)$ as the
starting point for the integration of hydrostatic equilibrium. This
is just below the ramp-up region---this ensures that regardless of
what the peak temperature ($T_\mathrm{hi}$) is, the state beneath the
ramp-up region remains unchanged. Therefore, we will still be in
lateral equilibrium in the star region. We will denote the density
where $T = T_\mathrm{hi}$ as $\rho_\mathrm{fuel}$.
Creating the model involves specifying $T_\star$, $T_\mathrm{hi}$,
$T_\mathrm{lo}$, $\rho_\mathrm{int}$, $H_\star$,
$\delta_\mathrm{atm}$, ${X_k}_\star$, ${X_k}_\mathrm{atm}$, and $g$. We
then integrate outwards from the base of the ramp region ($y =
H_\star$), enforcing the discrete form of hydrostatic equilibrium,
Eq.~\ref{eq:hse}. Integrating upwards, we find $p_i$ and
$\rho_i$ using a Newton-Raphson solver together with the equation of state, with either the temperature
specified, $T_i = T(p_i, \rho_i, \{X_k\}_i)$ (in the isothermal, ramp,
and buffer layers) or constant entropy, $s_i = s(p_i, \rho_i,
\{X_k\}_i)$, in the fuel layer. This follows the procedures described in \citet{ppm-hse}. We
use a constant temperature for all $y < H_\star +
3\delta_\mathrm{atm}$. Above this, we switch to an isentropic atmosphere until the
temperature drops to a floor value, $T_\mathrm{lo}$, at which point we
again keep the temperature constant. The integration of the
atmosphere continues until the density falls to a low density cutoff,
$\rho_\mathrm{cutoff}$. The material above this height is taken to
have constant density and temperature.
The factors in front of $\delta_\mathrm{atm}$ were chosen
to ensure that the peak $T$ is attained at the desired density
of the burning layer.
The parameters we use for
the model generation are listed in Table~\ref{table:params} and the
initial model profiles are shown in Figure~\ref{fig:initial_models}.
\begin{deluxetable}{lcc}
\tablecaption{\label{table:params} Initial model parameters.}
\tablehead{\colhead{parameter} & \colhead{cool} & \colhead{hot}}
\startdata
$T_\star$ & \multicolumn{2}{c}{$10^8$~K} \\
$T_\mathrm{hi}$ & $2\times 10^8$~K & $1.4\times 10^9$~K \\
$T_\mathrm{lo}$ & \multicolumn{2}{c}{$8\times 10^6$~K} \\
$\rho_\mathrm{int}$ & \multicolumn{2}{c}{$3.43\times 10^6~\mathrm{g~cm^{-3} }$} \\
$\rho_\mathrm{fuel}$\tablenotemark{a} & $2.36\times 10^6~\mathrm{g~cm^{-3} }$ & $1.20\times 10^6~\mathrm{g~cm^{-3} }$ \\
$\rho_\mathrm{cutoff}$ & \multicolumn{2}{c}{$10^{-4}~\mathrm{g~cm^{-3} }$} \\
$H_\star$ & \multicolumn{2}{c}{2000~cm} \\
$\delta_\mathrm{atm}$ & \multicolumn{2}{c}{50~cm} \\
$X_\star(\isotm{Ni}{56})\tablenotemark{b}$ & \multicolumn{2}{c}{1.0} \\
$X_\mathrm{atm}(\isotm{He}{4})\tablenotemark{b}$ & \multicolumn{2}{c}{1.0} \\
${\bf{g}}$ & \multicolumn{2}{c}{$-1.5\times 10^{14}~\mathrm{cm~s^{-2}} \hat{\bf y}$} \\
\enddata
\tablenotetext{a}{This is not an input parameter, but instead is
computed during integration. We list it here for reference.}
\tablenotetext{b}{All other species are taken as 0.}
\end{deluxetable}
We blend the hot and cool models laterally to produce the perturbation
needed to initiate a localized flame, with the hot model at the
origin of the axisymmetric geometry. The blending is done as:
\begin{align}
p(x,y) &= f(x) p_\mathrm{hot}(y) + [1-f(x)] p_\mathrm{cool}(y) \\
\rho(x,y) &= f(x) \rho_\mathrm{hot}(y) + [1-f(x)] \rho_\mathrm{cool}(y) \\
X_k(x,y) &= f(x) {X_k}_\mathrm{hot}(y) + [1-f(x)] {X_k}_\mathrm{cool}(y)
\end{align}
with
\begin{equation}
f(x) = \begin{cases}
1 & x < x_\mathrm{pert} \\
1 - \frac{x - x_\mathrm{pert}}{\delta_\mathrm{blend}} & x_\mathrm{pert} \le x \le x_\mathrm{pert} + \delta_\mathrm{blend} \\
0 & x > x_\mathrm{pert} + \delta_\mathrm{blend}
\end{cases}
\end{equation}
Since the equation of hydrostatic equilibrium is linear and our
blending is a linear combination of two models in hydrostatic
equilibrium, the blended model is also in vertical equilibrium initially. We choose
$x_\mathrm{pert} = 1.024\times 10^4$~cm and $\delta_\mathrm{blend} = 2048$~cm. Once
the blended model is constructed, we compute $T(x,y)$ and $(\rho e)(x,y)$
from the equation of state,
\begin{align}
T(x,y) &= T(\rho(x,y), p(x,y), X_k(x,y)) \\
(\rho e)(x,y) &= \rho(x, y) \cdot e(\rho(x,y), p(x,y), X_k(x,y))
\end{align}
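A sketch of this lateral blend for a single column of the initial model is given below; the function name is illustrative, and \texttt{hot} and \texttt{cool} stand for the corresponding values of the two 1D models at the height of interest:
\begin{verbatim}
import numpy as np

def blend(x, hot, cool, x_pert=1.024e4, delta_blend=2048.0):
    # piecewise-linear f(x): 1 for x < x_pert,
    # 0 beyond x_pert + delta_blend, linear in between
    f = np.clip(1.0 - (x - x_pert) / delta_blend, 0.0, 1.0)
    return f * hot + (1.0 - f) * cool
\end{verbatim}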
\begin{figure}[t]
\centering
\epsscale{0.75}
\plotone{initial_model_paper}
\caption{\label{fig:initial_models} Our ``cool'' (solid) and ``hot''
(dashed) initial models, showing both the density and temperature.}
\epsscale{1.0}
\end{figure}
The initial model is created on a uniform grid at the resolution
corresponding to the finest level of refinement. In regions of the
atmosphere that are not refined to the finest level, we interpolate
density, pressure, and composition on the grid and then obtain the
temperature from the EOS.
\section{Simulations and Results}\label{Sec:results}
To assess the sensitivity of the flame propagation to the various approximations
we made, we ran a suite of simulations.
Table~\ref{table:sim_names} summarizes these simulations.
The majority of them used a reaction rate boosting of $10$ and a
conductivity boosting of $10$, which should increase the flame speed
by a factor of $10$. Most simulations used a resolution of
$10~\mbox{cm}$ and a domain width of a little more than one kilometer.
We use an artificially high rotation rate of
$2000~\mbox{Hz}$, which gives a Rossby length of
\begin{equation}
L_R = \frac{\sqrt{g H_0}}{\Omega} \sim 3\times 10^4~\mathrm{cm}
\end{equation}
using a scale height $H_0 = 10^3~\mathrm{cm}$. This is about one
quarter of the domain width. The run at $20~\mbox{cm}$ resolution used
one fewer level of refinement. The slower rotating case (1000~Hz) uses a
slightly wider domain to accommodate the expected larger Rossby
length. We note that the entire simulation framework for these
calculations is freely available in the {\sf Castro}\ github
repository\footnote{\url{https://github.com/AMReX-Astro/Castro}, using
the {\tt flame\_wave} setup.}. In the discussions below, we'll use
the simulation names defined in the table to refer to specific runs
and we'll use the 10/10 run as the reference calculation.
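Returning to the Rossby length quoted above, and taking the 2000~Hz rotation rate to correspond to an angular frequency $\Omega = 2\pi \times 2000~\mathrm{rad~s^{-1}} \approx 1.26\times 10^4~\mathrm{rad~s^{-1}}$ (our reading of the quoted rate), the estimate works out to
\begin{equation}
L_R = \frac{\sqrt{g H_0}}{\Omega} \approx
\frac{\sqrt{(1.5\times 10^{14}~\mathrm{cm~s^{-2}})(10^3~\mathrm{cm})}}{1.26\times 10^4~\mathrm{s^{-1}}}
\approx 3.1\times 10^4~\mathrm{cm},
\end{equation}
consistent with the value used to size the domain.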
\begin{deluxetable}{lcccccc}
\tablecaption{\label{table:sim_names} Simulation parameters.}
\tablehead{\colhead{name} &
\colhead{reaction} &
\colhead{conductivity} &
\colhead{fine grid} &
\colhead{rotation} &
\colhead{domain size} &
\colhead{network} \\
\colhead{} &
\colhead{boost} &
\colhead{boost} &
\colhead{resolution} &
\colhead{rate} &
\colhead{} &
\colhead{}
}
\startdata
10/10 & 10 & 10 & 10~cm & 2000~Hz & $1.2288\times 10^5~\mbox{cm} \times 3.072\times 10^4~\mbox{cm}$ & {\tt aprox13} \\
5/5 & 5 & 5 & 10~cm & 2000~Hz & $1.2288\times 10^5~\mbox{cm} \times 3.072\times 10^4~\mbox{cm}$ &{\tt aprox13} \\
10/10-iso7 & 10 & 10 & 10~cm & 2000~Hz & $1.2288\times 10^5~\mbox{cm} \times 3.072\times 10^4~\mbox{cm}$ &{\tt iso7} \\
10/10-lowres & 10 & 10 & 20~cm & 2000~Hz & $1.2288\times 10^5~\mbox{cm} \times 3.072\times 10^4~\mbox{cm}$ &{\tt aprox13} \\
10/10-1000 Hz & 10 & 10 & 10~cm & 1000~Hz & $1.8432\times 10^5~\mbox{cm} \times 3.072\times 10^4~\mbox{cm}$ &{\tt aprox13} \\
\enddata
\end{deluxetable}
\subsection{General Features}
Figure~\ref{fig:time_series} shows the time evolution of the 10/10
simulation, focusing on the mean molecular weight,
\begin{equation}
\bar{A} = \left ( \sum_k \frac{X_k}{A_k} \right )^{-1}.
\end{equation}
In each frame the buffer of \isot{Ni}{56} that serves as the
underlying neutron star is seen spanning the bottom of the domain.
Above that, the composition begins as pure \isot{He}{4}, but as the
simulation progresses, the flame processes this to heavier nuclei,
increasing $\bar{A}$. By about $10~\mbox{ms}$, we see the flame front
is reasonably well-defined. We see that the bottom of the burning
front is lifted off of the base of the atmosphere, greatly increasing
the surface area of the burning compared with a perfectly vertical
flame front. By $20~\mbox{ms}$ the flame has moved out substantially
and we are beginning to see a gradient in the composition of the ash,
with the heavier nuclei furthest behind the flame. The boosting of
the burning likely artificially increases this effect, as we'll see in the 5/5 case below.
\begin{figure}[t]
\centering
\plotone{time_series}
\caption{\label{fig:time_series} Time series of the mean molecular weight of the
flame for our standard 10/10 simulation.}
\end{figure}
Figure~\ref{fig:10_10_overview} shows the temperature, energy
generation rate, and $w$ component of the velocity (the out-of-plane
velocity induced by the Coriolis force). This latter field
illustrates the hurricane effect set up by the laterally spreading
burning front. In the energy generation rate plot, we see that the
burning is mostly concentrated toward the bottom of the layer, as
expected since the density is greatest there. We see that the peak of
the burning has moved off of the symmetry axis, demonstrating that the
burning front is propagating to the right. In the temperature plot,
we see the effect of our refinement criteria focusing only on the densest part of
the atmosphere, with an artificial change in the
temperature at the refinement boundary due in part to the construction
of the initial model using the fine grid resolution for HSE. As we will see in the lower resolution case, this does not
affect the results.
A final feature worth noting is the ash that seems to move along the
surface at a higher velocity than the flame, via surface gravity
waves. The sponging that we perform is likely damping this to some extent, and
the method by which we initialize the flame may induce a larger
transient than in nature. Nevertheless, this surface ash is intriguing
because it affects the composition of the photosphere ahead of the
burning front, potentially changing our interpretation of
observations. This is something that will be explored more fully in
the future.
\begin{figure}[t]
\centering
\plotone{flame_wave_boost_10_10_overview}
\caption{\label{fig:10_10_overview} Temperature, energy generation rate, and out-of-plane velocity for the 10/10 simulation at 20~ms.}
\end{figure}
Our default resolution puts $\sim 80$ zones vertically in the
``cool'' model atmosphere height. To
understand the effects of resolution, we also performed a run with one
fewer refinement level, giving a $20~\mbox{cm}$ resolution overall (this is our
10/10-lowres run). Figure~\ref{fig:10_10_lowres} shows the fields for this run at
20~ms. The structure is largely the same as the 10/10 run, with
largely the same flame shape and position, and the same structure in
the energy generation rate. Since we do not have a jump in refinement
right below the atmosphere, we don't see the temperature feature from
the initial model mapping there, but we do see some cooling in the
\isot{Ni}{56} region. This is likely a resolution effect, again due
to the strong gradient in temperature at the base of the atmosphere.
The strong agreement of the low resolution run with the 10/10 run gives
us confidence that we are capturing the flame physics properly.
\begin{figure}[t]
\centering
\plotone{flame_wave_boost_10_10_lowres_slice}
\caption{\label{fig:10_10_lowres} Temperature, mean molecular weight,
energy generation rate, and out-of-plane velocity for the 10/10 low
resolution simulation at 20~ms.}
\end{figure}
\subsection{Effects of our Approximations}
The results above all used a boosting of 10/10. To see how the
results are sensitive to this boosting, we ran a simulation with a reduced
boosting of 5/5. This is shown in Figure~\ref{fig:5_5_overview}. The
results look qualitatively the same---a laterally propagating flame
develops that is lifted off of the bottom of the fuel layer. The
flame has not moved out as far as in the 10/10 simulation, simply
because there is less energy release, but we expect that if we were to
run this out twice as long, the flame would have advanced to the
position seen in our 10/10 runs. The ash has also not evolved to as high an $\bar{A}$. The good agreement in the structure
of the flame seen with the lower boosting gives us confidence that the
overall aspects of the flame structure and acceleration we are seeing
are robust to the approximations we make.
\begin{figure}[t]
\centering
\plotone{flame_wave_boost_5_5_slice}
\caption{\label{fig:5_5_overview} Temperature, mean molecular weight, energy generation rate, and out-of-plane velocity for the 5/5 simulation at 20~ms.}
\end{figure}
We also considered the effect of the network size.
Figure~\ref{fig:network} shows a comparison of the 10/10 boosting run
with the standard 13-isotope {\tt aprox13} network and the reduced
7-isotope {\tt iso7} network. We see that the flame in the {\tt aprox13}
case is slightly more advanced and has a higher $\bar{A}$ than the {\tt iso7}
case. We see in the next section that these two networks give largely
the same flame speed. The reduced network size saves a lot of memory,
which will be useful when we transition to 3D simulations.
\begin{figure}[t]
\centering
\plotone{network_compare}
\caption{\label{fig:network} Mean molecular weight at 15~ms comparing a run with
{\tt aprox13} to a run with the {\tt iso7} network.}
\end{figure}
The final approximation to explore is our choice of rotation rate. We ran the 10/10-1000~Hz
simulation for 12.6~ms (shorter than the 20~ms we
used for the other runs). We also used a larger domain, $1.8432\times
10^5~\mathrm{cm} \times 3.072\times 10^4~\mathrm{cm}$ to account for
the larger Rossby radius. Figure~\ref{fig:10_10_slow} shows the flame
structure. Overall it looks much like the faster rotator. In the
next section, we explore the effect of rotation on the flame
acceleration.
\begin{figure}[t]
\centering
\plotone{flame_wave_slow}
\caption{\label{fig:10_10_slow} Temperature, mean molecular weight, energy generation rate, and out-of-plane velocity for the 10/10-1000 Hz simulation at 12.6~ms.}
\end{figure}
\subsection{Flame Propagation}
To measure the propagation rate of the burning front, we first collapse our nuclear
energy generation rate ($\dot{e}_\mathrm{nuc}$) data at each time into a 1D radial
profile by averaging over the vertical coordinate. We then take the peak
$\dot{e}_\mathrm{nuc}$ value across all profiles to provide a fixed reference point.
We define the position of the flame front to be the location ahead of the hottest part of
the flame where $\dot{e}_\mathrm{nuc}$ first drops to $< 0.1 \%$ of the global maximum.
This corresponds roughly to the leading edge of the burning region. Averaging
over the vertical coordinate helps to reduce sensitivity to localized fluid motions, as does
tracking the $0.1 \%$ contour rather than a local maximum. As we see in Figure
\ref{fig:flame_speed}, the flame settles into a state of steady propagation after an initial
transient period spanning $\sim 3$ ms. The position data here are well fitted by a linear
function of time, and the resulting slope gives the velocity of the flame front.
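A minimal sketch of this measurement is given below; the helper names are illustrative, and the 2D field is assumed to be indexed as \texttt{[y, x]}:
\begin{verbatim}
import numpy as np

def front_position(x, enuc, peak_ref, threshold=1.0e-3):
    # vertically average enuc[y, x], then find the first point ahead of
    # the profile's peak where it drops below threshold * peak_ref,
    # the global maximum taken over all snapshots
    profile = enuc.mean(axis=0)
    ipeak = int(np.argmax(profile))
    ahead = np.where(profile[ipeak:] < threshold * peak_ref)[0]
    return x[ipeak + ahead[0]] if ahead.size else x[-1]

# flame speed: slope of a linear fit to the front position vs. time over
# the steady phase, e.g. np.polyfit(t[t > 6.0e-3], pos[t > 6.0e-3], 1)[0]
\end{verbatim}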
\begin{deluxetable}{lc}
\tablecaption{\label{table:flame_speeds_multid} Flame speeds measured in 2D calculations.}
\tablehead{\colhead{run} & \colhead{$s_\mathrm{front}$ (km s$^{-1}$)}}
\startdata
10/10 & $9.18 \pm 0.03$ \\
5/5 & $4.00 \pm 0.01$ \\
10/10-iso7 & $7.56 \pm 0.02$ \\
10/10-lowres & $9.33 \pm 0.04$ \\
10/10-1000 Hz & $18.6 \pm 0.16$ \\
\enddata
%
\end{deluxetable}
Table \ref{table:flame_speeds_multid} gives the flame speed measured in each multidimensional
simulation. The 2D flames propagate at speeds about an order of magnitude faster than their 1D
counterparts (Table \ref{table:flame_speeds_1d}). The increase in flame speed is likely a product
of the larger flame surface area and hydrodynamical effects such as turbulence, wrinkling, and
convective cycles, which bring cooler fuel from ahead of the front into the hottest part of the
burning region.
\begin{figure}[t]
\centering
\plotone{speedplot_all.pdf}
\caption{\label{fig:flame_speed}The position of the burning front for each simulation run as a function of time. The dashed lines show linear least squares fits for $t \gtrsim 6$ ms.}
\end{figure}
All of the various approximations have the expected effects. The flame in the 5/5
boost run propagates at about half the speed of the 10/10 flame. This is consistent
with the 1D laminar flames, and is the behavior predicted by Eq.~\ref{eq:flame_scaling}. The
1000~Hz run goes 2 times faster than the 10/10 2000~Hz run, as expected from the
inverse relation between rotation rate and the ratio of burning front area to scale height \citep{cavecchi:2013}.
This confirms that the role of the Coriolis
force is to limit the rate of flame spreading by the anticipated geometrical/hydrodynamical
effect ($s_\mathrm{front} = s_L L_R / H$). Reducing the resolution had minimal impact on
the flame speed -- the low resolution line is right on top of the fit for the standard run
in Figure \ref{fig:flame_speed}, with their slopes differing by only a few percent. Using a
smaller network also produces similar behavior to the standard run, although there is a small
reduction in speed owing to less energetic burning.
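These expectations can be checked directly against the measured speeds in
Table \ref{table:flame_speeds_multid}; the following trivial sketch just
evaluates the ratios quoted above (the factors of $1/2$ and $2$ are the
predicted scalings, not fits):
\begin{verbatim}
# measured 2D front speeds (km/s) from the table above
s = {"10/10": 9.18, "5/5": 4.00, "10/10-1000Hz": 18.6}

# 5/5 boost: expected to be roughly half as fast as 10/10
print(s["5/5"] / s["10/10"])           # ~0.44 (expected ~0.5)

# halving the rotation rate doubles the Rossby radius L_R, so
# s_front = s_L L_R / H predicts roughly a factor of 2 speed-up
print(s["10/10-1000Hz"] / s["10/10"])  # ~2.0 (expected ~2)
\end{verbatim}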
\subsection{Entrainment, Flow Features}
We explore the baroclinic instability by looking at the magnitude of the
baroclinicity, calculated as
\begin{equation}
\boldsymbol{\psi} = \frac{1}{\rho^2} \mathbf{\nabla} p \times \mathbf{\nabla} \rho,
\end{equation}
which measures the misalignment of the local density and pressure gradients
(we also explored this in \citealt{Malone2014a}). As we are considering a 2D system, the
component of the baroclinicity we consider here (out of the plane)
reflects the misalignment of the fields in the plane of the
simulation. Figure~\ref{fig:baroclinicity} shows that the
baroclinicity peaks along the flame front. This baroclinicity
generates vorticity, which in turn entrains material along the surface
of the flame (see also \citealt{cavecchi:2013}). In 3D, this same vorticity
should perturb the flame front and further increase the flame speed
\citep{Cavecchi2019}.
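For reference, the out-of-plane component of $\boldsymbol{\psi}$ used here
can be evaluated from the two-dimensional pressure and density fields with
simple finite differences; a minimal NumPy sketch (assuming uniformly
gridded arrays indexed as [x, y]; this is illustrative rather than our
production analysis code):
\begin{verbatim}
import numpy as np

def baroclinicity_z(p, rho, dx, dy):
    """Out-of-plane component of (1/rho^2) grad p x grad rho
    on a uniform 2D grid; arrays are indexed as [x, y]."""
    dpdx, dpdy = np.gradient(p, dx, dy)
    drdx, drdy = np.gradient(rho, dx, dy)
    # z-component of the cross product of the two in-plane gradients
    return (dpdx * drdy - dpdy * drdx) / rho**2
\end{verbatim}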
\begin{figure}[t]
\centering
\plotone{baroclinicity}
\caption{\label{fig:baroclinicity} Baroclinicity. This plot shows
$\ln \left(\mathbf{\psi}\right)$ at time $t = 0.02~\mathrm{s}$ for the 10/10 simulation.}
\end{figure}
Vortical motions are also evident in a direct visualization of the velocity field, as seen in
Figure \ref{fig:streamlines}. We observe turbulence in the wake of the flame and in the
unburnt fuel ahead of the front, while a convective cycle sets up near the hottest part of the
burning region. The cycle draws the cooler fluid near the interface into the center of the flame
and drives hot ash out towards the instability at its surface, helping to facilitate the flame
spreading. Convective mixing should still be important in 3D, but we would expect much more
complicated flow patterns, with a greater contribution from small-scale features \citep{xrb3d}.
\begin{figure}[t]
\centering
\plotone{streamlines.pdf}
\caption{\label{fig:streamlines} Streamlines showing velocity field at time $t = 0.01~\mathrm{s}$ for the 10/10 simulation.}
\end{figure}
To further demonstrate the relationships between different properties
of the flame, in Figure~\ref{fig:phase_plots} we present some phase
plots of the energy generation rate. The phase plot of the energy
generation rate as a function of the $x$- and $y$-velocities shows
that energy is preferentially generated in regions with negative
$x$-velocity. This most likely corresponds to the burning along the
underside of the flame front, where fuel has been entrained along the
surface of the flame and so is moving opposite to the direction of flame
propagation. This is further supported by
the phase plot of the energy generation rate as a function of the
$x$-velocity and the density, which shows that the energy generation
rate peaks at $\rho \sim 3 \times 10^5~\mathrm{g/cm}^3$, at the base
of the flame. The energy generation rate also depends strongly on density.
In the first plot, there are ``loops'' of
points of similar $\dot{e}_\mathrm{nuc}$ in the outer edges of the plot.
These are likely to correspond to the vortices that appear within the flame,
where the fluid moves in a circular motion in $u-v$ phase space.
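The phase plots are straightforward to reproduce from the cell data; a
schematic sketch using a 2D histogram weighted by $\dot{e}_\mathrm{nuc}$
(field and variable names are placeholders for whatever the analysis
framework, e.g.\ {\tt yt}, provides):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def phase_plot(u, v, enuc, bins=128):
    """Mean e_nuc binned over the (u, v) velocity plane;
    u, v, enuc are flat arrays of per-cell values."""
    total, xe, ye = np.histogram2d(u, v, bins=bins, weights=enuc)
    counts, _, _ = np.histogram2d(u, v, bins=[xe, ye])
    mean = np.where(counts > 0, total / np.maximum(counts, 1), np.nan)
    plt.pcolormesh(xe, ye, np.log10(mean).T)
    plt.xlabel("u (cm/s)")
    plt.ylabel("v (cm/s)")
    plt.colorbar(label="log10 mean e_nuc")
    plt.show()
\end{verbatim}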
\begin{figure}[t]
\centering
\plottwo{vx_vy_129139}{u_rho_129139}
\caption{\label{fig:phase_plots} Phase plots at time $t = 0.02~\mathrm{s}$.
\emph{Left}: Phase plot showing the energy generation rate as a function of
the $x$- and $y$-velocities. The black cross shows the location of $u = v = 0$. \emph{Right}: phase plot showing the energy
generation rate as a function of the $x$-velocity and the density.}
\end{figure}
Figure~\ref{fig:abar_temp} shows the energy generation rate as a function of the
temperature and $\bar{A}$.
The energy generation rate peaks at low $\bar{A}$, where there is a high fraction
of unburnt material. The burning raises the temperature of this material, such
that the peak temperature coincides with the peak energy generation rate. The
material then cools again as the burning converts the fuel into ashes,
increasing $\bar{A}$ and reducing the energy generation rate as there is
less available fuel. Along the base of the plot we see cool unburnt fluid and
ashes. In the center of the plot there are thin `trails' in phase space, which
could correspond to less common reaction pathways in the reaction network.
\begin{figure}[t]
\centering
\plotone{abar_temp_129139}
\caption{\label{fig:abar_temp} Phase plot of energy generation rate as a function of $\bar{A}$ and temperature at time $t = 0.02~\mathrm{s}$.}
\end{figure}
\section{Discussion}
We have shown the results of our fully hydrodynamical,
multidimensional simulations of flame propagation through the
atmosphere of a neutron star. By using the fully compressible
hydrodynamics equations, we are able to capture the vertical dynamics
of the system. To accurately model the flame propagation, it is
necessary to sufficiently resolve the scale height of the
atmosphere. It is also important that there is a thin transition
between the underlying neutron star and the atmosphere, to ensure that
the peak temperature occurs at the correct base density.
In order to satisfy these requirements whilst allowing the simulations
to be computationally feasible, we used several approximations: a
higher than normal rotation rate, a boosted flame, a simplified
reaction network, and a 2D axisymmetric model for the flow. The first two of
these were used to reduce the model's spatial and temporal
scales. Using the simulation framework developed here, these both can
be relaxed in the future, at the cost of more computer time. The same
goes for 2D axisymmetry vs.\ full 3D---the only difference is computer time, and
our future calculations will explore the 3D evolution and compare to
the 2D simulations presented here. In particular, in 3D we will be
able to explore shear instabilities at the flame front. We can also
capture the baroclinic instability \citep{Cavecchi2019} and the
competition between it and shear. Larger networks are a
straightforward change, and already supported in {\sf Castro}\ using the
{\sf pynucastro}\ framework~\citep{pynucastro} and JINA ReacLib rate
database~\citep{reaclib}.
In addition to extending our models to 3D and relaxing the
approximations used in this study, in the future we plan to perform a
number of additional studies. These include investigating the
effects of ignition latitude and different initial models. We also
plan to model mixed H/He bursts. This will require a different
reaction network, and perhaps different resolution
requirements. A long term goal is to include magnetohydrodynamics in
our models. Although the magnetic fields of neutron stars exhibiting
X-ray bursts are relatively weak, with $B \lesssim 10^8$--$10^9$~G
\citep{mukherjee2015magnetic} (at least compared to, e.g., magnetars,
which have $B \lesssim 10^{15}$~G), it has been found by
\citet{art-2016-cavecchi-etal} that even weak magnetic fields could
have a non-negligible effect on the flame propagation, reducing
confinement due to the Coriolis force and leading to increased flame
speeds. Ultimately, we wish to link our simulations to observed light
curves. To do this, we will need to explore radiation transport in
order to model how the burst energy propagates through the outer
layers of the neutron star atmosphere.
In addition to exploring other aspects of the XRB physics, there are several changes to the algorithm used to
model these XRBs we will pursue. First, as shown in \citet{castro-sdc}, we have
developed a fourth-order (in space and time) method for coupling
hydrodynamics and reactions that should greatly improve the accuracy
of the simulations. We expect that by using this new high-order
algorithm we can drop a level of refinement from the simulations while
still accurately modeling the evolution. We have also ported
{\sf Castro}\ to GPUs, giving an order of magnitude performance boost on
nodes with both CPUs and GPUs. Flames without any boosting are currently running
using the new GPU-enabled {\sf Castro} and will be the focus of the next study.
\acknowledgements {\sf Castro}\ is open-source and freely available at
\url{http://github.com/AMReX-Astro/Castro}. The problem setups used
here are available in the git repo as {\tt flame} and {\tt
flame\_wave}. The work at Stony Brook was supported by DOE/Office
of Nuclear Physics grant DE-FG02-87ER40317 and the SciDAC program DOE grant DE-SC0017955. MZ acknowledges support
from the Simons Foundation. YC was supported by the
European Union Horizon 2020 research and innovation program under
the Marie Sklodowska-Curie Global Fellowship grant agreement No.\
703916. This research used resources of the National Energy Research
Scientific Computing Center, a DOE Office of Science User Facility
supported by the Office of Science of the U.~S.\ Department of Energy
under Contract No.\ DE-AC02-05CH11231. This research used resources
of the Oak Ridge Leadership Computing Facility at the Oak Ridge
National Laboratory, which is supported by the Office of Science of
the U.S. Department of Energy under Contract No. DE-AC05-00OR22725,
awarded through the DOE INCITE program. This research has made use of
NASA's Astrophysics Data System Bibliographic Services.
\facilities{NERSC, OLCF}
\software{AMReX \citep{amrex_joss},
Castro \citep{castro},
GCC (\url{https://gcc.gnu.org/}),
linux (\url{https://www.kernel.org/}),
matplotlib (\citealt{Hunter:2007}, \url{http://matplotlib.org/}),
NumPy \citep{numpy,numpy2},
python (\url{https://www.python.org/}),
valgrind \citep{valgrind},
VODE \citep{vode},
yt \citep{yt}}
\section{Introduction}
Effective field theory (EFT) is a powerful tool for describing the
strong interactions at low energies \cite{Weinberg:1978kz}.
The starting point is the chiral $\mbox{SU}(N)_L\times\mbox{SU}(N)_R$
symmetry of QCD in the limit of $N$ massless quarks and its
spontaneous breakdown to $\mbox{SU}(N)_V$ in the ground state.
Instead of solving QCD in terms of quarks and gluons,
its low-energy physics (of the mesonic sector) is described using
the most general Lagrangian containing the Goldstone bosons as
effective degrees of freedom
\cite{Gasser:1983yg,Gasser:1984gg,Fearing:1994ga,%
Bijnens:1999sh,Ebertshauser:2001nj,Bijnens:2001bb}.
Physical quantities are calculated in terms of
an expansion in $p/\Lambda$, where $p$ stands for momenta or masses
that are smaller than a certain momentum scale $\Lambda$ (see, e.g.,
Refs.~\cite{Scherer:2002tk,Scherer:2005ri} for an introduction).
In the following we will outline some recent developments in devising
a renormalization scheme leading to a simple and consistent power
counting for the renormalized diagrams of a manifestly
Lorentz-invariant approach to baryon chiral perturbation theory
\cite{Gasser:1987rb}.
\section{Renormalization and Power Counting}
The standard effective Lagrangian relevant to the single-nucleon sector
consists of the sum of the purely mesonic and $\pi N$ Lagrangians,
respectively,
\begin{displaymath}
{\cal L}_{\rm eff}={\cal L}_{\pi}+{\cal L}_{\pi N}={\cal L}_2+ {\cal
L}_4 +\cdots +{\cal L}_{\pi N}^{(1)}+{\cal L}_{\pi N}^{(2)}+\cdots
\end{displaymath}
which are organized in a derivative and quark-mass expansion.
The aim is to devise a renormalization procedure generating, after
renormalization, the following power counting:
a loop integration in $n$ dimensions counts as $q^n$,
pion and fermion propagators count as $q^{-2}$ and $q^{-1}$,
respectively, vertices derived from ${\cal L}_{2k}$ and ${\cal
L}_{\pi N}^{(k)}$ count as $q^{2k}$ and $q^k$, respectively.
Here, $q$ generically denotes a small expansion parameter such as,
e.g., the pion mass.
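These rules assign to a diagram with $L$ loops, $I_\pi$ pion propagators,
$I_N$ nucleon propagators, and $N_{2k}$ ($N^{\pi N}_k$) vertices from
${\cal L}_{2k}$ (${\cal L}_{\pi N}^{(k)}$) the chiral order
\begin{displaymath}
D=nL-2I_\pi-I_N+\sum_k 2k\,N_{2k}+\sum_k k\,N^{\pi N}_k.
\end{displaymath}
For orientation (an illustrative count, not part of the discussion below):
the one-loop nucleon self-energy diagram with one pion propagator, one
nucleon propagator, and two vertices from ${\cal L}_{\pi N}^{(1)}$ has
$D=n-2-1+2=3$ for $n=4$.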
Several methods have been suggested to obtain a consistent
power counting in a manifestly Lorentz-invariant approach.
As an illustration, consider the integral
\begin{displaymath}
H(p^2,m^2;n)= \int \frac{d^n k}{(2\pi)^n}
\frac{i}{[(k-p)^2-m^2+i0^+][k^2+i0^+]},
\end{displaymath}
where $\Delta=(p^2-m^2)/m^2={\cal O}(q)$ is a small quantity. In the
infrared (IR) regularization of Becher and Leutwyler
\cite{Becher:1999he} one makes use of the Feynman parametrization
\begin{displaymath}
{1\over ab}=\int_0^1 {dz\over [az+b(1-z)]^2}
\end{displaymath}
with $a=(k-p)^2-m^2+i0^+$ and $b=k^2+i0^+$.
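Combining the two propagators in this way, the integral takes the
intermediate form (spelled out here for clarity)
\begin{displaymath}
H(p^2,m^2;n)=\int_0^1 dz \int \frac{d^n k}{(2\pi)^n}
\frac{i}{\left[k^2-2z\,k\cdot p+z\left(p^2-m^2\right)+i0^+\right]^2}.
\end{displaymath}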
The resulting integral over the Feynman parameter $z$ is then rewritten as
\begin{eqnarray*}
\int_0^1 dz \cdots &=& \int_0^\infty dz \cdots
- \int_1^\infty dz \cdots,
\end{eqnarray*}
where the first, so-called infrared (singular) integral satisfies
the power counting, while the remainder violates power counting but
turns out to be regular and can thus be absorbed in counterterms.
The central idea of the extended on-mass-shell (EOMS)
scheme\cite{Gegelia:1999gf,Fuchs:2003qc} consists of performing
additional subtractions beyond the $\widetilde{\rm MS}$ scheme.
In Ref.\ \cite{Schindler:2003xv} the IR regularization of
Becher and Leutwyler was reformulated in a form analogous to the
EOMS renormalization scheme.
Within this (new) formulation the subtraction terms are found by
expanding the integrands of loop integrals in powers of small
parameters (small masses and Lorentz-invariant combinations of
external momenta and large masses) and subsequently exchanging the
order of integration and summation.
The new formulation of IR regularization can be applied to
diagrams with an arbitrary number of propagators with various masses
(e.g., resonances) and/or diagrams with several fermion lines as
well as to multi-loop diagrams.
\section{Applications}
\subsection{Nucleon Form Factors}
It has been known for some time that ChPT
results at ${\cal O}(q^4)$
only provide a decent description of the electromagnetic Sachs form
factors $G_E$ and $G_M$ up to $Q^2=0.1\,\mbox{GeV}^2$ and do not
generate sufficient curvature for larger values of $Q^2$
\cite{Kubis:2000zd,Fuchs:2003ir}. To improve these results
higher-order contributions have to be included. This can be achieved
by performing a full calculation at ${\cal O}(q^5)$ which would also
include the analysis of two-loop diagrams.
Another possibility is to include additional degrees of freedom, through which
some of the higher-order contributions are re-summed.
Both the reformulated IR regularization and the EOMS scheme allow for
a consistent inclusion of vector mesons, which were established long ago
to play an important role in the description of the nucleon form factors.
Figure \ref{G_neu} shows the results for
the electric and magnetic Sachs form factors in the EOMS scheme
(solid lines) and the infrared renormalization (dashed lines)
\cite{Schindler:2005ke}.
A {\em consistent} inclusion of vector
mesons clearly improves the quality of the description.
Similarly, the inclusion of the axial-vector meson $a_1(1260)$ results
in an improved description of the experimental data for the axial
form factor \cite{Schindler:2006it}.
\begin{figure}[ph]
\centerline{\psfig{file=G_eine_Zeile.eps,width=\textwidth}}
\vspace*{8pt} \caption{The Sachs form factors of the nucleon in
manifestly Lorentz-invariant chiral perturbation theory at ${\cal
O}(q^4)$ including vector mesons as explicit degrees of freedom.
Full lines: results in the extended on-mass-shell scheme; dashed
lines: results in infrared regularization.\protect\label{G_neu}}
\end{figure}
\subsection{Chiral Expansion of the Nucleon Mass to Order ${\cal O}(q^6)$}
Using the reformulated infrared regularization
\cite{Schindler:2003xv} we have calculated the nucleon mass up to
and including order ${\cal O}(q^6)$ in the chiral expansion
\cite{Schindler:2006ha,Schindler:2007dr}:
\begin{eqnarray}\label{H1:emff:MassExp}
m_N &=& m +k_1 M^2 +k_2 \,M^3 +k_3 M^4 \ln\frac{M}{\mu}
+ k_4 M^4 + k_5 M^5\ln\frac{M}{\mu} + k_6 M^5 \nonumber\\&& + k_7
M^6 \ln^2\frac{M}{\mu}+ k_8 M^6 \ln\frac{M}{\mu} + k_9 M^6.
\end{eqnarray}
In Eq.~(\ref{H1:emff:MassExp}), $m$ denotes the nucleon mass in the
chiral limit, $M^2$ is the leading term in the chiral expansion of
the square of the pion mass, $\mu$ is the renormalization scale; all
the coefficients $k_i$ have been determined in terms of infrared
renormalized parameters.
Our results for the renormalization-scheme-independent terms agree
with the heavy-baryon ChPT results of Ref.~\cite{McGovern:1998tm}.
The numerical contributions from the higher-order terms cannot yet be
calculated since, starting with $k_4$, most expressions in
Eq.~(\ref{H1:emff:MassExp}) contain unknown low-energy coupling
constants (LECs) from the Lagrangians of order ${\cal O}(q^4)$ and
higher.
The coefficient $k_5$ is free of higher-order LECs.
Figure \ref{fig:nucleonmass} shows the pion mass dependence of the term
$k_5 M^5 \ln(M/m_N)$ (solid line) in comparison with the term $k_2
M^3$ (dashed line) for $M<400$ MeV.
For $M\approx 360\,\mbox{MeV}$ the $k_5$ term is as large as the
$k_2$ term.
\begin{figure}[ph]
\centerline{\psfig{file=k2k5LowMCompFarbe.eps,width=0.5\textwidth}}
\vspace*{8pt} \caption{Pion mass dependence of the term $k_5 M^5
\ln(M/m_N)$ (solid line) for $M<400$ MeV. For comparison also the
term $k_2 M^3$ (dashed line) is shown.
\protect\label{fig:nucleonmass}}
\end{figure}
\section*{Acknowledgments}
This work was
made possible by the financial support from the Deutsche
Forschungsgemeinschaft (SFB 443 and SCHE 459/2-1) and the EU
Integrated Infrastructure Initiative Hadron Physics Project
(contract number RII3-CT-2004-506078).
\section{Introduction}
In \cite{gho1} we constructed a model on the basis of two types of Minkowski spaces: the space with an indefinite inner product (the \emph{Lorentzian-Minkowski space}, see e.g.\ \cite{gohberg}, \cite{minkowski}) and the space with a semi-inner product (a finite-dimensional separable Banach space, see \cite{giles}, \cite{lumer}, \cite{martini-swanepoel 1} and \cite{martini-swanepoel 2}). Among other things, we introduced the concept of a \emph{generalized Minkowski space} and, in particular, the so-called \emph{generalized space-time model}, which is a generalization of the Minkowski-Lorentz space-time. From a differential-geometric point of view we investigated the latter in \cite{gho2}. This investigation led to a generalization of the spaces of constant curvature: the hyperbolic (anti-de Sitter), de Sitter, and Euclidean spaces, respectively. In its own right, a generalized space-time carries a theory of special relativity, which was not developed in the above-mentioned theoretical papers. In \cite{gho4} the concept of the generalized space-time model was extended to a model called a \emph{generalized Minkowski space with changing shape} (briefly, a \emph{time-space}). Two types of models were given: a non-deterministic (random) variant and a deterministic one. We proved that over a finite range of time the random model can be approximated by an appropriate deterministic model; thus, from a practical point of view, the deterministic models are the more important ones. We mention here that the measure of a random model is based on the following observation: on the space of norms one can define a geometric measure whose push-forward onto the line of the absolute time has a normal distribution (see \cite{gho3}).
A time-space can also be given with the help of the so-called \emph{shape function}. In Section 2 we give the fundamental formulas of special relativity in a time-space (depending on the given shape function). In Section 3 we embed some known metrics of general relativity into a suitable time-space; this shows that time-space is a good place to visualize some of them. Of course, since time-space has a direct-product character, many metrics satisfying Einstein's equation have no natural embedding into it. In the last subsection of Section 3 we define a generalization of the Lorentzian manifold which we call a \emph{time-space manifold}. The tangent spaces of a time-space manifold are time-spaces with linear shape-functions. We introduce the concept of a \emph{homogeneous time-space manifold} as a time-space manifold whose tangent spaces can be identified with the same time-space. In a homogeneous time-space manifold we give all of the concepts of global relativity theory: affine connection, parallel transport, curvature tensor and the Einstein equation.
The first paragraph contains those definitions, notations and statements which are used in this paper.
\subsection{Deterministic and random time-space models}
We assume that there is an absolute coordinate system of dimension $n$ in which we model the universe by a time-space model. The starting point is a generalized space-time model (see \cite{gho1}) in which the time axis plays the role of the absolute time. The points of this axis are unattainable and immeasurable for us, and the corresponding line also lies in the exterior of the modeled universe. (We note that in Minkowskian space-time this assumption holds only for the axes determining the space coordinates.) This means that in our model, even though the axis of time belongs to the double cone of time-like points, its points do not belong to the modeled universe. At a fixed moment of time (with respect to this absolute time) the collection of the points of the space can be regarded as an open ball of the embedding normed space which is centered at the origin but does not contain the origin. The omitted point is the origin of a coordinate system giving the space-like coordinates of the world-points with respect to our time-space system. Since the points of the axis of the absolute time are not in our universe, there is no reference system in our modeled world which determines the absolute time.
In our probabilistic model (based on a generalized space-time model) the absolute coordinates of points are calculated by a fixed basis of the embedding vector space. The vector $s(\tau)$ means the collection of the space-components with respect to the absolute time $\tau$, the quantity $\tau$ has to be measured on a line $T$ which is orthogonal to the linear subspace $S$ of the vectors $s(\tau)$. (The orthogonality was considered as the Pythagorean orthogonality of the embedding normed space.) Consider a fixed Euclidean vector space with unit ball $B_E$ on $S$ and use its usual functions e.g. volume, diameter, width, thinness and Hausdorff distance, respectively. With respect to the moment $\tau$ of the absolute time we have a unit ball $K(\tau)$ in the corresponding normed space $\{S,\|\cdot\|^{\tau}\}$. The modeled universe at $\tau$ is the ball $\tau K(\tau)\subset \{S,\|\cdot\|^{\tau}\}$.
The shape of the model at the moment $\tau$ depends on the shape of the centrally symmetric convex body $K(\tau)$. The center of the model is on the axis of the absolute time, it cannot be determined. For calculations on time-space we need further smoothness properties on the function $K(\tau)$. These are
\begin{itemize}
\item $K(\tau)$ is a centrally symmetric, convex, compact, $C^2$ body of volume $\mathrm{vol}(B_E)$.
\item For each pair of points $s',s''$ the function
$$
K:\mathbb{R}^+\cup \{0\}\rightarrow \mathcal{K}_0 \mbox{ , }\tau\mapsto K(\tau)
$$
has the property that $\tau\mapsto [s',s'']^{\tau}$ is a $C^1$-function.
\end{itemize}
\begin{defi}
We say that a generalized space-time model endowed with a function $K(\tau)$ holding the above properties is a \emph{deterministic time-space model}.
\end{defi}
The main subset of a deterministic time-space model contains the points of negative norm-square. This is the set of time-like points and the upper connected sheet of the time-like points is the modeled universe. The points of the universe have positive time-components. We denote this model by
$
\left(M,K(\tau)\right).
$
To define a random time-space model we should choose the function $K(\tau)$ ``randomly''. To this purpose we use Kolmogorov's extension theorem (or theorem on consistency, see \cite{kolmogorov}). This says that a suitably ``consistent'' collection of finite-dimensional distributions defines a probability measure on the product space. The sample space here is $\mathcal{K}_0$ with the Hausdorff distance. It is a locally compact, separable (second-countable) metric space. By Blaschke's selection theorem $\mathcal{K}$ is a boundedly compact space, so it is also complete. It is easy to check that $\mathcal{K}_0$ is also a complete metric space if we assume that the non-proper bodies (centrally symmetric convex compact sets with empty interior) also belong to it. (In the remaining part we regard such a body as the unit ball of a normed space of smaller dimension.) Finally, let $P$ be a probability measure. In every moment of absolute time we consider the same probability space $\left(\mathcal{K}_0, P\right)$, and for each finite collection of moments we consider the corresponding product space $\left((\mathcal{K}_0)^r, P^r\right)$. The consistency assumption of Kolmogorov's theorem now automatically holds. By the extension theorem we have a probability measure $\hat{P}$ on the measure space of the functions from $T$ to $\mathcal{K}_0$ with the $\sigma$-algebra generated by the cylinder sets of the space. The distribution of the projection of $\hat{P}$ to the probability space of a fixed moment is the distribution of $P$.
\begin{defi}
Let $(K_\tau \mbox{ , }\tau\geq 0)$ be a random function defined as an element of the Kolmogorov's extension $\left(\Pi \mathcal{K}_0, \hat{P}\right)$ of the probability space $\left(\mathcal{K}_0, P\right)$. We say that the generalized space-time model with the random function
$$
\hat{K}_\tau:=\sqrt[n]{\frac{\mathrm{ vol}(B_E)}{\mathrm{ vol}(K_\tau)}}K_\tau
$$
is a \emph{random time-space model}. Here $\alpha_0(K_\tau)$ is a random variable with truncated normal distribution and thus $(\alpha_0(K_\tau) \mbox{ , } \tau\geq 0)$ is a stationary Gaussian process. We call it the \emph{shape process} of the random time-space model.
\end{defi}
It is clear that a deterministic time-space model is a special trajectory of the random time-space model. The following theorem is essential.
\begin{theorem}[\cite{gho4}]
For a trajectory $L(\tau)$ of the random time-space model, for a finite set $0\leq \tau_1\leq \cdots \leq \tau_s$ of moments and for any $\varepsilon >0$ there is a deterministic time-space model, defined by the function $K(\tau)$, for which
$$
\sup\limits_{i}\{\rho_H\left(L(\tau_i), K(\tau_i)\right)\} \leq \varepsilon.
$$
\end{theorem}
An important consequence of Theorem 1 is the following: \emph{ Without loss of generality we can assume that the time-space model is deterministic.}
\begin{defi}
For two vectors $s_1+\tau_1$ and $s_2+\tau_2$
of the deterministic time-space model we define their product with the equality
$$
[s_1+\tau_1,s_2+\tau_2]^{+,T}:=[s_1,s_2]^{\tau_2}+\left[\tau_1,\tau_2\right]=
$$
$$
=[s_1,s_2]^{\tau_2}-\tau_1\tau_2.
$$
\end{defi}
Here $[s_1,s_2]^{\tau_2}$ means the s.i.p defined by the norm $\|\cdot\|^{\tau_2}$. This product is not a Minkowski product, as there is no homogeneity property in the second variable.
On the other hand, the additivity and homogeneity properties in the first variable and the non-degeneracy properties of the product still hold. Finally, the continuity and differentiability properties of this product also remain the same as those of a Minkowski product. The calculations in a generalized space-time model basically depend on a rule for the differentiability of the second variable of the Minkowski product. As a basic tool of investigation we proved in \cite{gho4} that
\begin{theorem}[\cite{gho4}]
If $f_1, f_2: S\longrightarrow V=S+T$ are two $C^2$ maps and $c:\mathbb{R}\longrightarrow S$ is an arbitrary $C^2$ curve then
$$
([(f_1\circ c)(t),(f_2\circ c)(t)]^{+,T})'=
$$
$$
=[D(f_1\circ c)(t),f_2(c(t))]^{+,T} +\left({[f_1(c(t)),\cdot]^{+,T}}\right)'_{D(f_2\circ c)(t)}(f_2(c(t)))+
$$
$$
+\frac{\partial{\left[(f_1)_S(c(t)),(f_2)_S(c(t)) \right]^{\tau}}}{\partial \tau}((f_2)_T(c(t)))\cdot((f_2)_T\circ c)'(t)
$$
\end{theorem}
The theory of the generalized space-time model can be used in a generalization of special relativity theory if we modify some of the previous formulas using also the constant $c$ ($c$ can practically be considered as the speed of light in vacuum). The formula of the product in such a deterministic (random) time-space was
$$
[x',x'']^{+,T}:=[s',s'']^{\tau ''}+c^2\left[\tau ',\tau ''\right].
$$
In parallel we used the assumption that the dimension $n$ is equal to $4$.
A particle is a random function $x: I_x \rightarrow S$ satisfying two conditions:
\begin{itemize}
\item the set $I_x\subset T^+$ is an interval
\item
$
[x(\tau),x(\tau)]^{\tau}<0 \mbox{ if } \tau\in I_x.
$
\end{itemize}
The particle lives on the interval $I_x$: it is born at the moment $\inf I_x$ and dies at the moment $\sup I_x$. Since every time-section of a time-space model is a normed space of dimension $n$, the Borel sets of the time-sections are independent of the time. This means that we can consider the physical properties of a particle as a trajectory of a stochastic process. A particle is ``realistic'' if it obeys the ``known laws of physics'' and ``idealistic'' otherwise. This is only a terminology for our own use; the mathematical content of the expression ``known laws of physics'' is indeterminable. First we introduced an inner metric $\delta_{K(\tau)}$ on the space at the moment $\tau$.
\begin{defi}
Let $X(\tau):T\rightarrow \tau K(\tau)$ be a continuously differentiable (by the time) trajectory of the random function $\left(x(\tau)\mbox{ , }\tau\in I_x\right)$. We say that the particle $x(\tau)$ is \emph{realistic in its position} if for every $\tau\in I_x$ the random variable
$\delta_{K (\tau)}\left(X(\tau),x(\tau)\right)$ has a normal distribution on $\tau K(\tau)$. In other words, the stochastic process
$\left(\delta_{K (\tau)}\left(X(\tau),x(\tau)\right)\mbox{ , }\tau\in I_x\right)$ is a stationary Gaussian process with respect to a given continuously differentiable function $X(\tau)$. We call the function $X(\tau)$ the \emph{world-line} of the particle $x(\tau)$.
\end{defi}
We note that the concept of ``realistic in its position'' is independent of the choice of $\delta_{K (\tau)}$. As a refinement of this concept we defined another one, which can be considered as a generalization of the principle of the maximality of the speed of light.
\begin{defi}
We say that a particle is \emph{realistic in its speed} if it is realistic in its position and the derivatives of its world-line $X(\tau)$ are time-like vectors.
\end{defi}
For two particles $x',x''$ which are realistic in their position we can define a momentary distance by the equality:
$$
\delta(x'(\tau),x''(\tau))=\|X'(\tau)-X''(\tau)\|^{\tau}=\sqrt{[X'(\tau)-X''(\tau),X'(\tau)-X''(\tau)]^{+,T}}.
$$
We could say that two particles $x'$ and $x''$ agree if the expected value of their distance is equal to zero. Let $I=I_{x'}\cap I_{x''}$ be the common part of their domains. The required equality is:
$$
E(\delta_{K(\tau)}(x'(\tau),x''(\tau)))=\int\limits_{I}\delta_{K(\tau)}(x'(\tau),x''(\tau))\mathrm{ d }\tau=
$$
$$
=\int\limits_{I}\|X'(\tau)-X''(\tau)\|^{\tau}\mathrm{ d }\tau=0.
$$
In a deterministic time-space we have a function $K(\tau)$, and we have several possibilities to define orthogonality at a moment $\tau$. We fix a concept of orthogonality and consider it in every normed space. In the case when the norm is induced by the Euclidean inner product, this method should give the same result as the usual concept of orthogonality. The most natural choice is the concept of Birkhoff orthogonality (see \cite{gho1}). Using it, in every normed space we can consider an Auerbach basis (see \cite{gho1}), which plays the role of a basic coordinate frame. We can determine the coordinates of the points with respect to this basis.
We say that a frame is \emph{at rest with respect to the absolute time} if its origin (as a particle) is at rest with respect to the absolute time $\tau$ and the unit vectors of its axes are at rest with respect to a fixed Euclidean orthogonal basis of $S$. In $S$ we fix an Euclidean orthonormal basis and give the coordinates of a point (vector) of $S$ with respect to this basis. We get curves in $S$ parameterized by the time $\tau$. We define the concept of a frame as follows.
\begin{defi}
The system $\{f_1(\tau),f_2(\tau),f_3(\tau), o(\tau)\}\in (S,\|\cdot\|^{+\tau})\times \tau K(\tau)$ is a \emph{frame}, if
\begin{itemize}
\item $o(\tau)$ is a particle realistic in its speed,
with such a world-line
$$
O(\tau):T\rightarrow \tau K(\tau)
$$
which does not intersect the absolute time axis $T$,
\item the functions
$$
f_i(\tau):T\rightarrow \cup\left\{(S,\|\cdot\|^\tau) \mbox{ , } \tau\in T\right\}
$$
are continuously differentiable, for all fixed $\tau$,
\item
the system $\{f_1(\tau), f_2(\tau), f_3(\tau)\}$ is an Auerbach basis with origin $O(\tau)$ in the space $(S,\|\cdot\|^\tau)$.
\end{itemize}
\end{defi}
Note that for a good model we have to guarantee that Einstein's convention on the equivalence of the inertial frames can be retained. However, at this point we have no possibility to give the concept of a ``frame at rest'' or the concept of a ``frame which moves with constant velocity with respect to another one''. The reason is that when we changed the norm of the space by the function $K(\tau)$ we concentrated only on the change of the shape of the unit ball and did not use any correspondence between the points of the two unit balls. Obviously, in a concrete computation we should proceed vice versa: first we should give a correspondence between the points of the old unit ball and the new one, and this implies the change of the norm. To this purpose we may define a homotopic mapping $\mathbf{ K }$ which describes the deformation of the norm.
\begin{defi}
Consider a homotopic mapping
$
\mathbf{ K }\left(x,\tau\right): (S,\|\cdot\|_E)\times T \rightarrow (S,\|\cdot\|_E)
$
holding the assumptions:
\begin{itemize}
\item
$\mathbf{ K }\left(x,\tau\right)$ is homogeneous in its first variable and continuously differentiable in its second one,
\item
$ \mathbf{ K }\left(\{e_1,e_2,e_3\},\tau\right)$ is an Auerbach basis of $\left(S,\|\cdot\|^{\tau}\right)$ for every $\tau$,
\item
$ \mathbf{ K }\left(B_E,\tau\right)=K(\tau) $.
\end{itemize}
Then we say that the function $\mathbf{ K }\left(x,\tau\right)$ is the \emph{shape-function} of the time-space.
\end{defi}
The mapping $\mathbf{ K }\left(x,\tau\right)$ determines the changes at all levels. For example, we can consider a frame to be ``at rest'' if its change arises from this globally determined change, and to ``move with constant velocity'' if its origin has this property and the directions of its axes are ``at rest''. Precisely, we say that
\begin{defi}
The frame $\{f_1(\tau),f_2(\tau),f_3(\tau),o(\tau)\}$ \emph{ moves with constant velocity with respect to the time-space} if for every pair $\tau$, $\tau'$ in $T^+$ we have
$$
f_i(\tau )=\mathbf{ K }\left(f_i(\tau'),\tau \right) \mbox{ for all } i \mbox{ with } 1\leq i \leq 3
$$
and there are two vectors $O=o_1e_1+o_2e_2+o_3e_3\in S$ and $v=v_1e_1+v_2e_2+ v_3e_3 \in S$ such that for all values of $\tau$ we have
$$
O(\tau)=\mathbf{K}(O,\tau)+\tau \mathbf{K}(v,\tau).
$$
A frame is \emph{at rest with respect to the time-space} if the vector $v$ is the zero vector of $S$.
\end{defi}
Consider the derivative of the above equality by $\tau$. We get that
$$
\dot{O}(\tau)=\frac{\partial \mathbf{K}(O,\tau)}{\partial \tau}+ \mathbf{K}(v,\tau)+ \tau \frac{\partial \mathbf{K}(v,\tau)}{\partial \tau},
$$
showing that for a homotopic mapping which is constant in the time, $O(\tau)$ is a line with direction vector $v$ through the origin of the time-space. Similarly, in the case when $v$ is the zero vector it is a vertical (parallel to $T$) line segment through $O$.
We can re-define the concept of time-axes, too.
\begin{defi}
The \emph{time-axis} of the time-space model is the world-line $O(\tau)$ of such a particle which moves with constant velocity with respect to the time-space and starts from the origin. More precisely, for the world-line $\left(O(\tau),\tau\right)$ we have $\mathbf{K}(O,\tau)=0$ and hence with a given vector $v\in S$,
$$
O(\tau)=\tau \mathbf{K}(v,\tau).
$$
\end{defi}
\begin{remark}
Note that if the shape-function is linear in its first variable then all sections defined by $\tau=\mbox{const.}$ are Euclidean spaces. This is the case when the shape-function is of the form:
$$
\mathbf{K}(v,\tau)=f(\tau)A(v),
$$
where $f$ is a continuously differentiable function and $A:S\longrightarrow S$ is a linear mapping.
\end{remark}
\section{On the formulas of special relativity theory}
In this section we assume that the shape-function is twice continuously differentiable, i.e., it is a $C^2$ function.
We need two further axioms to interpret the usual axioms of special relativity theory in time-space. First we assume that:
\begin{axiom}
The laws of physics are invariant under transformations between frames. The laws of physics will be the same whether they are tested in a frame ``at rest'' or in a frame moving with constant velocity relative to the ``rest'' frame.
\end{axiom}
\begin{axiom}
The speed of light in vacuum is measured to be the same by all observers in all frames.
\end{axiom}
These axioms can be transformed into the language of the time-space by the method of Minkowski \cite{minkowski}. To this end we use the imaginary sphere $H_c$ of parameter $c$ introduced in the previous subsection and the group $G_c$, defined as the set of those isometries of the space which leave this sphere of parameter $c$ invariant. Such an isometry can be interpreted as a coordinate transformation of the time-space which sends the axis of the absolute time into another time-axis $t'$, and also maps the intersection point of the absolute time-axis with the imaginary sphere $H_c$ into the intersection point of the new time-axis and $H_c$. An isometry of the time-space is also a homeomorphism, thus it maps the subspace $S$ into a topological hyperplane $S'$ of the embedding normed space. $S'$ is orthogonal to the new time-axis in the sense that its tangent hyperplane at the origin is orthogonal to $t'$ with respect to the product of the space. Of course, the new space-axes are continuously differentiable curves in $S'$ whose tangents at the origin are orthogonal to each other. Since the absolute time-axis is orthogonal to the imaginary sphere $H_c$, the new time-axis $t'$ must hold this property, too. Thus the investigations in the previous section are essential from this point of view. Assuming that the definition of the time-space implies this property, we can get some formulas similar to the well-known formulas of special relativity.
We note that the function $\mathbf{K}(v,\tau)$ preserves the orthogonality of vectors of $S$, and by the equality
$$
[\mathbf{K}(v,\tau),\mathbf{K}(v,\tau)]^\tau=\|v\|_E^2
$$
we can see that the formulas on time-dilatation and length-contraction are valid, too.
Using the well-known notations
$$
\beta = \frac{\|v\|_E}{c}
$$
$$
\gamma = \frac{1}{\sqrt{1 - \beta^2}}
$$
we get the connection between the times $\tau_0$ and $\tau$ of an event measured by two observers, one at rest and the other moving with constant velocity $\|v\|_E$ with respect to the time-space. It is
$$
\tau=\gamma \tau_0.
$$
Similarly, we can consider a moving rod whose points move with constant velocity with respect to the time-space such that the rod is always parallel to the velocity vector $\mathbf{K}(v,\tau)$. Then we have
$$
\|v\|_E=\frac{L_0}{T}
$$
where $T$ is the time calculated from the length $L_0$ and the velocity vector $v$ by an observer moving with the rod. Another observer can calculate the length $L$ from the measured time $T_0$ and the velocity $v$ by the formula
$$
\|v\|_E=\frac{L}{T_0}.
$$
Using the above formula of dilatation we get the known Fitzgerald contraction of the rod:
$$
L=L_0\sqrt{1-\beta^2}=\frac{L_0}{\gamma}.
$$
\subsection{Lorentz transformation}
The Lorentz transformation in time-space is also based on the usual experiment in which we send a ray of light to a mirror at distance $d$ from us in the direction of the unit vector $e$.
\subsubsection{Deduction of Lorentz transformation in time-space}
If we are at rest, we can determine in time space the respective points $A$, $C$ and $B$ of departure, turn and arrival of the ray of light. $A$ and $B$ are on the absolute time-axis at heights $\tau_A$, and $\tau_B$, respectively. The position of $C$ is
$$
(\tau_C-\tau_A)\mathbf{K}(ce,\tau_C-\tau_A)+\tau_C e_4=\frac{\tau_B-\tau_A}{2}\mathbf{K}\left(ce,\frac{\tau_B-\tau_A}{2}\right)+\frac{\tau_B+\tau_A}{2}e_4,
$$
since we know that the light takes the same time on the way there and back. We observe that the norm of the space-like component $s_C$ is
$$
\|s_C\|^{\tau_C}=c\frac{\tau_B-\tau_A}{2}
$$
as in the usual case of space-time.
The moving observer synchronized its clock with the observer at rest at the origin, and moves in the direction $v$ with velocity $\|v\|_E$. We assume that the moving observer also sees the experiment, thus its time-axis corresponding to the vector $v$ meets the world-line of the light in two points $A'$ and $B'$ lying on the respective curves $AC$ and $CB$. This implies that the respective space-like components of the world-line of the light and the world-line of the axis are parallel to each other at every moment. In formula we have:
$$
\|v\|_E \mathbf{K}(e,\tau)=\mathbf{K}(v,\tau).
$$
From this we get the equality
$$
\tau_{A'}\mathbf{K}(v,\tau_{A'})+\tau_{A'}e_4=(\tau_{A'}-\tau_A)\mathbf{K}(ce,\tau_{A'}-\tau_A)+\tau_{A'}e_4.
$$
This implies that
$$
{\tau_{A'}}^2{\|v\|_E}^2-c^2{\tau_{A'}}^2=(\tau_{A'}-\tau_A)^2c^2-c^2{\tau_{A'}}^2
$$
and thus
$$
\tau_{A'}=\frac{c}{c-\|v\|_E}\tau_A.
$$
The proper time $(\tau_{A'})_0$ is
$$
(\tau_{A'})_0=\sqrt{1-\beta^2}\frac{c}{c-\|v\|_E}\tau_A=\tau_A\sqrt{\frac{1+\beta}{1-\beta}}.
$$
Similarly we also get that
$$
(\tau_{B'})_0=\tau_B\sqrt{\frac{1-\beta}{1+\beta}},
$$
and we can determine the new time coordinate of the point $C$ with respect to the new coordinate system:
$$
(\tau_{C})_0=\frac{(\tau_{A'})_0+(\tau_{B'})_0}{2}= \frac{1}{2}\left(\tau_A\sqrt{\frac{1+\beta}{1-\beta}}+\tau_B\sqrt{\frac{1-\beta}{1+\beta}}\right).
$$
Since the norm of the space-like component is
$$
\|s_C\|_E=c\frac{\tau_B-\tau_A}{2},
$$
we get that
$$
\tau_A=\tau_C-\frac{\|s_C\|_E}{c} \mbox{ and } \tau_B=\tau_C+\frac{\|s_C\|_E}{c}
$$
and thus
$$
(\tau_{C})_0=\frac{1}{2}\left(\left(\tau_C-\frac{\|s_C\|_E}{c}\right)\sqrt{\frac{1+\beta}{1-\beta}}+ \left(\tau_C+\frac{\|s_C\|_E}{c}\right)\sqrt{\frac{1-\beta}{1+\beta}}\right)=
$$
$$
=\frac{\tau_C-\frac{\beta\|s_C\|_E}{c}}{\sqrt{1-\beta^2}}= \frac{\tau_C-\frac{\|v\|_E\|s_C\|_E}{c^2}}{\sqrt{1-\frac{\|v\|_E^2}{c^2}}}= \frac{\tau_C-\frac{[\mathbf{K}(s_C,\tau_C),\mathbf{K}(v,\tau_C)]^{\tau_C}}{c^2}}{\sqrt{1-\frac{\|v\|_E^2}{c^2}}}.
$$
On the other hand we also have that the space-like component $((s_C)_0)_S$ of the transformed space-like vector $(s_C)_0$ arises also from a vector parallel to $e$ thus it is of the form
$$
\mathbf{K}(((s_C)_0)_S,\tau)=\|((s_C)_0)_S\|_E \mathbf{K}(e,\tau).
$$
For the norm of $(s_C)_0$ we know that
$$
\|(s_C)_0\|^{+,T}=c\frac{(\tau_{B'})_0-(\tau_{A'})_0}{2},
$$
hence
$$
\|(s_C)_0\|^{+,T}=\frac{\|s_C\|_E-\|v\|_E\tau_C}{\sqrt{1-\frac{\|v\|_E^2}{c^2}}}.
$$
If we consider the vector
$$
\widehat{(s_C)_0}=\gamma\left(\mathbf{K}(s_C,\tau_C)-\mathbf{K}(v,\tau_C)\tau_C\right)\in S,
$$
we get a norm-preserving, bijective mapping $\widehat{L}$ from the world-line of the light into $S$ by the definition
$$
\widehat{L}:\mathbf{K}((s_C)_0,(\tau_C)_0)\mapsto \gamma\left(\mathbf{K}(s_C,\tau_C)-\mathbf{K}(v,\tau_C)\tau_C\right).
$$
The connection between the space-like coordinates of the point with respect to the two frames now has a more familiar form. Henceforth the Lorentz transformation means for us the correspondence:
\begin{eqnarray*}
s & \mapsto & \widehat{\mathbf{K}(s',\tau')}=\gamma\left(\mathbf{K}(s,\tau)-\mathbf{K}(v,\tau)\tau\right) \\
\tau & \mapsto & \tau'=\gamma\left(\tau-\frac{[\mathbf{K}(s,\tau),\mathbf{K}(v,\tau)]^{\tau}}{c^2}\right),
\end{eqnarray*}
and the inverse Lorentz transformation the another one
\begin{eqnarray*}
\widehat{\mathbf{K}(s',\tau')} & \mapsto & \mathbf{K}(s,\tau)=\gamma\left(\mathbf{K}(s',\tau')+\mathbf{K}(v,\tau')\tau'\right) \\
\tau' & \mapsto & \tau=\gamma\left(\tau'+\frac{[\mathbf{K}(s',\tau'),\mathbf{K}(v,\tau')]^{\tau'}}{c^2}\right).
\end{eqnarray*}
\subsubsection{Consequences of Lorentz transformation}
First note that we can determine the components of $(s_C)_0$ with respect to the absolute coordinate system, too. Since $(s_C)_0$ and $\tau\mathbf{K}(v,\tau)+\tau e_4$ are orthogonal to each other we get that
$$
[\mathbf{K}(((s_C)_0)_S,\tau_C),\mathbf{K}(v,\tau_C)]^{\tau_C}=c^2((s_C)_0)_T ,
$$
implying that
$$
((s_C)_0)_T=\frac{\|((s_C)_0)_S\|_E\|v\|_E}{c^2}.
$$
Thus we get the equality
$$
\|((s_C)_0)_S\|_E^2\left(1-c^2\left(\frac{\|v\|_E}{c^2}\right)^2\right)= \left(\frac{\|s_C\|_E-\|v\|_E\tau_C}{\sqrt{1-\frac{\|v\|_E^2}{c^2}}}\right)^2,
$$
implying that
$$
\|((s_C)_0)_S\|_E=\frac{\|s_C\|_E-\|v\|_E\tau_C}{\left(1-\frac{\|v\|_E^2}{c^2}\right)}=\gamma ^2\left(\|s_C\|_E-\|v\|_E\tau_C\right)
$$
and
$$
((s_C)_0)_T=\frac{\|((s_C)_0)_S\|_E\|v\|_E}{c^2}=\frac{\|v\|_E\|s_C\|_E-\|v\|_E^2\tau_C}{c^2-\|v\|_E^2}.
$$
We get that
$$
(s_C)_0=\gamma ^2 \left(\|s_C\|_E-\|v\|_E\tau_C\right)\left(\mathbf{K}(e,\tau_C)+\frac{\|v\|_E}{c^2}e_4\right)=
$$
$$
=\gamma ^2\left(\mathbf{K}(s_C,\tau_C)-\mathbf{K}(v,\tau_C)\tau_C\right)+\left(\frac{\gamma}{1-\gamma}\right)^2\left(\|s_C\|_E-\|v\|_E\tau_C\right)e_4.
$$
We can determine the length of this vector in the new coordinate system, too. Since
$$
[(s_C)_0,(s_C)_0]^{+,T}=\left(\|(s_C)_0\|^{+,T}\right)^2=\frac{(\|s_C\|^{\tau_C}-\|v\|_E\tau_C)^2}{1-\frac{\|v\|_E^2}{c^2}}=
$$
$$
=\frac{[s_C,s_C]^{\tau_C}-2\|s_C\|^{\tau_C}\|v\|_E\tau_C +(\|v\|_E\tau_C)^2}{1-\frac{\|v\|_E^2}{c^2}}
$$
and
$$
\left((\tau_{C})_0\right)^2=\frac{(\tau_C)^2-2\tau_C\frac{\|v\|_E\|s_C\|^{\tau_C}}{c^2}+ \frac{\left(\|v\|_E\|s_C\|^{\tau_C}\right)^2}{c^4}}{1-\frac{\|v\|_E^2}{c^2}},
$$
hence the equality
$$
[(s_C)_0,(s_C)_0]^{+,T}-c^2\left((\tau_{C})_0\right)^2=[s_C,s_C]^{\tau_C}-c^2\left(\tau_{C}\right)^2
$$
shows that under the action of the Lorentz transformation the ``norm-squares'' of the vectors of the time-space are invariant, as in the case of the usual space-time.
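In the simplest case of a linear (Euclidean) shape-function this invariance is easy to check numerically; the following sketch uses the scalar quantities $\|s_C\|$ and $\tau_C$ and is only an illustration of the identity above:
\begin{verbatim}
import math

c = 1.0                  # units with c = 1 for the check
v = 0.4                  # frame velocity, |v| < c
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

s, tau = 2.3, 1.7        # space-like norm and time of an event

# transformed quantities (scalar specialization of the formulas above)
s0 = gamma * (s - v * tau)
tau0 = gamma * (tau - v * s / c**2)

print(s**2 - c**2 * tau**2)     # 2.40
print(s0**2 - c**2 * tau0**2)   # 2.40, the norm-square is invariant
\end{verbatim}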
Finally, we can determine those points of the space whose new time-coordinates are zero, and thus we get a mapping from the subspace $S$ into the time-space.
Let $s\in S$ arbitrary and consider the corresponding point $\mathbf{K}(s,\tau)+\tau e_4$ and assume that
$$
0=\tau_0=\gamma \tau-\gamma \frac{\|v\|_E}{c^2} \|\mathbf{K}(s,\tau)\|^\tau,
$$
hence
$$
\tau=\frac{\|v\|_E \|s\|_E}{c^2}.
$$
Then we get the image of the coordinate subspace $S$ under the action of the isometry corresponding to the Lorentz transformation sending the absolute time-axis into the time-axis $\tau \mathbf{K}(v,\tau)+\tau e_4$ in question. This set is:
$$
S_0=\left\{ \mathbf{K}\left(s,\frac{\|v\|_E \|s\|_E}{c^2}\right)+\frac{\|v\|_E \|s\|_E}{c^2} e_4 \quad | \quad s\in S\right\}.
$$
For a boost in an arbitrary direction with velocity $v$, it is convenient to decompose the spatial vector $s$ into components perpendicular and parallel to $v$:
$$
s=s_1+s_2
$$
so that
$$
[\mathbf{K}(s,\tau),\mathbf{K}(v,\tau)]^\tau = [\mathbf{K}(s_1,\tau),\mathbf{K}(v,\tau)]^\tau + [\mathbf{K}(s_2,\tau),\mathbf{K}(v,\tau)]^\tau = [\mathbf{K}(s_2,\tau),\mathbf{K}(v,\tau)]^\tau.
$$
Then only the time and the component $\mathbf{K}(s_2,\tau)$ in the direction of $\mathbf{K}(v,\tau)$ are ``distorted'' by the Lorentz factor $\gamma$:
\begin{eqnarray*}
\tau' & = &\gamma \left(\tau - \frac{[\mathbf{K}(s,\tau),\mathbf{K}(v,\tau)]^\tau}{c^{2}} \right) \\
\widehat{\mathbf{K}(s',\tau')} & = & \mathbf{K}(s_1,\tau)+ \gamma (\mathbf{K}(s_2,\tau)-\mathbf{K}(v,\tau)\tau )
\end{eqnarray*}
The second equality can also be written in the form:
$$
\widehat{s'}=\mathbf{K}(s,\tau)+\left(\frac{\gamma-1}{\|v\|_E^2}[\mathbf{K}(s,\tau),\mathbf{K}(v,\tau)]^\tau-\gamma \tau\right)\mathbf{K}(v,\tau).
$$
\begin{remark}
If we have two time-axes $\tau \mathbf{K}(v',\tau)+\tau e_4$ and $\tau \mathbf{K}(v'',\tau)+\tau e_4$ then there are two subgroups of the corresponding Lorentz transformations mapping the absolute time-axis onto the respective time-axes. These two subgroups are also subgroups of $G_c$. Their elements can be paired on the basis of their action on $S$. The pairs of these isometries define a new isometry of the space (and its inverse) in a natural way, by composing one of them with the inverse of the other. Omitting the absolute time-axis from the space (as we suggested earlier), the invariance of the product on the remaining space and also the physical axioms of special relativity remain in effect.
\end{remark}
\subsubsection{Addition of velocities} If $\mathbf{K}(u,\tau)$ and $\mathbf{K}(v,\tau')$ are two velocity vectors then using the formula for inverse Lorentz transformation of the corresponding differentials we get that
$$
\mathrm{d}\tau = \gamma \left(\mathrm{d}\tau' + \frac{[\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau'),\mathbf{K}(v,\tau')]^{\tau'}}{c^{2}} \right)
$$
and
$$
\mathbf{K}(\mathrm{d}s,\mathrm{d}\tau)=\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau')+\left(\frac{1-\gamma}{\|v\|_E^2} [\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau'),\mathbf{K}(v,\tau')]^{\tau'}+\gamma \mathrm{d}\tau'\right)\mathbf{K}(v,\tau').
$$
Thus
$$
\mathbf{K}(u,\tau)=\frac{\mathbf{K}(\mathrm{d}s,\mathrm{d}\tau)}{\mathrm{d}\tau}
=\frac{\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau')+\left(\frac{1-\gamma}{\|v\|_E^2} [\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau'),\mathbf{K}(v,\tau')]^{\tau'}+\gamma \mathrm{d}\tau'\right)\mathbf{K}(v,\tau')}{\gamma \left(\mathrm{d}\tau' + \frac{[\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau'),\mathbf{K}(v,\tau')]^{\tau'}}{c^{2}} \right)}=
$$
$$
=\frac{\left(\mathbf{K}(v,\tau')+\frac{1}{\gamma}\frac{\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau')}{\mathrm{d}\tau'}+\frac{1+\gamma}{\gamma c^2}\left[\frac{\mathbf{K}(\mathrm{d}\widehat{s'}, \mathrm{d}\tau')}{\mathrm{d}\tau'},\mathbf{K}(v,\tau')\right]^{\tau'}\mathbf{K}(v,\tau')\right)}
{1+\frac{\left[\frac{\mathbf{K}(\mathrm{d}\widehat{s'},\mathrm{d}\tau')}{\mathrm{d}\tau'},\mathbf{K}(v,\tau')\right]^{\tau'}}{c^{2}}} $$
$$
=\frac{\left(\mathbf{K}(v,\tau')+
\frac{1}{\gamma}\mathbf{K}(u',\mathrm{d}\tau')+
\frac{1+\gamma}{\gamma c^2}[\mathbf{K}(u',\mathrm{d}\tau'),\mathbf{K}(v,\tau')]^{\tau'}\mathbf{K}(v,\tau')\right)} {1+\frac{[\mathbf{K}(u',\mathrm{d}\tau'),\mathbf{K}(v,\tau')]^{\tau'}}{c^{2}}}.
$$
\subsection{Acceleration, momentum and energy}
Our starting point is \emph{the velocity vector (or four-velocity)}. The absolute time coordinate is $\tau$; this defines a world-line of the form $S(\tau)=\mathbf{K}(s(\tau),\tau)+\tau e_4$. Its proper time is $\tau_0=\frac{\tau}{\gamma}=\tau\sqrt{1-\frac{\|v\|_E^2}{c^2}}$, where $v$ is the velocity vector of the moving frame. By definition
$$
V(\tau):=\frac{\mathrm{d}S(\tau)}{\mathrm{d}\tau_0}=\gamma\left(\frac{\mathrm{d}(\mathbf{K}(s(\tau),\tau))}{\mathrm{d}\tau}+e_4\right).
$$
If the shape-function is a linear mapping then $\frac{\mathrm{d}(\mathbf{K}(s(\tau),\tau))}{\mathrm{d}\tau}=\mathbf{K}(\dot{s}(\tau),1):=\mathbf{K}(v(\tau),1)$ and we also have
$$
[V(\tau),V(\tau)]^{+,T}=\gamma^2\left([\mathbf{K}(v(\tau),1),\mathbf{K}(v(\tau),1)]^{1}-c^2\right)=-c^2.
$$
The \emph{acceleration} is defined as the change in four-velocity over the particle's proper time. Hence now the velocity of the particle is also a function of $\tau$, so instead of a constant $\gamma$ we have the function $\gamma(\tau)$. The definition is:
$$
A(\tau):=\frac{\mathrm{d}V}{\mathrm{d}\tau_0}=\gamma (\tau)\frac{\mathrm{d}V}{\mathrm{d}\tau}=\gamma^2(\tau)\frac{\mathrm{d}^2 \mathbf{K}(s(\tau),\tau)}{\mathrm{d}\tau^2}+ \gamma(\tau)\gamma'(\tau) \frac{\mathrm{d}(\mathbf{K}(s(\tau),\tau))}{\mathrm{d}\tau}+\gamma(\tau)\gamma'(\tau)e_4,
$$
where with notation $a(\tau)=v'(\tau)=s''(\tau)$,
$$
\gamma'(\tau)=\left(\frac{1}{\sqrt{1-\frac{\|v(\tau)\|_E^2}{c^2}}}\right)'= \left(\frac{1}{\sqrt{1-\frac{\left[\mathbf{K}(v(\tau),1),\mathbf{K}(v(\tau),1)\right]^{1}}{c^2}}}\right)'=
$$
$$
=\frac{\left[\frac{\mathrm{d}\mathbf{K}(v(\tau),1)}{\mathrm{d}\tau},\mathbf{K}(v(\tau),1)\right]^{1}} {c^2\left(1-\frac{\left[\mathbf{K}(v(\tau),1),\mathbf{K}(v(\tau),1)\right]^{1}}{c^2}\right)^{\frac{3}{2}}}= \frac{\left[\frac{\mathrm{d}\mathbf{K}(v(\tau),1)}{\mathrm{d}\tau},\mathbf{K}(v(\tau),1)\right]^{1}}{c^2}\gamma^3(\tau).
$$
In the case of linear shape-function it has the form
$$
A(\tau)=\gamma^2(\tau)\mathbf{K}(a(\tau),0)+ \gamma(\tau)\gamma'(\tau)\mathbf{K}(v(\tau),1)+\gamma(\tau)\gamma'(\tau)e_4.
$$
Since in this case $[V(\tau),V(\tau)]^{+,T}=-c^2$, we have
$$
[A(\tau),V(\tau)]^{+,T}=\gamma^3(\tau)\left(\left[\mathbf{K}(a(\tau),0),\mathbf{K}(v(\tau),1)\right]^1+\right.
$$
$$
\left.+\gamma^2(\tau)\frac{\left[\mathbf{K}(a(\tau),0),\mathbf{K}(v(\tau),1)\right]^{1}}{c^2} \|v(\tau)\|_E^2- \gamma^2(\tau)\left[\mathbf{K}(a(\tau),0),\mathbf{K}(v(\tau),1)\right]^{1}\right)=
$$
$$
=\gamma^3(\tau)\left(\left[\mathbf{K}(a(\tau),0),\mathbf{K}(v(\tau),1)\right]^1-
\frac{c^2-\|v(\tau)\|_E^2}{c^2-
\|v(\tau)\|_E^2}\left[\mathbf{K}(a(\tau),0),\mathbf{K}(v(\tau),1)\right]^{1}\right)=0.
$$
By Theorem 2 on the derivative of the product (corresponding to smooth and strictly convex norms) we also get this result, in fact we have
$$
0=\frac{\mathrm{d}[V(\tau),V(\tau)]^{+,T}}{\mathrm{d}\tau}=
2\left[\frac{\mathrm{d}V}{\mathrm{d}\tau},V\right]^{+,T}+\frac{\partial[V(\tau),V(\tau)]^\tau}{\partial{\tau}}(1)\cdot 0=\frac{2}{\gamma}[A(\tau),V(\tau)]^{+,T}.
$$
Also in the case of linear shape-function the \emph{ momentum } is
$$
P=m_0 V=\gamma m_0\left(\mathbf{K}(v(\tau),\tau)+ e_4\right)
$$
where $m_0$ is the invariant mass. We also have that
$$
[P,P]^{+,T}=\gamma^2 m_0^2(\|v\|_E^2-c^2)=-(m_0c)^2.
$$
Similarly the \emph{ force } is
$$
F=\frac{\mathrm{d}P}{\mathrm{d}\tau_0}= m_0\left(\gamma^2(\tau)\mathbf{K}(a(\tau),\tau)+ \gamma(\tau)\gamma'(\tau)\mathbf{K}(v(\tau),\tau)+\gamma(\tau)\gamma'(\tau)e_4\right),
$$
and thus we have
$$
[F,V]^{+,T}=0.
$$
\section{General relativity theory}
In time-space there is a way to describe and visualize certain spaces which are solutions of the Einstein field equations (briefly, Einstein's equation). The first method is to embed into an at least four-dimensional time-space a four-dimensional manifold whose inner metric is a solution of Einstein's equation. Our basic references here are the books \cite{eddington} and \cite{griffiths}.
\subsection{Metrics embedded into a time-space}
\subsubsection{Minkowski-Lorentz metric}
The simplest example of a Lorentz manifold is the \emph{flat-space metric} which can be given as $\mathbb{R}^4$ with coordinates $(t,x,y,z)$ and the metric function:
$$
\mathrm{d}s^2 = -c^2 \mathrm{d}t^2 + \mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2.
$$
In the above coordinates, the matrix representation is
$$
\eta = \left(\begin{array}{cccc}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right)
$$
In spherical coordinates $(t,r,\theta,\phi)$, the flat space metric takes the form
$$
\mathrm{d}s^2 = -c^2 \mathrm{d}t^2 + \mathrm{d}r^2 + r^2 \mathrm{d}\Omega^2.
$$
Here $f(r)\equiv 0$, $g=\mathrm{id}$ and $\tau=t$, implying that $\mathbf{K}\left(v,\tau\right)=v$ and that the hypersurface is the light-cone defined by $\tau=\|v\|_E$. It can also be considered in a $5$-dimensional time-space with shape-function $\mathbf{K}\left(v,\tau\right)=v$ as the metric of a $4$-dimensional subspace through the absolute time-axis. By the equivalence of time axes in a usual space-time it can be considered as an arbitrary $4$-dimensional subspace distinct from the $4$-dimensional subspace of space-like vectors, too.
\subsubsection{The de Sitter and the anti-de Sitter metrics}
The \emph{de Sitter space} is the space defined on the de Sitter sphere of a Minkowski space of one higher dimension. Usually the metric can be considered as the restriction of the Minkowski metric
$$
\mathrm{d}s^2 = -c^2 \mathrm{d}t^2 + \mathrm{d}x_1^2 + \mathrm{d}x_2^2 + \mathrm{d}x_3^2+\mathrm{d}x_4^2
$$
to the sphere $-x_0^2+x_1^2+x_2^2+x_3^2+x_4^2=\alpha^2=\frac{3}{\Lambda}$, where $\Lambda $ is the cosmological constant (see e.g. \cite{griffiths}).
Using also our constant $c$, this latter equation can be rewritten as
$$
-c^2t^2+(x'_1)^2+(x'_2)^2+(x'_3)^2+(x'_4)^2=1 \mbox{ where } x_0=t \, , \, \frac{1}{\alpha}=c \mbox{ and } x'_i=\frac{1}{\alpha}x_i.
$$
This shows that, in the $5$-dimensional time-space with shape-function $\mathbf{K}\left(v,\tau\right)=v$, it is the hyperboloid of one sheet with circular symmetry about the absolute time-axis.
The \emph{anti-de Sitter space} is the hyperbolic analogue of the elliptic de Sitter space. The Minkowski space of one higher dimension can be restricted to the so-called \emph{anti-de Sitter sphere} (also called, in our terminology, the imaginary sphere) defined by the equality $-x_0^2+x_1^2+x_2^2+x_3^2=-\alpha^2$. The shape-function is again $\mathbf{K}\left(v,\tau\right)= v$, and the corresponding $4$-submanifold is the hyperboloid of two sheets with hyperplane symmetry with respect to the $4$-subspace $S$ of space-time vectors.
\subsubsection{Friedmann-Lema\^{\i}tre-Robertson-Walker metrics}
Standard metric forms of the Friedmann-Lema\^{\i}tre-Robertson-Walker (F-L-R-W) family of space-times can be obtained by using suitable coordinate parameterizations of the 3-spaces of
constant curvature. One of these forms is
$$
\mathrm{d}s^2=-\mathrm{d}t^2+\frac{R^2(t)}{1+\frac{1}{4}k(x^2+y^2+z^2)}\left(\mathrm{d}x^2+\mathrm{d}y^2 + \mathrm{d}z^2\right)
$$
where $k\in\{-1,0,1\}$ is fixed. By the parametrization $\tau=t$ this metric is the metric of a time-space with shape-function
$ \mathbf{K}\left(v,\tau\right)$. Observe that
$$
\|v\|_E^2=\left[\mathbf{K}\left(v,\tau\right),\mathbf{K}\left(v,\tau\right)\right]^\tau= \frac{R^2(\tau)}{1+\frac{1}{4}k\|v\|_E^2}\|\mathbf{K}\left(v,\tau\right)\|_E^2.
$$
Note that we can also choose the constant $k$ as a function of the absolute time $\tau$, giving a deterministic time-space with more generality. Hence the shape-function is
$$
\mathbf{K}\left(v,\tau\right)= \frac{\sqrt{1+\frac{1}{4}k(\tau)\|v\|_E^2}}{R(\tau)}v.
$$
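As a quick consistency check (a sketch only, with $v$ standing for $\|v\|_E$), one can verify symbolically that this shape-function reproduces the relation displayed above:
\begin{verbatim}
import sympy as sp

R, k, v = sp.symbols('R k v', positive=True)   # v stands for ||v||_E

s = sp.sqrt(1 + k*v**2/4) / R        # K(v,tau) = s*v, the scale just derived

# [K,K]^tau = R^2/(1 + k v^2/4) * ||K||_E^2 should equal ||v||_E^2
lhs = R**2 / (1 + k*v**2/4) * (s*v)**2
print(sp.simplify(lhs - v**2))       # expected output: 0
\end{verbatim}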
\subsection{Three-dimensional visualization of a metric in a four-time-space}
The second method is to consider a four-dimensional time-space and a three-dimensional sub-manifold in it with the property that the metric of the time-space at the points of the sub-manifold corresponds to the given one. This method gives a good visualization of the solution in cases when the examined metric has some special property, e.g. it does not depend on time and/or it has spherical symmetry.
The examples of this section are also semi-Riemannian manifolds. We now consider solutions which have the form:
$$
\mathrm{d}s^2 = -(1-f(r)) c^2 \mathrm{d}t^2 + \frac{1}{1-f(r)} \mathrm{d}r^2 + r^2(\mathrm{d}\theta^2 + \sin^2\theta\mathrm{d}\phi^2)
$$
where
$$
\mathrm{d}\Omega^2 := \mathrm{d}\theta^2 + \sin^2\theta\mathrm{d}\phi^2
$$
is the standard metric on the 2-sphere. Thus we have to search for a shape-function $\mathbf{K}\left(v,\tau\right)$ of the embedding space and a sub-manifold of it on which the Minkowski-metric gives the required one. If the metric is isotropic we have a chance to give it in isotropic coordinates. To this end we substitute the parameter $r$ by the function $r=g(r^\star)$ and solve the differential equation:
$$
f(g(r^\star))=1-\left(\frac{r^\star g'(r^\star)}{g(r^\star)}\right)^2
$$
for the unknown function $g(r^\star)$. Then we get the metric in the isotropic form
$$
\mathrm{d}s^2 = -\left(\frac{r^\star g'(r^\star)}{g(r^\star)}\right)^2 c^2 \mathrm{d}t^2 + \frac{g^2(r^\star)}{{r^\star}^2} \left(\mathrm{d}{r^\star}^2 + {r^\star}^2(\mathrm{d}\theta^2 + \sin^2\theta \mathrm{d}\phi^2)\right).
$$
For isotropic rectangular coordinates $x=r^\star \sin \theta \cos \phi$, $y=r^\star \sin \theta \sin \phi$ and $z=r^\star \cos \theta $ the metric becomes
$$
\mathrm{d}s^2 = -\left(\frac{r^\star g'(r^\star)}{g(r^\star)}\right)^2 c^2 \mathrm{d}t^2 + \frac{g^2(r^\star)}{{r^\star}^2} \left(\mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2\right),
$$
where $r^\star=\sqrt{x^2+y^2+z^2}$. From this, substituting $ds^2=0$ and rearranging the equality, we get the velocity of light, which is equal to the quantity
$$
\sqrt{\frac{\mathrm{d}x^2}{\mathrm{d}t^2} + \frac{\mathrm{d}y^2}{\mathrm{d}t^2} + \frac{\mathrm{d}z^2}{\mathrm{d}t^2}}=\frac{{r^\star}^2 g'(r^\star)}{g^2(r^\star)}c.
$$
It is independent of the direction and varies only with the radial distance $r^\star$ (from the point mass at the origin of the coordinates). At the points of the hypersurface $t=r^\star=\sqrt{x^2+y^2+z^2}$ the metric can be parameterized by the time:
$$
\mathrm{d}s^2 = -\left(\frac{t g'(t)}{g(t)}\right)^2 c^2 \mathrm{d}t^2 + \frac{g^2(t)}{{t}^2} \left(\mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2\right),
$$
and from the equation
$$
\frac{t g'(t)}{g(t)}\mathrm{d}t=\mathrm{d}\tau
$$
we can rescale the time by the parametrization
$$
\tau :=\int t\frac{g'(t)}{g(t)}\mathrm{d}t=t\ln(g(t))-\int \ln(g(t))\mathrm{d}t.
$$
From this equation we determine the inverse function $\hat{g}$ for which $t=\hat{g}(\tau)$. Since $\hat{g}(\tau)=t=r^\star=\sqrt{x^2+y^2+z^2}$ we also have that the examined set of points of the space-time is a hypersurface defined by the equality:
$$
\tau=\left.\left(t\ln(g(t))-\int \ln(g(t))\,\mathrm{d}t\right)\right|_{t=\sqrt{x^2+y^2+z^2}}.
$$
This implies a new form of the metric at the points of this hypersurface:
$$
\mathrm{d}s^2 = -c^2 \mathrm{d}\tau^2 + \frac{g^2(\hat{g}(\tau))}{{\hat{g}(\tau)}^2} \left(\mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2\right).
$$
The corresponding inner product has the matrix form:
$$
\left(
\begin{array}{cccc}
-c^2 & 0 & 0 & 0 \\
0 & \frac{g^2(\hat{g}(\tau))}{{\hat{g}(\tau)}^2} & 0 & 0 \\
0 & 0 & \frac{g^2(\hat{g}(\tau))}{{\hat{g}(\tau)}^2} & 0 \\
0 & 0 & 0 & \frac{g^2(\hat{g}(\tau))}{{\hat{g}(\tau)}^2}\\
\end{array}
\right)
$$
and hence the Euclidean lengths of the vectors of the space depend only on the absolute moment $\tau$. Thus we can visualize the examined metric as a metric at the points of the hypersurface
$$
\tau=\left.\left(t\ln(g(t))-\int \ln(g(t))\,\mathrm{d}t\right)\right|_{t=\|v\|_E}
$$
of a certain time-space. We note that this is not the inner metric of the examined three-dimensional surface, which can be considered as the metric of a three-dimensional space-time. To determine the shape-function, observe that
$$
\|v\|_E^2=\left[\mathbf{K}\left(v,\tau\right),\mathbf{K}\left(v,\tau\right)\right]^\tau= \frac{g^2(\hat{g}(\tau))}{{\hat{g}(\tau)}^2}\|\mathbf{K}\left(v,\tau\right)\|_E^2
$$
from which we get that
$$
\mathbf{K}\left(v,\tau\right)=\frac{{\hat{g}(\tau)}}{g(\hat{g}(\tau))}v.
$$
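When $f$ does not admit a closed-form $g$, the differential equation above can be integrated numerically: rewriting it as $g'(r^\star)=\frac{g}{r^\star}\sqrt{1-f(g)}$ and starting from the asymptotic behaviour $g\approx r^\star$ at large $r^\star$, one obtains $g$ and hence the scale factor $\hat{g}(\tau)/g(\hat{g}(\tau))$ of the shape-function. A minimal sketch (the choice $f(r)=r_s/r$ and all numerical values are illustrative assumptions; the exact solution for this $f$ is derived in the next subsubsection):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

r_s = 1.0
f = lambda r: r_s / r                  # example: Schwarzschild-type f(r)

def rhs(rstar, y):                     # dg/dr* = g/r* * sqrt(1 - f(g))
    g = y[0]
    return [g / rstar * np.sqrt(max(1.0 - f(g), 0.0))]

r_far = 1.0e4                          # start far out, g ~ r*(1+r_s/(4r*))^2
sol = solve_ivp(rhs, (r_far, 2.0), [r_far * (1 + r_s/(4*r_far))**2],
                dense_output=True, rtol=1e-10, atol=1e-12)

rstar = np.array([3.0, 10.0, 100.0, 1000.0])
g_num = sol.sol(rstar)[0]
g_exact = rstar * (1 + r_s/(4*rstar))**2
print(np.max(np.abs(g_num/g_exact - 1)))   # small relative deviation
\end{verbatim}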
We now give some examples.
\subsubsection{Schwarzschild metric}
Besides the flat-space metric, the most important metric in general relativity is the \emph{Schwarzschild metric}, which can be given in local polar coordinates $(t,r,\varphi,\theta )$ by
$$
\mathrm{d}s^{2} = -\left(1 - \frac{2GM}{c^2r} \right) c^2 \mathrm{d}t^2 + \left(1 - \frac{2GM}{c^2r} \right)^{-1} \mathrm{d}r^2 + r^2 \mathrm{d}\Omega^2
$$
where, again, $\mathrm{d}\Omega^2$ is the standard metric on the 2-sphere. Here $G$ is the \emph{gravitational constant} and $M$ is a constant with the dimension of mass. The function $f$ is
$$
f(r)=\frac{2GM}{c^2r}:=\frac{r_s}{r} \mbox{ with constant } r_s=\frac{2GM}{c^2}.
$$
The differential equation for $g$ is
$$
\frac{r_s}{g(r^\star)}=1-\left(\frac{r^\star g'(r^\star)}{g(r^\star)}\right)^2
$$
with the solution
$$
g(r^\star)= \frac{r_s}{4}c_1r^\star {\left( 1 + \frac{1}{c_1 r^\star} \right)}^{2},
$$
and if we choose the parameter $c_1$ to be $\frac{4}{r_s}$ we get the
known solution (see \cite{eddington})
$$
g(r^\star)= r^\star {\left( 1 + \frac{r_s}{4 r^\star} \right)}^{2}.
$$
For isotropic rectangular coordinates the metric becomes
$$
\mathrm{d}s^2=-\frac{(1-\frac{r_s}{4r^\star})^{2}}{(1+\frac{r_s}{4r^\star})^{2}} \, c^2 {\mathrm{d} t}^2 + \left(1+\frac{r_s}{4r^\star}\right)^{4}(\mathrm{d}x^2+\mathrm{d}y^2+\mathrm{d}z^2).
$$
The equation between $\tau$ and $t$ is
$$
\tau=\int\frac{(1-\frac{r_s}{4t})}{(1+\frac{r_s}{4t})}\mathrm{d}t=\int\frac{4t-r_s}{4t+r_s}\mathrm{d}t=t-2r_s\int\frac{1}{4t+r_s}\mathrm{d}t= t-\frac{r_s}{2}\ln \left(t+\frac{r_s}{4}\right)+C.
$$
Of course we can choose $C=0$. Similarly to the well-known tortoise coordinates, there is no explicit inverse function of this parametrization; we denote the inverse by $t=\hat{g}(\tau)$.
The shape-function of the corresponding time-space is
$$
\mathbf{K}\left(v,\tau\right)=\frac{{\hat{g}(\tau)}}{g(\hat{g}(\tau))}v=\left( 1 + \frac{r_s}{4 \hat{g}(\tau)} \right)^{-2}v.
$$
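Although $\hat{g}$ has no closed form, it is easy to evaluate numerically: for a given $\tau$ one solves $t-\frac{r_s}{2}\ln\left(t+\frac{r_s}{4}\right)=\tau$ by root finding and then forms the scale factor $\left(1+\frac{r_s}{4\hat{g}(\tau)}\right)^{-2}$. A small sketch (units and sample values are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

r_s = 1.0

def tau_of_t(t):
    return t - 0.5 * r_s * np.log(t + 0.25 * r_s)

def g_hat(tau, t_max=1.0e8):
    # tau_of_t is increasing for t > r_s/4, so a simple bracket suffices
    return brentq(lambda t: tau_of_t(t) - tau, 0.3 * r_s, t_max)

for tau in (1.0, 5.0, 50.0):
    t = g_hat(tau)
    scale = (1 + r_s / (4 * t)) ** (-2)   # K(v,tau) = scale * v
    print(tau, t, scale)
\end{verbatim}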
\subsubsection{Reissner-Nordstr\"om metric}
In spherical coordinates $(t, r, \theta, \phi)$, the line element for the Reissner-Nordstr\"om metric is
$$
\mathrm{d}s^2 =
-\left( 1 - \frac{r_\mathrm{S}}{r} + \frac{r_Q^2}{r^2} \right) c^2\, \mathrm{d}t^2 + \frac{1}{1 - \frac{r_\mathrm{S}}{r} + \frac{r_Q^2}{r^2}}\, \mathrm{d}r^2 + r^2\, \mathrm{d}\theta^2 + r^2 \sin^2 \theta \mathrm{d}\phi^2,
$$
here again $t$ is the time coordinate (measured by a stationary clock at infinity), $r$ is the radial coordinate, $r_S= 2GM/c^2$ is the Schwarzschild radius of the body, and $r_Q$ is a characteristic length scale given by
$$
r_{Q}^{2} = \frac{Q^2 G}{4\pi\varepsilon_{0} c^4}.
$$
Here $1/4\pi\varepsilon_0$ is the Coulomb force constant. The function $f$ is
$$
f(r)=\frac{r_s}{r}-\frac{r_Q^2}{r^2}
$$
The differential equation for $g$ is
$$
\frac{r_s}{g(r^\star)}-\frac{r_Q^2}{g^2(r^\star)}=1-\left(\frac{r^\star g'(r^\star)}{g(r^\star)}\right)^2
$$
with the solution
$$
g(r^\star)= \sqrt{\frac{r^2_s}{4}-r^2_Q}\frac{c_1}{2}r^\star {\left( 1 + \frac{1}{c_1 {r^\star}} \right)^2}-\sqrt{\frac{r^2_s}{4}-r^2_Q}+\frac{r_s}{2},
$$
if we choose $c_1:=\frac{2}{\sqrt{\frac{r^2_s}{4}-r^2_Q}}$ we get a simpler form:
$$
g(r^\star)= r^\star {\left( 1 + \frac{\sqrt{\frac{r^2_s}{4}-r^2_Q}}{2{r^\star}} \right)^2}-\sqrt{\frac{r^2_s}{4}-r^2_Q}+\frac{r_s}{2}=r^\star \left( 1 + \frac{\frac{r^2_s}{4}-r^2_Q}{4{r^\star}^2}\right)+\frac{r_s}{2}.
$$
For the isotropic rectangular coordinates we have:
$$
\mathrm{d}s^2 = -\left(\frac{r^\star\left(1 - \frac{\frac{r^2_s}{4}-r^2_Q}{4{r^\star}^2}\right)}{r^\star \left( 1 + \frac{\frac{r^2_s}{4}-r^2_Q}{4{r^\star}^2}\right) +\frac{r_s}{2}}\right)^2 c^2 \mathrm{d}t^2 + \left(\frac{r^\star \left( 1 + \frac{\frac{r^2_s}{4}-r^2_Q}{4{r^\star}^2}\right) +\frac{r_s}{2}}{{r^\star}}\right)^2(\mathrm{d}x^2+\mathrm{d}y^2+\mathrm{d}z^2).
$$
Our process now leads to the new time parameter
$$
\tau=t-\left(\frac{r_s}{4}-\frac{r_Q}{2}\right)\ln\left(\left(t+\frac{r_s}{4}\right)^2-\frac{r_Q^2}{4}\right)- r_Q\ln\left(t+\frac{r_s}{4}+\frac{r_Q}{2}\right)+C,
$$
which in the case of $C=r_Q=0$ gives back the parametrization of the Schwarzschild solution.
The shape-function of the time-space we are looking for can be determined from the corresponding inverse $t=\hat{g}(\tau)$; it is
$$
\mathbf{K}\left(v,\tau\right)=\frac{{\hat{g}(\tau)}}{g(\hat{g}(\tau))}v=\frac{\hat{g}(\tau)}{\hat{g}(\tau) \left( 1 + \frac{\frac{r^2_s}{4}-r^2_Q}{4{\hat{g}(\tau)}^2}\right)+\frac{r_s}{2}}v.
$$
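Both the closed form of $g$ and the new time parameter $\tau$ can be verified symbolically; a minimal sketch (in the first check $t$ plays the role of $r^\star$, and $r_s>2r_Q$ is assumed so that all expressions are real):
\begin{verbatim}
import sympy as sp

t, rs, rQ = sp.symbols('t r_s r_Q', positive=True)
a2 = rs**2 / 4 - rQ**2                 # shorthand for r_s^2/4 - r_Q^2

g = t * (1 + a2 / (4 * t**2)) + rs / 2
# the ODE  r_s/g - r_Q^2/g^2 = 1 - (t g'/g)^2  should be satisfied
print(sp.simplify(rs/g - rQ**2/g**2 - (1 - (t*sp.diff(g, t)/g)**2)))  # 0

# the time parameter must satisfy  d tau / d t = t g'(t)/g(t)
tau = (t - (rs/4 - rQ/2)*sp.log((t + rs/4)**2 - rQ**2/4)
         - rQ*sp.log(t + rs/4 + rQ/2))
print(sp.simplify(sp.diff(tau, t) - t*sp.diff(g, t)/g))               # 0
\end{verbatim}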
Analogously we can compute the time-space visualization of the Schwarzschild-de Sitter solution, which we omit here.
\subsubsection{Bertotti-Robinson metric}
The Bertotti-Robinson space-time is the only conformally flat solution of the Einstein-Maxwell equations for a non-null source-free electromagnetic field. The metric is:
$$
\mathrm{d}s^2 =\frac{Q^2}{r^2}\left(- \mathrm{d}t^2 + \mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2\right),
$$
and on the light-cone $t=r$ it has the form
$$
\mathrm{d}s^2 =-\frac{Q^2}{t^2}\mathrm{d}t^2 + \frac{Q^2}{t^2}\left(\mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2 \right).
$$
By the new time coordinate
$$
\tau=Q\ln t \mbox{ or } t=e^{\frac{\tau}{Q}}
$$
using orthogonal space coordinates we get the form
$$
\mathrm{d}s^2=-\mathrm{d} \tau^2 + \frac{Q^2}{e^{\frac{2\tau}{Q}}}\left(\mathrm{d}x^2+\mathrm{d}y^2+\mathrm{d}z^2\right).
$$
Thus it can be visualized on the hypersurface $\tau=Q\ln r$ of the time-space with shape-function:
$$
\mathbf{K}\left(v,\tau\right):=\frac{e^{\frac{\tau}{Q}}}{Q}v.
$$
\subsection{Einstein field equations}
As we saw in the previous section, the direct embedding of a solution of Einstein's equation into a time-space requires non-linear and rather complicated shape-functions. It can also be seen that there are solutions which admit no natural embedding into a time-space. This motivates the investigations of the present section. Our build-up follows that of the clear paper of Prof. Alan Heavens \cite{heavens}; we would like to thank him for his downloadable PDF.
\subsubsection{Homogeneous time-space-manifolds and the Equivalence Principle}
We now consider manifolds whose tangent spaces are four-dimensional time-spaces with given shape-functions. More precisely:
\begin{defi}
Let $\mathcal{S}$ be the set of linear mappings $\mathbf{K}(v,\tau):\mathbb{E}^3\times \mathbb{R}\longrightarrow \mathbb{E}^3$ satisfying the properties of a linear shape-function given in Definition 7. Endowing it with the natural topology, we say that $\mathcal{S}$ is \emph{the space of shape-functions}. If we have a pair consisting of a four-dimensional topological manifold $M$ and a smooth ($C^\infty$) mapping $\mathcal{K}:M\longrightarrow \mathcal{S}$ with the property that at each point $P\in M$ the tangent space is the time-space defined by $\mathbf{K}^P(s,\tau)\in \mathcal{S}$, we say that the pair is a \emph{time-space-manifold}. The time-space-manifold is \emph{homogeneous} if the mapping $\mathcal{K}$ is a constant function.
\end{defi}
Note that a Lorentzian manifold is a homogeneous time-space-manifold whose shape-function is independent of the time and is the identity mapping on its space-like components, namely $\mathbf{K}^P(s,\tau)=s$ for all $P$ and for all $\tau $. Its matrix-form (using the column representation of vectors in time-space) is:
$$
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\end{array}
\right)
$$
Our purpose is to build up the theory of general relativity on homogeneous time-space-manifolds. We accept the so-called \emph{Strong Equivalence Principle} of Einstein in the following form:
\begin{axiom}(Equivalence Principle)
At any point in a homogeneous time-space manifold it is possible to choose a \emph{locally-inertial frame} in which the laws of physics are the same as the special relativity of the corresponding time-space.
\end{axiom}
According to this principle, there is a coordinate-system in which a freely-moving particle moves with constant velocity with respect to the time-space $\mathcal{K}(P)=\mathbf{K}^P(s,\tau)=\mathbf{K}(s,\tau)$. It is convenient to write the world line
$$
S(\tau)=\mathbf{K}(s(\tau),\tau)+\tau e_4
$$
parametrically, as a function of the proper time $\tau_0=\frac{\tau}{\gamma(\tau)}$. In subsection 2.2 we determined the velocity using the time-space parameter $\tau$:
$$
V(\tau)=\gamma(\tau)\left(\frac{\mathrm{d}(\mathbf{K}(s(\tau),\tau))}{\mathrm{d}\tau}+e_4\right)=\gamma(\tau)\left(\mathbf{K}(v(\tau),1)+e_4\right).
$$
Taking into consideration again that the shape-function is linear, the acceleration is:
$$
A(\tau)=\gamma^2(\tau)\mathbf{K}(a(\tau),0)+ \gamma^4(\tau)\frac{\left[\mathbf{K}(a(\tau),0),\mathbf{K}(v(\tau),1)\right]^{\tau}}{c^2} \mathbf{K}(v(\tau),1)+
$$
$$
+\gamma^4(\tau)\frac{\left[\mathbf{K}(a(\tau),0),\mathbf{K}(v(\tau),1)\right]^{\tau}}{c^2} e_4,
$$
giving the differential equation $A(\tau)=0$ for a particle which moves uniformly with respect to this frame.
\subsubsection{Affine connection and the metric on a homogeneous time-space-manifold}
Consider any other coordinate system in which the particle coordinates are $S'(\tau_0)$. Using the chain rule, the defining equation
$$
0=A(\tau_0)=\frac{\mathrm{d} V(\tau_0)}{\mathrm{d} \tau_0}=\frac{\mathrm{d}^2 S(\tau_0)}{\mathrm{d}\tau_0^2}
$$
becomes
$$
0=\frac{\mathrm{d}}{\mathrm{d}\tau_0}\left(\frac{\mathrm{d}{S}}{\mathrm{d}{S'}}\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}\right)= \frac{\mathrm{d}{S}}{\mathrm{d}{S'}}\frac{\mathrm{d}^2S'(\tau_0)}{\mathrm{d}\tau_0^2}+ \frac{\mathrm{d}}{\mathrm{d}\tau_0}\left(\frac{\mathrm{d}{S}}{\mathrm{d}{S'}}\right)\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}=
$$
$$
=\frac{\mathrm{d}{S}}{\mathrm{d}{S'}}\frac{\mathrm{d}^2S'(\tau_0)}{\mathrm{d}\tau_0^2}+ \frac{\mathrm{d}^2{S}}{\mathrm{d}{S'}\mathrm{d}{S'}}\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0},
$$
where $\frac{\mathrm{d}{S}}{\mathrm{d}{S'}}$ denotes the total derivative of the mapping of the time-space sending the path $S'(\tau_0)$ into the specific path $S(\tau_0)$, and the trilinear function $\frac{\mathrm{d}^2{S}}{\mathrm{d}{S'}\mathrm{d}{S'}}$ is the second total derivative of the same mapping. (If there is a general smooth transformation between the coordinate-frames, the corresponding derivatives exist.) From this equality we get the tensor form of the so-called \emph{geodesic equation} of a homogeneous time-space-manifold; it is:
$$
\frac{\mathrm{d}^2S'(\tau_0)}{\mathrm{d}\tau_0^2}+\left(\frac{\mathrm{d}{S'}}{\mathrm{d}{S}}\frac{\mathrm{d}^2{S}}{\mathrm{d}{S'}\mathrm{d}{S'}}\right) \frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}=\frac{\mathrm{d}^2S'(\tau_0)}{\mathrm{d}\tau_0^2}+ \Gamma(S',S)\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}=0.
$$
Here we denote the inverse of the total derivative $\frac{\mathrm{d}{S}}{\mathrm{d}{S'}}$ by $\frac{\mathrm{d}{S'}}{\mathrm{d}{S}}$. The quantity $\Gamma (S',S)$ is called the \emph{affine connection}.
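When the shape-function is the identity (the classical Lorentzian case discussed below), this reduces to the usual geodesic equation, which can be integrated numerically once the connection coefficients are known. A minimal sketch for a geodesic on the unit 2-sphere, whose only nonzero Christoffel symbols are $\Gamma^\theta{}_{\phi\phi}=-\sin\theta\cos\theta$ and $\Gamma^\phi{}_{\theta\phi}=\Gamma^\phi{}_{\phi\theta}=\cot\theta$ (the example and the numerical values are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(_, y):
    th, ph, dth, dph = y
    # d^2 x / dtau^2 = - Gamma dx dx for the unit 2-sphere
    d2th = np.sin(th) * np.cos(th) * dph**2
    d2ph = -2.0 * np.cos(th) / np.sin(th) * dth * dph
    return [dth, dph, d2th, d2ph]

y0 = [np.pi/2, 0.0, 0.3, 1.0]          # start on the equator
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

# the squared speed (dth)^2 + sin^2(th)(dph)^2 is conserved along a geodesic
th, ph, dth, dph = sol.sol(np.linspace(0.0, 10.0, 5))
print(dth**2 + np.sin(th)**2 * dph**2)   # approximately constant
\end{verbatim}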
For uniform labeling we denote by $x^4$ the identity function. Since the shape-function is a linear mapping, we can represent it as multiplication on the left by the $3\times 4$ matrix $K=[k_{ij}]=k^i{}_j$. In the rest of this paragraph we apply the usual conventions of general relativity: the Greek alphabet is used for space and time components, with indices taking the values 1,2,3,4 (frequently used letters are $\mu,\nu,\ldots$); the Latin alphabet is used for spatial components only, with indices taking the values 1,2,3 (frequently used letters are $i, j, \ldots$); and, according to Einstein's convention, when an index variable appears twice in a single term, a summation of that term over all the values of the index is implied.
The upper indices are indices of coordinates, coefficients or basis vectors.
The mapping $\mathcal{S}:S'(\tau_0)\longrightarrow S(\tau_0)$ sends $K({x'}^1,{x'}^2,{x'}^3,{x'}^4)^T+{x'}^4e_4$ into the vector $K(x^1,x^2,x^3,x^4)^T+x^4e_4$. Denote by $\widetilde{K}$ the $4\times 4$ matrix with coefficients:
$$
\left(
\begin{array}{cccc}
k^1{}_{1} & k^{1}{}_2 & k^{1}{}_3 & k^{1}{}_4 \\
k^2{}_{1} & k^2{}_{2} & k^2{}_{3} & k^2{}_{4} \\
k^3{}_{1} & k^3{}_{2} & k^3{}_{3} & k^3{}_{4} \\
0 & 0 & 0 & 1 \\
\end{array}
\right),
$$
then we get $\mathcal{S}:\widetilde{K}({x'}^1,{x'}^2,{x'}^3,{x'}^4)^T\mapsto \widetilde{K}(x^1,x^2,x^3,x^4)^T$. If the shape-function $\mathbf{K}$ restricted to the subspace $S$ is a regular linear mapping then we also have
$$
\widetilde{K}^{-1}\mathcal{S}\widetilde{K}({x'}^1,{x'}^2,{x'}^3,{x'}^4)^T=(x^1,x^2,x^3,x^4)^T
$$
and we have that
$$
\left[\frac{\partial x^\alpha}{\partial {x'}^\mu}\right]=\frac{\mathrm{d} \widetilde{K}^{-1}\mathcal{S}\widetilde{K}}{\mathrm{d} S'}= \widetilde{K}^{-1}\frac{\mathrm{d} \mathcal{S}}{\mathrm{d} S'}\widetilde{K} \mbox{ and so } \frac{\mathrm{d} \mathcal{S}}{\mathrm{d} S'}=\widetilde{K}\left[\frac{\partial x^\alpha}{\partial {x'}^\mu}\right]\widetilde{K}^{-1}.
$$
Hence
$$
\frac{\mathrm{d}{S'}}{\mathrm{d}{S}}=\widetilde{K}\left[\frac{\partial x^\alpha}{\partial {x'}^\mu}\right]^{-1}\widetilde{K}^{-1}=\widetilde{K}\left[\frac{\partial {x'}^\mu}{\partial x^\alpha}\right]\widetilde{K}^{-1} \mbox{ and } \left[\frac{\mathrm{d}^2{S}}{\mathrm{d}{S'}\mathrm{d}{S'}}\right]^\alpha=\widetilde{K}\left[\frac{\partial^2 x^\alpha}{\partial {x'}^\mu\partial {x'}^\nu}\right]\widetilde{K}^{-1}
$$
implying that the affine connection is:
$$
\Gamma(S',S)^{\lambda}{}_{\mu\nu}=\widetilde{K}\frac{\partial {x'}^\lambda}{\partial x^\alpha}\frac{\partial^2 {x}^\alpha}{\partial {x'}^\mu\partial {x'}^\nu}\widetilde{K}^{-1}=\widetilde{K}\Gamma^{\lambda}{}_{\mu\nu}\widetilde{K}^{-1}=\widetilde{K}\left\{ \begin{array}{c}
\lambda\\
\mu\nu \end{array}
\right\}\widetilde{K}^{-1}.
$$
Since $S'(\tau_0)=\widetilde{K}({x'}^1,{x'}^2,{x'}^3,{x'}^4)^T$, we also get three equalities; the first one is:
$$
\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}=\widetilde{K}\left(\frac{\mathrm{d}{x'}^1}{\mathrm{d}\tau_0},\frac{\mathrm{d}{x'}^2}{\mathrm{d}\tau_0}, \frac{\mathrm{d}{x'}^3}{\mathrm{d}\tau_0},\frac{\mathrm{d}{x'}^4}{\mathrm{d}\tau_0}\right)^T= \left(k^1{}_{\alpha}\frac{\mathrm{d}{x'}^\alpha}{\mathrm{d}\tau_0},k^2{}_{\alpha}\frac{\mathrm{d}{x'}^\alpha}{\mathrm{d}\tau_0}, k^3{}_{\alpha}\frac{\mathrm{d}{x'}^\alpha}{\mathrm{d}\tau_0}, k^4{}_{\alpha}\frac{\mathrm{d}{x'}^\alpha}{\mathrm{d}\tau_0}\right)^T=
$$
$$
=\left[k^\lambda{}_{\alpha}\frac{\mathrm{d}{x'}^\alpha}{\mathrm{d}\tau_0}\right].
$$
The second equality is:
$$
\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}\frac{\mathrm{d}S'(\tau_0)}{\mathrm{d}\tau_0}= \widetilde{K}\left(\frac{\mathrm{d}{x'}^1}{\mathrm{d}\tau_0},\frac{\mathrm{d}{x'}^2}{\mathrm{d}\tau_0}, \frac{\mathrm{d}{x'}^3}{\mathrm{d}\tau_0},\frac{\mathrm{d}{x'}^4}{\mathrm{d}\tau_0}\right)^T \left(\frac{\mathrm{d}{x'}^1}{\mathrm{d}\tau_0},\frac{\mathrm{d}{x'}^2}{\mathrm{d}\tau_0}, \frac{\mathrm{d}{x'}^3}{\mathrm{d}\tau_0},\frac{\mathrm{d}{x'}^4}{\mathrm{d}\tau_0}\right)\widetilde{K}^T=
$$
$$
=\widetilde{K}\left[\frac{\mathrm{d}{x'}^\mu}{\mathrm{d}\tau_0} \frac{\mathrm{d}{x'}^\nu}{\mathrm{d}\tau_0}\right]\widetilde{K}^T,
$$
and the third one is:
$$
\frac{\mathrm{d}^2S'(\tau_0)}{\mathrm{d}\tau_0^2}=\widetilde{K}\left(\frac{\mathrm{d}^2{x'}^1}{\mathrm{d}\tau_0^2}, \frac{\mathrm{d}^2{x'}^2}{\mathrm{d}\tau_0^2}, \frac{\mathrm{d}^2{x'}^3}{\mathrm{d}\tau_0^2},\frac{\mathrm{d}^2{x'}^4}{\mathrm{d}\tau_0^2}\right)^T= \left[k^\lambda{}_{\alpha}\frac{\mathrm{d}^2{x'}^\alpha}{\mathrm{d}\tau_0^2}\right].
$$
The geodesic equation now reads:
$$
0=\widetilde{K}\left(\frac{\mathrm{d}^2{x'}^1}{\mathrm{d}\tau_0^2}, \frac{\mathrm{d}^2{x'}^2}{\mathrm{d}\tau_0^2}, \frac{\mathrm{d}^2{x'}^3}{\mathrm{d}\tau_0^2},\frac{\mathrm{d}^2{x'}^4}{\mathrm{d}\tau_0^2}\right)^T+ \widetilde{K}\Gamma^{\lambda}{}_{\mu\nu}\widetilde{K}^{-1}\widetilde{K}\left[\frac{\mathrm{d}{x'}^\mu}{\mathrm{d}\tau_0} \frac{\mathrm{d}{x'}^\nu}{\mathrm{d}\tau_0}\right]\widetilde{K}^T,
$$
or equivalently
$$
0=\left(\frac{\mathrm{d}^2{x'}^1}{\mathrm{d}\tau_0^2}, \frac{\mathrm{d}^2{x'}^2}{\mathrm{d}\tau_0^2}, \frac{\mathrm{d}^2{x'}^3}{\mathrm{d}\tau_0^2},\frac{\mathrm{d}^2{x'}^4}{\mathrm{d}\tau_0^2}\right)^T+ \Gamma^{\lambda}{}_{\mu\nu}\left[\frac{\mathrm{d}{x'}^\mu}{\mathrm{d}\tau_0} \frac{\mathrm{d}{x'}^\nu}{\mathrm{d}\tau_0}\right]\widetilde{K}^T,
$$
implying that
$$
0= \frac{\mathrm{d}^2{x'}^\lambda}{\mathrm{d}\tau_0^2}+\Gamma^{\lambda}{}_{\mu\nu}\frac{\mathrm{d}{x'}^\mu}{\mathrm{d}\tau_0}k^\nu{}_\zeta \frac{\mathrm{d}{x'}^\zeta}{\mathrm{d}\tau_0}.
$$
Since for the proper time we have the equality
$$
-c^2\mathrm{d}\tau_0^2=\mathrm{d}S^T\left(
\begin{array}{cc}
1 & 0\\
0 & -c^2 \\
\end{array}
\right)\mathrm{d}S=\left(\frac{\mathrm{d}S}{\mathrm{d}S'}\mathrm{d}S'\right)^T\eta \frac{\mathrm{d}S}{\mathrm{d}S'}\mathrm{d}S'=
\mathrm{d}S'^Tg\mathrm{d}S'
$$
hence
$$
g(S',S)=\left(\frac{\mathrm{d}S}{\mathrm{d}S'}\right)^T\eta \frac{\mathrm{d}S}{\mathrm{d}S'}.
$$
Let us denote by $[{}_j{}^{i}k]$ the transpose of the matrix $[k^{i}{}_j]$ and by $K^{i}{}_j$ the elements of the inverse of $\widetilde{K}$. Then, since
$$
g(S',S)=\left(\widetilde{K}^{-1}\right)^T\left[\frac{\partial x^\alpha}{\partial {x'}^\mu}\right]^T\widetilde{K}^{T}\eta\widetilde{K}\left[\frac{\partial x^\alpha}{\partial {x'}^\mu}\right]\widetilde{K}^{-1}
$$
thus
$$
g(S',S)_{\varphi\psi}={}_\varphi{}^\mu{K}\frac{\partial x^\alpha}{\partial {x'}^\mu}{}_\alpha{}^{\delta}k \eta_{\delta,\varepsilon} k^\varepsilon{}_\beta\frac{\partial x^\beta}{\partial {x'}^\nu}K^{\nu}{}_\psi.
$$
This matrix is the \emph{metric tensor} of the homogeneous time-space manifold in question. If $\widetilde{K}$ is the unit matrix, then $\mu=\varphi$, $\nu=\psi$, $\alpha=\delta$ and $\beta=\varepsilon$ implying the known formula
$$
g_{\mu\nu}=\frac{\partial x^\alpha}{\partial {x'}^\mu}\frac{\partial x^\beta}{\partial {x'}^\nu}\eta_{\alpha\beta}.
$$
Also note that if $\widetilde{K}$ is an orthogonal transformation then we get a simpler form of the metric:
$$
g(S',S)=\widetilde{K}\left[\frac{\partial x^l}{\partial {x'}^i}\right]^T\eta\left[\frac{\partial x^l}{\partial {x'}^i}\right]\widetilde{K}^{T}.
$$
To determine the relation between the metric and the affine connection, we compute the partial derivative of the metric:
$$
\frac{\partial g(S',S)}{\partial {x'}^\lambda}=\left(\widetilde{K}^{-1}\right)^T\left[\frac{\partial^2 x^\alpha}{\partial {x'}^\mu\partial {x'}^\lambda}\right]^T\widetilde{K}^{T}\eta\widetilde{K}\left[\frac{\partial x^\beta}{\partial {x'}^\nu}\right]\widetilde{K}^{-1}+
$$
$$
+\left(\widetilde{K}^{-1}\right)^T\left[\frac{\partial x^\alpha}{\partial {x'}^\mu}\right]^T\widetilde{K}^{T}\eta\widetilde{K}\left[\frac{\partial^2 x^\beta}{\partial {x'}^\nu\partial {x'}^\lambda}\right]\widetilde{K}^{-1},
$$
and since
$$
\frac{\partial^2 {x}^\alpha}{\partial {x'}^\mu\partial {x'}^\lambda}=\frac{\partial {x}^\alpha}{\partial {x'}^\rho}\widetilde{K}^{-1}\Gamma(S',S)^{\rho}{}_{\mu\lambda}\widetilde{K},
$$
we have
$$
\frac{\partial g(S',S)_{\varphi\psi}}{\partial {x'}^\lambda}=\Gamma(S',S)^{\rho}{}_{\varphi\lambda}g(S',S)_{\rho\psi}+g(S',S)_{\varphi\rho}\Gamma(S',S)^{\rho}{}_{\lambda\psi}
$$
as in the classical case. Denoting by $g(S',S)^{\varphi\rho}$ the inverse of the metric tensor, we get the connection:
$$
\Gamma(S',S)^{\sigma}{}_{\lambda\mu}=\frac{1}{2}g(S',S)^{\nu\sigma}\left\{\frac{\partial g(S',S)_{\mu\nu}}{\partial {x'}^\lambda}+
\frac{\partial g(S',S)_{\lambda\nu}}{\partial {x'}^\mu}-\frac{\partial g(S',S)_{\mu\lambda}}{\partial {x'}^\nu}\right\}.
$$
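For $\widetilde{K}=\mathrm{id}$ this is the familiar expression of the Christoffel symbols through the metric, and it is straightforward to implement symbolically. A small sketch (the flat $1+2$-dimensional metric in polar coordinates serves purely as an illustration):
\begin{verbatim}
import sympy as sp

def christoffel(g, coords):
    # Gamma^s_{l m} = 1/2 g^{n s} (d_l g_{m n} + d_m g_{l n} - d_n g_{m l})
    n = len(coords)
    ginv = g.inv()
    Gamma = [[[sp.S(0)] * n for _ in range(n)] for _ in range(n)]
    for s in range(n):
        for l in range(n):
            for m in range(n):
                Gamma[s][l][m] = sp.simplify(sum(
                    sp.Rational(1, 2) * ginv[nu, s] *
                    (sp.diff(g[m, nu], coords[l]) +
                     sp.diff(g[l, nu], coords[m]) -
                     sp.diff(g[m, l], coords[nu]))
                    for nu in range(n)))
    return Gamma

t, r, th, c = sp.symbols('t r theta c', positive=True)
g = sp.diag(-c**2, 1, r**2)            # flat metric, coordinates (t, r, theta)
Gamma = christoffel(g, (t, r, th))
print(Gamma[1][2][2])                  # Gamma^r_{theta theta} = -r
print(Gamma[2][1][2])                  # Gamma^theta_{r theta} = 1/r
\end{verbatim}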
\subsubsection{Covariant derivative, parallel transport and the curvature tensor}
Since we have determined the affine connection, we can define the \emph{covariant derivative} of a vector field in the following way:
$$
V^\mu{}_{;\lambda}=\frac{\partial V^\mu}{\partial {x'}^\lambda}+\Gamma(S',S)^{\mu}{}_{\lambda\rho}V^\rho= \frac{\partial V^\mu}{\partial {x'}^\lambda}+\widetilde{K}\Gamma^{\mu}{}_{\lambda\delta}\widetilde{K}^{-1}V^\delta.
$$
In fact, it converts vectors into tensors, on the basis of the following calculation:
$$
\widetilde{K}\left[\frac{\partial {x'}^\mu}{\partial x^\nu}\right]\left[\frac{\partial x^\rho}{\partial {x'}^\lambda}\right]\widetilde{K}^{-1}V^\nu{}_{;\rho}=\widetilde{K}\left[\frac{\partial {x'}^\mu}{\partial x^\nu}\right]\left[\frac{\partial x^\rho}{\partial {x'}^\lambda}\right]\widetilde{K}^{-1}\left(\frac{\partial V^\nu}{\partial {x}^\rho}+\widetilde{K}\Gamma^{\nu}{}_{\rho\delta}\widetilde{K}^{-1}V^\delta\right)=
$$
$$
=\widetilde{K}\left[\frac{\partial {x'}^\mu}{\partial x^\nu}\right]\left[\frac{\partial x^\rho}{\partial {x'}^\lambda}\right]\widetilde{K}^{-1}\left(\frac{\partial V^\nu}{\partial {x}^\rho}+\widetilde{K}
\frac{\partial {x'}^\nu}{\partial x^\alpha}\frac{\partial^2 {x}^\alpha}{\partial {x'}^\rho\partial {x'}^\delta}\widetilde{K}^{-1}V^\delta\right)=
$$
$$
=\frac{\partial V'^\mu}{\partial {x'}^\lambda}+\widetilde{K}\frac{\partial {x'}^\mu}{\partial x^\alpha}\frac{\partial^2 {x}^\alpha}{\partial {x'}^\lambda\partial {x'}^\delta}\widetilde{K}^{-1}{V'}^\delta=\frac{\partial {V'}^\mu}{\partial {x'}^\lambda}+\widetilde{K}\Gamma^{\mu}{}_{\lambda\delta}\widetilde{K}^{-1}{V'}^\delta={V'}^\mu{}_{;\lambda}.
$$
Note that the covariant derivative of a co-vector is
$$
V_{\mu;\lambda}=\frac{\partial V_\mu}{\partial {x'}^\lambda}-\Gamma(S',S)^{\rho}{}_{\lambda\mu}V_\rho,
$$
and the covariant derivative of a tensor follows the rule that each upper index adds a $\Gamma$ term and each lower index subtracts one. For this reason, the covariant derivative of the metric tensor (by our calculation above) vanishes.
Again from the definition of the covariant derivative we get that the \emph{equation of parallel transport} is now:
$$
\frac{\mathrm{d} V^\mu}{\mathrm{d}\tau_0}=-\Gamma(S',S)^{\mu}{}_{\lambda\nu}\frac{\mathrm{d}{x'}^\lambda}{\mathrm{d}\tau_0}V^\nu.
$$
From this it follows that the change produced by parallel transport along a side $\delta {x'}^\beta$ of a small closed parallelogram is
$$
\delta V^\alpha =-\Gamma^{\alpha}{}_{\beta\nu}(S',S)V^\nu\delta {x'}^\beta
$$
and thus the total change around a small closed parallelogram with sides $\delta a^\mu$, $\delta b^\nu$ is
$$
\delta V^\alpha=\left(\Gamma^{\alpha}{}_{\beta\nu;\rho}(S',S)V^\nu+\Gamma^{\alpha}{}_{\beta\nu}(S',S)V^\nu{}_{;\rho}- \Gamma^{\alpha}{}_{\rho\nu;\beta}(S',S)V^\nu-\Gamma^{\alpha}{}_{\rho\nu}(S',S)V^\nu{}_{;\beta}\right)\delta a^\beta\delta b^{\rho}
$$
implying that
$$
\delta V^\alpha=R(S',S)^{\alpha}{}_{\sigma\rho\beta}V^{\sigma}\delta a^\beta\delta b^{\rho}.
$$
Here $R(S',S)^{\alpha}{}_{\sigma\rho\beta}$ is the \emph{Riemann curvature tensor} defined by
$$
R(S',S)^{\alpha}{}_{\sigma\rho\beta}:=\Gamma(S',S)^{\alpha}{}_{\beta\sigma;\rho}-\Gamma(S',S)^{\alpha}{}_{\rho\sigma;\beta} +\Gamma(S',S)^{\alpha}_{\rho\nu}\Gamma(S',S)^{\nu}_{\sigma\beta}-\Gamma(S',S)^{\alpha}_{\beta\nu}\Gamma(S',S)^{\nu}_{\sigma\rho}.
$$
The Ricci tensor and the scalar curvature are defined by
$$
R(S',S)_{\sigma\beta}:=R(S',S)^{\alpha}{}_{\sigma\alpha\beta} \mbox{ and } R(S',S):=R(S',S)^\sigma{}_\sigma,
$$
respectively.
\subsubsection{Einstein's equation}
As we saw in the previous paragraphs, all the notions of general relativity can be defined on a time-space-manifold; thus every equation between them is well defined. On the other hand, Einstein's equation takes into consideration the facts of physics and hence contains parameters which cannot be changed. Fortunately, we noted earlier that the covariant derivative of our metric tensor vanishes. Thus the covariant derivative of its inverse also vanishes, and hence we can write Einstein's equation with the \emph{cosmological constant} $\Lambda$, too. The equation is
formally the same as the original one, but it contains a new (undetermined) parameter, the matrix $\widetilde{K}$ of the shape-function. It is:
$$
R(S',S)^{\mu\nu}-\frac{1}{2}g(S',S)^{\mu\nu}R(S',S)-\Lambda g(S',S)^{\mu\nu}=\frac{8\pi G}{c^4}T^{\mu\nu},
$$
where the parameter $G$ can be adjusted so that the active and gravitational masses are equal and $T^{\mu\nu}$ is the \emph{energy-momentum tensor}.
\section{Introduction}
\object{M~33~X$-$7}\ (hereafter X$-$7) was detected as a variable source by the {\it Einstein}\
observatory with a maximum luminosity in the 0.15--4.5~keV band
(assuming an absorption column of \ohcm{21}) that
exceeds \oergs{38}
\citep[][]{1981ApJ...246L..61L,1983ApJ...275..571M}.
The source stayed active in all following observations.
Its variability was explained by
an eclipsing X-ray binary (XRB) with an orbital period of 1.7~d and an eclipse duration
of $\sim$0.4~d \citep{1989ApJ...336..140P,1993ApJ...418L..67S,1994ApJ...426L..55S}.
Based on {ROSAT}\ and {ASCA}\ data
\citep[][hereafter DCL99]{1997AJ....113..618L,1999MNRAS.302..731D}, the orbital
period was found to be twice as long. DCL99 described the shape of the eclipse
by a slow ingress
($\Delta \Phi_{\rm ingress} = 0.10\pm0.05$),
an eclipse duration of $\Delta \Phi_{\rm eclipse} = 0.20\pm0.03$ and a fast
eclipse egress ($\Delta \Phi_{\rm egress} = 0.01\pm0.01$) with an ephemeris for
the mid-eclipse time of HJD~244\,8631.5$\pm$0.1 + N$\times$(3.4535$\pm$0.0005).
DCL99 also report 3$\sigma$ evidence for a 0.31~s pulse period. They come to
that conclusion by splitting PSPC and HRI data into 500~s intervals of continuous
data where X$-$7\ was positively detected and calculating the summed Rayleigh power
spectrum. The linearly binned power spectrum showed a significant excess (at
99.9 per cent confidence) at the proposed period when compared to simulated data
assuming a Poisson distribution with the X$-$7\ mean flux. The signal is broader
than what would be expected from a simple sinusoidal pulse. It was not possible
to check if the variability is coherent. DCL99 conclude ``Although pulsed
emission seems a reasonable assumption, the power excess could arise from
variability of a different nature (e.g. broad-band variability increasing the
chance of spurious detection).''
The orbital period, pulse period and observed X-ray luminosity are remarkably
similar to those of the Small Magellanic Cloud neutron star XRB \object{SMC X$-$1}
\citep{2000A&AS..147...25L}.
X$-$7\ was the first and only identified eclipsing accreting binary system
with an X-ray source in an external galaxy other than the Magellanic Clouds before
the detection of similar behavior based on {XMM-{\it Newton}}\ and {\it Chandra}\ data of the NGC~253
X-ray source RX~J004717.4-251811 \citep{2003A&A...402..457P}.
\citet[][hereafter PMM2004]{2004A&A...413..879P} analyzed several observations of the
{XMM-{\it Newton}}\ \object{M~33}\ survey and an archival {\it Chandra}\ observation where X$-$7\ was in the
field of view. The observations cover a large part of the 3.45 d orbital period,
however, not the eclipse ingress and egress.
PMM2004 detected emission of X$-$7\ during eclipse and a soft X-ray
spectrum of the source out of eclipse that can best be described by
bremsstrahlung or disk blackbody models. No significant regular
pulsations of the source in the range 0.25--1000~s were found. The
average source luminosity out of eclipse was 5\ergs{37} (0.5--4.5~keV, corrected
for Galactic foreground absorption).
In a special analysis of DIRECT\footnote{For information on the DIRECT project
see {\tt
http://cfa-www.harvard.edu/\~\/kstanek/DIRECT/}.} observations PMM2004
identified as the optical
counterpart a B0I to O7I star of 18.89 mag in V which
shows the ellipsoidal light curve of a high mass X-ray binary (HMXB)
with the X$-$7\ binary period. Based on the location of the X-ray eclipse and the
optical minima, PMM2004 derived an improved binary ephemeris and argued that the compact
object in the system is a black hole. Those authors reached this conclusion
based on the mass of
the compact object derived from orbital parameters and the optical
companion mass, the lack of pulsations, and the X-ray spectrum.
X$-$7\ would be the first detected eclipsing high mass black hole XRB.
The {\it Chandra}\ ACIS-I survey of \object{M~33}\ (ChASeM33) is a very large program which
will accumulate in seven deep pointings, each 200 ks in length,
a total exposure of 1.4~Ms. During several of these pointings
X$-$7\ is in the field of view. We report here on ChASeM33 observations of
X$-$7\ spread over just 20 binary orbits,
which resolved for the first time the eclipse ingress and egress
and allowed us to constrain the light curve of X$-$7\ for binary phases around
eclipse. Preliminary results of the first
observations were announced by \citet{2005ATel..633....1S}.
In addition we identify the source on archival HST WFPC2
images. Throughout the paper, we assume a distance to \object{M~33}\ of 795~kpc
\citep{1991PASP..103..609V}.
\section{Chandra observations and results}
X$-$7\ was sampled by the ACIS-S3 chip in one {\it Chandra}\ observation
and
by the I0, I1 and I2 chips during 11 additional observations.
Table~\ref{tbl:obs} summarizes these observations
giving observation identification (ObsID) in column 1, observation start date (2),
elapsed time
(3), the ACIS chip covering X$-$7\ (4), the offset of X$-$7\ from the pointing
direction (5), and the X$-$7\ binary phase during the observation (6) using ephemeris that will be
discussed in Sect.~\ref{sec:tim} and \ref{sec:eph}. The source brightness varied from
$\sim$3\expo{-3} ct s$^{-1}$\ to $\sim$0.2 ct s$^{-1}$\ normalized to ACIS-I on-axis.
Of the pointings considered here,
the Field 5 position of X$-$7\ is located close enough to the optical axis
to result in significant pile-up,
particularly during the high phase. To correct the light curve for
the effects of pile-up, we used the best-fit model for unpiled data from
the other pointings (see
Sect.~\ref{sec:spec}), and applied a pileup(phabs(diskbb)) model within
XSPEC. We estimated the pile-up parameters by fitting simultaneously
the high state data for ObsID 6382 (insignificant pile-up) and
ObsIDs 7170 and 7171 (significant pile-up). The phabs and diskbb
models were set to their best fit values from the unpiled spectra;
the diskbb normalizations were tied for all three datasets.
All of the pile-up model parameters except the grade morphing
parameter $\alpha$ were frozen. For a description of the pile-up model see
\citet{2001ApJ...562..575D}. The
pile-up parameters were set to their default values, but the fr\_time
parameter was frozen at 3.2 for ObsIDs 7170, 7171, and
at 0 for 6382; this has the effect of turning off the pile-up
model component for ObsID 6382. The resulting reduced
$\chi^2$ of the fit was 0.97 and the corresponding value for
$\alpha$ was 0.692$^{+0.114}_{-0.116}$ (90\% confidence limits).
Using the fitted value for $\alpha$, a correction
curve was generated by evaluating the ratio of the XSPEC
``Model predicted rate'' without pile-up (fr\_time=0) to the corresponding
rate with pile-up (fr\_time=3.2) as a function of the model predicted rate
(with pile-up). This ratio was used to correct the observed count rates of
ObsIDs 6384, 7170, and 7171 for the effects of pile-up.
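The correction step itself is simple once the two model-predicted rate curves are available: the ratio of the unpiled to the piled predicted rate is tabulated as a function of the piled rate and interpolated at the observed rates. A schematic sketch (the rate values below are placeholders, not the actual ChASeM33 products):
\begin{verbatim}
import numpy as np
from scipy.interpolate import interp1d

# model-predicted rates (ct/s): 'piled' with fr_time = 3.2, 'unpiled' with
# fr_time = 0 (placeholder values for illustration only)
rate_piled   = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.25])
rate_unpiled = np.array([0.02, 0.051, 0.105, 0.162, 0.222, 0.285])

correction = interp1d(rate_piled, rate_unpiled / rate_piled,
                      kind='linear', fill_value='extrapolate')

observed = np.array([0.03, 0.12, 0.19])      # observed (piled) count rates
print(observed * correction(observed))       # pile-up corrected rates
\end{verbatim}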
For the spectral analysis we used standard Level 2 event files cleaned for
columns with higher background rates than adjacent columns.
In the observations of
Fields 4 and 5, counts from X$-$7\ are rejected when the source is moving across
rejected columns (due to the satellite dithering). This effect creates spurious
periods in period analysis and can reduce the counts in 1000~s integration
intervals by varying amounts (up to more than 30\%). Therefore, we created new
Level 2 event files for broad band time variability analysis that did not
reject these columns. We correct for the detection efficiency at different
off-axis angles using factors derived from response files for the different CCDs and
off-axis positions assuming the disk-blackbody spectrum derived in
Sect.~\ref{sec:spec}.
We normalized the count rates to a CCD 3 on-axis rate.
The data analysis was performed using tools in the ESO-MIDAS v05SEPpl1.0,
EXSAS v03OCT\_EXP, CIAO v3.2 and LHEASOFT v5.3
software packages as well as the imaging application DS9 v3.0b6.
\subsection{Time variability}\label{sec:tim}
During ObsIDs 6378 and 7171 we observed transitions of X$-$7\ into and out of
eclipse, respectively. We sampled background-corrected, solar-system barycenter corrected
light curves of X$-$7\ with a time resolution of 1000~s. To increase the signal to noise
specifically in the far off-axis ObsID 6378, we
restricted the analysis to the 0.5--5~keV band which covers most of the source
flux. Further sub-dividing this energy
band into a hard and soft band did not show any significant hardness ratio
changes of the in-
and egress behavior as could be expected based on {EXOSAT}\ observations of LMC~X$-$4
and Her~X$-$1 \citep[see Figs.~78 and 79 in ][]{1991PhDT.......139D}.
To determine eclipse start and end times, we approximated the light curves
assuming constant count rates within and out-of eclipse and a linear transition
in between using a $\chi^2$ minimization technique. We searched for $1\sigma$ errors
of the eclipse start and end time, respectively, assuming each of them to be the only interesting
parameter of the fit.
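The fit amounts to a piecewise-linear model with an out-of-eclipse level, an in-eclipse level and a linear ramp between two break times, minimized in $\chi^2$ over the break times. A sketch of such a fit on a synthetic light curve (all numbers are illustrative; the real analysis uses the binned {\it Chandra}\ rates and their errors):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def ingress_model(t, t_start, t_end, rate_out, rate_in):
    # constant out-of-eclipse rate, linear ramp, constant in-eclipse rate
    ramp = rate_out + (rate_in - rate_out) * (t - t_start) / (t_end - t_start)
    return np.where(t < t_start, rate_out, np.where(t > t_end, rate_in, ramp))

t = np.arange(0.0, 40.0e3, 1000.0)           # 1000-s bins (seconds)
rng = np.random.default_rng(1)
rate = ingress_model(t, 15.0e3, 27.75e3, 0.15, 0.003) \
       + rng.normal(0.0, 0.01, t.size)
err = np.full(t.size, 0.01)

def chi2(p):
    return np.sum(((rate - ingress_model(t, *p)) / err) ** 2)

fit = minimize(chi2, x0=[14.0e3, 26.0e3, 0.1, 0.01], method='Nelder-Mead')
print(fit.x, chi2(fit.x))    # best-fit break times and levels, minimum chi^2
\end{verbatim}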
In ObsID 6378 the transition into eclipse lasted 12.75~ks and ended at
HJD~245\,3635.4110$\pm$0.0037. The transition out of eclipse in
ObsID 7171 started at HJD~245\,3642.8272$\pm$0.0052 and lasted
for 10.52~ks. From these ingress and egress times separated by two orbital
periods, we directly derive the mid eclipse ephemeris of the eclipse in between
to HJD~$245\,3639.119\pm0.005$. Assuming this epoch as phase zero and a
binary period of 3.453014~d (see Sect.~\ref{sec:eph}), we calculated
light curves of all X$-$7\ observations (Fig.~\ref{fig:lc}).
We also determine an eclipse duration of less than
0.147$\pm$0.006 in phase corresponding to an eclipse half angle of
$26.5\degr\pm1.1\degr$.
ObsIDs 7196 and 7199 fully fall into eclipse. ObsID 6384 at phase
0.83 to 0.90 indicates a much longer transition into eclipse than ObsID 6378.
ObsID 7198 shows dipping behavior well before eclipse with a return to
the out-of eclipse intensity level at the end. ObsID 6382 shows a second
egress from eclipse which is significantly faster than the one
16 orbits earlier (ObsID 7171). However, the observation starts during egress and
the phase range within eclipse is not covered. ObsID 7226 covers eclipse
ingress at the end of the same orbit and again shows strong dipping well before
the ingress. Eclipse egress and ingress times are consistent with the
times derived above. Generally speaking the variability of X$-$7\ before
eclipse (phase 0.7 to 0.9) seems to be much more pronounced in individual observations
than after eclipse (phase 1.1 to
1.4). Average count rates in eclipse and out of eclipse are 0.003~ct s$^{-1}$\ and
0.15~ct s$^{-1}$, respectively. This out-of-eclipse count rate varies in different
binary orbits by factors of 1.3 and there are residual short term fluctuations
that can be described by dips with a similar amplitude and a duration of
several 1000~s also outside the pre-eclipse phase.
To search for pulsations we extracted light curves in the 0.5--5~keV band from
the longest observations after eclipse, ObsIDs 6382 and 7170. We created power density
spectra in the frequency range of 10$^{-4}$--0.15~Hz and found no significant
periodic signal with a 3$\sigma$ upper limit of 5.3\% for sinusoidal variations. A power
density spectrum derived from ObsID 6382 by adding the power spectra from
23 intervals of 3319 s length (1024 time bins with the instrument resolution of
3.241 s) is shown in Fig.~\ref{fig:pds}.
The power spectrum is flat at a value of 2.
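For reference, the construction of such an averaged, Leahy-normalized power density spectrum from an evenly binned light curve is sketched below; a purely Poissonian light curve is simulated only to show that the normalization indeed yields a flat level of 2 (segment length and time resolution mimic the values quoted above, the count rate is an illustrative assumption):
\begin{verbatim}
import numpy as np

dt, nbins, nseg = 3.241, 1024, 23    # resolution (s), bins/segment, segments
mean_rate = 0.15                     # ct/s, illustrative

rng = np.random.default_rng(0)
powers = []
for _ in range(nseg):
    counts = rng.poisson(mean_rate * dt, nbins)
    ft = np.fft.rfft(counts)
    # Leahy normalization: P = 2 |FT|^2 / N_photons; Poisson noise level = 2
    powers.append((2.0 * np.abs(ft) ** 2 / counts.sum())[1:])

freq = np.fft.rfftfreq(nbins, dt)[1:]
pds = np.mean(powers, axis=0)
print(freq.min(), freq.max(), pds.mean())   # mean power close to 2
\end{verbatim}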
\subsection{Energy spectra}\label{sec:spec}
We analyzed energy spectra of X$-$7\ for all observations. For ObsIDs 6378
and 7171, times out of eclipse, during ingress and egress, and in eclipse were
handled
separately. Absorbed power-law, bremsstrahlung and disk-blackbody models
(which were found to best represent the {XMM-{\it Newton}}\ and {\it Chandra}\ spectra
analyzed by PMM2004) were first fit to the individual spectra. The power-law fit to
the spectrum obtained from ObsID 6382 (which has the highest statistical quality)
yields an unacceptable fit with a reduced $\chi^2$ of 1.92 while bremsstrahlung and
disk-blackbody models result in $\chi^2_r$ of 1.45 and 1.32, respectively.
The derived parameters are consistent within the errors for all spectra including the
spectra accumulated during eclipse ingress and egress and during ObsID 7198
when the source shows high variability. Therefore, we performed a simultaneous fit
with the disk-blackbody model to the spectra of eight out-of-eclipse observations
excluding times when the X-ray source was in eclipse and forcing the absorbing
column density to be the same for all observations (this reduces the number of free
fit parameters compared to the individual fits). The resulting inner disk
temperature was systematically higher (at $\sim$1.3 keV) for ObsIDs 6384,
7170 and 7171: these are the observations
which include X$-$7\ nearly on-axis. The most likely
reason for the ``harder'' spectra is pile-up and we therefore excluded these three
spectra from further analysis. The fit to the remaining five spectra from ObsIDs
6378, 6382, 6386, 7197 and 7198 shows no significant differences in the inner disk
temperature and therefore this parameter was also forced to be the same in the
simultaneous fit. We refit the power-law and bremsstrahlung models
for comparison. The best fit is obtained with the disk-blackbody model yielding
a $\chi^2_r$ of 1.10. The $\chi^2_r$ for the best fitting bremsstrahlung and
power-law models are 1.16 and 1.44, respectively.
The derived spectral parameters are similar to the ones reported by PMM2004:
disk-blackbody with inner disk temperature kT = 0.99$\pm$0.03~keV and
N$_H$ = (0.95$\pm$0.10)$\times 10^{21}$ cm$^{-2}$;
bremsstrahlung with temperature kT = 2.74$\pm$0.13~keV and
N$_H$ = (2.05$\pm$0.12)$\times 10^{21}$ cm$^{-2}$;
power-law with photon index $\gamma$ = 2.38$\pm$0.05 and
N$_H$ = (3.32$\pm$0.17)$\times 10^{21}$ cm$^{-2}$.
The best fit disk-blackbody model is shown in Fig.~\ref{fig:spec}.
The normalization of the disk-blackbody model is given by
K = $(r_{\rm in}/{\rm d})^2$(cos $i$) with the inner disk radius
r$_{\rm in}$ in km,
the source distance d in units of 10 kpc and the disk inclination $i$.
The spectra show variations by a factor of $\sim$2 in normalization with
K=0.054 for ObsID 6378 and relative factors 0.58, 0.94, 1.13 and 1.37 for
ObsIDs 7198, 7197, 6386 and 6382, respectively.
The N$_H$ values for the models discussed above clearly indicate absorption
within \object{M~33}\ or intrinsic to the source in addition to
the Galactic value \citep[5.86 and 6.37 $\times 10^{20}$ cm$^{-2}$ in the direction
of X$-$7\ according to][respectively]{1990ARA&A..28..215D,1992ApJS...79...77S}.
Absorbed and unabsorbed source fluxes in the 0.3--10 keV band are in the range
(5.4--12.6)\ergcm{-13} and (6.2--14.7)\ergcm{-13}, respectively,
based on the best fitting disk-blackbody model. These fluxes correspond to
source luminosities of (4.1--9.6)\ergs{37} and (4.7--11.2)\ergs{37},
respectively.
The N$_H$ value of the best fitting disk-blackbody model indicates that
X$-$7\ lies on the near side of \object{M~33} as the absorbing column within \object{M~33}\ can
be determined to $\sim$2.2\hcm{21} from a
$47\times 93$ arcsec half power beam width H{\sc i} map
\citep{1980MNRAS.190..689N}. From the N$_H$ value we can
compute the expected optical extinction $A_{\rm V} = 0.53\pm0.06$ mag
and $E(B-V) = 0.18\pm0.02$ using the standard relations
\citep{1995A&A...293..889P}. These numbers are in the range given by
PMM2004 who assumed that we see X$-$7\ through less than half the absorbing
column within \object{M~33}.
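The conversion used here is the standard linear scaling between absorbing column and optical extinction; a quick check with the relation $N_{\rm H}\simeq1.79\times10^{21}\,A_{\rm V}$~cm$^{-2}$ \citep{1995A&A...293..889P} reproduces the numbers quoted above (the total-to-selective extinction ratio $R_V\simeq3.1$ in the last step is our assumption):
\begin{verbatim}
N_H, dN_H = 0.95e21, 0.10e21          # cm^-2, best-fit disk-blackbody values

# Predehl & Schmitt (1995): N_H ~ 1.79e21 cm^-2 per magnitude of A_V
A_V, dA_V = N_H / 1.79e21, dN_H / 1.79e21
print(f"A_V = {A_V:.2f} +/- {dA_V:.2f} mag")   # 0.53 +/- 0.06 mag

# with R_V ~ 3.1 (assumed) this corresponds to E(B-V) of roughly 0.17-0.18 mag
print(f"E(B-V) ~ {A_V / 3.1:.2f} mag")
\end{verbatim}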
\subsection{Improved position}
X$-$7\ was located in four of the five ChASeM33 fields observed so far.
The source was closest to on-axis (1.58\arcmin\ off-axis) in
the Field 5 observations (see Table~\ref{tbl:obs}).
This results in the most compact PSF
and thus the most reliable position determination. To
refine the absolute astrometry of the Field 5 data, we searched
the USNO-B1.0 and 2MASS catalogs for close positional matches with
X-ray sources. Similarly, to improve the statistics for X-ray centroiding,
we worked with a merged dataset for ObsIDs 6384, 7170, and 7171.
We identified 9 candidate optical/2MASS objects. Six were rejected
because of far off-axis positions or small number of counts. The
remaining candidates were $\ge 7^\prime$ off-axis; the most useful was
a 2MASS object ($\sim$250 counts, 0.5--5~keV) which was
7.8\arcmin\ off-axis. We enhanced the number of candidates for registration by
adding two isolated centrally brightened supernova remnants (SNRs) with good radio positions
\citep[sources 57 and 64 from the list of][]{1999ApJS..120..247G} assuming that
the finite size of SNRs -- and
potential differences in the X-ray versus radio distribution --
do not bias the position determination.
We determined the X-ray centroids based on an
iterative sigma-clipping algorithm applied to the 0.5--5~keV X-ray data.
Based on an initial position estimate and clipping radius, the
standard deviation of the radial distribution is evaluated, and
points greater than a given number of standard deviations are rejected.
The iteration of centroiding and rejecting events continues until the
centroid converges to within a specified tolerance or for a fixed
number of iterations (10). The difference in sky coordinates between
the catalog position and the X-ray centroid position was evaluated
for each source. The mean offset was $\Delta x=(0.23\pm0.37)\arcsec,
\Delta y=(0.32\pm0.21)\arcsec$. The centroid of the X$-$7\ source was evaluated
in the same way, and the resulting offset was applied to correct
the sky position. Finally, the corresponding celestial coordinates
were evaluated using the CIAO tool dmcoords; the aspect solution
files (asol1 files) were used in order to correct the positions
using the aspect offsets. The resulting X$-$7\ position is
RA$_\mathrm{J2000} = 01^h33^m34\fs12, \delta_\mathrm{J2000} =
+30\degr32\arcmin11\farcs6$
with a combined error of 0.5\arcsec. The position is within 2$\sigma$ of that
given by PMM2004 based on a registration using just one SNR.
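The iterative sigma-clipping centroid described above can be summarized in a few lines; the sketch below operates on event sky coordinates and uses illustrative clipping parameters ($3\sigma$, at most 10 iterations), not necessarily the exact values adopted in the analysis:
\begin{verbatim}
import numpy as np

def clipped_centroid(x, y, x0, y0, r_clip, nsigma=3.0, max_iter=10, tol=1e-3):
    # iterative sigma-clipped centroid of event positions (x, y)
    cx, cy = x0, y0
    keep = np.hypot(x - cx, y - cy) < r_clip       # initial clipping radius
    for _ in range(max_iter):
        sigma = np.hypot(x[keep] - cx, y[keep] - cy).std()
        keep = np.hypot(x - cx, y - cy) < nsigma * sigma + 1e-12
        new_cx, new_cy = x[keep].mean(), y[keep].mean()
        if np.hypot(new_cx - cx, new_cy - cy) < tol:
            return new_cx, new_cy
        cx, cy = new_cx, new_cy
    return cx, cy

# toy example: a point source on a flat background (illustrative only)
rng = np.random.default_rng(2)
src = rng.normal([100.0, 200.0], 0.5, size=(300, 2))
bkg = rng.uniform([80.0, 180.0], [120.0, 220.0], size=(200, 2))
ev = np.vstack([src, bkg])
print(clipped_centroid(ev[:, 0], ev[:, 1], 101.0, 199.0, 5.0))
\end{verbatim}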
\section{Optical observations and results}
\object{M~33~X$-$7}\ is located along the line of sight to the
dense OB association HS 13 \citep{humphreys1980_m33_h2reg}: it was
identified by PMM2004 with a
specific star within this association, as a result of detection of
regular (ellipsoidal) variations in B and V at the X-ray period. The
OB association has been imaged with {\it HST} using WFPC2 in three
filters: F336W, F439W, and F555W. The observing details are listed
in Table \ref{tab_hst}. In an attempt to learn more about the
optical counterpart to X$-$7, we retrieved relevant datasets from the
MAST archive. Data retrieved from the archive are automatically
reprocessed with the latest calibration files. After inspecting the
individual exposures, we combined the two exposures taken with the
F336W filter and the two taken in F439W (using the STSDAS task
gcombine) to eliminate
the effects of cosmic rays on the images.
The position of the X-ray source as determined with {\it Chandra}\ is shown
in Fig.~\ref{fig:opt}. Since the error in the astrometric solution of images
in the HST pipeline is typically $\sim$1.5\arcsec\,--2\arcsec, we
attempted to reduce the positional uncertainty by registering the WFPC2
images to the USNO-B1.0 or 2MASS frames. Given the small field of view and
greatly superior spatial resolution of the WFPC2 images, many
sources identified in the USNO-B1.0 or 2MASS catalogues are resolved into
multiple objects, making a unique match between stars in the image and
sources in the catalogue difficult. Therefore, we performed our
astrometric corrections in two steps, taking advantage of the
much larger field of view of the KPNO Mosaic B image of \object{M~33}\
\citep{massey2002}. First, we identified a sample of 25 isolated, bright USNO-B1.0
stars within 4\farcm5 of X$-$7\ to compute the shift required to bring
the Mosaic B image into the USNO-B1.0 frame. The RMS positional error
of these stars in the corrected Mosaic image was unacceptably large (0\farcs85).
We then tried the same procedure using 21 bright 2MASS stars located in isolated
areas in the KPNO Mosaic B image. The RMS positional error
of these stars in the corrected Mosaic image was 0\farcs16.
We then used 10 bright,
isolated stars in common between the Mosaic B and F439W images (restricting the
selection to the region covered by the WFPC2 CCD in which the X$-$7\ counterpart
was located) to compute the shift
required.
The RMS positional error in the shifted HST image was 0\farcs13. Combining
the 0\farcs1 absolute uncertainty of the 2MASS position with the
uncertainties listed above, the absolute astrometric uncertainty of the
final registered F439W image is 0\farcs23.
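The combined uncertainty is simply the quadrature sum of the three registration terms (a one-line check):
\begin{verbatim}
import numpy as np
# 2MASS absolute accuracy, Mosaic-to-2MASS RMS, HST-to-Mosaic RMS (arcsec)
print(np.sqrt(0.10**2 + 0.16**2 + 0.13**2))   # ~0.23 arcsec
\end{verbatim}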
We applied the calculated shifts above to the HST image.
Fig.~\ref{fig:opt} shows a 10\arcsec\ $\times$ 10\arcsec\ field from the F439W image
centered on \object{M~33~X$-$7}. The black solid circle shows our best estimated position
for the X-ray source from the {\it Chandra}\ image. The error circle of X$-$7\
is 0\farcs5 in radius (see above). The HST positional accuracy is indicated by a
cross. We also show the error quoted by PMM2004 in red. The error circles
overlap with a bright star, coincident with that proposed by PMM2004 (based also
on time variation arguments) as the donor star for the compact object.
We also carried out aperture photometry of the stars in the
{\it HST} field (using the IRAF procedures daofind and phot).
We find that the optical counterpart to X$-$7\
with the WFPC2 resolution is not a blend of stars.
PSF fits to the source in the F336W, F439W, and F555W images give FWHM
compatible with the other point-like sources in the images. The star
about 0\farcs9 to the South on the other hand is just resolved into at least
two equally
bright sources which are separated by $\sim$0\farcs2. We can rule out the
presence of another star of similar brightness at the position of the optical
counterpart of X$-$7 to this distance, which corresponds to a projected separation
at the distance of \object{M~33}\ of 0.8~pc.
The optical counterpart has
apparent magnitudes of 17.6, 18.2, and 18.9 for the F336W, F439W, and
F555W filters, respectively, in the STMAG system. The colors derived from these filters
for the counterpart to X$-$7\ are typical of the other bright
stars ($m_{\rm V}<20.5$) in HS 13 as observed with WFPC2. The F336W, F439W,
and F555W filters are centered approximately on the corresponding U, B
and V filters. Adopting a color
transformation of 0.5~mag for U, 0.66~mag for B, and 0.03~mag for V
\citep{2002datahandbook}, we find $m_{\rm U}$ of 18.1~mag,
$m_{\rm B}$ of 18.8~mag, and $m_{\rm V}$ of
18.9~mag (i.e. U--B of -0.7~mag, B--V of -0.1~mag) for the optical
counterpart to X$-$7.
\section{Discussion}
The well sampled orbital light curve of X$-$7\ indicates stronger variability
before eclipse compared to after eclipse
(Fig.~\ref{fig:lc}). Variability at this phase is often observed in HMXBs and
is explained by the viewing geometry through the innermost regions of the
wind of the companion and dense material following the compact object in its orbit
\citep[e.g.][]{1992A&A...263..241H}. Dense structures are created by the gravitational
and radiative interactions of the compact object with the stellar wind
\citep{1990ApJ...356..591B,1991ApJ...371..684B}.
This behavior is also reflected in the on-average
longer eclipse ingress time and longer eclipse duration derived by DCL99.
Similarly to
PMM2004 we find residual emission from the source during eclipse.
Residual emission during eclipse was measured from most
eclipsing XRBs and can be explained by re-processing
of primary photons from the compact X-ray source in an extended accretion
disk corona (which is not fully occulted) or by scattering in the companion
atmosphere/stellar wind. Residual emission of up to $\sim$10\% of the
uneclipsed flux was reported
\citep{1991A&A...252..272H,1992ApJ...389..665L,1996PASJ...48..425E}
depending on system geometry and wind density. The
X$-$7\ residual emission of $\sim$4\% is well within these limits.
\subsection{Improved ephemeris}\label{sec:eph}
DCL99 modeled the folded light curve from X$-$7\ as a constant flux plus linear ingress and
egress plus an eclipse interval with zero flux. It is obvious from their data (see
their Fig.~1) that the eclipse egress is better determined than the eclipse center and
duration. These strongly depend on the shape of the pre-eclipse dips contained in the
light curve as can be seen from the resolved pre-eclipse behavior in the {\it Chandra}\ data
(Fig.~\ref{fig:lc}).
DCL99 did not determine the eclipse parameters from individual
eclipses but only from the average light curve due to limited statistics.
The time of eclipse egress is the best determined parameter.
Unfortunately, in the paper they do not give this parameter separately but only the
center of eclipse and length of eclipse. In the following we use eclipse egress
times to determine an improved orbital period $P$ and a possible period
derivative $\dot{P}$.
Due to limited phase coverage, the {XMM-{\it Newton}}\ observations do not resolve eclipse ingress or
egress. The time of eclipse egress can only be constrained to $<$0.02 in phase.
Based on these data PMM2004 restricted the time of eclipse egress to
HJD~$245\,1760.953\pm0.035$ (note typographical error in egress time in PMM2004,
but the calculation
used the number given here) and determined a time of mid-eclipse assuming the
eclipse shape parameters of DCL99. With this mid-eclipse epoch and the one given
by DCL99, PMM2004 determined an improved orbital period.
The maximum eclipse duration of 0.147$\pm$0.006 (Sect.~\ref{sec:tim}) determined from
individual observations is -- as expected -- significantly
shorter than the one given by DCL99 (0.20$\pm$0.03) from average orbit fitting.
From the DCL99 ephemeris it is not possible to derive a well defined eclipse
egress time. The same is true for the {\it Einstein}\ results
\citep[][hereafter PRC89]{1989ApJ...336..140P}.
We therefore decided to re-analyze relevant {ROSAT}\ and {\it Einstein}\ data.
{ROSAT}\ PSPC observations of \object{M~33}\ were short compared to the HRI observations and
did not cover the X$-$7\ eclipse egress. We therefore restricted our re-analysis
to
{ROSAT}\ HRI data. After screening for high background, we combined data with
continuous observation intervals which lead to variable integration times of
typically 1800~s (minimum 41~s, maximum 3791~s) depending on the duration of the
scheduled observation and background. DCL99 in contrast grouped the
data in 3000~s averages. Only once is the eclipse egress closely monitored.
In the {ROSAT}\ interval corresponding to
ObsID 600488h, X-7 was still in eclipse while in the following interval
(corresponding to ObsID 600489h), less than 0.04d later, X-7
already featured a count rate that indicated the source was out
of eclipse. This allowed us to restrict the time of eclipse egress to
HJD~$244\,9571.724\pm0.018$.
In another case during ObsID 600020h-1, observations in eclipse and out of
eclipse are separated by 0.26 d, which does not allow us to further constrain
eclipse egress times.
PRC89 reported {\it Einstein}\ IPC and HRI observations of X$-$7.
For the HRI observations PRC89 combined several continuous observation
intervals to get significant data. For the IPC observations PRC89 simply
integrated over individual continuous observations. Due to the better statistics
in the IPC observations we considered only IPC eclipse egress coverages
(i.e. ObsID I2090, see also PRC89, Fig.~1). In images of the observation
X$-$7\ is only visible in the energy band 0.6--2.8~keV (PI 4--9).
We therefore restricted the analysis to this energy band. We selected
extraction position and area by comparison with the close-by bright central source
X$-$8. We specifically investigated the
``first rising episode at day 1.5'' as identified by PRC89. In contrast to the
report by PRC89, X$-$7\ count rates
during this period do not show increasing flux
but the source intensity is compatible with zero during all three intervals.
In the next set of observation intervals at
around day 2 the source is clearly out of eclipse. This indicates an eclipse
egress between HJD~244\,4087.840 and HJD~244\,4088.255.
Combining the {ROSAT}\ eclipse egress boundaries with the {\it Chandra}\ ephemeris
suggests an
orbital period of 3.453014$\pm$0.000020~d. The lower panel of Fig.~\ref{fig:lc}
shows the {ROSAT}\ HRI light curve folded with the above period and assuming the
{\it Chandra}\ mid-eclipse epoch, for the same phase range as the {\it Chandra}\ data
above. As can be seen in Fig.~\ref{fig:orb}, these ephemerides are also
consistent with the boundaries determined for the {XMM-{\it Newton}}\ eclipse egress. However,
they seem to miss the boundaries determined above for the {\it Einstein}\ IPC observation.
If we assume a constant rate of change of the orbital period over this time, we
can model all eclipse egresses. The parabola shown in Fig.~\ref{fig:orb}
assumes a period of 3.45294~d at the {\it Chandra}\ epoch and orbital period decay
rate of $\dot{P}_{\rm orb}/P_{\rm orb} = -4\times10^{-6}$ yr$^{-1}$. It nicely
models the average eclipse egress times of {XMM-{\it Newton}}, {ROSAT}\ and {\it Einstein}. However, a smaller
$\dot{P}_{\rm orb}/P_{\rm orb} = -0.7\times10^{-6}$ yr$^{-1}$ with a period of
3.45302~d, or a much higher
$\dot{P}_{\rm orb}/P_{\rm orb} = -7.5\times10^{-6}$ yr$^{-1}$ with a period of
3.45285~d, would still be consistent with all the eclipse egress boundaries.
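To make the comparison of these solutions concrete, the following minimal sketch
(illustrative only, not the fit used for Fig.~\ref{fig:orb}; the reference epoch
and the cycle count are arbitrary) evaluates a quadratic ephemeris
$T_N=T_0+PN+\frac{1}{2}P\dot{P}N^2$ for the three
$(P,\dot{P}_{\rm orb}/P_{\rm orb})$ combinations quoted above:
\begin{verbatim}
# Illustrative sketch: quadratic ephemeris T_N = T0 + P*N + 0.5*P*Pdot*N**2
# for the three (P, Pdot/P) combinations quoted in the text; T0 is an
# arbitrary reference epoch and N an arbitrary cycle count.
cases = {                      # P [d] at the reference epoch, Pdot/P [1/yr]
    "best":  (3.45294, -4.0e-6),
    "small": (3.45302, -0.7e-6),
    "large": (3.45285, -7.5e-6),
}

def eclipse_time(N, P, pdot_over_p, T0=0.0):
    pdot = P * pdot_over_p / 365.25        # dP/dt, dimensionless
    return T0 + P * N + 0.5 * P * pdot * N * N

N = -2200   # roughly the cycle span between the XMM-Newton and Einstein egresses
for name, (P, rate) in cases.items():
    print(name, round(eclipse_time(N, P, rate), 3))
\end{verbatim}
Over this $\sim$2200-cycle baseline the three solutions differ by only
$\sim$0.2~d, which illustrates why the available egress boundaries cannot
discriminate between them.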
The derived orbital decay for X$-$7\ is well within the range of values
determined for other HMXBs like
Cen X$-$3
\citep[$(-1.738\pm0.004)\times10^{-6}$ yr$^{-1}$,][]{1992ApJ...396..147N},
SMC X$-$1
\citep[$(-3.36\pm0.02)\times10^{-6}$ yr$^{-1}$,][]{1993ApJ...410..328L}
or LMC X$-$4
\citep[$(-9.8\pm0.7)\times10^{-7}$ yr$^{-1}$, see e.g.][]{1996ApJ...456L..37S,2000ApJ...541..194L}.
Such rapidly decreasing periods in HMXBs are most likely caused by tidal
interaction between the compact object and its massive companion. As the orbit
decays the Roche lobe will descend into the companion's atmosphere and mass
transfer will increase to super-Eddington rates over a relatively short
time scale. In the end, the compact object is expected to
spiral into the envelope of the companion and in this way terminate the high-mass XRB
phase of the evolution \citep[see e.g.][]{1993ApJ...410..328L}.
\subsection{The optical companion}
The HST WFPC2 images clearly resolve the
dense OB association HS 13 \citep{humphreys1980_m33_h2reg}. The optical
counterpart is located to the North and is one of a pair of stars with similar
luminosities: these stars are separated by $\sim$0\farcs9. While the
suggested counterpart is presumed to be a single source, the source to the South
is a blend of at least 2 stars (elongation from SSE to NNW).
The HST observations were carried out within one hour on October 25, 1995
corresponding to binary phase 0.76 using the ephemeris given in
Sect.~\ref{sec:eph}. Phase 0.76 corresponds to the second maximum of the
ellipsoidal light curve. Assuming the parameters of the optical light curve by
PMM2004 (sinusoidal fit with 0.033 mag amplitude) the correction to X-ray
eclipse is +0.066 mag resulting in corrected magnitudes $m_{\rm B}$ of 18.9 mag,
and $m_{\rm V}$ of 19.0 mag for the optical counterpart to X$-$7.
Magnitudes and colours are very close (within 0.1 mag)
to values deduced by PMM2004. X$-$7\ is also included in the UBVRI photometry of
stars in \object{M~31}\ and \object{M~33}\ from the survey of Local Group galaxies, recently
published by \citet{2006astro.ph..2128M}. They give a position (registered with
the USNO B1.0 frame) of RA$_\mathrm{J2000} = 01^h33^m34\fs18,
\delta_\mathrm{J2000} = +30\degr32\arcmin11\farcs5$, about 0\farcs3 ESE of the
position in our registered HST image (see Fig.~\ref{fig:opt}). Their
X$-$7\ magnitude and colours ($m_{\rm V} = 19.087\pm0.007$mag, $m_{\rm (B-V)} =
-0.084\pm0.10$mag, $m_{\rm (U-B)} =-1.057\pm0.11$mag) coincide with those
determined by us within 0.2 mag. Part of the discrepancy may be due to the
sampling at different binary phases.
The type and luminosity class of the star can be deduced from the absolute
optical magnitude and colour during eclipse when we see the optical surface that
is mostly undisturbed by gravitational effects, an expected accretion disk and
heating by the X-ray source. To derive the absolute magnitude, the measured
brightness has to be corrected for the distance ($-24.50$~mag for the assumed
distance of 795 kpc) and for interstellar extinction, and the colour has to be
corrected for reddening. These corrections have been estimated from the
absorption of the X-ray spectrum in Sect. 2.2. Based on the HST values the
companion star should have an absolute $M_{\rm V}$ of $-6.1$ mag and $(B-V)_0$ of
$-0.3$ mag. This corresponds to a giant star of spectral type O6III which
has a temperature of 39\,500~K,
a radius of 17 $R_{\sun}$ and a mass well above 20 $M_{\sun}$
\citep[see Appendix E in][]{1996QB461.C35......}.
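As a simple cross-check of this arithmetic, the short sketch below (illustrative
only; the extinction value is a placeholder standing in for the Sect.~2.2
estimate) combines the distance modulus and an extinction term:
\begin{verbatim}
import math

d_pc = 795e3                        # adopted distance to M 33 [pc]
m_V  = 19.0                         # eclipse-corrected apparent V magnitude
A_V  = 0.6                          # placeholder extinction derived from N_H

DM  = 5.0 * math.log10(d_pc / 10.0) # distance modulus, about 24.5 mag
M_V = m_V - DM - A_V
print(round(DM, 2), round(M_V, 1))  # close to the -6.1 mag quoted above
\end{verbatim}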
\subsection{M~33 X$-$7, an eclipsing black hole HMXB}
With the new eclipse duration and the better determination
of the companion type on the basis of the extinction combined with the colour
excess corrections derived from
the absorbing column of the X-ray spectrum of X$-$7, we can significantly
improve the mass estimate of the compact object compared to PMM2004. To do so we
correct the B and V light curves given in PMM2004 for extinction and colour
excess. We then model these light curves using the ``PHysics Of Eclipsing BinariEs"
program PHOEBE \citep{2005ApJ...628..426P} built on top of the widely used WD
program
\citep{1971ApJ...166..605W,1979ApJ...234.1054W,1990ApJ...356..613W}.
By adjusting the semi-major axis of the binary system we kept the radius of the
secondary star at 17$R_{\sun}$. We also fixed the temperature of the
secondary to 39\,500~K. We then fitted the luminosity of the secondary and the
mass ratio of the companions using the light curves in the B and V band
simultaneously, assuming different inclination angles. We derive acceptable fits
for inclination angles from 75\degr\ to 90\degr. Angles of 74\degr\ and
smaller can be excluded as the companion
star would overfill its Roche lobe. The derived parameters are given in
Table~\ref{tab_fit}. The available optical data are not sufficient to
discriminate between the allowed inclination angles. However, we expect that
an inclination above 80\degr\ is more likely than an inclination in the range
75\degr\ -- 80\degr, where the companion nearly fills its Roche lobe
and unstable mass transfer would be expected. The X-ray observations argue
against this case: with unstable mass transfer, the X-ray emission from X$-$7\ would most likely be much
more variable on long time scales than is observed in all observations since
those with the {\it Einstein}\ observatory. This implies a mass of
the compact object in the system greater than 9$ M_{\sun}$ and
clearly indicates a black hole as the compact object in the system.
As already discussed in PMM2004, further arguments for the black hole nature of the
compact object in X$-$7\ come from the lack of X-ray pulsations, the short term
variability and the X-ray spectra. We discuss each of these
properties in turn:
(I) Pulsations are clear indicators of a neutron star as the compact object in a
HMXB. In the power density spectrum analysis we did not detect significant
periodic signals, which is consistent with a black hole as the compact object. However, this does
not rule out the possibility that the compact object is a neutron
star. Our unsuccessful {\it Chandra}\ periodicity search was limited towards
short periods by the ACIS-I sampling time of 3.2~s. Also, with
the significantly shorter sampling time of {XMM-{\it Newton}}\ EPIC pn of 0.073~s, PMM2004 did
not find significant pulsations.
(II) Low accretion rate
sources should show a power density spectrum (PDS) with a broken power law
\citep[see][and references therein]{2005astro.ph..8284B}. Above a luminosity of
typically 0.1 of the Eddington luminosity, the PDS should be flat.
The short term fluctuations seen in the power spectral analysis of X$-$7\ are
very small as one would expect for such a high accretion rate source. The
unabsorbed luminosity of X$-$7\ in the 0.3--10~keV band of $>$1.1\ergs{38}
at maximum is consistent
with a stellar mass black hole, but it does not constrain the mass.
(III) HMXBs with a neutron star as the compact object normally show power law spectra
(photon index 0.8--1.5) with a high-energy cutoff around 10--20 keV
\citep[see e.g.][]{1983ApJ...270..711W,2000ApJ...535..632M}.
Disk-blackbody spectra, on the other
hand, suggest emission that is dominated by the inner accretion disk of a low
mass XRB
system, or -- in case of a HMXB -- the presence of an accretion disk surrounding
a black hole emitting in the high state \citep[e.g.][]{1986ApJ...308..635M}.
As mentioned before, the variability of X$-$7\ outside eclipse most likely is
caused by partial covering of the X-ray emission region
due to material in the accretion stream or in the outer accretion disk.
If we assume that during the brightest phase, the X$-$7\ disk-blackbody
normalization corresponds to the innermost stable circular orbit with
radius $r$ around a black hole, i.e.
\hbox{$r = 3 R_s = \EXPN{9}{5} (M_x/M_{\sun})$ cm} (with $R_s = 2GM_x/c^2$ the
Schwarzschild radius), we obtain $M_x > 2.4 M_{\sun}$.
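The conversion behind this limit is elementary; the sketch below (illustrative
only, with a placeholder inner-disk radius rather than the fitted normalization
radius) inverts $r=3R_s=6GM_x/c^2$ for $M_x$:
\begin{verbatim}
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33    # cgs units

def mass_from_isco(r_in_cm):
    """Black-hole mass if r_in equals the ISCO radius 3 R_s = 6 G M / c^2."""
    return r_in_cm * c**2 / (6.0 * G * Msun)

# a placeholder inner-disk radius of ~2.1e6 cm gives a limit near 2.4 Msun
print(round(mass_from_isco(2.13e6), 2))
\end{verbatim}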
All of these results of the new {\it Chandra}\ observations suggest that \object{M~33~X$-$7}\
is the first known eclipsing HMXB with a black hole as the compact object.
\citet{2005ApJ...634L..85P} advanced the idea of searching for
eclipses in ultra-luminous
X-ray sources (ULXs) to determine black hole masses and -- most importantly --
to separate
intermediate mass from stellar mass black holes as the compact object in this
exceptional class of X-ray sources.
They proposed to compare the number of eclipses
in the different kinds of systems and predicted that far more eclipses
should be detected in stellar mass systems
than in intermediate mass black hole
systems. The orbital periods and other system parameters would provide
\hline
\multirow{3}{*}{\shortstack[l]{ $\alpha_1=1$ \\ $\alpha_2 = 10$}} &
\ref{eq:NXFEM-EV} & 5192 & 1794 & 7 & 9 & (0.135) \\
& \ref{eq:NXFEM-LO} & 4655 & 1784 & 7 & 7 & (0.031) \\
& \ref{eq:NXFEM-GP} & 4428 & 1803 & 7 & 7 & (0.033) \\ \hline
\multirow{3}{*}{\shortstack[l]{ $\alpha_1=1$ \\ $\alpha_2 = 10^5$}} &
\ref{eq:NXFEM-EV} & 3761 & 1635 & 6 & 7 & (0.029) \\
& \ref{eq:NXFEM-LO} & 3752 & 1635 & 6 & 7 & (0.029) \\
& \ref{eq:NXFEM-GP} & 4417 & 1684 & 6 & 7 & (0.029) \\ \hline
\multirow{3}{*}{\shortstack[l]{ $\alpha_1=1$ \\ $\alpha_2 = 10^9$}} &
\ref{eq:NXFEM-EV} & 3535 & 1509 & 6 & 7 & (0.029) \\
& \ref{eq:NXFEM-LO} & 3520 & 1509 & 6 & 7 & (0.029) \\
& \ref{eq:NXFEM-GP} & 3941 & 1550 & 6 & 7 & (0.029) \\ \end{tabular}
\caption {\nameref{para:example3}}
\end{subtable}
\caption{Number of iterations required to reach the predefined tolerance for different preconditioners;
the last columns show the number of iterations required by the multigrid method used as a solver and its asymptotic convergence rate (in parentheses)}
\label{tab:example_precond}
\end{table*}
The experiments are carried out on the system of linear equations with around $2.5\times 10^{6}$ dofs ($L5$).
Our semi-geometric multigrid method is set up with 5-levels, and symmetric Gauss-Seidel is chosen as smoother with 3 pre-smoothing and 3 post-smoothing steps at each level, and we perform a single $V$-cycle as a preconditioner.
Table~\ref{tab:example_precond} shows the number of iterations required by different methods to reach the termination criterion \eqref{eq:termination}.
We observe that the CG method preconditioned with the Jacobi method has the slowest convergence amongst all solvers.
The CG method with SGS as a preconditioner is significantly better than with the Jacobi preconditioner; the number of iterations is reduced by more than half for most of the problems.
The best performance among all preconditioners is clearly achieved by the SMG method.
The rate of convergence of the conjugate gradient method depends on the distribution of the spectrum of $\bA$: the method performs very well if the eigenvalues are clustered in a certain region of the spectrum rather than uniformly distributed.
Hence, even with the same condition number for the same method, a different number of iterations may be required to reach the termination criterion.
Thus, we show here that the CG-SMG method is robust for all discussed discretization methods and coefficients: the number of iterations required for convergence stays stable.
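To make the setting concrete, the following minimal sketch (an illustration under
simplifying assumptions: plain geometric multigrid for the 1D Poisson problem
rather than the semi-geometric hierarchy and $L^2$-projection transfer used in
this work; all function names are illustrative) shows a V-cycle with symmetric
Gauss--Seidel smoothing, three pre- and post-smoothing steps and a direct coarse
solve, used as a preconditioner inside CG:
\begin{verbatim}
# Minimal sketch (not the solver of this work): CG preconditioned with one
# geometric multigrid V-cycle for the 1D Poisson problem.
import numpy as np

def poisson_1d(n):
    """Dense 1D Poisson matrix on n interior points, mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def sgs(A, b, x, sweeps=3):
    """Symmetric Gauss-Seidel smoothing: forward then backward sweeps."""
    n = len(b)
    for _ in range(sweeps):
        for i in list(range(n)) + list(reversed(range(n))):
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

def v_cycle(A_levels, b, level=0):
    """One V-cycle: full-weighting restriction, linear prolongation."""
    A = A_levels[level]
    if level == len(A_levels) - 1:
        return np.linalg.solve(A, b)              # coarsest level: direct solve
    x = sgs(A, b, np.zeros_like(b))               # pre-smoothing
    r = b - A @ x
    rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]
    ec = v_cycle(A_levels, rc, level + 1)         # coarse-grid correction
    e = np.zeros_like(x)
    e[1:-1:2] = ec
    e[:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    return sgs(A, b, x + e)                       # post-smoothing

def pcg(A, b, precond, rtol=1e-10, maxit=200):
    """Conjugate gradients with a user-supplied preconditioner r -> M^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p, rz = z.copy(), r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < rtol * np.linalg.norm(b):
            return x, it
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 5-level hierarchy with 2^k - 1 interior points per level (127 on the finest)
levels = [poisson_1d(2**k - 1) for k in range(7, 2, -1)]
b = np.ones(levels[0].shape[0])
x, iters = pcg(levels[0], b, lambda r: v_cycle(levels, r))
print("CG iterations with the V-cycle preconditioner:", iters)
\end{verbatim}
The V-cycle enters CG only through the function handle passed as
\texttt{precond}, mirroring the single $V$-cycle applied per CG iteration in the
experiments above.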
\subsubsection{Performance as a Solution Method}
Comparing the results of the SMG method, we observe that Nitsche's method with the ghost penalty stabilization term converges fastest for the highly varying coefficients, while it is slowest for the continuous coefficients.
Even though the difference is not large, the few additional iterations can be attributed to the large value of the stabilization parameter.
For \eqref{eq:NXFEM-EV} and \eqref{eq:NXFEM-LO} the number of iterations needed to reach the convergence tolerance stays more or less constant.
The multigrid method can be considered quite robust in terms of the asymptotic convergence rates, as for all the experiments we observe $ \rho^* < 0.2 $.
The multigrid method used as a solver can be interpreted as a Richardson iteration with SMG as a preconditioner, and the CG method is known to be far superior to the Richardson method.
Hence, we observe that the number of iterations required is smaller in all cases when the semi-geometric multigrid is chosen as a preconditioner than a solution method.
\subsubsection{Level Independence}
In the next part, we evaluate the performance of CG-SMG method for different levels in the multigrid hierarchy.
The finest level is kept the same as in the previous experiments, and the number of levels used in the multilevel hierarchy is changed.
As we use a direct solver on the coarsest level, the coarse level corrections become increasingly accurate as the number of levels is reduced, but a larger number of levels is computationally cheaper, as a smaller linear system of equations has to be solved on the coarsest level.
Table~\ref{tab:example3_levels} demonstrates that the number of iterations stays constant regardless of the number of levels used in the multigrid hierarchy.
We see that the change in the ratio between the coefficients does not affect the performance of CG-SMG method.
This result shows the level independence of the multigrid method as a preconditioner.
\begin{table}[t]
\begin{subtable}[t]{0.5\textwidth}
\centering
\begin{tabular}{ c || c || c | c | c | c }
\multicolumn{2}{c||}{ \# levels} & 2 & 3 & 4 & 5 \\ \hline \hline
\multirow{3}{*}{\shortstack[l]{$\alpha_1=1$ \\ $\alpha_2 = 10$}} &
\ref{eq:NXFEM-EV} & 7 & 7 & 7 & 7 \\
& \ref{eq:NXFEM-LO} & 6 & 6 & 6 & 7 \\
& \ref{eq:NXFEM-GP} & 7 & 7 & 7 & 7 \\ \hline
\multirow{3}{*}{\shortstack[l]{$\alpha_1=1$ \\ $\alpha_2 = 10^5$}} &
\ref{eq:NXFEM-EV} & 6 & 6 & 6 & 6 \\
& \ref{eq:NXFEM-LO} & 6 & 6 & 6 & 6 \\
& \ref{eq:NXFEM-GP} & 6 & 6 & 6 & 6 \\ \hline
\multirow{3}{*}{\shortstack[l]{$\alpha_1=1$ \\ $\alpha_2 = 10^9$}} &
\ref{eq:NXFEM-EV} & 6 & 6 & 6 & 6 \\
& \ref{eq:NXFEM-LO} & 6 & 6 & 6 & 6 \\
& \ref{eq:NXFEM-GP} & 6 & 6 & 6 & 6
\end{tabular}
\caption {Effect of different number of levels in the hierarchy\\for \nameref{para:example3}}
\label{tab:example3_levels}
\end{subtable}
\hfill
\begin{subtable}[t]{0.45\textwidth}
\centering
\begin{tabular}{ c || c | c | c}
\# interfaces & \ref{eq:NXFEM-EV} & \ref{eq:NXFEM-LO} & \ref{eq:NXFEM-GP} \\ \hline \hline
1 & 9 & 8 & 9 \\
2 & 9 & 8 & 9 \\
4 & 9 & 8 & 9 \\
6 & 9 & 8 & 9 \\
8 & 9 & 8 & 9 \\
10 & 9 & 8 & 9
\end{tabular}
\caption {Effect of different number of interfaces in the domain}
\label{tab:example1_interfaces}
\end{subtable}
\caption{Number of iterations required to reach the predefined tolerance for the conjugate gradient method preconditioned with the semi-geometric multigrid method, with respect to different numbers of levels in the hierarchy and multiple interfaces in the domain}
\label{tab:temps}
\end{table}
\subsubsection{Multiple Interfaces}
The last set of experiments is done to demonstrate the robustness of the SMG method with respect to the number of interfaces in a domain.
We consider \nameref{para:example1} for this numerical experiment with continuous coefficients.
The finest level is kept the same as in the previous cases, and the multigrid hierarchy consists of 5-levels.
This test is performed for all the discussed variants of Nitsche's methods with multiple interfaces.
The interfaces are represented by the zero level sets of the following functions:
\begin{equation*}
\Lambda_i(x) :=
\begin{cases}
x - 0.1\Big(\frac{1}{\sqrt{2}} +i-1 \Big), & \text{for all } i \in \{1,\ldots,5\}, \\
x + 0.1\Big(\frac{1}{\sqrt{2}}-i\Big), & \text{for all } i \in \{6,\ldots,10\}.
\end{cases}
\end{equation*}
All the interfaces are linear, parallel to the original interface $\Gamma_l$.
We start with a single interface and increase up to 10 interfaces in the domain.
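For orientation, the positions of these interfaces (vertical lines
$x=\mathrm{const}$) can be tabulated with a short sketch (illustrative only):
\begin{verbatim}
import numpy as np

def interface_position(i):
    """x-coordinate of the zero level set of Lambda_i, i = 1,...,10."""
    if 1 <= i <= 5:
        return 0.1 * (1.0 / np.sqrt(2.0) + i - 1)
    return 0.1 * (i - 1.0 / np.sqrt(2.0))

print([round(interface_position(i), 4) for i in range(1, 11)])
# [0.0707, 0.1707, 0.2707, 0.3707, 0.4707, 0.5293, ..., 0.9293]
\end{verbatim}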
From Table~\ref{tab:example1_interfaces}, we can observe that the proposed multigrid method as a preconditioner is stable, as the number of iterations does not change at all with an increasing number of interfaces in the domain.
Thus, we can conclude that our SMG method is a robust solution strategy.
The method is stable for all variants of Nitsche's method with respect to highly varying coefficients, with respect to the number of levels in the multilevel hierarchy and also with respect to the number of interfaces in the domain.
\section{Conclusion}
In this paper, we reviewed selected strategies to overcome ill-conditioning related to Nitsche's method for the XFEM discretization.
We discussed two different strategies to implicitly estimate the stabilization parameter and the ghost penalty term to improve the robustness of Nitsche's formulation.
Also, we numerically compared the stability of these methods for continuous and highly varying coefficients in terms of discretization error and condition numbers.
We introduced a semi-geometric multigrid method for the unfitted finite element methods and discussed the $L^2$-projection and pseudo-$L^2$-projection approaches to construct the transfer operator for the XFEM discretization.
In the series of experiments, we demonstrated the robustness of our tailored multigrid method with respect to highly varying coefficients and the number of interfaces in a domain.
Additionally, the multigrid method shows level independent convergence rates when applied to variants of Nitsche's methods.
The multigrid method proposed in this work can be used for any unfitted finite element discretization.
In the future, we aim to extend the multigrid method to more complex problems, for example, contact problems and fluid-structure interaction problems in the unfitted FEM framework.
We also aim to implement the $L^2$-projections for the XFEM discretization in the ParMOONoLith library~\cite{moonolithgit,krause_parallel_2016}.
This library can compute the $L^2$-projection on complex geometries on distributed computing architectures.
In this way, we can extend the multigrid method from this work to the parallel architecture in order to tackle large-scale problems.
\bibliographystyle{spmpsci} \def\url#1{}
\section{Introduction}
Let us consider the class $V$ of all probability distributions on
the real line $\mathbb R$, which have zero mean, unit variance and
finite third absolute moment.
Let
$X,\,X_1,\,X_2,\,\ldots\,,X_n$ be i.i.d. random variables, where the
distribution of $X$ belongs to $V$. Denote
\begin{align*}
\varPhi(x)=\frac{1}{\sqrt{2\pi}}\int\limits
_{-\infty}^x e^{-t^2/2}\,dt,
\qquad \beta_3={\bf E}|X|^3.
\end{align*}
According to the
Berry--Esseen inequality \cite{Berry,Ess42}, there exists
an absolute constant $C_0$ such that for all $n=1,\,2,\,\ldots\;$,
\begin{equation}
\label{B-E-ineq}\sup_{x\in{\mathbb R}} \Bigg|{\bf P} \Biggl(\frac{1}{\sqrt{n}}\sum
_{j=1}^nX_j<x \Biggr)-
\varPhi(x) \Bigg|\le \frac{C_0\beta_3}{\sqrt{n}}.
\end{equation}
The first upper bounds for the constant $ C_0 $ were obtained by
C.-G.~Esseen~\cite{Ess42} (1942), H.~Bergstr\"om~\cite{Berg49} (1949)
and K.~Takano~\cite{Takano} (1951).
In 1956 C.-G.~Esseen \cite{Ess} showed that
\begin{equation}
\label{C0>}\lim_{n\to\infty}\frac{\sqrt{n}}{\beta_3}\sup
_{x\in\mathbb
R} \Bigg|{\bf P} \Biggl(\frac{1}{\sqrt{n}}\sum
_{j=1}^nX_j<x \Biggr)-\varPhi(x) \Bigg|\le
C_E,
\end{equation} where
$C_E=\frac{3+\sqrt{10}}{6\sqrt{2\pi}}=0.409732\,\ldots\;$.
He also found a two-point distribution for
which equality holds in \eqref{C0>} and proved the
uniqueness of such a distribution (up to reflection).
Consequently, $C_0\ge C_E$. The result of Esseen
served as an argument for the conjecture
\begin{equation}
\label{C=C}C_0= C_E,
\end{equation} which V.M. Zolotarev advanced in 1966
\cite{Zolot66-TV}. The question of whether the conjecture is correct remains open to this day.
Since then, a number of upper bounds for $ C_0 $ have been obtained. A
historical review can be found, for example, in
\cite{KorShv,preprint-2009,Shv}. We only note that recent results in this
field were obtained by I.S. Tyurin (see, for example,
\cite{arxive-Tyurin-2009,Tyurin-2009-Dokl,Tyurin-2010-TV,Tyurin-2010-Uspehi,Tyurin-Prob.Let.-2012}),
V.Yu. Korolev and I.G. Shevtsova (see, for example,
\cite{KorShv,KorSh-2010-TV}), and I.G.~Shevtsova (see, for example,
\cite{arxive-Shevtsova-2013,Shevtsova-2011-TV,Shevtsova-2013-Inform,Shv,Shevtsova-2006-TV}).
The best upper estimate, known to date, belongs to Shevtsova: $C_0 \le0.469 $
\cite{Shv}. Note that in obtaining upper bounds, beginning from the estimates
in \cite{Zolot66-TV,Zolot67-ZW}, calculations play an essential role. In
addition, because of the large amount of computations, it was necessary to
use computers.
The present paper is devoted to estimation of $C_0$ in the
particular case of i.i.d. Bernoulli random variables. In this case
we will use the notation $C_{02}$ instead of $C_0$. Let us recall
the chronology of the results along these lines.
In 2007 C.~Hipp and L.~Mattner published an analytical proof of
the inequality $C_{02}\le\frac{1}{\sqrt{2\pi}}$ in the symmetric case
\cite{XiM.L.}.
In 2009 the second and third authors of the present paper
suggested a compound method
in which a refinement of the C.L.T. for i.i.d. Bernoulli random
variables was used along with direct
calculations~\cite{preprint-2009}. In the unsymmetric case this method
allows one to obtain majorants for $C_{02}$ arbitrarily close to
$C_E$, provided that the computer used is sufficiently powerful. The
main content of the preprint~\cite{preprint-2009} was published in 2011 and
2012 in the form of the papers \cite{NaCh-Dokl,NaCh}. In these
papers, the bound $C_{02}<0.4215$ was proved.
In 2015 we obtained the bound
\begin{equation}
\label{953} C_{02} \le 0.4099539, \end {equation} by applying the
same approach as in \cite{preprint-2009,NaCh-Dokl,NaCh}, with the only difference that this time a
supercomputer was used instead of an ordinary PC. We announced bound \eqref{953}
in \cite{NChZ-2016} but, for a number of reasons, delayed publishing the proof
and do so only now. While the present work was in preparation, we detected
a small inaccuracy in the calculations; namely, bound \eqref{953} must be
increased by $10^{-7}$. Thus the following statement is true.
\begin{thm}\label{th-2} The bound
\begin{equation}\label{954} C_{02} \le 0.409954 \end {equation} holds.
\end{thm} Meanwhile, in 2016 J.~Schulz~\cite{Shulz} obtained the unimprovable
result: if the symmetry condition is violated, $C_{02}=C_E$.
As was to be expected, J.~Schulz's proof turned out to be very
long and complicated. It should be said that methods based on the use of
computers and analytical methods complement each other. The former cannot
lead to a final result, but they require much less effort. On the other
hand, they allow one to predict the exact result and thus facilitate theoretical
research.
\section{Shortly about the proof of Theorem
\ref{th-2}} \label{sect2}
\subsection{Some notations. On the choice
of the left boundary of the interval for $p$} Let $X,\,X_1,\,
X_2,\ldots,\,X_n$ be a sequence of independent random
variables with the same distribution:
\begin{equation}\label{Bernoulli}{
\bf P}(X\!=\!1)\!=\!p,\quad {\bf P}(X\!=\!0)=q=1-p.
\end{equation} In what follows we use the
following notations,
\begin{align}
&\!F_{n,p}(x)\!=\!{\bf P} \Biggl(\sum\limits
_{i=1}^nX_i<x
\Biggr),\quad G_{n,p}(x)\!=\!\varPhi \biggl({{x-np}\over{\sqrt{npq}}}
\biggr),
\nonumber
\\
&\Delta_n(p)\!=\!\sup\limits
_{x\in \mathbb
R} |F_{n,p}(x)-G_{n,p}(x)
|,\quad \varrho(p)\!=\!\frac{{\bf
E}|X-p|^3}{({\bf E}(X-p)^2)^{3/2}}\!=\!{{p^2+q^2}\over\sqrt{pq}},
\nonumber
\\
&T_n(p)\!=\!{\Delta_n(p)\sqrt{n}\over{\varrho(p)}},\quad {\cal E}(p)=
\frac{2-p}{3\sqrt{2\pi} \, [p^2 +
(1-p)^2 ]}.\label{Tnp}
\end{align} Obviously,
\begin{equation}
\label{C02=sup}C_{02}= \sup\limits
_{n\geq 1}\sup\limits
_{p\in(0,0.5]}T_n(p).
\end{equation}
In this paper we solve, in particular, the
problem of computing the sequence
$T(n)=\sup\limits_{p\in(0,0.5)}T_n(p)$ for all~$n$ such that $1\le n
\le N_0$. Here and in what follows,
\begin{align*}
N_0=5\cdot10^5.
\end{align*}
Note that for fixed $ n $ and $ p $, the quantity
$\sup\limits_{x\in\mathbb R} \llvert F_{n, p} (x) -G_{n, p} (x)
\rrvert $ is achieved at some discontinuity point of the function $
F_{n, p} (x)$ (see Lemma \ref{lem-D+-}). We consider distribution
functions that are continuous from the left. Consequently,
\begin{equation}
\label{Deltanp}\Delta_n(p)=\max_{0\le i\le
n}
\Delta_{n,i}(p),
\end{equation} where $i$ are integers and $
\Delta_{n,i}(p)=
\max \{ \llvert F_{n,p}(i)-G_{n,p}(i) \rrvert ,\, \llvert F_{n,p}(i+1)
-G_{n,p}(i) \rrvert \}$.
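These quantities are straightforward to evaluate for moderate $n$; the sketch
below (assuming SciPy is available; this is not the supercomputer code described
later) scans the discontinuity points $i=0,\dots,n$ exactly as in \eqref{Deltanp}:
\begin{verbatim}
import numpy as np
from scipy.stats import binom, norm

def T_n(n, p):
    q = 1.0 - p
    sigma = np.sqrt(n * p * q)
    rho = (p * p + q * q) / np.sqrt(p * q)    # varrho(p)
    i = np.arange(n + 1)
    G  = norm.cdf((i - n * p) / sigma)        # G_{n,p}(i)
    Fl = binom.cdf(i - 1, n, p)               # F_{n,p}(i)   = P(S_n < i)
    Fr = binom.cdf(i, n, p)                   # F_{n,p}(i+1) = P(S_n <= i)
    Delta = np.max(np.maximum(np.abs(Fl - G), np.abs(Fr - G)))
    return np.sqrt(n) * Delta / rho

for n in (10, 100, 1000):
    print(n, round(T_n(n, 0.3), 6))
\end{verbatim}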
Note also that we can vary the parameter $ p $ in a narrower
interval than $ [0,0.5] $, namely, in
\begin{align*}
I: = [0.1689,0.5].
\end{align*} This conclusion follows from the next
statement.
\begin{lemma}\label{lem-1-ZNC}If $0<p\le0.1689$, then for
all \mbox{$n\!\ge\!1$},
\begin{equation}
\label{p=0}T_n(p)\!<\!0.4096.
\end{equation}
\end{lemma}
Lemma \ref{lem-1-ZNC} is proved in Section~\ref{sect3} with the
help of some modification of the Berry\,--\,Esseen inequality (with
numerical constants) obtained in
\cite{KorShv-Obozr-2010-2,KorShv-Obozr-2010}.
\begin{remark}
By the same method that is used to prove inequality~\eqref{p=0}, the
estimate \mbox{$T_n(p)\le 0.369$} is found in \cite{NaCh} in the
case $0<p<0.02$ ($n\ge1$) (see the proof of~(1.37) in \cite{NaCh}),
where an earlier estimate of V.~Korolev and I.~Shevtsova
\cite{KorShv} is used, instead of
\cite{KorShv-Obozr-2010-2,KorShv-Obozr-2010}. Note that the use of
modified inequalities of the Berry\,--\,Esseen type, obtained in
\cite{KorShv-Obozr-2010-2,KorShv-Obozr-2010,KorShv}, is not
necessary for obtaining estimates of~$T_n (p) $ in the case when~$ p
$ is close to~0.
An alternative approach, using Poisson approximation, is proposed in
the pre\-print~\cite{preprint-2009}. Let us explain the essence of
this method.
An alternative bound is found in the domain
$\{(p,n):\,0.258\leq\lambda\leq 6,\,n\geq 200\}$, where
$\lambda=np$. Under these conditions, we have $p\leq 0.03$, i.e.
$p$ is small enough. Consequently, the error arising under
replacement of the binomial distribution by Poisson distribution
$\varPi_\lambda$ with the parameter $\lambda$ is small.
Next, the distance $d(\varPi_\lambda,G_\lambda)$ between $\varPi_\lambda$ and the normal distribution
$G_\lambda$ with the mean $\lambda$ and the variance $\lambda$ is
estimated, where \mbox{$d(U,V)=\sup\limits_{x\in\mathbb R}|U(x)-V(x)|$} for
any distribution functions $U(x)$ and $V(x)$. Then the estimate of
the distance between $ G_\lambda $ and the normal distribution $
G_{n, p} $ with the mean $ \lambda $ and variance $ npq $ is
deduced. Summing the obtained estimates, we arrive at an estimate
for the distance between the original binomial distribution and $ G_
{n, p} $. As a result, in \cite[Lemma~7.8, Theorem~7.2]
{preprint-2009} we derive the estimate $ T_n (p) <0.3607 $, which is
valid for all points $ (p, n) $ in the indicated domain.
\end{remark}
\subsection{On calculations}
\label{subsect-2.2}
Define
\begin{align*}
C_{02}(N)=\max_{1\le n\le N}\sup_{p\in(0,0.5]}T_n(p),
\quad \overline C_{02}(N)=\sup_{n\ge
N}\sup
_{p\in(0,0.5]}T_n(p).
\end{align*} Obviously,
$C_{02}=\max\{C_{02}(N),\overline C_{02}(N+1)\}$ for every $N\ge1$.
It was proved in \cite{NaCh} that $\overline C_{02}(200)<0.4215$. By
that time it was shown with the help of a computer (see the preprint
\cite{N-M-Ch-prep}) that $C_{02}(200)<0.4096$, i.e.
\begin{equation}
\label{C_{02}(200)} C_{02}({200})<C_E,
\end{equation} and thus, $C_{02}<0.4215$ for all
$n\ge1$.
Some words about bound \eqref{C_{02}(200)}. By \eqref{C02=sup}, to
get $C_{02}(N)$ it is enough to calculate
$T(n)=\sup\limits_{p\in(0,0.5]}T_n(p)$ for every $1\le n\le N$, and
then find $\max\limits_{1\le n\le N}T(n)$. The
calculation of $T(n)$ is reduced to two problems. The first problem is to
calculate $\max\limits_{p_j\in S}T_n(p_j)$, where $S$ is a grid on
$(0,0.5]$, and the second one is to estimate $T_n(p)$ in intermediate
points $p$. Both problems were solved in \cite{N-M-Ch-prep} for
$1\le n\le200$.
It should be noted here that, according to the method, the quantity
$C_{02}(N)$ is calculated (with some accuracy), and $\overline
C_{02}(N)$ is estimated from above. In both cases, a computer is
required. The power of an ordinary PC is sufficient for calculating
majorants for $\overline C_{02}(N)$ whereas to calculate $
C_{02}(N)$ a supercomputer is needed if $N$ is sufficiently large.
Moreover, an additional investigation of interpolation type is
required to draw a convincing conclusion from the computer calculations of
$ C_{02}(N)$. In our paper, Theorem~\ref{th-3} plays this role.
Denote by $S$ the uniform grid on $I$ with step
$h=10^{-12}$. The values of $ T_n (p_j) $ for all $ p_j \in S$ and $ 1 \le n \le N_0 $ were calculated on a supercomputer.
\vskip2mm
\par\noindent\textbf{The result of the calculations}. {\it For all $1\le n\le
N_0$},
\begin{equation}
\label{<0.4-S}\max_{p_j\in
S}T_n(p_j)=T_{N_1}(
\overline p)=0.40973212897643\ldots<0.40973213.
\end{equation}
The counting algorithm is a triple loop: a loop with respect to the
parameter $ i $ (see~\eqref{Deltanp}) is nested in a loop with
respect to the parameter $ p $, which in turn is nested in the loop
with respect to the parameter~$ n $.
With the growth of $n$, the computation time increased rapidly. For
example, for $2000 \le n \le 2100$ calculations took more than 3
hours on a computer with a Core2Duo E6400 processor. For $2101\le n\le
N_0$ calculations were carried out on the supercomputer Blue Gene/P.
It follows from \cite[Corollary~7]{NChZ-2016} that for $ n> 200 $ in
the loop with respect to $ i $, one can take not all values of $ i
$ from 0 to $ n $, but only those which satisfy the inequality
\begin{align*}
np - (\nu + 1) \sqrt{npq} \! \le \! i \le np + \nu \sqrt{npq},
\end{align*} where \mbox{$ \nu \! = \! \sqrt{3+ \sqrt{6}} $}. This
led to a significant reduction of computation time. We give
information about the computer time (excluding time spent waiting in the
queue) in Table~1.
\begin{table}[h]
\caption{Dependence of computer time on $n$ (supercomputer Blue
Gene/P)}\label{tabl-time}
\begin{tabular}{|c|c|c|c|c|}
\hline
$n\in[N_1,N_2]$: & $[10000,11024]$ & $[30000,50000]$ & $[300000,320000]$ & $[490000,N_0]$\\
\hline
computer time: & 3 min & 2 hrs + 5 min & 4 hrs + 50 min & 7 hrs \\
\hline
\end{tabular}
\end{table}
Calculations were carried out
on the supercomputer Blue Gene/P of the Computational Mathematics
and Cybernetics Faculty of Lomonosov Moscow State University. After
some changes in the algorithm, the calculations for $n$ such that
$490000\le n\le N_0 $, were also performed on the CC FEB RAS
Computing Cluster \cite{DataCenter}. The corresponding computer time
was 6 hours and 40 minutes.
The
program is written in C+MPI and registered \cite{Zol}.
\subsection{Interpolation type results}
Let $p^\ast\in(0,0.5)$. Consider a uniform grid on $[p^\ast,0.5]$
with a step $h$. The following statement allows one to estimate the
value of the function $\frac{1}{\varrho(p)}\,\Delta_{n,k}(p)$ at an
arbitrary point from the interval $[p^\ast,0.5]$ via the value of
this function at the nearest grid node and $h$.
Denote
\begin{equation}
\label{c1c2c3} c_1\!=\!0.516, \quad c_2\!=\!0.121,\quad
c_3\!=\!0.271.
\end{equation}
\begin{thm}\label{th-3} Let $0<p^\ast<p\le0.5$,
$p^{\,\prime}$ be a node of a grid with a step $h$ on the interval
$[p^\ast,0.5]$, closest to $p$. Then for all $n\ge1$ and $0\le k\le
n$,
\begin{align*}
\bigg|\frac{1}{\varrho(p)}\,\Delta_{n,k}(p)-\frac{1}{\varrho(p^\prime)}\,
\Delta_{n,k}\bigl(p^\prime\bigr) \bigg| \le\frac{h}{2}\,L
\bigl(p^\ast\bigr),
\end{align*}
where
\begin{equation}
\label{L(p)-new}L(p)\!=\!\frac{1}{(1-2pq)\sqrt{pq}} \biggl(\frac{c_1}{p}+c_2+c_3
\,\frac{(1-2p)(1+2pq)}{1-2pq} \biggr).
\end{equation}
\end{thm}
The next statement follows from Theorem \ref{th-3}. Note that
without it the proof of Theorem~\ref{th-2} would be incomplete.
\begin{cor}\label{th-1.2} If
$p\in I$, and $p^\prime$ is a node of the grid $S$, closest to $p$, then for all $1\le
n\le N_0$,
\begin{align*}
\big|T_n(p)-T_n\bigl(p^\prime\bigr)\big|\le 4.6
\cdot10^{-9}.
\end{align*}
\end{cor}
\begin{proof} It follows
from Theorem~\ref{th-3} that for $0\le k\le n\le N_0$,
\begin{equation}
\label{<L(0.1689)} \bigg|\frac{\sqrt{n}}{\varrho(p)}\,\Delta_{n,k}(p)-\frac{\sqrt{n}}{\varrho(p^\prime)}
\,\Delta_{n,k} \bigl(p^\prime \bigr) \bigg|\le \sqrt{N_0}
\,\frac{1}{2}\,10^{-12}\,L(0.1689).
\end{equation} Since
$L(0.1689)<12.98$, the right-hand side of
inequality~\eqref{<L(0.1689)} is majorized by the
number~$4.6\cdot10^{-9}$. This implies the statement of
Corollary~\ref{th-1.2}.\end{proof}
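The numerical constants used here are easy to reproduce; the following short
check (a sketch, not part of the formal argument) evaluates $L(0.1689)$ from
\eqref{L(p)-new} and the right-hand side of \eqref{<L(0.1689)}:
\begin{verbatim}
import math

c1, c2, c3 = 0.516, 0.121, 0.271

def L(p):
    pq = p * (1.0 - p)
    return (c1 / p + c2 + c3 * (1 - 2 * p) * (1 + 2 * pq) / (1 - 2 * pq)) \
           / ((1 - 2 * pq) * math.sqrt(pq))

N0, h = 5e5, 1e-12
print(L(0.1689))                              # about 12.97 < 12.98
print(math.sqrt(N0) * 0.5 * h * L(0.1689))    # about 4.59e-9 < 4.6e-9
\end{verbatim}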
\subsection{On the proof of Theorem~\ref{th-2}}
It follows from \eqref{<0.4-S}, Corollary \ref{th-1.2} and
Lemma~\ref{lem-1-ZNC}
that for all $1\le n\le N_0$ and $p\in(0,0.5]$, the
following inequality holds, $T_n(p)<0.4097321346<C_E$ (for details,
see~\eqref{Knp<0.41}). It is easy to verify that this inequality is
true for $ p \in (0.5,1) $ as well. Hence, inequality \eqref{954}
implies Theorem~\ref{th-2}.
\subsection{About structure of the paper}
The structure of the paper is as follows. The proof of Theorem
\ref{th-3}, the main analytical result of the paper, is given in
Section~\ref{sect3-}. The proof consists of 12 lemmas.
In Section \ref{sect3}, Theorem~\ref{th-2} is proved. The section
consists of three subsections. In the first one, the formulation of
Theorem~1.1~\cite{NaCh} is given. Several corollaries from the latter
are also deduced here. The second subsection discusses the
connection between the result of K. Neammanee \cite{Neamm}, who
refined and generalized
Uspensky's estimate~\cite{Uspen}, and the problem of
estimating $C_{02}$. It is shown that one can obtain from the result
of K.~Neammanee the same estimate for $C_{02}$ as ours, but for a
much larger $N$. This means that calculating $C_{02}(N)$ requires
much more computing time if to use Neammanee's estimate.
In the third subsection, we give, in particular, the proof of
Lemma~\ref{lem-1-ZNC}.
\section{Proof of Theorem \ref{th-3}}
\label{sect3-}
We need the following statement, which we give without proof.
\begin{lemma}\label{lem-D+-} Let $G(x)$ be a distribution
function with a finite number of discontinuity points, and $G_0(x)$
a continuous distribution function. Denote $\delta(x)=G(x)-G_0(x)$.
There exists a discontinuity point $x_0$ of $G(x)$ such that the
magnitude $\sup\limits_x|\delta(x)|$ is attained in the following
sense: if $G$ is continuous from the left, then
$\sup\limits_x|\delta(x)|=\max \{\delta(x_0+),\,-\delta(x_0) \}$,
and if $G$ is continuous from the right, then
$\sup\limits_x|\delta(x)|=\max \{\delta(x_0),\,-\delta(x_0-) \}$.
\end{lemma}
Define $f(t)={\bf E}e^{it(X-p)}\equiv qe^{-itp}+pe^{itq}$.
\begin{lemma} \label{lem-2-ZNC} For all $t\in\mathbb R$,
\begin{align*}
|f(t)|\le \exp \biggl\{-2pq\,\sin^2 \frac{t}{2} \biggr\}.
\end{align*}
\end{lemma}
\begin{proof} Taking into account the difference in the notations, we obtain the statement of Lemma \ref{lem-2-ZNC} from~\cite[Lemma 8]{NaCh}.\end{proof}
Further, we will use the following notations:
\begin{align*}
\sigma=\sqrt{npq},\quad\beta_3(p)={\bf E} {|X-p|^3},
\end{align*} $Y$ is a
standard normal random variable. Note that
$\varrho(p)=\frac{\beta_3(p)}{(pq)^{3/2}}$.
\begin{lemma}\label{lem_f^n-e^n-10} The following bound is
true for all $n\ge2$,
\begin{equation*}
\int_{|t|\le\pi}|f^n(t)-e^{-npqt^2/2}|
\,dt<\frac{1}{\sigma^2}\, \biggl(f(p,n)+\pi \sigma^2e^{-\sigma^2}+
\frac{4}{\pi}\,e^{-\pi^2\sigma^2/8} \biggr),
\end{equation*}
where
\begin{align*}
f(p,n)= \bigl(p^2+q^2 \bigr)\,\frac{\pi^4}{96}\,
\biggl(\frac{n}{n-1} \biggr)^2+ \frac{3\pi^5\sqrt{\pi
pq}}{2^{10}\sqrt{n}}\, \biggl(
\frac{n}{n-1} \biggr)^{5/2}.
\end{align*}
\end{lemma}
\begin{proof} Using the equalities $e^{-pqt^2/2}={\bf
E}e^{it\sqrt{pq}\,Y}$, ${\bf E}(X-p)^j={\bf
E} (Y\sqrt{pq}\, )^j$, $j=0,1,2$, and the Taylor formula, we
get
\begin{multline}
\label{t^3+t^4} |f(t)-e^{-pqt^2/2}|\!=\! \Bigg|{\bf E} \Biggl[\sum
_{j=1}^2\frac{ (it(X-p) )^j}{j!} +\frac{ (it(X-p) )^3}{2}\!\int
_0^1\!\!(1-\theta)^2
\,e^{it\theta(X-p)} \,d\theta \Biggr]
\\
-{\bf E} \Biggl[ \sum_{j=1}^3
\frac{1}{j!}\, (it\sqrt{pq}\,Y )^j +\frac{ (it\sqrt{pq}\,Y )^4}{3!} \int
_0^1(1-\theta)^3\, e^{it\theta\sqrt{pq}\,Y}
\,d\theta \Biggr] \Bigg|
\\
= \Bigg|{\bf E} \Biggl[\frac{ (it(X-p) )^3}{2}\int_0^1(1-
\theta)^2\,e^{it\theta(X-p)} \,d\theta
\\
- \frac{ (it\sqrt{pq}\,Y )^4}{3!} \int_0^1(1-
\theta)^3\, e^{it\theta\sqrt{pq}\,Y} \,d\theta \Biggr] \Bigg| \le\frac{|t|^3}{6}
\, \beta_3(p)+\frac{t^4}{8}\,(pq)^2.
\end{multline}
Since for $|x|\le\frac{\pi}{4}$ the inequality $|\sin
x|\ge\frac{2\sqrt{2}\,|x|}{\pi}$ is fulfilled, then with the help of
Lemma~\ref{lem-2-ZNC} we arrive at the following bound for
$|t|\le\pi/2$,
\begin{equation*}
\nonumber
|f(t)|\le\exp \bigl\{-2pq\sin^2(t/2) \bigr\} \le\exp
\biggl\{- \frac{4t^2pq}{\pi^2} \biggr\}.
\end{equation*} Then,
taking into account the elementary equality $ a^n-b^n = (a-b)
\sum\limits_{j=0}^{n-1}a^jb^{n-1-j} $ and the estimate
\eqref{t^3+t^4}, we obtain for $ | t |\le \pi / 2 $ that
\begin{multline*}
|f^n(t)-e^{-npqt^2/2}|\le |f(t)-e^{-pqt^2/2}|\sum
_{j=0}^{n-1}|f(t)|^je^{-(n-1-j)t^2pq/2}
\le
\\
\le \biggl(\frac{|t|^3}{6}\, \beta_3(p)+\frac{t^4}{8}\,
(pq)^{2} \biggr)\,\sum_{j=0}^{n-1}
\exp \bigl\{ \bigl[j\, \bigl(1-8/\pi^2 \bigr)-(n-1)
\bigr]t^2pq/2 \bigr\}\le
\\
\le \biggl(\frac{|t|^3}{6}\, \beta_3(p)+\frac{t^4}{8}\,
(pq)^{2} \biggr)\,n \,\exp \biggl\{-\frac{4(n-1)t^2pq}{\pi^2} \biggr\}.
\end{multline*}
Using the well-known formulas ${\bf E}|Y|^3=\frac{4}{\sqrt{2\pi}}$
and ${\bf E}Y^4=3$, we deduce from the previous inequality that for
$n\ge2$,
\begin{multline}
\label{f(p,n>2)}\int\limits
_{|t|\le\pi/2}|f^n(t)-e^{-npqt^2/2}|\,dt\le n\sqrt{2
\pi} \, \biggl(\frac{\beta_3(p)}{6m^2}\,{\bf E}|Y|^3+\frac{(pq)^2}{8m^{5/2}}\,{
\bf E}Y^4 \biggr) \bigg|_{m=\frac{8(n-1)pq}{\pi^2}}
\\
= n \biggl(\frac{\pi^4\varrho(p)}{96\sqrt{pq}\,(n-1)^2} +\frac{3\pi^5\sqrt{\pi}}{2^{10}\sqrt{pq}\,(n-1)^{5/2}} \biggr) =\frac{f(p,n)}{\sigma^2}.
\end{multline}
Applying Lemma \ref{lem-2-ZNC} again, we get
\begin{equation}
\label{int_x>pi/6}\int\limits
_{\pi/2\le |t|\le\pi}|f^n(t)|\,dt\le 2\int_{\pi/2}^{\pi}
e^{-2\sigma^2\,\sin^2
(t/2)}dt<\pi\,e^{-\sigma^2}.
\end{equation} Moreover, by virtue of
the known inequality
\begin{equation}
\label{norm-0}\int_c^\infty e^{-t^2/2}\,dt
\le \frac{1}{c}\,e^{-c^2/2},
\end{equation} which holds
for every $c>0$, we have
\begin{equation}
\label{e^{-nt}}\int_{|t|\ge
\pi/2}e^{-\sigma^2t^2/2}\,dt \le
\frac{4}{\pi
\sigma^2}\,e^{-\sigma^2\pi^2/8}.
\end{equation} Collecting the
estimates \eqref{f(p,n>2)}--\eqref{e^{-nt}}, we obtain the
statement of Lemma \ref{lem_f^n-e^n-10}.
\end{proof}
Denote
\begin{align*}
P_n(k)=C_n^kp^kq^{n-k},
\quad \delta_n(k,p)=P_n(k)-\frac{1}{\sqrt{npq}}\,\varphi
\biggl(\frac{k-np}{\sqrt{npq}} \biggr).
\end{align*}
\begin{lemma}\label{lem_loc} For every $n\ge1$ and $0\le
k\le n$ the following bound holds,
\begin{equation}
\label{loc-}|\delta_n(k,p)|<\min \biggl\{\frac{1}{\sigma\sqrt{2e}},
\frac{c_1}{\sigma^2} \biggr\},
\end{equation}
where $c_1$ is defined in \eqref{c1c2c3}.
\end{lemma}
\begin{proof}
It was proved in \cite{Herzog} that
$P_n(k)\le\frac{1}{\sqrt{2enpq}}$. Moreover,
$\frac{1}{\sqrt{npq}}\,\varphi (\frac{k-np}{\sqrt{npq}} )\le\frac{1}{\sqrt{2\pi
npq}}$. Hence,
\begin{equation}
\label{P-fi-2}|\delta_n(k,p)|\le\frac{1}{\sqrt{2e
npq}}=
\frac{1}{\sigma\sqrt{2e }}.
\end{equation}
Let us find another bound for $\delta_n(k,p)$. Let $\sigma>1$. Then
$n>\frac{1}{pq}\ge4$, i.e. $n\ge5$.
By the inversion formula for integer random variables,
\begin{align*}
P_n(k)=\frac{1}{2\pi}\int_{-\pi}^\pi
\bigl(q+e^{it}p\bigr)^n\,e^{-itk}\,dt=
\frac{1}{2\pi}\int_{-\pi}^\pi f^n(t)
\,e^{-it(k-np)}\,dt.
\end{align*} Moreover,
by the inversion formula for densities,
\begin{align*}
\frac{1}{\sigma}\,\varphi \biggl(\frac{x-\mu}{\sigma} \biggr)= \frac{1}{2\pi}
\int_{-\infty}^\infty e^{-t^2\sigma^2/2-it(x-\mu)}\,dt.
\end{align*} Consequently,
\begin{equation}
\label{J1-J2}\delta_n(k,p)=\frac{1}{2\pi} \,
(J_1-J_2 ),
\end{equation} where
\begin{align*}
J_1=\int_{-\pi}^\pi
\bigl[f^n(t)-e^{-\sigma^2t^2/2} \bigr]\,e^{-it(k-np)}\,dt,\quad
J_2=\int_{|t|\ge\pi} e^{-\sigma^2t^2/2}\,e^{-it(k-np)}
\,dt.
\end{align*} Note
that the function $f(p,n)$ from Lemma \ref{lem_f^n-e^n-10}
decreases in $n$. Hence, $f(p,n)\le f(p,5)$. It is not hard to
verify that $\max\limits_{p\in[0,1]}f(p,5)<1.707$. Thus, for
$\sigma>1$,
\begin{align*}
|J_1|\le \frac{1}{\sigma^2} \biggl(1.707+\frac{\pi}{e}+
\frac{4}{\pi}\,e^{-\pi^2/8} \biggr)<\frac{3.234}{\sigma^2}.
\end{align*} Using
inequality \eqref{norm-0}, we get the estimate
\begin{align*}
|J_2|\le \frac{2}{\pi\sigma^2}\,e^{-\pi^2\sigma^2/2}<\frac{0.005}{\sigma^2}.
\end{align*}
Thus, we get from~\eqref{J1-J2} that for $\sigma>1$,
\begin{equation}
\label{P-fi} |\delta_n(k,p)| \le \frac{3.24}{2\pi\sigma^2}<
\frac{0.516}{\sigma^2}.
\end{equation}
Since $\frac{1}{\sigma\sqrt{2e}}\le \frac{c_1}{\sigma^2}$ for
$0<\sigma\le c_1\sqrt{2e}=1.203\ldots$ (which is greater than 1), the statement of Lemma
\ref{lem_loc} follows from \eqref{P-fi-2} and \eqref{P-fi}.
\end{proof}
\begin{lemma}\label{lem_d/dpG} The following equality holds,
\begin{equation}
\label{d/dpG}\frac{\partial}{\partial
p}\,G_{n,p}(x)=-\frac{x(1-2p)+np}{2pq\sqrt{npq}}\,
\varphi \biggl(\frac{x-np}{\sqrt{npq}} \biggr).
\end{equation}
\end{lemma}
\begin{proof} We have
\begin{align*}
&\frac{d}{d p}p^{-1/2}(1-p)^{-1/2}=-\frac{q-p}{2pq\sqrt{pq}},
\\
&\frac{d}{d
p}p^{1/2}(1-p)^{-1/2}=\frac{1}{2}p^{-1/2}(1-p)^{-1/2}+
\frac{1}{2}p^{1/2}(1-p)^{-3/2} =\frac{1}{2q\sqrt{pq}}.
\end{align*} Hence,
\begin{align*}
\frac{\partial}{\partial p}\,\frac{x-np}{\sqrt{npq}}=-\frac{x(q-p)}{2pq\sqrt{npq}}- \frac{\sqrt{n}}{2q\sqrt{pq}}=-
\frac{x(q-p)+np}{2pq\sqrt{npq}},
\end{align*} and
we arrive at \eqref{d/dpG}. \end{proof}
\begin{lemma}\label{lem_loc-2} For all $n\ge1$ and $0\le
k\le n$ the following bound holds,
\begin{align*}
\bigg|\frac{\partial}{\partial p}\,F_{n,p}(k+1)- \frac{\partial}{\partial p}
\,G_{n,p}(k) \bigg|\le L_1(p)\equiv\frac{1}{pq} \biggl(
\frac{c_1}{q}+ c_2 \biggr).
\end{align*}
\end{lemma}
\begin{proof} It is shown in \cite{Shmet} that
\begin{align*}
\frac{\partial}{\partial
p}\,F_{n,p}(k+1)=-nC_{n-1}^kp^kq^{n-1-k}=-
\frac{n-k}{q}P_n(k).
\end{align*}
By Lemma \ref{lem_loc},
\begin{equation}
\label{P-fi-3}\frac{n-k}{q}\; \bigg|P_n(k)- \frac{1}{\sigma}\,
\varphi \biggl(\frac{k-np}{\sigma} \biggr) \bigg|\le \frac{n\,c_1}{q\sigma^2}=
\frac{c_1}{pq^2}.
\end{equation} In turn,
it follows from Lemma \ref{lem_d/dpG} that
\begin{multline}
\label{fi-fi}\frac{n-k}{q\sigma} \,\varphi \biggl(\frac{k-np}{\sigma} \biggr)+
\frac{\partial}{\partial
p}\,G_{n,p}(k)
\\
= \biggl(\frac{n-k}{q\sigma} - \frac{k(1-2p)+np}{2pq\sigma} \biggr)\, \,\varphi \biggl(
\frac{k-np}{\sigma} \biggr)=- \frac{k-np}{2pq\sigma}\,\varphi \biggl(\frac{k-np}{\sigma}
\biggr).
\end{multline}
Since
\begin{multline*}
\frac{\partial}{\partial p}\,F_{n,p}(k+1)- \frac{\partial}{\partial
p}
\,G_{n,p}(k)= -\frac{n-k}{q}\; \biggl[P_n(k)-
\frac{1}{\sigma}\,\varphi \biggl(\frac{k-np}{\sigma} \biggr) \biggr]
\\
- \biggl[\frac{n-k}{q\sigma}\, \varphi \biggl(\frac{k-np}{\sigma} \biggr)+
\frac{\partial}{\partial p}\,G_{n,p}(k) \biggr]
\end{multline*} and
$\max\limits_{x}|x|\varphi(x)=\frac{1}{\sqrt{2\pi e}}<0.242$, the
statement of the lemma follows from~\eqref{P-fi-3}
and~\eqref{fi-fi}.
\end{proof}
\begin{lemma}
\label{lem_loc-3} For all $n\ge1$ and $0\le k\le n$ the following
bound holds,
\begin{align*}
\bigg|\frac{\partial}{\partial p}\,F_{n,p}(k)- \frac{\partial}{\partial p}
\,G_{n,p}(k) \bigg|\le L_2(p)\equiv\frac{1}{pq} \biggl(
\frac{c_1}{p}+ c_2 \biggr),
\end{align*} where $c_1$,
$c_2$ are from \eqref{c1c2c3}.
\end{lemma}
\begin{proof} Similarly to the proof of Lemma \ref{lem_loc-2} we
obtain
\begin{align}
&\frac{\partial}{\partial
p}\,F_{n,p}(k)=-nC_{n-1}^{k-1}p^{k-1}q^{n-k}=-
\frac{k}{p}P_n(k),
\nonumber\\
\label{P-fi-1}&\frac{k}{p}\; \bigg|P_n(k)- \frac{1}{\sigma}\,
\varphi \biggl(\frac{k-np}{\sigma} \biggr) \bigg|\le \frac{k\,c_1}{p\sigma^2}\le
\frac{c_1}{p^2q}.
\end{align}
Hence,
\begin{align*}
\frac{\partial}{\partial p}\,F_{n,p}(k)- \frac{\partial}{\partial
p}\,G_{n,p}(k)=
-\frac{k}{p}\; \biggl[P_n(k)- \frac{1}{\sigma}\,\varphi
\biggl(\frac{k-np}{\sigma} \biggr) \biggr]- \frac{k-np}{2pq\sigma}\varphi \biggl(
\frac{k-np}{\sigma} \biggr).
\end{align*}
Since the last summand on the right-hand side of the equality is
less than $\frac{0.121}{pq}$, then by using~\eqref{P-fi-1} we get
the statement of the lemma. \end{proof}
\begin{lemma}\label{lem-2.8} For every $0<p<0.5$,
\begin{equation}
\frac{d}{dp}\frac{1}{\varrho(p)}=\frac{1}{2}\,A(p):=
\frac{1}{2}\,\frac{(1-2p)(1+2pq)}{\sqrt{pq}(1-2pq)^2}.\label{(2.8.0)}
\end{equation}
\end{lemma}
\begin{proof} The lemma follows from the equalities:
\begin{align*}
&\frac{d}{dp}\,\frac{1}{\varrho(p)}= \frac{d}{dx}\,\frac{x}{1\,{-}\,2x^2}
\bigg|_{x=\sqrt{pq}}\times\frac{d}{dp}\sqrt{p(1\,{-}\,p)},\;\;\frac{d}{dp}
\sqrt{p(1\,{-}\,p)}=\frac{1-2p}{2\sqrt{pq}},
\\
& \frac{d}{dx}\,\frac{x}{1-2x^2}=\frac{1}{1-2x^2} +
\frac{4x^2}{(1-2x^2)^2}=\frac{1+2x^2}{(1-2x^2)^2}.\qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{lem-2.9} The function $A(p)$ decreases
on the interval $(0,0.5)$.\end{lemma}
\begin{proof} Denote $x=x(p)=p(1-p)$, $A_1(t)=
\frac{\sqrt{1-4t}\,(1+2t)}{\sqrt{t}\,(1-2t)^2}$. Taking into account
the equality $1-2p=\sqrt{1-4pq}$, we obtain $A(p)=A_1(x)$.
Since $x(p)$ increases for $0<p<0.5$, it remains to prove the
decrease of the function $A_1(x)$ for $0< x<0.25$. We have
\begin{align*}
\frac{d}{dx} \,\ln{A_1(x)}=\frac{-2}{1-4x}+
\frac{2}{1+2x}-\frac{1}{2x}+\frac{4}{1-2x} =-\frac{32x^3+36x^2-12x+1 }{2x(1-4x)(1-4x^2)}.
\end{align*} On the interval
$[0,0.25]$ the polynomial $A_2(x)\equiv32x^3+36x^2-12x+1$ has the
single minimum point $x_1=\frac{-3+\sqrt{17}}{8}=0.140\ldots\;$.
Since $A_2(x_1)=0.11\ldots>0$, we have $\frac{d}{dx}\,\ln A_1(x)<0$
for $0\le x<0.25$, i.e. the function $A_1(x)$ decreases on $(0,0.25)$.
The lemma is proved.\end{proof}
\begin{lemma}\label{lem-L(p)ubyv} The function $L(p)$,
defined in \eqref{L(p)-new}, decreases on $[0,0.5]$.\end{lemma}
\begin{proof} Taking into account the equality $p^2+q^2=1-2pq$, it is
not difficult to see that
\begin{equation}
\label{L(p)-new2}L(p)=\frac{1}{\varrho(p)}\,L_2(p)+c_3\,A(p).
\end{equation}
According to Lemma \ref{lem-2.9}, the function $A(p)$ decreases.
Consequently, it remains to prove that the function
$L_3(p):=\frac{1}{\varrho(p)}\,L_2(p)=\frac{c_1+c_2p}{p\sqrt{pq}(1-2pq)}$
decreases on $[0,0.5]$. We have
\begin{multline*}
\frac{d}{dp}\,\ln L_3(p)=\frac{c_2}{c_1+c_2p}-
\frac{3}{2p}+\frac{1}{2(1-p)}+\frac{2(1-2p)}{
1-2p+2p^2}
\\
=
\frac{A_3(p)}{2pq(c_1+c_2p)(1-2pq)},
\end{multline*}
where $A_3(p)=-3 c_1 + (14 c_1 - c_2) p - (26 c_1-8c_2) p^2 + (16
c_1 -
18 c_2) p^3 + 12 c_2 p^4$. Let us prove that
\begin{equation}
\label{A3(p)<0} A_3(p)<0,\quad 0<p<0.5.
\end{equation}
We have
\begin{align*}
&A_3^\prime(p)=14 c_1 - c_2 -4 (13
c_1-4c_2) p +6 (8 c_1 - 9 c_2)
p^2 + 48 c_2 p^3,
\\
&A_3^{\prime\prime}(p)= -4 (13 c_1-4c_2) +12
(8 c_1 - 9 c_2) p + 144 c_2 p^2.
\end{align*}
As a result of calculations, we find that the equation
$A_3^{\prime}(p)=0$ has the single root $p_0=0.478287\ldots$ on
$[0,0.5]$. The roots of the equation $A_3^{\prime\prime}(p)=0$ have
the form
\begin{align*}
p_{1,2}=\frac{1}{24c_2}\, \bigl(-8c_1+9c_2
\pm\sqrt{(8c_1-9c_2)^2+16c_2(13c_1-4c_2)}
\, \bigr),
\end{align*}
and are equal to $p_1=-2.6\ldots\;$, $p_2=0.54\ldots\;$
respectively. Hence, $A_3^{\prime\prime}(p)<0$ for $p\in[0,0.5]$.
Thus, the function $A_3(p)$, considered on $[0,0.5]$, takes a
maximum value at the point $p_0$. Since
\mbox{$A_3(p_0)=-0.257\ldots\;$}, inequality~\eqref{A3(p)<0} is
proved. This implies that $L_3(p)$ decreases on
$(0,0.5)$.\end{proof}
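A direct numerical evaluation (a sketch only, not a substitute for the argument
above) confirms inequality \eqref{A3(p)<0}:
\begin{verbatim}
import numpy as np

c1, c2 = 0.516, 0.121
p = np.linspace(1e-6, 0.5, 200001)
A3 = (-3*c1 + (14*c1 - c2)*p - (26*c1 - 8*c2)*p**2
      + (16*c1 - 18*c2)*p**3 + 12*c2*p**4)
print(p[A3.argmax()], A3.max())   # maximum about -0.257 near p0 = 0.478
\end{verbatim}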
Let $f(x)$ be an arbitrary function. Denote by $D^+f(x)$ and
$D^-f(x)$ its right-side and left-side derivatives respectively (if
they exist).
\begin{lemma}\label{lem-f11} Let
$g(x)=\max\{f_{1}(x),\,f_{2}(x)\}$, where $f_1(x)$ and $f_2(x)$
are functions, differentiable on a finite interval $(a,b)$. Then at
every point $x\in(a,b)$ there exist both one-side derivatives $D^+
g(x)$ and $D^- g(x)$, each of which coincides with either
$f^\prime_{1}(x)$ or $f^\prime_{2}(x)$. \end{lemma}
\begin{proof} Let $x$ be a point such that
$f_{1}(x)\neq f_{2}(x)$. Then the function $g$ is differentiable at~$x$, and in this case the statement of the lemma is trivial.
Now let for a point $x\in(a,b)$,
\begin{equation}
\label{f1=f2}f_{1}(x)=f_{2}(x).
\end{equation}
First, consider the case $f_1^\prime(x)\neq f_2^\prime(x)$. Let, for
instance, $f_1^\prime(x)> f_2^\prime(x)$. Then there exists $h_0>0$
such that
\begin{align}
\label{f1>f2}&f_1(x+h)>f_2(x+h),\quad 0<h\le
h_0,
\\
&\label{f2>f1}f_2(x+h)>f_1(x+h), \quad
-h_0\le h<0.
\end{align}
From differentiability of the functions $f_1$ and $f_2$ it follows
that for $h\to0$,
\begin{equation}
\label{fi}f_i(x+h)=f_i(x)+f_i^\prime(x)h+o(h),
\quad i=1,2.
\end{equation} Then using \eqref{f1>f2} we obtain the equality
\begin{align*}
g(x+h)=f_1(x+h)=f_1(x)+f_1^\prime(x)h+o(h),
\quad h>0,
\end{align*}
and using \eqref{f2>f1},
\begin{align*}
g(x+h)=f_2(x+h)=f_2(x)+f_2^\prime(x)h+o(h),
\quad h<0.
\end{align*}
Thus, existence of $D^+ g(x)$ and $D^- g(x)$ follows.
Now let
\begin{equation}
\label{df1=df2}f_1^\prime(x)= f_2^\prime(x).
\end{equation} It follows from \eqref{f1=f2},
\eqref{fi} and \eqref{df1=df2} that for $h\to0$,
\begin{align*}
g(x+h)=f_i(x)+f_i^\prime(x)h+o(h),\quad i=1,2.
\end{align*} Hence, $g^\prime(x)=f_1^\prime(x)=f_2^\prime(x)$. The lemma
is proved. \end{proof}
Denote
\begin{align*}
\varrho=\varrho(p),\quad q_i=1-p_i,\quad
\varrho_i=\varrho(p_i)\equiv\frac{\omega(p_i)}{\sqrt{p_iq_i}}.
\end{align*}
\begin{lemma}\label{lem-2.10} Let $ 0< p_1<p<p_2\le0.5$.
Then for all $n\ge1$ and $\;0\le k\le n$,
\begin{equation}
\bigg|\frac{1}{\varrho}\,\Delta_{n,k}(p)- \frac{1}{\varrho_1}\,
\Delta_{n,k}(p_1) \bigg|\le L(p_1)
\,(p-p_1),\quad \;\label{(2.17)}
\end{equation} and
\begin{equation}
\bigg|\frac{1}{\varrho}\Delta_{n,k}(p)-\frac{1}{\varrho_2}
\Delta_{n,k}(p_2 ) \bigg|<L(p_1) (p_2-p).\label{(2.10.17)}
\end{equation}\end{lemma}
\begin{proof} Note that $\Delta_{n,k}(p)<0.541$ (see~\cite{arxive-2007}). Consequently,
\begin{equation}
\bigg|\frac{1}{\varrho}\,\Delta_{n,k}(p)- \frac{1}{\varrho_1}\,
\Delta_{n,k}(p_1) \bigg|\le \frac{1}{\varrho_1}\,|
\Delta_{n,k}(p)-\Delta_{n,k}(p_1)|+0.541 \biggl(
\frac{1}{\varrho}- \frac{1}{\varrho_1} \biggr)\;.\label{(2.18)}
\end{equation}
It is obvious that $F_{n,p}(k)$ and $G_{n,p}(k)$, considered as
functions of the argument
$p$, are differentiable. Then, according to Lemma~\ref{lem-f11}, the one-side derivatives of the functions $\Delta_{n,k}(p)$ exist at each
point
$p\in[0,0.5]$ and coincide with
$\frac{\partial}{\partial p} (F_{n,p}(k+1)-G_{n,p}(k) )$ or $\frac{\partial}{\partial
p} (G_{n,p}(k)-F_{n,p}(k) )$.
Taking into account that $L_1(p)\le L_2(p)$ for $0<p\le0.5$, we
obtain from Lemmas~\ref{lem_loc-2} and \ref{lem_loc-3}
\begin{multline}
|\Delta_{n,k}(p)-\Delta_{n,k}(p_1)|\le
(p-p_1)\max_{p_1\le s\le
p} |D^+\Delta_{n,k}(s) |
\\
\le(p-p_1)\max_{p_1\le s\le p} L_2(s).\label{(2.10.18)}
\end{multline} The function $L_2(s)$
decreases on
$(0,\,0.5]$. Hence,
\begin{equation}
\max_{p_1\le s\le p} L_2(s)=L_2(p_1).\label{(2.10.19)}
\end{equation} The inequality
\begin{equation}
\frac{1}{\varrho_1}\, |\Delta_{n,k}(p)- \Delta_{n,k}(p_1)
|\le\frac{p-p_1}{\varrho_1}\,L_2(p_1)\label{(2.19)}
\end{equation}
follows from \eqref{(2.10.18)} and \eqref{(2.10.19)}.
Taking into account Lemmas~\ref{lem-2.8} and \ref{lem-2.9}, we have
\begin{equation}
\label{(2.20)}\frac{1}{\varrho}-\frac{1}{\varrho_1}\le (p-p_1)\,
\max\limits
_{p_1<s<p}\frac{d}{ds}\frac{1}{\varrho(s)}<2^{-1}A(p_1)
(p-p_1).
\end{equation} Collecting the estimates \eqref{(2.18)}, \eqref{(2.19)}, \eqref{(2.20)}, we obtain with the help of
\eqref{L(p)-new2} that for $0\le p_1<p\le0.5$,
\begin{multline}
\label{p1<p} \bigg|\frac{1}{\varrho}\,\Delta_{n,k}(p)-
\frac{1}{\varrho_1}\,\Delta_{n,k}(p_1) \bigg|\le
(p-p_1) \biggl(\frac{1}{\varrho_1}\,L_2(p_1)+0.271
\,A(p_1) \biggr)
\\
= (p-p_1)L(p_1).
\end{multline} Hence, for $0<p<p_2\le0.5$,
\begin{equation}
\label{p2-p} \bigg|\frac{1}{\varrho}\Delta_{n,k}(p)-\frac{1}{\varrho_2}
\Delta_{n,k}(p_2 ) \bigg|<(p_2-p)L(p).
\end{equation}
Inequality \eqref{(2.17)} coincides with \eqref{p1<p}, and
inequality \eqref{(2.10.17)} follows from \eqref{p2-p} and
Lemma~\ref{lem-L(p)ubyv}.
Lemma~\ref{lem-2.10} is proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th-3}] It follows from the definition of
$p^\prime$ that either $0<p-p^{\,\prime}<h/2$ or
$0<p^{\,\prime}-p<h/2$. In the first case the statement of the
theorem follows from \eqref{(2.17)} and Lemma~\ref{lem-L(p)ubyv},
and in the second one from \eqref{(2.10.17)} and
Lemma~\ref{lem-L(p)ubyv} again.
\end{proof}
\section{Proof of Theorem \ref{th-2}}
\label{sect3}
\subsection{Theorem 1.1~\cite{NaCh} and some its consequences}
\label{sect2+1}
First we formulate Theorem 1.1 from \cite{NaCh}. To do this, we
need to enter a rather lot of notations from \cite{NaCh}:
\begin{align*}
&\omega_3(p)=q-p,\quad \omega_4(p)=|q^3+p^3-3pq|,
\quad \omega_5(p)=q^4-p^4,
\\
&\omega_6(p)=q^5+p^5+15(pq)^2,
\\
&K_1(p,n)=\frac{\omega_3(p)}{4\sigma\sqrt{2\pi}(n-1)}\, \biggl(1+\frac{1}{4(n-1)}
\biggr)+ \frac{\omega_4(p)}{12\sigma^2\pi}\, \biggl(\frac{n}{n-1} \biggr)^2
\\
&\hspace*{30mm}+\,\frac{\omega_5(p)}{40\sigma^{3}\sqrt{2\pi}}\, \biggl(\frac{n}{n-1}
\biggr)^{5/2}+ \frac{\omega_6(p)}{90\sigma^{4}\pi}\, \biggl(\frac{n}{n-1}
\biggr)^{3};
\end{align*}
\vspace*{-12pt}
\begin{align*}
\omega(p)&=p^2+q^2,& \zeta(p)&= \biggl(
\frac{\omega(p)}{6} \biggr)^{2/3},\quad e(n,p)=\exp \biggl\{
\frac{1}{24\sigma^{2/3} \zeta^2(p)} \biggr\},
\\
e_5&=0.0277905,& \widetilde{\omega}_5(p)&=p^4+q^4+5!
\,e_5(pq)^{3/2},
\end{align*}
\vspace*{-18pt}
\begin{align*}
&V_6(p)=\omega_3^2(p),\quad
V_7(p)=\omega_3(p)\omega_4(p),\quad
V_8(p)=\frac{2\widetilde{\omega}_5(p)\omega_3(p)}{5!3!}+ \biggl(\frac{\omega_4(p)}{4!}
\biggr)^2,
\\
&V_9(p)=\widetilde{\omega}_5(p)\omega_4(p),
\quad V_{10}(p)=\widetilde{\omega}_5^2(p),\quad
A_k(n)= \biggl(\frac{n}{n-2} \biggr)^{k/2}\,
\frac{n-1}{n},
\end{align*}
\vspace*{-15pt}
\begin{align*}
\begin{array}{lllll}
\gamma_6=\frac{1}{9},&\quad\gamma_7=\frac{5\sqrt{2\pi}}{96},&\quad\gamma_8=24,&\quad
\gamma_9=\frac{7\sqrt{2\pi}}{4!\,16},&\quad\gamma_{10}=\frac{2^6\cdot
3}{(5!)^2},
\\[12pt]
\widetilde\gamma_6=\frac{2}{3},&\quad\widetilde\gamma_7=\frac{7}{8},&\quad\widetilde\gamma_8=\frac{10}{9},
&\quad\widetilde\gamma_9=\frac{11}{8},&\quad\widetilde\gamma_{10}=\frac{5}{3},
\end{array}
\end{align*}
\vspace*{-15pt}
\begin{equation*}
K_2(p,n)=\frac{1}{\pi\sigma}\sum_{j=1}^5
\frac{\gamma_{j+5}\,A_{j+5}(n)\,V_{j+5}(p)}{\sigma^j}\, \biggl[1 +\frac{\widetilde\gamma_{j+5}\,e(n,p)\,n}{\sigma^{2}\,(n-2)} \biggr];
\end{equation*}
\vspace*{-15pt}
\begin{align*}
A_1=5.405,\quad A_2=7.521,\quad A_3=5.233,
\quad \mu=\frac{3\pi^2-16}{\pi^4},
\end{align*}
\vspace*{-18pt}
\begin{align*}
\chi(p,n)= \frac{2\zeta(p)}{\sigma^{2/3}}\;\,\text{\rm if}\;\, p\in(0,0.085),\;\;\text{\rm
and}\;\; \chi(p,n)=0\;\,\text{\rm if}\;\, p\in[0.085,0.5],
\end{align*}
\vspace*{-9pt}
\begin{equation*}
\begin{split} K_3(p,n)&=\frac{1}{\pi}\, \biggl\{
\frac{1}{12\sigma^2}+ \biggl(\frac{1}{36}+\frac{\mu}{8} \biggr)\,
\frac{1}{\sigma^4}+ \biggl(\frac{1}{36}\,e^{A_1/6}+
\frac{\mu}{8} \biggr)\, \frac{1}{\sigma^6}+ \frac{5\mu}{24}
\,e^{A_2/6}\, \frac{1}{\sigma^8}
\\
&+\,\frac{1}{3}\,\exp \biggl\{ -\sigma\sqrt{A_1}+
\frac{A_1}{6} \biggr\}+(\pi-2)\mu\exp \biggl\{ -\sigma\sqrt{
A_2}+\frac{ A_2}{6} \biggr\}
\\
&+\,\exp \biggl\{-\sigma\sqrt{A_3}+\frac{A_3}{6} \biggr\}
\frac{1}{4}\,\ln \biggl(\frac{\pi^4 \sigma^2}{4A_3} \biggr)
\\
&+\,\exp \biggl\{-\frac{\sigma^{2/3}}{2\zeta(p)} \biggr\} \biggl[\frac{2\zeta(p)}{\sigma^{2/3}}
+e^{A_3/6}\,\frac{1+\chi(p,n)}{24\,\zeta(p)\,\sigma^{4/3}} \biggr] \biggr\}; \end{split}
\end{equation*}
\vspace*{-6pt}
\begin{equation}
\label{R=K1+}R(p,n)=K_1(p,n)+K_2(p,n)+K_3(p,n).
\end{equation}
\begin{tthm}[{\cite[Theorem 1.1]{NaCh}}]\label{thmA}
Let
\begin{equation}
\label{p>0.02} \frac{4}{n}\le p\le0.5,\quad n\ge200.
\end{equation} Then
\begin{equation}
\label{Delta-Th} \Delta_n(p)\le \frac{\varrho(p)}{\sqrt{n}}\,{\cal
E}(p)+R(p,n),
\end{equation} and
the sequence $R_0(p,n):=\frac{\sqrt{n}}{\varrho(p)}\,R(p,n)$ tends
to zero for every $0<p\le0.5$, decreasing in~$n$.
\end{tthm}
Denote
\begin{align*}
E(p,n)={\cal E}(p)+R_0(p,n).
\end{align*} Figure \ref{ris3} shows the relative positions of the
following functions: $E(p, n)$ for $ n = 200 $ and $800 $, $ {\cal
E}(p) $ and $ T_n(p) |_{n = 50}$. Note that, as a consequence of
the definition of the binomial distribution, the behavior of these
functions is symmetric with respect to $ p = 0.5 $.
\begin{figure}[ht]
\includegraphics{113f01}
\caption{Graphs of the functions (from top to bottom): \mbox{$\!E(p,\!200),\,
\!E(p,\!800),\,\!{\cal E}(p),\,\!T_{50}(p)$}}
\label{ris3}
\end{figure}
Recall that $N_0=500000$.
\begin{ccor}\label{corA}
For $p\in[0.1689,0.5]$, and $ n\ge
N_0$,
\begin{align*}
E(p,n)\le E(p,N_0)<0.409954.
\end{align*}
\end{ccor}
\begin{proof}
Since $E(p,n)$ decreases in $n$, we obtain the statement of
Corollary~\ref{corA} by finding the maximal value of $E(p,N_0)$ directly using a computer.
\end{proof}
In order to verify the plausibility of the previous numerical
result, we estimate the function $E(p,N_0)$, making preliminary
estimates of some of the terms that enter into it. This leads to the
following, somewhat coarser, inequality.
\begin{CCOR}
For $p\in[0.1689,0.5]$, and $ n\ge
N_0$,
\begin{equation}
\label{Epn<}E(p,n)<0.409954153.
\end{equation}
\end{CCOR}
\begin{proof}
We separate the proof of \eqref{Epn<} into four steps. First we rewrite
$R_0(p,n)$ in the following form,
\begin{align*}
R_0(p,n)=\frac{K_1(p,n)\sigma}{\omega(p)}+\frac{K_2(p,n)\sigma}{\omega(p)}+
\frac{K_3(p,n)\sigma}{\omega(p)}.
\end{align*}
In each function $\frac{K_i(p,n)\sigma}{\omega(p)}$, $i=1,2,3$, we
will select the principal term, and estimate the remaining ones.
Step 1. Note that for $n\ge N_0$ and $0<a\le3$,
\begin{align*}
&\biggl(\frac{n}{n-1} \biggr)^a\le \biggl(\frac{n}{n-1}
\biggr)^3<e_1:=1.00000601,
\\
& 1+\frac{1}{4(n-1)}
\!<e_2:=1.000000501.
\end{align*} Then
\begin{align*}
\frac{K_1(p,n)\,\sigma}{\omega(p)}=\frac{\omega_4(p)}{12\pi\omega(p)\sigma} \biggl(\frac{n}{n-1}
\biggr)^2 +r_1(p,n),
\end{align*}
where
\begin{align*}
r_1(p,n)<\widetilde r_1(p,n):=\frac{e_1}{\omega(p)} \biggl(
\frac{e_2\,\omega_3(p)}{4\sqrt{2\pi}(n-1)} +\frac{\omega_5(p)}{40\sqrt{2\pi}\sigma^2}+\frac{\omega_6(p)}{90\pi
\sigma^3} \biggr).
\end{align*}
Using a computer, we get the estimate $\widetilde r_1(p,n)\le \widetilde
r_1(0.1689,N_0)<2.78\cdot10^{-7}$.
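For the reader's convenience, this numerical estimate can be reproduced with the following minimal Python sketch (the function name and code layout are ours and purely illustrative; only the standard library is used):
\begin{verbatim}
# Minimal sketch: evaluation of the bound \tilde r_1(p, n) at
# p = 0.1689, n = N_0 = 500000 (illustrative names).
from math import sqrt, pi

def r1_tilde(p, n):
    q = 1 - p
    omega  = p**2 + q**2                        # omega(p)
    omega3 = q - p                              # omega_3(p)
    omega5 = q**4 - p**4                        # omega_5(p)
    omega6 = q**5 + p**5 + 15 * (p * q)**2      # omega_6(p)
    sigma  = sqrt(n * p * q)
    e1 = 1.00000601                             # bound for (n/(n-1))^a, a <= 3
    e2 = 1.000000501                            # bound for 1 + 1/(4(n-1))
    return (e1 / omega) * (e2 * omega3 / (4 * sqrt(2 * pi) * (n - 1))
                           + omega5 / (40 * sqrt(2 * pi) * sigma**2)
                           + omega6 / (90 * pi * sigma**3))

print(r1_tilde(0.1689, 500000))                 # approx. 2.78e-7
\end{verbatim}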
Step 2. We have
\begin{align*}
\frac{K_2(p,n)\sigma}{\omega(p)}= \frac{\gamma_{6}\,A_{6}(n)\,V_{6}(p)}{\pi\omega(p)\sigma}+r_2(p,n),
\end{align*}
where
\begin{align*}
r_2(p,n)= \sum_{j=2}^5
\frac{\gamma_{j+5}A_{j+5}(n)V_{j+5}(p)}{\pi\omega(p)\sigma^j}
\biggl[1 +\frac{\widetilde\gamma_{j+5}e(n,p)n}{\sigma^{2}(n-2)} \biggr]
+\frac{\gamma_{6}\widetilde\gamma_{6}A_6(n)e(n,p)n}{\pi\omega(p)\sigma^{3}(n-2)}.
\end{align*}
Note that for $n\ge N_0$, $1\le j\le5$, and
$p\in[0.1689,0.5]$, we have
\begin{align*}
& A_{j+5}(n)<A_{10}(N_0)<e_3:=1.00001801,
\quad e(n,p)\le e(N_0,0.5)<1.02316,
\\
& 1+\frac{\widetilde\gamma_{j+5}\,e(n,p)\,n}{\sigma^{2}\,(n-2)}<1+\frac{(5/3)\cdot1.02316}{pq(N_0-2)}
\bigg|_{p=0.1689}<e_4:=1.0000243.
\end{align*}
Then, taking into account as well that $A_6(N_0)<1.0000101$, we get
\begin{align*}
r_2(p,n)<\widetilde r_2(p,n):=\frac{e_3\cdot
e_4}{\pi\omega(p)}\sum
_{j=2}^5\frac{\gamma_{j+5}V_{j+5}(p)}{\sigma^j}+
\frac{(1/9)(2/3)1.0000101\cdot1.02316}{\pi\omega(p)(pq)^{3/2}\sqrt{n}(n-2)}.
\end{align*}
We find with the help of a computer: $\widetilde r_2(p,n)\le
\widetilde r_2(0.1689,N_0)<8.852\cdot10^{-8}$.
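The auxiliary constants $e_3$, $e(N_0,0.5)$, and $e_4$ used in this step can be checked with a similar minimal Python sketch (illustrative names; standard library only); the bound for $\widetilde r_2$ then follows by inserting these constants into the displayed expression.
\begin{verbatim}
# Minimal sketch: the bounding constants of Step 2 for N_0 = 500000.
from math import sqrt, exp

N0 = 500000

def A(k, n):                     # A_k(n) = (n/(n-2))^{k/2} (n-1)/n
    return (n / (n - 2))**(k / 2) * (n - 1) / n

def e_np(n, p):                  # e(n,p) = exp{1/(24 sigma^{2/3} zeta^2(p))}
    q = 1 - p
    sigma = sqrt(n * p * q)
    zeta  = ((p**2 + q**2) / 6)**(2 / 3)
    return exp(1 / (24 * sigma**(2 / 3) * zeta**2))

print(A(10, N0))                 # < 1.00001801 = e_3
print(e_np(N0, 0.5))             # < 1.02316
p, q = 0.1689, 1 - 0.1689
print(1 + (5 / 3) * 1.02316 / (p * q * (N0 - 2)))   # < 1.0000243 = e_4
\end{verbatim}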
Step 3. Let us write
\begin{align*}
\frac{K_3(p,n)\sigma}{\omega(p)}=\frac{1}{12\pi\omega(p)\sigma}+r_3(p,n),
\end{align*}
where
\begin{align*}
r_3(p,n)&=\frac{\sigma}{\pi \omega(p)}\, \biggl\{ \biggl(
\frac{1}{36}+\frac{\mu}{8} \biggr)\, \frac{1}{\sigma^4}+ \biggl(
\frac{1}{36}\,e^{A_1/6}+ \frac{\mu}{8} \biggr)\,
\frac{1}{\sigma^6}+ \frac{5\mu}{24}\,e^{A_2/6}\, \frac{1}{\sigma^8}
\nonumber
\\
&\quad+\,\frac{1}{3}\,\exp \biggl\{ -\sigma\sqrt{A_1}+
\frac{A_1}{6} \biggr\}+(\pi-2)\mu\exp \biggl\{ -\sigma\sqrt{
A_2}+\frac{ A_2}{6} \biggr\}
\nonumber
\\
&\quad+\,\exp \biggl\{-\sigma\sqrt{A_3}+\frac{A_3}{6} \biggr
\} \frac{1}{4}\,\ln \biggl(\frac{\pi^4 \sigma^2}{4A_3} \biggr)
\nonumber
\\
&\quad+\,\exp \biggl\{-\frac{\sigma^{2/3}}{2\zeta(p)} \biggr\} \biggl[\frac{2\zeta(p)}{\sigma^{2/3}}
+e^{A_3/6}\,\frac{1+\chi(p,n)}{24\,\zeta(p)\,\sigma^{4/3}} \biggr] \biggr\}.
\end{align*}
Using a computer, we get $r_3(p,n)\le
r_3(0.1689,N_0)<1.08\cdot10^{-9}$.
Thus, for $p\in[0.1689,0.5]$, $n\ge N_0$, we have
\begin{multline*}
r_1(p,n)+r_2(p,n)+r_3(p,n)<2.78
\cdot10^{-7}+8.852\cdot10^{-8}+1.08\cdot10^{-9}<3.676
\cdot10^{-7}.
\end{multline*}
Step 4. Now consider the function
\begin{align*}
B(p,n)= {\cal E}(p)+\frac{1}{12\pi\omega(p)\sigma} \biggl(\omega_4(p) \biggl(
\frac{n}{n-1} \biggr)^2 +12\gamma_{6}
\,A_{6}(n)\,V_{6}(p)+1 \biggr).
\end{align*}
We find with the help of a computer that for $p\in[0.1689,0.5]$,
$n\ge N_0$,
\begin{multline*}
\max_{p\in[0.1689,0.5]}B(p,n)=\max_{p\in[0.1689,0.5]}B(p,N_0)
\\
=B(0.418886928\ldots\;,N_0)=0.40995378459\ldots\;.
\end{multline*}
Consequently,
\begin{multline*}
E(p,n)=B(p,n)+\sum_{j=1}^3
r_j(p,n)
\\
<0.4099537846+3.676\cdot10^{-7}
<0.409954153.
\qedhere
\end{multline*}
\end{proof}
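The maximization of $B(p,N_0)$ performed above with the help of a computer can be reproduced, for instance, by a simple grid search; the following Python sketch is purely illustrative (names and grid resolution are ours):
\begin{verbatim}
# Minimal sketch: grid search for the maximum of B(p, N_0) on [0.1689, 0.5].
from math import sqrt, pi

N0 = 500000

def B(p, n):
    q = 1 - p
    omega  = p**2 + q**2
    omega3 = q - p
    omega4 = abs(q**3 + p**3 - 3 * p * q)
    sigma  = sqrt(n * p * q)
    calE   = (2 - p) / (3 * sqrt(2 * pi) * omega)     # {\cal E}(p)
    A6     = (n / (n - 2))**3 * (n - 1) / n           # A_6(n)
    V6     = omega3**2                                # V_6(p)
    return calE + (omega4 * (n / (n - 1))**2
                   + 12 * (1 / 9) * A6 * V6 + 1) / (12 * pi * omega * sigma)

grid  = [0.1689 + k * (0.5 - 0.1689) / 10**5 for k in range(10**5 + 1)]
p_max = max(grid, key=lambda p: B(p, N0))
print(p_max, B(p_max, N0))       # close to 0.418886928..., 0.40995378...
\end{verbatim}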
Let us introduce the following notations:
\begin{align*}
{\cal E}_1(p)=\bigl(p^2+q^2\bigr){\cal E}(p)=
\frac{2-p}{3\sqrt{2\pi}},
\end{align*}
$D_2(p,n)$
is the coefficient of $\frac{1}{\sigma^2}$ in the expansion of
$R(p,n)$ in powers of $\frac{1}{\sigma}$,\break $\overline
D_2(p,n)=\sigma^2R(p,n)$, where the remainder $R(p,n)$ is defined by
equality~\eqref{R=K1+}. One can rewrite bound \eqref{Delta-Th} in
the following form,
\begin{equation}
\label{Delta-Th-2} \Delta_n(p)\le\frac{{\cal E}_1(p)}{\sigma}+
\frac{\overline
D_2(p,n)}{\sigma^2}.
\end{equation} Define
$D_2^I(n)=\max\limits_{p\in I}D_2(p,n)$, $\overline
D_2^I(n)=\max\limits_{p\in I}\overline D_2(p,n)$, where $I$ is an
interval.
\begin{ccor}\label{corB}
The quantities $\max\limits_{n\ge N}D_2^I(n)$ and $\max\limits_{n\ge
N}\overline D_2^I(n)$ take the following values depending on
$N=200,\,N_0$ and intervals $I=[0.02,0.5]$, $[0.1689,0.5]$:
\begin{table}[h]
\caption{Some values of $\max\limits_{n\ge N}D_2^I(n)$ and
$\max\limits_{n\ge N}\overline D_2^I(n)$}
\label{tab2-D2}
\begin{tabular}{|c|c|c|c|}
\hline
&$\phantom{\int}$\hspace{-2mm}$I=[0.02,0.5]$&\multicolumn{2}{c|}{$I=[0.1689,0.5]$}\\[0.3mm]
\hline&$N=200$&$N=200$&$N=N_0$\\[0.3mm]
\hline $\max\limits_{n\ge
N}D_2^I(n)=$&$0.083592\ldots$&$0.046656\ldots$&$0.0462198\ldots$\\[0.5mm]
\hline
$\max\limits_{n\ge N}\overline
D_2^I(n)=$&$0.1940\ldots$&$0.05986\ldots$&$0.05531\ldots$\\[0.5mm]
\hline
\end{tabular}
\end{table}
\end{ccor}
\begin{proof}
Since
\begin{align*}
\max_{n\ge N}\overline D_2^I(n)=\max
_{n\ge N}\max_{p\in
I}\sigma^2R(p,n)=
\max_{p\in I}\sigma^2R(p,N),
\end{align*} the tabulated values of $\max\limits_{n\ge
N}\overline D_2^I(n)$ are obtained directly with the help of a
computer.
Proceed to the derivation of the values of $\max\limits_{n\ge N}
D_2^I(n)$. It follows from the definitions of $K_1(p,n)$,
$K_2(p,n)$, and $K_3(p,n)$ that the coefficient of
$\frac{1}{\sigma^2}$ in $R(p,n)$ is
\begin{align*}
D_2(p,n)=\frac{\omega_4(p)}{12\pi}\, \biggl(\frac{n}{n-1}
\biggr)^2+\frac{1}{\pi}\,\gamma_6
A_6(n)V_6(p)+\frac{1}{12\pi}
\end{align*} or, in more detail,
\begin{multline*}
D_2(p,n)=\frac{1}{36\pi} \biggl(3|q^3+p^3-3pq|
\biggl(\frac{n}{n-1} \biggr)^2 +4A_6(n)
(q-p)^2+3 \biggr)
\\
=:\frac{G_2(p,n)}{36\pi}.
\end{multline*}
First
we consider ${G_2}(p):=\lim\limits_{n\to\infty}G_2(p,n)$. We have
\begin{align*}
{G_2}(p)=3|q^3+p^3-3pq|+4(q-p)^2+3
\equiv3|6p^2-6p+1|+4(1-2p)^2+3.
\end{align*}
Taking into account that
\begin{align*}
|6p^2-6p+1|=\begin{cases}6p^2-6p+1&\text{if}\;p\le p_1:=\frac{3-\sqrt{3}}{6}=0.211324\ldots\;,\\
-6p^2+6p-1&\text{if}\;p>p_1,\end{cases}
\end{align*} we obtain
\begin{align*}
{G_2}(p)=\begin{cases}2(17p^2-17p+5)&\text{if}\;p\le p_1,
\\
-2(p^2-p-2)&\text{if}\;p> p_1.\end{cases}
\end{align*}
Since $G_2(p)$ decreases for $p<p_1$ and increases for $p>p_1$,
its maximum value on an interval is attained at either the left or
the right endpoint. We have
\begin{align*}
G_2(0.02)=9.3336,\quad G_2(0.1689)=5.2273251\ldots\;,
\quad G_2(0.5)=4.5.
\end{align*} Thus,
\begin{align*}
&\frac{1}{36\pi}\,\max_{0.02\le
p\le0.5}G_2(p)=
\frac{G_2(0.02)}{36\pi}=0.0825271\ldots\;,
\\
&\frac{1}{36\pi}\,\max_{0.1689\le
p\le0.5}G_2(p)=
\frac{G_2(0.1689)}{36\pi}=0.04621970\ldots\;.
\end{align*}
Similarly, though with somewhat more effort, we get
\begin{gather*}
\max_{0.02\le p\le0.5}G_2(p,200)= G_2(0.02,200)=9.4541
\ldots\;,
\\
\max_{0.1689\le p\le0.5}G_2(p,200)= G_2(0.1689,200)=5.2767
\ldots\;,
\\
G_2(0.5,200)=4.515\ldots\;,
\\
\max_{0.02\le p\le0.5}G_2(p,N_0)=
G_2(0.02,N_0)=9.33364\ldots\;,
\\
\max_{0.1689\le p\le0.5}G_2(p,N_0)=
G_2(0.1689,N_0)=5.227344\ldots\;,
\\
G_2(0.5,N_0)=4.00006\ldots\;.
\end{gather*}
Consequently,
\begin{gather*}
\frac{\max\limits_{0.02\le p\le0.5}G_2(p,200)}{36\pi}= 0.083592\ldots\;,\quad \frac{\max\limits_{0.1689\le
p\le0.5}G_2(p,200)}{36\pi}=0.046656\ldots
\;,
\\
\frac{\max\limits_{0.1689\le
p\le0.5}G_2(p,N_0)}{36\pi}=0.0462198\ldots\;.
\qedhere
\end{gather*}
\end{proof}
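The piecewise formula for $G_2(p)$ and the numerical values quoted in the proof can be verified with the following minimal Python sketch (illustrative names; standard library only):
\begin{verbatim}
# Minimal sketch: the limiting function G_2(p) and its finite-n version.
from math import pi

def G2(p):
    q = 1 - p
    return 3 * abs(q**3 + p**3 - 3 * p * q) + 4 * (q - p)**2 + 3

def G2_n(p, n):
    q  = 1 - p
    A6 = (n / (n - 2))**3 * (n - 1) / n
    return (3 * abs(q**3 + p**3 - 3 * p * q) * (n / (n - 1))**2
            + 4 * A6 * (q - p)**2 + 3)

for p in (0.02, 0.1689, 0.5):
    print(p, G2(p))                  # 9.3336, 5.2273..., 4.5
print(G2(0.02) / (36 * pi))          # 0.0825271...
print(G2_n(0.02, 200) / (36 * pi))   # 0.083592...
\end{verbatim}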
\begin{remark} 1. One can observe from the previous proof that
$G_2(p,N_0)\approx G_2(p)$, therefore,
$D_2(p,N_0)\approx\frac{G_2(p)}{36\pi}$.
2. With increasing $N$, the sequence $a^I(N):=\max\limits_{n\ge N}
D_2^I(n)$
approaches $a^I:=\frac{1}{36\pi}\,\max\limits_{p\in I}G_2(p)$.
For instance, by Table~\ref{tab2-D2}, we have for the interval
$I=[0.1689,0.5]$ that $a^I(200)=0.046656\ldots\;$,
$a^I(N_0)=0.0462198\ldots\;$ while $a^I=0.0462197\ldots \;$. The
sequence $\overline a^I(N):=\max\limits_{n\ge N} \overline D_2^I(n)$
tends to $0.0462197\ldots \;$ as well, but slowly, since the main
term of the difference $\overline D_2(p,n)-\frac{G_2(p)}{36\pi}$ is
of order $\frac{1}{\sqrt{n}}$.
\end{remark}
The following bound for $\Delta_n(p)$, simpler than the one in Theorem~\ref{thmA},
follows from \eqref{Delta-Th-2} and Table~\ref{tab2-D2}.
\begin{ccor}\label{corC}
For all $p\in I=[0.1689,0.5]$
and $n\ge N_0$,
\begin{equation}
\label{532}\Delta_n(p)\le\frac{{\cal
E}_1(p)}{\sigma}+\frac{0.05532}{\sigma^2}.
\end{equation}
\end{ccor}
\begin{remark}
Corollary~C allows one to obtain the same estimate for $ C_{02} $
as~\eqref{953}, but for larger $ n $. Indeed, it is easy to verify
with the help of a computer that
\begin{equation}
\label{954-4} \sup_{p\in[0.1689,0.5]} \biggl({\cal E}(p)+
\frac{0.05532}{\sqrt{npq}(p^2+q^2)} \bigg|_{n=971000} \biggr) <0.409954,
\end{equation} but
\begin{equation}
\label{954-3} \sup_{p\in[0.1689,0.5]} \biggl({\cal E}(p)+
\frac{0.05532}{\sqrt{npq}(p^2+q^2)} \bigg|_{n=970000} \biggr)>0.409954.
\end{equation}
\end{remark}
\subsection{On the connection of Uspensky's result and its refinements with the problem of estimating $C_{02}$}
First we recall Uspensky's estimate, published in 1937 in
\cite{Uspen}. To this end we introduce the following notations:
$S_n$ is the number of occurrences of an event in a series of $n$
Bernoulli trials with a probability of success $p$, $\mu=np$,
\begin{align*}
G(x)=\varPhi(x)+\frac{q-p}{6\sqrt{2\pi}\,\sigma}\bigl(1-x^2\bigr)e^{-x^2/2}.
\end{align*}
For
every $x\in{\mathbb R}$, define
\begin{equation}
\label{x^+}x_n^\pm=\frac{x-\mu\pm\frac{1}{2}}{\sigma},
\end{equation}
where $\sigma=\sqrt{npq}$, as before.
Uspensky's result can be formulated in the following form.
\begin{tthm}[{\cite[p. 129]{Uspen}}]
\label{thmB}
Let $\sigma^2\ge25$. Then
for arbitrary integers $a<b$,
\begin{equation}
\label{resU-1} \big|{\bf P}(a\le S_n\le b)- \bigl(G
\bigl(b_n^- \bigr)-G \bigl(a_n^+ \bigr) \bigr) \big|\le
\frac{0.13+0.18|p-q|}{\sigma^2}+e^{-3\sigma/2}.
\end{equation}
\end{tthm}
Many works are devoted to generalizations and refinements of
\eqref{resU-1}; see, for example,
\cite{Deh,Makabe-1961,Makabe-1955,Mikhailov-93,Neamm,Senatov-2014-Eng,Volkova-95}.
In 2005 K.~Neammanee \cite{Neamm} refined and generalized~\eqref{resU-1} to the case of non-identically
distributed Bernoulli random variables. Let us formulate his result
as applied to the case of Bernoulli trials: {\it if
$\sigma^2\ge100$, then
\begin{equation}
\label{Neam-1} \big|{\bf P}(a\le S_n\le b)- \bigl(G
\bigl(b_n^- \bigr)-G \bigl(a_n^+ \bigr) \bigr) \big|<
\frac{0.1618}{\sigma^2},
\end{equation}
where $a_n^+$, $b_n^-$ are defined by the formula \eqref{x^+}}.
It follows from \eqref{Neam-1} that under condition
$\sigma^2\ge100$,
\begin{equation}
\label{Neam-20} \big|{\bf P}(S_n\le b)- G \bigl(b_n^- \bigr)\big|
\le\frac{0.1618}{\sigma^2}.
\end{equation}
We may consider $p\in(0,0.5]$. Denote for brevity, $d=0.1618$. It
follows from \eqref{Neam-20} and the definition of $G(\cdot)$ that
\begin{equation*}
\big|{\bf P}(S_n\le b)-\varPhi \bigl(b_n^- \bigr) \big|<
\frac{|(1-(b_n^-)^2)(q-p)|e^{-(b_n^-)^2/2}}{6\sqrt{2\pi
}\sigma}+\frac{d}{\sigma^2}.
\end{equation*}
Taking into account that
$\max\limits_{t}|t^2-1|e^{-t^2/2}=1$, we get
\begin{equation}
\label{Neam-3} \big|{\bf P}(S_n\le b)-\varPhi \bigl(b_n^-
\bigr) \big|\le \frac{|q-p|}{6\sqrt{2\pi}\sigma}+\frac{d}{\sigma^2} .
\end{equation}
Denote $x_n=\frac{x-\mu}{\sigma}$. It is easily seen that
\begin{equation}
\label{Phi-Phi-Neam} \big|\varPhi(b_n)-\varPhi \bigl(b_n^- \bigr)
\big|< \frac{b_n-b_n^-}{\sqrt{2\pi}}=\frac{1}{2\sqrt{2\pi
}\sigma}.
\end{equation} It follows from \eqref{Neam-3},
\eqref{Phi-Phi-Neam} that
\begin{align*}
\big|{\bf P}(S_n\le b)-\varPhi(b_n)\big|< \biggl(
\frac{|q-p|}{6}+\frac{1}{2} \biggr) \frac{1}{\sigma\sqrt{2\pi}}+
\frac{d}{\sigma^2}=\frac{{\cal
E}_1(p)}{\sigma}+\frac{d}{\sigma^2},
\end{align*}
provided that $0<p\le0.5$. Thus,
\begin{equation}
\label{P-Phi} \Delta_n(p)\le\frac{{\cal
E}_1(p)}{\sigma}+
\frac{0.1618}{\sigma^2}.
\end{equation} Note that
our bound \eqref{532} is more accurate than \eqref{P-Phi}. To get
the bound $0.409954$ for $C_{02}$ from~\eqref{P-Phi}, we should take
$n$ almost five times larger than in~\eqref{954-4}. Indeed, with the
help of a computer we have
\begin{align*}
\sup_{p\in[0.1689,0.5]} \biggl( {\cal E}(p)+\frac{0.1618}{\sqrt{npq}\,(p^2+q^2)}
\bigg|_{n=4.6\cdot10^6} \biggr)<0.410031,
\end{align*}
and
\begin{align*}
\sup_{p\in[0.1689,0.5]} \biggl( {\cal E}(p)+\frac{0.1618}{\sqrt{npq}\,(p^2+q^2)}
\bigg|_{n=4.2\cdot10^6} \biggr)>0.410044
\end{align*}
(cf. \eqref{954-4}, \eqref{954-3}).
\begin{remark}
In 2014 V. Senatov obtained non-uniform estimates of the
approximation accuracy in the central limit theorem, and, in particular,
generalized Uspensky's result \eqref{resU-1} to lattice
distributions \cite{Senatov-2014-Eng}.
\end{remark}
\subsection{Proof of Theorem \ref{th-2}}
Before proving Theorem \ref{th-2}, we first prove
Lemma~\ref{lem-1-ZNC}.
\begin{proof}[Proof of Lemma \ref{lem-1-ZNC}] By \cite[Theorem
1]{KorShv-Obozr-2010-2},
\begin{equation}
\label{var-1}\Delta_n(p)\le\frac{0.33477}{\sqrt{n}}\, \bigl(
\varrho(p)+0.429 \bigr).
\end{equation}
Therefore,
$T_n(p)\equiv\frac{\sqrt{n}\,\Delta_n(p)}{\varrho(p)}\le0.33477 (1+\frac{0.429}{\varrho(p)} )$.
Since $\varrho(p)$ decreases on $(0, 0.5]$, we have
$\max\limits_{p\in(0,0.1689]}\frac{1}{\varrho(p)}=\frac{1}{\varrho(0.1689)}=0.52090548\ldots\;\,$.
Consequently,
\begin{equation*}
\max\limits
_{p\in(0,0.1689]}T_n(p)\le 0.33477(1+0.429
\cdot0.52090549)<0.409581.
\qedhere
\end{equation*}
\end{proof}
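Both numerical values appearing in this proof follow from $\varrho(p)=(p^2+q^2)/\sqrt{pq}$ and can be checked with the following minimal Python sketch (illustrative names; standard library only):
\begin{verbatim}
# Minimal sketch: the constants entering the proof of the lemma.
from math import sqrt

def rho(p):
    q = 1 - p
    return (p**2 + q**2) / sqrt(p * q)

print(1 / rho(0.1689))                        # 0.52090548...
print(0.33477 * (1 + 0.429 / rho(0.1689)))    # < 0.409581
\end{verbatim}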
\begin{remark} If, instead of \cite[Theorem
1]{KorShv-Obozr-2010-2}, we use other modifications of the
Berry--Esseen inequality by I.~Shevtsova
\cite{arxive-Shevtsova-2013}, the interval $(0,0.1689]$ for which
Lem\-ma~\ref{lem-1-ZNC} holds can be extended, i.e., one can find $
b> 0.1689 $ such that the inequality $\max\limits_{p\in(0,
b]}T_n(p)<C_E $ is fulfilled. This narrows the interval $ I
$ (see~\eqref{<0.4-S}), which in turn reduces the computation
time on the supercomputer.
Let us indicate such a $b$. The estimates found in
\cite{arxive-Shevtsova-2013} as applied to the particular case of
Bernoulli trials can be written in the following form,
\begin{align}
\label{var-2}&\Delta_n(p)\le\frac{0.33554}{\sqrt{n}}\, \bigl(
\varrho(p)+0.415 \bigr),
\\
\label{var-3}&\Delta_n(p)\le\frac{0.3328}{\sqrt{n}}\, \bigl(
\varrho(p)+0.429 \bigr).
\end{align}
It is easy to verify that inequality \eqref{var-2} yields $b
=0.174$, while \eqref{var-3} yields $b =0.177$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{th-2}] It follows from
Corollary~\ref{th-1.2} and
\eqref{<0.4-S} that for all $p\in I$ the following bound holds,
\begin{equation}
\label{Knp<0.41}T_n(p)<0.40973213+4.6\cdot10^{-9}<
0.4097321346,\quad 1\le n\le N_0.
\end{equation}
Then by Lemma~\ref{lem-1-ZNC}, this inequality is fulfilled for all
$p\in(0,0.5]$ as well. It is not hard to see that the bound
\eqref{Knp<0.41} is also true for all $p\in(0.5,1)$. Hence,
bound~\eqref{954} implies Theorem~\ref{th-2}.
\end{proof}
\begin{acknowledgement}[title={Acknowledgments}]
We thank the following colleagues from Lomonosov Moscow State
University for providing the opportunity to use supercomputer Blue
Gene/P: V.~Yu.~Korolev, Head of the Department of Mathematical
Statistics of the Faculty of Computational Mathematics and
Cybernetics, Professor, I.~G.~Shevtsova, Assistant Professor of the
same Department, A.~V.~Gulyaev, Deputy Dean of the same Faculty,
and S.~V.~Korobkov, the Data Center administrator.
We also thank our colleagues from Computing Center FEB RAS for the
opportunity to use the Center for the Collective Use ``Data Center
FEB RAS''.
We would also like to thank the reviewers for useful comments.
\end{acknowledgement}
|
\section{Introduction}\label{sec:intro}
\input{TeX/Text/Introduction.tex}
\subsection{Contributions}
\input{TeX/Text/Introduction/Contributions.tex}
\subsection{Organization}
\input{TeX/Text/Introduction/Organization.tex}
\subsection{Notation}
\input{TeX/Text/Introduction/Notation.tex}
\section{An optimal control reformulation}\label{sec:OC}
\input{TeX/Text/OC.tex}
\subsection{Vectorized form}\label{sec:vectorized}
\input{TeX/Text/OC/Vectorized.tex}
\section{The outer ALM algorithm}\label{sec:outerALM}
\input{TeX/Text/OuterALM.tex}
\section{The Lagrangian subproblem via Gauss-Newton iterations}\label{sec:innerGN}
\input{TeX/Text/InnerGN.tex}
\subsection{Gauss-Newton linearization and update direction}\label{sec:ABc}
\input{TeX/Text/InnerGN/Linearization.tex}
\subsection{The Gauss-Newton algorithm}
\input{TeX/Text/InnerGN/GN.tex}
\subsection{Forward dynamic programming}\label{sec:FDP}
\input{TeX/Text/InnerGN/FDP_LS.tex}
\section{Numerical experiments}\label{sec:experiments}
\input{TeX/Text/Experiments.tex}
\subsection{Design of numerical experiments}
\input{TeX/Text/Experiments/Design.tex}
\subsection{Numerical results and discussion}
\input{TeX/Text/Experiments/NumPerformance.tex}
\subsection{Comparison with first-order methods}
\input{TeX/Text/Experiments/NumCompare.tex}
\section{Conclusions}\label{sec:conc}
\input{TeX/Text/Conclusions.tex}
\bibliographystyle{plain}
|
\section{Introduction}
The interplay between interacting electrons and lattice vibrations in solids gives rise to diverse phenomena ranging from quantitative effects, such as changes in the electric~\cite{Ponce2020} or thermal conductivity~\cite{Knoop2020}, to qualitative effects, such as instabilities toward charge~\cite{Wilson1974} and superconducting order~\cite{Revolinsky1963}.
Accordingly, the simulation of lattice dynamics is an exceptionally well established branch of computational materials science~\cite{Baroni2001}.
Readily accessible total energies from \ac{DFT}~\cite{Hohenberg1964, Kohn1965} and the development of \ac{DFPT}~\cite{Zein1984, Gonze1997, Baroni2001} have significantly advanced this field.
Nowadays, phonon frequencies and normal modes can be obtained routinely and at reasonable computational cost, allowing for high-throughput calculations~\cite{Petretto2018, Mounet2018}.
For a wide range of materials, especially semiconductors~\cite{Baroni1987, Giannozzi1991} but also metals~\cite{deGironcoli1995}, the agreement between theory and experiment is remarkable.
Nevertheless, \ac{DFPT} still depends on crucial approximations:
First, the adiabatic or Born-Oppenheimer approximation~\cite{Born1927} implicitly assumes that the dynamics of electrons and ions happens on two well separated energy scales.
However, in some materials the relevant energy scale of the electrons is similar to or even smaller than that of the phonons, leading to nonadiabatic effects~\cite{Maksimov1996, Lazzeri2006, Pisana2007, Calandra2010, Ponce2015, Caruso2017, Verdi2017, Miglio2020, Girotto2022}.
Second, calculations are in practice limited to a sparse sampling of the \ac{BZ} or, equivalently, short-range interatomic force constants.
This poses a problem for materials close to a lattice instability, signalled by Kohn anomalies~\cite{Kohn1959} driven by a strong long-range electronic response.
Furthermore, the exchange-correlation energy is described by an approximate functional.
Thus, \ac{DFT}-based methods will inevitably fail for strongly correlated materials where the single-electron picture breaks down~\cite{Hubbard1963}.
A popular approach in cases where the abovementioned approximations cannot be applied is to complement the results with suitable corrections to the self-energy of electrons~\cite{Geilikman1971, Kocer2020} and phonons~\cite{Brovman1967, Ipatova1974, Calandra2010, Giustino2017}.
Here, we usually face a difficulty referred to as ``double counting'' or ``overscreening''~\cite{Paleari2021a, Paleari2021b, Marini2022}:
An effect that is accounted for in \ac{DFT} must be removed before a more elaborate description of the same effect can be applied.
This is closely related to the concept of ``downfolding''~\cite{Aryasetiawan2004, vanLoon2021a, vanLoon2021b}:
The full problem is mapped to an \emph{ab initio} low-energy effective system with a significantly reduced number of degrees of freedom~\cite{Imada2010}.
Established downfolding methods for the electron-electron and electron-phonon interaction~\cite{Giovannetti2014} are the \ac{cRPA}~\cite{Aryasetiawan2004, Aryasetiawan2006} and the \ac{cDFPT}~\cite{Nomura2015}, respectively.
While for the former several implementations in popular simulation software exist~\cite{Friedrich2010, Sasioglu2013, Amadon2014, Kaltak2015, Nakamura2021}, a general workflow for the latter is at an earlier development stage~\cite{Nomura2015, Berges2020b, Novko2020a}.
In this work, we revisit the problem of obtaining nonadiabatic phonons that are converged with respect to the sampling of the \ac{BZ}, originally addressed in Ref.~\citenum{Calandra2010}.
In doing so, we also react to the recently revived controversy about the correct screening of the electron-phonon vertices in the phonon self-energy:
There are strong arguments~\cite{Calandra2010} that the traditional choice~\cite{Allen1972, Lazzeri2006, Pisana2007, Giustino2007b, Novko2018, Novko2020b} of two screened vertices can not only be ascribed to the absence of a better alternative, but is indeed a good option as long as screening effects are approximated at the \ac{DFT} level.
This is however not universally acknowledged, since both perturbative~\cite{Paleari2021a, Paleari2021b, Marini2022} and nonperturbative treatments of the problem based on the Hedin-Baym equations~\cite{Giustino2017} yield a phonon self-energy with one bare and one screened vertex -- a fact that is by no means disputed in Ref.~\citenum{Calandra2010}.
Here, by comparing the two approaches and supporting them with numerical results from \ac{DFPT} and \ac{cDFPT}, we will detail the following options:
(i)~The established approach with two screened vertices~\cite{Calandra2010} yields excellent results at low cost for a wide parameter range.
Its robustness can be explained by a designed cancellation of errors to first order.
(ii)~Working with one bare vertex is exact, avoids overscreening, and thus allows for systematic improvements.
However, the bare vertex is not routinely obtained in a pseudopotential framework, and the quality of the result depends on the achievable precision of the screened vertex.
Also, the computational cost is increased by the necessity to sum over many electronic bands, but a possible way around this problem has recently been published~\cite{Lihm2021}.
In practice, some authors have approximated the bare vertex by \emph{unscreening} the \ac{DFPT} one with model dielectric functions~\cite{Caruso2017, PrasadKafle2020}, but their accuracy is limited by the validity of these models.
We also illustrate a third option~(iii) where downfolding~\cite{Nomura2015} is used and can be shown to be equivalent to (ii) with a computational cost similar to (i).
By splitting the electronic transitions into an ``active subspace'' and its complement, the ``rest subspace\rlap,'' we can gradually switch between the bare and the screened vertex and settle for the optimum.
Importantly, not only the bare but also the partially screened quantities are, in good approximation, adiabatic -- in an Engelsberg-Schrieffer sense~\cite{Engelsberg1963, Saitta2008} -- and independent of the electronic temperature since the rest system is gapped.
As a consequence, they are also smooth in large parts of reciprocal space, which facilitates interpolation and ensures convergence already at coarse \ac{BZ} sampling.
This is the main advantage of our proposed downfolding approach.
Nevertheless, there is also a drawback since the absence of these screening effects is accompanied by the emergence of long-range Fr\"ohlich terms.
Since the associated discontinuities in the derivatives of phonon dispersion and electron-phonon coupling render a straightforward Fourier interpolation impossible, an accurate description of these long-range terms, which allows one to subtract and add them before and after interpolation, is needed.
We implemented approaches~(i), (ii), and (iii) in \textsc{Quantum ESPRESSO}~\cite{Giannozzi2009, Giannozzi2017, Giannozzi2020} and the EPW code~\cite{Giustino2007a, Noffsinger2010, Ponce2016}, see Supplemental Material.
On this basis, we can efficiently study fine features such as nonadiabatic Kohn anomalies and Peierls instabilities~\cite{He2020}, which all arise from the active subspace, at the required (and otherwise prohibitive) ultradense \ac{BZ} sampling, low electronic temperature, and with frequency dependence.
We showcase the simplicity and importance of methods~(i) and (iii) by investigating the electron-phonon interaction and resulting peculiarities of the phonon dispersions of monolayer TaS\s2 and n-doped MoS\s2.
In the former, we can access the sharp features of the doping-dependent Kohn anomalies, indicating instability toward charge order, and significant nonadiabatic phonon renormalization, which only emerge at low electronic temperatures.
In the latter, we find that the observed doping-induced phonon softening is due to a combination of high intervalley susceptibility and electron-phonon coupling, while the maximum of the coupling is still \emph{dormant} and could be activated by doping or band engineering.
This paper is organized as follows:
In Sec.~\ref{sec:theory}, we present the relevant theoretical concepts and formulas.
Subsequently, in Sec.~\ref{sec:implementation}, we describe how these are implemented in practice.
On this basis, in Sec.~\ref{sec:results}, we discuss our numerical results.
We finish by summarizing our work and discussing possible generalizations of the scheme in Sec.~\ref{sec:conclusions}.
\section{Theory}
\label{sec:theory}
In this section, we review the theoretical background, including the phonon Green's function [Sec.~\ref{sec:green}], downfolding [Sec.~\ref{sec:downfolding}], the diagrams describing the screening of phonons and interactions [Sec.~\ref{sec:rpa}], approximate phonon self-energies [Sec.~\ref{sec:approx}], long-range effects [Sec.~\ref{sec:lr}], and the basics of Wannier-Fourier interpolation [Sec.~\ref{sec:wannier}].
We employ Rydberg atomic units, where in particular $\hbar = e = 1$.
\subsection{Phonon Green's function and dynamical matrix}
\label{sec:green}
The lattice dynamics of a material can be described by the phonon Green's function or, more precisely, the retarded displacement-displacement correlation function~%
\footnote{Strictly speaking, the phonon Green's function is the correlation function of phonon ladder operators instead of displacement operators~\cite{Giustino2017} and thus differs from the latter by a factor of $2 \omega$, which thus appears in Eq.~\eqref{eq:specfun} for the phonon spectral function.},
\begin{equation}
\label{eq:green_t}
G_{\vec R - \vec R' \kappa \alpha \kappa' \beta}(T, t - t')
= -\mathrm i \Theta(t - t') \av{[\op u_{\vec R \kappa \alpha} ^{\vphantom\dagger} (t), \op u_{\vec R' \kappa' \beta} ^\dagger (t')]}_T,
\end{equation}
where $\av \ldots_T$ denotes an ensemble average, which depends on the electronic temperature $T$, and $[{\cdots}, {\cdots}]$ the commutator.
The Heaviside function $\Theta$ ensures the time ordering $t > t'$ of the Green's function.
$u_{\vec R \kappa \alpha}$ describes the displacement in the $\alpha$th Cartesian direction of the $\kappa$th basis atom in the unit cell of lattice vector $\vec R$.
Knowledge of the correlations between any two ionic displacements at different times and positions as in Eq.~\eqref{eq:green_t} completely characterizes the lattice dynamics.
Due to translational invariance, Eq.~\eqref{eq:green_t} depends on \emph{differences} of times $t - t'$ and lattice vectors $\vec R - \vec R\mathrlap'$, which allows for a Fourier transform to frequency $\omega$ and phonon momentum $\vec q$,
\begin{equation}
\label{eq:green_omega}
G_{\vec q}(T, \omega)
= \frac{\mathds 1}{\omega^2 \mathds 1 - D_{\vec q}(T, \omega)}
\end{equation}
with the screened dynamical matrix $D$.
Here, $\omega$ is defined on the whole complex plane and the \emph{retarded} phonon Green's function is obtained at $\omega + \mathrm i 0^+$ with $\omega$ real and $0^+$ a positive infinitesimal.
The right-hand side of Eq.~\eqref{eq:green_omega} is a matrix inversion in the basis of displacements $\kappa \alpha, \kappa' \beta$ and $\mathds 1$ is the corresponding identity matrix.
The link to experiments such as inelastic neutron- or x-ray-scattering spectroscopy~\cite{Caruso2017} is the phonon spectral function.
It assigns an intensity to each combination of energies $\omega$ and crystal momenta $\vec q$ and thus provides the quasiparticle bandstructure, where applicable, including many-body effects such as broadening and satellites.
We compute the phonon spectral function as~\cite{Abrikosov1963}
\begin{equation}
\label{eq:specfun}
A_{\vec q}(T, \omega)
= -\frac {2 \omega} \pi \Tr \Im G_{\vec q}(T, \omega + \mathrm i 0^+).
\end{equation}
Instead, the static ($\omega = 0$) adiabatic phonon frequencies $\omega_{\vec q \nu}$ as obtained from \ac{DFPT}, usually at high electronic smearing $\sigma$, follow from the eigenvalue equation for the dynamical matrix,
\begin{equation}
\label{eq:eigen}
D_{\vec q}(\sigma, 0) \vec e_{\vec q \nu} ^{\vphantom0}
= \omega^2_{\vec q \nu} \vec e_{\vec q \nu} ^{\vphantom0},
\end{equation}
where $ \vec e_{\vec q \nu} $ are the phonon eigenvectors with mode index $\nu$.
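For orientation, the chain from the dynamical matrix to the Green's function, the spectral function, and the adiabatic frequencies [Eqs.~\eqref{eq:green_omega}--\eqref{eq:eigen}] can be illustrated by the following minimal Python sketch; the $2\times2$ dynamical matrix, the broadening, and all names are toy values of ours and not material-specific quantities:
\begin{verbatim}
import numpy as np

# Toy, frequency-independent Hermitian dynamical matrix (units of omega^2)
# and a small positive broadening mimicking the infinitesimal 0^+.
D0  = np.array([[2.0, 0.3], [0.3, 1.0]])
eta = 1e-2

def green(omega):
    # Eq. (green_omega), evaluated at omega + i*eta (retarded function).
    z = (omega + 1j * eta)**2
    return np.linalg.inv(z * np.eye(2) - D0)

def spectral(omega):
    # Eq. (specfun): A = -(2 omega / pi) Tr Im G.
    return -2 * omega / np.pi * np.trace(green(omega)).imag

# Adiabatic frequencies from the eigenvalue problem, Eq. (eigen).
w2, e = np.linalg.eigh(D0)
print("omega_nu =", np.sqrt(w2))
for w in np.sqrt(w2):
    print(w, spectral(w))          # the spectral function peaks here
\end{verbatim}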
\subsection{Screened, partially screened, and bare quantities}
\label{sec:downfolding}
In a typical \emph{ab initio} calculation, all electronic states are treated equally.
However, the relevant physics often takes place in a small subset of these states, namely the low-energy states close to the Fermi level, which we will refer to as active states.
In particular, if an \emph{ab initio} calculation for chosen parameters and approximations fails to converge or capture the effects of interest, this is likely due to processes within this subset of active states alone, while the rest might already be properly described.
Before resorting to more elaborate treatments, it is thus instructive to consider the different subsets of states and corresponding energy scales separately.
While pure \emph{ab initio} approaches always address the full system, the \emph{downfolding} approach~\cite{Aryasetiawan2004, Aryasetiawan2006, Giovannetti2014, Nomura2015} uses \emph{ab initio} calculations to construct tractable low-energy systems with active states only, thus reducing the number of degrees of freedom significantly.
This approach is exact, provided the complexity reduction is properly compensated by the partial screening of the system parameters.
While the full system consists of simple \emph{bare} elementary particles and interactions, the parameters of the downfolded system usually acquire dependences on the involved quantum numbers and are in general also frequency dependent~\cite{Aryasetiawan2004}.
The chosen active states must thus span an energy window large enough that low-energy effects and related dependences can be safely neglected and at the same time small enough to keep the computational cost affordable.
\begin{figure}
\includegraphics[width=\linewidth]{fig01.pdf}
\caption{Visualization of active ($A$) and rest ($R$) subspaces of electronic transitions based on a generic band structure.
%
$A$ only includes transitions between a chosen set of low-energy states highlighted in orange, $R$ all remaining transitions.
%
Note that there is in general an infinite number of empty states beyond the shown energy range.}
\label{fig:downfolding}
\end{figure}
A typical choice of active and rest states is sketched in Fig.~\ref{fig:downfolding}.
Note that here, for convenience but departing from what is usually done, we define the active \emph{subspace} not as the set of active \emph{states} but as the set of \emph{transitions} between them.
The full system with bare parameters would be recovered if \emph{all} states were counted among the active states.
Hence, special care has to be taken in the context of pseudopotentials.
Since quasiparticle energies and interactions decrease with screening, the \emph{bare} quantities in an all-electron calculation are larger than those in a pseudopotential framework with fewer core states.
\subsection{Random-phase approximation}
\label{sec:rpa}
\begin{table*}
\caption{List of symbols used in this manuscript.
%
Subscript momenta and band indices have been omitted for brevity.}
\label{tab:symbols}
\medskip
\newlength\gap
\setlength\gap{1mm}
\begin{tabular}[t]{*{11}l}
\multicolumn2l{\bfseries Phonon Green's function}
&
\multicolumn2l{\bfseries Electron-phonon coupling}
&
\multicolumn5l{\bfseries Bare electron susceptibility}
&
\multicolumn2l{\bfseries Miscellaneous}
\\
$G \super b$ & bare
&
$g \super b$ & bare
&
$\chi \super b (T, \omega)$ & \multicolumn4l{full}
&
$\omega$ & frequency argument
\\
$G \super p (T, \omega)$ & partially screened
&
$g \super p (T, \omega)$ & partially screened
&
$\chi \super{b,R} (T, \omega)$ & \multicolumn4l{rest subspace}
&
$T$ & electronic temperature
\\
$G(T, \omega)$ & screened
&
$g(T, \omega)$ & screened
&
$\chi \super{b,A} (T, \omega)$ & \multicolumn4l{active subspace}
&
$\sigma$ & \emph{ab initio} smearing
\\
[\gap]
\multicolumn2l{\bfseries Dynamical matrix}
&
\multicolumn2l{\bfseries Electron-electron interaction}
&
\multicolumn5l{\bfseries Phonon self-energy}
&
$\vec R$ & Bravais lattice
\\
$D \super b$ & bare
&
$v$ & bare
&
$\Pi(T, \omega)$ & $=$ & $g \super b$ & $\chi \super b (T, \omega)$ & $g(T, \omega)$
&
$\vec G$ & reciprocal lattice
\\
$D \super p (T, \omega)$ & partially screened
&
$U(T, \omega)$ & partially screened
&
$\Pi \super R (T, \omega)$ & $=$ & $g \super b$ & $\chi \super{b,R} (T, \omega)$ & $g \super p (T, \omega)$
&
$\vec k$ & fermion momentum
\\
$D(T, \omega)$ & screened
&
$W(T, \omega)$ & screened
&
$\Pi \super A (T, \omega)$ & $=$ & $g \super p (T, \omega)$ & $\chi \super{b,A}(T, \omega)$ & $g(T, \omega)$
&
$\vec q$ & boson momentum
\\
[\gap]
\multicolumn2l{\bfseries Basis indices}
&
\multicolumn2l{\bfseries Long-range electrostatics}
&
$\Pi \super{00} (T, \omega)$ & $=$ & $g(\sigma, 0)$ & $\chi \super{b,A}(T, \omega)$ & $g(\sigma, 0)$
&
$\varepsilon$ & electron energy
\\
$s, p$ & electron orbitals
&
$\epsilon$ & dielectric constant
&
$\Pi \super{p0} (T, \omega)$ & $=$ & $g \super p (\sigma, 0)$ & $\chi \super{b,A}(T, \omega)$ & $g(\sigma, 0)$
&
$f$ & electron occupation
\\
$m, n$ & electron bands
&
$\vec Z^*$ & Born effective charge
&
$\Pi \super{p$T$} (T, \omega)$ & $=$ & $g \super p (\sigma, 0)$ & $\chi \super{b,A}(T, \omega)$ & $g(T, 0)$
&
$M$ & atomic mass
\\
$\kappa, \kappa'$ & basis atoms
&
$Q$ & quadrupole tensor
&
$\Pi \super{b0} (T, \omega)$ & $=$ & $g \super b$ & $\chi \super{b,A}(T, \omega)$ & $g(\sigma, 0)$
&
$\vec \tau$ & atomic position
\\
$\alpha, \beta$ & Cartesian directions
&
$L$ & separation parameter
&
$\Pi \super{b$T$} (T, \omega)$ & $=$ & $g \super b$ & $\chi \super{b,A}(T, \omega)$ & $g(T, 0)$
&
$\vec u$ & atomic displacement
\end{tabular}
\end{table*}
In this section, we review the formulas that describe the electronic screening of phonons and interactions in the framework of the \ac{RPA}~\cite{Bohm1951, GellMann1957, Ren2012, vanLoon2021b}.
In this approximation, the electronic response is given by the polarizability of the system.
In terms of Feynman diagrams, we consider all possible diagrams consisting of bare phonons, the bare electron-phonon interaction, the bare electron-electron interaction, and electron-hole ``bubbles\rlap.''
In the static case ($\omega = 0$), this is equivalent to the screening in \ac{DFPT} as long as the electrons are given by the adiabatically screened Kohn-Sham states and the exchange-correlation kernel is included in the bare electron-electron interaction~\cite{Nomura2015, Giustino2017}.
A summary of the most relevant symbols used in the following is provided in Table~\ref{tab:symbols}.
\subsubsection{Electron-electron interaction}
\label{sec:rpa_elel}
We start with the screening of the electron-electron interaction, for which the \ac{RPA} has been originally derived~\cite{Bohm1951}.
In the basis of electronic eigenstates, used throughout the manuscript, the bare Coulomb interaction can be written as
\begin{equation}
\label{eq:coulomb}
v_{\vec q \vec k m n \vec k' m' n'}
= \bra{\vec k n; \vec k' {+} \vec q m'} \op v \ket{\vec k {+} \vec q m; \vec k' n'}
\end{equation}
with electron momentum $\vec k$, band indices $m, n$, and the virtual photon momentum transfer $\vec q$.
We purposely used the symbol $\vec q$ as in Eq.~\eqref{eq:green_omega} to highlight the fact that in \ac{RPA} the phonon and photon momentum transfers have to be the same -- this is no longer true beyond \ac{RPA}.
More precisely, Eq.~\eqref{eq:coulomb} quantifies the scattering of two electrons from single-particle states $\ket{\vec k {+} \vec q m}$ and $\ket{\vec k' n'}$ into states $\ket{\vec k n}$ and $\ket{\vec k' {+} \vec q m'}$, respectively.
In the basis of electronic positions, it has the well-known diagonal representation $\bra{\vec r; \vec r'} \op v \ket{\vec r; \vec r'} = 1 / \abs{\vec r - \vec r'}$.
The interaction between two electrons is however screened by the polarizability of all other electrons.
Taking the formation of any number of electron-hole pairs into account, the screened electron-electron interaction $W(T, \omega)$ is related to the bare Coulomb interaction $v$ via
\begin{equation}
\label{eq:v2w}
\begin{tikzpicture}
\pic at (0, 0) {W};
\node at (1.25, 0) {$=$};
\pic at (1.5, 0) {v};
\node at (2.75, 0) {$+$};
\pic at (3, 0) {v};
\pic at (6, 0) {W};
\pic at (4, 0) {chi};
\end{tikzpicture}
\end{equation}
or, translated to a formula,
\begin{multline}
\label{eq:v2w_form}
W_{\vec q \vec k m n \vec k' m' n'}(T, \omega)
= v_{\vec q \vec k m n \vec k' m' n'}
+ \frac 2 N \sum_{\vec k'' m'' n''}
v_{\vec q \vec k m n \vec k'' m'' n''}
\\
\chi_{\vec q \vec k'' m'' n''} \super b (T, \omega)
W_{\vec q \vec k'' m'' n'' \vec k' m' n'} ^{\vphantom0} (T, \omega),
\end{multline}
where the $2$ accounts for the spin and the summation is over $N$ momenta and \emph{all} pairs of band indices.
$W(T, \omega)$ as defined here is also used in the $G W$ approximation~\cite{Hedin1965, Golze2019, Li2019}.
The summation over an infinite number of bands can be circumvented via the Sternheimer approach~\cite{Sternheimer1954, Giustino2010, Schlipf2020, Lihm2021}.
The bare electronic susceptibility or ``polarizability\rlap,'' from which any $T$ or $\omega$~dependence originates, reads
\begin{equation}
\label{eq:susc}
\chi_{\vec q \vec k m n} \super b (T, \omega)
= \frac
{f(\varepsilon_{\vec k n} / T) - f(\varepsilon_{\vec k + \vec q m} / T)}
{\varepsilon_{\vec k n} - \varepsilon_{\vec k + \vec q m} + \omega},
\end{equation}
where $\varepsilon$ and $f$ are the electronic energies and occupations.
The most important contributions in Eq.~\eqref{eq:susc} come from transitions between occupied and empty states with similar energies, i.e., from the low-energy electrons near the Fermi level.
This suggests splitting the screening in Eq.~\eqref{eq:v2w} into two steps, namely the \emph{downfolding} to a low-energy system and the \emph{renormalization} to recover physical results.
We label the subset of transitions between properly chosen low-energy active states as $A$ and the remaining transitions as the rest $R$ [cf.\@ Fig.~\ref{fig:downfolding}].
The bare susceptibility can then be decomposed as
\begin{equation}
\label{eq:split}
\chi_{\vec q \vec k m n} \super b (T, \omega)
= \chi_{\vec q \vec k m n} \super{b,A} (T, \omega)
+ \chi_{\vec q \vec k m n} \super{b,R} (T, \omega),
\end{equation}
where the first term is nonzero only if the transition from $\ket{\vec k n}$ to $\ket{\vec k {+} \vec q m}$ is part of $A$, and vice versa for the second term.
First, in the downfolding step, the partially screened electron-electron interaction $U(T, \omega)$ is calculated from the bare Coulomb interaction $v$,
\begin{equation}
\label{eq:v2u}
\begin{tikzpicture}
\pic at (0, 0) {U};
\node at (1.25, 0) {$=$};
\pic at (1.5, 0) {v};
\node at (2.75, 0) {$+$};
\pic at (3, 0) {v};
\pic at (6, 0) {U};
\pic at (4, 0) {chiR};
\end{tikzpicture}.
\end{equation}
This is known as the \ac{cRPA}~\cite{Aryasetiawan2004, Aryasetiawan2006}.
The corresponding formula is equivalent to Eq.~\eqref{eq:v2w_form}, except that the summation is constrained to the transitions in $R$.
In general, the partially screened $U(T, \omega)$ depends on $T$ and $\omega$.
However, excluding low-energy transitions makes the system effectively gapped and the dependence on $T$ and $\omega$ can be controlled via the size of the active subspace~\cite{Miyake2009}.
Second, in the renormalization step, the screened $W(T, \omega)$ can be recovered from the partially screened $U(T, \omega)$,
\begin{equation}
\label{eq:u2w}
\begin{tikzpicture}
\pic at (0, 0) {W};
\node at (1.25, 0) {$=$};
\pic at (1.5, 0) {U};
\node at (2.75, 0) {$+$};
\pic at (3, 0) {U};
\pic at (6, 0) {W};
\pic at (4, 0) {chiT};
\end{tikzpicture}.
\end{equation}
The corresponding formula is again equivalent to Eq.~\eqref{eq:v2w_form}, but now the summation is constrained to the finite number of active bands.
This allows us to evaluate the low-energy response, which requires a dense \ac{BZ} sampling at low electronic temperature, at an affordable computational cost.
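The equivalence of the direct screening of Eq.~\eqref{eq:v2w} and the two-step procedure of Eqs.~\eqref{eq:v2u} and \eqref{eq:u2w} can be checked numerically in a small toy matrix space; the following minimal Python sketch (random matrices, illustrative names, with the factor $2/N$ absorbed into the bubbles) does so:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n   = 4                                    # toy number of transitions

v     = rng.normal(size=(n, n)); v = v + v.T     # toy bare interaction
chi_A = np.diag(rng.uniform(-0.2, 0.0, n))       # active-subspace bubbles
chi_R = np.diag(rng.uniform(-0.2, 0.0, n))       # rest-subspace bubbles
chi   = chi_A + chi_R                            # Eq. (split)

I = np.eye(n)
W_direct  = np.linalg.solve(I - v @ chi,   v)    # W = v + v chi W   [Eq. (v2w)]
U         = np.linalg.solve(I - v @ chi_R, v)    # U = v + v chi_R U [Eq. (v2u)]
W_twostep = np.linalg.solve(I - U @ chi_A, U)    # W = U + U chi_A W [Eq. (u2w)]

print(np.allclose(W_direct, W_twostep))          # True
\end{verbatim}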
\subsubsection{Electron-phonon interaction}
\label{sec:rpa_elph}
The change of the electronic energies upon ionic displacements is also screened by the surrounding electrons.
Without this screening, the bare electron-phonon coupling in the electronic eigenbasis reads
\begin{align}
\label{eq:gb}
g_{\vec q \kappa \alpha \vec k m n} \super b
= \frac 1 {\sqrt{M_\kappa}}
\bra{\vec k {+} \vec q m}
\frac{\partial \op V \super b}{\partial u_{\vec q \kappa \alpha}}
\ket{\vec k n}
\end{align}
with the atomic mass $M$, and the bare external potential $V \super b$ acting on an electron amid the ensemble of ions.
In the position representation, we have $\bra{\vec r} \op V \super b \ket{\vec r} = -\sum_{\vec R \kappa} Z_\kappa / \abs{\vec R + \vec \tau_\kappa + \vec u_{\vec R \kappa} - \vec r}$, where $Z_\kappa$ and $\vec \tau$ are ionic charges and equilibrium positions within the unit cell.
The screened electron-phonon interaction can be written as $g = g \super b \epsilon ^{-1}$ with the inverse dielectric function $\epsilon ^{-1} = 1 + \chi \super b W$, similarly to the screened electron-electron interaction $W = v \epsilon ^{-1}$ of Eq.~\eqref{eq:v2w}, or, using diagrams, as [cf.\@ Appendix~I of Ref.~\citenum{Geilikman1975}]
\begin{equation}
\label{eq:gb2g}
\begin{tikzpicture}
\pic at (0, 0) {g};
\node at (0.5, 0) {$=$};
\pic at (1, 0) {gb};
\node at (1.5, 0) {$+$};
\pic at (4, 0) {W};
\pic at (2, 0) {chi};
\pic at (2, 0) {gb};
\end{tikzpicture}.
\end{equation}
Translated to a formula, it reads
\begin{multline}
\label{eq:gb2g_form}
g_{\vec q \kappa \alpha \vec k m n} ^{\vphantom0} (T, \omega)
= g_{\vec q \kappa \alpha \vec k m n} \super b
+ \frac 2 N \sum_{\vec k' m' n'}
g_{\vec q \kappa \alpha \vec k' m' n'} \super b
\\
\chi_{\vec q \vec k' m' n'} \super b (T, \omega)
W_{\vec q \vec k' m' n' \vec k m n} ^{\vphantom0} (T, \omega).
\end{multline}
Also the renormalization of the electron-phonon coupling can be split into two steps:
First, the partially screened $g \super p (T, \omega)$ is obtained from the bare $g \super b$,
\begin{equation}
\label{eq:gb2gp}
\begin{tikzpicture}
\pic at (0, 0) {gp};
\node at (0.5, 0) {$=$};
\pic at (1, 0) {gb};
\node at (1.5, 0) {$+$};
\pic at (4, 0) {U};
\pic at (2, 0) {chiR};
\pic at (2, 0) {gb};
\end{tikzpicture},
\end{equation}
which can be accomplished using \ac{cDFPT}~\cite{Nomura2015}.
The screened $g(T, \omega)$ follows from the partially screened $g \super p (T, \omega)$,
\begin{equation}
\label{eq:gp2g}
\begin{tikzpicture}
\pic at (0, 0) {g};
\node at (0.5, 0) {$=$};
\pic at (1, 0) {gpb};
\node at (1.5, 0) {$+$};
\pic at (4, 0) {W};
\pic at (2, 0) {chiT};
\pic at (2, 0) {gpl};
\end{tikzpicture}.
\end{equation}
The corresponding formulas are equivalent to Eq.~\eqref{eq:gb2g_form}.
Using the alternative expression for the dielectric function $\epsilon = 1 - \chi \super b v$ and the corresponding formulas for the active and rest subspace, we can also write Eqs.~\eqref{eq:gb2g}, \eqref{eq:gb2gp}, and \eqref{eq:gp2g} as
\begin{align}
\label{eq:gb2g_alt}
\begin{tikzpicture}
\pic at (0, 0) {g};
\node at (0.5, 0) {$=$};
\pic at (1, 0) {gbb};
\node at (1.5, 0) {$+$};
\pic at (4, 0) {v};
\pic at (2, 0) {chi};
\pic at (2, 0) {gl};
\end{tikzpicture},
\\
\label{eq:gb2gp_alt}
\begin{tikzpicture}
\pic at (0, 0) {gp};
\node at (0.5, 0) {$=$};
\pic at (1, 0) {gbb};
\node at (1.5, 0) {$+$};
\pic at (4, 0) {v};
\pic at (2, 0) {chiR};
\pic at (2, 0) {gpl};
\end{tikzpicture},
\\
\label{eq:gp2g_alt}
\begin{tikzpicture}
\pic at (0, 0) {g};
\node at (0.5, 0) {$=$};
\pic at (1, 0) {gpb};
\node at (1.5, 0) {$+$};
\pic at (4, 0) {U};
\pic at (2, 0) {chiT};
\pic at (2, 0) {gl};
\end{tikzpicture}.
\end{align}
Since these equations must be solved self-consistently, they are less convenient in practice.
The absence of the (partially) screened electron-electron interaction from the formula for the (partially) screened electron-phonon coupling will however be useful in Sec.~\ref{sec:approx}.
\subsubsection{Phonons}
\label{sec:rpa_ph}
Finally, we consider how electronic screening affects the phonons.
Without any electronic response, the bare interatomic force constants read~\cite{Baroni2001}
\begin{equation}
C_{\vec R - \vec R' \kappa \alpha \kappa' \beta} \super b
= \int \mathrm d^3 r \frac
{\partial^2 V \super b (\vec r)}
{\partial u_{\vec R \kappa \alpha} \partial u_{\vec R' \kappa' \beta}}
n(\vec r)
+ \frac
{\partial^2 \varPhi}
{\partial u_{\vec R \kappa \alpha} \partial u_{\vec R' \kappa' \beta}}
\end{equation}
with the electron density $n$ and the classical electrostatic energy $\varPhi = 1/2 \sum_{\vec R \kappa \neq \vec R' \kappa'} Z_\kappa Z_{\kappa'} / \abs{\vec R + \vec \tau_\kappa + \vec u_{\vec R \kappa} - \vec R' - \vec \tau_{\kappa'} - \vec u_{\vec R' \kappa'}}$ of the ensemble of ions, which is often called Ewald energy.
The corresponding bare phonon Green's function $G \super b$ follows from the bare dynamical matrix $D \super b = C \super b / \sqrt{M_\kappa M_{\kappa'}}$ and Eq.~\eqref{eq:green_omega}.
As derived in detail in Sec.~5.1 of Ref.~\citenum{Berges2020a}, the screened phonon Green's function $G(T, \omega)$ satisfies
\begin{equation}
\label{eq:db2d}
\begin{tikzpicture}
\pic at (0, 0) {G};
\node at (1.25, 0) {$=$};
\pic at (1.5, 0) {Gb};
\node at (2.75, 0) {$+$};
\pic at (3, 0) {Gb};
\pic at (6, 0) {G};
\pic at (4, 0) {chi};
\pic at (4, 0) {gb};
\pic at (6, 0) {gr};
\end{tikzpicture}
\end{equation}
or, using matrices in the basis of ionic displacements,
\begin{equation}
\label{eq:db2d_form}
G_{\vec q}(T, \omega)
= G_{\vec q} \super b
+ G_{\vec q} \super b
\cdot \Pi_{\vec q}(T, \omega)
\cdot G_{\vec q} (T, \omega),
\end{equation}
where we have defined the phonon self-energy as an electron-hole bubble connected with one bare -- to avoid a double counting of the Coulomb interaction -- and one screened electron-phonon vertex,
\begin{equation}
\label{eq:selfen}
\Pi_{\vec q \kappa \alpha \kappa' \beta}(T, \omega)
= \frac 2 N \smash[b]{\sum_{\vec k m n}}
g_{\vec q \kappa \alpha \vec k m n} \super b
\chi_{\vec q \vec k m n} \super b (T, \omega)
g_{\vec q \kappa' \beta \vec k m n} ^\ast (T, \omega).
\end{equation}
If we multiply Eq.~\eqref{eq:db2d_form} with the matrix inverses $(G \super b) ^{-1}$ and $G ^{-1} (T, \omega)$ from the left and right, respectively, and insert Eq.~\eqref{eq:green_omega}, we are left with a simple additive formula for the screened dynamical matrix,
\begin{equation}
D_{\vec q}(T, \omega)
= D_{\vec q} \super b
+ \Pi_{\vec q}(T, \omega).
\end{equation}
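Schematically, Eq.~\eqref{eq:selfen} together with the additive relation above amounts to a single contraction over electronic transitions; the following minimal Python sketch uses random toy arrays (all names and values are illustrative, with one index for the displacement $\kappa\alpha$ and one for the transition $\vec k m n$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_trans = 3, 100

g_bare = rng.normal(size=(n_modes, n_trans))                 # g^b
g_scr  = g_bare + 0.1 * rng.normal(size=(n_modes, n_trans))  # screened g (toy)
chi    = -rng.uniform(0.0, 1.0, n_trans)                     # bare bubbles chi^b
D_bare = np.diag([2.0, 1.5, 1.0])                            # toy bare dyn. matrix

# Eq. (selfen): Pi_{ab} = (2/N) sum_i g^b_{ai} chi_i g*_{bi}
Pi = 2 / n_trans * np.einsum('ai,i,bi->ab', g_bare, chi, g_scr.conj())

D_screened = D_bare + Pi          # additive relation for the dynamical matrix
print(D_screened)
\end{verbatim}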
Finally, the screening of the phonons can also be split into two steps.
First, the partially screened $G \super p (T, \omega)$ is derived from the bare $G \super b$,
\begin{equation}
\label{eq:db2dp}
\begin{tikzpicture}
\pic at (0, 0) {Gp};
\node at (1.25, 0) {$=$};
\pic at (1.5, 0) {Gb};
\node at (2.75, 0) {$+$};
\pic at (3, 0) {Gb};
\pic at (6, 0) {Gp};
\pic at (4, 0) {chiR};
\pic at (4, 0) {gb};
\pic at (6, 0) {gpr};
\end{tikzpicture}.
\end{equation}
Just like Eq.~\eqref{eq:db2d}, Eq.~\eqref{eq:db2dp} can be rewritten as an additive equation for the dynamical matrix,
\begin{equation}
D_{\vec q} \super p (T, \omega)
= D_{\vec q} \super b
+ \Pi_{\vec q} \super R (T, \omega).
\end{equation}
In the second step, the screened $G(T, \omega)$ is retrieved from the partially screened $G \super p (T, \omega)$,
\begin{equation}
\label{eq:dp2df}
\begin{tikzpicture}
\pic at (0, 0) {G};
\node at (1.25, 0) {$=$};
\pic at (1.5, 0) {Gp};
\node at (2.75, 0) {$+$};
\pic at (3, 0) {Gp};
\pic at (6, 0) {G};
\pic at (4, 0) {chiT};
\pic at (4, 0) {gpl};
\pic at (6, 0) {gr};
\end{tikzpicture}.
\end{equation}
Again, this translates into a simple addition of dynamical matrix and phonon self-energy,
\begin{equation}
D_{\vec q}(T, \omega)
= D_{\vec q} \super p (T, \omega)
+ \Pi_{\vec q} \super A (T, \omega).
\end{equation}
The phonon self-energy can thus be decomposed in the same way as the bare susceptibility in Eq.~\eqref{eq:split},
\begin{equation}
\Pi_{\vec q}(T, \omega)
= \Pi_{\vec q} \super A (T, \omega)
+ \Pi_{\vec q} \super R (T, \omega).
\end{equation}
In the following, we will focus on corrections to the first term, which reflects the relevant physics of the active subspace.
\subsection{Approximations to the phonon self-energy}
\label{sec:approx}
As outlined in the previous section, the exact phonon self-energy [Eq.~\eqref{eq:selfen}] is calculated using one bare and one screened electron-phonon vertex.
However, expressions with two screened vertices are often used in practice.
In Ref.~\citenum{Calandra2010}, the connection between these two formulations is established:
Using Eq.~\eqref{eq:gb2g_alt}, and leaving $T$ and $\omega$~dependences understood for brevity, we can recast the phonon self-energy as
\begin{align}
\label{eq:nonstationary}
\begin{tikzpicture}
\pic at (0, 0) {chis};
\pic at (0, 0) {gb};
\pic at (1, 0) {gs};
\node at (1.5, 0) {$=$};
\pic at (2, 0) {chis};
\pic at (2, 0) {gb};
\node at (3, 0) {$\,\Big[$};
\pic at (3.25, 0) {gb};
\node at (3.5, 0) {$+$};
\pic at (3.75, 0) {v};
\pic[mauve] at (4.75, 0) {chis};
\pic[mauve] at (5.75, 0) {gs};
\node at (6, 0) {$\Big]$};
\end{tikzpicture}
\\
\label{eq:stationary}
\hspace*{-3mm}%
\begin{tikzpicture}
\node at (-1.5, 1) {$=$};
\pic at (2, 1) {chis};
\node at (-1, 1) {$\Big[$};
\pic at (0.25, 1) {v};
\pic[mauve] at (-0.75, 1) {chis};
\pic[mauve] at (-0.75, 1) {gs};
\node at (1.5, 1) {$+$};
\pic at (1.75, 1) {gb};
\node at (2, 1) {$\Big]\,$};
\node at (3, 1) {$\,\Big[$};
\pic at (3.25, 1) {gb};
\node at (3.5, 1) {$+$};
\pic at (3.75, 1) {v};
\pic[mauve] at (4.75, 1) {chis};
\pic[mauve] at (5.75, 1) {gs};
\node at (6, 1) {$\Big]$};
\node at (2.25, 0) {$-$};
\pic at (3.75, 0) {v};
\pic[mauve] at (2.75, 0) {chis};
\pic[mauve] at (4.75, 0) {chis};
\pic[mauve] at (2.75, 0) {gs};
\pic[mauve] at (5.75, 0) {gs};
\end{tikzpicture}
\\
\begin{tikzpicture}
\node at (0.25, 0) {$=$};
\pic at (0.75, 0) {chis};
\pic at (0.75, 0) {gs};
\pic at (1.75, 0) {gs};
\node at (2.25, 0) {$-$};
\pic at (3.75, 0) {v};
\pic at (2.75, 0) {chis};
\pic at (4.75, 0) {chis};
\pic at (2.75, 0) {gs};
\pic at (5.75, 0) {gs};
\node at (6, 0) {$\phantom{\Big]}$};
\useasboundingbox;
\node at (6, 0) {$,$};
\end{tikzpicture}
\end{align}
i.e., as a phonon self-energy with two screened vertices less a double-counting term [cf.\@ Eq.~(4.23) of Ref.~\citenum{Allen1980} and Eq.~(4.5) of Ref.~\citenum{Maksimov2008}].
All of the above expressions evaluate to the same result as long as all their constituents are exact.
If, however -- for reasons that will become evident soon -- each occurrence of the electron response $g \chi \super b$ or $\chi \super b g$ in Eqs.~\eqref{eq:nonstationary} and \eqref{eq:stationary} (\emph{only} the mauve parts) is replaced by an approximation to it, their values will differ.
Here, by construction Eq.~\eqref{eq:stationary} will deviate least from the exact value.
This is because its \emph{partial} functional derivative with respect to $\chi \super b g$ vanishes.
\begin{equation}
\label{eq:cancellation}
\frac
{\delta \eqref{eq:stationary}}
{\delta \begin{tikzpicture}
\draw[el] (0, 0) to[out=-70, in=-110] (0.5, 0);
\draw[el] (0.5, 0) to[out=+110, in=+70] (0, 0);
\fill (0.5, 0) circle (3pt);
\end{tikzpicture}}
=
\begin{tikzpicture}
\pic at (1, 0) {v};
\pic at (0, 0) {chis};
\pic at (0, 0) {gs};
\end{tikzpicture}
-
\begin{tikzpicture}
\pic at (1, 0) {v};
\pic at (0, 0) {chis};
\pic at (0, 0) {gs};
\end{tikzpicture}
= 0.
\end{equation}
The same holds true for the derivative with respect to $g \chi \super b$.
This is not the case for the right-hand side of Eq.~\eqref{eq:nonstationary},
\begin{equation}
\frac
{\delta \eqref{eq:nonstationary}}
{\delta \begin{tikzpicture}
\draw[el] (0, 0) to[out=-70, in=-110] (0.5, 0);
\draw[el] (0.5, 0) to[out=+110, in=+70] (0, 0);
\fill (0.5, 0) circle (3pt);
\end{tikzpicture}}
=
\begin{tikzpicture}
\pic at (1, 0) {v};
\pic at (0, 0) {chis};
\pic at (0, 0) {gb};
\end{tikzpicture}
\neq 0.
\end{equation}
Equation~\eqref{eq:stationary} is a stationary functional of the electron response~\cite{Calandra2010}.
Consequently, an approximate electron response yields errors only at second order.
Hence, it appears to be a reasonable approximation to replace the electron response in Eq.~\eqref{eq:stationary} by the static ($\omega = 0$) and high-smearing ($T = \sigma$) response we obtain from an \emph{ab initio} calculation using \ac{DFPT},
\begin{equation}
\begin{tikzpicture}
\pic at (0, 1.3) {chi};
\pic at (0, 1.3) {gb};
\pic at (2, 1.3) {gr};
\node at (2.6, 1.3) {$\approx$};
\pic at (3.2, 1.3) {chi};
\pic at (3.2, 1.3) {gbl0};
\pic at (5.2, 1.3) {gbr0};
\node at (0.4, 0) {$-$};
\pic at (3, 0) {v};
\pic at (1, 0) {chi0};
\pic at (4, 0) {chi0};
\pic at (1, 0) {gl0};
\pic at (6, 0) {gr0};
\end{tikzpicture}.
\end{equation}
Since the approximate double-counting term does not depend on $T$ or $\omega$, we only have to focus on the first term when correcting \emph{ab initio} phonon self-energies.
Using Eq.~\eqref{eq:gp2g_alt} and the fact that the partially screened $U(T, \omega)$ and $g \super p (T, \omega)$ exclude low-energy screening and are thus only weakly $T$ and $\omega$~dependent (in the phononic energy range), a corresponding expression can be derived for the active-subspace phonon self-energy $\Pi \super A(T, \omega)$,
\begin{equation}
\label{eq:approx}
\begin{tikzpicture}
\pic at (0, 1.3) {chiT};
\pic at (0, 1.3) {gpl};
\pic at (2, 1.3) {gr};
\node at (2.6, 1.3) {$\approx$};
\pic at (3.2, 1.3) {chiT};
\pic at (3.2, 1.3) {gbl0};
\pic at (5.2, 1.3) {gbr0};
\node at (0.4, 0) {$-$};
\pic at (3, 0) {U0};
\pic at (1, 0) {chiT0};
\pic at (4, 0) {chiT0};
\pic at (1, 0) {gl0};
\pic at (6, 0) {gr0};
\end{tikzpicture}.
\end{equation}
For later analysis, we define the following five approximate active-subspace phonon self-energies~%
\footnote{In practice, we symmetrize the outer product of the bare or partially screened and the screened electron-phonon vertex according to Eq.~(3b) of Ref.~\citenum{Berges2020b}}:
\begin{align}
\label{eq:pi00}
\Pi \super{00} (T, \omega)
\equiv &&
\begin{tikzpicture}
\pic at (0, 0) {chiT};
\pic at (0, 0) {gl0};
\pic at (2, 0) {gr0};
\end{tikzpicture}, &&
\\
\label{eq:pip0}
\Pi \super{p0} (T, \omega)
\equiv &&
\begin{tikzpicture}
\pic at (0, 0) {chiT};
\pic at (0, 0) {gpl0};
\pic at (2, 0) {gr0};
\end{tikzpicture}, &&
\\
\label{eq:pipt}
\Pi \super{p$T$} (T, \omega)
\equiv &&
\begin{tikzpicture}
\pic at (0, 0) {chiT};
\pic at (0, 0) {gpl0};
\pic at (2, 0) {grT0};
\end{tikzpicture}, &&
\\
\label{eq:pib0}
\Pi \super{b0} (T, \omega)
\equiv &&
\begin{tikzpicture}
\pic at (0, 0) {chiT};
\pic at (0, 0) {gb};
\pic at (2, 0) {gr0};
\end{tikzpicture}, &&
\\
\label{eq:pibt}
\Pi \super{b$T$} (T, \omega)
\equiv &&
\begin{tikzpicture}
\pic at (0, 0) {chiT};
\pic at (0, 0) {gb};
\pic at (2, 0) {grT0};
\end{tikzpicture}. &&
\end{align}
With Eq.~\eqref{eq:pi00}, the approach of Ref.~\citenum{Calandra2010} to approximate converged low-temperature nonadiabatic phonons based on adiabatic high-smearing calculations can be formulated as
\begin{align}
\label{eq:d00}
D_{\vec q}(T, \omega)
\approx D_{\vec q} \super{00} (T, \omega)
&\equiv D_{\vec q} \super u(\sigma, 0)
+ \Pi_{\vec q} \super{00} (T, \omega),
\\
\label{eq:du}
D_{\vec q} \super u(\sigma, 0)
&\equiv D_{\vec q}(\sigma, 0)
- \Pi_{\vec q} \super{00} (\sigma, 0),
\end{align}
where we have defined the ``unscreened'' dynamical matrix $D \super u (\sigma, 0)$.
Note that the original publication proposes a slightly different approach via phonons at a high electronic temperature $T_\infty$, chosen such that they can be safely interpolated.
Equation~\eqref{eq:d00} is equivalent for $\sigma \approx T_\infty$.
With Eq.~\eqref{eq:pip0}, the corresponding \ac{cDFPT}-based formula is
\begin{align}
\label{eq:dp0}
D_{\vec q}(T, \omega)
\approx D_{\vec q} \super{p0} (T, \omega)
&\equiv D_{\vec q} \super p (\sigma, 0)
+ \Pi_{\vec q} \super{p0} (T, \omega),
\\
\label{eq:dp}
D_{\vec q} \super p (\sigma, 0)
&\equiv D_{\vec q}(\sigma, 0)
- \Pi_{\vec q} \super{p0} (\sigma, 0),
\end{align}
where the partially screened dynamical matrix is calculated via unscreening from the \ac{DFPT} one but could also be directly obtained from a self-consistent \ac{cDFPT} calculation.
We note that the choice between unscreening and constrained calculations is also relevant in other contexts~\cite{Nomura2012}.
A shortcoming of Eq.~\eqref{eq:pip0} is that the bare susceptibility is generally calculated at a different electronic temperature than the screened vertex.
Using the appropriate screened vertex yields Eq.~\eqref{eq:pipt} and
\begin{equation}
\label{eq:dpt}
D_{\vec q}(T, \omega)
\approx D_{\vec q} \super{p$T$} (T, \omega)
\equiv D_{\vec q} \super p (\sigma, 0)
+ \Pi_{\vec q} \super{p$T$} (T, \omega).
\end{equation}
This will prove insightful but is of no practical use, since $g(T, 0)$ is as computationally expensive as $D(T, 0)$.
In the limit of an infinitely large active subspace, where the partially screened vertex becomes bare, and for a single electronic temperature $T = \sigma$, Eqs.~\eqref{eq:dp0} and \eqref{eq:dp} are equivalent to Eq.~(145) of Ref.~\citenum{Giustino2017}.
In practice, the phonon self-energy with one bare vertex might also be calculated for a finite number of active bands as in Eq.~\eqref{eq:pib0}, which leads to
\begin{align}
\label{eq:db0}
D_{\vec q}(T, \omega)
\approx D_{\vec q} \super{b0} (T, \omega)
&\equiv D_{\vec q} \super{ub} (\sigma, 0)
+ \Pi_{\vec q} \super{b0} (T, \omega),
\\
\label{eq:dub}
D_{\vec q} \super{ub} (\sigma, 0)
&\equiv D_{\vec q}(\sigma, 0)
- \Pi_{\vec q} \super{b0} (\sigma, 0).
\end{align}
Note that the unscreened $D \super{ub} \neq D \super b$ for finite active subspaces.
Here too, a variant with a temperature-corrected screened vertex can be studied.
With Eq.~\eqref{eq:pibt},
\begin{equation}
\label{eq:dbt}
D_{\vec q}(T, \omega)
\approx D_{\vec q} \super{b$T$} (T, \omega)
\equiv D_{\vec q} \super{ub} (\sigma, 0)
+ \Pi_{\vec q} \super{b$T$} (T, \omega).
\end{equation}
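All of the above approaches follow the same basic unscreen--rescreen pattern: the self-energy evaluated at the \emph{ab initio} smearing $\sigma$ is subtracted from the corresponding dynamical matrix, and the self-energy at the target temperature $T$ and frequency $\omega$ is added back. As a minimal illustration (not our production implementation), the following Python sketch evaluates Eq.~\eqref{eq:d00} for a single electronic band and a single phonon mode at one $\vec q$~point; all array and function names are hypothetical, Fermi-Dirac occupations are used for both smearings, and the spin factor $2/N$ follows the convention used in the text.
\begin{verbatim}
import numpy as np

def occupation(e, T):
    # Fermi-Dirac occupation; the ab initio starting point instead uses a
    # Marzari-Vanderbilt smearing of width sigma (simplification of this sketch)
    return 0.5 * (1.0 - np.tanh(e / (2.0 * T)))

def self_energy(eps_k, eps_kq, g, T, omega=0.0, eta=1e-4):
    # Pi(T, omega) = 2/N sum_k |g_k|^2 [f(eps_k) - f(eps_{k+q})]
    #                / (eps_k - eps_{k+q} + omega + i eta)
    # (exact degeneracies eps_k = eps_{k+q} would require the df/de limit)
    f_k, f_kq = occupation(eps_k, T), occupation(eps_kq, T)
    chi = (f_k - f_kq) / (eps_k - eps_kq + omega + 1j * eta)
    return 2.0 / len(eps_k) * np.sum(np.abs(g) ** 2 * chi)

def D_00(D_sigma, eps_k, eps_kq, g_sigma, sigma, T, omega=0.0):
    # Eqs. (du) and (d00): unscreen at the high smearing sigma, rescreen at T, omega
    D_u = D_sigma - self_energy(eps_k, eps_kq, g_sigma, sigma).real
    return D_u + self_energy(eps_k, eps_kq, g_sigma, T, omega)
\end{verbatim}
The variants of Eqs.~\eqref{eq:dp0}--\eqref{eq:dbt} differ only in which pair of vertices replaces $|g_k|^2$ and in which dynamical matrix is unscreened.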
Finally, we remark that throughout the manuscript, ``static'' means $\omega = 0$, while ``dynamic'' means that a quantity retains its dependence on the frequency $\omega$.
With the word ``nonadiabatic'' we refer to an Engelsberg-Schrieffer~\cite{Engelsberg1963} type of nonadiabaticity, i.e., a nonadiabatic electronic renormalization of \emph{adiabatic} phonons~\cite{Saitta2008, Calandra2010}.
However, we still neglect nonadiabatic effects beyond the Migdal theorem~\cite{Migdal1958} arising from vertex corrections, which are of the order $\sqrt{m \sub e \smash/ M}$ and thus usually much smaller.
\subsection{Long-range effects}
\label{sec:lr}
\subsubsection{Friedel long-rangedness}
\label{sec:friedel}
In a metal, the bare electronic susceptibility usually has sharp features at finite momenta in reciprocal space or, equivalently, it is long-range in real space.
This can be referred to as Friedel~\cite{Friedel1958}, Peierls~\cite{Peierls1955}, or Kohn~\cite{Kohn1959} long-rangedness.
These features are generated by the low-energy system, i.e., they affect $\chi \super{b,A} (T, \omega)$ while $\chi \super{b,R} (T, \omega)$ [cf.\@ Eq.~\eqref{eq:split}] can safely be assumed to be smooth in reciprocal space or short-range in real space.
This long-rangedness is inherited by the other screened quantities discussed in Sec.~\ref{sec:rpa}, i.e., $G(T, \omega)$ or $D(T, \omega)$, $g(T, \omega)$, and $W(T, \omega)$.
All other quantities are bare or partially screened and thus smooth, which allows them not only to be calculated on coarse integration meshes but also to be interpolated easily.
\begin{figure}
\includegraphics[width=\linewidth]{fig02.pdf}
\caption{Long-range force constants associated with the Kohn anomaly in a one-dimensional tight-binding chain.
%
We show the interatomic force constants as a function of distance for different electronic temperatures; dashed lines are guides for the eye.
%
The inset depicts the corresponding screened phonon dispersion.}
\label{fig:peierls}
\end{figure}
In Fig.~\ref{fig:peierls}, we show the relation between low-energy electronic screening and long-range force constants using the example of a Peierls chain~\cite{Peierls1955}.
The electron dispersion is modeled as
\begin{equation}
\varepsilon_k = -t \cos(k)
\end{equation}
with $t = 1$~eV, the bare dynamical matrix as
\begin{equation}
D_q \super b = \omega_0^2 [1 - \cos(q)]
\end{equation}
with $\omega_0 = 50$~meV, and the bare electron-phonon coupling as
\begin{equation}
g_{q k} \super b = \mathrm i g_0 [\sin(k) - \sin(k + q)]
\end{equation}
with $g_0 = 0.02~\text{eV}^{3 / 2}$.
Since there is no electron-electron interaction in the model, there is no screening of the electron-phonon coupling either.
The force constants are obtained from the dynamical matrix via a discrete Fourier transform.
We renormalize the bare phonons according to Eq.~\eqref{eq:db2d}.
At the high electronic temperature of $T = 1~\text{eV} \approx 11605~\text K$, the phonon dispersion is smooth and the corresponding force constants decay to nearly zero already after the first neighboring site.
As soon as the electronic temperature is lowered, a Kohn anomaly~\cite{Kohn1959} in the phonon dispersion at $q = \pi$ emerges, and the force constants become long-range, exhibiting Friedel oscillations~\cite{Friedel1958} whereby the sign alternates from one site to the next.
For the chosen parameters, the phonon dispersion becomes soft below $T \sub{CDW} = 6.16~\text{meV} \approx 71~\text K$, indicating the onset of the dimerization.
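The data underlying Fig.~\ref{fig:peierls} can be reproduced with a few lines of code. The following Python sketch is our own minimal implementation, in which the grid size, the electronic temperature shown, and the handling of degenerate denominators are choices on our part; it renormalizes the bare phonons of the chain according to Eq.~\eqref{eq:db2d} and transforms the result to real-space force constants.
\begin{verbatim}
import numpy as np

t, omega0, g0 = 1.0, 0.05, 0.02         # eV, eV, eV^(3/2), parameters from the text
N = 400                                  # k- and q-point grid (our choice)
k = 2 * np.pi * np.arange(N) / N

eps = -t * np.cos(k)                     # electron dispersion
D_bare = omega0 ** 2 * (1 - np.cos(k))   # bare dynamical matrix on the same grid

def f(e, T):                             # Fermi-Dirac occupation
    return 0.5 * (1 - np.tanh(e / (2 * T)))

def D_screened(T):                       # Eq. (db2d): D(T) = D^b + Pi(T, 0)
    D = np.empty(N)
    for iq, q in enumerate(k):
        g = 1j * g0 * (np.sin(k) - np.sin(k + q))   # bare coupling
        eps_kq = -t * np.cos(k + q)
        num, den = f(eps, T) - f(eps_kq, T), eps - eps_kq
        ok = np.abs(den) > 1e-12
        chi = np.where(ok, num / np.where(ok, den, 1.0),
                       -0.25 / T / np.cosh(eps / (2 * T)) ** 2)  # df/de limit
        D[iq] = D_bare[iq] + 2 / N * np.sum(np.abs(g) ** 2 * chi)
    return D

C = np.fft.ifft(D_screened(T=0.1)).real  # force constants; the Kohn anomaly at
                                         # q = pi sharpens as T approaches T_CDW
\end{verbatim}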
\subsubsection{Fr\"ohlich long-rangedness}
\label{sec:froehlich}
The bare and partially screened phonons and electron-phonon coupling do not show any Friedel long-rangedness.
However, as soon as the metallic screening is removed, we deal with another type of long-range phenomenon usually observed in insulators and semiconductors, which we will refer to as Fr\"ohlich or Coulomb long-rangedness, most clearly recognizable in the bare Coulomb interaction $v(\vec r, \vec r') = 1 / \abs{\vec r - \vec r'}$ itself.
In the absence of metallic screening, this propagates into the bare or partially screened quantities and manifests as divergences or discontinuities at the center of the \ac{BZ}, which would lead to Gibbs oscillations in a na\"ive Fourier interpolation.
However, analytical models for these effects exist~\cite{Brovman1967, Giannozzi1991, Gonze1994, Verdi2015, Sjakste2015, Sohier2016, Royo2020, Brunin2020, Jhalani2020, Ponce2021, Royo2021, Macheda2022, Sio2022}, which allow the electron-phonon coupling and the dynamical matrix to be split into a long-range ($\mathcal L$) and a short-range ($\mathcal S$) part, the latter of which can be easily interpolated.
Here, we will apply a recent approach to model the long-range electrostatics of two-dimensional materials~\cite{Royo2021, Ponce2022a, Ponce2022b}, including dipolar and quadrupolar terms, where we neglect the effect of the out-of-plane polarizability on the dynamical matrix.
We decompose the partially screened dynamical matrix and electron-phonon coupling as
\begin{align}
D_{\vec q \kappa \alpha \kappa' \beta} \super p
&= D_{\vec q \kappa \alpha \kappa' \beta}^{\mathcal S}
+ D_{\vec q \kappa \alpha \kappa' \beta}^{\mathcal L},
\\
g_{\vec q \kappa \alpha \vec R s p} \super p
&= g_{\vec q \kappa \alpha \vec R s p}^{\mathcal S}
+ g_{\vec q \kappa \alpha}^{\mathcal L}
\delta_{\vec R 0}^{\phantom{\mathcal L}}
\delta_{s p}^{\phantom{\mathcal L}},
\end{align}
where $s, p$ label electronic orbitals in the unit cell at $\vec R$.
The long-range parts are given by
\begin{align}
D_{\vec q \kappa \alpha \kappa' \beta}^{\mathcal L}
&= \widetilde D_{\vec q \kappa \alpha \kappa' \beta}^{\mathcal L}
- \delta_{\kappa \kappa'} \sum_{\kappa''}
\widetilde D_{0 \kappa \alpha \kappa'' \beta}^{\mathcal L},
\\
\widetilde D_{\vec q \kappa \alpha \kappa' \beta}^{\mathcal L}
&= \phantom{\mathrm i} \sum_{\vec G \neq \vec q}
a_{\vec q + \vec G} ^{\vphantom\ast}
b_{\vec q + \vec G \kappa \alpha} ^{\vphantom\ast}
b_{\vec q + \vec G \kappa' \beta} ^\ast,
\\
g_{\vec q \kappa \alpha}^{\mathcal L}
&= \mathrm i \sum_{\vec G \neq \vec q}
a_{\vec q + \vec G} ^{\vphantom\ast}
b_{\vec q + \vec G \kappa \alpha} ^\ast,
\end{align}
where the summations over \emph{in-plane} reciprocal lattice vectors $\vec G$ converge fast.
The scalar part (in the displacement basis) is
\begin{equation}
\label{eq:lr_scalar}
a_{\vec q}
= \frac{4 \pi f_L(\vec q)}{A \abs{\vec q}}
\big[
1 + \frac{c f_L(\vec q)}{2 \abs{\vec q}}
\vec q^T (\epsilon - \mathds 1) \vec q
\big] ^{-1}
\end{equation}
with the unit cell area $A$ and height $c$, the dielectric constant $\epsilon$, and the cutoff function $f_L(\vec q) = 1 - \tanh(\abs{\vec q} L / 2)$.
The long-range separation parameter $L$ is chosen such that the real-space force constants are minimized.
The vectorial part reads
\begin{equation}
\label{eq:lr_vector}
b_{\vec q \kappa \alpha}
= \frac{\mathrm e^{\mathrm i \vec \tau_\kappa \vec q}}{\sqrt{M_\kappa}}
\big[
\vec Z^*_{\kappa \alpha} \vec q
+ \frac{\mathrm i}{2} \vec q^T Q_{\kappa \alpha} \vec q
\big]
\end{equation}
including the Born effective charges $\vec Z^*$ and the quadrupole tensors $Q$.
Note that neither phonons nor effective charges from \ac{cDFPT} calculations fulfill the acoustic sum rule.
In the bare system, $\epsilon = \mathds 1$ and $\vec Z^* = Z \mathds 1$ with nuclear charge $Z$.
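To make the use of these formulas concrete, the following Python sketch evaluates Eqs.~\eqref{eq:lr_scalar} and \eqref{eq:lr_vector} for a single in-plane wave vector. All function and variable names are our own; two-component in-plane vectors and the corresponding $2 \times 2$ tensor blocks are assumed, and only the $\vec G = 0$ term of the sums is indicated, since they converge quickly.
\begin{verbatim}
import numpy as np

def f_L(qn, L):                     # cutoff function f_L(q) = 1 - tanh(|q| L / 2)
    return 1.0 - np.tanh(qn * L / 2.0)

def a_scalar(q, A, c, eps, L):      # Eq. (lr_scalar); q is an in-plane 2-vector
    qn = np.linalg.norm(q)
    screen = 1.0 + c * f_L(qn, L) / (2.0 * qn) * q @ (eps - np.eye(2)) @ q
    return 4.0 * np.pi * f_L(qn, L) / (A * qn) / screen

def b_vector(q, tau, M, Z_row, Q):  # Eq. (lr_vector) for one atom and direction;
    # Z_row is the corresponding row of Z*, Q the corresponding quadrupole block
    return np.exp(1j * tau @ q) / np.sqrt(M) * (Z_row @ q + 0.5j * q @ Q @ q)

# G = 0 contribution to the long-range coupling g^L (further G shells add analogously):
# g_L = 1j * a_scalar(q, A, c, eps, L) * np.conj(b_vector(q, tau, M, Z_row, Q))
\end{verbatim}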
\subsection{Wannierization and Fourier interpolation}
\label{sec:wannier}
In practice, calculations of the electronic energies $\varepsilon_{\vec k}$ and especially the dynamical matrices $D_{\vec q}$ and electron-phonon couplings $g_{\vec q \vec k}$ (screened or partially screened) are limited to a coarse grid of $\vec k$ and $\vec q$~points in the \ac{BZ}.
The points in between are usually obtained via Fourier interpolation, i.e., by a discrete Fourier transform into a localized representation and the smoothest-possible back transform to arbitrary points.
The phononic degrees of freedom already have a natural localized representation, namely the Cartesian ionic displacement directions; for the electrons, the basis of Wannier functions~\cite{Marzari2012}, i.e., localized orthogonal orbitals, is used.
For instance, the interpolant of the short-range electron-phonon coupling~\cite{Giustino2007a} as used in the EPW code~\cite{Giustino2007a, Noffsinger2010, Ponce2016} reads
\begin{equation}
g_{\vec q \nu \vec k m n}^{\mathcal S}
= \sum_{\mathclap{\vec R \kappa \alpha \vec R' s p}}
e_{\vec q \nu \kappa \alpha} ^{\vphantom\ast}
\psi_{\vec k + \vec q m s} ^\ast
g_{\vec R \kappa \alpha \vec R' s p}^{\mathcal S}
\psi_{\vec k n p} ^{\vphantom\ast}
\mathrm e^{\mathrm i (\vec q \vec R + \vec k \vec R')},
\end{equation}
where $e$ and $\psi$ are the eigenvectors of the dynamical matrix and of the electronic Hamiltonian, respectively, for which equivalent interpolation formulas hold.
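The principle is easily illustrated for a scalar quantity in one dimension. The following deliberately simplified Python sketch uses hypothetical names and omits the rotation with the eigenvectors $e$ and $\psi$ that the full Wannier interpolation performs.
\begin{verbatim}
import numpy as np

def to_real_space(F_q, q_grid, R_grid):
    # discrete Fourier transform into the localized representation
    return np.exp(-1j * np.outer(R_grid, q_grid)) @ F_q / len(q_grid)

def interpolate(F_R, R_grid, q):
    # smoothest-possible back transform to an arbitrary q point
    return np.sum(F_R * np.exp(1j * q * R_grid))

q_coarse = 2 * np.pi * np.arange(12) / 12   # coarse 12-point grid
R = np.arange(12) - 6                       # lattice vectors of the periodic supercell
D_coarse = 2 - 2 * np.cos(q_coarse)         # smooth, i.e., short-range example
D_R = to_real_space(D_coarse, q_coarse, R)
print(interpolate(D_R, R, q=0.1).real)      # ~ 2 - 2 cos(0.1)
\end{verbatim}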
Having taken care of the Fr\"ohlich long-rangedness [Sec.~\ref{sec:froehlich}], we can safely interpolate all quantities except those related to $\chi \super{b,A} (T, \omega)$ at low $T$, which has to be evaluated on a dense mesh because of the inherent Friedel long-rangedness [Sec.~\ref{sec:friedel}].
\section{Implementation}
\label{sec:implementation}
We implemented routines to perform \ac{cDFPT} calculations, based on existing code kindly provided by the authors of Ref.~\citenum{Nomura2015}, and to renormalize the phonons according to Eqs.~\eqref{eq:d00}--\eqref{eq:dbt} in the \textsc{PHonon} and \textsc{EPW} codes~\cite{Giustino2007a, Noffsinger2010, Ponce2016}, which are part of the \textsc{Quantum ESPRESSO} distribution~\cite{Giannozzi2009, Giannozzi2017, Giannozzi2020}.
The corresponding patch is provided as Supplemental Material.
The implementation of constrained theories such as \ac{cRPA} and \ac{cDFPT} on top of existing programs to perform unconstrained \ac{RPA} or \ac{DFPT} calculations is straightforward and requires only minor modifications of the source code~\cite{Nomura2015}.
In fact, the most difficult aspect is the definition of suitable electronic active subspaces, i.e., the identification of the band indices (usually sorted by energy in \emph{ab initio} codes) that belong to the active bands for each $\vec k$~point.
In fortunate cases, an appropriate low-energy subspace is isolated from all other bands~\cite{Berges2020b}.
In the general case, however, the active bands will be entangled with other bands.
For simplicity, we define the \ac{cDFPT} active subspace via an energy window, but a selection via band indices or atomic projections is also possible~\cite{Berges2017}.
We use a slightly modified version of the \textsc{PHonon} code with additional input parameters \texttt{cdfpt\_min} and \texttt{cdfpt\_max}, which define the lower and upper bounds of the energy window, as well as \texttt{bare} to suppress the electronic response.
The calculation and interpolation of the electron-phonon coupling as well as the phonon renormalization are done with the \textsc{EPW} code~\cite{Giustino2007a, Noffsinger2010, Ponce2016}.
Usually, the \textsc{EPW} code reads a single directory \texttt{dvscf\_dir} including the dynamical matrices $D$ and the change of the self-consistent potential $\partial V$ from \ac{DFPT} as calculated with the \textsc{PHonon} code.
We define a second input parameter \texttt{cdfpt\_dir} pointing to analogous data $D \super p$ and $\partial V \super p$ from a \ac{cDFPT} (or bare) calculation.
This directory also contains the values of \texttt{cdfpt\_min} and \texttt{cdfpt\_max}, which for convenience are used to set the default ``frozen window'' for the generation of Wannier functions~\cite{Pizzi2020}.
If both \texttt{dvscf\_dir} and \texttt{cdfpt\_dir} are specified, the modified code calculates the electron-phonon matrix elements $g(\sigma, 0)$ and $g \super p (\sigma, 0)$ and performs the Fourier interpolation of the dynamical matrices $D(\sigma, 0)$ and $D \super p (\sigma, 0)$ and of the matrix elements in the same way.
The fact that identical basis transforms are employed on both the \ac{DFPT} and the \ac{cDFPT} data ensures a consistent gauge.
Finally, we evaluate phonon self-energies and spectral functions for arbitrary $\vec q$~points using dense $\vec k$~meshes and small electronic temperatures.
Besides the above, we introduced additional inputs for the \textsc{EPW} code:
Since the \ac{cDFPT} quantities do not always fulfill the acoustic sum rule~\cite{vanLoon2021a}, enforcing it can be disabled using \texttt{asr\_typ = 'none'}.
To properly handle the long-range terms in \ac{cDFPT}, we define \texttt{lpolarc} and read the file \texttt{quadrupolec.fmt} in addition to the existing \texttt{lpolar} and \texttt{quadrupole.fmt}.
Using \texttt{unscreen\_fine}, $D \super u (\sigma, 0)$ and $D \super p (\sigma, 0)$ can be calculated on the dense (instead of coarse) \ac{BZ} meshes [cf.\@ Eqs.~\eqref{eq:du} and \eqref{eq:dp}].
We select $T$ and the corresponding smearing function $f$ via \texttt{temps} and \texttt{types}.
Finally, the $\mathrm i 0^+$ appearing in the phonon spectral function in Eq.~\eqref{eq:specfun} is set in practice via two smearings \texttt{degaussw} and \texttt{degaussq} using Eq.~\eqref{eq:double_smearing} defined later.
\section{Results}
\label{sec:results}
In this section, we apply the above to monolayer TaS\s2, for which we calculate screened, partially screened, and bare phonons [Sec.~\ref{sec:phonons}], the corresponding electron-phonon coupling [Sec.~\ref{sec:coupling}], renormalized phonons using the different approaches [Secs.~\ref{sec:renorm} and \ref{sec:corr}], and the phonon spectral function [Sec.~\ref{sec:specfun}].
To test for general validity, we performed additional calculations for n-doped MoS\s2, see Appendix~\ref{app:mos2}.
The trigonal-prismatic transition-metal dichalcogenide TaS\s2 has long been known as a showcase of competing charge-density waves~\cite{Tidman1974} and superconductivity~\cite{Nagata1992, Vano2021}, which are suppressed and enhanced, respectively, when reducing the material thickness to the monolayer~\cite{NavarroMoratalla2016, Yang2018}.
According to \ac{cDFPT} results~\cite{Berges2020b}, the lattice instability and associated Kohn anomalies are exclusively due to low-energy electronic screening from an isolated half-filled band at the Fermi level.
These signs of significant electron-phonon coupling and well separable electronic energy scales make monolayer TaS\s2 an ideal system to test the discussed methods and to study the different levels of electronic screening.
In the \emph{ab initio} calculations, we apply the \ac{PBE} functional~\cite{Perdew1996} and corresponding norm-conserving pseudopotentials with \ac{NLCC} and without \ac{SC} states from the \textsc{PseudoDojo} table~\cite{Hamann2013, vanSetten2018} at an energy cutoff of 100~Ry.
We separate periodic images of the layer using a unit-cell height of 15~\AA{} together with a truncation of the Coulomb interaction in this direction~\cite{Sohier2017}.
The relaxed lattice constant is $a = 3.34$~\AA.
For the high-smearing starting point, we use a Marzari-Vanderbilt smearing~\cite{Marzari1999} of $\sigma = 20$~mRy in combination with uniform $12 \times 12$ $\vec k$- and $\vec q$-point meshes (including $\Gamma$), sufficient for this smearing.
Reference low-temperature data is generated using a Fermi-Dirac smearing of $T = 300~\text K \approx 1.9~\text{mRy}$ and $48 \times 48$ $\vec k$~points.
Minimizing forces to below 1~\textmu Ry/Bohr yields a layer thickness (sulfur-sulfur distance) of $d = 3.13$~\AA, which is recomputed for each considered smearing and $\vec k$~mesh but does not change significantly.
For the Fourier interpolation, we use one-shot Wannier functions obtained from projections onto atomic orbitals to ensure perfect symmetry, with the exception of the twenty-two-band case, where we use maximally localized Wannier functions (MLWF)~\cite{Marzari2012} with (i)~a Ta-$2s$ orbital, (ii)~a $1s$~orbital vertically centered between two S atoms, and (iii)~another three $1s$~orbitals halfway between (i) and (ii) as initial projections for the additional five conduction bands.
\subsection{From screened to bare phonons}
\label{sec:phonons}
\begin{figure}
\includegraphics[width=\linewidth]{fig03.pdf}
\caption{(a--e)~Electronic band structure of monolayer TaS\s2 from \ac{DFT}.
%
Possible choices for sets of active bands of increasing size are shown using thick mauve lines.
%
(f--j)~Corresponding phonon dispersions for a high Marzari-Vanderbilt smearing of $\sigma = 20$~mRy.
%
Screened and partially screened phonons from \ac{DFPT} and \ac{cDFPT} are shown using solid lines, renormalized (screened \ac{cDFPT} and unscreened \ac{DFPT}) phonons using dashed lines.}
\label{fig:subspaces}
\end{figure}
The Kohn-Sham band structure of monolayer TaS\s2 from \ac{DFT} is shown in Fig.~\ref{fig:subspaces}\,(a--e).
At the Fermi level, there is a half-filled isolated band of Ta-$d_{z^2}$, -$d_{x^2 - y^2}$, and -$d_{x y}$ orbital character.
These orbitals also span the two lower empty bands, which partially overlap but do not hybridize with the two higher empty bands of Ta-$d_{x z}$ and -$d_{y z}$ character.
The occupied bands are formed by four isolated blocks of (in order of decreasing energy) six S-$p$, two S-$s$, three Ta-$p$, and one Ta-$s$ band.
To trace the transition from screened to bare phonons and interactions, we perform \ac{cDFPT} calculations for active subspaces of different size, including zero (\ac{DFPT}), one, five, thirteen, seventeen, and twenty-two bands.
Note that even if all bands explicitly calculated in \ac{DFT} were considered active, the infinite number of empty bands accounted for via the Sternheimer approach as well as the core bands incorporated into the pseudopotential would still contribute to the screening~[cf.\@ Appendix~\ref{app:bare}].
The corresponding Fourier-interpolated phonon dispersions are shown in Fig.~\ref{fig:subspaces}\,(f--j).
The \ac{DFPT} phonon dispersion, obtained from $D(\sigma, 0)$ via Eq.~\eqref{eq:eigen}, is reproduced as a reference in each panel using solid orange lines.
It features a softening of the longitudinal-acoustic branch, most pronounced at $\vec q = 2/3\,\mathrm M$, signaling the tendency toward the experimentally observed $3 \times 3$ charge-density wave~\cite{Tidman1974}.
Interestingly, at the high Marzari-Vanderbilt smearing of $\sigma = 20$~mRy, the system is dynamically stable with no imaginary frequencies.
The partially screened phonons, obtained from $D \super p (\sigma, 0)$, corresponding to the different choices of active bands are shown using solid mauve lines.
Excluding electronic screening from within the isolated band only [Fig.~\ref{fig:subspaces}\,(f)] already removes all $\vec q$-dependent softening of the longitudinal branch, which is now highest in energy among the acoustic branches.
For five active bands [Fig.~\ref{fig:subspaces}\,(g)], the situation is similar, except that the acoustic sum rule is no longer fulfilled because we freeze long-wavelength dipole-allowed transitions~\cite{vanLoon2021a}, and the acoustic phonons acquire a finite energy at $\Gamma$ [cf.\@ Appendix~\ref{app:bare}].
This effect is even more pronounced for thirteen [Fig.~\ref{fig:subspaces}\,(h)], seventeen [Fig.~\ref{fig:subspaces}\,(i)], and twenty-two [Fig.~\ref{fig:subspaces}\,(j)] active bands, where the partially screened dispersions have shifted to much higher energies, the originally acoustic phonons reaching about 175~meV in the latter case.
Note that some of the branches feature a finite slope near $\Gamma$, which is due to the lack of metallic screening and corresponds to long-range interactions in real space.
We used the electrostatic model introduced in Sec.~\ref{sec:froehlich} to properly interpolate these phonons; the details are given in the next section.
If we renormalize the partially screened phonons evaluating Eq.~\eqref{eq:dp0} at $\omega = 0$ using exactly the same smearing $T = \sigma$ and \ac{BZ} sampling as in the \emph{ab initio} calculation, we obtain the dashed mauve lines in Fig.~\ref{fig:subspaces}\,(f--j).
They coincide with the \ac{DFPT} result for all sizes of the active subspace, showing that the phonon self-energy with one partially screened and one screened vertex is exact as long as all involved quantities are exact too.
Note that all quantities entering Eq.~\eqref{eq:dp0} have been directly computed using \ac{DFPT} and \ac{cDFPT} on the coarse grid and only the resulting $D \super{p0} (\sigma, 0)$ has been Fourier-interpolated along the high-symmetry lines shown in Fig.~\ref{fig:subspaces}\,(f--j).
Finally, in Fig.~\ref{fig:subspaces}\,(f--j) we also show the unscreened phonons from $D \super u (\sigma, 0)$ according to Eq.~\eqref{eq:du}, again calculated on the original coarse mesh and interpolated only in the end, using dashed orange lines.
Since $g(\sigma, 0)$ is smaller than $g \super p (\sigma, 0)$, $\Pi \super{00} (\sigma, 0)$ is smaller than $\Pi \super{p0} (\sigma, 0)$.
As a consequence, the unscreened phonon frequencies are also smaller than (or at most equal to) the \ac{cDFPT} ones; some $\vec q$-dependent softening is still present albeit hardly discernible because of the large smearing chosen.
A practical advantage is the absence of long-range terms~\cite{Pickett1976, Allen1980}; the slope of all branches vanishes toward $\Gamma$.
\subsection{From screened to bare electron-phonon coupling}
\label{sec:coupling}
\begin{figure*}
\includegraphics[width=\linewidth]{fig04.pdf}
\caption{Electron-phonon coupling of monolayer TaS\s2 for different sizes of the active subspace at a Marzari-Vanderbilt smearing of $\sigma = 20$~mRy.
%
We show the absolute value of the coupling to the isolated low-energy electronic band as a function of $\vec q$ with $\vec k = 0$ for all phononic eigenmodes and using different approaches to handle the long-range part.
%
The black dots indicate direct \ac{DFPT} and \ac{cDFPT} results and serve as a reference.}
\label{fig:coupling}
\end{figure*}
Now we will discuss the screened and partially screened electron-phonon coupling $g(\sigma, 0)$ and $g \super p (\sigma, 0)$ corresponding to the different active subspaces.
In Fig.~\ref{fig:coupling}, the absolute value of the interpolated coupling with the isolated electronic band at the Fermi level is shown as a function of $\vec q$ with $\vec k = 0$ for all nine phononic eigenmodes.
Reference data from direct \ac{DFPT} and \ac{cDFPT} calculations are shown using black dots.
Those $\vec q$~points that are part of the $12 \times 12$ mesh on which the interpolation is based are marked using vertical gray lines.
The interpolated quantities are guaranteed to match the reference data at these $\vec q$~points by construction.
In the case of the screened $g(\sigma, 0)$ from \ac{DFPT} shown in Fig.~\ref{fig:coupling}\,(a), the interpolated coupling matches the reference coupling everywhere.
Here the system is metallic such that the coupling is continuous and smooth at the zone center, and due to the high electronic smearing there are no sharp features away from $\Gamma$ either.
However, as soon as low-energy screening is excluded, peaks at $\Gamma$ emerge whose magnitude increases with the number of active bands, and the overall magnitude of the
coupling increases too, as seen in Fig.~\ref{fig:coupling}\,(b--f).
A na\"ive Fourier interpolation of these data yields the gray curves, which by definition match the reference at all original $\vec q$~points but are wrong in the vicinity of $\Gamma$.
This is because the peaks belong to the long-range part, which has to be subtracted before interpolation and added back afterward, as described in Sec.~\ref{sec:froehlich}.
Using the equations in Sec.~\ref{sec:froehlich} with the dielectric constant $\epsilon$ and Born effective charges $\vec Z^*$ as obtained from the \ac{cDFPT} calculation but neglecting the term in Eq.~\eqref{eq:lr_vector} that involves the quadrupole tensors $Q$, which cannot be calculated from perturbation theory with \textsc{Quantum ESPRESSO} at present, we obtain the orange lines.
Most of the reference points are reproduced, but there are still deviations for the mode with the largest coupling very close to $\Gamma$ in Fig.~\ref{fig:coupling}\,(b,\,c) as well as for the mode with the third-largest coupling in a wider range around $\Gamma$ in Fig.~\ref{fig:coupling}\,(b--f).
The former discrepancy is likely due to inaccuracies in $\epsilon$ or $\vec Z^*$.
The latter discrepancy however occurs for a phonon mode where the atoms move in the out-of-plane direction and can be traced back to the missing term with $Q_{\text S z}$ in Eq.~\eqref{eq:lr_vector}.
\begin{table}
\caption{Parameters used in the long-range terms of the phonons shown in Fig.~\ref{fig:subspaces}\,(f--j) and the electron-phonon coupling in Fig.~\ref{fig:coupling}\,(b--f).
%
The in-plane dielectric constants $\epsilon$ and Born effective charges $\vec Z^*$~($e$) stem from \ac{cDFPT} calculations.
%
The independent elements of the quadrupole tensors $Q$~($e$\,Bohr) have been optimized by fitting the interpolants to reference \ac{cDFPT} data.
%
The out-of-plane elements $Q_{\text S \alpha \alpha z}$ do not contribute.
%
The other elements are either zero or follow from $Q_{\kappa x x y} = Q_{\kappa y x x} = -Q_{\kappa y y y}$ and $Q_{\kappa z x x} = Q_{\kappa z y y}$ with $Q_{\text S' \alpha} = (1 - 2 \delta_{\alpha z}) Q_{\text S \alpha}$.
%
The long-range separation parameters $L$ minimize the real-space force constants.
%
See Appendix~\ref{app:bare} for more information about the bare values (``all''), where the last line is for different pseudopotentials with \ac{SC} states.
%
We report values for neighboring Ta and S~atoms with $\vec \tau \sub S - \vec \tau \sub{Ta} = (0, a / \sqrt 3, -d / 2)$.
%
Note that the optimized values of the bare $Q_{\text S z y y}$ are remarkably close to $Z^* \sub S d = 35.44~e\,\text{Bohr}$ and $Z^* \sub{S,\ac{SC}} d = 82.69~e\,\text{Bohr}$.
}
\label{tab:lr}
\medskip
\setlength\tabcolsep{4.2pt}
\begin{tabular}{r|r|*2r|*3r|r}
bands
& $\epsilon$
& $Z^* \sub{Ta}$
& $Z^* \sub S$
& $Q_{\text{Ta} y y y}$
& $Q_{\text S y y y}$
& $Q_{\text S z y y}$
& $L$
\\[1.4pt]\hline
1 & $3.93$ & $ 2.13$ & $-0.53$ & $6.20$ & $ 1.25$ & $ 4.39$ & $6.8$ \\
5 & $3.30$ & $ 2.84$ & $-0.43$ & $6.21$ & $ 1.82$ & $ 5.48$ & $6.4$ \\
13 & $1.62$ & $ 3.97$ & $ 2.07$ & $3.84$ & $-2.64$ & $15.01$ & $5.4$ \\
17 & $1.61$ & $ 6.75$ & $ 2.06$ & $6.22$ & $-2.59$ & $14.80$ & $5.3$ \\
22 & $1.13$ & $ 8.74$ & $ 3.96$ & $9.71$ & $-1.01$ & $19.73$ & $5.5$ \\
all & $1.00$ & $13.00$ & $ 6.00$ & $0.23$ & $ 0.38$ & $35.06$ & $3.8$ \\\hline
\ac{SC} all & $1.00$ & $27.00$ & $14.00$ & $7.31$ & $ 0.23$ & $82.08$ & $3.8$
\end{tabular}
\end{table}
The quadrupole tensors $Q$ could be calculated \emph{ab initio} with \textsc{Abinit}~\cite{Gonze2016, Gonze2020}, albeit only within the \ac{LDA} and \ac{PBE} exchange-correlation functionals and without \ac{NLCC}~\cite{Royo2019}.
We instead choose an approach similar to the one in Ref.~\citenum{Ponce2021} and fit the quadrupole tensors by minimizing the error in the interpolated phonons and coupling for all reference $\vec q$~points marked with black dots in Fig.~\ref{fig:coupling}\,(b--f)~%
\footnote{We simultaneously minimize the mean squared error of the (complex) dynamical matrix and of the shown coupling (absolute value), preserving the symmetries of the quadrupole tensors imposed by the point groups of the associated atoms.}.
The resulting contributing elements of $Q$ together with those of $\epsilon$ and $\vec Z^*$ from \ac{cDFPT} and the optimal $L$ are listed in Table~\ref{tab:lr}.
As shown using mauve lines in Fig.~\ref{fig:coupling}\,(b--f), the coupling to the out-of-plane mode is correctly interpolated when the quadrupole term is taken into account.
\subsection{Comparison of approaches}
\label{sec:renorm}
\begin{figure}
\includegraphics[width=\linewidth]{fig05.pdf}
\caption{Renormalized acoustic phonon dispersion of monolayer TaS\s2 for a Fermi-Dirac smearing of $T = 300~\text K \approx 1.9~\text{mRy}$ based on \emph{ab initio} calculations performed at a Marzari-Vanderbilt smearing of $\sigma = 20$~mRy for different sizes of the active subspace according to (a--e)~Eq.~\eqref{eq:d00}, (f--j)~Eq.~\eqref{eq:dp0}, (k--o)~Eq.~\eqref{eq:dpt}, (p--t)~Eq.~\eqref{eq:db0}, and (u--y)~Eq.~\eqref{eq:dbt}.
%
The thin gray lines and black dots are the same in all panels and indicate converged direct \ac{DFPT} results for smearings $\sigma$ (starting point) and $T$ (reference), respectively.}
\label{fig:renorm}
\end{figure}
After the analysis of the screened and partially screened phonons and interactions calculated for the high Marzari-Vanderbilt smearing of $\sigma = 20$~mRy and a coarse momentum grid, we will now use Eq.~\eqref{eq:d00} (with two screened vertices), Eqs.~\eqref{eq:dp0} and \eqref{eq:dpt} (with one partially screened vertex), as well as Eqs.~\eqref{eq:db0} and \eqref{eq:dbt} (with one bare vertex) to estimate the screened phonons for a low electronic temperature (Fermi-Dirac smearing) of $T = 300~\text K \approx 1.9~\text{mRy}$ and dense momentum grids.
On this basis, we will compare the different approaches with special focus on the influence of the number of active bands.
The results are shown in Fig.~\ref{fig:renorm} together with converged reference points from direct \ac{DFPT} calculations.
At this electronic temperature, the system is dynamically unstable, as indicated by imaginary frequencies, and exhibits relatively sharp Kohn anomalies.
The adiabatically renormalized phonon dispersions have not been interpolated but calculated for each $\vec q$~point along the path separately, using a converged $96 \times 96$ $\vec k$~mesh.
Only the underlying quantities, namely the electronic energies $\varepsilon$, the screened dynamical matrix $D(\sigma, 0)$, and the screened, partially screened, and bare electron-phonon coupling $g(\sigma, 0)$, $g \super p (\sigma, 0)$, and $g \super b$ have been interpolated.
Alternatively, the interpolation could be performed on the level of the unscreened $D \super u (\sigma, 0)$ and partially screened $D \super p (\sigma, 0)$ instead of the screened $D(\sigma, 0)$, but this degrades the results because errors accumulate, see Appendix~\ref{app:unscreen}.
In all self-energy calculations, the chemical potential has been adjusted to the respective smearing function $f$ and electronic temperature $\sigma$ or $T$ to properly capture the response on the Fermi surface.
Figure~\ref{fig:renorm}\,(a--e) displays the renormalized phonons from $D \super{00} (T, 0)$ according to Eq.~\eqref{eq:d00}, which is equivalent to the approach suggested in Ref.~\citenum{Calandra2010}.
Most reference points are well reproduced -- even the shape of the soft mode, which is remarkable since it is completely absent in the high-smearing data used as the starting point, shown using thin gray lines.
The agreement is even better for larger active subspaces, for which the approximations leading to Eq.~\eqref{eq:approx} are less severe since the $T$~dependence of $U$ and $g \super p$ is strongly suppressed.
We note that here the active subspace merely defines the number of bands summed over since no downfolding to partially screened quantities is involved in this approach.
Only near $\Gamma$ is the acoustic sum rule slightly broken, resulting in unphysical finite energies of the acoustic modes.
This might be related to small changes in the atomic positions upon cooling the system, which are not captured by the discussed methods.
The corresponding results from $D \super{p0} (T, 0)$ according to Eq.~\eqref{eq:dp0} are shown in Fig.~\ref{fig:renorm}\,(f--j), where (f) represents the proposed use of an optimal active subspace and (j) is the closest we get to using a bare vertex~[cf.\@ Appendix~\ref{app:bare}].
The \ac{cDFPT}-based approach consistently overestimates the phonon softening, the more severely the larger the active subspace.
This might be surprising since the diagrammatically correct combination of partially screened and screened vertices is used.
However, in contrast to the partially screened coupling $g \super p (T, 0) \approx g \super p (\sigma, 0)$, the correct screened coupling $g(T, 0) \not\approx g(\sigma, 0)$ depends significantly on $T$, a fact that is not properly accounted for.
When using the larger $g(\sigma, 0)$ in place of $g(T, 0)$ in $\Pi \super{p0} (T, 0)$, we underestimate the screening of the coupling and thus overestimate the screening of the phonons.
In the approach of Ref.~\citenum{Calandra2010} in turn, the error in the phonon self-energy and the double-counting term cancel to first order [cf.\@ Eq.~\eqref{eq:cancellation}].
To prove that the overscreening seen in the \ac{cDFPT}-based approach is indeed due to the failure to adjust the screened coupling to the target temperature, we also calculate the phonons from $D \super{p$T$} (T, 0)$ according to Eq.~\eqref{eq:dpt}.
The only difference from the previous approach is that we \emph{take} the correctly screened coupling $g(T, 0)$ from the low-temperature reference calculation~%
\footnote{The \ac{DFPT} calculation has been done using $48 \times 48$ $\vec k$~points, but $g(T, 0)$ is only calculated on the coarser $12 \times 12$ $\vec k$- and $\vec q$-point meshes for subsequent Fourier interpolation.}.
Since the direct \ac{DFPT} calculation of $g(T, 0)$ is computationally as expensive as the direct calculation of the $D(T, 0)$ we are interested in -- indeed they are calculated at the same time -- this approach has no practical utility beyond this proof of concept.
As expected, in Fig.~\ref{fig:renorm}\,(k--o), the reference points are largely reproduced now, showing that it is in principle possible to obtain accurate results based on partially screened quantities.
``However\rlap,'' as already stated by \citeauthor{Calandra2010}, ``such a procedure requires an accurate self-consistent determination of the screened potential''~\cite{Calandra2010}.
The growing overscreening with the number of active bands in Fig.~\ref{fig:renorm}\,(f--j) presages large errors in the limit of using a bare vertex, when the screened vertex is not corrected.
This is confirmed by the phonon dispersions from $D \super{b0} (T, 0)$ according to Eq.~\eqref{eq:db0} in Fig.~\ref{fig:renorm}\,(p--t), which exhibit deviations similar to the ones already seen in Fig.~\ref{fig:renorm}\,(j), yet much more pronounced.
Just like the results from $D \super{00} (T, 0)$ in Fig.~\ref{fig:renorm}\,(a--e), $D \super{b0} (T, 0)$ converges very fast with the number of bands summed over, as the vertices are fixed and the temperature dependence of the bare susceptibility stems largely from the single band at the Fermi level.
Note that the bare vertex (unlike the partially screened ones) and derived quantities depend on the pseudopotential.
The influence of \ac{SC} states is discussed in Appendix~\ref{app:bare}.
Finally, we also repeat the calculation with the bare vertex using the temperature-adjusted screened vertex.
This approach is in principle exact, at least in the limit of an infinite number of bands.
Indeed, $D \super{b$T$}$ according to Eq.~\eqref{eq:dbt} yields phonons with an overscreening error, which however decreases slowly with the number of bands summed over, see Fig.~\ref{fig:renorm}\,(u--y).
In practice, a partially screened vertex that matches the number of bands promises to be a good alternative to the bare vertex, which is incompatible with the concept of a finite active subspace.
Taken together, it is clear that the method of Ref.~\citenum{Calandra2010} is the easiest-to-use and best-performing one in this context.
However, we would like to argue in favour of using a partially screened vertex for the optimal subspace, see Fig.~\ref{fig:renorm}\,(f), for two reasons:
(i)~The Friedel long-rangedness is exactly removed, guaranteeing a smooth partially screened phonon dispersion as in Fig.~\ref{fig:subspaces}\,(f), and (ii)~the result can be systematically improved as shown in the following.
\subsection{Correction of the screened vertex}
\label{sec:corr}
To overcome the problem with the \ac{cDFPT}-based approach, we need to have precise control of the screened vertex and solve Eq.~\eqref{eq:gp2g_alt}.
However, to our knowledge, it is currently not possible to calculate the necessary partially screened $U$ as a function of all three momenta and four electronic band indices and consistent with existing \ac{cDFPT} implementations.
Even though eventually there will be no way around this, in this section we present two alternative correction methods that approximate or circumvent the calculation of $U$ at no significant additional computational cost.
First, we can make the simplistic assumption that the dependence on the electronic degrees of freedom can be neglected or averaged out.
Then we can approximately solve Eq.~\eqref{eq:gp2g_alt} for
\begin{equation}
U_{\vec q}
\approx \frac{
\sum_{\kappa \alpha \vec k m n} ^{\vphantom0}
\abs{g_{\vec q \kappa \alpha \vec k m n} ^{\vphantom0} (\sigma, 0)
- g_{\vec q \kappa \alpha \vec k m n} \super p (\sigma, 0)}
}{
\sum_{\kappa \alpha \vec k m n} ^{\vphantom0}
\abs{g_{\vec q \kappa \alpha \vec k m n} ^{\vphantom0} (\sigma, 0)
\chi_{\vec q \vec k m n} \super{b,A} (\sigma, 0)}
}.
\end{equation}
The corrected screened electron-phonon coupling follows as
\begin{equation}
\label{eq:corr1}
g_{\vec q \kappa \alpha \vec k m n} \super{corr.~I} (T, 0)
\approx g_{\vec q \kappa \alpha \vec k m n} (\sigma, 0)
\frac{\epsilon_{\vec q}(\sigma)} {\epsilon_{\vec q}(T)}
\end{equation}
with the $\vec q$-dependent and otherwise scalar dielectric function
\begin{equation}
\epsilon_{\vec q}(T)
\approx 1 - U_{\vec q}
\frac 2 N \sum_{\vec k m n}
\chi_{\vec q \vec k m n} \super{b,A} (T, 0).
\end{equation}
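In a sketch (Python, hypothetical array names, all quantities flattened over the remaining phonon and electronic indices for one $\vec q$~point, $N$ being the number of $\vec k$~points), the first correction scheme amounts to the following.
\begin{verbatim}
import numpy as np

def U_eff(g, g_p, chi_sigma):
    # averaged partially screened interaction U_q (see the equation above)
    return np.sum(np.abs(g - g_p)) / np.sum(np.abs(g * chi_sigma))

def eps_q(U, chi, N):
    # q-dependent, otherwise scalar dielectric function
    return 1.0 - U * 2.0 / N * np.sum(chi)

def g_corr_I(g_sigma, g_p, chi_sigma, chi_T, N):
    # Eq. (corr1): rescale the high-smearing screened coupling to the target T
    U = U_eff(g_sigma, g_p, chi_sigma)
    return g_sigma * eps_q(U, chi_sigma, N) / eps_q(U, chi_T, N)
\end{verbatim}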
Second, we make the ansatz that the change in the electron-phonon coupling for the electronic degrees of freedom is linear,
\begin{multline}
\label{eq:corr2}
g_{\vec q \kappa \alpha \vec k m n} \super{corr.~II} (T, 0)
= g_{\vec q \kappa \alpha \vec k m n} \super p (\sigma, 0)
\\
+ x_{\vec q \kappa \alpha} ^{\vphantom0} (T)
[g_{\vec q \kappa \alpha \vec k m n} ^{\vphantom0} (\sigma, 0)
- g_{\vec q \kappa \alpha \vec k m n} \super p (\sigma, 0)]
\end{multline}
with an unknown $x$ that has to be determined for each phonon displacement separately.
Further, again assuming that the smearing dependence of $U$ and $g \super p$ is weak [cf.\@ Eq.~\eqref{eq:approx}] and can be neglected, Eq.~\eqref{eq:gp2g_alt} for $\sigma$ and $T$ can be written as
\begin{align}
\label{eq:gp2g_sigma}
g(\sigma)
&= g \super p
+ g(\sigma) \chi \super{b,A} (\sigma) U,
\\
\label{eq:gp2g_t}
g(T)
&= g \super p
+ g(T) \chi \super{b,A} (T) U,
\end{align}
where we have left all subscripts, summations, and prefactors understood for brevity and the only unknown is $U$.
Inserting the ansatz from Eq.~\eqref{eq:corr2} and Eq.~\eqref{eq:gp2g_sigma} into Eq.~\eqref{eq:gp2g_t}, we obtain
\begin{multline}
\big\{
x(T) g(\sigma) \chi \super{b,A} (\sigma)
- x(T) [g(\sigma) - g \super p] \chi \super{b,A} (T)
\\
- g \super p \chi \super{b,A} (T)
\big\} U = 0,
\end{multline}
where $g \super{(p)} \chi \super{b,A}$ and $U$ are vectors and matrices in the electronic degrees of freedom, respectively.
We equate the expression in curly braces with zero and approximately solve for
\begin{multline}
x_{\vec q \kappa \alpha}(T)
= \sum_{\vec k m n}
\abs{
g \super p \chi \super{b,A} (T)
}_{\vec q \kappa \alpha \vec k m n}
\\
\Big/ \sum_{\vec k m n}
\abs{
g(\sigma) \chi \super{b,A} (\sigma)
- [g(\sigma) - g \super p] \chi \super{b,A} (T)
}_{\vec q \kappa \alpha \vec k m n}.
\end{multline}
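Analogously, the second scheme can be sketched as follows (Python, hypothetical names, arrays over the electronic degrees of freedom for one $\vec q$~point and one displacement $\kappa \alpha$).
\begin{verbatim}
import numpy as np

def x_factor(g_sigma, g_p, chi_sigma, chi_T):
    # mixing factor x_{q kappa alpha}(T) from the equation above
    num = np.sum(np.abs(g_p * chi_T))
    den = np.sum(np.abs(g_sigma * chi_sigma - (g_sigma - g_p) * chi_T))
    return num / den

def g_corr_II(g_sigma, g_p, chi_sigma, chi_T):
    # Eq. (corr2): interpolate between partially screened and screened coupling
    return g_p + x_factor(g_sigma, g_p, chi_sigma, chi_T) * (g_sigma - g_p)
\end{verbatim}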
\begin{figure}
\includegraphics[width=\linewidth]{fig06.pdf}
\caption{Comparison of correction schemes for the screened electron-phonon vertex.
%
We show the same data as in Fig.~\ref{fig:renorm}\,(f), supplemented with corrected results according to Eqs.~\eqref{eq:corr1} and \eqref{eq:corr2}.
%
The inset is a close-up of the framed region including the leading soft modes.}
\label{fig:corr}
\end{figure}
In Fig.~\ref{fig:corr}, we compare the renormalized phonons according to Eq.~\eqref{eq:dp0} with and without the correction of the screened vertex from Eqs.~\eqref{eq:corr1} and \eqref{eq:corr2} for the \emph{optimal} case of a single active band.
Here we stress that the discussed corrections are not suitable for much larger active subspaces because their number of free parameters does not increase with the number of bands.
We find that the simple correction from Eq.~\eqref{eq:corr1} (red dashed lines) reduces the deviation from the reference data by about half on average.
Still, the quality of the correction depends rather strongly on $\vec q$, being more effective at the leading instability near $2/3\,\mathrm M$ than at $\mathrm M$.
In turn, the correction from Eq.~\eqref{eq:corr2} (orange dashed lines), which also takes into account changes in the $\vec k$~dependence of the coupling, yields almost the same accuracy as the method of Eq.~\eqref{eq:d00} with two screened vertices.
However, this correction fails for small momenta since the ansatz of a linear change of the coupling becomes unsuitable as soon as the long-range part of $g \super p$ dominates.
To summarize, it is to some extent possible to correct the screening of the electron-phonon coupling for changes in the electronic temperature, even without performing a full \emph{ab initio} calculation of the electron-electron interaction.
These corrections are however not universally applicable and limit the possibilities for systematic improvements, e.g., there is no obvious way to include an $\omega$~dependence in Eq.~\eqref{eq:corr2}.
Finally, it is important to bear in mind that the error bars from the approximations made in \ac{DFT} are likely as large as, if not larger than, the discussed deviations from the converged \ac{DFPT} calculation.
\subsection{Spectral function}
\label{sec:specfun}
Having convinced ourselves that the approach of Ref.~\citenum{Calandra2010} and the one with the partially screened vertex yield excellent adiabatic results, we will now turn to the nonadiabatic case, $\omega \neq 0$.
Given that Ref.~\citenum{Calandra2010} suggests that error cancellation could be extended to the frequency dependence, we here focus on that approach.
We have also performed the same calculations using the partially screened vertex and obtained very similar results.
With this work, we believe we have settled the debate on which method to use in the static case for practical calculations and have shown that the approach with the bare vertex should not be used.
However, we emphasize that the dynamical case is still an open question for which the community has no reference calculation to compare with, beyond experimental data.
Since the computational time scales approximately quadratically with the number of bands, and since the result in Fig.~\ref{fig:renorm}\,(a) is adequate, we will work with a single active band.
The quantity of interest is the phonon spectral function as defined in Eq.~\eqref{eq:specfun}.
In practice, the imaginary infinitesimal $0^+$ is approximated by two different finite smearing parameters~\cite{Monacelli2021},
\begin{equation}
\label{eq:double_smearing}
G_{\vec q}(T, \omega + \mathrm i 0^+)
\approx \frac
{\mathds 1}
{(\omega + \mathrm i \eta)^2 \mathds 1 - D_{\vec q}(T, \omega + \mathrm i \delta)}.
\end{equation}
While $\delta$ only affects phonon branches with nonzero electron-phonon coupling, $\eta$ broadens all branches equally.
The former must be large enough to ensure that the spectral function is converged with respect to the chosen $\vec k$~mesh, but small enough to avoid artificial frequency shifts.
The latter aids the graphical representation, since it prevents infinitely sharp delta peaks.
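As a minimal Python sketch of this evaluation (our own, with hypothetical names; \texttt{D\_of} is assumed to return the dynamical matrix $D_{\vec q}(T, \omega + \mathrm i \delta)$ as a square array, and we use $-\mathrm{Im}\,\mathrm{Tr}\, G / \pi$ as a simple proxy for the spectral function of Eq.~\eqref{eq:specfun}, whose exact prefactor we do not restate here):
\begin{verbatim}
import numpy as np

def greens(D_of, omega, eta, delta):
    # Eq. (double_smearing): eta broadens all branches equally,
    # delta only enters via the argument of the self-energy
    D = D_of(omega + 1j * delta)
    return np.linalg.inv((omega + 1j * eta) ** 2 * np.eye(len(D)) - D)

def spectral_weight(D_of, omegas, eta=5e-5, delta=2e-3):  # eV, values used below
    # -Im Tr G / pi as a simple proxy for the spectral function
    return np.array([-np.trace(greens(D_of, w, eta, delta)).imag / np.pi
                     for w in omegas])
\end{verbatim}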
\begin{figure}
\includegraphics[width=\linewidth]{fig07.pdf}
\caption{Phonon spectral function of monolayer TaS\s2 together with adiabatic phonon dispersion at an electronic temperature of $T = 1$~meV according to Eqs.~\eqref{eq:d00} and \eqref{eq:double_smearing} for (a)~the undoped case and (b)~the hole-doped case, where the Van Hove singularity is at the Fermi level.}
\label{fig:specfun}
\end{figure}
The results are shown in Fig.~\ref{fig:specfun}, together with the corresponding adiabatic $\omega = 0$ result for comparison.
We used a Fermi-Dirac smearing of $T = 1$~meV in combination with $\eta = 0.05$~meV, $\delta = 2$~meV, and $2000 \times 2000$ $\vec k$~points.
As all results shown above, Fig.~\ref{fig:specfun}\,(a) has been calculated for pristine TaS\s2 without electron doping.
Here, one prominent feature is the discontinuity of long-wavelength optical modes, which separates regions of nonadiabatic phonon hardening and significant broadening on the side of smaller and larger $\vec q$, respectively.
This can be explained by considering which \emph{intraband} electron-hole excitations -- we only deal with a single band -- are allowed:
The range of possible excitation energies is zero for $\vec q = 0$ and fans out into a continuum with increasing $\vec q$.
As soon as $\omega$ falls below the maximum excitation energy, the denominator of the bare susceptibility in Eq.~\eqref{eq:susc} can become arbitrarily small.
Similar features can also be observed in the vicinity of the $\mathrm K$~point.
Traces of these discontinuities extend vertically across the whole frequency range, similar to previous results on n-doped monolayer MoS\s2~\cite{GarciaGoiricelaya2020}.
Beyond that, we find an overall broadening of the branches that couple to the low-energy electronic band, which also have a nonzero renormalization in the adiabatic case [cf.\@ Fig.~\ref{fig:subspaces}\,(f)].
An interesting question is how nonadiabatic renormalization influences the adiabatic Kohn anomalies, i.e., the stability of the system.
In Fig.~\ref{fig:specfun}\,(a), no such effect is visible in the sense that the longitudinal-acoustic mode does not display any significant nonadiabatic renormalization close to the instability.
Indeed, processes in a large energy window of the order of 100~meV contribute to the softening~\cite{Berges2020b}, while nonadiabatic effects occur on an energy scale that is about one order of magnitude smaller.
This changes when hole doping moves the Van Hove singularity -- located at the minimum of the low-energy band along $\Gamma$--$\mathrm K$ [cf.\@ Fig.~\ref{fig:subspaces}\,(a--e)], which is actually a saddle point -- to the Fermi level, where it leads to a logarithmic divergence of the phonon self-energy~\cite{Berges2020b}.
If we realize this situation via a rigid shift by about 0.15~eV of the low-energy band for the calculation of $\Pi \super{00} (T, \omega)$ -- the unscreening via $\Pi \super{00} (\sigma, 0)$ is still done without doping -- we obtain the result in Fig.~\ref{fig:specfun}\,(b).
Most effects seen in the undoped case are even more pronounced here, but we also observe that some spectral weight of the soft modes -- now located at different $\vec q$~points, especially between $\Gamma$ and $\mathrm K$ -- remains in the positive energy range.
While this system is dynamically unstable both with and without nonadiabatic effects, there might be similar scenarios or materials where nonadiabatic damping of the charge-density wave occurs.
\section{Conclusions}
\label{sec:conclusions}
Comparing different approaches to calculate phonon dispersions at low electronic temperature with an affordable computational cost, we have explored the central findings of Ref.~\citenum{Calandra2010}:
First, using a static phonon self-energy with two screened electron-phonon vertices is an excellent approximation, which in particular allows one to work with a constant approximate coupling as obtained from usual \emph{ab initio} calculations.
Second, it is in principle possible to work with a phonon self-energy with one bare -- or partially screened -- vertex~\cite{Giustino2017, Paleari2021a, Paleari2021b, Marini2022}, but this requires precise control of the static screened vertex and is not advantageous in practice, especially when using the bare vertex.
The static results suggest that the cancellation benefit of the approach with two screened vertices with respect to changes in the electronic temperature could be extended to the frequency dependence, but this remains to be definitively proven.
After all, changes in the adiabatic dynamical matrix are in many regards different from the complete phonon self-energy of out-of-equilibrium many-body theory~\cite{Marini2022}.
In this context, the approach with a partially screened vertex could be useful as it allows one to incorporate the frequency dependence not only in the bare susceptibility but also in the active-subspace electron-phonon coupling in a controlled manner.
An important step in this direction would be the affordable and consistent computation of the partially screened electron-electron interaction with all relevant dependences, which occurs in the equations for the renormalization of the electron-phonon vertex [Eq.~\eqref{eq:gp2g_alt}].
We have provided an easy-to-use implementation of consistent screened, partially screened, and bare phonons and electron-phonon interactions from \ac{DFPT} and \ac{cDFPT} in the \textsc{PHonon} and \textsc{EPW} codes of \textsc{Quantum ESPRESSO}.
Here, one technical challenge is the presence of dipolar and quadrupolar long-range terms in the partially screened and bare quantities.
We have shown that the \ac{cDFPT}-based approach is not optimal for the task at hand but will be useful in a context where the double-counting-free formulation is crucial.
This is usually the case when effects beyond \ac{DFT} are relevant or simply when \ac{DFPT} is an improper starting point for a more refined study.
But even then, the stationary functional of Ref.~\citenum{Calandra2010} [Eq.~\eqref{eq:stationary}], where all \ac{DFPT} vertices -- both in the discussed first term and in the double-counting term -- are replaced by properly screened vertices from \ac{RPA} or beyond, can be useful~%
\footnote{Private communication with Francesco Mauri.}.
The fact that the double-counting term is not accessible in \ac{DFPT} and can be successfully circumvented in the discussed correction method does not make it less relevant for the theory.
Without it, the phonon self-energy with two screened vertices will in general be too small, which is important when comparing absolute values rather than differences as done here.
Finally, we remark that we are here limited to the harmonic approximation and that anharmonic effects can be important in materials close to a lattice instability, where the energy landscape is by definition anharmonic~\cite{Leroux2012}, or in systems such as superconducting hydrides, which have attracted a lot of attention recently.
It is likely that not only nonadiabatic but also anharmonic effects are often generated by the low-energy electronic system~\cite{Schobert2021}.
An interesting open question in this context is whether similar properties as the stationary functional can also be derived and taken advantage of for higher-order terms.
\begin{acknowledgments}
We thank (in alphabetical order) Luca Binci, Matteo Calandra, Jae-Mo Lihm, Andrea Marini, Francesco Mauri, Dino Novko, Fulvio Paleari, and Junfeng Qiao for fruitful discussions and Ryotaro Arita and Yusuke Nomura for providing us with their original \ac{cDFPT} source code.
J.B. acknowledges support from the \ac{DFG} under Germany's Excellence Strategy (EXC~2077, No.~390741603, University Allowance, University of Bremen) and Lucio Colombi Ciacchi, the host of the ``U Bremen Excellence Chair Program\rlap,'' as well as computational resources of the \ac{HLRN}.
T.W. acknowledges support from the \ac{DFG} through QUAST (FOR~5249, No.~449872909) and via the Cluster of Excellence ``CUI: Advanced Imaging of Matter'' (EXC~2056, No.~390715994).
S.P. acknowledges support from the F.R.S.-FNRS as well as from the European Union's Horizon 2020 Research and Innovation Programme, under the Marie Sk\l odowska-Curie Grant Agreement (SELPH2D, No.~839217), and computational resources awarded on the Belgian share of the EuroHPC LUMI supercomputer and by the PRACE-21 resources MareNostrum at BSC-CNS.
\end{acknowledgments}
|
\section{Introduction}
\subsection{Background}
Let $\gamma \in (0,2)$ and let $h$ be a Gaussian free field (GFF) defined in some domain $D \subset \mathbb{R}^2$ together with some boundary conditions. Consider the (formal) Riemannian metric tensor
\begin{equation}\label{tensor}
e^{\gamma h(x)}dx^2.
\end{equation}
The tensor \eqref{tensor} gives rise to a random geometry known in physics as (critical) Liouville quantum gravity (LQG); see \cite{cf:Da,DistKa,Nak,KPZ,bourbaki,QLE,mating,diffrag,lbm} for a series of works both within the physics and mathematics literature on the subject. A rigorous construction of the metric space associated to \eqref{tensor} is still an open problem (except in the case $\gamma = \sqrt{8/3}$, where this has very recently been announced in \cite{QLE, LQGandTBMI}), but one can make rigorous sense of \eqref{tensor} in other ways, for example as a measure on a space with a conformal structure. Using these interpretations, \eqref{tensor} has been conjectured to represent (in some sense) the scaling limit of certain decorated random planar maps. There are various ways to formulate this conjecture (in terms of metric space structure, conformal structure, loop structure, etc.) and the loop structure formulation has been recently proved \cite{FKstory1, FKstory2, FKstory3, FKstory4}. The parameter $\gamma$ in \eqref{tensor} is related to the weighting of the planar maps by a given statistical physics model. See for example the surveys \cite{bourbaki, BerestyckiKPZnotes}.
The Liouville measure $ \mu_h$, which is the natural volume form of this metric (e.g., the conjectured limit of the uniform distribution on the vertices of the planar map) was defined in \cite{KPZ} as well as by Rhodes and Vargas in \cite{RhodesVargas}, building on work of H{\o}egh-Krohn, Polyakov, and Kahane \cite{hoeghkrohn, polyakovstrings, kahane}. Another natural object called Liouville Brownian motion (LBM), which is the canonical diffusion in the geometry of LQG, was introduced in \cite{diffrag} as well as by Garban, Rhodes and Vargas in \cite{lbm}.
\subsection{Aim of the paper}
The main purpose of this paper is to analyse the following question: how much of the geometry of Liouville Quantum Gravity is encoded by the Liouville measure $\mu_h$? As we will see, the answer turns out to be {\em everything}, in a precise sense. Beyond the intrinsic appeal of this question, we will see that this result has applications to the question of uniqueness in the main theorem of \cite{mating}.
More generally, much of the emerging theory of LQG concerns certain random fractals $X$, coupled in a certain way with the underlying Gaussian free field $h$. They typically come equipped with a `natural' quantum measure supported on $X$. In this paper we will pay particular attention to the case where $X$ is an independent SLE$_{\kappa}$ curve equipped with its so-called quantum natural parameterisation, or the case where $X$ is the range of a Liouville Brownian motion equipped with its quantum clock, but there are many other examples. It is natural to wonder how much information these measures contain about the underlying field $h$. We conjecture that, as soon as $X$ is harmonically nontrivial, such measures encode \emph{everything} about the restriction of $h$ to $X$ (that is, the harmonic extension of $h$ off $X$), and nothing more. We prove this result in the two cases mentioned above. At a technical level, the SLE case follows from conformal invariance and the estimates used to prove the `full domain' result (meaning Theorem \ref{Theorem:measure determines field}), while the Liouville Brownian motion case requires very different ideas, and in particular relies on properties of nonintersecting planar Brownian motion (including the value of nonintersecting exponents derived by Lawler, Schramm and Werner \cite{half-plane, Brownian-exponent}).
\subsection{First results}\label{sec:main-results}
Let $D$ be a domain of $\mathbb{R}^2$ and let $h$ be a Gaussian free field with zero boundary conditions on $D$.
We can use a regularization procedure to define an area measure on $D$:
\begin{equation} \label{e.mudef}
\mu = \mu_h := \lim_{\varepsilon \to 0} \varepsilon^{\gamma^2/2} e^{\gamma h_\varepsilon(z)}dz,
\end{equation}
where $dz$ is Lebesgue measure on $D$, $h_\varepsilon(z)$ is the mean value of $h$ on the circle~$\partial B(z,\epsilon)$ and the limit represents weak convergence in the space of measures on~$D$. The limit exists almost surely, at least if $\varepsilon$ is restricted to powers of two \cite{KPZ}, and an alternative definition is provided in \cite{RhodesVargas, RV-EJP} using Kahane's theory of Gaussian multiplicative chaos. Note however that Kahane's theory only provides convergence in distribution and hence with that approach it is not immediately clear whether $h$ determines $\mu_h$. This problem was however resolved recently by Shamov \cite{Shamov}, so that the methods of \cite{KPZ, RhodesVargas, RV-EJP} all give equivalent ways of constructing $\mu_h$ from $h$. For a more recent, self-contained and elementary proof, the reader can also consult \cite{BerestyckiGMC}.
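For illustration only (it plays no role in the arguments of this paper), the following is a minimal numerical sketch of the regularization \eqref{e.mudef}: a truncated sine-series sample of the zero boundary GFF on the unit square, a crude circle average $h_\varepsilon$, and the corresponding weighting of Lebesgue measure. All parameter values and helper names are ours and chosen arbitrarily.
\begin{verbatim}
# Minimal numerical sketch of the circle-average regularization of the
# Liouville measure on the unit square (illustration only).
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0          # LQG parameter, any value in (0, 2)
N = 128              # grid resolution
K = 48               # number of Fourier modes per direction (truncation)

x = (np.arange(N) + 0.5) / N
X, Y = np.meshgrid(x, x, indexing="ij")

# Truncated series h = sum_n alpha_n f_n over an orthonormal basis of H^1_0:
# the eigenfunction 2 sin(pi j x) sin(pi k y) has Dirichlet norm
# sqrt(pi (j^2 + k^2) / 2) for the inner product (1/2pi) int grad f . grad g.
h = np.zeros((N, N))
for j in range(1, K + 1):
    for k in range(1, K + 1):
        f = 2.0 * np.sin(np.pi * j * X) * np.sin(np.pi * k * Y)
        h += rng.standard_normal() * f / np.sqrt(np.pi * (j * j + k * k) / 2.0)

def circle_average(field, eps, n_angles=32):
    # Crude circle average h_eps: average of the field over n_angles points
    # of the circle of radius eps around each grid point.  np.roll is used,
    # so the average wraps around at the boundary; fine away from it.
    out = np.zeros_like(field)
    for a in 2.0 * np.pi * np.arange(n_angles) / n_angles:
        di = int(round(eps * field.shape[0] * np.cos(a)))
        dj = int(round(eps * field.shape[0] * np.sin(a)))
        out += np.roll(np.roll(field, di, axis=0), dj, axis=1)
    return out / n_angles

eps = 4.0 / N
h_eps = circle_average(h, eps)
cell = 1.0 / N**2                       # Lebesgue measure of one grid cell
mu_eps = eps ** (gamma**2 / 2) * np.exp(gamma * h_eps) * cell

print("total mass of the approximate measure:", mu_eps.sum())
print("mass of one quadrant:", mu_eps[: N // 2, : N // 2].sum())
\end{verbatim}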
Before taking the limit, it is clear that $ h_\epsilon $ and $ \mu_{h_\epsilon} $ determine each other. After taking the limit, $ \mu_h $ is clearly determined by $ h $, as noted above. But from \cite{KPZ} we know that $ \mu_h $ will almost surely assign full measure to the set of so-called $ \gamma $-thick points of $ h $ (see \cite{SheffieldGFF}). The (Euclidean) Hausdorff dimension of thick points has been computed in \cite{hu2010thick}, and is shown to be equal to $ 2-\frac{\gamma^2}{2} $, almost surely. So one may wonder whether $\mu_h$ still determines $h$, thus determining the quantum surface.
The worry is that $\mu_h$ might only retain information about points which are in some sense exceptional for the field $h$. Fortunately, these points are sufficiently dense and together they contain enough information that we shall be able to determine the field $h$ from the measure $\mu_h$. Our first main result below states this in a very general way.
Note that if $h = h_0 + g$, where $h_0$ is a Gaussian Free Field on $D$, and $g$ is a possibly random continuous function, the Liouville quantum gravity measure $\mu_h$ associated to $h$ is well defined, and is simply the measure having density $e^{\gamma g}$ with respect to $\mu_{h_0}$.
\begin{theorem}\label{Theorem:measure determines field}
Let $ h=h_0+g $ where $h_0$ is a zero boundary $ \GFF $ on a simply connected domain $ D\subset \mathbb{C} $ and $g$ is a random continuous function. Denote by $ \mu_h $ its Liouville quantum measure with parameter $ \gamma\in (0,2) $. Then $ h $ is determined by $ \mu_h $ almost surely. That is, $h$ is measurable with respect to the $\sigma$-algebra generated by $\{\mu_h(A): A \text{ open in }D\}$.
\end{theorem}
\begin{remark}\label{rmk:generality}
Theorem \ref{Theorem:measure determines field} actually covers various types of GFFs (Dirichlet boundary conditions, Neumann boundary conditions, mixed boundary conditions, the whole plane GFF, etc.) via the domain Markov property. Likewise, by absolute continuity, Theorem \ref{Theorem:measure determines field} also covers the quantum surfaces defined in \cite{mating} including quantum cones, wedges, spheres and disks.
\end{remark}
\begin{remark}
If $g \in H_0^1$ is deterministic then $h + g$ is absolutely continuous with respect to $h$ so the theorem is trivially implied by the case $g \equiv 0$. But here we only assume that $g$ is continuous, so $g$ can be much rougher, and moreover $g$ may depend on $h$.
\end{remark}
Before we present a more general setup in Section \ref{subsec::moregeneral}, we briefly explain an application of Theorem \ref{Theorem:measure determines field} to the peanosphere point of view on Liouville quantum gravity developed in \cite{mating}. In the main result of that paper (Theorem 9.1), the authors consider a space-filling variant of SLE$_\kappa'$, $\kappa' = 16/\gamma^2$, on top of a $\gamma$-quantum cone, where the curve $\eta'$ is parametrized by its quantum area (i.e., $\mu_h (\eta'([s,t])) = t - s$ for all $s\le t \in \mathbb{R}$). We refer to \cite{mating} and \cite{Zipper} for the notion of quantum cone, while the space-filling variant of SLE was introduced in \cite{IG4}. The main theorem of \cite{mating} is that the left and right boundary quantum length of the curve $\eta'([0,t])$, relative to time 0, evolve as a certain two dimensional Brownian motion $(L_t,R_t)_{t\in \mathbb{R}}$ whose covariance is given by $\cos(\pi \gamma^2/4)$. (In fact, this formula was only proved for $\gamma \in [\sqrt{2}, 2)$ in \cite{mating}, and the corresponding result for $\gamma \in [0, \sqrt{2})$ is being addressed in a work in progress \cite{covariancestory}.)
In \cite[Chapter 10]{mating}, the authors proved that this pair of Brownian motions in fact determines the quantum measure on the $\gamma$-quantum cone as well as the space-filling SLE almost surely, up to rotations, and used Theorem~\ref{Theorem:measure determines field} of this paper to conclude that this in turn determines the free field $h$ (up to rotations).
\begin{corollary}
In the setting described above, $(L_t,R_t)_{t\in \mathbb{R}}$ determines the field $h$ defining the $\gamma$-quantum cone almost surely (up to rotations). More precisely, $h$ (modulo rotations) is measurable with respect to $(L_t,R_t)_{t \in \mathbb{R}}$.
\end{corollary}
\subsection{A more general setup} \label{subsec::moregeneral}
In this subsection we introduce a general conjecture that in some sense motivates the remainder of the paper. However, we stress that it is not necessary to read this subsection to follow the remainder of the paper.
Recall that a Borel measure on a domain $D$ is {\em locally finite} if every point has a neighborhood of finite measure (or equivalently, if every compact set has finite measure). We will be interested in random pairs $(\sigma, X)$, where $\sigma$ is any (possibly random) locally finite measure on $D$ and $X$ is the (closed) support of $\sigma$. For example, $X$ could be one of the random fractal sets that arise in $\SLE$ theory, and $\sigma$ could be a `natural' fractal measure associated to $X$. Let $h$ be an instance of the GFF on $D$ with some boundary conditions chosen independently from $(\sigma, X)$.
We would now like to describe in some generality how to construct a ``quantum'' version $\mu_{X,h}$ of the measure $\sigma$. Fix $d \in (0,2]$ and assume that $\sigma$ has
finite $(d-\epsilon)$-dimensional energy for all $\epsilon>0$, i.e.,
\begin{equation}\label{dim}
\iint \frac1{| x-y|^{d- \varepsilon}}\sigma(dx) \sigma(dy) < \infty, \,\,\,\,\, \textrm{for all} \,\,\, \varepsilon>0.
\end{equation}
(The reader may recall that, by Frostman's theorem, the Hausdorff dimension of a closed set $X$ is the largest value of $d$ for which there exists a non-trivial measure $\sigma$ on $X$ satisfying \eqref{dim}. In the discussion below, we will not require that $d$ is the dimension of $X$, or that $\sigma$ is in any sense an optimal measure on $X$. Once $\sigma$ is fixed, choosing a smaller $d$ than necessary for \eqref{dim} will in some sense be equivalent to choosing a smaller $\gamma$.)
Now choose $x$ so that $d = 2-2x $. If $d$ happens to be the dimension of $X$ (as will be the case in all of the examples treated in this paper), then $x$ can be understood as the so-called (Euclidean) scaling exponent of $X$. Let $\Delta$ be related to $x$ via the KPZ relation,
\begin{equation}\label{KPZ}
x = \frac{\gamma^2}{4} \Delta^2 + (1- \frac{\gamma^2}{4}) \Delta,
\end{equation}
so $\Delta$ is the quantum scaling exponent associated to the Euclidean exponent $x$. Write $\hat \gamma = \gamma(1-\Delta)$.
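As a small worked illustration (ours, not taken from the references), the quadratic \eqref{KPZ} can be solved for $\Delta\in[0,1]$ given $x$ and $\gamma$, and one can check numerically that the resulting $\hat \gamma = \gamma(1-\Delta)$ indeed stays below $\sqrt{2d}$ when $d = 2-2x$; the values of $\gamma$ and $d$ below are arbitrary.
\begin{verbatim}
# Solve x = (gamma^2/4) Delta^2 + (1 - gamma^2/4) Delta for Delta in [0, 1]
# and form hat_gamma = gamma (1 - Delta).
import numpy as np

def kpz_delta(x, gamma):
    a, b, c = gamma**2 / 4.0, 1.0 - gamma**2 / 4.0, -x
    # root of a*Delta^2 + b*Delta + c = 0 lying in [0, 1]
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

gamma = 1.5
for d in (2.0, 1.5, 1.0, 0.5):        # Euclidean dimension of the carrier set
    x = (2.0 - d) / 2.0               # d = 2 - 2x
    delta = kpz_delta(x, gamma)
    hat_gamma = gamma * (1.0 - delta)
    print(f"d={d:4.2f}  x={x:4.2f}  Delta={delta:.4f}  "
          f"hat_gamma={hat_gamma:.4f}  sqrt(2d)={np.sqrt(2 * d):.4f}")
\end{verbatim}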
Now, by Kahane's theory of multiplicative chaos (as explained, e.g., in Theorem 1.1 in \cite{BerestyckiGMC}) there is a way to define a measure $\mu_{X,h}$ (which depends on $h$ and $\sigma$) that can be formally written as follows:
$$
\mu_{X,h} (dz) = \exp ( \hat \gamma h(z) - \frac{\hat \gamma^2}2 \mathbb{E}(h(z)^2)) \sigma(dz).
$$
We will not explain the details of this construction here. However, we do point out that $\hat \gamma < \sqrt{2d}$, which implies (by the theorem in \cite{BerestyckiGMC}) that the measure $\mu_{X,h}$ is non-trivial and that its support is $X$.
We view $\mu_{X,h}$ as a natural quantum analogue of $\sigma$. An important feature of the definition of $\mu_{X,h}$ in \cite{BerestyckiGMC} is that adding a constant $C$ to $h$ locally multiplies the measure by $e^{\hat \gamma C}$. By contrast, adding $C$ to $h$ locally multiplies the measure $\mu_h$ by $e^{\gamma C}$. In other words, if we rescale the overall $\mu_h$ volume by a factor of $A = e^{\gamma C}$, then we rescale the $\mu_{X,h}$ volume by a factor of $$\hat A = e^{\hat \gamma C} = (e^{\gamma C})^{\hat \gamma/\gamma} = A^{1-\Delta}.$$
This is a way of saying that $\Delta$ is the natural scaling exponent associated to $\mu_{X,h}$.
Another important feature of the definition of $\mu_{X,h}$, also explained in \cite{BerestyckiGMC}, is that a typical point chosen from $\mu_{X,h}$ is the center of a log singularity of magnitude proportional to $\hat \gamma$.
The reader may have wondered why we chose the particular value $\hat \gamma = \gamma(1- \Delta)$ in the definition above. One reason is that (as explained above) it gives a scaling relation that matches the $\Delta$ predicted by KPZ theory. Another (essentially equivalent) reason is the idea (see e.g.\ (63) in \cite{KPZ}) that if one chooses a random small quantum ball conditioned to intersect a $d$-dimensional set, one expects to see a log singularity proportional to $\hat \gamma$ centered at that ball. In many instances, we like to think (heuristically) of $\sigma$ as representing Euclidean measure restricted to $X$ and $\mu_{X,h}$ as representing $\mu_h$ restricted to $X$, so it is natural (at least in these instances) to expect the log singularity at a typical point to be as described above.
We can now formulate the question we have in mind:
\textbf{Question:} To what extent does the measure $\mu_{X, h}$ determine the field $h$?
Clearly $\mu_{X,h}$ can only determine the field $h$ `restricted to $X$' in some sense. The issue of whether the restriction of $h$ to a fractal subset $X$ makes sense is itself not obvious. But if $X$ is any `local' set coupled with $h$ (in particular, if $X$ is any random set independent of $h$) then there is a natural way to define the harmonic extension (to the complement of $X$) of the values of $h$ on $X$ \cite{localsetpaper}. (If $X$ is harmonically trivial, then this extension is just the {\em a priori} expectation of $h$.) We make the following conjecture:
\begin{conjecture}\label{Conj}
In the setting described above, the measure $\mu_{X,h}$ a.s.\ determines the harmonic extension of $h$ off $X$.
\end{conjecture}
We will prove two particular cases of interest of this conjecture, dealing with an independent SLE$_{\kappa}$ and Liouville Brownian motion respectively.
Note that the question makes sense and is interesting in even greater generality, assuming e.g. that $ h$ is a Gaussian log-correlated field in Euclidean space of some given dimension, and $\sigma$ is some given locally finite measure with finite $(d-\varepsilon)$-dimensional energy for some $d$ and for all $\varepsilon>0$.
\subsection{Result in the case of SLE}
Let $h$ be a Gaussian free field on $\mathbb{H}$ with free boundary conditions (alternatively, the reader can also think of the case where $(\mathbb{H}, h)$ is a $\gamma$-quantum wedge if familiar with this notion).
Let $\eta$ be an independent SLE$_\kappa$ curve with $\kappa =\gamma^2$ (we emphasise that this is the standard non-space-filling curve, and in fact here $\kappa<4$
so the curve $\eta$ is simple). We let $X$ be the range of $\eta$, i.e. $X = \eta([0, \infty))$, and equip $X$ with the so-called quantum natural time defined by Theorem 1.3 in \cite{Zipper} (or Theorem 1.8 in the case of the wedge). That is, the measure $\mu_{X,h}(\eta([0,t]))$ is given by the quantum boundary length of $\eta([0,t])$ in either component of $\mathbb{H}\setminus X$ (by Theorem 1.3 of \cite{Zipper}, resp. Theorem 1.8, these measures are indeed a.s. equal). Equivalently, we map $\eta([0,t])$ away using the Loewner map $g_t$ and measure the quantum length on $\mathbb{R}$ of $g_t(\eta([0,t]))$ on either side of 0, that is,
$$
\mu_{X, h } ( \eta([0,t]) ) : = \nu_{h_t} ( [0^-, \xi_t] )
$$
where $\xi_t$ is the driving function of the Loewner equation, $0^-$ is the left-image of 0 by $g_t$, $h_t$ is obtained from $h$ by applying the change of coordinate rule of LQG:
\begin{equation}
\label{eq:coordinate-change}
h_t = h \circ g_t^{-1} + Q \log | (g_t^{-1})'| ; \ \ Q = \frac{\gamma}2 + \frac{2}\gamma,
\end{equation}and if $h$ is a field we denote by
\begin{equation}\label{boundary}
\nu_{h} (dx) = \lim_{\varepsilon \to 0} \varepsilon^{\gamma^2/4} e^{\gamma h_\varepsilon(x) /2} dx;\ \ \ x \in \mathbb{R}
\end{equation}
the boundary length measure associated with $h$ on $\partial \mathbb{H} = \mathbb{R}$. It is easy to check that $\mu_{X,h}$ indeed defines a nonnegative measure supported on $X$. Note that this definition is different from that in Conjecture \ref{Conj}, but we believe that the two notions coincide when $\sigma$ is given by the so-called natural parametrisation of $\eta$ defined in \cite{LawlerSheffield,LawlerZhou,LawlerRezaei}, up to multiplication by a deterministic function related to the conformal radius.
\begin{theorem}
\label{T:SLE}
In the above setup, $\mu_{X,h}$ determines the harmonic extension of $h$ off $X$.
\end{theorem}
\subsection{Result in the case of Liouville Brownian motion}
The second case of interest to us will be the case where $X$ is the range of an independent Brownian motion $(B_t, t \le T_D)$, run until it leaves the domain $D$ for the first time, and $\sigma $ is the occupation measure of $B$ (i.e., $\sigma (A) = \int_0^{T_D} \indic{B_s \in A}ds$ for all Borel sets $A \subset D$). Then it is well known that the dimension of $X$ is almost surely equal to 2 so $x = 0$ and $\Delta =0$ as well. Hence, following the construction of Section \ref{subsec::moregeneral}, the measure $\mu_{X,h}$ is formally given by
$$
\mu_{X, h} (A) = \int_0^{T_D} e^{\gamma h(B_s) - \frac{\gamma^2}{2} \mathbb{E}( h(B_s)^2)} \indic{B_s \in A}ds.
$$
In other words, $X$ is the range of an independent Liouville Brownian motion (as equivalently defined in \cite{diffrag,lbm}) and $\mu_{X,h}$ is the occupation measure of $X$ induced by its \emph{quantum clock}. This is the increasing process $(\phi(t), 0 \le t \le \tau_D)$ such that
$$
\phi(t): = \lim_{\varepsilon \to 0} \varepsilon^{\gamma^2/2}\int_0^t e^{\gamma h_\varepsilon(B_s)} ds.
$$
Liouville Brownian motion is then defined as the process
$$
Z_t := B(\phi^{-1}(t)); \ \ t \le T_D = \phi(\tau_D).
$$
\begin{theorem}\label{thm:LBM-GFF}
Liouville Brownian motion determines the harmonic extension of $h$ off its range. That is, the harmonic extension of $h$ off $X = B[0, T_D]$ is measurable with respect to $(Z_t, t \le T_D)$ (or, equivalently, with respect to $X$ and $(\phi(t), t \le \tau_D)$).
\end{theorem}
The paper is organized as follows. In Section \ref{sec:preliminaries}, we provide some relevant background on the Gaussian free field and prove a useful preliminary estimate. In Section~\ref{sec:LQM-GFF}
we give the proof of Theorem~\ref{Theorem:measure determines field}. Section \ref{sec:SLE} then gives the proof of Theorem \ref{T:SLE} which covers the SLE case. Finally, Section \ref{sec:finite time} contains the proof of Theorem \ref{thm:LBM-GFF}. This is the most technical part of the paper.
\bigskip
\noindent{\bf Acknowledgments.}
We thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for generous support and hospitality during the programme \emph{Random Geometry} where part of this project was undertaken. The first and second authors were partially supported by EPSRC grants EP/GO55068/1 and EP/I03372X/1. The second author was partially supported by a grant and a sabbatical fellowship from the Simons Foundation. The second and third authors were partially supported by NSF Award DMS 1209044.
\section{Preliminaries}\label{sec:preliminaries}
In this section we recall some background on the Gaussian free field (GFF). We focus on the zero boundary GFF since our technical proofs are mainly in the setting of zero boundary GFF.
Let $D$ be a domain in $\mathbb{C}$ with harmonically nontrivial boundary (i.e. the harmonic measure of $\partial D$ is positive as seen from any point in $ D $).
We denote by $H_0(D)$ the Hilbert-space closure of $C_0^\infty( D)$ [the space of compactly supported smooth functions in $D$], equipped with the Dirichlet inner product
\begin{equation}\label{dirichlet prod}
(f,g)_\nabla = \frac{1}{2\pi} \int_D \nabla f(z) \cdot \nabla g(z) \, dz.
\end{equation}
A zero boundary \textit{Gaussian free field} on $D$ is given by the formal sum
\begin{equation}\label{h}
h = \sum_{n=1}^\infty \alpha_n f_n,\quad (\alpha_n)\quad \textrm{i.i.d}\quad N(0,1)
\end{equation}
where $\{f_n\}$ is an orthonormal basis for $H_0(D)$. Although this expansion of $h$ does not converge in $H_0(D)$, it can be shown that convergence holds almost
surely in the space of distributions. See \cite{SheffieldGFF, BerestyckiKPZnotes} for more details.
If $V, V^\perp \subset H_0(D)$ are complementary orthogonal subspaces, then $h$ can be decomposed as the sum of its projections onto $V$ and $V^\perp$. In particular, for a domain $U\subset D$, we can take $V=H_0(U)$ and $V^\perp$ the set of functions in $H_0(D)$ which are harmonic in $U$. This allows us to decompose $h$ as the sum of a zero boundary Gaussian free field on $U$ and a random distribution which is harmonic on $U$, with both terms independent. We call the former field the \emph{projection of $h$ onto $U$}.
We record a lemma which will be used frequently.
\begin{lemma}\label{Lemma:modulus of harmonic extension}
For a simply connected domain $D$, $z\in D$, let $h$ be the zero boundary Gaussian free field on $D$ and $ h^{\har} $ be the projection of $h$ onto the space of functions in $H_0(D)$ that are harmonic inside $B_r(z)$, where $r\le {\rm dist}(z,\partial D)$. For $\varepsilon <r/4$, let
$$
\Delta_\varepsilon =\max\limits_{x\in B_{\varepsilon}(z)}h^{\har}(x) -\min\limits_{x\in B_{\varepsilon}(z)}h^{\har}(x).
$$
Then $ \mathbb{E}[\Delta^2_\varepsilon] \le C(\varepsilon/r)^2 $ where $C$ is a universal constant independent of $\varepsilon,r,z,D$.
\end{lemma}
\begin{proof}
By translation and scaling, we only need to prove the case when $z=0$ and $r=1$.
It suffices to control the gradient of $ h^{\har} $ in $ B_{1/2}(0) $. In the proof of \cite[Lemma 4.5]{KPZ}, the authors show that the minimum of $ h^{\har} $ in $ B_{1/2}(0) $ has a super-exponential tail which is independent of the domain $D$ containing the unit disk.
(In fact, we point out that a simpler proof of that lemma can be obtained using the Borell--Tsirelson inequality for Gaussian processes).
The same is true for the maximum of $ h^{\har} $. In particular, the second moment of $ \| h^{\har} \|_{\infty, B_{1/2}(0) } $ is bounded by a universal constant $C$. By a standard gradient estimate of harmonic functions, $ \| \nabla h^{\har} \|_{\infty,B_{\frac{1}{4}}(0)}\leq C \| h^{\har} \|_{\infty, B_{1/2}(0) } $ where $ C $ is another universal constant. So for $ \varepsilon <1/4 $, we have $$\mathbb{E}[\Delta^2_\varepsilon]\leq C\varepsilon^2 \mathbb{E}[\| \nabla h^{\har} \|^2_{\infty,B_{1/4}(0)}] \le C\varepsilon^2.$$
\end{proof}
\section{Proof of Theorem~\ref{Theorem:measure determines field}: full domain case}\label{sec:LQM-GFF}
We will prove Theorem~\ref{Theorem:measure determines field} by making sense of the statement that $ e^{\gamma h} $ is the Radon-Nikodym derivative of $\mu_h $ with respect to Lebesgue measure. Pick a positive radially symmetric smooth function $ \eta $ which has integral 1 and is supported on the unit disk. Let $ \eta^{\epsilon}(x)=\frac{1}{\epsilon^2}\eta(\frac{x}{\epsilon}) $. We define $ h^{\epsilon} $ by letting
\begin{equation}\label{eq:estimator}
e^{\gamma h^{\epsilon}(x)}=\int_D \eta^{\epsilon}(x-z)d\mu_h(z).
\end{equation}
Then $ \mu_{h^\epsilon} =e^{\gamma h^{\epsilon}}dz$ is the convolution of $ \mu_{h} $ with $ \eta^{\epsilon} $, which is an approximation to $ \mu_h $. Roughly speaking, we will show that $ h^\epsilon -\mathbb{E}[h^\varepsilon]$ converges to $ h $ in probability as $\varepsilon \to 0$. Since $ h^\epsilon $ is determined by $ \mu_h $, $ h $ is determined by $ \mu_h $. We will achieve this via the following two lemmas.
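Before stating the lemmas, here is a minimal numerical illustration of the estimator \eqref{eq:estimator} (ours; it is not part of the argument): on a coarse grid, a box kernel stands in for the smooth mollifier $\eta$, and the recentred $\gamma^{-1}\log$ of the mollified measure is compared with the sampled field.
\begin{verbatim}
# Recover an approximation of the field from the measure alone: mollify mu_h,
# take log, divide by gamma and recentre.  Everything is a coarse sketch.
import numpy as np

rng = np.random.default_rng(1)
gamma, N, K = 1.0, 128, 48

x = (np.arange(N) + 0.5) / N
X, Y = np.meshgrid(x, x, indexing="ij")
h = np.zeros((N, N))                  # truncated sine-series GFF sample
for j in range(1, K + 1):
    for k in range(1, K + 1):
        f = 2 * np.sin(np.pi * j * X) * np.sin(np.pi * k * Y)
        h += rng.standard_normal() * f / np.sqrt(np.pi * (j * j + k * k) / 2)

eps = 1.0 / N                         # grid spacing plays the role of eps
mu = eps ** (gamma**2 / 2) * np.exp(gamma * h) / N**2   # cell masses

def box_mollify(m, w):
    # Discrete analogue of (eta^eps * mu): sum the cell masses over a
    # (2w+1) x (2w+1) box and divide by the Lebesgue area of the box.
    out = np.zeros_like(m)
    for di in range(-w, w + 1):
        for dj in range(-w, w + 1):
            out += np.roll(np.roll(m, di, axis=0), dj, axis=1)
    return out / ((2 * w + 1) ** 2 / N**2)

h_est = np.log(box_mollify(mu, w=4)) / gamma
h_est -= h_est.mean()                 # stands in for subtracting E[h^eps]

inner = np.s_[N // 4 : 3 * N // 4, N // 4 : 3 * N // 4]   # away from boundary
corr = np.corrcoef(h_est[inner].ravel(), h[inner].ravel())[0, 1]
print("correlation between recovered and true field:", round(corr, 3))
\end{verbatim}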
\begin{lemma}[Variance estimate]\label{Lemma:variance}
Suppose $ D $ is a simply connected domain and $ D'=\{ x\in D| {\rm dist}(x,\partial D)>\varepsilon_0 \} $, where $ \varepsilon_0 $ is a fixed constant. Let $h$ be the zero boundary GFF on $D$ and let $ h^\epsilon $ and $ h_\epsilon $ be defined as above. For all $z\in D'$ and $0<\epsilon<\frac{\epsilon_0}{4} $, let $ f_\epsilon(z)=h^\varepsilon(z)-h_\varepsilon(z) $. Then we have
\[
\Var[f_\varepsilon(z)]\leq C\log(\varepsilon_0/\varepsilon),
\]
where $ C $ is a universal constant independent of $D, \varepsilon_0$ and $z$.
\end{lemma}
\begin{lemma}[Covariance estimate]\label{Lemma:covariance}
Let $h$ be the zero boundary $ \GFF $ on $ \mathbb{D} $ and $f_\varepsilon $ be defined as in Lemma \ref{Lemma:variance} for $D=\mathbb{D}$. Then for $ x_1,x_2\in r\mathbb{D} $ and $\varepsilon<|x_1-x_2|/100$,
$$\Cov[f_\varepsilon(x_1),f_\varepsilon(x_2)]\le C_r\frac{\varepsilon}{|x_2-x_1|}\log^{1/2}\frac{|x_1-x_2|}{\varepsilon}$$where $C_r$ only depends on $r\in (0,1)$.
\end{lemma}
Given Lemma \ref{Lemma:variance} and Lemma \ref{Lemma:covariance}, we can get Theorem \ref{Theorem:measure determines field} in the case that $h$ is the zero boundary GFF on $\mathbb{D}$.
\begin{proposition}\label{Proposition: theorem for disk}
If $ h $ is a zero boundary $ \GFF $ on $ \mathbb{D} $, then $ \mu_h $ determines $ h $ almost surely.
\end{proposition}
\begin{proof}
Suppose $\rho$ is a smooth function supported on $ r\mathbb{D}$ where $r<1$. It is sufficient to show that $(h,\rho)$ is measurable with respect to $\mu_h$. We compute
\begin{align*}
&\Var[(f_\varepsilon,\rho)]=\int_{\mathbb{D}\times\mathbb{D}}dxdy \Cov[f_\varepsilon(x),f_\varepsilon(y) ]\rho(x)\rho(y)\\
&=\int_{\{|x-y|>\varepsilon^{1/2}\}}dxdy \Cov[f_\varepsilon(x),f_\varepsilon(y) ]\rho(x)\rho(y)+\int_{\{|x-y|<\varepsilon^{1/2}\}}dxdy\Cov[f_\varepsilon(x),f_\varepsilon(y) ]\rho(x)\rho(y).
\end{align*}
By Lemma \ref{Lemma:covariance}, $$1_{\{|x-y|>\varepsilon^{1/2}\}}\Cov[f_\varepsilon(x),f_\varepsilon(y)]\le C_r\varepsilon^{1/2}\log^{1/2}(\varepsilon^{-1}).$$ Therefore
$$\lim\limits_{\varepsilon\to 0}\int_{\mathbb{D}\times\mathbb{D}}dxdy 1_{\{|x-y|>\varepsilon^{1/2}\}}\Cov[f_\varepsilon(x),f_\varepsilon(y) ]\rho(x)\rho(y)=0.$$
On the other hand $$\int_{\mathbb{D}\times\mathbb{D}}dxdy1_{\{|x-y|<\varepsilon^{1/2}\}}\Cov[f_\varepsilon(x),f_\varepsilon(y) ]\rho(x)\rho(y)\le C\varepsilon\log\frac{r}{\varepsilon}.$$
Therefore $\lim\limits_{\varepsilon\to 0}\Var[(f_\varepsilon,\rho)] =0$. In other words (recalling that $\mathbb{E}((h_\varepsilon, \rho)) = 0$), $ (h^{\varepsilon},\rho)-\mathbb{E}[(h^{\varepsilon},\rho)]- (h_{\varepsilon},\rho) $ tends to 0 in $ L^2 $.
In addition, $ (h_\varepsilon-h,\rho)$ also tends to 0 in $ L^2 $. So $(h^{\varepsilon} , \rho)-\mathbb{E}[(h^{\varepsilon},\rho)]$ tends to $(h,\rho)$ in $ L^2 $.
This implies that the random variable $(h,\rho)$ is measurable with respect to $ \mu_h $.
So far, we have proved that for every smooth function $\rho$ compactly supported in $\mathbb{D}$, $(h,\rho)$ is measurable with respect to $\mu_h$, which means that $h$ is almost surely determined by $\mu_h$.
\end{proof}
With this in hand it is not hard to get a proof of Theorem~\ref{Theorem:measure determines field} (we only need to add to $h$ a part which is a continuous function).
\begin{proof}[Proof of Theorem \ref{Theorem:measure determines field}]
We first assume $D=\mathbb{D}$, $ h_0 $ is an instance of a zero boundary $ \GFF $ on $D$ and $ g=h-h_0$ is the random continuous function in Theorem \ref{Theorem:measure determines field}. Let $\mu_{h_0}$ be the Liouville quantum measure of $h_0$. Define $h^\varepsilon$ and $h^\varepsilon_0$ by
\begin{align*}
e^{\gamma h^{\epsilon}}=\int_D \eta^{\epsilon}(x-z)d\mu_h(z),\\
e^{\gamma h_0^{\epsilon}}=\int_D \eta^{\epsilon}(x-z)d\mu_{h_0}(z).
\end{align*}
|
n,m)$ in three dimensions, for which a
reconstruction is not possible by simple dimensional analysis.
\end{fig}
\begin{thm}[Ullman theorem in three dimensions for 3 points]
In three dimensions, three orthographic pictures of three noncollinear points
determine both the points and camera positions up to finitely many reflections.
The correspondence is locally unique.
\end{thm}
We assume that the three planes are different and that the three points are different.
Otherwise, we would have a situation with $m<3$ or $n<3$, where finding the inverse is not possible.
If the normals to the planes are coplanar, that is when the three planes go through
a common line after some translation, then the problem can be reduced to the
two-dimensional Ullman problem.
\begin{center}
\parbox{6.2cm}{\scalebox{0.50}{\includegraphics{ullman/ullman3dpaper.ps}}}
\end{center}
\begin{fig}
The setup for the structure of motion problem with three orthographic cameras and three points
in three dimensions. One point is at the origin, one camera is the $xy$-plane. The problem is to find
the $z$-coordinates of the two points as well as the three Euler angles for each camera from the
projections onto the planes.
\end{fig}
Because Ullman stated his theorem with 4 points and this result is cited so widely
\cite{HoffmanBennett1985,HoffmanBennett1985a,HuAhuja1993,
Bennetetall1993,Koenderink,HoffmanBennett1994,HuAhuja1995,pritt},
we give more details of the proof of Ullman for 3 points. The only
reason to add a 4th point is to reduce the number of ambiguities from typically $64$ to
$2$. We will give explicit solution formulas which provide an explicit reconstruction
in the case of $3$ points. One could write down explicit algebraic expressions for the
inverse.
\begin{proof}
Again we choose a coordinate system so that one of the cameras is the
$xy$-plane with the standard basis $q_0,p_0$. One of the three points
$P_1=O$ is fixed at the origin.
The problem is to find two orthonormal frames $p_j,q_j$ in space spanning two planes
$S_1$ and $S_2$ through the origin and two points $P_2,P_3$
from the projection data
\begin{equation}
\label{ullman3d}
a_{ij} = P_i \cdot q_j, b_{ij} = P_i \cdot p_j \; .
\end{equation}
The camera $j$ sees the point $P_i$ at the
position $(a_{ij},b_{ij})$. Because an orthonormal 2 frame needs 3 parameters
$(\theta_i,\phi_i,\gamma_i)$ and each point in space has $3$ coordinates, there are
$2 \cdot 3 + 2 \cdot 3=12$ unknowns and 12 equations $a_{ij} = P_i \cdot q_j$ and $b_{ij} = P_i \cdot p_j$,
$i=1,2, j=0,1,2$. Because the projection to the $xy$ plane is known, there
are 4 variables which can be read off directly. We are left with a nonlinear system of $8$ equations and
$8$ unknowns $(z_1,z_2,\theta_1,\phi_1,\gamma_1,\theta_2,\phi_2,\gamma_2)$. Just plug in
$$p_j = \left[ \begin{array}{c} \cos(\gamma_j) \cos(\theta_j)-\cos(\phi_j) \sin( \gamma_j) \sin(\theta_j) \\
-\cos(\phi_j) \cos(\theta_j) \sin(\gamma_j) - \cos(\gamma_j) \sin( \theta_j) \\
\sin(\gamma_j)\sin(\phi_j)
\end{array} \right] $$
$$ q_j = \left[ \begin{array}{c} \cos(\theta_j)\sin(\gamma_j)+\cos(\gamma_j)\cos(\phi_j)\sin(\theta_j) \\
\cos(\gamma_j)\cos(\phi_j)\cos(\theta_j)-\sin(\gamma_j)\sin(\theta_j) \\
-\cos(\gamma_j)\sin(\phi_j) \end{array} \right]
$$
and $P_i = (x_i,y_i,z_i)$ into equations~(\ref{ullman3d}). The determinant of the Jacobian matrix can
be computed explicitly. It is a polynomial in the $2$ unknown position variables $z_1,z_2$ and a trigonometric
polynomial in the $6$ unknown camera orientation parameters $\theta_1,\phi_1,\gamma_1,\theta_2,\phi_2,\gamma_2$:
\begin{eqnarray*}
\det(J) &=& \sin^2(\phi_1) \\
&\cdot& \sin^2(\phi_2) \\
&\cdot& (A \cos(\theta_1)+B \sin(\theta_1)) \\
&\cdot& (A \cos(\theta_2)+B \sin(\theta_2)) \\
&\cdot& (\cos(\phi_2)\sin(\phi_1)(A\cos(\theta_1)+B\sin(\theta_1)) \\
&& + \sin(\phi_2)(D \sin(\phi_1)\sin(\theta_1-\theta_2)+\cos(\phi_1)(C \cos(\theta_2)-B \sin(\theta_2)))) \; ,
\end{eqnarray*}
where
$$ A = (y_2 z_1-y_1 z_2), B = (x_2 z_1-x_1 z_2), C = (y_1 z_2-y_2 z_1), D = (y_1 x_2-x_1 y_2) \; . $$
In general, this determinant is nonzero and by the implicit function theorem, the reconstruction
is locally unique. \\
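The local uniqueness can also be checked numerically; the following short sketch (ours; it uses a random configuration and finite differences rather than the symbolic determinant above) builds the map from the $8$ unknowns to the $8$ unknown image coordinates and verifies that its Jacobian is nonsingular for a generic configuration.
\begin{verbatim}
import numpy as np

def frame(theta, phi, gam):
    # Orthonormal frame (p, q) of a camera plane, parametrized by the Euler
    # angles exactly as in the displayed formulas for p_j and q_j above.
    p = np.array([np.cos(gam)*np.cos(theta) - np.cos(phi)*np.sin(gam)*np.sin(theta),
                  -np.cos(phi)*np.cos(theta)*np.sin(gam) - np.cos(gam)*np.sin(theta),
                  np.sin(gam)*np.sin(phi)])
    q = np.array([np.cos(theta)*np.sin(gam) + np.cos(gam)*np.cos(phi)*np.sin(theta),
                  np.cos(gam)*np.cos(phi)*np.cos(theta) - np.sin(gam)*np.sin(theta),
                  -np.cos(gam)*np.sin(phi)])
    return p, q

rng = np.random.default_rng(0)
x1, y1, x2, y2 = rng.standard_normal(4)   # known projections onto camera 0

def F(u):
    # u = (z1, z2, theta1, phi1, gam1, theta2, phi2, gam2) -> the 8 image
    # coordinates of the two unknown points in cameras 1 and 2.
    P = [np.array([x1, y1, u[0]]), np.array([x2, y2, u[1]])]
    out = []
    for j in (0, 1):
        p, q = frame(*u[2 + 3*j : 5 + 3*j])
        for Pi in P:
            out += [Pi @ q, Pi @ p]
    return np.array(out)

u0 = rng.uniform(0.2, 1.0, size=8)        # generic configuration
eps = 1e-6
J = np.zeros((8, 8))
for k in range(8):
    du = np.zeros(8); du[k] = eps
    J[:, k] = (F(u0 + du) - F(u0 - du)) / (2 * eps)
print("det(J) =", np.linalg.det(J))       # generically nonzero
\end{verbatim}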
The main idea (due to Ullman) for the actual inversion is to first find vectors
$u_{ij}$ in the intersection lines of the three planes. For every pair $(i,j)$
of two cameras, the intersection line can be expressed in two ways:
$$ \alpha_{ij} p_i + \beta_{ij} q_i = \gamma_{ij} p_j + \delta_{ij} q_j \; . $$
The projections of the two points produce the equations
$$ \alpha_{ij} p_i \cdot P_k + \beta_{ij} q_i \cdot P_k = \gamma_{ij} p_j \cdot P_k + \delta_{ij} q_j \cdot P_k \; . $$
Because $a_{ik} = p_i \cdot P_k, b_{ik} = q_i \cdot P_k$ are known, these are $2$ equations
for each of the three pairs of cameras, in the 4 unknowns
$\alpha_{ij},\beta_{ij},\gamma_{ij},\delta_{ij}$. Because additionally $\alpha_{ij}^2+\beta_{ij}^2=1$,
$\gamma_{ij}^2 + \delta_{ij}^2=1$, the values of $\alpha_{ij},\beta_{ij},\gamma_{ij},\delta_{ij}$ are
determined. \\
On page 194 of the book \cite{Ullman}, only 4 equations are needed, not
5 as stated there, to solve for the intersection lines of the planes. With 5 equations
the number of ambiguities is reduced. Actually, the Ullman equations with 4 equations
have finitely many additional solutions which do not correspond to point-camera
configurations. They can be detected by checking what projections they produce. \\
We aim to find vectors $(\alpha_{ij},\beta_{ij})$ in the plane $i$ and coordinates
$(\gamma_{ij},\delta_{ij})$ in the plane $j$
in the intersections of each pair $(i,j)$ of photographs.
Taking the dot products with the two points $P_1,P_2$ gives the equations
\begin{eqnarray}
\label{ullmanequations}
\alpha_{ij} u_{i1} + \beta_{ij} v_{i1} &=& \gamma_{ij} u_{j1} + \delta_{ij} v_{j1} \\
\alpha_{ij} u_{i2} + \beta_{ij} v_{i2} &=& \gamma_{ij} u_{j2} + \delta_{ij} v_{j2} \\
\alpha_{ij}^2 + \beta_{ij}^2 = 1 &,& \gamma_{ij}^2 + \delta_{ij}^2 = 1 \; .
\end{eqnarray}
They can be solved explicitly, although the formulas given by the computer algebra system
are very complicated and contain hundreds of thousands of terms.
Each of the above equations is of the form
$$ a x + b y = c u + d v, e x + f y = g u + h v, x^2+y^2=u^2+v^2=1 \; . $$
Geometrically, it is the intersection of two three dimensional planes and two three dimensional cylinders in
four dimensional space. From the first two equations, we have
$$ x = A u + B v, y = F u + G v \; . $$
By writing $u = \cos(t), v = \sin(t)$ in the equation $x^2+y^2=1$ and replacing $\cos^2(t) = (\cos(2 t) + 1)/2$,
$\sin^2(t) = (1-\cos(2 t))/2$, $\sin(t) \cos(t) = \sin(2t)/2$, we get a quadratic equation for $\cos(2t)$
which has the solution
$$ \cos(2 t) = \frac{-(S T + W \sqrt{S^2-T^2+W^2})}{S^2+W^2} $$
with $U = (A^2+F^2)/2$, $V = (B^2+G^2)/2$, $W = A B + F G$, $S = U-V$, $T = U+V-1$. We see that there are
8 solutions to the equations~(\ref{ullmanequations}). Four of these solutions are
solutions for which
$\alpha_{ij} p_i + \beta_{ij} q_i - \gamma_{ij} p_j - \delta_{ij} q_j$ is perpendicular to the plane containing
the three points. These solutions do not solve the reconstruction problem and these branches of the algebraic
solution formulas are discarded.
There are $4$ solutions to each Ullman equation which lead to solutions to the reconstruction problem. \\
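The reduction can also be carried out numerically. The following sketch (ours) builds a random instance of the system $ax+by=cu+dv$, $ex+fy=gu+hv$, $x^2+y^2=u^2+v^2=1$ that is known to be solvable, substitutes $u=\cos(t)$, $v=\sin(t)$, and locates the solutions by a scan and bisection instead of the closed-form quadratic for $\cos(2t)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
t0 = rng.uniform(0.3, 2.8)                  # sin(t0) bounded away from 0
s0 = rng.uniform(0.0, 2.0 * np.pi)
x0, y0 = np.cos(s0), np.sin(s0)             # a known solution (x, y)
u0, v0 = np.cos(t0), np.sin(t0)             # and (u, v)
a, b, c, e, f, g = rng.standard_normal(6)
d = (a * x0 + b * y0 - c * u0) / v0         # force the instance to be solvable
h = (e * x0 + f * y0 - g * u0) / v0

# (x, y) = M @ (u, v) solves the two linear equations for any (u, v)
M = np.linalg.solve(np.array([[a, b], [e, f]]), np.array([[c, d], [g, h]]))

def residual(t):
    x, y = M @ np.array([np.cos(t), np.sin(t)])
    return x * x + y * y - 1.0

# locate sign changes of the residual on a fine grid, refine by bisection
ts = np.linspace(0.0, 2.0 * np.pi, 4001)
r = np.array([residual(t) for t in ts])
roots = []
for i in np.nonzero(r[:-1] * r[1:] < 0)[0]:
    lo, hi = ts[i], ts[i + 1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    roots.append(0.5 * (lo + hi))

print("number of solutions found:", len(roots))
for t in roots:
    u, v = np.cos(t), np.sin(t)
    x, y = M @ np.array([u, v])
    print(f"x={x:+.4f} y={y:+.4f} u={u:+.4f} v={v:+.4f} "
          f"|x^2+y^2-1|={abs(x*x+y*y-1):.1e}")
\end{verbatim}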
Assume we know the three intersection lines in each plane. Because the ground camera plane is fixed, we know
two of the intersection lines. Let's denote by $U$ and $V$ the unit vectors in those lines. We have to find
only the third intersection line which contains a unit vector $X$.
This vector $X=(x,y,z)$ can be obtained by intersecting two cones.
Mathematically, we have to solve the system $X \cdot U = r, X \cdot V = s, |X|=1$.
This leads to elementary expressions by solving a quadratic equation. \\
Once we know the intersection lines, we can get the points
$P_1,P_2$ by finding the intersection of the normal lines to the image points in the photographs. \\
The Ullman equations have at most $4$ solutions. Because there are three
intersection lines we expect $4^3=64$ solutions in total in general. \\
If the normals to the cameras are coplanar, the problem reduces to a
two-dimensional problem by rotating the coordinate system so that the
intersection line is the $z$-axis. This situation is what Ullman calls the
{\bf degenerate case}. After finding the intersection line, we are
directly reduced to the two-dimensional Ullman problem.
\end{proof}
The fact that there are solutions to the Ullman equation
which do not lead to intersection lines of photographic planes could have been
an additional reason for Ullman to add a 4th point. Adding a 4th point reduces the
number of solutions from 64 to 2 if the four points are noncoplanar, but it makes
most randomly chosen projection data unreconstructable. With three points, there is
an open, algebraically defined set for which a reconstruction is not possible and
an open, algebraically defined set on which the reconstruction is possible and
locally unique. The boundary of these two sets is the image of the set
${\rm det}(F)=0$.
\begin{center}
\parbox{16.2cm}{\scalebox{1.20}{\includegraphics{ullman3d/allsolutions.ps}}}
\end{center}
\begin{fig}
64 solutions to the reconstruction problem in a particular case.
\end{fig}
\section{When is the reconstruction possible?}
Suppose we are given three photographs, each showing three points. As usual, we know which points correspond.
How can we decide whether there is a point-camera configuration which realizes this picture?
Of course, we have explicit formulas, but they do not illustrate the geometry very well. \\
Define for two complex numbers $A,B$ the interval $I(A,B)$ of possible angles
$$ {\rm arg} (\frac{ e^{i \theta} - A }{ e^{i \theta} - B } ) \; , $$
where $\theta \in [0,2\pi)$.
\begin{center}
\parbox{6.2cm}{\scalebox{0.50}{\includegraphics{ullman/complex.ps}}}
\end{center}
\begin{fig}
The range of angles $I(A,B)$.
\end{fig}
The following lemma deals with the equations which determine the intersection
lines of the camera planes.
\begin{lemma}
The equations
$$ a x + b y = c u + d v, e x + f y = g u + h v, x^2+y^2=u^2+v^2=1 $$
can be solved for the unknowns $x,y,u,v$ for any values of $a,b,c,d,e,f,g,h$
for which
$$ {\rm arg}(\frac{c+id}{g+ih}) \in I(\frac{c+id}{a+ib},\frac{g+ih}{e+if}) $$
\end{lemma}
\begin{proof}
Define $p=a+ib, q=c+id, r=e+i f, s=g+i h$.
We look for two complex numbers $z=x-iy,w=u-iv$ of modulus $1$ such that
${\rm Re}(z p) = {\rm Re}( w q), {\rm Re}(z r) = {\rm Re}( w s)$. Therefore
${\rm arg}(z p - w q) = \pi/2, {\rm arg}(z r - w s) = \pi/2$. With $z=e^{i \theta},w=e^{i \phi}$,
this defines two curves on the torus. The solutions are the intersection points.
If ${\rm arg}(q/s) \in I(q/p,s/r)$, there is a solution to the problem.
\end{proof}
\section{Final remarks}
{\bf Explicit implementations}. \\
We have implemented the reconstruction explicitly in Mathematica 6, a computer algebra
system in which it is now
possible to manipulate graphics parameters. We have programs which invert the
nonlinear equations on the spot, both in two and three dimensions. \\
\begin{center}
\parbox{16.8cm}{
\parbox{8.2cm}{\scalebox{0.40}{\includegraphics{ullman2d/demo.ps}}}
\parbox{8.2cm}{\scalebox{0.40}{\includegraphics{ullman3d/demo.ps}}}
}
\end{center}
\begin{fig}
Interactive demonstration of the reconstruction in two
and three dimensions with Mathematica. The user can change each of the
image parameters and the computer reconstructs the cameras and the points.
We will have these programs available on the Wolfram Demonstrations Project.
\end{fig}
{\bf Higher dimensions}. \\
How many points are needed in $d$ dimensions for $3$ orthographic cameras to locally have a unique
reconstruction? In $d$ dimensions, an orthographic camera has $f=d(d-1)/2 + (d-1)$ parameters
and the global Euclidean symmetry group has dimension $g = d + d (d-1)/2$. The dimension relations are
\begin{eqnarray*}
n d + m f &=& (d-1) n m + g \\
f &=& d(d-1)/2 + (d-1) \\
g &=& d + d (d-1)/2 \; .
\end{eqnarray*}
This gives \\
\begin{tabular}{|l|llll|} \hline
dimension & $n(m)$ & $n(2)$ & $n(3)$ & $n(4)$ \\ \hline
dim=2: & $n = (2m-3)/(m-2)$ & - & $3$ & $3$ \\ \hline
dim=3: & $n = (5m-6)/(2m-3)$ & $4$ & $3$ & $3$ \\ \hline
dim=4: & $n = (9m-10)/(3m-4)$ & $4$ & $4$ & $4$ \\ \hline
\end{tabular}
\vspace{1cm}
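The table can be reproduced directly from the dimension count above; the following small script (ours) solves the first relation for $n$ and rounds up.
\begin{verbatim}
# n(m): number of points needed for m orthographic cameras in dimension d,
# from n*d + m*f = (d-1)*n*m + g, f = d(d-1)/2 + (d-1), g = d + d(d-1)/2.
from math import ceil

def n_of_m(d, m):
    f = d * (d - 1) // 2 + (d - 1)
    g = d + d * (d - 1) // 2
    denom = (d - 1) * m - d
    return None if denom <= 0 else ceil((m * f - g) / denom)

for d in (2, 3, 4):
    print(f"dim={d}:", [n_of_m(d, m) for m in (2, 3, 4)])
# dim=2: [None, 3, 3]   dim=3: [4, 3, 3]   dim=4: [4, 4, 4]
\end{verbatim}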
In any dimension, there is always a reflection ambiguity. \\
{\bf Other cameras}. \\
The structure from motion problem can be considered for many other camera types. The most common is the
pinhole camera, a perspective camera. In that case, two views and 7 points are enough to determine
structure from motion locally uniquely, if the focal parameter is kept the same in both shots and
needs to be determined too.
We have studied the structure from motion problem for spherical cameras in detail in the paper
\cite{KnillRamirezOmni} and shown for example that for
three cameras and three points in the plane a {\bf unique} reconstruction is possible if both the camera and point
sets are not collinear and the 6 points are not in the union of two lines.
This uniqueness result can be proven purely geometrically using Desargues' theorem and is sharp: weakening any
of the three premises produces ambiguities, where the two line ambiguity was the hardest to find. \\
{\bf Other fields}. \\
The affine structure from motion problem can be formulated over other fields too, and not
only over the field of reals $k={\bf R}$ or complex numbers $k={\bf C}$.
The space $S$ is a $d$-dimensional vector space over some field $k$.
A camera is a map $Q$ from $S$ to a $(d-1)$-dimensional linear
subspace satisfying $Q^2=Q$. A point configuration $\{ P_1,P_2,...,P_n \; \}$
and a camera configuration $\{Q_1, \dots, Q_m \; \}$
define image data $Q_j(P_i)$. The task is to reconstruct
from these data the points $P_i$ and the cameras $Q_j$.
If the field $k$ is finite, the structure from motion problem is a problem in a finite
affine geometry. If the inversion formulas derived over the reals make sense in that field, then they
produce solutions to the problem. ``Making sense'' depends, for example, on whether we can take square
roots. We might ask the field $k$ to be algebraically closed so that
a reconstruction is possible for all image data. \\
{\bf A question}. For orthographic cameras in the plane, the only ambiguity is a reflection.
One can extend the global symmetry group $G$ so that the map $F$ becomes injective. Can one extend
the group in three dimensions also to make the structure from motion map $F$ globally injective?
To answer this, we would need to understand better the structure of the finite set
$F^{-1}(a)$ if $a$ is in the image of $F$.
\vspace{12pt}
\bibliographystyle{plain}
|
\section{Introduction} \label{Intro}
For each positive integer $d$, we denote by $M_d^{GH}$ the \emph{Gromov-Hausdorff compactification} of the moduli space of degree $d$ K\"ahler-Einstein Del Pezzo surfaces, and denote by $M_d^0$ the dense subset that parametrizes those smooth surfaces. It is well-known that for $d\geq 5$ there are no moduli, so in this paper we will always assume $d\in \{1, 2, 3,4\}$.
By Tian-Yau \cite{TY} we know that $M_d^0$ is at least a non-empty set. By general theory, $M_d^{GH}$ is a compact Hausdorff space under the Gromov-Hausdorff topology. By \cite{An, BKN, Tian1}, points in $M_d^{GH}\setminus M_d^0$ are certain K\"ahler-Einstein log Del Pezzo surfaces, and by a famous theorem of Tian \cite{Tian1}, every smooth Del Pezzo surface admits a K\"ahler-Einstein metric
so that it is actually parametrized in $M_d^0$.
In this paper for each $d$ we identify $M_d^{GH}$ with a certain explicit algebro-geometric moduli space of log Del Pezzo surfaces. The latter is
a compact analytic Moishezon space $M_d$, which, roughly speaking,
parametrizes isomorphism classes of certain $\Q$-Gorenstein smoothable log Del Pezzo surfaces of degree $d$.
Notice that such geometric compactifications of the moduli variety are not necessarily unique in general, while on the other hand the Gromov-Hausdorff compactification is clearly canonical, but its definition is very non-algebraic in nature.
The following main theorem of the present article builds a bridge between the two notions of moduli spaces.
\begin{thm}\label{MT}
For each integer $d$, there is a compact moduli algebraic space \footnote{For $d\neq 1$, it follows from the construction that $M_d$ is actually a \emph{projective variety}.} $M_d$, that will be constructed explicitly in later sections, and a homeomorphism $$\Phi\colon M_d^{GH} \rightarrow M_d,$$ such that $[X]$ and $\Phi([X])$ parametrize isomorphic log Del Pezzo surfaces for any $[X]\in M_d^{GH}$. Moreover, $M_d$ contains a Zariski open subset which parametrizes all smooth degree $d$ Del Pezzo surfaces.
\end{thm}
\noindent
For the precise formulation, see Section \ref{Moduli space}.
Theorem \ref{MT} immediately implies the above mentioned theorem of Tian, and also classifies all degenerations of K\"ahler-Einstein Del Pezzo surfaces, a problem which was posed in \cite{Tian2}. When $d=4$ Theorem \ref{MT} was proved by Mabuchi-Mukai \cite{MM}, and we shall provide a slightly different proof based on our uniform strategy. For other degrees, there have been partial results by \cite{Chel}, \cite{CW}, \cite{GK}, \cite{Shi}, \cite{Wang} on the existence of K\"ahler-Einstein metrics on some canonical Del Pezzo surfaces, by calculating the $\alpha$-invariant.
A minor point is that the Gromov-Hausdorff topology defined here is slightly different from the standard definition, in that we also remember the complex structure when we talk about convergence. See \cite{Spotti} and Section \ref{DG input} for a related discussion on this. The standard Gromov-Hausdorff compactification is homeomorphic to the quotient of $M_d$ by the involution which conjugates the complex structures.
For the proof of Theorem \ref{MT}, we do not need to assume the existence of K\"ahler-Einstein metrics on all the smooth Del Pezzo surfaces as proved in \cite{Tian1}. The only assumption which we need, and which has been originally proved by Tian-Yau \cite{TY}, is the following:
\begin{hyp}\label{Tian-Yau}
For each $d\in\{1,2,3,4\}$, $M_d^0$ is non-empty.
\end{hyp}
Given this, the main strategy of proving Theorem \ref{MT} is as follows:
\begin{enumerate}
\item For each $d$, we construct a natural moduli variety $M_d$ with a Zariski open subset $M_d^{\text{sm}}$ parametrizing all smooth degree $d$ Del Pezzo surfaces. Moreover, there is a well-defined continuous map $\Phi\colon M_d^{GH}\rightarrow M_d$, where we use the Gromov-Hausdorff distance in the domain and the local analytic topology in the target, so that $[X]$ and $\Phi([X])$ parametrize isomorphic log Del Pezzo surfaces for any $[X]\in M_d^{GH}$.
\item $\Phi$ is injective. This follows from the uniqueness theorem of Bando-Mabuchi \cite{BM} and its extension to orbifolds.
\item $\Phi$ is surjective. This follows from the fact that the image of $\Phi$ is open in $M_d^{\text{sm}}$ (by the implicit function theorem, see for example \cite{LS}) and closed in $M_d$ (by the continuity of $\Phi$ in (1)).
\item Since $M_d^{GH}$ is compact and $M_d$ is Hausdorff, $\Phi$ is a homeomorphism.
\end{enumerate}
The main technical part lies in Step (1). For this we need first to investigate Gromov-Hausdorff limits of K\"ahler-Einstein Del Pezzo surfaces, and then construct a moduli space that includes all the possible limits. The difficulty increases as the degree goes down. When $d=3, 4$ we take the classical GIT moduli space on the anti-canonical embedding. For $d=2$ we take the moduli space constructed in \cite{Mukai} (based on Shah's idea \cite{Shah} which blows up a certain GIT quotient). For $d=1$ we need to combine Shah's method with further modifications suggested by the differential geometric study of Gromov-Hausdorff limits. As far as we are aware, this moduli space is new. We should mention that in the last two cases, $M_d$ (and thus $M_d^{GH}$) contains points that parametrize non-canonical log Del Pezzo surfaces. This disproves a conjecture of Tian in \cite{Tian1}, see Remark \ref{Tian conjecture}. We also remark that Gromov-Hausdorff limits of K\"ahler-Einstein Del Pezzo surfaces were first studied by Tian in \cite{Tian1}, but as we shall see there are some inaccuracies in \cite{Tian1}, see Remark \ref{remark BG} and Example \ref{degree1 toric}.
Finally we remark that for each $d\in \{1,2,3,4\}$, it is easy to find explicit examples of singular degree $d$ $\Q$-Gorenstein smoothable K\"ahler-Einstein log Del Pezzo surface by a global quotient construction (see the examples in later sections). Thus one way to avoid assuming Hypothesis \ref{Tian-Yau} would be to find a smooth K\"ahler-Einstein Del Pezzo surface by a gluing construction. For example, it has been proved in \cite{Spotti} that for a K\"ahler-Einstein log Del Pezzo surface with only nodal singularities and discrete automorphism group, one can glue model Eguchi-Hanson metrics to obtain nearby K\"ahler-Einstein metrics in the smoothing. This can be applied when $d=3$, since the Cayley cubic (see Section \ref{Degree34}) satisfies these assumptions.
The organization of this paper is as follows. In Section \ref{DG input} we collect the main results that we need on the structure of Gromov-Hausdorff limits, focusing on the two dimensional case. In Section \ref{AG input} we make an algebro-geometric study of the Gromov-Hausdorff limits, and define precisely the notion of moduli spaces that we use in this paper. Then we reduce the proof of Theorem \ref{MT} to the construction of moduli spaces in each degree. In later Sections we treat the cases $d\geq 3$ and $d\leq 2$ separately. We also investigate the relation with moduli space of curves, in
subsections \ref{curve.2} and \ref{curve.1}. In Section \ref{K moduli} we give some further discussion. \\
\textbf{Notation}:
A \emph{Del Pezzo surface} is a smooth projective surface with ample anti-canonical bundle. A \emph{log Del Pezzo surface} is a normal projective surface with quotient singularities (or equivalently, with log terminal singularities) and ample anti-canonical divisor. For a log Del Pezzo surface $X$, its degree $\deg(X)$ is the intersection number $K_X^2$. In general dimensions, a {\textit{$\mathbb{Q}$-Fano variety}} means a normal projective variety with log terminal singularities and with $-rK_X$ ample for some positive integer $r$. Smallest such $r$ will be called {\textit{index}} or
{\textit{Gorenstein index}}. \\
\textbf{Acknowledgements}: This work is motivated by the PhD Thesis of the second named author under the supervision of Professor Simon Donaldson. We would like to thank him for great support. We would also like to thank Professors Claudio Arezzo,
Paolo Cascini, Ivan Cheltsov, Xiuxiong Chen, Mark Haskins,
David Hyeon, Alexander Kasprzyk, Radu Laza,
Yongnam Lee, Shigeru Mukai, Hisanori Ohashi, Shingo Taki and Bing Wang for helpful discussions and encouragements. S.S. is partly funded by European Research Council award No 247331.
\section{General results on the Gromov-Hausdorff limits}\label{DG input}
The main differential geometric ingredient involved in the proof of the main theorem is the study of the structure of Gromov-Hausdorff limits of K\"ahler-Einstein Del Pezzo surfaces.
The following orbifold compactness theorem is well-known.
\begin{prop}[\cite{An}, \cite{BKN}, \cite{Tian1}]\label{orbifold compactness}
Given a sequence of degree $d$ K\"ahler-Einstein Del Pezzo surfaces $(X_i, \omega_i, J_i)$, by passing to a subsequence it converges in the Gromov-Hausdorff sense to a K\"ahler-Einstein log Del Pezzo surface $(X_\infty, \omega_\infty, J_\infty)$, and $\deg(X_\infty)=d$.
\end{prop}
In \cite{Tian1} Tian found further constraints on the possible singularities that could appear in $X_\infty$. We will state a more general theorem and give an alternative proof.
First we have (compare also \cite{Tian4}):
\begin{prop}[\cite{DS}]\label{Fano limit}
Given a sequence of $n$-dimensional K\"ahler-Einstein Fano manifolds $(X_i, \omega_i, J_i)$, by passing to a subsequence, it converges in the Gromov-Hausdorff sense to a $\Q$-Fano variety $(X_\infty, J_\infty)$ endowed with a weak K\"ahler-Einstein metric $\omega_\infty$ (cf. \cite{EGZ}). Moreover, there exist integers $k$ and $N$, depending only on $n$, so that we could embed $X_i\,(i\in \N \cup \{\infty\})$ into $\P^N$ using orthonormal basis of $H^0(X_i, -kK_{X_i})$ with respect to the Hermitian metric defined by $\omega_i$, and $X_i$ converges to $X_\infty$ as varieties in $\P^N$.
\end{prop}
Here one can think of the convergence as varieties in $\P^N$ as the convergence of defining polynomials. Notice that the orbifold property in Proposition \ref{orbifold compactness} also follows naturally from Proposition \ref{Fano limit}, since by Kawamata's theorem \cite{Kawa} a two-dimensional log terminal singularity is a quotient singularity. \\
We will treat singular varieties that come from certain limits of smooth ones. The following algebro-geometric notion is very natural from the point of view of minimal model program, and will be shown to be also naturally satisfied by the above limit $X_\infty$.
\begin{defi}
Let $X$ be a $\Q$-Fano variety. We say $X$ is \emph{$\Q$-Gorenstein smoothable} if there exists a deformation $\pi: \X\rightarrow \Delta\ni 0$ of $X$ over a smooth curve germ $\Delta$ such that $\X_0=X$, the general fibre is smooth and $K_{\X}$ is $\Q$-Cartier.
\end{defi}
\begin{lem} \label{Qsmoothable}
$X_\infty$ is $\Q$-Gorenstein smoothable.
\end{lem}
\begin{proof} By Proposition \ref{Fano limit} and general theory we can find a family of varieties $\pi_2: \X\subset \P^N\times \Delta \rightarrow \Delta$ in $\P^N$ where for $t\neq 0$ $\X_t$ is smooth and $\X_0$ is the variety $X_\infty$.
Indeed, for a morphism from $\Delta$ to the Hilbert scheme which sends
$0$ to $X_\infty$ (embedded by $|-kK_{X_\infty}|$) and contains
$X_i$ (embedded by $|-kK_{X_i}|$ as well) for one sufficiently large $i$, we can construct the required family by pulling back
the total space and taking its normalization if necessary.
Denote the other projection map by $\pi_1\colon \P^N\times \Delta\rightarrow \P^N$; then $-rK_{\X}$ and $\pi_1^*\O(1)$ agree up to a pull-back from the base. Thus $-rK_{\X}$ is Cartier and so $X_\infty$ is $\Q$-Gorenstein smoothable.
\end{proof}
Note that the above proof does not use the $\mathbb{Q}$-Gorenstein property of the normal central fiber,
although in our case we know it by Proposition \ref{Fano limit}. We only need a relatively ample line bundle and the normality assumption
on the total space and the central fiber.
The definition of $\Q$-Gorenstein smoothability obviously extends to local singularities, and for a $\Q$-Gorenstein smoothable $\Q$-Fano variety all its singularities must also be $\Q$-Gorenstein smoothable. On the other hand, it is proved in \cite{HP} that a log Del Pezzo surface with $\Q$-Gorenstein smoothable singularities is $\Q$-Gorenstein smoothable. In dimension two, $\Q$-Gorenstein smoothable quotient singularities are also commonly called \emph{``T-singularities"}. The classification of $T$-singularities is well-known, see \cite{KS}, \cite{Ma} for example.
So combining the above discussions we obtain:
\begin{thm}[\cite{Tian1}] \label{T-singularity}
The Gromov-Hausdorff limit $(X_\infty, J_\infty)$ of a sequence of K\"ahler-Einstein Del Pezzo surfaces is a K\"ahler-Einstein log Del Pezzo surface with singularities either canonical (i.e. ADE singularities) or cyclic quotients of type $\frac{1}{dn^2}(1,dna-1)$ with $(a,n)=1$ ($1\leq a < n$).
\end{thm}
\begin{rmk}
For the sake of completeness, even if we will not use it in our proof, we remark that it is known that local smoothings of $T$-singularities admit asymptotically conical Calabi-Yau metrics \cite{Kr}, \cite{Suvaina}. It is then natural to expect the following picture from a metric perspective: given a sequence $(X_i,\omega_i)$ of degree $d$ K\"ahler-Einstein Del Pezzo surfaces Gromov-Hausdorff converging to a singular $(X_\infty,\omega_\infty)$ and a point $p_\infty \in \mbox{Sing}(X_\infty)$, there exist points $p_i \in X_i$ converging to $p_\infty \in X_\infty$ and scaling parameters $\lambda_i \rightarrow +\infty$ such that $(X_i,p_i, \lambda_i \omega_i)$ converges in the pointed Gromov-Hausdorff sense to an asymptotically conical Calabi-Yau metric on a smoothing of the $T$-singularity at $p_\infty$.
\end{rmk}
Next, we can use the Bishop-Gromov volume comparison theorem to control the order of the orbifold group at each point.
\begin{thm} [\cite{Tian1}]
\label{Bishop-Gromov} Let $(X, \omega)$ be a K\"ahler-Einstein log Del Pezzo surface and let $\Gamma_p \subseteq U(2)$ be the orbifold group at a point $p\in X$. Then
\begin{equation} \label{order bound}
|\Gamma_p| \deg(X) < 12.
\end{equation}
\end{thm}
\begin{proof}
Without loss of generality we may normalize the metric so that $Ric (\omega)= 3\omega$. The Bishop-Gromov volume comparison extends without difficulty to orbifolds \cite{Bor}, so for all $p \in X$ the function
$\frac{Vol(B(p,r))}{Vol(\overline{B}(r))}$
is decreasing in $r$, where $\overline{B}(r)$ is the ball of radius $r$ in the standard four sphere $S^4(1)$. As $r$ tends to zero the function converges to $1/|\Gamma_p|$, and for sufficiently large $r$ the function is constant $Vol(X, \omega)/Vol(S^4(1))$.
So $Vol(X, \omega) |\Gamma_p| \leq Vol (S^4(1)).$
The normalization condition $Ric (g)= 3 g$ implies that $[\omega] = \frac{2 \pi}{3} c_1(X)$.
So
$Vol(X, \omega)=\int_X \frac{\omega^2}{2}=\frac{2 \pi^2}{9} \deg(X)$.
Then, using the fact that $Vol(S^4(1))=\frac{8}{3} \pi^2$, we obtain
$|\Gamma_p| \deg(X) \leq \frac{9}{2\pi^2}\cdot\frac{8}{3}\pi^2=12.$
If equality held, then $X$ would have constant sectional curvature, so in particular $W^+=0$.
But since $X$ is K\"ahler, we have $S(\omega)^2=24|W^+|^2$, so the scalar curvature would vanish,
contradicting the normalization $Ric(\omega)=3\omega$.
\end{proof}
\begin{rmk} \label{remark BG}
The two theorems above were essentially known to Tian \cite{Tian1}. For the inequality (\ref{order bound}), the constant on the right hand side was $48$ in \cite{Tian1}.
\end{rmk}
By Theorems \ref{T-singularity} and \ref{Bishop-Gromov} we have the following constraints on the possible singularities that can appear on the Gromov-Hausdorff limit $X_\infty$ (a sample computation is given after the list)\footnote{Recall that the order of the finite Klein group yielding an $A_k$ singularity is $k+1$, that of a $D_k$ singularity is $4(k-2)$, that of an $E_6$ singularity is $24$, that of an $E_7$ singularity is $48$, and that of an $E_8$ singularity is $120$.}:
\begin{itemize}
\item $\deg=4$, $X_\infty$ is canonical, and can have only $A_1$ singularities.
\item $\deg=3$, $X_ \infty$ is canonical, and can have only $A_1$ or $A_2$ singularities.
\item $\deg=2$, $X_\infty$ can have only $A_1$, $A_2$, $A_3$, $A_4$, and $\frac{1}{4}(1,1)$ singularities.
\item $\deg=1$, $X_\infty$ can have only $\frac{1}{4}(1,1)$, $\frac{1}{8}(1,3)$, and $\frac{1}{9}(1,2)$ singularities besides $A_i\ (i\leq 10)$ and $D_4$ singularities.
\end{itemize}
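To illustrate how these lists follow from the bound (\ref{order bound}), consider for instance the case $\deg(X_\infty)=2$, where every orbifold group must have order $|\Gamma_p|<6$. Among the canonical singularities this only allows $A_k$ with $k+1\leq 5$, i.e. $A_1,\dots,A_4$ (the smallest binary dihedral group, that of $D_4$, already has order $8$), while among the non-canonical $T$-singularities $\frac{1}{dn^2}(1,dna-1)$ with $n\geq 2$ the only one with $dn^2<6$ is $\frac{1}{4}(1,1)$, corresponding to $(d,n)=(1,2)$. The other degrees are obtained in the same way; for example, in degree one the non-canonical possibilities $\frac{1}{4}(1,1)$, $\frac{1}{8}(1,3)$ and $\frac{1}{9}(1,2)$ correspond to $(d,n)=(1,2)$, $(2,2)$ and $(1,3)$ respectively.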
For the cases $d\geq 3$, the above classification is already sufficient for our purposes, as canonical log Del Pezzo surfaces are classified (see the next section). When $d\leq 2$ we will carry out a further study in Section \ref{Degree12}.
Now we make a side remark about the Gromov-Hausdorff topology used in this paper. In \cite{Spotti} it is proved that if two K\"ahler-Einstein log Del Pezzo surfaces are isometric, then the complex structures are either the same or conjugate. For this reason the standard Gromov-Hausdorff distance cannot distinguish two conjugate complex structures in general. So in our case there is an easy modification, where we say a sequence $(X_i, J_i, \omega_i)$ converges to $(X_\infty, J_\infty, \omega_\infty)$ if it converges in the Gromov-Hausdorff topology and in the sense of Anderson-Tian, i.e. with smooth convergence of both the metric and the complex structure away from the singularities. The spaces $M_d$ appearing in Theorem \ref{MT} admit an involution given by conjugating the complex structure, and we will identify this involution explicitly for each $d$.
\section{Algebro-geometric properties of log Del Pezzo surfaces}\label{AG input}
We continue to study the algebro-geometric properties of $X_\infty$ appearing in the last section. These constraints help the construction of the desired moduli spaces in later sections.
\subsection{Classification of mildly singular log Del Pezzo surfaces}
We first recall some general classification results for log Del Pezzo surfaces with mild singularities. The following is classical.
\begin{thm}[\cite{HW}] \label{logDelPezzo-canonical}
A degree $d$ log Del Pezzo surface with canonical singularities is
\begin{itemize}
\item a complete intersection of two quadrics in $\P^4$, if $d=4$;
\item a cubic hypersurface in $\P^3$, if $d=3$;
\item a degree $4$ hypersurface in $\P(1,1,1,2)$ not passing through $[0:0:0:1]$, if $d=2$;
\item a degree $6$ hypersurface in $\P(1,1,2,3)$ not passing through $[0:0:1:0]$ or $[0:0:0:1]$, if $d=1$.
\end{itemize}
\end{thm}
Although we will not use it, log Del Pezzo surfaces with Gorenstein index two are also classified, by \cite{AN} and \cite{Nakayama}. In case the degree is one or two, we have:
\begin{thm}[\cite{KK}] \label{logDelPezzo-index2}
A degree $2$ log Del Pezzo surface with Gorenstein index at most two is either a degree $4$ hypersurface in $\P(1,1,1,2)$, or a degree $8$ hypersurface in $\P(1,1,4,4)$.
A degree $1$ log Del Pezzo surface with Gorenstein index at most two is a degree $6$ hypersurface in $\P(1,1,2,3)$.
\end{thm}
Notice that by the restrictions on Gromov-Hausdorff limits of K\"ahler-Einstein Del Pezzo surfaces discussed in the previous section, we know that the Gorenstein index of such limits is less than or equal to $2$ for degree $\geq 2$, and at most $6$ in the degree $1$ case.
\subsection{CM line bundle comparison}
In this subsection we study GIT stability of K\"ahler-Einstein log Del Pezzo surfaces. For smooth K\"ahler-Einstein manifolds, it is known that they are K-polystable (cf. \cite{Tian3, Stoppa, Mab}). This has been generalized to the singular setting in \cite{Berman}, and we state the two dimensional case here:
\begin{thm}[\cite{Berman}]
\label{KEtoKstability} A log Del Pezzo surface admitting a K\"ahler-Einstein metric is K-polystable.
\end{thm}
Next we state a general theorem relating K-polystability and usual GIT stabilities, using the CM line bundle of Paul-Tian \cite{PT}.
Recall that the CM line bundle is a line bundle defined on the base scheme of each flat family of polarized varieties in terms of the Deligne pairing; if the family is $G$-equivariant for an algebraic group $G$, the line bundle naturally inherits the group action.
It gives a GIT weight interpretation to the Donaldson-Futaki invariant, whose positivity is roughly what K-stability requires.
The point is that the CM line bundle is \textit{not} even nef in general, so that we cannot apply GIT straightforwardly.
We refer to \cite{PT}, \cite{PRS} for more details.
\begin{thm}\label{CM stability} Let $G$ be a reductive algebraic group without nontrivial characters. Let $\pi\colon (\mathcal{X},\mathcal{L})\rightarrow S$ be a $G$-equivariant
polarized projective flat family of equidimensional varieties over a projective variety.
Here ``polarized" means that $\mathcal{L}$ is a
relatively ample line bundle on $\mathcal{X}$, and ``equidimensional" means that all the components have the same dimension.
Suppose that
\begin{enumerate}
\item the Picard rank $\rho(S)$ is one;
\item there is at least one K-polystable $(\mathcal{X}_t,\mathcal{L}_t)$
which degenerates in $S$ via a one parameter subgroup $\lambda$ in $G$, i.e. the corresponding test configuration is not a product one.
\end{enumerate}
Then a point $s\in S$ is GIT (poly, semi)stable if $\X_s$ is reduced and $(\X_s,\mathcal{L}_s)$
is K-(poly, semi)stable.
\end{thm}
\begin{proof}
Let $\Lambda_{CM}$ be the CM line bundle \cite{PT} over $S$ associated to $\pi$.
In general, this is a $G$-linearized $\mathbb{Q}$-line bundle. Let $\Lambda_0$ be the positive generator of $Pic(S)$; then there exist integers $r>0$ and $k$, so that
$\Lambda_{CM}^{\otimes r}\cong \Lambda_0^{\otimes k}$. The isomorphism is $G$-equivariant by the condition that $G$ has no nontrivial character.
On the other hand, from condition (2), we know that the degree of the CM line bundle along the closure of the $\lambda$-orbit is positive. This is because by \cite{Wan}
the degree is the sum of the Donaldson-Futaki invariant on the two degenerations along $\lambda$ and $\lambda^{-1}$.
This implies that the integer $k$ is positive.
Therefore, $\Lambda_{CM}^{\otimes r}$ is ample.
If $\pi\colon \mathcal{X}\rightarrow S$ is
the universal polarized family over a Hilbert scheme, and $G$ is the associated special linear group $SL$,
then it is known \cite{PT} that for any $s\in S$ and one parameter subgroup
$\lambda\colon \mathbb{C}^{*}\rightarrow G$, the associated Donaldson-Futaki
invariant \cite{Do1} $DF((\mathcal{X}_s, \mathcal{L}_s); \lambda)$ is
the GIT weight in the usual sense with respect to the CM line bundle $\Lambda_{CM}^{\otimes r}$, up to a positive multiple.
This fact can be extended to our general family $\pi\colon (\mathcal{X},\mathcal{L})
\rightarrow S$ in a straightforward way by considering a $G$-equivariant morphism into a certain Hilbert scheme
defined by $(\mathcal{X},(\pi^{*}\Lambda_0)^{\otimes l}\otimes
\mathcal{L}^{\otimes m})$
for $l\gg m\gg 0$.
If $\mathcal{X}_s$ is reduced, then by our equidimensionality assumption
on all fibers we cannot obtain \textit{almost trivial} test configurations from one parameter subgroups of $G$ (in the sense of \cite{LX}, \cite{Od3}).
This is because the central fiber of an almost trivial test configuration for a reduced
equidimensional variety should have an embedded component.
Summing up, the conclusion follows from the Hilbert-Mumford numerical criterion.
\end{proof}
We believe Theorem \ref{CM stability} should have more applications in the explicit study of general extremal metrics beyond our
study of log Del Pezzo surfaces in this paper. For instance, there are many examples of
equivariant families of polarized varieties parametrized by a projective space or a Grassmannian through various covering constructions. In these situations one can always apply Theorem \ref{CM stability}.
We remark that in the above proof what we really need is for the CM line bundle to be ample. For example, the following has been known to Paul and Tian for a long time:
\begin{cor}[{\cite{Tian2.5}}] \label{local versal}
A hypersurface $X\subseteq \mathbb{P}^N$
is Chow polystable (resp. Chow semistable) if $(X,\mathcal{O}_X(1))$ is K-polystable (resp. K-semistable).
\end{cor}
We also state the following variant of Theorem \ref{CM stability},
which we also believe to be a useful tool for future developments.
\begin{lem} \label{local CM stability}
Let $S$ be an affine scheme, and $G$ be a reductive algebraic group acting on $S$
fixing $0\in S$. Let $\pi\colon (\X, \L)\to S$ be a $G$-equivariant polarized flat
deformation of a K-polystable reduced polarized variety $(\mathcal{X}_0,\mathcal{L}_0)$
such that all fibers $\X_s$ are equidimensional varieties. Then a point $s\in S$ is GIT (poly)stable if $\X_s$ is reduced and $(\X_s,\mathcal{L}_s)$ is K-(poly)stable.
\end{lem}
This follows from a similar argument as in the proof of Theorem \ref{CM stability}, noting that the CM line bundle is equivariantly trivial over $S$. One can often apply this to the versal deformation family, as we do in Section \ref{Versal deformations}.
\subsection{Semi-universal $\Q$-Gorenstein deformations} \label{Versal deformations}
In this subsection we provide some general theory on $\Q$-Gorenstein deformations, continuing Section \ref{DG input}. The following is well-known.
\begin{lem}[\cite{KS}]\label{local no obstruction}
A $T$-singularity has a smooth semi-universal $\Q$-Gorenstein deformation.
\end{lem}
A $T$-singularity is either Du Val (ADE type), which is a hypersurface singularity in $\C^3$ and has a smooth semi-universal $\Q$-Gorenstein deformation space, or a cyclic quotient of type $\frac{1}{dn^2}(1, dna-1)$ with $(a, n)=1$. The latter is the quotient of the Du Val singularity $A_{dn-1}$ by the group $\Z/n\Z$. More precisely, an $A_{dn-1}$ singularity embeds as a hypersurface $z_1z_2=z_3^{dn}$ in $\C^3$. The generator $\xi$ of $\Z/n\Z$ acts on $\C^3$ by $\xi. (z_1, z_2, z_3)=(\zeta_n z_1, \zeta_n^{-1} z_2, \zeta_n^a z_3)$, where $\zeta_n$ is a primitive $n$-th root of unity. One can explicitly write down a semi-universal $\mathbb{Q}$-Gorenstein deformation as the family of hypersurfaces in $\C^3$ given by $z_1z_2=z_3^{dn}+a_{d-1} z_3^{(d-1)n}+\cdots+ a_0$, see \cite{Ma}. In particular its dimension is $d$.
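As a concrete illustration of this description, consider the simplest non-canonical $T$-singularity $\frac{1}{4}(1,1)$, corresponding to $d=1$, $n=2$, $a=1$: it is the quotient of the $A_1$ singularity $\{z_1z_2=z_3^2\}\subset\C^3$ by $\Z/2\Z$ acting via $(z_1,z_2,z_3)\mapsto(-z_1,-z_2,-z_3)$, and the one-parameter family $z_1z_2=z_3^2+a_0$ (with the induced $\Z/2\Z$-action) gives its semi-universal $\Q$-Gorenstein deformation; for $a_0\neq 0$ the action is free on the fibre, so the nearby fibres of the quotient family are smooth.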
Globally, since $H^2(X, T_X)$ vanishes (for example by the Kodaira-Nakano vanishing theorem or see \cite{HP}), we have the following lemmas.
\begin{lem}[\cite{HP}]
Let $X$ be a log Del Pezzo surface. Then $X$ is $\Q$-Gorenstein smoothable if and only if it has only $T$-singularities.
\end{lem}
\begin{lem}\label{local-global}
Let $X$ be a $\Q$-Gorenstein smoothable log Del Pezzo surface with singularities $p_1, \cdots, p_n$. Then for the $\mathbb{Q}$-Gorenstein
deformation tangent space $\Def(X)$ of $X$, we have
$$0\rightarrow \Def '(X)\rightarrow \Def(X) \rightarrow \bigoplus_{i=1}^n \Def_i\rightarrow 0, $$
where $\Def'(X)$ is the subspace of $\Def(X)$ corresponding to equisingular deformations, and $\Def_i$ is the $\Q$-Gorenstein deformation tangent space of the local singularity $p_i$.
Moreover, there is an algebraic scheme $(\Kur(X), 0)$ with tangent space $\Def(X)$ at $0$, and a semi-universal $\mathbb{Q}$-Gorenstein family $\mathcal{U}\rightarrow (\Kur(X), 0)$ which is $\Aut(X)$-equivariant. Here $\Aut(X)$ denotes the automorphism group of $X$.
\end{lem}
This is again well-known; see for example Fact 1.1 in \cite{CL}. We give a sketch of the proof here. It follows from general algebraic deformation theory (or Grauert's construction of an analytic semi-universal deformation) that there exists a formal semi-universal family
$\mathcal{X}\rightarrow Spec(R)$, where $R$ is the completion of
an essentially finite type local ring. By using the Grothendieck existence theorem \cite{FGA} and
an equivariant version of the Artin algebraicity theorem \cite{Art},
we obtain an $\Aut(X)$-equivariant semi-universal deformation.
Moreover, in the semi-universal deformation, it follows from \cite[Theorem 3.9(i)]{KS} that the $\mathbb{Q}$-Gorenstein deformations correspond to one irreducible component.
In general there is a tangent-obstruction theory for deformation of singular reduced varieties, with tangent space
$\text{Ext}^1(\Omega_X , \O_X )$ and obstruction space $\text{Ext}^2(\Omega_X , \O_X)$. Since $X$ has only isolated singularities and $H^2(X, T_X)=0$, it follows, by the local-to-global spectral sequence of $\text{Ext}$, that the obstructions are localized at the singular points and are governed by the map
$$H^0(\mathcal{E} xt^1(\Omega_X, \O_X))=\bigoplus_{i=1}^n\mathcal{E} xt^1_{p_i}(\Omega_X, \O_X) \rightarrow H^0(\mathcal{E}xt^2(\Omega_X, \O_X))=\bigoplus_{i=1}^n\mathcal{E} xt^2_{p_i}(\Omega_X, \O_X).$$
By Lemma \ref{local no obstruction} there is no obstruction if we restrict to $\Q$-Gorenstein deformation tangent subspaces. Then again by the local-to-global spectral sequence we obtain the stated exact sequence. The property on the action of $\Aut(X)$ follows from the proof.
Now we study a particular example, which we will use in Section \ref{Degree12}.
\begin{exa}
Let $X_1^T$ be the quotient of $\P^2$ by $\Z/9\Z$, where the generator $\xi$ of $\Z/9\Z$ acts by $\xi. [z_1: z_2 : z_3]=[z_1: \zeta_9 z_2: \zeta_9^{-1} z_3]$, and $\zeta_9$ is a primitive ninth root of unity. Then $X_1^T$ is a degree one log Del Pezzo surface, with one $A_8$ singularity at $[1:0:0]$ and two $\frac{1}{9}(1,2)$ singularities at $[0:1:0]$ and $[0:0:1]$. In particular it is $\Q$-Gorenstein smoothable and has Gorenstein index $3$. Note that the Fubini-Study metric on $\P^2$ descends to a K\"ahler-Einstein metric.
By the above general theory and the fact that $X_1^T$ has no equisingular deformations, we have a decomposition $$\Def(X_1^T)=\Def_1\oplus \Def_2\oplus \Def_3, $$ where $\Def_i$ is the $\Q$-Gorenstein deformation tangent space of the local singularity $p_i$.
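Note that the dimensions here are easily accounted for: $\dim \Def_1=8$ (for the $A_8$ singularity), while $\dim\Def_2=\dim\Def_3=1$ (each $\frac{1}{9}(1,2)$ singularity is a $T$-singularity with $d=1$), so $\dim\Def(X_1^T)=10$; quotienting by the two dimensional group $\Aut^0(X_1^T)=(\C^*)^2$ identified below leaves the expected $8$-dimensional family of smooth degree one Del Pezzo surfaces.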
It is not hard to see that the connected component of the
automorphism group is $\Aut^0(X_1^T)=(\C^*)^2$. We want to identify its action on $\Def(X_1^T)$. We first choose coordinates on $\Aut^0(X_1^T)$ so that $\lambda=(\lambda_1, \lambda_2)$ acts on $X_1^T=\P^2/(\Z/9\Z)$ by $\lambda. [z_1:z_2:z_3]=[\lambda_1 z_1:\lambda_2 z_2:z_3]$. Around $p_3$ we may choose affine coordinates $y_1=z_1/z_3$ and $y_2=z_2/z_3$. So the action of $\Z/9\Z$ is given by $\xi. (y_1, y_2)=(\zeta_9 y_1, \zeta_9^2 y_2)$, which is the standard model for the $\frac{1}{9}(1,2)$ singularity. The action of $(\C^*)^2$ is then $\lambda. (y_1, y_2)=(\lambda_1y_1, \lambda_2y_2)$. Now a local deformation of the affine singularity $\frac{1}{9}(1,2)$ can be seen as follows. We embed $\C^2/(\Z/9\Z)$ into $\C^3/(\Z/3\Z)$ by sending $(y_1, y_2)$ to $(u, v, w)=(y_1^3, y_2^3, y_1y_2)$. A versal deformation is given by $uv-w^3=s$. The induced action of $(\C^*)^2$ is then $\lambda. s=\lambda_1^{-3}\lambda_2^{-3} s$. This is then the weight of the action on $\Def_3$. Similarly one can see the weight on $\Def_2$ is given by $\lambda_1^{-3}\lambda_2^{6}$. To see the weight on $\Def_1$, we can embed $X_1^T$ into $\P(1,2,9, 9)$ as a hypersurface $x_3x_4=x_2^9$, by sending $[z_1:z_2:z_3]$ to $[x_1:x_2:x_3:x_4]=[z_1:z_2z_3: z_2^9: z_3^9]$. One can easily write down a space of deformations of $X_1^T$ as $x_3x_4=x_2\prod_{i=1}^8(x_2+a_ix_i)$. This deformation only partially smoothes the $A_8$ singularity, so that $\Def_1$ can be identified with the space of all vectors $(a_1, \cdots, a_8)$. It is then easy to see the weight of the action of $\lambda$ on $\Def_1$ is $\lambda_1\lambda_2^{-1}$. So we have arrived at:
\begin{lem}\label{local GIT}
The action of $\Aut^0(X_1^T)$ on $\Def(X_1^T)$ is given by
$$\lambda. (v_1, v_2, v_3)=(\lambda_1\lambda_2^{-1} v_1, \lambda_1^{-3}\lambda_2^{6}v_2, \lambda_1^{-3}\lambda_2^{-3} v_3).$$
\end{lem}
\end{exa}
From Lemma \ref{local-global} we have a linear action of a group $\Aut(X)$ on $\Def(X)$. If $\Aut(X)$ is reductive (for example, when $X$ admits a K\"ahler-Einstein metric, by Matsushima's theorem \cite{Matsu}), one can take a GIT quotient $\Def(X)//\Aut(X)$. By general theory the GIT on $\Kur(X)$ is equivalent to that on $\Def(X)$, and $\Kur(X)//\Aut(X)$ parametrizes local deformations of $X$ that are represented by polystable points in $\Kur(X)$. This can be viewed as a ``local" coarse moduli space of $\Q$-Gorenstein deformations of $X$. The following lemma provides a more precise link between the Gromov-Hausdorff convergence and algebraic geometry.
\begin{lem} \label{continuity}
Let $X_\infty$ be the Gromov-Hausdorff limit of a sequence of K\"ahler-Einstein Del Pezzo surfaces $X_i$, then for $i$ sufficiently large
we may represent $X_i$ by a point $u_i\in \Kur(X_\infty)//\Aut(X_\infty)$, so that $u_i\rightarrow 0$ as $i$ goes to infinity.
\end{lem}
\begin{proof}
From Section \ref{DG input}, we know there are integers $m, N$, such that by passing to a subsequence the surface $X_i$ converges to $X_\infty$, under the projective embedding into $\P^N$ defined by orthonormal section of $H^0(X_i, -mK_{X_i})$. Since $X_\infty$ has reductive automorphism group, we can choose a Luna slice $S$ in the component of the Hilbert scheme corresponding to $\Q$-Gorenstein smoothable deformations of $X_\infty$. Hence for $i$ large enough, $X_i$ is isomorphic to a surface parametrized by $s_i\rightarrow 0\in S$. By versality and shrinking $S$ if possible we have a map $F: S \rightarrow \Kur(X_\infty)$ so that $s$ and $F(s)$ represent isomorphic surfaces. Let $v_i=F(s_i)$. Then $v_i\rightarrow 0$. Moreover, by Lemma \ref{local CM stability} $v_i$ is polystable for $i$ large, thus its image $u_i\in \Kur(X_\infty)//\Aut(X_\infty)$ represents the same surface $X_i$. The conclusion then follows.
\end{proof}
\subsection{Moduli spaces} \label{Moduli space}
In this section we will define precisely what a moduli space of K\"ahler-Einstein
$\mathbb{Q}$-Fano varieties means to us.
\begin{defi}[KE moduli stack]\label{KE moduli stack}
We call a moduli algebraic stack $\mathcal{M}$ of $\mathbb{Q}$-Gorenstein families of
$\mathbb{Q}$-Fano varieties a \textit{KE moduli stack} if
\begin{enumerate}
\item It has a categorical moduli $M$ in the category of algebraic spaces;
\item There is an \'etale covering of $\mathcal{M}$ of the form $\{ [U_i/G_i] \}$ with algebraic schemes
$U_i$ and reductive groups $G_i$, where there is a $G_i$-equivariant
$\mathbb{Q}$-Gorenstein flat family of $\mathbb{Q}$-Fano varieties;
\item Closed orbits of $G_i \curvearrowright U_i$ correspond to geometric points of $M$, and parametrize $\Q$-Gorenstein smoothable K\"ahler-Einstein $\mathbb{Q}$-Fano varieties.
\end{enumerate}
We call the categorical moduli $M$ in the category of algebraic spaces a \emph{KE moduli space}.
If it is an algebraic variety, we also call it \emph{KE moduli variety}.
\end{defi}
\noindent
For an introduction to the theory of stacks, one may refer to \cite{Andrew}.
For the general conjecture and for more details on the existence of KE moduli stack, compare Section \ref{K moduli}.
For our main purposes in proving Theorem \ref{MT}, we only need a much weaker notion.
\begin{defi}[Analytic moduli space]
An \emph{analytic moduli space} of degree $d$ log Del Pezzo surfaces is a compact analytic space $M_d$ with the following structures:
\begin{enumerate}
\item We assign each point in $M_d$ a unique isomorphism class of $\Q$-Gorenstein smoothable degree $d$ log Del Pezzo surfaces. For simplicity of notation, we will denote by $[X]\in M_d$ a point which corresponds to the isomorphism class of the log Del Pezzo surface $X$.
\item For each $[X]\in M_d$ with $\Aut(X)$ reductive, there is an analytic neighborhood $U$, and a finite surjective map $\Phi_U$ from $U$ to an analytic neighborhood of $0\in \Kur(X)//\Aut(X)$, such that $\Phi_U^{-1}(0)=[X]$ and for any $u\in U$, the surfaces parametrized by $u$ and $\Phi_U(u)$ are isomorphic.
\end{enumerate}
\end{defi}
\begin{defi}\label{perfect}
We say that an analytic moduli space has {\textit{property (KE)}} if every surface parametrized by $M_d^{GH}$ is isomorphic to one parametrized by some point in $M_d$.
\end{defi}
\begin{thm} For any analytic moduli space $M_d$ which has property (KE), there is a homeomorphism from $M_d^{GH}$ to $M_d$, under the obvious map.
\end{thm}
\begin{proof}
To carry out the strategy described in the introduction, we just need the natural map from $M_d^{GH}$ to $M_d$ to be continuous. It suffices to show that if a sequence $[X_i]\in M_d^{0}$ converges to a point $[X_\infty]\in M_d^{GH}$, then $\Phi([X_i])$ converges to $\Phi([X_\infty])$, where $\Phi$ denotes this natural map. Unwrapping the definitions, this is exactly Lemma \ref{continuity}.
\end{proof}
In later sections \ref{Degree34} and \ref{Degree12}, we will construct the analytic moduli space $M_d$ for
$\mathbb{Q}$-Gorenstein smoothable cases one-by-one. We will show that these $M_d$'s satisfy property (KE). Moreover they are actually categorical moduli of moduli stacks $\M_d$, with a Zariski open subset parametrizing all smooth degree $d$ Del Pezzo surfaces. Thus Theorem \ref{MT} follows.
\section{The cases of degree four and three}\label{Degree34}
\subsection{Degree four case}\label{Degree4}
In this case Theorem \ref{MT} has already been proved in \cite{MM}. Following the general strategy outlined in the introduction, we give a partially new proof here.
Recall that smooth degree $4$ Del Pezzo surfaces are realized by the anti-canonical embedding as intersections of two quadrics in $\P^4$. So in order to construct a moduli space, it is natural to consider the following GIT picture
$$PGL(5; \C) \curvearrowright H_4=Gr(2, Sym^2 (\C^5)) \hookrightarrow \P_*
(\Lambda^2 Sym^2 (\C^5))\footnote{In this paper $\P_*(V)=\P(V)$ is the covariant projectivization and $\P^*(V)=\P(V^*)$ is the contravariant projectivization. } ,$$
with a linearization induced by the Pl\"ucker embedding.
\begin{thm}[Mabuchi-Mukai\ \cite{MM}] \label{M4} An intersection $X$ of two quadrics in $\P^4$ is
\begin{itemize}
\item stable $\Longleftrightarrow$ $X$ is smooth;
\item semistable $\Longleftrightarrow$ $X$ has at worst $A_1$ singularities (nodes);
\item polystable $\Longleftrightarrow$ the two quadrics are simultaneously diagonalizable, i.e. $X$ is isomorphic to the intersection of quadrics
$$ \,\begin{cases} x_0^2+x_1^2+x_2^2+x_3^2+x_4^2=0\\
\lambda_0 x_0^2+ \lambda_1x_1^2+\lambda_2x_2^2+\lambda_3x_3^2+\lambda_4x_4^2=0 \end{cases}$$
and no three of the $\lambda_i$s are equal (or equivalently, $X$ is either smooth or has exactly two or four $A_1$ singularities).
\end{itemize}
\end{thm}
Define $$M_4:= H_4^{ss} // PGL(5; \C)$$ to be the GIT quotient
which parametrizes isomorphism classes of polystable intersections of two quadrics.
Since $M_4$ is naturally isomorphic to the moduli space of binary quintics on $\P^1$, choosing invariants as in \cite{Dolga}, Chapter 10.2, we see that $M_4$ is isomorphic to
$\P(1,2,3)$ and that the smooth surfaces are parametrized by the Zariski open subset $M_4^{\mbox{sm}} \cong \P(1,2,3) \setminus D$, where $D$ is an ample divisor cut out by the equation $z_1^2=128z_2$.\\
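The identification with binary quintics can be seen, for instance, by associating to a pencil of quadrics spanned by symmetric matrices $A_0$ and $A_1$ the binary quintic $\det(\lambda_0 A_0+\lambda_1 A_1)$, whose five roots on $\P^1$ correspond to the singular quadrics of the pencil; for the simultaneously diagonalized pencils appearing in Theorem \ref{M4} these roots are exactly the points $[-\lambda_i:1]$, $i=0,\dots,4$.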
The $d=4$ case of Theorem \ref{MT} then follows from the following:
\begin{thm}
The above constructed $M_4$ is an analytic moduli space with property (KE).
\end{thm}
\begin{proof}
To check $M_4$ is an analytic moduli space, observe that item $(1)$ is obvious, and item $(2)$ follows from the construction of $M_4$ as a GIT quotient (the versal family is the universal one over $H_4$). To see $M_4$ has property (KE), we first use Theorem \ref{logDelPezzo-canonical} to see that any $[X]\in M_4^{GH}$ is parametrized by $H_4$. Then we apply Theorem \ref{KEtoKstability} and Theorem \ref{CM stability} (since Picard rank of $H_4$ is one, and it is easy to verify the assumptions are satisfied in this case) to see that $[X]$ is parametrized by $M_4$.
\end{proof}
Clearly $\M_4:=[H_4^{ss}/PGL(5;\C)]$ is a quotient stack, so we conclude that it is indeed a KE moduli stack. We make a few remarks here. First of all, the above arguments actually prove that all degree four K\"ahler-Einstein log Del Pezzo surfaces are parametrized by $M_4$.
By Theorem \ref{M4} the Gromov-Hausdorff limits of smooth Del Pezzo quartics have only an even number of $A_1$ singularities. The maximum number of such singularities is four. There is exactly one such surface $X_4^T$, which is defined by the equations $x_0x_1=x_2^2=x_3x_4$. It is isomorphic to the quotient $\P^1\times\P^1/(\Z/2\Z)$, where the generator $\xi$ of $\Z/2\Z$ acts as $\xi.(z_1, z_2)=(-z_1, -z_2)$. So it admits an obvious K\"ahler-Einstein metric.
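For the reader's convenience, the quotient description of $X_4^T$ can be made explicit as follows (writing the $\Z/2\Z$-action in homogeneous coordinates as $([z_1:z_2],[w_1:w_2])\mapsto([-z_1:z_2],[-w_1:w_2])$): the $\Z/2\Z$-invariant sections of $-K_{\P^1\times\P^1}=\O(2,2)$ are spanned by
$$x_0=z_1^2w_1^2,\quad x_1=z_2^2w_2^2,\quad x_2=z_1z_2w_1w_2,\quad x_3=z_1^2w_2^2,\quad x_4=z_2^2w_1^2,$$
and these satisfy exactly the relations $x_0x_1=x_2^2=x_3x_4$, recovering the anti-canonical embedding of $X_4^T$ into $\P^4$.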
It is also easy to see that the action of complex conjugation, which sends a Del Pezzo quartic to its complex conjugate, coincides with the natural complex conjugation on $\P(1,2,3)$.
\subsection{Degree three case}\label{Degree3}
Recall that smooth degree $3$ Del Pezzo surfaces are cubic hypersurfaces in $\P^3$. Note that the anti-canonical bundle is very ample. We recall the following classical GIT picture. The group $PGL(4; \C)$ acts naturally on the space $H_3= \P_{*}(Sym^3 (\C^4)) \cong \P^{19} $ of cubic polynomials.
\begin{thm}[Hilbert] \label{M3} A cubic surface $X$ in $\P^3$ is
\begin{itemize}
\item stable $\Longleftrightarrow$ $X$ has at worst singularities of type $A_1$;
\item semistable $\Longleftrightarrow$ $X$ has at worst singularities of type $A_1$ or $A_2$;
\item strictly polystable $\Longleftrightarrow$ $X$ is isomorphic to the cubic $X_3^T$ defined by equation $x_1x_2x_3=x_0^3$. It is not hard to see that $X_3^T$ has exactly three $A_2$ singularities, and is isomorphic to the quotient $\P^2/(\Z/3\Z)$, where the generator $\xi$ of $(\Z/3\Z)$ acts by $\xi.[z_1: z_2: z_3]=[z_1: e^{2\pi i/3}z_2: e^{-2\pi i/3}z_3]$.
\end{itemize}
\end{thm}
Define the quotient stack $\M_3:=[H_3^{ss}/PGL(4;\C)]$ and the corresponding GIT quotient (or in other word, categorical moduli)
$$M_3:=H_3^{ss}// PGL(4; \C)$$
which parametrizes isomorphism classes of polystable cubics.
The above Theorem is classical. It was proved by D. Hilbert in his Doctoral dissertation \cite{Hil}. For a modern proof consult \cite{Mum}.
Moreover, by looking at the ring of invariants
\cite{Sal}, it is known that $$M_3 \cong \P(1,2,3,4,5),$$
and that $M_3^{\mbox{sm}} \cong \P(1,2,3,4,5) \setminus D$ where $D$ is the ample divisor of equation $(z_1^2-64z_2)^2-2^{11}(8z_4+z_1z_3)=0$. So $M_3^{\mbox{sm}}$ is Zariski open and parametrizes all smooth cubic surfaces.
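The weights $(1,2,3,4,5)$ reflect the classical fact, going back to Clebsch and Salmon, that the ring of $SL(4;\C)$-invariants of cubic forms in four variables is generated by invariants of degrees $8$, $16$, $24$, $32$ and $40$, together with one invariant of degree $100$ whose square is a polynomial in the others; taking $\mathrm{Proj}$ of the subring generated by the first five gives $\P(8,16,24,32,40)\cong\P(1,2,3,4,5)$.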
Note that we can apply Theorem \ref{CM stability} to the universal family
over $H_3$. Thus it follows that $\M_3$ is a KE moduli stack and $M_3$ is a KE moduli variety. \\
Observe that a Gromov-Hausdorff limit of smooth K\"ahler-Einstein cubic surfaces has either exactly three $A_2$ singularities or at most four $A_1$ singularities. In the former case, it is isomorphic to $X_3^T$. In the latter case, with the maximal number of four $A_1$ singularities, it is the Cayley cubic $X_3^C$ defined by $x_0x_1x_2+x_1x_2x_3+
x_2x_3x_0+x_3x_0x_1=0$. It is not hard to see that it is isomorphic to the quotient $X_6/(\Z/2\Z)$, where $X_6$ is the degree six Del Pezzo surface, and the action of $\Z/2\Z$ is induced by the standard Cremona transformation on $\P^2$, i.e.,
$[z_1: z_2: z_3]\mapsto [z_1^{-1}: z_2^{-1} : z_3^{-1}]$. The existence of K\"ahler-Einstein metrics on $X_3^T$ and $X_3^C$ can also be easily seen using the above quotient description.
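For instance, for $X_3^T$ the quotient description can be made completely explicit: the $\Z/3\Z$-invariant cubic monomials on $\P^2$ are spanned by $z_1z_2z_3$, $z_1^3$, $z_2^3$ and $z_3^3$, and the induced map
$$[z_1:z_2:z_3]\mapsto [x_0:x_1:x_2:x_3]=[z_1z_2z_3:z_1^3:z_2^3:z_3^3]$$
identifies $\P^2/(\Z/3\Z)$ with the cubic surface $x_1x_2x_3=x_0^3$; the Fubini-Study metric, being invariant, descends to the orbifold K\"ahler-Einstein metric on $X_3^T$.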
We remark that it was proved in \cite{DT} that a K\"ahler-Einstein cubic surface must be GIT semistable, and our application of Theorem \ref{CM stability} sharpens this. The existence of K\"ahler-Einstein metrics on cubic surfaces with exactly one $A_1$ singularity was proved in \cite{Wang}, using the K\"ahler-Ricci flow on orbifolds and certain calculations of $\alpha$-invariants. In \cite{Spotti}, the existence of K\"ahler-Einstein metrics on a partial smoothing of the Cayley cubic $X_3^C$ was obtained by a gluing method. For general cubics with two or three $A_1$ singularities this was previously unknown. Here we actually know that all degree three $\Q$-Gorenstein smoothable K\"ahler-Einstein log Del Pezzo surfaces are parametrized by $M_3$.
As in the degree four case, the action of complex conjugation on $M_3$ is also given by the natural anti-holomorphic involution.
\section{The cases of degree two and one}\label{Degree12}
\subsection{More detailed study on Gromov-Hausdorff limits}
When the degree is one or two, there are new difficulties as non-canonical singularities could appear in the Gromov-Hausdorff limits. So the classification of canonical Del Pezzo surfaces (Theorem \ref{logDelPezzo-canonical}) is not enough for our purpose. In degree two, by Theorem \ref{Bishop-Gromov} we only need to deal with index two log Del Pezzo surfaces, which have been classified in \cite{AN}, \cite{Nakayama}, \cite{KK}. We could simply use these classification results directly, but since our assumption is much more restrictive, we provide a more elementary approach which treats both the $d=1$ and $d=2$ cases.
A common feature of the two cases is the existence of a holomorphic involution. For a degree two Del Pezzo surface $X$, it is well-known that the anti-canonical map realizes $X$ as a double cover of $\P^2$. Therefore $X$ admits an involution $\sigma$ (the ``Geiser involution"), which is simply the deck transformation of the covering map. The fixed locus of $\sigma$ is a smooth quartic curve. If $X$ admits a K\"ahler-Einstein metric $\omega$, then by \cite{BM} $\omega$ must be invariant under any such $\sigma$.
Similarly, for a degree one Del Pezzo surface $X$, the linear system $|-2K_X|$ realizes $X$ as a double cover of $\P(1,1,2)\subset \P^3$. So $X$ also admits an involution $\sigma$ (the ``Bertini involution"). Again any such $\sigma$ must preserve the K\"ahler-Einstein metric if $X$ admits one. The fixed locus of $\sigma$ consists of the point $[0:0:1]$ and a sextic in $\P(1,1,2)$.
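In terms of the normal forms of Theorem \ref{logDelPezzo-canonical}, these involutions can be written down explicitly (possibly after a change of coordinates): a smooth degree two Del Pezzo surface is a quartic $x_4^2=f_4(x_1,x_2,x_3)$ in $\P(1,1,1,2)$ with Geiser involution $x_4\mapsto -x_4$, and a smooth degree one Del Pezzo surface is a sextic $x_4^2=x_3^3+f_4(x_1,x_2)\,x_3+f_6(x_1,x_2)$ in $\P(1,1,2,3)$ with Bertini involution $x_4\mapsto -x_4$; in both cases the double cover structure is given by forgetting $x_4$.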
\begin{lem}
Suppose a sequence of degree one (or two) K\"ahler-Einstein Del Pezzo surfaces $(X_i, \omega_i, J_i)$ converges to a Gromov-Hausdorff limit $(X_\infty, \omega_\infty, J_\infty)$. Then by passing to a subsequence one can take a limit $\sigma_\infty$ of the involutions $\sigma_i$, which is a holomorphic involution on $X_\infty$.
\end{lem}
\begin{proof} This is certainly well-known. We include a proof here for the convenience of the reader. Let $p_1, \cdots, p_n$ be the singular points of $X_\infty$. We denote $\Omega_r=X_\infty\setminus \cup_{j=1}^n B(p_j, r)$. For any $r>0$ small, from Proposition \ref{orbifold compactness}, we know that for $i$ sufficiently large, there are $\sigma_i$-invariant open subsets $\Omega_{i} \subset X_i$ and embeddings $f_i: \Omega_i\rightarrow X_\infty\setminus \{p_1,\cdots, p_n\}$ such that $\Omega_r$ is contained in the image of each $f_i$ and $(f_i^{-1})^*(\omega_i, J_i)$ converges to $(\omega_\infty, J_\infty)$ smoothly. Then, by passing to a subsequence, the isometries $(f_i^{-1})^*\sigma_i$ converge to a limit $\sigma_{r, \infty}: \Omega_r \rightarrow X_\infty$ with $\sigma_{r, \infty}^*(\omega_\infty, J_\infty)=(\omega_\infty, J_\infty)$. Then we can let $r$ tend to zero and choose a diagonal subsequence so that $\sigma_{r, \infty}$ converges to a holomorphic isometry $\sigma_\infty$ on $X_\infty\setminus \{p_1,\cdots, p_n\}$. Then by Hartogs' extension theorem, $\sigma_\infty$ extends to a holomorphic isometry of the whole $X_\infty$. It is also clear that $\sigma_{\infty}^2$ is the identity.
\end{proof}
\begin{thm} \label{double cover classification}
In the degree two case, $X_\infty$ is either a double cover of $\P^2$ branched along a quartic curve, or a double cover of $\P(1,1,4)$ branched along a degree 8 curve not passing through the vertex $[0:0:1]$.
In the degree one case, $X_\infty$ is either a double cover of $\P(1,1,2)$ branched along the point $[0:0:1]$ and a sextic, or a double cover of $\P(1,2,9)$ branched along the point $[0:1:0]$ and a degree 18 curve not passing through the vertex $[0:0:1]$.
\end{thm}
\begin{proof}
We first treat the case of degree one. The proof of the degree two case is essentially the same and we will add some remarks later. Denote by $Y_i$ the quotient of $X_i$ by $\sigma_i$, so the quotient $Y_\infty=X_\infty/\sigma_\infty$ is the Gromov-Hausdorff limit of the $Y_i$'s. For each integer $m$ we have an orthogonal decomposition $H^0(X_i, -mK_{X_i})=V_i\oplus W_i$ with $V_i$ being the $+1$ eigenspace and $W_i$ the $-1$ eigenspace. Then we have a corresponding decomposition $H^0(X_\infty, -mK_{X_\infty})=V_\infty\oplus W_\infty$ on $X_\infty$. Now, by constructing orthonormal $\sigma_\infty$-invariant sections of $-kK_{X_\infty}$ for some large and sufficiently divisible $k$, one can show that there is a well-defined map $\iota_\infty: X_\infty\rightarrow \P^*(V_\infty)$, which induces a projective embedding of $Y_\infty$. By an adaptation of the H\"ormander technique (\cite{Tian1}, \cite{DS}), this implies that the orthonormal $\sigma_i$-invariant sections of $-kK_{X_i}$ (equivalent to sections of $-l K_{Y_i}$ for some integer $l$) define an embedding (Tian's embedding) of $Y_i$ into $\P^*(V_i)$ for $i$ sufficiently large. Moreover, we may assume $Y_i$ converges to $Y_\infty$ as normal varieties in $\P^N$ for some integer $N$. Since the $Y_i$'s are all isomorphic to $\P(1,1,2)$, we see that $Y_\infty$ is $\Q$-Gorenstein smoothable, and there is a partial $\Q$-Gorenstein smoothing of $Y_\infty$ to $\P(1,1,2)$.
\begin{clm}
$Y_\infty$ is isomorphic to $\P(1,2,9)$.
\end{clm}
Clearly we have $K^2_{Y_\infty}=8$, and thus we may apply \cite{HP}. Notice that the full proof in \cite{HP} relies on the classification theorem of Alexeev-Nikulin \cite{AN}, but in our case we only need the more elementary part \cite{HP1}, without use of \cite{AN}. So we know $Y_\infty$ is either a toric log Del Pezzo surface $\P(a^2, b^2, 2c^2)$ with $a^2+b^2+2c^2=4abc$ or a partial smoothing of such a surface. Since the orbifold structure group of $X_\infty$ always has order less than $12$, the order of all the orbifold structure groups of $Y_\infty$ must be less than or equal to $22$. Then, by an easy investigation of the above Markov equation, we see that $Y_\infty$ must have two singularities, one of type $A_1$ and one of type $\frac{1}{9}(1,2)$. It could be possible that $Y_\infty$ is a partial smoothing of $\P(9, b^2, 2c^2)$, but we claim that then it must be $\P(1,2, 9)$. For this we need to go back to the proof in \cite{HP1}. For the minimal resolution $\pi: \tY_\infty\rightarrow Y_\infty$, let $n$ be the largest number such that there is a birational morphism $\mu_n$ from $\tY_\infty$ to the $n$-th Hirzebruch surface $\F_n$. Let $B'$ be the proper transform of the negative section $B$ in $\F_n$, and let $p: \tY_\infty\rightarrow\P^1$ be the composition of $\mu_n$ with the projection map on $\F_n$. Then by a theorem of Manetti \cite[Theorem 11]{Ma} (see also \cite[Theorem 5.1]{HP1}) we know that $n\geq 2$, and the exceptional locus $E$ of $\pi$ is the union of $B'$ and the components of degenerate fibers of $p$ with self-intersection at most $-2$; furthermore, each degenerate fiber of $p$ contains a unique $(-1)$-curve. Moreover, by the proof of Theorem 18 in \cite{Ma} (see also Theorem 5.7 in \cite{HP1}), there are only two possible types for the dual diagram of a degenerate fiber: in the first type, two strings of curves of self-intersection at most $-2$ are joined by a $(-1)$-curve; in the second type, a string of $(-2)$-curves is joined through a $(-1)$-curve to the middle of a string of curves of self-intersection at most $-2$. In our case we know $Y_\infty$ has exactly one $A_1$ and one $\frac{1}{9}(1,2)$ singularity. By general theory on the resolution of cyclic quotient singularities we know $E$ is the disjoint union of a $(-2)$-curve and a string consisting of a $(-2)$-curve and a $(-5)$-curve. Then one easily sees that the only possibility is that there is exactly one degenerate fiber of $\tY_\infty$, consisting of a string of $(-2)$-$(-1)$-$(-2)$-curves, and one of the $(-2)$-curves in the string intersects the horizontal section $B'$, which is a $(-5)$-curve. Clearly $\tY_\infty$ is then a toric blow-up of $\F_5$, and so $Y_\infty$ is toric and must be $\P(1,2,9)$. This completes the proof of the claim.
The degree of the branch locus follows from the Hurwitz formula for coverings. The degree 18 curve cannot pass through the point $[0:0:1]$, for otherwise the equation would be of the form $a_0x_3f_9(x_1,x_2)+a_1f_{18}(x_1,x_2)=0$. Then by Lemma \ref{quotient singularity} below the singularity on the branched cover is not a quotient singularity, so the cover cannot be $X_\infty$ by Proposition \ref{orbifold compactness}. This finishes the proof of Theorem \ref{double cover classification} in the degree one case.
In the degree two case we can follow exactly the same arguments, noticing that in the proof of the analogous claim we can only have one singularity in that case. We omit the details here.
\end{proof}
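To illustrate the Hurwitz computation used above in the degree one case: if $X_\infty\rightarrow Y_\infty=\P(1,2,9)$ is a double cover branched along a curve $B\in |\O(b)|$ (besides the point $[0:1:0]$), then $K_{X_\infty}$ is the pull-back of $K_{Y_\infty}+\frac{1}{2}B=\O(\frac{b}{2}-12)$, so
$$1=K_{X_\infty}^2=2\Big(\frac{b}{2}-12\Big)^2\O(1)^2=\frac{1}{9}\Big(\frac{b}{2}-12\Big)^2,$$
using $\O(1)^2=\frac{1}{18}$ on $\P(1,2,9)$. This gives $b=18$ or $b=30$, and only $b=18$ makes $K_{Y_\infty}+\frac{1}{2}B$ anti-ample, as required for $-K_{X_\infty}$ to be ample. The same computation on $\P(1,1,4)$, where $\O(1)^2=\frac{1}{4}$ and $-K=\O(6)$, yields the degree $8$ branch curve in the degree two case.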
In terms of equations we have the following:
\begin{cor}\label{equation degree one}
In the degree one case $X_\infty$ is either a sextic hypersurface in $\P(1,1, 2, 3)$ of the form $x_4^2=f_6(x_1, x_2, x_3)$, or a degree 18 hypersurface in $\P(1,2,9,9)$ of the form $
x_3^2+x_4^2=f_{18}(x_1, x_2)$.
\end{cor}
\begin{cor} \label{equation degree two}
In the degree two case $X_\infty$ is either a quartic hypersurface in $\P(1,1,1,2)$ of the form $x_4^2=f_4(x_1, x_2, x_3)$ or an octic hypersurface in $\P(1,1,4,4)$ of the form $x_3^2+x_4^2=f_8(x_1, x_2)$.
\end{cor}
\begin{lem} \label{quotient singularity}
Suppose $f$ is a polynomial and the surface $w^2=f(x, y)$ in $\C^3$, or its $\Z/2\Z$ quotient by $(x,y,w)\mapsto (-x, -y, -w)$, has a quotient singularity at the origin. Then $f$ must contain a monomial of degree at most three.
\end{lem}
\begin{proof}
If the singularity is a quotient singularity, then the singularity $w^2=f(x,y)$ in $\C^3$ is canonical, since the finite quotient map has no
branch divisor. The statement then follows from the criterion for canonicity in terms of the Newton polygon (cf. e.g., \cite{Ish}).
\end{proof}
The idea of using involutions to study $X_\infty$ was previously used in \cite{Tian1}, where some partial results were claimed. For example, in Proposition 6.1 of \cite{Tian1}, it was stated that in the degree two case $X_\infty$ can have at most $\frac{1}{4}(1,1)$ singularities besides canonical singularities and that $|-2K_{X_\infty}|$ is base point free. This agrees with the above result. But, as one can see from the following example, the claims in Proposition 6.2 of \cite{Tian1}, that in the degree one case $X_\infty$ can have at most one non-canonical singularity and that $|-2K_{X_\infty}|$ is base point free, are both incorrect.
Now we show explicit examples of K\"ahler-Einstein log Del Pezzo surfaces with non-canonical singularities in both degree one and two\footnote{We are indebted to A. Kasprzyk for discussions related to these examples \cite{KKL}.}. In the next two subsections it will be proved that both are parametrized by the moduli spaces.
\begin{exa} \label{degree2 toric}
Let $X_2^T$ be the quotient of $\P^1\times\P^1$ by the action of $\Z/4\Z$, where the generator $\xi$ of $\Z/4\Z$ acts by $\xi.([z_1:z_2], [w_1:w_2])=([\sqrt{-1}z_1: z_2], [-\sqrt{-1}w_1: w_2])$. Then it is easy to see that $X_2^T$ is a degree two log Del Pezzo surface, with two $A_3$ singularities and two $\frac{1}{4}(1,1)$ singularities. The standard product of round metrics on $\P^1\times\P^1$ descends to a K\"ahler-Einstein metric on $X_2^T$. The space $H^0(X_2^T, -K_{X_2^T})$ is spanned by the sections $z_1^2w_1^2$, $z_2^2w_2^2$, and $z_1z_2w_1w_2$. So a generic divisor in $|-K_{X_2^T}|$ is given by the union of two curves $z_1w_1+az_2w_2=0$ and $z_1w_1+bz_2w_2=0$ for $a\neq b$, and is thus reducible. The space $H^0(X_2^T, -2K_{X_2^T})$ is spanned by the sections $z_1^4w_1^4, z_2^4w_2^4, z_1^2z_2^2w_1^2w_2^2, z_1^3z_2w_1^3w_2, z_1z_2^3w_1w_2^3, z_1^4w_2^4, z_2^4w_1^4$. The subspace $U$ spanned by the first five sections is generated by $H^0(X_2^T, -K_{X_2^T})$. The involution $\sigma$ maps $([z_1:z_2], [w_1:w_2])$ to $([w_1: w_2], [z_1: z_2])$. The $+1$ eigenspace $V_1$ is six dimensional, spanned by $U$ and the element $z_1^4w_2^4+z_2^4w_1^4$. It is easy to see that the image of $X_2^T$ under the map defined by $V_1$ is the cone over the rational normal curve of degree $4$, i.e. $\P(1,1,4)$. The branch locus is defined by $z_1^4w_2^4=z_2^4w_1^4$, with singularities exactly at the two $A_3$ singularities. We can also see directly that $X_2^T$ is the hypersurface in $\P(1,1,4,4)$ defined by $x_1^4x_2^4=x_3x_4$. The map is given by
$$([z_1: z_2], [w_1: w_2])\mapsto (z_1w_1, z_2w_2, z_1^4w_2^4, z_2^4w_1^4). $$
Make a change of variable $x_3'=x_3+x_4$ and $x_4'=x_3-x_4$, then the projection to the $(x_1, x_2, x_3')$ plane realizes $X_2^T$ as a double cover of $\P(1,1,4)$.
\end{exa}
\begin{exa} \label{degree1 toric}
Let $X_1^T$ be the example studied in Section \ref{AG input}. It is a toric degree one K\"ahler-Einstein log Del Pezzo surface with one $A_8$ singularity and two $\frac{1}{9}(1,2)$ singularities. It can be viewed as a hypersurface in $\P(1,2, 9,9)$ given by the equation $x_3x_4=x_2^9$. The embedding is defined by
$$[z_1:z_2:z_3]\mapsto [z_1: z_2z_3: z_2^9: z_3^9].$$
The projection map $\P(1, 2, 9, 9)\rightarrow \P(1,2,9)$ sending $[x_1:x_2:x_3:x_4]$ to $[x_1: x_2: x_3+x_4]$ realizes $X_1^T$ as the double cover of $\P(1, 2, 9)$, branched along the rational curve $x_2^9=x_3^2$.
On $X_1^T$ the holomorphic involution $\sigma$ simply exchanges $z_2$ with $z_3$.
One can see the pluri-anti-canonical linear systems on $X_1^T$ explicitly. $H^0(X_1^T, -K_{X_1^T})$ is spanned by $z_1^3$ and $z_1z_2z_3$, so it has a fixed component $z_1=0$. $H^0(X_1^T, -2K_{X_1^T})$ is spanned by $z_1^6, z_1^4z_2z_3, z_1^2z_2^2z_3^2, z_2^3z_3^3$, so it has two base points $[0:1:0]$ and $[0:0:1]$. We will show below that $X_1^T$ is the Gromov-Hausdorff limit of a sequence of K\"ahler-Einstein degree one Del Pezzo surfaces. This implies that Proposition 6.2 in \cite{Tian1} is incorrect. Similarly it is easy to see that $|-3K_{X_1^T}|$ is base point free.
As before we have an eigenspace decomposition $H^0(X_1^T, -mK_{X_1^T})=V_m\oplus W_m$ for $\sigma$. Then $|V_6|$ is base point free, and it defines the embedding of $\P(1,2,9)$ into $\P^{15}$ by sections of $\O(18)$.
\end{exa}
\subsection{Degree two case} \label{degree two case}
We first recall the moduli space constructed in \cite{Mukai}. For a smooth Del Pezzo surface $X$ of degree 2 the anti-canonical map is a double covering to $\P^2$ branched along a smooth quartic curve $F_4$. The geometric invariant theory for quartic curves is well-understood
(cf. \cite{Mum}) as follows. (Note that Mukai's citation \cite[9.3]{Mukai} misses one case.)
\begin{lem}
A quartic curve $F_4$ in $\P^2$ is:
\begin{itemize}
\item stable $\Longleftrightarrow$ it has only rational double points $A_1$ or $A_2$;
\item strictly polystable $\Longleftrightarrow$ it is a double conic or
a union of two reduced conics that are tangent to each other at two points, at least one of them being smooth (called the cateye and the ox in \cite{HL}).
\end{itemize}
\end{lem}
\noindent
So the quotient $Q:=\mathbb{P}_*(Sym^4\C^3)^{ss}//PGL(3;\C)$
parametrizes certain canonical log Del Pezzo surfaces of degree $2$, away from the double conic. The stable curves parametrize surfaces with at worst $A_1$ or $A_2$ singularities, the double conic parametrizes a non-normal surface, and the other polystable curves parametrize surfaces with exactly $2A_3$ singularities (plus, in the case of the ox, one additional $A_1$ singularity; see the end of this subsection).
As in \cite{Mukai}, we blow up the point corresponding to the double conic to obtain a new variety, denoted by $M_2$. Let $E$ be the exceptional divisor. Then, as in \cite{Shah}, we know $E$ is isomorphic to the GIT moduli space
$\mathbb{P}_*(Sym^8\C^2)^{ss}//PGL(2;\C)$, parametrizing binary octics $f_8(x, y)$. Moreover,
\begin{thm}
$M_2$ is an analytic moduli space of log Del Pezzo surfaces of degree two. For any $[s]\notin E$, $X_s$ is the double cover of $\P^2$ branched along the polystable quartic defined by $[s]$, and for $[s]\in E$, $X_s$ is the double cover of $\P(1,1,4)$ (i.e. the cone over the rational normal curve in $\P^5$) branched along the hyperelliptic curve $z_3^2=f_8(z_1, z_2)$, where $f_8$ is the polystable binary octic defined by $[s]$.
\end{thm}
The proof uses some ideas of \cite{Shah} as written in \cite{Mukai},
but note that the argument in \cite{Shah} is incomplete: it establishes neither the existence of the moduli algebraic stack nor the fact that
the blow up is its coarse moduli scheme, since no family has been constructed there.
The argument in \cite{Shah}
is curve-wise and only verifies the properness criterion formally.
\begin{proof}
Let $H_4$ be the Hilbert scheme of quartics in $\P^2$, and fix a non-degenerate conic $C=\{q=0\}$.
We identify the automorphism group of $C$ with $PGL(2;\C)$
(The notation $PGL(2;\C)$ is only used in this way within this proof, so no confusion should arise.)
Denote by $\Psi$ the (9-dimensional) $PGL(2;\C)$-invariant subspace of $H^0(\mathbb{P}^2, \mathcal{O}(4))$ that corresponds to $H^0(C,\mathcal{O}(4)|_{C})$.
Take an affine space $\mathbb{A}\simeq \C^9$ in $H_4$ which represents $\{ q^2+f_4(x, y, z)\}$ for all quartics $f_4 \in \Psi$. From the construction, this gives
a Luna \'{e}tale slice. Note that the blow up $\B$ of $\mathbb{A}$ at $0$ is a closed subvariety of $\mathbb{A}\times \mathbb{P}_*(\mathbb{A})$, and let $\mathbb E$ be its exceptional divisor. Let $\mathcal{B}\subset \mathbb{A}\times (\mathbb{A}\setminus \{0\})$ be the cone over $\B$, and $\mathcal{E}=\{0\}\times (\mathbb{A}\setminus \{0\})$ be the cone over $ \mathbb E$.
To each point $(a, b) \in \mathcal{B}$ we associate
the curve $q^2+b=0$ in $\mathbb{P}^2$. These curves form a flat projective family $\mathcal Q$ over $\mathcal{B}$.
On the other hand, consider the trivial family of $(\mathbb{P}^2, C)$ over $\mathcal{B}$.
We blow up
$C\times \mathcal{E}$ and contract the strict transform of
$\mathbb{P}^2\times \mathcal{E}$. It is possible because $\mathcal{E}$ is a
Cartier divisor in $\mathcal{B}$ and the classical degeneration (deformation to the normal cone of $C$) of $\mathbb{P}^2$
to $\mathbb{P}(1,1,4)$ over a smooth curve is constructed in the same way, so
we can do it locally and glue the contraction morphism. Denote the family constructed in this way by $\mathcal{P}\rightarrow \mathcal{B}$.
The generic fibers are $\mathbb{P}^2$ and special fibers (those over $\mathcal E$) are $\mathbb{P}(1,1,4)$.
We also obtain a natural family of conics $C_\mathcal{P}\subset \mathcal{P}$ over $\mathcal{B}$.
All of the above process is $PGL(2;\C)\times \mathbb{C}^*$-equivariant. Thus we can construct a $PGL(2;\C)$-invariant complement of $\mathbb{C}q^2$
in $H^0(\mathcal{P}_u, \mathcal{O}(2C_\mathcal{P}))$ ($u\in \mathcal B$) in a continuous way, and extend the family of quartics $\mathcal Q|_{(\mathcal B \setminus \mathcal E)}$
to the whole $\mathcal{B}$. We denote the new total space by
$\mathcal{D}$. Notice that over $\mathbb E$ this is a family of binary octics. Then we construct $\mathcal{S}$ as the double cover of
$\mathcal{P}$ branched along $\mathcal{D}$. As everything is again
$PGL(2;\C)\times \mathbb{C}^*$-equivariant, we can first divide by $\mathbb{C}^*$ and obtain
a $\mathbb{Q}$-Gorenstein flat family $S$ of degree two
log Del Pezzo surfaces over $\B$.
There is still an action of $PGL(2;\C)$ on $\B$. We consider GIT with respect to this action and
with $PGL(2; \C)$-linearized line bundle $\mathcal{O}_\B(-\mathbb E)$.
The natural morphism $\B^{ss}//PGL(2;\C)\rightarrow \mathbb{A}//PGL(2;\C)$ is
an isomorphism away from $\mathbb E\subset \B$ and $0\in \mathbb{A}$. So this is a blow up with exceptional divisor $\mathbb E^{ss}//PGL(2;\C)$.
By the local picture of GIT (\cite[Prop 5.1]{Shah}),
we can see that $\mathbb{A}//PGL(2;\C)\rightarrow H_4^{ss}//PGL(3;\C)$ is \'etale
(or in differential geometric language, local bi-holomorphism)
around $0$. This follows completely the same way as in \cite[Prop 5.1]{Shah}
or the proof of famous Luna \'etale slice theorem.
Hence, the blow up $\B^{ss}//PGL(2;\C)\rightarrow \A//PGL(2;\C)$ induces a blow up $M_2$ of $H_4^{ss}//PGL(3;\C)$.
To see that $M_2$ is an analytic moduli space for degree two log Del Pezzo surfaces, we only need to check item (2) in the definition. For this, one simply notices that, by construction, for any $[s]\in M_2$ there is a Luna slice $V$ in $H_4$ or in $\B$ (depending on whether $[s]$ is in $E$ or not). Then by versality there is an $\Aut(X_{s})$-equivariant analytic map from $V$ to $\Kur(X_s)$, which descends to a map $\Phi_U$ from a small analytic neighborhood $U=V//\Aut(X_{s})$ of $[s]$ to the GIT quotient $\Kur(X_s)//\Aut(X_s)$ such that $\Phi_U^{-1}(0)=[s]$. Then it follows that $\Phi_U$ is a finite map onto an open neighborhood of $0$.
In terms of \'etale topology one can also directly check the versality by going through our construction.
We only need to check our
$(H_4^{ss}\setminus PGL(3;\C)q^2) \coprod \B^{ss}$ is versal in \'etale
topology. That is,
given a $\mathbb{Q}$-Gorenstein family $f\colon \mathcal{X}\to S$ of our log del Pezzo surfaces of degree $2$, there is a morphism
$\tilde{S}\rightarrow (H_4^{ss}\setminus PGL(3;\C)q^2) \coprod \B^{ss}$
compatible with fibers where $\tilde{S}\rightarrow S$ is an \'etale cover. For this, we can first construct a degenerating family of $\mathbb{P}^2$ to
$\mathbb{P}(1,1,4)$ over $S$ and from the $\mathbb{Q}$-Gorenstein deformation theory of $\mathbb{P}(1,1,4)$ (with $1$-dimensional smooth semi-universal deformation space) we know that the locus of $\mathbb{P}(1,1,4)$ should be a Cartier divisor, so that we can reverse the process to obtain a family of reduced quartics in $\mathbb{P}^2$.
Thus we have a compatible morphism to
$(H_4^{ss}\setminus PGL(3;\C)q^2) \coprod \B^{ss}$
locally in the \'etale topological sense.
\end{proof}
\begin{rmk}
In terms of algebro-geometric language, $M_2$ coarsely represents the algebraic stack $\mathcal{M}_2$ constructed by gluing together the quotient stacks $[\B^{ss}/PGL(2;\C)]$ and $[(H_4^{ss}\setminus PGL(3;\C)q^2)/PGL(3;\C)]$.
\end{rmk}
\begin{rmk}
Replacing the blow up and its cone as above by a weighted blow up and its quasi-cone,
the argument in \cite{Shah} can be completed to prove that the blow up is a coarse moduli
scheme of degree two K3 surfaces and their degenerations.
\end{rmk}
The degree two case of Theorem \ref{MT} then follows from the fact that all smooth degree two Del Pezzo surfaces are parametrized by $M_2$ and from
\begin{thm} \label{degree two perfect}
$M_2$ has property (KE).
\end{thm}
\begin{proof} By Theorem \ref{double cover classification} there are two possibilities for $X\in M_2^{GH}$: it is either a double cover of $\P^2$ branched along a quartic $f_4(x_1,x_2,x_3)=0$, or a double cover of $\P(1,1,4)$ branched along a hyperelliptic octic curve $x_3^2-f_8(x_1,x_2)=0$. It suffices to show $f_4$ and $f_8$ are polystable. For this we use Theorem \ref{KEtoKstability} and Theorem \ref{CM stability}. When applying Theorem \ref{CM stability}, in the first case we choose $S=\P_*({\it Sym}^4(\C^3))$; in the second case we choose $S=\P_*(Sym^8(\C^2))$.
\end{proof}
So we also conclude that $\M_2$ is a KE moduli stack. As it is immediately clear from the proof, the complex conjugation acts on $M_2$ by the natural anti-holomorphic involution.
\begin{rmk} \label{Tian conjecture}
In \cite{Tian1} it is conjectured that degenerations of K\"ahler-Einstein Del Pezzo surfaces should have canonical singularities. In this section we have seen that this conjecture is in general false, as all the surfaces parametrized by the exceptional divisor $E$ have exactly two non-canonical singularities of type $\frac{1}{4}(1,1)$. In general dimension, one expects the compact moduli space of smoothable $\Q$-Fano varieties to parametrize only varieties with log terminal singularities, see \cite{DS}. This type of singularity also appears to be the worst allowed for K-semistability of Fano varieties, see \cite{Od2}.
\end{rmk}
We finish this subsection by a discussion on the surfaces parametrized by the ox and cateyes, which will be used in our study of degree one case.
These are defined by equations in $\P(1,1,1,2)$ parametrized by $\lambda=[\lambda_1:\lambda_2]$ in $\P^1\setminus \{[1:1]\}$, and we denote them by $X_2^{\lambda}$. The equation of $X_2^{\lambda}$ is
$$w^2=(\lambda_1 z^2+xy)(\lambda_2 z^2+xy). $$
It is clear that interchanging $\lambda_1$ and $\lambda_2$ gives isomorphic surfaces. When $\lambda$ is $[1:0]$ or $[0:1]$, the branch locus is an \emph{ox} and the surface $X_2^\infty=X_2^{\lambda}$ has exactly two $A_3$ singularities plus one $A_1$ singularity; otherwise the branch locus is a \emph{cateye} and $X_2^\lambda$ has exactly two $A_3$ singularities. By Theorem \ref{degree two perfect} all the surfaces in this family admit K\"ahler-Einstein metrics. As $\lambda$ tends to $[1:1]$ these K\"ahler-Einstein surfaces converge to $X_2^T$, with the obvious K\"ahler-Einstein metric.
One can see that $X_2^\infty$ is a global quotient of $\P^1\times \P^1$, as follows. Consider the action of $\Z/4\Z$ on $\P^1\times \P^1$, where the generator $\xi$ acts by
\[
\xi. ([z_1: z_2], [w_1:w_2])=([-w_1:w_2], [z_1:z_2]).
\]
Then there are exactly four points with nontrivial isotropy. Let $Y$ be the quotient. Then the points $([0:1], [0:1])$ and $([1:0], [1:0])$ are $A_3$ singularities and $([1:0], [0:1])$
and $([0:1], [1:0])$ are $A_1$ singularities.
One can see that the anti-canonical map $p$ from $Y$ to $\P^2$ is given by
$$([z_1:z_2], [w_1:w_2])\mapsto (z_1^2w_1^2: z_2^2w_2^2: z_1^2w_2^2+z_2^2w_1^2), $$
and the corresponding involution to the double covering structure
is $$\sigma. ([z_1:z_2], [w_1:w_2])=([w_1:w_2], [z_1:z_2]). $$
The branch locus is defined by $xy(z^2-4xy)=0$ in $\P^2$, which is isomorphic to the ox. So $Y$ is exactly $X_2^\infty$, and it admits an explicit K\"ahler-Einstein metric.
Notice that neither $\P^1\times \P^1$ nor $\P^2$ has deformations, so their quotients by any finite group have no equisingular deformations. But clearly, for $\lambda\neq [1:0], [0:1]$, $X_2^\lambda$ has nontrivial equisingular deformations, so it cannot be a global quotient of $\P^2$ or $\P^1\times \P^1$.
\subsubsection{Relation with moduli of curves}\label{curve.2}
Naturally, considering the associated branch locus of each double cover (i.e. the bi-anti-canonical map), we can regard our moduli space $M_2$ as the GIT moduli of bi-canonically
embedded Hilbert polystable genus $3$ curves constructed in \cite{HL}.
Indeed, by a direct comparison,
the corresponding sets of parametrized curves are the same.
We have a $1$-dimensional family of tacnodal curves and a $5$-dimensional family of hyperelliptic curves.
They intersect at one point, corresponding to the curve $z^2=x^4y^4$ in $\mathbb{P}(1,1,4)$.
From this point of view, a proof that the moduli space is a blow up of the GIT moduli of plane quartics is given in \cite{Arte}, due to David Hyeon. Our proof recovers this result,
modulo the criterion of Hilbert stability.
Thus a natural question would be the corresponding ``Del Pezzo surface
modular interpretation'' for the flipped contraction which contracts the tacnodal
locus in the paper \cite{HL}. In general, we can ask:
\begin{quest}
What are the modular interpretations {\textit{via log Del Pezzo surfaces}}
for each step of the Hassett-Keel program in \cite{HL}? In addition, are there also
stability interpretations for them?
\end{quest}
\subsection{Degree one case}\label{Degree1}
From Section \ref{DG input} we know that for any $X\in M_1^{GH}$, there are only three possible types for the non-canonical singularities. Moreover, we have:
\begin{lem}
The canonical singularities of $X\in M_1^{GH}$ are among $A_1, \cdots, A_8$ and
$D_4$.
\end{lem}
\begin{proof} This follows from Theorem \ref{Bishop-Gromov} and the Noether formula for singular surfaces (cf. e.g., \cite{HP})
$$
\rho(X)+K_X^2+\sum _{P\in Sing(X)} \mu_P =12\chi(\O_X)-2,
$$
where $\rho(X)$ is the Picard rank of $X$ and $\mu_P$ denotes the Milnor number. Notice that $\chi(\O_X)=1$ by the Kodaira vanishing theorem and that the Milnor number of an $A_k$, $D_k$ or $E_k$ singularity is $k$.
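Concretely, in degree one we have $K_X^2=1$ and $\chi(\O_X)=1$, so the formula reads
$$
\rho(X)+\sum_{P\in Sing(X)}\mu_P = 9,
$$
and since $\rho(X)\geq 1$ this forces $\sum_P \mu_P\leq 8$; in particular no singularity with Milnor number greater than $8$ can occur, and the remaining types not listed in the statement are excluded by Theorem \ref{Bishop-Gromov}.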
\end{proof}
We mention that, by using the K\"ahler-Ricci flow and calculating certain $\alpha$-invariants, it was proved in \cite{Wang}, \cite{CK} that a degree $1$ log Del Pezzo surface with only $A_n$ singularities admits a K\"ahler-Einstein metric, if $n\leq 6$.
\subsubsection{First step: GIT}\label{dP1.Gore}
By Corollary \ref{equation degree one}, a Gromov-Hausdorff limit in degree one is either a double cover of $\P(1,1,2)$ branched along a sextic or a double cover of $\P(1,2, 9)$ branched along a degree 18 curve. As the first step, we will construct a moduli space of surfaces that are double covers of $\P(1,1,2)$ branched along a sextic not passing through $[0:0:1]$.
These surfaces are hypersurfaces $\{w^2=F(x,y,z)\} \subset \mathbb{P}(1,1,2,3)$, where $F$ contains a nonzero $z^3$ term.
Although the automorphism
group of $\mathbb{P}(1,1,2)$ is non-reductive,
we can construct a compact moduli space of such sextics in
$\mathbb{P}(1,1,2)$ which are polystable in appropriate GIT sense, following \cite{Shah}.
Instead of the honest automorphism group $Aut(\mathbb{P}(1,1,2))$, we consider the action of $SL(2;\C) \ltimes H^0(\mathbb{P}^1,\mathcal{O}(2))$
which is a finite cover of $Aut(\mathbb{P}(1,1,2))$ and a subgroup of
$Aut(\mathbb{P}(1,1,2),\mathcal{O}(2))$ (i.e. it also acts on the
linearization).
First we eliminate the translation action of $H^0(\mathbb{P}^1,\mathcal{O}(2))$ by requiring that the coefficient of $z^2$ vanishes. Thus we only need to consider surfaces of the form $$w^2=z^3+f_4(x,y)z+f_6(x, y).$$
Then, by dividing out by the natural $\mathbb{C}^*$-action
on $f_4$ and $f_6$ with weights $4, 6$ respectively,
we obtain a weighted projective space $\P_s:=\mathbb{P}(2,2,2,2,2,3,3,3,3,3,3,3)$
as a parameter space. What is left is the action of $SL(2;\C)$ in the two variables $x, y$. Thus we get a GIT quotient
$$M_1':=\P_s^{ss}//SL(2;\C)$$ as
a moduli space. This is similar to \cite{Shah}, where the GIT of degree $12$ curves in $\P(1,1,4)$ was studied. We have the following classification of singularities for the polystable locus (compare \cite{Shah}, Theorem 4.3):
\begin{lem} \label{degree one stable classification}
With respect to the GIT stability of the above $SL(2;\C)$-action, our surface $[w^2=z^3+zf_4(x,y)+f_6(x, y) \subset \mathbb{P}(1,1,2,3)]$
is:
\begin{enumerate}
\item stable if and only if it contains at worst $A_k$ singularities;
\item strictly polystable if and only if it contains exactly two $D_4$ singularities or
is $SL(2;\C)$-equivalent to $p_0:=[-\frac{1}{3}(x^2+y^2)^2: \frac{2}{27}(x^2+y^2)^3]$ in $\P_s$ (in this case it is non-normal).
\end{enumerate}
\end{lem}
\begin{proof} By the numerical criterion, a point $f=[f_4:f_6]$ is unstable if and only if there is a point $u\in \P^1(x, y)$ such that $f_4$ and $f_6$ have multiplicity bigger than two and three at $u$ respectively. Without loss of generality, we may assume $u=[1:0]$, so that $y^3$ divides $f_4$ and $y^4$ divides $f_6$. Then it is easy to see that the corresponding sextic has a unibranch triple point at $u$ (i.e. one with a unique
tangent line). So the surface $X_f$ has an $E_k$ or worse singularity. Conversely, if $X_f$ has a singularity of type $E_k$ or worse, then by applying an element of $SL(2;\C)$ we may assume the singularity is at a point $[1:0:z_0]\in \P(1,1,2)$. In the affine chart where $x\neq0$, the sextic is of the form $z^3+z f_4(1,y)+f_6(1, y)$. It is easy to see that the only possible triple point must have $y=z=0$. It then follows that $[f_4: f_6]$ is unstable. Similarly, it is easy to see that $X_f$ is stable if and only if it contains at worst $A_k$ singularities, i.e. the sextic has at worst double points. If $X_f$ is strictly polystable, then $[f_4: f_6]$ must be in the $SL(2;\C)$-orbit of $[ax^2y^2: bx^3y^3]$ for some nonzero $[a:b]\in \P(2, 3)$. It is not hard to see that for $[a:b]\in \P(2, 3)$ not equal to $[-1/3: 2/27]$, $X_f$ has exactly two $D_4$ singularities.
\end{proof}
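As a quick sanity check of the special point $p_0$ in item (2) (our own verification, not part of the original argument), one can confirm symbolically that the corresponding branch sextic is non-reduced, which is why the associated double cover is non-normal:
\begin{verbatim}
# Sanity check: at p_0 = [-(1/3)(x^2+y^2)^2 : (2/27)(x^2+y^2)^3] the branch
# sextic z^3 + f_4 z + f_6 acquires a double component, so the double cover
# w^2 = z^3 + f_4 z + f_6 is non-normal.
import sympy as sp

x, y, z = sp.symbols('x y z')
q = x**2 + y**2
f4 = -sp.Rational(1, 3) * q**2
f6 = sp.Rational(2, 27) * q**3

sextic = z**3 + f4*z + f6
print(sp.factor(sextic))   # a squared factor (3*z - x**2 - y**2)**2 appears

# Equivalently: the sextic equals (z - q/3)^2 (z + 2q/3).
assert sp.expand(sextic - (z - q/3)**2 * (z + 2*q/3)) == 0
\end{verbatim}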
\begin{rmk}
We remark that, in the context of rational elliptic surfaces (which are the
blow ups of the base point of the complete anti-canonical system of a degree $1$ Del Pezzo surface), Miranda \cite{Mir}
also analyzed the equivalent GIT stability and constructed
the corresponding compactified moduli variety, which is isomorphic to our $M_1'$.
\end{rmk}
\subsubsection{Second step: Blow up}\label{dP1.blup.ss}
For compatibility with later discussions,
we rename the homogeneous coordinates $x,y,z,w$ of
$\P(1,1,2,3)$ as $x',y',z',w'$. Recall that in the statement of Theorem \ref{double cover classification}, when the Gromov-Hausdorff limit is a double cover of $\P(1,1,2)$, the branch locus could pass through the vertex.
This corresponds to the vanishing of the $z'^3$ term in $F(x',y',z')$.
By Lemma \ref{quotient singularity}, it is easy to see that if we want the surface to have only quotient singularities, there must be a term of the form $z'^2f_2(x', y')$, where $f_2$ must have rank at least one. On the other hand, for these surfaces, there is no obvious reason that they do not appear as Gromov-Hausdorff limits of K\"ahler-Einstein surfaces.
Indeed we have explicit examples of such surfaces which admit K\"ahler-Einstein metrics. The first is a one-dimensional family of degree one K\"ahler-Einstein log Del Pezzo surfaces, all Gorenstein except one, whose $f_2$ has rank two.
\begin{exa}
We consider a $\Z/2\Z$ action on the family of degree two surfaces $X_2^\lambda$ studied at the end of Section \ref{degree two case}. The action is given by $[x:y:z:w]\mapsto [x:y:-z:-w]$. The fixed points are exactly the singularities of $X_2^\lambda$. One can check that for $\lambda \neq [1:0], [0:1]$, the quotient $X_1^\lambda$ is a degree one log Del Pezzo surface with exactly two $D_4$ singularities. It is interesting that these surfaces admit a $\C^*$-action and correspond exactly to the polystable points in $M_1'$, except $p_0$. From the discussion at the end of Section \ref{degree two case} we see that they all admit K\"ahler-Einstein metrics.
For the surface $X_2^\infty$ the action also fixes the $A_1$ singularity $[0:0:1:0]$, so the quotient $X_1^\infty$ has two $D_4$ singularities and one $\frac{1}{4}(1,1)$ singularity. Denote the embedding $\P(1,1,2)\hookrightarrow \P^3$ by $[x':y':z']\mapsto [z': x'^2: x'y': y'^2]$. Then the bi-anti-canonical map realizes $X_1^\infty$ as a double cover of $\P(1,1,2)\subset \P^3$ branched along the curve $\{z'^2x'y'+x'^3y'^3=0\}$. Indeed, $|-2K_{X_1^{\infty}}|=|-2K_{X_2^{\infty}}|^{\mathbb{Z}/2\mathbb{Z}}=|\mathcal{O}_{\mathbb{P}(1,1,1,2)}(2)^{\mathbb{Z}/2\mathbb{Z}}|$, which is spanned by $x^2, xy, y^2, z^2$,
so the branch locus is $\{xyz(z-xy)=0\}$. The latter is isomorphic to the sextic described above.
So $X_1^\infty$ corresponds to the case that $f_2$ has rank two. Clearly $X_1^\infty$ admits a K\"ahler-Einstein metric, as a global quotient of $\P^1\times \P^1$.
\end{exa}
The next example, which will be important in our further modification, is a degree one K\"ahler-Einstein log Del Pezzo surface corresponding to $f_2$ having rank one.
\begin{exa}\label{d1new} Consider the degree two surface $X_2^{\gamma_0}$ with $\gamma_0=[1:-1]$.
It has two $A_3$ singularities, one at $[1:0:0:0]$ and one at $[0:1:0:0]$. Now consider the involution $\sigma: X_2^{\gamma_0}\rightarrow X_2^{\gamma_0}$ which sends $[x:y:z:w]$ to $[x:-y:-z:-w]$. Then $\sigma$ has two fixed points exactly at the two singularities. It is straightforward to check that the quotient, which we will denote by $X_1^e$ from now on, has one $A_7$ singularity and one $\frac{1}{8}(1,3)$ singularity.
$|-2K_{X_1^e}|$ is determined by the sections $\{x^2, y^2, yz, z^2\}\in H^0(\P(1,1,1,2), \O(2))$, and this defines a double covering map from $X_1^e$ to the quadric cone in $\P^3$.
The corresponding involution $\sigma$ maps $[x:y:z:w]$ to $[-x:-y:-z:-w]=[-x: y:z:w]$ (the identity holds on $X_1^e$). Then the fixed locus of $\sigma$ consists of the curve $w=0$ and the curve $x=0$. Denote again the embedding $\P(1,1,2)\hookrightarrow \P^3$ by $[x':y':z']\mapsto [z': x'^2: x'y': y'^2]$. The branch locus in $\P(1,1,2)$ is isomorphic to the sextic $z'^2x'^2-z'y'^4=0$. So $X_1^e$ corresponds to the case where $f_2$ has rank one. Again $X_1^e$ admits a K\"ahler-Einstein metric by the discussion at the end of Section \ref{degree two case}.
\end{exa}
We have the following refinement of Corollary \ref{equation degree one}.
\begin{lem} \label{Z8 is unique}
Let $X_\infty$ be the Gromov-Hausdorff limit of a sequence of degree one K\"ahler-Einstein Del Pezzo surfaces. If it is a hypersurface in $\P(1,1,2, 3)$ of the form $w^2=F_6(x, y, z)$, then either $F_6$ has a $z^3$ term, or $F_6$ is equivalent to $z^2(x^2+y^2)+zg_4(x, y)+g_6(x, y)$, or $X_\infty$ is isomorphic to $X_1^e$.
\end{lem}
\begin{proof}Consider the case when $F_6$ contains no $z^3$ term. Then we claim the term $z^2f_2(x,y)$ must not vanish. Otherwise $F_6=zf_4(x,y)+f_6(x,y)$, and in the affine chart $\{z\neq 0\}$ of $\P(1,1,2,3)$ we have the equation $w^2=f_4(x,y)+f_6(x,y)$; then by Lemma \ref{quotient singularity}, $X_\infty$ has a non-quotient singularity, so it cannot be a Gromov-Hausdorff limit by Theorem \ref{orbifold compactness}. So up to equivalence we may assume the $z^2$ term in $F_6$ is of the form $z^2(x^2+y^2)$ or $z^2x^2$. In the former case we are done, so we assume the latter. Then we can write
$$F_6(x,y, z)=z^2x^2+a zy^4+bzxf_3(x, y)+f_6(x,y). $$
Now if $a=0$, then again in the affine chart $\{z\neq 0\}$ we have the equation $w^2=x^2+bxf_3(x,y)+f_6(x, y)$. Then by a change of variables at $(0, 0,0)$ we may assume it is locally equivalent to $w^2=x^2+a_1 xy^3+a_2 xy^5+a_3 y^6$. It is easy to see that this is either non-normal or has an $A_i$ singularity with $i\ge 5$ at the origin. The corresponding singularity on $X_\infty$ is a $(\Z/2\Z)$-quotient by the action $(x, y, w)\mapsto (-x, -y, -w)$. So $X_\infty$ is either non-normal or has an orbifold point of order at least 12, and thus it cannot admit a K\"ahler-Einstein metric by Theorem \ref{Bishop-Gromov}.
So $a\neq 0$; then by a change of variables $y\mapsto y+cx$ and $z\mapsto z+g_2(x,y)$ we may assume
$$F_6(x, y, z)=z^2x^2+zy^4+f_6(x,y). \ \ (*)$$
$X_1^e$ is isomorphic to the surface defined by $w^2=z^2x^2+zy^4$. The one-parameter subgroup $\lambda(t)=(t^2, t, 1, t^2)$ degenerates every surface defined by $(*)$ to $X_1^e$ as $t$ tends to zero. Since $X_1^e$ admits a K\"ahler-Einstein metric, it has vanishing Futaki invariant. By Theorem \ref{KEtoKstability} we see that $X_\infty$ must be isomorphic to $X_1^e$.
\end{proof}
We first construct a moduli space for surfaces with $f_2$ of rank two, and we will show that these surfaces are parametrized exactly by a weighted blow up of $M_1'$ at $p_0$. The surfaces are defined by
\begin{equation}\label{dP1.exc}
w'^2=z'^2(x'^2+y'^2)+z'g_4(x', y')+g_6(x',y').
\end{equation}
As before, by considering the translation $z' \mapsto z'+a_2(x',y')$
for a suitable quadric $a_2(x',y')$, we may assume $g_4$ lies in the space $T(x', y'):=\C(x'+iy')^4\oplus \C(x'-iy')^4$, which is the $SO(2;\C)(\cong\C^*)$-invariant complement to the linear subspace of ${\it Sym}^4(\C x'\oplus \C y')$ consisting of polynomials divisible by $(x'^2+y'^2)$.
In this way, we obtain the GIT quotient $\P_e^{ss}//SO(2;\C):=\mathbb{P}(1,1,2,2,2,2,2,2,2)^{ss}//SO(2;\C)$
which parametrizes surfaces of the form (\ref{dP1.exc}). Here we need to specify the weight of $SO(2;\C)\cong\C^*$ on the linearization, and we choose the natural one, so the action corresponding to $(x'+iy')\mapsto \mu (x'+iy')$, $(x'-iy')\mapsto \mu^{-1}(x'-iy')$ has weight
\begin{equation}\label{wt}
(4, -4, 6, 4, 2, 0, -2, -4, -6),
\end{equation}
with respect to the basis consisting of
\begin{equation*}\label{bas1}
(x'+iy')^4, (x'-iy')^4,
\end{equation*}
and
\begin{eqnarray*}\label{bas2}
&&(x'+iy')^6, (x'+iy')^5(x'-iy'), \\&&
(x'+iy')^4(x'-iy')^2, (x'+iy')^3(x'-iy')^3, \\&&
(x'+iy')^2(x'-iy')^4, (x'+iy')(x'-iy')^5, (x'-iy')^6.
\end{eqnarray*}
Then we have the following.
\begin{lem}
The GIT quotient $\P_e^{ss}//SO(2;\C)$ with respect to the action with weight (\ref{wt}) above parametrizes log Del Pezzo surfaces, i.e. a polystable sextic defined by $[g_4:g_6]\in \P_e$ has only quotient singularities; more precisely, the corresponding Del Pezzo surface has exactly one $\frac{1}{4}(1,1)$ singularity besides canonical singularities.
\end{lem}
\begin{proof}
It is easy to check that if a sextic has the form $z'^2(x'^2+y'^2)+z'(a (x'+iy')^4+b(x'-iy')^4)+g_6(x', y')$ with $a, b\neq 0$, then it has only double points away from the vertex. If $a=b=0$, then, if it is stable, it has at most double points, and if it is polystable, it has exactly two $D_4$ singularities besides the vertex. If $a\neq0$ and $b=0$, then, if it is stable, the sextic has at most double points, and if it is semistable, it degenerates to $z'^2(x'^2+y'^2)+a (x'+iy')^3(x'-iy')^3$, which has two $D_4$ singularities.
\end{proof}
When we prove that the moduli space constructed at the end has property (KE), we will need to show the following:
\begin{lem} \label{CM exceptional}
A surface of the form (\ref{dP1.exc}) that admits a K\"ahler-Einstein metric must be GIT polystable with respect to the chosen linearization as above.
\end{lem}
\begin{proof}
This does not follow directly from the general Theorem \ref{CM stability}, since the group $SO(2;\C)\cong\C^*$ has nontrivial characters. But in our case this can be done by explicit analysis as follows.
Notice that since $\P_e$ contains a point parametrizing a K-polystable log Del Pezzo surface (e.g. $X_1^\infty$), the CM line bundle must be isomorphic to $\O(k)$ for $k>0$. This follows from the proof of Theorem \ref{CM stability}. $X_1^\infty$ corresponds to the vector $v=[0:0:0:0:0:1:0:0:0]$ in $\P_e$ with respect to the quasi-homogeneous
coordinates as above. So the weight of the action on the CM line bundle must also be the natural one as above, for otherwise it is easy to see that $v$ is unstable.
\end{proof}
The second step toward the construction of $M_1$ is to replace the point
$[p_0]\in M_1'$ (which corresponds to a non-normal surface) by the above GIT quotient.
\begin{thm}
There is a blow up $M_1''\rightarrow M_1'$ at $[p_0]$ (with a non-reduced ideal) so that $M_1''$ is an analytic moduli space for degree one log Del Pezzo surfaces. The exceptional divisor $E$ is isomorphic to $\P_e^{ss}//SO(2;\C)$. Moreover, a point $s\in M_1''$ parametrizes the polystable sextic hypersurface $X_s$ defined by it, and $s\in E$ if and only if the sextic passes through the vertex $[0:0:1]$.
\end{thm}
\begin{proof}
Let $\tilde{\mathbb{A}}\simeq {\it Sym}^4(\C x\oplus \C y)\oplus {\it Sym}^6(\C x
\oplus \C y)$ be the cone over $\mathbb{P}_s$. In the tangent space at the point $p_0=(-\frac{1}{3}(x^2+y^2)^2, \frac{2}{27}(x^2+y^2)^3)$, we take an $SO(2;\C)$-invariant Luna \'etale
slice $\mathbb{A}_f:=p_0+\{T(x, y)\oplus {\it Sym}^6(\C x\oplus \C y)\}$ in $\tilde{\mathbb{A}}$.
To include surfaces of the form (\ref{dP1.exc}), let $\A_g=T(x', y')\oplus {\it Sym}^6(\C x'\oplus \C y')$, and consider the family of surfaces over $\A_g\times \C^*$ where we associate to $(g_4, g_6, t)$ the sextic
\begin{equation} \label{sextic}
tz'^3+z'^2(x'^2+y'^2)+z'g_4(x',y')+g_6(x',y').
\end{equation}
Making the change of variable
$$x':=tx, y':=ty, z':=z-\frac{t}{3}(x^2+y^2), $$
and
$$f_4(x, y)=-\frac{t^2}{3}(x^2+y^2)^2+ t^3 g_4(x,y); $$
$$f_6(x, y)=\frac{2t^3}{27}(x^2+y^2)^3-\frac{t^4}{3}(x^2+y^2)g_4(x,y)+t^5g_6(x,y), $$
the sextic in equation (\ref{sextic}) is then transformed into the form
$$t[z^3+f_4(x, y)z+f_6(x, y)]. $$
Hence it corresponds to the point $[f_4(x, y): f_6(x,y)]\in \A_f\subseteq \P_s$. If we keep $g_4$ and $g_6$ fixed and let $t$ tend to zero, this converges exactly to the point $p_0$.
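Since this change of variables is the heart of the blow-up construction, we record a symbolic verification of the identity (our own check, with generic coefficients $a_i$, $b_j$ standing in for $g_4$ and $g_6$; these names are only for illustration):
\begin{verbatim}
# Verify: t z'^3 + z'^2(x'^2+y'^2) + z' g_4(x',y') + g_6(x',y')
#         = t [ z^3 + f_4(x,y) z + f_6(x,y) ]
# under x' = t x, y' = t y, z' = z - (t/3)(x^2+y^2), with f_4, f_6 as above.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
a = sp.symbols('a0:5')        # generic coefficients of g_4
b = sp.symbols('b0:7')        # generic coefficients of g_6
g4 = lambda u, v: sum(a[i]*u**i*v**(4 - i) for i in range(5))
g6 = lambda u, v: sum(b[i]*u**i*v**(6 - i) for i in range(7))

q = x**2 + y**2
xp, yp, zp = t*x, t*y, z - t*q/3

sextic = t*zp**3 + zp**2*(xp**2 + yp**2) + zp*g4(xp, yp) + g6(xp, yp)
f4 = -t**2*q**2/3 + t**3*g4(x, y)
f6 = 2*t**3*q**3/27 - t**4*q*g4(x, y)/3 + t**5*g6(x, y)

assert sp.expand(sextic - t*(z**3 + f4*z + f6)) == 0
\end{verbatim}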
The equation (\ref{sextic}) defines a family of sextics over the trivial $\P_{x', y', z'}(1,1,2)$-bundle $\mathcal{P}'$ over $\A_g\times \C^*$, and it extends naturally over $\A_g\times\C$, which is the cone over the blow up $\B_g$ of $\A_g$ at $0$. This family is invariant under the $\C^*$-action $\lambda. (t, g_4, g_6):=(\lambda^{-1}t, \lambda g_4, \lambda^2 g_6)$, and thus
descends to a family over $\B_g$.
The above change of variables indeed defines an isomorphism $\Psi$ between $\mathcal{P}=\P_{x,y,z}(1,1,2)\times (\A_f\times\C^*)$ and $\mathcal{P}'$, and induces a $\C^*$-action on $\A_f$.
We decompose $\A_f$ as $\A_f=p_0+(L_1\oplus L_2)$, where
$$L_1:=\{(f_4(x,y), -\frac{1}{3} (x^2+y^2)f_4(x,y))\mid f_4\in T(x,y)\}, $$and
$$L_2:= {\it Sym}^6(\C x\oplus \C y)\subset \A_f. $$
Denote the associated ideals of $L_i+p_0$ in $\mathbb{A}_f$ by
$I_{(L_i+p_0)}$. Then we define $\B_f$ to be the blow up of $\mathbb A_f$ at $I_{(L_1+p_0)}^2+I_{(L_2+p_0)}$. The exceptional divisor is isomorphic to $\P_e$.
Then by pulling back by $\Psi$ we obtain a flat family of sextics over $\B_f$, and the exceptional divisor parametrizes sextics of the form (\ref{dP1.exc}).
Similarly to the degree $2$ case, we consider GIT of $\B_f$
with respect to the $SO(2;\C)$-action, and get
a certain blow up $\B_f^{ss}//SO(2;\C)\rightarrow \A_f//SO(2;\C)$.
This induces a
blow up of
$\mathbb{P}_s//SL(2;\C)$ at $[p_0]$, with exceptional divisor $E\cong \P_e^{ss}//SO(2;\C)$. We denote this by $M_1''\rightarrow M_1'$.
From the construction, as in the previous section, $M_1''$ is an analytic moduli space and a coarse moduli space of an algebraic stack which is constructed by gluing
$$[\B_f^{ss}/SO(2;\C)]$$ naturally with
$$[(\mathbb{P}_s^{ss}\setminus (PGL(2;\C).p_0))/PGL(2;\C)]$$ in our context.
\end{proof}
\subsubsection{Construction of moduli: further modifications}
We have a further refinement of Corollary \ref{equation degree one}, parallel to Lemma \ref{Z8 is unique}.
\begin{lem} \label{toric is unique}
Let $X_\infty$ be the Gromov-Hausdorff limit of a sequence of degree one K\"ahler-Einstein Del Pezzo surfaces. Then $X_\infty$ is a sextic hypersurface in $\P(1,1, 2, 3)$ of the form $x_4^2=f_6(x_1, x_2, x_3)$, or isomorphic to the toric surface $X_1^T$.
\end{lem}
\begin{proof} By Theorem \ref{double cover classification}, we may assume $X_\infty$ is a degree 18 hypersurface in $\P(1,2, 9,9)$ of the form $x_4^2=f_{18}(x_1, x_2, x_3)$ not passing through the point $[0:0:1]$. So we may assume $f_{18}(x_1, x_2, x_3)= x_3^2+g_{18}(x_1, x_2)$. If the term $x_2^9$ appears in $g_{18}$, then the one-parameter subgroup $\Lambda$ acting with weight $(0, 9, 2, 2)$ degenerates $x_4^2-f_{18}$ to $x_4^2-x_3^2-a x_2^9$. This induces a test configuration for $X_\infty$ with central fiber isomorphic to $X_1^T$. Since $X_1^T$ has vanishing Futaki invariant, and $X_\infty$ is K-polystable, we conclude that $X_\infty$ must be isomorphic to $X_1^T$. If $x_2^9$ does not appear in $g_{18}$, then the one-parameter subgroup $\Lambda$ acting with weight $(0, 0, 1,1)$ degenerates $x_4^2-f_{18}(x_1,x_2, x_3)$ to $x_4^2-x_3^2$. Again this induces a test configuration for $X_\infty$ with central fiber the non-normal hypersurface $Y$ defined by $x_4^2-x_3^2=0$. We claim this has zero Futaki invariant, thus contradicting the fact that $X_\infty$ is K-polystable. To see the claim, note that the Futaki invariant for a $\C^*$-action on a connected fixed component in the Hilbert scheme is constant. Since $X_1^T$ obviously degenerates to $Y$ and is fixed by the same $\Lambda$, we can compute the Futaki invariant on $X_1^T$, which is zero since it is K\"ahler-Einstein.
\end{proof}
The analytic moduli space $M_1''$ constructed in the previous section does not have property (KE), since it does not parametrize the two examples $X_1^e$ and $X_1^T$, which we are unable to show cannot appear as Gromov-Hausdorff limits. So we have to modify $M_1''$; the only problem is to fit these two surfaces into it. We first illustrate this phenomenon of modifying a GIT quotient by a simple example.
\begin{exa}
Let $\C^*$ act linearly on $\C^2$ by $t. (z_1, z_2)=(t z_1, z_2)$. Then the quotient is isomorphic to $\C$, and the polystable locus consists of the points on the line $\{0\}\times\C$. If we remove the origin $(0,0)$, then the quotient is again isomorphic to $\C$, but the polystable locus differs from the previous one in that the orbit of the origin is replaced by the punctured line $\C^*\times\{0\}$.
\end{exa}
Our situation is very similar to this. We first investigate the $\Q$-Gorenstein deformations of $X_1^T$ studied in Section 3. Adopting the notation there, we have:
\begin{lem}
A point $v=(v_1, v_2, v_3)\in \Def(X_1^T)$ is polystable under the action of $\Aut^0(X_1^T)$ if and only if $v_1$, $v_2$ and $v_3$ are all non-zero or all zero, and $(0,0,0)$ is the only strictly polystable point.
\end{lem}
\begin{proof} If $v_1=0$, then we can destabilize $v$ by the one-parameter subgroup $\lambda(t)=(t^{-1}, 1)$. If $v_2=0$, then we can destabilize $v$ by the one-parameter subgroup $(1, t^{-1})$. If $v_3=0$, then we can destabilize $v$ by the one-parameter subgroup $(t^3, t^2)$. If all the $v_i$'s are non-zero, then for $\lambda(t)=(t^a, t^b)$ to destabilize $v$ we need $a-b\geq0$, $-3a+6b\geq0$, and $-3a-3b\geq0$. It is easy to see that no non-trivial such pair $(a, b)$ exists.
\end{proof}
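The last step is elementary; as a sanity check (ours, a brute-force scan over a small integer window, not a proof), one can confirm that the three inequalities admit no nontrivial common solution:
\begin{verbatim}
# Scan integer pairs (a, b): the only pair in this window satisfying
# a - b >= 0, -3a + 6b >= 0 and -3a - 3b >= 0 is (0, 0).
sols = [(a, b) for a in range(-20, 21) for b in range(-20, 21)
        if a - b >= 0 and -3*a + 6*b >= 0 and -3*a - 3*b >= 0]
print(sols)   # [(0, 0)]
\end{verbatim}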
To fit $X_1^{T}$ into our moduli space, since we may locally identify $\Kur(X_1^T)$ with $\Def(X_1^T)$, and
the $(\mathbb{C}^*)^2$-action on $\Kur(X_1^T)$ is compatible with the one on $\Def(X_1^{T})$, it suffices to study the GIT on $\Def(X_1^T)$.
By the above lemma, the stable points all represent canonical log Del Pezzo surfaces with at most a unique $A_k$ ($k\leq 7$) singularity, and the polystable point $0$ represents $X_1^T$. The GIT quotient $Q$ is then smooth at $[X_1^{T}]$.
The semistable orbits $(0,v_2,v_3)$ (where $0<|v_2|^2+ |v_3|^2\ll 1$) represent a log Del Pezzo surface with a unique $A_8$ singularity.
Since this surface is unique up to isomorphism by \cite{Furu}, we denote it by $X_1^a$. By
Lemma \ref{degree one stable classification}, it has a discrete automorphism group and is parametrized by a point $u_0$ in $M_1''\setminus E$.
Consider the analytic subset $\Kur'(X_1^T)$ of $\Kur(X_1^T)$ which represents only canonical log Del Pezzo surfaces, i.e. which consists of points with $v_2\neq 0$ and $v_3\neq 0$. Then the corresponding quotient $Q'$ can be identified with the previous quotient $Q$: the identification matches every stable orbit, except that the orbit of $X_1^a$ is replaced by that of $X_1^T$. $Q'$ can be viewed as the universal deformation space of $X_1^a$. There is an analytic neighborhood $U$ of $u_0$ and an embedding $\iota: U\rightarrow Q'=Q$ such that $\iota(u_0)=0$, and such that $u$ and $\iota(u)$ parametrize equivalent surfaces. In terms of stack language, the open embedding of stacks
$[\Kur'(X_1^T)/(\mathbb{C}^*)^2]\hookrightarrow [\Kur(X_1^T)/(\mathbb{C}^*)^2]$
induces an isomorphism of the categorical moduli.
Now we can simply define $M_1'''=M_1''$ as a variety and only change the surface parametrized by $u_0$ from $X_1^a$ to $X_1^T$. Then it is clear that $M_1'''$ is again an analytic moduli space of degree one log Del Pezzo surfaces. So this modification takes care of the point $X_1^T$.
Now we treat $X_1^e$ in a similar fashion. First notice that the linear system $|-2K_{X_1^e}|$ realizes $X_1^e$ as the double cover of $\P(1,1,2)$, thus $\Aut^0(X_1^e)$ is induced from $\Aut(\P(1,1,2))$. Then one sees that $\Aut^0(X_1^e)\cong\C^*$ corresponds to the scaling $\lambda(t)=(t^2, t, 1, t^2)$. By Lemma \ref{local-global}, we have
$$\Def(X_1^e)=\Def'\oplus \Def_1\oplus \Def_2, $$
where $\Def'$ corresponds to equisingular deformations, $\Def_1$ corresponds to deformations of the local singularity at $[0 :0:1:0]$, and $\Def_2$ corresponds to deformations of the local singularity at $[1:0:0:0]$. By general theory $\Def_1$ is two dimensional and $\Def_2$ is seven dimensional. Thus by dimension counting we must have $\Def'=0$.
We can write down a semi-universal deformation family over $\Def(X_1^e)$:
$$w^2=z^2x^2+zy^4+a_1 z^3+a_2z^2y^2+\sum_{i=0}^6 b_ix^iy^{6-i}, $$
where $(a_1, a_2)\in \Def_1$ and $(b_0, \cdots, b_6)\in \Def_2$.
It is also easy to see that the weights of the action are
$$\lambda(t). (a, b)=(t^{-4}, t^{-2}, t^{8}, t^6, \cdots, t^2). $$
So in the local GIT quotient by $\Aut(X_1^e)$, a point $(a,b)$ is stable if and only if $a\neq 0$ and $b\neq0$, in which case $X_{a, b}$ has either a unique $A_k$ ($k\leq 6$) singularity or a $\frac{1}{4}(1,1)$ singularity plus an $A_k$ ($k\leq 6$) singularity.
When we remove the subspace $\{0\}\oplus \Def_2$, every point becomes stable. In particular, the quotient of the subspace $\{(a, 0): a\neq 0\}$ is exactly a $\P^1$, which parametrizes the surfaces in $M_1''$ defined by
$$w^2=a_1z^3+z^2x^2+zy^4+a_2 z^2y^2, $$
and intersects the exceptional divisor at one point, corresponding to $a_1=0$.
It is easy to see that $\lambda(t)$ degenerates all these surfaces to $X_1^e$ as $t$ tends to infinity, so they cannot admit K\"ahler-Einstein metrics, and we need to remove them.
Notice that this family does not include the point corresponding to $X_1^a$, so we can make a further modification simultaneously with the previous one. When we add back the subspace $\{0\}\oplus \Def_2$, the points $(a, 0)$ with $a\neq 0$ become semistable and in the GIT quotient they are contracted to the point $0$. To be more precise, we take the neighborhood $U$ in $\Def(X_1^e)$ consisting of points $(a, b)$ with $||a|-1|\ll 1$ and $|b|\ll 1$; its quotient $V$ by $\C^*$ gives rise to a tubular neighborhood of the $\P^1$ in $M_1''$. When we add back the subspace $\{0\}\oplus \Def_2$, we see that $V$ gets mapped to a neighborhood of $0$ in the local GIT quotient, with the $\P^1$ contracted to $0$.
As before the GIT on $\Kur(X_1^e)$ and on $\Def(X_1^e)$ are equivalent so this allows us to perform the contraction in an analytic neighborhood of the $\P^1$ inside $M_1'''$. We obtain a new analytic moduli space $M_1$, which enjoys the Moishezon property.
Thus it has a natural structure of an algebraic space as well.
Theorem \ref{MT} in degree one case then follows from the theorem below.
\begin{thm} \label{degree one perfect}
$M_1$ has property (KE).
\end{thm}
\begin{proof}
The proof is very similar to that of Theorem \ref{degree two perfect}. By Lemma \ref{toric is unique} we only need to show that if $X\in M_1^{GH}$ is a sextic hypersurface in $\P(1,1,2,3)$ defined by $w^2=f_6(x, y,z)$, then it is parametrized by some element of $M_1''$. If $f_6$ contains a term $az^3$ with $a\neq0$, then it is parametrized by a point $u$ in $\P_s$. Then by Theorem \ref{KEtoKstability} and Theorem \ref{CM stability}, keeping in mind that $\P_s$ has Picard rank one, we conclude that $u$ is polystable under the $SL(2;\C)$-action, thus $X$ is parametrized by a point $p$ in $M_1'$. Then $X$ cannot be isomorphic to $X_1^T$ or to any surface in the $\P^1$ family above. So $X$ is parametrized by a point in $M_1$. If the term $z^3$ does not appear in $f_6$, then by Lemma \ref{Z8 is unique} and Lemma \ref{CM exceptional}, $X$ is either isomorphic to $X_1^e$ or is parametrized by a polystable point $u\in \P_e$. Again this point $u$ cannot lie on the $\P^1$, and this means that $u$ is in $M_1$.
\end{proof}
We can construct a KE moduli stack $\mathcal{M}_1$
by gluing the previously constructed moduli stack with $[U/\Aut(X_1^e)]$ where $U$ is some open $\Aut(X_1^e)$-invariant neighborhood of $0\in \Kur(X_1^e)$
(along $[(U\setminus (\{0\}\oplus \Def_2))/\Aut(X_1^e)]$).
We can show that for a small enough $\Aut(X_1^e)$-invariant open neighborhood $U$ of $0$ in $\Kur(X_1^e)$, the stack
$[(U\setminus (\{0\}\oplus \Def_2))/\Aut(X_1^e)]$
has a natural \'etale morphism to the previously constructed
moduli stack so that the glueing is possible. Indeed, the $\mathbb{Q}$-Gorenstein deforming component (cf. \cite[section 5]{KS}) of a Luna \'etale slice in
the Hilbert scheme $\Hilb(\mathbb{P}(H^{0}(X_1^T,-K_{X_1^T}^{\otimes m})))$ at $[X_1^T]$ with respect to the standard $\SL$ action
is \'etale-locally a semi-universal deformation, by the universality of the Hilbert scheme. Then the \'etale-local uniqueness of
the semi-universal family tells us that it is actually \'etale-locally equivalent to $U$, together with the family on it. The assertion then follows from the universality of the Hilbert scheme again. Note in particular that $U$ includes the subspace $\Def_{1}\oplus \{0\}$, so that the categorical moduli of
the open immersion $[(U\setminus (\{0\}\oplus \Def_2))/\Aut(X_1^e)] \hookrightarrow [U/\Aut(X_1^e)]$ represents the contraction
of $\mathbb{P}^1$.
Then $\M_1$ is a KE moduli stack and the $M_1$ constructed above is a KE moduli space. This completes the proof of Theorem \ref{MT} in the degree $1$ case as well. Note that our contraction of $\mathbb{P}^1$ on the coarse quotient is constructed just on an \'etale cover, which is not
a priori an open substack. Indeed it is not, although we omit the lengthy proof of that fact.
This is the reason our argument is not enough to show that $M_1$ is a (projective) variety.
Completely as before, there is a natural anti-holomorphic involution on $M_1$ which gives rise to the complex conjugation.
\subsubsection{A remark on a conjecture of Corti}
In the paper \cite{Cor}, Corti conjectured the following,
motivated by the possibility of using birational geometry to get
certain ``nice'' integral models over a discrete valuation ring:
\begin{conj}[{\cite[Conjecture 1.16]{Cor}}]
For an arbitrary smooth punctured curve $C\setminus \{p\}$ and a smooth family of
Del Pezzo surfaces $f\colon \mathcal{X}\to (C\setminus \{p\})$ over it,
we can complete it to a flat family $\bar{f}\colon \bar{\mathcal{X}}\to C$ which satisfies:
\begin{itemize}
\item $\mathcal{X}$ is terminal.
\item The $\mathbb{Q}$-Gorenstein index of $\bar{\mathcal{X}}_{p}$ is either $1, 2, 3$ or $6$, and $-6K_{\bar{\mathcal{X}}_{p}}$ is very ample.
\end{itemize}
\end{conj}
\noindent
He called $\bar{\mathcal{X}}$ the \textit{standard model}. We have the following partial solution to the above;
it is rather weak, in the sense that we permit a base change, but
on the other hand we even have a classification of the possible central fibers.
\begin{prop}
For an arbitrary smooth punctured curve $C\setminus \{p\}$ and a smooth family of
Del Pezzo surfaces $f\colon \mathcal{X}\to (C\setminus \{p\})$ over it, after a
possibly ramified base change $p' \in \tilde{C}\rightarrow C$ (with $p'\mapsto p$), we can fill the punctured family $\mathcal{X}\times _{(C\setminus \{p\})} (\tilde{C}\setminus \{p'\})$
to a flat family $\bar{\mathcal{X}}' \rightarrow \tilde{C}$ such that:
\begin{itemize}
\item $\bar{\mathcal{X}}'$ is terminal.
\item The $\mathbb{Q}$-Gorenstein index of $\bar{\mathcal{X}}'_{p'}$ is either $1$ or $2$, and
$-6K_{\bar{\mathcal{X}}'_{p'}}$ is very ample.
\end{itemize}
\end{prop}
\begin{proof}
We have constructed the moduli stack $\M_1''$ by gluing quotient stacks of certain GIT semistable loci
(subsection \ref{dP1.blup.ss}).
From the construction, it is a universally closed stack and
it parametrizes log del Pezzo surfaces of $\mathbb{Q}$-Gorenstein index
$1$ or $2$. The $\mathbb{Q}$-Gorenstein property of $\mathcal{X}$ follows from
our construction as well.
\end{proof}
\subsubsection{Relation with moduli of curves}\label{curve.1}
We expect the KE moduli variety $M_1$ to be a divisor in
one of the geometric compactifications of the moduli of curves of genus $4$.
In particular, we suspect that our moduli space $M_1$ is
the prime divisor of $\overline{M}_4(a)$ with $\frac{23}{44}<a<\frac{5}{9}$ in \cite{CJL}. Note that it is the moduli of Hilbert polystable
canonical curves.
\section{Further discussion} \label{K moduli}
\subsection{Some remarks}
\subsubsection{Lower bound of the Bergman function}
The main technical part in the proof of Proposition \ref{Fano limit} is a uniform lower bound of the Bergman function. Let $(X, J, \omega, L)$ be a polarized K\"ahler manifold; then for any $k$ there is an induced metric on $H^0(X, L^k)$. The Bergman function is defined by
$$\rho_{k, X}(x)=\sum |s_\alpha|^2(x), $$
where $\{s_\alpha\}$ is any orthonormal basis of $H^0(X, L^k)$. The Kodaira embedding theorem says that for fixed $X$ and sufficiently large $k$ the Bergman function is always positive. It is proved in \cite{DS} that for an $n$-dimensional K\"ahler-Einstein Fano manifold $(X, J, \omega)$, we always have $\rho_{k, X}(x)\geq \epsilon$ for some integer $k$ (and thus every positive multiple of $k$) and some $\epsilon>0$ depending only on $n$. This was named the ``partial $C^0$ estimate'' in \cite{Tian1}, where it is also proved in the two dimensional case. It was explained in \cite{DS} that one may not take $k$ to be all sufficiently large integers, and in our proof of the main theorem we have seen examples; see Remark \ref{Tian conjecture}. Indeed, we found explicitly all the integers $k$ that we need to take in each degree in order to ensure a uniform positivity of the Bergman function for all K\"ahler-Einstein Del Pezzo surfaces (compare the strong partial $C^0$ estimate in \cite{Tian1}, Theorem 2.2):
\begin{itemize}
\item $d=4, 3$: $k\geq 1$;
\item $d=2$: $k=2l$, with $l\geq 1$;
\item $d=1$: $k=6l$, with $l\geq 1$.
\end{itemize}
\subsubsection{K\"ahler-Einstein metrics on del Pezzo orbifolds}
As a consequence of our main Theorem \ref{MT}, we have a complete classification of K\"ahler-Einstein Del Pezzo surfaces with at worst canonical singularities in terms of K-polystability.
\begin{cor}
Let $X$ be a Del Pezzo surface with at worst canonical singularities. Then
$$X \mbox{ admits a K\"ahler-Einstein metric} \Longleftrightarrow X \mbox{ is K-polystable}.$$
\end{cor}
\begin{proof}
The direction ``$\Longrightarrow$'' is known by Theorem \ref{KEtoKstability}. To prove the other direction, suppose that $X$ is K-polystable and has at worst canonical singularities (in particular it is automatically $\Q$-Gorenstein smoothable). Then by
Theorem \ref{CM stability} $X$ is also polystable with respect to the stability notions that we used in the construction of our moduli spaces, i.e. $[X] \in M_d$. Thus $X$ admits a K\"ahler-Einstein metric as a consequence of Theorem \ref{MT}.
\end{proof}
The above result answers the conjecture of Cheltsov and Kosta (\cite{CK}, Conjecture 1.19) on the existence of K\"ahler-Einstein metrics on canonical Del Pezzo surfaces. In particular, we have the following exact list of possible singularities that can occur. Let $X$ be a degree $d\leq 4$ Del Pezzo surface with canonical singularities; then it admits a K\"ahler-Einstein metric if and only if $X$ is smooth or
\begin{itemize}
\item $d=4$: Sing($X$) consists of only two $A_1$ singularities and $X$ is simultaneously diagonalizable, or exactly four singularities (in which case $X$ is isomorphic to $X_4^T$);
\item $d=3$: Sing($X$) consists of only points of type $A_1$, or of exactly three points of type $A_2$ (in which case $X$ is isomorphic to $X_3^T$);
\item $d=2$: Sing($X$) consists of only points of type $A_1$, $A_2$, or of exactly two $A_3$ singularities;
\item $d=1$: Sing($X$) consists of only points of type $A_k$ $(k\leq 7)$, or of exactly two $D_4$ singularities, and $X$ is not isomorphic to one of the surfaces in the $\P^1$ family in the last section.
\end{itemize}
As we have seen, the class of log Del Pezzo surfaces with canonical singularities is not sufficient to construct a KE moduli variety. In particular we have found some $\Q$-Gorenstein smoothable K\"ahler-Einstein log Del Pezzo surfaces, hence K-polystable, with non-canonical singularities. Thus it is natural to ask the following differential geometric/algebro-geometric question: do there exist other $\Q$-Gorenstein smoothable K\"ahler-Einstein/K-polystable log Del Pezzo surfaces besides the ones which appear in our KE moduli varieties? If the answer to the above question is negative (as we conjecture) then the Yau-Tian-Donaldson conjecture for K-polystability also holds for the class of $\Q$-Gorenstein smoothable Del Pezzo surfaces. For this it is of course sufficient to prove the following: let $\pi\colon \mathcal{X} \to \Delta$ be a $\Q$-Gorenstein deformation of a K-polystable Del Pezzo surface $X_0$ over the disc $\Delta$ such that the generic fibers $X_t$ are smooth (hence admit K\"ahler-Einstein metrics). Then $X_0$ admits a K\"ahler-Einstein metric $\omega_0$, and $(X_0, \omega_0)$ is the Gromov-Hausdorff limit of a sequence of K\"ahler-Einstein metrics on the fibers $(X_{t_i},\omega_{t_i})$ for some sequence $t_i\rightarrow 0$.
\subsection{On compact moduli spaces}
In this final section, we would like to formulate a conjecture about the existence of certain compact moduli spaces of K-polystable/K\"ahler-Einstein Fano varieties. Before stating our conjecture, we recall some important steps in the history of the construction and compactifications of moduli spaces of varieties.
For complex curves of genus $g\geq 2$, the construction of the moduli spaces, and their ``natural'' compactifications, was completed during the seventies by Deligne, Mumford, Gieseker and others \emph{using} GIT. The degenerate curves appearing in the compactification are the so-called ``stable curves'', i.e., curves with nodal singularities and discrete automorphism groups. Let us recall that these compact moduli spaces also have a ``differential geometric'' interpretation. It is classically well-known that every curve of genus $g$ has a unique metric of constant Gauss curvature with fixed volume. As the curves move towards the boundary of the Deligne-Mumford compactification, the diameters, with respect to the constant curvature metrics, go to infinity and finally these metric spaces ``converge'' to a complete metric with constant curvature and hyperbolic cusps on the smooth part of a ``stable curve''.
The construction of compact moduli spaces of higher dimensional polarized varieties turns out to be much more complicated than in the one dimensional case.
Indeed, in the seminal paper \cite{KS} the authors discovered examples of surfaces with ample canonical class and semi-log-canonical singularities (the natural singularities to be considered for the compactification) which are \emph{not} asymptotically GIT stable. The central point of this phenomenon is that there are semi-log-canonical singularities which have ``too big'' a multiplicity compared to the one required for asymptotic Chow stability
(\cite{Mum2}). Nevertheless, proper separated moduli of canonical models of surfaces of general type have been recently constructed using birational geometric techniques instead of classical GIT. These compactifications are sometimes known as
\textit{Koll\'ar-Shepherd-Barron-Alexeev (KSBA) type moduli}.
It is then natural to ask what is the ``differential geometric'' interpretation of these kind of moduli spaces.
In order to discuss this last point, we first recall that GIT theory became again a main theme for the following reason: the existence of a K\"ahler-Einstein, or more generally constant scalar curvature, metric on a polarized algebraic variety is found to be deeply linked to some GIT stability notions, e.g., asymptotic Chow and Hilbert stability, and in particular to the formally GIT-like notion of ``K-stability'' introduced in \cite{Tian2}, \cite{Do1}. Similarly to the previous discussion, asymptotic Chow stability does not seem to fully capture the existence of a K\"ahler-Einstein metric, since there are examples of K\"ahler-Einstein varieties which are asymptotically Chow \textit{un}stable (\cite{KS}, \cite{Od2}).
On the other hand, for $\Q$-Fano varieties it is indeed proved that the existence of a K\"ahler-Einstein metric implies K-polystability \cite{Berman}.
It turns out that the notion of K-stability is also closely related to the singularities allowed in the KSBA compactifications (\cite{Od1}, \cite{Od2}): for varieties with ample canonical class, the notion of K-stability \emph{coincides} with semi-log-canonicity, and for Fano varieties K-(semi)stability \emph{implies} log terminality. This last condition on the singularities is, in the Fano case, also important for differential geometric reasons. As recently shown in \cite{DS}, Gromov-Hausdorff limits of smooth K\"ahler-Einstein Fano manifolds (and more generally of polarized K\"ahler manifolds with control on the Ricci tensor, the injectivity radius and with bounded diameter) are indeed $\Q$-Fano varieties, i.e., they have at worst log-terminal singularities, and moreover they must be K-polystable, by \cite{Berman}.
Summing up, a central motivation of the present work was to investigate how K\"ahler-Einstein metrics and the compact moduli varieties are indeed related. Thus, motivated by our results on Del Pezzo surfaces and by the above discussion, we shall now try to state a conjecture on moduli of K\"ahler-Einstein/K-polystable Fano varieties.
Denote the category of algebraic schemes over $\C$ by $\mbox{Sch}_{\C}$,
and let $$\mathcal{F}_h\colon \mbox{Sch}_{\C}^{o} \rightarrow Set$$ be the contravariant moduli functor which sends an object $S\in Ob( \mbox{Sch}_{\C})$ to the set of isomorphism classes of $\Q$-Gorenstein flat families $\X\rightarrow S$ of K-semistable $\Q$-Fano varieties with Hilbert polynomial equal to $h$, and sends a morphism to the pull-back of families, making the corresponding square diagram commute. Moreover, adding isomorphism (or isotropy) structure
to this functor, we should naturally get a stack $\mathcal{M}_h$ about which we conjecture, refining \cite[Conjecture 1.3.1]{Spotti} and \cite[Conjecture 5.2]{Od0} in the $\Q$-Fano case, the following:
\begin{conj}
$\mathcal{M}_h$ is a KE moduli stack (cf. Definition \ref{KE moduli stack})
which has a categorical moduli algebraic space $$\mathcal{M}_h \rightarrow M_h, $$ where
$M_h$ is a \emph{projective variety} (in general possibly reducible) endowed with an ample CM line bundle.
Especially, $M_h$ is a KE moduli variety in the sense of Definition \ref{KE moduli stack}.
Let $M_h^{GH}$ be the Gromov-Hausdorff compactification of the moduli space of smooth K\"ahler-Einstein Fano manifolds with Hilbert polynomial $h$. Then there is a natural homeomorphism $$\Phi\colon M_h^{GH} \rightarrow M_h,$$ where we use the analytic topology on $M_h$.
\end{conj}
This paper explicitly
settles the above conjecture in the ($\mathbb{Q}$-Gorenstein smoothable) log del Pezzo surface case, except for the issue in the previous subsection and the statement about the CM line bundle. A remark is that the
CM line bundle \cite{PT} can be naturally regarded as a line bundle on $\mathcal{M}_h$
and so by ``CM line bundle on $M_h$" we mean a $\mathbb{Q}$-line bundle descended from $\mathcal{M}_h$.
The descent is possible for each $U_i\rightarrow U_i//G$ in the context of Definition \ref{KE moduli stack}
since for each K-semistable $x\in U_i$ the action of the identity component of the isotropy group of $G$ on the CM line over $x$ is trivial,
by the weight interpretation of the vanishing of the Futaki invariant \cite{PT}.
They canonically patch together due to the canonical uniqueness of
the descended line bundle on each $U_i//G$.
From the point of view of the authors, one way towards establishing the above conjecture in higher dimensions is by combining algebraic and differential geometric techniques, as we did in this article. In many concrete situations one can hope to construct the above KE moduli stack by glueing together quotient stacks coming from different GIT problems. This also fits into the general conjecture on Artin stacks
\cite[Conjecture 1]{Alp}.
Finally we remark that the points in the boundary $M_h \setminus M_h^{0}$ should correspond to $\Q$-Fano varieties, admitting weak K\"ahler-Einstein metrics in the sense of pluripotential theory \cite{EGZ}.
This is known for $M_{h}^{GH}\setminus M_h^0$, see \cite{DS}.
\section{Introduction}
We consider discrete-time state-space models. They can be described
by a latent Markov process $(X_{t})_{t\ge1}$ and an observation process
$(Y_{t})_{t\ge1}$, $(X_{t},Y_{t})$ being $\mathcal{X}\times\mathcal{Y}$-valued,
which satisfy $X_{1}\sim\mu(\cdot)$ and
\begin{equation}
X_{t+1}|\{X_{t}=x\}\sim f(\cdot|x)\qquad Y_{t}|\{X_{t}=x\}\sim g(\cdot|x)
\end{equation}
for $t\ge1$. Our goal is to sample from the posterior distribution of the latent states $X_{1:T}:=\left(X_{1},...,X_{T}\right)$
given a realization of the observations $Y_{1:T}=y_{1:T}$. This distribution admits a density given by
\begin{equation}
p(x_{1:T}|y_{1:T})\propto\mu(x_{1})g(y_{1}|x_{1})\prod_{t=2}^{T}f(x_{t}|x_{t-1})g(y_{t}|x_{t}).
\end{equation}
This sampling problem is now commonly addressed using a Markov chain Monte Carlo (MCMC) scheme
known as the iterated conditional sequential Monte Carlo (cSMC) sampler \cite{Andrieu_Doucet_Holenstein_2010}
and extensions of it; see, e.g., \cite{ShestopaloffNeal2018}. This
algorithm relies on an SMC-type proposal mechanism. A limitation of
these algorithms is that they typically use data only up to time $t$
to propose candidate states at time $t$, whereas the entire sequence
$y_{1:T}$ is observed in the context we are interested in. To address
these issues, various lookahead techniques have been proposed in the
SMC literature; see \cite{Chen2013} for a review. Alternative approaches
relying on a parametric approximation of the backward information
filter used for smoothing in state-space models \cite{Briers2010}
have also been recently proposed in \cite{ScharthKohn2016,Guarniero_Lee_Johansen_2017,Ruiz_Kappen_2017,Heng2017}.
When applicable, these iterative methods have demonstrated good performance.
However, it is unclear how these ideas could be adapted to the MCMC
framework investigated here. Additionally these methods are difficult
to put in practice for multimodal posterior distributions.
In this paper, we propose a novel approach to building proposals
for cSMC that take all of the observed data into account,
based on conditioning on replicas of the state variables. Our approach
is based purely on Monte Carlo sampling, bypassing any need for functional
approximations in the estimation of the backward information filter.
The rest of this paper is organized as follows. In Section \ref{sec:Iterated-Conditional-Sequential},
we review the iterated cSMC algorithm and outline its limitations.
Section \ref{sec:Replica-Iterated-Conditional} introduces the replica
iterated cSMC methodology. In Section \ref{sec:Examples}, we demonstrate
the methodology on a linear Gaussian model, two non-Gaussian state
space models from \cite{ShestopaloffNeal2018} as well as the Lorenz-96
model from \cite{Heng2017}.
\section{Iterated cSMC\label{sec:Iterated-Conditional-Sequential}}
The iterated cSMC sampler is an MCMC method for sampling from a target
distribution of density $\pi\left(x_{1:T}\right):=\pi_T\left(x_{1:T}\right)$. It relies on a modified SMC scheme targeting a sequence of auxiliary target probability
densities $\{\pi_t\left(x_{1:t}\right)\}_{t=1,...,T-1}$ and a sequence of proposal densities $q_{1}\left(x_{1}\right)$ and $q_{t}(x_{t}|x_{t-1})$ for $t\in\{2,...,T\}$. These target densities are such that $\pi_t(x_{1:t})/\pi_{t-1}(x_{1:t-1})\propto \beta_t(x_{t-1},x_t)$.
\subsection{Algorithm}
We define the `incremental importance weights' for $t\geq2$ as
\begin{align}
w_{t}(x_{t-1},x_{t})&:=\frac{\pi_t\left(x_{1:t}\right)}{\pi_{t-1}\left(x_{1:t-1}\right)q_{t}(x_{t}|x_{t-1})}\propto\frac{\beta_t(x_{t-1},x_t)}{q_{t}(x_{t}|x_{t-1})} \label{eq:incrementalweight}
\end{align}
and for $t=1$ as
\begin{equation}
w_{1}(x_{0},x_{1}):=\frac{\pi_1(x_{1})}{q_{1}(x_{1})}.
\end{equation}
\begin{algorithm}[t]
\protect\caption{Iterated cSMC kernel $K\left(x_{1:T},x'_{1:T}\right)$~\label{alg:CSMC}}
cSMC step.
\begin{enumerate}
\item \textsf{At time} $t=1$
\begin{enumerate}
\item \textsf{Sample $b_{1}$ uniformly on $[N]$ and set} $x_{1}^{b_{1}}=x_{1}.$
\item \textsf{For }$i\in\left[N\right]\backslash\{b_{1}\}$, \textsf{sample}
$x_{1}^{i}\sim q_{1}\left(\cdot\right)$.
\item \textsf{Compute} $w_{1}(x_{0}, x_{1}^{i})$ for $i\in\left[N\right]$.
\end{enumerate}
\item \textsf{At times} $t=2,\ldots,T$
\begin{enumerate}
\item \textsf{Sample $b_{t}$ uniformly on $[N]$ and set} $x_{t}^{b_{t}}=x_{t}$.
\item \textsf{For }$i\in\left[N\right]\backslash\{b_{t}\}$, \textsf{sample
}\\$a_{t-1}^{i}\sim$ Cat$\{ w_{t-1}(x_{t-2}^{a_{t-2}^{j}},x_{t-1}^{j});j\in[N]\}$.
\item \textsf{For }$i\in\left[N\right]\backslash\{b_{t}\}$, \textsf{sample
}$x_{t}^{i}\sim q_{t}(\left.\cdot\right\vert x_{t-1}^{a_{t-1}^{i}})$\textsf{.}
\item \textsf{Compute} $w_{t}(x_{t-1}^{a_{t-1}^{i}}, x_{t}^{i})$ for $i\in\left[N\right]$.
\end{enumerate}
\end{enumerate}
Backward sampling step.
\begin{enumerate}
\item \textsf{At times} $t=T$
\begin{enumerate}
\item \textsf{Sample }$b_{T}\sim$ Cat$\{w_{T}(x_{T-1}^{a_{T-1}^{j}},x_{T}^{j});j\in[N]\}$.
\end{enumerate}
\item \textsf{At times} $t=T-1,...,1$
\begin{enumerate}
\item \textsf{Sample }$b_{t}\sim$ \\ Cat$\{\beta_{t+1}(x_t^j, x_{t+1}^{b_{t+1}})w_{t}(x_{t-1}^{a_{t-1}^{j}},x_{t}^{j});j\in[N]\}$.
\end{enumerate}
\end{enumerate}
Output $x'_{1:T}=x_{1:T}^{b_{1:T}}:=\left(x_{1}^{b_{1}},\ldots,x_{T}^{b_{T}}\right)$.
\end{algorithm}
We introduce a dummy variable $x_{0}$ to simplify notation.
We let $N\geq2$ be the number of particles used by the algorithm and $[N]:=\{1,...,N\}$.
We introduce the notation $\mathbf{x}_{t}=\left(x_{t}^{1},\ldots,x_{t}^{N}\right)\in\mathcal{X}^{N},$
$\mathbf{a}_{t}=\left(a_{t}^{1},\ldots,a_{t}^{N}\right)\in\left\{ 1,\ldots,N\right\} ^{N}$,
$\mathbf{x}_{1:T}=(\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{T}),$
$\mathbf{a}_{1:T-1}=(\mathbf{a}_{1},\mathbf{a}_{2},...,\mathbf{a}_{T-1}$)
and $\mathbf{x}_{t}^{-b_{t}}=\mathbf{x}_{t}\backslash x_{t}^{b_{t}}$,
$\mathbf{x}_{1:T}^{-b_{1:T}}=\left\{ \mathbf{x}_{1}^{-b_{1}},\ldots,\mathbf{x}_{T}^{-b_{T}}\right\} $,
$\mathbf{a}_{t-1}^{-b_{t}}=\mathbf{a}_{t-1}\backslash a_{t-1}^{b_{t}}$,
$\mathbf{a}_{1:T-1}^{-b_{2:T}}=\left\{ \mathbf{a}_{1}^{-b_{2}},\ldots,\mathbf{a}_{T-1}^{-b_{T}}\right\} $
and set $b_{t}=a_{t}^{b_{t+1}}$ for $t=1,...,T-1.$ \\
It can be shown that the iterated cSMC kernel, described in Algorithm \ref{alg:CSMC},
is invariant w.r.t. $\pi(x_{1:T})$. Given the current state
$x_{1:T}$, the cSMC step introduced in \cite{Andrieu_Doucet_Holenstein_2010}
samples from the following distribution
\begin{align}
\Phi(\left.\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\right\vert x_{1:T}^{b_{1:T}},b_{1:T})=\delta_{x_{1:T}}\left(x_{1:T}^{b_{1:T}}\right)\notag \\ \times {\displaystyle \prod\limits _{i=1,i\neq b_{1}}^{N}}q_1\left(x_{1}^{i}\right)\,{\displaystyle \prod\limits _{t=2}^{T}}\thinspace{\displaystyle \prod\limits _{i=1,i\neq b_{t}}^{N}}\lambda(\left.a_{t-1}^{i},x_{t}^{i}\right\vert \mathbf{x}_{t-1}),\label{eq:CPF}
\end{align}
where
\begin{align}
\lambda\left(\left.a_{t-1}^{i}=k,x_{t}^{i}\right\vert \mathbf{x}_{t-1}\right)&=\frac{w_{t-1}(x_{t-2}^{a_{t-2}^{k}},x_{t-1}^{k})}{\sum_{j=1}^{N}w_{t-1}(x_{t-2}^{a_{t-2}^{j}},x_{t-1}^{j})}~ \notag \\ &\times q_{t}(\left.x_{t}^{i}\right\vert x_{t-1}^{k}).
\end{align}
This can be combined with a backward sampling step introduced in \cite{Whiteley2010}; see \cite{Finke2016,ShestopaloffNeal2018} for a detailed derivation. It can be shown that the combination of these two steps defines a Markov kernel that preserves the following extended target distribution
\begin{align}
\gamma(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}},b_{1:T}):=\frac{\pi(x_{1:T}^{b_{1:T}})}{N^{T}} \notag \\ \times \ \Phi(\left.\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\right\vert x_{1:T}^{b_{1:T}},b_{1:T})\label{eq:extended}
\end{align}
as invariant distribution. In particular, it follows that if $x_{1:T}\sim \pi$ then $x'_{1:T}\sim \pi$. The algorithm is described in Algorithm 1, where we use the notation Cat$\{c_i;i\in[N]\}$ to denote the categorical distribution with probabilities $p_i\propto c_i$.
Iterated cSMC has been widely adopted for state space models, i.e. when the target is $\pi(x_{1:T})=p(x_{1:T}|y_{1:T})$. The default sequence of auxiliary targets one uses is $\pi_t(x_{1:t})=p(x_{1:t}|y_{1:t})$ for $t=1,...,T-1$ resulting in the incremental importance weights
\begin{equation}
w_{t}(x_{t-1},x_{t})\propto\frac{f(x_{t}|x_{t-1})g(y_{t}|x_{t})}{q_{t}(x_{t}|x_{t-1})}\label{eq:incrementalweight-2}
\end{equation}
for $t \geq 2$ and
\begin{equation}
w_{1}(x_{0},x_{1})\propto\frac{\mu(x_{1})g(y_{1}|x_{1})}{q_{1}(x_{1})}
\end{equation}
for $t=1$. Typically we will attempt to select a proposal which minimizes the
variance of the incremental weight, which at time $t \geq 2$ is $q_{t}^{\mathrm{opt}}(x_{t}|x_{t-1})=p(x_{t}|x_{t-1},y_{t})\propto g(y_{t}|x_{t})f(x_{t}|x_{t-1})$
or an approximation of it.
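To make Algorithm \ref{alg:CSMC} concrete, the following self-contained Python sketch (our own illustration; the model parameters, chain length and function names are hypothetical) implements one iterated cSMC sweep with backward sampling for a toy one-dimensional linear Gaussian model, using the bootstrap proposals $q_1=\mu$ and $q_t=f$ so that the incremental weights reduce to $g(y_t|x_t)$; for simplicity the reference path always occupies particle slot $0$.
\begin{verbatim}
# Illustrative sketch: one iterated cSMC sweep (cSMC step + backward sampling)
# for X_1 ~ N(0,1), X_t = 0.9 X_{t-1} + N(0,1), Y_t = X_t + N(0, 0.5^2),
# with bootstrap proposals so that w_t reduces to g(y_t | x_t).
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 16                      # time horizon and number of particles
phi, sig_x, sig_y = 0.9, 1.0, 0.5  # hypothetical model parameters

# Simulate synthetic data y_{1:T}.
x_true = np.zeros(T)
x_true[0] = rng.normal()
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + sig_x * rng.normal()
y = x_true + sig_y * rng.normal(size=T)

def log_g(yt, xt):
    """Log observation density g(y_t | x_t), up to an additive constant."""
    return -0.5 * ((yt - xt) / sig_y) ** 2

def csmc_sweep(x_ref):
    """One application of the iterated cSMC kernel to the reference path x_ref."""
    X = np.zeros((T, N))               # particles
    A = np.zeros((T, N), dtype=int)    # ancestor indices
    logW = np.zeros((T, N))            # log incremental weights

    X[0, 0] = x_ref[0]                 # reference particle in slot 0
    X[0, 1:] = rng.normal(size=N - 1)  # q_1 = mu, so w_1 = g(y_1 | x_1)
    logW[0] = log_g(y[0], X[0])

    for t in range(1, T):
        w = np.exp(logW[t - 1] - logW[t - 1].max())
        A[t, 0] = 0                    # reference path keeps its own ancestry
        A[t, 1:] = rng.choice(N, size=N - 1, p=w / w.sum())
        X[t, 0] = x_ref[t]
        X[t, 1:] = phi * X[t - 1, A[t, 1:]] + sig_x * rng.normal(size=N - 1)
        logW[t] = log_g(y[t], X[t])    # bootstrap: q_t = f, so w_t = g

    # Backward sampling of the output path.
    out = np.zeros(T)
    w = np.exp(logW[T - 1] - logW[T - 1].max())
    b = rng.choice(N, p=w / w.sum())
    out[T - 1] = X[T - 1, b]
    for t in range(T - 2, -1, -1):
        # beta_{t+1}(x_t, x_{t+1}) is proportional (in x_t) to f(x_{t+1} | x_t).
        log_f = -0.5 * ((out[t + 1] - phi * X[t]) / sig_x) ** 2
        lw = logW[t] + log_f
        w = np.exp(lw - lw.max())
        b = rng.choice(N, p=w / w.sum())
        out[t] = X[t, b]
    return out

# A short chain of iterated cSMC updates started from the zero path.
path = np.zeros(T)
for _ in range(200):
    path = csmc_sweep(path)
print(path[:5])
\end{verbatim}
Iterating the kernel, as in the last loop, yields a Markov chain whose invariant distribution is $p(x_{1:T}|y_{1:T})$ for this toy model.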
\subsection{Limitations of Iterated cSMC}
When using the default sequence of auxiliary targets for state space models, iterated cSMC does not exploit
a key feature of the problem at hand. The cSMC step typically uses
a proposal at time $t$ that only relies on the observation $y_{t}$,
i.e. $q_{t}(x_{t}|x_{t-1})=p\left(x_{t}|x_{t-1},y_{t}\right)$, as
it targets at time $t$ the posterior density $p\left(x_{1:t}|y_{1:t}\right)$.
In high dimensions and/or in the presence of highly informative observations,
the discrepancy between successive posterior densities $\{p\left(x_{1:t}|y_{1:t}\right)\}_{t\geq1}$
will be high. Consequently the resulting importance weights $\{w_{t}(x_{t-1}^{a_{t-1}^{i}},x_{t}^{i});i\in[N]\}$
will have high variance and the resulting procedure will be inefficient.
Ideally one would like to use the sequence of marginal
smoothing densities as auxiliary densities, that is $\pi_t(x_{1:t})=p\left(x_{1:t}|y_{1:T}\right)$ for $t=1,...,T-1$.
Unfortunately, this is not possible as $p\left(x_{1:t}|y_{1:T}\right)\propto p\left(x_{1:t}|y_{1:t-1}\right)p\left(y_{t:T}|x_{t}\right)$
cannot be evaluated pointwise up to a normalizing constant. To address
this problem in a standard SMC framework, recent contributions \cite{ScharthKohn2016,Guarniero_Lee_Johansen_2017,Ruiz_Kappen_2017,Heng2017}
perform an analytical approximation $\hat{p}\left(y_{t:T}|x_{t}\right)$
of the backward information filter $p\left(y_{t:T}|x_{t}\right)$
based on an iterative particle mechanism and target instead $\{\hat{p}\left(x_{1:t}|y_{1:T}\right)\}_{t\geq1}$
where $\hat{p}\left(x_{1:t}|y_{1:T}\right)\propto p\left(x_{1:t}|y_{1:t-1}\right)\hat{p}\left(y_{t:T}|x_{t}\right)$
using proposals of the form $q_{t}\left(x_{t}|x_{t-1}\right)\propto f\left(x_{t}|x_{t-1}\right)\hat{p}\left(y_{t:T}|x_{t}\right)$.
These methods can perform well, but they require a careful design of
the analytical approximation and are difficult to put into practice for
multimodal posteriors. Additionally, it is unclear how they could
be adapted to an iterated cSMC framework without introducing any bias.
Versions of iterated cSMC using an independent approximation to the
backward information filter based on Particle Efficient Importance
Sampling \cite{ScharthKohn2016} have been proposed \cite{GrotheKleppeLiesenfeld},
though they still require a choice of analytical approximation and
use a global approximation to the backward information filter. This can become inefficient in high-dimensional state scenarios.
\section{Replica Iterated cSMC\label{sec:Replica-Iterated-Conditional}}
We introduce a way to directly use the iterated cSMC algorithm to
target a sequence of approximations $\{\hat{p}\left(x_{1:t}|y_{1:T}\right)\}_{t\geq1}$ to the marginal smoothing densities of a state space
model. Our proposed method is based on sampling from a target over
multiple copies of the space as done in, for instance, the Parallel
Tempering or Ensemble MCMC \cite{Neal2011} approaches. However, unlike
in these techniques, we use copies of the space to define a sequence
of intermediate distributions in the cSMC step informed by the whole
dataset. This enables us to draw samples of $X_{1:T}$ that incorporate
information about all of the observed data. Related recent work includes
\cite{Leimkuhler2018}, where information sharing amongst an ensemble of
replicas is used to improve MCMC proposals.
\subsection{Algorithm}
We start by defining the replica target for some $K\geq2$ by
\begin{align}
\bar{\pi}(x_{1:T}^{(1:K)})=\prod_{k=1}^{K}p(x_{1:T}^{(k)}|y_{1:T}).
\end{align}
Each of the replicas $x_{1:T}^{(k)}$ is updated in turn by running
Algorithm \ref{alg:CSMC} with a different sequence of intermediate
targets which we describe here. Consider updating replica $k$ and let $\hat{p}^{(k)}(y_{t+1:T}|x_{t})$
be an estimator of the backward information filter,
built using replicas other than the $k$-th one, $x_{t+1}^{(-k)}=(x_{t+1}^{(1)},\ldots,x_{t+1}^{(k-1)},x_{t+1}^{(k+1)},\ldots,x_{t+1}^{(K)}).$
For convenience of notation, we take $\hat{p}^{(k)}(y_{T+1:T}|x_{T}):=1$. At time $t$, the cSMC targets
an approximation of the marginal smoothing distribution $p\left(x_{1:t}|y_{1:T}\right)$, as in \cite{ScharthKohn2016,Guarniero_Lee_Johansen_2017,Ruiz_Kappen_2017,Heng2017}.
This is of the form $\hat{p}^{(k)}\left(x_{1:t}|y_{1:T}\right)\propto p\left(x_{1:t}|y_{1:t}\right)\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$.
This means that the cSMC for replica $k$ uses the novel incremental weights at time $t\geq2$
\begin{align}
w_{t}^{\left(k\right)}(x_{t-1},x_{t}) &:= \frac{\hat{p}^{(k)}\left(x_{1:t}|y_{1:T}\right)}{\hat{p}^{(k)}\left(x_{1:t-1}|y_{1:T}\right)q_{t}(x_{t}|x_{t-1})} \\&\propto\frac{g(y_{t}|x_{t})f(x_{t}|x_{t-1})\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)}{\hat{p}^{(k)}\left(y_{t:T}|x_{t-1}\right)q_{t}(x_{t}|x_{t-1})} \notag
\end{align}
and $w_{1}^{\left(k\right)}(x_{0},x_{1})\propto g(y_{1}|x_{1})\mu(x_{1})\hat{p}^{(k)}(y_{2:T}|x_{1})/q_{1}(x_{1})$. We would like to use the proposal minimizing the variance of the incremental
weight, which at time $t\geq2$ is $q_{t}^{\mathrm{opt}}(x_{t}|x_{t-1})\propto g(y_{t}|x_{t})f(x_{t}|x_{t-1})\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$ or an approximation of it.
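A minimal sketch of this weight computation is given below (Python; the densities and backward estimates are passed in as generic callables since their concrete forms depend on the model and on the estimator discussed in Section \ref{subsec:Setup-and-Tuning}; the function name is ours).
\begin{verbatim}
def replica_incremental_weight(x_prev, x_new, y_t, f, g, q,
                               p_back_next, p_back_prev):
    # Unnormalized incremental weight w_t^(k) for one particle (a sketch).
    #   p_back_next(x) : estimate of p(y_{t+1:T} | x_t = x)
    #   p_back_prev(x) : estimate of p(y_{t:T}   | x_{t-1} = x)
    num = g(y_t, x_new) * f(x_new, x_prev) * p_back_next(x_new)
    den = p_back_prev(x_prev) * q(x_new, x_prev)
    return num / den
\end{verbatim}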
The full replica cSMC update for $\bar{\pi}$ is described in Algorithm
\ref{alg:replica-CSMC} and is simply an application of Algorithm \ref{alg:CSMC} to
a sequence of target densities for each replica. A proof of the validity of the algorithm is provided
in the Supplementary Material.
\begin{algorithm}
\protect\caption{Replica cSMC update~\label{alg:replica-CSMC}}
For $k=1,\ldots,K$
\begin{enumerate}
\item \textsf{Build an approximation $\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$ of $p\left(y_{t+1:T}|x_{t}\right)$ using the replicas $(x_{t+1}^{(1)'},\ldots,x_{t+1}^{(k-1)'},x_{t+1}^{(k+1)},\ldots,x_{t+1}^{(K)})$ \hspace{-0.2cm}for $t=1,...,T-1$}.
\item \textsf{Run Algorithm \ref{alg:CSMC} with target $\pi(x_{1:T}) = p(x_{1:T}|y_{1:T})$ and auxiliary targets
$\pi_{t}(x_{1:t}) = \hat{p}^{(k)}\left(x_{1:t}|y_{1:T}\right)$ for $t = 1,\ldots, T-1$ with initial state $x_{1:T}^{(k)}$ to return $x_{1:T}^{(k')}$}.
\end{enumerate}
Output $x_{1:T}^{(1:K)'}$.
\end{algorithm}
One sensible way to initialize the replicas is to set them to sequences sampled from standard independent
SMC passes. This will start the Markov chain not too far from equilibrium. For multimodal
distributions, initialization is particularly crucial, since we need to ensure that different
replicas are well-distributed amongst the various modes at the start of the run.
\subsection{Setup and Tuning\label{subsec:Setup-and-Tuning}}
The replica cSMC sampler requires an estimator $\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$ of
the backward information filter based on $x_{t+1}^{(-k)}$. For our algorithm, we propose an
estimator $\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$ that is not
based on any analytical approximation of $p\left(y_{t+1:T}|x_{t}\right)$
but simply on a Monte Carlo approximation built using the other replicas,
\begin{equation}
\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)\propto\sum_{j\neq k}\frac{f(x_{t+1}^{\left(j\right)}|x_{t})}{p(x_{t+1}^{\left(j\right)}|y_{1:t})},\label{eq:approxMonteCarlobackward}
\end{equation}
where $p\left(x_{t+1}|y_{1:t}\right)$ denotes the predictive density
of $x_{t+1}$. The rationale for this approach is that at equilibrium
the components of $x_{t+1}^{(-k)}$ are an iid sample from a product
of $K-1$ copies of the smoothing density, $p\left(x_{t+1}|y_{1:T}\right)$.
Therefore, as $K$ increases, (\ref{eq:approxMonteCarlobackward})
converges to
\begin{align}
&\int\frac{f\left(x_{t+1}|x_{t}\right)}{p\left(x_{t+1}|y_{1:t}\right)}p\left(x_{t+1}|y_{1:T}\right)dx_{t+1} \notag \\
& \propto\int f\left(x_{t+1}|x_{t}\right)p\left(y_{t+1:T}|x_{t+1}\right)dx_{t+1} \notag \\
& =p\left(y_{t+1:T}|x_{t}\right). \label{eq:backwardFilter}
\end{align}
In practice, the predictive density is also unknown and we need to use an approximation
of it. Whichever approximation $\hat{p}\left(x_{t+1}|y_{1:t}\right)$
of $p\left(x_{t+1}|y_{1:t}\right)$ we use, the algorithm remains valid. We note that for
$K = 2$, any approximation of the predictive density results in the same
incremental importance weights.
We propose to approximate the predictive density in (\ref{eq:backwardFilter}) by a constant over the entire latent space, i.e. $\hat{p}(x_{t+1}|y_{1:t}) = 1$. We justify this choice as follows. If we assume that we have informative observations, which is typical in many state space modelling scenarios, then $p(x_{t+1}|y_{1:T})$ will tend to be much more concentrated than $p(x_{t+1}|y_{1:t})$. Thus, over the region where the posterior has high density, the predictive density will be approximately constant relative to the posterior density. This suggests approximating the predictive density in (\ref{eq:backwardFilter}) by its mean with respect to the posterior density,
\begin{align}
&\int\frac{f\left(x_{t+1}|x_{t}\right)}{p\left(x_{t+1}|y_{1:t}\right)}p\left(x_{t+1}|y_{1:T}\right)dx_{t+1} \notag \\
&\approx \frac{\int f\left(x_{t+1}|x_{t}\right)p\left(x_{t+1}|y_{1:T}\right)dx_{t+1}}{\int p\left(x_{t+1}|y_{1:t}\right)p\left(x_{t+1}|y_{1:T}\right)dx_{t+1}} \notag \\
&\approx \frac{\frac{1}{K}\sum_{k=1}^{K} f(x_{t+1}^{(k)}|x_{t})}{\frac{1}{K}\sum_{k=1}^{K} p(x_{t+1}^{(k)}|y_{1:t})}. \label{eq:ConstPred}
\end{align}
Since the importance weights in cSMC at each time are defined up to a constant, sampling is not affected by the specific value of $\frac{1}{K}\sum_{k=1}^{K} p(x_{t+1}^{(k)}|y_{1:t})$. Therefore, when doing computation it can simply be set to any value, which is what we do.
We note that while the asymptotic argument does not hold for the estimator
in (\ref{eq:ConstPred}), when the variance of the predictive density
is greater than the variance of the posterior density, we expect the estimators
in (\ref{eq:approxMonteCarlobackward}) and (\ref{eq:ConstPred})
to be close for any finite $K$.
An additional benefit to approximating the predictive density by a constant is a reduction in the variance of the mixture weights in (\ref{eq:approxMonteCarlobackward}). To see why this can be the case, consider
the following example. Suppose the predictive density of $x_{t+1}$ is $\mathcal{N}(\mu,\sigma_{0}^{2})$ and the posterior density is $\mathcal{N}(0,\sigma_{1}^{2})$, where $\sigma_{1}^{2} < \sigma_{0}^{2}$. Computing the variance of the mixture weight, we get
\begin{align}
&\textnormal{Var}\bigg(\frac{1}{p(x_{t+1}|y_{1:t})}\biggr) \notag \\
& = \frac{2\pi\sigma_{0}^{2}}{\sqrt{2\sigma_{1}^{2}\nu_{1}}}\exp\biggr[\mu^{2}\biggl(\frac{1}{\sigma_{0}^{2}}+\frac{1}{(\sigma_{0}^{2})^{2}\nu_{1}}\biggr)\biggr] \notag \\
& - \frac{2\pi\sigma_{0}^{2}}{\sigma_{1}^{2}\nu_{2}}\exp\biggl[\mu^{2}\biggl(\frac{1}{\sigma_{0}^{2}}+\frac{1}{(\sigma_{0}^{2})^{2}\nu_{2}}\biggr)\biggr],
\end{align}
where
\begin{equation}
\nu_{1}=\biggl(\frac{1}{2\sigma_{1}^{2}}-\frac{1}{\sigma_{0}^{2}}\biggr), \qquad
\nu_{2}=\biggl(\frac{1}{\sigma_{1}^{2}}-\frac{1}{\sigma_{0}^{2}}\biggr).
\end{equation}
From this we can see
that the variance increases exponentially with the squared difference between the predictive
and posterior means, $\mu^{2}$. As a result, we can get outliers in the mixture
weight distribution. If this happens, many of the replicas will end
up having low weights in the mixture. This will reduce the effective
number of replicas used. Using a constant approximation will weight
all of the replicas uniformly, and allow us to construct better proposals,
as illustrated in Section \ref{subsec:A-Linear-Gaussian}.
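The following small simulation (Python; a toy Gaussian setting of our own choosing, not one of the models studied below) illustrates the effect: as the predictive mean $\mu$ moves away from the posterior mean, the relative variance of the weights $1/p(x_{t+1}|y_{1:t})$ evaluated at posterior draws grows rapidly, so that a few replicas dominate the mixture.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sigma0, sigma1 = 2.0, 1.0                        # predictive and posterior sds
for mu in (0.0, 2.0, 4.0):
    x = rng.normal(0.0, sigma1, size=100_000)    # draws from the posterior
    pred_pdf = (np.exp(-0.5 * (x - mu) ** 2 / sigma0 ** 2)
                / (np.sqrt(2.0 * np.pi) * sigma0))
    w = 1.0 / pred_pdf                           # mixture weights
    print(mu, w.var() / w.mean() ** 2)           # relative variance grows
\end{verbatim}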
A natural extension of the proposed method is to update some of the
replicas with other than replica cSMC updates. Samples from these
replicas can then be used in estimates of the backward information
filter when doing a replica cSMC update. This makes it possible to
parallelize the method, at least to some extent. For instance, one
possibility is to do parallel independent cSMC updates on some of
the replicas.
Performing other than replica cSMC updates on some of the replicas
can be useful in multimodal scenarios. If all replicas are located
in an isolated mode, and the replica cSMC updates use an estimate
of the backward information filter based on replicas in that mode,
then the overall Markov chain will tend not to transition well to
other modes. Using samples from other types of updates in the estimate
of the backward information filter can help counteract this effect
by making transitions to other high-density regions possible.
\section{Examples\label{sec:Examples}}
We consider four models to illustrate the performance of our method.
In all examples, we assume that the model parameters are known. The
first is a simple linear Gaussian model. We use this model to demonstrate that it is sensible to use a constant
approximation to the predictive density in our estimator of the backward information
filter. We also use the linear Gaussian model to better understand the accuracy and performance of
replica cSMC. The second model, from \cite{ShestopaloffNeal2018}, demonstrates
that our proposed replica cSMC method is competitive with existing state-of-the-art methods at drawing latent state sequences
in a unimodal context. The third model, also from \cite{ShestopaloffNeal2018},
demonstrates that by updating some replica coordinates with a standard
iterated cSMC kernel, our method is able to efficiently handle multimodal
sampling without the use of specialized ``flip'' updates. The fourth
model is the Lorenz-96 model from \cite{Heng2017}, which has very
low observation noise, making it a challenging case for standard iterated cSMC.
To do our computations, we used MATLAB on an OS X system, running on
an Intel Core i5 1.3 GHz CPU. As a performance metric for the sampler,
we used autocorrelation time, which is a measure of approximately
how many steps of an MCMC chain are required to obtain the equivalent
of one independent sample. The autocorrelation time is estimated based
on a set of runs as follows. First, we estimate the overall mean using
all of the runs. Then, we use this overall mean to estimate autocovariances
for each of the runs. The autocovariance estimates are then averaged
and used to estimate the autocorrelations $\hat{\rho}_{k}$. The autocorrelation
time is then estimated as $1+2\sum_{m=1}^{M}\hat{\rho}_{m}$ where
$M$ is chosen such that for $m>M$ the autocorrelations are approximately
$0$. Code to reproduce the experiments is provided \href{https://github.com/ayshestopaloff/replicacsmc}{here}.
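A sketch of this autocorrelation time estimator is given below (Python; when the truncation lag is not supplied, we cut at the first non-positive autocorrelation, which is one simple way to implement the rule described above).
\begin{verbatim}
import numpy as np

def autocorrelation_time(runs, M=None):
    # runs: array of shape (n_runs, n_iter), one scalar summary per iteration.
    runs = np.asarray(runs, dtype=float)
    n_runs, n_iter = runs.shape
    centered = runs - runs.mean()                  # common overall mean
    max_lag = M if M is not None else n_iter // 10
    acov = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        acov[lag] = np.mean([np.mean(c[:n_iter - lag] * c[lag:])
                             for c in centered])   # average over runs
    rho = acov[1:] / acov[0]
    if M is None:
        below = np.where(rho <= 0.0)[0]            # autocorrelations ~ 0
        rho = rho[:below[0]] if below.size else rho
    return 1.0 + 2.0 * rho.sum()
\end{verbatim}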
\subsection{A Linear Gaussian Model\label{subsec:A-Linear-Gaussian}}
Let $X_{t}=(X_{1,t}, \ldots,X_{d,t})'$ for $t=1, \ldots, T$. The latent process for this model is defined as $X_{1} \sim \mathcal{N}(0,\Sigma_{1})$, $X_{t}|\{X_{t-1}=x_{t-1}\} \sim \mathcal{N}(\Phi x_{t-1},\Sigma)$ for $t=2,\ldots,T$, where
\begin{eqnarray*}
\Phi & = &
\setlength{\arraycolsep}{1pt}
\begin{pmatrix}\phi_{1} & 0 & \cdots & 0\\
0 & \phi_{2} & \ddots & \vdots\\
\vdots & \ddots & \phi_{d-1} & 0\\
0 & \cdots & 0 & \phi_{d}
\end{pmatrix},
\quad \Sigma =
\setlength{\arraycolsep}{1pt}
\begin{pmatrix}1 & \rho & \cdots & \rho\\
\rho & 1 & \ddots & \vdots\\
\vdots & \ddots & 1 & \rho\\
\rho & \cdots & \rho & 1
\end{pmatrix},\\
\Sigma_{1} & = &
\setlength{\arraycolsep}{1pt}
\begin{pmatrix}\sigma^{2}_{1,1} & \rho \sigma_{1,1} \sigma_{1,2}& \cdots & \rho \sigma_{1,1}\sigma_{1,d}\\
\rho \sigma_{1,2} \sigma_{1,1} & \sigma^{2}_{1,2} & \ddots & \vdots\\
\vdots & \ddots & \sigma^{2}_{1,d-1} & \rho \sigma_{1,d-1} \sigma_{1,d}\\ \rho \sigma_{1,d} \sigma_{1,1} & \cdots & \rho \sigma_{1,d} \sigma_{1,d-1} & \sigma^{2}_{1,d}
\end{pmatrix},
\end{eqnarray*}
with $\sigma^{2}_{1,i} = 1/(1-\phi_{i}^{2})$ for $i=1,\ldots,d$. The observations are $Y_{i,t}|\{X_{i,t}=x_{i,t}\} \sim \mathcal{N}(x_{i,t},1)$ for $i=1,\ldots,d$ and $t=1,\ldots,T$. We set $T=250,d=5$ and the model's parameters to $\rho=0.7$ and
$\phi_{i}=0.9$ for $i=1,\ldots,d$. We generate a sequence from this
model to use for our experiments.
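For reference, a sketch of this data-generating process is given below (Python; the seed and function name are ours, and this is not the MATLAB code used for the experiments).
\begin{verbatim}
import numpy as np

def simulate_linear_gaussian(T=250, d=5, phi=0.9, rho=0.7, seed=0):
    rng = np.random.default_rng(seed)
    Phi = phi * np.eye(d)
    Sigma = np.full((d, d), rho)
    np.fill_diagonal(Sigma, 1.0)
    s1 = 1.0 / np.sqrt(1.0 - phi ** 2)          # sigma_{1,i}, identical here
    Sigma1 = (s1 ** 2) * Sigma                  # stationary initial covariance
    x = np.zeros((T, d))
    x[0] = rng.multivariate_normal(np.zeros(d), Sigma1)
    for t in range(1, T):
        x[t] = Phi @ x[t - 1] + rng.multivariate_normal(np.zeros(d), Sigma)
    y = x + rng.standard_normal((T, d))         # Y_{i,t} ~ N(x_{i,t}, 1)
    return x, y
\end{verbatim}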
Since this is a linear Gaussian model, we are able to compute the
predictive density in (\ref{eq:approxMonteCarlobackward}) exactly
using a Kalman filter. So for replica $k$, we can use the following importance densities,
\begin{align}
q_{1}(x_{1}) & \propto\mu(x_{1})\sum_{j\neq k}\frac{f(x_{2}^{(j)}\vert x_{1})}{p(x_{2}^{(j)}|y_{1})},\nonumber \\
q_{t}(x_{t}\vert x_{t-1}) & \propto f(x_{t}\vert x_{t-1})\sum_{j\neq k}\frac{f(x_{t+1}^{(j)}\vert x_{t})}{p(x_{t+1}^{(j)}|y_{1:t})},\nonumber \\
q_{T}(x_{T}|x_{T-1}) & \propto f(x_{T}\vert x_{T-1}),\label{eq:ImportanceDensities}
\end{align}
where $t=2,\ldots,T-1$. Since these densities are Gaussian mixtures,
they can be sampled from exactly. However, as pointed out in the previous section, this approach can
be inefficient. We will show experimentally that using a constant
approximation to the predictive density in (\ref{eq:approxMonteCarlobackward})
actually improves performance.
In all experiments, we initialize
all replicas to a sample from an independent SMC pass with the same
number of particles as used for cSMC updates. Also, the different runs in
our experiments use different random number generator seeds.
We first check that our replica method produces answers that agree
with the posterior mean computed by a Kalman smoother. To do this,
we do $10$ replica cSMC runs with $100$ particles and $2$ replicas
for $25,000$ iterations, updating each replica conditional on the
other. We then look at whether the posterior mean of $x_{i,t}$ computed
using a Kalman smoother lies within two standard errors of the overall
mean of $10$ replica cSMC runs. We find this happens for about $91.4\%$
of the $x_{i,t}$. This indicates strong agreement between the answers
obtained by replica cSMC and the Kalman smoother.
Next, we investigate the effect of using more replicas. To do this,
we compare replica cSMC using $2$ versus $75$ replicas. We do $5$
runs of each sampler. Both samplers use $100$ particles and we do
a total of $5,000$ iterations per run. For the sampler using $75$
replicas, we update replica $1$ at every iteration and replicas $2$
to $75$ in sequence at every $20$-th iteration. For the sampler
using $2$ replicas, we update both replicas at every iteration. In
both samplers, we update replica $1$ with replica cSMC and the remaining
replica(s) with iterated cSMC. After discarding 10\% of each run as
burn-in, we use all runs for a sampler to compute autocorrelation
time.
We can clearly see in Figures \ref{fig:Replica-2} and \ref{fig:Replica-75}
that using more replicas improves performance, before adjusting for
computation time. We note that for this simple example, there is no
benefit from using replica cSMC with a large number of replicas if
we take into account computation time.
To check the performance of using the constant approximation versus
the exact predictive density, we run replica cSMC with $75$ replicas
and the same settings as earlier, except using a constant approximation
to the predictive density. Figure \ref{fig:Replica-approx} shows
that using a constant approximation to the predictive density results
in better performance than using the true predictive density.
This is consistent with our discussion in Section \ref{subsec:Setup-and-Tuning}.
\begin{figure}[t]
\centering
\subfloat[Replica cSMC, $2$ replicas.\label{fig:Replica-2}]
{\begin{centering}
\includegraphics[width=0.23\textwidth]{rep_2}
\par\end{centering}}
\subfloat[Replica cSMC, $75$ replicas.\label{fig:Replica-75}]
{\begin{centering}
\includegraphics[width=0.23\textwidth]{rep_75}
\par\end{centering}}\\
\subfloat[Replica cSMC, $75$ replicas, constant approximation to predictive.\label{fig:Replica-approx}]
{\begin{centering}
\includegraphics[width=0.23\textwidth]{rep_75_approx}
\par\end{centering}}
\caption{Estimated autocorrelation times for each latent variable. Different
coloured lines correspond to different latent state components. The
$x$-axis corresponds to different times.\label{fig:Estimated-autocorrelation-times}}
\end{figure}
The linear Gaussian model can also be used to demonstrate that, because
it looks ahead, replica cSMC can achieve a fixed level of precision with
far fewer particles than standard iterated
cSMC. In scenarios where the state is high-dimensional and the observations
are informative, it is difficult to efficiently sample the variables
$x_{i,1}$ with standard iterated cSMC using the initial density as
the proposal. We do $20$ runs of $2,500$ iterations of both iterated
cSMC with $700$ particles and of replica cSMC with $35$ particles
and $2$ replicas, with each replica updated given the other. We then
use the runs to estimate the standard error of the overall mean over
$20$ runs. For the variable $x_{1,1}$ sampled with iterated cSMC
we estimate the standard error to be approximately $0.0111$ whereas
for replica cSMC the estimated standard error is a similar $0.0081$,
achieved using only 5\% of the particles.
Finally, we verify that the proposed method works well on longer time
series by running it on the linear Gaussian model but with the length
of the observed sequence set to $T=1,500$. We use $2$ replicas,
each updated given the other, and do $5$ runs of $5,000$ iterations
of the sampler to estimate the autocorrelation time for sampling the
latent variables. In Figure \ref{fig:Estimated-autocorrelation-times-1}
we can see that the replica cSMC method does not suffer from a decrease
in performance when used on longer time series.
\begin{figure}
\begin{centering}
\includegraphics[width=0.23\textwidth]{rep_2_long}
\par\end{centering}
\caption{Estimated autocorrelation times for each latent variable. Different
coloured lines correspond to different latent state components. The
$x$-axis corresponds to different times.\label{fig:Estimated-autocorrelation-times-1}}
\end{figure}
\subsection{Two Poisson-Gaussian Models}
In this example, we consider the two models from \cite{ShestopaloffNeal2018}.
Model $1$ uses the same latent process as Section \ref{subsec:A-Linear-Gaussian}
with $T=250$, $d=10$ and $Y_{i,t}|\{X_{i,t}=x_{i,t}\} \sim \textnormal{Poisson}(\exp(c+\sigma x_{i,t}))$ for $i=1,\ldots,d$ and $t=1,\ldots,T$, where $c=-0.4$ and $\sigma=0.6$. For Model $2$, we again use the
latent process in Section \ref{subsec:A-Linear-Gaussian}, with $T=500$, $d=15$
and $Y_{i,t}|\{X_{i,t}=x_{i,t}\} \sim \textnormal{Poisson}(\sigma|x_{i,t}|)$ for $i=1,\ldots,d$ and $t=1,\ldots,T$, where $\sigma=0.8$. We assume the observations are independent given the latent states.
We generate one sequence of observations from
each model. A plot of the simulated data along dimension $i=1$ is shown in Figure \ref{fig:Simulated-data-from}.
We set the importance densities $q_{t}$ for the replica cSMC sampler
to the same ones as in Section \ref{subsec:A-Linear-Gaussian}, with a
constant approximation to the predictive density.
\begin{figure}
\centering
\subfloat[Data for Model 1.]{\centering{}\includegraphics[width=0.23\textwidth]{model1}}
\subfloat[Data for Model 2.]{\centering{}\includegraphics[width=0.23\textwidth]{model2}}
\caption{Simulated data from the Poisson-Gaussian models.\label{fig:Simulated-data-from}}
\end{figure}
\subsubsection*{Model 1}
We use replica cSMC with $5$ replicas, updating each replica conditional
on the others. We start with all sequences initialized to $\mathbf{0}.$
We set the number of particles to $200$. We do a total of $5$ runs
of the sampler with $5,000$ iterations, each run with a different
random number generator seed. Each iteration of replica cSMC takes approximately $0.80$ seconds.
We discard 10\% of each run as burn-in.
Plots of autocorrelation time comparing replica cSMC to the best method
in \cite{ShestopaloffNeal2018} for sampling each of the latent variables
are shown in Figure \ref{fig:Model1acf}. The benchmark method takes approximately
$0.21$ seconds per iteration. We can see that the proposed
replica cSMC method performs relatively well when compared to their
best method after adjusting for computation time. The figure for iterated
cSMC+Metropolis was reproduced using code available with \cite{ShestopaloffNeal2018}.
\begin{figure}
\subfloat[Iterated cSMC+Metropolis.]{\begin{centering}
\includegraphics[width=0.23\textwidth]{pgmetonemode_ub}
\par\end{centering}
}\subfloat[Replica cSMC.]{\begin{centering}
\includegraphics[width=0.23\textwidth]{replicacsmc_ub}
\par\end{centering}}
\centering{}\caption{Model 1. Estimated autocorrelation times for each latent variable,
adjusted for computation time. Different coloured lines corresponds
to different latent state components. The $x$-axis corresponds to
different times.\label{fig:Model1acf}}
\end{figure}
\subsubsection*{Model 2}
For this model, the challenge is to move between the many different
modes of the latent state due to conditioning on $|x_{i,t}|$ in the
observation density. The marginal posterior of $x_{i,t}$ has two
modes and is symmetric around $0$. Additional modes appear due to
uncertainty in the signs of state components.
We use a total of $50$ replicas and update $49$ of the $50$ replicas
with iterated cSMC and one replica with replica cSMC. This is done
to prevent the Markov chain from being stuck in a single mode while
at the same time enabling the replica cSMC update to use an estimate
of the backward information filter based on replicas that are distributed
across the state space. We initialize all replicas using sequences
drawn from independent SMC passes with $1,000$ particles, and run
the sampler for a total of $2,000$ iterations. Both replica cSMC
and iterated cSMC updates use $100$ particles.
In Figure \ref{fig:Model2trace} we plot every other sample of the
same functions of the state as in \cite{ShestopaloffNeal2018}, for the
replica updated with replica cSMC. These are the coordinate $x_{1,300}$,
with true value $-1.99$, and the product $x_{3,208}x_{4,208}$, with true value
$-4.45$. The first has two well-separated modes and the second is
ambiguous with respect to sign. We see that the sampler is able to
explore different modes, without requiring any specialized ``flip''
updates or having to use a much larger number of particles, as is
the case in \cite{ShestopaloffNeal2018}.
We note that the replicas doing iterated cSMC updates tend to get
stuck in separate modes for long periods of time, as expected. However,
as long as these replicas are well-distributed across the state space
and eventually explore it, the bias in the estimate of the backward
information filter will be low and vanish asymptotically. The samples
from the replica cSMC update will consequently be a good approximation
to samples from the target density. Further improvement of the estimate
of the backward information filter based on replicas in multimodal
scenarios remains an open problem.
\begin{figure}
\centering
\subfloat[Trace plot for $x_{1,300}$.]{\centering{}\includegraphics[width=0.23\textwidth]{x_1_300}}
\subfloat[Trace plot for $x_{3,208}x_{4,208}$.]{\centering{}\includegraphics[width=0.23\textwidth]{x_3_208x_4_208}}
\caption{Trace plots for Model 2.\label{fig:Model2trace}}
\end{figure}
\subsection{Lorenz-96 Model}
Finally, we look at the Lorenz-96 model in a low-noise regime from
\cite{Heng2017}. The state function for this model is the It\^{o} process
$\xi(s)=(\xi_{1}(s),\ldots,\xi_{d}(s))$ defined as the weak solution
of the stochastic differential equation (SDE)
\begin{equation}
\textnormal{d}\xi_{i}=(-\xi_{i-1}\xi_{i-2}+\xi_{i-1}\xi_{i+1}-\xi_{i}+\alpha)\textnormal{d}t+\sigma_{f}\textnormal{d}B_{i}\label{eq:Lorenz}
\end{equation}
for $i=1,\ldots,d$, where indices are computed modulo $d$, $\alpha$
is a forcing parameter, $\sigma_{f}^{2}$ is a noise parameter and
$B(s)=(B_{1}(s),\ldots,B_{d}(s))$ is $d$-dimensional standard Brownian
motion. The initial condition for the SDE is $\xi(0)\sim\mathcal{N}(\mathbf{0},\sigma_{f}^{2}\mathcal{I}_{d})$.
We observe the process on a regular grid of size $h>0$ as $Y_{t}\sim\mathcal{N}(H\xi(th),R)$,
where $t=0,\ldots,T$. We will assume that the process is only partially
observed, with $H_{ii}=1$ for $i=1,\ldots,p$ and $0$ otherwise,
for $p=d-2$.
We discretize the SDE (\ref{eq:Lorenz}) by numerically integrating
the drift using a fourth-order Runge-Kutta scheme and adding Brownian
increments. Let $u$ be the mapping obtained by numerically integrating
the drift of (\ref{eq:Lorenz}) on $[0,h]$. This discretization produces
a state space model with $X_{1}\sim\mathcal{N}(\mathbf{0},\sigma_{f}^{2}\mathcal{I})$, $X_{t}|\{X_{t-1}=x_{t-1}\}\sim\mathcal{N}(u(x_{t-1}),\sigma_{f}^{2}h\mathcal{I})$ for $t=2,\ldots,T+1$ and $Y_{t}|\{X_{t}=x_{t}\}\sim\mathcal{N}(Hx_{t},R)$
for $t=1,\ldots,T+1$. We set $d=16,\sigma_{f}^{2}=10^{-2},R=10^{-3}\mathcal{I}_{p}$
and $\alpha=4.8801$. The process is observed for $10$ time units,
which corresponds to $h=0.1$, $T=100$, and a step size of $10^{-2}$
for the Runge-Kutta scheme. A plot of data generated from the Lorenz-96
model along one of the coordinates is shown in Figure \ref{fig:Simulated-data-from-Lorenz-96}.
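A sketch of this discretization is given below (Python; function names are ours). The transition mean $u(x_{t-1})$ is obtained by Runge-Kutta integration of the drift, and the state noise $\mathcal{N}(\mathbf{0},\sigma_{f}^{2}h\mathcal{I})$ is then added on top.
\begin{verbatim}
import numpy as np

def lorenz96_drift(x, alpha=4.8801):
    # (-xi_{i-1} xi_{i-2} + xi_{i-1} xi_{i+1} - xi_i + alpha), indices mod d
    return (-np.roll(x, 1) * np.roll(x, 2)
            + np.roll(x, 1) * np.roll(x, -1) - x + alpha)

def transition_mean(x, h=0.1, dt=1e-2, alpha=4.8801):
    # u(x): fourth-order Runge-Kutta integration of the drift on [0, h]
    for _ in range(int(round(h / dt))):
        k1 = lorenz96_drift(x, alpha)
        k2 = lorenz96_drift(x + 0.5 * dt * k1, alpha)
        k3 = lorenz96_drift(x + 0.5 * dt * k2, alpha)
        k4 = lorenz96_drift(x + dt * k3, alpha)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x
\end{verbatim}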
\begin{figure}
\begin{centering}
\includegraphics[width=0.23\textwidth]{lorenz}
\par\end{centering}
\caption{Simulated data from Lorenz-96 model along coordinate $i=1$.\label{fig:Simulated-data-from-Lorenz-96}}
\end{figure}
We compare the performance of replica cSMC with two replicas, updating
each replica conditional on the other, to an iterated cSMC scheme.
For iterated cSMC, we use the model's initial density as $q_{1}$
and the model's transition density as $q_{t}$ for $t\geq2$. For
replica cSMC, we use the following importance densities for replica
$k$,
\begin{align}
q_{1}(x_{1}) & \propto \mu(x_{1})\sum_{j\neq k}\phi(x_{1}|x_{2}^{(j)}),\nonumber \\
q_{t}(x_{t}|x_{t-1}) & \propto f(x_{t}|x_{t-1})\sum_{j\neq k}\phi(x_{t}|x_{t+1}^{(j)}),\nonumber \\
q_{T}(x_{T}|x_{T-1}) & \propto f(x_{T}|x_{T-1}),\label{eq:Case1}
\end{align}
where $t=2,\ldots,T-1$ and $\phi$ is the $p$-dimensional Gaussian density with mean $Hu^{-1}(x_{t+1}^{(j)})$ and variance
$\sigma_{f}^{2}h\mathcal{I}_{p}$, that is, the mean is computed by
running the Runge-Kutta scheme backward in time starting at the replica
state $x_{t+1}^{(j)}$. We initialize the iterated cSMC sampler and
each replica in the replica cSMC sampler with a sequence drawn from
an independent SMC pass with $3,000$ particles. We run replica cSMC
with $200$ particles for $30,000$ iterations ($0.7$ seconds per
iteration) and compare to standard iterated cSMC with $600$ particles,
which we also run for $30,000$ iterations ($0.7$ seconds per iteration),
thus making the computational time equal.
Figure \ref{fig:Lorenz-iterated-replica} shows the difference in
performance of the two samplers by trace plots of $x_{1,45}$ (true
value $-0.23$), from one of the runs, plotting the samples every
$30$th iteration. We can see that replica cSMC performs noticeably
better when compared to standard iterated cSMC.
\begin{figure}
\begin{centering}
\subfloat[Standard cSMC trace, $x_{1,45}$.]{\begin{centering}
\includegraphics[width=0.23\textwidth]{lorenz_1_45_basic}
\par\end{centering}
}\subfloat[Replica cSMC trace, $x_{1,45}$.]{\begin{centering}
\includegraphics[width=0.23\textwidth]{lorenz_1_45_rep}
\par\end{centering}
}
\par\end{centering}
\caption{Lorenz-96 model. Comparison of standard cSMC and replica cSMC.\label{fig:Lorenz-iterated-replica}}
\end{figure}
\section{Conclusion}
We presented a novel sampler for latent sequences of a non-linear
state space model. Our proposed method leads to several
questions. The first is whether there are other ways to estimate the
predictive density that do not result in mixture weights with high
variance. Another is how to develop better guidelines on choosing
the number of replicas to use in a given scenario. It would also be
interesting to look at applications of replica cSMC in non-time-series
examples. Finally, while the proposed method offers an approach for
sampling in models with multimodal state distributions, further improvement
is needed.
\clearpage
\section{Validity of Replica cSMC}
It is easy to see that the proposed update leaves $\bar{\pi}$ invariant.
Let $M_{x_{1:T}^{(-k)}}(x_{1:T}^{(k)'}|x_{1:T}^{(k)})$ be the cSMC
transition kernel used to update replica $x_{1:T}^{(k)}$, $k=1,\ldots,K$,
where $x_{1:T}^{(-k)}:=(x_{1:T}^{(1)'},\ldots,x_{1:T}^{(k-1)'},x_{1:T}^{(k+1)},\ldots,x_{1:T}^{(K)})$.
The replica update is a composition of the $M_{x_{1:T}^{(-k)}}$ so
we can write the replica cSMC transition kernel $M$ as a product,
$M(x_{1:T}^{(1:K)'}|x_{1:T}^{(1:K)})=\prod_{k=1}^{K}M_{x_{1:T}^{(-k)}}(x_{1:T}^{(k)'}|x_{1:T}^{(k)})$.
The replica cSMC transition kernel $M$ then leaves $\bar{\pi}$ invariant
since we have
\begin{align*}
& \int\bar{\pi}(x_{1:T}^{(1:K)})M(x_{1:T}^{(1:K)'}|x_{1:T}^{(1:K)})dx_{1:T}^{(1:K)} \\
& =\int\prod_{k=1}^{K}p(x_{1:T}^{(k)}|y_{1:T})M_{x_{1:T}^{(-k)}}(x_{1:T}^{(k)'}|x_{1:T}^{(k)})dx_{1:T}^{(1:K)}\\
& =\int\biggl[\int p(x_{1:T}^{(1)}|y_{1:T})M_{x_{1:T}^{(-1)}}(x_{1:T}^{(1)'}|x_{1:T}^{(1)})dx_{1:T}^{(1)}\biggr]\\
& \times\prod_{k=2}^{K}p(x_{1:T}^{(k)}|y_{1:T})M_{x_{1:T}^{(-k)}}(x_{1:T}^{(k)'}|x_{1:T}^{(k)})dx_{1:T}^{(2:K)}\\
& =p(x_{1:T}^{(1)'}|y_{1:T})\int\biggl[\int p(x_{1:T}^{(2)}|y_{1:T})M_{x_{1:T}^{(-2)}}(x_{1:T}^{(2)'}|x_{1:T}^{(2)})dx_{1:T}^{(2)}\biggr]\\
& \times\prod_{k=3}^{K}p(x_{1:T}^{(k)}|y_{1:T})M_{x_{1:T}^{(-k)}}(x_{1:T}^{(k)'}|x_{1:T}^{(k)})dx_{1:T}^{(3:K)}\\
& =p(x_{1:T}^{(1)'}|y_{1:T})p(x_{1:T}^{(2)'}|y_{1:T})\\
& \times\int\prod_{k=3}^{K}p(x_{1:T}^{(k)}|y_{1:T})M_{x_{1:T}^{(-k)}}(x_{1:T}^{(k)'}|x_{1:T}^{(k)})dx_{1:T}^{(3:K)}\\
& =\prod_{k=1}^{K}p(x_{1:T}^{(k)'}|y_{1:T})\quad(\textnormal{by induction})\\
& =\bar{\pi}(x_{1:T}^{(1:K)'}).
\end{align*}
\end{document}
\section{Introduction}
Predicting the survival time of a cancer patient based on his/her genome-wide gene expression is a well studied, yet unresolved problem. In some types of cancer, the effects of gene expression are both weak and abundant which, when combined with often high censoring rates, makes feature selection for survival time association very challenging. On the other hand, genome-wide gene expression data can be highly informative for prognosis. For example, \cite{zhu2017integrating} demonstrate that two patients with similar genome-wide gene expression data may have similar survival time.
Our method development is motivated by a dataset with genome-wide gene expression, survival time, and some demographical/clinical variables of more than 500 patients with kidney renal clear cell carcinoma, which is part of The Cancer Genome Atlas (TCGA) project (\href{http://cancergenome.nih.gov/}{http://cancergenome.nih.gov/}).
To demonstrate that the associations between gene expression and survival time are abundant and weak in this dataset, we first report results of gene-by-gene marginal association testing. For each gene, we fit two Cox proportional hazards models. In Model I, we include only the expression of this gene as a predictor and the sequencing plate ID as a confounder. In Model II, we include the expression of this gene, sequencing plate ID, and three demographical/clinical covariates: age, gender, and tumor stage. Histograms of the marginal p-values are displayed in Figure \ref{fig:p_val_dist}(a) and (b).
\begin{figure}[t!]
\centerline{\hfill\includegraphics[width=.40\textwidth]{Model1_pvals.pdf}\hfill \includegraphics[width=.40\textwidth]{Model2_pvals.pdf}\hfill}
\centerline{\hfill\makebox[.40\textwidth]{(a)}\hfill\makebox[.40\textwidth]{(b)} \hfill}
\centerline{\hfill\includegraphics[width=.40\textwidth]{Model1_Conc.pdf}\hfill \includegraphics[width=.40\textwidth]{Model2_Conc.pdf}\hfill}
\centerline{\hfill\makebox[.40\textwidth]{(c)}\hfill\makebox[.40\textwidth]{(d)} \hfill}
\caption{Histograms of the marginal p-values for each of the 20,483 genes for (a): Model I and (b): Model II. Histograms of the concordance for each of the 20,483 marginal models for (c): Model I and (d): Model II. Dark dotted lines in (c) and (d) denote the concordance measurement of survival time prediction under the baseline model that includes all the covariates in Model I or II, other than gene expression. }\label{fig:p_val_dist}
\end{figure}
Assuming genes with p-value larger than 0.5 are not associated with survival time, we can calculate the expected number of genes associated with survival time by $p(1 - 2 s/p)$, where $s$ is the number of genes with p-value $>$ 0.5 and $p=20,483$ is the total number of genes. This number is 13,052 and 10,512 for Models I and II, respectively. Similarly, with a false discovery rate of 0.05, the number of genes that are significantly associated with survival time is 8,550 and 4,312 for Models I and II, respectively. The large number of genes associated with survival time is biologically plausible given that kidney renal clear cell carcinoma is characterized by oncogenic metabolism and epigenetic reprogramming, both of which may affect the expression of many genes \citep{cancer2013comprehensive}.
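This back-of-the-envelope calculation can be written as a short function (a Python sketch; the variable names are ours).
\begin{verbatim}
import numpy as np

def expected_associated(pvals):
    # Genes with p-value > 0.5 are treated as null; under the null the
    # p-values are uniform, so the null count is estimated as twice that.
    pvals = np.asarray(pvals)
    s = np.sum(pvals > 0.5)
    return pvals.size - 2 * s        # equals p * (1 - 2 s / p)
\end{verbatim}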
In Figures \ref{fig:p_val_dist}(c) and (d), we display the survival time prediction concordances (C-index) for the 20,483 marginal models. The dark dashed vertical lines denote the concordance of the baseline model, i.e., the model excluding gene expression but including sequencing plate ID (Model I) as well as clinical covariates (Model II). For Model I, including a single gene can improve concordance by as much as six percent. Comparatively, the improvement in concordance for Model II is smaller, with a single gene improving concordance no more than two percent. The concordance improvements indicate that gene expression can improve the prediction of survival time, although few, if any, genes appear to have strong effects. Together, these results suggest that screening or variable selection may be difficult or ineffective because of potentially weak and abundant effects.
In our proposed method, we do not attempt to identify a subset of genes associated with survival time. Instead, we use genome-wide gene expression to model the covariance of the log-survival time under a Gaussian process accelerated failure time model. Inspired by multiple kernel learning \citep{gonen2011multiple}, we allow the covariance to be a linear combination of $M$ user-specified candidate kernels. A major challenge for survival time prediction is censoring. To mitigate this challenge, we develop an efficient Monte Carlo EM algorithm which jointly imputes censored log-survival times and estimates model parameters. The imputed survival times are then used in our subsequent prediction rule.
The majority of methods for survival time prediction address censoring using partial likelihood methods, which use event orderings rather than the times at which they occur. Consequently, when survival time can be predicted with reasonable accuracy, partial likelihood methods may miss useful information in the censoring times. Alternatively, some methods use a two-step approach to first impute censored survival times (e.g., mean, median, or multiple imputation), and then fit a predictive model using the imputed survival times \citep{datta2007predicting,wu2008method}.
Some other methods iteratively impute the censored survival times and fit a predictive model, for example, using survival trees \citep{zhu2012recursively} or an ensemble model \citep{deng2016predicting}.
These methods were not designed for ultra-high dimensional -omic data. For example, in their real data analysis examples, the sample size ($n$) and the number of covariates ($p$) are
$n=686$ and $p=8$ for \citet{zhu2012recursively}, and $n=2070$ and $p=256$ for \citet{deng2016predicting}. In contrast, we propose a new method that iteratively imputes the censored survival times and fits a kernel-based predictive model. Our real data analysis has much higher dimensionality than the earlier methods with $n=513$ and $p=20,428$. \cite{zhu2017integrating} also employed a kernel-based method for survival time prediction using gene expression, though they only used one kernel derived from gene expression and did not seek to impute the censored survival times.
The remainder of this article is organized as follows: in Section 2 we describe our proposed model and discuss its relation to existing methods; in Section 3 we describe how to compute our estimator; in Section 4 we perform simulation studies to demonstrate our method's prediction accuracy under a range of models; in Section 5 we analyze the TCGA dataset which motivated our study; and in Section 6 we discuss limitations and extensions of our method.
\section{Gaussian process accelerated failure time model}
Let $S_i$ denote the time-to-failure (survival time) for the $i$th patient with $i=1,\dots, n$ patients in the study. Let $T = (\log S_1, \dots, \log S_n)' \in \mathbb{R}^n$. Let $x_i \in \mathbb{R}^p$ and $z_i \in \mathbb{R}^{q+1}$ denote the measured genome-wide gene expression and the measured clinical variables for the $i$th patient, respectively. To allow for an intercept, assume that the first entry of $z_{i}$ is equal to one for $i = 1, \dots, n$. Let $Z = (z_1, \dots, z_n)' \in \mathbb{R}^{n \times (q+1)}$, and $X = (x_1, \dots, x_n)' \in \mathbb{R}^{n \times p}$. For the $n$ patients in the study, we assume that survival time follows the \textit{Gaussian process accelerated failure time model}:
\begin{equation} \label{eq:accelerated_failure_time}
T = Z \boldsymbol{\beta} + G + \epsilon, \quad G \sim {\rm N}_n\left\{ 0, K(X, \boldsymbol{\sigma}^2) \right\}, \quad \epsilon \sim {\rm N}_{n}\left\{0, \sigma_{\epsilon}^2 I_n\right\},
\end{equation}
where $G$ and $\epsilon$ are independent; $\boldsymbol{\sigma}^2 \in \mathbb{R}^M_+, \sigma_{\epsilon}^{2} \in \mathbb{R}_+$, and $\boldsymbol{\beta}\in\mathbb{R}^{q + 1}$ are unknown model parameters, $\mathbb{R}_+$ denotes non-negative real numbers, and $M$ is the number of kernels. We will sometimes use the more compact notation:
${\rm Cov}(G + \epsilon) \equiv \tilde{K}(X, \tilde{\boldsymbol{\sigma}}^2) = K(X, \boldsymbol{\sigma}^2) + \sigma_{\epsilon}^2 I_n,$
where $\tilde{\boldsymbol{\sigma}}^2 = ({\boldsymbol{\sigma}^{2}}', \sigma^2_\epsilon)' \in \mathbb{R}^{M + 1}_{+}.$
The function $K: \mathbb{R}^{n \times p} \times \mathbb{R}^M_+ \to \mathbb{S}^{n}_+$ is a covariance function with $(i,j)$th entry $$[K(X, \boldsymbol{\sigma}^2)]_{i,j} = \sum_{s=1}^M \sigma_s^2 k_s(x_i, x_j), \quad (i,j) \in \left\{ 1, \dots,n \right\} \times \left\{1, \dots,n \right\},$$
where $\mathbb{S}^{n}_+$ denotes the set of $n \times n$ symmetric and positive definite matrices, and $k_s:\mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}$ is a positive definite kernel function for $s = 1, \dots, M$. A positive definite kernel function ensures that the matrix $k_s(X, X) \in \mathbb{S}^{n}_+$, whose $(i,j)$th entry is $k_s(x_i, x_j)$, is positive definite for all $X \in \mathbb{R}^{n \times p}$. The function $k_s(x_i, x_j)$ quantifies the similarity between $x_i$ and $x_j$, e.g., a radial basis kernel function is $k_s(x_i, x_j) = {\rm exp}(-\|x_i - x_j\|^2)$.
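As an illustration, a sketch of this covariance construction with the radial basis kernel above as one candidate kernel is given below (Python; this is not the \texttt{SurvGPR} implementation).
\begin{verbatim}
import numpy as np

def rbf_kernel(X):
    # k_s(x_i, x_j) = exp(-||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0))

def covariance(X, sigma2, kernels):
    # [K(X, sigma^2)]_{ij} = sum_s sigma2[s] * k_s(x_i, x_j)
    K = np.zeros((X.shape[0], X.shape[0]))
    for s2, kern in zip(sigma2, kernels):
        K += s2 * kern(X)
    return K
\end{verbatim}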
The Gaussian process accelerated failure time model in \eqref{eq:accelerated_failure_time} generalizes the log-normal accelerated failure time model of \citet{klein1999modeling}, which for clustered subjects, assumed that ${\rm Cov}(T_i, T_j) = \phi$ for all $(i,j)$ such that $i$ and $j$ belong to the same cluster and $i \neq j$. Gaussian processes have also been used for survival analysis under the Cox proportional hazards model \citep{banerjee2003frailty, fernandez2016gaussian,zhu2017integrating}.
Intuitively, \eqref{eq:accelerated_failure_time} assumes that if two patients have similar genome-wide gene expression, as defined by $K$, then their mean-adjusted log-survival times will be similar. Out-of-sample prediction based on \eqref{eq:accelerated_failure_time} is also known as \textit{kriging}, a method for prediction through linear interpolation in geo-spatial statistics. In geo-spatial applications, the function $K$ is used to quantify the similarities of two-dimensional coordinates, whereas in our application, $K$ quantifies similarities in an ultra-high dimensional, genome-wide space. Recently, kriging was applied in the genomic literature as a means for predicting a phenotypic trait using multiple types of -omics data \citep{wheeler2014poly}.
Fitting \eqref{eq:accelerated_failure_time} is non-trivial when one observes a censored realization of $T$, as is often the case in survival analysis. Specifically, suppose there exists a realization $T = (t_1, \dots, t_n)' \in \mathbb{R}^n,$ which cannot be observed. Instead one observes the pairs $(y_1, \delta_1), \dots, (y_n, \delta_n)$ where
$ y_i = \min(t_i, d_i)$, $d_i$ is the censoring time for the $i$th subject, $\delta_i =
1(y_i = t_i)$ for $i = 1, \dots, n$, and $1(\cdot)$ is an indicator function. In this article, we treat the censored survival times as missing. This allows us to develop an algorithm that simultaneously imputes the latent survival times conditional on the observed survival times and model parameters; and estimates model parameters $\boldsymbol{\beta}, \boldsymbol{\sigma}^2, \sigma^2_{\epsilon}$.
Although we focus on the case of right-censored outcomes, our methodology naturally accommodates right, left, and interval censoring.
For the remainder of the article, without loss of generality, suppose that $\delta_i = 0$ for $i=1, \dots, n_c$, $\delta_i = 1$ for $i = n_c + 1, \dots, n$, and let $n_o = n - n_c$. Hence, we can partition $Y = (y_1, \dots, y_n)'$ into $Y_{\rm c}\in \mathbb{R}^{n_{\rm c}}$ and $Y_{\rm o} \in \mathbb{R}^{n_{\rm o}}$ so that $Y = (Y_c', Y_o')' \in \mathbb{R}^n$. We similarly partition $T$ into $(T_c', T_o')'$ (where $T_c$ is not observed and $T_o = Y_o$); $Z$ into $Z_c \in \mathbb{R}^{n_c \times (q+1)}$ and $Z_o \in \mathbb{R}^{n_o \times (q+1)}$; and $\tilde{K}(X, \tilde{\boldsymbol{\sigma}}^2)$ into sub-matrices $\tilde{K}_{co}(X, \tilde{\boldsymbol{\sigma}}^2) \in \mathbb{R}^{n_c \times n_o}$, $\tilde{K}_{oo}(X, \tilde{\boldsymbol{\sigma}}^2) \in \mathbb{R}^{n_o \times n_o}$, and $\tilde{K}_{cc}(X, \tilde{\boldsymbol{\sigma}}^2) \in \mathbb{R}^{n_c \times n_c}$. For ease of display, we will sometimes omit the $(X, \tilde{\boldsymbol{\sigma}}^2)$ dependence on $\tilde{K}(X, \tilde{\boldsymbol{\sigma}}^2)$ and its submatrices. Let $\mathcal{H} = \mathbb{R}^{q + 1} \times \mathbb{R}_+^{M} \times \mathbb{R}_+$ denote the space of the unknown parameters $\theta = (\boldsymbol{\beta}', \boldsymbol{\sigma}^2, \sigma^2_{\epsilon})'.$ Finally, let $W$ be the collection of data that we condition on: $W = \left\{Z, X, Y, \delta\right\}$.
\section{Maximum likelihood estimation}
\subsection{Overview}
To fit the Gaussian process accelerated failure time model in \eqref{eq:accelerated_failure_time}, we use a Monte Carlo expectation-maximization (MC-EM) algorithm. We provide an overview of the MC-EM algorithm in Section 3.2 and describe the sub-algorithms used for distinct covariance function specifications in Section 3.3. We implement our MC-EM algorithm, along with a set of auxiliary functions, in an R package \texttt{SurvGPR}, which is available in the Supplementary Materials.
\subsection{Monte Carlo expectation-maximization algorithm}
Throughout this section, let the superscript $(r)$ denote the $r$th iterate of the MC-EM algorithm, and let $s_r$ denote the $r$th iterate's Monte Carlo sample size.
The $(r+1)$th iterate of the standard expectation-maximization (EM) algorithm is computed in two steps: the E-step computes
\begin{equation}\label{eq:E_step}
Q(\theta \mid \theta^{(r)}) = {\rm E} \left[ \log f_{T} (T_o, T_c; \theta, W) \mid \theta^{(r)}, W\right],
\end{equation}
where $\log f_{T}$ is the log-likelihood of $T$; and the M-step computes
\begin{equation}\label{eq:Theta}
\theta^{(r+1)} = \operatorname*{arg \ max}_{\theta \in \mathcal{H}} Q(\theta \mid \theta^{(r)}).
\end{equation}
When \eqref{eq:Theta} cannot be obtained, an alternative is to compute $\theta^{(r+1)}$ such that
\begin{equation}\label{eq:gem_iterate}
\theta^{(r+1)} \in \left\{\theta \in \mathcal{H}: Q(\theta \mid \theta^{(r)}) \geq Q( \theta^{(r)} \mid \theta^{(r)}) \right\},
\end{equation}
which yields the generalized EM algorithm \citep{wu1983convergence}.
Unfortunately, when log-survival times are censored, there may not exist an analytic expression for the right hand side of \eqref{eq:E_step} under \eqref{eq:accelerated_failure_time}. In particular, ignoring constants,
\begin{align}
Q(\theta \mid \theta^{(r)}{}) & \propto - {\rm E}\left[\log {\rm det} \{\tilde{K} (X, \tilde{\boldsymbol{\sigma}}^2)\} + (T - Z\boldsymbol{\beta})'\{ \tilde{K}(X, \tilde{\boldsymbol{\sigma}}^2)\}^{-1} (T - Z\boldsymbol{\beta}) \mid \theta^{(r)}, W\right] \label{E_step},
\end{align}
so that computing $Q(\theta \mid \theta^{(r)})$ requires evaluating
$$
(i) \hspace{2pt} {\rm E}\left[ T_{\rm c}\mid \theta^{(r)}, W\right], \quad (ii) \hspace{2pt} {\rm E}\left[ T_{\rm c}'(\tilde{K}_{cc} - \tilde{K}_{co}\tilde{K}_{oo}^{-1}\tilde{K}_{co}')^{-1} T_{\rm c}\mid \theta^{(r)}, W \right].
$$
Computing $(i)$ and $(ii)$ is non-trivial because
\begin{equation} \label{Tc_Dist}
T_c \mid \theta^{(r)}, W \sim {\rm N}^{[Y_c, \infty)}_{n_c} \left\{ Z_c \boldsymbol{\beta}^{(r)} + \tilde{K}_{co}\tilde{K}_{oo}^{-1}(T_o - Z_o \boldsymbol{\beta}^{(r)}), \tilde{K}_{cc} - \tilde{K}_{co}\tilde{K}_{oo}^{-1}\tilde{K}_{co}' \right\},
\end{equation}
where the notation ${\rm N}_{n_c}^{[Y_c, \infty)}$ denotes the $n_c$-dimensional truncated multivariate normal distribution with nonzero probability mass on the hyper-rectangle $[Y_c, \infty)= [y_1, \infty) \times \dots \times [y_{n_c}, \infty).$
Although $(i)$ can be computed numerically, the distribution of $(\tilde{K}_{cc} - \tilde{K}_{co}\tilde{K}_{oo}^{-1}\tilde{K}_{co}')^{-1/2} {T}_c \mid (\theta^{(r)}, W)$ is not truncated multivariate normal unless $(\tilde{K}_{cc} - \tilde{K}_{co}\tilde{K}_{oo}^{-1}\tilde{K}_{co}') = I_{n_c}$ \citep{horrace2005some}, so $(ii)$ is intractable in general. Instead, we approximate \eqref{eq:E_step} by drawing $s_r$ samples from \eqref{Tc_Dist} \citep{wei1990monte}.
There are multiple software packages available to simulate from \eqref{Tc_Dist}. In our implementation, we use the Gibbs sampler implemented in the \texttt{tmvtnorm} package in R \citep{tmvtnorm}. Let $\tilde{T}_{c}^{(r)} = (T_{c,1}^{(r)}, \dots, T_{c,s_r}^{(r)})' \in \mathbb{R}^{s_r \times n_c}$ be the matrix of samples from \eqref{Tc_Dist}. Given $\tilde{T}_{c}^{(r)},$ the $(r+1)$th iterate of our MC-EM algorithm is
\begin{equation}\label{eq:theta_update}
\theta^{(r+1)} = \operatorname*{arg \ max}_{\theta \in \mathcal{H}} \left\{s_r^{-1} \sum_{j=1}^{s_r}\log f_{T} (T_o, T_{c,j}^{(r)}; \theta, W)\right\}.
\end{equation}
We propose an algorithm to compute \eqref{eq:theta_update} in Section 3.3. To improve the efficiency of our MC-EM algorithm, we use the ascent-based variation proposed by \citet{caffo2005ascent}. We state our complete ascent-based MC-EM algorithm in Algorithm 1.
\begin{itemize}
\item[] \textbf{Algorithm 1:} Initialize $\theta^{(1)} = (\boldsymbol{\beta}^{(1)}, \boldsymbol{\sigma}^{2(1)}, \sigma_{\epsilon}^{2(1)})$. Set $r=1$ and $s_1 = 500$.
\begin{enumerate}
\item Simulate $\tilde{T}_{c}^{(r)}$, $s_{r}$ samples from
$$ {\rm N}^{[Y_c, \infty)}_{n_c} \left\{ Z_c \boldsymbol{\beta}^{(r)} + \tilde{K}_{co}\tilde{K}_{oo}^{-1}(T_o - Z_o \boldsymbol{\beta}^{(r)}), \tilde{K}_{cc} - \tilde{K}_{co}\tilde{K}_{oo}^{-1}\tilde{K}_{co}' \right\}.$$
\item Compute $\bar{\theta} \leftarrow \operatorname*{arg \ max}_{\theta \in \mathcal{H}} \left\{s_r^{-1} \sum_{j=1}^{s_r}\log f_{T} (T_o, T_{c,j}^{(r)}; \theta, W)\right\}$.
\item Compute ${\rm ASE}^{(r)}$, the standard error of $ \left\{ \log f_{T} (T_o, T_{c,j}^{(r)}; \bar{\theta}, W) - \log f_{T} (T_o, T_{c,j}^{(r)}; \theta^{(r)}, W)\right\}_{j=1}^{s_r}$.
\item[4a.] If $s_{r}^{-1}\sum_{j=1}^{s_r} \left\{\log f_{T} (T_o, T_{c,j}^{(r)}; \bar{\theta}, W) - \log f_{T} (T_o, T_{c,j}^{(r)}; \theta^{(r)}, W) \right\} > 1.96 {\rm ASE}^{(r)}$
\begin{itemize}
\item Set $\theta^{(r+1)} \leftarrow \bar{\theta}$, $s_{r+1} = s_{r}$, $r \leftarrow r + 1$, and return to Step 1.
\end{itemize}
\item[4b.] Else
\begin{itemize}
\item If $s_r \geq 10^5,$ terminate. Else, set $s_{r} \leftarrow 2 s_{r}$ and return to Step 1, appending $s_r$ new samples to the $s_r$ from the previous iteration.
\end{itemize}
\end{enumerate}
\end{itemize}
We terminate the algorithm based on a Monte Carlo sample size threshold in Step 4. When the algorithm has converged, the difference between $\theta^{(r)}$ and $\bar{\theta}$ will be negligible for sufficiently large $s_r$. In practice, we suggest practitioners track the parameter estimates across iterations to ensure that $10^5$ is a sufficiently large threshold for their application.
Because we use a Gibbs sampler in Step 1, the simulated $T_{c,j}^{(r)}$ may be correlated. To decrease dependence while maintaining computational efficiency, we keep every tenth sample generated by the Gibbs sampler \citep{owen2017statistically}. To compute the standard error in Step 3 while accounting for the serial correlations due to the Gibbs sampler, we use the spectral variance method with a Tukey-Hanning window implemented in the R package \texttt{mcmcse} \citep{mcmcse}.
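For intuition, a minimal single-site Gibbs sweep for \eqref{Tc_Dist} is sketched below (Python; the \texttt{tmvtnorm} sampler we actually use is more refined, and this sketch omits thinning and burn-in).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def gibbs_truncated_mvn(mu, Sigma, lower, n_iter=1000, rng=None):
    # Single-site Gibbs sampler for N(mu, Sigma) restricted to [lower, inf).
    rng = np.random.default_rng() if rng is None else rng
    mu, Sigma, lower = (np.asarray(a, dtype=float) for a in (mu, Sigma, lower))
    d = mu.size
    x = np.maximum(mu, lower)                     # a feasible starting point
    draws = np.empty((n_iter, d))
    for it in range(n_iter):
        for i in range(d):
            idx = np.arange(d) != i
            S_io = Sigma[i, idx]
            S_oo_inv = np.linalg.inv(Sigma[np.ix_(idx, idx)])
            m = mu[i] + S_io @ S_oo_inv @ (x[idx] - mu[idx])
            s = np.sqrt(Sigma[i, i] - S_io @ S_oo_inv @ S_io)
            a = norm.cdf((lower[i] - m) / s)      # inverse-CDF truncated draw
            x[i] = m + s * norm.ppf(rng.uniform(a, 1.0))
        draws[it] = x
    return draws
\end{verbatim}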
\subsection{Maximization algorithms}
We now describe how to solve Step 2 of Algorithm 1. Throughout this section, treat $r$ as fixed, let $\hat{T}_j = (T_{c,j}^{(r)'}, T_o')$ for $j=1, \dots, s_{r}$, and let $\bar{T} = s_{r}^{-1} \sum_{j=1}^{s_r} \hat{T}_j$. We develop distinct algorithms for solving \eqref{eq:theta_update} for two types of covariance functions: the single kernel case ($M=1$), and the more general case of multiple distinct kernel functions $(M \geq 1)$. For both cases, we solve \eqref{eq:theta_update} using blockwise coordinate descent. The algorithm we use for the case that $M=1$ is described in the Supplementary Material. This algorithm exploits that $k_1(X,X)$ and $\tilde{K}(X, \tilde{\sigma}^2)$ have the same eigenvectors under \eqref{eq:accelerated_failure_time}.
For the general case that $M \geq 1$, we use a variation of the blockwise coordinate descent algorithm proposed by \citet{zhou2015mm}. The complete algorithm is stated in Algorithm 2.
\begin{itemize}
\item[]\textbf{Algorithm 2:} Initialize $\theta^{(1)} = (\boldsymbol{\beta}^{(1)}, \boldsymbol{\sigma}^{2(1)}, \sigma_{\epsilon}^{2(1)})$ at their final iterates from the previous M-step. Set $b=1$.
\vspace{-8pt}
\begin{enumerate}
\item Compute $\Omega \leftarrow \tilde{K}(X, \tilde{\boldsymbol{\sigma}}^{2(b)})^{-1}$
\item Compute $\boldsymbol{\beta}^{(b+1)} \leftarrow (Z'\Omega Z)^{-1}Z'\Omega \bar{T}$
\item For $ i = 1, \dots, M, $ compute
\vspace{-8pt}
$$ \sigma_{i}^{2(b+1)} \leftarrow \frac{\sigma_{i}^{2(b)}}{\sqrt{s_r}} \left[ \frac{\sum_{j=1}^{s_{r}}(\hat{T}_j - Z \boldsymbol{\beta}^{(b+1)})'\Omega' k_i(X, X) \Omega (\hat{T}_j - Z \boldsymbol{\beta}^{(b+1)})}{{\rm tr}\left\{ \Omega k_i(X, X)\right\} } \right]^{1/2},$$
\vspace{-12pt}
\item Compute
\vspace{-14pt}
$$ \sigma_{\epsilon}^{2(b+1)} \leftarrow \frac{\sigma_{\epsilon}^{2(b)}}{\sqrt{s_{r}}} \left[ \frac{\sum_{j=1}^{s_r}(\hat{T}_j - Z \boldsymbol{\beta}^{(b+1)})'\Omega'\Omega (\hat{T}_j - Z \boldsymbol{\beta}^{(b+1)})}{{\rm tr}(\Omega)}\right]^{1/2}.$$
\vspace{-10pt}
\item[5a.] If $\sum_{j=1}^{s_r} \{\log f_{T} (T_o, T_{c,j}^{(r)}; \theta^{(b+1)}, W) - \log f_{T} (T_o, T_{c,j}^{(r)}; \theta^{(b)}, W)\}$\\
$\leq \epsilon\,|\sum_{j=1}^{s_r} \log f_{T} (T_o, T_{c,j}^{(r)}; \theta^{(1)}, W)|$
\begin{itemize}
\item Terminate.
\end{itemize}
\item[5b.] Else
\begin{itemize}
\item Set $b \leftarrow b + 1$ and return to Step 1.
\end{itemize}
\end{enumerate}
\end{itemize}
The updates of $\sigma_{i}^{2(b+1)}$ and $\sigma_{\epsilon}^{2(b+1)}$ in Steps 3 and 4 are derived based on the minorize-maximize (MM) algorithm for variance components estimation proposed by \citet{zhou2015mm}. Briefly, given the initial values of the parameters or their estimates from the previous iteration, a minorizing function is created to approximate the objective function. The updates in Steps 3 and 4 are the arguments that maximize a minorizing function and thus, ensure that the objective function evaluated at $\theta^{(b+1)}$ is greater than or equal to the objective function evaluated at $\theta^{(b)}.$ A complete derivation of Algorithm 2 is provided in the Supplementary Material.
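For example, the update of a single variance component in Step 3 can be written as follows (a Python sketch; the argument names are ours).
\begin{verbatim}
import numpy as np

def mm_variance_update(sigma2_i, K_i, Omega, residuals, s_r):
    # residuals: rows are T_hat_j - Z @ beta^(b+1), for j = 1, ..., s_r
    quad = sum(r @ Omega.T @ K_i @ Omega @ r for r in residuals)
    return (sigma2_i / np.sqrt(s_r)) * np.sqrt(quad / np.trace(Omega @ K_i))
\end{verbatim}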
In our implementation, we also use quasi-Newton-like acceleration attempts based on an extrapolation heuristic. We found that the iterates from Steps 3 and 4 of Algorithm 2 often followed monotonic paths to their local maximizers. Thus, after Step 4, we attempt to replace $\boldsymbol{\sigma}^{2(b+1)}$ with an extrapolated value
$$\bar{\boldsymbol{\sigma}}^{2(b+1)} = \boldsymbol{\sigma}^{2(b+1)} + (b^{1/2} + 2)^{-1}(\boldsymbol{\sigma}^{2(b+1)} - \boldsymbol{\sigma}^{2(b)}),$$ and similarly for $\sigma_{\epsilon}^{2(b+1)}$. If the log-likelihood evaluated at the extrapolated values $\bar{\boldsymbol{\sigma}}^{2(b+1)}$ and $\bar{\sigma}_{\epsilon}^{2(b+1)}$ is greater than the log-likelihood evaluated at the $\boldsymbol{\sigma}^{2(b+1)}$ and $\sigma_{\epsilon}^{2(b+1)}$, we replace $\boldsymbol{\sigma}^{2(b+1)}$ with $\bar{\boldsymbol{\sigma}}^{2(b+1)}$ and $\sigma_{\epsilon}^{2(b+1)}$ with $\bar{\sigma}_{\epsilon}^{2(b+1)}$.
\subsection{Implementation and practical considerations }
Given the final iterates of the MC-EM algorithm, $\hat{\boldsymbol{\beta}},\hat{\boldsymbol{\sigma}}^2, \hat{\sigma}^2_\epsilon$, and final imputed survival time, $\bar{T}$, we predict log-survival time for a new patient with covariates $z_*$ and genome-wide gene expression $x_*$ using the conditional expectation of the univariate normal distribution:
\begin{equation}\notag
{\rm N}\left\{ \hat{\boldsymbol{\beta}}'z_* + K_{*}(x_*, X, \hat{\boldsymbol{\sigma}}^2)' \tilde{K}(X, \hat{\tilde{\boldsymbol{\sigma}}}^2)^{-1} (\bar{T} - Z\hat{\boldsymbol{\beta}}), \ \ \tilde{K}(x_*, \hat{\tilde{\boldsymbol{\sigma}}}^2) - K_{*}(x_*, X, \hat{\boldsymbol{\sigma}}^2)' \tilde{K}(X, \hat{\tilde{\boldsymbol{\sigma}}}^2)^{-1} K_{*}(x_*, X, \hat{\boldsymbol{\sigma}}^2) \right\},
\end{equation}
where $K_{*}(x_*, X, \hat{\boldsymbol{\sigma}}^2) \in \mathbb{R}^{n}$ with $j$th entry $[K_{*}(x_*, X, \hat{\boldsymbol{\sigma}}^2)]_j = \sum_{s=1}^M \hat{\sigma}_s^2 k_s(x_*, x_j)$ for $j= 1, \dots, n$. We can also easily evaluate the estimated survival function, $\hat{\mathcal{S}}$, at any time $a$, since $P(T_* \leq a \mid z_*, x_*)$ is the cumulative distribution function of a univariate normal distribution.
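As an illustrative sketch (not the authors' code), the predictive mean and variance above can be computed as follows; \texttt{kernels} is a list of the individual kernel functions $k_s$, and we take $\tilde{K}(x_*,\hat{\tilde{\boldsymbol{\sigma}}}^2)$ to include the noise variance $\hat{\sigma}^2_\epsilon$, following the tilde notation.
\begin{verbatim}
import numpy as np

def predict_new_patient(z_star, x_star, X, Z, T_bar, beta_hat, sigma2_hat,
                        sigma2_eps_hat, kernels):
    """Predictive mean and variance of the log-survival time (sketch).

    kernels : list of M callables k_s(x, x') for the individual kernels.
    The remaining arguments are the fitted quantities described in the text.
    """
    n = X.shape[0]
    # K_*(x_*, X, sigma2): weighted kernel vector between x_* and each training x_j
    k_star = np.array([
        sum(s2 * k(x_star, X[j]) for s2, k in zip(sigma2_hat, kernels))
        for j in range(n)])
    # Prior variance at the new point (noise term included, see lead-in)
    k_ss = sum(s2 * k(x_star, x_star)
               for s2, k in zip(sigma2_hat, kernels)) + sigma2_eps_hat

    # tilde{K}(X): weighted kernel matrix on the training data plus noise
    K_tilde = sum(
        s2 * np.array([[k(X[i], X[j]) for j in range(n)] for i in range(n)])
        for s2, k in zip(sigma2_hat, kernels)) + sigma2_eps_hat * np.eye(n)

    mean = beta_hat @ z_star + k_star @ np.linalg.solve(K_tilde, T_bar - Z @ beta_hat)
    var = k_ss - k_star @ np.linalg.solve(K_tilde, k_star)
    return mean, var
\end{verbatim}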
In studies collecting gene expression or other types of -omics data, there are often measured technical confounders, e.g., the plate on which an RNA sample was stored. To address confounding in genome-wide gene expression under \eqref{eq:accelerated_failure_time}, we propose to compute the kernel functions $k_s$ using the residuals from the multivariate regression of gene expression on the measured technical confounders.
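A minimal sketch of this residualization step, assuming the measured technical confounders are collected in a design matrix \texttt{W\_conf} (our notation, assumed to contain an intercept column), is:
\begin{verbatim}
import numpy as np

def residualize_expression(X_expr, W_conf):
    """Regress each gene's expression on technical confounders, keep residuals.

    X_expr : (n, p) gene expression matrix
    W_conf : (n, c) matrix of measured technical confounders (e.g., plate dummies)
    """
    # Least-squares fit of all p genes on the confounders at once
    coef, *_ = np.linalg.lstsq(W_conf, X_expr, rcond=None)
    residuals = X_expr - W_conf @ coef
    return residuals  # use these residuals when computing the kernels k_s
\end{verbatim}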
To obtain reasonable initial values for our MC-EM algorithm with right-censored survival times, we suggest first imputing the censored log-survival times using the inverse probability weighted mean-imputation method proposed by \citet{datta2005estimating}.
\section{Simulation studies}
\subsection{Data generating models}\label{data_gen_models}
To create simulation scenarios similar to our motivating data example, we use the observed gene expression data and clinical covariates of the 513 patients in the TCGA KIRC (kidney renal clear cell carcinoma) dataset, and we simulate survival times for these patients. Specifically, we use the observed tumor stage and age as clinical covariates, and use the observed expression of $p=20,483$ genes to generate survival times from four distinct models. More information about how we prepared the TCGA KIRC dataset is given in Section \ref{data_preparation}. For 500 independent replications, we generate $n = 513$ survival times and split the data into a training and a testing set of sizes 413 and 100, respectively. We then fit the model to the censored training data and record the metrics described in Section \ref{sec:metrics}. The data generating models we consider are:
\begin{enumerate}
\item[] \textit{Model 1: Gaussian process AFT model.} Log-survival times are generated as a realization of the Gaussian process accelerated failure time model: $$T = Z \boldsymbol{\beta} + \eta + \gamma,$$ where
$\gamma \sim {\rm N}_n \left\{ 0, 0.5 I_n \right\}$ and $\eta \sim {\rm N}_n \left\{ 0, K(X, \boldsymbol{\sigma}^2) \right\}$ with $K(X, \boldsymbol{\sigma}^2)$ defined below and $\boldsymbol{\beta} = (6.1, -0.5, -1.2, -2.0, -1\times 10^{-5})$, where the columns of $Z$ correspond to the intercept, tumor stage II, tumor stage III, tumor stage IV, and age in days.
\item[] \textit{Model 2: Normal-Logistic AFT model.} Log-survival times are generated as a realization of the normal-logistic accelerated failure time model, $$T = Z \boldsymbol{\beta} + \eta + \kappa,$$
where $\kappa = (\kappa_1, \dots, \kappa_n)'$ with the $\kappa_i$ independent and identically distributed according to a logistic distribution such that $E(\kappa_i) = 0$ and ${\rm Var}(\kappa_i) = 0.5$. Note that the logistic distribution has much heavier tails than the normal distribution. As in Model 1, $\eta \sim {\rm N}_n \left\{ 0, K(X, \boldsymbol{\sigma}^2) \right\}$ with $K(X, \boldsymbol{\sigma}^2)$ defined below; and $Z$ and $\boldsymbol{\beta}$ are the same as in Model 1.
\item[] \textit{Model 3: Logistic-Logistic AFT model.} Log-survival times are generated as a realization of the logistic-logistic accelerated failure time model, $$T = Z \boldsymbol{\beta} + \omega + \kappa,$$ where $\kappa$ is generated in the same manner as in Model 2. To generate $\omega \in \mathbb{R}^n$, we generate $v_1, \dots, v_n$, $n$ independent copies of $V_i \sim {\rm Logistic}$ where $E(V_i) = 0$ and ${\rm Var}(V_i) = 1$ for $i=1, \dots, n$. Then, we set $(\omega_{1}, \dots, \omega_{n})' = \{ K(X, \boldsymbol{\sigma}^2)\}^{1/2}( v_1, \dots, v_n)'$ so that ${\rm E}(\omega_i) = 0$ and ${\rm Cov}(\omega_i, \omega_j) = [K(X, \boldsymbol{\sigma}^2)]_{i,j}.$
\item[] \textit{Model 4: Cox proportional hazards model.} We generate survival times from the mixed-effects Cox proportional hazards model with Gompertz baseline hazard \citep{bender200
knowledge representation database to support critical reasoning.
In a typical AGI application, knowledge is stored in the form of hypergraphs \cite{opencog-graph}, where each vertex and hyperedge represents an \textit{Atom} \cite{hodges1997shorter} with a certain type.
A pattern matcher, which performs subhypergraph matching\xspace, is used to search for specific patterns in the hypergraphs.
After specifying some arrangements of atoms (i.e., a query hypergraph), the pattern matcher will find all instances of that hypergraph in the atom space (i.e. a data hypergraph).
The matched results can then be sent to a rule engine \cite{watkin2017introduction,baader1999term} for further reasoning.
\sstitle{\underline{Pattern Learning in NLP.}}
Hypergraphs are also increasingly popular in machine learning \cite{hyper-app-learn,hyper-app-learn3,hyper-app-learn4,hyper-app-ml} and natural language processing (NLP) \cite{parsing_hyper,hyper-app-text,hyper-app-nlp}.
The authors of \cite{hyper-app-nlp} propose the concept of \textit{semantic hypergraphs}, where each word is a vertex and each valid sentence is a hyperedge. Semantic hypergraphs can be constructed by parsing large corpora using modern machine learning techniques in NLP.
During the process of pattern learning, some sentences are first selected from a given training corpus. They can be drawn at random or by any other criterion suited to the pattern-learning task at hand. The selected sentences are parsed and transformed into a hypergraph query. Subhypergraph matching\xspace is then performed in the semantic hypergraph to find matched embeddings. Finally, the embeddings are presented to humans for validation with respect to the corresponding learning task. The process repeats with a human-refined query hypergraph if no valid embeddings are found.
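A schematic sketch of this loop is given below; every callable (\texttt{select}, \texttt{to\_query}, \texttt{match}, \texttt{validate}, \texttt{refine}) is a placeholder for a component that the description above only names.
\begin{verbatim}
def pattern_learning_loop(corpus, hypergraph, select, to_query, match,
                          validate, refine, max_rounds=10):
    """Illustrative sketch of the pattern-learning loop (placeholder callables)."""
    sentences = select(corpus)          # random or task-specific selection
    query = to_query(sentences)         # sentences -> query hypergraph
    for _ in range(max_rounds):
        embeddings = match(query, hypergraph)       # subhypergraph matching
        valid = [m for m in embeddings if validate(m)]  # human validation
        if valid:
            return query, valid
        query = refine(query)           # human-refined query hypergraph
    return query, []
\end{verbatim}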
\sstitle{\underline{\red{Q/A over Hypergraph Knowledge Base.}}}
\red{
It is observed in \cite{Wen2016OnTR} that more than 33\% of the entities participate in non-binary relations in the knowledge base Freebase \cite{Freebase}, and further observed in \cite{ijcai2020p303} that 61\% of the entities participate in non-binary relations.
Question answering (Q/A) allows users to pose real-world questions over the knowledge base. Representing the knowledge base as a hypergraph allows us to better express and explore the massive number of non-binary relations in the knowledge base, and queries can be evaluated using subhypergraph matching\xspace \cite{qa-rdf}.
We present a case study in \refsec{case_study}.
}
\stitle{Motivations.}
Subgraph matching in conventional graphs has been extensively studied in the literature.
Existing subgraph matching algorithms \cite{turbo-iso,cfl,daf,ceci,rapidmatch,jin2021fast,match-survey,graph-ql,vf2,quicksi} primarily work on better matching orders, pruning rules, index structures, and enumeration methods to improve efficiency.
However, subhypergraph matching\xspace in hypergraphs has attracted little attention despite its emerging applications, as mentioned above.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{figures/bi-graph.pdf}
\caption{Converted Bipartite Graph (Data Hypergraph in \reffig{data}).}
\label{fig:bi-graph}
\end{figure}
One straightforward approach of subhypergraph matching\xspace is to convert the hypergraph to a \emph{bipartite graph} by treating the hyperedges as the vertices \cite{bio-hyper-ppi2}.
An example bipartite graph of the data hypergraph in \reffig{data} is given in \reffig{bi-graph}, where the upper vertices refer to hyperedges in the original hypergraph, the lower vertices refer to vertices in the original hypergraph, and the edges refer to the connectivity of hyperedges.
After converting both the query and data hypergraphs into bipartite graphs, conventional subgraph matching algorithms can be applied to find embeddings of the original hypergraph.
However, this strawman approach will significantly inflate the size of the graphs. For example, a hypergraph with $2$ million vertices and $15$ million hyperedges will result in a bipartite graph with $17$ million vertices and $1$ billion edges \cite{communities-graph}.
Due to the NP-hard nature of subgraph matching \cite{np-complete}, it is hard to compute embeddings on such inflated graphs \cite{hyperx}.
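For concreteness, a minimal sketch of this hyperedge-as-vertex conversion (our own illustration, not an implementation from the cited work) is:
\begin{verbatim}
def hypergraph_to_bipartite(hyperedges):
    """Convert a hypergraph to a bipartite graph (strawman baseline).

    `hyperedges` maps a hyperedge id to the set of vertex ids it contains.
    Returns an adjacency map over vertex nodes ('v', v) and hyperedge
    nodes ('e', e); each hyperedge node is connected to its vertices.
    """
    adj = {}
    for e, vertices in hyperedges.items():
        adj.setdefault(('e', e), set())
        for v in vertices:
            adj[('e', e)].add(('v', v))
            adj.setdefault(('v', v), set()).add(('e', e))
    return adj

# Example: the bipartite view of a tiny hypergraph
bip = hypergraph_to_bipartite({'e1': {1, 2}, 'e2': {2, 3, 4}})
\end{verbatim}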
Another approach is to directly extend existing subgraph matching algorithms to the case of hypergraphs. Among the existing algorithms, \cite{sm05,sm08} extend Ullmann's backtracking algorithm \cite{ullmann}.
\cite{hyper-iso,subhymatch} also follow the same framework with more filtering rules derived from hypergraph features to improve efficiency.
As most state-of-the-art subgraph matching algorithms (e.g., \cite{cfl,daf,ceci}) follow Ullmann's backtracking framework, such an extension can be orthogonally applied to them as well.
Specifically, they recursively expand a partial embedding vertex by vertex, mapping a query vertex to a data vertex at each step to enumerate all results following a given matching order, and backtrack when necessary.
We denote this framework as the \emph{match-by-vertex} approach.
Hyperedges are used as a verification condition in the match-by-vertex framework, just like the edges in subgraph matching.
However, treating hyperedges simply as verification conditions delays the hyperedge verification and underutilises the high-order information in hypergraphs, which can lead to a huge search space and large enumeration cost.
In addition, with the rapid growth of hypergraph data these days, it is becoming difficult to compute subhypergraph matching\xspace on massive hypergraphs using sequential algorithms.
However, none of the existing subhypergraph matching\xspace algorithms supports parallel execution.
For example, in the real-world hypergraph of Amazon Reviews with more than $4$ million hyperedges (\kw{AR} in \reftable{datasets}), none of the existing sequential algorithms in our experiments are able to compute queries of three hyperedges within a one-hour time limit.
The traditional backtracking framework in subgraph matching adopts depth-first search (DFS), which is generally hard to parallelise.
Distributed solutions for subgraph matching \cite{twin-twig,seed,wco-join,crystaljoin,multiway-join,star-join}, on the other hand, adopt breadth-first search (BFS) in a cluster of machines for high CPU utilisation.
But this can often lead to high memory consumption, network communications, and economic cost \cite{cost}.
Motivated by the above reasons, we study the problem of subhypergraph matching\xspace to develop an efficient and parallel solution on a single machine in this paper.
\stitle{Challenges.}
We summarise key challenges as follows.
\begin{enumerate} [leftmargin=*]
\item \textit{\ul{How to effectively utilise high-order information in hypergraphs?}}
%
Hypergraphs contain n-ary relationships in hyperedges that are typically not present in conventional graphs.
%
Hence, it is crucial to fully utilise the high-order information in hyperedges during matching to reduce search space and speed up enumeration.
\item \textit{\ul{How to efficiently enumerate all embeddings in parallel?}}
%
To improve the performance of subhypergraph matching\xspace on a single machine, it is important to fully utilise the ever-developing hardware (i.e., multi-core) while managing memory consumption well.
%
Furthermore, the power-law nature of real-world graphs \cite{power-law-1,power-law-2} makes it challenging to handle workload disparity among different workers when parallelising.
\end{enumerate}
\stitle{Our Solution and Contributions.}
To address these challenges, we develop \kw{HGMatch}, an efficient and parallel sub\underline{H}yper\underline{G}raph \underline{Match}ing engine on a single machine. Instead of matching the query hypergraph vertex-by-vertex as in the match-by-vertex framework used by existing subgraph matching and subhypergraph matching\xspace algorithms, we propose a \emph{match-by-hyperedge} framework to match the query by hyperedges to fully utilise the n-ary relationships.
Specifically, we made the following contributions.
\begin{enumerate} [leftmargin=*]
\item \textit{\underline{A match-by-hyperedge framework.}}
%
We propose to match the query hypergraph by hyperedges instead of vertices.
%
\kw{HGMatch} expands a partial embedding by one new hyperedge at a time.
%
In this way, \kw{HGMatch} is able to fully utilise the high-order information in hypergraphs to reduce search space and avoid redundant computation of enumerating matchings of vertices.
%
We store the data hypergraph in multiple tables with different \emph{hyperedge signatures} (i.e., a multiset\footnote{A multiset (i.e., bag) is a set that allows for multiple instances for each of its elements.} of vertex labels contained in a hyperedge).
%
A lightweight inverted hyperedge index is then built for each table to speed up the retrieval of incident hyperedges of a given vertex.
%
By doing so, \kw{HGMatch} is able to generate candidate hyperedges directly using set operations (i.e., difference, union and intersections), which can be implemented very efficiently on modern hardware \cite{simd_gallop,qfilter,simd-intersection,bitmap2}.
%
Apart from that, we use set comparison to remove false positives during enumeration, which completely avoids expensive recursive calls in traditional backtracking-based enumeration methods.
\item \textit{\underline{A highly optimised parallel execution engine.}}
%
Thanks to the above-mentioned design, \kw{HGMatch} does not incur any recursive calls or build any auxiliary structures during runtime, which makes it easy to be parallelised.
%
%
We adopt the dataflow model \cite{dataflow,dataflow-def} for parallel execution in \kw{HGMatch}, which has been employed in many recent subgraph matching solutions \cite{huge,patmat-exp,wco-join,graphflow-demo}.
%
%
%
To bound memory consumption while keeping a high degree of parallelism, we design a task-based scheduler in \kw{HGMatch}.
%
With the scheduler, we prove that \kw{HGMatch} achieves a tight memory bound of $O(\overbar{a_q}\times|E(q)|^2\times|E(H)|)$ for subhypergraph matching\xspace, where $\overbar{a_q}$ is the average arity (i.e., hyperedge size) of query, $|E(q)|$ and $|E(H)|$ are the number of query and data hyperedges, respectively.
%
Furthermore, the dynamic work-stealing mechanism \cite{work_stealing1,work_stealing2,cilk} is employed for fine-grained load balancing.
\item \textit{\underline{In-depth experiments using real-world datasets.}}
%
We conducted extensive experiments on $10$ real-world datasets. Results show the efficiency and scalability of \kw{HGMatch}.
%
Compared with the extended versions of the state-of-the-art subgraph matching algorithms \kw{CFL} \cite{cfl}, \kw{DAF} \cite{daf}, and \kw{CECI} \cite{ceci}, as well as \kw{RapidMatch} \cite{rapidmatch}, \kw{HGMatch} achieves an average speedup of more than $5$ orders of magnitude.
%
When using multiple threads, \kw{HGMatch} achieves almost linear scalability as the number of threads increases, with near-perfect load balancing.
%
Besides, \kw{HGMatch} is the only algorithm that is able to complete all queries within the time limit.
\end{enumerate}
\stitle{Paper Organization.} The rest of this paper is organized as follows. \refsec{related_work} discusses related work. \refsec{background} introduces the problem definition and background. In \refsec{overview}, we present the workflow and hypergraph storage of \kw{HGMatch}. We introduce our match-by-hyperedge framework in \refsec{matching} and the design of our parallel execution engine in \refsec{execution}, respectively. The experimental evaluation and case study are presented in \refsec{experiment}, followed by the conclusion in \refsec{conclusion}.
\section{\kw{HGMatch} Overview}\label{sec:overview}
In this section, we introduce the basic workflow of \kw{HGMatch} followed by the data hypergraph storage mechanism in \kw{HGMatch}.
\subsection{Overall Workflow}
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=.9\columnwidth]{figures/hgmatch_overview.pdf}}
\caption{ \kw{HGMatch} Framework Overview. The solid arrow represents the workflow and the dotted arrow shows the interaction between data and processing steps.}
\label{fig:overview}
\end{figure}
The workflow overview of \kw{HGMatch} is illustrated in \reffig{overview}. Specifically, two main stages are \textit{offline data hypergraph preprocessing} and \textit{online query processing}.
In the offline data hypergraph preprocessing stage, the first step is to load the data hypergraph from the source (e.g., text files) and construct the data hypergraph structure (\refsec{data_structure}). Once the data hypergraph structure is constructed, \kw{HGMatch} builds a lightweight inverted hyperedge index to speed up the retrieval of all incident hyperedges of a given vertex (\refsec{inverted_index}). At the end of preprocessing, an indexed data hypergraph is created. Note that \kw{HGMatch} does not build any auxiliary data at runtime; the indexed data hypergraph is created only once offline and is lightweight, as will be discussed later.
In the online query processing stage, \kw{HGMatch} receives a query hypergraph as its input. The query hypergraph is sent to the plan generator to generate an execution plan. The plan generator fetches cardinality information from the indexed data hypergraph to select a better matching order. The generated execution plan is then passed to \kw{HGMatch}'s parallel execution engine, which accesses the indexed data hypergraph and executes the given plan to compute all subhypergraph embeddings in parallel.
\subsection{Data Hypergraph Storage} \label{sec:data_structure}
In \kw{HGMatch}, we store the data hypergraph in multiple \emph{hyperedge tables}, where each hyperedge table has a unique hyperedge signature. We define the concept of hyperedge signature as follows.
\begin{definition}
\textbf{(Hyperedge Signature).} The signature of a hyperedge $e$, denoted as $\mathcal{S}(e)$, is a \textit{multiset} of all vertex labels contained in $e$, i.e., $\mathcal{S}(e) = multiset\{l(v): v\in e\}$.
\end{definition}
We denote $he(v,s)$ as the set of incident hyperedges with signature $s$.
\kw{HGMatch} stores data hyperedges with different hyperedge signatures in separate hyperedge tables denoted as \textit{partitions}. As a result, to search for the candidate hyperedges of a query hyperedge $e_q$, \kw{HGMatch} only needs to scan the partition with the signature $\mathcal{S}(e_q)$, rather than scanning the whole hypergraph.
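As an illustration of signatures and signature-based partitioning (using sorted tuples to represent multisets; the toy labels and hyperedges below are ours, not the running example of \reffig{data}):
\begin{verbatim}
from collections import defaultdict

def signature(hyperedge, labels):
    """Hyperedge signature: the multiset of vertex labels, represented as a
    sorted tuple (with repetitions) so it can serve as a dictionary key."""
    return tuple(sorted(labels[v] for v in hyperedge))

def partition_hyperedges(hyperedges, labels):
    """Group hyperedges into per-signature tables (partitions)."""
    partitions = defaultdict(list)
    for e_id, vertices in hyperedges.items():
        partitions[signature(vertices, labels)].append(e_id)
    return partitions

# Illustrative toy data (not the paper's running example)
labels = {1: 'A', 2: 'B', 3: 'A', 4: 'C'}
hyperedges = {'e1': {1, 2}, 'e2': {2, 3}, 'e3': {1, 3, 4}}
parts = partition_hyperedges(hyperedges, labels)
# parts == {('A', 'B'): ['e1', 'e2'], ('A', 'A', 'C'): ['e3']}
\end{verbatim}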
\begin{table}[t]
\footnotesize
\centering
\begin{subtable}{.38\columnwidth}
%
\centering
\caption{Partition $1$}
\scalebox{0.9}{
\begin{tabular}{|cc|}
\hline
\multicolumn{2}{|c|}{$\mathcal{S}(e) = \{A,B\}$} \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$E$}} & $e_1 = \{v_2,v_4\}$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $e_2 = \{v_4,v_6\}$ \\ \hline
\multicolumn{1}{|c|}{\multirow{3}{*}{$I$}} & $v_2 \rightarrow [e_1]$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $v_4 \rightarrow [e_1,e_2]$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $v_6 \rightarrow [e_2]$ \\ \hline
\end{tabular}
}
\end{subtable}
%
\begin{subtable}{.53\columnwidth}
%
\centering
\caption{Partition $2$}
\scalebox{0.9}{
\begin{tabular}{|cc|}
\hline
\multicolumn{2}{|c|}{$\mathcal{S}(e) = \{A,A,C\}$} \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$E$}} & $e_3 = \{v_0,v_1,v_2\}$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $e_4 = \{v_3,v_5,v_6\}$ \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$I$}} & $v_0,v_1,v_2 \rightarrow [e_3]$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $v_3,v_5,v_6 \rightarrow [e_4]$ \\ \hline
\end{tabular}
}
\end{subtable}
\\
\begin{subtable}{\columnwidth}
%
\centering
\caption{Partition $3$}
\scalebox{0.9}{
\begin{tabular}{|cc|}
\hline
\multicolumn{2}{|c|}{$\mathcal{S}(e) = \{A,A,B,C\}$} \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$E$}} & $e_5 = \{v_0,v_1,v_4,v_6\}$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $e_6= \{v_2,v_3,v_4,v_5\}$ \\ \hline
\multicolumn{1}{|c|}{\multirow{3}{*}{$I$}} & $v_0,v_1,v_6 \rightarrow [e_5]$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $v_4 \rightarrow [e_5,e_6]$ \\ \cline{2-2}
\multicolumn{1}{|c|}{} & $v_2,v_3,v_5 \rightarrow [e_6]$ \\ \hline
\end{tabular}
}
\end{subtable}
%
\caption{ Example Data Layout of the Hypergraph in \reffig{data}. The header of each table represents the signature of all its hyperedges. $E$ represents hyperedges and $I$ represents inverted hyperedge index.}
\label{tab:edge_lists}
\end{table}
\begin{example}
The partitioned hyperedge tables of the data hypergraph $H$ in \reffig{data} are shown in \reftable{edge_lists}.
For the given data hypergraph, \kw{HGMatch} constructs three partitions having signatures $\{A,B\}$, $\{A,A,C\}$ and $\{A,A,B,C\}$, respectively.
\end{example}
\stitle{Size Analysis.}
The proposed hypergraph data structure in \kw{HGMatch} brings only a very small overhead of an additional signature header for each partition, which is no larger than the size of all hyperedges (i.e., all hyperedges have unique signatures in the worst case).
Thus, the total size of storing all hyperedges in \kw{HGMatch} is $O(\overbar{a_H}\times |E(H)|)$.
\subsection{Inverted Hyperedge Index} \label{sec:inverted_index}
In subgraph matching in conventional graphs, it is essential to access all connected edges (i.e., neighbours) of a given vertex. In hypergraphs, similarly, it is often necessary to retrieve all incident hyperedges of a given vertex. %
Given a hyperedge table, such an operation requires a linear scan, which can be time-consuming for large hypergraphs. To further speed up the process of finding all incident hyperedges (with a certain signature) of a given vertex, we adopt the common technique of an inverted index \cite{bitmap1,bitmap2} and build a lightweight \textit{inverted hyperedge index} for each hyperedge table in \kw{HGMatch}.
\begin{example}
The inverted hyperedge index of the data hypergraph $H$ in \reffig{data} is also shown in \reftable{edge_lists}. It maps each vertex in a hyperedge table to a \textit{posting list} of the IDs of all its incident hyperedges in that table (in ascending order).
\end{example}
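A minimal sketch of building and querying this per-partition index, with $he(v,s)$ answered by a posting-list lookup, is given below (illustrative Python, not \kw{HGMatch}'s Rust implementation):
\begin{verbatim}
from collections import defaultdict

def build_inverted_index(partition, hyperedges):
    """Per-partition inverted hyperedge index: vertex -> sorted posting list
    of the ids of its incident hyperedges within this partition.

    `partition` is a list of hyperedge ids sharing one signature and
    `hyperedges` maps a hyperedge id to its vertex set.
    """
    index = defaultdict(list)
    for e_id in sorted(partition):       # ascending hyperedge ids
        for v in hyperedges[e_id]:
            index[v].append(e_id)
    return index

def incident_hyperedges(index, v):
    """he(v, s): incident hyperedges of v within the partition of signature s."""
    return index.get(v, [])
\end{verbatim}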
\stitle{Size Analysis.}
The inverted hyperedge index in \kw{HGMatch} is also lightweight.
For each hyperedge, its hyperedge ID will appear in the posting list of all the vertices it contains.
Therefore, each hyperedge $e$ takes additionally $O(a(e))$ space.
The total size of the inverted edge index is also $O(\overbar{a_H}\times |E(H)|)$.
\section{Experiments}\label{sec:experiment}
\subsection{Experimental Setup}
\kw{HGMatch} is implemented in Rust.
All experiments are conducted on a server with two $20$-core Xeon E5-2698 v4 CPUs ($40$ threads each) and $512$GB memory.
\stitle{Baselines.}
We compare \kw{HGMatch} with the state-of-the-art subgraph matching algorithms \kw{CFL} \cite{cfl}, \kw{DAF} \cite{daf} and \kw{CECI} \cite{ceci}.
We adopt the C++ implementations in a recent experimental study of subgraph matching\footnote{\url{https://github.com/RapidsAtHKUST/SubgraphMatching}} \cite{match-survey}, and extend them to the case of hypergraphs as described in \refsec{baseline} with additional \textit{IHS} filter.
Note that this implementation utilises single instruction multiple data (SIMD) instructions \cite{simd_gallop} to speed up set intersections.
We did not implement SIMD set intersections in \kw{HGMatch}.
The modified algorithms are denoted as \kw{CFL}-\kw{H}, \kw{DAF}-\kw{H} and \kw{CECI}-\kw{H}, respectively.
We do not include \cite{hyper-iso} for the reason discussed at the end of \refsec{baseline}.
Also, we do not compare \cite{sm05,sm08} since the subgraph matching algorithm they extend, namely the Ullmann's algorithm \cite{ullmann}, has been largely outperformed by \kw{CFL}, \kw{CECI} and \kw{DAF} in the literature.
We also compare with \kw{RapidMatch}\footnote{\url{https://github.com/RapidsAtHKUST/RapidMatch}} \cite{rapidmatch}. Since \kw{RapidMatch} uses join-based techniques that cannot be fitted into our generic backtracking framework, we directly convert the query and data hypergraphs to bipartite graphs for \kw{RapidMatch}.
\stitle{Datasets.}
We use $10$ real-world data hypergraphs with labelled vertices in our experiment downloaded from \cite{datasets}.
They are house committees (\kw{HC}), MathOverflow answers (\kw{MA}), contact high school (\kw{CH}), contact primary school (\kw{CP}), senate bills (\kw{SB}), house bills (\kw{HB}), Walmart trips (\kw{WT}), Trivago clicks (\kw{TC}), StackOverflow answers (\kw{SA}), and Amazon reviews (\kw{AR}).
We preprocess the datasets to remove all repeated hyperedges and all repeated vertices in one hyperedge.
The statistics of the datasets are shown in \reftable{datasets}.
\begin{table}[t]
\footnotesize
\centering
\caption{Table of Datasets }
\label{tab:datasets}
\scalebox{0.9}{
\begin{tabular}{|c|r|r|r|r|r|r|}
\hline
Dataset &
\multicolumn{1}{c|}{$|V|$} &
\multicolumn{1}{c|}{$|E|$} &
\multicolumn{1}{c|}{$|\Sigma|$} &
\multicolumn{1}{c|}{$a_{max}$} &
\multicolumn{1}{c|}{$\overbar{a}$}
& \multicolumn{1}{c|}{$|Index|$}
\\ \hline \hline
\kw{HC} & 1,290 & 331 & 2 & 81 & 34.8 & 178KB \\ \hline
\kw{MA} & 73,851 & 5,444 & 1,456 & 1,784 & 24.2 & 2.1MB \\ \hline
\kw{CH} & 327 & 7,818 & 9 & 5 & 2.3 & 109KB \\ \hline
\kw{CP} & 242 & 12,704 & 11 & 5 & 2.4 & 190KB \\ \hline
\kw{SB} & 294 & 20,584 & 2 & 99 & 8.0 & 2.1MB \\ \hline
\kw{HB} & 1,494 & 52,960 & 2 & 399 & 20.5 & 15.5MB \\ \hline
\kw{WT} & 88,860 & 65,507 & 11 & 25 & 6.6 & 6.8MB \\ \hline
\kw{TC} & 172,738 & 212,483 & 160 & 85 & 4.1 & 7.8MB \\ \hline
\kw{SA} & 15,211,989 & 1,103,193 & 56,502 & 61,315 & 23.7 & 419.7MB \\ \hline
\kw{AR} & 2,268,264 & 4,239,108 & 29 & 9,350 & 17.1 & 998.6MB \\ \hline
\end{tabular}
}
\end{table}
\stitle{Queries.}
We use randomly sampled subhypergraphs from the data hypergraphs as our queries. Therefore, for each query hypergraph, there must exist at least one embedding in the corresponding data hypergraph.
Specifically, we perform a random walk in the data hypergraph to generate subhypergraphs with the given number of hyperedges whose number of vertices is in the range of $[|V|_{min},|V|_{max}]$. The settings of our queries are presented in \reftable{queries}. We generate $20$ random queries for each setting.
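An illustrative sketch of this random-walk sampler is shown below; the frontier scan and retry logic are our own simplifications rather than the exact generator used in the experiments.
\begin{verbatim}
import random

def sample_query(hyperedges, num_edges, v_min, v_max, max_tries=1000):
    """Sample a connected subhypergraph with `num_edges` hyperedges whose
    total number of vertices lies in [v_min, v_max], by walking between
    hyperedges that share at least one vertex. Returns None on failure.
    `hyperedges` maps hyperedge ids to vertex sets."""
    edge_ids = list(hyperedges)
    for _ in range(max_tries):
        current = random.choice(edge_ids)
        chosen, vertices = {current}, set(hyperedges[current])
        while len(chosen) < num_edges:
            # hyperedges incident to the vertices collected so far
            frontier = [e for e in edge_ids
                        if e not in chosen and hyperedges[e] & vertices]
            if not frontier:
                break
            nxt = random.choice(frontier)
            chosen.add(nxt)
            vertices |= hyperedges[nxt]
        if len(chosen) == num_edges and v_min <= len(vertices) <= v_max:
            return {e: hyperedges[e] for e in chosen}
    return None
\end{verbatim}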
\iffullpaper
\red{The random queries vary from low to high selectivity; we show the distributions of the number of embeddings for each query setting as box plots in \reffig{num_embeddings}.}
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/hc_result.pdf}
\caption{\kw{HC}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/ma_result.pdf}
\caption{\kw{MA}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/ch_result.pdf}
\caption{\kw{CH}}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/cp_result.pdf}
\caption{\kw{CP}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/sb_result.pdf}
\caption{\kw{SB}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/hb_result.pdf}
\caption{\kw{HB}}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/wt_result.pdf}
\caption{\kw{WT}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/tc_result.pdf}
\caption{\kw{TC}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/query_results/sa_result.pdf}
\caption{\kw{SA}}
\end{subfigure}%
\caption{\red{Number of Embeddings Distributions}}
\label{fig:num_embeddings}
\end{figure*}
\else
\red{The random queries vary from low to high selectivity; detailed statistics of all queries are presented in the full version of this paper \cite{fullpaper}.}
\fi
\stitle{Metrics.} We measure the average elapsed time of each query type. Each query is executed three times for more precise measurement.
\red{We count the number of embeddings for all compared methods instead of outputting them to eliminate I/O costs.}
Since \kw{CFL}-\kw{H}, \kw{DAF}-\kw{H} and \kw{CECI}-\kw{H} fail on almost all queries on our largest dataset, \kw{AR}, we use \kw{AR} only for the parallel evaluation of \kw{HGMatch} (\refsec{parallel_eval}) and use the other datasets in the single-thread evaluation (\refsec{single_eval}).
For single-thread comparisons (\refsec{single_eval}), we set a timeout of $1$ hour for all queries. The running time of out-of-time queries will be counted as $3600$ seconds when computing the average.
\begin{table*}[ht]
\footnotesize
\centering
\begin{minipage}{.3\textwidth}
\centering
\caption{Table of Query Settings}
\label{tab:queries}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c}
\hline
Query & $|E|$ & $|V|_{min}$ & $|V|_{max}$ \\ \hline \hline
$q_2$ & 2 & 5 & 15 \\ \hline
$q_3$ & 3 & 10 & 20 \\ \hline
$q_4$ & 4 & 10 & 30 \\ \hline
$q_6$ & 6 & 15 & 35 \\ \hline
\end{tabular}
}
\end{minipage} %
~
\begin{minipage}{.6\textwidth}
\centering
\caption{Query Completion Ratio (Single-thread)}
\label{tab:complete}
\scalebox{0.9}{
\begin{tabular}{|c|cccccccccc|}
\hline
Algorithm &
\multicolumn{1}{c|}{\kw{HC}} &
\multicolumn{1}{c|}{\kw{MA}} &
\multicolumn{1}{c|}{\kw{CH}} &
\multicolumn{1}{c|}{\kw{CP}} &
\multicolumn{1}{c|}{\kw{SB}} &
\multicolumn{1}{c|}{\kw{HB}} &
\multicolumn{1}{c|}{\kw{WT}} &
\multicolumn{1}{c|}{\kw{TC}} &
\multicolumn{1}{c|}{\kw{SA}} &
Total \\ \hline
\kw{CFL}-\kw{H} &
\multicolumn{4}{c|}{\multirow{4}{*}{\textbf{100\%}}} &
\multicolumn{1}{c|}{56\%} &
\multicolumn{1}{c|}{44\%} &
\multicolumn{1}{c|}{76\%} &
\multicolumn{1}{c|}{90\%} &
\multicolumn{1}{c|}{99\%} &
85\% \\ \cline{1-1} \cline{6-11}
\kw{DAF}-\kw{H} &
\multicolumn{4}{c|}{} &
\multicolumn{1}{c|}{49\%} &
\multicolumn{1}{c|}{43\%} &
\multicolumn{1}{c|}{75\%} &
\multicolumn{1}{c|}{90\%} &
\multicolumn{1}{c|}{99\%} &
84\% \\ \cline{1-1} \cline{6-11}
\kw{CECI}-\kw{H} &
\multicolumn{4}{c|}{} &
\multicolumn{1}{c|}{50\%} &
\multicolumn{1}{c|}{43\%} &
\multicolumn{1}{c|}{75\%} &
\multicolumn{1}{c|}{90\%} &
\multicolumn{1}{c|}{99\%} &
84\% \\ \cline{1-1} \cline{6-11}
\kw{RapidMatch} &
\multicolumn{4}{c|}{} &
\multicolumn{1}{c|}{45\%} &
\multicolumn{1}{c|}{44\%} &
\multicolumn{1}{c|}{75\%} &
\multicolumn{1}{c|}{86\%} &
\multicolumn{1}{c|}{99\%} &
83\% \\ \hline
\textbf{\kw{HGMatch}} &
\multicolumn{10}{c|}{\textbf{100\%}} \\ \hline
\end{tabular}
}
\end{minipage}
\end{table*}
\subsection{Single-thread Comparisons} \label{sec:single_eval}
In this subsection, we evaluate \kw{HGMatch} in a single-thread environment.
We use all the datasets except \kw{AR} as data hypergraphs, as discussed before.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{figures/exp/index_ct.pdf}
\caption{\red{Building Time and Size of Index.}}
\label{fig:index}
\end{figure}
\stitle{\red{Exp-1: Index Building.}}
\red{
We first evaluate the proposed inverted hyperedge index. \reffig{index} shows the time of building the index, the size of the graph and the size of the index (in MB).
Index building is extremely fast: it takes only around $6.7$ seconds even for the largest dataset \kw{AR}. As for the index size, we observe that it is similar to the original graph size, which confirms our size analysis in \refsec{inverted_index}.
}
\stitle{Exp-2: Overall Comparisons.}
We then compare \kw{HGMatch} with \kw{CFL}-\kw{H}, \kw{CECI}-\kw{H}, \kw{DAF}-\kw{H}, and \kw{RapidMatch} to verify the efficiency of our match-by-hyperedge framework (\refsec{matching}).
The results are shown in \reffig{all_cases}.
As demonstrated, \kw{HGMatch} significantly outperforms \kw{CFL}-\kw{H}, \kw{DAF}-\kw{H}, \kw{CECI}-\kw{H}, and \kw{RapidMatch} in all cases, with average speedups of $5\times10^4$, $1\times10^5$, $7\times10^5$, and $1\times10^6$, respectively.
Especially for data hypergraphs with high average arity $\overbar{a}$ including \kw{HC}, \kw{MA}, \kw{HB}, and \kw{SA}, \kw{HGMatch} outperforms \kw{CFL}-\kw{H}, \kw{DAF}-\kw{H} and \kw{CECI}-\kw{H} by up to $6$ orders of magnitude and \kw{RapidMatch} by up to $7$ orders of magnitude.
This is because \kw{HGMatch} can fully use the high-order information in hypergraphs to filter out unpromising hyperedges and reduce redundant computation.
In addition, \kw{HGMatch} is the only algorithm that completes all queries within the time limit. The query completion rate
is shown in \reftable{complete}. All algorithms run successfully for smaller datasets (i.e., \kw{HC}, \kw{MA}, \kw{CH}, and \kw{CP}). However, as the size of data hypergraphs grows, \kw{RapidMatch}, \kw{CFL}-\kw{H}, \kw{DAF}-\kw{H}, and \kw{CECI}-\kw{H} start to fail on some queries.
This is because of the huge search space they have to explore.
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[height = 0.15in]{figures/exp/legend.pdf}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/hc.pdf}
\caption{\kw{HC}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/ma.pdf}
\caption{\kw{MA}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/ch.pdf}
\caption{\kw{CH}}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/cp.pdf}
\caption{\kw{CP}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/sb.pdf}
\caption{\kw{SB}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/hb.pdf}
\caption{\kw{HB}}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/wt.pdf}
\caption{\kw{WT}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/tc.pdf}
\caption{\kw{TC}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/sa.pdf}
\caption{\kw{SA}}
\end{subfigure}%
\caption{Single-thread Comparisons.}
\label{fig:all_cases}
\end{figure*}
\begin{figure*}[ht]
\centering
\begin{minipage}{0.35\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/exp/filter.pdf}
\caption{Candidates Filtering.}
\label{fig:filtering}
\end{minipage}
%
\begin{minipage}{0.35\linewidth}
\centering
\begin{subfigure}[b]{0.8\linewidth}
\centering
\includegraphics[height = 0.12in]{figures/exp/thread_legend.pdf}
\end{subfigure}%
\\
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/exp/amazon_q0.pdf}
\caption{$q_{3}^{1}$}
\end{subfigure}
~
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/exp/amazon_q8.pdf}
\caption{$q_{3}^{2}$}
\end{subfigure}
\caption{Vary Number of Threads.}
\label{fig:scalability}
\end{minipage}
\end{figure*}
\begin{figure*}[ht]
\centering
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{figures/exp/mem.pdf}
\caption{Task-based Scheduling.}
\label{fig:mem}
\end{minipage}
%
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{figures/exp/nostl2.pdf}
\caption{Work Stealing.}
\label{fig:work-stealing}
\end{minipage}
\end{figure*}
\stitle{Exp-3: Candidates Filtering.}
In this experiment, we evaluate the pruning power of \kw{HGMatch}'s candidate generation (\refsec{cand_gen}) and embedding validation process (\refsec{emb_vali}). The results are given in \reffig{filtering}, where we count the number of true embeddings (denoted as `Embeddings'), the number of candidates after applying the vertex number check in Observation \ref{val_rule_1} (denoted as `Filtered'), and the number of candidates generated using \refalg{cand_gen} (denoted as `Candidates'). We plot the total number of candidates over all queries for each data hypergraph. For data hypergraphs \kw{MA} and \kw{SA}, there are almost no false positive candidates in the candidate set due to the large number of labels.
For other datasets with fewer labels, the candidate generation method can produce more false positive candidates. However, with only a fast check of the number of vertices in the partial embedding, \kw{HGMatch} is able to filter out the vast majority of unpromising results; we observe that $97\%$ of the filtered results are true positive embeddings.
This is because, again, \kw{HGMatch} can fully utilise the high-order information in hyperedges to prune candidates.
The results reveal the pruning power of \kw{HGMatch}, which ultimately contributes to its significant speedup over existing algorithms.
\subsection{Parallel Comparisons} \label{sec:parallel_eval}
In this subsection, we evaluate \kw{HGMatch} in the multi-thread environment. We use our largest dataset, \kw{AR}, as the data hypergraph as described and the queries in $q_3$ as the default queries.
Note that the original authors' implementations of \kw{DAF}\footnote{\url{https://github.com/SNUCSE-CTA/DAF}} and \kw{CECI}\footnote{\url{https://github.com/iHeartGraph/ceci-release}} support parallel execution. However, we do not compare against them in our parallel experiments due to errors in their code that cause segmentation faults and/or report wrong numbers of embeddings.
\stitle{Exp-4: Scalability.}
We conduct a scalability test of \kw{HGMatch} by varying the number of threads used for parallel execution.
We present the results of $2$ random queries from $q_3$ with a large number of embeddings.
We denote the two queries as $q_{3}^{1}$ and $q_{3}^{2}$.
Specifically, $q_{3}^{1}$ has about $3.86\times10^{10}$ results and $q_{3}^{2}$ has about $2.53\times10^{8}$ results.
We vary the number of threads from $1$ to $60$.
The results are shown in \reffig{scalability}.
\kw{HGMatch} demonstrates almost perfect linear scalability when the number of threads is equal to or below $20$ (i.e., a $20\times$ speedup when using $20$ threads), thanks to the highly optimised parallel execution engine and dynamic load balancing mechanism.
When the number of threads is beyond $20$, the speedup factor slightly decreases due to non-uniform memory access (NUMA) and hyper-threading in the CPUs of our machine (i.e., $2$ physical CPUs with hyper-threading). In the future, we will investigate NUMA optimisations of \kw{HGMatch}.
\stitle{Exp-5: Scheduling.} In this experiment, we evaluate the memory usage of \kw{HGMatch} to test the effectiveness of its task-based scheduler (\refsec{scheduler}).
We compare \kw{HGMatch}'s task-based scheduler with BFS-style scheduling using $20$ threads.
The memory usages and the number of embeddings of the $20$ random queries in $q_3$ are illustrated in \reffig{mem}.
As the number of embeddings increases, memory usage grows rapidly for BFS-style scheduling.
The results indicate that, for queries with many results, the memory usage of BFS-style scheduling is significantly larger than that of \kw{HGMatch}'s task-based scheduler because all intermediate results are materialised.
This can lead to out-of-memory errors on machines with smaller memory capacities or for complex queries.
However, \kw{HGMatch}'s task-based scheduler keeps the memory usage bounded with stable memory consumption of around $4.8$GB for all $20$ queries while achieving almost linear scalability, as demonstrated in the previous experiment.
\stitle{Exp-6: Load Balancing.}
We further evaluate the effectiveness of \kw{HGMatch}'s dynamic work stealing mechanism. Due to the space limit, we present the results of $q_3^2$ from Exp-4 executed using $20$ threads. The running time of each worker (i.e., thread) is shown in \reffig{work-stealing}. Worker times are sorted in ascending order for ease of illustration.
We compare \kw{HGMatch} with the load balancing strategy of assigning work according to the first matched hyperedges (denoted as `HGMatch-NOSTL' in the figure).
When dynamic work stealing is not applied, we observe load differences among workers, especially for the last worker.
On the other hand, when dynamic work stealing is applied, \kw{HGMatch} achieves a near-perfect load balancing (the dashed line) with little overhead.
\subsection{\red{Case Study}} \label{sec:case_study}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{.4\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{figures/case_study_q1.pdf}
\caption{Example Query $1$}
\label{fig:query1}
\end{subfigure} %
~
\begin{subfigure}[b]{.4\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{figures/case_study_q2.pdf}
\caption{Example Query $2$}
\label{fig:query2}
\end{subfigure}
\caption{\red{Example Queries in Hypergraph Knowledge Base.}}
\label{fig:case_study}
\end{figure}
\red{
We demonstrate a case study of subhypergraph matching\xspace on question answering over a hypergraph knowledge base to help illustrate its applications.
We have conducted the case study on the \kw{JF17K} hypergraph knowledge base dataset \cite{Wen2016OnTR}, which is a small subset of non-binary relations extracted from the knowledge base Freebase \cite{Freebase}. The dataset is a hypergraph with the label for each vertex representing its type.
For example, the hyperedges with labels \textit{(Player, Team, Match)} indicate the fact that a football player played in a match representing a team.
Another example is hyperedges with labels \textit{(Actor, Character, TV Show, Season)}, which indicate that an actor played a character in a TV show during a given season.
We present two example queries in \reffig{case_study} that answer real-world user questions over the knowledge hypergraph with respect to the above example hyperedge types.
Query $1$ (\reffig{query1}) is to find the results of `Football players who represented different teams in different matches'. \kw{HGMatch} finds $111$ embeddings of this query in the dataset.
For instance, the football player Óscar Cardozo played for the Paraguay national football team in FIFA World Cup 2010, but played for the S.L. Benfica team in UEFA Europa League 2014.
Query $2$ (\reffig{query2}) is to find the results of `Actors who played the same character in a TV show on different seasons'. \kw{HGMatch} finds $76$ embeddings of this query in the dataset. For instance, the actor Carlo Bonomi played the character Pingg in the TV show Pingu during seasons 1-4, and the same character was played by David Sant during seasons 5-6.
}
\section*{ACKNOWLEDGMENT}
Wenjie Zhang's research is supported by the Australian Research Council Future Fellowship (FT210100303).
\fi
\newpage
\balance
\bibliographystyle{abbrv}
\section{Introduction}
\noindent ``Classifying'' the objects of a category may occur in different ways. One may introduce (finite or infinite) chains
of objects; e.g., in TOP (the category of topological spaces) the following is a chain (under the inclusion relation):
``The class of metrizable spaces $\subset$ The class of T$_2$ spaces $\subset$ The class of T$_1$ spaces
$\subset$ The class of T$_0$ spaces $\subset$ The class of topological spaces''.
\\
Sometimes trying to refine an existing classification leads to new
concepts; one can mention several texts involving this matter
in topological dynamics as well as other areas of
mathematics~\cite{golestani, hj}. From this point of view, for a
dynamical property $\mathsf P$ we introduce the class of
transformation (semi)groups pseudo--co--decomposable to
transformation semigroups carrying property $\mathsf P$ (with
emphasis on the cases where $\mathsf P$ is distality/non--minimality).
\\
From another point of view, some authors try to understand a transformation semigroup by ``dividing'' its phase space~(\cite[Proposition 2.6]{ellis} and~\cite{chu}). In this text, following~\cite{decom}, we approach the matter by
``dividing'' phase (semi)group.
\\
In the following ``$\subset$'' means strict inclusion.
\subsection*{Co--decomposition vs pseudo--co--decomposition} For nonempty collection
$\{(X,S_\alpha):\alpha\in\Gamma\}$ of transformation semigroups
the concept of multi--transformation semigroup
$(X,(S_\alpha:\alpha\in\Gamma))$ has been introduced as a
generalization of bitransformation group for the first time
in~\cite{decom}, which leads to the definition of
\linebreak
co--decomposition of a transformation semigroup. In this way for
a transformation semigroup $(X,S)$ and dynamical property
$\mathsf{P}$ one may ask ``Is there any
co--decomposition of
transformation semigroup $(X,S)$ like
$(X,(S_\alpha:\alpha\in\Gamma))$ such that for each
$\alpha\in\Gamma$, $(X,S_\alpha)$ has property $\mathsf{P}$?''; in
other words, we want to know whether $(X,S)$ is co--decomposable
to $\mathsf{P}$ transformation semigroups. Co--decomposability of
a transformation semigroup to those carrying a special property
brings another way to classify transformation semigroups; e.g.,
we have the following diagram for $\mathsf{P}\in\{$distal,
equicontinuous$\}$~\cite{decom}: {\small
\\
\begin{tabular}{l}
$\:$ \\
The class of all $\mathsf{P}$ transformation semigroups
\\
\begin{tabular}{rl}
& $\subset$
The class of all transformation semigroups
co--decomposable to $\mathsf{P}$ ones \\
& $\subset$ The class of all transformation semigroups. \\
\end{tabular}
\\ $\:$ \\
\end{tabular}
\\
In this text we introduce the concepts of pseudo--multi--transformation semigroup and pseudo--co--decomposability
to $\mathsf{P}$ transformation semigroups and, via examples, show
that we have the following strict inclusions:
{\small
\\
\begin{tabular}{l}
$\:$ \\
The class of all distal transformation semigroups \\
\begin{tabular}{rl}
& $\subset$
The class of all transformation semigroups
co--decomposable to distal ones \\
& $\subset$
The class of all transformation semigroups
pseudo--co--decomposable to distal ones \\
& $\subset$ The class of all transformation semigroups. \\
\end{tabular} \\ $\:$ \\ \end{tabular}}
\noindent
In this paper our aim is to bring a more precise tool
(pseudo--co--decomposition) to classify transformation semigroups.
\section{Preliminaries}
\noindent By a transformation semigroup (group) $(X,S,\pi)$ or
simply $(X,S)$ we mean a compact Hausdorff topological space $X$,
a discrete topological semigroup (group) $S$ with identity $e$
and a continuous map $\pi:X\times S\to X$, $(x,s)\mapsto xs$, such that for all $x\in X$
and $s,t\in S$ we have $xe=x$ and $x(st)=(xs)t$. In
transformation semigroup $(X,S)$ we say $S$ acts effectively on
$X$, if for each distinct $s,t\in S$ there exists $x\in X$ with
$xs\neq xt$~\cite{ellis}.
\\
For nonempty collection $\{(X,S_\alpha):\alpha\in\Gamma\}$
of transformation semigroups (groups) we
say $(X,(S_\alpha:\alpha\in\Gamma))$ is a multi--transformation
semigroup (group) if for every distinct $\alpha_1,\ldots,\alpha_n
\in\Gamma$, $x\in X$, $s_1\in S_{\alpha_1},\ldots,s_n\in S_{\alpha_n}$ and permutation
$\mathop{\{1,\ldots,n\}\to
\{1,\ldots,n\}}\limits_{k\mapsto m_k}$ we have~\cite{decom}:
\[(\cdots((xs_1)s_2)\cdots)s_n=
(\cdots((xs_{m_1})s_{m_2})\cdots)s_{m_n}\:.\]
In transformation semigroup (group) $(X,S)$ we say
multi--transformation semigroup (group) $(X,(S_\alpha:\alpha\in\Gamma))$ is a co--decomposition of $(X,S)$ if
$S_\alpha$s are distinct sub--semigroups (sub--groups)
of $S$ and semigroup (group) generated by $\bigcup\{S_\alpha:
\alpha\in\Gamma\}$ is $S$.
\\
For dynamical property $\mathsf{P}$ we say the transformation
semigroup (group) $(X,S)$ is co--decomposable to $\mathsf{P}$
transformation semigroups (groups) if it has a co--decomposition
like $(X,(S_\alpha:\alpha\in\Gamma))$ to transformation
semigroups (groups) such that for all $\alpha\in\Gamma$,
$(X,S_\alpha)$ has property $\mathsf{P}$.
\\
We generalize the concept of multi--transformation semigroup and
co--decomposition of a transformation semigroup simply in the
following way:
\begin{definition}
For nonempty collection $\{(X,S_\alpha):\alpha\in\Gamma\}$
of transformation semigroups (groups) we
say $(X,(S_\alpha:\alpha\in\Gamma))$ is a pseudo--multi--transformation
semigroup (group) if for every $\alpha_1,\ldots,\alpha_n
\in\Gamma$, $x\in X$ and permutation
$\mathop{\{1,\ldots,n\}\to
\{1,\ldots,n\}}\limits_{k\mapsto m_k}$ we have:
\[(\cdots((xS_{\alpha_1})S_{\alpha_2})\cdots)S_{\alpha_n}=
(\cdots((xS_{\alpha_{m_1}})S_{\alpha_{m_2}})\cdots)
S_{\alpha_{m_n}}\:.\]
In transformation semigroup (group) $(X,S)$ we say
pseudo--multi--transformation semigroup (group) $(X,(S_\alpha:\alpha\in\Gamma))$ is a co--decomposition of $(X,S)$ if
$S_\alpha$s are distinct sub--semigroups (sub--groups)
of $S$ and semigroup (group) generated by $\bigcup\{S_\alpha:
\alpha\in\Gamma\}$ is $S$.
\\
For dynamical property $\mathsf{P}$ we say the transformation
semigroup (group) $(X,S)$ is
\linebreak
pseudo--co--decomposable to $\mathsf{P}$
transformation semigroups (groups) if it has a
\linebreak
pseudo--co--decomposition
like $(X,(S_\alpha:\alpha\in\Gamma))$ to transformation
semigroups (groups) such that for all $\alpha\in\Gamma$,
$(X,S_\alpha)$ has property $\mathsf{P}$.
\end{definition}
\section{Pseudo--co--decompositions: a new way to classify transformation semigroups}
\noindent It's evident that every multi--transformation semigroup
is a pseudo--multi--transformation semigroup, and
every co--decomposition is a pseudo--co--decomposition. In this section we present a
pseudo--multi--transformation semigroup (pseudo--co--decomposition
of a transformation semigroup), which is not a
multi--transformation semigroup (co--decomposition). Then, in the
subsections, we present counterexamples to show that considering
pseudo--co--decompositions provides finer classifications of the
class of transformation groups, with emphasis on distality and
non--point transitivity.
\begin{convention}
Let $\mathfrak{B}=\{\frac1n:n\geq1\}\cup\{0\}$ with induced
topology of Euclidean real line $\mathbb{R}$. Also
for permutation $\sigma:\mathop{\mathbb{N}\to\mathbb{N}}\limits_{
k\mapsto m_k}$ consider $\mathsf{f}_\sigma:\mathfrak{B}\to
\mathfrak{B}$ with:
\[x\mathsf{f}_\sigma:=\left\{\begin{array}{ll}
\frac{1}{m_k} & x=\frac1k,\ k\geq1\:, \\
0 & x=0\:.
\end{array}\right.\]
Thus $h:\mathfrak{B}\to\mathfrak{B}$ is a homeomorphism
if and only if there exists a permutation $\tau:\mathbb{N}\to
\mathbb{N}$ with $h=\mathsf{f}_\tau$.
\\
For $n\geq1$ let $T_n=\{f:\mathfrak{B}\to\mathfrak{B}\::\: f$ is bijective and for all $x\neq 0,1,\frac{1}{2},\ldots,\frac{1}{n}$ we have $xf=x\}$
(thus $T_n=\{\mathsf{f}_\sigma:\forall k> n\:\:k\sigma=k\}$)
and $T=\bigcup\{T_m:m\geq1\}$. Thus $T$ (and $T_n$s) is a group
of homeomorphisms on $\mathfrak{B}$ (under the operation
of composition of maps) which acts in a natural way on
$\mathfrak{B}$.
\end{convention}
\begin{example}\label{salam10}
$(\mathfrak{B},(T_n:n\geq1))$ is a pseudo--co--decomposition of
$(\mathfrak{B},T)$ and it is not a co--decomposition of
$(\mathfrak{B},T)$.
In particular, $(\mathfrak{B},(T_n:n\geq1))$ is a
pseudo--multi--transformation semigroup and it is not a multi--transformation semigroup.
\end{example}
\begin{proof}
Suppose $n_1,\ldots,n_k\geq1$ and $p=\max(n_1,\ldots,n_k)$, then
for every permutation $\mathop{\{1,\ldots,k\}\to\{1,\ldots,k\}}\limits_{j\mapsto m_j}$ we have $T_{n_1}\cdots T_{n_k}=T_p=T_{n_{m_1}}\cdots T_{n_{m_k}}$, hence for all $x\in \mathfrak{B}$ we have
$xT_{n_1}\cdots T_{n_k}=xT_p=xT_{n_{m_1}}\cdots T_{n_{m_k}}$ and
$(\mathfrak{B},(T_n:n\geq1))$ is a pseudo--multi--transformation semigroup.
\\
Now consider permutations $\sigma=(1 \: 2)$ and $\mu=(1 \: 3)$ on
$\mathbb{N}$, then $\mathsf{f}_\sigma\in T_2$ and
$\mathsf{f}_\mu\in T_3$
and
$1\mathsf{f}_\sigma\mathsf{f}_\mu=\frac12\neq\frac13=1
\mathsf{f}_\mu\mathsf{f}_\sigma$,
so $(\mathfrak{B},(T_n:n\geq1))$ is not a multi--transformation semigroup.
\end{proof}
\subsection{Is $(\mathfrak{B},T)$ (pseudo--)co--decomposable to distal transformation semigroups?}
\noindent In this sub--section we show $(\mathfrak{B},T)$
is pseudo--co--decomposable to distal
transformation groups, however it is not co--decomposable
to distal transformation semigroups.
\\
In transformation semigroup $(X,S)$ consider proximal
relation
$P(X,S)=\{(x,y)\in X\times X:$ there exists a net $\{t_\alpha\}_{
\alpha\in\Gamma}$ in $S$ and $z\in X$ such that
$\mathop{\lim}\limits_{\alpha\in\Gamma}xt_\alpha=z=
\mathop{\lim}\limits_{\alpha\in\Gamma}yt_\alpha\}$. We
say the transformation semigroup $(X,S)$ is distal
if $P(X,S)=\Delta_X(=\{(x,x):x\in X\})$~\cite{ellis}.
\begin{lemma}\label{salam50}
In transformation semigroup $(\mathfrak{B},S)$ suppose
$S\subseteq\{\mathsf{f}_\sigma:\sigma$ is a permutation on
$\mathbb{N}\}$. The following statements are equivalent:
\begin{itemize}
\item[1.] $(\mathfrak{B},S)$ is distal,
\item[2.] $(\mathfrak{B},S)$ has a finite pseudo--co--decomposition
$(\mathfrak{B},(S_i:1\leq i\leq n))$ to distal
transformation semigroups,
\item[3.] for all $x\in \mathfrak{B}$, $xS$ is finite,
\item[4.] for all $x\in \mathfrak{B}\setminus\{0\}$, $xS$ is finite.
\end{itemize}
\end{lemma}
\begin{proof} First of all note that $0S=\{0\}$ is finite, hence (3) and (4) are equivalent.
\\
``(1)$\Rightarrow$(4)'' If there exists $x\in \mathfrak{B}\setminus\{0\}$
with infinite $xS$, then there exists a sequence $\{s_n\}_{n\geq1}$ in $S$ such that
$\{xs_n\}_{n\geq1}$ is a one--to--one sequence in $xS\subseteq\{\frac1n:n\geq1\}=\mathfrak{B}\setminus\{0\}$
thus $\mathop{\lim}\limits_{n
\to\infty}xs_n=0(=\mathop{\lim}\limits_{n\to\infty}0s_n)$,
and $(0,x)\in P(\mathfrak{B},S)$, hence $(\mathfrak{B},S)$
is not distal.
\\
``(3)$\Rightarrow$(1)'' Suppose for all $x\in \mathfrak{B}$, $xS$
is finite and $(u,v)\in P(\mathfrak{B},S)$, then there exists a
sequence $\{t_n\}_{n\geq1}$ in $S$ such that
$z:=\mathop{\lim}\limits_{n\to\infty}ut_n=
\mathop{\lim}\limits_{n\to\infty}vt_n$. Thus $uS\cap
vS=\overline{uS}\cap\overline{vS}\ni z$. If $u\neq v$, then we
may suppose $u\neq0$ hence $0\notin uS$ and $z\neq 0$ which leads
to openness of $\{z\}$. There exists $N\geq1$ such that for all
$n\geq N$ we have $ut_n=z=vt_n$ which shows $u=v$ and distality
of $(\mathfrak{B},S)$.
\\
``(1)$\Rightarrow$(2)'' This is clear: if $(\mathfrak{B},S)$ is distal, then $(\mathfrak{B},(S_1))$ with $S_1=S$ is a finite pseudo--co--decomposition of $(\mathfrak{B},S)$ to distal transformation semigroups.
\\
``(2)$\Rightarrow$(4)''
Suppose $(\mathfrak{B},(S_i:1\leq i\leq n))$ is a
pseudo--co--decomposition of $(\mathfrak{B},S)$ to distal
transformation semigroups, then for each $1\leq i\leq n$, $(\mathfrak{B},S_i)$ is distal and $xS_i$ is finite for each $x\in \mathfrak{B}$
(since (1) and (3) are equivalent). Thus for each $x\in \mathfrak{B}$, $xS_1\cdots S_n$ is finite; using the
definition of pseudo--co--decomposition we have $xS=xS_1\cdots S_n$, hence $xS$ is finite,
which shows distality of $(\mathfrak{B},S)$.
\end{proof}
\begin{lemma}\label{salam20}
In transformation semigroup $(X,S)$ suppose $S$ acts effectively on $X$ and $(X,(S_\alpha:\alpha\in\Gamma))$ is a co--decomposition of $(X,S)$, in particular suppose $e\in\bigcap\{S_\alpha:\alpha\in\Gamma\}$, then
for each $\alpha,\beta\in\Gamma$ we have (for $A\subseteq S$ suppose $<A>$ is the sub--semigroup
of $S$ generated by $A$):
\begin{itemize}
\item[1.] $<S_\alpha \cup S_\beta>=S_\alpha S_\beta$,
\item[2.] $S=\bigcup\{S_{i_1}\cdots S_{i_n}:n\geq1$ and
$i_1,\ldots,i_n\in\Gamma$ are distinct$\}$,
\item[3.] there exists
a subset $\Lambda$ of $\Gamma$
with ${\rm card}(\Lambda)\leq\max(\aleph_0,{\rm card}(S))$
such that $(X,(S_\alpha:\alpha\in\Lambda))$ is a co--decomposition of $(X,S)$.
\end{itemize}
\end{lemma}
\begin{proof} 1, 2) Note that for each distinct $\alpha,\beta\in\Gamma$, $s_\alpha\in S_\alpha$ and $s_\beta\in S_\beta$ we have:
\[\forall x\in X\:\: xs_\alpha s_\beta=xs_\beta s_\alpha\]
thus $s_\alpha s_\beta=s_\beta s_\alpha$ since $(X,S)$ is effective.
\\
3) Consider $\theta\in\Gamma$. Using (2), for each $t\in S$ we may choose a finite
sequence $(\alpha_1^t,\cdots,\alpha_{n_t}^t)$ of elements
of $\Gamma$ such that
\[t\in S_{\alpha_1^t}\cdots S_{\alpha_{n_t}^t}\:.\]
Let
$(w_k^t)_{k\geq1}:=(\alpha_1^t,\cdots,\alpha_{n_t}^t,\theta,\theta,\cdots)$. Then for each $k\geq1$, $\Lambda_k=\{w_k^t:t\in S\}$ is a subset of $\Gamma$ with ${\rm card}(\Lambda_k)\leq
{\rm card}(S)$. Let $\Lambda=\bigcup\{\Lambda_k:k\geq1\}$,
then ${\rm card}(\Lambda)\leq\max(\aleph_0,{\rm card}(S))$.
Moreover $S$ is the sub--semigroup generated by $\bigcup\{S_i:i\in\Lambda\}$, thus
$(X,(S_\alpha:\alpha\in\Lambda))$ is a co--decomposition of $(X,S)$.
\end{proof}
\begin{theorem}\label{salam60}
The transformation semigroup $(\mathfrak{B},T)$ is not
co--decomposable to distal transformation semigroups.
\end{theorem}
\begin{proof}
First note that for all $x\in \mathfrak{B}\setminus\{0\}$ the set
$xT=\mathfrak{B}\setminus\{0\}$ is infinite, thus
$(\mathfrak{B},T)$ is not distal by Lemma~\ref{salam50}. Suppose
$(\mathfrak{B},(S_\alpha:\alpha\in\Gamma))$ is a
co--decomposition of $(\mathfrak{B},T)$ to distal transformation
semigroups, then by item (3) of Lemma~\ref{salam20} and
countability of $T$, there exists a countable subset $\Lambda$ of
$\Gamma$ such that $(\mathfrak{B},(S_\alpha:\alpha\in\Lambda))$
is a co--decomposition of $(\mathfrak{B},T)$ (to distal
transformation semigroups). Since $(\mathfrak{B},T)$ is not
distal, by Lemma~\ref{salam50}, $\Lambda$ is not finite, hence
$\Lambda$ is infinite countable.
\\
So we may suppose $\Lambda=\mathbb{N}$ and
$(\mathfrak{B},(S_n:n\geq1))$ is a co--decomposition of $(\mathfrak{B},T)$
to distal transformation semigroups. Since $T_3$ is a finite
subset of $T(=\bigcup\{S_1\cdots S_n:n\geq1\})$, there exists
$p\geq1$ such that $T_3\subseteq S_1\cdots S_p$. Suppose
$\{K_n:n\geq1\}=\{S_n:n>p\}\cup\{S_1\cdots S_p\}$ with
$K_1=S_1\cdots S_p$ and distinct $K_n$s. Then
$(\mathfrak{B},(K_n:n\geq1))$ is a co--decomposition of $(\mathfrak{B},T)$
to distal transformation semigroups with $T_3\subseteq K_1$.
Let's continue the proof via the following claims:
\\
{\bf Claim 1.} $1K_n=\{1\}$ for all $n\geq2$.
\\
{\it Proof of Claim 1.} Consider $n\geq2$, note that $1K_n\subseteq 1T=\mathfrak{B}\setminus\{0\}=\{\frac1t:t\geq1\}$.
For $t>1$ if $\frac1t\in 1K_n$ then
there exists permutation $\mu:\mathbb{N}\to\mathbb{N}$ with
$1\mu=t$ and $\mathsf{f}_\mu\in K_n$, so for
permutation $\theta=(1\: \: q)$ with $q\in\{2,3\}\setminus\{t\}$
we have $\mathsf{f}_\theta\in T_3\subseteq K_1$, hence
$\frac1q\mathsf{f}_\theta\mathsf{f}_\mu=
\frac1q\mathsf{f}_\mu\mathsf{f}_\theta$ thus
$\frac{1}{1\mu}=\frac{1}{q\mu\theta}$ and
$t=q\mu\theta$ hence $q\mu=t\theta^{-1}=t=1\mu$ which
leads to contradiction $q=1$. Hence $\frac1t\notin 1K_n$
and $1K_n=\{1\}$.
\\
{\bf Claim 2.} For $s,n\geq2$ with $ \frac1s \in 1K_1$ we have $\frac1s K_n=\{\frac1s\}$.
\\
{\it Proof of Claim 2.} For $n,s\geq2$ and $t\geq1$ suppose
$\frac1s \in 1K_1$ and $\frac1t\in\frac1s K_n$. There exist permutations $\mu,\lambda$ on
$\mathbb N$ with $1\mu=s$, $s\lambda=t$, $\mathsf{f}_\mu\in K_1$
and $\mathsf{f}_\lambda\in K_n$. By Claim 1, $\frac1{1\lambda}=1\mathsf{f}_\lambda\in1K_n=\{1\}$ and $1\lambda=1$. So $\frac{1}{1\mu\lambda}=
1\mathsf{f}_\mu\mathsf{f}_\lambda=1
\mathsf{f}_\lambda\mathsf{f}_\mu
=\frac{1}{1\lambda\mu}$ and
$t=s\lambda=1\mu\lambda=1\lambda\mu= 1\mu=s$. Hence
$\frac1s K_n=\{\frac1s\}$.
\\
Using Lemma~\ref{salam50} (since $(\mathfrak{B},K_1)$ is distal),
$1K_1$ is finite and there exists $q>1$ with
$\frac1q\in\mathfrak{B}\setminus 1K_1$. Consider
permutation $\psi=(1\: q)$ on $\mathbb{N}$ then
$\mathsf{f}_\psi\in T(=\bigcup\{K_1\cdots K_n:n\geq1\})$
thus there exists $m\geq1$, $h_1\in K_1,\ldots,h_m\in K_m$
with $\mathsf{f}_\psi=h_1\cdots h_m$.
For all $i\in\{2,\ldots,m\}$ using Claim 1, $1h_i\in 1K_i=\{1\}$, thus
$1h_1h_i=1h_ih_1=1h_1$, and $1h_1=1h_1\cdots h_m=1\mathsf{f}_\psi=\frac{1}{1\psi}=\frac1q$,
which leads to the contradiction $\frac1q\in1K_1$.
\\
Therefore $(\mathfrak{B},T)$ is not co--decomposable to distal transformation semigroups.
\end{proof}
\begin{example}\label{salam70}
Using Theorem~\ref{salam60}, $(\mathfrak{B},T)$ is not
co--decomposable to distal transformation semigroups, however for
all $n\geq1$, $T_n$ is a finite group and $(\mathfrak{B}, T_n)$
is a distal transformation group, so $(\mathfrak{B},
(T_n:n\geq1))$ is a pseudo--co--decomposition of $(
\mathfrak{B},T)$ to distal transformation groups (see
Example~\ref{salam10}).
\end{example}
\begin{example}\label{salam80}
Let $G=\{\mathsf{f}_\sigma:\sigma$ is a permutation on $\mathbb{N}\}$. Then the transformation group
$(\mathfrak{B},G)$ is not pseudo--co--decomposable to distal transformation semigroups.
\end{example}
\begin{proof}
Suppose $(\mathfrak{B},(S_\alpha:\alpha\in\Gamma))$ is a
pseudo--co--decomposition of $(\mathfrak{B},G)$ to distal
transformation semigroups, then for all $x\in\mathfrak{B}$ and
$\alpha\in\Gamma$, $xS_\alpha$ is finite (use Lemma~\ref{salam50}), thus for all $\alpha_1,\ldots,\alpha_n\in
\Gamma$ and $x\in \mathfrak{B}$, $xS_{\alpha_1}\cdots S_{\alpha_n}$ is finite.
\\
Consider $\sigma:\mathbb{N}\to\mathbb{N}$ with:
\[\cdots\mathop{\rightarrow}\limits^{\sigma}6
\mathop{\rightarrow}\limits^{\sigma}4
\mathop{\rightarrow}\limits^{\sigma} 2
\mathop{\rightarrow}\limits^{\sigma}1
\mathop{\rightarrow}\limits^{\sigma}3
\mathop{\rightarrow}\limits^{\sigma}5
\mathop{\rightarrow}\limits^{\sigma}7
\mathop{\rightarrow}\limits^{\sigma}\cdots\]
then $\mathsf{f}_\sigma\in G$ and there exist
$\beta_1,\ldots,\beta_m\in\Gamma$ with
$\mathsf{f}_\sigma\in S_{\beta_1}\cdots S_{\beta_m}$, thus for all $k\geq1$
we have $1\mathsf{f}_\sigma^k\in1\mathop{\underbrace{(S_{\beta_1}\cdots S_{\beta_m})\cdots(S_{\beta_1}\cdots S_{\beta_m})}}\limits_{k{\rm \: times}}=1S_{\beta_1}\cdots S_{\beta_m}$.
So
\[\{\frac13,\frac15,\frac17,\ldots\}=\{1\mathsf{f}_\sigma^k:k\geq1\}\subseteq
1S_{\beta_1}\cdots S_{\beta_m}\]
which is in contradiction with finiteness of
$1S_{\beta_1}\cdots S_{\beta_m}$ and
$(\mathfrak{B},G)$ is not pseudo--co--decomposable to
distal transformation semigroups.
\end{proof}
\begin{remark}\label{salam90}
There exist non--distal transformation groups co--decomposable to distal ones~\cite[Theorem~3.5]{decom}.
\end{remark}
\begin{corollary}\label{taha10}
Using Examples~\ref{salam70},~\ref{salam80} and
Remark~\ref{salam90}, we have the following strict inclusions
(compare with~\cite[Theorem 3.5]{decom}):
\vspace{5mm}
{\small\begin{center}
\begin{tabular}{|c|}
\hline The class of all transformation semigroups\\
\begin{tabular}{|c|} \hline
The class of all transformation semigroups pseudo--co--decomposable to distal ones\\
\begin{tabular}{|c|} \hline
The class of all transformation semigroups co--decomposable to distal ones\\
\begin{tabular}{|c|} \hline
The class of all distal transformation semigroups \\
(Example: trivial transformation group $(X,\{id_X\})$) \\ \hline
\end{tabular}\\
Remark~\ref{salam90} \\ \hline
\end{tabular}\\
Example~\ref{salam70} \\ \hline
\end{tabular}\\
Example~\ref{salam80} \\ \hline
\end{tabular}
\end{center}}
\end{corollary}
\begin{note}
One may study the interaction of a transformation semigroup and its pseudo--co--decompositions with operators like
the product, disjoint union, and quotient of transformation semigroups, using methods similar to those described in~\cite{decom}.
\end{note}
\subsection{A glance at the minimality and point transitivity approach}
\noindent We say the transformation semigroup $(X,S)$ is point
transitive if there exists $x\in X$ such that $\overline{xS}=X$,
moreover $(X,S)$ is minimal if for each $x\in X$,
$\overline{xS}=X$. It's evident that $(X,S)$ is point transitive
(resp. minimal) if and only if $S$ has a sub--semigroup like
$S_0$ such that $(X,S_0)$ is point transitive (resp. minimal).
Hence $(X,S)$ is point transitive (resp. minimal) if and only if
it has a pseudo--co--decomposition
$(X,(S_\alpha:\alpha\in\Gamma))$ such that $(X,S_\alpha)$ is
point transitive (resp. minimal) for some $\alpha\in\Gamma$.
Hence what is interesting for us is pseudo--co--decomposability
of a point transitive (resp. minimal) transformation semigroup to
non--point transitive (resp. non--minimal) transformation semigroups.
\\
Consider the unit circle $\mathbb{S}^1=\{e^{i\theta}:\theta\in\mathbb{R}\}$, and for $\alpha\in\mathbb{R}$ let:
\\
$\bullet$ $\varphi_\alpha:\mathbb{S}^1\to\mathbb{S}^1$, $e^{i\theta}\mapsto e^{i(\alpha+\theta)}$,
be the rotation by $\alpha$ on the unit circle,
\\
$\bullet$ $\varepsilon_\alpha:\mathbb{S}^1\to\mathbb{S}^1$, $e^{i\theta}\mapsto e^{i(\alpha-\theta)}$,
be the composition of the rotation by $\alpha$ and the
conjugation map $\eta:\mathbb{S}^1\to\mathbb{S}^1$, $e^{i\theta}\mapsto e^{-i\theta}$.
\\
Then $\Sigma=\{\varphi_\alpha:\alpha\in2\pi\mathbb{Q}\}$ is the group of rotations by rational multiples of $2\pi$
on the unit circle, and $\Sigma^*=\{\varphi_\alpha:\alpha\in2\pi\mathbb{Q}\}\cup
\{\varepsilon_\alpha:\alpha\in2\pi\mathbb{Q}\}$ is the group generated by the conjugation map and the rotations by rational multiples of $2\pi$
on the unit circle.
\begin{remark}\label{taha20}
$(\mathbb{S}^1,\Sigma)$ is a minimal (point transitive) transformation group, co--decomposable to non--minimal transformation
groups. In fact $(\mathbb{S}^1,(\{\varphi_\alpha^n:n\in\mathbb{N}\}:\alpha\in \{\frac{2\pi}{m}:m\geq1\}))$
is a co--decomposition of
$(\mathbb{S}^1,\Sigma)$ to non--minimal (non--point transitive) transformation groups~\cite[Counterexample 3.3]{decom}.
\end{remark}
\noindent $(\mathbb{S}^1,\Sigma^*)$ is minimal since $(\mathbb{S}^1,\Sigma)$ is minimal and $\Sigma\subseteq\Sigma^*$.
We show that $(\mathbb{S}^1,\Sigma^*)$ is pseudo--co--decomposable to non--minimal transformation groups
but is not co--decomposable to non--minimal transformation semigroups.
\\
Item (1) of the following lemma shows that $\Sigma^*$ is the semigroup generated by $\eta$ and $\Sigma$. Since all elements
of $\Sigma^*$ have finite order, $\Sigma^*$ is also a group (under composition of maps). By item (3)
of the following lemma, $\Sigma^*$ is non--abelian.
\begin{lemma}\label{taha30}
We have:
\begin{itemize}
\item[1.] $\varepsilon_\alpha=\eta\varphi_\alpha=\varphi_{-\alpha}\eta$ for each $\alpha\in\mathbb{R}$,
\item[2.] $\{h\in\Sigma^*:h^2=id_{\mathbb{S}^1}\}=\{\varepsilon_\alpha:\alpha\in2\pi\mathbb{Q}\}\cup\{\varphi_\alpha:\alpha\in\pi\mathbb{Z}\}$,
\item[3.] $\{h\in\Sigma^*:h\eta=\eta h\}=\{\eta, id_{\mathbb{S}^1}, \varphi_\pi,\varepsilon_\pi\}$,
\item[4.] each sub--semigroup of $\Sigma^*$ is a sub--group of $\Sigma^*$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) Consider $\theta,\alpha\in \mathbb{R}$, then
$e^{i\theta}\varepsilon_\alpha=e^{i(\alpha-\theta)}=e^{-i\theta}\varphi_\alpha=e^{i\theta}\eta\varphi_\alpha$
and
\linebreak
$e^{i\theta}\varepsilon_\alpha=e^{i(\alpha-\theta)}=e^{i(-\alpha+\theta)}\eta=e^{i\theta}\varphi_{-\alpha}\eta$.
\\
(2) For each $\alpha\in\mathbb{R}$, we have $\varphi_\alpha^2=\varphi_{2\alpha}=id_{\mathbb{S}^1}$ if and only if
$\alpha\in\pi\mathbb{Z}$, i.e. $\varphi_\alpha\in\{\varphi_\pi,\varphi_{2\pi}\}=\{id_{\mathbb{S}^1}, \varphi_\pi\}$; moreover, using item (1),
$\varepsilon_\alpha^2=\eta\varphi_\alpha\eta\varphi_\alpha=
\eta\varphi_\alpha\varphi_{-\alpha}\eta=\eta^2=id_{\mathbb{S}^1}$ for every $\alpha\in\mathbb{R}$.
\\
(3) Consider $\alpha\in\mathbb{R}$, then:
\begin{eqnarray*}
\varphi_\alpha\eta=\eta\varphi_\alpha & \Leftrightarrow & \eta\varphi_{-\alpha}=\eta\varphi_\alpha
\Leftrightarrow \varphi_{-\alpha}=\varphi_\alpha \Leftrightarrow \varphi_{2\alpha}=id_{\mathbb{S}^1} \\
& \Leftrightarrow & \alpha\in\pi\mathbb{Z} \Leftrightarrow\varphi_\alpha\in\{id_{\mathbb{S}^1}, \varphi_\pi\}
\end{eqnarray*}
and
\begin{eqnarray*}
\varepsilon_\alpha\eta=\eta\varepsilon_\alpha & \Leftrightarrow & \eta\varphi_\alpha\eta=\eta\eta\varphi_\alpha
\Leftrightarrow \varphi_\alpha\eta=\eta\varphi_\alpha \Leftrightarrow
\varphi_\alpha\in\{id_{\mathbb{S}^1}, \varphi_\pi\} \\
& \Leftrightarrow & \varepsilon_\alpha=\eta\varphi_\alpha\in \{\eta id_{\mathbb{S}^1},\eta \varphi_\pi\}
=\{\eta,\varepsilon_\pi\}
\end{eqnarray*}
(4) Use the fact that each element of $\Sigma^*$ has finite order.
\end{proof}
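The identities above can be checked numerically; the following Python sketch is purely illustrative (random sample points, maps acting on the right as in the text):
\begin{verbatim}
# Illustration only: numerical check of items (1)-(3) of the lemma.
import numpy as np

phi = lambda a: (lambda z: np.exp(1j * a) * z)          # rotation by a
eta = lambda z: np.conj(z)                               # conjugation map
eps = lambda a: (lambda z: np.exp(1j * a) * np.conj(z))  # e^{it} -> e^{i(a-t)}

zs = np.exp(1j * np.random.uniform(0, 2 * np.pi, 100))
a = 2 * np.pi * 3 / 7
# item (1): eps_a = (eta then phi_a) = (phi_{-a} then eta)
assert np.allclose(eps(a)(zs), phi(a)(eta(zs)))
assert np.allclose(eps(a)(zs), eta(phi(-a)(zs)))
# item (2): eps_a is an involution
assert np.allclose(eps(a)(eps(a)(zs)), zs)
# item (3): phi_a commutes with eta only for a in {0, pi} (mod 2*pi)
assert not np.allclose(eta(phi(a)(zs)), phi(a)(eta(zs)))
assert np.allclose(eta(phi(np.pi)(zs)), phi(np.pi)(eta(zs)))
\end{verbatim}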
\begin{example}\label{taha40}
$(\mathbb{S}^1,(\{\eta^i\varphi_\alpha^n:n\in\mathbb{Z}, i=0,1\}:\alpha\in \{\frac{2\pi}{m}:m\geq2\}))$ is a
pseudo--co--decomposition of $(\mathbb{S}^1,\Sigma^*)$ to
non--minimal (non--point transitive) transformation groups.
\end{example}
\begin{proof}
For each $m\geq2$, $T_m=\{\eta^i\varphi_\frac{2\pi}{m}^n:n\in\mathbb{Z}, i=0,1\}=
\{\eta^i\varphi_\frac{2n\pi}{m}:1\leq n\leq m, i=0,1\}$ is a finite subgroup of $\Sigma^*$, hence $e^{i\theta}T_m$ is a finite
non--dense subset of $\mathbb{S}^1$ for each $e^{i\theta}\in\mathbb{S}^1$. Therefore $(\mathbb{S}^1,T_m)$
is not point transitive. It is clear that $\Sigma^*=\bigcup\{T_m:m\geq2\}$. On the other hand for each
$s,t\geq1$ we have:
\begin{eqnarray*}
T_sT_t & = & \{\eta^i\varphi_\frac{2\pi n}{s}\eta^j\varphi_\frac{2\pi m}{t}:n,m\in\mathbb{Z}, 0\leq i,j\leq 1\} \\
& = & \{\eta^i\varphi_\frac{-2\pi m}{t}\eta^j\varphi_\frac{-2\pi n}{s}:n,m\in\mathbb{Z}, 0\leq i,j\leq 1\} \\
& = & \{\eta^i\varphi_\frac{2\pi m}{t}\eta^j\varphi_\frac{2\pi n}{s}:n,m\in\mathbb{Z}, 0\leq i,j\leq 1\}=T_tT_s
\end{eqnarray*}
Therefore $T_sT_t=T_tT_s$ for all $s,t\geq2$, so $(\mathbb{S}^1,(T_m:m\geq2))$ is a pseudo--co--decomposition of $(\mathbb{S}^1,\Sigma^*)$ to non--minimal transformation groups.
\end{proof}
\begin{lemma}\label{taha50}
If $T$ is an infinite subgroup of $\Sigma^*$, then $(\mathbb{S}^1,T)$ is minimal.
\end{lemma}
\begin{proof}
Suppose $T\subseteq\Sigma^*=\{\varphi_\alpha:\alpha\in2\pi(\mathbb{Q}\cap[0,1))\}\cup\{\varepsilon_\alpha:\alpha\in2\pi(\mathbb{Q}\cap[0,1))\}=\{\eta^i\varphi_\alpha:\alpha\in2\pi(\mathbb{Q}\cap[0,1)),i=0,1\}$ is an infinite
subgroup. There exists $i\in\{0,1\}$ and one--to--one sequence $\{q_n\}_{n\geq1}$ in $\mathbb{Q}\cap[0,1)$ such that
$\{\eta^i\varphi_{2\pi q_n}\}_{n\geq1}$ is a sequence in $T$. Choose $\kappa>0$; by compactness of $[0,1]$,
$\{q_n\}_{n\geq1}$ has a Cauchy subsequence, hence there exist $n,m\geq1$ such that $0<q_n-q_m<\frac{\kappa}{2\pi}$.
Moreover
\[T\ni\eta^i\varphi_{2\pi q_n}(\eta^i\varphi_{2\pi q_m})^{-1}=\eta^i\varphi_{2\pi (q_n-q_m)}\eta^i=\left\{\begin{array}{lc}
\varphi_{2\pi (q_n-q_m)} & i =0 \:, \\ \varphi_{2\pi (q_m-q_n)}=\varphi_{2\pi (q_n-q_m)}^{-1} & i=1\:.\end{array}\right.\]
In particular $\varphi_{2\pi (q_n-q_m)}\in T$. Consider $\theta,\mu\in\mathbb{R}$, there exists $j\in\mathbb{Z}$
such that $2\pi (q_n-q_m)j\leq\theta-\mu<2\pi (q_n-q_m)(j+1)$, i.e., $0\leq\theta-(\mu+2\pi (q_n-q_m)j)<2\pi (q_n-q_m)<\kappa$.
Moreover $e^{i\lambda}:=e^{i(\mu+2\pi (q_n-q_m)j)}=e^{i\mu}\varphi_{2\pi (q_n-q_m)j}=e^{i\mu}\varphi_{2\pi (q_n-q_m)}^{j}\in e^{i\mu}T$, so:
\[\forall\kappa>0\exists\lambda\:\:(0\leq\theta-\lambda<\kappa\wedge e^{i\lambda}\in e^{i\mu}T)\]
which shows $e^{i\theta}\in\overline{e^{i\mu}T}$ for all $\theta\in\mathbb{R}$. Therefore
$\mathbb{S}^1=\overline{e^{i\mu}T}$ for all $\mu\in\mathbb{R}$ and $(\mathbb{S}^1,T)$ is minimal.
\end{proof}
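Numerically, the mechanism of the proof can be sketched as follows (Python, purely illustrative with a hypothetical sample of angles): two of finitely many sampled angles are closer than $\frac{\kappa}{2\pi}$, and the rotation by their difference already makes any orbit $\kappa$-dense.
\begin{verbatim}
# Illustration only: a close pair of sampled angles yields a small rotation whose
# orbit is kappa-dense on the circle, as in the proof of the lemma.
import numpy as np

kappa = 0.05
qs = np.array([i / 199 for i in range(80)])      # hypothetical distinct rationals
i = int(np.argmin(np.diff(qs)))
delta = 2 * np.pi * (qs[i + 1] - qs[i])          # small rotation angle
assert 0 < delta < kappa

mu = 1.2345                                      # arbitrary starting point e^{i*mu}
k = np.arange(int(2 * np.pi / delta) + 2)
orbit = np.sort((mu + k * delta) % (2 * np.pi))
max_gap = max(np.diff(orbit).max(), orbit[0] + 2 * np.pi - orbit[-1])
assert max_gap < kappa                           # the orbit is kappa-dense
\end{verbatim}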
\begin{theorem}\label{taha60}
$(\mathbb{S}^1,\Sigma^*)$ is not co--decomposable to
non--minimal (non--point transitive) transformation semigroups.
\end{theorem}
\begin{proof}
Suppose $(\mathbb{S}^1,(H_\alpha:\alpha\in\Gamma))$ is a co--decomposition of $(\mathbb{S}^1,\Sigma^*)$ to non--point transitive
transformation semigroups. In particular, for each $\alpha\in\Gamma$, $(\mathbb{S}^1,H_\alpha)$ is not minimal; since $H_\alpha$ is a sub--group of $\Sigma^*$ by item (4) of Lemma~\ref{taha30},
$H_\alpha$ is finite by Lemma~\ref{taha50}. There exist $\alpha_1,\ldots,\alpha_n\in\Gamma$ such that
$\eta\in H_{\alpha_1}\cdots H_{\alpha_n}$, thus for each $\alpha\in\Gamma\setminus\{\alpha_1,\ldots,\alpha_n\}$,
$s\in H_\alpha$ and $z\in \mathbb{S}^1$ we have $zs\eta=z\eta s$, i.e. $s\eta=\eta s$. By item (3) in Lemma~\ref{taha30},
$\bigcup\{H_\alpha:\alpha\in\Gamma\}\setminus\bigcup\{H_{\alpha_i}:1\leq i\leq n\}\subseteq \{\eta, id_{\mathbb{S}^1}, \varphi_\pi,\varepsilon_\pi\}$. Therefore $\bigcup\{H_\alpha:\alpha\in\Gamma\}\subseteq
H_{\alpha_1}\cup\cdots\cup H_{\alpha_n}\cup\{\eta, id_{\mathbb{S}^1}, \varphi_\pi,\varepsilon_\pi\}$,
so $\bigcup\{H_\alpha:\alpha\in\Gamma\}$ is a finite subset of $\Sigma^*$. Using Lemma~\ref{taha30} and the fact that
each element of $\Sigma^*$ has finite order, the sub--semigroup generated by $\bigcup\{H_\alpha:\alpha\in\Gamma\}$ is finite (the finitely many rotation angles involved generate a finite cyclic group of rotations, and adjoining $\eta$ at most doubles it),
i.e. $\Sigma^*$ is finite, which is a contradiction.
\end{proof}
\begin{example}\label{taha70}
By Example~\ref{taha40} and Theorem~\ref{taha60},
minimal transformation group $(\mathbb{S}^1,\Sigma^*)$ is pseudo--co--decomposable to non--point transitive
(non--minimal) transformation groups, however it is not co--decomposable to non--point transitive (non--minimal)
transformation semigroups.
\end{example}
\begin{corollary}
The following strict inclusions are valid (compare
with~\cite[Theorem 3.6]{decom}): {\small\begin{center}
\begin{tabular}{|c|} \hline
The class of all minimal transformation semigroups\\
\begin{tabular}{|c|} \hline
The class of all minimal transformation semigroups \\ pseudo--co--decomposable to non--minimal ones\\
\begin{tabular}{|c|} \hline
The class of all minimal transformation semigroups \\ co--decomposable to non--minimal ones \\
Remark~\ref{taha20} \\ \hline
\end{tabular}\\
Example~\ref{taha70} \\ \hline
\end{tabular}\\
(Example: trivial transformation group $(\{e\},\{id_{\{e\}}\})$) \\ \hline
\end{tabular}
\end{center}}
\noindent The reader can replace ``(non--)minimal'' by ``(non--)point transitive'' in the above diagram.
\end{corollary}
\section{A question that arises: strong pseudo--co--decomposition of a transformation semigroup}
\noindent We say pseudo--co--decomposition
$(X,(S_\alpha:\alpha\in\Gamma))$ of $(X,S)$ is a strong
pseudo--co--decomposition of $(X,S)$ if for every
$\alpha_1,\ldots,\alpha_n \in\Gamma$ and every permutation
$k\mapsto m_k$ of $\{1,\ldots,n\}$
we have $S_{\alpha_1}S_{\alpha_2}\cdots S_{\alpha_n}=
S_{\alpha_{m_1}}S_{\alpha_{m_2}}\cdots S_{\alpha_{m_n}}$. One can
simply verify that in the transformation semigroup $(X,S)$:
\begin{itemize}
\item if $S$ acts effectively on $X$, then any co--decomposition of $(X,S)$ is a strong pseudo--co--decomposition of $(X,S)$,
\item every strong pseudo--co--decomposition of $(X,S)$ is a pseudo--co--decomposition of $(X,S)$.
\end{itemize}
For dynamical property $\mathsf{P}$ we say the transformation
semigroup (group) $(X,S)$ is strongly pseudo--co--decomposable to $\mathsf{P}$
transformation semigroups (groups) if it has a strong pseudo--co--decomposition
like $(X,(S_\alpha:\alpha\in\Gamma))$ to transformation
semigroups (groups) such that for all $\alpha\in\Gamma$,
$(X,S_\alpha)$ has property $\mathsf{P}$, hence:
\begin{itemize}
\item by Example~\ref{salam10}, $(\mathfrak{B},T)$ is strongly pseudo--co--decomposable to
distal transformation groups,
\item in transformation semigroup $(X,S)$ with nontrivial $S$, let $M:=(S\times S)\sqcup\{\mathfrak{e}\}$ be a semigroup with identity $\mathfrak{e}$ and operation $(a,b)*(c,d)=(a,d)$ for each $(a,b),(c,d)\in S\times S$, then
$(X,M)$ is a (non--effective) transformation semigroup under action $x\mathfrak{e}:=x$ and $x(s,t):=xs$ (for $x\in X, s,t\in S$).
$(X,((S\times\{t\})\sqcup\{\mathfrak{e}\}:t\in S))$ is a pseudo--co--decomposition of $(X,M)$, but it is not a
strong pseudo--co--decomposition of $(X,M)$.
\end{itemize}
In the class of all effective transformation (semi--)groups, $\mathcal C$, one may consider the following inclusions
(use Corollary~\ref{taha10} too):
{\small \begin{center}
\begin{tabular}{l}
The class of all distal elements of $\mathcal C$ \\
$\subset$
The class of all elements of $\mathcal C$
co--decomposable to distal ones \\
$\subset$
The class of all elements of $\mathcal C$
strongly pseudo--co--decomposable to distal ones \\
$\mathop{\subseteq}\limits^{*}$
The class of all elements of $\mathcal C$
pseudo--co--decomposable to distal ones \\
$\subset$ $\mathcal C$.
\end{tabular}
\end{center}}
\noindent In the above chain of inclusions, in order to have strict inclusions in
all cases, one needs a positive answer to the following question, which may be the subject of future
research:
\begin{problem}
Is there any (effective) transformation (semi--)group $(X,S)$ pseudo--co--decomposable
to distal ones, which is not strongly pseudo--co--decomposable to distal ones?
\end{problem}
\section{Introduction}
Entropy production (EP) is a key physical concept that quantifies the irreversibility of a given process: the larger the EP, the more irreversible the process. It was born from very practical considerations, since irreversibility fundamentally limits the performance of heat engines and fridges~\cite{Bejanbook2016}. Eventually it turned into {\it the} fundamental concept used to phrase the second law of thermodynamics (SLT): the EP of a physical process can never be negative. As a typical example, spontaneous heat flow from cold to hot bodies is forbidden by the SLT, as it would give rise to a negative EP.
Pioneering expressions of EP were established at the outset of macroscopic thermodynamics and generally applied to specific irreversible processes such as the thermalization of systems or, conversely, the driving of systems out of thermal equilibrium. Such processes involve heat dissipation into reservoirs of well-defined temperatures, therefore making heat and temperature two essential quantities to define EP. Later on, the ability to monitor and control the evolution of microscopic systems at the level of single realizations gave rise to the so-called stochastic EP~\cite{Esposito2009, Binderbook2018, Campisi2011}, which provided a renewed perspective on irreversibility. At this level of description, irreversibility results from random perturbations exerted on the system dynamics by external reservoirs, thus preventing the external operator from rewinding any protocol. In this view, EP fundamentally captures the lack of control over microscopic systems, a concept that broadens the notion of EP to a much wider range of situations. Moreover, stochastic thermodynamics is agnostic to the type of noise and reservoirs which cause irreversibility. Its conceptual tools can be adapted to any kind of random perturbation, holding the promise to quantify irreversibility of quantum nature, e.g.,~ stemming from decoherence or any source of quantum noise~\cite{Elouard2017,Landi2020}.
Microscopic systems undergoing feedback-controlled dynamics provide a first example of extension beyond open systems interacting with thermal environments. In such processes, information on the system's microstate is used to set its subsequent evolution. In the past decades, it became possible to quantify the EP of these processes, evidencing a novel place for information within thermodynamics. Treated as a correlation between the controlled system and the memory of the feedback loop, information was shown to be an essential component of EP in experiments inspired by the Maxwell's demon paradox~\cite{Maxwellbook1975, Rex2017, Leff1990}.
From an experimental perspective, EP in its various forms was measured on a handful of platforms. Without feedback control, experiments at the ensemble average level have been performed in a nuclear magnetic resonance (NMR) setup~\cite{Batalhao2015}, in a micromechanical resonator~\cite{Brunelli2018} and in a Bose-Einstein condensate~\cite{Brunelli2018}. For feedback-controlled protocols, EP has been accessed at the average level in an NMR setup~\cite{Camati2016} and at the trajectory level with a superconducting circuit~\cite{Masuyama2018} and single-electron transistors \cite{Koski2015}. The latter case provides an example of an autonomous Maxwell's demon, where information is encoded on a quantum system and is never processed at the classical level. The device operated as a fridge, consuming information to transfer heat from a cold to a hot reservoir. In this spirit we have recently implemented a fully closed version of such a device where the cold and hot bodies, as well as the demon, are quantum systems evolving unitarily~\cite{Luis2020}. This situation is a minimalistic model of a closed, information-powered fridge.
The ability to theoretically describe and experimentally realize a wide range of irreversible processes involving an increasing number of parties has given rise to an equivalent variety of expressions of EP. This calls for the development of a unified perspective, serving both as a consistency check for the various definitions and as a testbed for their respective sensitivity to measurement errors. This is the purpose of the present article, where we theoretically and experimentally study the EP of the model system recalled above~\cite{Luis2020}. Namely, we derive and compare six alternative methods to measure the entropy produced by this system, chosen to cover and illustrate a large variety of equivalent approaches to characterize the EP. They differ by the way we analyze the system's state (ensemble average or quantum trajectories), theoretically describe the control (external or autonomous) and experimentally access the system evolution (single unitary evolution or a cyclic implementation incorporating the time reversal of the basic evolution). Each choice provides us with a different view onto the EP and its definition, allowing us to acquire a deeper understanding of the physical nature and the experimental meaning of EP obtained with different measurements. Despite being equivalent in the ideal case, these expressions show different sensitivities to experimental errors. This observation is confirmed by a thorough modelling of our experiment, providing a practical benchmark that can be used to adapt the measurement strategy of EP to a particular quantum system.
\section{Protocols and expressions}
\subsection{Review of general methods}
We first review the measures of irreversibility established within the so-called quantum Jarzynski's protocol \cite{An2015, Huber2008, Campisi2011, Batalhao2014, Heyl2012, Dorner2013, Mazzola2013}, schematically presented in Fig.~\ref{fig:entropy production}(a). After having thermalized with a heat reservoir at temperature $T$, a quantum system starts in the thermal equilibrium state $\zeta$ with inverse temperature $\beta = \left(k_{\text{B}}T\right)^{-1}$, also known as thermodynamic beta, where $k_{\text{B}}$ is the Boltzmann constant. The system is first driven out of equilibrium through a unitary operation $U$, to the non-equilibrium state $\rho_\text{f}$ (here and in the following the subscript $\text{f}$ labels quantities at the end of the evolution). To be treated as a unitary, $U$ is assumed to be performed swiftly compared to the system relaxation. Then, the system relaxes back to the thermal state $\zeta$. This last step causes the whole process to be irreversible. The entropy production $\EP{}$ is proportional to the amount of heat $Q$ dissipated by the system along its thermalization, $\EP{} = \beta Q$. It is shown to equal the relative entropy $D(\rho_\text{f} ||\zeta)$, also known as the quantum divergence, quantifying the non-negative distance between the two states. For states $\rho$ and $\sigma$, it is defined as $D(\rho||\sigma) = -\mathrm{Tr}[\rho\ln\sigma]-S(\rho)$, where $S(\rho)$ is the von Neumann entropy of state $\rho$~\cite{Vedral2002}. This provides a first intuitive flavour for the EP: the farther the system is brought away from equilibrium, the larger the entropy production.
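As a purely illustrative numerical sketch (Python, with hypothetical parameters unrelated to the experiment discussed below), the quantum divergence and the associated EP of the relaxation step can be evaluated as follows:
\begin{verbatim}
# Illustrative sketch: quantum divergence D(rho||sigma) = -Tr[rho ln(sigma)] - S(rho)
# and the EP of the relaxation of a qubit driven out of a thermal state zeta.
import numpy as np
from scipy.linalg import expm, logm

def divergence(rho, sigma):
    """Quantum relative entropy D(rho||sigma) in nats (both states full rank here)."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

beta, E = 1.0, 1.0                                   # inverse temperature, qubit splitting
H = np.diag([0.0, E])
zeta = expm(-beta * H) / np.trace(expm(-beta * H))   # thermal (Gibbs) state

U = expm(-1j * 0.6 * np.array([[0.0, 1.0], [1.0, 0.0]]))   # hypothetical unitary drive
rho_f = U @ zeta @ U.conj().T                        # out-of-equilibrium state

print("EP of the relaxation back to zeta:", divergence(rho_f, zeta))
\end{verbatim}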
\begin{figure}[t]
\begin{centering}
\includegraphics[width=\columnwidth]{figure1}
\par\end{centering}
\caption{The concept of the forward and backward system's evolution used to access the entropy production of the thermalization process. (a) Starting in the equilibrium state $\zeta$, the system is unitarily driven out of equilibrium into state $\rho_\text{f}$. The irreversible thermalization with the external heat reservoir produces entropy $\EP{}$ by bringing the system back to $\zeta$. The backward evolution $\tilde U$ implements the time-reversed forward evolution. In the presence of the thermalization, the backward evolution cannot bring the system back into its initial state thus revealing the irreversibility. (b) The overall scheme can be extended to a feedback-controlled evolution, where the unitary $U^{(k)}$ depends on the result $k$ of a control measurement (readout $R$) of the system state by a feedback controller.}
\label{fig:entropy production}
\end{figure}
Another meaning for the EP is acquired by the attempt to reverse the forward evolution $U$. For this purpose we complete the above protocol with the time-reversed unitary operation $\tilde U$. In general, unitary operations are considered as reversible: from an operational point of view this presupposes the ability to generate the backward evolution $U^\dagger = \tilde U$ (here and in the following the symbol $\sim$ denotes the backward quantities). In the absence of the intermediate thermalization, this backward evolution would bring the system back to its initial state. The presence of the intermediate irreversible thermalization is the reason why the process cannot be reversed. EP is shown to equal the relative entropy $D(\zeta||\tilde\rho_\text{f})$ of the initial thermal state with respect to the final state $\tilde\rho_\text{f}$ of this backward evolution. This is also intuitive: the lower our ability to time-reverse the evolution, the larger the entropy production.
Finally, the concept of EP can be extended to the level of single realizations, which corresponds to two-point quantum trajectories in the present quantum Jarzynski's protocol. Each trajectory $\gamma$ is defined by the outcomes of energy measurements performed at the beginning and at the end of the forward protocol, while $\tilde \gamma$ stands for its time-reversed counterpart, as introduced in the pioneering two-point energy measurement (TPEM) scheme~\citep{Esposito2009, Campisi2011, Binderbook2018}. The stochastic EP is defined as ${\sigma}[\gamma] = \ln (p(\gamma)/p(\tilde \gamma))$ and compares the probability $p(\gamma)$ for $\gamma$ to be realized in the forward protocol and the probability $p(\tilde\gamma)$ for the corresponding $\tilde \gamma$ in the backward protocol \cite{Landi2020}. This expression provides us with another intuitive way to quantify irreversibility at the level of single trajectories. Although ${\sigma}[\gamma]$ can take negative values, its average $\EP{}$ over all possible trajectories is non-negative by convexity of the exponential, in agreement with the SLT.
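For completeness, the non-negativity of $\EP{}$ follows from the integral fluctuation relation, assuming, as is the case here, that $\gamma\mapsto\tilde\gamma$ is a one-to-one correspondence between forward and backward trajectories:
\begin{equation*}
\langle e^{-\sigma}\rangle=\sum_\gamma p(\gamma)\,\frac{p(\tilde\gamma)}{p(\gamma)}=\sum_\gamma p(\tilde\gamma)=1,
\qquad\text{so that}\qquad
e^{-\EP{}}\leq\langle e^{-\sigma}\rangle=1
\quad\text{and}\quad
\EP{}\geq0,
\end{equation*}
where the inequality is Jensen's inequality applied to the convex exponential function.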
From this brief review, it appears that EP can be captured owing to various operational resources and, in particular, to the ability to access average or stochastic physical quantities as well as to run an evolution forward and backward. In what follows, we systematically apply these different approaches to a basic protocol of an information-powered fridge. More specifically, the system is measured by a controller (readout $R$) and its further unitary evolution $U^{(k)}$ is set by the readout outcome $k$, thus leading to several different evolution branches of the feedback-controlled system, see Fig.~\ref{fig:entropy production}(b). Along with the two measurements forming the TPEM scheme, the readout outcome also contributes to the definition of the quantum trajectories. For the sake of clarity, we first detail the non-autonomous description of the protocol, where the feedback uses information encoded on a classical memory of the external controller. Then, we focus on the autonomous description, i.e.,~ for a fully closed system as reported in Ref.~\cite{Luis2020}.
\subsection{Average evolution in the non-autonomous description} \label{subsec:trajectories}
Figure~\ref{fig:protocol} illustrates a non-autonomous description of the Maxwell's demon experiment studied in this paper. We consider a qubit \setup{Q} and a cavity \setup{C}. Their interaction is controlled by a third system further dubbed demon and denoted by \setup{D}. In this description, \setup{D} is a classical entity, performing a local projective measurement on \setup{Q} in its energy basis and storing its result in a classical memory. The two measurement outcomes are then exploited in the feedback loop (readout followed by feedback), which conditionally acts on the joint \setup{QC} system. Namely, they trigger a unitary system evolution $U^{(1)}=V$ for $k=1$ and no interaction, i.e.,~ the identity $U^{(0)}=\mathbb{I}$, for $k=0$. All the EP expressions derived in this and the next section are also valid for more general settings, with two arbitrary systems \setup{Q} and \setup{C}.
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\columnwidth]{figure2}
\par\end{centering}
\caption{Non-autonomous system control with a binary readout completed with the two-point energy measurement (TPEM). The initial states of the systems \setup{Q} and \setup{C} are thermal states at different temperatures. The outcome $k=1$ of the demon readout $R$ sets a feedback interaction $V$ between \setup{Q} and \setup{C}. Otherwise, for $k=0$, \setup{Q} and \setup{C} do not interact, i.e.,~ they evolve under the identity $\mathbb{I}$. The TPEM is realized by two projective measurements, $M_1$ and $M_2$, in the energy basis of \setup{Q} and \setup{C} performed before and after the main protocol. Each trajectory is characterized by the set of five indices $\{{n_\textsf{Q}},k,{n_\textsf{C}},{m_\textsf{Q}},{m_\textsf{C}}\}$, referring to the results of five measurements performed in a given protocol realization.}
\label{fig:protocol}
\end{figure}
We first consider the average evolution of the joint \setup{QC} system and use the density matrix approach to describe its state. Initially, \setup{Q} and \setup{C} start at the local thermal equilibrium state $\rho_{\text{i}}^\textsf{QC}=\zeta_{\beta_\textsf{Q}}^\textsf{Q} \otimes \zeta_{\beta_\textsf{C}}^\textsf{C}$, where $\zeta_{\beta_{j}}^{j}=\exp[-\beta_{j}\left(H^{j}-F^{j}\right)]$ are the Gibbs states. For each system $j\in\{\textsf{Q},\textsf{C}\}$, $H^{j}$ is the local Hamiltonian and $F^{j} = -(1/\beta_{j}) \ln\text{Tr}\big[e^{-\beta_{j}H^{j}}\big]$ is the equilibrium free energy. The internal energy of system $j$ in state $\rho^{j}$ is given by $\mathcal{U}^{j} = \text{Tr}_{j}\left[H^{j}\rho^{j}\right]$. Next, \setup{D} performs a projective measurement (i.e.,~ demon readout) on the system. The measurement outcome $k$ projects \setup{QC} onto the state $\rho_{\text{i}}^{\textsf{QC},k}$ with probability $p(k)$. Then, \setup{D} stores the outcome $k$ and induces the unitary feedback operation $U^{(k)}$ between \setup{Q} and \setup{C} depending on $k$. There are thus several distinct branches, labelled by $k$, of the possible unitary evolution of the system. The final \setup{QC} states and their average over all measurement outcomes read $\rho_\text{f}^{\textsf{QC},k} = U^{(k)} \rho_{\text{i}}^{\textsf{QC},k}[U^{(k)}]^\dagger$ and $\rho_\text{f}^{\textsf{QC}} = \sum_k p(k) \rho_\text{f}^{\textsf{QC},k}$, respectively. The relaxation of the non-equilibrium state $\rho_\text{f}^{\textsf{QC}}$ towards the initial thermal product state gives rise to the entropy production. The demon's memory, on the other hand, does not relax and hence does not produce entropy. This leads to our first expression of EP:
\begin{equation}
\EP{1} = \Delta\beta {Q^\textsf{C}} +\mean{I},
\label{eq::S1}
\end{equation}
where $\Delta\beta = {\beta_\textsf{C}}-{\beta_\textsf{Q}}$, $Q^\textsf{C} =\sum_{k}p(k)\Delta \mathcal{U}^{\textsf{C},k}$ is the heat absorbed by \setup{C}, $\Delta \mathcal{U}^{\textsf{C},k}$ is the energy change of \setup{C} during the feedback operation in branch $k$, and $\mean{I} = H[p(k)]=-\sum_k p(k)\ln p(k)$ is the Shannon entropy of the readout measurement. We use the fact that ${Q^\textsf{Q}} =-{Q^\textsf{C}}$ for a closed system and an energy-preserving readout, see Appendix~\ref{app:EP}. If there was no feedback action ($U^{(1)}=\mathbb{I}$), $\EP{1}$ would reduce to the well-known classical expression $\EP{} = \Delta\beta{Q}$ \citep{Landi2020}, quantifying the entropic counterpart of the heat exchanged between two systems. In addition to this exchange term, Eq.\eqref{eq::S1} explicitly involves an informational contribution. This is in agreement with the pioneering expressions of the SLT in the presence of a feedback control that were obtained by explicitly taking the demon's physical memory into account~\citep{Sagawa2008, Sagawa2009,Maruyama2009}.
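Numerically, $\EP{1}$ only requires the readout statistics and the branch-resolved average heat; a minimal Python sketch with hypothetical values (not the experimental ones) reads:
\begin{verbatim}
# Illustration with hypothetical numbers: Sigma_1 = (beta_C - beta_Q)*Q_C + <I>,
# where <I> = H[p(k)] is the Shannon entropy of the readout outcomes.
import numpy as np

beta_Q, beta_C = 0.5, 1.5                  # hypothetical inverse temperatures
p_k = np.array([0.7, 0.3])                 # readout outcome probabilities p(k)
dU_C = np.array([0.0, 0.8])                # energy gained by C in branch k

Q_C = np.sum(p_k * dU_C)                   # average heat absorbed by C
I_mean = -np.sum(p_k * np.log(p_k))        # Shannon entropy in nats
Sigma_1 = (beta_C - beta_Q) * Q_C + I_mean
print(Sigma_1)
\end{verbatim}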
An alternative, second expression for the EP can be obtained starting from the following identity for an arbitrary state $\rho$: $D(\rho||\zeta) = \beta[\mathcal{U}(\rho)-F] -S(\rho)$. Writing the heat in \eqref{eq::S1} in terms of the quantum divergence we obtain
\begin{equation}
\EP{2} = \sum_k p(k) \,D\!\left(\rho_\text{f}^{\textsf{QC},k} || \zeta_{\beta_\textsf{Q}}^\textsf{Q} \otimes\zeta_{\beta_\textsf{C}}^\textsf{C} \right),
\label{eq::S2}
\end{equation}
where $\rho_\text{f}^{\textsf{QC},k}$ is the final $\textsf{QC}$ state conditioned on~$k$, see Appendix~\ref{app:EP}. This expression can be interpreted as follows. The entropy production for a thermalization process is known to be given by the quantum divergence of the initial state with respect to the final thermal one~\citep{Deffner2011}. For a given $k$, the entropy produced during the thermalization equals $D\!\left(\rho_\text{f}^{\textsf{QC},k} || \zeta_{\beta_\textsf{Q}}^\textsf{Q} \otimes \zeta_{\beta_\textsf{C}}^\textsf{C}\right)$. The total EP is, therefore, the average of such a conditional EP, associated to each branch $k$, over all readout outcomes.
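A corresponding sketch for $\EP{2}$, again with hypothetical states and a placeholder feedback unitary, reads:
\begin{verbatim}
# Illustrative sketch of Sigma_2: average over the readout outcome k of
# D(rho_f^{QC,k} || zeta_Q x zeta_C). The feedback unitary V below is a
# placeholder (a swap of |01> and |10>), not the experimental interaction.
import numpy as np
from scipy.linalg import expm, logm

def divergence(rho, sigma):
    """D(rho||sigma) with 0*log(0) = 0 (sigma assumed full rank)."""
    ev = np.linalg.eigvalsh(rho)
    S_rho = -np.sum(ev[ev > 1e-12] * np.log(ev[ev > 1e-12]))
    return float(np.real(-np.trace(rho @ logm(sigma))) - S_rho)

def gibbs(H, beta):
    return expm(-beta * H) / np.trace(expm(-beta * H))

zeta_Q = gibbs(np.diag([0.0, 1.0]), 0.5)          # hypothetical Gibbs states
zeta_C = gibbs(np.diag([0.0, 1.0]), 1.5)          # (C truncated to two levels)
zeta_QC = np.kron(zeta_Q, zeta_C)

p_k = np.real(np.diag(zeta_Q))                    # readout outcome probabilities
rho_i = [np.kron(np.diag([1.0, 0.0]), zeta_C),    # QC state after readout, k = 0
         np.kron(np.diag([0.0, 1.0]), zeta_C)]    # QC state after readout, k = 1
U_k = [np.eye(4), np.eye(4)[[0, 2, 1, 3]]]        # identity (k = 0), placeholder V (k = 1)
rho_f = [U @ r @ U.conj().T for U, r in zip(U_k, rho_i)]

Sigma_2 = sum(p * divergence(r, zeta_QC) for p, r in zip(p_k, rho_f))
print(Sigma_2)
\end{verbatim}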
The expressions $\EP{1}$ and $\EP{2}$ rely on the physical quantities provided by the forward protocol only. A third expression containing information also from the backward protocol can be obtained as well. For each branch $k$, the backward process is defined by the application of the time-reversed unitary $\tilde{U}^{(k)} = [U^{(k)}]^\dagger$ on the state after the thermalization, while the demon's memory remains unchanged. Thus, the probability of applying $\tilde{U}^{(k)}$ is given by the probability $p(k)$ of ending up in the forward branch $k$. Starting from \eqref{eq::S2} we show in Appendix~\ref{app:EP} that
\begin{equation}
\EP{3} = \sum_k p(k)\, D\!\left(\rho_{\text{i}}^{\textsf{QC},k} || \tilde \rho_\text{f}^{\textsf{QC},k}\right),
\label{eq::S3}
\end{equation}
where $\tilde\rho_\text{f}^{\textsf{QC},k}$ is the \setup{QC} state of the backward protocol of the branch $k$ after the backward evolution $\tilde{U}^{(k)}$. This expression for the EP also comes in the form of the average over the outcomes of the readout measurement. It is a generalization of the equation obtained in Ref.~\citep{Batalhao2015}, where there is no feedback control being considered.
\subsection{Stochastic evolution in the non-autonomous description}\label{subsec:forward}
The system evolution can also be described stochastically by means of individual quantum trajectories. All thermodynamic quantities become trajectory-dependent, providing a finer description of the system dynamics. In the spirit of the TPEM scheme, the definition of our quantum trajectories involves the initial and final energy states of the joint \setup{QC} system, respectively denoted $\ket{{n_\textsf{Q}},{n_\textsf{C}}}$ and $\ket{{m_\textsf{Q}},{m_\textsf{C}}}$. In the present case where the dynamics generates no coherence in the energy basis, these states can be accessed by two energy measurements $M_1$ and $M_2$ respectively performed at the beginning and at the end of the feedback loop, of respective outcomes $\{{n_\textsf{Q}},{n_\textsf{C}}\}$ and $\{{m_\textsf{Q}},{m_\textsf{C}}\}$.
The probability of the measurement outcomes $\{{n_\textsf{Q}},{n_\textsf{C}}\}$ is $p({n_\textsf{Q}},{n_\textsf{C}}) = p({n_\textsf{Q}}) p({n_\textsf{C}})$, where the probabilities $p(n_{j})$ are the Boltzmann weights of the initial uncorrelated thermal states. The demon readout R of outcome $k$ conditions the \setup{QC} evolution $U^{(k)}$ between $M_1$ and $M_2$. For the ideal readout, the state $\ket{{n_\textsf{Q}}}$ deterministically sets the value of $k$. In a more general case, we can consider a conditional probability $p(k |{n_\textsf{Q}})$ of the readout outcome $k$ accounting for possible readout limitations, such as non-projective measurement or detection errors. Eventually, the trajectory $\gamma$ is defined by a unique set of the initial states $\ket{{n_\textsf{Q}},{n_\textsf{C}}}$, the system evolution branch $k$, and the final state $\ket{{m_\textsf{Q}},{m_\textsf{C}}}$. The forward trajectory probability distribution $p(\gamma)$ is given by the probability of getting the set of outcomes $\gamma = \{{n_\textsf{Q}},k,{n_\textsf{C}},{m_\textsf{Q}},{m_\textsf{C}}\}$ and it explicitly reads
\begin{equation}
p(\gamma) = p({m_\textsf{Q}},\!{m_\textsf{C}}|{n_\textsf{Q}},\!k,\!{n_\textsf{C}}) \,p(k|{n_\textsf{Q}}) \,p({n_\textsf{Q}}) \,p({n_\textsf{C}}).
\label{eq:trajectory probability forward}
\end{equation}
The total number of all possible trajectories is $d_\textsf{Q}^2 \times d_\textsf{C}^2 \times d_\textsf{Q}$, where $d_j$ is the size of the Hilbert space of the system $j$. The last contribution, $d_\textsf{Q}$, comes from the fact that the external controller (demon) must contain $d_\textsf{Q}$ distinguishable states to encode the measurement outcomes.
\begin{figure}
\begin{centering}
\includegraphics[width=0.99\columnwidth]{figure3}
\par\end{centering}
\caption{Backward trajectory protocol reversing the forward protocol of Fig.~\ref{fig:protocol} after thermalization. Two branches with $k=0$ and $1$ correspond to the two possible feedback evolutions in the forward protocol.
The backward unitary operation for $k=1$ is $\tilde U^{(1)} = \tilde V$ obtained by time-reversing $V$. For $k=0$, the identity operation is applied to the \setup{QC} system. The initial states in each branch are thermal states. Initial and final projective measurements, $M_1$ and $M_2$, are used for the TPEM.
\label{fig:backward trajectories}}
\end{figure}
We now turn our attention to the distribution of the backward trajectories. As mentioned above, the backward process in each reverse branch $k$ is generated by the time-reversed unitary $\tilde{U}^{(k)}$. The backward trajectory $\tilde\gamma$ is defined by the set of parameters $\{{n_\textsf{Q}}, k, {n_\textsf{C}}, {m_\textsf{Q}},{m_\textsf{C}} \}$ and is the counterpart of the forward trajectory $\gamma$ labelled by the same indices. Similarly to the forward protocol, the probability distribution of the backward trajectories is given~by
\begin{equation}
p(\tilde\gamma) = {p_\text{b}}({n_\textsf{Q}},{n_\textsf{C}} | {m_\textsf{Q}},k, {m_\textsf{C}}) \,{p_\text{b}}({m_\textsf{Q}}) \,{p_\text{b}}({m_\textsf{C}})\,p(k),
\label{eq:trajectory probability backward}
\end{equation}
where ${p_\text{b}}({n_\textsf{Q}},{n_\textsf{C}} | {m_\textsf{Q}},k, {m_\textsf{C}})$ is the conditional probability of the final backward state. The initial backward probabilities ${p_\text{b}}({m_\textsf{Q}})$ and ${p_\text{b}}({m_\textsf{C}})$ are determined from the corresponding initial Gibbs states. The probability $p(k)$ of the backward branch $k$ equals the probability of the readout outcome $k$ in the forward protocol.
Given the probabilities of the forward trajectory $\gamma$ and of the corresponding backward trajectory $\tilde \gamma$, the stochastic EP is defined as $\sigma[\gamma] = \ln \big(p(\gamma)/p(\tilde\gamma)\big)$. The average EP computed over all $\gamma$'s is then given by~\citep{Landi2020}
\begin{equation}
\EP{4} = \sum_\gamma p(\gamma) \ln \frac{p(\gamma)}{p(\tilde \gamma) } = D\Big(p(\gamma) || p(\tilde{\gamma})\Big),
\label{eq::S4}
\end{equation}
where the relative entropy $D$ is computed between probability distributions $p(\gamma)$ and $p(\tilde{\gamma})$. Note that for classical distributions $p_n$ and $q_n$, $D$ is defined by the Kullback-Leibler divergence: $D\left(p||q\right)= \sum_{n}p_{n} \ln\left(p_{n}/q_{n} \right)$~\citep{Coverbook2006}, which is equivalent to the quantum divergence between two states whose density matrices are diagonal in the same basis. The expression $\EP{4}$ quantifies the irreversibility by comparing the stochastic trajectories of the forward and backward protocols. Notably, its computation requires no knowledge on the actual physical states defining the trajectories, but needs only the ability to distinguish different trajectories in order to properly access their probabilities.
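Schematically, once the two trajectory probability distributions are known, $\EP{4}$ reduces to a classical Kullback-Leibler divergence; a minimal Python sketch with hypothetical probability arrays:
\begin{verbatim}
# Illustrative sketch of Sigma_4: KL divergence between forward and backward
# trajectory probabilities, both indexed by the same trajectory label gamma.
import numpy as np

rng = np.random.default_rng(0)
n_traj = 64                                          # hypothetical number of trajectories
p_fwd = rng.random(n_traj); p_fwd /= p_fwd.sum()     # p(gamma)
p_bwd = rng.random(n_traj); p_bwd /= p_bwd.sum()     # p(gamma-tilde), same labels

sigma_gamma = np.log(p_fwd / p_bwd)                  # stochastic EP of each trajectory
Sigma_4 = np.sum(p_fwd * sigma_gamma)                # D(p(gamma)||p(gamma-tilde)) >= 0
print(Sigma_4)
\end{verbatim}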
We define as $p\left(\sigma\right)=\sum_{\gamma}p\left(\gamma\right)\delta_{\sigma,\sigma\left[\gamma\right]}$ and ${p_\text{b}}(\sigma) =\sum_{\gamma} p\left(\tilde{\gamma}\right) \delta_{\sigma,\sigma\left[\gamma\right]}$ the total probability of the forward and backward trajectories, respectively, contributing to the value $\sigma$ of EP, where $\delta_{a,b}$ is the Kronecker delta. With these two probability distributions one can easily show the detailed fluctuation relation: $\exp(\sigma) = p(\sigma) / {p_\text{b}}(\sigma)$. By averaging over all possible values of $\sigma$ we obtain
\begin{equation}
\EP{5} = \sum_\sigma p(\sigma) \ln\frac{p(\sigma)}{{p_\text{b}}(\sigma)} = D\Big(p(\sigma) || {p_\text{b}}(\sigma)\Big).
\label{eq::S5}
\end{equation}
The stochastic entropy production $\sigma[\gamma]$ for each trajectory $\gamma$, required for computing $\EP{5}$, can be obtained as
\begin{equation}
\sigma[\gamma] = {\beta_\textsf{Q}} Q^\textsf{Q}[\gamma]+{\beta_\textsf{C}} Q^\textsf{C}[\gamma] + I[\gamma],
\label{eq:stochastic}
\end{equation}
see Appendix~\ref{app:EP}. Here, $Q^{j}[\gamma]$ is the stochastic heat received by the system $j\in\{\textsf{Q},\textsf{C}\}$ and $I[\gamma] = -\ln p(k)$ is the stochastic information extracted from the readout measurement. In contrast to $\EP{4}$, the expression $\EP{5}$ compares the forward and backward probability contributions to the EP, even if $\sigma[\gamma]$ is degenerate for some trajectories. Thus, despite the obvious similarity of the mathematical expressions, $\EP{4}$ and $\EP{5}$ differ appreciably with respect to the information required for their computation.
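The corresponding sketch for $\EP{5}$ bins the same (hypothetical) trajectory data by the value of $\sigma[\gamma]$ and also checks the detailed fluctuation relation bin by bin:
\begin{verbatim}
# Illustrative sketch of Sigma_5: group trajectories by their stochastic EP value,
# then compare the forward and backward weights of each group (detailed fluctuation
# relation: p(sigma)/p_b(sigma) = exp(sigma)).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
p_fwd = rng.random(64); p_fwd /= p_fwd.sum()          # p(gamma), hypothetical
p_bwd = rng.random(64); p_bwd /= p_bwd.sum()          # p(gamma-tilde)
sigma_gamma = np.round(np.log(p_fwd / p_bwd), 6)      # bin degenerate EP values

p_sig, pb_sig = defaultdict(float), defaultdict(float)
for s, pf, pb in zip(sigma_gamma, p_fwd, p_bwd):
    p_sig[s] += pf
    pb_sig[s] += pb

Sigma_5 = sum(p_sig[s] * np.log(p_sig[s] / pb_sig[s]) for s in p_sig)
print(Sigma_5)
# detailed fluctuation relation holds bin by bin:
assert all(np.isclose(p_sig[s] / pb_sig[s], np.exp(s)) for s in p_sig)
\end{verbatim}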
\subsection{The autonomous description\label{subsec:average}}
In the non-autonomous description the demon \setup{D} has been treated as a classical feedback loop, involving a measurement and a conditional action on the \setup{QC} system. The demon's influence has been taken into account through the measurement outcome probability $p(k)$ quantifying the information extracted by \setup{D} from the system \setup{Q}. Alternatively, we can also consider a global \setup{QDC} system incorporating \setup{D} with the demon feedback action being part of a global unitary evolution of this closed system. For the remainder of this section we apply to our experiment the autonomous demon description reported in Ref.~\cite{Luis2020}. The equivalent quantum circuit for the forward protocol corresponding to a two-level (qubit) system \setup{Q} is depicted in Fig.~\ref{fig:circuit}. The demon, also assumed to be a qubit without loss of generality, starts in the pure reference state $\ket{1_\textsf{D}}$. Both the readout and the feedback operations are dynamically implemented by means of global unitaries on the total \setup{QDC} system. The projective readout in the energy basis of \setup{Q} is replaced by a controlled NOT (CNOT) gate. It transforms the initial state of \setup{D} into $\ket{0_\textsf{D}}$ if the state of \setup{Q} is $\ket{0_\textsf{Q}}$. After the CNOT gate, a controlled unitary operation between \setup{Q} and \setup{C} is performed to appropriately implement the feedback action. As expected, the reduced \setup{QC} state after this operation is $\sum_{k} p(k) \rho_\text{f}^{\textsf{QC},k}$, which is the average final state $\rho_\text{f}^{\textsf{QC}}$ of the protocol described in Sec.~\ref{subsec:forward}.
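A compact numerical sketch of this circuit (with a placeholder \setup{QC} interaction and an arbitrary cavity truncation, for illustration only) shows how the readout and the feedback are absorbed into a single global unitary acting on \setup{QDC}:
\begin{verbatim}
# Illustrative sketch of the autonomous circuit: a CNOT with negative control
# (flip D when Q is |0>) followed by a controlled unitary (apply V on QC when D
# is |1>). Tensor ordering: Q (x) C (x) D; V and the truncation are placeholders.
import numpy as np
from scipy.linalg import expm

d_C = 4                                                # cavity truncated to 4 Fock states
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])      # two-level projectors
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2, IC = np.eye(2), np.eye(d_C)

# readout gate: flip D when Q is in |0>
CNOT_neg = np.kron(P0, np.kron(IC, X)) + np.kron(P1, np.kron(IC, I2))

# placeholder QC interaction V: resonant exchange coupling |1,n> <-> |0,n+1>
# (the experiment uses an adiabatic passage instead of this simple pulse)
a = np.diag(np.sqrt(np.arange(1.0, d_C)), 1)           # truncated annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])                # qubit lowering operator |0><1|
V = expm(-1j * (np.pi / 2) * (np.kron(sm, a.conj().T) + np.kron(sm.conj().T, a)))

# feedback gate: apply V on QC only when D is in |1>
CV = np.kron(V, P1) + np.kron(np.eye(2 * d_C), P0)

U_QDC = CV @ CNOT_neg                                   # global forward unitary on QDC

# initial state: Q and C thermal (diagonal, hypothetical weights), D in |1_D>
rho_i = np.kron(np.diag([0.6, 0.4]), np.kron(np.diag([0.5, 0.3, 0.15, 0.05]), P1))
rho_f_QDC = U_QDC @ rho_i @ U_QDC.conj().T
\end{verbatim}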
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.78\columnwidth]{figure4}
\par\end{centering}
\caption{Quantum circuit of an autonomous Maxwell's demon. The demon \setup{D} and the system \setup{Q} are both qubits. The readout is represented by the controlled NOT gate with negative control line, i.e.,~ the state of \setup{D} is inverted if \setup{Q} is in state $\ket{0_\textsf{Q}}$. The controlled unitary implements the demon feedback. It switches on the \setup{QC} unitary interaction $V$ if the demon state is $\ket{1_\textsf{D}}\bra{1_\textsf{D}}$.
\label{fig:circuit}}
\end{figure}
Our final expression for the EP comes from the analysis of the closed \setup{QDC} system with the demon \setup{D} explicitly included as a third quantum system. We show in Appendix~\ref{app:EP} that
\begin{equation}
\Delta\beta Q^{\textsf{C}} = \Delta_{\text{fb}}I^{\textsf{QC}:\textsf{D}} + D\left(\rho_\text{f}^\textsf{QC} || \zeta_{\beta_\textsf{Q}}^\textsf{Q} \otimes \zeta_{\beta_\textsf{C}}^\textsf{C}\right),
\label{eq:equality correlations}
\end{equation}
where $I^{\textsf{QC}:\textsf{D}} = D\left(\rho^\textsf{QDC} || \rho^\textsf{QC}\!\otimes\! \rho^\textsf{D}\right)$ is the mutual information between \setup{QC} and \setup{D}, while $\Delta_\text{fb}$ denotes the information change during the feedback step, i.e.,~ before and after the controlled unitary gate. We consider here the ideal readout, i.e.,~ $p(k|{n_\textsf{Q}})= \delta_{k,{n_\textsf{Q}}}$. This relation is a generalization to our current protocol of the expression first derived in Ref.~\citep{Luis2020}. It comes directly from the entropy conservation of the global \setup{QDC} system for the closed evolution depicted in Fig.~\ref{fig:circuit}.
Since the correlations before the feedback step are given by the Shannon entropy $H[p(k)]$, substituting \eqref{eq:equality correlations} into \eqref{eq::S1} we obtain
\begin{equation}
\EP{6} = D\left(\rho_\text{f}^\textsf{QC} || \zeta_{\beta_\textsf{Q}}^\textsf{Q} \otimes \zeta_{\beta_\textsf{C}}^\textsf{C}\right) + I_{\text{f}}^{\textsf{QC}:\textsf{D}}.
\label{eq::S6}
\end{equation}
It clearly shows that the EP has two contributions. The divergence quantifies the entropy produced in the thermalization process for the \setup{QC} system starting in the state $\rho_\text{f}^\textsf{QC}$. The final mutual information between the two subsystems \setup{QC} and \setup{D} quantifies the amount of entropy produced by erasing all correlations between them, due to the thermalization. This result evidences that there is an entropic cost for erasing correlations~\cite{Landi2020,Camati2021}.
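Given the final \setup{QDC} density matrix, $\EP{6}$ follows from two partial traces; a sketch with a hypothetical (diagonal) final state:
\begin{verbatim}
# Illustrative sketch of Sigma_6 = D(rho_f^QC || zeta_Q x zeta_C) + I_f^{QC:D},
# evaluated from a hypothetical final QDC density matrix via partial traces.
# Tensor ordering: Q (x) C (x) D with dimensions (2, d_C, 2).
import numpy as np
from scipy.linalg import expm, logm

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def gibbs(H, beta):
    return expm(-beta * H) / np.trace(expm(-beta * H))

d_C = 3
dims = (2, d_C, 2)
p = np.random.default_rng(1).random(np.prod(dims)); p /= p.sum()
rho_QDC = np.diag(p)                                   # hypothetical correlated final state

R = rho_QDC.reshape(dims + dims)
rho_QC = np.trace(R, axis1=2, axis2=5).reshape(2 * d_C, 2 * d_C)   # trace out D
rho_D = np.trace(np.trace(R, axis1=1, axis2=4), axis1=0, axis2=2)  # trace out C, then Q

I_QC_D = vn_entropy(rho_QC) + vn_entropy(rho_D) - vn_entropy(rho_QDC)
zeta_QC = np.kron(gibbs(np.diag([0.0, 1.0]), 0.5),
                  gibbs(np.diag(np.arange(d_C) * 1.0), 1.5))
div = float(np.real(-np.trace(rho_QC @ logm(zeta_QC)))) - vn_entropy(rho_QC)
Sigma_6 = div + I_QC_D
print(Sigma_6)
\end{verbatim}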
\subsection{Summary of all expressions}
Table~\ref{tab:EP} summarizes the six alternative expressions for the entropy production along with additional information on the underlying protocols and the statistical nature of the required physical quantities. Expressions $\Sigma_1$, $\Sigma_2$ and $\Sigma_6$ are based on the data extracted exclusively from the forward protocol. The other expressions require the execution and analysis of the backward protocol as well. We can distinguish three types of physical quantities showing up in different expressions: expressions $\Sigma_1$ and $\Sigma_6$ can be computed using information on the average initial and final states of the system (``averaged"). Expressions $\Sigma_2$ and $\Sigma_3$ are based on data averaged over different readout outcomes, thus requiring the discrimination of different evolution branches (``branched"). Finally, to compute expressions $\Sigma_4$ and $\Sigma_5$ we have to resolve individual trajectories (``stochastic").
Describing the same physical quantity, all these expressions are equivalent under the restriction of ideal unitary evolutions and ideal projective measurements, which have been used for their derivation. In the presence of realistic deviations from the idealized scenario, they start to differ, as will be shown in the next Section. For diagonal states, as considered here, only the expressions $\Sigma_2$ and $\Sigma_6$ stay mathematically identical and provide the same EP value irrespective of the evolution imperfections, see Appendix~\ref{app:EP26}.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcc}
\hline \hline
\vphantom{\text{\huge G}}
Expression & Protocol & Quantities \\[5pt]
\hline
$\Sigma_{1}=\Delta\beta Q^{\textsf{C}} +\left\langle I\right\rangle$ & forward & averaged \vphantom{\text{\Huge G}}\\[10pt]
$\Sigma_{2}=\sum_{k}p\left(k\right)D\left(\rho_{\text{f}}^{\textsf{QC},k}||\zeta_{\beta_{\textsf{Q}}}^{\textsf{Q}}\otimes\zeta_{\beta_{\textsf{C}}}^{\textsf{C}}\right)$ & forward & branched \vphantom{\text{\Huge G}}\\[10pt]
$\Sigma_{3}=\sum_{k}p\left(k\right)D\left(\rho_{\text{i}}^{\textsf{QC},k}||\tilde{\rho}_{\text{f}}^{\textsf{QC},k}\right)$ & \pbox{20cm}{forward, \\ backward} & branched \vphantom{\text{\Huge G}}\\[10pt]
$\Sigma_{4}=D\big(p\left(\gamma\right)||p\left(\tilde{\gamma}\right)\big)$ & \pbox{20cm}{forward, \\ backward} & stochastic \vphantom{\text{\Huge G}}\\[10pt]
$\Sigma_{5}=D\big(p\left(\sigma\right)||p_{\text{b}}\left(\sigma\right)\big)$ & \pbox{20cm}{forward, \\ backward} & stochastic \vphantom{\text{\Huge G}}\\[10pt]
$\Sigma_{6}=D\left(\rho_{\text{f}}^{\textsf{QC}}||\zeta_{\beta_{\textsf{Q}}}^{\textsf{Q}}\otimes\zeta_{\beta_{\textsf{C}}}^{\textsf{C}}\right)+I_{\text{f}}^{\textsf{QC:D}}$ & forward & averaged \vphantom{\text{\Huge G}}\\[10pt]
\hline \hline
\end{tabular}
}
\caption{The entropy production expressions.}
\label{tab:EP}
\end{table}
\section{Experimental results}
\subsection{Maxwell's demon system}
We measure the entropy production in the Maxwell's demon system described by the quantum circuit in Fig.~\ref{fig:circuit} and realized in a cavity QED setup~\cite{Luis2020}. Qubit \setup{Q} and two-level demon \setup{D} are simultaneously encoded into three adjacent circular Rydberg states of a single Rubidium atom \setup{A} (with principal quantum numbers $49$, $50$ and $51$ corresponding to atomic states $\ket f$, $\ket g$ and $\ket e$, respectively). The mapping between the logical states of the \setup{QD} system and the physical states of \setup{A} is the following: $\ket{1_\textsf{Q},\!1_\textsf{D}}=\ket{e}$, $\ket{0_\textsf{Q},\!1_\textsf{D}} = \ket{g}$, and $\ket{0_\textsf{Q},\!0_\textsf{D}} = \ket{f}$. According to the Maxwell's demon circuit, see Fig.~\ref{fig:circuit}, the state $\ket{1_\textsf{Q},\!0_\textsf{D}}$ is never populated and does not need to be encoded in a particular physical state of \setup{A}. The system \setup{C} is realized with a high-quality superconducting microwave cavity resonant with the atomic $\ket{g}$--$\ket{e}$ transition at $51$~GHz and far detuned from the $\ket{g}$--$\ket{f}$ transition at $54$~GHz.
The basic experimental setup is schematically presented in Fig.~\ref{fig:setup}. Individual flying Rydberg atoms exit the preparation zone \setup{B} in the state $|g\rangle$. To prepare the state $|e\rangle$ a resonant microwave pulse is applied in \setup{R}$_\textsf{Q}$ by means of the microwave source \setup{S}$_{eg}$. Its amplitude and duration are adjusted to realize a Rabi $\pi$-pulse between $|g\rangle$ and $|e\rangle$. The demon readout is implemented by deterministically flipping the atomic states $\ket{g}$ and $\ket{f}$ before \setup{A} enters \setup{C}. This operation is induced by the microwave source \setup{S}$_{gf}$ resonant with the $\ket{g}$--$\ket{f}$ transition and adjusted to maximize the atomic population transfer.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure5}
\caption{Schematic representation of the experimental setup. The Maxwell's demon system is realized with a microwave cavity (\setup{C}) and flying circular Rydberg atoms (blue toroid for the qubit-demon atom, magenta toroids for QND probe atoms). See text and Ref.~\cite{Luis2020} for details.}
\label{fig:setup}
\end{figure}
The atom-cavity interaction is controlled by an electric field applied across \setup{C} by the voltage source \setup{V} via Stark-tuning the atomic frequency. The demon feedback is implemented by a resonant interaction between \setup{A} and \setup{C} based on the adiabatic passage technique. It allows for the efficient population transfer between the \setup{AC} states $\ket{e,n}$ and $\ket{g,n+1}$ independent of the cavity photon number $n$. Energy conservation prevents the coupling of the joint ground state $\ket{g,0}$ to other states.
The atomic states are directly measured by a field-ionisation detector \setup{M} providing us with the final qubit and demon states, $\ket{{m_\textsf{Q}}}$ and $\ket{{m_\textsf{D}}=k}$, respectively. The cavity photon-number state $\ket{{m_\textsf{C}}}$ is probed by a sequence of several tens of atoms interacting with \setup{C} in the dispersive regime and performing a quantum non-demolition (QND) measurement of its photon number~\cite{Guerlin07}.
\subsection{Experimental sequences}
Each of the six EP expressions can be experimentally accessed by running the Maxwell's demon circuit and measuring physical quantities entering these expressions using measurement strategies properly adapted to each quantity. For instance, the cavity energy change $Q^\textsf{C}$, required to compute $\EP{1}$, can be obtained by comparing the average initial and final photon number in the cavity without the need to resolve different numbers. However, in order to significantly reduce the overall data acquisition time and to address all expressions at once we have decided to record the complete statistics of individual trajectories in the forward and backward protocols of the Maxwell's demon circuit. Knowing the initial and final states of each trajectory as well as their occurrence probability allows us to compute any physical quantity appearing in the EP expressions, as will be shown below.
In order to get the EP for any initial temperature of \setup{Q} and \setup{C} without increasing the overall experimental time, we have decided to replace the combination of the initial thermal state preparation and the first projective measurement of the TPEM scheme with the direct preparation of the \setup{QDC} system in the pure energy eigenstate $\ket{{n_\textsf{Q}},1_\textsf{D},{n_\textsf{C}}}$. The different thermal states are then taken into account by using the corresponding theoretical probability distributions $p({n_\textsf{Q}})$ and $p({n_\textsf{C}})$. We have shown in Ref.~\cite{Luis2020} that Gibbs states of \setup{Q} and \setup{C} at given inverse temperatures ${\beta_\textsf{Q}}$ and ${\beta_\textsf{C}}$ can be experimentally prepared and that their measured populations are in good agreement with the theoretical distributions $p({n_\textsf{Q}})$ and $p({n_\textsf{C}})$.
Summarizing, in our basic experimental sequence we initially prepare the \setup{QDC} system in the pure energy eigenstate $\ket{{n_\textsf{Q}},1_\textsf{D},{n_\textsf{C}}}$ and measure the probability of its final state $\ket{{m_\textsf{Q}},k,{m_\textsf{C}}}$ after the feedback evolution. In this way we obtain the conditional probability $p({m_\textsf{Q}},k,{m_\textsf{C}} | {n_\textsf{Q}},{n_\textsf{C}})$ of the trajectory $\gamma = \{{n_\textsf{Q}},{n_\textsf{C}},{m_\textsf{Q}},k,{m_\textsf{C}}\}$. Note that the initial demon state is always $\ket{1_\textsf{D}}$ and thus does not enter into the trajectory definition. Finally, for any inverse temperatures ${\beta_\textsf{Q}}$ and ${\beta_\textsf{C}}$ with the corresponding $p({n_\textsf{Q}})$ and $p({n_\textsf{C}})$ we compute $p(\gamma)$ using Eq.~\eqref{eq:trajectory probability forward}.
In this work we consider, without loss of generality, the constant cavity temperature of $2.8$~K and the qubit temperature varying such that the relative inverse temperature $\delta\tilde\beta=1-{\beta_\textsf{Q}}/{\beta_\textsf{C}} \in [-6,6]$. Since the populations of the photon-number states larger than 3 are negligible for this temperature (the mean thermal photon number is $0.71$), we restrict ${n_\textsf{C}}$ to values from $0$ to $3$ only.
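For illustration, the thermal photon-number distribution of \setup{C} can be cross-checked directly from the Bose--Einstein statistics of a $51$~GHz mode at $2.8$~K. The following minimal Python sketch is not part of the experimental analysis; the physical constants and the truncation used below are assumptions made only for this illustration.
\begin{verbatim}
# Sketch: thermal photon-number statistics of a 51 GHz mode at 2.8 K.
# Illustrative only; not the analysis code of the experiment.
import math

h, kB = 6.62607015e-34, 1.380649e-23     # SI constants
nu, T = 51e9, 2.8                        # cavity frequency (Hz), temperature (K)

x = math.exp(-h * nu / (kB * T))         # Boltzmann factor per photon
p = [(1 - x) * x**n for n in range(20)]  # thermal populations p(n)

nbar = sum(n * pn for n, pn in enumerate(p))
print(f"mean photon number   ~ {nbar:.2f}")            # close to the 0.71 quoted above
print(f"population above n=3 ~ {1 - sum(p[:4]):.3f}")  # ~0.03, so n_C <= 3 suffices
\end{verbatim}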
The vacuum state $\ket{{n_\textsf{C}}=0}$ of \setup{C} is prepared by sending through its mode a beam of resonant atoms in state $\ket{g}$. They absorb all photons from \setup{C}, thus cooling it into the vacuum. The state $\ket{{n_\textsf{C}}=1}$ with one photon is excited from the vacuum by using one atom in state $\ket{e}$ and forcing it to resonantly emit a photon into \setup{C}. The preparation of larger photon-number states is realized by the QND projection of a small coherent field~\cite{Guerlin07}. We first inject into \setup{C} a coherent field with about $3$ photons on average. Then, we perform the QND measurement, randomly resulting in different photon-number states. Finally, we post-select and sort all trajectories with the initial projected states $\ket{{n_\textsf{C}}=2}$ and $\ket{{n_\textsf{C}}=3}$.
The final \setup{QDC} state is measured independently on each ensemble of quantum trajectories with the same initial state $\ket{{n_\textsf{Q}},{n_\textsf{C}}}$. The final detection of \setup{A} gives us the conditional probability $p({m_\textsf{Q}},k | {n_\textsf{Q}},{n_\textsf{C}})$. The cavity photon-number probability is reconstructed on the ensemble of trajectories~\cite{Metillon19} with the same initial and final \setup{QD} state. In this way we obtain the conditional distribution $p({m_\textsf{C}} | {n_\textsf{Q}},{n_\textsf{C}},{m_\textsf{Q}},k)$ and compute $p({m_\textsf{Q}},k,{m_\textsf{C}} | {n_\textsf{Q}},{n_\textsf{C}}) = p({m_\textsf{C}} | {n_\textsf{Q}},{n_\textsf{C}},{m_\textsf{Q}},k)\,p({m_\textsf{Q}},k | {n_\textsf{Q}},{n_\textsf{C}})$. The procedure of the state preparation and detection, along with all measured probabilities, is presented in detail in Appendix~\ref{app:data}. The probability of the trajectory $\gamma = \{{n_\textsf{Q}},{n_\textsf{C}},{m_\textsf{Q}},k,{m_\textsf{C}}\}$ for each ${\beta_\textsf{Q}}$ then equals $p(\gamma) = p({m_\textsf{Q}},k,{m_\textsf{C}} | {n_\textsf{Q}},{n_\textsf{C}}) \,p({n_\textsf{Q}}) \,p({n_\textsf{C}})$. A similar procedure is realized to obtain the probability distribution $p(\tilde \gamma)$ of the backward trajectories. The sets of probabilities $\{p(\gamma)\}$ and $\{p(\tilde \gamma)\}$ are used in the following to compute all expressions for the entropy production, as explained below for each EP expression.
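The bookkeeping that combines the measured conditional probabilities with the theoretical thermal weights can be illustrated by the following minimal Python sketch; the dictionary layout, variable names, and toy numbers are illustrative assumptions and do not reproduce the actual analysis code.
\begin{verbatim}
# Sketch: assembling p(gamma) from measured conditional probabilities and
# theoretical thermal weights.  Layout and numbers are illustrative only.

def trajectory_probabilities(p_cond, p_nQ, p_nC):
    """p_cond maps (nQ, nC, mQ, k, mC) -> p(mQ, k, mC | nQ, nC)."""
    return {traj: pc * p_nQ[traj[0]] * p_nC[traj[1]]
            for traj, pc in p_cond.items()}

# Toy input: one deterministic branch per initial state.
p_cond = {(0, 0, 0, 0, 0): 1.0, (1, 0, 0, 1, 1): 1.0}
p_nQ = {0: 0.8, 1: 0.2}
p_nC = {0: 1.0}
print(trajectory_probabilities(p_cond, p_nQ, p_nC))
\end{verbatim}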
\subsection{Measurement of entropy production}
Figure~\ref{fig:results} shows the temperature dependence of the entropy production $\EP{}$ computed from the six expressions. Dotted lines correspond to the theoretical values for the ideal \setup{QDC} system as presented by the quantum circuit in Fig.~\ref{fig:circuit}. As expected, they coincide for all expressions, showing their fundamental equivalence. For large negative $\delta\tilde\beta$ (i.e.,~ the qubit state close to $\ket{0_\textsf{Q}}$), the probability for \setup{Q} to be in $\ket{1_\textsf{Q}}$ is small making the \setup{QC} interaction after the demon readout unlikely. In this limit, the \setup{QC} state stays almost unchanged reducing the entropy production to zero. For large positive $\delta\tilde\beta$ (i.e.,~ the qubit state close to $\ket{1_\textsf{Q}}$), $\EP{}$ linearly increases with $\delta\tilde\beta$, see Appendix~\ref{app:asymptotic}. Since \setup{Q} is mostly in $\ket{1_\textsf{Q}}$, the \setup{QC} interaction is extremely likely, pushing the \setup{QC} state further away from the initial thermal one and, consequently, producing more entropy.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure6}
\caption{Entropy production calculated in natural units of information (nats) versus relative inverse temperature $\delta\tilde{\beta} = 1\!-\!{\beta_\textsf{Q}}/{\beta_\textsf{C}}$. Panels (a) to (f) correspond to the entropy production expressions $\EP{1}$ to $\EP{6}$, respectively. Solid and dashed lines are computed from experimental and simulated data, respectively. Dotted lines are theoretical predictions for the ideal model circuit of Fig.~\ref{fig:circuit} in the absence of any experimental imperfections.}
\label{fig:results}
\end{figure}
The solid lines in Fig.~\ref{fig:results} are computed from the experimental results using the expressions $\EP{1}$ to $\EP{6}$ for panels (a) to (f), respectively. The deviation from the ideal curves is due to experimental imperfections. The most significant imperfections are the preparation error ${\epsilon_\text{prep}}$ of the initial atomic states, the errors of the readout (${\epsilon_\text{read}}$) and feedback (${\epsilon_\text{feed}}$) operations and the discrimination error ${\epsilon_\text{meas}}$ of the atomic state measurement, see Appendix~\ref{app:major} for details. The errors ${\epsilon_\text{read}}$ and ${\epsilon_\text{feed}}$ modify the system evolution. They change the entropy production and affect all experimentally obtained $\EP{}$ equally. Namely, the imperfect readout allows for a non-negligible \setup{QC} interaction even for \setup{Q} prepared in $\ket{0_\textsf{Q}}$, resulting in a non-zero $\EP{}$ for $\delta\tilde\beta\ll0$. On the other hand, the imperfect feedback reduces the probability for the \setup{QC} interaction for \setup{Q} prepared in $\ket{1_\textsf{Q}}$, thus decreasing $\EP{}$ for $\delta\tilde\beta\gg0$. The errors ${\epsilon_\text{prep}}$ and ${\epsilon_\text{meas}}$ mix the labels of the detected quantum trajectories. Since different expressions are based on different combinations of experimental data, these errors have, in general, a different effect on the different expressions of $\EP{}$. Other imperfections, like atom and cavity relaxations, have a minor effect on the TPEM scheme and are listed in Appendix~\ref{app:minor}.
The computation of the first expression, $\EP{1}$, given by \eqref{eq::S1} and presented in Fig.~\ref{fig:results}(a), starts by computing the stochastic heat change $Q^\textsf{Q}[\gamma]$ of \setup{Q} for each trajectory $\gamma$. By averaging over all trajectories we get $Q^\textsf{Q}$. The probability $p(k=1)$ is given by the probability of finally detecting \setup{D} in the state $\ket{1_\textsf{D}}$ and equals the sum of $p(\gamma)$ over all trajectories with $k=1$. Here, we have used only the data from the atomic state detection of the forward protocol (i.e.,~ no information on the cavity state is required). It is also noteworthy that the measured $\EP{1}$ is higher than $\EP{}$ based on other expressions. Ideally, the Shannon entropy $\mean{I}$ goes to zero for large negative and positive $\delta\tilde\beta$ when the demon state after the readout is a pure quantum state, $\ket{0_\textsf{D}}$ or $\ket{1_\textsf{D}}$, respectively. However, due to the imperfect atomic state measurement ${\epsilon_\text{meas}}$, $\mean{I}$ is bounded from below by $H[{\epsilon_\text{meas}}]$, thus shifting $\EP{1}$ up, as seen in Fig.~\ref{fig:results}(a).
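Two of the ingredients named above, a trajectory average and the Shannon entropy of the detected demon state, can be sketched as follows. The per-trajectory heat assignment and the data layout are illustrative assumptions; the exact combination of these quantities into $\EP{1}$ [Eq.~\eqref{eq::S1}] is not reproduced here.
\begin{verbatim}
# Sketch of two ingredients of EP1 named in the text: a trajectory average
# and the Shannon entropy of the detected demon state.  The per-trajectory
# heat function and the data layout are illustrative assumptions.
import math

def average(p_gamma, f):
    return sum(p * f(g) for g, p in p_gamma.items())

def demon_entropy(p_gamma):
    # gamma = (nQ, nC, mQ, k, mC); gamma[3] is the detected demon state k
    p1 = sum(p for g, p in p_gamma.items() if g[3] == 1)
    return -sum(q * math.log(q) for q in (p1, 1.0 - p1) if q > 0.0)  # nats

p_gamma = {(1, 0, 0, 1, 1): 0.2, (0, 0, 0, 0, 0): 0.8}
print(average(p_gamma, lambda g: g[2] - g[0]))  # e.g. change of Q's excitation
print(demon_entropy(p_gamma))                   # Shannon entropy of p(k)
\end{verbatim}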
The state $\rho_\text{f}^{\textsf{QC},k}$ in the expression $\EP{2}$ is obtained from the final probability distribution $p({m_\textsf{Q}},k,{m_\textsf{C}})$. The product Gibbs state $\zeta_{\beta_\textsf{Q}}^\textsf{Q} \otimes \zeta_{\beta_\textsf{C}}^\textsf{C} $ is set by the inverse temperatures ${\beta_\textsf{Q}}$ and ${\beta_\textsf{C}}$. The probability $p(k)$ is obtained in the same way as for $\EP{1}$. This expression is therefore based solely on the forward protocol, after averaging the quantum trajectories into $\rho_\text{f}^{\textsf{QC},k}$. The experimental temperature dependence of $\EP{2}$ is shown in Fig.~\ref{fig:results}(b).
Figure~\ref{fig:results}(c) presents the expression $\EP{3}$ based on the analysis of the backward protocol with two branches, $k\!=\!0$ and $k\!=\!1$. It mainly relies on the backward trajectories, except for the value of $p(k)$ for the demon state extracted from the forward protocol. This expression shows the largest deviation from the ideal case for $\delta\tilde\beta\gg0$, which can be explained by the use of the backward trajectories and the divergent properties of $D$. The relative entropy $D(\rho||\sigma)$ is very sensitive to the smallest state variations if the support of the matrix $\sigma$ does not include the support of $\rho$, hence the second name ``divergence'' for $D$. In the expression $\EP{2}$ the support of the reference state $\zeta_{\beta_\textsf{Q}}^\textsf{Q} \otimes\zeta_{\beta_\textsf{C}}^\textsf{C}$ is the whole Hilbert space of the \setup{QC} system, making this expression less sensitive to the small state variations. For the expression $\EP{3}$, however, the situation is radically different: both states appearing in the function $D$ have limited supports, making its evaluation more sensitive to most experimental imperfections than all other expressions (see Appendix~\ref{app:major} for details).
The expression $\EP{4}$ in \eqref{eq::S4} is directly computed from the sets of $\{p(\gamma)\}$ and $\{p(\tilde\gamma)\}$ and is shown in Fig.~\ref{fig:results}(d). It is the only expression based on all data measured in the forward and backward protocols with no additional transformation or averaging.
Figure~\ref{fig:results}(e) shows the relative entropy $\EP{5}$ obtained from \eqref{eq::S5}. We first compute, for each trajectory $\gamma$, the stochastic entropy production $\sigma[\gamma]$ from its initial and final state using \eqref{eq:stochastic}. Then, we calculate the probabilities $p(\sigma)$ and ${p_\text{b}}(\sigma)$ from the set of all values of $\sigma$ detected in the forward and backward protocols and obtain $\EP{5}$. This expression uses all experimental data after having grouped trajectories with the same $\sigma$.
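The grouping of trajectories with the same value of $\sigma$ into the distributions $p(\sigma)$ and ${p_\text{b}}(\sigma)$ can be sketched as below; the data layout is an illustrative assumption, and the subsequent combination of the two distributions into $\EP{5}$ [Eq.~\eqref{eq::S5}] is not reproduced here.
\begin{verbatim}
# Sketch: histogram of stochastic entropy production values over a trajectory
# ensemble.  Applied to the forward set it gives p(sigma); applied to the
# backward set it gives p_b(sigma).  Layout is illustrative only.
from collections import defaultdict

def sigma_distribution(p_traj, sigma_of, ndigits=9):
    dist = defaultdict(float)
    for traj, p in p_traj.items():
        dist[round(sigma_of(traj), ndigits)] += p
    return dict(dist)

# Toy usage with an arbitrary sigma assignment per trajectory label:
p_fwd = {"gamma1": 0.7, "gamma2": 0.3}
sigma = {"gamma1": 0.0, "gamma2": 1.2}
print(sigma_distribution(p_fwd, sigma.get))   # {0.0: 0.7, 1.2: 0.3}
\end{verbatim}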
Finally, the expression $\EP{6}$ defined in \eqref{eq::S6} is shown in Fig.~\ref{fig:results}(f). The state $\rho_\text{f}^\textsf{QC}$ is computed from the joint \setup{QDC} state $\rho_\text{f}^\textsf{QDC}$, based on the distribution $p({m_\textsf{Q}},k,{m_\textsf{C}})$, by tracing out \setup{D}. The mutual information between \setup{QC} and \setup{D} is computed directly on $\rho_\text{f}^\textsf{QDC}$. Remarkably, the value of $\EP{6}$ perfectly coincides with that of $\EP{2}$. We show in Appendix~\ref{app:EP26} that these two expressions are mathematically identical and are based on the same set of the experimentally obtained physical quantities.
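Because the state analysis is restricted to populations, the joint state $\rho_\text{f}^\textsf{QDC}$ is effectively described by the distribution $p({m_\textsf{Q}},k,{m_\textsf{C}})$, and, to the extent that coherences can be neglected, the mutual information between \setup{QC} and \setup{D} reduces to its classical counterpart. A minimal sketch with toy numbers (not the actual analysis code):
\begin{verbatim}
# Sketch: classical mutual information I(QC : D) from the diagonal joint
# distribution p(mQ, k, mC); valid when coherences are negligible, as in the
# population-based analysis used here.  Toy numbers are illustrative.
import math
from collections import defaultdict

def mutual_information_QC_D(p_joint):
    p_qc, p_d = defaultdict(float), defaultdict(float)
    for (mQ, k, mC), p in p_joint.items():
        p_qc[(mQ, mC)] += p
        p_d[k] += p
    return sum(p * math.log(p / (p_qc[(mQ, mC)] * p_d[k]))
               for (mQ, k, mC), p in p_joint.items() if p > 0.0)  # in nats

p_joint = {(0, 0, 0): 0.7, (0, 1, 1): 0.2, (1, 1, 0): 0.1}
print(mutual_information_QC_D(p_joint))
\end{verbatim}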
The dashed lines in Fig.~\ref{fig:results} are the entropy productions computed from simulated data obtained by taking into account all mentioned experimental imperfections. The good agreement between the measurement and the simulation allows us to test and confirm the influence of the various experimental errors on the different ways to experimentally access the entropy production $\EP{}$. Some errors perturb quantum trajectories for particular temperature ranges. For instance, ${\epsilon_\text{read}}$ manifests itself for $\delta\tilde\beta\ll 0$, while ${\epsilon_\text{prep}}$ and ${\epsilon_\text{feed}}$ are noticeable mainly for $\delta\tilde\beta\gg 0$. The detection error ${\epsilon_\text{meas}}$ is influential for qubit temperatures with very different populations in $\ket{0_\textsf{Q}}$ and $\ket{1_\textsf{Q}}$, i.e.,~ for $|\delta\tilde\beta| \gg 1$. The influence of other sources of errors on the discrepancy between the ideal and realistic cases depends on the particular expression for $\EP{}$ and on the way it is measured experimentally. In general, the errors increase $\EP{}$ for $\delta\tilde\beta \ll 0$ and decrease it for $\delta\tilde\beta \gg 0$ relative to the ideal case.
\section{Conclusion}
Our results allow us to clarify the meaning of entropy production. Beyond its usual acceptation as a quantifier of irreversibility, it relates to the experimental lack of control over a quantum system: the larger the entropy production, the smaller the control.
In this spirit, we have presented different alternative ways to address and describe an ultimate information-powered quantum fridge, providing us with different operational expressions for entropy production. Our cavity QED setup has allowed us to formulate theoretically and to access experimentally several expressions for $\EP{}$, each of them having its own physical interpretation. Their computation is based on different data and requires different data processing. However, since they describe the same physical quantity, they provide equivalent strategies to measure $\EP{}$. Following the same line, similar sets of entropy production expressions can be derived for any other system under investigation and characterization. The experimentalist's final choice is set by the features and imperfections of a particular setup, which perturb the measured data and thus the different $\EP{}$ expressions in different ways.
In the current work the state analysis has been restricted to the populations of energy states, which is sufficient for accessing the entropy production of the thermalization. To study other types of environments, it might be necessary to access quantum information stored, e.g., in the system's coherences or in the entanglement between its parts. Our experimental setup allows for complete quantum state tomography~\cite{Metillon19}, providing access to quantum information and its transformation. We plan to use this ability to implement dephasing and decorrelating environments in the forward-reservoir-backward protocol in order to reveal how different types of information erasure induce irreversibility.
\begin{acknowledgments}
We thank J.-M. Raimond and M. Brune for insightful discussions.
We acknowledge support by European Community (SIQS project) and by the Agence Nationale de la Recherche (QuDICE project). P.~A.~C. acknowledges Templeton World Charity Foundation, Inc. This publication was made possible through the support of the grant TWCF0338 from Templeton World Charity Foundation, Inc. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of Templeton World Charity Foundation, Inc.
\end{acknowledgments}
\section*{ S.1 Values of Parameters of Low-Energy Effective Model}
In this supplemental material, we show the values of the parameters of the {\it ab initio} low-energy effective model.
Table S~\ref{W_FeSe} (\ref{W_FeTe}) shows the three-dimensional effective interaction of the maximally localized Wannier functions ({MLWFs}) originating from the Fe $3d$ orbitals of FeSe (FeTe)~\cite{S_hirayama13}.
We obtain the two-dimensional effective interaction by uniformly subtracting 0.6 (0.4) eV from the three-dimensional effective interaction of FeSe (FeTe)
following Refs.~\citen{S_nakamura10} and \citen{S_nakamurap}.
Table S~\ref{t_LDA} lists the transfer integrals of the {MLWFs} of FeSe and FeTe {obtained by} the local-density-approximation (LDA) {calculation}.
We also show the transfer integral of the {MLWFs} in the constrained GW with the self-interaction correction (cGW-SIC) model in Table S~\ref{t_cGW-SIC}.
Figure S~\ref{FeSe_band} shows the corresponding band structure of FeSe.
The on-site potential of each model is shown in Fig. S~\ref{ton}.
The details of the cGW method are explained in Refs.~\citen{S_hirayama13} and \citen{S_aryasetiawan09}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\textwidth ]{17078Fig1SM.eps}
\caption{
(Color online) Electronic band structures of the Fe $3d$ {MLWFs} of FeSe in the LDA [(red) solid line]
and the cGW-SIC [(green) dashed line].
The Fermi energy is set to zero.
}
\label{FeSe_band}
\end{figure}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.45\textwidth ]{17078Fig2SM.eps}
\end{center}
\caption{(Color online) On-site potential of Wannier orbitals for FeSe and FeTe {obtained by} the LDA and cGW-SIC {calculations}.
}
\label{ton}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[clip,width=0.4\textwidth ]{17078Fig3SM.eps}
\caption{
(Color online) Ground state energy per site of AF in the mVMC for FeSe at ambient pressure and at 4.0 GPa with $t^{\text{cGW-SIC}}$ (in meV).
System size is $N_{\text{s}}=8\times 8$.
}
\label{AF_FeSe_energy_4GPa}
\end{figure}
\section*{ S.2 Magnetism of FeSe under Pressure}
We calculate the magnetism of FeSe under pressure as well as that at ambient pressure.
Figure S~\ref{AF_FeSe_energy_4GPa} shows the energy per site {by} the cGW-SIC {calculation for} FeSe at 4.0 GPa in the experimental geometry~\cite{S_margadonna09}.
{Although the AFS-type short-ranged fluctuations were observed~\cite{S_rahn15,S_wangX} at ambient pressure, the} unique energetic degeneracy at ambient pressure is lifted under pressure, and we predict that the AFH phase becomes {slightly more} stable {than the AFS}.
Experimental verification of this prediction is desirable.
\begin{table*}[htb]
\caption{
Bare and effective three-dimensional Coulomb interactions between two electrons for all the combinations of Fe $3d$ orbitals in FeSe (in eV).
Here, $v$ and $J_{v}$ represent the bare on-site and exchange Coulomb interactions, respectively.
The static limits of the effective on-site and exchange Coulomb interactions are denoted by $U$ and $J$, respectively.
}
\
\label{W_FeSe}
\scalebox{0.85}[0.85]{
\begin{tabular}{c|ccccc|ccccc}
\hline \hline \\ [-8pt]
FeSe & & & $v$ & & & & & $U$ & & \\ [+1pt]
\hline \\ [-8pt]
& $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ & $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ \\
\hline \\ [-8pt]
$XY$ & 18.65 & 16.50 & 17.28 & 16.50 & 16.51 & 4.51 & 3.19 & 3.21 & 3.19 & 3.48 \\
$YZ$ & 16.50 & 16.97 & 17.05 & 15.70 & 15.30 & 3.19 & 4.11 & 3.53 & 3.02 & 2.98 \\
$3Z^{2}-R^{2}$ & 17.28 & 17.05 & 19.09 & 17.05 & 16.00 & 3.21 & 3.53 & 4.68 & 3.53 & 3.01 \\
$ZX$ & 16.50 & 15.70 & 17.05 & 16.97 & 15.30 & 3.19 & 3.02 & 3.53 & 4.11 & 2.98 \\
$X^{2}-Y^{2}$ & 16.51 & 15.30 & 16.00 & 15.30 & 15.88 & 3.48 & 2.98 & 3.01 & 2.98 & 3.77 \\
\hline \hline \\ [-8pt]
& & & $J_{v}$ & & & & & $J$ & & \\ [+1pt]
\hline \\ [-8pt]
& $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ & $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ \\
\hline \\ [-8pt]
$XY$ & & 0.66 & 0.79 & 0.66 & 0.34 & & 0.57 & 0.69 & 0.57 & 0.32 \\
$YZ$ & 0.66 & & 0.46 & 0.56 & 0.61 & 0.57 & & 0.42 & 0.48 & 0.53 \\
$3Z^{2}-R^{2}$ & 0.79 & 0.46 & & 0.46 & 0.75 & 0.69 & 0.42 & & 0.42 & 0.62 \\
$ZX$ & 0.58 & 0.56 & 0.46 & & 0.61 & 0.57 & 0.48 & 0.42 & & 0.53 \\
$X^{2}-Y^{2}$ & 0.34 & 0.61 & 0.75 & 0.61 & & 0.32 & 0.53 & 0.62 & 0.53 & \\
\hline
\hline
\end{tabular}
}
\end{table*}
\begin{table*}[htb]
\caption{
Bare and effective three-dimensional Coulomb interactions between two electrons for all the combinations of Fe $3d$ orbitals in FeTe (in eV).
Here, $v$ and $J_{v}$ represent the bare on-site and exchange Coulomb interactions, respectively.
The static limits of the effective on-site and exchange Coulomb interactions are denoted by $U$ and $J$, respectively.
}
\
\label{W_FeTe}
\scalebox{0.85}[0.85]{
\begin{tabular}{c|ccccc|ccccc}
\hline \hline \\ [-8pt]
FeTe & & & $v$ & & & & & $U$ & & \\ [+1pt]
\hline \\ [-8pt]
& $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ & $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ \\
\hline \\ [-8pt]
$XY$ & 17.08 & 15.05 & 16.20 & 15.05 & 16.26 & 3.46 & 2.30 & 2.36 & 2.30 & 2.80 \\
$YZ$ & 15.05 & 15.36 & 15.87 & 14.24 & 14.94 & 2.30 & 3.08 & 2.62 & 2.15 & 2.30 \\
$3Z^{2}-R^{2}$ & 16.20 & 15.87 & 18.25 & 15.87 & 16.08 & 2.36 & 2.62 & 3.73 & 2.62 & 2.35 \\
$ZX$ & 15.05 & 14.24 & 15.87 & 15.36 & 14.94 & 2.30 & 2.15 & 2.62 & 3.08 & 2.30 \\
$X^{2}-Y^{2}$ & 16.32 & 14.94 & 16.08 & 14.94 & 16.77 & 2.82 & 2.30 & 2.36 & 2.30 & 3.39\\
\hline \hline \\ [-8pt]
& & & $J_{v}$ & & & & & $J$ & & \\ [+1pt]
\hline \\ [-8pt]
& $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ & $XY$ & $YZ$ & $3Z^{2}-R^{2}$ & $ZX$ & $X^{2}-Y^{2}$ \\
\hline \\ [-8pt]
$XY$ & & 0.59 & 0.73 & 0.59 & 0.33 & & 0.49 & 0.62 & 0.49 & 0.31 \\
$YZ$ & 0.59 & & 0.42 & 0.49 & 0.59 & 0.49 & & 0.37 & 0.40 & 0.49 \\
$3Z^{2}-R^{2}$ & 0.73 & 0.42 & & 0.42 & 0.74 & 0.62 & 0.37 & & 0.37 & 0.62 \\
$ZX$ & 0.59 & 0.49 & 0.42 & & 0.59 & 0.49 & 0.40 & 0.37 & & 0.49 \\
$X^{2}-Y^{2}$ & 0.33 & 0.59 & 0.74 & 0.59 & & 0.31 & 0.49 & 0.62 & 0.49 & \\
\hline
\hline
\end{tabular}
}
\end{table*}
\begin{table*}[ptb]
\caption{
Transfer integrals for the $3d$ orbitals of the Fe sites in FeSe and FeTe, $t^{\text{LDA}}_{mn}(R_x, R_y, R_z)$,
where $t^{\text{LDA}}$ is the expectation value of the Kohn-Sham Hamiltonian for the Wannier functions: $t^{\text{LDA}}=\langle \phi | \mathcal{H}^{\text{LDA}} |\phi \rangle$, $m$ and $n$ denote symmetries of the $3d$ orbitals, and the axes of $(R_x, R_y, R_z)$ are taken along the Fe-Se/Te directions.
Units are given in meV.
}
\
\scalebox{0.75}[0.75]{
\begin{tabular}{c|rrrrrrr|rrr}
\hline \hline \\ [-4pt]
FeSe \\ [+2pt]
\hline \\ [-4pt]
$(m, n)$ $\backslash$ $\bm{R}$
& \big[$0,0,0$\big]
& \big[$\frac{1}{2},-\frac{1}{2},0$\big]
& \big[$1,0,0$\big]
& \big[$1,-1,0$\big]
& \big[$\frac{3}{2},-\frac{1}{2},0$\big]
& \big[$0,0,\frac{c}{a}$\big]
& \big[$\frac{1}{2},-\frac{1}{2},\frac{c}{a}$\big]
& $\sigma_{Y}$
& $I$
& $\sigma^{L}$ \\ [+4pt]
\hline \\ [-8pt]
$(XY,XY)$ & -509 & -410 & -70 & -11 & 3 & -25 & 6 & $+$ & $+$ & $+$ \\
$(XY,YZ)$ & 0 & 273 & 131 & -9 & -6 & 0 & -9 & $+$ & $-$ & $-$(1,4) \\
$(XY,3Z^{2}-R^{2})$ & 0 & -347 & 0 & 22 & -8 & 0 & 11 & $-$ & $+$ & $+$ \\
$(XY,ZX)$ & 0 & 273 & 0 & -9 & 18 & 0 & -3 & $-$ & $-$ & $-$(1,2) \\
$(XY,X^{2}-Y^{2})$ & 0 & 0 & 0 & 0 & -9 & 0 & -4 & $-$ & $+$ & $-$ \\
$(YZ,YZ)$ & 46 & 197 & 128 & -17 & -8 & 8 & 27 & $+$ & $+$ & (4,4) \\
$(YZ,3Z^{2}-R^{2})$ & 0 & -119 & 0 & 7 & 2 & 0 & 11 & $-$ & $-$ & $-$(4,3) \\
$(YZ,ZX)$ & 0 & 127 & 0 & -23 & -19 & 0 & 12 & $-$ & $+$ & (4,2) \\
$(YZ,X^{2}-Y^{2})$ & 0 & 223 & 0 & 1 & -3 & 0 & 20 & $-$ & $-$ & (4,5) \\
$(3Z^{2}-R^{2},3Z^{2}-R^{2})$& -388 & -4 & -15 & -14 & -6 & -23 & -9 & $+$ & $+$ & $+$ \\
$(3Z^{2}-R^{2},ZX)$ & 0 & 119 & 199 & -7 & -13 & 0 & -10 & $+$ & $-$ & $-$(3,2) \\
$(3Z^{2}-R^{2},X^{2}-Y^{2})$ & 0 & 0 & -115 & 0 & 1 & -8 & -6 & $+$ & $+$ & $-$ \\
$(ZX,ZX)$ & 46 & 197 & 335 & -17 & 13 & 8 & 0 & $+$ & $+$ & (2,2) \\
$(ZX,X^{2}-Y^{2})$ & 0 & -223 & 82 & -1 & -15 & 0 & 7 & $+$ & $-$ & (2,5) \\
$(X^{2}-Y^{2},X^{2}-Y^{2})$ & -34 & -56 & 93 & 0 & 17 & -28 & 4 & $+$ & $+$ & $+$ \\
\hline \hline
FeTe \\ [+2pt]
\hline \\ [-4pt]
$(m, n)$ $\backslash$ $\bm{R}$
& \big[$0,0,0$\big]
& \big[$\frac{1}{2},-\frac{1}{2},0$\big]
& \big[$1,0,0$\big]
& \big[$1,-1,0$\big]
& \big[$\frac{3}{2},-\frac{1}{2},0$\big]
& \big[$0,0,\frac{c}{a}$\big]
& \big[$\frac{1}{2},-\frac{1}{2},\frac{c}{a}$\big]
& $\sigma_{Y}$
& $I$
& $\sigma^{L}$ \\ [+4pt]
\hline \\ [-8pt]
$(XY,XY)$ & -452 & -378 & -11 & -41 & -1 & -31 & 12 & $+$ & $+$ & $+$ \\
$(XY,YZ)$ & 0 & 237 & 109 & 3 & -6 & 0 & -12 & $+$ & $-$ & $-$(1,4) \\
$(XY,3Z^{2}-R^{2})$ & 0 & -336 & 0 & 33 & -9 & 0 & 21 & $-$ & $+$ & $+$ \\
$(XY,ZX)$ & 0 & 237 & 0 & 3 & 35 & 0 & -1 & $-$ & $-$ & $-$(1,2) \\
$(XY,X^{2}-Y^{2})$ & 0 & 0 & 0 & 0 & -14 & 0 & 3 & $-$ & $+$ & $-$ \\
$(YZ,YZ)$ & 44 & 156 & 103 & -15 & -12 & 13 & 37 & $+$ & $+$ & (4,4) \\
$(YZ,3Z^{2}-R^{2})$ & 0 & -122 & 0 & 17 & 6 & 0 & 11 & $-$ & $-$ & $-$(4,3) \\
$(YZ,ZX)$ & 0 & 101 & 0 & -27 & -25 & 0 & 14 & $-$ & $+$ & (4,2) \\
$(YZ,X^{2}-Y^{2})$ & 0 & 178 & 0 & 0 & -10 & 0 & 22 & $-$ & $-$ & (4,5) \\
$(3Z^{2}-R^{2},3Z^{2}-R^{2})$& -480 & -73 & -53 & 3 & 8 & -67 & -23 & $+$ & $+$ & $+$ \\
$(3Z^{2}-R^{2},ZX)$ & 0 & 122 & 198 & -17 & -17 & 0 & -32 & $+$ & $-$ & $-$(3,2) \\
$(3Z^{2}-R^{2},X^{2}-Y^{2})$ & 0 & 0 & -29 & 0 & -6 & 30 & -30 & $+$ & $+$ & $-$ \\
$(ZX,ZX)$ & 44 & 156 & 300 & -15 & 42 & 13 & 9 & $+$ & $+$ & (2,2) \\
$(ZX,X^{2}-Y^{2})$ & 0 & -178 & 136 & 0 & -23 & 0 & 26 & $+$ & $-$ & (2,5) \\
$(X^{2}-Y^{2},X^{2}-Y^{2})$ & -211 & 66 & 52 & 9 & 12 & 16 & -24 & $+$ & $+$ & $+$ \\
\hline \hline
\end{tabular}
}
\label{t_LDA}
\end{table*}
\begin{table*}[ptb]
\caption{Transfer integrals for the $3d$ orbitals of the Fe sites in FeSe and FeTe, $t^{\text{cGW-SIC}}_{mn}(R_x, R_y, R_z)$,
where $t^{\text{cGW-SIC}}$ is the transfer integral without double counting:
$t^{\text{cGW-SIC}}=\langle \phi | \mathcal{H}^{\text{cGW-SIC}} |\phi \rangle$,
$m$ and $n$ denote symmetries of the $3d$ orbitals, and the axis of $(R_x, R_y, R_z)$ is taken along the Fe-Se/Te directions.
Units are given in meV.
}
\
\scalebox{0.75}[0.75]{
\begin{tabular}{c|rrrrrrr|rrr}
\hline \hline \\ [-4pt]
FeSe \\ [+2pt]
\hline \\ [-4pt]
$(m, n)$ $\backslash$ $\bm{R}$
& \big[$0,0,0$\big]
& \big[$\frac{1}{2},-\frac{1}{2},0$\big]
& \big[$1,0,0$\big]
& \big[$1,-1,0$\big]
& \big[$\frac{3}{2},-\frac{1}{2},0$\big]
& \big[$0,0,\frac{c}{a}$\big]
& \big[$\frac{1}{2},-\frac{1}{2},\frac{c}{a}$\big]
& $\sigma_{Y}$
& $I$
& $\sigma^{L}$ \\ [+4pt]
\hline \\ [-8pt]
$(XY,XY)$ & -751 & -466 & -20 & 24 & 3 & -38 & 6 & $+$ & $+$ & $+$ \\
$(XY,YZ)$ & 0 & 201 & 106 & -26 & -3 & 0 & -9 & $+$ & $-$ & $-$(1,4) \\
$(XY,3Z^{2}-R^{2})$ & 0 & -391 & 0 & 27 & -3 & 0 & 8 & $-$ & $+$ & $+$ \\
$(XY,ZX)$ & 0 & 201 & 0 & -26 & -9 & 0 & -4 & $-$ & $-$ & $-$(1,2) \\
$(XY,X^{2}-Y^{2})$ & 0 & 0 & 0 & 0 & 6 & 0 & -10 & $-$ & $+$ & $-$ \\
$(YZ,YZ)$ & -137 & 97 & 148 & -35 & -15 & 7 & 27 & $+$ & $+$ & (4,4) \\
$(YZ,3Z^{2}-R^{2})$ & 0 & -94 & 0 & 22 & -1 & 0 & 8 & $-$ & $-$ & $-$(4,3) \\
$(YZ,ZX)$ & 0 & 199 & 0 & -37 & -36 & 0 & 16 & $-$ & $+$ & (4,2) \\
$(YZ,X^{2}-Y^{2})$ & 0 & 204 & 0 & -1 & 7 & 0 & 17 & $-$ & $-$ & (4,5) \\
$(3Z^{2}-R^{2},3Z^{2}-R^{2})$& -1075 & -67 & -76 & 5 & 11 & -4 & 3 & $+$ & $+$ & $+$ \\
$(3Z^{2}-R^{2},ZX)$ & 0 & 94 & 195 & -22 & -16 & 0 & -3 & $+$ & $-$ & $-$(3,2) \\
$(3Z^{2}-R^{2},X^{2}-Y^{2})$ & 0 & 0 & -35 & 0 & -13 & -22 & 7 & $+$ & $+$ & $-$ \\
$(ZX,ZX)$ & -137 & 97 & 263 & -35 & 25 & 7 & -2 & $+$ & $+$ & (2,2) \\
$(ZX,X^{2}-Y^{2})$ & 0 & -204 & 147 & 1 & -39 & 0 & 0 & $+$ & $-$ & (2,5) \\
$(X^{2}-Y^{2},X^{2}-Y^{2})$ & -299 & 186 & 19 & -20 & 8 & -35 & 17 & $+$ & $+$ & $+$ \\
\hline \hline
FeTe \\ [+2pt]
\hline \\ [-4pt]
$(m, n)$ $\backslash$ $\bm{R}$
& \big[$0,0,0$\big]
& \big[$\frac{1}{2},-\frac{1}{2},0$\big]
& \big[$1,0,0$\big]
& \big[$1,-1,0$\big]
& \big[$\frac{3}{2},-\frac{1}{2},0$\big]
& \big[$0,0,\frac{c}{a}$\big]
& \big[$\frac{1}{2},-\frac{1}{2},\frac{c}{a}$\big]
& $\sigma_{Y}$
& $I$
& $\sigma^{L}$ \\ [+4pt]
\hline \\ [-8pt]
$(XY,XY)$ & -678 & -410 & 52 & 5 & -2 & 23 & -13 & $+$ & $+$ & $+$ \\
$(XY,YZ)$ & 0 & 133 & 79 & -11 & -1 & 0 & 11 & $+$ & $-$ & $-$(1,4) \\
$(XY,3Z^{2}-R^{2})$ & 0 & -340 & 0 & 26 & -8 & 0 & 11 & $-$ & $+$ & $+$ \\
$(XY,ZX)$ & 0 & 133 & 0 & -11 & -2 & -26 & 10 & $-$ & $-$ & $-$(1,2) \\
$(XY,X^{2}-Y^{2})$ & 0 & 0 & 0 & 0 & -1 & 0 & 16 & $-$ & $+$ & $-$ \\
$(YZ,YZ)$ & -146 & 42 & 120 & -28 & -17 & 11 & 15 & $+$ & $+$ & (4,4) \\
$(YZ,3Z^{2}-R^{2})$ & 0 & -88 & 0 & 24 & 6 & -23 & 14 & $-$ & $-$ & $-$(4,3) \\
$(YZ,ZX)$ & 0 & 186 & 0 & -35 & -46 & 0 & 16 & $-$ & $+$ & (4,2) \\
$(YZ,X^{2}-Y^{2})$ & 0 & 139 & 0 & 1 & -2 & 12 & 14 & $-$ & $-$ & (4,5) \\
$(3Z^{2}-R^{2},3Z^{2}-R^{2})$& -974 & -114 & -114 & 21 & 26 & -1 & -1 & $+$ & $+$ & $+$ \\
$(3Z^{2}-R^{2},ZX)$ & 0 & 88 & 181 & -24 & -12 & 0 & -4 & $+$ & $-$ & $-$(3,2) \\
$(3Z^{2}-R^{2},X^{2}-Y^{2})$ & 0 & 0 & 49 & 0 & -24 & -1 & 2 & $+$ & $+$ & $-$ \\
$(ZX,ZX)$ & -146 & 42 & 213 & -28 & 51 & 22 & 13 & $+$ & $+$ & (2,2) \\
$(ZX,X^{2}-Y^{2})$ & 0 & -139 & 147 & -1 & -30 & 0 & 8 & $+$ & $-$ & (2,5) \\
$(X^{2}-Y^{2},X^{2}-Y^{2})$ & -495 & 292 & -47 & -9 & 20 & 5 & -13 & $+$ & $+$ & $+$ \\
\hline \hline
\end{tabular}
}
\label{t_cGW-SIC}
\end{table*}
\section{Introduction}
The idea of a market exchange automatically channeling self-interest toward welfare maximizing outcomes is a central theme in neoclassical economics. The initial conjecture of the ``invisible hand'' goes back to Adam Smith. Formally, the Arrow--Debreu model showed that under convex preferences and perfect competition there must be a set of Walrasian equilibrium prices \citep{Arrow1954}.
In these models, market participants are price-takers, and they sell or buy divisible goods in order to maximize their total value subject to their budget or initial wealth. The more recent stream of research on competitive equilibrium theory assumes indivisible goods and quasilinear utility functions, i.e. buyers maximize value minus the price they pay (their payoff) and there are no budget constraints \citep{Kelso82, Gul1999, bikhchandani1997competitive, Leme2017Soda, baldwin2019understanding}. The underlying question is under which conditions on the preferences markets with indivisible goods can be assumed to be core-stable\footnote{The core is the set of feasible outcomes that cannot be improved upon by a subset of the economy's participants.} and welfare-maximizing. This literature focuses on larger markets where bidders are assumed to be price-takers and it emphasizes core-stability over incentive-compatibility.
For two-sided matching markets where quasilinear buyers have unit-demand, referred to as assignment markets, welfare-maximization, core-stability and even incentive-compatibility can be achieved with a polynomial-time auction algorithm \citep{shapley1971assignment}.
These auctions can be interpreted as primal-dual algorithms, where the auctioneer specifies a price vector (a demand query) in each round and the bidders respond with their demand set, i.e. the set of goods that maximize their payoff for given prices.
Unfortunately, it is well-known that we cannot hope for such positive results with more general quasilinear preferences. Incentive-compatibility and core-stability are conflicting in general markets with quasilinear utilities \citep{Ausubel2006}. Even if we give up on incentive-compatibility, only very restricted types of valuations (e.g., substitutes valuations) allow for Walrasian equilibria.\footnote{A Walrasian equilibrium describes a competitive equilibrium where supply equals demand and prices are linear (i.e., there is a price for each good) and anonymous (i.e., the price is the same for all participants and there is no price differentiation) \citep{bikhchandani1997competitive,baldwin2019understanding, leme2017gross}.} As discussed in \citet{bikhchandani2002package}, under general valuations (allowing for substitutes and complements), competitive equilibrium prices need to be non-linear and personalized, and the core can be empty.
Compared to early utility models in general equilibrium theory such as the Fisher markets for divisible items \citep{eisenberg1959consensus, orlin2010improved}, quasilinear utility functions imply that bidders do not have budget constraints. In many markets, this is too strong an assumption \citep{Dobzinski2008, che2000optimal, dutting2016auctions}. Bidders might well maximize payoff, but they need to respect budget constraints. Spectrum auctions are just one example, where bidders have general valuations with complements and substitutes and they are typically financially constrained \citep{bichler2017handbook}. With such budget constraints, incentive-compatible and Pareto-optimal mechanisms are known to be impossible in multi-object markets \citep{Dobzinski2008}.
It is interesting to understand how core-stable and welfare-maximizing prices can be computed in the presence of budget constraints if we assume bidders to be price-takers. This question was recently analyzed for markets that allow for general valuations where the auctioneer has complete information about values and budgets \citep{bichler2021or}. The result is a spoiler: Computing the welfare-maximizing core-stable outcome is a $\Sigma_2^p$-hard optimization problem. Such problems are considered intractable, even for very small problem instances.
The intuition behind this result is that the allocation and pricing problems cannot be treated independently anymore. With quasilinear utility functions, the auctioneer first determines the welfare-maximizing outcome and then a corresponding price vector. If budget constraints are binding, then these constraints on the prices need to be considered when computing the welfare-maximizing outcome, which transforms the allocation and pricing problem into a bilevel integer program for general valuations.
Considering that budget constraints are a reality in many markets, this casts doubt on whether simple market institutions based on polynomial-time algorithms (e.g., simple ascending auctions as used in selling spectrum nowadays) can find a welfare-maximizing outcome even if bidders were price-takers.
General preferences, as allowed in combinatorial exchanges, might be too much to ask for. A natural question is whether we can at least hope for core-stable and welfare-maximizing outcomes in markets where bidders have unit-demand valuations and seek to maximize payoff subject to budget constraints.
\subsection{An Illustrative Example}
In this example, we illustrate how financial constraints affect the set of welfare-maximizing, core-stable outcomes of an economy. We introduce a simple market with two buyers and two sellers (see Figure \ref{fig:example}), in which the welfare-maximizing outcome cannot be core-stable. This means that there is no anonymous and linear price vector, for which the welfare-maximizing allocation (ignoring budget constraints) would be such that no pair of buyers and sellers would want to deviate. In our example, buyer $B_1$ has a value of \$4 for the item of seller $S_1$ and of \$10 for the item of seller $S_2$. Buyer $B_2$ has a value of \$3 for the item of seller $S_1$ and of \$6 for the item of seller $S_2$. The budget constraints of $B_1, B_2$ are $b_1 = 2$ and $b_2 = 4$ respectively. Both sellers have a reserve price of zero, namely $r_1 = r_2 = 0$.
\hide{
\begin{figure}[ht]
\centering
\begin{tikzpicture}[node distance={20mm}, main/.style = {draw, circle}]
\node[main] (1) {$S_1/0$};
\node[main] (2) [below of=1] {$S_2/0$};
\node[main] (3) [right of=1] {$B_1/2$};
\node[main] (4) [below of=3] {$B_2/4$};
\draw (1) -- node[above] {4} (3);
\draw (1) -- node[above, left, pos=0.2] {3} (4);
\draw (2) -- node[below, right, pos=0.1] {10} (3);
\draw (2) -- node[below] {6} (4);d
\end{tikzpicture}
\caption{Assignment market with two buyers and two sellers of a single unit.}
\label{fig:example}
\end{figure}
}
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{illustrative_ex.jpg}
\captionsetup{justification=centering, textfont=small, format=hang}
\caption{Assignment market with two buyers and two sellers.}
\label{fig:example}
\end{figure}
There are two possible outcomes: In outcome 1, indicated with a grey solid line in Figure \ref{fig:example}, buyer $B_1$ is assigned to seller $S_1$ and $B_2$ to $S_2$ with an overall welfare of \$10. If seller $S_1$ charges \$1 and $S_2$ charges \$2.5, then the utility of $B_1$ is $4-1=3$ and that of buyer $B_2$ is $6-2.5=3.5$. $B_2$ would not want to get the item of $S_1$ at this price. In contrast, $B_1$ would prefer to acquire the item of $S_2$, but this seller prefers $B_2$ from whom she gets \$2.5.
In outcome 2, indicated by a dotted line in Figure \ref{fig:example}, buyer $B_1$ is assigned to seller $S_2$ and $B_2$ to $S_1$ with an overall welfare of \$13. Unfortunately, there is no price vector that makes this outcome core-stable. Suppose prices for the items of sellers $S_1, S_2$ were set to \$3 and \$2, respectively. At these prices, $B_1$ can no longer afford the item of $S_1$ but achieves a payoff of $10-2=8$ from buying the item of $S_2$. Buyer $B_2$ has a payoff of $3-3=0$ from buying the item of $S_1$, and profits from switching to $S_2$, with corresponding payoff $6-2=4$. $S_2$ cannot charge more than \$2 because this would otherwise exceed the budget of buyer $B_1$. As a result $B_2$ and $S_2$ always strictly prefer being assigned to one another. Thus, the welfare-maximizing outcome is not stable, and the auctioneer needs to consider the budgets when computing the welfare-maximizing and core-stable outcome.
\subsection{Contributions}
We study the properties that can be achieved in assignment markets with unit-demand bidders who aim to maximize their payoff but have hard budget constraints as illustrated in the previous example.
The aim of this work is to compute welfare-maximizing and core-stable outcomes in the presence of such financial constraints. If we cannot achieve incentive-compatibility and core-stability with such simple valuations, then markets with more complex preferences will not satisfy these properties either.
We first introduce and analyze an iterative process that always finds a core-stable outcome using only demand queries based on prices and no direct access to valuations. In contrast to \citet{aggarwal}, where bidders' valuations are directly queried, a distinguishing part of our algorithm is that it relies exclusively on demand queries and provides a natural generalization of the auction by \citet{demange1986multi}. Moreover, we place emphasis on the question of when we can expect the outcome of the auction to not only lie in the core, but also maximize welfare among all core allocations, as welfare maximization is typically a central goal in market design. During the auction process, the auctioneer may sometimes have to make decisions on which buyer to exclude from certain items in subsequent rounds. One of our main results is that, for any market instance, if the auctioneer is able to guess the right decisions throughout the auction, the auction terminates in a welfare-maximizing core allocation. In particular, if the auctioneer does not have to make any such decisions - which is trivial to check ex-post - our result implies that the auction always finds a welfare-maximizing core outcome. Unfortunately, we do not know ex-ante whether the condition holds, and if it does not, welfare can be arbitrarily low.
Now, it is important to understand whether we can hope for an incentive-compatible and welfare-maximizing core-selecting mechanism without additional conditions beyond the already strong restriction to unit-demand valuations.
Unfortunately, the answer to this question is negative.
In a novel result, we show that no auction mechanism for the assignment market can be incentive-compatible and core-stable when buyers face budget constraints. If we give up on incentive-compatibility and assume full access to the true valuations (i.e., via value queries) and buyer budgets, we can compute a core-stable and welfare-maximizing outcome.
One might expect that the problem admits a polynomial time solution, since, without the presence of budget constraints, the problem lies in complexity class P. Unfortunately, a main finding of this paper shows that determining core-stable, welfare-maximizing outcomes with financially constrained buyers is an NP-complete optimization problem, even for the assignment market with full access to valuations and budgets. This means that the existence of budget constraints renders the problem of determining welfare-maximizing, core-stable outcomes NP-hard. The hardness proof requires an involved reduction from the maximum independent set problem. One aspect that makes the reduction difficult is that prices must be treated as continuous variables.
These results show that, even for the simplest type of multi-object markets, those with only unit-demand bidders, we cannot expect core-stable and welfare-maximizing outcomes unless additional strong conditions are satisfied that are typically unknown ex-ante.
\section{Related Literature}\label{sec:related_literature}
Two-sided matching markets describe markets where buyers want to win at most one item (also known as the unit-demand model) and sellers sell only one item. Buyers and sellers are disjoint sets of agents and each buyer forms exclusive relationships with a seller. Such markets are central to the economic sciences. The well-known marriage model of \citet{gale1962college} assumes ordinal preferences and non-transferable utility. \citet{shapley1971assignment} analyzed such markets with quasilinear utility functions and showed that the core of this game is nonempty and encompasses all competitive equilibria. Under the quasilinear utility model, buyers maximize value minus price, while sellers maximize price minus cost.
While their setting assumes access to all valuations, \citet{demange1986multi} showed that an ascending auction with only demand queries results in a competitive equilibrium at the lowest possible price, i.e. at the competitive equilibrium price vector that is optimal for buyers. In such an auction, the auctioneer specifies a price vector (the demand query) in each round, and buyers respond with their demand set, i.e. the set of goods that maximize payoff at the prices.
The housing market of \citet{shapley1974cores} is an example of a market without transferable utility or monetary funds.
In this market, each agent is endowed with a good or house, and each agent is interested in one house only.
The goal of this market is to redistribute ownership of the houses in accordance with the ordinal preferences of the agents.
In such housing markets, the core set is nonempty. If no agent is indifferent between any two houses, then the economy has a unique competitive allocation, which is also the unique strict core allocation. An allocation belongs to the \textit{strong} core, if no coalition of buyers and sellers can make all members as well off and at least one member better off by trading items among themselves. An allocation belongs to the \textit{weak} core if no coalition can strictly improve the utilities of all its members by redistributing items amongst themselves.
\citet{quinzii1984core} generalizes the model of \citet{shapley1974cores} to one with multiple agents with unit-demand and transferable but non-quasilinear utility. Buyers derive utility from at most one good and a transfer of money. Sellers aim at obtaining the highest possible price above a reservation level.
She proved the general existence of the core in her model, and its equivalence to competitive equilibria. In a closely related model, \citet{gale1984equilibrium} shows that a competitive equilibrium always exists.
These models allow for budgets, but differ from hard budget constraints as examined in our work, where bidders are not permitted to spend more than a certain amount of money.
\citet{alaei2016competitive} provide a structural characterization of utilities in competitive equilibria and a mechanism that is group-strategyproof. These models allow for utility functions that are not necessarily quasilinear, but in which small price changes do not lead to a discontinuous change of the bidders' utilities, as is the case with hard budget constraints.
Closer to this paper is another line of research that focuses on assignment markets where buyers maximize payoff subject to a \textit{hard} budget constraint. \citet{aggarwal} show that an extension of the Hungarian algorithm is incentive-compatible and bidder-optimal if the auction is in general position, a rather specific condition that is usually unknown ex-ante and hard to check. Typically, ascending auctions use only demand queries, namely the auctioneer specifies a price and the bidders respond with their demand set. The auction additionally requires value and budget queries, thereby asking for the value of a specific good to a bidder and their budget during the auction. This is quite different from the ascending auctions based on price-based demand queries only, as described in \citet{demange1986multi} and the subsequent literature or compared to ascending auctions used in the field.
\citet{fujishige2007twosided} consider two-sided markets with budget-constrained bidders whose valuation functions are more general than unit-demand. Their results imply that in the unit-demand setting, there always exists a core allocation. These prior results aim exclusively for core-stability, but do not attempt to maximize welfare as done in this work.
In contrast to competitive equilibrium theory, \citet{henzinger2015truthful} and its predecessor \citet{dutting2013sponsored} do not aim for core-stability. Note that with hard budget constraints, core-stability does not imply envy-freeness.
Consider for example a market with two buyers and one seller selling a single good. If both buyers have the same budget and the same valuation for the good, which exceeds their budget, the only possibility such that no bidder envies the other one is that the good remains unsold. Such an outcome is clearly not in the core, because there is a coalition of buyer and seller who want to deviate. Note that such types of envy cannot arise without binding budget constraints, because if the price is at the value of two bidders, they are indifferent between getting the object or the empty set. Whether envy-freeness and bidder-optimality or core-stability should be preferred depends on the market under consideration. While their model appears to be reasonable in cases where all items are sold by one large seller, like ad auctions, it may not seem reasonable for individual sellers to participate in an auction where items remain unsold for the sake of envy-freeness.
Finally, \citet{laan2016ascending} propose an ascending auction for the assignment market that results in an \emph{equilibrium under allotment}, which is in general not a core-stable outcome.
Core-stability and incentive-compatibility are arguably the most important axioms in market design. Whether one can design assignment markets that satisfy these axioms in the presence of hard budget constraints has not yet been answered.
We show that, without strong additional assumptions, this is not possible. Importantly, even with access to all valuations and budgets, the problem is computationally intractable for large market instances.
\section{Preliminaries} \label{sec:preliminaries}
A two-sided matching market $M=(\mcal{B}, \mcal{S}, v, b, r)$ consists of two disjoint sets of agents $\mcal{B}$ and $\mcal{S}$, representing bidders $i \in \mcal{B} = \{1,2,\dots, n\}$ and goods $j \in \mcal{S} = \{1,2,\dots, m\} \cup \{0\}$. We identify good $j$ with the seller owning it, i.e. each seller owns one good. The $0$-item is a dummy item and does not have value to any bidder, meaning that receiving good $0$ corresponds to receiving no real good. Additionally, the market is defined by each bidder $i$'s valuation $v_i: \mcal{S} \rightarrow \Z_{\geq 0}$ with $v_i(0)=0$ and budget $b^i \in \Z_{> 0}$, as well as each seller $j$'s reserve value (ask price) $r_j \in \Z_{\geq 0}$.
A \emph{price vector} is a vector $p \in \R^{\mcal{S}}$ with $p(0) = 0$, assigning price $p(j)$ to every good $j$. Bidders have quasilinear utilities, so if bidder $i$ receives item $j$ under prices $p$, their utility is $\pi_i(j,p) = v_i(j)-p(j)$, if $p(j) \leq b^i$, and $\pi_i(j,p) = -\infty$, otherwise. An \emph{assignment} is represented as a map $\mu: \mcal{B} \rightarrow \mcal{S}$ from bidders to the items they receive, where $|\mu^{-1}(\{j\})| \leq 1$ for all $j \neq 0$, so only the dummy good may be assigned to more than one bidder. An \emph{outcome} is a pair $(\mu,p)$, where $\mu$ is an assignment and $p$ is a price vector, such that no budget constraint is violated, i.e. $p(\mu(i)) \leq b^i$ for all $i \in \mcal{B}$ and only sold items may have a positive price: $p(j) > 0$ implies that $|\mu^{-1}(\{j\})|=1$. For our iterative auction in Section \ref{sec:dgs_auction}, for the sake of simplicity, we assume all reserve prices to be equal to $0$. The results can be easily generalized by starting the auction at the reserve prices instead of at $0$.
In neoclassical economics, a (Benthamite) social welfare function is defined as the sum of cardinally measurable values $v_i$ of all market participants. An optimal allocation of resources is one which maximizes the social welfare in this sense:
$$\max \left\{\sum_{i=1}^n v_i(\mu(i)) \,:\, \mu \text{ is an assignment}\right\}$$
This can be written in LP-form as
\begin{align}\label{eqn:assignment_lp}
\max &\, \sum_{i=1}^n \sum_{j=1}^m x_{ij}v_i(j) \\
\text{s.t.} &\, \sum_{i=1}^n x_{ij} \leq 1 \, \forall j=1,\dots,m &\, (p_j) \nonumber \\
&\, \sum_{j=1}^m x_{ij} \leq 1 \, \forall i=1,\dots,n &\,(\pi_i) \nonumber \\
&\, x \geq 0 \nonumber
\end{align}
where the variables in parentheses denote the corresponding duals. This assignment problem is well-known to have an integral optimal solution and can be solved in $O(n^4)$ \citep{kuhn1955hungarian}. An integral solution $x$ corresponds to an assignment $\mu$ via $x_{ij} = 1 \Leftrightarrow \mu(i) = j$.
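For illustration, a welfare-maximizing assignment (ignoring budgets) can be computed with any standard assignment solver. The following Python sketch uses SciPy's Hungarian-type solver on a small value matrix; the toy values below are purely illustrative.
\begin{verbatim}
# Sketch: computing a welfare-maximizing assignment (no budget constraints)
# with the Hungarian method as implemented in SciPy.  Toy values only.
import numpy as np
from scipy.optimize import linear_sum_assignment

v = np.array([[4.0, 10.0],     # v[i, j]: value of bidder i for good j
              [3.0,  6.0]])

rows, cols = linear_sum_assignment(-v)   # minimizing -v maximizes welfare
mu = dict(zip(rows.tolist(), cols.tolist()))
print(mu, v[rows, cols].sum())           # {0: 1, 1: 0} 13.0
\end{verbatim}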
This notion of utilitarian welfare maximization, i.e. maximizing the sum of participants' utilities, is widely used in auction theory and competitive equilibrium theory.
New welfare economics, in the tradition of Pareto, rejects the idea of interpersonal utility comparisons and stipulates ordinal preferences. Pareto efficiency or Pareto optimality is the key design desideratum in this literature. A market outcome is Pareto efficient, if no market participant can be better off without making at least one other participant worse off. With cardinal utilities and interpersonal comparisons a welfare-maximizing outcome is also Pareto efficient. This is because any Pareto-improvement would increase welfare, which is not possible by definition of a welfare-maximizing allocation. It has also been shown that the converse is true \citep{negishi1960welfare}. Another design desideratum is that of core-stability.
\begin{comment}\begin{definition}[core outcome]
A \emph{core outcome} is an allocation and prices $(\mu,p)$, such that
\begin{enumerate}
\item if $|\mu^{-1}(\{j\})| = 0$, i.e., item $j$ is not assigned to any bidder, then $p(j) = 0$ and \label{def:core_1}
\item for all bidders $i$ it holds that, if $\pi(j,p) > \pi(\mu(i),p)$, then $p(j) \geq p(\mu(i))$. \label{def:core_2}
\end{enumerate}
\end{definition}
\end{comment}
\begin{definition}[Core outcome]
Let $(\mu,p)$ be an outcome. A bidder-seller pair $(i,j) \in \mcal{B} \times \mcal{S}$ is called a \emph{blocking pair}, if $\pi_i(j,p) > \pi_i(\mu(i),p)$ and $p(j) < b^i$. $(\mu,p)$ is a \emph{core outcome}, if there are no blocking pairs. We also say that $(\mu,p)$ is \emph{core-stable} in this case.
\end{definition}
The idea of a blocking pair $(i,j)$ is that both bidder $i$ and seller $j$ would strictly increase their utility, if $i$ received item $j$ instead of $\mu(i)$: if $i$ pays $p(j)+\varepsilon$ for item $j$, then still $\pi_i(j,p)-\varepsilon > \pi_i(\mu(i),p)$, and at the same time, the profit of seller $j$ is increased by $\varepsilon$.
In the literature, a core outcome is often alternatively defined in the following way: an outcome $(\mu,p)$ is in the core if there are no subsets $\mcal{B}' \subseteq \mcal{B}$ and $\mcal{S}' \subseteq \mcal{S}$ and an outcome $(\mu', p')$ on $\mcal{B}' \times \mcal{S}'$ such that $\pi_i(\mu'(i),p') > \pi_i(\mu(i),p)$ for all $i \in \mcal{B}'$ and $p'(j) > p(j)$ for all $j \in \mcal{S}'$ (see for example \citep{zhou2017multiitem}). These definitions can easily be shown to be equivalent: first suppose that such subsets $\mcal{B}'$ and $\mcal{S}'$ do exist. Then it is easy to see that both sets are nonempty. In particular, let $i \in \mcal{B}'$ and $j = \mu'(i)$. Then $p'(j) > p(j)$, so $p(j) < b^i$. Furthermore, we have $\pi_i(j,p) > \pi_i(j,p') > \pi_i(\mu(i),p)$, so $(i,j)$ is a blocking pair. On the other hand, if $(i,j)$ is a blocking pair, then as in the above paragraph, we can set $p'(j) = p(j)+\varepsilon$ and get $\pi_i(j,p') > \pi_i(\mu(i),p)$ and $p'(j) > p(j)$. Thus we can choose $\mcal{B}' = \{i\}$, $\mcal{S}' = \{j\}$, $\mu'(i) = j$ and $p'(j)=p(j)+\varepsilon$ in the alternative definition.
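The definition above translates directly into a finite check for blocking pairs. The following Python sketch, with an illustrative data layout that is not part of the paper, tests whether a given outcome $(\mu,p)$ is core-stable under the utilities $\pi_i(j,p)$ defined earlier.
\begin{verbatim}
# Sketch: core-stability check by scanning for blocking pairs, following the
# definition above.  Data layout (dicts for v, b, mu, p; None = dummy good)
# is an illustrative assumption.

def price(j, p):
    return 0.0 if j is None else p[j]

def payoff(i, j, v, b, p):
    value = 0.0 if j is None else v[i][j]
    return value - price(j, p) if price(j, p) <= b[i] else float("-inf")

def is_core_stable(mu, p, v, b):
    goods = list(p) + [None]                       # real goods plus dummy
    return not any(payoff(i, j, v, b, p) > payoff(i, mu[i], v, b, p)
                   and price(j, p) < b[i]
                   for i in v for j in goods)

# Toy instance: one good A, two bidders with values 6 and 10 and budgets 1.
v = {1: {"A": 6.0}, 2: {"A": 10.0}}
b = {1: 1.0, 2: 1.0}
print(is_core_stable({1: "A", 2: None}, {"A": 1.0}, v, b))   # True
print(is_core_stable({1: None, 2: "A"}, {"A": 0.5}, v, b))   # False: (1, A) blocks
\end{verbatim}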
We focus on the problem of finding welfare-maximizing core outcomes:
\begin{align}\label{eqn:welfare_max}
\max \left\{ \sum_{i=1}^n v_i(\mu(i))\,:\, (\mu,p) \text{ is a core outcome}\right\}.
\end{align}
If budgets are not binding, i.e., $b^i > v_i(j)$ for all bidders $i$ and all goods $j$, core-stability coincides with the definition of a competitive equilibrium. For this, let us first define the \emph{demand set} of bidder $i$, which consists of the most preferred among all affordable items at prices $p$:
\[
D_i(p) = \left\{j\,:\, p(j) \leq b^i \, \wedge \, \pi_i(j,p) \geq \pi_i(k,p) \, \forall k \text{ with } p(k) \leq b^i\right\}.
\]
\begin{definition}[Competitive equilibrium]\label{def:ce}
An outcome $(\mu,p)$ is a \emph{competitive equilibrium}, if $\mu(i) \in D_i(p)$ for all bidders $i$.
\end{definition}
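A corresponding sketch of the demand set and the competitive-equilibrium test, again with an illustrative data layout rather than the paper's notation, is given below; it is consistent with the example following the next proposition, where a core-stable outcome fails to be a competitive equilibrium.
\begin{verbatim}
# Sketch: demand set D_i(p) and the competitive-equilibrium condition
# mu(i) in D_i(p).  Same illustrative data layout as in the previous sketch.

def demand_set(i, p, v, b):
    goods = [j for j in list(p) + [None]
             if (0.0 if j is None else p[j]) <= b[i]]   # affordable goods
    def util(j):
        return 0.0 if j is None else v[i][j] - p[j]
    best = max(util(j) for j in goods)
    return {j for j in goods if util(j) == best}

def is_competitive_equilibrium(mu, p, v, b):
    return all(mu[i] in demand_set(i, p, v, b) for i in v)

v = {1: {"A": 6.0}, 2: {"A": 10.0}}
b = {1: 1.0, 2: 1.0}
# Core-stable outcome from the previous sketch, but not a competitive
# equilibrium: the losing bidder's demand set is {A}.
print(is_competitive_equilibrium({1: "A", 2: None}, {"A": 1.0}, v, b))  # False
\end{verbatim}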
The next proposition summarizes well-known equivalences of the different notions for markets where budgets are not binding.
\begin{proposition}[\citet{bikhchandani1997competitive}]\label{prop:unbinding_equivalences}
Suppose that $b^i > v_i(j)$ for all $i$ and $j$, and let $(\mu,p)$ be an outcome. Then the following statements are equivalent.
\begin{enumerate}
\item $(\mu,p)$ is a core outcome.
\item $(\mu,p)$ is a competitive equilibrium.
\item The variables defined by $x_{ij} = 1 \Leftrightarrow \mu(i) = j$ solve the linear program (\ref{eqn:assignment_lp}) and $p_j = p(j)$ is a corresponding dual solution.
\item $(\mu,p)$ is a welfare-maximizing core outcome.
\end{enumerate}
\end{proposition}
This equivalence no longer remains true if bidders have binding budgets: in general, a core outcome need not be a competitive equilibrium, and different core outcomes might generate very different welfare.
\begin{example}
For a very simple example, consider two bidders $1, 2$ and one item $A$. Suppose that $v_1(A) = 6$ and $v_2(A) = 10$. Both bidders have the same budget $b^1 = b^2 = 1$. It is easy to see that there are two core outcomes: either bidder $1$ or $2$ receives $A$ for a price of $1$, while the other bidder does not receive an item. Neither core outcome is a competitive equilibrium, since the bidder $i$ who does not receive $A$ is not assigned an item from their demand set $D_i(p) = \{A\}$. This bidder thus envies the other. Moreover, one core outcome generates a welfare of $6$, while the other generates a welfare of $10$. Ignoring budgets, the above LP-formulation would assign item $A$ to bidder $2$ at a price $p(A) \in [6,10]$; no such price is feasible when considering the budget constraints.
\end{example}
Besides, as we show, finding a welfare-maximizing core outcome is in general NP-complete, so we cannot expect a simple LP-formulation as above to exist.
\begin{comment}
In the following, we explain to what extent Proposition \ref{prop:unbinding_equivalences} can be extended to the case with binding buyer budgets. To begin with, we relax the requirement on demand sets containing solely goods maximizing utility among all affordable goods. We define the \emph{relaxed demand set} of bidder $i$ to be
\[
D_i^r(p) = \left\{j\,:\, p(j) \leq b^i \, \wedge \, \pi_i(j,p) \geq \pi_i(k,p) \, \forall k \text{ with } p(k) < b^i\right\}.
\]
The extended demand set contains all affordable goods whose utility is not less than any good that costs \emph{strictly} less than $b^i$. The idea behind this is the following: if bidder $i$ receives a non-optimal good $j \in D_i^r(p) \setminus D_i(p)$, then affordable every good $k$ they would prefer costs $b^i$, so $p(k) \geq p(j)$ and Property \ref{def:core_2} of Definition \ref{def:ce} remains true. We define a relaxed economic equilibrium in analogy to Definition \ref{def:ce}.
\begin{definition}[Relaxed competitive equilibrium]\label{def:rce}
A \emph{relaxed competitive equilibrium} is an outcome $(\mu,p)$, such that
\begin{enumerate}
\item if $|\mu^{-1}(\{j\})| = 0$, i.e., item $j$ is not assigned to any bidder, then $p(j) = 0$ and
\item $\mu(i) \in D_i^r(p)$ for all bidders $i$.
\end{enumerate}
\end{definition}
We provide some equivalent characterizations of core outcomes, similar to Proposition \ref{prop:unbinding_equivalences}.
\begin{proposition}\label{prop:core_char}
Let arbitrary valuations $v_i$ be given, and let $(\mu,p)$ be an outcome. Then the following statements are equivalent.
\begin{enumerate}
\item $(\mu,p)$ is a core outcome.
\item $(\mu,p)$ is a relaxed competitive equilibrium.
\item $(\mu,p)$ is a competitive equilibrium with respect to valuations $\tilde v_i$ with
\[
\tilde v_i(j) = \begin{cases}
0, \text{ if } p(j) > b^i \\
v_i(j), \text{ if } p(j) < b^i \\
v_i(j) = \min\left(v_i(j),\max_{k:\, p(k) < b^i} \pi(k,p(k))+b^i\right), \text{ if } p(j) = b^i
\end{cases}
\]
and budgets $\tilde b^i = +\infty$.
\end{enumerate}
\end{proposition}
\mb{Should we say something about existence here? In two-sided matching there is always a CE, but we know that the core can be empty if a seller can own two goods.}
There are two key differences between Propositions \ref{prop:unbinding_equivalences} and \ref{prop:core_char}: first, when budgets are binding, there may be different core outcomes producing different welfares. Thus, finding a welfare-maximizing core outcome might be significantly harder than finding an arbitrary one. Second, the valuations $\tilde v_i$ depend on the price $p$. Hence, the LP-formulation (\ref{eqn:assignment_lp}) does not directly allow us to compute a core outcome - in some sense, we would have to solve a fixed-point problem: find prices $p$, such that $p$ are duals to problem (\ref{eqn:assignment_lp}) with respect to the $\tilde v^i$.
\end{comment}
Note that efficient algorithms for determining core outcomes under budget constraints have been discussed in the literature. However, desirable properties like bidder-optimality and incentive-compatibility are only guaranteed if additional assumptions on the bidders' preferences are made. \citet{aggarwal} introduced the notion of \emph{general position}, a sufficient condition for ascending auctions to indeed find the welfare-maximizing core-stable outcome. As this condition has received considerable attention in the literature, we provide a brief discussion:
\begin{definition}[\citet{aggarwal}]\label{def:genpos}
Consider a directed bipartite graph with edges between bidders $\mcal{B}$ and goods $\mcal{S}$ (including dummy good $0$): For $i \in \mcal{B}$ and $j \in \mcal{S}$, there is a
\begin{itemize}
\item forward-edge from $i$ to $j$ with weight $-v_i(j)$
\item backward-edge from $j$ to $i$ with weight $v_i(j)$
\item maximum-price edge from $i$ to $j$ with weight $b^i - v_i(j)$
\item terminal edge from $i$ to the dummy good $0$ with weight $0$.
\end{itemize}
The auction is in \emph{general position} if, for every bidder $i$, there are no two alternating walks (each following forward and backward edges alternately and ending with a distinct maximum-price or terminal edge) that have the same total weight.
\end{definition}
\begin{example}
Consider an auction with two bidders $1$ and $2$ with $b^1=b^2$. The number of goods and bidders' valuations may be chosen arbitrarily. Assume $j \in \mcal{S}$ is any good. Consider the following path starting from bidder $1$: $1 \rightarrow j \rightarrow 2 \rightarrow j$, where the last edge is a maximum-price edge, with total weight $-v_1(j) + v_2(j) + (b^2-v_2(j)) = b^2-v_1(j)$. Now consider the path $1 \rightarrow j$, where the only edge is a maximum-price edge, with weight $b^1-v_1(j)$. Since $b^1 = b^2$, the total weight of both paths is equal, so the auction is not in general position.
\end{example}
As the example shows, the general position condition implies that in an ascending auction, no two bidders may reach their budget limits at the same time. \citet{henzinger2015truthful} claim the general position condition is rather restrictive, as it excludes, for instance, symmetric bidders. They additionally show that no polynomial-time algorithm can determine whether a set of valuations is in general position.
The general position condition is sufficient but not necessary for the existence of a unique bidder-optimal stable matching \citep{aggarwal}, which is thus also welfare-maximizing by our results below. As we will see, there are valuations not in general position, but where a welfare-maximizing core allocation can still be computed efficiently with our auction.
Let us now introduce an iterative auction that always finds a core-stable outcome in markets with budget constrained buyers and, if a simple ex-post condition is satisfied, maximizes welfare among all core outcomes.
\section{An Iterative Auction} \label{sec:dgs_auction}
Our auction is based on the well-known auction by \citet{demange1986multi} (referred to as the DGS auction from now on), which implements the Hungarian algorithm. In contrast to \citet{aggarwal}, where the underlying assumption is that bidders report their valuations and budgets to the auctioneer, in our auction they only have to report their demand sets at certain prices, similar to other ascending auctions \citep{mishra2007ascending}. We will provide conditions under which reporting demand sets truthfully is incentive-compatible. Thus, we provide a natural generalization of the DGS auction to markets where bidders have binding budget constraints. The simple ascending nature of our auction also naturally motivates an ex-post optimality condition for the returned allocation. Without loss of generality, we assume $r_j = 0$ throughout this section.
In the auction process, we may need to ``forbid'' some bidder to demand a certain item. We model this by introducing subsets $R_1, \dots, R_n \subseteq \mcal{S}$ of goods for every bidder and define the \emph{restricted demand set} to be
\[
D_i(p,R_i) = \left\{j \in R_i\,:\, p(j) \leq b^i \, \wedge \pi_i(j,p) \geq \pi_i(k,p) \, \forall k\in R_i \text{ with } p(k) \leq b^i\right\}.
\]
Note that our definition of the restricted demand set coincides with the definition of demand sets by \citet{laan2016ascending}. The set consists of all affordable items that generate the highest utility among all items in $R_i$. We introduce the well-known notions of over- and underdemanded sets \citep{demange1986multi,mishra2006oudemand}, adjusted to our notion of restricted demand sets.
\begin{definition}
Let a price vector $p$ and sets $R_1,\dots,R_n \subseteq \mcal{S}$ with $R_i \neq \emptyset \,\ \forall i$ be given. A set $T \subseteq \mcal{S}$ is
\begin{itemize}
\item \emph{overdemanded}, if $0 \not\in T$ and $|\{ i \in \mcal{B} \,:\, D_i(p,R_i) \subseteq T\}| > |T|$, and
\item \emph{underdemanded}, if $p(j) > 0$ for all $j \in T$ and $|\{i \in \mcal{B} \,:\, D_i(p,R_i) \cap T \neq \emptyset\}| < |T|$.
\end{itemize}
$T$ is \emph{minimally over-/underdemanded}, if it does not contain a proper over-/underdemanded subset.
\end{definition}
Finally, we define the strict budget set of bidder $i$ by $B_i(p) = \{j \in \mcal{S}\,:\, p(j) < b^i\}$. It consists of all items with prices strictly less than the bidder's budget.
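Stated directly in code, the two conditions read as follows. This is a literal Python transcription of the definition for illustration only (demand sets are given as a dictionary mapping each bidder to their set $D_i(p,R_i)$, sets of goods are Python sets, and the dummy good is written as 0); it is not an efficient test.
\begin{lstlisting}[language=Python]
def is_overdemanded(T, demands):
    # More bidders demand only goods in T than T contains (dummy good excluded).
    return 0 not in T and sum(1 for D in demands.values() if D <= T) > len(T)

def is_underdemanded(T, demands, prices):
    # Every good in T has a positive price, yet fewer bidders demand
    # anything in T than T contains.
    return (all(prices[j] > 0 for j in T)
            and sum(1 for D in demands.values() if D & T) < len(T))
\end{lstlisting}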
\subsection{The Auction Algorithm}
Algorithm \ref{alg:auction} describes our auction. It is based on the following observation.
\begin{lemma}\label{lem:resticted_ce}
An outcome $(\mu,p)$ is in the core if and only if there are sets $R_1,\dots,R_n \subseteq \mcal{S}$ such that $B_i(p) \subseteq R_i$ and $\mu(i) \in D_i(p,R_i)$ for all $i$.
\end{lemma}
\begin{proof}
Suppose first that $(\mu,p)$ is a core outcome. Set $R_i = \{j \in \mcal{S}\,:\, p(j) < b^i\} \cup \{\mu(i)\}$, so that $B_i(p) \subseteq R_i$ by construction. Then $\mu(i) \in D_i(p,R_i)$, since otherwise there would exist an item $j$ with $p(j) < b^i$ generating a higher utility than $\mu(i)$, which would constitute a blocking pair.
Conversely, assume that there are sets $R_i$ with $B_i(p) \subseteq R_i$ and $\mu(i) \in D_i(p,R_i)$ for all $i$, and suppose there is a blocking pair $(i,j)$. Then $j$ costs strictly less than $b^i$, so $j \in B_i(p) \subseteq R_i$, and $j$ generates a higher utility than $\mu(i)$; this contradicts $\mu(i) \in D_i(p,R_i)$. Thus, $(\mu,p)$ is a core outcome.
\end{proof}
Computing a core outcome can thus also be interpreted as computing a ``competitive equilibrium'' with respect to the restricted demand sets $D_i(p,R_i)$. This is quite similar to the definition of an \emph{equilibrium under allotment} of \citet{laan2016ascending}. However, they have other requirements on the sets $R_i$, which, in general, cause their equilibria not to lie in the core.
In view of Lemma \ref{lem:resticted_ce}, the goal of our auction procedure is to determine prices $p$ together with sets $R_i$, such that there are neither over- nor underdemanded sets of items. As observed in \citet{mishra2006oudemand}, this implies existence of an assignment $\mu: \mcal{B} \rightarrow \mcal{S}$, such that every bidder receives an item in their demand set, and every item with positive price gets assigned to some bidder. The following result is due to the aforementioned work.
\begin{proposition}
\label{prop:oudem_assignment}
Suppose that with respect to the $D_i(p,R_i)$, there is no over- or underdemanded set of items. Then there is an assignment $\mu: \mcal{B} \rightarrow \mcal{S}$ such that $\mu(i) \in D_i(p,R_i)$ for all $i$, and for all $j \in \mcal{S}$ with $p(j) > 0$, there is some $i$ with $\mu(i) = j$.
\end{proposition}
Note that \citet{mishra2006oudemand} consider markets without budgets and demand sets without restrictions. However, their proof only uses combinatorial properties of the demand sets, so it can be directly adapted to our setting; we therefore omit a proof here.
\begin{algorithm}
\DontPrintSemicolon
\caption{Iterative Auction}
\label{alg:auction}
Set $p^1 = (0,\dots,0)$ and $R^1_i = \mcal{S}$ for all bidders $i$. Set $t=1$, $O^0 = \emptyset$ and $I^1 = \emptyset$.\;
Request $D_i(p^t,R^t_i)$ from all bidders. If $t > 1$ and the set
\[
I^t= \{i \in \mcal{B}\,:\, D_i(p^{t-1},R_i^{t-1}) \subseteq O^{t-1} \, \wedge \, D_i(p^{t-1},R_i^{t-1}) \setminus D_i(p^t,R_i^t) \neq \emptyset\}
\]
is nonempty, go to Step 3. Otherwise, if there is an overdemanded set, go to Step 4. Else, go to Step 5.\;
Choose a bidder $i \in I^t$ and define $J_i^t = D_i(p^{t-1},R_i^{t-1}) \setminus D_i(p^t,R_i^t)$. Set $R_i^{t+1} = R_i^t \setminus J_i^t$, $O^t = \emptyset$ and $p^{t+1} = p^{t-1}$. For all other bidders $i'$, the sets $R_{i'}^{t+1} = R_{i'}^t$ are unchanged. Set $t=t+1$ and go to Step 2.\;
Choose a minimally overdemanded set $O^t$. For all $j \in O^t$, set $p^{t+1}(j) = p^t(j)+1$. The prices for all other goods, as well as the sets $R_i^t$ remain unchanged. Set $t = t+1$ and go to Step 2.\;
Compute an assignment $\mu$, such that $\mu(i) \in D_i(p^t,R_i^t)$ for all bidders $i$ and $\{j \in \mcal{S}\,:\, p^t(j) > 0\} \subseteq \mu(\mcal{B})$, i.e., every item with positive price is assigned. Set $p = p^t$ and return $(\mu,p)$.\;
\end{algorithm}
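To make the control flow of Algorithm \ref{alg:auction} concrete, the following Python sketch plays the auction through on small instances. It is only an illustration: the function name, the data layout, and the brute-force searches for minimally overdemanded sets and for the final assignment are our own choices and stand in for the polynomial-time subroutines an actual implementation would use. In Step 3 the sketch simply picks the first bidder in $I^t$; as discussed below, different choices can lead to different core outcomes.
\begin{lstlisting}[language=Python]
from itertools import combinations, product

def iterative_auction(values, budgets, goods):
    # Brute-force sketch: values[i][j] is bidder i's value for good j,
    # budgets[i] is bidder i's budget, 0 denotes the dummy good.
    bidders = list(values)
    items = [0] + list(goods)
    p = {j: 0 for j in items}
    R = {i: set(items) for i in bidders}
    prev_p, prev_R, O_prev = None, None, set()

    def demand(i, prices, allowed):
        afford = [j for j in allowed if prices[j] <= budgets[i]]
        best = max(values[i].get(j, 0) - prices[j] for j in afford)
        return {j for j in afford if values[i].get(j, 0) - prices[j] == best}

    while True:
        D = {i: demand(i, p, R[i]) for i in bidders}
        # Step 2: compute I^t, the bidders whose previous demand lay
        # inside the raised set O_prev and shrank after the price increase.
        I, J = [], {}
        if prev_p is not None:
            for i in bidders:
                D_old = demand(i, prev_p, prev_R[i])
                if D_old <= O_prev and D_old - D[i]:
                    I.append(i)
                    J[i] = D_old - D[i]
        if I:                         # Step 3
            i = I[0]                  # any bidder in I^t may be chosen here
            R[i] = R[i] - J[i]        # forbid the goods the bidder gave up
            p = dict(prev_p)          # roll prices back to the previous round
            prev_p, prev_R, O_prev = None, None, set()
            continue
        # Step 4: raise prices on a minimally overdemanded set, if one exists.
        over = [set(T) for k in range(1, len(goods) + 1)
                for T in combinations(goods, k)
                if sum(1 for i in bidders if D[i] <= set(T)) > k]
        minimal = [T for T in over if not any(S < T for S in over)]
        if minimal:
            O_prev, prev_p = minimal[0], dict(p)
            prev_R = {i: set(R[i]) for i in bidders}
            for j in O_prev:
                p[j] += 1
            continue
        # Step 5: assign demanded goods so that every positive price is matched.
        for choice in product(*(sorted(D[i], key=str) for i in bidders)):
            real = [j for j in choice if j != 0]
            if len(real) == len(set(real)) and all(
                    j in real for j in goods if p[j] > 0):
                return dict(zip(bidders, choice)), p
        raise RuntimeError("no feasible assignment; not expected to happen")
\end{lstlisting}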
Step 3 of the auction ensures that we do not end up with underdemanded sets of items. Moreover, sets $R_i^t$ always contain at least all items that cost strictly less than the bidder's budget $b^i$. Our proof of correctness is similar to the one by \citet{laan2016ascending}: due to the budget constraints, underdemanded sets of items may appear. We show that Step 3 of the auction takes care of these sets.
\begin{lemma}\label{lem:overdem_not_underdem}
Let $O$ be minimally overdemanded and $T \subseteq O$ with $T \neq \emptyset$. Let prices $p$ and sets $R_i$ be given. Then
\[
|\{i\,:\, D_i(p,R_i) \subseteq O \,\wedge \,D_i(p,R_i) \cap T \neq \emptyset\}| > |T|.
\]
In particular, $T$ is not underdemanded.
\end{lemma}
The proof of this lemma can be found in the Appendix.
\begin{lemma}\label{lem:restr_nonempty}
For all bidders $i \in \mcal{B}$ and all iterations $t$ of the algorithm, we have that $B_i(p) \subseteq R_i^t$. In particular, since $p^t(0) = 0$, $R_i^t \neq \emptyset$.
\end{lemma}
\begin{proof}
Assume to the contrary that there is a minimal iteration $t+1$, such that a bidder $i^*$ and a good $j^*$ exist with $p^{t+1}(j^*) < b^{i^*}$, but $j^* \not\in R_{i^*}^{t+1}$. Then in iteration $t$, Step 3 was executed, since otherwise $p^t \leq p^{t+1}$ and $R_{i^*}^t = R_{i^*}^{t+1}$, so $t+1$ would not be minimal. Hence, in iteration $t$, we have $j^* \in J_{i^*}^t$ and in particular $j^* \in D_{i^*}(p^{t-1},R_{i^*}^{t-1})$. Because Step 3 is executed, we have $O^{t-1} \neq \emptyset$, so in iteration $t-1$ Step 4 was executed and $p^t(j^*) = p^{t-1}(j^*)+1 = p^{t+1}(j^*) + 1 \leq b^{i^*}$. Thus, since from iteration $t-1$ to $t$, all prices for all preferred goods of bidder $i^*$ were raised and $i^*$ can still afford $j^*$ at prices $p^t$, $j^* \in D_{i^*}(p^t, R_{i^*}^t)$, so $j^* \not\in J_{i^*}^t$. This is a contradiction.
\end{proof}
\begin{proposition}\label{prop:underdem_implies_3}
For every iteration $t$ of the auction, the following holds:
\begin{enumerate}
\item if there is a minimally underdemanded set of items $T$, then $T \subseteq O^{t-1}$ and Step 3 is executed
\item if Step 3 is executed, there is no underdemanded set of items with respect to the $D_i(p^{t+1},R_i^{t+1})$.
\end{enumerate}
\end{proposition}
\begin{proof} We prove this by induction on $t$. For $t = 1$, all prices are zero, so there clearly is no underdemanded set of items, and Step 3 is not executed.
Suppose now that $t > 1$ and that the statement is true for all $1 \leq s < t$.
First suppose that there exists an underdemanded set of items $T$. By induction, Step 4 must have been executed in iteration $t-1$, since otherwise no underdemanded set could have appeared. By the same inductive reasoning, there was no underdemanded set in iteration $t-1$. Since in iteration $t-1$ only the prices of items in $O^{t-1}$ were raised, only the demand for those items can have decreased, so $T$ must be a subset of $O^{t-1}$. By Lemma \ref{lem:overdem_not_underdem}, we have
\[
|\{i \in \mcal{B}\,:\, D_i(p^{t-1},R_i^{t-1}) \subseteq O^{t-1} \, \wedge \, D_i(p^{t-1},R_i^{t-1}) \cap T \neq \emptyset\}| > |T|.
\]
Thus, since $|\{i \in \mcal{B}\,:\, D_i(p^{t},R_i^{t}) \cap T \neq \emptyset\}| < |T|$, there must be a bidder $i^*$ with $D_{i^*}(p^{t-1},R_{i^*}^{t-1}) \subseteq O^{t-1}$ and $D_{i^*}(p^{t-1},R_{i^*}^{t-1}) \cap T \neq \emptyset$, but $D_{i^*}(p^{t},R_{i^*}^{t}) \cap T = \emptyset$. This implies that $i^* \in I^t$, so Step 3 is executed in iteration $t$.
Now suppose that Step 3 is executed in iteration $t$. Then again, in iteration $t-1$, Step 4 was executed, since otherwise we would have $O^{t-1} = \emptyset$, which implies $I^t = \emptyset$. By induction, there was no underdemanded set of items in iteration $t-1$. Note that $p^{t+1} = p^{t-1}$, so only the demand of the single bidder $i^* \in I^t$ chosen in Step 3 changes. Since $D_{i^*}(p^{t-1},R_{i^*}^{t-1}) \subseteq O^{t-1}$, we have $J_{i^*}^t \subseteq O^{t-1}$, so only the demand for items in $O^{t-1}$ can decrease. However, for $T \subseteq O^{t-1}$ we again have by Lemma \ref{lem:overdem_not_underdem} that
\[
|\{i \in \mcal{B}\,:\, D_i(p^{t-1},R_i^{t-1}) \subseteq O^{t-1} \, \wedge \, D_i(p^{t-1},R_i^{t-1}) \cap T \neq \emptyset\}| > |T|,
\]
and, since we only changed $R_{i^*}^{t-1}$, the demand for items in $T$ can at most decrease by $1$. Thus, $T$ is not underdemanded in iteration $t+1$.
\end{proof}
Employing the previous lemmata, we can proceed to prove correctness of our proposed auction.
\begin{proposition}\label{prop:alg_correctness}
The auction terminates after a finite number of iterations, and an assignment $\mu$ as described in Step 5 exists whenever this step is reached. The returned tuple $(\mu,p)$ constitutes a core outcome.
\end{proposition}
\begin{proof}
Whenever Step 3 is executed, at least one item is removed from the set $R_i^t$ of one bidder. Hence, Step 3 can only be called a finite number of times. Also, prices can only be increased a finite number of times in Step 4: once the price of a good exceeds every bidder's budget, it can no longer be contained in a minimally overdemanded set, so its price is never raised again. Thus, in some iteration $t^*$, Step 5 is executed. By Proposition \ref{prop:underdem_implies_3}, there is no underdemanded set in iteration $t^*$, because otherwise Step 3 would have been executed. Similarly, there is no overdemanded set. Finally, because of Lemma \ref{lem:restr_nonempty}, no set $R_i^{t^*}$ is empty, so by Proposition \ref{prop:oudem_assignment}, an assignment $\mu$ as required exists. By Lemma \ref{lem:resticted_ce}, $(\mu,p)$ is a core outcome.
\end{proof}
\newpage
\begin{example}
Consider the following auction with three bidders $1,2,3$ and two items $A$ and $B$.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c}
& $v_i(A)$ & $v_i(B)$ & $b^i$ \\ \hline
Bidder $i=1$ & $10$ & $0$ & $1$ \\
Bidder $i=2$ & $0$ & $10$ & $2$ \\
Bidder $i=3$ & $10$ & $10$ & $10$
\end{tabular}
\end{table}
\\
The auction proceeds as follows.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
& $p^t$& $D_1(p^t,R_1^t)$ & $D_2(p^t,R_2^t)$ & $D_3(p^t,R_3^t)$ & $R_1^t$ & $R_2^t$ & $R_3^t$ & $O^t$ & $I^t$ \\ \hline
$t=1$ & $(0,0)$ & $\{A\}$ & $\{B\}$ & $\{A,B\}$ & $\mcal{S}$& $\mcal{S}$ & $\mcal{S}$ & $\{A,B\}$ & $\emptyset$ \\
$t=2$ & $(1,1)$ & $\{A\}$ & $\{B\}$ & $\{A,B\}$ & $\mcal{S}$& $\mcal{S}$ & $\mcal{S}$ &$\{A,B\}$ & $\emptyset$ \\
$t=3$ & $(2,2)$ & $\{0\}$ & $\{B\}$ & $\{A,B\}$ &$\mcal{S}$& $\mcal{S}$ & $\mcal{S}$ & $\emptyset$ & $\{1\}$ \\
$t=4$ & $(1,1)$ & $\{0\}$ & $\{B\}$ & $\{A,B\}$ &$\{0,B\}$& $\mcal{S}$ & $\mcal{S}$ & $\emptyset$ & $\emptyset$
\end{tabular}
\end{table}
In iterations $t=1,2$, there is a unique minimally overdemanded set $O^t = \{A,B\}$, and $I^t$ is empty. Thus, Step 4 of the auction is executed and the prices for $A$ and $B$ are raised. In iteration $t=3$, the set $I^t = \{1\}$ is nonempty, which indicates that the price increase to $(2,2)$ pushed item $A$, which bidder $1$ demanded, beyond bidder $1$'s budget. Thus, we forbid bidder $1$ to receive item $A$ and reset the prices to $(1,1)$. Now, in iteration $t=4$, there is no overdemanded set and $I^t$ is empty. Thus, there exists an assignment $\mu$ with $\mu(i) \in D_i(p^4,R_i^4)$ for all $i \in \mcal{B}$, namely $\mu(1) = 0$, $\mu(2) = B$ and $\mu(3) = A$. It is easily checked that $(\mu,p)$ is indeed a core outcome.
\end{example}
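Assuming the illustrative \texttt{iterative\_auction} sketch from above, this example can be replayed as follows; the two dictionaries mirror the valuation and budget table.
\begin{lstlisting}[language=Python]
values = {1: {"A": 10, "B": 0}, 2: {"A": 0, "B": 10}, 3: {"A": 10, "B": 10}}
budgets = {1: 1, 2: 2, 3: 10}
mu, p = iterative_auction(values, budgets, goods=["A", "B"])
# Expected: mu == {1: 0, 2: 'B', 3: 'A'} and p == {0: 0, 'A': 1, 'B': 1}
\end{lstlisting}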
\begin{example}
Let us now consider an example of an auction where a non-trivial decision has to be made in Step 3.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c}
& $v_i(A)$ & $v_i(B)$ & $b^i$ \\ \hline
Bidder $i=1$ & $10$ & $0$ & $3$ \\
Bidder $i=2$ & $0$ & $11$ & $1$ \\
Bidder $i=3$ & $5$ & $3$ & $10$
\end{tabular}
\end{table}
It is easy to see that after $3$ iterations through Step 4 of our auction, we reach prices $p^4 = (3,1)$, where bidder $1$ demands $\{A\}$, bidder $2$ demands $\{B\}$ and bidder $3$ demands $\{A,B\}$. Since $\{A,B\}$ is minimally overdemanded, we execute Step 4 once again to reach $p^5=(4,2)$, where due to the budget constraints, we have $D_1(p^5,R_1^5) = \{0\}$ and $D_2(p^5,R_2^5) = \{0\}$, while bidder $3$ still demands $\{A,B\}$. Thus, Step 3 of the auction is executed with $I^5 = \{1,2\}$, so either bidder $1$ or bidder $2$ is a valid choice in Step 3. For the choice $i=1$, we have $J^5_i=\{A\}$, while for the choice $i=2$, we have $J^5_i = \{B\}$. We could thus either remove $A$ from $R^5_1$, or $B$ from $R^5_2$. Depending on our choice, we obtain two different core outcomes, both supported by the prices $p=(3,1)$: one where bidder $1$ receives nothing, bidder $2$ receives $B$ and bidder $3$ receives $A$, and one where bidder $1$ receives $A$, bidder $2$ receives nothing and bidder $3$ receives $B$. The total welfare of the former allocation is $16$, while that of the latter is $13$.
\end{example}
\subsection{Economic Properties} \label{sec:pareto_welfare}
The output produced by our iterative auction is not uniquely defined: it may depend on which bidder $i \in I^t$ is chosen whenever Step 3 is executed. Indeed, we prove the following result.
\begin{proposition}\label{prop:reachable_alloc}
Let $(\nu,q)$ be an arbitrary core outcome. Then the bidders $i \in I^t$ in Step 3 can be chosen in such a way that for the resulting outcome $(\mu,p)$ we have $p \leq q$ coefficient-wise and $\pi_i(\mu(i),p) \geq \pi_i(\nu(i),q)$ for all bidders $i$.
\end{proposition}
The proof can be found in the Appendix.
We say that a core outcome $(\mu,p)$ is \emph{Pareto optimal for the bidders} if for every core outcome $(\nu,q)$ with $\pi_i(\nu(i),q) > \pi_i(\mu(i),p)$ for some bidder $i$, there is a bidder $i'$ with $\pi_{i'}(\mu(i'),p) > \pi_{i'}(\nu(i'),q)$. Proposition \ref{prop:reachable_alloc} directly implies that for every core outcome $(\nu,q)$ which is Pareto optimal for the bidders, there is an outcome $(\mu,p)$ reachable by the auction with $\pi_i(\mu(i),p) = \pi_i(\nu(i),q)$ for all $i \in \mcal{B}$.
\citet{aggarwal} prove that their algorithm for computing a core-stable outcome always finds the bidder-optimal core outcome $(\mu,p)$, whenever the auction is in general position. Here, \emph{bidder-optimal} means that for every other core outcome $(\nu,q)$ we have that $\pi_i(\mu(i),p) \geq \pi_i(\nu(i),q)$ for all $i \in \mcal{B}$. Bidder-optimality thus implies Pareto optimality. We show a similar result for our auction: if the bidder to choose in Step 3 of our auction is always unique, our auction also finds a bidder-optimal core outcome.
\begin{corollary}
Suppose that whenever Step 3 is executed, $|I^t| = 1$, i.e., there is a unique bidder to choose, and let $(\mu,p)$ be the uniquely determined outcome of the auction. Then for any core outcome $(\nu,q)$ we have that $p \leq q$ and $\pi_i(\mu(i),p) \geq \pi_i(\nu(i),q)$ for all bidders $i$, i.e., $(\mu,p)$ is bidder-optimal.
\end{corollary}
\begin{proof}
Since $|I^t|=1$ in every iteration through Step 3, the outcome $(\mu,p)$ of the auction is unique. Proposition \ref{prop:reachable_alloc} now directly implies that for every core outcome $(\nu,q)$, we have $\pi_i(\mu(i),p) \geq \pi_i(\nu(i),q)$ and $p \leq q$.
\end{proof}
In particular, if the general position condition is satisfied, it can be shown that $I^t$ never contains more than one bidder. Thus, our auction always finds a bidder-optimal outcome like the auction by \citet{aggarwal} in this case.
\begin{proposition}\label{prop:genpos_expost}
Suppose the auction is in general position. Then in every iteration through Step 3 of our iterative auction, we have that $|I^t|=1$, and for the unique $i \in I^t$, we have $|J_i^t| = 1$.
\end{proposition}
Note that, in general, our ex-post condition that $|I^t| = 1$ whenever Step 3 is reached is less demanding than the general position condition, since we do not require $|J_i^t| = 1$, and it is easy to construct examples where $|J_i^t| > 1$ but the ex-post condition is fulfilled. While our condition is only ex-post, it is straightforward for the auctioneer to check while the auction is actually performed.
Let us now consider welfare-maximization properties of our auction. We first observe that a welfare maximizing core outcome can always be found among the ones which are Pareto optimal for the bidders.
\begin{proposition}\label{prop:welfmax_among_pareto}
Let $(\mu,p)$ and $(\nu,q)$ be core outcomes. If $\pi_i(\mu(i),p) \geq \pi_i(\nu(i),q)$ for all bidders $i$, then
\[
\sum_{i \in \mcal{B}} v_i(\mu(i)) \geq \sum_{i \in \mcal{B}} v_i(\nu(i)).
\]
\end{proposition}
The proofs of Propositions \ref{prop:genpos_expost} and \ref{prop:welfmax_among_pareto} can be found in the Appendix.
As described above, by Proposition \ref{prop:reachable_alloc}, our auction can reach any core outcome which is Pareto optimal for the bidders, and Proposition \ref{prop:welfmax_among_pareto} says that one of these must be welfare-maximizing. If, moreover, we always have $|I^t| = 1$, the outcome of our auction is unique. This proves our first main result.
\begin{theorem}
Bidders in $I^t$ in Step 3 of the auction can be chosen such that the outcome of the auction is a welfare-maximizing core outcome.
In particular, if $|I^t|=1$ whenever Step 3 is reached, the unique outcome of the auction is a welfare-maximizing core outcome.
\end{theorem}
Note that knowledge of the bidders' demand sets does not suffice to always choose the ``correct'' bidders in Step 3 and reach a welfare-maximizing outcome. Our hardness result in Section \ref{sec:value_queries} implies that even with perfect knowledge of the bidders' preferences, choosing the correct bidders in Step 3 is NP-hard. However, our simple ex-post condition $|I^t| = 1$ at least gives the auctioneer a simple certificate of optimality.
\begin{comment}
\cite{aggarwal} present a variant of the Hungarian algorithm that computes a core-stable outcome in assignment markets in polynomial time. Moreover, they show that if the values and budgets of the bidders are in \emph{general position} (which for example does not allow two bidders with equal budgets), the computed core outcome is bidder-optimal: each bidder's utility in this core outcome is at least as high as the bidder's utility in any other core outcome. Thus - by Proposition ?, the computed outcome is also welfare-maximizing. In contrast to their algorithm, ours only needs demand queries and can thus be implemented as an iterative auction with prices only.
Thus, if an auction is in general position, our iterative auction always finds a welfare maximizing outcome like the algorithm by \cite{aggarwal}. Note that the general position condition is sufficient but not necessary for this. While we also require $|I^t| = 1$, we do not need to assume that the unique bidder $i \in I^t$ only demands a single good $j$ with $p(j) = b^i$.
\end{comment}
\begin{comment}
\begin{proposition}\label{prop:pareto_opt_welfare_max}
Suppose there is a bidder-optimal core outcome $(\mu,p)$, i.e., for any other core outcome $(\nu,q)$, there holds that $\pi_i(\mu(i),p) \geq \pi_i(\nu(i),q)$ for all bidders $i$. Then $(\mu,p)$ is a welfare-maximizing core outcome.
\end{proposition}
Note that this does not contradict our NP-hardness result, since we do not assume auctions to be in general position.
\begin{theorem}\label{thm:dgs_optimal}
Consider an instance of our DGS auction where, whenever Step 3 is reached, there is a single bidder $i$ to choose from. Then the uniquely determined outcome $(\mu,p)$ of the auction is bidder-optimal and thus welfare maximizing.
\end{theorem}
Theorem \ref{thm:dgs_optimal} means that if general position holds, then a welfare-maximizing core-stable outcome is found.
\end{comment}
If the auction is in general position, then it is ex-post incentive-compatible; this follows from the original work by \citet{demange1986multi} and the paper by \citet{aggarwal}.
The question is whether this algorithm, or any other algorithm in which the bidders' preferences are not further restricted, can be incentive-compatible. Unfortunately, the answer is no: incentive-compatibility conflicts with envy-freeness, and therefore with the core definition, as we show next.
\begin{theorem}\label{thm:ic}
In assignment markets with payoff-maximizing but budget constrained bidders there is no incentive-compatible mechanism terminating in a core-stable solution for every input.
\end{theorem}
\begin{proof}
By the direct revelation principle, we may assume that bidders report their exact valuations, as well as their budgets to the auctioneer.
Consider a market with three bidders $1,2,3$ and two items $A, B$. Let $\mcal{M}((v_1,b^1),\dots,(v_3,b^3)) = (\mu,p)$ denote a mechanism, mapping the bidders' reported valuations and budgets to a core-stable outcome with respect to their reports.
We consider instances of the above described market, where all bidders have the same values for both items: $v_i(A)=v_i(B)=10$ for $i=1,2,3$. Let us consider two instances, where the bidders vary their reported budget.
\begin{enumerate}
\item If all bidders report $b^i=1$ for $i=1,2,3$, then obviously, since there are only two items, one bidder does not receive an item: for $(\mu,p) = \mcal{M}((v_1,1),(v_2,1),(v_3,1))$, there is an $i$ with $\mu(i) = 0$. Without loss of generality, we assume that $i=3$. It is easy to see that for core-stability to hold, bidders $1$ and $2$ both receive an item, and that $p(A)=p(B)=1$. Bidders $1$ and $2$ have utility $9$, while bidder $3$ has utility $0$.
\item If bidder $3$ reports $b^3 = 2$, and the other bidders report $b^1=b^2 = 1$, then clearly bidder $3$ receives an item in any core-stable outcome, and without loss of generality $\mu(3)=B$. Also the other item $A$ must necessarily be assigned to some bidder. Again, without loss of generality, we assume that $\mu(1) = A$ and $\mu(2) = 0$. It is easy to see that $p(A)$ must be equal to $1$ in a core outcome. Additionally, we must have $p(A)=p(B)$, since otherwise bidder $3$ would strictly prefer item $A$ to item $B$, which would not be envy-free. Thus, $p(A)=p(B)=1$, and bidder $3$ has a utility of $9$.
\end{enumerate}
This already shows that $\mcal{M}$ is not incentive-compatible: if all bidders' true budgets are equal to $1$ and they report truthfully, bidder $3$ has a utility of $0$. However, if bidder $3$ misreports $b^3 = 2$, they receive an item and obtain a utility of $9$. Note that $p(B)=p(A)=1$ in this case, so bidder $3$ can still afford the received item.
\end{proof}
Note that Theorem \ref{thm:ic} does not preclude an incentive-compatible and welfare-maximizing auction (that is not core-stable).
Overall, these iterative auctions require bidders to reveal that they are indifferent to not winning a good once its price equals their valuation. Only this allows the auctioneer to distinguish a bidder dropping out because the price has reached their valuation from a bidder dropping out because of their budget. In practice, bidders might not always bid the null set when the price reaches their value, even in an incentive-compatible auction, which can lead to inefficiencies in such iterative auctions.
\begin{comment}
Consider two bidders $1$ and $2$ and a single item $A$. Both bidders have budget $b^i = 1$ and valuations $v_1(A)=v_2(A) = 10$. The auctioneer decides that in case of a tie, bidder $1$ receives the item. Assuming truthfulness, our auction proceeds as follows:
\begin{enumerate}
\item At price $p(A) = 0$, bidders report $D_1(p)=D_2(p) = \{A\}$. The price for $A$ is raised.
\item At price $p(A) = 1$, bidders still report $D_1(p)=D_2(p) = \{A\}$, so the price is raised again.
\item At price $p(A) = 2$, the bidders report $D_1(p) = D_2(p) = \{\emptyset\}$ (from this we can conclude that $b^1 = b^2 = 1$). According to the rules of the auction, we decrease the price by $1$ and remove $A$ from $R_2$ (i.e., we forbid bidder $2$ to demand $A$).
\item Now, at price $p(A) = 1$, bidder $1$ demands $A$ and bidder $2$ demands $\emptyset$. This leads to the core outcome $1 \leftarrow A$, $2 \leftarrow \emptyset$ and $p(A) = 1$. Bidder $1$ has utility $9$ and bidder $2$ utility $0$ (the auctioneer does not know the exact utilities).
\end{enumerate}
Let us consider now check what happens if bidder $2$ does not report truthfully, but reports $b^2 = 2$ (or rather acts like his budget is $2$).
\begin{enumerate}
\item At price $p(A) = 0$, bidders report $D_1(p)=D_2(p) = \{A\}$. The price for $A$ is raised.
\item At price $p(A) = 1$, bidders still report $D_1(p)=D_2(p) = \{A\}$, so the price is raised again.
\item At price $p(A) = 2$, the bidders report $D_1(p) = \{\emptyset\}$ and $D_2 = \{A\}$. According to the rules of the auction, we decrease the price by $1$ and remove $A$ from $R_1$ (i.e., we forbid bidder $1$ to demand $A$).
\item Now, at price $p(A) = 1$, bidder $1$ demands $\emptyset$ and bidder $2$ demands $A$. This leads to the core outcome $1 \leftarrow \emptyset$, $2 \leftarrow A$ and $p(A) = 1$. Bidder $1$'s utility is $0$ and bidders $2$'s utility is $9$.
\end{enumerate}
One might ask why we decrease the price in Step $3$ again - from the auctioneer's point of view there is a valid assignment for $p(A) = 2$. Let us consider the following rule:
\textit{Rule I: When the auctioneer detects a tight budget, like in Steps 3 of our examples, the price is only decreased when an item is underdemanded now.}
Using \textit{Rule I}, the auction would result in $1 \leftarrow \emptyset$, $2 \leftarrow A$ and $p(A) = 2$, so since the actual budget of bidder $2$ is $1$, her utility is $-\infty$ - reporting a higher budget does thus not give a better outcome for bidder $2$. However, now it is not optimal for bidder $1$ to always bid truthfully:
Assume that now the actual budgets for bidders $1$ and $2$ are $b^1 = 2$ and $b^2 = 1$. According to \emph{Rule I}, the auction would terminate in $1 \leftarrow A$, $2 \leftarrow \emptyset$ and $p(A) = 2$ with a utility of $8$ for bidder $1$ and $0$ for bidder $2$. However, if bidder $1$ shades his budget and reports $b^1 = 1$, he would still receive the item due to the tie breaking rule: the auction would terminate in $1 \leftarrow A$, $2 \leftarrow \emptyset$ and $p(A) = 1$, where bidder $1$ has a utility of $9$. Thus, with \textit{Rule I}, bidder $1$ may have an incentive to shade his budget.
I propose the following rule which (maybe) circumvents this problem.
\textit{Rule II: After the original auction (without \textit{Rule I}) has finished, compute a valid assignment. If there are bidders $i < k$ receiving items $\mu(i)$ and $\mu(k)$, but $\mu(k) \not\in R_i$ (i.e., $i$ would prefer $\mu(k)$ but is not allowed to receive it) and $p(\mu(k)) = b^i$ (i.e., $i$ could still afford it), then $k$ has to pay $p(\mu(k))+1$.}
Let us check that bidder $2$ now does not have a reason to deviate. Assume that $b^1 = 1$ and $b^2 = 1$. As we have checked above, the original auction terminates with $2 \leftarrow \emptyset$, so bidder $2$ has a utility of $0$. Now if bidder 2 misreports $b^2 = 2$ the original auction terminates with $1 \leftarrow \emptyset$, $2 \leftarrow A$ and $p(A) = 1$. Note that at some point we detected the budget constraint of bidder $1$ and according to the auction rules, $1$ is not allowed to receive $A$, so $A \not\in R_1$. Since however $p(A) = b^1$, according to \textit{Rule II}, bidder 2 must pay $p(A) +1 = 2$ for $A$. Since his budget was $1$, his utility is now $-\infty$, so he has no incentive to not report truthfully.
Let us also check, that bidder $1$ has no incentive to deviate now: suppose that the actual budgets are $b^1 = 2$ and $b^2 = 1$. Then the original auction would terminate with $1 \leftarrow A$, $2 \leftarrow \emptyset$ and $p(A) = 1$. However, now the condition of \textit{Rule II} is not satisfied, and the price bidder $1$ has to pay is $1$. Hence, there is now reason for bidder $1$ to shade his budget.
\end{comment}
\section{A Sealed-Bid Auction}\label{sec:value_queries}
If Step 3 of the above algorithm is reached in some iteration with $|I^t| > 1$, the ascending auction with only demand queries does not necessarily find the welfare-maximizing core-stable outcome. In such a case, a combination of value and demand queries is required in order to obtain the desired assignment. To this end, for the remainder of this paper, we assume that the auctioneer has full access to all bidders' valuations and budgets.
Unfortunately, the outcome and pricing problem not only requires a different oracle, it also becomes NP-complete, as we show next. Moreover, as mentioned above, strategyproofness does not hold.
\subsection{A MILP Formulation}
First, we show that the problem belongs to the complexity class NP by modeling it as a mixed integer linear program (MILP). Once the problem is modeled as such, membership in NP follows from a polynomial-time non-deterministic algorithm that guesses the values of the integer variables and solves the resulting linear program (LP) in polynomial time. The bilinear terms present in the quadratic formulation (q-BC), namely products of continuous prices $p(j)$ and binary variables, can be linearized in a standard way to obtain the resulting MILP.
\hide{
\begin{align}
\tag{q-BC}
\begin{array}{@{}l@{\quad}l@{\qquad}l@{}r@{}}
\textrm{maximize} & \sum\limits_{i \in \mcal{B}} \pi_i + \sum\limits_{j \in \mcal{S}} \pi_j\\
\textrm{subject to} & \pi_i = \sum\limits_{j \in \mcal{S}} (v_i (j) - p(j)) m_i(j) & \forall i \in \mcal{B}&\ (1)\\
& \pi_j = \sum\limits_{i \in \mcal{B}} (p(j) - r_j ) m_i(j) & \forall j \in \mcal{S} &\ (2)\\
& \sum\limits_{j \in \mcal{S}} m_i(j) \leq 1 & \forall i \in \mcal{B} &\ (3)\\
& \sum\limits_{i \in \mcal{B}} m_i(j) \leq 1 & \forall j \in \mcal{S} &\ (4)\\
& \pi_i \geq \big( v_i (j) - p(j) \big) y_i (j) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (5)\\
& \pi_j \geq (min(v_i (j),b^i)-r_j)(1-y_i (j)) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (6)\\
& r_j m_i(j) \leq p(j) \leq min(v_i (j),b^i) m_i(j) + M(1-m_i(j)) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (7)\\
& y_i (j) \in \{0,1\} & \forall i \in \mcal{B}, j \in \mcal{S} &\ (8)\\
& m_i(j) \in \{0,1\} & \forall i \in \mcal{B}, j \in \mcal{S} &\ (9)\\
& p(j) \geq 0 & \forall j \in \mcal{S} &\ (10)\\
\end{array}\nonumber
\label{q-BC}
\end{align}
}
\begin{align}
\tag{q-BC}
\begin{array}{@{}l@{\quad}l@{\qquad}l@{}r@{}}
\textrm{maximize} & \sum\limits_{i \in \mcal{B}} \pi_i + \sum\limits_{j \in \mcal{S}} \pi_j\\
\textrm{subject to} & \pi_i = \sum\limits_{j \in \mcal{S}} (v_i (j) - p(j)) m_i(j) & \forall i \in \mcal{B}&\ (1)\\
& \pi_j = \sum\limits_{i \in \mcal{B}} (p(j) - r_j ) m_i(j) & \forall j \in \mcal{S} &\ (2)\\
& \sum\limits_{j \in \mcal{S}} m_i(j) \leq 1 & \forall i \in \mcal{B} &\ (3)\\
& \sum\limits_{i \in \mcal{B}} m_i(j) \leq 1 & \forall j \in \mcal{S} &\ (4)\\
& \pi_i \geq \big( v_i (j) - p(j) \big) \alpha_i(j) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (5)\\
& \pi_j \geq \min(v_i (j),b^i)(1-y_i (j)) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (6)\\
& b^i \geq p(j) (1-\beta_i(j)) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (7)\\
& p(j) \geq b^i \beta_i(j) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (8)\\
& (1-\alpha_i(j)) + (1-\beta_i(j)) -2 \leq 2(1- y_i (j)) + \epsilon y_i(j) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (9)\\
& r_j m_i(j) \leq p(j) \leq \min(v_i (j),b^i) m_i(j) + M(1-m_i(j)) & \forall i \in \mcal{B}, j \in \mcal{S} &\ (10)\\
& m_i(j) \in \{0,1\} & \forall i \in \mcal{B}, j \in \mcal{S} &\ (11)\\
& y_i (j) \in \{0,1\} & \forall i \in \mcal{B}, j \in \mcal{S} &\ (12)\\
& \alpha_i(j) \in \{0,1\} & \forall i \in \mcal{B}, j \in \mcal{S} &\ (13)\\
& \beta_i(j) \in \{0,1\} & \forall i \in \mcal{B}, j \in \mcal{S} &\ (14)\\
& p(j) \geq 0 & \forall j \in \mcal{S} &\ (15)\\
\end{array}\nonumber
\label{q-BC}
\end{align}
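As an illustration of the linearization step, the following Python sketch (using the PuLP modeling library, which is our choice and not part of the paper) shows how a single bilinear product $p(j)\,m_i(j)$ can be replaced by an auxiliary continuous variable with the standard big-$M$ constraints; $M$ denotes the same price bound as in constraint (10).
\begin{lstlisting}[language=Python]
from pulp import LpProblem, LpVariable, LpMaximize

M = 100  # big-M: any valid upper bound on prices, as in constraint (10)

prob = LpProblem("linearize_price_times_assignment", LpMaximize)
p_j  = LpVariable("p_j", lowBound=0, upBound=M)   # continuous price p(j)
m_ij = LpVariable("m_ij", cat="Binary")           # assignment indicator
z_ij = LpVariable("z_ij", lowBound=0, upBound=M)  # stands for p(j) * m_ij

prob += z_ij <= M * m_ij              # z_ij = 0 if the good is not assigned
prob += z_ij <= p_j                   # z_ij never exceeds the price
prob += z_ij >= p_j - M * (1 - m_ij)  # z_ij equals the price if m_ij = 1
# z_ij can now replace the products in constraints (1) and (2); the
# products involving the binaries alpha_i(j) in (5) are handled analogously.
\end{lstlisting}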
\begin{minipage}[t]{0.5\textwidth}
\centering
\begin{align*}
\intertext{\textbf{Broadcasting}}
\tmpar {\tmopout{op}{V}{P}} {Q} &\reduces \tmopout{op}{V}{(\tmpar {P} {\tmopin{op}{V}{Q}})}
\\
\tmpar {P} {\tmopout{op}{V}{Q}} &\reduces \tmopout{op}{V}{(\tmpar {\tmopin{op}{V}{P}} {Q})}
\end{align*}
\vspace{-1ex}
\end{minipage}
\qquad
\begin{minipage}[t]{0.4\textwidth}
\centering
\begin{align*}
\intertext{\textbf{Interrupt propagation}}
\tmopin{op}{V}{\tmrun M} &\reduces \tmrun {(\tmopin{op}{V}{M})}
\\
\tmopin{op}{V}{\tmpar P Q} &\reduces \tmpar {\tmopin{op}{V}{P}} {\tmopin{op}{V}{Q}}
\\
\tmopin{op}{V}{\tmopout{op'}{W}{P}} &\reduces \tmopout{op'}{W}{\tmopin{op}{V}{P}}
\end{align*}
\begin{align*}
\intertext{\quad\textbf{Evaluation context rule}}
\quad
\coopinfer{}{
P \reduces Q
}{
\F[P] \reduces \F[Q]
}
\end{align*}
\end{minipage}
\begin{align*}
\intertext{\textbf{where}\vspace{1ex}}
\text{$\F$}
\mathrel{\;{:}{:}{=}\ }& [~]
\mathrel{\;\big|\ \ } \tmpar \F Q \mathrel{\;\big|\ \ }\! \tmpar P \F
\mathrel{\;\big|\ \ } \tmopout{op}{V}{\F}
\mathrel{\;\big|\ \ } \tmopin{op}{V}{\F}
\end{align*}
}
\caption{Small-step operational semantics of parallel processes.}
\label{fig:processes}
\end{figure}
\paragraph{Individual computations}
This reduction rule states that, as processes, individual computations evolve according to the small-step
operational semantics $M \reduces N$ we defined for them in \autoref{sec:basic-calculus:semantics:computations}.
\paragraph{Signal hoisting}
This rule propagates signals out of individual computations.
It is important to note that we only hoist those signals that have propagated to the outer boundary
of a computation.
\paragraph{Broadcasting}
The broadcast rules turn outward moving signals in one process into inward moving interrupts
for the process parallel to it, while continuing to propagate the signals outwards to any
further parallel processes. The latter ensures that the semantics is compositional.
\paragraph{Interrupt propagation}
These three rules simply propagate interrupts inwards into individual computations,
into all branches of parallel compositions, and past any outward moving signals.
\paragraph{Evaluation contexts}
Analogously to the semantics of computations, the semantics of processes also includes a context rule, which allows reductions under \emph{evaluation contexts}
$\F$. Observe that compared to the evaluation contexts for computations, those for processes
do not bind variables.
\subsection{Type-and-Effect System}
Analogously to its sequential part, we also equip \lambdaAEff's parallel part with a type-and-effect system.
\paragraph{Types} The \emph{types of processes} are designed to match their parallel structure---they are given by
\[
\text{$\tyC$, $\tyD$}
\mathrel{\;{:}{:}{=}\ } \tyrun X \o \i
\,\mathrel{\;\big|\ \ }\! \typar \tyC \tyD
\]
Intuitively, $\tyrun X \o \i$ is a process type of an individual computation of type $\tycomp{X}{(\o,\i)}$, and $\typar \tyC \tyD$
is the type of the parallel composition of two processes that respectively have types $\tyC$ and $\tyD$.
\paragraph{Typing judgements}
\emph{Well-typed processes} are characterised using the judgement
$\Gamma \vdash P : \tyC$. We present the typing rules in \autoref{fig:process-typing-rules}.
While our processes are not currently higher-order, we allow
non-empty contexts $\Gamma$ to model the possibility of using libraries and top-level function definitions.
\begin{figure}[tp]
\centering
\small
\begin{mathpar}
\coopinfer{TyProc-Run}{
\Gamma \types M : \tycomp{X}{(\o,\i)}
}{
\Gamma \types \tmrun{M} : \tyrun{X}{\o}{\i}
}
\quad
\coopinfer{TyProc-Par}{
\Gamma \types P : \tyC \\
\Gamma \types Q : \tyD
}{
\Gamma \types \tmpar{P}{Q} : \typar{\tyC}{\tyD}
}
\quad
\coopinfer{TyProc-Signal}{
\op \in \mathsf{signals\text{-}of}{(\tyC)} \\\\
\Gamma \types V : A_\op \\
\Gamma \types P : \tyC
}{
\Gamma \types \tmopout{op}{V}{P} : \tyC
}
\quad
\coopinfer{TyProc-Interrupt}{
\Gamma \types V : A_\op \\
\Gamma \types P : \tyC
}{
\Gamma \types \tmopin{op}{V}{P} : \opincomp{op}{\tyC}
}
\end{mathpar}
\caption{Process typing rules.}
\label{fig:process-typing-rules}
\end{figure}
The rules \textsc{TyProc-Run} and \textsc{TyProc-Par} capture the earlier
intuition about the types of processes matching their parallel structure. The rules
\textsc{TyProc-Signal} and \textsc{TyProc-Interrupt} are similar to the corresponding rules
from \autoref{fig:computation-typing-rules}.
The \emph{signal annotations} of a process type are calculated as
\[
\mathsf{signals\text{-}of}(\tyrun{X}{\o}{\i}) ~\defeq~ \o
\qquad\qquad
\mathsf{signals\text{-}of}(\typar{\tyC}{\tyD}) ~\defeq~ \mathsf{signals\text{-}of}(\tyC) \cup \mathsf{signals\text{-}of}(\tyD)
\]
and the \emph{action of interrupts} on process types $\opincomp{op}{\tyC}$ extends the action on effect annotations as
\[
\opincomp{op}(\tyrun{X}{\o}{\i})
~\defeq~
X \att (\opincomp {op} {(\o , \i)})
\qquad\qquad
\opincomp{op}(\typar{\tyC}{\tyD})
~\defeq~
\typar{(\opincomp{op}{\tyC})}{(\opincomp{op}{\tyD})}
\]
by propagating the interrupt towards the types of individual computations.
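To make the two structural recursions above concrete, here is a small Python sketch (an illustrative rendering of our own, not part of the calculus) of process types as binary trees, with signals-of and the interrupt action computed leaf-wise; the action on effect annotations $\opincomp{op}{(\o , \i)}$, defined earlier for the sequential part, is supplied as a callback.
\begin{lstlisting}[language=Python]
from dataclasses import dataclass

@dataclass(frozen=True)
class Run:                  # process type of an individual computation
    value_type: str
    signals: frozenset      # the annotation o
    handlers: object        # the annotation i (left abstract here)

@dataclass(frozen=True)
class Par:                  # type of a parallel composition
    left: object
    right: object

def signals_of(C):
    # signals-of(Run X o i) = o;  signals-of(C || D) = union of both sides.
    if isinstance(C, Run):
        return C.signals
    return signals_of(C.left) | signals_of(C.right)

def interrupt_action(op, C, act_on_annotations):
    # op acting on C: propagate the interrupt to the leaves, where the
    # action on effect annotations (op acting on (o, i)) is the callback.
    if isinstance(C, Run):
        o, i = act_on_annotations(op, (C.signals, C.handlers))
        return Run(C.value_type, o, i)
    return Par(interrupt_action(op, C.left, act_on_annotations),
               interrupt_action(op, C.right, act_on_annotations))
\end{lstlisting}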
We then have:
\begin{lemma}
\label{lemma:signals-of-interrupt-action}
For any process type $\tyC$ and interrupt $\op$, we have that $\mathsf{signals\text{-}of}(\tyC) \order O \mathsf{signals\text{-}of}(\opincomp{op}{\tyC})$.
\end{lemma}
It is worth noting that \autoref{fig:process-typing-rules} does not include an analogue
of \textsc{TyComp-Subsume}. This is
deliberate because, as we shall see below, \emph{process types reduce}
in conjunction with the processes they are assigned to, and the outcome
is generally neither a sub- nor supertype of the original type.
\subsection{Type Safety}
\label{sec:basic-calculus:type-safety:processes}
We conclude the meta-theory of \lambdaAEff~by proving type safety
for its parallel part. Analogously to \autoref{sec:basic-calculus:type-safety},
we once again split type safety into separate proofs of \emph{progress}
and \emph{preservation}.
\subsubsection{Progress}
We characterise the \emph{result forms} of parallel processes
by defining two judgements, $\ProcResult P$ and $\ParResult P$,
and by using the judgement $\RunResult {\Psi} {M}$ from
\autoref{sec:basic-calculus:type-safety}, as follows:
\begin{mathpar}
\coopinfer{}{
\ProcResult {P}
}{
\ProcResult {\tmopout {op} V P}
}
\qquad
\coopinfer{}{
\ParResult {P}
}{
\ProcResult {P}
}
\qquad
\coopinfer{}{
\RunResult {\emptyset} {M}
}{
\ParResult {\tmrun M}
}
\qquad
\coopinfer{}{
\ParResult P \\
\ParResult Q
}{
\ParResult {\tmpar P Q}
}
\end{mathpar}
These judgements express that a process $P$ is in a (top-level)
result form $\ProcResult {P}$ when, considered as a tree, it has a shape in which
\emph{all} signals are towards the root, parallel compositions are in
the intermediate nodes, and individual computation results are at the leaves.
Importantly, the computation results $\RunResult {\emptyset} {M}$ we use here are those out of which signals have already been propagated
(see \autoref{sec:basic-calculus:type-safety:progress}).
The finality of these result forms is then captured by the next lemma.
\begin{lemma}
\label{lemma:results-are-final:processes}
Given a process $P$ such that $\ProcResult {P}$, then there exists no $Q$ such that $P \reduces Q$.
\end{lemma}
We are now ready to state and prove the \emph{progress theorem} for the parallel part of \lambdaAEff.
\begin{theorem}
Given a well-typed closed process $\types P : \tyC$,
then either (i) there exists a process $Q$ such that $P \reduces Q$, or
(ii) the process $P$ is already in a (top-level) result form, i.e., we have $\ProcResult {P}$.
\end{theorem}
\begin{proof}
The proof is standard and proceeds by induction on the derivation of $\types P : \tyC$.
In the base case, when the derivation ends with the \textsc{TyProc-Run} rule,
and $P \hspace{-0.05cm}=\hspace{-0.05cm} \tmrun {\hspace{-0.05cm}M}$, we use
\sref{Corollary}{corollary:progress}.
\end{proof}
\subsubsection{Type Preservation}
First, we note that the broadcast rules in \autoref{fig:processes} introduce new
inward propagating interrupts in their right-hand sides that originally do not exist in their left-hand sides. As a result,
compared to the types one assigns to the left-hand sides of these reduction rules, the types assigned to
their right-hand sides will need to feature corresponding type-level actions of these interrupts.
We formalise this idea using a \emph{process type reduction} relation $\tyC \tyreduces \tyD$, given by
\[
\coopinfer{}{
}{
\tyrun{X}{\o}{\i} \tyreduces \tyrun{X}{\o}{\i}
}
\quad
\coopinfer{}{
}{
X \att \opincompp {ops} {(\o , \i)} \tyreduces X \att \opincompp {ops} {(\opincomp {op} {(\o , \i)})}
}
\quad
\coopinfer{}{
\tyC \tyreduces \tyC' \\
\tyD \tyreduces \tyD'
}{
\typar{\tyC}{\tyD} \tyreduces \typar{\tyC'}{\tyD'}
}
\]
where we write $\opincompp {ops} {(\o , \i)}$ for a recursively defined \emph{action of a list of interrupts} on $(\o , \i)$,
given by
\[
\opincompp {[]} {(\o , \i)} ~\defeq~ (\o , \i)
\qquad
\opincompp {(\op :: \opsym{ops})} {(\o , \i)} ~\defeq~ \opincomp {op} {(\opincompp {ops} (\o , \i))}
\]
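In the illustrative Python rendering used earlier, this list action is just the corresponding right-nested fold of the single-interrupt action on effect annotations (shown only to mirror the recursive definition; the callback is again our assumed stand-in for $\opincomp{op}{(\o , \i)}$):
\begin{lstlisting}[language=Python]
def interrupts_action(ops, annotations, act_on_annotations):
    # ops acting on (o, i): the empty list acts as the identity, and
    # (op :: ops) acts as op applied to the result of ops acting on (o, i).
    if not ops:
        return annotations
    return act_on_annotations(
        ops[0], interrupts_action(ops[1:], annotations, act_on_annotations))
\end{lstlisting}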
Intuitively, $\tyC \tyreduces \tyD$ describes how process types reduce by being acted upon by
freshly arriving interrupts. While we define the action behaviour only at the leaves of process types (under some
enveloping sequence of actions), we can prove expected properties for arbitrary process types:
\begin{lemma}
\label{lemma:type-reduction} \mbox{}
\begin{enumerate}
\item Process types can remain unreduced, i.e., $\tyC \tyreduces \tyC$ for any process type $\tyC$.
\item Process types reduce by being acted upon, i.e., $\tyC \tyreduces \opincomp {op} \tyC$ for any type $\tyC$ and interrupt $\op$.
\item Process types can reduce under enveloping actions, i.e., $\opincomp {op} \tyC \tyreduces \opincomp {op} \tyD$ when $\tyC \tyreduces \tyD$.
\item Process type reduction can introduce signals, i.e., $\mathsf{signals\text{-}of} (\tyC) \order O \mathsf{signals\text{-}of} (\tyD)$
when $\tyC \tyreduces \tyD$.
\end{enumerate}
\end{lemma}
For the proof of \srefcase{Lemma}{lemma:type-reduction}{3}, it is important that we
introduce interrupts under an arbitrary enveloping sequence of interrupt actions,
and not simply as
$X \att {(\o , \i)} \tyreduces X \att (\opincomp {op} {(\o , \i)})$.
Further, the proof of \srefcase{Lemma}{lemma:type-reduction}{4} requires us to generalise \srefcase{Lemma}{lemma:action}{1} to lists of enveloping actions:
\begin{lemma}
\label{lemma:signal-inclusion-lists-of-interrupts}
$\pi_1\, (\opincompp {ops} {(\o,\i)}) \order O \pi_1\, (\opincompp {ops} {(\opincomp {op} {(\o,\i)})})$
\end{lemma}
As in \autoref{sec:basic-calculus:type-safety:preservation}, we again find it useful
to define a separate \emph{typing judgement for evaluation contexts}, this time written
$\Gamma \types\!\![\, \tyC \,]~ \F : \tyD$, together with an
analogue of \sref{Lemma}{lemma:eval-ctx-typing}, which we omit here. Instead, we
observe that this typing judgement is subject to process type reduction:
\begin{lemma}
\label{lemma:hoisting-and-evaluation-context-types}
Given $\Gamma \types\!\![\, \tyC \,]~ \F \hspace{-0.05cm}:\hspace{-0.05cm} \tyD$ and $\tyC \hspace{-0.05cm}\tyreduces\hspace{-0.05cm} \tyC'$, then there exists $\tyD'$ with
$\tyD \hspace{-0.05cm}\tyreduces\hspace{-0.05cm} \tyD'$ and $\Gamma \types\!\![\, \tyC' \,]~ \F \hspace{-0.05cm}:\hspace{-0.05cm} \tyD'$.
\end{lemma}
We are now ready to state and prove the \emph{type preservation theorem} for the parallel part of \lambdaAEff.
\begin{theorem}
\label{theorem:preservation:processes}
Given a well-typed process $\Gamma \types P : \tyC$, such that $P$ can reduce as
$P \reduces Q$, then there exists a process type $\tyD$, such
that the process type $\tyC$ can reduce as $\tyC \tyreduces \tyD$, and we have $\Gamma \types Q : \tyD$.
\end{theorem}
\begin{proof}
The proof proceeds by induction on the derivation of
$P \reduces Q$, using auxiliary typing inversion lemmas depending on
the structure forced upon $P$ by the last rule used in $P \reduces Q$.
For all but the broadcast and evaluation context rules, we can pick $\tyD$ to be $\tyC$ and use
\srefcase{Lemma}{lemma:type-reduction}{1}.
For the broadcast rules, we define $\tyD$ by introducing the corresponding
interrupt, and build $\tyC \tyreduces \tyD$ using the parallel composition
rule together with \srefcase{Lemma}{lemma:type-reduction}{2}.
For the evaluation context rule, we use \sref{Lemma}{lemma:hoisting-and-evaluation-context-types}
in combination with the induction hypothesis. Finally, in order to discharge effects-related side-conditions
when commuting interrupts with signals,
we use \sref{Lemma}{lemma:signals-of-interrupt-action}.
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
We have shown how to incorporate asynchrony within
algebraic effects, by decoupling
the execution of operation calls into signalling that an operation's implementation
needs to be executed, and interrupting a running computation with the operation's result,
to which it can react by installing interrupt handlers.
We have shown that our approach is flexible enough that not all signals have to have a matching
interrupt, and vice versa, allowing us to also model spontaneous behaviour, such as a user
clicking a button or the environment preempting a thread. We have formalised these ideas in a small
calculus, called \lambdaAEff, and demonstrated its flexibility on a number of examples.
We have also accompanied the paper with an \pl{Agda} formalisation and a prototype implementation of \lambdaAEff.
However, various future work directions still remain. We discuss these and related work below.
\paragraph{Asynchronous effects}
As asynchrony is desired in practice, it is no surprise that \pl{Koka} \cite{Leijen:AsyncAwait}
and \pl{Multicore OCaml} \cite{Dolan:MulticoreOCaml}, the two largest implementations of algebraic effects
and handlers, have been extended accordingly.
In \pl{Koka}, algebraic operations
reify their continuation into an explicit callback structure that is then dispatched to a primitive
such as \lstinline{setTimeout} in its \pl{Node.JS} backend. In \pl{Multicore OCaml}, one uses low-level functions
such as \lstinline{set_signal} or \lstinline{timer_create} that modify the runtime by interjecting operation
calls inside the currently running code. Both approaches thus \emph{delegate} the actual asynchrony to existing
concepts in their backends. In contrast, in \lambdaAEff, we
can express such backend features within the core calculus itself.
Further, in \lambdaAEff, we avoid having to manually use (un)masking to
disable asynchronous effects in unwanted places
in our programs, which can be a very tricky business to get right, as noted by \citet{Dolan:MulticoreOCaml}.
Instead, by design, interrupts in \lambdaAEff~\emph{never}
influence running code unless the code has an explicit interrupt handler installed,
and they \emph{always} wait for any potential handler to present itself during
execution (recall that they get discarded only when reaching a $\tmkw{return}$).
\paragraph{Message-passing}
While in this paper we have focussed on the foundations of asynchrony in the
context of algebraic effects, the ideas we propose have also many common
traits with concurrency models based on \emph{message-passing},
such as the Actor model \cite{Hewitt:Actors}, the $\pi$-calculus \cite{Milner:PiCalculus},
and the join-calculus \cite{FournetGonthier:JoinCalculus}, just to name a few.
Namely, one can view the issuing of a signal $\tmopout{op}{V}{M}$ as sending a message,
and handling an interrupt $\tmopin{op}{W}{M}$ as receiving a message, along a channel
named $\op$.
In fact, we believe that in our prototype implementation we could replace the semantics
presented in the paper with an equivalent one based on shared channels
(one for each $\op$), to which the interrupt handlers could subscribe.
Instead of propagating signals first out and then in, they would be sent directly to channels
where interrupt handlers immediately receive them, drastically reducing the cost of communication.
Comparing \lambdaAEff~to the Actor model, we see that
the $\tmrun M$ processes evolve in their own bubbles, and only communicate with other
processes via signals and interrupts, similarly to actors.
However, in contrast to messages not being required to be ordered
in the Actor model, in our $\tmpar P Q$, the process $Q$ receives
interrupts in the same order as the respective signals are issued by $P$
(and vice versa). This communication ordering could be relaxed by allowing
signals to be hoisted out of computations from deeper than just the top level.
Another difference
with actors is that \lambdaAEff-computations
can react to interrupts only sequentially, and not by dynamically creating new parallel
processes---first-class parallel processes and their dynamic creation are something
we plan to address in future work.
It is worth noting that our interrupt handlers are similar to the message receiving construct
in the $\pi$-calculus, in that they both synchronise with matching incoming
interrupts/messages. However, the two are also different, in that interrupt handlers allow
reductions to take place under them and non-matching interrupts to propagate past them.
Further, our interrupt handlers are also similar to join definitions in the join-calculus, describing
how to react when a corresponding interrupt arrives or join pattern appears, where in both cases
the reaction could involve effectful code. To this end, our interrupt handlers resemble join definitions
with simple one-channel join patterns. However, where the two constructs differ is that join definitions
also serve to define new (local) channels, similarly to the restriction operator in the $\pi$-calculus,
whereas we assume a fixed global set of channels (i.e., signal and interrupt names).
We expect that extending \lambdaAEff~with local algebraic effects
\cite{Staton:Instances,Biernacki:AbstractingAlgEffects}
could help us fill this gap between the formalisms.
\paragraph{Scoped operations}
As discussed in \autoref{sec:basic-calculus:semantics:computations}, despite their name, interrupt handlers
behave like algebraic operations, not like effect handlers. However, one should also note
that they are not conventional operations because they carry computational data that sequential
composition does not interact with, and that only gets triggered when a corresponding interrupt is received.
Such generalised operations are known in the literature as \emph{scoped operations}~\cite{Pirog:ScopedOperations},
a leading example of which is $\opsym{spawn}(M;N)$, where $M$ is the new child process to be executed and $N$ is
the current process. Crucially, the child $M$ should not directly interact with the current process. Scoped operations
achieve this behaviour by declaring $M$ to be in the scope of $\opsym{spawn}$, resulting in
$\tmlet x {\opsym{spawn}(M;N)} {K} \!\reduces\! \opsym{spawn}(M;\tmlet x N K)$, exactly
as we have for interrupt handlers.
Further recalling \autoref{sec:basic-calculus:semantics:computations}, despite their appearance,
incoming interrupts behave computationally like effect handling, not like algebraic operations.
In fact, it turns out they correspond to effect handling
induced by an instance of \emph{scoped effect handlers} \cite{Pirog:ScopedOperations}.
Compared to ordinary effect handlers, scoped effect handlers specify
how to interpret both operations and their scopes. In our setting, this
corresponds to selectively executing the handler code of interrupt handlers.
It would be interesting to extend our work both with
scoped operations having more general signatures, and with additional effect handlers
for them. The latter could make it possible to prevent
the propagation of incoming interrupts into continuations, to discard the continuation
of a cancelled remote call, and to mask or reorder interrupts
according to priority levels.
\paragraph{Modal types}
We recall that the type safety of \lambdaAEff~crucially relies on
the promise-typed variables bound by interrupt handlers not being
allowed to appear in the payloads of signals. This ensures that it is safe to propagate
signals past all enveloping interrupt handlers, and communicate their payloads
to other processes. In its essence, this is similar to the use of \emph{modal types} in distributed
\cite{Murphy:PhDThesis} and reactive programming \cite{Krishnaswami:HOFRP,Bahr:RATT}
to classify values that can travel through space and time. In our case,
it is the omission of promise types from ground types that allows us to consider
the payloads of signals and interrupts as such \emph{mobile values}.
We expect that these connections to modal types will be key for
extending \lambdaAEff~with (i)~higher-order payloads and (ii) process
creation. For (i), we want to prevent the bodies of function-typed payloads from being able
to await enveloping promise variables to be fulfilled. For (ii),
we want to do the same for dynamically created processes.
In both cases, the reason is to be able to safely propagate the corresponding
programming constructs past enveloping interrupt handlers, and eventually
hoist them out of individual computations. We believe that the more structured
treatment of contexts $\Gamma$, as studied in various modal type
systems, will hold the key for these extensions to be type safe.
\paragraph{Denotational semantics}
In this paper we study only the operational side of \lambdaAEff,
and leave developing its denotational semantics for the future.
In light of how we have motivated the \lambdaAEff-specific programming
constructs, and based on the above discussions, we expect the denotational semantics
to take the form of an algebraically natural \emph{monadic semantics}. The monad would
be given by an instance of the one studied by \citet{Pirog:ScopedOperations} for
scoped operations, quotiented by the commutativity of signals and interrupt handlers,
and extended with nondeterminism to model the different evaluation
outcomes. Incoming interrupts would be modelled as homomorphisms
induced by scoped algebras, and parallel composition
by considering all nondeterministic interleavings of (the outgoing signals of) individual computations, e.g.,
based on how \citet{Plotkin:BinaryHandlers} and \citet{Lindley:DoBeDoBeDo} model
it in the context of general effect handlers.
Finally, we expect to take inspiration for the denotational semantics of the promise type
from that of modal logics and modal types.
\paragraph{Reasoning about asynchronous effects}
In addition to using \lambdaAEff's type-and-effect system only for specification purposes (such as specifying
that $M : \tycomp{X}{(\emptyset,\{\})}$ raises no signals and installs no interrupt handlers),
we wish to make further use of it for validating \emph{effect-dependent optimisations} \cite{Kammar:Optimisations}.
For instance, whenever $M : \tycomp{X}{(\o,\i)}$ and $\i\, (\op) = \bot$, we would like to know
that $\tmopin{\op}{V}{M} \reduces^* M$. One way to validate such optimisations
is to develop an adequate denotational semantics,
and then use a semantic \emph{computational induction} principle \cite{Bauer:EffectSystem,Plotkin:Logic}.
For \lambdaAEff, this would amount to only having to prove the optimisations for return values, signals,
and interrupt handlers. Another way to validate effect-dependent optimisations would
be to define a suitable logical
relation for \lambdaAEff~\cite{Benton:AbstractEffects}.
In addition to optimisations based on \lambdaAEff's existing effect system,
we plan to explore extending processes and their types
with \emph{communication protocols} inspired by session types \cite{Honda:LangPrimitives},
so as to refine the current ``broadcast
everything everywhere'' communication strategy.
\section*{Acknowledgements}
We thank the anonymous reviewers, Otterlo IFIP WG 2.1 meeting participants,
and Andrej Bauer, Gavin Bierman, Žiga Lukšič, and Alex Simpson for their useful feedback.
This project has received funding from the European Union's Horizon 2020 research and
innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 834146
\raisebox{-0.05cm}{
\hspace{-0.15cm}
\includegraphics[width=0.5cm]{eu_flag.pdf}
\hspace{-0.15cm}
}.
This material is based upon work supported by the Air Force Office of Scientific
Research under award
number FA9550-17-1-0326.
\section{Introduction}
Effectful programming abstractions are at the heart of many modern general-purpose
programming languages.
They can increase expressiveness by giving
access to first-class continuations, but often simply help users to write
cleaner code, e.g., by avoiding having to manage a program's memory explicitly in state-passing style,
or getting lost in callback hell while programming asynchronously.
An increasing number of language designers and programmers are starting to
embrace \emph{algebraic effects},
where one uses algebraic operations \cite{Plotkin:NotionsOfComputation} and
effect handlers \cite{Plotkin:HandlingEffects} to uniformly and user-definably
express a wide range of effectful behaviour,
ranging from basic examples such as state, rollbacks, exceptions,
and nondeterminism \cite{Bauer:AlgebraicEffects}, to advanced applications
in concurrency \cite{Dolan:MulticoreOCaml} and statistical probabilistic programming
\cite{Bingham:Pyro}, and even quantum computation \cite{Staton:AlgEffQuantum}.
While covering many examples, the conventional treatment of
algebraic effects is \emph{synchronous} by nature. In it
effects are invoked by placing operation calls in one's code,
which then propagate outwards until they trigger the actual effect, finally yielding
a result to the rest of the computation that has been \emph{waiting} the whole
time. While blocking the computation is indeed sometimes needed, e.g.,
in the presence of general effect handlers that can execute their continuation any
number of times, it forces all uses of algebraic effects to be synchronous, even when this
is not necessary, e.g., when the effect involves executing
a remote query to which a response is not needed (immediately).
Motivated by the recent interest in the combination of
asynchrony and algebraic effects \cite{Leijen:AsyncAwait,Dolan:MulticoreOCaml},
we explore what it takes (in terms of
language design, safe programming abstractions, and a
self-contained core calculus) to accompany the
synchronous treatment of algebraic effects with
an \emph{asynchronous} one. At the heart of our approach is the
decoupling of the execution of operation calls
into \emph{signalling} that some implementation of an operation needs to be executed, and \emph{interrupting} a
running computation with its result, to which the computation can react by
installing \emph{interrupt handlers}. Importantly, we show that our
approach is flexible enough that not all signals need to have a
corresponding interrupt, and vice versa, allowing us to also model
\emph{spontaneous behaviour}, such as a
user clicking a button or the environment preempting a thread.
While we are not the first ones to work on asynchrony for algebraic effects,
the prior work in this area (in the context of general effect handlers) has
achieved it by \emph{delegating} the actual asynchrony to the respective language backends
\cite{Leijen:AsyncAwait,Dolan:MulticoreOCaml}. In contrast, in this paper
we demonstrate how to capture the combination of
asynchrony and algebraic effects in a \emph{self-contained} core calculus.
It is important to emphasise that our aim is not to replace general effect handlers,
but instead to \emph{complement} them with robust primitives
tailored to asynchrony---our proposed approach is algebraic by design, so as
to be ready for future extensions with general effect handlers.
\paragraph{Paper structure}
In \autoref{sec:overview}, we give a high-level overview of our approach to
asynchrony for algebraic effects.
In \autoref{sec:basic-calculus:computations}
and \ref{sec:basic-calculus:processes}, we distil our ideas into a core calculus, \lambdaAEff,
equipped with a small-step semantics, a type-and-effect system, and proofs of
type safety. In \autoref{sec:applications},
we show \lambdaAEff~in action on examples
such as preemptive multi-threading, remote function calls, and a parallel variant of runners
of algebraic effects. We conclude, and discuss related and future work in \autoref{sec:conclusion}.
\paragraph{Code}
The paper is accompanied by a \emph{formalisation} of \lambdaAEff's type safety proofs
in \pl{Agda} \cite{ahman20:AeffAgda}, and a \emph{prototype implementation} of \lambdaAEff~in
\pl{OCaml}, called \pl{{\AE}ff} \cite{pretnar20:AEff}. For ease of use, we also provide both of them as
a single virtual machine image \cite{AhmanPretnar20:Artefact}.
In the \pl{Agda} formalisation, we consider only well-typed syntax of a
variant of \lambdaAEff~in which the subsumption rule manifests as an explicit coercion, so as to make working with
de Bruijn indices less painful.
Meanwhile, the \pl{{\AE}ff} implementation provides an interpreter and
a simple typechecker, but it does not
yet support inferring and checking effect annotations. In addition, \pl{{\AE}ff} provides
a web interface that allows users to enter their programs and interactively click through
their executions.
\pl{{\AE}ff} also comes with implementations of all the examples we present in this paper.
Separately, \citet{Poulson:AsyncEffectHandling} has shown how to implement \lambdaAEff~
in \pl{Frank} \cite{Convent:DooBeeDooBeeDoo}.
\section{Asynchronous Effects, by Example}
\label{sec:overview}
We begin with a high-level overview of how we accommodate asynchrony within algebraic effects.
\subsection{Conventional Algebraic Effects Are Synchronous by Nature}
We first recall the basic ideas of programming with algebraic effects,
illustrating that their traditional treatment is synchronous by nature.
For an in-depth overview, we refer to the tutorial by \citet{Pretnar:Tutorial}, and to the
seminal papers of the field \cite{Plotkin:NotionsOfComputation,Plotkin:HandlingEffects}.
In this algebraic treatment, sources of computational effects are modelled using signatures
of \emph{operation symbols} $\op : A_\op \to B_\op$. For instance, one models
$S$-valued state using two operations, $\mathsf{get} : \tyunit \to S$ and $\mathsf{set} : S \to \tyunit$;
and $E$-valued exceptions using a single operation $\opsym{raise} : E \to \tyempty$.
Programmers can then invoke the effect that an
$\op : A_\op \to B_\op$ models by placing an \emph{operation call} $\tmop {op} V y M$ in their code. Here, the
parameter value $V$ has type $A_\op$, and the variable $y$, which is bound in the continuation $M$, has type $B_\op$.
For instance, for $\mathsf{set}$, the parameter $V$ would be
the new value of the store, and for $\mathsf{get}$, the variable $y$ would be bound to the current value of the store.
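For illustration, and assuming an integer-valued store together with the usual arithmetic on values, a computation that increments the store and returns its old value could be written with two such operation calls as
\[
\tmop{\mathsf{get}}{\tmunit}{y}{\tmop{\mathsf{set}}{(y + 1)}{y'}{\tmreturn y}},
\]
where the unit result of $\mathsf{set}$, here bound to the fresh variable $y'$, is simply ignored.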
A program written in terms of operation calls is by itself just an inert piece of code. To
execute it, programmers have to provide \emph{implementations} for the operation
calls appearing in it. The idea is that an implementation of $\tmop {op} V y M$ takes $V$ as its input,
and its output gets bound to $y$.
For instance, this could take the form of defining a suitable effect handler
\cite{Plotkin:HandlingEffects}, but could also be given by calls
to runners of algebraic effects \cite{Ahman:Runners}, or simply by invoking some
(default) top-level (native) implementation.
What is important is that some pre-defined piece of code $M_\op[V/x]$
gets executed in place of every operation call $\tmop {op} V y M$.
Now, what makes the conventional treatment of algebraic effects \emph{synchronous} is
that the execution of an operation call $\tmop {op} V y M$ \emph{blocks} until some implementation
of $\op$ returns a value $W$ to be bound to $y$, so that
the execution of the continuation $M[W/y]$ could proceed \cite{Kammar:Handlers,Bauer:EffectSystem}.
Conceptually, this kind of blocking behaviour can be illustrated as
\begin{equation}
\begin{gathered}
\label{eq:syncopcall}
\xymatrix@C=1.25em@R=0.85em@M=0.5em{
& M_\op[V/x] \ar@{}[r]|{\mbox{$\Large{\leadsto^{\!*}}$}} & \tmreturn W \ar[d]
\\
\dots \ar@{}[r]|>>>{\mbox{$\Large{\leadsto}$}} & \tmop {op} V y M \ar[u] & M[W/y] \ar@{}[r]|<<<{\mbox{$\Large{\leadsto}$}} & \dots
}
\end{gathered}
\end{equation}
where $\tmreturn W$ is a computation that causes no effects and simply returns the given value $W$.
While blocking the rest of the computation is needed in the presence of
general effect handlers that can execute their continuation any number
of times, it forces all uses of algebraic effects to be synchronous, even
when this is not necessary, e.g., when the effect in question involves
executing a remote query to which a response is not needed immediately,
or sometimes never at all.
In the rest of this section, we describe how we decouple the invocation of
an operation call from the act of receiving its result, and how we give
programmers a means to block execution only when it is necessary.
While we end up surrendering some of effect handlers' generality,
such as having access to the continuation that captures the rest of the
computation to be handled, in return we get a natural and robust formalism for
asynchronous programming with algebraic effects.
\subsection{Outgoing Signals and Incoming Interrupts}
\label{sec:overview:signals}
We begin by observing that the execution of an operation call $\tmop {op} V y M$,
as shown in (\ref{eq:syncopcall}), consists of \emph{three distinct phases}: (i) signalling that an
implementation of $\op$ needs to be executed with parameter $V$ (the up-arrow), (ii) executing
this implementation (the horizontal arrow), and (iii) interrupting the blocking of $M$ with a value $W$
(the down-arrow). In order to overcome the unwanted side-effects of blocking execution on every operation call,
we shall naturally decouple these phases into separate programming concepts, allowing the execution of
$M$ to proceed even if (ii) has not yet completed and (iii) has not yet taken place. In particular, we
decouple an operation call into issuing an \emph{outgoing signal},
written $\tmopout{\op}{V}{M}$, and receiving an \emph{incoming interrupt}, written $\tmopin{\op}{W}{M}$.
It is important to note that while we have used the execution of operation calls
to motivate the introduction of signals and interrupts as programming concepts, \emph{not all issued signals need to have a corresponding
interrupt response}, and \emph{not all interrupts need to be responses to issued signals},
allowing us to also model spontaneous behaviour, such as the environment preempting a thread.
When \emph{issuing a signal} $\tmopout{\op}{V}{M}$, the value $V$ is a \emph{payload}, such as a location to be looked up or a
message to be displayed, aimed at whoever is listening for the given signal. We use the $\tmkw{\uparrow}$-notation to indicate that signals issued in sub-computations propagate outwards---in this sense signals behave just like conventional
operation calls. However, signals crucially differ from conventional operation calls in that no additional variables
are bound in the continuation $M$, making it naturally possible to continue executing $M$ straight after the signal has been issued, e.g., as depicted below:
\vspace{-3ex}
\[
\xymatrix@C=1.25em@R=1.25em@M=0.5em{
& &
\\
\dots \ar@{}[r]|<<<{\mbox{$\Large{\leadsto}$}} & \tmopout {op} V M \ar[u]^{\op\, V} \ar@{}[r]|<<<{\mbox{$\Large{\leadsto}$}} & M \ar@{}[r]|<<<{\mbox{$\Large{\leadsto}$}} & \dots
}
\]
As a \emph{running example}, consider a computation $M_{\text{feedClient}}$, which lets a user scroll through a seemingly infinite feed, e.g., by repeatedly clicking a ``next page'' button.
For efficiency, $M_{\text{feedClient}}$ does not initially cache all the data, but instead requests a
new batch of data each time the user is nearing the end of the cache. To communicate with the outside world, $M_{\text{feedClient}}$ can issue a signal
\[
\tmopout{\opsym{request}}{\mathit{cachedSize} + 1}{M_{\text{feedClient}}}
\]
to request a new batch of data starting from the end of the current cache, or a different signal
\[
\tmopout{\opsym{display}}{\mathit{message}}{M_{\text{feedClient}}}
\]
to display a message to the user. In both cases, the continuation \emph{does not wait}
for an acknowledgement that the signal was received,
but instead continues to provide a seamless experience to the user.
It is however worth noting that these signals differ in what $M_{\text{feedClient}}$ expects of them:
it expects a response to the $\opsym{request}$ signal at some future point in
its execution, while it expects no response at all to the $\opsym{display}$ signal,
illustrating that not every issued signal needs a response.
When the outside world wants to get the attention of a computation, be it in response to
a signal or spontaneously,
it does so by \emph{propagating an interrupt}~$\tmopin{\op}{W}{M}$ to the computation.
Here, the value $W$ is again a payload, while $M$ is the computation receiving the interrupt.
It is important to note that unlike signals, interrupts are not triggered by the computation itself,
but are instead issued by the \emph{outside world},
and can thus interrupt any sequence of evaluation steps,
e.g., as in
\vspace{-3ex}
\[
\xymatrix@C=1.25em@R=1.25em@M=0.5em{
& \ar[d]^-{\op\, W} &
\\
\dots \ar@{}[r]|<<<{\mbox{$\Large{\leadsto}$}} & M \ar@{}[r]|<<<{\mbox{$\Large{\leadsto}$}} & \tmopin {op} W M \ar@{}[r]|<<<{\mbox{$\Large{\leadsto}$}} & \dots
}
\]
In our running example, there are two interrupts of interest:
$\tmopin{\opsym{response}}{\mathit{newBatch}}{M}$, which delivers new data to replenish the
cache; and $\tmopin{\opsym{nextItem}}{\tmunit}{M}$, with which the user requests to see the next data
item. In both cases, $M$ represents the state of $M_{\text{feedClient}}$ before the interrupt arrived.
We use
the $\tmkw{\downarrow}$-notation to indicate that interrupts propagate inwards into sub-computations,
trying to reach anyone listening for them, and only get discarded when they reach a $\tmkw{return}$.
It is worth noting that programmers are not expected to write interrupts explicitly in their programs---instead,
interrupts are usually induced by signals issued by other parallel processes, as explained next.
\subsection{A Signal for the Sender Is an Interrupt to the Receiver}
\label{sec:overview:processes}
As noted above, the computations we consider do not evolve in isolation; instead, they communicate with
the outside world by issuing outgoing signals and receiving incoming interrupts.
We model the outside world by composing individual computations into \emph{parallel processes} $P, Q, \ldots$.
To keep the presentation clean and focussed on the asynchronous use of algebraic effects, we consider a very
simple model of parallelism: a process is either one of the individual computations being run
in parallel, written $\tmrun M$, or the parallel composition of two processes,
written $\tmpar P Q$.
To capture the signals and interrupts based interaction of processes,
our operational semantics includes rules for \emph{propagating outgoing signals} from individual
computations to processes,
\emph{turning processes' outgoing signals into incoming interrupts} for their surrounding world, and
\emph{propagating incoming interrupts} from processes to individual computations.
For instance, in our running example,
$M_{\text{feedClient}}$'s request for new data is executed as follows (with the active redexes highlighted):
\[
\begin{array}{r l}
& \tmpar{\highlightgray{\tmrun (\tmopout{request}{V}{\highlightwhite{M_{\text{feedClient}}}})}}{\tmrun M_{\text{feedServer}}}
\\[0.5ex]
\reduces & \highlightgray{\tmpar{(\tmopout{request}{V}{\highlightwhite{\tmrun M_{\text{feedClient}}}})}{\highlightwhite{\tmrun M_{\text{feedServer}}}}}
\\[0.5ex]
\reduces & \tmopoutbig{request}{V}{\tmpar{\tmrun M_{\text{feedClient}}}{\highlightgray{\tmopin{request}{V}{\tmrun {\highlightwhite{M_{\text{feedServer}}}}}}}}
\\[0.5ex]
\reduces & \tmopoutbig{request}{V}{\tmpar{\tmrun M_{\text{feedClient}}}{\tmrun (\tmopin{request}{V}{M_{\text{feedServer}}})}}
\end{array}
\]
Here, the first and the last reduction step respectively propagate signals outwards and
interrupts inwards. The middle reduction step corresponds to what we call a \emph{broadcast rule}---it
turns an outward moving signal in one of the processes into an inward moving interrupt for the process
parallel to it, while continuing to propagate the signal outwards to any further parallel processes.
\subsection{Promising to Handle Interrupts}
\label{sect:overview:promising}
So far, we have shown that our computations can issue outgoing signals and receive incoming interrupts, and how
these evolve when executing parallel processes, but we have not yet said
anything about how computations can actually \emph{react} to incoming interrupts of interest.
In order to react to incoming interrupts, our computations can install \emph{interrupt handlers}, written
\[
\tmwith{op}{x}{M}{p}{N}
\]
that should be read as: ``we promise to handle a future interrupt named $\op$ using the computation
$M$ in the continuation $N$, with $x$ bound to the payload of the interrupt''. Fulfilling this promise consists of executing $M$ and binding its result to the
variable $p$ in $N$. This is captured by the reduction rule
\[
\tmopin{op}{V}{\tmwith{op}{x}{M}{p}{N}} \reduces \tmlet{p}{M[V/x]}{\tmopin{op}{V}{N}}
\]
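To make this rule concrete, here is one possible instance of it, anticipating the batch size handler from \autoref{sec:overview:runningexample} and taking $42$ as the received payload:
\[
\begin{array}{l}
\tmopin{\opsym{batchSizeResponse}}{42}{\tmwith{\opsym{batchSizeResponse}}{x}{\tmreturn \tmpromise{x}}{p}{N}}
\\[0.5ex]
\quad\reduces\;\;
\tmlet{p}{\tmreturn \tmpromise{42}}{\tmopin{\opsym{batchSizeResponse}}{42}{N}}
\end{array}
\]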
It is worth noting two things: the interrupt handler is \emph{not reinstalled by default},
and the interrupt itself \emph{keeps propagating inwards} into the sub-computation $N$.
Regarding the former,
programmers can selectively reinstall interrupt handlers when needed,
by defining them suitably recursively, e.g.,
as we demonstrate in \autoref{sec:overview:runningexample}.
Concerning the latter, in order to skip certain interrupt handlers for some $\opsym{op}$, one
can carry additional data
in $\opsym{op}$'s payload (e.g., a thread ID) and then condition the (non-)triggering of those interrupt
handlers on this data, e.g., as we do in \autoref{sec:applications:guarder-handlers}.
Interrupts that do not match a given interrupt handler ($\op \neq \op'$) are simply propagated past it:
\[
\tmopin{op'}{V}{\tmwith{op}{x}{M}{p}{N}} \reduces \tmwith{op}{x}{M}{p}{\tmopin{op'}{V}{N}}
\]
Interrupt handlers differ from operation calls in two important aspects.
First, they enable \emph{user-side post-processing} of received data, using $M$,
while in operation calls the result is immediately bound in the continuation. Second, and more
importantly, their semantics is \emph{non-blocking}. In particular,
\[
N \reduces N' \qquad \text{implies} \qquad \tmwith{op}{x}{M}{p}{N} \reduces \tmwith{op}{x}{M}{p}{N'}
\]
meaning that the continuation $N$, and thus the whole computation, can make progress
even though no incoming interrupt $\opsym{op}$ has been propagated to the computation
from the outside world.
As the observant reader might have noticed, the non-blocking behaviour of interrupt handling
means that our operational semantics has to work on \emph{open terms} because the variable $p$ can
appear free in both $N$ and $N'$ above. However, it is important to note that $p$ is not an arbitrary variable,
but in fact gets assigned a distinguished \emph{promise type} $\typromise X$ for some value type
$X$---we shall crucially make use of this typing of $p$ in the proof of type safety for our \lambdaAEff-calculus (see \autoref{theorem:progress}).
\subsection{Blocking on Interrupts Only When Necessary}
\label{sec:overview:await}
As noted earlier, installing an interrupt handler means making a promise to handle a given
interrupt in the future. To check that an interrupt has been received and handled,
we provide programmers a means to selectively \emph{block execution}
and \emph{await} a specific promise to be fulfilled, written
$\tmawait{V}{x}{M}$, where if $V$ has a promise type $\typromise X$, the variable $x$ bound in $M$ has type $X$.
Importantly, the continuation $M$ is executed only
when the $\tmkw{await}$ is handed a \emph{fulfilled promise} $\tmpromise V$:
\[
\tmawait{\tmpromise V}{x}{M} \reduces M[V/x]
\]
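For instance, assuming the usual arithmetic on values, handing $\tmkw{await}$ the fulfilled promise $\tmpromise{42}$ results in the reduction
\[
\tmawait{\tmpromise{42}}{x}{\tmreturn (x + 1)} \reduces \tmreturn (42 + 1),
\]
where the right-hand side is simply the substitution instance $M[42/x]$.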
Revisiting our example of scrolling through a seemingly infinite feed,
$M_{\text{feedClient}}$ could use $\tmkw{await}$ to block until it has received an initial configuration,
such as the batch size used by $M_{\text{feedServer}}$.
As the terminology suggests, this part of \lambdaAEff~is strongly influenced by existing work on
\emph{futures and promises} \cite{Schwinghammer:Thesis} for structuring concurrent programs, and their use in modern languages,
such as in \pl{Scala} \cite{Haller:Futures}. While prior work often models promises as writable, single-assignment
references, we instead use the substitution of values for ordinary immutable variables (of distinguished promise type)
to model that a promise gets fulfilled exactly once.
\subsection{Putting It All Together}
\label{sec:overview:runningexample}
Finally, we show how to implement our example of scrolling through a seemingly infinite feed.
For a simpler exposition, we allow ourselves access to mutable references, though the same can be
achieved by rolling one's own state.
Further, we use $\tmopoutgen {op} V$
as syntactic sugar for $\tmopout {op} V {\tmreturn \tmunit}$.
\subsubsection{Client}
\label{sec:overview:runningexample:client}
We implement the client computation $M_{\text{feedClient}}$ as the function \ls$client$ defined below.
For presentation purposes, we split the definition of \ls$client$ between multiple code blocks.
First, the client sets up the initial values of the auxiliary references,
issues a signal to the server asking for the data batch size that it uses, and then installs a corresponding
interrupt handler:
\begin{lstlisting}
let client () =
let (cachedData , requestInProgress , currentItem) = (ref [] , ref false , ref 0) in
send batchSizeRequest ();
promise (batchSizeResponse batchSize |-> return <<batchSize>>) as batchSizePromise in
\end{lstlisting}
While the server is asynchronously responding to the batch size request, the client
sets up an auxiliary function \ls$requestNewData$, which it later uses to request new data from the server:
\begin{lstlisting}
let requestNewData offset =
requestInProgress := true;
send request offset;
promise (response newBatch |->
cachedData := !cachedData @ newBatch;
requestInProgress := false; return <<()>>
) as _ in return ()
in
\end{lstlisting}
Here, the client first sets a flag indicating that a new data request is in progress,
then issues a $\opsym{request}$ signal to the server, and finally installs an
interrupt handler that updates the cache
once the $\opsym{response}$ interrupt arrives.
Note that the client does not block while awaiting new data, instead it continues executing, notifying
the user to wait and try again once the cache is empty (see below).
Then, the client sets up its main loop, which is a simple recursively defined interrupt handler:
\begin{lstlisting}
let rec clientLoop batchSize =
promise (nextItem () |->
let cachedSize = length !cachedData in
(if (!currentItem > cachedSize - batchSize / 2) && (not !requestInProgress) then
requestNewData (cachedSize + 1)
else
return ());
(if !currentItem < cachedSize then
send display (toString (nth !cachedData !currentItem));
currentItem := !currentItem + 1
else
send display "please wait a bit and try again");
clientLoop batchSize
) as p in return p
in
\end{lstlisting}
In it, the client listens for a $\opsym{nextItem}$ interrupt from the user to display more data.
Once the interrupt arrives, the client checks if its cache is becoming empty---if so,
it uses the \ls$requestNewData$ function to
request more data from the server. Next, if there is still some
data in the cache, the client issues a signal to display the next data item to the user.
If however the cache is empty, the client issues a signal
to display a waiting message to the user. The client then simply recursively reinvokes itself.
As a last step of setting itself up, the client blocks until the server has responded
with the batch size it uses, after which the client starts its main loop
with the received batch size as follows:
\begin{lstlisting}
await batchSizePromise until <<batchSize>> in clientLoop batchSize
\end{lstlisting}
\subsubsection{Server}
\label{sec:overview:runningexample:server}
We implement the server computation $M_{\text{feedServer}}$ as the following function:
\begin{lstlisting}
let server batchSize =
let rec waitForBatchSize () =
promise (batchSizeRequest () |->
send batchSizeResponse batchSize;
waitForBatchSize ()
) as p in return p
in
let rec waitForRequest () =
promise (request offset |->
let payload = map (fun x |-> 10 * x) (range offset (offset + batchSize - 1)) in
send response payload;
waitForRequest ()
) as p in return p
in
waitForBatchSize (); waitForRequest ()
\end{lstlisting}
where the computation \lstinline{range i j} returns a list of integers ranging from \lstinline{i} to \lstinline{j} (both inclusive).
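We do not rely on a particular definition of \lstinline{range}; a minimal recursive sketch in the style of the listings above (reusing the list append \lstinline{@} and the empty list \lstinline{[]}) is:
\begin{lstlisting}
let rec range i j =
  (* empty when the lower bound exceeds the upper bound *)
  if i > j then return []
  else
    let rest = range (i + 1) j in
    return ([i] @ rest)
\end{lstlisting}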
The server simply installs two recursively defined interrupt handlers: the first
one listens for and responds to the client's requests about the batch size it uses;
and the second one responds to the client's requests for new data. Both interrupt
handlers then simply recursively reinstall themselves.
\subsubsection{User}
\label{sec:overview:runningexample:user}
We can also simulate the user as a computation. Namely, we implement
it as a function that every now and then issues a request to
the client to display the next data item:
\begin{lstlisting}
let rec user () =
let rec wait n =
if n = 0 then return () else wait (n - 1)
in
send nextItem (); wait 10; user ()
\end{lstlisting}
It is straightforward to extend the user also with a handler for $\opsym{display}$ interrupts (we omit it here).
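For instance, one possible such extension is sketched below, where the auxiliary computation \lstinline{log} is a placeholder for whatever the user does with the received message, and the handler simply reinstalls itself recursively, mirroring the server's handlers:
\begin{lstlisting}
let rec displayListener () =
  promise (display msg |->
    (* react to the message and reinstall the handler *)
    log msg;
    displayListener ()
  ) as p in return p
\end{lstlisting}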
\subsubsection{Running the Server, Client, and User in Parallel}
\label{sec:overview:runningexample:parallel}
Finally, we can simulate our running example in full by running all
three computations we defined above as parallel processes, e.g., as follows:
\begin{lstlisting}
run (server 42) || run (client ()) || run (user ())
\end{lstlisting}
\section{Introduction}
Many topological properties of a complex hyperplane arrangement can be
expressed in terms of its combinatorial structure (matroid). For
example, this is the case for the cohomology ring of the complement of the
arrangement \cite{OS} and for the Malcev completion of the fundamental
group of the complement \cite{Ko}.
However, for the fundamental group of this complement,
all known computation algorithms use non-matroidal data. (In
the case of a complexified real arrangement the answer can be expressed
in terms of the oriented matroid, which requires more information than
just the dimensions of all intersections of hyperplanes, as in the case of the ordinary matroid.)
Due to the classical theorem of Zariski, the fundamental group of the
complement of a hypersurface in $\mbox {$\mathbb{C}$} P^n$ is isomorphic to the
fundamental group of the complement in a generic 2-dimensional plane
of its intersection with the hypersurface. Hence in our case it is sufficient
to study fundamental groups of complements of line arrangements in
$\mbox {$\mathbb{C}$} P^2$. In this paper we use a description of the combinatorial structure of
line arrangements in terms of projective configurations, which is
essentially equivalent to using the matroid language, but has the advantage
of being more visual.
The aim of this paper is to construct two combinatorially equivalent
line arrangements in $\mbox {$\mathbb{C}$} P^2$ whose complements have different
fundamental groups. Our approach to this is the following.
We consider
the canonical map $\pi_1(X)\to H_1(X,\mbox {$\mathbb{Z}$})$ where $X$ is the complement of a
line arrangement $L$. The first homology group $H=H_1(X,\mbox {$\mathbb{Z}$})$ is a
free Abelian group which is canonically determined by the configuration $C$
describing the combinatorial structure of $L$. We construct for any $C$
an invariant of the group $\pi_1(X)$ with respect to isomorphisms
$\pi_1(X)\to\pi_1(X')$ inducing trivial automorphism of $H$, where $X$
and $X'$ are complements of line arrangements combinatorially
described by $C$. This invariant depends only on
$\pi_1(X)/\gamma_4\pi_1(X)$, where $\gamma_k G$ denotes the $k$-th member
of the lower central series of a group $G$.
Next we consider two conjugate realizations of
the MacLane configuration $C_8$ corresponding to the MacLane matroid
\cite{McL} (see also \cite{Z})
and show that our invariant distinguishes them.
Finally, we use these computations to prove that the configuration
$C_{13}$, obtained by gluing together two MacLane configurations, has at
least two realizations with different fundamental groups.
\bigskip
I am deeply grateful for helpful discussions to T.~Alekseyevskaya,
A.~Dress, A.~Goncharov, S.~Lvovsky, N.~Mnev, G.~Noskov, S.~Orevkov,
D.~Stone, G.~Ziegler, and especially to
I.~M.~Gelfand, without whom this paper would never have been written.
\section{Construction of distinguishing invariant}
\begin{definition} A projective configuration is a triple
$C=(\mathcal{L},\mathcal{P},\succ)$, consisting of two sets: $\mathcal{L}$, called the set
of lines, and $\mathcal{P}$, called the set of points, and the binary relation
$\succ$, called incidence, between $\mathcal{L}$ and $\mathcal{P}$, provided the
following axioms hold:
(1) For any two distinct lines $l$ and $l'$ there is a unique point
$p$ such that $l\succ p$ and $l'\succ p$;
(2) Any point is incident to at least two lines.
\end{definition}
For lines $l$ and $l'$ and a point $p$ as in axiom (1) we write
$p=l\cap l'$ and call $p$ the intersection point of the lines
$l$ and $l'$. We call a configuration $C=(\mathcal{L},\mathcal{P},\succ)$
non-degenerate if $|\mathcal{P}|>1$, i.e., if not all the lines pass through
one point.\footnote{The notion of a non-degenerate configuration
is essentially equivalent to the notion of a simple rank 3 matroid
on the set of lines.}
\begin{definition}
A complex realization of a projective configuration
$C=(\mathcal{L},$ $\mathcal{P}, \succ)$ is an injective map $\phi$ from $\mathcal{L}$
to the set of lines in $\mbox {$\mathbb{C}$} P^2$ and from $\mathcal{P}$ to the set of
points in $\mbox {$\mathbb{C}$} P^2$ such that $\phi p\in\phi l$ if and only if $p\prec l$.
\end{definition}
Let $C=(\mathcal{L},\mathcal{P},\succ)$ be a finite non-degenerate projective
configuration. We order the elements of $\mathcal{L}$:
$\mathcal{L}=\{l_0,l_1,\dots,l_n\}$, and call $l_0$ the line at infinity.
Let $\mathcal{P}_0=\{p\in\mathcal{P}\mid p\not\prec l_0\}$.
Now we shall model in abstract terms the presentation of the
fundamental group of the complement of a line arrangement in
$\mbox {$\mathbb{C}$}^2$, as stated in \cite{A} (see also \cite{OT}).
Let $F$ be a free group with free generators $w_1,\dots,w_n$,
and let $\mathcal{A}=\{(i,p)\in\{1,\dots,n\}\times
\mathcal{P}_0\mid p\prec l_i\}$. For any $(i,p)\in\mathcal{A}$
we choose $w_i(p)\in F$ which is conjugate in $F$ to $w_i$:
$w_i(p)={(g(i,p))}^{-1}w_ig(i,p)$ for some $g(i,p)\in F$.
Thus the elements $w_i(p)$ are determined by some mapping
$g:\mathcal{A}\to F$.
Let $p\in \mathcal{P}_0$ and $\{i_1,\dots,i_k\}=\{i\in\{1,\dots,n\}\mid l_i\succ p\}$
with $i_1<\dots<i_k$. We set $c_\alpha(p)=w_{i_\alpha}(p)\cdot
w_{i_{\alpha-1}}(p)\cdot\dots\cdot w_{i_1}(p)\cdot
w_{i_k}(p)\cdot\dots\cdot w_{i_{\alpha+1}}(p)$ for $\alpha=1,\dots,k$,
and $r(i_\alpha,p)={c_{\alpha-1}(p)}^{-1}\cdot c_\alpha(p)$
for $\alpha=1,\dots,k$, where $c_0(p)=c_k(p)$. It is easy to see that
$r(i_\alpha,p)=[w_{i_\alpha},c_\alpha(p)]$ and $\prod_{\alpha=1}^k r(i_\alpha,p)=1$,
hence only $k-1$ of them are independent. We denote by
$[w_{i_k}(p),w_{i_{k-1}}(p),\dots,w_{i_{1}}(p)]$ the set
$\{r(i_2,p),\dots,r(i_k,p)\}$.
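For instance, for a double point $p=l_{i_1}\cap l_{i_2}$ (i.e., $k=2$) these definitions give
$c_1(p)=w_{i_1}(p)\,w_{i_2}(p)$ and $c_2(p)=w_{i_2}(p)\,w_{i_1}(p)$, so the set
$[w_{i_2}(p),w_{i_1}(p)]$ consists of the single relator
$$
r(i_2,p)={c_1(p)}^{-1}c_2(p)={w_{i_2}(p)}^{-1}{w_{i_1}(p)}^{-1}w_{i_2}(p)\,w_{i_1}(p),
$$
which forces the two conjugated generators to commute in the quotient group defined below.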
Let $\mathcal{R}=\mathcal{R}(g)=\{r(i,p)\mid l_i\succ p$ and $i\neq\min\{i\mid
l_i\succ p\}\,\}$ be the union of these sets.
We set $G=G(\mathcal{R})=F/\langle\mathcal{R}\rangle$, where $\langle\mathcal{R}\rangle$
is the minimal normal subgroup of $F$ containing $\mathcal{R}$.
Let $\phi$ be a complex realization of the configuration $C$, and
let $X=\mbox {$\mathbb{C}$} P^2\setminus\bigcup_{l\in\mathcal{L}}\phi l$. Then
$\pi_1(X)\cong G(\mathcal{R}(g))$ for some particular choice of $g$.
The image of the generator $w_i$ corresponds under this
isomorphism to a small loop around $\phi l_i$ passed in
the positive direction (the lines in $\mbox {$\mathbb{C}$}^2$ have canonical
coorientation).
Let us consider the lower central series $(\g kF)$ of the group
$F$ ($\g1F=F$, $\g{k+1}F=[F,\g kF]$). It is well known that the groups
$\mbox{gr}_kF=\g kF/\g{k+1}F$ are free Abelian groups of finite rank.
Moreover, $\mbox{gr} F=\bigoplus_{k=1}^\infty\mbox{gr}_kF$ is naturally isomorphic
(as a graded Abelian group) to the free Lie algebra $L$ over $\mbox {$\mathbb{Z}$}$
with $n$ generators $x_1,\dots,x_n$ ($L=\bigoplus_{k=1}^\infty L_k$
is graded by degree of monomials) \cite{MKS}. Under this isomorphism
the Lie algebra commutator in $L$ corresponds to group commutator
in $F$. Further on we will identify $\mbox{gr}_kF$ with $L_k$.
The group $\langle\mathcal{R}\rangle$ is generated by $r(i,p)$ (which are commutators)
and the elements of the form $[w_{i_1}^{\pm 1},[w_{i_2}^{\pm 1},
\dots,[w_{i_k}^{\pm 1}, r(i,p)]\dots]]$. Therefore
$\langle\mathcal{R}\rangle\subset\g2F$. It is easy to see that the images of the elements
of $\mathcal{R}$ in $L_2$ are $\bar{r}(i,p)=[x_i,\sum_{j:p\prec l_j}x_j]$,
hence they depend only on $C$.
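For instance, for a double point $p=l_i\cap l_k\in\mathcal{P}_0$ the only lines incident to $p$ are $l_i$ and $l_k$, so
$$
\bar{r}(i,p)=[x_i,x_i+x_k]=[x_i,x_k].
$$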
In the following considerations we will need a broader class of groups
$G$. Let now $\mathcal{R}=\{r(i,p)\mid(i,p)\in\mathcal{A}\ \&\ i\neq\min\{i\mid
l_i\succ p\}\,\}$ be a set of arbitrary elements of $\g2F$, subject to the
only condition that the image of each $r(i,p)$ in $L_2$ must
be the same element $\bar{r}(i,p)$ as above. We call such a set $\mathcal{R}$
admissible for $C$. The group $G=G(\mathcal{R})$
is defined as before.
As the $\bar{r}(i,p)$ are linearly independent in $L_2$, it follows
that not only does $\mbox{gr}_2\langle\mathcal{R}\rangle=R_2$ depend only on $C$, but the
same is true for $\mbox{gr}_3\langle\mathcal{R}\rangle=R_3=[H,R_2]$, where $H=L_1\cong\mbox {$\mathbb{Z}$}^n$.
Hence $\mbox{gr}_1G\cong H$, $\mbox{gr}_2G\cong P_2=L_2/R_2$, and $\mbox{gr}_3G\cong P_3=L_3/R_3$
are canonically determined by $C$. This means that in each of these
groups we have a canonical basis.
In the case when $G=\pi_1(X)$ for some complex realization of $C$,
we have $H=H_1(X,\mbox {$\mathbb{Z}$})$ and the canonical basis in it is dual
to the canonical basis of $H^1(X,\mbox {$\mathbb{Z}$})$ described in \cite{OS}.
The group $G/\g3G$ is an extension of
the group $H$ (which is free Abelian) by the trivial $H$-module $P_2$.
Such extensions are parameterized by the group
$H^2(H,P_2)\cong \mbox{{\rm Hom}}(\Lambda^2H,P_2)$. But as $\Lambda^2H\cong L_2$,
we have a natural projection $\chi_2:\Lambda^2H\to P_2$. It is easy
to see that $\chi_2$ is exactly the element of $H^2(H,P_2)$,
corresponding to the group extension $1\to P_2\to
G/\g3G\to H\to 1$. Hence $G/\g3G$ is canonically determined by the
configuration up to an automorphism trivial on $H$ and $P_2$.
The group of such automorphisms is isomorphic to $\mbox{{\rm Hom}}(H,P_2)$.
The group $M=\g2G/\g4G$ is Abelian, but it has a non-trivial structure
of an $H$-module. Arguments similar to those used for $G/\g3G$
show that this module is also canonically determined by $C$ up to
the action of $\mbox{{\rm Hom}}(P_2,P_3)$.
Let us now consider the group extension $1\to M\to
G/\g4G\to H\to 1$. It corresponds to some $\chi_3\in H^2(H,M)$.
From the short exact sequence of $H$-modules
$0\to P_3\to M\to P_2\to0$
we get a long exact sequence
$$
\dots\to H^1(H,P_2)\stackrel{\delta}{\to}H^2(H,P_3)
\stackrel{\alpha}{\to}H^2(H,M)
\stackrel{\beta}{\to}H^2(H,P_2)
\to\dots.
$$
Let us rewrite it in the following form:
$$
\dots\to\mbox{{\rm Hom}}(H,P_2)\stackrel{\delta}{\to}\mbox{{\rm Hom}}(\Lambda^2H,P_3)
\stackrel{\alpha}{\to}H^2(H,M)
\stackrel{\beta}{\to}\mbox{{\rm Hom}}(\Lambda^2H,P_2)
\to\dots.
$$
We have $\beta(\chi_3)=\chi_2$, thus $\chi_3$ lies in $\beta^{-1}(\chi_2)$
which is a principal homogeneous space over an Abelian group
$\mbox{{\rm Hom}}(\Lambda^2H,P_3)/\delta\mbox{{\rm Hom}}(H,P_2)$.
Recall that $M$ is determined by $C$ only up to the action of
an Abelian group
$\mbox{{\rm Hom}}(P_2,P_3)$, hence only the orbit $\bar{\chi}_3$ of $\chi_3$
with respect to this action
has an invariant meaning. The set of orbits $Y=\beta^{-1}(\chi_2)/\mbox{{\rm Hom}}(P_2,P_3)$
is a principal homogeneous space over the group $T=\mbox{{\rm Hom}}(R_2,
P_3)/\bar{\delta}\mbox{{\rm Hom}}(H,P_2)$, where $\bar{\delta}$ is the composition
of $\delta$ with the natural projection $\mbox{{\rm Hom}}(\Lambda^2H,P_3)\to\mbox{{\rm Hom}}(R_2,P_3)$.
We see that $Y$, $T$, and the action of $T$ on $Y$ are determined by
$C$ canonically, thus any choice of admissible $\mathcal{R}$ provides us
with a well-defined element $\bar{\chi}_3(\mathcal{R})\in Y$.
Now let us have two admissible sets $\mathcal{R}$ and $\mathcal{R}'$.
We consider corresponding groups $G=G(\mathcal{R})$ and $G'=G(\mathcal{R}')$.
Let $\kappa(\mathcal{R},\mathcal{R}')=\bar{\chi}_3(\mathcal{R})-\bar{\chi}_3(\mathcal{R}')\in T$.
From this definition it is clear that
$\kappa(\mathcal{R},\mathcal{R}'')=\kappa(\mathcal{R},\mathcal{R}')+\kappa(\mathcal{R}',\mathcal{R}'')$
for any admissible $\mathcal{R}$, $\mathcal{R}'$, and $\mathcal{R}''$.
The groups $G/\g2G$ and $G'/\g2G'$ are both canonically isomorphic to
$H$. Thus we have a canonical isomorphism $\lambda_1:G/\g2G\to G'/\g2G'$.
As we noted above, an isomorphism $\lambda_2:G/\g3G\to G'/\g3G'$
extending $\lambda_1$ always exists, although it is not determined
canonically.
\begin{theorem}
An isomorphism $\lambda_3:G/\g4G\to G'/\g4G'$ extending
$\lambda_1$ exists if and only if $\kappa(\mathcal{R},\mathcal{R}')=0$
in $T$.
\end{theorem}
\noindent {\bf Proof:} $\,$
As we have shown, the Lie algebras $\mbox{gr}(G/\g4G)$ and $\mbox{gr}(G'/\g4G')$
are canonically isomorphic and generated by $G/\g2G$ and $G'/\g2G'$
correspondingly. Therefore, if $\lambda_3$
exists, then $\mbox{gr}\lambda_3$ is uniquely determined and coincides
with this canonical isomorphism. Hence $\kappa(\mathcal{R},\mathcal{R}')=0$.
In the opposite direction, if $\kappa(\mathcal{R},\mathcal{R}')=0$, then
the extensions $1\to M\to G/\g4G\to H\to 1$ and $1\to M\to
G'/\g4G' \to H\to 1$ differ by an automorphism of $M$, thus
after applying this automorphism they become equivalent.
Hence there exists $\lambda_3$ extending $\lambda_1$.
\qed
In order to use the invariant provided by this theorem, we must describe
$P_2$ and $P_3$. What can be said in the general case is the following.
Let us identify $L_2$ with $\Lambda^2H$. We denote by $(x_i^*)$ the basis
of $H^*$ dual to the basis $(x_i)$ of $H$. Then $L_2^*=\Lambda^2H^*$
and $R_2^\perp\subset\Lambda^2H^*$ is generated by the following elements:
$\omega_{ijk}=(x_i^*-x_j^*)\wedge(x_j^*-x_k^*)
=x_i^*\wedge x_j^*+x_j^*\wedge x_k^*+
x_k^*\wedge x_i^*$ for $l_i\succ l_j\cap l_k\in\mathcal{P}_0$ and
$\omega_{ij}=x_i^*\wedge x_j^*$
for $l_0\succ l_i\cap l_j$. It is easy to show that $(R_2^\perp)^\perp=
R_2$, hence $P_2$ is a free Abelian group and $P_2^*\cong R_2^\perp$.
Therefore we will consider the forms $\omega_{ijk}$ and $\omega_{ij}$
defined above as the elements of $P_2^*$.
We have an exact sequence of Abelian groups:
$$
0\to \Lambda^3H\stackrel{d}{\to}H\otimes L_2
\stackrel{c}{\to}L_3\to0
$$
where $d(x\wedge y\wedge z)=x\otimes[y,z]+y\otimes[z,x]+z\otimes[x,y]$
and $c(x\otimes f)=[x,f]$. As we know, $R_3=[H,R_2]$, thus
$R_3=c(H\otimes R_2)$. From the dual exact sequence
$$
0\leftarrow \Lambda^3H^*\stackrel{\ d^*}{\leftarrow}H^*\otimes\Lambda^2H^*
\stackrel{\ c^*}{\leftarrow}L_3^*\leftarrow0
$$
we get that $R_3^\perp=\ker d^*|_{H^*\otimes R_2^\perp}$. Therefore
$R_3^\perp$ contains the elements of the form
$S_{ijk}=(x_i^*-x_j^*)\otimes\omega_{ijk}$ for
$l_i\succ l_j\cap l_k\in\mathcal{P}_0$ and $S_{ij}=x_i^*\otimes\omega_{ij}$
for $l_0\succ l_i\cap l_j$. I do not claim that these elements
generate $R_3^\perp$; as we shall see in the next section, they do not,
but I do not know any general description of the rest of $R_3^\perp$.
Let us now obtain a more explicit method of calculating $\kappa(\mathcal{R},\mathcal{R}')$
in the case when $\mathcal{R}=\mathcal{R}(g)$ and $\mathcal{R}'=\mathcal{R}(g')$ for some mappings
$g,g':\mathcal{A}\to F$. The following propositions follow easily from the definitions.
\begin{proposition}
If $g(i,p)\equiv g'(i,p)\pmod{\g2F}$ for any $i=1,\dots,n$ and
$p\prec l_i$, then $\kappa(\mathcal{R},\mathcal{R}')=0$. In other words
$\bar{\chi}_3(\mathcal{R}(g))$ depends only on $\bar{g}:\mathcal{A}\to H$.
\end{proposition}
Let us consider an Abelian group $A=H^\mathcal{A}$. For $a\in A$ we define
$\tilde{\tau}a\in\mbox{{\rm Hom}}(R_2,P_3)$ by setting $$\tilde{\tau}a(\bar{r}(i,p))=
[[x_i,a(i,p)],\sum_{j:p\prec l_j}x_j]+[x_i,\sum_{j:p\prec l_j}[x_j,a(j,p)]]
+R_3.$$ Let $\tau a=\tilde{\tau}a+\bar{\delta}\mbox{{\rm Hom}}(H,P_2)$.
\begin{proposition}
If $\mathcal{R}=\mathcal{R}(g)$ and $\mathcal{R}'=\mathcal{R}(g')$, then
$\kappa(\mathcal{R},\mathcal{R}')=\tau(\bar{g}-\bar{g'})$.
\end{proposition}
It would be interesting to find $\ker\tau$. Here are some partial results
in this direction.
Let us define the following elements of $A$:
\ \ $a^{(0)}_{i,p}$ for $i\in\{1,\dots,n\}$, $p\in\mathcal{P}_0$, $p\prec l_i$;
\ \ $a^{(1)}_{i,p}$ for $i\in\{1,\dots,n\}$, $p\in\mathcal{P}_0$;
\ \ $a^{(2)}_{i,p_1,p_2}$ for $i\in\{1,\dots,n\}$, $p_1,p_2\in\mathcal{P}_0$,
$p_1,p_2\prec l_i$ ($p_1$ and $p_2$ not necessarily distinct).
$$
a^{(0)}_{i,p}(j,q)=\delta_{i,j}\delta_{p,q}x_i;
$$
$$
a^{(1)}_{i,p}(j,q)=\delta_{p,q}x_i;
$$
$$
a^{(2)}_{i,p_1,p_2}(j,q)=\delta_{i,j}\delta_{p_1,q}\sum_{k:p_2\succ l_k}x_k.
$$
Let $U$ be a subgroup of $A$ generated by them.
\begin{proposition}
$U\subseteq\ker\tilde{\tau}$.
\end{proposition}
Let $B$ be the subgroup of $A$ consisting of the functions which do not
depend on $p$.
\begin{proposition}
$B\subseteq\tilde{\tau}^{-1}(\bar{\delta}\mbox{{\rm Hom}}(H,P_2))$.
\end{proposition}
Let $W=A/(U+B)$. The homomorphism $\tau$ is the composition of the
projection $\pi:A\to W$ and a homomorphism $\bar{\tau}:W\to T$.
I do not know whether $\bar{\tau}$ is always injective, but it is so for
the example which we will consider in the next section.
\section{Computations for MacLane configuration}
The MacLane matroid $ML_8$ can be defined as the affine plane over
$\mbox {$\mathbb{F}$}_3$ with one element deleted. The corresponding configuration $C_8$
has $8$ lines $l_0,l_1,\dots,l_7$ and $12$ points $p_{012}$, $p_{034}$, $p_{056}$,
$p_{07}$, $p_{135}$, $p_{147}$, $p_{16}$, $p_{23}$, $p_{246}$,
$p_{257}$, $p_{367}$, $p_{45}$
with $p_{ijk}\prec l_i,l_j,l_k$ and $p_{ij}\prec l_i,l_j$.
Hence, $\mathcal{P}_0$ consists of the last $8$ of these points.
It is easy to show that any complex realization of $C_8$ is projectively equivalent
to a realization of the following form:
\begin{eqnarray*}
\phi l_0 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid z_0=0\}, \\
\phi l_1 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid z_1=0\}, \\
\phi l_2 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid z_1=z_0\}, \\
\phi l_3 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid z_2=0\}, \\
\phi l_4 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid z_2=z_0\}, \\
\phi l_5 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid z_2+\omega z_1=0\}, \\
\phi l_6 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid z_2+\omega z_1=(\omega+1)z_0\}, \\
\phi l_7 & = & \{(z_0:z_1:z_2)\in\mbox {$\mathbb{C}$} P^2\mid (\omega+1)z_1+z_2=z_0\},
\end{eqnarray*}
where $\omega$ is a root of the polynomial $x^2+x+1$. The points
$\phi p$ for $p\in\mathcal{P}$ are defined as intersections of corresponding
lines.
Let us denote the realization of $C_8$, corresponding to
$\omega=\exp(2\pi i/3)$ (to $\omega=\exp(-2\pi i/3)$), by $\phi^+$
(resp.\ by $\phi^-$), and let $X^\pm=\mbox {$\mathbb{C}$} P^2\setminus\bigcup_{l\in\mathcal{L}}\phi^\pm l$.
Let $G^+=\pi_1(X^+)/\gamma_4\pi_1(X^+)$
and $G^-=\pi_1(X^-)/\gamma_4\pi_1(X^-)$.
\begin{theorem}
There exists no isomorphism
$G^+\to G^-$
such that the induced isomorphism $H_1(X^+,\mbox {$\mathbb{Z}$})\to H_1(X^-,\mbox {$\mathbb{Z}$})$ maps
the elements of the canonical basis of $H_1(X^+,\mbox {$\mathbb{Z}$})$ to the corresponding
elements of the canonical basis of $H_1(X^-,\mbox {$\mathbb{Z}$})$.
\end{theorem}
\noindent {\bf Proof:} $\,$
The implementation of Arvola's algorithm
gives us the following sets of defining relators for $\pi_1(X^\pm)$
(both in a free group $F$ with generators $w_1,\dots,w_7$):
for $\pi_1(X^+)$
$$
[w_5,w_3,w_1],\quad
[w_7,w_4,w_1],\quad
[w_6,w_1],\quad
[w_3,w_2],
$$
$$
[w_6,w_4,w_2],\quad
[w_7,w_5,w_5^{-1}w_2w_5],\quad
[w_7,w_6,w_6^{-1}w_3w_6],\quad
[w_5,w_3w_6w_4w_6^{-1}w_3^{-1}];
$$
and for $\pi_1(X^-)$
$$
[w_5,w_3,w_1],\quad
[w_7,w_4,w_1],\quad
[w_6,w_1],\quad
[w_3,w_5^{-1}w_2w_5],
$$
$$
[w_7^{-1}w_6w_7,w_4,w_2],\quad
[w_7,w_5,w_5^{-1}w_2w_5],\quad
[w_7,w_6,w_6^{-1}w_3w_6],\quad
[w_5,w_7w_4w_7^{-1}].
$$
These sets of relators correspond to the mappings $g^\pm:\mathcal{A}\to F$,
whose values are 1 except for the following:
\begin{eqnarray*}
g^+(3,p_{367}) & = & w_6,\\
g^+(4,p_{45}) & = & w_6^{-1}w_3^{-1},\\
g^+(2,p_{257}) & = & w_5,\\
g^-(3,p_{367}) & = & w_6,\\
g^-(4,p_{45}) & = & w_7^{-1},\\
g^-(2,p_{23}) & = & w_5,\\
g^-(2,p_{257}) & = & w_5,\\
g^-(6,p_{246}) & = & w_7.
\end{eqnarray*}
Let $a_0=\bar{g^+}-\bar{g^-}:\mathcal{A}\to H$. This mapping has only the following
non-zero values:
\begin{eqnarray*}
a_0(4,p_{45}) & = & x_7-x_6-x_3,\\
a_0(2,p_{23}) & = & -x_5,\\
a_0(6,p_{246}) & = & -x_7.
\end{eqnarray*}
By Theorem 2.1 and Proposition 2.3 it is enough to show that
$\tau a_0\neq0$ in $T$.
\begin{lemma}
The image of $a_0$ in $W$ is non-zero.
\end{lemma}
\noindent {\bf Proof:} $\,$
First we compute $\tilde{W}=A/U$. As any of the generators
$a^{(\alpha)}_{i,p}$ of $U$ is zero on $(j,q)$ with $q\neq p$,
we may consider the decompositions $A=\bigoplus_{p\in\mathcal{P}_0}A_p$,
$U=\bigoplus_{p\in\mathcal{P}_0}U_p$, and
$\tilde{W}=\bigoplus_{p\in\mathcal{P}_0}\tilde{W}_p$,
where $A_p$, $U_p$, and $\tilde{W}_p$ are defined in the obvious way.
For each $p$ the
computation is quite simple. All the $\tilde{W}_p$
turn out to be free Abelian groups, thus $\tilde{W}$ is also free Abelian,
and hence its dual $\tilde{W}^*$ is naturally isomorphic to $U^\perp
\subset A^*$.
Let $e_{ij}(p)\in A^*$ be defined by $(e_{ij}(p),a)=(x_j^*,a(i,p))$
for $a\in A$, where $(x_i^*)$ is the basis of $H^*$ dual to the basis
$(x_i)$ of $H$. The basis of $U^\perp$ consists of the following elements:
\begin{eqnarray*}
I(p_{135}) & = & e_{13}(p_{135})-e_{15}(p_{135})+e_{35}(p_{135})\\
& & {}-e_{31}(p_{135})+e_{51}(p_{135})-e_{53}(p_{135}),\\
I(p_{147}) & = & e_{14}(p_{147})-e_{17}(p_{147})+e_{47}(p_{147})\\
& & {}-e_{41}(p_{147})+e_{71}(p_{147})-e_{74}(p_{147}),\\
I(p_{367}) & = & e_{36}(p_{367})-e_{37}(p_{367})+e_{67}(p_{367})\\
& & {}-e_{63}(p_{367})+e_{73}(p_{367})-e_{76}(p_{367}),\\
I(p_{257}) & = & e_{25}(p_{257})-e_{27}(p_{257})+e_{57}(p_{257})\\
& & {}-e_{52}(p_{257})+e_{72}(p_{257})-e_{75}(p_{257}),\\
I(p_{246}) & = & e_{24}(p_{246})-e_{26}(p_{246})+e_{46}(p_{246})\\
& & {}-e_{42}(p_{246})+e_{62}(p_{246})-e_{64}(p_{246}),\\
J(p_{16}) & = & e_{12}(p_{16})-e_{62}(p_{16})+e_{64}(p_{16})\\
& & {}-e_{14}(p_{16})+e_{17}(p_{16})-e_{67}(p_{16})\\
& & {}+e_{63}(p_{16})-e_{13}(p_{16})+e_{15}(p_{16})-e_{65}(p_{16}),\\
J(p_{45}) & = & e_{43}(p_{45})-e_{53}(p_{45})+e_{51}(p_{45})\\
& & {}-e_{41}(p_{45})+e_{47}(p_{45})-e_{57}(p_{45})\\
& & {}+e_{52}(p_{45})-e_{42}(p_{45})+e_{46}(p_{45})-e_{56}(p_{45}),\\
J(p_{23}) & = & e_{21}(p_{23})-e_{31}(p_{23})+e_{35}(p_{23})\\
& & {}-e_{25}(p_{23})+e_{27}(p_{23})-e_{37}(p_{23})\\
& & {}+e_{36}(p_{23})-e_{26}(p_{23})+e_{24}(p_{23})-e_{34}(p_{23}),\\
K_1(p_{135}) & = & e_{12}(p_{135})+e_{36}(p_{135})-e_{37}(p_{135})\\
& & {}+e_{57}(p_{135})-e_{52}(p_{135})-e_{56}(p_{135}),\\
K_2(p_{135}) & = & e_{14}(p_{135})-e_{17}(p_{135})+e_{37}(p_{135})\\
& & {}-e_{34}(p_{135})-e_{36}(p_{135})+e_{56}(p_{135}),\\
K_1(p_{147}) & = & e_{12}(p_{147})+e_{13}(p_{147})-e_{15}(p_{147})\\
& & {}-e_{43}(p_{147})+e_{75}(p_{147})-e_{72}(p_{147}),\\
K_2(p_{147}) & = & e_{12}(p_{147})+e_{46}(p_{147})-e_{42}(p_{147})\\
& & {}-e_{43}(p_{147})+e_{73}(p_{147})-e_{76}(p_{147}),\\
K_1(p_{367}) & = & e_{31}(p_{367})-e_{34}(p_{367})-e_{35}(p_{367})\\
& & {}+e_{65}(p_{367})+e_{74}(p_{367})-e_{71}(p_{367}),\\
K_2(p_{367}) & = & e_{34}(p_{367})+e_{62}(p_{367})-e_{64}(p_{367})\\
& & {}-e_{65}(p_{367})+e_{75}(p_{367})-e_{72}(p_{367}),\\
K_1(p_{257}) & = & e_{21}(p_{257})+e_{53}(p_{257})-e_{51}(p_{257})\\
& & {}-e_{56}(p_{257})+e_{76}(p_{257})-e_{73}(p_{257}),\\
K_2(p_{257}) & = & e_{24}(p_{257})-e_{21}(p_{257})-e_{26}(p_{257})\\
& & {}+e_{56}(p_{257})+e_{71}(p_{257})-e_{74}(p_{257}),\\
K_1(p_{246}) & = & e_{21}(p_{246})+e_{47}(p_{246})-e_{41}(p_{246})\\
& & {}-e_{43}(p_{246})+e_{63}(p_{246})-e_{67}(p_{246}),\\
K_2(p_{246}) & = & e_{25}(p_{246})-e_{27}(p_{246})+e_{43}(p_{246})\\
& & {}+e_{67}(p_{246})-e_{63}(p_{246})-e_{65}(p_{246}).
\end{eqnarray*}
Let us consider the following linear functional $t:\tilde{W}\to
\mbox {$\mathbb{Z}$}/3\mbox {$\mathbb{Z}$}$
$$
t=2I(p_{367})+J(p_{45})+K_1(p_{135})-K_2(p_{147})+K_1(p_{257})
-K_1(p_{246})+3\mbox {$\mathbb{Z}$}.
$$
It is not hard to check that $t|_B=0$ and $t(a_0)=1$.
Hence the image of $a_0$ in $W=\tilde{W}/B$ is non-zero. \qed
\begin{lemma}
$U=\ker\tilde{\tau}$ for the configuration $C_8$.
\end{lemma}
\noindent {\bf Proof:} $\,$
In order to prove this, we must first study $P_3$. It turns out
that the basis of $R_3^\perp$ consists of
$S_{ij}=x_i^*\otimes x_i^*\wedge x_j^*$
for $l_0\succ l_i\cap l_j$,
\ $S_{ijk}=(x_i^*-x_j^*)\otimes(x_i^*-x_j^*)\wedge(x_j^*-x_k^*)$ for
$l_i\succ l_j\cap l_k\in\mathcal{P}_0$ with $i<j$, $i<k$ and the following additional
elements:
\begin{eqnarray*}
T_0 & = &
x_7^*\otimes(x_1^*-x_3^*)\wedge(x_3^*-x_5^*)
+(x_2^*-x_3^*)\otimes(x_1^*-x_4^*)\wedge(x_4^*-x_7^*)
\\ & & {}
+(x_4^*-x_5^*)\otimes(x_3^*-x_6^*)\wedge(x_6^*-x_7^*)
\\ & & {}
+(x_1^*-x_6^*)\otimes(x_2^*-x_5^*)\wedge(x_5^*-x_7^*)
\\ & & {}
-x_7^*\otimes(x_2^*-x_4^*)\wedge(x_4^*-x_6^*)
+(x_4^*-x_5^*)\otimes x_1^*\wedge x_2^*
\\ & & {}
-(x_1^*-x_6^*)\otimes x_3^*\wedge x_4^*
+(x_2^*-x_3^*)\otimes x_5^*\wedge x_6^*
,\\ T_1 & = &
x_7^*\otimes(x_1^*-x_3^*)\wedge(x_3^*-x_5^*)
-x_3^*\otimes(x_1^*-x_4^*)\wedge(x_4^*-x_7^*)
\\ & & {}
-x_5^*\otimes(x_3^*-x_6^*)\wedge(x_6^*-x_7^*)
+x_1^*\otimes(x_2^*-x_5^*)\wedge(x_5^*-x_7^*)
\\ & & {}
-(x_5^*-x_7^*)\otimes x_1^*\wedge x_2^*
-(x_1^*-x_7^*)\otimes x_3^*\wedge x_4^*
\\ & & {}
-(x_3^*-x_7^*)\otimes x_5^*\wedge x_6^*
,\\ T_2 & = &
x_2^*\otimes(x_1^*-x_3^*)\wedge(x_3^*-x_5^*)
+(x_2^*-x_5^*)\otimes(x_3^*-x_6^*)\wedge(x_6^*-x_7^*)
\\ & & {}
+(x_3^*-x_6^*)\otimes(x_2^*-x_5^*)\wedge(x_5^*-x_7^*)
-x_3^*\otimes(x_2^*-x_4^*)\wedge(x_4^*-x_6^*)
\\ & & {}
+(x_3^*-x_5^*)\otimes x_1^*\wedge x_2^*
-(x_2^*-x_6^*)\otimes x_3^*\wedge x_4^*
\\ & & {}
+(x_2^*-x_3^*)\otimes x_5^*\wedge x_6^*
,\\ T_3 & = &
x_6^*\otimes(x_1^*-x_3^*)\wedge(x_3^*-x_5^*)
-(x_3^*-x_6^*)\otimes(x_1^*-x_4^*)\wedge(x_4^*-x_7^*)
\\ & & {}
-(x_1^*-x_4^*)\otimes(x_3^*-x_6^*)\wedge(x_6^*-x_7^*)
-x_1^*\otimes(x_2^*-x_4^*)\wedge(x_4^*-x_6^*)
\\ & & {}
+(x_4^*-x_6^*)\otimes x_1^*\wedge x_2^*
-(x_1^*-x_6^*)\otimes x_3^*\wedge x_4^*
\\ & & {}
+(x_1^*-x_3^*)\otimes x_5^*\wedge x_6^*
,\\ T_4 & = &
x_4^*\otimes(x_1^*-x_3^*)\wedge(x_3^*-x_5^*)
+(x_2^*-x_5^*)\otimes(x_1^*-x_4^*)\wedge(x_4^*-x_7^*)
\\ & & {}
+(x_1^*-x_4^*)\otimes(x_2^*-x_5^*)\wedge(x_5^*-x_7^*)
-x_5^*\otimes(x_2^*-x_4^*)\wedge(x_4^*-x_6^*)
\\ & & {}
+(x_4^*-x_5^*)\otimes x_1^*\wedge x_2^*
-(x_1^*-x_5^*)\otimes x_3^*\wedge x_4^*
\\ & & {}
+(x_2^*-x_4^*)\otimes x_5^*\wedge x_6^*
.\end{eqnarray*}
What is more, $(R_3^\perp)^\perp=R_3$, hence $P_3$ is a free
Abelian group and $P_3^*\cong R_3^\perp$.
Therefore it is enough to show that $U^\perp=\mbox{\rm Im}\,\tilde{\tau}^*$.
We identify $(\mbox{{\rm Hom}}(R_2,P_3))^*$ with $R_2\otimes P_3^*$.
It is easy to see that
\begin{eqnarray*}
I(p_{135}) & = & \tilde{\tau}^*(r(1,p_{135})\otimes S_{135}),\\
I(p_{147}) & = & \tilde{\tau}^*(r(1,p_{147})\otimes S_{147}),\\
J(p_{16}) & = & \tilde{\tau}^*(r(1,p_{16})\otimes T_3),\\
K_1(p_{135}) & = & \tilde{\tau}^*(r(1,p_{135})\otimes(T_0-T_3)),\\
K_2(p_{135}) & = & \tilde{\tau}^*(r(1,p_{135})\otimes T_3),\\
K_1(p_{147}) & = & -\tilde{\tau}^*(r(1,p_{147})\otimes T_0),\\
K_2(p_{147}) & = & -\tilde{\tau}^*(r(1,p_{147})\otimes T_3).
\end{eqnarray*}
Thus $(U^\perp)_p\subset\mbox{\rm Im}\,\tilde{\tau}^*$ for $p=p_{135},p_{147},p_{16}$.
Now we can use the fact that the group generated by the permutations
$(16)(25)(34)$ and $(135)(246)$ acts naturally on $C_8$ by automorphisms
to prove that this holds for any $p\in\mathcal{P}_0$. Hence
$U^\perp\subset\mbox{\rm Im}\,\tilde{\tau}^*$. Combining this with Proposition 2.4
we get the statement of the Lemma.
\qed
\begin{lemma} $B=\tilde{\tau}^{-1}(\bar{\delta}\mbox{{\rm Hom}}(H,P_2))$
for the configuration $C_8$.
\end{lemma}
\noindent {\bf Proof:} $\,$
We will first show that if $\bar{\delta}f\in\tilde{\tau}A$ for some
$f\in\mbox{{\rm Hom}}(H,P_2)$, then \newline
1) $(\omega_{ij},f(x_k))=0$ for any
$i,j,k$ such that $l_i\cap l_j\prec l_0$, $k\neq i,j$ and\newline
2) $(\omega_{ijk},f(x_m))=0$ for any $i,j,k,m$ such that
$l_i\succ l_j\cap l_k\in\mathcal{P}_0$, $m\neq i,j,k$.
Suppose that 1) does not hold. Then, as no four pairwise distinct lines of
$C_8$ meet in one point, we have $p=l_i\cap l_k\in\mathcal{P}_0$ and
$(S_{ij},\bar{\delta}f(r(k,p)))=(\omega_{ij},f(x_k))\neq0$.
But $(S_{ij},\tilde{\tau}a(r(k,p)))=0$ for any $a\in A$,
hence 1) must hold.
Suppose that 2) does not hold. Then we have either $p=l_i\cap l_m\in\mathcal{P}_0$
or $p'=l_j\cap l_m\in\mathcal{P}_0$. Due to the symmetry of $(ijk)$ we
may suppose that $p=l_i\cap l_k\in\mathcal{P}_0$. Then
$(S_{ijk},\bar{\delta}f(r(m,p)))=(\omega_{ijk},f(x_m))\neq0$.
But $(S_{ijk},\tilde{\tau}a(r(m,p)))=0$ for any $a\in A$,
hence 2) must also hold.
It follows that we can choose $\tilde{f}\in\mbox{{\rm Hom}}(H,\Lambda^2H)$ with
$f(x)=\tilde{f}(x)+R_2$ in such a way that $\tilde{f}(x_k)=x_k\wedge
b(k)$ for some $b:\{1,\dots,7\}\to H$. Hence
$\bar{\delta}f=\tilde{\tau}a$, where $a\in B$, $a(k,p)=b(k)$
for any $k$ and $p$. \qed
From the last two lemmas it follows that $\bar{\tau}:W\to T$
is an inclusion, hence $\tau a_0\neq0$. This concludes the proof of the
Theorem. \qed
Note that the groups $\pi_1(X^+)$ and $\pi_1(X^-)$ are, in fact, isomorphic,
for there exists a homeomorphism $X^+\to X^-$ induced by complex
conjugation. But since this homeomorphism reverses the coorientation of the lines,
the corresponding isomorphism of the first homology groups maps the canonical
basis of one of them to minus the canonical basis of the other.
\section{Combinatorially equivalent arrangements
with different fundamental groups}
We define the configuration $C_{13}$ by gluing together two MacLane
configurations in the following way. Let $C_8=(\mathcal{L},\mathcal{P},\succ)$
and $C'_8=(\mathcal{L}',\mathcal{P}',\succ)$ be two copies of the MacLane configuration
($\mathcal{L}=\{l_0,\dots,l_7\}$, $\mathcal{P}=\{p_{012},\dots,p_{45}\}$,
$\mathcal{L}'=\{l'_0,\dots,l'_7\}$, $\mathcal{P}'=\{p'_{012},\dots,p'_{45}\}$).
Let us introduce an equivalence relation on $\mathcal{L}\cup\mathcal{L}'$
by saying that $l_0\sim l'_0$, $l_1\sim l'_1$, and $l_2\sim l'_2$,
and an equivalence relation on $\mathcal{P}\cup\mathcal{P}'$
by saying that $p_{012}\sim p'_{012}$. Let $\mathcal{P}''=\{p''_{ij}\mid
i,j\in\{3,\dots,7\}\}$. We set $\tilde{\mathcal{L}}=(\mathcal{L}\cup\mathcal{L}')/{\sim}$,
$\tilde{\mathcal{P}}=((\mathcal{P}\cup\mathcal{P}')/{\sim})\cup\mathcal{P}''$ and the incidence relation
between $\tilde{\mathcal{L}}$ and $(\mathcal{P}\cup\mathcal{P}')/{\sim}$ induced by that of
$C_8$, $C'_8$, and between $\tilde{\mathcal{L}}$ and $\mathcal{P}''$
defined by $p''_{ij}\prec l_i,l'_j$.
Let $C_{13}=(\tilde{\mathcal{L}},\tilde{\mathcal{P}},\succ)$. We denote by $\bar{l_i}$
the equivalence class of $l_i$ ($i=0,1,2$), and by $\bar{p}_{012}$
the equivalence class of $p_{012}$.
Let us construct realizations $\phi^{++}$ and $\phi^{+-}$ of $C_{13}$
by gluing together realizations $\phi^+$ and $\phi^-$ of $C_8$.
Let $\psi$ be a generic projective transformation of $\mbox {$\mathbb{C}$} P^2$
with the condition that $\psi\phi^+l_i=\phi^+l_i(=\phi^-l_i)$
for $i=0,1,2$. We set $\phi^{+\pm}\bar{l_i}=\phi^+l_i$ for $i=0,1,2$,
$\phi^{+\pm}l_i=\phi^+l_i$ and $\phi^{+\pm}l'_i=\psi\phi^\pm l_i$
for $i=3,\dots,7$. For a generic $\psi$ the intersection points of
the complex lines $\phi^{+\pm}l$ for all $l\in\tilde{\mathcal{L}}$
are in one-to-one correspondence with the elements of $\tilde{\mathcal{P}}$,
hence we get realizations of $C_{13}$.
Further on we will write $C_{13}=(\mathcal{L},\mathcal{P},\succ)$
instead of $C_{13}=(\tilde{\mathcal{L}},\tilde{\mathcal{P}},\succ)$, $l_i$ instead
of $\bar{l_i}$, and $l_{i+5}$ instead of $l'_i$, in order to
make the notation compatible with that of section 2.
Let $X^{+\pm}=\mbox {$\mathbb{C}$} P^2\setminus\bigcup\phi^{+\pm}l$.
\begin{theorem}
The groups $\pi_1(X^{++})$ and $\pi_1(X^{+-})$ are not isomorphic.
\end{theorem}
\noindent {\bf Proof:} $\,$
Let $G^{++}=\pi_1(X^{++})$ and $G^{+-}=\pi_1(X^{+-})$. It is sufficient to show
that the groups $G^{++}/\g4G^{++}$ and $G^{+-}/\g4G^{+-}$ are not isomorphic.
We will start with the study of the group $\Gamma$ of automorphisms of $C_{13}$.
It is easy to see that any automorphism $\sigma\in\Gamma$ preserves the set
$\{l_0,l_1,l_2\}$ and either preserves each of the sets
$\{l_3,\dots,l_7\}$ and $\{l_8,\dots,l_{12}\}$, or interchanges them.
Moreover, the action of $\sigma$ on $\{l_0,l_1,l_2\}$ may be arbitrary,
but after it is determined there are only two possibilities, with
$\sigma l_3\in\{l_3,\dots,l_7\}$ for one of them
and $\sigma l_3\in\{l_8,\dots,l_{12}\}$ for the other.
Hence $\Gamma\cong\mbox{S}_3\times\mbox {$\mathbb{Z}$}_2$, where $\mbox{S}_3$ is the symmetric group
on the set $\{0,1,2\}$.
Let $x_0=-\sum_{i=1}^{12}x_i$. There is a natural action
of the group $\Gamma$ on $H$ induced by permutations of the elements
$x_i$, thus we may consider $\Gamma$ as a subgroup of $GL(H)$.
Let $\tilde{\Gamma}=\Gamma\times\{\pm \mbox{id}_H\}$.
Let $G_2=G^{++}/\g3G^{++}$. It is isomorphic to
$G^{+-}/\g3G^{+-}$, though not canonically. Any automorphism of $G_2$
preserves $[G_2,G_2]=P_2$, hence it acts naturally on $H=G_2/P_2$.
Let us denote the corresponding homomorphism $Aut(G_2)\to GL(H)$
by $\pi$.
\begin{lemma}
The image of $\pi$ equals $\tilde{\Gamma}$.
\end{lemma}
\noindent {\bf Proof:} $\,$
We give only a sketch of the proof, which involves simple but lengthy
computations.
The map $\Lambda^2H\to P_2$ induced by group commutator in $G_2$
is equivariant with respect to the action of $Aut(G_2)$. Hence
$R_2=\ker(\Lambda^2H\to P_2)$ and $R_2^\perp\subset\Lambda^2H^*$
are preserved by the natural action of this group.
It is not hard to check that for $C_{13}$ any decomposable element
of $R_2^\perp$ is proportional to some $\omega_{ijk}$
or $\omega_{ij}$. Generating $\mbox {$\mathbb{Z}$}$-submodules by several decomposable
forms and computing ranks of their generic elements we can reconstruct all
$\mbox {$\mathbb{Z}$}\omega_{ij}$ and $\mbox {$\mathbb{Z}$}\omega_{ijk}$ up to an automorphism of $C_{13}$.
Taking kernels of suitable linear combinations of decomposable forms
we reconstruct all $x_i$ up to a sign of each and an automorphism of
$C_{13}$. Finally, we use the unique linear dependence between the $x_i$
to reconstruct them up to multiplication of all of them simultaneously
by $\pm1$ and an automorphism of $C_{13}$, in other words up to applying
some element of $\tilde{\Gamma}$. It follows that
$\pi(u)\in\tilde{\Gamma}$ for any $u\in Aut(G_2)$. On the other hand,
it is fairly easy to construct for each $\sigma\in\tilde{\Gamma}$
some $u\in Aut(G_2)$ such that $\sigma=\pi(u)$. Hence
$\pi Aut(G_2)=\tilde{\Gamma}$. \qed
\def^{(\alpha)}{^{(\alpha)}}
\def^{(\beta)}{^{(\beta)}}
\def\wa#1{\bar{w}^{(\alpha)}_#1}
\def\wb#1{\bar{w}^{(\beta)}_#1}
Let $G$ be either $G^{++}$ or $G^{+-}$. Let us choose some arbitrary isomorphism
$\zeta:G/\g3G\to G_2$. From $\zeta$ with the help of natural projections
$G/\g4G\to G/\g3G$ and $G_2\to H$ we construct a homomorphism
$\xi:G/\g4G\to H$. Let $\wa1,\dots,\wa7$ be arbitrary pre-images with respect
to $\xi$ of $x_1,\dots,x_7$, and let $\wb1,\dots,\wb7$ be arbitrary pre-images
of $x_1,x_2,x_8,\dots,x_{12}$. We set
$\wa0=(\wa1)^{-1}\cdot\dots\cdot(\wa7)^{-1}$ and
$\wb0=(\wb1)^{-1}\cdot\dots\cdot(\wb7)^{-1}$.
Let $\bar{G}^{(\alpha)}$ be a subgroup of $G/\g4G$, generated by $\wa1,\dots,\wa7$ and
let $\bar{G}^{(\beta)}$ be a subgroup of $G/\g4G$, generated by $\wb1,\dots,\wb7$.
We say that $G$ is of class $0$ if there is an isomorphism
$\mu:\bar{G}^{(\alpha)}\to\bar{G}^{(\beta)}$ such that
$\mu\wa i\in\wb i\cdot[\bar{G}^{(\beta)},\bar{G}^{(\beta)}]$ for any $i=1,\dots,7$,
and that $G$ is of class $1$ if there is no such isomorphism.
\begin{lemma}
The class of $G$ is well defined, $G^{++}$ is of class $0$ and
$G^{+-}$ is of class $1$.
\end{lemma}
\noindent {\bf Proof:} $\,$
By Lemma 4.2 any automorphism of $G_2$ acts on $H$ by an element
of $\tilde{\Gamma}$. It follows that for any $u\in Aut(G_2)$
we can change $\wa i$ and $\wb i$ in such a way that they will
correspond to $u\zeta$ instead of $\zeta$, but the groups
$\bar{G}^{(\alpha)}$ and $\bar{G}^{(\beta)}$ will remain the same (they can only
change places). Moreover, the condition of compatibility of
$\mu$ with the newly chosen generators of these groups will be equivalent
to that for the original generators, hence the class will remain the same.
This follows from the fact that any $\sigma\in\Gamma$
preserves the partition $\mathcal{L}=\bigcup_{i=0}^2\{l_i\}\cup
\bigcup_{i=3}^7\{l_i,l_{i+5}\}$.
Therefore we may assume that the homomorphism $\xi:G/\g4G\to H$
is canonical. Clearly, the groups $\bar{G}^{(\alpha)}$
and $\bar{G}^{(\beta)}$ may be described as $G(\mathcal{R}^{(\alpha)})/\g4G(\mathcal{R}^{(\alpha)})$
and $G(\mathcal{R}^{(\beta)})/\g4G(\mathcal{R}^{(\beta)})$ for the configuration
$C_8$, where the sets of relators $\mathcal{R}^{(\alpha)}$ and $\mathcal{R}^{(\beta)}$
satisfy the condition from section 2, i.e.,
they coincide with $\{\bar{r}(i,p)\}$ modulo $\g3F$.
\def\tw#1{\tilde{w}_{#1}}
Let $\tw1,\dots,\tw{12}$ be Arvola's generators of $G$
corresponding to small loops around lines, and let
$\tilde{G}^{(\alpha)}=G/\langle\tw8,\dots,\tw{12}\rangle$,
\ $\tilde{G}^{(\beta)}=G/\langle\tw3,\dots,\tw7\rangle$. Then
$\tilde{G}^{(\alpha)}\cong\pi_1(X^+)$,
while $\tilde{G}^{(\beta)}\cong\pi_1(X^+)$ for
$G=G^{++}$ and $\tilde{G}^{(\beta)}\cong\pi_1(X^-)$ for $G=G^{+-}$.
Let us consider the natural projections
$\nu^{(\alpha)}:G\to\tilde{G}^{(\alpha)}$ and $\nu^{(\beta)}:G\to\tilde{G}^{(\beta)}$.
They induce projections
$\bar{\nu}^{(\alpha)}:G/\g4G\to\tilde{G}^{(\alpha)}/\g4\tilde{G}^{(\alpha)}$
and $\bar{\nu}^{(\beta)}:G/\g4G\to\tilde{G}^{(\beta)}/\g4\tilde{G}^{(\beta)}$.
Restricting them to $\bar{G}^{(\alpha)}$ and $\bar{G}^{(\beta)}$ correspondingly
we obtain isomorphisms
$\bar{G}^{(\alpha)}\cong\tilde{G}^{(\alpha)}/\g4\tilde{G}^{(\alpha)}$
and $\bar{G}^{(\beta)}\cong\tilde{G}^{(\beta)}/\g4\tilde{G}^{(\beta)}$.
All the constructed homomorphisms are canonical modulo the commutator subgroup,
hence from Theorem 3.1 it follows that an isomorphism
$\mu:\bar{G}^{(\alpha)}\to\bar{G}^{(\beta)}$ mapping any $\wa i$ to $\wb i$
modulo $\g2\bar{G}^{(\beta)}$ exists for $G=G^{++}$ and does not exist for
$G=G^{+-}$. \qed
Since only $G/\g4G$ was used in the computation of the class of $G$,
and none of the arbitrary choices affects the result, which is
different for $G^{++}$ and $G^{+-}$, we conclude that $G^{++}/\g4G^{++}$
is not isomorphic to $G^{+-}/\g4G^{+-}$. \qed
\section{Introduction}\label{s:intro}
In this fourth paper of the series on the discovery of isotopes \cite{Gin09,Sch09a,Sch09b}, the discovery of the tungsten isotopes is discussed. Previously, the discovery of cerium \cite{Gin09}, arsenic \cite{Sch09a} and gold \cite{Sch09b} isotopes was discussed. The purpose of this series is to document and summarize the discovery of the isotopes. Guidelines for assigning credit for discovery are (1) clear identification, either through decay curves and relationships to other known isotopes, particle or $\gamma$-ray spectra, or unique mass and Z-identification, and (2) publication of the discovery in a refereed journal. The authors and year of the first publication, the laboratory where the isotopes were produced, as well as the production and identification methods are discussed. When appropriate, references to conference proceedings, internal reports, and theses are included. When a discovery included a half-life measurement, the measured value is compared to the currently adopted value taken from the NUBASE evaluation \cite{Aud03}, which is based on the ENSDF database \cite{ENS08}. In cases where the reported half-life differed significantly from the adopted half-life (by up to approximately a factor of two), we searched the subsequent literature for indications that the measurement was erroneous. If that was not the case we credited the authors with the discovery in spite of the inaccurate half-life.
\begin{figure}
\centering
\includegraphics[width=12cm]{tungsten-year.pdf}
\caption{Tungsten isotopes as a function of when they were discovered. The different production methods are indicated. The solid black squares on the right hand side of the plot are isotopes predicted to be bound by the HFB-14 model. On the proton-rich side the light blue squares correspond to unbound isotopes predicted to have lifetimes larger than $\sim 10^{-9}$~s.}
\label{f:year}
\end{figure}
\section{Discovery of $^{158-192}$W}
Thirty-five tungsten isotopes from A = $158-192$ have been discovered so far; these include 5 stable, 7 neutron-rich and 23 proton-rich isotopes. According to the HFB-14 model \cite{Gor07}, tungsten isotopes ranging from $^{154}$W through $^{250}$W should be particle stable. Thus, there remain 62 isotopes to be discovered. In addition, it is estimated that 8 additional nuclei beyond the proton dripline could live long enough to be observed \cite{Tho04}. About one-third of all possible tungsten isotopes have been produced and identified so far.
Figure \ref{f:year} summarizes the year of first discovery for all tungsten isotopes identified by the method of discovery. The range of isotopes predicted to exist is indicated on the right side of the figure. Only four different reaction types were used to produce the radioactive tungsten isotopes; heavy-ion fusion evaporation (FE), light-particle reactions (LP), neutron-capture reactions (NC) and projectile fragmentation (PF). The stable isotopes were identified using mass spectroscopy (MS). Heavy ions are all nuclei with an atomic mass larger than A = 4 \cite{Gru77}. Light particles also include neutrons. In the following, the discovery of each tungsten isotope is discussed in detail.
\subsection*{$^{158-159}$W}\vspace{-0.85cm}
In 1981, Hofmann {\it{et al.}} discovered $^{158}$W and $^{159}$W at the Gesellschaft f\"{u}r Schwerionenforschung (GSI) in Darmstadt, Germany, as reported in their paper {\it{New Neutron Deficient Isotopes in the Range of Elements Tm to Pt}} \cite{Hof81}. Using a 4.4~A$\cdot$MeV nickel beam the isotopes were made in the fusion-evaporation processes $^{106}$Cd($^{58}$Ni,$2p4n$)$^{158}$W and $^{110}$Cd($^{58}$Ni, $2p7n$)$^{159}$W. $^{158}$W was identified by reconstructing its $\alpha$ decay into $^{154}$Hf: ``We explain these observations by the decay chain $^{158}$W $\overset{\alpha}{\rightarrow}$ $^{154}$Hf $\overset{\beta}{\rightarrow}$ $^{154}$Lu $\overset{\beta}{\rightarrow}$ $^{154}$Yb $\overset{\alpha}{\rightarrow}$ $^{150}$Er.'' The half-life of the decay was not measured. $^{159}$W was identified by reconstructing its $\alpha$ decay into $^{155}$Hf of $E_{\alpha}=6299(6)$~keV with a half-life of $t_{1/2}=7.3(27)$~ms: ``Therefore, our observations can easily be described within the frame of the decay chain $^{159}$W $\overset{\alpha}{\rightarrow}$ $^{155}$Hf $\overset{\beta}{\rightarrow}$ $^{155}$Lu $\overset{\alpha}{\rightarrow}$ $^{151}$Tm.'' The half-life agrees with the currently accepted value of 8.2(7)~ms.
\subsection*{$^{160}$W}\vspace{-0.85cm}
Hofmann {\it{et al.}} were the first to discover $^{160}$W in 1979 at GSI as reported in {\it{Alpha Decay Studies of Very Neutron Deficient Isotopes of Hf, Ta, W, and Re}} \cite{Hof79}. Beams of $^{58}$Ni were incident on various targets of silver, palladium, and rhodium. Fusion-evaporation products were imbedded into a silicon surface barrier detector after leaving the target. By measuring the products' $\alpha$ decays, rare neutron deficient isotopes, such as $^{160}$W, were found: ``In the investigated reactions the eleven new isotopes ... $^{160}$W ... could be identified.'' The $\alpha$-particle decay energy of $^{160}$W was measured to be 5920(10)~keV. The half-life for this decay was not determined.
\subsection*{$^{161-164}$W}\vspace{-0.85cm}
In 1973, Eastham and Grant were the first to produce the isotopes $^{161}$W, $^{162}$W, $^{163}$W, and $^{164}$W as reported in {\it{Alpha Decay of Neutron-Deficient Isotopes of Tungsten}} \cite{Eas73}. Magnesium beams of energies between 110 and 204~MeV from the Manchester University Hilac were used on samarium targets. $^{161}$W and $^{162}$W were produced in the fusion-evaporation reactions $^{144}$Sm($^{24}$Mg,$xn$) and $^{163,164}$W were both produced in the two reactions $^{144}$Sm($^{24}$Mg,$xn$) and $^{147}$Sm($^{24}$Mg,$xn$). The isotopes were identified by their radioactivity using a helium jet technique. The authors state for the observation of $^{161}$W: ``We make the tentative suggestion that $^{161}$W may decay by emission of an $\alpha$-particle of energy about 5.75~MeV.'' This energy was later confirmed \cite{Hof79}. For $^{162}$W, the $\alpha$ decay energy was found to be $E_{\alpha}=5.53(1)$~MeV and only an upper limit for the lifetime was quoted: ``We have not been able to measure the lifetime of $^{162}$W. Practically no events at all are seen at 5.53 MeV in the observation of a catcher plate flipped out of the helium jet, so the lifetime must be considerably shorter than the dead time of 1/4~s.'' This limit was later found to be incorrect \cite{Hof79}. The $\alpha$-decay energy of $^{163}$W was 5.385(5)~MeV with a half-life of 2.5(3)~s, which agrees with the accepted value of 2.8(2)~s. The $\alpha$-decay energy of $^{164}$W was 5.153(5)~MeV with a half-life of 6.3(5)~s. This half-life value is included in the calculation of the currently accepted value of 6.3(2)~s.
\subsection*{$^{165-166}$W}\vspace{-0.85cm}
$^{165}$W and $^{166}$W were discovered by Toth {\it{et al.}} in 1975 as reported in {\it{Production and investigation of tungsten $\alpha$ emitters including the new isotopes, $^{165}$W and $^{166}$W}} \cite{Tot75}. The isotopes were produced with $^{16}$O beams from the Oak Ridge isochronous cyclotron bombarding a $^{156}$Dy target. The ORIC gas-jet-capillary system transported the nuclei to a collection chamber where the decay of fusion-evaporation residues was measured. $^{165}$W undergoes $\alpha$ decay with a half-life of 5.1(5)~s and an associated energy of $E_{\alpha}=4.909(5)$~MeV. $^{166}$W decays by $\alpha$ emission with a half-life of 16(3)~s and an associated energy of $E_{\alpha}=4.739(5)$~MeV. The identification was supported by the following statements: ``... the energies determined in this work for $^{165}$W and $^{166}$W fit well not only as an extension of the data of Eastham and Grant \cite{Eas73} but also into the general $\alpha$-decay systematics in this mass region.'' Furthermore ``... stringent arguments can be presented to exclude the assignment of the new $\alpha$ emitters to isotopes of elements below hafnium. Thus, ... we believe that the two new $\alpha$ groups represent the $\alpha$ decay of $^{165}$W and $^{166}$W.''
The half-life measurement for $^{165}$W is presently the only measured value and the result for $^{166}$W is included in the accepted value of 19.2(6)~s.
\subsection*{$^{167}$W}\vspace{-0.85cm}
In 1985, Gerl {\it{et al.}} first identified $^{167}$W in their paper {\it{Spectroscopy of $^{166}$W and $^{167}$W and Alignment Effects in Very Neutron-Deficient Tungsten Nuclei}} \cite{Ger85}. The Australian National University 14UD Pelletron accelerator was used to accelerate a $^{24}$Mg beam. $^{167}$W was created in the fusion reaction $^{147}$Sm($^{24}$Mg,$4n$)$^{167}$W. The yrast band up to high spins was measured with hyperpure germanium detectors. The statement in the introduction ``The investigation deals with the behaviour of high-spin states in $^{166}$W and $^{167}$W, nuclei which have not previously been studied'' refers to the first observation of high-spin states and the authors were apparently not aware that they were the first to discover $^{167}$W. In turn, in 1989 Meissner {\it{et al.}} were not aware of the Gerl paper in their publication {\it{Decay of the New Isotope $^{167}$W}} \cite{Mei89}.
\subsection*{$^{168}$W}\vspace{-0.85cm}
The observation of $^{168}$W was reported for the first time in 1971 by Stephens {\it{et al.}} in {\it{Some Limitations on the Production of very Neutron-Deficient Nuclei}} \cite{Ste71}. $^{168}$W was produced with a 155 MeV $^{28}$Si beam from the Berkeley Hilac in the fusion-evaporation reaction $^{144}$Sm($^{28}$Si,$2p2n$)$^{168}$W. Yrast $\gamma$ rays were measured up to 8$^+$ with Ge(Li) detectors; ``... the rotational lines of $^{168}$W are very analogous to those of $^{124}$Ba, described above, and their ratio in- and out-of-beam is in excellent agreement with the calculation based on the above value for $k$.'' This paper is referenced as the observation of $^{168}$W only by Dracoulis in 1983 \cite{Dra83}.
\subsection*{$^{169}$W}\vspace{-0.85cm}
In 1985, Recht {\it{et al.}} first observed $^{169}$W as reported in {\it{High-Spin Structure in $^{169}$W and $^{170}$W}} \cite{Rec85}. Neon beams ranging from 105 to 125 ~MeV from the Hahn Meitner Institut Berlin VICKSI accelerator facility bombarded a gadolinium target. The fusion-evaporation reaction $^{154}$Gd($^{20}$Ne,$5n$)$^{169}$W was used to produce $^{169}$W. Gamma-ray spectra were measured with germanium detectors in coincidence with a high-spin filter consisting of NaI scintillation detectors. ``Levels up to about spin 30 in $^{170}$W and up to 57/2 in $^{169}$W have been identified.''
\subsection*{$^{170}$W}\vspace{-0.85cm}
Nadzhakov {\it{et al.}} discovered $^{170}$W in 1971 and presented the results in {\it{New Tungsten Isotopes}} \cite{Nad71}. $^{20,22}$Ne beams accelerated to 145$-$155 MeV by the Dubna U-300 accelerator bombarded isotopically enriched $^{155}$Gd and $^{156}$Gd targets. The isotope was produced in $x$n fusion-evaporation reactions and identified by measuring $\gamma$-ray spectra following chemical separation. The paper states: ``Figure 4 shows the chemical results, while Fig. 5 shows the rise in $^{170}$Ta activity. These results together indicate the presence of the new isotope $^{170}$W with T = 4$\pm$1 min.'' The measured half-life for $^{170}$W is close to the accepted value of 2.42(4)~m.
\subsection*{$^{171}$W}\vspace{-0.85cm}
$^{171}$W was discovered by Arciszewski {\it{et al.}} in 1983 as reported in {\it Band-crossing phenomena in $^{167,168}$Hf and $^{170,171}$W} \cite{Arc83}. The Louvain-la-Neuve CYCLONE Cyclotron accelerated $^{20}$Ne to 110 MeV and $^{171}$W was produced in the fusion-evaporation reaction $^{155}$Gd($^{20}$Ne,4n). Two Compton-suppression spectrometers located at +90$^\circ$ and $-$90$^\circ$ with respect to the beam direction recorded $\gamma$-$\gamma$-coincidences. ``Since the other rotational band could be assigned to $^{169}$W or $^{171}$W, or even to a tantalum isotope (through an xnp channel), excitation function measurements were undertaken... The comparison of fig. 3 clearly shows that the newly observed band belongs to $^{171}$W.'' A previous half-life measurement of $^{171}$W reported a value of 9.0(15)~m \cite{Nad71} which differs by almost a factor of four from the accepted value and was thus not credited for the discovery of $^{171}$W.
\subsection*{$^{172}$W}\vspace{-0.85cm}
$^{172}$W was first reported by Stephens {\it{et al.}} in 1964 in {\it{Properties of High-Spin Rotational States in Nuclei}} \cite{Ste64}. The experimental details were included in a subsequent longer paper \cite{Ste65}. Excitation functions of $^{11}$B, $^{14}$N, and $^{19}$F beams from the Lawrence Radiation Laboratory Hilac on $^{159}$Tb, $^{165}$Ho, and $^{169}$Tm were measured. To produce $^{172}$W, a 117~MeV $^{14}$N beam bombarded a holmium target, resulting in the fusion-evaporation reaction $^{165}$Ho($^{14}$N,7$n$)$^{172}$W. A single wedge-gap electron spectrometer was used to detect conversion electrons. In addition, $\gamma$-ray spectra were recorded with NaI(Tl) and
germanium detectors. The original paper only shows the rotational constants as a function of spin. The second paper presents 9 $\gamma$-ray transitions up to spin 18 in $^{172}$W and states: ``Mass assignments were made on the basis of the change in bombarding energy necessary to go from the maximum of the excitation function of one even nucleus to that of the next lighter one-about 30~MeV per pair of neutrons out. The close similarity in bombarding energy to produce a given reaction in any of these three targets coupled with the results of a number of cases where a given product could be made via more than one reaction leaves no doubt as to the mass assignments.''
\subsection*{$^{173}$W}\vspace{-0.85cm}
In 1963 Santoni {\it{et al.}} reported the discovery of $^{173}$W in {\it{Spectres $\gamma$ et Periodes de Quatre Isotopes de A Impair du Tungstene et Du Tantale}} \cite{San63}. Tantalum oxide was bombarded with protons between 40 and 155 MeV from the Orsay synchrocyclotron. The isotopes were separated using a magnetic separator and their $\gamma$-ray spectra were measured. ``Des cristaux NaI(Tl) 7.5x7.5 cm et 2.5x2.5 cm reli\'es \`a un analyseur \`a 256 canaux ont permis de d\'eterminer les p\'eriodes et d'identifier les spectres $\gamma$ des isotopes 173, 175, 177 et 179 du tungst\`ene et du tantale.'' (7.5x7.5 cm and 2.5x2.5 cm NaI(Tl) crystals with a 256 channel analyzer were used to identify the $\gamma$ spectra of tungsten and tantalum isotopes 173, 175, 177, and 179.) The extracted half-life of $^{173}$W of 16(5)~m is somewhat larger than the accepted value of 7.6(2)~m; however, the measured half-life of the daughter $^{173}$Ta was correct.
\subsection*{$^{174}$W}\vspace{-0.85cm}
$^{174}$W was first reported in the same paper as $^{172}$W by Stephens {\it{et al.}} in 1964 in {\it{Properties of High-Spin Rotational States in Nuclei}} \cite{Ste64}. The experimental details were included in the subsequent longer paper \cite{Ste65}. Excitation functions of $^{11}$B, $^{14}$N, and $^{19}$F beams from the Lawrence Radiation Laboratory Hilac on $^{159}$Tb, $^{165}$Ho, and $^{169}$Tm were measured. $^{174}$W was produced with an 83~MeV $^{11}$B beam bombarding a thulium target in the fusion-evaporation reaction $^{169}$Tm($^{11}$B,6$n$)$^{174}$W. A single wedge-gap electron spectrometer was used to detect conversion electrons. In addition, $\gamma$-ray spectra were recorded with NaI(Tl) and germanium detectors. The original paper only shows the rotational constants as a function of spin. The second paper presents 7 $\gamma$-ray transitions up to spin 14 in $^{174}$W. The half-life of $^{174}$W was first reported one month prior to the submission of the first Stephens paper in an internal report by Santoni and Valentin \cite{San64} and independently a year later in a refereed publication by Demeter {\it{et al.}} \cite{Dem65}.
\subsection*{$^{175}$W}\vspace{-0.85cm}
In 1963 Santoni {\it{et al.}} reported the discovery of $^{175}$W together with the discovery of $^{173}$W in {\it{Spectres $\gamma$ et Periodes de Quatre Isotopes de A Impair du Tungstene et Du Tantale}} \cite{San63}. Tantalum oxide was bombarded with protons between 40 and 155 MeV from the Orsay synchrocyclotron. The isotopes were separated using a magnetic separator and their $\gamma$-ray spectra were measured. ``Des cristaux NaI(Tl) 7.5x7.5 cm et 2.5x2.5 cm reli\'es \`a un analyseur \`a 256 canaux ont permis de d\'eterminer les p\'eriodes et d'identifier les spectres $\gamma$ des isotopes 173, 175, 177 et 179 du tungst\`ene et du tantale.'' (7.5x7.5 cm and 2.5x2.5 cm NaI(Tl) crystals with a 256 channel analyzer were used to identify the $\gamma$ spectra of tungsten and tantalum isotopes 173, 175, 177, and 179.) The half-life of $^{175}$W was determined to be 34(1) m. This value is consistent with the presently accepted value of 35.2(6) m.
\subsection*{$^{176-179}$W}\vspace{-0.85cm}
The isotopes $^{176}$W, $^{177}$W, $^{178}$W, and $^{179}$W were discovered by Wilkinson from Berkeley in 1950 as reported in {\it{Neutron Deficient Radioactive Isotopes of Tantalum and Wolfram}} \cite{Wil50}. Protons from the 184-inch cyclotron directed on a tantalum target created the isotopes. ``The bombardment of tantalum with protons of energy 10 to 70 Mev has led to the characterization of five new radioactive isotopes of wolfram.'' Wilkinson counted the observation of an isomeric state in $^{179}$W as the fifth isotope. They were identified following chemical separation by the measurement of K X-rays, electrons and $\gamma$ radiation. The half-life of $^{176}$W was measured to be 80(5)~m, which is near the accepted value of 2.5(1)~h. The extracted values for $^{177}$W (130(3)~m) and $^{178}$W (21.5(1)~d) are included in the accepted values of 132(2)~m and 21.6(3)~d, respectively. The half-life for $^{179}$W of 30(1)~m is close to the accepted value of 37.05(16)~m. It should be mentioned that Wilkinson and Hicks had reported a 135~m half-life in 1948; however, they could not uniquely assign it to a specific tungsten isotope (either $^{178}$W or $^{179}$W) \cite{Wil48}.
\subsection*{$^{180}$W}\vspace{-0.85cm}
Dempster reported the existence of the stable isotope $^{180}$W in 1937 in {\it{The Isotopic Constitution of Tungsten}} \cite{Dem37}. Although there was prior evidence for its existence, impurities of the tungsten electrodes prevented a firm identification. ``With pure tungsten electrodes, six photographs have been made showing the isotope at 180, and by varying the time of exposure, its intensity was estimated as approximately one one-hundredth of that of the isotope at 183. In the earlier photographs, the faint isotope was also found on two photographs of doubly charged ions, on two of triply charged ions, and on one of quadruply charged ions. Thus there can be no doubt that tungsten has a fifth faint stable isotope at mass 180.''
\subsection*{$^{181}$W}\vspace{-0.85cm}
In 1947, Wilkinson identified $^{181}$W at the University of California, Berkeley, as reported in {\it{A New Isotope of Tungsten}} \cite{Wil47}. By bombarding a thin tantalum foil with 20~MeV deuterons from the Crocker Laboratory cyclotron, $^{181}$W was produced in the reaction $^{181}$Ta($d$, $2n$)$^{181}$W. Electrons, X-rays, and $\gamma$ rays were measured. ``The tungsten fraction contained a previously unreported, single radioactivity of half-life 140$\pm$2 days.'' This value is close to the currently accepted value of 121.2(2)~d.
\subsection*{$^{182-184}$W}\vspace{-0.85cm}
Aston identified the stable tungsten isotopes $^{182}$W, $^{183}$W and $^{184}$W in 1930 at the Cavendish Laboratory in Cambridge as reported in {\it{Constitution of Tungsten}} \cite{Ast30}. The observation was ``made possible by the preparation of the volatile carbonyl, W(CO)$_6$, by Dr. A. v. Grosse, of Berlin. It was to be expected from the greater atomic weight that the photographic effect would be feeble, and only by means of very sensitive plates were lines of satisfactory intensity obtained.''
\subsection*{$^{185}$W}\vspace{-0.85cm}
In 1940, Minawaka reported the discovery of $^{185}$W in {\it{Neutron-Induced Radioactivity of Tungsten}} \cite{Min40}. Pure metallic tungsten powder was irradiated with slow and fast neutrons produced in nuclear reactions in the Tokyo cyclotron. A Lauritsen type electroscope, chemical separation and measurements with a thin-walled G-M counter and a magnetic field were used to identify $^{185}$W. ``From the relative intensities of both periods ..., it was found that the shorter period is produced practically only with slow neutrons and the longer one both with fast and slow neutrons. The above results lead to the conclusion that ... 77-day activity [is due] to W$^{185}$.'' This half-life (77(3)~d) agrees with the accepted value of 75.1(3)~d. It should be mentioned that Fajans and Sullivan confirmed the result by Minawaka later in the same year \cite{Faj40}.
\subsection*{$^{186}$W}\vspace{-0.85cm}
Aston identified the stable tungsten isotope $^{186}$W together with the isotopes $^{182-184}$W in 1930 at the Cavendish Laboratory in Cambridge as reported in {\it{Constitution of Tungsten}} \cite{Ast30}. ``Tungsten proves to have four isotopes, of which the strongest two give lines of practically identical intensity.''
\subsection*{$^{187}$W}\vspace{-0.85cm}
In 1940, Minawaka reported the discovery of $^{187}$W together with the discovery of $^{185}$W in {\it{Neutron-Induced Radioactivity of Tungsten}} \cite{Min40}. Pure metallic tungsten powder was irradiated with slow and fast neutrons produced in nuclear reactions in the Tokyo cyclotron. A Lauritsen type electroscope, chemical separation and measurements with a thin-walled G-M counter and a magnetic field were used to identify $^{187}$W. ``From the relative intensities of both periods ..., it was found that the shorter period is produced practically only with slow neutrons and the longer one both with fast and slow neutrons. The above results lead to the conclusion that 24-hour activity is due to W$^{187}$ ...'' This half-life (24.0(1)~h) is consistent with the accepted value of 23.72(6)~h. Previously, an $\sim$ 24~h half-life had been measured but could not be assigned to a specific tungsten isotope \cite{Fer35,McL35,Jae37}. It should be mentioned that Fajans and Sullivan confirmed the result by Minawaka later in the same year \cite{Faj40}.
\subsection*{$^{188}$W}\vspace{-0.85cm}
Lindner and Coleman reported the discovery of $^{188}$W in 1951 in {\it{The Identification of W$^{188}$ Formed in Neutron-Activated Tungsten by a Chemical Separation of Re$^{188}$}} \cite{Lin51a}. $^{188}$W was formed through successive neutron capture from $^{186}$W by neutron irradiation in a nuclear reactor. ``A new radioisotope of tungsten, mass 188, which was formed by successive neutron capture by the heaviest stable tungsten isotope, W$^{186}$, has been indirectly established in the presence of very large levels of other radio-tungsten isotopes. This was accomplished by observing the activity of the known Re$^{188}$ which arises as a result of the decay of the W$^{188}$.'' The measured half-life of 65(5)~d agrees with the accepted value of 69.78(5)~d. Lindner also published the results in a separate paper later in the same year \cite{Lin51b}.
\subsection*{$^{189}$W}\vspace{-0.85cm}
In 1963 Flegenheimer {\it{et al.}} from the Comisi\'{o}n Nacional de Energ\'{i}a At\'{o}mica, Buenos Aires, Argentina, discovered $^{189}$W in {\it{The $^{189}$W - $^{189}$Re Decay Chain}} \cite{Fle63}. Fast neutrons were produced by the bombardment of beryllium with 28 MeV deuterons in the synchrocyclotron. $^{189}$W was produced via the ($n$,$\alpha$) reaction on an osmium target. The half-life was measured following chemical separation. ``No 11-minutes tungsten nuclide was found after a W($d$,$p$) reaction, which excludes mass numbers 185 and 187. We therefore assign the mass number 189 to this half-life.'' The half-life agrees with the present value of 11.6(3)~m.
\subsection*{$^{190}$W}\vspace{-0.85cm}
In 1976 Haustein {\it{et al.}} observed $^{190}$W for the first time and they reported it in {\it{New neutron-rich isotope: $^{190}$W}} \cite{Hau76}. ``A new neutron-rich isotope, $^{190}$W, was produced by $^{192}$Os($n$,2$pn$)$^{190}$W (25-200~MeV neutrons) and by $^{192}$Os($p$,3$p$)$^{190}$W (92~MeV protons).'' The protons were accelerated by the Brookhaven linac injector of the alternating gradient synchrotron and the neutron irradiations were performed in the MEIN facility. The half-life was shown to be 30.0(15)~m, which is still the only measured half-life of $^{190}$W.
\subsection*{$^{191-192}$W}\vspace{-0.85cm}
In 1999, Benlliure {\it{et al.}} created the isotopes $^{191}$W and $^{192}$W as reported in {\it{Production of neutron-rich isotopes by cold fragmentation in the reaction $^{197}$Au + Be at 950~$A$~MeV}} \cite{Ben99}. A 950~A$\cdot$MeV $^{197}$Au beam from the SIS synchrotron of GSI was incident on a beryllium target, resulting in projectile fragmentation. The FRS fragment separator was used to select isotopes with a specific mass-to-charge ratio. ``The mass resolution achieved in this measurement was $A/\Delta A\approx400$ ... the isotopes ... $^{191}$W, $^{192}$W ... were clearly identified for the first time. Only isotopes with a yield higher than 15 counts were considered as unambiguously identified.''
\section{Summary}
It is interesting that six tungsten isotopes ($^{167-169}$W, $^{171-172}$W and $^{174}$W) were first identified in high-spin $\gamma$-ray spectroscopy experiments where the authors were not aware of the fact that they were the first to observe the specific isotope. The first measurement of the half-life of $^{171}$W was significantly different from the later accepted values and was not credited with the discovery. The half-lives of two other isotopes ($^{177}$W and $^{187}$W) were observed first without being assigned to the specific isotopes.
\ack
This work was supported by the National Science Foundation under grants No. PHY06-06007 (NSCL) and PHY07-54541 (REU). MH was supported by NSF grant PHY05-55445. JQG acknowledges the support of the Professorial Assistantship Program of the Honors College at Michigan State University.
\section{Introduction}
The asymmetric simple exclusion process (ASEP)
is a non-equilibrium process in which
particles hop to neighboring sites in a specific direction
subject to certain conditions \cite{ligett}.
In the simplest model, particles obey the mutual exclusion condition
due to which a site cannot be
occupied by more than one particle and a hop to the neighboring site
is possible provided this target site is empty. The particles,
after being injected
at one end of the lattice at a rate $\alpha$, hop across the lattice and
reach the other end, where they are withdrawn at a rate $1-\gamma$.
There exist other models where particles
can have attractive or repulsive interaction in addition to the
exclusion interaction \cite{kls,hager}.
All these systems exhibit
interesting boundary-induced phase transitions for which the tuning
parameters are the boundary rates, $\alpha$ and $\gamma$
\cite{krug,straley}.
In different phases, the average particle density has
distinct constant values across the bulk of
the lattice. These particle density profiles in different
phases may also differ due to the different locations
and natures of the boundary-layers.
More features, such as coexistence of high and low-density
regimes are seen in systems where particle
number is not conserved in the bulk
due to attachment and detachment of particles to and from the
lattice \cite{parameg,smsmb}.
In this coexistence phase (also known as a shock phase),
the particle density profile
has a jump discontinuity (shock) in the interior of the lattice
from a low to a high density value.
The slope and the number of such shocks in a density profile
are related to the
nature of the inter-particle interactions which determine
the fundamental current density relation \cite{sm,rakos}.
Although this relation predicts
the kind of shocks that can be seen \cite{rakos}
in the density profile, a
systematic characterization of the phase-transition
and the phase diagram
in the $\alpha-\gamma$ plane requires a detailed analysis
of the relevant equations describing the dynamics
in the steady-state \cite{smb,smsmb,jaya}.
Drawing analogies from the equilibrium phase
transitions, first order, critical \cite{straley,parameg}
and tricritical \cite{jaya}
kind of phase
transitions have been observed so far in various ASEP models.
While characterizing the phase transitions,
it is useful to study the variation of the
height or the width of the boundary-layers as $\alpha$ and $\gamma$
are changed.
In a way, these boundary-layers play important roles in deciding
the "order parameter" like quantities in these
non-equilibrium phase transitions.
Owing to their importance in describing
phase transitions, the boundary-layers for several interacting and
non-interacting models have been studied
using the techniques of boundary-layer analysis \cite{cole}.
It is found that the system usually
enters into a shock phase from a non-shock phase
due to the deconfinement of the
boundary-layer from the boundary. This
deconfinement can be described by a nontrivial
scaling exponent associated with the width of the boundary-layer \cite{smsmb}.
For the pure exclusion case, except for a critical point,
the transition from a non-shock to a shock phase
is first order in nature since a shock
of finite height is formed on the
phase boundary. The height of the shock on the phase boundary
reduces as one approaches the critical point along the phase boundary.
At the critical point, the shock height is zero on the phase
boundary and it
increases continuously as one proceeds
away from the phase-boundary further into the
shock phase. In order to visualize these features,
it is beneficial to obtain the full solution for the
density profile along with its boundary-layer. Boundary
layer analysis is useful for this purpose since it
allows us to generate a uniform approximation
for solving the steady-state particle density equation
across the entire lattice.
This steady-state equation can be obtained from the large time- and
length-scale limit (hydrodynamic limit)
of the statistically averaged
master equation that describes the particle dynamics in the
discrete form. For the simple
exclusion case, it is possible to obtain
an analytical solution of the steady-state hydrodynamic
equation for the entire density profile. This, however, may not be possible
for more complex ASEPs.
A fixed-point analysis of the hydrodynamic equation turns
out to be
general and useful \cite{smfixedpt},
since it does not involve an explicit solution of the
steady-state hydrodynamic
equation. In
particle conserving models, a boundary-layer saturates to
the constant bulk density profile asymptotically.
As a consequence of this, it is expected that the fixed-points
of the boundary-layer equation match with the bulk density values.
In other words, a boundary-layer, which is a solution
of the boundary-layer equation, is a part of the
flow trajectory of the equation flowing to
the appropriate fixed-point on the phase plane.
Thus, in order to find
out the values of the bulk-densities in different phases,
it is sufficient to determine the physically acceptable
fixed-points of the boundary-layer equation.
As a result,
the number of possible bulk phases
is given by the number of these fixed-points.
Applying this method
to a specific particle conserving two-species ASEP \cite{smfixedpt},
it is found that this system has three distinct bulk phases
corresponding to three fixed-points of the boundary-layer equations.
In addition, it is possible to predict the nature of phase transitions,
locations of the boundary-layers {\it etc.} for this system.
All these predictions
match well with the results from numerical simulations \cite{popkov}.
In a particle non-conserving case, the density is not constant
in the bulk, and therefore, the fixed-points of the boundary-layer
do not provide
the full profile since the details of the bulk dynamics
are not considered in this approach. However, it is still useful
to obtain the fixed-points of the boundary-layer equations
along with their stability properties in order to
predict the possible shapes of the density profiles under different
boundary conditions.
In the present paper, we consider a particle number non-conserving model
where particles interact repulsively. Our aim is to extend the
fixed-point analysis to a system with non-constant bulk density.
We show how this analysis helps us predict
possible shapes of the density profiles under different boundary
conditions and also understand the properties of different kinds of
shocks present in the density profile.
This particular model is chosen because of certain nontrivial shapes
of the density profiles with different kinds of shocks.
The plan of the paper is as follows. In the following section, we
describe the model. This section also contains brief discussions on the
hydrodynamic approach, boundary-layer analysis and some of the known
results. In section III, we present the phase-plane analysis of the
boundary-layer equations for the present model.
There are separate subsections on the
boundary-layer equation, its fixed-points, stability analysis of the
fixed-points and possible shapes of shocks. Section IV presents the
predictions of the possible shapes of the density profile under
different boundary
conditions. In section V, we mention a few general rules for
predicting the shapes of the density profiles and some special
features related to the shocks of this model.
We end the paper with a summary
in section VI.
\section{Model}
\subsection{Discrete description}
The asymmetric simple exclusion process that we consider here
consists of a one-dimensional lattice of $N$
sites with lattice spacing $a$.
Particles are injected at $i=1$ with rate $\alpha$ and
withdrawn at $i=N$ at a rate $1-\gamma$.
Particles, obeying mutual exclusion, hop to the right
with rates that depend on the occupancy of the neighboring site as
\begin{eqnarray}
1100\rightarrow 1010 \ \ {\rm at} \ \ {\rm rate} \ \ 1+\epsilon,\\
0101\rightarrow 0011 \ \ {\rm at} \ \ {\rm rate} \ \ 1-\epsilon,\\
0100 \rightarrow 0010 \ \ {\rm at} \ \ {\rm rate} \ \ 1,\\
1101 \rightarrow 1011 \ \ {\rm at} \ \ {\rm rate} \ \ 1.
\end{eqnarray}
Here, $0<\epsilon<1$ and $1$ ($0$) represents an occupied (unoccupied)
site. For $\epsilon \neq 0$, there is an effective
repulsion between the particles \cite{kls,hager,rakos}.
In addition, the number of particles is not conserved due to
particle detachment, $1\rightarrow 0$,
at a rate $\omega_d$ and attachment,
$0\rightarrow 1$,
at a rate $\omega_a$ at any site on the lattice.
Particle attachment and detachment are
equilibrium-like processes that do not give rise
to any particle current.
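The dynamics defined above can be checked directly with a random-sequential Monte Carlo update. The following Python sketch is purely illustrative (the lattice size, boundary rates, $\epsilon$, $\omega_a$ and $\omega_d$ are example values, not those used for any result discussed here); all rates are divided by the largest rate, $1+\epsilon$, so that they can be used as acceptance probabilities, and $\omega_a$, $\omega_d$ are kept of order $1/N$ so that $\Omega=(\omega_a+\omega_d)N$ remains finite in the hydrodynamic limit introduced below.
\begin{verbatim}
import numpy as np

# Random-sequential Monte Carlo sketch of the model (illustrative parameters).
N, alpha, gamma, eps = 200, 0.3, 0.7, 0.9
omega_a = omega_d = 0.1 / N        # O(1/N), so Omega = (w_a + w_d) N stays finite
rng = np.random.default_rng(1)
tau = np.zeros(N, dtype=int)       # occupation numbers tau_i = 0 or 1
rmax = 1.0 + eps                   # largest rate, used to normalize probabilities

def hop_rate(i):
    """Rate for a particle at site i to hop to the empty site i+1."""
    left = tau[i - 1] if i > 0 else 0          # sites beyond the chain count as empty
    right = tau[i + 2] if i + 2 < N else 0
    if left == 1 and right == 0:
        return 1.0 + eps                       # 1100 -> 1010
    if left == 0 and right == 1:
        return 1.0 - eps                       # 0101 -> 0011
    return 1.0                                 # 0100 -> 0010 and 1101 -> 1011

for sweep in range(5000):
    for _ in range(N + 1):
        i = int(rng.integers(-1, N))
        if i == -1:                            # injection at the first site
            if tau[0] == 0 and rng.random() < alpha / rmax:
                tau[0] = 1
        elif i == N - 1:                       # withdrawal at the last site
            if tau[-1] == 1 and rng.random() < (1.0 - gamma) / rmax:
                tau[-1] = 0
        elif tau[i] == 1 and tau[i + 1] == 0:  # bulk hop i -> i+1
            if rng.random() < hop_rate(i) / rmax:
                tau[i], tau[i + 1] = 0, 1
        j = int(rng.integers(0, N))            # attachment/detachment move
        if tau[j] == 1 and rng.random() < omega_d / rmax:
            tau[j] = 0
        elif tau[j] == 0 and rng.random() < omega_a / rmax:
            tau[j] = 1

print("average density in the middle half:", tau[N // 4: 3 * N // 4].mean())
\end{verbatim}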
\subsection{Hydrodynamic Approach and a brief description of the
boundary-layer analysis}
The hydrodynamic approach is based
on the lattice continuity equation which equates
the time evolution of
the particle occupancy at a given site
with the difference of currents
across its two neighboring bonds. In the continuum description, the
continuous time and space variables are $t$ and $x$ with the
latter replacing, for example, the $i$th site as $i\rightarrow x=ia$.
Upon doing a Taylor expansion of the statistically averaged
continuum version of the lattice
continuity equation in small $a$, one has the following
hydrodynamic equation
\begin{eqnarray}
\frac{\partial \rho}{\partial t}+\frac{\partial J}{\partial x}+S_0=0,
\label{fulleqn}
\end{eqnarray}
for the averaged particle density $\rho(x,t)$.
This equation has already been supplemented with the
particle non-conserving parts
\begin{eqnarray}
S_0=-\Omega(\rho_L-\rho),
\end{eqnarray}
where $\rho_L=\frac{\omega_a}{\omega_a+\omega_d}$ and
$\Omega=(\omega_a+\omega_d)N$.
The current, $J(\rho)$, consists of a bulk current
$j(\rho)$ and a diffusive current proportional to
$\frac{\partial\rho}{\partial x}$ as
\begin{eqnarray}
J=-\epsilon_0\frac{\partial \rho}{\partial x}+j(\rho).\label{bigj}
\end{eqnarray}
Here, $\epsilon_0$ is a small parameter proportional to
$a$. The diffusive current part arises naturally as one retains terms
up to $O(a^2)$ in the Taylor expansion.
In order to determine the particle density, $\rho(x)$,
in the steady-state ($\frac{\partial \rho}{\partial t}=0$),
one has to solve the differential equation with appropriate boundary
conditions.
We consider the lattice-ends to be attached to the
particle-reservoirs which maintain constant densities
$\rho(x=0)=\alpha$ and $\rho(x=1)=\gamma$.
The diffusive current part is crucial here
since due to its presence, the hydrodynamic equation becomes
a second order differential equation
and as a result we can obtain a smooth solution
satisfying both the boundary conditions.
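As a concrete illustration of this point, the steady-state equation can be rewritten as $\epsilon_0\rho''=(dj/d\rho)\,\rho'+S_0$ and treated as a two-point boundary-value problem. The sketch below does this with {\tt scipy.integrate.solve\_bvp}, using the pure-exclusion current $j(\rho)=\rho(1-\rho)$ of the next paragraph and example parameter values only; for very small $\epsilon_0$ the problem becomes stiff and a better initial guess, or continuation in $\epsilon_0$, may be needed.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

# Sketch: steady state of the hydrodynamic equation as a two-point BVP,
#   eps0 * rho'' = j'(rho) * rho' + S0(rho),  rho(0) = alpha, rho(1) = gamma,
# using j = rho(1-rho) and example parameter values.
eps0, Omega, rho_L = 0.05, 0.3, 0.9
alpha, gamma = 0.2, 0.3

def rhs(x, y):
    rho, drho = y                       # y[0] = rho, y[1] = d rho / dx
    jp = 1.0 - 2.0 * rho                # dj/drho for j = rho(1 - rho)
    S0 = -Omega * (rho_L - rho)
    return np.vstack((drho, (jp * drho + S0) / eps0))

def bc(ya, yb):
    return np.array([ya[0] - alpha, yb[0] - gamma])

x = np.linspace(0.0, 1.0, 401)
y_guess = np.vstack((alpha + (gamma - alpha) * x, np.full_like(x, gamma - alpha)))
sol = solve_bvp(rhs, bc, x, y_guess, max_nodes=200000)
print("solver status:", sol.status, "(0 means converged)")
\end{verbatim}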
The case $\epsilon=0$ is the usual ASEP with only the exclusion
interaction. In this case, the current-density relation
$j(\rho)=\rho(1-\rho)$ is exact.
The symmetric shape of the current
about its maximum at $\rho=1/2$ is a consequence of its invariance
under particle-hole exchange $\rho\rightarrow 1-\rho$.
It is well understood that the phase diagram
has low-density ($\rho<1/2$ at the bulk), high-density ($\rho>1/2$
at the bulk) and maximum current ($\rho=1/2$ at the bulk) phases
\cite{straley}.
The particle-hole symmetry is retained in $\epsilon \neq 0$ models
although the current changes non-trivially. At $\epsilon=1$, i.e. for
the extreme
repulsion case, hops such as $0101\rightarrow 0011$ are forbidden. The
current, therefore,
vanishes exactly at the half-filling ($\rho=1/2$) with the maximum
current appearing symmetrically for densities on the
two sides of $\rho=1/2$. The exact form of the current
as a function of $\rho$ for arbitrary $\epsilon$
can be found using a transfer
matrix approach \cite{hager} and it evolves from a single to a
symmetric
double peak structure as $\epsilon$ grows beyond $\epsilon_J\approx 0.8$.
A simple, analytically tractable form of the current with double
peaks can be obtained by doing a double expansion of the exact
current about $\epsilon=\epsilon_J$ and $\rho=1/2$ \cite{jaya}.
This leads to the following form for the current
\begin{eqnarray}
j(\rho)=(2r+u)/16-\frac{r}{2}(\rho-1/2)^2-u(\rho-1/2)^4,\label{jr}
\end{eqnarray}
where the constant term is chosen in such a way that $j(\rho)=0$
for $\rho=0$ or $1$. We recover the non-interacting limit,
$j(\rho)=\rho(1-\rho)$ for $r=2$ and $u=0$. The double peak
shape appears for $r<0$. In the entire analysis below,
we consider $r$ to be a small negative parameter and $u>0$.
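A quick numerical check of Eq.~(\ref{jr}) with the example values $r=-0.2$ and $u=2.2$ (also used in the figures below) confirms the double-peak structure; the extrema follow from $dj/d\rho=-r(\rho-1/2)-4u(\rho-1/2)^3=0$.
\begin{verbatim}
import numpy as np

# Sketch: evaluate the current of eq. (jr) and locate its extrema for r < 0.
r, u = -0.2, 2.2                         # example values only

def j(rho):
    return (2*r + u)/16.0 - (r/2.0)*(rho - 0.5)**2 - u*(rho - 0.5)**4

# dj/drho = 0 at rho = 1/2 (a local minimum for r < 0)
# and at (rho - 1/2)^2 = -r/(4u) (the two symmetric maxima).
extrema = [0.5, 0.5 - np.sqrt(-r/(4*u)), 0.5 + np.sqrt(-r/(4*u))]
for x in extrema:
    print(f"rho = {x:.4f}   j = {j(x):.5f}")
print("j(0) =", j(0.0), "  j(1) =", j(1.0))   # both vanish by construction
\end{verbatim}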
For the
boundary-layer analysis, it is important to consider the bulk part
and the narrow boundary-layer or the shock
regions of the density profile separately.
These boundary-layers or shocks are formed over a narrow region of width
$O(\epsilon_0)$ and they merge
to the bulk density in the appropriate asymptotic
limit. In order to study the boundary-layer and its asymptotic
approach to the bulk,
one can rescale the position variable in (\ref{fulleqn})
as $\tilde x=(x-x_0)/\epsilon_0$,
where $x_0$ is the location of the center of the boundary-layer.
Hence, for a boundary-layer satisfying the
boundary condition at $x=1$, we
have $x_0\approx 1$. For small $\epsilon_0$, the boundary-layer
approaches the bulk density
in the $\tilde x\rightarrow -\infty$
limit and satisfies the boundary condition at $\tilde x=0$.
In terms of $\tilde x$, the steady-state
hydrodynamic equation is
\begin{eqnarray}
\frac{\partial^2 \rho }{\partial \tilde x^2}-
\frac{\partial j}{\partial \tilde x}-
\epsilon_0 S_0=0.\label{blayer}
\end{eqnarray}
Since $\epsilon_0$ is a small parameter, the effect of the particle
non-conserving term, $S_0$, on the boundary-layer
is negligible. As a result the
total current, $J=j(\rho)-\frac{\partial \rho}{\partial \tilde x}$
is constant across the boundary-layer. A shock, therefore,
can be represented by a horizontal line connecting two
densities in the $j-\rho$ plane as shown in figure (\ref{fig:jvsrho}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.5 in, clip]{jrho.eps}
\caption{Current $j$ is plotted as a function of $\rho$. Lines `a'
and `b' represent an upward and a downward shock, respectively.}
\label{fig:jvsrho}
\end{center}
\end{figure}
For an upward shock ($\frac{\partial\rho}{\partial \tilde x}>0$),
this line lies below the $j(\rho)$ curve and the reverse is true
for a downward shock ($\frac{\partial\rho}{\partial \tilde x}<0$).
As a result, while for $r>0$, only upward
shocks are possible, for $r<0$, there can be
density profiles with a downward shock and
double shocks. Double shocks can be represented
by two horizontal lines on the $j-\rho$ plane below
the two peaks in $j(\rho)$.
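Since the total current is constant across such a layer, the densities joined by a shock are simply the roots of $j(\rho)=J_s$, where $J_s$ is the current carried through the shock. A minimal numerical sketch (with example values of $r$, $u$ and $J_s$, the latter chosen between the local minimum of $j$ at $\rho=1/2$ and the two maxima so that four intersections exist):
\begin{verbatim}
import numpy as np

# Sketch: densities connected by a shock carrying current J_s are the real
# roots of j(rho) = J_s, with j(rho) from eq. (jr).  Example values only.
r, u, J_s = -0.2, 2.2, 0.113

# In the variable y = rho - 1/2:  -u y^4 - (r/2) y^2 + (2r+u)/16 - J_s = 0
coeffs = [-u, 0.0, -r/2.0, 0.0, (2.0*r + u)/16.0 - J_s]
roots = np.roots(coeffs)
dens = sorted(0.5 + y.real for y in roots
              if abs(y.imag) < 1e-10 and abs(y.real) <= 0.5)
print("densities with j(rho) =", J_s, ":", [round(d, 4) for d in dens])
\end{verbatim}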
To zeroth order in
$\epsilon_0$, the final boundary-layer equation is
\begin{eqnarray}
\frac{\partial^2 \rho }{\partial \tilde x^2}-
\frac{\partial j}{\partial \tilde x}=0.\label{blayerfinal}
\end{eqnarray}
In the boundary-layer language, the solution of this equation
is known as the inner solution.
To obtain the bulk part of the density profile, one can ignore the
diffusive current part in $J$ for small $\epsilon_0$.
The steady-state equation that gives the bulk part
of the density profile is
\begin{eqnarray}
\frac{dj}{dx}+S_0=0.\label{outer}
\end{eqnarray}
The solution of this equation for the bulk part of the profile
is known as the outer solution. These inner and outer solutions
contain several integration constants
which are fixed by the boundary conditions and other
matching conditions of the boundary-layer and the bulk under
various limits. Since the slope of the outer solution
is obtained from
\begin{eqnarray}
\frac{d\rho}{dx}=-\frac{S_0}{dj/d\rho},\label{outerslope}
\end{eqnarray}
for a given $\rho$, the slope
depends crucially on the signs of $(\rho_L-\rho)$ and
$\frac{dj}{d\rho}$. For all the analysis below, we
consider $\rho_L$ to be large and
$\alpha$ and $\gamma$ to be much smaller than $\rho_L$.
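For instance, the outer (bulk) branch leaving the left boundary with $\rho(0)=\alpha<1/2$ has a positive slope there (since $\rho_L>\rho$ and $dj/d\rho>0$) and terminates where $dj/d\rho$ changes sign. A short sketch, obtained by integrating the inverse relation $dx/d\rho=(dj/d\rho)/[\Omega(\rho_L-\rho)]$, locates the point where this branch reaches the low-density maximum of the current (example parameter values only):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Sketch: the outer branch starting from rho(0) = alpha obeys
#   d rho / dx = Omega (rho_L - rho) / j'(rho),
# so x(rho) = integral_alpha^rho  j'(s) / (Omega (rho_L - s)) ds.
# Example parameters; the branch ends where j'(rho) changes sign.
r, u, Omega, rho_L, alpha = -0.2, 2.2, 0.3, 0.9, 0.2

def jprime(s):
    return -r*(s - 0.5) - 4.0*u*(s - 0.5)**3

rho_peak = 0.5 - np.sqrt(-r/(4.0*u))      # low-density maximum of j
x_end, _ = quad(lambda s: jprime(s) / (Omega*(rho_L - s)), alpha, rho_peak)
print("branch reaches rho =", round(rho_peak, 4), "at x =", round(x_end, 4))
\end{verbatim}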
\subsection{Known results }
\label{known}
The double-peak structure of the current-density relation leads to
two maximum-current phases and one minimum-current phase in
the phase diagram of the particle-conserving repulsion model
\cite{hager}.
In the maximum and minimum current phases, the bulk density values
are those at which the current attains its maximum and minimum
values respectively.
With these new phases, the phase diagram for
this model becomes more
complex than its non-interacting counterpart.
Combining the techniques of boundary-layer analysis and the
results from numerical solutions, the phase diagram has been
obtained for the particle non-conserving repulsion model \cite{jaya}.
The phase diagram has a number
of interesting features, including a tricritical point at $r=0$. In the
$\alpha-\gamma-r$ phase diagram, this is a special point where two critical
lines meet. It has been found that three different phase diagrams are
possible for $r>0$, $r=0$ and $r<0$. For $r>0$, the current-density
plot is symmetric around $\rho=1/2$ with a maximum at $\rho=1/2$.
The nature of the phase diagram is qualitatively similar to that of the
pure exclusion
case, with a single critical point. For $r<0$, with a double
peak structure of the current-density plot, the phase diagram is
more complex with more than one critical point and
three different shock phases with the density profile having a
single upward shock, double upward shocks and
one upward and one downward shock \cite{jaya,rakos}.
The low-density peak can give rise
to a low-density upward shock ($\rho(x)\le 1/2$ in the shock part)
in the density profile.
A single shock of this kind
can be represented by a horizontal line in the $j-\rho$ plane
below the low-density peak. The critical
point corresponds to a situation where the horizontal line reaches the
peak position implying a shock of zero height.
The second distinct critical point that
involves both the peaks of the current-density plot is not
symmetrically related to this. The density profile, here,
has two upward shocks, of which one is a low-density shock and
the other is a high-density shock with $\rho>1/2$.
The low-density shock, in this case, has the maximum height
with its high-density end saturating to $\rho=1/2$.
The high-density shock which is due to the high-density peak
of the current-density plot can be of varying height.
The critical point corresponds to the special point
where this high-density shock
has zero height. In addition to these regions,
there are regions in the phase diagram, where density profiles
with a downward shock or a single, symmetric upward shock are
found.
In view of the symmetry of the $j-\rho$ diagram, it is natural to
expect the two critical points to be related through this symmetry.
Previous work, however, shows that the shapes of the
density profiles are not related through this symmetry
near these two special points.
Unlike the low-density shock,
the high-density shock in the density profile is always
accompanied by a low-density shock of maximum height.
The following
analysis clearly reveals the reasons behind such asymmetries.
\section{ Phase-plane analysis of the boundary-layer equations}
In the following subsections, we determine the fixed-points of the
boundary-layer equation and their stability properties.
These fixed-points are the special points to which the
boundary-layer solution saturates in the appropriate limit. The knowledge
about the fixed-points and their stabilities can, therefore, be used to
our advantage to find out, for example,
the bulk densities to which a shock or a boundary-layer
saturates at its two edges.
\subsection{Boundary-layer equation}
Substituting the expression for $j(\rho)$ as given in (\ref{jr})
and integrating the boundary-layer equation, (\ref{blayerfinal})
once, we have
\begin{eqnarray}
\frac{d\rho_1}{d\tilde x}+\frac{r}{4}\rho_1^2+
\frac{u}{8}\rho_1^4=C_0.
\label{finalinner}
\end{eqnarray}
Here $\rho_1=2\rho-1$ and
$C_0$ is the integration constant.
The saturation of the boundary-layer to the
bulk density, $\rho_{1b}$, is ensured
by choosing the integration constant as
\begin{eqnarray}
C_0=\frac{r}{4}\rho_{1b}^2+\frac{u}{8}\rho_{1b}^4.\label{c0eqn}
\end{eqnarray}
As per equation (\ref{jr}), $C_0$ is related to the excess current
(positive, negative or zero) measured from $\rho=1/2$ (half-filled
case). The entire
analysis in the following is done in terms of $\rho_1$ for which
the boundary conditions are $\rho_1(x=0)=\alpha_1=2\alpha -1$ and
$\rho_1(x=1)=\gamma_1=2\gamma-1$.
\subsection{Fixed-points}
$C_0$ can be plotted for various $\rho_{1b}$ varying from
$-1$ to $1$. For $r<0$, $C_0$ has a symmetric double well structure
around $\rho_{1b}=0$ (see figure (\ref{fig:doublewell})).
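The double-well form is easy to verify numerically. The short Python sketch below evaluates equation (\ref{c0eqn}) on a grid (the parameter values $r=-0.2$ and $u=2.2$ are those used in the figures; the function and variable names are illustrative choices of ours):
\begin{verbatim}
import numpy as np

r, u = -0.2, 2.2            # parameter values used in the figures

def C0_of(rho1b):
    # Integration constant of eq. (c0eqn): C0 = (r/4) rho1b^2 + (u/8) rho1b^4
    return (r / 4.0) * rho1b**2 + (u / 8.0) * rho1b**4

rho = np.linspace(-1.0, 1.0, 20001)
vals = C0_of(rho)

# For r < 0 the curve is a symmetric double well with minima at
# rho1b = +/- sqrt(|r|/u) and depth -r^2/(8u).
print(rho[np.argmin(vals)])            # ~ -0.3015
print(vals.min(), -r**2 / (8 * u))     # both ~ -0.00227
\end{verbatim}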
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.5in,clip]{2shocks_const.eps}
\caption{$C_0$ plotted as a function of $\rho_{1b}$ with
$r=-.2$ and $u=2.2$ }
\label{fig:doublewell}
\end{center}
\end{figure}
The fixed-points, $\rho_1^*$,
of equation (\ref{finalinner}), are the solutions of the
algebraic equation
\begin{eqnarray}
\frac{u}{8}\rho_1^4+\frac{r}{4} \rho_1^2-C_0=0.
\end{eqnarray}
In general, there are four possible solutions for the fixed-point
as \begin{eqnarray}
\rho_1^*=\pm[\frac{\mid r\mid \pm\sqrt{r^2+8 C_0 u}}{u}]^{1/2}.
\label{fixed}
\end{eqnarray}
The value of $C_0$ depends on $\rho_{1b}$, the bulk density
to which the boundary-layer solution saturates.
As a consequence, for a given $C_0$, the corresponding $\rho_{1b}$
is always a fixed-point. For the same $C_0$, there are, however,
other fixed-points
which are determined from (\ref{fixed}). Hence, from the
information about one saturation density $\rho_{1b}$, the other
saturation density of the shock can always be determined.
The approach to various fixed-points has to be, of course, consistent
with their stability properties. These stability properties
of various fixed-points are discussed in the following subsection.
If $C_0$ is positive, there can be only two real
fixed-points of opposite signs. The positive and negative
fixed-points denoted respectively as $\rho_{1\pm}^*$ are
\begin{eqnarray}
\rho_{1\pm}^*=\pm[\frac{\mid r\mid +\sqrt{r^2+8 C_0 u}}{u}]^{1/2}.
\end{eqnarray}
If $C_0<0$ (with $\mid C_0\mid<\frac{r^2}{8u}$, see below), there are
four fixed-points. In all these cases, the
fixed-points are symmetrically
located on either side of the origin. The positive $\rho_1$
fixed-points are
\begin{eqnarray}
\rho_{1,2+}^*= [\frac{\mid r\mid \pm\sqrt{r^2-8 \mid C_0\mid u}}{u}]^{1/2}
\end{eqnarray}
and the negative $\rho_1$ fixed-points are
\begin{eqnarray}
\rho_{1,2-}^*= -[\frac{\mid r\mid \pm\sqrt{r^2-8 \mid C_0\mid u}}{u}]^{1/2}.
\end{eqnarray}
Here, the subscripts $1$ and $2$
correspond to the $+$ and $-$ signs inside the square bracket respectively.
It is important to notice that for $C_0<0$, all the fixed-points
become complex when $\mid C_0\mid>\frac{r^2}{8u}$.
As $C_0$ approaches
this lowest negative value, the pair of fixed-points on
the positive and
negative sides approach each other and they merge at
$C_0=-\frac{\mid r\mid^2}{8u}$. At this special value, the fixed-points
are $\rho_{1m}^{*\pm}=\pm(\frac{\mid r\mid}{u})^{1/2}$.
For $C_0=0$, there are three fixed-points, $\rho_1^*=0$ and
$\rho_{10}^{*\pm}=\pm (\frac{2\mid r\mid}{u})^{1/2}$.
Numerical values of the fixed-points for some special values of $C_0$ with
$r=-.2$ and $u=2.2$ are mentioned below.
For $C_0=0$, the fixed-points are
$\rho_1^*=0$ and $\rho_{10}^{*\pm}=\pm.426$.
For these values of $r$ and $u$, no real fixed-points are present if
$C_0<-\frac{r^2}{8u}=-.00227$.
At this special value of $C_0$, the two fixed-points
are $\rho_{1m}^{*\pm}=\pm .30151$.
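These numbers can be reproduced directly from equation (\ref{fixed}). The following sketch (again with $r=-0.2$ and $u=2.2$; the helper name is an illustrative choice) returns all real fixed-points for a given $C_0$:
\begin{verbatim}
import numpy as np

r, u = -0.2, 2.2

def fixed_points(C0):
    # Real solutions of (u/8) rho1^4 + (r/4) rho1^2 - C0 = 0, eq. (fixed)
    disc = r**2 + 8.0 * C0 * u
    if disc < -1e-12:
        return []                          # all fixed-points complex
    disc = max(disc, 0.0)
    roots = set()
    for sign in (+1.0, -1.0):
        rho1_sq = (abs(r) + sign * np.sqrt(disc)) / u
        if rho1_sq < -1e-12:
            continue
        rho1 = np.sqrt(max(rho1_sq, 0.0))
        roots.update((rho1, -rho1))
    return sorted(roots)

print(fixed_points(0.0))                   # [-0.426..., 0.0, 0.426...]
print(fixed_points(-r**2 / (8 * u)))       # [-0.3015..., 0.3015...]
print(fixed_points(-2 * r**2 / (8 * u)))   # [] : C0 below the lowest value
\end{verbatim}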
\subsection{Stability analysis of the fixed-points}
For $C_0>0$, a linearization of equation (\ref{finalinner}) around the
fixed-points with $\rho_1=\rho_1^*+\delta\rho_1$
leads to the following stability equation
\begin{eqnarray}
\frac{d\delta \rho_1}{d\tilde x} =
\frac{-\sqrt{\mid r\mid^2+8 u C_0}}{2}\rho_1^*\delta\rho_1.
\end{eqnarray}
This implies that the fixed-points
$\rho_{1+}^*$ and $\rho_{1-}^*$
are, respectively, stable and unstable.
Similarly, for $C_0<0$, the general stability equation is
\begin{eqnarray}
\frac{d\delta\rho_1}{d\tilde x}
=\frac{\rho_1^*}{2} \ \delta\rho_1 ({\mid r\mid}-
{u} {\rho_1^*}^2).
\end{eqnarray}
The flow around the fixed-points can be obtained by
substituting the explicit expressions of the fixed-points.
Figure (\ref{fig:flow}) shows the stability properties
of various fixed-points for $C_0>0$ and $C_0<0$.
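The flow directions follow from the sign of the linearized growth rate $\frac{\rho_1^*}{2}({\mid r\mid}-u{\rho_1^*}^2)$ at each fixed-point; a negative rate means the fixed-point attracts the flow as $\tilde x$ increases. A minimal, self-contained numerical check (parameter values and names are again illustrative choices of ours):
\begin{verbatim}
import numpy as np

r, u = -0.2, 2.2

def growth_rate(fp):
    # Linearized growth rate at a fixed-point; negative means stable
    return 0.5 * fp * (abs(r) - u * fp**2)

for C0 in (0.01, -0.001):
    disc = np.sqrt(r**2 + 8.0 * C0 * u)
    fps = []
    for s in (+1.0, -1.0):
        sq = (abs(r) + s * disc) / u           # rho_1^{*2} from eq. (fixed)
        if sq >= 0.0:
            fps += [np.sqrt(sq), -np.sqrt(sq)]
    for fp in sorted(fps):
        print(C0, round(fp, 3),
              "stable" if growth_rate(fp) < 0 else "unstable")
# C0 = +0.01 : rho_{1-}^* is unstable and rho_{1+}^* is stable.
# C0 = -0.001: the four fixed-points alternate, from left to right,
#              unstable, stable, unstable, stable.
\end{verbatim}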
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in,clip]{flow_diag.eps}
\caption{Flow behavior of the fixed-points.}
\label{fig:flow}
\end{center}
\end{figure}
The stability property of the $\rho_1^*=0$ fixed-point for $C_0=0$ and
for the pair of fixed-points for $C_0=-\frac{r^2}{8u}$ cannot be
determined from the linear analysis. However, the flow around the
fixed-points can be predicted from the continuity of the
flow behavior as $C_0$ approaches these special values.
The fixed-points, their stability properties, and their variation
with $C_0$ are shown together in figure (\ref{fig:c0route}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.5in,totalheight=3.5in,clip]{c0route-figfile2.eps}
\caption{Fixed-points are plotted for different values of $C_0$ with
$r=-.2$ and $u=2.2$. $\rho_{1\pm}^*$ and $\rho_{2\pm}^*$ are different
fixed-points as mentioned in the text.
The vertical solid lines with arrows show the flow behavior
of the fixed-points. }
\label{fig:c0route}
\end{center}
\end{figure}
\subsection{Shocks for various $C_0$}
Since $C_0$ can be expressed purely in terms of $J$,
with its constant parts subtracted,
it remains constant across a shock or a boundary-layer.
In principle, using equation (\ref{c0eqn}),
one can obtain the
value of $C_0$ along the continuously varying parts
of the density profile. Hence,
as we move along a density profile having bulk shocks,
$C_0$ changes as per equation (\ref{c0eqn}) along the outer solution
parts of the profile with intermediate constant values
across the shock or inner solution regions.
The value of $C_0$ in
the shock region is fixed by one of the bulk density
values to which the shock saturates.
With the information on the
possible values of $C_0$ in the entire
range of $\rho_{1b}$ and, hence, the knowledge about the
corresponding fixed-points, it is possible to list the
kind of shocks that can be observed.
Let us assume that the shocks or the boundary-layers
approach the bulk densities $\rho_{1r}$ or $\rho_{1l}$
as $\tilde x\rightarrow \pm \infty$ respectively. Since $\rho_{1l}$ and
$\rho_{1r}$ are various fixed-points of the inner equation, the approach
to these fixed-points has to be consistent with the flow properties.
A shock is called an upward shock if $\rho_{1l}<\rho_{1r}$.
The reverse i.e. $\rho_{1l}>\rho_{1r}$ is true for a downward shock.
(i) $C_0>0$: In this case, there are two fixed-points $\rho_{1+}^*$ and
$\rho_{1-}^*$ symmetrically located around $\rho_1=0$ with $\rho_{1-}^*$
being an unstable fixed-point. Thus if a shock is formed with $C_0>0$, it
should be an upward shock which
approaches the fixed-points $\rho_{1r}=\rho_{1+}^*$
and $\rho_{1l}=\rho_{1-}^*$ as $\tilde x\rightarrow \infty$ and
$-\infty$ respectively.
The shock height, in this case, is $\rho_{1+}^*-\rho_{1-}^*$.
(ii) $C_0<0$: In this case, four fixed-points lead to different kinds of
shocks.
(a) It is possible to see a downward shock
with $\rho_{1l}=\rho_{2+}^*$ and $\rho_{1r}=\rho_{2-}^*$.
The downward shock is thus symmetric around $\rho_1=0$. The flow
in figure (\ref{fig:flow}) shows that a downward
shock cannot involve other fixed-points since that would not
be consistent with the stability criteria of the fixed-points.
(b) There can be small upward shocks which lie entirely
in the range $\rho_1>0$. We have already referred to these shocks
as high-density shocks. In terms of the fixed-points, the left and
right saturation densities of the shock are
$\rho_{1l}=\rho_{2+}^*$ and $\rho_{1r}=\rho_{1+}^*$, respectively.
(c) The third possibility is that of an upward shock entirely in
$\rho_1\le 0$ range. Such a shock has been referred to as a
low-density shock. For this shock, $\rho_{1l}=\rho_{1-}^*$ and
$\rho_{1r}=\rho_{2-}^*$.
(iii) $C_0=0$: There can be an upward shock with $\rho_{1r}=0$ and
$\rho_{1l}=\rho_{1-}^*$. There can also
be an upward shock connecting the densities $\rho_{1l}=0$ and
$\rho_{1r}=\rho_{1+}^*$. These two shocks together appear as a
large shock, symmetric around $\rho_1=0$.
Alternatively, different kinds of shocks can tell us the range
of values for $C_0$.
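These shock shapes can be confirmed by integrating equation (\ref{finalinner}) directly. The sketch below uses a simple explicit Euler step (the step size, the initial offset, and the value $C_0=0.01$ are arbitrary illustrative choices); started just above the unstable fixed-point $\rho_{1-}^*$, the solution flows up and saturates at $\rho_{1+}^*$, i.e.\ an upward shock:
\begin{verbatim}
import numpy as np

r, u, C0 = -0.2, 2.2, 0.01

def rhs(rho1):
    # d rho_1 / d x~ from eq. (finalinner)
    return C0 - (r / 4.0) * rho1**2 - (u / 8.0) * rho1**4

disc = np.sqrt(r**2 + 8.0 * C0 * u)
rho_minus = -np.sqrt((abs(r) + disc) / u)   # unstable fixed-point rho_{1-}^*
rho_plus = -rho_minus                       # stable fixed-point  rho_{1+}^*

rho1, dx = rho_minus + 1e-3, 0.01           # start just above rho_{1-}^*
for _ in range(200000):
    rho1 += dx * rhs(rho1)

print(rho_minus, rho_plus, rho1)            # rho1 ends up at rho_plus
\end{verbatim}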
\section{Predictions about the shapes of the density profiles}
Based on figure (\ref{fig:c0route}), we attempt to predict possible
shapes of the density profiles for given boundary conditions
$\alpha_1$ and $\gamma_1$. We consider only a few pairs of boundary conditions
and based on this, we make certain general predictions in the next section.
The basic strategy for drawing the density profile is as follows.
We first need to mark $\alpha_1$ and $\gamma_1$ on the $\rho_1^*$ axis
of $C_0-\rho_1^*$ plane.
Starting with either of the boundary conditions, we change $\rho_1$,
along the curve in figure (\ref{fig:c0route}), in
a way that we reach the other boundary condition at the end of our move.
While doing so, we may allow a discontinuous variation of $\rho_1$
along a vertical constant-$C_0$ line,
provided it does not violate the flow property. Such a discontinuous
change in $\rho_1$ appears in the form of a shock or a boundary-layer in
the density profile. The dashed, vertical lines in figure
(\ref{fig:densprof1}a), for example, are the constant-$C_0$ lines along
which the density may change.
Such a dashed-line, therefore,
represents a boundary-layer or a shock in the density profile.
Two densities at which a shock or a boundary-layer saturates,
are those at which a particular, constant-$C_0$ line,
representing a shock or a boundary-layer, intersects the curves.
This method, however, sometimes leaves us with
different options for the density profile. All these possibilities are
shown on $C_0-\rho_1^*$ plane for each pair of boundary conditions
individually.
\begin{figure}[htbp]
\begin{center}
(a) \includegraphics[width=.35\textwidth,clip,
angle=0]{c0route-a.1-g.34.eps}\\
(b) \includegraphics[width=.4\textwidth,clip,
angle=0]{alp1_0.1_gam1_0.34.eps}
\caption{(a) Possible variations of the density, as one moves along the
density profile from $x=0$ end, are shown on the $C_0-\rho_1^*$ plane.
Bold dashed lines or curves with open arrows show the variation
of the density. Open arrows point in the direction of increasing $x$.
Dotted lines mark the boundary conditions
$\alpha_1$ and $\gamma_1$. Solid lines with arrows are the flow trajectories
of the fixed-points. (b) Numerical solutions for the density $\rho_1$
for various $\alpha_1$ with $\gamma_1=0.34$.
The inset gives a zoomed view of the particle-depleted
boundary-layer at $x=0$. No boundary-layer is
formed near $x=1$.}
\label{fig:densprof1}
\end{center}
\end{figure}
\begin{figure*}[htbp]
\begin{center}
(a) \includegraphics[width=.35\textwidth,height=3in,clip,
angle=0]{c0route-a.32-g.23.eps}
(b) \includegraphics[width=.37\textwidth,clip,
angle=0]{alp1_0.32_gam1_0.23_left.eps}\\
(c) \includegraphics[width=.37\textwidth,clip,
angle=0]{alp1_0.32_gam1_0.23_right.eps}
\caption{(a) Same caption as figure (\ref{fig:densprof1}a)
except that here we explicitly
show possible density variations for two different left boundary
conditions specified by
$\alpha_1$ and $\alpha_1'$.
(b) Numerical solutions for $\rho_1(x)$
for various $\alpha_1$ with $\gamma_1=0.23$. The main figure provides
a zoomed view of the boundary-layers near $x=0$.
The entire density profile over the entire lattice is shown
in the inset.
(c) A zoomed view of the same density profiles near $x=1$.
This shows the particle-depleted boundary-layers near $x=1$.}
\label{fig:densprof2}
\end{center}
\end{figure*}
\subsection {Density profiles with only boundary-layers}
(1) Suppose we consider a situation where $\alpha_1,\ \gamma_1>0$
with $\alpha_1<\rho_{1m}^{*+}$
and $\gamma_1>\rho_{1m}^{*+}$.
There can be a possibility where the density profile has
a particle-depleted boundary-layer
($\frac{d\rho_1}{dx}\mid_{x=0}>0$) at $x=0$
satisfying the boundary conditions $\rho_1(x=0)=\alpha_1$.
This boundary-layer can be represented by a vertical
line similar to (a) in figure (\ref{fig:densprof1}a). This is consistent
with the flow property
that suggests the approach of the boundary-layer to the fixed-point
$\rho_{1+}^*$ as $\tilde x\rightarrow \infty$. On the other hand,
in the $\tilde x\rightarrow -\infty$ limit, which corresponds
to the unphysical
negative $x$ region, the boundary-layer saturates to the
unstable fixed-point $\rho_{1-}^*$. After the boundary-layer, the
density may decrease continuously along (c) on the $\rho_{1+}^*$ branch
and satisfies the boundary condition,
$\rho_1(x=1)=\gamma_1$.
There can be another possibility
where the particle-depleted boundary-layer at $x=0$ is represented by
a vertical line similar to
(b) joining the fixed-points $\rho_{2+}^*$ and $\rho_{1+}^*$.
The boundary condition at $x=1$ is again satisfied by a decreasing
density part parallel to (c).
For this to be
possible, the condition $\gamma_1<\rho_{10}^{*+}$ is required.
These two possibilities are distinct due to distinctly different
values of $C_0$. This shows the crucial role played by $C_0$
in deciding the density profile.
Numerical solutions of the full steady-state hydrodynamic equation
presented in figure (\ref{fig:densprof1}b) show the
boundary-layers saturating to a bulk density $\rho_{1b}\approx .48$.
This implies that the boundary-layers are indeed represented by
(a) type vertical lines.
\begin{figure}[htbp]
\begin{center}
(a) \includegraphics[width=.4\textwidth,clip,
angle=0]{c0route-a-.84-g.32.eps}\\
(b) \includegraphics[width=.4\textwidth,clip,
angle=0]{alp1_-.84_gam1_.32.eps}
\caption{ (a) This figure has the same caption as figure
(\ref{fig:c0route}).
(b) Plot of the density profiles $\rho_1(x)$
for various large negative values of $\alpha_1$ with $\gamma_1=.32$.
No boundary-layer is formed at $x=0$ or $x=1$. A few density profiles
towards right have double shocks.}
\label{fig:densprof3}
\end{center}
\end{figure}
(2) Next, we consider $\alpha_1,\gamma_1>0$ with
$\alpha_1>\rho_{1m}^{*+}$ and $\gamma_1<\rho_{1m}^{*+}$.
In this case too, the density profile can satisfy the
boundary condition
at $x=0$ through a boundary-layer
that can be represented by a line similar
to (a) in figure (\ref{fig:densprof2}a).
This would be a particle-depleted boundary-layer at $x=0$.
In order to satisfy the other boundary condition,
the density should decrease till $\rho_{1m}^{*+}$ along path (b) on
the $\rho_{1+}^*$ branch and then satisfy the right boundary condition
through a particle-depleted boundary-layer
along (c) in figure (\ref{fig:densprof2}a).
The boundary condition at $x=0$ can also be satisfied by vertical
lines coming from above the $\rho_{1+}^*$ branch leading to particle-rich
boundary-layers
($\frac{d\rho_1}{dx}\mid_{x=0}<0$) at $x=0$.
These lines could be (a') type lines
in figure (\ref{fig:densprof2}a) satisfying the boundary condition
$\alpha_1'$.
Both particle-depleted and particle-rich boundary-layers are present
in the density profiles of figure (\ref{fig:densprof2}b) obtained
by solving the full steady-state hydrodynamic equation numerically.
\subsection{Density profiles with upward shocks}
With $\gamma_1>\rho_{1m}^{*+}$
and $\alpha_1$ large negative, possible
shapes of the
density profile can be of the following kinds. We start with
$\rho_1(x=0)=\alpha_1$. $\rho_1$ decreases continuously
along $\rho_{1-}^*$ branch along
the dashed line (a) in figure (\ref{fig:densprof3}a).
After this part, an upward shock, represented
by a vertical line similar to either (b) or (d) appears.
In the latter case, the dashed line (a) should be extended further
till it reaches the low-density end of (d).
If the shock is represented by (b), it is a
large shock, symmetric around $\rho_1=0$.
In the second case, the shock is a low-density shock with
the saturation densities being
$\rho_{1r}=0$ and $\rho_{1l}=\rho_{10}^{*-}$.
If the density approaches the
$\rho_{1+}^*$ branch after the large shock, the boundary condition
at $x=1$ can be
satisfied after that by a decrease in density along (c) on
this branch. If the shock
is of (d) kind, the density has to change further
to satisfy the right boundary
condition. Upon reaching $\rho_1=0$ value, the density may
change along $\rho_{2-}^*$ or the
$\rho_{2+}^*$ branch. The flow around $\rho_1^*=0$, however, suggests
that the density variation only along
$\rho_{2+}^*$ branch (path (e) in the figure (\ref{fig:densprof3}))
is possible. The continuously increasing part
along (e)
is then followed by another upward shock, given by line (f),
taking the density to $\rho_{1+}^*$ branch.
The boundary condition is then satisfied by a continuously
decreasing part along a (c) type line.
Numerical solutions in figure (\ref{fig:densprof3}b) are
consistent with these predictions.
\subsection{Density profiles with downward shocks}
We next consider the case $\rho_{1m}^{*-}<\gamma_1<0$,
with $\alpha_1$ increasing from large negative values.
Here we specifically mention how the density profile changes
as $\alpha_1$ is changed keeping $\gamma_1$ fixed. In the process,
we observe how a density profile with a downward shock appears.
Let us assume that
our starting $\alpha_1$ lies somewhere on the $\rho_{1-}^*$ branch.
$\rho_1$ increases from $\rho_1(x=0)=\alpha_1$ along $\rho_{1-}^*$
branch till it reaches the boundary $x=1$. This continuously
increasing part is represented by (1a) in figure
(\ref{fig:densprof4}a).
The density, then, satisfies the
right boundary condition through a boundary-layer which can be, for
example, represented by a vertical line like (1b), in figure
(\ref{fig:densprof4}a).
This line takes the solution from the unstable fixed-point
$\rho_{1-}^*$ to the stable fixed-point $\rho_{1+}^*$.
Since the vertical line (1b)
passes through $\gamma_1$ before reaching the $\rho_{1+}^*$ branch, the
boundary-layer satisfies the boundary condition
before saturating to the
positive fixed-point, $\rho_{1+}^*$.
The boundary-layer at $x=1$ is, therefore, a part
of this vertical, constant-$C_0$ line.
\begin{figure}[htbp]
\begin{center}
(a) \includegraphics[width=.4\textwidth,clip,
angle=0]{c0route-g-0.28.eps}\\
(b) \includegraphics[width=.4\textwidth,clip,
angle=0]{gam1_-0.28.eps}\\
\includegraphics[width=.5\textwidth,clip,
angle=0]{gam1_-0.28_zoom.eps}(c)
\caption{
(a) The trajectory of the density on $C_0-\rho_1$ plane. Five possible
trajectories are shown. These trajectories are distinguished by
different numbers. Alphabetical sequence represents the variation of the
density along increasing $x$.
(b) Plot of the density profiles $\rho_1(x)$
for various $\alpha_1$ with $\gamma_1=-.28$. (c) Zoomed views of the
boundary-layers of the density profiles in (b) near $x=1$. }
\label{fig:densprof4}
\end{center}
\end{figure}
As $\alpha_1$ is increased slightly, the route of
the density along $\rho_{1-}^*$ branch remains the same but
this time the density reaches a higher value than the previous
$\alpha_1$ case before increasing sharply
as a boundary-layer satisfying
the boundary condition at $x=1$.
As $\alpha_1$ is increased further,
for a given $\alpha_1$, the continuously increasing
part of the profile reaches the low-density end of the line (2b).
After this, there is a shock in the density profile of (2b) kind.
This is a low-density shock that takes the density to $\rho_1^*=0$.
If this jump
is near the boundary, this shock actually becomes a boundary-layer
that can help the density satisfy the boundary condition at $x=1$.
However, if this discontinuity is in the bulk, it
is an upward low-density shock.
In case of a shock in the bulk, the density increases further
along $\rho_{2+}^*$ branch (path (2c) in figure
(\ref{fig:densprof4})). The
boundary condition, however, demands a decrease in $\rho_1$. This is
possible through a downward vertical line (similar to path (2d))
and then a continuously decreasing part (2e) along $\rho_{2-}^*$ branch.
The path (2d) is a downward shock that is seen in figure
(\ref{fig:densprof4}b) and (\ref{fig:densprof4} c).
In case the density varies along lines (3c) and (3d), we have
a downward boundary-layer
near $x=1$. These possibilities are expected if $\alpha_1$
is increased further from its value that leads to (2c) and (2d) type
variations.
As before, the principle is that if the
(3d) type vertical line intersects $\gamma_1$ line before reaching the
$\rho_{2-}^*$ branch, (3d) type line represents a boundary-layer
satisfying the boundary condition
at $x=1$. If the reverse happens, this downward vertical line
represents a downward
shock at the bulk which needs to be followed by a continuously
decreasing density part
along the $\rho_{2-}^*$ branch. This is a general principle which can be
applied to other cases also to see the deconfinement of a boundary-layer
giving rise to a shock in the bulk (see reference \cite{smsmb}).
With further increase of $\alpha_1$, the density variation from
the $x=0$ end is still the same as before up to
part (3c) along the $\rho_{2+}^*$ curve, except that
now the density approaches closer
to $\rho_{1m}^{*+}$ along a (4c)-like path. Finally, for a given $\alpha_1$,
the density reaches the value $\rho_{1m}^{*+}$.
After this, the boundary condition is
satisfied through a depleted boundary-layer represented by vertical
path (4d).
With further increase of $\alpha_1$, the density cannot now go
around $\rho_{1m}^{*+}$ to move to $\rho_{1+}^*$ branch
due to the constraint from the stability property.
In that case, the only option for the density is
to proceed along (4c) but move to $\rho_{1+}^{*}$ branch along a
vertical line similar to (5d) before reaching the point $\rho_{1m}^{*+}$.
There is now a second high-density upward shock in the
density profile at larger $x$ (line (5d))
with the first shock being a low-density one represented by
the line, (2b).
With the increase of $\alpha_1$, the (5d) type vertical line
moves to higher values of $C_0$.
The boundary condition at $x=1$ is now satisfied
by the rest of the density profile where the
density decreases continuously along (5e)
path to the minimum $\rho_{1m}^{*+}$ and after that decreases
through a depleted boundary-layer represented by the line (4d).
Thus, for example, for this value of $\alpha_1$, we see the following
parts in the density profile as we move
along the density profile from its $x=0$ end.
(i) A continuously varying density profile that satisfies the boundary
condition $\rho(x=0)=\alpha_1$.
(ii) This is followed by a low-density upward shock of maximum
height connecting $\rho_{1l}=\rho_{10}^{*-}$ and $\rho_{1r}=0$.
(iii) Beyond this shock, there is again a continuously increasing part.
(iv) This is followed by an upward, high-density shock.
(v) Beyond this high-density shock, the density decreases continuously
to $\rho_{1m}^{*+}$.
(vi) The last part is a particle-depleted boundary layer, that saturates to
$\rho_{1m}^{*+}$ for $x<1$ and satisfies the boundary condition at
$\rho_1(x=1)=\gamma_1$. All these features can be verified from
the density profile in figure (\ref{fig:densprof4}b).
If $\alpha_1$ is increased further, the upward high-density shock
(vertical lines like (5d)) will move towards higher $C_0$ values.
For certain $\alpha_1$, the low
and the high-density shocks merge and there is a big symmetric shock
with $C_0=0$.
Beyond this $\alpha_1$, the big
upward shock still persists and this is followed
by a continuously varying part (similar to (1c)) along which the density
decreases and approaches $\rho_{1m}^{*+}$. The boundary condition is again
satisfied by the particle-depleted boundary-layer represented by (4d).
\section{General predictions}
Since the boundary-layers or shocks are the special features through
which the density profiles are distinguished,
our general predictions are more on the kind of shocks or
boundary-layers that can be seen under various boundary conditions.
\subsection{Shocks and boundary-layers}
Downward shock in the bulk or a particle-depleted boundary-layer at $x=1$:
Either of these features appears whenever the
density profile decreases through a jump discontinuity from the
$\rho_{2+}^*$ branch to the $\rho_{2-}^*$ branch or from
$\rho_{1m}^{*+}$ to $\rho_{1m}^{*-}$. The condition for this is
$\gamma_1<\rho_{1m}^{*+}$. The value of $\alpha_1$ is somewhat
flexible since it is possible to see these features for both positive
and negative $\alpha_1$.
Upward symmetric shock in the bulk or depleted boundary-layer at $x=0$:
This is seen whenever the density profile
jumps from $\rho_{1-}^*$ to $\rho_{1+}^*$. This happens for various
combinations of $\alpha_1$ and $\gamma_1$ such as $\alpha_1<0$ or
$\alpha_1>0$ with
$\gamma_1>\rho_{1m}^{*+}$ or $\gamma_1<\rho_{1m}^{*+}$. At $x=0$, the
density profile may start with a continuously varying part followed by a
symmetric large shock, or it can satisfy the boundary condition
at $x=0$ with the help
of a boundary-layer. In both cases, the discontinuity in the density
corresponds to a discontinuous jump from $\rho_{1-}^*$
to $\rho_{1+}^*$.
The presence or absence of a boundary-layer
at $x=0$ is specified completely by the value of $C_0$ at $x=0$.
Let us assume that $\rho_1=\alpha_1$ line intersects the curve on figure
(\ref{fig:c0route}) at a value $C_0=C_0(\alpha_1)$. A condition as
$C_0(\alpha_1)=C_0(x=0)$, would mean a continuously varying
density near $x=0$. If these two values are unequal, it would imply
the presence of a boundary-layer. For example, for
$\alpha_1>\rho_{1m}^{*+}$, a particle-rich or a particle-depleted
boundary-layer appears if $C_0(\alpha_1)>C_0(x=0)$ and
$C_0(\alpha_1)<C_0(x=0)$, respectively. However, it is important to
pay attention to certain situations which are forbidden due to
the stability properties. For example, if
$\alpha_1<\rho_{1m}^{*-}$ , a boundary-layer with
$C_0(x=0)<C_0(\alpha_1)$ is not possible.
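This bookkeeping can be summarized as a small decision rule. The sketch below (the helper names are illustrative choices; the forbidden cases mentioned above are only noted, not encoded) compares $C_0(\alpha_1)$, computed from equation (\ref{c0eqn}), with the bulk value $C_0(x=0)$ for $\alpha_1>\rho_{1m}^{*+}$:
\begin{verbatim}
def C0_of(rho1, r=-0.2, u=2.2):
    # Value of C0 corresponding to the density rho1, eq. (c0eqn)
    return (r / 4.0) * rho1**2 + (u / 8.0) * rho1**4

def layer_at_x0(alpha1, C0_bulk):
    # Decision rule for the boundary-layer at x=0 when alpha1 > rho_{1m}^{*+}
    C0_alpha = C0_of(alpha1)
    if abs(C0_alpha - C0_bulk) < 1e-12:
        return "no boundary-layer (continuous density near x=0)"
    return ("particle-rich layer" if C0_alpha > C0_bulk
            else "particle-depleted layer")

print(layer_at_x0(0.6, C0_of(0.48)))    # prints "particle-rich layer"
\end{verbatim}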
Double shock: In this case, the density profile has
both high- and low-density upward shocks,
with the low-density shock having
the maximum possible height, $\mid\rho_{10}^{*-}\mid$.
In order to have a high-density shock, the
lower end of the high-density shock must be
on the $\rho_{2+}^{*}$ branch.
The density can reach this branch only via
$\rho_1=0$ point. The only way the density can reach the $\rho_1=0$
point is through a low-density shock
represented by the $C_0=0$ line across the negative lobe. A low-density
shock representing a jump across the negative lobe along $C_0=0$ line
has the maximum possible height.
A density-profile with double
shock may appear for $\alpha_1<\rho_{1m}^{*-}$ and
$\gamma_1<\rho_{1m}^{*+}$ or $\gamma_1>\rho_{1m}^{*+}$.
In case of $\gamma_1>\rho_{1m}^{*+}$,
the density after the high-density shock varies continuously along
$\rho_{1+}^*$ branch to satisfy the boundary condition at $x=1$.
For $\gamma_1<\rho_{1m}^{*+}$, the second shock
is possible for some $\alpha_1$.
In this case, after reaching the $\rho_{1+}^*$ branch, the density
decreases till $\rho_{1m}^{*+}$ and then decreases further as a depleted
boundary-layer at $x=1$ to satisfy the boundary condition.
It is interesting to note that
although it is possible to have a profile with only a low-density
shock, the same with a single
high-density shock is never possible. The flow behavior suggests
that a high-density shock has to be always accompanied by a
low-density shock of maximum height.
Boundary-layer at $x=1$: As in the case of a boundary-layer at $x=0$,
it is also possible
to specify the conditions for a boundary-layer
at $x=1$ by comparing the
value of $C_0(x=1)$ with $C_0(\gamma_1)$.
In general, a boundary-layer will appear at $x=1$ if these two values of
$C_0$ are different. As an example, a downward boundary-layer for
$\rho_{1m}^{*-}<\gamma_1<\rho_{1m}^{*+}$ appears if
$C_0(x=1)<C_0(\gamma_1)$.
\subsection{Saturation of the shock}
From equation (\ref{finalinner}), we find that near
the saturation to a bulk density $\rho_{1b}$, the slope of the
boundary-layer is given by
\begin{eqnarray}
\frac{d\delta\rho_1}{d\tilde x}=(\frac{\mid r\mid }{2}-
\frac{u}{2}\rho_{1b}^2)\rho_{1b}
\delta\rho_1
\end{eqnarray}
where it is assumed that the boundary-layer density
is $\delta\rho_1$ away
from the saturation value, $\rho_{1b}$.
This shows, that the saturation of the
boundary-layer to the bulk is in general
exponential except for three special points.
The saturation is of power-law kind if $\rho_{1b}=0$ or
$\rho_{1b}=\rho_{1m}^{*\pm}=\pm(\frac{\mid r\mid}{u})^{1/2}$. The length scale
associated with the exponential approach of the shock to the bulk density
diverges as the bulk density approaches these special values. As
discussed in subsection \ref{known}, the
critical points correspond to special boundary conditions
$(\alpha_c,\gamma_c)$ at
which the shock height across the positive or negative lobe reduces
to zero. Therefore, the
approach to the critical point is associated with the
continuous vanishing of the shock height along with
the divergence of the length scale over which the shock
saturates to the bulk.
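The divergence can be made explicit: the inverse decay length read off from the linearized equation above is $\mid\rho_{1b}\mid\,\mid\frac{\mid r\mid}{2}-\frac{u}{2}\rho_{1b}^2\mid$, which vanishes at $\rho_{1b}=0$ and at $\rho_{1b}=\rho_{1m}^{*\pm}$. A short numerical check (parameter values are illustrative):
\begin{verbatim}
import numpy as np

r, u = -0.2, 2.2

def inv_length(rho1b):
    # Inverse decay length of the exponential saturation to rho_1b
    return abs(rho1b) * abs(0.5 * abs(r) - 0.5 * u * rho1b**2)

for rho1b in (0.0, np.sqrt(abs(r) / u), 0.45):
    print(rho1b, inv_length(rho1b))   # zero, ~zero, and a finite value
\end{verbatim}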
\section{Summary}
Here, we have considered an asymmetric simple exclusion
process of interacting particles on a finite, one dimensional lattice.
These particles have mutual repulsion in addition to the
exclusion interaction. Apart from the hopping dynamics of the particles,
the model also has particle attachment-detachment processes,
which lead to particle non-conservation in the bulk.
Such processes are known to exhibit boundary-induced phase transitions
for which the tuning parameters are the boundary densities,
$\alpha$ and $\gamma$.
In different phases, the average particle density distributions across
the lattice have distinct shapes with various types of discontinuous
jumps from one density value to another. Here, we carry out a phase-plane
analysis for the boundary-layer
differential equation to
understand how the fixed-points of the boundary-layer equation
and their flow properties determine
the shape of the entire density profile under given boundary conditions.
Such a fixed-point analysis has been extremely useful in
understanding the phases and phase transitions of particle
conserving models for which the constant bulk density values in different
phases are given
by the physically acceptable fixed-points of the boundary-layer equation.
In addition, the number of
steady-state phases, the nature of the phase transitions, and the locations of
the boundary-layers can be obtained analytically from the phase-plane
analysis of the boundary-layer equation. The present work
provides a generalization
of the method to a particle non-conserving process.
To apply this method, we have considered
the hydrodynamic limit of the statistically averaged
master equation describing the particle dynamics. The hydrodynamic
equation, describing the time evolution of the average particle
density, looks like a continuity equation
supplemented with the
particle non-conserving terms. The current
contains the exactly known hopping current and
a regularizing diffusive current part.
The boundary-layer equation, which is the
main focus of this work, can
be obtained from the particle conserving part of
the hydrodynamic equation. For convenience, we use
$\rho_1=2\rho-1$ for the boundary-layer equation.
$\rho_1$ is related
to the deviation from $\rho=1/2$ (half-filled case).
It is found that the fixed-points, $\rho_1^*$,
of the boundary-layer equation are determined in terms
of a parameter $C_0$ related to the excess current measured
from $\rho=1/2$ (half-filled case).
Since the fixed-points
are dependent on $C_0$, one can plot the physically acceptable
fixed-points as a function of $C_0$ on the $C_0-\rho_1^*$ plane.
In the steady-state,
the constancy of the current across a shock or a boundary-layer
implies that such objects
can be represented by a fixed value of $C_0$.
The boundary-layers or shocks of the density
profiles are represented by the constant-$C_0$ lines on this
$C_0-\rho_1^*$ plot. The densities at which the constant-$C_0$
line intersects the fixed-point branches
are the densities to which the
shock or the boundary-layer saturates.
The discontinuous change of the density has to be consistent
with the stability properties of the fixed-points.
For given values of $\alpha$ and $\gamma$,
we can start from the $x=0$ end of the density profile and find out
how the density can change along the profile
as it proceeds to satisfy the boundary condition
at $x=1$.
This density variation along the density profile
can be conveniently marked on the $C_0-\rho_1^*$ plot to
see its consistency with the flow properties of the fixed-points.
Our approach does not give any information about the location of a
shock since it does not
involve the details of the bulk part of the profile.
Instead, it is found that the conserved quantity, $C_0$,
plays an important role in deciding the
shape of the density profile.
The emphasis of our approach is on the boundary-layer equation which
appears to control the shape of the entire density profile. Particle
non-conserving processes are not important for the boundary layers.
This simplicity allows us not only to classify different kinds of
density distributions, but also to gain more physical insight
as to why some features of the density profile are evident under certain
boundary conditions. Some of these features are mentioned in the
list below.
(a) When a density profile has two shocks, the low-density shock
is of maximum possible height. For given values
of the interaction parameters, the height of the low-density
shock can be obtained explicitly.
(b) It is possible to have a low-density shock alone in the profile
but a high-density shock has to be always accompanied by a
low-density shock of maximum height.
(c) A downward shock is produced by the deconfinement of a downward
boundary-layer
at $x=1$. The condition on $\gamma$ for seeing a downward shock or
a downward boundary-layer at $x=1$ can be
precisely specified.
(d) The symmetric two peak structure of the current as a function of
the particle density is responsible for a symmetric two lobe
structure of the fixed-points drawn on $C_0-\rho_1^*$ plane.
The flow behavior of the fixed-points
around the two lobes are asymmetric. This is the reason why
the two critical points in the phase diagram are not symmetrically
related to each other. This asymmetry is reflected in
the shapes of the density profiles near these critical points.
(e) For a given boundary condition, a density
profile with only one boundary-layer and no shock can be fully
specified by the value of $C_0$ at this end.
In addition to these issues, this analysis also
provides quantitative predictions regarding the heights of different
kinds of shocks and their approach to the bulk along with
the length scale associated with it.
{\bf Acknowledgement}
Financial support from the Department of Science
and Technology, India and warm hospitality of ICTP (Italy),
where the work was initiated, are gratefully acknowledged.
\section{Introduction}
Random number generation is an important process for a variety of application domains. Due to the intrinsic randomness of quantum processes, quantum random number generation (QRNG) is an important field of study within quantum information science.
By now, cryptographically secure QRNG protocols are well studied under a variety of security models ranging from the ``fully trusted device'' scenario (whereby all devices used, sources and measurements, are fully characterized) to the ``fully device independent'' scenario (where all devices used are not trusted) \cite{di-qrng1,di-qrng2}. Clearly from a cryptographic point of view, DI-QRNG protocols are the desirable ideal due to their minimal assumptions needed for security. However, though experimental progress has been rapidly improving, the bit-rates of such protocols cannot compare to other models \cite{di-qrng-exp,di-qrng-exp-2}. As a compromise, the \emph{source independent} (SI) model was introduced in \cite{vallone2014quantum} whereby measurement devices are characterized (though not necessarily ideal) whereas the source is under the control of the adversary. One may envision the source being a quantum server, providing a service to users who wish to distill cryptographically secure random strings without trusting the server (e.g., the server may be adversarial). The SI model affords fast experimental bit generation rates \cite{si-qrng-fast} (with a recent paper discussing an implementation with a rate over 8Gb/s \cite{si-qrng-fast-new}) along with fascinating potential applications, including the use of sunlight as a source \cite{si-qrng-sun}. For a survey of QRNG protocols, the reader is referred to \cite{qrng-survey}. Note that we are actually considering a semi-source independent model where the dimension of the source is known but no other assumptions are made (this is exactly the model introduced in \cite{vallone2014quantum}).
Outside of QRNG's, quantum walks (QW), the quantum analogue of classical random walks, are a highly important process in quantum computation \cite{farhi1998quantum,childs2003exponential,childs2009universal,lovett2010universal} and, recently, in quantum cryptography \cite{rohde2012quantum,vlachou2015quantum,vlachou2018quantum,srikara2020quantum}. Recently, a QW-based random number generation protocol was analyzed in \cite{QW-QRNG}, though a rigorous security analysis was not done. In this paper, we revisit that protocol, minimally changing it to be a SI-QRNG protocol, and prove its security. To our knowledge, this is the first SI-QRNG protocol with provable composable security based on quantum walks. We note that the security analysis of this protocol is not trivial. Due to certain simplifications we make to allow for an easier potential experimental implementation, prior tools are not immediately applicable (though we do not consider experimental concerns in this work, we keep them in mind when developing the protocol). In this work we develop an alternative entropic uncertainty relation which may also hold applications in other quantum cryptographic protocols.
Naturally, QW's are random processes and so, at first glance, designing and proving secure a QW-QRNG protocol seems a trivial task. Indeed, the following protocol is a trivial solution to the problem with an ``easy'' (using modern information theoretic tools) security proof in the SI model:
(1) First, a source prepares a state $\ket{\psi_{0,0}}^{\otimes (n+m)}$ where $\ket{\psi_{0,0}}$ is some quantum walker state. While we discuss this in detail later, for now it suffices to consider $\ket{\psi_{0,0}} = W\ket{0,0}$ where $W$ is a unitary operator and $\ket{0,0}$ lives in some Hilbert space of dimension $2P$. This state is sent to Alice.
(2) Second, Alice chooses a random subset of size $m$ and measures the systems indexed by this subset in the ``quantum walk basis'', namely the orthonormal basis $\{W\ket{0,0}, W\ket{0,1},\cdots, W\ket{1,P-1}\}$. Ideally, this measurement should always produce the zeroth state of this basis. The remaining $n$ walker systems are measured in the computational basis $\{\ket{0,0}, \ket{0,1}, \cdots, \ket{1,P-1}\}$. The first outcome is used to test the fidelity of the received state while the second is used as a raw-random string. This string is then further processed through a privacy amplification process, the output of which is the final cryptographic random string.
Indeed this protocol can be proven secure in a very straight-forward manner using entropic uncertainty \cite{berta2010uncertainty,tomamichel2011uncertainty,ent-survey}. However, there are two complications with the protocol itself. First, it would require the ability for Alice to perform a full basis measurement in the quantum-walk basis (namely, she would need to distinguish all states of the form $W\ket{c,x}$). This might require complex optics to do experimentally. Second, for the randomness generation measurement, she needs to be able to perform a measurement in the full coin and position basis, namely a measurement that can distinguish all states of the form $\ket{c,x}$. Our goal is to analyze a far simpler protocol, building off of the one from \cite{QW-QRNG}. The protocol will only require Alice to be able to distinguish a single walker state, namely $W\ket{0,0}$ from any other; and, second, she need only perform a measurement of the position of the walk for randomness, and she need not also determine the state of the coin itself. The second restriction is identical to the protocol in \cite{QW-QRNG} though, since they did not consider the source independent model, they did not require any other test. We add only this minimal test ability, namely the ability to distinguish a single quantum walk state from the $2P-1$ others in the walk basis, to ensure a cryptographically secure protocol.
Interestingly, standard entropic uncertainty relations of the form \cite{tomamichel2011uncertainty}:
\begin{equation}\label{eq:ent}
H_\infty^\epsilon(A|E) + H_{max}^\epsilon(A|B) \ge -\log\max_{x,y}||\sqrt{M_x}\sqrt{N_y}||^2_{op},
\end{equation}
where $\{M_x\}$ and $\{N_y\}$ are the two POVMs used in the protocol, are not applicable and can only yield the trivial bound. Thus a new approach is required to analyze this QW-QRNG protocol. We develop the approach in this paper using a technique of quantum sampling as introduced by Bouman and Fehr in \cite{sampling} and used by us recently to develop novel \emph{sampling-based entropic uncertainty relations} \cite{krawec2019quantum,krawec2020new}. In fact, our proof is similar, though with suitable modifications needed for this scenario and, since the result does not follow immediately from our previous analysis, it is necessary to state here.
We make two primary contributions in this paper. First, we analyze, for the first time, a QW-QRNG protocol introduced in \cite{QW-QRNG} from a cryptographic perspective. We adapt the protocol sufficiently, and minimally, so as to produce a secure system and prove it is secure in the SI model. This represents, to our knowledge, the first QRNG protocol based on quantum walks in the SI model of security and shows even greater application of quantum walks to other cryptographic primitives. Second, we develop a proof of security to handle this scenario when standard approaches are not immediately applicable. Our security method may also be applicable to other protocols of this nature where standard relations such as Equation \ref{eq:ent} cannot be used directly. Our proof utilizes the method of quantum sampling by Bouman and Fehr \cite{sampling}, augmented with techniques we developed in \cite{krawec2019quantum,krawec2020new} for entropic uncertainty, showing even more potential applications of these methods to complex security analyses. We actually consider this second contribution to be the more significant, as it shows how this framework of quantum sampling may be used to tackle cryptographic problems that standard methods would fail to analyze successfully, thus opening the door to a potential wider range of applications.
\section{Notation and Definitions}
We now introduce some basic definitions and notation that we will use throughout this paper. By $\mathcal{A}_d$ we mean a $d$-dimensional alphabet, namely $\mathcal{A}_d = \{0, 1, \cdots, d-1\}$. Given a word $q \in \mathcal{A}_d^N$ and some subset $t \subset \{1, 2, \cdots, N\}$, we write $q_t$ to mean the substring of $q$ indexed by $t$ (i.e., those characters in $q$ indexed by $i \in t$). We write $q_{-t}$ to mean the substring indexed by the complement of $t$. The \emph{Hamming Weight} of $q$ is denoted $wt(q) = |\{i \text{ } : \text{ } q_i \ne 0\}|$ while the \emph{relative Hamming weight} is denoted $w(q) = wt(q)/|q|$.
A \emph{density operator} is a Hermitian positive semi-definite operator of unit trace acting on some Hilbert space $\mathcal{H}$. Given a pure quantum state $\ket{\psi} \in \mathcal{H}$ we write $\kb{\psi}$ to mean $\ket{\psi}\bra{\psi}$.
The Shannon entropy of a random variable $X$ is denoted by $H(X)$ while the $d$-ary entropy function is denoted $h_d(x)$. This function is defined to be $h_d(x) = x\log_d(d-1) - x\log_dx - (1-x)\log_d(1-x)$. We also define the \emph{extended $d$-ary entropy function} to be $\bar{H}_d(x)$ which equals $h_d(x)$ for all $x \in [0, 1-1/d]$ but is $0$ for all $x < 0$ and is $1$ for all $x > 1-1/d$.
Let $\rho_{AE}$ be a quantum state (density operator) acting on some Hilbert space $\mathcal{H}_A\otimes\mathcal{H}_E$. The \emph{conditional quantum min entropy} \cite{renner2008security} is defined to be:
$H_\infty(A|E)_\rho = \sup_{\sigma_E}\max(\lambda\in\mathbb{R} \text{ } : \text{ } 2^{-\lambda}I_A\otimes\sigma_E - \rho_{AE} \ge 0),$
where $I_A$ is the identity operator on $\mathcal{H}_A$. Note that if the $E$ system is trivial and the $A$ portion is classical (namely $\rho_A = \sum_xp_x\kb{x}$) then it is easy to show that $H_\infty(A) = -\log \max_xp_x$. If the $E$ portion is classical, namely $\rho_{AE} = \sum_ep_e\rho_A^e\otimes\kb{e}$, then it can be shown that:
\begin{equation}\label{eq:cl-ent}
H_\infty(A|E)_\rho \ge \min_eH_\infty(A)_{\rho^e}.
\end{equation}
Finally, the \emph{smooth conditional min entropy} is defined to be \cite{renner2008security}:
$H_\infty^\epsilon(A|E)_\rho = \sup_{\sigma\in\Gamma_\epsilon(\rho)}H_\infty(A|E)_\sigma,$
with:
$\Gamma_\epsilon(\rho) = \{\sigma \text{ } : \text{ } \trd{\sigma-\rho} \le \epsilon\}.$
Here, $\trd{X}$ is the trace distance of operator $X$.
Given a classical-quantum state $\rho_{AE}$, let $\sigma_{KE}$ be the result of a \emph{privacy amplification} process on the $A$ register of this state. Namely, a process of mapping the $A$ register through a randomly chosen two-universal hash function. If the output of this hash function is $\ell$ bits long, then it was shown in \cite{renner2008security} that:
\begin{equation}\label{eq:PA}
\trd{\sigma_{KE} - I_K/2^\ell\otimes\sigma_E} \le 2^{-\frac{1}{2}(H_\infty^\epsilon(A|E)_\rho - \ell)} + 2\epsilon.
\end{equation}
\subsection{Quantum Random Walks}
In this work we will consider discrete-time quantum walks on a cycle graph \cite{aharonov2001quantum}. Such a process involves a Hilbert space $\mathcal{H}_W = \mathcal{H}_C\otimes\mathcal{H}_P$ where $\mathcal{H}_C$ is the two-dimensional \emph{coin space} and $\mathcal{H}_P$ is the $P$-dimensional \emph{position space}. The walk begins with the walker in some initial state $\ket{c,x}$ (e.g., $\ket{0,0}$), from which a \emph{walk operator} is applied $T$ times. The walk operator first applies a unitary operator on the coin space (for us, we only consider the Hadamard operator here, though other possibilities exist of course). Following this, a \emph{shift operator} $S$ is applied, which maps $\ket{0,x}\mapsto\ket{0,x+1}$ and $\ket{1,x}\mapsto\ket{1,x-1}$, where all arithmetic in the position space is done modulo $P$. Let $W = S\cdot (H\otimes I_P)$ be the walk operator; then, after $T$ steps, the walker evolves to state $W^T\ket{c,x}$. Generally, at this point, a measurement may be done on the position space causing a collapse at one of the $P$ spots.
Later, we will write $\ket{w_{c,x}}$ for the evolved state $W^T\ket{c,x}$. We will also use $\ket{w_i}$ when appropriate, using the natural relationship of tuples $(c,x)$ to integers $i$, with $(0,0)$ being the first index $i=0$. Finally, given a walk state $\ket{w_{c,x}}$, we use the notation $Pr_W(\ket{w_{c,x}}\rightarrow z)$ to denote the probability that the walker is observed at position $z$ after measurement. Namely, $Pr_W(\ket{w_{c,x}}\rightarrow z) = \braket{w_{c,x}|I_C\otimes\kb{z}|w_{c,x}}.$
Finally, we denote by $\gamma$ the maximal positional probability of the walk starting at $\ket{0,0}$, namely:
\begin{equation}\label{eq:gamma}
\gamma = \max_zPr_W(\ket{w_{0,0}}\rightarrow z).
\end{equation}
Obviously, this is a function of the walk parameters (the operation $W$ along with the number of steps $T$).
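For concreteness, $\gamma$ can be obtained by direct simulation of the Hadamard walk on a cycle. The following Python sketch (the choices $P=8$ and $T=10$ are purely illustrative) builds $W = S\cdot(H\otimes I_P)$, evolves $\ket{0,0}$ for $T$ steps, and reads off the position distribution:
\begin{verbatim}
import numpy as np

P, T = 8, 10                      # illustrative position dimension and step count

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
idx = lambda c, x: c * P + x      # basis ordering |c,x> -> c*P + x

shift = np.zeros((2 * P, 2 * P))
for x in range(P):
    shift[idx(0, (x + 1) % P), idx(0, x)] = 1.0   # |0,x> -> |0,x+1>
    shift[idx(1, (x - 1) % P), idx(1, x)] = 1.0   # |1,x> -> |1,x-1>

W = shift @ np.kron(H, np.eye(P))                 # one step of the walk
psi = np.zeros(2 * P); psi[idx(0, 0)] = 1.0       # |0,0>
psi = np.linalg.matrix_power(W, T) @ psi          # |w_{0,0}> = W^T |0,0>

amps = psi.reshape(2, P)                          # amplitudes by (coin, position)
pos_dist = np.sum(np.abs(amps) ** 2, axis=0)      # Pr_W(|w_{0,0}> -> z)
gamma = pos_dist.max()
print(np.round(pos_dist, 4), gamma)
\end{verbatim}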
\subsection{Quantum Sampling}
In \cite{sampling}, Bouman and Fehr discovered a fascinating connection linking classical sampling strategies with quantum ones, even when the quantum state is entangled with an environment system (e.g., an adversary). Here we review some of these concepts, however for more details the reader is referred to \cite{sampling}.
Let $q\in\mathcal{A}_d^N$. A classical sampling strategy is a process of choosing a random subset $t \subset\{1, \cdots, N\}$, observing $q_t$, and estimating some target property of the \emph{unobserved portion}. Here, as in \cite{sampling}, we consider the target property to be the relative Hamming weight. One sampling strategy we will employ consists of choosing a subset $t$ of size $m \le N/2$ uniformly at random, observing $q_t$, and outputting $w(q_t)$ as an estimate of the Hamming weight in the unobserved portion. It was shown in \cite{sampling} that, for $\delta > 0$:
\begin{equation}\label{eq:err-cl}
\epsilon_\delta^{cl} := \max_{q\in\mathcal{A}_d^N}Pr(q \not\in \mathcal{G}_{t,\delta}) \le 2\exp\left(\frac{-\delta^2m(n+m)}{m+n+2}\right),
\end{equation}
where the probability is over all choices of subsets $t$ and $\mathcal{G}_{t,\delta}$ is the set of all ``good'' words for which this sampling strategy is guaranteed to produce a $\delta$-close estimate of the Hamming weight of the unobserved portion, namely:
\[
\mathcal{G}_{t,\delta} = \{q\in\mathcal{A}_{d}^{N} \text{ } : \text{ } |w(q_t) - w(q_{-t})|\le\delta\}.
\]
The value $\epsilon_\delta^{cl}$ is the error probability of the classical sampling strategy (the ``cl'' superscript is used to refer to a classical sampling strategy).
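For orientation, the bound in Equation \ref{eq:err-cl} is easily evaluated; the block sizes and $\delta$ below are arbitrary illustrative values:
\begin{verbatim}
import numpy as np

def eps_cl(delta, m, n):
    # Upper bound on the classical sampling failure probability, Eq. (err-cl)
    return 2.0 * np.exp(-delta**2 * m * (n + m) / (m + n + 2.0))

N, m = 10**6, 10**5
print(eps_cl(0.01, m, N - m))     # ~ 9.1e-5
\end{verbatim}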
The main result from \cite{sampling} shows how to promote such a classical strategy to a quantum one in a way that the failure probabilities of the quantum strategy are functions of the classical ones. Fix a basis $\{\ket{0}, \cdots, \ket{d-1}\}$ (the exact choice may be arbitrary but then fixed - later when using this result, we will use the walk basis $\{W^T\ket{c,x}\}_{c,x}$). Define:
\[
span(\mathcal{G}_{t,\delta}) = span(\ket{i_1i_2\cdots i_N} \text{ } : \text{ } |w(i_t) - w(i_{-t})|\le\delta).
\]
This is the quantum analogue of the ``good set'' of classical words. In particular, note that if given a state $\ket{\phi}_{AE} \in span(\mathcal{G}_{t,\delta})\otimes \mathcal{H}_E$, then if a measurement in the given basis were performed on those qudits indexed by $t$ leading to outcome $q\in\mathcal{A}_d^m$, it must hold that the remaining state is a superposition of the form:
$\ket{\phi_{t,q}} = \sum_{i\in J}\alpha_i\ket{i,E_i},$
where $J\subset \{i \in \mathcal{A}_d^{N-m} \text{ } : \text{ } |w(i) - w(q)| \le \delta\}$.
The main result from \cite{sampling}, reworded for our application here, was to prove the following theorem:
\begin{theorem}\label{thm:sample}
(Modified from \cite{sampling}): Let $\delta > 0$. Given the above classical sampling strategy and an arbitrary quantum state $\ket{\psi}_{AE}$, there exists a collection of ``ideal states'' $\{\ket{\phi^t}\}_{t}$, indexed over all possible subsets the sampling strategy may choose, such that each $\ket{\phi^t} \in span(\mathcal{G}_{t,\delta})\otimes\mathcal{H}_E$ and:
\begin{equation}\label{eq:ideal}
\frac{1}{2}\trd{\frac{1}{T}\sum_t\kb{t}\otimes\kb{\psi} - \frac{1}{T}\sum_t\kb{t}\otimes\kb{\phi^t}} \le \sqrt{\epsilon_\delta^{cl}}.
\end{equation}
where $T = {N \choose m}$ and the sum is over all subsets of size $m$.
\end{theorem}
Note that the result requires a fixed basis of reference (from which to define $\mathcal{G}_{t,\delta}$).
\section{The Protocol}
We consider a QW-QRNG protocol introduced in \cite{QW-QRNG}. That protocol was not analyzed rigorously from a cryptographic standpoint and, in fact,
would not be secure in the SI model. We modify that protocol, adding a minimal testing ability for Alice, and later show it is secure in the SI model of security. The protocol operates as follows:
$ $\newline\textbf{Public Parameters:} The quantum walk setting, namely the dimension of the position space $P$ (defining the overall Hilbert space of one walker $\mathcal{H}_W = \mathcal{H}_C\otimes\mathcal{H}_P$), the walk operator $W$, and the number of steps to evolve by, $T$.
$ $\newline\textbf{Source:} A source, potentially adversarial, produces a quantum state $\ket{\psi_0} \in \mathcal{H}_A\otimes\mathcal{H}_E$, where $\mathcal{H}_A \cong \mathcal{H}_W^{\otimes N}$. If the source is honest, the state prepared should be of the form:
\[
\ket{\psi_0} = \ket{w_0}^{\otimes N}\otimes\ket{0}_E,
\]
namely, $N$ copies of the walker state $\ket{w_0} = \ket{w_{0,0}} = W^T\ket{0,0}$ unentangled with Eve.
$ $\newline\textbf{User:} Alice chooses a random subset $t$ of size $m$ and measures those walker states using POVM $\mathcal{W} = \{\kb{w_0}, I-\kb{w_0}\} = \{W_0, W_1\}$ resulting in outcome $q \in \{0,1\}^m$ (equivalently, she reverses the quantum walk and observes whether the initial state was $\ket{0,0}$ or anything else). The remaining states she measures using POVM $\mathcal{Z} = \{I_C\otimes\kb{j}\}_{j=0}^{P-1} = \{Z_0, Z_1, \cdots, Z_{P-1}\}$ resulting in outcome $r \in \mathcal{A}_P^n$, where $n = N-m$.
$ $\newline\textbf{Postprocessing:} Finally, Alice applies privacy amplification to $r$, producing a final random string of size $\ell$. As proven in \cite{frauchiger2013true}, the hash function used for privacy amplification need only be chosen randomly once and then reused for each run of the protocol for a QRNG protocol of this nature.
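Under an honest source, every test measurement returns outcome $0$ and the raw string $r$ consists of $n$ i.i.d.\ samples from the position distribution of $\ket{w_0}$. A toy simulation of one such honest run (building on the walk sketch of the previous section; the sizes $N$ and $m$ are illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def honest_run(pos_dist, N=1000, m=200):
    # Toy honest run: all m test outcomes are 0; the n = N - m raw symbols
    # are i.i.d. samples from the position distribution of |w_0>.
    n = N - m
    p = np.asarray(pos_dist, dtype=float)
    p = p / p.sum()                               # guard against rounding
    q = np.zeros(m, dtype=int)                    # every test passes
    r = rng.choice(len(p), size=n, p=p)           # raw positional outcomes
    return q, r

# q, r = honest_run(pos_dist)   # pos_dist from the walk sketch above
# print(np.bincount(r, minlength=len(pos_dist)) / len(r))  # ~ pos_dist
\end{verbatim}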
The goal of this protocol is to ensure that, for a given $\epsilon_{PA}$ set by the user, after privacy amplification the resulting string is $\epsilon_{PA}$ close to an ideal random string, uniformly generated and independent of any adversary system. Using Equation \ref{eq:PA}, this involves finding a bound on the quantum min-entropy. Note that, for the given POVMs, it is straight-forward to check that $\max_{x,y}||\sqrt{W_x}\sqrt{Z_y}||^2_{op} = 1$ and so Equation \ref{eq:ent} only yields the trivial bound on the min entropy. Thus an alternative approach is required which we develop in the next section.
\subsection{Security Analysis}
To prove security, we require a bound on the quantum min entropy from which, using Equation \ref{eq:PA}, we may compute the number of random bits $\ell$ which may be extracted from $N$ quantum walk states (prepared by an adversary). We assume the adversary is allowed to create any initial state, possibly entangled with her ancilla, however as in \cite{vallone2014quantum}, the dimension of the system sent to Alice is known; in our case it is $(2P)^N$, namely, $N$ quantum walker states, each of dimension $2P$. We do not assume anything else about this state (for instance, each of the $N$ walkers may be in different states). Such a scenario also models natural noise and an honest source - considering an adversarial source is more general. Finally, we assume that Alice's measurement devices are fully characterized.
\begin{theorem}\label{thm:main}
Let $\epsilon > 0$. After executing the above QW-QRNG protocol and observing outcome $q$ during the test stage (namely, after measuring using $\mathcal{W}$), it holds that, except with probability at most $\epsilon^{1/3}$ (where the probability here is over the choice of sample subset and observation $q$), the protocol outputs a final secret string of size:
\[
\ell=-\eta_q\log_2\gamma - n\cdot\frac{\bar{H}_{2P}(w(q) + \delta)}{\log_{2P} (2)} - 2\log_2\frac{1}{\epsilon} - \log_2{N \choose m},
\]
which is $(5\epsilon + 4\epsilon^{1/3})$-close to an ideal random string (i.e., one that is uniformly generated and independent of any adversary system as in Equation \ref{eq:PA}). Above, $\eta_q = (N-m)(1-w(q)-\delta)$ and:
\begin{equation}\label{eq:delta}
\delta = \sqrt{\frac{(N+2)\ln(2/\epsilon^2)}{m\cdot N}}.
\end{equation}
\end{theorem}
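As a rough illustration only, the following Python sketch evaluates the bound of Theorem~\ref{thm:main} for given protocol parameters. It assumes that $\bar{H}_{2P}$ denotes the $2P$-ary entropy function appearing in the Hamming-ball volume bound used in the proof, and it takes $\gamma$ (Equation \ref{eq:gamma}) as an input.
\begin{verbatim}
import math
from math import comb, log2

def delta(N, m, eps):
    # Sampling deviation delta from the theorem statement.
    return math.sqrt((N + 2) * math.log(2 / eps**2) / (m * N))

def H_dary(x, d):
    # Assumed d-ary entropy used in the Hamming-ball volume bound.
    x = min(max(x, 1e-12), 1 - 1e-12)
    return (x * math.log(d - 1, d) - x * math.log(x, d)
            - (1 - x) * math.log(1 - x, d))

def key_length(N, m, wq, eps, gamma, P):
    # Number of extractable bits according to the theorem (gamma supplied by the user).
    d, n = 2 * P, N - m
    dlt = delta(N, m, eps)
    eta_q = n * (1 - wq - dlt)
    ell = (-eta_q * log2(gamma)
           - n * H_dary(wq + dlt, d) / math.log(2, d)
           - 2 * log2(1 / eps)
           - log2(comb(N, m)))
    return max(0.0, ell)
\end{verbatim}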
\begin{proof}
Fix $\epsilon > 0$ and let $\ket{\psi_0}_{AE}$ be the state the adversarial source Eve creates, sending the $A$ portion to Alice. Using Theorem \ref{thm:sample} (with respect to the reference basis $\{W^T\ket{0,0}, \cdots, W^T\ket{1,P-1}\}$), there exist ideal states $\{\ket{\phi^t}\}$, indexed over all subsets $t\subset\{1, 2, \cdots, N\}$ of size $m$, such that $\ket{\phi^t} \in \text{span}(\ket{w_{i_1}w_{i_2}\cdots w_{i_N}} \text{ } : \text{ } |w(i_t) - w(i_{-t})| \le \delta) \otimes\mathcal{H}_E$ and Equation \ref{eq:ideal} holds.
(Note we define $\ket{w_0} = \ket{w_{0,0}} = W^T\ket{0,0}$.) From Equation \ref{eq:err-cl}, by setting $\delta$ as in Equation \ref{eq:delta},
we have $\sqrt{\epsilon^{cl}_\delta} = \epsilon$.
We now use a two-step proof method we developed in \cite{krawec2019quantum,krawec2020new} to utilize quantum sampling for entropic uncertainty relations. Here, we modify the first step of the proof for this cryptographic application, while the second step remains largely the same. The first step is to analyze the security of the ideal state $\sigma_{TAE} = \frac{1}{T}\sum_t\kb{t}\otimes\kb{\phi^t}$. Choosing a subset is equivalent to measuring the $T$ register of $\sigma_{TAE}$, causing the state to collapse to the corresponding ideal state $\ket{\phi^t}$. At this point, a measurement using $\mathcal{W}$ is made on subset $t$ resulting in some outcome $q$. The post-measurement state, discarding those systems that were measured, is easily seen to be of the form:
\[
\phi^t_q = \sum_{k\in\mathcal{A}_{2P-1}^{wt(q)}}p_k\underbrace{P\left(\sum_{i\in J_q^{(k)}}\alpha_i\ket{w_i}\ket{E_i}\right)}_{\sigma_{AE}^{(k)}},
\]
with $P(z) = zz^*$ and $J_q^{(k)} \subset \{i\in\mathcal{A}_{2P}^n \text{ } : \text{ } |w(i) - w(q)|\le\delta\}.$ Recall $n=N-m$.
Let us consider one of the $\sigma_{AE}^{(k)}$ states and perform a measurement using POVM $\mathcal{Z}$ on the remaining $A$ portion.
To compute this state, we write a single quantum walker $\ket{w_i} \in \mathcal{H}_W$ as:
$ \ket{w_i} = \ket{0}\ket{\phi(0,i)} + \ket{1}\ket{\phi(1,i)},$
where $\ket{\phi(c,i)}$ are (not necessarily normalized) states in $\mathcal{H}_P$. Using this notation, after some algebra, we find that the post-measurement state, with Alice storing the outcome $z\in\mathcal{A}^n_P$ in a classical register $Z$ and also tracing out the unmeasured coin register is:
\[
\sigma_{ZE}^{(k)} = \sum_{z\in\mathcal{A}_P^n} \kb{z}_Z\sum_{i,j \in J_q^{(k)}}\alpha_i\alpha_j^*\sum_{c\in\{0,1\}^n}x_{c,z,i}x_{c,z,j}^*\otimes\ket{E_i}\bra{E_j}
\]
where given a string $c\in\{0,1\}^n$, $z\in\mathcal{A}_P^n$, and $i\in J_q^{(k)}$, we define $x_{c,z,i}$ as:
$x_{c,z,i} = \prod_\ell\braket{z_\ell|\phi(c_\ell,i_\ell)}.$
To compute the min-entropy of this state, we will consider the following density operator:
\[
\chi_{ZE} = \sum_z \kb{z}\sum_{i\in J_q^{(k)}}|\alpha_i|^2\sum_c|x_{c,z,i}|^2\otimes\kb{E_i}
\]
Using a proof similar to a lemma in \cite{renner2008security} which bounds the min-entropy of a superposition based on the min-entropy of a suitable mixed state, we find that:
\[
H_\infty(Z|E)_{\sigma^{(k)}} \ge H_\infty(Z|E)_\chi - \log|J_q^{(k)}|.
\]
Note that, though the lemma in \cite{renner2008security} is not immediately applicable to the above scenario, the proof is, indeed, identical and so we omit the details for space reasons.
Consider the state $\chi_{ZEI}$ where we append an auxiliary system spanned by orthonormal basis $\ket{i}$:
\[
\chi_{ZEI} = \sum_{i\in J_q^{(k)}}|\alpha_i|^2\underbrace{\left(\sum_z \kb{z}\sum_c|x_{c,z,i}|^2\right)}_{\chi^{(i)}}\otimes\kb{E_i}\otimes\kb{i}
\]
For strings $z \in \mathcal{A}_P^n$ and $i \in \mathcal{A}_{2P}^n$, let $p(z|w_i)$ be the probability that outcome $z$ is observed when measuring the pure, unentangled state $\ket{w_{i_1}w_{i_2}\cdots w_{i_n}}$ using POVM $\mathcal{Z}$. Simple algebra shows that this is in fact $p(z|w_i) = \sum_c|x_{c,z,i}|^2$. Thus $\chi^{(i)} = \sum_z\kb{z}p(z|w_i)$.
From Equation \ref{eq:cl-ent} and treating the joint $EI$ register as a single classical register, we have:
\[
H_\infty(Z|E)_\chi \ge H_\infty(Z|EI)_\chi \ge \min_iH_\infty(Z)_{\chi^{(i)}}.
\]
Fix a particular $i \in J_q^{(k)}$ and let $\eta = n-wt(i)$ (namely, $\eta$ is the number of zeros in the string $i$). Then, it is clear that:
$p(z|w_i) \le \max_{x\in\mathcal{A}_P} Pr_W(\ket{w_0} \rightarrow x)^\eta = \gamma^\eta,$
where $\gamma$ was defined in Equation \ref{eq:gamma}. Indeed, any other $Pr_W(\ket{w_i}\rightarrow z) \le 1$ and so we may consider only the $\ket{w_0}$ terms as contributing to this upper bound. From this, it follows that:
$H_\infty(Z)_{\chi^{(i)}} = -\log \max_z p(z|w_i)\ge -\log\gamma^{n-wt(i)}.$
Now, since $i \in J_q^{(k)}$, we know that $wt(i) \le n(w(q)+\delta)$ and so:
\begin{align*}
H_\infty(Z|E)_\chi &\ge H_\infty(Z|EI)_\chi \ge \min_iH_\infty(Z)_{\chi^{(i)}}\\
&\ge -\log\gamma^{n(1-w(q) - \delta)} = -\eta_q\log\gamma.
\end{align*}
Finally, noting that $|J_q^{(k)}| \le d^{n\bar{H}_{2P}(w(q)+\delta)}$ with $d = 2P$ (using the well-known bound on the volume of a Hamming ball), we have:
\begin{align*}
&H_\infty(Z|E)_\sigma \ge \min_k H_\infty(Z|E)_{\sigma^{(k)}}\\
&\ge -\eta_q\log_2\gamma - n\cdot\frac{\bar{H}_{2P}(w(q) + \delta)}{\log_{2P} (2)}
\end{align*}
Of course, this is the ideal-state analysis. However, we may use a technique similar to the one we employed in \cite{krawec2019quantum} for translating this ideal analysis to the real case. Indeed, let $\rho_{ZE}^{t,q}$ be the state of the real system, $\ket{\psi_0}$, conditioned on the protocol sampling subset $t$ and observing outcome $q$, and let $\sigma_{ZE}^{t,q}$ be the same for the ideal state. If we define:
$\Delta_{t,q} = \frac{1}{2}\trd{\rho_{ZE}^{t,q} - \sigma_{ZE}^{t,q}},$
then, treating $\Delta_{t,q}$ as a random variable over the choice of $t$ and outcome $q$, it can be shown (see the proof of Theorem 2 in \cite{krawec2019quantum} for explicit details) that except with probability $\epsilon^{1/3}$, it holds that $\Delta_{t,q} \le \epsilon + \epsilon^{1/3}$ where the probability is over the choice of $t$ and outcome $q$. Thus, by switching to smooth min entropy, we have, except with probability at most $\epsilon^{1/3}$ that $H_\infty^{2\epsilon+2\epsilon^{1/3}}(Z|E)_\rho \ge H_\infty(Z|E)_\sigma$.
Privacy amplification (Equation \ref{eq:PA}, setting the right-hand side of that equation equal to $\epsilon_{PA} = 5\epsilon+4\epsilon^{1/3}$, namely twice the smoothing parameter plus an additional $\epsilon$), along with the fact that it requires $\log_2{N \choose m}$ random bits to choose a subset of size $m$, completes the proof.
\end{proof}
$ $\newline\textbf{Evaluation: }
We evaluate the performance of our protocol for a variety of position dimensions $P$. Ordinarily, users would run the protocol and observe $q$ directly; however, to simulate its execution, we assume the noise follows a depolarization channel with parameter $Q$. We use this model only to evaluate our protocol; it is a standard noise model for such simulations. From this, after sampling, Alice will have an expected Hamming weight in her test measurement of $w(q) = Q$. In our evaluations, we set $\epsilon = 10^{-36}$, which implies a failure probability, and an $\epsilon_{PA}$-secure string, both on the order of $10^{-12}$. We also use a sample size equal to the square root of the total number of signals $N$, namely $m = \sqrt{N}$. Finally, to evaluate our bit-generation rate, we require $\gamma$ (Equation \ref{eq:gamma}). Since the walk settings are chosen by the user, we wrote a program that, for fixed dimension $P$, found the minimum $\gamma$ value over all time settings $T = 1, 2, \cdots, 5000$. The evaluation of the bit-generation rate of this SI-QW-QRNG protocol, using our analysis in Theorem \ref{thm:main}, is shown in Figure \ref{fig:1}. A comparison to an alternative SI-QRNG protocol from \cite{xu2016experimental} is shown in Figure \ref{fig:2}. Note that as the dimension of the walker increases, the bit-generation rates, even under high noise levels, increase. Interestingly, as shown in Figure \ref{fig:2}, depending on the walker dimension, the QW-based protocol can sometimes outperform the SI-QRNG protocol from \cite{xu2016experimental} (which is based on mutually unbiased measurements of a highly entangled state).
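For reference only, the following minimal Python sketch reproduces this search for $\gamma$; it assumes a coined walk on a cycle of $P$ positions with a Hadamard coin and a cyclic shift, whereas the actual walk operator $W$ is a public parameter chosen by the user.
\begin{verbatim}
import numpy as np

def walk_operator(P):
    # Coined walk on a cycle of P positions (Hadamard coin assumed).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    S = np.zeros((2 * P, 2 * P))
    for x in range(P):
        S[(x + 1) % P, x] = 1            # coin 0 shifts right
        S[P + (x - 1) % P, P + x] = 1    # coin 1 shifts left
    return S @ np.kron(H, np.eye(P))     # state ordering: index = c*P + x

def best_gamma(P, T_max=5000):
    # Minimum over T = 1..T_max of the largest position probability of W^T|0,0>.
    W = walk_operator(P)
    psi = np.zeros(2 * P)
    psi[0] = 1.0                         # initial state |c=0, x=0>
    best = 1.0
    for _ in range(T_max):
        psi = W @ psi
        pos_probs = np.abs(psi[:P])**2 + np.abs(psi[P:])**2
        best = min(best, pos_probs.max())
    return best
\end{verbatim}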
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{noise-15.png}
\includegraphics[width=0.48\linewidth]{noise-20.png}
\caption{Random bit generation rates of the QW-QRNG protocol. $x$-axis: number of signals sent $N$ (from which $m=\sqrt{N}$ are used for sampling); $y$-axis: random bit-generation rate (namely $\ell/N$ where $\ell$ is computed using Theorem \ref{thm:main}). Black dashed (top) is $P=51$; red-dashed (middle) is $P=11$; blue solid (lowest) is $P=5$. Left graph is with $15\%$ noise in the source (namely $w(q) = 0.15$); Right graph has $20\%$ noise.}\label{fig:1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{comp-5.png}
\includegraphics[width=0.48\linewidth]{comp-51.png}
\caption{Comparing the QW-QRNG protocol's bit generation rate (black-solid) with that of the SI-QRNG protocol in \cite{xu2016experimental} (red-dashed). Left: $P=5$; Right: $P=51$. In both cases we assume $10\%$ noise in the signal state. For the SI-QRNG protocol's evaluation from \cite{xu2016experimental}, we use a dimension of $2P$.}\label{fig:2}
\end{figure}
\section{Closing Remarks}
In this paper, we minimally modified a QRNG protocol from \cite{QW-QRNG}, based on quantum walks, to be secure in the semi-source-independent (SI) model. Since standard entropic uncertainty relations cannot be applied directly, as discussed, we developed an alternative entropic uncertainty relation for this protocol and used it to prove security in the SI model. Our methods may find applications in other difficult-to-analyze quantum cryptographic protocols. There are good reasons for studying this QW-based protocol. First, it is important to harness alternative quantum processes such as quantum-walk states: it is still unclear what future experimental developments will yield, and being able to utilize QW states may be highly relevant, especially since they are also useful for other computational and cryptographic tasks, as discussed earlier. Second, it is interesting from a theoretical standpoint. Many open problems remain; in particular, a more rigorous evaluation of the performance of this QW-QRNG protocol for different walk parameters (such as alternative coin operators) or alternative models (such as history-dependent walks \cite{rohde2013quantum,mcgettrick2009one,krawec2015history,brun2003quantum}) would be very interesting.
\balance
\section{Effect of thermal noise on light}
In a gravitational-wave interferometer, the thermal noise of the mirrors has
a similar effect as a gravitational wave since both change the optical path
followed by the light in the interferometer arms.\ At every radial point $r$
of the mirror surface, the field experiences a local phase-shift
proportional to the longitudinal displacement $u\left( r,z=0,t\right) $ of
the surface (the origin of the cylindrical coordinates is taken at the
center of the mirror surface, see figure \ref{Fig_Model}). This leads to a
global phase-shift for the reflected field which is actually related to the
mirror displacement averaged over the beam profile\cite{Gillespie95,Pinard99}%
.\ The phase shift between the two interferometer arms thus contains
information about the mirror displacement and the variable read out by this
procedure corresponds to the averaged displacement
\begin{equation}
\widehat{u}\left( t\right) =\left\langle u\left( r,z=0,t\right) ,v\left(
r\right) \right\rangle , \label{Eq_MeanU}
\end{equation}
where the brackets stand for the overlap integral in the mirror plane ($z=0$%
),
\begin{equation}
\left\langle f\left( r\right) ,g\left( r\right) \right\rangle
=\int_{z=0}d^{2}rf\left( r\right) g\left( r\right) , \label{Eq_Overlap}
\end{equation}
and $v\left( r\right) $ is the intensity profile of the light beam in the
mirror plane.\ Assuming that the beam is in a TEM$_{00}$ Gaussian mode, this
profile is related to the beam waist $w_{0}$ by
\begin{equation}
v\left( r\right) =\frac{2}{\pi w_{0}^{2}}e^{-2r^{2}/w_{0}^{2}}. \label{Eq_v}
\end{equation}
Any displacement can be decomposed onto the acoustic modes of the mirror.\
Noting $\left\{ u_{n}\left( r,z\right) \right\} $ a basis of the internal
acoustic modes, the displacement $u\left( r,z,t\right) $ can be expressed as
a linear combination of these modes
\begin{equation}
u\left( r,z,t\right) =\sum_{n}a_{n}\left( t\right) u_{n}\left( r,z\right) ,
\label{Eq_Decomp}
\end{equation}
where $a_{n}\left( t\right) $ is the time-dependent amplitude of mode $n$.
Each acoustic mode corresponds to a harmonic oscillator characterized by a
Lorentzian mechanical susceptibility
\begin{equation}
\chi _{n}\left[ \Omega \right] =\frac{1}{M_{n}\left( \Omega _{n}^{2}-\Omega
^{2}-i\Omega _{n}^{2}\Phi \left[ \Omega \right] \right) }, \label{Eq_ChiN}
\end{equation}
where $M_{n}$ is the effective mass of mode $n$, $\Omega _{n}$ is its
resonance frequency and $\Phi \left[ \Omega \right] $ is the loss angle
assumed to be the same for all modes. The mirror motion can be described by
the Fourier transforms $a_{n}\left[ \Omega \right] $ of every amplitude
coefficient.\ Assuming that the mirror is in thermal equilibrium at
temperature $T$, one gets
\begin{equation}
a_{n}\left[ \Omega \right] =\chi _{n}\left[ \Omega \right] F_{T,n}\left[
\Omega \right] , \label{Eq_an}
\end{equation}
where $F_{T,n}$ is a Langevin force describing the coupling of the mode $n$
with the thermal bath.
\begin{figure}
\centerline{\psfig{figure=model.eps,width=7cm}}
\vspace{2mm}
\caption{The mirror is coated on the plane side of a plano-convex substrate of radius
$R$, thickness $h_{0}$ and diameter $D$.\ The effect of the thermally excited motion
corresponds to a global phase-shift for the light beam.}
\label{Fig_Model}
\end{figure}
One can now determine the averaged displacement $\widehat{u}$ from eqs.\ (%
\ref{Eq_MeanU}), (\ref{Eq_Decomp}) and (\ref{Eq_an}).\ One gets\cite{Pinard99}
\begin{equation}
\widehat{u}\left[ \Omega \right] =\chi _{eff}\left[ \Omega \right] F_{T}%
\left[ \Omega \right] , \label{Eq_uChi}
\end{equation}
where $\chi _{eff}$ appears as an effective susceptibility taking into
account all the acoustic modes and their spatial overlap with the light
beam,
\begin{equation}
\chi _{eff}\left[ \Omega \right] =\sum_{n}\left\langle u_{n}\left(
r,z=0\right) ,v\left( r\right) \right\rangle ^{2}\chi _{n}\left[ \Omega %
\right] . \label{Eq_ChiEff}
\end{equation}
This effective susceptibility is then the sum over all modes of the
susceptibilities $\chi_{n}$ weighted by the overlap with light. The
force $F_{T}$ in eq. (\ref{Eq_uChi}) is an effective Langevin force related
to the forces $F_{T,n}$ of each acoustic mode.\ One finds\cite{Pinard99} that
the noise spectrum $S_{T}\left[ \Omega \right] $ of the force $F_{T}$ is
related to $\chi _{eff}$ by the fluctuation-dissipation theorem
\begin{equation}
S_{T}\left[ \Omega \right] =-\frac{2k_{B}T}{\Omega }%
\mathop{\rm Im}%
\left( \frac{1}{\chi _{eff}\left[ \Omega \right] }\right) . \label{Eq_ST}
\end{equation}
This relation means that the mirror is in thermodynamic equilibrium at
temperature $T$.
In a gravitational-wave interferometer, the frequency of a gravitational
wave is usually much smaller than the mechanical resonance frequencies of
internal acoustic modes of the mirrors. We are thus interested in the noise
spectrum $S_{\widehat{u}}$ of the averaged displacement $\widehat{u}$ at low
frequency compared to the resonance frequencies $\Omega _{n}$. From eqs. (%
\ref{Eq_uChi}) to (\ref{Eq_ST}), the background thermal noise can be
approximated in this frequency domain to
\begin{eqnarray}
S_{\widehat{u}}\left[ \Omega \approx 0\right] &=&\frac{2k_{B}T}{\Omega }%
\mathop{\rm Im}%
\left( \chi _{eff}\left[ \Omega \right] \right) \nonumber \\
&\approx &2k_{B}T\frac{\Phi \left[ \Omega \right] }{\Omega }\chi _{eff}\left[
0\right] . \label{Eq_Su}
\end{eqnarray}
The effect of thermal noise on light is thus proportional to the effective
susceptibility at zero frequency which is given by
\begin{equation}
\chi _{eff}\left[ 0\right] =\sum_{n}\frac{\left\langle u_{n}\left(
r,z=0\right) ,v\left( r\right) \right\rangle ^{2}}{M_{n}\Omega _{n}^{2}}.
\label{Eq_ChiEff0}
\end{equation}
In the next section we determine this susceptibility for a plano-convex
mirror and we compare the values obtained to the ones of a cylindrical
mirror.
\section{Plano-convex mirror}
We consider that the mirror is coated on the plane side of a segment of
sphere of radius $R$ and of thickness $h_{0}$ (see figure \ref{Fig_Model}).
For simplicity we assume that the mirror has a sharp edge on its
circumference so that its mass $M$ and its diameter $D$ are related to $R$
and $h_{0}$ by
\begin{eqnarray}
M &=&\pi \rho h_{0}^{2}\left( R-h_{0}/3\right) , \label{Eq_M} \\
D &=&2\sqrt{h_{0}\left( 2R-h_{0}\right) }, \label{Eq_D}
\end{eqnarray}
where $\rho $ is the density of the substrate (2200 kg/m$^{3}$ for silica).
The total mass $M$ of the mirror is an important parameter for the
suspension thermal noise\cite{Gonzalez94,Logan96}. We thus choose a mass of
the same order as the one of the mirrors in gravitational-wave
interferometers.\ We will see however that the internal thermal noise is
quite insensitive to the mirror mass so that we set the mass $M$ to 20 kg.
All the geometrical parameters of the mirrors (curvature radius $R$,
diameter $D$) can then be expressed in terms of the thickness $h_{0}$.
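As a quick numerical check of these relations, inverting eq.\ (\ref{Eq_M}) for the curvature radius and applying eq.\ (\ref{Eq_D}) reproduces the dimensions quoted later in the text for a 20-kg mirror with a thickness of 7 cm (a minimal Python sketch):
\begin{verbatim}
import math

rho, M, h0 = 2200.0, 20.0, 0.07   # silica density, mass [kg], thickness [m]
R = M / (math.pi * rho * h0**2) + h0 / 3   # invert M = pi*rho*h0^2*(R - h0/3)
D = 2 * math.sqrt(h0 * (2 * R - h0))       # sharp-edge diameter
print(round(R, 3), round(D, 3))            # ~0.61 m and ~0.57 m
\end{verbatim}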
If the thickness is much smaller than the curvature radius, the acoustic
propagation equation can be solved using a paraxial approximation and one
gets analytical expressions for the acoustic modes corresponding to Gaussian
modes\cite{Wilson74}. Each compression mode is defined by three integers $n$%
, $p$, $l$, corresponding to longitudinal, radial and angular indexes,
respectively.\ The longitudinal displacement $u_{n,p,l}\left( r,z\right) $
at radial coordinate $r$ and axial coordinate $z$ is given by\cite
{Wilson74,Pinard99}
\begin{equation}
u_{n,p,l}\left( r,z\right) =e^{-r^{2}/w_{n}^{2}}L_{p}^{l}\left(
2r^{2}/w_{n}^{2}\right) \cos \left( \frac{n\pi }{h\left( r\right) }z\right) .
\label{Eq_unpl}
\end{equation}
$u_{n,p,l}$ is composed of a transverse Gaussian structure, a transverse
Laguerre polynomial $L_{p}^{l}$ and a cosine in the propagation direction ($%
h\left( r\right) $ is the mirror thickness at radial position $r$, equal to $%
h_{0}$ for $r=0$). The acoustic waist $w_{n}$ and the eigenfrequency $\Omega
_{n,p,l}$ are given by
\begin{eqnarray}
w_{n}^{2} &=&\frac{2h_{0}}{n\pi }\sqrt{Rh_{0}}, \label{Eq_wn} \\
\Omega _{n,p,l}^{2} &=&\Omega _{M}^{2}\left[ n^{2}+\frac{2}{\pi }\sqrt{\frac{%
h_{0}}{R}}n\left( 2p+l+1\right) \right] , \label{Eq_Onpl}
\end{eqnarray}
where $\Omega _{M}=\pi c_{l}/h_{0}$ is the fundamental longitudinal
frequency, $c_{l}$ being the longitudinal sound velocity (5960 m/s for
silica).
From these equations one can derive an analytical expression for the
effective susceptibility $\chi _{eff}\left[ 0\right] $ as an infinite sum
over all modes $\left\{ n,p,l\right\} $ (eq.\ \ref{Eq_ChiEff0}). In the case
where the light beam is centered on the mirror, only modes that have a
cylindrical symmetry will contribute.\ In particular the sum over $l$ is
limited to $l=0$. The effective mass of each acoustic mode and the spatial
overlap with light are then given by
\begin{eqnarray}
M_{n} &=&\frac{\pi }{4}\rho h_{0}w_{n}^{2}, \label{Eq_Mn} \\
\left\langle u_{n,p,0}\left( r,z=0\right) ,v\left( r\right) \right\rangle &=&%
\frac{2w_{n}^{2}}{2w_{n}^{2}+w_{0}^{2}}\left( \frac{2w_{n}^{2}-w_{0}^{2}}{%
2w_{n}^{2}+w_{0}^{2}}\right) ^{p}. \label{Eq_Ovl}
\end{eqnarray}
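To illustrate how the sum of eq.\ (\ref{Eq_ChiEff0}) can be evaluated in practice, the following rough Python transcription of eqs.\ (\ref{Eq_wn}), (\ref{Eq_Onpl}), (\ref{Eq_Mn}) and (\ref{Eq_Ovl}), with the geometry fixed by eqs.\ (\ref{Eq_M}) and (\ref{Eq_D}), reproduces the order of magnitude of the values discussed below:
\begin{verbatim}
import math

def chi_eff_0(h0, w0, M=20.0, rho=2200.0, cl=5960.0, n_max=1000, p_max=1000):
    # Zero-frequency effective susceptibility of a plano-convex mirror,
    # summing the paraxial compression modes (n, p, l=0).
    R = M / (math.pi * rho * h0**2) + h0 / 3   # curvature radius for fixed mass
    Omega_M2 = (math.pi * cl / h0)**2          # fundamental frequency squared
    chi = 0.0
    for n in range(1, n_max + 1):
        wn2 = 2 * h0 / (n * math.pi) * math.sqrt(R * h0)
        Mn = math.pi / 4 * rho * h0 * wn2
        a = 2 * wn2 / (2 * wn2 + w0**2)
        b = (2 * wn2 - w0**2) / (2 * wn2 + w0**2)
        for p in range(p_max + 1):
            Omega2 = Omega_M2 * (n**2
                     + 2 / math.pi * math.sqrt(h0 / R) * n * (2 * p + 1))
            chi += (a * b**p)**2 / (Mn * Omega2)
    return chi

# chi_eff_0(0.07, 0.02) is of order 1e-10 m/N, consistent with the values below.
\end{verbatim}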
\begin{figure}
\centerline{\psfig{figure=thick.eps,width=7cm}}
\vspace{2mm}
\caption{Variation of the effective susceptibility at zero frequency
$\chi _{eff}\left[ 0\right] $ as a function of the thickness $h_{0}$ of the mirror.\ Curves
(a) and (b) correspond to an optical waist $w_{0}$ of 2 cm and 5.5 cm, respectively.}
\label{Fig_Thick}
\end{figure}
We have numerically computed the effective susceptibility for different
thicknesses.\ Figure \ref{Fig_Thick} shows the result obtained by computing 30
different values of the thickness.\ For each value, the curvature radius $R$
and the diameter $D$ of the mirror are determined according to eqs. (\ref
{Eq_M}) and (\ref{Eq_D}). For example one gets $R=61$ cm and $D=57$ cm for a
thickness $h_{0}$ of 7 cm. The two curves in figure \ref{Fig_Thick} are obtained with
different optical waists $w_{0}$ (2 cm for curve {\it a} and 5.5 cm for
curve {\it b}). These waists correspond to the beam waists on the front and
end mirrors of the VIRGO\ interferometer\cite{Bondu95}. One observes a
decrease of the thermal noise for a thinner mirror.\ This is partly due to
the fact that the mechanical resonance frequencies are increased.\ It would,
however, be difficult to use a very thin mirror since its diameter would
become very large ($D$ evolves as $h_{0}^{-1/2}$ for small $h_{0}$).
If we consider a reasonable thickness of 7 cm, we obtain an effective
susceptibility $\chi _{eff}\left[ 0\right] $ equal to $11\times 10^{-11}$
m/N for an optical waist of 2 cm, and $2.4\times 10^{-11}$ m/N for $%
w_{0}=5.5 $ cm. These results can be compared to the values obtained for
cylindrical mirrors, that is $46\times 10^{-11}$ m/N ($w_{0}=2$ cm) and $%
11\times 10^{-11}$ m/N ($w_{0}=5.5$ cm)\cite{Bondu95}. The internal thermal
noise of a plano-convex mirror is thus significantly smaller, by at least a
factor of 4. If the constraint on the diameter can be relaxed, an even larger
noise reduction may be obtained.
\begin{figure}
\centerline{\psfig{figure=precis.eps,width=7cm}}
\vspace{2mm}
\caption{Convergence of the effective susceptibility as a function of the number of
computed modes.}
\label{Fig_Precis}
\end{figure}
We have checked the validity of the numerical calculation by plotting the
effective susceptibility as a function of the number of computed modes, for
example in the case of a thickness of 7 cm and an optical waist of 2 cm
(figure \ref{Fig_Precis}).\ This curve shows that the result converges as soon
as the number of computed modes exceeds 10$^{4}$. Since the numerical
calculation only deals with simple analytical expressions, it can easily be
processed with a very large number of modes, such as 10$^{6}$.
\section{Relation with the optical mass}
Figure \ref{Fig_Waist} shows the variation of the effective susceptibility
as a function of the optical waist. The thermal noise is reduced for a wider
waist.\ The mirror displacement is actually averaged over the beam waist.\
Since the maximum displacement is at the center of the mirror, one gets less
noise for a wide waist.
\begin{figure}
\centerline{\psfig{figure=waist.eps,width=7cm}}
\vspace{2mm}
\caption{Variation of the effective susceptibility as a function of the optical waist
$w_{0}$ for a 7-cm thick mirror. The solid curve is the computed result and the dashed
curve corresponds to an approximation for which the effective mass of the mirror is
replaced by the optical mass.}
\label{Fig_Waist}
\end{figure}
It is possible to derive a simple approximation of the thermal noise in the
case of a thickness $h_{0}$ much smaller than the curvature radius $R$. We
can then assume that the transverse acoustic modes are degenerate and we can
replace the resonance frequencies $\Omega _{n,p,0}$ by the value for $p=0$
(eq.\ \ref{Eq_Onpl}).\ The sum over $p$ in the effective susceptibility
(eq.\ \ref{Eq_ChiEff0}) is then a geometric sum and one gets a simple
estimate of the susceptibility $\chi _{eff}\left[ 0\right] $ in terms of an
optical mass $M_{opt}$ given by\cite{Pinard99}
\begin{eqnarray}
\chi _{eff}\left[ 0\right] &\approx &1/M_{opt}\Omega _{M}^{2},
\label{Eq_ChiEffApprox} \\
M_{opt} &=&\frac{12}{\pi ^{2}}\left( \frac{\pi }{4}\rho h_{0}w_{0}^{2}\right) .
\end{eqnarray}
rate increases from 3 to 4 samples per FWHM.
\section{Conclusions}
\label{sec:con}
We performed conceptual studies of FIPSER, a readout concept that promises to achieve significant
power savings compared to FADC-based readout systems. The results of two independent reconstruction methods show that 12
comparators at a moderate sampling rate of 4 samples per signal FWHM can meet the
$\frac{1}{\sqrt{N}}$ requirements over a dynamic range of three orders of magnitude.
A time resolution significantly better than 1\,ns seems possible for pulses with a
FWHM of less than 10\,ns. The same conclusions can be drawn when the trace is
composed of two partially overlapping pulses.
A limitation of the FIPSER concept is that the pulse shape needs to be known beforehand.
While this should not pose a problem for most applications in astroparticle physics, the
concept needs to be studied in greater detail for applications in which pulses
of similar amplitudes can overlap frequently and the pulse shape cannot be assumed fixed.
More sophisticated reconstruction
algorithms could mitigate some of these limitations.
Compared to established readout schemes, FIPSER provides a number of practical
advantages. Due to a decrease in the number of comparators by an order of
magnitude, FIPSER has the potential to realize significant power savings when
compared to existing readout systems. Other positive features are compactness of a
FIPSER readout and a possible reduction in data volumes.
FIPSER is dead time free, and it is straightforward to implement online event
selection and processing.
The implementation of a prototype of FIPSER is beyond the scope of this paper.
A possibility for implementing the concept is to use FPGAs \cite{gary5}, which have developed into one of the most versatile tools for data acquisition systems in recent years.
\section*{Acknowledgements}
We acknowledge support by Georgia Tech's GT-Fire program.
\section{Introduction}
Parkinson's disease (PD) is a neurodegenerative disease caused by a progressive loss of dopaminergic neurons, primarily in the substantia nigra pars compacta, but also in other parts of the brain~\cite{Skodda2010}. The prevalence of this disease is estimated at 1.5\,\% for people aged over 65 years~\cite{Sapir2008}. PD is associated with different motor and non-motor deficits such as muscular rigidity, rest tremor, bradykinesia and postural instability~\cite{Brodal2003,Mekyska2011b,Skodda2010}. In 60\,--\,90\,\% of PD patients a multimodal disruption of motor speech realization called hypokinetic dysarthria (HD) can be observed~\cite{Chenausky2011}. Most patients with HD have a soft and breathy voice with small variation in speech intensity (monoloudness) and fundamental frequency (monopitch)~\cite{Arnold2014}. Other clinical signs, such as decreased movement of the articulatory organs, hoarse or harsh voice, flat speech melody (dysprosody) or voice tremor, can be observed as well~\cite{Skodda2013}.
In the last two decades scientists have developed several acoustic signal analysis methods focused on the assessment of parkinsonian speech~\cite{Eliasova2013,Rusz2011d,Tsanas2010}. Although much has already been investigated, some issues (e.\,g. early-stage detection or accurate progress estimation) have not been solved yet. As time goes on, new, robust and more sophisticated speech parametrization methods appear. However, this evolution of speech features increasingly builds a barrier between engineers and clinicians, which is called ``the issue of clinical interpretability''. A feature with high discrimination power or a good ability to monitor the progress of the disease can be proposed, however it becomes useless as soon as we try to find relations between its value and the clinical signs of HD. In order to make a good diagnosis the clinicians need transparent parametrization. In other words, when the value of a feature changes, they must know what the result will be from the clinical-sign point of view. According to this consideration we can divide features into two categories: 1) clinically interpretable features\,--\,they help us to directly quantify clinical signs; 2) clinically inexplicable features\,--\,we can find significant correlations between their values and clinical signs, but we can only guess what the exact relations are.
Perceptual features are good representatives of the second category. Although some researchers have tried to interpret them from the point of view of hypokinetic dysarthria signs~\cite{Bocklet2011,Orozco2013b,Tsanas2010}, their meaning is still hidden in this field of science. Probably the deepest research focused on the discrimination power of perceptual features was carried out by Orozco-Arroyave et al.~\cite{Orozco2013b}. Their results show that perceptual analysis of sustained Spanish vowels [a], [i] and [u] based on PLP (Perceptual Linear Predictive Coefficients) or MFCC (Mel-Frequency Cepstral Coefficients) provides the highest discrimination power. However, they used only a limited set of features (5) and a small group of patients and control speakers (20+20).
To sum up the introduction, although perceptual features are clinically inexplicable, they could be very good markers of Parkinson's disease. Therefore the aims of this work are to: 1) prove that perceptual features can outperform the conventional clinically interpretable parameters or significantly improve PD identification accuracy; 2) test a large set of perceptual parameters and identify the feature with the highest discrimination power; 3) find which kind of vowel realization is better to analyse; 4) identify perceptual features that can predict the values of different clinical tests.
The rest of this paper is organized as follows. Sections \ref{sec:data} and \ref{sec:methodology} describe the dataset and methodology, respectively. Section \ref{sec:results} provides some preliminary results where the features are evaluated in terms of correlation and mutual information with the speakers' labels. Results of single-feature classification are given as well. Finally, partial correlations with clinical tests and classification based on feature selection are considered. The conclusion is given in Sec.\,\ref{sec:conclusion}.
\section{Data}
\label{sec:data}
Within the framework of this study 84 PD patients (36 women, 48 men) and 49 (24 women, 25 men) age- and gender-matched healthy controls (HC) were enrolled at the First Department of Neurology, St. Anne's University Hospital in Brno, Czech Republic. The demographic and clinical characteristics of the PD patients can be seen in Table~\ref{tab:demographic}. The healthy controls had no history or presence of brain diseases (including neurological and psychiatric illnesses) or speech disorders. The PD patients were on their regular dopaminergic treatment. All participants signed an informed consent form that had been approved by the Ethics Committee of St. Anne's University Hospital in Brno.
\begin{table}
\caption{Demographic and clinical characteristics of PD patients}
\label{tab:demographic}
\centering
\begin{threeparttable}
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} c c}
\hline
\hline
Speakers & PD (females) & PD (males)\\
\hline
Number & 36 & 48\\
Age (years) & 68.47 $\pm$ 7.64 & 66.21 $\pm$ 8.78\\
PD duration (years) & 7.61 $\pm$ 4.85 & 7.83 $\pm$ 4.39\\
UPDRS III & 22.06 $\pm$ 13.73 & 26.85 $\pm$ 10.22\\
UPDRS IV & 2.72 $\pm$ 3.01 & 3.15 $\pm$ 2.59\\
RBDSQ & 3.42 $\pm$ 3.48 & 3.85 $\pm$ 2.99\\
FOG & 6.94 $\pm$ 5.72 & 6.67 $\pm$ 5.57\\
NMSS & 36.03 $\pm$ 26.72 & 38.19 $\pm$ 19.72\\
BDI & 18.57 $\pm$ 23.94 & 9.69 $\pm$ 6.23\\
MMSE & 27.38 $\pm$ 3.63 & 28.56 $\pm$ 1.05\\
LED (mg) & 862.44 $\pm$ 508.3 & 1087 $\pm$ 557.47\\
\hline
\hline
\end{tabular*}
\begin{tablenotes}
\scriptsize
\item[1] UPDRS III\,--\,Unified Parkinson's disease rating scale, part III: Motor Examination; UPDRS IV\,--\,Unified Parkinson's disease rating scale, part IV: Complications of Therapy; RBDSQ\,--\,The REM sleep behavior disorder screening questionnaire; FOG\,--\,Freezing of gait questionnaire; NMSS\,--\,Non-motor symptoms scale; BDI\,--\,Beck depression inventory; MMSE\,--\,Mini-mental state examination; LED\,--\,L-dopa equivalent daily dose
\end{tablenotes}
\end{threeparttable}
\end{table}
During acquisition the participants were asked to utter a sequence of 5 Czech vowels ([a], [e], [i], [o] and [u]) in 4 different ways: 1) s\,--\,short vowels pronounced with normal intensity; 2) l\,--\,sustained vowels pronounced with normal intensity; 3) ll\,--\,sustained vowels pronounced with maximum intensity; 4) ls\,--\,sustained vowels pronounced with minimum intensity, but not whispered. Speech was recorded with sampling frequency $f_{\mathrm{s}} = 48\,\mbox{kHz}$ and subsequently downsampled to $f_{\mathrm{s}} = 16\,\mbox{kHz}$ in order to decrease the computational burden.
\section{Methodology}
\label{sec:methodology}
In order to compare the discrimination power of perceptual features to conventional ones, during the parametrization step we extracted the fundamental frequency $F_{0}$, 5 kinds of jitter and 6 kinds of shimmer, the Teager-Kaiser energy operator TKEO, formants $F_{1}$--$F_{3}$ and their bandwidths $BW_{1}$--$BW_{3}$, harmonic-to-noise ratio HNR, glottal-to-noise excitation ratio GNE, vowel space area VSA and its logarithmic version lnVSA, formant centralization ratio FCR, vowel articulation index VAI and the ratio of the second formants of vowels [i] and [u], $F_{2\mathrm{i}}/F_{2\mathrm{u}}$. If a specific feature was represented by a vector or matrix, we transformed it to a scalar value. For this purpose we used the median, standard deviation (std), 1st percentile (1p), 99th percentile (99p) and interpercentile range (ir) defined as 99p -- 1p. In the case of a matrix the transformation was applied to each band separately.
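For illustration, these scalar transformations can be sketched as follows (a minimal example, assuming standard numpy percentile estimates):
\begin{verbatim}
import numpy as np

def scalar_stats(x):
    # Collapse a per-frame feature vector to the scalar statistics used here.
    p1, p99 = np.percentile(x, [1, 99])
    return {"median": np.median(x), "std": np.std(x),
            "1p": p1, "99p": p99, "ir": p99 - p1}

def matrix_stats(X):
    # For a (bands x frames) feature matrix, treat each band separately.
    return [scalar_stats(band) for band in X]
\end{verbatim}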
\subsection{Perceptual Features}
First of all, we included in this study the most popular MFCC (Mel Frequency Cepstral Coefficients), which can indirectly detect slight misplacements of articulators~\cite{Tsanas2010}. Subsequently we derived from MFCC 3 further kinds of perceptual features: LFCC (Linear Frequency Cepstral Coefficients), CMS (Cepstral Mean Subtraction coefficients) and MFCC adjusted to equal loudness curves (as in the case of PLP). In order to provide information complementary to MFCC we tested MSC (Modulation Spectra Coefficients).
The next set of features is based on linear prediction: LPC (Linear Predictive Coefficients), PLP (Perceptual Linear Predictive coefficients), LPCC (Linear Predictive Cepstral Coefficients), LPCT (Linear Predictive Cosine Transform coefficients) and ACW (Adaptive Component Weighted coefficients). In comparison to simple LPC or MFCC, PLP also takes into account an adjustment to the equal-loudness curve and the intensity-loudness power law. The advantage of LPCC and LPCT over ``classic'' LPC is the small correlation of their values. Finally, the advantage of ACW is that these coefficients are less sensitive to channel distortion.
The last features in this study are ICC (Inferior Colliculus Coefficients) that analyse amplitude modulations in voice using a~biologically-inspired model of the inferior colliculus. All perceptual features were extended by their 1st~order regression coefficients ($\Delta$).
\subsection{Preliminary analysis}
We calculated Spearman's rank correlation and mutual information (MI) between the feature vectors and the resulting speakers' labels in order to estimate the discrimination power of each vowel separately. Subsequently we applied the Mann-Whitney~U test and classification based on random forests (RF). Classification results were expressed by ACC, SEN, SPE and the trade-off between sensitivity and specificity (TSS) defined as:
\begin{eqnarray}
\mbox{TSS} = 2^{\sin\left(\frac{\pi\cdot\mathrm{SEN}}{2}\right)\sin\left(\frac{\pi\cdot\mathrm{SPE}}{2}\right)}
\end{eqnarray}
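A direct transcription of this definition, with SEN and SPE expressed as fractions, reads:
\begin{verbatim}
import math

def tss(sen, spe):
    # Trade-off between sensitivity and specificity as defined above.
    return 2 ** (math.sin(math.pi * sen / 2) * math.sin(math.pi * spe / 2))

# e.g. tss(0.9286, 0.9184) ~ 1.98, the best trade-off reported in the results.
\end{verbatim}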
Finally, to identify perceptual features that can predict the values of different clinical tests, we calculated Spearman's partial correlations in which the effect of patients' age and L-dopa equivalent daily dose was removed.
\subsection{Classification}
In the last step we performed classification with a two-step feature selection. Firstly, we reduced the feature set to 500 parameters using mRMR (minimum Redundancy Maximum Relevance) and subsequently we employed SFFS (Sequential Forward Feature Selection). Three scenarios were considered: individual vowel analysis; classification within each vowel sequence (see Sec.\,\ref{sec:data}); classification using all vowel realizations. In all cases we used leave-one-out validation.
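The two-step selection can be outlined as follows (an illustrative sketch based on scikit-learn; the mRMR and SFFS implementations actually used in this work may differ in their details):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.model_selection import LeaveOneOut

def mrmr(X, y, k=500):
    # Greedy mRMR: maximize relevance (MI with label), minimize redundancy
    # (mean absolute correlation with already selected features).
    k = min(k, X.shape[1])
    relevance = mutual_info_classif(X, y)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        score = relevance - corr[:, selected].mean(axis=1)
        score[selected] = -np.inf
        selected.append(int(np.argmax(score)))
    return selected

def select_features(X, y, k=500, n_final=10):
    # Step 1: mRMR pre-selection; step 2: forward selection with an RF classifier
    # evaluated by leave-one-out validation, mirroring the setup described above.
    idx = mrmr(X, y, k)
    sfs = SequentialFeatureSelector(RandomForestClassifier(),
                                    n_features_to_select=n_final,
                                    direction="forward", cv=LeaveOneOut())
    sfs.fit(X[:, idx], y)
    return np.array(idx)[sfs.get_support()]
\end{verbatim}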
\section{Experimental results}
\label{sec:results}
\begin{table}
\scriptsize
\caption{Individual vowel analysis}
\label{tab:resind}
\centering
\begin{threeparttable}
\begin{tabular}{l l c c c c c c c}
\hline
\hline
Vowel & Feature & $\rho$ & MI & $p$ & ACC [\%] & SEN [\%] & SPE [\%] & TSS\\
\hline
a (s) & 10th ICC (99p) & --0.1356 & 0.0784 & 0.1198 & 71.43 & 72.62 & 69.39 & 1.75\\
e (s) & 7th LPCC (99p) & --0.0057 & 0.6975 & 0.9498 & 69.92 & 72.62 & 65.31 & 1.71\\
i (s) & 17th $\Delta$MFCC (1p) & 0.1242 & 0.7349 & 0.1542 & 69.92 & 70.24 & 69.39 & 1.73\\
o (s) & 10th CMS (std) & 0.0059 & 0.0611 & 0.9477 & \textbf{80.45} & \textbf{78.57} & \textbf{83.67} & \textbf{1.88}\\
u (s) & 6th ACW (1p) & 0.1003 & 0.7296 & 0.2502 & 72.18 & 75.00 & 67.35 & 1.75\\
\hline
a (l) & 9th LFCC (std) & --0.2042 & \textbf{0.7963} & 0.0191 & 75.19 & 75.00 & 75.51 & 1.81\\
e (l) & 15th MSC (1p) & 0.0938 & 0.0050 & 0.2823 & 66.92 & 65.48 & 69.39 & 1.69\\
i (l) & 14th ICC (1p) & --0.0503 & 0.0845 & 0.5646 & 68.42 & 69.05 & 67.35 & 1.71\\
o (l) & 2nd $\Delta$LPCT (median) & 0.0978 & 0.7069 & 0.2619 & 73.68 & 77.38 & 67.35 & 1.76\\
u (l) & 12th CMS (std) & --0.0842 & 0.0269 & 0.3347 & 77.44 & 73.81 & 83.67 & 1.85\\
\hline
a (ll) & 18th $\Delta$LPCC (ir) & 0.0958 & 0.7576 & 0.2720 & 72.18 & 72.62 & 71.43 & 1.76\\
e (ll) & 11th $\Delta$PLP (99p) & 0.2635 & 0.6129 & 0.0025 & 72.18 & 75.00 & 67.35 & 1.75\\
i (ll) & 5th PLP (std) & --0.0321 & 0.6460 & 0.7142 & 68.42 & 66.67 & 71.43 & 1.72\\
o (ll) & 10th $\Delta$CMS (ir) & --0.2038 & 0.7643 & 0.0193 & 75.94 & 73.81 & 79.59 & 1.83\\
u (ll) & 17th CMS (std) & --0.0486 & 0.0325 & 0.5783 & 72.18 & 76.19 & 65.31 & 1.74\\
\hline
a (ls) & 13th ACW (ir) & --0.0093 & 0.7199 & 0.9164 & 73.68 & 79.76 & 63.27 & 1.74\\
e (ls) & 8th CMS (std) & 0.1887 & 0.0835 & 0.0304 & 72.18 & 64.29 & 85.71 & 1.77\\
i (ls) & shimmer (local.dB) & \textbf{--0.4064} & 0.7633 & \textbf{0.0000} & 72.18 & 75.00 & 67.35 & 1.75\\
o (ls) & 3rd ICC (99p) & 0.1324 & 0.0325 & 0.1289 & 69.17 & 71.43 & 65.31 & 1.71\\
u (ls) & 9th CMS (std) & --0.0191 & 0.0232 & 0.8282 & 75.19 & 69.05 & 85.71 & 1.82\\
\hline
\hline
\end{tabular}
\begin{tablenotes}
\scriptsize
\item[1] $\rho$\,--\,Spearman's rank correlation coefficient; MI\,--\,mutual information; $p$\,--\,significance level (Mann-Whitney~U test); ACC\,--\,classification accuracy; SEN\,--\,sensitivity; SPE\,--\,specificity; TSS\,--\,trade-off between sensitivity and specificity; s\,--\,short vowel pronounced with normal intensity; l\,--\,sustained vowel pronounced with normal intensity; ll\,--\,sustained vowel pronounced with maximum intensity; ls\,--\,sustained vowel pronounced with minimum intensity (not whispering)
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{table}
\scriptsize
\caption{Classification results (using feature selection)}
\label{tab:resall}
\centering
\begin{threeparttable}
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} c c c c c}
\hline
\hline
Vowels & ACC [\%] & SEN [\%] & SPE [\%] & TSS & No.\\
\hline
a (s) & 84.21 & 86.90 & 79.59 & 1.90 & 6\\
e (s) & 81.95 & 82.14 & 81.63 & 1.89 & 8\\
i (s) & 72.18 & 73.81 & 69.39 & 1.76 & 3\\
o (s) & 80.45 & 78.57 & 83.67 & 1.88 & 1\\
u (s) & 85.71 & 86.90 & 83.67 & 1.93 & 6\\
\hline
a (l) & 87.22 & 88.10 & 85.71 & 1.94 & 6\\
e (l) & 75.94 & 78.57 & 71.43 & 1.80 & 7\\
i (l) & 82.71 & 83.33 & 81.63 & 1.90 & 11\\
o (l) & 75.19 & 79.76 & 67.35 & 1.77 & 2\\
u (l) & 77.44 & 73.81 & 83.67 & 1.85 & 1\\
\hline
a (ll) & \textbf{91.73} & \textbf{90.48} & \textbf{93.88} & \textbf{1.98} & 8\\
e (ll) & 78.20 & 83.33 & 69.39 & 1.81 & 3\\
i (ll) & 78.95 & 82.14 & 73.47 & 1.84 & 6\\
o (ll) & 81.20 & 80.95 & 81.63 & 1.89 & 3\\
u (ll) & 72.18 & 76.19 & 65.31 & 1.74 & 1\\
\hline
a (ls) & 76.69 & 78.57 & 73.47 & 1.82 & 3\\
e (ls) & 87.97 & 88.10 & 87.76 & 1.95 & 8\\
i (ls) & 84.21 & 84.52 & 83.67 & 1.92 & 11\\
o (ls) & 76.69 & 77.38 & 75.51 & 1.83 & 6\\
u (ls) & 84.21 & 86.90 & 79.59 & 1.90 & 4\\
\hline
all (s) & 80.45 & 78.57 & 83.67 & 1.88 & 1\\
all (l) & \textbf{91.73} & \textbf{90.48} & \textbf{93.88} & \textbf{1.98} & 9\\
all (ll) & 81.95 & 79.76 & 85.71 & 1.90 & 7\\
all (ls) & 90.98 & 91.67 & 89.80 & 1.97 & 11\\
\hline
all (s, l, ll, ls) & \textbf{92.48} & \textbf{92.86} & \textbf{91.84} & \textbf{1.98} & 9\\
\hline
\hline
\end{tabular*}
\begin{tablenotes}
\scriptsize
\item[1] ACC\,--\,classification accuracy; SEN\,--\,sensitivity; SPE\,--\,specificity; TSS\,--\,trade-off between sensitivity and specificity; No.\,--\,number of selected features, s\,--\,short vowel pronounced with normal intensity; l\,--\,sustained vowel pronounced with normal intensity; ll\,--\,sustained vowel pronounced with maximum intensity; ls\,--\,sustained vowel pronounced with minimum intensity (not whispering)
\end{tablenotes}
\end{threeparttable}
\end{table}
The preliminary results obtained using Spearman's rank correlation, mutual information, the Mann-Whitney~U test and the RF classifier are given in Table~\,\ref{tab:resind}. The results of PD identification based on feature selection can be seen in Table~\,\ref{tab:resall}. Finally, the results of Spearman's partial correlations between clinical characteristics and selected features are in Table~\,\ref{tab:parcorr}.
According to the preliminary analysis we can conclude that the std of the 10th CMS coefficient extracted from the short vowel [o] provides the best discrimination power in terms of ACC (80.45\,\%), SEN (78.57\,\%), SPE (83.67\,\%) and TSS (1.88). On the other hand, conventional shimmer extracted from the sustained vowel [i] pronounced with minimum intensity reached better results in terms of $\rho$ (--0.4064), MI (0.7633) and $p$ (0.0000).
Considering the classification using feature selection, in the first scenario (individual vowel analysis) we observe the best results in the case of the sustained and loudly pronounced vowel [a] (ACC = 91.73\,\%, SEN = 90.48\,\%, SPE = 93.88\,\%, TSS = 1.98). All 8 selected features were perceptual. In the second scenario (classification within each vowel sequence) the best results were provided by sustained vowels pronounced with normal intensity (ACC = 91.73\,\%, SEN = 90.48\,\%, SPE = 93.88\,\%, TSS = 1.98), where all 9 selected features were perceptual as well. It was shown that in order to get the best classification results (ACC = 92.48\,\%, SEN = 92.86\,\%, SPE = 91.84\,\%, TSS = 1.98) it is advantageous to use all 4 sets of vowels.
In our recent study we found that sustained vowels pronounced with minimum intensity can be good speech tasks for the detection of improper vocal fold vibration (measured by features based on empirical mode decomposition)~\cite{Smekal2015}. In the case of perceptual analysis we observe that loudly pronounced vowels are better candidates for analysis. We explain this by the nature of perception: theoretically, longer and more intense stimuli result in better perception.
Finally we have proved that perceptual features significantly correlate ($p < 0.0001$) with different clinical information like UPDRS III (Unified Parkinson's disease rating scale, part III: Motor Examination), UPDRS IV (part IV: Complications of Therapy), RBDSQ (The REM sleep behavior disorder screening questionnaire), FOG (Freezing of gait questionnaire), NMSS (Non-motor symptoms scale), BDI (Beck depression inventory) and MMSE (Mini-mental state examination). This means that they can be used for estimation of these scores.
\begin{table}
\scriptsize
\caption{Spearman's partial correlations between clinical characteristics and selected features (after removal of age and LED effect)}
\label{tab:parcorr}
\centering
\begin{tabular}{l l c c}
\hline
\hline
Clinical info & Feature & $\rho$ & $p$\\
\hline
PD duration & i (l): 15th CMS (std) & --0.4369~~ & $3.25\cdot10^{-5}$\\
UPDRS III & i (l): 1st $\Delta$PLP (1p) & --0.5174 & $6.98\cdot10^{-7}$\\
UPDRS IV & e (ll): 5th $\Delta$MFCC (ir) & 0.4572 & $1.23\cdot10^{-5}$\\
RBDSQ & u (ls): 13th $\Delta$MFCC (99p) & 0.4906 & $2.16\cdot10^{-6}$\\
FOG & a (ls): 6th MFCC (std) & --0.4476 & $1.96\cdot10^{-5}$\\
NMSS & a (ll): 12th LPC (99p) & 0.4616 & $1.25\cdot10^{-5}$\\
BDI & u (s): 3rd $\Delta$LPCT (1p) & 0.5832 & $1.25\cdot10^{-6}$\\
MMSE & i (l): 20th MFCC (99p) & --0.4719 & $5.55\cdot10^{-5}$\\
\hline
\hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper we perceptually analysed the phonation of 84 PD patients and 49 gender- and age-matched controls. We achieved all goals of this work: 1) We have proved that perceptual features outperform the conventional ones in terms of discrimination power. 2) From a wide range of perceptual features we have found that those based on CMS (derived from MFCC) better quantify the signs of hypokinetic dysarthria. 3) We have shown that it is advantageous to perform perceptual analysis of loud sustained vowels. 4) For each considered clinical score we identified a perceptual feature that can be used for its estimation.
In the near future we would like to move further, perceptually analyse other speech tasks (spontaneous speech, read sentences, etc.) and focus on each gender individually.
\section*{Acknowledgment}
Research described in this paper was financed by the National Sustainability Program under grant LO1401 and by projects NT13499 (Speech, its impairment and cognitive performance in Parkinson's disease), COST IC1206, project ``CEITEC, Central European Institute of Technology'': (CZ.1.05/1.1.00/02.0068), FEDER and Ministerio de Econom\'{i}a y Competitividad TEC2012-38630-C04-03.
\bibliographystyle{splncs03}
\section{Introduction}
\label{sec:Introduction}
Innovation of modern systems comes from realizing complex functionalities through the interaction of software, electrical and mechanical components. \Glspl{adas},
autonomous vehicles, and other \glspl{CPS}\xspace have emerged from this ongoing trend. Due to the heterogeneous and interactive nature of these systems-of-systems, their engineering has become a highly effortful task that faces many challenges~\cite{Tyagi2021,KRRW17,KRS+18a,KKR+18,KKR19}.
One of them is certainly the need for collaboration of experts from different domains~\cite{DRW+20}.
This challenge also manifests in the rising number of requirements that address stakeholders from heterogeneous domains.
In systems engineering, and in particular the automotive domain, requirements are captured as documents that contain text mainly in natural language~\cite{Liebel19}, often with additional information provided through, \textit{e.g.,}\@\xspace pictures or \gls{CAD}\xspace models.
Experts interpret these textual requirements to enter the design phase, and most often derive details of the implementation directly from them~\cite{FR07}.
The ambiguity of natural language, in particular when interpreted by experts from different backgrounds,
as well as the increasing number of requirements may result in decreasing product quality and system failures that are currently detected only at late development stages~\cite{DGH+19}, and hinders the implementation of functional safety standards such as ISO 26262.
Furthermore, the document-based approach to requirements engineering prevents agile development, where automated analyses and syntheses should enable early error detection and fast feedback for the developers.
What is needed are tools to capture, analyze, and process requirements systematically during all phases of the development cycle. An approach to achieve this is \gls{mde}\xspace~\cite{FR07}, which utilizes models as the primary development artifacts.
That is, these models serve as documentation and communication basis for engineers, but also as input for analyses and syntheses, such as verification~\cite{KPRR20a}, test case~\cite{DGH+19}, or code generation~\cite{DGM+21}. For instance, \gls{mde}\xspace can be applied to facilitate the design of \gls{ai}-based systems \cite{KNP+19,GKR19,KPRS19,AKK+21}. Approaches to introduce \gls{mde}\xspace in the automotive requirements engineering exist~\cite{Loniewski10}, but
introducing \gls{mde}\xspace comes with initial costs and efforts for training the domain experts in modeling, and most often, for translating many documents to models~\cite{Buchhiarone2020}.
An advantage of using \glspl{dsl}\xspace rather than general purpose modeling languages such as the \gls{UML}\xspace is that their syntax and semantics~\cite{HR04} can be designed to be intuitive for the model users.
As requirements are captured in natural language, we assume that a textual \gls{dsl}\xspace offering sentence structures and wording close to the current formulation of requirements significantly increases the intuitiveness of both using and understanding models in this \gls{dsl}\xspace.
However, in addition to the \gls{dsl} development costs and the \gls{dsl} training, once the \gls{dsl} is developed, the translation of old, unstructured requirements to models in the \gls{dsl}\xspace can be a tremendous effort due to the high number of requirements, requiring time and modeling know-how from the translating developer.
In this paper, we analyze an open-source set of automotive requirements for \gls{adas} and \gls{als} to understand
where formulation inaccuracies occur and how targeted \gls{dsl} constructs can help eliminate these inaccuracies and increase the level of formality and consistency in these requirements. \textbf{The goal and main contribution of this paper is the application and evaluation of few-shot learning of large neural natural language models for the translation of given unstructured requirements to sentences incorporating the new formal \gls{dsl} constructs.}
Such translation models can be used 1) during the introduction phase of a \gls{dsl} to automatically translate existing or legacy natural language requirements into the new \gls{dsl} syntax and 2) to correct natural language inputs in a smart editor when a requirement engineer writes a new requirement as natural text. With this automation supported by the fact that few-shot learning requires only a handful of translation examples to learn a given translation task, \textbf{our aim is to facilitate the introduction of highly specialized requirement \glspl{dsl}, e.g., targeting a single department of a company using specific wording or even a single project.}
The remainder of this paper is structured as follows.
\Cref{sec:Preliminaries} introduces the technical foundations of our approach; \Cref{sec:MDRE} highlights the challenges and potentials of an \gls{mde}\xspace approach for requirements engineering within engineering domains, driven by natural-language text-based documents.
\Cref{sec:RelatedWork} outlines related work in this area.
\Cref{sec:ReqDSL} presents an example \gls{dsl}\xspace for capturing requirements in the automotive domain.
\Cref{sec:Translation} details the automatic translation from natural-language to the \gls{dsl}\xspace.
In \Cref{sec:Evaluation} we evaluate the approach in multiple experiments. We discuss threats to validity in \Cref{sec:threats} before \Cref{sec:Conclusion} concludes the paper. This paper is an extended version of the corresponding SLE publication by Bertram et al. 2022 \cite{BBK+22a}.
\section{Preliminaries}
\label{sec:Preliminaries}
\subsection{Neural Language Models}
\begin{sloppypar}
Our approach for text-to-\gls{dsl}\xspace translation of requirements relies heavily on large transformer-based neural language models. The original transformer architecture \cite{VSP+17} is a groundbreaking neural network architecture for sequence-to-sequence processing. It has an encoder-decoder structure, where the encoder receives
a sequence as its input and encodes its content to pass it to the decoder. The decoder iteratively creates the target sequence using a graph search method, e.g., beam search. Instead of using recurrent neurons such as \glspl{lstm} the transformer architecture relies exclusively on the attention mechanism to grasp dependencies
between sequence elements.
\end{sloppypar}
While we are not going to use the original transformer architecture itself, all models employed in this paper are its derivatives.
For the automatic translation from natural language requirements to the \gls{dsl}\xspace, we utilize a derivative of \gls{gpt}~\cite{RNSS18}, which is tailored towards text generation. GPT is a transformer-based decoder-only language model that employs a semi-supervised learning approach~\cite{RNSS18}.
The authors showed that generative unsupervised pretraining on unlabeled data, where, given a sequence of words, the network is supposed to learn to predict the next word with the highest likelihood, and subsequent supervised fine-tuning of the pretrained parameters for a specific downstream task outperformed discriminatively trained models. \gls{gpt} language models have evolved over the last few years and various variants exist.
In \cite{RWC+19}, the authors showed that, given a sufficiently large capacity and a large, varied text corpus in training, a language model can solve tasks across different domains for which it has not experienced explicit training, also referred to as zero-shot learning. Furthermore, in \cite{BMR+20}, the authors show that zero-shot learning is outperformed by few-shot learning.
In few-shot learning, the model is given a \textit{support set} consisting of a very small number of training examples demonstrating how to solve a new task. No weight updates are necessary, i.e., no classical training is performed; the support set is simply input into the model as part of the query. Based on this context, the language model can then solve the new task for a new input. In contrast to zero-shot learning, few-shot learning enables targeted adaptation to very specific tasks, making it
particularly interesting for requirements engineering, a discipline heavily relying on natural language and where training data is often scarce.
For the automatic translation of natural language requirements to a model in the \gls{dsl}\xspace we mainly rely on GPT-J-6B~\cite{mesh-transformer-jax,gpt-j}, an open-source language model based on the 6.7B GPT-3~\cite{BMR+20} network and its hyperparameters. Similarly to GPT-Neo \cite{gpt-neo}, it is also trained on the Pile dataset~\cite{GBB+20}. According to the authors, its performance is almost on par with the 6.7B GPT-3 network, and it is the best-performing publicly available transformer language model in terms of zero-shot performance on various downstream tasks\footnote{https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/}.
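To illustrate the few-shot setup, the following sketch shows how such a translation query could be posed to GPT-J. The model identifier, the example requirements, and the \gls{dsl}\xspace phrasing are purely illustrative here and do not reproduce the support set or the grammar defined in \Cref{sec:ReqDSL} and \Cref{sec:Translation}.
\begin{verbatim}
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Support set: a handful of (requirement, DSL) pairs, given as plain text.
support_set = (
    "Requirement: The direction indicators shall flash when the hazard "
    "warning switch is pressed.\n"
    "DSL: when <hazardWarningSwitch == pressed> "
    "then <directionIndicators := flashing>\n###\n"
    "Requirement: The low beam headlights are activated if the rotary "
    "switch is in position ON.\n"
    "DSL: when <rotarySwitch == ON> then <lowBeam := active>\n###\n"
)
query = ("Requirement: The high beam is dimmed when an oncoming vehicle "
         "is detected.\nDSL:")

inputs = tok(support_set + query, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:],
                 skip_special_tokens=True))
\end{verbatim}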
\subsection{Datasets}
The method presented in this paper is evaluated on a publicly available dataset published by Daimler AG, fostering reproducibility. The dataset stems from the automotive domain and consists of 120 textual requirements~\cite{BMR+17}. It contains natural language requirements for two typical automotive systems, namely \gls{als} and \gls{adas}.
The requirements of the \gls{als} describe a set of system functions: the functionality that causes the vehicle's direction indicators to flash in response to the steering column lever and the hazard warning flasher switch; a function to lower the beams depending on the rotary light switch position and the vehicle setting for daytime running light; and an adaptive high beam that controls the high beam headlamp depending on the high beam switch and the detection of oncoming vehicles.
For the \gls{adas} system, the dataset contains requirements concerning the main components of adaptive cruise control, which maintains the distance to the vehicle in front and a speed set either manually by the driver or via traffic sign detection and provides a distance warning, as well as an emergency brake assistant, which reacts to both stationary and moving obstacles.
The requirements address a variety of typical features of current embedded software-intensive systems (i.e., systems in which software is the key element to realize the functionality) from the automotive domain, such as highly distributed and partially safety-critical functionality, hard real-time requirements, a combination of mechanical, mechatronic, and electronic components with both reactive and regulating functional behavior, and numerous product variants (e.g., due to different product architectures, legal requirements, or special equipment).
\section{Model-Driven Requirements Engineering: Challenges and Potentials}
\label{sec:MDRE}
As the number of requirements increases with the growing complexity of \glspl{CPS}\xspace, preventing inconsistencies and redundancies has become a highly effortful and error-prone task when performed manually.
\gls{mde}\xspace holds the promise to mend these issues by utilizing modeling languages to formulate requirements in a way that they become comprehensive both for computers and humans within the development process~\cite{Buchhiarone2020,Cabot2018,KKR+18}.
This section outlines the challenges and potentials of introducing \gls{MDRE}\xspace in the automotive domain and defines a set of requirements for requirement \glspl{dsl}\xspace to overcome these challenges.
In \gls{mde}\xspace models are the primary source of information as their mathematical semantics and strict syntax enable a unique understanding of the modeled subject among all stakeholders and computers.
Thereby, automated analyses and syntheses performed by the latter become meaningful and therefore practically useful. However, introducing \gls{mde}\xspace faces major challenges, which~\cite{Cabot2018} summarizes by stating that the benefits of \gls{mde}\xspace do not outweigh its costs, which is also the case for \gls{MDRE}\xspace.
Part of the arising costs stem from the facts that (1) training engineers to use the modeling languages correctly and efficiently is effortful~\cite{Hutchinson2014}, and (2) transferring the existing documents to models is very time-consuming~\cite{FR07}, since modeling itself is an effortful and tedious task~\cite{Pati2017}.
Language engineering denotes the process of creating suitable \glspl{dsl}\xspace that serve as modeling languages within \gls{mde}\xspace of a specific domain~\cite{CFJ+16}.
Naturally, if this process aims at creating a language that is intuitive for the domain experts (who do not necessarily have a computer-science background), the training effort decreases and, with it, the cost of introducing \gls{mde}\xspace, tackling challenge (1).
The contributions of this paper towards tackling these challenges rely on the idea that a textual \gls{dsl}\xspace whose syntax is very close to the phrasing of requirements in natural language used by the experts is intuitive to these experts by construction.
However, engineering such a \gls{dsl}\xspace requires extensive analyses of the natural language used to formulate these requirements, which increases the cost of language engineering significantly~\cite{FR07} and would probably outweigh the cost saving achieved by intuitiveness.
Further, given a \gls{dsl}\xspace, automating (parts of) the process to translate natural language requirements to models addresses challenge (2).
An implementation of such an automated translation is most likely based on machine learning and \gls{NLP}\xspace which, to be useful, requires an extensive set of labeled training data.
Creating the latter, again, increases the costs of implementing the translation in a way that would probably outweigh its benefits.
However, we assume that the closer the phrasing of the output (models in the \gls{dsl}\xspace) is to the phrasing of the input (requirements in natural language), the more likely it is that few-shot learning approaches suffice for the training.
These approaches only need small amounts of labeled data for effective training, and therefore reduce the costs to create a sufficient training dataset.
Further, requirements engineering is a very broad field in automotive systems engineering that involves experts from various backgrounds.
Engineering a \gls{dsl}\xspace to capture requirements in this domain will be an iterative process during which the language will evolve along with the tools to interpret, analyze and process the models, and this evolution will continue throughout the maintenance phase.
A modular design of the language and the underlying tooling is therefore crucial~\cite{BPR+20,BEH+20}.
We derived the following set of requirements on \glspl{dsl}\xspace that facilitate introducing \gls{MDRE}\xspace in the automotive domain by reducing the training and translation efforts:
\begin{itemize}
\item[\textbf{R1:}] Since the modelers will not necessarily have a computer-science background, the \gls{dsl}\xspace's syntax shall be based on natural language.
\item[\textbf{R2:}] To make the language as intuitive as possible for its users and to enable the application of few-shot learning to implement an automatic translation from natural language requirements to models, the \gls{dsl}\xspace's syntax shall be as close to the phrasing of requirements in natural language used by the modelers as possible.
\item[\textbf{R3:}] In the \gls{dsl}\xspace, requirements shall be formulated consistently and with a precise meaning understood by relevant stakeholders, enabling automatic interpretation.
\item[\textbf{R4:}] The language and its tooling shall be extensible and maintainable by a modular design.
\end{itemize}
\section{Related Work}
\label{sec:RelatedWork}
\subsection{Formalization of Requirements}
\paragraph{Requirement \glspl{dsl}\xspace }
Trigger-action patterns or If-Then constructs, as we are going to use them in our \gls{dsl}\xspace, are widely used in industry. One example is the graphical formalization language Simplified Universal Pattern~\cite{TBH16}, for which it has been shown that a large part of the requirements from the automotive domain can be formalized with this concept~\cite{BBB+18}.
An approach that utilizes a \gls{dsl}\xspace for automating type checking of requirements for detecting erroneous syntactic phrasings automatically is proposed in~\cite{MSL15}.
Therein, the \gls{dsl}\xspace is defined by a context-free grammar and an ontology that defines axiomatic terms and types to allow including typed terms within the requirements.
Further tooling includes automatic consistency checking~\cite{MSL16}.
In~\cite{KC05}, and~\cite{YCC15} the authors propose approaches to verify the consistency of requirements using temporal logic.
Very similar to our \gls{dsl}\xspace, these approaches use a structured English which is defined through a grammar to enable automatic logical analyses on requirements written by non-computer scientists.
However, these approaches do not aim at minimizing the difference between the sentence constructs offered by the \gls{dsl}\xspace and the already established natural language used for requirements engineering, which will increase intuitiveness and enable few-shot learning for automated translation.
\paragraph{Structuring Natural Language through Sentence Patterns}
In addition to \glspl{dsl}\xspace for requirements development, requirements templates are widely used for standardizing the wording of requirements and increasing their quality. A requirements template provides the requirements engineer with a set of sentence templates as a guide for requirements development. In German-speaking countries, for example, the MASTER template based on \cite{PR21} is widely used. In addition to the experience-based templates, in the literature there exist approaches that deal with the extraction of requirements templates from legacy requirements. For example, \cite{EAF16} develops a set of sentence templates for quality requirements, \cite{FPQ+10} presents a metamodel for software requirements. In addition, the PABRE framework presents a method for using requirements templates \cite{FQR+13} based on a catalog of 29 QR patterns \cite{RFQ+09} and 37 non-functional patterns \cite{PQF+12}.
Because requirements templates provide only a guideline for writing requirements, there are no means to detect deviations from the wording proposed by the templates. On the one hand, this has the advantage that requirement engineers and template designers are not limited to formally defined sentence structures when writing requirements. On the other hand, it has the major disadvantage that implementations of automated requirements processing cannot assume correct phrasing of the input.
\paragraph{Formalization by Applying Logic}
Approaches that formalize natural language requirements by translations into formal languages that allow interpretations based on logic have been researched for decades, and we will just name a few interesting examples here:
The approach presented in~\cite{Gervasi2005} translates natural language requirements into a representation in deontic logic, which allows checking for consistency.
The approach does not utilize machine learning for the translation, but relies on ``typographical adjustments, tokenization and morphosyntactic analysis'' to extract buckets, \textit{i.e.,}\@\xspace pieces of a specific kind of information, from the sentences.
Similarly, \cite{Dalianis1992} propose a technique that enables users to query a conceptual model of the system in natural language and retrieve answers in natural language.
The approach targets the verification of natural language requirements given a formal model of the system, \textit{i.e.,}\@\xspace a model in a \gls{dsl}\xspace.
Another concept for verifying requirements is proposed in~\cite{Goessler2009}, which perceives requirements and a component design each as contracts and verifies their correctness via the conjunction of both contracts.
The technique relies on modal logic.
An overview of the research conducted in formalizing legal requirements is given in~\cite{Otto2007}.
A very recent approach~\cite{Zaki2021} utilizes \gls{NLP}\xspace to transform natural language requirements into a requirements capturing model, which is a semi-formal model.
The approach introduces automatic transformations from this representation into metric temporal logic and computation tree logic.
All of these approaches take a requirement in natural language as input, and transform it into a structured format that is comparable to a model in a \gls{dsl}\xspace before this model is the input for an automated processing.
These approaches, however, do not assure that engineers formulate requirements such that they are uniquely understood among the human stakeholders as well: The intermediate model remains internal to the automated processing, and, being a rephrasing as logical formulas, has an unintuitive format for most human readers.
Representing requirements in a textual \gls{dsl}\xspace structures the sentences such that their understanding among humans becomes unique and, at the same time, allows the implementation of formalisms to interpret these models appropriately regarding the domain understanding.
\subsection{NLP in Requirements Engineering}
NoRBERT~\cite{HKKT20} is a fine-tuned version of BERT~\cite{DCLT18} for requirements classification. It was trained on the PROMISE NFR dataset \cite{CMLP07}. An approach for clustering natural language requirements is presented in \cite{JNT21}.
The approach applies clustering to natural language descriptions with the idea of developing a \gls{dsl}\xspace in mind.
Apart from requirement classification, machine-learning and \gls{NLP}\xspace can be used for prioritizing requirements \cite{Perini2013}. These approaches, however, are not specifically tailored for reducing the \emph{initial} modeling efforts necessary for introducing \gls{MDRE}\xspace.
\subsection{Few-Shot Learning for Requirement Classification}
There already exist a few approaches concentrating on few-shot learning in requirements engineering. One of these approaches uses transformer models for a named-entity-recognition task~\cite{MCB+22}. Another related approach is a preliminary study on requirement classification using zero-shot learning~\cite{AZF+22}. There, the PROMISE NFR dataset is used with pre-trained transformer models such as BERT and RoBERTa; since the study is preliminary, only a reduced part of the dataset is used. In contrast to our approach, it addresses requirement classification and uses zero-shot instead of few-shot learning.
Apart from requirement classification, few-shot learning on requirements is also applicable for requirement elicitation as described in~\cite{SXL+20}. In contrast to other approaches in the area of few-shot learning in requirements engineering, the approach does not take already existing requirements as an input, but chat messages from which requirements for new hidden features are extracted.
\subsection{Research Gap and Goal}
While neural models have been applied to requirements engineering successfully, in particular for requirements classification, there is a research gap concerning the potential of few-shot learning to tailor language models to specific requirements processing tasks such as reformulation, translation, and correction, where training data is scarce or where access to the computational resources needed to fine-tune very large language models such as GPT-J-6B is limited. The purpose of this work is hence to develop a set of tailored rules for systematic requirement definition in a given domain by analyzing existing requirements, and to determine if and to what extent few-shot learning can support the translation of legacy requirements into this systematic form, thereby aiming to increase the level of formality in requirements.
\section{Example DSL for Structured Requirements}
\label{sec:ReqDSL}
Requirements specify a product under development and thus have a decisive influence on the end product. Missing, incorrect or ambiguously phrased requirements therefore have serious negative effects on the end product and are one of the main causes of additional costs in product development and dissatisfied customers.
The aim of this section is to design an exemplary requirements \gls{dsl}\xspace that increases the degree of formality of requirement documents and hence supports automated verification and consistency checks. Instead of designing a general requirement language, our approach is to focus on specific domains or projects and to tailor \glspl{dsl} accordingly. Furthermore, we employ the few-shot learning capabilities of large neural language models in order to transform existing requirements into the new syntax or to support the formulation of new requirements in an editor. In this work, we focus on the automotive domain and an existing set of \gls{adas} and \gls{als} requirements from~\cite{BMR+17}.
\glspl{dsl}\xspace provide a means to improve the quality of the requirements' phrasing and to facilitate the implementation of tooling to support agile requirements engineering.
To engineer a \gls{dsl}\xspace that fulfills the requirements R1-R3 in~\Cref{sec:MDRE}, the \gls{dsl}\xspace designer must analyze and understand how the requirements are currently phrased and which meanings are implied by certain formulations.
The \gls{dsl}\xspace is to be used in a natural language domain (\textit{cf.}\@\xspace R1) and must ensure an intuitive readability for the requirements engineer as well as the modeler who is implementing the requirements at a later stage in the development process, \textit{cf.}\@\xspace R3.
The developed \gls{dsl}\xspace follows an open-world assumption~\cite{DKMR19,DHH+20}, \textit{i.e.,}\@\xspace whatever is not restricted by the model is allowed. Currently, the \gls{dsl}\xspace focuses on isolated requirements written in one sentence. This is because we are mainly interested in concrete syntax and features such as unambiguity. Complex semantic connections and cross-references will not be discussed here.
A manual analysis of the requirements from \cite{BMR+17} reveals a set of ambiguous or unstructured formulations and inconsistencies
which might lead to misinterpretations and hence need to be tackled by the \gls{dsl} approach. The following three ambiguity types are a non-exhaustive selection, which is sufficient as a basis for our few-shot learning experiments.
\textbf{Ambiguity 1: If-Then Constructs}.
The If-Then style is a frequently recurring pattern in requirements documents. It reflects
the idea that upon the occurrence of a trigger event something must happen. In standard English there are numerous variants of how to express this, making it difficult for requirements engineers to stick to a consistent scheme, to search for such requirements, and to analyze them automatically. To tackle this difficulty, the first construct we introduce is the If-Then pattern. It not only creates clarity for the different stakeholders, cf. R3, but also makes further processing in trigger-action patterns much easier~\cite{BBB+18}.
Therefore, the \gls{dsl}\xspace introduces the two keywords \textit{IF:} and \textit{THEN:}. For this purpose, the individual requirements are divided into two parts, a \textit{trigger} part beginning with the keyword \textit{IF:} and an \textit{action} part starting with the keyword \textit{THEN:}, i.e. a parsing rule is given as \texttt{If-Then-Req = 'IF:' trigger 'THEN:' action}. For the sake of simplicity, we assume that the non-terminals \texttt{trigger} and \texttt{action} can be arbitrary strings.
As an example, the requirement \textit{``When no advancing vehicle is recognized anymore, the high beam illumination is restored within 2 seconds.''} is divided into a \textit{trigger} and an \textit{action} part and formulated in the following structure: \textit{``\textbf{IF:} no advancing vehicle is recognized anymore, \textbf{THEN:} high beam illumination is restored within 2 seconds.''}
This way we achieve a partial formalization of the original requirement. The trigger and the action are now clearly separated, and the requirement can be identified as an If-Then requirement easily.
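Since the non-terminals \texttt{trigger} and \texttt{action} are treated as arbitrary strings, the parsing rule \texttt{If-Then-Req = 'IF:' trigger 'THEN:' action} can be checked with very little machinery. The following sketch is our own minimal recognizer for illustration; it is not part of the actual \gls{dsl}\xspace tooling.
\begin{verbatim}
import re

# Minimal recognizer for: If-Then-Req = 'IF:' trigger 'THEN:' action
# (trigger and action are arbitrary, non-empty strings).
IF_THEN = re.compile(
    r"^\s*IF:\s*(?P<trigger>.+?)\s*,?\s*THEN:\s*(?P<action>.+?)\s*$"
)

def parse_if_then(requirement):
    """Return (trigger, action) if the sentence follows the rule, else None."""
    match = IF_THEN.match(requirement)
    if match is None:
        return None
    return match.group("trigger"), match.group("action")

example = ("IF: no advancing vehicle is recognized anymore, "
           "THEN: high beam illumination is restored within 2 seconds.")
print(parse_if_then(example))
# ('no advancing vehicle is recognized anymore',
#  'high beam illumination is restored within 2 seconds.')
\end{verbatim}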
\textbf{Ambiguity 2: Modal Verbs}.
The modal verbs (\texttt{must, can, should, etc.}) are important for the precise interpretation of requirements~\cite{PR21}. Moreover, in safety-critical systems such as vehicles,
there is an important distinction between the available modal verbs. For instance, the modal verb \textit{must} indicates a legal regulation and non-conformity can lead to legal consequences.
Our analysis shows, however, that the modal verb is sometimes omitted. In such cases it might not be clear whether the given sentence is a requirement or a description of an existing system. To enforce the usage of modal verbs, we therefore introduce the dedicated keyword \textit{MUST} in the \gls{dsl}\xspace. In case of a missing modal verb, the keyword \textit{MUST} needs to be included at the correct position. If a wrong modal verb such as \textit{``can''} or \textit{``could''} is used, it needs to be exchanged for \textit{MUST}, thereby preventing the usage of weak words~\cite{PR21}.
An example is the requirement \textit{``Direction blinking: For the USA and Canada, the daytime running light \textbf{can} be dimmed by 50 percent during direction blinking on the blinking side.''} from the dataset~\cite{BMR+17}. Here, \textit{``can''} has to be replaced by \textit{``MUST''}.
In the \gls{dsl}\xspace, the requirement is modeled as \textit{``Direction blinking: For the USA and Canada, the daytime running light \textbf{MUST} be dimmed by 50 percent during direction blinking on the blinking side.''}
Requirements written without a modal verb should also be supplemented appropriately.
For example, in the requirement \textit{``The duration of a flashing cycle \textbf{is} 1 second.''}, the \gls{dsl}\xspace requests a requirement engineer to introduce a \textit{``MUST''} after \textit{``flashing cycle''}. Hence, the model in the \gls{dsl}\xspace would be: \textit{``The duration of a flashing cycle \textbf{MUST be} 1 second.''}.
\textbf{Ambiguity 3: Expressions.} In requirements, we often need to quantify and compare things. Again, natural language offers many ways to describe comparisons, making it difficult to grasp the information of requirements automatically. For this reason, we introduce a third construct for our \gls{dsl}, namely logical formulae. We allow both descriptive words (to keep the language close to natural formulations) and mathematical expressions in the \gls{dsl}\xspace. Depending on the use case, the sentences in the \gls{dsl}\xspace may thus look different; when mathematical signs are used, the sentences become considerably more compact.
One example is the requirement \textit{``The vehicle's doors are closed automatically when speeding velocity \textbf{is bigger than} 10 km/h''} from~\cite{BMR+17}.
Modeled in the \gls{dsl}\xspace with logical formulae, the sentence reads as follows: \textit{``The vehicle's doors are closed automatically when speeding velocity $\boldsymbol{>}$ 10 km/h''}. When using the alternative construct for logical formulae with descriptive words offered by the \gls{dsl}\xspace, the requirement translates to \textit{``The vehicle's doors are closed automatically when speeding velocity is \textbf{GREATER} 10 km/h''}. The two variants differ only in notation; for the word-based variant, proximity to natural language is desired to increase acceptance by the developers (R1, R2). Further keywords include \textit{``LESS''}, \textit{``EQUAL''}, \textit{``LESS OR EQUAL''}, and \textit{``GREATER OR EQUAL''}.
Again, such \gls{dsl} constructs standardize the way comparisons are formulated in a requirement. This facilitates automated consistency analysis since the operators and their semantics are clearly defined and the variables and constants, e.g., \textit{velocity} and \textit{10 km/h}, can be extracted relatively easily from the sentence. Now, if the same variable is used in another requirement, we can 1) link these requirements as treating the same context, e.g., to enable semantic requirement search, and 2) check whether the two requirements are consistent.
The three constructs introduced above can and should be combined when appropriate. For instance, the last example needs to be translated to \textit{``IF: speeding velocity is \textbf{GREATER} 10 km/h, THEN: the vehicle's doors MUST be closed automatically.''}
Now this requirement has almost no degrees of freedom in terms of formulation, and it is easy to extract the trigger variable (speeding velocity), the subject of the action (the vehicle's doors), and the desired state (closed automatically).
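To illustrate how much structure the combined form exposes, the following sketch decomposes such a requirement into trigger variable, comparison operator, bound, and action. The splitting heuristics (in particular the assumption that the trigger has the form \texttt{variable is KEYWORD bound}) are ours and only cover the simple cases discussed above.
\begin{verbatim}
import re

# DSL comparison keywords, longest first so that e.g. "GREATER OR EQUAL"
# is not matched as plain "GREATER".
OPERATORS = ["GREATER OR EQUAL", "LESS OR EQUAL", "GREATER", "LESS", "EQUAL"]
OP_PATTERN = "|".join(re.escape(op) for op in OPERATORS)

DSL_REQ = re.compile(
    r"^\s*IF:\s*(?P<variable>.+?)\s+is\s+(?P<op>" + OP_PATTERN + r")\s+"
    r"(?P<bound>[^,]+),\s*THEN:\s*(?P<action>.+?)\s*$"
)

def decompose(requirement):
    """Split a combined DSL requirement into variable, operator, bound, action."""
    match = DSL_REQ.match(requirement)
    return match.groupdict() if match else None

req = ("IF: speeding velocity is GREATER 10 km/h, "
       "THEN: the vehicle's doors MUST be closed automatically.")
print(decompose(req))
# {'variable': 'speeding velocity', 'op': 'GREATER', 'bound': '10 km/h',
#  'action': "the vehicle's doors MUST be closed automatically."}
\end{verbatim}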
The challenge we would like to solve in this paper is how to translate large numbers of legacy requirements into the \gls{dsl}, how to repair requirements showing syntactical errors, and how to support a requirement engineer writing requirements in the \gls{dsl} syntax.
\section{Translating Unstructured Requirements to DSL}
\label{sec:Translation}
\subsection{Overview}
To automatically translate legacy natural language requirements into the \gls{dsl} defined in \Cref{sec:ReqDSL}, fixing bad formulations and enforcing guideline usage and regulatory compliance, we utilize transformer-based language models.
Since we need to avoid data- and resource-intensive fine-tuning (the necessary amounts of data might be lacking for project- or company-specific designs, and the required hardware resources might not be accessible or might be too expensive), we make use of the few-shot capability, which has been shown to yield good results with only a few training examples. Hence, we rule out BERT and RoBERTa as models for our translation task and stick to the GPT family.
The number of training examples for few-shot learning a new task is usually constrained by the context window, typically allowing 10--100 examples~\cite{BMR+20}. Sometimes the number of examples is further constrained by the available computational resources. To exploit the available training data as far as possible, we propose a cascaded translation process in which we provide a dedicated few-shot model for each translation task, i.e., trigger-action, modal verbs, and expressions. This reduces the training complexity and yields more focused models, as the neural network does not have to decide which kind of translation(s) to perform but only has to perform the translation it is specialized for.
If only a single translation needs to be applied, the translation step must be preceded by a manual or automated classification of the input and the corresponding choice of a translation model. Otherwise, a sequential application of multiple models is necessary to incorporate all \gls{dsl} constructs; both variants are sketched below.
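The following sketch illustrates the two variants. The classifier category and the three specialized translation functions are placeholders (stubs) for the few-shot GPT-J instances described below; their names and interfaces are our assumptions, not a fixed implementation.
\begin{verbatim}
# Sketch of the two translation variants (stubs instead of real models).

def translate_if_then(req):
    return req          # stub: would query the If-Then few-shot model

def translate_modal_verbs(req):
    return req          # stub: would query the modal-verb few-shot model

def translate_expressions(req):
    return req          # stub: would query the expression few-shot model

MODELS = {
    "if-then": translate_if_then,
    "modal-verb": translate_modal_verbs,
    "expression": translate_expressions,
}

def translate_single(req, category):
    """Variant 1: a manual or automated classification picks exactly one model."""
    return MODELS[category](req)

def translate_cascaded(req, order=("if-then", "modal-verb", "expression")):
    """Variant 2: apply all specialized models sequentially to cover all constructs."""
    for category in order:
        req = MODELS[category](req)
    return req
\end{verbatim}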
\subsection{Requirement Translation}
\label{sec:RequirementsTranslation}
For each translation step, a set of few-shot examples, also referred to as the support set, for the respective requirements category is selected and given to the language model as context. Our hypothesis is
that a large capacity language model pretrained on a sufficiently large training set can be taught to solve a specific task such as a reformulation of a given requirement into a systematic form with a very small
support set. In contrast to fine-tuning approaches, where a pretrained model is further trained using a downstream task-specific training set, in few-shot learning the weights of the neural network are not adapted. The few-shot examples consist of input/output pairs and are input into the network as a demonstration for the task to be solved, followed by the actual query. The solution to the query is then generated as the model's output based on the support examples from the context.
The translation model can be implemented using any sufficiently large pretrained language model supporting few-shot learning. Models of higher capacity can be expected to perform better on few-shot learned downstream tasks. Based on promising preliminary results compared to other models, we decided to concentrate mainly on GPT-J-6B. GPT-Neo~\cite{gpt-neo}, another model from the GPT family, showed a lot of variance in its text generation in our preliminary experiments, producing many ``noisy'', unexpected outputs that often changed or extended the semantics of the source requirement without an obvious reason.
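For illustration, the sketch below shows one plausible way to assemble the few-shot context for such a model: the support-set pairs are concatenated in the \textit{Input:}/\textit{DSL:} format used in \Cref{sec:Evaluation}, separated by \texttt{\#\#\#} markers, and the query is appended with an empty \textit{DSL:} field for the model to complete. Truncating the completion at the next marker is our assumption for post-processing.
\begin{verbatim}
SEPARATOR = "\n###\n"

def build_prompt(support_set, query):
    """Concatenate Input/DSL pairs and append the query for the model to complete."""
    demos = SEPARATOR.join(
        f"Input: {source}\nDSL: {target}" for source, target in support_set
    )
    return f"{demos}{SEPARATOR}Input: {query}\nDSL:"

def extract_translation(completion):
    """Keep only the first generated requirement, i.e. cut at the next separator."""
    return completion.split("###", 1)[0].strip()

support_set = [
    ("If a defective illuminant is detected, the information about the defective"
     " illuminant is transmitted to the instrument cluster.",
     "IF: defective illuminant is detected, THEN: information about the defective"
     " illuminant is transmitted to the instrument cluster."),
]
prompt = build_prompt(
    support_set,
    "With activated darkness switch (only armored vehicles) the cornering light"
    " is not activated.",
)
# `prompt` is then fed to the language model; its completion is post-processed
# with extract_translation().
\end{verbatim}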
\section{Evaluation}
\label{sec:Evaluation}
\subsection{Research Questions and Metrics}
With the experiments conducted for the translation of requirements from unstructured text to \gls{dsl} our aim was to find answers to the following research questions:
\begin{enumerate}
\item[\textbf{RQ1:}] Can state-of-the-art language models
be employed to translate natural text requirements to systematic formulations based on few-shot learning?
\item[\textbf{RQ2:}] How many few-shot learning examples are required to train a translation rule?
\end{enumerate}
To answer RQ1, we applied few-shot learning separately for If-Then requirements, modal verb insertion, and expressions (\textit{cf.}\@\xspace~\Cref{sec:ReqDSL}) (recall that for training each rule we use a separate instance of the language model).
As introduced in~\Cref{sec:ReqDSL} the trained constructs are:
\begin{enumerate}
\item[a)] ``\textit{'IF:' trigger 'THEN:' action}'' syntax for trigger action requirements,
\item[b)] modal verbs enforcement, reframing sentences to contain the ``\textit{MUST}'' keyword, and
\item[c)] the usage of ``\textit{LESS / GREATER / EQUAL / LESS EQUAL / GREATER EQUAL}'' operators in constraints.
\end{enumerate}
The input for the few-shot learning consisted of pairs containing the unstructured requirement and the desired DSL formulation.
To evaluate the ``trained'' language model, it had to transform an unstructured requirement that was not given to the model as an example into a requirement in the \gls{dsl}\xspace.
We then assessed the result of this transformation, which requires a meaningful evaluation metric.
Metrics such as BLEU~\cite{PRW+02}, typically used to evaluate results of machine translation, are not helpful due to the many different possibilities to express a requirement correctly. Furthermore, semantic correctness does not imply compliance with the expected syntactical structure we would like to enforce. For this reason, we propose
a custom evaluation scale with six possible quality classes
to assess the correctness of the translated requirements:
\begin{enumerate*}
\item[\textbf{Class 1}:] The translation is both syntactically and semantically correct and fulfills the required formulation rule. No changes required.
\item[\textbf{Class 2}:] The translation is semantically correct, but contains one or two syntactical inaccuracies that prevent it from fully implementing the desired rule.
\item[\textbf{Class 3}:] The translation is syntactically correct but fails to fully cover the semantics of the source requirement (e.g., by missing a quantifier or a marginal constraint).
\item[\textbf{Class 4}:] The translation contains one or two syntactical inaccuracies that prevent it from fully implementing the desired rule and the semantics is not fully covered, i.e., a combination of classes 2 and 3.
\item[\textbf{Class 5}:] The translation has grave syntactical errors or does not implement the desired rule. An identity mapping would result in this label, as well (unless the input already implements the desired rule).
\item[\textbf{Class 6}:] The translation is semantically wrong.
\end{enumerate*}
A flaw of this scale is that it is not ordinal, e.g. we cannot say that class 2 is better than class 3. However, based on the experience we gathered with it in this work, in most cases a smaller number indicates a more satisfying result.
To answer RQ2, we conducted our evaluation with three differently sized few-shot support sets per translation rule (If-Then, modal verbs, and constraints), consisting of one, four, and six examples each. In the case of one-shot learning, \textit{i.e.,}\@\xspace if only one example is presented in training, the result for the conversion of constraints containing (in)equalities depends on the keyword used in the example. For this reason, we tried two different one-shot trainings, \textit{i.e.,}\@\xspace one introducing ``\textit{EQUAL}'' and one introducing ``\textit{LESS OR EQUAL}''. The requirements used for testing were \textbf{not} present in the support sets.
\subsection{Concrete Training Examples}
In this section we provide excerpts of the training and prediction experiments to give the reader an intuition
on how the GPT-J-6B model dealt with the translation, which constructs were particularly challenging, and how the number of examples affected the quality of the translation. The full support sets and the complete translation experiment results can be found in the appendix in \cref{tab:supset:logic:1:equal}-\cref{tab:eval:modal:6}.
\textbf{If-Then, 1 training example:} To one-shot train the translation of a requirement to the desired If-Then syntax we use the following input and output pair:
\noindent \textit{``Input: If a defective illuminant is detected, the information about the defective illuminant is transmitted to the instrument cluster.\\
DSL: IF: defective illuminant is detected, THEN: information about the defective illuminant is transmitted to the instrument cluster.''}
As we can see in the example, the two keywords \textit{``IF:''} and \textit{``THEN:''} are included in the target sentence. Apart from that the sentence remains almost unchanged, making it relatively easy for the model
to learn the rule.
Having seen this example, the model is already able to apply the rule perfectly to some examples of the test set, yielding a class 1 rating according to the scheme given above. For instance, the requirement
\noindent \textit{``If tip-blinking was activated shortly before deactivation of the hazard warning, this is not considered during the deactivation of the hazard warning.''}
\noindent is correctly translated to
\noindent \textit{``IF: tip-blinking was activated shortly before deactivation of the hazard warning, THEN: this is not considered during the deactivation of the hazard warning
''}.
However, some other examples are translated incompletely or wrong, e.g. the requirement
\noindent \textit{``With activated darkness switch (only armored vehicles) the cornering light is not activated.''}
\noindent is translated to
\noindent \textit{``IF: darkness switch is activated, THEN: cornering light is not activated.
''}
While the keywords are included at the correct position, the model drops the information in parentheses
stating that the requirement only applies to armored vehicles. For this reason, we consider this translation as syntactically correct, but semantically incomplete, resulting in a class 3 rating.
At this point the model has not yet fully learnt to implement the If-Then syntax if the sentence does not start with an \textit{``if''}. For instance, the example
\noindent \textit{``When moving the pitman arm in position turn left the vehicle flashes all left direction indicators (front left, exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1.''}
\noindent remains
unchanged by the one-shot model. This yields a class 5 rating, as the semantics is correct, but the desired syntax is not implemented. Interestingly, another requirement starting with \textit{``when''} (\textit{``When hazard warning is deactivated and the pit arm is in position direction blinking left or direction blinking right, the direction blinking cycle should be released.
''}) results in the output
\noindent \textit{``WHEN: hazard warning is deactivated, THEN: direction blinking cycle should be released
''}.
Here the model introduces a new keyword (\textit{``WHEN:''}), which did not happen in the previous example. Furthermore the second condition concerning the pit arm position has been dropped, resulting in a class 6 rating due to both syntactical and semantical errors.
\textbf{If-Then, 4 training examples:} Aiming to improve the translation quality, we introduce three additional training example pairs:
\noindent \textit{``Input: If no advancing vehicle is recognized any more, the high beam illumination is restored within 2 seconds. \\
DSL: IF: no advancing vehicle is recognized any more, THEN: high beam illumination is restored within 2 seconds.\\
\#\#\#\\
Input: If the light rotary switch is in position "auto", the adaptive high beam headlights are activated by moving the pitman arm to the back. \\
DSL: IF: the light rotary switch is in position "auto" and the pitman arm is moved back, THEN: the adaptive high beam headlights are activated.\\
\#\#\#\\
Input: (a) When the driver enables the cruise control (by pulling the cruise control lever or by pressing the cruise control lever up or down), the vehicle maintains the set speed if possible. \\
DSL: IF: driver enables the cruise control by pulling the cruise control lever or by pressing the cruise control lever up or down, THEN: the vehicle maintains the set speed if possible.
''}
The three additional examples lead to an improvement in translation quality, for instance, instead of generating a new keyword \textit{``WHEN:''}, the model now knows due to the last training example that it has to replace \textit{``When''} by \textit{``IF:''}. So now, the hazard warning requirement is translated correctly to
\noindent \textit{``IF: hazard warning is deactivated and the pit arm is in position direction blinking left or direction blinking right, THEN: the direction blinking cycle should be released.''} and yields a class 1 rating.
\textbf{If-Then, 6 training examples:} To further improve the translation quality, the following examples have been added:
\noindent \textit{``
Input: If the driver pushes down the cruise control lever with cruise control activated up to the first resistance level, the speed set point of the cruise control is reduced by N. \\
DSL: IF: driver pushes down the cruise control lever with cruise control activated up to the first resistance level, THEN: the speed set point of the cruise control is reduced by N. \\
\#\#\#\\
Input: By pushing the brake or the hand brake, the cruise control is deactivated until it is activated again. \\
DSL: IF: the brake or the hand brake is pushed, THEN: the cruise control is deactivated until it is activated again.''}
In this variant the translation of the armored vehicle requirement yields the text \textit{``IF: with activated darkness switch (only armored vehicles), THEN: the cornering light is not activated.''} Here the model manages to keep the information on armored vehicles in parentheses, but chooses an awkward formulation in the if-part, i.e., small syntactic fixes are required. Hence, the model improves from a class 3 to a class 2 rating.
Requirements of the form \textit{``Context: requirement text.''} were particularly problematic for the model. The context was often interpreted as a condition. For instance, the requirement \textit{``Distance Warning: The vehicle warns the driver visually and/or acoustically if the vehicle is closer to the car ahead than allowed by the safety distance.''} is translated to \textit{``IF: distance warning is activated, THEN: the vehicle warns the driver visually and/or acoustically''}. This problem persists for all sizes of support sets
we have tried, yielding a class 6 grading. The problem likely persists for other deviating sentence structures as well.
\textbf{Modal verb, 1 training example:}
The one-shot support set for the insertion of modal verbs is given as:
\noindent \textit{``Input: The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer.\\
DSL: The maximum deviation of the pulse ratio MUST be below the cognitive threshold of a human observer.''}
Given this single example the model is able to insert the \textit{``MUST''} keyword into simple sentences. For instance, the descriptive sentence \textit{``The frame rate of the camera is 60 Hz.''} is translated to a
real requirement \textit{``The frame rate of the camera MUST be 60 Hz.''}, thereby yielding a class 1 grading. However, at this point it fails to insert the \textit{``MUST''} keyword in most other test examples.
\textbf{Modal verb, 4 training examples:} For the 4-shot support set we use the following additional training example pairs:
\noindent \textit{``Input: Direction blinking: For USA and CANADA, the daytime running light shall be dimmed by 50\% during direction blinking on the blinking side.\\
DSL: Direction blinking: For USA and CANADA, the daytime running light MUST be dimmed by 50\% during direction blinking on the blinking side.\\
\#\#\#\\
Input: The adaptation of the pulse ratio has to occur at the latest after two complete flashing cycles. \\
DSL: The adaptation of the pulse ratio MUST occur at the latest after two complete flashing cycles. \\
\#\#\#\\
Input: The duration of a flashing cycle is 1 second.\\
DSL: The duration of a flashing cycle MUST be 1 second.''}
This helps the model to translate more inputs correctly, e.g. while the requirement \textit{``A flashing cycle (bright to dark) has to be always completed, before a new flashing cycle can occur''} remained unchanged by the model in the one-shot case, it is now translated to \textit{``A flashing cycle (bright to dark) MUST be always completed, before a new flashing cycle can occur.''} This yields a class 1 grade.
Similarly, the example \textit{``Also after 1000 flashing cycles the cumulated deviation will not exceed 0.05s.''} is now translated to \textit{``Also after 1000 flashing cycles the cumulated deviation MUST NOT exceed 0.05s.''}, again yielding class 1.
\textbf{Modal verb, 6 training examples:}
Adding the following additional two training examples to the support set further improves the translation quality:
\noindent \textit{``Input: A flashing cycle (bright to dark) will always be completed, before a new flashing cycle can occur. \\
DSL: A flashing cycle (bright to dark) MUST always be completed, before a new flashing cycle can occur.\\
\#\#\#\\
Translate input to DSL
Input: With subvoltage the ambient light is not available. \\
DSL: With subvoltage the ambient light MUST not be available.''}
The two inputs \textit{``With subvoltage the ambient light is not available.''} and \textit{``Low beam illuminant shall be LED.''} are now correctly translated to \textit{``With subvoltage the ambient light MUST not be available.''} and \textit{``Low beam illuminant MUST be LED.''}.
Some inputs are still not translated correctly, e.g., \textit{``The functions of the system are classified as safety relevant in with respect to ISO 26262.''}, but also the seemingly simpler requirement formulation \textit{``The vehicle does not exceed a set speed.''}.
\textbf{Expressions with keywords, 1 training example:}
In the next experiment, we aim to detect expressions, in particular comparisons in requirements and
insert keywords such as \textit{``EQUALS''} or \textit{``GREATER''} into the sentences.
The one-shot support set is given as:
\noindent \textit{``Input: The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer.\\
DSL: The maximum deviation of the pulse ratio should be LESS OR EQUAL cognitive threshold of human observer.''}
The problem here is the relatively large number of keywords. The model cannot infer all the keywords from a single example. Nevertheless, it tries to apply the \textit{``LESS OR EQUAL''} keyword
to some test inputs at the correct place. For instance, the requirement
\noindent \textit{``The luminous intensity of the daytime running light must be lower than 400cd.''}
\noindent is transformed to
\noindent \textit{``The luminous intensity of the daytime running light must be LESS OR EQUAL 400cd.''}.
\noindent The result has a semantic flaw as it allows
an equality which is not the case for the original, hence resulting in a class 3 rating.
Interestingly, the model transforms \textit{``The maximum curb weight of the vehicle must be no more than 3.5t.''} to \textit{``The maximum curb weight of the vehicle must be LESS THAN 3.5t.''}, thereby introducing a keyword it has not seen in training. In contrast to the previous requirement, equality is allowed here in the original but is excluded by the model, again resulting in a class 3 rating.
We tried a one-shot variant with an example using the \textit{``EQUAL''} keyword instead of \textit{``LESS OR EQUAL''}. As expected, the model then only used this keyword and hence performed better where \textit{``EQUAL''} was required and worse otherwise.
\textbf{Expressions with keywords, 4 training examples:} In the next experiment we used the following four examples as support set:
\noindent \textit{``Input: The duration of a flashing cycle is 1 second
DSL: The Duration of a flashing cycle is EQUAL 1 second.\\
\#\#\#\\
Input: The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer.\\
DSL: The maximum deviation of the pulse ratio should be LESS OR EQUAL cognitive threshold of human observer.
\\
\#\#\#\\
Input: The minimal number of seatbelts used has to be 1.\\
DSL: The number of seatbelts used has to be GREATER OR EQUAL 1.
\\
\#\#\#\\
Input: The vehicles doors are closed automaticly when speeding velocity is bigger than 10km/h.\\
DSL: The vehicles doors are closed automaticly when speeding velocity is GREATER 10km/h.''}
Surprisingly, the model was biased towards inserting the \textit{``LESS THAN''} keyword, which was not present in the support set at all. The overall improvements were minor. The luminous intensity requirement was now transformed correctly to \textit{``The luminous intensity of the daytime running light must be LESS THAN 400cd.''}, the only requirement to receive a class 1 rating in this experiment.
\textbf{Expressions with keywords, 6 training examples:}
For the last experiment we added the following two training examples:
\noindent \textit{``Input: Also after 1000 flashing cycles the cumulated deviation must not exceed 0.05s.
DSL: After 1000 flashing cycles the cumulated Deviation must be GREATER OR EQUAL 0.05s.
\\
\#\#\#\\
Input: The cruising speed has to be set at a speed which exceeds 10km/h.
DSL: The cruising speed has to be GREATER 10km/h.''}
In this experiment additionally the maximum curb weight requirement was transformed perfectly to \textit{`` The maximum curb weight of the vehicle must be LESS OR EQUAL 3.5t.''}, now including the equality part.
The experiment shows a main drawback inherent to models such as GPT. The models use statistically learned sequences without understanding the semantics. For instance, the model often confuses less-than and greater-than inequalities, which are very similar from a syntactical point of view but have completely different semantics. As an example, the model delivers translations such as \textit{``The minimum distance to a vehicle in front has to be LESS OR EQUAL 5m.''}. While it is obvious to a human reader that \textit{``GREATER''} should be used here, a neural language model has no logical interpretation.
Another problem which becomes evident in this evaluation is that the language model has difficulties reformulating sentences when a pure insertion of a keyword is insufficient. For instance, the example \textit{``The vehicle's horn must not be louder than 50dB.''} is transformed to \textit{``The vehicle's horn must be LESS OR EQUAL 50dB.''}. While the keyword used is correct, the sentence has a semantic flaw: it is not the horn that has to be less than or equal to 50dB but its intensity, i.e., the perfect translation would be \textit{``The intensity of the vehicle's horn must be LESS OR EQUAL 50dB.''} The model struggles to insert the physical quantity (intensity) here.
\textbf{Expression Extraction:}
Additionally, we performed experiments aiming to extract mathematical formulas only containing the key operators and variable names. For instance, the input \textit{``The braking distance can not be longer than 300m.''} yields the output \textit{``braking distance <= 300m''}. Such an extraction can be useful to
implement tracing and consistency checking of multiple requirements. For instance, if the variable braking distance is found in multiple requirements with different bounds, this can be detected automatically and shown to a human expert, who checks whether the requirements are indeed contradictory. Further successful extractions we generated using the model are \textit{``horn loudness <= 50dB''}, \textit{``low beam illuminant = LED''}, and \textit{``distance to vehicle in front >= 5m''}. Most others showed slight to medium inaccuracies, e.g., \textit{``maximum velocity <= 260km/h''} or \textit{``blinking lights = 3s''}.
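As a sketch of how such extractions could feed into automated consistency checking, the snippet below parses expressions of the above form into (variable, operator, value) triples and flags variables whose bounds cannot hold simultaneously. It is a post-processing idea built on our assumptions (units are ignored and only numeric bounds are handled), not part of the evaluated pipeline.
\begin{verbatim}
import re
from collections import defaultdict

# Parse extractions such as "braking distance <= 300m" into (variable, op, value).
EXPR = re.compile(
    r"^(?P<var>.+?)\s*(?P<op><=|>=|<|>|=)\s*(?P<value>[\d.]+)\s*(?P<unit>\S*)$"
)

def parse(extraction):
    match = EXPR.match(extraction.strip())
    if match is None:
        return None
    return match["var"].strip(), match["op"], float(match["value"])

def find_conflicts(extractions):
    """Group bounds per variable and report sets that cannot hold at the same time."""
    bounds = defaultdict(list)
    for text in extractions:
        parsed = parse(text)
        if parsed:
            var, op, value = parsed
            bounds[var].append((op, value, text))
    conflicts = []
    for var, entries in bounds.items():
        uppers = [v for op, v, _ in entries if op in ("<", "<=")]
        lowers = [v for op, v, _ in entries if op in (">", ">=")]
        if uppers and lowers and min(uppers) < max(lowers):
            conflicts.append((var, entries))
    return conflicts

extractions = ["braking distance <= 300m",
               "braking distance >= 400m",
               "horn loudness <= 50dB"]
for var, entries in find_conflicts(extractions):
    print("Potentially contradictory bounds for", repr(var), ":",
          [text for *_, text in entries])
\end{verbatim}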
\subsection{Evaluation Summary}
An overview of all experiment results is summarized in~\Cref{tab:translation}.
As expected, in each of the three experiments, the translation quality improved with larger support sets. It is remarkable, however, how steep the learning curve is.
It suggests that few-shot learning can deal with \gls{nlp} tasks in requirements engineering even when only small training sets are available.
\begin{table*}
\caption{\label{tab:translation}Evaluation results for the translation experiments from
natural language requirements to domain-specific syntax.}
\centering
\resizebox{0.70\textwidth}{!}{%
\begin{tabular}{ |p{3cm}||p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|}
\hline
\# of training examples & Class 1 & Class 2 & Class 3 & Class 4 & Class 5 & Class 6 & total \\
\hline
\multicolumn{8}{|c|}{Translation results for the If-Then structure using GPT-J-6B} \\
\hline
\hline
1 & 2 & 0 & 1 & 0 & 6 & 2 & 11 \\
\hline
4 & 5 & 2 & 0 & 1 & 1 & 2 & 11 \\
\hline
6 & 6 & 3 & 0 & 0 & 0 & 2 & 11 \\
\hline
\hline
\multicolumn{8}{|c|}{Translation results for modal verbs using GPT-J-6B} \\
\hline
\hline
1 & 3 & 0 & 0 & 0 & 5 & 0 & 8 \\
\hline
4 & 5 & 0 & 0 & 0 & 3 & 0 & 8 \\
\hline
6 & 6 & 0 & 0 & 0 & 2 & 0 & 8 \\
\hline
\hline
\multicolumn{8}{|c|}{Translation results for the propositional logic structure using GPT-J-6B} \\
\hline
\hline
1 (trained on keyword: equal) & 2 & 0 & 1 & 0 & 4 & 1 & 8 \\
\hline
1 (trained on keyword: less or equal) & 0 & 0 & 4 & 0 & 4 & 0 & 8 \\
\hline
4 & 1 & 0 & 1 & 0 & 3 & 3 & 8 \\
\hline
6 & 2 & 0 & 3 & 0 & 2 & 1 & 8 \\
\hline
8 & 3 & 0 & 0 & 0 & 3 & 2 & 8 \\
\hline
\end{tabular}}
\end{table*}
In the first experiment (If-Then translation), given a support set of six training examples, six out of eleven examples achieve a class 1 rating, \textit{i.e.,}\@\xspace the model did a perfect translation. Further, the results with only four training pairs in the support set were still good. However, two requirements were transformed incorrectly. An example for a class 1 translation is: \textit{``Emergency Brake Assist: The vehicle decelerates in critical situations to a full standstill.''} is transformed to \textit{``IF: emergency brake assist is activated, THEN: the vehicle decelerates in critical situations to a full standstill.''} Here the malformed sentence is restructured with the verb ``is activated'', separating trigger and action. Furthermore, the model learns to replace words such as ``if'', ``once'', ``when'' by the keyword ``IF:'' and to insert ``THEN:'' at the correct position. The two requirements which were not transformed correctly even for six training examples both contained two conditions and were hence more difficult to translate. Apparently, the network has difficulties processing inputs consisting of several sentences, e.g. multiple trigger-action clauses. However, subdividing such requirements into separate inputs seems to solve the problem.
The second experiment (modal verbs translation) was probably the easiest, which is not surprising. The language model only needs to add or replace the right verb with ``MUST''. For instance ``The frame rate of the camera is 60 Hz.'' was successfully translated to ``The frame rate of the camera MUST be 60 Hz''. The model did this well for all but two requirements given six training examples. One of the two outliers contained a negation, which was not explicitly trained.
For the third experiment, dealing with constraints (introducing (in)equalities), we crafted an additional dataset consisting of six few-shot training examples and six test examples featuring equality and inequality clauses in order to explore the capability of our model to identify and generate constraint sentences. Due to more variance in the desired translations (five keywords to choose from) and the difficulty encountered by the model in transforming adjectives to subjects (e.g., ``the horn has to be louder than'' needs to become ``the horn intensity has to be GREATER than''), we added three more training examples focusing on this issue. This did, however, not lead to the desired results; the model did not learn to transform the sentences accordingly. Sentences for which no such adjective-to-subject transformation was necessary had a better success rate in the test set. For instance, ``The maximum curb weight of the vehicle must be no more than 3.5t.'' was successfully translated to ``The maximum curb weight of the vehicle must be LESS OR EQUAL 3.5t.'' Some translations featured a wrong operator, but were otherwise syntactically correct. In applications such as smart editors, this could still be a helpful way to sensitize requirement engineers to the correct syntax.
We conducted a variant of the third experiment, where a) the natural language operators were replaced by mathematical ones, i.e. $<, \leq, >, \geq, =$, and b) only the variables/constants of the (in-)equality were extracted in the support set, i.e. without keeping the rest of the sentence. The model managed to perform these transformations surprisingly well (except for one example where the model put a ``$\leq$'' instead of ``$=$'' for the maximum velocity), e.g. from ``The vehicles horn must not be louder than 50dB'' the model extracted ``horn loudness $\leq$ 50dB''. Such extractions can be used in tracing, e.g. to identify assignments or conditions in code which are inconsistent with the underlying requirements. Variables and their values or bounds can be extracted by a language model and looked up in code automatically.
\section{Threats to Validity}\label{sec:threats}
\paragraph{Construction Validity}
An objective metric of requirement classification and translation is not feasible per se and depends on the context, perspective of the reader, and other factors. For our evaluation we let five software engineers label the translations independently and took the majority vote after a discussion to mitigate the risk of bias.
\paragraph{Internal Validity}
Our evaluation dataset might have too small a variance: the legacy examples stem from a single project and are mostly well written; there are no completely ill-formed examples. This might render the translation process too easy for the language model.
\paragraph{External Validity}
Generalizability is a major concern in requirements engineering as findings and algorithms need to be transferred to unseen projects and domains. The methodology presented in this paper is designed for domain-specific adaptations, \textit{i.e.,}\@\xspace the results are only valid for the datasets at hand.
For the translation task, we cannot rule out that different or more complex syntactic elements required in other domains may not be realizable with few-shot learning or may require larger support sets. Generalizability of the concrete few-shot models derived in this paper across different domains is probably not feasible due to differences in domain-specific wordings and formulation approaches, but it is also not pursued in this work since each domain should use tailored support sets.
\section{Conclusion}
\label{sec:Conclusion}
In this paper, we have shown how neural language models such as representatives of the GPT family can support requirements engineering without the need for resource and data intensive fine-tuning.
Our most important result is that few-shot learning of language models can be applied to translate legacy requirements into
a given structured \gls{dsl} form automatically. However, language models available today still require human supervision. Since the context space available in a language model for few-shot learning is limited to a very small number of examples, we suggest manually or automatically classifying a requirement before translation in order to choose the few-shot examples to be used for the given translation task.
We draw the conclusion that few-shot learning is a powerful tool for the processing
of natural language requirements which does not require costly training from scratch and expensive
hardware and is hence accessible even for small companies with limited resources or when training data is scarce. It has the potential to accelerate the formalization of legacy requirements and to improve the resulting requirements quality and syntactical consistency. However, as of now, supervision cannot be eliminated; this still binds human resources, but the human task is reduced to correction.
Instead of having to rely on the generalizability of
a single language model for all projects, the few-shot learning approach can be
employed to tailor language models to new projects and application domains quickly. In very specific areas, a dedicated fine-tuning to the domain might be necessary; e.g., we realized in our experiments that in some cases the language models had difficulties processing automotive-related terms correctly.
We believe that the presented technology can be used for a variety of tasks in model-driven requirements
engineering including but not limited to the translation of legacy requirements, autocompletion for smart editors, requirement look-up and information extraction. While there is still room for improvement, it is reasonable to expect that the capacity of language models will steadily grow larger in the next years providing a continuously increasing few-shot learning performance.
\section{Appendix}
\label{sec:Appendix}
\setcounter{table}{0}
\renewcommand\thetable{\Alph{section}.\arabic{table}}
\begin{table}[h]
\centering
\caption{Support Set: 1 Example Logical Formulae (EQUAL).}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The duration of a flashing cycle is 1 second. & The duration of a flashing cycle is EQUAL 1 second. \\ \hline
\end{tabularx}
\label{tab:supset:logic:1:equal}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 1 Example Logical Formulae (LESS-OR-EQUAL).}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer & The maximum deviation of the pulse ratio should be LESS OR EQUAL cognitive threshold of human observer. \\ \hline
\end{tabularx}
\label{tab:supset:logic:1:lessorequal}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 4 Examples Logical Formulae.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The duration of a flashing cycle is 1 second. & The duration of a flashing cycle is EQUAL 1 second. \\ \hline
The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer. &
The maximum deviation of the pulse ratio should be EQUAL cognitive threshold of human observer.\\\hline
The minimal number of seatbelts used has to be 1. & The number of seatbelts used has to be GREATER OR EQUAL 1.\\\hline
The vehicles doors are closed automaticly when speeding velocity is bigger than 10km/h. & The vehicles doors are closed automaticly when speeding velocity is GREATER 10km/h.\\ \hline
\end{tabularx}
\label{tab:supset:logic:4}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 6 Examples Logical Formulae.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The duration of a flashing cycle is 1 second. & The duration of a flashing cycle is EQUAL 1 second.
\\ \hline
Also after 1000 flashing cycles the cumulated deviation must not exceed 0.05s. & After 1000 flashing cycles the cumulated Deviation must be GREATER OR EQUAL 0.05s. \\\hline
The cruising speed has to be set at a speed which exceeds 10km/h. & The cruising speed has to be GREATER 10km/h.
\\\hline
The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer. &
The maximum deviation of the pulse ratio should be EQUAL cognitive threshold of human observer. \\\hline
The minimal number of seatbelts used has to be 1. & The number of seatbelts used has to be GREATER OR EQUAL 1. \\\hline
The vehicles doors are closed automaticly when speeding velocity is bigger than 10km/h. & The vehicles doors are closed automaticly when speeding velocity is
GREATER 10km/h.
\\\hline
\end{tabularx}
\label{tab:supset:logic:6}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 8 Examples Logical Formulae.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The duration of a flashing cycle is 1 second. & The duration of a flashing cycle is EQUAL 1 second.
\\ \hline
Also after 1000 flashing cycles the cumulated deviation must not exceed 0.05s. & After 1000 flashing cycles the cumulated Deviation must be GREATER OR EQUAL 0.05s. \\\hline
The cruising speed has to be set at a speed which exceeds 10km/h. & The cruising speed has to be GREATER 10km/h.
\\\hline
The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer. &
The maximum deviation of the pulse ratio should be EQUAL cognitive threshold of human observer. \\\hline
The minimal number of seatbelts used has to be 1. & The number of seatbelts used has to be GREATER OR EQUAL 1. \\\hline
The vehicles doors are closed automaticly when speeding velocity is bigger than 10km/h. & The vehicles doors are closed automaticly when speeding velocity is GREATER 10km/h.
\\\hline
The interior material must be Velour. & The interior material is EQUAL Velour. \\\hline
Flashing light cycle does not exceed 3 flashes. & Flashing light cycle is LESS or EQUAL 3 flashes.
\\\hline
\end{tabularx}
\label{tab:supset:logic:8}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 1 Example Logical Formulae (EQUAL).}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The luminous intensity of the daytime running light must be lower than 400cd. & The luminous intensity of the daytime running light must be EQUAL 400cd. & 6 \\\hline
The range of the vehicle shall be at least 450km under WLTP conditions & The range of the vehicle shall be at least 450km under WLTP conditions. & 5 \\\hline
If the speed limiter is activated, the vehicle speed shall not exceed the set speed.&If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& 5 \\\hline
The maximum curb weight of the vehicle must be no more than 3.5t & The maximum curb weight of the vehicle must be EQUAL 3.5t. & 3\\\hline
The breaking distance can not be longer than 300m. & The breaking distance can not be longer than 300m. & 5 \\\hline
The vehicles horn must not be louder than 50dz. &The vehicles horn must not be louder than 50dz. &5\\\hline
The vehicles maximum velocity is 260km/h. &The vehicles maximum velocity is EQUAL 260km/h.&1\\\hline
The minimun distance to a vehicle in front has to be 5m. &The minimun distance to a vehicle in front has to be EQUAL 5m. &1\\\hline
\end{tabularx}
\label{tab:eval:logic:1:equal}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 1 Example Logical Formulae (LESS-OR-EQUAL).}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The luminous intensity of the daytime running light must be lower than 400cd. & The luminous intensity of the daytime running light must be LESS OR EQUAL 400cd. & 3 \\\hline
The range of the vehicle shall be at least 450km under WLTP conditions & The range of the vehicle shall be at least 450km under WLTP conditions. & 5 \\\hline
If the speed limiter is activated, the vehicle speed shall not exceed the set speed.&If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& 5 \\\hline
The maximum curb weight of the vehicle must be no more than 3.5t & The maximum curb weight of the vehicle must be LESS THAN 3.5t. & 3\\\hline
The breaking distance can not be longer than 300m. & The breaking distance can not be longer than 300m. & 5 \\\hline
The vehicles horn must not be louder than 50dz. &The vehicles horn must not be louder than 50dz. &5\\\hline
The vehicles maximum velocity is 260km/h. &The vehicles maximum velocity is LESS OR EQUAL 260km/h. &3\\\hline
The minimun distance to a vehicle in front has to be 5m. & The minimum distance to a vehicle in front has to be LESS OR EQUAL 5m. &3\\\hline
\end{tabularx}
\label{tab:eval:logic:1:lessorequal}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 4 Examples Logical Formulae.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The luminous intensity of the daytime running light must be lower than 400cd. & The luminous intensity of the daytime running light must be LESS THAN 400cd. & 1 \\\hline
The range of the vehicle shall be at least 450km under WLTP conditions & The range of the vehicle shall be at least 450km under WLTP conditions. & 5 \\\hline
If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& 5 \\\hline
The maximum curb weight of the vehicle must be no more than 3.5t & The maximum curb weight of the vehicle must be LESS THAN 3.5t. & 3\\\hline
The breaking distance can not be longer than 300m. & The breaking distance can not be LESS THAN 300m. & 6 \\\hline
The vehicles horn must not be louder than 50dz. & The vehicles horn must not be LOUDER THAN 50dz. &5\\\hline
The vehicles maximum velocity is 260km/h. & The vehicles maximum velocity is LESS THAN 260km/h. &6\\\hline
The minimun distance to a vehicle in front has to be 5m. & The minimun distance to a vehicle in front has to be LESS THAN 5m. &6\\\hline
\end{tabularx}
\label{tab:eval:logic:4}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 6 Examples Logical Formulae.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The luminous intensity of the daytime running light must be lower than 400cd. & The luminous intensity of the daytime running light must be LESS THAN 400cd. & 1 \\\hline
The range of the vehicle shall be at least 450km under WLTP conditions & The range of the vehicle shall be at least 450km under WLTP conditions. & 5 \\\hline
If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& 5 \\\hline
The maximum curb weight of the vehicle must be no more than 3.5t & The maximum curb weight of the vehicle must be LESS OR EQUAL 3.5t. & 1\\\hline
The breaking distance can not be longer than 300m. & The breaking distance can not be LESS OR EQUAL 300m. & 6 \\\hline
The vehicles horn must not be louder than 50dz. & The vehicles horn must be LESS OR EQUAL 50dz. &3\\\hline
The vehicles maximum velocity is 260km/h. & The vehicles maximum velocity is LESS OR EQUAL 260km/h. &3\\\hline
The minimun distance to a vehicle in front has to be 5m. & The minimum distance to a vehicle in front has to be LESS OR EQUAL 5m. &3\\\hline
\end{tabularx}
\label{tab:eval:logic:6}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 8 Examples Logical Formulae.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The luminous intensity of the daytime running light must be lower than 400cd. &The luminous intensity of the daytime running light must be LESS THAN 400cd. & 1 \\\hline
The range of the vehicle shall be at least 450km under WLTP conditions & The range of the vehicle shall be at least 450km under WLTP conditions. & 5 \\\hline
If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& If the speed limiter is activated, the vehicle speed shall not exceed the set speed.& 5 \\\hline
The maximum curb weight of the vehicle must be no more than 3.5t & The maximum curb weight of the vehicle must be LESS THAN 3.5t. & 1\\\hline
The breaking distance can not be longer than 300m. & The breaking distance can not be GREATER THAN 300m. & 1 \\\hline
The vehicles horn must not be louder than 50dz. & The vehicles horn must not be LOUDER THAN 50dz. &5\\\hline
The vehicles maximum velocity is 260km/h. & The vehicles maximum velocity is LESS THAN 260km/h. &6\\\hline
The minimun distance to a vehicle in front has to be 5m. & The minimun distance to a vehicle in front has to be LESS THAN 5m. &6\\\hline
\end{tabularx}
\label{tab:eval:logic:8}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 1 Example If-Then Structure.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
If a defective illuminant is detected, the information about the defective illuminant is transmitted to the instrument cluster. & IF: defective illuminant is detected, THEN: information about the defective illuminant is transmitted to the instrument cluster\\ \hline
\end{tabularx}
\label{tab:supset:ifthen:1}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 4 Examples If-Then Structure.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
If a defective illuminant is detected, the information about the defective illuminant is transmitted to the instrument cluster. & IF: defective illuminant is detected, THEN: information about the defective illuminant is transmitted to the instrument cluster\\ \hline
If no advancing vehicle is recognized any more, the high beam illumination is restored within 2 seconds. & IF: no advancing vehicle is recognized any more, THEN: high beam illumination is restored within 2 seconds.\\ \hline
If the light rotary switch is in position "auto", the adaptive high beam headlights are activated by moving the pitman arm to the back. & IF: the light rotary switch is in position ""auto"" and the pitman arm is moved back, THEN: the adaptive high beam headlights are activated.\\ \hline
(a) When the driver enables the cruise control (by pulling the cruise control lever or by pressing the cruise control lever up or down), the vehicle maintains the set speed if possible. & IF: driver enables the cruise control by pulling the cruise control lever or by pressing the cruise control lever up or down, THEN: the vehicle maintains the set speed if possible.\\ \hline
\end{tabularx}
\label{tab:supset:ifthen:4}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 6 Examples If-Then Structure.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
If a defective illuminant is detected, the information about the defective illuminant is transmitted to the instrument cluster. & IF: defective illuminant is detected, THEN: information about the defective illuminant is transmitted to the instrument cluster\\ \hline
If no advancing vehicle is recognized any more, the high beam illumination is restored within 2 seconds. & IF: no advancing vehicle is recognized any more, THEN:high beam illumination is restored within 2 seconds.\\ \hline
If the light rotary switch is in position "auto", the adaptive high beam headlights are activated by moving the pitman arm to the back. & IF: the light rotary switch is in position ""auto"" and the pitman arm is moved back, THEN: the adaptive high beam headlights are activated.\\ \hline
(a) When the driver enables the cruise control (by pulling the cruise control lever or by pressing the cruise control lever up or down), the vehicle maintains the set speed if possible. & IF: driver enables the cruise control by pulling the cruise control lever or by pressing the cruise control lever up or down, THEN: the vehicle maintains the set speed if possible.\\ \hline
If the driver pushes down the cruise control lever with cruise control activated up to the first resistance level, the speed set point of the cruise control is reduced by N. & IF: driver pushes down the cruise control lever with cruise control activated up to the first resistance level, THEN: the speed set point of the cruise control is reduced by N.\\ \hline
By pushing the brake or the hand brake, the cruise control is deactivated until it is activated again. & IF: the brake or the hand brake is pushed, THEN: the cruise control is deactivated until it is activated again.\\ \hline
\end{tabularx}
\label{tab:supset:ifthen:6}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 1 Example If-Then Structure.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
With activated darkness switch (only armored vehicles) the cornering light is not activated. & IF: darkness switch is activated, THEN: cornering light is not activated & 3\\ \hline
When moving the pitman arm in position turn left the vehicle flashes all left direction indicators (front left, exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1. & When moving the pitman arm in position turn left the vehicle flashes all left direction indicators (front left, exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1. & 5\\ \hline
If the driver activates the pitman arm during the three flashing cycles of tip-blinking again, only the current flashing cycle is completed. & IF: the driver activates the pitman arm during the three flashing cycles of tip-blinking, THEN: only the current flashing cycle is completed & 1\\ \hline
Hazard Warning Light: As long as the hazard warning light switch is released, all direction indicators flash synchronically. If the ignition key is in the ignition lock, the pulse ratio is bright to dark 1:1. If the ignition key is not in the lock, the pulse ratio is 1:2. & IF: hazard warning light is released, THEN: all direction indicators flash synchronically. IF: ignition key is in the ignition lock, THEN: pulse ratio is bright to dark 1:1. IF: ignition key is not in the lock, THEN: pulse ratio is 1:2. & 6\\ \hline
When hazard warning is deactivated and the pit arm is in position direction blinking left or direction blinking right, the direction blinking cycle should be released. & WHEN: hazard warning is deactivated, THEN: direction blinking cycle should be released & 5\\ \hline
If tip-blinking was activated shortly before deactivation of the hazard warning, this is not considered during the deactivation of the hazard warning & IF: tip-blinking was activated shortly before deactivation of the hazard warning, THEN: this is not considered during the deactivation of the hazard warning & 1\\ \hline
(a) The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & (a) The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & 5\\ \hline
The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & 5\\ \hline
Distance Warning: The vehicle warns the driver visually and/or acoustically if the vehicle is closer to the car ahead than allowed by the safety distance. & IF: distance warning is activated, THEN: the vehicle warns the driver visually and/or acoustically & 6\\ \hline
Emergency Brake Assist: The vehicle decelerates in critical situations to a full standstill & Emergency Brake Assist: The vehicle decelerates in critical situations to a full standstill & 5\\ \hline
If the driver pushes the cruise control lever up to the first resistance level and the cruise control is activated, the speed set point of the cruise control is increased by the value N. & IF: the driver pushes the cruise control lever up to the first resistance level, THEN: the speed set point of the cruise control is increased by the value N & 5\\ \hline
\end{tabularx}
\label{tab:eval:ifthen:1}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 4 Examples If-Then Structure.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
With activated darkness switch (only armored vehicles) the cornering light is not activated. & IF: with activated darkness switch, THEN: the cornering light is not activated. & 4\\ \hline
When moving the pitman arm in position turn left the vehicle flashes all left direction indicators (front left, exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1. & IF: when moving the pitman arm in position turn left the vehicle flashes all left direction indicators (front left,
exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1, THEN: the vehicle flashes all left direction indicators (front left, exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1. & 5\\ \hline
If the driver activates the pitman arm during the three flashing cycles of tip-blinking again, only the current flashing cycle is completed. & IF: the driver activates the pitman arm during the three flashing cycles of tip-blinking, THEN: only the current flashing cycle is completed. & 1\\ \hline
Hazard Warning Light: As long as the hazard warning light switch is released, all direction indicators flash synchronically. If the ignition key is in the ignition lock, the pulse ratio is bright to dark 1:1. If the ignition key is not in the lock, the pulse ratio is 1:2. & IF: hazard warning light switch is released, THEN: all direction indicators flash synchronically. If the ignition key is in the ignition lock, the pulse ratio is bright to dark 1:1. If the ignition key is not in the lock, the pulse ratio is 1:2. & 1\\ \hline
When hazard warning is deactivated and the pit arm is in position direction blinking left or direction blinking right, the direction blinking cycle should be released. & IF: hazard warning is deactivated and the pit arm is in position direction blinking left or direction blinking right, THEN: the direction blinking cycle should be released. & 1\\ \hline
If tip-blinking was activated shortly before deactivation of the hazard warning, this is not considered during the deactivation of the hazard warning & IF: tip-blinking was activated shortly before deactivation of the hazard warning, THEN: this is not considered during the deactivation of the hazard warning. & 1\\ \hline
(a) The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & IF: the driver activates the low beam headlights by turning the light rotary switch to position exterior lights on, THEN: the low beam headlights are activated. & 2 \\ \hline
The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & IF: the driver activates the low beam headlights by turning the light rotary switch to position exterior lights on, THEN: the low beam headlights are activated. & 2\\ \hline
Distance Warning: The vehicle warns the driver visually and/or acoustically if the vehicle is closer to the car ahead than allowed by the safety distance. & IF: distance warning is activated, THEN: the vehicle warns the driver visually and/or acoustically. & 6 \\ \hline
Emergency Brake Assist: The vehicle decelerates in critical situations to a full standstill & IF: emergency brake assist is activated, THEN: the vehicle decelerates in critical situations to a full standstill & 6\\ \hline
If the driver pushes the cruise control lever up to the first resistance level and the cruise control is activated, the speed set point of the cruise control is increased by the value N. & IF: the driver pushes the cruise control lever up to the first resistance level and the cruise control is activated, THEN: the speed set point of the cruise control is increased by the value N. & 1\\ \hline
\end{tabularx}
\label{tab:eval:ifthen:4}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 6 Examples If-Then Structure.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
With activated darkness switch (only armored vehicles) the cornering light is not activated. & IF: with activated darkness switch (only armored vehicles), THEN: the cornering light is not activated. & 2\\ \hline
When moving the pitman arm in position turn left the vehicle flashes all left direction indicators (front left, exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1. & IF: the pitman arm is moved in position turn left, THEN: the vehicle flashes all left direction indicators (front left, exterior mirror left, rear left) synchronically with pulse ratio bright to dark 1:1. & 1\\ \hline
If the driver activates the pitman arm during the three flashing cycles of tip-blinking again, only the current flashing cycle is completed. & IF: the driver activates the pitman arm during the three flashing cycles of tip-blinking, THEN: only the current flashing cycle is completed & 1\\ \hline
Hazard Warning Light: As long as the hazard warning light switch is released, all direction indicators flash synchronically. If the ignition key is in the ignition lock, the pulse ratio is bright to dark 1:1. If the ignition key is not in the lock, the pulse ratio is 1:2. & IF: the hazard warning light switch is
released, THEN: all direction indicators flash synchronically. If the ignition key is in the ignition lock, the pulse ratio is bright to dark 1:1. If the ignition key is not in the lock, the pulse ratio is 1:2. & 1\\ \hline
When hazard warning is deactivated and the pit arm is in position direction blinking left or direction blinking right, the direction blinking cycle should be released. & IF: hazard warning is deactivated and the pit arm is in position direction blinking left or direction blinking right, THEN: the direction blinking cycle should be released. & 1\\ \hline
If tip-blinking was activated shortly before deactivation of the hazard warning, this is not considered during the deactivation of the hazard warning & IF: tip-blinking was activated shortly before deactivation of the hazard warning, THEN: this is not considered during the deactivation of the hazard warning & 1\\ \hline
(a) The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & IF: the driver activates the low beam headlights by turning the light rotary switch to position exterior lights on, THEN: the low beam headlights are activated. & 2\\ \hline
The driver activates the low beam headlights by turning the light rotary switch to position exterior lights on & IF: the driver activates the low beam headlights by turning the light rotary switch to position exterior lights on, THEN: the low beam headlights are activated. & 2\\ \hline
Distance Warning: The vehicle warns the driver visually and/or acoustically if the vehicle is closer to the car ahead than allowed by the safety distance. & IF: distance warning is activated, THEN: the vehicle warns the driver visually and/or acoustically. & 6\\ \hline
Emergency Brake Assist: The vehicle decelerates in critical situations to a full standstill & IF: emergency brake assist is activated, THEN: the vehicle decelerates in critical situations to a full standstill. & 6\\ \hline
If the driver pushes the cruise control lever up to the first resistance level and the cruise control is activated, the speed set point of the cruise control is increased by the value N. & IF: the driver pushes the cruise control lever up to the first resistance level and the cruise control is activated, THEN: the speed set point of the cruise control is increased by the value N. & 1\\ \hline
\end{tabularx}
\label{tab:eval:ifthen:6}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 1 Example Modal Verbs.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer. & The maximum deviation of the pulse ratio MUST be below the cognitive threshold of a human observer.\\ \hline
\end{tabularx}
\label{tab:supset:modal:1}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 4 Examples Modal Verbs.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer. & The maximum deviation of the pulse ratio MUST be below the cognitive threshold of a human observer.\\ \hline
Direction blinking: For USA and CANADA, the daytime running light shall be dimmed by 50\% during direction blinking on the blinking side. & Direction blinking: For USA and CANADA, the daytime running light MUST be dimmed by 50\% during direction blinking on the blinking side.\\ \hline
The adaptation of the pulse ratio has to occur at the latest after two complete flashing cycles. & The adaptation of the pulse ratio MUST occur at the latest after two complete flashing cycles.\\ \hline
The duration of a flashing cycle is 1 second. & The duration of a flashing cycle MUST be 1 second.\\ \hline
\end{tabularx}
\label{tab:supset:modal:4}
\end{table}
\begin{table}[h]
\centering
\caption{Support Set: 6 Examples Modal Verbs.}
\begin{tabularx}{0.95\textwidth}{|X|X|}
\hline
\textbf{Original} & \textbf{Translation} \\\hline
The maximum deviation of the pulse ratio should be below the cognitive threshold of a human observer. & The maximum deviation of the pulse ratio MUST be below the cognitive threshold of a human observer.\\ \hline
Direction blinking: For USA and CANADA, the daytime running light shall be dimmed by 50\% during direction blinking on the blinking side. & Direction blinking: For USA and CANADA, the daytime running light MUST be dimmed by 50\% during direction blinking on the blinking side.\\ \hline
The adaptation of the pulse ratio has to occur at the latest after two complete flashing cycles. & The adaptation of the pulse ratio MUST occur at the latest after two complete flashing cycles.\\ \hline
The duration of a flashing cycle is 1 second. & The duration of a flashing cycle MUST be 1 second.\\ \hline
A flashing cycle (bright to dark) will always be completed, before a new flashing cycle can occur. & A flashing cycle (bright to dark) MUST always be completed, before a new flashing cycle can occur.\\ \hline
With subvoltage the ambient light is not available. & With subvoltage the ambient light MUST not be available.\\ \hline
\end{tabularx}
\label{tab:supset:modal:6}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 1 Example Modal Verbs.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The reaction time between the activation of the pitman arm and the beginning of the first flashing cycle should be below the cognitive threshold of a human observer. & The reaction time between the activation of the pitman arm and the beginning of the first flashing cycle MUST be below the cognitive threshold of a human observer. & 1\\ \hline
The frame rate of the camera is 60 Hz & The frame rate of the camera MUST be 60 Hz & 1\\ \hline
The resolution of the camera signal is 24 bits (8 bits each RGB). & The resolution of the camera signal MUST be 24 bits (8 bits each RGB). & 1\\ \hline
The functions of the system are classified as safety relevant in with respect to ISO 26262. & The functions of the system are classified as safety relevant in with respect to ISO 26262 & 5\\ \hline
Also after 1000 flashing cycles the cumulated deviation will not exceed 0.05s. & Also after 1000 flashing cycles the cumulated deviation will not exceed 0.05s. & 5\\ \hline
Speed Limit: The vehicle does not exceed a set speed. & The vehicle does not exceed a set speed & 5\\ \hline
A flashing cycle (bright to dark) has to be always completed, before a new flashing cycle can occur. & A flashing cycle (bright to dark) has to be always completed, before a new flashing cycle can occur. & 5\\ \hline
Low beam illuminant shall be LED. & Low beam illuminant shall be LED. & 5\\ \hline
\end{tabularx}
\label{tab:eval:modal:1}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 4 Examples Modal Verbs.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The reaction time between the activation of the pitman arm and the beginning of the first flashing cycle should be below the cognitive threshold of a human observer. & The reaction time between the activation of the pitman arm and the beginning of the first flashing cycle MUST be below the cognitive threshold of a human observer. & 1\\ \hline
The frame rate of the camera is 60 Hz & The frame rate of the camera MUST be 60 Hz & 1\\ \hline
The resolution of the camera signal is 24 bits (8 bits each RGB). & The resolution of the camera signal MUST be 24 bits (8 bits each RGB). & 1\\ \hline
The functions of the system are classified as safety relevant in with respect to ISO 26262. & The functions of the system are classified as safety relevant in with respect to ISO 26262. & 5\\ \hline
Also after 1000 flashing cycles the cumulated deviation will not exceed 0.05s. & Also after 1000 flashing cycles the cumulated deviation MUST NOT exceed 0.05s. & 1\\ \hline
Speed Limit: The vehicle does not exceed a set speed. & The vehicle does not exceed a set speed & 5\\ \hline
A flashing cycle (bright to dark) has to be always completed, before a new flashing cycle can occur. & A flashing cycle (bright to dark) MUST be always completed, before a new flashing cycle can occur. & 1\\ \hline
Low beam illuminant shall be LED. & Low beam illuminant shall be LED. & 5\\ \hline
\end{tabularx}
\label{tab:eval:modal:4}
\end{table}
\begin{table}[h]
\centering
\caption{Evaluation Results: 6 Examples Modal Verbs.}
\begin{tabularx}{0.95\textwidth}{|X|X|l|}
\hline
\textbf{Original} & \textbf{Translation} & \textbf{Class} \\\hline
The reaction time between the activation of the pitman arm and the beginning of the first flashing cycle should be below the cognitive threshold of a human observer. & The reaction time between the activation of the pitman arm and the beginning of the first flashing cycle MUST be below the cognitive threshold of a human observer. & 1\\ \hline
The frame rate of the camera is 60 Hz & The frame rate of the camera MUST be 60 Hz & 1\\ \hline
The resolution of the camera signal is 24 bits (8 bits each RGB). & The resolution of the camera signal MUST be 24 bits (8 bits each RGB). & 1\\ \hline
The functions of the system are classified as safety relevant in with respect to ISO 26262. & The functions of the system are classified as safety relevant in with respect to ISO 26262. & 5\\ \hline
Also after 1000 flashing cycles the cumulated deviation will not exceed 0.05s. & Also after 1000 flashing cycles the cumulated deviation MUST not exceed 0.05s. & 1\\ \hline
Speed Limit: The vehicle does not exceed a set speed. & The vehicle does not exceed a set speed. & 5\\ \hline
A flashing cycle (bright to dark) has to be always completed, before a new flashing cycle can occur. & A flashing cycle (bright to dark) MUST always be completed, before a new flashing cycle can occur. & 1\\ \hline
Low beam illuminant shall be LED. & Low beam illuminant MUST be LED. & 1\\ \hline
\end{tabularx}
\label{tab:eval:modal:6}
\end{table}
\section{Introduction}\label{sec:intro}
The importance of mass loss in the evolution of massive stars has been increasingly recognized over the past 20 years
(e.g. \citealt{b1}). However, the radiatively-driven winds \citep{b2} of OB stars are host to a number of forms of instability
(e.g. \citealt{b12}) and other competing physical
processes which are not yet fully accounted for in models. Thus an important piece of the puzzle toward a global understanding of these stars and
of their characteristically strong outflows is still missing. This is evidenced by different forms of spectral variability in wind-sensitive lines.
First, there are stochastic variations, which can occur over very short timescales (minutes). These are believed to be related to instability mechanisms,
such as clumping, and can be found notably atop the broad emission lines of Wolf-Rayet stars (e.g. \citealt{b45}).
On the other hand, there are also cyclical (or quasi-periodic) variations which occur typically over longer timescales (for a complete review of
the various forms cyclical variations can take, see \citealt{b47}). One example consists of the so-called
``periodic absorption modulations", or PAMs, observed in a number of OB stars (e.g. \citealt{b46}) and which manifest themselves as optical depth modulations
in the absorption troughs of ultraviolet (UV) P Cygni profiles. They can show a ``phase-bowing", appearing at intermediate velocities and
bending slightly upwards in the dynamic spectra, and therefore occurring quasi-simultaneously at all velocities shortly thereafter
(as in HD~64760, \citealt{b14}). PAM variability occurs on intermediate timescales (hours) and its physical cause is not known.
In parallel, one of the most common forms of cyclical variability among OB stars is the presence of
so-called ``discrete absorption components" (DACs). These features
are formed in the UV resonance lines of hot massive stars and appear as narrow, blueward-travelling absorption structures. Their progression from
low to near-terminal velocity over time distinguishes this form of variability from the aforementioned PAMs.
As was first shown in time series of IUE spectra \citep{y1}, DACs recur cyclically on longer timescales
(days) and at relatively well-constrained periods.
These timescales were found to be correlated with the projected rotational velocity ($v \sin i$), suggesting
that these variations are rotationally modulated \citep{b26}.
DACs are thought to be present in all OB stars. Indeed, narrow absorption
components (NACs; narrow absorption features typically found near terminal velocity), believed to be snapshots of DACs, are
found in nearly all massive stars observed by IUE \citep{b13}.
However, this does not mean that all DACs are identical. Their depths vary from one star to another (they can even be opaque),
and it is possible to find more than one DAC at
a time in single observations \citep{b4}.
Because they span the full range of velocities over time, it is believed that they are caused by large-scale
azimuthal structures extending from the base of the wind all the way to its outer regions \citep{y2}. \citet{b16} showed that a perturbation in the
photosphere could lead to co-rotating interaction regions (CIRs), although the physical nature of this perturbation is not yet known.
This model seems consistent with the DAC phenomenon and leads to promising simulated
spectral signatures. The goal of this project is to determine what physical process constitutes the origin of DACs. Obviously, there are far-reaching
implications for the general study of massive stars, since DACs are believed to be common to all OB stars.
\begin{table*}
\caption[DAC star sample]{Sample of stars used for this study; spectral types are obtained from \citet{b3} and references therein.
$N_{\textrm{obs}}$ corresponds to the total number of independent observations for each star, $\Delta t_{\textrm{E}}$, $\Delta t_{\textrm{N}}$
and $\Delta T_{\textrm{max}}$ correspond respectively to the average individual total exposure time for ESPaDOnS and NARVAL,
and the maximum time elapsed between the first and last observation of a star on any given night (N/A for stars with only one observation
per night).}\label{tab:sample}
\begin{tabular}{|r|l|l|c|c|r|r|r|}
\hline
HD & Name & Spectral Type & $m_{V}$ & $N_{\textrm{obs}}$ & $\Delta t_{\textrm{E}}$ & $\Delta t_{\textrm{N}}$ & $\Delta T_{\textrm{max}}$\\
& & & & & (s) & (s) & (d) \\
\hline
24912 & $\xi$ Per & O7.5 III(n)((f)) & 4.06 & 44 & 360 & $\sim1800$ & 0.186 \\
30614 & $\alpha$ Cam & O9.5 Ia & 4.30 & 11 & 560 & 920 & 0.037 \\
34656 & & O7 II(f) & 6.80 & 1 & 2600 & - & N/A \\
36861 & $\lambda$ Ori A & O8 III((f)) & 3.30 & 20 & $\sim200$ & $\sim400$ & 0.039 \\
37128 & $\epsilon$ Ori & B0 Ia & 1.70 & 70 & 40 & $\sim160$ & 0.122 \\
47839 & 15 Mon & O7 V((f)) & 4.64 & 16 & 640 & $\sim1600$ & 0.035 \\
64760 & & B0.5 Ib & 4.23 & 9 & 440 & - & 0.033 \\
66811 & $\zeta$ Pup & O4 I(n)f & 2.25 & 30 & 80 & - & 0.078 \\
149757 & $\zeta$ Oph & O9.5 V & 2.58 & 65 & 100 & 180 & 0.061 \\
203064 & 68 Cyg & O7.5 III:n((f)) & 5.04 & 8 & 980 & $\sim2000$ & 0.053 \\
209975 & 19 Cep & O9.5 Ib & 5.11 & 33 & 1000 & 1800 & 0.093 \\
210839 & $\lambda$ Cep & O6 I(n)fp & 5.08 & 26 & - & 2640 & N/A \\
214680 & 10 Lac & O9 V & 4.88 & 36 & 400 & $\sim2000$ & 0.051 \\
\hline
\end{tabular}
\end{table*}
The two leading hypotheses to explain DACs are magnetic fields and non-radial pulsations (NRPs).
However, both processes present a number of challenges when it comes to explaining DACs. First, based on the statistics
of the Magnetism in Massive Stars (MiMeS) survey, less than 10\% of all massive stars are inferred to harbour detectable
magnetic fields \citep{b15a}. This is obviously a problem since DACs are thought to be common to all OB stars. On the other
hand, a pulsational origin for DACs might also be problematic, since one would expect a succession of brighter and darker
areas on the photosphere, whereas \citet{b16} specifically identify bright spots as the possible cause for DACs.
Moreover, experiments with alternating bright and dark regions, meant to simulate the brightness distribution of low-order NRPs,
failed to reproduce DAC-like variations (Owocki, priv. comm.).
On the other hand, rotational modulations (RMs; analogous to PAMs) have been modelled self-consistently with a 3D radiative transfer code
using NRPs in HD~64760 \citep{lobel}, a star possessing DACs; however, the NRPs produce the RMs, while the DACs
are created by introducing bright spots.
Finally, DAC recurrence timescales are deemed to be incompatible with pulsational periods and it has been suggested that this
problem can only be solved through complex mode superpositions \citep{b40}.
This paper investigates the simplest form
of the first case: that of a purely dipolar large-scale magnetic field, inclined relative to the rotation axis.
Indeed, most massive stars are thought to produce two DACs per rotational period \citep{b4}, so this configuration
seems like a rather natural fit. Moreover, most detected magnetic fields in OB stars are essentially dipolar, and follow the oblique
rotator model \citep{b15}. This is expected, since large-scale magnetic fields in massive stars are believed to be of fossil origin,
relaxing into a dipolar configuration \citep{z1,z2}. On the other hand, relatively weak magnetic fields, possibly below the
threshold of detection for most MiMeS observations, could still introduce a significant modulation of the winds of OB stars.
In this paper, we examine a sample of 13 stars well known to exhibit DACs. The sample is described in detail in Section~\ref{sec:samp}.
In Section~\ref{sec:obs}, we describe
the high-resolution spectropolarimetric observations of these stars, as well as the instruments on which they were obtained.
Section~\ref{sec:lsd} outlines the least-squares deconvolution (LSD) procedure used to maximize the signal-to-noise ratio of the Stokes
V profiles to search for Zeeman signatures.
In Section~\ref{sec:diag}, we present the various diagnostics used to perform the most precise magnetometry
ever obtained for this class of stars. Section~\ref{sec:notes}
contains notes on individual stars, while Section~\ref{sec:disc} discusses and analyzes the results in detail, together with the conclusions of this
study and pointers for future investigations.
\section{Sample}\label{sec:samp}
Thirteen OB stars (with spectral types ranging from O4 to B0.5, and luminosity classes from V to Ia,
see Table~\ref{tab:sample}) were selected to form this sample based on
two main criteria: documented DAC behaviour, and available high-quality data.
All stars selected for this sample are well known to exhibit the DAC phenomenon and were extensively studied as such: 9 stars were studied by
\citet{b4}, $\zeta$ Pup was investigated by \citet{b7}, while $\zeta$ Oph was the subject of a paper by \citet{b8}. Finally, the two B supergiants
($\epsilon$ Ori and HD~64760) were
studied by \citet{b11}.
The suspected ubiquitous nature of DACs indicates that the physical process causing them should be common
to all OB stars. Therefore, if this process involves large-scale dipolar magnetic fields, we expect to detect such fields in most
of the stars of this sample.
Furthermore, data accessibility was one of the key factors in choosing this sample. Indeed, these stars
were selected because available archival data (high-resolution spectropolarimetry) related to the MiMeS Project allow us to conduct
high-precision magnetic measurements and compile a self-consistent dataset.
The stellar and wind parameters of all stars in the sample are presented in Table~\ref{tab:bp}.
For consistency with \citet{b4}, most of the values we use are taken from that paper.
Thus, for the 11 O stars, the mass-loss
rates are obtained by applying the empirical prescription of \citet{b6}, which relies on radio free-free emission and H$\alpha$ measurements using
unclumped models. As for the 2 B stars, mass-loss rates are taken from \citet{b10} (based on optical/UV spectroscopy).
Comparison of the adopted stellar and wind parameters with more modern values (e.g. \citealt{plouc1, plouc2, plouc3, plouc4})
yields only minor differences in $T_{\textrm{eff}}$ (typically about 1 kK, $\sim 5$\%), $R_*$ (a few $R_{\odot}$, $\sim 20$\%)
and $v_{\infty}$ (essentially identical).
For the mass-loss rates, modern values typically differ from one another
by a factor of a few, up to a full order of magnitude, depending on each star. In general, our
values are consistent with the lower end of that range.
\section{Observations}\label{sec:obs}
The observations were obtained at the Canada-France-Hawaii Telescope (CFHT) on the
ESPaDOnS instrument, and on its sister instrument, NARVAL, installed at T\'{e}lescope Bernard Lyot (TBL).
Some observations were obtained as part of the Large Programs (LPs) awarded to MiMeS on both instruments, while a significant part of the dataset
was obtained as part of individual PI programs (led by V. Petit, C. Neiner, E. Alecian, H. Henrichs and J.-C. Bouret).
Both of these instruments
are high-resolution ($R \sim 65,000$) fibre-fed \'{e}chelle spectropolarimeters. Each exposure consists of 4 sub-exposures corresponding to different
angles of the Fresnel rhomb retarders,
which are then combined in different ways to obtain both the I (unpolarized) and V (circularly polarized) Stokes parameters, as well
as two diagnostic nulls (which have the same noise level as the V spectrum, but no stellar magnetic signal, \citealt{b17}). The spectral coverage is essentially continuous
from about 360 to 1000 nm.
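Purely as a schematic illustration (the actual demodulation used for ESPaDOnS and NARVAL is the ratio method described by \citealt{b17}, and depends on the exact sequence of rhomb positions), if the four measurements $i_1,\dots,i_4$ recorded $I+V$, $I-V$, $I-V$ and $I+V$ respectively, one could form
\begin{equation}
\frac{V}{I} \simeq \frac{(i_1-i_2)-(i_3-i_4)}{i_1+i_2+i_3+i_4}, \qquad
\frac{N}{I} \simeq \frac{(i_1-i_2)+(i_3-i_4)}{i_1+i_2+i_3+i_4},
\end{equation}
so that a real polarization signal cancels in $N$ by construction, while spurious signals that do not reverse with the retarder sequence cancel in $V$.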
The reduction was performed using the Libre-ESpRIT package at the telescope, and the spectra were then normalized to the continuum. Appendix A contains
a summary of all the observations.
The use of these observations marks a significant improvement in the study of the role of magnetic fields in the generation of wind variability
because of both their high resolution and high signal-to-noise ratio (SNR). They constitute the highest-quality
dataset compiled to date for the purpose of magnetometry on OB stars. Furthermore, the extensive time coverage obtained for a number
of stars in the sample can provide extremely tight constraints on the geometry of any surface magnetic field present (see Section~\ref{sec:diag}).
In total, this dataset comprises 381 spectra, for an average of nearly 30 spectra per star (HD 34656 has only 1 observation,
while $\epsilon$ Ori has 70). The data were acquired between 2006 and 2013, with a typical peak SNR of over 1000 per CCD pixel at a wavelength
of around 550 nm.
\section{Least-Squares Deconvolution}\label{sec:lsd}
In order to improve the significance of potential Zeeman signatures in the Stokes V profile, indicative of the presence of a magnetic field,
LSD \citep{b17} was used to effectively
deconvolve each spectrum to obtain a single, high-SNR line profile. This was carried out using the latest implementation of iLSD \citep{b18}.
This procedure requires the use of a specific ``line mask" for each star,
which is a file containing all the necessary information about the lines whose signal will be added:
central wavelength, depth and Land\'{e} factor. First, to create such a file, a line list is obtained from the Vienna Atomic Line Database (VALD, \citealt{b41}),
by inputting the effective temperature of the star, and choosing a line depth threshold (0.01 in this case). Then, the information contained in the line list is used
to create a crude preliminary mask, which can then be filtered and adjusted. This means that some lines are removed (e.g. lines which do not actually appear
in the spectra, lines heavily contaminated by telluric absorption, hydrogen lines, owing to their particular shape and behaviour, as well
as lines blended with hydrogen lines), while the depths of the remaining lines can be adjusted to better reproduce
the star's spectrum. This procedure also ensures that uncertainties in $T_{\rm{eff}}$ have little impact on the final mask.
Several tests were made with sub-masks to determine which of the remaining lines should be included or not.
In the end, masks including helium and metallic lines were used, as the helium lines provided most of the signal and did not alter the shape
of the mean line profile
significantly (although they do introduce extra broadening). The LSD profiles were then extracted using these masks, without applying a regularisation correction
\citep{b18} since it did not yield significant gain given the already high SNR of the spectra.
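Schematically, LSD models the observed spectrum as the superposition of shifted, weighted copies of a common mean profile and solves for that profile by weighted linear least squares. The following toy sketch (in Python, with simplified nearest-bin weighting) only illustrates this principle; it is not the iLSD implementation used here.
\begin{verbatim}
import numpy as np

def lsd_profile(vel, spec, sigma, line_vels, weights, v_grid):
    # Toy least-squares deconvolution: model the observed (velocity-space)
    # spectrum as spec ~ M @ Z, where each mask line contributes a shifted,
    # weighted copy of the mean profile Z, and solve for Z with weights
    # 1/sigma^2. Schematic only, not the published iLSD code.
    M = np.zeros((vel.size, v_grid.size))
    for v0, w in zip(line_vels, weights):
        dv = vel - v0                      # pixel velocity relative to line
        inside = (dv >= v_grid[0]) & (dv <= v_grid[-1])
        j = np.argmin(np.abs(v_grid[None, :] - dv[inside, None]), axis=1)
        M[np.where(inside)[0], j] += w     # nearest-bin assignment
    W = 1.0 / sigma**2
    A = (M * W[:, None]).T @ M             # M^T S^2 M
    b = (M * W[:, None]).T @ spec          # M^T S^2 y
    return np.linalg.lstsq(A, b, rcond=None)[0]   # mean LSD profile Z
\end{verbatim}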
Another measure taken to improve the signal was to co-add the LSD profiles of spectra of each star taken on the same night. The time intervals between the first
and last exposure of a given star on a given night are systematically less than 10\% of the inferred stellar rotational period; therefore,
there was no serious risk of smearing and thereby weakening the signal (see Table~\ref{tab:sample}). A mosaic of sample nightly-averaged LSD profiles for each
of the stars is presented in Fig.~\ref{fig:lsdm}.
\begin{figure*}
\begin{center}
\subfigure[$\xi$ Per LSD profile.]{\includegraphics[width=2.2in]{lsd_xiper_new}}
\subfigure[$\alpha$ Cam LSD profile.]{\includegraphics[width=2.2in]{lsd_alphacam_new}}
\subfigure[HD~34656 LSD profile.]{\includegraphics[width=2.2in]{lsd_hd34656_new}}
\subfigure[$\lambda$ Ori A LSD profile.]{\includegraphics[width=2.2in]{lsd_lamoria_new}}
\subfigure[$\epsilon$ Ori LSD profile.]{\includegraphics[width=2.2in]{lsd_epsori_new}}
\subfigure[15 Mon LSD profile.]{\includegraphics[width=2.2in]{lsd_15mon_new}}
\subfigure[HD~64760 LSD profile.]{\includegraphics[width=2.2in]{lsd_hd64760_new}}
\subfigure[$\zeta$ Pup LSD profile.]{\includegraphics[width=2.2in]{lsd_zetapup_new}}
\subfigure[$\zeta$ Oph LSD profile.]{\includegraphics[width=2.2in]{lsd_zetaoph_new}}
\subfigure[68 Cyg LSD profile.]{\includegraphics[width=2.2in]{lsd_68cyg_new}}
\subfigure[19 Cep LSD profile.]{\includegraphics[width=2.2in]{lsd_19cep_new}}
\subfigure[$\lambda$ Cep LSD profile.]{\includegraphics[width=2.2in]{lsd_lambdacep_new}}
\subfigure[10 Lac LSD profile.]{\includegraphics[width=2.2in]{lsd_10lac_new}}
\caption{Typical LSD profiles for all stars in the sample. In each plot, the red line (top) is the Stokes V profile,
while the blue line (middle) is a diagnostic null. Finally, the
black line (bottom) is the Stokes I profile. The dotted lines represent the integration range for each star.
We can see that no perceptible signal is found in any of the V profiles.}
\label{fig:lsdm}
\end{center}
\end{figure*}
\section{Magnetic field diagnosis}\label{sec:diag}
The LSD profiles were used to assess magnetic fields via two techniques: direct measurement diagnostics and Bayesian inference-based
modeling.
\subsection{Direct measurement diagnostics}
Using the nightly-averaged profiles, as well as the individual ones, the disc-averaged longitudinal magnetic field ($B_{z}$) was computed using
the first-order moments method (e.g. \citealt{b42}). The integration ranges were chosen carefully, after a few trial calculations to determine how to minimize
the error bars without losing any potential signal. Visually, the limits correspond loosely to the zero-points of the second derivative of the Stokes I profiles.
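For reference, the commonly quoted form of this first-order moment (e.g. \citealt{b42}), with the velocity $v$ and the speed of light $c$ in km\,s$^{-1}$ and the mean wavelength $\lambda_0$ of the LSD profile in nm, is
\begin{equation}
B_z = -2.14\times 10^{11}\,
\frac{\int v\, V(v)\, \mathrm{d}v}{\lambda_0\, g_0\, c \int \left[1-I(v)\right] \mathrm{d}v} \ \mathrm{G},
\end{equation}
where $g_0$ is the mean Land\'{e} factor of the LSD profile.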
Nightly longitudinal field measurements are listed in Table~\ref{tab:app}. There are no significant detections. Not only do the measurements appear normally
distributed within the error bars, but these error bars are quite small in some cases and provide very tight constraints (e.g. a 4 G error bar for 10 Lac
on 17 October 2007). Furthermore, the longitudinal fields are also measured using the diagnostic nulls as a sanity check. On any given night, the error
bars for the longitudinal fields measured from the V profile are consistent with those measured from the nulls, and the distributions of $B_{z}/\sigma_{B_{z}}$
obtained from each profile are essentially identical, which suggests that the V profile does not contain any more signal than the diagnostic nulls.
$\chi^2$ diagnostics are also performed by comparing both the V profile and the diagnostic null
to the null hypothesis ($B = 0$, therefore $V = 0$ and $N = 0$), and detection probabilities are derived from these values \citep{b17}. These
calculations are performed both within the LSD profile, as well as in the continuum. A detection probability below 99.9\% is considered
as a non-detection, a marginal detection possesses a detection probability between 99.9\% and 99.999\% and a definite detection has a detection probability
of over 99.999\%
(for a discussion of these thresholds, see \citealt{b17}). The 400+ individual and nightly-averaged V profiles are all
non-detections, except for 5 cases within the profile (1 in $\zeta$ Oph, 3 in 19 Cep and
1 in 10 Lac) and 1 in the continuum (in $\xi$ Per), all 6 of which are marginal
detections.
For the ones inside the line, except for a nightly-averaged observation in 10 Lac, the
other 4 occurrences appear in individual observations, which have a lower SNR. This could be due to somewhat noisier profiles, and since these are relatively
isolated cases (for all 3 stars there are many more observations, all of which are non-detections), they are not considered significant. As for the
continuum marginal detection, it is also from a single observation and could be due to noise, as well as slight telluric contamination. On the whole,
these results are largely consistent with those for the diagnostic nulls, further suggesting that there are no real detections.
In summary, both of these direct measurement diagnostics lead to the same conclusion, i.e. that no magnetic field is observed in any of these stars.
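As a schematic illustration of the detection criterion used above (simplified with respect to the full procedure of \citealt{b17}), the detection probability can be computed in Python from the $\chi^2$ of the V (or N) profile against the null hypothesis:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def detection_probability(V, sigma):
    # Schematic chi^2 detection criterion: probability that the observed
    # V (or N) profile is inconsistent with the null hypothesis V = 0.
    chi2_obs = np.sum((V / sigma) ** 2)
    prob = chi2.cdf(chi2_obs, df=V.size)   # = 1 - false-alarm probability
    if prob > 0.99999:
        return prob, "definite detection"
    if prob > 0.999:
        return prob, "marginal detection"
    return prob, "non-detection"
\end{verbatim}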
\subsection{Bayesian inference}
Additionally, to increase the SNR it is also possible to take advantage of the time resolution provided by repeated measurements. Indeed,
taking into account the oblique dipole rotator model \citep{b19}, data taken at different times should allow the surface magnetic field to be viewed from different
perspectives, thus lifting
some of the degeneracy associated with the geometric parameters of the magnetic field, should it exist. Therefore, using the technique developed by
\citet{b20}, a fully self-consistent Bayesian inference method compares the observed profiles in the Stokes V and N parameters to synthetic Zeeman profiles
for a grid of field strength and geometry parameters. The rotational phase of the observations is also allowed to vary freely, since rotational periods are unknown.
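In outline, a toy version of this grid-based inference (with flat priors, a Gaussian likelihood and a user-supplied forward model; heavily simplified with respect to the published method) could look as follows:
\begin{verbatim}
import numpy as np

# Toy grid-based Bayesian inference for the dipolar field strength B_d:
# flat priors, a Gaussian chi^2 likelihood, and a user-supplied forward
# model synth_V(Bd, i, beta, phase) returning a synthetic Stokes V profile.
def posterior_Bd(observations, synth_V, B_grid, i_grid, beta_grid, phi_grid):
    logL = np.zeros((len(B_grid), len(i_grid), len(beta_grid)))
    for bi, Bd in enumerate(B_grid):
        for ii, inc in enumerate(i_grid):
            for gi, beta in enumerate(beta_grid):
                total = 0.0
                for V, sigma in observations:
                    # unknown rotational phase: marginalize it separately
                    # for every observation
                    like_phi = [np.exp(-0.5 * np.sum(
                        ((V - synth_V(Bd, inc, beta, p)) / sigma) ** 2))
                        for p in phi_grid]
                    total += np.log(np.mean(like_phi) + 1e-300)
                logL[bi, ii, gi] = total
    post = np.exp(logL - logL.max())
    pdf_Bd = post.sum(axis=(1, 2))             # marginalize over i and beta
    return pdf_Bd / np.trapz(pdf_Bd, B_grid)   # normalized PDF of B_d
\end{verbatim}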
In order to produce synthetic Zeeman profiles to be used for this Bayesian technique,
it is necessary to estimate the value of the projected rotational
velocity of each star, as well as its macroturbulent velocity. These values are sometimes degenerate and difficult to determine with great precision.
Instead of using previously published values, new values of $v \sin i$ were measured for all stars using the Fourier transform method
(e.g. \citealt{b30,z3}).
To this effect, synthetic spectra were computed with \textsc{synth}3 \citep{b43}, and the O\textsc{ii} $\lambda 4367$, O\textsc{iii} $\lambda 5508$,
O\textsc{iii} $\lambda 5592$ and C\textsc{iv} $\lambda 5801$ lines were used to compare them to the data. In most cases (10/13), we obtain somewhat
($\sim 20$\%) lower values of the projected
rotational velocity than those reported in the literature \citep{b3}, while for the remaining stars we obtain comparable or slightly higher values.
This can be expected, since the line broadening is no longer solely attributed to rotation with this method.
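In its simplest form, the method locates the first zero $\sigma_1$ of the Fourier transform of the line profile and uses the classical relation $v \sin i \approx 0.66/\sigma_1$ (the constant corresponding to a linear limb-darkening coefficient of $\approx 0.6$). A minimal sketch, assuming a uniform velocity grid and a rotation-dominated profile, is:
\begin{verbatim}
import numpy as np

def vsini_fourier(velocity, profile):
    # Estimate v sin i from the first minimum of the Fourier transform of
    # a line profile (given as 1 - I on a uniform velocity grid in km/s),
    # using v sin i ~ 0.66 / sigma_1. Schematic only: no zero padding,
    # smoothing or error estimate.
    dv = velocity[1] - velocity[0]
    ft = np.abs(np.fft.rfft(profile))
    freq = np.fft.rfftfreq(profile.size, d=dv)   # cycles per (km/s)
    k = 1
    while k + 1 < ft.size and ft[k + 1] < ft[k]:
        k += 1                                   # walk down to first minimum
    sigma1 = freq[k]
    return 0.66 / sigma1
\end{verbatim}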
Once the value of $v \sin i$ was determined, the LSD profiles (rather than individual lines, since these are the data we are looking to model)
were then compared to synthetic Voigt profiles to refine the value of $v \sin i$ and determine $v_{mac}$.
Because this process involved some level of degeneracy, the uncertainty on the obtained values could not be determined in a systematic way,
but it is conservatively estimated to be about 10-20\%. While this may seem large,
tests using different pairs of values ($v \sin i$ and the associated $v_{mac}$)
indicate that such a precision is quite sufficient, as errors of this magnitude do not significantly affect the
results of the Bayesian analysis. A summary of these velocity measurements
is given in Table~\ref{tab:bp}, which also contains other relevant physical parameters.
The macroturbulent velocities are likely to be systematically overestimated, since the extra broadening from the helium lines behaves
in a way similar to macroturbulence. However, once again, extensive testing on our data has shown that this
overestimation does not significantly alter the results of the Bayesian inference.
Ultimately, we modeled the observed I, V and N profiles to obtain probability
density functions (PDFs) for 3 variables: the dipolar field strength ($B_{d}$), the inclination angle of the rotational
axis ($i$) and the obliquity angle between the magnetic field axis and the rotational axis ($\beta$). The joint PDF can also be marginalized to obtain a PDF for each variable individually. However, it should be noted that the latter two geometric parameters cannot be constrained in the case
of non-detections \citep{b20}. Figure~\ref{fig:pdfs} shows the marginalized PDFs for three representative stars as a function of $B_{d}$.
We can see that for each star, the PDF peaks at a value
of 0, which is consistent with a non-detection. Additionally, a similar analysis was performed on the diagnostic null, with consistent results.
Therefore, we obtained no information about the putative field's geometry: we consider the only parameter of interest for this study to be the
strength of the dipolar field. Since we only have non-detections, we can place upper limits on the dipolar field strength by using
the 95.4\% confidence region upper boundaries (which correspond to the limit above which we would expect the field to be detected; \citealt{b20}).
These upper limits (noted as $B_{d,\textrm{max}}$) are listed in Table~\ref{tab:bp} (as well as the 68.3\% confidence level upper limits, for
comparison purposes). The highest upper limit (95.4\% interval) that we derive is that
of HD~34656 (359 G). This is expected, since there was only a single observation for that star, and therefore a lower SNR. The tail of the PDF falls
off less abruptly as well (see Fig.~\ref{fig:pdfs}), since statistically speaking,
the observation could correspond to a particular phase where the field configuration is not suitable for detection. It should be remembered that this technique
aims to take advantage of time series of LSD profiles; hence better constraints and a more peaked PDF could be obtained for this star with higher SNR observations
and more extensive time coverage. All the other stars with
upper limits over 100 G (5 stars) have very high projected rotational velocities, which explains their poorer constraints. However, for the remaining 7 stars, we obtain
extremely tight constraints, in particular in the case of 10 Lac (23 G). These values represent by far the tightest constraints ever obtained for any sample
of OB stars (see Fig.~\ref{fig:hist} for a histogram of these upper limits).
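The way such an upper limit is read off a marginalized PDF can be illustrated with a short numerical sketch; the grid and the toy half-Gaussian PDF below are purely illustrative, and the actual implementation of \citet{b20} may differ in detail.
\begin{verbatim}
import numpy as np

# Toy marginalized PDF of B_d on a grid (in G); a half-Gaussian of width 30 G
# standing in for the output of the Bayesian inference.
b = np.linspace(0.0, 1000.0, 2001)
pdf = np.exp(-0.5 * (b / 30.0) ** 2)
pdf /= np.trapz(pdf, b)                      # normalize to unit probability

# Cumulative probability, then the 68.3% and 95.4% credible upper limits.
cdf = np.concatenate(([0.0],
                      np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(b))))
for level in (0.683, 0.954):
    print(level, b[np.searchsorted(cdf, level)])   # ~30 G and ~60 G here
\end{verbatim}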
However, even though fast-rotating stars have poorer constraints on the strength of their hypothetical dipolar field, their rotation itself suggests that they
do not possess such a field (or, if so, a weak one).
Indeed, a majority of magnetic OB stars are slow rotators. Moreover, all effectively single magnetic O stars are very slow rotators, with periods ranging
from about one week to decades (e.g. \citealt{b22}). This slow rotation is thought to be a consequence of the magnetic field, which contributes
to removing angular momentum from the star. This characteristic does not apply to our sample, in which nearly half (6/13) of the stars have projected rotational
velocities of over 200 km/s. We can calculate a typical spindown timescale for a given magnetic field strength (see Eq. 8 of \citealt{asif}).
For example, if we perform that calculation on the supergiant HD~64760 using the 95.4\% interval upper limit on the strength of the field,
we get a spindown timescale of just under a million years, which seems incompatible with its projected
rotational velocity of 250 km/s.
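For context, a rough order-of-magnitude version of that estimate can be written down from the commonly used magnetic braking scaling (the exact expression in Eq.~8 of \citealt{asif} may differ in its numerical factors),
\begin{equation}
\tau_{\mathrm{spin}} \sim \frac{3}{2}\, k\, \frac{M}{\dot{M}} \left( \frac{R_{*}}{R_{\mathrm{A}}} \right)^{2} \sim \frac{3}{2}\, k\, \frac{M}{\dot{M}}\, \eta_{*}^{-1/2},
\end{equation}
\noindent where $k$ is the dimensionless moment-of-inertia factor and $R_{\mathrm{A}} \approx \eta_{*}^{1/4} R_{*}$ is the Alfv\'{e}n radius. With illustrative (assumed) values for HD~64760 of $k \approx 0.1$ and $M \approx 20\,M_{\odot}$, together with $\dot{M} = 10^{-6}\,M_{\odot}$/yr and $\eta_{*} = 5.37$ from Table~\ref{tab:bp}, this indeed gives $\tau_{\mathrm{spin}}$ of order $10^{6}$ yr.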
\begin{figure}
\begin{center}
\subfigure[$B_{d}$ PDF for 10 Lac.]{\includegraphics[width=3.4in]{10lac_alex_pdf_new}}
\subfigure[$B_{d}$ PDF for $\alpha$ Cam.]{\includegraphics[width=3.4in]{alphacam_alex_pdf_new}}
\subfigure[$B_{d}$ PDF for HD~34656.]{\includegraphics[width=3.4in]{hd34656_alex_pdf_new}}
\caption{Logarithm of the probability density functions of the dipolar field strength ($B_{d}$) for three representative stars (10 Lac with the best constraints
at the top, $\alpha$ Cam with typical constraints in the middle, and HD~34656 with the worst constraints at the bottom)
as derived from the Bayesian inference technique.
For each plot, the dashed line delimits the 68.3\% confidence interval,
while the dotted line delimits the 95.4\% confidence interval.}
\label{fig:pdfs}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.2in]{bmax_hist_new1.eps}
\caption{Histogram of the $B_{d, \textrm{max}}$ (95.4\% interval upper limit) values derived from the Bayesian analysis
(Table~\ref{tab:bp}). Most stars have an upper limit below 120 G.}
\label{fig:hist}
\end{center}
\end{figure}
Another output of the Bayesian analysis is the \textit{odds ratio}. This value represents the ratio of the likelihoods of each of the two hypotheses to be evaluated:
$H_{0}$, corresponding to no magnetic field, and $H_{1}$, corresponding
to a globally organized dipolar magnetic field. According to \citet{b44},
we would need an odds ratio below 1/3 to say that there is weak evidence in favour of the magnetic hypothesis.
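In this framework, the odds ratio is the ratio of the marginal likelihoods (Bayesian evidences) of the two hypotheses; schematically (the detailed priors and parametrization are those of \citealt{b20} and are not reproduced here),
\begin{equation}
\mathrm{odds}(H_{0}/H_{1}) = \frac{P(D \mid H_{0})}{P(D \mid H_{1})}, \qquad
P(D \mid H_{1}) = \int P(D \mid \theta, H_{1})\, p(\theta \mid H_{1})\, \mathrm{d}\theta,
\end{equation}
\noindent where $D$ denotes the observed Stokes V (or N) profiles and $\theta$ gathers the model parameters ($B_{d}$, $i$, $\beta$ and the unknown rotational phases) with prior $p(\theta \mid H_{1})$.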
This ratio has been computed for each star (for the individual nightly observations, as well as for the entire
dataset of a given star).
For all V profiles, we get $\mathrm{odds}(H_{0}/H_{1}) > 1$, except for two nightly profiles (one for $\epsilon$ Ori and one for $\xi$ Per), and even these do not
fall below 0.68. Typical values for the joint datasets range between 1 and 10. None of the stars yield odds ratios favouring the magnetic hypothesis.
These results are also consistent with the odds ratios obtained from the null spectra.
It should be noted that this approach relies on a certain stability of the field. In particular, the geometry and strength of the dipole cannot undergo
significant changes during the period of observation. On the other hand, this method is insensitive to any drift of the dipole in phase (e.g. precession
of the magnetic axis around the rotation axis at a non-uniform rate). We assume that the
geometry of the field remains stable over timescales of at least a few years given the temporal baseline of our observations; this assumption
is found to be justified in intermediate-mass and massive stars (e.g. \citealt{Wade, Grun, Silv}). In any case, for a majority of stars in the sample,
most observations are grouped within a few months, periods over which secular changes in the field geometry would not be important.
Once again, this analysis supports the view that no magnetic fields are observed, but further allows us to compute quantitative upper limits on the surface
dipole component, necessary for evaluating the potential influence on the stellar wind.
\begin{table*}
\caption[Stellar parameters and results]{Stellar and magnetic parameters of the stars in the sample. Terminal wind
velocities are obtained from \citet{b3} and references therein, as well as the previously
published values of the projected rotational velocity (in parentheses). New values of $v \sin i$ obtained from the Fourier transform method
(and refined by fitting the profiles) are reported as well.
Nine stars of the sample are studied by \citet{b4} (a) and \citet{b5} (b),
and all
their other properties were obtained from these references (in particular, mass-loss rates are obtained using the empirical relation
of \citealt{b6}).
For the B supergiants ($\epsilon$ Ori and HD~64760), \citet{b10} (c) provide the radii and mass-loss rates, while the remaining
parameters are obtained from \citet{b11} (d). Finally, \citet{b6} (e) provide the radii, mass-loss rates and effective
temperatures of $\zeta$ Pup and $\zeta$ Oph; \citet{b7} (f) detail the DAC recurrence for the former and
\citet{b8} (g) do the same for the latter. }\label{tab:bp}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|l|}
\hline
Name & $R_{*}$ & $T_{\mathrm{eff}}$ & $\dot{M}$ & $v_{\infty}$ & $v \sin i$ & $v_{mac}$ & $P_{\mathrm{max}}$ & $t_{\mathrm{DAC}}$ & $B_{d,\textrm{max}}$ &
$B_{d,68.3\%}$ & $\eta_{*,\textrm{max}}$ & Ref.\\
& ($R_{\odot}$) & (kK) & ($M_{\odot}$/yr) & (km/s) & (km/s) & (km/s) & (d) & (d) & (G) & (G) & & \\
\hline
$\xi$ Per & 11 & 36.0 & $3 \cdot 10^{-7}$ & 2330 & 215~(213)& 80 & 2.6 & 2.0 & 59 & 22 & 0.11 & a \\
$\alpha$ Cam & 22 & 29.9 & $9 \cdot 10^{-7}$ & 1590 & 90~(129)& 85 & 12.4 & a few & 85 & 28 & 0.48 & a, b \\
HD~34656 & 10 & 36.8 & $2 \cdot 10^{-7}$ & 2155 & 70~(91) & 65 & 7.2 & 0.9 & 359 & 100 & 5.75 & a \\
$\lambda$ Ori A & 12 & 35.0 & $3 \cdot 10^{-7}$ & 2175 & 55~(74) & 60 & 11.0 & $>$ 5 & 65 & 22 & 0.18 & a \\
$\epsilon$ Ori & 32 & 28.6 & $2 \cdot 10^{-6}$ & 1910 & 65~(91) & 55 & 24.9 & 0.7 & 78 & 29 & 0.31 & c, d \\
15 Mon & 10 & 41.0 & $4 \cdot 10^{-7}$ & 2110 & 50~(67) & 53 & 10.1 & $>$ 4.5 & 84 & 30 & 0.16 & a \\
HD~64760 & 23 & 23.1 & $1 \cdot 10^{-6}$ & 1500 & 250~(216)& 50 & 4.7 & a few & 282 & 89 & 5.37 & c, d \\
$\zeta$ Pup & 16 & 42.4 & $1 \cdot 10^{-6}$ & 2485 & 220~(219)& 80 & 3.7 & 0.8 & 121 & 34 & 0.29 & e, f \\
$\zeta$ Oph & 8 & 35.9 & $9 \cdot 10^{-8}$ & 1505 & 375~(372)& 50 & 1.1 & 0.8 & 224 & 75 & 4.57 & e, g \\
68 Cyg & 14 & 36.0 & $7 \cdot 10^{-7}$ & 2340 & 290~(305)& 65 & 2.4 & 1.3 & 286 & 90 & 1.86 & a \\
19 Cep & 18 & 30.2 & $6 \cdot 10^{-7}$ & 2010 & 56~(95) & 70 & 16.3 & $\sim$ 5 & 75 & 28 & 0.30 & a \\
$\lambda$ Cep & 17 & 42.0 & $3 \cdot 10^{-6}$ & 2300 & 200~(219)& 80 & 4.3 & 1.4 & 136 & 50 & 0.15 & a \\
10 Lac & 9 & 38.0 & $1 \cdot 10^{-7}$ & 1140 & 21~(35) & 30 & 21.7 & $>$ 5 & 23 & 8 & 0.07 & a \\
\hline
\end{tabular}
\end{table*}
\section{Notes on individual stars}\label{sec:notes}
The following subsections contain notes about each individual star.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{new_halpha}
\caption{H$\alpha$ profiles of all stars (offset for viewing purposes).
Some stars have very little to no variability, whereas others have significant variability (variable depths, emission, etc.).}
\label{fig:halpha}
\end{center}
\end{figure}
\subsection{$\xi$ Per}
$\xi$ Per is a well-known O7.5 giant runaway star \citep{x1} whose DAC behaviour has been extensively studied in the past (e.g. \citealt{b4}).
\citet{b31} have studied its spectral
variability in a number of wind-sensitive lines and also confirm the presence of NRPs. While its high projected rotational velocity makes it
harder to perform precise magnetometry, the excellent time coverage of this dataset leads to a very tight upper limit on the strength of
a hypothetical dipolar field. There does not seem to be significant variation in the shape of H$\alpha$ during our observing runs, but rather simply a modulation of
the depth of the line (see Fig.~\ref{fig:halpha} for a summary of the H$\alpha$ profiles of all stars).
Forty-four independent observations of $\xi$ Per were acquired over 13 nights in December 2006, September 2007
and November 2011. The smallest nightly longitudinal field error bar calculated from these data is 21 G, and the derived dipolar field strength upper limit is
59 G.
\subsection{$\alpha$ Cam}
Also a runaway \citep{x1}, $\alpha$ Cam (O9.5 supergiant) exhibits a subtler DAC behaviour \citep{b5}. The projected rotational velocity
was significantly revised (see Table~\ref{tab:bp}).
The H$\alpha$ profile undergoes important changes from night to night. Eleven independent observations of $\alpha$ Cam were acquired over 5 nights
between 2006 and 2013. The smallest nightly longitudinal field error bar calculated from these data is 10 G, and the derived dipolar field strength upper
limit is 85 G.
\subsection{HD~34656}
HD~34656 is a well-studied O7 bright giant (e.g. \citealt{x2}, who observed line profile variations in its spectra)
with relatively low $v \sin i$, making it an interesting target for this kind of study. \citet{b4} have characterized its DAC behaviour.
Unfortunately, there
was only a single observation of the star in the archive, so it was not possible to constrain its magnetic properties with great precision.
The observation of HD~34656 was acquired in November 2011.
The longitudinal field error bar calculated from this observation is 38 G, and the derived dipolar field strength upper limit is
359 G.
\subsection{$\lambda$ Ori A}
In a large separation double system with an early-B star (e.g. \citealt{x3}),
$\lambda$ Ori A is a slowly-rotating O8 giant, exhibiting well-known DAC behaviour (e.g. \citealt{b4}). We placed a very
firm upper limit on its dipolar field strength. No detectable variations are found in H$\alpha$ in our observations.
Twenty independent observations of $\lambda$ Ori A were acquired over 8 nights between 2007 and 2010.
The smallest nightly longitudinal field error bar calculated from these data is 12 G, and the derived dipolar field strength upper limit is
65 G.
\subsection{$\epsilon$ Ori}
$\epsilon$ Ori (B0) is one of the two B supergiants present in this sample; its DAC behaviour
was first described by \citet{b11}. Evidence suggesting the possible presence
of NRPs is offered by \citet{b34}. We derive rather tight magnetic constraints, on top of observing significant variations of the H$\alpha$ profile over time.
Seventy independent observations of $\epsilon$ Ori were acquired over 9 nights in October 2007, October 2008
and March 2009. The smallest nightly longitudinal field error bar calculated from these data is 6 G, and the derived dipolar field strength upper limit is
78 G.
\subsection{15 Mon}
A long period spectroscopic binary \citep{b32} with well-studied DACs \citep{b4},
15 Mon (O7 dwarf) has low $v \sin i$, thus leading to a well-constrained field upper limit, even though
it has not been observed as extensively as some other stars in this sample.
Our observations of 15 Mon do not present noticeable changes in H$\alpha$.
Contrary to \citet{b37}, who claimed a 4.4$\sigma$ detection based on two
observations with FORS2 and SOFIN (longitudinal field error bars of 37-52 G), we do not find evidence supporting
the presence of a large-scale dipolar magnetic field despite better quality data and more numerous observations.
Indeed, sixteen independent observations of 15 Mon were acquired over 8 nights in December 2006, September-October 2007
and February 2012. The smallest nightly longitudinal field error bar calculated from these data is 20 G, and the derived dipolar field strength upper limit is
84 G.
\subsection{HD~64760}
This B0.5 supergiant was studied by \citet{b14}, who not only detect DACs, but also other forms of variability such as ``phase bowing", making this star
a complex but very interesting case. It is also known to exhibit signs of NRPs (e.g. \citealt{b33}). However, due to its high projected rotational
velocity, as well as the low number of observations, its magnetic properties are amongst the worst-constrained of this sample. There is no variation of H$\alpha$
between the two nights it was observed.
Nine independent observations of HD~64760 were acquired over 2 nights in November 2010
and December 2012. The smallest nightly longitudinal field error bar calculated from these data is 37 G, and the derived dipolar field strength upper limit is
282 G.
\subsection{$\zeta$ Pup}
Characterized by a very strong wind, $\zeta$ Pup is a particularly hot O4 supergiant. Its DAC behaviour was evidenced by \citet{b7}, while \citet{b36} suggest
the possibility of NRPs. We provide good limits on the magnetic field, albeit with a single night of observations. Better time coverage could provide much
better constraints. It is not obvious from these data whether the H$\alpha$ profile varies over the course of the night.
Thirty independent observations of $\zeta$ Pup were acquired over a single night in February 2012.
The nightly longitudinal field error bar calculated from these data is 21 G, and the derived dipolar field strength upper limit is
121 G.
\subsection{$\zeta$ Oph}
A well-known runaway star (e.g. \citealt{x4}), $\zeta$ Oph (O9.5 dwarf) possesses a very high value of $v \sin i$ and short-period DACs \citep{b8}.
Nonetheless, thanks to great time coverage, we obtain good magnetic constraints.
\citet{b37} claim this star to be magnetic, a result we do not reproduce here. Although their nightly observations possess better individual error
bars, their longitudinal field curve has an amplitude of roughly 120 G and implies a surface dipole field of at least 360 G,
which seems inconsistent with the 224 G upper limit
we place on $B_{d}$. Period analysis performed on our longitudinal field measurements (for V and N)
with \textsc{period04} \citep{j1} does not suggest periodic behaviour; in particular,
the 0.8 d and 1.3 d periods reported by \citet{b37} are not recovered. The periodograms of the Stokes V and null measurements are quite similar, further suggesting
that no periodic signal is to be found. Individual Stokes I LSD profiles show strong
line profile variations (LPV), in the form of bumps appearing and disappearing across the profile, which are indicative of the presence of NRPs,
known to exist in this star (e.g. \citealt{z4}).
We do not detect noticeable variations in H$\alpha$ from night to night in our runs.
Sixty-five independent observations of $\zeta$ Oph were acquired over 46 nights in 2011 and 2012.
The smallest nightly longitudinal field error bar calculated from these data is 118 G, and the derived dipolar field strength upper limit is
224 G.
\subsection{68 Cyg}
The O7.5 runaway (e.g. \citealt{j2}) giant 68 Cyg is a rapid rotator with well-studied DACs \citep{b4}.
Combined with the small number of observations, its rapid rotation means that the putative dipolar magnetic field strength of
68 Cyg is not as well constrained as that of most other stars in the sample. However, H$\alpha$ is seen to be variable, though the pattern of its
variation with time is not clear.
Eight independent observations of 68 Cyg were acquired over 4 nights between 2006 and 2012.
The smallest nightly longitudinal field error bar calculated from these data is 46 G, and the derived dipolar field strength upper limit is
286 G.
\subsection{19 Cep}
Believed to be a multiple star system \citep{b39}, 19 Cep is known to exhibit DAC behaviour \citep{b4} and has a primary (O9.5 supergiant)
with low projected rotational velocity, so it was possible to obtain a firm upper limit on the dipolar
magnetic field. The H$\alpha$ profiles show some signs of variability.
Thirty-three independent observations of 19 Cep were acquired over 10 nights between 2006 and 2010.
The smallest nightly longitudinal field error bar calculated from these data is 17 G, and the derived dipolar field strength upper limit is
75 G.
\subsection{$\lambda$ Cep}
The hot (O6) supergiant $\lambda$ Cep is a runaway (e.g. \citealt{j2})
with a high value of $v \sin i$ and relatively short-period DACs \citep{b4}. Extensive time coverage leads to good magnetic constraints, despite
the fast rotation.
This star is also believed to harbour NRPs (e.g. \citealt{b40}). Strong variations of the H$\alpha$ profile are observed.
Twenty-six independent observations of $\lambda$ Cep were acquired over 26 nights between 2006 and 2012.
The smallest nightly longitudinal field error bar calculated from these data is 57 G, and the derived dipolar field strength upper limit is
136 G.
\subsection{10 Lac}
Hosting weaker (but detectable) wind variations \citep{b4}, 10 Lac is a sharp-lined O9 dwarf, leading to exceptionally tight limits on the field strength. No
H$\alpha$ variations are detected in our data.
Thirty-six independent observations of 10 Lac were acquired over 18 nights in December 2006, September-October-November 2007
and July 2008. The smallest nightly longitudinal field error bar calculated from these data is 4 G, and the derived dipolar field strength upper limit is
23 G, both of which are the best constraints obtained for any star in this sample.
\section{Discussion and Conclusions}\label{sec:disc}
As shown in the previous sections, no large-scale dipolar magnetic field is detected in any of the 13 stars of this sample. However,
in order to draw conclusions on whether such fields could be the cause for DACs, it is important to investigate the different possible
interactions between weak, potentially undetected magnetic fields and stellar winds.
One form of interaction that has been increasingly investigated in recent years is magnetic wind confinement. Indeed, the magnetic field
can channel the wind, and closed loops can effectively ``confine'' it, trapping material in a magnetosphere. \citet{b21}
introduce the following ``wind confinement'' parameter to characterize this interaction:
\begin{equation}
\eta_{*} = \frac{{B_{eq}}^2 {R_{*}}^2}{\dot{M} v_{\infty}}
\end{equation}
\noindent where $B_{eq}$ corresponds to the strength of the magnetic field at the equator (which equals half of the dipole polar field strength, $B_{d}$),
$R_{*}$ is the stellar radius, $\dot{M}$ is the mass-loss rate and $v_{\infty}$ is the terminal velocity of the wind. In effect, this parameter
corresponds to the ratio of the magnetic field energy density and the wind kinetic energy density at the stellar surface; therefore, its value gives a sense of
which of the two dominates. If $\eta_{*} \ll 1$, then the wind's momentum causes the magnetic field lines to stretch out radially and the outflow is essentially
unperturbed. On the other hand, if $\eta_{*} \gg 1$, then the strong magnetic field lines are perpendicular to the outflow at the star's magnetic equator, barring
the passage of charged material. Depending on the rotational parameters of the star, this can lead either to a centrifugal or a dynamical magnetosphere
(for a detailed description of both these cases, see \citealt{b22}).
In intermediate cases however, the effect of the magnetic field can be somewhat more subtle. An in-depth analysis of this regime is presented
by \citet{b21} and leads to two main thresholds:
\begin{itemize}
\item for $\eta_{*} > 1$, the wind is considered to be confined by the magnetic field;
\item for $0.1 < \eta_{*} < 1$, the wind is not confined, but its flow is significantly affected by the magnetic field.
\end{itemize}
\noindent Therefore, we will consider that for $\eta_{*} < 0.1$, the dynamical effect of the magnetic field on the wind is likely to be too weak to cause DACs.
An upper limit on the value of the $\eta_{*}$ parameter was computed for each star of the sample ($\eta_{*, \textrm{max}}$)
using the upper limit on $B_{d}$ derived from
the Bayesian inference, and the results are presented in Table~\ref{tab:bp}.
It should be noted here that the wind parameter values used to compute these $\eta_{*}$ upper limits are determined empirically.
For magnetic stars, it is necessary to use theoretical mass-loss rates instead of observed values to represent the net surface driving force, since a significant
part of the outflow can be confined by the magnetic field, and would then not be detected at larger radii \citep{b22}. However, in the case of apparently non-magnetic
stars, the picture is not so clear. Furthermore, our empirical values are found to be systematically comparable to or smaller than theoretical values;
since we are deriving conservative constraints, it seemed more consistent to use the overall smaller empirical values. Finally, while it might be argued that
there are important uncertainties associated with empirical determinations of wind parameters,
theoretical prescriptions (such as \citealt{b48}) propagate the rather sizeable uncertainties on masses and luminosities, so there
is no obvious reason to choose one over the other based on such an argument.
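As a concrete illustration of the magnitudes involved, the tabulated values can be reproduced directly from the definition of $\eta_{*}$ above; the short sketch below (in cgs units, with standard solar constants) uses the Table~\ref{tab:bp} entries for 10 Lac and recovers the quoted $\eta_{*,\textrm{max}} \approx 0.07$.
\begin{verbatim}
import numpy as np

R_SUN, M_SUN, YEAR = 6.957e10, 1.989e33, 3.156e7   # cm, g, s

# Table values for 10 Lac, with the 95.4% upper limit on the polar field.
B_eq   = 23.0 / 2.0                 # G (equatorial field = half the polar field)
R_star = 9.0 * R_SUN                # cm
Mdot   = 1e-7 * M_SUN / YEAR        # g/s
v_inf  = 1140.0e5                   # cm/s

eta_star_max = (B_eq**2 * R_star**2) / (Mdot * v_inf)
print(round(eta_star_max, 3))       # ~0.072
\end{verbatim}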
\begin{figure*}
\begin{center}
\includegraphics[width=6.6in]{eta_new}
\caption{Comparison of the magnetic field energy density upper limits (vertical axis, 95.4\% confidence interval upper limits indicated by black points,
68.3\% confidence interval upper limits indicated by grey points)
and the wind kinetic energy density values (horizontal axis) for all
13 stars of this study. Dashed lines show where $\eta_{*} = 1$ and
$\eta_{*} = 0.1$. For most stars, the likelihood is greater than 95.4\% that $\eta_{*}$ is below 1, and greater than 68.3\% that it is below 0.1.}
\label{fig:eta}
\end{center}
\end{figure*}
The value of $\eta_{*, \textrm{max}}$
ranges between 0.072 and 5.75, with one star below a value of 0.1 (10 Lac) and a majority of the stars below
a value of 1 (9/13). As for the stars with $\eta_{*, \textrm{max}} > 1$, they all have
very high projected rotational velocities, thus making it difficult to tightly constrain the field strength. These results are also illustrated in
Fig.~\ref{fig:eta}, where the x-axis corresponds loosely to the wind kinetic energy density and the y-axis corresponds to the magnetic energy
density. The dashed lines represent our two chosen thresholds. Given that the plotted values all correspond to upper limits,
we can infer that at least a few of these stars do not have magnetic fields strong enough to dynamically affect the wind outflow on the equatorial
plane (as also evidenced by the 68.3\% confidence interval upper limits).
In addition to the upper limits, we use the PDFs to assess the sample's distribution of confinement, assuming that each star contributes
probabilistically to various field strength bins according to its normalized probability density function (constructing, in other words,
a ``probabilistic histogram" of field strengths). In this way, we account for both the most probable
field strength as well as the large-field tail of the distributions. The top panel of Figure~\ref{fig:cum_pdfs} shows this
global cumulative PDF for the wind confinement parameter. We expect any star selected from the sample to have $\eta_* < 0.02$ (which is well
below the threshold of $\eta_* = 0.1$) with
a probability of 50\%, or in other words, we expect half of the sample to have a confinement parameter value
below 0.02. Using this
cumulative PDF, we can also calculate that 75.6\% of the sample should
have $\eta_* < 0.1$ and 93.9\% of the sample should have $\eta_* < 1$.
Assuming this small sample
is representative of the larger population of stars displaying DACs, this
implies that there is no significant dipolar magnetic
dynamic influence on the wind for most of these stars.
Under these conditions, wind confinement by a dipolar magnetic field
does not seem to be a viable mechanism to produce DAC-like variations in
all stars.
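A minimal sketch of how this sample-level cumulative distribution can be assembled is given below; the per-star marginal PDFs and the $B_{d} \rightarrow \eta_{*}$ conversion factors are placeholders here (in reality they come from the Bayesian analysis and from the wind parameters of Table~\ref{tab:bp}).
\begin{verbatim}
import numpy as np

b = np.linspace(0.0, 1000.0, 2001)            # B_d grid (G)
widths = [10.0, 30.0, 120.0]                  # toy per-star PDF widths (G)
conv   = [3e-5, 1e-5, 5e-5]                   # toy eta_* per (B_eq in G)^2 factors

eta_grid = np.logspace(-4, 2, 400)
sample_cdf = np.zeros_like(eta_grid)

for w, k in zip(widths, conv):
    pdf = np.exp(-0.5 * (b / w) ** 2)
    pdf /= np.trapz(pdf, b)                   # normalize each star's PDF
    eta = k * (b / 2.0) ** 2                  # eta_* implied by each B_d value
    # probability that this star has eta_* below each threshold
    sample_cdf += np.array([np.trapz(pdf[eta < x], b[eta < x])
                            for x in eta_grid])

sample_cdf /= len(widths)                     # average over the sample
print(eta_grid[np.searchsorted(sample_cdf, 0.5)])   # sample median of eta_*
\end{verbatim}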
The derived values of $\eta_{*}$ are sensitive to uncertainties in the values of $R_{*}$, $\dot{M}$ and $v_{\infty}$. While the last parameter
is essentially identical in all studies, in some extreme cases values of $R_{*}$ can be up to 2-2.5 times larger than the adopted values,
whereas $\dot{M}$ can be up to 10 times larger. Such differences would result respectively in a 6-fold increase and a
10-fold decrease in the inferred value of $\eta_{*}$. However, studies that infer larger stellar radii also tend to infer larger mass loss rates
(e.g. \citealt{plouc1}, with $\xi$ Per and HD 34656).
Thus one effect approximately offsets the other.
The largest potential increase in inferred $\eta_{*}$ for a star of our sample would occur for
$\alpha$ Cam; based on the values measured by \citet{plouc3} (about 1.5 times increase
in radius, and half the mass-loss rate), we obtain an increase of $\eta_{*}$ by a factor of 4. However, for typical combinations of $R_{*}$ and $\dot{M}$ obtained
from other studies, we obtain values of $\eta_{*}$ that are either comparable in magnitude, or smaller (up to an order of magnitude)
than those inferred using the adopted parameters.
\begin{figure}
\begin{center}
\subfigure[$\eta_{*}$ cumulative PDF.]{\includegraphics[width=3.4in]{total_cumul_pdf_eta}}
\subfigure[$B_{d}$ cumulative PDF.]{\includegraphics[width=3.4in]{total_cumul_pdf_B}}
\caption{Cumulative PDFs of the total sample for $\eta_{*}$ (top) and $B_{d}$ (bottom). In both cases, the dashed line shows the 50\% confidence
interval upper limit. For the top panel the dotted lines represent, from left to right, $\eta_{*}=0.1$ and $\eta_{*}=1$. For the bottom panel the dotted
lines represent, from left to right, the field strength required to produce a 10\% and a 50\% brightness enhancement (resp. about 180 G and 400 G).}
\label{fig:cum_pdfs}
\end{center}
\end{figure}
Do these results rule out dipolar fields altogether? \citet{b16} simply introduce bright spots on the surface of the star, with no particular attention to
the mechanism creating them. While wind confinement is possibly the most obvious effect of a magnetic field on the outflowing material driven from the
surface of a massive star, there might also be more subtle interactions. For instance, the magnetic pressure at the poles of a weak large-scale dipolar
field could lower the local gas pressure, thus reducing the gas density and leading to a lower optical depth. Hence, light coming from the pole
would actually probe hotter regions within the star.
This could cause bright spots like those in the \citet{b16} model. Making a few assumptions (closely modeled on the
calculations of \citealt{b27}), we can derive a simplified
formula for the magnetic field ($B$) required to produce a given brightness enhancement. Indeed, if we consider a flux tube at the photosphere, we can
compare a zone outside of the tube ($B = 0$) to a zone inside the tube ($B = B_{\textrm{T}}$). Furthermore, we assume a grey atmosphere:
\begin{equation}
T(\tau) = T_{\textrm{eff}}\left( \frac{3}{4}\tau + \frac{1}{2} \right)^{\frac{1}{4}}
\end{equation}
\noindent where $T$ is the temperature and $T_{\textrm{eff}}$ is the effective temperature (corresponding to an optical depth, $\tau$,
of 2/3). At equilibrium, the gas pressures ($P_{\textrm{g}}$) inside and outside the tube only differ by the value of the
magnetic pressure ($P_{\textrm{B}} = \frac{B^{2}}{8 \pi}$):
\begin{equation}
P_{\textrm{g}}(r) = P_{\textrm{g}}'(r) + P_{\textrm{B}}
\end{equation}
\noindent where primed variables refer to values inside the flux tube, as opposed to unprimed variables
which refer to values outside the flux tube. The optical depth can be written as a function of gas pressure:
\begin{equation}
\tau = \frac{\kappa P_{\textrm{g}}}{g}
\end{equation}
\noindent where $\kappa$ is the mean Rosseland opacity, and $g$ is the surface gravity. To determine the brightness enhancement, we need to find the
temperature corresponding to an optical depth of 2/3 inside the flux tube (assuming magneto-hydrostatic and temperature equilibrium at a given vertical
depth). Since $P_{\textrm{g}}'(r) = P_{\textrm{g}}(r) - P_{\textrm{B}}$, the optical depth inside the tube is reduced to $\tau' = \tau - \kappa P_{\textrm{B}}/g$,
so the surface $\tau' = 2/3$ lies deeper, at $\tau = 2/3 + \kappa B^{2}/(8 \pi g)$, where the grey-atmosphere temperature is:
\begin{equation}
T(\tau' = 2/3) = T_{\textrm{eff}} \left( 1 + \frac{3 \kappa B^{2}}{32 \pi g} \right)^{\frac{1}{4}}
\end{equation}
\noindent Finally, since the flux is proportional to the fourth power of the temperature, the brightness enhancement can be expressed as:
\begin{equation}
\frac{F'}{F} = 1 + \frac{3 \kappa B^{2}}{32 \pi g}
\end{equation}
Now, using typical values for O dwarfs ($\kappa \sim 1$~cm$^{2}$~g$^{-1}$ and $\log g = 4.0$, in cgs units), it is simple to perform sample calculations. For instance,
the main model used by \citet{b16} uses a 50\% enhancement. The field required to produce such an enhancement is of the order of 400 G,
assuming a magnetic region surrounded by an adjacent non-magnetic region.
On the other hand, the same paper shows that DAC-like behaviour can arise with an enhancement as small as 10\%. The associated field
would be of the order of 180 G\footnote{It is likely, given that dipole fields correspond to a continuous distribution of magnetic field (rather than isolated
flux tubes as assumed in this calculation), that even stronger polar fields would actually be necessary to achieve a given brightness enhancement.}.
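These numbers are simple to reproduce by inverting the expression above for $B$; a minimal sketch in cgs units, with the same assumed $\kappa$ and $\log g$, is:
\begin{verbatim}
import numpy as np

kappa = 1.0          # cm^2/g, assumed typical mean opacity for an O dwarf
g     = 1.0e4        # cm/s^2, corresponding to log g = 4.0

def field_for_enhancement(df):
    """B (in G) giving a fractional brightness enhancement df = F'/F - 1."""
    return np.sqrt(32.0 * np.pi * g * df / (3.0 * kappa))

for df in (0.10, 0.50):
    print(df, round(field_for_enhancement(df)))   # ~183 G and ~409 G
\end{verbatim}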
The dipolar field upper limits shown in Table~\ref{tab:bp} are almost all (9/13) under that value. While models with
smaller brightness enhancements are not tested in their study, the observational constraints obtained here indicate that this mechanism, when driven by a
dipolar magnetic field, does not provide a viable way of producing DACs.
Once again, in very much the same way as we did for $\eta_{*}$, we can compile a global cumulative PDF for $B_{d}$ (bottom panel of Figure~\ref{fig:cum_pdfs}).
The results are quite telling: 50\% of the sample should have $B_d <$ 23 G, and 95.8\% (99.0\%) of the sample should have a smaller dipolar field
strength value than that required to produce a 10\% (50\%) local brightness enhancement.
Even if dipolar fields seem to be an unlikely cause for DACs, the general case for magnetism is not settled. Indeed, structured small-scale magnetic
fields could arise as a consequence of the subsurface convection zone caused by the iron opacity bump at $T \simeq 150$~kK \citep{b24}.
Then, magnetic spots at the surface
of the star could possibly give rise to CIRs (e.g. \citealt{c1}), even though such spots are expected to be relatively weak
(a surface field of at least 160 G would require a star of more than about 40~$M_{\odot}$). While the detection of such fields is an arduous task
\citep{b25}, proving their existence and understanding their structure might hold the key to this old problem, as well as other
similar problems (e.g. in BA supergiants, see \citealt{shultzinator}). Good candidates for follow-up
deep magnetometry might be $\epsilon$ Ori and 10 Lac. The former has the advantage of being very bright and having a relatively low value of $v \sin i$, while the
latter has very low projected rotational velocity (for an O star). 10 Lac already has decent time coverage, but could benefit from obtaining more observations per
night.
In parallel to observational efforts, theoretical investigations
are required in order to probe the parameter space of magnetic field strengths and configurations
to find out which types of fields can give rise to DAC-like phenomena. Numerical simulations can also be used to investigate mechanisms other than magnetism,
as well as constrain the required brightness
enhancement in a \citet{b16} model analog to create CIRs in the first place.
The next paper of this series will explore the magnetic spot hypothesis and hopefully place constraints on how likely such a mechanism is to
cause DACs.
\section*{Acknowledgments}
This research has made use of the SIMBAD database operated at CDS, Strasbourg, France and
NASA's Astrophysics Data System (ADS) Bibliographic Services.
ADU gratefully acknowledges the support of the \textit{Fonds qu\'{e}b\'{e}cois de la recherche sur la nature et les technologies}. GAW is supported by an NSERC
Discovery Grant. AuD acknowledges support from the NASA Chandra theory grant to Penn State Worthington Scranton and NASA ATP Grant NNX11AC40G.
Finally, the authors thank the anonymous referee, whose insightful comments have no doubt
contributed to improving this paper.