$\ip{\cdot,\cdot} : H^1(S^1,\mathbb{R}^n) \times H^1(S^1,\mathbb{R}^n) \to H^1(S^1,\mathbb{R})$ satisfies
\[
\| \ip{v,w} \|_{H^1}^2 = \int_{S^1} \bigl[ |\ip{v,w}|^2 + |\ip{v',w} + \ip{v,w'}|^2 \bigr] \, du \le c \|v\|_{H^1}^2 \|w\|_{H^1}^2.
\]
So $\ip{\cdot,\cdot}$ is continuous and bilinear, and therefore the map $q(\gamma):= \ip{\gamma,\gamma}$ is analytic.
Now by Lemma \ref{sqrt} we have that $\gamma \mapsto |\gamma_u|$ is a composition of analytic functions $f \circ q\circ \partial_u$
and is therefore analytic by e.g. \cite[p. 1079]{Whittlesey:1965aa}.
\end{proof}
\begin{lemma}
The function $\rec: H^1_{+}(S^1,\mathbb{R})\to H^1_{+}(S^1,\mathbb{R}),\, \psi(u)\mapsto \psi(u)^{-1}$ is analytic.
\end{lemma}
\begin{proof}
This is proved for $H^3$ functions in \cite[Appendix B.1]{DallAcqua:2016aa}, and the same method of proof works for $H^1$ functions.
\end{proof}
We also require the following property of analytic functions:
If $F:D\to Y_1, \, G:D\to Y_2$ are analytic and there exists a bilinear continuous mapping $*:Y_1\times Y_2\to Z$ into another Banach space,
then the product $F*G:D\to Z, x\mapsto F(x)*G(x)$ is also analytic.
According to \cite[p. 2175]{DallAcqua:2016aa} this can be proved using similar ideas as for the Cauchy product of series.
\begin{lemma}\label{diffsanalytic}
The function $\partial_s:\mathcal{I}^2(S^1,\mathbb{R}^n) \to H^1(S^1,\mathbb{R}^n)$, $\gamma \mapsto \gamma_s= \gamma_u / |\gamma_u|$ is analytic.
\end{lemma}
\begin{proof}
We observed in the two previous lemmas that $\partial_u$ and $\gamma \mapsto |\gamma_u|^{-1}$ are analytic on $\mathcal{I}^2(S^1,\mathbb{R}^n)$, with respective codomains $H^1(S^1,\mathbb{R}^n)$ and $H^1_+(S^1,\mathbb{R})$. Moreover the pointwise product $*:H^1(S^1,\mathbb{R})\times H^1(S^1,\mathbb{R}^n)\to H^1(S^1,\mathbb{R}^n)$ is bilinear and continuous. Indeed if $\alpha \in H^1(S^1,\mathbb{R})$ and $\gamma \in H^1(S^1,\mathbb{R}^n)$ then
\[
\|\alpha*\gamma\|_{H^1}\leq \|\alpha\|_{L^\infty} \|\gamma\|_{L^2} + \|\alpha\|_{L^\infty} \|\gamma_u\|_{L^2} + \|\alpha_u\|_{L^2} \|\gamma\|_{L^\infty}.
\]
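In fact, combining this with the Sobolev embedding $H^1(S^1)\hookrightarrow L^\infty(S^1)$ (with embedding constant $c_S$, say) yields the continuity estimate
\[
\|\alpha*\gamma\|_{H^1} \leq c_S \|\alpha\|_{H^1}\bigl( \|\gamma\|_{L^2}+\|\gamma_u\|_{L^2} \bigr) + c_S \|\alpha_u\|_{L^2} \|\gamma\|_{H^1} \leq c\, \|\alpha\|_{H^1} \|\gamma\|_{H^1}.
\]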
Since $\partial_s\gamma$ is the pointwise product of $\gamma_u$ and $|\gamma_u|^{-1}$, the result follows.
\end{proof}
\begin{lemma}\label{diffss}
The function $\partial_s^2:\mathcal{I}^2(S^1,\mathbb{R}^n) \to L^2(S^1,\mathbb{R}^n)$ is analytic.
\end{lemma}
\begin{proof}
Since $\partial_u:H^1(S^1,\mathbb{R}^n)\to L^2(S^1,\mathbb{R}^n)$ is also linear and continuous,
by Lemma \ref{diffsanalytic} we have that $\partial_u\partial_s:\mathcal{I}^2(S^1,\mathbb{R}^n) \to L^2(S^1,\mathbb{R}^n)$ is a composition of analytic functions, therefore analytic.
The product $*:H^1(S^1,\mathbb{R})\times L^2(S^1,\mathbb{R}^n)\to L^2(S^1,\mathbb{R}^n)$ is also bilinear and continuous, and $\gamma_{ss}=|\gamma_u|^{-1} \gamma_{su}$, so $\partial_s^2$ is analytic.
\end{proof}
\begin{proposition} \label{energy-analytic-1}
The energy $\E: \mathcal{I}^2(S^1,\mathbb{R}^n) \to \mathbb{R}$ is analytic.
\end{proposition}
\begin{proof}
Recall that $\E(\gamma)=\int_{S^1} [ \ip{\gamma_{ss},\gamma_{ss}}|\gamma_u|+\lambda |\gamma_u|] \, du$.
Lemma \ref{diffss} implies that $\partial_s^2$ is analytic.
Moreover, the Euclidean inner product is continuous and bilinear as a map $L^2(S^1,\mathbb{R}^n)\times L^2(S^1,\mathbb{R}^n)\to L^1(S^1,\mathbb{R})$,
pointwise multiplication $*: L^1(S^1,\mathbb{R})\times H^1(S^1,\mathbb{R}) \to L^1(S^1,\mathbb{R})$ is continuous and bilinear, and the sum of analytic functions is analytic.
Thus the integrand is analytic as a function $\mathcal{I}^2(S^1,\mathbb{R}^n) \to L^1(S^1,\mathbb{R})$.
Integration is of course linear and bounded on $L^1$, so the energy is a composition of analytic functions.
\end{proof}
\subsection{The submanifold of arc length proportional parametrized curves} \label{subsection:al-para-curves}
Denote $H^1_{zm}(S^1,\mathbb{R}):=\{\alpha\in H^1(S^1,\mathbb{R}): \int_{S^1} \alpha \,du=0\}$ and define $\Phi:\mathcal{I}^2(S^1,\mathbb{R}^n) \to H^1_{zm}(S^1,\mathbb{R})$ by $\Phi(\gamma):=|\gamma_u|-\mathcal{L}(\gamma)$.
Then $\Omega:=\Phi^{-1}(0)$ is the subset of $\mathcal{I}^2(S^1,\mathbb{R}^n)$ consisting of curves which are parametrized proportional to arc length.
\begin{proposition} \label{proposition:Omega-210630}
The set $\Omega$ of arc length proportional parametrized curves is an analytic submanifold of $H^2(S^1,\mathbb{R}^n)$.
\end{proposition}
\begin{proof}
We will show that $\mathbf{0}\in H^1_{zm}(S^1,\mathbb{R})$ is a regular value of $\Phi$. For the derivative we get
\begin{equation}\label{dphi}
d\Phi_\gamma v=v_u\cdot \frac{\gamma_u}{|\gamma_u|}-\int_{S^1} v_u\cdot \frac{\gamma_u}{|\gamma_u|}du,
\end{equation}
and so $\mathbf 0$ is a regular value if for all $w \in H^1_{zm}(S^1,\mathbb{R})$ there exists $v\in H^2(S^1,\mathbb{R}^n)$ such that
\begin{align}\label{surjective}
v_u \cdot \frac{\gamma_u}{|\gamma_u|}-\int_{S^1} v_u\cdot \frac{\gamma_u}{|\gamma_u|}du =w.
\end{align}
Given such a $w$, we let $v$ be a solution of
\begin{equation}\label{controlsystem}
v_u=w\frac{\gamma_u}{|\gamma_u|}+\xi N,
\end{equation}
where $N$ is the unit normal to $\gamma$ and $\xi:S^1\to \mathbb{R}$ is a \emph{control} function which we are free to choose in order to ensure $v$ satisfies the zeroth order periodicity condition $v(0)=v(1)$
(the first order condition will be discussed below).
If this is possible, for example if the system \eqref{controlsystem} is \emph{controllable}, then the solution $v$ satisfies \eqref{surjective} and $d\Phi_\gamma$ is surjective.
In fact, it will be sufficient to show that the system
\begin{equation}\label{cs2}
x_u=\xi N
\end{equation}
is controllable, because if $y$ is any solution to $y_u=w \gamma_u/|\gamma_u|$ then we can control $x$ from e.g. $x(0)=y(0)$ to $x(1)=y(1)$ and then let $v=x-y$.
According to \cite[p. 76]{Brockett:1970aa}, a sufficient condition for \eqref{cs2} to be controllable is that the matrix
\[ W:=\int_0^1 NN^T du \]
should be non-singular (here $N$ is a column vector, $N^T$ its transpose), in which case a particular control which drives the solution from $x(0)$ to $x(1)$ is $\xi=N^T W^{-1}(x(1)-x(0))$.
Observe then that since $N^T(0)=N^T(1)$, we will have that the control $\xi$ is also zeroth order periodic, and then from \eqref{controlsystem} $v$ is first order periodic.
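Indeed, substituting this control into \eqref{cs2} and integrating verifies that it achieves the prescribed endpoints:
\[
x(1)-x(0)=\int_0^1 \xi N \, du = \Bigl( \int_0^1 NN^T \, du \Bigr) W^{-1}\bigl(x(1)-x(0)\bigr) = WW^{-1}\bigl(x(1)-x(0)\bigr) = x(1)-x(0).
\]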
Suppose $W$ is singular. Then there is a non-zero vector $a\in \mathbb{R}^n$ such that
\[ 0=a^T W a=\int_0^1 (a\cdot N)^2du \]
which implies that $a\cdot N=0$ for all $u$, which is impossible because $\gamma$ is neither a straight line nor constant.
We therefore have that $d\Phi_\gamma$ is surjective, and its kernel splits because it is a closed subspace of a Hilbert space.
Therefore $\mathbf 0$ is a regular value of $\Phi$ and $\Omega$ is a submanifold (see e.g. \cite[Theorem 2.2.2]{Schrader:2016aa}, with \cite[Proposition 2.3]{Lang:1999sf}).
To see that it is an \emph{analytic} submanifold we note that (e.g. in the proof of \cite[Proposition 2.3]{Lang:1999sf}) the charts for $\Omega$ are constructed by applying the inverse function theorem to $\Phi$.
Since $\Phi$ is analytic as a consequence of Lemma \ref{speed}, local inverses of $\Phi$ will also be analytic by a theorem of Whittlesey (\cite[p. 1081]{Whittlesey:1965aa}).
So the charts are analytic, i.e. $\Omega$ is analytic.
\end{proof}
Combining Proposition \ref{energy-analytic-1} with Proposition \ref{proposition:Omega-210630}, we have:
\begin{corollary}\label{restrictionanalytic}
The restriction $\E|\Omega$ is analytic.
\end{corollary}
Here we give a characterization of the tangent space $T_\gamma \Omega$.
\begin{corollary} \label{Cor:210702-1}
The tangent space $T_\gamma\Omega$ is equal to $\ker d \Phi_\gamma$ and it consists of all $V\in H^2(S^1,\mathbb{R}^n)$ satisfying
\begin{equation}\label{omegatangent}
\ip{V_s ,T}+\frac{1}{\mathcal{L}(\gamma)}\int^{\mathcal{L}(\gamma)}_{0}\ip{V,\kappa} ds=0.
\end{equation}
\end{corollary}
\begin{proof}
The relation $T_\gamma \Omega=\ker d\Phi_\gamma$ is a consequence of the regular values theorem (see e.g. \cite[Theorem~2.2.2]{Schrader:2016aa}). From \eqref{dphi} we see that
\[
\ip{V_u,T} - \int_{S^1}\ip{V_u,T}du=0 \quad \text{for} \quad V\in T_\gamma\Omega.
\]
Since $|\gamma_u|=\mathcal{L}(\gamma)$ for $\gamma \in \Omega$, multiplication by $1/\mathcal{L}(\gamma)$ and integration by parts gives the desired expression.
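Explicitly, dividing by $\mathcal{L}(\gamma)$ and writing the integral with respect to $ds=\mathcal{L}(\gamma)\,du$, periodic integration by parts with $T_s=\kappa$ gives
\[
\ip{V_s,T} - \frac{1}{\mathcal{L}(\gamma)} \int_0^{\mathcal{L}(\gamma)} \ip{V_s,T}\, ds = \ip{V_s,T} + \frac{1}{\mathcal{L}(\gamma)} \int_0^{\mathcal{L}(\gamma)} \ip{V,\kappa}\, ds,
\]
which is \eqref{omegatangent}.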
\end{proof}
\subsection{The gradient inequality} \label{subsection:LSineq}
\begin{proposition}\label{fredholm}
Let $\gamma$ be a critical point of $\E|\Omega$, then $d^2(\E|\Omega)_\gamma$ is a Fredholm operator with index zero.
\end{proposition}
\begin{proof}
Let $\gamma \in \Omega$, and fix $V, W \in T_\gamma\Omega$ arbitrarily.
Taking a derivative of \eqref{omegatangent}, we have
\begin{equation}
\label{eq:210702-10}
\ip{V_{ss},T} = - \ip{V_s, \kappa }.
\end{equation}
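In detail, the integral in \eqref{omegatangent} does not depend on $s$, so differentiating \eqref{omegatangent} with respect to $s$ and using $T_s=\kappa$ gives
\[
0 = \partial_s \ip{V_s,T} = \ip{V_{ss},T} + \ip{V_s,T_s} = \ip{V_{ss},T} + \ip{V_s,\kappa}.
\]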
Combining Lemma \ref{L-second-var} with \eqref{eq:210702-10} and Corollary \ref{Cor:210702-1}, we reduce the second variational formula into
\begin{align*}
d^2\E_\gamma (V,W)
&= \int^{\mathcal{L}(\gamma)}_0 \bigl[ \ip{W_{ss},2V_{ss}-6\ip{T,V_s}\kappa}\\
& \qquad \qquad \quad +\ip{W_s,-6\ip{\kappa,V_{ss}}T+(15k^2-\lambda^2)\ip{V_s,T}T-(3k^2-\lambda^2)V_s} \bigr]\, ds.
\end{align*}
Assuming furthermore that $\gamma $ is a critical point of $\E$ and therefore admits higher derivatives, we can differentiate
\[
\partial_s(\ip{\kappa,V_s}T)=\ip{\gamma_{s^3},V_s}T+\ip{\kappa,V_{ss}}T+\ip{\kappa,V_s}\kappa
\]
and use this to eliminate the $\ip{\kappa,V_{ss}}T$ term from the expression for $d^2\E$. Then after further simplifications we find
\begin{align*}
d^2\E_\gamma (V,W)
=\begin{multlined}[t] \int^{\mathcal{L}(\gamma)}_0 \bigl[ 2\ip{W_{ss},V_{ss}} + \ip{W_s,6\ip{\gamma_{s^3},V_{s}}T+6\ip{T,V_s}\gamma_{s^3}}\\
+\ip{W_s,(15k^2-\lambda^2)\ip{V_s,T}T-(3k^2-\lambda^2)V_s} \bigr] \, ds.
\end{multlined}
\end{align*}
Since $\gamma\in \Omega$ we have $\ip{V,W}_{H^2}=\mathcal{L}(\gamma)^3\ip{V_{ss},W_{ss}}_{L^2}+\mathcal{L}(\gamma)\ip{V_s,W_s}_{L^2}+\ip{V,W}_{L^2}$, and we observe that the second derivative of $\E$ has the form
\[
d^2\E_\gamma( V,W)=\frac{2}{\mathcal{L}(\gamma)^3}\ip{V,W}_{H^2} +\int^{\mathcal{L}(\gamma)}_0 \ip{F(V),W}\, ds,
\]
where $F$ is a continuous linear map into $L^2(S^1,\mathbb{R}^n)$.
Hence the associated operator $B:T_\gamma\Omega\subset H^2(S^1,\mathbb{R}^n)\to T_\gamma\Omega^*$, defined by $B(V)=d^2(\E|\Omega)_\gamma(V,\cdot )$ is the restriction to $T_\gamma\Omega$ of $2I+T$
where $I$ is the Riesz map $I(V)=\ip{V,\cdot }_{H^2}$ and $T(V):=\int_{S^1}\ip{V,F(\cdot) }\,ds $.
Since $I$ is an isomorphism by the Riesz representation theorem, it is Fredholm with index zero. As for $T$,
\[
\|T(V)\|_{{H^2}^*}=\sup_{\|W\|_{H^2}=1} \Bigl| \int \ip{V,F(W)}\,ds \Bigr| \leq \sup_{\|W\|_{H^2}=1} \|V\|_{L^2} \|F(W)\|_{L^2} \leq c \|V\|_{L^2},
\]
and then it follows from the compactness of the imbedding $H^2\subset L^2$ that $T$ is compact.
Moreover, these properties for $I$ and $T$ persist upon restriction to $T_\gamma\Omega$.
Then since $B$ is the sum of a Fredholm index zero operator and a compact operator, it is also Fredholm with index zero (e.g. \cite[Example 8.16]{Zeidler:1986aa}).
\end{proof}
\begin{proposition}{\rm (\L ojasiewicz--Simon gradient inequality on $\Omega$)}.\label{LS1}
Let $\gamma_\infty \in\Omega $ be a stationary point of $\E$. There are constants $Z\in (0,\infty), \sigma\in (0,1]$ and $\theta\in \left [\tfrac{1}{2},1\right )$ such that
if $\gamma \in \Omega$ with $\norm{\gamma-\gamma_\infty}_{H^2}<\sigma$ then
\[
\| d(\E|\Omega)_\gamma\|_{T_\gamma \Omega^*} \geq Z|\E(\gamma)-\E(\gamma_\infty)|^\theta.
\]
\end{proposition}
\begin{proof}
Let $\phi:U\subset \Omega \to \phi(U)\subset B$ be a local chart for $\Omega$ with $\gamma_\infty\in U$,
where $B$ is a subspace of $H^2(S^1,\mathbb{R}^n)$, and choose $\sigma$ such that $\gamma$ is also in $U$ when $\|\gamma-\gamma_\infty\|_{H^2}<\sigma$.
Define $E:=\E\circ \phi^{-1}:\phi(U)\to \mathbb{R}$, then $dE=d\E\circ d\phi^{-1}$, and since $\gamma_\infty$ is stationary we have
\[
d^2 E_{\phi(\gamma_\infty)} = d^2 \E_{\gamma_\infty}(d\phi^{-1}\cdot, d\phi^{-1}\cdot).
\]
Moreover, $\phi$ is analytic and $d\phi$ is an isomorphism, so it follows from Corollary \ref{restrictionanalytic} and Proposition \ref{fredholm} that $E$ is analytic and $d^2 E_{\phi(\gamma_\infty)}$ is Fredholm.
Therefore by \cite[Theorem 1]{Feehan:2020aa} there exist constants $\tilde Z\in (0,\infty),\tilde{\sigma}\in (0,1], \theta\in \left [\tfrac{1}{2},1\right )$ such that
if $\|\phi(\gamma)-\phi(\gamma_\infty)\|_B<\tilde \sigma$ then
\[
\|dE_{\phi(\gamma)}\|_{B^*} \geq \tilde Z \left | E(\phi(\gamma))-E(\phi(\gamma_\infty))\right |^\theta =\tilde Z \left | \E(\gamma)-\E(\gamma_\infty)\right |^\theta.
\]
Since $d\phi^{-1}$ is continuous we have that there is a constant $c$ such that for any $V\in T_\gamma \Omega$
\[
\|V\|_{H^2} \leq c \|d\phi (V)\|_B
\]
and therefore
\[
\|dE_{\phi(\gamma)}\|_{B^*}=\sup_{d\phi(V)\in B}\frac{|dE_{\phi(\gamma)}d\phi(V)|}{\|d\phi(V)\|_B}
\leq \sup_{V\in T_\gamma\Omega}\frac{|d\E_\gamma V|}{\frac{1}{c}\|V\|_{H^2}}
= c\|d(\E|\Omega)_\gamma\|_{{T_\gamma \Omega}^*}.
\]
The existence of a $\sigma$ such that if $\|\gamma-\gamma_\infty\|_{H^2}<\sigma$ then $\|\phi(\gamma)-\phi(\gamma_\infty)\|_B<\tilde \sigma$ follows from the continuity of $\phi$ and the fact that $\Omega$ is a submanifold of $H^2(S^1,\mathbb{R}^n)$.
\end{proof}
\begin{lemma}\label{Pconts}
The projection $P:\mathcal{I}^2(S^1,\mathbb{R}^n) \to \Omega$ which takes each $\gamma$ to its arc length proportional reparametrisation is continuous with respect to $H^2$ at any $c\in \mathcal{I}^3(S^1,\mathbb{R}^n)$,
where $\mathcal{I}^3(S^1,\mathbb{R}^n):= \{ \gamma \in H^3(S^1,\mathbb{R}^n): |\gamma'(u)|> 0 \}$.
\end{lemma}
\begin{proof}
Write $\omega_\gamma(u)=\frac{1}{\mathcal{L}(\gamma)}\int_0^u |\gamma'(\tau)|\,d\tau$, so that $\alpha(w):=P(\gamma)(w)=\gamma \circ \omega_\gamma^{-1}(w)=\gamma(u)$ and
\begin{align*}
\alpha'(w)&=\mathcal{L}(\gamma)\frac{\gamma'(u)}{|\gamma'(u)|}=:\mathcal{L}(\gamma)T_\gamma (u), \\
\alpha''(w)&= \gamma''(u)\frac{\mathcal{L}(\gamma)^2}{|\gamma'(u)|^2}-\gamma'(u)\frac{\mathcal{L}(\gamma)^2}{|\gamma'(u)|^4}\ip{\gamma''(u),\gamma'(u)} =\mathcal{L}(\gamma)^2\kappa_\gamma(u).
\end{align*}
Let $c\in \mathcal{I}^3(S^1,\mathbb{R}^n)$ and $\gamma \in \mathcal{I}^2(S^1,\mathbb{R}^n)$. We will make repeated use of the following estimate on parameters
\begin{equation}
\label{eq:param}
\begin{aligned}
|u-\omega_\gamma^{-1}\circ \omega_c(u)|
&=|\omega_\gamma^{-1}\circ \omega_\gamma(u)-\omega_\gamma^{-1}\circ \omega_c(u)| \\
&=\Bigl| \int_{\omega_c(u)}^{\omega_\gamma(u)} (\omega_\gamma^{-1})'(\tau)\, d\tau \Bigr|\\
&\leq \mathcal{L}(\gamma) \| |\gamma'|^{-1} \|_{L^\infty} \Bigl| \int_0^u \bigl[ |\gamma'(\tau)| - |c'(\tau)| \bigr] d\tau \Bigr| \\
& \leq \mathcal{L}(\gamma) \| |\gamma'|^{-1} \|_{L^\infty} \| \gamma'-c' \|_{L^1}.
\end{aligned}
\end{equation}
Then
\begin{align*}
\| P(c)-P(\gamma) \|^2_{L^2}
&= \int_0^1 | c\circ \omega_{c}^{-1}(w)-\gamma\circ\omega_{\gamma}^{-1}(w)|^2\, dw \\
&\leq \begin{multlined}[t]\int_0^1 |c(u)-c\circ \omega_{\gamma}^{-1}\circ \omega_{c}(u)|^2 \frac{|c'(u)|}{\mathcal{L}(c)}\, du\\ +\int_0^1 |c\circ \omega_{\gamma}^{-1}(w) -\gamma\circ \omega_{\gamma}^{-1}(w)|^2\, dw
\end{multlined}
\\
&\leq \frac{\|c'\|_{L^\infty}^3}{\mathcal{L}(c)}\int_0^1 |u-\omega_{\gamma}^{-1}\circ \omega_{c}(u)|^2\,du +\frac{\|\gamma'\|_{L^\infty}}{\mathcal{L}(\gamma)}\| c-\gamma \|_{L^2}^2\\
&\leq \frac{\|c'\|_{L^\infty}^3}{\mathcal{L}(c)}\mathcal{L}(\gamma)^2 \| |\gamma'|^{-1}\|_{L^\infty}^2 \|c'-\gamma'\|_{L^2}^2
+ \frac{\|\gamma'\|_{L^\infty}}{\mathcal{L}(\gamma)}\|c-\gamma\|_{L^2}^2.
\end{align*}
For the difference of first derivatives we have
\begin{align*}
\| P(c)'-P(\gamma)'\|_{L^2}^2
&= \int_0^1 |\mathcal{L}(c)T_c \circ \omega_c^{-1}(w)- \mathcal{L}(\gamma) T_\gamma \circ \omega_\gamma^{-1} (w)|^2 \, dw \\
&\leq \mathcal{L}(c)^2\int_0^1 |T_c\circ \omega_c^{-1}(w)-T_c\circ \omega_\gamma^{-1}(w)|^2\, dw \\
& \qquad +\int_0^1 |\mathcal{L}(c)T_c\circ \omega_\gamma^{-1}(w)-\mathcal{L}(\gamma)T_\gamma \circ \omega_\gamma^{-1}(w) |^2\, dw.
\end{align*}
For the first term on the right hand side of the inequality, using a change of variables, the fundamental theorem of calculus and \eqref{eq:param} we have
\begin{align*}
\mathcal{L}(c)^2\int_0^1 |T_c\circ \omega_c^{-1}(w)-T_c\circ \omega_\gamma^{-1}(w)|^2\, dw
&\leq \mathcal{L}(c)\|c'\|_{L^\infty} \|T_c'\|^2_{L^2} \int_0^1 |u-\omega_\gamma^{-1}\circ \omega_c(u)|^2 \, du \\
&\leq \mathcal{L}(c)\|c'\|_{L^\infty} \|T_c'\|^2_{L^2} \mathcal{L}(\gamma)^2 \| |\gamma'|^{-1}\|_{L^\infty}^2 \|\gamma'-c'\|_{L^2}^2.
\end{align*}
For the second term
\begin{align*}
& \int_0^1 |\mathcal{L}(c)T_c\circ \omega_\gamma^{-1}(w)-\mathcal{L}(\gamma)T_\gamma \circ \omega_\gamma^{-1}(w) |^2\, dw \\
& \qquad \leq 2\frac{\|\gamma'\|_{L^\infty}}{\mathcal{L}(\gamma)} \int_0^1 \bigl[ |\mathcal{L}(c)-\mathcal{L}(\gamma)|^2 + \mathcal{L}(\gamma)^2 |T_c(u)-T_\gamma(u)|^2 \bigr] \, du\\
& \qquad \leq 2\frac{\|\gamma'\|_{L^\infty}}{\mathcal{L}(\gamma)}\|c'-\gamma'\|_{L^2}^2 + 4\|\gamma'\|_{L^\infty} \mathcal{L}(\gamma){\| |\gamma'|^{-1}\|_{L^\infty}^2} \|c'-\gamma'\|_{L^2}^2.
\end{align*}
Similarly for the difference of second derivatives
\begin{align*}
\|P(c)''-P(\gamma)''\|_{L^2}^2
&=\int_0^1 |\mathcal{L}(c)^2\kappa_c\circ \omega_c^{-1}(w)-\mathcal{L}(\gamma)^2 \kappa_\gamma\circ \omega_\gamma^{-1}(w)|^2 \, dw \\
&\leq \begin{multlined}[t] \mathcal{L}(c)^3 \|c'\|_{L^\infty} \|\kappa_c'\|_{L^2}^2 \mathcal{L}(\gamma)^2 \| |\gamma'|^{-1}\|_{L^\infty}^2 \|\gamma'-c'\|_{L^2}^2\\
+2\frac{\|\gamma'\|_{L^\infty}}{\mathcal{L}(\gamma)} \|\kappa_c\|_{L^2}^2 (\mathcal{L}(c)+\mathcal{L}(\gamma))^2 \|c'-\gamma'\|_{L^2}^2 \\
+2\|\gamma'\|_{L^\infty} \mathcal{L}(\gamma)^3 \|\kappa_c-\kappa_\gamma\|_{L^2}^2.
\end{multlined}
\end{align*}
We have assumed $c \in \mathcal{I}^3(S^1,\mathbb{R}^n)$ so that $\ltwo{\kappa_c'}$ is bounded.
Recall that by Lemma~\ref{superlemma}, if $\|c-\gamma\|_{L^2} \leq \delta$ then $|\gamma'|$ (and therefore also $\mathcal{L}(\gamma)$) is bounded above and below, and $T$ and $k$ are locally Lipschitz.
Combining these facts with the estimates above, each of $\|P(c)-P(\gamma)\|_{L^2}$, $\|P(c)'-P(\gamma)'\|_{L^2}$ and $\|P(c)''-P(\gamma)''\|_{L^2}$ tends to zero as $\|c-\gamma\|_{H^2}\to 0$, which proves continuity at $c$.
\end{proof}
\begin{theorem}{\rm (Gradient inequality on $\mathcal{I}^2(S^1,\mathbb{R}^n)$)}\label{LS2}
Let $\gamma_\infty \in \mathcal{I}^2(S^1,\mathbb{R}^n)$ be a stationary point of $\E$.
Then there are constants $Z\in (0,\infty), c \in (0,1]$ and $\theta\in \left [\tfrac{1}{2},1\right )$ such that if $\gamma \in \mathcal{I}^2(S^1,\mathbb{R}^n)$ with $\|\gamma-\gamma_\infty\|_{H^2}<c$ then
\[
\|\grad_\gamma \E\|_{H^2(ds)} \geq Z |\E(\gamma)-\E(\gamma_\infty)|^\theta.
\]
\end{theorem}
\begin{proof}
Let $\alpha,\alpha_\infty\in \Omega$ be the respective reparametrisations of $\gamma,\gamma_\infty$.
Then since $\E$ and $d\E$ are parametrisation invariant and $\|d(\E|\Omega)_\alpha\|_{T_\gamma\Omega^*}\leq \|d\E_\alpha\|_{{H^2}^*}$ we have by Proposition \ref{LS1}
\[
\| d\E_\alpha \|_{{H^2}^*} \geq \| d(\E|\Omega)_\alpha \|_{T_\gamma\Omega^*} \geq Z|\E(\alpha)-\E(\alpha_\infty)|^\theta =Z|\E(\gamma)-\E(\gamma_\infty)|^\theta
\]
provided $\|\alpha-\alpha_\infty\|_{H^2}\leq \sigma$, which can be arranged according to Lemma \ref{Pconts} because $\alpha_\infty$ is at least in $H^4$ (Proposition \ref{critsmooth}).
Since reparametrisation is a linear map on $H^2$ we have $d\E_\alpha(V) =d\E_\gamma (V \circ \omega_\gamma)$ and then
\begin{equation}\label{supnormalpha}
\|d\E_\alpha\|_{{H^2}^*}
=\sup_{\|V\|_{H^2}=1} |d\E_\gamma (V\circ \omega_\gamma)|
= \sup_{\|V\|_{H^2}=1} \ip{\grad \E_\gamma,V\circ \omega_\gamma}_{H^2(ds),\gamma}.
\end{equation}
From $\frac{d\omega^{-1}_\gamma}{du}=\mathcal{L}(\gamma)/|\gamma'(u)|$ we calculate
\begin{equation}\label{Vcirc}
\| V\circ \omega_\gamma \|^2_{H^2(ds),\gamma} = \mathcal{L}(\gamma) \|V\|_{L^2}^2 + \dfrac{1}{\mathcal{L}(\gamma)}\|V'\|_{L^2}^2 + \frac{1}{\mathcal{L}(\gamma)^3} \|V''\|_{L^2}^2.
\end{equation}
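Indeed, by the change of variables $w=\omega_\gamma(u)$, $dw=|\gamma'(u)|/\mathcal{L}(\gamma)\,du$, together with $\partial_s (V\circ\omega_\gamma) = \frac{1}{\mathcal{L}(\gamma)} V'\circ \omega_\gamma$, each term transforms as, for example,
\[
\int_0^{\mathcal{L}(\gamma)} |V\circ \omega_\gamma|^2 \, ds = \int_0^1 |V\circ \omega_\gamma|^2\, |\gamma'|\, du = \mathcal{L}(\gamma) \int_0^1 |V(w)|^2 \, dw = \mathcal{L}(\gamma)\|V\|_{L^2}^2.
\]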
Since $\gamma$ is in an $H^2$ neighbourhood of $\gamma_\infty$ we have upper and lower bounds for $\mathcal{L}(\gamma)$ and therefore a constant $c_1$ such that
\[
\|d\E_\alpha\|_{{H^2}^*} \leq c_1 \|\grad \E_\gamma \|_{H^2(ds)}.
\]
\end{proof}
\section{Convergence} \label{subsection:full-limit-convergence}
Since the \L ojasiewicz--Simon gradient inequality proved above only holds in an $H^2$-neighbourhood of a critical point we will need subconvergence in $H^2$ of minimizing sequences in order to use it.
To this end we will prove a Palais--Smale type condition for $\E$ using the next lemma.
\begin{lemma}
The restriction $\E|\Omega$ is coercive modulo $C^1$ in the following sense$\colon$
given $c_0>0$ there exist positive constants $c_1$ and $c_2$ depending on $c_0$ such that for all $\alpha\in \Omega$ with $\|\alpha\|_{H^2}<c_0$ and all $V\in T_\alpha\Omega$,
\begin{align}\label{coercive}
d^2(\E|\Omega)_\alpha(V,V) \geq c_1 \|V\|_{H^2}^2 - c_2 \|V\|_{C^1}^2.
\end{align}
\end{lemma}
\begin{proof}
Rather than calculate $d^2(\E|\Omega)_\alpha(V,V)$ by putting a tangent vector $V$ of the form described by \eqref{omegatangent} into the formula \eqref{d2E} for $d^2\E$,
we find it easier to calculate $d^2(\E|\Omega)$ directly.
Indeed, since $\alpha \in \Omega$ satisfies $|\alpha'(u)|=\mathcal{L}(\alpha)$, the bending part of $\E(\alpha)$ has the simpler form
\[
\E(\alpha)=\frac{1}{\mathcal{L}(\alpha)^3}\int_0^1 |\alpha''|^2 \, du,
\]
where here and below we suppress the length term $\lambda\mathcal{L}(\alpha)$, whose contributions to the second variation are of lower order and can be absorbed into the $c_2\|V\|_{C^1}^2$ term.
Then if we let $a(\varepsilon)$ be a path in $\Omega$ with $a(0)=\alpha$ and $\partial_\varepsilon a|_{\varepsilon=0}=V\in T_\alpha \Omega$ we calculate
\begin{align*}
d(\E|\Omega)_\alpha V =\partial_\varepsilon\E(a)|_{\varepsilon=0}
&= -3 \frac{\partial_\varepsilon \mathcal{L}(a)}{\mathcal{L}(a)^4}\int_0^1 |a''|^2 \, du + \frac{2}{\mathcal{L}(a)^3} \int_0^1 \ip{\partial_\varepsilon a'',a''} \, du \Bigr|_{\varepsilon=0} \\
&= \frac{3 \E(\alpha)}{\mathcal{L}(\alpha)^2}\int_0^1\ip{\alpha'',V}\, du + \frac{2}{\mathcal{L}(\alpha)^3} \int_0^1 \ip{V'',\alpha''} \, du,
\end{align*}
where we have used $\partial_\varepsilon \mathcal{L}(a)|_{\varepsilon=0}=-\frac{1}{\mathcal{L}(\alpha)}\int\ip{\alpha'',V}\, du$. Now let $b(\varepsilon)$ be a path in $\Omega$ with $b(0)=\alpha$ and $\partial_\varepsilon b|_{\varepsilon=0}=W\in T_\alpha \Omega$ and calculate
\begin{align*}
d^2\E_\alpha(V,W)&=\partial_\varepsilon(d\E_b(V))|_{\varepsilon=0}\\
&=\begin{multlined}[t]
\frac{3}{\mathcal{L}(\alpha)^2} d\E_\alpha W\int_0^1\ip{\alpha'',V}\, du -6 \frac{d\mathcal{L}_\alpha W}{\mathcal{L}(\alpha)^3}\E(\alpha)\int_0^1 \ip{\alpha'',V}\, du \\
+3\frac{\E(\alpha)}{\mathcal{L}(\alpha)^2}\int_0^1\ip{W'',V} \,du - 6\frac{d\mathcal{L}_\alpha W}{\mathcal{L}(\alpha)^4}\int_0^1 \ip{V'',\alpha''}\, du\\+\frac{2}{\mathcal{L}(\alpha)^3}\int_0^1\ip{V'',W''}\, du
\end{multlined}\\
&= \begin{multlined}[t]
\frac{3}{\mathcal{L}(\alpha)^2}\int_0^1 \ip{\alpha'',V}\, du \left(\frac{5\E(\alpha)}{\mathcal{L}(\alpha)^2}\int_0^1\ip{\alpha'',W}\, du
+ \frac{2}{\mathcal{L}(\alpha)^3}\int_0^1\ip{W'',\alpha''}\, du \right) \\
-\frac{3\E(\alpha)}{\mathcal{L}(\alpha)^2}\int_0^1 \ip{W',V'}\, du
+\frac{6}{\mathcal{L}(\alpha)^5}\int_0^1\ip{\alpha'',W}du\int_0^1\ip{V'',\alpha''}\, du \\
+\frac{2}{\mathcal{L}(\alpha)^3}\int_0^1\ip{V'',W''}\, du.
\end{multlined}
\end{align*}
Hence
\begin{align*}
d^2\E_\alpha(V,V)
&= \frac{2}{\mathcal{L}(\alpha)^3}\int_0^1 |V''|^2 \, du - \frac{3\E(\alpha)}{\mathcal{L}(\alpha)^2}\int_0^1 |V'|^2 \, du
+ \frac{15\E(\alpha)}{\mathcal{L}(\alpha)^4} \left( \int_0^1 \ip{\alpha'',V}\, du \right)^2\\
& \qquad +\frac{12}{\mathcal{L}(\alpha)^5} \int_0^1\ip{\alpha'',V}\, du \int_0^1 \ip{V'',\alpha''}\, du
\end{align*}
and then there are constants $c_1,c_2,c_3,c_4$ depending on $c_0$ such that
\[
d^2\E_\alpha(V,V) \geq c_1 \|V''\|_{L^2}^2 - c_2 \|V'\|_{L^2}^2 - c_3 \|V\|_{L^2}^2 - c_4 \|V\|_{L^2} \|V''\|_{L^2}.
\]
If we apply the inequality $2ab\leq \varepsilon a^2+\frac{1}{\varepsilon}b^2, \varepsilon>0$ to the last term, choosing $\varepsilon$ sufficiently small, and then combine terms and refresh the constants we obtain
\[
d^2\E_\alpha(V,V) \geq c_1 \|V\|_{H^2}^2 - c_2 \|V\|_{H^1}^2.
\]
Recalling that for $V \in H^2$ we have $\|V\|_{H^1} \leq \|V\|_{C^1}$, the result now follows.
\end{proof}
We remark that \eqref{coercive} has the following equivalent formulation in terms of the first derivative:
\begin{equation}\label{coercive2}
\bigl( d(\E|\Omega)_\alpha - d(\E|\Omega)_\beta \bigr)(\alpha-\beta) \geq c_1 \|\alpha-\beta\|_{H^2}^2 - c_2 \|\alpha-\beta\|_{C^1}^2.
\end{equation}
Indeed, assuming $\|\alpha + V\|_{H^2} \leq c_0$ we have
\begin{align*}
\bigl( d(\E|\Omega)_{\alpha+V}-d(\E|\Omega)_\alpha \bigr)V &=\int_0^1 \frac{d}{dt}d(\E|\Omega)_{\alpha+tV}(V) dt \\
&=\int_0^1 d^2(\E|\Omega)_{\alpha+tV}(V,V)dt
\geq c_1 \|V\|_{H^2}^2 - c_2 \|V\|_{C^1}^2.
\end{align*}
Now replacing $V$ with $\beta-\alpha$ and assuming e.g. $\|\alpha\|_{H^2},\|\beta\|_{H^2} \leq c_0/2$ gives \eqref{coercive2}.
As a corollary we have the following Palais--Smale type condition for $\E|\Omega$ (it is not quite the standard condition because we need to assume an $L^2$-bound on the sequence).
\begin{corollary}\label{PSC}
Let $(\alpha_i)$ be a sequence of curves in $\Omega$ such that $\E(\alpha_i)$ and $\| \alpha_i \|_{L^2}$ are bounded, and $|d(\E|\Omega)_{\alpha_i}|\to 0$. Then $(\alpha_i)$ has a subsequence converging in $H^2$.
\end{corollary}
\begin{proof}
The assumed bounds on $\E(\alpha_i)$ and $\|\alpha_i\|_{L^2}$ together imply an $H^2$-bound and then by compactness of the Sobolev imbedding $H^2 \subset C^1$ we have a subsequence which converges in $C^1$.
Then from \eqref{coercive2} we get
\[
\bigl( d(\E|\Omega)_{\alpha_j}-d(\E|\Omega)_{\alpha_i} \bigr)(\alpha_j-\alpha_i) \geq c_1 \|\alpha_i-\alpha_j\|_{H^2}^2 - c_2 \|\alpha_i-\alpha_j\|_{C^1}^2
\]
and then by the assumption $|d(\E|\Omega)_{\alpha_i}| \to 0$ our $C^1$-convergent subsequence converges in $H^2$.
\end{proof}
\begin{theorem}
Let $\gamma$ be a solution to the $H^2(ds)$-gradient flow of the modified elastic energy $\E$.
Then there exists $\gamma_\infty\in H^2(S^1,\mathbb{R}^n)$ such that $\gamma(t)\to \gamma_\infty$ in $H^2$ as $t \to \infty$.
\end{theorem}
\begin{proof}
Consider the projected and translated flow $\alpha(t):=P(\gamma(t)) - \frac{1}{\mathcal{L}(\gamma)}\int^{\mathcal{L}(\gamma)}_0 \gamma \, ds$ with $P$ as in Lemma \ref{Pconts} taking $\gamma(t)$ to its arc length proportional reparametrisation.
From parametrisation and translation invariance of the energy we have $\E(\alpha)=\E(\gamma)\leq \E(\gamma(0))$.
Moreover, using the estimates in Lemma~\ref{Pconts} and the Poincar\'e--Wirtinger inequality, we see that $\|\alpha(t) \|_{L^2}$ is also bounded.
From \eqref{l2time} (with $T\to \infty$) there exists a sequence $(t_i)$ such that $\norm{\grad \E(\gamma(t_i))}_{H^2(ds)}\to 0$.
Then \eqref{supnormalpha} and \eqref{Vcirc} imply that
\[
\|d\E_\alpha\|_{{H^2}^*}\leq \Bigl( \mathcal{L}(\gamma) + \frac{1}{\mathcal{L}(\gamma)}+ \frac{1}{\mathcal{L}(\gamma)^3}\Bigr)^{1/2} \|\grad \E_\gamma \|_{H^2(ds)}.
\]
Since $\mathcal{L}(\gamma)<\E(\gamma(0))/\lambda$, and since applying the H\"older inequality to Fenchel's theorem $2\pi\leq \int |k|\, ds$ gives
\[
\frac{1}{\mathcal{L}(\gamma)}\leq \frac{1}{4\pi^2}\int k^2\, ds < \frac{1}{4\pi^2}\E(\gamma(0)),
\]
it follows that $\|d\E_{\alpha(t_i)}\|_{{H^2}^*}\to 0$ too.
Hence $\alpha(t_i)$, which we abbreviate to $\alpha_i$, satisfies the assumptions of Corollary \ref{PSC} and there exists a subsequence, still denoted $\alpha_i$, converging in $H^2$ to a stationary point $\alpha_\infty$.
Now by Theorem~\ref{LS2} there are constants $Z>0$, $\sigma\in (0,1]$, and $\theta\in [1/2,1)$ such that for any $x\in \mathcal{I}^2$ with $\|x-\alpha_\infty\|_{H^2}<\sigma$:
\[
\|\grad \E_x\|_{H^2(ds)} \geq Z \abs{\E(x)-\E(\alpha_\infty)}^\theta.
\]
Choose $i$ such that $\|\alpha_i-\alpha_\infty\|_{H^2}<\sigma$ and let $\beta_i(t)$ be the $H^2(ds)$-gradient flow with initial data $\beta_i(t_i)=\alpha_i$.
Then for all $t>t_i$, $\beta_i(t)$ is a fixed (i.e. time independent) reparametrisation of $\gamma(t)$, namely $\beta_i(t)=\gamma(t)\circ \omega_{\gamma(t_i)}^{-1}$,
and therefore by \eqref{gradinvariance} we have
\begin{equation}
\label{eq:210702-1}
\|\grad \E_{\beta_i(t)}\|_{H^2(ds)}=\|\grad \E_{\gamma(t)}\|_{H^2(ds)}.
\end{equation}
It follows that the trajectories $\beta_i(t)$ and $\gamma(t)$ have the same $H^2(ds)$-length.
Let $T_i$ be the maximum time such that $\| \beta_i(t)-\alpha_\infty \|_{H^2}<\sigma$ for all $t\in [t_i,T_i)$.
Define
\[
H(t):=(\E(\gamma(t))-\E(\alpha_\infty))^{1-\theta}.
\]
Then $H>0$, and $H$ is monotonically decreasing because $t\mapsto \E(\gamma(t))$ is non-increasing along the flow (recall $\E(\alpha)=\E(\gamma)$).
Since Proposition~\ref{LS1} implies that the \L ojasiewicz--Simon gradient inequality holds for $\beta_i(t)$ with $t\in [t_i,T_i)$,
we observe from $\E(\beta_i(t))=\E(\gamma(t))$ and \eqref{eq:210702-1} that
\begin{align*}
-H'(t) &= -(1-\theta)\left (\E(\gamma(t))-\E(\alpha_\infty)\right )^{-\theta} \frac{d\E(\gamma(t))}{dt} \\
&=(1-\theta)\left (\E(\gamma(t))-\E(\alpha_\infty)\right )^{-\theta} \| \grad \E_\gamma\|^2_{H^2(ds)}
\geq (1-\theta) Z \|\grad \E_\gamma\|_{H^2(ds)}.
\end{align*}
Integrating over $[t_i,T_i)$ we get
\[
(1-\theta) Z \int_{t_i}^{T_i} \| \grad \E_\gamma \|_{H^2(ds)}\,dt\leq H(t_i)-H(T_i).
\]
Therefore if we let $W:=\{ t\geq t_i : \|\beta_j(t)-\alpha_\infty\|_{H^2}<\sigma \, \text{ for some } j\geq i\}$ we have that
\[
\int_W \| \grad \E_\gamma\|_{H^2(ds)} \,dt \leq \dfrac{1}{Z(1-\theta)} H(t_i).
\]
In fact, there exists $N \in \mathbb{N}$ such that $\| \beta_N(t)-\alpha_\infty\|_{H^2}<\sigma$ for \emph{all} $t>t_N$.
If not, then for each $j \in \mathbb{N}$ there exists $T_j$ such that $\beta_j(T_j)$ is on the boundary of the ball $B^{H^2}_{\tilde \sigma}(\alpha_\infty)$, where $\tilde \sigma>0$ is chosen such that $B_{\tilde\sigma}^{H^2}(\alpha_\infty)\subset B_r^{\dist}(\alpha_\infty)\subset B_{\sigma}^{H^2} (\alpha_\infty)$ for some $r>0$.
Therefore Lemma~\ref{bruv1} applies, and
\begin{align*}
\tilde{\sigma} = \|\beta_j(T_j)-\alpha_\infty\|_{H^2}
&\leq \|\beta_j(t_j)-\alpha_\infty\|_{H^2} + \|\beta_j(t_j)-\beta_j(T_j)\|_{H^2} \\
&\leq \|\alpha(t_j)-\alpha_\infty\|_{H^2} + C \dist(\beta_j(t_j),\beta_j(T_j))\\
&\leq \|\alpha(t_j)-\alpha_\infty\|_{H^2} + C \int_{t_j}^{T_j} \|\gamma_t\|_{H^2(ds)}\, dt,
\end{align*}
where we have used \eqref{eq:210702-1}.
Thus, if $T_j<\infty$ for all $j \in \mathbb{N}$, then $\int_W \| \grad \E_\gamma\|_{H^2(ds)}\,dt$ cannot be finite.
So there exists $N \in \mathbb{N}$ such that $\beta_N(t)\in B_\sigma(\alpha_\infty)$ for all $t>t_N$ and therefore
\[
\int_{t_N}^\infty \|\gamma_t\|_{H^2(ds)} \, dt <\infty,
\]
that is, the $H^2(ds)$-length of $\gamma(t)$ is finite. Hence by Lemma~\ref{finitelengthlimit} the flow converges in the $H^2(ds)$-distance, and therefore also in $H^2$ (Lemma \ref{bruv1}).
\end{proof}
\begin{remark}{\rm
We remark that as in \cite{Langer-Singer_1985} the classification of closed elastica in $\mathbb{R}^2$ and $\mathbb{R}^3$ allows us to determine the shape of the limit in these cases. In $\mathbb{R}^2$ the only closed elastica are the (geometric) circle, the figure eight elastica and their multiple covers. Since the flow must remain in the homotopy class of the initial curve, for rotation index $p>0$ the limit is a $p$-fold circle, and an initial curve with rotation index $0$ will converge to a (possibly multiply covered) figure eight. In $\mathbb{R}^3$ there are more closed elastica; however, it was proved in \cite{Langer-Singer_1985} that circles are the only \emph{stable} closed elastica.
In both $\mathbb{R}^2$ and $\mathbb{R}^3$ it follows from Proposition \ref{critsmooth} that the limiting circle has radius $\abs{\lambda}^{-1}$.
}
\end{remark}
\section*{Introduction}
In a series of four articles (\cite{Pep74, Pep79, Pep80, Pep82}),
Th\'eophile P\'epin announced the unsolvability of certain
diophantine equations of the form $ax^4 + by^4 = z^2$. He did
not supply proofs for his claims; a few of his ``theorems''
were first proved in the author's article \cite{LPep} using
techniques that P\'epin was not familiar with, such as the
arithmetic of ideals\footnote{Ideals were introduced by Dedekind
in 1879, but took off only after Hilbert published his Zahlbericht
in 1897; its French translation \cite{HZB-f} started appearing in 1909.
Kummer had introduced ideal numbers already in the 1840s, but these
were used exclusively for studying higher reciprocity laws and
Fermat's Last Theorem. For investigating diophantine equations, the
mathematicians of the late 19th century preferred Gauss's theory of
quadratic forms (see Dirichlet \cite{Dir25,Dir28} and P\'epin \cite{Pep75})
to Dedekind's ideal theory.} in quadratic number fields.
At the time P\'epin was studying these diophantine equations,
he was working on simplifying Gauss's theory of composition
of quadratic forms (see e.g. \cite{Pep75,PepC}), and it seems natural
to look into the theory of binary quadratic forms for approaches
to P\'epin's results. In fact we will find that all of P\'epin's
claims (and a lot more) can be proved very naturally using
quadratic forms.
We start by briefly recalling the relevant facts following
Bhargava's exposition \cite{Bhar1} of Gauss's theory
(Cox \cite{Cox} and Flath \cite{Flath} also provide excellent
introductions) for the following reasons:
\begin{itemize}
\item It gives us an opportunity to point out some classical references on
composition of forms that deserve to be better known; this
includes work by Cayley, Riss, Speiser and others.
\item Most mathematicians nowadays are unfamiliar with the classical
language of binary quadratic forms, and in particular with
composition of forms.
\item We need some results (such as Thm. \ref{T2} below) in a form
that is slightly stronger than what can be found in the literature.
\item We have to fix the language anyway.
\end{itemize}
In addition, working with ray class groups instead of forms with
nonfundamental discriminants does not save space since we would have
to translate the results into the language of forms for comparing them
with P\'epin's statements.
Afterwards, we will supply the proofs P\'epin must have had in mind.
In the final section we will interpret our results in terms of Hasse's
Local-Global Principle and the Tate-Shafarevich group of elliptic curves.
\section{Composition of Binary Quadratic Forms}
A binary quadratic form is a homogeneous polynomial
$Q(x,y) = Ax^2 + Bxy + Cy^2$ of degree $2$ in the two variables
$x$ and $y$, with coefficients $A, B, C \in \Z$;
in the following, we will use the notation $Q = (A,B,C)$.
The discriminant $\Delta = B^2 - 4AC$ of $Q$ will always be
assumed to be a nonsquare. A form $(A,B,C)$ is called primitive
if $\gcd(A,B,C) = 1$.
The group $\SL_2(\Z)$ of matrices $S = \smatr{r}{s}{t}{u}$ with
$r, s, t, u \in \Z$ and determinant $\det S = ru-st = +1$ acts
on the set of primitive forms with discriminant $\Delta$ via
$Q|_S = Q(rx+sy,tx+uy)$; two forms $Q$ and $Q'$ are called equivalent
if there is an $S \in \SL_2(\Z)$ such that $Q' = Q|_S$.
Given $Q = (A,B,C)$, the forms $Q' = Q|_S$ with $S = \smatr{1}{s}{0}{1}$
are said to be parallel to $Q$; their coefficients are
$Q' = (A,B+2As,C')$ with $C'=Q(s,1)$. Observe in particular that
we can always change $B$ modulo $2A$ (and compute the last coefficient
from the discriminant $\Delta$) without leaving the equivalence
class of the form. There are finitely many equivalence classes
since each form is equivalent to one whose coefficients are bounded
by $\sqrt{|\Delta|}$.
A form $Q = (A,B,C)$ represents an integer $m$ primitively if
there exist coprime integers $a, b$ such that $m = Q(a,b)$.
If $Q$ primitively represents $m$, then there is an $S \in \SL_2(\Z)$
such that $Q|_S = (A',B',C')$ with $A' = m$. In fact, write
$Q(r,t) = m$; since $\gcd(r,t) = 1$, there exist $s, u \in \Z$
with $ru-st = 1$; now set $S = \smatr{r}{s}{t}{u}$. This implies
that forms representing $1$ are equivalent to the principal form
$Q_0$ defined by
$$ Q_0(x,y) = \begin{cases}
(1,0,m) & \text{ if } \Delta = -4m, \\
(1,1,m) & \text{ if } \Delta = 1-4m
\end{cases} $$
In fact, forms representing $1$ are equivalent to forms $(1,B,C)$, and
reducing $B$ modulo $2$ shows that they are equivalent to $Q_0$.
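The argument just given is completely constructive; the following Python sketch (our own illustration, with ad-hoc helper names) builds $S$ from a primitive representation via the extended Euclidean algorithm and moves the represented integer into the first coefficient:

```python
# Illustration: from a primitive representation m = Q(r,t), build a matrix
# S = (r, s; t, u) in SL_2(Z) such that the first coefficient of Q|_S is m.

def egcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def act(Q, S):
    """Coefficients of Q|_S(x, y) = Q(r*x + s*y, t*x + u*y)."""
    A, B, C = Q
    r, s, t, u = S
    return (A*r*r + B*r*t + C*t*t,
            2*A*r*s + B*(r*u + s*t) + 2*C*t*u,
            A*s*s + B*s*u + C*u*u)

def move_to_front(Q, r, t):
    g, x, y = egcd(r, t)
    assert g == 1                    # Q(r,t) must be a primitive representation
    return act(Q, (r, -y, t, x))     # det S = r*x - (-y)*t = 1

assert move_to_front((2, 1, 3), 0, 1)[0] == 3   # Q(0,1) = 3 moves to the front
```

The resulting form automatically has the same discriminant, since the action of $\SL_2(\Z)$ preserves it.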
The set of $\SL_2(\Z)$-equivalence classes of primitive forms
(positive definite if $\Delta < 0$) can be given a group
structure by introducing composition of forms, which can be most
easily explained using Bhargava's cubes\footnote{Historically,
Bhargava's cubes occurred in the form of
$2 \times 2 \times 2$-hypermatrices in the work of Cayley
(see \cite{Cay45,CayHD}, or, for a modern account,
\cite[Chap. 14, Prop. 1.4]{GKZ}), as pairs of bilinear forms
as in Eqn. (\ref{Ebf1}) and (\ref{Ebf2}) below (see
Gauss \cite{Gauss} and Dedekind \cite{Dede}), as a trilinear
form (Dedekind \cite{Dede} and Weber \cite{WebC}),
and as $2 \times 4$-matrices
$\big(\begin{smallmatrix} a & b & c & d \\
e & f & g & h \end{smallmatrix}\big)$
(see Speiser \cite{Spei}, Riss \cite{Riss}, Shanks \cite{Sha3,Shac1,Shac2},
Towber \cite{Tow}, and most other presentations of composition).}.
Each cube
$$ \cubeA{a}{b}{c}{d}{e}{f}{g}{h}{\cA} $$
of eight integers $a, b, c, d, e, f, g, h$ can be sliced in three
different ways (up-down, left-right, front-back):
\begin{align*}
UD & & M_1 = U & = \matr{a}{e}{b}{f}, & N_1 = D & = \matr{c}{g}{d}{h}, \\
LR & & M_2 = L & = \matr{a}{c}{e}{g}, & N_2 = R & = \matr{b}{d}{f}{h}, \\
FB & & M_3 = F & = \matr{a}{b}{c}{d}, & N_3 = B & = \matr{e}{f}{g}{h}.
\end{align*}
To each slicing we can associate a binary quadratic form
$Q_i = Q_i^\cA$ by putting
$$ Q_i(x,y) = - \det (M_i x + N_i y). $$
Explicitly we find
\begin{align}
\label{EQ1} Q_1(x,y) & = (be-af)x^2 + (bg+de-ah-cf)xy + (dg-ch)y^2, \\
\label{EQ2} Q_2(x,y) & = (ce-ag)x^2 + (cf+de-ah-bg)xy + (df-bh)y^2, \\
\label{EQ3} Q_3(x,y) & = (bc-ad)x^2 + (bg+cf-ah-de)xy + (fg-eh)y^2.
\end{align}
These forms all have the same discriminant, and if two of them are
primitive (or positive definite), then so is the third.
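As a quick sanity check, the slicing formulas (\ref{EQ1})--(\ref{EQ3}) and the claim about the common discriminant can be tested numerically; the following Python sketch is our own illustration (the helper names are ad hoc, not from the literature):

```python
# Illustration: slice a cube [a,b,c,d,e,f,g,h] according to the three
# formulas above and check that Q_1, Q_2, Q_3 share one discriminant.

def cube_forms(a, b, c, d, e, f, g, h):
    """Coefficient triples of the forms Q_1, Q_2, Q_3 attached to the cube."""
    Q1 = (b*e - a*f, b*g + d*e - a*h - c*f, d*g - c*h)
    Q2 = (c*e - a*g, c*f + d*e - a*h - b*g, d*f - b*h)
    Q3 = (b*c - a*d, b*g + c*f - a*h - d*e, f*g - e*h)
    return Q1, Q2, Q3

def disc(form):
    A, B, C = form
    return B*B - 4*A*C

Q1, Q2, Q3 = cube_forms(0, 1, 1, 0, 2, 1, 1, -3)   # an arbitrary integer cube
assert disc(Q1) == disc(Q2) == disc(Q3) == -24
```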
On the set $\Cl^+(\Delta)$ of equivalence classes of primitive
forms with discriminant $\Delta$ we can introduce a group structure
by demanding that
\begin{itemize}
\item The neutral element $[1]$ is the class of the principal form
$Q_0(x,y)$.
\item We have $([Q_1] \cdot [Q_2]) \cdot [Q_3] = [1]$ if and only if there
exists a cube $\cA$ with $Q_i = Q_i^\cA$ for $i = 1, 2, 3$.
\end{itemize}
Most of the group axioms are easily checked: the cubes
$$ \cube{0}{1}{1}{0}{1}{0}{0}{\ m} \qquad \text{ or } \qquad
\cube{0}{1}{1}{1}{1}{1}{1}{\ \mu} $$
show that $[Q_0][Q_0][Q_0] = [1]$ in the two cases $\Delta = 4m$
and $\Delta = 4m+1 = 4\mu-3$, with $\mu = m+1$.
Next observe that $B \equiv \Delta \bmod 2$; thus we can put $B = 2b$ if
$\Delta = 4m$, and $B = 2b-1$ if $\Delta = 1+4m$. With
$b' = 1-b$ we then find that the two cubes
$$ \cube{0}{1}{1}{0}{A}{\ -b}{b}{\ -C} \qquad \text{and} \qquad
\cube{0}{1}{1}{0}{A}{b'}{b}{\ -C} $$
give rise to the quadratic forms $Q_1 = Q_0$, $Q_2 = (A,B,C)$,
and $Q_3 = (A,-B,C)$. This shows that the inverse of $[Q]$ for
$Q = (A,B,C)$ is the class of $Q^- = (A,-B,C)$. Note in particular
that, in general, the classes of $Q$ and $Q^-$ are different (in fact
they coincide if and only if their class has order dividing $2$),
although both $Q$ and $Q^-$ represent exactly the same integers
since $Q(x,y) = Q^-(x,-y)$. Gauss almost apologized for
distinguishing the classes of these forms.
The verification of associativity is a little bit involved.
Perhaps the simplest approach uses Dirichlet's method of
united and concordant forms. Two primitive forms
$Q_1 = (A_1,B_1,C_1)$ and $Q_2 = (A_2,B_2,C_2)$ are called
concordant if $B_1 = B_2$, $C_1 = A_2C$ and $C_2 = A_1C$ for
some integer $C$. The composition of $Q_1$ and $Q_2$ then is
the form $(A_1A_2,B,C)$, where $B = B_1 = B_2$, as can be seen from the cube
$$ \cubeA{0}{A_1}{1}{0}{A_2}{B}{0}{\ \ -C}{\cA} $$
and the associated forms
\begin{align*}
Q_1 & = A_1x^2 + Bxy + A_2Cy^2, \\
Q_2 & = A_2x^2 + Bxy + A_1Cy^2, \\
Q_3 & = A_1A_2x^2 - Bxy + Cy^2.
\end{align*}
Given three forms, associativity follows immediately if we succeed in
replacing the forms by equivalent forms with the same middle coefficients,
which is quite easy using the observation that forms represent
infinitely many integers coprime to any given number.
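Slicing the concordant cube displayed above can also be checked mechanically; the following Python snippet (our illustration; the labelling of the three slicings may differ from the display, so we compare as sets) recovers the three forms:

```python
# Slice the concordant cube [0, A1, 1, 0, A2, B, 0, -C] and compare with the
# three forms in the text (as a set, since the labelling of slicings differs).

def cube_forms(a, b, c, d, e, f, g, h):
    return {(b*e - a*f, b*g + d*e - a*h - c*f, d*g - c*h),
            (c*e - a*g, c*f + d*e - a*h - b*g, d*f - b*h),
            (b*c - a*d, b*g + c*f - a*h - d*e, f*g - e*h)}

A1, A2, B, C = 2, 3, 1, -5          # arbitrary integers; the identity is formal
forms = cube_forms(0, A1, 1, 0, A2, B, 0, -C)
assert forms == {(A1, B, A2*C), (A2, B, A1*C), (A1*A2, -B, C)}
```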
Composing two (classes of) forms requires solving\footnote{For an
excellent account of the composition formulas using Dedekind's
approach via modules see Lenstra \cite{Len} and Schoof \cite{Schoof}.
The clearest exposition of the composition algorithm of binary
quadratic forms not based on modules is probably Speiser's
\cite{Spei}; his techniques also allow one to fill the gaps in Shanks'
algorithm given in \cite{Shac2}. Shanks later gave a full version
of his composition algorithm which he called NUCOMP.} systems of
diophantine equations. All we need in this article
is the following observation:
\begin{thm}
The $\SL_2(\Z)$-equivalence classes of primitive forms with discriminant
$\Delta$ (positive definite forms if $\Delta < 0$) form a group with
respect to composition. If $Q_1 = (A_1,B_1,C_1)$ and $Q_2 = (A_2,B_2,C_2)$
are primitive forms with discriminant $\Delta$, and if
$e = \gcd(A_1,A_2,\frac12(B_1+B_2))$, then we can always find a form
$Q_3 = (A_3,B_3,C_3)$ in the class $[Q_1][Q_2]$ with $A_3 = A_1A_2/e^2$.
\end{thm}
The group of $\SL_2(\Z)$-equivalence classes of primitive forms
with discriminant $\Delta$ is called the class group in the strict
sense and is denoted by $\Cl^+(\Delta)$ (the equivalence classes
with respect to a suitably defined action by $\GL_2(\Z)$ give
rise to the class group $\Cl(\Delta)$ in the wide sense; for
negative discriminants, both notions coincide). It is isomorphic
to the ideal class group in the strict sense of the order with
discriminant $\Delta$ inside the quadratic number field
$\Q(\sqrt{\Delta}\,)$.
The connection between Bhargava's group law and Gauss composition
is provided by the following
\begin{thm}
Let $\cA = [a,b,c,d,e,f,g,h]$ be a cube to which three primitive
forms $Q_i = Q_i^\cA$ are attached. Then
\begin{equation}\label{EGcom}
Q_1(x_1,y_1) Q_2(x_2,y_2) = Q_3(x_3,-y_3),
\end{equation}
where $x_3$ and $y_3$ are bilinear forms (linear forms in $x_1, y_1$
and $x_2, y_2$, respectively) and are given by
\begin{eqnarray}
\label{Ebf1} x_3 & = ex_1x_2 + fx_1y_2 + gx_2y_1 + hy_1y_2, \\
\label{Ebf2} y_3 & = ax_1x_2 + bx_1y_2 + cx_2y_1 + dy_1y_2.
\end{eqnarray}
\end{thm}
This can be verified e.g. by a computer algebra system; for a
conceptual proof, see Dedekind \cite{Dede} or Speiser \cite{Spei}.
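Indeed, a short script already gives convincing numerical evidence; the following Python check (our illustration) tests the identity on random integer cubes:

```python
# Numerical check of Q_1(x1,y1) Q_2(x2,y2) = Q_3(x3,-y3) with the bilinear
# substitutions for x3 and y3 given above, on random integer cubes.
import random

def gauss_identity_holds(a, b, c, d, e, f, g, h, x1, y1, x2, y2):
    Q1 = lambda x, y: (b*e - a*f)*x*x + (b*g + d*e - a*h - c*f)*x*y + (d*g - c*h)*y*y
    Q2 = lambda x, y: (c*e - a*g)*x*x + (c*f + d*e - a*h - b*g)*x*y + (d*f - b*h)*y*y
    Q3 = lambda x, y: (b*c - a*d)*x*x + (b*g + c*f - a*h - d*e)*x*y + (f*g - e*h)*y*y
    x3 = e*x1*x2 + f*x1*y2 + g*x2*y1 + h*y1*y2
    y3 = a*x1*x2 + b*x1*y2 + c*x2*y1 + d*y1*y2
    return Q1(x1, y1) * Q2(x2, y2) == Q3(x3, -y3)

random.seed(0)
for _ in range(1000):
    assert gauss_identity_holds(*(random.randint(-9, 9) for _ in range(12)))
```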
The somewhat unnatural minus sign on the right hand side of
(\ref{EGcom}) comes from breaking the symmetry between the forms
$Q_i$; Dedekind \cite{Dede} and Weber \cite{WebC} have shown that
$$ Q_1(x_1,y_1) Q_2(x_2,y_2) Q_3(x_3,y_3) = Q_0(x_4,y_4) $$
for certain trilinear forms $x_4, y_4$; this formula preserves
the symmetry of the forms involved and makes the group law
appear completely natural.
Gauss defined a form $Q_3$ to be a composite of the forms $Q_1$
and $Q_2$ if the identity (\ref{EGcom}) holds and if (and
this additional condition is crucial -- it is what allowed
Gauss to make form classes into a group\footnote{Composition of
binary quadratic forms can be generalized to arbitrary rings if
one is willing to replace quadratic forms by quadratic spaces;
see Kneser \cite{KneC} and Koecher \cite{Koe87}. Gauss's proof
that composition gives a group structure extends without problems to
principal ideal domains with characteristic $\ne 2$, and even
to slightly more general rings (see e.g. Towber \cite{Tow}).})
the formulas (\ref{EQ1}) and (\ref{EQ2}) hold.
\begin{cor}\label{C1}
Let $\Delta$ be a discriminant, $r$ an integer, and $p$ a prime not
dividing $\Delta$. Assume that $p$ is primitively represented by a
form $Q_1$, and that $pr$ is represented primitively by $Q_2$. Then
we can choose $g \in \{\pm 1\}$ in such a way that $p^2r$ is represented
primitively by any form $Q_3$ with $[Q_1][Q_2]^g = [Q_3]$.
\end{cor}
It is obvious from the Gaussian composition formula (\ref{EGcom})
that $p^2r$ is represented by $Q_3$ and $Q_3'$; what we have to
prove is that there exists a {\em primitive} representation.
As an example illustrating the problem, take $Q_1 = (2,1,3)$
and $Q_2 = (2,-1,3)$. Both forms represent $p = 3$ primitively:
we have $3 = Q_1(0,1) = Q_2(0,1)$. We also have
$[Q_1][Q_2] = [Q_0]$ and $[Q_1][Q_2]^{-1} = [Q_2]$.
Both $Q_0$ and $Q_2$ represent $9$, but $Q_2(2,1) = 9$ is a
primitive representation whereas $Q_0(3,0) = 9$ is not.
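This behaviour is easy to reproduce by a brute-force search (a throwaway Python script of ours; recall that the principal form of discriminant $-23$ is $Q_0 = (1,1,6)$):

```python
# Enumerate representations of 9 by the forms above and keep only the
# primitive ones, i.e. those with gcd(x, y) = 1.
from math import gcd

def primitive_reps(form, n, bound=10):
    A, B, C = form
    return [(x, y)
            for x in range(-bound, bound + 1)
            for y in range(-bound, bound + 1)
            if A*x*x + B*x*y + C*y*y == n and gcd(x, y) == 1]

assert (2, 1) in primitive_reps((2, -1, 3), 9)   # Q_2 represents 9 primitively
assert primitive_reps((1, 1, 6), 9) == []        # the principal form does not
```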
\begin{proof}[Proof of Cor. \ref{C1}]
We may assume without loss of generality that $Q_1 = (p,B_1,C_1)$
and $Q_2 = (pr,B_2,C_2)$. The composition algorithm shows that
$[Q_1][Q_2] = [Q_3]$ for some form $Q_3 = (A_3,B_3,C_3)$ with
$A_3 = p^2r/e^2$, where $e = \gcd(p,\frac12(B_1+B_2))$. If
$p \nmid \frac12(B_1+B_2)$, then $Q_3(1,0) = p^2r$ and we are done.
If $p \mid \frac12(B_1+B_2)$, replace $Q_2$ by $Q_2^- = (pr,-B_2,C_2)$;
in this case, we find $[Q_1][Q_2^-] = [Q_3']$ for
$Q_3' = (A_3,B_3,C_3)$ with $A_3 = p^2r/e^2$, where
$e = \gcd(p,\frac12(B_1-B_2))$. If $p \nmid \frac12(B_1-B_2)$
we are done; the only remaining problematic case is where
$p$ divides both $\frac12(B_1+B_2)$ and $\frac12(B_1-B_2)$, which
implies that $p \mid B_1$ and $p \mid B_2$. But then
$p \mid (B_1^2 - 4pC_1) = \Delta$, contradicting our assumption.
\end{proof}
\section{Genus Theory}
Gauss's genus theory characterizes the square classes in $\Cl^+(\Delta)$.
Two classes $[Q_1]$ and $[Q_2]$ are said to be in the same genus if
there is a class $[Q]$ such that $[Q_1] = [Q_2][Q]^2$. The principal
genus is the genus containing the principal class $[Q_0]$; by
definition the principal genus consists of all square classes.
\subsection*{Extracting Square Roots in the Class Group}
Recall that a discriminant $\Delta$ is called fundamental if it is
the discriminant of a quadratic number field. Arbitrary discriminants
can always be written in the form $\Delta = \Delta_0 f^2$, where
$\Delta_0$ is fundamental, and where $f$ is an integer called the
conductor of the ring $\Z \oplus \frac{\Delta + \sqrt{\Delta}}2 \Z$.
An elementary technique for detecting squares in the class group
$\Cl(\Delta)$ is provided by the following
\begin{thm}\label{T2}
Let $\Delta_0$ be a fundamental discriminant, and assume that $Q$ is
a primitive form with discriminant $\Delta = \Delta_0 f^2$. Then
the following conditions are equivalent:
\begin{enumerate}
\item[$i)$] $Q$ represents a square $m^2$ coprime to $f$.
\item[$i')$] $Q$ represents a square $m^2$ coprime to $\Delta$.
\item[$ii)$] There exists a primitive form $Q_1$ with $[Q] = [Q_1]^2$.
\item[$iii)$] There exist rational numbers $x, y$ with denominator
coprime to $f$ such that $Q(x,y) = 1$.
\end{enumerate}
Moreover, if $Q$ represents $m^2$ primitively, then $Q_1$ can be
chosen in such a way that it represents $m$ primitively.
\end{thm}
\begin{proof}
Observe that Gauss's equation (\ref{EGcom}) implies that if
$Q_1$ represents $m$ and $Q_2$ represents $n$, then
$Q_3$ represents the product $mn$. Together with the fact
that the primitive form $Q_1$ represents integers coprime to
$\Delta$ this shows that ii) implies i).
Let us next show that i) and i') are equivalent. It is clearly
sufficient to show that i) implies i'). Assume therefore that
$Q(x,y) = A^2$ for coprime integers $x, y$, and that there is
a prime $p \mid \gcd(A,\Delta)$. We claim that $p \mid f$.
We know that $Q$ is equivalent to some form $(A^2,B,C)$, so we
may assume that $Q = (A^2,B,C)$.
If $p$ is odd, then $p \mid \Delta = B^2 - 4A^2C$, hence $p \mid B$ and
$p^2 \mid \Delta$, and finally $p \mid f$ since fundamental
discriminants are not divisible by squares of odd primes.
If $p = 2$, then $B = 2b$ and $A = 2a$, and $(a^2,b,C)$ is a
form with discriminant $\Delta/4$, showing that $2 \mid f$.
Thus i) and i') are equivalent.
To show that i') $\Lra$ ii), assume that $Q$ represents
$m^2$ primitively (cancelling squares shows that $Q$ primitively
represents a square), and write $Q = (m^2,B,C)$; Dirichlet
composition then shows that $[Q] = [Q_1]^2$ for $Q_1 = (m,B,mC)$;
note that if $\Delta < 0$, the form $Q_1$ is positive definite only
for $m > 0$. Since $\gcd(m,\Delta) = \gcd(m,B^2) = 1$, the form
$Q_1$ is primitive.
Finally, i) and iii) are trivially equivalent.
\end{proof}
We will also need
\begin{cor}\label{Cgen}
Let $Q_1$ and $Q_2$ be forms with discriminant $\Delta = \Delta_0 f^2$.
If $Q_j(r_j,s_j) = ax_j^2$ for integers $r_j, s_j, x_j$ ($j = 1, 2$)
with $\gcd(x_1x_2,f) = 1$, then $Q_1$ and $Q_2$ belong to the same genus.
\end{cor}
\begin{proof}
Any form in the class $[Q_1][Q_2]$ represents a square
coprime to $f$, hence $[Q_1][Q_2] = [Q]^2$. This implies
that $[Q_1]$ and $[Q_2]$ are in the same genus.
\end{proof}
\subsection*{Nonfundamental Discriminants}
For negative discriminants, Gauss proved a relation between the
class numbers $h(\Delta)$ and $h(\Delta f^2)$. For general
discriminants, a similar formula was derived by Dirichlet from
his class number formula, and Lipschitz later gave an arithmetic
proof of the general result. Since we only consider positive
definite forms, we are content with stating a special case of
Gauss's result:
\begin{thm}
Let $p$ be a prime, and $\Delta < -4$ a discriminant. Then
\begin{equation}\label{ELip}
h(\Delta p^2) = \Big(p - \Big(\frac{\Delta}p\Big)\Big) \cdot h(\Delta).
\end{equation}
\end{thm}
The basic tool needed for proving this formula is showing
that every form with discriminant $\Delta p^2$ is equivalent
to a form $(A,Bp,Cp^2)$, which is ``derived'' from the form
$(A,B,C)$ with discriminant $\Delta$.
Class groups of primitive forms with nonfundamental discriminants
occur naturally in the theory of binary quadratic forms, and
correspond to certain ray class groups (called ring class groups)
in the theory of ideals.
The simplest way of proving (\ref{ELip}) is by using the elementary
fact that every primitive form with discriminant $\Delta = f^2\Delta_0$
is equivalent to a form $Q = (A,Bf,Cf^2)$. The form $\ov{Q} = (A,B,C)$
is a primitive form with discriminant $\Delta_0$ from which $Q$ is
derived.
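Formula (\ref{ELip}) can be tested directly by counting reduced forms; the following brute-force Python sketch (ours, and of course not the classical proof) verifies a small instance:

```python
# Count reduced primitive positive definite forms (a,b,c), i.e. those with
# -a < b <= a <= c and b >= 0 whenever a = c, of discriminant D < 0.
from math import gcd, isqrt

def class_number(D):
    assert D < 0 and D % 4 in (0, 1)
    count = 0
    for a in range(1, isqrt(-D // 3) + 1):
        for b in range(-a + 1, a + 1):
            if (b*b - D) % (4*a):
                continue
            c = (b*b - D) // (4*a)
            if c < a or (b < 0 and a == c):
                continue
            if gcd(gcd(a, abs(b)), c) == 1:
                count += 1
    return count

# D = -8, p = 3: (-8/3) = +1, so the formula predicts h(-72) = (3 - 1) h(-8).
assert class_number(-8) == 1 and class_number(-72) == 2
assert class_number(-4*14) == 4        # Cl(-56) = [4], cf. the tables below
```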
\section{Kaplansky's ``Conjecture''}
Theorem \ref{T2} is related to a question of Kaplansky discussed by
Mollin \cite{MolK1,MolK2} and Walsh \cite{WalK1,WalK2}: Kaplansky
claimed that if a prime $p$ can be written in the form $p = a^2 + 4b^2$,
then the equation $x^2 - py^2 = a$ is solvable. The assumption
$p = a^2 + 4b^2$ implies $p \equiv 1 \bmod 4$, as well as the
solvability of the equation $T^2 - pU^2 = -1$. Since $a^2 = p - 4b^2$,
the form $(1,0,-p)$ with discriminant $\Delta = 4p$ represents $a^2$.
Since $\gcd(a,4p) = 1$, there is a form $Q$ with discriminant $\Delta$
and $[Q]^2 = [Q_0]$ which represents $a$. Since the class number
$h(4p)$ is odd (from (\ref{ELip}) we find that $h(4p) = h(p)$ if
$p \equiv 1 \bmod 8$, and $h(4p) = 3h(p)$ if $p \equiv 5 \bmod 8$; it
is a well known result due to Gauss that the class number of forms with
prime discriminant is odd), we have $Q \sim Q_0 = (1,0,-p)$, and the
claim follows.
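Kaplansky's claim is easy to probe numerically; a naive Python search (our illustration) over small $y$:

```python
# For a prime p = a^2 + 4b^2, search for a solution of x^2 - p y^2 = a.
from math import isqrt

def kaplansky_solution(p, a, bound=10**4):
    for y in range(bound):
        t = p*y*y + a
        x = isqrt(t)
        if x*x == t:
            return (x, y)
    return None

for p, a, b in [(5, 1, 1), (13, 3, 1), (29, 5, 1), (53, 7, 1)]:
    assert p == a*a + 4*b*b
    x, y = kaplansky_solution(p, a)
    assert x*x - p*y*y == a
```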
The obvious generalization of Kaplansky's result is
\begin{prop}
Let $m$ be an integer and $p$ a prime coprime to $2m$. If
$p = r^2 + ms^2$, then there is a form $Q$ with the following
properties:
\begin{enumerate}
\item $\disc\ Q = 4pm$;
\item $Q^2 \sim (p,0,-m)$;
\item $Q$ represents $r$.
\end{enumerate}
\end{prop}
\begin{proof}
From $p - ms^2 = r^2$ and $\gcd(r,2pm) = 1$ we deduce that the
form $Q_1 = (p,0,-m)$ is equivalent to the square of a form
representing $r$.
\end{proof}
As an example, let $m = 2$ and consider primes $p = r^2 + 2s^2$.
Then $p-2s^2 = r^2$, hence the form $Q = (p,0,-2)$ with discriminant
$\Delta = 8p$ represents the square number $r^2$. Thus $Q \sim Q_1^2$
for some form $Q_1$ representing $r$.
Now assume that $p \equiv 1 \bmod 8$. By genus theory, the
class of the form $Q_1$ will be a square if and only if
$(\frac2r) = (\frac rp) = 1$. Since $(\frac2r) = (\frac pr)
= (\frac rp)$ by the quadratic reciprocity law, $[Q_1]$ is
a square if and only if $(\frac rp) = 1$. The congruence
$r^2 \equiv -2s^2 \bmod p$ implies
$\sym{r^2}{p} = (\frac rp) = \sym{2}{p} (\frac sp)$;
writing $s = 2^ju$ for some odd integer $u$ we find
$(\frac sp) = (\frac 2p)^j (\frac up) = (\frac pu) = 1$.
We have proved (see Kaplan \cite{Kap} for proofs of this and a
lot of other similar results):
\begin{prop}\label{PKap2}
Let $p = r^2 + 2s^2 \equiv 1 \bmod 8$ be a prime. Then the
class of the form $Q = (p,0,-2)$ in $\Cl(8p)$ is a fourth power
if and only if $\sym{2}{p} = +1$.
\end{prop}
Note that this does not necessarily imply that the class number
$h(8p)$ is divisible by $4$ since $Q$ might be equivalent
to the principal form. In fact, this always happens if
$r = 1$, since then $Q$ represents $1$. In this case, we get
a unit in $\Z[\sqrt{2p}\,]$ for free because $p-2s^2 = 1$
implies that $(\sqrt{p}+s\sqrt{2}\,)^2 = p+2s^2 + 2s\sqrt{2p}$
is a nontrivial unit. Observe that the field is of Richaud-Degert
type since $2p = (2s)^2 + 2$.
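The norm computation behind this unit can be checked for all small $s$ at once (a Python illustration of ours; note that primality of $p$ plays no role in the identity):

```python
# For p = 1 + 2*s^2 the element (p + 2*s^2) + 2*s*sqrt(2p) of Z[sqrt(2p)]
# has norm 1, i.e. is a unit.

for s in range(1, 100):
    p = 1 + 2*s*s
    x, y = p + 2*s*s, 2*s           # coordinates of the candidate unit
    assert x*x - 2*p*y*y == 1
```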
\section{P\'epin's Theorems}
The simplest among the roughly 100 theorems stated by P\'epin
\cite{Pep74, Pep79, Pep80, Pep82} are the following:
\begin{prop}
In the table below, $Q$ is a positive definite form with discriminant
$\Delta = -4m$. For any prime $p \nmid \Delta$ represented by $Q$
(the table below gives two small values of such $p$), the equation
$px^4 - my^4 = z^2$ has only the trivial solution.
$$ \begin{array}{c|cccccc}
\rsp Q & (2,0,7) & (2,2,9) & (4,0,5)
& (4,4,9) & (4,0,9) & (3,0,13) \\ \hline
\rsp m & 14 & 17 & 20 & 32 & 36 & 39 \\ \hline
\rsp p & 71, 79 & 13, 89 & 41, 149 & 17,89 & 13,73 & 61, 79
\end{array} $$
\end{prop}
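As a spot check (our own, not P\'epin's), each prime listed in the table is indeed represented by the corresponding form:

```python
# Check that each listed prime is represented by the corresponding form.

def represents(form, p, bound=30):
    A, B, C = form
    return any(A*x*x + B*x*y + C*y*y == p
               for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1))

data = {(2, 0, 7): (71, 79), (2, 2, 9): (13, 89), (4, 0, 5): (41, 149),
        (4, 4, 9): (17, 89), (4, 0, 9): (13, 73), (3, 0, 13): (61, 79)}
for form, primes in data.items():
    assert all(represents(form, p) for p in primes)
```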
By looking at these results from the theory of binary quadratic
forms one is quickly led to observe that the equivalence classes
of the forms $Q$ in P\'epin's examples are squares but not fourth
powers. Such forms occur only for discriminants $\Delta = -4m$
for which $\Cl(\Delta)$ has a cyclic subgroup of order $4$. The
table below lists all positive $m \le 238$ with the property that
$\Cl(-4m)$ has a cyclic subgroup of order $4$, forms $Q$ whose
classes are squares but not fourth powers, the structure of the
class group, a comment indicating the proof of the result, and a
reference to the paper of P\'epin's in which it appears.
\begin{table}[ht!]
\begin{tabular}{r|l|l|l|c}
m & Q & $\Cl(-4m)$ & \text{comment} & \text{ref} \\ \hline
14 & (2,0,7) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
17 & (2,2,9) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
20 & (4,0,5) & [4] & \ref{TPepG}.\ref{T54} & \cite{Pep80} \\
32 & (4,4,9) & [4] & \ref{TPepG}.\ref{T55} & \cite{Pep80} \\
34 & (2,0,17) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
36 & (4,0,9) & [4] & \ref{TPepG}.\ref{T56} & \cite{Pep74} \\
39 & (3,0,13) & [4] & \ref{TPepG}.\ref{T53} & \cite{Pep80} \\
41 & (5,4,9) & [8] & \ref{TPepG}.\ref{T51} & \cite{Pep79} \\
46 & (2,0,23) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
49 & (2,2,25) & [4] & \ref{TPepG}.\ref{T52}, f=7 & \cite{Pep80} \\
52 & (4,0,13) & [4] & \ref{TPepG}.\ref{T54} & \cite{Pep80} \\
55 & (5,0,11) & [4] & \ref{TPepG}.\ref{T53} & \cite{Pep80} \\
56 & (4,4,15) & [4, 2] & $56 = 4 \cdot 14$ & \cite{Pep80} \\
62 & (9,2,7) & [8] & \ref{TPepG}.\ref{T51} & \cite{Pep79} \\
63 & (7,0,9) & [4] & \ref{TPepG}.\ref{T52}, f=3 & \cite{Pep80} \\
64 & (4,4,17) & [4] & \ref{TPepG}.\ref{T55} & \cite{Pep80} \\
65 & (10,10,9) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
66 & (3,0,22) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
68 & (8,2,9)\hspace{-1.05cm}--------- (8,4,9)
& [8] & $68 = 4 \cdot 17$ & \cite{Pep79} \\
69 & (6,6,13) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
73 & (2,2,37) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
77 & (14,14,9) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
80 & (9,2,9) & [4, 2] & $80 = 4 \cdot 20$ & \cite{Pep80} \\
82 & (2,0,41) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
84 & (4,0,21) & [4, 2] & \ref{TPepG}.\ref{T54} & \cite{Pep80} \\
89 & (9, 2, 10), (2,2,45) & [12] & \ref{TPepG}.\ref{T51} & -- \\
90 & (9,0,10) & [4, 2] & \ref{TPepG}.\ref{T52}, f=3 & \cite{Pep74} \\
94 & (7,4,14) & [8] & \ref{TPepG}.\ref{T51} & -- \\
95 & (9,4,11) & [8] & \ref{TPepG}.\ref{T53} & -- \\
96 & (4,4,25) & [4, 2] & \ref{TPepG}.\ref{T55} & \cite{Pep80} \\
97 & (2,2,49) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
98 & (9,2,11) & [8] & \ref{TPepG}.\ref{T52}, f=7 & -- \\
100 & (4,0,25) & [4] & incorrect & \cite{Pep80} \\
111 & (7,2,16) & [8] & \ref{TPepG}.\ref{T53} & \cite{Pep79} \\
113 & (9,4,13) & [8] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
114 & (2,0,57)\hspace{-1.18cm}----------- (6,0,19)
& [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep82} \\
116 & (9,2,13), (4,0,29) & [12] & \ref{TPepG}.\ref{T54} & -- \\
117 & (9,0,13) & [4, 2] & \ref{TPepG}.\ref{T52},f=3 & \cite{Pep74} \\
126 & (7,0,18) & [4, 2] & $126 = 9 \cdot 14$ & \cite{Pep80} \\
128 & (9,8,16) & [8] & $128 = 4 \cdot 32$ & \cite{Pep74} \\
132 & (4,0,33) & [4, 2] & \ref{TPepG}.\ref{T54} & \cite{Pep80} \\
136 & (8,0,17) & [4, 2] & $136 = 4 \cdot 34$ & \cite{Pep80} \\
137 & (9,8,17) & [8] & \ref{TPepG}.\ref{T51} & \cite{Pep74}
\end{tabular} \bigskip
\caption{Unsolvable Equations $px^4 - my^4 = z^2$.}\label{Tmp}
\end{table}
\begin{table}[ht!]
\begin{tabular}{r|l|l|l|c}
m & Q & $\Cl(-4m)$ & \text{comment} & \text{ref} \\ \hline
138 & (3,0,46) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
141 & (6,6,25) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
142 & (2,0,71) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
144 & (9, 0, 16) & [4, 2] & $144 = 4 \cdot 36$ & -- \\
145 & (5,0,29) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
146 & (3, 2, 49) & [16] & \ref{TPepG}.\ref{T51} & -- \\
148 & (4,0,37) & [4] & \ref{TPepG}.\ref{T54} & \cite{Pep80} \\
150 & (6,0,25) & [4, 2] & incorrect & \cite{Pep82} \\
153 & (13,4,13)\hspace{-1.35cm}------------ (13,8,13)
& [4, 2] & $153 = 9 \cdot 17$ & \cite{Pep82} \\
154 & (11,0,14) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep82} \\
155 & (9, 8, 19), (5,0,31) & [12]& \ref{TPepG}.\ref{T54} & -- \\
156 & (12,0,13) & [4, 2] & $156 = 4 \cdot 39$ & \cite{Pep82} \\
158 & (9,4,18) & [8] & \ref{TPepG}.\ref{T51} & \cite{Pep79} \\
160 & (4,4,41) & [4, 2] & \ref{TPepG}.\ref{T55} & \cite{Pep82} \\
161 & (9, 2, 18) & [8, 2] & \ref{TPepG}.\ref{T51} & -- \\
164 & (9, 8, 20) & [16] & $164 = 4 \cdot 41$ & -- \\
171 & (7, 4, 25), (9,0,19) & [12] & \ref{TPepG}.\ref{T52}, f=3 & -- \\
178 & (11,6,17) & [8] & \ref{TPepG}.\ref{T51} & \cite{Pep79} \\
180 & (9,0,20)\hspace{-1.18cm}----------- (4,0,45)
& [4, 2] & $180 = 9 \cdot 20$ & \cite{Pep82} \\
183 & (13,10,16) & [8] & \ref{TPepG}.\ref{T53} & \cite{Pep79} \\
184 & (8,8,25) & [4, 2] & $184 = 4 \cdot 46$ & \cite{Pep82} \\
185 & (9,4,21) & [8, 2] & \ref{TPepG}.\ref{T51} & -- \\
192 & (4,2,29)\hspace{-1.18cm}----------- (4,4,49)
& [4, 2] & \ref{TPepG}.\ref{T55} & \cite{Pep82} \\
193 & (2,2,97) & [4] & \ref{TPepG}.\ref{T51} & \cite{Pep80} \\
194 & (11, 4, 18), (6,4,33), (2,0,97)
& [20] & \ref{TPepG}.\ref{T51} & -- \\
196 & (8,4,25) & [8] & $196 = 4 \cdot 49$ & \cite{Pep79} \\
198 & (9,0,22) & [4, 2] & \ref{TPepG}.\ref{T52}, f=3
& \cite{Pep74,Pep82} \\
203 & (9,4,23), (7,0,29) & [12] & \ref{TPepG}.\ref{T53} & -- \\
205 & (5,0,41) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep82} \\
206 & (9, 2, 23), (14,4,15), (2,0,103)
& [20] & \ref{TPepG}.\ref{T51} & -- \\
208 & (16,8,17)\hspace{-1.35cm}------------ (16,16,17)
& [4, 2] & $208 = 4 \cdot 52$ & \cite{Pep82} \\
212 & (9, 4,24), (4,0,53) & [12] & \ref{TPepG}.\ref{T54} & -- \\
213 & (6,6,37) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
217 & (2, 2, 109) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
219 & (12, 6, 19), (3,0,73) & [12] & \ref{TPepG}.\ref{T53} & -- \\
220 & (5, 0, 44) & [4, 2] & $220 = 4 \cdot 55$ & \cite{Pep82} \\
221 & (9,4,25) & [8, 2] & \ref{TPepG}.\ref{T51} & -- \\
224 & (9,2,25) & [8, 2] & \ref{TPepG}.\ref{T55} & -- \\
225 & (9,0,25) & [4, 2] & incorrect & \cite{Pep82} \\
226 & (11,8,22) & [8] & \ref{TPepG}.\ref{T51} & -- \\
228 & (4,0,57) & [4, 2] & \ref{TPepG}.\ref{T54} & -- \\
233 & (9,2,26), (2,2,117) & [12] & \ref{TPepG}.\ref{T51} & -- \\
238 & (2, 0,119) & [4, 2] & \ref{TPepG}.\ref{T51} & \cite{Pep82}
\end{tabular}
\medskip
\caption{Unsolvable Equations $px^4 - my^4 = z^2$.}\label{Tmp2}
\end{table}
\begin{table}[ht!]
\begin{tabular}{r|l|l|l|c}
m & Q & $\Cl(-4m)$ & \text{comment} & \text{ref} \\ \hline
265 & (10,10,29) & [4,2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
301 & (14,14,25) & [4,2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
360 & (9,0,40) & [4,2,2] & $360 = 4 \cdot 90$ & \cite{Pep74} \\
465 & (10,10,49) & [4,2,2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
522 & (9,0,58) & [4,2] & \ref{TPepG}.\ref{T51}, f=3 & \cite{Pep74} \\
553 & (2,2,227) & [4,2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
561 & (34,34,25) $\sim$ (25,16,25) & [4,2,2] & \ref{TPepG}.\ref{T51}
& \cite{Pep74} \\
609 & (42,42,25) $\sim$ (25,8,25) & [4,2,2] & \ref{TPepG}.\ref{T51}
& \cite{Pep74} \\
645 & (6,6,109) & [4,2,2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
697 & (2,2,349) & [4,2] & \ref{TPepG}.\ref{T51} & \cite{Pep74} \\
792 & (9,0,88) & [4,2,2] & $792=4 \cdot 198$ & \cite{Pep74} \\
1764 & (4,0,441), (25,12,72), (9,0,196)
& [8,4] & $1764 = 42^2$ & \cite{Pep74} \\
3185 & (9,2,354) & [16,2,2] & $3185 = 7^2 \cdot 65$ & \cite{Pep74} \\
4356 & (4,0,1089), (148,96,45), (229,74,25)
& [12,4] & $4356 = 66^2$ & \cite{Pep74} \\
4950 & (31,28,166), (9,0,550) & [12,2,2] & $4950 = 15^2 \cdot 22$ & \cite{Pep74} \\
8349 & (25,2,254), (49,36,177), (70,62,133)
& [24,2,2] & $8349 = 11^2 \cdot 69$ & \cite{Pep74} \\ \hline
256 & (16,8,17) & [8] & $256 = 4 \cdot 64$ & \cite{Pep79} \\
289 & (13,12,25) & [8] & incorrect & \cite{Pep79} \\
292 & (8,4,37) & [8] & $292 = 4 \cdot 73$ & \cite{Pep79} \\
295 & (16,6,19) & [8] & \ref{TPepG}.\ref{T53} & \cite{Pep79} \\
313 & (13,10,26) & [8] & \ref{TPepG}.\ref{T51} & \cite{Pep79} \\ \hline
252 & (9,0,28) & [4,2] & $252 = 4 \cdot 63$ & \cite{Pep82} \\
282 & (3,0,94) & [4,2] & \ref{TPepG}.\ref{T51} & \cite{Pep82} \\
288 & (4,2,73)\hspace{-1.18cm}----------- (4,4,73)
& [4,2] & $288 = 9 \cdot 32$ & \cite{Pep82} \\
310 & (10,0,31) & [4,2] & \ref{TPepG}.\ref{T51} & \cite{Pep82} \\
322 & (2,0,161) & [4,2] & \ref{TPepG}.\ref{T51} & \cite{Pep82} \\
328 & (8,0,41) & [4,2] & $328 = 4 \cdot 82$ & \cite{Pep82} \\
333 & (9,0,37) & [4,2] & $333 = 9 \cdot 37$ & \cite{Pep82} \\
340 & (4,0,85) & [4,2] & $340 = 4 \cdot 65$ & \cite{Pep82} \\
352 & (4,2,89)\hspace{-1.18cm}----------- (4,4,89)
& [4,2] & \ref{TPepG}.\ref{T55} & \cite{Pep82} \\
372 & (4,0,93) & [4,2] & $372 = 4 \cdot 93$ & \cite{Pep82}
\end{tabular}
\medskip
\caption{More of P\'epin's Examples.}\label{Tmp3}
\end{table}
P\'epin must have been aware of the connection between his claims
and the structure of the class group for the following reasons:
\begin{enumerate}
\item The examples he gives in \cite{Pep79} all satisfy
$\Cl(-4m) = [8]$ (the cyclic group of order $8$), and
those in \cite{Pep82} satisfy $\Cl(-4m) = [4,2]$.
\item P\'epin omits all values of $m$ for which the class number
$h(-4m)$ is divisible by $3$ or $5$, except for three examples
given in \cite{Pep74}.
\item Most of the misprints in his list concern the middle coefficient
of the forms $Q$, which is sometimes given as half the correct
value; a possible explanation is provided by the fact that Gauss
used the notation $(A,B,C)$ for the form $Ax^2 + 2Bxy + Cy^2$.
\end{enumerate}
The following result covers all examples in our table:
\begin{thm}\label{TPepG}
Let $m$ be a positive integer, let $Q$ be a quadratic form with
discriminant $\Delta = -4m$, and assume that $[Q]$ is a square,
but not a fourth power in $\Cl(\Delta)$. Let $p \nmid \Delta$
be a prime represented by $Q$. Then the diophantine equation
$px^4 - my^2 = z^2$ has nontrivial integral solutions, and if
one of the following conditions is satisfied, $px^4 - my^4 = z^2$
has only the trivial solution:
\begin{enumerate}
\item\label{T51} $\Delta$ is a fundamental discriminant.
\item\label{T52} $\Delta = \Delta_0 f^2$ for some fundamental discriminant
$\Delta_0$ and an odd squarefree integer $f$ such that
$(\frac{\Delta_0}{q}) = -1$ for all $q \mid f$.
\item\label{T53} $\Delta = 4 \Delta_0$, where $\Delta_0 \equiv 1 \bmod 8$ is
fundamental.
\item\label{T54} $\Delta = 4 \Delta_0$, where $\Delta_0 =4n$ is
fundamental and $n \equiv 1 \bmod 4$.
\item\label{T55} $\Delta = 16 \Delta_0$, where $8 \mid \Delta_0$.
\item\label{T56} $\Delta = 4f^2 \Delta_0$ for some odd integer $f$,
where $\Delta_0 = 4n$ with $n \equiv 1 \bmod 4$, and
$(\Delta_0/q) = -1$ for all primes $q \mid f$.
\end{enumerate}
\end{thm}
\medskip\noindent{\bf Remark 1.}
In \cite{LPep} I have claimed that some proofs can be generalized
to show the unsolvability of equations of the form $px^4 - my^2 = z^2$.
This is not correct: I have overlooked the possibility that
$\gcd(y,z) \ne 1$ in the proofs given there. In fact, consider the
equation $71x^4 -14y^2 = z^2$. We find $71 \cdot 3^2 = 5^4 + 14$,
giving rise to a solution $(x,y,z) = (3,3,75)$ of
$71 \cdot x^4 = z^2 + 14 y^2$.
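The arithmetic behind this counterexample is easy to check mechanically; a small Python sanity check (the variable names are mine):

```python
from math import gcd

# Remark 1: 71*3^2 = 5^4 + 14, which scales to a solution of 71x^4 = z^2 + 14y^2
assert 71 * 3**2 == 5**4 + 14

x, y, z = 3, 3, 75
assert 71 * x**4 == z**2 + 14 * y**2  # a nontrivial solution ...
assert gcd(y, z) == 3                 # ... with gcd(y, z) != 1
```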
\medskip
\medskip\noindent{\bf Remark 2.}
Studying a few of P\'epin's examples quickly leads to the conjecture
that Theorem \ref{TPepG} holds without any conditions on the discriminant.
This is not true: three of P\'epin's ``theorems'' are actually incorrect:
\begin{enumerate}
\item $m = 100$: here $Q = (4,0,25)$ represents the prime $41 = Q(2,1)$,
and the equation $41x^4 - 100y^4 = z^2$ has the solution $(5,2,155)$.
\item $m = 150$: here $Q = (6,0,25)$ represents the prime $31 = Q(1,1)$,
and the equation $31x^4 - 150y^4 = z^2$ has the solution
$(5,3,85)$.
\item $m = 289$: here $Q = (13,12,25)$ represents the prime
$p = Q(2,-1) = 53$, and the equation $px^4 - 289y^4 = z^2$ has the
solution $(x,y,z) = (17,11,442)$.
\end{enumerate}
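All three failures are easy to verify; in the sketch below the conductor values $f$ (10, 5 and 17 respectively, read off from $\Delta = \Delta_0 f^2$) are my own annotations:

```python
from math import gcd

# (p, m, (x, y, z), f): p*x^4 - m*y^4 = z^2 with conductor f of disc -4m
cases = [
    (41, 100, (5, 2, 155), 10),
    (31, 150, (5, 3, 85), 5),
    (53, 289, (17, 11, 442), 17),
]
for p, m, (x, y, z), f in cases:
    assert p * x**4 - m * y**4 == z**2  # the claimed solution
    assert gcd(x, f) > 1                # violates gcd(x, f) = 1
```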
Note that, in these examples, the solutions do not satisfy
the condition $\gcd(x,f) = 1$ of Prop. \ref{Pmain} below.
This shows that we have to be careful when trying to generalize
Thm. \ref{TPepG} to arbitrary discriminants, and that some sort
of condition (like those in (1) -- (6)) is necessary.
\medskip\noindent{\bf Remark 3.}
The obvious generalization of P\'epin's theorems to nonprime values
of $p$ does not hold: the form $Q = (2,0,7)$ represents $15 = Q(2,1)$,
but the diophantine equation $15x^4 - 14y^4 = z^2$ has the nontrivial
(but obvious) solution $(1,1,1)$.
As in the proof below, we can deduce that $15^2$ is represented by
the form $Q$; it does not follow, however, that the square roots
$(3,\pm 2, 5)$ of $(2,0,7)$ represent $15$: in fact, we have
$15^2 = Q(9,3)$, so the representation is imprimitive, and $Q$ is
not equivalent to a form with first coefficient $15^2$.
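The claims about $Q = (2,0,7)$ in this remark can be verified directly:

```python
from math import gcd

Q = lambda x, y: 2 * x * x + 7 * y * y      # the form (2,0,7)

assert Q(2, 1) == 15                        # Q represents 15
assert 15 * 1**4 - 14 * 1**4 == 1**2        # the obvious nontrivial solution
assert Q(9, 3) == 15**2                     # Q represents 15^2 ...
assert gcd(9, 3) == 3                       # ... but only imprimitively
```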
\medskip
\medskip\noindent{\bf Remark 4.}
Some of the examples given by P\'epin are special cases of others.
Consider e.g. the case $m = 80 = 4 \cdot 20$; the derived form of
$\ov{Q} = (9,-8,4) \sim (4,0,5)$ with discriminant $-4 \cdot 20$ is the
form $Q = (9,-16,16) \sim (9,2,9)$ with discriminant $\Delta = -4m$;
its class is easily shown to be a square but not a fourth power, but
this is not needed here since every prime represented by $Q$ is also
represented by $\ov{Q}$, which means that the result corresponding to
$m = 80$ is a special case of the result for $m = 20$.
The same thing happens for $m = 68$, $126$, $128$, \ldots; the
corresponding entries in the tables above are indicated by the
comment $m = 4 \cdot n$.
The case $m = -56 = -14 \cdot 4$ is an exception: the derived form
of $(7,0,2) \sim (2,0,7)$ is $Q = (7,0,8)$, whose class is a square but
not a fourth power in $\Cl^+(\Delta)$. Moreover, the primes that $Q$
represents are also represented by $(2,0,7)$.
In addition we have the form $(3,2,19) \sim (3,-4,20)$, and the latter
form is derived from $(3,-2,5)$. This form generates $\Cl(-4 \cdot 14)$,
and the square of $(3,-4,20)$ is equivalent to $(8,8,9)$, which underives
to $(2,4,9) \sim (2,0,7)$. The composition of $(8,8,9)$ and $(7,0,8)$
produces a form equivalent to $(4,4,15)$, which is not a square but
represents $4$.
Observe that the primes $p$ represented by $(4,4,15)$ are congruent
to $3 \bmod 4$, hence the equations $px^2 - 56y^2 = z^2$ only have
solutions with $2 \mid x$ (thus $23x^2 - 56y^2 = z^2$, for example,
has the solution $(x,y,z) = (2,1,6)$). This implies that the
corresponding quartic $23x^4 - 56y^4 = z^2$ does not have a $2$-adic
solution in a trivial way: $2 \mid x$ implies $4 \mid z$ and $2 \mid y$,
so a simple descent shows that this equation does not have a nontrivial
solution.
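The concrete claims in this remark can again be checked numerically (the brute-force search range is my choice):

```python
disc = lambda a, b, c: b * b - 4 * a * c
assert disc(7, 0, 8) == disc(4, 4, 15) == -4 * 56   # both forms have disc -224

Q = lambda x, y: 4 * x * x + 4 * x * y + 15 * y * y
assert Q(1, 0) == 4 and Q(1, 1) == 23               # (4,4,15) represents 4 and 23

# every odd value of (4,4,15) is = 3 mod 4, so the primes it represents are too
assert all(Q(x, y) % 4 in (0, 3) for x in range(-25, 26) for y in range(-25, 26))

assert 23 * 2**2 - 56 * 1**2 == 6**2                # the solution (2,1,6) with 2 | x
```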
\medskip\noindent{\bf Remark 5.}
P\'epin's calculations for $m = 114$ are incorrect: here, the square
class in $\Cl^+(-4m)$ is generated by $(6,0,19)$, whereas P\'epin uses
$(2,0,57)$. Perhaps P\'epin went through the forms $(2,0,n)$
with $n \equiv \pm 1 \bmod 8$; these forms are contained in
square classes if $n$ is prime, or if $n$ has only prime factors
$\equiv \pm 1 \bmod 8$. If the class number $h(-8n)$ is not
divisible by $8$, the class is not a fourth power.
A direct test whether forms $(2,0,p)$ for $p \equiv \pm 1 \bmod 8$ are
fourth powers can be performed as follows: write $p = e^2 - 2f^2$;
then $Q = (2,0,p)$ represents $e^2 = 2f^2 + p$, and $[Q]$ is a fourth
power if and only if a form with first coefficient $e > 0$ is in the
principal genus. If $p \equiv 7 \bmod 8$, this happens if and only if
$(\frac{2}e) = +1$, and if $p \equiv 1 \bmod 8$, if and only if
$(\frac{-2}e) = +1$. A simple exercise using congruences modulo $16$
and quadratic reciprocity shows that $(\frac{2}e) = (-1)^{(p+1)/8}$
in the first, and $(\frac{-2}e) = (\frac 2p)^{\phantom{p}}_4$ in the
second case.
$$ \begin{array}{r|rrc}
p & e & f & (-1)^{(p+1)/8} \\ \hline
7 & 3 & 1 & -1 \\
23 & 5 & 1 & -1 \\
31 & 7 & 3 & +1 \\
47 & 7 & 1 & +1 \end{array} \qquad \qquad
\begin{array}{r|rrc}
p & e & f & (2/p)_4 \\ \hline
17 & 5 & 2 & -1 \\
41 & 7 & 2 & -1 \\
73 & 9 & 2 & +1 \\
89 & 11 & 4 & +1 \end{array} $$
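This test is easy to automate; the sketch below recomputes both tables and checks the stated symbol identities, using $(2/p)_4 \equiv 2^{(p-1)/4} \bmod p$ (the helper names are mine):

```python
from math import isqrt

def rep(p):
    """Smallest e > 0 with p = e^2 - 2*f^2 (exists for p = +-1 mod 8)."""
    e = 1
    while True:
        d = e * e - p
        if d >= 0 and d % 2 == 0:
            f = isqrt(d // 2)
            if 2 * f * f == d:
                return e, f
        e += 1

jac2  = lambda e: (-1) ** (((e * e - 1) // 8) % 2)   # (2/e) for odd e
jacm1 = lambda e: (-1) ** (((e - 1) // 2) % 2)       # (-1/e) for odd e

for p in (7, 23, 31, 47):          # p = 7 mod 8: (2/e) = (-1)^((p+1)/8)
    e, f = rep(p)
    assert jac2(e) == (-1) ** (((p + 1) // 8) % 2)

for p in (17, 41, 73, 89):         # p = 1 mod 8: (-2/e) = (2/p)_4
    e, f = rep(p)
    quartic = 1 if pow(2, (p - 1) // 4, p) == 1 else -1
    assert jacm1(e) * jac2(e) == quartic
```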
\medskip
Thus from Theorem \ref{TPepG} we get the following
\begin{cor}
Let $m = 2q$, where $q \equiv 7 \bmod 16$ is a prime.
For any prime $p$ represented by $(2,0,q)$, the
diophantine equation $px^4 - my^4 = z^2$ has only the
trivial solution. The same result holds for primes
$q \equiv 1 \bmod 8$ with $(2/q)_4 = -1$.
\end{cor}
It is an easy exercise to deduce countless other families
of similar results from Theorem \ref{TPepG}.
\subsection*{The Proof of Theorem \ref{TPepG}}
The part of Thm. \ref{TPepG} which is easiest to prove generalizes
Remark 1:
\begin{lem}
If $p$ is represented by a form $Q$ in the principal genus of
$\Cl^+(-4m)$, then the equation $px^4 - my^2 = z^2$ has a nonzero
integral solution.
\end{lem}
\begin{proof}
The solvability of $px^2 = z^2 + my^2$ follows from genus theory:
the prime $p$ is represented by a form in the principal genus,
hence any form in the principal genus (in particular the principal
form $(1,0,-m)$) represents $p$ rationally. Multiplying through by
$x^2$ we find $px^4 - m(xy)^2 = (xz)^2$.
\end{proof}
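For a concrete instance of the multiplication step, take $p = 23$, $m = 56$ and the solution $(2,1,6)$ from Remark 4 above (used here only to illustrate the identity, not the genus hypothesis):

```python
# From p*x^2 - m*y^2 = z^2, multiplying by x^2 gives p*x^4 - m*(x*y)^2 = (x*z)^2.
p, m = 23, 56
x, y, z = 2, 1, 6
assert p * x**2 - m * y**2 == z**2
assert p * x**4 - m * (x * y) ** 2 == (x * z) ** 2
```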
The main idea behind the proof of Thm. \ref{TPepG} is the content
of the following
\begin{prop}\label{Pmain}
Assume that $Q$ is a form with discriminant $\Delta = -4m$ whose
class is a square but not a fourth power in $\Cl^+(\Delta)$. Let
$p \nmid \Delta$ be a prime represented by $Q$, and write
$\Delta = \Delta_0 f^2$, where $\Delta_0$ is a fundamental
discriminant. Then the diophantine equation $px^4 - my^4 = z^2$
does not have an integral solution $(x,y,z)$ with $\gcd(x,f) = 1$.
\end{prop}
\begin{proof}
The form $Q_0 = (1,0,m)$ represents $px^4 = Q_0(z,y^2)$, and $Q$
represents $p$. By Cor. \ref{C1}, the square $p^2x^4$ is represented
primitively by any form in the class $[Q][Q_0]^e = [Q]$ (for some
$e = \pm 1$), hence by $Q$ itself. Thus there is a form $Q_1$ with
$Q_1^2 \sim Q$ representing $px^2$, which by Cor. \ref{Cgen} is in
the same genus as $Q$. But then $Q_1$ and $Q$ differ by a square
class, which implies that $[Q_1]$ is a square and that $[Q]$
is a fourth power: contradiction.
\end{proof}
Now we can give the
\begin{proof}[Proof of Thm. \ref{TPepG}]
We have to show that if $px^4 - my^4 = z^2$ has a nontrivial integral
solution, then there is a solution satisfying $\gcd(x,f) = 1$. Applying
Prop. \ref{Pmain} then gives the desired result.
\subsection*{Case (1)} If $\Delta$ is fundamental, then $f = 1$,
and the condition $\gcd(x,f) = 1$ is trivially satisfied.
\subsection*{Case (2)}
Write $\Delta = -4m$ and $m = f^2n$; then $\Delta_0 = -4n$.
Assume that $px^4 - my^4 = z^2$ and that $q \mid x$ for some prime
$q \mid f$, say $f = qg$ and $x = qX$. Then $q \mid z$, and $z = qz_1$
gives $pq^2X^4 - g^2ny^4 = z_1^2$. If $q \nmid y$, reduction modulo $q$
implies $(\frac{-n}q) = +1$, contradicting our assumptions since $q$ is
odd and $(\frac{\Delta_0}q) = (\frac{-n}q)$. If $q \mid y$, then
$y = qY$ and $z_1 = qZ$, and we find the smaller solution
$pX^4 - mY^4 = Z^2$.
\subsection*{Case (3)} If $\Delta = 4 \Delta_0$ with
$\Delta_0 \equiv 1 \bmod 8$, then we cannot exclude the possibility
that $x$ might be even. Thus in order to guarantee the applicability
of Prop. \ref{Pmain} we have to get rid of the factor $4$ in
$\Delta = -4m$. This is achieved in the following way.
By (\ref{ELip}), we have $h(-4m) = (2 - (\frac{-m}2)) h(-m) = h(-m)$
if $-m \equiv 1 \bmod 8$. Since there is a natural surjective projection
$\Cl(-4m) \lra \Cl(-m)$, this implies that $\Cl(-4m) \simeq \Cl(-m)$.
But then the form $Q$ projects to a form $\ov{Q}$ with discriminant $-m$
whose class is a square but not a fourth power. Moreover, every integer
represented by $Q$ is also represented by $\ov{Q}$. An application of
Prop. \ref{Pmain} with $\Cl^+(\Delta)$ replaced by $\Cl^+(\Delta_0)$
now gives the desired result.
\subsection*{Case (4)} Here $m = 4n$ for $n \equiv 1 \bmod 4$.
If $px^4 - 4ny^4 = z^2$ and $x = 2X$, then $z = 2z_1$ and
$4pX^4 - ny^4 = z_1^2$. From
$0 \equiv z_1^2 + ny^4 \equiv z_1^2 + y^4 \bmod 4$ we find that
$z_1 = 2Z$ and $y = 2Y$ must be even, hence $pX^4 - 4nY^4 = Z^2$.
Repeating this if necessary we find a solution with $x$ odd.
\subsection*{Case (5)} Here $m = 16n$ with $n \equiv 2 \bmod 4$, and we
have to find a solution with $x$ odd.
If $px^4 - 16ny^4 = z^2$ with $x = 2X$, then $z = 4Z$, hence
$pX^4 - ny^4 = Z^2$.
Since $p$ is represented by a form in the principal genus,
we must have $pa^2 = b^2 + mc^2$ for some odd integer $a$;
this implies $p \equiv 1+mc^2 \equiv 1 \bmod 8$.
If $y$ were odd, then $Z^2 \equiv pX^4 - ny^4 \equiv X^4 - 2 \bmod 4$
would give $Z^2 \equiv 2, 3 \bmod 4$: contradiction. Thus $y = 2y_1$,
hence $pX^4 - 16ny_1^4 = Z^2$. This proves our claim.
\subsection*{Case (6)} Assume that $m = 4f^2n$ with $n \equiv 1 \bmod 4$
and $(-n/q) = -1$ for all primes $q \mid f$, and that $px^4 - my^4 = z^2$.
If $x = 2X$ and $z = 2z_1$, then $4pX^4 - f^2ny^4 = z_1^2$. If $y$ is odd,
then $z_1^2 \equiv 3 \bmod 4$: contradiction. Thus $y = 2Y$ and $z_1 = 2Z$,
hence $pX^4 - 4f^2nY^4 = Z^2$.
If $f = qg$ and $x = qX$, $z = qz_1$, then $pq^2X^4 - 4g^2ny^4 = z_1^2$.
Reduction mod $q$ gives $(-n/q) = +1$ contradicting our assumption,
except when $q \mid y$. But then $y = qY$ and $z_1 = qZ$, and we find
$pX^4 - mY^4 = Z^2$.
\end{proof}
The abstract argument in case (3) above can be made perfectly explicit:
\begin{lem}
Assume that $m \equiv 3 \bmod 4$, and that $Q = (A,B,C)$ is a
form with discriminant $B^2 - 4AC = -4m$. We either have
\begin{enumerate}
\item $4 \mid B$;
\item $B \equiv 2 \bmod 4$ and $4 \mid C$; or
\item $B \equiv 2 \bmod 4$ and $4 \mid A$.
\end{enumerate}
Then every integer represented by $Q$ is also represented by the form
$$ Q' = \begin{cases}
(A, A+\frac{B}2,\frac{A+B+C}4) & \text{ in case (1)} \\
(A,\frac{B}2,\frac{C}4) & \text{ in case (2)} \\
(\frac{A}4,\frac{B}2,C) & \text{ in case (3)}
\end{cases} $$
with discriminant $-m$.
\end{lemma}
\begin{proof}
From $B^2 - 4AC = -4m$ we get $(\frac B2)^2 - AC = -m \equiv 1 \bmod 4$.
If $4 \mid B$, then $AC \equiv 3 \bmod 4$, which implies
$A + C \equiv 0 \bmod 4$. Thus in case (1), $\frac B2$ and
$\frac{A+B+C}4$ are integers.
If $2 \parallel B$, then $1-AC \equiv 1 \bmod 4$ implies that $4 \mid AC$.
Since the form $Q$ is primitive, we either have $4 \mid A$ or $4 \mid C$.
In case (1), substitute $x = X + \frac 12 Y$ and $y = \frac12 Y$
in $Q(x,y) = Ax^2 + Bxy + Cy^2$ and observe that if $x$ and $y$ are
integers, then so are $X$ and $Y$. We find
$Q(x,y) = A(X + \frac 12 Y)^2 + B(X + \frac 12 Y)(\frac 12 Y)
+ C(\frac 12 Y)^2 = AX^2 + (A + \frac B2)XY + \frac{A+B+C}4 Y^2$.
In case (2), set $x = X$ and $y = \frac12 Y$, and in case (3), set
$x = \frac12 X$ and $y = Y$; in these two cases the coefficients of
$Q'$ are clearly integers.
\end{proof}
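The substitutions in cases (1) and (3) can be checked numerically on sample forms (my choices, with $m = 3$):

```python
# case (1): Q = (1,0,3), 4 | B, substitution (X, Y) = (x - y, 2y)
# case (3): Q = (4,2,1), 4 | A, substitution (X, Y) = (2x, y)
disc = lambda a, b, c: b * b - 4 * a * c
form = lambda a, b, c: lambda x, y: a * x * x + b * x * y + c * y * y

Q1, Q1p = form(1, 0, 3), form(1, 1, 1)        # Q' = (A, A+B/2, (A+B+C)/4)
Q3, Q3p = form(4, 2, 1), form(1, 1, 1)        # Q' = (A/4, B/2, C)
assert disc(1, 0, 3) == disc(4, 2, 1) == -4 * 3 and disc(1, 1, 1) == -3

for x in range(-6, 7):
    for y in range(-6, 7):
        assert Q1(x, y) == Q1p(x - y, 2 * y)  # every value of Q is a value of Q'
        assert Q3(x, y) == Q3p(2 * x, y)
```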
\medskip\noindent
{\bf Remark.} The proof we have given works with forms in
$\Cl(\Delta)^2$; for distinguishing simple squares from
fourth powers, one can introduce characters on
$\Cl(\Delta)^2/\Cl(\Delta)^4$. This was first done in
a special case by Dirichlet, who showed how to express
these characters via quartic residue symbols; much later, his
results were generalized within the theory of spinor genera by
Estes \& Pall \cite{EP}. This explains why most proofs of the
nonsolvability of equations of the form $ax^4 + by^4 = z^2$
(see in particular \cite{LFC}) use quartic residue symbols.
\section{Examples of Lind-Reichardt type}
The most famous counterexample to the Hasse principle is due to
Lind \cite{Lind} and Reichardt \cite{Reich}; Reichardt showed that the
equation $17x^4 - 2y^2 = z^4$ has solutions in every localization
of $\Q$, but no rational solutions except the trivial $(0,0,0)$,
and Lind constructed many families of similar examples. Below,
we will show that our construction also gives some of their
examples; we will only discuss the simplest case of fundamental
discriminants and are content with the remark that there are
similar results in which $-4AC$ is not assumed to be fundamental.
\begin{thm}
Let $A$ and $C$ be coprime positive integers such that
$\Delta = -4AC$ is a fundamental discriminant.
Then $Q = (A,0,C)$ is a primitive form with discriminant $\Delta$.
If the equivalence class of $Q$ is a square but not a fourth power in
$\Cl^+(\Delta)$, then the diophantine equation $Ax^4 - z^4 = Cy^2$
only has the trivial solution $(0,0,0)$.
\end{thm}
\begin{proof}
We start with the observation that a nontrivial solution $(x,y,z)$
would make the form $(A,0,-C)$ represent $z^4 = Ax^4 - Cy^2$.
Since $4AC$ is fundamental, the class $[Q]$ must be a fourth power,
contradicting our assumptions.
\end{proof}
The following result is very well known:
\begin{cor}
Let $p \equiv 1 \bmod 8$ be a prime with $\sym{2}{p} = -1$.
Then the diophantine equation $px^4 - 2y^2 = z^4$ has no
nontrivial solution.
\end{cor}
\begin{proof}
The form $Q = (p,0,-2)$ has discriminant $8p$, and Prop. \ref{PKap2}
implies that its class is a square but not a fourth power in $\Cl^+(8p)$.
\end{proof}
\section{Hasse's Local-Global Principle}
Some diophantine equations $ax^4 + by^4 = z^2$ can be proved to
have only the trivial solution by congruences, that is, by studying
solvability in the localizations $\Q_p$ of the rationals. A special
case of Hasse's Local-Global Principle asserts that quadratic
equations $ax^2 + by^2 = z^2$ have nontrivial solutions
in integers (or, equivalently, in rational numbers) if and
only if they have solutions in every completion $\Q_p$.
It is quite easy to see (cf. \cite{AL}) that $ax^4 + by^4 = z^2$
(and therefore also $ax^2 + by^2 = z^2$) has local solutions for
every prime $p \nmid 2ab$, so it remains to check solvability in
the reals $\R = \Q_\infty$ and in the finitely many $\Q_p$ with
$p \mid 2ab$.
Actually we find
\begin{prop}
If $px^2 - my^2 = z^2$ has a rational solution, then
the quartic $px^4 - my^4 = z^2$ has a solution in
$\Z_q$ for every prime $q > 2$.
\end{prop}
\begin{proof}
By classical results (see \cite{AL} for a very simple and elementary
proof), $px^4 - my^4 = z^2$ is locally solvable for every prime
$q \nmid 2pm$. Thus we only have to look at odd primes $q \mid mp$.
From the solvability of $px^2 - my^2 = z^2$ we deduce that
$(\frac pq) = +1$ for every odd prime $q \mid m$. But then
$\sqrt{p} \in \Z_q$, and we can solve $px^4 - my^4 = z^2$
simply by taking $(x,y,z) = (1,0,\sqrt{p})$.
Moreover, from $(\frac{-m}p) = +1$ we deduce that $\sqrt{-m} \in \Z_p$,
hence $(x,y,z) = (0,1,\sqrt{-m}\,)$ is a $p$-adic solution of
$px^4 - my^4 = z^2$.
\end{proof}
Thus the quartic $px^4 - my^4 = z^2$, where $p, m > 0$, will be
everywhere locally solvable if and only if it is solvable in the
$2$-adic integers $\Z_2$. Now we claim
\begin{prop}
If $m \equiv 2 \bmod 4$, P\'epin's equations are solvable in $\Z_2$,
hence have solutions in every completion of $\Q$.
\end{prop}
\begin{proof}
Assume first that $m \equiv 2 \bmod 8$. Then $Q_0 = (1,0,m)$
(and therefore any form in the principal genus) represents
primes $p \equiv 1, 3 \bmod 8$. If $p \equiv 1 \bmod 8$,
then $p \cdot 1^4 - m \cdot 0^4 \equiv 1 \bmod 8$ is a square in
$\Z_2$, and if $p \equiv 3 \bmod 8$, then
$p\cdot 1^4 - m \cdot 1^4 \equiv 3 - 2 \equiv 1 \bmod 8$
is a square in $\Z_2$.
Assume next that $m \equiv 6 \bmod 8$.
Then $Q_0 = (1,0,m)$ represents only primes $p \equiv \pm 1 \bmod 8$;
the same holds for all forms in the principal genus, in particular
the form $Q$ represents only primes $p \equiv \pm 1 \bmod 8$.
For primes $p \equiv 1 \bmod 8$, the element $p\cdot 1^4 - m \cdot 0^4$
is a square in $\Z_2$, so $px^4 - my^4 = z^2$ is solvable in $\Z_2$.
If $p \equiv 7 \bmod 8$, then
$p\cdot 1^4 - m \cdot 1^4 = p-m \equiv 7 - 6 \equiv 1 \bmod 8$
is a square in $\Z_2$.
\end{proof}
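The fact used in these propositions, that every $a \equiv 1 \bmod 8$ is a square in $\Z_2$, can be checked constructively by lifting a square root one binary digit at a time; a minimal sketch (the function name is mine):

```python
def two_adic_sqrt(a, prec):
    """Return z with z^2 = a mod 2^prec, for a = 1 mod 8 (Hensel lifting)."""
    assert a % 8 == 1 and prec >= 3
    z = 1                                   # z^2 = a mod 2^3 to start
    for i in range(3, prec):
        if (z * z - a) % (1 << (i + 1)):    # wrong modulo 2^(i+1)?
            z += 1 << (i - 1)               # fix the next binary digit
    return z

z = two_adic_sqrt(17, 30)                   # 17 = 1 mod 8, as for p = 17
assert (z * z - 17) % (1 << 30) == 0
```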
A similar but simpler proof yields
\begin{prop}
If $m \equiv 0 \bmod 8$, P\'epin's equations are solvable in $\Z_2$,
hence have solutions in every completion of $\Q$.
\end{prop}
In fact, primes represented by a form in the principal genus are
$\equiv 1 \bmod 8$, and for these, showing the $2$-adic solvability
is trivial. Similarly we can show
\begin{prop}
If $m \equiv 1 \bmod 8$, then $p \equiv 1 \bmod 4$, and P\'epin's
equations are solvable in $\Z_2$ (hence in every completion of $\Q$)
if $p \equiv 1 \bmod 8$.
\end{prop}
\begin{proof}
Since $p$ is represented by some form in the principal genus,
we must have $pt^2 = r^2 + ms^2$ for integers $r, s, t$, with
$t$ coprime to $4m$. This implies that $t$ is odd, and that
$r$ or $s$ is even, hence we must have $p \equiv pt^2 \equiv 1 \bmod 4$
as claimed.
We have to study the solvability of $px^4 - my^4 = z^2$ in $\Z_2$.
If $p \equiv 1 \bmod 8$, then $\sqrt{p} \in \Z_2$, and we have
the $2$-adic solution $(x,y,z) = (1,0,\sqrt{p}\,)$.
\end{proof}
Thus although not all of P\'epin's examples are counterexamples
to Hasse's Local-Global Principle, his construction gives infinite
families of equations $px^4 - my^4 = z^2$ which have local solutions
everywhere but only the trivial solution in integers. As is well known,
such equations represent elements of order $2$ in the Tate-Shafarevich
group of certain elliptic curves, and it is actually quite easy
to use P\'epin's construction to find Tate-Shafarevich groups with
arbitrarily high $2$-rank (see \cite{LGr} for the history and a direct
elementary proof of this result).
\section*{Acknowledgement}
I would like to thank the referee for several helpful comments.
\section{Introduction}
Since the discovery of superconductivity, its remarkable physical properties and wide range of applications have been the subject of intensive research. In particular, finding superconductors with a high critical temperature ($T_{c}$) is a goal that researchers are keen to pursue. In 1968, Ashcroft predicted that dense metallic hydrogen may be a candidate for high-temperature superconductivity \cite{Ashcroft}: the small mass of H leads to a high Debye frequency, which increases $T_{c}$ according to the Bardeen-Cooper-Schrieffer (BCS) theory \cite{BCS}. Nevertheless, metallic hydrogen is difficult to synthesize experimentally \cite{Dalladay, Loubeyre, Eremets}. Researchers have therefore considered whether pure hydrogen can be replaced by compounds with high hydrogen content in the search for high-temperature superconductors. Recently, several 3D polyhydrides have been shown to have high $T_{c}$. For example, the hydrogen sulfide H$_{3}$S was theoretically predicted \cite{Duan1} and experimentally confirmed \cite{Drozdov1} to superconduct with $T_{c}$ of 203 K at 155 GPa. Later, $T_{c}$ was predicted to be 274-286 K for LaH$_{10}$ at 210 GPa, 253-276 K for YH$_{9}$ at 150 GPa, and 305-326 K for YH$_{10}$ at 250 GPa \cite{Liu, Peng}. Among them, high-temperature superconductivity has been observed experimentally in LaH$_{10}$ and YH$_{9}$ \cite{Drozdov2, Somayazulu, Snider1}. More recently, a $T_{c}$ of 288 K in a carbonaceous sulfur hydride at 267 GPa was reported \cite{Snider2}.
However, the above 3D high-temperature superconductors can only be realized at high pressure, which greatly limits their applications. On the other hand, 2D superconductors may find good use in nano superconducting devices. Thus, exploring 2D high-temperature superconductors at ambient pressure has attracted a lot of attention in recent years, and some progress has been made on superconductivity in 2D materials. For example, lithium- and calcium-deposited graphene, LiC$_{6}$ and CaC$_{6}$, were predicted to be phonon-mediated superconductors with $T_{c}$ of 8.1 K and 1.4 K, respectively \cite{Profeta}. Later, LiC$_{6}$ was experimentally proved to be a superconductor with $T_{c}$ of 5.9 K \cite{Ludbrook}. Moreover, it was predicted that doping and biaxial tensile strain can increase the $T_{c}$ of graphene to 31.6 K \cite{Duan}. Magic-angle bilayer graphene has been found to be a superconductor with $T_{c}$ of 1.7 K \cite{CaoY1, CaoY2}. Aluminum-deposited graphene (AlC$_{8}$) was also predicted to be a superconductor with $T_{c}$ of 22.2 K under doping and stretching \cite{Lu}. Besides graphene, there are other 2D superconductors, which will be shown in Table I. However, the $T_{c}$ of these superconductors is relatively low, and whether 2D high-temperature superconductivity can be achieved through other methods is a hotspot of current research.
Hydrogenation, which can modify the electronic properties of materials, has attracted more and more attention in recent years. Typically, hydrogenation can modify the electronic properties of graphene \cite{sofo, Graphane, luhongyan1, luhongyan2, luhongyan3, Sahin}. For instance, fully hydrogenated graphene, called graphane, has been theoretically predicted \cite{sofo} and experimentally synthesized \cite{Graphane} as an insulator with a direct band gap of 3.5 eV. Ferromagnetism and superconductivity with possible $p + ip$ pairing symmetry were predicted in partially hydrogenated graphene \cite{luhongyan2}. In addition, theoretical calculations show that doped graphane is a high-temperature electron-phonon superconductor with $T_{c}$ above 90 K \cite{Savini}. Similarly, hydrogenated monolayer magnesium diboride (MgB$_{2}$) was predicted to be a superconductor with $T_{c}$ of 67 K \cite{Bekaert}. Thus, it is of great significance to study the effect of hydrogenation on the electronic properties and possible high-temperature superconductivity of 2D materials at ambient pressure.
In recent years, 2D monolayer phosphorus carbides have been shown to exhibit interesting physical properties. For example, several phosphorus carbides, $\alpha_{0}$-PC, $\alpha_{1}$-PC, $\alpha_{2}$-PC, $\beta_{0}$-PC, $\beta_{1}$-PC, and $\beta_{2}$-PC, were predicted to be stable as atomically thin layers and to be metals, semimetals, or semiconductors, respectively \cite{Guan}. $\beta_{0}$-PC has also been predicted to be a superconductor with $T_{c}$ of 13 K \cite{wang}. Monolayer PC$_{6}$ was predicted to be a semiconductor with a direct bandgap of 0.84 eV and anisotropic carrier mobility \cite{Yu}. Monolayer PC$_{3}$, a semiconductor with an indirect bandgap of 1.46 eV, was proposed to be a promising thermoelectric material \cite{Rajput} or to be used as a K-ion battery anode \cite{Song}. Among the phosphorus carbides, monolayer PC$_{3}$ has a lattice structure similar to that of graphene and offers the favorable advantages of high symmetry and light mass. Therefore, it is anticipated that hydrogenated monolayer PC$_{3}$ may also show superconductivity. Based on first-principles calculations, we predict that pristine monolayer HPC$_{3}$ is a superconductor with $T_{c}$ of 31.0 K. By further applying biaxial tensile strain, the $T_{c}$ can be boosted to 57.3 K, exceeding the McMillan limit.
The rest of this article is organized as follows. In Sec. II, we describe the computational details. In Sec. III, we present the results and discussions. First, we show the stability of HPC$_{3}$. Second, we discuss the electronic structure, including the band structure, density of states (DOS), and Fermi surface (FS) of HPC$_{3}$. Third, we calculate the phonon spectrum and electron-phonon coupling of HPC$_{3}$ and then the superconducting $T_{c}$. Sec. IV concludes the work.
\section{Results and discussions}
\subsection{Lattice structure and stability}
\begin{figure}
\centering
\label{fig:stucture}
\includegraphics[width=16cm, height=7cm]{stucture}
\caption{(a) Top view of HPC$_{3}$. (b) Side view of HPC$_{3}$. (c) Variation of the free energy in the AIMD simulations over a time scale of 6 ps at 700 K, together with a snapshot of the last frame. The yellow, green, and blue spheres represent phosphorus, carbon, and hydrogen atoms, respectively. The solid black lines represent the unit cell.}
\end{figure}
The crystal structure of HPC$_{3}$ is shown in Fig. 1(a). The yellow, green, and blue spheres represent phosphorus, carbon, and hydrogen atoms, respectively. The structure consists of honeycomb rings of C atoms, each surrounded by six P atoms on the outside. The unit cell contains two P atoms, six C atoms, and two H atoms, with each H atom bonding to a P atom on a different side of the C plane, as can easily be seen from the side view of HPC$_{3}$ in Fig. 1(b). The lattice constant is optimized to 5.408 $\textmd{\AA}$. From the side view, it is clear that the lattice structure is buckled; the vertical distance between hydrogen atoms on different sides ($h$) is 3.874 $\textmd{\AA}$, and the P-H, P-C, and C-C bond lengths are 1.445 $\textmd{\AA}$, 1.765 $\textmd{\AA}$, and 1.428 $\textmd{\AA}$, respectively. In all the calculations, a vacuum space of 20 $\textmd{\AA}$ was used to avoid interactions between adjacent layers. In PC$_{3}$, the C atoms are in the $sp^{2}$ configuration as in graphene, and the P atoms adopt an $sp^{3}$ hybridization, in which three hybrid orbitals form covalent bonds with the neighboring C atoms and one hybrid orbital is filled with a lone electron pair. After hydrogenation, one electron of the lone pair forms a covalent bond with the H atom, leaving the other as a $\pi$-like electron.
The stability of HPC$_{3}$ was studied from two aspects: thermodynamic stability and dynamical stability. Thermodynamic stability was verified by ab-initio molecular dynamics (AIMD) simulations. A 2 $\times$ 2 $\times$ 1 supercell was used to minimize the effect of the periodic boundary condition. The variation of the free energy in the AIMD simulations within 6 ps and a snapshot of the last frame are shown in Fig. 1(c). The results show that the integrity of the structure is preserved even at 700 K, proving its thermodynamic stability. Moreover, the phonon spectrum of HPC$_{3}$ shows no imaginary frequency (as will be shown later), indicating that it is dynamically stable. Thus, the structure of HPC$_{3}$ is stable at room temperature, satisfying both thermodynamic and dynamical stability.
\subsection{Electronic structure}
\begin{figure}
\centering
\label{fig:dos}
\includegraphics[width=11.5cm, height=10cm]{dos}
\caption{(a) Orbital-projected electronic band structure of HPC$_{3}$ along high-symmetry line $\Gamma$-M-K-$\Gamma$. (b) The total DOS of HPC$_{3}$ and the total DOS of P, C and H atoms. (c) The partial density of states (PDOS) of HPC$_{3}$. (d) The side view of FS of HPC$_{3}$. (e) Charge density of HPC$_{3}$ in the range of -1 to 1 eV near the Fermi level. The Fermi level in (a) (b) and (c) are set to zero.}
\end{figure}
\begin{figure*}
\centering
\label{fig:phonon}
\includegraphics[width=16 cm, height=7cm]{phonon}
\caption{(a) Phonon dispersion of HPC$_{3}$. The red hollow circles indicate the phonon linewidth $\gamma_{\textbf{q}\nu}$. (b) Phonon dispersion of HPC$_{3}$ weighted by the vibration modes of P, C and H atoms. The black, orange, green, red, blue, and wine red hollow circles indicate P horizontal, P vertical, C horizontal, C vertical, H horizontal, and H vertical modes, respectively. (c) Total PhDOS and (d) Eliashberg spectral function $\alpha^{2}F (\omega)$ and cumulative frequency-dependent EPC function $\lambda (\omega)$.}
\end{figure*}
We first calculated the electronic band structure and DOS of monolayer PC$_{3}$ based on first-principles calculations, and found that it is an indirect-bandgap semiconductor with a gap of 1.488 eV (Fig. S1 \cite{supplement}), consistent with the previous result \cite{Rajput}. For the electronic structure of HPC$_{3}$, the band structure, DOS, partial density of states (PDOS), and FS were studied, with the results shown in Fig. 2. The band structure in Fig. 2(a) shows that two bands cross the Fermi level, proving that HPC$_{3}$ is a metal. Fig. 2(b) shows the total DOS and the contribution of each element. It can be seen that the states at the Fermi level are mainly contributed by C atoms, with a smaller contribution from H atoms and the smallest from P atoms. The PDOS of HPC$_{3}$ is shown in Fig. 2(c), which confirms that the $p_{z}$ orbital of the C atoms contributes the most near the Fermi level, followed by the $1s$ orbital of H. Fig. 2(d) shows the Fermi surface of HPC$_{3}$, where an electron pocket around $\Gamma$ and six hole pockets around $K$ can be clearly seen. Furthermore, the charge density in the range of -1 to 1 eV near the Fermi level was plotted, as shown in Fig. 2(e). It is seen that the charge mainly originates from the $p_{z}$ orbital of the C atoms, i.e., the $\pi$-bonding electrons. It was previously proposed that superconductivity mainly arises from the coupling of $\sigma$-bonding bands with phonons \cite{GaoMiao1,GaoMiao2}. For HPC$_{3}$, since the two $\pi$-bonding bands have a significant impact on the electronic DOS at the Fermi level, we propose that the $\pi$-bonding bands may also result in strong electron-phonon coupling, leading to superconductivity with high $T_{c}$.
\subsection{Electron-phonon coupling and possible superconductivity}
Since HPC$_{3}$ is a metal, we study its possible phonon-mediated superconductivity, starting with its phonon properties. Fig. 3(a) shows the phonon dispersion with the phonon linewidth $\gamma_{\textbf{q}\nu}$ (red hollow circles). There are ten atoms in the unit cell, leading to thirty phonon bands: three acoustic and twenty-seven optical phonon modes. The spectrum extends over a wide frequency range up to about 2130 cm$^{-1}$, and no imaginary frequency appears at low energies, justifying the dynamical stability. The vibration modes at the $\Gamma$ point are shown in Fig. S2. From the decomposition of the phonon spectrum with respect to P, C, and H atomic vibrations, shown in Fig. 3(b), it is seen that the main contributions to the acoustic branches below 200 cm$^{-1}$ are the out-of-plane vibration of C atoms and the in-plane vibration of H atoms. From 200 to 472 cm$^{-1}$, the main contribution is from the out-of-plane vibration of C atoms, the in-plane vibration of H atoms, and a small amount of the in-plane vibration of P atoms. In the range 472 $<$ $\omega$ $<$ 800 cm$^{-1}$, the main contribution is from the in-plane vibration of H atoms and a small amount of the in-plane vibration of C atoms. From 800 to 1347 cm$^{-1}$, the main contribution is from the in-plane vibration of C atoms and a small amount of the in-plane vibration of H atoms. The out-of-plane vibration of H atoms occupies the high frequencies around 2130 cm$^{-1}$.
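The $T_c$ entries collected in Table I below, including our value of 31.0 K, are the kind obtained from $\lambda$, $\omega_{log}$ and $\mu^{*}$; a minimal sketch, assuming the standard McMillan-Allen-Dynes formula was used:

```python
from math import exp

def allen_dynes_tc(omega_log, lam, mu_star):
    """McMillan-Allen-Dynes estimate of Tc; omega_log and Tc in K."""
    return (omega_log / 1.2) * exp(-1.04 * (1.0 + lam)
                                   / (lam - mu_star * (1.0 + 0.62 * lam)))

# our parameters: omega_log = 482.84 K, lambda = 0.95, mu* = 0.10
print(round(allen_dynes_tc(482.84, 0.95, 0.10), 1))   # -> 31.0
```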
\begin{table*}
\renewcommand\arraystretch{0.8}
\centering
\caption{\label{tab:table2} Superconducting parameters entering the prediction of $T_{c}$ for some reported 2D phonon-mediated superconductors: doping, biaxial tensile strain $\varepsilon$, Coulomb pseudopotential $\mu^{*}$, logarithmically averaged phonon frequency $\omega_{log}$ (in K), total EPC constant $\lambda$, and estimated $T_{c}$ (in K). Experimental $T_{c}$ values are marked in the table.}
~\\
\scalebox{0.8}{
\begin{tabular}{cccccccc}
\hline
\hline
2D Materials & doping & $\varepsilon$ & $\mu^{*}$ & $\omega_{log}$ (K) & $\lambda$ & $T_{c}$ (K) & Ref. \\
\hline
graphene & 4.65$\times$10$^{14}$ cm$^{-2}$ & 16.5$\%$ & 0.10 & 289.06 & 1.45 & 30.2 & \cite{Duan} \\
graphane & 10$\%$ $p$-doping & 0.0 & 0.13 & 909.35 & 1.45 &91.1 & \cite{Savini}\\
stanene & Li deposition & 0.0 & 0.13 & 63.60 & 0.65 & 1.4 & \cite{st} \\
silicene & 3.51$\times$10$^{14}$ cm$^{-2}$ & 5$\%$ & 0.10 & 304.25 & 1.08 & 16.4 & \cite{si} \\
phosphorene & 1.1 $e$/cell & 0.0 & 0.10 & - & 1.20 & 11.2 & \cite{p} \\
$\beta_{12}$ borophene & 0.0 & 0.0 & 0.10 & 323.29 & 0.89 & 18.7 & \cite{b1} \\
$\chi_{3}$ borophene & 0.0 & 0.0 & 0.10 & 383.96 & 0.95 & 24.7 & \cite{b1} \\
B$_{2}$C & 0.0 & 0.0 & 0.10 & 314.80 & 0.92 & 19.2 & \cite{b2c} \\
CaC$_{6}$ & 0.0 & 0.0 & 0.115 & 445.64 & 0.40 & 1.4 & \cite{Profeta} \\
LiC$_{6}$ & 0.0 & 0.0 & 0.115 & 399.48 & 0.61 & 8.1 & \cite{Profeta} \\
LiC$_{6}$ & 0.0 & 0.0 & - & - & 0.58 $\pm$ 0.05 & 5.9 [Exp.] & \cite{Ludbrook} \\
AlC$_{8}$ & 0.20 $e$/cell & 12$\%$ & 0.10 & 188.452 & 1.557 & 22.23 & \cite{Lu} \\
monolayer 2H-NbSe$_{2}$ & 0.0 & 0.0 & - & - & 0.75 & 3.1 [Exp.] & \cite{nb} \\
monolayer 2H-NbSe$_{2}$ & 0.0 & 0.0 & 0.16/0.15 & 134.3/144.5 & 0.84/0.67 & 4.5/2.7 & \cite{nb1}, \cite{nb2} \\
Mo$_{2}$C & 0.0 & 0.0 & 0.10 & - & 0.63 &5.9 &\cite{J.J}\\
B$_{2}$O & 0.0 & 0.0 & 0.10 & 250.0 & 0.75 & 10.35 & \cite{b2o} \\
bilayer MoS$_{2}$ &Na-intercalation& 7$\%$ & 0.10 & 135 & 1.05 & 10.05 & \cite{Shuai Dong1} \\
bilayer blue phosphorus &Li-intercalation& 0.0 & 0.10 & 221.3 & 1.2 & 20.4 & \cite{Shuai Dong2} \\
bilayer $\beta$-Sb &Ca-intercalation & 0.0 & 0.10 & - & 0.89 & 7.2 & \cite{Shuai Dong3} \\
monolayer MgB$_{2}$ & 0.0 & 0.0 & 0.13 & - & 0.68 & 20 & \cite{mgb2} \\
hydrogenated monolayer MgB$_{2}$& 0.0 & 0.0 & 0.13 & - & 1.46 & 67 & \cite{Bekaert}\\
$\beta_{0}$-PC & 0.0 & 0.0 & 0.10 & 118.0 & 1.48 & 13.35 & \cite{wang} \\
& 0.0 & 0.0 & 0.10 & 482.84 & 0.95 &31.0 & Our work \\
HPC$_{3}$ & 0.0 & 1$\%$ & 0.10 & 497.73 & 1.05 &37.3 & Our work \\
& 0.0 & 2$\%$ & 0.10 & 475.57 & 1.24 &44.4 & Our work \\
& 0.0 & 3$\%$ & 0.10 & 402.84 & 1.65 &57.3 & Our work \\
\hline
\hline
\end{tabular}
}
\end{table*}
The total phonon density of states (PhDOS) of HPC$_{3}$ is shown in Fig. 3(c). The Eliashberg spectral function $\alpha^{2}F (\omega)$ and the cumulative frequency-dependent EPC function $\lambda (\omega)$ are displayed in Fig. 3(d). Comparing the total PhDOS in Fig. 3(c) with the Eliashberg spectral function $\alpha^{2}F (\omega)$ in Fig. 3(d), their peaks occur at essentially the same frequencies. Moreover, the significant phonon linewidths around 472 and 1272 cm$^{-1}$ produce the 3rd and 5th peaks of the Eliashberg spectral function, respectively, which follows directly from the definition of $\alpha^{2}F (\omega)$ (Eq. 1). As shown in Fig. 3(d), the total EPC constant $\lambda$ is 0.95, which originates from the coupling between the phonons and the $\pi$ electrons of C atoms, with a small contribution from the $s$ electrons of H atoms. At low frequency, 0 $<$ $\omega$ $<$ 472 cm$^{-1}$, the 1st and 2nd peaks of $\alpha^{2}F (\omega)$ drive the gradual growth of $\lambda (\omega)$ to 0.45, accounting for 47$\%$ of the total EPC. In the range 472 $<$ $\omega$ $<$ 800 cm$^{-1}$, the 3rd and 4th peaks contribute a $\lambda$ strength of 0.4, accounting for 42$\%$ of the total EPC. Around 1272 cm$^{-1}$, the 5th peak contributes a $\lambda$ strength of 0.09, accounting for 10$\%$ of the total EPC. At high frequency, around 2130 cm$^{-1}$, the 6th peak of $\alpha^{2}F (\omega)$ contributes only 0.01 (1$\%$) of the total EPC. Moreover, based on the Eliashberg spectral function $\alpha^{2}F (\omega)$, the calculated logarithmically averaged phonon frequency $\omega_{log}$ is 482.84 K. Using the Allen-Dynes modified McMillan equation \cite{McMillan}, the $T_{c}$ of HPC$_{3}$ is 31.0 K, higher than that of most 2D superconductors except doped graphane \cite{Savini} and the hydrogenated monolayer MgB$_{2}$ \cite{Bekaert}, as summarized in Table I.
\begin{figure}
\centering
\includegraphics[width=10cm, height=8cm]{PHDOS}
\caption{Total PhDOS for (a) pristine and (b-d) 1$\%$, 2$\%$, and 3$\%$ biaxial tensile strained HPC$_{3}$.}
\label{fig:phdos}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm, height=8cm]{Elishberg}
\caption{Eliashberg spectral function $\alpha^{2}F (\omega)$ and EPC function $\lambda (\omega)$ for (a) pristine and (b-d) 1$\%$, 2$\%$, and 3$\%$ biaxial tensile strained HPC$_{3}$.}
\label{fig:Elishberg}
\end{figure}
Furthermore, we considered whether the application of biaxial tensile strain can increase the $T_{c}$ of HPC$_{3}$, as is the case in doped or metal-deposited graphene \cite{Duan, Lu} and doped monolayer hexagonal boron nitride \cite{GaoMiao3}. The tensile strain was applied along the two basis vector directions, and the relative increase of the lattice constant was defined as $\varepsilon=(a-a_{0})/a_{0}$, where $a_{0}$ and $a$ are the lattice constants of pristine and strained HPC$_{3}$, respectively. By applying a series of biaxial tensile strains, we found that for strains greater than 4$\%$ imaginary frequencies appear in the phonon spectrum, implying that the lattice is no longer stable. Thus, we show the effect of tensile strain on the phonons and superconductivity at $\varepsilon$ = 1$\%$, 2$\%$, and 3$\%$.
Fig. 4 shows the total PhDOS for pristine and biaxial tensile strained HPC$_{3}$. With increasing tensile strain, there is obvious phonon softening at low and intermediate frequencies, whereas the high-frequency part barely changes. This greatly enhances the electron-phonon coupling; similar behavior has been found in other 2D superconductors \cite{Duan, Lu, GaoMiao3}. Fig. 5 shows the Eliashberg spectral function $\alpha^{2}F (\omega)$ and the EPC function $\lambda (\omega)$ for pristine and biaxial tensile strained HPC$_{3}$. The calculated $\omega_{log}$, $\lambda$, and $T_{c}$ for the pristine and tensile strained cases are listed in Table I. With increasing tensile strain, $\omega_{log}$ decreases but $\lambda$ increases significantly, leading to an increase of $T_{c}$. It is worth mentioning that at $\varepsilon$ = 2$\%$ the $T_{c}$ is 44.4 K, exceeding the McMillan limit. At $\varepsilon$ = 3$\%$, the total EPC constant $\lambda$ (1.65) is greater than 1.5, so the correction factors must be considered. By solving Eqs. (7)-(10), the obtained correction factors are $f_{1}$=1.102 and $f_{2}$=1.037, and the corrected $T_{c}$ is 57.25 K. Tensile strains of up to 25$\%$ have been achieved for 2D materials \cite{lee}. Thus, the predicted monolayer HPC$_{3}$ and its strained cases are likely to be realized in future experiments, and more experimental and theoretical research on 2D HPC$_{3}$ is anticipated.
\section{Conclusion}
In summary, we have calculated the electronic structure, electron-phonon coupling, and possible superconductivity of HPC$_{3}$ using first-principles calculations. It is found that hydrogenation turns monolayer PC$_{3}$ from a semiconductor into a metal that exhibits phonon-mediated superconductivity. The superconductivity of HPC$_{3}$ mainly originates from the strong coupling between the $\pi$ electrons of C atoms and the in-plane vibration modes of C and H atoms. The transition temperature of HPC$_{3}$ is 31.0 K, higher than that of most other 2D superconductors. Under biaxial tensile strain, the $T_{c}$ can be boosted to 57.3 K, exceeding the McMillan limit. It is anticipated that the predicted monolayer HPC$_{3}$ and its strained cases can be realized in future experiments.
\section{METHODS}
All calculations related to HPC$_{3}$ in this article were performed in the framework of density functional theory (DFT), as implemented in the Vienna ab-initio simulation package (VASP) \cite{vasp} and the Quantum Espresso (QE) program \cite{QE}. The electron-ion interaction was described by the projector-augmented-wave (PAW) method \cite{PAW}, and the exchange-correlation potential was treated within the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) parametrization \cite{PBE}. Both the lattice parameters and the atomic positions were relaxed to obtain the optimized structure. The cutoffs for the wave functions and the charge density were set to 80 Ry and 800 Ry, respectively. Electronic integration was carried out on a 24 $\times$ 24 $\times$ 1 $k$-point grid. For the calculations of the FS and DOS, $k$-point grids of 60 $\times$ 60 $\times$ 5 and 48 $\times$ 48 $\times$ 1 were used, respectively. The phonon and electron-phonon coupling were calculated on a 12 $\times$ 12 $\times$ 1 $q$-point grid, and a denser 48 $\times$ 48 $\times$ 1 $k$-point grid was used for evaluating an accurate electron-phonon interaction matrix.
The total electron-phonon coupling (EPC) constant $\lambda$ can be obtained from the isotropic Eliashberg spectral function \cite{ref:41,ref:42,McMillan}
\begin{eqnarray}
\alpha^{2}F(\omega)=\frac{1}{{2\pi}{N(E_{F})}}\sum_{\mathbf{q}\nu}\delta(\omega-\omega_{\mathbf{q}\nu})\frac{\gamma_{\mathbf{q}\nu}}{\hbar\omega_{\mathbf{q}\nu}},
\end{eqnarray}
\begin{eqnarray}
\lambda=2\int_{0}^{\infty}\frac{\alpha^{2}F(\omega)}{\omega}\,d\omega=\sum_{\mathbf{q}\nu}^{}\lambda_{\mathbf{q}\nu},
\end{eqnarray}
where $\alpha^{2}F(\omega)$ is the Eliashberg spectral function, $N(E_{F})$ is the DOS at the Fermi level, $\omega_{\mathbf{q}\nu}$ is the phonon frequency of the $\nu$th phonon mode with wave vector $\mathbf{q}$, and $\gamma_{\mathbf{q}\nu}$ is the phonon linewidth \cite{ref:41,ref:42,McMillan}. The $\gamma_{\mathbf{q}\nu}$ can be estimated by
\begin{eqnarray}
\begin{split}
\gamma_{\mathbf{q}\nu}=\frac{2\pi\omega_{\mathbf{q}\nu}}{\Omega_{BZ}}\ \sum_{\mathbf{k},n,m}\lvert g^{\nu}_{\mathbf{k}n,\mathbf{k}+\mathbf{q}m}\rvert^{2}\delta(\epsilon_{\mathbf{k}n}-E_{F})\\\delta(\epsilon_{\mathbf{k}+\mathbf{q}m}-E_{F}),
\end{split}
\end{eqnarray}
where $\Omega_{BZ}$ is the volume of the BZ, $\epsilon_{\mathbf{k}n}$ and $\epsilon_{\mathbf{k}+\mathbf{q}m}$ are the Kohn-Sham energies, and $g^{\nu}_{\mathbf{k}n,\mathbf{k}+\mathbf{q}m}$ represents the screened electron-phonon matrix element. $\lambda_{\mathbf{q}\nu}$ is the EPC constant for phonon mode $\mathbf{q}\nu$, defined as
\begin{eqnarray}
\lambda_{\mathbf{q}\nu}=\frac{\gamma_{\mathbf{q}\nu}}{\pi\hbar N(E_{F})\omega^{2}_{\mathbf{q}\nu}}.
\end{eqnarray}
$T_{c}$ is estimated by the Allen-Dynes modified McMillan equation \cite{McMillan}
\begin{eqnarray}
T_{c}=\frac{\omega_{log}}{1.2}\exp\left[\frac{-1.04(1+\lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\right].
\end{eqnarray}
The Coulomb pseudopotential $\mu^{*}$ in Eq. (5) is set to 0.1, and the logarithmic average of the phonon frequencies $\omega_{log}$ is defined as
\begin{eqnarray}
\omega_{log}=\exp\left[\frac{2}{\lambda}\int_{0}^{\omega_{max}}\alpha^{2}F(\omega)\frac{\log\omega}{\omega}\,d\omega\right].
\end{eqnarray}
For strong EPC cases, i.e., $\lambda>1.5$, $T_{c}$ is estimated by \cite{McMillan}
\begin{eqnarray}
T_{c}=f_{1}f_{2}\frac{\omega_{log}}{1.2}\exp\left[\frac{-1.04(1+\lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\right].
\end{eqnarray}
Here, $f_{1}$ and $f_{2}$ are the strong-coupling and shape correction factors, respectively, with
\begin{eqnarray}
f_{1}=\{1+[\frac{\lambda}{2.46(1+3.8\mu^{*})}]^{3/2}\}^{1/3},
\end{eqnarray}
\begin{eqnarray}
f_{2}=1+\frac{[(\omega_{2}/\omega_{log})-1]\lambda^{2}}{\lambda^{2}+3.312(1+6.3\mu^{*})^{2}(\omega_{2}/\omega_{log})^{2}},
\end{eqnarray}
in which $\omega_{2}$ is defined as
\begin{eqnarray}
\omega_{2}=[\frac{2}{\lambda}\int_{0}^{\omega_{max}}\alpha^{2}F(\omega){\omega}d\omega]^{1/2}.
\end{eqnarray}
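Once $\lambda$, $\omega_{log}$, and $\mu^{*}$ are known, Eqs. (5), (7), and (8) reduce to a few lines of arithmetic. The following Python sketch (function names are ours; the numbers are taken from Table I purely as a consistency check) reproduces the pristine $T_{c}$ of 31.0 K and the strong-coupling factor $f_{1}\approx 1.102$ at $\varepsilon$ = 3$\%$. Since $f_{2}$ (Eq. 9) additionally requires $\omega_{2}$ from Eq. (10), the product $f_{1}f_{2}$ is passed in as a number here.

```python
import math

def allen_dynes_tc(lam, wlog_K, mu_star=0.10, f1f2=1.0):
    # Eq. (5) / Eq. (7): wlog_K in kelvin, returns Tc in kelvin
    gap = lam - mu_star * (1.0 + 0.62 * lam)
    return f1f2 * (wlog_K / 1.2) * math.exp(-1.04 * (1.0 + lam) / gap)

def f1_strong_coupling(lam, mu_star=0.10):
    # Eq. (8): strong-coupling correction factor
    return (1.0 + (lam / (2.46 * (1.0 + 3.8 * mu_star))) ** 1.5) ** (1.0 / 3.0)

tc_pristine = allen_dynes_tc(0.95, 482.84)                       # Table I: ~31.0 K
tc_strained = allen_dynes_tc(1.65, 402.84, f1f2=1.102 * 1.037)   # Table I: ~57 K
```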
\section{DATA AVAILABILITY}
The numerical datasets used in the analysis in this study, and in the figures of this work, are available from the corresponding author on reasonable request.
\section{CODE AVAILABILITY}
The codes that were used here are available upon request to the corresponding author.
\vspace{0.5cm} Corresponding Authors $^*$E-mail: [email protected], [email protected]
\section{Introduction}
The effect of an exposure on a time-to-event endpoint in the presence of right censoring is routinely estimated as the exposure coefficient in a Cox proportional hazards model \citep{Cox1972}, adjusted for baseline covariates.
However, it is rarely known \textit{a priori} which covariates to include in order to render the adjustment sufficient to control for confounding and non-informative censoring.
It is likewise unknown how to correctly specify the functional form with which these covariates should enter the model. Data-adaptive procedures (e.g., variable selection procedures) are therefore typically employed in order to select which variables to adjust for and how. In particular, they become inevitable when the number of covariates is close to or greater than the sample size.
Routine practice is often based on stepwise selection procedures that use hypothesis testing, or on regularization techniques like Lasso regression \citep{tibshirani1997}. Since confounders are by definition associated with the outcome, these procedures may succeed in detecting strong confounders, in addition to variables that are purely predictive of survival.
However, they may fail to detect confounders that are important because they are strongly associated with exposure while only moderately associated with outcome. This results in improper adjustment for confounding, which can in turn translate into bias. While this is true for any type of regression analysis \citep{leeb2006}, survival analyses suffer additional complications due to censoring.
In particular, failing to control for prognostic factors of outcome as a result of variable selection errors may induce informative censoring bias when those covariates are also associated with censoring.
Such selection errors are likely to occur in variables that have a weak-to-moderate effect on the outcome but a strong effect on censoring.
These problems persist with increasing sample size as one can always find data-generating mechanisms at which an imperfect selection is made. There thus exists no finite sample size $n$ at which normal-based tests and intervals are guaranteed to perform well.
In this article, we will provide two strategies for obtaining valid post-selection inference with respect to the exposure effect parameter indexing a Cox model. The first is simple and intuitive, whilst the second has a sharper justification based on recent theory for high-dimensional inference.
In particular, we will rely on the selection of variables associated with either survival, censoring or exposure via three separate selection steps followed by a final estimation step. Our first proposal is as follows:
\begin{enumerate}
\item We first select variables that predict outcome by fitting a standard Cox model for the survival time given exposure and baseline covariates using the Lasso. This step helps to pick up important confounders, control variables for censoring adjustment and variables for which adjustment may increase power of the test of no exposure effect.
\item In the second step, we select variables that predict censoring by fitting a Cox model for (the cause-specific hazard of) censoring given exposure and baseline covariates using the Lasso. This step provides a second chance to pick up variables that may explain censoring, which is especially relevant when those variables are only weakly-to-moderately predictive of survival.
\item In the third step, we select variables that predict exposure by fitting a model (e.g., linear or logistic) for exposure given baseline covariates using the Lasso. This step provides a second chance to pick up confounders, especially those that are strongly related to exposure (and possibly only weakly-to-moderately predictive of survival). This step is redundant when the exposure is randomly assigned.
\item Finally, we estimate the exposure effect of interest by fitting a Cox model for the survival time given exposure and the union of the sets of variables selected in the three variable selection steps.
\end{enumerate}
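The union bookkeeping in steps 1-4 can be sketched in a few lines. The snippet below is a self-contained toy illustration, not the actual procedure: where steps 1-2 fit penalized Cox models, the sketch substitutes a hand-rolled ISTA Lasso applied to linear working models (in practice one would use a Cox Lasso from, e.g., glmnet or scikit-survival); only the three-model selection-union logic carries over.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=2000):
    # proximal gradient (ISTA) for (1/2n)||y - Xb||^2 + lam * ||b||_1
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2     # 1/L, L = sigma_max(X)^2 / n
    b = np.zeros(p)
    for _ in range(iters):
        b -= step * X.T @ (X @ b - y) / n
        b = np.sign(b) * np.maximum(np.abs(b) - lam * step, 0.0)
    return b

def triple_selection(L, A, y_outcome, y_censoring, lam=0.1):
    # steps 1-3: select supports from outcome, censoring and exposure models;
    # step 4 would refit the outcome model on A plus the union of supports
    union = set()
    for target in (y_outcome, y_censoring, A):
        union |= set(np.flatnonzero(lasso_ista(L, target, lam)))
    return sorted(union)
```

With covariate 0 driving the outcome, covariate 1 driving the exposure, and covariate 2 driving censoring, all three indices end up in the union, even though each of the three single-model selections would miss at least one of them.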
This proposed ``poor man's approach'' extends earlier work on randomized experiments \citep[see][]{vanlancker2020} to observational data.
The second proposal that we will make in this paper is more rigorous and builds on the theory of high-dimensional inference \citep[see][]{bradic2011, huang2013}.
In particular, it draws on the ``double selection'' approach for generalized linear models, proposed by \cite{belloni2014, belloni2016} to develop uniformly valid post-selection inference \citep[see also][]{vandegeer2014, zhangzhang2014, ning2017}. Double
selection was developed in the context of selection of confounders by using two steps to identify covariates for inclusion: first selecting variables that predict outcome and then those
that predict exposure.
We develop a ``triple selection'' approach that extends the work of \citet{belloni2014} to survival data. The corresponding test and estimator are related to a recent proposal by \citet{fang2017}, but are found to perform better under most censoring mechanisms in extensive Monte-Carlo simulation studies and have the added advantage of being obtainable using standard statistical software.
At this point we note that the causal interpretation of the hazard ratio is subtle, even in the absence of model misspecification and unmeasured confounding \citep{hernan2010}.
This is because the hazard ratio has a built-in selection bias by conditioning on survival up to a given time point; even if exposed and unexposed patients are comparable at baseline, this may no longer be true at later time points.
Nonetheless, the use of
the Cox model is widespread partly because the hazard ratio serves as a convenient measure of association which may often be constant in time and covariates to a reasonable degree of approximation. Indeed, the hazard ratio is often the main and only effect measure reported in epidemiologic studies \citep[e.g.,][]{hernan2010, uno2014moving}. It is therefore important to give advice and recommendations to practitioners on how to conduct these analyses. Moreover, tests of the causal null hypothesis (which form much of the focus of this paper) obtained from the Cox model are not vulnerable to the aforementioned selection bias.
The rest of this paper is organized as follows. In Section \ref{sec:motivation}, we state the null hypothesis of interest and describe the challenges with obtaining valid inference after variable selection.
In Section \ref{sec:proposal}, we propose a simple, heuristic as well as a more rigorous approach for testing hypotheses and obtaining confidence intervals. For the latter approach, we give asymptotic guarantees regarding type I error and interval coverage.
We investigate the empirical performance of these methods in Section \ref{sec:simulation}. In Section \ref{sec:dataAnalysis}, we illustrate the proposal on the Breast cancer data set used in \cite{royston2013external}. We end with a discussion in Section \ref{sec:discussion}.
\section{Motivation}\label{sec:motivation}
We begin with some notation. Let $T$ be the survival time and $C$ the censoring time. We observe $\left(U, \delta\right),$ where $U=\min (T, C)$ is the
observed portion of $T$ and $\delta=I(T \leqslant C)$ is a censoring indicator. Let $L=(L_1, \dots, L_p)'$ be the $p$-dimensional vector of baseline covariates and $A$ the exposure.
The observed data is then given by the i.i.d. sample $\{(U_i, \delta_i, L_i, A_i), i=1, \dots, n\}$. Let $\tau$ denote the end-of-study time.
Throughout, we assume that censoring is non-informative conditional on $A$ and $L$ in the sense that $T \mathrel{\text{\scalebox{1.07}{$\perp\mkern-10mu\perp$}}} C |A, L$ and that $L$ is sufficient to adjust for confounding of the effect of $A$ on $T$.
Suppose for the moment that we are interested in testing the null hypothesis that the event time distribution does not depend on the exposure $A$ conditional on $L$.
We assume that the conditional hazard rate function at time $t$, $\lambda \left(t | A, L\right)$, obeys the proportional hazards model \citep{Cox1972}
\[\lambda\left(t\mid A, L\right)=\lambda_0(t, \alpha^*, \beta^*)e^{\alpha^{*} A+\beta^{*'}L}=\lambda_0(t)e^{\alpha^{*} A+\beta^{*'}L},\]
where $\lambda_{0}(t)$ is an unspecified baseline hazard function, $\alpha^*$ encodes the unknown exposure effect of interest, and $\beta^*\in\mathbb{R}^{p}$ is an unknown nuisance parameter.
In the classical low dimensional setting (with fixed $p$), one may exploit the profile partial score function
\begin{align}\label{eq:score_low}
\qquad S(\alpha, \beta)
&=-\frac{1}{n} \sum_{i=1}^{n} \int_{0}^{\tau}\left[A_{i}-\frac{\mathbb{E}_n\left\{ AR(t) e^{\alpha A+\beta'L}\right\}}{\mathbb{E}_n\left\{ R(t) e^{\alpha A+\beta'L }\right\}}\right] \mathrm{d} N_{i}(t),
\end{align}
to test the null hypothesis that $\alpha^*=0$, with $\mathbb{E}_n\left\{\cdot\right\}$ referring to the sample average. Here, $R_i(t)$ is the at-risk indicator (the product of the indicators $I(T_i\geq t)$ and $I(C_i\geq t)$) and $dN_i(t)$ is the increment at time $t$ of the counting process $N_i(t):=I(U_i\leq t, \delta_i=1)$ associated with the event time $T_i$, for subject $i$ ($i\in 1, \dots, n$).
Define $\hat{\beta}(\alpha)$ as the maximum partial likelihood estimator for $\beta^*$ at a fixed $\alpha$ (i.e., fixed at zero to test $\alpha^*=0$).
We can then obtain an asymptotically unbiased and normal score test statistic based on the (properly scaled) profile partial score function $S\left(\alpha, \hat{\beta}(\alpha)\right)=S\left(0, \hat{\beta}(0)\right)$.
However, in practice, there is often little prior knowledge on which variables in a given dataset require adjustment to control for confounding or non-informative censoring, as well as how to model the association between these variables and outcome. Hence, data-adaptive procedures are typically employed in order to select the variables to adjust for.
The standard methodology described above does not straightforwardly
extend to settings where variable selection is employed.
To see this, let $\tilde{\beta}$ denote
an estimator of $\beta^*$ obtained either directly via some regularization method or
after model selection with $\alpha$ fixed at zero. Define
\begin{align*}
\mathcal{U}_{i}(\tilde{\beta})= \int_{0}^{\tau}\left[A_{i}-\frac{\mathbb{E}_n\left\{ AR(t) e^{\tilde{\beta}'L}\right\}}{\mathbb{E}_n\left\{ R(t) e^{\tilde{\beta}'L}\right\}}\right] \mathrm{d} N_{i}(t).
\end{align*}
The Taylor expansion,
\begin{align} \label{TaylorExp}
\begin{split}
\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \mathcal{U}_{i}(\tilde{\beta})=& \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \mathcal{U}_{i}(\beta^*)+\frac{1}{n} \sum_{i=1}^{n} \frac{\partial \mathcal{U}_{i}(\beta^*)}{\partial \beta} \sqrt{n}(\tilde{\beta}-\beta^*)+O_{P}\left(\sqrt{n}\|\tilde{\beta}-\beta^*\|_{2}^{2}\right),
\end{split}
\end{align}
where $\|\cdot\|_{2}$ denotes the Euclidean norm, then provides insight into the asymptotic distribution of the test statistic. For fixed $\beta^*$, the remainder term $O_{P}\left(\sqrt{n}\|\tilde{\beta}-\beta^*\|_{2}^{2}\right)$ converges to zero when $\tilde{\beta}$ converges sufficiently quickly.
However, this pointwise result does not reflect performance over the parameter space in finite samples. In particular, it does not prevent the existence of converging sequences $\beta_n$ for which
$\sqrt{n}(\tilde{\beta}-\beta_n)$ (and thus the test statistic) has a complex, non-normal distribution which is (possibly) centered away from zero. This is a result of the discrete nature of data-adaptive methods which force $\tilde{\beta}$ to zero in some samples (but not in others).
Through the second term on the right-hand side of the above equality, this complex distribution may then propagate into a complex distribution of the test statistic.
At a more intuitive level, we can understand the bias that arises via variable selection by mistakenly removing confounders or variables that render censoring non-informative.
First, standard approaches fail by missing important confounders that are weakly predictive of outcome, but strongly associated with exposure. This is because the selection procedure may lack power due to collinearity between exposure and confounders.
We may therefore fail to properly adjust for confounding, which can translate into bias. Survival analyses unfortunately face additional challenges due to censoring.
In particular, failing to control for certain baseline covariates as a result of variable selection errors may induce informative censoring.
Selection mistakes are likely to occur for variables that have a moderate effect on the outcome but a strong effect on censoring.
Selection strategies may lack power to detect such variables because censoring implies information loss and may even reduce the variation in certain baseline variables in the risk set.
Imperfect selection may lead to collider-bias (sometimes referred to as selection bias, sampling bias, ascertainment bias) between $A$ and $T$ in the risk set at each time point, which can in turn induce a distorted association where there is no effect in the general population.
This is even problematic in randomized trials, whose analysis is immune to confounding bias \citep[see][]{vanlancker2020}.
As previously suggested, this problem persists with increasing sample size as for each sample size $n$ one can find values of $\beta$ at which imperfect selection happens with high probability. The score test statistic therefore does not converge uniformly over the parameter space to the limiting standard normal distribution \citep{leeb2006}. There is then no guarantee that the procedure will work well in finite samples as there is no finite $n$ at which the normal approximation is guaranteed to hold.
\section{Proposal for Variable Selection}\label{sec:proposal}
Building on the work of \cite{belloni2014, belloni2016} and \cite{vanlancker2020}, we will first make a simple proposal to overcome the previous problems. Next, we will compare this proposal with a more rigorous but complex approach.
\vspace{-0.5cm}
\subsection{Poor Man's Approach}\label{sec:poor}
In order to achieve valid post-selection inference, we recommend a ``triple selection'' approach based on the Lasso \citep{tibshirani1997}.
Inspired by the double selection approach \citep{belloni2014, belloni2016}, our proposal will rely on the selection of variables associated with either survival, censoring or exposure via three separate models. By using three different variable selection steps followed by a final estimation step, we aim to overcome the problem of standard approaches that rely on a single selection step (see Section \ref{sec:motivation}).
In particular, this approach makes it more likely to detect variables that demand adjustment but are otherwise difficult to diagnose because they are strongly associated with exposure and/or censoring, while having only a small-to-moderate effect on the outcome.
In the remainder of the paper, we will focus on the Lasso since it is known to perform well with a large number of covariates and is readily available in many statistical software packages. However, our proposal can allow for more general variable selection procedures.
Building on the idea of double selection, we perform a three stage selection procedure followed by a final
estimation step as follows:
\begin{enumerate}
\item Fit a Cox model for the hazard $\lambda\left(t\mid A, L\right)$ corresponding with the survival time $T$ given exposure $A$ and baseline covariates $L$ using the Lasso (penalizing all coefficients in the model, including the exposure), $\lambda\left(t\mid A, L\right)=\lambda_0(t, \alpha^*, \beta^*)e^{\alpha^{*} A+\beta^{*'}L}$,
and select the covariates with non-zero estimated coefficients.
\item Fit a Cox model for the hazard $\lambda^C\left(t\mid A, L\right)$ corresponding with the time to censoring $C$ given exposure $A$ and baseline covariates $L$ using the Lasso (penalizing all coefficients in the model, including the exposure), $\lambda^C\left(t\mid A, L\right)=\lambda_0^C(t)e^{\eta_1^{*} A+\eta_2^{*'}L}$
(viewing the times at which events occur as censored), where $\lambda_0^C(t)$ is an unknown baseline hazard,
and select the covariates with non-zero estimated coefficients.
\item Fit a model (e.g., linear or logistic) for exposure $A$ on baseline covariates $L$ using the Lasso (penalizing all coefficients in the model), $E\left(A\mid L\right)=g^{-1}\left(\delta_0^{*}+\delta_1^{*'}L\right)$,
with $g$ a known link function (e.g., identity for linear models and logit for logistic models), and select the covariates with non-zero estimated coefficients.
\item
Fit a Cox model for the survival time $T$ given exposure $A$ and all covariates selected in either one of the first three steps to obtain final estimates $\hat{\alpha}_{PM}$ and $\hat{\beta}_{PM}$ for $\alpha^*$ and $\beta^*$.
This regression may also include additional variables that were not selected in the first three steps, but that were identified \textit{a priori} as being important.\vspace{-0.25cm}
\end{enumerate}
Inference on the treatment effect based on $\hat{\alpha}_{PM}$ may then be performed using conventional methods, provided that a robust standard error is used. This can be intuitively understood upon noting that our procedure will only reject covariates that are weakly associated
with survival, censoring and exposure; asymptotically, this is not problematic because the omission of such covariates induces such weak degrees of informative censoring and/or confounding that the resulting bias in the test statistic for the null will be small enough that inference is not jeopardised.
In the next section, we will introduce a related proposal, which is more complex but whose validity is easier to understand. This proposal will give justification to the poor man's approach.
\subsection{Triple Selection}\label{sec:debiased}
Motivated by the problem described in Section \ref{sec:motivation}, we will make use of a different test statistic/estimator for $\alpha$ (than that based on the profile partial score function) that is asymptotically normal under standard conditions, even in conjunction with high dimensional variable selection.
The method proposed here differs from the poor man's approach in how the predictors of the exposure are selected (Step $3$ in Section \ref{sec:poor}). In particular, we will fit the exposure model in a specific way which de-biases the na\"ive estimator of the exposure effect so as to obtain valid inference:
\begin{enumerate}
\item Fit a Cox model for survival time $T$ given exposure $A$ and baseline covariates $L$ using the Lasso (penalizing all coefficients in the model),
select the covariates with non-zero estimated coefficients,
and refit the Cox model on exposure $A$ and all covariates selected by the Lasso step to obtain estimates $\hat{\alpha}$ and $\hat{\beta}$ (where the components of $\hat{\beta}$ with non-selected variables are set to zero).
\item Fit a Cox model for (the cause-specific hazard of) censoring $C$ on exposure $A$ and baseline covariates $L$ using Lasso (penalizing all coefficients in the model),
and select the covariates with non-zero estimated coefficients.
\item Fit a linear model for the Schoenfeld residuals for the exposure
$ A_i-\bar{A}_n(T_i, \hat{\alpha}, \hat{\beta}),$
where $\bar{A}_n(t, \alpha, \beta)=\frac{ \mathbb{E}_n\left\{ AR(t) e^{\alpha A+\beta'L}\right\}}{\mathbb{E}_n\left\{ R(t) e^{\alpha A+\beta'L}\right\}}$, on the Schoenfeld residuals for the covariates
$ L_i-\bar{L}_n(T_i, \hat{\alpha}, \hat{\beta}),
$
where $\bar{L}_n(t, \alpha, \beta)=\frac{ \mathbb{E}_n\left\{ LR(t) e^{\alpha A+\beta'L}\right\}}{\mathbb{E}_n\left\{ R(t) e^{\alpha A+\beta'L}\right\}}$, in subjects for whom an event was observed ($\delta_i=1$) at the corresponding event time $T_i$ using the Lasso (penalizing all coefficients in the model), and select the covariates with non-zero estimated coefficients.
Here, $\hat{\alpha}$ and $\hat{\beta}$ are the post-Lasso estimates obtained in Step 1.
\item Fit a Cox model for the survival time $T$ given exposure $A$ and all covariates selected in either one of the first three steps to obtain final estimates $\check{\alpha}$ and $\check{\beta}$ for $\alpha^*$ and $\beta^*$.
This regression may additionally include a small set of additional covariates identified \textit{a priori} as
necessary.
Inference on the treatment effect $\check{\alpha}$ may then be performed using conventional methods, provided that a robust standard error is used (see Part 2 of the proof of Theorem \ref{th:main} in Appendix A of the online supplementary materials for justification). \vspace{-0.25cm}
\end{enumerate}
Because of its link with the groundbreaking work of \cite{belloni2014, belloni2016}, we will refer to this method as the (post)-triple selection approach.
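For concreteness, the risk-set weighted average $\bar{A}_n(t, \alpha, \beta)$ appearing in the Schoenfeld residuals of Step 3 can be computed directly from the data; the following pure-Python sketch (toy inputs and an illustrative function name; no survival library is assumed) shows the computation:

```python
import math

def risk_set_average(t, covariate, times, linear_pred):
    # \bar{A}_n(t, alpha, beta): average of the covariate over subjects
    # still at risk at time t, weighted by exp(alpha*A_i + beta'L_i),
    # supplied here as the linear predictor eta_i.
    num = den = 0.0
    for x_i, eta_i, t_i in zip(covariate, linear_pred, times):
        if t_i >= t:  # R_i(t) = 1: subject i is still at risk at t
            w = math.exp(eta_i)
            num += x_i * w
            den += w
    return num / den
```

With a zero linear predictor this reduces to the plain mean of the covariate over the risk set, and the analogous computation applies component-wise for $\bar{L}_n(t, \alpha, \beta)$.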
\subsubsection{Intuition for the Importance of Triple Selection}\label{sec:intuition}
The method proposed in the previous section overcomes the problem described in Section \ref{sec:motivation} by using a different score test statistic for $\alpha$ that is asymptotically normal under standard conditions, even when high dimensional variable selection is used.
Key to the choice of score is that it decorrelates the score function of the primary parameter ($\alpha$) from that of the nuisance parameters ($\beta$).
Our analysis is therefore based on the decorrelated score function that takes into account the estimation of the nuisance parameters,
\begin{align}\label{eq:scoreFunctExp}
\begin{split}
U_{i}\left(\alpha, \beta, \gamma\right)=\int_0^\tau\left[A_{i}-\bar{A}(t, \alpha, \beta)-\gamma'\left\{
L_{i}-\bar{L}(t, \alpha, \beta)
\right\} \right]\\
\times\left\{dN_i(t)-\lambda_{0}(t, \alpha, \beta)e^{\alpha A_i+\beta' L_i}R_i(t)dt\right\},
\end{split}
\end{align}
with
\[\bar{A}(t, \alpha, \beta)=\frac{\mathbb{E}\left\{AR(t) e^{\alpha A+\beta'L}\right\}}{\mathbb{E}\left\{R(t) e^{\alpha A+\beta'L}\right\}}\quad \textrm{and}\quad \bar{L}(t, \alpha, \beta)=\frac{\mathbb{E}\left\{LR(t) e^{\alpha A+\beta'L}\right\}}{\mathbb{E}\left\{R(t) e^{\alpha A+\beta'L}\right\}}, \]
and where $\gamma$ is a $p$-dimensional parameter with population value $\gamma^*$ defined as the population ordinary least squares coefficient from a least squares regression of the exposure Schoenfeld residuals $A_{i}-\bar{A}(T_i, \alpha^*, \beta^*)$ on the covariate Schoenfeld residuals $L_{i}-\bar{L}(T_i, \alpha^*, \beta^*)$ in subjects with an event.
Note that, in high dimensions, we estimate $\gamma^*$ via the Lasso (see Step $3$) and define $\hat{\gamma}$ as the corresponding post-Lasso estimator obtained in Step 3 of the triple selection approach.
Let $B$ denote the (index set of the) variables selected in Steps 1, 2 and 3 of the triple selection approach.
We then define $\check{\gamma}$ as the estimate of $\gamma^*$ obtained via an ordinary least squares regression of $A_{i}-\bar{A}_n(T_i, \check{\alpha}, \check{\beta})$ on $L_{i}-\bar{L}_n(T_i, \check{\alpha}, \check{\beta})$ in subjects with an event and subject to $\left\{j\in\{1, \dots, p\}: \check{\gamma}_j\neq 0\right\}\subseteq B$ (i.e., $\check{\gamma}_j=0$ for $j\notin B$).
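The defining property of this least squares fit is that the decorrelated exposure residuals are empirically orthogonal to the covariate residuals. A one-covariate pure-Python sketch (toy residual values; the names are illustrative):

```python
def ols_gamma(ra, rl):
    # OLS slope (no intercept) of exposure Schoenfeld residuals `ra`
    # on a single covariate Schoenfeld residual `rl`.
    return sum(a * l for a, l in zip(ra, rl)) / sum(l * l for l in rl)

# Toy residuals among subjects with an event.
ra = [0.5, -0.2, 0.9, -1.1]
rl = [1.0, -0.5, 0.3, -0.8]
g = ols_gamma(ra, rl)
# By construction of least squares, the decorrelated residuals
# ra - g*rl are empirically orthogonal to rl.
cross = sum((a - g * l) * l for a, l in zip(ra, rl))
```

This empirical orthogonality is exactly what drives the $\beta$-gradient of the decorrelated score toward zero.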
In what follows, we focus our developments on the function
\begin{align*}
\hat{U}_{i}\left(\alpha, \beta, \gamma\right)=\int_0^\tau\left[A_{i}-\bar{A}_n(t, \alpha, \beta)-\gamma'\left\{
L_{i}-\bar{L}_n(t, \alpha, \beta)
\right\} \right]\{dN_i(t)-R_i(t)\hat{\lambda}_0(t, \alpha, \beta)e^{\alpha A_i+\beta' L_i}dt\},
\end{align*}
where $\hat{\lambda}_0(t, \alpha, \beta)=\frac{\mathbb{E}_n\left\{R(t)dN(t)\right\}}{\mathbb{E}_n\left\{R(t)e^{\alpha A+\beta' L}\right\}}$ and where we substitute the population averages $\bar{A}(t, \alpha, \beta)$ and $\bar{L}(t, \alpha, \beta)$ in Expression \eqref{eq:scoreFunctExp} with $\bar{A}_n(t, \alpha, \beta)$ and $\bar{L}_n(t, \alpha, \beta)$.
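The estimator $\hat{\lambda}_0(t, \alpha, \beta)$ is a Breslow-type ratio of empirical means; a minimal pure-Python sketch of its increment at an event time (toy data, illustrative names, no ties handling beyond simple counting):

```python
import math

def breslow_increment(t, times, events, linear_pred):
    # hat{lambda}_0(t) dt at an event time t: the number of events at t
    # divided by the sum of exp(eta_j) over subjects still at risk at t
    # (the empirical means in numerator and denominator cancel the 1/n).
    d = sum(1 for t_i, e_i in zip(times, events) if e_i == 1 and t_i == t)
    den = sum(math.exp(eta_i) for t_i, eta_i in zip(times, linear_pred)
              if t_i >= t)
    return d / den
```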
Via a Taylor expansion of $\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \hat{U}_{i}(\check{\alpha}, \check{\beta}, \check{\gamma})$ around $(\alpha^*, \beta^*, \gamma^*)$, we obtain
\begin{align}
\begin{split}
\sqrt{n}(\check{\alpha}-\alpha^*)&=-\left\{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} U_{i}(\alpha^*, \beta^*, \gamma^*)\right\}V^{*-1} \\
&-\sqrt{n}(\check{\beta}-\beta^*)'\left\{\frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \beta} U_{i}(\alpha^*, \beta^*, \gamma^*)\right\}V^{*-1}\label{eq:taylor_alpha}\\
&-\sqrt{n}(\check{\gamma}-\gamma^*)'\left\{\frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \gamma} U_{i}(\alpha^*, \beta^*, \gamma^*)\right\}V^{*-1}+\text{ Remainder},
\end{split}
\end{align}
where the remainder contains second order terms and $V^*=\mathbb{E}\left\{\frac{\partial}{\partial \alpha} U_{i}(\alpha^*, \beta^*, \gamma^*)\right\}$ is a scaling factor. By ensuring that $n^{-1}\sum_{i=1}^n\partial U_i(\alpha^*, \beta^*, \gamma^*)/\partial\gamma$ and $n^{-1}\sum_{i=1}^n\partial U_i(\alpha^*, \beta^*, \gamma^*)/\partial\beta$ converge to zero in probability, we guarantee that the complex, non-standard behavior of the estimators $\check{\gamma}$ and $\check{\beta}$ does not affect the asymptotic behavior of the test for $\alpha^*$ or of the estimator $\check{\alpha}$.
Specifically, for the gradient with respect to $\gamma$ this is achieved by solving the following penalized estimating equation for $\theta=(\alpha, \beta)$
\begin{align}
\begin{split}\label{eq:subgrad_beta}
0&=
\frac{1}{n}\sum_{i=1}^n\frac{\partial \hat{U}_i(\alpha,\beta, \gamma)}{\partial\gamma}+\lambda_{\theta} g(\theta)\\
&=
-\frac{1}{n}\sum_{i=1}^n\int_0^\tau\left\{
L_{i}-\bar{L}_n(t, \alpha, \beta)
\right\}dN_i(t)+\lambda_{\theta} g(\theta),
\end{split}
\end{align}
with $\lambda_{\theta}>0$ a penalty parameter \citep{Fu2003}, in Step 1 of the triple selection proposal. Here, $g(a)$ denotes a vector of elements $g(a_j)$, where $g(a_j) = \operatorname{sign}(a_j)$ if $a_j\neq0$ and $g(a_j)\in [-1, 1]$ otherwise.
Likewise, we will solve the following penalized estimating equation for $\gamma$
\begin{align}\label{eq:subgrad_gamma}
\begin{split}
0&=-\frac{1}{n}\sum_{i=1}^n\int_0^\tau\left[A_{i}-\bar{A}_n(t, \hat{\alpha}, \hat{\beta})-\gamma'\left\{
L_{i}-\bar{L}_n(t, \hat{\alpha}, \hat{\beta})
\right\} \right]\left\{
L_{i}-\bar{L}_n(t, \hat{\alpha}, \hat{\beta})
\right\}dN_i(t)\\
&+\lambda_\gamma g(\gamma),
\end{split}
\end{align}
with $\lambda_{\gamma}>0$ a penalty parameter, in Step $3$ of the triple selection proposal.
The penalty terms in Expressions \eqref{eq:subgrad_beta} and \eqref{eq:subgrad_gamma} correspond to subgradients of the $l_1$ norms $\|\theta\|_1=\|(\alpha, \beta)\|_1$ (with respect to $\theta=(\alpha, \beta)$) and $\|\gamma\|_1$ (with respect to $\gamma$), respectively; hence our procedure amounts to $l_1$-penalized $m$-estimation.
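Whether a candidate solution satisfies such an $l_1$-penalized estimating equation can be verified coordinate-wise through the subgradient (KKT) conditions; a pure-Python sketch (illustrative helper and tolerance, written for an equation of the form $0 = \text{grad} + \lambda\, g(\cdot)$):

```python
def satisfies_l1_kkt(grad, coef, lam, tol=1e-8):
    # KKT conditions for 0 = grad + lam * g(coef), g the l1 subgradient:
    # grad_j = -lam * sign(coef_j) when coef_j != 0, and
    # |grad_j| <= lam when coef_j == 0.
    for g_j, a_j in zip(grad, coef):
        if a_j != 0:
            if abs(g_j + lam * (1.0 if a_j > 0 else -1.0)) > tol:
                return False
        elif abs(g_j) > lam + tol:
            return False
    return True
```

Zero coordinates only need their gradient bounded by $\lambda$, which is how the penalty sets non-informative coefficients exactly to zero.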
If $\lambda_{\theta}$ goes to zero as $n\to \infty$,
Equation \eqref{eq:subgrad_beta} directly guarantees that $n^{-1}\sum_{i=1}^n\partial U_i(\alpha^*, \beta^*, \gamma^*)/\partial\gamma$ has approximately mean zero.
Denote the history spanned by $N(t)$ as $\mathcal{F}_t$. Then, if $\lambda_{\gamma}$ also goes to zero as $n\to \infty$, because the conditional mean $E\left\{dN(t)|A, L, \mathcal{F}_t\right\}$ equals $R(t)\lambda_0(t, \alpha^*, \beta^*)e^{\alpha^{*} A+\beta^{*'}L}$ under the assumption that the Cox model for survival time $T$ is correctly specified, we can show that Equation \eqref{eq:subgrad_gamma} renders the gradient $n^{-1}\sum_{i=1}^n\partial U_i(\alpha^*, \beta^*, \gamma^*)/\partial\beta$ approximately zero. We make a more rigorous argument in Appendix A of the online supplementary materials.
Under standard ultra-sparsity conditions (see Theorem \ref{th:main}) and the assumption that $\lambda_\theta=O\left(\sqrt{\log(p\lor n)/n}\right)$, $\lambda_\gamma=O\left(\sqrt{\log(p\lor n)/n}\right)$ and $\log(p\lor n)=o(n)$ (where $a\lor b$ denotes the maximum of $a$ and $b$), these properties help ensure that the second and third terms in Equation \eqref{eq:taylor_alpha} are asymptotically negligible, regardless of the complex behavior of $\check{\beta}$ and $\check{\gamma}$. Our proposal is thus to select the variables in a specific way that shrinks the gradients in Equation \eqref{eq:taylor_alpha} to zero and consequently de-biases a na\"ive post-selection estimator (e.g., the post-Lasso estimator).
\vspace{-0.5cm}
\begin{remark}[Model for censoring]\label{remark:cens}
The discussion above suggests that Step 1 and Step 3 are all that is needed to make the gradients zero.
Specifically, fitting the exposure model as in Step $3$ of the triple selection approach makes the additional selection Step $2$ for censoring redundant.
This may seem surprising, as it raises the question of how a sufficient adjustment set for censoring can be guaranteed.
To see this, consider the linear ``exposure'' model in Step $3$ which conditions on the risk set. As a consequence of collider-stratification, predictors of censoring may become dependent on the exposure in the risk set.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{DAG}
\caption{\label{fig:DAG} A Directed Acyclic Graph (DAG) explaining why the exposure model also picks up predictors of censoring.}
\end{figure}
Indeed, conditioning on the risk set in particular implies conditioning on being uncensored. From the Directed Acyclic Graph (DAG) in Figure \ref{fig:DAG}, it is clear that conditioning on $C$ opens a path between $A$ and $L_2$, making them associated in the absence of a direct effect of $L_2$ on $A$. One may therefore detect predictors of exposure and censoring via one model for exposure in the risk set.
Even so, since collider-stratification usually only induces weak associations \citep{Greenland2003}, we expect better finite sample performance when an additional (direct) model for censoring is used.
This has the further advantage that it considerably simplifies the procedure in settings where exposure is independent of baseline covariates as Step 3 is then superfluous. This is in particular the case in randomized experiments \citep[see][]{vanlancker2020}.
\end{remark}
\begin{remark}[Link to poor man's approach]
Although the model for censoring (Step 2) in the triple selection approach is in principle redundant, it is essential in the poor man's approach to control for informative censoring as the exposure model is no longer conditional on the risk set.
By fitting a potentially correct model for the exposure assignment mechanism at baseline, we expect the poor man's approach to pick up more variables than the triple selection approach.
We therefore view it as an under-smoothed version of the triple selection approach, which may justify its validity.
\end{remark}
\subsubsection{Asymptotic Properties}
We will now study convergence of $\check{\alpha}$ more formally under a sequence of laws $P_n\in \mathcal{P}$,
where $\mathcal{P}$ represents a class of observed data laws that obey a correctly specified Cox proportional hazards model for survival time $T$ conditional on $A$ and $L$.
We will allow for $p$ to increase with $n$, and for the values of the parameters $\gamma$, $\beta$ and $\eta_2$ to depend on $n$. This is done in
order to gain better insight into the finite-sample behavior of the test statistic when the data-generating mechanism changes with $n$ (e.g., $p$ grows with $n$).
Let $\mathbb{P}_{P_{n}}$ and $\mathbb{E}_{P_{n}}$ respectively denote a probability and expectation taken with respect to this local data-generating process $P_n$.
Let $\alpha_n$ and $\beta_n$ denote the population values of $\check{\alpha}$ and $\check{\beta}$, respectively.
Moreover, define
\[\bar{A}^*(t, \alpha, \beta)=\frac{\mathbb{E}_{P_n}\left\{AR(t) e^{\alpha A+\beta'L}\right\}}{\mathbb{E}_{P_n}\left\{R(t) e^{\alpha A+\beta'L}\right\}}\quad\textrm{and}\quad \bar{L}^*(t, \alpha, \beta)=\frac{\mathbb{E}_{P_n}\left\{LR(t) e^{\alpha A+\beta'L}\right\}}{\mathbb{E}_{P_n}\left\{R(t) e^{\alpha A+\beta'L}\right\}}.\]
The population value $\gamma_n$ of $\check{\gamma}$ is then defined as the population coefficient from a least squares regression of $A_i-\bar{A}^*(T_i, \alpha_n, \beta_n)$ on $L_i-\bar{L}^*(T_i, \alpha_n, \beta_n)$ in subjects for whom an event was observed during the study.
Define $\check{\eta}_2$ as the estimate of $\eta_2$ obtained by fitting a Cox model for censoring time $C$ given exposure $A$ and the union of variables estimated to have non-zero coefficients in the first three steps of the triple selection approach. The population value of $\check{\eta}_2$ is denoted as $\eta_{2n}$.
\vspace{-0.5cm}
\begin{theorem}[Robust Estimation and Inference]\label{th:main}
Define the sets of variables that have coefficients that are truly non-zero (i.e., active set) as $S_\gamma = \text{support}(\gamma_n)$, $S_\beta = \text{support}(\beta_n)$ and $S_\eta = \text{support}(\eta_{2n})$. Moreover, let $p$ denote the number of covariates, and define $s_\gamma=|S_\gamma|$, $s_\beta=|S_\beta|$ and $s_\eta=|S_\eta|$. Suppose the ultra-sparsity condition $n^{-1/2}(s_\gamma\lor s_\beta\lor s_\eta)\log(p\lor n)=o(1)$ holds.
Then, under Assumptions 1-8 in the online supplementary materials and the primitive conditions required for uniform consistency of the Lasso estimators (see e.g., Assumptions $2$ and $5$ in \citet{fang2017}), including
$\lambda_\theta=O\left(\sqrt{\frac{\log(p\lor n)}{n}}\right)$ and $\lambda_\gamma=O\left(\sqrt{\frac{\log(p\lor n)}{n}}\right)$,
the post-triple selection estimator $\check{\alpha}$ obeys
\begin{align}
\sqrt{n}\left(\check{\alpha}-\alpha_n\right)=Z_{n}+o_{P_n}(1), \quad Z_{n} \stackrel{d}{\rightarrow} N(0,\Sigma_{n}^2)\label{eq:theorem1}
\end{align}
as $n \rightarrow \infty$. Here, $o_{P_n}(1)$ denotes a stochastic variable that converges in probability to zero as $n \rightarrow \infty$ under the measure $P_n$, and
$$
Z_{n}:=\frac{V^{-1}}{\sqrt{n}} \sum_{i=1}^{n}\int_0^\tau\left[A_{i}-\bar{A}^*(t, \alpha_n, \beta_n)-\gamma_n'\left\{
L_{i}-\bar{L}^*(t, \alpha_n, \beta_n)
\right\} \right]dM_i(t),
$$
$$ \Sigma_{n}^{2}:=V^{-1}\mathbb{E}_{P_n}\left\{\left(\int_0^\tau\left[A_{i}-\bar{A}^*(t, \alpha_n, \beta_n)-\gamma_n'\left\{
L_{i}-\bar{L}^*(t, \alpha_n, \beta_n)
\right\} \right]dM_i(t)\right)^2\right\}V^{-1},
$$
and
\begin{align*}
V:=&\mathbb{E}_{P_n}\left(\int_{0}^{\tau}\left[A_{i}-\bar{A}^*(t, \alpha_n, \beta_n)-\gamma_n'\left\{
L_{i}-\bar{L}^*(t, \alpha_n, \beta_n)
\right\} \right]R_i(t)e^{\alpha_n A_i+\beta_n' L_i}\lambda_{0n}(t, \alpha_n, \beta_n)\right.\\
&\hspace{1cm}\left.\left\{A_i-\bar{A}^*(t, \alpha_n, \beta_n) \right\}dt\vphantom{\int_{0}^{\tau}}\right),
\end{align*}
where $$dM_i(t)=dN_i(t)-\lambda_{0n}(t, \alpha_n, \beta_n)e^{\alpha_nA_i+\beta_n' L_i}R_i(t)dt$$ and $$\lambda_{0n}(t, \alpha, \beta)=\mathbb{E}_{P_n}\{R(t)dN(t)\}\mathbb{E}_{P_n}^{-1}\{R(t)e^{\alpha A+\beta' L}\}.$$
Thus, $\Sigma_{n}^{-1}\sqrt{n}\left(\check{\alpha}-\alpha_n\right)$ converges weakly to $N(0,1)$.
Moreover, $\Sigma_{n}^{2}$ can be consistently estimated as
\begin{align*}
\widehat{\Sigma}_{ n}^{2}=&\widehat{V}^{-1}\mathbb{E}_{n}\left\{\left(\int_0^\tau\left[A_{i}-\bar{A}_n(t, \check{\alpha}, \check{\beta})-\check{\gamma}'\left\{
L_{i}-\bar{L}_n(t, \check{\alpha}, \check{\beta})
\right\} \right]dM_i(t)\vphantom{\int_{0}^{\tau}}\right)^2\right\}\widehat{V}^{-1},
\end{align*}
with
\begin{align*}
\widehat{V}=&\mathbb{E}_{n}\left(\int_0^\tau\left[A_{i}-\bar{A}_n(t, \check{\alpha}, \check{\beta})-\check{\gamma}'\left\{
L_{i}-\bar{L}_n(t, \check{\alpha}, \check{\beta})
\right\} \right]R_i(t)e^{\check{\alpha} A_i+\check{\beta}' L_i}\hat{\lambda}_0(t, \check{\alpha}, \check{\beta})\right.\\
&\hspace{1cm}\left.\left\{A_{i}-\bar{A}_n(t, \check{\alpha}, \check{\beta}) \right\}dt\vphantom{\int_{0}^{\tau}}\right).
\end{align*}
\end{theorem}
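Since $\alpha$ is scalar, the sandwich estimator $\widehat{\Sigma}_{n}^{2}$ reduces to a simple plug-in computation once the per-subject decorrelated scores and $\widehat{V}$ have been evaluated; a minimal pure-Python sketch (toy values, illustrative names):

```python
def sandwich_variance(scores, v_hat):
    # Scalar sandwich V^{-1} * E_n[U_i^2] * V^{-1} for the asymptotic
    # variance of sqrt(n) * (alpha_check - alpha_n).
    meat = sum(u * u for u in scores) / len(scores)
    return meat / (v_hat * v_hat)
```

A robust standard error for $\check{\alpha}$ would then be obtained as $\sqrt{\widehat{\Sigma}_{n}^{2}/n}$.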
The key ultra-sparsity condition $n^{-1/2}(s_\gamma\lor s_\beta\lor s_\eta)\log(p\lor n)=o(1)$ demands the number of non-zero coefficients in the models for $\gamma$, $\beta$ and $\eta$ to be small relative to the square
root of the overall sample size. Such an assumption is common in the literature on high-dimensional inference \citep{belloni2016, ning2017}.
The theorem allows for the data-generating processes to change with $n$, in particular allowing sequences
of regression models with coefficients never perfectly distinguishable from zero, i.e., models where perfect
model selection is not possible. In doing so, the results achieved in Theorem \ref{th:main} imply validity
uniformly over a large class of sparse models, which is referred to as ``honesty'' in the statistical literature \citep[see e.g.,][]{Li1989honest}. In particular, as stated in the following corollary, our
proposed test and confidence intervals are uniformly valid over the parameter space under the ultra-sparsity condition.
The addition of variables that are purely predictive of censoring may impact the asymptotic behavior of the estimator when, for instance, $A$ has no effect on censoring: such variables then no longer reduce bias but only increase the variance of the estimator. This will nevertheless be accounted for in the asymptotic distribution of the estimator, since the Schoenfeld residual terms in the decorrelated score equation will have reduced variability. Otherwise, the impact in large samples should be negligible since we assume sparsity in the censoring model. Hence, we do not explicitly account for Step 2 of the triple selection procedure in the proofs.
\begin{Corollary}[Uniform $\sqrt{n}$-Rate of Consistency and Uniform Normality]
Under Assumptions 1-8 in the online supplementary materials, the primitive conditions required for uniform consistency of the Lasso estimators and the ultra-sparsity condition $n^{-1/2}(s_\gamma\lor s_\beta\lor s_\eta)\log(p\lor n)=o(1)$, the post-triple selection estimator, $\check{\alpha}$, is $\sqrt{n}$-consistent and asymptotically normal
uniformly over $\mathcal{P},$ namely
$
\lim _{n \rightarrow \infty} \sup _{P_{n} \in \mathcal{P}} \sup _{r \in \mathbb{R}}\left|\mathbb{P}_{P_{n}}\left(\Sigma_{n}^{-1} \sqrt{n}\left(\check{\alpha}-\alpha_n\right) \leqslant r\right)-\Phi(r)\right|=0.
$
Moreover, the result continues to hold if $\Sigma_{n}$
is replaced by $\hat{\Sigma}_{n}$ specified in the statement of the previous theorem.
\end{Corollary}
This result can be used to show the uniform validity of the proposed tests and confidence intervals, as in \cite{belloni2014}.
\section{Simulation Study}\label{sec:simulation}
In this section, we examine the finite-sample properties of the poor man's approach and the post-triple selection test. In particular, we wish to compare their performance to that of the method proposed in \citet{fang2017} and a standard post-single selection method (i.e., post-Lasso). The latter refits the Cox model to the variables selected by the first-step penalized variable selection method (i.e., Lasso; this method omits Step 2 and Step 3 of the strategies proposed in this paper). We present the simulation study as recommended in \cite{Morris2019}.
\subsection{Simulation Design}
\hspace{0.3cm}\textbf{Aims}:
To evaluate and compare the finite-sample Type I error rate of the test statistic based on the proposed poor man's approach, the proposed triple selection strategy, the post-Lasso approach and the decorrelated score function in \citet{fang2017}.
\textbf{Data-Generating Mechanisms}:
In each setting, we generate $n$ mutually independent vectors $\left(T_{i}, C_{i}, A_{i}, L_{i}'\right)'$, $i=1, \ldots, n$. Here, we consider the following data-generating mechanisms for $(A_i, L_i')'=X_i$:
\begin{enumerate}[(a)]
\item $X_i\sim N_{p+1}\left(\mathbf{0}, \Sigma\right)$, where $\Sigma$ is a Toeplitz matrix with $\Sigma_{jk}=\rho^{\left|j-k\right|}$ and $\rho=0.25$ and $0.50$,
\item $L_i\sim N_p(\mathbf{0}, \mathbb{I})$ and
$A_i\sim N(\nu_A'L_i, 1), \text{ with } \nu_A=c_A(1, 1/2, \dots, 1/10, 0_{11}, \dots, 0_{p})'$,
\end{enumerate}
with $c_A$ a scalar parameter.
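The Toeplitz covariance matrix in mechanism (a) is straightforward to construct; a pure-Python sketch (no linear algebra library assumed):

```python
def toeplitz_cov(p, rho):
    # Sigma_{jk} = rho^{|j-k|}: unit variances with geometrically
    # decaying correlation in the index distance.
    return [[rho ** abs(j - k) for k in range(p)] for j in range(p)]
```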
The $i$th survival and censoring time are based on respectively $T_i\sim\exp(\lambda_{T,i})$ with $\lambda_{T,i}=\exp(\beta_0)\exp(\alpha A_i+\beta' L_i)$ and $C_i\sim\exp(\lambda_{C,i})$ with $\lambda_{C,i}=\exp(\eta_0)\exp(\eta_1 A_i+\eta_2'L_i)$,
where $\beta_0$, $\alpha$, $\eta_0$ and $\eta_1$ are scalar parameters, and $\beta=b\cdot\nu_T$ and $\eta_2=g\cdot\nu_C$ are $p$-dimensional parameters with $b$ and $g$ scalar and $\nu_T$ and $\nu_C$ $p$-dimensional parameters. In our simulation study, we consider the following coefficient vectors $\nu_T$ and $\nu_C$
\begin{enumerate}
\vspace{-0.25cm}
\item $\nu_T=(1, 1/2, \dots,1/9, 1/10, 0_{11}, \dots, 0_p)',$\\
$\nu_C=(1, 1/2, 1/3, 1/4, 1/5, 1, 1/2,1/3, 1/4, 1/5, 0_{11}, \dots, 0_p)',$
\item $\nu_T=(1, 1/2, \dots,1/9, 1/10, 0_{11}, \dots, 0_p)',$\\
$\nu_C=(1, 1/2, 1/3, 1/4, 1/5, 0_6, \dots, 0_{10}, 1, 1/2,1/3, 1/4, 1/5, 0_{16}, \dots, 0_p)',$\vspace{-0.25cm}
\end{enumerate}
where the subscripts indicate the index (i.e., position) of $0$ in the vector.
The parameters $b$ and $g$ are used to control the association between the covariates and respectively the survival and censoring times. Increments of $0.25$, starting from $0$ to $2$, are considered.
The coefficients $\beta_0$ and $\eta_0$ are set to $0$, corresponding to a baseline hazard of 1.
As we are interested in the Type I error of a test of the null hypothesis that $\alpha=0$ versus the alternative hypothesis that $\alpha\neq 0$ based on the different methods, $\alpha$ is set to $0$. The level of significance is set to be $0.05$.
For the data generating mechanisms described
above, we perform $1,000$ Monte Carlo runs for $n = 400$ and $p = 30$. As an objective choice we have chosen $\tau=+\infty$.
\textbf{Targets:} Our target of interest is the null hypothesis of no effect of treatment $A$ on the considered survival endpoint at a $5\%$ significance level.
\textbf{Methods of Analysis:}
Each simulated dataset is analyzed using the following methods:
\begin{itemize}
\item Triple selection: Wald test based on robust SE for the estimator for $\alpha$ obtained via the post-triple selection approach proposed in Section \ref{sec:debiased}.
\item Poor man's approach: Wald test based on robust SE for the estimator for $\alpha$ obtained via the poor man's approach proposed in Section \ref{sec:poor}.
\item Post-Lasso: Wald test based on robust SE for the estimator for $\alpha$ obtained via post-Lasso, which refits the Cox model to the variables selected in a Cox model for the outcome with Lasso (i.e., this method omits Step 2 and Step 3 of the strategies proposed in this paper).
\item Decorrelated test in \citet{fang2017}: test based on the estimator for $\alpha$ obtained via the decorrelated score function in \citet{fang2017} using Lasso. \vspace{-0.25cm}
\end{itemize}
All considered approaches require the selection of penalty parameters.
In this simulation study, we use a $20$-fold cross-validation technique with the negative cross-validated penalized (partial) log-likelihood as loss function. We obtain the penalty parameter $\lambda_\text{1se}$, which is the largest value of the penalty parameter such that the cross-validated error is within $1$ standard error of the minimum, using the function \texttt{cv.glmnet()} in the \texttt{R} package \texttt{glmnet}.
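The $\lambda_\text{1se}$ rule itself is easy to state in code: among the candidate penalties, take the largest one whose cross-validated error lies within one standard error of the minimum. A pure-Python sketch with a toy CV curve (\texttt{cv.glmnet()} applies the same rule internally):

```python
def lambda_1se(lambdas, cv_mean, cv_se):
    # Largest penalty whose cross-validated error lies within one
    # standard error of the minimizing penalty's error.
    i_min = min(range(len(cv_mean)), key=cv_mean.__getitem__)
    threshold = cv_mean[i_min] + cv_se[i_min]
    return max(lam for lam, m in zip(lambdas, cv_mean) if m <= threshold)
```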
\textbf{Performance Measures:} We assess the finite-sample (empirical) Type I error rate of the test of no exposure effect on the considered survival endpoint $T$.
\subsection{Simulation Results}
The empirical Type I errors for the
different tests
under Setting $1$ with $\eta_1=1$ are summarized in Figure \ref{fig:Setting1_1} for Setting (a) with $\rho=0.50$, Figure \ref{fig:Setting1_1d} for Setting (b) with $c_A=1$ and Figure \ref{fig:Setting1_1e} for Setting (b) with $c_A=2$.
The simulation results show that over the different settings, the poor man's approach has a rejection rate closest to the nominal level of $5\%$.
In sharp contrast, the test based on post-Lasso selection performs very poorly, with Type I errors that deviate strongly from the nominal level of $0.05$ throughout large parts of the model space (see Figures \ref{fig:Setting1_1d} and \ref{fig:Setting1_1e}). This is a result of eliminating too many covariates from the outcome model. Although the proposal by \citet{fang2017} clearly provides lower rejection rates than the post-Lasso approach, it is outperformed by both proposals throughout large parts of the model space (see Figures \ref{fig:Setting1_1d} and \ref{fig:Setting1_1e}). This might occur because re-fitting the Cox model, adjusting for the union of covariates selected in three steps (as is done in the triple selection and poor man's approaches), may take the empirical analogues of the gradients from the Taylor expansion in Section \ref{sec:intuition} closer to zero within the sample. Similar results were also observed in \cite{belloni2016}. This is moreover in line with our expectation of better finite-sample performance when an additional model for censoring is used (see Remark \ref{remark:cens}).
\begin{figure}[h!]
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1_Prop_new2}
\caption{Triple selection} \label{fig:Setting1_1_Prop}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1_PMA_new2}
\caption{Poor man's approach} \label{fig:Setting1_1_PMA}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1_Lasso_new2}
\caption{Post-Lasso} \label{fig:Setting1_1_Lasso}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1_Fang_new2}
\caption{Decorrelated test in \citet{fang2017}} \label{fig:Setting1_1_Fang}
\end{subfigure}\hfill
\caption{Empirical Type I error rate at the $5\%$ significance level of the different tests under Setting 1(a) with $n=400$, $p=30$, $\rho=0.50$, $\eta_1=1$, $\beta_0=0$ and $\eta_0=0$.
} \label{fig:Setting1_1}
\end{figure}
\begin{figure}[h!]
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1d_Prop_new2}
\caption{Triple selection} \label{fig:Setting1_1d_Prop}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1d_PMA_new2}
\caption{Poor man's approach} \label{fig:Setting1_1d_PMA}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1d_Lasso_new2}
\caption{Post-Lasso} \label{fig:Setting1_1d_Lasso}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1d_Fang_new2}
\caption{Decorrelated test in \citet{fang2017}} \label{fig:Setting1_1d_Fang}
\end{subfigure}\hfill
\caption{Empirical Type I error rate at the $5\%$ significance level of the different tests under Setting 1(b) with $n=400$, $p=30$, $c_A=1$, $\eta_1=1$, $\beta_0=0$ and $\eta_0=0$.
} \label{fig:Setting1_1d}
\end{figure}
\begin{figure}[h!]
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1e_Prop_new2}
\caption{Triple selection} \label{fig:Setting1_1e_Prop}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1e_PMA_new2}
\caption{Poor man's approach} \label{fig:Setting1_1e_PMA}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1e_Lasso_new2}
\caption{Post-Lasso} \label{fig:Setting1_1e_Lasso}
\end{subfigure}\hfill
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{Setting1_1e_Fang_new2}
\caption{Decorrelated test in \citet{fang2017}} \label{fig:Setting1_1e_Fang}
\end{subfigure}\hfill
\caption{Empirical Type I error rate at the $5\%$ significance level of the different tests under Setting 1(b) with $n=400$, $p=30$, $c_A=2$, $\eta_1=1$, $\beta_0=0$ and $\eta_0=0$.
} \label{fig:Setting1_1e}
\end{figure}
The Type I errors of the proposed triple selection based test (see Section \ref{sec:debiased}) are also close to the desired 5\% level of significance, but deviate in more extreme settings (see Figures \ref{fig:Setting1_1d} and \ref{fig:Setting1_1e}), namely when the effect of the covariates on censoring is rather weak (i.e., low values of $g$). This is somewhat surprising as we considered an additional model for censoring to improve the power of the linear regression test (see Step $2$ in Section \ref{sec:debiased}). In Figure 1 in Appendix B of the online supplementary materials, the results under a double selection approach (i.e., as described in Section \ref{sec:debiased} without the censoring model) show that the extra censoring model nonetheless has added value.
Since the inflation of Type I errors is more pronounced in the scenario in Figure \ref{fig:Setting1_1d} than in the scenario in Figure \ref{fig:Setting1_1e}, a plausible reason might be that important predictors of the exposure are missed.
We further observe that the poor man's approach and the proposed triple selection approach based on the de-biased test drastically outperform the proposal by \citet{fang2017} throughout large parts of the model space (see Figures \ref{fig:Setting1_1d} and \ref{fig:Setting1_1e}).
Based on the theoretical and Monte-Carlo results, we recommend the use of either the poor man's approach or triple selection over the estimator proposed by \citet{fang2017} and especially over the na\"ive post-Lasso estimator.
We refer to Figures 2-14 in Appendix B of the online supplementary materials for
results under additional settings. Figures 15-30 in Appendix B of the online supplementary materials show results under high-dimensional settings ($p>n$). Although none of the methods seems to maintain the nominal Type I error across the different settings, the poor man's approach outperforms the other approaches in the settings considered.
\section{Data Analysis}\label{sec:dataAnalysis}
We illustrate the proposal on the breast cancer dataset used in \cite{royston2013external}. In particular, we investigate the effect of chemotherapy on relapse-free survival (in months). Data are available on 2,982 primary breast cancer patients whose records were included in the Rotterdam tumor bank, including the following potential confounders: year of cancer incidence, age, menopausal status (pre- or post-menopausal), tumor size ($\leq 20$, 20-50, $>50$), tumor grade (2 or 3), number of positive lymph nodes, hormonal treatment (yes or no), progesterone receptors in fmol/l and estrogen receptors in fmol/l.
A standard Cox analysis adjusting for all covariates (using main effects) delivers a log hazard ratio of -0.12 (robust SE 0.07). Performing post-Lasso (with penalty parameter $\lambda_{1se}$ chosen via 20-fold cross-validation) results in a Cox model adjusted for tumor size, tumor grade and number of positive lymph nodes, and delivers a log hazard ratio of -0.02 (robust SE 0.07). The poor man's approach (with penalty parameter $\lambda_{1se}$ chosen via 20-fold cross-validation) additionally includes year of cancer incidence, age and menopausal status, and delivers a log hazard ratio of -0.13 (robust SE 0.07). Using the proposed triple selection approach, which in addition to the variables selected by Lasso includes year of cancer incidence, age, progesterone receptors and estrogen receptors, we obtain an estimated log hazard ratio of -0.14 (robust SE 0.07).
Relative to the standard Cox analysis, the proposed estimators give results that look more plausible. To better understand the impact of the different approaches on resulting inferences, a sub-sampling approach was taken. Specifically, we take 1,000 subsamples of size 300 from the original dataset and evaluate how frequently the resulting 95\% confidence intervals contain the full-sample estimate obtained via a Cox model with all main effects included, which was taken as the benchmark. To construct confidence intervals, sandwich estimators of the standard errors were used (obtained via the robust option in Cox model-fitting software). We repeat this analysis using sub-samples of size 75, 150, 450 and 600.
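The sub-sampling evaluation boils down to counting how often a Wald interval covers the benchmark; a minimal pure-Python sketch (toy estimates, illustrative names):

```python
def ci_coverage(estimates, std_errors, benchmark, z=1.96):
    # Fraction of subsample Wald intervals est +/- z * SE that
    # contain the full-sample benchmark estimate.
    hits = sum(1 for est, se in zip(estimates, std_errors)
               if est - z * se <= benchmark <= est + z * se)
    return hits / len(estimates)
```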
Results in Table \ref{tab:dataAnalysis} suggest that post-Lasso results in under-coverage and bias as a consequence of eliminating too many covariates.
Better coverage and less biased results are consistently obtained by the poor man's and triple selection approaches.
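The sub-sampling evaluation above can be sketched as follows. Here the `fit` function is a hypothetical stand-in (a sample mean with its standard error) for refitting a selected Cox model on each subsample; it is not the actual analysis, only an illustration of the coverage computation.

```python
import numpy as np

def coverage_by_subsampling(data, fit, benchmark, n_sub=1000, size=300, seed=0):
    """Estimate bias, SD, and CI coverage of `benchmark` over repeated subsamples.

    `fit` maps a data array to (estimate, standard_error); a stand-in for
    refitting the post-selection Cox model on each subsample.
    """
    rng = np.random.default_rng(seed)
    estimates, covered = [], 0
    for _ in range(n_sub):
        sub = rng.choice(data, size=size, replace=False)  # subsample w/o replacement
        est, se = fit(sub)
        estimates.append(est)
        if est - 1.96 * se <= benchmark <= est + 1.96 * se:
            covered += 1
    estimates = np.asarray(estimates)
    return {
        "bias": estimates.mean() - benchmark,   # bias relative to benchmark
        "sd": estimates.std(ddof=1),            # SD across subsamples
        "coverage": covered / n_sub,            # empirical CI coverage
    }

# Toy illustration: "estimator" = sample mean, benchmark = full-sample mean.
data = np.random.default_rng(1).normal(-0.12, 1.0, size=2982)
res = coverage_by_subsampling(
    data,
    fit=lambda d: (d.mean(), d.std(ddof=1) / np.sqrt(len(d))),
    benchmark=data.mean())
```

With a correctly specified stand-in estimator, the empirical coverage should be near the nominal 95\% level, mirroring the benchmark comparison in the table below.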
\begin{table}[ht]
\centering
\begin{tabular}{rlllll}
\hline
$n$ & Method & Bias & SD & Mean SE & Coverage \\
\hline
75 & Post-Lasso & 0.080 & 0.467 & 0.4145 & 0.915 \\
& Poor man's approach & -0.020 & 0.550& 0.486 & 0.930 \\
& Triple selection & -0.005 & 0.552 & 0.483 & 0.926 \\
\hline
150 & Post-Lasso & 0.088 & 0.317 & 0.288 & 0.916 \\
& Poor man's approach & 0.001 & 0.355 & 0.328 & 0.938 \\
& Triple selection & -0.008 & 0.359 & 0.328 & 0.939 \\
\hline
300 & Post-Lasso & 0.096& 0.208 & 0.204 & 0.910\\
& Poor man's approach & 0.009& 0.234 & 0.229 & 0.952 \\
& Triple selection & 0.002 & 0.235 & 0.228 & 0.948 \\
\hline
450 & Post-Lasso & 0.095 & 0.160 & 0.167 & 0.912 \\
& Poor man's approach & 0.011 & 0.179 & 0.187 & 0.959 \\
& Triple selection & 0.004 & 0.181 & 0.187& 0.951 \\
\hline
600 & Post-Lasso & 0.095 & 0.136 & 0.146 & 0.913 \\
& Poor man's approach & 0.007 & 0.151 & 0.163 & 0.958 \\
& Triple selection & 0.001 & 0.152 & 0.163 & 0.958 \\
\hline
\end{tabular}
\caption{\label{tab:dataAnalysis} A comparison of the different variable selection methods. Bias: bias taken across subsamples, compared with the benchmark estimator; SD: standard deviation taken across sub-samples; mean SE: mean estimated standard error; coverage: coverage probability obtained via sub-sampling.
}
\end{table}
\section{Discussion}\label{sec:discussion}
In this work, we have developed simple to implement approaches for valid inference for conditional causal hazard ratios after variable selection.
While both proposed tests perform well, the simpler poor man's approach emerged as the clear winner in Monte Carlo simulations.
This has implications for other problems in causal inference where development and implementation of a ``debiased'' estimator may be difficult (e.g., studies with time varying confounding).
Our triple selection estimator $\check{\alpha}$ for $\alpha^*$ is closely related to the one-step estimator proposed by \citet{fang2017}. These authors obtained a closed form decorrelated estimator
$$
\hat{\alpha}_\text{Lasso}-\left[\frac{\partial\mathbb{E}_n\left\{\hat{U}_i(\hat{\alpha}_\text{Lasso}, \hat{\beta}_\text{Lasso}, \hat{\gamma}_\text{Lasso} )\right\}}{\partial\alpha}\right]^{-1}\mathbb{E}_n\left\{\hat{U}_i(\hat{\alpha}_\text{Lasso}, \hat{\beta}_\text{Lasso}, \hat{\gamma}_\text{Lasso} )\right\}
$$
for $\alpha$ by linearizing (using a Newton-Raphson type correction) the estimating equation
$\mathbb{E}_n\left\{\hat{U}_i(\alpha, \hat{\beta}_\text{Lasso}, \hat{\gamma}_\text{Lasso} )\right\}=0$
at the initial estimator $\hat{\alpha}_\text{Lasso}$. Here, $\hat{\alpha}_\text{Lasso}$, $\hat{\beta}_\text{Lasso}$ and $\hat{\gamma}_\text{Lasso}$ are the initial Lasso estimators in Steps $1$ and $3$ in Section \ref{sec:debiased}.
Despite the fact that their method does not use a separate model for censoring, it is closely related to the triple selection estimator as it turns out that this estimator solves $\mathbb{E}_n\left\{\hat{U}_i(\alpha, \check{\beta}, \check{\gamma} )\right\}=0$ (see Appendix A of the online supplementary materials). Although the first order asymptotic properties of both estimators coincide, in finite samples, the triple selection method seems to deliver a more robust performance. This is likely due to the additional selection step for censoring and because the relevant gradients are likely made closer to zero within the sample by re-fitting the Cox model for survival.
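A minimal numerical sketch of such a one-step (Newton-Raphson type) correction is given below. The toy least-squares score is an illustrative stand-in for the Cox estimating function $\mathbb{E}_n\{\hat{U}_i\}$, not the estimator of \citet{fang2017} itself.

```python
import numpy as np

def one_step(alpha0, U, dU=None, eps=1e-6):
    """One Newton-Raphson correction of an initial estimate alpha0.

    U is the (scalar) empirical estimating function alpha -> E_n U_i(alpha);
    dU is its derivative, approximated by central finite differences if omitted.
    """
    if dU is None:
        dU = lambda a: (U(a + eps) - U(a - eps)) / (2 * eps)
    return alpha0 - U(alpha0) / dU(alpha0)

# Toy stand-in: a least-squares score U(alpha) = mean(x * (y - alpha * x)),
# which is linear in alpha, so a single Newton step lands on the exact root.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)
U = lambda a: np.mean(x * (y - a * x))
alpha_hat = one_step(alpha0=0.0, U=U)
```

For a nonlinear score, such as the Cox partial-likelihood score, the single step removes the first-order bias of the initial (Lasso) estimate rather than solving the equation exactly.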
A test based on the triple selection approach in Section \ref{sec:debiased} enjoys a specific form of double robustness under the null: if either the exposure mean is linear in $L$ or the log hazard of the survival endpoint is
linear in $L$ (but not necessarily both), then for similar reasons as in \cite{Dukes2020}, this test should be valid under the null when there is ultra-sparsity. This is both because the score function has mean zero if either model is correct, and because the proposed method for estimating $\beta^*$ and $\gamma^*$ ensures that the inference is doubly robust.
When choosing a working exposure model for the proposed de-biased test, we have focused on the identity link.
To extend double robustness to the logit link (e.g., for a binary exposure), we could develop an analogous proposal starting from the following score function
\begin{align*}
U_{i}\left(\alpha, \beta, \gamma, \gamma_0(t)\right)=\int_0^\tau\left\{A_{i}-\text{expit}\left(\gamma_0(t)+\gamma'L_i\right) \right\}\{dN_i(t)-R_i(t)\lambda_0(t, \alpha, \beta)e^{\alpha A_i+\beta' L_i}dt\}.
\end{align*}
We would then set the gradients with respect to $\gamma_0(t)$, $\gamma$ and $\beta$ approximately to zero by estimating the nuisance parameters as the solution to specific penalized equations given by those gradients.
Further, when choosing a working exposure model for the poor man's approach, we have mainly focused on the identity link. Other link functions (e.g., logit) can be used in the three steps of the algorithm, so long as the corresponding penalized estimators of the nuisance parameters can be shown to converge sufficiently quickly.
The estimator on which our proposed test is based (along with common alternatives) has the disadvantage that it may not converge to an easily interpretable quantity when the proportional hazards assumption fails. The resulting confidence intervals also typically lose their validity under misspecification of an assumed Cox proportional hazards model. This is not entirely satisfactory, as it is then unclear what is being inferred.
Inspired by developments on assumption-lean inference for generalized linear model parameters by \cite{Vansteelandt2020}, in future research we will develop nonparametric inference for a model-free estimand that reduces to a Cox model parameter under the proportional hazards assumption, but which continues to capture the association of interest under arbitrary types of misspecification.
\backmatter
\section*{Acknowledgements}
This project has received funding from Fulbright Belgium, Belgian American Educational Foundation, VLAIO under the Baekeland grant agreement HBC.2017.0219, BOF Grant BOF.01P08419 and FWO research projects G016116N and 1222522N.\\
\bibliographystyle{biom}
\section{Introduction}
Let $X$ and $Y$ be Banach spaces, let $F:X \rightrightarrows Y$ be a
closed multifunction, let $A$ be a closed subset of $X$, and let $b$ be a given point in $Y$.
Consider the following generalized equation with constraint (GEC)
$$
b\in F(x)\,\,\mathrm{subject}\,\,\mathrm{to}\,\,x\in A. \eqno{\rm (GEC)}
$$
This paper is devoted to several concepts of constraint qualifications for (GEC) and their applications to the metric subregularity of (GEC).
It is well known that the basic constraint qualification (BCQ) for
continuous convex inequalities is a fundamental concept in mathematical programming, and has been extensively studied by many authors. Readers may consult references \cite{BB,Li,HL,LN,LNS,W,WZ,11,9,ZW} for the details on BCQ as well as its close relationship with other important concepts in optimization. In 2004, Zheng and Ng \cite{11} made use of the singular subdifferential to introduce the concept of strong BCQ, which is strictly stronger than BCQ, for the convex inequality defined by a lower semicontinuous convex function, and used this notion to characterize metric regularity of the convex inequality. Afterwards, Hu \cite{10} further studied strong BCQ and introduced a measurement based on the end set to provide equivalent conditions for strong BCQ. In 2007, Zheng and Ng \cite{7} generalized the concepts of BCQ and strong BCQ to the case of convex generalized equations and used these constraint qualifications to obtain necessary and sufficient conditions for convex generalized equations to have metric subregularity. Naturally, it is interesting and important to further consider constraint qualifications as well as their applications for nonconvex generalized equations. Motivated by this, and as one aim of this paper, we draw on \cite{11} and \cite{10} to discuss BCQ and strong BCQ for (GEC), together with their characterizations, and apply these constraint qualifications to the study of metric subregularity of (GEC).
Metric subregularity is a well-known and useful concept in mathematical programming and optimization, and has been extensively studied by many authors under various names (cf. \cite{9,4,BS,5,13,29,22,23,28} and references therein). Note that Zheng and Ng \cite{7} discussed metric subregularity for convex generalized equations and provided dual characterizations of metric subregularity in terms of the coderivative and the normal cone. Recently, the authors of \cite{HYZ} considered metric subregularity for subsmooth generalized constraint equations. Based on \cite{7,HYZ}, and as the other aim of this paper, we further investigate the metric subregularity of (GEC) and establish several necessary and/or sufficient conditions for (GEC) to have metric subregularity. These conditions are given in terms of the constraint qualifications studied in this paper.
Given a Banach space $X$ with the dual space $X^*$ and a multifunction $\Phi: X\rightrightarrows X^*$, the symbol
\begin{equation*}
\begin{array}{r}
\mathop{\rm Limsup}\limits_{y\rightarrow x}\Phi(y):=\Big\{x^*\in X^*: \exists \ {\rm sequences} \ x_n\rightarrow x \ {\rm and} \ x_n^*\stackrel{w^*}\longrightarrow x^* \ {\rm with}\ \\
x_n^*\in \Phi(x_n) \ {\rm for\ all \ } n\in \mathbb{N} \Big\}
\end{array}
\end{equation*}
signifies the sequential Painlev\'{e}-Kuratowski outer/upper limit of $\Phi(y)$ as $y\rightarrow x$.
\section{Preliminaries}
Let $X, Y$ be Banach spaces with the closed unit balls denoted by
$B_{X}$ and $B_Y$, and let $X^*, Y^*$ denote the dual space of $X$
and $Y$ respectively. For a closed subset $A$ of $X$, let $\overline
A$ denote the closure of $A$. For $a\in A$, let $T_c(A,a)$ and $T(A, a)$ denote the
Clarke tangent cone and the contingent (Bouligand) cone of $A$ at $a$
respectively, which are defined by
$$
T_c(A,a):=\liminf\limits_{x\stackrel{A}{\rightarrow}a,t\rightarrow
0^+}\frac{A-x}{t}\;\;\mathrm{and}\;\;T(A,a):=\limsup\limits_{t\rightarrow 0^+}\frac{A-a}{t},
$$
where $x\stackrel{A}{\rightarrow}a$ means that $x\rightarrow a$ with
$x\in A$. Thus, $v\in T_c(A,a)$ if and only if for any
$a_n\stackrel{A}{\rightarrow}a$ and any $t_n\rightarrow 0^+$, there exists $v_n\rightarrow v$ such that $a_n+t_nv_n\in A$ for all $n$, and
$v\in T(A,a)$ if and only if there exist $v_n\rightarrow v$ and $t_n\rightarrow 0^+$ such that $a+t_nv_n\in A$ for all $n$.
We denote by $N_c(A,a)$ the Clarke normal cone of $A$ at $a$, that is,
$$N_c(A,a):=\{x^*\in X^*:\;\langle x^*,h\rangle\leq0\;\;\forall h\in
T_c(A,a)\}.$$
Let $\hat N(A,a)$ denote the Fr\'{e}chet normal cone of $A$ at $a$ which is defined by
$$
\hat N(A,a):=\left\{x^*\in
X^*:\limsup\limits_{y\stackrel{A}\rightarrow a}\frac{\langle x^*,
y-a\rangle}{\|y-a\|}\leq 0\right\},
$$
and let $N(A, a)$ denote the Mordukhovich (limiting/basic) normal cone of $A$ at $a$ which is defined by
$$
N(A, a):=\mathop{\rm Limsup}_{x\stackrel A\rightarrow a, \varepsilon\downarrow
0}\hat N_{\varepsilon}(A, x),
$$
where $\hat N_{\varepsilon}(A, x)$ is the set of $\varepsilon$-normal to $A$ at
$x$ and defined as
$$
\hat N_{\varepsilon}(A, x):=\left\{x^*\in
X^*:\limsup\limits_{y\stackrel{A}\rightarrow x}\frac{\langle x^*,
y-x\rangle}{\|y-x\|}\leq \varepsilon\right\}.
$$
It is known from \cite{17} and \cite{18} that
$$\hat N(A,a)\subset N(A, a)\subset N_c(A,a).$$
If $A$ is convex, all normal cones coincide and reduce to the normal cone in the sense of convex analysis; that is
$$
N_c(A,a)=N(A, a)=\hat{N}(A,a)=\{x^*\in X^*:\;\langle
x^*,x-a\rangle\leq 0\;\; \forall x\in A\}.
$$
For the case when $X$ is an Asplund space (cf. \cite{16} for definitions and their equivalences), Mordukhovich and
Shao \cite{18} have proved that
\begin{equation}\label{2.2a}
N_c(A, a)=\overline{\rm co}^{w^*}(N(A,
a))\;\;\mathrm{and}\;\;N(A, a)=\mathop{\rm Limsup}_{x\stackrel A\rightarrow
a}\hat N(A, x)
\end{equation}
where $\overline{\rm co}^{w^*}$ denotes the weak$^*$ closed convex hull. This means $x^*\in N(A, a)$ if and only if there exist $x_n\stackrel{A}\rightarrow a$ and $x^*_n\stackrel{w^*}\rightarrow x^*$ such that $x_n^*\in \hat N(A, x_n)$ for all $n$.
Let $F:X\rightrightarrows Y$ be a multifunction. Recall that $F$ is said to be closed if ${\rm gph}(F)$ is a closed subset of $X\times Y$, where ${\rm gph}(F):=\{(x, y)\in X\times Y : y\in F(x)\}$ is the graph of $F$. Let $(x, y)\in {\rm gph}(F)$. Recall that the Clarke tangent derivative $D_cF(x, y)$ of $F$ at $(x, y)$ is defined by
$$
{\rm gph}(D_cF(x, y)):=T_c({\rm gph}(F),(x, y) ).
$$
Let $\hat D^*F(x, y), D^*F(x, y), D_c^*F(x, y): Y^*\rightrightarrows X^*$ denote Fr\'echet, Mordukhovich and Clarke coderivatives of $F$ at $(x, y)$ respectively, and they are defined as
$$
\begin{array}{l}
\hat D^*F(x, y)(y^*):=\{x^*\in X^* : (x^*, -y^*)\in \hat N({\rm gph}(F), (x, y))\},\\
D^*F(x, y)(y^*):=\{x^*\in X^* : (x^*, -y^*)\in N({\rm gph}(F), (x, y))\},\\
D_c^*F(x, y)(y^*):=\{x^*\in X^* : (x^*, -y^*)\in N_c({\rm gph}(F), (x, y))\}.
\end{array}
$$
Let $\phi :X\rightarrow \mathbb{R}\cup\{+\infty\}$ be a proper
lower semicontinuous function and $x\in
\mathrm{dom}(\phi):=\{y\in X: \phi(y)<+\infty\}$. We denote Fr\'{e}chet, Mordukhovich and Clarke subdifferentials of $\phi$ at $x$ by $\hat{\partial}\phi(x), \partial\phi(x)$ and $ \partial_c\phi(x)$, respectively, which are defined as
\begin{eqnarray*}
\hat{\partial}\phi(x):=\{x^*\in X^* : (x^*, -1)\in \hat N({\rm epi}(\phi), (x, \phi(x)))\},\\
\partial\phi(x):=\{x^*\in X^* : (x^*, -1)\in N({\rm epi}(\phi), (x, \phi(x)))\},\\
\partial_c\phi(x):=\{x^*\in X^* : (x^*, -1)\in N_c({\rm epi}(\phi), (x, \phi(x)))\},
\end{eqnarray*}
where ${\rm epi}(\phi):=\{(x, \alpha)\in X\times \mathbb{R}: \phi(x)\leq \alpha\}$ denotes the epigraph of $\phi$. It is known that
$$
\hat{\partial}\phi(x)\subset\partial\phi(x)\subset\partial_c\phi(x).
$$
Further, one can verify that
$$
\hat{\partial}\phi(x)=\left\{x^*\in X^*:\;\liminf\limits_{z\rightarrow x}
\frac{\phi(z)-\phi(x)-\langle x^*,z-x\rangle}{\|z-x\|}\geq0\right\}
$$
and
$$\partial_c \phi(x)=\{x^*\in X^*:\;\langle x^*,h\rangle
\leq \phi^{\circ}(x;h)\;\;\;\forall h\in X\},$$
where $\phi^{\circ}(x;h)$ denotes the generalized directional derivative of $\phi$ in the sense of Rockafellar
at $x$ along the direction $h$, which is defined by
$$
\phi^{\circ}(x;h):=\lim\limits_{\varepsilon\downarrow
0}\limsup\limits_{z\stackrel{\phi}{\rightarrow}x,
\,t\downarrow0}\inf_{w\in h+\varepsilon
B_{X}}\frac{\phi(z+tw)-\phi(z)}{t},
$$
where $ z\stackrel{\phi}\rightarrow x$ means that $z\rightarrow x$
and $\phi(z)\rightarrow \phi(x)$. When $\phi$ is locally
Lipschitzian around $x$, $\phi^{\circ}(x;h)$ reduces to Clarke
directional derivative; that is
\begin{equation}\label{2.2}
\phi^{\circ}(x;h)=\limsup\limits_{z\rightarrow x,
\,t\downarrow0}\frac{\phi(z+th)-\phi(z)}{t}.
\end{equation}
For the case that $X$ is an Asplund space, Mordukhovich and
Shao \cite{18} have proved that
$$
\partial\phi(x)=\mathop{\rm Limsup}_{y\stackrel \phi\rightarrow
x}\hat{\partial}\phi(y);
$$
that is, $x^*\in\partial\phi(x)$ if and only if there exist $x_n\stackrel{\phi}\rightarrow x$ and $x_n^*\stackrel{w^*}\rightarrow x^*$ such that $x_n^*\in\hat{\partial}\phi(x_n)$ for all $n$.
The following lemmas will be used in our analysis. Readers are invited to consult references \cite{15} and \cite{14} respectively for more details.
\begin{lem}
Let $X$ be a Banach (resp. an Asplund) space and $A$ be a nonempty closed
subset of $X$. Let $\gamma\in (0,\;1)$. Then for any $x\not\in A$
there exist $a\in A$ and $a^*\in N_c(A,a)$ (resp. $a^*\in
\hat{N}(A,a)$) with $\|a^*\|=1$ such that
$$\gamma\|x-a\|<\min\{d(x,A),\;\langle a^*,x-a\rangle\}.$$
\end{lem}
\begin{lem}
Let $X$ be an Asplund space and $A$ be a nonempty closed subset of
$X$. Let $x\in X\backslash A$ and $x^*\in \hat{\partial}d(\cdot,
A)(x)$. Then, for any $\varepsilon>0$ there exist $a\in A$ and
$a^*\in \hat N(A, a)$ such that
$$
\|x-a\|<d(x, A)+\varepsilon\,\,\,\,\mathrm{and}\,\,\,\,\|x^*-a^*\|<\varepsilon.
$$
\end{lem}
As a suitable substitute for convexity in this paper, we consider the concept of subsmoothness, introduced by Aussel, Daniilidis and Thibault \cite{2}. Recall that $A$ is said to be subsmooth at $a\in A$ if for any $\varepsilon>0$ there exists
$\delta>0$ such that
$$
\langle x^*-u^*, x-u\rangle\geq -\varepsilon\|x-u\|
$$
whenever $x, u\in B(a, \delta)\cap A$, $x^*\in N_c(A, x)\cap
B_{X^*}$ and $u^*\in N_c(A, u)\cap B_{X^*}$.
When $A$ is subsmooth at $a\in A$, one has
$$
N_c(A, a)=N(A, a)=\hat N(A, a).
$$
Further, Zheng and Ng \cite{15} provided one characterization for
this notion; that is, $A$ is subsmooth at $a\in A$ if and only if for any $\varepsilon>0$ there exists
$\delta>0$ such that
\begin{equation}\label{2.2b}
\langle u^*, x-u\rangle\leq d(x, A)+\varepsilon\|x-u\|\;\;\forall
x\in B(a, \delta)
\end{equation}
whenever $u\in B(a, \delta)\cap A$ and $u^*\in N_c(A, u)\cap
B_{X^*}$. Readers are invited to consult \cite[Proposition 2.1]{15} and \cite[Proposition 3.1]{ZW} for more details.
For closed multifunctions, Zheng and Ng \cite{9} introduced the concept of L-subsmoothness and studied calmness for such multifunctions. Recall from \cite{9} that a closed multifunction $F:X\rightrightarrows Y$ is said
to be
(i) L-subsmooth at $(a, b)\in {\rm gph}(F)$ if for any $\varepsilon>0$ there exists
$\delta>0$ such that
\begin{equation}
\langle u^*, x-a\rangle+\langle v^*,
y-b\rangle\leq\varepsilon(\|x-a\|+\|y-b\|)
\end{equation}
whenever $v\in F(a)\cap B(b, \delta)$, $(u^*, v^*)\in N_c({\rm gph}(F), (a,
v))\cap (B_{X^*}\times B_{Y^*})$, and $(x, y)\in {\rm gph}(F)$ with
$\|x-a\|+\|y-b\|<\delta$;
(ii) $\mathcal{L}$-subsmooth at $(a, b)\in {\rm gph}(F)$
if $F^{-1}$ is L-subsmooth at $(b, a)$.
Readers are invited to consult reference \cite{9} for more properties and examples with respect to the concept of L-subsmooth.
\setcounter{equation}{0}
\section{BCQ and strong BCQ for nonconvex (GEC)}
This section is devoted to the constraint qualifications of BCQ and strong BCQ for nonconvex (GEC) as well as their equivalent characterizations. We first recall the concepts of BCQ and strong BCQ for convex (GEC).
Suppose that $F:X\rightrightarrows Y$ is a convex closed multifunction and $A$ is a convex closed subset of $X$. Recall from \cite{7} that convex (GEC) is said to have the BCQ at $a\in S$, if
\begin{equation}\label{3.1}
N(S, a)=D^*F(a, b)(Y^*)+N(A, a)
\end{equation}
and
convex (GEC) is said to have the strong BCQ at $a\in S$, if there exists $\tau\in(0, +\infty)$ such that
\begin{equation}\label{3.2}
N(S, a)\cap B_{X^*}\subset\tau(D^*F(a, b)(B_{Y^*})+N(A, a)\cap B_{X^*}),
\end{equation}
where $S:=\{x\in A: b\in F(x)\}$ is the solution
set of (GEC).
Taking applications of BCQ and strong BCQ into account, and inspired by \eqref{3.1} and \eqref{3.2}, we consider the following forms of BCQ and strong BCQ for nonconvex (GEC).
{\it Let $a\in S$. We say that
{\rm(i)} (GEC) has the BCQ at $a$ in the sense of Clarke, if
\begin{equation}\label{3.3}
N_c(S, a)\subset D_c^*F(a, b)(Y^*)+N_c(A, a);
\end{equation}
{\rm(ii)} (GEC) has the strong BCQ at $a$ in the sense of Clarke, if there exists $\tau\in(0, +\infty)$ such that
\begin{equation}\label{3.4}
N_c(S, a)\cap B_{X^*}\subset\tau(D_c^*F(a, b)(B_{Y^*})+N_c(A, a)\cap B_{X^*});
\end{equation}
{\rm(iii)} (GEC) has the strong BCQ at $a$ in the sense of Fr\'echet, if there exists $\tau\in(0, +\infty)$ such that
\begin{equation}\label{3.4a}
\hat N(S, a)\cap B_{X^*}\subset\tau(D_c^*F(a, b)(B_{Y^*})+N_c(A, a)\cap B_{X^*});
\end{equation}
{\rm(iv)} (GEC) has the strong BCQ at $a$ in the sense of Mordukhovich, if there exists $\tau\in(0, +\infty)$ such that}
\begin{equation}\label{3.4b}
N(S, a)\cap B_{X^*}\subset\tau(D_c^*F(a, b)(B_{Y^*})+N_c(A, a)\cap B_{X^*}).
\end{equation}
\noindent{\bf Remark 3.1.} It is clear that \eqref{3.4}$\Rightarrow$\eqref{3.4b}$\Rightarrow$\eqref{3.4a}. Furthermore, for the case that $F$ is a convex closed multifunction and $A$ is a convex closed subset, the solution set $S$ is convex, and coderivatives and normal cones are in the sense of convex analysis. Thus, strong BCQs of \eqref{3.4}, \eqref{3.4a} and \eqref{3.4b} reduce to \eqref{3.2}, and BCQ of \eqref{3.3} is equivalent to \eqref{3.1} as the inverse inclusion of \eqref{3.1} holds trivially in this case.
Recall that for the convex inequality defined by a proper lower semicontinuous convex function, Hu \cite{10} introduced the concept of the end set to study BCQ and strong BCQ, and used this concept to characterize strong BCQ. Motivated by this, we are interested in characterizing the strong BCQ of \eqref{3.4} for (GEC) in the same way. We first recall the concept of the end set.
Let $C$ be a subset of $X$. Recall from \cite{10} that the end set of
$C$ is defined by
\begin{equation}\label{3.5}
E[C]:=\{z\in \overline{[0, 1]C}: tz\notin \overline{[0,
1]C}\,\,\,\,\forall t>1\}.
\end{equation}
It is shown in \cite{10} that if $C$ is closed and convex then
$$
E[C]=\{z\in C: tz\notin C\,\,\,\,\forall t>1\}.
$$
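As a simple illustration of \eqref{3.5}, take $C=[-1, 2]\subset\mathbb{R}$. Then $\overline{[0, 1]C}=[-1, 2]$ and $E[C]=\{-1, 2\}$: for $z\in\{-1, 2\}$ one has $tz\notin[-1, 2]$ for every $t>1$, whereas every $z\in(-1, 2)$ satisfies $tz\in[-1, 2]$ for $t>1$ sufficiently close to $1$ (and $z=0$ satisfies $tz=0\in C$ for all $t>1$). This agrees with the above description of $E[C]$ for closed convex $C$.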
The following theorem provides an equivalent condition of strong BCQ of \eqref{3.4} for (GEC) by using BCQ and end set.
\begin{them}
Let $z\in S$ and $\tau\in (0, +\infty)$. Then (GEC) has the strong BCQ of \eqref{3.4} at $z$ with constant $\tau>0$ if
and only if (GEC) has the BCQ of \eqref{3.3} at $z$ and
\begin{equation}\label{3.6}
d(0, E[D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}])\geq
\frac{1}{\tau}.
\end{equation}
\end{them}
\begin{proof}
The necessity part. Since BCQ follows from strong BCQ trivially, it suffices to prove \eqref{3.6}. Let $x^*\in E[D_c^*F(z,
b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}]$. Then $x^*\not= 0$. Noting that
$D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}$ is weak$^*$-closed and
convex, it follows that
$$
x^*\in D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}\subset N_c(S, z).
$$
Using the strong BCQ of \eqref{3.4}, one has
$$
\frac{x^*}{\tau\|x^*\|}\in D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap
B_{X^*}.
$$
This implies that $\frac{1}{\tau\|x^*\|}\leq 1$ by \eqref{3.5}; that is $\|x^*\|\geq \frac{1}{\tau}$. Thus \eqref{3.6} holds.
The sufficiency part. Let $x^*\in N_c(S, z)\cap B_{X^*}$. We
set
$$
\lambda:=\sup\{t>0: tx^*\in D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap
B_{X^*}\}.
$$
Then, $\lambda>0$ as (GEC) has the BCQ at $z$. If $\lambda=+\infty$, then there exists
$t>\frac{1}{\tau}$ such that
$$
tx^*\in D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}.
$$
This and $0\in D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}$ imply that
$$
x^*\in \tau(D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}).
$$
If $\lambda<+\infty$, then one has
$$
\lambda x^*\in E[D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}].
$$
Thus
$$
\lambda\geq\lambda\|x^*\|\geq d(0, E[D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap
B_{X^*}])\geq \frac{1}{\tau},
$$
and consequently
$$
x^*\in\tau(D_c^*F(z, b)(B_{Y^*})+N_c(A, z)\cap B_{X^*}).
$$
The proof is complete.
\end{proof}
Next, we use polars and duality techniques to study BCQ and strong BCQ for (GEC).
Let $M$ and $N$ be subsets of $X$ and $X^*$, respectively. Recall
from \cite{24} that the polar of $M$ and $N$ are defined by
$$
M^{\circ}:=\{x^*\in X^*: \langle x^*, x\rangle\leq 1\;\;\forall x\in
M\}\;\;{\rm{and}}\;\;N^{\circ}:=\{x\in X: \langle x^*, x\rangle\leq
1\;\;\forall x^*\in N\}.
$$
It is known that when $M$ is a cone of $X$, the polar of $M$ reduces
to $M^{\ominus}$ which is called the negative polar of $M$, where
$M^{\ominus}:=\{x^*\in X^*: \langle x^*, x\rangle\leq 0\;\;\forall
x\in M\} $. Hence, the Clarke tangent cone and the Clarke normal cone are polar to each other; that is
\begin{equation}\label{3.7}
T_c(M, x)^{\circ}=N_c(M, x)\;\;{\rm{and}}\;\;N_c(M,
x)^{\circ}=T_c(M, x)\;\;\forall\, x\in M.
\end{equation}
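For instance, for the cone $M=\{(x_1, x_2)\in\mathbb{R}^2: x_1\geq 0,\ x_2=0\}$ one has
$$
M^{\ominus}=\{(x_1^*, x_2^*)\in\mathbb{R}^2: x_1^*\leq 0\},
$$
since $\langle x^*, x\rangle=x_1^*x_1\leq 0$ for all $x_1\geq 0$ exactly when $x_1^*\leq 0$, with $x_2^*$ unrestricted.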
Using the known relationships in \eqref{3.7} and the polar, we establish several results on sufficient/necessary conditions for (GEC) to have BCQ and strong BCQs. These conditions are given in terms of the Clarke tangent cone and the Clarke tangent derivative.
\begin{them}
Let $z\in S$ and $\tau\in (0, +\infty)$.
{\rm (i)} Suppose that
\begin{equation}\label{3.8}
\overline{D_cF^{-1}(b, z)(\frac{1}{\tau} B_Y)}\cap \overline{T_c(A, z)+\frac{1}{\tau} B_X}\subset\overline{T_c(S, z)+B_X}.
\end{equation}
Then (GEC) has the strong BCQ of \eqref{3.4} at $z$ with constant $\tau>0$.
{\rm (ii)} Suppose that
\begin{equation}\label{3.8a}
\overline{D_cF^{-1}(b, z)(\frac{1}{\tau} B_Y)}\cap \overline{T_c(A, z)+\frac{1}{\tau} B_X}\subset\overline{T(S, z)+B_X}.
\end{equation}
Then (GEC) has the strong BCQ of \eqref{3.4a} at $z$ with constant $\tau>0$.
{\rm (iii)} Suppose that (GEC) has the strong BCQ of \eqref{3.4} at $z$ with constant $\tau>0$. Then
\begin{equation}\label{3.9a}
u(h_{< i})} &\mbox{if } h_{< i} \in \mathcal{H}_{\mathcal{A}}\\
\sigma(h_i \,|\, h_{< i}) &\mbox{o.w.}
\end{cases}
\end{align*}
We denote $\probability_{\strategy, \sigma}[h] = \probability_{\strategy, \sigma}[\bigcup_{\bar{h} \in \mathcal{H}_{\varnothing}} \bar{h} \sqsubseteq h \sqsubseteq H]$ and refer to this quantity as $h$'s \emph{reach probability}.
We can decompose
\begin{align*}
\probability_{\strategy, \sigma}[h]
= &\left. \hspace{-1em} \prod_{
i = \abs{h_{\varnothing}} + 1, \, i \bmod 2 = \iota
}^{\abs{h}} \hspace{-1em}
\gamma(h_{< i - 1}) \sigma(h_i \,|\, h_{< i})
\right\rbrace \probability_{\sigma}[h]\\
&\left. \hspace{-1em} \prod_{i = \abs{h_{\varnothing}} + 1, \, i \bmod 2 = 1 - \iota}^{\abs{h}} \hspace{-1em} \strategy\subex*{h_i \,|\, u(h_{< i})} \right\rbrace \probability_{\strategy}[h]
\end{align*}
according to the probability that the daimon and agent play their parts of $h$, where we have grouped the continuation probabilities with the daimon's and set $\gamma(h_{< \abs{h_{\varnothing}}}) = 1$.
The conditional probability
\begin{align*}
\probability_{\strategy, \sigma}[h' \,|\, h]
&= \probability_{\strategy, \sigma}[h \sqsubseteq H, h' \sqsubseteq H] / \probability_{\strategy, \sigma}[h]
\end{align*}
is the probability that $h' \sqsubseteq H$ given that $h \sqsubseteq H$.
If $h'$ and $h$ are unrelated, in the sense that
$h' \not\sqsubseteq h$ and $h \not\sqsubseteq h'$,
then it is not possible for $H$ to realize both, so the joint probability
$\probability_{\strategy, \sigma}[h, h'] = 0$,
and consequently
$\probability_{\strategy, \sigma}[h' \,|\, h] = 0$.
If $h' \sqsubseteq h$ then $H$ always realizes $h'$ when $h$ is realized; therefore,
$\probability_{\strategy, \sigma}[h, h'] = \probability_{\strategy, \sigma}[h]$
and
$\probability_{\strategy, \sigma}[h' \,|\, h] = 1$.
The last case is $h \sqsubseteq h'$, where
\begin{align*}
\probability_{\strategy, \sigma}[h' \,|\, h]
&= \probability_{\strategy, \sigma}[h, h'] / \probability_{\strategy, \sigma}[h]
= \probability_{\strategy, \sigma}[h'] / \probability_{\strategy, \sigma}[h].
\end{align*}
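The three cases can be checked on a toy process in which each action is drawn independently with a fixed probability; this is an illustrative simplification of the POHP reach probabilities, with histories encoded as strings.

```python
from math import prod

# Fixed per-action probabilities; P[h ⊑ H] is the product over h's actions.
p_action = {"a": 0.5, "b": 0.5, "x": 0.3, "y": 0.7}

def reach(h):
    # Reach probability P[h ⊑ H] of history h.
    return prod(p_action[c] for c in h)

def cond(h_prime, h):
    # P[h' ⊑ H | h ⊑ H] by the three cases in the text.
    if h_prime.startswith(h):      # h ⊑ h': ratio of reach probabilities
        return reach(h_prime) / reach(h)
    if h.startswith(h_prime):      # h' ⊑ h: realized with certainty
        return 1.0
    return 0.0                     # unrelated: H cannot realize both
```

For example, `cond("ax", "a")` returns `reach("ax") / reach("a")`, matching the case $h \sqsubseteq h'$ above.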
\section{Representing Traditional Models}
\label{sec:backgroundDecModels}
\subsection{Games}
\begin{algorithm}[tb]
\caption{The procedure for playing an $N$ player game in POHP-form.}
\label{alg:efgWithPohps}
\begin{algorithmic}[1]
\State \textbf{Input:}\
turn function
$p : \mathcal{H} \to \set{c} \cup \set{i}_{i = 1}^N$,\\
\quad legal actions function
$\mathcal{A}$,\\
\quad terminal histories
$\mathcal{Z} \subseteq \mathcal{H}$\\
\quad or continuation function
$\gamma: \mathcal{H} \to \Simplex{\set{0, 1}}$,\\
\quad information partitions
$\set{\mathcal{I}_i}_{i \in \set{c} \cup \set{j}_{j = 1}^N}$\\
\quad or observation functions
$\set{\omega_i : \mathcal{H} \to \mathcal{S}_i}_{i \in \set{c} \cup \set{j}_{j = 1}^N}$,\\
\quad and utility functions
$\set{\upsilon_i : \mathcal{Z} \to [-\supReward, \supReward]}_{i = 1}^N$.
\For{$i \in \set{c} \cup \set{j}_{j = 1}^N$}
\State $\omega_i(h) \gets I$ \textbf{for} $h \in I \in \mathcal{I}_i$ \textbf{if} $\omega_i$ undefined
\EndFor
\State $\gamma \gets h \mapsto \ind{h \notin \mathcal{Z}}$ \textbf{if} $\gamma$ undefined
\State $H \gets \varnothing$
\State $\Gamma \gets 1$
\While{$\Gamma$}
\State \textbf{send} $\omega_i(H)$ \textbf{to} player $p(H)$
\State \textbf{receive} $A \in \mathcal{A}(H)$ \textbf{from} player $p(H)$
\State $H \gets HA$
\State \textbf{sample} $\Gamma \sim \gamma(H)$
\EndWhile
\For{$i = 1, 2, \ldots, N$}
\State \textbf{send} $\omega_i(H) = \tuple{H, \upsilon_i(H)}$ \textbf{to} player $i$
\EndFor
\end{algorithmic}
\end{algorithm}
A \emph{game} is an $N$ player interaction where each player simultaneously chooses a strategy and immediately receives a payoff from a bounded utility function~\citep{Neumann47}.
There may also be an extra ``chance player'', denoted $c$, who ``decides'' chance events like die rolls with strategy $\strategy_{c}$.
A game described in this way is called a \emph{normal-form game} (\emph{NFG}).
For any given player, $i$, we can represent $i$'s view of the game with a POHP, $\mathcal{G}_i$, where the agent represents $i$ and the daimon represents the other $N - 1$ players and chance in aggregate.
We can also represent chance's view of the game with a POHP where the agent's strategy is fixed to $\strategy_{c}$.
The histories, action sets, and continuation function across all $N + 1$ of these POHPs are shared but the first turn indicator and observation functions are specific to each player.
The reward functions for each player must also reflect the game's payoffs.
After each player chooses an agent strategy for their POHP, all the POHPs are evaluated together, sharing the same history, and each player receives a return in their POHP that equals their payoff in the game.
Together, the set of POHPs, $\set{\mathcal{G}_i}_{i \in \set{c} \cup \set{j}_{j = 1}^N}$, represents what we could call a \emph{POHP-form game}.
See \cref{alg:efgWithPohps} for a programmatic description of how a game can be played out in POHP form.
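The play-out of \cref{alg:efgWithPohps} can be sketched in code as follows. The two-player game below (a serialized matching-pennies round with fixed strategies) and all function names are illustrative stand-ins, not part of the formal model.

```python
import random

def play(turn, actions, gamma, observe, utilities, players, rng=random.Random(0)):
    """Play one episode of a game in POHP form (cf. Algorithm 1).

    turn(h) names the player to act after history h; actions(h) lists legal
    actions; gamma(h) is the continuation probability; observe[i](h) is
    player i's observation; utilities[i](z) is i's payoff at termination.
    """
    h = ""                                   # H <- empty history
    while rng.random() < gamma(h):           # sample continuation Gamma
        i = turn(h)
        a = players[i](observe[i](h))        # send observation, receive action
        assert a in actions(h)
        h += a                               # H <- HA
    return {i: u(h) for i, u in utilities.items()}, h

# Serialized matching pennies: player 1 moves first, player 2 moves unobserving.
turn = lambda h: 1 if len(h) == 0 else 2
actions = lambda h: ["H", "T"]
gamma = lambda h: 1.0 if len(h) < 2 else 0.0     # terminate after two moves
observe = {1: lambda h: "", 2: lambda h: ""}     # player 2 sees nothing
utilities = {1: lambda z: 1 if z[0] == z[1] else -1,
             2: lambda z: -1 if z[0] == z[1] else 1}
players = {1: lambda obs: "H", 2: lambda obs: "T"}
payoffs, z = play(turn, actions, gamma, observe, utilities, players)
```

Because player 2's observation function hides player 1's move, the serialized game preserves the simultaneity of the normal-form original.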
In each $\mathcal{G}_i$, the daimon's strategy, $\sigma_i$, must reflect those of the other players.
If we have a \emph{turn function} $p: \mathcal{H} \to \set{c} \cup \set{j}_{j = 1}^N$ that determines which player acts after a given history $h$, we can constrain $\sigma_i$ to conform to the agent strategies from the other POHPs as
$\sigma_i(h) = \strategy_{p(h)}(u_{p(h)}(h))$.
A game described with histories and turns is called an \emph{extensive-form game} (\emph{EFG})~\citep{Kuhn53}.
Any NFG can be converted into extensive form by serializing each decision.
Of course, players who act later are not allowed to observe previous actions, and this is traditionally specified through information partitions.
Each player, $i$, is assigned a set of information sets as their \emph{information partition}, denoted $\mathcal{I}_i$.
Typically, EFGs also define a set of \emph{terminal histories}, $\mathcal{Z} \subseteq \mathcal{H}$, which is constructed so that every history eventually terminates.
As in an NFG, payoffs are given to players upon termination.
Since the POHP and EFG share the same history-based progression, representing an EFG in POHP-form simply requires that information partitions, terminal histories, and utility functions are faithfully reconstructed in the POHP.
If player $i$'s observation function returns its given history's information set in $\mathcal{I}_i$ and player $i$'s observation update function replaces the current information state with its given observation, then the set of information sets on active information states reproduces $\mathcal{I}_i$.
Formally, $\set{ I(s) }_{s \in \mathcal{S}_{i, \mathcal{A}}} = \mathcal{I}_i$.
Constructing the information states for each player in this way ensures that all information partitions are respected.
We can add terminal histories to a POHP by setting $\gamma(h) = \ind{h \notin \mathcal{Z}}$.
To respect the EFG's utility function for each player $i$, $\upsilon_i: \mathcal{Z} \to [-\supReward, \supReward]$, we set player $i$'s rewards for all observations to zero except those following terminal histories, $z$, at which point $r_i(\omega_i(z)) = \upsilon_i(z)$.
\subsection{Markov Models}
The POHP model allows agents to construct their own complicated notions of state but forces conceptual simplicity on environment state, \ie/, its history.
However, Markov models are a popular class of models in which the environment has a more complicated notion of state that evolves according to Markovian dynamics.
That is, there may be many histories that lead to the same environment state and the state on the next step is determined, up to stochasticity, by the current environment state and action.
The transition probabilities must be constant across each history in an environment state, otherwise transitions would depend on past environment states, violating the Markov property.
Agent information states in a POHP may lack the Markov property because the daimon's strategy may depend on the history.
A straightforward way to represent environment states is with the passive information states of the chance player in a POHP-form game.
At each of chance's passive histories $h$, each non-chance player plays an action in turn, which advances the history to $h' = h a_1 \ldots a_N$ where chance updates their information state to
$s_{h'} = u_{c}(h') \in \mathcal{S}_{c, \mathcal{A}}$.
Chance then chooses which of their passive information states is next by sampling $A_{c}$ from $\strategy_{c}(s_{h'})$, resulting in a transition to
$s_{h'A_{c}} = u_{c}(h'A_{c}) \in \mathcal{S}_{c, \mathcal{O}}$.
A Markovian transition between $s_{h}$ and $s_{h'A_{c}}$ can be enforced by restricting chance's observation function so that
$\omega_{c}(ha_1 \ldots a_N)
= \omega_{c}(\bar{h}a_1 \ldots a_N)$
for all joint player actions
$a_1 \ldots a_N$
and histories $\bar{h} \in I(s_{h})$.
Enforcing this constraint for each history $h$ ensures that if
$u_{c}(\bar{h}) = s_{h}$,
then, given joint player actions $a_1 \ldots a_N$,
\begin{align*}
u_{c}(\bar{h}a_1 \ldots a_N A_{c})
&= u_{c, \mathcal{A}}(
u_{c, \mathcal{O}}(
u_{c}(\bar{h}),
\omega_{c}(\bar{h} a_1 \ldots a_N)),
A_{c})\\
&= u_{c, \mathcal{A}}(
u_{c, \mathcal{O}}(
s_{h},
\omega_{c}(h a_1 \ldots a_N)),
A_{c})\\
&= s_{h'A_{c}}
\end{align*}
with transition probability
$\strategy_{c}(A_{c} \,|\, s_{h'})$
for all $\bar{h}$.
A general Markovian model is the \emph{partially observable Markov game} (\emph{POMG})~\citep{hansen2004posg}.\footnote{A Markov game is also often called a ``stochastic game'', but a core feature of this model is Markovian transitions, not stochasticity.
This leads us to prefer the term ``Markov game''.
}
In this model, each environment state represents its own NFG, so all players simultaneously choose an action and receive a reward that depends on the state and joint action selection.
The next state (and thus the next NFG) is determined by the current state and the joint action selection.
Just as we can serialize any NFG by introducing a turn function and selectively hiding actions,
we can serialize a POMG into a \emph{turn-taking POMG} (\emph{TT-POMG})~\citep{Greenwald2017ttpomg} by serializing each of its component NFGs.
The TT-POMG formalism was developed to convert EFGs into Markov models~\citep{Greenwald2017ttpomg}, so naturally the POHP-form of a POMG is similar to that of an EFG, except that chance's observations are constrained so that chance's passive information states can play the role of the POMG environment states.
The actions that each player plays are only revealed to the other players after chance's current passive information state transitions to the next one, corresponding to the POMG's next NFG.
Players also receive a reward at this time based on the utilities of the previous NFG.
This model is typically presented as a continuing process with discounting, and we can replicate the same setup by setting the continuation probability $\gamma(h)$ to the discount factor for each of chance's passive histories $h$ and $\gamma(h') = 1$ for all other histories $h'$.
Providing full observability to player $i$ in a POHP-form POMG is simply a matter of adding the constraint that chance's passive information states are isomorphic to player $i$'s active information states.
That is, there must be a bijection where chance's information state is
$s' \in \mathcal{S}_{c, \mathcal{O}}$
whenever player $i$'s information state is $s \in \mathcal{S}_{i, \mathcal{A}}$,
and \viceversa/.
Effectively, player $i$'s active information state is always chance's passive information state and thus also the POMG environment state.
One trivial way to enforce this constraint is for player $i$'s observation function to return chance's passive information state, \ie/,
$\omega_i(ha_1 \ldots a_{i - 1}) = u_{c}(h)$,
and for player $i$'s observation update function to replace their current state with the given observation.
If all players are granted full observability, then a POMG becomes, naturally, a \emph{Markov game}~\citep{shapley1953stochasticGame}.
Furthermore, a single-player Markov game or POMG reduces to a \emph{Markov decision process} (\emph{MDP}) or \emph{partially observable MDP} (\emph{POMDP})~\citep{smallwood1973pomdp}, respectively, and this is true when models are represented either in their canonical or POHP-forms.
\section{The Sub-POHP and Learning}
We now describe how sub-POHPs can be constructed in finite-horizon POHPs with timed updates, and show how observable sequential rationality~\citep{hsr2020} is naturally defined in terms of sub-POHPs.
A POHP has a \emph{finite horizon} if every history eventually terminates deterministically.
We enforce this by selecting a subset of histories, $\mathcal{Z} \subseteq \mathcal{H}$ where $\gamma(z) = 0$ for all $z \in \mathcal{Z}$.
The agent's updates are \emph{timed} as long as the agent's action update function records the number of actions the agent has taken.
A finite horizon and timed updates ensure that the number of histories in each information set is finite and the same information state is never encountered twice before termination.
Thus, the information states are partially ordered and we can write $s \prec s'$ to denote that information state $s$ is a predecessor of $s'$.
\subsection{Beliefs and Realization Weights}
Given that the agent's information state is $s$, how likely is it that the agent is in a particular history $h \in I(s)$?
Traditionally, this is called the agent's \emph{belief} (about which history they are in) at $s$.
According to Bayes' rule,
$\probability_{\strategy, \sigma}[h \,|\, s]
= \probability_{\strategy, \sigma}[s \,|\, h] \probability_{\strategy, \sigma}[h] / \probability_{\strategy, \sigma}[s]$.
Since $h \in I(s)$,
$\probability_{\strategy, \sigma}[s \,|\, h] = 1$.
The agent's information state is $s$ only if the random history $H$ lands in $I(s)$, so we can describe the event of realizing $s$ as the union of history realization events.
Since we assume the agent's updates are timed, there is at most one prefix of $H$ in $I(s)$, which means that the events $h \sqsubseteq H$ for $h \in I(s)$ are pairwise disjoint.
The probability of their union is thus the sum
\begin{align*}
\probability_{\strategy, \sigma}[s]
= \probability_{\strategy, \sigma}\subblock*{\bigcup_{\substack{
h \in I(s),\\
\bar{h} \in \mathcal{H}_{\varnothing}
}} \bar{h} \sqsubseteq h \sqsubseteq H}
= \sum_{h \in I(s)} \probability_{\strategy, \sigma}[h].
\end{align*}
The agent's belief at $s$ is $\xi^{\strategy, \sigma}_{s}: h \mapsto
\probability_{\strategy, \sigma}[h] / \probability_{\strategy, \sigma}[s]$.
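The computation of $\probability_{\strategy, \sigma}[s]$ and $\xi^{\strategy, \sigma}_{s}$ from history reach probabilities can be sketched as follows (the reach probabilities here are illustrative numbers, not derived from a particular POHP):

```python
# Sketch: the belief over histories in an information set I(s), given reach
# probabilities P[h] under a fixed agent-daimon strategy pair.

def information_state_prob(info_set, reach_prob):
    # With timed updates the history-realization events are pairwise
    # disjoint, so P[s] is the sum of the history reach probabilities.
    return sum(reach_prob[h] for h in info_set)

def belief(info_set, reach_prob):
    # Bayes' rule with P[s | h] = 1 for h in I(s).
    p_s = information_state_prob(info_set, reach_prob)
    return {h: reach_prob[h] / p_s for h in info_set}

I_s = ["h1", "h2"]
P = {"h1": 0.3, "h2": 0.1}
xi = belief(I_s, P)
print(xi)   # roughly {'h1': 0.75, 'h2': 0.25}
```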
An assignment of beliefs to each information state is called a \emph{system of beliefs}.
A problem that arises in defining a complete system of beliefs from a given $\strategy$--$\sigma$ pair is that some information states may be unrealizable ($\probability_{\strategy, \sigma}[s] = 0$).
Different rationality assumptions lead to different ways of constructing complete belief systems and corresponding notions of equilibria (see, \eg/, \citet{KrepsWilson82,breitmoser2010beliefsOffThePath,Dekel2015EpistemicGT}).
However, from a hindsight perspective, only realizable information states could have been observed by the agent, and only behavior in realizable states could have impacted the agent's return.
Thus, beliefs at unrealizable information states are naturally left undefined.
As a consequence, information state realization probabilities hold special significance in hindsight analysis, as they determine whether or not a state is observable.
More generally, they provide a measure of importance to each information state.
Let $J$ be the random step in the trajectory $\set{H_i}_{i = 1}^{\infty}$ where $u(H_J) = s$ or $J = \infty$ if information state $s$ is never realized.
The return from $H_1$ can be split as
\begin{align*}
&G_{H_1}(\strategy; \sigma)
=
\ind{J < \infty}
\sum_{i = 1}^{J - 1} Y_i \reward\big( \omega(H_i) \big)\\
&\quad+ \ind{J = \infty}
\sum_{i = 1}^{\infty} Y_i \reward\big( \omega(H_i) \big)\\
&\quad+
\left.\ind{J < \infty}
\sum_{i = J}^{\infty} Y_i \reward\big( \omega(H_i) \big)
\right\} \text{$s$'s contribution.}
\end{align*}
Since $H_J \sim \xi_{s}^{\strategy, \sigma}$, the expectation of $s$'s contribution is
the \emph{realization-weighted expected return} from $s$,
\begin{align}
\cfIv_{s}(\strategy; \sigma)
=
\probability_{\strategy, \sigma}[s]
\expectation_{H \sim \xi^{\strategy, \sigma}_{s}}\subblock*{
G_H(\strategy; \sigma) },
\label{eq:realizationWeightedExpectedReturn}
\end{align}
where $\xi^{\strategy, \sigma}_{s}$ is defined arbitrarily if $\probability_{\strategy, \sigma}[s] = 0$.
\subsection{Observable Sequential Rationality}
Here we capitalize on the generality of our POHP definition.
An agent belief can be used as a distribution over initial histories to define a POHP, which in this context we call a \emph{sub-POHP}.
Thus, every realizable information state $s$ admits a sub-POHP where
$\xi^{\strategy, \sigma}_{s}$
is the probability distribution over the histories in
$I(s)$
and the turn indicator is $\ind{s \in \mathcal{S}_{\mathcal{A}}}$.
\emph{Sequential rationality} can then be defined as optimal behavior within every sub-POHP with respect to an assignment of beliefs to unrealizable information states.
This definition is equivalent to sequential rationality in a single-player EFG~\citep{KrepsWilson82}.
\emph{Observable sequential rationality}~\citep{hsr2020} merely drops the requirement that play must be rational at unrealizable information states.
The key value determining observable sequential rationality is in fact \cref{eq:realizationWeightedExpectedReturn}, the realization-weighted expected return.
As with rationality in normal and extensive-form games, we can generalize the idea of observable sequential rationality to samples from a joint distribution of agent strategy--daimon strategy pairs (traditionally called a \emph{recommendation distribution}) and deviations.
A \emph{deviation} is a transformation that generates alternative agent behavior, \ie/, a function $\phi: \PureStratSet \to \PureStratSet$ where $\PureStratSet$ is the set of \emph{pure strategies} for the agent that play a single action deterministically in every active information state.
We denote the complete set of such transformations, known as the set of \emph{swap deviations}, as $\Phi^{\textsc{sw}}_{\PureStratSet}$.
We can now give a generalized definition of observable sequential rationality in a POHP.
\begin{definition}
\label{def:obs-seq-rationality}
A recommendation distribution,
$\mu \in \Simplex(\mathcal{X} \times D)$,
where $\mathcal{X}$ and $D$ are the sets of pure strategies for the agent and daimon, respectively,
is observably sequentially rational for the agent with respect to a set of deviations,
$\Phi \subseteq \Phi^{\textsc{sw}}_{\mathcal{X}}$,
if the maximum benefit for every deviation,
$\phi \in \Phi$,
according to the realization-weighted expected return from every information state,
$s \in \mathcal{S}$,
is non-positive,
\begin{align*}
\expectation_{(x, d) \sim \mu}\subblock*{
\cfIv_{s}(\phi(x); d)
- \cfIv_{s}(\phi_{\prec s}(x); d)
}
\le 0,
\end{align*}
where $\phi_{\prec s}$ is the deviation that applies $\phi$ only before $s$, \ie/,
$[\phi_{\prec s} x](\bar{s}) = [\phi x](\bar{s})$
if $\bar{s} \prec s$ and $x(\bar{s})$ otherwise.
\end{definition}
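The truncation operation $\phi_{\prec s}$ can be sketched directly from its definition. In this illustrative sketch (pure strategies as dictionaries from information state to action, and the `precedes` predicate encoding the partial order, are our own assumed representations):

```python
# Sketch: phi_{<s} applies the deviation phi only at information states that
# strictly precede s; elsewhere the pure strategy x is left unchanged.

def truncate_before(phi, s, precedes):
    def phi_before_s(x):
        phi_x = phi(x)
        return {sb: (phi_x[sb] if precedes(sb, s) else x[sb]) for sb in x}
    return phi_before_s

# Toy partial order: s1 < s2 < s3 by index.
precedes = lambda a, b: int(a[1]) < int(b[1])
# A swap deviation that flips every binary action.
swap_all = lambda x: {sb: 1 - a for sb, a in x.items()}

phi_trunc = truncate_before(swap_all, "s2", precedes)
x = {"s1": 0, "s2": 0, "s3": 1}
print(phi_trunc(x))   # {'s1': 1, 's2': 0, 's3': 1}: only s1 precedes s2
```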
If $\phi$ always deterministically plays to reach $s$, then this definition becomes equivalent to \citet{hsr2020}'s.
The hindsight analogue to \cref{def:obs-seq-rationality} follows.
\begin{definition}
\label{def:obs-seq-hindsight-rationality}
Define the \emph{full regret} from information state $s$ as
$\rho_{s}(\phi, \strategy; \sigma)
= \cfIv_{s}(\phi(\strategy); \sigma) - \cfIv_{s}(\strategy; \sigma)$.
An agent is observably sequentially hindsight rational if they are a no-full-regret learner in every realizable information state within a given POHP with respect to $\Phi \subseteq \Phi^{\textsc{sw}}_{\mathcal{X}}$.
That is, the agent generates for any $T > 0$ a sequence of strategies, $\tuple{ \strategy^t }_{t = 1}^T$, where
$\lim_{T \to \infty}
\frac{1}{T} \sum_{t = 1}^T \rho_{s}(\phi, \strategy^t; \sigma^t) \le 0$
at each $s$ for each $\phi \in \Phi$ under any sequence of daimon strategies $\tuple{ \sigma^t }_{t = 1}^T$.
\end{definition}
\subsection{Local Learning}
Consider a local learning problem in a repeated finite-horizon POHP with timed updates based on the realization-weighted expected return at each active information state $s$.
Given a set of deviations, $\Phi \subseteq \Phi^{\textsc{sw}}_{\PureStratSet}$, we can construct a set of truncated deviations,
$\Phi_{\preceq s} = \set{ \phi_{\preceq s} }_{\phi \in \Phi}$,
where each deviation in $\Phi_{\preceq s}$ applies a deviation from $\Phi$ until after an action has been taken in $s$, at which point the rest of the strategy is left unmodified.
Each truncated deviation represents a way that the agent could play to and in $s$ so a natural local learning problem is for the agent to choose their actions at $s$ so that there is no beneficial truncated deviation.
To apply deviations to the agent's behavioral strategies, notice that sampling an action for each information state under timed updates yields a pure strategy.
Thus, a behavioral strategy defines a probability distribution over the set of pure strategies, $\mathcal{X}$.
We overload $\strategy : \mathcal{X} \to \Simplex(\mathcal{X})$ to return the probability of a given pure strategy under behavioral strategy $\strategy \in \Pi$.
From this perspective, $\strategy$ may be called a \emph{mixed strategy}.
The transformation of $\strategy$ by deviation $\phi$ is the pushforward measure $\phi(\strategy)$ defined pointwise by
$[\phi\strategy](x') = \sum_{x \in \phi^{-1}(x')} \strategy(x)$
for all $x' \in \mathcal{X}$,
where $\phi^{-1} : x' \mapsto \set{ x \;|\; \phi(x) = x'}$ is the pre-image of $\phi$.
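The pushforward $[\phi\strategy](x') = \sum_{x \in \phi^{-1}(x')} \strategy(x)$ can be sketched as follows (the pure-strategy labels and probabilities are illustrative):

```python
# Sketch: the pushforward of a mixed strategy pi under a deviation phi.
# Summing pi(x) over the pre-image of each x' is equivalent to accumulating
# pi(x) at phi(x) for every x, which is what this loop does.
from collections import defaultdict

def pushforward(phi, pi):
    out = defaultdict(float)
    for x, p in pi.items():
        out[phi(x)] += p
    return dict(out)

# Toy deviation mapping the pure strategies "AA" and "AB" both to "AA".
phi = lambda x: "AA" if x in ("AA", "AB") else x
pi = {"AA": 0.2, "AB": 0.3, "BB": 0.5}
print(pushforward(phi, pi))   # {'AA': 0.5, 'BB': 0.5}
```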
The \emph{immediate regret} at information state $s$ for not employing truncated deviation $\phi_{\preceq s}$ is a difference in realization-weighted expected return under $\xi^{\phi_{\prec s}(\strategy), \sigma}_{s}$:
\begin{align*}
&\rho_{s}(\phi_{\preceq s}, \strategy; \sigma)\nonumber
= \cfIv_{s}(\phi_{\preceq s}(\strategy); \sigma)
- \cfIv_{s}(\phi_{\prec s}(\strategy); \sigma)\\
&= \probability_{\phi_{\prec s}(\strategy), \sigma}[s]
\expectation\subblock*{
G_H(\phi_{\preceq s}(\strategy); \sigma) - G_H(\strategy; \sigma) }.
\end{align*}
Intuitively, it is the advantage that $\phi_{\preceq s}(\strategy)$ has over $\strategy$ in $s$ assuming that the agent plays to $s$ according to $\phi_{\prec s}(\strategy)$.\footnote{The term ``advantage'' here is chosen deliberately as immediate regret is analogous to advantage in MDPs~\citep{baird1994advantageWrtOptimalPolicy}, with respect to a given rather than optimal policy (see, \eg/, \citet{kakade2003sample}).}
Sadly, it can be impossible to prevent information state $s$'s immediate regret with respect to $\Phi_{\preceq s}$ from growing linearly in a repeated POHP.
\begin{theorem}
\label{thm:noImmediateRegretLearningIsImpossible}
An agent with timed updates cannot generally prevent immediate regret from growing linearly in a finite-horizon repeated POHP.
\end{theorem}
\begin{proof}
Consider a two action, two information state POHP where information state $s$ transitions to $s'$ where the reward is $+1$ if the agent chooses the same action in both $s$ and $s'$, and $-1$ otherwise.
The two \emph{external} (constant) deviations, $\phi^{\to 1}$ and $\phi^{\to 2}$, that choose the same actions in both information states always achieve a value of $+1$.
At $s'$, the agent has to choose between achieving value with respect to the play of $\phi^{\to 1}$ or $\phi^{\to 2}$ in $s$.
If the agent chooses action \#1, then
$\cfIv_{s}(\phi^{\to 1}_{\prec s}(\strategy); \sigma) = +1$
but
$\cfIv_{s}(\phi^{\to 2}_{\prec s}(\strategy); \sigma) = -1$,
and \viceversa/ if they choose action \#2.
Therefore, the agent minimizes their maximum regret by always playing uniformly at random, suffering a regret of $+1$ on every round.
\end{proof}
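The arithmetic in this construction is easy to check numerically. The following sketch enumerates the two-action, two-information-state example from the proof (the encoding of actions as $0$ and $1$ is ours):

```python
# Numerical check of the impossibility construction: reward +1 iff the same
# action is played at s and s'. Uniform play earns 0 in expectation, while
# either external deviation (same action at both states) earns +1, so the
# uniform agent suffers regret 1 per round.
import itertools

def value(a_s, a_sp):
    return 1.0 if a_s == a_sp else -1.0

# Uniform random play at both information states: expected return 0.
uniform_return = sum(0.25 * value(a, b)
                     for a, b in itertools.product([0, 1], repeat=2))

# External deviations phi->1 and phi->2 play one fixed action everywhere.
dev_values = [value(a, a) for a in (0, 1)]

regret = max(dev_values) - uniform_return
print(uniform_return, regret)   # 0.0 1.0
```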
What if we assume a stronger property, \emph{perfect recall}?
Perfect recall requires that every bit of information from every action and observation is encoded in the information state, \eg/, update functions that concatenate the previous information state with the given action or observation.
Agents with perfect recall remember each of their actions and observations.
This ensures that each information state $s'$ is either the initial information state or has a single parent information state $s$, \ie/, $u(h_{< \abs{h}}) = s$ for each history $h \in I(s')$.
A perfect-recall agent can only play to reach each history in a given information state, $s$, equally, \ie/,
$\probability_{\strategy}[h] = \probability_{\strategy}[h']$ for all $h, h' \in I(s)$.
If we define
$\probability_{\strategy}[s] \propto \sum_{h \in I(s)} \probability_{\strategy}[h]$,
then perfect recall implies that $\probability_{\strategy}[s] = \probability_{\strategy}[h']$ for any history $h' \in I(s)$.
The probability of realizing $s$ simplifies to
\begin{align*}
\probability_{\strategy, \sigma}[s]
= \sum_{h \in I(s)}
\probability_{\strategy}[h] \probability_{\sigma}[h]
= \probability_{\strategy}[s]
\sum_{h \in I(s)}
\probability_{\sigma}[h].
\end{align*}
The belief about any history $h \in I(s)$ then simplifies to
\begin{align*}
\xi_{s}^{\strategy, \sigma}(h)
= \frac{
\probability_{\strategy}[h]\probability_{\sigma}[h]
}{
\probability_{\strategy}[s]
\sum_{h \in I(s)}
\probability_{\sigma}[h]
}
= \frac{
\probability_{\sigma}[h]
}{
\sum_{h \in I(s)}
\probability_{\sigma}[h]
}.
\end{align*}
The realization-weighted expected return simplifies to
\begin{align*}
\cfIv_{s}(\strategy; \sigma)
&= \probability_{\strategy}[s]
\underbrace{
\sum_{h \in I(s)}
\probability_{\sigma}[h]
\expectation\subblock*{G_h(\strategy; \sigma)}
}_{\cfIv^{\COUNTERFACTUAL}_{s}(\strategy; \sigma)}.
\end{align*}
The sum denoted $\cfIv^{\COUNTERFACTUAL}_{s}(\strategy; \sigma)$ is recognizable as the \emph{counterfactual value}~\citep{cfr} of $s$,
which does not depend on $\strategy$'s play at $s$'s predecessors.
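This independence is visible in a direct computation of the counterfactual value: only the daimon's reach probabilities and the expected returns from each history appear. A minimal sketch, with illustrative reach probabilities and returns:

```python
# Sketch: the counterfactual value of s weights each history h in I(s) only
# by the daimon's reach probability P_sigma[h], so it is invariant to the
# agent's play at s's predecessors.

def counterfactual_value(info_set, daimon_reach, expected_return):
    return sum(daimon_reach[h] * expected_return[h] for h in info_set)

v = counterfactual_value(
    ["h1", "h2"],
    daimon_reach={"h1": 0.5, "h2": 0.25},
    expected_return={"h1": 2.0, "h2": -4.0})
print(v)   # 0.0
```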
Immediate regret becomes weighted immediate \emph{counterfactual regret},
\begin{align*}
\rho_{s}(\phi_{\preceq s}, \strategy; \sigma)
= \probability_{\phi_{\prec s}(\strategy)}[s]
\subex*{
\cfIv^{\COUNTERFACTUAL}_{s}(\phi_{\preceq s}(\strategy); \sigma)
- \cfIv^{\COUNTERFACTUAL}_{s}(\strategy; \sigma)}.
\end{align*}
Since the counterfactual value function does not depend on $\strategy$'s play at $s$'s predecessors, perfect recall avoids the difficulty that leads to \cref{thm:noImmediateRegretLearningIsImpossible} and allows a reduction from minimizing immediate regret to minimizing \emph{time selection regret} in the \emph{prediction with expert advice} setting~\citep{blum2007time-selection}.
In a repeated POHP where the agent and daimon choose $\strategy^t \in \Pi$ and $\sigma^t \in \Sigma$ on each round $t$ the (time dependent) counterfactual value function
$t \mapsto \cfIv^{\COUNTERFACTUAL}_{s}(\cdot; \sigma^t)$
fills the role of the prediction-with-expert-advice reward function and the (time dependent) reach probability function
$w_{s, \phi}: t
\mapsto \probability_{\phi_{\prec s}(\strategy^t)}[s]$
fills the role of a time selection function.
The growth of cumulative immediate regret can therefore be controlled, in principle, to a sublinear rate by deploying \citet{blum2007time-selection}'s algorithm or time selection regret matching~\citep{edl2021}.
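A stripped-down sketch of time selection regret matching in the experts setting conveys the idea: each round's regret is scaled by the time selection weight before being accumulated, and the next strategy is proportional to the positive part of the cumulative weighted regrets. This is an illustrative, single-selection-function simplification, not the full algorithm of \citet{blum2007time-selection} or \citet{edl2021}.

```python
# Sketch (experts-only, one time selection function): regret matching where
# cumulative regrets are scaled by the round's time selection weight w_t.

def time_selection_regret_matching(rewards, weights):
    """rewards[t][a]: reward of action a at round t;
    weights[t]: time selection weight of round t."""
    n = len(rewards[0])
    cum = [0.0] * n          # cumulative weighted regrets
    strategies = []
    for r_t, w_t in zip(rewards, weights):
        pos = [max(c, 0.0) for c in cum]
        z = sum(pos)
        sigma = [p / z for p in pos] if z > 0 else [1.0 / n] * n
        strategies.append(sigma)
        played = sum(s * r for s, r in zip(sigma, r_t))
        for a in range(n):
            cum[a] += w_t * (r_t[a] - played)
    return strategies

strats = time_selection_regret_matching(
    rewards=[[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
    weights=[1.0, 1.0, 0.5])
print(strats[0], strats[1])   # [0.5, 0.5] then all weight on action 0
```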
\subsection{General Immediate Regret Minimization}
The algorithm that applies a no-time-selection-regret algorithm to minimize immediate regret in every active information state generalizes the \emph{extensive-form regret minimization} (\emph{EFR}) algorithm~\citep{edl2021} in that $\Phi \subseteq \Phi^{\textsc{sw}}_{\PureStratSet}$ may be any set of deviations rather than a set of behavioral deviations.
Just as \citet{edl2021} shows that EFR is hindsight rational in EFGs, we can prove that general immediate regret minimization achieves the same in POHPs, though now we can easily present this result as a consequence of observable sequential hindsight rationality.
In principle, our generalized algorithm could compete with the set of swap deviations when given this set (or the set of internal deviations) as a parameter argument; however, circular dependencies between immediate strategies at different information states prevent our algorithm from being efficiently implemented with such a deviation set.
Observable sequential hindsight rationality depends on full regret, so we relate immediate regret to full regret with two lemmas and conclude with the algorithm's regret bound.
\begin{lemma}
In a finite-horizon POHP, the realization-weighted expected return of active information state $s$ under perfect recall recursively decomposes as
\begin{align*}
\cfIv_{s}(\strategy; \sigma)
&=
\probability_{\strategy}[s] \reward_{s}(\strategy; \sigma)
+ \hspace{-2em} \sum_{
s' \in \bigcup_{a \in \mathcal{A}(s)}
\mathcal{S}_{\mathcal{A}}(s, a)
} \hspace{-2em}
\cfIv_{s'}(\strategy; \sigma)
\end{align*}
where
$\reward_{s}(\strategy; \sigma)
= \sum_{h \in I(s)}
\probability_{\sigma}[h]
\expectation\subblock*{\reward(\omega(hAB))}$,
and expectations are taken over $A \sim \strategy(s)$ and $B \sim \sigma(hA)$.
\end{lemma}
\begin{proof}
The counterfactual value decomposes as
\begin{align*}
\cfIv^{\COUNTERFACTUAL}_{s}(\strategy; \sigma)
&= \sum_{h \in I(s)}
\probability_{\sigma}[h] \expectation\subblock*{
\reward(\omega(hAB))
+ \Gamma G_{hAB}(\strategy; \sigma)
}\\
&= \reward_{s}(\strategy; \sigma)
+ \expectation\subblock{\hspace{-1.5em}
\sum_{h \in I(s), b \in \mathcal{A}(hA)} \hspace{-1.5em}
\probability_{\sigma}[hAb] G_{hAb}(\strategy; \sigma)
}\\
&= \reward_{s}(\strategy; \sigma)
+ \expectation\subblock*{
\cfIv^{\COUNTERFACTUAL}_{u_{\mathcal{A}}(s, A)}(\strategy; \sigma)},
\end{align*}
where $\Gamma \sim \gamma(h)$.
Multiplying by the reach weight,
\begin{align}
&\cfIv_{s}(\strategy; \sigma)\nonumber\\
&= \probability_{\strategy}[s] \reward_{s}(\strategy; \sigma)
+ \sum_{a \in \mathcal{A}(s)}
\probability_{\strategy}[s] \strategy(a \,|\, s)
\cfIv^{\COUNTERFACTUAL}_{u_{\mathcal{A}}(s, a)}(\strategy; \sigma)\nonumber\\
&= \probability_{\strategy}[s] \reward_{s}(\strategy; \sigma)
+ \sum_{a \in \mathcal{A}(s)}
\cfIv_{u_{\mathcal{A}}(s, a)}(\strategy; \sigma).
\label{eq:rwerDecomposition}
\shortintertext{Furthermore,}
&\cfIv_{u_{\mathcal{A}}(s, a)}(\strategy; \sigma)\nonumber\\
&=
\hspace{-1.2em} \sum_{s' \in \mathcal{S}_{\mathcal{A}}(s, a)}
\sum_{
h \in u_{\mathcal{A}}(s, a)
} \hspace{-1em}
\ind{u(h) = s'}
\probability_{\strategy, \sigma}[h]
\expectation \subblock*{
G_{h}(\strategy; \sigma)
}\nonumber\\
&=
\hspace{-1.2em} \sum_{s' \in \mathcal{S}_{\mathcal{A}}(s, a)}
\underbrace{
\sum_{h' \in I(s')}
\probability_{\strategy, \sigma}[h']
\expectation \subblock*{
G_{h'}(\strategy; \sigma)
}
}_{\cfIv_{s'}(\strategy; \sigma)}.
\label{eq:passiveToActiveValue}
\end{align}
Substituting \cref{eq:passiveToActiveValue} into \cref{eq:rwerDecomposition} completes the proof.
\end{proof}
\begin{lemma}
\label{lemma:regretDecomposition}
In a finite-horizon POHP, the full regret with respect to $\phi \in \Phi^{\textsc{sw}}_{\PureStratSet}$ under perfect recall at active information state $s$ recursively decomposes as
\begin{align*}
\rho_{s}(\phi, \strategy; \sigma)
&=
\rho_{s}(\phi_{\preceq s}, \strategy; \sigma)
+ \hspace{-2.2em} \sum_{
s' \in \bigcup_{a \in \mathcal{A}(s)}
\mathcal{S}_{\mathcal{A}}(s, a)
} \hspace{-2.2em}
\rho_{s'}(\phi, \strategy; \sigma).
\end{align*}
\end{lemma}
\begin{proof}
\begin{align*}
&\rho_{s}(\phi, \strategy; \sigma)\nonumber\\
&=
\cfIv_{s}(\phi(\strategy); \sigma)
\rlap{$\overbrace{\phantom{- \cfIv_{s}(\phi_{\preceq s}(\strategy); \sigma)+ \cfIv_{s}(\phi_{\preceq s}(\strategy); \sigma)\,}}^0$}
- \cfIv_{s}(\phi_{\preceq s}(\strategy); \sigma)
+ \underbrace{
\cfIv_{s}(\phi_{\preceq s}(\strategy); \sigma)
- \cfIv_{s}(\strategy; \sigma)
}_{\rho_{s}(\phi_{\preceq s}, \strategy; \sigma)}\\
&=
\rho_{s}(\phi_{\preceq s}, \strategy; \sigma)\\
&\quad+ \underbrace{
\probability_{\phi(\strategy)}[s] \reward_{s}(\strategy; \sigma)
- \probability_{\phi_{\preceq s}(\strategy)}[s] \reward_{s}(\strategy; \sigma)
}_{0}\\
&\quad+ \sum_{a \in \mathcal{A}(s)}
\underbrace{
\cfIv_{u_{\mathcal{A}}(s, a)}(\phi(\strategy); \sigma)
- \cfIv_{u_{\mathcal{A}}(s, a)}(\phi_{\preceq s}(\strategy); \sigma)
}_{\rho_{u_{\mathcal{A}}(s, a)}(\phi, \strategy; \sigma)}.\\
\shortintertext{Applying \cref{eq:passiveToActiveValue} to sum over active information states,}
&=
\rho_{s}(\phi_{\preceq s}, \strategy; \sigma)
+ \hspace{-2em} \sum_{
s' \in \bigcup_{a \in \mathcal{A}(s)}
\mathcal{S}_{\mathcal{A}}(s, a)
} \hspace{-1em}\underbrace{
\cfIv_{s'}(\phi(\strategy); \sigma)
- \cfIv_{s'}(\strategy; \sigma)
}_{\rho_{s'}(\phi, \strategy; \sigma)}.\qedhere
\end{align*}
\end{proof}
\begin{theorem}
If a perfect recall agent's cumulative immediate regret with respect to $\Phi \subseteq \Phi^{\textsc{sw}}_{\PureStratSet}$ at each information state $s$ in a repeated finite-horizon POHP is upper bounded by $f(T) \ge 0$, $f(T) \in \smallo{T}$ after $T$ rounds, then the agent's cumulative full regret at each $s$ is sublinear, upper bounded according to
$\sum_{t = 1}^T \rho_{s}(\phi, \strategy^t; \sigma^t)
\le \abs{\mathcal{S}_{s, \mathcal{A}}} f(T)$,
where $\mathcal{S}_{s, \mathcal{A}} = \set{ s' \in \mathcal{S}_{\mathcal{A}} \;|\; s \preceq s' }$ is the set of active information states in the sub-POHP rooted at $s$.
Such an agent is therefore observably sequentially hindsight rational with respect to $\Phi$.
\end{theorem}
\begin{proof}
Working from each terminal information state where the full and immediate regret are equal toward $s$ at the root of any given sub-POHP, we recursively bound the cumulative full regret at every information state according to \cref{lemma:regretDecomposition}.
Every active information state adds at most $f(T)$ to the cumulative full regret at $s$, and there are $\abs{\mathcal{S}_{s, \mathcal{A}}}$ active information states in $s$'s sub-POHP, so the cumulative full regret at $s$ is no more than $\abs{\mathcal{S}_{s, \mathcal{A}}} f(T)$.
\end{proof}
\section{Conclusion}
The POHP formalism may be useful in modeling continual learning problems where environments are expansive, unpredictable, and dynamic.
Good performance here demands that the agent continually learns, adapts, and re-evaluates their assumptions.
We suspect that hindsight rationality could serve as the learning objective for such problems if it could be formulated for a single agent lifetime rather than over a repeated POHP.
Our analysis of general immediate regret minimization for POHPs and the impossibility result of \cref{thm:noImmediateRegretLearningIsImpossible} brings up questions about how far this procedure can be generalized.
Regret decomposition is based on a perfect-recall, realization-weighted variant of \citet{kakade2003sample}'s performance difference lemma (Lemma 5.2.1).
\citet{ltbc2021} use this to show how the counterfactual regret minimization (CFR)~\citep{cfr} (EFR with counterfactual deviations~\citep{edl2021}) can be applied to continuing, discounted MDPs with reward uncertainty.
The POHP formalism can perhaps allow us to better understand when immediate and full regret can be minimized without perfect recall by considering \citet{lanctot2012no}'s well-formed-game conditions together with \citet{ltbc2021}'s analysis.
The POHP formalism allows agents to determine their own representation of the environment.
This opens the way to direct discussions and comparisons of representations and updating schemes.
One particular direction that is made natural by the POHP model's action--observation interface is predictive state representations (PSRs)~\citep{singh2003psr,singh2012psrSystemDynamicsMatrix}.
While PSRs were developed to model Markovian dynamical systems with at most one controller, the POHP model could facilitate an extension to multi-agent settings.
\section*{Acknowledgments}
Dustin Morrill and Michael Bowling are supported by the Alberta Machine Intelligence Institute (Amii), CIFAR, and NSERC.
Amy Greenwald is supported in part by NSF Award CMMI-1761546.
Thanks to Michael's and Csaba Szepesv\'{a}ri's research groups for feedback that helped to refine early versions of the POHP formalism.
\section{Introduction}
\label{sec:intr}
Retinal blood vessels are an important part of fundus images,
and they can aid the diagnosis of many ophthalmological diseases, such as diabetic retinopathy~\citep{wong2018guidelines}, cataract~\citep{cao2020hierarchical}, and hypertensive
retinopathy~\citep{irshad2014classification}.
Specifically, in patients with diffuse choroidal hemangioma, the retinal blood vessels dilate~\citep{scott1999diffuse}.
Vascular structures in patients with cataracts are unclear or even invisible~\citep{cao2020hierarchical}.
In addition, because retinal and cerebral blood vessels share similar
anatomical, physiological, and embryological characteristics, retinal vessels also serve as important biomarkers for some cardiovascular diseases~\citep{wong2004retinal,schmidt2018artificial}.
Accurate segmentation of blood vessels is the basic step of efficient computer-aided diagnosis (CAD).
However, manual segmentation of retinal vessels is time-consuming and relies heavily on human expertise.
Therefore, it is necessary to develop accurate and fast vessel segmentation methods for CAD.
Considering the clinical application scenarios, a good vessel segmentation model for CAD should satisfy the following two conditions.
1) High accuracy. The model needs to be capable of recognizing both thin and thick vessels, even extremely thin vessels of one-pixel width. For example, the appearance of neovascularization can be used to diagnose and grade diabetic retinopathy~\citep{wong2018guidelines}.
2) Fast processing speed~\citep{yu2012fast,villalobos2010fast}.
The model needs to have a fast processing speed to meet the demands of clinical application, as faster speed means greater throughput and higher processing efficiency.
Existing vessel segmentation methods can be divided into two categories~\citep{GUO2019BTS}: unsupervised methods and supervised methods.
Unsupervised methods rely on manually designed low-level features and rules~\citep{azzopardi2013automatic,azzopardi2015trainable,srinidhi2018visual} and therefore show poor extensibility.
Supervised methods utilize human-annotated training images, and their segmentation accuracy is usually higher than that of unsupervised methods~\citep{schmidt2018artificial}. Deep learning-based supervised methods can learn high-level features in an end-to-end manner, and they show superior performance in terms of segmentation accuracy and extensibility~\citep{jin2019dunet,threestage}.
Most deep vessel segmentation models follow the architecture of the fully convolutional network (FCN)~\citep{shelhamer2017fully}, in which the resolution of features is first down-sampled and then up-sampled to generate pixel-wise segmentation maps. However, detailed spatial information is lost in the FCN.
Furthermore, a U-Net~\citep{ronneberger2015u} model was proposed, which could utilize intermediate layers in the up-sampling path to fuse more spatial information to generate fine segmentation maps.
Although the detailed information could be utilized in the U-Net, the extra noise was also introduced. Moreover, most U-Net variant models~\citep{jin2019dunet} require multiple forward passes to generate a segmentation map for one testing image, since they split one fundus image into hundreds of small patches.
As a result, they show slow segmentation speed and the contextual information is not fully utilized.
Different from U-Net, which recovers spatial details in the decoder to achieve a high-resolution representation, in this paper we present a deep model termed detail-preserving network (DPN), which preserves a high-resolution representation throughout.
Inspired by HRNet~\citep{wang2020deep}, the DPN learns the full-resolution representation directly rather than the low-resolution representation.
In this manner, the DPN could locate the boundaries of thin vessels accurately.
To this end, on the one hand, we present the detail-preserving block (DP-Block), in which multi-scale features are fused in a cascaded manner so that more contextual information can be utilized. Moreover, the resolution of the input and output features of the DP-Block is never changed, so that detailed spatial information is preserved.
On the other hand, we stack eight DP-Blocks together to form the DPN. We note that there are no down-sampling operations among these DP-Blocks, so the DPN can learn semantic features via a large field of view and preserve detailed information simultaneously.
To validate the effectiveness of our method, we conducted experiments on the DRIVE, CHASE\_DB1, and HRF datasets. Experimental results reveal that our method shows competitive/better performance compared with other state-of-the-art methods.
Overall, our contributions are summarized as follows.
\begin{enumerate}
\item We present the detail-preserving block, which could learn the structural information and preserve the detailed information via intra-block multi-scale fusion.
\item We present the detail-preserving network, which mainly consists of eight serially connected DP-Blocks, and it maintains high-resolution representations during the whole process. As a result, the DPN could learn both semantic features and preserve the detailed information simultaneously.
\item We conducted experiments over three public datasets. Experimental results reveal that our method achieves comparable or even superior performance over other methods in terms of segmentation accuracy, segmentation speed, and model size.
\end{enumerate}
The rest of this paper is organized as follows. Related works about vessel segmentation are introduced in Section~\ref{sec:related}. Our method is described in Section~\ref{sec:dpn}. Experimental results are analyzed in Section~\ref{sec:exp}. Conclusions are drawn in Section~\ref{sec:con}.
\section{Related Works}
\label{sec:related}
Retinal vessel segmentation is a pixel-wise binary classification problem, and the objective is to locate each vessel pixel accurately for further processing. According to whether annotations are used, existing methods could be divided into two categories: unsupervised methods and supervised methods.
\subsection{Unsupervised Methods}
Unsupervised methods usually utilize human-designed low-level features, such as edge, line, and color. Manually annotated information is not utilized.
Unsupervised methods can be roughly divided into matched filter-based methods~\citep{wang2013retinal,azzopardi2015trainable}, vessel tracking-based methods~\citep{yin2012retinal}, threshold-based methods~\citep{li2006multiscale,saleh2011automated}, and morphology-based methods~\citep{garg2007unsupervised,wang2019retinal}.
Wang et al.~\citep{wang2013retinal} proposed a multi-stage method for vessel segmentation. In their method, matched filtering was first applied for vessel enhancement, and then vessels were located via a multi-scale hierarchical decomposition. Yin et al.~\citep{yin2012retinal} proposed a vessel tracking method, in which local grey-level information was used to select vessel edge points, and a Bayesian method was then used to determine the direction of vessels. Garg et al.~\citep{garg2007unsupervised} proposed a curvature-based method, in which vessel lines were first extracted using curvature information and a region-growing method was then used to generate the whole vessel tree. Li et al.~\citep{li2006multiscale} proposed an adaptive threshold method for vessel segmentation, which could detect both large and small vessels. Christodoulidis et al.~\citep{christodoulidis2016multi} utilized a line detector and tensor voting for vessel segmentation, with which thin vessels were well detected.
A major limitation of unsupervised methods is that the features and rules are designed by humans. It is hard to design satisfactory features that work well on large-scale fundus images, so this kind of method may show poor generalization ability.
\subsection{Supervised Methods}
In contrast to unsupervised methods, supervised methods need annotation information to build vessel segmentation models. Before deep learning was applied to vessel segmentation, supervised methods usually consisted of two procedures: feature extraction and classification. In the first procedure, features were extracted by human-designed rules, just as in unsupervised methods. In the second procedure, supervised classifiers were employed to classify the extracted features into vessels or non-vessels. As deep learning methods unify the feature extraction and classification procedures, they can extract more discriminative features.
Deep learning-based methods could be roughly divided into classification-based methods and segmentation-based methods~\citep{srinidhi2017recent,mookiah2020review}. For classification-based methods, the category for each pixel is determined by its surrounding small image patch~\citep{Liskowski2016Segmenting,wang2019blood}.
This kind of method does not make full use of contextual information.
For segmentation-based methods, existing methods follow the architecture of the FCN, where the resolution of feature maps is first down-sampled to encode structural information and then up-sampled to generate pixel-wise segmentation maps.
Although successive down-sampling operations reduce the model's computational complexity and increase its receptive field, they inevitably lose detailed information.
As a result, this kind of method shows poor performance in the segmentation of thin/tiny blood vessels. To alleviate this problem, multi-scale fusion methods and graph models were adopted. For instance,
Maninis et al.~\citep{driu} proposed an FCN for vessel segmentation, adopting multi-scale feature fusion to generate fine vessel maps. Fu et al.~\citep{Fu2016DeepVessel} adopted a holistically-nested edge detection model~\citep{xie2017holistically} to generate coarse segmentation maps, and then a conditional random field was adopted to model the relationships among long-range pixels and refine the segmentation maps.
Besides the above methods, Ronneberger et al. proposed a u-shaped network, called U-Net, to preserve spatial information~\citep{ronneberger2015u}. Similar to the FCN, the feature maps are first down-sampled to a low resolution and then up-sampled step by step.
In each step, high-resolution intermediate features from the encoder are utilized.
Several methods based on U-Net have been proposed for vessel segmentation. For instance, Jin et al.~\citep{jin2019dunet} proposed a DUNet for vessel segmentation. They used deformable convolution rather than grid convolution in U-Net to capture the shape of vessels. Wu et al.~\citep{wu2018} designed a two-branch network, where each branch consists of two U-Nets. The output of their method was the average of the predictions of these two branches.
In addition, different from~\citep{driu} and~\citep{Fu2016DeepVessel}, which used entire images as training samples, both~\citep{jin2019dunet} and~\citep{wu2018} used overlapped image patches of size 48$\times$48 as training samples, and a re-composition procedure is required to complete a segmentation map during testing.
Hence, they suffer from high computational complexity.
Despite their success, the problem of losing spatial information in the down-sampling phase has not been fully addressed. Meanwhile, considering both computational complexity and segmentation accuracy, a fast and accurate vessel segmentation model is still lacking.
\section{Our Method}
\label{sec:dpn}
In this section, we describe our method in detail, including the architecture of the proposed detail-preserving network,
the detail-preserving block, and the loss function.
\subsection{Detail-Preserving Network}
A good vessel segmentation model should segment both thick and thin vessels, which requires the model to learn structural semantic information and preserve detailed spatial information simultaneously.
Structural information is beneficial for locating thick vessels and requires the model to have a large field of view, while detailed spatial information is important for locating vessel boundaries accurately, especially for thin vessels. However, it is easy to lose detailed information while learning structural information.
For example, the structural information of U-Net~\citep{ronneberger2015u} is learned by successive down-sampling operations, and the resolution of feature maps is decreased by a factor of 8 or even more (as can be seen in Fig.~\ref{fig:unet_arch}). Such low resolution implies that the spatial information of thin vessels is lost.
U-Net utilizes intermediate features of the encoder to recover the spatial information. However, intermediate feature maps of the encoder may have noise (non-vessel pixels are highlighted) due to a small field of view.
\begin{figure}
\centering
\subfigure[U-Net]{
\includegraphics[width=0.7\textwidth]{unet_arch}
\label{fig:unet_arch}
}
\subfigure[Our Method]{
\includegraphics[width=0.18\textwidth]{dpn_arch}
}
\caption{(a) The architecture of U-Net~\citep{ronneberger2015u}. The resolution of feature maps is first decreased in the encoder, and then up-scaled in the decoder (H and W denote the height and width of feature maps). (b) The architecture of the proposed DPN, in which high-resolution representations are learned.}
\label{fig:unet_vs_dpn}
\end{figure}
Our study is motivated by the question of whether it is possible to preserve detailed information while the network retains a large field of view.
To this end, we present a high-resolution representation network, called the detail-preserving network, for vessel segmentation.
The architecture of our model is visualized in Fig.~\ref{fig:net_overview}.
\begin{figure}[!t]
\centering
\subfigure{
\includegraphics[width=0.90\textwidth]{dpn_overview}
}
\caption{Overview of the proposed detail-preserving network. DPN consists of one convolutional layer and eight DP-Blocks, and it maintains high/full resolution representations during the whole process.
Meanwhile, we add three auxiliary losses to pass extra gradient signals.}
\label{fig:net_overview}
\end{figure}
We can observe from Fig.~\ref{fig:net_overview} that DPN mainly consists of three parts: the front convolution operation, eight detail-preserving blocks, and four loss functions (including three supervision losses). Compared with other vessel segmentation models, the DPN has the following characteristics.
1) Different from U-Net, there are no down-sampling operations among the DP-Blocks, which implies that the resolution of features stays the same across these blocks.
In other words, the DPN maintains a full-resolution representation during the whole process (from input to output), thereby preserving detailed spatial information.
2) For the DP-Block, the receptive field of an output neuron can be as large as four times that of an input neuron, while detailed information is also preserved.
Therefore, the DPN can obtain a large field of view by stacking multiple DP-Blocks. Moreover, a large field of view means that the DPN can learn semantic information instead of merely local information.
The architecture of the DP-Block will be described in the next section.
3) Different from U-Net variants that utilize VGGNet or ResNet as the backbone, which incurs a large number of parameters, the DPN has fewer than 120k parameters in total.
4) The input of the DPN is the entire image, so it can integrate more contextual information than patch-level segmentation models. Meanwhile, our method needs only one forward pass to generate a complete segmentation map, so its inference speed is higher than that of patch-level models.
\textit{Relationship with HRNet}~\citep{wang2020deep}.
Both DPN and HRNet learn a high-resolution representation, but there are some differences. 1) The proposed DPN maintains only one resolution, i.e., the high-resolution stream (as can be seen in Fig.~\ref{fig:unet_vs_dpn}), and
multi-scale feature fusion is employed within DP-Blocks,
while HRNet maintains multiple streams with different resolutions and multi-scale fusion is employed among these streams. 2) The method for multi-scale feature fusion is different. The proposed DPN fuses low-resolution feature maps with high-resolution feature maps in a step-by-step manner, while in HRNet the representations with different resolutions are concatenated directly.
\subsection{Detail-Preserving Block}
The DP-Block, as the key component of the DPN, can learn structural semantic information and preserve detailed spatial information at the same time.
Overview of the DP-Block is visualized in Fig.~\ref{fig:dpblock}.
We can observe that the input feature of the DP-Block is fed into three branches, each processed at a different scale.
The output feature of the DP-Block is obtained by fusing the features of the three scales. The computing procedure of the DP-Block is as follows.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{dpblock}
\caption{Overview of the proposed detail-preserving block, where C0, C1, and C2 denote the number of convolutional filters for each branch.}
\label{fig:dpblock}
\end{figure}
For the first branch, a convolution with a 3$\times$3 kernel is adopted to learn detailed information.
For the second branch, a pooling operation with stride 2 is adopted, so the resolution of the feature maps is down-sampled by a factor of 2; a convolution with a 3$\times$3 kernel is then applied.
The third branch is used to enlarge the field of view and learn structural information.
In this branch, a pooling operation with stride 4 is first adopted; as a result, the resolution of the feature maps is down-sampled by a factor of 4 and the receptive field is increased by a factor of 4 as well. A convolution with a 3$\times$3 kernel is then adopted to extract features.
The extracted features of the branches are fused in a cascaded manner. Specifically, the features learned by the third branch are first up-sampled 2$\times$ and fused into the second branch, and the output of the second branch is further fused into the first branch. Here, a concatenation operation is used for feature fusion. The whole procedure is summarized in Alg.~\ref{alg:algorithm1}.
We note that the resolution of the output feature of the DP-Block is the same as that of the input feature, so the DP-Block not only preserves detailed information but also learns multi-scale features.
\begin{algorithm}
\caption{Description of the DP-Block.}
\label{alg:algorithm1}
\KwIn{Feature map $X$}
\KwOut{Feature map $Y$}
\BlankLine
$x_1 = \delta(k_1 * X)$;
$x_2 = \delta(k_2 * maxpool(X,2))$;
$x_3 = \delta(k_3 * maxpool(X,4))$;
$x_4 = \delta(k_4 * concat(x_2, deconv(x_3,2)))$;
$x_5 = \delta(k_5 * concat(x_1, deconv(x_4,2)))$;
$Y = \delta(k_6 * concat(x_5, X))$;
In the above formulas, $\delta$ denotes the ReLU function, $*$ denotes the convolution operation, $k_i$ denotes convolutional filters with kernel size 3$\times$3, $maxpool$ denotes max pooling, $deconv$ denotes a deconvolution operation that up-scales the feature map, and $concat$ denotes concatenation along the channel dimension.
\end{algorithm}
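The computation in Alg.~\ref{alg:algorithm1} can be sanity-checked with a small sketch that tracks only the spatial size of each branch; `same' padding for the 3$\times$3 convolutions and an input size divisible by 4 are assumptions made here for illustration:

```python
# Shape bookkeeping for one DP-Block: only (height, width) is tracked.
# 'same' padding is assumed, so a 3x3 convolution keeps the spatial size.

def maxpool(hw, stride):
    h, w = hw
    return (h // stride, w // stride)

def deconv(hw, factor):
    h, w = hw
    return (h * factor, w * factor)

def conv3x3(hw):
    return hw  # 'same' padding assumed

def dp_block_shape(hw):
    x1 = conv3x3(hw)                  # branch 1: full resolution
    x2 = conv3x3(maxpool(hw, 2))      # branch 2: 1/2 resolution
    x3 = conv3x3(maxpool(hw, 4))      # branch 3: 1/4 resolution
    x4 = conv3x3(deconv(x3, 2))       # fuse branch 3 into branch 2
    assert x4 == x2                   # concat requires matching sizes
    x5 = conv3x3(deconv(x4, 2))       # fuse the result into branch 1
    assert x5 == x1
    return conv3x3(x5)                # concat with input X, final conv

print(dp_block_shape((512, 512)))     # → (512, 512): resolution preserved
```

The two internal assertions make explicit why the cascaded fusion works: each 2$\times$ up-sampling brings a coarser branch back to the resolution of the next finer one before concatenation.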
\textit{Number of parameters}. In our experiments, the numbers of convolutional filters C0, C1, and C2 for the branches of the DP-Block were set to 16, 8, and 8, respectively. In the DPN, the output feature of the first convolution operation has dimension H$\times$W$\times$32, so the number of parameters of the first DP-Block is 21,704.
From the second DP-Block to the last, the input feature has dimension H$\times$W$\times$16, so the number of parameters of each of these DP-Blocks is only 13,896. Hence, the total number of parameters of the DPN is less than 120k.
Experimental results show that the DPN could be effectively learned from scratch with limited training samples.
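The 13,896 figure for the later DP-Blocks can be reproduced by counting 3$\times$3 convolution weights and biases, under our own assumptions (not stated explicitly above) that the up-sampling is parameter-free and that $k_4$ outputs C1 channels while $k_5$ and $k_6$ output 16 channels; the first block's 21,704 presumably includes additional parameters (e.g., learned deconvolution filters), so this sketch covers only the second to eighth blocks:

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of one k x k convolution layer."""
    return k * k * c_in * c_out + c_out

def dp_block_params(c_in, c0=16, c1=8, c2=8, c_out=16):
    total = conv_params(c_in, c0)              # k1 (branch 1)
    total += conv_params(c_in, c1)             # k2 (branch 2)
    total += conv_params(c_in, c2)             # k3 (branch 3)
    total += conv_params(c1 + c2, c1)          # k4: concat(x2, up(x3))
    total += conv_params(c0 + c1, c_out)       # k5: concat(x1, up(x4))
    total += conv_params(c_out + c_in, c_out)  # k6: concat(x5, X)
    return total

print(dp_block_params(16))  # → 13896 (second to eighth DP-Blocks)
```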
\textit{Relationship with Inception Module}. Different from the inception module~\citep{googlenet}, which uses parallel convolutions with different kernel sizes to learn multi-scale features, our DP-Block applies down-sampling first, so that the receptive field is further enlarged. The receptive field of each output neuron is four times that of the input neuron; as a result, the receptive field grows exponentially when stacking multiple DP-Blocks.
Furthermore, rather than the parallel processing branches of the inception module, the features of different branches are fused in a cascaded manner in the DP-Block to better learn multi-scale features.
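Under the simplifying assumption that each stacked DP-Block multiplies the receptive field by a clean factor of 4, the exponential growth reads:

```python
def receptive_field_after(n_blocks, base=1):
    """Receptive field after stacking n DP-Blocks, each enlarging it 4x."""
    rf = base
    for _ in range(n_blocks):
        rf *= 4
    return rf

print([receptive_field_after(n) for n in (1, 2, 3, 8)])  # → [4, 16, 64, 65536]
```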
\subsection{Loss Function}
Blood vessels account for only a small proportion of the entire image: specifically, vessels occupy 8.69\%/6.93\%/7.71\% of the pixels on the DRIVE/CHASE\_DB1/HRF datasets, respectively. Hence, there exists a class-imbalance problem in vessel segmentation. To address this problem, we adopted the class-balanced cross-entropy loss~\citep{xie2017holistically}, which uses a weight factor to balance vessel pixels and non-vessel pixels.
The class-balanced cross-entropy loss is defined as follows.
\begin{equation}
L(p,y|\theta) = -\beta\sum\limits_{y_j = 1}\log{p_j} -(1-\beta)\sum\limits_{y_j=0 }\log{(1-p_j)}
\label{equ:bceloss}
\end{equation}
where $p$ is the probability map obtained by a sigmoid operation,
$p_j$ denotes the probability that the $j^{th}$ pixel belongs to a vessel,
$y$ denotes the ground truth, and $\theta$ denotes the model parameters.
Rather than using a fixed value, the weight factor $\beta$ is calculated at each iteration based on the distribution of vessel pixels and non-vessel pixels.
The weight factor $\beta$ is defined as below.
\begin{equation}
\beta = \frac{N_-}{N_+ + N_-}
\label{equ:bceloss_beta}
\end{equation}
where $N_+$ denotes the number of vessel pixels and $N_-$ the number of non-vessel pixels. Since $N_- > N_+$, the weight for vessel pixels is larger than the weight for non-vessel pixels, so the model focuses more on vessel pixels than on non-vessel pixels.
Besides the segmentation loss after the last layer of the DPN, we add three auxiliary losses to intermediate layers of the DPN to pass extra gradient signals and alleviate the vanishing-gradient problem, as done in DSN~\citep{Lee2015DSN} and GoogLeNet~\citep{googlenet}.
As can be seen in Fig.~\ref{fig:net_overview}, the first auxiliary loss is after DP-Block2, the second auxiliary loss is after the DP-Block4, and the last one is after the DP-Block6. The segmentation loss is connected after the DP-Block8.
Taking the first auxiliary loss as an example, we first adopted a convolution operation with one 1$\times$1 filter to the output features of DP-Block2, then a feature map with one channel was obtained. At last, this feature map was fed into the class balanced cross-entropy loss function.
Hence, the overall objective function of DPN is the sum of three auxiliary losses and one segmentation loss, and it can be formulated as follows.
\begin{equation}
L_{all}(x,y|\theta) = \sum_{i=1}^4{L(p^i(x),y|\theta)}+\frac{\lambda}{2}{||\theta||^2}
\label{equ:overall_loss}
\end{equation}
where $p^i$ denotes the probability map of the $i^{th}$ loss function, and $\lambda$ denotes the weight decay coefficient.
In conclusion, we aim to minimize the above objective function during training.
In the test phase, the probability map corresponding to the final segmentation loss is taken as the segmentation result of the DPN, and the probability maps of the auxiliary losses are ignored.
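The class-balanced loss, its weight factor $\beta$, and the overall objective can be sketched in pure Python on flattened maps; this is an illustrative implementation, not the Caffe code used in the experiments:

```python
import math

def class_balanced_bce(probs, labels):
    """Class-balanced cross-entropy on flattened maps.

    probs: predicted vessel probabilities in (0, 1); labels: 0/1 ground truth.
    """
    n_pos = sum(labels)              # N+: vessel pixels
    n_neg = len(labels) - n_pos      # N-: non-vessel pixels
    beta = n_neg / (n_pos + n_neg)   # larger weight on the rare vessel class
    loss = 0.0
    for p, y in zip(probs, labels):
        if y == 1:
            loss -= beta * math.log(p)
        else:
            loss -= (1 - beta) * math.log(1 - p)
    return loss

def total_loss(prob_maps, labels, weight_sq_norm, lam=5e-4):
    """Sum of the three auxiliary losses and the final segmentation loss,
    plus the L2 weight-decay term (lambda / 2) * ||theta||^2."""
    return (sum(class_balanced_bce(p, labels) for p in prob_maps)
            + 0.5 * lam * weight_sq_norm)
```

Because $\beta$ is recomputed from the label counts of each batch, the vessel/non-vessel weighting adapts to every iteration, as described above.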
\section{Experiments}
\label{sec:exp}
\subsection{Materials}
The performance of our method was evaluated on three public datasets: DRIVE~\citep{drive}, CHASE\_DB1~\citep{chase}, and HRF~\citep{budai2013robust}.
The DRIVE (Digital Retinal Images for Vessel Extraction) dataset contains 40 color fundus images captured with a 45$^{\circ}$ FOV (Field of View).
Each image has the same resolution, which is 565$\times$584 (width$\times$height).
The dataset is partitioned into the training set and the test set officially, and each set contains 20 images.
For the test set, two groups of annotations are provided. We used the annotation of the first group as ground-truth to evaluate our model, just as other methods did. In addition, the FOV masks for calculating evaluation metrics are also provided.
The CHASE\_DB1 dataset contains 28 fundus images (999$\times$960) captured with a 30$^{\circ}$ FOV.
As the split of the training set and the test set is not provided, for a fair comparison with other methods we conducted two sets of experiments.
For the first set, we adopted a 20/8 (training/test) partition, where the first 20 images were selected for training and the remaining 8 for testing. For the second set, we adopted a 14/14 (training/test) partition.
The HRF (High Resolution Fundus) dataset contains 45 high-resolution fundus images with resolution 2336$\times$3504. Among the 45 images, 15 show diabetic retinopathy, 15 show glaucoma, and the remaining 15 are healthy. As no partition into training and test sets is available, we used the first 15 images as the training set and the remaining 30 images for evaluation. Besides, the FOV masks are provided in HRF.
As the FOV masks are not present in the CHASE\_DB1 dataset, we created the masks manually. The FOV masks on these three datasets are presented in Fig.~\ref{fig:fov_mask}.
\begin{figure}
\centering
\subfigure{
\includegraphics[width=0.2\textwidth]{img_drive}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{img_chase}
}
\subfigure{
\includegraphics[width=0.28\textwidth]{img_hrf}
}\\
\subfigure{
\includegraphics[width=0.2\textwidth]{mask_drive}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{mask_chase}
}
\subfigure{
\includegraphics[width=0.28\textwidth]{mask_hrf}
}
\caption{Fundus images (the first row) and the corresponding FOV masks (the second row) from DRIVE, CHASE\_DB1 and HRF datasets, from left to right.}
\label{fig:fov_mask}
\end{figure}
\subsection{Image Preparation}
We use raw color fundus images to train our model; image enhancement methods like CLAHE are not adopted. Therefore, a time-consuming pre-processing procedure can be avoided at inference time, and the segmentation speed of our model is further improved.
Considering that there are limited annotated training samples on DRIVE, CHASE\_DB1, and HRF, and that no pre-trained weights are available, several transformations were adopted to augment the training set in order to avoid over-fitting and improve segmentation accuracy, including flipping (horizontal and vertical) and rotation (22$^{\circ}$, 45$^{\circ}$, 90$^{\circ}$, 135$^{\circ}$, 180$^{\circ}$, 225$^{\circ}$, 270$^{\circ}$, 315$^{\circ}$). Rotated images can be seen in Fig.~\ref{fig:rot}.
As a result, the training set was augmented offline by a factor of 10 (ten transformed copies per original image),
giving 220/220/165 training images in total on the DRIVE/CHASE\_DB1/HRF datasets, respectively.
Moreover, the training image was randomly mirrored during training for each iteration.
On the DRIVE and CHASE\_DB1 datasets, it is possible to feed an entire image into GPU memory.
However, it is hard to fit an image from HRF into GPU memory, since its resolution is as high as 2336$\times$3504. To deal with this problem, we down-scale the HRF images to 600$\times$900 so that they fit into GPU memory.
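The augmentation arithmetic above can be made explicit: each image keeps its original plus two flipped and eight rotated copies, which yields the reported totals.

```python
FLIPS = ("horizontal", "vertical")
ROTATION_ANGLES = (22, 45, 90, 135, 180, 225, 270, 315)

def offline_training_set_size(n_originals):
    # each original image keeps itself plus 10 transformed copies
    per_image = 1 + len(FLIPS) + len(ROTATION_ANGLES)
    return n_originals * per_image

print(offline_training_set_size(20))  # → 220 (DRIVE and CHASE_DB1)
print(offline_training_set_size(15))  # → 165 (HRF)
```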
\begin{figure}
\centering
\subfigure[0$^{\circ}$]{
\includegraphics[width=0.12\textwidth]{img_rot0}
}
\subfigure[22$^{\circ}$]{
\includegraphics[width=0.12\textwidth]{img_rot22}
}
\subfigure[45$^{\circ}$]{
\includegraphics[width=0.12\textwidth]{img_rot45}
}
\subfigure[90$^{\circ}$]{
\includegraphics[width=0.12\textwidth]{img_rot90}
}
\subfigure[135$^{\circ}$]{
\includegraphics[width=0.12\textwidth]{img_rot135}
}
\subfigure[180$^{\circ}$]{
\includegraphics[width=0.12\textwidth]{img_rot180}
}
\caption{Example of training images after rotating different angles.}
\label{fig:rot}
\end{figure}
\subsection{Training Details}
Our model was implemented based on an open-source deep learning framework \textit{Caffe}~\citep{caffe}, and it ran on a workstation equipped with one NVIDIA RTX 2080ti GPU.
We initialized the weights of our model using Xavier initialization~\citep{glorot2010understanding}. The learning rate was initialized to 1e-3, and we trained our model for 100k/100k/70k iterations with ADAM~\citep{adam} (batch size 1) and weight decay 0.0005 on the DRIVE/CHASE\_DB1/HRF datasets, respectively.
To reduce computational complexity, each training image was cropped into 512$\times$512/632$\times$632/588$\times$588 patches randomly during training on the DRIVE/CHASE\_DB1/HRF datasets, respectively.
The crop operation was performed via the data layer of~\textit{Caffe}. When testing, the entire fundus image is fed into the network without cropping, so our model generates a segmentation map with only one forward pass.
\subsection{Evaluation Metrics}
For a fair comparison with other methods~\citep{jin2019dunet,wang2019dual,driu,GUO2019BTS}, we adopt five pixel-wise evaluation metrics: Sensitivity (Se), Specificity (Sp), Accuracy (Acc), the Area Under the receiver operating characteristic Curve (AUC), and F1-score (F1). Besides these pixel-wise metrics, we also use the Structural Similarity Index Measure (SSIM)~\citep{wang2004image} and the Peak Signal-to-Noise Ratio (PSNR) to evaluate the segmentation maps.
They are defined as follows.
\begin{align}
Se = \frac{TP}{TP+FN}
\end{align}
\begin{align}
Sp=\frac{TN}{TN+FP}
\end{align}
\begin{align}
Acc=\frac{TP+TN}{TP+FN+TN+FP}
\end{align}
\begin{align}
F1=\frac{2\times Pr\times Se}{Pr+Se}
\end{align}
where $Pr = \frac{TP}{TP+FP}$; true positive (TP) denotes the number of vessel pixels classified correctly, and true negative (TN) denotes the number of non-vessel pixels classified correctly. Similarly, false positive (FP) denotes the number of non-vessel pixels misclassified as vessels, and false negative (FN) denotes the number of vessel pixels misclassified as non-vessels.
To calculate Se, Sp, and Acc, we select the threshold corresponding to the optimal operating point of the receiver operating characteristic (ROC) curve to generate binary segmentation maps from the probability map.
We also note that TP, FN, FP, and TN are counted pixel by pixel, and only pixels inside the FOV mask are counted, not the whole fundus image.
The ROC curve is obtained by plotting Se versus (1$-$Sp) while varying the threshold. The AUC evaluates the segmentation probability maps rather than the binary maps, and is therefore more comprehensive.
The AUC ranges from 0 to 1, and the AUC of a perfect segmentation model is 1.
At last, we also report the segmentation speed of our model in fps (frames per second). The segmentation time $t$ for each image is measured from reading the raw test image from disk to writing the segmentation map back to disk; then $fps = 1.0 / t$.
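The pixel-wise metrics above, restricted to pixels inside the FOV mask, amount to the following counting procedure (a minimal sketch on flattened 0/1 maps, assumed here for illustration):

```python
def vessel_metrics(pred, truth, fov):
    """Se, Sp, Acc, and F1 counted only over pixels inside the FOV mask.

    pred, truth, fov: flattened 0/1 sequences of equal length.
    """
    tp = fp = tn = fn = 0
    for p, t, m in zip(pred, truth, fov):
        if not m:
            continue  # pixels outside the FOV are ignored
        if p and t:
            tp += 1
        elif p and not t:
            fp += 1
        elif t:
            fn += 1
        else:
            tn += 1
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    pr = tp / (tp + fp)
    f1 = 2 * pr * se / (pr + se)
    return se, sp, acc, f1
```

In practice, `pred` would be the thresholded probability map and `truth` the manual annotation, both flattened over the image.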
\subsection{Results and Analysis}
\subsubsection{Compare with Existing Methods}
We compared our method with several state-of-the-art deep vessel segmentation methods on three public datasets in terms of segmentation performance, segmentation speed, and the number of parameters.
Comparison results are summarized in Table~\ref{table:drive_vel_result}, Table~\ref{table:chase_vel_result} and Table~\ref{table:hrf_vel_result}.
\begin{table}
\setlength{\tabcolsep}{1.5pt}
\tiny{
\begin{center}
\caption{Comparison results on the DRIVE dataset (For each metric, the best results are shown in bold.)}
\begin{threeparttable}
\begin{tabular}{ccccccccccc}
\hline
Method &One Forward Pass? &Se & Sp &Acc &AUC &F1 &SSIM &PSNR &fps &Params(M)\\
\hline
Liskowski et al.~\citep{Liskowski2016Segmenting} &No
&0.7811 &0.9807 &0.9535 &0.9790 &N.A &N.A &N.A &N.A &N.A\\
FCN~\citep{OLIVEIRA2018229}&No &\textbf{0.8039} &0.9804 &\tgr{0.9576} &\textbf{0.9821} &N.A &N.A &N.A &\tbl{0.5} &\tgr{0.2}\\
U-Net~\citep{jin2019dunet} &No &0.7849 &0.9802 &0.9554 &0.9761 &0.8175 &N.A &N.A &0.32 &3.4\\
DUNet~\citep{jin2019dunet} &No &0.7963 &0.9800 &0.9566 &0.9802 &0.8237 &N.A &N.A &0.07 &0.9\\
DEU-Net~\citep{wang2019dual}&No &0.7940 &\tbl{0.9816}&0.9567 &0.9772 &\tgr{0.8270}&N.A &N.A &0.15 &N.A \\
MS-NFN~\citep{wu2018} &No &0.7844 &\tgr{0.9819}&0.9567 &0.9807 &N.A &N.A &N.A &0.1 &\tbl{0.4} \\
Patch BTS-DSN~\citep{GUO2019BTS}&No &0.7891 &0.9804 &0.9561 &0.9806 &\tbl{0.8249} &0.5159 &13.5640 &N.A &7.8\\
Three-stage FCN~\citep{threestage}&No&0.7631 &\textbf{0.9820} &0.9538 &0.9750 &N.A &N.A &N.A &N.A &20.4\\
Vessel-Net~\citep{wu2019vessel}&No &\tgr{0.8038} &0.9802 &\textbf{0.9578} &\textbf{0.9821} &N.A &N.A &N.A &N.A &1.7\\
DRIU~\citep{driu} &Yes &0.7855 &0.9799 &0.9552 &0.9793 &0.8220 &0.5923 &13.6364 &N.A &7.8\\
Image BTS-DSN~\citep{GUO2019BTS} &Yes&0.7800 &0.9806 &0.9551 &0.9796 &0.8208 &\textbf{0.5941} &\textbf{14.2633} &N.A &7.8 \\
\hline
Our Method &Yes &0.7934 &0.9810 &0.9571
&0.9816 &\textbf{0.8289} &0.5500 &13.7672 &\textbf{11.8} &\textbf{0.1}\\
\hline
\end{tabular}
\label{table:drive_vel_result}
\begin{tablenotes}
\item[1] N.A : Not Available
\end{tablenotes}
\end{threeparttable}
\end{center}
}
\end{table}
\begin{table*}
\setlength{\tabcolsep}{1.5pt}
\tiny{
\begin{center}
\caption{Comparison results on the CHASE\_DB1 dataset (For each metric, the best results are shown in bold.)}
\begin{threeparttable}
\begin{tabular}{ccccccccccc}
\hline
Method&One Forward Pass? &Se & Sp &Acc &AUC &F1 &SSIM &PSNR &fps &Split of dataset\\
\hline
MS-NFN~\citep{wu2018} &No &0.7538 &\textbf{0.9847} &0.9637 &0.9825 &N.A &N.A &N.A &$<$0.1 &20/8 (train/test)\\
Three-stage FCN~\citep{threestage}&No &0.7641 &0.9806 &0.9607 &0.9776 &N.A &N.A &N.A &N.A &20/8 (train/test)\\
Vessel-Net~\citep{wu2019vessel} &No &\textbf{0.8132} &0.9814 &\textbf{0.9661} &\textbf{0.9860} &N.A &N.A &N.A &N.A &20/8 (train/test)\\
Xu et al.~\citep{xu2020retinal} &No &N.A &N.A &0.9650 &0.9856 &N.A &N.A &N.A &N.A &20/8 (train/test)\\
DEU-Net~\citep{wang2019dual}&No &\tgr{0.8074} &\tbl{0.9821} &\textbf{0.9661} &0.9812 &\tgr{0.8037} &N.A &N.A &0.08 &20/8 (train/test)\\
BTS-DSN~\citep{GUO2019BTS} &Yes &\tbl{0.7888} &0.9801 &0.9627 &\tbl{0.9840} &\tbl{0.7983} &0.6052 &14.2717 &N.A &20/8 (train/test)\\
\hline
Our Method &Yes &0.7839 &0.9842 &0.9660 &\textbf{0.9860} &\textbf{0.8124}
&\textbf{0.6602} &\textbf{14.4583} &\textbf{5.6} &20/8 (train/test)\\
\hline
\hline
U-Net~\citep{jin2019dunet} &No &0.8355 &0.9698 &0.9578 &0.9784 &0.7792 &N.A &N.A &0.10 &14/14 (train/test)\\
DUNet~\citep{jin2019dunet} &No &0.8155 &0.9752 &0.9610 &0.9804 &0.7883 &N.A &N.A &0.02 &14/14 (train/test)\\
\hline
Our Method &Yes &0.7645 &\textbf{0.9846} &\textbf{0.9650} &\textbf{0.9840} &\textbf{0.8021} &0.6985 &14.7485 &\textbf{5.6} &14/14 (train/test) \\
\hline
\end{tabular}
\label{table:chase_vel_result}
\begin{tablenotes}
\item[1] N.A : Not Available
\end{tablenotes}
\end{threeparttable}
\end{center}
}
\end{table*}
\begin{table}[!bt]
\begin{center}
\caption{Comparison results on the HRF dataset (For each metric, the best results are shown in bold.)}
\begin{threeparttable}
\begin{tabular}{ccccccccc}
\hline
Method &Se & Sp &Acc &AUC &F1 &SSIM &PSNR\\
\hline
Orlando et al.~\citep{orlando2016discriminatively} &0.7874 &0.9584 &N.A &N.A &0.7158&N.A&N.A\\
Zhao et al.~\citep{zhao2017automatic} &0.7490 &0.9420 &0.9410 &\textbf{0.9710} &N.A &N.A&N.A\\
Yan et al.~\citep{yan2018joint} &0.7881 &0.9592 &0.9437 &N.A &N.A &N.A &N.A \\
\hline
Our Method &\textbf{0.7926} &\textbf{0.9764} &\textbf{0.9591} &0.9697 &\textbf{0.7835} &0.3062 &13.0284\\
\hline
\end{tabular}
\label{table:hrf_vel_result}
\begin{tablenotes}
\item[1] N.A : Not Available
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
\textit{Segmentation performance}.
On the DRIVE dataset, as we can see from Table~\ref{table:drive_vel_result},
our method achieves the highest F1-score among all compared methods, and its AUC is higher than that of nine of them. To be specific,
compared with DRIU~\citep{driu} and BTS-DSN~\citep{GUO2019BTS}, which also need only one forward pass to generate the segmentation map during testing, our method achieves much higher Se, Acc, AUC, and F1; in particular, the Acc and AUC of our method are about 0.2\% higher than those of DRIU and BTS-DSN.
Besides, compared with the other eight methods, which need multiple forward passes to generate the segmentation map of a single fundus image during testing, our method achieves higher Se, Acc, AUC, and F1 than six of them.
On the CHASE\_DB1 dataset, we compare our method with eight existing methods.
Our method achieves the highest AUC and F1 compared with other state-of-the-art methods, as can be seen in Table~\ref{table:chase_vel_result}. Besides, compared with U-Net and its variant models, namely DUNet and DEU-Net, our method shows superior performance in terms of Sp, AUC, and F1-score.
Besides adopting pixel-wise evaluation metrics, we also report the SSIM and PSNR of our method.
We can observe that the SSIM and PSNR of our method outperform those of BTS-DSN on the CHASE\_DB1 dataset by a significant margin; specifically, the SSIM of DPN is over 5.5\% higher than that of BTS-DSN.
On the DRIVE dataset, the PSNR of our method is better than that of Patch BTS-DSN and DRIU, but lower than that of Image BTS-DSN. Although Image BTS-DSN achieves better SSIM and PSNR, its Acc and AUC are lower than ours. We conclude that our method still shows competitive performance in terms of SSIM and PSNR.
Finally, on the HRF dataset, our method also shows superior performance compared with other methods. These experiments reveal that our method achieves comparable or even superior segmentation performance.
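For reference, the pixel-wise metrics reported in the tables follow the standard confusion-matrix definitions, where $TP$, $TN$, $FP$, and $FN$ denote the numbers of true positive, true negative, false positive, and false negative pixels, respectively:
\[
\mathrm{Se}=\frac{TP}{TP+FN},\quad
\mathrm{Sp}=\frac{TN}{TN+FP},\quad
\mathrm{Acc}=\frac{TP+TN}{TP+TN+FP+FN},\quad
\mathrm{F1}=\frac{2\,TP}{2\,TP+FP+FN}.
\]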
\textit{Segmentation speed}.
On the DRIVE dataset, the fps of our method is over 10, while the fps of all state-of-the-art methods are lower than 1.0.
For instance, the segmentation speed of our method is over 20$\times$ and 100$\times$ faster than that of FCN~\citep{OLIVEIRA2018229} and MS-NFN~\citep{wu2018}, respectively.
On the CHASE\_DB1 dataset, most existing state-of-the-art methods require multiple forward passes and a recomposition operation to generate a segmentation map for one fundus image, and thus show slow segmentation speed.
For example, DUNet~\citep{jin2019dunet} and DEU-Net~\citep{wang2019dual} need over 10s to segment a fundus image with resolution 999$\times$960, whereas our method runs in an end-to-end way and segments an image within 0.2s, which is over 280$\times$ and 70$\times$ faster, respectively.
We conclude that our method has a clear advantage in segmentation speed and can better meet the real-time requirements of clinical scenarios.
\textit{Number of parameters}.
The number of parameters of our model is only 120k, which is far fewer than that of all state-of-the-art models. Therefore, our method is more suitable for deployment on edge devices due to its lightweight design.
\subsubsection{Visualization}
To show the effectiveness of our proposed DPN, we present the segmentation probability maps and the corresponding binary maps in Fig.~\ref{fig:vessel_seg_maps}. We can observe that our model could detect both thin vessels and thick vessel trees, verifying the effectiveness of our proposed DP-Block.
\begin{figure*}
\centering
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_img}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_la}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_prob}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_bin}
}
\\
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_img2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_la2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_prob2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{drive_best_bin2}
}
\\
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_img}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_la}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_prob}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_bin}
}
\\
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_img2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_la2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_prob2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{chase_best_bin2}
}
\\
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_img}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_la}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_prob}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_bin}
}\\
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_img2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_la2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_prob2}
}
\subfigure{
\includegraphics[width=0.2\textwidth]{hrf_best_bin2}
}
\caption{Visualization of the segmentation maps.
From columns 1 to 4: fundus images, ground truth, probability maps, and binary maps.}
\label{fig:vessel_seg_maps}
\end{figure*}
Moreover, we present several challenging cases in Fig.~\ref{fig:vessel_seg_challenge}.
We can observe that our model could detect thin vessels with only one-pixel width, as DPN always preserves the spatial information.
In addition, our model is able to segment some extremely thin vessels with low contrast near the macula.
In the third row of Fig.~\ref{fig:vessel_seg_challenge}, there are two lumps of hemorrhage, which share similar local features with vessels. Since DPN captures structural information, it is robust to the presence of hemorrhages. Moreover, our model segments well even some true vessels that are not annotated in the ground truth. In summary, the proposed method segments both thick and thin vessels and is robust to noise.
\begin{figure}
\centering
\subfigure[Segmentation of extremely thin vessels]{
\includegraphics[width=0.28\textwidth]{examp_img_thin}
\includegraphics[width=0.28\textwidth]{examp_label_thin}
\includegraphics[width=0.28\textwidth]{examp_pred_thin}
}\\
\subfigure[Segmentation of low-contrast vessels]{
\includegraphics[width=0.28\textwidth]{examp_img_low}
\includegraphics[width=0.28\textwidth]{examp_label_low}
\includegraphics[width=0.28\textwidth]{examp_pred_low}
}\\
\subfigure[Segmentation in the presence of hemorrhages]{
\includegraphics[width=0.28\textwidth]{examp_img_he}
\includegraphics[width=0.28\textwidth]{examp_label_he}
\includegraphics[width=0.28\textwidth]{examp_pred_he}
}
\subfigure[Segmentation in the presence of microaneurysms]{
\includegraphics[width=0.28\textwidth]{examp_img_ma}
\includegraphics[width=0.28\textwidth]{examp_label_ma}
\includegraphics[width=0.28\textwidth]{examp_pred_ma}
}
\caption{Visualization of some challenging cases. From left to right: fundus images patches, ground-truth, and the segmentation probability maps generated by proposed DPN.}
\label{fig:vessel_seg_challenge}
\end{figure}
\subsection{Ablation Study}
\subsubsection{Effectiveness of Auxiliary Losses}
In order to verify the impact of the auxiliary losses on the final segmentation performance of the model, we removed all three auxiliary losses from DPN and retrained the model under the same settings.
The experimental results are summarized in Table~\ref{table:comp_loss}. We can observe that almost all evaluation metrics decreased after removing the auxiliary losses.
Specifically, the F1-score decreased by over 0.3\% on all three datasets.
This part of the experiments verifies the rationality and effectiveness of adopting auxiliary losses in DPN.
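Concretely, with auxiliary losses the training objective takes the schematic form
\[
\mathcal{L}=\mathcal{L}_{\mathrm{main}}+\sum_{i=1}^{3}\lambda_i\,\mathcal{L}_{\mathrm{aux}}^{(i)},
\]
where $\mathcal{L}_{\mathrm{aux}}^{(i)}$ is the loss attached to the $i$-th auxiliary output (the weights $\lambda_i$ are notation introduced here for illustration); removing the auxiliary losses corresponds to setting all $\lambda_i=0$.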
\begin{table}
\setlength{\tabcolsep}{2pt}
\begin{center}
\caption{Comparison results of employing auxiliary losses or not (best results shown in bold).}
\begin{tabular}{ccccccc}
\hline
Dataset &Auxiliary Loss? &Se &Sp &Acc &AUC &F1\\
\hline
\multirow{2}{*}{DRIVE}
&No &0.7874 &\textbf{0.9810} &0.9564 &0.9808 &0.8259\\
&Yes &\textbf{0.7934} &\textbf{0.9810} &\textbf{0.9571} &\textbf{0.9816} &\textbf{0.8289}\\
\hline
\multirow{2}{*}{CHASE\_DB1}
&No &0.7805 &0.9833 &0.9649 &0.9840 &0.8058 \\
&Yes &\textbf{0.7839} &\textbf{0.9842} &\textbf{0.9660} &\textbf{0.9860} &\textbf{0.8124}\\
\hline
\multirow{2}{*}{HRF}
&No &0.7887 &0.9759 &0.9583 &0.9679 &0.7792\\
&Yes &\textbf{0.7926} &\textbf{0.9764} &\textbf{0.9591} &\textbf{0.9697} &\textbf{0.7835}\\
\hline
\end{tabular}
\label{table:comp_loss}
\end{center}
\end{table}
\subsubsection{Effectiveness of Multi-scale Feature Fusion in DP-Block}
In order to verify the effectiveness of multi-scale feature fusion in DP-Block, we conduct three groups of experiments, in which the DP-Block contains only the first branch, the first and second branches, or all three branches, respectively.
Experimental results are summarized in Table~\ref{table:comp_pool}.
We can observe that adding the second and third branches improves the segmentation performance on the DRIVE and CHASE\_DB1 datasets. Specifically, the F1-score of DPN with three branches is over 1.09\% higher than that of DPN with only the first branch. These experiments reveal that multi-scale feature fusion within DP-Block is necessary.
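Schematically, writing $\mathcal{D}_s$ and $\mathcal{U}_s$ for downsampling and upsampling by a factor $s$, and $f_s$ for the convolutional branch operating at output stride $s$ (notation introduced here for illustration), the three-branch fusion can be sketched as
\[
y = f_1(x) + \mathcal{U}_2\bigl(f_2(\mathcal{D}_2(x))\bigr) + \mathcal{U}_4\bigl(f_4(\mathcal{D}_4(x))\bigr),
\]
and the OS1 and +OS2 configurations in Table~\ref{table:comp_pool} correspond to keeping only the first term, respectively the first two terms.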
\begin{table}
\setlength{\tabcolsep}{2pt}
\begin{center}
\caption{Comparison results of multi-scale feature fusion in DP-Block (OS is short for output stride, and OS1/OS2/OS4 corresponds to the first/second/third branch of DP-Block, respectively)}
\begin{tabular}{clccccccc}
\hline
Dataset &Feature Fusion &Se &Sp &Acc &AUC &F1 &SSIM &PSNR\\
\hline
\multirow{3}{*}{DRIVE}
&OS1 &0.7875 &0.9813 &0.9566 &0.9808 &0.8262 &0.5303 &\textbf{13.8216}\\
&+OS2 &0.7874 &\textbf{0.9816} &0.9569 &0.9810 &0.8277 &0.5093 &13.5204 \\
&+OS2+OS4 &\textbf{0.7934} &0.9810 &\textbf{0.9571} &\textbf{0.9816} &\textbf{0.8289} &\textbf{0.5500} &13.7672\\
\hline
\multirow{3}{*}{CHASE\_DB1}
&OS1 &0.7631 &\textbf{0.9843} &0.9642 &0.9826 &0.8015 &0.5507 &14.1293\\
&+OS2 &0.7684 &0.9841 &0.9645 &0.9838 &0.8035 &0.5379 &13.4846\\
&+OS2+OS4 &\textbf{0.7839}&0.9842 &\textbf{0.9660} &\textbf{0.9860} &\textbf{0.8124} &\textbf{0.6602} &\textbf{14.4583}\\
\hline
\end{tabular}
\label{table:comp_pool}
\end{center}
\end{table}
\subsubsection{The Number of Convolutional Filters}
Table~\ref{table:comp_num_filters} reports the ablation results for various numbers of convolutional filters. First, setting C0, C1, and C2 to 16, 8, and 8 in each DP-Block outperforms the other configurations on the DRIVE and CHASE\_DB1 datasets. Second, thin convolutional filters significantly accelerate inference.
Third, thick convolutional filters imply high memory consumption and greater optimization difficulty.
Considering the trade-off between segmentation accuracy and inference speed, we therefore set C0, C1, and C2 to 16, 8, and 8 in DP-Block.
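As a back-of-the-envelope check, a $k\times k$ convolutional layer with $C_{\mathrm{in}}$ input and $C_{\mathrm{out}}$ output channels contributes
\[
C_{\mathrm{in}}\,C_{\mathrm{out}}\,k^{2}+C_{\mathrm{out}}
\]
parameters (weights plus bias), so halving both channel widths reduces the dominant weight term by roughly a factor of four, which is why the thin configurations in Table~\ref{table:comp_num_filters} keep the model lightweight.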
\begin{table}
\setlength{\tabcolsep}{2pt}
\begin{center}
\caption{Ablation studies on the number of convolutional filters.}
\begin{tabular}{ccccccccccc}
\hline
Dataset &C0 &C1 &C2 &Se &Sp &Acc &AUC &F1 &SSIM &PSNR\\
\hline
\multirow{4}{*}{DRIVE}
&8 &4 &4 &0.7894 &0.9802 &0.9559 &0.9806 &0.8242 &0.5150 &13.7664\\
&8 &8 &8 &0.7840 &\textbf{0.9815} &0.9563 &0.9808 &0.8254 &0.5141 &13.7618\\
&16 &8 &8 &0.7934 &0.9810 &\textbf{0.9571} &\textbf{0.9816} &\textbf{0.8289} &\textbf{0.5500} &\textbf{13.7672}\\
&24 &12 &12&\textbf{0.7942} &0.9808 &\textbf{0.9571} &0.9813 &0.8283 &0.5023 &13.5176\\
\hline
\multirow{4}{*}{CHASE\_DB1}
&8 &4 &4 &0.7563 &\textbf{0.9846} &0.9639 &0.9833 &0.7998 &0.5620 &13.8856\\
&8 &8 &8 &0.7485 &0.9845 &0.9631 &0.9820 &0.7950 &0.5703 &14.3965\\
&16 &8 &8 &\textbf{0.7839}&0.9842 &\textbf{0.9660} &\textbf{0.9860} &\textbf{0.8124} &\textbf{0.6602} &\textbf{14.4583}\\
&24 &12 &12 &0.7747 &0.9835 &0.9645 &0.9838 &0.8045 &0.5215 &13.5914 \\
\hline
\end{tabular}
\label{table:comp_num_filters}
\end{center}
\end{table}
\section{Conclusion}
\label{sec:con}
Deep learning models have been applied to retinal vessel segmentation in recent years and have achieved remarkable performance.
In this paper, we propose a deep model, called DPN, to segment retinal vessel trees. Different from U-Net and FCN, in which detailed spatial information is sacrificed to learn structured information, our method preserves the detailed information all the time by maintaining a high resolution throughout the whole process, which benefits locating the vessel boundaries accurately.
To accomplish this goal, we further present the DP-Block, in which multi-scale fusion is adopted to preserve detailed information and learn structural information at the same time. To show the effectiveness of our method, DPN is trained from scratch on three publicly available datasets: DRIVE, CHASE\_DB1, and HRF.
Experimental results show that our method shows competitive/better performance in terms of F1-score and segmentation speed with only about 120k parameters. Specifically, the segmentation speed of our method is over 20-160$\times$ faster than other state-of-the-art methods on the DRIVE dataset.
In summary, considering the segmentation accuracy, segmentation speed, and model size together, our model shows superior performance and is suitable for real-world application.
Meanwhile, DPN has some drawbacks. For example, it cannot process high-resolution fundus images directly, and there are discontinuous vessel patches in its binary maps.
In the future, we aim to extend our method and develop robust deep models for fundus microaneurysms segmentation.
At last, the source code of our method is available at \url{https://github.com/guomugong/DPN}.
\section*{Funding}
This work is supported by PhD research startup foundation of Xi'an University of Architecture and Technology (No.1960320048).
\section*{Conflict of interest}
The authors have no conflicts of interest to declare that are relevant to the content of this article.
\section*{Availability of data and material}
All databases utilized in this publication are publicly available, namely DRIVE~\cite{drive}, CHASE\_DB1~\cite{chase}, and HRF~\cite{budai2013robust}.
\section*{Code availability}
The source code will be available at \url{https://github.com/guomugong/DPN}.
\bibliographystyle{spbasic}
\section{Introduction}
Pattern formation at the ecosystem level has recently gained a lot of attention in spatial ecology and its mathematical modeling.
Theoretical models are a widely used tool for studying e.g.\ banded vegetation patterns. One important model is the system of advection-diffusion equations proposed by Klausmeier \cite{klausmeier}.
This model for vegetation dynamics in semi-desert areas is based on the ``water redistribution hypothesis'', using the idea that rain water in dry regions is eventually infiltrated into the ground. Water falling onto bare ground mostly runs off downhill towards the next patch of vegetation, which provides a better infiltration capacity. In semi-arid regions of the world such as Australia, Africa, and Southwestern North America, the soil is prone to nonlocal water uptake. Studies of the properties of the system
and further developments can be found in e.g. \cite{sherratt,nadia,klaus1,Sherratt:2005,Sherratt:2010,Sherratt:2011}.
The Klausmeier system is a generalization of the so-called Gray-Scott system \cite{GrayScott_original} (see also \cite{segel,skt} for earlier accounts employing similar models) which already exhibits effects similar to Turing patterns \cite{turing,kepper,Maini,Dillon}, see for instance the discussion in \cite{vanderStelt:2012gs}. We refer to \cite{murray1,murray2,levin2} for further reading on pattern formation in biology. A discussion of reaction-diffusion type equations with motivation from biology can be found in \cite{Perthame}.
The underlying mathematics of this model is given by a pair of solutions $(u,v)$ to a partial differential equation system coupled by a nonlinearity. The function $u$ represents the surface water content and $v$ represents the biomass density of the plants.
In order to model the spread of water on a terrain without a specific preference for the direction in which the water flows, the original models were extended by replacing the diffusion operator by a nonlinear porous media operator, which represents the situation that the ground is partially filled by interconnected pores conveying fluid under an applied pressure gradient.
To this end, let ${{ \mathcal O }}\subset {\mathbb{R}}^d$ be a bounded domain, $d=1,2,3$, having $C^2$-boundary or ${{ \mathcal O }}=[0,1]^d$.
Consider the following problem
\begin{eqnarray} \label{equ1s}
&&
\left\{ \begin{array}{rcl}
\dot {u}(t) &=& r_u \Delta u^{[\gamma]}(t)
- \chi u(t)\,v^2(t) +k-fu(t) , \quad t> 0,
\quad u(0)= u_0 , \phantom{\Big|}
\\
\dot{v}(t) &=& r_v \Delta v(t) + u(t)\,v^2(t) -gv(t) ,\quad t> 0,
\quad
{v}(0) = v_0 ,\phantom{\Big|}
\end{array}\right.
\end{eqnarray}
with Neumann (or periodic when ${{ \mathcal O }}$ is a rectangular domain) boundary conditions and initial conditions $u(0)=u_0$
and $v(0)=v_0$. Here, $z^{[\gamma]}:=|z|^{\gamma-1}z$, $\gamma>1$, $z\in\mathbb{R}$, and further,
$r_u$, $r_v$, $\chi$, $k$, $f$ and $g$ denote positive constants.
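Note that, at least formally, $\nabla u^{[\gamma]}=\gamma|u|^{\gamma-1}\nabla u$, so the porous media operator can be written in divergence form,
\[
\Delta u^{[\gamma]}=\nabla\cdot\bigl(\gamma\,|u|^{\gamma-1}\,\nabla u\bigr),
\]
that is, as a nonlinear diffusion with effective diffusivity $\gamma|u|^{\gamma-1}$, which degenerates as $u\to 0$.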
The deterministic or macroscopic model is derived from the limiting behavior of interacting diffusions --- the so-called microscopic model. When applying the strong law of large numbers and passing from the microscopic to the macroscopic equation, one is neglecting the random fluctuations. In order to get a more realistic model, it is necessary to add noise, which represents the randomness of the natural environment or the fluctuation of parameters in the model.
Due to the Wong-Zakai principle, this leads to the representation of the noise as a Gaussian stochastic perturbation with Stratonovich increment, see \cite[Section 3.4]{Flandoli} and \cite{MaZhu2019,WongZakai}. One important consequence is the preservation of energy in the noisy system.
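For orientation, in the simplest case of a single real-valued Brownian motion $\beta$ and a constant noise intensity $\sigma>0$, the Stratonovich equation $du=\sigma u\circ d\beta$ is equivalent to the It\^o equation
\[
du=\tfrac{1}{2}\sigma^{2}u\,dt+\sigma u\,d\beta,
\]
so passing between the two formulations only introduces an additional linear drift term.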
In practice, we are investigating the system \eqref{equ1s}
driven by a multiplicative infinite dimensional Wiener process. Under suitable regularity assumptions on the initial data $(u_0,v_0)$, on $\gamma$ and on the perturbation by noise which we shall specify later, we find that there exists a nonnegative solution to the system \eqref{equ1n} in dimensions $d=1,2,3$, see our main result\footnote{In fact, we prove existence of nonnegative solutions to the noisy systems \eqref{eq:uIto}--\eqref{eq:vIto} (It\^o noise), \eqref{equ1ss}--\eqref{eqv1ss} (Stratonovich noise) respectively.} Theorem \ref{mainresult}.
Thus, we are actually considering the following system
\begin{eqnarray} \label{equ1n}
&&\left\{ \begin{array}{rcl}
\dot {u}(t) &=& r_u \Delta u^{[\gamma]}(t) - \chi u(t)\,v^2(t) +u(t)\, \Xi_1
(t), \quad t> 0,\quad u(0)=u_0,
\\ \dot{v}(t) &=& r_v \Delta v(t) + u(t)\,v^2(t)+ v(t)\, \Xi_2(t)
,\quad t> 0,\quad v(0)=v_0.
\end{array}\right.
\end{eqnarray}
Here, $\Xi_1$ and $\Xi_2$ denote independent random Gaussian noises specified to be certain Banach space valued linear Stratonovich Wiener noises later on. Similar stochastic equations with L\'evy noise have been studied e.g. in \cite{DebusscheHoegeleImkeller2013,reactdiff}. We note that due to the cross diffusion coupling and the nonlinearity, the solutions are not expected to be stochastically independent.
Nonlinear diffusion systems perturbed by noise exhibit certain improved well posedness and regularity properties when compared to their deterministic counterparts,
see, among others, \cite{Bianchi_etal, CrauelFlandoli, Flandoli}. Another feature of linear multiplicative noise is that it preserves the nonnegativity of the initial condition \cite{TessitoreZabczyk1998,BDPR3}.
A challenging feature of the system \eqref{equ1s}, respectively of the noisy system \eqref{equ1n}, is the nonlinearity, which appears once with a negative and once with a positive sign.
Choosing different speeds of diffusion seems rather natural, as it is well known that the characteristic pattern
formation may not take place if $r_u=r_v$, cf. \cite{grayscott}. The stochastic perturbation, however, does not restrict pattern formation \cite{Cao,Woolley}, noting that our choice of noise exhibits rather small intensity and damped high frequency modes.
On the other hand, the nonlinearity is not of variational structure, such that energy methods are not available for the analysis, and neither the maximum principle nor Gronwall type arguments work.
More direct deterministic (pathwise) methods may also fail in general as the equation for $v$ is non-monotone.
Another difficulty is posed by the nonlinear porous media diffusion operator in the equation for $u$. Typically, it is studied with the help of variational monotonicity methods, see e.g. \cite{weiroeckner,BDPR2016}. To the nonlinear diffusion, neither semigroup methods nor Galerkin approximation can be applied directly without greater effort. Other approaches are given for instance in \cite{Gess2017,vazquez2007porous,Otto}, to mention a few.
Our motivation to prove a probabilistic Schauder-Tychonoff type fixed point theorem originates from the aim to show existence of a solution to the stochastic counterpart \eqref{equ1n} of the system \eqref{equ1s}.
As standard methods for showing existence and uniqueness of solutions to stochastic partial differential equations cannot be applied directly, we perform a fixed point iteration using a mix of so-called variational and semigroup methods with nonlinear perturbation. Here, a precise analysis of regularity properties of nonnegative solutions to regularized and localized
subsystems with truncated nonlinearities needs to be conducted. Together with the continuous dependence of each subsystem on the other one, we obtain the fixed point, locally in time, by weak compactness and an appropriate choice of energy spaces. The result is completed by a stopping time gluing argument.
Apart from the probabilistic structure,
the main novelty is that we construct the fixed point in the nonlinearity and not in the noise coefficient. This works well because our system is coupled precisely in the nonlinearity. The main task consists then in the analysis of regularity and invariance properties of a regularized system with ``frozen'' nonlinearity (not being fully linearized, however, as the porous media operator remains).
It may be possible to apply this method in the future to other nonlinear systems, for example systems with nonlinear convection terms, such as transport or Navier-Stokes type systems. It is certainly possible to apply the method to linear cross diffusion systems.
See \cite{boundednessbyentropymethod} for a recent work proving existence of martingale solutions to stochastic cross diffusion systems; however, their approach uses different methods and does not cover the porous media case. See \cite{MASLOWSKI2003} for a previous work using the classical Schauder theorem for stochastic evolution equations with fractional Gaussian noise.
Given higher regularity of the initial data, we also show the pathwise uniqueness of the solution to \eqref{equ1n} for spatial dimension $d=1$.
As a consequence of the celebrated result by Yamada and Watanabe \cite{YW1,WY2}, we obtain existence and uniqueness of a strong solution, see Corollary \ref{cor:strong}. We refer to \cite{Schmalfuss1997,Capinski1993,CP1997,BM2013} for previous works that employ a similar strategy for proving the existence of a unique strong solution.
\subsection*{Structure of the paper}
The stochastic Schauder-Tychonoff type Theorem \ref{ther_main} is presented in Section \ref{schauder}. In the subsequent section, i.e. Section \ref{klausmeier}, we apply this fixed point theorem to show the existence of a martingale solution to the stochastic counterpart \eqref{equ1n} of the system \eqref{equ1s}. Section \ref{klausmeier} contains our main result Theorem \ref{mainresult}. Section \ref{sec:proof_of_schauder} is devoted to the proof of Theorem \ref{ther_main}.
In Section \ref{technics}, we prove several technical propositions that are needed for the main result in Section \ref{klausmeier}. Section \ref{pathwise} contains the proof of pathwise uniqueness, that is, of Theorem \ref{thm_path_uniq}. Some auxiliary results are collected in Appendices \ref{app:A}, \ref{dbouley-space} and \ref{sec:BDG}.
\section{The stochastic Schauder-Tychonoff type theorem}\label{schauder}
Let us fix some notation.
Let
$U$ be a Banach space. Let $\Ocal\subset \mathbb{R}^d$ be an open domain, $d\ge 1$. Let $\mathbb{X}\subset\{\eta:[0,T]\to E\subset\Dcal^{\prime}(\Ocal)\}$
be a Banach function space\footnote{Here, $\Dcal^\prime(\Ocal)$ denotes the space of Schwartz distributions on $\Ocal$, that is, the topological dual space of smooth functions with compact support $\Dcal(\Ocal)=C_0^\infty(\Ocal)$.}, let $\mathbb{X}^{\prime}\subset\{\eta:[0,T]\to E\subset\Dcal^{\prime}(\Ocal)\}$
be a reflexive Banach function space embedded compactly into $\mathbb{X}$. In both cases, the trajectories take values in a Banach function space $E$ over the spatial domain $\Ocal$.
Let $\Afrak=(\Omega,\Fcal,\mathbb{F},\P)$ be a filtered probability space
with filtration $\mathbb{F}=(\Fcal_{t})_{t\in [0,T]}$ satisfying the usual conditions.
Let $H$ be a separable Hilbert space and $(W_t)_{t\in [0,T]}$ be a Wiener process\footnote{That is, a $Q$-Wiener process, see e.g. \cite{DaPrZa:2nd} for this notion.} in $H$ with a linear, nonnegative definite, symmetric trace class covariance
operator $Q:H\to H$ such that $W$ has the representation
$$
W(t)=\sum_{i\in\mathbb{I}} Q^\frac 12 \psi_i\beta_i(t),\quad t\in [0,T],
$$
where $\{\psi_i:i\in \mathbb{I}\}$ is a complete orthonormal system in $H$, $\mathbb{I}$ a suitably chosen countable index set, and $\{\beta_i:i\in\mathbb{I}\}$ a family of independent real-valued standard Brownian motions on $[0,T]$ modeled in $\Afrak=(\Omega,\Fcal,\mathbb{F},\P)$. Due to \cite[Proposition 4.7, p. 85]{DaPrZa:2nd}, this representation does not pose a restriction.
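In particular, $W$ is a centered $H$-valued Gaussian process with covariance structure
\[
\mathbb{E}\,\langle W(t),h\rangle_{H}\,\langle W(s),g\rangle_{H}=(t\wedge s)\,\langle Qh,g\rangle_{H},\qquad h,g\in H,\; s,t\in[0,T].
\]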
For $m\ge1$, define the collection of processes
\begin{equation}\label{eq:MMdef}
\begin{aligned} \Mcal_{\Afrak}^{m}(\mathbb{X})
:= & \Big\{ \xi:\Omega\times[0,T]\to E\;\colon\;\\
&\qquad\text{\ensuremath{\xi} is \ensuremath{\mathbb{F}}-progressively measurable}\;\text{and}\;\mathbb{E}|\xi|_{\mathbb{X}}^{m}<\infty\Big\}
\end{aligned}
\end{equation}
equipped with the semi-norm
\[
|\xi|_{\Mcal_{\Afrak}^{m}(\mathbb{X})}:=(\mathbb{E}|\xi|_{\mathbb{X}}^{m})^{1/m},\quad\xi\in\Mcal_{\Afrak}^{m}(\mathbb{X}).
\]
For fixed $\Afrak$, $W$, $m>1$, we define the operator $\Vcal=\Vcal_{\Afrak,W}:\Mcal_{\Afrak}^{m}(\mathbb{X})\to\Mcal_{\Afrak}^{m}(\mathbb{X})$
for $\xi\in\Mcal_{\mathfrak{A}}^m(\mathbb{X})$ via
$$\Vcal(\xi):=w,$$
where $w$ is the solution of the following It\^o stochastic partial differential equation (SPDE)
\begin{eqnarray} \label{spdes}
dw(t) &=&\left(A w(t)+ F(\xi(t),t)\right)\, dt +\Sigma(w(t))\,dW(t),\quad w(0)=w_0\in E.
\end{eqnarray}
Here, we implicitly assume that \eqref{spdes} is well posed and a unique strong solution (in the stochastic sense) $w\in\Mcal_{\mathfrak{A}}^m(\mathbb{X})$ exists for $\xi\in\Mcal_{\mathfrak{A}}^m(\mathbb{X})$.
Here, we shall also assume that $A:D(A)\subset\mathbb{X}\to \mathbb{X}$ is a possibly nonlinear and measurable (single-valued) operator and $F:\mathbb{X}\times [0,T]\to \mathbb{X}$ a (strongly) measurable map such that
\[{\mathbb{P}}\left(\int_0^T\left(\left|A w (s)\right|_E+\left|F(\xi(s),s)\right|_E\right)\,ds<\infty\right)=1,\]
and assume that $\Sigma:E \to \gamma(H,E)$ is measurable. Here, $\gamma(H,E)$ denotes the space of $\gamma$-radonifying operators from $H$ to $E$,
as defined in the beginning of Section \ref{technics}, which coincides with the space of Hilbert-Schmidt operators $L_{\textup{HS}}(H,E)$ if $E$ is a
separable Hilbert space.
We are ready to formulate our main tool for proving the existence of nonnegative martingale solutions to
\eqref{equ1n}, that is, a stochastic variant of the (deterministic) Schauder-Tychonoff fixed point theorem(s) from \cite[§ 6--7]{granas}.
\begin{theorem}\label{ther_main}
Let $H$ be a Hilbert space, $Q:H\to H$ such that $Q$ is linear, symmetric, nonnegative definite and of trace class, let $U$ be a Banach space, and let us assume that we have a compact embedding $\mathbb{X}^{\prime}\hookrightarrow\mathbb{X}$
as above. Let $m>1$. Suppose that
for any filtered probability space $\Afrak=(\Omega,\Fcal,\mathbb{F},\P)$
and for any $Q$-Wiener process $W$ with values in $H$ that is modeled
on $\Afrak$ the following holds.
There exists a nonempty, sequentially weak$^\ast$-closed, measurable and bounded subset\footnote{Here, the notation $\Xcal(\Afrak)$ means that $\text{Law}(u)=\text{Law}(\tilde{u})$
on $\mathbb{X}$ for $u\in\Xcal(\Afrak)$ and $\tilde{u}\in\Mcal_{\tilde{{\Afrak}}}^{m}$
implies $\tilde{u}\in\Xcal(\tilde{\Afrak})$.} $\Xcal(\Afrak)$ of $\Mcal_{\mathfrak{A}}^m(\mathbb{X})$ such that
the operator $\Vcal_{\Afrak,W}$, defined by \eqref{spdes}, restricted to $\Xcal(\Afrak)$
satisfies the following
properties:
\begin{enumerate}
\item $\Vcal_{\Afrak,W}(\Xcal(\Afrak))\subset\Xcal(\Afrak)$,
\item the restriction $\Vcal_{\Afrak,W}\big|_{\Xcal(\Afrak)}$ is
continuous w.r.t. the strong topology of $\Mcal_{\Afrak}^{m}(\mathbb{X})$,
\item there exist constants $R>0$, $m_0\ge m$ such that
\[
\mathbb{E}|\Vcal_{\Afrak,W}(v)|_{\mathbb{X}^{\prime}}^{m_0}\le R\quad\text{for every}\quad v\in\Xcal(\Afrak),
\]
\item $\Vcal_{\Afrak,W}(\Xcal(\Afrak))\subset\mathbb{D}([0,T];U)$ $\P$-a.s.\footnote{Here, $\mathbb{D}([0,T];U)$ denotes the
Skorokhod space of c\`adl\`ag paths in $U$.}
\end{enumerate}
Then, there exists a filtered probability space $\tilde{\Afrak}=(\tilde{\Omega},\tilde{\Fcal},\tilde{\mathbb{F}},\tilde{\P})$
(that satisfies the usual
conditions)
together with a $Q$-Wiener process $\tilde{W}$ modeled on $\tilde{\Afrak}$
and an element $\tilde{v}\in\Mcal_{\tilde{\Afrak}}^{m}(\mathbb{X})$ such that
for all $t\in[0,T]$, $\tilde{\P}$-a.s.
\[
\Vcal_{\tilde{\Afrak},\tilde{W}}(\tilde{v})(t)=\tilde{v}(t)
\]
for any initial datum $v_0:=w_0\in L^m(\Omega,\Fcal_0,\P;E)$.
\end{theorem}
The proof of Theorem \ref{ther_main} is postponed to Section \ref{sec:proof_of_schauder}. We note that,
by construction, the fixed point $\tilde{v}$ solves
\begin{eqnarray}
d\tilde{v}(t) &=&\left(A \tilde{v}(t)+ F(\tilde{v}(t),t)\right)\, dt +\Sigma(\tilde{v}(t))\,d\tilde{W}(t),\quad \tilde{v}(0)=v_0,
\end{eqnarray}
on $\tilde\Afrak$.
\begin{remark}\label{rem:weakstar}
We note that $\Xcal(\Afrak)$ is a sequentially weak$^\ast$-closed, measurable and bounded subset of $\Mcal_{\mathfrak{A}}^m(\mathbb{X})$ if there exist constants $K_i\ge 0$, $i=1,\ldots,N$, for some $N\in\mathbb{N}$, and sequentially weak$^\ast$-lower semicontinuous\footnote{If $E$ is reflexive, this is for instance satisfied by Mazur's lemma if the maps $\Theta_i$, $i=1,\ldots,N$, are convex and strongly (lower semi-)continuous.} measurable mappings
\[\Theta_i:L^1(0,T;E)\to [0,+\infty],\quad i=1,\ldots,N,\]
with bounded sublevel sets such that $\mathcal{X}(\Afrak)$ has the structure
$$ \mathcal{X}(\mathfrak{A})=\Big\{ \xi \in \Mcal_{\mathfrak{A}}^m(\mathbb{X}) \;\colon\; \mathbb{E} [\Theta_i(\xi)]\le K_i,\;i\in \{1,\ldots,N\}\Big\}.
$$
Then, of course, the set $\Xcal(\Afrak)\cap A$ is still sequentially weak$^\ast$-closed and bounded for any sequentially weak$^\ast$-closed and measurable set $A$; we shall use this fact later for the convex cone of nonnegative processes $A=\{\xi \in \Mcal_{\mathfrak{A}}^m(\mathbb{X})\;\colon\;\xi\ge 0\;\P\otimes\operatorname{Leb}\text{-a.e.} \}$.
\end{remark}
\section{Existence of a solution to the stochastic Klausmeier system}\label{klausmeier}
In this section, we shall prove the existence of a nonnegative solution to the stochastic Klausmeier system. First, we will introduce some notation and the definition of a (martingale) solution.
After fixing the function spaces and the main hypotheses on their parameters, we present our main result, Theorem \ref{mainresult}. The proof consists mainly of a verification of the conditions of Theorem \ref{ther_main} and a stopping-time localization procedure. As pointed out before, we use compactness arguments to show existence, which leads to the loss of the initial stochastic basis.
Let $H_1$ and $H_2$ be a pair of separable Hilbert spaces,
let $\mathfrak{A}=(\Omega,{{ \mathcal F }},\mathbb{F},{\mathbb{P}})$ be a filtered probability space and let $W_1$, $W_2$ be a pair of independent Wiener processes modeled on $\mathfrak{A}$ taking values in $H_1$ and $H_2$, respectively, with covariance operators $Q_1$ and $Q_2$, respectively.
We are interested in the solution to the following reduced Klausmeier system for $x\in {{ \mathcal O }}$ and $t>0$,
\begin{equation}\label{equ1ss}
d {u}(t,x) = ( r_u\Delta u^{[\gamma]}(t,x) -\chi u(t,x)\,v^2(t,x))\, dt +\sigma_1 u(t,x)\circ dW_1(t,x),
\end{equation}
and,
\begin{equation}\label{eqv1ss}
d{v}(t,x) = ( r_v\Delta v(t,x) + u(t,x)\,v^2(t,x))\, dt +\sigma_2 v(t,x)\circ d W_2(t,x),
\end{equation}
with Neumann (or periodic if $\Ocal=[0,1]^d$) boundary conditions and initial conditions $u(0)=u_0$
and $v(0)=v_0$, where $r_u,r_v,\chi>0$ are constants. Here, we use the abbreviation $x^{[\gamma]}:=|x|^{\gamma-1}x$ for $\gamma>1$. We note that, due to the Sobolev embedding techniques employed below, we actually have to choose $\gamma>2$. The hypotheses on the linear
noise coefficient maps $\sigma_1,\sigma_2$ are specified below.
Due to the nonlinear porous medium term, we use neither solutions in the strong stochastic sense nor mild solutions, that is, solutions in the sense of stochastic convolutions. Let us define what we mean by a solution on a fixed stochastic basis.
\begin{definition}\label{def:singleSLN}
A couple $(u,v)$ of stochastic processes on $\Afrak$ is called a \emph{solution to the system} \eqref{equ1ss}--\eqref{eqv1ss} for initial data $(u_0,v_0)$ if there exists $\rho\in\mathbb{R}$ such that
\[u\in L^2(\Omega;C([0,T];H_2^{-1}({{ \mathcal O }})))\cap L^{\gamma+1}(\Omega\times (0,T)\times \Ocal)\]
and
\[v\in L^2(\Omega;C([0,T];H^{\rho}_2({{ \mathcal O }})))\cap L^2(\Omega\times (0,T);H_2^{\rho+1}({{ \mathcal O }})) \]
such that $u$ and $v$ are $\mathbb{F}=(\Fcal_t)_{t\in [0,T]}$-adapted, and satisfy
\begin{equation}\label{eq:uStratonovich}u(t)=u_0+\int_0^t (r_u\Delta {u}^{[\gamma]}(s)-\chi u(s) v^2(s))\,ds+\int_0^t \sigma_1 u(s)\circ dW_1(s)\end{equation}
for every $t\in [0,T]$, $\P$-a.s. in $H^{-1}_2(\Ocal)$,
and
\begin{equation}\label{eq:vStratonovich}v(t)=v_0+\int_0^t (r_v\Delta v(s)+u(s) v^2(s))\,ds+\int_0^t \sigma_2 v(s)\circ dW_2(s)\end{equation}
for every $t\in [0,T]$, $\P$-a.s. in $H_2^{\rho}(\Ocal)$.
\end{definition}
Note that the above $ds$-integral for $u$ in \eqref{eq:uStratonovich} is initially a $V^\ast$-valued\footnote{Here, $V:=L^{\gamma+1}(\Ocal)$ is dualized in a Gelfand triple over $\Hcal:=H^{-1}_2(\Ocal)$ which, in turn, is identified with its own dual by the Riesz isometry. Thus $V^\ast$ is not to be mistaken for the usual Banach space dual of $V$, namely $L^{\frac{\gamma+1}{\gamma}}(\Ocal)$. See the proof of Theorem \ref{theou1} for details.} Bochner integral, where $V^\ast$ is defined as in the proof of Theorem \ref{theou1}; it is, however, in fact $H^{-1}_2(\Ocal)$-valued, see the discussion in \cite[Section 4.2]{weiroeckner} for further details.
As mentioned before, due to the loss of the original probability space, we are considering solutions in the weak probabilistic sense.
\begin{definition}\label{Def:mart-sol}
A {\sl martingale solution} to the problem
\eqref{equ1ss}--\eqref{eqv1ss} for initial data $(u_0,v_0)$ is a system
\begin{equation}
\left(\Omega ,{{\mathcal{F}}},{\mathbb{F}},\mathbb{P},
(W_1,W_2), (u,v)\right)
\label{mart-system}
\end{equation}
such that
\begin{enumerate}
\item the quadruple $\mathfrak{A}:=(\Omega ,{{\mathcal{F}}},{\mathbb{F}},\mathbb{P})$ is a complete filtered
probability space with a filtration ${\mathbb{F}}=(\mathcal{F}_t)_{t\in [0,T]}$ satisfying the usual conditions,
\item $W_1$ and $W_2$ are independent $H_1$-valued, respectively, $H_2$-valued Wiener processes over the probability space
$\mathfrak{A}$ with covariance operators $Q_1$ and $Q_2$, respectively,
\item both $u:[0,T]\times \Omega \to H^{-1}_2({{ \mathcal O }})$ and $v:[0,T]\times \Omega \to H_2^{\rho}({{ \mathcal O }})$ are ${\mathbb{F}}$-adapted
processes such that the couple $(u,v)$ is a solution to the system \eqref{equ1ss} and \eqref{eqv1ss} over the probability space $\mathfrak{A}$ in the sense of Definition \ref{def:singleSLN} for some $\rho\in\mathbb{R}$.
\end{enumerate}
\end{definition}
\begin{remark}
For our purposes, instead of the Stratonovich formulation of the system \eqref{equ1ss}--\eqref{eqv1ss}, it is convenient to consider the equations in It{\^o} form:
\begin{equation}\label{eq:uIto}
d {u}(t,x) = ( r_u\Delta u^{[\gamma]}(t,x) -\chi u(t,x)\,v^2(t,x))\, dt +\sigma_1 u(t,x)\, dW_1(t,x),
\end{equation}
and,
\begin{equation}\label{eq:vIto}
d{v}(t,x) = ( r_v\Delta v(t,x) + u(t,x)\,v^2(t,x))\, dt +\sigma_2 v(t,x)\, d W_2(t,x),
\end{equation}
with initial data $u(0)=u_0$ and $v(0)=v_0$.
One reason for this is that the stochastic integral then becomes a local martingale. In order to show the existence of a solution to the Stratonovich system, one would have to incorporate the It\^o--Stratonovich conversion term (cf. \cite{evans2012introduction}), which, due to the linear multiplicative noise, is a linear term, namely a scalar multiple of $u$ and $v$, respectively.
If one is interested in the exact form of the correction term, we refer to \cite{duan}. We also refer
to the discussion in \cite{grayscott}, where the constant accounting for the correction term is computed explicitly.
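To illustrate, a formal coordinatewise computation (a sketch only, using the noise expansion of Hypothesis \ref{wiener} below, and not reproducing the exact constant of the cited references) gives for the $u$-equation
\[
\sigma_1 u(t,x)\circ dW_1(t,x)
=\sigma_1 u(t,x)\, dW_1(t,x)
+\frac{\sigma_1^2}{2}\Big(\sum_{k\in\mathbb{Z}}\big(\lambda^{(1)}_k\big)^2\psi_k^2(x)\Big) u(t,x)\,dt,
\]
and analogously for the $v$-equation; the correction term reduces to a scalar multiple of $u$ whenever $\sum_{k\in\mathbb{Z}}(\lambda^{(1)}_k)^2\psi_k^2$ is constant in $x$, as for the trigonometric basis in the periodic case.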
\end{remark}
From now on, we shall consider the system \eqref{equ1ss}--\eqref{eqv1ss} in It{\^o} form, that is, as in \eqref{eq:uIto}--\eqref{eq:vIto}. Definitions \ref{def:singleSLN} and \ref{Def:mart-sol} have to be adjusted in the obvious way.
We note that we can solve \eqref{equ1ss}--\eqref{eqv1ss} by a straightforward modification of the proof given here.
Before presenting our main result, we will first introduce the hypotheses on $d$, $\gamma$, $\rho$ and the initial conditions $u_0$ and $v_0$, and on $\sigma_1$, $\sigma_2$.
\begin{hypo}[Existence]
\label{init}
Let $d\in\{1,2,3\}$.
Let $\gamma>2$, $m>2$ and $m_0>\frac {2(\gamma+1)}\gamma$ be such that
$$\frac d2 -\rho\le \frac 2m+\frac d {m_0} \quad\text{and}\quad \rho<1-\frac d2
$$
and $p^\ast\ge 2$ and $p_0^\ast\ge 2$ such that
$$
\frac 1{p^\ast}+\frac 2m< 1\quad\mbox{and}\quad \frac {2}{ m_0}+\frac 1{p_0^\ast}< \frac {\gamma}{\gamma+1}
.
$$
{Let
$$l> 1+\left(1-\frac{d}{2}-\rho\right)^{-1}.
$$}
Let us assume that the initial conditions $u_0$, $v_0$ are $\Fcal_0$-measurable and satisfy
$$ \mathbb{E} |u_0|_{L^{2}}^l \vee \mathbb{E} |u_0|_{L^{p^\ast}}^{p^\ast_0}\vee \mathbb{E} |u_0|_{L^{\gamma+1}}^{\gamma+1}<\infty,\quad
\mbox{and}\quad
\mathbb{E} |v_0|_{H^{\rho}_2}^{m_0}\vee \mathbb{E} |v_0|_{H^{-\delta_0}_m}^{m_0}<\infty, \quad \delta_0<\frac 1{m_0},
$$
and that $u_0$ and $v_0$ are a.e. nonnegative functions (nonnegative Borel measures that are finite on compact subsets, respectively).
\end{hypo}
\begin{hypo}[Noise]\label{wiener}
Let $\{\psi_k:k\in\mathbb{Z}\}$ be the eigenfunctions of $-\Delta$ and $\{\nu_k:k\in\mathbb{Z}\}$ the corresponding eigenvalues.
Let $W_1$ and $W_2$ be a pair of Wiener processes given by
$$
{W}_j(t,x)=\sum_{k\in\mathbb{Z}} \lambda_k^{(j)} \psi_k(x)\beta^{(j)}_k(t), \quad t\in[0,T],\quad j=1,2,\quad x\in\Ocal,
$$
where $\{\lambda_k^{(j)}\;:\;k\in\mathbb{Z}\}$, $j=1,2$, is a pair of nonnegative sequences belonging to $\ell^2(\mathbb{Z})$,
$\{ \beta_k^{(j)}:[0,T]\times\Omega\to\mathbb{R}\;:\;(k,j)\in\mathbb{Z}\times\{1,2\}\}$
is a family of
independent standard real-valued Brownian motions.
We assume that $\lambda^{(j)}_k\le C_j |\nu_k|^{-\delta_j}$, $j=1,2$, where $\delta_1> \frac{1}{2}$, $\delta_2>\frac{1}{2}$, and $C_1>0$, $C_2>0$.
Compare also with Subsection \ref{subsec:noise} for details.
\end{hypo}
Under these hypotheses, the existence of a martingale solution can be shown.
\begin{theorem}\label{mainresult}
Assume that Hypotheses \ref{init}--\ref{wiener} hold.
Then there exists a martingale solution to system \eqref{eq:uIto}--\eqref{eq:vIto} satisfying the following properties:
\begin{enumerate}[label=(\roman*)]
\item $u(t,x)\ge 0$ and $v(t,x)\ge 0$ ${\mathbb{P}}$-a.s., for a.e. $t\in [0,T]$ and a.e. $x\in\Ocal$,
\item for any $p\ge 1$ with $ \mathbb{E} |u_0|^{p+1}_{L^{p+1}}<\infty$, there exists a constant $C_0(p,T)>0$ such that
\begin{eqnarray} \nonumber
\lefteqn{ \mathbb{E} \left[\sup_{0\le s\le T} |u (s)|_{L^{p+1} }^{p+1}\right]
+ \gamma p(p+1)r_u \mathbb{E} \int_0^ \TInt\int_{{ \mathcal O }} |u(s,x)|^{p+\gamma-2} |\nabla u (s,x)|^2\, dx \, ds} &&
\\\nonumber &&{} +(p+1)\chi \mathbb{E} \int_0^\TInt \int_{{ \mathcal O }} |u(s,x) |^{p+1} |v(s,x)|^2\, dx \, ds \le C_0(p,T)\,\left( \mathbb{E} |u_0|_{L^{p+1}}^{p+1}+1\right).
\end{eqnarray}
\item for any choice of parameters $\rho$, $m_0$, $l$ as in Hypothesis \ref{init}, there exists a constant $C_2(T)>0$ such that
\begin{align*}& \mathbb{E} \left[\sup_{0\le s\le T} |v(s)|_{H^{\rho}_{2}}^{m_0}\right]+ \mathbb{E} \left(\int_0^T |v(s)|_{H^{\rho+1}_2}^2\,ds\right)^{m_0/2}\\
\le & C_2(T)\left(1+ \mathbb{E} |v_0|_{H^{\rho}_2}^{{m_0}}+ \mathbb{E} |u_0|_{L^{2}}^{l}\right).
\end{align*}
\end{enumerate}
\end{theorem}
We shall also recall a standard notion of uniqueness.
\begin{definition} The system \eqref{eq:uIto}--\eqref{eq:vIto} is called \emph{pathwise unique} if, whenever $(u_i,v_i)$, $i=1,2$, are martingale solutions to the system \eqref{eq:uIto}--\eqref{eq:vIto} on a common stochastic basis
$(\Omega, {{ \mathcal F }},{{ \mathbb{F} }},{\mathbb{P}},(W_1,W_2))$, such that
${\mathbb{P}}(u_1(0)=u_2(0))=1$ and ${\mathbb{P}}(v_1(0)=v_2(0))=1$, then
\[
{\mathbb{P}}(u_1(t)=u_2(t))=1 \quad \mbox{and} \quad
{\mathbb{P}}(v_1(t)=v_2(t))=1,
\quad \mbox{for every } t \in [0,T].
\]
\end{definition}
The theorem of Yamada and Watanabe \cite{YW1} asserts that
(weak) existence together with pathwise uniqueness of the solution of a stochastic equation is equivalent to the existence of a unique strong solution.
Therefore, showing pathwise uniqueness is a fundamental step towards obtaining the existence of a unique strong solution.
Under certain additional conditions, as collected below, we are able to prove pathwise uniqueness.
{
\begin{hypo}\label{Uniqueness}
Let $\delta_0\in(0,\frac 1 \gamma)$, and $
\rho\ge \frac d2-\frac 12$.
\end{hypo}
\begin{theorem}\label{thm_path_uniq}
Let $T>0$, let $\mathfrak{A}=(\Omega,{{ \mathcal F }},({{ \mathcal F }}_t)_{t\in[0,T]},{\mathbb{P}})$ be a filtered probability space satisfying the usual conditions, and let $(W_1,W_2)$ be a pair of Wiener processes over $\mathfrak{A}$
taking values in $H_1$ and $H_2$ and satisfying Hypothesis \ref{wiener}.
Let the initial data $u_0 \in H^{-1}_2({{ \mathcal O }})$ and $v_0 \in L^{2}({{ \mathcal O }})$
together with the parameters $m^\ast,m_0,p^\ast,p^\ast_0,l,\delta_0$ and $\rho$ satisfy Hypothesis \ref{init} and Hypothesis \ref{Uniqueness}.
Let $(u_i,v_i)$, $i=1,2$, be two solutions to the system \eqref{eq:uIto}--\eqref{eq:vIto} with initial data $(u_0,v_0)$ and belonging ${\mathbb{P}}$-a.s. to $C([0,T];H^{-1}_2({{ \mathcal O }}))\times C([0,T];H^{-\delta_0}_2({{ \mathcal O }}))$. Then, the processes $(u_1,v_1)$ and $(u_2,v_2)$ are indistinguishable in $H^{-1}_2({{ \mathcal O }})\times H^{-\delta_0}_2({{ \mathcal O }})$.
\end{theorem}}
\begin{remark}
It can be shown that, for dimension $d=1$,
indices $m^\ast,m_0,p^\ast,p^\ast_0,l,\delta_0$ and $\rho$ satisfying Hypothesis \ref{init} and Hypothesis \ref{Uniqueness}
can be found. Because of our condition $\rho\ge\frac d2-\frac 12$, we cannot handle dimensions $d\in \{2,3\}$, as Hypothesis \ref{init} then implies that $\rho<0$.
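For instance, one admissible (though by no means unique or optimal) choice for $d=1$ and $\gamma=3$ is
\[
\rho=\tfrac14,\quad m=m_0=4,\quad p^\ast=p_0^\ast=8,\quad l=6,\quad \delta_0=\tfrac18.
\]
Indeed, $m_0=4>\frac{2(\gamma+1)}{\gamma}=\frac 83$, $\frac d2-\rho=\frac 14\le\frac 2m+\frac d{m_0}=\frac 34$, $\rho=\frac 14<1-\frac d2=\frac 12$, $\frac 1{p^\ast}+\frac 2m=\frac 58<1$, $\frac 2{m_0}+\frac 1{p_0^\ast}=\frac 58<\frac{\gamma}{\gamma+1}=\frac 34$, $l=6>1+\big(1-\frac d2-\rho\big)^{-1}=5$, $\delta_0=\frac 18<\frac 1{m_0}\wedge\frac 1\gamma$, and $\rho=\frac 14\ge\frac d2-\frac 12=0$.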
\end{remark}
\begin{proof}
The proof is postponed to Section \ref{pathwise}.
\end{proof}
With this theorem at hand, the following corollary is a consequence of the theorem of Yamada--Watanabe.
\begin{corollary}\label{cor:strong}
There exists a unique strong solution (in the stochastic sense) to the system \eqref{eq:uIto}--\eqref{eq:vIto}.
\end{corollary}
\begin{proof}
This follows from Theorem \ref{mainresult} in combination with Theorem \ref{thm_path_uniq} and the Yamada-Watanabe theorem, see \cite[Appendix E]{weiroeckner} and \cite{Qiao2010,Kurtz2007,Ondrejat2004}.
\end{proof}
The proof of Theorem \ref{mainresult} is an application of Theorem \ref{ther_main} and consists of five steps. However, to keep the proof itself simple, we postpone several technical results, which are collected in Section \ref{technics}.
\begin{proof}[Proof of Theorem \ref{mainresult}]
In the first step, we specify the underlying Banach spaces. In the second step, we construct the operator $\mathcal{V}$ for a truncated system and show that it satisfies the assumptions of Theorem \ref{ther_main}. In the third step, we localize via stopping times and glue together the fixed-point solutions, which exist by Theorem \ref{ther_main}. In the fourth step, we prove that the stopping times are uniformly bounded. In the fifth step, we show that we indeed obtain a martingale solution satisfying the above properties.
\begin{steps}
\item {\bf The underlying space(s).}
Here we define the spaces on which the operator $\mathcal{V}$ will act.
Let the probability space $\mathfrak{A}=(\Omega,{{ \mathcal F }},\mathbb{F},{\mathbb{P}})$ be given and let $W_1$ and $W_2$ be two independent $H_1$ and $H_2$-valued Wiener processes defined over $\mathfrak{A}$ with covariances $Q_1$ and $Q_2$. Let $W=(W_1,W_2)$, $H=H_1\times H_2$, with covariance operator
$$ Q=\left(\begin{matrix} Q_1 & 0 \\0 & Q_2\end{matrix}\right).
$$
Let us define the Banach space
$$\mathbb{Y}= L^{\gamma+1}(0,T; {L^{\gamma+1}}({{ \mathcal O }}))\cap L^\infty(0,T; {H^{-1}_2}({{ \mathcal O }}))$$
equipped with the norm
$$
\|\eta\|_{\mathbb{Y}}:= \|\eta\|_{L^{\gamma+1}(0,T; {L^{\gamma+1}})}+ \|\eta\|_{L^\infty(0,T; {H^{-1}_2})},\quad\eta\in \mathbb{Y},
$$
and the reflexive Banach space
$$\mathbb{Z}:=L^{m_0}(0,T;L^m({{ \mathcal O }})),$$
equipped with the norm
$$
\|\xi\|_{\mathbb{Z}}:= \|\xi\|_{L^{m_0}(0,T;L^m)},\quad\xi\in \mathbb{Z}.
$$
Finally, let us fix
an auxiliary Banach space $\BH := L^2(0,T; H^{\rho+1}_2({{ \mathcal O }}))\cap L^\infty(0,T; {H^{\rho}_2}({{ \mathcal O }}))$
equipped with the norm
\begin{equation}\label{eq:BZeq}
\|\xi\|_{\BH}:= \|\xi\|_{L^2(0,T; H^{\rho+1}_2)}+ \|\xi\|_{L^\infty(0,T; {H^{\rho}_2})},\quad\xi\in \BH.
\end{equation}
\begin{remark}\label{variationalremark}
If
$$\frac d2 -\rho\le \frac 2m+\frac d {m_0},$$
then one can show by Sobolev embedding and interpolation theorems (see Proposition \ref{interp_rho} in the appendix) that
there exists a constant $C>0$ such that
$$
\|\xi\|_{\mathbb{Z}}\le C\|\xi\|_{\BH },\quad \xi\in\BH .
$$
\end{remark}
Let us fix the reflexive Banach subspace of $\mathbb{Z}$, compactly embedded into $\mathbb{Z}$, given by
$$\mathbb{Z}':=L^{m_0}(0,T;H^\sigma_m({{ \mathcal O }}))\cap \mathbb{W}^{\alpha}_{m_0}(0,T;H^{-\delta}_m({{ \mathcal O }}))$$
equipped with the norm
$$
\|\xi\|_{\mathbb{Z}'}:= \left(\|\xi\|_{L^{m_0}(0,T;H^\sigma_m)}^{m_0}+\|\xi\|_{\mathbb{W}^{\alpha}_{m_0}(0,T;H^{-\delta}_m)}^{m_0}\right)^{1/m_0},
$$
compare with Appendix \ref{dbouley-space}, where the compact embedding $\mathbb{Z'}\hookrightarrow\mathbb{Z}$ is discussed. The parameters $\sigma>0$, $\alpha\in (0,1)$, $\delta>0$ are specified in Proposition \ref{semigroup}.
Now,
define the space $\mathcal{M}_{\mathfrak{A}}(\mathbb{Y},\BX)$ of progressively measurable (pairs of) processes by
\begin{eqnarray*}
\mathcal{M}_{\mathfrak{A}}(\mathbb{Y},\BX)&:= &\Big\{ (\eta,\xi) \;\colon\; \eta,\xi:[0,T ]\times \Omega \to \Dcal^\prime(\Ocal) \;\mbox{such that} \\
&&\qquad\mbox{$\eta$ and $\xi$ are progressively measurable on $\mathfrak{A}$ and}\\
&&\qquad \mathbb{E} \|\eta\|^2_{\mathbb{Y}} <\infty \; \mbox{and}\; \mathbb{E} \|\xi\|^{m_0}_{\BX }<\infty\Big\}
\end{eqnarray*}
equipped with the norm
\begin{equation}\label{eq:MMnorm}\| (\eta,\xi)\|_{\mathcal{M}_{\mathfrak{A}}(\mathbb{Y},\BX )}:=\left( \mathbb{E} \|\eta\|^2_\mathbb{Y}\right)^\frac 12+\left( \mathbb{E} \|\xi\|^{m_0}_\BX \right)^\frac 1{m_0},
\quad (\eta,\xi)\in\mathcal{M}_{\mathfrak{A}}(\mathbb{Y},\BX ).
\end{equation}
Note that here, progressive measurability is meant relative to the Borel $\sigma$-fields of the target spaces $H_2^{-1}(\Ocal)$ and $H^{\rho}_2(\Ocal)$ respectively.
Finally, for fixed $R_0, R_1,R_2>0$, let us define the subspace $\mathcal{X}_{\mathfrak{A}}=\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$
by
\begin{eqnarray*}
&&\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\\
:=&&\Bigg\{ (\eta,\xi)\in{{ \mathcal M }}_\mathfrak{A}(\mathbb{Y},\BX )\;\colon\\
&& \qquad
\mathbb{E} \left[\sup_{0\le s\le T} |\eta(s)|_{L^{p^\ast}}^{p_0^\ast}\right]^\frac 1{{p_0^\ast}}\le R_0,\; \mathbb{E} \|\xi\|^{m_0}_{\BX}\le R_1,\; \mathbb{E} \|\xi\|_{\BH}^{m_0}\le R_2, \;\mbox{and}\\
&&\qquad \quad\eta\;\mbox{and}\;\xi\;\mbox{are nonnegative}\;\P\otimes\mbox{Leb-a.e. in}\;\Dcal^\prime(\Ocal)\Bigg\}.
\end{eqnarray*}
At this point, we note that by Remark \ref{rem:weakstar}, $\mathcal{X}_{\mathfrak{A}}$ is a sequentially weak$^\ast$-closed and bounded subset of $\mathcal{M}_{\mathfrak{A}}(\mathbb{Y},\BX)$. Every $(\eta,\xi)\in\mathcal{X}_{\mathfrak{A}}$ is a pair of (equivalence classes of) nonnegative functions (or of
nonnegative Borel measures that are finite on compact subsets of $\Ocal$).
\begin{remark}
Note that in order to apply Theorem \ref{ther_main} formally, we assume the obvious modification (or extension) of its statement and proof that allows us to
treat pairs of spaces with different exponents (here, $\mathcal{M}^{m}_{\mathfrak{A}}(\mathbb{Y},\BX)$ would not make sense, as in \eqref{eq:MMnorm} either $m=2$ or $m=m_0$). Another easy, yet less elegant, workaround would be to replace $2$ by $m_0$ in \eqref{eq:MMnorm}.
\end{remark}
\item {\bf The truncated system.}
Let $\phi \in \Dcal(\mathbb{R})$ be a smooth cutoff function that satisfies
$$
\phi(x) \begin{cases} =0, &\mbox{ if } |x|\ge 2,
\\ \in [0,1], &\mbox{ if } 1<|x|<2,
\\=1, &\mbox{ if } |x|\le 1,
\end{cases}
$$
and let $\phi_\kappa(x):= \phi(x/\kappa)$, $x\in{\mathbb{R}},\,\kappa \in \mathbb{N}$.
In addition, for any progressively measurable pair of processes $(\eta,\xi)\in{{ \mathcal M }}_\mathfrak{A}(\BY,\mathbb{Z})$
let us define for $t \in [0,T]$
\begin{eqnarray} \label{h12.def}
h(\eta,\xi,t )&:=& \left( \int_0^t |\eta(s)|_{L^{\gamma+1}}^{\gamma+1}\, ds\right)^\nu +\left(\int_0^ t|\xi(s)|_{L^m}^{m_0}\, ds
\right)^ \nu, \end{eqnarray}
where $\nu\in(0,1]$ is chosen such that $\frac 1{p_0^\ast}+\frac {\nu m_0}{\gamma+1}\le 1$ and $\frac 1{p_0^\ast}+\frac {\nu }{m_0}\le 1$.
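Let us record the effect of the truncation (an elementary observation, stated here as a sketch). Since $t\mapsto h(\eta,\xi,t)$ is nondecreasing, we have
\[
\phi_\kappa(h(\eta,\xi,t))=0\quad\text{for all}\quad t\ge \inf\{s\ge 0\;\colon\; h(\eta,\xi,s)\ge 2\kappa\},
\]
so, along each path, a nonlinearity truncated by the factor $\phi_\kappa(h(\eta,\xi,\cdot))$ is active only as long as both $\int_0^t |\eta(s)|_{L^{\gamma+1}}^{\gamma+1}\,ds$ and $\int_0^t |\xi(s)|_{L^m}^{m_0}\,ds$ stay below $(2\kappa)^{1/\nu}$.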
Let us consider the truncated system given by
\begin{equation}\label{eq:cutoffu}
\left\{ \begin{array}{rcl} d {\uk }(t) & =& \left[r_u \Delta (\uk (t))^{[\gamma]} -\chi \phi_\kappa(h (u_\kappa,v_\kappa,t))u_\kappa (t) v_\kappa^2(t) \right]\, dt
+\sigma_1\uk (t) dW_1(t),
\\
\uk (0) &=& u_0 , \phantom{\Big|}\end{array}\right.
\end{equation}
and
\begin{equation}\label{eq:cutoffv}
\left\{ \begin{array}{rcl}
d{\vk }(t) &=& \left[ r_v\Delta \vk (t) + \phi_\kappa (h(u_\kappa,v_\kappa,t))u_\kappa (t) v_\kappa^2(t) \right]\,dt
+\sigma_2 \vk (t)d W_2(t) .
\\
{\vk }(0) &=& v_0 . \phantom{\Big|}\end{array}\right.
\end{equation}
We shall show the existence of a martingale solution to system \eqref{eq:cutoffu}--\eqref{eq:cutoffv}.
\begin{proposition}\label{prop.exist.n}
For any $\kappa\in\mathbb{N}$, there exist constants $R_0>0$, $ R_1>0$, $R_2>0$, depending on $\kappa$, such that
there exists a martingale solution $(\uk,v_\kk)$ to system \eqref{eq:cutoffu}--\eqref{eq:cutoffv} contained in $\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_\mathfrak{A}(\mathbb{Y},\BX )$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop.exist.n}]
The proof consists of several steps. First, we shall define an operator denoted by $\mathcal{V}_\kappa$ which satisfies the assumptions of Theorem
\ref{ther_main}, yielding the existence of a martingale solution.
\begin{stepinner}
\item {\bf Definition of the operator $\Vcal_\kappa$.}
First, define
$$\Vcal_\kappa:=\mathcal{V}_{\kappa,\Afrak}:\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_\mathfrak{A}(\BY,\BX)\to{{ \mathcal M }}_\mathfrak{A}(\BY,\BX)$$
by
\[\mathcal{V}_\kappa(\eta,\xi):=(\uk,v_\kk),\quad\text{for}\quad(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\]
where $\uk$ is a solution to
\begin{equation}\label{eq:cutoffuu}
\left\{ \begin{array}{rcl} d {\uk }(t) & =& \left[r_u \Delta (\uk (t))^{[\gamma]} -\chi \phi_\kappa(h(\eta,\xi,t))\;\eta (t)\,\xi ^2(t) \right]\, dt
+\sigma_1\uk (t) \, dW_1(t),
\\
\uk (0) &=& u_0 , \phantom{\Big|}\end{array}\right.
\end{equation}
and $\vk$ is a solution to
\begin{equation}\label{eq:cutoffvv}
\left\{ \begin{array}{rcl}
d{\vk }(t) &=& \left[ r_v\Delta \vk (t) + \phi_\kappa(h(\eta,\xi,t))\; \eta (t)\,\xi ^2(t) \right]\,dt
+\sigma_2 \vk (t)\, d W_2(t) ,
\\
{\vk }(0) &=& v_0 . \phantom{\Big|}\end{array}\right.
\end{equation}
The operator $\mathcal{V}_\kappa$ is well defined for $(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$. In fact, by Theorem \ref{theou1}, given such a pair of processes $(\eta,\xi)$, the existence of a unique solution $\uk$ to \eqref{eq:cutoffuu} with
\begin{eqnarray} \label{constantu}
\mathbb{E} \|\uk\|_{\BY}^2 &\le& C_1(\kappa,T)
\end{eqnarray}
follows. By Proposition \ref{propvarational} and Proposition \ref{positivityv}, the existence of a unique solution $v_\kappa$ to \eqref{eq:cutoffvv} with
$$ \mathbb{E} \|\vk\|_{\BH}^{m_0}
\le \mathbb{E} |v_0|_{H^\rho_2}^{m_0} +C(\kappa,T)R_0,
$$
follows.
\item {\bf The $\Vcal_\kappa$-invariant set $\Xcal$.}
Let
\begin{eqnarray*}
C_0(T)\Big( \mathbb{E} |u_0|^{p^\ast_0}_{L^{p^\ast}}+1\Big)\le R_0 ,
\end{eqnarray*}
where the constant $C_0(T)$ is as in Proposition \ref{uniformlpbounds}.
Then, due to Proposition \ref{uniformlpbounds}, we know that
\begin{eqnarray*}
\mathbb{E} \left[ \sup_{0\le s\le T} |\uk(s)|^{p^\ast_0}_{L^{p^\ast}}\right]\vee \mathbb{E} \left[ \sup_{0\le s\le T} |\uk(s)|_{L^{\gamma+1}}^{\gamma+1}\right]
\le R_0.
\end{eqnarray*}
Furthermore, by Proposition \ref{propvarational}, the existence of a unique
solution to \eqref{eq:cutoffvv} such that
\begin{eqnarray} \label{constantvkk}
\mathbb{E} \|v_\kk\|_{\BH}^{m_0}&\le& \mathbb{E} |v_0|^{m_0}_{H^\rho_2}+R_0 C_2(\kappa,T)
\end{eqnarray}
follows.
Let $R_2:= \mathbb{E} |v_0|^{m_0}_{H^\rho_2}+R_0 C_2(\kappa,T)$.
Finally, by Proposition \ref{semigroup} we can show that for $ R_1\ge C(T)\left\{ \mathbb{E} |v_0|_{H_m^{-\delta_0}}^{m_0}+C(\kappa)\right\}$
(here the constants are given by Proposition \ref{semigroup})
\begin{eqnarray} \label{constantvkk1}
\mathbb{E} \|v_\kk\|_{\BZ}^{m_0}&\le& R_1.
\end{eqnarray}
Then,
$\Vcal_\kappa$ maps $\mathcal{X}_\mathfrak{A}(R_0, R_1,R_2)$ into itself.
Hence, assumption (i) of Theorem \ref{ther_main} is satisfied.
\item {\bf Continuity of $\Vcal_\kappa$ on $\Xcal$.}
Next we show that assumption (ii) of Theorem \ref{ther_main} is satisfied.
We need to show that
the restriction of the operator $\mathcal{V}_\kappa$ to $\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$ is continuous in the strong topology of ${{ \mathcal M }}_{\mathfrak{A}}(\BY,\BX)$.
Let $\{(\eta^{(n)},\xi^{(n)}):n\in\mathbb{N}\}\subset \mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$ be a sequence converging strongly to $(\eta,\xi)$ in ${{ \mathcal M }}_{\mathfrak{A}}(\BY,\BX)$.
Firstly, due to Proposition \ref{continuity} and Remark \ref{variationalremark}, we know that the sequence $\{\uk^{(n)}:n\in\mathbb{N}\}$, where $\uk^{(n)}$ solves
\begin{equation}\label{eqv1kk}
\left\{ \begin{array}{rcl}
d{\uk ^{(n)}}(t) &=& \left[ r_u\Delta (\uk^{(n)}(t))^{[\gamma]} -\chi \phi_\kappa(h(\eta^{(n)},\xi^{(n)},t))\, \eta^{(n)} (t)\,(\xi^{(n)}(t)) ^2 \right]\,dt\\
&& \quad +\sigma_1 \uk ^{(n)}(t)\,d W_1(t) ,
\\
u_\kappa^{(n)}(0) &=& u_0 . \phantom{\Big|}\end{array}\right.
\end{equation}
converges in ${{ \mathcal M }}_{\mathfrak{A}}(\BY,\BX)$ to $\uk$, which solves \eqref{eq:cutoffuu}.
Secondly, due to Proposition \ref{semigroup_continuous} and keeping Remark \ref{variationalremark} in mind, it follows that
the sequence $\{\vk^{(n)}:n\in\mathbb{N}\}$, where $\vk^{(n)}$ solves
\begin{equation}\label{equ1kk}
\left\{ \begin{array}{rcl}
d{\vk ^{(n)}}(t) &=& \left[ r_v\Delta \vk^{(n)} (t) + \phi_\kappa(h(\eta^{(n)},\xi^{(n)},t))\, \eta^{(n)} (t)\,(\xi^{(n)}(t)) ^2 \right]\,dt\\
&& \quad +\sigma_2 \vk ^{(n)}(t)\,d W_2(t) ,
\\
v_\kappa^{(n)}(0) &=& v_0 . \phantom{\Big|}\end{array}\right.
\end{equation}
converges in ${{ \mathcal M }}_{\mathfrak{A}}(\BY,\BX)$ to $\vk$, where $\vk$ solves \eqref{eq:cutoffvv}.
This gives that the operator
$$\Vcal_\kappa|_{\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)}:\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_{\mathfrak{A}}(\BY,\BX)\to{{ \mathcal M }}_{\mathfrak{A}}(\BY,\BX)$$
is continuous.
\item {\bf Tightness.}
Next we show that assumption (iii) of Theorem \ref{ther_main} is satisfied.
First, note that the embedding ${\mathbb{Z}'}\hookrightarrow {\mathbb{Z}}$
is compact by the Aubin-Lions-Simon Lemma \ref{th-gutman}.
Second, by Proposition \ref{semigroup} it follows that for any $\kappa\in\mathbb{N}$
there exists a constant $C=C(\kappa,T)>0$ such that for all $(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$
$$
\mathbb{E} \, |\vk|^{m_0}_{\mathbb{Z}'}\le C,
$$
where $\vk$ solves \eqref{eq:cutoffvv}. Observe that we have the compact embedding $L^{\gamma+1}({{ \mathcal O }})\hookrightarrow H^{-1}_2({{ \mathcal O }})$. Hence, by Proposition \ref{CC2} and by standard arguments (cf. \cite{ethier}), we know that
the laws of the set $\{ \uk:(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\}$ are tight on $C([0,T];H^{-1}_2({{ \mathcal O }}))$.
By Proposition \ref{CC2} and the Aubin-Lions-Simon Lemma \ref{th-gutman},
the laws of $\{ \uk:(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\}$ are tight on $L^{\gamma+1}(0,T;L^{\gamma+1}({{ \mathcal O }}))$.
Therefore, it follows that the laws of the set $\{ \mathcal{V}_\kappa(\eta,\xi):(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\}$ are tight on $\BY\times \BX$,
which implies assumption (iii) of Theorem \ref{ther_main} for
$$\mathbb{X}^\prime:=L^{\gamma+1}(0,T;L^{\gamma+1}({{ \mathcal O }}))\times\mathbb{Z}^\prime\hookrightarrow \mathbb{Y}\times \BX=:\mathbb{X}.$$
\item {\bf Continuity of paths.}
To show assumption (iv) of Theorem \ref{ther_main}, we notice that
by choosing
$$U:=H^{-1}_2(\Ocal)\times H_2^{\rho}({{ \mathcal O }}),$$
it follows from Proposition \ref{CC2} and standard arguments
that
$$\mathcal{V}_\kappa(\eta,\xi)\in C([0,T];U)\quad{\mathbb{P}}\text{-a.s.}$$
for all
$(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$.
\item {\bf Existence of a fixed point.}
Finally, by Theorem \ref{ther_main}, for each $\kappa\in\mathbb{N}$, there exists a probability space
$\tilde{\mathfrak{A}}_\kappa=(\tilde \Omega_\kappa,\tilde{{{ \mathcal F }}}_\kappa,\tilde{\mathbb{F}}_\kappa,\tilde{{\mathbb{P}}}_\kappa)$, a Wiener process $\tilde W^\kappa=(\tilde W^\kappa_1,\tilde W^\kappa_2)$, modeled on $\tilde {\mathfrak{A}}_\kappa$, and elements
$$(\tilde u_{\kappa},\tilde v_{\kappa})\in\mathcal{X}_{\tilde{\mathfrak{A}}_\kappa}(R_0,R_1,R_2)\cap\mathbb{D}([0,T];U)$$
such that we have a $\tilde {\mathbb{P}}_\kappa$-a.s. fixed point
$$
\mathcal{V}_{\kappa,\tilde{\Afrak}_\kappa}(\tilde u_{\kappa}(t),\tilde v_{\kappa}(t))=(\tilde u_{\kappa}(t),\tilde v_{\kappa}(t))
$$
for every $t\in[0,T]$.
Due to the construction of
$\mathcal{V}_{\kappa,\tilde{\Afrak}_\kappa}$, the pair $(\tilde u_{\kappa},\tilde v_{\kappa})$
solves the system \eqref{eq:cutoffuu}--\eqref{eq:cutoffvv} over the stochastic basis $\tilde{\Afrak}_\kappa$ with the Wiener noise $\tilde W^\kappa$, see Section \ref{sec:proof_of_schauder} for details.
\end{stepinner}
\end{proof}
{
\item
Here, we will construct a family of solutions $\{(\bar u_{\kappa} ,\bar v_{\kappa} ):\kappa\in\mathbb{N}\}$ following the solution to the original problem up to a stopping time ${\bar \tau}_\kappa$. More precisely, for each $\kappa\in\mathbb{N}$ we will introduce a new pair of processes $({\bar u}_{\kappa} , {\bar v}_{\kappa} )$ following the Klausmeier system up to the stopping time ${\bar \tau}_\kappa$. Moreover, we will have
$({\bar u}_{\kappa} , {\bar v}_{\kappa} )|_{[0,{\bar \tau}_\kappa)}=({\bar u}_{\kappa+1},{\bar v}_{\kappa+1})|_{[0,{\bar \tau}_\kappa)}$.
Let us start with $\kappa =1$. From Proposition \ref{prop.exist.n}, we know there exists a martingale solution consisting of a probability space $\mathfrak{A}_1=(\Omega_1,{{ \mathcal F }}_1,\mathbb{F}_1,{\mathbb{P}}_1)$, two independent Wiener processes $({W}^1_1,{W}^1_2)$
defined over $\mathfrak{A}_1$, and
a couple of processes $(u_1,v_1)$ solving ${\mathbb{P}}_1$--a.s.\ the system
\begin{eqnarray*}
\left\{ \begin{array}{rcl} du_1(t) &=&\left[r_u \Delta u_1^{[\gamma]}(t) -\chi \,\phi_1(h(u_1,v_1,t))\, u_1 (t)\,v_1^2(t)\right] dt+ \sigma_1 u_1(t)\,d {W}^1_1(t),
\\
dv_1(t)&=&\left[r_v \Delta v_1(t)+ \,\phi_1(h(u_1,v_1,t))\, u_1 (t)\,v_1^2(t)
\right]\,dt+\sigma_2 v_1(t)\, d {W}^1_2(t),
\\ (u_1(0),v_1(0))&=&(u_0,v_0).
\end{array}\right.\end{eqnarray*}
Let us define now the stopping time
\begin{align*}
\tau_1^\ast:=\inf \{s\ge 0\;\colon\; h(u_1,v_1,s) \ge 1 \}.
\end{align*}
Observe that, on the time
interval $[0,\tau_1^\ast)$, the pair $(u_1,v_1)$ solves the system given in
\eqref{equ1n}.
Now, we define a new pair of processes $(\bar u_1,\bar v_1)$ following $(u_1,v_1)$ on $[0,\tau_1^\ast)$ and extend these processes
to the whole interval $[0,T]$ in the following way.
First, we put $\overline{\mathfrak{A}}_1:=\mathfrak{A}_1$ and $\bar {{W}}_j^1:={W}_j^1$, $j=1,2$,
and let us introduce the processes $y_1$ and $y_2$
being a strong solution over $\overline{\mathfrak{A}}_1$ to
\begin{eqnarray}
d y_1(t, u_1(\tau^\ast_1),\sigma) &=& r_u\Delta y_1^{[\gamma]}(t, u_1(\tau^\ast_1),\sigma)\,dt + \sigma_1 y_1(t, u_1( \tau^\ast_1),\sigma) \,d(\theta_{\sigma} \bar{{W}}^1_1)(t)
\nonumber \\ \label{eq1} y_1(0,u_1(\tau^\ast_1))&=&u_1(\tau^\ast_1){}
,\quad t\ge 0 ,\\ \nonumber
y_2(t, v_1( \tau^\ast _1),\sigma) &=&e^{r_v\Delta t} v_1(\tau^\ast_1)
\\
\lefteqn{{} + \int_{0}^t e^{(r_v\Delta )(t-s)} \sigma_2 y_2 (s, v_1(\tau^\ast _1),\sigma) \,d(\theta_{\sigma} \bar{{W}}^1_2)(s)
,\quad t\ge 0.} && \label{eq2}
\end{eqnarray}
Here, $\theta_\sigma$ is the shift operator which maps ${W}_j(t)$ to ${W}_j(t+\sigma)-W_j(\sigma)$.
Since the couple $(u_1,v_1)$ is continuous in $H_2^{-1}({{ \mathcal O }})\times H^\rho_2({{ \mathcal O }})$, we know that $u_1(\tau^\ast_1)$ and $v_1(\tau^\ast_1)$ are well defined random variables belonging ${\mathbb{P}}_1$-a.s.\ to $L^2({{ \mathcal O }})$ and $H^\rho_2({{ \mathcal O }})$, respectively.
By \cite[Theorem 2.5.1]{BDPR2016}, there exists a unique solution $y_1$ over $\overline{\mathfrak{A}}_1$ to \eqref{eq1} in $H^{-1}_2({{ \mathcal O }})$. Since $(e^{t(r_v\Delta -\tilde{\alpha} \operatorname{Id})})_{t\ge 0}$ is an analytic semigroup on $H^\rho_2({{ \mathcal O }})$ for $\tilde{\alpha}>0$,
the existence of a unique solution $y_2$ over $\overline{\mathfrak{A}}_1$ to \eqref{eq2} in $H^\rho_2({{ \mathcal O }})$
can be shown by standard methods, cf. \cite{DaPrZa:2nd}. Furthermore, it is straightforward to verify that the assumptions on the initial conditions are satisfied.
Now, let us define two processes $\bar u_1 $ and $\bar v_1$
that coincide with $u_1$ and $v_1$, respectively, on the time interval $[0,\tau^\ast_1)$ and
afterwards follow the porous medium equation and the heat equation (with lower order terms), respectively, with noise and without nonlinearity, i.e., $y_1(\cdot,u_1(\tau^\ast_1),\tau^\ast_1)$ and $y_2(\cdot,v_1(\tau^\ast_1),\tau^\ast_1)$.
In particular, let
$$
\bar u_1 (t) = \begin{cases} u_1(t) & \mbox{ for } 0\le t< \tau^\ast_1,\\
y _1(t,u_1(\tau^\ast_1),\tau^\ast_1) & \mbox{ for } \tau^\ast_1\le t \le T,\end{cases}
$$
and
$$
\bar v_1 (t) = \begin{cases} v_1 (t) & \mbox{ for } 0\le t< \tau_1^\ast,\\
y_2 (t,v_1(\tau^\ast_1),\tau_1^\ast
) & \mbox{ for } \tau_1^\ast \le t \le T.\end{cases}
$$
Let us now construct the probability space and the processes for the next time interval. First, let $\mu_1$ denote the probability law of $(u_1(\tau^\ast_1),v_1(\tau^\ast_1))$ on $H_2^{-1}({{ \mathcal O }})\times H^\rho_2({{ \mathcal O }})$. Again, from Proposition \ref{prop.exist.n}, we know
there is a martingale solution consisting of a probability space $\mathfrak{A}_2=(\Omega_2,{{ \mathcal F }}_2,\mathbb{F}_2,{\mathbb{P}}_2)$, a pair of independent Wiener processes $({W}_1^2,{W}_2^2)$,
a couple of processes $(u_2,v_2)$ solving ${\mathbb{P}}_2$-a.s.\ the system
\begin{eqnarray*}
\left\{ \begin{array}{rcl} du_2(t) &=&\left[r_u \Delta u^{[\gamma]}_2(t) -\chi \,\phi_2(h(u_2,v_2,t))\, u_2 (t)\,v_2^2(t)\right] dt+ \sigma_1 u_2(t) \, d{W}^2_1(t),
\\
dv_2(t)&=&\left[r_v \Delta v_2(t)+\phi_2(h(u_2,v_2,t))\, u_2 (t)\,v_2^2(t)
\right]\,dt+\sigma_2 v_2(t)\, d {W}^2_2(t),
\\ (u_2(0),v_2(0))&\sim&\mu_1.
\end{array}\right.\end{eqnarray*}
Let us define now the stopping time on $\mathfrak{A}_2$,
\begin{align*}
\tau_2^\ast:=\inf \{s\ge 0\;\colon\; h(u_2,v_2,s) \ge 2 \}.
\end{align*}
Let $\overline{\mathfrak{A}}_1:=(\overline{\Omega}_1,\overline{{{ \mathcal F }}}_1,\overline{\mathbb{F}}_1,\overline{{\mathbb{P}}}_1):=\mathfrak{A}_1$, with $\overline{\mathbb{F}}_1:=(\overline{{{ \mathcal F }}}^1_t)_{t\in [0,T]}$.
Let $\overline{\Omega}_2:=\overline{\Omega}_1\times\Omega_2$, $\overline{{{ \mathcal F }}}_2:=\overline{{{ \mathcal F }}}_1\otimes {{ \mathcal F }}_2$, $\overline{{\mathbb{P}}}_2:=\overline{{\mathbb{P}}}_1\otimes{\mathbb{P}}_2$ and let
$\overline{\mathbb{F}}_2:=(\overline{{{ \mathcal F }}}^2_t)_{t\in [0,T]}$, where
$$\overline{{{ \mathcal F }}}^2_t:=\begin{cases} \overline{{{ \mathcal F }}}^1_t, & \mbox{if} \quad t<\tau_1^\ast,
\\ {{ \mathcal F }}^2_{t-\tau_1^\ast}, & \mbox{if}\quad t\ge \tau_1^\ast.
\end{cases}
$$
Let $\overline{\mathfrak{A}}_2:=(\overline{\Omega}_2,\overline{{{ \mathcal F }}}_2,\overline{\mathbb{F}}_2,\overline{{\mathbb{P}}}_2)$.
Finally, let us set for $j=1,2$
$$\bar{{W}}_j^2(t):=\begin{cases} \bar{{W}}^1_j(t), & \mbox{if} \quad t<\tau_1^\ast,
\\ {W}^2_j({t-\tau_1^\ast})+\bar{{W}}^1_j(\tau_1^\ast), & \mbox{if}\quad t\ge \tau_1^\ast.
\end{cases}
$$
This defines a pair of independent Wiener processes $\bar{{W}}^2_j$, $j=1,2$, with respect to the filtration $\overline{\mathbb{F}}_2$.
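To see why the concatenation again yields Wiener processes, consider $s<\tau_1^\ast\le t$ and split the increment (a sketch; the rigorous argument uses the strong Markov property and optional stopping):
\begin{align*}
\bar{{W}}^2_j(t)-\bar{{W}}^2_j(s)
= {W}^2_j(t-\tau_1^\ast)+\left[\bar{{W}}^1_j(\tau_1^\ast)-\bar{{W}}^1_j(s)\right].
\end{align*}
Conditionally on $\tau_1^\ast$, the two summands are independent centered Gaussian random variables with variances $t-\tau_1^\ast$ and $\tau_1^\ast-s$, since ${W}^2_j$ lives on the independent factor $\Omega_2$; hence the increment is centered Gaussian with variance $t-s$ and independent of $\overline{{{ \mathcal F }}}^2_s$.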
Now, let us define two processes $\bar u_2 $ and $\bar v_2$
that coincide with $\bar u_1$ and $\bar v_1$, respectively, on the time interval $[0,\tau^\ast_1)$, coincide with $u_2$ and $v_2$ on the time interval $[\tau^\ast_1,\tau^\ast_1+\tau_2^\ast)$ and
afterwards follow the porous medium equation and the heat equation (with lower order terms), respectively, with multiplicative noise.
Let us note that, for any initial condition distributed as $u_2(\tau_2^\ast)$ and $v_2(\tau^\ast_2)$, there exist strong solutions $y_1(\cdot ,\cdot ,\tau^\ast_2+\tau^\ast_1)$ and $y_2(\cdot,\cdot ,\tau^\ast_2+\tau^\ast_1)$ of the systems \eqref{eq1} and \eqref{eq2}, respectively, on $\overline{\mathfrak{A}}_2$.
Let for $t\in[0,T]$
$$
\bar u_2 (t) = \begin{cases} \bar u_1(t) & \mbox{ for } 0\le t< \tau^\ast_1,\\
u_2(t-\tau_1^\ast ) & \mbox{ for } \tau^\ast_1\le t\le \tau^\ast_1+\tau_2^\ast,\\
y _1(t-(\tau^\ast_1+\tau^\ast_2),u_2(\tau^\ast_2),\tau^\ast_1+\tau^\ast_2) & \mbox{ for } \tau^\ast_2+\tau^\ast_1\le t \le T,\end{cases}
$$
$$
\bar v_2 (t) = \begin{cases} \bar v_1 (t) & \mbox{ for } 0\le t< \tau_1^\ast,\\
v_2(t-\tau_1^\ast ) & \mbox{ for } \tau^\ast_1\le t\le \tau^\ast_1+\tau_2^\ast,\\
y_2 (t-(\tau^\ast_1+\tau^\ast_2),v_2(\tau^\ast_2),\tau_1^\ast+\tau^\ast_2
) & \mbox{ for } \tau_1^\ast+\tau^\ast_2 \le t \le T.\end{cases}
$$
In the same way, for any $\kappa \in\mathbb{N}$ we construct a probability space $\overline{\mathfrak{A}}_\kappa $, a pair of independent Wiener processes $(\bar{{W}}^\kappa_1 ,\bar{{W}}^\kappa_2 )$ over $\overline{\mathfrak{A}}_\kappa $, and a pair of processes $(\bar{u}_\kappa,\bar{v}_\kappa)$ starting at $(u_0,v_0)$, solving system \eqref{eq:cutoffu}-\eqref{eq:cutoffv} up to the time
\begin{eqnarray} \label{stopp_time}\bar \tau_\kappa :=\tau_1^\ast+\cdots +\tau_\kappa ^\ast
\end{eqnarray} and following the porous medium equation and the heat equation, respectively, afterwards.
Besides, we know that
$(\bar u_{\kappa} ,\bar v_{\kappa} )|_{[0,\bar \tau _{\kappa -1})}=(\bar u_{\kappa -1},\bar v_{\kappa -1})|_{[0,\bar \tau_{\kappa -1})}$.
\item {\bf Uniform bounds on the stopping time.}
Let us consider the family $\{ (\bar{u}_\kappa,\bar{v}_\kappa)\;\colon\; \kappa\in\mathbb{N}\}$. The next aim is to show that there exist $R_0$, $R_1$ and $R_2>0$, independent of $\kappa\in\mathbb{N}$, such that
$\{ (\bar{u}_\kappa,\bar{v}_\kappa)\;\colon\; \kappa\in\mathbb{N}\}\subset \mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$.
First, note that due to Proposition \ref{uniform1} there exists a constant $C_0(1,T)>0$ such that
\begin{eqnarray} \label{est2}
\\\nonumber
\lefteqn{
\mathbb{E} \left[\sup_{0\le s\le \bar\tau_\kappa\wedge T} |\bar u_\kk (s)|_{L^{2} }^{2}\right]
+ 2\gamma r_u \mathbb{E} \int_0^ {\bar\tau_\kappa\wedge T}\int_{{ \mathcal O }} |\bar u_\kk (s,x )|^{\gamma-1}|\nabla \bar u_\kk (s,x)|^2\, dx \, ds} &&
\\\nonumber &&{} +2\chi \mathbb{E} \int_0^{\bar\tau_\kappa \wedge T} \int_{{ \mathcal O }} |\bar u_\kk(s,x )|^2| \bvk (s,x )|^2\, dx \, ds \le C_0(1,T)\,\left( \mathbb{E} |u_0|_{L^{2}}^{2}+1\right).
\end{eqnarray}
From above we can conclude that
there exists a constant $C>0$ such that
\begin{eqnarray*}
\mathbb{E} \|\bar u_\kk\mathbbm{1}_{[0,\bar\tau_\kappa\wedge T]}\|_{\BY}^2\le C.
\end{eqnarray*}
Here, it is important that
$\bar u_\kk\ge 0$.
This estimate can be extended to the time interval $[0,T]$ by standard results (see e.g.\ \cite{BDPR2016} and Proposition \ref{uniform1}).
{Next, let us choose $R_2>0$ so large that
\begin{eqnarray} \label{constantR2}
C(T)\left(
\mathbb{E} |v_0|_{H^{-\delta_0}_{m}}^{m_0}+ R_0^{\delta_1}\,
R_2^{\delta_1}\right)\le R_2,
\end{eqnarray}
where the constants $C(T)$, $\delta_0$, $\delta_1$ are given in Proposition \ref{propvarational2}.
Observe that, by Proposition \ref{propvarational2},
we have for $l$ as in Hypothesis \ref{init}
\begin{eqnarray} \label{constantR22}\hspace{+6ex}
\mathbb{E} \|\bvk\|_{\mathbb{Z}}^{m_0}\le C(T)\left(
\mathbb{E} |v_0|_{H^{-\delta_0}_{m}}^{m_0}+\Bigg( \mathbb{E} \left(\int_0^{\bar\tau_\kappa \wedge T} \uk^{2}(s)\vk^2(s)\,ds
\right)^l\Bigg)^{\delta_1}\,
R_1^{\delta_1}\right).
\end{eqnarray}
Then, by Proposition \ref{uniform1} (applied with $p=1$), the term on the right-hand side of \eqref{constantR22} satisfies
$$
\mathbb{E} \left(\int_0^{\bar\tau_\kappa \wedge T} \uk^{2}(s)\vk^2(s)\,ds
\right)^l\le \mathbb{E} |u_0|_{L^{2}}^l,
$$
which is uniformly bounded by $R_0^l$. Hence, we conclude by \eqref{constantR2} that $ \mathbb{E} \|\bvk\|_{\BZ}^{m_0}\le R_1$
and the choice of $R_2$ that $ \mathbb{E} \|\bvk\|_{\BH}^{m_0}\le R_2$.
Finally, by Proposition \ref{interp_rho}, we know that $\BH\hookrightarrow \mathbb{Z}$ provided $\frac d2 -\rho\le \frac 2m+\frac d {m_0}$; due to the choice of $m$ and $m_0$, this inequality is satisfied.
Hence, setting
\begin{eqnarray} \label{constantR3}
R_1&\ge&R_2,
\end{eqnarray}
we know that
$ \mathbb{E} \|\bvk\|_{\mathbb{Z}}^{m_0}\le R_1$.
In particular, there exist $R_0$, $R_1$ and $R_2$ such that for all $\kappa\in\mathbb{N}$ we have
\begin{equation}\label{constantR1} \mathbb{E} \left[\sup_{0\le s\le T}|\bar u_\kappa(s)|_{L^{p^\ast}}^{p^\ast_0}\right]\le R_0^{p_0^\ast},
\end{equation}
$ \mathbb{E} \|\bvk\|_{\BZ}^{m_0}\le R_1$, and
$ \mathbb{E} \|\bvk\|_{\BH}^{m_0}\le R_2$.
}
}
\item {\bf Passing on to the limit.}
In the final step, we will show that ${\mathbb{P}}$-a.s. a martingale solution to \eqref{eq:uIto}-\eqref{eq:vIto} exists.
\begin{claim} There exists a measurable set $\tilde{\Omega}\subset \Omega$ with ${\mathbb{P}}(\tilde \Omega)=1$ such that a martingale solution $(u,v)$ to the
system \eqref{eq:uIto}-\eqref{eq:vIto} exists on $\tilde\Omega$.
\end{claim}
\begin{proof}
For any $\kappa\in\mathbb{N}$,
let us define the set
$$A_\kappa :=
\left\{ \omega\in \Omega\;\colon\;\bar\tau_\kappa(\omega)\ge T\right\}.
$$
Observe that
there exists a progressively measurable process $(u,v)$ over $\mathfrak{A}$
such that, on $A_\kappa$, the pair $(u,v)$ solves the integral equation given by
\eqref{eq:uIto}-\eqref{eq:vIto} up to time $T$. In particular, we have for the conditional probability
$$
{\mathbb{P}}\left( \{ \mbox{there exists a solution $(u,v)$ to \eqref{eq:uIto}-\eqref{eq:vIto}} \} \mid A_\kappa\right)=1.
$$
%
%
Set $\tilde \Omega=\bigcup_{\kappa=1}^{\infty} A_\kappa$. Then, it is elementary to verify that
\begin{eqnarray*} \lefteqn{
\qquad{\mathbb{P}}\left( \left\{ \mbox{there exists a solution $(u,v)$ to \eqref{eq:uIto}-\eqref{eq:vIto}}
\right\}\cap\tilde{\Omega}\right) }\\
&=& \lim_{\kappa\to \infty} {\mathbb{P}}\left( \{ \mbox{there exists a solution $(u,v)$ to \eqref{eq:uIto}-\eqref{eq:vIto}} \} \cap A_\kappa \right)\\
&=& \lim_{\kappa\to \infty} \left[{\mathbb{P}}\left( \{ \mbox{there exists a solution $(u,v)$ to \eqref{eq:uIto}-\eqref{eq:vIto}} \} \mid A_\kappa \right){\mathbb{P}}\left( A_\kappa\right)\right].
\end{eqnarray*}
%
Since $ {\mathbb{P}}\left( \{ \mbox{a solution $(u,v)$ to \eqref{eq:uIto}-\eqref{eq:vIto} exists} \} \mid A_\kappa\right)=1$, it remains to show that
${\lim_{\kappa\to\infty}}$ ${\mathbb{P}}( A_\kappa)=1$. Then, as $A_\kappa\subset A_{\kappa+1}$ for $\kappa\in\mathbb{N}$, it follows that
$${\mathbb{P}}\left( \left\{ \mbox{there exists a solution $(u,v)$ to \eqref{eq:uIto}-\eqref{eq:vIto}}
\right\}\cap\tilde{\Omega}\right)=1.
$$
Indeed, due to Step IV, there exists a constant $C(T)>0$ such that
$$
\mathbb{E} \left[ h(\bar u_\kk,\bvk,t)\right]\le C(T),\quad t\in[0,T],\;\kappa\in\mathbb{N},
$$
%
thus by the Markov inequality,
$${\mathbb{P}}\left(\Omega\setminus A_\kappa\right) \le \frac {C(T)}{\kappa }\to 0.
$$
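In more detail, the Markov estimate can be sketched as follows, assuming (as the definition of $\bar\tau_\kappa$ suggests) that $h(\bar u_\kk,\bvk,\cdot)$ attains the level $\kappa$ at time $\bar\tau_\kappa$ whenever $\bar\tau_\kappa<T$:
\begin{align*}
{\mathbb{P}}\left(\Omega\setminus A_\kappa\right)
={\mathbb{P}}\left(\bar\tau_\kappa<T\right)
\le {\mathbb{P}}\left( h(\bar u_\kk,\bvk,\bar\tau_\kappa\wedge T)\ge \kappa \right)
\le \frac1\kappa\, \mathbb{E} \left[ h(\bar u_\kk,\bvk,\bar\tau_\kappa\wedge T)\right]
\le \frac {C(T)}{\kappa },
\end{align*}
where the last step uses the bound from Step IV, extended to the stopped time.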
%
Thus the solution process is well defined on $\tilde \Omega=\bigcup_{\kappa=1}^{\infty} A_\kappa$ with ${\mathbb{P}}(\tilde \Omega)=1$.
\end{proof}
\end{steps}
The proof of Theorem \ref{mainresult} is complete.
\end{proof}
\section{Proof of the stochastic Schauder-Tychonoff theorem}\label{sec:proof_of_schauder}
\begin{proof}[Proof of Theorem \ref{ther_main}]
Fix
$\Afrak$ and $W$ and fix any $v^{(0)}\in\Xcal(\Afrak)$.
\begin{step}
\item For $k\in\mathbb{N}$, define recursively $v^{(k)}:=\Vcal_{\Afrak,W}(v^{(k-1)})$ with initial datum $v^{(k)}(0)=v_0\in L^m(\Omega,\Fcal_0,\P;E)$.
We claim that the laws of the sequence $\{ v^{(k)}\}_{k\in\mathbb{N}}$
are tight on $\mathbb{X}$. By compactness of
the embedding $\mathbb{X}^{\prime}\hookrightarrow\mathbb{X}$, and by reflexivity
of $\mathbb{X}^{\prime}$, it follows by standard arguments that
\[
K_{R}:=\{\xi\in\mathbb{X}\;\colon\;|\xi|_{\mathbb{X}^{\prime}}^{m_0}\le R\}
\]
is compact in $\mathbb{X}$. By assumption (i), $v^{(k)}\in\Xcal(\Afrak)$ for every
$k\in\mathbb{N}$, and thus it follows by assumption (iii) that $v^{(k)}\in K_{R}$ for
any $k\in\mathbb{N}$. Therefore, the probability laws $\rho_k:=\text{Law}(v^{(k)})$, $k\in\mathbb{N}$, of the sequence $\{v^{(k)}\}_{k\in\mathbb{N}}$
are tight on $\mathbb{X}$.
\item By Step I and Prokhorov's theorem, we know that there exists
a (nonrelabeled) subsequence $\{\rho_{k}\}_{k\in\mathbb{N}}$ and a Borel probability
law $\rho^{\ast}$ on $\mathbb{X}$ such that
\[
\rho_{k}\rightharpoonup\rho^{\ast}
\]
as $k\to\infty$ weakly in the sense of probability measures. Next,
by the Skorokhod lemma \cite[Theorem 4.30]{Kallenberg}, there exists a probability space $(\tilde{\Omega},\tilde{\Fcal},\tilde{\P})$,
a sequence of $\mathbb{X}$-valued random variables $\{\tilde{v}^{(k)}\}_{k\in\mathbb{N}}$
and $\tilde{v}^\ast$ such that
\begin{eqnarray} \label{equallaw1}
\text{Law}(\tilde{v}^{(k)})=\rho_{k},\quad k\in\mathbb{N},\quad\text{Law}(\tilde{v}^\ast)=\rho^{\ast},
\end{eqnarray}
and such that
\[
\tilde{v}^{(k)}\to\tilde{v}^\ast\quad\tilde{\P}\text{-a.s.}
\]
on $\mathbb{X}$ as $k\to\infty$.
\item
For each $k\in\mathbb{N}$, let $\tilde M^{(k)}$ be the process given by
\begin{eqnarray} \label{qv1}
\\ \nonumber
\tilde M^{(k)}(t) &:=&\tilde v^{(k)}(t)-\tilde{v}_0^{(k)}-\int_0^ t\left[ A \tilde v ^{(k)}(s)+F(\tilde v ^{(k-1)}(s),s)\right]\, ds,
\end{eqnarray}
and let
$$
\tilde {{ \mathcal G }}_t^{(k)}:=\sigma \left( \left\{\,\left( \tilde v^{(k)}(s),\tilde v^{(k-1)}(s)\right)\;\colon\; 0\le s\le t \, \right\}\right),\quad k\in\mathbb{N},\quad t\in [0,T],
$$
where $\tilde{v}_0^{(k)}$ is a $\tilde{\Gcal}^{(k)}_0$-measurable version of $v_0$.
Then, $\tilde M^{(k)}$ is a local martingale over $(\tilde \Omega,\tilde {{ \mathcal F }},\tilde {\mathbb{P}})$ with respect to $(\tilde {{ \mathcal G }}_{t}^{(k)})_{t\in[0,T]}$
that has the quadratic variation
\begin{equation}\label{qv11}
\langle\!\langle \tilde M^{(k)}\rangle\!\rangle (t):=\langle\!\langle \tilde M^{(k)}(t),\tilde M^{(k)}(t)\rangle\!\rangle =
\int_0^ t \sum_{i\in\mathbb{I}}|\Sigma(\tilde v^{(k)}(s))\psi_i|_{E}^2\, ds, \quad t\in[0,T].
\end{equation}
The representation \eqref{qv11} follows from \eqref{equallaw1} and
{the fact that} $v^{(k)}=\mathcal{V}_{\mathfrak{A},W}(v^{(k-1)})$, {see \eqref{spdes}}.
To show that $\langle\!\langle \tilde M^{(k)}\rangle\!\rangle$ is adapted to the filtration $ (\tilde {{ \mathcal G }}_{t}^{(k)})_{t\in[0,T]}$, we
use the fact that by \eqref{equallaw1}
for all measurable maps $\varphi:\mathbb{X}\to [0,+\infty]$ and all $0\le s\le t\le T$ we have
$$ \mathbb{E} \left[\left(\tilde M^{(k)}(t)-\tilde M^{(k)}(s)\right) \varphi(\mathbbm{1}_{[0,s)}\tilde v^{(k)}) \right]=0.
$$
\item
Next, we verify the following statements with the goal to pass on to the limit:
\begin{itemize}
\item there exists a constant $C>0$ such that $ \sup_{k\in{\mathbb{N}}} \tilde{\mathbb{E}} \left[ |\tilde v^{(k)}
|^{m_0}_{\mathbb{X}}\right]\le C$ and
\item for any $r\in (1,m_0)$ we have
$$\lim_{k\to \infty}\tilde {{\mathbb{E}}}\left[ |\tilde
v^{(k)}-\tilde v^{\ast} |_{\mathbb{X}}^r\right] = 0. $$
\end{itemize}
Clearly, since $\{ v^{(k)}\}_{k\in\mathbb{N}}\subset \mathcal{X}(\mathfrak{A})$, and $\mathcal{X}(\mathfrak{A})$ is bounded in $\mathbb{X}$,
we can conclude from the application of the Skorokhod lemma in Step II that
\[
\mathbb{E}|v^{(k)}|_{\mathbb{X}}^{r}=\tilde{\mathbb{E}}|\tilde{v}^{(k)}|_{\mathbb{X}}^{r},
\]
for any $r\in [1,m_0]$,
so that we get by (iii) that
\[
\sup_{k}\tilde{\mathbb{E}}|\tilde{v}^{(k)}|_{\mathbb{X}}^{m_0}\le CR,
\]
where $C>0$ is a constant such that $|\cdot|_{\mathbb{X}}\le C|\cdot|_{\mathbb{X}^\prime}$.
Hence, we know that $\{|\tilde{v}^{(k)}|_{\mathbb{X}}^{r}\}$
is uniformly integrable for any $r\in (1,m_0]$ w.r.t. the probability measure $\tilde{\P}$.
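The uniform integrability invoked here follows from the $m_0$-moment bound by a standard truncation argument: for $r\in(1,m_0)$ and any level $M>0$,
\[
\sup_{k}\tilde{\mathbb{E}}\left[|\tilde{v}^{(k)}|_{\mathbb{X}}^{r}\,\mathbbm{1}_{\{|\tilde{v}^{(k)}|_{\mathbb{X}}>M\}}\right]
\le M^{r-m_0}\,\sup_{k}\tilde{\mathbb{E}}\left[|\tilde{v}^{(k)}|_{\mathbb{X}}^{m_0}\right]
\le CR\, M^{r-m_0}\longrightarrow 0\quad\text{as } M\to\infty.
\]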
By Step II, $\tilde{v}^{(k)}\to\tilde{v}^\ast$ $\tilde{\P}$-a.s., so
we get by the Vitali convergence theorem that
\begin{equation}
\lim_{k\to\infty}\tilde{\mathbb{E}}\left|\tilde{v}^{(k)}-\tilde{v}^\ast\right|_{\mathbb{X}}^{r}=0\label{eq:strong-v}
\end{equation}
for any $r\in(1,m_0)$.
\item
Since $\sup_{k}\tilde{\mathbb{E}}|\tilde{v}^{(k)}|_{\mathbb{X}}^{m_0}<\infty$ by Step IV, the family $\{ | \tilde v^{(k)}|_{\mathbb{X}}^{r}\}_{k\in\mathbb{N}}$ is uniformly integrable with respect to
the probability measure $\tilde{\mathbb{P}}$ for any $r<m_0$. Moreover,
\[
\int_0^ t \sum_{i\in\mathbb{I}}\left|\Sigma\left(\tilde v^{(k)}(s)\right)\psi_i\right|_{E}^2\, ds
\le
C\,\int_0^ t|\tilde v^{(k)}(s)|_{\mathbb{X}}^2\, ds,\quad t\in [0,T].
\]
Hence we invoke the Vitali convergence theorem to take the mean square limit in~\eqref{qv1}
and set $v^\ast_0:=\lim_{k\to\infty} \tilde{v}_0^{(k)}=\lim_{k\to\infty} \tilde{v}^{(k)}(0)$ as well as
\[
\tilde{M}^\ast:=\lim_{k\to\infty} \tilde{M}^{(k)} .
\]
Next, let
$\tilde {\mathbb{G}}=(\tilde {\mathcal{G}}_t)_{t\in [0,T]}$
be the
filtration defined by
\begin{equation*} \label{eq:filtrat}
\tilde {\mathcal{G}}_t= \sigma\left(\sigma(\{ \tilde v^\ast(s) \;\colon\; 0\le s\le t\})\cup \mathcal{N}\right),\quad t\in[0,T],
\end{equation*}
where $\mathcal{N}$ denotes the collection of $\tilde{{\mathbb{P}}}$-null sets of $\tilde {\mathcal{F}}$.
Since $v^{(k)}\in \mathbb{X}$ for every $k\in\mathbb{N}$, the limit is well defined, and for all measurable maps $\varphi:\mathbb{X}\to [0,+\infty]$ and all $0\le s\le t\le T$ we have
$$\tilde{ \mathbb{E} } \left[\left(\tilde M^\ast(t)-\tilde M^\ast(s)\right) \varphi(\mathbbm{1}_{[0,s)}\tilde v^\ast ) \right]=0.
$$
We can easily conclude that $\tilde{M}^\ast$ is a local martingale with respect to $\tilde{\mathbb{G}}$ such that
\begin{equation}
\langle\!\langle \tilde{M}^{\ast}\rangle\!\rangle(t) =
\int_0^ t \sum_{i\in\mathbb{I}}|\Sigma(\tilde{v}^{\ast}(s))\psi_i|_{E}^2\, ds, \quad t\in[0,T].
\end{equation}
By the representation theorem \cite[Theorem 8.2]{DaPrZa:2nd}, and more specifically, for the case of Banach space valued noise, \cite{Dettweiler} and \cite[Proof of Theorem 4.5]{BrzezniakGatarek}, there exists a filtered probability space
$(\tilde{\tilde{\Omega}},\tilde{\tilde{{{ \mathcal F }}}},\tilde{\tilde{{\mathbb{P}}}})$ with filtration
$(\tilde{\tilde{{{ \mathcal F }}}}_t)_{t\in[0,T]}$,
a Wiener
process $\tilde W$ on $H$ with covariance $Q$ defined on
$\tilde\mathfrak{A}:=(\tilde \Omega \times \tilde{\tilde{\Omega}},\tilde {{ \mathcal F }}\otimes \tilde{\tilde{{{ \mathcal F }}}},
\tilde {\mathbb{P}}\otimes \tilde{\tilde{{\mathbb{P}}}})$ and adapted to $(\tilde {{ \mathcal G }}_t\otimes \tilde{\tilde{{{ \mathcal F }}}}_t)_{t\in[0,T]}$
such that
$$
\tilde{M}^\ast(t,\tilde \omega,\tilde{\tilde{\omega}})=\int_0^ t \Sigma(\tilde{v}^\ast(s,\tilde \omega)) \, d\tilde W(s,\tilde \omega,\tilde{\tilde{\omega}}),\quad t\in[0,T],\,(\tilde \omega,\tilde{\tilde{\omega}})\in \tilde \Omega\times \tilde{\tilde{\Omega}}.
$$
In this way, we obtain a process $v^\ast\in C([0,T];U)$ over $\tilde\mathfrak{A}$ together with
the Wiener process $\tilde W$ on $H$ over $\tilde\mathfrak{A}$.
\item
In this step we show that $v^\ast$ over $\tilde\mathfrak{A}$ together with $\tilde W$ is indeed a martingale solution to \eqref{spdes}.
Let us consider the Banach space
${{{ \mathcal M }}}^m_{\tilde\mathfrak{A}} (\mathbb{X})$ over $\tilde\mathfrak{A}$
and the convex subset
$\mathcal{X}(\tilde {\mathfrak{A}})$. The operator $\Vcal_{\tilde{\Afrak},\tilde{W}}$ acts now on the space ${\mathcal{X}}(\tilde \mathfrak{A})$ and is defined by system \eqref{spdes}.
Fix $\varepsilon>0$. Set, for simplicity, $\tilde{\Vcal}:=\Vcal_{\tilde{\Afrak},\tilde{W}}$.
We claim that
\[
\tilde{\mathbb{E}}\left|\tilde{\Vcal}(\tilde{v}^\ast)-\tilde{v}^\ast\right|_{\mathbb{X}}<\varepsilon.
\]
By Step IV and the Banach-Alaoglu theorem, we get that $\tilde{v}^{(k)}\rightharpoonup\tilde{v}^\ast$
weakly$^\ast$ for $k\to\infty$ in the Banach space $\Mcal_{\tilde{\Afrak}}^{m}$. By the weak$^\ast$ closedness assumption, we get that $\tilde{v}^\ast\in\Xcal(\tilde{\Afrak})$.
By assumption (ii), $\tilde{\Vcal}$
is continuous on $\Xcal(\tilde{\Afrak})$ and hence there exists $\delta=\delta(\varepsilon)>0$
such that
\[
\tilde{\mathbb{E}}\left|\tilde{\Vcal}(\tilde{v}^\ast)-\tilde{\Vcal}(\tilde{v}^{(k-1)})\right|_{\mathbb{X}}^{m}<\frac{\varepsilon}{2},
\]
whenever $\tilde{\mathbb{E}}|\tilde{v}^\ast-\tilde{v}^{(k-1)}|_{\mathbb{X}}^{m}<\delta$,
which is the case by Step IV (and the assumption $m_0\ge m$) for some large $k$. Also, by construction,
\[
\tilde{\mathbb{E}}\left|\tilde{\Vcal}(\tilde{v}^{(k-1)})-\tilde{v}^{(k)}\right|_{\mathbb{X}}=\mathbb{E}\left|\Vcal(v^{(k-1)})-v^{(k)}\right|_{\mathbb{X}}=0.
\]
Furthermore, by \eqref{eq:strong-v},
\[
\tilde{\mathbb{E}}\left|\tilde{v}^{(k)}-\tilde{v}^\ast\right|_{\mathbb{X}}<\frac{\varepsilon}{2}
\]
for some large $k$. Altogether, writing
\[
\tilde{\Vcal}(\tilde{v}^\ast)-\tilde{v}^\ast=\tilde{\Vcal}(\tilde{v}^\ast)-\tilde{\Vcal}(\tilde{v}^{(k-1)})+\tilde{\Vcal}(\tilde{v}^{(k-1)})-\tilde{v}^{(k)}+\tilde{v}^{(k)}-\tilde{v}^\ast,
\]
we can easily complete the proof of the claim. As a consequence,
\[\tilde{\Vcal}(\tilde{v}^\ast)=\tilde{v}^\ast,\quad\tilde{\P}\text{-a.s.}\]
As seen above, $\tilde{v}^\ast\in\Xcal(\tilde{\Afrak})$,
so that by (iv), $\tilde{\Vcal}(\tilde{v}^\ast)\in\mathbb{D}([0,T];U)$, and
therefore $\tilde{v}^\ast\in\mathbb{D}([0,T];U)$ $\tilde{\P}$-a.s. Hence for
all $t\in[0,T]$, $\tilde{\P}$-a.s.
\[
\tilde{\Vcal}(\tilde{v}^\ast)(t)=\tilde{v}^\ast(t)
\]
and the proof is complete. By construction and Step V, we see that $v^\ast$ solves
\begin{eqnarray} \label{spdes_infact}
dv^\ast(t) &=&\left(A v^\ast(t)+ F(v^\ast(t),t)\right)\, dt +\Sigma(v^\ast(t))\,d\tilde{W}(t),\quad v^\ast(0)=v^\ast_0,
\end{eqnarray}
on $\tilde\Afrak$, where $v_0^\ast$ is a $\tilde{\Gcal}_0\otimes{\tilde{\tilde{\Fcal}}}_0$-measurable version of $v_0$.
\end{step}
\end{proof}
\section{Results on regularity and technical propositions}\label{technics}
This section contains the remaining results which are used in the proof of the main result Theorem \ref{mainresult}.
\subsection{Assumptions on the noise and consequences}\label{subsec:noise}
Let us recall, we denoted by $\{\psi_k:k\in\mathbb{N}\}$ the eigenfunctions of the Laplace operator $-\Delta$ in $L^2({{ \mathcal O }})$ and by $\{\nu_k:k\in\mathbb{N}\}$ the corresponding eigenvalues,
where the enumeration is chosen in increasing order counting the multiplicity.
Let us characterize the asymptotic behavior of $\{\nu_k:k\in\mathbb{N}\}$ and $\{\psi_k:k\in\mathbb{N}\}$ for an arbitrary domain ${{ \mathcal O }}$ with $C^2$--boundary. Here, we know by Weyl's law \cite{Weyl1911,Weyl1912} that there exists a constant $C>0$ such that,
\begin{eqnarray} \label{abs_ev}
\# \{ j\;\colon\; \nu_j\le \lambda \} \le C \lambda^ \frac d2,
\end{eqnarray}
compare with \cite{LiYau,Hoermander} and \cite[Corollary 17.8.5]{HoermanderIII}, and there exists a constant $C>0$ such that,
\begin{eqnarray} \label{abs_infty}
\sup_{x\in{{ \mathcal O }}}|\psi_k(x)| &\le& C \nu_k^\frac {d-1}2, \quad k\in\mathbb{N},
\end{eqnarray}
compare with \cite{grieser}.
\begin{remark}
If ${{ \mathcal O }}=[0,1]^d$ is a rectangular domain, then
a complete orthonormal system $\{ \psi_k:k\in\mathbb{Z}^d \}$ of the underlying Lebesgue space $L^2({{ \mathcal O }})$ is given by the trigonometric functions, see \cite[p.\ 352]{Garrett1989} (here, $\mathbb{Z}$ denotes the integers). Let us first define the eigenfunctions of the Laplace operator on $L^2([0,1])$:
\begin{equation}\label{ONS1}
e_k(x)=
\begin{cases}
{\sqrt{2}} \, \sin\big(2\pi{k} x\big) &\!\!\text{if } k\geq 1,\,x\in{{ \mathcal O }}, \\
{1} &\!\!\text{if } k = 0,\, x\in{{ \mathcal O }}\,, \\
{\sqrt{2}}\, \cos\big(2 \pi{k} x\big) &\!\!\text{if } k \leq - 1, x\in{{ \mathcal O }}.
\end{cases}
\end{equation}
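For instance, one verifies directly that each $e_k$ is indeed an eigenfunction: for $k\ge 1$,
\[
-\frac{d^2}{dx^2}\Bigl(\sqrt{2}\,\sin(2\pi k x)\Bigr)=4\pi^2 k^2\,\sqrt{2}\,\sin(2\pi k x),
\]
so the eigenvalue associated with $e_k$ is $4\pi^2 k^2$; the computation for $k\le -1$ is identical.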
For instance, the eigenfunctions for $L^2([0,1]^2)$ for the multi-index $k=(k_1,k_2)\in \mathbb{Z}^2$ are given by the tensor product
$$
\psi_{k}(x_1,x_2)= e_{k_1}(x_1)e_{k_2}(x_2),\quad x\in [0,1]^2,
$$
the corresponding eigenvalues are given by $\nu_k=4\pi^2|k|^2$, where $|k|^2=k_1^2+k_2^2$. The case $d=3$ works analogously.
In this special case the conditions on $\delta_1$ and $\delta_2$ in Hypothesis \ref{wiener} can be relaxed to
$$ \lambda_k^{(1)}\le C \nu_k^{\delta_1},\quad \lambda^{(2)}_k\le C\nu_k^{\delta_2},
\quad k=(k_1,k_2),
$$
with $\delta_1,\delta_2>\frac 12$. See also the discussion in \cite[Examples 1.2.1 and 2.1.2]{BDPR2016}.
\end{remark}
We begin with a remark on the noise coefficients. We partly work in the Banach spaces $L^m({{ \mathcal O }})$ and $H^m_\delta({{ \mathcal O }})$ for $\delta$ arbitrarily small but positive.
Given a Wiener process $W$ on $H=L^2({{ \mathcal O }})$ over $\mathfrak{A}=(\Omega,\Fcal,\mathbb{F},{\mathbb{P}})$ and a progressively measurable process $\xi\in\mathcal{M}^2_\mathfrak{A}(L^2({{ \mathcal O }}))$,
let us define $\{Y(t):t\in[0,T]\}$ by
$$
Y(t):=\int_0^ t \xi(s)\, d {W}(s), \quad t\in[0,T].
$$
Here, for each $t\in[0,T]$, $\xi(t)$ is interpreted as a multiplication operator acting on the
elements of $H$, namely, $\xi:H\ni\psi \mapsto \xi\psi\in {{ \mathcal S }}'(\Ocal)$, where ${{ \mathcal S }}'(\Ocal)$ denotes the space of \emph{tempered distributions} on $\Ocal$.
Let $E$ be a Banach space.
An inequality needed in several places within the proof is the Burkholder-Davis-Gundy inequality, which for $p\ge 1$ reads
\begin{equation*}\label{BDG1}
\mathbb{E} \left[\sup_{t \in [0, T]} |Y(t)|^p_{E}\right] \leq C(p) \, \mathbb{E} \,\left[ \int_0^T |\xi(t)|^2_{\gamma(H,E )}\, dt\right]^\frac p2,
\end{equation*}
where $\gamma(H,E)$ denotes the space of $\gamma$-radonifying operators and $|\cdot|_{\gamma(H,E)}$ the corresponding norm, cf. \cite{brzezniak2,vanneerven}. In case $E$ is a Hilbert space, the $\gamma$-norm coincides with the Hilbert-Schmidt norm $|\cdot|_{L_{\textup{HS}}(H,E)}$. See Appendix \ref{sec:BDG} for further details.
By $|\xi\psi_k|_{L^m}\le |\xi|_{L^m}\, |\psi_k|_{L^\infty}$ and \eqref{abs_infty}
we know that
\begin{eqnarray*}
|\xi|^2_{\gamma (H, L^m )}\le \sum_{k\in\mathbb{N}} \lambda_k |\xi\psi_k|_{L^m}^2\le |\xi|^2_{L^m} \sum_{k\in\mathbb{N}}\lambda_k |\psi_k|_{L^\infty}^2
\le C\, |\xi|^2_{L^m} \sum_{k\in\mathbb{N}} \nu_k^{2(1 -\delta)},
\end{eqnarray*}
which is finite if $\delta>\frac 12$.
\begin{itemize}
\item Firstly, let $E=H^{-1}_2({{ \mathcal O }})$. Since
$$|\xi\psi|_{H^{-1}_2}\le |\xi|_{H^{-1}_2}|\psi|_{H^\delta_2}\le |\xi|_{H^{\rho}_2}|\psi|_{L^\infty},$$
we know that, if $\delta>\frac 12$, then
$$ |\sigma_1 (\xi)|_{\gamma(H,H^{-1}_2)}\le C\, |\xi|_{H^{-1}_2}.
$$
\item Secondly,
let $E=H^{\rho}_2({{ \mathcal O }})$, where $\rho$ is given in Hypothesis \ref{init}. Since
$$|\xi\psi|_{H^{\rho}_2}\le |\xi|_{H^{\rho}_2}|\psi|_{H^{2|\rho|}_2}\le |\xi|_{H^{\rho}_2}|\psi|_{L^\infty},$$
we know that, if $\delta>\frac 12$, then
$$ |\sigma_2 (\xi)|_{\gamma(H,H^{\rho}_2)}\le C\, |\xi|_{H^{\rho}_2}.
$$
\end{itemize}
\begin{remark}\label{LSSigma}
From Hypothesis \ref{wiener},
one can infer that
there exists a constant $C>0$ such that
\[\sum_{k=1}^\infty [(\sigma_j (\xi) f_k)(x)]^2\le C|\xi(x)|_{L^2}^2,\quad \forall \xi\in L^2({{ \mathcal O }}),\;x\in\Ocal,\mbox{ and } j=1,2.\]
Here $\{f_k\}$ is an orthonormal basis in $H^{-1}_2({{ \mathcal O }})$, compare with \cite[Hypothesis 3, p.\ 42]{BDPR2016}.
In addition,
note that
\begin{itemize}
\item Firstly, $\sigma_j:H^{-1}_2({{ \mathcal O }})\to L_{\textup{HS}}(H_j,H^{-1}_2({{ \mathcal O }}))$
is of linear growth and Lipschitz continuous.
In particular, there exist constants $C_1,L_1>0$ such that
\begin{eqnarray*}
|\sigma_j(\xi)|_{L_{\textup{HS}}(H_j,H^{-1}_2)} &\le& C_1(1+|\xi|_{H^{-1}_2}),\quad \xi\in H^{-1}_2({{ \mathcal O }});
\\ |\sigma_j(\xi)-\sigma_j(\eta)|_{L_{\textup{HS}}(H_j,H^{-1}_2)}&\le& L_1 |\xi-\eta|_{H^{-1}_2}, \quad \xi,\eta\in H^{-1}_2({{ \mathcal O }}).
\end{eqnarray*}
\item Secondly, $\sigma_j:L^2({{ \mathcal O }})\to L_{\textup{HS}}(H_j,L^2({{ \mathcal O }}))$
is of linear growth and Lipschitz continuous.
In particular, there exist constants $C_2,L_2>0$ such that
\begin{eqnarray*}
|\sigma_j(\xi)|_{L_{\textup{HS}}(H_j,L^2)} &\le& C_2(1+|\xi|_{L^2}),\quad \xi\in L^2({{ \mathcal O }});
\\ |\sigma_j(\xi)-\sigma_j(\eta)|_{L_{\textup{HS}}(H_j,L^2)}&\le& L_2 |\xi-\eta|_{L^2}, \quad \xi,\eta\in L^2({{ \mathcal O }}).
\end{eqnarray*}
\item
Similarly, by straightforward computations using the fact that $|f_k|_{L^\infty}\le C\nu_k^\frac {d-1}2 $, see \eqref{abs_infty} and \cite[p.\ 46]{BDPR2016}, we get that
\begin{eqnarray*}
|\sigma_2(\xi)|_{\gamma(H_2,L^m)} &\le& C_1(1+|\xi|_{L^m}),\quad \xi\in L^m({{ \mathcal O }});
\\ |\sigma_2(\xi)-\sigma_2(\eta)|_{{\gamma}(H_2,L^m)}&\le& L_1 |\xi-\eta|_{L^m}, \quad \xi,\eta\in L^m({{ \mathcal O }}),
\end{eqnarray*}
where $\gamma(H_2,L^m)$ denotes the space of $\gamma$-radonifying operators, cf. \cite{brzezniak2,vanneerven}.
\end{itemize}
\end{remark}
\subsection{Properties of the subsystem \eqref{eq:cutoffuu}}
In this subsection, we analyze equation \eqref{eq:cutoffuu}, where $\kappa\in\mathbb{N}$ and the couple $(\xi,\eta)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_\mathfrak{A}(\BY,\mathbb{Z})$ are given.
First, we will show that a unique solution $u_\kappa$ to the system \eqref{eq:cutoffuu} exists and is nonnegative. Second, we will show by variational methods that this solution satisfies some integrability properties, given $u_0\in L^ {p+1}(\Ocal)$, $p\ge 1$.
\begin{theorem}\label{theou1}
Assume that the Hypotheses of Theorem \ref{mainresult} hold.
For any $(\xi,\eta)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_\mathfrak{A}(\BY,\BX)$, and
for any $u_{0}\in L^{2}(\Omega,\Fcal_{0},\P;H^{-1}_2({{ \mathcal O }}))$ the system
\begin{equation}
d\uk(t)=r_u\Delta \uk ^{[\gamma]}(t)\, dt - \chi\phi_\kappa (h(\eta,\xi,t))\, \eta(t)\xi^2(t)\,dt +\sigma_1 \uk (t)\,dW_1(t),
\label{eq:u-spde}
\end{equation}
with $\uk(0)=u_0$ has a unique solution $\uk$ on the time interval $[0,T]$ which is $\P$-a.s. continuous in $H_2^{-1}$ and satisfies
$$ \mathbb{E} \left[\sup_{0\le s\le T} |\uk(s)|^2_{H^{-1}_2}\right]+ \mathbb{E} \int_0^T|\uk(s)|_{L^{\gamma+1}}^{\gamma+1}\, ds
<\infty.
$$
In particular, there exists a constant $C(T,\kappa)>0$ such that
$$ \mathbb{E} \|\uk\|^2_{\BY}\le C(T,\kappa).
$$
\end{theorem}
\begin{proof}
Before starting with the proof,
we introduce the setting used in the books by Barbu, Da Prato and R\"ockner \cite{BDPR2016} and by Liu and R\"ockner \cite{weiroeckner}.
Let ${{ \mathcal H }}:=H_{2}^{-1}(\Ocal)$, the dual space of ${{ \mathcal H }}^\ast={H}_{2}^{1}(\Ocal)$
(corresponding to Neumann boundary conditions). Let $V:=L^{\gamma+1}(\Ocal)$.
By the Sobolev embedding theorem, $\Hcal^{\ast}\hookrightarrow L^{(\gamma+1)/\gamma}(\Ocal)$
densely and compactly. Thus, upon identifying $\Hcal$ with its own dual space
via the Riesz-map $(-\Delta)^{-1}$ of $\Hcal$, we have a Gelfand triple
\[
V\subset {{ \mathcal H }}\equiv{{ \mathcal H }}^{\ast}\subset V^{\ast}.
\]
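Concretely, the identification of $\Hcal$ with $\Hcal^{\ast}$ via the Riesz map $(-\Delta)^{-1}$ amounts to computing the dualization between $V^{\ast}$ and $V$ as follows (a sketch of the standard identity, which is exactly the form appearing in \eqref{calculations} below):
\[
\mathbin{_{V^{\ast}}\langle u,w\rangle_{V}}=\int_{\Ocal}\left[(-\Delta)^{-1}u\right]\!(x)\, w(x)\,dx,\qquad u\in\Hcal,\;w\in V.
\]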
We set
\[
A(t,u,\omega):=r_u\Delta u^{[\gamma]}-\chi\phi_\kappa(h(\eta(\omega),\xi(\omega),t))\xi^{2}(t,\omega)\eta(t,\omega).
\]
Note that, due to the assumption $(\xi,\eta)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_\mathfrak{A}(\BY,\BX)$ and by \eqref{h12.def}, the processes $\eta$ and $\xi$ are adapted, and
there exists a constant $C>0$ such that
\begin{equation}
\mathbb{E} \left[\sup_{0\le s\le T} |\eta(s)|_{L^{p^\ast}}^{p^\ast_0}\right]
\le C,\label{eq:v-bound-L1-a}
\end{equation}
and for any $q_2\in [1,m_0]$,
\begin{equation}
\left( \int_{0}^{T}|\phi_\kappa(h(\eta,\xi,t))\xi(t)|_{L^{m}}^{m_0}
\,dt\right)\le (2\kappa)^{1/\nu}.\label{eq:v-bound-L1}
\end{equation}
Hence, by H\"{o}lder inequality and the Sobolev embedding $L^1({{ \mathcal O }})\hookrightarrow H^{-2}_{\frac{\gamma+1}{\gamma}} ({{ \mathcal O }})$,
we have for fixed $\omega\in\Omega$ and fixed $t\in [0,T]$,
\begin{eqnarray} \label{calculations}
&&\qquad\lefteqn{ \left|\int_{\Ocal}(-\Delta)^{-1}(\chi\phi_\kappa(h(\eta,\xi,t))\xi^{2}(t,x)\eta(t,x))w\,dx\right|}
\\\nonumber
&\le &
\chi|(-\Delta)^{-1}(\phi_\kappa(h(\eta,\xi,t))\xi^{2}(t)\eta(t))|_{L^{(\gamma+1)/\gamma}}|w|_{L^{\gamma+1}}\\\nonumber
&\le& \chi C|\phi_\kappa^{1/4}(h(\eta,\xi,t))\xi(t)|_{L^{m}}^{2}|\phi_\kappa^{1/2}(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast}}|w|_{L^{\gamma+1}}
\\\nonumber
&\le &
\chi\left[C|\phi_\kappa^{1/4}(h(\eta,\xi,t))\xi(t)|_{L^{m}}^{m_0}+C|\phi_\kappa^{1/2}(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast}}^{p_0^\ast}\right]|w|_{L^{\gamma+1}}.
\end{eqnarray}
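The last step in \eqref{calculations} is Young's inequality; spelled out (with the exponent relation $\frac{2}{m_0}+\frac{1}{p_0^\ast}\le 1$, which we take as implicitly assumed here), it reads

```latex
ab \;\le\; \frac{2}{m_0}\,a^{m_0/2}+\frac{1}{p_0^\ast}\,b^{p_0^\ast},
\qquad
a:=|\phi_\kappa^{1/4}(h(\eta,\xi,t))\xi(t)|_{L^{m}}^{2},
\quad
b:=|\phi_\kappa^{1/2}(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast}},
```

so that $a^{m_0/2}$ is exactly the $L^m$-norm raised to the power $m_0$ appearing in the last line.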
Then,
under the assumption \eqref{eq:v-bound-L1}, we shall verify the conditions
of \cite[Theorem 5.1.3]{weiroeckner} for
$\gamma>1$.
First note that for fixed $t\in[0,T]$ and fixed $\omega\in\Omega$,
$A$ maps from $V$ to $V^{\ast}$. In particular, by the calculations done in \eqref{calculations}
we have for $\gamma>1$
\begin{eqnarray} \label{eq:A-bounded}
&&\left|\mathbin{_{V^{\ast}}\langle A(t,u),w\rangle_{V}}\right| \\
&=& \nonumber\left|\int_{\Ocal}\left[r_u u^{[\gamma]}(x)w(x)+(-\Delta)^{-1}(\chi\phi_\kappa(h(\eta,\xi,t))\xi^{2}(t,x)\eta(t,x))w(x)\right]\,dx\right|
\\
\nonumber & \le&\left| r_u|u|_{L^{\gamma+1}}^{\gamma}|w|_{L^{\gamma+1}}+\chi|(-\Delta)^{-1}(\phi_\kappa(h(\eta,\xi,t))\xi^2(t) \eta(t))|_{L^{(\gamma+1)/\gamma}}|w|_{L^{\gamma+1}}\right|\\
\nonumber & \le&\left[r_u|u|_{L^{\gamma+1}}^{\gamma}+C\chi|\phi_\kappa(h(\eta,\xi,t))\xi^{2}(t)\eta(t)|_{L^{1}}\right]|w|_{L^{\gamma+1}}\\
\nonumber & \le&\left[r_u|u|_{L^{\gamma+1}}^{\gamma}+C\chi|\phi_\kappa^{1/4}(h(\eta,\xi,t))\xi(t) |_{L^{m}}^{2}|\phi_\kappa^{1/2}(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast }}\right]|w|_{L^{\gamma+1}}\\
\nonumber & \le&\left[r_u|u|_{L^{\gamma+1}}^{\gamma}+C\chi|\phi_\kappa^{1/4}(h(\eta,\xi,t))\xi(t) |_{L^{m}}^{m_0}+C\chi|\phi_\kappa^{1/2}(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast }}^{p_0^\ast }\right]|w|_{L^{\gamma+1}},
\end{eqnarray}
where only $\xi$, $\eta$ and $h(\eta,\xi,\cdot)$ depend on $t$ and $\omega$ and
$C$ may change
from line to line.
Next, we verify Hypotheses (H1), (H2$^{\prime}$), (H3), and (H4$^{\prime}$) of \cite[Theorem 5.1.3]{weiroeckner}.
\begin{enumerate}
\item[(H1):] For $\lambda\in\mathbb{R}$, $u_{1},u_{2},w\in L^{\gamma+1}(\Ocal)$, $t\in[0,T]$,
$\omega\in\Omega$ consider the map
\[
\lambda\mapsto\langle A(t,u_{1}+\lambda u_{2}),w\rangle
\]
and show that it is hemicontinuous.
Note, that we have
\begin{align*}
&\langle A(t,u_{1}+\lambda u_{2}),w\rangle\\
=&-r_u\int_{\Ocal}(u_{1}(x)+\lambda u_{2}(x))^{[\gamma]}w(x)\,dx\\
&-\chi\int_{\Ocal}(-\Delta)^{-1}[\phi_\kappa(h(\eta,\xi,t))\xi^{2}(t,x)\eta(t,x) ]w(x)\,dx,
\end{align*}
where only $\xi$, $\eta$ and $h(\eta,\xi,\cdot)$ depend on $t$ and $\omega$.
For the first integral
in the above identity, we prove hemicontinuity with the same arguments
as in \cite[Example 4.1.11, p. 87]{weiroeckner}. The second integral is independent of
$\lambda$, so assumption (H1) of \cite[Theorem 5.1.3]{weiroeckner} is satisfied.
\item[(H2$^{\prime}$):] Let $u,w\in L^{\gamma+1}(\Ocal)$, $t\in[0,T]$, $\omega\in\Omega$. Take
\eqref{eq:porous-medium-inequality} into account and consider
\begin{eqnarray*}
\lefteqn{ \mathbin{_{V^{\ast}}\langle A(t,u)-A(t,w),u-w\rangle_{V}}+|\sigma_1 u - \sigma_1 w|^2_{L_{\textup{HS}}(H_1,{{ \mathcal H }})}}
\\
&\le & -r_u\int_{\Ocal}(u^{[\gamma]}(x)-w^{[\gamma]}(x))(u(x)-w(x))\,dx+C|u-w|_{{{ \mathcal H }}}^{2}\\
&\le & -2^{1-\gamma}r_u|u-w|_{L^{\gamma+1}}^{\gamma+1}+C|u-w|_{{{ \mathcal H }}}^{2}\\
&\le & C|u-w|_{{{ \mathcal H }}}^{2},
\end{eqnarray*}
and (H2$^{\prime}$) of \cite[Theorem 5.1.3]{weiroeckner} is satisfied.
\item[(H3):] Let $u\in L^{\gamma+1}(\Ocal)$, $t\in[0,T]$, $\omega\in\Omega$. Straightforward calculations, taking into account Hypothesis \ref{init}, give
\begin{eqnarray*}
\lefteqn{
\mathbin{_{V^{\ast}}\langle A(t,u),u\rangle_{V}}+|\sigma_1 u|^2_{L_{\textup{HS}}(H_1,\Hcal)}}&&
\\
&\le & -\int_{\Ocal}\left[r_u u^{[\gamma]}(x)u(x)+\chi(-\Delta)^{-1}(\phi_\kappa(h(\eta,\xi,t))\xi^{2}(t,x)\eta(t,x))u(x)\right]\,dx+C|u|_{\Hcal}^{2}
\\
&\le & -r_u|u|_{L^{\gamma+1}}^{\gamma+1}+ C\chi|\phi^{1/4}_\kappa(h(\eta,\xi,t))\xi(t)|^2_{L^{m}}|\phi_\kappa^{1/2}(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast}}|u|_{L^{\gamma+1}}+C|u|_{\Hcal}^{2}
\\
&\le & -\frac {r_u}2|u|_{L^{\gamma+1}}^{\gamma+1}+ C\chi|\phi^{1/4}_\kappa(h(\eta,\xi,t))\xi(t)|^{2\frac {\gamma+1}\gamma}_{L^{m}} |\phi^{1/2}_\kappa(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast}}^{\frac{\gamma+1}\gamma}+C|u|_{\Hcal}^{2}
\\
&\le &
-\frac {r_u}2|u|_{L^{\gamma+1}}^{\gamma+1}+
\underset{=:f(t)}
{\underbrace{ C\chi|\phi^{1/4}_\kappa(h(\eta,\xi,t))\xi(t)|_{L^{m}}^{m_0}+
C\chi|\phi_\kappa^{1/2}(h(\eta,\xi,t))\eta(t)|_{L^{p^\ast}}^{p_0^\ast}}} +C|u|_{\Hcal}^{2}.
\end{eqnarray*}
Due to the definition of the cut-off, i.e.\ by \eqref{eq:v-bound-L1-a} and \eqref{eq:v-bound-L1}, the function $f$ is integrable.
As a consequence, (H3) of \cite[Theorem 5.1.3]{weiroeckner} holds with $f$ as defined above and $\theta:=r_u$.
\item[(H4$^{\prime}$):] Let $u\in L^{\gamma+1}({{ \mathcal O }})$, $t\in[0,T]$, $\omega\in\Omega$. Then,
(H4$^{\prime}$) of \cite[Theorem 5.1.3]{weiroeckner} holds by \eqref{eq:A-bounded} with $\alpha:=\gamma+1$,
$\beta:=0$ and $f$ given above.
\end{enumerate}
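The estimate in (H2$^{\prime}$) rests on the elementary monotonicity inequality for the porous-medium nonlinearity, which we take to be the content of \eqref{eq:porous-medium-inequality}: for all $a,b\in\mathbb{R}$ and $\gamma\ge 1$,

```latex
\bigl(a^{[\gamma]}-b^{[\gamma]}\bigr)(a-b)\;\ge\;2^{1-\gamma}\,|a-b|^{\gamma+1},
\qquad a^{[\gamma]}:=|a|^{\gamma-1}a,
```

compare \cite[Example 4.1.11]{weiroeckner}.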
The rest of the proof follows by an application of \cite[Theorem 5.1.3]{weiroeckner}, noting
that
\[f\in L^1([0,T]\times\Omega)\]
by \eqref{eq:v-bound-L1-a} and \eqref{eq:v-bound-L1}.
\end{proof}
\begin{proposition}\label{positivityu}
Under the conditions of Theorem \ref{theou1} and the additional conditions $u_0(x)\ge 0$ for all $x\in{{ \mathcal O }}$ and $\xi(t,x)\ge 0$, $\eta(t,x)\ge 0$, for a.e. $(t,x)\in[0,T]\times {{ \mathcal O }}$, the solution $u_\kappa$
to \eqref{eq:cutoffuu} is a.e. nonnegative.
\end{proposition}
\begin{proof}
For the nonnegativity of the solution to \eqref{eq:cutoffuu}, we refer to the proof of positivity for the stochastic porous medium equation in \cite[Section 2.6]{BDPR2016}; see also \cite{BDPR3}. Mimicking that proof and applying a comparison principle \cite{Kotelenez1992}, the nonnegativity of the solution to \eqref{eq:cutoffuu} can be shown.
\end{proof}
In the next proposition we use variational methods to verify uniform bounds on $u_\kappa$, $\kappa\in\mathbb{N}$.
\begin{proposition}\label{uniformlpbounds}
Fix $p\ge 1$. Then, there exists a constant $C_0(p,T)>0$ such that for every $u_0\in L^{p+1}(\Omega,\Fcal_0,{\mathbb{P}};L^{p+1}({{ \mathcal O }} ))$,
and for every $\kappa\in\mathbb{N}$ and for every $(\xi,\eta)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)$ such that $\uk$ solves \eqref{eq:cutoffuu},
the following estimate
\begin{eqnarray*} \nonumber
\lefteqn{ \mathbb{E} \left[\sup_{0\le s\le T} |\uk (s)|_{L^{p+1} }^{p+1}\right]
+ \gamma p(p+1)r_u \mathbb{E} \int_0^ \TInt\int_{{ \mathcal O }} |\uk(s,x)|^{p+\gamma-2} |\nabla \uk (s,x)|^2\, dx \, ds} &&
\\\nonumber &&{} +(p+1)\chi \mathbb{E} \int_0^\TInt \int_{{ \mathcal O }} \uk^{[p]}(s,x) \phi_\kappa (h(\eta,\xi,s))\eta(s,x)\xi^2(s,x)\, dx \, ds
\\
& \le& C_0(p,T)\,\left( \mathbb{E} |u_0|_{L^{p+1}}^{p+1}+1\right)
\end{eqnarray*}
is valid.
\end{proposition}
\begin{proof}
By It\^o's formula for the functional $u\mapsto|u|^{p+1}_{L^{p+1}}$, we get that $u_\kappa$ satisfies for $t\in [0,T]$,
\begin{align*}
&|u_\kappa (t)|^{p+1}_{L^{p+1}}-|u_0|^{p+1}_{L^{p+1}}\\
\le &(p+1)r_u \int_0^t\int_\Ocal \Delta(u_\kappa^{[\gamma]}(s,x))u^{[p]}_\kappa(s,x)\,dx\,ds\\
&-(p+1)\chi\int_0^t\int_\Ocal \phi_\kappa (h(\eta,\xi,s))\eta(s,x)\xi^2(s,x) u^{[p]}_\kappa(s,x)\,dx\,ds\\
&+M(t)+\frac{p}{2}(p+1)\int_0^t |u_\kappa(s)|^{p-1}_{L^{p+1}}|\sigma_1 u_\kappa(s)|_{\gamma(H_1,L^{p+1})}^2\,ds,
\end{align*}
where $t\mapsto M(t)$ is a local martingale. Integrating by parts, taking expectation (we may as well apply the Burkholder-Davis-Gundy inequality, see Section \ref{sec:BDG}) and rearranging yields,
\begin{align*}
& \mathbb{E} \left[|u_\kappa (s)|^{p+1}_{L^{p+1}}\right]+(p+1)r_u \mathbb{E} \int_0^t\int_\Ocal \nabla (u_\kappa^{[\gamma]}(s,x))\cdot\nabla (u_\kappa^{[p]}(s,x))\,dx\,ds\\
&+(p+1)\chi \mathbb{E} \int_0^t\int_\Ocal \phi_\kappa (h(\eta,\xi,s))\eta(s,x)\xi^2(s,x) u^{[p]}_\kappa(s,x)\,dx\,ds\\
\le& \mathbb{E} |u_0|^{p+1}_{L^{p+1}}+\frac{p}{2}(p+1)C(p) \mathbb{E} \int_0^t |u_\kappa(s)|^{p+1}_{L^{p+1}}\,ds+ \mathbb{E} \int_0^t|\sigma_1 u_\kappa(s)|_{\gamma(H_1,L^{p+1})}^{(p+1)/(p-1)}\,ds,
\end{align*}
and hence by Remark \ref{LSSigma} and Gronwall's lemma,
\begin{align*}
& \mathbb{E} \left[\sup_{0\le s\le T}|u_\kappa (s)|^{p+1}_{L^{p+1}}\right]+\gamma p(p+1)r_u \mathbb{E} \int_0^T\int_\Ocal |\nabla u_\kappa(s,x)|^2 |u(s,x)|^{\gamma+p-2} dx\,ds\\
&+(p+1)\chi \mathbb{E} \int_0^T\int_\Ocal \phi_\kappa (h(\eta,\xi,s)) \eta(s,x)\xi^2(s,x) u^{[p]}_\kappa(s,x)\,dx\,ds\\
\le&C_0(p,T)\left( \mathbb{E} |u_0|^{p+1}_{L^{p+1}}+1\right).
\end{align*}
\end{proof}
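The coefficient $\gamma p(p+1)r_u$ of the dissipative term in Proposition \ref{uniformlpbounds} arises from the pointwise chain-rule identity (with $u^{[\gamma]}=|u|^{\gamma-1}u$):

```latex
\nabla (u^{[\gamma]})\cdot\nabla (u^{[p]})
=\gamma|u|^{\gamma-1}\,p\,|u|^{p-1}\,|\nabla u|^2
=\gamma p\,|u|^{\gamma+p-2}\,|\nabla u|^2,
```

which, inserted into the integration by parts above, produces the stated gradient term.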
We remark that the above result now permits an application of Proposition \ref{runst1}. Note that the inequality becomes particularly useful when
$u_\kappa\ge 0$ and $\eta\ge 0$.
\subsection{Properties of the subsystem \eqref{eq:cutoffvv}}
Given the couple $(\eta,\xi)\in{{ \mathcal M }}_\mathfrak{A}(\BY,\mathbb{X})$,
we consider the solution $\vk$ to the equation \eqref{eq:cutoffvv}.
In the next proposition we investigate existence, uniqueness and regularity of the process $v_\kk$.
\begin{proposition}\label{propvarational}
Assume that the Hypotheses of Theorem \ref{mainresult} hold.
Then, for any pair $(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_\mathfrak{A}(\BY,\mathbb{Z})$ and for any initial datum $v_0$ as in Hypothesis \ref{init},
there exists a solution $\vk$ to \eqref{eq:cutoffvv} on the time interval $[0,T]$
such that $v_\kappa$ is continuous in $H^\rho_2(\Ocal)$ and such that
$$ v_\kk\in L^\infty(0,T;H^{\rho}_2({{ \mathcal O }}))\cap L^2(0,T;H^{\rho+1}_2({{ \mathcal O }}))\quad {\mathbb{P}}\mbox{-a.s.}
$$
In addition, for all $\kappa\in\mathbb{N}$ there exists a constant $C=C(T,\kappa)>0$ such that
\begin{eqnarray} \label{estimatesol2}
\mathbb{E} \|\vk\|_{\BH}^{m_0}
\le \mathbb{E} |v_0|_{H^\rho_2}^{m_0} +C(\kappa,T)R_0.
\end{eqnarray}
\end{proposition}
\begin{proof}
Let us consider the following system,
\begin{eqnarray} \label{wwwoben3}
\qquad d w(t)&=&r_v\Delta w(t)\,dt+\sigma_2 w(t)\, dW_2(t),\quad w(0)=w_0\in H^\rho_2({{ \mathcal O }}).
\end{eqnarray}
A solution to \eqref{wwwoben3} is given by standard methods (see e.g. \cite[Chapter 6]{DaPrZa:2nd} or \cite[Theorem 4.2.4]{weiroeckner})
such that for any $q\ge 1$,
\begin{eqnarray} \label{erstes3}
\mathbb{E} \|w\|^q_{C([0,T];H_2^\rho)}\le C_1(T)\quad \mbox{and}\quad \mathbb{E} \|w\|^q_{L^2([0,T];H^{\rho+1}_2)}\le C_2(T).
\end{eqnarray}
The solution to \eqref{eq:cutoffvv} is given by
$$
v_\kk(t)=w(t)+\int_0^ t e^{r_v\Delta(t-s)} F(s)\,ds,\quad t\in[0,T].
$$
The Minkowski inequality, the smoothing property of the heat semigroup and the Sobolev embedding $L^1({{ \mathcal O }})\hookrightarrow H^{\rho-\delta}_2({{ \mathcal O }})$ for $\delta-\rho+\frac d2>d$ give
\begin{eqnarray} \label{zweites3}
\lefteqn{\left|\int_0^ t e^{r_v\Delta(t-s)}F(s)\,ds\right|_{H^\rho_2} \le\int_0^ t \left|e^{r_v\Delta(t-s)}F(s)\right|_{H^\rho_2}\, ds
}&&
\\\nonumber
&\le & \int_0^ t (t-s)^{-\frac \delta2} \left|F(s)\right|_{H^{\rho-\delta}_2}\, ds
\le C(T)\, \left(\int_0^ t \left|F(s)\right|_{L^1}^\mu\, ds\right)^\frac 1 \mu.
\end{eqnarray}
Setting
$$F=\phi_\kappa(h(\eta,\xi,\cdot))\eta\xi^2,$$
we obtain by the H\"older inequality, for $\delta/2<1-\frac 1\mu$ and $\frac {2\mu}{m_0}<1$,
\begin{eqnarray} \label{zweites.2}
\lefteqn{\qquad\qquad\int_0^ T |F(s)|_{L^1}^\mu\, ds
}
&&
\\
&\le& \sup_{0\le s\le T} |\eta(s)|_{L^{p^\ast }}^{\mu} \int_0^ T (\phi_\kappa(h(\eta,\xi,s)))^{\mu} | \xi(s)|_{L^m}^{2\mu}\, ds\le (2\kappa)^{\frac{2\mu}{m_0\nu}} \sup_{0\le s\le T} |\eta(s)|_{L^{p^\ast}}^{\mu}.\nonumber
\end{eqnarray}
Observe that, since $\rho<2-\frac 4m-\frac d2$, such $\delta$ and $\mu$ exist.
Taking expectation, the right-hand side is bounded
due to the assumptions on $\eta$.
{It remains to calculate the norm in $L^2(0,T;H^{\rho+1}_2({{ \mathcal O }}))$.
By standard calculations (i.e. applying the smoothing property and the Young inequality for convolution) we get
for $\frac 32 \ge \frac 1\mu+\frac 1\kappa$ and $\delta \kappa/2<1$
\begin{eqnarray*}
\left\|\int_0^ \cdot e^{r_v\Delta(\cdot-s)}F(s)\,ds\right\|_{L^2(0,T;H_2^{\rho+1})}\le C\, \|F\|_{L^\mu(0,T;H_2^{\rho+1-\delta})}
.
\end{eqnarray*}
The embedding $L^1({{ \mathcal O }}) \hookrightarrow H^{\rho+1-\delta}_2({{ \mathcal O }})$ for $\delta-(\rho+1)>\frac d2$
gives
\begin{eqnarray*}
\| F\|_{L^\mu(0,T;H_2^{\rho+1-\delta})}\le \|F\|_{L^\mu(0,T;L^1)},
\end{eqnarray*}
and similarly to before
\begin{eqnarray*}
\mathbb{E} \left\|\int_0^ \cdot e^{r_v\Delta(\cdot-s)}F(s)\,ds\right\|^{m_0}_{L^2(0,T;H_2^{\rho+1})}\le C(\kappa)\, R_0
.
\end{eqnarray*}
}%
\end{proof}
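The smoothing estimate $|e^{r_v\Delta t}F|_{H^\rho_2}\lesssim t^{-\delta/2}|F|_{H^{\rho-\delta}_2}$ used in \eqref{zweites3} can be checked numerically on the Fourier side. The following is a minimal sketch of ours (not part of the proof); it uses the spectral characterization of periodic Sobolev norms, and all names are hypothetical:

```python
import math

# On the torus, |f|_{H^s}^2 = sum_k (1 + k^2)^s |f_k|^2 and e^{t*Laplacian}
# multiplies the k-th Fourier mode by e^{-t k^2}; we test the rescaled ratio
# |e^{t Delta} f|_{H^rho} * t^{delta/2} / |f|_{H^{rho-delta}}.
def sobolev_norm(coeffs, s):
    return math.sqrt(sum((1 + k * k) ** s * c * c for k, c in coeffs.items()))

def heat(coeffs, t):
    return {k: math.exp(-t * k * k) * c for k, c in coeffs.items()}

# a "rough" datum: unit weight on every mode |k| <= 200
f = {k: 1.0 for k in range(-200, 201)}
rho, delta = 0.0, 1.0  # i.e. |e^{t Delta} f|_{L^2} <= C t^{-1/2} |f|_{H^{-1}}
ratios = [sobolev_norm(heat(f, t), rho) * t ** (delta / 2)
          / sobolev_norm(f, rho - delta)
          for t in (0.5, 0.1, 0.01, 0.001)]
# the rescaled ratios stay bounded as t -> 0, consistent with the t^{-delta/2} rate
assert max(ratios) < 5.0
```

The bound follows from $\sup_{x\ge 0}(1+x)^{\delta}e^{-2tx}\le C_\delta t^{-\delta}$ for $t\in(0,1]$, which is what the sketch probes.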
\begin{proposition}\label{positivityv}
Under the conditions of Proposition \ref{propvarational} and the additional conditions $v_0(x)\ge 0$ for all $x\in{{ \mathcal O }}$ and $\eta(t,x)\ge 0$, $\xi(t,x)\ge 0$ for a.e. $(t,x)\in[0,T]\times {{ \mathcal O }}$, the solution $v_\kk$
to \eqref{eq:cutoffvv} is nonnegative a.e.
\end{proposition}
\begin{proof}
The heat semigroup, which is generated by the Laplacian, maps nonnegative functions to nonnegative functions. We therefore refer to the proof of nonnegativity by Tessitore and Zabczyk \cite{TessitoreZabczyk1998}. The perturbation can be incorporated by comparison results, see \cite{Kotelenez1992}.
\end{proof}
\begin{proposition}\label{semigroup}
Assume that the Hypotheses of Theorem \ref{mainresult} hold.
In addition, assume that the couple satisfies $(\eta,\xi)\in\mathcal{X}_\mathfrak{A}(\Reins,\Rzwei,\Rdrei)\subset{{ \mathcal M }}_\mathfrak{A}(\BY,\mathbb{Z})$.
Let $v_\kappa$ be a solution to
$$
d v_\kk(t)=[r_v\Delta v_\kk(t)+ \phi_\kappa(h(\eta,\xi,t))\, \eta(t)\xi^2(t)]\,dt+\sigma_2 v_\kk(t)\, dW_2(t),\quad v_\kk(0)=v_0\in H^{-\delta_0}_m({{ \mathcal O }}).
$$
Then,
\begin{enumerate}[label=(\roman*)]
\item
there exists a number $r_0>0$ and a constant $\tilde{C}_2(T)>0$ such that for any $r\le r_0$,
we have for all $\kappa\in\mathbb{N}$
\begin{eqnarray*}
\mathbb{E} \left\| v_\kk\right\|_{L^{m_0}(0,T;H^{r}_{m})}^{m_0}
&\le & \tilde C_2(T)\bigg\{ \mathbb{E} |v_0|^{m_0}_{H^{-\delta_0}_{m}} + 4\kappa
\bigg\}.
\end{eqnarray*}
\item
there exists a number $\alpha_0>0$ and a constant $C=C(T)>0$ such that for any $\alpha \le \alpha_0$ we have for all $\kappa\in\mathbb{N}$
\begin{eqnarray*}
\mathbb{E} \left\| v_\kk\right\|^{m_0}_{\mathbb{W}_{m_0}^\alpha (0,T;H^{-\sigma}_m)}
&\le & C(T)\bigg\{ \mathbb{E} |v_0|^{m_0}_{H^{-\delta_0}_{m}} + 4\kappa
\bigg\}.
\end{eqnarray*}
\end{enumerate}
\end{proposition}
\begin{proof}
We start by showing (i).
First, we get by the analyticity of the semigroup for $\delta_0,\delta_1\ge 0$
\begin{eqnarray*}
\mathbb{E} \left\| v_\kk \right\|_{L^{m_0}(0,T;H^{r}_{m})}^{m_0}
&\le &\left[ \int_0^ T \left\{ t^{-2{m_0}\delta_0} \mathbb{E} |v_0|_{H^{-\delta_0}_{m}}^{m_0}+ \mathbb{E} \left|\int_0^ t e^{r_v\Delta (t-s)}\phi_\kappa(h(\eta,\xi,s))\, \eta(s) \xi^ 2(s)\, ds\right|_{H^{r}_{m}}^{m_0}\,\right.\right.
\\&&{}+ \left.\left. \mathbb{E} \left|\int_0^ t e^{r_v\Delta (t-s)} \sigma_2 v_\kk (s) dW_2(s)\right|_{H^{r}_{m}}^{m_0}\,\right\} \,dt\right]
\\
&=:& (I + II+III).
\end{eqnarray*}
Since, by the hypotheses,
$$2{m_0}\delta_0<1,
$$
the first term, i.e.\ $I$, is finite. In particular, we have
\begin{eqnarray} \label{zusammen1}
I\le C\, T^{1-2{m_0}\delta_0}\, \mathbb{E} |v_0|^{m_0}_{H^{-\delta_0}_{m}}.
\end{eqnarray}
Let us continue with the second term. The smoothing property of the semigroup gives for any $\delta_1\ge 0$
$$
II\le \mathbb{E} \left[\int_0^ T \left(\int_0^ t (t-s)^ {-\frac 12(r+\delta_1)}\phi_\kappa(h(\eta,\xi,s))\;\, \left| \eta(s) \xi^ 2(s)\right|_{H^{-\delta_1}_{m}}\, ds\right)^{m_0}\, dt\right].
$$
Using the Sobolev embedding $L^ 1({{ \mathcal O }})\hookrightarrow H^{-\delta_1}_{m}({{ \mathcal O }})$, where $\delta_1\ge d(1-\frac 1m)$,
we get
\begin{eqnarray*}
II
&\le & \mathbb{E} \left[\int_0^ T \left( \int_0^ t (t-s)^ {-\frac 12(r+\delta_1)}\phi_\kappa(h(\eta,\xi,s))\;\, \left| \eta(s) \xi^ 2(s)\right|_{L^1}\, ds\right)^{m_0}\, dt\right].
\end{eqnarray*}
Supposing $l\frac { \delta_1+r_0}2<1$,
the Young inequality for convolutions, applied with exponents satisfying
\begin{eqnarray}
\label{Yinq}
\frac 1{m_0} +1=\frac 1l+\frac 1\mu\quad \mbox{and} \quad \beta_0=\frac 1l-\frac 12(r+\delta_1),
\end{eqnarray}
yields
\begin{eqnarray} \label{tocontinue}
II &\le &
C_0 T^{\beta_0}\, \mathbb{E} \left(\int_0^ T \left| \phi_\kappa(h(\eta,\xi,s))\;\eta(s) \xi ^ 2(s)\right|_{L^1}^\mu \, ds\right)^{\frac {m_0}\mu}.
\end{eqnarray}
The right-hand side can be bounded by the H\"older inequality as follows:
\begin{eqnarray*}
\lefteqn{ \int_0^ T \left| \phi_\kappa(h(\eta,\xi,s))\; \eta(s) \xi ^ 2(s)\right|_{L^1}^\mu \, ds}
&&
\\&\le & \sup_{0\le s\le T} |\eta(s)|_{L^{p^\ast}}^{\mu}\int_0^ T \phi_\kappa(h(\eta,\xi,s))\;|\xi(s)|_{L^m}^{m_0}\, ds\le C(\kappa) R_0.
\end{eqnarray*}
Next, let us investigate $III$. We treat the stochastic term by applying \cite[Corollary 3.5 (ii)]{brzezniak}, from which it follows for $\tilde \sigma+r<1$ and $\beta>0$
$$
\mathbb{E} \left[\int_0^ T \left|\int_0^ t e^{r_v\Delta (t-s)} \sigma_2 v_\kk (s) \, dW_2(s)\right|_{H^{r}_m}^{m_0}\, dt \right]\le T^\beta \mathbb{E} \left[ \int_0^ T |v_\kk (t)|_{H^{-\tilde \sigma}_m}^{m_0}\, dt \right].
$$
Due to the Sobolev embedding and interpolation, whenever
\begin{equation}\label{eq:rho}
\tilde \sigma>\frac d2-\frac dm-\tilde \rho,
\end{equation}
there exists some $\theta\in(0,1)$ such that
$$
|v_\kk (t)|_{H^{-\tilde \sigma}_m}\le |v_\kk (t)|_{L^{m}} ^\theta |v_\kk (t)|_{H^{\tilde\rho}_{m}}^{1-\theta}\le C\,
|v_\kk (t)|_{L^{m}} ^\theta|v_\kk (t)|_{H^{\tilde\rho}_2}^{1-\theta}.
$$
Thus, if $\tilde \rho$ satisfies \eqref{eq:rho}, we get that
\begin{eqnarray*}
\lefteqn{ \mathbb{E} \left[\int_0^ T |v_\kk (t)|_{H^{-\tilde \sigma}_m}^{m_0}\, dt \right] \le C \mathbb{E} \left\{ \left[ \int_0^ T
|v_\kk (t)|_{H^{\tilde \rho}_2}^{m_0}\, dt \right]^{1-\theta}\left[\int_0^ T |v_\kk (t)|_{L^m}^{ m_0}\, dt\right]^\theta \right\} }&&
\\
&\le &C\left[ \mathbb{E} \int_0^ T
|v_\kk (t)|_{H^{\tilde \rho}_2}^{m_0}\, dt \right]^{1-\theta}\left[ \mathbb{E} \int_0^ T |v_\kk (t)|_{L^m}^{ m_0}\, dt\right]^{\theta}
\\ &\le & C(\ep ) \mathbb{E} \int_0^ T
|v_\kk (t)|_{H^{\tilde \rho}_2}^{m_0}\, dt +\ep\, \mathbb{E} \|v_\kk \|_{L^{m_0}(0,T;L^m)}^{m_0}.
\end{eqnarray*}
Collecting everything together, choosing $\ep>0$ sufficiently small and subtracting $\ep \mathbb{E} \|v_\kk \|_{L^{m_0}(0,T;L^m)}^{ m_0}$
on both sides, (i) follows.
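For the reader's convenience, the Young convolution inequality behind \eqref{Yinq}--\eqref{tocontinue} reads, in the form used here,

```latex
\left\| \int_0^{\cdot} (\cdot-s)^{-\frac12(r+\delta_1)}\, g(s)\,ds \right\|_{L^{m_0}(0,T)}
\le \bigl\| t^{-\frac12(r+\delta_1)} \bigr\|_{L^{l}(0,T)}\, \|g\|_{L^{\mu}(0,T)},
\qquad \tfrac1{m_0}+1=\tfrac1l+\tfrac1\mu,
```

together with the elementary computation $\| t^{-\frac12(r+\delta_1)}\|_{L^l(0,T)} \le C\,T^{\frac1l-\frac12(r+\delta_1)} = C\,T^{\beta_0}$, valid provided $l\frac{r+\delta_1}{2}<1$.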
The rest of the proof is devoted to item (ii).
Note, that for $s<t$
\begin{eqnarray*}
\lefteqn{ v_\kk (t)-v_\kk (s) =
(e^{r_v\Delta (t-s)}-\text{Id}) v_\kk (s)} &&
\\&&{} + \int_s^t e^{r_v\Delta(t-\tilde s )} \phi_\kappa(h(\eta,\xi,\tilde s ))\;\eta(\tilde s )\xi^2(\tilde s )\, d\tilde s +\int_0^s e^{r_v\Delta(t-\tilde s)}v_\kk (\tilde s)\, dW_2(\tilde s) .
\end{eqnarray*}
Substituting it in the definition of ${\mathbb{W}}^{\alpha}_{m_0} ([0,T];H^{-\sigma}_m({{ \mathcal O }}))$, see Appendix \ref{dbouley-space}, we can write
\begin{eqnarray*}
\lefteqn{ \int_0^T \int_0^T \frac{{|v_\kk (t)-v_\kk (s)|_{H^{-\sigma}_{m}} ^ {m_0}}}{{ |t-s| ^ {1+\alpha {m_0}}}}\,ds\,dt}
&&
\\
&\le &
2\int_0^T \int_0^t \frac{|v_\kk (t)-v_\kk (s)|_{H^{-\sigma}_{m}} ^ {m_0}}{|t-s| ^ {1+\alpha {m_0}}}\,ds\,dt
\\
&\le &
2\int_0^T \int_0^t \frac{|[e^{r_v\Delta(t-s)}-\text{Id}]v_\kk (s)|_{H^{-\sigma}_{m}} ^ {m_0}}{ |t-s| ^ {1+\alpha {m_0}}}\,ds\,dt
\\&&{}+
2\int_0^T \int_0^t \frac{\left|\int_s^t e^{r_v\Delta(t-\tilde s)}\phi_\kappa(h(\eta,\xi,\tilde s))\,\eta(\tilde s)\xi^2(\tilde s)\, d\tilde s\right|_{H^{-\sigma}_{m}}^{m_0}}{|t-s| ^ {1+\alpha {m_0}}}\,ds\,dt
\\&&{}+
2\int_0^T \int_0^t \frac{\left|\int_0^s e^{r_v\Delta(t-\tilde s)}v_\kk (\tilde s)\, dW_2(\tilde s)\right|_{H^{-\sigma}_{m}}^{m_0}}{|t-s| ^ {1+\alpha {m_0}}}\,ds\,dt.
\end{eqnarray*}
Each of the three terms on the right-hand side can be estimated by means of the smoothing property of the semigroup together with the bounds obtained in part (i), and (ii) follows.
\end{proof}
The amplitude ${p}(t)$ as well as its modulus can depend on time: \begin{equation}\label{{f}(t)}
{p}(t) = |{p}(t)|\, e^{-i{\varphi}(t)}.
\end{equation}
We adopt the convention that any constant part of a global phase derivative is incorporated into $\omega_J $, so that $\int dt \partial_t\varphi(t)= 0$.
We focus on transport associated with a given charge operator $\hat{Q}$ assumed to commute with ${\mathcal{H}}_0$ and to be translated through ${A}$ by $e^*$:
\begin{equation}\label{commutation}[{A},\hat{Q}]=e^* {A},\end{equation} where $e^*$ is a model-dependent charge parameter. Thus the associated current operator reads, in view of Eq.\eqref{Hamiltonian}:
\begin{equation}
\label{eq:current}
\hat{I}(t)\!= \partial_t\hat{Q}(t)=
-i\frac{e^*}{\hbar}\left(e^{-i\omega_Jt}{{{p}}}(t)\;{{A}}- e^{i\omega_Jt}{p}^*(t)\;
{{A}}^{\dagger}\right)\,.
\end{equation}
Other charge operators not conserved by $\mathcal{H}_0$ might enter and couple to other independent constant forces, such as those associated with an electromagnetic environment.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6cm]{QPC_QHE_V_t_bis.pdf}
\caption{\small First example: a QPC in the quantum Hall regime at an integer or fractional filling factor $\nu$. One can include arbitrary profile, range and inhomogeneities of interactions between edge states, as well as abelian or non-abelian statistics. It is possible to have simultaneous time dependence of the voltage reservoirs and the gate, as well as different upper and down temperatures $T_u,T_d$ or imperfect equilibration between edge states. $I(x,t)$ denotes the average chiral current at a position $x$ along the upper edge.}
\label{fig_Hall}
\end{center}
\end{figure}
Indeed the operator $A$ can be a superposition of terms associated with many positions, channels or circuit elements, ${A}= \sum_i {A}_i$,
or a continuous integral over spatially extended processes. Nonetheless, compared to the dc regime, this generalization is constrained by the fact that all time-dependent fields must be incorporated into the single complex function ${p}(t)$.
The main other conditions for the approach are: (i) $A$ is weak, so that second-order perturbation theory is valid; (ii) only correlators involving ${A}$ and its hermitian conjugate are finite (see Eq.\eqref{Xup_Xdown}). Condition (ii) leads, for a family of distributions $\rho_0$,\cite{ines_PRB_2019} to a vanishing average dc current at $\omega_J=0$; in particular, in superconducting junctions, the supercurrent must be negligible due to coupling to a dissipative environment or to magnetic fields.
Interestingly, the approach is not restricted to an initially thermalized system, \cite{ines_PRB_2019,ines_PRB_R_noise_2020} but extends to an initial stationary NE density matrix ${\rho}_0$ obeying: $[{\rho}_0,\mathcal{H}_0]=0$. Thus $\omega_J$ can be superimposed on other constant independent forces, or one can consider a quantum circuit with a temperature bias \cite{ines_pierre_thermal_2021}(see Fig.\ref{fig_pierre}).
Generically, though not systematically, the coupling to a voltage $V(t)$ can be included into a term $\hat{Q}V(t)$ that can be absorbed by a unitary transformation \cite{ines_eugene} so that $\omega_J$ (Eq.\eqref{Hamiltonian}) and $\varphi(t)$ (Eq.\eqref{{f}(t)}) obey the following Josephson-type relations, determined by $e^*$ (in view of Eq.\eqref{commutation}):
\begin{subequations}
\begin{align}
\omega_J&=\frac{e^*}{\hbar}{V}_{dc}\label{josephson}\\
\partial_t{\varphi}(t)&=\frac{e^*}{\hbar}{V}_{ac}(t),\label{eq:JR}
\end{align}
\end{subequations}
where $V_{ac}(t),V_{dc}$ are the ac and dc parts of $V(t)$.
But more generally, the common charge $e^*$ could be replaced by two different effective charges, and the above relations can even be broken for NE states, as is the case for the anyon collider. \cite{rosenow,ines_PRB_R_noise_2020,fractionnalisation_anyon_collider_Mora_2021} For generality, we leave $\omega_J$ and ${p}(t)$ (with its amplitude and phase) as unspecified parameters of the model. \cite{note_voltages}
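As a numerical illustration of the Josephson-type relation \eqref{josephson} (a sketch of ours, not from the text; the function name is hypothetical): for a JJ with $e^*=2e$, a dc bias of $1\,\mu$V corresponds to $\omega_J/2\pi\simeq 483.6\,$MHz.

```python
# SI constants (exact values since the 2019 redefinition)
E_CHARGE = 1.602176634e-19   # elementary charge e, in C
PLANCK_H = 6.62607015e-34    # Planck constant h, in J s

def josephson_frequency_hz(v_dc, e_star=2 * E_CHARGE):
    """omega_J / (2 pi) = e* V_dc / h, cf. Eq. (josephson) with hbar = h / (2 pi)."""
    return e_star * v_dc / PLANCK_H

f_j = josephson_frequency_hz(1e-6)  # 1 microvolt dc bias, e* = 2e
assert abs(f_j - 483.6e6) < 1e6     # about 483.6 MHz
```

For a fractional charge $e^*=\nu e$ one simply rescales `e_star` accordingly.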
We have previously shown that the average current induced by $p(t)$, $\langle \hat{I}_{\mathcal{H}}(t)\rangle
$, can be, at any time, fully expressed in terms of the dc characteristics only, $I_{dc}(\omega_J)$, whether ${p}(t)$ is periodic \cite{ines_eugene} or not \cite{ines_cond_mat,ines_PRB_2019}. The subscript $\mathcal{H}$ refers to the Heisenberg representation with respect to the Hamiltonian $\mathcal{H}(t)$. In the zero-frequency limit, one gets the PAC: \begin{equation}\label{average_current}I_{ph}(\omega_J)=\int_{-T_0/2}^{T_0/2}\frac{dt}{T_0}\langle \hat{I}_{\mathcal{H}}(t)\rangle,\end{equation}
whose expression will be recalled in Eq.\eqref{Iphoto}. Only the dependence on the dc frequency $\omega_J$ is made explicit, while that on $p(t)$ is implicit through the subscript $ph$. Here $T_0$ is the period for periodic $p(t)$, and a long measurement time for non-periodic $p(t)$, for which it forms the key of a regularization procedure we have proposed. \cite{ines_PRB_2019} We think that this solves the divergence problem obtained in previous works, and compared by H. Lee and L. S. Levitov \cite{lee_levitov} to the orthogonality catastrophe problem, for instance when $V(t)$ is formed by a single Lorentzian pulse. A similar procedure can be carried out for the PASN, defined by:
\begin{equation}\label{Sdefinition_zero}
{S}_{ph}(\omega_{J})=\int_{-T_0/2}^{T_0/2}\frac{dt}{T_0}\int_{-\infty}^{\infty} d\tau \left< \delta\hat I_{\mathcal{H}}\left(t-\frac{\tau}2\right)\delta\hat I_{\mathcal{H}}\left(t+\frac{\tau}2\right)\right>,
\end{equation}
where $\delta\hat I_{\mathcal{H}}=\hat I_{\mathcal{H}}(t)- \langle \hat{I}_{\mathcal{H}}(t)\rangle$. We will nonetheless simplify it by assuming that the Fourier transform of $p(t)$, $p(\omega)$, is regular at zero frequency, referring to Ref.[\onlinecite{ines_PRB_2019}] otherwise. We will show that $S_{ph}(\omega_J)$ is determined, through a universal FR given by Eq.\eqref{Sphoto}, by $S_{dc}(\omega_J)$, the NE shot noise in the dc regime (which we will refer to as the dc noise). It is only when the initial density matrix is thermal that $S_{dc}(\omega_J)$ is determined by $I_{dc}$, and so is the PASN.
Some examples of models for which these relations hold are detailed in Ref.[\onlinecite{ines_PRB_2019}] and are illustrated in Figs.(\ref{fig_Hall},\ref{fig_pierre},\ref{fig_JJ}). For instance, $\hat{I}(t)$ is a tunneling current in case $A$ refers to a tunneling term between strongly correlated conductors with mutual Coulomb interactions. It is the Josephson current in a JJ at energies below the superconducting gap $\Delta$ (Fig.\ref{fig_JJ}), for which one has $e^*=2e$.
Let us now discuss in more detail the validity and limitations of the approach for a QPC in the quantum Hall regime.
\subsection{Validity of the approach in quantum Hall states}
\label{subsec_Hall}
For a QPC in the FQHE or IQHE at a filling factor $\nu$, the perturbative approach applies to two opposite regimes: the weak backscattering one (when the QPC is almost open, see Fig.(\ref{fig_Hall})), where $\hat{I}(t)$ in Eq.\eqref{eq:current} is a backscattering current, and the strong backscattering regime (when the QPC is pinched off), where $\hat{I}(t)$ is an electron tunneling current. While one has $e^*=e$ in the latter regime, one expects $e^*/e$ to be a fraction in the former when one deals with the FQHE. Many theoretical approaches are based on effective bosonized theories, such as the chiral TLL description for interacting edges in the IQHE or the Laughlin series in the FQHE given by $\nu=1/(2n+1)$ with integer $n$, for which $e^*=\nu e$. For other $\nu$ belonging to hierarchical series, there might be many possible models\cite{cheianov_tunnel_FQHE_2015} leading to different values of the dominant charge $e^*$ (the one for which the quasiparticle field has the smallest scaling dimension $\delta$\cite{note_scaling}). It is also frequent that two or more different quasi-particle fields with the same charge and dimension enter into $A$, a situation to which the approach can still be adapted.\\ Let us focus on the case where the QPC is almost open. Such effective theories predict a power-law behavior and a crossover energy scale $k_BT_B$ below which the strong backscattering regime is reached, leading to a vanishing dc conductance when both voltages and temperatures vanish. Thus, when one adopts effective theories, this delimits the validity of the perturbative approach in both regimes with respect to $T_B$.
Nonetheless, in experimental works aiming to determine the fractional charge \cite{saminad,glattli_photo_2018,ines_gwendal} and statistics, \cite{fractional_statistics_gwendal_science_2020} the measured dc current does not follow this power-law behavior.
Our approach has the advantage of being valid without assuming a specific Hamiltonian or a specific voltage dependence of the dc current. This explains why the NE FRs we obtained \cite{ines_cond,ines_degiovanni_2016} provided robust methods to determine $e^*=e/5$ at $\nu=2/5$ in Ref.[\onlinecite{glattli_photo_2018}] and $e^*=e/3$ at $\nu=2/3$ in Ref.[\onlinecite{ines_gwendal}].
Though bosonization is not even necessary for the Hamiltonian in Eq.\eqref{Hamiltonian}, additional conditions might be required to end up with this form. For instance, absorbing inhomogeneous couplings to ac sources into the function ${p(t)}$ might require that $\mathcal{H}_0$ is a quadratic functional of bosonic fields (not required in the dc regime).
In order to implement such couplings, one might exploit a useful framework we have initiated \cite{ines_schulz_group,ines_epj}, and which has been largely adopted in electronic quantum optics.\cite{fractionnalisation_IQHE_degiovanni_13,plasmon_ines_IQHE_HOM_feve_Nature_2015_cite} It describes the electronic charge propagation in terms of plasmon dynamics dictated by Coulomb interactions, inducing charge fractionalisation. By developing the equation of motion method for bosonic fields, dynamics is solved for given time dependent boundary conditions dictated by the sources. On the one hand, a classical ac source injects a classical plasmon wave whose time evolution is determined through a scattering matrix for plasmon modes, providing the ac outgoing electronic currents. On the other hand, for a non-gaussian source, such as another QPC different from the central one (replacing the voltage source in Fig.(\ref{fig_Hall})), the NE bosonisation in Ref.\onlinecite{ines_epj} has been extended to take into account statistical fluctuations of the injected current. \cite{out_of_equilibrium_bosonisation_eugene_PRL_2009} Our present NE approach applies to such non-gaussian sources in the dc regime, \cite{ines_PRB_R_noise_2020} and it is plausible that one can still end up with Eq.\eqref{Hamiltonian} for ac voltages, as we allow for a NE stationary density matrix and a time dependent modulus of ${p}(t)$ that could incorporate ac boundary conditions. For a more rigorous justification and determination of $p(t)$, one needs to combine our treatment of ac voltages \cite{ines_epj} with that of dc non-gaussian sources, \cite{out_of_equilibrium_bosonisation_eugene_PRL_2009} a step not yet achieved to our knowledge.
At a point $x$ along the edge (Fig.\ref{fig_Hall}), the backscattering average current $\langle \hat{I}_{\mathcal{H}}(t)\rangle$ (see Eqs.\eqref{eq:current},\eqref{average_current}) reduces the perfect linear chiral current in the upper edge (for a derivation in the dc regime, see Ref.\onlinecite{dolcini_05}):
\begin{equation}\label{Itotal}
I(x,t)= \nu \frac{e^2}hV(t)-\theta(x)\int dt' \lambda(x,t-t')\langle \hat{I}_{\mathcal{H}}(t')\rangle,
\end{equation}
where $\theta(x)$ is the Heaviside step function, the QPC being located at $x=0$. The function $\lambda(x,t)$ is determined by $\mathcal{H}_0$, and describes chiral plasmonic propagation between the QPC and $x$. Denoting its zero-frequency limit by $\lambda$, one gets $I(x,\omega=0)= \nu {e^2}/h\, V_{dc}- \theta(x)\lambda I_{ph}(\omega_J)$. One expects $\lambda=\nu$ for simple fractions, but it could be renormalized by non-universal features such as edge reconstruction \cite{halperin_HBT_FQHE_2016}.
One frequently measures instead correlations or cross-correlations between chiral currents, which contain supplementary terms, similarly to Refs.\onlinecite{trauz_2004,ines_bena,ines_bena_crepieux} in the dc regime. This is also the case when the sources are formed by additional QPCs, as in the anyon collider studied in the dc regime; \cite{rosenow,fractional_statistics_gwendal_science_2020,ines_PRB_R_noise_2020,fractionnalisation_anyon_collider_Mora_2021}
applying ac voltages with a time delay would form a HOM interferometer, as suggested in Ref.\onlinecite{glattli_levitons_physica_2017}. It turns out that the perturbative approach is still useful for these supplementary terms, as will be addressed in future works.\cite{Imen_ines}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6cm]{Figure1_Sample.pdf}
\caption{\small Second example: a quantum circuit formed by a QPC (on the right side of the lower scheme) coupled to an electromagnetic environment and subject to a temperature bias, studied in Ref.\cite{ines_pierre_thermal_2021} in the dc regime to address dynamical Coulomb blockade. The present NE FR extends to the two opposite conducting and insulating regimes of the quantum phase transition and yields the PASN through the QPC in case both the potential drop and the gate voltage are time-dependent.}
\label{fig_pierre}
\end{center}
\end{figure}
\begin{figure}[bp]
\begin{center}
\includegraphics[width=6cm]{JJ_example_bis.pdf}
\caption{\small Third example: a JJ with a small Josephson energy $E_J$ or a NIS junction strongly coupled to an electromagnetic environment. An additional dc voltage $V'_{dc}$ can enter into the Hamiltonian in Eq.\eqref{Hamiltonian} or in the NE stationary density matrix $\rho_0$. }
\label{fig_JJ}
\end{center}
\end{figure}
\section{Universal fluctuation relations}
\label{sec_FR}
Here we first derive the central NE FR for the PASN in Eq.\eqref{Sdefinition_zero}, then apply it to non-gaussian random sources, and finally deduce FRs for the differentials of the PASN with respect to the ac phase, which we will exploit for the other applications in section \ref{sec_applications}.
\subsection{Fluctuation relations between the ac and dc driven regimes}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6cm]{setup_dual.pdf}
\caption{\small Fourth example: a dual-phase Josephson junction with a small effective parameter $U_J$. Voltage and current are permuted, so that one imposes a time dependent current, while the voltage noise across the junction obeys the NE FR. Average voltage was computed in Ref.\onlinecite{photo_josephson_hekking} and found to obey the relation provided by the perturbative approach \cite{ines_eugene,ines_PRB_2019}.}
\label{fig_JJ_dual}
\end{center}
\end{figure}
The derivation of the NE FR follows two steps, detailed in Appendix B. The first one yields a second order perturbative expression in terms of two correlators (see Eq.\eqref{Xup_Xdown}) which are evaluated with the Hamiltonian $\mathcal{H}_0$ and the initial NE density matrix ${\rho}_0$, so that they depend only on the time difference $\tau$. Their Fourier transforms at $\omega_J$, denoted by $I_{\rightarrow}(\omega_J),I_{\leftarrow}(\omega_J)$, correspond to dc average currents in two opposite directions induced by $\omega_J$, and determine average current and noise in the dc regime: \begin{subequations}
\begin{align}
I_{dc}(\omega_{\mathrm{J}})& =
I_{\rightarrow}(\omega_{\mathrm{J}})-I_{\leftarrow}(\omega_{\mathrm{J}})\,
\label{formal_average_I}\\
S_{dc}(\omega_J)/e^* &=
I_{\rightarrow}(\omega_J)+I_{\leftarrow}(\omega_J)\,.\label{noise_DC_initial_I}
\end{align}
\end{subequations}
Notice that the NE noise $S_{dc}(\omega_J)$ is given by ${S}_{ph}(\omega_{J})$ in Eq.\eqref{Sdefinition_zero} whenever ${p}(t)=1$ in Eq.\eqref{Hamiltonian}. In general, $I_{\rightarrow}(\omega_{\mathrm{J}})\neq I_{\leftarrow}(-\omega_J)$, so the dc current is not necessarily odd nor the dc noise even.
The second step consists in inverting the two above expressions, so that, alternatively, the two functions $I_{dc}(\omega_{\mathrm{J}}),S_{dc}(\omega_J)$ alone determine completely time-dependent transport.
In particular, we can show that the PASN in Eq.\eqref{Sdefinition_zero} is fully determined by $S_{dc}(\omega_J)$ in Eq.\eqref{noise_DC_initial_I},
\begin{equation}
S_{ph}(\omega_J)=\int_{-\infty}^{\infty}
\frac{d\omega'}{\Omega_0}\bar{P}(\omega') S_{dc}(\omega'+\omega_J),\label{Sphoto}
\end{equation}
where $\bar{P}(\omega)=|{p}(\omega)|^2$ and $\Omega_0=2\pi/T_0$.
One recovers the dc regime when ${p}(\omega)=\delta(\omega)$.
Thus we obtain a universal FR between the ac and dc regimes, which, to our knowledge, has not been derived so far within the present large context of strongly correlated circuits and NE initial states.
The PASN is a superposition of the noise evaluated at effective dc voltages $\omega_J+\omega'$ for all finite frequencies $\omega'$ of the driving photons, modulated by $\bar{P}(\omega')$. Even at $\omega_{J}=0$, the PASN is determined by the NE dc noise $S_{dc}(\omega')$ (indeed even $S_{dc}(\omega'=0)$ is a NE noise for initial NE states). The above NE FR is independent of the form, range and strength of Coulomb interactions, and of the strong coupling to an electromagnetic environment, which enter only through the NE dc noise. The external ac or classical noise sources enter through $\bar{P}(\omega')$, which can be viewed as the transfer rate for the many-body eigenstates of $\mathcal{H}_0$ to exchange an energy $\hbar \omega'$ with the ac sources, as can be checked through a spectral decomposition. \cite{ines_philippe_group}
Experimentally, one gets rid of undesirable contributions by considering the excess PASN. Here we define it by subtracting the dc noise in the presence of the same dc voltage, $S_{dc}(\omega_J)$, obtained when one switches off the ac source:
\begin{equation}\label{second_choice}
\underline{\Delta} S_{ph}(\omega_J)=S_{ph}(\omega_J)-S_{dc}(\omega_J).
\end{equation}
Let us notice already that $\underline{\Delta} S_{ph}(\omega_J)$ was shown to be always positive by L. Levitov {\it et al} \cite{keeling_06_ivanov} (see Eq.\eqref{levitov_inequality}), but this is no longer the case in a non-linear SIS junction (as shown in section \ref{sec_bounds}), leading us to revisit minimal excitations in section \ref{sec_levitov}.
Let us now specify to a periodic $p(t)$ with a frequency $\Omega_0$ (see appendix \ref{app_periodic} for more details): \begin{equation}\label{FDT2_periodic_zero}
{S}_{ph}(\omega_{J})= \sum _{l=-\infty}^{+\infty} P_l S_{dc}(\omega_{J}+l\Omega_0).\end{equation}
Here $P_l=\bar{P}(l\Omega_0)$ are the transfer rates for many-body states to exchange $l$ photons with the source. It is only when $|p(t)|=1$ that $P_l$ are probabilities, as $\sum_l P_l =1$ (see Eq.\eqref{orthogonality}).
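As a concrete check of Eq.\eqref{FDT2_periodic_zero}, the weights $P_l$ can be computed numerically from any periodic phase. The minimal Python sketch below uses illustrative choices not taken from the text: a sine phase $\varphi(t)=\varphi_{ac}\sin\Omega_0 t$ (so $|p(t)|=1$) and a toy dc noise profile standing in for a measured $S_{dc}$.

```python
import cmath
import math

# Illustrative parameters (not from the paper): unit drive frequency, sine phase.
OMEGA0 = 1.0
T0 = 2 * math.pi / OMEGA0
PHI_AC = 0.8  # ac phase amplitude

def p_of_t(t):
    # p(t) = exp(-i*phi(t)) with phi(t) = PHI_AC*sin(OMEGA0*t); note |p(t)| = 1.
    return cmath.exp(-1j * PHI_AC * math.sin(OMEGA0 * t))

def fourier_coeff(l, npts=4096):
    # p_l = (1/T0) \int_0^{T0} e^{i l OMEGA0 t} p(t) dt, sampled on a periodic grid.
    dt = T0 / npts
    return sum(cmath.exp(1j * l * OMEGA0 * k * dt) * p_of_t(k * dt)
               for k in range(npts)) * dt / T0

L_MAX = 20
P = {l: abs(fourier_coeff(l)) ** 2 for l in range(-L_MAX, L_MAX + 1)}
total = sum(P.values())  # should be 1 since |p(t)| = 1

def S_dc(w):
    # Toy dc noise profile (hypothetical), in place of the measured S_dc(omega_J).
    return abs(w) + 0.1 * w ** 2

def S_ph(w_J):
    # Eq. (FDT2_periodic_zero): S_ph(w_J) = sum_l P_l * S_dc(w_J + l*OMEGA0)
    return sum(P[l] * S_dc(w_J + l * OMEGA0) for l in P)
```

For this sine phase the weights reduce to the familiar Tien--Gordon form $P_l=J_l(\varphi_{ac})^2$, symmetric in $l$; for a general $p(t)$ only the numerical Fourier step changes.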
In case of an initial thermal density matrix $\rho_0 \propto e^{-\beta H_0}$ at a temperature $T=1/\beta$ (see Eq.\eqref{Hamiltonian}), the dc noise obeys the general relation, valid even when $I_{dc}(\omega_J)\neq -I_{dc}(-\omega_J)$: \cite{ines_cond_mat,ines_degiovanni_2016,ines_PRB_R_noise_2020,levitov_reznikov}
\begin{equation}\label{poisson_T}
S_{dc}(\omega_J)=e^*\coth\left(\frac {\hbar\omega_J} {2k_BT}\right) I_{dc}(\omega_J).
\end{equation}
The PASN is then detailed in appendix \ref{sec_thermal}. Here, focussing on a periodic $p(t)$, at locking values $\omega_J=N\Omega_0$ with an integer $N$ and for $\Omega_0\gg k_B T/\hbar$, we get, from Eq.\eqref{FDT2_periodic_zero}:
\begin{equation}\label{FDT2_periodic_zero_thermal_n}
{S}_{ph}(N\Omega_0)=\sum_{l\neq -N} P_l \;|I_{dc}\left[(N+l)\Omega_0\right]|+2 P_{-N} k_B T G_{dc}(T).\end{equation}
Here $G_{dc}(T)=dI_{dc}(\omega_J)/dV_{dc}$ at $e^*V_{dc}\ll k_BT$; we make explicit its temperature dependence, generic in non-linear systems, while it remains implicit in the NE average current and in the PASN. Thus we get a mixture of NE and thermal contributions (see appendix \ref{sec_thermal}), similarly to the NE finite-frequency noise in the dc regime. \cite{ines_degiovanni_2016} Taking the excess noise in Eq.\eqref{second_choice} does not cancel the thermal one, even though we are in the NE quantum regime.
This relation unifies and goes beyond previous works restricted to $|p(t)|=1$ and to independent electrons scattered by a linear QPC \cite{dubois_minimization_integer,glattli_levitons_nature_13} or to a TLL in Ref.\onlinecite{crepieux_photo}.
In the TLL or more general effective theories for the FQHE, it allows us to localize and regularize a divergence noted in the latter work, as will be explained elsewhere. \cite{Imen_ines}
Thus the universal relations in Eqs.\eqref{Sphoto},\eqref{FDT2_periodic_zero} fully extend the lateral-band transmission picture for the PASN to time-dependent tunneling amplitudes and to periodic, non-periodic or fluctuating sources. In the large family of strongly correlated circuits these relations cover, NE many-body states replace thermal one-electron states. They are also suited to address two-particle collisions in a symmetric or asymmetric HOM type geometry where two ac sources, periodic or not, operate with a time delay (as noticed briefly in Ref.\onlinecite{ines_PRB_2019}). They have been used in a recent experimental analysis of two-electron collisions \cite{Imen_thesis,glattli_imen_2022} in chiral quantum Hall edges.
\subsection{Fluctuating sources}
One advantage of considering a non-periodic $p(t)$ is that one can deal with classical states of radiation. Indeed, if we assume that $|{p}(t)|=1$, one has $\int d\omega' \bar{P}(\omega')/\Omega_0=1$, so that $\bar{P}(\omega)$ becomes a probability.\cite{ines_PRB_2019} It plays a role similar to that of the $P(E)$ function, which yields the probability for a tunneling electron to exchange photons at a frequency $\omega=E/\hbar$ with an electromagnetic environment \cite{ingold_nazarov}. Indeed, this is precisely the meaning of $\bar{P}(\omega)$ if $\varphi(t)$ is associated with a gaussian or non-gaussian electromagnetic environment in the classical limit, formed for instance by a quantum conductor we studied in Ref.\onlinecite{ines_eugene_detection}.
More generally, if classical fluctuations of $\varphi(t)$ have a distribution $\mathcal{D}(\varphi)$, one has to take into account averages over $\mathcal{D}(\varphi)$, denoted by $<...>_{\mathcal{D}}$:
\begin{equation}\label{p_average}
\bar{P}(\omega)=\int_{0}^{T_0}\frac{dt}{T_0} e^{i\omega t} <e^{i(\varphi(t)-\varphi(0))}>_{\mathcal{D}}.
\end{equation}
Notice that we assume here the stationarity of the distribution for $\varphi$ so that $<e^{i(\varphi(t)-\varphi(t'))}>_{\mathcal{D}}$ depends only on $t-t'$, thus dropping the integral over $t+t'$.
One can further write $<e^{i(\varphi(t)-\varphi(0))}>_{\mathcal{D}}$ as an exponential of the cumulants of $\varphi(t)$ at order $m$ (with $m$ an integer; see Ref.\onlinecite{sukho_detection} for the full expression): \begin{eqnarray}
J_m(t)&=&\frac{1}{m!} <(\varphi(t)-\varphi(0))^m>_{\mathcal{D}}.\label{Jm}
\end{eqnarray}
If we expand it up to $m=3$, which is justified in the limit of weak coupling, we obtain:
\begin{equation}
S_{ph}(\omega_J)=\iint \frac{d\omega\, dt}{\Omega_0} S_{dc}(\omega+\omega_J)e^{i\omega t-J_2(t)+iJ_3(t)}.\label{Sphoto_cumulant}
\end{equation}
There might be various ways, depending on regimes and setups, to exploit this link, in particular to use the PASN as a way to detect the cumulants of the quantum conductor, as done with the PAC.\cite{ines_PRB_2019,ines_eugene} Compared to previous works proposing tunnel junctions or JJs as cumulant detectors,\cite{sukho_detection_JJ_PRL2007,sukho_detection} the present model opens the path to exploiting a larger family of strongly correlated detectors which are not necessarily disconnected, and to driving both the detector and the non-gaussian source in stationary NE states.
We insist nonetheless that a quantum environmental phase operator $\hat{\varphi}(t)$, whose dynamics is dictated by the Hamiltonian $\mathcal{H}_0$, can also be encoded into ${A}$ through $e^{i\hat{\varphi}(t)}$, whose correlations affect the PASN through the dc noise $S_{dc}$ according to Eq.\eqref{Sphoto}.
\subsection{Fluctuation relations for differentials of the PASN}
An alternative to excess noise, in order to get rid of undesirable noisy sources, is to consider the derivative of the PASN with respect to the dc voltage, which, in view of the FRs in Eqs.\eqref{Sphoto},\eqref{FDT2_periodic_zero}, is determined through the differential dc noise.
It is also interesting, for some potential applications, to differentiate the PASN with respect to the ac components of the voltage, $V_{ac}(\omega)$, or, more generally, $\varphi(\omega)$ (as Eq.\eqref{eq:JR} is not systematic). Given a non-periodic or random $p(t)$, one can show that $\delta p(\omega')/\delta \varphi(\omega)=-ip(\omega'-\omega)$, so that:
\begin{equation}\label{FDT2_periodic_zero_derivative}
\frac{\delta S_{ph}(\omega_{J})}{\delta \varphi(\omega)}=-i\int \frac{d\omega'}{\Omega_0} p(\omega')p^*(\omega'+\omega) \left[S_{dc}(\omega_{J}+\omega'+\omega)-S_{dc}(\omega_{J}+\omega')\right].
\end{equation}
Let us now take a second differential with respect to $\varphi(-\omega)$. Then we are back to the PASN through an interesting closed relation:
\begin{equation}\label{FDT2_periodic_zero_second_derivative}
\frac{\delta ^2 S_{ph}(\omega_{J})}{\delta\varphi(\omega)\delta\varphi(-\omega)}= S_{ph}(\omega_{J}+\omega)+S_{ph}(\omega_{J}-\omega)-2S_{ph}(\omega_{J}).
\end{equation}
A similar relation holds for the PAC in Eq.\eqref{Iphoto},\cite{ines_eugene} as well as for periodic drives, taking ${\delta ^2 S_{ph}(\omega_{J})}/{\delta \varphi_k\,\delta\varphi_{-k}}$ with $\varphi_k=\varphi(k\Omega_0)$ for any integer $k$.
We notice however a possible experimental difficulty in case one lacks a precise knowledge of the effective phase (for instance at the level of the QPC in the Hall regime), which makes noise spectroscopy discussed in \ref{subsec_spectroscopy} useful.
It is indeed easier to consider the limit of the stationary regime, defined by $p(t)=1$, thus $p(\omega')=\delta(\omega')$ (i.e. the limit of a vanishing phase, up to multiples of $2\pi$, and of a unit modulus). Then the first derivative in Eq.\eqref{FDT2_periodic_zero_derivative} vanishes, and one can replace, on the r.h.s. of Eq.\eqref{FDT2_periodic_zero_second_derivative}, $S_{ph}\rightarrow S_{dc}$. In this limit, we can also show that $\delta^2 S_{ph}(\omega_{J})/{\delta\varphi(\omega)\delta\varphi(-\omega')}\rightarrow 0$ for all $\omega'\neq \omega$.
Thus, assuming furthermore that $|p(t)|=1$ so that only the functional dependence with respect to $\varphi(t)$ enters, the excess PASN defined through Eq.\eqref{second_choice} can be expanded through the main quadratic correction due to a small ac phase:
\begin{equation}\label{FDT2_periodic_zero_second_derivative_zero_ac}
\underline{\Delta} S_{ph}(\omega_J)= \int_{-\infty}^{+\infty} d\omega |\varphi(\omega)|^2\left[ S_{dc}(\omega_{J}+\omega)+S_{dc}(\omega_{J}-\omega)-2S_{dc}(\omega_{J})\right]+O(\varphi^4).\end{equation} This universal relation is consistent with the fact that only zero- or one-photon processes enter at a low ac modulation.
For periodic drives the integral is replaced by a discrete sum.
In particular, in case $\varphi(t)=\varphi_{ac}\cos \Omega_0t$ we get:
\begin{equation}\label{expansion_sine}\underline{\Delta} S_{ph}(\omega_J)/\left[S_{dc}(\omega_{J}+\Omega_0)+S_{dc}(\omega_{J}-\Omega_0)-2S_{dc}(\omega_{J})\right]=\varphi_{ac}^2.\end{equation}
As will be discussed in \ref{subsec_spectroscopy} and \ref{subsec_fractional_charge}, the above relations offer methods for shot-noise spectroscopy and for a robust determination of the fractional charge, based on determining $\omega_J$ and on the Josephson-type relation in Eq.\eqref{josephson} whenever it holds. This is in some sense similar to a spectroscopy, as $\omega_J$ is determined by external constant forces or voltages. Indeed there are also situations where $\omega_J$ involves other unknown parameters, which can then be determined. Two have been addressed in Ref.\onlinecite{ines_PRB_R_noise_2020}: $\omega_J$ is linked either to a non-universal parameter of fractional statistics that enters in analyzing the anyon collider, or to the voltage drop generated by a temperature bias, which yields the Seebeck coefficient.
\section{Universal lower bounds on the PASN}
\label{sec_bounds}
This section establishes crucial features that will allow us to revisit minimal excitations in section \ref{sec_levitov}.
We will first show that the universal lower bound provided by L. Levitov {\it et al}\cite{keeling_06_ivanov} is restricted to linear conductors, by giving the counterexample of a non-linear SIS junction with an initial thermal distribution. Then, considering again a NE initial distribution, we show that it is rather the PAC which provides a universal lower bound for the PASN, which is therefore super-poissonian.
\subsection{Breakdown of the dc noise bound in a non-linear SIS junction}
\label{subsec_SIS}
In an independent-electron picture, the choice of the excess noise $\underline{\Delta} S_{ph}(\omega_J)$ in Eq.\eqref{second_choice} is motivated by the fact that it arises from the cloud of electron-hole excitations generated by the ac voltage,\cite{FCS_TD_tunnel_09,dubois_minimization_integer} thus inducing a positive excess noise, $\underline{\Delta} S_{ph}(\omega_J)>0$. Indeed, in the more general framework of strongly correlated systems, the ac voltage was shown to increase the noise through a theorem by L. Levitov {\it et al} \cite{klich_levitov,keeling_06_ivanov}:
\begin{equation}\label{levitov_inequality}
S_{ph}(\omega_J)\geq S_{dc}(\omega_J).\end{equation}
Nonetheless, we now show that adding an ac voltage to a dc one can decrease the PASN in a non-linear SIS junction, so that this inequality is reversed.
We adopt, in a context similar to these works, an initial thermalized distribution in the zero-temperature limit, so that the dc noise is poissonian (see Eq.\eqref{poisson_T}). We also consider a quasi-particle current $I_{dc}$ \cite{tien_gordon,tucker_rev} with a voltage gap $2\Delta/e$, thus a dc frequency gap $\omega_C=2\Delta/\hbar$ (here $e^*=e$), and a linear behavior above it: $I_{dc}(\omega_J>0)=\theta(\omega_J-\omega_C)(a\omega_J-b)$ (see Fig.\ref{fig:SIS_I}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{SIS_dc_current_new.pdf}
\caption{\small The dc noise associated with the quasiparticle current in a SIS junction, as a function of the dc frequency $\omega_J$ (in blue), and the PASN under a small sine voltage (in red). $S_{dc}$ vanishes below a threshold $\omega_C=2\Delta/\hbar$, where $\Delta$ is the superconducting gap, and behaves linearly above, as $a\omega_J-b$. The PASN behavior is sketched (thus units are arbitrary) by choosing $b/a=\Omega_0=\omega_C/2$. It is below $S_{dc}$ at dc voltages above $\omega_C$.}
\label{fig:SIS_I}
\end{center}
\end{figure}Here $a,b$ are positive coefficients. This gives in particular $G_{dc}(T)=0$. Now we choose the dc and ac frequencies such that $\Omega_0<\omega_C<\omega_J$ and $ b/a<\omega_J-\Omega_0<\omega_C $. We consider a weak enough sine voltage so that we can use the second-order expansion in Eq.\eqref{expansion_sine}. As the dc noise is poissonian, the sign of $\underline{\Delta}S_{ph}$ is that of $I_{dc}(\omega_J+\Omega_0)+I_{dc}(\omega_J-\Omega_0)-2I_{dc}(\omega_J)=a(\Omega_0-\omega_J)+b<0$ (one has $I_{dc}(\omega_J-\Omega_0)=0$ as $\omega_J-\Omega_0<\omega_C$). Therefore the PASN is decreased in this dc voltage range, \begin{equation}
S_{ph}(\omega_J)< S_{dc}(\omega_J),
\end{equation}
which is at odds with the inequality in Eq.\eqref{levitov_inequality}. We can indeed plot the PASN for all dc voltages (see Fig.\ref{fig:SIS_I}); it is also slightly below the dc noise at $\omega_J>\omega_C+\Omega_0$, for which $S_{ph}(\omega_J)=(P_0+2P_1)S_{dc}(\omega_J)$.
Indeed, due to our hypothesis of a weak sine voltage, one has a poissonian PASN for all $\omega_J>\Omega_0$, $S_{ph}(\omega_J)=eI_{ph}(\omega_J)$ (see Eq.\eqref{Iphoto_periodic}). Our result is then consistent with the known fact that $I_{ph}(\omega_J)<I_{dc}(\omega_J)=S_{dc}(\omega_J)/e$ in the range $\omega_C<\omega_J<\omega_C+\Omega_0$. \cite{tucker}
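The reversal of Eq.\eqref{levitov_inequality} in this SIS junction can be checked numerically. The Python sketch below uses illustrative values of $a$, $b$, $\omega_C$, $\Omega_0$ and of the small sine amplitude (chosen with $b/a=\Omega_0=\omega_C/2$ as in Fig.\ref{fig:SIS_I}), and assembles the PASN from Eq.\eqref{FDT2_periodic_zero} with the weights $P_l$ of the sine phase $\varphi(t)=\varphi_{ac}\sin\Omega_0 t$:

```python
import cmath
import math

# Illustrative SIS parameters with b/a = Omega_0 = omega_C/2 (units where Omega_0 = 1):
A, B = 1.0, 1.0      # I_dc = a*w - b above the gap
OMEGA_C = 2.0        # frequency gap 2*Delta/hbar
OMEGA0 = 1.0
T0 = 2 * math.pi / OMEGA0
PHI_AC = 0.3         # weak sine phase

def I_dc(w):
    # Quasiparticle current: gapped, linear above, odd in w (zero-temperature SIS).
    if abs(w) <= OMEGA_C:
        return 0.0
    return math.copysign(A * abs(w) - B, w)

def S_dc(w):
    # Poissonian dc noise at zero temperature (units e* = e = 1).
    return abs(I_dc(w))

def P_weight(l, npts=4096):
    # P_l = |p_l|^2, with p_l the Fourier coefficients of p(t) = exp(-i*PHI_AC*sin(OMEGA0*t)).
    dt = T0 / npts
    p_l = sum(cmath.exp(1j * l * OMEGA0 * k * dt - 1j * PHI_AC * math.sin(OMEGA0 * k * dt))
              for k in range(npts)) * dt / T0
    return abs(p_l) ** 2

def S_ph(w_J, l_max=6):
    # Eq. (FDT2_periodic_zero); |l| > 6 is negligible for this weak drive.
    return sum(P_weight(l) * S_dc(w_J + l * OMEGA0) for l in range(-l_max, l_max + 1))

W_J = 2.5  # satisfies Omega_0 < omega_C < w_J and b/a < w_J - Omega_0 < omega_C
```

One indeed finds $S_{ph}(\omega_J)<S_{dc}(\omega_J)$ in this range, because the lower side band at $\omega_J-\Omega_0$ falls inside the gap and contributes no noise.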
\subsection{Super-poissonian PASN}
Considering again a NE density matrix $\rho_0$ and non-periodic $p(t)$, let us first recall the relation obtained for the PAC in Eq.\eqref{average_current}:
\begin{equation}
I_{ph}(\omega_J)=\int_{-\infty}^{\infty} \frac{d\omega'}{\Omega_0} \bar{P}(\omega') I_{dc}(\omega'+\omega_J).\label{Iphoto}
\end{equation}
Similarly to Eq.\eqref{Sphoto}, it is also interpreted within a lateral-band transmission picture for correlated many-body states. \cite{ines_eugene,ines_PRB_2019}
Now, we have shown that the dc noise is super-poissonian,\cite{ines_PRB_R_noise_2020} \begin{equation}S_{dc}(\omega_J)\geq e^*|I_{dc}(\omega_J)|,\label{S_dc_super}\end{equation} due to the fact that $I_{\rightarrow}(\omega_J)$ and $I_{\leftarrow}(\omega_J)$ are positive (see Eqs.\eqref{formal_average_I},\eqref{noise_DC_initial_I}). This is obviously verified by Eq.\eqref{poisson_T} for an initial thermal equilibrium, which yields a poissonian dc noise at low temperatures. Nonetheless, a super-poissonian dc noise does not necessarily arise from thermal effects if the NE initial distribution is, for instance, generated by additional dc voltages (see Fig.\ref{fig_JJ}) rather than by temperature gradients (as in Fig.\ref{fig_pierre}).
Now, by comparing Eq.\eqref{Sphoto} to Eq.\eqref{Iphoto}, and using Eq.\eqref{S_dc_super}, we also obtain a super-poissonian PASN \cite{ines_cond_mat}:
\begin{equation}\label{super_poisson}
{S}_{ph}(\omega_{J})\geq e^*|I_{ph}(\omega_J)|.\end{equation}
This is an important inequality, also valid when one has periodic drives, and even when the global system is in the ground state.
Notice that this inequality suggests an alternative definition of the excess noise, $S_{ph}(\omega_J)-e^*|I_{ph}(\omega_J)|$, which is always positive, though it is not the most relevant experimentally, as discussed in appendix \ref{app_excess}.
Let us now comment on the case where the dc current is linear with respect to $\omega_J$ and $|{p}(t)|=1$. Then ${I}_{ph}(\omega_{J})=I_{dc}(\omega_J)$, which becomes linear as well. In particular, if Eqs.\eqref{josephson},\eqref{eq:JR} hold, one has simply:
\begin{equation}\label{photocurrent_linear}
I_{ph}(\omega_J)=G_{dc} V_{dc},\end{equation}
where $G_{dc}=I_{dc}(\omega_J)/V_{dc}$ is the linear conductance.
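The identity ${I}_{ph}(\omega_{J})=I_{dc}(\omega_J)$ invoked here can be checked in one line, using only $|p(t)|=1$ and the periodicity of $\varphi(t)$: the first moment of the transfer rates vanishes,

```latex
\sum_{l} l\,\Omega_0\, P_l
  = \frac{1}{T_0}\int_0^{T_0} dt\; p^*(t)\, i\,\partial_t p(t)
  = \frac{1}{T_0}\int_0^{T_0} dt\; |p(t)|^2\, \partial_t\varphi(t)
  = 0 ,
```

since $\int_0^{T_0} dt\,\partial_t\varphi=0$ over a period. Hence, for a dc current linear in $\omega_J$, $\sum_l P_l\, I_{dc}(\omega_J+l\Omega_0)=I_{dc}(\omega_J)\sum_l P_l=I_{dc}(\omega_J)$.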
The lower bound on the PASN is therefore given by the dc current,
${S}_{ph}(\omega_{J})\geq e^*|I_{dc}(\omega_J)|$, exactly as for the dc noise in Eq.\eqref{S_dc_super}. Nonetheless, it is only when the system is in the ground state, thus at low temperatures for a thermal equilibrium distribution, that the dc current can be replaced by the dc noise (see Eq.\eqref{poisson_T}), so that one recovers Eq.\eqref{levitov_inequality}.
The inequality in Eq.\eqref{super_poisson} offers an alternative to Eq.\eqref{levitov_inequality}, valid in the SIS junction we addressed above for all dc voltages. Though restricted to a perturbative regime, it covers a much larger family of non-linear systems and quantum circuits. But an important difference from Eq.\eqref{levitov_inequality} is that the PAC, forming the universal lower bound, is also determined by the ac voltage.
\section{Revisiting minimal excitations}
\label{sec_levitov}
In view of the above features, we address the issue of characterizing minimal excitations, whose realization requires a many-body ground state, for instance the low-temperature limit of an initial thermal equilibrium.
\subsection{L. Levitov's characterisation: limitation to linear conductors}
Characterization of minimal excitations (we focus here on "electron" type ones) by L. Levitov {\it et al}\cite{keeling_06_ivanov} through the PASN is based on the central inequality in Eq.\eqref{levitov_inequality}.
First, the authors impose an injected charge per period $Q_{cycle}= Ne$. As they assume that $I(t)=\partial_tQ(t)=e^2 V(t)/h$ (for a linear ballistic conductance with non-interacting electrons), one has $Q_{cycle}=({e^2}/h)\int _0^{T_0}dt\,V(t)= ({e^2}/h)T_0V_{dc}$, controlled by the dc component of the voltage $V(t)$ only. This leads to the condition $V_{dc}=Nh/(T_0 e)$ or, taking $e^*=e$ in Eq.\eqref{josephson}, to $\omega_J=N\Omega_0$.
Secondly, according to Eq.\eqref{levitov_inequality}, the voltage which minimizes the PASN by injecting well-defined electronic excitations must ensure the equality $S_{ph}(\omega_J)= S_{dc}(\omega_J)$, the lower bound of the PASN. This requires that the Fourier components $p_l$ of ${p}(t)=e^{-i\varphi(t)}$ obey:
\begin{equation}\label{p_l_lorentz}
l<-N\implies p_l=0.
\end{equation}
For that, the total voltage must be formed by a series of lorentzian pulses centered at $kT_0$ with a width $2W$, so that the phase derivative verifies (see Eq.\eqref{eq:JR}; note that $\int_0^{T_0} dt\, \partial_t\varphi{(t)}=0$):
\begin{equation}\label{V_lorentzian}
\partial_t\varphi(t)=\frac{2N}{W}\sum_{k=-\infty}^{\infty}\frac{1} {1+(t-kT_0)^2/W^2}-N\Omega_0.
\end{equation}
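The support property in Eq.\eqref{p_l_lorentz} can be verified numerically from the phase itself. The Python sketch below (with $N=1$ and an illustrative ratio $W/T_0$) integrates the pulse train to obtain $\varphi(t)$, assuming the standard leviton normalization in which each pulse carries a phase winding $2\pi N$ (so that $\int_0^{T_0}dt\,\partial_t\varphi=0$), and checks that $P_l=|p_l|^2$ vanishes for $l<-N$ while $\sum_l P_l=1$ and $\sum_l l P_l=0$:

```python
import cmath
import math

N = 1                 # charge N per period
T0 = 2 * math.pi      # period (units where Omega_0 = 1)
W = 0.5               # half-width of the lorentzian pulses
K = 400               # truncation of the pulse-train sum over k

def phi(t):
    # Antiderivative of the lorentzian train minus its dc part:
    # phi(t) = 2N sum_k [arctan((t - k*T0)/W) + arctan(k*T0/W)] - N*t
    s = sum(math.atan((t - k * T0) / W) + math.atan(k * T0 / W)
            for k in range(-K, K + 1))
    return 2 * N * s - N * t

NPTS = 2048
DT = T0 / NPTS
samples = [cmath.exp(-1j * phi(k * DT)) for k in range(NPTS)]  # p(t) over one period

def P_weight(l):
    p_l = sum(cmath.exp(1j * l * k * DT) * samples[k] for k in range(NPTS)) * DT / T0
    return abs(p_l) ** 2

P = {l: P_weight(l) for l in range(-5, 31)}
total = sum(P.values())
first_moment = sum(l * P[l] for l in P)
```

Only the weights with $l\geq -N$ survive, reflecting the analyticity of $p(t)$ in the lower half plane; the vanishing first moment reflects $\int_0^{T_0}dt\,\partial_t\varphi=0$.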
Nonetheless, such a characterisation requires the current to be linear, and thus does not apply to a QPC in the FQHE with a non-linear dc current, contrary to what is claimed in Ref.\onlinecite{keeling_06_ivanov}.
Let us give three reasons for that.
First, the injected charge corresponds to the PAC in Eq.\eqref{Iphoto}, which, for a non-linear dc current, has a non-trivial functional dependence on the ac voltage \cite{ines_PRB_2019}.
Second, let us adopt the lorentzian pulses, and apply Eq.\eqref{p_l_lorentz} to the FR in Eq.\eqref{FDT2_periodic_zero} :
\begin{subequations}
\begin{align}
\label{S_frequency_S_dc_zero_lorentzian}
{S}_{ph}(N\Omega_0)&=\sum_{l\geq -N } P_l\, S_{dc}\left((N+l)\Omega_0\right).
\end{align}
\end{subequations}
The equality ${S}_{ph}(\omega_J)=S_{dc}(\omega_J)$, given a poissonian dc noise, requires in general a linear dc current (notice that one has to add $2k_BT\,G_{dc}(T)P_{-N}$ on its r.h.s., in view of Eq.\eqref{FDT2_periodic_zero_thermal_n}).
Third, the authors were not aware of an implicit hypothesis underlying the inequality in Eq.\eqref{levitov_inequality}, namely that the dc current must be linear. So it cannot be generalized to non-linear conductors, such as the SIS junction we considered in subsection \ref{subsec_SIS}.
\subsection{Super-Poissonian to poissonian PASN: minimal excitations}
We have shown in Eq.\eqref{super_poisson} that the PASN is universally super-poissonian, whatever the initial NE state. This is a first central ingredient of our alternative path. The second one is to define minimal excitations as those for which the PASN becomes poissonian, thus for which equality is reached in Eq.\eqref{super_poisson}. For that we need to specify to an initial many-body ground state. We focus, for simplicity, on a periodic $p(t)$ with $|p(t)|=1$.
Instead of solving for the voltage, we gain generality by reasoning in terms of $\varphi(t)$ and the dc frequency $\omega_J$ (the relations in Eqs.\eqref{josephson},\eqref{eq:JR} are not systematic). Now $\omega_J$, which doesn't fix the transferred charge, is not fixed but rather has to be determined, on the same level as $\varphi({t})$, by requiring equality in Eq.\eqref{super_poisson}. For that, we write Eqs.\eqref{FDT2_periodic_zero_thermal},\eqref{Iphoto_periodic} in the limit of a strictly zero temperature:
\begin{subequations}
\begin{align}
\label{S_I_I_+_I_-}
{S}_{ph}(\omega_J)&=e^*[I_{ph,+}(\omega_J)-I_{ph,-}(\omega_J)],\\
{I}_{ph}(\omega_J)&=I_{ph,+}(\omega_J)+I_{ph,-}(\omega_J).
\end{align}
\end{subequations}
We have separated ${I}_{ph,\pm}(\omega_J)=\sum_{\pm(\omega_J+l\Omega_0) \geq 0} {P}_l \;I_{dc}(\omega_J+l\Omega_0)$, the contributions to the PAC generated by positive or negative effective dc drives, respectively. We have used the fact that $I_{dc}(\omega_J=0)=0$ and $\omega_J I_{dc}(\omega_J)\geq 0$ for a thermal distribution, \cite{ines_PRB_2019} so that $\pm{I}_{ph,\pm}(\omega_J)\geq 0$.
Therefore the poissonian limit is reached whenever $I_{ph,+}(\omega_J)=0$ or $I_{ph,-}(\omega_J)=0$. We focus here on the condition $I_{ph,-}(\omega_J)=0$. If it has to be ensured whatever the profile of $I_{dc}$, it requires that ${P}_l=0$ for all $l$ such that $\omega_J+l\Omega_0<0$. One can then show, using arguments similar to those by L. Levitov {\it et al},\cite{keeling_06_ivanov} that the phase must have the form in Eq.\eqref{V_lorentzian}, and that $\omega_J=N\Omega_0$, due to analytic properties of ${p}(t)$ in the complex plane.
Therefore, we get, from Eq.\eqref{S_frequency_S_dc_zero_lorentzian}:
\begin{equation}{S}_{ph}(N\Omega_0)=e^*|I_{ph}(N\Omega_0)|.\label{poisson_lorentz}\end{equation}
This poissonian regime indicates that the PASN reduces to the average transferred charge, given by $e^*|I_{ph}(N\Omega_0)|$, now generated only by photon absorption by the many-body ground state. Still, since temperatures are always finite, even in the present NE quantum regime with $T\ll \hbar\Omega_0/k_B$ one has: $S_{ph}(N\Omega_0)=e^*|I_{ph}(N\Omega_0)|+2k_B TP_{-N}G_{dc}(T)$ (see Eq.\eqref{FDT2_periodic_zero_thermal_n}).
Similarly, in case one superimposes a finite dc frequency $\omega_{dc}$ on top of $N\Omega_0$, one goes back to a super-poissonian PASN. Let us give an example for $N=1$, decreasing the dc drive by a frequency $\omega_{dc}$ verifying $k_BT /\hbar \ll \omega_{dc}<\Omega_0$. Then we get: $${S}_{ph}(\Omega_0-\omega_{dc})=e^*|I_{ph}(\Omega_0-\omega_{dc})|+2e^*P_{-1} |I_{dc}(-\omega_{dc})|.$$
This analysis provides another example at odds with Eq.\eqref{levitov_inequality}, considering again a SIS junction in the ground state (see Fig.\ref{fig:SIS_I}). As mentioned in subsection \ref{subsec_SIS}, a sine voltage reduces the PAC in the range $\omega_C<\omega_J<\omega_C+\Omega_0$ compared to $I_{dc}(\omega_J)$, \cite{tucker} a result one can extend to an arbitrary profile of the voltage. Since we showed that lorentzian pulses generate a poissonian PASN (there is no thermal contribution here, as $G_{dc}(T)=0$), one has ${S}_{ph}(N\Omega_0)<S_{dc}(N\Omega_0)$ if $N$ verifies $0<N\Omega_0-\omega_C<\Omega_0$.
Notice also that the poissonian limit can be reached by a weak sine voltage applied to the SIS junction, thus is not exclusive to lorentzian pulses.
Our analysis can be extended to a non-periodic $p(t)$ with a possible time-dependent modulus $|{p}(t)|$, where similar analytic properties of ${p}(\omega)$ lead to a poissonian PASN.
We finally insist that for a NE initial distribution, the inequality in Eq.\eqref{super_poisson} remains strict even for Lorentzian pulses.
\subsection{FQHE: non-trivial charge of minimal excitations and superpoissonian PASN}
In the FQHE, the renormalization by a fractional charge $e^*$ in front of the current arises from the fact that $A$ translates the charge by $e^*$, which is chosen as the dominant process with the lowest dimension $\delta$. But contrary to the initial claim of L. Levitov {\it et al}, the lorentzian pulses cannot carry a charge $Ne^*$ per cycle. It was shown in Refs.\onlinecite{glattli_levitons_physica_2017,martin_sassetti_prl_2017} that one still has $Q_{cycle}=Ne$. Nonetheless, the given proof was restricted to Laughlin states, $\nu=1/(2n+1)$, for which $e^*=\nu e$.\\
Let us consider hierarchical states, such as the Jain series in the recent experimental works (probing $e^*=e/5$ at $\nu=2/5$ in Ref.\onlinecite{glattli_photo_2018} or $e^*=e/3$ at $\nu=2/3$ in Ref.\onlinecite{ines_gwendal}). In order to reach an almost poissonian PASN, one needs to assume that the lorentzian pulses are not deformed at the level of the QPC. We also assume that Eqs.\eqref{josephson},\eqref{eq:JR} hold, where $e^*$ enters, so that the condition $\omega_J=N\Omega_0$ means that the value of $V_{dc}$ for a given frequency $\Omega_0$ depends on $e^*$.
Given this condition, we now determine the charge carried by a minimal injected excitation in the region before the QPC, where the chiral current reduces to the first term on the r.h.s. of Eq.\eqref{Itotal}, thus for $x<0$. The charge per cycle is given by (as $\omega_J=N\Omega_0=2N\pi /T_0$):
\begin{equation}
Q_{cycle}=\nu \frac{e^2}h\int _0^{T_0}dtV(t) =Ne\frac{\nu e }{e^*}.
\end{equation}
This suggests that $Q_{cycle}$ gives possible access to $e^*$, as one generally determines $\nu$ from conductance plateaus. When $\nu$ is not a simple fraction, bosonisation allows for many models whose dominant backscattering process (with the smallest scaling dimension \cite{note_scaling}) can carry different charges $e^*$. For instance, for $\nu=2/(2n+1)$ with integer $n$, some models lead to $e^*=e/(2n+1)$,\cite{cheianov_tunnel_FQHE_2015} so that $Q_{cycle}=2Ne$ is an integer, the same charge as in the IQHE at $\nu=2$.
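As a consistency check of this counting (an explicit substitution not spelled out above), take $\nu=2/(2n+1)$ and $e^*=e/(2n+1)$ in the expression for $Q_{cycle}$:
\[
Q_{cycle}=Ne\,\frac{\nu e}{e^*}=Ne\,\frac{2e/(2n+1)}{e/(2n+1)}=2Ne,
\]
an integer charge per cycle, independent of $n$.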
Now consider the weak backscattering regime, at energies above $k_BT_B$. If one uses the effective theories leading to power-law behavior, the thermal contribution to the PASN in Eq.\eqref{FDT2_periodic_zero_thermal_n} cannot be ignored. Therefore we get a strictly super-poissonian PASN, contrary to the claim in Ref.\onlinecite{martin_sassetti_prl_2017}: $S_{ph}(N\Omega_0)=e^*|I_{ph}(N\Omega_0)|+2k_B TP_{-N}G_{dc}(T)$, where $G_{dc}(T)$ is a power of $T$. A more detailed comparison between the two terms will be treated separately.\cite{Imen_ines}
\section{Other applications}
\label{sec_applications}
\subsection{Shot-noise spectroscopy}
\label{subsec_spectroscopy}
In general, the transfer rates $\bar{P}(\omega)$ in Eq.\eqref{Sphoto} might be unknown, as they can be affected, for instance, by interactions or by NE or fluctuating sources. Thus one possible advantage of the FR in Eq.\eqref{Sphoto} resides in shot-noise spectroscopy. A protocol discussed in Ref.\onlinecite{glattli_degiovanni_PRB_2013} is nonetheless restricted to non-interacting electrons, a linear dc current and periodic voltages. The task is facilitated here by the compact form of the FR in Eq.\eqref{Sphoto} in terms of the dc noise $S_{dc}$, which has a non-trivial behavior in non-linear systems. There are in addition situations where the sources to be probed are non-periodic, such as random non-gaussian radiation (see Eqs.\eqref{p_average},\eqref{Sphoto_cumulant}). Without knowledge of the underlying model, one could measure the noise both in the absence and in the presence of the sources, then extract $\bar{P}(\omega')$ by varying the dc drive $\omega_J$.
Indeed, as $\bar{P}(\omega')=|p(\omega')|^2$ in Eq.\eqref{Sphoto} hides the phase of $p(\omega')$, it is more efficient to consider the PASN at a finite frequency $\omega$, where non-diagonal terms $p(\omega')p^*(\omega'+\omega)$ enter (see Ref.\onlinecite{ines_cond_mat}). Interestingly, we have also obtained these non-diagonal terms in the differential of the PASN with respect to $\varphi(\omega)$, given by Eq.\eqref{FDT2_periodic_zero_derivative} (or with respect to the ac voltage in case Eq.\eqref{eq:JR} holds). In order to evaluate differentials, the phase of ${p}(t)$ has to be known, so that this procedure applies when one needs to determine its time-dependent amplitude (e.g. for a tunneling or a Josephson energy). Nonetheless, one could superimpose a controlled phase $\varphi_{a}(t)$ on an unknown phase $\varphi(t)$, then take the differential in Eq.\eqref{FDT2_periodic_zero_derivative} in the limit $\varphi_{a}(t)=0$, such that $p(\omega')$ on the r.h.s. becomes determined only by $\varphi(t)$. Notice that one can also superimpose a periodic $\varphi_a(t)$ on top of a non-periodic $\varphi(t)$.
One could also probe directly a small enough $\varphi(t)$, using the second-order expansion in Eq.\eqref{FDT2_periodic_zero_second_derivative_zero_ac}. This is especially easy when one applies a sine phase without knowing its amplitude $\varphi_{ac}$, for instance renormalized by interactions while keeping the same form: $\varphi(t)=\varphi_{ac}\cos \Omega_0t$. Then, given an arbitrary $\omega_J$, one needs to measure both the PASN and the dc noise and to consider the ratio in Eq.\eqref{expansion_sine}.
Another spectroscopy scheme, valid in the case of a thermal distribution, could be based on exploiting the thermal contribution on the r.h.s. of Eq.\eqref{FDT2_periodic_zero_thermal_n}, $2P_{-N}k_B T G_{dc}(T)$. For each $N$ (thus a dc voltage), looking at the unique term in the noise which depends on $T<\hbar \Omega_0/k_B$ provides $P_{-N}$, provided one knows $G_{dc}(T)$.
\subsection{Robust determination of the fractional charge}
\label{subsec_fractional_charge}
An important family of applications of our approach consists of the robust methods we have proposed for the determination of the fractional charge in the FQHE,\cite{ines_eugene,ines_cond,ines_PRB_2019,ines_PRB_R_noise_2020} implemented experimentally to determine $e^*=e/5$ at $\nu=2/5$ in Ref.\onlinecite{glattli_photo_2018} and $e^*=e/3$ at $\nu=2/3$ in Ref.\onlinecite{ines_gwendal}. They are more robust than the dc poissonian shot-noise \cite{saminad} in Eq.\eqref{poisson_T}. In particular, they require neither thermalized states nor high voltages, which could induce heating. They are based on looking at the noise argument rather than at a proportionality factor, as the key step is to determine the Josephson frequency $\omega_J$, which yields the charge $e^*$ in case the relation in Eq.\eqref{josephson} holds. The method based on the NE FR in Eq.\eqref{Sphoto} works better if the dc noise has a singular behavior close to zero, which corresponds to a locking: $\omega_J=N\Omega_0$. Such a singularity becomes more pronounced by taking the second derivative $\delta^2S_{ph}(\omega_J)/\delta\omega_J^2$, formed by a series of peaks around $N\Omega_0$. Nonetheless, if one deals with an initial thermal $\rho_0$, a low enough temperature is required to preserve these peaks, which would otherwise be rounded by thermal effects when $|\omega_J-N\Omega_0|< k_BT/\hbar$ (see the second term on the r.h.s. of Eq.\eqref{FDT2_periodic_zero_thermal_n}).
We propose here a more direct method which relies neither on such a singular behavior nor on low temperatures, and is equally valid for a NE $\rho_0$. It is based on the FR for the second differential of the PASN in Eq.\eqref{FDT2_periodic_zero_second_derivative}. By comparing both sides, here determined by a unique function, the PASN, one can infer the value of $\omega_J$ that ensures the equality. This is easier in the limit of a small cosine modulation, using Eq.\eqref{FDT2_periodic_zero_second_derivative_zero_ac}: one can plot the difference of the two sides as a function of $\omega_J$ and look at the value of $\omega_J$ for which it vanishes.
We have also derived a similar relation for the PAC.\cite{ines_eugene} Nonetheless, the PAC becomes trivial for a linear dc current (see Eq.\eqref{photocurrent_linear}), as is often the case in the experimental works aiming to determine the fractional charge,\cite{glattli_photo_2018,ines_gwendal} thus motivating their recourse to the noise-based methods we proposed.\cite{ines_cond,ines_degiovanni_2016} In those works the measured dc current does not obey the power-law behavior predicted by the effective theories. This illustrates precisely the power of such methods, which are independent of the underlying microscopic description of the edge states, as long as it can be cast in the form of Eq.\eqref{Hamiltonian}.
\section{Discussion and conclusion}
We have studied the noise generated by radiation fields operating in a large family of physical systems, such as a QPC in the FQHE or the IQHE with interacting edges, as well as quantum circuits formed by a JJ, NIS or dual phase slip JJ strongly coupled to an electromagnetic environment. We have related the PASN in a universal manner to its counterpart in a dc regime characterized by a NE distribution, similarly to relations obeyed by the finite-frequency current for ac drives \cite{ines_PRB_2019} and finite-frequency noise in the dc regime \cite{ines_PRB_R_noise_2020}.
The NE FRs unify higher dimensional and one-dimensional physics, though the latter is atypical as it is drastically affected even by weak interactions. They also unify previous works based on specific models and an initial thermal equilibrium.\cite{glattli_degiovanni_PRB_2013,glattli_levitons_physica_2017,crepieux_photo,martin_sassetti_prl_2017}
We can transpose to the PASN various methods based on the PAC and addressed in Refs.\onlinecite{ines_eugene,ines_PRB_2019,ines_eugene_detection}, in particular to probe the fractional charge or to detect current cumulants of a non-gaussian source, though the implementation is not identical due to different properties of current and noise. Indeed, in case the dc current is linear and $|\bar{p}(t)|=1$, the PAC becomes trivially equal to the dc current and the PASN offers a non-trivial alternative, as is the case in two situations arising in the IQHE and FQHE.
On the one hand, interactions between edge states still play an important role in the IQHE, which is addressed in many works through the plasmon scattering approach. \cite{ines_schulz_group,ines_epj,elec_qu_optics_degiovanni_2017_cite,plasmon_ines_IQHE_HOM_feve_Nature_2015_cite,sassetti_levitons_IQHE_2020} But bosonized models justify the linearity of the dc current through a spatially local QPC, which justifies the recourse to a scattering approach for independent electrons in those works. Notice that none of these hypotheses is required within our approach, as we can deal with a possibly non-linear dc current, which is the signature of the QPC.
On the other hand, in the FQHE, the measured dc current is quite often weakly non-linear in experiments aiming to probe the fractional charge \cite{glattli_photo_2018,ines_gwendal} and statistics.\cite{fractional_statistics_gwendal_science_2020} As the PAC is trivial, the FR for the PASN, already obtained in Ref.\onlinecite{ines_cond_mat}, has been fruitful for an experimental determination of the fractional charge at $\nu=2/5$ \cite{glattli_photo_2018} as well as for the analysis of two-particle collision experiments. \cite{glattli_imen_2022,Imen_thesis}
In those experimental works where effective theories such as the TLL are not in accordance with the observed dc current, our methods have the advantage of being robust with respect to the underlying microscopic description and non-universal features, such as edge reconstruction or the absence of edge equilibration. In the present paper, we have proposed a more advantageous method based on a second differential of the PASN, which is easier to exploit in the limit of a weak sine voltage.
We have also discussed how the NE FR is potentially relevant to shot-noise spectroscopy as well as to current cumulant detection. \cite{ines_PRB_2019,ines_eugene_detection}
We have found that the excess noise can be negative in a non-linear SIS junction. Thereby the qualification ``photo-assisted'' is not universally relevant: the PASN can be reduced by an ac voltage superimposed on a dc one. This feature is at odds with a theorem by L. Levitov {\it et al} \cite{keeling_06_ivanov} which is restricted to a linear dc current. Such a theorem was at the heart of the characterization of minimal excitations for an initial thermal equilibrium at low temperatures. We have provided an alternative characterization: while the PASN is super-poissonian whatever the NE initial distribution, the lorentzian profile of the voltage is precisely the one which leads to a poissonian PASN when the system is in the many-body ground state. For hierarchical states of the FQHE, we showed that the charge carried by minimal excitations still depends on the fractional charge, and that the PASN is superpoissonian, as will be presented in more detail in a separate publication.
Finally, compared to the dc regime, additional limitations of the approach arise. These are mainly due to possible couplings to time-dependent forces or boundary conditions which have to be incorporated into a unique complex function, for instance through unitary transformations of the Hamiltonian. This can be checked in the presence of a supplementary tunneling point between edges with a different ac voltage, which one could absorb through a translation of the bosonic fields, but seems more difficult to ensure for the multiple mixing points addressed in Ref.\onlinecite{glattli_imen_mixing_2022}.
Also, an interferometer with multiple QPCs driven by different ac voltages is not expected to fall within our domain of validity. The present approach would still offer a test in the limiting cases of identical ac voltages, or of a dominant tunneling through one QPC.
In the quantum Hall regime with an almost open QPC, one usually measures correlations between chiral edge currents. These are determined by the backscattering PASN owing to current conservation at zero frequency. But this might need to be completed \cite{trauzettel_group_bis,lee_levitov,photo_grabert_noise} in NE setups, where the perturbative approach can still be useful, as will be addressed in future works.
An important open question consists in finding criteria for minimal excitations which would go beyond the second-order perturbation theory we have carried out.
Acknowledgments: The author thanks R. Deblock, P. Degiovanni, G. F\`eve, I. Taktak, C. D. Glattli and B. Dou\c {c}ot for discussions. She also thanks E. Sukhorukov for inspiring discussions during past collaborations.
\section{Perturbative String Theory}
\subsection{Basic features}\label{subsec:basic}
Heuristically one may introduce string theory\cite{intro}\ by
describing tiny loops sweeping through $D$-dimensional
space-time. Being one-dimensional extended objects, they trace out
two-dimensional ``world-sheets'' $\Sigma$ that may be viewed as
thickened Feynman diagrams:
\epsfysize=.8in
{\centerline{\epsffile{thicken.eps}}}
Accordingly, perturbative string theories are defined by
two-dimensional field theories living on such Riemann surfaces.
Suitable two-dimensional field theories can be made out of a great
variety of building blocks, the simplest possibilities being given by
free two-dimensional bosons and fermions, e.g., $X_i(z)$, $\psi_i(z)$.
By combining the two-dimensional building blocks according to certain
rules, an infinite variety of operators that describe \/{\em
space-time} fields can be constructed. Typically, the more complex the
operators become that one builds, the more massive the associated
space-time fields become. In total one obtains a finite number of
massless fields, plus an infinite tower of string excitations with
arbitrary high masses and spins. Schematically, among the most generic
massless operators are:
\begin{equation}
\begin{array}{rcl}
{\rm Dilaton\ scalar} &
\phi &
\eta^{\mu\nu}\bar\partial X_\mu\partial X_\nu\qquad (\mu,\nu=1,...,D)
\\
{\rm Graviton} &
g_{\mu\nu} &
\bar\partial X_\mu\partial X_\nu
\\
{\rm Gauge\ field} &
A_{\mu}^a &
\bar\partial X_\mu\partial X^a
\\
{\rm Higgs\ field} &
\Phi^{ab} &
\bar\partial X^a\partial X^b
\end{array}
\end{equation}
We see here how simple combinatorics provide an intrinsic and
profound unification of particles and their interactions: from the 2d
point of view, a gauge field operator is very similar to a graviton
operator, whereas the $D$-dimensional space-time properties of these
operators are drastically different. An important point is that
gravitons necessarily appear, which means that this kind of string theory {\em implies} gravity. This is one of the main motivations for
studying string theory. Indeed, it is believed by many physicists
that string theory provides the only way to formulate a consistent
quantum theory of gravity.
Fermions (``electrons'' and ``quarks'') are obtained by changing the
boundary conditions of the two dimensional fermions $\psi$ on $\Sigma$.
These can be anti-periodic or periodic (the fields change sign or not
when transported around $\Sigma$), giving rise to fermionic or bosonic
fields in space-time, respectively. It turns out that whenever the
theory contains space-time fermions, it must have two dimensional
supersymmetry, and this is why such string theories are called {\em
superstrings}. It is however not so that the $D$-dimensional space-time
theory must be supersymmetric, although practically all models studied
so far do have supersymmetry to some degree. But this is to simplify
matters and keep perturbation theory under better control,\footnote{An
often-cited argument for low-energy supersymmetry is that perturbation
theory of non-supersymmetric strings tends to be unstable due to vacuum
tadpoles. However, even when starting with a supersymmetric theory,
this problem will eventually appear, namely when supersymmetry is
(e.g., at the weak scale) spontaneously broken. We therefore have at
any rate to find mechanisms to stabilize the perturbation series, which
means that vacuum tadpoles are no strong reason for beginning with a
supersymmetric theory in the first place. See especially
refs.\cite{GM,KD}\ for a vision of how strings could be more clever and
avoid supersymmetry (similar ideas may apply to the ``hierarchy
problem'' as well); for a different line of thoughts, see
ref.\cite{cosmo}.} much like the study of supersymmetric Yang-Mills
theory makes the analysis much easier as compared to non-supersymmetric
gauge theories. String theory does not intrinsically predict space-time
supersymmetry, although it arises quite naturally.
That very simple, even free, two-dimensional field theories generate highly non-trivial effective dynamics in $D$-dimensional space-time can be visualized by looking at the perturbative effective action:
\epsfysize=2.5in
\centerline{\epsffile{Seff.eps}}
Here, $g_{\mu\nu},A_\mu,...$ are space-time fields that provide the
background in which the strings move, and ${\cal
L}_{2d}(\psi,X,..,g_{\mu\nu},A_\mu,..)$ is the lagrangian containing
the two-dimensional fields, as well as the background fields which
are simply coupling constants from the two dimensional point of view.
The 2d fields are integrated out, and one also sums over all the
possible two-dimensional world-sheets $\Sigma_\gamma$ (as well as
over the boundary conditions of the $\psi(z)$). This corresponds to
the well-known loop expansion of particle QFT. Note, however, that
there is only {\em one} ``diagram'' at any given order in string
perturbation theory.
The topological sum is weighted by $e^{-\phi\chi(\Sigma_\gamma)}$,
where $\phi$ is the dilaton field and where the Euler number
$\chi(\Sigma_\gamma)\equiv 2-2\gamma$ is the analog of the
loop-counting parameter. The coupling constant for the perturbation
series thus is
$$
g=e^{\langle\phi\rangle},
$$
which must be small in order for the perturbation series to make
sense. We see here that a coupling constant is given by an {\em a
priori} undetermined vacuum expectation value of some field, and this
reflects a general principle of string theory.
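Explicitly, writing $Z_\gamma$ (a schematic notation, not used elsewhere in the text) for the contribution of the genus-$\gamma$ world-sheet, the weight $e^{-\phi\chi(\Sigma_\gamma)}$ with $\chi(\Sigma_\gamma)=2-2\gamma$ organizes the topological sum as a power series in $g$:
$$
\sum_{\gamma=0}^{\infty} e^{-\phi\,\chi(\Sigma_\gamma)}\,Z_\gamma
\;=\;\frac{1}{g^{2}}\,Z_{0}+Z_{1}+g^{2}Z_{2}+\dots\,,
$$
so that each additional handle, i.e., each string ``loop'', costs a factor of $g^{2}$.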
In addition, the topological sum is augmented by integrals over the
inequivalent shapes that the Riemann surfaces can have, and this
corresponds to the momentum integrations in ordinary QFT. There may be
divergences arising from degenerate shapes, but these divergences can
always be interpreted as IR divergences in the space-time sense. In
particular, the well-known logarithmic running of gauge couplings in
four dimensional string theories arises from such IR divergences.
The absence of genuine UV divergences was another early motivation of
string theory, and especially makes consistent graviton scattering
possible: remember that ordinary gravity is not renormalizable and one
cannot easily make sense out of graviton loop diagrams. A very
important point is the origin of this well-behavedness of the string
diagrams. It rests on {\em discrete reparametrizations} of the
string-world sheets $\Sigma_\gamma$ (``modular invariance''), which
have no analog in particle theory. The string ``Feynman rules'' are
very different, and cannot even be approximated by particle QFT. String
theory is therefore {\em more} than simply combining infinitely many
particle fields, and it is this what makes a crucial difference. The
whole construction is very tightly constrained: modular invariance
determines the whole massive spectrum, and taking any single state away
from the infinite tower of states would immediately ruin the
consistency of the theory.
So far, we described the ingredients that go into computing the
perturbative effective action in $D$-dimensional space-time. Now we
focus on the outcome. As indicated in (2), the effective action
typically contains (besides matter fields) general relativity and
non-abelian gauge theory, plus stringy corrections thereof. These
corrections are very strongly suppressed by inverse powers of the
Planck mass, $m_{{\rm Planck}}\sim 10^{19}$GeV, which is the
characteristic scale in the theory. The value of this scale is
dictated by the strength of the gravitational coupling constant, and
governs the size or tension of the strings, as well as the level
spacing of the excited states. That strings are so difficult to
observe, and characteristic corrections so hard to see, stems from
the fact that the gravitational coupling is so small, and this is not the string physicists' fault -- they have no option to change that!
That general relativity, with all its complexity, just pops out of thin air (arising from, e.g., a free 2d field theory), may sound
like a miracle. Of course this does not happen by accident, but is
bound to come out. The important point here is that there is a special
property that the relevant two dimensional theories must have for
consistency, and this is {\em conformal invariance}. This is a quite
powerful symmetry principle, which guarantees, via Ward identities,
general coordinate and gauge invariance in space-time -- however, only if $D=10$.
As we will see momentarily, this does not imply that superstrings
must live exclusively in ten dimensions, but it means that
superstrings are most naturally formulated in ten dimensions.
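The critical value $D=10$ can be made plausible by a standard central-charge counting (a sketch, not spelled out in the text above): each space-time dimension contributes $c=1$ from the boson $X^\mu$ and $c=1/2$ from the fermion $\psi^\mu$, while the reparametrization and superconformal ghosts contribute $c=-26+11=-15$; cancellation of the conformal anomaly then requires
$$
\frac{3}{2}\,D-15=0\qquad\Longrightarrow\qquad D=10\,.
$$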
\subsection{10d Superstrings and their compactifications}
Two-dimensional field theories can have two logically independent,
namely holomorphic and anti-holomorphic pieces. For obtaining $D=10$
string theories, each of these pieces can either be of type superstring
``$S$'' or of type bosonic string ``$B$'' (with extra $E_8\times E_8$
or $SO(32)$ gauge symmetry). By combining these building blocks in
various ways,\footnote {There are further, but non-supersymmetric
theories in $D=10$.} plus including an additional ``open'' string
variant, one obtains the following exhaustive list of supersymmetric
strings in $D=10$:
\begin{center}
\begin{tabular}{|c c c l|}
\hline
Composition & Name & Gauge Group & Supersymmetry\\
\hline
$S\, \otimes \bar S^\dagger$ & Type IIA & \ \ $U(1)$ & non-chiral
$N=2$ \\ $S\, \otimes \bar S$ & Type IIB & \ \ - & chiral $N=2$ \\
$S\, \otimes \bar B$ & Heterotic & $E_8\times E_8$ & chiral $N=1$ \\
$S\, \otimes \bar B'$ & Heterotic' & $SO(32)$ & chiral $N=1$\\ $(S\,
\otimes \bar S)/Z_2$ & Type I (open) & $SO(32)$ & chiral $N=1$ \\
\hline
\end{tabular}
\end{center}
Since these theories are defined in terms of 2d world-sheet degrees of
freedom, which is intrinsically a perturbative concept in terms of 10d
space-time physics, all we can really say is that there are five
theories
in ten dimensions that are different in {\em perturbation theory}.
Their perturbative spectra are indeed completely different, their
number of supersymmetries varies, and also the gauge symmetries are
mostly quite different.
Now, we would be more than glad if strings in lower dimensions remained as simple as they are in $D=10$; string theories in $D=4$, especially, turn out to be much more complicated. Specifically, the
simplest way to get down to four dimensions is to assume/postulate that
the space-time manifold is not simply $\IR^{10}$, but $\IR^4\times
X_6$, where $X_6$ is some compact six-dimensional manifold. If it is
small enough, then the theory looks at low energies, i.e., at distances
much larger than the size of $X_6$, effectively four-dimensional. This
is like looking at a garden hose from a distance, where it looks
one-dimensional, while upon closer inspection it turns out to be a
three-dimensional object.
The important feature is that such ``compactified'' theories are at low energies exactly of the same type as the standard model of particle physics (or some supersymmetric variant thereof). That is,
generically such theories will involve non-abelian gauge fields, {\em
chiral} fermions and Higgs fields besides gravitation, and all of this
coupled together in a (presumably) consistent and truly unified
manner! That theories able to do this have been found at all is certainly reason for excitement.
But apart from this generic prediction, more detailed predictions, like
the precise gauge groups or the massless matter spectrum, cannot be
made at present -- this is the dark side of the story. The reason is
that the compactification mechanism makes the theories {\em enormously}
more complicated and rich than they originally were in $D=10$. This is
intrinsically tied to properties of the possible compactification
manifolds $X_6$:
\noindent {\bf i)} There is a large number of choices for the
compactification manifold. If we want to have $N=1$ supersymmetry in
$D=4$ from e.g.\ a heterotic string, then $X_6$ must be a
``Calabi-Yau'' space\cite{brian}, and the number of such spaces is
perhaps $10^4$. On top of that, one has to specify certain instanton
configurations, which multiplies this number by a very large factor.
{\em A priori}, each of these spaces (together with a choice of
instanton configuration) leads to a different perturbative matter and
gauge spectrum in four dimensions, and thus gives rise to a different
four dimensional string theory.
\noindent {\bf ii)}
Each of these manifolds by itself has typically a large number of
parameters; for a given Calabi-Yau space, the number of parameters can
easily approach the order of several hundred.
These parameters, or ``moduli'', determine the shape of $X_6$, and
correspond physically to vacuum expectation values of scalar
(``Higgs'') fields, similar to the string coupling $g$ discussed above.
Changing these VEVs changes physical observables at low energies, like
mass parameters, Yukawa couplings and so on. They enter in the
effective lagrangian as free parameters, and are not determined by any
fundamental principles, at least as far as is known in perturbation
theory. Their values are determined by the choice of vacuum, much like
the spontaneously chosen direction of the magnetization vector in a
ferromagnet (that is not determined by any fundamental law either). The
hope is that after breaking supersymmetry, the continuous vacuum
degeneracy would be lifted by quantum corrections (which is
typical for non-supersymmetric theories), so that ultimately there
would be fewer accessible vacua.
\begin{figure} \epsfysize=1.8in \centerline{\epsffile{CYmod.eps}}
\caption{The parameter space of the low energy effective field theory
of a string compactification is essentially given by the
moduli space (deformation space) of the compactification manifold.
This leads to a geometric interpretation of almost all of the
parameters and couplings. Shown is that with each point in the moduli
space ${\cal M}_X$, one associates a compactification manifold $X$ of
specific shape; the shape in turn determines particle masses, Yukawa
couplings etc. The picture will be refined below.
\hfill
\label{fig:CYmod}}
\end{figure}
The situation is actually worse than described so far,
because we have on top of points i) and ii):
{\bf iii)} There are five theories in $D=10$ and not just one, and a
priori each one yields a different theory in four dimensions for a
given compactification manifold $X_6$. If one of them would be the
fundamental theory, what then is the r\^ole of the others?
{\bf iv)} There is no known reason why a ten dimensional theory wants
at all to compactify down to $D=4$; many choices of space-time
background vacua of the form $\IR^{10-n}\times X_n$ appear to be on
equal footing.
All these points together form what is called the {\em vacuum degeneracy} problem of string theory -- indeed a very formidable problem, known for a decade or so. Summarizing, most of the physics
that is observable at low energies seems to be governed by the vacuum
structure and not by the microscopic theory, at least as far as we can
see today. Still, it is not the case that string theory makes no low-energy predictions; the theories are very finely tuned and
internal consistency still dramatically reduces the number of possible
low energy spectra and independent couplings, as compared to ordinary
field theory.
The recent progress in non-perturbative string theory does not solve
the problem of the choice of vacuum state either. The progress is
rather of conceptual nature and opens up completely new perspectives on
the very nature of string theory.
\section{Duality and non-perturbative equivalences}
Duality is the main new concept that has been stimulating the recent
advances in supersymmetric particle\cite{gaugerev} and string theory.
Roughly speaking, duality is a map between solitonic (non-perturbative,
non-local, ``magnetic'') degrees of freedom, and elementary
(perturbative, local, ``electric'') degrees of freedom. Typically,
duality transformations exchange weak and strong-coupling physics and
act on coupling constants like $g\rightarrow 1/g$. They are thus
intrinsically of quantum nature.
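The prototype is electric-magnetic duality in four dimensions (a schematic illustration, not specific to the string theories above): the Dirac quantization condition ties the electric coupling $e$ to the magnetic charge $g_m$,
$$
e\,g_m=2\pi n\qquad (n\ {\rm integer})\,,
$$
so exchanging electric and magnetic degrees of freedom maps $e\mapsto g_m\sim 2\pi/e$, inverting the coupling exactly as in $g\rightarrow 1/g$.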
Duality symmetries are most manifest in supersymmetric theories,
because in such theories perturbative loop corrections tend to be
suppressed, due to cancellations between bosonic and fermionic degrees
of freedom. Otherwise, observable quantities get so much
polluted by radiative corrections that the more interesting
non-perturbative features cannot be easily made out.
More precisely, certain quantities (e.g., Yukawa couplings) in a
supersymmetric low energy effective action are protected by
non-renormalization theorems, and those quantities are
typically {\em holomorphic} functions of the massless fields. As a
consequence, this allows one to apply methods of complex analysis,
and ultimately of algebraic geometry, to analyze the physical theory.
Such methods (where they can be applied) turn out to be much more
powerful than traditional techniques of quantum field theory, and
this was the technical key to the recent developments.
A typical consequence of the holomorphic structure is a continuous
vacuum degeneracy, arising from flat directions in the effective
potential. The non-renormalization properties then guarantee that this
vacuum degeneracy is not lifted by quantum corrections, so that
supersymmetric theories often have continuous {\em quantum moduli
spaces} $\cal M$ of vacua. In string theory, as mentioned above, these
parameter spaces can be understood geometrically as moduli spaces of
compactification manifolds.
\subsection{A gauge theory example}
One of the milestones in the past few years was the solution of the
(low energy limit of) $N=2$ supersymmetric gauge theory in four
dimensions.\cite{SW,SWreviews} Important insights that go beyond
conventional particle field theory have been gained by studying this
model, and this is why we briefly touch on it here. In fact, even though
this model is an ordinary gauge theory, the techniques stem from string
theory, which demonstrates that string ideas can be useful also for
ordinary field theory.
Without going too much into the details, simply note that $N=2$
supersymmetric gauge theory (here for gauge group $G=SU(2)$) has a
moduli space $\cal M$ that is spanned roughly by the vacuum expectation value of a complex Higgs field $\phi$. The relevant
holomorphic quantity is the effective, field dependent gauge coupling
$g(\phi)$ (made complex by including the theta-angle in its
definition). Each point in the moduli space corresponds to a
particular choice of the vacuum state. Moving around in $\cal M$ will
change many properties of the theory, like the value of the effective
gauge coupling $g(\phi)$ or the mass spectrum of the physical states.
An important aspect is that there are special regions in the moduli
space, where the theory behaves in a special way, i.e., becomes {\em
singular}. This is depicted in Fig.2.
More precisely, there are two different types of such singular regions.
Near $\langle\phi\rangle\rightarrow \infty$, the gauge theory is weakly
coupled since the effective gauge coupling becomes arbitrarily small:
$g(\phi)\rightarrow0$. In this semi-classical region, non-perturbative
effects are strongly suppressed and the perturbative definition of the
theory is arbitrarily good. That is, the instanton correction sum to
the left in Fig.2 gives a negligible contribution as compared to the
logarithmic one-loop correction (further higher loop corrections are
forbidden by the $N=2$ supersymmetry). Furthermore, the solitonic
magnetic monopoles that exist in the theory become very heavy and
effectively decouple.
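In formulas, the structure indicated to the left in Fig.2 is the
familiar weak coupling expansion of the effective coupling (a sketch;
normalization conventions vary):
$$
\tau(\phi)\ \sim\ {2i\over\pi}\,\ln{\phi\over\Lambda}\ +\
\sum_{\ell=1}^{\infty}c_\ell\,\Big({\Lambda\over\phi}\Big)^{4\ell}\ ,
\qquad \tau\ \equiv\ {\theta\over\pi}+{8\pi i\over g^2}\ ,
$$
where $\Lambda$ is the dynamically generated scale of the theory. The
$\ell$-instanton contribution is suppressed by $(\Lambda/\phi)^{4\ell}$,
which makes the series rapidly convergent for large
$\langle\phi\rangle$.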
\begin{figure} \epsfysize=3.0in \centerline{\epsffile{SW.eps}}
\caption{The exact quantum moduli space of $N=2$ supersymmetric
$SU(2)$ Yang-Mills theory has one singularity at weak coupling and two
singularities in the strong coupling region. The latter are caused by
magnetic monopoles becoming massless for the corresponding VEVs of the
Higgs field. One can go between the various regions by analytic
continuation, i.e., by resumming the non-perturbative instanton
corrections in terms of suitable variables. \hfill \label{fig:SW}}
\end{figure}
However, when we now start moving in the moduli space $\cal M$ away
from the weak coupling region, the non-perturbative instanton sum to
the left in Fig.2 converges less and less well, and the original
perturbative definition of the theory becomes worse and worse.
When we are finally close to one of the other two singularities in
Fig.2, the original perturbative definition is blurred out and does
not make sense any more. The problem is thus how to suitably {\em
analytically continue} the complex gauge coupling outside the region
of convergence of the instanton series. The way to do this is to
resum the instanton series in terms of another variable, $\phi_D$, to
yield another expression for the gauge coupling that is well defined
near $1/g\rightarrow0$. That is, there is a ``dual'' Higgs field,
$\phi_D$, in terms of which the dual gauge coupling, $g_D(\phi_D)$,
makes sense in the strong coupling region of the parameter space
$\cal M$. Indeed, $\phi_D$ becomes small in this region, so that the
infinite series for the dual coupling $g_D$ to the right in Fig.2
converges very well.
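Schematically (again up to convention dependent normalizations), the
dual expansion indicated to the right in Fig.2 reads
$$
\tau_D(\phi_D)\ \sim\ -{i\over\pi}\,\ln{\phi_D\over\Lambda}\ +\
\sum_{\ell=1}^{\infty}c^D_\ell\,\Big({\phi_D\over\Lambda}\Big)^{\ell}\ ,
$$
where the logarithm now arises from the one-loop contribution of the
light magnetic monopole, and the series converges well for small
$\phi_D$.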
The important point to realize here is that the perturbative physics
in the strong coupling region is completely different from the
perturbative physics in the weak coupling region that we started
with~! At weak coupling, we had a non-abelian $SU(2)$ gauge theory,
while at strong coupling we have now an abelian $U(1)$ gauge theory
plus some extra massless matter fields (``electrons''). But the
latter only manifest themselves as elementary fields if we express
the theory in terms of the appropriate dual variables; in the
original variables, these ``electrons'' that become light at strong
coupling, are nothing but some of the solitonic magnetic monopoles
that are very massive in the weak coupling region.
All this said, we still do not know how to actually solve the theory
and determine all the unknown non-perturbative instanton coefficients
$c_\ell$, $c^D_\ell$ in Fig.2. A direct computation would be beyond
what is currently possible with ordinary field theory methods. The
insight of Seiberg and Witten\cite{SW}\ was to realize that the {\it
patching together of the known perturbative data in a globally
consistent way} is so restrictive that it fixes the theory, and
ultimately gives explicit numbers for the instanton coefficients
$c_\ell$ and $c^D_\ell$. This shows that sometimes much can be gained
by looking not only at a theory at some given fixed background VEV, but
also at a whole family of vacua, i.e., at global
properties of the moduli space.
\subsection{The message we can abstract from the field theory example}
\begin{figure}
\epsfysize=1.2in
\centerline{\epsffile{global.eps}}
\caption{{The moduli space of a generic supersymmetric theory is
covered by coordinate patches, at the center of each of which the
theory is weakly coupled when choosing suitable local variables. A
local effective lagrangian exists in each patch, representing a
particular perturbative approximation. None of such lagrangians is
more fundamental than the other ones. }
\hfill
\label{fig:global}}
\end{figure}
The lesson is that one and the same physical theory can have many
perturbative descriptions. These can look completely different, and
can involve different gauge groups and matter fields.
There is in general no absolute notion of what would be weak
or strong coupling; rather what we call weak coupling or strong
coupling, or an elementary or a solitonic field, depends to some
extent on the specific description that we use. Which description
is most suitable, and which physical degrees of freedom will be light
or weakly coupled (if any at all), depends on the region of the
parameter space we are looking at.
More mathematically speaking, an effective lagrangian description makes
sense only in {\em local coordinate patches} covering the parameter
space $\cal M$ -- see Fig.3. These describe different perturbative
approximations of the same theory in terms of different weakly coupled
local physical degrees of freedom (e.g., electrons or monopoles). No
particular effective lagrangian is more fundamental than any other one.
In the same way that a topologically non-trivial manifold cannot be
covered by just one set of coordinates, there is in general no globally
valid description of a family of physical theories in terms of a single
lagrangian.
It is these ideas that carry over, in a refined manner, to string
theory and thus to grand unification; in particular, they force us to
rethink concepts like ``distinguished fundamental degrees of
freedom''. String moduli spaces will however be much more complex
than those of field theories, due to the larger variety of possible
non-perturbative states.
\subsection{$P$-branes and non-perturbative states in string theory}
String compactifications on manifolds $X$ are not only complex
because of the large moduli spaces they generically have, but also
because the spectrum of physical states becomes vastly more
complicated. In fact, when going down in the dimension by
compactification, there is a dramatic {\em proliferation of
non-perturbative states}.
The reason is that string theory is not simply a theory of strings:
there exist also higher dimensional extended objects, so called
``$p$-branes'', which have $p$ space and one time dimensions (in this
language, strings are 1-branes, membranes 2-branes etc; generically,
$p=0,1,...,9$). Apart from historical reasons, string theory is
called so only because strings are typically the lightest of such
extended objects. In the light of duality, as discussed in the previous
sections, we know that there is no absolute distinction between
elementary or solitonic objects. We thus expect $p$-branes to play a
more important r\^ole than originally thought, and not necessarily to
just represent very heavy objects that decouple at low energies.
\begin{figure}
\epsfysize=2.0in
\centerline{\epsffile{branewrap.eps}}
\caption{Non-perturbative particle-like states arise from wrapping of
$p$-branes around $p$-dimensional cycles $\gamma_p$ of the
compactification manifold $X$. These are generically very massive, but
can become massless in regions of the moduli space where the $p$-cycles
shrink to zero size. The singularities in the moduli space
are the analogs of the monopole singularities in Fig.2.
\hfill
\label{fig:branewrap}}
\end{figure}
More specifically, such $(p+1)$ dimensional objects can wrap around
$p$-dimensional cycles ${\gamma_p}$ of a compactification manifold, to
effectively become particle-like excitations in the lower dimensional
(say, four dimensional) theory. These extra solitonic states are
analogs of the magnetic monopoles that had played an important r\^ole
in the $N=2$ supersymmetric Yang-Mills theory. Since such monopoles can
be light and even be the dominant degrees of freedom for certain
parameter values, we expect something similar for the wrapped
$p$-branes in string compactifications. In fact, the volumes of
$p$-dimensional cycles ${\gamma_p}$ of $X_n$ depend on the deformation
parameters, and there are singular regions in the moduli space where
such cycles shrink to zero size (``vanishing cycles'') -- see Fig.4.
Concretely, if a $p$-dimensional cycle ${\gamma_p}$ collapses, then a
$p$-brane wrapped around ${\gamma_p}$ will give a massless state in
$D=10-n$ dimensional space-time.\cite{trans} This is because the mass
formula for the wrapped brane involves an integration over $\gamma_p$:
$$
m^2_{p\hbox{-}{\rm brane}}\ =\ \Big|\int_{\gamma_p}\Omega\,\Big|^2\
\longrightarrow\ 0\qquad {\rm as}\ \gamma_p\rightarrow0\ .
$$
The larger the dimension of the compactification manifold $X_n$ (and
the lower the space-time dimension $D=10-n$), the more complicated its
topology will be, i.e., the larger the number of ``holes'' around
which the branes can wrap. For a six dimensional Calabi-Yau space
$X_6$, there will generically be an abundance of extra non-perturbative
states that are not seen in traditional string perturbation theory.
These can show up in $D=4$ as ordinary gauge or matter fields. It is
the presence of such non-perturbative, potentially massless states that
is the basis for many non-trivial dual equivalences of string theories.
When going back to ten dimensions by making the compactification
manifold very large, these states become arbitrarily heavy and
eventually decouple. In this sense, a string model acquires after
compactification many additional states that were not present in ten
dimensions. The non-perturbative spectrum is very finely tuned:
just as taking out a perturbative string state from
the spectrum would destroy modular invariance (which is a global
property of the 2d world-sheet) and thus would ruin perturbative
consistency, taking out a non-perturbative state would destroy duality
symmetries (which are a global property of the compactification
manifolds): it would make the global behavior of the theory over the
moduli space inconsistent.
\subsection{Stringy geometry}
We have seen in section 2.1 that $N=2$ supersymmetric Yang-Mills theory
is sometimes better described in terms of dual ``magnetic'' variables,
namely when we are in a region of the moduli space where certain
magnetic monopoles are light. In this dual formulation, the originally
solitonic monopoles look like ordinary elementary, perturbative degrees
of freedom (``electrons''). Analogously, dual formulations exist also
for string theories, in which non-perturbative solitons are described
in terms of weakly coupled ``elementary'' degrees of freedom. It was
one of the breakthroughs in the field when it was realized how exactly
this works: the relevant objects dual to certain solitonic states
are special kinds of ($p$-)branes, so-called
``$D(p$-)branes''.\cite{Dbr} Due to the many types of $p$-branes on the
one hand, and the large variety of possible singular geometries of the
manifolds $X$ on the other, the general situation is however very
complicated. It is easiest to describe it in terms of typical examples.
For instance, massless or almost massless non-abelian gauge bosons
$W^\pm$ belonging to $SU(2)$ can be equivalently described in a
number of dual ways (see Fig.5):
\begin{figure}
\epsfysize=1.4in
\centerline{\epsffile{gaugemech.eps}}
\caption
{Different geometries can parametrize one and the same
physical situation. Shown here are three dual ways to represent
an $SU(2)$ gauge field and the associated Higgs mechanism.
\hfill \label{fig:gaugemech}} \end{figure}
\noindent {\bf i)} In the heterotic string compactified on some
higher dimensional torus, a massless gauge boson is represented by a
fundamental heterotic string wrapped around part of the torus, with a
certain radius (say $R=1$ in some units; changing the orientation of
the string maps $W^+\leftrightarrow W^-$). This is a perturbative
description, since it involves an elementary string. If the radius
deviates from $R=1$, the gauge field gets a mass, providing a
geometrical realization of the Higgs mechanism.
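In formulas (with the radius measured in string units; precise
conventions vary), the mass of such a winding/momentum state behaves
like
$$
m_{W^\pm}\ \propto\ \Big|\,R-{1\over R}\,\Big|\ ,
$$
so the $W^\pm$ become massless precisely at the self-dual radius $R=1$,
and acquire a mass proportional to the deviation from it.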
\noindent {\bf ii)} In the compactified type IIA string, the gauge
boson arises from wrapping a 2-brane around a collapsed 2-cycle
$\gamma_2$ of $X$. This is a non-perturbative formulation in terms of
string theory. If $\gamma_2$ does not quite vanish, the gauge field
retains some non-zero mass, thereby realizing the Higgs mechanism in
a different manner.
\noindent {\bf iii)} In the type I string model, an $SU(2)$ gauge
boson is realized by an open string stretched between two flat
$D$-branes. This is another perturbative formulation of the Higgs
mechanism. The mass of the gauge boson is proportional to the length
of the open string, and thus vanishes if the two $D$-branes move on
top of each other.
We thus see that very different mathematical geometries can represent
the identical physical theory, here the $SU(2)$ Higgs model -- these
geometries really should be identified in string theory. This provides
a special example of a more general concept that is becoming
better and better understood: ``stringy geometry''. In stringy
geometry, certain mathematically different geometries are treated as
equivalent, and just seen as different choices of ``coordinates'' in
some abstract space of string theories. The underlying physical idea is
that while in ordinary geometry point particles are used to measure
properties of space-time, in stringy geometry one augments this by
string and other $p$-brane probes. It is the wrappings and stretchings
that these extra objects are able to do that can wash out the
difference between topologically and geometrically distinct manifolds.
In effect, a string theory of some type ``$A$'' when compactified on
some manifold $X_A$, can be dual to another string theory ``$B$'' on
some manifold $X_B$ -- and this even if $A$, $B$ and/or $X_A$ and $X_B$
are profoundly different.\cite{HT} Again, all that matters is that the
full non-perturbative theories coincide, while there is no need for the
perturbative approximations to be even remotely similar.
\section{The Grand Picture}
\subsection{Dualities of higher dimensional string theories}
We are now prepared to go back to ten dimensions and revisit the five
perturbatively defined string models of section 1.2. In view of the
remarks of the preceding sections concerning the irrelevance of
perturbative concepts, we will now find it perhaps not too surprising
to note that these five theories are really nothing but different
approximations of just {\em one} theory. In complete analogy to what
we said about $N=2$ supersymmetric gauge theory in section 2.1, they
simply correspond to certain choices of preferred ``coordinates''
that are adapted to specific parameter regions.
Although these facts can be stated in such simple terms, they are
so non-trivial that it took more than a decade to discover them. It
can now be explicitly shown that by compactifying any one of
the five theories on a suitable manifold, and then de-compactifying it
in another manner, one can reach any other of the five theories in a
continuous way.
\begin{figure}
\epsfysize=1.2in
\centerline{\epsffile{bigPict10d.eps}}
\caption{Moduli space in ten and eleven dimensions. Its
singular asymptotic regions correspond to the five well-known
supersymmetric string theories in $D=10$, plus an eleven dimensional
$M$-theory. The vertical direction roughly reflects the
space-time dimension.
\hfill
\label{fig:bigPict10d}}
\end{figure}
A surprise happened when the strong coupling limit of the type
IIA string was investigated\cite{Weleven}: it turned out that in this
limit, certain non-perturbative ``$D0$-branes'' form a continuous
spectrum and effectively generate an extra 11th dimension. That is,
at ultra strong coupling the ten-dimensional type IIA string theory
miraculously gains 11-dimensional Lorentz invariance, and the
low-energy theory turns into $D=11$ supergravity. This was especially
surprising because eleven dimensional supergravity is not a low
energy limit of a string theory, but rather seems to be related to
supermem\-branes. In other words, non-perturbative dualities take us
beyond string theory~!
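A quick way to see the extra dimension (schematically, in string
units): the $D0$-branes have masses $m_n\sim n/g_s$, $n=1,2,\dots$,
and for $g_s\rightarrow\infty$ this tower becomes dense. It looks
exactly like a Kaluza-Klein spectrum
$$
m_n\ =\ {n\over R_{11}}\ ,\qquad R_{11}\ \sim\ g_s\,\ell_s\ ,
$$
for a circle whose radius $R_{11}$ grows with the string coupling, so
that the eleventh dimension decompactifies at strong coupling.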
So what we have are not five but six $10$- or $11$-dimensional
approximations, or local coordinate patches on some moduli space -- see
Fig.6. But to what entity are these theories approximations~? Do we
have here the moduli space of some underlying microscopic theory~?
Indeed there is a candidate for such a theory, dubbed
``$M$-theory''\cite{BFSS,Mtheory}. Its low energy limit does give
$D=11$ supergravity, and simple compactifications of it give all of the
five string theories in $D=10$. It may well be that $M$-theory,
currently under intense investigation, holds the key for a detailed
understanding of non-perturbative string theory. However, since
$M$-theory is (presumably) a main topic of Susskind's lecture in these
proceedings, we will not discuss any further details here. We will
rather return to the lower dimensional theories.
\subsection{Quantum moduli space of four-dimensional strings}
Fig.6 shows only a small piece of a much more extended moduli space,
namely only the piece that describes higher dimensional theories.
These are relatively simple and there is only a small number of them.
As discussed above, the lower dimensional theories obtained by
compactification are much more intricate.
Among the best-investigated string theories in four dimensions are
the ones with $N=2$ supersymmetry -- these are the closest analogs of
the $N=2$ gauge theory that is so well understood. They can be
obtained by compactifications of type IIA/B strings on Calabi-Yau
manifolds $X_6$, or dually, by compactifications of heterotic strings
on\footnote{Here, $T_2$ denotes the two-torus, and $K3$ the four
dimensional version of a Calabi-Yau space.} $K3\times T_2$. Since
there are roughly 10$^4$ of such Calabi-Yau manifolds $X_6$, the
complete $D=4$ string moduli space will have roughly 10$^4$
components, each with typical dimension 100 (keeping in mind that
there can be non-trivial identifications between parts of this moduli
space) -- see Fig.7. We see that the moduli space is drastically more
complicated than it is for either the higher dimensional theories or
the $N=2$ gauge theory. Each of the 10$^4$ components typically has
several ten or eleven dimensional decompactification limits, so that
one should imagine very many connections between the upper and lower
parts of Fig.7 (indicated by dashed lines).
\begin{figure} \epsfysize=3.0in \centerline{\epsffile{bigPict4d.eps}}
\caption{Rough sketch of the space of $N=2$ supersymmetric string
theories in four dimensions. In general, a given region in the 4d
moduli space can be reached in several dual ways via compactification
of the higher dimensional theories. Each of the perhaps 10$^4$
components corresponds typically to a 100 dimensional family of
perturbative string vacua, and represents the moduli space of a single
Calabi-Yau manifold (like the one shown in Fig.4). Non-perturbatively,
all these vacua turn out to be connected by extremal transitions and
thus form a continuous web. \hfill \label{fig:bigPict4d}} \end{figure}
An interesting fact is that all of the known\footnote{Strictly
speaking, we cannot exclude further, disconnected components to exist.}
$10^4$ families of perturbative $D=4$ string vacua are connected by
non-perturbative extremal transitions. To understand what we mean by
that, simply follow a path in the moduli space as indicated in Fig.7,
starting somewhere in the interior of a blob. The massive spectrum will
continuously change, and when we hit singularities, extra massless
states appear and perturbation theory breaks down.
It can in particular happen that a non-perturbative massless
Higgs field appears, to which we can subsequently give a vacuum
expectation value to deform the theory in a direction that was not
visible before. In this way, we can leave the moduli space of a single
perturbative string compactification, and enter the moduli space of
another one. Thus, non-perturbative extra states can glue together
different perturbative families of vacua in a physically smooth
way.\cite{trans} It can be proven that one can connect in this
manner all of the roughly 10$^4$ known components that were previously
attributed to different four dimensional string theories. In other
words, the full non-perturbative quantum moduli space of $N=2$
supersymmetric strings seems to form {\em one} single entity.
This is not much of a practical help for solving the vacuum degeneracy
problem, but it is conceptually satisfying: instead of having to choose
between many four dimensional string theories, each one equipped with
its own moduli space, we really have just one theory with, however,
very many facets.
This is as far as $N=2$ supersymmetric strings in four dimensions are
concerned -- the situation will still be much more complicated for the
phenomenologically important $N=1$ supersymmetric string theories,
whose investigation is currently under way. The main novel features
that can be addressed in these theories are chirality and
supersymmetry breaking. It seems that certain aspects of these
theories are best described by choosing still another dual
formulation, called ``$F$-theory''\cite{Ftheory}. This is a
construction formally living in twelve dimensions, and whether it is
simply a trick to describe certain features in an elegant fashion, or
whether it is an honest novel theory in its own right, remains to be
seen.
\section{``Theoretical experiments''}
How can we convince ourselves that the considerations of the previous
sections really make sense~? Clearly, all we can do for the
foreseeable future to test these ideas is to perform consistency
checks. Such
checks can be highly non-trivial, from a formal as well as
from a physical point of view. So far, numerous qualitative and
quantitative tests have been performed, and not a single test on the
dualities has ever failed~!
\noindent
To give some flavor, let us recapitulate a few characteristic checks:
\goodbreak
\noindent {\bf i)} {\em Complicated perturbative corrections} can be
computed and compared for string models that are dual to each
other. In all cases, the results perfectly agree. Consider for example
certain 3-point couplings $\kappa$ in $N=2$ supersymmetric string
compactifications. As mentioned before, such theories can be obtained
either from type II strings on Calabi-Yau manifolds $X_6$,
or dually from heterotic string compactifications on $K3\times T_2$. In
the type II formulation, these couplings can be computed (via ``mirror
symmetry''\cite{mirror}) by counting world-sheet instantons (embedded
spheres) inside the Calabi-Yau space. Concretely, in a specific model
the result takes the following explicit form:
$$
\kappa\ =\ {i\over 2\pi}\,
{E_4(T)\,E_4(U)\,E_6(U)\,\bigl(E_4(T)^3-E_6(T)^2\bigr)\over
E_4(U)^3\,E_6(T)^2-E_4(T)^3\,E_6(U)^2}\ ,
$$
in terms of certain modular functions $E_4$, $E_6$ depending on moduli
fields $T,U$. The very same expression can be obtained also in a
completely different manner, namely by performing a standard one-loop
computation in the dual heterotic string model -- in precise agreement
with the postulated string duality; see Fig.8. Many similar tests, also
involving higher loops and gravitational couplings, have been shown to
work out as well.
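For reference, the building blocks of this formula are the standard
Eisenstein series of weight 4 and 6, e.g.\ for the modulus $T$ (with
$q=e^{2\pi iT}$):
$$
E_4(T)\ =\ 1+240\sum_{n=1}^{\infty}\sigma_3(n)\,q^n\ ,\qquad
E_6(T)\ =\ 1-504\sum_{n=1}^{\infty}\sigma_5(n)\,q^n\ ,
$$
where $\sigma_k(n)$ denotes the sum of the $k$-th powers of the
divisors of $n$.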
\begin{figure}
\epsfysize=1.0in
\centerline{\epsffile{mirrorsym.eps}}
\caption{Counting spheres in a Calabi-Yau space $X$ in type IIA
string theory, leads ultimately to the same expressions for certain
low energy couplings as a standard one-loop computation in the
heterotic string.
\hfill
\label{fig:mirrorsym}}
\end{figure}
\noindent {\bf ii)}
{\em State count in black holes.}
This is a highly non-trivial physics test. The issue is to compute the
Bekenstein-Hawking entropy $S_{BH}$ (given by the area of the horizon)
of an extremal (or near-extremal) black hole. Strominger and Vafa\cite{SV}\
considered the particular case of an extremal $N=4$ supersymmetric
black hole in $D=5$, where one knows that
$$
S_{BH}\ =\ 2\pi\sqrt{{q_fq_h\over2}}\ ,
$$
where $q_f$, $q_h$ are the electric and axionic charges of the black
hole. The idea is to use string duality to represent a large
semi-classical black hole in terms of a type IIB string
compactification on $K3\times S^1$. This eventually boils down to a
2d sigma model on the moduli space of a gas of $D0$-branes on $K3$.
Counting states in this model indeed exactly reproduces the above
entropy formula for large charges.
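The microscopic count is, roughly speaking, an application of the
Cardy formula for the asymptotic growth of states in a 2d conformal
field theory (a sketch; the effective central charge and level are
fixed by the specific $D$-brane system and the charges $q_f,q_h$):
$$
S_{\rm micro}\ =\ \ln d(n)\ \approx\ 2\pi\sqrt{{c_{\rm eff}\,n\over6}}\ ,
$$
which for the appropriate values of $c_{\rm eff}$ and $n$ reproduces
$S_{BH}$.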
This test (and refinements of it\cite{BHreviews})
not only adds credibility to
the string duality claims from a new perspective, but also tells us that
there are {\em no missing degrees of freedom} that we might have been
overlooking -- any theory of quantum gravity had better come up with the
same count of relevant microscopic states.
\noindent {\bf iii)} {\em Recovering exact non-perturbative field theory
results.} One can derive the exact solution of the $N=2$ supersymmetric
gauge theory described in section 2.1 as a consequence of string
duality. The point is that duality often maps classical into quantum
effects and vice versa. This makes it for example
possible\cite{qmirror}\ to obtain with a {\em classical} computation in
the compactified type II string, certain non-perturbative results for
the compactified heterotic string. In particular, counting world-sheet
instantons similarly as in point i) above, and suitably decoupling
gravity (see Fig.9), allows one to reproduce exactly\cite{KKLMV}\ the
non-perturbative effective gauge coupling $g(\phi)$ of Fig.2.
\begin{figure}
\epsfysize=1.8in
\centerline{\epsffile{rigid.eps}}
\caption{Recovering Seiberg-Witten theory from string duality. In the
field theory limit, one sends the Planck mass to infinity in order to
decouple gravity and other effects that are not important. A
remnant of the string duality survives, which is a
duality between the standard formulation of the gauge
theory, and a novel one that was not known before. It gives a physical
interpretation of the geometry underlying the non-perturbative
solution of the $N=2$ gauge theory.
\hfill
\label{fig:rigid}}
\end{figure}
This is very satisfying, as it gives support to both the underlying
string duality and the solution of the $N=2$ gauge theory. Indeed,
while the basic heterotic-type II string duality\cite{HT}\ and the
Seiberg-Witten theory\cite{SW}\ have been found independently from each
other around the same time, their mutual compatibility was shown only
later.
It is the clearly visible, convergent evolution of a priori separate
physical concepts, besides overwhelming ``experimental'' evidence, that
gives string theorists confidence in the validity of their ideas.
\section{Spinoff: $D$-brane technology,
and novel quantum theories without gravity.}
The techniques for obtaining exact non-perturbative results for
ordinary field theories from string duality are not limited to only
reproducing results that one already knows -- in fact, they have opened
the door for deriving new results for a whole variety of
field theories in various dimensions.
More specifically, there are currently two main approaches (related by
duality) for obtaining standard and non-standard field theories from
string theory -- each one has its own virtues. One can either study the
singular geometry of Calabi-Yau or other compactification manifolds,
and consider the effect of wrapped branes
-- much like it is depicted in Fig.5 part ii).
Alternatively, one can model\cite{HW}\ the relevant string geometry
in terms of parallel $D$-branes, with open strings and other branes
running between them (remember that $D$-branes are
roughly perturbative duals of $p$-brane solitons). This refers to
part iii) of Fig.5. For example, $N=2$ Yang-Mills theory in four
dimensions can be represented by a configuration of $D$-branes as
shown in Fig.10 a). From this simple picture one can reproduce
the non-perturbative solution of the gauge theory\cite{SWfromM}.
Similar arrangements can describe $N=1$ supersymmetric gauge theories
as well, like supersymmetric QCD (Fig.10 b)). Some qualitative
features, like chiral symmetry breaking or confinement, can be seen
in this model.\cite{MQCD}
\begin{figure}
\epsfysize=1.7in
\centerline{\epsffile{Dbranes.eps}}
\caption{$D$-brane technology makes it possible to represent gauge theories
in a dual way, in which non-trivial properties are
encoded in simple geometric pictures.
\hfill
\label{fig:Dbranes}}
\end{figure}
In addition, somewhat unexpectedly, these methods have also led to
the discovery of completely new kinds of quantum theories that were not
known to exist before. More specifically, when decoupling gravity there
is no reason why one should always end up with a standard field theory
that one already knows, like gauge theory coupled to some matter
fields. Rather, by looking either at specific singular geometries or at
appropriate $D$-brane configurations, one has found a number of exotic
limits.\cite{Weleven,sixdim} Such theories are typically strongly
coupled and do not have any known description in terms of traditional
quantum field theory; they comprise (non-gravitational) tensionless
strings and novel non-trivial IR fixed points. The main indication that
they really exist is that they arise as decoupling limits of a larger
string (or $M$-) theory that is consistent by itself.
Especially interesting in this context are exotic theories that
serve as transition points between $N=1$ string vacua with different
net numbers of chiral fermions. Such smooth chirality changing
processes are not possible in conventional field theory, but they can
occur in string theory.\cite{chiral} This opens up the possibility that
$N=1$ supersymmetric string vacua are (to some extent)
unified in a manner analogous to the $N=2$ vacua,
similar to, but much more complicated than, what is hinted at in Fig.7.
\lref\SWreviews{For reviews see, for example:\\
{L.\ Alvarez-Gaum\'e and S.\ Hassan,
\nihil{Introduction to S-Duality in N=2
supersymmetric Gauge Theories:
A pedagogical review of the Work of Seiberg and Witten,}
Fortsch.\ Phys.\ {\bf 45} (1997) 159-236,
\eprt{hep-th/9701069};}\\
{W.\ Lerche, \nihil{Introduction to Seiberg-Witten Theory
and its Stringy Origin},
Nucl.\ Phys.\ Proc.\ Suppl.\ {\bf 55B} (1996) 83,
\eprt{hep-th/9611190};\\
{M.\ Peskin,
\nihil{Duality in Supersymmetric Yang-Mills Theory,}
lectures at TASI-96 Summer School, \eprt{hep-th/9702094}.}
}
}
\lref\SW{N.\ Seiberg and E.\ Witten, \nihil{Electric - magnetic
duality, monopole condensation, and confinement in N=2 supersymmetric
Yang-Mills theory,} \nup426(1994) 19, \eprt{hep-th/9407087};
\nihil{Monopoles, duality and chiral symmetry breaking in N=2
supersymmetric QCD,} \nup431(1994) 484, \eprt{hep-th/9408099}.}
\lref\intro{For introductory textbooks, see e.g.:\\
M.\ Green, J.\ Schwarz and E.\ Witten, {\em Superstring Theory},
Vol.\ 1-2, Cambridge University Press 1987;\\
D.\ L\"ust and S.\ Theisen, {\em Lectures on String Theory},
Lecture Notes in Physics 346, Springer 1989.\\
For more recent reviews, see e.g.:\\
{J.\ Polchinski,
\nihil{What is string theory?,}
lectures presented at the 1994 Les Houches Summer School,
\eprt{hep-th/9411028};} {
\nihil{String duality: A Colloquium,}
Rev.\ Mod.\ Phys.\ {\bf 68} (1996) 1245-1258,
\eprt{hep-th/9607050};}\\
{K.\ Dienes,
\nihil{String theory and the path to unification: A Review of recent
developments,}
Phys.~ Rept.~{\bf 287} (1997) 447-525,
\eprt{hep-th/9602045};}\\
{C.\ Vafa,
\nihil{Lectures on strings and dualities,}
lectures at ICTP Summer School 1996,
\eprt{hep-th/9702201};}\\
{S.\ Mukhi,
\nihil{Recent developments in string theory: A Brief review for
particle physicists,}
\eprt{hep-ph/9710470}.}
}
\lref\KD{K.\ Dienes, as cited in ref.\ \cite{intro}.}
\lref\brian{For a review, see:
{B.\ Greene,
\nihil{String theory on Calabi-Yau manifolds,}
lectures at TASI-96 Summer School, \eprt{hep-th/9702155}.}
}
\lref\Dbr{J.\ Polchinski,
\nihil{Dirichlet-Branes and Ramond-Ramond Charges,}
Phys.\ Rev.\ Lett.\ {\bf 75} (1995) 4724-4727,
\eprt{hep-th/9510017}; \\
for a review, see:
{J.\ Polchinski,
\nihil{Tasi Lectures on D-Branes,} TASI-96 Summer School,
\eprt{hep-th/9611050}.}}
\lref\BFSS{T.\ Banks, W.\ Fischler, S.\ Shenker and L.\ Susskind,
\nihil{M-Theory as a Matrix Model: A Conjecture,}
Phys.\ Rev.\ {\bf D55} (1997) 5112-5128,
\eprt{hep-th/9610043}.}
\lref\Mtheory{For reviews, see e.g.: \\
{A.\ Bilal, \nihil{M(atrix) Theory: a Pedagogical Introduction,}
\eprt{hep-th/9710136};}\\
T.\ Banks, \nihil{Matrix Theory,} Trieste Spring School 1997,
\eprt{hep-th/9710231}.
}
\lref\Weleven{E.\ Witten,
\nihil{Some Comments on String Dynamics,} \eprt{hep-th/9507121}.}
\lref\sixdim{See for example:
{N.\ Seiberg, E.\ Witten,
\nihil{Comments on string dynamics in six-dimensions,}
Nucl.~ Phys.~{\bf B471} (1996) 121-134,
\eprt{hep-th/9603003};}\\
{O.\ Aharony, M.\ Berkooz, S.\ Kachru, N.\ Seiberg and E.\ Silverstein,
\nihil{Matrix description of interacting theories in six dimensions,}
\eprt{hep-th/9707079}.}
}
\lref\HT{C.\ Hull and
P.\ Townsend, \nihil{Unity of superstring dualities,}
\nup438 (1995) 109, \eprt{hep-th/9410167}.}
\lref\gaugerev
{For a review on supersymmetric gauge theories, see:\\
K.\ Intriligator and N.\ Seiberg,
\nihil{Lectures on supersymmetric gauge theories and
electric - magnetic duality,}
Nucl.~ Phys.~ Proc.~ Suppl.~{\bf 45BC} (1996) 1-28,
\eprt{hep-th/9509066}.}
\lref\Ftheory{{C.\ Vafa,
\nihil{Evidence for F-theory,}
Nucl.\ Phys.\ {\bf B469} (1996) 403-418,
\eprt{hep-th/9602022}.}}
\lref\qmirror{
{S.\ Kachru and C.\ Vafa,
\nihil{Exact Results for N=2 Compactifications of Heterotic Strings,}
Nucl.\ Phys.\ {\bf B450} (1995) 69-89,
\eprt{hep-th/9505105};}\\
{S.\ Ferrara, J.\ Harvey, A.\ Strominger and C.\ Vafa,
\nihil{Second quantized Mirror Symmetry,}
Phys.\ Lett.\ {\bf B361} (1995) 59-65,
\eprt{hep-th/9505162}.}}
\lref\trans{A. Strominger,
\nihil{Massless black holes and conifolds in string theory,}
\nup451 (1995) 96,
\eprt{hep-th/9504090};\\ {for a review, see: A. Strominger,
\nihil{Black Hole Condensation and Duality in String
Theory,} \eprt{hep-th/9510207}.}}
\lref\BHreviews{For reviews see:
{J.\ Maldacena,
\nihil{Black Holes in String Theory,}
PhD thesis, \eprt{hep-th/9607235}; {
\nihil{Black Holes and D-Branes,}
\eprt{hep-th/9705078}.}
}}
\lref\SV{A.\ Strominger and C.\ Vafa,{
\nihil{Microscopic origin of the Bekenstein-Hawking entropy,}
Phys.\ Lett.\ {\bf B379} (1996) 99-104,
\eprt{hep-th/9601029}.}
}
\lref\SWfromM{E.\ Witten,
\nihil{Solutions of four-dimensional Field Theories via M-Theory,}
\eprt{hep-th/9703166}.}
\lref\MQCD{E.\ Witten,
\nihil{Branes and the Dynamics of QCD,}
\eprt{hep-th/9706109}.}
\lref\GM{G.\ Moore,
\nihil{Atkin-Lehner symmetry,}
Nucl.\ Phys.\ {\bf B293} (1987) 139.}
\lref\cosmo{E.\ Witten,
\nihil{Strong coupling and the cosmological constant,}
Mod.~ Phys.~ Lett.~{\bf A10} (1995) 2153-2156,
\eprt{hep-th/9506101}.}
\lref\mirror{S.\ Yau (ed.), {\it Essays on Mirror Manifolds},
International Press 1992; B.\ Greene (ed.), {\it Mirror symmetry II},
AMS/IP Studies in advanced mathematics vol 1, 1997.}
\lref\geometric{
{A.\ Klemm, W.\ Lerche, P.\ Mayr, C.\ Vafa and
N.\ Warner,
\nihil{Self-dual strings and N=2 supersymmetric field theory,}
Nucl.\ Phys.\ {\bf B477} (1996) 746-766,
\eprt{hep-th/9604034};}\\
{M.\ Bershadsky et al.,
\nihil{Geometric singularities and enhanced gauge symmetries,}
Nucl.\ Phys.\ {\bf B481} (1996) 215-252,
\eprt{hep-th/9605200};}\\
S.\ Katz and C.\ Vafa,
\nihil{Geometric engineering of N=1 quantum field theories,}
Nucl.\ Phys.\ {\bf B497} (1997) 196-204,
\eprt{hep-th/9611090}.
}
\lref\HW{A.\ Hanany and E.\ Witten, \nihil{Type IIB Superstrings, BPS
Monopoles, and three-dimensional Gauge Dynamics,} Nucl.\ Phys.\ {\bf
B492} (1997) 152-190, \eprt{hep-th/9611230}.}
\lref\KKLMV{S.\ Kachru, A.\ Klemm, W.\ Lerche, P.\ Mayr and
C.\ Vafa,
\nihil{Nonperturbative Results on the Point Particle Limit of N=2
Heterotic String Compactifications,}
Nucl.\ Phys.\ {\bf B459} (1996) 537-558,
\eprt{hep-th/9508155}.}
\lref\chiral{For reviews, see e.g.:
{S.\ Kachru,
\nihil{Aspects of N=1 string dynamics,}
\eprt{hep-th/9705173};}\\
{E.\ Silverstein,
\nihil{Closing the Generation Gap,} Talk at Strings '97,
\eprt{hep-th/9709209}.}
}
\section*{References}
\newcommand\nil[1]{{}}
\newcommand\nihil[1]{{\sl #1}}
\newcommand\ex[1]{}
\newcommand{\nup}[3]{{\em Nucl.\ Phys.}\ {B#1#2#3}\ }
\newcommand{\plt}[3]{{\em Phys.\ Lett.}\ {B#1#2#3}\ }
\newcommand{\prl}{{\em Phys.\ Rev.\ Lett.}\ }
\newcommand{\cmp}[3]{{\em Comm.\ Math.\ Phys.}\ {A#1#2#3}\ }
\section{Introduction}
The treatment of convection and of convective boundary mixing has
major implications for the results of stellar evolutionary models
\citep{Maeder2009,Kippenhahn2012}. As it is inherently a 3D process,
the 1D descriptions of convection required for modern stellar structure
and evolution calculations, often achieved using the Mixing Length
Theory \citep[MLT, ][]{BohmVitense1958} or variations thereof, have
shortcomings. Although MLT does well in approximating the deep
interiors of a convective region, its limitations become apparent at
the boundaries of convective regions. Namely, MLT does not capture
the scope of processes that occur over different distance scales at
the interface of a turbulent convective zone and a stably stratified
radiative zone. In such transition regions, phenomena including
convective penetration, overshooting, turbulent entrainment, bulk
shear mixing, and the generation of internal gravity waves which
induce mixing in the adjacent radiative region may occur \citep{Renzini1987,
Arnett2019}. Additionally, 1D simplifications of convection result
in a single, hard convective boundary (based either on the Schwarzschild
or Ledoux criterion), rather than the dynamical, fluctuating boundary
that is often observed in 2- and 3D hydrodynamical simulations
\citep{Meakin2007,Arnett2015,Cristini2017,Higl2021}. As a result of
ignoring the non-local extensions of convection, simplified 1D descriptions
of convection underestimate the transport of chemical species both at the
convective boundary and throughout the adjacent radiative region. More
specifically, in the absence of some form of explicitly included extra
chemical mixing, 1D stellar models with a convective nuclear burning core
underestimate the transport of fresh chemical species from the envelope
into the core where they can participate in burning. This implies that models
which do not include extra mixing underestimate the mass of the convective
core at any time along a star's evolution when the core is convective
\citep{Bressan1981,Stothers1981,Bertelli1985,Maeder1987}.
To account for the missing chemical mixing induced at convective boundaries,
1D stellar evolution models introduce ad-hoc parameterised mixing profiles
that are stitched to convective regions. However, each parameterised mechanism
only serves to address one aspect of the many processes occurring at, or beyond,
a convective boundary \citep{Meakin2007,Arnett2015,Rogers2017}. Furthermore,
all of these parameterised descriptions have at least one free parameter that
scales the `efficiency' of the chemical mixing induced by that mechanism.
Numerous studies have attempted to observationally calibrate the efficiency
of different mixing mechanisms both at the convective boundary and in the
radiative envelope; however, these studies often report conflicting efficiencies
for the same mechanisms. The focus of this paper is to discuss the complexities
of observationally constraining the efficiency of mixing mechanisms in stellar
models, to introduce convective core mass as a robust point of comparison
for the implementation of chemical mixing in 1D stellar models, and to
consider the consequences of a range of inferred mixing efficiencies in model
calculations.
\section{Chemical mixing in the literature}
A complete chemical mixing profile should include the effects of
entrainment, penetration, overshooting, semi-convection, as well
as mixing induced by rotational instabilities and internal gravity
waves \citep{Langer2012,Maeder2009,Hirschi2014,Arnett2015,Rogers2017,
Salaris2017}. Unfortunately, discussion of chemical mixing mechanisms
in the literature suffers from a problem of mixed identities. Numerous
studies have attempted to constrain the impact of chemical mixing in
different forms, through any of the mechanisms mentioned above.
However, more often than not, these studies focus on constraining the
efficiency of a single mixing mechanism, fixing the efficiency of the
other mechanisms or ignoring them outright. In doing so, one
effectively forces the mixing mechanism under investigation to
become, in practice, a proxy for all of the chemical transport present.
Thus, when comparing results of different studies, one must account
for both the physical mechanisms that were explicitly included, as well
as the absence of effects from any physical mechanisms that were not
included.
Given modern observational quantities, their associated precision, and
underlying model assumptions and degeneracies, we are unable to disentangle
the contributions of individual mixing mechanisms to the point of robustly
calibrating multiple mixing mechanisms in a single target \citep{Valle2017,
Valle2018,Constantino2018,Aerts2018b,Johnston2019a}. Instead, different
observables provide constraints on the overall mixing history of a star.
We will briefly review the types of observations used to constrain
chemical mixing in the literature.
\subsection{Eclipsing Binaries}
If the two components of a double-lined spectroscopic binary system
eclipse, one can model the eclipses and radial velocities to determine
absolute masses and radii of the components. Combined with effective
temperature estimates from spectroscopy, modelling eclipsing binaries
provides some of the most precise determinations of fundamental stellar
properties possible, with quoted uncertainties to better than one
percent on mass and radius \citep{Torres2010,Serenelli2021}. Furthermore,
binaries afford the assumption that both components have the same age
and initial chemical composition, providing additional powerful
constraints.
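As a reminder of where these masses come from, the standard relations (see, e.g., \citealt{Torres2010}) combine Kepler's third law with the radial-velocity semi-amplitudes $K_1$ and $K_2$ and the inclination $i$ delivered by eclipse modelling:
\begin{align*}
a \sin i &= \frac{P}{2\pi}\,(K_1 + K_2)\,\sqrt{1 - e^2}, \\
M_1 + M_2 &= \frac{4\pi^2 a^3}{G P^2}, \qquad \frac{M_1}{M_2} = \frac{K_2}{K_1},
\end{align*}
with $P$ the orbital period and $e$ the eccentricity. Since the eclipses also fix the fractional radii, the individual masses and radii follow without any external calibration.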
The standard approach in the literature is to fit stellar evolution models
(or isochrones) to the observed mass, radius, and temperature of the binary
components to determine how much internal mixing, if any, is required to explain
observations of intermediate- to high-mass stars with a convective core
on the main sequence. This approach has been used to investigate the
impact of rotational mixing \citep{Brott2011b,Schneider2014,Ekstrom2018},
convective penetration \citep{Schroder1997,Pols1997,Ribas2000b,Guinan2000,
Claret2016}, and convective overshooting \citep{Stancliffe2015,Higl2017,
Claret2019,Guglielmo2019,Tkachenko2020}. However, notable works claim
that chemical mixing efficiency cannot be uniquely determined using this
approach, even with such precise mass and radius estimates \citep{Valle2018,
Constantino2018,Johnston2019b}. Similarly, systems that display apsidal
motion have been used to investigate chemical mixing through its influence
on the central condensation of the star over time \citep{Claret2010,Claret2019b}.
We note, however, that the fundamental stellar properties used in this
modelling method are not directly sensitive to the efficiency of any
particular mixing mechanism individually. Instead this methodology is
sensitive to the efficiency of any process which alters the radius and
temperature of the star. Through the transport of hydrogen from the
envelope into the core, chemical mixing increases the mass (and luminosity)
of the core, and extends the main-sequence lifetime. These changes to
the core translate to changes in the overall temperature and radius of the
star at a given age, which is what is estimated in binary modelling.
Furthermore, in addition to the transport of chemicals in the interior,
very rapid rotation causes changes to the radius and temperature at the
surface of the star. Thus, modelling the fundamental properties of binaries
is sensitive to the total internal mixing history which transports fresh
hydrogen from the radiative region into the convective core throughout
the stellar lifetime, as well as any modification to the surface properties
due to rapid rotation. To this end, we should be careful not to conflate the
inferred overall mixing efficiency from binary modelling with the efficiency
of a single mixing mechanism.
\subsection{Stellar clusters}
Given the assumption that the members of a stellar cluster are formed at
approximately the same time and with a similar initial chemical composition,
the observed morphology of the colour-magnitude diagram can be used to
calibrate internal mixing mechanisms in stellar models. Several studies
have demonstrated that the width of the main sequence is sensitive to the
mixing history of stars, and used this sensitivity to estimate the effect
of overshooting and/or rotational mixing on stellar models \citep{VandenBerg2006,
Castro2014,Martinet2021}.
Similar to the width of the main sequence, the morphology of the extended
main-sequence turn off (eMSTO) is used to test the implementation of
mixing mechanisms in stellar models. While the eMSTO phenomenon is commonly
associated with rapid rotation and its consequences \citep{Dupree2017,Kamann2018,
Bastian2018,Georgy2019}, studies have shown that the eMSTO of young massive
clusters can be reproduced by stellar evolution models with convective boundary
mixing calibrated from asteroseismology \citep{Yang2017,Johnston2019c}.
Alternatively, binary evolution and its byproducts have been invoked as a
means of explaining the eMSTO phenomenon \citep{Schneider2014b,Gosnell2015,
Beasor2019}. As in the case of eclipsing binaries, this type of analysis
is not directly sensitive to the efficiency of a given mixing mechanism,
but rather to the total mixing across the evolution of a star.
\subsection{Asteroseismology}
Regions of the stellar interior can be driven to deviate from
hydrostatic equilibrium due to a build up of radiation caused
by partial ionisation zones, the blocking of convective flux,
or other mechanisms. Depending on where these perturbations
occur within the stellar interior, the restoring force will
be dominated by either the pressure force or the buoyancy
force. If a perturbation is regularly driven, it produces
a standing pressure or buoyancy wave propagating throughout
the star, resulting in surface brightness and velocity variations.
The frequency of the standing wave is determined by the chemical
and thermodynamical conditions of the regions that the waves
travel through. Asteroseismology is then the study of the stellar
interior via the observation of and modelling of stellar oscillations
\citep{Aerts2010,Aerts2021}.
The frequencies of g~modes (buoyancy waves) in particular are sensitive
to the near-core chemical gradient \citep{Miglio2008} and rotation rate
\citep{Bouabid2013}. Asteroseismology of stars oscillating in p~modes and
in g~modes with 1.3 $\le$ M $\le$ 24 M$_{\odot}$ has revealed the need
for a wide range of convective boundary mixing efficiencies to reproduce
observed pulsation frequencies \citep{Briquet2007,Handler2013,Moravveji2016,
Schmid2016,Szewczuk2018,Mombarg2019,Pedersen2021}. Additionally, asteroseismology
of stars pulsating in p~modes (pressure waves) between 1.15-1.5~M$_{\odot}$
has revealed the need for a range of convective mixing efficiencies to
reproduce observed frequencies \citep{Deheuvels2010,Deheuvels2011,
SilvaAguirre2011,Salmon2012,Angelou2020,Viani2020,Noll2021}. Similar to
binaries and clusters, the modelling of pulsation frequencies (for both
p and g~modes) is not sensitive to the efficiency of a single mixing
mechanism. Instead asteroseismology is sensitive to the total influence
of chemical mixing that alters the core mass and radius, as well as
the chemical gradient near the core (high order g~modes) and the bulk
density of the star (low order p and g~modes). However, studies have
demonstrated that g-mode asteroseismology is able to differentiate
between the morphology and temperature gradient of the near core mixing
profile for models of the same mass evolved to the same core hydrogen
content \citep{Pedersen2018,Michielsen2019}.
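The sensitivity of high-order g~modes to the near-core region can be made explicit through the asymptotic period spacing (a standard result; see \citealt{Aerts2010}),
\[
\Delta P_{\ell} \simeq \frac{2\pi^{2}}{\sqrt{\ell(\ell+1)}} \left( \int_{r_1}^{r_2} N(r)\, \frac{{\rm d}r}{r} \right)^{-1},
\]
where $N$ is the Brunt--V\"ais\"al\"a frequency and $[r_1, r_2]$ delimits the g-mode cavity. The chemical gradient left behind by a receding convective core produces a peak in $N$ and characteristic dips in the observed period-spacing pattern, which is what makes these modes a probe of near-core mixing \citep{Miglio2008}.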
\section{Convective core masses}
\label{section:core_mass_sample}
Despite the remarkable precision provided by eclipse modelling and/or
asteroseismology, modelling methodologies cannot uniquely constrain the
efficiency of an individual mixing mechanism \citep{Valle2018,Aerts2018b}.
Unfortunately, parameter correlations in stellar evolution models
can produce several models with different masses, ages, and amounts
of internal mixing that all reproduce observations reasonably well
within the uncertainties. As such, it is often the case that any
amount of internal mixing can be used to match observations, even
when the bulk stellar properties or oscillation frequencies are
known to a high precision \citep[e.g.,][]{Aerts2018b,Constantino2018,
Johnston2019a, Johnston2019b,Michielsen2019,Sekaran2021}. Furthermore,
both binary and asteroseismic studies have demonstrated that reasonably
similar fits to the data can be obtained using models with different
types of internal mixing mechanisms \citep{Moravveji2015,Claret2017,
Mombarg2019,Pedersen2021} and for different bulk chemical compositions
\citep{Claret2017,Claret2018}.
Although model degeneracies prevent the direct constraining of mixing
mechanism efficiencies, \citet{Johnston2019b,Tkachenko2020} demonstrated
that 1\% precision on mass and radius estimates from eclipsing binaries
results in inferences on the mass and hydrogen content of the convective
core to a precision of $\sim10$~\% or better. A similar result has been
found through asteroseismic analysis of both p and g mode pulsating stars
with a convective core on the main sequence \citep{Mazumdar2006,Briquet2011,
Johnston2019a,Angelou2020,Mombarg2021,Pedersen2021}.
A few studies have performed comparative model fits between models with
exponentially decaying diffusive overshooting (with a radiative temperature
gradient in the overshooting region) and models with step penetrative
convection (with an adiabatic extension of the convective core). These
studies demonstrate that the best matching solutions from either model
family have core masses, masses, and ages that largely agree within
uncertainties \citep{Claret2018,Tkachenko2020,Mombarg2021,Pedersen2021}.
However, it should be noted that solutions with different metallicities
tend to have different core mass inferences, masses, and ages as well.
This highlights that core mass inferences can function as a robust proxy
for the overall influence of chemical mixing, even when the efficiency of
the implemented mechanism cannot be constrained. Furthermore, this implies
that core mass inferences can serve as a robust comparison point for
solutions that do not use the same implementation of chemical mixing.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/mcc_vs_m_v04.png}
\caption{Convective core mass (M$_{cc}$) vs initial mass (M) of the star. Solid and dashed lines
correspond to fractional core mass at specified points in evolution. Colour of region
corresponds to fractional core hydrogen content as indicated by colour-bar. Colour of
individual marker corresponds to inferred fractional core hydrogen content of best
model. Squares with orange uncertainties denote values obtained from binary systems,
circles with blue uncertainties denote values obtained from binary systems with at least
one pulsating component, and triangles with grey uncertainties denote values obtained
from pulsating stars. References for values and their uncertainties are listed in
Table~\ref{tab:mccs}.}
\label{fig:m_vs_mcc_obs}%
\end{figure}
To this end, we collect a large sample of convective core mass and
core hydrogen content inferences (the mass and remaining hydrogen
content within the Schwarzschild boundary) for core-hydrogen burning
stars from the literature and show them in Fig.~\ref{fig:m_vs_mcc_obs}.
The values, their associated uncertainties, and references are
listed in Table~\ref{tab:mccs}. The core
mass inferences are plotted against predictions from non-rotating
models with $Z=0.014$ and $Y=0.276$ \citep[computed with {\sc mesa}
r10398; ][]{Paxton2019} with a minimum amount of internal chemical
mixing (included in the form of diffusive exponential overshooting
with $f_{ov}$=0.005). The models span 1.1\,<\,M\,<\,25~M$_{\odot}$.
The colour of the background denotes the remaining fraction of the
core-hydrogen content relative to the amount at the zero-age main sequence.
The colour of the markers corresponds to the inferred fractional
core-hydrogen content of that star. If the model predictions were
accurate, the colour of the marker would match the colour of the
background where the point is situated. Instead, the inferred values
are systematically shifted upwards with respect to their predicted location,
indicating that the observations are better reproduced by models with
more massive convective cores that are less progressed along their
main-sequence evolution. This sample is composed of inferences from
p and g mode (and hybrid) pulsating stars, eclipsing binaries, and
pulsating stars in binaries, spanning a wide mass range
(1.14\,<\,M\,<\,24~M$_{\odot}$).
The diversity of this sample {\it i}) confirms the robust need
for more massive convective cores across a wide stellar mass range,
and {\it ii}) demonstrates that models including only one
mixing mechanism with a single efficiency cannot reproduce the
range of core masses displayed in Fig.~\ref{fig:m_vs_mcc_obs}.
Instead, we find that the range of observations can be reproduced by
considering stellar models with internal mixing profiles that span a
wide range of efficiencies. This result is consistent with the fact that
the observables used in modelling efforts are not directly sensitive to a
particular chemical mixing mechanism, but rather to the overall influence
of chemical mixing on the stellar temperature, radius, core properties,
and near core chemical gradient. Thus, we interpret the diversity
of inferred core masses to result from the combined effect of multiple
mixing mechanisms active in different stars. We adopt a representative
mixing profile consisting of convective boundary mixing in the form of diffusive
exponential overshooting \citep{Freytag1996} with $f_{ov}\in[0.005,0.04] H_p$,
and mixing in the radiative zone induced by internal gravity waves with base
efficiencies in the range of $D_{\rm IGW}\in [1-100]$~cm$^2$~s$^{-1}$.
This range is derived by first considering the range of efficiencies
for each mechanism reported in the literature, and then trimming the ranges
to include all values for which we can reproduce the values reported in
Table~\ref{tab:mccs}. We stress that this representative profile and
efficiency ranges are only a proxy for the total amount of internal
chemical mixing, and should not be considered an absolute metric. We
remark instead that this range of efficiencies can be interpreted as
arising from different mechanisms producing an overall different effect
in different stars.
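For concreteness, the diffusive exponential overshooting prescription of \citet{Freytag1996}, as implemented in codes such as {\sc mesa}, takes the form
\[
D_{\rm ov}(z) \;=\; D_0 \, \exp\!\left( \frac{-2z}{f_{\rm ov} H_p} \right),
\]
where $z$ is the distance beyond the convective boundary, $H_p$ is the local pressure scale height, and $D_0$ is the convective diffusivity just inside the boundary. The free parameter $f_{\rm ov}$ thus sets the e-folding length of the mixing efficiency, which is why the range $f_{\rm ov}\in[0.005,0.04]$ quoted above translates directly into a range of effective core masses.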
\section{Implications beyond the main sequence}
The modification of the convective core mass during the main
sequence propagates throughout the remainder of the star's
evolution \citep{Stothers1981}. \citet{Kaiser2020} demonstrated
that variable convective boundary mixing efficiencies lead
to large differences in the resulting helium-core mass at the
terminal-age main sequence (>30\%), the carbon/oxygen core
mass at the end of core helium burning (up to 70\%), and in
the observed surface abundances. These differences necessarily
have consequences for the stellar end product, although
\citet{Kaiser2020} only focused on supernova progenitors. Such
variations in stellar end products have also been demonstrated
for varying degrees of rapid rotation \citep{Brott2011a,Ekstrom2012},
as well as for the inclusion of convective entrainment in 1D
models \citep{Scott2021}.
Figure~\ref{fig:mhe_at_tams} shows the predicted values of the
helium core mass at the end of the main sequence for stars with
1.1$\le$M$\le$25~M$_{\odot}$. The grey region contains those predicted
values for models which include the representative range of chemical
mixing efficiencies which reliably reproduces the range of inferred
core masses shown in Fig~\ref{fig:m_vs_mcc_obs}. These predictions
are compared to predictions by non-rotating MIST \citep[red, ][]{Choi2016}
and PARSEC \citep[blue, ][]{Bressan2012} models. The result of this
comparison is that we predict a wide range of possible resultant helium
core masses, compared to the MIST or PARSEC models which adopt a single
efficiency for one chemical mixing mechanism, and that the predicted
range of helium core masses increases with mass within the available range.
The prediction of a range of helium core masses has numerous implications
beyond main-sequence evolution and to other fields of astronomy. Crucially,
this range suggests that there is not a singular one-to-one relation between
the birth-mass of the star and the mass of the end-product, as can be seen
in the range of the helium core masses. \citet{Kaiser2020} demonstrated that
this can have an impact on the results of supernova simulations, but did not
consider masses below 15~M$_{\odot}$. The mass limit for a star to produce
a neutron star end-product canonically depends on the remnant core mass exceeding
the Chandrasekhar limit of M$_{\rm crit}\approx1.44$~M$_{\odot}$. Typically, a
remnant core with such a mass is thought to be achieved for stars with initial
masses M$_{\rm init}\gtrsim 8$--$9$~M$_{\odot}$, considering possible accretion onto
the core at later evolutionary stages \citep{Heger2003,Fryer2005,Camenzind2007,
Kippenhahn2012}. This limiting mass is denoted by the dotted black line in
Fig.~\ref{fig:mhe_at_tams}. Interestingly, we see that this critical remnant
core mass is already reached for stars with initial masses as low as
M$_{\rm init}\approx 7$~M$_{\odot}$ for the models with the most internal
mixing considered here. This lower limit does not consider possible core mass
enhancement through later phases of shell burning or convective boundary
mixing during later core burning phases.
Furthermore, studies have demonstrated that convective boundary mixing is
favoured to explain observations of evolved single stars \citep{Montalban2013,
Constantino2017,Bossini2017,Arentoft2017,denHartogh2019}, and/or binary
interaction products, such as subdwarfs \citep{Constantino2018,Ostrowski2021}.
However, the study of binaries and stellar populations is also dependent
upon considerations of internal mixing in stellar models. Particularly,
the pathways of binary evolution products depend on the radius and central
condensation of an evolving star to determine the probability, duration,
and efficiency of mass transfer.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/mhe_vs_m_range_v02.png}
\caption{Predicted helium core mass at the terminal-age main sequence for
models with CBM represented by overshooting and internal gravity
wave mixing (grey) compared against predictions from MIST models
\citep{Choi2016} in red and PARSEC models \citep{Bressan2012} in
blue. The dotted black line denotes the fractional core mass
required to produce a remnant helium core with $M=1.44~{\rm M_{\odot}}$.}
\label{fig:mhe_at_tams}%
\end{figure}
\section{Conclusions}
\label{section:conclusions}
It is easy to review the results of all the attempts to calibrate
different internal mixing mechanisms and think that they disagree.
However, given the nature of the sensitivity of modern observables
combined with the limitations of 1D stellar models, we argue that
these results do agree. Irrespective of the type of internal mixing
mechanism implemented, models require a mechanism to modify the
core mass, internal chemical gradients, and surface properties in
order to reproduce observations. This is supported by the fact that
different studies have been able to independently reproduce the behaviour
of commonly used observables with different implementations of mixing
mechanisms such as convective penetration, overshooting, entrainment,
rotation, and internal gravity waves \citep{Brott2011b,Ekstrom2012,
Staritsin2013,Moravveji2015,Claret2017,Tkachenko2020,Pedersen2021,
Scott2021}.
Consulting the diversity of results from the literature, it is clear
that a wide range of internal mixing efficiencies are required to
reproduce observations even when considering a single mixing mechanism.
When considering a single efficiency for a single mixing mechanism,
model predictions are unintentionally biased. Based on the diverse
sample of inferred core masses we have collected, we instead argue
that a range of efficiencies needs to be considered to function as a
proxy for the various mixing processes present at the convective
boundary which are commonly ignored in standard 1D models. As this
in turn produces a range of helium core masses at the terminal-age
main sequence, this range of efficiencies should be accounted for
in evolutionary calculations for both single and binary stars,
nucleosynthetic yield predictions, and population synthesis efforts
beyond the main sequence.
\begin{acknowledgements}
CJ would like to thank the referee for their comments which improved the
manuscript. CJ thanks M.G. Pedersen, J.S.G. Mombarg, G. Angelou, L. Viani,
and C. Neiner for sharing their results to be included in this manuscript, as
well as D.M. Bowman, A. Tkachenko, and C. Aerts for their useful discussion
on topics concerning asteroseismology and chemical mixing profiles, and M.
Abdul-Masih and A. Escorza for their comments on the manuscript. This
work has received funding from NOVA, the European Research Council under the
European Union's Horizon 2020 research and innovation programme (N$^\circ$670519:MAMSIE),
and from the Research Foundation Flanders under grant agreement G0A2917N (BlackGEM).
The computational resources and services used in this work were provided by
the VSC (Flemish Supercomputer Center), funded by the Research Foundation -
Flanders (FWO) and the Flemish Government - department EWI to PI Johnston.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sect:intro}
Let $Z$ be a variety over a finite field. The triangulated category of
$\ell$-adic sheaves on $Z$ has a full subcategory $\dbmc Z$ of ``mixed
sheaves,'' defined in terms of eigenvalues of the Frobenius morphism. The
existence and good formal properties of this category are among the most
important consequences of Deligne's proof of the Weil conjectures. It
plays a major role in the theory of perverse sheaves and their applications
in representation theory. An important part of the formalism of mixed
sheaves is a certain filtration of $\dbmc Z$ by full subcategories
$\{\dbmc Z_{\le w}\}_{w \in \mathbb{Z}}$, known as the \emph{weight filtration}.
Let us now turn our attention to the world of equivariant coherent sheaves.
Let $X$ be a scheme (say, of finite type over a field),
and let $G$ be an affine group scheme acting on $X$ with finitely many orbits. In~\cite{a}, the first author introduced a class of $t$-structures, called \emph{staggered $t$-structures}, on the bounded derived category $\dgb X$ of $G$-equivariant coherent sheaves on $X$. These $t$-structures depend on the choice of a certain kind of filtration of the abelian category of equivariant coherent sheaves. These filtrations, known as \emph{$s$-structures}, bear an at least superficial resemblance to the weight filtration of $\dbmc Z$.
The main goal of this paper is to try to make this resemblance into a precise statement, and to thereby place these two kinds of structures in a unified setting. We do this by introducing the notion of a \emph{baric structure} on a triangulated category. The usual weight filtration on $\dbmc Z$ is not a baric structure, but a modified version of it due to S.~Morel~\cite{mor} is. (Indeed, the definition of a baric structure is largely motivated by Morel's results.) An $s$-structure is not a baric structure either: for one thing, it is a filtration of an abelian category, not of a triangulated category. We show in this paper how to construct baric structures on $\dgb X$ using an $s$-structure on $X$. We also exhibit several other examples of baric structures that have appeared in the literature.
The second goal of the paper is to recast the construction in~\cite{a} as an instance of an abstract operation that can be done on any triangulated category. Specifically, given a triangulated category with ``compatible'' $t$- and baric structures, we outline a procedure, which we call \emph{staggering}, for producing a new $t$-structure. Note that in~\cite{a}, ``staggered'' was simply a name assigned to certain specific $t$-structures by definition, whereas in this paper, ``to stagger'' is a verb. We prove that these two uses of the word are consistent: that is, that the $t$-structures of~\cite{a} arise by staggering the standard $t$-structure on $\dgb X$ with respect to a suitable baric structure.
(The staggering operation can also be applied to the weight baric structure
on $\dbmc Z$, as well as to other baric structures. This yields a new
$t$-structure that has not previously been studied.)
An outline of the paper is as follows. We begin in
Section~\ref{sect:baric} by giving the definition of a baric structure and
of the staggering operation. In Section~\ref{sect:examples}, we give
examples of baric structures, including Morel's version of the weight
filtration. Next, in Section~\ref{sect:baric-coh1}, we begin the study of
baric structures on derived categories of equivariant coherent sheaves,
especially those that behave well with respect to the geometry of the
underlying scheme.
The next three sections are devoted to the relationship between baric
structures and $s$-structures. First, in Section~\ref{sect:stagt}, we
review relevant definitions and results from~\cite{a}.
Section~\ref{sect:baric-coh2} contains the main result of the
paper, showing how $s$-structures on the abelian category of coherent
sheaves give rise to baric structures on the derived category.
In Section~\ref{sect:mult}, we briefly consider the reverse problem, that
of producing $s$-structures from baric structures.
Finally, in Section~\ref{sect:stag2}, we study staggered $t$-structures
associated to the baric structures produced in
Section~\ref{sect:baric-coh2}. Specifically, we prove that their hearts
are finite-length categories, and we give a description of their simple
objects. This was done in some cases in~\cite{a}, but remarkably, the
machinery of baric structures allows us to remove the assumptions that were
imposed in~{\it loc.~cit.}
We conclude by mentioning an application of the machinery
developed in this paper. The language of baric structures allows one to
define a notion of ``purity,'' similar to the one for $\ell$-adic mixed
constructible sheaves. In a subsequent paper~\cite{at}, the authors
prove that every simple staggered sheaf is pure, and that every pure
object in the derived category is a direct sum of shifts of simple
staggered sheaves. These results are analogous to the well-known Purity
and Decomposition Theorems for $\ell$-adic mixed perverse sheaves.
\section{Baric structures}
\label{sect:baric}
In this section we introduce baric structures on triangulated categories
(Definition~\ref{defn:baric}), and the
operation of \emph{staggering} a $t$-structure with respect to a baric
structure (Definition~\ref{defn:stag}).
Staggering produces, out of a $t$-structure $(\mathfrak{D}^{\leq 0}, \mathfrak{D}^{\geq 0})$ on a
triangulated category $\mathfrak{D}$, a new pair of orthogonal subcategories
$({}^s\!\fD^{\leq 0},{}^s\!\fD^{\geq 0})$. Our main result is a criterion which guarantees that
$({}^s\!\fD^{\leq 0},{}^s\!\fD^{\geq 0})$ is itself a $t$-structure
(Theorem~\ref{thm:stag-gen}).
\subsection{Baric structures}
\begin{defn}\label{defn:baric}
Let $\mathfrak{D}$ be a triangulated category.
A \emph{baric structure} on $\mathfrak{D}$ is a pair of collections of thick subcategories $(\{\mathfrak{D}_{\le w}\}, \{\mathfrak{D}_{\ge w}\})_{w \in \mathbb{Z}}$ satisfying the following axioms:
\begin{enumerate}
\item $\mathfrak{D}_{\le w} \subset \mathfrak{D}_{\le w+1}$ and $\mathfrak{D}_{\ge w} \supset \mathfrak{D}_{\ge w+1}$ for all $w$.
\item $\Hom(A,B) = 0$ whenever $A \in \mathfrak{D}_{\le w}$ and $B \in \mathfrak{D}_{\ge w+1}$.
\item For any object $X \in \mathfrak{D}$, there is a distinguished triangle $A \to X
\to B \to$ with $A \in \mathfrak{D}_{\le w}$ and $B \in \mathfrak{D}_{\ge w+1}$.\label{it:dt}
\end{enumerate}
\end{defn}
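The axioms can be checked by hand in a trivial case; the following remark is our own illustration and is not drawn from the literature cited here.

```latex
\begin{rmk}
For any triangulated category $\mathfrak{D}$, the assignment
\[
\mathfrak{D}_{\le w} =
\begin{cases}
\mathfrak{D} & \text{if $w \ge 0$,} \\
0 & \text{if $w < 0$,}
\end{cases}
\qquad
\mathfrak{D}_{\ge w} =
\begin{cases}
0 & \text{if $w \ge 1$,} \\
\mathfrak{D} & \text{if $w \le 0$,}
\end{cases}
\]
defines a baric structure in which every object is concentrated in weight
$0$: the $\Hom$-vanishing axiom holds because one of the two subcategories
involved is always $0$, and the triangle in Axiom~\eqref{it:dt} may be
taken to be $X \to X \to 0 \to$ when $w \ge 0$ and $0 \to X \to X \to$
when $w < 0$. This baric structure is bounded, but it carries no
information.
\end{rmk}
```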
This definition is at least superficially very similar to that of \emph{$t$-structure},
and in fact arguments identical to those given in~\cite[\S\S 1.3.3--1.3.5]{bbd} yield
the following basic properties of baric structures.
\begin{prop}\label{prop:baric-basic}
Let $\mathfrak{D}$ be a triangulated category equipped with a baric structure $(\{\fD_{\leq w}\},\{\fD_{\geq w}\})_{w \in \Z}$.
The inclusion $\mathfrak{D}_{\le w} \hookrightarrow \mathfrak{D}$ admits a right adjoint $\beta_{\le w}: \mathfrak{D} \to \mathfrak{D}_{\le w}$, and the inclusion $\mathfrak{D}_{\ge w} \hookrightarrow \mathfrak{D}$ admits a left adjoint $\beta_{\ge w}: \mathfrak{D} \to \mathfrak{D}_{\ge w}$. There is a distinguished triangle
\[
\beta_{\le w}X \to X \to \beta_{\ge w+1}X \to,
\]
and any distinguished triangle as in Axiom~\eqref{it:dt} above is canonically isomorphic to this one. Furthermore,
if $v \le w$, then we have the following isomorphisms of functors, and
moreover $\beta_{\le v} \circ \beta_{\ge w} \cong \beta_{\ge w} \circ
\beta_{\le v} = 0$ whenever $v < w$:
\begin{align*}
\beta_{\le v} \circ \beta_{\le w} &\cong \beta_{\le v} &
\beta_{\ge v} \circ \beta_{\le w} &\cong \beta_{\le w} \circ \beta_{\ge v}
\\
\beta_{\ge w} \circ \beta_{\ge v} &\cong \beta_{\ge w} &
\beta_{\le v} \circ \beta_{\ge w} &\cong \beta_{\ge w} \circ \beta_{\le v}
\qedhere
\end{align*}
\end{prop}
Note that in a baric structure, unlike in a $t$-structure, the
subcategories $\mathfrak{D}_{\le w}$ and $\mathfrak{D}_{\ge w}$ are required to be
stable under shifts in both directions, and it is not assumed that there is
an autoequivalence $\mathfrak{D} \to \mathfrak{D}$ taking $\mathfrak{D}_{\le w}$ to, say, $\mathfrak{D}_{\le
w+1}$.
Moreover, baric truncation functors enjoy the following important property.
\begin{prop}
The baric truncation functors $\beta_{\le w}$ and $\beta_{\ge w}$ take
distinguished triangles to distinguished triangles.
\end{prop}
\begin{proof}
Let $X \to Y \to Z \to$ be a distinguished triangle in $\mathfrak{D}$, and consider
the natural morphism $\beta_{\le w}X \to X$. The composition of this
morphism with $X \to Y$ factors through $\beta_{\le w}Y \to Y$ (since
$\Hom(\beta_{\le w}X, Y) \cong \Hom(\beta_{\le w}X, \beta_{\le w}Y)$), so
we obtain a commutative diagram
\[
\xymatrix@=10pt{
\beta_{\le w}X \ar[r]\ar[d] & \beta_{\le w}Y \ar[d] \\
X \ar[r] & Y}
\]
Let us complete this diagram using the
$9$-lemma~\cite[Proposition~1.1.11]{bbd}:
\[
\xymatrix@=10pt{
\beta_{\le w}X \ar[r]\ar[d] & \beta_{\le w}Y \ar[r]\ar[d] & Z' \ar[r]\ar[d]
& \\
X \ar[r]\ar[d] & Y\ar[r]\ar[d] & Z \ar[r]\ar[d] & \\
\beta_{\ge w+1}X \ar[r]\ar[d] & \beta_{\ge w+1}Y \ar[r]\ar[d] & Z''
\ar[r]\ar[d] & \\
&&&}
\]
Since $\mathfrak{D}_{\le w}$ and $\mathfrak{D}_{\ge w+1}$ are full triangulated subcategories
of $\mathfrak{D}$, we see that $Z' \in \mathfrak{D}_{\le w}$ and $Z'' \in \mathfrak{D}_{\ge w+1}$.
But then Proposition~\ref{prop:baric-basic} tells us that $Z' \cong
\beta_{\le w}Z$ and $Z'' \cong \beta_{\ge w+1}Z$, so we obtain distinguished
triangles
\[
\beta_{\le w}X \to \beta_{\le w}Y \to \beta_{\le w}Z \to
\qquad\text{and}\qquad
\beta_{\ge w+1}X \to \beta_{\ge w+1}Y \to \beta_{\ge w+1}Z \to,
\]
as desired.
\end{proof}
\begin{defn}
Let $\mathfrak{D}$ be a triangulated category equipped with a baric structure
$(\{\fD_{\leq w}\},\{\fD_{\geq w}\})_{w \in \Z}$.
We will use the following terminology:
\begin{enumerate}
\item The adjoints $\beta_{\le w}$ and $\beta_{\ge w}$ to the inclusions
$\mathfrak{D}_{\le w} \hookrightarrow \mathfrak{D}$ and $\mathfrak{D}_{\ge w} \hookrightarrow \mathfrak{D}$ are called \emph{baric
truncation functors}.
\item The baric structure is \emph{bounded} if for each object $A \in
\mathfrak{D}$,
there exist integers $v, w$ such that $A \in \mathfrak{D}_{\ge v} \cap \mathfrak{D}_{\le w}$.
\item It is \emph{nondegenerate} if there is no nonzero object belonging to
all
$\mathfrak{D}_{\le w}$ or to all $\mathfrak{D}_{\ge w}$. Note that a bounded
baric structure is automatically nondegenerate.
\item Let $\mathfrak{D}'$ be another triangulated category, and suppose it is
equipped with a baric structure $(\{\mathfrak{D}'_{\le w}\}, \{\mathfrak{D}'_{\ge w}\})$.
A functor of triangulated categories $F: \mathfrak{D} \to \mathfrak{D}'$ is said to be
\emph{left baryexact} if $F(\mathfrak{D}_{\ge w}) \subset \mathfrak{D}'_{\ge w}$ for all
$w \in \mathbb{Z}$, and \emph{right baryexact} if $F(\mathfrak{D}_{\le w}) \subset
\mathfrak{D}'_{\le w}$ for all $w \in \mathbb{Z}$.
\end{enumerate}
\end{defn}
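The parenthetical claim in~(3) deserves a word of justification; the following one-line argument is our own gloss.

```latex
Indeed, suppose the baric structure is bounded and $A$ lies in
$\mathfrak{D}_{\le w}$ for every $w \in \mathbb{Z}$. Boundedness provides some $v$ with
$A \in \mathfrak{D}_{\ge v}$; since also $A \in \mathfrak{D}_{\le v-1}$, the $\Hom$-vanishing
axiom of Definition~\ref{defn:baric} gives $\Hom(A,A) = 0$, so
$\mathrm{id}_A = 0$ and $A = 0$. The case of an object lying in every
$\mathfrak{D}_{\ge w}$ is symmetric.
```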
Let us also record the following definitions, though we will not use them
until later in the paper.
\begin{defn}
Let $\mathfrak{D}$ be a triangulated category equipped with a baric structure
$(\{\fD_{\leq w}\},\{\fD_{\geq w}\})_{w \in \Z}$.
\begin{enumerate}
\item Suppose $\mathfrak{D}$ is equipped with an involutive antiequivalence $\mathbb{D}: \mathfrak{D}
\to \mathfrak{D}$.
The baric structure is \emph{self-dual} if $\mathbb{D}(\mathfrak{D}_{\le w}) = \mathfrak{D}_{\ge
-w}$.
\item Suppose $\mathfrak{D}$ has the structure of a tensor category, with tensor
product $\otimes$.
The baric structure is \emph{multiplicative} with respect to $\otimes$ if
for any $A \in \mathfrak{D}_{\le v}$ and $B \in \mathfrak{D}_{\le w}$, we have $A \otimes
B \in \mathfrak{D}_{\le v+w}$.
\item Suppose $\mathfrak{D}$ has an internal Hom functor $\cHom$. The baric
structure is \emph{multiplicative}
with respect to $\cHom$ if for any $A \in \mathfrak{D}_{\leq v}$ and $B \in
\mathfrak{D}_{\geq w}$, we have
$\cHom(A,B) \in \mathfrak{D}_{\geq w -v}$.
\end{enumerate}
Note that whenever we have an adjunction between $\otimes$ and $\cHom$, the
multiplicativity conditions
are equivalent.
\end{defn}
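To indicate why the two multiplicativity conditions are equivalent, here is a sketch of one direction (our own, assuming the adjunction takes the usual form $\Hom(C \otimes A, B) \cong \Hom(C, \cHom(A,B))$):

```latex
Suppose the baric structure is multiplicative with respect to $\otimes$,
and let $A \in \mathfrak{D}_{\le v}$ and $B \in \mathfrak{D}_{\ge w}$. For any object
$C \in \mathfrak{D}_{\le w-v-1}$, we have $C \otimes A \in \mathfrak{D}_{\le w-1}$, so
\[
\Hom(C, \cHom(A,B)) \cong \Hom(C \otimes A, B) = 0.
\]
Taking $C = \beta_{\le w-v-1}\cHom(A,B)$ and using the adjunction of
Proposition~\ref{prop:baric-basic}, we conclude that
$\beta_{\le w-v-1}\cHom(A,B) = 0$, that is,
$\cHom(A,B) \in \mathfrak{D}_{\ge w-v}$.
```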
\subsection{Staggering}
Below, if $\mathfrak{D}$ is equipped with a $t$-structure $(\mathfrak{D}^{\le 0}, \mathfrak{D}^{\ge
0})$,
we write $\mathfrak{C} = \mathfrak{D}^{\le 0} \cap \mathfrak{D}^{\ge 0}$ for its heart, and we denote
the
associated truncation functors by $\tau^{\le n}$ and $\tau^{\ge n}$. The
$n$th
cohomology functor associated to the $t$-structure is denoted $h^n: \mathfrak{D} \to
\mathfrak{C}$.
\begin{defn}\label{defn:compat}
Let $\mathfrak{D}$ be a triangulated category equipped with both a $t$-structure
and a baric structure. These structures are said to be \emph{compatible}
if $\tau^{\le n}$ and $\tau^{\ge n}$ are right baryexact, and $\beta_{\le
w}$
and $\beta_{\ge w}$ are left $t$-exact.
\end{defn}
\begin{rmk}
Of course there is a dual notion of compatibility, but it does not seem to
arise
as often.
\end{rmk}
\begin{defn}\label{defn:stag}
Let $\mathfrak{D}$ be a triangulated category equipped with compatible $t$- and
baric structures. Define two full subcategories of $\mathfrak{D}$ as follows:
\begin{align*}
{}^s\!\fD^{\le 0} &= \{A \in \mathfrak{D} \mid \text{$h^k(A) \in \mathfrak{D}_{\le -k}$ for all $k
\in \mathbb{Z}$} \}, \\
{}^s\!\fD^{\ge 0} &= \{B \in \mathfrak{D} \mid \text{$\beta_{\le k}B \in \mathfrak{D}^{\ge -k}$ for
all $k \in \mathbb{Z}$} \}.
\end{align*}
Assume that the pair $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ constitutes a
$t$-structure. It is called the \emph{staggered $t$-structure}, or the
$t$-structure obtained by \emph{staggering} the original $t$-structure with
respect to the given baric structure.
\end{defn}
As usual, we let ${}^s\!\fD^{\le n} = {}^s\!\fD^{\le 0}[-n]$ and ${}^s\!\fD^{\ge n} =
{}^s\!\fD^{\ge 0}[-n]$.
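Unwinding these shifts against Definition~\ref{defn:stag} gives the following explicit descriptions; this routine restatement is ours, recorded for convenience.

```latex
\begin{align*}
{}^s\!\fD^{\le n} &= \{A \in \mathfrak{D} \mid \text{$h^k(A) \in \mathfrak{D}_{\le n-k}$ for all
$k \in \mathbb{Z}$} \}, \\
{}^s\!\fD^{\ge n} &= \{B \in \mathfrak{D} \mid \text{$\beta_{\le k}B \in \mathfrak{D}^{\ge n-k}$
for all $k \in \mathbb{Z}$} \}.
\end{align*}
For instance, $A \in {}^s\!\fD^{\le n}$ means $A[n] \in {}^s\!\fD^{\le 0}$,
i.e.\ $h^k(A[n]) = h^{k+n}(A) \in \mathfrak{D}_{\le -k}$ for all $k$; reindexing
yields the first description, and the second is similar, using that
$\beta_{\le k}$ commutes with shifts.
```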
\begin{lem}\label{lem:compat}
Let $\mathfrak{D}$ be a triangulated category equipped with compatible $t$- and
baric structures. Assume the $t$-structure is nondegenerate.
\begin{enumerate}
\item $A \in \mathfrak{D}_{\le w}$ if and only if $h^k(A) \in \mathfrak{D}_{\le w}$ for all
$k$. \label{it:lth}
\item $B \in \mathfrak{D}_{\ge w}$ if and only if $\beta_{\le w-1}\tau^{\le k}B \in
\mathfrak{D}^{\ge k+2}$ for all $k$.\label{it:gth}
\item We have \label{it:gthom}
\[
\mathfrak{D}_{\ge w} \cap \mathfrak{C} = \{ B \in \mathfrak{C} \mid
\text{$\Hom^k(A,B) = 0$ for all $A \in \mathfrak{D}_{\le w-1} \cap \mathfrak{C}$ and all $k \ge
0$} \}.
\]
\item $\mathfrak{D}_{\le w} \cap \mathfrak{C}$ is a Serre subcategory of $\mathfrak{C}$, and $\mathfrak{D}_{\ge
w} \cap \mathfrak{C}$ is stable under extensions.\label{it:ltserre}
\item ${}^s\!\fD^{\le 0}$ and ${}^s\!\fD^{\ge 0}$ are stable under extensions.
\label{it:ssext}
\item $\mathfrak{D}^{\le k} \cap \mathfrak{D}_{\le w} \subset {}^s\!\fD^{\le k+w}$, and $\mathfrak{D}^{\ge
k} \cap \mathfrak{D}_{\ge w} \subset {}^s\!\fD^{\ge k+w}$. \label{it:sscap}
\end{enumerate}
\end{lem}
\begin{proof}
\eqref{it:lth}~Since $\mathfrak{D}_{\le w}$ is stable under $\tau^{\le k}$ and
$\tau^{\ge k}$, it is clear that $A \in \mathfrak{D}_{\le w}$ implies that $h^k(A)
\in \mathfrak{D}_{\le w}$. Conversely, suppose $h^k(A) \in \mathfrak{D}_{\le w}$ for all
$k$. Recall (e.g. \cite[Proposition 4.4.6]{verdier}) that we have a spectral sequence
\begin{equation}\label{eqn:e2-1}
E_2^{ab} = \Hom(h^{-b}(A), B[a]) \quad\Longrightarrow\quad
\Hom(A,B[a+b]).
\end{equation}
Since $\Hom(h^{-b}(A), B[a]) = 0$ for all $B \in \mathfrak{D}_{\ge w+1}$ and all $a,
b \in \mathbb{Z}$, we see that $\Hom(A,B) = 0$ for all $B \in \mathfrak{D}_{\ge w+1}$, and
hence that $A \in \mathfrak{D}_{\le w}$.
\eqref{it:gth}~Consider the distinguished triangle
\[
\beta_{\le w-1}\tau^{\le k}B \to \beta_{\le w-1}B \to \beta_{\le
w-1}\tau^{\ge k+1}B \to.
\]
The last term is always in $\mathfrak{D}^{\ge k+1}$ by the left $t$-exactness of
$\beta_{\le w-1}$. If $B \in \mathfrak{D}_{\ge w}$, so that $\beta_{\le w-1}B = 0$,
then $\beta_{\le w-1}\tau^{\le k}B \cong (\beta_{\le w-1}\tau^{\ge
k+1}B)[-1] \in \mathfrak{D}^{\ge k+2}$. Conversely, if the $t$-structure is
nondegenerate, and if $\beta_{\le w-1}\tau^{\le k}B \in \mathfrak{D}^{\ge k+2}$ for
all $k$, the distinguished triangle above shows that $\beta_{\le w-1}B \in
\mathfrak{D}^{\ge k+1}$ for all $k$, and hence that $\beta_{\le w-1}B = 0$, so $B
\in \mathfrak{D}_{\ge w}$, as desired.
\eqref{it:gthom}~If $B \in \mathfrak{D}_{\ge w} \cap \mathfrak{C}$, then clearly
$\Hom(A[-k],B) = 0$ for all $A \in \mathfrak{D}_{\le w-1} \cap \mathfrak{C}$ and all $k \ge
0$, since $A[-k] \in \mathfrak{D}_{\le w-1}$ for all $k$. Conversely, if
$\Hom(A,B[k]) = 0$ for all $A \in \mathfrak{D}_{\le w-1} \cap \mathfrak{C}$ and all $k \ge
0$, the spectral sequence~\eqref{eqn:e2-1} shows that $\Hom(A,B) = 0$ for
all $A \in \mathfrak{D}_{\le w-1}$, and hence that $B \in \mathfrak{D}_{\ge w}$.
\eqref{it:ltserre}~Suppose we have a short exact sequence
\[
0 \to A \to B \to C \to 0
\]
in $\mathfrak{C}$. If $A$ and $C$ are in $\mathfrak{D}_{\le w}$, then $B$ must be as well,
since $\mathfrak{D}_{\le w}$ is stable under extensions. Conversely, suppose $B \in
\mathfrak{D}_{\le w}$. Assume that $C \notin \mathfrak{D}_{\le w}$, and consider the
distinguished triangle
\[
\beta_{\le w}C \to C \to \beta_{\ge w+1}C \to.
\]
By left $t$-exactness of the baric truncation functors, we have an exact
sequence
\[
0 \to h^0(\beta_{\le w}C) \to C \to h^0(\beta_{\ge w+1}C).
\]
We must have $h^0(\beta_{\ge w+1}C) \ne 0$: otherwise, we would have $C
\cong h^0(\beta_{\le w}C) \in \mathfrak{D}_{\le w}$. Next, from the distinguished
triangle
\[
\beta_{\ge w+1}A \to 0 \to \beta_{\ge w+1}C \to,
\]
we see that $\beta_{\ge w+1}A \cong \beta_{\ge w+1}C[-1]$. In particular,
$h^0(\beta_{\ge w+1}A) = 0$. But then the exact sequence
\[
0 \to h^0(\beta_{\le w}A) \to A \to h^0(\beta_{\ge w+1}A) = 0
\]
shows that $A \cong h^0(\beta_{\le w}A) \in \mathfrak{D}_{\le w}$, and hence that
$\beta_{\ge w+1}A = 0$ and $\beta_{\ge w+1}C = 0$. Thus, $A$ and $C$ are
in $\mathfrak{D}_{\le w}$, as desired.
That $\mathfrak{D}_{\ge w} \cap \mathfrak{C}$ is stable under extensions follows immediately
from the fact that $\mathfrak{D}_{\ge w}$ is stable under extensions.
\eqref{it:ssext}~Let $A \to B \to C \to$ be a distinguished triangle with
$A \in {}^s\!\fD^{\le 0}$ and $C \in {}^s\!\fD^{\le 0}$, and consider the exact
sequence
\[
h^k(A) \overset{f}{\to} h^k(B) \overset{g}{\to} h^k(C).
\]
Since $h^k(A) \in \mathfrak{D}_{\le -k}$, its quotient $\im f$ is in $\mathfrak{D}_{\le -k}$
as well. Similarly, $\im g \in \mathfrak{D}_{\le -k}$ because it is a subobject
of $h^k(C)$. Now, from the short exact sequence $0 \to \im f \to h^k(B)
\to \im g \to 0$, we deduce that $h^k(B) \in \mathfrak{D}_{\le -k}$. Thus, $B \in
{}^s\!\fD^{\le 0}$.
On the other hand, if $A \to B \to C \to$ is a distinguished triangle with
$A, C \in {}^s\!\fD^{\ge 0}$, consider the distinguished triangle
\[
\beta_{\le k}A \to \beta_{\le k}B \to \beta_{\le k}C \to.
\]
Since $\beta_{\le k}A$ and $\beta_{\le k}C$ lie in $\mathfrak{D}^{\ge -k}$,
$\beta_{\le k}B \in \mathfrak{D}^{\ge -k}$ as well, so $B \in {}^s\!\fD^{\ge 0}$.
\eqref{it:sscap}~If $A \in \mathfrak{D}^{\le k} \cap \mathfrak{D}_{\le w}$, then $h^i(A[k+w])
= h^{i+k+w}(A) = 0$ if $i > -w$, and $h^i(A[k+w]) \in \mathfrak{D}_{\le w} \subset
\mathfrak{D}_{\le -i}$ if $i \le -w$. Thus, $A[k+w] \in {}^s\!\fD^{\le 0}$, or $A \in
{}^s\!\fD^{\le k+w}$. Next, suppose $B \in \mathfrak{D}^{\ge k} \cap \mathfrak{D}_{\ge w}$. Then
$\beta_{\le i}B[k+w] = 0$ if $i < w$, and $\beta_{\le i}B[k+w] \in \mathfrak{D}^{\ge
k}[k+w] = \mathfrak{D}^{\ge -w} \subset \mathfrak{D}^{\ge -i}$ if $i \ge w$. Hence, $B[k+w]
\in {}^s\!\fD^{\ge 0}$, or $B \in {}^s\!\fD^{\ge k+w}$.
\end{proof}
\begin{prop}\label{prop:compat}
Let $\mathfrak{D}$ be a triangulated category equipped with compatible $t$- and
baric structures. Assume the $t$-structure is nondegenerate.
\begin{enumerate}
\item $\Hom(A,B) = 0$ for all $A \in {}^s\!\fD^{\le 0}$ and $B \in {}^s\!\fD^{\ge 1}$.
\label{it:sshom}
\item If $\Hom(A,B) = 0$ for all $B \in {}^s\!\fD^{\ge 1}$, then $A \in {}^s\!\fD^{\le
0}$. If $\Hom(A,B) = 0$ for all $A \in {}^s\!\fD^{\le 0}$, then $B \in {}^s\!\fD^{\ge
1}$. \label{it:ssorth}
\item ${}^s\!\fD^{\le 0} \subset {}^s\!\fD^{\le 1}$ and ${}^s\!\fD^{\ge 0} \supset {}^s\!\fD^{\ge
1}$. \label{it:ssshift}
\item If the baric structure is also nondegenerate, there is no nonzero
object belonging to all ${}^s\!\fD^{\le n}$ or to all ${}^s\!\fD^{\ge n}$.
\label{it:ssnondeg}
\item If the $t$- and baric structures are bounded, then for any $A \in \mathfrak{D}$,
there are integers $n, m$ such that $A \in {}^s\!\fD^{\ge n} \cap {}^s\!\fD^{\le m}$.
\label{it:ssbdd}
\end{enumerate}
\end{prop}
\begin{proof}
\eqref{it:sshom}~For any $k \in \mathbb{Z}$, $h^{k}(A) \in \mathfrak{D}_{\le -k}$, and
therefore $\Hom(h^{k}(A),B[k]) \cong \Hom(h^{k}(A), \beta_{\le -k}B[k])$.
But $\beta_{\le -k}B \in \mathfrak{D}^{\ge k+1}$, so $\beta_{\le -k}B[k] \in
\mathfrak{D}^{\ge 1}$, and hence $\Hom(h^{k}(A), \beta_{\le -k}B[k]) = 0$ for all
$k$. It follows from the spectral
sequence~\eqref{eqn:e2-1} that $\Hom(A,B) = 0$.
\eqref{it:ssorth}~Suppose $\Hom(A,B) = 0$ for all $B \in {}^s\!\fD^{\ge 1}$, and
suppose for some $k$, $h^k(A) \notin \mathfrak{D}_{\le -k}$. That implies that
$\tau^{\ge k}A \notin \mathfrak{D}_{\le -k}$, so $\beta_{\ge -k+1}\tau^{\ge k}A \ne
0$. In particular, the natural adjunction morphism $A \to \beta_{\ge
-k+1}\tau^{\ge k}A$ is nonzero. However, $\beta_{\ge -k+1}\tau^{\ge k}A
\in \mathfrak{D}^{\ge k} \cap \mathfrak{D}_{\ge -k+1} \subset {}^s\!\fD^{\ge 1}$. This contradicts
the assumption that $\Hom(A,B) = 0$ for all $B \in {}^s\!\fD^{\ge 1}$, so we must
have $h^k(A) \in \mathfrak{D}_{\le -k}$ for all $k$, and hence $A \in {}^s\!\fD^{\le 0}$.
On the other hand, if $\Hom(A,B) = 0$ for all $A \in {}^s\!\fD^{\le 0}$, a
similar argument involving the morphism $\tau^{\le -k}\beta_{\le k}B \to B$
shows that $B \in {}^s\!\fD^{\ge 1}$.
\eqref{it:ssshift}~If $A \in {}^s\!\fD^{\le 0}$, then $h^k(A[1]) = h^{k+1}(A) \in
\mathfrak{D}_{\le -k-1} \subset \mathfrak{D}_{\le -k}$, so $A[1] \in {}^s\!\fD^{\le 0}$, and hence
${}^s\!\fD^{\le 0} \subset {}^s\!\fD^{\le 1}$. Similarly, if $B \in {}^s\!\fD^{\ge 0}$, then
$\beta_{\le k}B[-1] \in \mathfrak{D}^{\ge -k+1} \subset \mathfrak{D}^{\ge -k}$, so $B[-1] \in
{}^s\!\fD^{\ge 0}$.
\eqref{it:ssnondeg}~Suppose $A \in {}^s\!\fD^{\le n}$ for all $n$. Then $h^k(A)
\in \mathfrak{D}_{\le n-k}$ for all $n$ and all $k$. The nondegeneracy of the baric
structure implies that $h^k(A) = 0$; then, the nondegeneracy of the
$t$-structure implies that $A = 0$. Next, suppose $A \in {}^s\!\fD^{\ge n}$ for
all $n$, and assume $A \ne 0$. Choose some $w$ such that $\beta_{\le w}A
\ne 0$, and then choose some $k$ such that $\tau^{\le k}\beta_{\le w}A \ne
0$. By right baryexactness of $\tau^{\le k}$, we know that $\tau^{\le
k}\beta_{\le w}A \in \mathfrak{D}_{\le w}$, so we obtain a sequence of isomorphisms
\[
\Hom(\tau^{\le k}\beta_{\le w}A, \tau^{\le k}\beta_{\le w}A)
\cong \Hom(\tau^{\le k}\beta_{\le w}A, \beta_{\le w}A)
\cong \Hom(\tau^{\le k}\beta_{\le w}A, A).
\]
In particular, the natural map $\tau^{\le k}\beta_{\le w}A \to A$ is
nonzero. But clearly $\tau^{\le k}\beta_{\le w}A \in {}^s\!\fD^{\le k+w}$, so $A
\notin {}^s\!\fD^{\ge k+w+1}$, a contradiction.
\eqref{it:ssbdd}~This follows from Lemma~\ref{lem:compat}\eqref{it:sscap}.
\end{proof}
We will not prove that $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ is a
$t$-structure in full generality; instead, the following theorem gives a
sufficient criterion.
\begin{thm}\label{thm:stag-gen}
Let $\mathfrak{D}$ be a triangulated category endowed with compatible bounded,
nondegenerate $t$- and baric structures. Suppose we have a function $\mu:
\mathfrak{D} \to \mathbb{N}$ with the following properties:
\begin{enumerate}
\item $\mu(X) = 0$ if and only if $X = 0$.
\item If $X \in \mathfrak{D}^{\ge n}$ but $X \notin \mathfrak{D}^{\ge n+1}$, then
$\mu(\tau^{\ge n+1}\beta_{\le -n}X) < \mu(X)$.
\end{enumerate}
Then $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ is a bounded, nondegenerate $t$-structure
on $\mathfrak{D}$.
\end{thm}
\begin{proof}
It will be convenient to use the ``$*$'' operation on triangulated categories
({\it cf.} \cite[\S 1.3.9]{bbd}): given two classes of objects
$\mathcal{A}, \mathcal{B} \subset \mathfrak{D}$, we denote by $\mathcal{A} * \mathcal{B}$ the class of all objects
$X \in \mathfrak{D}$ such that there exists a distinguished triangle $A \to X \to B \to$ with $A \in \mathcal{A}$ and $B \in \mathcal{B}$. In view of the preceding proposition, the present theorem will be proved once we show that every object of $\mathfrak{D}$ belongs to ${}^s\!\fD^{\le 0} * {}^s\!\fD^{\ge 1}$. We proceed by
induction on $\mu(X)$. If $\mu(X) = 0$, then $X = 0$, and there is nothing
to prove. Otherwise, let $n$ be the smallest integer such that $h^n(X) \ne
0$. Let $A_1 = \tau^{\le n}\beta_{\le -n} X$, $X' = \tau^{\ge
n+1}\beta_{\le -n} X$, and $B_1 = \beta_{\ge -n+1} X$. It follows from
the right baryexactness of $\tau^{\le n}$ that $A_1 \in {}^s\!\fD^{\le 0}$, and,
similarly, it follows from the left $t$-exactness of $\beta_{\ge -n+1}$
that $B_1 \in {}^s\!\fD^{\ge 1}$. Recall~\cite[Proposition~1.3.10]{bbd} that the ``$*$'' operation is associative. By construction, we have
\[
X \in \{A_1\} * \{X'\} * \{B_1\} \subset {}^s\!\fD^{\le 0} * \{X'\} * {}^s\!\fD^{\ge 1}.
\]
Since $\mu(X') < \mu(X)$ by assumption, we know that $X' \in {}^s\!\fD^{\le 0} *
{}^s\!\fD^{\ge 1}$, and hence
\[
X \in {}^s\!\fD^{\le 0} * {}^s\!\fD^{\le 0} * {}^s\!\fD^{\ge 1} * {}^s\!\fD^{\ge 1}.
\]
Since ${}^s\!\fD^{\le 0}$ and ${}^s\!\fD^{\ge 1}$ are stable under extensions, we have
${}^s\!\fD^{\le 0} * {}^s\!\fD^{\le 0} = {}^s\!\fD^{\le 0}$ and ${}^s\!\fD^{\ge 1} * {}^s\!\fD^{\ge 1} =
{}^s\!\fD^{\ge 1}$, so $X \in {}^s\!\fD^{\le 0} * {}^s\!\fD^{\ge 1}$, as desired.
\end{proof}
\section{Examples}
\label{sect:examples}
In this section, we exhibit several examples of baric structures occurring
``in nature.'' In the first one, the staggering operation of
Definition~\ref{defn:stag} is a new approach to a known $t$-structure. In
two others, this operation gives what appears to be a previously unknown
$t$-structure. The main example of this paper---baric structures on
derived categories of coherent sheaves---will be discussed in the next section.
\subsection{Perverse sheaves}
Let $X$ be a topologically stratified space (as in~\cite{gm:ih}), with all
strata of even real dimension. (This example can be easily modified to
relax that condition, or to treat stratified varieties over a field
instead.) Let $D = D^b_c(X)$ be the bounded derived category of sheaves of
complex vector spaces that are constructible with respect to the given
stratification. For any $w
\in \mathbb{Z}$, let $X_w$ be the union of all strata of dimension at most $2w$.
(Thus, $X_w = \varnothing$ if $w < 0$.) This is a closed subspace of $X$.
Let $i_w: X_w \to X$ be the inclusion map. Let $D_{\le w}$ be the full
subcategory consisting of complexes whose support is contained in $X_w$,
and let $D_{\ge w+1}$ be the full subcategory of complexes $\mathcal{F}$ such that
$i_w^!\mathcal{F} = 0$.
If $\mathcal{F} \in D_{\le w}$ and $\mathcal{G} \in D_{\ge w+1}$, then $\mathcal{F} \cong
i_{w*}i_w^{-1}\mathcal{F}$, and
\[
\Hom(\mathcal{F},\mathcal{G}) \cong \Hom(i_{w*}i_w^{-1}\mathcal{F},\mathcal{G}) \cong \Hom(i_w^{-1}\mathcal{F},
i_w^!\mathcal{G}) = 0.
\]
Next, let $j_{w+1}: (X \smallsetminus X_w) \to X$ be the open inclusion of the
complement of $X_w$. For any complex $\mathcal{F}$, the distinguished triangle
\[
i_{w*}i_w^!\mathcal{F} \to \mathcal{F} \to (j_{w+1})_*j_{w+1}^{-1}\mathcal{F} \to
\]
is one whose first term lies in $D_{\le w}$ and whose last term lies in
$D_{\ge w+1}$. Thus, we see that $(\{D_{\le w}\}, \{D_{\ge w}\})_{w \in \mathbb{Z}}$ is a
baric structure on $D^b_c(X)$, with baric truncation functors
\[
\beta_{\le w} = i_{w*}i_w^!
\qquad\text{and}\qquad
\beta_{\ge w} = j_{w*}j_w^{-1}.
\]
It is easy to see that this baric structure is compatible with the standard
$t$-structure on $D$. If $\mathcal{F}$ is supported on $X_w$, it is obvious that
any truncation of it is as well, so $D_{\le w}$ is stable under $\tau^{\le
n}$ and $\tau^{\ge n}$. On the other hand, it is clear from the formulas
above that $\beta_{\le w}$ and $\beta_{\ge w}$ are both left $t$-exact.
In the associated staggered $t$-structure $({}^s D^{\le 0}, {}^s D^{\ge 0})$,
we have $\mathcal{F} \in {}^s D^{\le 0}$ if and only if $h^k(\mathcal{F}) \in D_{\le -k}$,
or, in other words,
\[
\dim \supp h^k(\mathcal{F}) \le -2k.
\]
The staggered $t$-structure in this case is none other than the perverse
$t$-structure of middle perversity.
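For the reader's convenience, we spell out the comparison; this unwinding of the definitions in~\cite{bbd} is ours.

```latex
For the middle perversity on a space stratified by even-dimensional
strata, $\mathcal{F} \in {}^p\! D^{\le 0}$ exactly when, for every stratum $S$, the
restriction $h^k(\mathcal{F})|_S$ vanishes whenever $2k > -\dim_{\mathbb{R}} S$;
this is precisely the support condition
$\dim \supp h^k(\mathcal{F}) \le -2k$ displayed above.
```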
\subsection{Quasi-exceptional sets}
Let $\mathfrak{D}$ be a triangulated category. A set of objects $\{\nabla^w\}_{w
\in \mathbb{N}}$ in $\mathfrak{D}$ indexed by nonnegative integers is called a
\emph{quasi-exceptional set} if the following conditions hold:
\begin{enumerate}
\item If $v < w$, then $\Hom(\nabla^v, \nabla^w[k]) =0$ for all $k \in
\mathbb{Z}$.\label{it:exc-hom}
\item For any $w \in \mathbb{N}$, $\Hom(\nabla^w, \nabla^w[k]) = 0$ if $k < 0$, and
$\End(\nabla^w)$ is a division ring.\label{it:exc:end}
\end{enumerate}
For $w \in \mathbb{N}$, let $\mathfrak{D}_{\le w}$ be the full triangulated subcategory of
$\mathfrak{D}$ generated by $\nabla^0, \ldots, \nabla^w$, and for an integer $w <
0$, let $\mathfrak{D}_{\le w}$ be the full triangulated subcategory containing only
zero objects. (Here, we are following the notation of~\cite{bez:qes}, but
this will turn out to be consistent with our notation for baric structures
as well.) A quasi-exceptional set is \emph{dualizable} if there is another
collection of objects $\{\Delta_w\}_{w \in \mathbb{N}}$ such that
\begin{enumerate}
\setcounter{enumi}{2}
\item If $v > w$, $\Hom(\Delta_v, \nabla^w[k]) = 0$ for all $k \in
\mathbb{Z}$.\label{it:dexc-hom}
\item For any $w \in \mathbb{N}$, we have $\Delta_w \cong \nabla^w \mod
\mathfrak{D}_{\le w-1}$.\label{it:dexc-iso}
\end{enumerate}
The last condition means that $\Delta_w$ and $\nabla^w$ give rise to
isomorphic objects in the quotient category $\mathfrak{D}_{\le w}/\mathfrak{D}_{\le w-1}$.
Next, let $\mathfrak{D}_{\ge w}$ be the full triangulated subcategory generated by
the objects $\{\nabla^k \mid k \ge w\}$. If $A \in \mathfrak{D}_{\le w}$ and $B
\in \mathfrak{D}_{\ge w+1}$, then Axiom~\eqref{it:exc-hom} above implies that
$\Hom(A,B) = 0$. In addition, by~\cite[Lemma~4(e)]{bez:qes}, each
inclusion $\mathfrak{D}_{\le w} \to \mathfrak{D}_{\le w+1}$ admits a right adjoint $\iota_w$.
By a straightforward argument, these functors can be used to construct
distinguished triangles as in Definition~\ref{defn:baric}\eqref{it:dt}.
Thus, $(\{\mathfrak{D}_{\le w}\}, \{\mathfrak{D}_{\ge w}\})_{w \in \mathbb{Z}}$ is a baric structure
on $\mathfrak{D}$. It is nondegenerate and bounded by construction.
A key result of~\cite{bez:qes} is the construction of a
bounded, nondegenerate $t$-structure $(\mathfrak{D}^{\le 0}, \mathfrak{D}^{\ge 0})$
associated to a quasi-exceptional set. This $t$-structure is defined as
follows (see~\cite[Proposition~1]{bez:qes}):
\begin{align*}
\mathfrak{D}^{\le 0} &= \langle \{ \Delta_w[n] \mid n \ge 0 \} \rangle, \\
\mathfrak{D}^{\ge 0} &= \langle \{ \nabla^w[n] \mid n \le 0 \} \rangle.
\end{align*}
Here, the notation $\langle S \rangle$ stands for the smallest strictly
full subcategory of $\mathfrak{D}$ that is stable under extensions and contains all
objects in the set $S$.
We claim that this $t$-structure and the baric
structure defined above are compatible. It follows from
Axiom~\eqref{it:exc-hom} above that
\[
\beta_{\le w}\nabla^v =
\begin{cases}
0 & \text{if $w < v$,} \\
\nabla^v & \text{if $w \ge v$,}
\end{cases}
\qquad\text{and}\qquad
\beta_{\ge w}\nabla^v =
\begin{cases}
0 & \text{if $w > v$,} \\
\nabla^v & \text{if $w \le v$.}
\end{cases}
\]
This calculation shows that the baric truncation functors preserve
$\mathfrak{D}^{\ge 0}$. On the other hand, Axiom~\eqref{it:dexc-hom} implies that
$\tau^{\le 0}\nabla^w$ is contained in the subcategory generated by
$\Delta_0, \ldots, \Delta_w$, and that subcategory coincides with
$\mathfrak{D}_{\le w}$ by Axiom~\eqref{it:dexc-iso}. Thus, $\tau^{\le 0}$
preserves $\mathfrak{D}_{\le w}$, so $\tau^{\ge 0}$ does as well.
Finally, given a nonzero object $X \in \mathfrak{D}$, let $a(X)$ be the smallest
integer $n$ such that $X \in \mathfrak{D}^{\ge -n}$, and let $b(X)$ be the smallest
integer $w$ such that $X \in \mathfrak{D}_{\le w}$. Note that $b(X) \ge 0$. Let
\[
\mu(X) =
\begin{cases}
\max \{a(X)+1,b(X)\} + 1 & \text{if $X \ne 0$,} \\
0 & \text{if $X = 0$.}
\end{cases}
\]
Clearly, $\mu$ takes nonnegative integer values, and $\mu(X) = 0$ if and
only if $X = 0$. Moreover, if $a(X) = -n$ (which implies $\mu(X) \ge
-n+2$), then $a(\tau^{\ge n+1}\beta_{\le -n}X) \le -n-1$ and $b(\tau^{\ge
n+1}\beta_{\le -n}X) \le -n$, so $\mu(\tau^{\ge n+1}\beta_{\le-n}X) \le
-n+1$. Thus, the conditions of Theorem~\ref{thm:stag-gen} are satisfied,
and there is a staggered $t$-structure $({}^s\!\fD^{\le 0}, {}^s\!\fD^{\ge 0})$ on
$\mathfrak{D}$.
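For clarity, the decrease in $\mu$ used here can be spelled out. Writing $X' = \tau^{\ge n+1}\beta_{\le -n}X$, if $X' \ne 0$, the bounds $a(X') \le -n-1$ and $b(X') \le -n$ yield
\[
\mu(X') = \max\{a(X')+1,\, b(X')\} + 1 \le \max\{-n,\, -n\} + 1 = -n+1 < -n+2 \le \mu(X),
\]
and if $X' = 0$, then $\mu(X') = 0$; in either case $\mu(X') < \mu(X)$.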
\subsection{Weight truncation for $\ell$-adic mixed constructible sheaves}
Let $X$ be a scheme of finite type over a finite field $\mathbb{F}_q$, and let
$\ell$ be a fixed prime number distinct from the characteristic of $\mathbb{F}_q$.
Let $D = D^b_m(X,\mathbb{Q}_\ell)$ be the bounded derived category of mixed
constructible $\mathbb{Q}_\ell$-sheaves on $X$. Let ${}^p\! h^n$ denote the $n$th
cohomology functor with respect to the perverse $t$-structure on $D$
associated to the middle perversity. Let $D_{\le w}$ (resp.~$D_{\ge w}$) be
the full subcategory of $D^b_m(X,\mathbb{Q}_\ell)$ consisting of objects $\mathcal{F}$ such
that ${}^p\! h^n(\mathcal{F})$ is of weight $\le w$ (resp.~$\ge w$) for all $n \in \mathbb{Z}$.
S.~Morel has shown~\cite[Proposition~4.1.1]{mor} that $(\{D_{\le w}\}, \{D_{\ge
w}\})_{w\in\mathbb{Z}}$ is a baric structure on $D^b_m(X, \mathbb{Q}_\ell)$.
Since all objects in the heart of this $t$-structure have finite length, we
may attach a nonnegative integer $\mu(\mathcal{F})$ to each complex $\mathcal{F}$ by the
formula
\[
\mu(\mathcal{F}) = \sum_{n \in \mathbb{Z}} (\text{length of ${}^p\! h^n(\mathcal{F})$}).
\]
Moreover, by~\cite[Proposition~4.1.3]{mor}, the baric truncation functors
are $t$-exact for the perverse $t$-structure. This implies that $\mu$
satisfies the assumptions of Theorem~\ref{thm:stag-gen}, so the perverse
$t$-structure on $D^b_m(X, \mathbb{Q}_\ell)$ can be staggered with respect to
Morel's baric structure to obtain a new $t$-structure. The authors are not
aware of any previous appearance of this ``staggered-perverse''
$t$-structure on $\ell$-adic mixed constructible sheaves.
\subsection{Diagonal complexes}
We conclude with an example, due to T.~Ekedahl~\cite{eke}, of a
$t$-structure that closely resembles a staggered $t$-structure, although
it does not in general arise by staggering with respect to a baric
structure. (The authors thank N.~Ramachandran for pointing out this work
to them.) Let $\mathfrak{D}$ be a triangulated category with a bounded,
nondegenerate $t$-structure $(\mathfrak{D}^{\le 0}, \mathfrak{D}^{\ge 0})$, and as usual,
let $\mathfrak{C} = \mathfrak{D}^{\le 0} \cap \mathfrak{D}^{\ge 0}$. Suppose $\{\mathfrak{C}_{\le w}\}_{w \in
\mathbb{Z}}$ is an increasing collection of Serre subcategories of $\mathfrak{C}$, and let
$\mathfrak{C}_{\ge w} = \{ B \in \mathfrak{C} \mid \text{$\Hom(A,B) = 0$ for all $A \in
\mathfrak{C}_{\le w-1}$} \}$. Following Ekedahl, the collection $\{\mathfrak{C}_{\le w}\}$
is called a \emph{radical filtration} of the pair $(\mathfrak{D}, \mathfrak{C})$ if the
following axioms hold:
\begin{enumerate}
\item For each object $A \in \mathfrak{C}$, there exist integers $v, w$ such that
$A \in \mathfrak{C}_{\ge v} \cap \mathfrak{C}_{\le w}$.
\item If $A \in \mathfrak{C}_{\le w}$ and $B \in \mathfrak{C}_{\ge v}$, then $\Hom^{v-w-1}(A,
B) = \Hom(A, B[v-w-1]) = 0$ in $\mathfrak{D}$.
\end{enumerate}
If $(\mathfrak{D}, \mathfrak{C})$ is equipped with a radical filtration, Ekedahl shows that
the categories
\begin{align*}
\tilde \mathfrak{D}^{\le 0} &= \{ A \in \mathfrak{D} \mid \text{$h^k(A) \in \mathfrak{C}_{\le-k}$ for
all $k \in \mathbb{Z}$} \}, \\
\tilde \mathfrak{D}^{\ge 0} &= \{ B \in \mathfrak{D} \mid \text{$h^k(B) \in \mathfrak{C}_{\ge-k}$ for
all $k \in \mathbb{Z}$} \}
\end{align*}
constitute a bounded, nondegenerate $t$-structure on $\mathfrak{D}$. This is
called the \emph{diagonal $t$-structure}, and the objects in its heart are
called \emph{diagonal complexes}.
These formulas are, of course, strongly reminiscent of those in
Definition~\ref{defn:stag}. Let us comment briefly on the relationship
between the two constructions. Given a radical filtration, one could hope
to define a baric structure by setting $\mathfrak{D}_{\le w} = \{ A \in \mathfrak{D} \mid
h^k(A) \in \mathfrak{C}_{\le w}\text{ for all }k \in \mathbb{Z} \}$. However, the
construction of a baric truncation functor turns out to require a stronger
Hom-vanishing condition between $\mathfrak{C}_{\le w}$ and $\mathfrak{C}_{\ge w+1}$ than
that stated above: one needs something like
Lemma~\ref{lem:compat}\eqref{it:gthom}. Conversely, given a
baric structure, one could hope to define a radical filtration by setting
$\mathfrak{C}_{\le w} = \mathfrak{D}_{\le w} \cap \mathfrak{C}$. This also fails, because a baric
structure imposes no higher Hom-vanishing conditions on the
right-orthogonal of $\mathfrak{C}_{\le w}$.
\section{Baric Structures on Coherent Sheaves, I}
\label{sect:baric-coh1}
In this section, we will investigate baric structures on derived categories
of coherent sheaves. Let $X$ be a scheme of finite type over a noetherian
base scheme, and let $G$ be an affine group scheme over the same base,
acting on $X$. We adopt the convention that all statements about
subschemes are to be understood in the $G$-invariant sense. Thus, ``open
subscheme'' will always mean ``$G$-stable open subscheme,'' and
``irreducible'' will mean ``not a union of two proper $G$-stable closed
subschemes.'' This convention will remain in effect for the remainder of
the paper.
Let $\cg X$ and $\qg X$ denote the categories of
$G$-equivariant coherent and quasicoherent sheaves, respectively, on $X$.
One of the headaches of the subject is the need to work with three closely
related triangulated categories, which we denote as follows:
\begin{enumerate}
\item[(1)] $\dgb X$ is the bounded derived category of $\cg X$.
\item[(2)] $\dgm X$ is the bounded-above derived category of $\cg X$.
\item[(3)] $\dgp X$ is the full subcategory of the bounded-below derived
category of
$\qg X$ consisting of objects with coherent cohomology sheaves.
\end{enumerate}
$\dgb X$ will be the focus of our attention, but it will be necessary to
work with $\dgm X$ and $\dgp X$ as well, simply because most operations on
sheaves take values in one of those categories, even when acting on bounded
complexes.
\begin{defn}
A \emph{baric structure} on $X$ is a baric structure on $\dgb X$ which is
compatible
with the standard $t$-structure.
\end{defn}
\begin{rmk}
\label{rmks:schemebaric}
Implicit in this definition are some finiteness conditions; {\it e.g.}, it is
conceivable that there are interesting baric structures on $\dgp X$ that
take advantage of the fact that the functors $\beta_{\leq w}$ can take
bounded complexes to unbounded complexes. Nevertheless, this is the
definition we will work with.
\end{rmk}
Inspired by parts~\eqref{it:lth} and~\eqref{it:gth} of
Lemma~\ref{lem:compat}, we define the following subcategories of $\dgm X$
and $\dgp X$:
\begin{align*}
\dsml Xw &= \{ \mathcal{F} \in \dgm X \mid \text{$h^k(\mathcal{F}) \in \dsl Xw$ for all
$k$} \}, \\
\dspg Xw &= \{ \mathcal{F} \in \dgp X \mid \text{$\beta_{\le w-1}\tau^{\le k}\mathcal{F}
\in \dgbg X{k+2}$ for all $k$} \}.
\end{align*}
It is unknown whether these categories constitute parts of baric structures
on $\dgm X$ or on $\dgp X$. Nevertheless, they will be useful
in the sequel, in part because they admit the alternate characterization
given in the lemma below. If $Y$ is another scheme endowed with a baric
structure, we will, by a minor abuse of terminology, call a functor $\dgm X
\to \dgm Y$ \emph{right baryexact} if it takes objects of $\dsml Xw$ to
objects of $\dsml Yw$. Similarly, we call a functor $\dgp X \to \dgp Y$
\emph{left baryexact} if it takes objects of $\dspg Xw$ to $\dspg Yw$.
\begin{lem}\label{lem:unbdd-orth}
\begin{enumerate}
\item For $\mathcal{F} \in \dgm X$, we have $\mathcal{F} \in \dsml Xw$ if and only if
$\Hom(\mathcal{F},\mathcal{G}) = 0$ for all $\mathcal{G} \in \dsg X{w+1}$.
\item For $\mathcal{F} \in \dgp X$, we have $\mathcal{F} \in \dspg Xw$ if and only if
$\Hom(\mathcal{G},\mathcal{F}) = 0$ for all $\mathcal{G} \in \dsl X{w-1}$.
\end{enumerate}
\end{lem}
In particular, we see from this lemma that
\begin{equation}\label{eqn:pm-bdd}
\begin{aligned}
\dsml Xw \cap \dgb X &= \dsl Xw, \\
\dspg Xw \cap \dgb X &= \dsg Xw.
\end{aligned}
\end{equation}
\begin{proof}
(1)~Suppose $\mathcal{F} \in \dsml Xw$. By Lemma~\ref{lem:compat}\eqref{it:lth},
$\tau^{\ge k}\mathcal{F} \in \dsl Xw$ for all $k$. In particular, given $\mathcal{G} \in
\dsg X{w+1}$, let $k$ be such that $\mathcal{G} \in \dgbg Xk$. Then $\Hom(\mathcal{F},\mathcal{G})
\cong \Hom(\tau^{\ge k}\mathcal{F}, \mathcal{G}) = 0$. Conversely, suppose $\mathcal{F} \in \dgm
X$ but $\mathcal{F} \notin \dsml Xw$, so that for some $k$, $h^k(\mathcal{F}) \notin \dsl
Xw$. Then $\tau^{\ge k}\mathcal{F} \notin \dsl Xw$. Let $\mathcal{G} = \beta_{\ge
w+1}\tau^{\ge k}\mathcal{F}$. We then have a nonzero morphism $\tau^{\ge k}\mathcal{F} \to
\mathcal{G}$. Moreover, since the baric structure on $\dgb X$ is compatible with
the standard $t$-structure, we have that $\mathcal{G} \in \dgbg Xk$, so there is a
natural isomorphism $\Hom(\tau^{\ge k}\mathcal{F},\mathcal{G}) \cong \Hom(\mathcal{F},\mathcal{G})$. Thus,
$\Hom(\mathcal{F},\mathcal{G}) \ne 0$.
(2)~Suppose $\mathcal{F} \in \dspg Xw$. Given $\mathcal{G} \in \dsl X{w-1}$, let $k$ be
such that $\mathcal{G} \in \dgbl Xk$. Then $\Hom(\mathcal{G},\mathcal{F}) \cong \Hom(\mathcal{G},\tau^{\le
k}\mathcal{F}) \cong \Hom(\mathcal{G},\beta_{\le w-1}\tau^{\le k}\mathcal{F}) = 0$. Conversely, if
$\mathcal{F} \in \dgp X$ but $\mathcal{F} \notin \dspg Xw$, then for some $k$, $\beta_{\le
w-1}\tau^{\le k}\mathcal{F} \notin \dgbg X{k+2}$. Let $\mathcal{G} = \tau^{\le
k+1}\beta_{\le w-1}\tau^{\le k}\mathcal{F}$. Then clearly $\mathcal{G} \in \dgbl X{k+1}$ and
$\mathcal{G} \in \dsl X{w-1}$, and there is a nonzero morphism $\mathcal{G} \to \beta_{\le
w-1}\tau^{\le k}\mathcal{F}$. In particular, the group $\Hom(\mathcal{G}, \beta_{\le w-1}\tau^{\le k}\mathcal{F}) \cong \Hom(\mathcal{G}, \tau^{\le k}\mathcal{F})$ is nonzero. Now, consider the exact sequence
\[
\Hom(\mathcal{G}, (\tau^{\ge k+1}\mathcal{F})[-1]) \to \Hom(\mathcal{G}, \tau^{\le k}\mathcal{F}) \to \Hom(\mathcal{G}, \mathcal{F}).
\]
The first term vanishes because $(\tau^{\ge k+1}\mathcal{F})[-1] \in \dgpg X{k+2}$, so the natural map $\Hom(\mathcal{G}, \tau^{\le k}\mathcal{F}) \to \Hom(\mathcal{G}, \mathcal{F})$ is injective. It follows that $\Hom(\mathcal{G},\mathcal{F}) \ne 0$.
\end{proof}
\subsection{HLR baric structures}
\label{sect:hlr}
We do not wish to work with arbitrary baric structures on $\dgb X$; rather,
we want them to be well-behaved in relation to the scheme structure on
$X$. We have already imposed the condition that the baric structure be
compatible with the standard $t$-structure. We may also ask that it
give rise to baric structures on subschemes, in the following sense.
\begin{defn}\label{defn:induced}
Suppose $X$ is equipped with a baric structure, and let $\kappa: Y \hookrightarrow X$
be a locally closed subscheme. A baric structure
on $Y$ is said to be \emph{induced} by the one on $X$ if $L\kappa^*$ is
right baryexact and $R\kappa^!$ is left baryexact.
\end{defn}
The class of ``HLR (hereditary, local, and rigid) baric structures,''
defined below, is particularly well-behaved. For instance, every locally
closed subscheme of a scheme with an HLR baric structure admits a unique
induced baric structure. (See Theorem~\ref{thm:hlr-induced}.) The
remainder of Section~\ref{sect:baric-coh1} is devoted to establishing
various properties of HLR baric structures, and the main result of the
paper, Theorem~\ref{thm:main}, is a statement about a class of nontrivial
HLR baric structures.
\begin{defn}\label{defn:hlr}
A baric structure on $X$ is said to be
\emph{hereditary} if every closed subscheme admits an induced
baric structure. A hereditary baric structure on $X$ is said to be
\emph{local} if every open subscheme admits an induced baric structure that
is also hereditary.
Next, a hereditary baric structure on $X$ is \emph{rigid} if for every
sequence of closed subschemes $Z \overset{t}{\hookrightarrow} Z_1 \hookrightarrow X$ where $Z_1$
is a nilpotent thickening of $Z$ ({\it i.e.}, $Z_1$ has the same underlying
topological space as $Z$), the induced baric structures on $Z$ and $Z_1$
are related as follows:
\begin{equation}\label{eqn:rigid}
\begin{aligned}
\dsl {Z_1}w &= \text{the thick closure of $t_*(\dsl Zw)$,} \\
\dsg {Z_1}w &= \text{the thick closure of $t_*(\dsg Zw)$.}
\end{aligned}
\end{equation}
Finally, a baric structure that is hereditary, local, and rigid is called
an \emph{HLR baric structure}.
\end{defn}
It turns out that the ``local'' and ``rigid'' conditions in the definition of an HLR baric structure are redundant:
\begin{thm}\label{thm:hlr}
Every hereditary baric structure is HLR.
\end{thm}
This theorem will be proved in Section~\ref{sect:hlrproof}. We first
require a couple of preliminary lemmas about induced baric structures,
proved below. Following that, in Section~\ref{sect:hlrprop}, we will
establish a number of useful properties of HLR baric structures.
\begin{lem}\label{lem:induced}
Let $\schemebaric X$ be a baric structure on $X$, and let $i: Z \hookrightarrow X$ be
a closed subscheme. If $Z$
admits an induced baric structure, it is given by
\begin{equation}\label{eqn:hered}
\begin{aligned}
\dsl Zw &= \{ \mathcal{F} \in \dgb Z \mid i_* \mathcal{F} \in \dsl Xw \}, \\
\dsg Zw &= \{ \mathcal{F} \in \dgb Z \mid i_* \mathcal{F} \in \dsg Xw \}.
\end{aligned}
\end{equation}
Conversely, if the categories~\eqref{eqn:hered} constitute a baric
structure on $Z$, then that baric structure is induced from the one on $X$.
If an open subscheme $j: U \hookrightarrow X$ admits an induced baric structure, it
is given by
\begin{equation}\label{eqn:local}
\begin{aligned}
\dsl Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some
$\mathcal{F}_1 \in \dsl Xw$} \}, \\
\dsg Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some
$\mathcal{F}_1 \in \dsg Xw$} \}.
\end{aligned}
\end{equation}
Conversely, if the categories~\eqref{eqn:local} constitute a baric
structure on $U$, then that baric structure is induced from the one on $X$.
\end{lem}
\begin{proof}
Let $\schemebaric Z$ be an induced baric structure on a closed subscheme
$i: Z \hookrightarrow X$. If $\mathcal{F} \in \dsl Zw$, then for all $\mathcal{G} \in \dspg X{w+1}$,
we have (by Lemma~\ref{lem:unbdd-orth}) that $\Hom(\mathcal{F}, Ri^!\mathcal{G}) = 0$, and
therefore $\Hom(i_*\mathcal{F}, \mathcal{G}) = 0$. The latter implies that $i_*\mathcal{F} \in
\dsl Xw$. Similarly, if $\mathcal{F} \in \dsg Zw$, then $\Hom(Li^*\mathcal{G}, \mathcal{F}) =
\Hom(\mathcal{G}, i_*\mathcal{F}) = 0$ for all $\mathcal{G} \in \dsml X{w-1}$, so $i_*\mathcal{F} \in \dsg
Xw$. For the opposite inclusion, given an object $\mathcal{F} \in \dgb Z$, form
the distinguished triangle
\[
i_*\beta_{\le w}\mathcal{F} \to i_*\mathcal{F} \to i_*\beta_{\ge w+1}\mathcal{F} \to
\]
in $\dgb X$. By the reasoning above, we have $i_*\beta_{\le w}\mathcal{F} \in \dsl Xw$
and $i_*\beta_{\ge w+1}\mathcal{F} \in \dsg X{w+1}$, so the first and last terms
above must be the baric truncations of $i_*\mathcal{F}$:
\[
i_*\beta_{\le w}\mathcal{F} \cong \beta_{\le w}i_*\mathcal{F}
\qquad\text{and}\qquad
i_*\beta_{\ge w+1}\mathcal{F} \cong \beta_{\ge w+1}i_*\mathcal{F}.
\]
Thus, if $i_*\mathcal{F} \in \dsl Xw$, then $\beta_{\ge w+1}i_*\mathcal{F} = i_*\beta_{\ge
w+1}\mathcal{F} = 0$. Since $i_*$ is faithful, this implies that $\beta_{\ge
w+1}\mathcal{F} = 0$, so that $\mathcal{F} \in \dsl Zw$. The same argument shows that
$i_*\mathcal{F} \in \dsg Xw$ implies that $\mathcal{F} \in \dsg Zw$.
Next, assume the categories~\eqref{eqn:hered} constitute a baric
structure on $Z$. We will show that this baric structure is induced from
the one on $X$. If $\mathcal{F} \in \dsml Xw$, then $\Hom(\mathcal{F}, i_*\mathcal{G}) = 0$ for
all $\mathcal{G} \in \dsg Z{w+1}$ by Lemma~\ref{lem:unbdd-orth}, so
$\Hom(Li^*\mathcal{F},\mathcal{G}) = 0$, and hence $Li^*\mathcal{F} \in \dsml Zw$. Similarly, if
$\mathcal{F} \in \dspg Xw$, then $\Hom(i_*\mathcal{G},\mathcal{F}) = \Hom(\mathcal{G}, Ri^!\mathcal{F}) = 0$ for
all $\mathcal{G} \in \dsl Z{w-1}$, so $Ri^!\mathcal{F} \in \dspg Zw$. Thus, $Li^*$ is
right baryexact, and $Ri^!$ is left baryexact, as desired.
We turn now to open subschemes. Suppose $\schemebaric U$ is an induced
baric structure on an open subscheme $j: U \hookrightarrow X$. In view of the
equalities~\eqref{eqn:pm-bdd}, the definition of ``induced'' implies that
$j^*: \dgb X \to \dgb U$ is baryexact. In other words, if
$\mathcal{F}_1 \in \dsl Xw$, then $j^*\mathcal{F}_1 \in \dsl Uw$, and if $\mathcal{F}_1 \in \dsg
Xw$, then $j^*\mathcal{F}_1 \in \dsg Uw$. Conversely, if $\mathcal{F} \in \dsl Uw$, then
there exists some object $\mathcal{F}' \in \dgb X$ such that $j^*\mathcal{F}' \cong \mathcal{F}$.
Form the distinguished triangle $\beta_{\le w}\mathcal{F}' \to \mathcal{F}' \to \beta_{\ge
w+1}\mathcal{F}' \to$, and apply $j^*$ to it. We know that $j^*\beta_{\le w}\mathcal{F}'
\in \dsl Uw$ and that $j^*\beta_{\ge w+1}\mathcal{F}' \in \dsg U{w+1}$. Since
$j^*\mathcal{F}' \cong \mathcal{F}$, we see from the triangle
\[
j^*\beta_{\le w}\mathcal{F}' \to \mathcal{F} \to j^*\beta_{\ge w+1}\mathcal{F}' \to
\]
that $j^*\beta_{\ge w+1}\mathcal{F}' \cong \beta_{\ge w+1}\mathcal{F} = 0$, and hence that
$\mathcal{F} \cong j^*\beta_{\le w}\mathcal{F}'$. Thus, setting $\mathcal{F}_1 = \beta_{\le
w}\mathcal{F}'$, we have found an $\mathcal{F}_1 \in \dsl Xw$ such that $j^*\mathcal{F}_1 \cong
\mathcal{F}$. The argument for $\dsg Uw$ is similar.
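Explicitly, the argument for $\dsg Uw$ runs as follows. Given $\mathcal{F} \in \dsg
Uw$, choose $\mathcal{F}' \in \dgb X$ with $j^*\mathcal{F}' \cong \mathcal{F}$, and apply $j^*$ to
the triangle $\beta_{\le w-1}\mathcal{F}' \to \mathcal{F}' \to \beta_{\ge w}\mathcal{F}' \to$ to
obtain
\[
j^*\beta_{\le w-1}\mathcal{F}' \to \mathcal{F} \to j^*\beta_{\ge w}\mathcal{F}' \to.
\]
Since $j^*\beta_{\le w-1}\mathcal{F}' \in \dsl U{w-1}$ and $j^*\beta_{\ge w}\mathcal{F}'
\in \dsg Uw$, we have $j^*\beta_{\le w-1}\mathcal{F}' \cong \beta_{\le w-1}\mathcal{F} =
0$, and hence $\mathcal{F} \cong j^*\beta_{\ge w}\mathcal{F}'$ with $\beta_{\ge w}\mathcal{F}' \in
\dsg Xw$.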
Finally, assume the categories~\eqref{eqn:local} constitute a baric
structure on $U$. We must show that this baric structure is induced.
Clearly, $j^*$ is baryexact as a functor of bounded derived categories
$\dgb X \to \dgb U$. Since $j^*$ is also exact, it commutes with
truncation and cohomology functors, and it takes $\dgbg Xw$ to $\dgbg
Uw$. It follows from these observations that it takes $\dsml Xw$ to $\dsml
Uw$ and $\dspg Xw$ to $\dspg Uw$.
\end{proof}
\begin{lem}\label{lem:hlr-baric-res}
Let $j: U \hookrightarrow X$ be the inclusion of an open subscheme, and let $i: Z \hookrightarrow
X$ be the inclusion of a closed subscheme. Assume that $U$ and $Z$ are equipped with baric structures induced from one on $X$. Then:
\begin{enumerate}
\item $j^*$ takes $\dsml Xw$ to $\dsml Uw$ and $\dspg Xw$ to
$\dspg Uw$.
\item $Li^*$ takes $\dsml Xw$ to $\dsml Zw$.
\item $Ri^!$ takes $\dspg Xw$ to $\dspg Zw$.
\item $i_*$ takes $\dsml Zw$ to $\dsml Xw$ and $\dspg Zw$ to
$\dspg Xw$.
\end{enumerate}
\end{lem}
\begin{proof}
Parts~(1), (2), and~(3) hold by definition.
(4)~We saw in the proof of Lemma~\ref{lem:induced}
that as a functor of bounded derived categories $\dgb Z \to \dgb X$, $i_*$
is baryexact. Since $i_*$ is also an exact functor, we have $h^k(i_*\mathcal{F})
\cong i_*h^k(\mathcal{F})$ for any $\mathcal{F} \in \dgm Z$. Thus, if $\mathcal{F} \in \dsml
Zw$, we have $h^k(i_*\mathcal{F}) \in \dsl Xw$ for all $k$; in other words,
$i_*\mathcal{F} \in \dsml Xw$. On the other hand, suppose $\mathcal{F} \in \dspg Zw$.
Since $i_*$ is exact and baryexact on $\dgb Z$, we have $i_*\beta_{\le
w-1}\tau^{\le k}\mathcal{F} \cong \beta_{\le w-1}\tau^{\le k}i_*\mathcal{F}$. Moreover,
the fact that $\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg Z{k+2}$ for all
$k$ implies that $i_*\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg X{k+2}$ for
all $k$. Thus, $i_*\mathcal{F} \in \dspg Xw$.
\end{proof}
\begin{lem}\label{lem:closed-hered}
Let $\schemebaric X$ be a hereditary baric structure on $X$, and let $i: Z
\hookrightarrow X$ be the inclusion of a closed subscheme. The induced baric
structure on $Z$ is also hereditary.
\end{lem}
\begin{proof}
Let $\kappa: Y \hookrightarrow Z$ be a closed subscheme of $Z$. We must show that $Y$
admits a baric structure
induced from the one on $Z$. In fact, we claim that the baric structure
on $Y$ induced from the one on $X$ (via $i \circ \kappa: Y \hookrightarrow X$) has the
desired property. Suppose $\mathcal{F} \in \dsml Zw$. If $L\kappa^*\mathcal{F} \notin
\dsml Yw$, then there is some $\mathcal{G} \in \dsg Y{w+1}$ such that
$\Hom(L\kappa^*\mathcal{F},\mathcal{G})
\ne 0$. Then $\Hom(\mathcal{F}, \kappa_*\mathcal{G}) \ne 0$ and, because $i_*$ is
faithful,
$\Hom(i_*\mathcal{F}, i_*\kappa_*\mathcal{G}) \ne 0$. But this is impossible,
because according to Lemma~\ref{lem:hlr-baric-res}, $i_*\mathcal{F} \in \dsml
Xw$ and $(i\circ \kappa)_*\mathcal{G} \in \dsg X{w+1}$. Thus, $L\kappa^*\mathcal{F} \in
\dsml Yw$. Similarly, if $\mathcal{F} \in \dspg Zw$, a consideration of $\Hom(\mathcal{G},
R\kappa^!\mathcal{F})$ and $\Hom(i_*\kappa_*\mathcal{G}, i_*\mathcal{F})$ for $\mathcal{G} \in \dsl
Y{w-1}$ shows that $R\kappa^!\mathcal{F} \in \dspg Yw$. Thus, $L\kappa^*$ is right
baryexact and $R\kappa^!$ is left baryexact, so the baric structure on $Y$
induced from the one on $X$ is also induced from the one on $Z$. The
induced baric structure on $Z$ is therefore hereditary.
\end{proof}
\subsection{Properties of HLR baric structures}
\label{sect:hlrprop}
In this section, we prove three useful results about HLR baric structures.
First, we prove that the HLR property is inherited by induced baric
structures on subschemes. Next, we prove an additional rigidity property
for nilpotent thickenings of closed subschemes. Finally,
we prove a ``gluing theorem'' that states that an HLR baric structure is
determined by the baric structures it induces on a closed subscheme and the
complementary open subscheme. It should be noted that the proofs of these
results depend on Theorem~\ref{thm:hlr}.
\begin{thm}\label{thm:hlr-induced}
Suppose $X$ is endowed with an HLR baric structure. Every locally closed
subscheme $\kappa: Y \hookrightarrow X$
admits a unique induced baric structure. Moreover, this baric
structure is also HLR.
\end{thm}
\begin{proof}
We have already seen the uniqueness of the induced baric structure in the
case of open or closed subschemes, in Lemma~\ref{lem:induced}. For a
general locally closed subscheme, let us factor the inclusion map $\kappa:
Y \to X$ as a closed imbedding $i: Y \hookrightarrow U$ followed by an open imbedding
$j: U \hookrightarrow X$. Then $U$ acquires a unique induced hereditary baric
structure from the baric structure on $X$, and it in turn induces a unique
baric structure on its closed subscheme $Y$. This baric structure is also
induced from the one on $X$: clearly, $L\kappa^* = Li^* \circ j^*$ is
right baryexact, and $R\kappa^! = Ri^! \circ j^*$ is left baryexact.
To show that this is the unique baric structure on $Y$ induced from the
one on $X$, we must show that the baryexactness assumptions on $L\kappa^*$
and $R\kappa^!$ imply the same conditions on $Li^*$ and $Ri^!$. (It then
follows that any baric structure induced from the one on $X$ is actually
induced from the one on $U$.) Suppose $\mathcal{F} \in \dsml Uw$, and consider
a distinguished triangle of the form
\[
Li^*\tau^{\le k-1}\mathcal{F} \to Li^*\mathcal{F} \to Li^*\tau^{\ge k}\mathcal{F} \to.
\]
Since $Li^*\tau^{\le k-1}\mathcal{F} \in \dgml Y{k-1}$, we see that $h^k(Li^*\mathcal{F})
\cong h^k(Li^*\tau^{\ge k}\mathcal{F})$. Now, $\tau^{\ge k}\mathcal{F}$ is an object in
$\dsl Uw$, so there exists an object $\mathcal{F}_1 \in \dsl Xw$ such that
$j^*\mathcal{F}_1 \cong \tau^{\ge k}\mathcal{F}$. By assumption, $L\kappa^*\mathcal{F}_1 \in
\dsml Yw$. But $L\kappa^*\mathcal{F}_1 \cong Li^*\tau^{\ge k}\mathcal{F}$, so we conclude
that $h^k(Li^*\tau^{\ge k}\mathcal{F}) \cong h^k(Li^*\mathcal{F}) \in \dsl Yw$. Thus,
$Li^*\mathcal{F} \in \dsml Yw$.
On the other hand, suppose that $\mathcal{F} \in \dspg Uw$, and consider a
distinguished triangle of the form
\[
Ri^!\tau^{\le k}\mathcal{F} \to Ri^!\mathcal{F} \to Ri^!\tau^{\ge k+1}\mathcal{F} \to.
\]
Since $Ri^!\tau^{\ge k+1}\mathcal{F} \in \dgpg Y{k+1}$, we see that $\tau^{\le
k}Ri^!\mathcal{F} \cong \tau^{\le k}Ri^!\tau^{\le k}\mathcal{F}$. Next, consider the
distinguished triangle
\[
Ri^!\beta_{\le w-1}\tau^{\le k}\mathcal{F} \to Ri^!\tau^{\le k}\mathcal{F} \to Ri^!
\beta_{\ge w}\tau^{\le k}\mathcal{F} \to.
\]
By assumption, $\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgbg U{k+2}$, so $Ri^!\beta_{\le w-1}\tau^{\le k}\mathcal{F} \in \dgpg Y{k+2}$. It
follows that $\tau^{\le k}Ri^!\tau^{\le k}\mathcal{F} \cong
\tau^{\le k}Ri^!\beta_{\ge w}\tau^{\le k}\mathcal{F}$.
Now, $\beta_{\ge w}\tau^{\le k}\mathcal{F} \in \dsg Uw$, so there is some $\mathcal{F}_1
\in \dsg Xw$ such that $j^*\mathcal{F}_1 \cong \beta_{\ge w}\tau^{\le k}\mathcal{F}$.
Since $R\kappa^!\mathcal{F}_1$ belongs to $\dspg Yw$ by assumption, we have
$\beta_{\le w-1}\tau^{\le k}R\kappa^!\mathcal{F}_1 \in \dgbg Y{k+2}$. But we also
have $R\kappa^!\mathcal{F}_1 \cong Ri^!\beta_{\ge w}\tau^{\le k}\mathcal{F}$, and from the
chain of isomorphisms
\[
\tau^{\le k}Ri^!\mathcal{F} \cong \tau^{\le k}Ri^!\tau^{\le k}\mathcal{F} \cong
\tau^{\le k}Ri^!\beta_{\ge w}\tau^{\le k}\mathcal{F} \cong
\tau^{\le k}R\kappa^!\mathcal{F}_1,
\]
we see that $\beta_{\le w-1}\tau^{\le k}Ri^!\mathcal{F} \in \dgbg Y{k+2}$. Thus,
$Ri^!\mathcal{F} \in \dspg Yw$. We now conclude that any baric structure on $Y$
induced from the one on $X$ is also induced from the one on $U$, and is
therefore uniquely determined.
To show that the induced baric structure on a locally closed subscheme is
HLR, it suffices, by Theorem~\ref{thm:hlr}, to show that it is hereditary.
In the case of a closed subscheme, this was done in
Lemma~\ref{lem:closed-hered}, and in the case of an open subscheme, there
is nothing to prove: this property is part of the definition of ``local.''
The assertion then follows for a general locally closed subscheme, since,
by construction, the induced baric structure on such a subscheme is
obtained by first passing to an open subscheme, and then to a closed
subscheme of that.
\end{proof}
Next, we turn to nilpotent thickenings of a closed subscheme.
\begin{prop}\label{prop:rigid}
Suppose $X$ is endowed with an HLR baric structure, and let $Z
\overset{t}{\hookrightarrow} Z_1 \hookrightarrow X$ be a sequence of closed subschemes of $X$
with the same underlying topological space. Then:
\begin{enumerate}
\item For $\mathcal{F} \in \dgm {Z_1}$, $\mathcal{F} \in \dsml {Z_1}w$ if and only if
$Lt^*\mathcal{F} \in \dsml Zw$.
\item For $\mathcal{F} \in \dgp {Z_1}$, $\mathcal{F} \in \dspg {Z_1}w$ if and only if
$Rt^!\mathcal{F} \in \dspg Zw$.
\end{enumerate}
\end{prop}
\begin{proof}
If $\mathcal{F} \in \dsml {Z_1}w$, it is obvious that $Lt^*\mathcal{F} \in \dsml Zw$,
since the baric structure on $Z$ is induced from that on $Z_1$.
Conversely, suppose $\mathcal{F} \in \dgm {Z_1}$ and $Lt^*\mathcal{F} \in \dsml Zw$. Then
$\Hom(\mathcal{F}, t_*\mathcal{G}) \cong \Hom(Lt^*\mathcal{F},\mathcal{G}) = 0$ for all $\mathcal{G} \in \dsg
Z{w+1}$. But by the definition of ``rigid,'' $\dsg {Z_1}{w+1}$ is
generated by objects of the form $t_*\mathcal{G}$ with $\mathcal{G} \in \dsg Z{w+1}$, so it
follows that $\Hom(\mathcal{F},\mathcal{G}') = 0$ for all $\mathcal{G}' \in \dsg {Z_1}{w+1}$, and
hence that $\mathcal{F} \in \dsml {Z_1}w$. The proof of part~(2) is entirely
analogous and will be omitted.
\end{proof}
Finally, we prove a ``gluing theorem'' for HLR baric structures.
\begin{thm}
Suppose $X$ is endowed with an HLR baric structure. Let $i:Z \hookrightarrow X$ be a closed subscheme of $X$,
and let $j:U \hookrightarrow X$ be its open complement. Endow $U$ and $Z$ with the baric structures induced from that on $X$. Then we have
\begin{align*}
\dsl Xw &= \{ \mathcal{F} \in \dgb X \mid \text{$j^*\mathcal{F} \in \dsl Uw$ and $Li^*\mathcal{F}
\in \dsml Zw$} \}, \\
\dsg Xw &= \{ \mathcal{F} \in \dgb X \mid \text{$j^*\mathcal{F} \in \dsg Uw$ and $Ri^!\mathcal{F}
\in \dspg Zw$} \}.
\end{align*}
In particular, there is a unique HLR baric structure on $X$ which induces
the baric structures $\schemebaric U$ and $\schemebaric Z$ on $U$ and $Z$.
\end{thm}
\begin{proof}
If $\mathcal{F} \in \dsl Xw$, then $j^*\mathcal{F} \in \dsl Uw$ and $Li^*\mathcal{F} \in \dsml Zw$ by the definition of the induced baric structure. For the other direction, suppose that $j^*
\mathcal{F} \in \dsl Uw$ and $Li^* \mathcal{F} \in \dsml Zw$. We will prove that $\mathcal{F} \in
\dsl Xw$ by showing $\Hom(\mathcal{F},\mathcal{G}) = 0$ for all $\mathcal{G} \in \dsg X{w+1}$.
Fix $\mathcal{G} \in \dsg X{w+1}$. We have an exact sequence
\[
\lim_{\substack{\to \\ {Z_1}}} \Hom(i_{Z_1*}Li_{Z_1}^*\mathcal{F},\mathcal{G}) \to
\Hom(\mathcal{F},\mathcal{G}) \to \Hom(j^*\mathcal{F},j^*\mathcal{G}),
\]
where the limit runs over nilpotent thickenings of $Z$.
(See, for instance,~\cite[Proposition~2 and Lemma~3(a)]{bez:pcs} for an
explanation of this exact sequence.) We have $j^* \mathcal{F} \in \dsl Uw$ and
$j^*\mathcal{G} \in \dsg U{w+1}$, and by Lemma~\ref{lem:hlr-baric-res}, we have
$i_{Z_1 *} Li_{Z_1}^* \mathcal{F} \in \dsml Xw$, so the first and third terms
vanish. We conclude that $\Hom(\mathcal{F},\mathcal{G})$ also vanishes. The argument for
$\dsg Xw$ is similar.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:hlr}}
\label{sect:hlrproof}
In this section, we will prove that hereditary baric structures are
automatically also local and rigid. We begin with a result about baric
truncation functors with respect to a hereditary baric structure. If $X$
is endowed with a hereditary baric structure, and $\mathcal{F} \in \dgb X$ is
actually supported on some closed subscheme $i: Z \hookrightarrow X$, then the baric
truncations of $\mathcal{F}$ are obtained by taking baric truncations in the
induced baric structure on $Z$, and then pushing them forward by $i_*$. In
other words, hereditary baric structures have the property that baric
truncation functors preserve support. More precisely:
\begin{prop}
\label{prop:settheorysupport}
Let $\schemebaric X$ be a hereditary baric structure on $X$. Then
\begin{enumerate}
\item If $\mathcal{F} \in \dgb X$ has set-theoretic support on a closed set $Z
\subset X$, then so do $\beta_{\leq w} \mathcal{F}$ and $\beta_{\geq w} \mathcal{F}$.
\item If a morphism $u:\mathcal{F} \to \mathcal{G}$ in $\dgb X$ has set-theoretic support
on $Z$, in the sense that $u \vert_{X \smallsetminus Z} = 0$, then so do $\beta_{\leq
w}(u)$ and $\beta_{\geq w}(u)$.
\end{enumerate}
\end{prop}
\begin{proof}
If $\mathcal{F}$ is set-theoretically supported on $Z$ then there is a closed subscheme
$i:Z_1 \hookrightarrow X$ whose underlying closed set is $Z$, such
that $\mathcal{F} = i_* \mathcal{F}'$ for some $\mathcal{F}' \in \dgb {Z_1}$. Form the distinguished triangle
\[
\beta_{\leq w} \mathcal{F}' \to \mathcal{F}' \to \beta_{\geq w+1} \mathcal{F}' \to.
\]
By Lemma~\ref{lem:induced}, we have that $i_* \beta_{\leq w}\mathcal{F}' \in
\dsl Xw$ and $i_* \beta_{\geq w+1}\mathcal{F}' \in \dsg X{w+1}$. Since we have a
distinguished triangle
\[
i_* \beta_{\leq w} \mathcal{F}' \to \mathcal{F} \to i_* \beta_{\geq w+1} \mathcal{F}'\to,
\]
we must have $i_* \beta_{\leq w} \mathcal{F}' \cong \beta_{\leq w} \mathcal{F}$ and $i_*
\beta_{\geq w+1} \mathcal{F}' \cong \beta_{\geq w+1} \mathcal{F}$. In particular, these objects
are set-theoretically supported on $Z$, proving the first assertion.
To prove the second assertion, consider the exact sequence
\[
\lim_{\substack{\to \\ Z'}} \Hom(\mathcal{F}, i_{Z'*}Ri^!_{Z'}\mathcal{G}) \to
\Hom(\mathcal{F},\mathcal{G}) \to \Hom(\mathcal{F}|_{X\smallsetminus Z},\mathcal{G}|_{X\smallsetminus Z}),
\]
where $i_{Z'}: Z' \hookrightarrow X$ ranges over all closed subscheme structures on
$Z$. By assumption, $u \in \Hom(\mathcal{F},\mathcal{G})$ vanishes upon restriction to $X
\smallsetminus Z$, so we see from the exact sequence above that it must factor
through $i_{Z'*}Ri^!_{Z'}\mathcal{G} \to \mathcal{G}$ for some closed subscheme structure
$i_{Z'}: Z' \hookrightarrow X$ on $Z$. Now, $i_{Z'*}Ri^!_{Z'}\mathcal{G}$ is in general an
object of $\dgp X$, but since $\mathcal{F}$ lies in $\dgb X$, any morphism $\mathcal{F} \to
i_{Z'*}Ri^!_{Z'}\mathcal{G}$ factors through $\tau^{\le n}i_{Z'*}Ri^!_{Z'}\mathcal{G}$ for
sufficiently large $n$. It follows that $\beta_{\le w}(u)$ and $\beta_{\ge
w}(u)$ factor through $\beta_{\le w}\tau^{\le n}i_{Z'*}Ri^!_{Z'}\mathcal{G}$ and
$\beta_{\ge w}\tau^{\le n}i_{Z'*}Ri^!_{Z'}\mathcal{G}$, respectively. These objects
have set-theoretic support on $Z$ by the first part of the proposition, so
$\beta_{\le w}(u)$ and $\beta_{\ge w}(u)$ have set-theoretic support on $Z$
as well, as desired.
\end{proof}
We may use this fact to prove the following:
\begin{thm}\label{thm:openlocal}
Every hereditary baric structure is local.
\end{thm}
We will prove this theorem over the course of the following three
propositions. Recall from Lemma~\ref{lem:induced} that in a local baric
structure, the induced baric structures on open subschemes necessarily have
the form given in the proposition below.
\begin{prop}\label{prop:hered-open}
Let $\schemebaric X$ be a hereditary baric structure on $X$, and let $U$ be
an open subscheme of $X$. For any $w \in \mathbb{Z}$, define full subcategories of
$\dgb U$ as follows:
\begin{align*}
\dsl Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some
$\mathcal{F}_1 \in \dsl Xw$} \}, \\
\dsg Uw &= \{ \mathcal{F} \in \dgb U \mid \text{$\mathcal{F} \cong j^*\mathcal{F}_1$ for some
$\mathcal{F}_1 \in \dsg Xw$} \}.
\end{align*}
Then $\dsl Uw$ and $\dsg Uw$ are thick subcategories of $\dgb U$.
\end{prop}
\begin{proof}
Suppose that $\mathcal{F}$ and $\mathcal{G}$ belong to $\dsl Uw$, so that there exist
$\mathcal{F}_1$ and $\mathcal{G}_1$ in
$\dsl Xw$ with $\mathcal{F}_1 \vert_U \cong \mathcal{F}$ and $\mathcal{G}_1 \vert_U \cong \mathcal{G}$.
Since $\dgb U$ is a localization of $\dgb X$, we may find for every
morphism $u:\mathcal{F} \to \mathcal{G}$ an object $\mathcal{G}_2 \in \dgb X$
and a diagram
$\mathcal{F}_1 \to \mathcal{G}_2 \leftarrow \mathcal{G}_1$
such that $(\mathcal{G}_2 \leftarrow \mathcal{G}_1)\vert_U$ is an isomorphism, and the
composition
\[
\mathcal{F} \cong \mathcal{F}_1 \vert_U \to \mathcal{G}_2 \vert_U \cong \mathcal{G}_1 \vert_U \cong
\mathcal{G}
\]
coincides with $u$. We claim that the diagram
\[
\beta_{\leq w} \mathcal{F}_1 \to \beta_{\leq w} \mathcal{G}_2 \leftarrow \beta_{\leq w}
\mathcal{G}_1
\]
has the same property. In that case, the cone on the composition $\mathcal{F}_1
\cong \beta_{\leq w} \mathcal{F}_1 \to \beta_{\leq w} \mathcal{G}_2$ belongs to $\dsl Xw$,
which shows that the cone on $u:\mathcal{F} \to \mathcal{G}$ belongs to $\dsl Uw$. To
prove the claim, note that the cone on the map $\mathcal{G}_1 \to \mathcal{G}_2$ is
set-theoretically supported on the closed set $X \smallsetminus U$, and since
the baric structure $\schemebaric X$ is hereditary, the same must be true
for the cone on
$\beta_{\leq w} \mathcal{G}_1 \to \beta_{\leq w} \mathcal{G}_2$; in particular the
restriction of the latter map to $U$ is an isomorphism.
We have shown that $\dsl Uw \subset \dgb U$ is a triangulated subcategory. To show that it is thick we must show that it is also closed under direct summands -- \emph{i.e.} that if $\mathcal{F} \oplus \mathcal{G} \in \dsl Uw$ then $\mathcal{F}$ and $\mathcal{G}$ also belong to $\dsl Uw$. Thus suppose that $\mathcal{F} \oplus \mathcal{G}$ belongs to $\dsl Uw$. Since $\dgb U$ is a localization of $\dgb X$, we may find a triangle
$$\mathcal{F}_1 \to \mathcal{H} \to \mathcal{G}_1 \to$$
whose restriction to $U$ is isomorphic to the triangle
$$\mathcal{F} \to \mathcal{F} \oplus \mathcal{G} \to \mathcal{G} \to $$
In particular the map $\mathcal{G}_1 \to \mathcal{F}_1[1]$ is set-theoretically supported on $X \smallsetminus U$, so by Proposition~\ref{prop:settheorysupport} the same must be true of $\beta_{\leq w} \mathcal{G}_1 \to \beta_{\leq w} \mathcal{F}_1[1]$. From the diagram
$$
\xymatrix{
\beta_{\leq w} \mathcal{F}_1 \ar[r] \ar[d] & \mathcal{H} \ar[r] \ar[d] & \beta_{\leq w} \mathcal{G}_1 \ar[r] \ar[d]& \\
\mathcal{F}_1 \ar[r] \ar[d] & \mathcal{H} \ar[r] \ar[d] & \mathcal{G}_1 \ar[r] \ar[d]& \\
\beta_{\geq w+1} \mathcal{F}_1 \ar[r] & 0 \ar[r]& \beta_{\geq w+1} \mathcal{G}_1 \ar[r] & \\
}
$$
whose rows and columns are distinguished triangles, we see that $\beta_{\geq w+1} \mathcal{G}_1 \to \beta_{\geq w+1} \mathcal{F}_1[1]$ is an isomorphism. But since this morphism has set-theoretic support on $X \smallsetminus U$, the objects $\beta_{\geq w+1} \mathcal{F}_1$ and $\beta_{\geq w+1} \mathcal{G}_1$ must have set-theoretic support on $X \smallsetminus U$, which implies that there are isomorphisms $\beta_{\leq w} \mathcal{F}_1 \vert_U \cong \mathcal{F}$ and $\beta_{\leq w} \mathcal{G}_1 \vert_U \cong \mathcal{G}$. Thus $\mathcal{F}$ and $\mathcal{G}$ belong to $\dsl Uw$.
A similar proof shows that the subcategories $\dsg Uw$ are thick.
\end{proof}
\begin{prop}
Let $\schemebaric X$ be a hereditary baric structure on $X$, let $U$ be an
open subscheme of $X$, and
let $\schemebaric U$ be as in Proposition~\ref{prop:hered-open}. Then
$\schemebaric U$ is a baric structure on $\dgb U$, compatible with the
standard $t$-structure.
\end{prop}
\begin{proof}
It is clear that $\dsl Uw \subset \dsl U{w+1}$ and $\dsg Uw \supset \dsg
U{w+1}$ and that $\dgb U = \dsl Uw * \dsg U{w+1}$. If $\mathcal{F} \in \dsl Uw$
and $\mathcal{G} \in \dsg U{w+1}$, then we have an exact sequence
\[
\Hom(\mathcal{F}_1,\mathcal{G}_1) \to \Hom(\mathcal{F},\mathcal{G}) \to
\varinjlim_{i:Z \hookrightarrow X} \Hom(i_*Li^* \mathcal{F}_1, \mathcal{G}_1[1]) \to
\]
where $\mathcal{F}_1$ is an extension of $\mathcal{F}$ to $\dsl Xw$, $\mathcal{G}_1$ is an
extension of $\mathcal{G}$ to $\dsg X{w+1}$, and $i:Z \hookrightarrow X$ runs over
all subscheme structures on $X \smallsetminus U$. The first term above
vanishes automatically, and each of the terms $\Hom(i_* Li^*
\mathcal{F}_1,\mathcal{G}_1[1])$ vanishes because, by Lemma~\ref{lem:hlr-baric-res},
$i_*Li^*\mathcal{F}_1 \in \dsml Xw$. Thus, $\Hom(\mathcal{F},\mathcal{G}) = 0$ and
$\schemebaric U$ is a baric structure on $\dgb U$.
By assumption the baric structure $\schemebaric X$ is compatible with the standard $t$-structure on $\dgb X$. Thus if $\mathcal{F}_1$ belongs to $\dsl Xw$ then so
do $\tau^{\leq n} \mathcal{F}_1$ and $\tau^{\geq n} \mathcal{F}_1$. The objects $\mathcal{F}_1 \vert_U$,
$(\tau^{\leq n} \mathcal{F}_1) \vert_U \cong \tau^{\leq n} (\mathcal{F}_1 \vert_U)$ and
$(\tau^{\geq n} \mathcal{F}_1)\vert_U \cong \tau^{\geq n} (\mathcal{F}_1 \vert_U)$ therefore all
belong to $\dsl Uw$. Similarly, we have $(\beta_{\leq w} \mathcal{F}_1)\vert_U
\cong \beta_{\leq w} (\mathcal{F}_1\vert_U)$ and $(\beta_{\geq w} \mathcal{F}_1)\vert_U
\cong \beta_{\geq w} (\mathcal{F}_1 \vert_U)$ so that the baric truncation
functors preserve $\dgb U^{\geq 0}$. Thus the baric structure
$\schemebaric U$ is compatible with the standard $t$-structure on $\dgb U$.
\end{proof}
\begin{prop}
Let $\schemebaric X$ be a hereditary baric structure on $X$, and let $U$ be
an open subscheme of $X$. Then the collection of categories $\schemebaric
U$ defined in Proposition~\ref{prop:hered-open} constitute a
hereditary baric structure on $U$.
\end{prop}
\begin{proof}
Using Lemma~\ref{lem:induced} and the previous proposition, we know that
the baric structure $\schemebaric U$ is induced from the one on $X$. It
remains only to show that this baric structure is hereditary. Let $i:Y
\hookrightarrow U$ be a closed subscheme of $U$. By
Lemma~\ref{lem:induced}, we must prove that the following categories
constitute a baric structure on $Y$:
\begin{align*}
\dsl Yw &= \{ \mathcal{F} \in \dgb Y \mid
\text{$i_*\mathcal{F} \cong \mathcal{F}_1|_U$ for some $\mathcal{F}_1 \in \dsl Xw$} \}, \\
\dsg Yw &= \{ \mathcal{F} \in \dgb Y \mid
\text{$i_*\mathcal{F} \cong \mathcal{F}_1|_U$ for some $\mathcal{F}_1 \in \dsg Xw$} \}.
\end{align*}
Let $\overline{Y}$ be the closure of $Y$ in $X$, and let $i_1: \overline Y
\hookrightarrow X$ be the inclusion map, so that we have a commutative square of
inclusions
\[
\xymatrix@=12pt{
Y \ar@{^{(}->}[r]^i \ar@{^{(}->}[d] & U \ar@{^{(}->}[d] \\
\overline{Y} \ar@{^{(}->}[r]^{i_1} & X}
\]
By definition, the hereditary baric structure on $X$ induces a baric
structure on $\overline{Y}$. This baric structure is itself hereditary,
by Lemma~\ref{lem:closed-hered}. Thus,
by the previous proposition, the baric structure on $\overline Y$ induces
one on its open subscheme $Y$. This is
given by
\begin{align*}
(\dsl Yw)' &= \{ \mathcal{F} \mid
\text{$\mathcal{F} \cong \mathcal{F}_2|_Y$ for some $\mathcal{F}_2 \in \dgb {\overline Y}$ with
$i_{1*}\mathcal{F}_2 \in \dsl Xw$} \}, \\
(\dsg Yw)' &= \{ \mathcal{F} \mid
\text{$\mathcal{F} \cong \mathcal{F}_2|_Y$ for some $\mathcal{F}_2 \in \dgb {\overline Y}$ with
$i_{1*}\mathcal{F}_2 \in \dsg Xw$} \}.
\end{align*}
It suffices now to show that $\dsl Yw = (\dsl Yw)'$ and $\dsg Yw = (\dsg
Yw)'$. If $\mathcal{F} \in \dgb Y$ is such that we may find $\mathcal{F}_2 \in
\dgb{\overline{Y}}$ with $\mathcal{F}_2 \vert_Y \cong \mathcal{F}$ and $i_{1*} \mathcal{F}_2 \in \dsl
Xw$ then $\mathcal{F}_1 := i_{1*} \mathcal{F}_2$ has the property that $\mathcal{F}_1 \vert_U \cong
i_* \mathcal{F}$. Thus, $(\dsl Yw)' \subset \dsl Yw$. To show the reverse
inclusion, let $\mathcal{F} \in \dgb Y$ and $\mathcal{F}_1 \in \dsl Xw$ be such that $\mathcal{F}_1
\vert_U \cong i_* \mathcal{F}$, and let $\mathcal{F}_2' \in \dgb {\overline{Y}}$ be such
that there exists a map $i_{1*} \mathcal{F}_2' \to \mathcal{F}_1$ which is an isomorphism
over $U$. Then $i_{1*} \beta_{\leq w} \mathcal{F}_2' \to \mathcal{F}_1$ is also an
isomorphism over $
}}
\psfrag{mb1}[c][c][0.8]{\color{myblue}{$\phi_{b_1}$}}
\psfrag{mab11}[c][c][0.8]{\color{myblue}{\hspace{1mm}{$\phi_{\Psi_{1,1} \to b_1}$}}}
\psfrag{mbaJ1}[c][c][0.8]{\color{myblue}{\hspace{-1mm}{$\phi_{\Psi_{1,J} \to a_1}$}}}
\psfrag{q1}[c][c][0.8]{\hspace{.4mm}\raisebox{0mm}{$q_1$}}
\psfrag{qI}[c][c][0.8]{\hspace{.4mm}\raisebox{-2mm}{$q_I$}}
\psfrag{v1}[c][c][0.8]{\raisebox{-2.3mm}{\hspace{.5mm}\hspace{.2mm}\raisebox{-.5mm}{$v_1$}}}
\psfrag{vJ}[c][c][0.8]{\raisebox{-1.8mm}{\hspace{.5mm}\hspace{-.5mm}\raisebox{-1.2mm}{$v_J$}}}
\psfrag{yu1}[c][c][0.8]{\hspace{.6mm}\raisebox{-2.3mm}{$\underline{\V{y}}{}_1$}}
\psfrag{yuI}[c][c][0.8]{\hspace{.5mm}\raisebox{-2.3mm}{$\underline{\V{y}}{}_I$}}
\psfrag{yo1}[c][c][0.8]{\hspace{.65mm}$\overline{\V{y}}{}_1$}
\psfrag{yoJ}[c][c][0.8]{\hspace{.65mm}$\overline{\V{y}}{}_J$}
\psfrag{f1}[c][c][0.8]{$f_1$}
\psfrag{fI}[c][c][0.8]{\hspace{.3mm}$f_I$}
\psfrag{L2}[c][c][1]{\hspace{.65mm}(a)}
\hspace{-10mm}\includegraphics[scale=0.8]{Figs/factorGraph6.eps} \label{fig:fg}}
\hspace{13mm}
\subfloat{\raisebox{9.65mm}{
\psfrag{da1}[c][c][0.8]{\hspace{1mm}\raisebox{-2.5mm}{$\V{h}_{a_{1}}$}}
\psfrag{dan}[c][c][0.8]{\hspace{1mm}$\V{h}_{a_{I}}$}
\psfrag{db1}[c][c][0.8]{\hspace{.8mm}\raisebox{-2.5mm}{$\V{h}_{b_{1}}$}}
\psfrag{dbm}[c][c][0.8]{\hspace{1mm}\raisebox{-2.5mm}{$\V{h}_{b_{J}}$}}
\psfrag{mab11}[c][c][0.8]{\hspace{-.5mm}\raisebox{-3.5mm}{\textcolor{red}{$\V{m}_{a_1 \to b_1}$}}}
\psfrag{L1}[c][c][1]{\raisebox{-21.5mm}{\hspace{.65mm}(b)}}
\includegraphics[scale=0.8]{Figs/GNN3.eps}} \label{fig:gnn}}
\captionsetup{singlelinecheck = false, justification=justified}
\vspace*{1mm}
\caption{Factor graph for \ac{mot} (a) and corresponding \acr{gnn} (b) for a single time frame $k$. In (a), \ac{bp} messages that correspond to \ac{da} are shown in blue; these messages are enhanced by the proposed \ac{nebp} approach. (b) shows the corresponding \ac{gnn} messages. The time index $k$ is omitted.}
\label{fig:fg_gnn}
\vspace{-1mm}
\end{figure*}
\section{Review of BP-based Multi-Object Tracking}
The proposed \ac{nebp} approach is based on \ac{bp}-based \ac{mot} introduced in \cite{MeyKroWilLauHlaBraWin:J18}. The statistical model used by \ac{bp}-based \ac{mot} is reviewed next.
\subsection{Object States}
At each time frame $k$, an object detector $g_{\text{det}}(\cdot)$ extracts $J_k$ measurements $\V{z}_{k} \triangleq [\V{z}_{k, 1}^\mathrm{T} \cdots \V{z}_{k, J_k}^\mathrm{T}]^\mathrm{T}$ from raw sensor data $\Set{Z}_k$, i.e., $\V{z}_{k} = g_{\text{det}}(\Set{Z}_k)$. All measurements extracted up to time frame $k$ are denoted as $\V{z}_{1 : k} \triangleq [\V{z}_{1}^\mathrm{T} \cdots \V{z}_{k}^\mathrm{T}]^\mathrm{T}\hspace*{-.3mm}$.
Since the number of objects is unknown, \acp{po} states are introduced. The number of \acp{po} states $N_k$ is the maximum possible number of objects that have generated a measurement up to time frame $k$. At time frame $k$, the existence of a \ac{po} $n \in \{1, \cdots, N_k\}$ is modeled by a binary random existence variable $r_{k, n} \in \{0, 1\}$, i.e., \ac{po} $n$ exists if and only if $r_{k, n} = 1$. The state of \ac{po} $n$ is modeled by the random vector $\V{x}_{k, n}$. The augmented \ac{po} state\vspace{-.5mm} vector is denoted by $\V{y}_{k, n} \triangleq [\V{x}_{k, n}^\mathrm{T} \hspace{1mm} r_{k, n}]^\mathrm{T}$ and the joint \ac{po} state vector\vspace{-.3mm} by $\V{y}_{k} \triangleq [\V{y}_{k, 1}^\mathrm{T} \cdots \V{y}_{k, N_k}^\mathrm{T}]^\mathrm{T}\hspace*{-.3mm}\rmv$. There are two types of \acp{po}:
\begin{itemize}[leftmargin=*]
\item \textit{New \acp{po}} denoted by $\overline{\V{y}}{}_{k, j} = [\overline{\V{x}}{}_{k, j}^\mathrm{T} \hspace{1mm} \overline{r}{}_{k, j}]^\mathrm{T}\hspace*{-.3mm}$, $j \in \{1, \cdots, J_k\}$ represent objects that at time frame $k$ generated a measurement for the first time. Each measurement $\V{z}_{k, j}, j \hspace*{-.3mm}\in\hspace*{-.3mm} \{1, \cdots, J_k\}$ introduces a new \ac{po} $j$ with state $\overline{\V{y}}{}_{k, j}$.
\item \textit{Legacy \acp{po}} denoted by $\underline{\V{y}}{}_{k, i} = [\underline{\V{x}}{}_{k, i}^\mathrm{T} \hspace{1mm} \underline{r}{}_{k, i}]^\mathrm{T}\hspace*{-.3mm}$, $i \in \{1, \cdots, I_k\}$ represent objects that have generated a measurement for the first time at a previous time frame $k^\prime < k$.
\end{itemize}
New \acp{po} become legacy \acp{po} when the measurements of the next time frame are considered. Thus, the number of legacy \acp{po} at time frame $k$ is $I_k \hspace*{-.3mm}=\hspace*{-.3mm} I_{k - 1} \hspace*{-.3mm}+\hspace*{-.3mm} J_{k - 1} \hspace*{-.3mm}=\hspace*{-.3mm} N_{k - 1}$ and the total number of \acp{po} is $N_k \hspace*{-.3mm}=\hspace*{-.3mm} I_k \hspace*{-.3mm}+\hspace*{-.3mm} J_k$. We further denote the joint new \ac{po} state by $\overline{\V{y}}{}_{k} \hspace*{-.3mm}\triangleq\hspace*{-.3mm} [\overline{\V{y}}{}_{k, 1}^\mathrm{T} \cdots \overline{\V{y}}{}_{k, J_k}^\mathrm{T} ]^\mathrm{T}$ and the joint legacy \ac{po} state by $\underline{\V{y}}{}_{k} \hspace*{-.3mm}\triangleq\hspace*{-.3mm} [\underline{\V{y}}{}_{k, 1}^\mathrm{T} \cdots \underline{\V{y}}{}_{k, I_k}^\mathrm{T} ]^\mathrm{T}\hspace*{-.3mm}\rmv$, i.e., $\V{y}_k = [\underline{\V{y}}{}_{k}^\mathrm{T} \hspace{1mm} \overline{\V{y}}{}_{k}^\mathrm{T}]^\mathrm{T}\hspace*{-.3mm}$.
\vspace{.8mm}
\subsection{Measurement Model}
The origin of measurements $\V{z}_{k, j}$, $j \in \{1, \cdots, J_k\}$ is unknown. A measurement can originate from a \ac{po} or can be a false alarm. Furthermore, a \ac{po} may also not generate any measurement (missed detection). Under the assumption that a \ac{po} can generate at most one measurement and a measurement originates from at most one \ac{po}, we model data association uncertainty as follows \cite{MeyKroWilLauHlaBraWin:J18}. The \ac{po}-measurement association at time frame $k$ can be described by an ``object-oriented'' \ac{da} vector $\V{a}_k \hspace*{-.3mm}=\hspace*{-.3mm} [a_{k, 1} \cdots a_{k, I_k}]^\mathrm{T}$. Here, the association variable $a_{k, i} \hspace*{-.3mm}=\hspace*{-.3mm} j \in \{1, \cdots, J_k\}$ indicates that legacy \ac{po} $i$ generated measurement $j$, and $a_{k, i} = 0$ indicates that legacy \ac{po} $i$ did not generate any measurement at time $k$. Following \cite{WilLau:J14}, we also introduce the ``measurement-oriented'' \ac{da} vector $\V{b}_k = [b_{k, 1} \cdots b_{k, J_k}]^\mathrm{T}$ with $b_{k, j} \hspace*{-.3mm}=\hspace*{-.3mm} i \in \{1, \cdots, I_k\}$ if measurement $j$ was generated by legacy \ac{po} $i$, or $b_{k, j} = 0$ if measurement $j$ was not generated by any legacy \ac{po}. Note that there is a one-to-one mapping between $\V{a}_k$ and $\V{b}_{k}$. Introducing $\V{b}_{k}$ in addition to $\V{a}_k$ makes it possible to develop scalable \ac{mot} \cite{MeyKroWilLauHlaBraWin:J18}. Finally,\vspace{0mm} we establish the notation $\V{a}_{1 : k} \triangleq [\V{a}_1^\mathrm{T} \cdots \V{a}_{k}^\mathrm{T}]^\mathrm{T}$ and $\V{b}_{1 : k} \triangleq [\V{b}_1^\mathrm{T} \cdots \V{b}_{k}^\mathrm{T}]^\mathrm{T}\hspace*{-.3mm}$.
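The one-to-one mapping between $\V{a}_k$ and $\V{b}_k$ described above can be sketched in a few lines (the function names are ours, purely for illustration; entry $0$ means ``no association'' in both directions):

```python
def a_to_b(a, J):
    """Convert an object-oriented DA vector a (a[i-1] = j means legacy PO i
    generated measurement j; 0 means no measurement) into the equivalent
    measurement-oriented DA vector b of length J."""
    b = [0] * J
    for i, j in enumerate(a, start=1):
        if j != 0:
            b[j - 1] = i  # measurement j was generated by legacy PO i
    return b

def b_to_a(b, I):
    """Inverse mapping: recover a from b."""
    a = [0] * I
    for j, i in enumerate(b, start=1):
        if i != 0:
            a[i - 1] = j
    return a

# Example: three legacy POs, four measurements; PO 1 -> measurement 2,
# PO 3 -> measurement 4, PO 2 missed; measurements 1 and 3 are false alarms.
a = [2, 0, 4]
b = a_to_b(a, J=4)  # -> [0, 1, 0, 3]
```

Because the mapping is a bijection, `b_to_a(a_to_b(a, J), I)` recovers the original vector.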
If legacy \ac{po} $i$ exists, it generates a measurement (i.e. $a_{k, i} \hspace*{-.3mm}=\hspace*{-.3mm} j \in \{1, \cdots, J_k\}$) with probability $p_d$. Furthermore, it also exists at the next time step $k\hspace*{-.3mm}+\rmv1$ with probability $p_s$. The number of false alarms is modeled by a Poisson distribution with mean $\mu_{\mathrm{fa}}$ and false alarm measurements are independent and identically distributed according to $f_{\mathrm{fa}} (\V{z}_{k, j})$. Before the measurements $\{\V{z}_{k, j}\}_{j = 1}^{J_k}$ are observed, the number of new \acp{po} is unknown. The number of newly detected objects is Poisson distributed with mean $\mu_{\text{n}}$, while the states of newly detected objects are a priori independent and identically distributed according to $f_{\text{n}}(\overline{\V{x}}{}_{k, j})$. Following the assumptions presented in \cite[Sec. VIII-A]{MeyKroWilLauHlaBraWin:J18}, the joint posterior \ac{pdf} $f(\V{y}_{1 : k}, \V{a}_{1 : k}, \V{b}_{1 : k} | \V{z}_{1 : k})$ can be derived \cite[Sec. VIII-G]{MeyKroWilLauHlaBraWin:J18}. The factorization of this joint posterior pdf is visualized by the factor graph shown in Fig.~\ref{fig:fg}. Note that legacy POs are connected to object-oriented association variables and new POs are connected to measurement-oriented association variables.
\subsection{Object Declaration and State Estimation}
In the Bayesian setting, declaration of object existence and object state estimation are based on the marginal existence probabilities $p(r_{k, n} \hspace*{-.3mm}=\hspace*{-.3mm} 1 | \V{z}_{1 : k})$ and the conditional \acp{pdf} $f(\V{x}_{k, n} | r_{k, n} \hspace*{-.3mm}=\hspace*{-.3mm} 1, \V{z}_{1 : k})$. In particular, declaration of object existence is performed by comparing $p(r_{k, n} \hspace*{-.3mm}=\hspace*{-.3mm} 1 | \V{z}_{1 : k})$ to a threshold $T_{\mathrm{dec}}$. In addition, for objects $n$ that are declared to exist, an estimate of $\V{x}_{k, n}$ is provided by the \ac{mmse}\vspace{.9mm} estimator
\begin{equation}
\hat{\V{x}}{}_{k, n}^{\text{MMSE}} \hspace*{-.3mm}=\hspace*{-.3mm} \int \V{x}_{k, n} f(\V{x}_{k, n} \hspace*{.3mm}|\hspace*{.3mm} r_{k, n} \hspace*{-.3mm}=\hspace*{-.3mm} 1, \V{z}_{1 : k}) \hspace*{.3mm} \mathrm{d}\V{x}_{k, n}. \nonumber
\vspace{.9mm}
\end{equation}
Note that declaration of object existence is based on $p(r_{k, n} \hspace*{-.3mm}= 1 \hspace*{.3mm} | \hspace*{.3mm} \V{z}_{1 : k}) \hspace*{-.3mm}=\hspace*{-.3mm} \int f(\V{x}_{k, n}, r_{k, n} \hspace*{-.3mm}=\hspace*{-.3mm} 1 \hspace*{.3mm} | \hspace*{.3mm} \V{z}_{1 : k}) \mathrm{d}\V{x}_{k, n}\vspace{1mm}$ and object state estimation relies\vspace{.8mm} on
\begin{equation}
f(\V{x}_{k, n} \hspace*{.3mm} | \hspace*{.3mm} r_{k, n} \hspace*{-.3mm}=\hspace*{-.3mm} 1, \V{z}_{1 : k}) \hspace*{-.3mm}=\hspace*{-.3mm} \frac{f(\V{y}_{k, n} \hspace*{.3mm} | \hspace*{.3mm} \V{z}_{1 : k})}{p(r_{k, n} = 1 \hspace*{.3mm} | \hspace*{.3mm} \V{z}_{1 : k})}. \nonumber
\vspace{.8mm}
\end{equation}
Thus, both tasks rely on the calculation of marginal posterior \acp{pdf} $f(\V{y}_{k, n} \hspace*{.3mm} | \hspace*{.3mm} \V{z}_{1 : k}) \hspace*{-.3mm}\triangleq\hspace*{-.3mm} f(\V{x}_{k, n}, r_{k, n} \hspace*{.3mm} | \hspace*{.3mm} \V{z}_{1 : k})\vspace{0mm}$. By applying \ac{bp} following \cite[Sec. VIII-IX]{MeyKroWilLauHlaBraWin:J18}, accurate approximations (a.k.a. ``beliefs'') $\tilde{f}(\V{y}_{k, n}) \hspace*{-.3mm}\approx\hspace*{-.3mm} f(\V{y}_{k, n} | \V{z}_{1 : k})\vspace{-.5mm}$ of marginal posterior \acp{pdf} can be calculated efficiently.
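The declaration and estimation steps above can be sketched under the assumption that the belief $\tilde{f}(\V{y}_{k,n})$ is available in particle form (a common implementation choice; the representation, function name, and numerical values below are our assumptions, not taken from the cited work):

```python
import numpy as np

def declare_and_estimate(particles, weights, p_exist, T_dec=0.5):
    """particles: (P, d) state samples of f(x | r = 1, z);
    weights: (P,) normalized weights; p_exist: approximate P(r = 1 | z).
    Returns the MMSE estimate if the object is declared to exist."""
    if p_exist <= T_dec:
        return None  # existence probability below threshold: not declared
    return weights @ particles  # MMSE estimate: weighted particle mean

# Illustrative belief: particles concentrated around the state [1.0, -2.0].
rng = np.random.default_rng(0)
parts = rng.normal(loc=[1.0, -2.0], scale=0.1, size=(1000, 2))
w = np.full(1000, 1e-3)  # uniform normalized weights
est = declare_and_estimate(parts, w, p_exist=0.9)
```

With `p_exist` below the threshold, the same call returns `None`, i.e., no state estimate is produced.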
Note that since we introduce a new \ac{po} for each measurement, the number of POs would grow with time $k$. Therefore, legacy and new POs whose approximate existence probabilities are below a threshold $T_{\mathrm{pru}}$ are pruned, i.e., removed from the state\vspace{-.5mm} space.
\section{NEBP-based Multi-Object Tracking}
\vspace{-.3mm}
To further improve the performance of \ac{bp}-based \ac{mot}, we augment the factor graph in Fig.~\ref{fig:fg} by a \ac{gnn}. The \ac{gnn} uses features extracted from previous estimates, measurements, and raw sensor information as an input. Since we limit the following discussion to a single time frame, we will omit the time index $k$.
\subsection{Feature Extraction} \label{sec:feaExtraction}
First, we introduce the procedure of extracting features for legacy \acp{po} and measurements, which will be used in our \ac{nebp} algorithm later. We consider motion and shape features. The motion features for legacy \ac{po} $i$ and measurement $j$ are computed according to $\V{h}_{a_i, \text{motion}} = g_{\text{motion}, 1}(\underline{\hat{\V{x}}}{}_{i}^{-})$ and $\V{h}_{b_j, \text{motion}} = g_{\text{motion}, 2}(\V{z}_{j})$, respectively. Here, $g_{\text{motion}, 1}(\cdot)$ as well as $g_{\text{motion}, 2}(\cdot)$ are neural networks and
$\underline{\hat{\V{x}}}{}_{i}^{-}$ is the \ac{mmse} estimate of the state of legacy \ac{po} $i$ at the previous time frame. Similarly, the shape features, denoted by $\V{h}_{a_i, \text{shape}}$ and $\V{h}_{b_j, \text{shape}}$, are extracted from raw sensor data $\Set{Z}^{-}$ and $\Set{Z}$ at previous and current time, respectively, i.e., $\V{h}_{a_i, \text{shape}} = g_{\text{shape}, 1}(\Set{Z}^{-}\hspace*{-.3mm}\rmv, \underline{\hat{\V{x}}}{}_{i}^{-})$ and $\V{h}_{b_j, \text{shape}} = g_{\text{shape}, 2}(\Set{Z},\V{z}_{j})$. Here, $g_{\text{shape}, 1}(\cdot)$ as well as $g_{\text{shape}, 2}(\cdot)$ are again neural networks. We will discuss one particular instance of shape feature extraction in\vspace{0mm} Sec.~\ref{sec:exp_setup}.
\subsection{The Proposed Message Passing Algorithm}
For neural enhanced \ac{da}, we introduce a \ac{gnn} that matches the topology of the \ac{da} section of the factor graph in Fig.~\ref{fig:fg}. The resulting \ac{gnn} is shown in Fig.~\ref{fig:gnn}. In addition to the output of the detector, the \ac{gnn} also uses raw sensor information as an input. The goal is to use this additional information to reject false alarm measurements and obtain improved \ac{da} probabilities by enhancing \ac{bp} messages with the output of the \ac{gnn}.
\ac{nebp} for \ac{mot} consists of the following\vspace{.5mm} steps:
\subsubsection{Conventional \ac{bp}}
First, conventional \ac{bp}-based \ac{mot} is run until convergence. This results in the \ac{bp} messages $\phi_{a_i} = [\phi_{a_i}(0) \cdots \phi_{a_i}(J)]^\mathrm{T} \in \mathbb{R}^{J + 1}, \phi_{b_j} = [\phi_{b_j}(0) \cdots \phi_{b_j}(I)]^\mathrm{T} \in \mathbb{R}^{I + 1}, \phi_{\Psi_{i, j} \to b_j} \in \mathbb{R}$, and $\phi_{\Psi_{i, j} \to a_i} \in \mathbb{R}$ (cf.~\cite[Sec.~IX-A1--IX-A3]{MeyKroWilLauHlaBraWin:J18}).
\vspace{1.5mm}
\subsubsection{\ac{gnn} Messages}
Next, \ac{gnn} message passing is\vspace{.5mm} executed iteratively. In particular, at iteration $l \hspace*{-.3mm}\in\hspace*{-.3mm} \{1,\dots,L\}$ the following operations are performed:
\begin{align}
\V{m}_{a_i \to b_j}^{(l)} &= g_{\text{e}} \Big( \V{h}_{a_i}^{(l)}, \V{h}_{b_j}^{(l)}, \phi_{a_i}(j), \phi_{\Psi_{i, j} \to b_j} \Big) \label{eq:gnn_a_to_b} \\[1.2mm]
\V{m}_{b_j \to a_i}^{(l)} &= g_{\text{e}} \Big(\V{h}_{a_i}^{(l)}, \V{h}_{b_j}^{(l)}, \phi_{a_i}(j), \phi_{\Psi_{i, j} \to a_i} \Big) \nonumber\\[1.2mm]
\V{h}_{a_i}^{(l+1)} &= g_{\text{n}} \bigg(\V{h}_{a_i}^{(l)}, \sum_{j \in \mathcal{N}(i)} \V{m}_{b_j \to a_i}^{(l)}, \phi_{a_i}(0) \bigg) \nonumber\\[.8mm]
\V{h}_{b_j}^{(l+1)} &= g_{\text{n}} \bigg(\V{h}_{b_j}^{(l)}, \sum_{i \in \mathcal{N}(j)} \V{m}_{a_i \to b_j}^{(l)}, \phi_{b_j}(0) \bigg). \label{eq:gnn_b}
\end{align}
Here, $g_{\text{e}}(\cdot)$ is the edge neural network and $g_{\text{n}}(\cdot)$ is the node neural network. The edge neural network $g_{\text{e}}(\cdot)$ provides the messages passed along the edges $a_i \hspace*{-.3mm}\to\hspace*{-.3mm} b_j$ and $b_j \hspace*{-.3mm}\to\hspace*{-.3mm} a_i$.
The node embeddings are initialized as the concatenation of respective motion and shape features\vspace{-.3mm}, i.e., $\V{h}_{a_i}^{(1)} \hspace*{-.3mm}=\hspace*{-.3mm} [\V{h}_{a_i, \text{motion}}^\mathrm{T}$ $\V{h}_{a_i, \text{shape}}^\mathrm{T}]^\mathrm{T}$ and $\V{h}_{b_j}^{(1)} \hspace*{-.3mm}=\hspace*{-.3mm} [\V{h}_{b_j, \text{motion}}^\mathrm{T}$ $\V{h}_{b_j, \text{shape}}^\mathrm{T}]^\mathrm{T}\hspace*{-.3mm}\vspace{0mm}$. Finally\vspace{-.5mm}, for
each $j \in \{1, \cdots, J\}$, the correction\vspace{-1mm} factors $\beta_{j} = g_{\text{r}}(\V{h}_{b_j}^{(L)}) \in (0, 1]$ and $\gamma_{i}(j) = g_{\text{a}}\big(\V{m}_{b_j \to a_i}^{(L)}\big)$ $\in \mathbb{R}$ are computed based\vspace{-.5mm} on the two additional neural networks $g_{\text{r}}(\cdot)$ and $g_{\text{a}}(\cdot)$. As discussed next, these correction factors provided by the \ac{gnn} are used to implement false alarm rejection and object shape association, respectively\vspace{1mm}.
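A minimal numpy sketch of one iteration of \eqref{eq:gnn_a_to_b}--\eqref{eq:gnn_b}, with $g_{\text{e}}$ and $g_{\text{n}}$ replaced by untrained single-layer stand-ins and a fully connected neighborhood $\mathcal{N}(\cdot)$ (all dimensions, weight matrices, and message values are illustrative assumptions, not the trained networks of the proposed method):

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, D = 2, 3, 4                      # legacy POs, measurements, embedding dim
W_e = rng.normal(size=(D, 2 * D + 2))  # edge-network weights (stand-in)
W_n = rng.normal(size=(D, 2 * D + 1))  # node-network weights (stand-in)

def g_e(ha, hb, phi, psi):
    # edge network stand-in: linear map of [h_a, h_b, phi, psi] + tanh
    return np.tanh(W_e @ np.concatenate([ha, hb, [phi, psi]]))

def g_n(h, m_sum, phi0):
    # node network stand-in: linear map of [h, aggregated messages, phi(0)]
    return np.tanh(W_n @ np.concatenate([h, m_sum, [phi0]]))

h_a = rng.normal(size=(I, D))          # node embeddings h_{a_i}^{(l)}
h_b = rng.normal(size=(J, D))          # node embeddings h_{b_j}^{(l)}
phi_a = rng.uniform(size=(I, J + 1))   # converged BP messages phi_{a_i}
psi_ab = rng.uniform(size=(I, J))      # phi_{Psi_{i,j} -> b_j}
psi_ba = rng.uniform(size=(I, J))      # phi_{Psi_{i,j} -> a_i}
phi_b0 = rng.uniform(size=J)           # phi_{b_j}(0)

m_ab = np.array([[g_e(h_a[i], h_b[j], phi_a[i, j + 1], psi_ab[i, j])
                  for j in range(J)] for i in range(I)])
m_ba = np.array([[g_e(h_a[i], h_b[j], phi_a[i, j + 1], psi_ba[i, j])
                  for j in range(J)] for i in range(I)])
h_a_next = np.array([g_n(h_a[i], m_ba[i].sum(axis=0), phi_a[i, 0])
                     for i in range(I)])
h_b_next = np.array([g_n(h_b[j], m_ab[:, j].sum(axis=0), phi_b0[j])
                     for j in range(J)])
```

Running this for $L$ iterations and feeding $\V{h}_{b_j}^{(L)}$ and $\V{m}_{b_j \to a_i}^{(L)}$ into the readout networks yields the correction factors $\beta_j$ and $\gamma_i(j)$.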
\subsubsection{NEBP Messages}
After computing \eqref{eq:gnn_a_to_b}--\eqref{eq:gnn_b} for $L$ iterations, neural enhanced message passing is performed as\vspace{0mm} follows. First, neural enhanced versions $\tilde{\phi}_{a_i}$ of the messages $\phi_{a_i}$ are\vspace{0mm} obtained by computing\vspace{.5mm}
\begin{equation}
\tilde{\phi}_{a_i}(j) = \beta_{j} \bar{\phi}_{a_i}(j) + \text{ReLU}\big(\gamma_{i}(j) \big), \quad j \in \{1, \cdots, J\}
\label{eq:nebp_combine_a}
\vspace{.5mm}
\end{equation}
and setting $\tilde{\phi}_{a_i}(0) \hspace*{-.3mm}=\hspace*{-.3mm} \phi_{a_i}(0)$. Here, $\text{ReLU}(\cdot)$ is a rectified linear unit and $\bar{\phi}_{a_i}$ is a normalized\footnote{Multiplying BP messages by a constant factor does not alter the resulting beliefs \cite{KscFreLoe:01}.} version of $\phi_{a_i}$ (cf.~\cite[Sec.~IX-A2]{MeyKroWilLauHlaBraWin:J18}),\vspace{-1mm} i.e.,
\begin{equation}
\bar{\phi}_{a_i}(j) = \frac{\phi_{a_i}(j)}{\sum_{j^\prime = 0}^{J}\phi_{a_i}(j^\prime)}, \quad j \in \{1, \cdots, J\}. \nonumber
\vspace{0mm}
\end{equation}
Note that $\phi_{a_i}(j)$, $j \hspace*{-.3mm}\in\hspace*{-.3mm} \{1,\dots,J\}$ represents the likelihood that the legacy \ac{po} $i$ is associated to measurement $j$ \cite{MeyKroWilLauHlaBraWin:J18}. Consequently, the $\text{ReLU}\big(\gamma_{i}(j) \big) \hspace*{-.3mm}>\hspace*{-.3mm} 0$ term in \eqref{eq:nebp_combine_a} provided by the \ac{gnn} implements object shape association, i.e., the likelihood that the legacy \ac{po} $i$ is associated to the measurement $j$ is increased if the raw sensor data extracted for legacy POs resembles the raw sensor data extracted for measurements.
Next, neural enhanced versions $\tilde{\phi}_{b_j}$ of the messages $\phi_{b_j}$ are obtained by computing
\begin{equation}
\tilde{\phi}_{b_j}\hspace*{-.3mm}(0) = \beta_{j} \big(\phi_{b_j}(0) - 1\big) + 1
\label{eq:nebp_combine_b}
\vspace{.5mm}
\end{equation}
and setting $\tilde{\phi}_{b_j}(i) = \phi_{b_j}(i), i \in \{1, \cdots, I\}$. We recall that $\phi_{b_j}(0)$ is given by
\begin{align}
\phi_{b_j}\hspace*{-.3mm}(0) = \frac{\mu_{\mathrm{n}} }{\mu_{\mathrm{fa}} \hspace*{.3mm} f_{\mathrm{fa}}\big( \V{z}_{k, j} \big) } \int \! f_{\mathrm{n}}\big( \overline{\V{x}}{}_{k, j} \big)\hspace*{.3mm} f\big( \V{z}_{k, j} \big| \overline{\V{x}}{}_{k, j} \big) \hspace*{.3mm} \mathrm{d}\overline{\V{x}}{}_{k, j} + 1\hspace*{.3mm}. \nonumber\\[-4mm]
\nonumber
\end{align}
The scalar $\beta_{j} \hspace*{-.3mm}\rmv\in\hspace*{-.3mm}\rmv (0, 1]$ in \eqref{eq:nebp_combine_a} and \eqref{eq:nebp_combine_b} provided by the \ac{gnn} implements false alarm rejection. In particular, $\beta_{j} < 1$ corresponds to a local increase of the false alarm distribution, given by $\tilde{f}_{\mathrm{fa}}(\V{z}_{j}) = \frac{1}{\beta_{j}} f_{\mathrm{fa}}(\V{z}_{j})$. This local increase of the false alarm distribution makes it less likely that the measurement $\V{z}_{j}$ is associated to a legacy PO and reduces the existence probability of the new PO introduced for the measurement $\V{z}_{j}$\vspace{1mm}.
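The enhancement rules \eqref{eq:nebp_combine_a} and \eqref{eq:nebp_combine_b} can be sketched as follows (the helper name and the numerical values are illustrative assumptions):

```python
import numpy as np

def enhance_messages(phi_a, phi_b, beta, gamma):
    """phi_a: (I, J+1) BP messages, column 0 = 'no measurement';
    phi_b: (J, I+1); beta: (J,) in (0, 1]; gamma: (I, J) raw GNN outputs."""
    # normalized version of phi_a (constant factors do not alter beliefs)
    phi_a_bar = phi_a / phi_a.sum(axis=1, keepdims=True)
    phi_a_t = phi_a.astype(float).copy()
    # eq. for phi~_{a_i}(j), j >= 1; entry 0 is kept unchanged
    phi_a_t[:, 1:] = beta * phi_a_bar[:, 1:] + np.maximum(gamma, 0.0)  # ReLU
    phi_b_t = phi_b.astype(float).copy()
    # eq. for phi~_{b_j}(0): false alarm rejection; entries i >= 1 unchanged
    phi_b_t[:, 0] = beta * (phi_b[:, 0] - 1.0) + 1.0
    return phi_a_t, phi_b_t

phi_a = np.array([[1.0, 2.0, 1.0]])          # I = 1, J = 2
phi_b = np.array([[3.0, 1.0], [1.5, 0.5]])   # J = 2, I = 1
beta = np.array([0.5, 1.0])                  # measurement 1 is suppressed
gamma = np.array([[0.2, -1.0]])              # shape association logits
phi_a_t, phi_b_t = enhance_messages(phi_a, phi_b, beta, gamma)
```

Note how $\beta_1 = 0.5$ pulls $\tilde{\phi}_{b_1}(0)$ toward $1$ from above while leaving the other entries untouched, i.e., measurement $1$ is treated more like a false alarm.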
\subsubsection{Belief Calculation}
Finally, conventional \ac{bp}-based \ac{mot} is again run until convergence by\vspace{.3mm} replacing $\phi_{a_i}$ with its neural enhanced counterpart $\tilde{\phi}_{a_i}$. This results in the enhanced output messages $\tilde{\kappa}_{i} = [\tilde{\kappa}_{i}(0) \cdots \tilde{\kappa}_{i}(J)]^\mathrm{T} \in \mathbb{R}^{J + 1}$ and $\tilde{\iota}_{j} = [\tilde{\iota}_{j}(0) \cdots \tilde{\iota}_{j}(I)]^\mathrm{T} \in \mathbb{R}^{I + 1}$ (cf. Fig.~\ref{fig:fg_gnn}). After performing the normalization\vspace{-.5mm}
\begin{equation}
\tilde{\kappa}^\prime_{i}(j) = \frac{\tilde{\phi}_{a_i}(j)}{\phi_{a_i}(j)} \tilde{\kappa}_{i}(j), \quad j \in \{0, \cdots, J\} \nonumber
\vspace{.5mm}
\end{equation}
the resulting\vspace{.2mm} messages $\tilde{\kappa}_i^\prime$ are used for the calculation of legacy \ac{po} beliefs $\tilde{f}(\underline{\V{y}}{}_{i})$, $i \in \{1, \cdots, I\}$ (cf.~\cite[Sec.~IX-A4--IX-A6]{MeyKroWilLauHlaBraWin:J18}). Similarly, the enhanced messages $\tilde{\iota}_{j}$ are directly used for the calculation of new \ac{po} beliefs $\tilde{f}(\overline{\V{y}}{}_{j}), j \in \{1, \cdots, J\}$.
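The normalization above amounts to an entrywise rescaling of $\tilde{\kappa}_i$ by the ratio of enhanced to original messages, e.g. (illustrative values):

```python
import numpy as np

phi_a   = np.array([2.0, 4.0, 2.0])   # original BP messages, j = 0..J
phi_a_t = np.array([2.0, 3.0, 1.0])   # neural enhanced messages
kappa_t = np.array([1.0, 2.0, 4.0])   # enhanced output messages
kappa_p = (phi_a_t / phi_a) * kappa_t # entrywise rescaling
```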
\subsection{The Loss Function} \label{sec:Loss}
For supervised learning, it is assumed that ground truth object tracks are available in the training set. Ground truth object tracks consist of a sequence of object positions and object identities (IDs). During the training of the \ac{gnn}, the parameters of all neural networks are updated through back-propagation, which computes the gradient of the loss function.
The loss function has the form $\Set{L} = \Set{L}_{\mathrm{r}} + \Set{L}_{\mathrm{a}}$. Here, the two contributions $\Set{L}_{\mathrm{r}}$ and $\Set{L}_{\mathrm{a}}$ establish false alarm rejection and object shape association, respectively.
False alarm rejection introduces the binary cross-entropy loss\vspace{0mm}
\begin{equation}
\Set{L}_{\text{r}} = \frac{-1}{J} \sum_{j = 1}^{J} \Big[ \beta_j^{\text{gt}} \ln(\beta_{j}) + \epsilon \big(1 - \beta_j^{\text{gt}}\big) \ln(1 - \beta_{j}) \Big], \label{eq:loss_beta} \vspace{0mm}
\end{equation}
where $\beta_j^{\text{gt}} \hspace*{-.3mm}\in\hspace*{-.3mm} \{0,1\}$ is the pseudo ground truth label for each measurement and $\epsilon \hspace*{-.3mm}\in\hspace*{-.3mm} \mathbb{R}^{+}$ is a tuning parameter. $\beta_j^{\text{gt}}$ is $1$ if the distance between the measurement and any ground truth position is not larger than $T_{\mathrm{dist}}$ and $0$ otherwise.
The tuning parameter $\epsilon$ is motivated as follows. Since missing an object is typically more severe than producing a false alarm, object detectors often output many detections and produce more false alarm measurements than true measurements. The tuning parameter $\epsilon \hspace*{-.3mm}\in\hspace*{-.3mm} \mathbb{R}^{+}$ addresses this imbalance problem, which is well studied in the context of learning-based binary classification \cite{OksCamKalAkb:20}.
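A sketch of \eqref{eq:loss_beta}, assuming the GNN outputs $\beta_j$ and pseudo labels $\beta_j^{\text{gt}}$ are given as arrays (the function name and numerical values are ours):

```python
import numpy as np

def loss_r(beta, beta_gt, eps=0.1):
    """Binary cross-entropy with the false alarm term down-weighted by eps
    to counter class imbalance. beta: (J,) in (0, 1); beta_gt: (J,) in {0, 1}."""
    beta = np.clip(beta, 1e-12, 1 - 1e-12)  # numerical safety
    terms = beta_gt * np.log(beta) + eps * (1 - beta_gt) * np.log(1 - beta)
    return -terms.mean()

beta = np.array([0.9, 0.1, 0.8])       # GNN outputs for J = 3 measurements
beta_gt = np.array([1.0, 0.0, 1.0])    # pseudo labels: measurement 2 is a false alarm
L = loss_r(beta, beta_gt, eps=0.1)
```

Setting `eps=1.0` recovers the standard (unweighted) binary cross-entropy.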
Since $\tilde{\phi}_{a_i}(j)$ in \eqref{eq:nebp_combine_a} represents the likelihood that the legacy \ac{po} $i$ is associated to the measurement $j$, ideally $\text{ReLU}\big(\gamma_{i}(j) \big)$ is large if \ac{po} $i$ is associated to the measurement $j$, and is equal to zero if they are not associated. Thus, object shape association introduces the following binary cross-entropy\vspace{1.5mm} loss
\begin{align}
\Set{L}_{\mathrm{a}} = \frac{-1}{IJ} \sum_{i = 1}^{I} \sum_{j = 1}^{J} &\gamma^{\text{gt}}_{i}(j) \ln \big( \sigma(\gamma_{i}(j)) \big) \nonumber \\[0.5mm]
&+ \big( 1 - \gamma^{\text{gt}}_{i}(j) \big) \ln \big( 1 - \sigma(\gamma_{i}(j)) \big), \vspace{0mm} \label{eq:loss}\\[-2mm]
\nonumber
\end{align}
where $\sigma(x) = 1/(1 + e^{-x})$ is the sigmoid function and $\gamma^{\text{gt}}_{i} = [\gamma^{\text{gt}}_{i}(1) \cdots \gamma^{\text{gt}}_{i}(J)]^\mathrm{T} \in \{0,1\}^J$ is the pseudo ground truth association vector of legacy \ac{po} $i \hspace*{-.3mm}\in\hspace*{-.3mm}\{1,\dots,I\}$. In each pseudo ground truth association vector $\gamma^{\text{gt}}_{i}$, at most one element is equal to one and all the other elements are equal to zero.
Note that in \eqref{eq:loss}, we do not apply the ReLU to the $\gamma_{i}(j)$, since this would cause the gradients $\partial \Set{L}_{\mathrm{a}} / \partial \gamma_{i}(j)$ to be zero for negative values of $\gamma_{i}(j)$. It was observed that performing backpropagation by also making use of the gradients related to negative values of $\gamma_{i}(j)$ leads to more efficient training of the \ac{gnn}.
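The role of the sigmoid in \eqref{eq:loss} can be sketched as follows (pure Python; the function names are our own). The loss is evaluated on the raw logits $\gamma_i(j)$, so negative logits still produce nonzero gradients, unlike $\text{ReLU}(\gamma_i(j))$:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def shape_association_loss(gamma, gamma_gt):
    """Binary cross-entropy of Eq. (loss) over an I x J matrix of raw
    logits gamma_i(j); the ReLU is deliberately NOT applied here."""
    i_dim, j_dim = len(gamma), len(gamma[0])
    total = 0.0
    for i in range(i_dim):
        for j in range(j_dim):
            s = min(max(sigmoid(gamma[i][j]), 1e-12), 1.0 - 1e-12)
            total += gamma_gt[i][j] * math.log(s) \
                     + (1.0 - gamma_gt[i][j]) * math.log(1.0 - s)
    return -total / (i_dim * j_dim)
```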
At each time step, pseudo ground truth association vectors are constructed from measurements and ground truth object tracks based on the following\vspace{.5mm} rules:
\begin{itemize}[leftmargin=*]
\item \textit{Get Measurement IDs:} Compute the Euclidean distance between all ground truth positions and measurements and run the Hungarian algorithm \cite{BarWilTia:B11} to find the best association between ground truth positions and measurements. All measurements that have been associated with a ground truth position and have a distance to that ground truth position that is smaller than $T_{\mathrm{dist}}$ inherit the ID of the ground truth position. All other measurements do not have an ID. \vspace{1mm}
\item \textit{Update Legacy PO IDs:} Legacy POs inherit the ID from the previous time step. If a legacy PO with ID has a distance not larger than $T_{\mathrm{dist}}$ to a ground truth position with the same ID, it keeps its ID. Then, for a legacy PO $i \hspace*{-.3mm}\in\hspace*{-.3mm} \{1,\dots,I\}$ that has the same ID as measurement $j \hspace*{-.3mm}\in\hspace*{-.3mm} \{1,\dots,J\}$, the entry $\gamma^{\text{gt}}_{i}(j)$ is set to one. All other entries $\gamma^{\text{gt}}_{i}(j)$, $i \hspace*{-.3mm}\in\hspace*{-.3mm} \{1,\dots,I\}$, $j \hspace*{-.3mm}\in\hspace*{-.3mm} \{1,\dots,J\}$ are set to\vspace{1mm} zero.
\item \textit{Introduce New PO IDs:} For any measurement $j \hspace*{-.3mm}\in \{1,\dots,J\}$ with an ID that does not share its ID with a legacy object, the corresponding new PO inherits the ground truth ID from the measurement. All other new POs do not have an\vspace{-.5mm} ID.
\end{itemize}
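The ``Get Measurement IDs'' step above can be sketched as follows. The paper runs the Hungarian algorithm; the brute-force enumeration below is a stand-in that yields the same optimum for small problem sizes, and it assumes at least as many measurements as ground truth objects. All function names are our own.

```python
import itertools
import math

def assign_measurement_ids(gt_pos, gt_ids, meas_pos, t_dist):
    """Optimal assignment between ground truth positions and measurements
    based on Euclidean distance, gated at t_dist.

    Returns the inherited ground truth ID of each measurement, or None
    if the measurement stays unassociated.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best_cost, best_perm = float("inf"), None
    # enumerate every injective mapping of ground truths to measurements
    for perm in itertools.permutations(range(len(meas_pos)), len(gt_pos)):
        cost = sum(dist(gt_pos[i], meas_pos[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm

    meas_ids = [None] * len(meas_pos)
    if best_perm is not None:
        for i, j in enumerate(best_perm):
            if dist(gt_pos[i], meas_pos[j]) < t_dist:  # distance gate
                meas_ids[j] = gt_ids[i]
    return meas_ids
```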
\begin{table*}[!t]
\normalsize
\centering
\begin{tabular}{c|c|ccc}
Methods & Modalities & AMOTA $\uparrow$ & IDS $\downarrow$ & Frag $\downarrow$ \\ \hline
CenterPoint \cite{YinZhoKra:21} & LiDAR & 0.665 & 562 & 424 \\
Chiu et al. \cite{ChiLiAmbBoh:21} & LiDAR+Camera & 0.687 & - & - \\
Zaech et al. \cite{ZaeDaiLinDanVan:22} & LiDAR & 0.693 & 262 & 332 \\ \hline
BP & LiDAR & 0.699 & \textbf{87} & \textbf{215} \\
NEBP (proposed) & LiDAR & \textbf{0.708} & 133 & 231
\end{tabular}
\vspace{1mm}
\caption{Performance results on nuScenes validation set. ``-'' indicates that the metric is not reported. }
\label{tab:result}
\vspace{-2mm}
\end{table*}
\section{Experimental Results}
We present results in an urban autonomous driving scenario to validate our method. In particular, we use data provided by a LiDAR sensor mounted on the roof of an autonomous vehicle. This data is part of the nuScenes dataset \cite{CaeBanLanVorLioXuKriPanBalBei:20}.
\subsection{System Model and Implementation Details} \label{sec:exp_setup}
The nuScenes dataset consists of 1000 autonomous driving scenes and seven object classes. We use the official split of the dataset, with 700 scenes for training, 150 for validation, and 150 for testing. Each scene has a length of roughly 20 seconds and contains keyframes (frames with ground truth object annotations) sampled at 2 Hz. Object detections extracted by the CenterPoint detector \cite{YinZhoKra:21} are used as measurements, which are then preprocessed using \ac{nms} \cite{NeuVan:06}. Each measurement has a class index, and the proposed \ac{mot} method is performed for each class individually.
We define the states of \acp{po} as $\V{x}_{k, n} \in \mathbb{R}^4$, which include their 2D position and 2D velocity. The measurements $\V{z}_{k, j} \in \mathbb{R}^5$ consist of the 2D position and 2D velocity provided by the object detector as well as a detection score $0 \hspace*{-.3mm}<\hspace*{-.3mm} s_{k, j} \hspace*{-.3mm}\le\hspace*{-.3mm} 1$.
The dynamics of objects are modeled by a constant-velocity model \cite{ShaKirLi:B02}. The \ac{roi} is given by $[x_e - 54, x_e + 54] \times [y_e - 54, y_e + 54]$, where $(x_e, y_e)$ is the 2D position of the autonomous vehicle. The prior \acp{pdf} of false alarms $f_{\mathrm{fa}}(\cdot)$ and newly detected objects $f_{\text{n}}(\cdot)$ are uniform over the \ac{roi}. All other parameters used in the system model are estimated from the training data. The threshold for object declaration was set to $T_{\mathrm{dec}} \hspace*{-.3mm}=\hspace*{-.3mm} 0.5$ for legacy \acp{po} and to a class-dependent value for new \acp{po}. The pruning threshold was set to $T_{\mathrm{pru}} \hspace*{-.3mm}=\hspace*{-.3mm} 10^{-3}$.
The neural networks $g_{\text{e}}(\cdot)$, $g_{\text{n}}(\cdot)$, $g_{\text{a}}(\cdot)$, $g_{\text{motion}, 1}(\cdot)$, and $g_{\text{motion}, 2}(\cdot)$ are \acp{mlp} with a single hidden layer and leaky ReLU activation function. Furthermore, $g_{\text{r}}(\cdot)$ is a single-hidden-layer \ac{mlp} with sigmoid activation at the output layer. Finally, $g_{\text{shape}, 1}(\cdot)$ and $g_{\text{shape}, 2}(\cdot)$ consist of two convolutional layers followed by a single-hidden-layer \ac{mlp}. At each time step, we use the output $\Set{Z}$ of VoxelNet \cite{ZhuTuz:18} to extract shape features as discussed in Section \ref{sec:feaExtraction}. The VoxelNet used has been pre-trained by the CenterPoint method \cite{YinZhoKra:21}. Its parameters remain unchanged during the training of the proposed \ac{nebp} method. \ac{nebp} training is performed by employing the Adam optimizer. The number of \ac{gnn} iterations is $L = 3$. The batch size was set to $1$, the learning rate to $10^{-4}$, and the number of training epochs to 8. The tuning parameter $\epsilon$ in \eqref{eq:loss_beta} was set to 0.1, and the threshold $T_{\mathrm{dist}}$ for the pseudo ground truth extraction discussed in Section \ref{sec:Loss} was set to 2 meters.
\subsection{Performance Evaluation}
We use the widely used CLEAR metrics \cite{BerSti:08} that include the number of \ac{fp}, \ac{ids} and \ac{frag}. In addition, we also consider the \ac{amota} metric proposed in \cite{WenWanHelKit:20}. Note that the \ac{amota} is also the primary metric used for the nuScenes tracking challenge \cite{CaeBanLanVorLioXuKriPanBalBei:20}.
Evaluation of the \ac{amota} requires a score for each estimated object. It was observed that a high \ac{amota} performance is obtained by calculating the estimated object score as a combination of existence probability and measurement score. In particular, for legacy \ac{po} $i$ the estimated object score is calculated\vspace{.5mm} as
\begin{equation}
\underline{s}{}_i = \tilde{p}(\underline{r}{}_i = 1) + \frac{\sum_{j = 1}^{J} \phi_{a_i}(j) \kappa_{i}(j) s_j}{\sum_{j = 1}^{J} \phi_{a_i}(j) \kappa_{i}(j)}, \nonumber
\vspace{.5mm}
\end{equation}
and for new PO $j$ the estimated object score is given by $\overline{s}{}_j = \tilde{p}(\overline{r}{}_j = 1) + s_j $.
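The score combination above is straightforward to compute; a minimal sketch follows, with our own (hypothetical) function and argument names standing in for the quantities $\tilde{p}(\underline{r}{}_i = 1)$, $\phi_{a_i}(j)$, $\kappa_i(j)$, and $s_j$:

```python
def legacy_po_score(p_exist, phi, kappa, s):
    """Estimated score of legacy PO i: existence probability plus the
    association-weighted mean of the detector scores s_j.

    phi   : messages phi_{a_i}(j) for j = 1..J
    kappa : shape association factors kappa_i(j) for j = 1..J
    s     : detector scores s_j for j = 1..J
    """
    w = [p * k for p, k in zip(phi, kappa)]
    return p_exist + sum(wj * sj for wj, sj in zip(w, s)) / sum(w)

def new_po_score(p_exist, s_j):
    """Estimated score of new PO j: existence probability plus score."""
    return p_exist + s_j
```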
For a fair comparison, we use state-of-the-art reference methods that all rely on the CenterPoint detector \cite{YinZhoKra:21}. In particular, \ac{bp} refers to the traditional \ac{bp}-based \ac{mot} method \cite{MeyKroWilLauHlaBraWin:J18} that uses object detections provided by the CenterPoint detector as measurements. Furthermore, the tracking method proposed in \cite{YinZhoKra:21} uses a heuristic to create tracks and a greedy matching algorithm based on the Euclidean distance to associate CenterPoint object detections to tracks. Chiu et al. \cite{ChiLiAmbBoh:21} follows a similar strategy but makes use of a hybrid distance that combines the Mahalanobis distance and the so-called deep feature distance. Finally, the method introduced by Zaech et al. \cite{ZaeDaiLinDanVan:22} utilizes a network flow formulation and transforms the \ac{da} problem into a classification problem.
In Table~\ref{tab:result}, it can be seen that the proposed \ac{nebp} approach outperforms all reference methods in terms of \ac{amota} performance. Furthermore, it can be observed that \ac{bp} and \ac{nebp} achieve a much lower \ac{ids} and \ac{frag} metric compared to the reference methods. This is because both \ac{bp} and \ac{nebp} make use of a statistical model to determine the initialization and termination of tracks \cite{MeyKroWilLauHlaBraWin:J18}, which is more robust than the heuristic track management performed by the other reference methods. The improved \ac{amota} performance of \ac{nebp} over \ac{bp} comes at the cost of a slightly increased \ac{ids} and \ac{frag}.
Table~\ref{tab:fp} shows the \ac{amota} performance as well as the number of \ac{fp} for the bicycle and pedestrian classes. To ensure a fair comparison, all \ac{fp} values are evaluated at the same percentage of true positives, referred to as ``recall''. In particular, for each class, the recall that leads to the largest multi-object tracking accuracy \cite{BerSti:08} for CenterPoint was used.
For the considered two classes, \ac{nebp} yields the largest improvement in terms of \ac{amota} performance over \ac{bp}. Compared to \ac{bp}, the proposed \ac{nebp} method also has a reduced number of \ac{fp}. In conclusion, false alarm rejection and object shape association introduced by the proposed \ac{nebp} method can make effective use of raw sensor data and substantially improve \ac{mot} \vspace{.5mm}performance.
\begin{table}[!htbp]
\normalsize
\centering
\begin{tabular}{c|cc|cc}
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{bicycle} & \multicolumn{2}{c}{pedestrian} \\ \cline{2-5}
& AMOTA $\uparrow$ & FP $\downarrow$ & AMOTA $\uparrow$ & FP $\downarrow$ \\ \hline
CenterPoint \cite{YinZhoKra:21} & 0.458 & 380 & 0.777 & 5944 \\
BP & 0.499 & 171 & 0.795 & 4958 \\
NEBP & \textbf{0.536} & \textbf{135} & \textbf{0.806} & \textbf{4328}
\end{tabular}
\vspace{1mm}
\caption{Evaluation results on nuScenes validation set in terms of \ac{amota} and \ac{fp} for the bicycle and pedestrian class.}
\label{tab:fp}
\vspace{-2mm}
\end{table}
\section{Final Remarks}
\vspace{0mm}
In this paper, we present a \ac{nebp} method for \ac{mot} that enhances probabilistic data association by information learned from raw sensor data. A \ac{gnn} is introduced that matches the topology of the factor graph used for model-based data association. For false alarm rejection, the \ac{gnn} identifies which measurements are likely false alarms. Object shape association computes improved association probabilities by using raw sensor data. The proposed approach can improve the estimation accuracy of \ac{bp} while preserving its low computational complexity. We applied the proposed method to the nuScenes autonomous driving dataset and demonstrated state-of-the-art object tracking performance.
\vspace{1mm}
\section*{Acknowledgement}
This work was supported by the National Science Foundation (NSF) under CAREER Award No.~2146261.
\renewcommand{\baselinestretch}{1.02}
\selectfont
\bibliographystyle{IEEEtran}
\section{Introduction}
Nearly all modern nuclear astrophysics studies rely on knowledge
of thermonuclear reaction rates between two
or more reacting particles.
The rate at which nuclei in a hot plasma interact is
governed by the reaction cross section and the velocities
of the reacting nuclei in their center of mass.
In general, the reaction rates for an environment at a
certain temperature are taken as the average rates, which
are deduced by integrating over the reaction
cross section (as a function of energy) weighted by the
Maxwell-Boltzmann
energy distribution of the reactants in the plasma involved,
known as the thermonuclear reaction rate (TRR), $\left\langle\sigma v\right\rangle$ \citep{illiadis,boyd08}.
For
resonances in the cross section at specific energies, the
evaluation is similar, but the cross section also has
a term defining the resonance.
In a hot plasma,
the background electrons create a
``screening'' effect between
two reacting
charged particles
\citep{wu17,liu16,spitaleri16,yakovlev14,kravchuk14,potekhin13,quarati07,
shaviv00,adelberger98,shalybkov87,wang83,
www82,itoh77,jancovici77,graboske73,dewitt73,salpeter69,salpeter54}.
Coulomb screening reduces their Coulomb barrier because the effective charge between the two particles is reduced. The commonly used ``extended'' screening \citep{jancovici77,itoh77} and recent evaluations of screening including relativistic effects have been explored \citep{famiano16,luo20}.
In evaluating the screening effect, even a small shift in the potential energy can result in significant changes in
the classical turning points of the WKB approximation, resulting in an increase in the reaction rate. It should be noted that other positively charged nuclei in
a plasma also
increase the reaction rate as positive and negative charges are redistributed in the presence of a ``point-like'' nuclear potential. Though this
adjustment to thermonuclear rates has been known for a long time \citep{salpeter54},
effects from relativistic, magnetized plasmas have not been fully addressed.
Closely tied to the equilibrium abundances of electrons and positrons is pair
production, which occurs at high-enough temperatures in which the tail of the
Fermi distribution exceeds the pair-production threshold. Pair production has been studied in stellar cores of very massive stars \citep{kozyreva17, woosley17, spera17, takahashi18} and as a neutrino
cooling mechanism \citep{itoh96}.
Also, though electron capture reactions have been previously studied \citep{itoh02,liu07}, the simultaneous effects of external
magnetic fields and relativistic pair production on reaction rate screening (fusion and
electron capture) in magnetized plasmas
have not been fully considered.
For temperatures and magnetic fields that are high enough,
electrons and positrons can exist
in non-negligible equilibrium abundances.
In a magnetized plasma, the electron and
positron energy distributions are altered by the external field.
In a hot plasma,
the
background charges include the surrounding electrons, positrons, and other nuclei. Classically, for a non-relativistic charge-neutral medium the electrostatic potential $\phi$ of a charge $Ze$ in the
presence of a background charge density can be computed via
the Poisson-Boltzmann equation:
\begin{equation}
\label{PB}
\nabla^2\phi(r) = -4\pi Ze\delta^3(\mathbf{r}) -4\pi\sum_{z\ge-1} ze n_z\exp\left[-\frac{ze\phi(r)}{T}\right],
\end{equation}
where the last term is a sum over all charges in the medium with charge $ze$ and number density $n_z$, including non-relativistic electrons ($z=-1$).
This description is almost universally used in astrophysical calculations involving nuclear reactions.
Here, the electron degeneracy must be calculated or estimated explicitly to
accurately determine the energy and density distribution. (Natural units are used: $k=\hbar=c=1$.)
However, for hot, magnetized plasmas electrons and positrons must be expressed in equilibrium using Fermi-Dirac statistics.
The lepton number density in the presence of an external
field is modified by the
presence of Landau levels and changes from the zero-field form \citep{grasso01,kawasaki12}:
\begin{eqnarray}
\label{num_dens_terms}
n(B=0,T) & =& \frac{1}{\pi^2}\int\limits_0^\infty
\frac{p^2dp}{\exp\left[\frac{E-\mu}{T}\right]+1}
-\frac{1}{\pi^2}\int\limits_0^\infty
\frac{p^2dp}{\exp\left[\frac{E+\mu}{T}\right]+1},
\\\nonumber
n(B\ne 0,T) & =&
\frac{eB}{2\pi^2}\sum\limits_{n=0}^\infty(2-\delta_{n0})
\left[
{
\int
\limits_0^\infty
\frac{dp_z}
{\exp\left[\frac{\sqrt{E^2+2neB}-\mu}{T}\right]+1}
-\int
\limits_0^\infty
\frac{dp_z}
{\exp\left[\frac{\sqrt{E^2+2neB}+\mu}{T}\right]+1}
}
\right].
\end{eqnarray}
In the above equation, $E=\sqrt{p_z^2+m^2}$, where the $z$ direction is parallel to the
magnetic field.
The term $\delta_{n0}$
accommodates the degeneracy of the higher Landau levels, and the index $n$
accounts for the Landau level as well as the $z$ component of the electron
spin. As
$B\rightarrow 0$, the summation in the second relationship in Equation \ref{num_dens_terms} becomes an integral, and the zero-field
number density is recovered.
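A direct numerical evaluation of the nonzero-field expression in Equation \ref{num_dens_terms} can be sketched as follows. The trapezoidal quadrature, the momentum cutoff, and the Landau-level truncation are our own adjustable numerical choices; all quantities are in natural units ($k=\hbar=c=1$), with energies in units of the electron mass when $m=1$.

```python
import math

def net_lepton_density(eB, T, mu, m=1.0, n_levels=100, p_max=40.0, n_p=1000):
    """Net electron minus positron number density for B != 0 in
    Eq. (num_dens_terms), by trapezoidal quadrature over p_z."""
    dp = p_max / n_p
    total = 0.0
    for n in range(n_levels):
        g = 1.0 if n == 0 else 2.0                # degeneracy 2 - delta_{n0}
        for i in range(n_p + 1):
            pz = i * dp
            w = 0.5 if i in (0, n_p) else 1.0     # trapezoid end weights
            En = math.sqrt(pz * pz + m * m + 2.0 * n * eB)
            f_minus = 1.0 / (math.exp((En - mu) / T) + 1.0)   # electrons
            f_plus = 1.0 / (math.exp((En + mu) / T) + 1.0)    # positrons
            total += g * w * (f_minus - f_plus) * dp
    return eB / (2.0 * math.pi ** 2) * total
```

At $\mu = 0$ the electron and positron occupations cancel term by term, so the net density vanishes, a convenient sanity check on the implementation.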
The Poisson-Boltzmann
equation must then be replaced with the equivalent equation assuming Fermi statistics with
a magnetic field, $B$, and chemical potential, $\mu$:
\begin{eqnarray}
\label{PF}
\tiny
\nabla^2\phi(r) &=& -4\pi Ze\delta^3(\mathbf{r}) -4\pi\sum_{z>0} ze n_z\exp\left[-\frac{ze\phi(r)}{T}\right]
\\\nonumber
+&\frac{eB}{\pi}&\sum\limits_{n=0}^{\infty} g_n\int\limits_0^\infty dp \left[\frac{1}{\exp\left[\left(\sqrt{E^2+2neB}-\mu-e\phi(r)\right)/T\right]+1}-\frac{1}{\exp\left[\left(\sqrt{E^2+2neB}+\mu+e\phi(r)\right)/T\right]+1}\right]
\\\nonumber
+&4\pi&\sum\limits_{z>0}zen_z
\\\nonumber
-&\frac{eB}{\pi}&\sum\limits_{n=0}^{\infty} g_n\int\limits_0^\infty dp \left[\frac{1}{\exp\left[\left(\sqrt{E^2+2neB}-\mu\right)/T\right]+1}-\frac{1}{\exp\left[\left(\sqrt{E^2+2neB}+\mu\right)/T\right]+1}\right]
\end{eqnarray}
where the sum in the third term accounts for the quantized transverse momentum of electrons and positrons in a high magnetic field, and
$g_n=2 - \delta_{n0}$ accounts for Landau level degeneracy. The relativistic effects come from
the high thermal energy, $T\sim m_e$, the Landau level spacing for field strengths with $\sqrt{eB}\sim m_e$, or both \citep{kawasaki12, grasso01}.
The last two terms in Equation \ref{PF} account for
the redistribution of charge on
the uniform charge background.
For a charge-neutral plasma, the sum of
these last two terms is zero before the charge $Ze$ is introduced.
Here, electrons are assumed to be relativistic while ions
are still treated classically; the non-relativistic
nuclei are treated with
Boltzmann statistics.
At lower temperatures and higher densities, the electron degeneracy is higher, and a first-order
solution to the Poisson equation is no longer valid. The
chemical potential must be accounted for in the relativistic treatment of Equation \ref{PF} and computed
from the electron-positron number density assuming charge neutrality. In this regime, the screening is very strong, and $E_C/kT\gg 1$.
The thermal energy is
less important, and the potential is modified by the difference in Coulomb energy
before and after the reaction -- the so-called ion sphere model \citep{clayton,salpeter69,salpeter54}.
Perhaps ``intermediate screening,'' where $E_C/kT\sim 1$, is the most
complicated.
In this regime, screening enhancement has been computed in one of two ways.
One method is to solve the Poisson-Boltzmann equation numerically \citep{graboske73}.
In this case, numerical fits or tables might be used for astrophysical codes.
For many computational applications, an empirical interpolation between strong and
weak screening is computed \citep{www82,salpeter69}.
The ``screening enhancement factor'' (SEF) $f$, relates the screened rate
to the unscreened rate
by $\left\langle\sigma v\right\rangle_{scr}=f\left\langle\sigma v\right\rangle_{uns}$.
The value of $f$ can be deduced from the WKB approximation in the thermonuclear reaction rates as
$f=e^H$ \citep{graboske73,jancovici77,salpeter54,salpeter69,www82}, where $H$ is a unitless value derived from the
specific type of screening employed \citep{das16,kravchuk14,itoh77,alastuey78,dewitt73,quarati07}.
As mentioned above, the intermediate screening exponent $H_I$ is often determined from the strong and weak screening values as
$H_I = H_SH_W/\sqrt{H_S^2+H_W^2}$. This method is commonly used
in astrophysics codes incorporating
nuclear reaction networks \citep{mesa11, mesa15, mesa18, libnucnet}.
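For reference, the interpolation between the strong and weak screening exponents is trivial to implement; the function name below is our own:

```python
import math

def h_intermediate(h_s, h_w):
    """Empirical intermediate screening exponent from the strong (H_S)
    and weak (H_W) exponents; the SEF is then f = exp(H_I)."""
    return h_s * h_w / math.sqrt(h_s ** 2 + h_w ** 2)
```

Note the limiting behavior: when $H_W \ll H_S$ the formula returns approximately $H_W$ (weak screening dominates), and vice versa, which is what makes it a usable interpolation in network codes.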
\begin{figure}[b]
\gridline{
\fig{pos_elec_ratio_t-crop.pdf}{0.49\textwidth}{(a)}
\fig{pos_elec_ratio_rho-crop.pdf}{0.49\textwidth}{(b)}
}
\caption{\label{screen_modes}
(a)
Positron-electron ratio as a function of temperature and magnetic field in a neutral plasma at $\rho Y_e = 5\times10^5$ g/cm$^3$.
(b) Positron-electron ratio as a function of density and magnetic field in a neutral plasma at temperature $T_9=7$ and $Y_e = 0.5$.
The number densities are computed up to 2000 Landau levels.}
\end{figure}
An example of the importance of including thermal and magnetic field effects
is shown in Figure \ref{screen_modes}.
Shown in the figure is the ratio of positron to electron number density as a
function of temperature and magnetic field (where $T_9$ is the
temperature in billions of K) at a density and electron fraction $\rho Y_e=5\times10^{5}$ g/cm$^{3}$ taking into account the electron
chemical potential at high density. Relativistic effects
become increasingly important
in this region as the positron number density becomes a significant fraction of
the electron number density. The increased overall number of charges of either sign contributes to the screening effect, and this will be explored in this paper.
The goal of this paper is to evaluate the effects of
reaction rate screening in relativistic electron-positron plasmas found in hot, magnetized stellar environments. Results from this work will be
applied to an example nucleosynthesis process in a magnetohydrodynamic (MHD)
collapsar jet.
In this paper, the effects of weak screening corrections in a magnetized,
relativistic plasma will be evaluated. A useful approximation which can be
used effectively in computational applications is developed. The effects
of screening in a sample astrophysical site are evaluated. In addition,
the effects of strong magnetic fields on the weak interactions in a
plasma are explored.
\section{Weak Screening Limit}
\subsection{First Order Expansions: Debye-H\"uckel and Thomas-Fermi}
In the high temperature, low density ``weak screening'' limit, the Coulomb energy between two reacting nuclei is lower than the thermal energy, $E_C/kT\ll 1$,
as is the electron chemical potential. The electrons are mostly
non-degenerate, and Equations \ref{PB} and \ref{PF} can be expanded to first order in
potential,
$\mathcal{O}(\phi)$, known as the Debye-H\"uckel approximation.
A corresponding Debye length, $\lambda_D$ can be derived, resulting in
a Yukawa-type potential, $\phi(r)\propto (e^{-r/\lambda_D})/r$ as opposed to the usual $1/r$ unscreened Coulomb relationship. For lower
temperatures and higher densities resulting in higher electron degeneracy,
the Thomas-Fermi screening length is more appropriate. This is defined by the first order approximation:
\begin{equation}
\frac{1}{\lambda_{TF}^2} \equiv 4\pi e^2\frac{\partial n}{\partial\mu}.
\end{equation}
This is derived from the density of states at the Fermi surface \citep{ichimaru93}, but it is also equivalent to the first-order expansion in potential as the chemical
potential is used as a mathematical surrogate for the potential with the same
results.
This relationship can also be deduced from the solution
of the Schwinger-Dyson equation for
the photon propagator \citep{kapusta06}.
The contribution to the screening length from the surrounding nuclei must also be included, and this can be significant in some cases.
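As a numerical check on this definition, the zero-field relativistic case of Equation \ref{num_dens_terms} can be differentiated with respect to $\mu$ by a central finite difference. The quadrature parameters, the step size $h$, and the choice $e^2 = \alpha \approx 1/137$ (Gaussian natural units) are our own assumptions for this sketch.

```python
import math

def n_e_zero_field(mu, T, m=1.0, p_max=60.0, n_p=3000):
    """Zero-field net lepton density (first line of Eq. num_dens_terms),
    trapezoidal quadrature, natural units with energies in m_e = 1."""
    dp = p_max / n_p
    s = 0.0
    for i in range(n_p + 1):
        p = i * dp
        w = 0.5 if i in (0, n_p) else 1.0
        E = math.sqrt(p * p + m * m)
        s += w * p * p * (1.0 / (math.exp((E - mu) / T) + 1.0)
                          - 1.0 / (math.exp((E + mu) / T) + 1.0)) * dp
    return s / math.pi ** 2

def inv_lambda_tf_sq(mu, T, e2=1.0 / 137.036, h=1e-4):
    """1/lambda_TF^2 = 4 pi e^2 dn/dmu via a central finite difference."""
    dn_dmu = (n_e_zero_field(mu + h, T) - n_e_zero_field(mu - h, T)) / (2.0 * h)
    return 4.0 * math.pi * e2 * dn_dmu
```

At $\mu = 0$, $\partial n/\partial\mu$ grows with temperature as thermal pairs populate the distribution, so the screening length shrinks, consistent with the trends discussed around Figure \ref{debye_fig}.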
\begin{figure}
\includegraphics[width=\linewidth,]{debye_ratios-crop.pdf}
\caption{\label{debye_fig}The ratio of classical to relativistic electron screening lengths for a neutral plasma as a function of temperature and magnetic field (G) at a constant electron density, $\rho$Y$_e=5\times10^5$ g/cm$^3$. }
\end{figure}
The chemical potential can be determined using Equation \ref{num_dens_terms} for a plasma of density
$\rho$, electron fraction $Y_e$, and net electron density $n_- - n_+$. For most astrophysical applications, a
static plasma is assumed with a net charge density of zero.
The ratio of the relativistic Thomas-Fermi electron-positron screening length, $\lambda_{TF}$, to the classical Debye length, $\lambda_D$, is shown in Figure \ref{debye_fig} as a function of temperature and magnetic field at $\rho Y_e=5\times10^{6}$ g/cm$^{3}$. In this figure, only the electron-positron screening length ratio is shown to emphasize the difference that high temperature and magnetic fields can induce in a plasma. In astrophysical calculations, the screening length
from other nuclei must also be included, $1/\lambda^2 = 1/\lambda_{ion}^2+1/\lambda_{-,+}^2$. There is a significant difference
between the classical and relativistic screening lengths at high temperature and field. Because the screened rates depend exponentially
on the screening lengths at low density/high-temperature, even small changes in the screening length can be significant. The relativistic electron screening length can be quite small at high-enough temperature or $B$ field.
\begin{figure}
\gridline{
\fig{mu_nmax-crop.pdf}{0.49\textwidth}{(a)}
\fig{nmax_2d-crop.pdf}{0.49\textwidth}{(b)}
}
\caption{\label{landau_sens} (a)
Electron chemical potential as a function of the maximum
number of Landau levels included in summation over Landau levels
at $T_9=2$, $\rho$Y$_e = 5\times10^4$ g cm$^{-3}$ for various magnetic
fields. The dashed line is the chemical potential for an ideal
Fermi gas.
(b) Number of Landau levels necessary to approach an equilibrium
electron number density with a maximum uncertainty of 1\%, $N_{0.01}$ at
a $\rho$Y$_e=5\times10^4$ g cm$^{-3}$ as a function of magnetic field (G) and $T_9$.}
\end{figure}
It is noted that at higher density or lower temperature, intermediate
screening depends more heavily on the electron chemical potential. The increased electron chemical potential
results in electron and positron number densities
that approach their classical (non-relativistic) values, that is, $n_- \rightarrow \rho N_A Y_e$ and $n_+ \rightarrow 0$.
Because of this, first-order weak screening is replaced by an ion-sphere screening model or a type of geometric mean between the ion-sphere and weak screening model \citep{www82,salpeter69}.
In determining the equilibrium electron-positron number density and the screening lengths,
computational models may truncate the number of Landau levels that are counted
in the evaluation, or the sum may be replaced by an integral in a low-field approximation \citep{kawasaki12}. For high fields, one can
determine the number of Landau levels necessary to sum over to obtain
a certain accuracy in the computation. This is illustrated in Figure \ref{landau_sens}. In Figure \ref{landau_sens}a, the
computed electron chemical potential is shown as a function of the maximum
Landau level, $N_{max}$ included in the sum in Equation \ref{num_dens_terms} for
$T_9=2$, and $\rho Y_e=5\times 10^4$ g cm$^{-3}$. As the number of Landau levels summed over increases, the chemical potential converges to its equilibrium value. For a field
of 10$^{15}$ G, the convergence is immediate, and the approximation in which only the lowest Landau level is considered is valid. For
10$^{14}$ G, the convergence occurs rapidly at $N_{max}=1$, and the difference between
the lowest-level approximation and the $N_{max}=1$ summation is small. At lower fields,
a summation over more Landau levels is necessary in order to achieve a reasonable
accuracy.
It is also interesting to note that at higher fields, the electron chemical potential is
reduced as the level density is adjusted by the presence of Landau levels. At the highest
fields, the electron transverse momentum is discrete and increases with field. The energy necessary to fill higher Landau levels is large compared to the thermal energy of
the plasma, $kT$, and electrons are forced into the lowest-energy levels. However, if the field is low-enough such that $\sqrt{eB}\ll kT$, the chemical potential approaches
that of an ideal Fermi gas, and the plasma can be treated as such. In this
case, the field can be ignored.
Similarly, in Figure \ref{landau_sens}b,
truncating the sum over Landau levels at a specific number is explored by
examining the number of Landau levels necessary to achieve a desired uncertainty. Shown on the right side of this figure is
the number of Landau levels necessary, $N_{0.01}$, such that the relative difference between the sum over $N_{0.01}$ Landau levels and the equilibrium number density is less than 1\%:
\begin{equation}
1 - \frac{\sum\limits_{i=0}^{N_{0.01}-1} h_i}{\sum\limits_{i=0}^{N_{large}} h_i}<0.01,
\end{equation}
where $h_i$ are individual terms in the number density in Equation \ref{num_dens_terms}.
That is, the relative difference between the number density if only
$N_{0.01}$ Landau levels are used and if a sufficiently large
number of Landau levels is used is less than 0.01. For this
figure, the density times the electron fraction is $\rho Y_e=5\times10^{4}$ g cm$^{-3}$.
For fields that are
high-enough, $B\gtrsim 10^{13}$ G, each successive term in the
sum drops
by roughly an order of magnitude, $h_{i+1}/h_i\sim 0.1$. Here, a value of $N_{large}=10^{4}$
is assumed. From the left side of the figure, it is seen that even at low
fields, sums over fewer than 10$^4$ Landau levels are
sufficient to characterize the plasma, indicating that the choice $N_{large}=10^4$
is sufficient. Even at low fields, the last $\sim3000$
Landau levels in the sum contribute less than 1\% to the total
electron-positron number density.
For a lower field, it is necessary to include several hundred (or more) Landau levels in the sum for an
accurate calculation. For a
very high field, however, one can achieve high accuracy by including only the lowest Landau level, known as the ``lowest Landau level approximation'' or the LLL approximation. A discussion of the accuracy and utility of the LLL approximation in
evaluating the TF length will be given later.
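The truncation criterion defining $N_{0.01}$ can be expressed compactly; the function below is our own sketch, with the full sum approximated by the finite list of supplied terms (i.e., a sufficiently large $N_{large}$):

```python
def n_converged(terms, tol=0.01):
    """Smallest N such that the partial sum of the first N terms h_i is
    within a relative tolerance tol of the full (truncated) sum,
    mirroring the definition of N_{0.01} in the text."""
    total = sum(terms)
    partial = 0.0
    for n, h in enumerate(terms, start=1):
        partial += h
        if 1.0 - partial / total < tol:
            return n
    return len(terms)
```

For a geometric fall-off $h_{i+1}/h_i \sim 0.1$, as seen at high fields, only a couple of terms already capture the sum at the few-percent level; slower fall-offs require correspondingly more Landau levels.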
As can be seen from Figures \ref{screen_modes} and \ref{landau_sens}a, the effect of the
magnetic field becomes negligible roughly below 10$^{13}$ G. The
electron-positron population is determined almost exclusively by
the system temperature and density. In this region, the
thermally calculated chemical potential without magnetic fields is almost identical
to that computed if magnetic field effects are accounted for and the positron
number density approaches zero as stated previously. For
the temperature and density used for Figure \ref{landau_sens}a,
the electron chemical potential if no field were present would be 0.76 MeV.
Above 10$^{13}$ G, the chemical potential decreases with field.
\subsection{High-Field Approximation: Euler-MacLaurin Formula in Momentum}
\label{low_p_high_b}
At high fields and high temperatures, the chemical potential is low, and the electron-positron Fermi distribution is constrained to relatively low momenta. In this case, we consider an Euler-MacLaurin expansion of the number density in momentum. The
net electron number density can be written as:
\begin{equation}
n_e=eB\frac{ T}{2\pi^2}\sum\limits_{\tilde{p},n=0}^\infty
g_n
\left(\frac{2-\delta_{p0}}{2}\right)
\frac{\sinh\tilde{\mu}}{\cosh\sqrt{\tilde{p}^2+\tilde{m}^2+n\gamma}+\cosh\tilde{\mu}},
\end{equation}
where $\gamma\equiv 2eB/T^2$ and terms with a tilde are divided by $T$, $\tilde{x}\equiv x/T$.
These terms are unitless. It is noteworthy that,
for the Euler-MacLaurin formula, the higher-order
derivative corrections vanish, meaning that the sum above is complete.
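The weight $(2-\delta_{p0})/2$ is precisely the trapezoidal endpoint weight, so the unit-step sum over $\tilde{p}$ can be checked directly against a fine quadrature of the corresponding integral. The following sketch does this for a single Landau level; the values $\tilde{m}=0.5$, $\tilde{\mu}=1$ and the cutoff $\tilde{p}\le 60$ are assumptions chosen purely for illustration:

```python
import math

def f(p, m=0.5, mu=1.0):
    # Summand/integrand of the net number density at a fixed Landau level
    return math.sinh(mu) / (math.cosh(math.sqrt(p * p + m * m)) + math.cosh(mu))

# Euler-MacLaurin sum with unit step: the (2 - delta_p0)/2 factor is the
# trapezoidal half-weight at p = 0
em_sum = sum((0.5 if p == 0 else 1.0) * f(float(p)) for p in range(0, 60))

# Reference: fine trapezoidal quadrature of the same integral up to p = 60
h = 1e-3
quad = h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, 60001)))

rel_err = abs(em_sum - quad) / quad
```

The two values agree to well below a percent, consistent with the vanishing of the higher-derivative correction terms noted above.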
In the case of a strong magnetic
field, the LLL approximation yields:
\begin{equation}
n_e=eB\frac{T}{2\pi^2}
\sum\limits_{\tilde{p}=0}^\infty \left(\frac{2-\delta_{p0}}{2}\right)
\frac{\sinh\tilde{\mu}}{\cosh\sqrt{\tilde{p}^2+\tilde{m}^2}+\cosh\tilde{\mu}},
\end{equation}
resulting in a linear dependence on the external
magnetic field.
The Thomas-Fermi screening
length in a strong magnetic field is derived as:
\begin{eqnarray}
\label{EM1}
\frac{1}{\lambda_{TF}^2} & = &4\pi e^2\frac{\partial n}{\partial\mu}\\\nonumber
~& =& eB\frac{e^2} {\pi}\frac{\partial}{\partial\tilde{\mu}}
\sum\limits_{n=0}^{\infty}g_{n}
\int\limits_0^{\infty}\frac{d\tilde{p}}
{
\exp{
\left[
\sqrt{
\tilde{p}^2+\tilde{m}^2+n\gamma
}\mp\tilde{\mu}
\right]
}
+1
}\\\nonumber
~& =& eB\frac{e^2}{\pi}
\sum\limits_{n=0}^{\infty}g_{n}
\int\limits_0^{\infty}\frac{\partial}{\partial\tilde{\mu}}
\frac{d\tilde{p}}
{
\exp{
\left[
\sqrt{
\tilde{p}^2+\tilde{m}^2+n\gamma
}\mp\tilde{\mu}
\right]
}
+1
}\\\nonumber
~& =& eB\frac{e^2}{\pi}
\sum\limits_{n=0}^{\infty}g_{n}
\int\limits_0^{\infty}
\frac{d\tilde{p}}
{
1+\cosh{\left(\sqrt{\tilde{p}^2+\tilde{m}^2+n\gamma}\mp\tilde{\mu}\right)}
},
\end{eqnarray}
where the $\mp$ corresponds to the electron/positron number density, and the sum over
both electron and positron densities is implied.
The Euler-MacLaurin formula, expanded in momentum, yields an easily computed
form for the integral term above:
\begin{eqnarray}
\label{EM_p_b}
\frac{1}{\lambda_{TF}^2}
&=&eB\frac{e^2} {\pi}\sum_{\tilde{p},n=0}^{\infty}g_n
\left(\frac{2-\delta_{p0}}{2}\right)
\frac{1+\cosh{\tilde{\mu}}\cosh\sqrt{\tilde{p}^2+\tilde{m}^2+n\gamma}}
{
\left(
\cosh{\tilde{\mu}}+\cosh{\sqrt{\tilde{p}^2+\tilde{m}^2+n\gamma}}
\right)^2}
\end{eqnarray}
where the sum over $n$ is a sum over Landau levels and the sum over $\tilde{p} = p/T$ results from the
Euler-MacLaurin formula for Equation \ref{EM1}.
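As a consistency check, the summand of Equation \ref{EM_p_b} should be the $\tilde{\mu}$-derivative of the occupation factor appearing in the number density sum. A finite-difference sketch, with arbitrarily chosen illustrative values of $\tilde{\mu}$ and the level energy, confirms the identity:

```python
import math

def occ(mu, e):
    # Net occupation factor: sinh(mu) / (cosh(e) + cosh(mu))
    return math.sinh(mu) / (math.cosh(e) + math.cosh(mu))

def docc(mu, e):
    # Analytic mu-derivative: the summand of Equation (EM_p_b)
    return (1.0 + math.cosh(mu) * math.cosh(e)) / (math.cosh(mu) + math.cosh(e)) ** 2

mu, e, h = 0.7, 1.3, 1e-6
numeric = (occ(mu + h, e) - occ(mu - h, e)) / (2.0 * h)  # central difference
```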
One can approximate a sum over several Landau levels and only up to a maximum
value of $\tilde{p}$ in the above equation:
\begin{equation} \frac{1}{\lambda_{TF}^2}\propto\sum_{n=0}^{\infty}g_n
\left[
\sum_{\tilde{p}=0}^{\infty} ...
\right]
\rightarrow
\sum_{n=0}^{n_{max}}g_n
\left[
\sum_{\tilde{p}=0}^{\tilde{p}_{max}} ...
\right] + R_{\tilde{p}},
\end{equation}
where $n_{max}$ is the highest Landau level included in the sum, and $\tilde{p}_{max}$ is the highest term included
in the Euler-MacLaurin formula. The remainder induced by
truncating the sum is $R_{\tilde{p}}$.
At high magnetic field, the electron chemical potential is much smaller than the Landau level spacing. In this case, the sum over $\tilde{p}$ converges rapidly, and the summation can be
truncated to a few terms. For the purposes of computation, the sum
may be truncated at the $\tilde{p}_{max}$ for which the relative difference of successive terms is smaller than some uncertainty, $\varepsilon$:
\begin{equation}
\frac{f_{\tilde{p}_{max}}-f_{(\tilde{p}_{max}-1)}}
{f_{\tilde{p}_{max}}}<\varepsilon
\end{equation}
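A minimal sketch of this truncation criterion, interpreting $f_{\tilde{p}_{max}}$ as the partial sum through $\tilde{p}_{max}$ and fixing a single Landau level; the values $\tilde{m}=0.5$, $\tilde{\mu}=1$ are assumptions for illustration:

```python
import math

def term(p, m=0.5, mu=1.0, ngamma=0.0):
    """Weighted Euler-MacLaurin term of the screening sum at fixed n."""
    e = math.sqrt(p * p + m * m + ngamma)
    w = 0.5 if p == 0 else 1.0  # (2 - delta_p0)/2 endpoint weight
    return w * (1.0 + math.cosh(mu) * math.cosh(e)) / (math.cosh(mu) + math.cosh(e)) ** 2

def truncated_sum(eps=1e-4, hard_cap=300):
    """Accumulate terms until the partial sum changes by less than eps."""
    total = term(0)
    for p in range(1, hard_cap):
        new = total + term(p)
        if (new - total) / new < eps:
            return new, p
        total = new
    return total, hard_cap
```

Because the terms fall off roughly exponentially in $\tilde{p}$, only of order ten terms are needed at the tolerances used here.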
As an example, the relative error in $\lambda_{TF}$, $\Delta\lambda/\lambda = 1 -\lambda_{McL}/\lambda_{exact}$ (where the Thomas-Fermi length deduced
from the truncated sum is $\lambda_{McL}$ and that deduced from
Equation \ref{EM1} is $\lambda_{exact}$) induced by truncating the Euler-MacLaurin sum to a
maximum index of $\tilde{p}_{max}$ is shown in Figure \ref{error_gr} for
temperatures $T_9$=7 and 2,
at $\rho Y_e=5\times$10$^4$ g cm$^{-3}$, and three
values of the external magnetic field.
Even for a low value of $\tilde{p}_{max}=5$, the uncertainty is less than
1\%.
\begin{figure}
\centering
\gridline{
\fig{error_mcl_7-crop.pdf}{0.49\textwidth}{(a)}
\fig{error_mcl_2-crop.pdf}{0.49\textwidth}{(b)}
}
\caption{
Relative error in Euler-MacLaurin formula compared
to exact numerical computation for the integration in Equation \ref{EM1} as a function of maximum $\tilde{p}$ in the sum.
Computations are for $\rho Y_e = 5\times 10^{4}$ g cm$^{-3}$
at temperatures (a) $T_9= 7$ and (b) $T_9=2$. The maximum
Landau level calculated in each case is $N_{max} = 2000$.
}
\label{error_gr}
\end{figure}
The validity of this approximation in determining the screening length at temperatures $T_9 = 2$ and 7 at $\rho Y_e = 5\times$10$^4$ g cm$^{-3}$ is shown in Figure
\ref{low_p_high_b_compare}. In this figure,
the approximation given in Equation \ref{EM_p_b}
is used to determine the TF screening lengths.
For each line in the figure, only the lowest
12 terms in the sum over $\tilde{p}$ are used. That is,
$\tilde{p}_{max} = 12$. The maximum number of Landau levels
summed over is indicated for the various
results in the figure.
\begin{figure}
\gridline{
\fig{McL_p_T7-crop.pdf}{0.49\textwidth}{(a)}
\fig{McL_p_T2-crop.pdf}{0.49\textwidth}{(b)}
}
\caption{Electron Thomas-Fermi screening length using the
approximation described in Section \ref{low_p_high_b} for sums up to various maximum
Landau levels, as indicated in the figure.
In both figures, $\rho Y_e = 5\times10^4$ g cm$^{-3}$. The temperature is (a) $T_9=7$ and (b) $T_9=2$.
In panel (b), the lines corresponding to $n=100$ and $n=200$ lie on top of each other.
In this figure, only the lowest 12 terms
in the Euler-MacLaurin sum are computed, $\tilde{p}_{max} = 12$.
}
\label{low_p_high_b_compare}
\end{figure}
One sees that the lowest Landau level (LLL, $N_{max}$ = 0) approximation performs quite well at high fields
($\log(B)\gtrsim14$). At lower fields, more
Landau levels must be included in the sum.
For completeness, the dependence of this approximation on temperature and
density is shown
in Figure
\ref{delta_lambda}, which shows the relative error
in the TF length computed with Equation \ref{EM_p_b} compared
to that
computed with Equation \ref{EM1}. It is seen that -- in the weak screening
regime -- there is almost no dependence on density and a small dependence on
temperature. Even at low fields (Figure \ref{delta_lambda}b), the
error is relatively small. At lower temperatures, the error is somewhat
larger. However, this area would very likely correspond to
non-relativistic or intermediate screening.
\begin{figure}
\gridline{
\fig{delta_lambda-crop.pdf}{0.49\textwidth}{(a)}
\fig{delta_lambda_lowB-crop.pdf}{0.49\textwidth}{(b)}
}
\caption{Relative difference between the Euler-MacLaurin approximation to the TF screening length and the exact computation
as a function of $T$ and $\rho$ for (a) $B = 10^{16}$ G and
(b) $B = 10^{13}$ G. Here, only the LLL approximation is used for the 10$^{16}$ G case and the lowest 2000 Landau levels are used
for the 10$^{13}$ G case. In both panels, $Y_e=0.5$.
}
\label{delta_lambda}
\end{figure}
For a lower field of $B = 10^{13}$ G, the approximation of Equation \ref{EM_p_b}
is shown in Figure \ref{delta_lambda}b, including the lowest 2000 Landau
levels ($N_{max} = 2000$) and $\tilde{p}_{max} = 12$. The TF screening length is still
fairly well approximated over a wide range of temperatures and densities even at lower
B field if a sufficient number of Landau levels are included in the sum. For most
temperatures and densities, the screening length is within about 10\% of the actual Thomas-Fermi length. However, it is also noted that if the field
is low enough, $\lambda_{TF}$ for $B=0$ is an excellent approximation, and
the effect of the field can be ignored.
\section{Weak Interactions}
In addition to the inclusion of magnetized plasma effects on screening of the Coulomb potential and modifications to the electron-positron chemical potential, effects on weak interaction rates have also been examined. Weak interactions can be altered by changes to the electron Fermi-Dirac distribution function and the electron energy spectrum in weak decays \citep{luo20,grasso01,fassio69}. In addition, the shifts to the electron-positron chemical potentials in the thermal plasma are also modified.
The shift in chemical potentials can change the Fermi-Dirac functions, altering the available states for capture and decay as well as the Pauli blocking factors. This can
influence all of the weak interactions. Also, the electrons
and positrons involved in weak interactions
are constrained to Landau levels, creating nearly-discrete
energy spectra, especially at high fields.
In the presence of
magnetic fields, the phase space ($d^3p$) of the interactions is changed by the presence of Landau
Levels. The density of states is (in natural units):
\begin{equation}
dn \propto \frac{d^3p}{(2\pi)^3} \rightarrow \sum\limits_{n=0}^{\infty}\left(2 - \delta_{n0}\right)\frac{eB}{2\pi^2}dp_z.
\end{equation}
The corresponding shift in the lepton energy spectra can have dramatic effects
on the weak interaction rates in a magnetized plasma. With the inclusion of
density distributions modified by the existence of Landau levels,
the approximate weak interaction rates can be rewritten
with the momentum component parallel to the magnetic
field vector and the discrete transverse momentum components \citep{luo20,grasso01,fassio69}:
\begin{eqnarray}
\label{mag_beta_minus}
\Gamma_{\beta^-} & = &\kappa \frac{eB}{2} \sum\limits_{n=0}^{N_{max}}\left(2-\delta_{n0}\right)\int\limits_{\omega_\beta}^{Q}\frac{E(Q-E)^2}{\sqrt{E^2 - m_e^2-2neB}}\left(1-f_{FD}(E,\mu_e)\right)\left(1-f_{FD}(Q-E,-\mu_\nu)\right)dE,
\\
\label{mag_beta_plus}
\Gamma_{\beta^+} & = &\kappa \frac{eB}{2} \sum\limits_{n=0}^{N_{max}}\left(2-\delta_{n0}\right)\int\limits_{\omega_\beta}^{-Q}\frac{E(-Q-E)^2}{\sqrt{E^2 - m_e^2-2neB}}\left(1-f_{FD}(E,-\mu_e)\right)\left(1-f_{FD}(-Q-E,-\mu_\nu)\right)dE,
\\
\label{mag_beta_EC}
\Gamma_{EC} & = &\kappa \frac{eB}{2} \sum\limits_{n=0}^{N_{max}}\left(2-\delta_{n0}\right)\int\limits_{\omega_{EC}}^\infty\frac{E(E-Q)^2}{\sqrt{E^2 - m_e^2-2neB}}f_{FD}(E,\mu_e)\left(1-f_{FD}(E-Q,\mu_\nu)\right)dE,
\\
\label{mag_beta_PC}
\Gamma_{PC} & = &\kappa \frac{eB}{2} \sum\limits_{n=0}^{N_{max}}\left(2-\delta_{n0}\right)\int\limits_{\omega_{PC}}^\infty\frac{E(E+Q)^2}{\sqrt{E^2 - m_e^2-2neB}}f_{FD}(E,-\mu_e)\left(1-f_{FD}(E+Q,-\mu_\nu)\right)dE,
\end{eqnarray}
where the following quantities are defined \citep{arcones10, hardy09}:
\begin{eqnarray}
\omega_{EC/PC} &\equiv & \mbox{max}\left[\pm Q,m_e\right],
\\\nonumber
\omega_{\beta} &\equiv & \sqrt{m_e^2+2neB},
\\\nonumber
N_{max}&\le &\frac{Q^2 - m_e^2}{2eB},
\\\nonumber
\kappa & \equiv & \frac{B\ln 2}{K m_e^5},
\\\nonumber
B & \equiv & 1+3g_A^2
=
\left\lbrace
\begin{array}{ll}
5.76, & \text{nucleons}, \\
4.6, & \text{nuclei},
\end{array}
\right.
\\\nonumber
K &\equiv & \frac{2\pi^3\hbar^7\ln 2}{G_V^2 m_e^5} = 6144 \mbox{ s}.
\end{eqnarray}
Here the transition $Q$ value is the difference in nuclear masses.
In Equations \ref{mag_beta_minus} -- \ref{mag_beta_PC}, the
Fermi-Dirac distributions are cast to accommodate the electron
energy of individual Landau levels. The energy distribution of
an electron in the $n$th Landau level is:
\begin{equation}
f_{FD}(E,\mu_e) = \frac{1}{\exp{\left[\frac{\sqrt{E^2 + 2neB}-\mu_e}{T}\right]}+1}.
\end{equation}
For positrons, the chemical potential is negative.
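The structure of Equation \ref{mag_beta_minus} can be sketched numerically as follows, up to overall constants. Substituting the longitudinal momentum $k=\sqrt{E^2-m_e^2-2neB}$ as the integration variable removes the inverse-square-root endpoint singularity. The temperature, $Q$ values, and the choice $\mu_\nu=0$ (neutrino-transparent matter) are illustrative assumptions; energies and $eB$ are in MeV (MeV$^2$):

```python
import math

def f_FD(E, mu, T):
    """Fermi-Dirac occupation at total energy E."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

def beta_minus_rate(Q, eB, T, mu_e, mu_nu=0.0, m_e=0.511, nk=2000):
    """Landau-level sum of Equation (mag_beta_minus), up to overall constants.
    Integrating over k = sqrt(E^2 - m_e^2 - 2 n eB) turns the integrand
    E (Q-E)^2 dE / k into the regular form (Q-E)^2 dk."""
    n_max = int((Q * Q - m_e * m_e) / (2.0 * eB))
    total = 0.0
    for n in range(n_max + 1):
        g = 1.0 if n == 0 else 2.0           # spin degeneracy 2 - delta_n0
        w2 = m_e * m_e + 2.0 * n * eB        # omega_beta^2 for this level
        k_top = math.sqrt(Q * Q - w2)        # electron energy capped at Q
        h = k_top / nk
        s = 0.0
        for i in range(nk + 1):
            k = i * h
            E = math.sqrt(w2 + k * k)
            wgt = 0.5 if i in (0, nk) else 1.0   # trapezoid weights
            s += wgt * (Q - E) ** 2 \
                 * (1.0 - f_FD(E, mu_e, T)) * (1.0 - f_FD(Q - E, -mu_nu, T))
        total += g * eB * h * s
    return total
```

For example, with $\sqrt{eB}\simeq2.43$ MeV ($B=10^{15}$ G) a $Q=3$ MeV decay populates only the $n=0$ level, while $Q=12$ MeV sums over roughly a dozen levels.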
\begin{figure}
\gridline{
\fig{frame_000-crop.pdf}{0.32\linewidth}{}
\fig{frame_001-crop.pdf}{0.32\linewidth}{}
\fig{frame_002-crop.pdf}{0.32\linewidth}{}
}
\gridline{
\fig{frame_003-crop.pdf}{0.32\linewidth}{}
\fig{frame_004-crop.pdf}{0.32\linewidth}{}
\fig{frame_005-crop.pdf}{0.32\linewidth}{}
}
\caption{Evolution of the $\beta^-$-decay spectrum with magnetic field for
six different fields indicated in each panel. The red, dashed line indicates the spectrum for $B=0$, and
the black line indicates the spectrum for the magnetic field indicated in each figure. For this
series of figures, the decay Q value is 12 MeV, and the values of $T_9$ and $\rho Y_e$ are 2 and 500 g cm$^{-3}$
respectively. The magnetic field units are G.}
\label{beta_spec_evolution}
\end{figure}
\begin{figure}
\gridline{
\fig{beta_spec_lowQ_11-crop.pdf}{0.49\textwidth}{(a)}
\fig{beta_spec_highQ_11-crop.pdf}{0.49\textwidth}{(b)}
}
\gridline{
\fig{beta_spec_lowQ_12-crop.pdf}{0.49\textwidth}{(c)}
\fig{beta_spec_highQ_12-crop.pdf}{0.49\textwidth}{(d)}
}
\caption{Electron $\beta$-decay spectra for $B = 10^{15}$ G (a,b)
and 10$^{16}$ G (c,d) for low $Q$ values (a,c) and high $Q$ values (b,d). The
spectra are calculated at $T_9=2$ and $\rho Y_e$=500. The red, dashed lines
correspond to the spectra for $B=0$.}
\label{beta_spec}
\end{figure}
Unlike the case of an ideal Fermi gas, the electron-positron energy
spectrum in weak interactions is not thermal, and the LLL approximation is
not necessarily applicable. For example, the evolution with magnetic field of the $\beta^-$
spectrum for a nucleus with a
decay Q value of 12 MeV at $T_9=2$ and $\rho Y_e=500$ g cm$^{-3}$ is shown in
Figure \ref{beta_spec_evolution}. This spectrum
is the integrand of Equation \ref{mag_beta_minus}. In the case of a non-zero
field, the $\beta$ spectrum is a sum of
individual spectra for each Landau level with the maximum Landau level energy
less than the decay $Q$ value, $\sqrt{2neB + m_e^2}\le Q$.
For a lower field, the Landau level spacing is much less than the $Q$ value of
the decay, $\sqrt{eB}\ll Q$. An electron can
be emitted into any of a large number of Landau levels with level energies less
than the electron energy. The Landau level spacing is
quite small in this case. For decays to many possible Landau levels, the
integrated spectrum
is closer in value to the zero-field spectrum. In other words,
as $eB\rightarrow0$, the integrated non-zero-field spectrum
approaches the zero-field spectrum. The sum in Equation \ref{mag_beta_minus}
becomes an integral, and the Landau level spacing
$eB = \Delta p^2\rightarrow d^2p$. The sum over all Landau levels approaches
the zero-field spectrum.
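The collapse of the level sum onto the zero-field integral can be made concrete: with the half weight that $g_n$ assigns to $n=0$, a sum of the form $eB\sum_n g_n F(2neB)$ is a trapezoidal approximation to $\int_0^\infty F(p_\perp^2)\,dp_\perp^2$ with step $2eB$. A sketch with a toy level function $F(x)=e^{-x}$, an assumption purely for illustration:

```python
import math

def level_sum(eB, F=lambda x: math.exp(-x), n_top=5000):
    """eB * sum_n g_n F(2 n eB): a trapezoid rule in p_perp^2 = 2 n eB
    with step 2 eB, since g_n = 2 - delta_n0 gives half weight at n = 0."""
    return eB * sum((1.0 if n == 0 else 2.0) * F(2.0 * n * eB)
                    for n in range(n_top))

# Exact zero-field value of the integral for F(x) = exp(-x) is 1
err_coarse = abs(level_sum(0.1) - 1.0)
err_fine = abs(level_sum(0.01) - 1.0)
```

The discrepancy shrinks rapidly as $eB$ decreases, illustrating the approach to the zero-field spectrum.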
As the magnetic field increases, such that $\sqrt{2neB}\sim Q$, fewer Landau
levels contribute to the total spectrum. For a very few levels, the
zero-field and non-zero-field spectra can be dramatically different, and the
decay rates
can be magnified for higher fields.
This could be potentially important for an r process that proceeds in a high
magnetic field, such as in a collapsar jet or NS merger, for example. Because the r process
encompasses nuclei with a wide range of $\beta^-$ decay Q values, the effects
of
an external magnetic field can be significant.
This is illustrated in Figure
\ref{beta_spec}, which shows the electron energy spectrum in $\beta^-$ decay
for several cases of Q-value and magnetic fields. This spectrum is also the
integrand of Equation \ref{mag_beta_minus}. Spectra are computed
for $\beta^-$ decays at $T_9$=2 and $\rho Y_e$ = 500 g cm$^{-3}$.
\begin{figure}
\gridline{
\fig{rate_ratios_11-crop.pdf}{0.49\textwidth}{(a)}
\fig{rate_ratios_12-crop.pdf}{0.49\textwidth}{(b)}
}
\gridline{
\fig{rate_ratios_11_LLL-crop.pdf}{0.49\textwidth}{(c)}
\fig{rate_ratios_12_LLL-crop.pdf}{0.49\textwidth}{(d)}
}
\caption{Ratio of $\beta^-$ decay rates for decays of nuclei unstable
against $\beta^-$ decay in a non-zero
field to those in a zero field, $\Gamma(B\ne 0)/\Gamma(B=0)$ for magnetic
fields $B=10^{15}$ G (a,c)
and $B=10^{16}$ G (b,d). The top row corresponds to
ratios for which all relevant Landau levels are included in the
decay calculation, while the bottom row is for calculations for which
only the lowest
Landau level is included in the calculations. In all figures, $T_9 = 2$,
$\rho Y_e = 500$ g cm$^{-3}$. Note the difference in scales in each figure.}
\label{rate_ratios}
\end{figure}
In this figure, four cases are shown for each combination of two $Q$ values of 3
MeV and 12 MeV and two cases of magnetic field of 10$^{15}$ G and 10$^{16}$ G.
For the low $Q$ value of 3 MeV, the electrons can only be emitted into the lowest
Landau level for both fields, and the sum in Equation \ref{mag_beta_minus} consists of only the $n=0$ term;
$N_{max} = 0$. However, at a higher $Q$ value of 12 MeV, the electron
can be emitted into any of a number of Landau levels. For example,
an electron emitted with an energy of 6 MeV could fall into the $N=0$, 1, or
2 Landau level. The integration is thus a sum over all Landau levels up to
the maximum possible Landau level within the $\beta$ spectrum; $N_{max}=11$
in this case. For a field of 10$^{15}$ G, the Landau level spacing
$\sqrt{eB}=2.43$ MeV, which is less than the decay $Q$ value, so multiple
Landau levels contribute to the $\beta$ spectrum.
For a higher field of 10$^{16}$ G, with a Landau level spacing of 7.69 MeV,
even at high $\beta^-$ decay $Q$ values, only a few (or one) Landau levels
can be occupied by the emitted electrons. Further, as indicated
in Figure \ref{beta_spec}, for decay spectra that occupy very
few Landau levels,
the integrated spectrum, which is proportional to the total decay rate, can
be significantly higher than the zero-field spectrum.
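The quoted spacings follow from converting $\sqrt{eB}$ to energy units via the QED critical field $B_c=m_e^2/e\simeq4.414\times10^{13}$ G, at which $eB=m_e^2$:

```python
import math

M_E = 0.511        # electron mass [MeV]
B_CRIT = 4.414e13  # QED critical field [G], where eB = m_e^2

def landau_spacing(B_gauss):
    """sqrt(eB) in MeV for a field in gauss: m_e * sqrt(B / B_crit)."""
    return M_E * math.sqrt(B_gauss / B_CRIT)
```

This reproduces the values used above: about 2.43 MeV at $10^{15}$ G and 7.69 MeV at $10^{16}$ G.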
The relationship between the Landau level spacing and the $\beta$-decay $Q$ value
is important in considering the astrophysical r process. Because
the r process proceeds along a path of potentially very neutron-rich nuclei,
the $\beta^-$ decay $Q$ values can be quite large, $\sim$ 10 MeV. Thus,
for an r process in a high-field environment, the
decay rates could be quite sensitive to the field. However, because the $Q$ values are large,
one cannot necessarily assume that the decay rates can be computed with
just the LLL approximation.
The influence of high magnetic fields on $\beta^-$ decay is shown in Figure
\ref{rate_ratios} for two assumptions of the magnetic field and two assumptions of Landau levels (whether the LLL approximation
is used or not)
at a temperature $T_9 = 2$ and $\rho Y_e = 500$. Here, the ratios of decay
rates in a
non-zero field to those in a zero field $\Gamma(B\ne0)/\Gamma(B=0)$
are plotted for each $\beta^-$ unstable nucleus with $Q$ values
taken from the AME2016 evaluation \citep{ame2016}.
Several findings are noted in this figure. First, for nuclei closer to
stability, the $Q$ values are much lower, and the rate ratio is higher. This
is because electrons are emitted in only a few (or one) Landau levels.
These nuclei would correspond to the schematic cases of
Figures \ref{beta_spec}a and c.
For the higher field of
10$^{16}$ G, the figures for the LLL assumption and the assumption for all
relevant Landau levels are very similar, indicating that the LLL is the primary
contributor to the electron spectra in $\beta^-$ decay for all nuclei
at this field strength. At this field, the difference between the zero-field
and non-zero-field computations is
significant, and the increase in rates is much higher.
However, for a field of 10$^{15}$ G, inclusion of
only the LLL underestimates the total rate. Including all relevant Landau
levels
in the rate computation is necessary.
For more neutron-rich nuclei, more Landau levels are filled by the emitted
electron, and the $\beta$ spectrum more closely matches the zero-field
spectrum. Thus, the ratio approaches unity. This would correspond
to the case represented schematically in Figure \ref{beta_spec}b.
For a higher field,
the ratio is close to unity only for the most neutron-rich nuclei, where
the $Q$ values are high enough to fill multiple Landau levels in
the decay. For the
nuclei closer to stability, the $Q$ values are low enough that only a single
Landau level is filled by the ejected electron, resulting in a
decay spectrum that is significantly different from the zero-field case.
For the $B=10^{16}$ G case for nuclei close to
stability, the larger rates would correspond to the decay
spectrum represented schematically in Figure \ref{beta_spec}c.
\section{Effects of External Magnetic Fields in r-Process Nucleosynthesis}
As an example, r-process nucleosynthesis in a collapsar jet trajectory is examined.
It is
thought that the
magnetic fields associated with collapsar
jets and neutron star mergers (NSMs) could be as
high as 10$^{16}$ G \citep{nakamura15, kiuchi15, kiuchi14,takiwaki09}. Such strong
fields
are formed by amplifying initially weak fields associated with the accretion
region. While these fields may exist only near
the surface of the objects, they will be considered
as a possible upper limit in nucleosynthesis associated with
collapsars and NSMs. Within the actual jet region in this model,
fields have been computed to be $\sim 10^{12 - 14}$ G \citep{harikae}.
Other
evaluations of magnetic fields in collapsars or neutron star mergers
have resulted in similar fields near the surface or the accretion
disk, with some estimates up to and exceeding 10$^{17}$ G \citep{price,ruiz}.
While the field in the actual nucleosynthesis site may vary significantly,
a few field cases are examined here to show the field magnitudes necessary
to result in significant differences in the final r-process abundance
distribution. Some of the fields investigated in
the r-process nucleosynthesis studied here may well exceed
values realized in nature and are thus illustrative in
conveying field-strength effects in nucleosynthesis processes.
Temperature effects, on the other hand, are computed for the actual
environmental temperature of the r-process site.
Here, the
effects of Coulomb screening in the early stages of the r process as well as
the
effects from the enhancement of weak interaction rates by the external field
are examined.
Several nucleosynthesis scenarios are investigated to evaluate the effects
on
r-process nucleosynthesis. These scenarios are listed in Table
\ref{screen_mag_models}, where the notation X(F)$_{\mbox{log}B}$ is used; the label `X' refers to a specific screening and weak interaction treatment at a field $B$, and `F' indicates the inclusion of fission cycling or not.
For example, model A$_{14}$ is model A at a magnetic field of 10$^{14}$ G without fission cycling while model
AF$_{14}$ is the same model with fission cycling included.
The various models summarized are:
\begin{itemize}
\item No Coulomb screening and no magnetic field effects. (Models A$_{\mbox{log}B}$ and AF$_{\mbox{log}B}$)
\item Default classical screening in which weak screening is determined by electrons in a Maxwell distribution \citep{jancovici77,itoh79}.
(Models B$_{\mbox{log}B}$ and BF$_{\mbox{log}B}$.)
\item Relativistic screening in which the weak screening TF length is
determined from electrons in an ideal Fermi gas \citep{famiano16}.
(Models C$_{\mbox{log}B}$ and CF$_{\mbox{log}B}$.)
\item Relativistic screening including effects on the TF length from an
external magnetic field on the Fermi gas \citep{luo20}.
(Models D$_{\mbox{log}B}$ and DF$_{\mbox{log}B}$.)
\item Relativistic screening including effects on the TF length plus magnetic field effects on weak
interaction rates assuming
the LLL approximation. (Models E$_{\mbox{log}B}$ and EF$_{\mbox{log}B}$.)
\item Relativistic screening including effects on the TF length plus effects on weak
interaction rates including
all contributing Landau levels to the $\beta^-$ decays. (Models F$_{\mbox{log}B}$ and FF$_{\mbox{log}B}$.)
\end{itemize}
In Table \ref{screen_mag_models}, the models indicated by $B=0$
are those for which the magnetic field effects are not included
in the evaluation of screening or weak interactions. Model E includes
effects of the magnetic field on weak interactions, but only the LLL
approximation is used. Model F includes weak interaction effects for all
relevant Landau levels in $\beta^-$ decays.
\begin{table}
\centering
\caption{Models used to evaluate the effects of screening
from temperature and magnetic fields as well as effects from
magnetic fields on weak interactions. For each model, the subscript is the magnetic field strength. }
\begin{tabular}{|c|c|c|}
\hline
\textbf{Model} & \textbf{Screening} & \textbf{Weak Interactions}\\
\hline
A(F)$_{\mbox{log}B}$ & None & $B = 0$\\
B(F)$_{\mbox{log}B}$ & Classical & $B = 0$\\
C(F)$_{\mbox{log}B}$ & Relativistic ($B=0$) & $B = 0$\\
D(F)$_{\mbox{log}B}$ & Relativistic (B$\ne$0) & $B = 0$\\
E(F)$_{\mbox{log}B}$ & Relativistic ($B\ne0$) & $B \ne 0$, LLL only\\
F(F)$_{\mbox{log}B}$ & Relativistic ($B\ne0$) & $B \ne 0$, All LL\\
\hline
\end{tabular}
\label{screen_mag_models}
\end{table}
In order to evaluate the effects of magnetic fields on screening and
weak interactions in a possibly highly magnetized plasma in
the r process, a single trajectory from the MHD jet model of
\citet{nakamura15} was used. This trajectory is shown in Figure
\ref{MHD_traj}. Several values of a static, external magnetic field were
evaluated.
Because the field may not be well understood in
many sites, this evaluation is taken to be qualitative only as
a demonstration of the magnitude of the effects of strong
external fields in nucleosynthetic sites. Nucleosynthesis in
static fields, $14\le \mbox{log}(B) \le 16$, was evaluated.
For the r-process calculation, the initial composition
was assumed to be protons and neutrons with $Y_e = 0.05$
as given in \cite{nakamura15}.
The nuclear reaction network code \texttt{NucnetTools} \citep{libnucnet} was
modified to include thermodynamic effects and screening effects at high
temperature and magnetic fields.
The reaction network was a
full network which was truncated at $Z = 98$.
The weak interaction rates
were computed using the relationships in Equations \ref{mag_beta_minus} --
\ref{mag_beta_PC}. These rates
are ground-state transitions only. However,
the purpose of this initial evaluation is not an evaluation of accurate
weak interaction rates, but a description of the effects
of strong magnetic fields on nucleosynthetic processes. If transitions
\textit{to} excited states are included, the rates are expected to be even
more sensitive to external fields because of the smaller transition
Q value relative to the Landau level spacing (Figure \ref{beta_spec}), while transitions \textit{from} excited states may be less sensitive as the Q values are larger, though one must also account for changes in transition order when including excited states.
The nucleosynthesis was computed to 6000 s. In order to do this, an
extrapolation of the \cite{nakamura12} trajectories to low T and low $\rho$ was made
because the published trajectories stop at 2.8 s.
At low-enough temperatures and densities, neutron captures decline, and
only
$\beta$-decays and subsequent smoothing
ensue. The temperature and density extrapolation was done
assuming an adiabatic expansion for
$t>2.8$ s, $\log(T_9)\propto \log(\rho) \propto \log(t)$. This extrapolation
allows the temperature and density to drop significantly to follow
the processing
further
in time while examining effects from late-time fission cycling. Clearly,
there is still
some nucleosynthesis during this phase, and this is used to evaluate
long-term
effects of the nucleosynthesis.
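A minimal sketch of this extrapolation, with assumed anchor values at $t_0=2.8$ s and an assumed exponent $a$; the additional relation $T\propto\rho^{1/3}$ used here to fix the density exponent is the radiation-dominated adiabat, an assumption beyond the proportionality stated above:

```python
def extrapolate(t, t0=2.8, T9_0=1.0, rho_0=1.0e3, a=-1.0):
    """Power-law extrapolation for t > t0: log T9 and log rho linear in log t.
    t0 anchors the end of the published trajectory; T9_0, rho_0, and the
    exponent a are illustrative assumptions.  The density exponent 3a
    encodes the adiabat T ~ rho^(1/3), also an assumption here."""
    x = t / t0
    return T9_0 * x ** a, rho_0 * x ** (3.0 * a)
```

By construction, the extrapolated points stay on the adiabat $T_9^3/\rho = T_{9,0}^3/\rho_0$ at all later times.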
\begin{figure}[b]
\gridline{
\fig{adiabatic_t9.pdf}{0.49\textwidth}{(a)}
\fig{adiabatic_rho.pdf}{0.49\textwidth}{(b)}
}
\caption{Trajectory used for the MHD r-process nucleosynthesis
calculation. (a) Temperatures, $T_9$. (b) Density.}
\label{MHD_traj}
\end{figure}
To include screening effects, relativistic weak screening was used for
$T_9>0.3$. For lower temperatures, the classical
Debye-H\"uckel screening was used in models C - F (see Figure \ref{debye_fig}). In model B,
classical Debye-H\"uckel screening was used for all temperatures.
For the strong magnetic field, the
Thomas-Fermi length of
Equation \ref{EM_p_b} was used. For weaker fields, the difference
between the screening lengths
for the relativistic case at $B=0$ and at $B\ne 0$ is negligible as shown
in
Figure \ref{low_p_high_b_compare}.
Thus, to improve the speed of the network calculations, the LLL
approximation
was assumed with the
expansion of Equation \ref{EM_p_b}. In order to determine whether to use
the
LLL approximation or the
thermal screening length (with $B=0$), the inverse screening
length, $k\propto
1/\lambda$, was computed in each case, and the
maximum value was used:
\begin{equation}
k \rightarrow \mbox{max}\left[k(B=0),k(B\ne 0)\right].
\end{equation}
The resultant corresponding screening length is then determined by Equation
\ref{EM_p_b} at high fields
and the relativistic length computed in prior work \citep{famiano16} at lower
fields. Certainly, there is a
small transition region between the low-field and the high-field values shown
in Figure \ref{low_p_high_b_compare}
where the screening length is overestimated slightly. In this region, the
screening length could be overestimated by as much as $\sim$15\%, with a
resultant shift in the overall reaction rates of about 15\%. This can be
corrected by relaxing the LLL approximation and including as few as 10
Landau levels in the
length calculation. However, it is ignored in this evaluation because the
correction is small compared to the
change in screening length from the magnetic field. The r process is not
expected to be dominated by screening, as it is primarily a neutron-capture
process, and the time spent in this transition region is
expected to be brief compared to the process as a whole. Future, more accurate
evaluations may include this small correction.
Effects from fission cycling were included in a rudimentary
fashion following
the prescription of
\citet{shibagaki16}. In this model,
fission was implemented for the Cf isotopic chain, $^{270 - 295}$Cf.
Fission rates were assumed to be 100 s$^{-1}$ for all nuclei in this isotopic
chain. The fission parameters in the \cite{shibagaki16} model
are shown in Table \ref{fission_parms}. With this parameter
set, the fission distribution for $^{282}$Cf is
shown in Figure \ref{fission_dist}. Clearly, this fission
model is overly simplistic and does not represent the
full details of the nuclear structure necessary for a proper
determination of fission. However, as we will discuss later, it is
necessary to include fission in a collapsar/NSM r process, and this model
provides an appropriate level of detail to capture the overall
effects of intense magnetic fields on $\beta$ decays in this site.
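With the parameters of Table \ref{fission_parms}, the trimodal distribution can be sketched as a three-Gaussian mixture. The placement of the centroids (a symmetric mode at half the post-neutron-emission mass, and heavy/light modes offset by $\pm\alpha$ of that value) is an assumed reading of the parameters for illustration, not the published prescription:

```python
import math

def fragment_mass_pdf(A, A_parent=282, n_loss=2, w_i=0.2, w_hl=0.4,
                      sigma=7.0, alpha=0.18):
    """Trimodal fragment mass distribution sketch using the Table
    fission_parms values for 282Cf.  Centroid placement is an assumption:
    symmetric mode at A_f/2, heavy/light modes at (1 +/- alpha) * A_f/2."""
    A_f = A_parent - n_loss                    # mass available to fragments
    c_sym = A_f / 2.0
    modes = [(w_i, c_sym),
             (w_hl, c_sym * (1.0 - alpha)),
             (w_hl, c_sym * (1.0 + alpha))]
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return sum(w * norm * math.exp(-0.5 * ((A - c) / sigma) ** 2)
               for w, c in modes)
```

The mixture is normalized (the weights sum to one) and symmetric about $A_f/2$, consistent with the trimodal structure shown in Figure \ref{fission_dist}.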
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fission_probs-eps-converted-to.pdf}
\caption{Fission probability distribution in
product mass (black circles) and $Z$ (red squares) for
$^{282}$Cf showing the trimodal structure, which results from a combination
of symmetric and asymmetric
fission.}
\label{fission_dist}
\end{figure}
Fission of
the Cf isotopic chain here is meant to replace
neutron-induced fission, $\beta$-delayed fission, and
spontaneous fission of all fissile
nuclei produced in the r-process path. As such, the fission
product
distribution can be contrasted with that developed using
more accurate models. For example, the
evaluation of fission using the GEF 2016 and FREYA models
\citep{vassh2019} predicts similar neutron emission
in fission, though the fission product distribution for
the Cf nuclei is generally symmetric for spontaneous fission
with asymmetric components for neutron-induced fission.
Fission induced by $\beta$-decay of the Cf chain
has been predicted to be predominantly symmetric for $N>180$
with asymmetric components at lower mass \citep{vassh2019,kodama75}.
\begin{table}
\caption{Fission parameters used in this evaluation. Fission model
taken from \citet{shibagaki16}.}
\label{fission_parms}
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Parameter}& \textbf{Description} & \textbf{Value} \\
\hline
\hline
$W_{i}$ & Intermediate fragment probability & 0.2\\\hline
$W_{H/L}$ & Heavy/Light fragment probability & 0.4 \\\hline
$N_{loss}$ & Average neutrons/fission & 2 \\\hline
$\sigma$ & Width of fragment distribution & 7 \\\hline
$\alpha$ & Relative difference of centroids of fragment distributions & 0.18 \\\hline
\end{tabular}
\end{table}
\subsection{r-Process Abundance Distributions}
The final abundance distributions for all six models studied with and without fission
are shown in Figure \ref{MHD_abun} for a field of 10$^{15}$ G.
Figure \ref{MHD_abun_10} shows the final
abundance distributions for models including fission at a field
of 10$^{14}$ G. (All models except E$_{14}$ and EF$_{14}$ are shown in Figure \ref{MHD_abun_10}.)
The electron fraction $Y_e$ is plotted for
all models in Figure \ref{MHD_ye}.
\begin{figure}
\gridline{
\fig{MHD_abun_adi-crop.pdf}{0.48\textwidth}{(a)}
\fig{MHD_abun_f_adi-crop.pdf}{0.48\textwidth}{(b)}
}
\gridline{
\fig{MHD_abun_adi_Z-crop.pdf}{0.48\textwidth}{(c)}
\fig{MHD_abun_f_adi_Z-crop.pdf}{0.48\textwidth}{(d)}
}
\caption{Abundances at $t=6000$ s for MHD models for the
adiabatic trajectory in Figure \ref{MHD_traj} with an external field of 10$^{15}$ G.
Plots (a) and (c) show nucleosynthesis results without fission, and plots (b) and (d)
show nucleosynthesis results with fission.
For the models with fission, the points for
the default screening model nearly coincide with those for the
unscreened model, and the points for the relativistic screening
model for $B=0$ nearly coincide with those for the relativistic
screening model with $B=10^{15}$ G.}
\label{MHD_abun}
\end{figure}
\begin{figure}
\gridline{
\fig{MHD_abun_10-crop.pdf}{0.49\textwidth}{(a)}
\fig{MHD_abun_10_Z-crop.pdf}{0.49\textwidth}{(b)}
}
\caption{Abundances at $t=6000$ s for MHD models for the
adiabatic trajectory in Figure \ref{MHD_traj} for a field of 10$^{14}$ G
including fission.
The colors are the same as those in Figure \ref{MHD_abun}.
The
LLL approximation is not shown. }
\label{MHD_abun_10}
\end{figure}
\begin{figure}
\gridline{
\fig{ye_t_adi.pdf}{0.49\textwidth}{(a)}
\fig{ye_t_f_adi.pdf}{0.49\textwidth}{(b)}
}
\gridline{
}
\caption{Electron fractions as a function of time for
trajectories without fission (a) and with fission (b). In both
figures the lines for no screening, default screening, and relativistic ($B=10^{15}$ G) screening coincide. In the right figure, screening
with enhanced weak interactions deviates from the other models. The colors are the same as those shown in Figure \ref{MHD_abun}.}
\label{MHD_ye}
\end{figure}
In all cases, Coulomb screening of nuclear reactions has a minimal effect on the overall reaction network. This is not surprising, as the primary
reaction is neutron capture, which is immune to screening. While the inclusion
of magnetic fields creates a slight enhancement in the overall abundance for the
heavier nuclei due to the enhancement of charged-particle reactions
early in the r process (e.g., proton and alpha captures), this enhancement is
minimal. Likewise, the effects from default screening and
relativistic screening are negligible in this treatment.
However, the
inclusion of enhanced weak rates does have an effect on the overall
resultant reactions. For a full treatment, including accurate
computations of the weak rates with contributions from all relevant
Landau levels, the overall $\beta^-$ rates are higher, resulting
in a more rapid progression to the heaviest nuclei. As can be seen in the
case for no fission in Figure \ref{MHD_abun}, the rapid $\beta$-decay rates
result in a large abundance of nuclei near the endpoint of the reaction
network ($Z=98$). The nucleosynthesis progresses to the Cf
isotopic chain, where the abundance builds up. At this point, the only possible
reactions are (n,$\gamma$), (n,$\alpha$), neutron-induced fission, and photospallation reactions as a result of truncating the network at $Z=98$. This results in additional neutron
production and minimal production of $\alpha$ particles. Of course, this is
an unrealistic scenario because of the artificial termination point in
the nucleosynthesis, but it does convey the increased nucleosynthesis
speed from the high magnetic field in a very neutron-rich environment.
The LLL approximation for $\beta^-$ decay rates is also shown in this
figure. In this case, the Landau level spacing is generally less than the
decay Q value, except for a few low mass nuclei with $Z\lesssim20$. This results
in overall slower $\beta$-decay rates, leading to a slower progression to the heavy-mass nuclei and a larger relative abundance of the low-mass nuclei.
The right side of Figure \ref{MHD_abun} shows the final abundance
distributions if fission cycling is included in the network calculation.
As expected, there is very little difference between the
abundance distributions if nuclear screening is included in the
reaction network. However, the inclusion of
$\beta^-$ decay enhancement results in an enhancement of
the low-mass nuclei, $(Z,A)\lesssim (40,100)$. For the heavier mass
nuclei, fission products dominate the abundance distribution.
As fission becomes dominant, heavier-mass nuclei are enhanced
in abundance relative to that of the low-mass nuclei, and one
notices a relative increase in abundance for $Z\gtrsim40$ for
all models.
However, there is also an enhancement of the abundances of the low-mass
nuclei with field-enhanced decay rates relative to the abundances of
nuclei without them. This is likely a result of the more rapid progression of
the r-process to the fissile nuclei. There are two effects that can
be considered in this case. First, from Figure \ref{rate_ratios},
it can be seen that the enhancement of the $\beta^-$ decay rates
is less for lower mass nuclei than for the higher mass nuclei. While this
enhancement is small, it results in a somewhat slower progression of
the r-process through these lower-mass progenitors \textit{relative to} the progression through higher mass nuclei. Thus, a slight
buildup of abundance relative to the high-mass nuclei can result. This is
particularly noticeable if only the LLL is taken into account. The rate
differences are more pronounced, and the enhancement of low-mass
nuclear abundance is larger.
To a lesser extent, the neutrons produced in fission can also slightly enhance
the production of lower-mass nuclei. It is assumed
that two neutrons are produced in each fission in this model.
Because of the very large initial neutron abundance, the progression to
fission is not surprising in this scenario. However, for the case
in which decays are enhanced by the magnetic field, the progress to fissile nuclei
is more rapid.
Thus, more
fission neutrons are produced in the r-process. These can be used
as fuel for subsequent processing. Of course,
neutrons produced in fission are captured by \textit{all}
progenitor nuclei, and not just the low-mass nuclei.
The slightly less-enhanced decay rates of the low-mass nuclei, on the other
hand, result in
an abundance that is likely even more enhanced than in the absence
of fission.
From Figure \ref{MHD_abun}, one also notes that there is a slight
shift to higher mass in the final abundance distribution for
the field-enhanced case. This is because the more rapid
decay rates result in a slight shift of the r-process path closer
to stability than in the case with zero field. This shift is
prominent at the abundance peaks.
For an r-process path
that is closer to stability, the path intersects the
magic numbers at a higher mass, resulting in the slight shift
by a few mass units.
This is shown in the inset for the A$\sim$195 abundance peak
in Figure \ref{MHD_abun}b.
Given the prominent contribution to the final abundance distribution by fission,
it is thus emphasized that -- in the collapsar model here -- fission
cycling is an integral part of the r-process calculation.
An evaluation at a field of 10$^{14}$ G is shown in Figure \ref{MHD_abun_10}.
In this figure, the abundances at $t=6000$ s for a calculation including fission
are shown, and the LLL approximation has been
removed for clarity. As expected, for the lower field, the decay rates are
closer to the zero field decay rates, and the overall shift in the abundance
distribution is smaller, though a small increase in abundance is noted for $A<100$.
This trend
is consistent with the trends observed at higher fields, though to a lesser extent.
The electron fraction as a function of time, $Y_e$, is shown for all
six models with and without fission cycling in Figure \ref{MHD_ye} at a field of
10$^{15}$ G.
For each case, it is observed that screening has a minimal
effect on the evolution of the electron fraction. During the early stages of the r process, the high-temperature environment is in nuclear statistical
equilibrium (NSE). As the environment cools and expands, individual reactions take over from equilibrium, leaving a small time window during
which charged-particle reactions (e.g., ($\alpha$,$\gamma$),
($\alpha$,n), etc.) may occur. These are the reactions that would
be affected by Coulomb screening.
Without
fission, the dominant contribution to $Y_e$ is from the Cf isotopic
chain. In the case of field-enhanced decay rates, because the
progression to the Cf chain is more rapid, an equilibrium $Y_e$
occurs very rapidly, with a more rapid progression if all Landau
levels are included in the decays, as expected. It is also
noted that a complete inclusion of all Landau levels results in
a slightly higher equilibrium $Y_e$ as the r-process path is closer to
stability. For the other calculations, the $Y_e$ is lower as
the r-process path is more neutron-rich as explained previously.
Figure \ref{MHD_ye}b shows the evolution of $Y_e$ in the more
realistic case including fission cycling in the calculation.
Here, as the r-process becomes dominated by fission products, the
equilibrium $Y_e$ is similar in all cases. However, it can be seen
that inclusion of the field-enhanced rates results in an earlier
rise in the electron fraction owing to a more rapid r process combined
with a more rapid decay to stability.
\begin{figure}
\gridline{
\fig{sr_dy_10-crop.pdf}{0.49\textwidth}{(a)}
\fig{sr_dy_11-crop.pdf}{0.49\textwidth}{(b)}
}
\gridline{
\fig{sr_dy_12-crop.pdf}{0.49\textwidth}{(c)}
\fig{ratios-eps-converted-to.pdf}{0.49\textwidth}{(d)}
}
\caption{Sr/Dy abundance ratios for the collapsar network
calculation for all models in Table \ref{screen_mag_models} for
(a) $B = 10^{14}$ G, (b) $B = 10^{15}$ G, and (c) $B = 10^{16}$ G. The colors are the same as those shown in Figure \ref{MHD_abun}. Panel (d) shows
abundance double ratios given by Equation \ref{double_ratio} for four elements as a
function of magnetic field.}
\label{sr_dy_fig}
\end{figure}
\subsection{Abundance Ratios}
The overall final abundance distribution can be characterized by various abundance ratios. This is particularly helpful in that these
provide a characteristic number to gauge the relative contribution
from fission compared to the abundance buildup of light nuclei.
The Sr/Dy ratio is shown for three fields as a function of time in
Figure \ref{sr_dy_fig} for all six models studied. The zero-field cases
are represented by the unscreened and screened relativistic models. The
figure shows the abundance ratio for the cases in which
fission cycling is accounted for.
In all cases, the value of the abundance ratio, $Y_{Sr}/Y_{Dy}$ drops
rapidly as the r-process path moves to the heavier nuclei and into the
fissile nuclei, after which an equilibrium abundance of
Dy begins to be produced via fission. The abundance ratio continues
to drop more gradually with time after $\sim$4 s, when the Dy
continues to build more slowly, and an equilibrium abundance of Sr
is approached. This evolution continues into the post-processing
of the r process. It is also noticed that the relativistic screening effect -- though small -- is
more prominent than effects from classical screening, resulting in a slight
reduction in the Sr/Dy ratio. While this reduction is
small compared to effects from the magnetic field on $\beta$ decays, it can be
seen in the figures.
For the lowest field, the effect of the enhanced rates is small because the field-enhanced rates -- consisting of decays to
many Landau levels -- are similar to the non-enhanced rates. If only
the LLL approximation is used (model EF$_{14}$), the evolution is
significantly different as the rates are grossly underestimated, resulting
in a very slow r-process evolution, and the Sr/Dy abundance ratio does
not drop until much later in the evolution. For the highest field,
on the other hand, there is a smaller difference between the LLL approximation
(model EF$_{16}$)
and the inclusion of all Landau levels (model FF$_{16}$) in the decay rates because
only a few Landau levels are populated in beta decays at this
field.
Shown in Figure \ref{sr_dy_fig}d are various abundance ratios
Y$_{Sr}$/Y$_{X}$ (where $X$ indicates an arbitrary element)
at $t=6000$ s as a function of the magnetic field.
Plotted in the figure is the relative elemental
abundance double ratio, $R$, defined as:
\begin{equation}
\label{double_ratio}
R\equiv \frac{\left(Y_{Sr}/Y_X\right)_B}{\left(Y_{Sr}/Y_X\right)_{B=0}}
\end{equation}
which shows the evolution of the elemental abundance ratios as the field
increases.
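As a minimal numerical sketch of Equation \ref{double_ratio}, the double ratio can be computed from elemental abundances at two field strengths. The abundance values below are placeholders chosen for illustration, not results of this work:

```python
# Hypothetical elemental abundances Y (arbitrary units); the values below
# are placeholders for illustration, not results from this paper.
Y_B = {"Sr": 1.2e-4, "Tl": 9.0e-5}   # assumed abundances at finite B
Y_0 = {"Sr": 1.0e-4, "Tl": 3.0e-5}   # assumed abundances at B = 0

def double_ratio(Y_B, Y_0, X, ref="Sr"):
    """Double ratio R = (Y_ref/Y_X)_B / (Y_ref/Y_X)_{B=0}."""
    return (Y_B[ref] / Y_B[X]) / (Y_0[ref] / Y_0[X])

# R < 1 here: the assumed Tl enhancement at high field lowers Sr/Tl.
print(double_ratio(Y_B, Y_0, "Tl"))  # → 0.4
```

A value of $R<1$ for element $X$ thus signals that $X$ is enhanced at finite field relative to Sr, as discussed for Tl below.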
For low fields, all values are expected to converge to unity, as seen in
the figure. However, as fields increase, different physical processes
affect the ratios.
For the lowest $Z$ element (Te), which can be weakly populated by fission at
all fields, a more rapid progression to the fission products can result
in a slightly increased production of Te.
However, production of Sr via neutron capture is enhanced by the strong magnetic field. Also, the Sr decay rates are not enhanced as much as those of Te. Thus
the Sr/Te ratio increases
with field. For Ba and Dy, however, there is an increase, followed by
a decrease. This is because the population of Ba and Dy by fission not
only depends on the rate of progression to the fissile nuclei, but also the
final fission distribution. As the r-process path progression to fission
for $B=10^{15}$ G is similar to that for $B=10^{14}$ G (as will be described in
the next section), the production of the Ba and Dy progenitors is
faster as the field increases up to $B=10^{15}$ G. However, above this field,
the $\beta^-$ decay rates are fast enough such that the r-process
path itself -- being dynamic in nature -- shifts sufficiently such that the
distribution of fissile nuclei changes,
and the fission product distribution changes
somewhat. One might imagine the peaks of the fission distribution in
Figure \ref{fission_dist} shifting to lower mass, thus raising
or lowering abundances of
the progenitors of Ba and Dy. Clearly, the fission model used
in this work is
too simplistic to support more than a qualitative conclusion, but
the interplay
between the fission product distribution and the magnetic
fields compels further
investigation.
The element Tl is also fascinating. It is seen that the Sr/Tl
ratio decreases
with field. Tl lies above the fission products in mass and
Z. However, it also lies
just above the A=195 peak in the r-process distribution.
Recall from Figure \ref{MHD_abun} that the
r-process
distribution shifts slightly to higher mass as the field
increases, shifting
the $A=195$ abundance peak as well. This shift, in turn,
increases the Tl
abundance dramatically, thus reducing the Sr/Tl abundance
ratio.
This effect of the magnetic field on the shape of the final r-process
abundance distribution, and hence, the Sr/X abundance ratio is
compelling as an r-process from a single collapsar site can be characterized by
the abundance distribution, and the magnetic field may be constrained by
the abundance ratios. Obviously, a more thorough evaluation incorporating a more realistic fission model is
necessary \citep{beun08,mumpower18,suzuki18,vassh2019},
but a qualitative statement about the
effect on the shape of the abundance distribution can still be made.
While not explicitly evaluated here, it is noted that if
magnetic fields as high as $10^{15}$--$10^{16}$ G exist in
r-process sites, neutron capture rates as well as charged-particle
reactions may be affected significantly via changes in nuclear
distribution functions. The field effect on reaction rates has
been studied for one important reaction in big bang
nucleosynthesis, $^7$Be($n$,$p$)$^7$Li \citep{kawasaki12}. That
reaction rate is affected only in cases of large magnetic fields
which can be excluded by observations of primordial abundances.
However, the field effect through modified distribution functions
can potentially change the neutron capture reaction rates with
non-flat $(\sigma v)(E)$ curves at low energies under strong
magnetic fields.
The ratios studied here may be of particular interest to
astronomers in evaluating elemental abundance ratios
in stars enriched in single sites. These ratios
are generally low compared to solar r-process abundance values \citep{arlandini99} owing to the fact that the single neutron-rich trajectory
presented here results in a large abundance of massive elements. The range of
observed values from the SAGA database \citep{suda08} is also large compared to the values here. This likely results both from a detection limit and
from the fact that if collapsar jets
contribute to the galactic r-process abundance distribution, they contribute in combination with
other sites.
\subsection{Fission Cycling Time}
The fission cycling time has also been examined, and the effects
of a strong magnetic field on the overall fission cycling have been
evaluated from the standpoint of the total time to progress from light nuclei to the fissile nuclei. Naively, one would expect that the
fission cycling time would decrease with magnetic field as the nucleosynthetic
progression speeds up.
Here, the fission cycling time is thought of as the time
to progress from the low-mass nuclei in an r-process path to the fissile
nuclei. The low-mass nuclei are defined to be those in the Zr isotopic
chain ($Z=40$), and the high-mass nuclei are defined to be those in the fissile region ($Z=98$). While this is a somewhat arbitrary choice, and
while fission cycling is more complex than this, such a method
provides a figure-of-merit for the speed at which nuclei
can cycle through the r-process to the fissile nuclei.
Using the $\beta^-$ decay lifetimes, $\tau_{\beta,i}$, for nuclei along the r-process
path, the fission cycling time, $\tau_f$, is then defined as:
\begin{equation}
\label{sum_fission}
\tau_{f} \equiv \sum\limits_{z=40}^{98}\tau_{\beta,z}
\end{equation}
where the sum is over the most abundant isotope of each element between Zr ($Z=40$) and Cf ($Z=98$) along the r-process path
at a specific point in time. It then remains to choose the point in time
at which the r-process path is evaluated. Two methods are utilized
to characterize the r-process path.
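The sum in Equation \ref{sum_fission} is straightforward to evaluate once a path is fixed. The following sketch illustrates it with uniform placeholder lifetimes (not computed decay rates from this work):

```python
# Illustrative sketch of the fission cycling time: the sum of beta-decay
# lifetimes of the most abundant isotope of each element from Zr (Z=40)
# to Cf (Z=98) along a chosen r-process path.  The lifetimes below are
# hypothetical placeholders, not computed rates from this work.

def fission_cycling_time(tau_beta):
    """tau_f = sum over Z=40..98 of tau_beta[Z] (seconds)."""
    return sum(tau_beta[Z] for Z in range(40, 99))

# Placeholder: assume a uniform 10 ms lifetime for each of the 59 elements.
tau_beta = {Z: 0.010 for Z in range(40, 99)}
print(fission_cycling_time(tau_beta))
```

Field-enhanced $\beta^-$ decay rates shorten the individual lifetimes $\tau_{\beta,Z}$ and hence reduce $\tau_f$, which is the trend quantified below.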
With the first method, the r-process path is chosen at the point in the evolution when the region containing fissile nuclei is first reached in the
r process, i.e., at the epoch when the
first Cf nuclei are produced. This is defined to be the point in time when
$Y_{Cf}\ge 10^{-20}$. Because this point depends on
the magnetic field, the r-process path at this epoch is
unique for each magnetic field. In addition, the temperature and density of the environment are also different at this point, and thus the electron chemical potentials
vary in each case. Here, the total fission time is defined by the
term $\tau(B)$, and the path so-chosen is referred to as the
``dynamic'' r-process path.
A second method is adopted for comparison. With this method the r-process path,
temperature, and density are chosen to be fixed and independent of the
external field; the chosen isotopes are
the same for each choice of field. The $\beta$-decay lifetimes are then computed
for this path as a function of the magnetic field. In this case, the r-process
path is chosen to be defined by the isotope with maximum abundance for
each element at the point in time when $Y_{Cf}\ge 10^{-20}$ for a specific
field of $10^{14}$ G. At this point in the r-process
evolution, the temperature and density are $T_9=1.76$ and
$\rho=377.9$ g cm$^{-3}$, respectively. Here, the total lifetime is defined by
the term $\tau_s$, and the path is referred to as the ``static'' path.
In order to compute $\tau(B)$, the r-process path must then
be defined for each field, including a field of $10^{14}$ G, which is also used
to define the r-process path used to compute $\tau_s$. The paths defined this
way are shown in Figure \ref{fission_fig}. For fields of
10$^{14}$ G and 10$^{15}$ G, the paths are very similar. The dynamic path
corresponding to a field of 10$^{16}$ G is significantly closer to the
valley of stability because of the significantly faster $\beta$ decay
rates.
\begin{figure}
\gridline{
\fig{fission_path-eps-converted-to.pdf}{0.49\textwidth}{(a)}
\fig{fission_time-eps-converted-to.pdf}{0.49\textwidth}{(b)}
}
\caption{(a) Dynamic r-process paths used to determine fission cycling times
as described in the text. The dynamic path at 10$^{14}$ G is also the
static path used in this evaluation. (b) Fission cycling times as a function
of magnetic
field (in units of G) for the dynamic path, static path, and the zero-field case.}
\label{fission_fig}
\end{figure}
The computed fission-cycling times are also shown in Figure
\ref{fission_fig}b for both definitions of r-process path and for the
zero-field case. In either case of the path definition, the fission cycling
time decreases with field. The fission cycling time for a field of
10$^{14}$ G roughly corresponds to a fission cycling time with
zero field. (The small difference between the two is due to the imposed
1\% accuracy of the decay rate calculation.)
For the static path, the difference is more pronounced at higher fields
because the static path is defined to be farther from stability than the dynamic
path for the
10$^{16}$ G field. The difference between the static path and the dynamic path
case at a field of 10$^{15}$ G is small as the dynamic path at
10$^{15}$ G is very similar to the static path.
It is apparent from this result that fission cycling is faster at higher fields
and thus becomes more prominent. The production of fissile nuclei increases
during the r-processing time. This results in more fission products, but also
an additional neutron abundance in the r-process environment. The initial very
low $Y_e$ in the collapsar model is particularly conducive to
producing a significant abundance of fissile nuclei.
\section{Discussion}
Plasma effects on nuclear fusion and weak interactions in hot, highly-magnetized
plasmas were
evaluated, and the example of r-processing in a collapsar MHD jet site was
examined. Two primary effects
were analyzed. The first is the effect of Coulomb screening on fusion reactions
of charged particles.
Because the r-process is dominated by neutron captures, screening has a small
effect on the overall
evolution and final abundance distribution of the r process. However,
charged-particle reactions
in the early stages (e.g., ($\alpha$,n) and ($\alpha$,$\gamma$) reactions) may be affected. Coulomb
screening
is affected by both the temperature and the magnetic field of the environment.
While the
default classical weak screening commonly used in astrophysics codes was found
to have virtually
no effect on the final r-process abundance distribution, relativistic effects
from
high temperatures and high magnetic fields were found to have a slight effect on
the r-process evolution.
The second effect studied is the effect of high magnetic fields on nuclear weak
interaction
rates. As fields increase in strength, electron momentum transverse to the
field direction is
quantized into Landau levels. This alters the Fermi-Dirac distribution,
resulting in
a shift in the electron spectrum. While the magnetic field was found to have a
small effect
on Coulomb screening, strong fields may have a larger effect on nucleosynthesis
when
applied to weak interaction rates. This is because -- particularly in the case
of
the finite $\beta$-decay spectrum -- only a limited number of Landau levels can
be occupied by the
emitted charged lepton, as indicated in Figures \ref{beta_spec_evolution} and
\ref{beta_spec}.
For very high fields, $\sqrt{eB}\sim Q$, only a couple of Landau levels are
available to the
emitted electron or positron. The electron energy spectra have strong peaks
where the electron
longitudinal momentum is zero. The integrated spectrum, which is proportional
to the
decay rate, is thus much larger than that for the zero-field case. Large fields
can affect the r-process evolution.
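For orientation, the field scale at which this occurs can be estimated with a rough order-of-magnitude argument (a sketch added here, not a result of the calculations above). In natural units the condition $\sqrt{eB}\sim Q$ is equivalent to
\[
\frac{B}{B_c} \sim \left(\frac{Q}{m_e c^2}\right)^2,
\qquad
B_c = \frac{m_e^2 c^3}{e\hbar} \approx 4.41\times 10^{13}~\mathrm{G},
\]
where $B_c$ is the QED critical field at which the Landau energy scale equals $m_e c^2$. For a typical $\beta$-decay $Q$ value of a few MeV, this gives $B\sim 10^{15}$ G, consistent with the field strengths considered in this work.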
A simple MHD collapsar jet model was adapted from the
hydrodynamics calculations of \cite{nakamura12} as an
illustrative model. In
this model, static fields of various strengths were assumed.
Thermal
and field effects were studied individually in a systematic
manner to gauge the effects
of individual environmental parameters. While the temperature
was treated dynamically
following a single trajectory, which was assumed to decay
adiabatically after 2.8 s, the
magnetic field in this case was assumed constant.
One interesting result of magnetic-field effects on the r process
studied was that $\beta^-$ decay rates increase with field strength. Because
of this, the r-process path, which
changes dynamically in time, may shift somewhat closer to stability for very
strong
fields. This has multiple effects. First, the point at which the r-process
path crosses
the magic numbers changes, thus shifting the abundance peaks of the final
distribution. This shape can be evaluated using elemental abundance ratios,
such as Y$_{Sr}$/Y$_{Tl}$. Second, the
fissile nuclei produced in the r process will be different, resulting in
potentially different
fission rates and distributions. This could possibly be studied using abundance
ratios
such as Y$_{Sr}$/Y$_{Ba}$, Y$_{Sr}$/Y$_{Dy}$, or something similar. Finally,
the fission cycling
time decreases somewhat with increasing field, resulting in an increase in
fission products as well as a slight addition of neutrons to the r-process
environment.
While the results presented require more precise evaluations, it
is interesting to note that -- in a highly-magnetized r-process site -- the elemental abundance ratios can constrain the magnetic field of the
site and vice versa. This might be of interest to astronomers
in evaluating stellar abundance ratios in objects thought to contain
single-site abundances. The characteristic abundance ratios
with an MHD/collapsar model -- even at zero field -- may
characterize the contribution to r-process elements in a star. While
the fields presented in this paper are quite large -- commensurate
with a collapsar, MHD, or possibly NS merger -- if such fields
can be sustained in an r-process site, they would be manifest
in the isotopic ratios of the site.
Further, it is noted that fission in the collapsar model and effects
from the magnetic field may change the contribution to
currently observed elements in Galactic chemical evolution (GCE)
models. This will be studied in a subsequent paper.
The limitations of the model presented here are noted. These include
primarily the static field assumption and the simplified
fission model used. If the static field is assumed to be the maximum
field in the site, then the results could be thought of as upper limits. Also, the simplified fission model was used as the
primary evaluation of this paper was on the effects of strong
magnetic fields in nucleosynthesis sites. The progenitor
nuclei examined in the r-process site in this paper -- being
quite far from stability -- were treated in this much simpler
manner. Future work will concentrate on a more thorough treatment
of fission in the collapsar/MHD site and its effects on GCE. In
addition, a dynamic treatment of the magnetic field will be
examined.
\acknowledgments
M.A.F. is supported by National Science Foundation Grant No. PHY-1712832 and by NASA Grant No. 80NSSC20K0498. A.B.B. is supported in part by the U.S. National Science Foundation Grant No. PHY-1806368.
T.K. is supported in part by Grants-in-Aid for Scientific Research of JSPS (17K05459, 20K03958). K.M. is supported by JSPS
KAKENHI Grant Number JP19J12892. M.K. is supported by NSFC Research Fund for International Young Scientists (11850410441). Y.L. is supported by JSPS KAKENHI Grant Number 19J22167.
M.A.F. and A.B.B. acknowledge support from the NAOJ Visiting Professor program.
\bibliographystyle{aasjournal}
\section{Introduction}
In this article, we are concerned with solving the trust region problem, as it
frequently arises as a subproblem in sequential algorithms for nonlinear
optimization.
For this, let $ \mathcal H $ denote a Hilbert space with inner product $ \langle
\cdot, \cdot \rangle $ and norm $ \Vert \cdot \Vert $. Then, $ H: \mathcal H \to
\mathcal H $ denotes a self-adjoint, bounded operator on $\mathcal H$.
\revise{We assume that $H$ has compact negative part, which implies sequential weak lower semicontinuity of the mapping $ x \mapsto \langle x, Hx \rangle $, cf.~\cite{Hestenes1951} for details and a motivation}. \revise{In particular, we assume that self-adjoint, bounded operators $ P$ and $K $ exist on $ \mathcal H $, such that $ H = P - K $, that $ K $ is compact, and that $ \langle x, Px \rangle \ge 0 $ for all $ x \in \mathcal H $}.
The operator $ M: \mathcal H \to \mathcal H $ is
self-adjoint, bounded and coercive such that it induces an inner product $ \langle
\cdot, \cdot \rangle_M $ with corresponding norm $ \Vert \cdot \Vert_M $ via $
\langle x, y \rangle_M := \langle x, M y \rangle $ and $ \Vert x \Vert_M :=
\sqrt{ \langle x, x \rangle_M } $.
Furthermore, let $ \mathcal X \subseteq \mathcal H $ be a closed subspace.
\label{def:tr}
The trust region subproblem we are interested in reads
\begin{align}
\label{eq:tr}\tag{$\textup{TR}(H,g,M,\Delta,\mathcal X)$}
\hspace*{-2cm}\left\{
\quad\begin{array}{cl}\displaystyle
\min_{x \in \mathcal H}\quad& \tfrac 12 \langle x, Hx \rangle + \langle x, g \rangle \\
\textup{s.t.}\quad & \Vert x \Vert_M \le \Delta,\\
& x \in \mathcal X,
\end{array}
\right.
\end{align}
with $g\in\mathcal H$, objective function $ q(x) := \tfrac 12
\langle x, Hx \rangle + \langle x, g \rangle $, and trust region radius $ \Delta > 0 $.
Usually we take $ \mathcal X = \mathcal H $ but will also consider truncated versions where $ \mathcal X $ is a finite dimensional subspace of $ \mathcal H $.
Readers who are less comfortable with the function space setting may think of
$H$ as a symmetric matrix, of $\mathcal H$ as $\mathbb R^n$, and of
$M$ as the identity on $ \mathbb R^n $, inducing the standard scalar product and the Euclidean norm
$\Vert \cdot\Vert_2$.
We follow the convention to indicate coordinate vectors $ \boldsymbol x \in \mathbb R^n $ with boldface letters.
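To make the finite-dimensional reading concrete, the following minimal sketch solves a small indefinite instance of \eqref{eq:tr} with $M = I$ and $\mathcal X = \mathbb R^2$ by bisection on the Lagrange multiplier of the norm constraint. This is an illustrative dense solver in the spirit of Mor\'{e} and Sorensen, not the matrix-free approach of \texttt{trlib}; the data $H$, $g$, $\Delta$ are an assumed toy example:

```python
import numpy as np

# Tiny dense instance of TR(H, g, I, Delta) in R^2: find lam >= max(0, -lambda_min(H))
# with ||(H + lam*I)^{-1} g|| = Delta (boundary solution, since H is indefinite).
H = np.array([[2.0, 0.0], [0.0, -1.0]])   # indefinite Hessian (assumed example)
g = np.array([1.0, 0.5])
Delta = 1.0

lam_min = np.linalg.eigvalsh(H)[0]
lo, hi = max(0.0, -lam_min) + 1e-12, 1e6  # bracket for the multiplier
for _ in range(200):                       # bisection on ||x(lam)|| - Delta
    lam = 0.5 * (lo + hi)
    x = np.linalg.solve(H + lam * np.eye(2), -g)
    if np.linalg.norm(x) > Delta:
        lo = lam                           # step too long: increase lam
    else:
        hi = lam
print(x, lam)  # boundary solution with ||x|| close to Delta
```

Since $H$ has a negative eigenvalue, the minimizer lies on the boundary $\Vert x\Vert = \Delta$ and the multiplier must exceed $-\lambda_{\min}(H)$; production codes replace the bisection by safeguarded Newton iterations on a secular equation.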
\medskip
\subsubsection*{\bf Related Work}
Trust Region Subproblems are an important ingredient in modern optimization
algorithms as a globalization mechanism. The monograph \cite{Conn2000} provides
an exhaustive overview of Trust Region Methods for nonlinear programming, mainly
for problems formulated in finite-dimensional spaces. For trust region
algorithms in Hilbert spaces, we refer to
\cite{Kelley1987,Toint1988,Heinkenschloss1993,Ulbrich2000} and for Krylov
subspace methods in Hilbert space \cite{Guennel2014}.
In \cite{Absil2007}
applications of trust region subproblems formulated on Riemannian manifolds are
considered. Recently, trust region-like algorithms with guaranteed complexity
estimates in relation to the KKT tolerance have been proposed
\cite{Cartis2011,Cartis2011a,Curtis2016}. The necessary ingredients in the
subproblem solver for the algorithm investigated by Curtis and Samadi
\cite{Curtis2016} have been implemented in \texttt{trlib} as well.
Solution algorithms for trust region problems can be classified into direct
algorithms that make use of matrix factorizations and iterative methods that
access the operators $ H $ and $ M $ only via evaluations $ x \mapsto Hx $ and $
x \mapsto M x $ or $ x \mapsto M^{-1} x $. For the Hilbert space context, we
are interested in the latter class of algorithms. We refer to \cite{Conn2000}
and the references therein for a survey of direct algorithms, but point out the
algorithm of Mor\'{e} and Sorensen \cite{More1983} that will be used on a
specific tridiagonal subproblem, as well as the work of Gould et
al.~\cite{Gould2010}, who use higher order Taylor models to obtain high order
convergence results. The first iterative method was based on the conjugate
gradient process, and was proposed independently by Toint~\cite{Toint1981} and
Steihaug~\cite{Steihaug1983}. Gould et al.~\cite{Gould1999} proposed an
extension of the Steihaug-Toint algorithm. There, the Lanczos algorithm is used
to build up a nested sequence of Krylov spaces, and tridiagonal trust region
subproblems are solved with a direct method. This idea also forms the basis for
our implementation. Hager \cite{Hager2001} considers an approach that solves
the problem restricted to a sequence of subspaces, using SQP iterates
to accelerate the method and ensure quadratic convergence. Erway et
al.~\cite{Erway2009,Erway2010} investigate a method that also builds on a
sequence of subspaces, built from accelerator directions satisfying optimality
conditions of a primal-dual interior point method.
\revise{In the methods of Steihaug-Toint and Gould, the operator $ M $ is used to
define the trust region norm and acts as preconditioner in the Krylov subspace algorithm.
The method of Erway et al.~allows the use of a preconditioner that is independent
of the operator used for defining the trust region norm.
The trust region problem can equivalently be stated as a generalized eigenvalue problem.
Approaches based on this characterization} are studied by Sorensen \cite{Sorensen1997}, Rendl and
Wolkowicz \cite{Rendl1997}, and Rojas et al.~\cite{Rojas2000,Rojas2008}.
\subsubsection*{\bf Contributions}
We introduce \texttt{trlib}, a
new vector-free implementation of the GLTR \revise{(Generalized Lanczos Trust Region)} method for solving the trust region
subproblem. We assess the performance of this implementation on trust region
problems obtained from the set of unconstrained nonlinear minimization problems
of the CUTEst benchmark library, as well as on a number of examples formulated
in Hilbert space that arise from PDE-constrained optimal control.
\subsubsection*{\bf Structure of the Article} The remainder of this article is
structured as follows. In \S\ref{sec:ExUniq}, we briefly review conditions for
existence and uniqueness of minimizers. The GLTR method for iteratively solving the
trust region problem is presented in detail in \S\ref{sec:GLTR}. Our implementation,
\texttt{trlib}, is introduced in \S\ref{sec:TRLIB}. Numerical results for
trust-region problems arising in nonlinear programming and in PDE-constrained
control are presented in \S\ref{sec:results}. Finally, we offer a summary and
conclusions in \S\ref{sec:conclusions}.
\section{Existence and Uniqueness of Minimizers}
\label{sec:ExUniq}
In this section, we briefly summarize the main results about existence and
uniqueness of solutions of the trust region subproblem. We first note that our
introductory setting implies the following fundamental properties:
\begin{lemma}[Properties of~\eqref{eq:tr}]~\\[-1em]
\label{lem:fundamentals}
\begin{enumerate}
\item The mapping $ x \mapsto \langle x, H x \rangle $ is sequentially
weakly lower semicontinuous, and Fr\'echet differentiable for every $ x
\in \mathcal H $.
\item The feasible set $\mathcal F := \{ x \in \mathcal H \, \vert \,
\Vert x \Vert_M \le \Delta \} $ is bounded and weakly closed.
\item The operator $ M $ is surjective.
\end{enumerate}
\begin{proof}
$H \revise{{}= P - K}$ \revise{with compact $K$}, so (1) follows from \cite[Thm 8.2]{Hestenes1951}.
Fr\'echet differentiability follows from boundedness of $H$. Boundedness of
$\mathcal F$ follows from coercivity of $M$. Furthermore, $\mathcal F$ is
obviously convex and strongly closed, hence weakly closed. Finally, (3)
follows by the Lax-Milgram theorem~\cite[ex. \revise{7}.19]{Clarke2013}\revise{:
By boundedness of $ M $, there is $ C > 0 $ with $ \vert \langle x, M y \rangle \vert \le C \Vert x \Vert \, \Vert y \Vert $ for all $ x, y \in \mathcal H $. The coercivity assumption implies the existence of $ c > 0 $ such that $ \langle x, Mx \rangle \ge c \Vert x \Vert^2 $ for all $ x \in \mathcal H $.
Then, $ M $ satisfies the assumptions of the Lax-Milgram theorem. Given $ z \in \mathcal H $, application of this theorem yields $ \xi \in \mathcal H $ with $ \langle x, M \xi \rangle = \langle x, z \rangle $ for all $ x \in \mathcal H $.
Thus $ M \xi = z $.
}
\end{proof}
\end{lemma}
\begin{lemma}[Existence of a solution]~\\[.5em]
Problem~\eqref{eq:tr} has a solution.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:fundamentals}, the objective functional $q$ is sequentially
weakly lower semicontinuous and the feasible set $\mathcal F$ is weakly
closed and bounded; the claim then follows from a generalized Weierstrass theorem \cite[Ch. 7]{Kurdila2005}.
\end{proof}
To present optimality conditions for the trust region subproblem, we first
present a helpful lemma on the change of the objective function between two
points on the trust region boundary.
\begin{lemma}[Objective Change on Trust Region Boundary]\label{lem:obj-change}~\\[.5em]
Let $ x^0, x^1 \in \mathcal H $ with $ \Vert x^i \Vert_M = \Delta $ for $ i = 0, 1 $ be boundary points
of~\eqref{eq:tr}, and let $ \lambda \ge 0 $ satisfy $ (H+\lambda M) x^0 + g = 0 $.
Then $ d = x^1 - x^0 $ satisfies $ q(x^1) - q(x^0) = \tfrac 12 \langle d, (H+\lambda M) d \rangle $.
\end{lemma}
\begin{proof}
Using $ 0 = \Vert x^1 \Vert^2_M - \Vert x^0 \Vert^2_M = \langle x^0 + d, M (x^0 + d) \rangle - \langle x^0, M x^0 \rangle = \langle d, M d \rangle + 2 \langle x^0, M d \rangle $ and $ g = - (H+\lambda M) x^0 $ we find
\begin{align*}
q(x^1) - q(x^0) & = \tfrac 12 \langle d, Hd \rangle + \langle d, Hx^0 \rangle + \langle g, d \rangle = \tfrac 12 \langle d, Hd \rangle - \overbrace{ \lambda \langle x^0, Md \rangle}^{-\tfrac 12 \lambda \langle d, Md \rangle} \\
& = \tfrac 12 \langle d, (H + \lambda M) d \rangle. \qedhere
\end{align*}
\end{proof}
Necessary optimality conditions for the finite dimensional problem, see e.g.
\cite{Conn2000}, generalize in a natural way to the Hilbert space context.
\begin{theorem}[Necessary Optimality Conditions] \label{thm:noc}~\\[.5em]
Let $ x^* \in \mathcal H $ be a global solution of~$(\textup{TR}(H,g,M,\Delta,\mathcal H))$. Then there is $
\lambda^* \ge 0 $ such that
\begin{enumerate}
\renewcommand{\theenumi}{(\alph{enumi})}
\item $ (H + \lambda^* M) x^* + g = 0 $,
\item $ \Vert x^* \Vert_M - \Delta \le 0 $,
\item $ \lambda^* ( \Vert x^* \Vert_M - \Delta ) = 0 $,
\item $ \langle d, (H + \lambda^* M) d \rangle \ge 0 $ for all $ d \in \mathcal H $.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $ \sigma: \mathcal H \to \mathbb R, \sigma(x) := \langle x, M x \rangle
- \Delta^2 $, so that the trust region constraint becomes $ \sigma(x) \le 0 $.
The function $ \sigma $ is Fr\'echet differentiable for all $ x \in \mathcal
H $, with surjective differential whenever $ x \neq 0 $, so that constraint qualifications hold in that case.
We may assume $ x^* \neq 0 $, since for $ x^* = 0 $ we have $ g = 0 $ and the theorem holds for elementary
reasons.
Now if $ x^* $ is a \revise{global} solution of~$(\textup{TR}(H,g,M,\Delta,\mathcal H))$, conditions (a)--(c) are necessary
optimality conditions, cf.~\revise{\cite[Thm 9.1]{Clarke2013}}.
\revise{To prove (d), we distinguish three cases:
\begin{itemize}
\item $ \Vert x^* \Vert_M = \Delta $ and $ d \in \mathcal H $ with $ \langle d, Mx^* \rangle \neq 0 $:
Given such $ d $, there
is $ \alpha \in \mathbb R \setminus \{0\} $ with $ \Vert x^* + \alpha d \Vert_M = \Delta
$. Using Lemma~\ref{lem:obj-change} yields $ \langle d, (H+\lambda^* M) d \rangle
= \tfrac 2{\alpha^2} ( q(x^* + \alpha d) - q(x^*) ) \ge 0 $ since $ x^* $ is a global solution.
\item $ \Vert x^* \Vert_M = \Delta $ and $ d \in \mathcal H $ with $ \langle d, Mx^* \rangle = 0 $:
Since $ x^* \neq 0 $ and $ M $ is surjective, there is $ p \in \mathcal H $ with $ \langle p, M x^* \rangle \neq 0 $;
let $ d(\tau) := d + \tau p $ for $ \tau \in \mathbb R $. Then $ \langle d(\tau), M x^* \rangle \neq 0 $ for $ \tau \neq 0 $, so
by the previous case \begin{align*} 0 & \le \langle d(\tau), (H + \lambda^* M) d(\tau) \rangle \\ & = \langle d, (H + \lambda^* M) d \rangle + 2\tau \langle p, (H+\lambda^* M) d \rangle + \tau^2 \langle p, (H+\lambda^* M) p \rangle. \end{align*}
Passing to the limit $ \tau \to 0 $ shows $ \langle d, (H+\lambda^* M) d \rangle \ge 0 $.
\item $ \Vert x^* \Vert_M < \Delta $: Then $ \lambda^* = 0 $ by (c). Let $ d \in \mathcal H $ and consider $ x(\tau) = x^* + \tau d $, which is feasible for sufficiently small $ \tau $.
By optimality and stationarity (a):
\begin{align*}
0 \le q(x(\tau)) - q(x^*) = \tau \langle x^*, Hd \rangle + \tfrac{\tau^2}{2} \langle d, Hd \rangle + \tau \langle g, d \rangle = \tfrac{\tau^2}{2} \langle d, H d \rangle,
\end{align*}
thus $ \langle d, H d \rangle \ge 0 $. \qedhere
\end{itemize}
}
\end{proof}
\begin{corr}[Sufficient Optimality Condition]~\\[.5em]
Let $ x^* \in \mathcal H $ and $ \lambda^* \ge 0 $
such that (a)--(c) of Thm.~\ref{thm:noc} hold and $ \langle d, (H + \lambda^*
M) d \rangle > 0 $ holds for all $ d \in \mathcal H $. Then $ x^* $ is the
unique global solution of~$(\textup{TR}(H,g,M,\Delta,\mathcal H))$.
\end{corr}
\begin{proof}
This is an immediate consequence of Lemma~\ref{lem:obj-change}.
\end{proof}
\section{The GLTR Method}
\label{sec:GLTR}
The GLTR (Generalized Lanczos Trust Region) method is an iterative method to
approximately solve $(\textup{TR}(H,g,M,\Delta,\mathcal H))$ and was first described by Gould et al.
\cite{Gould1999}. Our presentation follows theirs and deviates only
in minor details.
In every iteration of the GLTR process, problem~$(\textup{TR}(H,g,M,\Delta,\mathcal H))$ is restricted
to the Krylov subspace $ \mathcal K_i := \text{span}\{ (M^{-1}H)^j M^{-1}g \, \vert \, 0 \le j \le i \} $,
\begin{align}
\label{eq:tr_krylovi}\tag{$\textup{TR}(H,g,M,\Delta,\mathcal K_i)$}
\hspace*{-3cm}\left\{
\quad\begin{array}{cl}\displaystyle
\min_{x \in \mathcal H}\quad& \tfrac 12 \langle x, Hx \rangle + \langle x, g \rangle \\
\textup{s.t.}\quad & \Vert x \Vert_M \le \Delta,\\
& x \in \mathcal K_i.
\end{array}
\right.
\end{align}
The following lemma relates solutions of~\eqref{eq:tr_krylovi} to those of~$(\textup{TR}(H,g,M,\Delta,\mathcal H))$.
\begin{lemma}[Solution of the Krylov subspace trust region problem]~\\[.5em]
Let $ x^i $ be a global minimizer of~\eqref{eq:tr_krylovi} and $ \lambda^i $ the corresponding Lagrange multiplier.
Then $ (x^i, \lambda^i) $ satisfies the global optimality conditions of $(\textup{TR}(H,g,M,\Delta,\mathcal H))$ (Thm.~\ref{thm:noc}) in the following sense:
\begin{enumerate}
\renewcommand{\theenumi}{(\alph{enumi})}
\item $ (H + \lambda^i M) x^i + g \, \perp_M \, \mathcal K_i $,
\item $ \Vert x^i \Vert_M - \Delta \le 0 $,
\item $ \lambda^i ( \Vert x^i \Vert_M - \Delta ) = 0 $,
\item $ \langle d, (H + \lambda^i M) d \rangle \ge 0 $ for all $ d \in \mathcal K_i $.
\end{enumerate}
\end{lemma}
\begin{proof}
(b)--(d) are immediately obtained from Thm.~\ref{thm:noc} as $ \mathcal K_i \subseteq \mathcal H $ is a Hilbert space.
Assertion (a) follows from
$x^\ast=x^i+x^\perp$ with $x^i\in\mathcal K_i$, $x^\perp\perp\mathcal K_i$ and
Thm.~\ref{thm:noc} for $x^i$.
\end{proof}
Solving problem~$(\textup{TR}(H,g,M,\Delta,\mathcal H))$ may thus be achieved by iterating the following
Krylov subspace process. Each iteration requires the solution of an instance of the
truncated trust region subproblem~\eqref{eq:tr_krylovi}.
\begin{algorithm}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKw{KwReturn}{return}
\Input{$H$, $M$, $g$, $\Delta$, \emph{\texttt{tol}}}
\Output{$i$, $x^i$, $\lambda^i$}
\BlankLine
\For{$i\geq 0$}{
{Construct a basis for the $i$-th Krylov subspace $ \mathcal K_i $}\;
{Compute a representation of $q(x)$ restricted to $ \mathcal K_i $}\;
{Solve the subproblem~\eqref{eq:tr_krylovi} to obtain $(x^i,\lambda^i)$}\;
\lIf{$\Vert (H + \lambda^i M) x^i + g \Vert_{M^{-1}}\leq\texttt{tol}$}{\KwReturn}
}
\caption{Krylov subspace process for solving~$(\textup{TR}(H,g,M,\Delta,\mathcal H))$.}
\label{alg:KrylovTR}
\end{algorithm}
Algorithm~\ref{alg:KrylovTR} stops the subspace iteration as soon as $ \Vert (H
+ \lambda^i M) x^i + g \Vert_{M^{-1}} $ is small enough.
The norm $ \Vert \cdot \Vert_{M^{-1}} $ is used in the termination criterion since it is the norm of the dual of $ (\mathcal H, \Vert \cdot \Vert_M) $, and the Lagrange \revise{derivative representation} $ (H + \lambda^i M) x^i + g $ should be regarded as an element of this dual.
\subsection{Krylov Subspace Buildup}
In this section, we present the preconditioned conjugate gradient (pCG) process
and the preconditioned Lanczos process (pL) for construction of Krylov subspace
bases. We discuss the transition from pCG to pL upon breakdown of the pCG process.
\subsubsection{Preconditioned Conjugate Gradient Process}
An $ H $-conjugate basis $(\hat p_j)_{0 \le j \le i} $ of $ \mathcal K_i $ may
be obtained using preconditioned conjugate gradient (pCG) iterations,
Algorithm~\ref{alg:pCG}.
\begin{algorithm}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{$H$, $M$, $g^0$, $i\in\mathbb N$}
\Output{$v^i$, $g^i$, $p^i$, $\alpha^{i-1}$, $\beta^{i-1}$}
\BlankLine
{Initialize $\hat g^0 \gets g^0$, $\hat v^0 \gets M^{-1}\hat g^0$, $\hat p^0 \gets -\hat v^0$}\;
\For{$j\gets 0$ \KwTo $i-1$}{
{$\alpha^j \gets {\langle \hat g^j, \hat v^j \rangle}/{\langle \hat p^j, H \hat p^j \rangle} = {\Vert \hat v^j \Vert_M^2}/{\langle \hat p^j, H \hat p^j \rangle}$}\;
{$\hat g^{j+1}\gets \hat g^j + \alpha^j H \hat p^j$}\;
{$\hat v^{j+1} \gets M^{-1} \hat g^{j+1}$}\;
{$\beta^j \gets {\langle \hat g^{j+1}, \hat v^{j+1} \rangle}/{\langle \hat g^j, \hat v^j \rangle} = {\Vert \hat v^{j+1} \Vert_M^2}/{\Vert \hat v^j \Vert_M^2}$}\;
{$\hat p^{j+1} \gets - \hat v^{j+1} + \beta^j \hat p^j$}\;
}
\caption{Preconditioned conjugate gradient (pCG) process.}
\label{alg:pCG}
\end{algorithm}
The stationary point $ s^i $ of $ q(x) $ restricted to the Krylov subspace $ \mathcal K_i $ is given by $ s^i = \sum_{j=0}^i \alpha^j \hat p^j $ and can thus be computed using the recurrence
\begin{align*}
s^0 \gets \alpha^0 \hat p^0,\quad s^{j+1} \gets s^j + \alpha^{j+1} \hat p^{j+1},\ 0\leq j \leq i-1,
\end{align*}
as an extension of Algorithm~\ref{alg:pCG}.
The iterates' $M$-norms $ \Vert s^i \Vert_M $ are monotonically increasing \cite[Thm 2.1]{Steihaug1983}.
Hence, as long as $ H $ is coercive on the subspace $ \mathcal K_i $ (this implies $ \alpha^j > 0 $ for $ 0 \le j \le i $) and $ \Vert s^i \Vert_M \le \Delta $, the solution to \eqref{eq:tr_krylovi} is directly given by $ s^i $.
Breakdown of the pCG process occurs if $ \langle \hat p^i, H \hat p^i \rangle = 0 $.
In computational practice, if $ \vert \langle \hat p^i, H \hat p^i \rangle \vert \le \varepsilon $ for a suitable small tolerance $ \varepsilon \ge 0 $,
it is possible -- and necessary -- to continue with Lanczos iterations, described next.
\subsubsection{Preconditioned Lanczos Process}
An $M$-orthogonal basis $ (p_j)_{0 \le j \le i} $ of $ \mathcal K_i $ may be obtained using the preconditioned Lanczos (pL) process, Algorithm~\ref{alg:PLCZ}, and permits to continue subspace iterations even after pCG breakdown.
\begin{algorithm}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{$H$, $M$, $g^0$, $i\in\mathbb N$}
\Output{$v^i$, $g^i$, $p^{i-1}$, $\gamma^{i-1}$, $\delta^{i-1}$}
\BlankLine
{Initialize $ g^{-1} \gets 0$, $\gamma^{-1} \gets 1$, $v^0 \gets M^{-1}g^0$, $p^0 \gets v^0 $}\;
\For{$j\gets 0$ \KwTo $i-1$}{
{$\gamma^j \gets \sqrt{ \langle g^j, v^j \rangle } = \Vert g^j \Vert_{M^{-1}} = \Vert v^j \Vert_M$}\;
{$p^j \gets (1/{\gamma^j}) v^j = (1/{\Vert v^j \Vert_M}) v^j$}\;
{$\delta^j\gets \langle p^j, H p^j \rangle$}\;
{$g^{j+1} \gets Hp^j - ({\delta^j}/{\gamma^j}) g^j - ({\gamma^j}/{\gamma^{j-1}}) g^{j-1}$}\;
{$v^{j+1} \gets M^{-1} g^{j+1}$}
}
\caption{Preconditioned Lanczos (pL) process.}
\label{alg:PLCZ}
\end{algorithm}
The following simple relationship holds between the Lanczos iteration data and the pCG iteration data, and may be used to initialize the
pL process from the final pCG iterate before breakdown:
\begin{alignat*}{3}
\gamma^i & = \begin{cases} \Vert \hat v^0 \Vert_M, & i = 0 \\ {\sqrt{\beta^{i-1}}}/{\vert \alpha^{i-1} \vert}, & i \ge 1 \end{cases}, \qquad&
\delta^i & = \begin{cases} 1/{\alpha^{0}}, & i = 0 \\ 1/{\alpha^{i}} + {\beta^{i-1}}/{\alpha^{i-1}}, & i \ge 1 \end{cases}, \\
p^i & = \frac{1}{\Vert \hat v^i \Vert_M}\left[ \prod_{j=0}^{i-1} (- \text{sign} \, \alpha^{j})\right ] \, \hat v^i, \qquad&
g^i & = \frac{\gamma^i}{\Vert \hat v^i \Vert_M} \left[ \prod_{j=0}^{i-1}(- \text{sign} \, \alpha^{j}) \right] \, \hat g^i.
\end{alignat*}
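These identities can be checked numerically. The following Python sketch is an illustration only (not part of \texttt{trlib}); it assumes $ M = I $ and uses an arbitrary small symmetric positive definite test matrix, runs both recurrences, and reconstructs the Lanczos coefficients from the pCG coefficients.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return [dot(row, x) for row in A]

def pcg_coeffs(H, g0, k):
    """Coefficients alpha^j, beta^j of the pCG process with M = I."""
    g, p = g0[:], [-gi for gi in g0]
    alphas, betas = [], []
    for _ in range(k):
        Hp = matvec(H, p)
        gg = dot(g, g)
        a = gg / dot(p, Hp)
        g = [gi + a * hi for gi, hi in zip(g, Hp)]
        b = dot(g, g) / gg
        p = [-gi + b * pi for gi, pi in zip(g, p)]
        alphas.append(a)
        betas.append(b)
    return alphas, betas

def lanczos_coeffs(H, g0, k):
    """Coefficients gamma^j, delta^j of the pL process with M = I."""
    g_prev, g, gam_prev = [0.0] * len(g0), g0[:], 1.0
    gammas, deltas = [], []
    for _ in range(k):
        gam = dot(g, g) ** 0.5
        p = [gi / gam for gi in g]
        Hp = matvec(H, p)
        d = dot(p, Hp)
        g_next = [hi - (d / gam) * gi - (gam / gam_prev) * gpi
                  for hi, gi, gpi in zip(Hp, g, g_prev)]
        g_prev, g, gam_prev = g, g_next, gam
        gammas.append(gam)
        deltas.append(d)
    return gammas, deltas

H = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # SPD test matrix
g0 = [1.0, 0.0, 0.0]
al, be = pcg_coeffs(H, g0, 3)
ga, de = lanczos_coeffs(H, g0, 3)

# conversion formulas (alpha^j > 0 here since H is positive definite):
# gamma^0 = ||g0||, gamma^i = sqrt(beta^{i-1}) / |alpha^{i-1}|,
# delta^0 = 1/alpha^0, delta^i = 1/alpha^i + beta^{i-1}/alpha^{i-1}
conv_gamma = [dot(g0, g0) ** 0.5] + [be[i - 1] ** 0.5 / abs(al[i - 1]) for i in (1, 2)]
conv_delta = [1.0 / al[0]] + [1.0 / al[i] + be[i - 1] / al[i - 1] for i in (1, 2)]
```

In exact arithmetic the reconstructed coefficients agree with those produced by the Lanczos recurrence; in floating point they agree up to roundoff.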
In turn, breakdown of the pL process occurs if an invariant subspace of $H$ is
exhausted, in which case $ \gamma^i = 0 $. If this subspace does not span
$ \mathcal H $, the pL process may be restarted if provided with a vector $g^0$ that is $ M
$-orthogonal to the exhausted subspace.
The pL process may also be expressed in compact matrix form as
\begin{align*}
H P_i - M P_i T_i = g^{i+1} \boldsymbol{e_{i+1}}^T,\ \langle P_i, M P_i \rangle = I ,
\end{align*}
with $ P_i $ being the matrix composed from columns $ p_0, \ldots, p_i $, and $
T_i $ the symmetric tridiagonal matrix with diagonal elements $ \delta^0,
\ldots, \delta^i $ and off-diagonal elements $ \gamma^1, \ldots, \gamma^i $.
As $ P_i $ is a basis for $ \mathcal K_i $, every $ x \in \mathcal K_i $ can be written as $ x =
P_i \boldsymbol h $ with a coordinate vector $ \boldsymbol h \in \mathbb R^{i+1} $. Using the compact form
of the Lanczos iteration, one can immediately express the quadratic form in this
basis as $ q(x) = \tfrac 12 \langle \boldsymbol h, T_i \boldsymbol h \rangle + \gamma^0 \langle \boldsymbol{e_1}, \boldsymbol h \rangle
$. Similarly, $ \Vert x \Vert_M = \Vert \boldsymbol h \Vert_2 $. Solving
\eqref{eq:tr_krylovi} thus reduces to solving $\textup{TR}(T_i,\gamma^0 \boldsymbol{e_1},I,\Delta,\mathbb R^{i+1})$ on
$ \mathbb R^{i+1} $ and recovering $ x = P_i \boldsymbol h $.
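The reduction can be illustrated numerically. The sketch below (Python, $ M = I $, arbitrary test data; not \texttt{trlib} code) runs the pL recurrence, assembles $ T_i $ from the coefficients $ \delta^j $ and $ \gamma^j $, and checks that the quadratic form and the norm agree with their reduced representations.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return [dot(row, x) for row in A]

H = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # test matrix
g0 = [1.0, 0.0, 0.0]
n = len(g0)

# preconditioned Lanczos with M = I (so v^j = g^j), storing the basis P
P, gammas, deltas = [], [], []
g_prev, g, gam_prev = [0.0] * n, g0[:], 1.0
for _ in range(n):
    gam = dot(g, g) ** 0.5
    p = [gi / gam for gi in g]
    Hp = matvec(H, p)
    d = dot(p, Hp)
    g_next = [hi - (d / gam) * gi - (gam / gam_prev) * gpi
              for hi, gi, gpi in zip(Hp, g, g_prev)]
    P.append(p)
    gammas.append(gam)
    deltas.append(d)
    g_prev, g, gam_prev = g, g_next, gam

# dense T from (delta, gamma); off-diagonals are gamma^1, ..., gamma^{n-1}
T = [[0.0] * n for _ in range(n)]
for j in range(n):
    T[j][j] = deltas[j]
    if j > 0:
        T[j][j - 1] = T[j - 1][j] = gammas[j]

h = [0.3, -0.7, 0.5]                       # arbitrary coordinate vector
x = [sum(P[j][k] * h[j] for j in range(n)) for k in range(n)]  # x = P h

q_full = 0.5 * dot(x, matvec(H, x)) + dot(g0, x)
q_reduced = 0.5 * dot(h, matvec(T, h)) + gammas[0] * h[0]
```

Up to roundoff, `q_full` equals `q_reduced` and $ \Vert x \Vert_2 = \Vert \boldsymbol h \Vert_2 $, as claimed above.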
\subsection{Easy and Hard Case of the Tridiagonal Subproblem}
As just described, using the tridiagonal representation $T_i$ of $H$ on the
basis $P_i$ of the $i$-th iteration of the pL process, the trust-region
subproblem $ \text{TR}(T_i, \gamma^0 \boldsymbol{e_1}, I, \Delta, \mathbb R^{i+1}) $ needs to be solved. For
notational convenience, we drop the iteration index $ i $ from $ T_i $ in the
following. Considering the necessary optimality conditions of
Thm.~\ref{thm:noc}, it is natural to define the mapping
\begin{align*}
\lambda \mapsto \boldsymbol{x}(\lambda) := (T+\lambda I)^{+} (-\gamma^0 \boldsymbol{e_1}) \textup{ for }
\lambda \in I := [\max\{0, -\theta_{\min} \}, \infty),
\end{align*}
where $ \theta_{\min} $ denotes the smallest eigenvalue of $ T $, and the superscript $+$ denotes the Moore-Penrose pseudo-inverse.
On $ I $, $ T + \lambda I $ is positive semidefinite. The following definition
relates $\boldsymbol{x}(\lambda^\ast)$ to a global minimizer $ (\boldsymbol{x^*}, \lambda^*) $ of $\textup{TR}(T_i,\gamma^0 \boldsymbol{e_1},I,\Delta, \mathbb R^{i+1})$.
\begin{definition}[Easy Case and Hard Case]~\\[.5em] Let $ (\boldsymbol{x^*}, \lambda^*) $
satisfy the necessary optimality conditions of Thm.~\ref{thm:noc}.\\ If $
\langle \gamma^0 \boldsymbol{e_1}, \textup{Eig}(\theta_{\min}) \rangle \neq 0 $, we say that
$T$ satisfies the \emph{easy case}. Then, $ \boldsymbol{x^*} = \boldsymbol{x}(\lambda^*) $.\\ If $
\langle \gamma^0 \boldsymbol{e_1}, \textup{Eig}(\theta_{\min}) \rangle = 0 $, we say that $T$
satisfies the \emph{hard case}. Then, $ \boldsymbol{x^*} = \boldsymbol{x}(\lambda^*) + \boldsymbol{v} $ with suitable $
\boldsymbol v \in \textup{Eig}(\theta_{\min}) $.
Here $ \textup{Eig}(\theta) = \{ \boldsymbol v \in \mathbb R^{i+1} \, \vert \, T \boldsymbol v = \theta \boldsymbol v \} $ denotes the eigenspace of $ T $ associated to $ \theta $.
\end{definition}
With the following theorem, Gould et al.~\cite{Gould1999} use the
irreducible components of $ T $ to give a full description of the solution
$ \boldsymbol x(\lambda^*) + \boldsymbol v $ in the hard case.
\begin{theorem}[Global Minimizer in the Hard Case]~\\[.5em]
\label{thm:hard-case-min}
Let $ T = \textup{diag}(R_1, \ldots, R_k) $ with irreducible tridiagonal matrices $ R_j $ and let $ 1 \le \ell \le k $ be the smallest index for which $ \theta_{\min}(R_\ell) = \theta_{\min}(T) $ holds.
Further, let $ \boldsymbol{x_1}(\theta) = (R_1 + \theta I)^+(- \gamma^0 \boldsymbol{e_1}) $ and let $ (\boldsymbol{x_1^*}, \lambda_1^*) $ be a KKT-tuple corresponding to a global minimum of $ \textup{TR}(R_1, \gamma^0 \boldsymbol{e_1}, I, \Delta, \mathbb R^{r_1}) $, $ \boldsymbol{x_1^*} = \boldsymbol{x_1}(\lambda_1^*) $.\\[.5em]
If $ \lambda_1^* \ge -\theta_{\min} $, then $ \boldsymbol{x^*} = (\boldsymbol{x_1}(\lambda_1^*)^T,\ \boldsymbol{0},\ \ldots,\ \boldsymbol{0})^T$ satisfies Thm.~\ref{thm:noc} for $ \text{TR}(T, \gamma^0 \boldsymbol{e_1}, I, \Delta, \mathbb R^{i+1}) $.\\[.5em]
If $ \lambda_1^* < -\theta_{\min} $, then $ \boldsymbol{x^*} = (\boldsymbol{x_1}(-\theta_{\min})^T,\ \boldsymbol 0,\ \ldots,\ \boldsymbol 0,\ \boldsymbol v^T,\ \boldsymbol 0,\ \ldots,\ \boldsymbol 0)^T$, with $ \boldsymbol v \in \textup{Eig}(R_\ell, \theta_{\min}) $ chosen such that $ \Vert \boldsymbol{x^*} \Vert^2_2 = \Vert \boldsymbol{x_1}(-\theta_{\min}) \Vert^2_2 + \Vert \boldsymbol v \Vert^2_2 = \Delta^2 $, satisfies Thm.~\ref{thm:noc} for $ \text{TR}(T, \gamma^0 \boldsymbol{e_1}, I, \Delta, \mathbb R^{i+1}) $.
\end{theorem}
In particular, as long as $ T $ is irreducible, the hard case does not occur.
\revise{A symmetric tridiagonal matrix $ T $ is irreducible if and only if all its off-diagonal elements are non-zero.}
For the tridiagonal matrices arising from Krylov subspace iterations, this is
the case as long as the pL process does not break down.
\subsection{Solving the Tridiagonal Subproblem in the Easy Case}
\label{sec:easy-tri}
Assume that $ T $ is irreducible, and thus satisfies the easy case. Solving the
tridiagonal subproblem amounts to checking whether the problem admits an
interior solution and, if not, to finding a value $ \lambda^* \ge \max\{0,
-\theta_{\min}\} $ with $ \Vert \boldsymbol x(\lambda^*) \Vert = \Delta $.
We follow Mor\'e and Sorensen~\cite{More1983}, who define $ \sigma_p(\lambda) :=
\Vert \boldsymbol x(\lambda) \Vert^p - \Delta^p $ and propose the Newton iteration
\begin{align*}
\lambda^{i+1}\gets \lambda^i-{\sigma_p(\lambda^i)}/{\sigma_p'(\lambda^i)}=
\lambda^i-\frac{\Vert \boldsymbol x(\lambda^i) \Vert^p - \Delta^p }{{p \Vert \boldsymbol x(\lambda^i) \Vert^{p-2} \langle \boldsymbol x(\lambda^i),
\boldsymbol x'(\lambda^i) \rangle}},\ i \geq 0,
\end{align*}
with $\boldsymbol x'(\lambda) = - (T+\lambda I)^{+} \boldsymbol x(\lambda) $, to find a root of $
\sigma_{-1}(\lambda) $. Provided that the initial value $\lambda^0$ lies in the
interval $ [\max\{0, - \theta_{\min}\}, \lambda^*] $, such that $ (T + \lambda^0
I) $ is positive semidefinite, $ \Vert \boldsymbol x(\lambda^0) \Vert \ge \Delta $, and
no safeguarding of the Newton iteration is necessary, it can be shown that
this leads to a sequence of iterates in the same interval that converges to $
\lambda^*$ at globally linear and locally quadratic rate,
cf.~\cite{Gould1999}.
Note that $ \lambda^* > - \theta_{\min} $: in the easy case, $ \Vert \boldsymbol x(\lambda) \Vert \to \infty $ as $ \lambda \to -\theta_{\min} $, so that $ \sigma_{-1}(\lambda) \to -\Delta^{-1} \neq 0 $ there, and it thus suffices to consider $ \lambda > \max\{0, - \theta_{\min}\} $.
Both the function value and derivative require the solution of a linear system
of the form $ (T + \lambda I) \boldsymbol w = \boldsymbol b $. As $ T + \lambda I $ is tridiagonal,
symmetric positive definite, and of reasonably small dimension, it is
computationally feasible to use a tridiagonal Cholesky decomposition for this.
Gould et al.~in \cite{Gould2010} improve upon the convergence result by
considering higher order Taylor expansions of $ \sigma_p(\lambda) $ and values
$ p\neq-1 $ to obtain a method with locally quartic convergence.
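A minimal sketch of this Newton iteration, under the assumptions that $ T $ is a small positive definite tridiagonal test matrix and that $ \lambda^0 = 0 $ satisfies $ \Vert \boldsymbol x(\lambda^0) \Vert \ge \Delta $, might look as follows (illustration only, not \texttt{trlib} code). The linear systems $ (T + \lambda I) \boldsymbol w = \boldsymbol b $ are solved by tridiagonal elimination in the spirit of the factorization mentioned above.

```python
def solve_tridiag(diag, off, b):
    """Solve a symmetric positive definite tridiagonal system by elimination."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = diag[0], b[0]
    for j in range(1, n):
        w = off[j - 1] / c[j - 1]        # elimination multiplier
        c[j] = diag[j] - w * off[j - 1]
        d[j] = b[j] - w * d[j - 1]
    x = [0.0] * n
    x[-1] = d[-1] / c[-1]
    for j in range(n - 2, -1, -1):       # back substitution
        x[j] = (d[j] - off[j] * x[j + 1]) / c[j]
    return x

def norm(x):
    return sum(xi * xi for xi in x) ** 0.5

diag, off = [2.0, 3.0, 4.0], [1.0, 1.0]  # T: positive definite test matrix
gamma0, Delta = 1.0, 0.3
b = [-gamma0, 0.0, 0.0]                  # right hand side -gamma^0 e_1

lam = 0.0                                # lambda^0 = 0; here ||x(0)|| > Delta
for _ in range(50):
    shifted = [dj + lam for dj in diag]
    x = solve_tridiag(shifted, off, b)   # x(lambda)
    w = solve_tridiag(shifted, off, x)   # (T + lam I)^{-1} x = -x'(lambda)
    nx = norm(x)
    sigma = 1.0 / nx - 1.0 / Delta       # sigma_{-1}(lambda)
    dsigma = sum(xi * wi for xi, wi in zip(x, w)) / nx ** 3  # sigma_{-1}'(lambda)
    step = sigma / dsigma
    lam -= step
    if abs(step) <= 1e-14 * max(1.0, lam):
        break
x = solve_tridiag([dj + lam for dj in diag], off, b)
```

On this test data the iterates increase monotonically toward $ \lambda^* $ and terminate with $ \Vert \boldsymbol x(\lambda^*) \Vert = \Delta $ to machine precision.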
\subsection{The Newton Initializer}
\label{sec:newton-ini}
Cheap oracles for a suitable initial value $ \lambda^0 $ may be available,
including, for example, zero or the value $ \lambda^* $ of the previous
iteration of the pL process. If these fail, it becomes necessary to compute $
\theta_{\min} $. To this end, we follow Gould et al.~\cite{Gould1999} and
Parlett and Reid~\cite{Parlett1981}, who define the Parlett-Reid Last-Pivot
function $ d(\theta) $:
\begin{definition}[Parlett-Reid Last-Pivot Function]
\label{def:prlpf}
\begin{align*}
d(\theta) := \begin{cases} d_{i},& \parbox{.7\textwidth}{$\text{if there exists } (d_0,\ldots,d_i) \in (0,\infty)^i \times \mathbb R$, and $ L $ unit lower triangular {such that} $T - \theta I = L\, \textup{diag}(d_0, \ldots, d_i) \, L^T$} \\
-\infty,& \text{otherwise.} \end{cases}
\end{align*}
\end{definition}
Since $ T $ is irreducible, its eigenvalues are simple \cite[Thm 8.5.1]{Golub1996} and $ \theta_{\min} $ is
given by the unique value $ \theta \in \mathbb R $ with $ T - \theta I $
singular and positive semidefinite, or, equivalently, $ d(\theta) = 0 $.
A safeguarded root-finding method is used to determine $ \theta_{\min} $ by
finding the root of $ d(\theta) $. An interval of safety $ [\theta^k_\ell,
\theta^k_\textup{u}]$ is used in each iteration and a guess $ \theta^k \in
[\theta^k_\ell, \theta^k_\textup{u}] $ is chosen. Gershgorin bounds may be used to
provide an initial interval \cite[Thm 7.2.1]{Golub1996}. Depending on the sign of $
d(\theta^k) $, the interval of safety for the next iteration is contracted to $ [\theta^k_\ell,
\theta^k] $ if $ d(\theta^k) < 0 $ and to $ [\theta^k, \theta^k_\textup{u}] $ if
$ d(\theta^k) \ge 0 $. One choice
for $ \theta^k $ is the bisection midpoint. Newton steps as previously described may be used
whenever they remain inside the interval of safety.
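The pivot recurrence and the safeguarded bisection can be sketched as follows (illustration only; the matrix and iteration count are test data). The recurrence evaluates $ d(\theta) $ directly from the pivots of the $ L\,\textup{diag}(d_0,\ldots,d_i)\,L^T $ factorization of $ T - \theta I $, returning $ -\infty $ as soon as an earlier pivot fails to be positive.

```python
def last_pivot(diag, off, theta):
    """Parlett-Reid last-pivot function d(theta) for a tridiagonal matrix,
    or -inf if an earlier pivot of T - theta*I is not positive."""
    d = diag[0] - theta
    for j in range(1, len(diag)):
        if d <= 0.0:
            return float('-inf')
        d = (diag[j] - theta) - off[j - 1] ** 2 / d
    return d

diag, off = [2.0, 3.0, 4.0], [1.0, 1.0]   # irreducible test matrix

# Gershgorin bounds give the initial interval of safety
radii = []
for j in range(len(diag)):
    r = 0.0
    if j > 0:
        r += abs(off[j - 1])
    if j < len(off):
        r += abs(off[j])
    radii.append(r)
theta_lo = min(d - r for d, r in zip(diag, radii))
theta_hi = max(d + r for d, r in zip(diag, radii))

# bisection: d(mid) < 0 (or -inf) means theta_min < mid, else theta_min >= mid
for _ in range(200):
    mid = 0.5 * (theta_lo + theta_hi)
    if last_pivot(diag, off, mid) < 0.0:
        theta_hi = mid
    else:
        theta_lo = mid
theta_min = 0.5 * (theta_lo + theta_hi)
```

For this test matrix the characteristic polynomial factors as $ (\theta - 3)(\theta^2 - 6\theta + 6) $, so the bisection converges to $ \theta_{\min} = 3 - \sqrt 3 $.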
\begin{figure}[ht]
\centering
\includegraphics{prlp.pdf}
\caption{The Parlett-Reid last-pivot function $ d(\theta) $ and the lifted function $ \hat d(\theta) $ have the common zero $ \theta_{\min} $.
Dashed lines show the analytic continuation of the right hand side of $ d(\theta) = \prod_j (\theta - \theta_j) / \prod_j (\theta - \hat \theta_j) $ into the region where $ d(\theta) = - \infty $.}
\end{figure}
For successive pL iterations, the fact that the tridiagonal matrices grow by one
column and row in each iteration may be exploited to save most of the
computational effort involved. As noted by
Parlett and Reid \cite{Parlett1981}, the recurrence to compute the $d_i$ \revise{via Cholesky decomposition of $ T - \theta I $}
in~Def.~\ref{def:prlpf} is \revise{identical to the recurrence} that results from applying a Laplace expansion for \revise{the determinant of} tridiagonal matrices \cite[\S 2.1.4]{Golub1996}.
\revise{Comparing the recurrences thus} yields the explicit formula
\begin{align}
\label{eq:d-recurrence}
d(\theta) = \frac{\det(T - \theta I)}{\det(\hat T - \theta I)}
= \revise{-} \frac{\prod_{j} (\theta - \theta_j)}{\prod_{j} (\theta - \hat \theta_j)},
\end{align}
where $ \hat T $ denotes the principal submatrix of $ T $ obtained by erasing
the last column and row, and $\theta_j$ and $\hat\theta_j$ enumerate the
eigenvalues of $T$ and $\hat T$, respectively. \revise{The right hand side is obtained by identifying numerator and denominator with the characteristic polynomials of $ T $ and $ \hat T $, and by factorizing these.}
It becomes apparent that $ d(\theta) $ has a pole of first order at $ \hat
\theta_{\min} $. After lifting this pole, the function $ \hat d(\theta) :=
(\theta - \hat \theta_{\min}) d(\theta) $ is smooth on a larger interval. When
iteratively constructing the tridiagonal matrices in successive pL iterations,
the value $\hat \theta_{\min}$ is readily available, and it becomes preferable to use $
\hat d(\theta) $ instead of $d(\theta)$ for root finding.
\subsection{Solving the Tridiagonal Subproblem in the Hard Case}
If the hard case is present, the decomposition of $ T $ into irreducible components has to be determined.
This is given in a natural way by Lanczos breakdown.
Every time the Lanczos process breaks down and is restarted with a vector $ M $-orthogonal to the previously considered Krylov subspaces,
a new tridiagonal block is obtained.
Solving the problem in the hard case then amounts to applying Theorem~\ref{thm:hard-case-min}:
First, the smallest eigenvalues $ \theta_i $ of all irreducible blocks $ R_i $ have to be determined, as well as the KKT tuple $ (\boldsymbol{x_1^*}, \lambda_1^*) $ obtained by solving the easy case for $ \text{TR}(R_1, \gamma^0 \boldsymbol{e_1}, I, \Delta, \mathbb R^{r_1}) $.
Again, let $ \ell $ be the smallest index $ i $ with minimal $ \theta_i $.
In the case $ \lambda_1^* \ge -\theta_{\ell} $, the global solution is given by $ \boldsymbol{x^*} = ((\boldsymbol{x_1^*})^T,\ \boldsymbol 0,\ \ldots,\ \boldsymbol 0)^T $.
If, on the other hand, $ \lambda_1^* < - \theta_\ell $, the eigenspace of $ R_\ell $ corresponding to $ \theta_{\ell} $ has to be obtained.
As $ R_\ell $ is irreducible, all eigenvalues of $ R_\ell $ are simple, and an eigenvector $ \boldsymbol{\tilde v} $ spanning the desired eigenspace can be obtained for example by inverse iteration \cite[\S 8.2.2]{Golub1996}.
The solution is now given by $ \boldsymbol{x^*} = (\boldsymbol{x_1}(-\theta_{\ell})^T,\ \boldsymbol 0,\ \boldsymbol v^T,\ \boldsymbol 0)^T $ with $ \boldsymbol{x_1}(-\theta_{\ell}) = (R_1 - \theta_{\ell} I)^{-1}(-\gamma^0 \boldsymbol{e_1}) $ and $ \boldsymbol v := \alpha \boldsymbol{\tilde v} $, where $ \alpha $ is chosen as the root of the scalar quadratic equation $ \Delta^2 = \Vert \boldsymbol{x_1}(-\theta_{\ell}) \Vert^2 + \alpha^2 \Vert \boldsymbol{\tilde v} \Vert^2 $ that leads to the smaller objective value.
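The following toy computation (illustration only; $ T $, $ \gamma^0 $, and $ \Delta $ are made-up test data with $ R_1 $ of size one) traces the hard-case branch of Theorem~\ref{thm:hard-case-min} and verifies the resulting KKT system numerically.

```python
import math

gamma0, Delta = 1.0, 1.0

# T = diag(R_1, R_2) with R_1 = [2] and R_2 = [[-2, 1], [1, -2]];
# g = gamma^0 e_1 is orthogonal to Eig(theta_min) = span{(0, 1, -1)},
# so the hard case occurs.
T = [[2.0, 0.0, 0.0], [0.0, -2.0, 1.0], [0.0, 1.0, -2.0]]
g = [gamma0, 0.0, 0.0]

# easy case for TR(R_1, gamma^0 e_1, I, Delta): the unconstrained
# minimizer -gamma0/2 is interior, hence lambda_1^* = 0
lam1 = 0.0
theta_ell = -3.0                 # smallest eigenvalue of R_2 (and of T)
assert lam1 < -theta_ell         # hard-case branch of the theorem applies

lam = -theta_ell                 # lambda^* = -theta_ell = 3
x1 = -gamma0 / (2.0 + lam)       # (R_1 + lam I)^{-1} (-gamma^0 e_1)

# eigenvector of R_2 for theta_ell is (1, -1)/sqrt(2); scale it so ||x*|| = Delta
alpha = math.sqrt(Delta ** 2 - x1 ** 2)
v = [alpha / math.sqrt(2.0), -alpha / math.sqrt(2.0)]
x_star = [x1] + v

# verify the stationarity condition (T + lam I) x* + g = 0
resid = [sum(T[i][j] * x_star[j] for j in range(3)) + lam * x_star[i] + g[i]
         for i in range(3)]
```

The residual vanishes and $ \Vert \boldsymbol{x^*} \Vert_2 = \Delta $, so $ (\boldsymbol{x^*}, \lambda^*) $ satisfies the optimality conditions of Thm.~\ref{thm:noc} with $ T + \lambda^* I $ positive semidefinite.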
\section{Implementation \texttt{trlib}}
\label{sec:TRLIB}
In this section, we present details of our implementation \texttt{trlib} of the
GLTR method.
\subsection{Existing Implementation}
The GLTR reference implementation is the
software package \texttt{GLTR} in the optimization library \texttt{GALAHAD}
\cite{GALAHAD}. This Fortran 90 implementation uses conjugate gradient
iterations exclusively to build up the Krylov subspace, and provides a reverse
communication interface that requires the exchanged vector data to be stored as
contiguous arrays in memory.
\subsection{\texttt{trlib} Implementation}
Our implementation is called \texttt{trlib}, short for \emph{trust region
library}. It is written in plain ANSI C99 code, and has been made available as
open source \cite{Lenders2016}. We provide a reverse communication interface in
which only scalar data and requests for vector operations are exchanged,
allowing for great flexibility in applications.
Besides the stable and efficient conjugate gradient iteration, we also implemented the Lanczos iteration and a crossover mechanism to expand the Krylov subspace. We frequently found applications in the context of constrained optimization with an SLEQP algorithm \cite{Byrd2003,Lenders2015} in which conjugate gradient iterations broke down whenever directions of tiny curvature were encountered.
\subsection{Vector Free Reverse Communication Interface}
The implementation is built around a reverse communication calling paradigm.
To solve a trust region subproblem, the according library function has to be repeatedly called by the user and after each call the user has to perform a specific action indicated by the value of an output variable.
Only scalar data representing dot products and coefficients in \texttt{axpy} operations as well as integer and floating point workspace to hold data for the tridiagonal subproblems is passed between the user and the library.
In particular, all vector data has to be managed by the user, who must be able to compute dot products $ \langle x, y \rangle $, perform \texttt{axpy} $ y := \alpha x + y $ on them and implement operator vector products $ x \mapsto Hx, x \mapsto M^{-1} x $ with the Hessian and the preconditioner.
Thus no assumption is made about the representation and storage of vectorial data, nor about the discretization of $ \mathcal H $ in case $ \mathcal H $ is not finite-dimensional.
This is beneficial for problems arising from optimization in function space, whose iterates may not be stored naturally as contiguous vectors in memory, or where the discretization may be adapted during the solution of the trust region subproblem.
It also gives a trivial mechanism for exploiting parallelism in vector operations, as vector data may be stored, and operations performed, on a GPU without any changes in the trust region library.
In particular, this interface allows for easy interfacing with the PDE-constrained optimization software \texttt{DOLFIN-adjoint} \cite{Farrell2013,Funke2013} within the finite element framework \texttt{FEniCS} \cite{Alnaes2015,Logg2010,Alnaes2014} without having to rely on assumptions how the finite element discretization is stored, see~\S\ref{sec:fenics}.
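To illustrate the paradigm, the following sketch implements a vector-free conjugate gradient core as a Python generator that exchanges only scalars with its caller. All names (`cg_core`, `solve_newton_system`) and the request encoding are hypothetical illustrations, not part of the \texttt{trlib} API, which exchanges the analogous information through its C reverse communication interface.

```python
import numpy as np

def cg_core(tol=1e-10, maxiter=200):
    """Vector-free conjugate gradient core, written as a generator that
    communicates only scalars: it requests dot products and axpy/scal
    operations by name and never touches vector storage itself."""
    rr = yield ("dot", "r", "r")               # rr = <r, r>
    for _ in range(maxiter):
        if rr <= tol ** 2:
            break
        pHp = yield ("dot", "p", "Hp")         # curvature along p
        alpha = rr / pHp
        yield ("axpy", alpha, "p", "x")        # x := x + alpha p
        yield ("axpy", -alpha, "Hp", "r")      # r := r - alpha Hp
        rr_new = yield ("dot", "r", "r")
        yield ("scal", rr_new / rr, "p", "p")  # p := beta p
        yield ("axpy", 1.0, "r", "p")          # p := p + r
        rr = rr_new
    yield ("done", None, None, None)

def solve_newton_system(H, g):
    """User side of the reverse communication loop: owns all vectors and
    executes the requested operations (here with NumPy arrays, but the
    storage could equally live on a GPU or inside a PDE framework)."""
    store = {"x": np.zeros_like(g), "r": -g.copy(), "p": -g.copy()}
    store["Hp"] = H @ store["p"]
    core = cg_core()
    req = core.send(None)
    while req[0] != "done":
        if req[0] == "dot":
            req = core.send(store[req[1]] @ store[req[2]])
        else:                                   # "axpy" or "scal"
            kind, coef, src, dst = req
            if kind == "axpy":
                store[dst] = store[dst] + coef * store[src]
            else:
                store[dst] = coef * store[dst]
            if dst == "p":                      # refresh operator product
                store["Hp"] = H @ store["p"]
            req = core.send(None)
    return store["x"]
```

Note that the core never sees a vector: swapping the NumPy arrays in `solve_newton_system` for GPU buffers or finite element functions requires no change in the core.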
\subsection{Conjugate Gradient Breakdown}
Per default, conjugate gradient iterations are used to build the Krylov subspace.
The algorithm switches to Lanczos iterations if the magnitude of the curvature satisfies $ \vert \langle \hat p, H \hat p \rangle \vert \le \emph{\texttt{tol\_curvature}} $ for a user-defined tolerance $ \emph{\texttt{tol\_curvature}} \ge 0 $.
\subsection{Easy Case}
In the easy case, after the Krylov subspace has been assembled in a particular iteration, it remains to solve $ (\textup{TR}(T_i, \gamma^0 \boldsymbol{e_1}, I, \Delta, \mathbb R^{i+1})) $, which we do as outlined in \S\ref{sec:easy-tri}.
As mentioned there, an improved convergence order can be obtained by higher order Taylor expansions of $ \sigma_p(\lambda) $ and values $ p \neq -1 $, see \cite{Gould2010}.
However, in our applications the computational cost of solving the tridiagonal subproblem --- often warmstarted in a suitable way --- is negligible in comparison to the cost of computing matrix vector products $ x \mapsto Hx $, and thus we decided to stick with the simpler Newton rootfinding on $ \sigma_{-1}(\lambda) $.
To obtain a suitable initial value $ \lambda^0 $ for the Newton iteration, we first try $ \lambda^* $ obtained in the previous Krylov iteration if available and otherwise $ \lambda^0 = 0 $.
If these fail, we use $ \lambda^0 = - \theta_{\min} $ computed as outlined in \S\ref{sec:newton-ini} by zero-finding on $ d(\theta) $ or $ \hat d(\theta) $.
This requires suitable models for $ \hat d(\theta) $.
Gould et al.~\cite{Gould1999} propose to use a quadratic model $ \theta^2 + a
\theta + b $ for $ \hat d(\theta) $ that captures the asymptotics as $ \theta \to -
\infty $, obtained by fitting function value and derivative at a point in the
root finding process.
We have also had good success with the linear Newton
model $ a \theta + b $, and with a quadratic model $ a
\theta^2 + b \theta + c $ that makes use of an additional second derivative.
Derivatives of $ d(\theta) $ or $ \hat d(\theta) $ are easily obtained
by differentiating the recurrence for the Cholesky decomposition.
In our implementation a heuristic is used to select the
option that is inside the interval of safety and promises good progress.
The heuristic is given by using $ \theta^2 + a \theta + b $ in case that the bracket width $ \theta^k_{\text u} - \theta^k_\ell $ satisfies $ \theta^k_{\text u} - \theta^k_\ell \ge 0.1 \max\{1, \vert \theta^k \vert \} $ and $ a \theta^2 + b \theta + c $ otherwise.
The motivation behind this is that in the former case it is not guaranteed that $ \theta^k $ has been determined to high accuracy \revise{as a zero of $ d(\theta) $}, and thus the model that captures the global behaviour might be better suited. \revise{I}n the latter case, $ \theta^k $ \revise{has been confirmed to be a zero of $ d(\theta) $} to a certain accuracy and it is safe to \revise{use} the model representing local behaviour.
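The Newton iteration on $ \sigma_{-1}(\lambda) $ described above can be sketched as follows. This is a dense, unsafeguarded illustration with a hypothetical function name; the implementation instead works on the Cholesky factorization of the tridiagonal matrix and brackets the iterates as discussed.

```python
import numpy as np

def newton_boundary(T, gamma0, delta, lam0=0.0, tol=1e-12, maxit=100):
    """Newton rootfinding on sigma_{-1}(lam) = 1/||h(lam)|| - 1/Delta,
    where h(lam) solves (T + lam I) h = -gamma0 e1.  Since 1/||h(lam)||
    is concave and increasing, Newton from below converges monotonically
    in the easy case."""
    n = T.shape[0]
    rhs = np.zeros(n)
    rhs[0] = -gamma0
    lam = lam0
    for _ in range(maxit):
        A = T + lam * np.eye(n)
        h = np.linalg.solve(A, rhs)
        nh = np.linalg.norm(h)
        phi = 1.0 / nh - 1.0 / delta
        if abs(phi) < tol:
            break
        w = np.linalg.solve(A, h)        # w = (T + lam I)^{-1} h
        dphi = (h @ w) / nh ** 3         # phi'(lam) > 0
        lam -= phi / dphi
    return lam, h
```

For a positive definite tridiagonal $ T $ whose unconstrained minimizer lies outside the trust region, the iteration returns a multiplier $ \lambda^* > 0 $ with $ \Vert h(\lambda^*) \Vert = \Delta $.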
\subsection{Hard Case}
\revise{We now discuss the so-called hard case of the trust region problem, which we have found to be of critical importance for the performance of trust region subproblem solvers in general nonlinear nonconvex programming. We discuss algorithmic and numerical choices made in \texttt{trlib} that we have found to help improve performance and stability.}
\subsubsection{Exact Hard Case}
The function for the solution of the tridiagonal subproblem implements the algorithm as given by Theorem~\ref{thm:hard-case-min} if provided with a decomposition in irreducible blocks.
However, from local information it is not possible to distinguish between convergence to a global solution of the original problem and the case in which an invariant Krylov subspace has been exhausted that may not contain the global minimizer, as in both cases the gradient vanishes.
The handling of the hard case is thus left to the user, who has to decide in the reverse calling scheme, once a point with sufficiently small gradient norm has been reached, whether to accept the solution in the Krylov subspaces investigated so far or to investigate further Krylov subspaces.
In that case it is left to the user to determine a new nonzero initial vector for the Lanczos iteration that is $ M $-orthogonal to the previous Krylov subspaces.
\revise{One possibility to obtain such a vector is using a random vector and $ M $-orthogonalizing it with respect to the previous Lanczos directions using the modified Gram-Schmidt algorithm.}
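A minimal sketch of this orthogonalization step (the helper name is hypothetical), assuming the previous Lanczos directions are available as $ M $-orthonormal columns of a matrix:

```python
import numpy as np

def m_orthogonalize(v, Q, M):
    """M-orthogonalize v against the columns of Q (assumed M-orthonormal,
    i.e. Q^T M Q = I) with modified Gram-Schmidt, as a way to restart the
    Lanczos process outside the Krylov subspaces explored so far."""
    w = v.astype(float).copy()
    for j in range(Q.shape[1]):
        q = Q[:, j]
        w -= q * (q @ (M @ w))   # subtract the M-projection onto q
    return w
```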
\subsubsection{Near Hard Case}
The near hard case arises if $ \langle \gamma^0 \boldsymbol{e_1}, \frac{\boldsymbol{\tilde v}}{\Vert \boldsymbol{ \tilde v} \Vert} \rangle $ is tiny, where $ \boldsymbol{\tilde v} $ spans the eigenspace $\text{Eig}(\theta_{\min}) = \text{span}\{\boldsymbol{\tilde v}\} $.
Numerically this is detected if there is no $ \lambda \ge \max\{0, -\theta_{\min}\} $ such that $ \Vert \boldsymbol x(\lambda) \Vert \ge \Delta $ holds in floating point arithmetic. In that case we use the heuristic $ \lambda^* = - \theta_{\min} $ and $ x^* = x(-\theta_{\min}) + \alpha \boldsymbol v $ with $ \boldsymbol v \in \text{Eig}(\theta_{\min}) $, where $ \alpha $ is determined such that $ \Vert \boldsymbol{x^*} \Vert = \Delta $.
Another possibility would be to modify the tridiagonal matrix $ T $ by dropping off-diagonal elements below a specified threshold and to work on the resulting decomposition into irreducible blocks.
However we have not investigated this possibility as the heuristic seems to deliver satisfactory results in practice.
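The coefficient $ \alpha $ above solves a scalar quadratic equation; the following sketch (hypothetical function name) selects the root that avoids cancellation, in the spirit of the Moré--Sorensen treatment of the hard case:

```python
import numpy as np

def boundary_correction(x, v, delta):
    """Given x = x(-theta_min) with ||x|| < delta and an (approximately)
    unit eigenvector v of the smallest eigenvalue, return alpha such that
    ||x + alpha v|| = delta.  Of the two roots of
    alpha^2 + 2 <x,v> alpha + (||x||^2 - delta^2) = 0,
    the one with the same sign as -<x,v> is chosen to avoid cancellation."""
    xv = x @ v
    disc = xv * xv + delta ** 2 - x @ x   # nonnegative since ||x|| < delta
    s = np.sqrt(disc)
    return -xv + s if xv <= 0 else -xv - s
```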
\subsection{Reentry with New Trust Region Radius}
In nonlinear programming applications it is common that, after a rejected step, another closely related trust region subproblem has to be solved in which the only changed data is the trust region radius. As this has no influence on the Krylov subspace but only on the solution of the tridiagonal subproblem, efficient hotstarting has been implemented.
Here the tridiagonal subproblem is solved again with exchanged radius and termination tested.
If this point does not satisfy the termination criterion, conjugate gradient or Lanczos iterations are resumed until convergence.
However, we rarely observed the need to resume the Krylov iterations in practice.
\revise{An explanation is offered based on the use of the convergence criterion $$ \Vert \nabla L \Vert_{M^{-1}} \le \emph{\texttt{tol}} $$ as follows: In the Krylov subspace $ \mathcal K_i $, $$ \Vert \nabla L \Vert_{M^{-1}} = \gamma^{i+1} \vert \langle \bm x(\lambda), \bm{e_{i+1}} \rangle \vert \le \gamma^{i+1} \Vert \bm x(\lambda) \Vert_2 \le \gamma^{i+1} \Delta.$$ Convergence thus occurs if either $ \gamma^{i+1} $ or the last component of $ \bm x(\lambda) $ is small. Reducing the trust region radius also reduces the upper bound for $ \Vert \nabla L \Vert_{M^{-1}} $, so convergence is likely to occur, especially if $ \gamma^{i+1} $ turns out to be small.}
\revise{If the trust region radius is small enough, or equivalently the Lagrange multiplier large enough, it can be proven that a decrease in the trust region radius leads to a decrease in $ \Vert \nabla L \Vert_{M^{-1}} $:
\begin{lemma}
There is $ \hat \lambda \ge \max_i \vert \lambda_i(T) \vert $ such that $ \lambda \mapsto \gamma^{i+1} \vert \langle \bm{x}(\lambda), \bm{e_{i+1}} \rangle \vert $ is a decreasing function for $ \lambda \ge \hat \lambda $.
\end{lemma}
\begin{proof}
Using the Neumann series expansion $$ (T_i+\lambda I)^{-1} = \sum_{k \ge 0} (-1)^k \tfrac 1{\lambda^{k+1}} T^k, $$ which holds for $ \lambda > \max_{i} \vert \lambda_i(T) \vert $, we find:
\begin{align*}
\Vert \nabla L \Vert_{M^{-1}} & = \gamma^{i+1} \vert \langle \bm x(\lambda), \bm{e_{i+1}} \rangle \vert = \gamma^{i+1} \gamma^0 \vert \langle (T_i+\lambda I)^{-1} \bm{e_1}, \bm{e_{i+1}} \rangle \vert \\
& = \gamma^{i+1} \gamma^0 \left \vert \sum_{k\ge 0} (-1)^k \tfrac{1}{\lambda^{k+1}} \bm{e_{i+1}}^T T^k \bm{e_1} \right \vert = \frac{\prod_{j=0}^{i+1} \gamma^j}{\lambda^{i+1}} + O( (\tfrac 1\lambda)^{i+2}),
\end{align*}
where we have made use of the facts that $ \bm{e_{i+1}}^T T^k \bm{e_1} $ vanishes for $ k < i $, and that $ \bm{e_{i+1}}^T T^i \bm{e_1} = \prod_{j=1}^{i} \gamma^j $, which can easily be proved using the relation $ T \bm{e_j} = \gamma^{j-1} \bm{e_{j-1}} + \gamma^{j+1} \bm{e_{j+1}} + \delta_j \bm{e_j} $.
The claim now holds if $ \lambda $ is large enough such that higher order terms in this expansion can be neglected.
\end{proof}
}
\subsection{Termination criterion}
Convergence is reported as soon as the Lagrangian gradient satisfies
\begin{align*}
\Vert \nabla L \Vert_{M^{-1}} \le \begin{cases} \max\{\emph{\texttt{tol\_abs\_i}}, \emph{\texttt{tol\_rel\_i}} \, \Vert g \Vert_{M^{-1}} \}, & \text{if } \lambda = 0 \\
\max\{\emph{\texttt{tol\_abs\_b}}, \emph{\texttt{tol\_rel\_b}} \, \Vert g \Vert_{M^{-1}} \}, & \text{if } \lambda > 0 \end{cases}.
\end{align*}
Using possibly different tolerances in the interior and boundary case is motivated by applications in nonlinear optimization where trust region subproblems serve as a globalization mechanism.
There, a local minimizer of the nonlinear problem will be an interior solution of the trust region subproblem, and it is thus not necessary to solve the trust region subproblem to highest accuracy in the boundary case.
\subsection{\revise{Heuristic addressing ill-conditioning}}
\revise{The preconditioned Lanczos directions $ P_i $ are $ M $-orthogonal if computed using exact arithmetic.
It is well known that, in finite precision and if $ H $ is ill-conditioned, $ M $-orthogonality may be lost due to propagation of roundoff errors.
An indication that this has happened can be obtained by verifying the identity $$ \tfrac 12 \langle \bm h, T_i \bm h \rangle + \gamma^0 \langle \bm h, \bm{e_1} \rangle = q(P_i \bm h),$$
which holds if $ P_i $ indeed is $ M $-orthogonal.
On several badly scaled instances, for example \texttt{ARGLINB} of the \texttt{CUTEst} test set, we have seen that both quantities may even differ in sign,
in which case the solution of the trust-region subproblem would yield a direction of ascent.
This issue becomes especially severe if $ H $ has small but positive eigenvalues and admits an interior solution of the trust region subproblem.
Then, the Ritz values computed as eigenvalues of $ T_i $ may very well be negative due to roundoff errors, enforcing convergence to a boundary point of the trust region subproblem.
Finally, if the trust region radius $ \Delta $ is large, the two ``solutions'' can differ significantly.}
\revise{To address this observation, we have developed a heuristic that, by convexification, permits obtaining a descent direction even if $ P_i $ has lost $ M $-orthogonality.
For this, let $ \underline{\rho} := \min_j \frac{\langle p_j, H p_j \rangle}{\langle p_j, Mp_j \rangle} $ and $ \overline \rho := \max_j \frac{\langle p_j, H p_j \rangle}{\langle p_j, Mp_j \rangle} $ be the minimal and maximal Rayleigh quotients, respectively, used as estimates of the extremal eigenvalues of $ H $. Both are cheap to compute during the Krylov subspace iterations.
\begin{enumerate}
\item If algorithm~\ref{alg:KrylovTR} has converged with a boundary solution such that $ \lambda \ge 10^{-2} \max\{1, \overline \rho \} $ and $ \vert \underline \rho \vert \le 10^{-8} \overline \rho $, the case described above may be at hand. We compute $ q_x := q(P_i \bm{h}) $ in addition to $ q_h := \tfrac 12 \langle \bm h, T_i \bm h \rangle + \gamma^0 \langle \bm h, \bm{e_1} \rangle $.
If either $ q_x > 0 $ or $ \vert q_x - q_h \vert > 10^{-7} \max\{ 1, \vert q_x \vert \} $, we resolve with a convexified problem.
\item The convexification heuristic we use is obtained by adding a nonnegative diagonal matrix $ D $ to $ T_i $, where $ D $ is chosen such that $ T_i + D $ is positive definite. We then resolve the tridiagonal problem with $ T_i + D $ as the new convexified tridiagonal matrix.
We obtain $ D $ by attempting to compute a Cholesky factorization of $ T_i $. Monitoring the pivots in the Cholesky factorization, we choose $ d_j $ such that the pivots $ \pi_j $ are at least slightly positive. The formal procedure is given in algorithm~\ref{alg:convexify}. Computational results use the constants $ \varepsilon = 10^{-12} $ and $ \sigma = 10 $.
\end{enumerate}
}
\begin{algorithm}
\DontPrintSemicolon
\SetKwInOut{Input}{\revise{input}}
\SetKwInOut{Output}{\revise{output}}
\SetKw{KwReturn}{\revise{return}}
\Input{\revise{$ T_i $, $ \varepsilon > 0 $, $ \sigma > 0 $}}
\Output{\revise{$ D $ such that $ T_i + D $ is positive definite}}
\BlankLine
\For{\revise{$j = 0, \ldots, i$}}{
\revise{{$ \hat \pi_j := \begin{cases} \delta_0, & j = 0 \\ \delta_j - \gamma_j^2 / \pi_{j-1}, & j > 0 \end{cases} $}}\;
\revise{{$ d_j := \begin{cases} 0, & \hat \pi_j \ge \varepsilon \\ \sigma \vert \gamma_j^2/\pi_{j-1} - \delta_j \vert, & \hat \pi_j < \varepsilon \end{cases} $}}\;
\revise{{$ \pi_j := \hat \pi_j + d_j $}}\;
}
\caption{\revise{Convexification heuristic for the tridiagonal matrix $ T_i $.}}
\label{alg:convexify}
\end{algorithm}
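A direct transcription of the convexification procedure in Python, operating on the diagonal entries $ \delta_j $ and off-diagonal entries $ \gamma_j $ of the tridiagonal matrix (the function name is a hypothetical illustration):

```python
import numpy as np

def convexify(diag, offdiag, eps=1e-12, sigma=10.0):
    """Choose d >= 0 such that T + diag(d) is positive definite, where
    T = tridiag(offdiag, diag, offdiag), by monitoring the pivots of the
    Cholesky recurrence and lifting any pivot that falls below eps."""
    n = len(diag)
    d = np.zeros(n)
    pi = np.zeros(n)                      # corrected pivots
    for j in range(n):
        corr = 0.0 if j == 0 else offdiag[j - 1] ** 2 / pi[j - 1]
        pi_hat = diag[j] - corr           # uncorrected pivot
        if pi_hat < eps:
            d[j] = sigma * abs(corr - diag[j])
        pi[j] = pi_hat + d[j]
    return d
```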
\subsection{TRACE}
\label{sec:trlib-trace}
In the recently proposed TRACE algorithm \cite{Curtis2016}, trust region problems are also used.
In addition to solving trust region problems, the following operations have to be performed:
\begin{itemize}
\item $ \min_x \tfrac 12 \langle x, (H+\lambda M) x \rangle + \langle g, x \rangle $,
\item Given constants $ \sigma_{\text l}, \sigma_{\text u} $ compute $ \lambda $ such that the solution point
of $ \min_x \tfrac 12 \langle x, (H+\lambda M) x \rangle + \langle g, x \rangle $ satisfies $ \sigma_{\text l} \le \frac{\lambda}{\Vert x \Vert_{M}} \le \sigma_{\text u} $.
\end{itemize}
These operations have to be performed after a trust region problem has been solved and can be efficiently implemented using the Krylov subspaces already built up.
We have implemented these as suggested in \cite{Curtis2016}, where the first operation requires one backsolve with tridiagonal data and the second one is implemented as root finding on $ \lambda \mapsto \tfrac{\lambda}{\Vert x(\lambda) \Vert} - \sigma $ with a certain $ \sigma \in [\sigma_{\text l}, \sigma_{\text u}] $ that is terminated as soon as $ \tfrac{\lambda}{\Vert x(\lambda) \Vert} \in [\sigma_{\text l}, \sigma_{\text u}] $.
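As an illustration of the second operation, the following sketch performs the rootfinding by bisection on dense data, exploiting that $ \lambda \mapsto \lambda / \Vert x(\lambda) \Vert $ is increasing. The function name is hypothetical, the assumption of a positive definite $ H $ is made for simplicity, and the actual implementation works on the tridiagonal Krylov data as described above.

```python
import numpy as np

def trace_regularization(H, g, sig_l, sig_u, lam_hi=1e8, tol=1e-12):
    """Find lam >= 0 with sig_l <= lam / ||x(lam)|| <= sig_u for
    x(lam) = -(H + lam I)^{-1} g, by bisection on the increasing
    function lam -> lam / ||x(lam)||.  Dense-algebra sketch."""
    n = H.shape[0]
    lo, hi = 0.0, lam_hi
    while hi - lo > tol * max(1.0, lo):
        lam = 0.5 * (lo + hi)
        x = np.linalg.solve(H + lam * np.eye(n), -g)
        r = lam / np.linalg.norm(x)
        if r < sig_l:
            lo = lam
        elif r > sig_u:
            hi = lam
        else:
            return lam, x
    raise RuntimeError("bracket collapsed without hitting the target band")
```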
\subsection{C11 Interface}
\label{sec:c_interfaces}
The algorithm has been implemented in C11.
The user is responsible for holding vector data and invokes the algorithm by repeated calls to the function \texttt{trlib\_krylov\_min} with integer and floating point workspace and dot products $ \langle v, g \rangle, \langle p, Hp \rangle $ as arguments, and in return receives status information and instructions to be performed on the vectorial data.
A detailed reference is provided in the \texttt{Doxygen} documentation to the code.
\subsection{Python Interface}
\label{sec:py_interfaces}
A low-level Python interface to the C library has been created using \texttt{Cython}; it closely resembles the C API and allows for easy integration into more user-friendly, high-level interfaces.
As a particular example, a trust region solver for PDE-constrained optimization problems has been developed to be used from \texttt{DOLFIN-adjoint} \cite{Farrell2013,Funke2013} within \texttt{FEniCS} \cite{Alnaes2015,Logg2010,Alnaes2014}.
Here vectorial data is handled exclusively as \texttt{FEniCS} objects, and no numerical data other than dot products is extracted from these objects.
\section{Numerical Results}
\label{sec:results}
In this section, we present an assessment of the computational performance of
our implementation \texttt{trlib} of the GLTR method, and compare it to the
reference implementation \texttt{GLTR} as well as several competing methods for
solving the trust region problem and their respective implementations.
\subsection{Generation of Trust-Region Subproblems}
\label{sec:cutest}
For want of a reference benchmark set of non-convex trust region subproblems, we
resorted to the subset of unconstrained nonlinear programming problems of the
\texttt{CUTEst} benchmark library, and use a standard trust region
algorithm, e.g.~Gould et al.~\cite{Gould1999}, for solving $
\min_{\boldsymbol x \in \mathbb R^n} f(\boldsymbol x) $, as a generator of
trust-region subproblems. The algorithm starts from a given initial point $
\boldsymbol{x^0} \in \mathbb R^n $ and trust region radius $ \Delta^0 > 0 $, and
iterates for $ k \ge 0 $:
\begin{algorithm}[h]
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKw{KwReturn}{return}
\Input{$f$, $x^0$, $\Delta^0$, $\rho_\textup{acc}$, $\rho_\textup{inc}$, $\gamma^+$, $\gamma^-$, \emph{\texttt{tol\_abs}}}
\Output{$k$, $x^k$}
\BlankLine
\For{$k\geq 0$}{
{Evaluate $ \boldsymbol{g^k} := \nabla f(\boldsymbol{x^k}) $}\;
{Test for termination: Stop if $ \Vert \boldsymbol{g^k} \Vert \le \emph{\texttt{tol\_abs}} $}\;
{Evaluate $ H^k := \nabla^2_{\boldsymbol{xx}} f(\boldsymbol{x^k}) $}\;
{Compute (approximate) minimizer $ \boldsymbol{d^k} $ to $ \text{TR}(H^k,\boldsymbol{g^k},I,\Delta^k) $}\;
{Assess the performance $ \rho^k := ({f(\boldsymbol{x^k}+\boldsymbol{d^k}) - f(\boldsymbol{x^k})})/{q(\boldsymbol {d^k})} $ of the step}\;
{Update step: $\boldsymbol{x^{k+1}} := \begin{cases} \boldsymbol{x^k} + \boldsymbol{d^k}, & \rho^k \ge \rho_{\text{acc}} \\ \boldsymbol{x^k}, & \rho^k < \rho_{\text{acc}} \end{cases},$}\;
{Update trust region radius: $\Delta^{k+1} := \begin{cases} \gamma^+ \Delta^k, & \rho^k \ge \rho_{\text{inc}} \\ \Delta^k, & \rho_{\text{acc}} \le \rho^k < \rho_{\text{inc}} \\ \gamma^- \Delta^k, & \rho^k < \rho_{\text{acc}} \end{cases}$}\;
}
\caption{Standard trust region algorithm for unconstrained nonlinear programming, used to generate trust region subproblems from \texttt{CUTEst}.}
\label{alg:TRgenerator}
\end{algorithm}
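Algorithm~\ref{alg:TRgenerator} can be transcribed as follows. For a self-contained illustration the trust region subproblem solver is replaced by a simple Cauchy-point step; in the benchmark, each of the solvers under comparison is plugged in at that position instead. All function names are hypothetical.

```python
import numpy as np

def cauchy_step(H, g, delta):
    """Minimizer of the quadratic model along -g inside the trust region:
    a crude stand-in for an exact TR(H, g, I, delta) subproblem solver."""
    gn = np.linalg.norm(g)
    gHg = g @ H @ g
    t = delta / gn
    if gHg > 0:
        t = min(t, gn ** 2 / gHg)   # unconstrained line minimum along -g
    return -t * g

def trust_region_method(f, grad, hess, x0, delta0, tol_abs=1e-7,
                        rho_acc=1e-2, rho_inc=0.95,
                        gamma_plus=2.0, gamma_minus=0.5, maxit=1000):
    """Standard trust region loop with the iterate and radius update
    rules exactly as stated in the algorithm above."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for k in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) <= tol_abs:
            return x, k
        H = hess(x)
        d = cauchy_step(H, g, delta)
        q = 0.5 * d @ H @ d + g @ d            # model decrease, q(d) < 0
        rho = (f(x + d) - f(x)) / q            # performance of the step
        if rho >= rho_acc:
            x = x + d
        if rho >= rho_inc:
            delta *= gamma_plus
        elif rho < rho_acc:
            delta *= gamma_minus
    return x, maxit
```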
In a first study, we compared our implementation \texttt{trlib} of the GLTR
method to the reference implementation \texttt{GLTR} as well as several
competing methods for solving the trust region problem, and their respective
implementations, as follows:
\begin{itemize}
\item \texttt{GLTR} \cite{Gould1999} in the \texttt{GALAHAD} library implements the GLTR method.
\item \texttt{LSTRS} \cite{Rojas2008} uses an eigenvalue based approach.
The implementation uses \texttt{MATLAB} and makes use of the direct \texttt{ARPACK} \cite{Lehoucq1998} reverse communication interface, which is deprecated in recent versions of \texttt{MATLAB} and led to crashes within the \texttt{MATLAB 2013b} version used by us.
We thus resorted to the standard \texttt{eigs} eigenvalue solver provided by \texttt{MATLAB}, which might severely impact the behaviour of the algorithm.
\item \texttt{SSM} \cite{Hager2001} implements a sequential subspace method that may use an SQP accelerated step.
\item \texttt{ST} is an implementation of the truncated conjugate gradient method proposed independently by Steihaug \cite{Steihaug1983} and Toint \cite{Toint1981}.
\item \texttt{trlib} is our implementation of the GLTR method.
\end{itemize}
All codes, with the exception of \texttt{LSTRS}, have been implemented in a
compiled language by their respective authors, Fortran 90 in the case of \texttt{GLTR} and C for all other
codes. \texttt{LSTRS} has been implemented in
interpreted \texttt{MATLAB} code. The benchmark code used to run this comparison has also
been made open source and is available as \texttt{trbench} \cite{Lenders2016b}.
In our test case the parameters $ \Delta^0 = \tfrac{1}{\sqrt{n}} $, $ \emph{\texttt{tol\_abs}} = 10^{-7} $, $ \rho_{\text{acc}} = 10^{-2} $, $ \rho_{\text{inc}} = 0.95 $, $ \gamma^+ = 2 $ and $ \gamma^- = \tfrac 12 $ have been used.
\revise{We used the subproblem convergence criteria specified in table~\ref{tab:convcrit} for the different solvers, aiming for convergence criteria as comparable as possible within the available implementations.
Our rationale for the interior convergence criterion, which requests $ \Vert \nabla L \Vert_{M^{-1}} = O( \Vert \bm{g^k} \Vert_{M^{-1}}^2 ) $, is that it defines an inexact Newton method with q-quadratic convergence rate \cite[Thm 7.2]{Nocedal2006}.
As \texttt{LSTRS} is a method based on solving a generalized eigenvalue problem, its convergence criterion depends on the convergence criterion of the generalized eigensolver and is not comparable with the other termination criteria. With the exception of \texttt{trlib}, no solver allows specifying different convergence criteria for interior and boundary convergence.}
\begin{table}
\revise{\begin{tabular}{lll}
solver & $ \tau $ interior convergence & $ \tau $ boundary convergence \\ \hline
\texttt{GLTR} & $ \min\{ 0.5, \Vert \bm{g^k} \Vert_{M^{-1}} \} \Vert \bm{g^k} \Vert_{M^{-1}} $ & identical to interior \\
\texttt{LSTRS} & \multicolumn{2}{l}{defined in dependence of convergence of implicit restarted Arnoldi method} \\
\texttt{SSM} & $ \min\{ 0.5, \Vert \bm{g^k} \Vert_{M^{-1}} \} \Vert \bm{g^k} \Vert_{M^{-1}} $ & identical to interior \\
\texttt{ST} & $ \min\{ 0.5, \Vert \bm{g^k} \Vert_{M^{-1}} \} \Vert \bm{g^k} \Vert_{M^{-1}} $ & method heuristic in that case \\
\texttt{trlib} & $ \min\{ 0.5, \Vert \bm{g^k} \Vert_{M^{-1}} \} \Vert \bm{g^k} \Vert_{M^{-1}} $ & $ \max\{10^{-6}, \min\{ 0.5, \Vert \bm{g^k} \Vert_{M^{-1}}^{1/2} \} \} \Vert \bm{g^k} \Vert_{M^{-1}} $
\end{tabular}}
\caption{\revise{Convergence criteria for subproblem solvers $ \Vert \nabla L \Vert_{M^{-1}} \le \tau $}}
\label{tab:convcrit}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics{cutest_all}
\caption{Performance Profiles for matrix-vector products, NLP iterations and total CPU time for different trust region subproblem solvers when used in a standard trust region algorithm for unconstrained minimization evaluated on the set of all unconstrained minimization problems from the \texttt{CUTEst} library.}
\end{figure}
The performance of the different algorithms is assessed using extended performance profiles as introduced in \cite{Dolan2002,Mahajan2011}: for a given set $ S $ of solvers and a set $ P $ of problems, the performance profile of solver $ s \in S $ is defined by
\begin{equation*}
\rho_s(\tau) := \frac{1}{\vert P \vert} \, \bigl\vert \{ p \in P \mid r_{s,p} \le \tau \} \bigr\vert, \qquad r_{s,p} := \frac{t_{s,p}}{\min_{\sigma \in S, \sigma \neq s} t_{\sigma,p}},
\end{equation*}
where $ t_{s,p} $ denotes the performance of solver $ s \in S $ on problem $ p \in P $.
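The profile can be computed directly from a table of performance numbers; a sketch with hypothetical names (failures, encoded e.g. as infinite times, would need additional care):

```python
import numpy as np

def extended_performance_profile(t, taus):
    """Extended performance profiles: t[s, p] holds the performance
    (e.g. CPU time or matrix-vector products) of solver s on problem p.
    The ratio r compares each solver against the best of the *other*
    solvers, so rho_s(tau) can reach 1 at tau < 1 for a clear winner."""
    n_solvers, n_problems = t.shape
    rho = np.empty((n_solvers, len(taus)))
    for s in range(n_solvers):
        best_other = np.delete(t, s, axis=0).min(axis=0)
        r = t[s] / best_other
        rho[s] = [(r <= tau).sum() / n_problems for tau in taus]
    return rho
```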
It can be seen that \texttt{GLTR} and \texttt{trlib} are the most robust solvers on the subset of unconstrained problems from \texttt{CUTEst}, in the sense that they eventually solve the largest fraction of problems among all solvers, and that they are also among the fastest solvers.
That \texttt{GLTR} and \texttt{trlib} show similar performance is to be expected as they implement the identical GLTR algorithm; \texttt{trlib} is slightly more robust and faster.
We attribute this to the efficient hotstart capabilities and to the switch to the Lanczos process for building up the Krylov subspaces once directions of zero curvature are encountered.
\revise{Tables~\ref{tab:cutestresults}--\ref{tab:cutestresults3} show the individual results on the \texttt{CUTEst} library.}
\begin{table}
\begin{tiny}
\begin{tabular}{l|l|ll|ll|ll|ll|ll}
problem & $ n $ & \multicolumn{2}{c|}{\texttt{GLTR}} & \multicolumn{2}{c|}{\texttt{LSTRS}} & \multicolumn{2}{c|}{\texttt{SSM}} & \multicolumn{2}{c|}{\texttt{ST}} & \multicolumn{2}{c}{\texttt{trlib}} \\
& & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ \\
\texttt{AKIVA} & 2 & 3.7e-04 & 12 & 1.7e-03 & 104 & 3.7e-04 & 18 & 3.7e-04 & 12 & 3.7e-04 & 12 \\
\texttt{ALLINITU} & 4 & 1.2e-06 & 28 & 1.9e-05 & 275 & 1.2e-06 & 30 & 3.3e-05 & 20 & 1.2e-06 & 27 \\
\texttt{ARGLINA} & 200 & 2.1e-13 & 9 & 1.0e-13 & 485 & 2.8e-13 & 648 & 1.9e-13 & 10 & 1.8e-13 & 9 \\
\texttt{ARGLINB} & 200 & 1.4e-01 & 9 & 2.1e-01 & 14695 & \multicolumn{2}{c|}{failure} & 3.6e-04 & 152 & 9.7e-03 & 76 \\
\texttt{ARGLINC} & 200 & 7.9e-02 & 9 & 3.1e-01 & 9177 & \multicolumn{2}{c|}{failure} & 1.6e-03 & 156 & 5.1e-02 & 21 \\
\texttt{ARGTRIGLS} & 10 & 1.0e-09 & 50 & 3.6e-06 & 372 & 1.0e-09 & 15 & 1.2e-08 & 42 & 1.0e-09 & 50 \\
\texttt{ARWHEAD} & 5000 & 3.7e-11 & 20 & 2.4e-08 & 1054 & 3.7e-11 & 551752 & 3.7e-10 & 24 & 3.7e-11 & 17 \\
\texttt{BA-L16LS} & 66462 & 1.1e+06 & 58453 & 9.8e+07 & 83115 & \multicolumn{2}{c|}{failure} & 2.4e+06 & 20698 & 1.1e+08 & 21941 \\
\texttt{BA-L1LS} & 57 & 4.6e-08 & 317 & 1.3e+01 & 72289 & 6.0e-08 & 30336 & 1.2e-08 & 436 & 2.4e-08 & 758 \\
\texttt{BA-L21LS} & 34134 & 6.2e+06 & 129819 & 5.7e+07 & 208393 & 2.7e+09 & 1123576 & 1.2e+06 & 43139 & 9.8e+05 & 36639 \\
\texttt{BA-L49LS} & 23769 & 4.4e+04 & 250639 & 1.7e+06 & 1412516 & \multicolumn{2}{c|}{failure} & 2.9e+05 & 60741 & 8.7e+05 & 35305 \\
\texttt{BA-L52LS} & 192627 & 3.5e+08 & 21964 & 6.7e+09 & 36939 & \multicolumn{2}{c|}{failure} & 3.1e+07 & 16589 & 2.7e+07 & 19543 \\
\texttt{BA-L73LS} & 33753 & 1.4e+06 & 161282 & 7.1e+12 & 32865 & \multicolumn{2}{c|}{failure} & 7.5e+11 & 10071 & 4.7e+07 & 92020 \\
\texttt{BARD} & 3 & 5.6e-07 & 23 & \multicolumn{2}{c|}{failure} & 5.6e-07 & 24 & 9.8e-08 & 2910 & 5.6e-07 & 24 \\
\texttt{BDQRTIC} & 5000 & 5.7e-04 & 218 & 5.8e-04 & 4235 & 5.7e-04 & 811903 & 1.0e-02 & 529 & 5.7e-04 & 209 \\
\texttt{BEALE} & 2 & 1.2e-08 & 16 & 4.8e-06 & 93 & 1.2e-08 & 24 & 2.0e-08 & 62 & 1.2e-08 & 16 \\
\texttt{BENNETT5LS} & 3 & 6.5e-08 & 405 & \multicolumn{2}{c|}{failure} & 2.2e-04 & 2256 & 9.9e-08 & 876 & 1.8e-08 & 1691 \\
\texttt{BIGGS6} & 6 & 1.8e-08 & 71 & \multicolumn{2}{c|}{failure} & 5.9e-09 & 108 & 2.5e-04 & 20128 & 2.1e-08 & 410 \\
\texttt{BOX} & 10000 & 4.0e-04 & 32 & 6.8e-05 & 1021 & 4.0e-04 & 3278 & 1.8e-05 & 2172 & 4.0e-04 & 32 \\
\texttt{BOX3} & 3 & 6.6e-11 & 24 & \multicolumn{2}{c|}{failure} & 6.6e-11 & 24 & 1.0e-07 & 17266 & 6.6e-11 & 24 \\
\texttt{BOXBODLS} & 2 & 2.6e-01 & 50 & 7.8e-05 & 450 & 2.6e-01 & 87 & 3.8e-01 & 23 & 2.6e-01 & 42 \\
\texttt{BOXPOWER} & 20000 & 2.4e-08 & 86 & \multicolumn{2}{c|}{failure} & 1.6e+05 & 10285059 & 4.7e-05 & 1335136 & 5.6e-08 & 107 \\
\texttt{BRKMCC} & 2 & 6.1e-06 & 6 & 2.0e-08 & 74 & 6.1e-06 & 9 & 6.1e-06 & 6 & 6.1e-06 & 6 \\
\texttt{BROWNAL} & 200 & 2.8e-09 & 37 & \multicolumn{2}{c|}{failure} & 4.2e-10 & 128430 & 1.0e-07 & 54218 & 7.9e-10 & 32 \\
\texttt{BROWNBS} & 2 & 6.0e-06 & 75 & 1.1e-08 & 777 & 2.4e-07 & 99 & 8.9e-10 & 69 & 2.4e-07 & 67 \\
\texttt{BROWNDEN} & 4 & 7.3e-05 & 47 & 5.1e-04 & 268 & 7.3e-05 & 36 & 1.1e-01 & 54 & 7.3e-05 & 45 \\
\texttt{BROYDN3DLS} & 10 & 6.2e-11 & 60 & 4.5e-05 & 218 & 6.2e-11 & 18 & 1.4e-10 & 43 & 6.2e-11 & 57 \\
\texttt{BROYDN7D} & 5000 & 4.7e-04 & 13895 & 1.6e-04 & 201198 & 2.9e-04 & 2285169 & 1.2e-03 & 2206 & 5.8e-04 & 1660 \\
\texttt{BROYDNBDLS} & 10 & 2.0e-11 & 110 & 8.0e-05 & 466 & 2.0e-11 & 33 & 3.6e-13 & 70 & 2.0e-11 & 105 \\
\texttt{BRYBND} & 5000 & 6.2e-08 & 630 & 1.9e-06 & 93338 & 1.2e-09 & 3781397 & 7.6e-10 & 733 & 8.3e-13 & 639 \\
\texttt{CHAINWOO} & 4000 & 6.6e-04 & 40920 & 8.8e+02 & 69945 & 1.3e-04 & 3282530 & 1.5e-02 & 41073 & 3.1e-04 & 11482 \\
\texttt{CHNROSNB} & 50 & 5.2e-08 & 2032 & 8.1e-05 & 39963 & 3.5e-09 & 4008 & 1.8e-13 & 629 & 2.7e-10 & 1422 \\
\texttt{CHNRSNBM} & 50 & 1.5e-08 & 3181 & 1.0e-05 & 107423 & 4.4e-08 & 5065 & 1.4e-09 & 809 & 9.2e-09 & 1863 \\
\texttt{CHWIRUT1LS} & 3 & 5.4e+00 & 59 & 2.3e-01 & 139 & 2.1e-01 & 42 & 5.3e+00 & 43 & 2.1e-01 & 27 \\
\texttt{CHWIRUT2LS} & 3 & 4.0e-03 & 57 & 9.8e-02 & 138 & 3.4e-01 & 39 & 1.3e-02 & 37 & 3.4e-01 & 23 \\
\texttt{CLIFF} & 2 & 2.1e-05 & 38 & \multicolumn{2}{c|}{failure} & 2.1e-05 & 81 & 2.1e-05 & 41 & 2.1e-05 & 40 \\
\texttt{COSINE} & 10000 & 1.2e-06 & 213 & 7.2e+01 & 1 & 1.2e-06 & 6703 & 9.3e-03 & 72 & 1.2e-06 & 133 \\
\texttt{CRAGGLVY} & 5000 & 1.3e-04 & 622 & 1.2e-04 & 27113 & 1.3e-04 & 4646010 & 2.3e-03 & 453 & 1.3e-04 & 698 \\
\texttt{CUBE} & 2 & 1.2e-07 & 64 & 9.2e-06 & 564 & 2.6e-11 & 105 & 9.8e-08 & 204 & 2.6e-11 & 50 \\
\texttt{CURLY10} & 10000 & 3.7e-01 & 93106 & 1.3e+02 & 1 & 3.7e-01 & 1755070 & 1.8e-04 & 290643 & 4.5e-01 & 84837 \\
\texttt{CURLY20} & 10000 & 4.2e-03 & 94429 & 3.0e+02 & 1 & 2.5e-03 & 1334642 & 8.3e-02 & 98598 & 5.2e-03 & 96190 \\
\texttt{CURLY30} & 10000 & 2.7e-01 & 78302 & 4.2e+03 & 6974346 & 2.7e-01 & 146501 & 1.9e-02 & 128689 & 3.3e-01 & 77637 \\
\texttt{DANWOODLS} & 2 & 2.2e-06 & 18 & 5.6e-06 & 232 & 2.2e-06 & 27 & 2.2e-06 & 18 & 2.2e-06 & 18 \\
\texttt{DECONVU} & 63 & 2.4e-08 & 3650 & 1.3e-03 & 418777 & 4.0e-09 & 37199 & 2.3e-06 & 563021 & 8.3e-08 & 72328 \\
\texttt{DENSCHNA} & 2 & 6.6e-12 & 12 & 5.3e-08 & 136 & 6.6e-12 & 18 & 6.6e-12 & 12 & 6.6e-12 & 12 \\
\texttt{DENSCHNB} & 2 & 5.8e-10 & 12 & 1.3e-06 & 155 & 5.8e-10 & 18 & 1.0e-10 & 9 & 5.8e-10 & 12 \\
\texttt{DENSCHNC} & 2 & 8.7e-08 & 20 & 3.4e-06 & 237 & 8.7e-08 & 30 & 5.9e-08 & 20 & 8.7e-08 & 20 \\
\texttt{DENSCHND} & 3 & 5.1e-08 & 114 & \multicolumn{2}{c|}{failure} & 8.1e-08 & 135 & 3.7e-06 & 11399 & 8.1e-08 & 120 \\
\texttt{DENSCHNE} & 3 & 5.2e-12 & 35 & 9.5e-05 & 307 & 5.2e-12 & 45 & 2.1e-10 & 1442 & 5.2e-12 & 25 \\
\texttt{DENSCHNF} & 2 & 2.1e-09 & 12 & 3.6e-05 & 97 & 2.1e-09 & 18 & 1.0e-09 & 12 & 2.1e-09 & 12 \\
\texttt{DIXMAANA} & 3000 & 2.3e-13 & 44 & 1.5e-13 & 2763 & 2.3e-13 & 478120 & 6.7e-21 & 31 & 2.3e-13 & 38 \\
\texttt{DIXMAANB} & 3000 & 5.7e-08 & 503 & 7.3e-05 & 40355 & 5.7e-08 & 945986 & 1.6e-13 & 37 & 5.7e-08 & 80 \\
\texttt{DIXMAANC} & 3000 & 4.5e-12 & 1382 & 1.7e-05 & 40963 & 4.5e-12 & 1520049 & 4.5e-12 & 37 & 2.8e-09 & 95 \\
\texttt{DIXMAAND} & 3000 & 3.4e-13 & 1533 & 7.3e-08 & 68784 & 3.4e-13 & 1656761 & 2.7e-10 & 38 & 7.0e-17 & 169 \\
\texttt{DIXMAANE} & 3000 & 4.6e-08 & 2012 & \multicolumn{2}{c|}{failure} & 1.3e-11 & 3089 & 4.0e-11 & 515 & 1.6e-12 & 1281 \\
\texttt{DIXMAANF} & 3000 & 4.5e-08 & 2644 & \multicolumn{2}{c|}{failure} & 2.1e-08 & 1348070 & 1.0e-07 & 22275 & 6.7e-11 & 1079 \\
\texttt{DIXMAANG} & 3000 & 4.8e-08 & 4035 & 1.1e+00 & 845145 & 1.1e-08 & 1242789 & 1.0e-07 & 22211 & 2.0e-08 & 1673 \\
\texttt{DIXMAANH} & 3000 & 3.9e-08 & 5627 & 5.5e+02 & 1950740 & 5.9e-10 & 1696337 & 1.0e-07 & 22207 & 8.7e-08 & 2011 \\
\texttt{DIXMAANI} & 3000 & 1.0e-06 & 40507 & 1.0e+03 & 1 & 6.1e-06 & 19337 & 2.6e-07 & 3582057 & 1.8e-12 & 27353 \\
\texttt{DIXMAANJ} & 3000 & 4.6e-08 & 23746 & 2.2e+01 & 593623 & 6.2e-13 & 952725 & 1.8e-07 & 3314012 & 1.7e-07 & 11321 \\
\texttt{DIXMAANK} & 3000 & 4.6e-08 & 20831 & 1.5e+03 & 3100658 & 3.3e-11 & 1555718 & 1.8e-07 & 3310116 & 6.7e-07 & 14341 \\
\texttt{DIXMAANL} & 3000 & 4.6e-08 & 24371 & 3.1e+02 & 1122879 & 1.8e-09 & 1760641 & 1.8e-07 & 3319300 & 1.9e-11 & 16093 \\
\texttt{DIXMAANM} & 3000 & 4.7e-08 & 9845 & 4.4e+02 & 1 & 1.4e-11 & 2559 & 2.8e-07 & 4041601 & 1.0e-05 & 10745 \\
\texttt{DIXMAANN} & 3000 & 4.7e-08 & 33134 & 5.3e-01 & 1792578 & 4.5e-09 & 878377 & 1.9e-07 & 3874306 & 6.1e-08 & 18948 \\
\texttt{DIXMAANO} & 3000 & 4.8e-08 & 33105 & 1.1e-01 & 1810480 & 7.4e-08 & 968909 & 1.9e-07 & 3918576 & 3.4e-09 & 15832 \\
\texttt{DIXMAANP} & 3000 & 5.4e-08 & 19509 & 1.1e+02 & 90319 & 2.7e-08 & 1282847 & 2.8e-07 & 5486601 & 8.5e-10 & 12074 \\
\texttt{DIXON3DQ} & 10000 & 4.6e-08 & 40506 & 5.7e+00 & 1 & 6.1e-09 & 100140 & 1.3e-05 & 15308266 & 1.4e-12 & 19971 \\
\texttt{DJTL} & 2 & 3.9e+00 & 155 & 1.2e+05 & 1528 & 1.0e+01 & 3360 & 6.6e-01 & 1029 & 9.8e+00 & 2160 \\
\texttt{DMN15103LS} & 99 & 4.2e+01 & 924732 & 5.3e+03 & 177264 & 1.0e+02 & 87836914 & 7.8e+00 & 783230 & 6.6e+01 & 767826 \\
\texttt{DMN15332LS} & 66 & 2.7e-03 & 719233 & 8.1e+01 & 626859 & 3.6e+01 & 99777049 & 2.5e+00 & 1213511 & 2.5e+00 & 996706 \\
\texttt{DMN15333LS} & 99 & 1.5e+01 & 928176 & 2.7e+02 & 730749 & \multicolumn{2}{c|}{failure} & 5.4e+00 & 874786 & 2.9e+00 & 769091 \\
\texttt{DMN37142LS} & 66 & 9.4e-03 & 385536 & 3.1e+01 & 846259 & 1.4e-02 & 63711807 & 1.7e+00 & 1256055 & 1.7e+02 & 1073546 \\
\texttt{DMN37143LS} & 99 & 1.1e+00 & 547560 & 3.5e+03 & 84848 & 4.5e+00 & 41749169 & 1.4e+01 & 777991 & 1.3e+01 & 736780 \\
\texttt{DQDRTIC} & 5000 & 3.3e-10 & 39 & 8.3e-14 & 792 & 4.2e-12 & 3027385 & 1.3e-11 & 22 & 3.2e-10 & 25 \\
\texttt{DQRTIC} & 5000 & 4.1e-08 & 14236 & 1.3e+13 & 1 & 3.5e-08 & 15362086 & 1.0e-07 & 369300 & 3.5e-08 & 19244 \\
\texttt{ECKERLE4LS} & 3 & 1.8e-08 & 13 & \multicolumn{2}{c|}{failure} & 2.4e-08 & 63 & 1.6e-07 & 10001 & 2.4e-08 & 57 \\
\texttt{EDENSCH} & 2000 & 5.1e-05 & 342 & 9.5e-03 & 65271 & 5.1e-05 & 1645581 & 1.1e-04 & 147 & 5.1e-05 & 208 \\
\texttt{EG2} & 1000 & 2.9e-08 & 6 & \multicolumn{2}{c|}{failure} & 2.1e-04 & 1126 & 1.2e-02 & 11 & 2.9e-08 & 6 \\
\texttt{EIGENALS} & 2550 & 4.2e-07 & 9436 & \multicolumn{2}{c|}{failure} & 1.9e+00 & 276726 & 7.2e-08 & 151148 & 3.5e-09 & 5959 \\
\texttt{EIGENBLS} & 2550 & 6.5e-08 & 745535 & 4.8e+00 & 329779 & 6.8e-03 & 475261 & 3.3e-06 & 1132767 & 4.8e-05 & 1056840 \\
\end{tabular}
\end{tiny}
\caption{\revise{Results of subproblem solvers in individual \texttt{CUTEst} problems, part 1}}
\label{tab:cutestresults}
\end{table}
\begin{table}
\begin{tiny}
\begin{tabular}{l|l|ll|ll|ll|ll|ll}
problem & $ n $ & \multicolumn{2}{c|}{\texttt{GLTR}} & \multicolumn{2}{c|}{\texttt{LSTRS}} & \multicolumn{2}{c|}{\texttt{SSM}} & \multicolumn{2}{c|}{\texttt{ST}} & \multicolumn{2}{c}{\texttt{trlib}} \\
& & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ & $ \Vert \nabla f \Vert $ & \# $ Hv $ \\
\texttt{EIGENCLS} & 2652 & 3.8e-08 & 796370 & \multicolumn{2}{c|}{failure} & 5.7e-01 & 402829 & 5.4e-09 & 66267 & 7.9e-09 & 270864 \\
\texttt{ENGVAL1} & 5000 & 2.4e-03 & 120 & 2.4e-03 & 18197 & 2.4e-03 & 3023116 & 5.9e-04 & 96 & 2.4e-03 & 107 \\
\texttt{ENGVAL2} & 3 & 6.5e-07 & 43 & 5.9e-06 & 353 & 4.5e-15 & 45 & 0.0e+00 & 42 & 1.7e-12 & 45 \\
\texttt{ENSOLS} & 9 & 9.3e-05 & 95 & 9.6e-05 & 412 & 9.3e-05 & 33 & 2.8e-04 & 68 & 9.3e-05 & 88 \\
\texttt{ERRINROS} & 50 & 7.3e-07 & 1446 & \multicolumn{2}{c|}{failure} & 9.0e-04 & 6582 & 7.6e-07 & 109821 & 9.2e-04 & 883 \\
\texttt{ERRINRSM} & 50 & 1.1e-03 & 2817 & \multicolumn{2}{c|}{failure} & 8.3e-03 & 5037 & 2.6e-06 & 720904 & 8.3e-03 & 1487 \\
\texttt{EXPFIT} & 2 & 2.1e-06 & 17 & 6.1e-07 & 131 & 4.8e-09 & 24 & 5.8e-06 & 17 & 4.8e-09 & 12 \\
\texttt{EXTROSNB} & 1000 & 9.9e-08 & 33028 & 2.3e-01 & 18905 & 5.7e-08 & 3716226 & 2.7e-06 & 12048850 & 1.0e-07 & 247139 \\
\texttt{FBRAIN2LS} & 4 & 2.8e-01 & 236 & \multicolumn{2}{c|}{failure} & 1.3e-02 & 138 & 4.5e-04 & 30008 & 1.3e-02 & 187 \\
\texttt{FBRAIN3LS} & 6 & 1.5e-06 & 60534 & \multicolumn{2}{c|}{failure} & 1.6e+01 & 486095 & 2.6e-03 & 39955 & 8.6e-08 & 30562 \\
\texttt{FBRAINLS} & 2 & 3.4e-05 & 14 & 3.9e-05 & 149 & 3.4e-05 & 21 & 8.6e-05 & 14 & 3.4e-05 & 14 \\
\texttt{FLETBV3M} & 5000 & 9.1e-03 & 4883 & \multicolumn{2}{c|}{failure} & 1.1e-03 & 19423 & 2.2e-05 & 885 & 2.6e-03 & 1379 \\
\texttt{FLETCBV2} & 5000 & \multicolumn{2}{c|}{failure} & \multicolumn{2}{c|}{failure} & \multicolumn{2}{c|}{failure} & \multicolumn{2}{c|}{failure} & \multicolumn{2}{c|}{failure} \\
\texttt{FLETCBV3} & 5000 & 3.1e+01 & 14194503 & 3.8e+01 & 55869908 & 3.2e+01 & 15365644 & 2.1e+01 & 4726900 & 3.0e+01 & 8099116 \\
\texttt{FLETCHBV} & 5000 & 2.7e+09 & 38547 & 3.7e+09 & 14764569 & 3.0e+09 & 35263513 & 3.6e+09 & 78 & 3.0e+09 & 18992 \\
\texttt{FLETCHCR} & 1000 & 4.2e-08 & 61120 & 7.0e-05 & 663337 & 4.8e-08 & 300564 & 4.2e-09 & 45367 & 4.8e-08 & 47342 \\
\texttt{FMINSRF2} & 5625 & 4.3e-08 & 12601 & 3.3e-01 & 1 & 6.4e-09 & 44273 & 5.1e-06 & 1931678 & 1.1e-09 & 3067 \\
\texttt{FMINSURF} & 5625 & 1.0e-07 & 8750 & 3.3e-01 & 1 & 5.8e-02 & 27451 & 6.8e-08 & 47015 & 8.7e-06 & 4011 \\
\texttt{FREUROTH} & 5000 & 3.9e-01 & 80 & 3.9e-01 & 4042 & 3.9e-01 & 6628218 & 6.0e-03 & 55 & 3.9e-01 & 69 \\
\texttt{GAUSS1LS} & 8 & 4.2e+01 & 68 & 1.1e+01 & 288 & 4.2e+01 & 21 & 1.4e+01 & 71 & 4.3e+01 & 60 \\
\texttt{GAUSS2LS} & 8 & 2.7e-01 & 79 & 2.3e-01 & 293 & 2.7e-01 & 24 & 1.4e+01 & 77 & 2.7e-01 & 70 \\
\texttt{GBRAINLS} & 2 & 1.4e-04 & 12 & 1.4e-04 & 94 & 1.4e-04 & 18 & 1.4e-04 & 12 & 1.4e-04 & 12 \\
\texttt{GENHUMPS} & 5000 & 4.8e-11 & 1486656 & 6.0e+03 & 1 & 4.7e-11 & 8692146 & 8.9e-08 & 35816 & 5.0e-12 & 529592 \\
\texttt{GENROSE} & 500 & 6.7e-04 & 16490 & 6.1e-05 & 309312 & 2.0e-06 & 66839 & 3.4e-05 & 3639 & 1.1e-04 & 8682 \\
\texttt{GROWTHLS} & 3 & 5.4e-03 & 345 & 3.2e-02 & 2027 & 8.9e-03 & 294 & 2.4e-03 & 4075 & 5.1e-05 & 239 \\
\texttt{GULF} & 3 & 4.0e-08 & 74 & \multicolumn{2}{c|}{failure} & 6.8e-08 & 78 & 5.7e-04 & 19576 & 6.8e-08 & 69 \\
\texttt{HAHN1LS} & 7 & 1.8e+03 & 9794 & 7.5e+01 & 5273 & 8.3e+01 & 332983 & 5.1e-01 & 5459 & 2.8e+00 & 592 \\
\texttt{HAIRY} & 2 & 1.7e-04 & 118 & 2.5e-05 & 993 & 1.2e-03 & 210 & 1.6e-03 & 137 & 1.2e-03 & 100 \\
\texttt{HATFLDD} & 3 & 2.1e-08 & 71 & \multicolumn{2}{c|}{failure} & 1.5e-11 & 75 & 1.0e-07 & 14033 & 1.5e-11 & 69 \\
\texttt{HATFLDE} & 3 & 3.5e-08 & 54 & \multicolumn{2}{c|}{failure} & 1.7e-10 & 57 & 9.8e-08 & 3318 & 1.7e-10 & 51 \\
\texttt{HATFLDFL} & 3 & 4.7e-08 & 283 & \multicolumn{2}{c|}{failure} & 6.6e-08 & 4404 & 5.1e-06 & 28015 & 3.5e-09 & 1078 \\
\texttt{HEART6LS} & 6 & 3.5e-08 & 6521 & 4.0e+00 & 29124 & 5.2e-08 & 3871 & 3.3e+00 & 39973 & 5.2e-08 & 8285 \\
\texttt{HEART8LS} & 8 & 4.0e-10 & 524 & 1.8e-05 & 1466 & 1.9e-09 & 147 & 2.0e-13 & 353 & 1.9e-09 & 379 \\
\texttt{HELIX} & 3 & 1.7e-11 & 36 & 3.4e-05 & 330 & 1.7e-11 & 36 & 3.7e-12 & 32 & 1.7e-11 & 36 \\
\texttt{HIELOW} & 3 & 5.4e-03 & 12 & 6.7e-03 & 87 & 5.4e-03 & 12 & 3.2e-05 & 18 & 5.4e-03 & 12 \\
\texttt{HILBERTA} & 2 & 2.8e-15 & 6 & 5.4e-15 & 56 & 2.2e-16 & 9 & 9.5e-08 & 301 & 6.2e-15 & 6 \\
\texttt{HILBERTB} & 10 & 2.4e-09 & 17 & 3.0e-06 & 202 & 2.4e-14 & 15 & 6.3e-10 & 12 & 2.4e-09 & 13 \\
\texttt{HIMMELBB} & 2 & 7.0e-07 & 18 & \multicolumn{2}{c|}{failure} & 2.1e-13 & 75 & 8.2e-13 & 33 & 1.2e-12 & 19 \\
\texttt{HIMMELBF} & 4 & 4.6e-05 & 308 & \multicolumn{2}{c|}{failure} & 4.6e-05 & 192 & 1.6e-02 & 29526 & 4.6e-05 & 287 \\
\texttt{HIMMELBG} & 2 & 8.6e-09 & 8 & 3.0e-05 & 62 & 8.6e-09 & 12 & 1.0e-13 & 11 & 8.6e-09 & 8 \\
\texttt{HIMMELBH} & 2 & 5.5e-06 & 8 & 7.7e-06 & 67 & 5.5e-06 & 15 & 5.0e-09 & 6 & 5.5e-06 & 9 \\
\texttt{HUMPS} & 2 & 1.0e-12 & 2955 & 4.7e-02 & 39232 & 3.1e-11 & 10767 & 1.0e-07 & 2297 & 2.6e-12 & 6202 \\
\texttt{HYDC20LS} & 99 & 1.1e-03 & 97095959 & 1.9e+06 & 738933 & \multicolumn{2}{c|}{failure} & 1.3e-01 & 93133732 & 1.3e-01 & 96002204 \\
\texttt{INDEF} & 5000 & 7.1e+01 & 297 & \multicolumn{2}{c|}{failure} & 7.1e+01 & 28565674 & 9.1e+01 & 6895561 & 7.1e+01 & 338 \\
\texttt{INDEFM} & 100000 & 1.1e-08 & 134 & \multicolumn{2}{c|}{failure} & \multicolumn{2}{c|}{failure} & 1.2e-02 & 3308 & 4.6e-09 & 92 \\
\texttt{INTEQNELS} & 12 & 2.3e-09 & 12 & 1.3e-05 & 145 & 4.9e-11 & 9 & 4.9e-11 & 15 & 4.9e-11 & 15 \\
\texttt{JENSMP} & 2 & 3.4e-02 & 18 & 3.4e-02 & 213 & 3.4e-02 & 27 & 3.4e-02 & 18 & 3.4e-02 & 18 \\
\texttt{JIMACK} & 3549 & 1.1e-04 & 103654 & 1.4e+00 & 1 & 9.4e-06 & 123549 & 9.1e-08 & 397707 & 8.8e-05 & 105680 \\
\texttt{KIRBY2LS} & 5 & 9.5e-03 & 198 & 5.1e+01 & 349 & 2.5e+00 & 60 & 4.2e+00 & 769 & 2.7e+00 & 83 \\
\texttt{KOWOSB} & 4 & 2.3e-07 & 40 & \multicolumn{2}{c|}{failure} & 1.0e-07 & 36 & 9.9e-08 & 8576 & 1.0e-07 & 40 \\
\texttt{KOWOSBNE} & 4 & 7.0e-08 & 124 & \multicolumn{2}{c|}{failure} & \multicolumn{2}{c|}{failure} & 1.0e-07 & 8375 & 2.4e-08 & 68 \\
\texttt{LANCZOS1LS} & 6 & 3.9e-08 & 484 & \multicolumn{2}{c|}{failure} & 5.2e-08 & 348 & 2.6e-05 & 29889 & 7.6e-08 & 651 \\
\texttt{LANCZOS2LS} & 6 & 3.7e-08 & 461 & 1.3e+02 & 1 & 1.5e-09 & 342 & 2.7e-05 & 29858 & 9.6e-08 & 625 \\
\texttt{LANCZOS3LS} & 6 & 4.1e-08 & 455 & \multicolumn{2}{c|}{failure} & 9.9e-08 & 393 & 2.6e-05 & 29950 & 2.6e-09 & 757 \\
\texttt{LIARWHD} & 5000 & 1.9e-08 & 44 & 3.9e-06 & 5072 & 1.9e-08 & 6202073 & 3.2e-14 & 168 & 1.9e-08 & 43 \\
\texttt{LOGHAIRY} & 2 & 9.2e-07 & 5102 & \multicolumn{2}{c|}{failure} & 8.1e-05 & 15966 & 1.5e-03 & 10003 & 1.5e-06 & 6676 \\
\texttt{LSC1LS} & 3 & 2.4e-07 & 74 & 1.2e-05 & 893 & 2.4e-07 & 81 & 5.7e-08 & 3057 & 2.4e-07 & 58 \\
\texttt{LSC2LS} & 3 & 2.2e-05 & 113 & \multicolumn{2}{c|}{failure} & 5.1e-05 & 156 & 3.8e-02 & 19975 & 9.1e-09 & 162 \\
\texttt{LUKSAN11LS} & 100 & 3.1e-12 & 14138 & 1.9e-07 & 103185 & 1.8e-12 & 800008 & 2.9e-13 & 2684 & 1.8e-12 & 9341 \\
\texttt{LUKSAN12LS} & 98 & 9.2e-03 & 675 & 3.7e-02 & 59360 & 9.2e-03 & 2545 & 1.5e-02 & 411 & 9.1e-03 & 402 \\
\texttt{LUKSAN13LS} & 98 & 5.5e-02 & 324 & 1.8e-02 & 6656 & 5.5e-02 & 18870 & 7.7e-04 & 176 & 5.7e-02 & 237 \\
\texttt{LUKSAN14LS} & 98 & 1.2e-03 & 580 & 1.3e-03 & 47362 & 1.2e-03 & 5703 & 4.2e-06 & 289 & 1.2e-03 & 349 \\
\texttt{LUKSAN15LS} & 100 & 4.7e-03 & 868 & 1.4e+00 & 559146 & 8.8e-04 & 4816 & 9.7e-08 & 1217 & 4.0e-04 & 758 \\
\texttt{LUKSAN16LS} & 100 & 1.2e-05 & 118 & 3.0e+04 & 1 & 1.2e-05 & 1229 & 9.2e-03 & 91 & 1.2e-05 & 123 \\
\texttt{LUKSAN17LS} & 100 & 4.9e-06 & 1043 & 1.5e-01 & 1653079 & 4.9e-06 & 6687 & 2.9e-05 & 1379 & 4.9e-06 & 1208 \\
\texttt{LUKSAN21LS} & 100 & 4.4e-08 & 2042 & 2.8e+00 & 1 & 7.7e-09 & 5922 & 3.3e-08 & 6962 & 7.3e-10 & 1750 \\
\texttt{LUKSAN22LS} & 100 & 7.5e-06 & 1122 & 1.5e-04 & 49915 & 3.6e-05 & 1456 & 1.8e-06 & 1251618 & 3.6e-05 & 893 \\
\texttt{MANCINO} & 100 & 3.4e-05 & 192 & 8.3e-05 & 5269 & 1.2e-07 & 206932 & 1.0e-07 & 45 & 1.1e-07 & 138 \\
\texttt{MARATOSB} & 2 & 9.8e-03 & 2639 & 8.7e+00 & 731 & 4.8e-02 & 3006 & 2.2e-02 & 1566 & 4.8e-02 & 1322 \\
\texttt{MEXHAT} & 2 & 2.0e-05 & 145 & 8.7e+01 & 753 & 6.6e-04 & 96 & 4.3e-04 & 60 & 6.6e-04 & 54 \\
\texttt{MEYER3} & 3 & 1.6e-03 & 1242 & 2.3e-03 & 7573 & 1.1e+03 & 933 & 4.1e-05 & 3780 & 8.9e-04 & 879 \\
\texttt{MGH09LS} & 4 & 1.7e-09 & 571 & \multicolumn{2}{c|}{failure} & 6.5e-10 & 369 & 6.5e-04 & 11810 & 2.1e-07 & 400 \\
\texttt{MGH10LS} & 3 & 7.2e+03 & 987 & 3.3e+06 & 140325 & 4.6e+05 & 552 & 7.4e+26 & 751 & 9.8e+03 & 193 \\
\texttt{MGH17LS} & 5 & 1.6e+00 & 41696 & \multicolumn{2}{c|}{failure} & 9.2e-06 & 4299 & 4.9e-06 & 39945 & 3.2e-05 & 772 \\
\texttt{MISRA1ALS} & 2 & 5.4e-04 & 89 & 2.4e-04 & 669 & 8.2e-02 & 297 & 1.3e-05 & 20002 & 3.5e-03 & 74 \\
\texttt{MISRA1BLS} & 2 & 1.1e-01 & 51 &
} \hskip -.05cm \operatorname{Pseff}}^m(C_d)= \operatorname{Pseff}^m(C_d)$ and $\operatorname{Nef}^m(C_d)={^{\rm t} \hskip -.05cm \operatorname{Nef}}^m(C_d)$.
A case where we know a complete description of the (tautological) effective, pseudoeffective and nef cone of cycles is the case of curves of genus one\footnote{Note that if $g=0$, i.e. $C=\mathbb{P}^1$, then
$C_d \cong \mathbb{P}^d$ and all the cones in question become one-dimensional, hence trivial.}.
\begin{example}[Genus $1$ -- {\cite[Example 2.9]{BKLV}}]\label{genus1}
If the curve $C$ has genus $1$, then for any $1\leq m \leq d-1$ we have that $N^m(C_d)=R^m(C_d)$ and
\begin{equation*}\label{E:cones-g1}
\operatorname{Pseff}^m(C_d)=\operatorname{Nef}^m(C_d)=\operatorname{cone}\left(x^{m-1}\theta, x^m-\frac{m}{d}x^{m-1}\theta\right) \subset N^m(C_d)_{\mathbb{R}} \cong \mathbb{R}^2.
\end{equation*}
\end{example}
\noindent This follows either by \cite{Ful} or by \cite[Theorem B]{BKLV} and Theorem \ref{T:B}\eqref{T:B1}.
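For instance, taking $m=1$ in Example \ref{genus1} gives the explicit description
\begin{equation*}
\operatorname{Pseff}^1(C_d)=\operatorname{Nef}^1(C_d)=\operatorname{cone}\left(\theta,\, x-\frac{1}{d}\theta\right)\subset N^1(C_d)_{\mathbb{R}},
\end{equation*}
so that on the symmetric product of a genus $1$ curve the pseudoeffective and nef cones of divisor classes coincide and are spanned by $\theta$ and $x-\frac{\theta}{d}$.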
\section{Abel-Jacobi faces}\label{S:AJ}
The aim of this section is to introduce some faces of the (tautological or not) pseudoeffective cones of $C_d$ obtained as contractibility faces of the Abel-Jacobi morphism $\alpha_d:C_d\to \operatorname{Pic}^d(C)$.
\subsection{Contractibility faces}\label{SS:contract}
In this subsection, we will introduce the contractibility faces associated to any morphism $\pi:X\to Y$ between irreducible projective varieties.
The definition of the contractibility faces is based on the contractibility index introduced in \cite[\S 4.2]{fl2}.
\begin{defi}\label{contract}
Let $\pi:X \to Y$ be a morphism between irreducible projective varieties and fix the class $h\in N^1(Y)$ of an ample Cartier divisor on $Y$. Given an element $\alpha\in \operatorname{Pseff}_k(X)$ for some $0\leq k \leq \dim X$, the \emph{contractibility index} of $\alpha$, denoted by $\contr_{\pi}(\alpha)$, is equal to the largest non-negative
integer \color{black}$c\leq k+1$ such that $\alpha\cdot \pi^*(h^{k+1-c})=0$.
\end{defi}
Since $\alpha\cdot \pi^*(h^{k+1})=0$ for dimension reasons, the contractibility index is well-defined and it is easy to see that it does not depend on $h$. The following properties are immediate:
\begin{itemize}
\item $\max\{0, k-\dim \pi(X)\}\leq \contr_{\pi}(\alpha)\leq k+1$ and equality holds in the last inequality if and only if $\alpha=0$;
\item $\contr_{\pi}(\alpha)>0 \Longleftrightarrow \pi_*(\alpha)=0$;
\item If $\alpha=[Z]$ for an irreducible subvariety $Z\subseteq X$ of dimension $k$, then $\contr_{\pi}(Z):=\contr_{\pi}(\alpha)=\dim Z-\dim \pi(Z)$.
\end{itemize}
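As a simple illustration of the last property: if $Z\cong \mathbb{P}^r$ is an irreducible subvariety contracted by $\pi$ to a point of $Y$, then $\contr_{\pi}(Z)=\dim Z-\dim \pi(Z)=r-0=r$. This is precisely the situation that will occur below for the Abel-Jacobi map, whose positive-dimensional fibers are the linear systems $|D|$ with $\dim |D|>0$.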
\begin{defi}\label{contract2}
Let $\pi:X \to Y$ be a morphism between irreducible projective varieties and let $k, r$ be integers such that $0 \le k \le \dim X, r \ge 0$. Set
$$F_k^{\geq r}(\pi) = \operatorname{cone}(\{\alpha \in \operatorname{Pseff}_k(X) : \contr_{\pi}(\alpha)\ge r \}).$$
We set $c_{\pi}(r) = -1$ if there is no irreducible subvariety $Z \subseteq X$ with $\contr_{\pi}(Z)\ge r$; otherwise we define
\begin{displaymath}
c_{\pi}(r) =\max\left\{ 0 \le k \le \dim X \left| \begin{array}{l} \text{there exists an irreducible subvariety }Z \subseteq X\\ \text{with } \dim Z = k \text{ and } \contr_{\pi}(Z)\ge r \end{array}\right.\right\}.
\end{displaymath}
\end{defi}
Note that $F_k^{\ge r}(\pi) = \emptyset$ if $r > k+1$, $F_k^{\ge k+1}(\pi) = \{0\}$, $F_{\dim X}^{\ge r}(\pi) = \{0\}$ if and only if $r \ge 1 + \dim X -\dim \pi(X)$ and $F_k^{\ge r}(\pi) = \operatorname{Pseff}_k(X)$ if and only if $r \le \max\{0, k-\dim \pi(X)\}$. Moreover, if $c_{\pi}(r) \ge 0$, then $r \le c_{\pi}(r) \le \dim X$.
The following criterion of extremality follows from \cite[Thm. 4.15]{fl2} and it is an improvement of \cite[Prop. 2.1, 2.2 and Rmk. 2.7]{CC}.
\begin{prop}\label{extr-crit}
Let $\pi:X \to Y$ be a morphism between projective irreducible varieties and fix $k, r$ integers such that $1+ \max\{0, k-\dim \pi(X)\} \le r \le k \le \dim X$.
Then
\begin{enumerate}[(i)]
\item \label{extr-crit1} The cone $F_k^{\geq r}(\pi)$ is a face of $\operatorname{Pseff}_k(X)$. In particular, the cone $F_k^{\geq r}(\pi) \cap \operatorname{Eff}_k(X)$ is a face of $ \operatorname{Eff}_k(X)$. Moreover $F_k^{\geq r}(\pi)$ is non-trivial
for $r \le k \le c_{\pi}(r)$.
\item \label{extr-crit2} Suppose that $r \le k \le c_{\pi}(r)$. The number of irreducible subvarieties of $X$ of dimension $k$ and contractibility index at least $r$ is finite if and only if $k=c_{\pi}(r)$.
In this case, if we denote by $Z_1,\ldots, Z_s$ the irreducible subvarieties of $X$ of dimension $c_{\pi}(r)$ and contractibility index at least $r$, we have that
$$F_{c_{\pi}(r)}^{\geq r}(\pi)=\operatorname{cone}([Z_1],\ldots,[Z_s])=F_{c_{\pi}(r)}^{\geq r}(\pi)\cap \operatorname{Eff}_k(X).$$
\end{enumerate}
\end{prop}
Because of \eqref{extr-crit1}, we will call $F_k^{\geq r}(\pi)$ the \emph{$r$-th contractibility face} of $\operatorname{Pseff}_k(X)$.
\begin{proof}
Note that for any $\alpha \in \operatorname{Pseff}_k(X)$, we have $\alpha \in F_k^{\geq r}(\pi)$ if and only if $\alpha\cdot \pi^*(h^{k+1-r})=0$. Let $\beta_1, \beta_2 \in \operatorname{Pseff}_k(X)$ be such that $\beta_1+\beta_2 \in F_k^{\geq r}(\pi)$. Then
$\beta_1\cdot \pi^*(h^{k+1-r})+\beta_2\cdot \pi^*(h^{k+1-r}) = 0$ and $\beta_i\cdot \pi^*(h^{k+1-r}) \in \operatorname{Pseff}_{r-1}(X)$ (because $\pi^*(h)$ is nef, hence limit of ample classes) for $i=1, 2$, so that $\beta_1\cdot \pi^*(h^{k+1-r}) = \beta_2\cdot \pi^*(h^{k+1-r})=0$ since $\operatorname{Pseff}_{r-1}(X)$ is salient by \cite[Prop. 1.3]{BFJ}, \cite[Thm. 1.4(i)]{fl1}. Then $\beta_1, \beta_2 \in F_k^{\geq r}(\pi)$. This proves the first assertion in \eqref{extr-crit1}.
Assume now that $r < k \le c_{\pi}(r)$ and let $Z \subseteq X$ be an irreducible subvariety such that $\dim Z = k, \contr_{\pi}(Z) \ge r$. We claim that there are infinitely many irreducible subvarieties $W \subseteq X$ with $\dim W = k-1$ and $\contr_{\pi}(W) \ge r$. It follows from this claim that $F_k^{\geq r}(\pi)$ is non-trivial
for $r \le k \le c_{\pi}(r)$. To see the claim we consider two cases. If $\pi(Z)$ is not a point, then pick a generic codimension one subvariety $V \subset \pi(Z)$ such that $V$ intersects the open subset of $\pi(Z)$ where fibers of $\pi_{|Z}$ have dimension $\contr_{\pi}(Z)$. The inverse image $(\pi_{|Z})^{-1}(V)$ will have an irreducible component $W$ that dominates $V$ and therefore with $\dim W = k-1$ and $\contr_{\pi}(W) = \contr_{\pi}(Z) \ge r$. If $\pi(Z)$ is a point, then pick any codimension one subvariety $W \subset Z$. Then $\dim W = k-1$ and $\contr_{\pi}(W) = \dim W = k-1 \ge r$. In either case, there are infinitely many such
subvarieties $W$ \color{black}and the claim is proved.
Consider now the first assertion of part \eqref{extr-crit2}. The ``only if'' part follows immediately from the claim, starting with an irreducible subvariety $Z \subset X$ of dimension $c_{\pi}(r)$ and contractibility index at least $r$ (such a $Z$ exists by the definition of $c_{\pi}(r)$). The ``if'' part is proved in \cite[Thm. 4.15(1)]{fl2}.
For the second assertion of \eqref{extr-crit2}: the first equality follows from \cite[Thm. 4.15(2)]{fl2} and the second equality follows directly from the first one.
\end{proof}
\begin{remark}\label{cong}
Let $1 \le r \le k \le \dim X$. It is natural to wonder if the following statements hold true:
\begin{enumerate}
\item (Strong$^r(\pi)$) $F_k^{\ge r}(\pi)=\overline{F_k^{\ge r}(\pi)\cap \operatorname{Eff}_k(X)}$.
\item (Weak$^r(\pi)$) $\langle F_k^{\ge r}(\pi)\rangle=\langle F_k^{\ge r}(\pi)\cap \operatorname{Eff}_k(X)\rangle$.
\end{enumerate}
For $r=1$, the above statements reduce to, respectively, the strong and weak conjecture in \cite[Conj. 1.1]{fl2}. If Weak$^{r}(\pi)$ holds true then
$F_k^{\ge r}(\pi) = \{0\}$ for any $k>c_{\pi}(r)$.
If we also assume that $k \ge r \ge \dim X - \dim \pi(X) + 1$ it is easy to see, using \cite[Thm. 4.13]{fl2}, that the last expectation holds. Moreover, since $F_k^{\ge r}(\pi) \subseteq F_k^{\ge r-1}(\pi)$ we expect that $F_k^{\ge r}(\pi) = \{0\}$ when $k > c_{\pi}(1)$.
\end{remark}
\subsection{Brill-Noether varieties}\label{SS:BNvar}
In order to apply the previous criterion to the
Abel-Jacobi map $\alpha_d: C_d \to \operatorname{Pic}^d(C)$, we need to know the subvarieties of $C_d$ that have contractibility index at least $r$ with respect to $\alpha_d$.
As we will see, these subvarieties turn out to be contained in the \emph{Brill-Noether} variety $C_d^r\subseteq C_d$ which is defined (set theoretically) as:
$$C_d^r:=\{D\in C_d: \: \dim |D|\geq r\}.$$
Note that $C_d^r=\alpha_d^{-1}(W_d^r(C))$ where $W_d^r(C)$ is the Brill-Noether variety in $\operatorname{Pic}^d(C)$ which is defined (set theoretically) as
$$W_d^r(C) = \{L\in \operatorname{Pic}^d(C): \: h^0(C,L)\geq r+1\}.$$
The Brill-Noether varieties $C_d^r$ and $W_d^r(C)$ are in a natural way determinantal varieties (see \cite[Chap. IV]{GAC1}).
From the Riemann-Roch theorem, we have the following trivial cases for $W_d^r(C)$ and $C_d^r$:
\begin{itemize}
\item If $r\leq \max\{-1,d-g\}$ then $W_d^r(C)=\operatorname{Pic}^d(C)$, and hence $C_d^r=C_d$.
\item If $r=0$ and $d\leq g-1$ then $\alpha_d: C_d^0=C_d\to W_d^0(C)$ is a resolution of singularities.
\item If $d\geq 2g-1$ then $W_d^r(C)=
\begin{cases}
\operatorname{Pic}^d(C) & \text{ if } r\leq d-g, \\
\emptyset & \text{ if } r>d-g,
\end{cases}$
and
$C_d^r=
\begin{cases}
C_d & \text{ if } r\leq d-g, \\
\emptyset & \text{ if } r>d-g.
\end{cases}$
\end{itemize}
The non-emptiness of $C_d^r$ is equivalent to the existence of a linear system of degree $d$ and dimension $r$ on $C$, and we define an invariant of $C$ controlling the existence of such linear systems.
\begin{defi}\label{D:gonindex}
For any integer $r \ge 1$, the \emph{$r$-th gonality index} of $C$, denoted by $\operatorname{gon}_r(C)$, is the smallest integer $d$ such that $C$ admits a $\mathfrak{g}_d^r$.
\end{defi}
Clearly, $d\geq \operatorname{gon}_r(C)$ if and only if the curve $C$ admits a $\mathfrak{g}_d^r$. Observe that if $r=1$ then $\operatorname{gon}_1(C)$ is the (usual) gonality $\operatorname{gon}(C)$ of $C$.
The possible values that the $r$-th gonality index can achieve are described in the following
\begin{lemma}\label{L:gonindex}
The $r$-th gonality index of $C$ satisfies the following
\begin{equation}\label{gonlarge}
\operatorname{gon}_r(C)=
\begin{cases}
g+r & \text{ if }r\geq g,\\
2g-2 & \text{ if }r=g-1,
\end{cases}
\end{equation}
\begin{equation}\label{gonsmall}
2r\leq \operatorname{gon}_r(C)\leq \gamma(r):=
\left\lceil \frac{rg}{r+1} \right\rceil +r \: \text{if }1\leq r \leq g-2,
\end{equation}
where the first inequality is achieved if and only if $C$ is hyperelliptic and the second inequality is achieved for the general curve $C$.
\end{lemma}
\begin{proof}
From Clifford's inequality and the Riemann-Roch theorem, it follows easily that:
\begin{itemize}
\item any $\mathfrak{g}_d^{g-1}$ on $C$ is such that $d\geq 2g-2$ with equality if and only if $\mathfrak{g}_{2g-2}^{g-1}$ is the
complete canonical system $|K_C|$;
\item if $r\geq g$ then any $\mathfrak{g}_d^{r}$ is such that $d\geq r+g\geq 2g$.
\end{itemize}
These two facts imply the first part of the statement.
For the second part of the statement: the lower bound is provided by Clifford's theorem, with equality if and only if the curve is hyperelliptic; the upper bound is provided by Brill-Noether theory, and equality holds for the general curve by
\cite[Chap. V, Thm. 1.5]{GAC1} (the proof of Griffiths and Harris works over any algebraically closed field, see \cite{oss}).
\end{proof}
\begin{remark}\label{R:r-gon}
It follows easily from the previous lemma that if $d\geq \operatorname{gon}_n(C)$ then $d-n\geq \min\{n,g\}$, or equivalently that $r(n):=\min\{n,d-n,g\}=\min\{n,g\}.$
\end{remark}
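To illustrate Lemma \ref{L:gonindex} in the smallest non-trivial case, take $r=1$ and $g=3$: then \eqref{gonsmall} reads
\begin{equation*}
2\leq \operatorname{gon}(C)\leq \gamma(1)=\left\lceil \frac{3}{2}\right\rceil+1=3,
\end{equation*}
and indeed $\operatorname{gon}(C)=2$ exactly when $C$ is hyperelliptic, while a general curve of genus $3$, i.e. a smooth plane quartic, has gonality $3$.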
The properties of the Brill-Noether varieties (in the non-trivial cases) are collected in the following fact that summarizes the main results of the classical Brill-Noether theory (see \cite[Chap. IV and VII]{GAC1}).
\begin{fact}\label{BNclass}
Fix integers $r$ and $d$ such that $\max\{1,d-g+1\}\leq r$ and $0 \leq d\leq 2g-2$.
\begin{enumerate}[(i)]
\item \label{BNclass0} The open subset $C_d^r\setminus C_d^{r+1}\subset C_d^r$ is dense. Therefore, the morphism $(\alpha_d)_{|C_d^r}:C_d^r \twoheadrightarrow W_d^r(C)$ is generically a $\mathbb{P}^r$-fibration and each irreducible component of $C_d^r$ has contractibility index exactly $r$.
\item \label{BNclass00} $C_d^r$ is non-empty if and only if $d\geq \operatorname{gon}_r(C)$. In particular, we have the following
\begin{equation}\label{E:C-nonemp}
d\geq \gamma(r):= \left\lceil \frac{rg}{r+1} \right\rceil +r \Rightarrow C_d^r\neq \emptyset \Rightarrow d\geq 2r.
\end{equation}
\item \label{BNclass1}
If $C_d^r$ is non-empty, every irreducible component of $C_d^{r}$ has dimension at least $r+\rho(g,r,d)=r+g-(r+1)(g-d+r)=d-r(g-d+r)$ and at most $r+(d-2r)=d-r$.
\item \label{BNclass2} Assume that either $C_d^{r}$ is empty or has pure dimension $r+\rho(g,r,d)$. Then the class of $C_d^{r}$ is equal to
$$[C_d^{r}]=c_d^r:=\prod_{i=0}^r \frac{i!}{(g-d+r+i-1)!} \sum_{\alpha=0}^r(-1)^{\alpha} \frac{(g-d+r+\alpha-1)!}{\alpha!(r-\alpha)!} x^{\alpha}\theta^{r(g-d+r)-\alpha}.$$
\item \label{BNclass3} Assume that $C$ is a general curve of genus $g$.
\begin{itemize}
\item If $\rho(g,r,d)<0$ then $C_d^r$ is empty;
\item If $\rho(g,r,d)=0$ then $C_d^r$ is a disjoint union of $g!\displaystyle \prod_{i=0}^r\frac{i!}{(g-d+r+i)!}$ projective spaces of dimension $r$;
\item If $\rho(g,r,d)>0$ then $C_d^r$ is irreducible of dimension $r+\rho(g,r,d)$.
\end{itemize}
\end{enumerate}
\end{fact}
A curve satisfying the conditions of \eqref{BNclass3} is called a \emph{Brill-Noether general} curve.
\begin{proof}
\eqref{BNclass0}: the first assertion follows from the fact that there are no irreducible components of $C_d^r$ contained in $C_d^{r+1}$ by
\cite[Chap. IV, Lemma 1.7]{GAC1} (the proof works over any algebraically closed field).
Using that the restriction of $\alpha_d$ to $C_d^r\setminus C_d^{r+1}$ is a $\mathbb{P}^r$-fibration, the remaining assertions follow.
\eqref{BNclass00}: $C_d^r$ is non-empty if and only if there exists a $\mathfrak{g}_d^r$ on $C$ which is equivalent to the condition $d\geq \operatorname{gon}_r(C)$. The chain of implications \eqref{E:C-nonemp} follows then from Lemma \ref{L:gonindex}.
Using \eqref{BNclass0}, part \eqref{BNclass1} follows from the fact that every irreducible component of $W_d^r(C)$ has dimension greater than or equal to $\rho(g,r,d)$ by \cite[Chap. IV, Lemma 3.3]{GAC1},
\cite{kl1, kl2}, and dimension at most $d-2r$ by Martens' theorem
(see \cite[Chap. IV, Thm. 5.1]{GAC1}, \cite[Thm. 1]{mar}).
For part \eqref{BNclass2}, see
\cite[Chap. VII, \S 5]{GAC1} (the proof works over any algebraically closed field).
Part \eqref{BNclass3}: we will distinguish three cases according to the sign of $\rho(g,r,d)$. If $\rho(g,r,d)<0$ then $W_d^r(C)$ is empty by \cite[Chap. V, Thm. 1.5]{GAC1}
(the proof of Griffiths and Harris works over any algebraically closed field, see \cite{oss}) and hence also $C_d^r$ is empty. If $\rho(g,r,d)=0$ then $W_d^r(C)$ consists of finitely many $\mathfrak{g}_d^r$
(see \cite[Chap. V, Thm. 1.3 and 1.6]{GAC1} - again holding over any algebraically closed field), whose number is equal to $g!\displaystyle \prod_{i=0}^r\frac{i!}{(g-d+r+i)!}$ by Castelnuovo's formula \cite[Chap. V, Formula (1.2)]{GAC1}, \cite{kl2}; hence the result for $C_d^r$ follows. If $\rho(g,r,d)>0$ then $W_d^r(C)$ is irreducible of dimension $\rho(g,r,d)$ by \cite[Chap. V, Thm. 1.4, Cor. of Thm. 1.6]{GAC1}
and by \cite{gies}, \cite[Thm. 1.1 and Rmk. 1.7]{fl}, from which we deduce that $C_d^r$ is irreducible of dimension $r+\dim W_d^r(C)=r+\rho(g,r,d)$ using \eqref{BNclass0}.
\end{proof}
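As a concrete check of Fact \ref{BNclass}\eqref{BNclass3}, let $C$ have genus $g=2$ and take $d=2$, $r=1$, so that $\rho(2,1,2)=2-2\cdot 1=0$. Castelnuovo's formula predicts
\begin{equation*}
g!\prod_{i=0}^{1}\frac{i!}{(g-d+r+i)!}=2\cdot \frac{0!}{1!}\cdot \frac{1!}{2!}=1
\end{equation*}
linear series $\mathfrak{g}_2^1$: indeed every curve of genus $2$ carries exactly one such pencil, namely the canonical one, and $C_2^1=\alpha_2^{-1}(\omega_C)=|K_C|\cong \mathbb{P}^1$ has pure dimension $r+\rho(2,1,2)=1$.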
There are some Brill-Noether varieties that are pure of the expected dimension for any curve (and not only for the general curve), as described in the following example.
\begin{example}\label{BN=subor}
For any $g\leq d \leq 2g-2$, the Brill-Noether variety $C_d^{d-g+1}$ is irreducible of the expected dimension $g-1$\footnote{Indeed, these are the unique Brill-Noether varieties that are also subordinate varieties; more specifically, $C_d^{d-g+1}=\Gamma_d(|K_C|)$, with the notation of \eqref{D:sublocus}.}. Indeed, the variety $W_d^{d-g+1}(C)$ is irreducible of dimension $2g-2-d$ since it is isomorphic, via the residuation map $L\mapsto K_C\otimes L^{-1}$, to the variety $W_{2g-2-d}^0(C)={\rm Im}(\alpha_{2g-2-d})$.
We conclude that $C_d^{d-g+1}$ is irreducible of dimension $d-g+1+\dim W_d^{d-g+1}(C)=g-1$ by Fact \ref{BNclass}\eqref{BNclass0}.
Therefore, Fact \ref{BNclass}\eqref{BNclass2} implies that the class of $C_d^{d-g+1}$ is equal to
\begin{equation}\label{E:clBN}
[C_d^{d-g+1}]= \sum_{\alpha=0}^{d-g+1} (-1)^{\alpha} \frac{x^{\alpha}\theta^{d-g+1-\alpha}}{(d-g+1-\alpha)!}.
\end{equation}
\end{example}
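For example, in the boundary case $d=g$ (with $g\geq 2$), formula \eqref{E:clBN} specializes to the divisor class
\begin{equation*}
[C_g^{1}]=\frac{\theta}{1!}-x=\theta-x \in N^1(C_g)_{\mathbb{R}},
\end{equation*}
the class of the locus of effective divisors of degree $g$ moving in a linear system of dimension at least $1$; note that $\dim C_g^1=g-1$, in accordance with the expected dimension computed above.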
\subsection{Abel-Jacobi faces}\label{SS:AJfaces}
We can now study the contractibility faces associated to the
Abel-Jacobi morphism $\alpha_d:C_d\to \operatorname{Pic}^d(C)$.
\begin{defi}[Abel-Jacobi faces]\label{D:AJfaces}
Let $0 \leq n \leq d$. For any $r$ such that $1 + \max\{0,n-g\}= 1 + \max\{0,n-\dim \alpha_d(C_d)\}\leq r\leq n$, let $\AJ_n^r(C_d):=F_n^{\geq r}(\alpha_d)\subseteq \operatorname{Pseff}_n(C_d)$ and call it the $r$-th \emph{Abel-Jacobi face} in dimension $n$. Moreover, we set $\AJt_n^r(C_d):=F_n^{\geq r}(\alpha_d)\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)\subseteq {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$
and call it the $r$-th \emph{tautological Abel-Jacobi face} in dimension $n$.
\end{defi}
\begin{remark}
\label{nontr}
Let $0 \le n \leq d, 1 + \max\{0,n-g\} \le r \le n$. If $d < \operatorname{gon}(C)$ then $\theta$ is ample, whence $\AJ^r_n(C_d) = \AJt^r_n(C_d) = \{0\}$ by \cite[Cor. 3.15]{fl1}, \cite[Prop. 3.7]{fl2}.
\end{remark}
Applying Proposition \ref{extr-crit} to our case, we get the following result that guarantees that the Abel-Jacobi faces are non-trivial, under suitable assumptions.
\begin{prop}\label{P:nonzero}
Let $1 \le n \le d-1$ and let $1 + \max\{0,n-g\} \le r \le n$. Then
\begin{equation}
\label{cpi}
c_{\alpha_d}(r) = \begin{cases} -1 & \text{ if } d < \operatorname{gon}_r(C) \ \mbox{(or equivalently} \ C_d^r = \emptyset)\cr \dim C_d^r & \text{ if } d \ge \operatorname{gon}_r(C) \ \mbox{(or equivalently} \ C_d^r \neq \emptyset). \cr \end{cases}
\end{equation}
Moreover $\AJ_n^r(C_d) = \{0\}$ whenever $1 + \max\{0,d-g\} \le r \le n$ and either $d < \operatorname{gon}_r(C)$ or $d \ge \operatorname{gon}_r(C)$ and $n > \dim C_d^r$.
Assume now that $d\geq \operatorname{gon}_r(C)$ (which then forces $\dim C_d^r\geq r$). Then the following hold:
\begin{enumerate}[(i)]
\item \label{BNextr1} $\AJ_n^r(C_d)$ is non-trivial if $n \le \dim C_d^r$.
\item \label{BNextr2} $\AJ_{\dim C_d^r}^r(C_d)$ is equal to $\AJ_{\dim C_d^r}^r(C_d)\cap \operatorname{Eff}_{\dim C_d^r}(C_d)$ and it is the conic hull of the irreducible components of $C_d^r$ of maximal dimension.
\end{enumerate}
Furthermore, \eqref{BNextr1} holds for $\AJt_n^r(C_d)$ if $C_d^r$ has some tautological irreducible component of maximal dimension
and \eqref{BNextr2} holds for $\AJt_n^r(C_d)$ if all irreducible components of $C_d^r$ of maximal dimension are tautological.
\end{prop}
\begin{proof}
We will apply Proposition \ref{extr-crit} to the Abel-Jacobi map $\alpha_d:C_d\to \operatorname{Pic}^d(C)$.
Observe that if there is an irreducible subvariety $Z \subseteq C_d$ of contractibility index at least $r$, then $C_d^r \neq \emptyset$ and $Z \subseteq C_d^r$. Moreover, we claim that each irreducible component of $C_d^r$ has contractibility index at least $r$. Indeed, if $r > \max\{0,d-g\}$ the claim follows from Fact \ref{BNclass}\eqref{BNclass0}, while if $r \le \max\{0,d-g\}$ then $C^r_d = C_d$, which has contractibility index $\max\{0,d-g\} \ge r$.
This proves \eqref{cpi} and, if $C_d^r \neq \emptyset$, that the subvarieties of dimension $c_{\alpha_d}(r)$ and contractibility index at least $r$ are exactly the irreducible components of $C_d^r$ of maximal dimension. Using these facts, the first part of the proposition follows from Remark \ref{cong} and Proposition \ref{extr-crit}.
In order to prove the same properties for $\AJt_n^r(C_d)$, observe that the non-triviality of $\AJt_{\dim C_d^r}^r(C_d)$
and the analogue of \eqref{BNextr2} for $\AJt_{\dim C_d^r}^r(C_d)$, follow directly by our assumption. On the other hand, the non-triviality of $\AJt_n^r(C_d)$ for $n < \dim C_d^r$ follows from the proof of Proposition \ref{extr-crit} using that there is one tautological component of $C_d^r$ of dimension equal to $\dim C_d^r$.
\end{proof}
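For instance, if $C$ has genus $2$, $d=2$ and $n=r=1$, then $C_2^1=\alpha_2^{-1}(\omega_C)\cong \mathbb{P}^1$ is irreducible of dimension $1=n$, and Proposition \ref{P:nonzero}\eqref{BNextr2} shows that $\AJ_1^1(C_2)$ is the extremal ray of $\operatorname{Pseff}_1(C_2)$ spanned by the class of this fiber of the Abel-Jacobi map.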
\begin{remark}\label{congAJ}
According to Remark \ref{cong}, we expect that,
for any $1 + \max\{0,n-g\} \le r \le n$, $\AJ_n^r(C_d)=\{0\}$ if either $d<\operatorname{gon}_r(C)$ (which is equivalent to $C_d^r=\emptyset$) or $d\geq \operatorname{gon}_r(C)$ and $n>\dim C_d^r$. In case $r=1$, this would follow from the validity of the weak Conjecture 1.1 in \cite{fl2} for the Abel-Jacobi morphism. Indeed, we know that $\alpha_d$ satisfies the
above-mentioned conjecture if $d < \operatorname{gon}_1(C)$ (in which case it holds trivially) and if $d \ge g$ and the (algebraically closed) base field is uncountable, by \cite[Thm. 1.2]{fl3}.
\end{remark}
As a corollary of the above proposition, we determine some ranges of $d$ and $n$ in which there exist non-trivial Abel-Jacobi faces of $\operatorname{Pseff}_n(C_d)$ or ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
\begin{cor}\label{C:nontriv}
Let $1 \le n \le d-1$ and let $C$ be a curve of genus $g\geq 1$.
\begin{enumerate}[(i)]
\item \label{P:nontriv1} There exist non-trivial Abel-Jacobi faces of $\operatorname{Pseff}_n(C_d)$ if $d \ge \frac{n+g+1}{2}$.
\item \label{P:nontriv2} There exist non-trivial Abel-Jacobi faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ if either $d \ge g+1$ or $d \ge \frac{n+g+1}{2}$ and $C_d^1$ has some tautological irreducible component of maximal dimension (which holds true if $C$ is a Brill-Noether general curve).
\end{enumerate}
\end{cor}
Note that the inequality $d\geq \frac{n+g+1}{2}$ automatically holds if either $n \ge g-1$ or $d-n\ge \frac{g}{2}$.
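Indeed, if $n\geq g-1$ then, using $n\leq d-1$, we have
$$\frac{n+g+1}{2}\leq \frac{n+(n+1)+1}{2}=n+1\leq d,$$
while if $d-n\geq \frac{g}{2}$ then, using $n\geq 1$,
$$d\geq n+\frac{g}{2}\geq \frac{n+g+1}{2}.$$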
From the proof of the corollary, it will follow that the lower bound $d\geq \frac{n+g+1}{2}$ is sharp for Brill-Noether general curves provided that the expectation of Remark \ref{congAJ} holds true.
On the other hand, for special curves, the lower bound is far from being sharp, see Theorem \ref{T:hyper} for the case of hyperelliptic curves.
\begin{proof}
We will distinguish three cases.
\begin{itemize}
\item If $g\leq n$ (which implies that $g+1\leq d$) then $C_d^{n-g+1}=C_d$ by Riemann-Roch, and hence Proposition \ref{P:nonzero}\eqref{BNextr1} implies that $\AJ_n^{n-g+1}(C_d)$ and $\AJt_n^{n-g+1}(C_d)$ are non-trivial.
\item If $n\leq g\leq d-1$ then $C_d^1=C_d$ by Riemann-Roch, and hence Proposition \ref{P:nonzero}\eqref{BNextr1} implies that $\AJ_n^{1}(C_d)$ and $\AJt_n^{1}(C_d)$ are non-trivial.
\item If $d\leq g$ (which implies that $n\leq g-1$) then Fact \ref{BNclass}\eqref{BNclass1} gives that $\dim C_d^1\geq 2d-g-1$
if $C_d^1 \neq \emptyset$. Hence, if $n\leq 2d-g-1$ then Proposition \ref{P:nonzero}\eqref{BNextr1}
and Fact \ref{BNclass}\eqref{BNclass00} imply that $\AJ_n^1(C_d)$ is non-trivial and, furthermore, that $\AJt_n^1(C_d)$ is non-trivial provided that $C_d^1$ has some tautological irreducible component of maximal dimension.
\end{itemize}
\end{proof}
\begin{remark}\label{R:Kummer}
One may wonder if one could get more faces of the pseudoeffective cone of $C_d$ by looking at contractibility faces of some other regular morphism $f\colon C_d\to Z$ to some projective variety. There is no loss of generality (using the Stein factorization) in assuming that $f$ is a regular fibration,
i.e. $f_*(\mathcal O_{C_d})=\mathcal O_Z$.
Any such regular fibration is uniquely determined (up to isomorphism) by $f^*(\operatorname{Amp}(Z))$ which is a face of the semiample cone of $C_d$.
The intersection of the semiample cone with $R^1(C_d)$ is a subcone of the two dimensional cone ${^{\rm t} \hskip -.05cm \operatorname{Nef}}^1(C_d)$ which has two extremal rays: one is spanned by $\eta_{1,d}=dgx-\theta$ which is the dual of the class of the small diagonal $\Delta_{(d)}$ (see \cite[Cor. 3.15]{BKLV}) and the other one is generated by $\theta$ provided that $d\geq \operatorname{gon}(C)$ (see Theorem \ref{thetaNef}). The
Abel-Jacobi morphism $\alpha_d:C_d\twoheadrightarrow \alpha_d(C_d)\subseteq \operatorname{Pic}^d(C)$ corresponds to the face $\operatorname{cone}(\theta)$ while the other face $\operatorname{cone}(\eta_{1,d})$ corresponds to another fibration that we are going to describe.
Consider the regular morphism (as in \cite[\S 2.2]{Pac})
$$
\begin{aligned}
\phi_d: C^d & \longrightarrow J(C)^{{d \choose 2}}\\
(p_1,\ldots, p_d) & \mapsto (\mathcal O_C(p_i-p_j))_{1 \le i < j \le d}.
\end{aligned}
$$
By quotienting $C^d$ by the symmetric group $S_d$ and $J(C)^{{d \choose 2}}$ by the semi-direct product $(\mathbb{Z}/2\mathbb{Z})^{{d \choose 2}}\rtimes S_{{d \choose 2}}$ (where $S_{{d \choose 2}}$ acts by permutation and each copy of $\mathbb{Z}/2\mathbb{Z}$ acts on the corresponding factor $J(C)$ as the inverse), we get a regular
fibration
\begin{equation}\label{E:Kum}
\varphi_d : C_d \twoheadrightarrow \varphi_d(C_d)\subset \Sym^{{d \choose 2}}(\Kum(C)).
\end{equation}
It is easily checked that the only subvariety contracted by $\varphi_d$ is $\Delta_{(d)}$. We then have $c_{\varphi_d}(r) = -1$ if $r \ge 2$ and $c_{\varphi_d}(1) = 1$. By Proposition \ref{extr-crit}(ii) we get that $\operatorname{cone}(\Delta_{(d)})$ is an extremal ray of $\operatorname{Pseff}_1(C_d)$ (which improves \cite[Lemma 2.2]{Pac}, where the author uses the above maps to show that the class of the small diagonal $\Delta_{(d)}$ lies in the boundary of $\operatorname{Pseff}_1(C_d)$). In fact we know more, namely that $\operatorname{cone}(\Delta_{(d)})$ is an edge of $\operatorname{Pseff}_1(C_d)$ by \cite[Cor. 3.15(d)]{BKLV}. According to Remark \ref{cong}, we also expect that $F_k^{\ge r}(\varphi_d) = \{0\}$ for $k \ge 2$, or for $k=1$ and $r \ge 2$. Hence, we do not expect to find new interesting faces by looking at the contractibility faces of $\varphi_d$, apart from a new (and simpler) proof of the fact that $\Delta_{(d)}$ spans an extremal ray of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_1(C_d)$.
\end{remark}
\subsection{Brill-Noether rays}\label{SS:BNrays}
In this subsection, we use Proposition \ref{P:nonzero} to exhibit some extremal rays of $\operatorname{Pseff}_n(C_d)$ (and of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$) for a Brill-Noether general curve.
\begin{thm}\label{AJtaut}
Let $\max\{1,d-g+1\}\leq r$ and $\displaystyle \frac{rg}{r+1}+r\leq d\leq 2g-2$.
Assume that $C$ is a Brill-Noether general curve.
Then $\AJ^r_{r+\rho}(C_d)=\AJt^r_{r+\rho}(C_d)=\operatorname{cone}([C_d^r])$, where $\rho:=\rho(g,r,d)=g-(r+1)(g-d+r)$. In particular, $[C_d^r]$ generates an extremal ray (called the \emph{BN(=Brill-Noether) ray}) of $\operatorname{Pseff}_{r+\rho}(C_d)$ and of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{r+\rho}(C_d)$.
\end{thm}
Note that if $r=1$ and $\frac{g+2}{2}\leq d \leq g$ then $[C_d^1]$ generates an extremal ray of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{2d-g-1}(C_d)$, and this achieves the lower bound on $d$ in Corollary \ref{C:nontriv}.
\begin{proof}
This will follow from Proposition \ref{P:nonzero}\eqref{BNextr2} and its analogue for the tautological Abel-Jacobi faces, provided that we show that either $C_d^r$ is tautological and irreducible of dimension $r+\rho$ or all the irreducible components of $C_d^r$ are tautological, of dimension $r+\rho$ and numerically equivalent (in which case the class of $C_d^r$ is a positive multiple of the class of each of its irreducible components).
The hypothesis $\frac{rg}{r+1}+r\leq d$ is equivalent to $\operatorname{gon}_r(C)\leq d$ by Lemma \ref{L:gonindex} (which is in turn equivalent to $C_d^r\neq \emptyset$), and it implies that $C_d^r$ has pure dimension $r+\rho$ by Fact \ref{BNclass}\eqref{BNclass3} and tautological class by Fact \ref{BNclass}\eqref{BNclass2}. We now distinguish two cases, according to the sign of $\rho$. If $\rho>0$ then $C_d^r$ is irreducible by Fact \ref{BNclass}\eqref{BNclass3} and we are done.
If instead $\rho=0$, then $C_d^r$ is a disjoint union of $r$-dimensional fibers of the map $\alpha_d$ by Fact \ref{BNclass}\eqref{BNclass3}. We conclude by observing that all the $r$-dimensional fibers of $\alpha_d$ are numerically equivalent and they have tautological class (indeed, their class is equal to $\Gamma_d(\mathfrak{g}^r_d)$, see Fact \ref{subcyc}).
\end{proof}
\begin{example}\label{BNexa}
Two special cases of BN rays of fixed codimension $m$ (which are also the unique ones in codimension $m$ if $m$ is a prime) are the ones generated by the following Brill-Noether varieties:
\begin{enumerate}[(i)]
\item \label{BNexa1} If $1\leq m\leq g/2$ and $C$ is a Brill-Noether general curve, then $C_{g-m+1}^1$ is a subvariety of $C_{g-m+1}$ of pure codimension $m$ (irreducible if and only if $m<g/2$ or $g=2$), of class
$$[C_{g-m+1}^1]=\frac{\theta^m}{m!}-\frac{x\theta^{m-1}}{(m-1)!}.$$
\item \label{BNexa2} If $1\leq m\leq g-1$ (and $C$ is any curve) then $C_{g+m-1}^m$ is a codimension $m$ irreducible subvariety of $C_{g+m-1}$ of class (see Example \ref{BN=subor})
$$[C_{g+m-1}^m]=\sum_{\alpha=0}^m (-1)^{\alpha} \frac{x^{\alpha}\theta^{m-\alpha}}{(m-\alpha)!}.$$
\end{enumerate}
If $m=1$ in each of the above special cases, we get that $[C_{g}^1]=\theta-x\in N^1(C_g)$ generates an extremal ray of $\operatorname{Pseff}^1(C_{g})$, thus extending \cite[Rmk 1 after Thm.\ 5]{Kou} from very general curves to arbitrary curves.
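Explicitly, specializing the two formulas above at $m=1$ (where the second one requires $d=g$) gives
$$[C_{g}^1]=\frac{\theta}{1!}-\frac{x\theta^{0}}{0!}=\theta-x=\sum_{\alpha=0}^1 (-1)^{\alpha} \frac{x^{\alpha}\theta^{1-\alpha}}{(1-\alpha)!}.$$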
\end{example}
It is natural to ask if BN rays are perfect, i.e. if they are edges, in the entire or tautological pseudoeffective cone. As we will see, a way to prove this for the tautological pseudoeffective cone would be to apply Proposition \ref{P:AJtheta}\eqref{P:AJtheta2}. On the other hand we will show in Remark \ref{R:BNperf} that the unique BN rays $\operatorname{cone}([C_d^r])$ to which we can apply Proposition \ref{P:AJtheta}\eqref{P:AJtheta2}, and hence deduce that they are perfect rays, are those with $\rho=\rho(g,r,d)=0$ (when we will actually see in Remark \ref{R:compaAJ} that they coincide with the subordinate edge) and those with $d=g+r-1$ (when we will actually see in Theorem \ref{T:BNfaces} that they coincide with the BN edge in dimension $g-1$).
\subsection{The $\theta$-filtration}\label{SS:theta}
The tautological Abel-Jacobi faces can be described in terms of a multiplicative filtration of the tautological ring $R^*(C_d)$, determined by the class $\theta$.
\begin{defi}[The $\theta$-filtration]\label{thetalin}
For any $0\leq m \leq d$ and any $0\leq i \leq g+1$, let $\theta^{\geq i, m}$ (or simply $\theta^{\geq i}$ if $m$ is clear from the context) be the smallest linear subspace of $R^m(C_d)=R_{d-m}(C_d)$
containing the monomials $\{\theta^i x^{m-i}, \theta^{i+1}x^{m-i-1}, \ldots, \theta^{m} \}$,
with the obvious convention that $\theta^{\geq i, m}=\{0\}$ if $i>m$.
\end{defi}
The subspaces $\{\theta^{\geq i, m}\}$ form an exhaustive decreasing multiplicative filtration of the tautological ring $R^*(C_d)$, in the sense that
$$\{0\}=\theta^{\geq g+1, m}\subseteq \cdots \subseteq \theta^{\geq i+1, m}\subseteq \theta^{\geq i, m}\subseteq \cdots \subseteq \theta^{\geq 0,m}=R^m(C_d) \hspace{0.3cm} \text{ and } \hspace{0.3cm} \theta^{\geq i, m}\cdot \theta^{\geq j, l}\subseteq \theta^{\geq i+j, m+l}.$$
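For instance, for $m=2$ (and $d\geq 2$) the filtration of $R^2(C_d)$ reads
$$\{0\}\subseteq \langle \theta^2\rangle=\theta^{\geq 2,2}\subseteq \langle \theta x, \theta^2\rangle=\theta^{\geq 1,2}\subseteq \langle x^2, \theta x, \theta^2\rangle=\theta^{\geq 0,2}=R^2(C_d),$$
and multiplicativity gives e.g. $\theta^{\geq 1, 2}\cdot \theta^{\geq 1, 2}\subseteq \theta^{\geq 2, 4}$.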
The properties of the $\theta$-filtration are collected in the following result.
\begin{prop}\label{P:theta}
Let $0\leq m \leq d$ and $0\leq i \leq g+1$. Set as usual $r(m):=\min\{m, d-m, g\}$. Then the following properties hold true.
\begin{enumerate}[(i)]
\item \label{P:theta1}
If $i\leq m+1$
then the codimension of $\theta^{\geq i, m}$ inside $R^m(C_d)$ is equal to
$$
\codim \theta^{\geq i, m}=
\begin{cases}
i & \text{ if } r(m)=m \: \text{ or } g, \\
\max\{i-g+d-m,0\} & \text{ if } r(m)=d-m\leq g\leq m,\\
\max\{i-2m+d,0\} & \text{ if } r(m)=d-m\leq m\leq g.
\end{cases}
$$
Moreover, a basis of $\theta^{\geq i, m}$ is given by
$$\begin{sis}
\{\theta^ix^{m-i}, \ldots, \theta^mx^0\} \quad & \text{ if } r(m)=m \:\: \text{ and } \: 0\leq i \leq m+1 , \\
\{\theta^ix^{m-i}, \ldots, \theta^g x^{m-g}\} \quad & \text{ if } r(m)=g \: \:\text{ and } \: 0\leq i \leq g+1, \\
\{\theta^ix^{m-i}, \ldots, \theta^g x^{m-g}\} \quad & \text{ if } r(m)=d-m\leq g\leq m \:\: \text{ and } \: g-(d-m)\leq i \leq g+1, \\
\{\theta^ix^{m-i}, \ldots, \theta^mx^0\} \quad & \text{ if } r(m)=d-m\leq m\leq g \: \:\text{ and } \: 2m-d\leq i \leq m+1. \\
\end{sis}$$
\item \label{P:theta2}
Under the perfect pairing between $R^m(C_d)$ and $R^{d-m}(C_d)$ given by the intersection product (see Proposition \ref{basetaut}\eqref{basetaut5}), we have that
$$(\theta^{\geq i, m})^{\perp} \supseteq\theta^{\geq g+1-i, d-m},$$
with equality if and only if one of the following assumptions holds:
\begin{itemize}
\item $g\leq \max\{m,d-m\}$,
\item $i=g+1$ or $m\leq d-m\leq g$ and $g-(d-m)+m+1\leq i \leq g+1$, in which case the left and right hand side are both equal to $R^{d-m}(C_d)$,
\item $i=0$ or $d-m\leq m\leq g$ and $0 \leq i\leq 2m-d$, in which case the left and right hand side are both equal to zero.
\end{itemize}
\end{enumerate}
\end{prop}
\begin{proof}
Part \eqref{P:theta1} is obvious if either $r(m)=m$ or $r(m)=g$, since in the former case the elements $\{\theta^0x^m, \ldots,
\theta^m x^0\}$ form a basis of $R^m(C_d)$ while in the latter case the elements $\{\theta^0x^m, \ldots, \theta^gx^{m-g}\}$ form a basis of $R^m(C_d)$ by Proposition \ref{basetaut}\eqref{basetaut4}. On the other hand, if $r(m)=d-m$ then any subset of $(d-m+1)$ elements of $\{\theta^{0}x^{m}, \ldots, \theta^{\min\{g, m\}}x^{m-\min\{g,m\}}\}$ forms a basis of $R^m(C_d)$ by Proposition \ref{basetaut}\eqref{basetaut4}. This easily implies \eqref{P:theta1} for $r(m)=d-m$.
Part \eqref{P:theta2}: the inclusion
\begin{equation*}
(\theta^{\geq i, m})^{\perp} \supseteq \theta^{\geq g+1-i, d-m}
\end{equation*}
follows from the relation $\theta^{g+1}=0$. We conclude with a straightforward comparison (left to the reader) of the
codimensions of $(\theta^{\geq i, m})^{\perp}$ and of $\theta^{\geq g+1-i, d-m}$, using \eqref{P:theta1}.
\end{proof}
The link between tautological Abel-Jacobi faces and the $\theta$-filtration is clarified in the following
\begin{prop}\label{P:AJtheta}
Let $0\leq n \leq d$ and $1+\max\{0,n-g\}\leq r\leq n$.
\begin{enumerate}[(i)]
\item \label{P:AJtheta1} We have an equality of subcones of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$
\begin{equation}\label{E:AJtheta}
\AJt_n^r(C_d)= (\theta^{\geq n+1-r,n})^\perp \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d).
\end{equation}
In particular, $\dim \AJt_n^r(C_d)\leq \dim (\theta^{\geq n+1-r,n})^\perp=\codim (\theta^{\geq n+1-r,n})$.
\item \label{P:AJtheta2}
If $\dim \AJt_n^r(C_d)=\dim (\theta^{\geq n+1-r,n})^\perp$ then $\AJt_n^r(C_d)$ is a perfect face of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ whose (perfect) dual face is $\theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)$.
\end{enumerate}
\end{prop}
When the assumption of \eqref{P:AJtheta2} holds true, the perfect face $\theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)$ of ${^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)$ will be called \emph{nef $\theta$-face}.
A nef $\theta$-face of dimension one will be called \emph{nef $\theta$-edge}, and using Proposition
\ref{P:theta}\eqref{P:theta1} it is easy to see that a nef $\theta$-edge is equal to
$$\theta^{\geq \min\{n,g\},n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)=\operatorname{cone}(\theta^{\min\{n,g\}}x^{n-\min\{n,g\}}).$$
\begin{proof}
\eqref{P:AJtheta1}: note that, since $\theta$ is the pull-back via $\alpha_d:C_d\to \operatorname{Pic}^d(C)$ of an ample line bundle on $\operatorname{Pic}^d(C)$, from Definition \ref{contract} it follows that for any $\beta\in \operatorname{Pseff}_n(C_d)$ we have
\begin{equation}\label{conttheta}
\contr_{\alpha_d}(\beta)\geq r \Leftrightarrow \beta\cdot \theta^{n+1-r}=0.
\end{equation}
Therefore, since $\AJt_n^r(C_d)$ is the conic hull of all elements $\beta \in {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ having contractibility index at least $r$, formula \eqref{conttheta} implies that $\AJt_n^r(C_d)\subseteq (\theta^{\geq n+1-r,n})^\perp \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
In order to prove the reverse inclusion, by contradiction assume that there exists an element $\beta\in {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ such that $\beta\in (\theta^{\geq n+1-r,n})^\perp$ and $\beta\cdot \theta^{n+1-r}\neq 0$. The element $\beta\cdot \theta^{n+1-r}$ lies in $R^{d+1-r}(C_d)$ and, since it is non-zero (which implies that $r\geq 1$), applying Proposition \ref{basetaut}\eqref{basetaut5} we find an element $\gamma\in R^{r-1}(C_d)$ such that $\beta\cdot \theta^{n+1-r} \cdot \gamma\neq 0$. But then, since $ \theta^{n+1-r} \cdot \gamma\in \theta^{\geq n+1-r,n}$, we
find that $\beta \not\in (\theta^{\geq n+1-r,n})^\perp$, which is the desired contradiction.
Part \eqref{P:AJtheta2}: if $\dim \AJt_n^r(C_d)=\dim (\theta^{\geq n+1-r,n})^\perp$ then $\langle \AJt_n^r(C_d)\rangle=(\theta^{\geq n+1-r,n})^\perp$, which implies that the dual face of $\AJt_n^r(C_d)$ is equal to
$$((\theta^{\geq n+1-r,n})^\perp)^\perp \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)= \theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d).$$
Observe that $\AJt_n^r(C_d)$ is a full cone in $(\theta^{\geq n+1-r,n})^\perp$ by assumption, while $\theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)$ is a full cone in $\theta^{\geq n+1-r,n}$ since $\theta$ is nef (hence limit of ample classes) and $x$ is ample.
Therefore, we can apply Remark \ref{perf} in order to conclude that $\AJt_n^r(C_d)$ and $\theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)$
are perfect dual faces.
\end{proof}
\begin{remark}
The equality \eqref{E:AJtheta} is true also for the (non-tautological) Abel-Jacobi faces with the same proof (taking orthogonals in $N_n(C_d)$).
\end{remark}
Note that Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} gives a criterion to find perfect faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$. Let us see how we could apply this criterion to find facets (which are always perfect) and edges, i.e. one-dimensional perfect faces.
The dimension of $(\theta^{\geq n+1-r,n})^\perp\subseteq R_n(C_d)$, which
is equal to the codimension of $\theta^{\geq n+1-r,n}\subseteq R^n(C_d)$, can be computed (in the non-trivial range $n+1-r\leq g$) using Proposition \ref{P:theta}\eqref{P:theta1} and it is equal to:
\begin{equation}\label{E:exp-dim}
\dim (\theta^{\geq n+1-r,n})^\perp=\codim \theta^{\geq n+1-r,n}=
\begin{cases}
n+1-r & \text{ if either } r(n)=n \text{ or } r(n)=g, \\
\max\{d-g+1-r, 0\} & \text{ if } d-n\leq g\leq n, \\
\max\{d-n+1-r,0\} & \text { if } d-n\leq n\leq g.
\end{cases}
\end{equation}
Therefore, we find that
$$\codim (\theta^{\geq n+1-r,n})^\perp=1 \Leftrightarrow \dim (\theta^{\geq n+1-r,n})^\perp=r(n)\Leftrightarrow
\begin{cases}
r=1 & \text{ if } n\leq g, \\
r=n+1-g & \text{ if } g\leq n.
\end{cases}
$$
Let us now examine when, in each of the above two cases, we indeed get a tautological Abel-Jacobi facet.
\begin{prop}\label{P:facet}
\noindent
\begin{enumerate}[(i)]
\item \label{P:facet1} If $g\leq n$ then $\AJt_n^{n+1-g}(C_d)$ is a facet of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
\item \label{P:facet2} If $n\leq g$ then $\AJt^1_n(C_d)$ is a facet of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ under one of the following assumptions:
\begin{enumerate}[(a)]
\item \label{P:faceta} $\operatorname{gon}_n(C)\leq d$ (which is always satisfied if $g\leq d-n$);
\item \label{P:facetb} $n=g-1$;
\item \label{P:facetc} $g\leq d$ and $C$ is very general over an uncountable base field $k$.
\end{enumerate}
\end{enumerate}
\end{prop}
Note that: \eqref{P:facet1} (and \eqref{P:faceta} for $g\leq d-n$) is a special case of Theorem \ref{thetaPseff}, \eqref{P:faceta} is a special case of Theorem \ref{thetaNef}, and
\eqref{P:facetb} for $d-n\leq g-1$ (otherwise it belongs to case \eqref{P:faceta}) is a special case of Theorem \ref{T:BNfaces}.
\begin{proof}
As observed above, parts \eqref{P:facet1}, \eqref{P:faceta} and \eqref{P:facetb} are special cases of theorems that will be proved later.
Let us prove part \eqref{P:facetc}. The assumption that $g\leq d$ implies that the Abel-Jacobi morphism $\alpha_d$ is surjective. Hence, using that $k$ is uncountable (and algebraically closed) and that the fibers of $\alpha_d$ are projective spaces, we can apply \cite[Thm. 1.2]{fl3} in order to conclude that $\langle \AJ^1_n(C_d)\rangle=\ker ((\alpha_d)_*:N_n(C_d)\to N_n(\operatorname{Pic}^d(C)))$. Since $C$ is very general, we have that $N_n(C_d)=R_n(C_d)$ (which also implies that $\AJ^1_n(C_d)=\AJt^1_n(C_d)$) and $N_n(\operatorname{Pic}^d(C))=\langle [\Theta]^{g-n}\rangle$ (see \cite[Fact 2.6]{BKLV} and Ben Moonen's appendix to \cite{BKLV}).
Therefore, the kernel of $(\alpha_d)_*:N_n(C_d)\to N_n(\operatorname{Pic}^d(C))$ is isomorphic to the linear space of all elements $z\in R_n(C_d)$ such that $(\alpha_d)_*(z)\cdot [\Theta]^n=z\cdot \theta^n=0$,
that is to $(\theta^{\geq n, n})^{\perp}$. Putting everything together, we deduce that $\langle \AJt^1_n(C_d)\rangle=(\theta^{\geq n, n})^{\perp}$, which implies that $ \AJt^1_n(C_d)$ is a facet of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ (since $(\theta^{\geq n, n})^{\perp}$ has codimension one in $R_n(C_d)$ as observed above).
\end{proof}
Let us now discuss when Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} can be used to find edges of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
Using \eqref{E:exp-dim}, we find that
$$\dim (\theta^{\geq n+1-r,n})^\perp=1 \Longleftrightarrow
\begin{cases}
r=n & \text{ if either } r(n)=n \text{ or } r(n)=g, \\
r=d-g & \text{ if } d-n\leq g\leq n, \\
r=d-n & \text { if } d-n\leq n\leq g.
\end{cases}
$$
Let us now check, in each of the above cases, when we can apply the criterion of Proposition \ref{P:nonzero} to conclude that $\AJt_n^r(C_d)$ is non-zero, and hence that it is an edge of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
We will distinguish the following cases (assuming that $1\leq n\leq d-1$ to avoid trivial faces):
\begin{enumerate}[(A)]
\item If $g\leq d-n$ then clearly $C_d^n=C_d$ and we deduce that $\AJt_n^n(C_d)$ is non-zero;
\item If $d-n\leq g\leq n$ then clearly $C_d^{d-g}=C_d$ and we deduce that $\AJt_n^{d-g}(C_d)$ is non-zero;
\item If $n\leq d-n<g$ (which implies that $d\leq 2g-2$) then $\AJt_n^n(C_d)$ is non-zero if $C_d^n$ has some tautological irreducible component of maximal dimension and if
$n\leq \dim C_d^n=n+\dim W_d^n(C)$, which is equivalent to the non-emptiness of $W_d^n(C)$, or in other words to $d\geq \operatorname{gon}_n(C)$.
\item If $d-n\leq n<g$ (which implies that $d\leq 2g-2$) then $\AJt_n^{d-n}(C_d)$ is non-zero if $C_d^n$ has some tautological irreducible component of maximal dimension and if
$$n\leq \dim C_d^{d-n}=d-n+\dim W_d^{d-n}(C) \Longleftrightarrow \dim W_d^{d-n}(C)\geq 2n-d=d-2(d-n).$$
By Martens' theorem (see \cite[Chap. IV, Thm. 5.1]{GAC1}), this can happen only if either $d-n=d-g+1$, i.e. $n=g-1$,
or $C$ is hyperelliptic.
\end{enumerate}
We will see in the next sections that indeed in all the above cases we get edges of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$: cases (A) and (B) will be analyzed in Section \ref{S:theta} (and indeed Case (A) also follows from Section \ref{S:Neftheta}), case (C) in Section \ref{S:Neftheta}, case (D) with $n=g-1$
in Section \ref{S:BN} and case (D) for $C$ hyperelliptic in Section \ref{S:hyper}.
Quite remarkably, we will see that in all the above cases the non-trivial tautological Abel-Jacobi faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ form a \emph{maximal chain of perfect non-trivial faces}, i.e. a chain of perfect non-trivial faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ whose dimensions start from one and increase by one at each step until getting to the dimension of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ minus one.
\begin{remark}\label{R:BNperf}
The unique BN rays $\operatorname{cone}([C_d^r])$ to which we can apply Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} are those with $\rho=\rho(g,r,d)=0$ or with $d=g+r-1$.
Indeed, since $\AJt_{r+\rho}^r(C_d)=\operatorname{cone}([C_d^r])$ has dimension one, the hypothesis of Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} holds true if and only if
$$1=\dim(\theta^{\geq \rho+1, r+\rho})^\perp=\codim \theta^{\geq \rho+1, r+\rho}.$$
Now observe that $\displaystyle d=\frac{\rho+rg}{r+1}+r$ and the hypothesis on $d$ in Theorem \ref{AJtaut} translates into $0\leq \rho\leq g-r-1$. The dimension $n=r+\rho$ and the codimension $m=d-n$ of $C_d^r$ satisfy the following easily checked inequalities
$$
\begin{sis}
& n<g, \\
&m<g,\\
& n\geq m \Longleftrightarrow \frac{r}{2r+1}(g-r-1) \leq \rho.
\end{sis}
$$
Using this, we can compute the codimension of $\theta^{\geq \rho+1, r+\rho}$ using Proposition \ref{P:theta}\eqref{P:theta1}:
$$\codim \theta^{\geq \rho+1, r+\rho}=
\begin{cases}
\rho+1 & \text{ if } \rho\leq \frac{r}{2r+1}(g-r-1), \\
d-2r-\rho+1=r(g-d+r-1)+1 & \text{ if } \frac{r}{2r+1}(g-r-1) \leq \rho.
\end{cases}
$$
Hence we see that $\codim \theta^{\geq \rho+1, r+\rho}=1$ if either $\rho=0$ or $d=g+r-1$.
\end{remark}
\section{The $\theta$-faces}\label{S:theta}
In this section, we are going to describe the tautological Abel-Jacobi faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ under the assumption that
$g \le \max\{n,d-n\}$. Note that this assumption is always satisfied if $d>2g-2$ and it is never satisfied if $d<g$.
Let us start with the following result that gives a lower bound on the dimension of the tautological Abel-Jacobi faces.
\begin{lemma}\label{L:lowbound}
Let $0\leq n \leq d$ and $1+\max\{0,n-g\}\leq r\leq n$.
The cone
$$\theta^{\geq g-n+r,d-n} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)\subset \theta^{\geq g-n+r,d-n}\subseteq R_{n}(C_d)$$
is contained in $\AJt_n^r(C_d)$ and it is a full-dimensional cone in $\theta^{\geq g-n+r,d-n}$.
In particular, we have that
$$\dim \AJt_n^r(C_d)\geq \dim \theta^{\geq g-n+r,d-n}.$$
\end{lemma}
\begin{proof}
Since $\theta^{\geq g-n+r,d-n}\subseteq (\theta^{\geq n+1-r,n})^{\perp}$ by Proposition \ref{P:theta}\eqref{P:theta2}, we get that the cone $\theta^{\geq g-n+r,d-n} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ is contained in
$\AJt^r_n(C_d)$ by \eqref{E:AJtheta}.
By Definition \ref{thetalin}, the linear subspace $\theta^{\geq g-n+r,d-n}\subseteq R_{n}(C_d)$ is generated by monomials in $x$ and $\theta$. Since $\theta$ is nef (hence limit of ample classes) and $x$ is ample we have that each monomial in $x$ and $\theta$ is a pseudoeffective class. This implies that $\theta^{\geq g-n+r,d-n} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ is a full-dimensional cone in $\theta^{\geq g-n+r,d-n}$.
\end{proof}
Using the above Lemma, we can now prove the main result of this section.
\begin{thm}\label{thetaPseff}
Let $0 \le n \le d$ and assume that $g \le \max\{n,d-n\}$.
Then the Abel-Jacobi face $\AJt_n^r(C_d)$ is equal to $\theta^{\geq g-n+r, d-n} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$, and it is non-trivial if and only if $1 + \max\{0,n-g\}\le r \le \min\{n,d-g\}$, in which case it is a perfect face of dimension $\min\{n,d-g\}-r+1$ and codimension $r-\max\{n-g,0\}$.
Hence, the following chain
\begin{equation}\label{E:chain-th}
\theta^{\geq {\min\{g,d-n\}}}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)\subset \ldots \subset \theta^{\geq g+1-\min\{g,n\}}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)\subset {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)
\end{equation}
is a maximal chain of perfect non-trivial faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
The dual chain of \eqref{E:chain-th} is equal to
\begin{equation}\label{E:nef-th}
\theta^{\geq {\min\{g,n\}}}\cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)\subset \ldots \subset \theta^{\geq g+1-\min\{g,d-n\}}\cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)\subset {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d).
\end{equation}
\end{thm}
The faces in \eqref{E:chain-th} will be called \emph{pseff $\theta$-faces}, while the faces in \eqref{E:nef-th} are the nef $\theta$-faces introduced after Proposition \ref{P:AJtheta}.
Note that
$$\operatorname{cone}(\theta^{\min\{g,d-n\}}x^{d-n-\min\{g,d-n\}})=\theta^{\geq {\min\{g,d-n\}}}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)=\theta^{\geq {\min\{g,d-n\}}}\cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^{d-n}(C_d)$$
is an edge (i.e. perfect extremal ray) of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$, which we will call the \emph{pseff $\theta$-edge}, and it coincides with the nef $\theta$-edge.
On the other hand, since the class $x$ is ample, the other monomials in $x$ and $\theta$ cannot generate an extremal ray of either ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ or ${^{\rm t} \hskip -.05cm \operatorname{Nef}}^{d-n}(C_d)$.
\begin{proof}
Fix an integer $r$ such that $1+\max\{0,n-g\}\leq r\leq n$.
Using the assumption $g\leq \max\{n,d-n\}$, Proposition \ref{P:theta}\eqref{P:theta2} implies that
$$(\theta^{\geq n+1-r, n})^\perp=\theta^{\geq g-n+r, d-n}\subseteq R_n(C_d).$$
This, together with Proposition \ref{P:AJtheta}\eqref{P:AJtheta1} and Lemma \ref{L:lowbound}, gives the equality of cones
$$\AJt_n^r(C_d)=\theta^{\geq g-n+r, d-n} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$$
and the fact that
$$\dim \AJt_n^r(C_d)=\dim (\theta^{\geq n+1-r, n})^\perp.$$
Hence we can apply Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} in order to conclude that $\AJt_n^r(C_d)$ is a perfect face of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ whose dual face is equal to $\theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)$.
Finally, Proposition \ref{P:theta}\eqref{P:theta1} gives that the linear subspace $(\theta^{\geq n+1-r, n})^\perp\subseteq R_n(C_d) $ is non-trivial if and only if $1 + \max\{0,n-g\}\leq r\leq \min\{n,d-g\}$, in which case it has dimension $\min\{n,d-g\}-r+1$.
\end{proof}
\begin{remark}\label{sharp1}
Notice that, outside of the range $g \le \max\{n,d-n\}$, the cones $\theta^{\geq i} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ may fail to be faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$. To see this, let $m$ be odd with $1\leq m \leq g-1$ and let $d = g+m-1$. Now, by \eqref{E:clBN}, the coefficient of $x^m$ in $[C_{g+m-1}^m]$ is $(-1)^m < 0$ while, for any $m$-codimensional diagonal, the same coefficient is positive by \cite[Prop. 3.1]{BKLV}. Hence, in ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_{g+m-1})$, the class $[C_{g+m-1}^m]$ and the $m$-codimensional diagonals lie in different half-spaces with respect to the hyperplane $\theta^{\geq 1}$, which then implies that $\theta^{\geq 1}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_{g+m-1})$ is not a face of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_{g+m-1})$.
\end{remark}
Let us finish this section by giving upper and lower bounds for the dimension of the tautological Abel-Jacobi faces in the numerical ranges not included in the above Theorem \ref{thetaPseff}.
\begin{prop}\label{P:uplow}
Assume that
$g \ge \max\{n,d-n\}$. Then
\begin{enumerate}[(i)]
\item \label{P:uplow1} $\AJt_n^r(C_d)$ is trivial unless $1\leq r\leq \min\{n, d-n\}$.
\item \label{P:uplow2} If $1\leq r\leq \min\{n, d-n\}$ then
\begin{equation}\label{E:uplow}
\max\{d+1-g-r,0\}\le \dim \AJt_n^r(C_d)\leq r(n)-r+1.
\end{equation}
In particular, if $1\leq r \leq d-g$ (which forces $g+1\le d$) then $\AJt_n^r(C_d)$ is non-trivial.
\end{enumerate}
\end{prop}
\begin{proof}
Observe that $\AJt_n^r(C_d)$ is defined only for $1=1+\max\{0,n-g\}\leq r \leq n$. Under this assumption, Proposition \ref{P:AJtheta}\eqref{P:AJtheta1} and Lemma \ref{L:lowbound} give that
\begin{equation}\label{E:inq1}
\dim \theta^{\geq g-n+r,d-n}\leq \dim \AJt_n^r(C_d)\leq \codim \theta^{\geq n+1-r,n}.
\end{equation}
Using the assumption
$g \ge \max\{n,d-n\}$ and Proposition \ref{P:theta}\eqref{P:theta1}, we compute
\begin{equation}\label{E:inq2}
\codim \theta^{\geq n+1-r,n}=
\begin{cases} n+1-r & \text{ if } n \leq d-n \leq g, \cr
\max\{d-n-r+1,0\} & \text{ if } d-n\leq n \leq g. \cr
\end{cases}
\end{equation}
Therefore if $d-n<r$ (which can only happen in the second case) then $\codim \theta^{\geq n+1-r,n}=0$, while if $r\leq d-n$ then
$\codim \theta^{\geq n+1-r,n}=r(n)-r+1$. Using the upper bound in \eqref{E:inq1}, this implies that
$\AJt_n^r(C_d)=(0)$ if $d-n<r$ (which proves \eqref{P:uplow1}) and that
$\dim \AJt_n^r(C_d)\leq r(n)-r+1$ if $r\leq d-n$.
On the other hand, using again the assumption
$g \ge \max\{n,d-n\}$ and Proposition \ref{P:theta}\eqref{P:theta1}, we compute
\begin{equation}\label{E:inq4}
\dim \theta^{\geq g-n+r,d-n}=
\begin{cases}
d+1-g-r & \text{ if } r\leq d-g, \\
0 & \text{ otherwise. }
\end{cases}
\end{equation}
If we plug this formula into the lower bound in \eqref{E:inq1}, we get the lower bound of part \eqref{P:uplow2}, and this finishes the proof.
\end{proof}
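For instance, if $g=4$, $d=5$, $n=3$ and $r=1$ (so that $g\geq \max\{n,d-n\}$, $r\leq \min\{n,d-n\}=2$ and $r(n)=d-n=2$), the bounds \eqref{E:uplow} read
$$1=\max\{d+1-g-r,0\}\leq \dim \AJt_3^1(C_5)\leq r(n)-r+1=2,$$
and indeed $\AJt_3^1(C_5)$ is non-trivial since $r=1\leq d-g$.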
\begin{remark}\label{R:uplow}
Note that the upper bound and lower bound in the above Proposition \ref{P:uplow} (which are always different except in the
special cases $n=g$ or $d-n=g$, which we exclude in the discussion that follows) can be strict.
For example:
\begin{itemize}
\item If $d<\operatorname{gon}(C)$ (which implies that $d\leq \frac{g+1}{2}$ by Lemma \ref{L:gonindex}) then Remark \ref{nontr} gives that $\AJt_n^r(C_d)=\{0\}$ for any $1\leq r\leq \min\{n, d-n\}$, which shows that the lower bound
in \eqref{E:uplow} is (trivially) achieved but not the upper bound.
\item The BN rays of Theorem \ref{AJtaut} do not achieve the lower bound in \eqref{E:uplow}, which is zero since $d-g+1\leq r$, while they achieve the upper bound only if $\rho(g,r,d)=0$ or $d=g+r-1$ (see Remark \ref{R:BNperf}).
\item In each of the cases specified in Proposition \ref{P:facet}\eqref{P:facet2}, $\AJt_n^1(C_d)$ is a facet, hence its dimension achieves the upper bound in \eqref{E:uplow} but not the lower bound.
\item We will show in the sequel that the upper bound in \eqref{E:uplow} is achieved for any $1\leq r\leq \min\{n, d-n\}$ if either $\operatorname{gon}_n(C)\leq d\leq n+g$ (see Theorem \ref{thetaNef}), or if $n=g-1$ and $g\leq d \leq 2g-2$ (see Theorem \ref{T:BNfaces}), or if
$g > \max\{n,d-n\}$ and $C$ is hyperelliptic (see Theorem \ref{T:hyper}); and in each of these cases, the lower bound is not achieved.
\end{itemize}
\end{remark}
\section{Subordinate faces}\label{S:Neftheta}
In this section, we are going to describe some of the Abel-Jacobi faces using subordinate varieties.
Recall that the \emph{subordinate} variety of a linear system $\l$ is defined (set theoretically) as
\begin{equation}\label{D:sublocus}
\Gamma_d(\l):=\{D\in C_d: \: D\leq E \text{ for some }E\in \l\}.
\end{equation}
There is a natural scheme structure on $\Gamma_d(\l)$ (indeed $\Gamma_d(\l)$ is a determinantal variety) and the class of $\Gamma_d(\l)$ is computed as follows (see
\cite[Chap. VIII, \S 3]{GAC1} and \cite[\S 1]{kl2}; the proof works over any algebraically closed field).
\begin{fact}\label{subcyc}
Let $\l$ be a $\mathfrak{g}_l^s$ on $C$ and fix an integer $d$ such that $l\geq d\geq s$. Then $\Gamma_d(\l)$ is of pure dimension $s$ and it has class equal to
$$[\Gamma_d(\l)]=\sum_{k=0}^{d-s}\binom{l-g-s}{k} \frac{x^k\theta^{d-s-k}}{(d-s-k)!}\in R_s(C_d). $$
\end{fact}
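For instance, if $\l$ is a pencil ($s=1$) and $d=2$, then $\Gamma_2(\l)$ is a curve in $C_2$ whose class, keeping the two summands $k=0,1$ of the formula above, is
$$[\Gamma_2(\l)]=\theta+(l-g-1)x \in R_1(C_2).$$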
Using subordinate varieties, we construct subvarieties of $C_d$ that are suitably contracted by the
Abel-Jacobi map $\alpha_d:C_d\to \operatorname{Pic}^d(C)$.
\begin{prop}\label{cyclecontr}
Let $1\leq n\leq d$ with the property that $d\geq \operatorname{gon}_n(C)$. Fix a linear system $\l$ of degree $d$ and dimension $n$ on $C$. For any $0 \leq i \leq \min\{n, g\}$, consider the embedding $\psi_i : C_{d-i} \to C_d$ defined by $\psi_i(D)=D+ip_0$, where $p_0$ is a fixed point
of $C$. Then the subvariety
$$\Gamma_i:=\psi_i(\Gamma_{d-i}(\l)) \subseteq C_d$$
has pure dimension $n$, its class is tautological and equal to
\begin{equation}\label{E:classT}
[\Gamma_i]=\sum_{k=0}^{d-i-n}\binom{d-g-n}{k} \frac{x^{k+i}\theta^{d-i-n-k}}{(d-i-n-k)!},
\end{equation}
and its image $\alpha_d(\Gamma_i)$ in $\operatorname{Pic}^d(C)$ is irreducible of dimension $i$.
\end{prop}
Note that the subvarieties $\Gamma_i$ depend on the choices of the linear system $\l$ and of the base point $p_0$, but their classes $[\Gamma_i]$ are independent of these choices.
\begin{proof}
Note that $\min\{n,g\}\leq d-n$ by Remark \ref{R:r-gon}, whence we have that $i \le d-n$. Fact \ref{subcyc} implies that $\Gamma_{d-i}(\l)$ is pure $n$-dimensional, whence so is $\Gamma_i$. Moreover since the image of $\psi_i$ has class equal to $x^i$ and the
pull-back map $\psi_i^*$ preserves the classes $x$ and $\theta$, the class of $\Gamma_i$ is obtained by taking the class of $\Gamma_{d-i}(\l)$ in $R_n(C_{d-i})$ given by Fact \ref{subcyc}, interpreting it as a class in $R_{n+i}(C_d)$ and then multiplying it by $x^i$; in this way we get formula \eqref{E:classT}.
The linear system $\l$ is contained in a complete linear system $|L|$ for some $L\in \operatorname{Pic}^d(C)$. Consider the $i$-dimensional irreducible subvariety of $\operatorname{Pic}^d(C)$:
$$V_i:=\{L(-D+ip_0): \: D\in C_{i}\}.$$
We claim that $\alpha_d(\Gamma_i) = V_i$, which will conclude the proof. Indeed, if $\mathcal L \in \alpha_d(\Gamma_i)$ then there is $D' \in \Gamma_{d-i}(\l)$ such that $\mathcal L \cong \mathcal O_C(D'+ip_0)$. But there is also $E \in \l$ such that $E \ge D'$, whence, setting $D = E - D'$, we see that $D \in C_i$ and $\mathcal L \cong L(-D+ip_0) \in V_i$. Conversely, if $\mathcal L \in V_i$ then $\mathcal L \cong L(-D+ip_0)$ for some $D \in C_i$. Since $\dim \l = n \ge i$, there is $E \in \l$ such that $E \ge D$. Setting $D' = E - D$, we find that $D' \in C_{d-i}$ and $D' \le E$, so that $D' \in \Gamma_{d-i}(\l)$, $D'+ip_0 \in \Gamma_i$ and $\alpha_d(D'+ip_0) = \mathcal O_C(D'+ip_0) \cong \mathcal L$.
\end{proof}
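For example, for $n=1$ and $\l$ a $\mathfrak{g}_d^1$ (so that $d\geq \operatorname{gon}_1(C)$ and $\min\{n,g\}=1$), the two subvarieties are $\Gamma_0=\Gamma_d(\l)$, which $\alpha_d$ contracts to the point $V_0=\{L\}$, and $\Gamma_1=\psi_1(\Gamma_{d-1}(\l))$, whose image is the curve $V_1$; formula \eqref{E:classT} gives
$$[\Gamma_0]=\sum_{k=0}^{d-1}\binom{d-g-1}{k} \frac{x^{k}\theta^{d-1-k}}{(d-1-k)!}, \qquad [\Gamma_1]=\sum_{k=0}^{d-2}\binom{d-g-1}{k} \frac{x^{k+1}\theta^{d-2-k}}{(d-2-k)!}.$$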
The intersection of the classes $[\Gamma_i]$ with the monomials $\theta^j x^{n-j}$ is easily computed via the projection formula as follows.
\begin{lemma}\label{intSigma}
Let $Z$ be any pure $n$-dimensional subvariety of $C_d$ such that $\dim \alpha_d(Z)=i$. Then
$$[Z]\cdot \theta^j x^{n-j}=
\begin{cases}
0 & \text{ if }i< j,\\
>0 & \text{ if } i \geq j.
\end{cases}
$$
\end{lemma}
\begin{proof}
Observe that, since $[Z]\cdot \theta^j x^{n-j}\in N_0(C_d)\cong \mathbb{R}$, we have that $[Z]\cdot \theta^j x^{n-j}=(\alpha_d)_*([Z]\cdot \theta^j x^{n-j})\in N_0(\operatorname{Pic}^d(C))\cong \mathbb{R}$. In order to compute
the last quantity, we use the projection formula for the Abel-Jacobi map $\alpha_d$:
$$(\alpha_d)_*([Z]\cdot \theta^j x^{n-j})=(\alpha_d)_*([Z]\cdot x^{n-j})\cdot [\Theta]^j.$$
Since $x$ is an ample class on $C_d$, for each irreducible component $Z_k$ of $Z$, the class $[Z_k]\cdot x^{n-j}$ can be represented by a $j$-dimensional irreducible subvariety $W_k$ contained in $Z_k$ such that $\dim \alpha_d(W_k)= \min\{\dim \alpha_d(Z_k),j\}$. Passing to the pushforward, we get
$$(\alpha_d)_*([Z_k] \cdot x^{n-j}) = (\alpha_d)_*([W_k])=
\begin{cases}
0 & \text {if } \dim \alpha_d(Z_k) < j , \\
\deg ((\alpha_d)_{|W_k}) \cdot [\alpha_d(W_k)] & \text{if } \dim \alpha_d(Z_k) \geq j.
\end{cases}
$$
Since $\dim \alpha_d(Z)=i$ we get that $\dim \alpha_d(Z_k) \le i$ for every $k$ and there is a $k_0$ such that $\dim \alpha_d(Z_{k_0}) = i$.
We conclude by observing that, in the case $j\leq i$, we have that $[\alpha_d(W_{k_0})]\cdot [\Theta]^j>0$ because $\dim \alpha_d(W_{k_0}) = j$ and $\Theta$ is ample on $\operatorname{Pic}^d(C)$.
\end{proof}
\begin{cor}\label{C:Sigmaper}
Let $0\leq n\leq d$ be such that $d-n\geq \min\{n,g\}$ and let $\{Z_i\}_{i=0}^{\min\{n,g\}}$ be pure $n$-dimensional subvarieties of $C_d$ such that $\dim \alpha_d(Z_i)=i$. Then the classes $\{[Z_0],\ldots,[Z_{\min\{n,g\}}]\}$ are linearly independent in $N_n(C_d)$ and we have that
$$\langle [Z_0],\ldots,[Z_i] \rangle^{\perp}\cap R^n(C_d)= \theta^{\geq i+1, n}$$
has codimension $i+1$ in $R^n(C_d)$, for every $0\leq i \leq \min\{n,g\}$.
\end{cor}
\begin{proof}
The space $R^n(C_d)$ is freely generated by $\{\theta^0x^n,\ldots, \theta^{r(n)}x^{n-r(n)}\}$ by Proposition \ref{basetaut}\eqref{basetaut4}, where $r(n)=\min\{n,g\}$ because of the assumption on $d$.
Now Lemma \ref{intSigma} implies that
$$\langle [Z_0],\ldots,[Z_i] \rangle^{\perp}\cap R^n(C_d)= \theta^{\geq i+1, n} \quad \text{ for any } \quad 0\leq i\leq r(n).$$
The subspace $\theta^{\geq i+1, n}\subset R^n(C_d)$ has codimension $i+1$ by Proposition \ref{P:theta}\eqref{P:theta1}.
If we apply this result to $i=r(n)$ we deduce that the classes $\{[Z_0],\ldots,[Z_{\min\{n,g\}}]\}$ are linearly independent in $N_n(C_d)$ and this concludes the proof.
\end{proof}
Using the subvarieties in Proposition \ref{cyclecontr}, we can now describe tautological Abel-Jacobi faces under suitable numerical assumptions.
\begin{thm}\label{thetaNef}
Let $0 \le n \le d$ and $1+\max\{0,n-g\}\leq r \leq n$, and assume that $d \ge \operatorname{gon}_n(C)$.
For any $0\leq i \leq \min\{n,g\}$, consider the classes $[\Gamma_i]\in R_n(C_d)$ given by \eqref{E:classT} and set $$\Sigma_{i+1}:=\langle [\Gamma_0],\ldots,[\Gamma_{i}]\rangle \subset R_n(C_d).$$
Then $\AJt_n^r(C_d)$ is a non-trivial face: it is equal to $\Sigma_{n+1-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ and it is a perfect face of dimension $n+1-r$. Hence, the following chain
\begin{equation}\label{E:chain-sub}
\Sigma_1\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d) \subset \Sigma_2\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d) \subset \ldots \subset \Sigma_{\min\{n,g\}} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)\subset {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)
\end{equation}
is a maximal chain of perfect non-trivial faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
The dual chain of the chain in \eqref{E:chain-sub} is equal to
\begin{equation}\label{E:nef-sub}
\theta^{\ge \min\{n,g\}}\cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)\subset \theta^{\ge \min\{n,g\}-1} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d) \subset \ldots \subset \theta^{\ge 1} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d) \subset {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d).
\end{equation}
\end{thm}
The faces in \eqref{E:chain-sub} will be called \emph{subordinate faces}, while the faces in \eqref{E:nef-sub} are the nef $\theta$-faces introduced after Proposition \ref{P:AJtheta}.
Note that
$$\operatorname{cone}([\Gamma_0])=\Sigma_1\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$$
is an edge (i.e. a perfect extremal ray) of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$, which we call the \emph{subordinate edge}.
On the other hand, we do not expect that the classes $[\Gamma_i]$ with $0< i\leq \min\{n,g\}$ generate an extremal ray of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$. Using the fact that $x$ is ample, we can prove that they are not extremal, except possibly when $g>d-n\ge n$.
\begin{proof}
Consider the pure $n$-dimensional tautological subvarieties $\{\Gamma_0,\ldots,\Gamma_{\min\{n,g\}}\}$ of $C_d$ constructed in Proposition \ref{cyclecontr}.
Since $d-n\geq \min\{n,g\}$ (see Remark \ref{R:r-gon}), we can apply Corollary \ref{C:Sigmaper} and we get that $(\theta^{\geq n+1-r,n})^\perp=\Sigma_{n+1-r}$,
which combined with Proposition \ref{P:AJtheta}\eqref{P:AJtheta1}, gives that
$$\AJt_n^r(C_d)=\Sigma_{n+1-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d).$$
Since $[\Gamma_i]$ are effective classes, we get the following inclusions of cones
\begin{equation}\label{E:incl2}
\operatorname{cone}([\Gamma_0], \ldots, [\Gamma_{n-r}]) \subseteq\Sigma_{n+1-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d) \subset \Sigma_{n+1-r}.
\end{equation}
Since $\{[\Gamma_0], \ldots, [\Gamma_{n-r}]\}$ is a basis of $\Sigma_{n+1-r}$ by Corollary \ref{C:Sigmaper}, we infer from the inclusions \eqref{E:incl2} that $\Sigma_{n+1-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ is a full dimensional cone in $\Sigma_{n+1-r}$,
and hence it has dimension $n+1-r=\dim (\theta^{\geq n+1-r,n})^\perp$.
We can therefore apply Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} and get that $\AJt_n^r(C_d)$ is a perfect face of dimension $n+1-r$ whose dual face is equal to $\theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)$.
\end{proof}
\begin{remark}\label{sharpNef}
Let us compare Theorem \ref{thetaNef} with Theorem \ref{thetaPseff} for a given $n$. We are going to use that $\operatorname{gon}_n(C)\leq g+n$ with equality if and only if $n\geq g$, a fact that follows easily from Lemma \ref{L:gonindex}.
\begin{itemize}
\item If $d\geq n+g$ (which forces $d\geq \operatorname{gon}_n(C)$) then the two theorems coincide.
\item If $d-n<g\leq n$ then Theorem \ref{thetaPseff} applies while Theorem \ref{thetaNef} does not apply since $d<\operatorname{gon}_n(C)=g+n$ (using that $g\leq n$).
\item If $n<g$ and $\operatorname{gon}_n(C)\leq d<g+n$ then Theorem \ref{thetaNef} applies but Theorem \ref{thetaPseff} does not apply since $\max\{n,d-n\}<g$.
\item If $n<g$ and $d<\operatorname{gon}_n(C)$ then neither one of the theorems applies.
\end{itemize}
\end{remark}
\section{Brill-Noether faces in dimension $g-1$}\label{S:BN}
The aim of this section is to describe the tautological Abel-Jacobi faces of $C_d$ in dimension $g-1$.
We will assume throughout this section that $g\leq d$ (to avoid trivialities) and that $d\leq 2g-2$ since in the case $d>2g-2$ we have a complete description of the tautological Abel-Jacobi faces in Theorem \ref{thetaPseff}.
We start by using the Brill-Noether varieties in Example \ref{BN=subor} in order to construct subvarieties of $C_d$ of dimension $g-1$ that are suitably contracted by the Abel-Jacobi morphism $\alpha_d:C_d\to \operatorname{Pic}^d(C)$.
\begin{prop}\label{P:varU}
Let $d$ be such that $g\leq d\leq 2g-2$. For any $0 \leq i \leq d-g$, consider the embedding $\psi_i : C_{d-i} \to C_d$ defined by $\psi_i(D)=D+ip_0$, where $p_0$ is a fixed point of $C$. Then the subvariety
$$\Upsilon_i:=\psi_i(C_{d-i}^{d-g+1-i}) \subseteq C_d$$
is irreducible of dimension $g-1$, its class is tautological and equal to
\begin{equation}\label{E:classU}
[\Upsilon_i]=\sum_{\alpha=0}^{d-g+1-i}(-1)^{\alpha}\frac{x^{\alpha+i}\theta^{d-g+1-\alpha-i}}{(d-g+1-\alpha-i)!},
\end{equation}
and its image $\alpha_d(\Upsilon_i)$ in $\operatorname{Pic}^d(C)$ has dimension $2g-2-d+i$.
\end{prop}
Note that the subvarieties $\Upsilon_i$ depend on the choice of the base point $p_0$, but their classes $[\Upsilon_i]$ are independent of this choice.
\begin{proof}
Note that $C_{d-i}^{d-g+1-i}$ is an irreducible subvariety of $C_{d-i}$ of dimension $g-1$ by Example \ref{BN=subor}, whence $\Upsilon_i$ is an irreducible subvariety of $C_d$ of dimension $g-1$.
The class of $\Upsilon_i$ can be computed starting from \eqref{E:clBN} in the same way as formula \eqref{E:classT} is obtained in Proposition \ref{cyclecontr}.
Finally, by Fact \ref{BNclass}\eqref{BNclass0}, the dimension of $\alpha_{d-i}(C_{d-i}^{d-g+1-i})\subset \operatorname{Pic}^{d-i}(C)$ is equal to
$$\dim \alpha_{d-i}(C_{d-i}^{d-g+1-i})=\dim C_{d-i}^{d-g+1-i}-(d-g+1-i)=2g-2-d+i.$$
Since $ \alpha_d\circ \psi_i$ is obtained by composing $\alpha_{d-i}$ with the isomorphism
$$
\begin{aligned}
\operatorname{Pic}^{d-i}(C)&\longrightarrow \operatorname{Pic}^d(C) \\
L & \mapsto L(ip_0),
\end{aligned}
$$
we conclude that $\dim \alpha_d(\Upsilon_i)=\dim \alpha_{d-i}(C_{d-i}^{d-g+1-i})=2g-2-d+i.$
\end{proof}
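For example, for $i=d-g$ formula \eqref{E:classU} reduces to
$$[\Upsilon_{d-g}]=x^{d-g}\theta-x^{d-g+1},$$
which is the class of $\psi_{d-g}(C_{g}^{1})$, whose image under $\alpha_d$ has dimension $g-2$.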
Using the subvarieties in Proposition \ref{P:varU}, we can now describe tautological Abel-Jacobi faces in dimension $g-1$.
\begin{thm}\label{T:BNfaces}
Let $d$ be such that $g\leq d\leq 2g-2$.
For any $0 \leq i \leq d-g$, consider the classes $[\Upsilon_i]\in R_{g-1}(C_d)$ given by \eqref{E:classU} and set
$$\Omega_{i+1}:=\langle [\Upsilon_0],\ldots,[\Upsilon_{i}]\rangle \subset R_{g-1}(C_d).$$
Then $\AJt_{g-1}^r(C_d)$ is a non-trivial face if and only if $1\leq r \leq d-g+1$, in which case $\AJt_{g-1}^r(C_d)$ is equal to
$\Omega_{d-g+2-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)$ and it is a perfect face of dimension $d-g+2-r$.
Hence, the following chain
\begin{equation}\label{E:chain-BN}
\Omega_1\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d) \subset \Omega_2\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d) \subset \ldots \subset \Omega_{d-g+1} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)\subset {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)
\end{equation}
is a maximal chain of perfect non-trivial faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)$.
The dual chain of the chain in \eqref{E:chain-BN} is equal to
\begin{equation}\label{E:nef-BN}
\theta^{\ge g-1}\cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^{g-1}(C_d)\subset \theta^{\ge g-2} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^{g-1}(C_d) \subset \ldots \subset \theta^{\ge 2g-1-d} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^{g-1}(C_d) \subset {^{\rm t} \hskip -.05cm \operatorname{Nef}}^{g-1}(C_d).
\end{equation}
\end{thm}
The faces in \eqref{E:chain-BN} will be called \emph{BN(=Brill-Noether) faces in dimension $g-1$}, while the faces in \eqref{E:nef-BN} are the nef $\theta$-faces introduced after Proposition \ref{P:AJtheta}.
Note that
$$\operatorname{cone}([C_d^{d-g+1}])=\Omega_1\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)$$
is an edge (i.e. a perfect extremal ray) of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)$, which we call the \emph{BN edge in dimension $g-1$}.
On the other hand, since the class $x$ is ample, the classes $[\Upsilon_i]$ with $0< i\leq d-g$ cannot generate an extremal ray of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)$.
Note that from Proposition \ref{P:nonzero}\eqref{BNextr2} it follows that $\operatorname{cone}([C_d^{d-g+1}])$ is also an extremal ray of the entire (non-tautological) cone $\operatorname{Pseff}_{g-1}(C_d)$, although we do not know if it is an edge of the entire cone.
\begin{proof}
Using that $d-(g-1)\leq g-1$, Proposition \ref{P:theta}\eqref{P:theta1} gives that
\begin{equation}\label{E:dim-span}
\dim (\theta^{\geq g-r,g-1})^\perp=\codim \theta^{\geq g-r,g-1}=\max\{d-g+2-r,0\},
\end{equation}
which, together with Proposition \ref{P:AJtheta}\eqref{P:AJtheta1}, implies that $\AJt_{g-1}^r(C_d)$ is trivial unless
$1\leq r \leq d-g+1$. Therefore, from now until the end of the proof, we fix an index $r$ satisfying the above inequalities.
Consider the irreducible $(g-1)$-dimensional tautological subvarieties $\{\Upsilon_0,\ldots,\Upsilon_{d-g}\}$ of $C_d$ constructed in Proposition \ref{P:varU}.
Applying Lemma \ref{intSigma} and using \eqref{E:dim-span}, we get that $\{[\Upsilon_0],\ldots,[\Upsilon_{d-g}]\}$ are linearly independent in $R_{g-1}(C_d)$ and that, for any $1\leq i \leq d-g+1$,
$$\Omega_i^\perp= \theta^{\geq 2g-2-d+i,g-1}.$$
Combining this with Proposition \ref{P:AJtheta}\eqref{P:AJtheta1}, we get that
$$\AJt_{g-1}^r(C_d)=\Omega_{d-g+2-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d).$$
Since $[\Upsilon_i]$ are effective classes, we get the following inclusions of cones
\begin{equation}\label{E:incl-con2}
\operatorname{cone}([\Upsilon_0], \ldots, [\Upsilon_{d-g+1-r}]) \subseteq\Omega_{d-g+2-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d) \subset \Omega_{d-g+2-r}.
\end{equation}
Since $\{[\Upsilon_0], \ldots, [\Upsilon_{d-g+1-r}]\}$ is a basis of $\Omega_{d-g+2-r}$, we infer from the inclusions \eqref{E:incl-con2} that $\Omega_{d-g+2-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_d)$ is a full dimensional cone in $\Omega_{d-g+2-r}$,
and hence it has dimension $d-g+2-r=\dim (\theta^{\geq g-r,g-1})^\perp$.
We can therefore apply Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} and get that $\AJt_{g-1}^r(C_d)$ is a perfect face of dimension $d-g+2-r$ whose dual face is equal to $\theta^{\geq g-r,g-1} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^{g-1}(C_d)$.
\end{proof}
We will now compare BN rays and BN faces in dimension $g-1$ with pseff $\theta$-faces and subordinate faces.
\begin{remark}\label{R:compaAJ}
Let us compare Theorems \ref{T:BNfaces} and \ref{AJtaut} with Theorems \ref{thetaPseff} and \ref{thetaNef}.
\begin{itemize}
\item BN faces in dimension $g-1$ and BN rays exist in a range where pseff $\theta$-faces do not exist.
Indeed, if we are in the numerical range of Theorem \ref{T:BNfaces}, then $n=g-1$ and $1\leq d-n\leq g-1$ which implies that $\max\{n,d-n\}=n=g-1<g$.
On the other hand, if we are under the hypotheses of Theorem \ref{AJtaut}, then $C_d^r$ has dimension $n:=r+\rho=d+r(d-g-r)$ and codimension $m:=d-n=r(g+r-d)$. Now it is easily checked that
$$\begin{aligned}
n<g & \Leftrightarrow d<g+r-1+\frac{1}{r+1}, \\
m<g & \Leftrightarrow \frac{r-1}{r}g+r<d, \\
\end{aligned}
$$
and both conditions are satisfied because of the assumptions on $d$. This implies that $g>\max\{n,d-n\}$ in any of the two cases, hence pseff $\theta$-faces are not defined.
\item BN faces in dimension $g-1$ and subordinate faces coexist if and only if $d=2g-2$ and $n=g-1$, in which case they are equal.
Indeed, if we are in the numerical range of Theorem \ref{T:BNfaces}, then $n=g-1$ and $d\leq 2g-2$.
On the other hand, if we are in the numerical range of Theorem \ref{thetaNef}, then $d\geq \operatorname{gon}_{g-1}(C)=2g-2$ (see Lemma \ref{L:gonindex}); hence we must have $d=2g-2$ and $n=g-1$. In this case, we have that
$$\Sigma_i\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_{2g-2})=\AJt_{g-1}^{g-i}(C_{2g-2})=\Omega_i\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{g-1}(C_{2g-2}),$$
for any $1\leq i \leq g-1$. Even more is true: since $C_{2g-2-i}^{g-1-i}=\Gamma_{2g-2-i}(|K_C|)$, we have that $\Gamma_i=\Upsilon_i$ for any $0\leq i\leq g-1$.
\item BN rays can coexist with subordinate faces if and only if $\rho:=\rho(g,r,d)=0$, in which case the Brill-Noether ray $\operatorname{cone}([C_d^r])$ is equal to the subordinate edge $\operatorname{cone}([\Gamma_d(\l)])$, where $\l$ is a linear system of degree $d$ and dimension $r$.
Indeed, suppose that a BN ray $\operatorname{cone}([C_d^r])\subset {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{r+\rho}(C_d)$ coexists with the subordinate faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{r+\rho}(C_d)$. Then it must happen that $d\geq \operatorname{gon}_{r+\rho}(C)$, which using that $C$ is Brill-Noether general, translates into
$$d=\frac{rg+\rho}{r+1}+r\geq \frac{(r+\rho)g}{r+\rho+1}+r+\rho.$$
Now it is easy to see, using that $\rho\geq 0$ because $C$ is a Brill-Noether general curve, that the above inequality is satisfied if and only if $\rho=0$. In this case, we claim that any subordinate variety $\Gamma_0=\Gamma_d(\l)$ where $\l$ is a $\mathfrak{g}_d^r$ (as in Proposition \ref{cyclecontr}) is a fiber of $\alpha_d$ and an irreducible component of $C_d^r$,
and $C_d^r$ is numerically equivalent to a positive multiple of $\Gamma_0$. Indeed, since $\rho=0$ and $C$ is a Brill-Noether general curve, $C_d^{r+1}=\emptyset$, which implies that any linear system $\l$ of dimension $r$ and degree $d$ is a complete linear system $|L|$ associated to some $L\in W_d^r(C)$, and clearly $\Gamma_d(|L|)=\alpha_d^{-1}(L)$.
Moreover, $\Gamma_d(|L|)$ has contractibility index with respect to $\alpha_d$ equal to $r$ (since it has dimension $r$ and it is a fiber of $\alpha_d$), hence it is an irreducible component of $C_d^r$ by Fact \ref{BNclass}\eqref{BNclass0}. Conversely, any irreducible component of $C_d^r$ is of the form $\Gamma_d(|L|)$ for some $L\in W_d^r(C)$. Since the class of $\Gamma_d(|L|)$ does not depend on the chosen $L\in W_d^r(C)$, we conclude that $[C_d^r]$ is a positive multiple of $[\Gamma_0]$.
\end{itemize}
\end{remark}
\section{Hyperelliptic curves}\label{S:hyper}
The aim of this section is to describe the tautological Abel-Jacobi faces in ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ for $C$ a hyperelliptic curve.
We will assume throughout this section that $d\leq 2g-2$ since in the case $d>2g-2$ we have a complete description of the tautological Abel-Jacobi faces in Theorem \ref{thetaPseff}.
A crucial role is played by Brill-Noether varieties for hyperelliptic curves, which we now study.
\begin{prop}\label{P:BNhyper}
Let $C$ be a hyperelliptic curve of genus $g\geq 2$. Fix integers $d$ and $r$ such that $0\leq d\leq 2g-2$ and $\max\{0,d-g+1\}\leq r \leq \frac{d}{2}$. Then $C_d^r$ is irreducible of dimension $d-r$ and its class is a positive multiple of
\begin{equation}\label{Cdr-hyper}
\sum_{k=0}^{r}\binom{d-r-g}{k}\frac{x^ k\theta^{r-k}}{(r-k)!}.
\end{equation}
\end{prop}
\begin{proof}
We will denote by $\mathfrak{g}^1_2$ the hyperelliptic linear series on $C$, by $\mathcal O_C(\mathfrak{g}_2^1)$ its associated line bundle and by $\iota$ the hyperelliptic involution on $C$.
Let us distinguish two cases, according to whether or not $d\leq g$.
If $d\leq g$ then any $\mathfrak{g}_d^r$ on $C$ is of the form $r\mathfrak{g}^1_2+p_1+\ldots+p_{d-2r}$, where $p_1,\ldots,p_{d-2r}$ are points of $C$
such that no two of them are conjugate under the hyperelliptic involution (see \cite[p. 13]{GAC1}).
Therefore, $C_d^r$ is the image of the finite morphism
$$
\begin{aligned}
C_r\times C_{d-2r} & \longrightarrow C_d \\
(E,D) \ \ \ \ & \mapsto E+\iota(E)+D,
\end{aligned}
$$
from which we deduce that $C_d^r$ is irreducible of dimension $d-r$.
Moreover, the class $[C_d^r]$ is a positive multiple (depending on its scheme-structure) of $A^{d-2r}(\Gamma_{2r}(r\mathfrak{g}^1_2))$, where $A$ is the push operator of \cite[Def. 2.2]{BKLV} and $\Gamma_{2r}(r\mathfrak{g}^1_2)$ is the subordinate variety of \eqref{D:sublocus}.
Combining Facts \ref{subcyc} and \cite[Fact 2.9(ii)]{BKLV}, one can easily prove by induction on $0\leq i$ that
$$A^{i}(\Gamma_{2r}(r\mathfrak{g}^1_2))= i! \sum_{k=0}^{r}\binom{r-g+i}{k}\frac{x^ k\theta^{r-k}}{(r-k)!},$$
which for $i=d-2r$ gives the desired formula.
If $d>g$ then, using the isomorphism $W_d^r(C)\stackrel{\cong}{\longrightarrow} W_{2g-2-d}^{r-d+g-1}(C)$ obtained by sending $L$ to $\omega_C\otimes L^{-1}$, together with the fact that $W_{2g-2-d}^{r-d+g-1}(C)$ is irreducible of dimension equal to $2g-2-d-2(r-d+g-1)=d-2r$ (by what was proved in the previous case for $C_{2g-2-d}^{r-d+g-1}$), we get that $W_d^r(C)$ is irreducible
of dimension equal to $d-2r$. Hence $C_d^r$ is irreducible of dimension $d-r$ by Fact \ref{BNclass}\eqref{BNclass0}.
Moreover, an effective degree-$d$ divisor $D$ on $C$ belongs to $C_d^r$ if and only if $\omega_C(-D)\in W_{2g-2-d}^{r-d+g-1}(C)$, which by the previous case is equivalent to
saying that $\omega_C(-D)=\mathcal O_C((r-d+g-1)\mathfrak{g}_2^1)(E)$ for some $E\in C_{d-2r}$. Using that $\omega_C=\mathcal O_C((g-1)\mathfrak{g}_2^1)$,
we conclude that
$$D\in C_d^r\Longleftrightarrow D+E\in (d-r)\mathfrak{g}_2^1 \text { for some }E\in C_{d-2r}.$$
Therefore, the class of $C_d^r$ is a positive multiple of the subordinate variety $\Gamma_d((d-r)\mathfrak{g}_2^1)$ whose class is given by \eqref{Cdr-hyper} according to Fact \ref{subcyc}.
\end{proof}
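For instance, for $2\leq d\leq g$ and $r=1$, formula \eqref{Cdr-hyper} becomes
$$\theta+(d-1-g)x,$$
so $[C_d^1]$ is a positive multiple of $\theta+(d-1-g)x$ in $R_{d-1}(C_d)$.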
\begin{cor}\label{C:varhyp}
Let $C$ be a hyperelliptic curve of genus $g\geq 2$ and fix integers $n,d$ such that $0 \le d-n\leq n <g$ (which implies that $0\leq d \leq 2g-2$).
For any $0 \leq i \leq d-n$, consider the embedding $\psi_i : C_{d-i} \to C_d$ defined by $\psi_i(D)=D+ip_0$, where $p_0$ is a fixed point of $C$. Then the subvariety
$$\Upsilon_i^H:=\psi_i(C_{d-i}^{d-n-i}) \subseteq C_d$$
is irreducible of dimension $n$, its class is tautological and it is equal, up to a positive multiple, to
\begin{equation}\label{E:classH}
[\Upsilon_i]^H:=\sum_{k=0}^{d-n-i}\binom{n-g}{k}\frac{x^{k+i}\theta^{d-n-i-k}}{(d-n-i-k)!},
\end{equation}
and its image $\alpha_d(\Upsilon_i^H)$ in $\operatorname{Pic}^d(C)$ has dimension $2n-d+i$.
\end{cor}
Note that the subvarieties $\Upsilon_i^H$ depend on the choice of the base point $p_0$, but their classes $[\Upsilon_i^H]$, which coincide with $[\Upsilon_i]^H$ up to positive multiples, are independent of this choice.
\begin{proof}
Note that $C_{d-i}^{d-n-i}$ is an irreducible subvariety of $C_{d-i}$ of dimension $n$ by Proposition \ref{P:BNhyper}, whence $\Upsilon_i^H$ is an irreducible subvariety of $C_d$ of dimension $n$.
The class of $\Upsilon_i^H$ can be computed, up to a positive multiple, starting from \eqref{Cdr-hyper} in the same way as formula \eqref{E:classT} is obtained in Proposition \ref{cyclecontr}.
Finally, the dimension of $\alpha_d(\Upsilon_i^H)$ can be computed similarly to what was done in Proposition \ref{P:varU}.
\end{proof}
Using the subvarieties constructed in Proposition \ref{cyclecontr} and the ones constructed in Corollary \ref{C:varhyp}, we can now describe tautological Abel-Jacobi faces
for hyperelliptic curves.
\begin{thm}\label{T:hyper}
Let $C$ be a hyperelliptic curve of genus $g\geq 2$ and fix integers $n,d$ such that $0 \le n, d-n <g$ (which implies that $0\leq d \leq 2g-2$).
\begin{enumerate}[(i)]
\item \label{T:hyper1} Assume that $d\geq 2n$.
For any $0\leq i \leq \min\{n,g\}$, consider the classes $[\Gamma_i]\in R_n(C_d)$ given by \eqref{E:classT} and set $\Sigma_{i+1}:=\langle [\Gamma_0],\ldots,[\Gamma_{i}]\rangle \subset R_n(C_d)$.
Then, for any $1\leq r\leq n$, $\AJt_n^r(C_d)$ is a non-trivial face, is equal to $\Sigma_{n+1-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$ and it is a perfect face of dimension $n+1-r$. Hence, the following chain
\begin{equation}\label{E:ch-hyp1}
\Sigma_1\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d) \subset \Sigma_2\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d) \subset \ldots \subset \Sigma_{n} \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)\subset {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)
\end{equation}
is a maximal chain of perfect non-trivial faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$.
\item \label{T:hyper2} Assume that $d\leq 2n$.
For any $0 \leq i \leq d-n$, consider the classes $[\Upsilon_i]^H\in R_{n}(C_d)$ given by \eqref{E:classH} and set $\Omega_{i+1}^H:=\langle [\Upsilon_0]^H,\ldots,[\Upsilon_{i}]^H\rangle \subset R_{n}(C_d)$.
Then $\AJt_{n}^r(C_d)$ is a non-trivial face if and only if $1\leq r \leq d-n$, in which case $\AJt_{n}^r(C_d)$ is equal to
$\Omega_{d-n+1-r}^H \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d)$ and it is a perfect face of dimension $d-n+1-r$.
Hence, the following chain
\begin{equation}\label{E:ch-hyp2}
\Omega_1^H\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d) \subset \Omega_2^H\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d) \subset \ldots \subset \Omega_{d-n}^H \cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d)\subset {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d)
\end{equation}
is a maximal chain of perfect non-trivial faces of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d)$.
\end{enumerate}
The dual chain of both the chains in \eqref{E:ch-hyp1} and \eqref{E:ch-hyp2} is equal to
\begin{equation}\label{E:nef-hyp}
\theta^{\ge n}\cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d)\subset \theta^{\ge n-1} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d) \subset \ldots \subset \theta^{\ge \max\{1,2n-d+1\}} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d) \subset {^{\rm t} \hskip -.05cm \operatorname{Nef}}^n(C_d).
\end{equation}
\end{thm}
Note that the faces in \eqref{E:ch-hyp1} are the subordinate faces introduced in Theorem \ref{thetaNef}, while the faces in \eqref{E:nef-hyp} are the nef $\theta$-faces introduced after Proposition \ref{P:AJtheta}. The faces of \eqref{E:ch-hyp2} are new, and they will be called \emph{hyperelliptic BN(=Brill-Noether) faces}.
Note that
$$\operatorname{cone}([C_d^{d-n}])=\Omega_1^H\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$$
is an edge (i.e. a perfect extremal ray) of ${^{\rm t} \hskip -.05cm \operatorname{Pseff}}_n(C_d)$, which we call the \emph{hyperelliptic BN(=Brill-Noether) edge}.
Note that from Proposition \ref{P:nonzero}\eqref{BNextr2} it follows that the hyperelliptic BN edge $\operatorname{cone}([C_d^{d-n}])$ is also an extremal ray of the entire (non-tautological) cone $\operatorname{Pseff}_{n}(C_d)$, although we do not know if it is an edge of the entire cone.
\begin{proof}
Part \eqref{T:hyper1} follows from Theorem \ref{thetaNef}, using that $\operatorname{gon}_n(C)=2n$ for $C$ hyperelliptic and $n<g$ by Lemma \ref{L:gonindex}.
Let us now prove part \eqref{T:hyper2}.
Using that $d-n\leq n\leq g$, Proposition \ref{P:theta}\eqref{P:theta1} gives that
\begin{equation}\label{E:dim-sp}
\dim (\theta^{\geq n+1-r,n})^\perp=\codim \theta^{\geq n+1-r,n}=\max\{d-n+1-r,0\},
\end{equation}
which, together with Proposition \ref{P:AJtheta}\eqref{P:AJtheta1}, implies that $\AJt_{n}^r(C_d)$ is trivial unless
$1\leq r \leq d-n$. Therefore, from now until the end of the proof, we fix an index $r$ satisfying the above inequalities.
Consider the irreducible $n$-dimensional tautological subvarieties $\{\Upsilon_0^H,\ldots,\Upsilon_{d-n}^H\}$ of $C_d$ constructed in Corollary \ref{C:varhyp}.
Applying Lemma \ref{intSigma} and using \eqref{E:dim-sp}, we get that $\{[\Upsilon_0]^H,\ldots,[\Upsilon_{d-n}]^H\}$ are linearly independent in $R_{n}(C_d)$
and that, for any $1\leq i \leq d-n$,
$$(\Omega_i^H)^\perp= \theta^{\geq 2n-d+i,n}.$$
Combining this with Proposition \ref{P:AJtheta}\eqref{P:theta1}, we get that
$$\AJt_{n}^r(C_d)=\Omega_{d-n+1-r}^H\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d).$$
Since $[\Upsilon_i]^H$ are ${\mathbb Q}$-effective classes, we get the following inclusions of cones
\begin{equation}\label{E:in-con}
\operatorname{cone}([\Upsilon_0]^H, \ldots, [\Upsilon_{d-n-r}]^H) \subseteq\Omega^H_{d-n+1-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d) \subset \Omega^H_{d-n+1-r}.
\end{equation}
Since $\{[\Upsilon_0]^H, \ldots, [\Upsilon_{d-n-r}]^H\}$ is a basis of $\Omega_{d-n+1-r}^H$, we infer from the inclusions \eqref{E:in-con} that $\Omega^H_{d-n+1-r}\cap {^{\rm t} \hskip -.05cm \operatorname{Pseff}}_{n}(C_d)$ is a full dimensional cone in $\Omega^H_{d-n+1-r}$, and hence it has dimension $d-n+1-r=\dim (\theta^{\geq n+1-r,n})^\perp$.
We can therefore apply Proposition \ref{P:AJtheta}\eqref{P:AJtheta2} and get that $\AJt_{n}^r(C_d)$ is a perfect face of dimension $d-n+1-r$ whose dual face is equal to $\theta^{\geq n+1-r,n} \cap {^{\rm t} \hskip -.05cm \operatorname{Nef}}^{n}(C_d)$.
\end{proof}
\section*{Acknowledgements}
We would like to thank Dawei Chen, Izzet Coskun and Brian Lehmann for helpful discussions.
\section{Introduction}
Differential operators and their discrete counterparts are widely used in geometry processing algorithms for data smoothing, interpolation, simulation, mapping, and numerous other applications. Starting from the classical Laplace-Beltrami operator \(\Delta\) on surfaces, specialized applications have demanded a wider class of operators. For example, higher-order operators such as the Bilaplacian \(\Delta^2\) may be used where the second-order smoothness of Laplacian solutions is insufficient. While the classical Laplacian is isotropic, operators incorporating anisotropy can be used for pattern generation, computing specialized distances, fluid simulation, and other applications.
Although anisotropic operators studied in geometry processing have generally been second-order, higher-order operators admit a richer variety of anisotropic behaviors. In particular, the coefficients of a fourth-order operator form a fourth-order symmetric tensor field, which can have symmetries that are not representable in second order. This allows a fourth-order operator to express, for example, multiple equally-preferred directions of variation at each point.
In this work, we construct anisotropic operators from the fourth-order \emph{symmetric frame fields} employed in quadrilateral and hexahedral meshing, which encode multiple directions at each point in a domain. Symmetric frame fields are tensor fields that have local quadrilateral or octahedral symmetry. While the only second-order tensors with this symmetry are scaled identity tensors---which are actually \emph{isotropic}---fourth-order \emph{octahedral} and more general \emph{odeco} tensor fields are nondegenerate and have been applied widely in quadrilateral and hexahedral meshing. Because of their relationship to meshing, many algorithms exist for designing and manipulating these fields.
Through a variational framework, we construct a family of \emph{frame field operators}---fourth-order elliptic differential operators acting on scalar functions, which measure variation in the directions of their associated frame fields. We define these operators in both planar domains---where they are related to the \emph{orthotropic thin plate} operators of elastic physics---and volumetric domains. Our frame field operators provide a link between the realms of frame field design and anisotropic elliptic operators.
We discretize frame field operators using a mixed finite element approach. While the linear finite elements commonly used in surface and volumetric geometry processing are well-suited to the discretization of second-order partial differential equations---for which the weak and variational forms only involve first derivatives---higher-order operators do not fit as neatly into this framework. Recently, the method of \emph{mixed finite elements} has shown promise in discretizing the Bilaplacian on planar domains and triangulated surfaces. Here, we apply an analogous method to our much larger class of fourth-order frame field operators on both planar and volumetric domains. Our discretization naturally generalizes recent discretizations of the Bilaplacian: a single parameter controls the degree of anisotropy, and when it is set to one, we recover the Bilaplacian. Moreover, we expand the palette of boundary conditions for discrete fourth-order operators by showing how to impose Neumann boundary conditions variationally, by restricting the space of Lagrange multipliers. We evaluate our discrete frame field operators numerically, study their properties, and outline a range of potential applications.
\paragraph*{Contributions}
In this work, we
\begin{itemize}
\item introduce a new class of \emph{frame field operators} parametrized by planar and volumetric frame fields, which measure function variation along frame directions;
\item design a mixed finite element discretization for frame field operators, including a natural way to impose desired boundary conditions;
\item empirically validate the expected convergence and behavior of discrete frame field operators;
\item show examples of PDE solutions, eigenfunctions, and assorted potential applications; and
\item provide a \textsc{matlab} implementation in supplemental material.
\end{itemize}
\begin{figure*}
\centering
\begin{tabularx}{\textwidth}{c|ccc}
\includegraphics[width=0.23\textwidth]{figures/elephant-ff-lvl6} &
\includegraphics[width=0.23\textwidth]{figures/elephant-dir-d50-lvl1} &
\includegraphics[width=0.23\textwidth]{figures/elephant-dir-d50-lvl2} &
\includegraphics[width=0.23\textwidth]{figures/elephant-dir-d50-lvl3} \\
& $L_{\text{mean}} = \num{1.851e-02}$ & $L_{\text{mean}} = \num{9.255e-03}$ & $L_{\text{mean}} = \num{4.636e-03}$ \\
& \includegraphics[width=0.23\textwidth]{figures/elephant-dir-d50-lvl4} &
\includegraphics[width=0.23\textwidth]{figures/elephant-dir-d50-lvl5} &
\includegraphics[width=0.23\textwidth]{figures/elephant-dir-d50-lvl6} \\
& $L_{\text{mean}} = \num{2.322e-03}$ & $L_{\text{mean}} = \num{1.162e-03}$ & $L_{\text{mean}} = \num{5.814e-04}$ \\
\includegraphics[width=0.23\textwidth]{figures/troll-ff-lvl6} &
\includegraphics[width=0.23\textwidth]{figures/troll-dir-d100-lvl1} &
\includegraphics[width=0.23\textwidth]{figures/troll-dir-d100-lvl2} &
\includegraphics[width=0.23\textwidth]{figures/troll-dir-d100-lvl3} \\
& $L_{\text{mean}} = \num{1.199e-01}$ & $L_{\text{mean}} = \num{5.890e-02}$ & $L_{\text{mean}} = \num{2.932e-02}$ \\
& \includegraphics[width=0.23\textwidth]{figures/troll-dir-d100-lvl4} &
\includegraphics[width=0.23\textwidth]{figures/troll-dir-d100-lvl5} &
\includegraphics[width=0.23\textwidth]{figures/troll-dir-d100-lvl6} \\
& $L_{\text{mean}} = \num{1.464e-02}$ & $L_{\text{mean}} = \num{7.315e-03}$ & $L_{\text{mean}} = \num{3.657e-03}$ \\
\end{tabularx}
\caption{Solutions to Dirichlet boundary-value problems converge under mesh refinement (Loop subdivision) as the mean edge length $L_{\text{mean}}$ decreases. Here the boundary conditions consist of a square wave in boundary arc length.}
\label{fig:cvg-2d}
\end{figure*}
\section{Related Work}
\subsection{Phase Field Models}
Our work is loosely inspired by \emph{phase field models} from physics. These models often appear as ``mesoscale'' or ``effective theory'' abstractions in the physics of materials and pattern formation, wherein the physical state of a system is encoded in a \emph{phase field}, and the average behavior of microscale components is encoded in a PDE. Powerful tools from analysis can then help illuminate large-scale structural properties of the system. The freedom to specify the PDE affords enough flexibility to model a vast array of phenomena. A general reference on phase field modeling can be found in \cite{provatas2011phase}.
\subsection{Scalar Fields in Meshing}\label{sec:meshingrelatedwork}
Phase field-like models have appeared in geometry processing in \emph{Morse-based quadrangulation} methods, which employ a scalar oscillatory field to encode the combinatorial structure of a mesh; these controllable oscillatory fields have partly inspired our work.
We do not attempt to provide a complete overview of quadrilateral or hexahedral meshing; surveys can be found in \cite{Bommes2013,Campen2017,Amenta2019}.
In Morse-based meshing, a quadrilateral mesh is extracted from the \emph{Morse complex} or \emph{Morse-Smale complex} (MSC) of a scalar function (Morse function). The Morse complex is a topological skeleton whose study appears in Morse theory, where it is used to connect the topology of a manifold to the critical points and gradient flows of functions on it. A good reference on Morse theory and the Morse complex appears in \cite[Chapter 8]{Jost2017}. Prior to the advent of Morse-based meshing, the Morse complex of triangle meshes was studied in \cite{banchoff1970critical}, and the Morse-Smale complex on simplicial complexes was introduced and refined in \cite{edelsbrunner2000topological,edelsbrunner2001hierarchical,bremer2003multi,edelsbrunner2003morse}. \cite{Ni2004} proposed a method of computing ``fair'' Morse functions for use in mesh cutting and clustering.
In the work of Dong et al.\ \cite{Dong2006}, Laplacian eigenfunctions are used as Morse functions because of their ubiquity and smoothness. An argument from basic properties of the MSC in two dimensions shows that the mesh elements will be quadrilateral. Huang et al.\ \cite{Huang2008} extend this approach to allow orientation and alignment control by relaxing the Laplacian eigenproblem to a so-called \emph{quasi-eigenproblem} and introducing an objective term measuring alignment to a vector field. Alignment is measured against an ordinary vector field as symmetrized cross-field representations were not well-developed at the time.
\cite{ling2011spectral} studies boundary conditions for Morse-based quadrangulation; its first- and second-order boundary conditions mean the Laplacian eigenproblem must still be relaxed to a least-squares quasi-eigenproblem. In our work, the use of a fourth-order operator means greater flexibility in choosing boundary conditions when solving a simple linear eigenproblem. \cite{Ling2014} refines the spectral approach by extending the alignment objective to allow separate control of local oscillation frequency in two orthogonal directions. This is a proxy for control of quad element size along each direction.
Other works solve more complicated nonlinear optimization problems to compute higher-quality Morse functions at the cost of complexity and performance. \cite{Zhang2010} extends the scalar Morse function to a section of a four-dimensional vector bundle to prevent degeneracy of the independent oscillation directions. Fang et al.\ \cite{Fang2018} combine this approach with a piecewise mesh construction approach that seeks the ``best of both worlds'' of parametrization-based and Morse-based meshing algorithms.
\subsection{Frame Fields and Tensor Fields}
Another key ingredient of our work is a tensor representation for symmetric frame fields, which have also grown up in the meshing literature.
Cross fields on surfaces have been applied extensively to the quad meshing problem. References can be found in the aforementioned meshing surveys as well as \cite{vaxman2016}. Cross field representations on planar domains and surfaces typically make use of the special structure available in two dimensions to represent crosses by complex numbers. More general non-orthogonal frame fields on surfaces have also been considered in \cite{panozzo2014,diamanti2014,diamanti2015}.
More recently, volumetric frame fields have become popular in the hex meshing literature. The challenge of representing volumetric frames with their complicated non-Abelian symmetries has been tackled in various ways in the hex-meshing literature. Early papers propose using special homogeneous polynomials or their coefficients in a basis of spherical harmonics to represent fields with local octahedral symmetry \cite{huang,RaySokolov2,Solomon}. More recently, \cite{chemin} suggests thinking of octahedral fields as special symmetric fourth-order tensor fields. \cite{palmer2020} proposes viewing octahedral fields as a subclass of more general tensor fields called \emph{odeco fields}, which can represent frame fields with independent scale along each frame axis. Other tensor field design problems have been explored in \cite{shen2016,palacios2017}. The homogeneous polynomial and symmetric tensor field representations naturally suggest thinking of a frame field as the principal symbol of a partial differential operator, which we explore in this work.
In field-based meshing, the recovery of a mesh from a field is generally mediated by a parametrization---i.e., a map from the domain to be meshed into Euclidean space, through which a lattice is then pulled back to yield a hex mesh. The frame field is viewed as encoding the derivatives of the parametrization up to symmetry, and the map is optimized to agree with the field in least-squares. Parametrization approaches have proven successful in 2D \cite{bommes2009mixed,bommes2013integer} and made inroads in the volumetric setting as well \cite{nieser2011cubecover,Lyon:2016:HRH}. In \Cref{sec:paramconnection}, we examine the frame field operator in the special case of frame fields associated to a parametrization.
\subsection{Discrete Bilaplacian}
The Bilaplacian, defined as the square of the scalar Laplace-Beltrami operator $\Delta$, is a popular fourth-order differential operator in geometry processing.
It is used for applications in surface fairing \cite{Desbrun1999}, surface deformation \cite{Sorkine2004},
data interpolation \cite{Jacobson2012}, data smoothing \cite{Weinkauf2011},
the computation of smooth distances \cite{Lipman2010}, skinning and character animation
\cite{Jacobson2011}, physical simulation \cite{Bergou2006}, and more \cite{Sykora2014,Andrews2011}.
The Bilaplacian with zero Neumann boundary is often discretized using mixed finite elements for the Laplacian \cite{Jacobson2010}, although other approaches are also popular for different boundary conditions \cite{Bergou2006,Stein2018,Stein2020}.
We consider a more general class of fourth order operators beyond the Bilaplacian, whose principal symbols are constructed from \emph{orthogonally decomposable} tensor fields, of which the Bilaplacian is a special case.
Our discretization generalizes the mixed finite element discretization for the Hessian energy on flat domains of Stein et al.\ \cite{Stein2018}, which is based on the classical mixed finite element method for the biharmonic equation \cite{Scholz1978}. We show how to engineer desired boundary conditions using a suitable choice of Lagrange multipliers. When our ellipticity parameter $\epsilon$ is set to $1$ and appropriate boundary conditions are chosen, we recover the Hessian energy discretization.
\subsection{Discrete Anisotropic Operators}
Several works in geometry processing have studied elliptic operators with built-in anisotropy and their applications.
The survey of Wang and Solomon \cite[Section 5.7]{Wang2019} provides a comprehensive overview of anisotropic Laplacians, their discretization, and their applications.
Anisotropic operators have seen use in anisotropic meshing \cite{Fu2014}, coloring vector graphics \cite{Finch2011}, elasticity simulation \cite{Kim2019}, and surface reconstruction \cite{Yu2013}.
Azencot et al. \cite{azencot2013operator} show how to represent discrete tangent vector fields by their first-order directional derivative operators on scalar functions. \cite{azencot2017consistent} extend this representation to two-dimensional cross fields by transforming them into vector fields using the complex power approach. While these operators discretize a directional (first) derivative, our operator is an anisotropic linear elliptic operator that measures function variation in all frame directions at once. As a result, our construction yields operators whose eigenfunctions oscillate in alignment with field directions, providing a fundamentally different means of encoding a frame field in a linear operator that generalizes to the volumetric setting.
\begin{figure}
\centering
\pgfplotstableread[col sep=comma, skip first n=1]{figures/horse-ev.csv}{\evs}
\begin{tikzpicture}
\begin{loglogaxis}[
mark size = 0.5pt,
width = \columnwidth,
height = 0.7\columnwidth,
grid = none,
xlabel = {Eigenvalue at Level 6},
ylabel = {Eigenvalue Error},
ylabel near ticks,
legend pos = south east,
every tick label/.append style = {font=\tiny}]
\addplot [only marks, red, mark = *] table [x index = 5, y expr = {abs(\thisrowno{0}-\thisrowno{5})}] {\evs};
\addlegendentry{Level 1};
\addplot [only marks, green!70!black, mark = *] table [x index = 5, y expr = {abs(\thisrowno{1}-\thisrowno{5})}] {\evs};
\addlegendentry{Level 2};
\addplot [only marks, blue, mark = *] table [x index = 5, y expr = {abs(\thisrowno{2}-\thisrowno{5})}] {\evs};
\addlegendentry{Level 3};
\addplot [only marks, orange, mark = *] table [x index = 5, y expr = {abs(\thisrowno{3}-\thisrowno{5})}] {\evs};
\addlegendentry{Level 4};
\addplot [only marks, violet, mark = *] table [x index = 5, y expr = {abs(\thisrowno{4}-\thisrowno{5})}] {\evs};
\addlegendentry{Level 5};
\end{loglogaxis}
\end{tikzpicture}
\caption{As the mesh is refined via Loop subdivision, eigenvalues converge. Here, the spectrum for a frame field operator on the \textbf{horse} is shown over five levels of Loop subdivision. Eigenvalue error is computed against the result on the sixth level of subdivision. The underlying frame field is the same as that depicted in \Cref{fig:dist}.}
\label{fig:eigs-2d}
\end{figure}
\section{Theory}
In this section, we will detail the construction of an elliptic operator measuring alignment to a symmetric frame field and examine its properties. We will also examine the behavior of the frame field operator for a frame field arising from a parametrization map.
\subsection{Preliminaries}
We aim to construct an operator that measures alignment to a given frame field. First, we recall the definition of symmetric tensors and tensor fields, which will be used to build the coefficients of our operator.
\begin{definition}[tensors, tensor fields]
The space of \textbf{symmetric $k$th-order tensors} on $\mathbb{R}^n$ is the symmetric tensor product of $k$ copies of $\mathbb{R}^n$, notated as $\operatorname{Sym}^k \mathbb{R}^n$. Elements $T \in \operatorname{Sym}^k \mathbb{R}^n$ are given by sets of coefficients $T_{i_1\dots i_k}$ invariant under all permutations of the indices $i_1,\dots,i_k$.
A \textbf{symmetric fourth-order tensor field} on a domain $\Omega \subset \mathbb{R}^n$ is a continuous map $T : \Omega \to \operatorname{Sym}^4 \mathbb{R}^n$.
\end{definition}
\begin{notation}
In what follows, all tensors will be symmetric unless otherwise specified. We will make liberal use of the Einstein summation convention in formulas with Latin indices. We will also use the tensor contraction notation $A : T$ defined by
\begin{equation} (A : T)_{kl} \coloneqq A_{ij} T_{ijkl} \end{equation}
when $A$ and $T$ are (symmetric) second- and fourth-order tensors, respectively.
\end{notation}
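As a concrete illustration of this contraction (our own sketch using NumPy's \texttt{einsum}; none of the names below come from the paper's supplemental code):

```python
import numpy as np

def contract(A, T):
    """Tensor contraction (A : T)_{kl} = A_{ij} T_{ijkl}."""
    return np.einsum('ij,ijkl->kl', A, T)

# For the rank-one symmetric tensor T = xi (x) xi (x) xi (x) xi, contraction
# with a second-order A yields (xi^T A xi) times xi xi^T.
xi = np.array([1.0, 0.0, 0.0])
T = np.einsum('i,j,k,l->ijkl', xi, xi, xi, xi)
A = np.diag([2.0, 3.0, 5.0])
print(np.allclose(contract(A, T), 2.0 * np.outer(xi, xi)))  # True
```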
We now focus on a particular class of tensor fields, popularized in hexahedral meshing for encoding collections of orthogonal directions \cite{chemin, palmer2020}:
\begin{definition}[odeco tensors, odeco fields]
A fourth-order symmetric tensor $T$ is \textbf{orthogonally decomposable (odeco)} if it can be written
\begin{equation}
\sum_\alpha w_\alpha (\xi^\alpha)^{\otimes 4}
\end{equation}
for some orthonormal set of vectors $\xi^\alpha \in \mathbb{R}^n$, where $\otimes$ denotes the tensor product \cite{Robeva}. The $\xi^\alpha$ are known as the \textbf{orthogonal components} or \textbf{generalized eigenvectors} of $T$, the latter being in reference to the property that
\[ T_{ijkl} \xi^{\alpha}_i = w_\alpha \xi^{\alpha}_j \xi^{\alpha}_k \xi^{\alpha}_l. \]
Similarly, the $w_\alpha$ are known as the \textbf{weights} or \textbf{generalized eigenvalues} of $T$.
The odeco tensors form an algebraic variety known as an \emph{odeco variety} \cite{Robeva,Boralevi}. An \textbf{odeco field} is a field of odeco tensors \cite{palmer2020}.
\end{definition}
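A minimal sketch (ours, not the paper's code) of assembling an odeco tensor from an orthonormal frame and checking the generalized-eigenvector property stated above:

```python
import numpy as np

def odeco_tensor(weights, frame):
    """Assemble sum_a w_a (xi^a)^{(x)4}; the rows of `frame` are orthonormal."""
    n = frame.shape[1]
    T = np.zeros((n, n, n, n))
    for w, xi in zip(weights, frame):
        T += w * np.einsum('i,j,k,l->ijkl', xi, xi, xi, xi)
    return T

# A generic orthonormal frame in R^3 from the QR factorization of a random matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
weights = [1.0, 2.0, 3.0]
T = odeco_tensor(weights, Q)

# Generalized-eigenvector property: T_{ijkl} xi_i = w xi_j xi_k xi_l.
xi0 = Q[0]
lhs = np.einsum('ijkl,i->jkl', T, xi0)
rhs = weights[0] * np.einsum('j,k,l->jkl', xi0, xi0, xi0)
print(np.allclose(lhs, rhs))  # True
```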
A particularly important subset of the odeco tensors consists of those that are symmetric under permutation of their components. We refer to these as (conformal) octahedral due to their octahedral symmetry when the base dimension is three. We will focus our efforts on such fields, though our constructions also generalize to odeco fields.
\begin{definition}[octahedral, conformal octahedral tensors]
A tensor $T$ is \textbf{conformal octahedral} if it is odeco with equal weights $w_\alpha = w \ge 0$. $T$ is \textbf{octahedral} if all $w_\alpha = 1$. The octahedral tensors form a smooth variety known as the \emph{octahedral variety} \cite{palmer2020}. The conformal octahedral tensors occupy a cone over the octahedral variety.
We define \textbf{octahedral fields} and \textbf{conformal octahedral fields} as fields valued in the octahedral and conformal octahedral tensors, respectively.
\end{definition}
\begin{remark}
A generic odeco tensor $T = \sum_\alpha w_\alpha (\xi^{\alpha})^{\otimes 4}$ encodes its components $\{\xi^\alpha\}$ up to sign, as changing the sign of $\xi^\alpha$ has no effect on the coefficients of $T$. If $T$ is conformal octahedral with positive weight, it encodes its components up to permutation and sign. Thus odeco and (conformal) octahedral fields are generally known as \textbf{symmetric frame fields}. For more discussion and algorithms for computing such fields, see \cite{palmer2020}.
\end{remark}
Finally, we recall a useful norm on fourth-order tensors:
\begin{definition}[tensor spectral norm]
By analogy to the operator norm for second order tensors, one can define a spectral norm on symmetric tensors \cite{friedland2020spectral} as
\begin{equation}
\|T\| \coloneqq \max_{\|v\| = 1} |T_{ijkl} v_i v_j v_k v_l|.
\end{equation}
\end{definition}
When $T$ is odeco with weights $w_\alpha$, $\|T\| = \max_\alpha |w_\alpha|$. In particular, when $T$ is conformal octahedral with weight $w$, $\|T\| = w$, and $T/\|T\|$ is octahedral.
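These two facts are easy to verify numerically; the following is an illustrative sketch of ours (not from the paper's supplement), checking that the quartic form of an odeco tensor with nonnegative weights is bounded by the largest weight and attains it at the matching component:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # rows: orthonormal components
weights = np.array([0.5, 2.0, 1.0])
T = sum(w * np.einsum('i,j,k,l->ijkl', xi, xi, xi, xi)
        for w, xi in zip(weights, Q))

def quartic(v):
    return np.einsum('ijkl,i,j,k,l->', T, v, v, v, v)

# The form is bounded above by max(weights) on the unit sphere...
vs = rng.standard_normal((1000, 3))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)
assert all(quartic(v) <= weights.max() + 1e-12 for v in vs)
# ...and attains that bound at the component with the largest weight.
print(np.isclose(quartic(Q[1]), weights.max()))  # True
```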
\subsection{Variational Problem}
Our task is to define a variational problem, whose optimality conditions will yield the desired frame field operator. The variational problem will also lead directly to our mixed finite element discretization in \Cref{sec.discretization}. The functional we define will measure alignment to a frame field. To do this, we first show that the tensor field itself encodes this alignment.
\begin{definition}[alignment]
Let $T$ be an odeco tensor with orthogonal components $\xi^\alpha$. We say a second-order symmetric tensor $S$ is \textbf{aligned} with $T$ if the $\xi^\alpha$ are eigenvectors of $S$.
\end{definition}
If $u$ is a scalar function and $S = \nabla^2u$ is its Hessian, then alignment between $S$ and $T$ expresses the intuitive idea that the primary directions of curvature of $u$ occur along the components of $T$.
In what follows, we will focus on the case of conformal octahedral tensors and fields. The case of general odeco tensors is similar but messier. To motivate our variational problem, we will view a fourth order tensor as inducing a quadratic form on second-order tensors. The following property of conformal octahedral tensors says that this quadratic form measures alignment:
\begin{lemma}
Let $T$ be a conformal octahedral tensor with components $\xi^\alpha$. Then, among symmetric second-order tensors $S$ with a fixed set of eigenvalues $\lambda_i(S)$, the quadratic form $S : T : S$ is maximized when the eigenvectors of $S$ agree with the $\xi^\alpha$.
\begin{proof}
Using the fact that the $\xi^\alpha$ form an orthonormal basis, we expand the expression for $T$ to get
\begin{align}
\frac{1}{\|T\|} \, S : T : S &= \sum_\alpha (\xi^\alpha)^\top S (\xi^\alpha)(\xi^\alpha)^\top S (\xi^\alpha) \\
&\le \sum_{\alpha,\beta} (\xi^\alpha)^\top S (\xi^\beta)(\xi^\beta)^\top S (\xi^\alpha) \\
\intertext{as all terms are nonnegative}
&= \sum_{\alpha} (\xi^\alpha)^\top S \left( \sum_\beta (\xi^\beta)(\xi^\beta)^\top \right) S (\xi^\alpha) \\
&= \sum_{\alpha} (\xi^\alpha)^\top S^2 (\xi^\alpha) \\
\intertext{as the parenthesized tensor is the identity by orthonormality of $\xi^\beta$}
&= \tr\left(S^2 \sum_\alpha \xi^\alpha (\xi^\alpha)^\top \right) \\
&= \tr S^2 \\
&= \sum_{i} \lambda_i(S)^2,
\end{align}
with equality if the $\xi^\alpha$ are eigenvectors of $S$.
\end{proof}
\end{lemma}
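The lemma can be checked numerically; the sketch below (ours, with hypothetical variable names) fixes a spectrum, compares the aligned $S$ against random rotations of the same spectrum, and confirms the maximum value $\sum_i \lambda_i(S)^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # rows are the components xi^a
T = sum(np.einsum('i,j,k,l->ijkl', xi, xi, xi, xi) for xi in Q)  # octahedral

def form(S):
    return np.einsum('ij,ijkl,kl->', S, T, S)

lam = np.diag([1.0, -2.0, 3.0])
S_aligned = Q.T @ lam @ Q          # symmetric, eigenvectors = frame components
best = form(S_aligned)
for _ in range(200):               # random rotations of the same spectrum
    R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    assert form(R @ lam @ R.T) <= best + 1e-9
print(np.isclose(best, 1.0 + 4.0 + 9.0))  # True: sum of squared eigenvalues
```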
Let $\Omega \subset \mathbb{R}^n$ be a compact domain and $T$ a conformal octahedral field on $\Omega$. Intuitively, we want to define a functional $\mathcal{E}_T(u)$ that will be \emph{minimized} when the scalar function $u : \Omega \to \mathbb{R}$ oscillates in alignment with the frame field. Since $S : T : S$ is \emph{maximized} when $S$ is aligned to $T$, we might be tempted to define $\mathcal{E}_T(u) \coloneqq -(\nabla^2 u) : T : (\nabla^2 u)$. However, this quadratic form is negative semidefinite and degenerate. In particular,
\begin{equation}
(\xi^\alpha \otimes \xi^\beta + \xi^\beta \otimes \xi^\alpha) : T : (\xi^\alpha \otimes \xi^\beta + \xi^\beta \otimes \xi^\alpha) = 0
\end{equation}
when $\alpha \ne \beta$.
To get an elliptic differential operator, we want to define a positive, nondegenerate functional. To this end, we first choose some $\epsilon \in (0, 1]$ and define a modified tensor field
\begin{equation}T^\epsilon \coloneqq \|T\|\mathbb{I} - (1 - \epsilon) T, \end{equation}
where $\mathbb{I}$ denotes the fourth-order identity tensor whose characteristic property is that $\mathbb{I} : S = S$ for any symmetric second-order tensor $S$. Now we can define a functional as follows:
\begin{definition}[frame field functional]
The \textbf{frame field functional} associated to $T$ with ellipticity $\epsilon > 0$ is given by
\begin{equation}
\begin{aligned} \mathcal{E}_{T,\epsilon}(u) &= \frac{1}{2} \int_\Omega (\nabla^2 u) : T^\epsilon : (\nabla^2 u) \; d\Omega \\
&= \frac{1}{2} \int_\Omega \|T\| \|\nabla^2 u\|_F^2 - (1-\epsilon)(\nabla^2 u) : T : (\nabla^2 u) \; d\Omega \end{aligned} \end{equation}
for $u \in H^2(\Omega)$, where $\nabla^2 u$ denotes the Hessian of $u$, and $\|\cdot \|_F$ is the pointwise Frobenius norm.
\end{definition}
In summary, for any conformal octahedral field, we have constructed a nondegenerate functional $\mathcal{E}_{T,\epsilon}$ that preserves the alignment-measuring properties of the quadratic form $(\nabla^2 u) : T : (\nabla^2 u)$. Intuitively, $\mathcal{E}_{T, \epsilon}$ wants the Hessian of $u$ to align to the frame directions, and this effect becomes stronger as $\epsilon \to 0$. When $\epsilon = 1$, $\mathcal{E}_{T,1}$ is the \emph{Hessian energy} (see \cite{Stein2018}), for which the Euler-Lagrange equation is the biharmonic equation.
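The nondegeneracy of $T^\epsilon$ can be seen concretely in a small sketch (ours, assuming an octahedral $T$ with $\|T\| = 1$): along the diagonal modes $\xi^\alpha \otimes \xi^\alpha$ the modified form shrinks to $\epsilon$, while the off-diagonal modes that $T$ annihilates keep their full Frobenius weight.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
T = sum(np.einsum('i,j,k,l->ijkl', xi, xi, xi, xi) for xi in Q)  # |T| = 1
eps = 0.1
eye = np.eye(3)
I4 = 0.5 * (np.einsum('ik,jl->ijkl', eye, eye)
            + np.einsum('il,jk->ijkl', eye, eye))  # identity on symmetric tensors
Teps = I4 - (1 - eps) * T

def form(S):
    return np.einsum('ij,ijkl,kl->', S, Teps, S)

S_diag = np.outer(Q[0], Q[0])                        # aligned diagonal mode
S_off = np.outer(Q[0], Q[1]) + np.outer(Q[1], Q[0])  # degenerate direction of T
print(np.isclose(form(S_diag), eps))  # True: smallest value, eps * |S|_F^2
print(np.isclose(form(S_off), 2.0))   # True: T annihilates this mode, |S|_F^2 = 2
```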
\begin{figure}
\centering
\pgfplotstableread[col sep=comma]{figures/horse-mean-length.csv}{\data}
\begin{tikzpicture}
\begin{loglogaxis}[
width = \columnwidth,
height = 0.7\columnwidth,
enlarge x limits = 0.1,
enlarge y limits = 0.2,
grid = none,
xlabel = {Mean Edge Length},
ylabel = {Absolute Eigenvalue Error},
ylabel near ticks,
x dir = reverse,
every tick label/.append style = {font=\tiny},
legend pos = south west,
legend style = {nodes = {scale = 0.5, transform shape}}]
\addplot+ table [x index = 0, y index = 10] {\data};
\addlegendentry{$\lambda_{10}$}
\addplot+ table [x index = 0, y index = 20] {\data};
\addlegendentry{$\lambda_{20}$}
\addplot+ table [x index = 0, y index = 30] {\data};
\addlegendentry{$\lambda_{30}$}
\addplot+ table [x index = 0, y index = 40] {\data};
\addlegendentry{$\lambda_{40}$}
\addplot+ table [x index = 0, y index = 50] {\data};
\addlegendentry{$\lambda_{50}$}
\addplot+ table [x index = 0, y index = 60] {\data};
\addlegendentry{$\lambda_{60}$}
\end{loglogaxis}
\end{tikzpicture}%
\\
\includegraphics[width=0.3\columnwidth]{figures/horse-e64-lvl1}
\includegraphics[width=0.3\columnwidth]{figures/horse-e64-lvl2}
\includegraphics[width=0.3\columnwidth]{figures/horse-e64-lvl3} \\
\includegraphics[width=0.3\columnwidth]{figures/horse-e64-lvl4}
\includegraphics[width=0.3\columnwidth]{figures/horse-e64-lvl5}
\includegraphics[width=0.3\columnwidth]{figures/horse-e64-lvl6}
\caption{As the mesh is refined, frame field eigenvalues and eigenfunctions converge. Eigenvalue error (top) is measured against the value at the lowest resolution. The $64$th eigenfunction is shown over six mesh resolutions (left to right, top to bottom). The underlying frame field on the \textbf{horse} is the same as that depicted in \Cref{fig:dist}.}
\label{fig:eigf-2d}
\end{figure}
\subsection{Euler-Lagrange Equations}
In the previous section, we proposed a functional $\mathcal{E}_{T,\epsilon}$ measuring the alignment of the variation directions of a scalar function $u$ to a given frame field $T$. Now we follow a standard procedure to extract a differential operator from this variational formulation, and we show that the resulting \emph{frame field operator} is elliptic.
Taking the first variation of $\mathcal{E}_{T,\epsilon}$ with respect to $u$ yields the following Euler-Lagrange equations:
\begin{equation} 0 = \int_\Omega \sum_{i, j, k, l} T^{\epsilon}_{ijkl} (\partial_i \partial_j u)(\partial_k \partial_l v) \; d\Omega, \end{equation}
for any smooth test function $v \in C^\infty(\Omega)$. Integrating by parts yields
\begin{equation}
\begin{aligned}
0 &= \int_{\partial\Omega} T^{\epsilon}_{ijkl} (\partial_i \partial_j u)n_k \partial_l v \; dA - \int_{\Omega} \partial_k (T^{\epsilon}_{ijkl} (\partial_i \partial_j u)) \partial_l v \; d\Omega \\
&= \int_{\partial\Omega} [T^{\epsilon}_{ijkl} (\partial_i \partial_j u)n_k \partial_l v - \partial_k(T^{\epsilon}_{ijkl} (\partial_i \partial_j u)) n_l v] \; dA \\
&\quad + \int_{\Omega} \partial_k \partial_l (T^{\epsilon}_{ijkl} \partial_i \partial_j u) v\; d\Omega.
\end{aligned}
\end{equation}
Eliminating the test function $v$, we obtain the following PDE with natural boundary conditions:
\begin{align}
\partial_k \partial_l(T^{\epsilon}_{ijkl}\partial_i\partial_j u) &= 0 \label{eq.pde1}\\
n_i T^{\epsilon}_{ijkl} \partial^2_{jk} u &= 0 \quad \text{on }\partial \Omega \label{eq.bdry2} \\
n_i \partial_j (T^{\epsilon}_{ijkl} \partial^2_{kl} u) &= 0 \quad \text{on }\partial\Omega. \label{eq.bdry3}
\end{align}
Accordingly, we define our differential operator as follows:
\begin{definition}[frame field operator] The \textbf{frame field operator} associated to a conformal octahedral frame field with ellipticity $\epsilon$ is given by
\begin{equation} \mathcal{A}_{T,\epsilon} u = \partial_k \partial_l (T^{\epsilon}_{ijkl} \partial_i \partial_j u). \end{equation}
\end{definition}
The fourth-order term will have coefficients $T^{\epsilon}_{ijkl}$, i.e., the \emph{principal symbol} of $\mathcal{A}_{T,\epsilon}$ is given by the polynomial
\begin{equation}\begin{aligned}
\sigma_P(\mathcal{A}_{T,\epsilon})(x, \zeta) &= T^{\epsilon}_{ijkl}(x)\zeta_i\zeta_j\zeta_k\zeta_l \\
&= \|T(x)\|\left(\|\zeta\|^4 - (1-\epsilon)\sum_{\alpha} (\xi^\alpha(x) \cdot \zeta)^4 \right) \\
&\ge \epsilon \|T(x)\| \|\zeta\|^4,
\end{aligned}\end{equation}
\vspace{-.1in}
\begin{wrapfigure}[4]{r}{0.2\linewidth}
\vspace{-.35in}\centering
\includegraphics[width=.95\linewidth]{figures/principal-symbol}
\end{wrapfigure}
\noindent where $\zeta \in T_x^*\mathbb{R}^n = \mathbb{R}^n$,
confirming that $\mathcal{A}_{T,\epsilon}$ is \emph{elliptic}. Moreover, if $\|T(x)\| \ge C > 0$ for all $x \in \Omega$, then $\mathcal{A}_{T,\epsilon}$ is \emph{uniformly elliptic}. An example of $\sigma_P(\mathcal{A}_{T,\epsilon})$ is shown at right (inset) as a plot over the unit sphere $\|\zeta\| = 1$.
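The ellipticity bound on the principal symbol is easy to spot-check numerically. The sketch below is an illustration only: it takes $\|T\| = 1$ and uses the rows of a random rotation as the frame directions $\xi^\alpha$, then samples random covectors $\zeta$ and verifies $\sigma_P \ge \epsilon \|T\| \|\zeta\|^4$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows of a random orthogonal matrix stand in for the octahedral frame
# directions xi^alpha (an assumption for this sketch).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
xi = Q

eps = 0.1
normT = 1.0  # ||T(x)||, taken as 1 for simplicity

def symbol(zeta):
    """Principal symbol sigma_P(A_{T,eps})(zeta) for the frame xi."""
    return normT * (np.dot(zeta, zeta) ** 2
                    - (1 - eps) * np.sum((xi @ zeta) ** 4))

# Check the uniform ellipticity bound sigma_P >= eps * ||T|| * ||zeta||^4.
for _ in range(1000):
    z = rng.standard_normal(3)
    assert symbol(z) >= eps * normT * np.dot(z, z) ** 2 - 1e-12
print("ellipticity bound holds")
```

Along a frame direction the bound is tight, since $\sum_\alpha (\xi^\alpha \cdot \zeta)^4 = \|\zeta\|^4$ there.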
To sum up, we have shown that solutions to the variational problem $\min_u \mathcal{E}_{T,\epsilon}(u)$ satisfy a fourth-order elliptic PDE $\mathcal{A}_{T,\epsilon} u = 0$ and corresponding natural boundary conditions. Intuitively, the solutions to this PDE are functions ``most aligned'' to the frame field $T$.
\subsection{Eigenproblem}
A wider variety of field-aligned functions can be obtained by solving an eigenproblem for $\mathcal{A}_{T,\epsilon}$. Imposing the nondegeneracy constraint $\|u\|_{L^2(\Omega)} = 1$, we obtain the Lagrangian
\begin{equation}
\mathcal{E}_{T,\epsilon}(u) - \lambda (\|u\|_{L^2(\Omega)}^2 - 1),
\end{equation}
whose Euler-Lagrange equations consist of the eigenvalue problem
\begin{equation} \mathcal{A}_{T,\epsilon} u = \partial_i \partial_j (T^\epsilon_{ijkl} \partial_k \partial_l u) = \lambda u, \label{eq.eig} \end{equation}
together with the natural boundary conditions \eqref{eq.bdry2}--\eqref{eq.bdry3}.
\subsection{Boundary-Aligned Frame Fields}
Frame fields encountered in hex meshing satisfy an alignment condition at the boundary of a domain. Given a boundary-aligned frame field, the natural boundary condition \eqref{eq.bdry2} simplifies considerably.
\begin{definition}[boundary-aligned frame field] A frame field $T$ on a domain $\Omega$ is \textbf{boundary-aligned} if the boundary normal $n(x)$ is a generalized eigenvector of $T(x)$ for all $x \in \partial \Omega$:
\begin{equation}
T_{ijkl}(x) n_l(x) = w(x) n_i(x) n_j(x) n_k(x) \text{ for all } x \in \partial \Omega. \label{eq.bdry_align}
\end{equation}
\end{definition}
Suppose that $T$ is boundary-aligned. Then from \eqref{eq.bdry2} and \eqref{eq.bdry_align}, the second-order natural boundary condition reduces to
\begin{equation}
\begin{aligned}
0 &= n_i T^{\epsilon}_{ijkl} \partial^2_{jk} u \\
&= \|T\| n_j \partial^2_{jl} u - (1-\epsilon) n_i T_{ijkl} \partial^2_{jk} u \\
&= \|T\| n_j \partial^2_{jl} u - (1-\epsilon) \|T\| n_j n_k n_l \partial^2_{jk} u \\
&= \|T\| \left[ (I - (1 - \epsilon) n n^\top) (\nabla^2 u)\, n \right]_l.
\end{aligned}
\end{equation}
Observing that $(I - (1 - \epsilon) n n^\top)$ is positive definite for $\epsilon > 0$, we obtain the reduced second-order boundary conditions
\begin{equation}
(\nabla^2 u) n = 0 \quad \text{on } \partial \Omega.
\label{eq.bdry2reduced}
\end{equation}
Intuitively, when $T$ is boundary-aligned, the natural boundary condition \eqref{eq.bdry2reduced} says that $u$ is \emph{linear} along the normal direction at the boundary, and moreover that its normal derivative is constant over $\partial \Omega$. Notice the similarity to the natural boundary conditions studied in \cite{Stein2018}.
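The reduction above can be spot-checked numerically. The sketch below is an illustration under two assumptions not fixed by the text: the regularized tensor is taken as $T^{\epsilon}_{ijkl} = w\,\delta_{ik}\delta_{jl} - (1-\epsilon)T_{ijkl}$ (consistent with the principal symbol), and the alignment weight $w$ from \eqref{eq.bdry_align} stands in for $\|T\|$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal frame with the third direction equal to the boundary normal n.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
n = Q[2]
T = np.einsum('ai,aj,ak,al->ijkl', Q, Q, Q, Q)  # octahedral frame tensor

# Boundary alignment: T_{ijkl} n_l = w n_i n_j n_k, with w = 1 here.
w = 1.0
assert np.allclose(np.einsum('ijkl,l->ijk', T, n),
                   w * np.einsum('i,j,k->ijk', n, n, n))

# Elliptic-regularized tensor (the delta-delta term is an assumption
# chosen to match the principal symbol in the text).
eps = 0.2
I4 = np.einsum('ik,jl->ijkl', np.eye(3), np.eye(3))
Teps = w * I4 - (1 - eps) * T

# Compare the contracted boundary condition against the reduced matrix
# form w (I - (1-eps) n n^T) H n for a random symmetric "Hessian" H.
H = rng.standard_normal((3, 3)); H = H + H.T
lhs = np.einsum('i,ijkl,jk->l', n, Teps, H)
rhs = w * (np.eye(3) - (1 - eps) * np.outer(n, n)) @ H @ n
assert np.allclose(lhs, rhs)
print("boundary condition reduces as claimed")
```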
\subsection{Relationship to Parametrization}\label{sec:paramconnection}
In parametrization-based quad and hex meshing, frame fields enter as a way to encode derivatives of a parametrization up to some symmetry, either quadrilateral or octahedral (see e.g., \cite{bommes2009mixed,bommes2013integer,nieser2011cubecover,liu2018}). In this section, we explore the properties of the frame field operator associated to a frame field arising from a parametrization. We motivate why we might expect high-frequency eigenfunctions of such an operator to have local lattice-like structure.
\begin{figure}
\centering
\pgfplotstableread[col sep=comma, skip first n=1]{figures/square-ev.csv}{\evs}
\begin{tikzpicture}
\begin{loglogaxis}[
mark size = 0.5pt,
width = \columnwidth,
height = 0.7\columnwidth,
grid = none,
xlabel = {Analytic Eigenvalue},
ylabel = {Eigenvalue Error},
ylabel near ticks,
legend pos = south east,
every tick label/.append style = {font=\tiny}]
\addplot [only marks, red, mark = *] table [x index = 0, y expr = {abs(\thisrowno{1}-\thisrowno{0})}] {\evs};
\addlegendentry{$L_{\text{mean}} = 0.03$};
\addplot [only marks, green!70!black, mark = *] table [x index = 0, y expr = {abs(\thisrowno{2}-\thisrowno{0})}] {\evs};
\addlegendentry{$L_{\text{mean}} = 0.025$};
\addplot [only marks, blue, mark = *] table [x index = 0, y expr = {abs(\thisrowno{3}-\thisrowno{0})}] {\evs};
\addlegendentry{$L_{\text{mean}} = 0.02$};
\addplot [only marks, orange, mark = *] table [x index = 0, y expr = {abs(\thisrowno{4}-\thisrowno{0})}] {\evs};
\addlegendentry{$L_{\text{mean}} = 0.015$};
\end{loglogaxis}
\end{tikzpicture}
\caption{For a constant frame field on the square $[-1, 1]^2$, eigenvalues can be computed analytically using the Fourier transform. The spectrum of our discrete operator converges to the analytic spectrum as mesh resolution increases. $L_{\text{mean}}$ indicates mean mesh edge length.}
\label{fig:square}
\end{figure}
\begin{definition}[map frame fields]
Suppose $f : \Omega \to f(\Omega) \subset \mathbb{R}^n$ is a diffeomorphism. Let $df$ be the differential of $f$. The \textbf{map frame field} associated to the map $f$ is defined as follows:
\begin{equation}
\begin{aligned}
(\mathrm{T}f)_{ijkl} &\coloneqq \sum_\alpha df^\alpha_i df^\alpha_j df^\alpha_k df^\alpha_l \\
&= \sum_\alpha (\partial_i f^\alpha) (\partial_j f^\alpha) (\partial_k f^\alpha) (\partial_l f^\alpha),
\end{aligned}
\end{equation}
where $\partial_i = \partial/\partial x_i$ is differentiation with respect to the $i$th coordinate in $\Omega$. The \textbf{inverse map frame field} or \textbf{map coframe field} is given by
\begin{equation} (\hat{\mathrm{T}}f)^{ijkl} \coloneqq \sum_\alpha (df^{-1})^i_\alpha (df^{-1})^j_\alpha (df^{-1})^k_\alpha (df^{-1})^l_\alpha, \end{equation}
where $df^{-1}$ denotes the matrix inverse of $df$.
\end{definition}
\begin{remark}
Observe that $\hat{\mathrm{T}}f$ is the image of the constant coframe field $\sum_\alpha (e_\alpha)^{\otimes 4}$ under the natural pullback map.
\end{remark}
\begin{remark}
For $\mathrm{T}f$ and $\hat{\mathrm{T}}f$ to be conformal octahedral frame fields, the map $f$ must be conformal. Moreover, for $\mathrm{T}f$ and $\hat{\mathrm{T}}f$ to be octahedral, $f$ must be a rigid motion, and $\mathrm{T}f$ and $\hat{\mathrm{T}}f$ will then be constant.
\end{remark}
Now suppose $f$ is conformal, let $v : \mathbb{R}^n \to \mathbb{R}$, and let $u = v \circ f$ be its pullback to $\Omega$. Then, modulo terms of lower differential order,
\begin{equation}
\partial^4_{ijkl} u \equiv df_i^{a} df_j^{b} df_k^{c}df_l^{d} (\partial^4_{abcd} v) \circ f.
\end{equation}
Let $J = df$ and $\hat{J} = df^{-1}$ so that $\hat{J}_a^b J_b^c = \delta_a^c$. Then
\begin{equation} \begin{aligned}
(\hat{\mathrm{T}}f)^{ijkl} \partial^4_{ijkl} u &\equiv \sum_\alpha \hat{J}^i_\alpha \hat{J}^j_\alpha \hat{J}^k_\alpha \hat{J}^l_\alpha J_i^{a} J_j^{b} J_k^{c}J_l^{d} (\partial^4_{abcd} v) \circ f \\
&= \sum_{\alpha} \delta_\alpha^a\delta_\alpha^b\delta_\alpha^c\delta_\alpha^d (\partial^4_{abcd} v) \circ f \\
&= \sum_\alpha \partial^4_{\alpha \alpha \alpha \alpha} v \circ f. \end{aligned} \end{equation}
Also,
\begin{equation} \begin{aligned}
\partial^4_{ijij} u &\equiv J_i^{a} J_j^{b} J_i^{c} J_j^{d} (\partial^4_{abcd} v) \circ f \\
&= \| J^a \|^2 \| J^b \|^2 \delta_{ac}\delta_{bd} [(\partial^4_{abcd} v) \circ f] \\
&= \|\mathrm{T} f\| \partial^4_{abab} v \circ f, \end{aligned} \end{equation}
where we have used that $f$ is conformal.
Hence, evaluating the full frame field operator on $u$ is equivalent at highest order to evaluating a constant frame field operator on $v$:
\begin{equation}
\mathcal{A}_{\hat{\mathrm{T}}f, \epsilon} u \equiv (\partial^4_{abab} - (1 - \epsilon) \partial^4_{aaaa}) v \circ f + \text{lower order},
\end{equation}
where we have used that $\|\hat{\mathrm{T}}f\|\|\mathrm{T}f\| = 1$. This says that---up to terms of lower differential order---the frame field operator associated to a map coframe field pushes forward through the map to the constant frame field operator.
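The index contraction at the heart of this argument can be verified numerically. The sketch below builds a conformal differential (scaling times rotation, standing in for $df$ at a point) and checks that contracting the coframe tensor against four copies of $J$ collapses to the diagonal delta tensor; in fact this particular identity is purely algebraic and holds for any invertible $J$, with conformality needed only for the mixed-trace step above.

```python
import numpy as np

# Conformal differential in 2D: scaling times rotation (an assumption
# standing in for df at a point).
s, th = 1.7, 0.6
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
J = s * R                 # J[a, i] = df^a_i
Jhat = np.linalg.inv(J)   # Jhat[i, a] = (df^{-1})^i_a

# Coframe tensor (T-hat f)^{ijkl} = sum_a Jhat^i_a Jhat^j_a Jhat^k_a Jhat^l_a.
That = np.einsum('ia,ja,ka,la->ijkl', Jhat, Jhat, Jhat, Jhat)

# Contract against four copies of J: the result should collapse to the
# "diagonal" tensor sum_a delta^a delta^a delta^a delta^a from the text.
lhs = np.einsum('ijkl,ai,bj,ck,dl->abcd', That, J, J, J, J)
diag = np.zeros((2, 2, 2, 2))
for a in range(2):
    diag[a, a, a, a] = 1.0
assert np.allclose(lhs, diag)
print("coframe contraction collapses to the diagonal tensor")
```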
From a hex meshing perspective, if $f$ is a parametrization carrying our frame field to a constant frame field, we might hope that eigenfunctions whose critical points lie on a lattice pull back to eigenfunctions whose critical points form a hex mesh. The above analysis tells us that this is true in the high-frequency limit---indeed,
at high frequencies, the highest-order derivatives will dominate, and $\mathcal{A}_{\hat{\mathrm{T}}f,\epsilon}$ will approach the pullback of the constant frame field operator on $f(\Omega)$. We should therefore expect that eigenfunctions of $\mathcal{A}_{\hat{\mathrm{T}}f,\epsilon}$ will increasingly look like warped copies of constant frame field eigenfunctions as their frequency increases. Even at relatively low frequencies, this appears to be borne out empirically (see \Cref{fig:warp,fig:warp-evs}).
\begin{figure}
\centering
\pgfplotstableread[col sep=comma]{figures/sphere-mean-length.csv}{\data}
\begin{tikzpicture}
\begin{semilogyaxis}[
width = \columnwidth,
height = 0.7\columnwidth,
enlarge x limits = 0.1,
enlarge y limits = 0.2,
grid = none,
xlabel = {Mean Edge Length},
ylabel = {Absolute Eigenvalue Error},
ylabel near ticks,
x dir = reverse,
every tick label/.append style = {font=\tiny},
legend pos = south west,
legend style = {nodes = {scale = 0.5, transform shape}}]
\addplot+ table [x index = 0, y index = 10] {\data};
\addlegendentry{$\lambda_{10}$}
\addplot+ table [x index = 0, y index = 20] {\data};
\addlegendentry{$\lambda_{20}$}
\addplot+ table [x index = 0, y index = 30] {\data};
\addlegendentry{$\lambda_{30}$}
\addplot+ table [x index = 0, y index = 40] {\data};
\addlegendentry{$\lambda_{40}$}
\addplot+ table [x index = 0, y index = 50] {\data};
\addlegendentry{$\lambda_{50}$}
\addplot+ table [x index = 0, y index = 60] {\data};
\addlegendentry{$\lambda_{60}$}
\node[inner sep=0pt] at (axis cs:0.1, 5e3) {\includegraphics[width = 0.45\textwidth]{figures/sphere-ff-sing}};
\node[inner sep=0pt] at (axis cs:10^-1.15, 5e3) {\includegraphics[width = 0.45\textwidth]{figures/sphere-ff-int}};
\end{semilogyaxis}
\end{tikzpicture}
\caption{Convergence of the frame field operator spectrum on a ball in 3D, for the frame field shown (inset). We compare to eigenvalues on a finer mesh with mean edge length $\num{0.0680}$.}
\label{fig:cvg-3d}
\end{figure}
\begin{figure}
\centering
\setlength\tabcolsep{1.5pt}
\begin{tabular}{@{}c|ccc@{}}
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-int} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef16-lvl1} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef16-lvl2} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef16-lvl3} \\
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-sing} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef32-lvl1} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef32-lvl2} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef32-lvl3} \\
\includegraphics[width=0.45\textwidth]{figures/teddy-surf} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef48-lvl1} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef48-lvl2} &
\includegraphics[width=0.45\textwidth]{figures/teddy-ff2-ef48-lvl3} \\
& {\footnotesize $L_{\text{mean}} = \num{0.0373}$} & {\footnotesize $L_{\text{mean}} = \num{0.0287}$} & {\footnotesize $L_{\text{mean}} = \num{0.0233}$}
\end{tabular}
\caption{Comparison of frame field operator eigenfunctions corresponding to $\lambda_{16}$ (top), $\lambda_{32}$ (middle), and $\lambda_{48}$ (bottom) across various mesh resolutions. The frame field, its singular curves, and the domain boundary are shown at left. Note how the lower-frequency eigenfunctions appear to stabilize at higher resolutions. $L_{\text{mean}}$ indicates mean mesh edge length.}
\label{fig:ef-3d}
\end{figure}
\section{Discretization} \label{sec.discretization}
As the functional $\mathcal{E}_{T,\epsilon}$ is quadratic in the second derivatives of $u$, we pursue a mixed finite element discretization that follows the discretization of the Hessian energy by Stein et al.\ \cite{Stein2018}. Unlike their work, however, we do not necessarily want the natural boundary conditions of our functional. For example, we might want to impose Neumann boundary conditions in distance computation applications, or for computing eigenfunctions that have ridgelines on the boundary. Rather than clamping function values on boundary triangles, which can be numerically unstable, we show how to impose boundary conditions in a weak sense, i.e., by clamping a Lagrange multiplier instead.
\subsection{Mixed FEM Lagrangian}
In the mixed finite element method (mixed FEM), the degree of the finite element basis is too low to directly represent the derivatives that appear in the variational or weak formulation of a PDE. Instead, we replace higher derivatives with auxiliary variables coupled to the original unknowns by lower-order PDEs enforced via Lagrange multipliers.
As the frame field functional and operator generalize the Hessian energy and Bilaplacian, respectively, we adopt the mixed FEM approach of \cite{Stein2018}, in which functions, Hessians, and Lagrange multipliers are represented in the linear FEM basis on a triangle mesh (or, in our case, a tetrahedral mesh).
We begin by reformulating the problem $\min_u \mathcal{E}_{T,\epsilon}(u)$ into the equivalent constrained optimization problem
\begin{equation}
\begin{alignedat}{2}
& \text{minimize } & & \int_\Omega \frac{1}{2}V:T^{\epsilon}:V \; d\Omega \\
& \text{subject to }& \quad & \nabla^2u = V.
\end{alignedat}
\end{equation}
Enforcing the constraint $\nabla^2 u = V$ via the Lagrange multiplier $\Lambda$, a second-order symmetric tensor field, we obtain the Lagrangian
\begin{equation} \mathcal{L}_{T,\epsilon}(u,V) = \int_\Omega \left[\frac{1}{2}V:T^{\epsilon}:V + \Lambda:(V - \nabla^2u)\right] \; d\Omega.
\end{equation}
Now integrating by parts, we see that $\mathcal{L}_{T,\epsilon}$ can be rewritten
\begin{equation} \begin{aligned}
\mathcal{L}_{T,\epsilon}(u,V) &= \int_\Omega \left[ \frac{1}{2} V:T^{\epsilon}:V + (\nabla \cdot \Lambda) \cdot \nabla u + \Lambda : V \right] \; d\Omega\\ &\quad - \int_{\partial\Omega}n^\top \Lambda \nabla u \; dA, \end{aligned} \label{eq.mixed-ibp} \end{equation}
where $\nabla \cdot \Lambda$ denotes the symmetric tensor divergence of $\Lambda$. Observe that $\mathcal{L}_{T,\epsilon}$ now includes only \emph{first derivatives} of $u$ and $\Lambda$, which can be represented faithfully in the linear FEM basis.
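In finite dimensions, the structure of this saddle-point problem is easy to verify. The sketch below uses random stand-in matrices ($T$ for the tensor field, $H$ for the discrete Hessian) together with a linear forcing $b$ added so the stationary point is nonzero; none of these come from an actual FEM assembly.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 6  # sizes of u and V degrees of freedom

# Random stand-ins: minimize (1/2) V^T T V subject to V = H u, plus a
# forcing -b^T u so the solution is nonzero (an addition for this demo).
X = rng.standard_normal((m, m)); T = X @ X.T + m * np.eye(m)
H = rng.standard_normal((m, n))
b = rng.standard_normal(n)

# Stationarity of L = (1/2) V^T T V + Lam^T (V - H u) - b^T u:
#   dL/dV   = T V + Lam      = 0
#   dL/dLam = V - H u        = 0
#   dL/du   = -H^T Lam - b   = 0
KKT = np.block([[T, np.eye(m), np.zeros((m, n))],
                [np.eye(m), np.zeros((m, m)), -H],
                [np.zeros((n, m)), -H.T, np.zeros((n, n))]])
rhs = np.concatenate([np.zeros(m), np.zeros(m), b])
sol = np.linalg.solve(KKT, rhs)
u = sol[2 * m:]

# Eliminating V and Lam yields the reduced equation H^T T H u = b.
assert np.allclose(H.T @ T @ H @ u, b)
print("mixed formulation matches the eliminated quadratic problem")
```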
\subsection{Weak Boundary Conditions}
Boundary conditions on $V$ can now be imposed weakly by constraining the Lagrange multiplier $\Lambda$. In particular, requiring that the normal $n(x)$ be an eigenvector of $\Lambda(x)$ for $x \in \partial\Omega$ transforms the boundary term in \eqref{eq.mixed-ibp} into the form
\begin{equation}
\int_{\partial\Omega}n^\top \Lambda \nabla u \; dA =
\int_{\partial\Omega} \phi n^\top \nabla u \; dA \end{equation}
for some function $\phi : \partial \Omega \to \mathbb{R}$ (namely $\phi = n^\top \Lambda n$), which has the form of a homogeneous Neumann boundary term. An equivalent way to write the constraint on $\Lambda$ is
\begin{equation}
(I - n(x) n(x)^\top) \Lambda(x) n(x) = 0 \quad x \in \partial \Omega.
\end{equation}
This equation is linear and homogeneous---in particular, it can be expressed in the form $B(x) \operatorname{vec}\Lambda(x) = 0$, where $B(x)$ is a matrix-valued field on $\partial \Omega$, and $\operatorname{vec} \Lambda(x)$ denotes the coefficients of $\Lambda(x)$ arranged in a vector.
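The constraint matrix $B(x)$ admits a simple explicit construction. The sketch below shows one hypothetical assembly (row-major vectorization, with rows given by the tangential projector applied to $\Lambda n$); the actual assembly used in the implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n = rng.standard_normal(3); n /= np.linalg.norm(n)

# Hypothetical construction of B(x): rows are the tangential components
# of Lambda @ n, expressed as linear functionals of vec(Lambda).
P = np.eye(3) - np.outer(n, n)                  # tangential projector
# (P Lambda n)_i = sum_{jk} (P[i,j] n[k]) Lambda[j,k]
B = np.einsum('ij,k->ijk', P, n).reshape(3, 9)  # acts on row-major vec(Lambda)

# Check equivalence on a random symmetric Lambda.
L = rng.standard_normal((3, 3)); L = L + L.T
assert np.allclose(B @ L.reshape(9), P @ L @ n)

# A Lambda with n as an eigenvector satisfies the constraint exactly.
t = rng.standard_normal(3); t -= (t @ n) * n    # a tangent vector
L0 = 2.5 * np.outer(n, n) + np.outer(t, t)      # n is an eigenvector of L0
assert np.allclose(B @ L0.reshape(9), 0.0)
print("B vec(Lambda) = 0 iff the tangential part of Lambda n vanishes")
```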
To obtain \emph{natural boundary conditions}, we instead set $\Lambda$ to zero on the boundary, thus eliminating the boundary term from \eqref{eq.mixed-ibp} entirely.
\begin{figure}
\centering
\setlength\tabcolsep{1.5pt}
\begin{tabular}{@{}c|cccc@{}}
\includegraphics[width=0.45\textwidth]{figures/square-diff-ff} &
\includegraphics[width=0.45\textwidth]{figures/square-diff-eps1} &
\includegraphics[width=0.45\textwidth]{figures/square-diff-eps2e-1} &
\includegraphics[width=0.45\textwidth]{figures/square-diff-eps4e-2} &
\includegraphics[width=0.45\textwidth]{figures/square-diff-eps8e-3} \\
\includegraphics[width=0.45\textwidth]{figures/disk-diff-ff} &
\includegraphics[width=0.45\textwidth]{figures/disk-diff-eps1} &
\includegraphics[width=0.45\textwidth]{figures/disk-diff-eps2e-1} &
\includegraphics[width=0.45\textwidth]{figures/disk-diff-eps4e-2} &
\includegraphics[width=0.45\textwidth]{figures/disk-diff-eps8e-3} \\
\includegraphics[width=0.45\textwidth]{figures/swirl-diff-ff} &
\includegraphics[width=0.45\textwidth]{figures/swirl-diff-eps1} &
\includegraphics[width=0.45\textwidth]{figures/swirl-diff-eps2e-1} &
\includegraphics[width=0.45\textwidth]{figures/swirl-diff-eps4e-2} &
\includegraphics[width=0.45\textwidth]{figures/swirl-diff-eps8e-3} \\
\includegraphics[width=0.45\textwidth]{figures/raptor-diff-ff} &
\includegraphics[width=0.45\textwidth]{figures/raptor-diff-eps1} &
\includegraphics[width=0.45\textwidth]{figures/raptor-diff-eps2e-1} &
\includegraphics[width=0.45\textwidth]{figures/raptor-diff-eps4e-2} &
\includegraphics[width=0.45\textwidth]{figures/raptor-diff-eps8e-3} \\
\includegraphics[width=0.45\textwidth]{figures/bunny2d-diff-ff} &
\includegraphics[width=0.45\textwidth]{figures/bunny2d-diff-eps1} &
\includegraphics[width=0.45\textwidth]{figures/bunny2d-diff-eps2e-1} &
\includegraphics[width=0.45\textwidth]{figures/bunny2d-diff-eps4e-2} &
\includegraphics[width=0.45\textwidth]{figures/bunny2d-diff-eps8e-3} \\
& $\epsilon = 1$ &
$\epsilon = \num{2e-1}$ &
$\epsilon = \num{4e-2}$ &
$\epsilon = \num{8e-3}$
\end{tabular}
\caption{As the ellipticity parameter $\epsilon$ decreases, the operator becomes more anisotropic, as shown by the short-time solution to the ``diffusion'' equation $\partial_t u = \mathcal{A}_{T,\epsilon} u$ with initial condition set to a sum of Dirac deltas. Also note the differing impulse responses for two different frame fields on the disk. The diffusion time is set to $10^{-5}$ for the square and disk, $\num{2e-7}$ for the raptor, and $10^{-4}$ for the bunny.}
\label{fig:impulseresponse}
\end{figure}
\begin{figure}
\centering
\setlength\tabcolsep{1.5pt}
\begin{tabular}{@{}c|cc@{}}
\includegraphics[width=0.45\textwidth]{figures/cylinder-ff1-int} &
\includegraphics[width=0.45\textwidth]{figures/cylinder-ff1-ef9.59e-5-alt} &
\includegraphics[width=0.45\textwidth]{figures/cylinder-ff1-ef1.082e-4-alt} \\
& $\lambda = \num{9.59e-5}$
& $\lambda = \num{1.082e-4}$ \\
\includegraphics[width=0.45\textwidth]{figures/cylinder-ff2-int} &
\includegraphics[width=0.45\textwidth]{figures/cylinder-ff2-ef9.16e-5-alt} &
\includegraphics[width=0.45\textwidth]{figures/cylinder-ff2-ef9.25e-5-alt} \\
& $\lambda = \num{9.16e-5}$
& $\lambda = \num{9.25e-5}$
\end{tabular}
\caption{Eigenfunctions of frame field operators for two different frame fields on the volumetric \textbf{cylinder}. Note how the oscillations follow the field lines.}
\label{fig:cylinder}
\end{figure}
\begin{figure*}
\centering
\begin{tabular}{c|c}
\includegraphics[height=0.11\textwidth]{figures/unwarped-domain} &
\includegraphics[height=0.11\textwidth]{figures/warped-domain}
\\
\includegraphics[width=0.45\textwidth]{figures/unwarped-eigf} &
\includegraphics[width=0.45\textwidth]{figures/warped-eigf}
\end{tabular}
\caption{Eigenfunctions of the operator associated to the constant axis-aligned frame field on the base domain (left) display similar qualitative behavior to those computed on a conformally warped domain with the conformal map coframe field operator (right), when both are displayed on the warped domain.}
\label{fig:warp}
\end{figure*}
\subsection{Matrix Representation}
Discretizing $u$, $V$, and $\Lambda$ in the piecewise linear hat basis, we obtain the matrix Lagrangian
\begin{equation} \footnotesize
\mathcal{L}_{T,\epsilon}(\bm{u}, \bm{V}) = \frac{1}{2} \bm{V}^\top \bm{M}_{T^{\epsilon}} \bm{V} + \bm{\Lambda}^\top (\bm{D}^\top \bm{A} \bm{G} \bm{u} + \bm{M} \bm{V} + \bm{B}^\top \bm{\mu}), \end{equation}
where $\bm{M}_{T^{\epsilon}}$ is a block-diagonal matrix encoding the tensor field $T^{\epsilon}$ as a field of bilinear forms acting on symmetric second-order tensors scaled by the dual cell volumes, $\bm{G}$ and $\bm{D}$ are the piecewise-linear gradient and tensor divergence operators, respectively, $\bm{A}$ is a diagonal matrix of simplex volumes, and $\bm{M}$ is a diagonal matrix of dual cell volumes.
Note that we have introduced a new term, $\bm{\Lambda}^\top \bm{B}^\top \bm{\mu}$, which enforces the boundary constraint $\bm{B}\bm{\Lambda} = 0$ via yet another (discrete) Lagrange multiplier $\bm{\mu}$. The matrix $\bm{B}$ encodes the constraint $B(x) \operatorname{vec}\Lambda(x) = 0$ at each boundary vertex $x$.
The first order optimality conditions for the Lagrangian $\mathcal{L}_{T,\epsilon}$ are the following matrix equations:
\begin{equation}\begin{pmatrix}
\bm{M}_{T^{\epsilon}} & \bm{M} & \bm{0} & \bm{0} \\
\bm{M} & \bm{0} & \bm{B}^\top & \bm{D}^\top \bm{A}\bm{G} \\
\bm{0} & \bm{B} & \bm{0} & \bm{0} \\
\bm{0} & \bm{G}^\top \bm{A}\bm{D} & \bm{0} & \bm{0}
\end{pmatrix} \begin{pmatrix}
\bm{V} \\ \bm{\Lambda} \\ \bm{\mu} \\ \bm{u}
\end{pmatrix} = \bm{0}, \end{equation}
which reduce to the single equation
\begin{equation}\footnotesize \bm{G}^\top \bm{A}\bm{D} \left(\overline{\bm{M}}_{T^{\epsilon}} - \overline{\bm{M}}_{T^{\epsilon}} \bm{B}^\top (\bm{B}\overline{\bm{M}}_{T^{\epsilon}} \bm{B}^\top)^{-1} \bm{B} \overline{\bm{M}}_{T^{\epsilon}}\right)\bm{D}^\top \bm{A}\bm{G}\bm{u} = \bm{0}, \end{equation}
where $\overline{\bm{M}}_{T^{\epsilon}} = \bm{M}^{-1} \bm{M}_{T^{\epsilon}} \bm{M}^{-1}$. We thus define the \textbf{discrete frame field operator} as
\begin{equation} \footnotesize
\mathcal{A}_{T, \epsilon} \coloneqq \bm{G}^\top \bm{A}\bm{D} \left(\overline{\bm{M}}_{T^{\epsilon}} - \overline{\bm{M}}_{T^{\epsilon}} \bm{B}^\top (\bm{B}\overline{\bm{M}}_{T^{\epsilon}} \bm{B}^\top)^{-1} \bm{B} \overline{\bm{M}}_{T^{\epsilon}}\right)\bm{D}^\top \bm{A}\bm{G}.
\end{equation}
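The algebraic structure of this expression can be checked with random stand-ins. Assuming only that $\bm{M}$ is diagonal SPD and $\bm{M}_{T^{\epsilon}}$ is SPD (none of the matrices below come from an actual mesh), the Schur-complement factor is positive semidefinite, and hence so is the assembled operator.

```python
import numpy as np

rng = np.random.default_rng(3)
m, nu, nb = 12, 5, 3  # sizes of V/Lambda dofs, u dofs, boundary constraints

# Random stand-ins for the FEM matrices (assumption: this sketch checks
# only the algebraic structure, not a real discretization).
Mdiag = np.diag(rng.uniform(0.5, 2.0, m))                      # mass M
X = rng.standard_normal((m, m)); MT = X @ X.T + m * np.eye(m)  # M_{T^eps}
K = rng.standard_normal((m, nu))                               # D^T A G
B = rng.standard_normal((nb, m))                               # constraints

Mbar = np.linalg.inv(Mdiag) @ MT @ np.linalg.inv(Mdiag)
S = Mbar - Mbar @ B.T @ np.linalg.inv(B @ Mbar @ B.T) @ B @ Mbar
A_op = K.T @ S @ K  # stand-in for the discrete frame field operator

# The operator should be symmetric positive semidefinite: S is an
# M-bar-weighted projection, so x^T S x >= 0 for all x.
assert np.allclose(A_op, A_op.T)
assert np.linalg.eigvalsh(A_op).min() > -1e-8
print("discrete operator stand-in is symmetric PSD")
```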
In case we want natural boundary conditions, we set $\bm{\Lambda} = 0$ on the boundary, so $\bm{B}$ becomes a matrix that selects out boundary vertex coordinates from $\bm{\Lambda}$. The matrix expression for $\mathcal{A}_{T,\epsilon}$ with natural boundary conditions thus reduces to
\begin{equation}
\mathcal{A}_{T,\epsilon} = \bm{G}^\top \bm{A}\bm{D}^\circ \overline{\bm{M}}_{T^{\epsilon}}^\circ (\bm{D}^\circ)^\top \bm{A}\bm{G},
\end{equation}
where superscript $\circ$ denotes that boundary columns (and rows in the case of $\overline{\bm{M}}_{T^{\epsilon}}$) have been deleted.
This is similar to the expression for the Bilaplacian with natural boundary conditions in \cite{Stein2018}. In fact, it reproduces the Bilaplacian exactly when $\epsilon = 1$, since $\overline{\bm{M}}_{T^{1}} = \bm{M}^{-1}$.
By adding a unit norm constraint on $\bm{u}$ to the Lagrangian, we can also obtain the discrete frame field eigenproblem
\begin{equation}\mathcal{A}_{T,\epsilon} \bm{u} = \lambda \bm{M}\bm{u}. \end{equation}
\section{Validation}\label{sec:validation}
In this section, we check that the discrete operator constructed in the previous section has the desired behavior---convergence under mesh refinement, controllable anisotropy, and behavior under pullback.
\paragraph*{Dirichlet Problem}
We first examine convergence of solutions to the frame field operator \emph{Dirichlet problem}
\begin{equation}
\begin{aligned}
\mathcal{A}_{T,\epsilon} u &= 0 \\
\nabla_n u \mid_{\partial\Omega} &= 0 \\
u\mid_{\partial\Omega} &= u_0
\end{aligned}
\end{equation}
as we refine the underlying computational mesh. This is a standard test of convergence for finite element methods. We should expect to see the mixed FEM solution converge to the true solution.
The fact that the frame field operator has non-constant coefficients adds an extra complication. To test convergence of the PDE solutions, we first need to ensure that the underlying frame fields converge. To address this, we set up a hierarchy of frame field operators as follows: we first compute a boundary-aligned frame field at the finest resolution via MBO \cite{viertel2019}, then resample it to the coarser meshes, and finally renormalize so that frame fields at all levels are octahedral. A frame field operator is then constructed from the frame field at each level, using the same value of $\epsilon$.
\Cref{fig:cvg-2d} displays Dirichlet solutions over six levels of Loop subdivision on the \textbf{elephant} and \textbf{troll} meshes. Each successive subdivision halves edge lengths. The boundary values $u_0$ are set to square waves, which have components over many frequencies. At the coarsest levels, the high-frequency boundary data is highly aliased, and the solution appears muddy in the interior. After subdividing a few times, the solution quickly becomes smooth and displays the clear influence of the underlying frame field, as the sharp edges in the boundary data propagate along frame directions.
\paragraph*{Spectral Convergence}
Given a hierarchy of frame field operators at successive refinement levels, we can also test convergence of the spectra of the operators. For this experiment, we construct a hierarchy of operators with Neumann boundary conditions over six refinement levels on the \textbf{horse} domain, and we compute the first $64$ eigenvalues and eigenfunctions at each level. In \Cref{fig:eigs-2d}, we plot the error at each level $1$--$5$ against the eigenvalue at the finest level $6$, which we use as a proxy for the true spectrum. We drop the smallest eigenvalue because it is zero for Neumann boundary conditions. The error grows with the eigenvalue itself, but drops consistently at finer levels. \Cref{fig:eigf-2d} shows the same data in a different way, showing how the eigenvalue error drops with average edge length across a variety of eigenvalues. We also display the eigenfunction corresponding to the $64$th eigenvalue at each level. Note how the overall structure of oscillations remains consistent over many levels of refinement.
\begin{figure}
\pgfplotstableread[col sep=comma, skip first n=1]{figures/warp-evs.csv}{\warpevs}
\begin{tikzpicture}
\begin{axis}[
width = \columnwidth,
height = 0.7\columnwidth,
enlarge x limits = 0,
ymode = log,
grid = none,
xlabel = {Eigenvalue Index},
ylabel = {Eigenvalue},
ylabel near ticks,
every tick label/.append style = {font=\tiny},
legend pos = north west]
\addplot+ [mark = none] table [x expr=\coordindex+1, y index=0] {\warpevs};
\addlegendentry{Unwarped}
\addplot+ [mark = none] table [x expr=\coordindex+1, y index=1] {\warpevs};
\addlegendentry{Warped}
\end{axis}
\end{tikzpicture}
\caption{The spectra of the constant frame field operator (``unwarped'') and the map coframe field operator (``warped'') show broad agreement.}
\label{fig:warp-evs}
\end{figure}
\paragraph*{Analytic Ground Truth} In one special case, we can compare eigenvalues and eigenfunctions of the discrete frame field operator to their analytic counterparts. Consider the constant axis-aligned frame field on the square $[-1, 1]^2$, given by
\begin{equation} T = \sum_{\alpha = 1}^2 (e^\alpha)^{\otimes 4}, \end{equation}
where $\{e^\alpha\}$ is the standard basis of $\mathbb{R}^2$. The corresponding operator is
\begin{equation} \begin{aligned} \mathcal{A}_{T,\epsilon} u &= \partial^4_{ijij} u - (1 - \epsilon)\, \partial^4_{iiii} u \\
&= 2 u_{xxyy} + \epsilon (u_{xxxx} + u_{yyyy}). \end{aligned} \end{equation}
A Fourier basis component
\begin{equation} \phi_\omega = e^{i(\omega_x x + \omega_y y)} \end{equation}
is an eigenfunction of $\mathcal{A}_{T, \epsilon}$, since
\begin{equation} \begin{aligned} \mathcal{A}_{T,\epsilon} \phi_\omega &= [2\omega_x^2\omega_y^2 + \epsilon (\omega_x^4 + \omega_y^4)] e^{i(\omega_x x + \omega_y y)} \\
&= \sigma_P(\mathcal{A}_{T,\epsilon})(\omega)e^{i(\omega_x x + \omega_y y)}. \end{aligned} \end{equation}
Thus, we can compute the analytic spectrum of the constant frame field operator on the square by evaluating the principal symbol $\sigma_P(\mathcal{A}_{T,\epsilon})$ on the Fourier lattice and then sorting the resulting eigenvalues.
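A minimal sketch of this procedure follows, assuming cosine modes on $[-1,1]^2$ with frequencies spaced by $\pi/2$ (our reading of the Fourier lattice; the exact lattice depends on the boundary conditions).

```python
import numpy as np

eps = 0.1
N = 20  # lattice truncation

# Frequencies on the square [-1, 1]^2; the half-integer-pi spacing is an
# assumption matching period-4 cosine modes with Neumann-type conditions.
k = np.arange(N)
wx, wy = np.meshgrid(np.pi / 2 * k, np.pi / 2 * k)

# Principal symbol of the constant axis-aligned frame field operator,
# evaluated over the whole lattice at once.
sigma = 2 * wx**2 * wy**2 + eps * (wx**4 + wy**4)

# Sorting yields the analytic spectrum used for comparison.
analytic_spectrum = np.sort(sigma.ravel())
print(analytic_spectrum[:8])
```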
In \Cref{fig:square}, we compare analytic eigenvalues computed this way to eigenvalues of discrete frame field operators generated from constant axis-aligned frame fields on meshes of the square at multiple levels of refinement. Observe that the error drops consistently with the mean edge length.
\begin{figure*}
\centering
\setlength\tabcolsep{5pt}
\begin{tabular}{@{}cc|cccc@{}}
\includegraphics[width=0.45\textwidth]{figures/rockerarm-ff-int} &
\includegraphics[width=0.45\textwidth]{figures/rockerarm-ff-sing} &
\includegraphics[width=0.45\textwidth]{figures/rockerarm-ef1-scatter} &
\includegraphics[width=0.45\textwidth]{figures/rockerarm-ef3-scatter} &
\includegraphics[width=0.45\textwidth]{figures/rockerarm-ef5-scatter} &
\includegraphics[width=0.45\textwidth]{figures/rockerarm-ef10-scatter} \\
& & $\lambda = 1.0048$ & $\lambda = 3.0105$ & $\lambda = 4.9904$ & $\lambda = 9.9805$
\end{tabular}
\caption{At several different frequencies, oscillations of frame field operator eigenfunctions on the volumetric \textbf{rockerarm} model align to the field directions. Integral and singular curves of the frame field are shown at left.}
\label{fig:eigf-3d}
\end{figure*}
\paragraph*{Volumetric Spectral Convergence}
To test convergence of our operator on a volumetric domain, we perform a similar experiment to the one detailed above. However, we lack an equivalent to Loop subdivision for tetrahedral meshes that preserves tetrahedral angles and overall quality. Thus, to construct a hierarchy of frame fields, we use a sequence of (separately generated) tetrahedral meshes of different target edge lengths. We optimize a frame field on the finest mesh using volumetric frame field MBO \cite{palmer2020}. Then the field coefficients are linearly interpolated to the vertices of each coarser mesh, reprojected into the octahedral variety, and further optimized to ensure they are smooth at each level. This procedure should ensure that the overall structure of the frame field is consistent across levels.
\Cref{fig:cvg-3d} plots eigenvalue error for a frame field operator across various levels of refinement of the unit ball domain. The octahedral frame field and its singular structure are depicted in the inset. Error is measured against the eigenvalues at the finest level. There is a consistent decrease in error with decreasing mean edge length.
\Cref{fig:ef-3d} compares frame field operator eigenfunctions computed at various mesh resolutions on the \textbf{teddy}. They appear to stabilize as the mesh resolution increases, more so at lower frequencies.
\paragraph*{Controllable Anisotropy}
The frame field operator $\mathcal{A}_{T,\epsilon}$ has two parameters: a frame field $T$, which encodes the orientation of anisotropy, and a scalar $\epsilon$, which controls the degree of anisotropy and the uniform ellipticity bound. In \Cref{fig:impulseresponse}, we examine the effect of different settings of $T$ and $\epsilon$. The impulse response to a sum of delta functions $u_0$ is computed by solving a short-time diffusion problem $\partial_t u = \mathcal{A}_{T,\epsilon} u$ with natural boundary conditions via one step of implicit Euler integration---so the discrete equation is
\begin{equation}
(\bm{M} + \tau \mathcal{A}_{T,\epsilon}) \bm{u} = \bm{M} \bm{u}_0,
\end{equation}
where $\tau$ is the diffusion time.
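As a rough numerical illustration (not the paper's implementation), the implicit Euler step above can be sketched in Python; here a squared path-graph Laplacian stands in for the discretized frame field operator $\mathcal{A}_{T,\epsilon}$, and the mass matrix is taken to be the identity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative stand-ins: a 1D chain Laplacian L, so A = L^2 mimics a
# fourth-order operator; M is a unit lumped mass matrix.
n = 100
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
A = (L @ L).tocsc()
M = sp.identity(n, format="csc")

u0 = np.zeros(n)
u0[n // 2] = 1.0          # impulse (delta) initial condition
tau = 1e-3                # diffusion time

# One implicit Euler step: solve (M + tau * A) u = M u0
u = spla.spsolve((M + tau * A).tocsc(), M @ u0)
```

The single linear solve smooths the impulse slightly; repeating the step (or increasing $\tau$) diffuses it further, which with the true frame field operator and small $\epsilon$ produces the field-aligned spreading shown in the figure.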
When $\epsilon = 1$, the frame field operator reduces to the Bilaplacian, and diffusion occurs isotropically. When $\epsilon < 1$, observe that diffusion occurs mostly along integral curves of the underlying frame fields. This is due to propagation along characteristic directions of the operator. The effect is accentuated as $\epsilon \to 0$. Also note how the impulse response differs for two different frame fields on the disk---displaying fine-grained control of anisotropy through the frame field.
\Cref{fig:cylinder} displays control of anisotropy in a volumetric setting. When the frame field is aligned to the axis of the cylinder, the eigenfunctions oscillate radially and along this axis. With a helical frame field, the eigenfunctions show a similar helical pattern.
\paragraph*{Map Frame Field} \Cref{fig:warp} tests the results of \Cref{sec:paramconnection}. We start with a constant axis aligned frame field on a base domain comprising a union of rectangles. The domain is then warped via a conformal map computed in closed form. The derivatives of the map are also computed and used to build the map coframe field on the warped domain, a conformal octahedral field. Eigenfunctions of the map frame field operator on the warped domain are compared to eigenfunctions of the constant frame field operator on the base domain, after the latter are remapped onto the warped domain. Note the overall qualitative similarity of the eigenfunctions, showing broad agreement even at relatively low frequencies. \Cref{fig:warp-evs} shows that the spectra of the two operators also agree.
\paragraph*{More Volumetric Examples}
\Cref{fig:eigf-3d} shows eigenfunctions on the \textbf{rockerarm} at various eigenvalues. Eigenfunctions of the frame field operator at several relatively high frequencies display unmistakable alignment to the frame field.
\section{Additional Experiments}\label{sec:applications}
In this section, we provide some additional experiments involving our new operator and its discretization. In particular, we demonstrate how it can be substituted into two operator-based methods in geometry processing in place of its isotropic counterparts, yielding output from these methods that is aware of the structure of the input frame field.
\begin{figure*}
\centering
\newcommand{0.11\textwidth}{0.11\textwidth}
\setlength\tabcolsep{1.5pt}
\begin{tabular}{@{}c|ccccc@{}}
\includegraphics[height=0.11\textwidth]{figures/disk-dist-ff} &
\includegraphics[height=0.11\textwidth]{figures/disk-dist-eps1} &
\includegraphics[height=0.11\textwidth]{figures/disk-dist-eps1e-1} &
\includegraphics[height=0.11\textwidth]{figures/disk-dist-eps1e-2} &
\includegraphics[height=0.11\textwidth]{figures/disk-dist-eps1e-3} &
\includegraphics[height=0.11\textwidth]{figures/disk-dist-eps1e-4} \\
\includegraphics[height=0.11\textwidth]{figures/horse-dist-ff} &
\includegraphics[height=0.11\textwidth]{figures/horse-dist-eps1} &
\includegraphics[height=0.11\textwidth]{figures/horse-dist-eps1e-1} &
\includegraphics[height=0.11\textwidth]{figures/horse-dist-eps1e-2} &
\includegraphics[height=0.11\textwidth]{figures/horse-dist-eps1e-3} &
\includegraphics[height=0.11\textwidth]{figures/horse-dist-eps1e-4} \\
& $\epsilon = 1$ &
$\epsilon = 10^{-1}$ &
$\epsilon = 10^{-2}$ &
$\epsilon = 10^{-3}$ &
$\epsilon = 10^{-4}$
\end{tabular}
\caption{Analogously to biharmonic distances, we can compute a smooth distance function from distances in the spectral embedding given by our operator with Neumann boundary conditions. When $\epsilon = 1$ we get biharmonic distances. As $\epsilon$ decreases, the distance functions become more anisotropic, and the shortest paths computed by gradient descent on distance become more aligned to the frame fields.}
\label{fig:dist}
\end{figure*}
\subsection{Anisotropic Biharmonic Distance}
By analogy to the biharmonic distance \cite{Lipman2010}, we can design smooth anisotropic distance functions that exhibit ``Manhattan-like'' behavior along a prescribed frame field. These distance functions might be used, for example, in navigation, where we want a robot to trace out a path along a grid that varies smoothly and aligns to domain boundaries.
Inspired by the biharmonic distance, our frame field operator distances are computed as follows: first, the smallest $N$ nonzero eigenvalues $\lambda_k$ and corresponding eigenfunctions $\phi_k$, $k = 1, \dots, N$, of the frame field operator $\mathcal{A}_{T,\epsilon}$ are computed, discarding any eigenpairs with $\lambda_k = 0$. Then the frame field operator distance between points $p$ and $q$ is defined by:
\begin{equation}
d_{T, \epsilon}(p, q)^2 \coloneqq \sum_{k=1}^N \frac{|\phi_k(p) - \phi_k(q)|^2}{\lambda_k^2}.
\end{equation}
This amounts to computing distances in the spectral embedding induced by the inverse of the frame field operator.
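A minimal sketch of this spectral distance, assuming precomputed eigenpairs; a Neumann path-graph Laplacian stands in for the discretized frame field operator purely for illustration.

```python
import numpy as np

# Stand-in operator: Neumann graph Laplacian of a path with n vertices.
n = 60
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0

lam, phi = np.linalg.eigh(L)
keep = lam > 1e-8                       # discard the zero (constant) mode
lam, phi = lam[keep][:10], phi[:, keep][:, :10]   # N = 10 eigenpairs

def spectral_dist(p, q):
    # d(p, q)^2 = sum_k |phi_k(p) - phi_k(q)|^2 / lam_k^2
    return float(np.sqrt(np.sum((phi[p] - phi[q]) ** 2 / lam ** 2)))
```

Since the distance is an ordinary Euclidean distance in the embedding $p \mapsto (\phi_k(p)/\lambda_k)_k$, it is symmetric and satisfies the triangle inequality by construction.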
\Cref{fig:dist} illustrates isolines of our frame field-aware distance on a disk and on a planar horse model; we also show shortest paths in the domain from a set of randomly chosen points to a single source point, computed using gradient descent on the distance function. When $\epsilon$ is fairly large, our distances behave similarly to the biharmonic distance. As $\epsilon\rightarrow0$, however, the level sets of the distance are roughly $45^\circ$ rotated from the field, as might be expected from computing $L^2$ distances between functions like the impulse responses illustrated in \Cref{fig:impulseresponse}.
\subsection{Coloring with Frame Fields}
Diffusion curves \cite{Orzan2008} define a way to propagate color information from a sparse set of user-defined curves to the remainder of an image; similar approaches exist with higher-order operators \cite{Finch2011}.
One can achieve similar results by prescribing color values at the boundary of a meshed domain
and then minimizing a smoothing energy to smoothly color the domain.
As an illustration of this technique, in \Cref{fig:boundarycoloring} we solve a quadratic programming problem in each RGB color channel to obtain the color value \(\bm{c}\):
\begin{equation}\label{eq:diffusioncoloring}
\bm{c} = \argmin_{\bm{c}} \; \frac12 \bm{c}^\transp \mathcal{A}_{T,0.01} \bm{c},
\quad
\bm{l} \leq \bm{c} \leq \bm{u}
\;\textrm{,}
\end{equation}
using the primal version of the operator with its natural boundary conditions, and with the inequality bounds set so that the minimum and maximum values in each color channel occur at the boundary.
The choice of field heavily influences the result; the direction of color diffusion
follows the selected field. Hence, we can view \eqref{eq:diffusioncoloring} using the frame field operator as a means of giving greater control to diffusion-based painting methods by linking this toolbox to frame field design.
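The per-channel bound-constrained quadratic program can be sketched as follows. This is an illustrative stand-in: a path-graph Laplacian replaces $\mathcal{A}_{T,0.01}$, the boundary colors are pinned via tight bounds, and a generic box-constrained solver is used rather than the authors' QP solver.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in symmetric PSD operator: path-graph Laplacian.
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i] += 1.0; A[i + 1, i + 1] += 1.0
    A[i, i + 1] -= 1.0; A[i + 1, i] -= 1.0

# Box bounds l <= c <= u; "boundary" colors pinned with equal bounds.
lo = np.zeros(n); hi = np.ones(n)
lo[0] = hi[0] = 1.0
lo[-1] = hi[-1] = 0.0

# Minimize (1/2) c^T A c subject to the bounds.
res = minimize(lambda c: 0.5 * c @ A @ c,
               x0=0.5 * np.ones(n),
               jac=lambda c: A @ c,
               bounds=list(zip(lo, hi)),
               method="L-BFGS-B")
c = res.x   # smooth interpolation between the pinned values
```

With this stand-in operator the minimizer is simply the linear interpolation between the pinned endpoint values; with the frame field operator, the interpolation instead follows the field directions.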
\begin{figure*}
\centering
\newcommand{0.45\textwidth}{0.23\columnwidth}
\setlength\tabcolsep{3pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{@{}cc|cc|cc|cc@{}}
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-flower-odeco} &
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-field-flower-odeco} \;&\;
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-flower-odeco-rotpi} &
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-field-flower-odeco-rotpi} \;&\;
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-airfoil-odeco} &
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-field-airfoil-odeco} \;&\;
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-airfoil-odeco-rotpi} &
\includegraphics[width=0.45\textwidth]{figures/diffusioncurves-field-airfoil-odeco-rotpi}
\end{tabular}
\caption{This figure shows two different domains on which we solve the boundary value coloring problem \eqref{eq:diffusioncoloring} twice with the same boundary data, but with different frame fields.
The resulting coloring changes based on the frame field, as colors diffuse along the field directions.}
\label{fig:boundarycoloring}
\end{figure*}
\subsection{Conformal Octahedral Fields and Singularities}
Many of the octahedral fields we have depicted in this paper include singularities, places where the field is ill-defined because octahedral fields have unit norm everywhere. Our theory does not explicitly deal with these singularities; instead, they are considered to be ``cut out'' of the domain $\Omega$. Conformal octahedral fields can explicitly represent singularities as zeroes (i.e., points $x$ where $\|T(x)\| = 0$). In \Cref{fig:normed-comparison}, we compare the frame field operators arising from a pair of fields, one of which is octahedral and the other conformal octahedral. The fields have identical structure because the octahedral field is simply given by normalizing the conformal octahedral field. Their eigenfunctions look similar, but they appear in a different order. A deeper investigation of the behavior of a frame field operator around singularities of its underlying frame field would be an intriguing topic for future work.
\section{Conclusion and Future Work}
Our work provides an initial link between two key areas of study in geometry processing: spectral geometry and frame field design. By moving from second-order to fourth-order, we are able to design a differential operator that captures the complex structure of frame fields in both planar regions and volumes.
From a technical perspective, our work advances applications of mixed finite elements to a broader class of operators than has previously been considered in geometry processing, including a variety of boundary conditions. While the experiments in \Cref{sec:validation} show that our discretization reaches the empirical standard of convergence needed for applications, a theoretical proof of convergence in the limit of mesh refinement remains a challenging avenue for future research in numerical analysis, broadening the scope of isotropic results like \cite{scholz1978mixed,stein2019mixed}. Our constructions can also be generalized easily to non-orthogonal frame fields, although the design of such fields is largely an open problem in geometry processing.
Perhaps the most obvious next step of our research, however, will involve incorporating our operator into methods like those described in \Cref{sec:meshingrelatedwork} for quad and hex meshing.
As our high-frequency eigenfunctions exhibit grid-like oscillatory behavior (see \Cref{sec:paramconnection}), we can introduce eigenproblems involving our operator into Morse-based meshing pipelines. While engineering such a meshing pipeline may require substantial changes to heuristic steps of existing Morse-based algorithms, which depend heavily on the structure of the Laplacian operator specifically, the promise of linking Morse-based and field-based meshing is an enticing next step beyond the simpler applications suggested in \Cref{sec:applications}.
It would also be interesting to explore frame field operators acting on vector or tensor fields. Replacing the Hessian in our variational problem with the differential of a vector field would be one simple way to do this, which would result in a second-order vectorial operator. We expect that the eigenfields of such operators would also show frame-aligned oscillations.
More broadly, our work suggests a new way to think about frame fields, direction fields, and other generalized vector fields studied in geometry processing. Associating operators to fields exposes a rich representation admitting a wide variety of tools for research and application---spectral methods and semidefinite programming come to mind. It may be possible to pose frame field design problems by optimizing spectral properties over the space of frame field operators. We hope that this new representation will enable new end-to-end methods in meshing and other domains.
\begin{figure*}
\centering
\newcommand{0.45\textwidth}{0.45\textwidth}
\begin{tabular}{@{}c|c@{}}
\includegraphics[width=0.08\textwidth]{figures/disk-ff-unnormed} & \includegraphics[width=0.08\textwidth]{figures/disk-ff-norm} \\
\includegraphics[width=0.45\textwidth]{figures/disk-unnormed-efs} &
\includegraphics[width=0.45\textwidth]{figures/disk-normed-efs}
\end{tabular}
\caption{This figure compares the first $64$ nonconstant eigenfunctions of operators associated to two frame fields that share the same structure, except that one is octahedral (left) and one is conformal octahedral (right), scaling to zero near its singularities. The eigenfunctions look very similar, but they appear in a different order.}
\label{fig:normed-comparison}
\end{figure*}
\section*{Acknowledgments}
The authors would like to thank Mirela Ben-Chen and David Bommes for their thoughtful insights and feedback, as well as Xianzhong Fang and Jin Huang for helping with some speculative experiments.
David Palmer acknowledges the generous support of the Hertz Graduate Fellowship and the MathWorks Fellowship. This work is supported in part by the Swiss National Science Foundation's Early Postdoc.Mobility fellowship. The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grant W911NF2010168, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grant IIS-1838071, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech–MIT Next Generation Program.
\printbibliography
\end{document}
\section{Introduction}
\label{introduction}
Located at the geographic South Pole, the IceCube Neutrino Observatory~\cite{ICdetector} is the world's largest neutrino telescope in terms of instrumented volume. It consists of a one-cubic-kilometre Cherenkov detector in the ultra-pure Antarctic ice~\cite{SPICE} at depths between 1.45~km and 2.45~km and a square-kilometre air-shower detector at the surface of the ice~\cite{IceTop}. The primary objectives of the detector are measuring high-energy astrophysical neutrino fluxes and determining the sources of these fluxes~\cite{HEnu_Science,TXS}.\\
\indent Currently, an upgrade comprising seven densely instrumented strings of new digital optical modules (DOMs) in the centre of the active volume of the IceCube detector is being built: the IceCube Upgrade~\cite{ICRC2019:ICU-project}. On each string, DOMs will be regularly spaced with a vertical separation of 3\,m between depths of 2160\,m and 2430\,m below the surface of the ice. Three different types of DOMs will be used: the pDOM, which is based on the design of the existing IceCube DOMs with upgraded electronics; the D-Egg~\cite{ICRC2021:D-Egg}, which has two 8-inch photomultiplier tubes (PMTs), facing up and down respectively; and the mDOM~\cite{ICRC2021:mDOM}, which has 24~three-inch PMTs distributed for close to uniform directional coverage. \\
\indent Precisely characterizing the optical properties of the IceCube detector medium, and thereby reducing uncertainties in directional and energy reconstruction of events, is one of the goals of the IceCube Upgrade. For this purpose, novel calibration devices will be deployed, with the IceCube Upgrade camera system being a key component.
\subsection{Objective and setup of the camera system}
\begin{figure}
\centering
\begin{minipage}{.8\textwidth}
\centering
\includegraphics[width=.75\linewidth]{figure/schematic.pdf}
\includegraphics[width=.48\linewidth]{figure/sim-hole.pdf}
\includegraphics[width=.48\linewidth]{figure/sim-bulk.pdf}
\end{minipage}
\caption[margin=0.8cm]{Schematic of planned IceCube Upgrade camera measurements and corresponding simulated images. Left: Refrozen hole ice measurement utilising two vertically separated optical modules on the same string. A downward facing camera observes an upward pointing LED from the DOM below. The simulated image at the bottom left represents imaging the bubble column in the refrozen hole ice. Right: Bulk ice measurement utilising two optical modules on separate strings. A camera observes scattered light from an LED on an adjacent string pointing at an angle of $60^{\circ}$ to the camera. The plot at the bottom right shows the light cone observed from an optical module on the adjacent string in the simulation. The schematic is not to scale. Pixel noise is not included in the simulated images.}
\label{fig:schematic}
\end{figure}
The IceCube Upgrade camera system aims to measure the optical properties of the ice in the vicinity of the DOMs. Additionally, information on the position and orientation of the optical sensors will be obtained. To do this, the camera system utilises camera modules integrated inside each newly installed DOM to measure light emitted from illumination modules that accompany each camera and point in the same direction. With this setup, images of both reflected and transmitted light will be taken.\\
\indent In all three types of new DOMs, three cameras will be installed to carry out different types of measurements. In the mDOM, two cameras will be installed in the upper hemisphere pointing upwards at $45^{\circ}$. The third camera is positioned at the bottom pole of the mDOM. At the top of the mDOM, an additional illumination board is placed to illuminate the refrozen hole ice. In the D-Egg, all three cameras are installed on a ring in the lower half of the sensor, pointing towards the horizon at angles of $120^{\circ}$ to each other. The method of integration into the pDOMs is currently being developed. \\
\indent In Fig.~\ref{fig:schematic}, left, one type of measurement is sketched. The downwards facing camera captures direct and scattered light from an illumination module in the DOM below, and the optical properties of the refrozen ice will be inferred from the distribution of light in the images. Of special interest is a column of ice with optical properties different from those of the surrounding ice, known as the ``bubble column'', which was originally detected by a special camera system deployed below the deepest DOM of IceCube string~80~\cite{ICdetector}. The other cameras on the mDOMs and D-EGGs measure the optical properties of the bulk ice, as shown in Fig.~\ref{fig:schematic}, right. The bulk ice between strings is illuminated by an LED on one of the DOMs, and a camera in a DOM on a neighboring string takes images of the scattered light. The optical properties of the ice can be determined from the distribution of incident light. Since there are multiple cameras pointing in different directions, the bulk ice measurement can be performed in a direction-dependent manner to gauge anisotropies in the optical properties of the ice~\cite{ICRC2021:anisotropy}.\\
\indent Simulated images based on a photon-propagation Monte Carlo code~\cite{ppc} used in previous camera studies~\cite{ICRC2017:Gen2_camera, ICRC2019:ICU_camera} are shown in Fig.~\ref{fig:schematic}, bottom. These studies are to be extended to develop the image analysis methods that will be applied to the actual image data from the deployed camera system.
\subsection{Hardware}
The camera module for the IceCube Upgrade camera system is a custom-designed device consisting of two PCBs constituting the camera and one PCB that serves as an illumination board. The parts can be seen in Fig.~\ref{fig:camdetail}. It uses a Sony \textit{IMX225LQR-C} CMOS image sensor, controlled via an Inter-Integrated Circuit~(I2C) interface with a \textit{MachXO2} FPGA by Lattice Semiconductor. The FPGA also bridges the incoming communication using a Serial Peripheral Interface~(SPI) to the high-speed interface with the image sensor, whose connections to the DOM hardware are shown in Fig.~\ref{fig:comm}. It also extracts the captured image data from the sensor onto an 8~MB RAM inside the camera that serves as a frame buffer. Images have a maximal resolution of 1312 by 979 pixels with a depth of 12 bits, resulting in a file size of 2.7~MB per image using full pixel information, which means that the camera can buffer up to 3 images. The illumination board uses an \textit{SSL 80 GB CS8PM1.13} LED from Oslon. The LED is operated with 1~W of power, generating 43~lm of light with a dominant wavelength of 470~nm and a spectral bandwidth of 25~nm. The emitted light has a full width at half maximum of $80^{\circ}$.\\
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{figure/camphotos.png}
\caption[margin=1cm]{The major components of the IceCube Upgrade camera system. A: The camera as seen from the front; B: The camera as seen from the back showing the connectors for the mainboards and the LED system; C: The LED board for the mDOMs from the front with the LED ID code used to identify the boards; D: The LED board backside that shows the connector; E: The D-EGG illumination system from the front showing the board-to board connector and LED; F: The D-EGG illumination system from the back.}
\label{fig:camdetail}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{.80\textwidth}
\centering
\includegraphics[width=.40\linewidth]{figure/communication_pipeline.png}
\end{minipage}
\caption[margin=1cm]{Flow chart of camera data and image transfer speeds between camera and DOM main board.}
\label{fig:comm}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{0.6\textwidth}
\centering
\includegraphics[width=.45\linewidth]{figure/cam_dock.png}
\includegraphics[width=.49\linewidth]{figure/mDOM_integrated.png}
\includegraphics[width=.45\linewidth]{figure/cam_ring.png}
\includegraphics[width=.49\linewidth]{figure/cam_D-Egg_integrated.png}
\end{minipage}
\begin{minipage}{0.29\textwidth}
\includegraphics[width=.94\linewidth]{figure/mDOM_integrated2.png}
\end{minipage}
\caption[margin=0.8cm]{Camera testing battery in custom 3D-printed holding docks (top left), assembled camera ring for the D-Egg module with three cameras and illumination boards (bottom left), camera ring integrated in a D-EGG (bottom middle), camera inside an mDOM holding structure (top middle), and camera in the mDOM holding structure seen from the outside (right).}
\label{fig:camera_photos}
\end{figure}
\indent Cameras for the mDOM are integrated directly into the 3D-printed PMT holding structure. The cameras look through the glass of the pressure vessel using windows in the holding structure. For the D-EGGs, cameras are attached to rings made from FR-4 fibre-reinforced plastic using aluminum brackets. An image of such a ring can be seen in Fig.~\ref{fig:camera_photos}. The rings are glued to the glass of the D-EGG pressure vessels using room-temperature-vulcanizing silicone glue.
\section{Camera calibration and design verification}
\label{sec:CAT}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/CAT_Flow_Chart.pdf}
\caption[margin=1cm]{Flow chart for the Camera Acceptance Testing.}
\label{fig:CAT_pipeline}
\end{figure}
To verify camera operation and to calibrate more than 2000 cameras, all components are subjected to an extensive suite of tests, as shown in Fig.~\ref{fig:CAT_pipeline} (see details in \cite{ICRC2019:ICU_camera}). The entire test cycle for a camera takes 48~hours, with over 3000 images (8.1~GB) captured per camera in the process. During room-temperature ($20^{\circ}$C) and low-temperature ($-40^{\circ}$C) tests, we capture calibration-relevant data for each camera.\\
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{figure/darknoise.png}
\caption{Mean pixel darknoise distribution over 1178 cameras for multiple settings.}
\label{fig:darknoise}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figure/params.pdf}
\caption{Distributions over 254 cameras, restricted to cameras whose linear fits to exposure time and magnification have an $R^2$ better than 0.99. Left: slope $m$ and offset $c$ of a linear fit to the pixel response for increasing exposure time. Middle: same as left, but for increasing magnification. Right: shift between image sensor and lens.}
\label{fig:parameters}
\end{figure}
\indent As cameras will operate in sparse light conditions, a characterization of the pixel darknoise is paramount. Fig.~\ref{fig:darknoise} shows the average pixel darknoise at $-40^{\circ}$C for different camera settings.\\
\indent To verify the camera response to a light source, we take multiple images of an LED at a distance of 1~m at different camera settings. Generally, we find that for each camera the response of unsaturated pixels scales linearly with exposure time and with magnification, where the magnification is related to the camera gain by $\sqrt{10^{\mathrm{Gain}\text{[dB]}/10}}$, as shown in Fig.~\ref{fig:parameters}. The camera gain is defined as a factor in the conversion of electric charge per pixel to the digital count of that pixel.\\
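The gain-to-magnification relation and the linear fits described above can be illustrated with a small sketch; the exposure times and pixel responses below are synthetic, not actual calibration data.

```python
import numpy as np

def gain_factor(gain_db):
    # Linear pixel-response scaling implied by a gain setting in dB,
    # per the sqrt(10^(Gain[dB]/10)) relation quoted in the text.
    return np.sqrt(10.0 ** (gain_db / 10.0))

# Linear fit of (synthetic) pixel response versus exposure time,
# recovering a slope m and offset c as in the calibration described above.
t = np.linspace(0.01, 0.2, 20)      # exposure times [s], synthetic
resp = 5000.0 * t + 12.0            # synthetic unsaturated pixel response
m, c = np.polyfit(t, resp, 1)
```

For example, a gain setting of 20~dB corresponds to a tenfold increase in pixel response under this relation.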
\indent We measure the lens--image-sensor alignment for each camera (see Fig.~\ref{fig:parameters}, right). Manufacturing inaccuracies can result in a small misalignment between the fish-eye lens and the image sensor of $\sim 0.5$~mm. The shift in alignment can be determined with sub-pixel accuracy, which translates to an error source for angular estimation below $0.2^{\circ}$.
\section{Run plans for the camera system and application to IceCube-Gen2}
\label{sec:runplan}
\begin{figure}[ht]
\centering
\includegraphics[width=0.38\textwidth]{figure/pmtexcitation.pdf}
\includegraphics[width=0.56\textwidth]{figure/photon_yield_vs_distance.pdf}
\caption{Left: PMT base rate before and after switching on one illumination LED for 90~s. PMT threshold is 0.25~PE. Right: Relative light intensity in cameras directed at an adjacent string with a sideways pointing illumination LED.}
\label{fig:pmtexcitation}
\label{fig:photonyield}
\end{figure}
\subsection{In-ice run plan of the camera system}
To maximize science data taking and to minimize the impact on detector up-time and supernova readiness, camera calibration runs will use a number of approaches; these include keeping run times to the minimum required to achieve calibration goals and operating multiple cameras across the detector simultaneously. Camera runs will require the operation of illumination boards that introduce light into the detector (LID), which in turn requires all optical sensor modules to operate in a calibration mode with the PMT HV switched off. Illumination boards will excite PMTs, resulting in an increased noise rate following a calibration run. Fig.~\ref{fig:pmtexcitation} shows, based on a lab measurement with an 8-inch D-Egg PMT, the increase of the PMT base rate when the illumination boards are switched on and its decrease to normal levels within about 20~min; this is comparable to the settling time measured in the current IceCube detector after runs of the special camera's illumination devices. Given the low noise rates at the cold operating temperature of IceCube as well as the operational examples from the current IceCube detector, the impact on nominal detector performance is expected to be minimal. The impact on the supernova trigger can be mitigated by excluding optical modules in the vicinity of the illumination boards that were operated during the calibration run.\\
\indent A first set of calibration runs cycling through all cameras and all illumination boards, with one camera or LED operating at a time for one second, will be sufficient to triangulate the orientations and positions of all DOMs from the observed LEDs in the set of camera images. In-water tests demonstrated that the camera can resolve 10~cm separations at 25~m distance~\cite{ICRC2019:ICU_camera}. Once the geometry of each camera is well established, bulk ice measurements can be performed, for which a camera observes one or more LEDs on an adjacent string. Hole ice measurements will be performed by simultaneously operating all mDOM--mDOM pairs, with the downward facing camera on the upper mDOM taking a transmission photograph of the illuminated LED in the mDOM below. For reflection photography hole ice studies, a large fraction of all mDOMs will be operated with their bottom camera and the associated LED.
\subsection{Application of camera system to the IceCube-Gen2}
For IceCube-Gen2~\cite{ICRC2021:Gen2}, the ice models developed for IceCube can be utilized, and a similar camera system, based on the experience from developing and operating the IceCube Upgrade camera system, will be employed. The camera system will perform the hole ice related measurements outlined in this work, while string-to-string measurements will be very challenging and are not a priority for Gen2 (see Fig.~\ref{fig:photonyield}). Bulk ice measurements with cameras could be made via back-scattered light using setups similar to those utilised for the SPIceCore camera measurements~\cite{ICRC2021:SPIceCore_Camera}.
\section{Conclusions}
\label{conclusions}
The camera system is a key component for a comprehensive understanding of the detector medium. Calibration measurements acquired with the IceCube Upgrade act as a science multiplier, as they will enable the retroactive analysis of more than 15~years of IceCube data with a substantially improved ice model. Improvements in angular and energy resolution directly affect the science capabilities of IceCube; in particular, improved neutrino event pointing is critical for multi-messenger science. A significant fraction of cameras have been tested and integrated into the DOMs for the IceCube Upgrade. The evaluated test data demonstrate the quality of the system and its capabilities. For IceCube-Gen2, a similar camera system will be employed to perform a hole ice survey, with additional potential for bulk ice studies using back-scattered light.
\bibliographystyle{ICRC}
\section{Introduction}
In this paper, all graphs considered are simple, finite and undirected. We refer to the
book \cite{B} for undefined notation and terminology in graph theory. For simplicity,
a set of internally vertex-disjoint paths will be called {\it disjoint}. A path in
an edge-colored graph is a {\it rainbow path} if its edges have different colors. An
edge-colored graph is {\it rainbow $k$-connected} if any two vertices of the graph
are connected by $k$ disjoint rainbow paths of the graph. For a $k$-connected graph
$G$, the {\it rainbow $k$-connection number} of $G$, denoted by $rc_{k}(G)$, is defined
as the smallest number of colors required to make $G$ rainbow $k$-connected. This concept
arose after the September 11, 2001 terrorist attacks, which exposed many weaknesses in the
secure information transfer of the USA. Among these, perhaps the most significant is the problem of secure communication between government departments. Between every pair of departments there is an
information transfer path (possibly passing through other departments), and each step of this path requires a password. To protect the system from terrorist intrusion, all passwords along the path must be different. Since the number of passwords involved can be quite large, a natural question is: what is the smallest number of passwords that ensures one or more secure paths between every pair of departments? This concept
was first introduced by Chartrand et al. in \cite{CJM, CJMZ}. Since then, a lot of
results on the rainbow connection have been obtained; see \cite{GMP,LSS, LSu}.
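For small graphs, rainbow $k$-connection for $k=1$ can be checked by brute force. The following illustrative sketch (not from the literature) verifies that two colors suffice to make the cycle $C_4$ rainbow connected, in line with the definition above.

```python
# Brute-force check that an edge coloring makes a small graph rainbow
# connected (k = 1): every pair of vertices must be joined by a path
# whose edges all receive distinct colors.
def rainbow_connected(vertices, color):
    # color: dict mapping frozenset({u, v}) edges to colors
    adj = {v: set() for v in vertices}
    for e in color:
        u, v = tuple(e)
        adj[u].add(v); adj[v].add(u)

    def rainbow_path_exists(s, t):
        # DFS over simple paths, tracking the set of colors used so far
        stack = [(s, {s}, frozenset())]
        while stack:
            v, seen, used = stack.pop()
            if v == t:
                return True
            for w in adj[v] - seen:
                col = color[frozenset({v, w})]
                if col not in used:
                    stack.append((w, seen | {w}, used | {col}))
        return False

    return all(rainbow_path_exists(s, t)
               for s in vertices for t in vertices if s != t)

# C_4 with two colors on alternating edges is rainbow connected,
# matching the known value rc(C_4) = 2.
C4 = [0, 1, 2, 3]
coloring = {frozenset({0, 1}): "a", frozenset({1, 2}): "b",
            frozenset({2, 3}): "a", frozenset({3, 0}): "b"}
```

With a single color, $C_4$ fails the check, since opposite vertices can only be joined by two-edge paths whose edges would share the color.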
As a natural counterpart of the concept of rainbow $k$-connection, the concept of
rainbow vertex $k$-connection was first introduced by Krivelevich and Yuster
in \cite{KY} for $k=1$, and then by Liu et al. in \cite{LMS} for general $k$.
A path in a vertex-colored graph is a {\it vertex-rainbow path} if its internal
vertices have different colors. A vertex-colored graph is {\it rainbow vertex $k$-connected}
if any two vertices of the graph are connected by $k$ disjoint vertex-rainbow paths of the
graph. For a $k$-connected graph $G$, the {\it rainbow vertex $k$-connection number} of
$G$, denoted by $rvc_k(G)$, is defined as the smallest number of colors required to make
$G$ rainbow vertex $k$-connected. There are many results on this topic; we refer the reader to \cite{CLS1,CLS2,LS,MYWY}.
Considering geodesics instead of paths, Li et al. \cite{LMS1} introduced the concept of strong rainbow
vertex-connection. A vertex-colored graph is {\it strong rainbow
vertex-connected}, if for any two vertices $u, v$ of the graph,
there exists a vertex-rainbow $u$-$v$ geodesic, i.e., a $u$-$v$ path
of length $d(u,v)$. For a connected graph $G$, the {\it strong
rainbow vertex-connection number} of $G$, denoted by $srvc(G)$, is
the smallest number of colors required to make $G$ strong rainbow
vertex-connected.
In 2011, Borozan et al. \cite{BFG} introduced the concept of proper
$k$-connection of graphs. A path in an edge-colored graph is a {\it
proper path} if any two adjacent edges differ in color. An
edge-colored graph is {\it proper $k$-connected} if any two vertices
of the graph are connected by $k$ disjoint proper paths of the
graph. For a $k$-connected graph $G$, the {\it proper $k$-connection
number} of $G$, denoted by $pc_{k}(G)$, is defined as the smallest
number of colors required to make $G$ proper $k$-connected. This concept also
arises from the situation described above if we slightly relax the requirement:
only adjacent passwords along the path, rather than all passwords on it, need to be different. Note
that $$1\leq pc_k(G)\leq \min\{\chi'(G), rc_k(G)\},\ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ (1)$$ where $\chi'(G)$ denotes the edge-chromatic
number. Recently, the case for $k=1$ has been studied by Andrews et
al. \cite{ALL}, Laforge et al. \cite{LLZ} and Mao et al. \cite{MYWY1}.
Inspired by the concepts above, Jiang et al. \cite{JLZZ} introduced the concepts of
proper vertex $k$-connection and strong proper vertex-connection. A path in a vertex-colored graph is a
{\it vertex-proper path} if any two internal adjacent vertices
differ in color. A vertex-colored graph is {\it proper vertex
$k$-connected} if any two vertices of the graph are connected by $k$
disjoint vertex-proper paths of the graph. For a $k$-connected graph
$G$, the {\it proper vertex $k$-connection number} of $G$, denoted
by $pvc_{k}(G)$, is defined as the smallest number of colors
required to make $G$ proper vertex $k$-connected. Let $\kappa(G)=\max\{k: G \mbox{ is $k$-connected}\}$
denote the vertex-connectivity of $G$. Note that $pvc_{k}(G)$ is
well defined if and only if $1\leq k\leq\kappa(G)$. We write $pvc(G)$
for $pvc_{1}(G)$, and similarly, $rc(G), rvc(G)$ and $pc(G)$ for
$rc_1(G), rvc_1(G)$ and $pc_1(G)$ respectively. For a complete graph $G$, set $pvc(G)=0$. Moreover, we have $pvc(G)\geq 1$ if $G$ is a noncomplete graph. For $k\geq 2$, by definition we
have $pvc_k(G)\geq 1$ if $G$ is a $k$-connected graph.
It is easy to see that $$0\leq pvc_k(G)\leq
\min\{\chi(G), rvc_k(G)\},\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$$
where $\chi(G)$ denotes the chromatic number of $G$. A vertex-colored graph is {\it strong proper
vertex-connected}, if for any two vertices $u, v$ of the graph,
there exists a vertex-proper $u$-$v$ geodesic. For a connected graph
$G$, the {\it strong proper vertex-connection number} of $G$,
denoted by $spvc(G)$, is the smallest number of colors required to
make $G$ strong proper vertex-connected. In particular, set $spvc(G)=0$ for a complete graph $G$. Furthermore, we have $spvc(G)\geq 1$ if $G$ is not complete. Note that if $G$ is a
nontrivial connected graph, then $$0\leq pvc(G)\leq spvc(G)\leq
\min\{\chi(G), srvc(G)\}.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$$
We recall some fundamental results on $pvc_k(G)$ and $spvc(G)$ which can be found in \cite{JLZZ}.
\begin{pro}\label{pro1}\cite{JLZZ} Let $G$ be a nontrivial connected graph. Then
$(a)$ \ $pvc(G)=0$ if and only if $G$ is a complete graph;
$(b)$ \ $pvc(G)=1$ if and only if $diam(G)=2$.
\end{pro}
For the case that $diam(G)\geq 3$, we have the following theorem.
\begin{thm}\label{thm1}\cite{JLZZ} Let $G$ be a nontrivial connected graph. Then, $pvc(G)=2$ if
and only if $diam(G)\geq 3$.
\end{thm}
Moreover, Jiang et al. \cite{JLZZ} determined the value of $pvc_k(G)$ when $G$ is a complete graph and a complete bipartite graph.
\begin{lem}\label{lemJ1}\cite{JLZZ} $(1)$ $pvc_2(K_n)=pvc_3(K_n)=...=pvc_{n-1}(K_n)=1$.
$(2)$ $pvc_k(K_{n_1,n_2})=2$ for $2\leq k\leq n_1\leq{n_2}$.
\end{lem}
The following results on $spvc(G)$ are immediate from its definition.
\begin{pro}\label{pro2}\cite{JLZZ} Let $G$ be a nontrivial connected graph of order $n$. Then
$(a)$ \ $spvc(G)=0$ if and only if $G$ is a complete graph;
$(b)$ \ $spvc(G)=1$ if and only if $diam(G)=2$.
\end{pro}
The standard products (Cartesian, direct, strong, and lexicographic) have drawn constant attention from the graph research community; see e.g. \cite{ACKP,BS,GV,KS,NS,P,S,Z}. In this paper we consider the join and the four standard products with respect to the (strong) proper vertex-connection number. Each of them is treated in one of the forthcoming sections. In the join part, we determine the values of the
proper vertex $k$-connection number and the strong proper vertex-connection
number for the join of two graphs. Besides, for the Cartesian, the lexicographic
and the strong products, we also study the two parameters, giving exact values for
most of our results and upper bounds for the others. In the final section, we determine the value of the proper vertex-connection number for the direct product, and study the proper vertex $2$-connection number and the strong proper vertex-connection
number for the direct product with one of its factors being the complete graph. For all graph products, only $k=1,2$ are considered in this paper.
\section{The join}
The {\it join} $G\vee H$ of two graphs $G$ and $H$ has vertex set $V(G)\cup V(H)$ and its edge set consists of $E(G)\cup E(H)$ and the set $\{uv:u\in V(G)$ and $v\in V(H)\}$.
\begin{thm}\label{thmJ1} $(1)$ If $G$ and $H$ are graphs such that $G\vee H$ is not complete, then $pvc(G\vee H)=spvc(G\vee H)=1$.
$(2)$ Let $G,H$ be two graphs and $2\leq k\leq \min\{|G|,|H|\}$. If the sum of the minimum degrees of $G$ and $H$ is less than $k-1$, then $pvc_k(G\vee H)=2$; otherwise, we have $pvc_k(G\vee H)=1$.
\end{thm}
\pf (1) By the definition of join, we have $diam(G\vee H)=2$ since $G\vee H$ is not complete. From Propositions \ref{pro1} and \ref{pro2}, it follows that $pvc(G\vee H)=spvc(G\vee H)=1$.
(2) Let $u$ and $v$ be two vertices with the minimum degree in $G$ and $H$, respectively. If the sum of the minimum degrees of $G$ and $H$ is less than $k-1$, then there must exist a path of length at least 3 among the $k$ desired paths from $u$ to $v$ in $G\vee H$. Thus $pvc_k(G\vee H)\geq2$.
Clearly, $G\vee H$ has a spanning complete bipartite subgraph. By Lemma \ref{lemJ1}(2), we have $pvc_k(G\vee H)\leq2$ and so $pvc_k(G\vee H)=2$. For the other cases, we can always find $k$ desired paths of length at most 2 between any two vertices of $G\vee H$. Thus $pvc_k(G\vee H)=1$.\qed
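Part (1) rests only on the fact that a join is never of diameter greater than two, while it fails to be complete exactly when a non-edge survives inside a factor. As a quick sanity check, the following sketch (the helper names `bfs_dist` and `join` are ours, not from the paper) verifies both facts for $P_3\vee K_2$:

```python
from collections import deque

def bfs_dist(adj, s):
    """Distances from s by breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def join(adj_g, adj_h):
    """Join G v H: keep both graphs, add all edges between the two sides."""
    adj = {('G', u): {('G', v) for v in nb} for u, nb in adj_g.items()}
    adj.update({('H', u): {('H', v) for v in nb} for u, nb in adj_h.items()})
    for u in adj_g:
        for v in adj_h:
            adj[('G', u)].add(('H', v))
            adj[('H', v)].add(('G', u))
    return adj

# G = P_3 (so 'a' and 'c' stay non-adjacent in the join), H = K_2
adj = join({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}, {0: {1}, 1: {0}})
diam = max(max(bfs_dist(adj, s).values()) for s in adj)
is_complete = all(len(nb) == len(adj) - 1 for nb in adj.values())
print(diam, is_complete)  # -> 2 False, so pvc = spvc = 1 by Propositions 1 and 2
```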
\section{The Cartesian product}
The {\it Cartesian product} $G\square H$ of two graphs $G$ and $H$ is the graph with vertex set $V(G)\times V(H)$, in which two vertices $(g,h)$ and $(g',h')$ are adjacent if and only if $g=g'$ and $hh'\in E(H)$, or $h=h'$ and $gg'\in E(G)$. Clearly, the Cartesian product is commutative, that is, $G\square H$ is isomorphic to $H\square G$. Moreover, $G\square H$ is $2$-connected whenever $G$ and $H$ are nontrivial and connected. Thus we consider $pvc_k(G\square H)$ for the case $k=2$ in this section. Recall that $d_G(u,v)$ denotes the distance between two vertices $u$ and $v$ in a graph $G$.
\begin{lem}\label{lemC1}\cite{HIK} Let $(g,h)$ and $(g',h')$ be two vertices of $G\square H$. Then $$d_{G\square H}((g,h),(g',h'))=d_G(g,g')+d_H(h,h').$$
\end{lem}
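Lemma \ref{lemC1} is easy to confirm mechanically on small factors. The sketch below (with helper names of our own choosing) compares breadth-first-search distances in $G\square H$ with the sum of the factor distances for $G=P_3$ and $H=C_4$:

```python
from collections import deque

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def cartesian(adj_g, adj_h):
    """G box H: move along an edge of exactly one factor at a time."""
    return {(g, h): {(g2, h) for g2 in adj_g[g]} | {(g, h2) for h2 in adj_h[h]}
            for g in adj_g for h in adj_h}

adj_g = {0: {1}, 1: {0, 2}, 2: {1}}                   # path P_3
adj_h = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # cycle C_4
prod = cartesian(adj_g, adj_h)
dG = {g: bfs_dist(adj_g, g) for g in adj_g}
dH = {h: bfs_dist(adj_h, h) for h in adj_h}
ok = all(bfs_dist(prod, (g, h))[(g2, h2)] == dG[g][g2] + dH[h][h2]
         for (g, h) in prod for (g2, h2) in prod)
print(ok)  # -> True
```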
\begin{thm}\label{thmC1} Let $G$ and $H$ be two nontrivial connected graphs.
$(1)$ If both $G$ and $H$ are complete, then $pvc(G\square H)=1$; otherwise, we have $pvc(G\square H)=2$.
$(2)$ If $G$ and $H$ are two complete graphs of order at least 3, then $pvc_2(G\square H)=1$; otherwise, we have $pvc_2(G\square H)=2$.
$(3)$ If both $G$ and $H$ are complete, then $spvc(G\square H)=1$; otherwise, we have $spvc(G\square H) \leq \min\{spvc(G)\times\chi(H), spvc(H)\times\chi(G)\}$.
\end{thm}
\pf (1) If both $G$ and $H$ are complete, then $diam(G\square H)=2$ and so $pvc(G\square H)=1$ by Proposition \ref{pro1}. Otherwise, we have $diam(G\square H)\geq3$ and so $pvc(G\square H)=2$ by Theorem \ref{thm1}.
(2) First suppose that $G$ and $H$ are two complete graphs of order at least 3. Then $diam(G\square H)=2$ and so $pvc_2(G\square H)\geq1$. Color all the vertices of $G\square H$ with color 1. Next we just need to show that for any two vertices $(g,h)$ and $(g',h')$ in $G\square H$, there exist two vertex-proper paths between them. If $g=g'$, then $(g,h)(g',h')$ and $(g,h)(g,h_0)(g',h')$ are the desired paths, where $h_0\in V(H)\backslash\{h,h'\}$. The case $h=h'$ is similar. Now we may assume that $g\neq g'$ and $h\neq h'$. Then $(g,h)(g,h')(g',h')$ and $(g,h)(g',h)(g',h')$ are the desired paths. Hence $pvc_2(G\square H)\leq1$ and so $pvc_2(G\square H)=1$.
Next suppose that $G=K_2$ and $H=K_n$, where $V(G)=\{g_1,g_2\}$. For two vertices $(g_1,h)$ and $(g_2,h)$ of $G\square H$, the edge $(g_1,h)(g_2,h)$ is one desired path but the length of the other desired path is at least 3. Thus $pvc_2(G\square H)\geq2$ and so it remains to show that $pvc_2(G\square H)\leq2$. Define a 2-coloring of $G\square H$ by coloring the vertex $(g_i,h)$ with color $i$ where $i\in \{1,2\}$ and $(g_i,h)\in V(G\square H)$. It is easy to check that there exist two vertex-proper paths between any two vertices in $G\square H$ and so $pvc_2(G\square H)\leq2$. Thus $pvc_2(G\square H)=2$.
Finally, assume without loss of generality that $G$ is not complete. Then we have $diam(G\square H)\geq3$ and so $pvc_2(G\square H)\geq2$. Next we just need to show that $pvc_2(G\square H)\leq2$. Let $S$ and $T$ be spanning trees of $G$ and $H$, respectively. Then $S\square T$ is a spanning subgraph of $G\square H$ and so it suffices to show that $pvc_2(S\square T)\leq2$. Let $g_0$ and $h_0$ be the roots of $S$ and $T$, respectively. Define a 2-coloring of the vertices of $S\square T$ as follows: For each vertex $(g,h)\in V(S\square T)$, if $d_S(g,g_0)$ and $d_T(h,h_0)$ are of the same parity, then color the vertex $(g,h)$ with color 1; otherwise, color it with color 2. Now it remains to check that there are two vertex-proper paths between any two vertices $(g,h),(g',h')$ in $S\square T$. Since $S$ and $T$ are trees, any two of their vertices are connected by a unique path. Let $gg_1...g_kg'$ and $hh_1...h_lh'$ be two paths from $g$ to $g'$ in $S$ and from $h$ to $h'$ in $T$, respectively. If $g=g'$ and $gg^*$ is an edge in $S$, then $(g,h)(g,h_1)...(g,h_l)(g',h')$ and $(g,h)(g^*,h)(g^*,h_1)...(g^*,h_l)(g^*,h')(g',h')$ are the desired paths. The same is true for the case that $h=h'$. Now we may assume that $g\neq g'$ and $h\neq h'$. Then $(g,h)(g,h_1)...(g,h_l)(g,h')(g_1,h')...(g_k,h')(g',h')$ and $(g,h)(g_1,h)...(g_k,h)(g',h)(g',h_1)...(g',h_l)(g',h')$ are the desired paths. Thus $pvc_2(G\square H)=2$.
(3) If both $G$ and $H$ are complete, then $diam(G\square H)=2$ and so $spvc(G\square H)=1$ by Proposition \ref{pro2}. Otherwise, we will show that $spvc(G\square H)\leq spvc(G)\times\chi(H)$. First, define a vertex-coloring $c$ of $G\square H$ with $spvc(G)\times\chi(H)$ colors as follows. We give $G$ a vertex-coloring $c_G$ using $\{1,2,...,spvc(G)\}$ such that $G$ is strong proper vertex-connected, and give $H$ a proper coloring $c_H$ using $\{1,2,...,\chi(H)\}$. For $(g,h)\in V(G\square H)$, where $g\in V(G)$ and $h\in V(H)$, we set $c(g,h)=(c_G(g),c_H(h))$. In this way, we obtain a vertex-coloring of $G\square H$ with $spvc(G)\times\chi(H)$ colors and it remains to check that for any two vertices $(g,h),(g',h')$ of $G\square H$, there exists a vertex-proper geodesic between them. Let $P=gg_1...g_kg'$ be a vertex-proper geodesic from $g$ to $g'$ in $G$ and $Q=hh_1...h_lh'$ be a shortest path from $h$ to $h'$ in $H$. By Lemma \ref{lemC1}, the path $(g,h)(g_1,h)...(g_k,h)(g_k,h_1)(g_k,h_2)...(g_k,h_l)(g_k,h')(g',h')$ is the desired geodesic. Thus, $spvc(G\square H)\leq spvc(G)\times\chi(H)$. By the commutativity of the Cartesian product, we can also deduce that $spvc(G\square H)\leq spvc(H)\times\chi(G)$. Therefore, $spvc(G\square H)\leq \min\{spvc(G)\times\chi(H), spvc(H)\times\chi(G)\}$. \qed
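A convenient way to see why the parity 2-coloring in the proof of part (2) works: every edge of $S\square T$ moves along exactly one tree edge, so it flips the parity of exactly one of $d_S(g,g_0)$, $d_T(h,h_0)$; the 2-coloring is therefore a proper coloring of $S\square T$, and every path in it is automatically vertex-proper. A small check (illustrative trees and helper names of our own choosing):

```python
from collections import deque

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# S = star K_{1,2} rooted at 0, T = path P_3 rooted at 0
adjS = {0: {1, 2}, 1: {0}, 2: {0}}
adjT = {0: {1}, 1: {0, 2}, 2: {1}}
dS, dT = bfs_dist(adjS, 0), bfs_dist(adjT, 0)

# color 1 when the two root distances have the same parity, color 2 otherwise
color = {(g, h): 1 + (dS[g] + dT[h]) % 2 for g in adjS for h in adjT}

# edges of S box T: change exactly one coordinate along a tree edge
edges = [((g, h), (g2, h)) for g in adjS for g2 in adjS[g] for h in adjT]
edges += [((g, h), (g, h2)) for g in adjS for h in adjT for h2 in adjT[h]]
proper = all(color[u] != color[v] for u, v in edges)
print(proper)  # -> True
```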
\section{The lexicographic product}
The {\it lexicographic product} $G\circ H$ of graphs $G$ and $H$ is the graph with vertex set $V(G)\times V(H)$, in which two vertices $(g,h),(g',h')$ are adjacent if and only if $gg'\in E(G)$, or $g=g'$ and $hh'\in E(H)$. The lexicographic product is not commutative, and $G\circ H$ is connected whenever $G$ is connected. Moreover, $G\circ H$ is $2$-connected if $G$ and $H$ are nontrivial connected graphs. Let $d_G(g)$ denote the degree of the vertex $g$ in $G$.
\begin{lem}\label{leml1}\cite{HIK} Let $(g,h)$ and $(g',h')$ be two vertices of $G\circ H$. Then
\begin{eqnarray}
d_{G\circ H}((g,h),(g',h'))=
\begin{cases}
d_G(g,g') & \mbox{if } g\neq g'; \\
d_H(h,h') & \mbox{if } g=g' \mbox{ and } d_G(g)=0; \\
\min\{d_H(h,h'),2\} & \mbox{if } g=g' \mbox{ and } d_G(g)\neq0.
\end{cases}
\end{eqnarray}
\end{lem}
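Lemma \ref{leml1} can likewise be verified mechanically. The sketch below (helper names are our own) checks all three cases at once, taking $G$ to be $P_3$ together with an isolated vertex so that both the $d_G(g)=0$ and $d_G(g)\neq0$ branches are exercised:

```python
from collections import deque

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def lex(adj_g, adj_h):
    """G o H: (g,h) ~ (g2,h2) iff g g2 in E(G), or g = g2 and h h2 in E(H)."""
    return {(g, h): {(g2, h2) for g2 in adj_g[g] for h2 in adj_h}
                    | {(g, h2) for h2 in adj_h[h]}
            for g in adj_g for h in adj_h}

adj_g = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}   # P_3 plus the isolated vertex 3
adj_h = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # path P_4
prod = lex(adj_g, adj_h)
dG = {g: bfs_dist(adj_g, g) for g in adj_g}
dH = {h: bfs_dist(adj_h, h) for h in adj_h}

def expected(g, h, g2, h2):
    if g != g2:
        return dG[g].get(g2)          # None when g2 is unreachable from g
    if not adj_g[g]:
        return dH[h][h2]              # isolated g: distances inherited from H
    return min(dH[h][h2], 2)

ok = all(bfs_dist(prod, (g, h)).get((g2, h2)) == expected(g, h, g2, h2)
         for (g, h) in prod for (g2, h2) in prod)
print(ok)  # -> True
```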
\begin{thm}\label{thml1} Let $G$ be a nontrivial connected graph and $H$ be a nontrivial graph.
$(1)$ If both $G$ and $H$ are complete, then $pvc(G\circ H)=0$; if $diam(G)\geq3$, then $pvc(G\circ H)=2$; otherwise, we have $pvc(G\circ H)=1$.
$(2)$ If both $G$ and $H$ are complete, then $spvc(G\circ H)=0$; if $diam(G)\geq3$, then $spvc(G\circ H)=2$; otherwise, we have $spvc(G\circ H)=1$.
$(3)$ Let $H$ be a connected graph. If $diam(G)\geq3$, then $pvc_2(G\circ H)=2$; otherwise, we have $pvc_2(G\circ H)=1$.
\end{thm}
\pf (1) If both $G$ and $H$ are complete, then $diam(G\circ H)=1$ and so $pvc(G\circ H)=0$ by Proposition \ref{pro1}. If $G$ is complete and $H$ is not complete, then $diam(G\circ H)=2$ by Lemma \ref{leml1} and so $pvc(G\circ H)=1$ by Proposition \ref{pro1}. Now we may assume that $G$ is not complete. From Lemma \ref{leml1}, it follows that $diam(G\circ H)=diam(G)$. Thus we have that $pvc(G\circ H)=1$ if $diam(G)=2$ and $pvc(G\circ H)=2$ if $diam(G)\geq3$ by Proposition \ref{pro1} and Theorem \ref{thm1}.
(2) If both $G$ and $H$ are complete, then $diam(G\circ H)=1$ and so $spvc(G\circ H)=0$ by Proposition \ref{pro2}. If $G$ is complete and $H$ is not complete, then $diam(G\circ H)=2$ by Lemma \ref{leml1} and so $spvc(G\circ H)=1$ by Proposition \ref{pro2}. Now we may assume that $G$ is not complete. Then $diam(G\circ H)=diam(G)$ by Lemma \ref{leml1}. From Proposition \ref{pro2}, we have that $spvc(G\circ H)=1$ if $diam(G)=2$. Now suppose that $diam(G)\geq3$. Then $spvc(G\circ H)\geq2$ and we just need to show that $spvc(G\circ H)\leq2$. Let $V(H)=\{h_1,h_2,...,h_n\}$. Define a vertex-coloring $c$ of $G\circ H$ with two colors as follows. For $(g,h_i)\in V(G\circ H)$, where $g\in V(G)$ and $i\in[n]$, we set $c(g,h_i)=1$ if $i$ is odd and $c(g,h_i)=2$ if $i$ is even. It suffices to check that there exists a vertex-proper geodesic between any two vertices $(g,h_i),(g',h_j)$ of $G\circ H$. Let $P=gg_1...g_kg'$ be a $g$-$g'$ geodesic in $G$. If $(g,h_i)$ and $(g',h_j)$ are adjacent, then the edge $(g,h_i)(g',h_j)$ is the desired geodesic. Otherwise, if $g=g'$, then let $g^*$ be a neighbor of $g$ in $G$; the path $(g,h_i)(g^*,h_1)(g',h_j)$ is the desired geodesic. If $g\neq g'$, then the desired geodesic is $(g,h_i)(g_1,h_1)(g_2,h_2)(g_3,h_1)...(g_k,h_1)(g',h_j)$ when $|P|$ is odd and $(g,h_i)(g_1,h_1)(g_2,h_2)(g_3,h_1)...(g_k,h_2)(g',h_j)$ when $|P|$ is even. Thus $spvc(G\circ H)=2$.
(3) If $diam(G)\geq3$, then $diam(G\circ H)=diam(G)$ by Lemma \ref{leml1} and so $pvc_2(G\circ H)\geq2$. Since $G\square H$ is a spanning subgraph of $G\circ H$, $pvc_2(G\circ H)\leq2$ by Theorem \ref{thmC1}$(2)$. Thus $pvc_2(G\circ H)=2$. We now assume that $diam(G)\leq2$. If $diam(G)=diam(H)=1$, then $G\circ H$ is complete and so $pvc_2(G\circ H)=1$ by Lemma \ref{lemJ1}(1). For the other cases, we have $diam(G\circ H)=2$ and so $pvc_2(G\circ H)\geq1$. It suffices to show that $pvc_2(G\circ H)\leq1$. Define a vertex-coloring of $G\circ H$ by coloring each vertex with color 1. Next it remains to check that there are two vertex-proper paths between any two vertices $(g,h),(g',h')$ in $G\circ H$. If $g=g'$, then the paths $(g,h)(g^*,h)(g',h')$ and $(g,h)(g^*,h')(g',h')$ are the desired paths, where $g^*$ is a neighbor of $g$ in $G$. If $h=h'$ and $gg'\in E(G)$, then the edge $(g,h)(g',h')$ and the path $(g,h)(g,h^*)(g',h')$ are the desired paths, where $h^*$ is a neighbor of $h$ in $H$. If $h=h'$ and $gg'\notin E(G)$, then $g$ and $g'$ must have a common neighbor, say $g^*$, since $diam(G)\leq2$. Then the paths $(g,h)(g^*,h)(g',h')$ and $(g,h)(g^*,h^*)(g',h')$ are the desired paths, where $h^*\in V(H)\backslash\{h\}$. Now we may assume that $g\neq g'$ and $h\neq h'$. If $gg'\in E(G)$, then the edge $(g,h)(g',h')$ and the path $(g,h)(g,h^*)(g',h')$ are the desired paths, where $h^*$ is a neighbor of $h$ in $H$. Otherwise we have $gg'\notin E(G)$ and then $g$ and $g'$ have a common neighbor, say $g^*$, since $diam(G)\leq2$. Thus the paths $(g,h)(g^*,h)(g',h')$ and $(g,h)(g^*,h')(g',h')$ are the desired paths. Hence $pvc_2(G\circ H)=1$.
\qed
\section{The strong product}
The {\it strong product} $G\boxtimes H$ of graphs $G$ and $H$ is the graph with vertex set $V(G)\times V(H)$, in which two vertices $(g,h),(g',h')$ are adjacent whenever $gg'\in E(G)$ and $h=h'$, or $g=g'$ and $hh'\in E(H)$, or $gg'\in E(G)$ and $hh'\in E(H)$. If an edge of $G\boxtimes H$ belongs to one of the first two types, then we call such an edge a {\it Cartesian edge} and an edge of the last type is called a {\it non-Cartesian edge}.
Following Ref. \cite{TuchinPRC}, the forward dipole -- nucleus scattering amplitude can be written as
\begin{eqnarray}
\mathcal{N} (x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) = 1-\cos\left[Z\langle\mathrm{Re}\, i\Gamma_{em}\rangle\right]\exp\left[-A\langle\mathrm{Im}\, i\Gamma_s\rangle\right]\,,
\label{sec_dip}
\end{eqnarray}
where $i\Gamma_{em}$ and $i\Gamma_s$ are the electromagnetic and strong contribution to the dipole - nucleon elastic scattering amplitude, respectively.
At large impact parameters ($b > R_A$) the strong interaction contribution is expected to be small, i.e. $A\langle\mathrm{Im}\, i\Gamma_s\rangle\rightarrow0$, and hence
\begin{eqnarray}
\mathcal{N} (x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) \approx \mathcal{N}_{em}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) = 1 - \cos\left[Z\langle i\Gamma_{em}\rangle\right]\,.
\label{N_em}
\end{eqnarray}
On the other hand, at small impact parameters ($b < R_A$), the strong interaction becomes dominant and the dipole - nucleus scattering amplitude can be
approximated by
\begin{eqnarray}
\mathcal{N} (x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) \approx
\mathcal{N}_s(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) = 1 - \exp\left[-A\langle i\Gamma_s\rangle\right]\,.
\end{eqnarray}
In what follows we will assume that these asymptotic solutions can be used to estimate the dipole -- nucleus scattering amplitude for $b \sim R_A$, which is the region of interest for deep inelastic scattering. In order to calculate the inclusive and diffractive cross sections we must specify the models used for the electromagnetic and strong dipole -- nucleus
scattering amplitudes. Following Ref. \cite{TuchinPRL} we will assume that
\begin{eqnarray}
{\cal{N}}_{s}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) =
\left[\, 1- \exp \left(-\frac{ Q_{\mathrm{s},A}^2(x)\,\mbox{\boldmath $r$}^2}{4} \right) \, \right] \Theta \left(R_A-b \right),
\label{N_GBW}
\end{eqnarray}
where $ Q_{\mathrm{s},A}$ is the nuclear saturation scale, which we will assume to be given by $Q_{\mathrm{s},A}^2(x) = A^{1/3}Q_{\mathrm{s}}^2 $,
where
$Q_{\mathrm{s}}^2 (x) = Q_0^2\,e^{\lambda\ln(x_0/x)} = Q_0^2 (x_0/x)^{\lambda}$ is the saturation scale for a proton. This model is a naive generalization to the nuclear case of the
saturation model
proposed in Ref. \cite{GBW}, which captures the main properties of the high energy evolution equations and is suitable for describing the nonlinear
physics in the small-$x$ region. In particular, this model implies that ${\cal{N}}_{s} \propto r^2$ (color transparency) at small pair separations and
that the multiple scatterings are resummed in a Glauber -- inspired way.
Such a model was used in Refs. \cite{simone1,simone2} to estimate the impact of the saturation physics on the observables that will be measured at the
future $eA$ collider. In our calculations we will consider the parameters obtained from a fit to the HERA data in Ref. \cite{GBW}: $Q_0^2 = 1$ GeV$^2$,
$\lambda= 0.277$ and $x_0=3.41 \cdot 10^{-4}$.
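For orientation, the quoted fit parameters fix the saturation scales numerically. A short evaluation (our own script; values in GeV$^2$) at $x=10^{-4}$ and for a gold nucleus, $A=197$:

```python
import math

# GBW fit parameters quoted in the text
Q0_sq = 1.0          # GeV^2
lam = 0.277
x0 = 3.41e-4

def Qs2_proton(x):
    """Q_s^2(x) = Q_0^2 exp[lambda ln(x_0/x)] = Q_0^2 (x_0/x)^lambda."""
    return Q0_sq * (x0 / x) ** lam

def Qs2_nucleus(x, A):
    """Naive nuclear enhancement Q_{s,A}^2 = A^(1/3) Q_s^2."""
    return A ** (1.0 / 3.0) * Qs2_proton(x)

x = 1e-4
print(f"Q_s^2(proton) = {Qs2_proton(x):.2f} GeV^2")        # ~1.40 GeV^2
print(f"Q_s^2(A=197)  = {Qs2_nucleus(x, 197):.2f} GeV^2")  # ~8.2 GeV^2
```

At this kinematics the nuclear saturation scale is therefore well inside the perturbative domain, which is the regime emphasized in the discussion below.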
Using Eq. (\ref{N_GBW}) we obtain that
\begin{eqnarray}
2 \int_{b<R_A} d^2\mbox{\boldmath $b$} \, \mathcal{N}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) \approx 2 \int_{b<R_A} d^2\mbox{\boldmath $b$} \, \mathcal{N}_{s}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) = 2\pi R_A^2 \left(1 - e^{-Q_{\mathrm{s},A}^2r^2/4}\right)
\label{Nsinc_inc}
\end{eqnarray}
and
\begin{eqnarray}
\int_{b<R_A} d^2\mbox{\boldmath $b$} \left|\mathcal{N}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})\right|^2 & \approx & \int_{b <R_A} d^2\mbox{\boldmath $b$} \left|\mathcal{N}_s(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})\right|^2
= \pi R_A^2 \left(1 - e^{-Q_{\mathrm{s},A}^2r^2/4}\right)^2\,.
\label{Nsinc_dif}
\end{eqnarray}
For the electromagnetic case we will follow the approach proposed in Refs. \cite{TuchinPRL,TuchinPRC}.
In particular, we will assume that the leading term in $\alpha$ for the dipole - nucleon scattering arises from a single photon exchange, with the elastic dipole - nucleon amplitude being approximately real. As demonstrated in Ref. \cite{TuchinPRD}, in the Born approximation the contribution of the imaginary part is proportional to $\alpha^2 Z \sim \alpha$ in the regime $Z\alpha \sim 1$, while the contribution of the real part is of the order of $\alpha^2 Z^2 \sim 1$. Considering the Weizs\"acker - Williams approximation, where the $t$ - channel photons at high energies are assumed to be almost real, the electromagnetic dipole - nucleon elastic scattering amplitude was derived in Ref. \cite{TuchinPRD}, being given by
\begin{eqnarray}
\langle i\Gamma_{em}\rangle \approx 2\alpha\ln\left(\frac{\left| \mbox{\boldmath $b$}-\mbox{\boldmath $r$}/2 \right|}{\left| \mbox{\boldmath $b$} + \mbox{\boldmath $r$}/2\right|}\right)
= \alpha \ln\left(\frac{b^2+r^2/4-br\cos\phi}{b^2+r^2/4+br\cos\phi}\right)\,.
\label{ampelet}
\end{eqnarray}
Moreover, we will take into account that for deep inelastic scattering the main contributions to the DIS cross sections come from pair separations of the order $r \sim 1/m_f$ \cite{simone1,simone2}. Consequently, we will have in general that $r \ll b$, which allows us to simplify the expression for the dipole - nucleon scattering amplitude. Such an inequality becomes stronger in the case of heavy quarks. In particular, for $b\gg r$ Eq. (\ref{ampelet}) can be simplified and becomes
\begin{eqnarray}
\langle i\Gamma_{em}\rangle = 2\alpha\frac{\mbox{\boldmath $b$}\cdot\mbox{\boldmath $r$}}{b^2} + \mathcal{O}(r^3/b^3)
\approx \frac{2\alpha r\cos\phi}{b}\,,
\label{b>r}
\end{eqnarray}
which implies that
\begin{eqnarray}
\cos\left[Z\langle i\Gamma_{em}\rangle\right] = \cos\left( \frac{2\alpha Zr\cos\phi}{b} \right)
= 1 - \frac{1}{2}\left(\frac{2\alpha Zr\cos\phi}{b}\right)^2 + \mathcal{O}(r^4/b^4)
\label{cos_exp}\approx 1 -\frac{2\alpha^2Z^2 r^2\cos^2\phi}{b^2}\,,
\label{cos_approx}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{N}_{em}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) \approx \frac{2\alpha^2Z^2r^2\cos^2\phi}{b^2}\,.
\label{N_s_final}
\end{eqnarray}
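The truncation of the cosine above is controlled by the parameter $2\alpha Z r\cos\phi/b$, which stays small for $r\ll b$ even when $Z\alpha\sim1$. A quick numerical illustration (the values of $r$ and $b$ below are our own illustrative choices, in GeV$^{-1}$):

```python
import math

alpha = 1.0 / 137.0
Z = 79        # gold
r = 0.5       # dipole size ~ 1/m_f, GeV^-1 (illustrative)
b = 35.0      # impact parameter ~ R_A for gold, GeV^-1 (illustrative)

# compare cos(x) with the quadratic truncation 1 - x^2/2 over all azimuths
worst = 0.0
for k in range(1000):
    phi = 2 * math.pi * k / 1000
    x = 2 * alpha * Z * r * math.cos(phi) / b
    worst = max(worst, abs(math.cos(x) - (1 - x * x / 2)))
print(worst < 1e-8)  # -> True: the neglected terms are tiny for r << b
```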
Consequently, we have
\begin{eqnarray}
2 \int_{b>R_A} d^2\mbox{\boldmath $b$} \, \mathcal{N}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) \approx 2 \int_{b>R_A} d^2\mbox{\boldmath $b$} \, \mathcal{N}_{em}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})
& = & 4\alpha^2Z^2r^2\int_{R_A}^{b_{max}}\frac{db}{b}\int_0^{2\pi}d\phi \cos^2\phi\\
&=& 4\pi\alpha^2Z^2r^2\ln\left(\frac{b_{max}}{R_A}\right)
= 4\pi\alpha^2Z^2r^2\ln\left(\frac{W^2}{4m_f^2m_NR_A}\right)\,,
\label{Ninc_em_int}
\end{eqnarray}
where $m_N$ is the nucleon mass and $W$ is the photon - nucleus center of mass energy.
Moreover, $b_{max} = s/(4m_N m_f^2)$ is a long distance cutoff of the $b$ integral, which is directly related to the minimum transverse momentum at which the Weizs\"acker - Williams approximation is still valid \cite{TuchinPRD}. As demonstrated in Ref.
\cite{TuchinPRD}, one has $b_{max} \gg R_A$, which justifies the approximations used in the derivation of Eqs. (\ref{ampelet}) -- (\ref{Ninc_em_int}).
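The $b$ and $\phi$ integrations in Eq. (\ref{Ninc_em_int}) factorize, with $\int_0^{2\pi}\cos^2\phi\,d\phi=\pi$ and $\int_{R_A}^{b_{max}}db/b=\ln(b_{max}/R_A)$. A simple midpoint-rule cross-check (the cutoff values below are illustrative only):

```python
import math

R_A, b_max = 7.0, 700.0   # illustrative values only (b_max >> R_A)

# angular integral of cos^2(phi) over [0, 2*pi], midpoint rule
N = 20000
ang = sum(math.cos((k + 0.5) * 2 * math.pi / N) ** 2
          for k in range(N)) * (2 * math.pi / N)

# radial integral of db/b over [R_A, b_max], midpoint rule
M = 20000
h = (b_max - R_A) / M
rad = sum(1.0 / (R_A + (k + 0.5) * h) for k in range(M)) * h

numeric = ang * rad
analytic = math.pi * math.log(b_max / R_A)
print(abs(numeric / analytic - 1.0) < 1e-4)  # -> True
```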
Considering the same approximations, for $b > R_A$ we find
\begin{eqnarray}
\left|\mathcal{N}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})\right|^2 \approx \left|\mathcal{N}_{em}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})\right|^2 = \left| 1-e^{iZ\langle\mathrm{Re}\, i\Gamma_{em}\rangle}\right|^2
= 2 - 2\cos\left[Z\langle\mathrm{Re}\, i\Gamma_{em}\rangle\right]\,,
\label{N2cos}
\end{eqnarray}
since for $b > R_A$ the dipole - nucleus $S$ - matrix reduces to the pure electromagnetic phase. Using the expansion in Eq. (\ref{cos_approx}), this yields
\begin{eqnarray}
\left|\mathcal{N}_{em}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})\right|^2 \approx \frac{4\alpha^2Z^2r^2\cos^2\phi}{b^2}\,,
\label{N2_diff}
\end{eqnarray}
which implies that
\begin{eqnarray}
\int_{b>R_A} d^2\mbox{\boldmath $b$} \left|\mathcal{N}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})\right|^2 & \approx & \int_{b>R_A} d^2\mbox{\boldmath $b$} \left|\mathcal{N}_{em}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})\right|^2
= 4\pi\alpha^2Z^2r^2\ln\left(\frac{W^2}{4m_f^2m_NR_A}\right)\,.
\label{N2_em_int}
\end{eqnarray}
As obtained in Ref. \cite{TuchinPRL}, the electromagnetic contributions to the inclusive and diffractive cross sections are equal, in contrast
to the strong ones. As a consequence, we expect the impact of the Coulomb corrections on inclusive and diffractive observables to be different. In the next Section
we will estimate how the electromagnetic contribution modifies these observables in the kinematical range that will be probed by the future
electron - ion collider.
\begin{figure*}[t]
\includegraphics[scale=0.35]{tuchin_inc_A.eps}
\includegraphics[scale=0.35]{tuchin_diff_A.eps}
\caption{Dependence on the photon virtuality $Q^2$ for different atomic numbers and $x= 10^{-4}$ of the ratio between the electromagnetic and
strong contributions for the inclusive (left panel) and diffractive (right panel) cross sections.}
\label{fig1}
\end{figure*}
\begin{figure*}[t]
\includegraphics[scale=0.35]{ratio_diff_inc_Q2.eps}
\includegraphics[scale=0.35]{ratio_diff_inc_x.eps}
\caption{Ratio $\sigma_{diff}/\sigma_{tot}$ as a function of $Q^2$ for fixed $x$ (left panel) and as a function of $x$ for fixed $Q^2$ (right panel).}
\label{fig1b}
\end{figure*}
\begin{figure*}[t]
\includegraphics[scale=0.35]{ratio_inc.eps}
\includegraphics[scale=0.35]{ratio_diff.eps}
\caption{Flavor decomposition for the ratio between the electromagnetic and strong contributions for the inclusive (left panel) and diffractive
(right panel) processes as a function of the photon virtuality $Q^2$ at fixed $x=10^{-4}$ for a gold nucleus.}
\label{fig2}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{ratio_charm_tot_Q2}
\includegraphics[scale=0.35]{ratio_charm_tot_x}
\caption{Dependence on $Q^2$ (left panel) and $x$ (right panel) of the ratios between the charm and total cross sections for inclusive and
diffractive interactions. }
\label{fig3}
\end{figure}
\section{Results and discussion}
\label{results}
In this Section we present a detailed study of the flavor and polarization decomposition of inclusive and diffractive cross sections including the Coulomb
correction. Extending the work done in Ref. \cite{TuchinPRL}, we will include in our analysis the contribution of the heavy quarks (charm and bottom).
As discussed e.g. in Refs. \cite{simone1,erike_ea2}, and shown in what follows, the charm contribution is expected to be $\approx 20 \%$ of the total cross
section at small values of $x$ and low virtualities. Moreover, it is considered an important probe of the nuclear gluon distribution. A similar expectation also
motivates the analysis of the longitudinal cross section (See e.g. \cite{erike_ea1,zurita}). Our goal is to verify the impact of the Coulomb corrections
in these observables in inclusive and diffractive interactions.
We start our analysis by presenting in Fig. \ref{fig1} the ratio between the Coulomb and strong (QCD) contributions to the inclusive (left panel) and
diffractive (right panel) cross sections. The ratios are shown as a function of the photon virtuality $Q^2$ for a fixed value of the Bjorken
variable, $x=10^{-4}$, and three distinct nuclei: gold (solid line), silver (dashed line) and calcium (dotted line).
The results are consistent with those obtained in Ref. \cite{TuchinPRL}, even when the heavy quark contribution is included, showing that the Coulomb correction
is quite important in the kinematical region of low-$Q^2$ and small-$x$ and is enhanced for heavier nuclei. In the perturbative range that will be
probed at the future EIC ($Q^2 \gtrsim 1$ GeV$^2$), for $A = 197$ the Coulomb contribution is $\approx 10 \, (21) \%$ for the inclusive
(diffractive) cross section. The different impact of the Coulomb interactions on the inclusive and diffractive cross sections can be understood as follows.
In contrast to the inclusive cross section, the main contribution to the diffractive cross section comes from larger dipoles (See e.g. Ref. \cite{simone2}) and
the electromagnetic terms are proportional to $r^2$ [See Eqs. (\ref{Ninc_em_int}) and (\ref{N2_em_int})].
As a consequence, the presence of Coulomb corrections modifies the ratio between the diffractive and the total cross section, which is
expected to be measured in the future electron - ion collider. Our predictions for the $Q^2$ and $x$ dependence of this ratio are presented in
Fig. \ref{fig1b}. We compare the predictions obtained with the sum of the electromagnetic and strong contributions, denoted total in the figure, with
those derived disregarding the Coulomb corrections. We observe that the ratio is enhanced by the electromagnetic contribution, in particular at
low $Q^2$ and small-$x$. At $Q^2 = 1$ GeV$^2$, this enhancement is $\approx 10 \%$.
The $r^2$ dependence of the electromagnetic contribution has a direct impact on the Coulomb corrections to the heavy quark cross sections.
As these cross sections are dominated by small dipoles, we expect that the effect of the Coulomb corrections will be smaller here than in the
case of light quark production. This expectation is confirmed by the results shown in Fig. \ref{fig2}, where we present the flavor decomposition
of the ratio between the electromagnetic and strong contributions. We observe that for $Q^2 \approx 1$ GeV$^2$ the Coulomb corrections can be
disregarded in the charm and bottom production in the inclusive case and are of the order of 2\% in diffractive interactions. At larger values of $Q^2$,
the heavy quark contribution to the total cross sections increases as well as the Coulomb corrections. We have verified that the impact of the Coulomb
corrections to the charm and bottom production is almost $x$ independent in the range
$1 \le Q^2 \le 10$ GeV$^2$. Another interesting aspect is that the inclusion of the heavy quarks decreases the magnitude of the Coulomb corrections
for the total cross sections in comparison to those obtained considering only light quarks, denoted by $u + d + s$ in the figure.
\begin{figure}[t]
\centering
\includegraphics[scale=0.33]{L_ratio_inc}
\includegraphics[scale=0.33]{L_ratio_diff}
\includegraphics[scale=0.33]{T_ratio_inc}
\includegraphics[scale=0.33]{T_ratio_diff}
\caption{ Upper panels: Dependence on $Q^2$ of the ratio between the electromagnetic and strong longitudinal cross sections for the inclusive (left)
and diffractive (right) interactions. Lower panels: Dependence on $Q^2$ of the ratio between the electromagnetic and strong transverse cross sections
for inclusive (left) and diffractive (right) interactions.}
\label{fig4}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{ratio_long_Q2}
\includegraphics[scale=0.35]{ratio_long_x}
\caption{Dependence on $Q^2$ (left panel) and $x$ (right panel) for the ratio between the longitudinal and total cross sections for inclusive and diffractive
interactions.}
\label{fig5}
\end{figure}
Let us now analyze more carefully the impact of the Coulomb corrections on the charm cross section, which is expected to be measured in the future
electron - ion collider. In Fig. \ref{fig3} we present our predictions for the $Q^2$ (left panel) and $x$ (right panel) dependence of the ratio
between the charm and the total cross sections for inclusive and diffractive interactions. The predictions obtained disregarding the Coulomb corrections
are denoted as {\it strong} in the figure. The charm contribution increases with $Q^2$ and at smaller values of $x$. Moreover, it is larger for inclusive
processes. Our results indicate that the inclusion of the Coulomb corrections implies a mild decrease of the ratios. In particular, for inclusive
processes, the $Q^2$ and $x$ dependence of the ratios is only slightly modified by Coulomb corrections in the perturbative $Q^2$ range that will be probed
in the future electron - ion collider. This result implies that the study of charm production is a good probe of the high energy regime of the QCD dynamics.
Let us now consider the impact of the Coulomb corrections on the longitudinal and transverse cross sections in inclusive and diffractive interactions.
In Fig. \ref{fig4} (upper panels) we present our results for the ratio between the electromagnetic and strong longitudinal cross sections.
On the left (right) panel we show the inclusive (diffractive) cross sections. In the inclusive case, the ratio rapidly decreases with $Q^2$ and becomes
smaller than 5 \% for $Q^2 \gtrsim 1.5$ GeV$^2$. In the diffractive case, the Coulomb correction is a factor of 2 larger in the same kinematical range. As
expected from our previous analysis, the charm contribution is small in the $1 \le Q^2 \le 10$ GeV$^2$
range. The results presented in the lower panels of Fig. \ref{fig4} are analogous to those of the upper panels but refer to the transverse cross sections.
They indicate that the Coulomb corrections to both longitudinal and transverse cross sections are comparable. Moreover, compared to the strong cross sections,
they are small everywhere, except in the very low $Q^2$ ($Q^2 \simeq 0.1$ GeV$^2$) region, where dipoles of larger size dominate the cross sections. In the figures
we also see that for the charm cross sections, the Coulomb corrections are very small. The fact that the longitudinal cross section is not sensitive to the
Coulomb corrections is illustrated in a different way in Fig. \ref{fig5} where we present the $Q^2$ and $x$ dependence of the ratio $\sigma_L/\sigma_{tot}$
in inclusive and diffractive interactions. These results demonstrate that the inclusion of the Coulomb corrections has no impact on the ratio. The same is true
for the transverse cross sections. Therefore, the study of the longitudinal and transverse cross sections can also be useful to understand the QCD dynamics
at small - $x$.
A final comment is in order. In the analysis presented above we have assumed, following Refs. \cite{TuchinPRL,TuchinPRC}, that the strong dipole -- nucleus scattering amplitude is modelled by Eq. (\ref{N_GBW}). However, as discussed in Refs. \cite{erike_ea2,vic_erike}, a more realistic model for $\mathcal{N}_{s}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$})$ can be derived using the Glauber -- Mueller (GM) approach \cite{mueller}. In particular, in Ref. \cite{erike_ea2} two of the authors have demonstrated that the GM approach is able to describe the current data for the nuclear structure function. In order to verify the dependence of our predictions on the model used to describe $\mathcal{N}_{s}$, we have also estimated the different observables discussed in this Section using the GM approach and have found that the contribution of the Coulomb corrections remains smaller than that presented here.
\section{Summary}
\label{conc}
In this paper we have studied in detail the Coulomb contribution to some of the observables that will be measured in the future $eA$ collider.
In particular, we studied the impact of
these corrections on the total, charm and longitudinal cross sections of inclusive and diffractive interactions. Our analysis is motivated by the fact
that these observables are expected to probe the QCD dynamics and constrain the description of the non - linear effects. Our results indicate that the
Coulomb corrections to the total cross sections are important at low - $Q^2$ and small values of $x$ and are larger for diffractive interactions. In
particular, the ratio between the diffractive and total cross sections is enhanced by $\approx 10 \%$. In contrast, our results indicate that the impact
of the Coulomb corrections on the transverse and longitudinal cross sections is small and that it is negligible for the charm cross sections. Therefore, these
observables (especially the latter) can be considered clean probes of the QCD dynamics.
\begin{acknowledgments}
The authors thank Kirill Tuchin for helpful comments. This work was partially financed by the Brazilian funding
agencies CNPq, CAPES, FAPERGS, FAPESP (contract 12/50984-4) and INCT-FNA (process number
464898/2014-5).
\end{acknowledgments}
\section{Introduction}
The design of spectrum sharing and access mechanisms for cognitive radio networks (CRNs) has attracted much attention in the last few years. The interest in CRNs is mainly attributed to their ability to enable efficient spectrum utilization and provide wireless networking solutions in scenarios where the unlicensed spectrum bands are heavily utilized. Furthermore, due to their cognitive nature, CRNs are more spectrum efficient and robust than their non-cognitive counterparts against spectrum unavailability, and have the capability to utilize different frequency bands and adapt their operating parameters based on the surrounding radio frequency (RF) environment. Specifically, CR is considered the key technology to effectively address the inefficient spectrum utilization in legacy licensed wireless communication systems by providing opportunistic on-demand access \cite{[1],[13]}. CR technology enables unlicensed users to opportunistically utilize idle primary radio (PR) channels
(so-called spectrum holes). The spectrum holes represent the PR channels that are currently under-utilized. In order to utilize these spectrum opportunities without interfering with the PRNs, CR users should perform accurate spectrum sensing, through which idle channel
lists are identified. In addition, the CR users should be flexible enough to quickly vacate the operating channel when a PR user reclaims it. In this case, CR users should quickly and seamlessly switch their operating channel(s).
While large-scale deployment of CRNs is still to come, extensive research attempts are currently underway to improve the effectiveness of spectrum sharing protocols and the spectrum management and operation of such networks~\cite{[13],[16a],[3av],[5av],[116],[120],[124],[37v],[125],[37]}. Two of the most crucial challenges in deploying CRNs are the need to maximize spectrum efficiency and to minimize the interference caused to PRNs. In other words, providing communication and spectrum access protocols that achieve high throughput while protecting the performance of licensed PRNs is the crucial design challenge in CRNs.
The main objective of this paper is to overview and analyze the key schemes and protocols for spectrum access/sharing/management that have been developed
for CRNs in the literature. Furthermore, we briefly highlight a number of opportunistic spectrum sharing and management schemes and explain their operation details. As indicated later, cross-layer design, the link quality/channel availability tradeoff and interference management are the key design principles for providing efficient spectrum utilization in CRNs. We start by describing the main CRN architectures and operating environment. Then, the spectrum sharing problem is stated. The various objectives used to formulate the spectrum sharing problem in CRNs are summarized. We then point out several design challenges in designing efficient spectrum sharing and access mechanisms. The tradeoffs in selecting the operating channel(s) in CRNs are discussed. A number of spectrum sharing design categories are then surveyed. Various complementary approaches, new technologies and optimization methods that have great potential in facilitating the design of efficient CRN communication protocols are highlighted and discussed. Finally, concluding remarks are provided with several open research challenges.
\begin{figure}[h!]
\begin{center}
\epsfxsize=3in \epsfysize=4.5in \leavevmode
\epsfbox{Fig1.eps} \epsfxsize=1.5in
\epsfysize=1.2in \caption{Generic architecture of a CRN environment.}
\label{fig:CRNc}
\end{center} \end{figure}
\section{Network Architecture}
\subsection{CRN Model}
A typical CRN environment consists of a number of different types of PRNs and one or several CRNs. The PR and CR networks geographically co-exist
within the same area. In terms of network topology, two basic types of CRNs are proposed: centralized multi-cell CRNs and infrastructure-less ad hoc CRNs. Figure \ref{fig:CRNc} depicts a composition view of a CRN operating environment consisting of an ad hoc CRN and a multi-cell centralized CRN that coexist with two different types of PRNs. The different PRNs have license to transmit over orthogonal non-overlapping spectrum bands, each with a different licensed bandwidth. PR users of a given PRN operate over the same set of licensed channels. CR users can opportunistically utilize the entire PR licensed and unlicensed spectrum. For ad hoc multi-hop CRNs without centralized entity, it is necessary to provide distributed spectrum access protocols that allow each CR user to separately access and utilize the available spectrum. Furthermore, for centralized multi-cell CRNs, it is desirable to provide (1) centralized spectrum allocation protocols that allocate the available channels to the different CR cells, and (2) centralized channel assignment mechanisms that enable efficient spectrum reuse inside each cell.
\subsection{PR ON/OFF Channel Model}
In general, the channel availability model of each PR channel in a given locality is described by a two-state ON/OFF Markov process. This model describes the evolution between idle (OFF) and busy (ON) states (i.e., the ON state of a PR channel indicates that the PR channel is busy, while the OFF state reveals that the PR channel is idle). The model is further described by the stochastic distributions of the busy and idle periods, which are generally distributed. The distributions of the idle and busy states depend on the PR activities. We note here that the ON and OFF periods of a given channel are independent random variables. For a given channel $i$, the average idle and busy periods are $\overline{T}_{I}$ and $\overline{T}_{B}$, respectively. Based on this model, the idle and busy probabilities of PR channel $i$ are respectively given by $P_I^{(i)}=\frac{\overline{T}_{I}} {\overline{T}_{I}+\overline{T}_{B}}$ and $P_B^{(i)}=\frac{\overline{T}_{B}} {\overline{T}_{I}+\overline{T}_{B}}$. Figure \ref{fig:ONOFF} shows the transition diagram of the $2$-state busy/idle Markov model of a given PR channel. We note here that neighboring CR users typically have similar views of spectrum availability, while non-neighboring CR users may observe different channel availability conditions.
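The stationary idle/busy probabilities above follow directly from the mean period lengths of the two-state chain. A minimal Python sketch of the computation is given below; the mean durations used in the example are illustrative values, not taken from the text.

```python
def stationary_probabilities(mean_idle: float, mean_busy: float):
    """Stationary (P_idle, P_busy) of the two-state ON/OFF channel model:
    P_I = T_I / (T_I + T_B),  P_B = T_B / (T_I + T_B)."""
    total = mean_idle + mean_busy
    return mean_idle / total, mean_busy / total

# Example (illustrative durations): a PR channel that is idle for 8 ms and
# busy for 2 ms on average is available 80% of the time.
p_idle, p_busy = stationary_probabilities(8.0, 2.0)
```

Such per-channel availability estimates are exactly the inputs a CR user needs when ranking candidate channels.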
\begin{figure}[htb]
\begin{center}
\epsfxsize=3.2in \epsfysize=2.0in \leavevmode
\epsfbox{Fig3.eps} \epsfxsize=1.5in
\epsfysize=1.2in \caption{Two-state Markov channel availability model of a given PR channel.}
\label{fig:ONOFF}
\end{center} \end{figure}
\section{Spectrum Sharing Problem Statement and Objectives}
The spectrum sharing problem (including spectrum management and decision) can be stated as follows: ``Given the output of spectrum sensing, determine which channel(s) to use, at what powers, and at what rates, such that a given performance metric (objective function) is optimized.'' This is often a joint optimization problem that is very difficult to solve (often it constitutes an NP-hard problem). Recently, several spectrum assignment strategies have been proposed for CRNs
\cite{[63Go],[72Go],[73Go],[11],[37],[75Go],[70Go],[3av],[74Go],[43FT],[71Go],[Haythem1],[79Go],[Haythem2],[Haythem3],[69SD],[82SD],[76Go],[77Go],[78Go],[80Go],[53EGo],[54EGo],[91EGo],[92EGo],[71F],[75F],[77F],[78F],[95Co],[96Co],[97Co]}. These strategies are designed to optimize a number of performance
metrics including:
\begin{itemize}
\item Maximizing the CR throughput (individual users or network-level) based on Shannon capacity or a realistic staircase rate-SINR function (e.g., \cite{[63Go],[72Go],[73Go]}).
\item Minimizing the number of assigned channels for each CR transmission (e.g., \cite{[11],[37]}).
\item Maximizing the CR load balance over the different PR channels (e.g., \cite{[75Go]}).
\item Minimizing the probability of PR disruption, i.e., minimizing the PR outage probability (e.g., \cite{[70Go],[3av]}).
\item Minimizing the average holding time of selected channels (minimizing the PR disruption) (e.g., \cite{[74Go],[43FT]}).
\item Minimizing the frequency of channel switching due to PR appearance by selecting the channel with maximum residual idle time, i.e., minimizing the CR disruption in terms of forced-termination rate (e.g., \cite{[71Go],[Haythem1],[79Go]}).
\item Maximizing the CR probability of success (e.g., \cite{[Haythem2],[Haythem3]}).
\item Minimizing the spectrum switching delay for CR users (e.g., \cite{[69SD],[82SD],[76Go]}).
\item Minimizing the expected CR waiting time to access a PR channel (e.g., \cite{[77Go],[78Go]}).
\item Minimizing CRN overhead and providing CR QoS (e.g., \cite{[80Go]}).
\item Minimizing the overall energy consumption (e.g., \cite{[53EGo],[54EGo],[91EGo],[92EGo]}).
\item Achieving fair spectrum allocation and throughput distribution in the CRN (e.g., \cite{[71F],[75F],[77F],[78F]}).
\item Maintaining CRN connectivity with predefined QoS requirements (e.g., \cite{[95Co],[96Co],[97Co]}).
\end{itemize}
We note here that the spectrum sharing problem for any of the aforementioned objectives is, in general, NP-hard.
Therefore, several heuristic algorithms and approximations have been proposed to provide suboptimal solutions to the problem in polynomial time.
These heuristics and approximations can be classified based on their adopted optimization method as: graph theory-based algorithms (e.g., \cite{[134GRAph]}), game theory-based algorithms (e.g., \cite{[82SD]}), genetic-based algorithms (e.g., \cite{[Genetic]}), linear programming relaxation-based algorithms (e.g., \cite{[66Go]}), fuzzy logic-based algorithms (e.g., \cite{[148FU]}), dynamic programming-based algorithms (e.g., \cite{[66DY]}), and sequential-fixing-based algorithms (e.g., \cite{[37]}).
\section{Issues in Designing Spectrum Sharing Mechanisms}
\subsection{Interference Management and Co-existence Issue}
The coexistence problem is one of the most limiting factors in achieving efficient CR communications. In a CRN environment, three kinds of harmful interference should be considered: PR-to-CR and CR-to-PR interference (the so-called PR coexistence) and CR-to-CR interference (the so-called self-coexistence). While several mechanisms have been proposed to effectively deal with the PR-to-CR interference problem based on cooperative (e.g., \cite{[Sensing2],[Sensing3],[Sensing4],[Sensing5]}) or noncooperative (e.g., \cite{[Sensing1],[Sensing11],[Sensing12]}) spectrum sensing, the CR-to-CR and CR-to-PR interference problems are still challenging issues.
\subsubsection{Self-coexistence Management}
To address the CR-to-CR interference problem in ad hoc CRNs, several channel allocation and self-coexistence management mechanisms have been proposed based on either (1) exclusive channel assignment or (2) joint channel assignment and power control. In contrast, the CR-to-CR interference problem has been addressed in multi-cell centralized CRNs based on either fixed channel allocation~\cite{[8WRAN],[9WRAN],[10WRAN],[11WRAN],[7WRAN]} or adaptive traffic-aware/spectrum-aware channel allocation~\cite{[9WRANH],[10WRANH],[11WRANH]}.
\subsubsection{Providing Performance Guarantees to PR users}
It has been shown that CR-to-PR interference is the most crucial interference in a CRN environment, because it has a direct effect on the performance of PRNs. Hence, the transmission power of CR users over the PR channels should be adaptively computed such that the performance of the PRNs is protected. Based on the outcomes of spectrum sensing, two different power control strategies can be identified: binary and multi-level transmission power strategies. According to the binary-level strategy (the most widely used power control strategy in CRNs), CR users can only transmit over idle channels with no PR activities. Specifically, for a given PR channel, a CR user transmits with zero power if the channel is busy, and uses the maximum possible power if the PR channel is idle. While this strategy ensures collision-free spectrum sharing between the CR and PR users, it requires perfect spectrum sensing. Worse yet, the binary-level strategy can lead to non-optimal spectrum utilization. On the other hand, using a multi-level adaptive frequency-dependent transmission power strategy allows the CR and PR users to simultaneously share the available spectrum in the same locality, which can significantly improve spectrum utilization. By allowing CR users to utilize both idle and partially-occupied PR channels, much better spectrum utilization can be achieved. The multi-level power strategy can also be made time-dependent to capture the dynamics of PR activities. Under this strategy, controlling the CR-to-PR interference is nontrivial. In addition, computing the appropriate multi-level power strategy is still a challenging issue, which has been studied under some simplified assumptions. Specifically, the authors in \cite{[11]} proposed an adaptive multi-level frequency- and locality-dependent CR transmission power strategy that provides a soft guarantee on PRNs' performance.
This adaptive strategy is dynamically determined according to the PR traffic activities and interference margins.
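The contrast between the two strategies can be sketched as follows. This is a hedged illustration only: the interference margin and path-gain inputs are assumptions for the example, not the actual scheme of \cite{[11]}.

```python
P_MAX = 1.0  # maximum CR transmission power (normalized)

def binary_power(channel_busy: bool) -> float:
    """Binary-level strategy: full power on an idle channel, zero on a busy one."""
    return 0.0 if channel_busy else P_MAX

def multilevel_power(interference_margin: float, gain_to_pr: float) -> float:
    """Multi-level strategy (illustrative): transmit at the largest power that
    keeps the interference seen by the PR receiver within its margin."""
    if gain_to_pr <= 0.0:
        return P_MAX  # no coupling to the PR receiver
    return min(P_MAX, interference_margin / gain_to_pr)

# A busy channel is unusable under the binary strategy, but may still admit
# a low-power CR transmission under the multi-level one.
```

The key point the sketch makes is that a partially-occupied channel, worthless under the binary rule, still contributes nonzero (interference-limited) capacity under the multi-level rule.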
\subsection{Distributed Coordination Issue}
\label{sec:related}
In this section, we review several well-known distributed coordination mechanisms designed for CRNs. We note that control channel designs for CRNs can be loosely classified into six different categories~\cite{[12],[13]}:
\begin{itemize}
\item Dedicated pre-specified out-of-band common control channel (CCC) design \cite{[3av],[3],[8],[8a],[9]}.
\item Non-dedicated pre-specified in-band CCC design \cite{[16a],[Haythem3],[5av],[15]}.
\item Hybrid out-of-band and in-band CCC design \cite{[HYBRID]}.
\item Hopping-based control channel design \cite{[79Gox],[18],[19],[22],[23],[24],[25]}.
\item Spread-spectrum-based control channel design \cite{[DSSS],[DSSS1]}.
\item Cluster-based local CCC design~\cite{[27],[27a],[28],[38],[29]}.
\end{itemize}
Despite the fact that using a dedicated out-of-band CCC is straightforward, it contradicts the opportunistic behavior of CRNs and may result in a single point of failure (SPOF) and a performance bottleneck due to CCC saturation under high CR traffic loads. Similarly, using a pre-specified non-dedicated in-band CCC is not a practical solution due to spectrum heterogeneity and, if one exists, such a solution can result in a SPOF, become a performance bottleneck, and introduce security challenges. Another approach that can effectively deal with the CCC saturation issue (bottleneck problem) is to use a hybrid out-of-band and in-band CCC (simultaneous control communications over in-band PR channels and dedicated out-of-band CCCs). This approach exploits the strengths of out-of-band and in-band signaling and, hence, can significantly enhance the performance of multi-hop CRNs. Using a hopping-based control channel can address the SPOF, bottleneck and security issues. However, in this type of solution, the response to PR appearance is challenging as CR users cannot use a PR channel once it is reclaimed by PR users. In addition, this type of solution is generally spectrum unaware. Another key design issue in such solutions is the communication delay, which heavily depends on the time to rendezvous. Using cluster-based coordination solutions, where neighboring CR users are dynamically grouped into clusters and establish local CCCs, can provide reliable distributed coordination in CRNs~\cite{[13],[12]}. However, adopting this type of solution in a multi-hop CRN is limited by several challenges, such as providing reliable inter-cluster communication (i.e., different clusters may consider different CCCs), maintaining connectivity, broadcasting control information, identifying the best/optimal cluster size, and maintaining time-synchronization \cite{[13]}.
Finally, using spread-spectrum-based distributed coordination is a promising solution to most of the aforementioned design challenges, but the practicality and design issues of such solution need to be further investigated. According to this solution, the control information is spread over a huge PR bandwidth with a very low transmission power level (below the noise level). Consequently, with a proper design, an efficient CCC design can be implemented using spread spectrum with minor effect on PRNs' performance. In conclusion, various distributed coordination mechanisms have been developed to provide reliable communications for CRNs, none of which are totally satisfactory. Hence, designing efficient distributed coordination schemes in CRNs should be based on novel coordination mechanisms along with effective transmission technologies that enable effective, robust and efficient control message exchanges.
\section{Tradeoffs in Selecting the Operating Channel}
The spectrum (channel) assignment problem in CRNs has been extensively studied in the literature. Existing channel assignment/selection solutions can loosely be classified into three categories: best link-quality schemes, larger availability-period schemes, and joint link-quality and channel-availability-aware schemes. It has been shown (e.g.,\cite{[3av],[2av],[Haythem1]}) that using the best link-quality schemes in CRNs, where the idle channel(s) with the highest transmission rate(s) are selected, can only provide good performance under relatively static PR activities with average PR channel idle durations that are much larger than the needed transmission times for CR users \cite{[3av],[5av],[Haythem1],[Haythem2],[Haythem3]}. Under highly dynamic PR activities, this class of schemes can result in increasing the CR termination rate, leading to a reduction in CRN performance as a CR user may transmit over a good-quality PR channel with relatively short availability time (short channel-idle period). On the other hand, employing the larger availability-period schemes in CRNs (e.g., \cite{[6av]}) can result in increasing the CR forced-termination rate as an idle PR channel of very poor link-quality (low transmission rate) may be chosen, resulting in a significant reduction in CRN performance. We note here that the interaction between the CRN and PRNs is fundamental for conducting channel assignment in CRNs.
The above discussion presents sufficient motivation to jointly consider the link-quality and average idle durations of PR channels when assigning operating channels to CR users. However, several open questions in this domain still need to be addressed; possibly the most challenging one is how to combine the link-quality and average idle durations into one metric to perform channel assignment. Other important questions are: How can a CR user estimate the distribution of the idle periods of the different PR channels? What are the implications of the interaction between the CRN and the PRNs? How can a CR user determine the link-quality conditions over the various (large number of) PR channels? Some of these questions have been addressed in \cite{[Haythem1],[Haythem2],[Haythem3]} by introducing the CR packet success probability metric. This metric is derived based on stochastic models of the time-varying PR behaviors. The probability of success over a given channel is a function of both the link-quality condition and the average idle period of that PR channel. It has been proven that it is necessary to jointly consider the link-quality conditions and availability times of available PR channels to improve the overall network performance~\cite{[Haythem1]}.
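As an illustration of such a joint metric, the sketch below scores a channel by the probability that its idle period outlasts the packet transmission time, assuming exponentially distributed idle periods. This is a simplifying assumption for the example (the text notes the periods are generally distributed), and it is not the exact success-probability metric of \cite{[Haythem1]}; all channel parameters are hypothetical.

```python
import math

def success_probability(rate_bps: float, packet_bits: float,
                        mean_idle_s: float) -> float:
    """P(idle period > transmission time) for an exponential idle period:
    exp(-t_tx / T_I_bar), where t_tx = packet_bits / rate_bps."""
    t_tx = packet_bits / rate_bps
    return math.exp(-t_tx / mean_idle_s)

def best_channel(channels):
    """Select the channel maximizing the joint quality/availability metric."""
    return max(channels, key=lambda ch: success_probability(
        ch["rate"], ch["bits"], ch["mean_idle"]))

channels = [
    {"name": "A", "rate": 10e6, "bits": 1e6, "mean_idle": 0.05},  # fast, short-lived
    {"name": "B", "rate": 2e6,  "bits": 1e6, "mean_idle": 2.0},   # slower, long-lived
]
```

With these illustrative numbers the slower channel B wins: its longer expected idle period more than compensates for its lower rate, which is precisely the tradeoff a best-link-quality scheme misses.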
\section{State-of-the-Art Spectrum Sharing Protocols in CRNs}
Several attempts have been made to design spectrum sharing protocols with the objective of improving the overall spectrum utilization while protecting the performance of licensed PRNs. Existing spectrum sharing/access protocols and schemes for CRNs can loosely be categorized into four main classes based on: the number of radio transceivers per CR user (single-transceiver, dual-transceiver, or multiple-transceiver), their reaction to PR behavior (reactive, proactive, or interference threshold-based), their spectrum allocation behavior (exclusive or non-exclusive spectrum occupancy model), and the guardband considerations (guardband-aware or guardband-unaware).
\subsection{Number of Radio Transceivers and Assigned Channels}
Spectrum sharing protocols and schemes
for CRNs can also be categorized based on the number of radio transceivers per CR user (i.e., single transceiver \cite{[116],[120],[4SR],[129],[127],[131],[128]}, dual transceivers~\cite{[124],[147]}, and multiple transceivers~\cite{[3av],[5av],[Haythem2],[2MC]}). Using multiple (or dual) transceivers greatly simplifies the task of spectrum access design and significantly improves system performance. This is because a CR user can simultaneously utilize multiple channels (the potential benefits of utilizing multi-channel parallel transmission in CRNs were demonstrated in \cite{[37],[Taos]}). In addition, spectrum access issues such as hidden/exposed terminals, transmitter deafness and connectivity can be easily overcome as one of the transceivers can be switched to the assigned control channel (i.e., CR users can always receive control packets over the CCC even when they are operating over the data channels). However, the achieved performance gain of using multiple transceivers (multi-channel parallel transmission) comes at the expense of extra hardware. Worse yet, the optimal joint channel assignment and power control problem in multi-transceiver CRNs is, in general, NP-hard. On the other hand, it has been shown that the design of efficient channel assignment schemes
for single-transceiver single-channel low-cost CRNs is simpler than that of the multi-transceiver counterpart \cite{[5av]}. While single-transceiver designs can greatly simplify the task of finding the optimal channel assignment, the aforementioned channel access issues are not trivial and the performance is limited to the capacity of the selected channel.
\subsection{Reaction to PR Appearance}
Spectrum sharing schemes in the CRNs can also be classified based on their reaction to the appearance of PR users into three main groups: (1) proactive (e.g., \cite{[83Pro],[84Pro],[86Pro]}), (2) reactive (e.g., \cite{[91Reactive],[87Reactive],[88Reactive]}), and (3) interference threshold-based (e.g.,~\cite{[Taos],[37],[11],[68Inter]}). In reactive schemes, the active CR users switch channels after the PR appearance. On the other hand, in proactive schemes, the CR users predict the PR appearance and switch channels accordingly. The threshold-based schemes allow the CR users to share the spectrum (both idle and partially-occupied PR channels) with PR users as long as the interference introduced at the PR users is within acceptable values. Existing threshold-based schemes attempt at reducing the impacts of the un-controllable frequency-dependent PR-to-CR interference on CRN performance through proper power control based on either (1) the instantaneous sensed interference~\cite{[37]}, (2) the average measured PR interference \cite{[Taos]}, or (3) using stochastic PR interference models \cite{[11]}.
\subsection{Spectrum Sharing Model}
The spectrum sharing model represents the type of interference model used to solve the channel and power assignment problem. There are two different spectrum sharing models: the protocol (interference avoidance) and physical (interference) models~\cite{[13]}. The former employs an exclusive channel occupancy strategy, which eliminates the CR-to-CR interference and simplifies the management of the CR-to-PR interference \cite{[37],[3av]}. However, it does not support concurrent CR transmissions over the same channel, which may reduce the spectrum efficiency. On the other hand, the overlay physical model allows multiple concurrent interference-limited CR transmissions to simultaneously proceed over the same channel in the same locality, which improves spectrum efficiency \cite{[FAN]}. However, the power control issue (CR-to-CR and CR-to-PR interference management) under this model is not trivial. Worse yet, using this model requires a distributed iterative power adjustment for individual CR users, which has been shown to result in slow protocol convergence~\cite{[FAN]}.
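The feasibility check at the heart of the physical (SINR) model can be sketched as follows; the gains, noise level and SINR threshold are illustrative assumptions, not parameters from any cited scheme.

```python
def sinr_feasible(p_tx, gains, noise, sinr_min):
    """Physical-model check: every concurrent link i must satisfy
    SINR_i = g[i][i]*p[i] / (noise + sum_{j!=i} g[i][j]*p[j]) >= sinr_min.
    gains[i][j] is the path gain from transmitter j to receiver i."""
    n = len(p_tx)
    for i in range(n):
        signal = gains[i][i] * p_tx[i]
        interference = sum(gains[i][j] * p_tx[j] for j in range(n) if j != i)
        if signal / (noise + interference) < sinr_min:
            return False
    return True

# Two weakly coupled CR links sharing one channel: feasible at a modest
# SINR threshold, infeasible at a stringent one.
gains = [[1.0, 0.1],
         [0.1, 1.0]]
```

Under the protocol model, by contrast, the second link would simply be denied the channel regardless of how weak the cross gains are.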
\subsection{Guard-band Considerations}
Most existing spectrum sharing protocols for CRNs were designed assuming orthogonal channels, where the adjacent channel interference (ACI) is ignored (e.g., \cite{[16a],[Haythem3],[3av],[5av],[116],[9],[35]}). However, this requires using ideal sharp transmit and receive filters, which is practically not feasible. In practice, frequency separation (guard bands) between adjacent channels is needed to mitigate the effects of ACI and protect the performance of ongoing PR and CR users operating over adjacent channels. It has been shown that introducing guard bands can significantly impact the spectrum efficiency, and hence it is very important to account for the
guard-band constraints when designing spectrum sharing protocols for CRNs.
Only a few CRN spectrum access and sharing protocols have been designed
while accounting for the guard-band issue \cite{[37],[8gb],[9gb]}.
Guard-band-aware strategies enable effective and safe spectrum sharing, have a great potential to enhance the spectral efficiency, and protect the receptions of the ongoing CR and PR transmissions over adjacent channels. The need for guard-band-aware spectrum sharing mechanisms and protocols was discussed in \cite{[37]}. Specifically, the authors of \cite{[37]} investigated the ACI problem and proposed guard-band-aware spectrum access/sharing protocols for CRNs. The main objective of their proposed mechanism is to minimize the total number of reserved guard-band channels such that the overall spectrum utilization is maximized. In \cite{[8gb]}, the authors showed that selecting the operating channels on a per-block (group of adjacent channels) basis instead of a per-channel basis (unlike the work in \cite{[37]}) provides better spectrum efficiency. The work in \cite{[8gb]} attempts to select channels such that at most one guard band is introduced for each new CR transmission. In \cite{[9gb]}, the authors proposed two guard-band spectrum sharing mechanisms for CRNs. The first mechanism is a static single-stage channel assignment that is suitable for distributed multi-hop CRNs. The second one is an adaptive two-stage channel assignment that is suitable for centralized CRNs. The main objective of the proposed mechanisms is to maximize spectrum efficiency while providing soft guarantees on CR performance in terms of a pre-specified rate demand.
\section{Complementary Techniques and Optimization Methods}
In this section, we discuss and explain several methods and optimizations that interact with spectrum sharing protocols to further improve spectrum utilization in CRNs.
\subsection{Resource Virtualization in CRNs}
Resource virtualization refers to the process of creating a number of logical resources
based on the set of all available physical resources, and has been extensively discussed in the literature. This concept allows users
to utilize the logical resources in the same way they use physical resources. It leads to better utilization of the physical resources, as virtualization allows more users to share them. In addition, virtualization introduces an extra layer of security, since a user's application cannot directly control the physical resources. The concept of virtualization was originally used in computer systems to better utilize the available physical resources (e.g., processors, memory, storage units, and network interfaces). These resources are virtualized into separate sets of logical resources, and each set of these virtual resources can be assigned to a different user. System virtualization can achieve: (1) user isolation, (2) customized services, and (3) improved resource efficiency. Virtualization has also been introduced in wired
networks through the framework of virtual private networks (VPNs).
Recently, several attempts have been made to implement the virtualization concept in wireless CRNs. We note here that employing virtualization in CRNs faces several challenges, including spectrum sharing, limited infrastructure, different geographical regions, self-coexistence, PR coexistence, dynamic spectrum availability, spectrum heterogeneity, and user mobility~\cite{[7v]}. In \cite{[SDS1]}, a single-cell CRN virtualization framework was introduced. In this framework, a network with one BS and $M$ physical radio nodes (PNs), each with a varying set of resources, is considered. The resources include the number of radio interfaces at each PN, the set of orthogonal idle channels at each PN, and the employed coding schemes. Each PN hosts a set of virtual nodes (VNs). The VNs located in different PNs can communicate with each other; to facilitate such communications, VNs request resources from their hosting PNs. Simulation results have demonstrated the effectiveness of network virtualization in improving network performance. In \cite{[SDS2]}, the authors proposed a virtualization framework for multi-channel multi-cell CRNs. In this work, a virtualization-based semi-decentralized resource
allocation mechanism for CRNs using the concept of multilayer hypervisors was proposed. The main objective of this work is to reduce the overall CR control
overhead by minimizing the CR users' reliance on the base station for assigning spectrum resources. Simulation results have indicated that the virtualized framework achieves significant improvement in CRN performance (in terms of control overhead, spectrum utilization, and blocking rate) compared to non-virtualized resource allocation schemes.
\subsection{Full Duplex Communications}
The problem of computing the optimal spectrum access strategy for CR users has been well investigated in \cite{[14xx],[17xx],[15xx]}, but only for CR users equipped with half-duplex (HD) transceivers. It has been shown that using HD transceivers can significantly reduce the achieved network performance~\cite{[10xx]}. Motivated by the recent advances in full-duplex (FD) communications and self-interference suppression (SIS) techniques, several attempts have been made to exploit
FD capabilities and SIS techniques in designing communication protocols for CRNs~\cite{[10xx],[11xx],[12xx]}.
The main objective of these protocols is to improve the overall spectrum efficiency by allowing simultaneous transmission and reception (over the same channel or over different channels) at each CR user. These protocols, however, require additional hardware
support (i.e., duplexers). The practical aspects of using FD radios in CRNs need to be further investigated, and the design of effective channel/power/rate assignment schemes for FD-based CRNs is still an open problem.
\subsection{Beamforming Techniques}
Beamforming techniques are another optimization that can enable efficient spectrum sharing~\cite{[Beamforming2],[Beamforming3],[Beamforming4],[Beamforming5]}.
With beamforming, the transmit and receive beamforming coefficients are adaptively computed by each CR user such that the achieved CR throughput is maximized while the interference introduced at the CR and PR users is minimized. Furthermore, the performance gain achieved by using beamforming in CRNs can be significantly improved by allowing adaptive adjustment of the powers allocated to the transmit beamforming weights \cite{[Beamforming5]}. The operational details of such an approach need to be further explored.
\subsection{Software Defined Radios and Variable Spectrum-width}
The use of variable channel widths through channel aggregation and bonding is another promising approach to improving spectral efficiency. However, this approach has not received enough attention. Based on its demonstrated excellent performance (compared to using fixed-bandwidth channels), variable channel width
has been chosen as an effective spectrum allocation mechanism in cellular mobile communication systems, including the recently deployed 4G wireless systems. Thus, it is very important to support variable-bandwidth channels in CRNs. More specifically,
in CRNs, assigning variable bandwidth to different CR users can be achieved through channel bonding and aggregation, which has great potential for improving spectrum efficiency. The use of variable-bandwidth transmission in CRNs is not straightforward, however, due to the dynamic time-variant behavior of PR activities and the hardware nature of most existing CR devices \cite{[13]}, which make it very hard to control the channel bandwidth~\cite{[37]}.
So far, most CR systems have been designed under the assumption that each CR user is equipped with one or several radio transceivers. Using hardware radio transceivers can limit the number of channels that can be assigned to CR users and cannot fully support variable-width channel assignment. One possible approach to enable variable-width spectrum assignment and increase network throughput is to employ software-defined radios (SDRs). The use of SDRs enables CR users to bond and/or aggregate any number of channels, thus enabling variable spectrum-width CR transmissions. SDRs therefore support more efficient spectrum utilization, which significantly improves the overall CRN performance and helps provide QoS guarantees to CR users.
\subsection{Cross-layer Design Principle}
Cross-layer design is essential for efficient operation of CRNs. Spectrum sharing protocols for CRNs should select the next hop and the operating PR frequency channel(s) using a cross-layer design that incorporates the network, MAC, and physical layers. A cross-layer routing metric called the maximum probability of success (MPoS) was proposed in \cite{[Haythem3]}. The MPoS incorporates the link-quality conditions and the average availability periods of PR users to improve the CRN performance in terms of network throughput. The metric assigns operating channels to the candidate routes so that a route with the maximum probability of success and minimum CR forced-termination rate is selected. The main drawback of the MPoS approach is its requirement that the PR channel availability distributions (the probability density functions of the idle periods of the PR channels) be known.
\subsection{Discontinuous-OFDM Technology}
Based on the spectrum availability conditions, and to enable efficient CRN operation, a CR user may need to utilize multiple adjacent (contiguous) idle PR channels (so-called spectrum bonding) or non-adjacent (non-contiguous) idle PR channels (so-called spectrum aggregation). Spectrum bonding and aggregation can be realized using either traditional frequency division multiplexing (FDM) or the discontinuous orthogonal frequency division multiplexing (D-OFDM) technology \cite{[37],[37v],[36ofdn]}. The former requires several half-duplex transceivers and tunable filters at each CR user, where each assigned channel uses one of the available transceivers. While this approach is simple, it requires a large number of transceivers and does not provide the flexibility needed to implement channel aggregation and bonding at a large scale. D-OFDM is a novel wireless radio technology that allows a CR transmission to simultaneously take place over several (adjacent or non-adjacent) channels using one half-duplex OFDM transceiver. In D-OFDM, each channel consists of a distinct, equal-size group of adjacent OFDM sub-carriers, and spectrum bonding and aggregation with any number of channels can be realized through power control: the sub-carriers of a non-assigned channel are assigned zero power, while all the sub-carriers of a selected channel are assigned controlled power levels. We note here that the problem of assigning different powers to different OFDM symbols within the same channel is still an open issue.
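The power-control realization of bonding and aggregation described above can be sketched in a few lines. The channel count, sub-carriers per channel, and power level below are illustrative assumptions only, not parameters from any cited work:

```python
import numpy as np

# Assumed toy layout: 8 PR channels, each mapped to a distinct group of
# 16 adjacent OFDM sub-carriers of a single D-OFDM transceiver.
NUM_CHANNELS, SUBCARRIERS_PER_CH = 8, 16

def dofdm_power_mask(assigned_channels, power_level=1.0):
    """Per-sub-carrier power vector: sub-carriers of non-assigned channels
    get zero power; sub-carriers of each selected channel get a controlled
    (here uniform) power level."""
    mask = np.zeros(NUM_CHANNELS * SUBCARRIERS_PER_CH)
    for ch in assigned_channels:
        start = ch * SUBCARRIERS_PER_CH
        mask[start:start + SUBCARRIERS_PER_CH] = power_level
    return mask

# Aggregating the non-adjacent idle channels 1 and 5 with one transceiver:
mask = dofdm_power_mask([1, 5])
```

The same mask mechanism covers bonding (adjacent channels) and aggregation (non-adjacent channels) without any extra hardware.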
\subsection{Spectrum Sharing for MIMO-Based Ad Hoc CRNs}
Multiple-input multiple-output (MIMO) is considered a key technology for increasing the achieved wireless performance. Specifically, MIMO can be used to improve spectrum efficiency, throughput, wireless capacity, network connectivity, and energy efficiency. The majority of previously proposed works on MIMO-based CRNs (e.g., \cite{[15MIMO],[16MIMO],[10MIMO]}) have focused on the physical layer and addressed only a few of the challenging issues at the upper layers; more effort is still required to investigate the achievable capacity of MIMO-based CRNs, the design of optimal channel/power/rate assignment for such CRNs, interoperability with non-MIMO CRNs, and many other challenging issues.
\subsection{Cooperative CR Communication (Virtual MIMO)}
One of the main challenges in the design of CRN communication protocols is the time-varying
nature of the wireless channels due to PR activities and multi-path fading. Cooperative communication is a promising approach that can cope with this time-varying nature and hence improve CRN performance. Cooperative communication can create a virtual MIMO system by allowing CR users to assist each other in data delivery (by relaying data packets to the receiver). Hence, the data packets received at the CR destination traverse
several independent paths, achieving diversity gains. Cooperative communication can also extend the coverage area. The benefits of employing cooperative communication, however, come at the cost of increased power consumption, computation, and system complexity. It has been shown that cooperation may potentially lead to significant long-term resource savings for the whole CRN. An important challenge in this domain is how to design effective cooperative MAC protocols that combine cooperative communication with the CR multiple-channel capability such that the overall network performance is improved. CR relay selection is another challenging problem that needs to be further investigated. Therefore, new cooperative CRN MAC protocols and relay selection strategies are needed to effectively utilize the available resources and maximize network performance.
\subsection{Network Coding}
Network coding is another interesting approach that has not yet been explored in CRNs. Based on its verified excellent performance in wireless networks \cite{[NETCOD]}, it is natural to consider it in the design of cooperative-based CRNs. The packet relaying strategies in cooperative communication are generally implemented on a per-packet basis using a store-and-forward (SF) technique: packets at the CR relays are received, stored, and retransmitted toward the receiver. While this type of relaying is simple, it has been shown to provide sub-optimal performance in terms of the overall achieved CRN throughput (especially in multi-cast scenarios). Instead of using SF, network coding can be used to maximize the CRN performance. With network coding, intermediate relay CR users can combine the incoming packets using mathematical operations (additions and subtractions over finite fields) to generate the output packets.
One drawback of network coding is that the computational complexity increases with the finite field size; the larger the field, the better the network performance. This tradeoff should be further investigated, and more effort is required to identify and study the benefits and drawbacks of increasing the field size in CRNs. In addition, the performance achieved through network coding can be
further enhanced in CRNs by dynamically adapting the total number of coded packets sent by the source CR user. Such adaptation, which should be based on the PR activities, link loss rates, link correlations, and node reachability, is yet to be explored.
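As a minimal illustration of the combining idea (a toy GF(2) example, i.e., bitwise XOR, rather than a general finite-field code): a relay that forwards $p_1$, $p_2$, and $p_1\oplus p_2$ lets the destination recover both packets from any two of the three transmissions, which pure store-and-forward cannot do.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length packets over GF(2) (bitwise XOR)."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"\x12\x34\x56", b"\xab\xcd\xef"   # two incoming packets at a CR relay
coded = xor_packets(p1, p2)                 # the relay's coded transmission

# Suppose the destination received p1 and the coded packet but p2 was lost:
recovered_p2 = xor_packets(p1, coded)       # XOR is its own inverse
```

Larger fields (e.g., GF($2^8$) in random linear codes) generalize this at higher computational cost, which is the tradeoff noted above.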
\section{Summary and Open Research Problems}
CR technology has great potential to enhance the overall spectrum efficiency. In this paper, we first highlighted the main existing CRN architectures. Then, we described the unique characteristics of their operating RF environment that need to be accounted for in designing efficient communication protocols and spectrum assignment mechanisms for these networks. We then surveyed several spectrum sharing approaches for CRNs and showed that these approaches differ in their design objectives. Ideally, one would like to design a spectrum sharing solution that maximizes spectrum efficiency while causing no harmful interference to PR users. We showed that interference management (including self-coexistence and PR coexistence) and distributed coordination are the most crucial issues in designing efficient spectrum sharing mechanisms. The key idea in the design of effective spectrum sharing and assignment protocols for CRNs is to jointly consider the PR activities and the CR link-quality conditions.
The reaction to PR appearance is another important issue in designing spectrum sharing schemes for CRNs. Currently, most spectrum sharing schemes are either reactive or proactive. Interference threshold-based schemes are very promising, and more research should be conducted to explore their advantages and investigate their complexity. Another crucial and challenging problem is the incorporation of guard-band constraints in the design of spectrum sharing schemes for CRNs. A significant amount of interference is leaked into adjacent channels when guard bands are not used, which can significantly reduce spectrum efficiency and cause harmful interference to PR users. The effect of introducing guard bands on spectrum sharing design has not been well explored.
Many interesting open design issues remain to be addressed. The variable-width spectrum sharing approach is very promising, but its design assumptions and feasibility should be carefully investigated. Resource virtualization is another important concept that can significantly improve the overall spectrum utilization. Beamforming and MIMO technology have recently been proposed as means of maximizing spectrum efficiency. The use of beamforming in CRNs with MIMO capability can achieve significant improvement in spectrum efficiency; however, the spectrum sharing problem becomes more challenging due to the resurfacing of several design issues, such as the determination of the beamforming weights and the joint channel assignment and power control, which need to be further addressed. Research should also focus on cooperative CR communication and cross-layer design. The use of FD radios versus HD radios is another interesting issue. Moreover, utilizing network coding is very promising for improving CRN performance. Finally, we showed that channel bonding and aggregation can be realized through the D-OFDM technology, which allows a CR user to simultaneously transmit or receive over multiple channels using a single radio transceiver.
\section{Introduction}
The computation of $\int_a^b f(x)e^{i w g(x)}dx$
occurs in a wide range of practical problems
and applications,
{\color{black}
e.g.,} nonlinear optics, fluid dynamics, computerized tomography,
celestial mechanics, electromagnetics, acoustic scattering, etc.
The high oscillation ($|w|\gg 1$) means that classical Gaussian quadrature requires $\mathcal{O}(w)$ quadrature points, which is impractical.
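To make the cost concrete, the following sketch (purely illustrative; the node counts are arbitrary choices) compares a fixed low-order Gauss--Legendre rule with the exact value $\int_0^1 e^{iwx}\,dx=(e^{iw}-1)/(iw)$ for $w=200$:

```python
import numpy as np

def gauss_legendre_01(func, n):
    """n-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]."""
    x, wts = np.polynomial.legendre.leggauss(n)
    return np.sum(0.5 * wts * func(0.5 * (x + 1.0)))

w = 200.0
exact = (np.exp(1j * w) - 1.0) / (1j * w)           # closed form of the integral
osc = lambda x: np.exp(1j * w * x)
err_low = abs(gauss_legendre_01(osc, 10) - exact)   # ~32 oscillations, 10 nodes
err_high = abs(gauss_legendre_01(osc, 500) - exact) # O(w) nodes needed for accuracy
```

With only 10 nodes the rule cannot resolve the oscillation at all; only after the node count grows to $\mathcal{O}(w)$ does the error decay, which is exactly the regime that frequency-robust methods are designed to avoid.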
To handle the difficulty caused by rapid oscillation, many effective methods have been proposed for oscillatory integrals without
{\color{black}
singularities, such} as Filon-type methods \cite{GAO2017,ISERLES2004,ISERLES2005N}, Levin methods \cite{LEVIN1982,OLVER2006}, the generalized quadrature rule \cite{EVANS2000},
{\color{black} and} numerical steepest-descent methods \cite{HUYBRECHES2006}. We refer interested readers to \cite{ISERLES2018B,HUYBRECHES2009} for a review of these methods.
However, in the context of electromagnetic and acoustic scattering, one frequently must compute many oscillatory integrals with singularities of the form
\begin{equation}\label{SOI}
I_{w}^{[0,a]}[f,s,g]:=\int_0^a f(x)s(x)e^{iwg(x)}dx
\end{equation}
(see \cite{BRUNO2004,CHANDLER2012,COLTON1983,ISERLES2018B,DOMINGUEZ2013,SPENCE2014}),
where $f$ and $g$ are suitably smooth functions, $g'(x)\neq 0, x\in[0,a]$, and $w$ is a real
{\color{black} parameter, the absolute value of which could be extremely large.} Without loss of generality, we assume $g(0)=0$ and $g'(x)>0$ for $x\in[0,a]$. If $g(0)\neq 0$, we replace $g(x)$ by $g(x)-g(0)$, and if $g'(x)<0, x\in(0,a]$, the function $g(x)$ is replaced by $-g(x)$ and $w$ by $-w$, respectively.
The function $s$ is singular
{\color{black}
and the singularity is located at $x=0$.} If the
integral has finitely many singular points, then it can
be rewritten in terms of integrals of the form $I_{w}^{[0,a]}[f,s,g]$.
{\color{black}
A significant amount of work} has also been done on the computation of singular and oscillatory integrals of the type \eqref{SOI}.
The asymptotic behavior for the integral was obtained by repeated integration by parts \cite{ERDELYI1955,GAO2016} or by the inverse functions \cite{LYNESS2009}. When the oscillator is linear, i.e., $g(x)=x$, it was studied by the Clenshaw-Curtis-Filon-type methods \cite{KANG2011,KANG2013,XAINGGUO2014},
{\color{black}
in which the modified moments} can be obtained numerically by stable recurrence relations.
However,
{\color{black}
these methods} may not be suitable for the general case since it is difficult to
{\color{black}
accurately calculate the modified moments $\int_0^a T_j(x)s(x)e^{iwg(x)}dx$, where $T_j(x)$} denotes the shifted Chebyshev polynomial of the first kind of degree $j$.
A composite Filon-Clenshaw-Curtis quadrature was proposed in \cite{DOMINGUEZ2013}
{\color{black}
based on} efficient evaluation of the inverse function of the oscillator for the case of nonlinear oscillators.
Another kind of composite method was developed recently in \cite{MAXU2017}, based on a careful design of meshes to achieve polynomial-order or exponential-order convergence.
{\color{black}
The main disadvantage of the composite methods is that sub-intervals near the singular point in the designed mesh have very small lengths and thus may cause serious round-off-error problems.}
Based on the numerical steepest-descent method, Gauss-type quadrature has been used for the computation of highly oscillatory integrals with algebraic singularities and a linear
oscillator \cite{HE2014,XU2015,Xu2016}. There is still much work to do on the computation of singular and oscillatory integrals, especially with complicated oscillators,
{\color{black}
in terms of efficiency and accuracy.}
In this paper, we are interested in efficient numerical methods for \eqref{SOI} with
\[ s(x)=x^\alpha\;\text{or}\; x^\alpha\log x,\quad 0<|\alpha|<1,\]
and develop new efficient methods based upon the classic Levin method, quite different from the existing methods, to compute integrals of the type $I_{w}^{[0,a]}[f,s,g]$. For concreteness, we set $s_1(x)=x^\alpha$ and $s_2(x)=x^\alpha\log x$.
The spirit of
{\color{black}
the Levin method} in the computation of the integral $I_w^{[0,a]}[f,s,g]$ is to find a function $p$ such that $\left(p(x)e^{iwx}\right)'=f(x)s(x)e^{iwx}$. This is
{\color{black}
equivalent to obtaining} a particular solution of the ODE,
\begin{equation}\label{levinode}
\mathcal{L}[p](x)\equiv p'(x)+iwp(x)=f(x)s(x).
\end{equation}
However, the particular solution of the ODE \eqref{levinode}
{\color{black}
cannot} be obtained directly by collocation methods due to the singular forcing function. The singularity would cause large errors.
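For orientation, a minimal sketch of the classic Levin collocation for a \emph{smooth} (nonsingular) integrand is given below: collocate $p'+iwg'p=f$ in a polynomial basis and evaluate $p\,e^{iwg}$ at the endpoints. The monomial basis, Chebyshev nodes, and the helper name \texttt{levin} are illustrative assumptions, not the scheme analyzed in this paper:

```python
import numpy as np

def levin(f, g, gp, a, b, w, n=16):
    r"""Classic Levin collocation for I = \int_a^b f(x) e^{i w g(x)} dx with
    smooth f, g: solve p'(x) + i w g'(x) p(x) = f(x) at n Chebyshev points,
    then I ~ p(b) e^{i w g(b)} - p(a) e^{i w g(a)}."""
    x = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * np.arange(n) / (n - 1))
    V = np.vander(x - a, n, increasing=True)       # monomial basis (x-a)^j
    dV = np.zeros_like(V)
    dV[:, 1:] = V[:, :-1] * np.arange(1, n)        # derivatives of the basis
    A = dV + 1j * w * gp(x)[:, None] * V           # collocation matrix
    c = np.linalg.solve(A, f(x).astype(complex))
    p = lambda t: np.polyval(c[::-1], t - a)
    return p(b) * np.exp(1j * w * g(b)) - p(a) * np.exp(1j * w * g(a))

# Example: \int_0^1 cos(x) e^{100 i x} dx
val = levin(np.cos, lambda x: x, lambda x: np.ones_like(x), 0.0, 1.0, 100.0)
```

The cost is a single small linear solve independent of the frequency; the point of this paper is that the singular forcing $f\,s$ destroys this simple picture and requires the separation developed next.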
To deal with the singularity, a singularity-separation technique is developed. The computation of the integral $I_{w}^{[0,a]}[f,s,g]$ can be converted into the solution of two kinds of ODEs.
One kind of ODE has an explicit solution with the vanishing initial condition, while the other
possesses a specific structure of the form
\begin{equation}\label{modelode}
iwg'(x)c_0+g(x)q_1'(x)+[1+\alpha+iwg(x)]g'(x)q_1(x)=f(x),
\end{equation}
where $f$ and $g$ are given functions, and function $q_1$ and coefficient $c_0$ are unknown
{\color{black}
and must be determined.}
It will be proved in this paper that there exists at least one solution pair of \eqref{modelode} consisting of a non-oscillatory function $q_1$ and a number $c_0$. The term \textit{non-oscillatory} is understood in the sense that the high-order derivatives of the function are bounded independently of the frequency.
This means that the ODE \eqref{modelode} can be solved well by collocation methods without the influence of the high oscillation. A new collocation method is developed for
{\color{black}
the ODE \eqref{modelode}} by adopting the differential matrix based on the Chebyshev-Gauss-Radau points.
In particular, following the asymptotic method and convergence rates for Filon-type method in \cite{ISERLES2004,ISERLES2005N}, the convergence for the new Levin methods is derived.
We also show the equivalence between the new Levin methods and the corresponding Filon-type methods
{\color{black}
with a proper basis.} The new methods for the oscillatory integrals with algebraic and/or logarithmic singularities can avoid
{\color{black}
the round-off-error problem} caused by the tiny meshes and the computation of the modified moments, and also enjoy
{\color{black}
the following merits.}
\begin{enumerate}
\item They are applicable for
{\color{black}
nonlinear oscillators.}
\item They converge
{\color{black}
supralgebraically} with respect to the number of collocation points, even under high oscillation.
\item Their asymptotic order with respect to the frequency is $\mathcal{O}(w^{-s-1-\min\{1+\alpha,1\}})$ for algebraic singularities and $\mathcal{O}(\delta_{\alpha}(w)w^{-s-1-\min\{1+\alpha,1\}})$ for algebraic and logarithmic singularities, where $\delta_\alpha$ is defined in \eqref{delta1}.
\end{enumerate}
{\color{black}
The rest of this paper} is organized as follows. In Section 2, we develop a new Levin method for oscillatory integrals with algebraic singularity and then analyze the asymptotic order and the convergence. The equivalence between the new Levin method and the Filon-type method is studied. Another new Levin method is developed analogously for $I_{w}^{[0,a]}[f,s_2,g]$ in
{\color{black}
Section 3}. We construct the new collocation method for
{\color{black}
the ODE \eqref{modelode} in Section 4.
Numerical results are shown in Section 5 to validate the theory developed herein. }
\section{New Levin method for \texorpdfstring{$I_w^{[0,a]}[f,s_1,g]$}{I1}}
We commence from the integral $I_w^{[0,a]}[f,s_1,g]$ with algebraic singularity, assuming that the oscillator $g$ is strictly monotone in $[0,a]$. To cope with the singularity, our basic idea is to seek a particular solution
{\color{black}
the singularity of which is represented separately.}
Based on the observation that the solution of the corresponding ODE \eqref{levinode} possesses the algebraic singularity, a particular solution $p$ is assumed to have the specific form
\[p(x)=q(x)g^{\alpha}(x)+h(x),\]
where
{\color{black}
the functions $q$ and $h$} need to be determined. The selection of $g^\alpha(x)$ instead of $x^\alpha$ is necessary, as will be seen later.
The substitution of $p$ in
{\color{black}
the ODE \eqref{levinode}} leads to a new ODE for $q$ and $h$,
\begin{equation}\label{gpeq}
\begin{split}
\left (q'(x)+iw g'(x)q(x)\right)g^{\alpha}(x) & +h'(x)+iw g'(x)h(x)\\
& +\alpha q(x)g'(x)g^{\alpha-1}(x)=f(x)x^{\alpha}.
\end{split}
\end{equation}
{\color{black}
A new function is defined,}
\begin{equation}\label{f1}
f_1(x)=\begin{cases}f(x)\left(\frac{x}{g(x)}\right)^\alpha, \;x\neq0, \\
\frac{f(0)}{(g'(0))^\alpha},\quad\quad\; x=0. \end{cases}
\end{equation}
Two decoupled ODEs for $q$ and $h$ are obtained from \eqref{gpeq}
{\color{black}
separately} by the superposition principle according to the singularity:
\begin{eqnarray}
q'(x)+iw g'(x)q(x)+\alpha g'(x)\frac{q(x)-c_0(1-e^{-iwg(x)})}{g (x)} &=& f_1(x), \label{gqeq}\\
h'(x)+iw g'(x)h(x) +\alpha c_0g'(x)\frac{1-e^{-iwg(x)}}{g^{1-\alpha}(x)}&=& 0, \label{gheq}
\end{eqnarray}
where $c_0$ is an unknown parameter to be determined. Note that a minor trick was used in the splitting procedure: adding and then subtracting the term $\alpha c_0g'(x)\frac{1-e^{-iwg(x)}}{g^{1-\alpha}(x)}$. This minor modification is what makes the Levin method effective in computing singular and oscillatory integrals.
{\color{black}
Letting $q_1(x)=\frac{q(x)-c_0(1-e^{-iwg(x)})}{g(x)}$,} the equation \eqref{gqeq} is simplified in a clear form,
\begin{equation}\label{gq1eq}
iwg'(x)c_0+g(x)q_1'(x)+[1+\alpha+iwg(x)]g'(x)q_1(x)=f_1(x).
\end{equation}
Note that the solution $q$ in \eqref{gqeq} might be oscillatory, while the new defined function $q_1$ is non-oscillatory. In fact, we rigorously prove the non-oscillation property of the solution of \eqref{gq1eq} in the following lemma. To avoid distraction from the narrative of the new Levin method, its proof is given in Appendix A.
\begin{lemma}\label{lemma3}
Suppose that $f_1\in C^{2n+1}[0,a]$ and $g\in C^{2n+2}[0,a]$ with $g(0)=0$ and $g'(x)>0, x\in[0,a]$. If $f_1$ and $g$ are independent of $w$, then there exist a function $q_1$ and a number $c_0$ satisfying \eqref{gq1eq} such that
\begin{equation}\label{property}
|c_0|<C/w \;\text{and}\; \|\mathcal{D}^{j}q_1\|_\infty < C/w,\;j=0,1,\ldots,n,
\end{equation}
where $C$ is a constant independent of $w$.
\end{lemma}
Lemma \ref{lemma3} is the cornerstone of the proposed new method since
{\color{black}
it ensures that the ODE \eqref{gq1eq} can be solved efficiently by the collocation method based on polynomials no matter how large the absolute value of $w$ is. }
Once the value of $c_0$ is known, a particular solution $h$ of \eqref{gheq} subject to the initial condition $h(0)=0$ is well-known by the standard ODE theory, given explicitly by
\begin{equation}\label{hsol}
\begin{split}
h(x) &=\alpha c_0e^{-iwg(x)}\int_0^x\frac{g'(t)(1-e^{iwg(t)})}{g^{1-\alpha}(t)}dt \\
&= \alpha c_0e^{-iwg(x)} \int_0^{g(x)}\frac{1-e^{iwt}}{t^{1-\alpha}}dt \\
&=c_0e^{-iwg(x)}\left(g^\alpha(x)+\frac{\alpha \Gamma(\alpha,-iwg(x))-\Gamma(\alpha+1)}{(-iw)^\alpha}\right),
\end{split}
\end{equation}
where $\Gamma(s,z)$ is the incomplete gamma function \cite{STEGUN}. This is the reason why we choose $g^\alpha(x)$ to express the algebraic singularity: if $x^\alpha$ were adopted instead, it would be difficult to evaluate the solution $h$ explicitly or numerically.
{\color{black}
We now formally propose} the new Levin method for the integral $I_w^{[0,a]}[f,s_1,g]$. To this end, we define a new operator for a given function $g$, a number $w$ and a number $\alpha$:
\[
\mathcal{W}_{w,\alpha,g}[c_0,q_1]:=iwg'(x)c_0+g(x)q_1'(x)+[1+\alpha+iwg(x)]g'(x)q_1(x).
\]
For notational simplicity, the explicit dependence of the new operator on $w, \alpha$, and $g$ will be suppressed and understood implicitly. Let $\{\phi_j\}_{j=1}^{n}$ be a basis of functions independent of $w$. Moreover, let $\{x_j\}_{j=0}^{n}$ be a set of collocation nodes such that $0=x_0<x_1<\ldots<x_{n}=a$. We seek a function $q_1=\sum_{j=1}^{n} c_j\phi_j$ and a number $c_0$ satisfying \eqref{gq1eq} at the collocation nodes; this reduces to the linear system
\begin{equation}\label{interp1}
\mathcal{W}[c_0,q_1](x_j)=f_1(x_j), j=0,1,\ldots,n,
\end{equation}
where $f_1$ is defined in \eqref{f1}.
Written in the form of a vector, the system \eqref{interp1} becomes
\[(\textbf{A}+iw\textbf{B})\textbf{c}=\textbf{f}_1,\]
where the $(n+1)\times(n+1)$ matrices $\textbf{A}$ and $\textbf{B}$ are independent of $w$. Specifically,
\[
\textbf{B}=\begin{pmatrix}
g'(0) & 0 & 0 & \ldots & 0\\
g'(x_1) & g(x_1)g'(x_1)\phi_1(x_1) & g(x_1)g'(x_1)\phi_2(x_1) &\ldots & g(x_1)g'(x_1)\phi_n(x_1) \\
g'(x_2) & g(x_2)g'(x_2)\phi_1(x_2) & g(x_2)g'(x_2)\phi_2(x_2) &\ldots & g(x_2)g'(x_2)\phi_n(x_2) \\
\vdots & \vdots & \vdots & \\
g'(x_n) & g(x_n)g'(x_n)\phi_1(x_n) & g(x_n)g'(x_n)\phi_2(x_n) & \ldots & g(x_n)g'(x_n)\phi_n(x_n)
\end{pmatrix},
\]
and hence the matrix $\textbf{B}$ is non-singular once the basis $\{\phi_j\}_{j=1}^{n}$ is a Chebyshev set \cite{OLVER2006}.
\begin{proposition}
For sufficiently large $w$, the system \eqref{interp1} has a unique solution. Moreover, its solution $q_1$ is slowly oscillatory and both $q_1$ and $c_0$ are $\mathcal{O}(w^{-1})$ as $w\rightarrow \infty$.
\end{proposition}
\begin{proof}
The proof is straightforward using Cramer's rule, following \cite{OLVER2006} and \cite{XIANGBIT2007}.
\end{proof}
With a little effort, this collocation scheme is readily generalized to allow confluent collocation nodes. Assuming that each collocation node $x_j$ is
{\color{black}
accompanied by multiplicity $m_j\geq 1$} such that $\sum_{j=0}^{n}m_j-1=M$, we require not only the equivalence of the values of $f_1$
and $\mathcal{W}[c_0,q_1]$ at the collocation nodes, but also the values of the derivatives of $f_1$
and $\mathcal{W}[c_0,q_1]$, up to the given multiplicity. In place of $q_1=\sum_{j=1}^{n} c_j\phi_j$ and \eqref{interp1}, we pursue a function $q_1=\sum_{j=1}^{M} c_j\phi_j$
{\color{black}
that satisfies the linear system}
\begin{equation}\label{interp2}
\begin{split}
\frac{d^{j}\mathcal{W}[c_0,q_1]}{dx^j}(x_l)=f_1^{(j)}(x_l), j=0,1,\ldots,m_l-1, l=0,1,\ldots,n.
\end{split}
\end{equation}
Note that the preceding system \eqref{interp1} is the special case with $M=n$ and all multiplicities equal to 1.
Having obtained the solution of \eqref{interp2} together with the formula \eqref{hsol}, we define the new Levin method
\begin{equation}\label{levin1}
Q_{w,\alpha,n}^{L,s}[f]\equiv \left.\left[g^{\alpha+1}(x)q_1(x)+c_0(1-e^{-iwg(x)}) g^\alpha(x)+h(x)\right]e^{iwg(x)} \right|_0^a,
\end{equation}
where $s=\min(m_0,m_n)-1$.
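A minimal numerical sketch of the simplest variant ($s=0$, system \eqref{interp1} with formula \eqref{levin1}) is given below for $s_1(x)=x^\alpha$. The monomial basis and Chebyshev nodes are illustrative assumptions; to keep the sketch numpy-only, the incomplete-gamma expression for $h(a)$ is evaluated through the integral form in the second line of \eqref{hsol} by a well-resolved quadrature (after the cancellation $1-e^{iwt}=\mathcal{O}(wt)$, that integrand behaves like $t^{\alpha}$ near $0$).

```python
import numpy as np

def new_levin(f, g, gp, a, alpha, w, n=12):
    r"""Sketch of the new Levin method for \int_0^a f(x) x^alpha e^{i w g(x)} dx,
    assuming g(0) = 0, g' > 0 on [0, a], and -1 < alpha."""
    # Chebyshev collocation nodes on [0, a], including both endpoints
    x = 0.5 * a * (1.0 - np.cos(np.pi * np.arange(n + 1) / n))
    # f1(x) = f(x) (x / g(x))^alpha, with the removable limit at x = 0
    ratio = np.empty_like(x)
    ratio[0], ratio[1:] = 1.0 / gp(0.0), x[1:] / g(x[1:])
    f1 = f(x) * ratio**alpha
    # unknowns [c0, c1, ..., cn] with q1(x) = sum_j c_j x^{j-1}
    V = np.vander(x, n, increasing=True)
    dV = np.zeros_like(V)
    dV[:, 1:] = V[:, :-1] * np.arange(1, n)
    W = np.empty((n + 1, n + 1), dtype=complex)
    W[:, 0] = 1j * w * gp(x)
    W[:, 1:] = g(x)[:, None] * dV + ((1 + alpha + 1j * w * g(x)) * gp(x))[:, None] * V
    c = np.linalg.solve(W, f1.astype(complex))
    c0, q1a = c[0], np.polyval(c[:0:-1], a)
    # h(a) via alpha c0 e^{-iwg(a)} \int_0^{g(a)} (1 - e^{iwt}) t^{alpha-1} dt
    t, wt = np.polynomial.legendre.leggauss(4000)
    t, wt = 0.5 * g(a) * (t + 1.0), 0.5 * g(a) * wt
    ha = alpha * c0 * np.exp(-1j * w * g(a)) * np.sum(wt * (1 - np.exp(1j * w * t)) * t**(alpha - 1.0))
    # the boundary term at x = 0 vanishes since g(0) = 0 and alpha > -1
    return (g(a)**(alpha + 1) * q1a
            + c0 * (1 - np.exp(-1j * w * g(a))) * g(a)**alpha + ha) * np.exp(1j * w * g(a))

# Example: \int_0^1 x^{1/2} e^{50 i x} dx
Q = new_levin(lambda x: np.ones_like(x), lambda x: x,
              lambda x: np.ones_like(x), 1.0, 0.5, 50.0)
```

As in the smooth case, the frequency enters the linear system only through bounded entries, so the solve remains well behaved as $|w|$ grows.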
In the following, we consider the asymptotic order of the new Levin method. To this end, we recall a lemma concluded from the results in \cite{XAINGGUO2014} and \cite{GAO2016}.
\begin{lemma}\label{xlemma}
Suppose $f\in C^{s+1}[0,a]$, $f^{(j)}(0)=f^{(j)}(a)=0, j=0,1,\ldots,s$ and every function in the set $\{f,f',\ldots,f^{(s+1)}\}$ is of asymptotic order $\mathcal{O}(1), w\rightarrow \infty$, then
\begin{eqnarray*}
\int_0^a x^\alpha f(x) e^{iwx}dx &\sim & \mathcal{O}\left(w^{-s-1-\min\{1+\alpha,1\}}\right), \\
\int_0^a x^\alpha \ln (x) f(x) e^{iwx}dx &\sim & \mathcal{O}\left(\delta_\alpha(w) w^{-s-1-\min\{1+\alpha,1\}} \right),
\end{eqnarray*}
where
\begin{equation}\label{delta1}
\delta_{\alpha}(w):=\begin{cases} 1+|\ln(w)|, & -1<\alpha\leq 0,\\ 1, & \alpha>0.\end{cases}
\end{equation}
\end{lemma}
\begin{theorem}\label{theorem3}
Suppose that $g(0)=0$, $g'(x)>0$ for $x\in[0,a]$, and $m_0=m_n=s+1$. If the basis $\{\phi_j\}_{j=1}^{M}$ is a Chebyshev set, where $M=\sum_{j=0}^{n}m_j-1$, then for sufficiently large $w$ the system \eqref{interp2} has a unique solution and
\begin{equation}
I_w^{[0,a]}[f,s_1,g]-Q_{w,\alpha,n}^{L,s}[f]\sim \mathcal{O}(w^{-s-1-\min\{1+\alpha,1\}}).
\end{equation}
\end{theorem}
\begin{proof}
It is known from the fundamental theorem of calculus that
\[\begin{split}
Q_{w,\alpha,n}^{L,s}[f] & =\int_0^a \mathcal{L}\left[g^{\alpha+1}q_1+c_0(1-e^{-iwg})g^\alpha+h\right](x) e^{iwg(x)} dx \\
&=\int_0^a \mathcal{W}[c_0,q_1](x)g^\alpha(x)e^{iwg(x)}dx,
\end{split}
\]
where the expression \eqref{gheq} for $\mathcal{L}[h]$ has been used in the computation. It follows that
\[
I_w^{[0,a]}[f,s_1,g]-Q_{w,\alpha,n}^{L,s}[f]=\int_0^a \left( \mathcal{W}[c_0,q_1](x)-f_1(x)\right)g^\alpha(x)e^{iwg(x)}dx,
\]
so that Lemma \ref{xlemma} almost completes the proof.
To bridge the final gap, we need only show that $\mathcal{W}[c_0,q_1]^{(j)}=\mathcal{O}(1)$ for $j=0,1,\ldots,s+1$, which follows as in the proof of Theorem 4.1 in \cite{OLVER2006} or of Theorem 3.5 in \cite{ISERLES2018B}.
{\color{black}
Note that the linear system \eqref{interp2} can be written in the vector form $(\textbf{A}+iw\textbf{B})\textbf{c}=\textbf{f}_1$}, where $\textbf{A}$ and $\textbf{B}$ are independent of $w$.
{\color{black}
For sufficiently large $w$, the condition $\det \textbf{B} \neq 0$ suffices to establish} the unique solvability of \eqref{interp2} and the boundedness of $\mathcal{W}[c_0,q_1]^{(j)}$.
{\color{black}
Hence, all we must show is that the matrix $\textbf{B}$ is non-singular.} Since the argument is identical in concept, we prove only the case with $n=1$, $m_0=m_1=2$ and $x_0=0, x_1=a$. In this case,
\[
\textbf{B}=\begin{pmatrix}
g'(0) & 0 & 0 & 0\\
g''(0) & \eta_1'(0) & \eta_2'(0) & \eta_3'(0) \\
g'(a) & \eta_1(a) & \eta_2(a) & \eta_3(a) \\
g''(a) & \eta_1'(a) & \eta_2'(a) & \eta_3'(a)
\end{pmatrix},
\]
where $\eta_k(x)=g(x)g'(x)\phi_k(x), k=1,2,3$.
{\color{black}
Performing several row operations, we derive }
\[
\det\textbf{B}=(g'(0))^3(g(a)g'(a))^2 \begin{vmatrix}
\phi_1(0) & \phi_2(0) & \phi_3(0)\\
\phi_1(a) & \phi_2(a) & \phi_3(a)\\
\phi_1'(a) & \phi_2'(a) & \phi_3'(a)
\end{vmatrix}.
\]
{\color{black}
The assumption that $\{\phi_k\}_{k=1}^3$ is a Chebyshev set ensures that the determinant on the right is non-zero. Since in addition $g'(x)\neq 0$ and $g(a)\neq 0$,} it follows that $\det\textbf{B}\neq 0$. The proof is finished.
\end{proof}
{\color{black}
In addition to the asymptotic order, the precision of the Levin method also depends on the number of collocation nodes.} To exhibit this dependence, we consider the special case of a linear oscillator.
{\color{black}
We also set the multiplicity of each point to 1.}
Let $E_n(f)=\left|I_w^{[0,a]}[f,s_1,\tau]-Q_{w,\alpha,n}^{L,0}[f]\right|$ denote the absolute error where $\tau(x):=x$.
\begin{theorem}\label{theom1}
If $f$ is suitably smooth and independent of $w$, then the new Levin method collocating on points $\{0= x_0<x_1<\ldots<x_{n}\leq a\}$ satisfies
\begin{equation}
E_n(f)\leq Cw^{-\min\{1+\alpha,1\}} \frac{\|f^{(n+1)}\|_\infty a^{n+1} }{n!},
\end{equation}
where $C$ is a constant independent of $w$ and $n$.
\end{theorem}
\begin{proof}
It is known from the preceding analysis that
\begin{equation}
E_n(f)=\left|\int_0^a (f(x)-\mathcal{W}[c_0,q_1](x))x^\alpha e^{iwx}dx\right|=\left|\int_0^a (f(x)-p(x))x^\alpha e^{iwx}dx\right|,
\end{equation}
where $p$ is the interpolation of $f$ on the nodes $0= x_0<x_1<\ldots<x_{n}\leq a$.
{\color{black}
To estimate the error,}
let $\eta(x):=f(x)-p(x)$. It is obvious that $\eta(x_j)=0, j=0,1,\ldots,n$. According to Rolle's theorem, there exists $y_j\in (x_j, x_{j+1})$ such that
\[
\eta'(y_j)=0,\; j=0,1,\ldots,n-1.
\]
Using the expression for interpolation errors, we derive
\[
\eta(x)=\frac{\eta^{(n+1)}(\xi_1)}{(n+1)!}\prod_{j=0}^{n} (x-x_j), \;\; \eta'(x)=\frac{\eta^{(n+1)}(\xi_2)}{n!}\prod_{j=0}^{n-1} (x-y_j),
\]
where $\xi_1,\xi_2\in[0,a]$ depend on the value of $x$.
By the van der Corput-type lemma in \cite{XAINGGUO2014}, there exists a constant $C$ independent of $w$ and $n$ such that
\begin{equation}\label{eq15}
\begin{split}
E_n(f)&\leq Cw^{-\min\{1+\alpha,1\}}\left(|\eta(a)|+ \int_0^a \left|\eta'(x)\right|dx\right)\\
&\leq Cw^{-\min\{1+\alpha,1\}}\left(\|\eta\|_\infty+ a\left\|\eta'\right\|_\infty \right).
\end{split}
\end{equation}
{\color{black}
The desired inequality follows directly from the fact} that $p^{(n+1)}\equiv 0$ and $\eta^{(n+1)}=f^{(n+1)}$.
\end{proof}
Note that the dependence on the number of nodes is governed by the interpolation errors of the function $f$ and its derivative. When the function $f$ is analytic within an ellipse and the collocation points are chosen to be the Chebyshev points, the proposed method converges superalgebraically
{\color{black}
since the corresponding interpolation errors decrease superalgebraically} \cite{XIANG2010chebyshev}.
{\color{black}
Thus, the new Levin method requires a small number of nodes to attain machine precision} that is also uniformly efficient for small $w$.
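This superalgebraic decay of the Chebyshev interpolation error for an analytic function can be observed numerically. In the sketch below, the choice $f(x)=1/(1+x^2)$ on $[0,1]$ (echoing the integrands of the numerical examples) and the degrees tested are illustrative assumptions, not taken from the text:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# f is analytic in a neighbourhood of [0, 1] (its only poles are at +-i)
f = lambda x: 1.0 / (1.0 + x**2)

xx = np.linspace(0.0, 1.0, 2001)   # dense evaluation grid
errs = []
for deg in (4, 8, 16, 32):
    # interpolate f at deg+1 Chebyshev points mapped to [0, 1]
    p = Chebyshev.interpolate(f, deg, domain=[0.0, 1.0])
    errs.append(float(np.max(np.abs(f(xx) - p(xx)))))
```

Doubling the degree shrinks the error by far more than a constant factor, until rounding error is reached; this is the superalgebraic behaviour exploited above.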
In computation of highly oscillatory integrals without singularity, it is known that the Filon-type method is equivalent to the Levin method once we use a proper basis \cite{OLVER2006,ISERLES2018B,XIANG2007b}. This conclusion is readily generalized to the highly oscillatory integrals with algebraic singularity.
Assuming that $g'\neq 0$ in $[0,a]$, we define two sets
\[
\Psi_M=\{g',g'g,g'g^2,\ldots,g'g^{M-1}\}
\]
and
\[
\Phi_M=\{1,g,g^2,\ldots,g^{M-1}\}.
\]
Note that when $g$ is strictly monotone, $\Psi_M$ and $\Phi_M$ are both Chebyshev sets. Let $\varphi_k=g'g^{k-1}$ for $k=1,2,\ldots$. Suppose that $p(x)=\sum_{j=1}^{M+1}p_j\varphi_j$, where $M=\sum_{j=0}^nm_j-1$, is the solution to the linear system
\begin{equation}\label{interpf}
p^{(j)}(x_l)=f_1^{(j)}(x_l), j=0,1,\ldots,m_l-1, l=0,1,\ldots,n,
\end{equation}
where $f_1$ is defined in \eqref{f1}.
A new Filon-type method for $I_w^{[0,a]}[f,s_1,g]$ is defined by
\begin{equation}\label{filon1}
Q_{w,\alpha,n}^{F,s}[f]\equiv\int_0^ap(x)g^\alpha(x)e^{iwg(x)}dx=\sum_{j=1}^{M+1}p_j\mu_j,
\end{equation}
where $\mu_j$ are the generalized moments defined by
\[
\mu_j=\int_0^a\varphi_j(x)g^\alpha(x)e^{iwg(x)}dx.
\]
They can be evaluated fast by a recurrence relation,
\[
\mu_{j+1}=-\frac{j+\alpha}{iw}\mu_j+\frac{1}{iw} g^{j+\alpha}(a)e^{iwg(a)}, j=1,2,\ldots,
\]
and
\[
\mu_1=\frac{g^{1+\alpha}(a)}{(-iwg(a))^{1+\alpha}}\left[\Gamma(1+\alpha)-\Gamma(1+\alpha,-iwg(a))\right].
\]
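As a sanity check on the recurrence and on the closed form for $\mu_1$, one can compare against direct numerical quadrature. The sketch below uses the illustrative choices $g(x)=x$, $a=1$, $\alpha=1/2$ and $w=20$ (assumptions of ours, not values from the text), so that $\mu_j=\int_0^1 x^{j+\alpha-1}e^{iwx}\,dx$:

```python
import mpmath as mp

mp.mp.dps = 30
alpha, w = mp.mpf('0.5'), mp.mpf(20)   # illustrative values; g(x) = x, a = 1

def mu_direct(j):
    # mu_j = int_0^1 g'(x) g(x)^{j-1+alpha} e^{iwg(x)} dx = int_0^1 x^{j+alpha-1} e^{iwx} dx
    return mp.quad(lambda x: x**(j + alpha - 1) * mp.exp(1j * w * x),
                   mp.linspace(0, 1, 9))

# closed form for mu_1; the upper incomplete gamma function with a
# complex second argument is mp.gammainc(s, a=z)
mu = (mp.gamma(1 + alpha) - mp.gammainc(1 + alpha, a=-1j * w)) / (-1j * w)**(1 + alpha)
mus = [mu]
for j in range(1, 5):
    # mu_{j+1} = -((j+alpha)/(iw)) mu_j + g^{j+alpha}(a) e^{iwg(a)} / (iw)
    mu = -(j + alpha) / (1j * w) * mu + mp.exp(1j * w) / (1j * w)
    mus.append(mu)
```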
\begin{theorem}\label{theorem2}
The Filon-type method \eqref{filon1} based on the basis set $\Psi_{M+1}$ is identical to the Levin method \eqref{levin1} using the basis set $\Phi_M$.
\end{theorem}
\begin{proof}
It is trivial that the interpolant $p$ in the Filon-type method and the function $\mathcal{W}[c_0,q_1]$ in the Levin method are in the same space: they both belong to $\text{span}\{\Psi_{M+1}\}$.
Moreover, both $p$ and $\mathcal{W}[c_0,q_1]$ obey the Hermite interpolation conditions \eqref{interp2}. According to the uniqueness of the Hermite interpolation, we derive that $p=\mathcal{W}[c_0,q_1]$. The equivalence of these two methods follows directly.
\end{proof}
\begin{remark}
As Olver pointed out in \cite{OLVER2010,OLVER2010b},
{\color{black}
how to construct the Filon-type method in a numerically stable manner with the basis set $\Psi_{M+1}$ is still unknown, as is how to choose the interpolation points to optimize the order of convergence.} However, the new Levin method can be implemented via polynomial interpolation at Chebyshev points, which ensures convergence. Compared to the Filon-type method, the Levin method is more stable and accurate for nonlinear oscillators. This is a merit of the Levin method.
\end{remark}
\section{New Levin method for \texorpdfstring{$I_w^{[0,a]}[f,s_2,g]$}{I2}}
{\color{black}
We now further consider a new Levin method} for the case of oscillatory integrals with both algebraic and logarithmic singularities.
Before developing the new Levin method, we first decompose the integral $I_w^{[0,a]}[f,s_2,g]$:
\begin{equation}\label{eq1}
\begin{split}
I_w^{[0,a]}[f,s_2,g] & =\int_0^a f_1(x)g^{\alpha}(x)\log g(x)e^{iwg(x)}dx+\int_0^a f_2(x)x^{\alpha}e^{iwg(x)}dx
\\
&\triangleq I_1(f_1)+I_2(f_2),
\end{split}
\end{equation}
where $f_1$ is defined in \eqref{f1}
and
\begin{equation}\label{f2}
f_2(x)=\begin{cases}f(x) \log \frac{x}{g(x)}, \;x\neq0, \\
f(0)\log \frac{1}{g'(0)}, \;x=0.\end{cases}
\end{equation}
{\color{black}
It is obvious that $I_2(f_2)=I_w^{[0,a]}[f_2,s_1,g]$, which is readily computed by $Q_{w,\alpha,n}^{L,s}[f_2]$. All we need to do is to evaluate the integral $I_1(f_1)$.}
To compute $I_1(f_1)$, the spirit of the classic Levin method requires the solution of the ODE:
\begin{equation}\label{levinode2}
p'(x)+iwg'(x)p(x)=f_1(x)g^\alpha(x)\log(g(x)), x\in(0,a].
\end{equation}
{\color{black}
It is not wise to solve the ODE directly due to the singularity on the right-hand side. To deal with this obstacle, we combine the techniques described in the preceding section and in \cite{WangXiang2018} to seek a particular solution of a form with its singularity explicitly represented: }
\[
p(x)=q(x)g^{\alpha}(x) \log g(x)+\ell(x)g^{\alpha}(x)+h(x),
\]
where $q,\ell$ and $h$ are unknown functions. Substituting the form of $p$ in \eqref{levinode2}, we derive
\begin{equation*}
\begin{split}
& \left(q'(x)+iwg'(x)q(x)+\alpha g'(x)\frac{q(x)}{g(x)}\right) g^{\alpha}(x)\log g(x) \\
&\quad\quad\quad +\left(\ell'(x)+iwg'(x)\ell(x)+\alpha g'(x)\frac{\ell(x)}{g(x)}+g'(x)\frac{q(x)}{g(x)} \right) g^{\alpha}(x) \\
&\quad\quad\quad\quad\quad\quad +h'(x)+iwg'(x)h(x)=f_1(x)g^{\alpha}(x)\log g(x).
\end{split}
\end{equation*}
By the superposition principle, we then split the above ODE according to the type of singularity:
\begin{equation}\label{gqeq2}
q'(x)+iwg'(x)q(x)+\alpha g'(x)\frac{q(x)-c_0(1-e^{-iwg(x)})}{g(x)}=f_1(x),
\end{equation}
\begin{equation}\label{gleq2}
\begin{split}
& \ell'(x)+iwg'(x)\ell(x)+\alpha g'(x)\frac{\ell(x)-c_1(1-e^{-iwg(x)})}{g(x)}
+g'(x)\frac{q(x)-c_0(1-e^{-iwg(x)})}{g(x)}=0,
\end{split}
\end{equation}
\begin{equation}\label{gheq2}
\begin{split}
& h'(x)+iwg'(x)h(x)+\alpha c_0 g'(x)\frac{1-e^{-iwg(x)}}{(g(x))^{1-\alpha}}\log g(x)
\\
& \quad\quad\quad\quad\quad
+\alpha c_1 g'(x)\frac{1-e^{-iwg(x)}}{(g(x))^{1-\alpha}}+c_0g'(x) \frac{1-e^{-iwg(x)}}{(g(x))^{1-\alpha}}=0.
\end{split}
\end{equation}
Setting
\[
\begin{split}
q_1(x)=\frac{q(x)-c_0(1-e^{-iwg(x)})}{g(x)}
\;\;\text{and}\; \ell_1(x)=\frac{\ell(x)-c_1(1-e^{-iwg(x)})}{g(x)},
\end{split}
\]
{\color{black}
Eqs. \eqref{gqeq2} and \eqref{gleq2} are simplified as}
\begin{eqnarray}
iwg'(x)c_0+g(x)q_1'(x)+[1+\alpha+iwg(x)]g'(x)q_1(x) &=& f_1(x), \label{gq1eq2}\\
iwg'(x)d_0+g(x)\ell_1'(x)+[1+\alpha+iwg(x)]g'(x)\ell_1(x) &=& -q_1(x)g'(x) \label{gl1eq2},
\end{eqnarray}
{\color{black}
which have exactly the same form as the ODE \eqref{gq1eq} (the constant $d_0$ plays the role of $c_1$ here). This means that both \eqref{gq1eq2} and \eqref{gl1eq2} possess at least one well-regularized, non-oscillatory solution that can be computed efficiently by collocation methods. Regarding Eq. \eqref{gheq2},} there is an explicit solution subject to the initial condition $h(0)=0$:
\begin{equation}\label{ghsol22}
\begin{split}
h(x)&=e^{-iwg(x)}\left(\int_0^x \alpha c_0 g'(t)\frac{1-e^{iwg(t)}}{g^{1-\alpha}(t)}\log g(t) +\alpha c_1 g'(t)\frac{1-e^{iwg(t)}}{g^{1-\alpha}(t)}+c_0g'(t) \frac{1-e^{iwg(t)}}{g^{1-\alpha}(t)}dt\right)\\
&=\left( c_0\log g(x) + c_1+\frac{c_0}{\alpha}\right) \left(g^{\alpha}(x)+\frac{\alpha \Gamma(\alpha,-iwg(x))-\Gamma(\alpha+1)} {(-iw)^\alpha} \right)e^{-iwg(x)} \\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad +\frac{c_0}{\alpha}g^{\alpha}(x)(_2F_2(\alpha,\alpha;1+\alpha,1+\alpha;iwg(x))-1)e^{-iwg(x)},
\end{split}
\end{equation}
where $_mF_n$ is the generalized hypergeometric function, defined as a power series,
\begin{equation}\label{fdef}
_mF_n(a_1,\ldots,a_m;b_1,\ldots,b_n;z):=\sum_{k=0}^{+\infty} \frac{(a_1)_k\ldots(a_m)_k}{(b_1)_k\ldots(b_n)_k} \frac{z^k}{k!},
\end{equation}
and
{\color{black}
$(b)_k$ is the Pochhammer symbol,} i.e., $(b)_0=1$ and $(b)_k=\frac{\Gamma(k+b)}{\Gamma(b)}$ for $k\geq 1$.
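For illustration, the truncated power series \eqref{fdef} can be checked against mpmath's built-in hypergeometric evaluation; the parameters and the purely imaginary argument below are arbitrary test values, not taken from the text:

```python
import mpmath as mp

mp.mp.dps = 25

def pFq(a_list, b_list, z, terms=60):
    # truncated version of the series (fdef); (b)_k = mp.rf(b, k)
    # is the Pochhammer symbol (rising factorial)
    s = mp.mpc(0)
    for k in range(terms):
        num = mp.fprod(mp.rf(ai, k) for ai in a_list)
        den = mp.fprod(mp.rf(bi, k) for bi in b_list)
        s += num / den * z**k / mp.factorial(k)
    return s

z = mp.mpc(0, '1.5')  # a purely imaginary argument, as arises in h(x)
approx = pFq([mp.mpf('0.5')] * 2, [mp.mpf('1.5')] * 2, z)
exact = mp.hyper([mp.mpf('0.5')] * 2, [mp.mpf('1.5')] * 2, z)
```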
Let $\{\phi_j\}_{j=1}^{M}$ be a basis of functions independent of $w$ and let $\{x_j\}_{j=0}^{n}$ be a set of collocation nodes with multiplicities $m_j\geq 1$ such that $0=x_0<x_1<\ldots<x_{n}=a$ and $\sum_{j=0}^{n}m_j-1=M$. We seek two functions, $q_1=\sum_{j=1}^{M} c_j\phi_j$ and $\ell_1= \sum_{j=1}^{M} d_j\phi_j$, and two numbers, $c_0$ and $d_0$, satisfying the two linear systems
\begin{equation}\label{interp3}
\begin{split}
\frac{d^{j}\mathcal{W}[c_0,q_1]}{dx^j}(x_l) & =f_1^{(j)}(x_l), \qquad\;\; j=0,1,\ldots,m_l-1, l=0,1,\ldots,n, \\
\frac{d^{j}\mathcal{W}[d_0,\ell_1]}{dx^j}(x_l) & =(-q_1g')^{(j)}(x_l), \quad j=0,1,\ldots,m_l-1, l=0,1,\ldots,n.
\end{split}
\end{equation}
We define the new Levin method for $I_w^{[0,a]}[f,s_2,g]$ as
\begin{equation}\label{levin2}
\begin{split}
Q_{w,n}^{L,s}[f]\equiv Q_{w,\alpha,n}^{L,s}[f_2] & +\left[(q_1(x)\log(g(x))+\ell_1(x))g^{1+\alpha}(x)\right. \\ & \left. \left.-(c_0\log(g(x))+d_0)(1-e^{-iwg(x)})g^\alpha(x)+h(x)\right]e^{iwg(x)}\right|_0^a,
\end{split}
\end{equation}
where $h$ is given in \eqref{ghsol22} and $f_2$ is defined in \eqref{f2}.
We next show the asymptotic order of the new Levin method.
\begin{theorem}
Suppose that $g(0)=0$, $g'(x)>0$ for $x\in[0,a]$, and $m_0=m_n=s+1$. If the basis $\{\phi_j\}_{j=1}^{M}$ is a Chebyshev set, where $M=\sum_{j=0}^{n}m_j-1$, then for sufficiently large $w$ each of the two systems \eqref{interp3} has a unique solution and
\begin{equation}
I_w^{[0,a]}[f,s_2,g]-Q_{w,n}^{L,s}[f]\sim \mathcal{O}(\delta_\alpha(w)w^{-s-1-\min\{1+\alpha,1\}}).
\end{equation}
\end{theorem}
\begin{proof} Similar to the proof of Theorem \ref{theorem3},
{\color{black}
we commence with the representation of $Q_{w,n}^{L,s}[f]$ in integral form.}
Using the fundamental theorem of calculus, we derive
\[\begin{split}
Q_{w,n}^{L,s}[f]= Q_{w,\alpha,n}^{L,s}[f_2] & + \int_0^a \mathcal{L}\left[(q_1\log(g)+\ell_1)g^{1+\alpha}\right. \\ & \left.-(c_0\log(g)+d_0)(1-e^{-iwg})g^\alpha +h \right](x) e^{iwg(x)} dx \\
= Q_{w,\alpha,n}^{L,s}[f_2] & + \int_0^a \mathcal{W}[c_0,q_1](x)g^\alpha(x)\log(g(x)) e^{iwg(x)} dx \\
& + \int_0^a \left(\mathcal{W}[d_0,\ell_1](x) +q_1(x)g'(x)\right) g^\alpha(x) e^{iwg(x)} dx.
\end{split}
\]
Hence
\[
\begin{split}
I_w^{[0,a]}[f,s_2,g]-Q_{w,n}^{L,s}[f]= & I_w^{[0,a]}[f_2,s_1,g]-Q_{w,\alpha,n}^{L,s}[f_2] \\
& \quad+ \int_0^a \left(f_1(x) - \mathcal{W}[c_0,q_1](x)\right)g^\alpha(x)\log(g(x))e^{iwg(x)}dx \\
& \qquad - \int_0^a \left(\mathcal{W}[d_0,\ell_1](x) +q_1(x)g'(x)\right) g^\alpha(x) e^{iwg(x)} dx.
\end{split}
\]
Finally, applying Theorem \ref{theorem3} to the first term and Lemma \ref{xlemma} to the two remaining integrals yields the desired result.
\end{proof}
Similar to the case of algebraic singularity, the dependence of the error bound of the proposed Levin method on the number of nodes is governed by the interpolation errors of the related
functions. We state the result without proof for the special case of a linear oscillator.
Let $E_n(f):=\left|I_w^{[0,a]}[f,s_2,\tau]-Q_{w,n}^{L,0}[f]\right|$ denote the absolute error where $\tau(x):=x$.
\begin{theorem}
If $f$ is suitably smooth and independent of $w$, then the new Levin method collocating on points $\{0\leq x_0<x_1<\ldots<x_{n}\leq a\}$ satisfies
\begin{equation}
E_n(f)\leq C\delta_\alpha(w)w^{-\min\{1+\alpha,1\}} \frac{\|f^{(n+1)}\|_\infty a^{n+1} }{n!},
\end{equation}
where $C$ is a constant independent of $w$ and $n$.
\end{theorem}
Inspired by the new Levin method, a new moment-free Filon-type method is readily developed for $I_w^{[0,a]}[f,s_2,g]$.
{\color{black}
We find a function $p(x)=\sum_{j=1}^{M+1}p_j\varphi_j$ where $M=\sum_{j=0}^nm_j-1$ and $\varphi_j=g'g^{j-1},j=1,\ldots,M+1$,} such that
\begin{equation}\label{interpf2}
p^{(j)}(x_l)=f_1^{(j)}(x_l), j=0,1,\ldots,m_l-1, l=0,1,\ldots,n,
\end{equation}
where $f_1$ is defined in \eqref{f1}.
We define a new Filon-type method for $I_w^{[0,a]}[f,s_2,g]$ by
\begin{equation}\label{filon2}
\begin{split}
Q_{w,n}^{F,s}[f] & \equiv Q_{w,\alpha,n}^{F,s}[f_2]+\int_0^ap(x)g^\alpha(x)\log(g(x))e^{iwg(x)}dx \\
& = Q_{w,\alpha,n}^{F,s}[f_2]+ \sum_{j=1}^{M+1}p_j\nu_j,
\end{split}
\end{equation}
where $\nu_j$ are the generalized moments defined by
\[
\nu_j=\int_0^a\varphi_j(x)g^\alpha(x)\log(g(x))e^{iwg(x)}dx.
\]
They can be evaluated fast by a recurrence relation,
\[
\nu_{j+1}=-\frac{j+\alpha}{iw}\nu_j-\frac{1}{iw}\mu_j+\frac{1}{iw} g^{j+\alpha}(a)\log(g(a))e^{iwg(a)}, j=1,2,\ldots,
\]
and
\[
\nu_1=\frac{\log(g(a))}{(-iw)^{1+\alpha}}\left[\Gamma(1+\alpha)-\Gamma(1+\alpha,-iwg(a))\right] -\frac{g^{1+\alpha}(a)}{(1+\alpha)^2} {_2}F_2(1+\alpha,1+\alpha;2+\alpha,2+\alpha;iwg(a)).
\]
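The recurrence for $\nu_j$ can be checked against direct quadrature in the same spirit. The sketch below assumes $g(x)=x$ on $[0,a]$ with $a=1/2$, $\alpha=1/2$ and $w=20$ (illustrative values of ours), so that $\nu_j=\int_0^a x^{j+\alpha-1}\log(x)\,e^{iwx}\,dx$:

```python
import mpmath as mp

mp.mp.dps = 30
alpha, w, a = mp.mpf('0.5'), mp.mpf(20), mp.mpf('0.5')  # illustrative; g(x) = x

def mu(j):   # mu_j = int_0^a x^{j+alpha-1} e^{iwx} dx
    return mp.quad(lambda x: x**(j + alpha - 1) * mp.exp(1j * w * x),
                   mp.linspace(0, a, 9))

def nu(j):   # nu_j = int_0^a x^{j+alpha-1} log(x) e^{iwx} dx
    return mp.quad(lambda x: x**(j + alpha - 1) * mp.log(x) * mp.exp(1j * w * x),
                   mp.linspace(0, a, 9))

j = 1
lhs = nu(j + 1)
# nu_{j+1} = [g^{j+alpha}(a) log(g(a)) e^{iwg(a)} - (j+alpha) nu_j - mu_j] / (iw)
rhs = (-(j + alpha) * nu(j) - mu(j)
       + a**(j + alpha) * mp.log(a) * mp.exp(1j * w * a)) / (1j * w)
```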
Reasoning identical to that of Theorem \ref{theorem2} reveals the relation between the Filon-type method and the Levin method.
\begin{theorem}
The Filon-type method \eqref{filon2} based on the basis set $\Psi_{M+1}$ is identical to the Levin method \eqref{levin2} using the basis set $\Phi_M$.
\end{theorem}
\section{Collocation method for \texorpdfstring{\eqref{gq1eq}}{(2.5)}}
{\color{black}
As seen in Sections 2 and 3,} the new Levin methods depend on the numerical solution of ODEs of the kind \eqref{gq1eq}. The ODE can be solved either in frequency space (i.e., for the coefficients of the basis functions) or in physical space (i.e., for the values of the function at the collocation points). Equations \eqref{levin1} and \eqref{levin2} suggest that solving in physical space is more convenient, since it avoids recovering point values from the expansion coefficients.
{\color{black}
Because many integrals of interest arise in high-frequency scattering, where $f$ is often very complicated (and may itself be an integral involving special functions), only point values of $f$ can be used. We therefore propose in this section a new collocation method for \eqref{gq1eq} that requires no derivative information, i.e., $s=0$. We also note that derivatives might be avoided by allowing interpolation points close to the critical points as $w$ increases \cite{ISERLES2004b}.}
There exists a stable collocation method in physical space for solving the classic Levin ODE for oscillatory integrals without singularities \cite{LI2008}. Unfortunately, it is not directly applicable to \eqref{gq1eq}, since there is an extra unknown coefficient $c_0$ to be determined.
{\color{black}
To circumvent this difficulty,}
we adopt the Chebyshev-Gauss-Radau points, $t_j=-\cos \frac{2j\pi}{2n-1}, j=0,1,\ldots,n-1$, instead of Chebyshev-Lobatto nodes. We commence by recalling the first-order differentiation matrix $D$ based on the Chebyshev-Gauss-Radau points.
{\color{black}
The matrix is determined by an explicit formula whose entries are given by} (\cite{SHTW2011}, p. 100)
\begin{equation}\label{Dcgr}
d_{kj}=
\begin{cases}
-\frac{n(n-1)}{3}, & k=j=0,\\
\frac{t_k}{2(1-t_k^2)}+\frac{(2n-1)T_{n-1}(t_k)}{2(1-t_k^2)Q'(t_k)}, & 1\leq k=j\leq n-1, \\
\frac{Q'(t_k)}{Q'(t_j)}\frac{1}{t_k-t_j}, & k\neq j,
\end{cases}
\end{equation}
where $Q(t)=T_n(t)+T_{n-1}(t), t\in[-1,1]$.
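A direct implementation of \eqref{Dcgr} can be validated by differentiating a polynomial of degree less than $n$, for which the matrix is exact. The sketch below (the node count $n=8$ is an arbitrary choice of ours) evaluates $Q'$ and $T_{n-1}$ with NumPy's Chebyshev utilities:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cgr_diff_matrix(n):
    """Differentiation matrix (Dcgr) at the Chebyshev-Gauss-Radau
    points t_j = -cos(2*pi*j/(2n-1)), j = 0, 1, ..., n-1."""
    t = -np.cos(2 * np.pi * np.arange(n) / (2 * n - 1))
    coef = np.zeros(n + 1)
    coef[n - 1] = coef[n] = 1.0            # Q = T_{n-1} + T_n
    Qp = C.chebval(t, C.chebder(coef))     # Q'(t_k)
    Tn1 = C.chebval(t, np.eye(n)[n - 1])   # T_{n-1}(t_k)
    D = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            if k != j:
                D[k, j] = Qp[k] / (Qp[j] * (t[k] - t[j]))
    D[0, 0] = -n * (n - 1) / 3.0
    k = np.arange(1, n)
    D[k, k] = (t[k] / (2 * (1 - t[k] ** 2))
               + (2 * n - 1) * Tn1[k] / (2 * (1 - t[k] ** 2) * Qp[k]))
    return t, D

t, D = cgr_diff_matrix(8)
```

Since $D$ is exact for polynomials of degree at most $n-1$, applying it to samples of $t^5$ should reproduce $5t^4$ to rounding accuracy.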
{\color{black}
Letting $x(t)=\frac{a}{2}\left(1-t\right)$, $t\in[-1,1]$,}
we select the modified Chebyshev-Gauss-Radau points $x_j=x(t_{n-j}),j=1,\ldots,n$ and $x_0=0$ as the collocation points.
Then, for any polynomial $u$ of degree less than $n$ on $[0,a]$, the relation
\[
\textbf{u}'=\frac{2}{a}D\textbf{u},
\]
where $\textbf{u}$ and $\textbf{u}'$ are two vectors of evaluations of functions $u$ and $u'$ at the modified Chebyshev-Gauss-Radau points, respectively.
{\color{black}
To derive a linear system for \eqref{gq1eq}, we must treat the collocation condition at $x_0$ carefully.
Since the coefficient of $q_1'(0)$ is $g(x_0)$ and $g(x_0)=0$, there is no need to express $q_1'(0)$ in terms of the values of $q_1$.
Owing to the use of the Chebyshev-Gauss-Radau points, the value of $q_1(0)$ must be obtained by extrapolation.} Using the interpolant of $q_1$, it is given by
\[
q_1(0)=\frac{1}{2n-1}\cos((n-1)\pi) q_1(x_n) +\sum_{j=1}^{n-1} \frac{2}{2n-1} \left(\cos((n-1-j)\pi)\sec \frac{j\pi}{2n-1}\right) q_1(x_{n-j}),
\]
if $q_1$ is a polynomial of degree no more than $n-1$.
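The extrapolation weights can be verified by applying them to a polynomial of degree $n-1$, for which the formula is exact. In the sketch below, $n=6$, $a=1$ and the degree-5 test polynomial are arbitrary choices of ours:

```python
import numpy as np

a, n = 1.0, 6
t = -np.cos(2 * np.pi * np.arange(n) / (2 * n - 1))   # Chebyshev-Gauss-Radau points
x = np.concatenate(([0.0], 0.5 * a * (1 - t[::-1])))  # x_j = x(t_{n-j}); x_0 = 0
q1 = np.polynomial.Polynomial([0.3, -1.2, 0.7, 2.0, -0.5, 1.1])  # degree n-1

# extrapolated value q1(0) from the samples q1(x_1), ..., q1(x_n)
q10 = np.cos((n - 1) * np.pi) * q1(x[n]) / (2 * n - 1)
for j in range(1, n):
    q10 += (2.0 / (2 * n - 1)) * np.cos((n - 1 - j) * np.pi) \
           / np.cos(j * np.pi / (2 * n - 1)) * q1(x[n - j])
```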
Let $\textbf{x}:=[x_0,x_1,\ldots,x_n]$ and denote by $\textbf{q}_1$ and $\textbf{f}$ the modified vectors of values of functions $q_1$ and $f$, respectively,
i.e.,
\[
\textbf{q}_1=[c_0, q_1(x_1), \ldots, q_1(x_n)],\; \text{and}\; \textbf{f}=[f(x_0), f(x_1),\ldots,f(x_n)].
\]
{\color{black}
Set $\textbf{r}$ to be the vector of coefficients expressing $q_1(0)$ in terms of $q_1(x_j), j=1,2,\ldots,n$,
and $\textbf{c}$ to be the vector of size $n\times 1$ all of whose entries equal $iw$.}
We assemble a matrix $L$ of size $(n+1)\times (n+1)$ by
\[
L=\begin{pmatrix}iw & (1+\alpha)\textbf{r}^\top \\ \textbf{c} & \Lambda_1D+\Lambda_2 \end{pmatrix},
\]
where $\textbf{r}^\top$ denotes the transpose of the vector $\textbf{r}$, $\Lambda_1=\diag\left(-\frac{2g(x_1)}{a},-\frac{2g(x_2)}{a},\ldots,-\frac{2g(x_n)}{a}\right)$ and $\Lambda_2=\diag(1+\alpha+iwg(x_1),1+\alpha+iwg(x_2),\ldots,1+\alpha+iwg(x_n))$.
Equation \eqref{gq1eq} is discretized on the collocation points $x_j(j=0,1,\ldots,n)$, and then we obtain the linear system in the vector form
\begin{equation}\label{Aeq2}
L\textbf{q}_1=\textbf{f}.
\end{equation}
Note that the matrix $L$ is ill-conditioned when the dimension is large. However, as observed in \cite{LI2008}, only the last singular value of $L$ is very small and it is well separated from the rest.
{\color{black}
Hence we suggest the technique of truncated singular value decomposition (TSVD)} when the last singular value is smaller than $10^{-8}$, in order to obtain a stable solution.
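A minimal sketch of the suggested TSVD solve follows; the absolute cutoff $10^{-8}$ echoes the text, while the synthetic test system (with one tiny, well-separated singular value) is our own construction:

```python
import numpy as np

def tsvd_solve(L, f, cutoff=1e-8):
    """Solve L q = f by truncated SVD, discarding every singular
    direction whose singular value falls below the cutoff."""
    U, s, Vh = np.linalg.svd(L)
    keep = s > cutoff
    return Vh[keep].conj().T @ ((U[:, keep].conj().T @ f) / s[keep])

# synthetic ill-conditioned system mimicking the observed structure of L:
# one tiny singular value, well separated from the rest
rng = np.random.default_rng(0)
U, _, Vh = np.linalg.svd(rng.standard_normal((6, 6)))
s = np.array([3.0, 2.5, 2.0, 1.5, 1.0, 1e-14])
L = (U * s) @ Vh
f = L @ rng.standard_normal(6)
q = tsvd_solve(L, f)
```

Because the right-hand side has only a negligible component along the discarded direction, the truncated solution still satisfies the system to high accuracy while remaining well scaled.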
\section{Numerical Examples}
In this section, we illustrate the convergence characteristics of proposed new Levin methods with a number of numerical experiments.
We also compare the computational performance of the new collocation method with that of the composite moment-free Filon-type quadrature (CMFP) proposed in \cite{MAXU2017}.
{\color{black}
The numerical results presented below were all obtained using MATLAB (MathWorks, USA) on a laptop with an Intel(R) Core(TM) i7-6500U CPU and 8 GB of RAM.}
\begin{example}\label{example1}
We first consider an integral with algebraic singularity studied in \cite{PIESSENS1992},
\[
\int_0^1 f(x)x^\alpha e^{-iwx}dx= \frac{1}{2(\alpha+1)}-\frac{\sqrt{\pi}}{2} \left(2/w\right)^{\alpha+1/2}\Gamma(\alpha+1)H_{\alpha+3/2}(w) +\frac{i\sqrt{\pi}}{2}\left(2/w\right)^{\alpha+1/2}\Gamma(\alpha+1) J_{\alpha+3/2}(w),
\]
where $f(x)=e^{iw}(1-x)(2-x)^\alpha$, $H_v$ is the Struve function and can be expressed in terms of the generalized hypergeometric function $_1F_2$,
\[
H_v(z)=\frac{z^{v+1}}{2^v\sqrt{\pi}\Gamma(v+3/2)} {_1F_2}\left(1;3/2,v+3/2;-z^2/4 \right).
\]
\end{example}
\begin{figure}
\centering
\subfloat[$s=0$,$\alpha=0.5$]{
\label{fig1:subfig:a}
\includegraphics[width=3.0in]{As0a+12w.pdf}}
\hspace{0.1in}
\subfloat[$s=0$,$\alpha=-0.5$]{
\label{fig1:subfig:b}
\includegraphics[width=3.0in]{As0a-12w.pdf}}
\subfloat[$s=1$,$\alpha=0.5$]{
\label{fig1:subfig:c}
\includegraphics[width=3.0in]{As1a+12w.pdf}}
\hspace{0.1in}
\subfloat[$s=1$,$\alpha=-0.5$]{
\label{fig1:subfig:d}
\includegraphics[width=3.0in]{As1a-12w.pdf}}
\subfloat[$s=2$,$\alpha=0.5$]{
\label{fig1:subfig:e}
\includegraphics[width=3.0in]{As2a+12w.pdf}}
\hspace{0.1in}
\subfloat[$s=2$,$\alpha=-0.5$]{
\label{fig1:subfig:f}
\includegraphics[width=3.0in]{As2a-12w.pdf}}
\caption{Scaled absolute errors of the new Levin method for the integral in Example 5.1 as a function of increasing $w$. Errors behave asymptotically as $\mathcal{O}(w^{-s-1-\min\{1+\alpha,1\}})$ for different values of $s$ and $\alpha$.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\subfloat[$s=0$,$\alpha=0.5$]{
\label{fig2:subfig:a}
\includegraphics[width=3.0in]{As0a+12n.pdf}}
\hspace{0.1in}
\subfloat[$s=0$,$\alpha=-0.5$]{
\label{fig2:subfig:b}
\includegraphics[width=3.0in]{As0a-12n.pdf}}
\subfloat[$s=1$,$\alpha=0.5$]{
\label{fig2:subfig:c}
\includegraphics[width=3.0in]{As1a+12n.pdf}}
\hspace{0.1in}
\subfloat[$s=1$,$\alpha=-0.5$]{
\label{fig2:subfig:d}
\includegraphics[width=3.0in]{As1a-12n.pdf}}
\subfloat[$s=2$,$\alpha=0.5$]{
\label{fig2:subfig:e}
\includegraphics[width=3.0in]{As2a+12n.pdf}}
\hspace{0.1in}
\subfloat[$s=2$,$\alpha=-0.5$]{
\label{fig2:subfig:f}
\includegraphics[width=3.0in]{As2a-12n.pdf}}
\caption{Absolute errors of the new Levin method for the integral in Example 5.1 as a function of increasing number of collocation points $n$. Exponential convergence is observed for different values of $s$ and $\alpha$.}
\label{fig2}
\end{figure}
The Levin method is implemented based on the modified Chebyshev-Lobatto points. Figures \ref{fig1} and \ref{fig2} show numerical convergence for increasing frequency $w$ and for an increasing number of collocation points. In Figure \ref{fig1}, $w^{s+1+\min\{1+\alpha,1\}}$-scaled absolute errors are plotted as a function of $w$. The lines are approximately straight, which confirms the asymptotic decay of the error at the rate $w^{-s-1-\min\{1+\alpha,1\}}$. In Figure \ref{fig2}, convergence is shown as a function of $n$, the number of collocation points. Exponential convergence is observed, which levels off only when machine precision is reached. Errors decrease as the values of $w$ increase.
\begin{example}
In the second example, we compute the integral with algebraic and logarithmic singularities
\[
\int_0^1\frac{1}{1+x^2}x^\alpha\log x e^{iwx}dx.
\]
\end{example}
We present in Figures \ref{fig4} and \ref{fig5} similar results on numerical convergence for increasing frequency $w$ and for an increasing number of collocation points. In Figure \ref{fig4}, $w^{s+1+\min\{1+\alpha,1\}}\delta_\alpha^{-1}(w)$-scaled absolute errors are plotted as a function of $w$.
{\color{black}
The nearly straight lines confirm the asymptotic decay of the error}
at the rate $\delta_\alpha(w)w^{-s-1-\min\{1+\alpha,1\}}$. As a function of the number of collocation points, exponential convergence is observed in Figure \ref{fig5}. Errors also decrease as the values of $w$ increase.
\begin{figure}
\centering
\subfloat[$s=0$,$\alpha=0.5$]{
\label{fig4:subfig:a}
\includegraphics[width=3.0in]{Ls0a+12w.pdf}}
\hspace{0.1in}
\subfloat[$s=0$,$\alpha=-0.5$]{
\label{fig4:subfig:b}
\includegraphics[width=3.0in]{Ls0a-12w.pdf}}
\subfloat[$s=1$,$\alpha=0.5$]{
\label{fig4:subfig:c}
\includegraphics[width=3.0in]{Ls1a+12w.pdf}}
\hspace{0.1in}
\subfloat[$s=1$,$\alpha=-0.5$]{
\label{fig4:subfig:d}
\includegraphics[width=3.0in]{Ls1a-12w.pdf}}
\subfloat[$s=2$,$\alpha=0.5$]{
\label{fig4:subfig:e}
\includegraphics[width=3.0in]{Ls2a+12w.pdf}}
\hspace{0.1in}
\subfloat[$s=2$,$\alpha=-0.5$]{
\label{fig4:subfig:f}
\includegraphics[width=3.0in]{Ls2a-12w.pdf}}
\caption{Scaled absolute errors of the new Levin method for the integral in Example 5.2 as a function of increasing $w$. Errors behave asymptotically as $\mathcal{O}(\delta_\alpha (w)w^{-s-1-\min\{1+\alpha,1\}})$ for different values of $s$ and $\alpha$.}
\label{fig4}
\end{figure}
\begin{figure}
\centering
\subfloat[$s=0$,$\alpha=0.5$]{
\label{fig5:subfig:a}
\includegraphics[width=3.0in]{Ls0a+12n.pdf}}
\hspace{0.1in}
\subfloat[$s=0$,$\alpha=-0.5$]{
\label{fig5:subfig:b}
\includegraphics[width=3.0in]{Ls0a-12n.pdf}}
\subfloat[$s=1$,$\alpha=0.5$]{
\label{fig5:subfig:c}
\includegraphics[width=3.0in]{Ls1a+12n.pdf}}
\hspace{0.1in}
\subfloat[$s=1$,$\alpha=-0.5$]{
\label{fig5:subfig:d}
\includegraphics[width=3.0in]{Ls1a-12n.pdf}}
\subfloat[$s=2$,$\alpha=0.5$]{
\label{fig5:subfig:e}
\includegraphics[width=3.0in]{Ls2a+12n.pdf}}
\hspace{0.1in}
\subfloat[$s=2$,$\alpha=-0.5$]{
\label{fig5:subfig:f}
\includegraphics[width=3.0in]{Ls2a-12n.pdf}}
\caption{Absolute errors of the new Levin method for the integral in Example 5.2 as a function of increasing number of collocation points $n$. Exponential convergence is observed for different values of $s$ and $\alpha$.}
\label{fig5}
\end{figure}
\begin{example}
To compare the convergence of the new Levin methods and the corresponding Filon-type methods, we consider two integrals with a non-linear oscillator,
\begin{equation}
\int_0^1\frac{1}{1+x^2}x^\alpha e^{iw(x^2+x+1)}dx\;\text{and}\; \int_0^1\frac{1}{1+x^2}x^\alpha \log x e^{iw(x^2+x+1)}dx.
\end{equation}
\end{example}
Filon-type methods are implemented based on the basis $\Psi_M$ while Levin methods are based on Chebyshev polynomials.
{\color{black}
Both adopt the modified Chebyshev-Lobatto points as collocation points.} Tables \ref{fig7} and \ref{fig8} show the absolute errors of the integrals $\int_0^1\frac{1}{1+x^2}x^\alpha e^{iw(x^2+x+1)}dx$ and $\int_0^1\frac{1}{1+x^2}x^\alpha \log x e^{iw(x^2+x+1)}dx$, respectively. We fix $w=100$ and $\alpha=0.5$. The errors of the Levin methods are much smaller than those of the Filon-type methods.
{\color{black}
Hence, the new Levin methods outperform the Filon-type methods} when computing an integral with a nonlinear oscillator.
\begin{table}[htb]
\begin{center}
\small
\caption{Absolute errors of the Levin method and Filon-type method for integral $\int_{0}^{1}\frac{1}{1+x^2}x^\alpha e^{iw(x^2+x+1)}dx$ with $w=100$ and $\alpha=0.5$.}
\label{fig7}
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}*{\centering $n$} & \multicolumn{3}{c}{Levin ( $Q_{w,\alpha,n}^{L,s}[f]$ )} & \multicolumn{3}{|c}{Filon-type ( $Q_{w,\alpha,n}^{F,s}[f]$ )} \\
\cline{2-7}
& $s=0$ & $s=1$ & $s=2$ & $s=0$ & $s=1$ & $s=2$ \\
\hline
4&$1.5382e-05$&$2.6363e-07$&$1.2572e-08$&$4.3048e-05$&$2.1433e-06$&$2.1830e-07$\\
6&$2.3171e-06$&$2.5048e-08$&$1.6132e-09$&$2.7080e-05$&$1.4631e-06$&$1.5978e-07$\\
8&$3.0090e-07$&$1.4410e-09$&$1.8206e-10$&$1.6425e-05$&$9.5278e-07$&$1.0084e-07$\\
10&$2.9168e-08$&$2.1253e-10$&$1.8362e-11$&$9.4919e-06$&$5.9085e-07$&$6.0610e-08$\\
12&$2.5673e-09$&$3.8829e-11$&$1.7950e-12$&$5.3341e-06$&$3.5403e-07$&$3.6018e-08$\\
14&$1.8243e-10$&$4.9531e-12$&$1.9541e-13$&$2.9614e-06$&$2.0881e-07$&$8.6379e-09$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]
\begin{center}
\small
\caption{Absolute errors of the Levin method and Filon-type method for integral $\int_{0}^{1}\frac{1}{1+x^2}x^\alpha\log(x)e^{iw(x^2+x+1)}dx$ with $w=100$ and $\alpha=0.5$.}
\label{fig8}
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}*{\centering $n$} & \multicolumn{3}{c}{Levin ( $Q_{w,n}^{L,s}[f]$ )} & \multicolumn{3}{|c}{Filon-type ( $Q_{w,n}^{F,s}[f]$ )} \\
\cline{2-7}
& $s=0$ & $s=1$ & $s=2$ & $s=0$ & $s=1$ & $s=2$ \\
\hline
4&$2.2974e-05$&$9.0885e-07$&$3.1955e-08$&$5.0237e-05$&$4.3859e-06$&$1.7800e-07$\\
6&$2.4346e-06$&$1.1992e-07$&$3.6359e-09$&$3.0141e-05$&$2.5481e-06$&$1.3454e-07$\\
8&$1.2843e-07$&$1.2403e-08$&$4.0700e-10$&$1.9505e-05$&$1.4536e-06$&$1.1448e-07$\\
10&$6.3650e-09$&$1.1533e-09$&$4.7044e-11$&$1.1996e-05$&$8.2261e-07$&$7.9733e-08$\\
12&$2.3501e-09$&$9.4296e-11$&$5.2921e-12$&$6.9970e-06$&$4.7287e-07$&$4.9349e-08$\\
14&$3.5590e-10$&$8.0273e-12$&$5.5103e-13$&$3.9501e-06$&$2.7718e-07$&$1.1227e-08$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{example}
In the final example, we show the efficiency of the new collocation method in Section 4 by recomputing the integral in Example \ref{example1} and comparing relative errors and CPU time with those of the CMFP.
\end{example}
To this end, we simply recall the quadrature formulas of the CMFP.
The moment-free Filon method in \cite{XIANG2007} approximates the integral $\int_a^b f(x)e^{iwg(x)}dx$ by
\[
Q^{[a,b],MF}_{w,m}[f,g]:=\int_{g(a)}^{g(b)} p_n(x)e^{iwx}dx,
\]
where $p_n$ is a polynomial of degree $n-1$
{\color{black}
that interpolates $\left[(f/g')\circ g^{-1}\right]$ at $g(t_j), j=0,1,\ldots,m$, where $t_j, j=0,1,\ldots,m$, are distinct points on $[a,b]$. The composite moment-free Filon-type rule used in the CMFP reads}
\[
Q^{[a,b],CMF}_{w,n,m}[f,g]:=\sum_{j=1}^{n} Q^{[x_{j-1},x_j],MF}_{w,m}[f,g]\; \text{with}\; x_j=a+\frac{j}{n}(b-a), j=0,1,\ldots,n.
\]
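For the special case $g(x)=x$, a Filon-type rule of the kind used above reduces to interpolating $f$ and integrating the interpolant against $e^{iwx}$ exactly. The following Python sketch (ours, not code from the paper; the function name `filon` and the choice of Chebyshev nodes are illustrative assumptions) uses the standard integration-by-parts recurrence $\mu_k=(b^k e^{iwb}-a^k e^{iwa}-k\mu_{k-1})/(iw)$ for the monomial moments:

```python
import numpy as np

def filon(f, a, b, w, m):
    """Approximate \\int_a^b f(x) e^{iwx} dx (m >= 1) by integrating p(x) e^{iwx}
    exactly, where p interpolates f at m+1 Chebyshev points on [a, b]."""
    j = np.arange(m + 1)
    t = (a + b) / 2 + (b - a) / 2 * np.cos(np.pi * j / m)  # Chebyshev-Lobatto nodes
    coeff = np.polyfit(t, f(t), m)       # monomial coefficients, highest degree first
    # Moments mu_k = \int_a^b x^k e^{iwx} dx via the upward recurrence
    # mu_k = (b^k e^{iwb} - a^k e^{iwa} - k mu_{k-1}) / (iw), stable for |w| > m.
    ea, eb = np.exp(1j * w * a), np.exp(1j * w * b)
    mu = np.empty(m + 1, dtype=complex)
    mu[0] = (eb - ea) / (1j * w)
    for k in range(1, m + 1):
        mu[k] = (b**k * eb - a**k * ea - k * mu[k - 1]) / (1j * w)
    # p(x) = sum_i coeff[i] x^(m-i), so the integral pairs coeff[i] with mu[m-i]
    return sum(c * mu[m - i] for i, c in enumerate(coeff))
```

As in the tables above, accuracy improves with the number of nodes at fixed $w$, and the cost is independent of $w$.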
The Gauss-Legendre quadrature rule for the integral $\int_a^b f(x)dx$ is given by
\[
Q_m^{[a,b],GL}[f]:=\frac{b-a}{2}\sum_{j=1}^m w_j f\left(\frac{(b-a)t_j+b+a}{2}\right),
\]
where $w_j$ and $t_j$ are the standard weights and points of the Gauss-Legendre rule on the domain $[-1,1]$.
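For reference, $Q_m^{[a,b],GL}[f]$ can be realized in a few lines of Python using NumPy's built-in Gauss-Legendre nodes and weights (a minimal sketch, not code from the paper):

```python
import numpy as np

def gauss_legendre(f, a, b, m):
    """Approximate \\int_a^b f(x) dx with the m-point Gauss-Legendre rule,
    mapping the standard nodes t_j on [-1, 1] to [a, b]."""
    t, w = np.polynomial.legendre.leggauss(m)  # nodes t_j, weights w_j on [-1, 1]
    x = ((b - a) * t + b + a) / 2.0            # affine map to [a, b]
    return (b - a) / 2.0 * np.sum(w * f(x))
```

By construction the $m$-point rule is exact for polynomials of degree up to $2m-1$.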
Suppose, for a non-negative integer $r$, that the function $g\in C^{r+1}[0,1]$ satisfies $g^{(j)}(0)=0$ for $j=1,\ldots,r$ and $g^{(r+1)}(x)\neq 0$ for $x\in[0,1]$, so that $x=0$ is the only possible stationary point of $g$ on $[0,1]$.
Letting $\sigma_r:=\|g^{(r+1)}\|_\infty/(r+1)!$, $w_r:=\max\{w\sigma_r,w\}$, and $\lambda_r:=w_r^{-1/(r+1)}$,
the CMFP method for the integral $\int_0^1 f(x)e^{iwg(x)}dx$ is given by
\[
Q_{w,n,s,m_1,m_2}^{CMFP}[f,g]:=\lambda_r\sum_{j=1}^{s-1}Q_{m_1}^{[x_j,x_{j+1}],GL}[\varphi]+ \sum_{j=1}^{n} Q^{[y_{j-1},y_{j}],CMF}_{w,N_j,m_2}[f,g],
\]
where $x_0=0$ and $x_j=(j/s)^p$ for $j=1,2,\ldots,s$, with $p=(2m+1)/(1+\mu)$ and $\mu$ the index of singularity of $f$; $\varphi(x)=f(\lambda_r x)e^{iwg(\lambda_r x)}$;
$y_j=w_r^{(j-n)/(n(r+1))}$ for $j=0,1,\ldots,n$; $N_j=\lceil q_j^{m/(m-1)}\rceil$ with $q_j=\max\{|g'(y_{j-1})|,|g'(y_j)|\}\,y_{j-1}/g(y_{j-1})$ for $j=1,2,\ldots,n$; and $\nu$ is the index of singularity of $\left[(f/g')\circ g^{-1}\right]$. When $f$ has only a logarithmic singularity, $\mu$ is set to $0$.
When $g(x)=x$, $Q_{w,n,s,m_1,m_2}^{CMFP}[f,g]$ is abbreviated to $Q_{w,n,s,m_1,m_2}^{CMFP}[f]$.
Figure \ref{fig3} compares the relative errors and CPU time of the new Levin method $Q_{-w,\alpha,n}^{L,0}[f]$ and the CMFP $Q_{-w,n_1,n_1,4,4}^{CMFP}[f]$ with $r=0$ and $\sigma_r=1$, where $\alpha=-0.5$, $w=100$ or $1000$, $n$ ranges over $[4,6,\ldots,36]$, and $n_1=2^{(n-2)/2}$.
The left-hand panel of Figure \ref{fig3} shows that the errors of the Levin method decrease faster, whereas the errors of the CMFP increase once the number of points is large enough (highlighted by the dashed ellipse).
Hence, the new Levin method is more stable.
The right-hand panel shows that the Levin method takes less time to attain machine precision and that its CPU time grows more slowly, because the proposed method attains superalgebraic convergence.
\begin{figure}
\centering
\includegraphics[width=3in]{Example1CompareError.pdf}
\hspace{.31in}%
\includegraphics[width=3in]{Example1CompareTime.pdf}
\caption{Comparison of the relative errors (left) and the CPU time (right) in computing $\int_0^1 f(x)x^\alpha e^{-iwx}dx$ by the new Levin method ($Q_{-w,\alpha,n}^{L,0}[f]$) and CMFP ($Q_{-w,n_1,n_1,4,4}^{CMFP}[f] $), where $\alpha=-0.5$, $n$ is set to be $[4,6,\ldots,36]$, and $n_1=2^{(n-2)/2}$. }
\label{fig3}
\end{figure}
\begin{appendices}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\section{Proof of Lemma \ref{lemma3}}
To prove Lemma \ref{lemma3}, we first present a useful result in the next lemma.
\begin{lemma}\label{lemma01}
Let $w\in\mathbb R$ be a parameter and assume that $g\in C^1[0,a]$ is a function independent of $w$ satisfying $g(0)=0$ and $g'(x)>0$ for $x\in[0,a]$. If $f\in C[0,a]$ and $q$ is a solution of the ODE,
\begin{equation*}
g(x)q'(x)+[k+\alpha+iwg(x)]g'(x)q(x)=f(x), \qquad k=1,2,\ldots,
\end{equation*}
with the initial condition $q(0)=0$, then there exists a constant $C$ independent of $w$ such that
\begin{equation*}
\|q\|_\infty\leq C\|f\|_\infty.
\end{equation*}
\end{lemma}
\begin{proof} Multiplying both sides by the integrating factor $g^{k+\alpha-1}(x)e^{iwg(x)}$ and integrating over $[0,x]$, we obtain
\begin{equation*}
q(x)=g^{-k-\alpha}(x)e^{-iwg(x)}\int_0^x f(t)g^{k+\alpha-1}(t) e^{iwg(t)}dt.
\end{equation*}
Taking the absolute value,
\begin{equation*}
|q(x)|
\leq\|f\|_\infty \left| g^{-k-\alpha}(x) \right| \int_0^x \left|g^{k+\alpha-1}(t)\right| dt.
\end{equation*}
For any $x\in(0,a]$, the mean value theorem gives a $\xi\in(0,x)$ such that $g(x)/x=g'(\xi)$. Since $g'(x)>0$ on $[0,a]$ and $g'\in C[0,a]$, there exist two positive constants $C_1$ and $C_2$, depending on $g$ and $a$, such that $C_1<g(x)/x<C_2$ for $x\in(0,a]$. Hence,
\begin{equation*}
|q(x)|\leq\|f\|_\infty C_1^{-k-\alpha}C_2^{k+\alpha-1}x^{-k-\alpha}\int_0^x t^{k+\alpha-1}dt=\frac{C_2^{k+\alpha-1}}{(k+\alpha)C_1^{k+\alpha}} \|f\|_\infty.
\end{equation*}
The proof is finished by setting $C=\frac{C_2^{k+\alpha-1}}{(k+\alpha)C_1^{k+\alpha}}$.
\end{proof}
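As a quick numerical sanity check of Lemma \ref{lemma01} (ours, not part of the paper), take $g(x)=x$ and $f\equiv 1$; then $C_1=C_2=1$ and the bound reads $\|q\|_\infty\le \frac{1}{k+\alpha}$. The explicit solution derived in the proof can be evaluated directly:

```python
import numpy as np

def q_explicit(k, alpha, w, a=1.0, n=200000):
    """Explicit solution of  x q'(x) + (k+alpha+iwx) q(x) = 1  (the case g(x)=x,
    f = 1):  q(x) = x^(-k-alpha) e^(-iwx) \\int_0^x t^(k+alpha-1) e^(iwt) dt,
    with the integral computed by the composite midpoint rule."""
    h = a / n
    t = (np.arange(n) + 0.5) * h                  # cell midpoints on (0, a)
    cells = t**(k + alpha - 1) * np.exp(1j * w * t) * h
    x = (np.arange(n) + 1.0) * h                  # right cell endpoints
    I = np.cumsum(cells)                          # I[j] approximates \int_0^{x[j]}
    return x, x**(-k - alpha) * np.exp(-1j * w * x) * I
```

For $k=1$, $\alpha=0.5$, $w=100$ one indeed observes $\max_x|q(x)|\lesssim 2/3$, uniformly in $w$.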
Now we are ready to prove Lemma \ref{lemma3}.
\begin{proofspecial}
Rewriting the ODE \eqref{gq1eq} as
\begin{equation}\label{flsode2}
c_0+g(x)q_1(x)=\frac{1}{iwg'(x)}\left(f_1(x)-g(x)q_1'(x)-(1+\alpha)g'(x)q_1(x)\right),
\end{equation}
we generate a sequence of successive approximations.
Setting the initial approximation $q_1^{[0]}\equiv0$, we obtain, for $k\geq1$,
\begin{equation*}
\begin{split}
\Phi^{[k]}(x)&:= \frac{1}{iwg'(x)}\left(f_1(x)-g(x)\mathcal{D} q_1^{[k-1]}(x)-(1+\alpha)g'(x)q_1^{[k-1]}(x)\right), \\
c^{[k]}_0&:=\Phi^{[k]}(0),\\
q_1^{[k]}(x)&:=\frac{1}{g(x)}\left(\Phi^{[k]}(x)-c^{[k]}_0\right).
\end{split}
\end{equation*}
Since $f_1\in C^{2n+1}[0,a]$ and $g\in C^{2n+2}[0,a]$ with $g'(x)>0$ for $x\in[0,a]$, it can be obtained by induction that $\Phi^{[k]}, q_1^{[k]}\in C^{2n-k+2}[0,a]$ and
\begin{equation}\label{qn2}
\left\|\mathcal{D}^{m}\Phi^{[k]}\right\|_\infty=\mathcal{O}(w^{-1}),\; \left|c^{[k]}_0\right|=\mathcal{O}(w^{-1}), \; \text{and}\; \left\|\mathcal{D}^{m}q_1^{[k]}\right\|_\infty=\mathcal{O}(w^{-1}),
\end{equation}
for $m=0,1,\ldots,2n-k+1,\; k=1,\ldots,n+1$. Therefore, functions $q_1^{[n+1]}$ and $c^{[n+1]}_0$ possess the desired property \eqref{property} and satisfy
\begin{equation*}
\begin{split}
&iwg'(x)c^{[n+1]}_0+ g(x)\mathcal{D} q_1^{[n+1]}(x)+[1+\alpha+iwg(x)]g'(x)q_1^{[n+1]}(x) \\
&\quad\quad =f_1(x)+g(x)\mathcal{D}\left(q_1^{[n+1]}(x)- q_1^{[n]}(x)\right)+(1+\alpha)g'(x)\left(q_1^{[n+1]}(x)- q_1^{[n]}(x) \right).
\end{split}
\end{equation*}
From the relation
\begin{equation*}
\Phi^{[k+1]}(x)-\Phi^{[k]}(x)=-\frac{1}{iwg'(x)} \left[ g(x)\mathcal{D}\left(q_1^{[k]}(x)- q_1^{[k-1]}(x)\right)+(1+\alpha)g'(x)\left(q_1^{[k]}(x)- q_1^{[k-1]}(x) \right) \right], \; k\geq1,
\end{equation*}
it is derived by induction that
\begin{equation*}
\left\|g\,\mathcal{D}\left(q_1^{[n+1]}- q_1^{[n]}\right) +(1+\alpha)g'\left(q_1^{[n+1]}- q_1^{[n]} \right) \right\|_\infty=\mathcal{O}(w^{-n-1}).
\end{equation*}
Now we define $q_1$ to be the solution of the ODE \eqref{gq1eq} with $c_0=c^{[n+1]}_0$ and initial condition $q_1(0)=q_1^{[n+1]}(0)$. The difference $d(x)=q_1(x)-q_1^{[n+1]}(x)$ satisfies
\begin{equation}\label{deq2}
\begin{split}
& g(x) d'(x)+[1+\alpha+iwg(x)]g'(x)d(x) \\
&\quad\quad =g(x)\mathcal{D}\left(q_1^{[n]}(x)- q_1^{[n+1]}(x)\right)+(1+\alpha)g'(x)\left(q_1^{[n]}(x)- q_1^{[n+1]}(x) \right) \triangleq \psi^{[0]}(x),
\end{split}
\end{equation}
with zero initial condition, where $\psi^{[0]}\in C^{n}[0,a]$.
Since $\left\|\psi^{[0]}\right\|_\infty=\mathcal{O}(w^{-n-1})$, we obtain from Lemma \ref{lemma01} that $\|d\|_\infty=\mathcal{O}(w^{-n-1})$. Differentiating \eqref{deq2}, it follows that $d'$ satisfies
\begin{equation*}
g(x)d''(x)+[2+\alpha+iwg(x)]g'(x)d'(x)=\psi^{[1]}(x),
\end{equation*}
with $\psi^{[1]}(x)=\mathcal{D} \psi^{[0]}(x)- [(1+\alpha+iwg(x))g''(x)+iw(g'(x))^2]d(x)$.
It is clear that $\psi^{[1]}\in C^{n-1}[0,a]$ and $\left\| \psi^{[1]}\right\|_\infty=\mathcal{O}(w^{-n})$, which implies that $\|d'\|_\infty=\mathcal{O}(w^{-n})$. Repeating the differentiation process, we have
\begin{equation}\label{dn2}
\|\mathcal{D}^md\|_\infty=\mathcal{O}(w^{-n-1+m}), m=0,1,\ldots,n.
\end{equation}
Combining \eqref{qn2} and \eqref{dn2}, the constant $c^{[n+1]}_0$ and the solution $q_1$ satisfy the required estimates, which completes the proof.
\end{proofspecial}
\end{appendices}
\vspace{0.1cm} {\bf Acknowledgements}. The authors are grateful for the referees' helpful suggestions and insightful comments, which helped improve the manuscript significantly. The authors thank Dr. Saira and Dr. Suliman at Central South University for their careful checking of numerous details.
\section{Introduction}
Nearly thirty years after the discovery of the `spin crisis' by the EMC
collaboration~\cite{Ashman:1987hv}, the partonic decomposition of the nucleon spin
continues to be a fascinating research area. Among the four terms in the Jaffe-Manohar
decomposition formula~\cite{Jaffe:1989jz},
\begin{eqnarray}
\frac{1}{2}=\frac{1}{2}\Delta \Sigma + \Delta G + L_q+L_g\,, \label{1}
\end{eqnarray}
the quark helicity contribution $\Delta \Sigma$ is reasonably well constrained by the experimental data.
The currently accepted value is $\Delta \Sigma \sim 0.30$. Over the past decade or so, there have been
worldwide experimental efforts to determine the gluon helicity contribution $\Delta G$ as the integral of
the polarized gluon distribution function $\Delta G=\int_0^1 dx \Delta G(x)$. The most recent NLO global
QCD analysis has found a nonvanishing gluon polarization in the moderate $x$ region
$\int_{0.05}^1dx \Delta G(x)\approx 0.2^{+0.06}_{-0.07}$~\cite{deFlorian:2014yva}.
However, uncertainties from the small-$x$ region $x<0.05$ are quite large, of order
unity. Future experimental data from RHIC at $\sqrt{s}=510$ GeV~\cite{Adare:2015ozj} and the planned
Electron-Ion Collider (EIC)~\cite{Accardi:2012qut} are
expected to drastically reduce these uncertainties.
In contrast to these achievements in the helicity sector, it is quite frustrating
that very little is known about the orbital angular momentum (OAM) of quarks $L_q$ and gluons $L_g$.
In fact, even the proper, gauge-invariant definitions of $L_{q,g}$ have long remained obscure (see, however,
\cite{Bashinsky:1998if}). Thanks to recent theoretical developments, it is now understood that
$L_{q,g}$ can be defined in a manifestly gauge invariant (albeit nonlocal)
way~\cite{Hatta:2011zs,Hatta:2011ku}. Moreover, this construction naturally
allows one to define, also gauge invariantly, the associated partonic distributions~\cite{Hatta:2012cs,Ji:2012ba},
\begin{eqnarray}
L_{q,g}=\int_0^1 dx L_{q,g}(x)\,.
\end{eqnarray}
A detailed analysis shows that $L_{q,g}(x)$ is sensitive to the twist-three correlations in the longitudinally polarized nucleon.
Introducing the $x$-distributions $L_{q,g}(x)$ is essential for the experimental measurement of OAMs. Just like $\Delta \Sigma$, which is the integral of the polarized quark distribution $\Delta \Sigma=\int_0^1dx \Delta q(x)$, $L_{q,g}$ can only be determined through a global analysis of the `OAM parton distributions' $L_{q,g}(x)$ extracted from various observables. However, accessing $L_{q,g}(x)$ experimentally is quite challenging, and there has been some recent debate over whether they can in principle be related to observables in the first place \cite{Courtoy:2013oaa,Kanazawa:2014nha,Rajan:2016tlg}.
In this paper we propose a method to experimentally measure the {\it gluon} OAM distribution $L_{g}(x)$ for small values of $x$. This is practically important in view of
the abovementioned large uncertainties in $\Delta G$ from the small-$x$ region, as well as a
strong coupling analysis \cite{Hatta:2009ra} which suggests that a significant fraction of spin
comes from OAM at small-$x$. Together with a related proposal which focuses on the moderate-$x$ region \cite{Ji:2016jgn}, our work represents a major step forward towards understanding the spin sum rule (\ref{1}).\footnote{
Very recently, a different observable related to the quark OAM distribution $L_q(x)$ for generic values of $x$ \cite{Bhattacharya:2017bvs} has been suggested. Moreover, the first direct computation of $L_q$ in lattice QCD simulations \cite{Engelhardt:2017miy} has appeared. }
We shall make crucial use of the relation \cite{Lorce:2011kd,Lorce:2011ni,Hatta:2011ku} between $L_{q,g}$ and the QCD Wigner distribution \cite{Belitsky:2003nz}, or its Fourier transform, the generalized transverse momentum dependent distribution (GTMD) \cite{Meissner:2009ww,Hatta:2011ku,Lorce:2013pza}, which actually holds at the level of the densities $L_{q,g}(x)$.
Since the gluon Wigner distribution is measurable at small-$x$ \cite{Hatta:2016dxp}, $L_{g}(x)$ should also be measurable through this relation.
In Section II, we review the gauge invariant gluon OAM $L_g$ and its $x$-distribution $L_g(x)$. In Section III, we discuss the said relation between $L_g(x)$ and the gluon Wigner distribution, and prove some nontrivial identities. From Section IV on, we focus on the small-$x$ regime.
We derive a novel operator representation of $L_g(x)$ in terms of lightlike Wilson lines. The operator is unusual (for those who are familiar with nonlinear small-$x$ evolution equations) as it is composed of half-infinite Wilson lines and covariant derivatives. We observe that exactly the same operator is relevant to the polarized gluon distribution $\Delta G(x)$ at small-$x$. This, together with the arguments in Appendix B, has led us to advocate the relation
\begin{eqnarray}
L_g(x) \approx -2\Delta G(x)\,, \qquad (x \ll 1) \label{ap}
\end{eqnarray}
which puts strong constraints on the small-$x$ behavior of $L_g(x)$ and $\Delta G(x)$ and their uncertainties. It also suggests that the measurement of $L_g(x)$ at small-$x$ is closely related to that of $\Delta G(x)$. Based on this expectation, in Section V
we compute the longitudinal single-spin
asymmetry
$d\Delta \sigma=d\sigma^\rightarrow -d\sigma^\leftarrow$
in diffractive dijet
production in lepton-nucleon scattering. It turns out that the asymmetry vanishes in the leading eikonal approximation, and the first nonvanishing contribution comes from the next-to-eikonal corrections. This involves precisely the OAM operator found in Section IV, and as a result, the asymmetry is directly proportional to $L_g(x)$ in certain kinematic regimes. Interestingly, the asymmetry is also proportional to the odderon amplitude in QCD. Finally, we comment on the small-$x$ evolution of $L_g(x)$ and $\Delta G(x)$ in Sec.~VI and conclude in Sec.~VII.
\section{Gluon orbital angular momentum}
In this section, we review the gluon OAM $L_g$ and its associated parton distribution $L_g(x)$ following \cite{Hatta:2011zs,Hatta:2011ku,Hatta:2012cs}. The precise gauge invariant definition of $L_g$ is given by the nonperturbative proton matrix element
\begin{eqnarray}
\lim_{\Delta\to 0} \langle P'S|F^{+\alpha}\overleftrightarrow{D}_{\rm pure}^iA^{\rm phys}_{\alpha}|PS\rangle = -i\epsilon^{ij}\Delta_{\perp j} S^+L_g\,, \label{def}
\end{eqnarray}
where $P^\mu\approx \delta^\mu_+ P^+$ is the proton momentum and the spin vector is longitudinally polarized $S^\mu \approx \delta^\mu_+ S^+$. On the right hand side, we keep only the linear term in the transverse momentum transfer $\Delta_\perp = P'_\perp-P_\perp$ which is assumed to be small.
We use the notations $\overleftrightarrow{D}^\mu \equiv \frac{\partial^\mu -\overleftarrow{\partial}^\mu}{2}+igA^\mu$ and $D_{\rm pure}^\mu\equiv D^\mu-igA_{\rm phys}^\mu$.
$A^\mu_{\rm phys}$ is a nonlocal operator defined by \cite{Hatta:2011zs}
\begin{eqnarray}
A_{\pm {\rm phys}}^\mu(y) = \mp \int dz^- \theta(\pm(z^- - y^-))\widetilde{U}_{y^-,z^-}(y_\perp)F^{+\mu}(z^-,y_\perp)\,, \label{aphys}
\end{eqnarray}
where $\widetilde{U}$ is the lightlike Wilson line segment in the adjoint representation. $L_g$ does not depend on the choice of the $\pm$ sign in
(\ref{aphys}) due to $PT$ symmetry \cite{Hatta:2011ku}. In the light-cone gauge $A^+=0$, $A_{\rm phys}^\mu=A^\mu$ and (\ref{def}) reduces to the canonical gluon OAM originally introduced by Jaffe and Manohar \cite{Jaffe:1989jz}.
The operator structure (\ref{def}) was first written down in \cite{Chen:2008ag}, but the authors proposed a different $A_{\rm phys}^\mu$. We emphasize that the choice (\ref{aphys}) is unique if one identifies $\Delta G$ in (\ref{1}) with the usual gluon helicity $\Delta G$ that has been measured at RHIC and other experimental facilities.
Next we discuss the gluon OAM distributions $L_{g}(x)$ with the property\footnote{The normalization of $L_g(x)$ in (\ref{norm}) and (\ref{lg}) differs by a factor of 2 from that in Ref.~\cite{Hatta:2012cs} where $L_g(x)$ was defined as $L_g=\int_{-1}^1dx L_g(x)=2\int_0^1dx L_g(x)$. The present choice is in parallel with the definition of $\Delta G(x)$: $\int_0^1dx \Delta G(x)=\Delta G$. }
\begin{eqnarray}
L_g= \int_{0}^1 dx L_g(x) = \frac{1}{2}\int_{-1}^1 dx L_g(x)\,. \label{norm}
\end{eqnarray}
The $x$-distributions for the quark and gluon OAMs $L_{q,g}(x)$ were previously introduced in \cite{Hagler:1998kg,Harindranath:1998ve}, and their DGLAP evolution equations were derived at one loop. However, the definition in \cite{Hagler:1998kg,Harindranath:1998ve} is not gauge invariant, and the computation of the anomalous dimensions was performed in the light-cone gauge $A^+=0$. The gauge invariant canonical OAM distributions $L_{q,g}(x)$ were first introduced in \cite{Hatta:2012cs}. They reduce to the previous definitions \cite{Hagler:1998kg,Harindranath:1998ve} if one takes the light-cone gauge.\footnote{There is an alternative gauge invariant definition in \cite{Hoodbhoy:1998yb}, but it is different from the one \cite{Hatta:2012cs} we discuss in the following. }
While the notion of OAM parton distributions is not yet widely known, we emphasize that they are crucial for the measurability of OAMs. Just as one has to measure the polarized quark and gluon distributions $\Delta q(x),\Delta G(x)$ in order to extract $\Delta \Sigma=\int_0^1 dx \Delta q(x)$ and $\Delta G=\int_0^1 dx \Delta G(x)$, any attempt to experimentally determine $L_{q,g}$ must start by measuring its $x$-distribution $L_{q,g}(x)$.
For the gauge invariant gluon OAM (\ref{def}) with $A_{\rm phys}^\mu$ given by (\ref{aphys}), the distribution $L_g(x)$ is also gauge invariant and is defined through the relation \cite{Hatta:2012cs}
\begin{eqnarray}
\delta(x-x')\frac{L_g(x)}{2} = \frac{M_F(x,x')}{x(x-x')} -\frac{M_D(x,x')}{x}\,, \label{lg}
\end{eqnarray}
where $M_F$ and $M_D$ are the `F-type' and `D-type' three-gluon collinear correlators
\begin{eqnarray}
&&\int \frac{dy^- dz^-}{(2\pi)^2} e^{i xP^+y^- +i(x'-x)P^+z^-} \langle P'S|F^{+\alpha}(0) gF^{+i}(z^-)F^{+}_{\ \alpha}(y^-)|PS\rangle \nonumber \\
&&= -ixP^+ \int \frac{dy^- dz^-}{(2\pi)^2} e^{i xP^+y^- +i(x'-x)P^+z^-} \langle P'S|F^{+\alpha}(0) gF^{+i}(z^-)A^{\pm
{\rm phys}}_\alpha(y^-)|PS\rangle \nonumber \\
&&=\epsilon^{ij}\Delta_{\perp j} S^+ M_F(x,x')+\cdots \,, \label{three}
\end{eqnarray}
\begin{eqnarray}
&& \int \frac{dy^- dz^-}{(2\pi)^2} e^{ixP^+y^- +i(x'-x)P^+z^-} \langle P'S|F^{+\alpha}(0) \overleftrightarrow{D}^{i}(z^-)F^{+}_{\ \alpha}(y^-)|PS\rangle \nonumber \\
&&=-ixP^+ \int \frac{dy^- dz^-}{(2\pi)^2} e^{ixP^+y^- +i(x'-x)P^+z^-} \langle P'S|F^{+\alpha}(0) \overleftrightarrow{D}^{i}(z^-)A^{\pm {\rm phys}}_\alpha(y^-)|PS\rangle \nonumber \\
&& =\epsilon^{ij}\Delta_{\perp j} S^+ M_D(x,x') + \cdots\,. \label{dt}
\end{eqnarray}
(In the above, we omitted Wilson lines $\widetilde{U}$ for simplicity.) The quark OAM distribution $L_q(x)$ can be similarly defined through the collinear quark-gluon-quark operators.
Interestingly, although $L_{q,g}(x)$ are related to three-parton correlators which are twist-three, a partonic interpretation is possible because one of the three partons has vanishing longitudinal momentum fraction $x-x'=0$ due to the delta function constraint in (\ref{lg}).
After using the QCD equations of motion, one can reveal the precise twist structure of $L_g(x)$: It can be written as the sum of the `Wandzura-Wilczek' part and the genuine twist-three part \cite{Hatta:2012cs}
\begin{eqnarray}
\frac{1}{2}L_g(x) &=& \frac{x}{2}\int_x^{1} \frac{dx'}{x'^2}(H_g(x')+E_g(x'))- x\int_x^{1} \frac{dx'}{x'^2} \Delta G(x') \nonumber \\
&& +2x\int_x^{1} \frac{dx'}{x'^3} \int dX \Phi_F(X,x') +2x\int_x^{1}dx_1 \int_{-1}^1 dx_2 \tilde{M}_F(x_1,x_2) {\mathcal P}\frac{1}{x_1^3(x_1-x_2)}
\nonumber \\
&& \qquad +2 x\int_x^{1} dx_1\int_{-1}^1 dx_2 M_F(x_1,x_2) {\mathcal P} \frac{2x_1-x_2}{x_1^3(x_1-x_2)^2}\,, \label{ma}
\end{eqnarray}
where $H_g=xG(x)$ and $E_g$ are the gluon generalized parton distributions (GPDs) at vanishing skewness.
$\Phi_F$ and $\tilde{M}_F$ are the quark-gluon-quark and three-gluon correlators defined similarly to (\ref{three}) (see \cite{Hatta:2012cs} for the details). Eq.~(\ref{ma}) shows that $L_g(x)$ and $\Delta G(x)$ are related, albeit in a complicated way. Later we shall find a more direct relation between the two distributions special to the small-$x$ region.
Before leaving this section, we record the DGLAP equations for $L_{q,g}(x)$. They can be extracted from the anomalous dimensions computed in \cite{Hagler:1998kg,Harindranath:1998ve} (see also \cite{Ji:1995cu}).
\begin{eqnarray}
\frac{d}{d\ln Q^2} \left(\begin{matrix} L_q(x) \\ L_g(x) \end{matrix}\right)= \frac{\alpha_s}{2\pi} \int_x^1 \frac{dz}{z} \left(\begin{matrix} \hat{P}_{qq}(z) &\hat{P}_{qg}(z) & \Delta \hat{P}_{qq}(z) & \Delta \hat{P}_{qg}(z) \\ \hat{P}_{gq}(z) & \hat{P}_{gg}(z) & \Delta \hat{P}_{gq}(z) & \Delta \hat{P}_{gg}(z) \end{matrix}\right) \left(\begin{matrix} L_q(x/z) \\ L_g(x/z) \\ \Delta q(x/z) \\ \Delta G(x/z) \end{matrix}\right)\,, \label{d1}
\end{eqnarray}
\begin{eqnarray}
&&\hat{P}_{qq}(z)=C_F \left(\frac{z(1+z^2)}{(1-z)_+} + \frac{3}{2}\delta(1-z) \right)\,, \\
&&\hat{P}_{qg}(z) =n_f z(z^2+(1-z)^2)\,, \\
&&\hat{P}_{gq}(z)= C_F(1+(1-z)^2)\,, \\
&&\hat{P}_{gg}(z)= 6\frac{(z^2-z+1)^2}{(1-z)_+} + \frac{\beta_0}{2}\delta(z-1)\,, \\
&&\Delta \hat{P}_{qq}(z)=C_F (z^2-1)\,, \\
&&\Delta \hat{P}_{qg}(z)= n_f (1-3z+4z^2-2z^3)\,, \\
&&\Delta \hat{P}_{gq}(z)= C_F (-z^2+3z-2)\,, \\
&&\Delta \hat{P}_{gg}(z)=6 (z-1)(z^2-z+2)\,,
\end{eqnarray}
where $C_F=\frac{N_c^2-1}{2N_c}=\frac{4}{3}$, $n_f$ is the number of flavors and $\beta_0=11-\frac{2n_f}{3}$.
For completeness and later use, we also record the DGLAP equation for the helicity distributions
\begin{eqnarray}
\frac{d}{d\ln Q^2} \left(\begin{matrix} \Delta q(x) \\ \Delta G(x) \end{matrix}\right)= \frac{\alpha_s}{2\pi} \int_x^1 \frac{dz}{z} \left(\begin{matrix} \Delta P_{qq}(z) & \Delta P_{qg}(z) \\ \Delta P_{gq}(z) & \Delta P_{gg}(z) \end{matrix}\right) \left(\begin{matrix} \Delta q(x/z) \\ \Delta G(x/z) \end{matrix}\right)\,, \label{d2}
\end{eqnarray}
\begin{eqnarray}
&&\Delta P_{qq}(z)=C_F \left(\frac{1+z^2}{(1-z)_+} + \frac{3}{2}\delta(1-z) \right)\,, \\
&&\Delta P_{qg}(z)= \frac{n_f}{2}(2z-1)\,, \\
&&\Delta P_{gq}(z)= C_F (2-z)\,, \\
&&\Delta P_{gg}(z)=6 \left(\frac{1}{(1-z)_+}-2z+1\right) + \frac{\beta_0}{2}\delta(z-1)\,.
\end{eqnarray}
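As a quick numerical sanity check (ours, not part of the original text), the first moment of $\Delta P_{qq}$ vanishes, $\int_0^1 dz\,\Delta P_{qq}(z)=0$, which reflects the one-loop scale independence of the quark helicity contribution. The plus distribution is handled by the standard subtraction $\int_0^1 dz\, h(z)/(1-z)_+=\int_0^1 dz\,[h(z)-h(1)]/(1-z)$:

```python
import numpy as np

CF = 4.0 / 3.0

def first_moment_delta_Pqq(n=200000):
    """Numerically evaluate \\int_0^1 dz Delta P_qq(z) with
    Delta P_qq(z) = C_F [ (1+z^2)/(1-z)_+ + (3/2) delta(1-z) ].
    The plus distribution is regulated by subtracting the z -> 1 value
    of the numerator h(z) = 1 + z^2."""
    z = (np.arange(n) + 0.5) / n                        # midpoint nodes on (0, 1)
    plus = np.mean(((1 + z**2) - 2.0) / (1 - z))        # \int [h(z) - h(1)]/(1-z) dz
    return CF * (plus + 1.5)                            # add the delta-function term
```

The plus-prescription integral equals $-3/2$ analytically, so the delta-function endpoint term cancels it exactly.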
\section{OAM and the Wigner distribution}
The original definition (\ref{lg}) is technical and does not immediately convey the physical meaning of $L_g(x)$ as an orbital angular momentum. Fortunately, there exists an equivalent and very intuitive definition of $L_g(x)$ in terms of the Wigner distribution. The gluon Wigner distribution is defined as
\begin{eqnarray}
xW(x,q_\perp,b_\perp,S)&=&2\int\frac{dz^-d^2z_\perp}{{(2\pi)}^3P^+}\int\frac{d^2 \Delta_\perp}{{(2\pi)}^2}e^{-ixP^+z^-+iq_\perp\cdot z_\perp} \nonumber \\
&&\times\left\langle P+\tfrac{\Delta_\perp}{2},S\left|{\rm Tr}F^{+i}\left(b_\perp+\tfrac{z}{2}\right)F^{+i}\left(b_\perp-\tfrac{z}{2}\right)\right|P-\tfrac{\Delta_\perp}{2},S\right\rangle\,, \label{wi}
\end{eqnarray}
where the trace is in the fundamental representation.
It is convenient to also consider the Fourier transform of the Wigner distribution with respect to $b_\perp$, namely, the generalized transverse momentum dependent distribution (GTMD) \cite{Meissner:2009ww,Hatta:2011ku,Lorce:2013pza}
\begin{eqnarray}
xW(x,q_\perp,\Delta_\perp,S)&=&\int d^2b_\perp\, xW(x,q_\perp,b_\perp,S)e^{i\Delta_\perp\cdot b_\perp} \nonumber \\
&=&4\int\frac{d^3x d^3y}{(2\pi)^3}e^{-ixP^+(x^--y^-)+iq_\perp\cdot(x_\perp-y_\perp)+i\frac{\Delta_\perp}{2}\cdot(x_\perp+y_\perp)} \langle {\rm Tr} F^{+i}(x)F^{+i}(y)\rangle\,, \label{gt}
\end{eqnarray}
where $\langle\cdots\rangle\equiv\frac{\langle P+\frac{\Delta_\perp}{2},S|\cdots|P-\frac{\Delta_\perp}{2},S \rangle}{\langle P,S|P,S \rangle}$.
In (\ref{wi}) and (\ref{gt}), we have to specify the configuration of Wilson lines to make the nonlocal operator $F(x)F(y)$ gauge invariant. There are two interesting choices for this \cite{Bomhof:2007xt,Dominguez:2011wm}.
One is the Weizs\"acker-Williams (WW) type
\begin{align}
\ave{\text{Tr} F^{+i}(x)F^{+i}(y)}\to\ave{\text{Tr} F^{+i}(x)U_\pm(x,y)F^{+i}(y)U_\pm(y,x)} \,, \label{eq:WW}
\end{align}
and the other is the dipole type
\begin{align}
\ave{\text{Tr} F^{+i}(x)F^{+i}(y)}\to\ave{\text{Tr} F^{+i}(x)U_-(x,y)F^{+i}(y)U_+(y,x)} \,, \label{eq:dip}
\end{align}
where $U_\pm(x,y)\equiv U_{x^-,\pm \infty}(x_\perp)U_{x_\perp,y_\perp}(\pm \infty)U_{\pm \infty,y^-}(y_\perp)$ is a staple-shaped Wilson line in the fundamental representation.
We denote the corresponding distributions as $W_\pm$ and $W_\text{dip}$, respectively.
The Wigner distribution describes the phase-space distribution of gluons with transverse momentum $q_\perp$ and impact parameter $b_\perp$. Their cross product $b_\perp \times q_\perp$ classically represents the orbital angular momentum. It is thus natural to define $L_g$ as \cite{Hatta:2011ku}
\begin{eqnarray}
L_{g}&\equiv& \int_{-1}^1 dx \int d^2b_\perp d^2q_\perp\ \epsilon_{ij} b^i_\perp q^j_\perp W_\pm (x,q_\perp,b_\perp) \nonumber \\
&=& -i\int_{-1}^1 dx \int d^2q_\perp\ \epsilon^{ij}q_\perp^j\lim_{\Delta_\perp\to 0}\frac{\partial}{\partial\Delta^i_\perp}W_\pm(x,q_\perp,\Delta_\perp) \,, \label{oamdef}
\end{eqnarray}
where our default choice is the WW-type Wigner distribution because it is consistent with a partonic interpretation.
One can check that (\ref{oamdef}) agrees with (\ref{def}), with the $\pm$ sign matching that in (\ref{aphys}).
$W$ has the following spin-dependent structure
\begin{align}
W(x,q_\perp,\Delta_\perp,S)=i\frac{S^+}{P^+}\epsilon^{ij} \Delta^i_\perp q_\perp^j \bigl( f(x,|q_\perp|)+i\Delta_\perp \cdot q_\perp h(x,|q_\perp|) \bigr) +\cdots\,. \label{w0}
\end{align}
Substituting this into (\ref{oamdef}), one finds
\begin{align}
L_g=\lambda \int_{-1}^1 dx \int d^2q_\perp\ q^2_\perp f(x,|q_\perp|)\,, \label{mom}
\end{align}
where
$\lambda=\frac{S^+}{P^+}=\pm 1$ is the helicity of the proton.
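In detail (a step we spell out for convenience): differentiating (\ref{w0}) and letting $\Delta_\perp\to 0$ kills the $h$ term, which is quadratic in $\Delta_\perp$, so that
\begin{align*}
\lim_{\Delta_\perp\to 0}\frac{\partial}{\partial\Delta^i_\perp}W(x,q_\perp,\Delta_\perp,S)=i\lambda\,\epsilon^{ik}q_\perp^k f(x,|q_\perp|)\,.
\end{align*}
Inserting this into (\ref{oamdef}) and using $\epsilon^{ij}\epsilon^{ik}=\delta^{jk}$ then yields (\ref{mom}).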
The result (\ref{mom}), together with a similar relation for the quark OAM, is by now well established \cite{Lorce:2011kd,Lorce:2011ni,Hatta:2011ku}. We now discuss this relation at the level of the $x$-distribution.
Since (\ref{oamdef}) involves an integration over $x$, it is tempting to identify the integrand with $L_g(x)$
\begin{eqnarray}
L_g(x) &=& 2\int d^2b_\perp d^2q_\perp\ \epsilon_{ij} b^i_\perp q^j_\perp W_\pm(x,q_\perp,b_\perp) \nonumber \\
&=& -2i\int d^2q_\perp\ \epsilon^{ij}q_\perp^j\lim_{\Delta_\perp\to 0}\frac{\partial}{\partial\Delta^i_\perp}W_\pm (x,q_\perp,\Delta_\perp) \,. \label{lx}
\end{eqnarray}
(The factor of 2 is because $\int_{-1}^1dx = 2\int_0^1dx$.)
It turns out that this exactly agrees with $L_g(x)$ defined in (\ref{lg}). The proof was essentially given in \cite{Hatta:2012cs} for the quark OAM distribution $L_q(x)$. The generalization to the gluon case is straightforward, and this is outlined in Appendix \ref{aa}. Here we prove another nontrivial fact that $L_g(x)$'s defined through the WW and dipole Wigner distribution are identical for all values of $x$. Namely,
\begin{eqnarray}
\int \!d^2b_\perp d^2q_\perp \epsilon^{ij}b_\perp^i q_\perp^j W_\pm(x,b_\perp,q_\perp) = \int \!d^2b_\perp d^2q_\perp \epsilon^{ij}b_\perp^i q_\perp^j W_{\rm dip}(x,b_\perp,q_\perp) \,.
\label{wign}
\end{eqnarray}
The proof goes as follows. Consider the part that involves $q_\perp$, namely $\int d^2q_\perp\, q_\perp^j W$. For the WW-type Wigner distribution, this is evaluated as
\begin{eqnarray}
&&\int d^2q_\perp q_\perp^j\int\frac{d^2z_\perp}{(2\pi)^2}e^{iq_\perp\cdot z_\perp}\text{Tr} F^{+i}\left(\tfrac{z}{2}\right)U_\pm F^{+i}\left(-\tfrac{z}{2}\right)U_\pm^\dagger=i\lim_{z_\perp\to 0}\frac{\partial}{\partial z_\perp^j}\left(\text{Tr} F^{+i}\left(\tfrac{z}{2}\right)U_\pm F^{+i}\left(-\tfrac{z}{2}\right)U_\pm^\dagger\right) \notag \\
&&=\frac{1}{2}\text{Tr} \left[ F^{+i}\left(\tfrac{z^-}{2}\right)(i\overleftarrow{D}_jU-iUD_j)F^{+i}\left(-\tfrac{z^-}{2}\right)U^\dagger\right] \nonumber \\
&&\qquad +\frac{1}{2}\text{Tr}\left[\left[F^{+i},gA_{\pm\text{phys}}^j \right]\left(\tfrac{z^-}{2}\right)UF^{+i}\left(-\tfrac{z^-}{2}\right)U^\dagger\right] -\frac{1}{2}\text{Tr} \left[ F^{+i}\left(\tfrac{z^-}{2}\right)U\left[F^{+i},gA_{\pm\text{phys}}^j\right]\left(-\tfrac{z^-}{2}\right)U^\dagger\right] \nonumber \\
&& = \frac{1}{2}\text{Tr}\left[ F^{+i}\left(\tfrac{z^-}{2}\right)(i\overleftarrow{D}^{\rm pure}_jU-iUD^{\rm pure}_j)F^{+i}\left(-\tfrac{z^-}{2}\right)U^\dagger \right]
\,, \label{eq:WW moment}
\end{eqnarray}
where we only show the relevant operator structure and suppress the arguments of Wilson lines $U$ which should be obvious from gauge invariance.
The same type of calculation for the dipole Wigner distribution gives
\begin{eqnarray}
&&\int d^2q_\perp q_\perp^j\int\frac{d^2z_\perp}{(2\pi)^2}e^{iq_\perp\cdot z_\perp}\text{Tr} F^{+i}\left(\tfrac{z}{2}\right)U_- F^{+i}\left(-\tfrac{z}{2}\right)U_+^\dagger \nonumber \\
&&=\frac{1}{2}\text{Tr} \left[ F^{+i}\left(\tfrac{z^-}{2}\right)(i\overleftarrow{D}_jU-iUD_j)F^{+i}\left(-\tfrac{z^-}{2}\right)U^\dagger\right] \nonumber \\
&& \qquad +\frac{1}{2}\text{Tr}\left[g(F^{+i}A_{-\text{phys}}^j-A_{+\text{phys}}^jF^{+i})\left(\tfrac{z^-}{2}\right)UF^{+i}\left(-\tfrac{z^-}{2}\right)U^\dagger\right]
\nonumber \\
&& \qquad -\frac{1}{2}\text{Tr}\left[ F^{+i}\left(\tfrac{z^-}{2}\right)Ug(F^{+i}A_{+\text{phys}}^j-A_{-\text{phys}}^jF^{+i})\left(-\tfrac{z^-}{2}\right)U^\dagger\right]. \label{eq:dip moment}
\end{eqnarray}
Taking the plus sign in (\ref{eq:WW moment}) (the minus sign leads to the same conclusion) and subtracting (\ref{eq:dip moment}), we obtain
\begin{align}
&i\lim_{z_\perp\to 0}\frac{\partial}{\partial z_\perp^j}\left(\text{Tr} F^{+i}\left(\tfrac{z}{2}\right)U_+ F^{+i}\left(-\tfrac{z}{2}\right)U_+^\dagger\right)
-i\lim_{z_\perp\to 0}\frac{\partial}{\partial z_\perp^j}\left(\text{Tr} F^{+i}\left(\tfrac{z}{2}\right)U_- F^{+i}\left(-\tfrac{z}{2}\right)U_+^\dagger\right) \notag \\
&=\frac{1}{2}\text{Tr} \left[F^{+i}\left(A_{+\text{phys}}-A_{-\text{phys}}\right)\left(\tfrac{z^-}{2}\right)UF^{+i}\left(-\tfrac{z^-}{2}\right)U^\dagger \right]\notag \\
&\qquad +\frac{1}{2}\text{Tr} \left[F^{+i}\left(\tfrac{z^-}{2}\right)U\left(A_{+\text{phys}}-A_{-\text{phys}}\right)F^{+i}\left(-\tfrac{z^-}{2}\right)U^\dagger\right] \notag \\
&=-\int dy^-\text{Tr} \left[F^{+i}\left(\tfrac{z^-}{2}\right)U_{\frac{z^-}{2},y^-}F^{+i}(y^-)U_{y^-,-\frac{z^-}{2}}F^{+i}\left(-\tfrac{z^-}{2}\right)U_{-\frac{z^-}{2},\frac{z^-}{2}}\right] \label{eq:diff WW +}\,.
\end{align}
The question is whether the nonforward matrix element $\langle ...\rangle$ of the operator (\ref{eq:diff WW +}) contains the structure $i\frac{S^+}{P^+}\epsilon^{ij}\Delta_\perp^i \delta L(x)$. If so, the function $\delta L$ would contribute to the difference $L^{WW}_g(x)-L^{\rm dip}_g(x)$. However, this is impossible as one can easily see by applying the $PT$ transformation to the matrix element. Under $PT$, $F^{\mu\nu}\to -F^{\mu\nu}$, and one obtains an identity
\begin{align}
&i\frac{S^+}{P^+}\epsilon^{ij}\Delta_\perp^i \delta L(x)=-i\frac{-S^+}{P^+}\epsilon^{ij}(-\Delta_\perp^i)\delta L(x)\,,
\end{align}
which immediately gives $\delta L(x)=0$.
The above proof is crucial for the measurability of $L_g(x)$. While $L_g(x)$ is naturally defined by the WW-type Wigner distribution, the dipole Wigner distribution has a better chance of being measured in experiments \cite{Hatta:2016dxp}. Below we only consider $W_{\rm dip}$ and omit the subscript.
\section{Small-$x$ regime}
Our discussion so far has been general and valid for any value of $x$. From now on, we focus on the small-$x$ regime. In this section we derive a novel operator representation of $L_g(x)$ and point out its unexpected relation to the polarized gluon distribution $\Delta G(x)$.
\subsection{Leading order}
In order to study the properties of the (dipole) Wigner distribution at small-$x$, as a first step we approximate
$e^{ -ixP^+(x^- - y^-)}\approx 1$ in (\ref{gt}). We shall refer to this as the eikonal approximation. We then use the identity
\begin{eqnarray}
\partial_i U(x_\perp)
&=& -ig\int_{-\infty}^\infty dx^- U_{\infty, x^-} F^{+i}(x) U_{x^-,-\infty} -igA^i(\infty,x_\perp)U(x_\perp) +igU(x_\perp)A^i(-\infty,x_\perp)\,,
\label{las}
\end{eqnarray}
where $U(x_\perp)\equiv U_{\infty,-\infty}(x_\perp)$, and integrate by parts. This leads us to
\cite{Hatta:2016dxp}
\begin{eqnarray}
W (x,\Delta_\perp,q_\perp,S) \approx W_0 (x,\Delta_\perp,q_\perp)
= \frac{4N_c}{ x g^2 (2\pi)^3} \left(q_\perp^2 - \tfrac{\Delta_\perp^2}{4}\right) F(x,\Delta_\perp, q_\perp)\,, \label{der}
\end{eqnarray}
where $F$ is the Fourier transform of the so-called dipole S-matrix
\begin{eqnarray}
F(x,\Delta_\perp, q_\perp) \equiv \int d^2x_\perp d^2y_\perp e^{iq_\perp \cdot (x_\perp-y_\perp)+ i(x_\perp + y_\perp)\cdot \frac{\Delta_\perp}{2}}
\left\langle \frac{1}{N_c} {\rm Tr} \left[U(x_\perp) U^\dagger(y_\perp)\right] \right\rangle\,. \label{dipF}
\end{eqnarray}
The last two terms in (\ref{las}) have been canceled against the terms which come from the derivative of the transverse gauge links connecting $x_\perp$ and $y_\perp$ at $x^-=\pm \infty$ (not shown in (\ref{dipF}) for simplicity). The $x$-dependence of $F$ arises from the quantum evolution of the dipole operator ${\rm Tr}U(x_\perp)U^\dagger(y_\perp)$.
To linear order in $\Delta_\perp$, we can parameterize $F$ as
\begin{eqnarray}
F(x,\Delta_\perp,q_\perp)
= P(x,\Delta_\perp,q_\perp) + iq_\perp \cdot \Delta_\perp O(x,|q_\perp| )\,. \label{p}
\end{eqnarray}
The imaginary part $O$ comes from the so-called odderon operator \cite{Hatta:2005as,Zhou:2016rnt}. It is important to notice that $F$ cannot depend on the longitudinal spin $S^+$, and therefore, $W_0$ cannot have the structure (\ref{w0}). This follows from $PT$ symmetry which dictates that
\begin{eqnarray}
\left\langle P+\tfrac{\Delta}{2},S\left| {\rm Tr}[U(x_\perp)U^\dagger(y_\perp)]\right|P-\tfrac{\Delta}{2},S\right \rangle =
\left\langle P-\tfrac{\Delta}{2},-S\left| {\rm Tr}[U(-x_\perp)U^\dagger(-y_\perp)]\right|P+\tfrac{\Delta}{2},-S\right \rangle \,, \nonumber
\end{eqnarray}
so that $W_0(x,q_\perp,\Delta_\perp,S)=W_0(x,-q_\perp,-\Delta_\perp,-S)$.
Therefore, it is impossible to access any information about spin and OAM in the eikonal approximation. This is actually expected on physical grounds. At high energy, spin effects are suppressed by a factor of $x$ (or inverse energy) compared to the `Pomeron' contribution as represented by the first term $P$ in (\ref{p}).\footnote{The situation is different when the spin is transversely polarized. In this case, $F$ can have the structure $\epsilon_{ij}S_{\perp}^i q_\perp^j$, and the corresponding amplitude has been dubbed the `spin-dependent odderon' \cite{Zhou:2013gsa}. While this is subleading compared to the leading Pomeron term $P$, it is suppressed only by a fractional power $x^\alpha$ with $\alpha \sim 0.3$. }
\subsection{First subleading correction}
In order to be sensitive to the spin and OAM effects, we have to go beyond the eikonal approximation. By taking into account the second term in the expansion $e^{-ixP^+(x^- - y^-)} = 1-ixP^+(x^- - y^-)+\cdots$ and writing $W=W_0+\delta W$ accordingly, we find
\begin{eqnarray}
\delta W (x,\Delta_\perp,q_\perp,S) &=&-\frac{4P^+}{ g (2\pi)^3 }\int d^2x_\perp d^2y_\perp e^{iq_\perp \cdot (x_\perp-y_\perp)+ i(x_\perp + y_\perp)\cdot \frac{\Delta_\perp}{2} } \nonumber \\
&& \times \Biggl\{
\int_{-T}^T dx^- ( x^- +T) \frac{\partial}{\partial y_\perp^i} \left\langle {\rm Tr} \left[U_{T,x}F^{+i}(x)U_{x,-T} U^\dagger(y_\perp) \right] \right\rangle \nonumber \\
&& + \int_{-T}^T dy^- (y^- +T) \frac{\partial}{\partial x_\perp^i} \left\langle {\rm Tr} \left[U(x_\perp) U_{-T,y}F^{+i}(y)U_{y,T} \right] \right\rangle \Biggr\} \nonumber \\
&=&\frac{4P^+}{g^2(2\pi)^3}\int d^2x_\perp d^2y_\perp e^{i(q_\perp +\frac{\Delta_\perp}{2})\cdot x_\perp + i(-q_\perp+ \frac{\Delta_\perp}{2})\cdot y_\perp } \nonumber \\
&& \times \Biggl\{
\int_{-T}^T dz^- \left(q_\perp^i-\tfrac{\Delta^i_\perp}{2}\right)\left\langle {\rm Tr} \left[ U_{Tz^-} (x_\perp)\overleftarrow{D}_i U_{z^--T} (x_\perp) U^\dagger(y_\perp) \right] \right\rangle \nonumber \\
&& + \int_{-T}^T dz^- \left(q_\perp^i +\tfrac{\Delta^i_\perp}{2} \right) \left\langle {\rm Tr} \left[U(x_\perp) U_{-Tz^-}(y_\perp) D_i U_{z^-T} (y_\perp) \right] \right\rangle \Biggr\} \,. \label{yo}
\end{eqnarray}
The first equality is obtained by splitting $x^- - y^- = x^- + T -(y^- +T)$ where $T$ is eventually sent to infinity. In the second equality
we write $x^- + T = \int_{-T}^{x^-} dz^-$ and switch the order of integrations between $\int dx^-$ and $\int dz^-$.
In contrast to $W_0$, $\delta W$ can have the structure (\ref{w0}): from $PT$ symmetry, one can show that
$\delta W(x,q_\perp,\Delta_\perp,S)=-\delta W(x,-q_\perp,-\Delta_\perp,-S)$.\footnote{More generally, in the Taylor expansion of the phase factor $e^{-ixP^+(x^--y^-)}$, the odd terms in $x$ can contribute to the OAM.}
The most general parameterization of the near-forward matrix element in (\ref{yo}) is, to linear order in $\Delta_\perp$ and $S^+$,
\begin{align}
&\frac{4P^+}{g^2(2\pi)^3}\int d^2x_\perp d^2y_\perp\ e^{i(q_\perp+\frac{\Delta_\perp}{2})\cdot x_\perp+i(-q_\perp+\frac{\Delta_\perp}{2})\cdot y_\perp} \int dz^-\ \ave{\text{Tr}[U_{\infty,z^-}(x_\perp)\overleftarrow{D}_iU_{z^-,-\infty}(x_\perp)U^\dagger(y_\perp)]} \nonumber \\
&=-i\frac{S^+}{2P^+}\epsilon_{ij}\Biggl\{\Biggl(q_\perp^j+\tfrac{\Delta_\perp^j}{2}\Biggr)f(x,|q_\perp|)+\Biggl(q_\perp^j-\tfrac{\Delta_\perp^j}{2}\Biggr)g(x,|q_\perp|)+q_\perp^j\Delta_\perp\cdot q_\perp A(x,|q_\perp|)\Biggr\} \nonumber \\
&\quad -\frac{S^+}{2P^+}\epsilon_{ij}\Biggl\{\Biggl(q_\perp^j+\tfrac{\Delta_\perp^j}{2}\Biggr)B(x,|q_\perp|)+\Biggl(q_\perp^j-\tfrac{\Delta_\perp^j}{2}\Biggr)C(x,|q_\perp|)-2q_\perp^j\Delta_\perp\cdot q_\perp h(x,|q_\perp|)\Biggr\} +\cdots \,. \label{eq:1 para'}
\end{align}
\begin{align}
&\frac{4P^+}{g^2(2\pi)^3}\int d^2x_\perp d^2y_\perp\ e^{i(q_\perp+\frac{\Delta_\perp}{2})\cdot x_\perp+i(-q_\perp+\frac{\Delta_\perp}{2})\cdot y_\perp} \int dz^-\ave{\text{Tr}[U(x_\perp)U_{-\infty,z^-}(y_\perp)D_iU_{z^-,\infty}(y_\perp)]} \nonumber \\
&=i\frac{S^+}{2P^+}\epsilon_{ij}\Biggl\{\Biggl(q_\perp^j-\tfrac{\Delta_\perp^j}{2}\Biggr)f(x,|q_\perp|)+\Biggl(q_\perp^j+\tfrac{\Delta_\perp^j}{2}\Biggr)g(x,|q_\perp|)-q_\perp^j\Delta_\perp\cdot q_\perp A(x,|q_\perp|) \Biggr\}\nonumber \\
&\quad -\frac{S^+}{2P^+}\epsilon_{ij}\Biggl\{\Biggl(q_\perp^j-\tfrac{\Delta_\perp^j}{2}\Biggr)B(x,|q_\perp|)+\Biggl(q_\perp^j+\tfrac{\Delta_\perp^j}{2}\Biggr)C(x,|q_\perp|)+2q_\perp^j\Delta_\perp\cdot q_\perp h(x,|q_\perp|) \Biggr\}+\cdots\,. \label{eq:1 para cc}
\end{align}
Eq.~(\ref{eq:1 para cc}) is obtained from (\ref{eq:1 para'}) by applying the $PT$ transformation.
We recognize the functions $f$ and $h$ that appear in (\ref{w0}); the former is related to the OAM as in (\ref{mom}). The other real-valued functions $g, A, B, C$ do not contribute to the Wigner distribution.
Integrating both sides over $q_\perp$, we obtain the following sum rules
\begin{eqnarray}
\int d^2q_\perp \left( f- g+q_\perp^2 A\right)=0\,, \qquad
\int d^2q_\perp \left( B-C-2q_\perp^2 h\right)=0 \,. \label{fo}
\end{eqnarray}
Eq.~(\ref{eq:1 para'}) uncovers a novel representation of the OAM distribution at small-$x$ in terms of an unusual Wilson line operator in which the covariant derivative $D_i$ is inserted at an intermediate time $z^-$. Such operators do not usually appear in the context of high energy evolution. In the next section we shall see that this structure is related to the next-to-eikonal approximation. Here we point out that the same operator is relevant to the polarized gluon distribution $\Delta G(x)$. This elucidates an unexpected relation between $\Delta G(x)$ and $L_g(x)$.
Let us define the `unintegrated' (transverse momentum dependent) polarized gluon distribution $\Delta G(x,q_\perp)$ as
\begin{align}
&ix\Delta G(x,q_\perp)\frac{S^+}{P^+}\equiv 2\int \frac{d^2z_\perp dz^-}{(2\pi)^3P^+}e^{-ixP^+z^-+iq_\perp\cdot z_\perp}\left\langle PS\left|\epsilon_{ij}\text{Tr} F^{+i}\left(\tfrac{z}{2}\right)U_- F^{+j}\left(-\tfrac{z}{2}\right)U_+\right|PS\right\rangle \notag \\
& \qquad =4\int\frac{d^3x d^3y}{(2\pi)^3} e^{-ixP^+(x^--y^-) +iq_\perp \cdot (x_\perp-y_\perp)}\frac{\langle PS| \epsilon_{ij} {\rm Tr} \left[F^{+i}(x)U_- F^{+j}(y)U_+\right]|PS\rangle }{\langle PS|PS\rangle}\,, \label{uni}
\end{align}
such that $\int d^2q_\perp \Delta G(x,q_\perp) = \Delta G(x)$ and $\int_0^1dx \Delta G(x)=\Delta G$.
Note that (\ref{uni}) is a forward matrix element ($\Delta_\perp=0$).
Using the same approximation as above, we obtain the following representation at small-$x$
\begin{align}
i\Delta G(x,q_\perp)\frac{S^+}{P^+}&= \frac{4P^+}{g^2(2\pi)^3}\int d^2x_\perp d^2y_\perp e^{i q_\perp \cdot( x_\perp - y_\perp) } \notag \\
& \times \epsilon_{ij} \Biggl\{ q_\perp^j
\int_{-\infty}^\infty dz^- \left\langle {\rm Tr} \left[ U_{\infty z^-} (x_\perp)\overleftarrow{D}_i U_{z^--\infty} (x_\perp) U^\dagger(y_\perp) \right] \right\rangle \notag \\
& \qquad \qquad + q_\perp ^i \int_{-\infty}^\infty dz^- \left\langle {\rm Tr} \left[U(x_\perp) U_{-\infty z^-}(y_\perp) D_j U_{z^-\infty} (y_\perp) \right] \right\rangle \Biggr\} \,,
\end{align}
or equivalently,
\begin{eqnarray}
&& \Delta G(x,q_\perp)\frac{S^+}{P^+ } \\
&&= \frac{8P^+}{g^2(2\pi)^3} \epsilon_{ij} q_\perp^j{\mathfrak Im} \left[\int d^2x_\perp d^2y_\perp e^{i q_\perp \cdot( x_\perp - y_\perp) }
\int_{-\infty}^\infty dz^- \left\langle {\rm Tr} \left[ U_{\infty z^-} (x_\perp)\overleftarrow{D}_i U_{z^--\infty} (x_\perp) U^\dagger(y_\perp) \right] \right\rangle \right] \,. \nonumber
\end{eqnarray}
Substituting (\ref{eq:1 para'}), we find
\begin{align}
\Delta G(x)&= -\int d^2q_\perp q_\perp^2 (f(x,|q_\perp|) + g(x,|q_\perp|)) \nonumber \\
&=-\frac{1}{2}L_g(x) -\int d^2q_\perp q^2_\perp g(x,|q_\perp|)\,. \label{eq:G=L}
\end{align}
This is a rather surprising result. From (\ref{ma}), one can argue that if $\Delta G(x)$ shows a power-law behavior at small-$x$, $\Delta G(x) \sim x^{-\alpha}$, the OAM distribution grows with the same exponent $L_g(x)\sim x^{-\alpha}$. Eq.~(\ref{eq:G=L}) imposes a strong constraint on the respective prefactors, and the relation is preserved by the small-$x$ evolution because both $L_g(x)$ and $\Delta G(x)$ are governed by the same operator. Moreover, in Appendix \ref{ab} we present three different arguments which indicate that $|f|\gg |g|$. If this is true, a very intriguing relation emerges
\begin{eqnarray}
L_g(x)\approx -2\Delta G(x) \,. \label{sur}
\end{eqnarray}
As mentioned in the introduction, reducing the huge uncertainty in $\Delta G$ from the small-$x$ region $x<0.05$ \cite{deFlorian:2014yva} is a pressing issue in QCD spin physics. Eq.~(\ref{sur}) suggests that, if the integral $\int_0^{0.05} dx \Delta G(x)$ turns out to be sizable in the future, one should expect an even larger contribution from the gluon OAM in the same $x$-region, which reverses the sign of the net gluon angular momentum
\begin{eqnarray}
\int^{0.05}_0 dx \Delta G(x) + \int^{0.05}_0 dx L_g(x) \approx -\int_0^{0.05} dx \Delta G(x)\,. \label{e50}
\end{eqnarray}
This has profound implications for the spin sum rule (\ref{1}). In particular, it challenges the idea that $\Delta \Sigma$ and $\Delta G$ alone can saturate the sum rule. There must be OAM contributions.
Eq.~(\ref{sur}) is reminiscent of a similar relation observed in the large-$Q^2$ asymptotic scaling behavior of the components in the spin decomposition formula Eq.~(\ref{1})
\cite{Ji:1995cu}. To one-loop order,
\begin{eqnarray}
\Delta\Sigma(t)&=&{\rm const}. \,,\\
L_q(t)&=&-\frac{1}{2}\Delta\Sigma +\frac{1}{2}\frac{3n_f}{16+3n_f} \ ,\\
\Delta G(t)&=& -\frac{4\Delta\Sigma}{\beta_0}+\frac{t}{t_0}\left(\Delta G_0+\frac{4\Delta\Sigma}{\beta_0}\right)\ ,\label{e53}\\
L_g(t)&=&-\Delta G(t)+\frac{1}{2}\frac{16}{16+3n_f} \ ,\label{e54}
\end{eqnarray}
where $t=\ln\left(Q^2/\Lambda_{QCD}^2\right)$
and we have neglected the subleading terms at large-$Q^2$.
$\Delta G_0$ represents the gluon helicity contribution
at some initial scale $t_0$. From these equations, we find that the large negative
gluon orbital angular momentum would cancel out the gluon helicity contribution
if the latter is large and positive. It is interesting to see how this behavior
imposes a constraint on the small-$x$ contribution to $\Delta G$ and $L_g$ when we apply
Eq.~(\ref{e50}) as the initial condition.
The scale evolution of $L_g(x)$ and $\Delta G(x)$ can be an important agenda for
the future electron-ion collider~\cite{Accardi:2012qut} where one of the primary goals is to
investigate the sum rule (\ref{1}).
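As a quick check of these one-loop relations, the four components always sum to the nucleon spin $1/2$ at asymptotically large $t$, irrespective of the values of $\Delta\Sigma$, $\Delta G_0$ and $n_f$. A minimal Python sketch (the numerical inputs below are illustrative placeholders, not fits):

```python
# Numerical check of the one-loop asymptotic spin decomposition quoted above:
# at large t the four components sum to 1/2 independently of the inputs.

def spin_components(t, t0, dSigma, dG0, nf):
    beta0 = 11.0 - 2.0 * nf / 3.0
    Lq = -0.5 * dSigma + 0.5 * 3 * nf / (16 + 3 * nf)
    dG = -4 * dSigma / beta0 + (t / t0) * (dG0 + 4 * dSigma / beta0)
    Lg = -dG + 0.5 * 16 / (16 + 3 * nf)
    return dSigma, Lq, dG, Lg

# illustrative values: Delta Sigma = 0.3, Delta G_0 = 0.4 at t0, n_f = 3
dSigma, Lq, dG, Lg = spin_components(t=50.0, t0=5.0, dSigma=0.3, dG0=0.4, nf=3)
total = 0.5 * dSigma + Lq + dG + Lg
print(round(total, 12))  # -> 0.5 (the nucleon spin)
```

Note that $\Delta G(t)$ and $L_g(t)$ individually grow linearly in $t$, while their sum stays finite; this is the one-loop analogue of the cancellation in (\ref{e50}).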
\section{Single spin asymmetry in diffractive dijet production}
In this section, we calculate the longitudinal single spin asymmetry in
forward dijet production in exclusive diffractive lepton-nucleon scattering.
As observed recently \cite{Hatta:2016dxp}, in this process one can probe the gluon
Wigner distribution at small-$x$ (see also \cite{Altinoluk:2015dpi}) and its characteristic
angular correlations. Here we show that the same process, with the proton being
longitudinally polarized, is directly sensitive to the function $f(x,q_\perp)$.
\subsection{Next-to-eikonal approximation}
Exclusive diffractive forward dijet production in $ep$ collisions has been extensively studied in the literature mostly in the BFKL framework \cite{Nikolaev:1994cd,Bartels:1996ne,Bartels:1996tc,Braun:2005rg,Boussarie:2016ogo}, and more recently in the color glass condensate framework \cite{Altinoluk:2015dpi,Hatta:2016dxp}.
We work in the so-called dipole frame where the left-moving virtual photon with virtuality $Q^2$
splits into a $q\bar{q}$ pair and scatters off the right-moving proton. The proton emerges elastically with momentum transfer $\Delta_\perp$. The $q\bar{q}$ pair is detected in the forward region (i.e., at large negative rapidity) as two jets with the total transverse
momentum $k_{1\perp}+k_{2\perp}=-\Delta_\perp$ and the relative momentum
$\frac{1}{2}(k_{2\perp}-k_{1\perp})=P_\perp$.
In the eikonal approximation and for the transversely polarized virtual photon, the
amplitude is proportional to \cite{Altinoluk:2015dpi,Hatta:2016dxp}
\begin{eqnarray}
&\propto& \int d^2x_\perp d^2 y_\perp e^{-ik_{1\perp}\cdot x_\perp - ik_{2\perp} \cdot y_\perp} \left\langle\frac{1}{N_c}{\rm Tr}[U(x_\perp)U^\dagger(y_\perp)] \right\rangle \frac{\varepsilon K_1(\varepsilon r_\perp)}{2\pi} \frac{r^i_\perp }{r_\perp} \nonumber \\
&=& i\int \frac{d^2q_\perp}{(2\pi)^2} \frac{P^i_\perp -q^i_\perp}{(P_\perp-q_\perp)^2+\varepsilon^2} F(\Delta_\perp,q_\perp),
\label{gene}
\end{eqnarray}
where $r_\perp = x_\perp-y_\perp$ and $\varepsilon^2=z(1-z)Q^2$. $z$ (or $1-z$) is the longitudinal momentum fraction of the virtual photon energy $q^-$ carried by the quark (or antiquark).
As we already pointed out, (\ref{gene}) cannot depend on spin. Our key observation is that the next-to-eikonal corrections to (\ref{gene}) include exactly the same matrix element as (\ref{eq:1 para'}) and are therefore sensitive to the gluon OAM function $f$. Going beyond the eikonal approximation, we generalize (\ref{gene}) as
\begin{eqnarray}
\int d^2x_\perp d^2x'_\perp d^2 y_\perp d^2y'_\perp e^{-ik_{1\perp}\cdot x_\perp - ik_{2\perp} \cdot y_\perp} \left\langle\frac{1}{N_c}{\rm Tr}[U(x_\perp,x'_\perp)U^\dagger(y_\perp,y'_\perp)] \right\rangle \frac{\varepsilon K_1(\varepsilon r'_\perp)}{2\pi} \frac{r'^i_\perp }{r'_\perp}\,, \label{go}
\end{eqnarray}
where we allow the quark and
antiquark to change their transverse coordinates during propagation. $U(x_\perp,x'_\perp)$ is essentially the Green function and can be determined as follows.
Consider the propagation of a quark with energy $k^-=zq^-$ in the background field $A^+$, $A_\perp^i$. The Green function satisfies the equation\footnote{For a quark, there is an extra term in the equation at ${\cal O}(1/k^-)$ which depends on the gamma matrices $\Slash D \Slash D = D^2 + \frac{g}{2}\sigma_{\mu\nu}F^{\mu\nu}$. We neglect this term because it gives a vanishing contribution to the physical cross section at ${\cal O}(1/k^-)$ since ${\rm Tr} \, \sigma_{\mu\nu}=0$. }
\begin{eqnarray}
\left[ i\frac{\partial}{\partial x^-} + \frac{1}{2k^-}D^2_{x_\perp} - gA^+(x^-,x_\perp) \right] G_{k^-}(x^-,x_\perp,x'^-,x'_\perp) =i\delta(x^- - x'^-)\delta^{(2)}(x_\perp-x'_\perp)\,.
\end{eqnarray}
To zeroth order in $1/k^-$, the solution is
\begin{eqnarray}
G^0 _{k^-}(x^-,x_\perp,x'^-,x'_\perp) = \theta(x^- - x'^-)\delta^{(2)}(x_\perp-x'_\perp) \exp\left(-ig\int^{x^-}_{x'^-}dz^- A^+(z^-,x_\perp)\right)\,.
\end{eqnarray}
This is the eikonal approximation.
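As an aside, the path-ordered structure of $G^0$ is easy to illustrate numerically. The Python sketch below (with a toy SU(2) background field chosen purely for illustration) builds $U_{z_2^-,z_1^-}={\rm P}\exp(-ig\int_{z_1^-}^{z_2^-}dz^- A^+)$ as a fine-grained ordered product and verifies unitarity and the composition rule $U_{a,c}=U_{a,b}U_{b,c}$:

```python
import numpy as np

# Toy SU(2) background field A^+(z^-); the Pauli matrices make the ordered
# exponential genuinely non-commuting (an illustration, not the physical field).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def A_plus(z):
    return np.cos(z) * sx + np.sin(2 * z) * sz

def expm_herm(M):
    """exp(-i M) for a Hermitian matrix M, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * w)) @ V.conj().T

def wilson_line(z1, z2, g=0.5, n=1000):
    """Path-ordered product approximating U_{z2,z1}, midpoint discretization."""
    dz = (z2 - z1) / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        z = z1 + (k + 0.5) * dz
        U = expm_herm(g * A_plus(z) * dz) @ U   # later z^- acts on the left
    return U

U_full = wilson_line(-3.0, 3.0, n=2000)
U_comp = wilson_line(0.0, 3.0, n=1000) @ wilson_line(-3.0, 0.0, n=1000)
print(np.allclose(U_full @ U_full.conj().T, np.eye(2)))  # unitarity -> True
print(np.allclose(U_full, U_comp))                       # composition -> True
```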
Writing $G=G^0 + \delta G$, we find the equation for $\delta G$
\begin{eqnarray}
\left[ i\frac{\partial}{\partial x^-} - gA^+(x^-,x_\perp) \right]\delta G + \frac{1}{2k^-}D^2_{x_\perp} G^0=0\,.
\end{eqnarray}
This can be easily solved as
\begin{eqnarray}
\delta G(x^-,x_\perp,x'^-,x'_\perp) =\frac{i}{2k^-} \theta(x^-- x'^-)\int_{x'^-}^{x^-} dz^- U_{x^-z^-}(x_\perp) D^2_{x_\perp} \delta^{(2)}(x_\perp-x'_\perp) U_{z^-x'^-}(x'_\perp)\,.
\end{eqnarray}
We thus obtain the desired propagator
\begin{eqnarray}
U(x_\perp,x'_\perp) &\equiv & G_{k^-}(\infty,x_\perp, -\infty, x'_\perp) \nonumber \\
&=& U(x_\perp) \delta^{(2)}(x_\perp-x'_\perp) +\frac{i}{2k^-}\int^\infty_{-\infty}
dz^- U_{\infty z^-}(x_\perp)D^2_{x_\perp}
\delta^{(2)}(x_\perp-x'_\perp) U_{z^--\infty}(x'_\perp)\,.
\label{propdef}
\end{eqnarray}
In (\ref{go}), we need the Fourier transform of $U(x_\perp,x'_\perp)$
\begin{eqnarray}
&&\int d^2x_\perp e^{-ik_\perp \cdot x_\perp } U(x_\perp,x'_\perp)\nonumber \\
&& =e^{-ik_\perp \cdot x'_\perp} \left(U(x'_\perp) + \frac{i}{2k^-} \int_{-\infty}^\infty dz^-
U_{\infty z^-}(x'_\perp) (\overleftarrow{D}^2_{x'_\perp} -k_\perp^2 -2ik^i_\perp \overleftarrow{D}_{x'^i_\perp}) U_{z^- -\infty}(x'_\perp) \right)\,. \label{al}
\end{eqnarray}
If we ignore $A_\perp$, (\ref{al}) agrees with the result of \cite{Altinoluk:2014oxa,Altinoluk:2015gia} to the order of interest, although the equivalence is not immediately obvious.\footnote{ Note that the $k_\perp^2$ term comes from the expansion of the on-shell phase factor
\begin{eqnarray}
e^{-ik^+\int dz^-}= \exp\left(-i\frac{k_\perp^2}{2k^-}\int dz^-\right) \approx 1-i\frac{k_\perp^2}{2k^-}\int dz^-\,. \nonumber
\end{eqnarray}
This term is proportional to the leading term and can be dropped since it does not give any spin-dependence.
}
Clearly, $A_\perp$ is important for the result to be gauge invariant (covariant). The last term in (\ref{al}), when substituted into (\ref{go}), gives the same operator as in (\ref{eq:1 para'}). In addition, (\ref{al}) contains the operator $U_{\infty,z^-}\overleftarrow{D}^2_{x_\perp}U_{z^-,-\infty}$ which we did not encounter in the previous section.
However, the matrix element of this operator does not require new functions. To see this, we write down the general parameterization to linear order in $\Delta_\perp$
\begin{align}
&\frac{4P^+}{g^2(2\pi)^3}\int d^2x_\perp d^2y_\perp e^{iq_\perp\cdot(x_\perp-y_\perp)+i\frac{\Delta_\perp}{2}\cdot(x_\perp+y_\perp)}\int dz^- \ave{\text{Tr}[U_{\infty,z^-}(x_\perp)\overleftarrow{D}^2_{x_\perp}U_{z^-,-\infty}(x_\perp)U^\dagger(y_\perp)]} \notag \\
&=\bigl( \kappa(x,|q_\perp|) + i\eta(x,|q_\perp|) \bigr) \frac{S^+}{P^+}\epsilon^{ij}q_\perp^i\Delta_\perp^j+\cdots\,,\label{cc}
\end{align}
\begin{align}
&\frac{4P^+}{g^2(2\pi)^3}\int d^2x_\perp d^2y_\perp e^{iq_\perp\cdot(x_\perp-y_\perp)+i\frac{\Delta_\perp}{2}\cdot(x_\perp+y_\perp)}\int dz^- \ave{\text{Tr}[U_{\infty,z^-}(x_\perp)\overrightarrow{D}^2_{x_\perp}U_{z^-,-\infty}(x_\perp)U^\dagger(y_\perp)]} \notag \\
&=-\bigl( \kappa(x,|q_\perp|) + i\eta(x,|q_\perp|) \bigr) \frac{S^+}{P^+}\epsilon^{ij}q_\perp^i\Delta_\perp^j+\cdots\,,\label{cc2}
\end{align}
where $\kappa$, $\eta$ are real. Eqs.~(\ref{cc}) and (\ref{cc2}) are related by $PT$ symmetry.
By integrating by parts in (\ref{cc}) twice, we can replace the operator $U_{\infty,z^-}\overleftarrow{D}^2_{x_\perp}U_{z^-,-\infty}$ with a linear combination of $U_{\infty,z^-}\overrightarrow{D}^2_{x_\perp}U_{z^-,-\infty}$ and the surface terms. The latter can depend on spin through the operator
\begin{eqnarray}
i \left(q^i_\perp + \frac{\Delta^i_\perp}{2}\right)
U_{\infty,z^-}\overleftarrow{D}_{x_\perp^i}U_{z^-,-\infty}\,,
\end{eqnarray}
as in (\ref{eq:1 para'}). We thus obtain an identity
\begin{eqnarray}
\kappa + i\eta = -( \kappa + i\eta ) + \frac{g}{2} -i\frac{C}{2}\,,
\end{eqnarray}
and therefore,
\begin{align}
\kappa(x,|q_\perp|)=\frac{1}{4}g(x,|q_\perp|)\,, \qquad
\eta(x,|q_\perp|)=-\frac{1}{4}C(x,|q_\perp|)\,.
\end{align}
\subsection{Calculation of the asymmetry}
We are now ready to compute the longitudinal single spin asymmetry.
\begin{eqnarray}
\frac{d\Delta \sigma }{dy_1d^2k_{1\perp} dy_2 d^2k_{2\perp} } \equiv
\frac{d\sigma^{\lambda=+1} }{dy_1d^2k_{1\perp} dy_2 d^2k_{2\perp} } -\frac{d\sigma^{\lambda=-1} }{dy_1d^2k_{1\perp} dy_2 d^2k_{2\perp} }\,,
\end{eqnarray}
where $y_1$, $y_2$ are the rapidities of the two jets.
Our strategy is the following. We first substitute (\ref{al}) into (\ref{go}) and use the parameterizations (\ref{eq:1 para'}) and (\ref{cc}) for the resulting matrix elements. We then square the amplitude and keep only the linear terms in $S^+/k^-$.
The leading eikonal contribution has both the real and imaginary parts from the Pomeron and odderon exchanges, respectively
\begin{eqnarray}
\int d^2q_\perp \frac{P_\perp ^i -q_\perp^i}{(P_\perp -q_\perp)^2+\varepsilon^2} \left(P(\Delta_\perp,q_\perp) + i\Delta_\perp \cdot q_\perp O(q_\perp) \right)\,. \label{dom}
\end{eqnarray}
The next-to-eikonal contribution of order $1/k^-$ also contains both real and imaginary parts as shown in (\ref{eq:1 para'}) and (\ref{cc}). When squaring the amplitude, we see that the terms linear in $S^+$ arise from the interference between the leading and next-to-eikonal contributions. It turns out that the odderon $O$ interferes with the imaginary terms in (\ref{eq:1 para'}), which in particular include the OAM function $f$, while the Pomeron $P$ interferes with the real terms in (\ref{eq:1 para'}), which we are not interested in. The problem is that, on general grounds, one expects the Pomeron amplitude $P$ to be numerically larger than the odderon amplitude $O$, and this can significantly reduce the sensitivity to the OAM function. We avoid this problem by focusing on the following two kinematic regions
\begin{eqnarray}
P_\perp \gg q_\perp, Q\,, \qquad Q\gg q_\perp, P_\perp\,.
\end{eqnarray}
($q_\perp$ here means the typical values of $q_\perp$ within the support of the functions $P$ and $O$.) In this limit, the Pomeron contribution in (\ref{dom}) drops out because
\begin{eqnarray}
\int d^2q_\perp P(\Delta_\perp,q_\perp) = 0, \qquad \int d^2q_\perp q_\perp^i P(\Delta_\perp,q_\perp)=0\,,
\end{eqnarray}
for $\Delta_\perp \neq 0$. The first integral vanishes because the $q_\perp$-integral sets the dipole size $r_\perp=x_\perp-y_\perp$ to be zero so that $U(x_\perp)U^\dagger(x_\perp)=1$. Thus the integral becomes proportional to the delta function $\delta^{(2)}(\Delta_\perp)$. The second relation follows from the symmetry $P(\Delta_\perp,q_\perp)=P(\Delta_\perp,-q_\perp)$.
On the other hand, the odderon contribution survives in this limit because, for example,
\begin{eqnarray}
\int d^2q_\perp q_\perp^i \Delta_\perp \cdot q_\perp O(q_\perp) = \frac{\Delta_\perp^i}{2}\int d^2q_\perp q_\perp^2 O(q_\perp)\,.
\label{od}
\end{eqnarray}
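The identity (\ref{od}) is just the angular average $\int d\phi_q\, q_\perp^i q_\perp^j=\pi q_\perp^2\delta^{ij}$ applied to an azimuthally symmetric $O$; a quick numerical check with a toy Gaussian profile (illustrative only, not a model of the odderon) confirms it:

```python
import numpy as np

# Check: int d^2q q^i (Delta.q) O(|q|) = (Delta^i / 2) int d^2q q^2 O(|q|),
# here with the toy profile O(q) = exp(-q^2).
nq, nphi = 2000, 512
dq, dphi = 8.0 / nq, 2 * np.pi / nphi
q = (np.arange(nq) + 0.5) * dq              # radial midpoints
phi = (np.arange(nphi) + 0.5) * dphi        # periodic angular midpoints
Q, PHI = np.meshgrid(q, phi, indexing="ij")
qx, qy = Q * np.cos(PHI), Q * np.sin(PHI)
O = np.exp(-Q**2)
Delta = np.array([0.7, -0.3])               # arbitrary momentum transfer
dot = Delta[0] * qx + Delta[1] * qy

def integrate(F):                           # d^2q = q dq dphi, midpoint rule
    return (F * Q).sum() * dq * dphi

lhs = np.array([integrate(qx * dot * O), integrate(qy * dot * O)])
rhs = 0.5 * Delta * integrate(Q**2 * O)
print(np.allclose(lhs, rhs))                # -> True
```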
We can thus approximate, when $P_\perp\gg q_\perp,Q$,
\begin{align}
\int d^2q_\perp \frac{P_\perp ^i -q_\perp^i}{(P_\perp -q_\perp)^2+\varepsilon^2} \left(P(\Delta_\perp,q_\perp) + i\Delta_\perp \cdot q_\perp O(q_\perp) \right) \approx i \left(-\frac{\Delta_\perp^i}{2P_\perp^2} + \frac{P_\perp^i P_\perp \cdot \Delta_\perp}{P_\perp^4}\right)\int d^2q_\perp q_\perp^2 O(q_\perp)\,. \label{4}
\end{align}
A similar result follows in the other limit $Q\gg q_\perp, P_\perp$. Eq.~(\ref{4}) is to be multiplied by the next-to-eikonal amplitude which reads
\begin{eqnarray}
&&
\int \frac{d^2q_\perp}{(2\pi)^2} \frac{ P_\perp^i-q_\perp^i}{(P_\perp-q_\perp)^2 + \varepsilon^2}\int d^2x'_\perp d^2 y'_\perp e^{i(q_\perp + \frac{\Delta_\perp}{2}) \cdot x'_\perp + i(-q_\perp + \frac{\Delta_\perp}{2}) \cdot y'_\perp} \nonumber \\
&& \qquad \times \int dz^- \Biggl\langle \frac{1}{k_1^-} {\rm Tr} U_{\infty z^-}(x'_\perp) \left( k_{1\perp}^j \overleftarrow{D}'_j +\frac{i}{2}\overleftarrow{D}'^2\right) U_{z^-,-\infty}(x'_\perp) U^\dagger(y'_\perp)
\nonumber \\
&& \qquad \qquad \qquad - \frac{1}{k^-_2} {\rm Tr} U(x'_\perp) U_{-\infty z^-}(y'_\perp) \left(k_{2\perp}^j D_j' + \frac{i}{2}\overrightarrow{D}'^2\right) U_{z^-\infty}(y'_\perp) \Biggr\rangle \nonumber \\
&&= \frac{i\lambda}{4}\frac{g^2(2\pi)^3}{4P^+} \int \frac{d^2q_\perp}{(2\pi)^2} \frac{ P_\perp^i-q_\perp^i}{(P_\perp-q_\perp)^2 + \varepsilon^2}
\nonumber \\
&& \quad \times \Biggl[
\left(\frac{1}{k_1^-}+\frac{1}{k_2^-}\right) \epsilon_{jk} \left((f-g) P_\perp^j \Delta_\perp^k -(f+g)q_\perp^j \Delta_\perp^k + 2A\Delta_\perp \cdot q_\perp P_\perp^j q_\perp^k +2\kappa q_\perp^j \Delta_\perp^k \right) \nonumber \\
&& \qquad \qquad +\left( \frac{1}{k_1^-}-\frac{1}{k_2^-}\right) \epsilon_{jk} \left( 2(f+g) P_\perp^j q_\perp^k + A\Delta_\perp \cdot q_\perp \Delta_\perp^jq_\perp^k \right)
\Biggr]+\cdots\,,
\label{we}
\end{eqnarray}
where we kept only the imaginary part.
Here, $k_1^-= zq^-$, $k_2^- =(1-z) q^-$, $k_{1\perp}= -\frac{\Delta_\perp}{2}-P_\perp$, and $k_{2\perp}=-\frac{\Delta_\perp}{2}+P_\perp$. We then expand the integrand in powers of $1/P_\perp$ or $1/Q$ and perform the angular integral over $\phi_q$. Consider, for definiteness, the large-$P_\perp$ limit. At first sight, the dominant contribution comes from the ${\cal O}(1)$ terms proportional to $\frac{P_\perp^i P_\perp^j}{P_\perp^2} (f-g)$ and $\frac{P_\perp^i P_\perp^j}{P_\perp^2} A$. However, after the $\phi_q$-integral they cancel exactly due to the sum rule (\ref{fo}). Thus the leading terms are ${\cal O}(1/P_\perp)$ and actually come from the last line of (\ref{we}), which can be evaluated as
\begin{eqnarray}
\approx \frac{i\lambda}{4}\frac{g^2(2\pi)^3}{4P^+} \left( \frac{1}{k_1^-}-\frac{1}{k_2^-}\right)\frac{\epsilon_{ij}P_\perp^j}{P_\perp^2} \int \frac{ d^2q_\perp}{(2\pi)^2} q_\perp^2(f+g) =-\frac{i\lambda \alpha_s(1-2z)}{32P^+q^-}\Delta G(x)\frac{\epsilon_{ij}P_\perp^j}{P_\perp^2}\nonumber \\ \approx \frac{i\lambda \alpha_s(1-2z)}{64P^+q^-}L_g(x)\frac{\epsilon_{ij}P_\perp^j}{P_\perp^2}
\,, \label{fin}
\end{eqnarray}
where we used (\ref{eq:G=L}) and (\ref{sur}).
Multiplying (\ref{fin}) by (\ref{4}) and restoring the prefactor, we finally arrive at
\begin{eqnarray}
\frac{d\Delta \sigma }{dy_1d^2k_{1\perp} dy_2 d^2k_{2\perp} }
&\approx &
4\pi^4 \alpha_s N_c\alpha_{em} x\sum_q e_q^2 \delta(x_{\gamma^*}-1)(1-2z)(z^2+(1-z)^2) \nonumber \\
&& \qquad \times \frac{\Delta_\perp}{ P_\perp^3Q^2} \sin \phi_{P\Delta} \left\{ \begin{matrix} -2 \Delta G(x) \\ L_g(x) \end{matrix} \right\} \int d^2q_\perp q^2_\perp O (x, q_\perp)
\,, \label{rema}
\end{eqnarray}
where $\phi_{P\Delta}$ is the azimuthal angle between $P_\perp$ and $\Delta_\perp$ and $e_q$ is the electric charge of the massless quark in units of $e$. We also used $x=\frac{Q^2}{2P^+q^-}$. The momentum fraction $z$ is fixed by the dijet kinematics as
\begin{eqnarray}
z=\frac{|k_{1\perp}|e^{y_1}}{|k_{1\perp}| e^{y_1}+|k_{2\perp}| e^{y_2}}\, .
\end{eqnarray}
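For completeness, this reconstruction of $z$ from the measured jet kinematics is elementary; a minimal sketch with placeholder jet momenta and rapidities:

```python
from math import exp

def z_frac(k1t, y1, k2t, y2):
    """Momentum fraction z from the measured dijet transverse momenta and
    rapidities: z = k1t * e^{y1} / (k1t * e^{y1} + k2t * e^{y2})."""
    return k1t * exp(y1) / (k1t * exp(y1) + k2t * exp(y2))

# symmetric configuration: z = 1/2, where the (1-2z) sin(phi) asymmetry vanishes
print(z_frac(3.0, -2.0, 3.0, -2.0))      # -> 0.5
# exchanging the two jets maps z -> 1 - z
z12 = z_frac(2.0, -2.1, 2.5, -2.6)
z21 = z_frac(2.5, -2.6, 2.0, -2.1)
print(abs(z12 + z21 - 1.0) < 1e-12)      # -> True
```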
In the other limit $Q\gg q_\perp,P_\perp$, the cross section reads
\begin{eqnarray}
\frac{d\Delta \sigma }{dy_1d^2k_{1\perp} dy_2 d^2k_{2\perp} }
&\approx &
4 \pi^4 \alpha_s N_c\alpha_{em} x\sum_q e_q^2 \delta(x_{\gamma^*}-1)(1-2z) \frac{z^2+(1-z)^2}{z^2(1-z)^2}
\nonumber \\
&& \qquad \times \frac{P_\perp \Delta_\perp}{ Q^6} \sin \phi_{P\Delta} \left\{ \begin{matrix} -2 \Delta G(x) \\ L_g(x) \end{matrix} \right\} \int d^2q_\perp q^2_\perp O (x, q_\perp)
\,. \label{q}
\end{eqnarray}
The terms neglected in (\ref{rema}) and (\ref{q}) are suppressed by powers of $1/P_\perp$ and $1/Q$, respectively.
The above results have been obtained for the transversely polarized virtual photon. In fact, the whole contribution from the longitudinally polarized virtual photon is subleading. The only difference in the longitudinal photon case is the integral kernel
\begin{eqnarray}
\int d^2q_\perp \frac{P_\perp ^i -q_\perp^i}{(P_\perp -q_\perp)^2+\varepsilon^2} \to \int d^2q_\perp \frac{Q}{(P_\perp -q_\perp)^2+\varepsilon^2}\,.
\end{eqnarray}
Proceeding as before, we find that the contribution from the longitudinal photon to $\Delta \sigma$ is suppressed by factors $1/P_\perp^3$ and $1/Q^2$ compared to (\ref{rema}) and (\ref{q}), respectively.
We thus find that the asymmetry is directly proportional to $\Delta G(x)$. On the basis of (\ref{sur}), we may also say that it is proportional to $L_g(x)$. Previous direct measurements of $\Delta G(x)$ (or rather, the ratio $\langle \Delta G(x)/G(x)\rangle$ averaged over a limited interval of $x$) in DIS are based on longitudinal {\it double} spin asymmetry \cite{Adolph:2012ca,Adolph:2012vj}.
In general, a longitudinal single spin asymmetry vanishes in QCD due to parity. Here, however, we get a nonzero result because we measure the correlation between two particles (jets) in the final state. The experimental signal of this is the $\sin \phi_{P\Delta}$ angular dependence.
This is distinct from the leading angular dependence of the dijet cross section $\cos 2\phi_{P\Delta}$ \cite{Hatta:2016dxp} which has been canceled in the difference $d\Delta \sigma=d\sigma^{\lambda=1} -d\sigma^{\lambda=-1}$.
Notice that the asymmetry vanishes at the symmetric point $z=1/2$, and that the product $(1-2z)\sin \phi_{P\Delta}$ is invariant under the exchange of the two jets, $z\leftrightarrow 1-z$ and $k_{1\perp}\leftrightarrow k_{2\perp}$.
Subleading corrections to (\ref{rema}) include terms proportional to $\sin 2\phi_{P\Delta}$ without a prefactor $1-2z$.
These are consequences of parity.
Compared to $\sin \phi_{P\Delta}$, $\sin 2\phi_{P\Delta}$ has an extra zero at $\phi_{P\Delta}=\pi/2$, or equivalently, $|k_{1\perp}| = |k_{2\perp}|$. When $z=1/2$ and $|k_{1\perp}| = |k_{2\perp}|$, the two jets cannot be distinguished. Therefore, the $\lambda=\pm 1$ cross sections are exactly equal by parity and the asymmetry vanishes. This argument can be generalized to higher Fourier components.
The most general form of longitudinal single spin asymmetry consistent with parity is
\begin{align}
\frac{d\Delta\sigma}{dy_1d^2{k_1}_\perp dy_2d^2{k_2}_\perp}&=\sum_{n=0}^\infty c_n(z,Q,|P_\perp|,|\Delta_\perp|)\sin(2n+1)\phi_{P\Delta} \notag \\
&+\sum_{n=1}^\infty d_n(z,Q,|P_\perp|,|\Delta_\perp|)\sin 2n\phi_{P\Delta}\,,
\end{align}
where $c_n(z=\frac{1}{2},Q,|P_\perp|,|\Delta_\perp|)=0$.
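In practice, the coefficients $c_n$ and $d_n$ would be extracted from data by Fourier projection in $\phi_{P\Delta}$. A schematic Python illustration with synthetic input (toy coefficient values, mimicking a leading $\sin\phi_{P\Delta}$ signal plus a subleading $\sin 2\phi_{P\Delta}$ harmonic):

```python
import numpy as np

# Synthetic azimuthal dependence of the spin-dependent cross section; the
# coefficients c0 (n=0 odd harmonic) and d1 (n=1 even harmonic) are toy values.
c0, d1 = 0.8, 0.2
N = 4096
phi = (np.arange(N) + 0.5) * 2 * np.pi / N        # periodic midpoint grid
dsig = c0 * np.sin(phi) + d1 * np.sin(2 * phi)

def project(n):
    """(1/pi) * integral over [0, 2pi] of sin(n phi) * dsig: harmonic n."""
    return (np.sin(n * phi) * dsig).sum() * (2 * np.pi / N) / np.pi

# the n=1 and n=2 harmonics are recovered; the n=3 harmonic is absent
print(abs(project(1) - c0) < 1e-10,
      abs(project(2) - d1) < 1e-10,
      abs(project(3)) < 1e-10)                    # -> True True True
```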
It is very interesting that the measurement of (\ref{rema}) also establishes the odderon exchange in QCD, which has long evaded detection despite many attempts in the past \cite{Ewerz:2003xi}. The connection between the odderon and (transverse) single spin asymmetries has been previously discussed in the literature \cite{Hagler:2002nf,Kovchegov:2012ga,Zhou:2013gsa,Boer:2015pni}. However, the observable and the mechanism considered in this work are new. To estimate the cross section quantitatively, the integral $\int d^2q_\perp q_\perp^2 O(x,q_\perp)$ should be evaluated using models including the QCD evolution effects. Importantly, theory predicts \cite{Bartels:1999yt,Chachamis:2016ejm} that $O(x,q_\perp)$ has no or very weak dependence on $x$ in the linear BFKL regime. This will make the extraction of the $x$-dependence of $\Delta G(x)$ easier.
\section{Comments on the small-$x$ evolution equation}
The appearance of half-infinite Wilson line operators is quite unusual in view of the standard approaches to high energy QCD evolution which only deal with infinite Wilson lines $U_{\infty,-\infty}$. At the moment, little is known about the small-$x$ evolution of these operators. Still, we can formally write down the evolution equation by assuming that the soft gluon emissions only affect Wilson lines at the end points $x^-=\pm \infty$ \cite{Ferreiro:2001qy}.
Defining ${\cal O}_{x_\perp}\equiv \int dz^- U_{\infty,z^-}(x_\perp)\overleftarrow{D}U_{z^-,-\infty}(x_\perp)$ and
using the technique illustrated in \cite{Ferreiro:2001qy},
we obtain
\begin{eqnarray}
&&\frac{\partial }{\partial \ln 1/x} \textrm{Tr}\left[ {\cal O}_{x_\perp} U^\dagger_{y_\perp}\right] \nonumber \\
&&= \frac{\alpha_s N_c}{2\pi^2} \int d^2 z_\perp \frac{(x_\perp -y_\perp)^2}{(x_\perp -z_\perp)^2(z_\perp -y_\perp)^2}\left\{ \frac{1}{N_c
}\textrm{Tr}\left[{\cal O}_{x_\perp}U^\dagger_{z_\perp} \right] \textrm{Tr}\left[ U_{z_\perp}U^\dagger_{y_\perp}\right]- \textrm{Tr}\left[ {\cal O}_{x_\perp}U^\dagger_{y_\perp} \right]\right\} \nonumber \\
&&\quad +\frac{\alpha_s N_c }{2\pi^2} \int d^2 z_\perp \frac{(x_\perp -z_\perp)\cdot (y_\perp -z_\perp)}{(x_\perp -z_\perp)^2(z_\perp -y_\perp)^2}\left\{ \frac{1}{N_c}
\textrm{Tr}\left[ {\cal O}_{x_\perp}U^\dagger_{x_\perp}\right]\textrm{Tr}\left[ U_{x_\perp}U^\dagger_{y_\perp}\right]-\textrm{Tr}\left[ {\cal O}_{x_\perp}U^\dagger_{y_\perp}\right] \right\} \nonumber \\
&&\quad +\frac{\alpha_s N_c}{2\pi^2} \int d^2 z_\perp \left[\frac{(x_\perp -z_\perp)\cdot (y_\perp -z_\perp)}{(x_\perp -z_\perp)^2(z_\perp -y_\perp)^2}-\frac{1}{(x_\perp-z_\perp)^2}\right] \nonumber \\
&& \qquad \times \left\{ \frac{1}{N_c}
\textrm{Tr}\left[ {\cal O}_{x_\perp}U^\dagger_{z_\perp}\right]\textrm{Tr}\left[ U_{z_\perp}U^\dagger_{y_\perp}\right] -\frac{1}{N_c} \textrm{Tr}\left[ U_{x_\perp}U^\dagger_{z_\perp}\right]
\textrm{Tr}\left[ U_{z_\perp}U_{x_\perp}^\dagger {\cal O}_{x_\perp} U^\dagger_{y_\perp} \right] \right\} . \label{right}
\end{eqnarray}
One can show that
\begin{eqnarray}
{\cal O}_{x_\perp} U_{x_\perp}^\dagger = \int dz^- U_{\infty,z^-}\overleftarrow{D}U_{\infty,z^-}^\dagger
\end{eqnarray}
is an element of the Lie algebra of SU(3). Therefore, its trace, which appears on the second line of the right hand side of (\ref{right}), vanishes.
Note that there is no singularity at $z_\perp=y_\perp$ and $z_\perp=x_\perp$. The latter can be seen from the identity
\begin{eqnarray}
\frac{(x_\perp -y_\perp)^2}{(x_\perp -z_\perp)^2(z_\perp -y_\perp)^2}+2\frac{(x_\perp -z_\perp)\cdot (y_\perp -z_\perp)}{(x_\perp -z_\perp)^2(z_\perp -y_\perp)^2}-\frac{1}{(x_\perp-z_\perp)^2} = \frac{1}{(y_\perp-z_\perp)^2}\,.
\end{eqnarray}
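This cancellation can be spot-checked numerically for generic transverse vectors; a minimal sketch (an illustration, not part of the original derivation):

```python
import numpy as np

# Numerical spot-check of the identity quoted above, for random 2D
# transverse vectors x_perp, y_perp, z_perp.
rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 2))

def sq(v):
    """Squared Euclidean length of a 2D vector."""
    return float(np.dot(v, v))

lhs = (sq(x - y) / (sq(x - z) * sq(z - y))
       + 2.0 * np.dot(x - z, y - z) / (sq(x - z) * sq(z - y))
       - 1.0 / sq(x - z))
rhs = 1.0 / sq(y - z)   # lhs and rhs agree to machine precision
```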
The above equation is similar to the ones discussed in \cite{Kovchegov:2015pbl,fabio}. In particular, ${\cal O}_{x_\perp}$ and the next-to-eikonal operators in (\ref{propdef}) are possibly related to the operator $V^{pol}$ introduced, but left unspecified, in \cite{Kovchegov:2015pbl}. If this is the case, the small-$x$ behavior of $L_g(x)$ and $\Delta G(x)$ is related to that of the $g_1(x)$ structure function or the polarized quark distribution $\Delta q(x)$. This issue certainly deserves further study.
\section{Conclusions}
In this paper, we first presented a general analysis of the OAM gluon distribution $L_g(x)$ by making several clarifications regarding its definition and properties. We then focused on the small-$x$ regime and derived a novel operator representation for $L_g(x)$
in terms of half-infinite Wilson lines $U_{\pm\infty, z^-}$ and the covariant derivatives $D^i$.
It turns out that exactly the same operators describe the polarized gluon distribution $\Delta G(x)$. Based on this, we have argued that $L_g(x)$ and $\Delta G(x)$ are proportional to each other, with relative coefficient $-2$. Moreover, the small-$x$ evolution of these distributions can be related to that of the polarized quark distribution. These observations shed new light on the nucleon spin puzzle.
We have also pointed out that the same operator shows up in the next-to-eikonal approximation \cite{Altinoluk:2014oxa,Altinoluk:2015gia}. This allows us to relate the helicity and OAM distributions to observables. We have shown that the single longitudinal spin asymmetry in diffractive dijet production in
lepton-nucleon collisions is a sensitive probe of the gluon OAM in certain kinematic regimes.
The large-$x$ region, on the other hand, requires a different treatment and the first result has been recently reported in \cite{Ji:2016jgn} to which our work is complementary. Probing the quark OAM $L_q$ seems more difficult, but there are interesting recent developments \cite{Engelhardt:2017miy,Bhattacharya:2017bvs}. Together they open up ways to access the last missing pieces in the spin decomposition formula (\ref{1}), and we propose to explore this direction at the EIC.
\section*{ Acknowledgements}
We thank Guillaume Beuf, Edmond Iancu, and Cedric Lorc\'{e} for discussions.
This material is based upon work supported by the Laboratory Directed Research and Development
Program of Lawrence Berkeley National Laboratory, the U.S. Department of Energy,
Office of Science, Office of Nuclear Physics, under contract numbers
DE-AC02-05CH11231 and DE-FG02-93ER-40762. Y.~Z. is also supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the TMD Topical Collaboration.
\section{Introduction}
\label{intro}
The past two decades have shown tremendous progress in studies of
galaxy evolution out to high redshift. We now understand that
galaxies formed far more stars in the past than they do today (by an
order of magnitude or more by $z$$\simeq$1; e.g., Madau et al.\ 1996;
Lilly et al.\ 1996), and that the buildup of stellar mass in galaxies
through cosmic times took place through both brief, but very intense
starburst periods lasting tens to 100\,Myr and steady, more quiescent
rates of star formation lasting typically half a Gyr or longer (e.g.,
Daddi et al.\ 2010a, Genzel et al.\ 2010). In many cases, the most
infrared-luminous starbursts are driven by major mergers. The most
extreme starbursts observed at earlier cosmic times are typically by
an order of magnitude more infrared-luminous than the most extreme
examples observed today (e.g., Blain et al.\ 2002). In addition, the
luminosity threshold above which star-forming galaxies are dominantly
major mergers has likely been higher by an order of magnitude or more
in the past as well (e.g., Daddi et al.\ 2007). In recent years, it
has been found that this increased activity in galaxies at high
redshift can be understood in the context of a higher gas content of
galaxies on average, and that even fairly typical high-redshift
galaxies contain high fractions of molecular gas (e.g., Daddi et
al.\ 2008, 2010b; Tacconi et al.\ 2010). Thus, studies of molecular
gas in distant galaxies have become an important means towards
understanding cosmic star formation at early epochs.
Molecular gas is an important probe of the physical conditions in
distant galaxies. Great progress has been made since the initial
detections of CO at high redshift more than two decades ago (Brown \&
Vanden Bout 1991; Solomon et al.\ 1992). Significant samples of
different high-redshift galaxy populations have been detected in CO
emission, and detailed follow-up campaigns of subsamples at spatial
resolutions of up to 1\,kpc have provided detailed insight on the
spatial distribution, mass density, and dynamical structure of the
gas. Studies of the gas excitation have constrained the physical
properties of the interstellar medium, and the detection of other
molecules such as HCN, HCO$^+$, HNC, CN, and H$_2$O has enabled
studies of the chemical composition (see Solomon \& Vanden Bout 2005;
Omont 2007; Carilli \& Walter 2013 for comprehensive reviews of the
subject). The future of the field is bright, now that we have entered
a new era in such investigations with the advent of powerful new
facilities such as the Karl G.\ Jansky Very Large Array (JVLA) and the
Atacama Large Millimeter/submillimeter Array (ALMA). In the context of this
volume, this article highlights some key aspects related to the
determination of total molecular gas masses in high-$z$ galaxies, and
discusses what physical properties can be probed through studies of
the gas composition and excitation.
\section{Detections of Molecular Gas at High Redshift:\ a Brief Summary}
\label{census}
To date,\footnote{This census is based on data published in the
peer-reviewed literature by the end of 2012, and updates previous
compilations shown by Riechers (2011a, 2012). It does not contain
galaxies detected in [CII] emission but not CO (e.g., Stacey et
al.\ 2010; Venemans et al.\ 2012).} molecular gas (most commonly CO)
has been detected in $\sim$150 galaxies at $z$$>$1 (Fig.~\ref{f1}),
back to only 870 million years after the Big Bang (corresponding to
$z$=6.42; e.g., Walter et al.\ 2003, 2004; Bertoldi et al.\ 2003;
Riechers et al.\ 2009a). Except for few sources highly magnified by
strong gravitational lensing (e.g., Baker et al.\ 2004; Coppin et
al.\ 2007; Riechers et al.\ 2010a), these are massive galaxies hosting
large molecular gas reservoirs (typically 10$^{10}$\,M$_\odot$ or
more), commonly with high gas fractions of (at least) tens of per
cent. Approximately 20\% of the detected systems are massive, gas-rich
optically/near-infrared selected star forming galaxies (SFGs; e.g.,
Daddi et al.\ 2010b; Tacconi et al.\ 2010), and 30\% each are
far-infrared-luminous, star-bursting quasars (QSOs; e.g., Wang et
al.\ 2010; Riechers et al.\ 2006a; Riechers 2011b) and submillimeter
galaxies (SMGs; e.g., Greve et al.\ 2005; Tacconi et al.\ 2008;
Riechers et al.\ 2010b; Fig.~\ref{f1}). The rest of CO-detected
high-redshift galaxies are limited samples of galaxies selected
through a variety of techniques, such as Extremely Red Objects (EROs),
Star-Forming Radio-selected Galaxies (SFRGs), 24\,$\mu$m-selected
galaxies, gravitationally lensed Lyman-break galaxies (LBGs), and
radio galaxies (RGs; see Carilli \& Walter 2013 for a recent
summary). Besides CO, the high-density gas tracers HCN, HCO$^+$, HNC,
CN, and H$_2$O were detected towards a small subsample of these
galaxies (e.g., Solomon et al.\ 2003; Vanden Bout et al.\ 2004;
Riechers et al.\ 2006b, 2007a, 2010c, 2011a; Guelin et al.\ 2007;
Omont et al.\ 2011).
\begin{figure}[t]
\centering
\includegraphics[scale=0.65]{f1_dr}
\caption{Detections of CO emission in $z$$>$1 galaxies as of
2012. {\em Left:} Total number of detections (red) and detections
per year (yellow) since the initial detection in 1991/1992. {\em
Right:} Detections as a function of redshift, and color encoded by
galaxy type (figure updated from Riechers 2011a, 2012).
}
\label{f1}
\end{figure}
\section{Total Molecular Gas Masses}
\label{mgas}
\begin{figure}[t]
\centering
\includegraphics[scale=0.22]{f2_dr}
\caption{CO($J$=1$\to$0) spectroscopy of high-redshift quasar host
galaxies with the Green Bank Telescope, the Effelsberg 100\,m, and
the JVLA (Riechers et al.\ 2006a, 2011b). F10214+4724 is the galaxy
that was initially studied by Brown \& Vanden Bout (1991) and
Solomon et al.\ (1992) in the CO($J$=3$\to$2) line; due to previous
tuning restrictions, detection of CO($J$=1$\to$0) in this system
became possible only recently. The spectral coverage
of the CO($J$=1$\to$0) spectrum of RX\,J0911+0551 was limited by the
bandwidth of the old VLA correlator. All galaxies shown here except
BR\,1202--0725 are gravitationally lensed. The measured
CO($J$=1$\to$0) luminosity is used to determine the total molecular
gas masses of these objects.}
\label{f2}
\end{figure}
Molecular gas is the fuel for star formation, and thus, a key aspect
in studies of galaxy evolution. Measurements of total molecular gas
masses (and thereby, gas fractions) of galaxies provide the means to
understand in what phase of their evolution galaxies are, as they
place direct constraints on the amount of material that is left for
future star formation, and thus, on how long star formation can be
maintained at the current rate, and on how much stellar mass can be
assembled without a source of external gas supply.
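As a rough numerical illustration of this point (the numbers below are placeholders chosen for scale, not measurements from this article), a gas mass and star formation rate translate directly into a depletion time and a gas fraction:

```python
# Gas depletion time: how long star formation can be maintained at the
# current rate without external gas supply (placeholder numbers).
M_gas = 5e10   # molecular gas mass [M_sun], illustrative
SFR = 200.0    # star formation rate [M_sun per yr], illustrative

t_dep_Gyr = M_gas / SFR / 1e9          # depletion timescale in Gyr
gas_fraction = M_gas / (M_gas + 1e11)  # for an assumed 1e11 M_sun in stars

print(f"t_dep = {t_dep_Gyr:.2f} Gyr")   # -> 0.25 Gyr
print(f"f_gas = {gas_fraction:.2f}")    # -> 0.33
```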
In the nearby universe, molecular gas masses are most commonly
determined based on observations of CO($J$=1$\to$0) emission. Determining
the H$_2$ gas mass from the CO($J$=1$\to$0) luminosity is
not trivial, as the conversion factor ($\alpha_{\rm CO}$) between both
quantities depends on the physical conditions in the gas, as well as
on the gas phase metallicity (see review by Bolatto et
al.\ 2013). Values for $\alpha_{\rm CO}$ range from
0.3--1.3\,M$_\odot$\,(K\,km\,s$^{-1}$\,pc$^2$)$^{-1}$ in the most
intense starbursts (e.g., Downes \& Solomon 1998) to
3.5--4.6\,M$_\odot$\,(K\,km\,s$^{-1}$\,pc$^2$)$^{-1}$ in quiescently
star-forming galaxies (e.g., Solomon \& Barrett 1991), and can be even
higher in low-metallicity galaxies (e.g., Leroy et
al.\ 2011). However, the CO($J$=1$\to$0) line luminosity is still the
best-calibrated estimator available in the local universe, and given
the limited direct constraints available at high redshift, the most
reliable diagnostic to be utilized in distant galaxies as well. Also,
our theoretical understanding of variations in $\alpha_{\rm CO}$ among
galaxies near and far has been improving in recent years (see, e.g.,
Narayanan 2013, this volume). Due to the redshifting of molecular
lines, high-$z$ galaxies are most commonly detected in $J$$\geq$3 CO
lines, which are shifted into the 3\,mm observing window accessible
with the telescopes used for the detection of
CO($J$=1$\to$0) emission in nearby galaxies. Besides $\alpha_{\rm
CO}$, the determination of gas masses from $J$$\geq$3 CO lines bears
the additional uncertainty of gas excitation, which needs to be known
to infer the CO($J$=1$\to$0) line luminosity based on the brightness
of higher-$J$ lines. Without a correction, the CO($J$=1$\to$0) line
luminosity, and thus, the gas mass, may be underestimated by up to a
factor of a few.
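These two steps, line luminosity first and conversion factor second, can be sketched numerically. The $L'_{\rm CO}$ relation below is the standard one (e.g., Solomon \& Vanden Bout 2005), while the flux, redshift, and distance are illustrative placeholders, not values from this article:

```python
def co_line_luminosity(S_dv, nu_obs, D_L, z):
    """L'_CO in K km/s pc^2, for S_dv in Jy km/s, nu_obs in GHz, D_L in Mpc
    (standard relation, e.g. Solomon & Vanden Bout 2005)."""
    return 3.25e7 * S_dv * nu_obs**-2 * D_L**2 * (1.0 + z)**-3

z, D_L = 2.5, 2.0e4              # illustrative redshift and distance [Mpc]
nu_obs = 115.271 / (1.0 + z)     # redshifted CO(1-0) frequency [GHz]
Lp_co = co_line_luminosity(0.5, nu_obs, D_L, z)

# Gas mass for representative alpha_CO values from the ranges quoted
# above [M_sun]:
M_gas_starburst = 0.8 * Lp_co    # intense starbursts
M_gas_quiescent = 4.6 * Lp_co    # quiescently star-forming galaxies
```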
\begin{figure}[t]
\vspace{-10mm}
\centering
\includegraphics[scale=0.3]{f3_dr}
\caption{CO($J$=1$\to$0) observations of submillimeter galaxies along
the ``merger sequence'' with the JVLA (Riechers et al. 2011c,
2011d). SMGs show complex gas morphologies that can extend over
$>$10 kpc scales, and commonly consist of multiple components. The
gas distribution and kinematics of the majority of SMGs are as
expected for major, gas-rich mergers. The observed diversity in
these properties is consistent with different merger stages, as
shown here for three examples. {\em Left:} The two gas-rich galaxies
in this SMG are separated by tens of kpc and
$\sim$700\,km\,s$^{-1}$, representing an early merger stage. {\em
Middle:} This SMG still shows two separated components, but at
similar velocity, representing a more advanced merger stage. {\em
Right:} This SMG shows a single, complex extended gas structure
with multiple velocity components, representing a fairly late merger
stage.}
\label{f3}
\end{figure}
To overcome this limitation, direct studies of CO($J$=1$\to$0)
emission in high-redshift galaxies, redshifted down to short radio
wavelengths (6\,mm--2\,cm) accessible with the Green Bank Telescope
(GBT) and the JVLA have become more common in recent years. After
initial detections in less than a handful of galaxies (e.g., Carilli
et al.\ 2002; Riechers et al.\ 2006a; Hainline et al.\ 2006), the
increased frequency coverage and spectroscopic capabilities of the
JVLA upgrade, and the availability of the Zpectrometer wide-band
instrument with stable spectral baselines at the GBT have yielded tens
of detections in galaxies at redshifts of $z$$\gtrsim$1.5,
encompassing quasar host galaxies,\footnote{Based on higher spatial
resolution CO($J$=3$\to$2) observations than previously available,
the CO emission in J4135+10277 shown in Fig.~\ref{f2} has recently
been found to not be directly associated with the quasar host, but
with a nearby, optically faint dust-obscured starburst at the same
redshift. This gas-rich star-forming galaxy will likely merge with
the quasar host galaxy in the future (Riechers 2013; see Hainline et
al.\ 2004 for original CO $J$=3$\to$2 detection).} SMGs, LBGs, SFGs,
and RGs (e.g., Figs.~\ref{f2}, \ref{f3}; Riechers et al.\ 2010a,
2011b, 2011c, 2011d, 2011e; Ivison et al.\ 2010, 2011, 2012; Harris et
al.\ 2010, 2012; Frayer et al.\ 2011; Aravena et al.\ 2010). These
studies have revealed a wide range in gas excitation properties,
showing that the gas masses derived based on the brightness of
higher-$J$ CO lines were biased low by factors of two or more for many
of the systems (e.g., Riechers et al.\ 2011c, 2011d; Ivison et
al.\ 2011), while others are highly excited, and thus, did not require
significant corrections (typically quasar hosts; e.g., Riechers et
al.\ 2006a, 2009b, 2011b; see next section). Spatially resolved
mapping with the JVLA also shows that submillimeter-selected galaxies
commonly show gas reservoirs that are significantly more spatially
extended in CO($J$=1$\to$0) than in higher-$J$ lines, indicating the
presence of significant amounts of cold gas with low excitation (e.g.,
Riechers et al.\ 2011c; Ivison et al.\ 2011). This potentially
requires a different conversion factor $\alpha_{\rm CO}$ for different
components of the gas, which would imply an even larger correction to
gas masses derived from $J$$\geq$3 CO lines alone. It also implies
that accurate sizes of the gas reservoirs and gas dynamical masses for
these galaxies can only be derived based on CO($J$=1$\to$0) imaging.
In contrast, quasar host galaxies typically do not show evidence for
significant fractions of spatially extended, low-excitation gas, which
may suggest that gas masses derived from $J$$\geq$3 CO lines require
little correction (Riechers et al.\ 2006a, 2011b). Studies of more
quasar host galaxies at high spatial resolution in both low- and
high-$J$ CO transitions are desirable to investigate this apparent
difference between quasars and SMGs with comparable far-infrared
luminosities in more detail, and to better understand the role of both
populations in the context of the evolution of massive galaxies. The
JVLA and ALMA are the ideal tools for such investigations.
\section{Molecular Gas Excitation}
\label{coex}
\subsection{CO}
To understand the excitation properties of the molecular gas
reservoirs of high-redshift galaxies in more detail, it is necessary
to observe multiple molecular lines, most commonly by covering three
or more transitions of the rotational ladder of CO. In most cases,
these studies are currently limited to the study of spatially
integrated properties, rather than variations between different
regions of galaxies as done in the most detailed studies of nearby
galaxies (e.g., Wei\ss\ et al.\ 2005). In the four best-studied
high-redshift systems, all of which are strongly gravitationally
lensed, seven or more CO lines have been detected (e.g.,
Fig.~\ref{f4}; Riechers et al.\ 2006a, 2011b, 2011e; Wei\ss\ et
al.\ 2007; Bradford et al.\ 2009, 2011; Scott et al.\ 2011; Danielson
et al.\ 2011). By modeling the intensity of different rotational
levels of CO relative to equilibrium (in which case all lines would
have the same brightness temperature, and the line fluxes would scale
with $\nu_{\rm line}^2$), it is possible to constrain the physical
properties of the gas (in particular its density and kinetic
temperature). The CO lines, when observed in emission, can be excited
either by collisions with other molecules, or by the ambient radiation
field (e.g., Meijerink \& Spaans 2005). In the absence of detailed
constraints on the gas distribution and local radiation field, the
most common approach is to model the collisional excitation of CO
using the large velocity gradient (LVG) approximation (e.g., Scoville
\& Solomon 1974; Goldreich \& Kwan 1974). This method utilizes an
escape probability formalism (i.e., photons produced locally can only
be absorbed locally) resulting from a strong velocity gradient, which
helps to minimize the number of free parameters in models of the
collisional line excitation. LVG models appear to describe the CO
excitation in high-redshift galaxies on global scales fairly well.
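The equilibrium reference point mentioned above (equal brightness temperatures, line fluxes scaling as $\nu_{\rm line}^2$) is easy to state quantitatively; a minimal sketch:

```python
import numpy as np

# For a fully thermalized, optically thick ladder, all CO lines share one
# brightness temperature, so velocity-integrated fluxes scale as nu^2.
# With nu(J -> J-1) ~ J * nu(1 -> 0), this gives S_J / S_1 = J^2.
J = np.arange(1, 11)
sled_thermalized = J.astype(float) ** 2    # normalized to CO(1-0)

# In line-luminosity units (L' ~ S dv / nu^2) the same statement reads
# r_J1 = L'_J / L'_1 = 1; observed r_J1 < 1 signals subthermal excitation.
r_J1 = sled_thermalized / J ** 2
```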
As examples, the CO excitation ladders and LVG models for the
Cloverleaf quasar ($z$=2.56) and APM\,08279+5255 ($z$=3.91) are shown
in Fig.~\ref{f5}a and \ref{f5}c. Like all high-$z$ quasars studied in
detail so far, the interstellar medium in the host galaxies of these
systems is dominated by high-excitation gas components, suggesting
that the molecular gas is comparatively dense and warm, with
characteristic gas kinetic temperatures of $T_{\rm kin}$=50\,K and
densities of typically few times 10$^4$\,cm$^{-3}$ (e.g., Riechers et
al.\ 2006a, 2011b; Ao et al.\ 2008). APM\,08279+5255 is an outlier
among high-$z$ quasars, with evidence for a warm, dense gas component
with $T_{\rm kin}$=220\,K (Wei\ss\ et al.\ 2007). Typically, the
high-excitation gas components in quasars can account for all of the
flux measured in the CO($J$=1$\to$0) line as well, showing little
evidence for the existence of significant cold, low-excitation
components from the CO excitation ladders alone. For comparison, SFGs
appear to be dominated by colder, less dense gas components with
characteristic $T_{\rm kin}$=25\,K and densities of typically few
times 10$^3$\,cm$^{-3}$ (Dannerbauer et al.\ 2009; Aravena et
al.\ 2010). Given the integrated star formation rates of SFGs, the
presence of some higher excitation gas is expected, but the relative
strength of such components is not constrained well at present. The CO
excitation in SMGs is typically intermediate between SFGs and quasars,
containing a mix of dense, high-excitation gas (though with somewhat
lower excitation than in quasars on average) and spatially extended,
low-excitation gas (e.g., Riechers et al.\ 2011c). The high-excitation
gas components are responsible for the dominant fraction of the line
flux in the $J$$>$2 transitions, but the low-excitation components
likely constitute a significant, and sometimes dominant fraction of
the total gas mass (Riechers et al.\ 2011c; Ivison et al.\ 2011). The
emerging picture is that the high-excitation gas components are
associated with the regions that are actively forming stars, while the
low-excitation gas is found outside these regions and represents cold,
commonly diffuse and spatially more extended reservoirs for future
star formation.
Active Galactic Nuclei (AGN) may contribute to the excitation of CO
through their intense radiation fields as well (e.g., Spaans \&
Meijerink 2008). Studies of $J$$>$10 CO lines in nearby quasars show
evidence for a likely AGN contribution to the excitation of very
high-$J$ CO lines, but the contribution to lower-$J$ CO lines is
typically minor (e.g., van der Werf et al.\ 2010). The only high-$z$
system with detected $J$$>$10 CO lines currently is APM\,08279+5255
(Wei\ss\ et al.\ 2007), and its CO excitation is not representative of
other high-$z$ quasars or SMGs with AGN in them. As such, the role of
high-$z$ AGN for the excitation of molecular gas in their hosts
remains subject to further study. ALMA will be the ideal instrument
for such investigations.
\begin{figure}[t]
\centering
\includegraphics[scale=0.44]{f4_dr}
\caption{Multi-line CO spectroscopy of the strongly lensed
Herschel-selected SMG HLSW--01 ($z$=2.957; Riechers et al.\ 2011e;
Scott et al.\ 2011). {\em Left:} CO($J$=5$\to$4) emission contours
on a 2.2\,$\mu$m continuum image, showing the lensed images of the
SMG (labeled A to D) and the group of $z$$\simeq$0.6 galaxies
responsible for the lensing magnification. {\em Middle:} CO
$J$=1$\to$0, 3$\to$2, and 5$\to$4 emission lines. {\em Right:} CO
$J$=7$\to$6 to 10$\to$9 emission lines. CO($J$=7$\to$6) is partially
blended with the upper [CI] fine structure line. The relative
strength of the different CO lines, covering the excitation ladder
from $J$=1 to 10, provides detailed constraints on the excitation of
the molecular gas.}
\label{f4}
\end{figure}
\subsection{HCN and HCO$^+$}
CO is the most commonly employed tracer of molecular gas in galaxies,
and a good tracer for the full amount of molecular gas that is
present, but it is not a specific tracer of the dense gas that is
found in actively star-forming regions. The most common tracers of
dense molecular gas in galaxies are HCN and HCO$^+$, as the critical
densities of their low-$J$ transitions\footnote{The critical densities
of high-$J$ CO lines can approach those of HCN and
HCO$^+$($J$=1$\to$0), but their excitation temperatures are at least
1--2 orders of magnitude higher. Thus, they do not necessarily
trace the same gas phase.} are significantly higher than those of CO
(of order 10$^5$\,cm$^{-3}$, as compared to
$\lesssim$10$^3$\,cm$^{-3}$ for CO), and more similar to the densities
found in star-forming cores. As such, the study of HCN and HCO$^+$
excitation in combination with that of CO can provide valuable
constraints on the excitation mechanisms, and it can also help to
reduce the partial degeneracies between model parameters. The only
high-redshift galaxies in which the excitation of the dense gas has
been studied to date are the Cloverleaf and APM\,08279+5255
(Fig.~\ref{f5}b and \ref{f5}d; Riechers et al.\ 2006b, 2010c, 2011a;
Wei\ss\ et al.\ 2007). The HCO$^+$ excitation in the Cloverleaf
suggests that CO and HCO$^+$ trace the same warm, dense gas phase, with
HCO$^+$ tracing the densest $\sim$15\%--20\% of the gas (Riechers et
al.\ 2011a). The HCO$^+$ excitation is comparable to what is found in
the nuclei of nearby starburst and ultra-luminous infrared
galaxies. Interestingly, the HCN excitation in APM\,08279+5255 is
poorly represented by models assuming purely collisional excitation,
but instead suggests significant enhancement by the intense, warm
infrared radiation field ($T_{\rm IR}$=220\,K) in this galaxy
(Riechers et al.\ 2010c). This is also consistent with the detection
of bright emission from multiple submillimeter H$_2$O lines, which
have critical densities of $>$10$^8$\,cm$^{-3}$, and thus are unlikely
to be collisionally excited (Fig.~\ref{f6}; Lis et al.\ 2011; Bradford
et al.\ 2011; van der Werf et al.\ 2011). These case studies
exemplify the importance of studies of dense gas excitation to
understand the conditions for star formation in high-redshift galaxies
better than possible based on the study of CO alone.
\begin{figure}[t]
\centering
\includegraphics[scale=0.22]{f5_dr}
\caption{CO, HCO$^+$, and HCN excitation ladders (points) and LVG
models (lines) of the molecular gas excitation for the Cloverleaf
quasar ($z$=2.56; panels {\bf a} and {\bf b}) and APM\,08279+5255
($z$=3.91; panels {\bf c} and {\bf d}; Riechers et al.\ 2006b,
2010c, 2011a; Wei\ss\ et al.\ 2007). Data for the Cloverleaf are fit
well by a gas component with a kinetic temperature of $T_{\rm
sigma \beta _{0})$ & $2$ & $134/77$ & $0.344\beta _{0}^{-1}$ & $4/3$
& $-0.689\beta _{0}^{-1}$ & $-0.689/n_{0}$ \\ \hline
\end{tabular}
\end{center}
\caption{{\protect\small The coefficients for the shear stress for a
classical gas with constant cross section in the ultrarelativistic limit, in
the 23-moment approximation.}}
\label{shear_massless2}
\end{table}
To obtain these expressions we used the results from Appendix \ref{therm}
and that, in the massless/classical limits, $D_{20}=3P_{0}^{2}$. Note that
most of the transport coefficients were corrected by the inclusion of more
moments in the computation. The coefficients related to the shear-stress
tensor were less affected by the additional moments, when compared to the
particle-diffusion coefficients. This might explain the poor agreement
between the Israel-Stewart theory and numerical solutions of the Boltzmann
equation in Refs.\ \cite{BAMPS} regarding heat flow and fugacity.
We further checked the convergence of this approach by taking 32 and 41
moments. In this case, the matrices $\mathcal{A}^{\left( 1,2\right) }$,
\tau ^{\left( 1,2\right) }$\ and $\Omega ^{\left( 1,2\right) }$\ were
computed numerically. There is a clear tendency of convergence as we
increase the number of moments. For the particular case of classical
particles with constant cross sections, 32 moments seems sufficient. See
Tables \ref{diff_massless3} and \ref{shear_massless33} for the results.
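The convergence can be made explicit by comparing successive truncations; e.g., for the particle-diffusion coefficient $\kappa_n$ listed in Table \ref{diff_massless3} (values in units of $1/\sigma$):

```python
# kappa_n (units of 1/sigma) vs. number of moments, from the table:
kappa_n = {14: 3 / 16, 23: 21 / 128, 32: 0.1605, 41: 0.1596}

# Relative change between successive truncations:
orders = [14, 23, 32, 41]
rel_change = {b: abs(kappa_n[b] - kappa_n[a]) / kappa_n[a]
              for a, b in zip(orders, orders[1:])}
# 14 -> 23: ~12.5%, 23 -> 32: ~2.2%, 32 -> 41: ~0.6%
```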
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
number of moments & $\kappa _{n}$ & $\tau _{n}[\lambda _{\mathrm{mfp}}]$ & $\delta _{nn}[\tau _{n}]$ & $\lambda _{nn}[\tau _{n}]$ & $\lambda _{n\pi }[\tau _{n}]$ & $\ell _{n\pi }[\tau _{n}]$ & $\tau _{n\pi }[\tau _{n}]$ \\ \hline
$14$ & ${3}/\left( 16{\sigma }\right) $ & $9/{4}$ & $1$ & $3/5$ & $\beta _{0}/{20}$ & ${\beta _{0}}/{20}$ & $0$ \\ \hline
$23$ & $21/\left( 128\sigma \right) $ & $2.59$ & $1.0$ & $0.96$ & $0.054\beta _{0}$ & $0.118\beta _{0}$ & $0.0295\beta _{0}/P_{0}$ \\ \hline
$32$ & $0.1605/\sigma $ & $2.57$ & $1.0$ & $0.93$ & $0.052\beta _{0}$ & $0.119\beta _{0}$ & $0.0297\beta _{0}/P_{0}$ \\ \hline
$41$ & $0.1596/\sigma $ & $2.57$ & $1.0$ & $0.92$ & $0.052\beta _{0}$ & $0.119\beta _{0}$ & $0.0297\beta _{0}/P_{0}$ \\ \hline
\end{tabular}
\end{center}
\caption{{\protect\small The coefficients for the particle diffusion for a
classical gas with constant cross section in the ultrarelativistic limit, in
the 14, 23, 32 and 41-moment approximation.}}
\label{diff_massless3}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
number of moments & $\eta $ & $\tau _{\pi }[\lambda _{\mathrm{mfp}}]$ & $\tau _{\pi \pi }[\tau _{\pi }]$ & $\lambda _{\pi n}[\tau _{\pi }]$ & $\delta _{\pi \pi }[\tau _{\pi }]$ & $\ell _{\pi n}[\tau _{\pi }]$ & $\tau _{\pi n}[\tau _{\pi }]$ \\ \hline
$14$ & ${4}/({3\sigma \beta _{0}})$ & ${5}/3$ & $10/7$ & $0$ & $4/3$ & $0$ & $0$ \\ \hline
$23$ & $14/(11\sigma \beta _{0})$ & $2$ & $134/77$ & $0.344\beta _{0}^{-1}$ & $4/3$ & $-0.689/\beta _{0}$ & $-0.689/n_{0}$ \\ \hline
$32$ & $1.268/(\sigma \beta _{0})$ & $2$ & $1.69$ & $0.254\beta _{0}^{-1}$ & $4/3$ & $-0.687/\beta _{0}$ & $-0.687/n_{0}$ \\ \hline
$41$ & $1.267/(\sigma \beta _{0})$ & $2$ & $1.69$ & $0.244\beta _{0}^{-1}$ & $4/3$ & $-0.685/\beta _{0}$ & $-0.685/n_{0}$ \\ \hline
\end{tabular}
\end{center}
\caption{{\protect\small The coefficients for the shear stress for a
classical gas with constant cross section in the ultrarelativistic limit, in
the 14, 23, 32 and 41-moment approximation.}}
\label{shear_massless33}
\end{table}
\section{Discussion and Conclusions}
\label{conclusions}
\subsection{Knudsen number and the reduction of dynamical variables}
It is important to mention that the terms $\mathcal{K}$, $\mathcal{K}^{\mu }$, and $\mathcal{K}^{\mu \nu }$, which are of second order in the Knudsen number,
lead to several problems. The terms which contain second-order spatial
derivatives of $u^{\mu }$, $\alpha _{0}$, and $P_{0}$, e.g., $\nabla _{\mu
}I^{\mu }$, $\nabla _{\mu }F^{\mu }$, $\nabla ^{\left\langle \mu \right.
}I^{\left. \nu \right\rangle }$, $\nabla ^{\left\langle \mu \right.
}F^{\left. \nu \right\rangle }$, $\Delta _{\alpha }^{\mu }\partial _{\nu
}\sigma ^{\alpha \nu }$, and $\nabla ^{\mu }\theta $, are especially
problematic since they change the boundary conditions of the equations. In
relativistic systems these derivatives, even though they are space-like,
also contain time derivatives and thus require initial values. This means
that, by including them, one would have to specify not only the initial
spatial distribution of the fluid-dynamical variables but also the spatial
distribution of their time derivatives. In practice, this implies that we
would be increasing the number of fluid-dynamical degrees of freedom.
There is an even more serious problem. By including terms of order higher
than one in Knudsen number, the transport equations become parabolic. In a
relativistic theory, this comes with disastrous consequences since the
solutions are acausal and consequently unstable \cite{his}. For this reason,
if one wants to include terms of higher order in Knudsen number, it is
mandatory to include also second-order co-moving time derivatives of the
dissipative quantities. Or, equivalently, one could promote the moments
$\rho_3,\,\rho_2^\mu,\,\rho_1^{\mu \nu}$ or further ones to dynamical
variables. For this reason we do not compute the transport coefficients for
these higher-order terms in this paper.
In practice, a way around this would be to replace e.g.\ the
$\sigma^{\lambda \langle \mu} \sigma_{\lambda}^{\nu\rangle}$ term in
$\mathcal{K}^{\mu \nu}$ using the asymptotic (Navier-Stokes) solution by
$(1/2\eta)\pi^{\lambda \langle \mu} \sigma_{\lambda}^{\nu\rangle}$, and thus
effectively rendering it a term contributing
to $\mathcal{J}^{\mu \nu}$.
This should be a reasonable approximation if one is sufficiently close to
the asymptotic solution. This would then change the coefficient of the
respective term in $\mathcal{J}^{\mu \nu}$. In principle, this could be done
to all terms in $\mathcal{K}$, $\mathcal{K}^\mu$, and $\mathcal{K}^{\mu \nu}$,
except for the ones containing exclusively powers and/or gradients of
$F^\mu$ and $\omega^{\mu \nu}$. In the same spirit, using the asymptotic
solutions one could also shuffle some of the terms in $\mathcal{J},\,
\mathcal{J}^\mu$, and $\mathcal{J}^{\mu \nu}$ (those not containing $F^\mu$,
$\omega^{\mu \nu}$, and gradients of dissipative currents) into terms
contributing to $\mathcal{R},\, \mathcal{R}^\mu$, and $\mathcal{R}^{\mu \nu}$
(or vice versa). How this changes the actual transient dynamics remains to
be investigated in the future.
\subsection{Navier-Stokes limit}
Note that one of the main features of transient theories of fluid dynamics
is the relaxation of the dissipative currents towards their Navier-Stokes
values, on time scales given by the transport coefficients $\tau _{\Pi }$,
$\tau _{n}$, and $\tau _{\pi }$. From the Boltzmann equation, Navier-Stokes
theory is obtained by means of the Chapman-Enskog expansion which describes
an asymptotic solution of the single-particle distribution function. It is
already clear from the previous section that the equations of motion derived
in this paper approach Navier-Stokes-type solutions at asymptotically long
times, in which the dissipative currents are solely expressed in terms of
gradients of fluid-dynamical variables.
It is interesting to investigate, however, if our equations approach the
correct Navier-Stokes theory, i.e., if the viscosity coefficients obtained
via our method are equivalent to the ones obtained via Chapman-Enskog
theory. It should be noted that this is not the case for Grad's and Israel
and Stewart's theories \cite{IS, DeGroot,dkr}. The viscosity coefficients
computed by these theories do not coincide with those extracted from the
Chapman-Enskog theory. We remark that, after taking into account the first
corrections to the shear viscosity coefficient, see Eq.\ (\ref{Result3}) and
Table \ref{shear_massless33}, our result approaches the solution obtained
using Chapman-Enskog theory, $\eta_{NS}=1.2654/\left( \beta_{0}\sigma\right)
$ \cite{DeGroot}. In principle there is no reason for the method of moments
to attain a different Navier-Stokes limit than Chapman-Enskog theory. We can
show that, if the same basis of irreducible tensors $k^{\langle \mu_1}
\cdots k^{\mu_\ell \rangle}$ and polynomials $P_{n\mathbf{k}}^{(\ell)}$ is
used in both calculations, they both yield the same result, even order by
order.
\subsection{ \textquotedblleft Non-hydrodynamic\textquotedblright\ modes and
the microscopic origin of the relaxation time}
One of the features of the theory derived in this paper (and also of Grad's
and Israel-Stewart's theories) is the appearance of so-called
non-hydrodynamic modes, i.e., modes that do not vanish in the limit of zero
wave-number. Such modes do not exist in Navier-Stokes theory or its
extensions via the Chapman-Enskog expansion. For this reason, these modes
are usually not associated with fluid-dynamical behavior, hence the label
\textquotedblleft non-hydrodynamic\textquotedblright .
The non-hydrodynamic modes describe the relaxation of the dissipative
currents towards their respective Navier-Stokes solutions and can be
directly related to the respective relaxation times. For the case of the
shear non-hydrodynamic mode, $\omega _{\mathrm{shear}}^{\mathrm{non-hydro}}\left( \mathbf{k}\right) $, it can be shown that in the limit of $\mathbf{k}\rightarrow 0$ the mode is given by $\omega _{\mathrm{shear}}^{\mathrm{non-hydro}}\left( \mathbf{0}\right) =-i/\tau _{\pi }$ \cite{his}. In
Chapman-Enskog theory the transient dynamics of the system is neglected,
i.e., it is assumed that in the absence of space-like gradients, time-like
gradients vanish as well, and it is natural that such modes do not exist.
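To illustrate where this mode comes from in the simplest possible setting (a schematic sketch, not the full linearized system of equations), consider a shear-stress perturbation at zero wave number, for which the Navier-Stokes source term vanishes and the relaxation equation reduces to
\begin{equation}
\tau _{\pi }\partial _{t}\delta \pi ^{\mu \nu }+\delta \pi ^{\mu \nu }=0\;.
\end{equation}
The plane-wave ansatz $\delta \pi ^{\mu \nu }\sim e^{-i\omega t}$ then immediately gives $\omega =-i/\tau _{\pi }$, i.e., a purely damped, non-propagating mode.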
The appearance of non-hydrodynamic modes in a fluid-dynamical theory seems
to contradict the prevalent belief that fluid dynamics effectively describes
the asymptotic long-time and long-distance behavior of the microscopic
theory. Recently, a microscopic formula for the relaxation time of
dissipative currents was obtained in the framework of linear response theory
\cite{paperpoles}. In that paper, the relaxation time was shown to be
intrinsically related to the slowest microscopic time scale of the system,
i.e., to the singularity of the retarded Green's function closest to the
origin in the complex plane. Thus, the non-hydrodynamic modes in Israel and
Stewart's theory and in the equations derived in this paper belong to a
description at long, but not asymptotically long, times.
This means that the theory derived in this paper (as well as Israel and
Stewart's theory) attempts to describe the dynamics of the dissipative
currents at time scales of the order of the (slowest) microscopic time
scale (which is of the order of the mean free path). Such findings challenge
the point of view that a fluid-dynamical description can only be formulated
around zero frequency and wave number and that the inclusion of relaxation
times can only be understood as a regularization method to control the
instabilities of the gradient expansion. In fact, the relaxation times
correspond to microscopic time scales, independent of any macroscopic scale
related to the gradients of fluid-dynamical variables. Note that the
expressions presented in Ref.\ \cite{paperpoles} and in this paper for $\eta
$ and $\tau _{\pi }$ are equivalent.
\subsection{Conclusions}
In this work we have presented a general and consistent derivation of
relativistic fluid dynamics from the Boltzmann equation using the method of
moments. First, a general expansion of the single-particle distribution
function in terms of its moments was introduced in Sec.\ \ref{Mom_Meth}. We
constructed an orthonormal basis which allowed us to expand and obtain exact
relations between the expansion parameters and irreducible moments of the
deviations of the distribution function from equilibrium. We then proceeded
to derive exact equations for these moments.
The main difference of our approach to previous work is that we did not
close the fluid-dynamical equations of motion by truncating the expansion of
the distribution function. Instead, we kept all terms in the moment
expansion and truncated the exact equations of motion according to a
power-counting scheme in Knudsen and inverse Reynolds number. Contrary to
many calculations, we did not assume that the inverse Reynolds and Knudsen
numbers are of the same order. As a matter of fact, in order to obtain
relaxation-type equations, we had to explicitly include the slowest
microscopic time scales, which are shown to be the characteristic times
within which dissipative currents relax towards their asymptotic
Navier-Stokes solutions. Thus, Navier-Stokes theory, or the Chapman-Enskog
expansion, is already included in our formulation as an asymptotic limit of
the dynamical equations derived in this paper.
We concluded that the equations of motion can be closed in terms of only 14
dynamical variables, as long as we only keep terms of second order in
Knudsen and/or inverse Reynolds number. Even though the equations of motion
are closed in terms of these 14 fields, the transport coefficients carry
information about all moments of the distribution function (all the
different relaxation scales of the irreducible moments). The bulk-viscosity,
particle-diffusion, and shear-viscosity coefficients agree with the values
obtained via Chapman-Enskog theory.
\section{Acknowledgments}
G.S.D.\ and H.N.\ acknowledge the hospitality of MTA-KFKI, Budapest, where
part of this work was accomplished. G.S.D and D.H.R. acknowledge the
hospitality of the High-Energy Physics group at Jyv\"{a}skyl\"{a} University
where this work was completed. The authors thank T.\ Koide for enlightening
discussions. This work was supported by the Helmholtz International Center
for FAIR within the framework of the LOEWE program launched by the State of
Hesse. The work of H.N.\ was supported by the Extreme Matter Institute
(EMMI). E.M.\ was supported by Hungarian National Development Agency OTKA/N
\"{U} 81655.
\section*{Supplemental Material}
\subsection{Matrix elements and initial state on the ring}
The initial state after the Bragg pulse is easily obtained from the matrix elements of the operator
\begin{equation}
\label{eq:1}
\hat{U}_B(q,A) = e^{-iA\int dx \cos(q x){\hat{\Psi}}^{\dagger}(x){\hat{\Psi}}(x)}.
\end{equation}
In the Tonks-Girardeau (TG) limit on a ring geometry we use the eigenstates
\begin{equation} \label{eq:quantum_state_definition_LL}
\ket{{\boldsymbol{\lambda}}} = \frac1{\sqrt{N!}}\int_0^L d^{N}x \, \psi_{N}({\boldsymbol{x}}|{\boldsymbol{\lambda}})\,{\hat{\Psi}}^{\dagger}(x_{1})\dots{\hat{\Psi}}^{\dagger}(x_{N})\,|0\rangle \: ,
\end{equation}
with wavefunctions given by
\begin{equation}
\psi_{N}({\boldsymbol{x}}|{\boldsymbol{\lambda}}) = \frac1{\sqrt{N!}}\,\text{det}
\left[ e^{ix_{l} \lambda_j} \right] \displaystyle{\prod_{1\leq l<j \leq N}} \text{sgn}(x_{j}-x_{l}) \: .
\end{equation}
By commuting $\hat{U}_B(q,A) $ through the creation operators, the
matrix elements can be expressed as
\begin{align}
\label{eq:9}
\bra{{\boldsymbol{\lambda}}} \hat{U}_B(q,A) \ket{\boldsymbol{\mu}}
= \frac{1}{N!} \int d^Nx d^Nx'
\psi_{N}({\boldsymbol{x}}|{\boldsymbol{\lambda}})^{*}\psi_{N}({\boldsymbol{x}}'|\boldsymbol{\mu}) \notag \\
\times \; e^{-iA\sum_n \cos(qx_n)}
\bra{0} \prod_n{\hat{\Psi}}(x'_n) \prod_j {\hat{\Psi}}^{\dag}(x_j)\ket{0}.
\end{align}
The expectation value of the bosonic operators conspires with the
signs in the Tonks-Girardeau wavefunctions leading to a determinant
of $\delta$-functions. Treating the coordinates as dummy variables under the
integral sign, it is easy to rewrite the integral in factorized form
as a determinant of integrals of the form
\begin{equation}
\label{eq:2}
\frac{1}{L} \int_0^L dx e^{ix (\lambda_j - \mu_k) - iA \cos( q x) } = I_{\frac{\lambda_j-\mu_k}{q}}(-iA)\, \delta^{(q)}_{\lambda_j,\mu_k},
\end{equation}
where we define $\delta^{(q)}_{\lambda,\mu} = \delta_{(\lambda -\mu)\,
\text{mod}\,q,0}$ and where we used that $\lambda_j-\mu_k$ and $q$ lie on the momentum lattice $(2\pi \alpha/L)$ with $\alpha \in \mathbb{Z}$.
This results in the matrix elements
\begin{equation}
\label{eq:3}
\frac{\bra{\boldsymbol{\mu}}{{\hat{U}}_B (q,A)}\ket{{\boldsymbol{\lambda}}}}{L^N} = \, \text{det}_N\! \left[ \left( I_{\frac{\lambda_j-\mu_k}{q}}(-iA)\, \delta^{(q)}_{\lambda_j,\mu_k} \right)_{j,k} \right] .
\end{equation}
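As a quick numerical sanity check of Eq.~\eqref{eq:2} (not part of the original derivation; it assumes $L=2\pi$ so that all momenta are integers, and the values of $A$ and $q$ are illustrative), one can use $I_n(-iA)=(-i)^n J_n(A)$ and evaluate both sides with plain numpy:

```python
import numpy as np

# Check of Eq. (eq:2), assuming L = 2*pi so that all momenta are integers.
# Only numpy is used: J_n(a) from its integral representation, and
# I_n(-iA) = (-i)^n J_n(A) for real A.

def bessel_j(n, a, m=4096):
    # J_n(a) = (1/2pi) int_{-pi}^{pi} cos(n t - a sin t) dt (midpoint rule)
    t = 2.0 * np.pi * (np.arange(m) + 0.5) / m - np.pi
    return np.mean(np.cos(n * t - a * np.sin(t)))

def bragg_integral(delta, q, A, m=8192):
    # (1/L) int_0^L exp(i x delta - i A cos(q x)) dx, with L = 2*pi
    x = 2.0 * np.pi * np.arange(m) / m
    return np.mean(np.exp(1j * delta * x - 1j * A * np.cos(q * x)))

A, q = 1.3, 3
# delta = lambda_j - mu_k a multiple of q: the integral equals I_{delta/q}(-iA)
ref = (-1j) ** 2 * bessel_j(2, A)          # I_2(-iA) = (-i)^2 J_2(A)
assert abs(bragg_integral(2 * q, q, A) - ref) < 1e-8
# delta not a multiple of q: the integral vanishes (the delta^{(q)} factor)
assert abs(bragg_integral(4, q, A)) < 1e-8
```

The second assertion verifies the momentum selection rule encoded in $\delta^{(q)}_{\lambda,\mu}$.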
The initial state
\begin{equation}
\label{eq:4}
\ket{\psi_{q,A}} = \hat{U}_B(q,A) \ket{\psi_{GS}}
\end{equation}
is easily expressed in the Tonks-Girardeau eigenbasis using the matrix
elements $\bra{\boldsymbol{\mu}} \hat{U}_B(q,A) \ket{\psi_{GS}}$.
\subsection{The stationary state on a ring from a GGE and the Quench Action approach}
In order to implement the GGE logic~\cite{2007_Rigol_PRL_98,2008_Rigol_NATURE_452}, one starts with computing the conserved charges on the initial state. Let us focus on the case $q >2 \lambda_F$, for which the overlaps $\langle {\boldsymbol{\lambda}} | \psi_{q,A} \rangle$ coming from Eq.~\eqref{eq:3} reduce to a simple product of $N$ modified Bessel functions. While odd charges are trivially zero, for the even charges we find at finite system size
\begin{align}
& \quad\, \smatrixel{\psi_{q,A}}{\hat{Q}_{2\alpha}}{\psi_{q,A}} \notag \\
&= \sum_{j=1}^N \sum_{\beta\in\mathbb{Z}} \left| I_{\beta}(iA)\right|^2 \big( \lambda_j (\beta) \big)^{2\alpha} \notag \\
&= \sum_{j=1}^N \sum_{\beta\in\mathbb{Z}} \left| I_{\beta}(iA)\right|^2 \sum_{l=0}^{\alpha} {2\alpha \choose 2l} \big(\lambda_{j}^\text{GS}\big)^{2(\alpha-l)} (q\beta)^{2l} \raisetag{.5cm} \notag\\
&= \sum_{j=1}^N \sum_{l=0}^{\alpha} {2\alpha \choose 2l} \big(\lambda_{j}^\text{GS}\big)^{2(\alpha-l)} q^{2l} B_{2l,0}(A) \: ,
\end{align}
where we defined $\lambda_j (\beta) = \lambda_{j}^\text{GS} + q \beta $ and where the coefficients $B_{2l,0}$ come from the sum over the order of the Bessel functions and are known recursively \cite{2011_Bevilacqua_JMP_52}. The sum over particles $j$ can be performed, after which the thermodynamic limit can be taken,
\begin{align} \label{eq:expecation_value_local_charge_large_q}
& \quad \, \lim\nolimits_\text{th} \smatrixel{\psi_{q,A}}{\hat{Q}_{2\alpha}/N}{\psi_{q,A}} \notag \\
& = \sum_{l=0}^{\alpha} {2\alpha \choose 2l} \frac{(n \pi)^{2(\alpha-l)}q^{2l}}{2(\alpha-l)+1} B_{2l,0}(A) \: ,
\end{align}
where $n$ is the average particle density. For example, the energy per particle pumped into the system by an instantaneous Bragg pulse is given by
\begin{equation}
\lim\nolimits_\text{th} \big( \smatrixel{\psi_{q,A}}{\hat{Q}_{2}/N}{\psi_{q,A}} - \smatrixel{\psi_\text{GS}}{\hat{Q}_{2}/N}{\psi_\text{GS}} \big) = \frac{q^2 A^2}{2} \: .
\end{equation}
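The coefficient $B_{2,0}(A)=\sum_\beta \beta^2 |I_\beta(iA)|^2$ behind this result can be checked numerically (a minimal sketch using only numpy, with $|I_\beta(iA)|=|J_\beta(A)|$ and illustrative values of $A$ and $q$):

```python
import numpy as np

def bessel_j(n, a, m=4096):
    # J_n(a) via its integral representation; |I_n(iA)| = |J_n(A)| for real A
    t = 2.0 * np.pi * (np.arange(m) + 0.5) / m - np.pi
    return np.mean(np.cos(n * t - a * np.sin(t)))

A, q = 0.9, 3.0            # illustrative values
# B_{2,0}(A) = sum_beta beta^2 |I_beta(iA)|^2 = sum_beta beta^2 J_beta(A)^2
B20 = sum(b * b * bessel_j(b, A) ** 2 for b in range(-30, 31))
assert abs(B20 - A ** 2 / 2.0) < 1e-10
# hence the pumped energy per particle is q^2 * B_{2,0}(A) = q^2 A^2 / 2
assert abs(q ** 2 * B20 - q ** 2 * A ** 2 / 2.0) < 1e-9
```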
One can show that the saddle-point density
\begin{equation} \label{eq:supmat_saddle_point_bragg_pulse}
\rho^\text{sp}_{q,A}(\lambda) = \frac{1}{2\pi} \! \sum_{\beta\in\mathbb{Z}} \big[ \theta(\lambda-\beta q + \lambda_F) - \theta(\lambda-\beta q - \lambda_F) \big] \! \left| I_{\beta} ( iA) \right|^2
\end{equation}
reproduces these values of the conserved charges, i.e. $L \int_{-\infty}^\infty d\lambda\, \rho^{\text{sp}}_{q,A}(\lambda) \, \lambda^{2\alpha} = \lim\nolimits_\text{th} \smatrixel{\psi_{q,A}}{\hat{Q}_{2\alpha}}{\psi_{q,A}}$ for all $\alpha \in\mathbb{N}$, by performing the integral and recasting the infinite sum into the coefficients $B_{2l,0}$. Since the local conserved charges (if well defined) uniquely determine the saddle point, we have thus found the saddle-point density after a Bragg pulse for $q >2 \lambda_F$. For smaller Bragg momenta $q <2 \lambda_F$ the computation becomes considerably more difficult due to the determinant structure of the overlaps, but one can show that the saddle-point density given in Eq.~\eqref{eq:supmat_saddle_point_bragg_pulse} is still correct.
The Quench Action (QA) approach~\cite{2013_Caux_PRL_110,2014_DeNardis_PRA_89} reproduces this saddle-point density for $q >2 \lambda_F$. As a consequence of working in the Tonks-Girardeau regime, there are many microstates with exactly the same overlap. We can rephrase the overlaps as
\begin{equation}
\langle \{ \lambda_j(\beta_j) \}_{j=1}^N | \psi_{q,A} \rangle = L^N \prod_{\alpha=-\infty}^\infty \left[I_{\alpha} (-iA) \right]^{n_\alpha} \: ,
\end{equation}
where $n_\alpha$ is the number of rapidities $j$ with $\beta_j=\alpha$ and $\alpha \in \mathbb{Z}$. In the thermodynamic limit these numbers are given by
\begin{equation}
n_\alpha = L \int_{\alpha q - \lambda_F}^{\alpha q + \lambda_F} d\lambda \, \rho(\lambda) \: .
\end{equation}
The normalized overlap coefficients $S_{\{ \beta_j \}} = - \ln \big( \langle \{ \lambda_j(\beta_j) \}_{j=1}^N | \psi_{q,A} \rangle/L^N \big)$ have a well-defined thermodynamic limit,
\begin{align}
S [ \rho] &= \lim\nolimits_\text{th} \text{Re}\, S_{\{ \beta_j \}} \\
&= - L \sum_{\alpha=-\infty}^\infty \int_{\alpha q - \lambda_F}^{\alpha q + \lambda_F} d\lambda \, \rho(\lambda) \ln \left[ \left| I_{\alpha} (-iA) \right| \right] \\
&= L \! \int_{-\infty}^\infty \!\!\! d\lambda \, \rho(\lambda) \!\! \sum_{\alpha=-\infty}^\infty \!\! \big[ \theta(\lambda - \alpha q - \lambda_F) \notag \\
& \qquad \qquad - \theta(\lambda - \alpha q + \lambda_F) \big] \log \left[ \left| I_{\alpha} (iA) \right| \right],
\end{align}
where $\theta$ is the Heaviside step function and we used that $|I_n(-z)|=|I_n(z)|$. Furthermore, in the thermodynamic limit $\lambda_F= \pi n$, where $n$ is the average particle density. The discontinuous integrand will serve as the driving term of the GTBA equations. Note that in the second line we implicitly assume that $\rho(\lambda)=0$ when $\lambda \notin \bigcup_{\alpha\in\mathbb{Z}} \left[ \alpha q - \lambda_F, \alpha q + \lambda_F \right]$. The reason is that for Bethe states that do not obey this condition the overlap is exactly zero (rapidities never end up in those regions) and therefore $S[\rho]=\infty$; such states are infinitely suppressed in the Quench Action saddle-point equations. Equivalently, the functional integral in the Quench Action approach originates from a sum over states with nonzero overlap, and these states simply do not appear in that sum.
Even when one restricts the support of the density function to these intervals, this ensemble of states still contains many microstates that have zero overlap with the Bragg-pulsed ground state. The reason is that when a rapidity $\lambda_j^\text{GS}$ has moved to an interval $\alpha=\beta_j$, it is absent from all other intervals $\alpha \neq\beta_j$ and therefore leaves a hole there. This alters the usual form of the Yang-Yang entropy significantly. Given the fillings $\{ n_\alpha \}_{\alpha=-\infty}^\infty$, the finite-size entropy is
\begin{subequations}
\begin{align}
e^{S_{\text{YY},\{ n_\alpha\}}} \!
&= \frac{N!}{\prod_{\alpha=-\infty}^\infty (n_\alpha !)} \: ,
\end{align}
\end{subequations}
which leads to a modified Yang-Yang entropy for the Bragg pulse from the ground state,
\begin{equation}
S_{\text{YY}}[\rho] = - L \int_{-\infty}^\infty d\lambda \, \rho(\lambda) \log [2\pi\rho(\lambda)] \: ,
\end{equation}
where we used the Tonks-Girardeau Bethe equation $2\pi [\rho(\lambda) + \rho_h(\lambda)]=1$. The variation of the Quench Action should be restricted to densities for which $\rho(\lambda)=0$ when $\lambda \notin \bigcup_{\alpha \in \mathbb{Z}} \left[ \alpha q - \lambda_F, \alpha q + \lambda_F \right]$. Also, a Lagrange multiplier $h$ is added to fix the particle density to $n=N/L$. The resulting GTBA equation is not an integral equation because of the Tonks-Girardeau limit,
\begin{align}
&0=2 \!\! \sum_{\alpha=-\infty}^\infty \!\!\! \left[ \theta(\lambda \! - \! \alpha q \! - \! \lambda_F) \! - \! \theta(\lambda \! - \! \alpha q \! + \! \lambda_F) \right] \log \left[ \left| I_{\alpha} (iA) \right| \right] \notag \\
& \qquad \qquad \qquad \qquad + \log\! \left( 2\pi\rho^\text{sp}_{q,A}(\lambda)\right) + 1 - h \: .
\end{align}
This is solved by the normalized saddle-point density of Eq.~\eqref{eq:supmat_saddle_point_bragg_pulse}, with $h=1$. For smaller Bragg momenta $q <2 \lambda_F$ the derivation of the saddle-point distribution using the QA approach remains an open problem, since the determinant structure of the overlaps prevents obtaining a straightforward thermodynamic limit of the overlap coefficients that is expressible in terms of a root density $\rho(\lambda)$.
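As a pointwise numerical cross-check (not part of the original derivation; the parameter values are illustrative and only numpy is used), one can verify that the saddle-point density of Eq.~\eqref{eq:supmat_saddle_point_bragg_pulse} indeed solves the GTBA equation with $h=1$:

```python
import numpy as np

def bessel_j(n, a, m=4096):
    # |I_n(iA)| = |J_n(A)| for real A, with J_n from its integral representation
    t = 2.0 * np.pi * (np.arange(m) + 0.5) / m - np.pi
    return np.mean(np.cos(n * t - a * np.sin(t)))

A, q, lam_F = 0.8, 7.0, 1.0      # illustrative values with q > 2*lam_F
orders = range(-25, 26)

def driving(lam):
    # 2 * sum_alpha [theta(lam-aq-lF) - theta(lam-aq+lF)] * log|I_alpha(iA)|
    s = 0.0
    for al in orders:
        bracket = float(lam - al * q - lam_F >= 0) - float(lam - al * q + lam_F >= 0)
        if bracket != 0.0:
            s += bracket * np.log(abs(bessel_j(al, A)))
    return 2.0 * s

def rho_sp(lam):
    # saddle-point density of Eq. (eq:supmat_saddle_point_bragg_pulse)
    return sum(bessel_j(b, A) ** 2 for b in orders
               if abs(lam - b * q) < lam_F) / (2.0 * np.pi)

# the GTBA equation is satisfied with Lagrange multiplier h = 1
h = 1.0
for lam in [0.3, -0.7, q + 0.2, -2 * q - 0.5]:
    residual = driving(lam) + np.log(2.0 * np.pi * rho_sp(lam)) + 1.0 - h
    assert abs(residual) < 1e-9
```

Inside the interval labeled by $\beta$ the driving term equals $-2\log|I_\beta(iA)|$ while $\log(2\pi\rho^{\rm sp})=2\log|I_\beta(iA)|$, so the residual vanishes identically.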
Moreover, one could question whether the time evolution of simple observables obtained from the QA approach using the saddle-point density of Eq.~\eqref{eq:supmat_saddle_point_bragg_pulse} is valid also for small Bragg momenta $q < 2\lambda_F$. However, the analysis of the FB mapping does not show any qualitative differences for Bragg momenta smaller or bigger than $2 \lambda_F$ and the agreement with the time evolution of the QA approach is excellent for all $q>0$. It therefore seems safe to assume that the time evolution from the QA approach is valid for all Bragg momenta.
\subsection{Time evolution of the momentum distribution function on a ring in the thermodynamic limit}
In Ref.~\cite{2014_Nardis_JSTAT_P12012} the thermodynamic limit of the matrix elements of the one-body density matrix between states with a countable number $n_{\rm e}$ of particle-hole excitations $\{ h_j \to p_j \}_{j=1}^{n_{\rm e}}$ on a thermodynamic state $| \rho \rangle$ was computed. It is obtained by decomposing the matrix elements into a Fredholm determinant and a finite-size determinant accounting for the excitations, and reads
\begin{align}
&\matrixel{\rho}{{\hat{\Psi}}^\dag(x) {\hat{\Psi}}(0)}{\rho,\{ h_j \to p_j \}_{j=1}^{n_{\rm e}}} \nonumber \\
& = L^{-n_{\rm e}} e^{i\frac{x}{2} \sum_{j=1}^{n_{\rm e}} (p_j-h_j)} \Big\{ \text{Det}(1+ K'\rho) \det_{i,j=1}^{n_{\rm e}} \left[W'\big(h_i,p_j\big)\right] \nonumber \\&
- \text{Det}(1+K\rho) \det_{i,j=1}^{n_{\rm e}} \left[W\big(h_i,p_j\big)\right] \Big\} ,
\end{align}
where $\rho(\lambda)$ is the density of rapidities of the thermodynamic state.
The Fredholm determinants are denoted by $\text{Det}$, where $(K\rho)(\lambda,\mu) = K(\lambda,\mu)\rho(\mu)$. The kernels are given by
\begin{align}
& K'(\lambda,\mu) = K(\lambda,\mu) + n e^{-i\frac{x}{2}(\lambda+\mu)} \: , \\& K(\lambda,\mu) = -4 n \frac{\sin\left(\frac{x}{2}(\lambda-\mu)\right)}{\lambda-\mu} \: ,
\end{align}
where $n=N/L$ is the density. The function $W$ is defined as
\begin{align}
W(\lambda,\mu) & = \left( (1+K\rho)^{-1} K \right) (\lambda,\mu) \: , \\
W'(\lambda,\mu) & = \left( (1+K'\rho)^{-1} K' \right) (\lambda,\mu) \: ,
\end{align}
or equivalently via the integral equations
\begin{align}
& W(\lambda,\mu) + \int_{-\infty}^\infty d\nu\, K(\lambda,\nu) \rho(\nu) W(\nu,\mu) = K(\lambda,\mu) \: , \\&
W'(\lambda,\mu) + \int_{-\infty}^\infty d\nu\, K'(\lambda,\nu) \rho(\nu) W'(\nu,\mu) = K'(\lambda,\mu) \: .
\end{align}
The QA approach yields the following expression for the time evolution of the one-body density matrix in the thermodynamic limit
\begin{align}\label{eq:sum_exc_QA}
& \matrixel{\psi_{q,A}(t)}{{\hat{\Psi}}^\dag(x) {\hat{\Psi}}(0)}{\psi_{q,A}(t)} \nonumber \\& = \Re \sum_{n_{\rm e}=0}^{\infty} \frac{1}{(n_{\rm e}!)^2} \left( \prod_{j=1}^{n_{\rm e}} L^2 \int_{-\infty}^{\infty} dh_j dp_j \: \varphi_{-}^{(t)}(h_j) \varphi^{(t)}_{+}(p_j) \right)\nonumber \\& \times \matrixel{\rho_{q,A}}{{\hat{\Psi}}^\dag(x) {\hat{\Psi}}(0)}{\rho_{q,A},\{ h_j \to p_j \}_{j=1}^{n_{\rm e}}} \: ,
\end{align}
where the effective densities of holes and particles are given by
\begin{align}
&\varphi_{-}^{(t)}(h_j) = e^{ \delta s(h_j) + i \delta \omega(h_j)t} \rho_{q,A}(h_j) \: , \\
& \varphi_{+}^{(t)}(p_j) = e^{- \delta s(p_j) - i \delta \omega(p_j)t} \rho^h_{q,A}(p_j) \: ,
\end{align}
with the density of holes of the saddle point given by $\rho^h_{q,A}(p) = \frac{1}{2 \pi} - \rho_{q,A}(p) $.
The differential overlap for a single particle-hole excitation, $e^{- \delta s(p) + \delta s (h)}$, is obtained by taking a finite-size realization $| \boldsymbol{\lambda}_{q,A} \rangle \to | \rho_{q,A }\rangle $ of the saddle-point state and the modified state obtained by creating a single particle-hole pair on $| \boldsymbol{\lambda}_{q,A} \rangle $. The ratio of the two overlaps in the thermodynamic limit gives the differential overlap
\begin{equation}
e^{- \delta s(p) + \delta s (h)} = \lim_{N \to \infty} \frac{\langle \psi_{q,A} | \boldsymbol{\lambda}_{q,A} , h \to p \rangle }{\langle \psi_{q,A} | \boldsymbol{\lambda}_{q,A} \rangle} \: .
\end{equation}
The same argument gives the energy of the single particle-hole excitations
\begin{equation}
e^{- i \delta \omega(p)t + i \delta \omega (h)t} = e^{- i p^2 t + i h^2 t } \: .
\end{equation}
Since the overlaps only couple states whose rapidities differ by multiples of $q$, the sum over the excitations on the saddle-point state reduces to
\begin{align}
& \frac{1}{(n_{\rm e}!)^2}\prod_{j=1}^{n_{\rm e}} L^2 \int_{-\infty}^{\infty} dh_j dp_j \: \varphi_{-}^{(t)}(h_j) \varphi^{(t)}_{+}(p_j) \nonumber \\
& \to \frac{1}{n_{\rm e}!}\prod_{j=1}^{n_{\rm e}} L \int_{-\infty}^{\infty} dh_j \sum_{\beta_j \in \mathbb{Z}} \varphi_{-}^{(t)}(h_j) \frac{\varphi^{(t)}_{+}(h_j + \beta_j q)}{\rho^h_{q,A}(h_j + \beta_j q) } \: .
\end{align}
After this substitution the sum in Eq.~\eqref{eq:sum_exc_QA} becomes the definition of a Fredholm determinant, and using the definition of the saddle-point distribution in Eq.~\eqref{eq:supmat_saddle_point_bragg_pulse}, the expression for the time evolution of the one-body density matrix can be rewritten as the difference of two Fredholm determinants of two infinite block matrices where each block $S_{\alpha, \beta}$ and $S'_{\alpha, \beta}$ for any $\alpha,\beta \in \mathbb{Z}$ is an operator acting on $\Lambda = [-\lambda_F, \lambda_F]$,
\begin{align}\label{eq:TE1}
\langle \psi_{q,A}(t) | & {\hat{\Psi}}^\dagger(x) {\hat{\Psi}}(0) |\psi_{q,A}(t) \rangle \nonumber \\=
\Re \Big[ \text{Det}_\Lambda& \left(\mathbf{1} \delta_{\alpha, \beta}+ S'_{\alpha, \beta} \right)_{\alpha,\beta \in \mathbb{Z}} \nonumber \\& -
\text{Det}_\Lambda \left(\mathbf{1} \delta_{\alpha, \beta}+ S_{\alpha, \beta} \right)_{\alpha,\beta \in \mathbb{Z}} \Big]\: ,
\end{align}
with the operators given by
\begin{equation}
S'_{\alpha, \beta} (u,v)= \sum_{\gamma \in \mathbb{Z}}
\zeta^{(t)}_\gamma(u + \alpha q) K' (u + \alpha q, v + (\beta + \gamma) q) \Phi^{(t)}_{\beta, \gamma} \: , \nonumber
\end{equation}
\begin{equation}
S_{\alpha, \beta} (u,v)= \sum_{\gamma \in \mathbb{Z}}
\zeta^{(t)}_\gamma(u + \alpha q) K (u + \alpha q, v + (\beta + \gamma) q) \Phi^{(t)}_{\beta, \gamma} \: .
\end{equation}
Here $u,v \in [-\lambda_F, \lambda_F ]$ and $\mathbf{1}$ is the identity operator. The coefficients $\Phi^{(t)}_{\beta, \gamma}$ and the function $\zeta_\gamma^{(t)}(u)$ are given by
\begin{equation}
\Phi^{(t)}_{\beta, \gamma} = \frac{ I_{\beta}( i A) I_{\beta + \gamma} (-i A)}{2 \pi} e^{- it (q \gamma)^2 + i x q \gamma/2} \: ,
\end{equation}
\begin{equation}
\zeta^{(t)}_\gamma(u)= e^{-2 i t q \gamma u } \: .
\end{equation}
In order to obtain the time evolution of the momentum distribution $ \hat{n}(k,t)$ one needs to restrict the sum in Eq.~\eqref{eq:sum_exc_QA} to excitations with zero total momentum, namely
\begin{align}\label{eq:sum_exc_QA_2}
& \hat{n}(k,t) = \nonumber \\& \text{FT} \Big\{ \sum_{n_{\rm e}=0}^{\infty} \frac{1}{n_{\rm e}!} \left( \prod_{j=1}^{n_{\rm e}} \int_{-\infty}^{\infty} \!\! dh_j \! \sum_{\beta_j \in \mathbb{Z}}\: \varphi_{-}^{(t)}(h_j) \frac{\varphi^{(t)}_{+}(h_j + \beta_j q)}{\rho^h_{q,A}(h_j + \beta_j q) } \right)\nonumber \\& \times L^{n_{\rm e}}\matrixel{\rho_{q,A}}{\Psi^\dag(x) \Psi(0)}{\rho_{q,A},\{ h_j \to h_j + q \beta_j \}_{j=1}^{n_{\rm e}}} \nonumber \\& \times \delta_{\sum_{j=1}^{n_{\rm e}} \beta_j ,0} \Big\}
\end{align}
where we denoted the Fourier transform as $\text{FT} \{ f(x) \} = \int_{-\infty}^{\infty} dx \: f(x) e^{- i k x} $. Using the identity
\begin{equation}
\int_{-\pi}^{\pi}\frac{d v}{2 \pi} e^{- i \beta v} = \delta_{\beta,0} \: ,
\end{equation}
we obtain
\begin{widetext}
\begin{equation}
\hat{n}(k,t) = \text{FT} \Big\{ \Re \int_{-\pi}^{\pi} \frac{d\kappa}{2 \pi} \left[
\text{Det}_\Lambda \left(\mathbf{1} \delta_{\alpha, \beta}+ S^{(\kappa)}{}'_{\alpha, \beta} \right)_{\alpha,\beta \in \mathbb{Z}} -
\text{Det}_\Lambda \left(\mathbf{1} \delta_{\alpha, \beta}+ S^{(\kappa)}{}_{\alpha, \beta} \right)_{\alpha,\beta \in \mathbb{Z}} \right]\Big\} \: ,
\end{equation}
\begin{equation}
S^{(\kappa)}{}'_{\alpha, \beta} (u,v)= \sum_{\gamma \in \mathbb{Z}}
\zeta^{(t)}_\gamma(u + \alpha q) K' (u + \alpha q, v + (\beta + \gamma) q) \Phi^{(t)}_{\beta, \gamma} e^{-i \kappa \gamma} \: , \nonumber
\end{equation}
\begin{equation}
S^{(\kappa)}_{\alpha, \beta} (u,v)= \sum_{\gamma \in \mathbb{Z}}
\zeta^{(t)}_\gamma(u + \alpha q) K (u + \alpha q, v + (\beta + \gamma) q) \Phi^{(t)}_{\beta, \gamma} e^{-i \kappa \gamma} \: .
\end{equation}
\end{widetext}
\subsection{Time-evolved single-particle states in the trap}
The propagator for the quantum harmonic oscillator (Mehler kernel) is given by
\begin{align}
K(x,y;t) = &\sqrt{\frac{m\omega}{2\pi i \sin (\omega t)}} \times \notag \\
&\times \exp\left(\frac{- m \omega (x^2 + y^2 ) \cos(\omega t) + 2 m \omega xy}{2 i \sin(\omega t)}\right).
\end{align}
The single-particle (SP) wavefunctions after the Bragg pulse can then be time evolved by integrating the initial wavefunctions, including the cosine phase, against the propagator:
\begin{align}
\psi_j(x;t) = \int_{-\infty}^{\infty} &dy K(x,y;t) e^{-iA \cos(qy)}\psi_j(y) \notag \\
=\sum_{\beta=-\infty}^{\infty}&I_{\beta}(-iA)e^{-i\beta q \cos(\omega t)\left(x+\frac{\beta q}{2m\omega}\sin(\omega t)\right)} \notag \\
&\;\psi_j(x+ \tfrac{\beta q}{m\omega}\sin(\omega t)) e^{-i \omega(j+\frac{1}{2})t} \: ,
\label{eq:supp_harm_phit}
\end{align}
where $\psi_j(x)$ are the harmonic-oscillator eigenfunctions
\begin{align}
\psi_j(x) = \frac{1}{\sqrt{2^j j!}} \left(\frac{m \omega}{\pi}\right)^{1/4} e^{-\frac{m\omega x^2}{2}} H_j\left(\sqrt{m \omega} x\right).
\label{eq:supmat_harm_eig}
\end{align}
The result in Eq. \eqref{eq:supp_harm_phit} has been obtained by using the following two identities
\begin{align}
&e^{-i z \cos(\phi)} = \sum_{n=-\infty}^{\infty} I_n(- i z) e^{-i n \phi} \: , \\
&\int_{-\infty}^{\infty} \mathrm{d} x e^{-(x-y)^2} H_j(\alpha x)
= \sqrt{\pi} (1-\alpha^2)^{\sfrac{j}{2}} H_j\left(\frac{\alpha y}{\sqrt{1-\alpha^2}}\right).
\end{align}
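The first identity (the Jacobi-Anger expansion) can be verified numerically, using $I_n(-iz)=(-i)^n J_n(z)$ for real $z$ (a minimal numpy sketch; the values of $z$ and $\phi$ are illustrative):

```python
import numpy as np

def bessel_j(n, z, m=4096):
    # J_n(z) from its integral representation (numpy only)
    t = 2.0 * np.pi * (np.arange(m) + 0.5) / m - np.pi
    return np.mean(np.cos(n * t - z * np.sin(t)))

z, phi = 1.7, 0.9                 # illustrative values
# e^{-i z cos(phi)} = sum_n I_n(-iz) e^{-i n phi}, with I_n(-iz) = (-i)^n J_n(z)
series = sum((-1j) ** n * bessel_j(n, z) * np.exp(-1j * n * phi)
             for n in range(-30, 31))
assert abs(series - np.exp(-1j * z * np.cos(phi))) < 1e-10
```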
\subsection{Exact momentum distribution at $t=0$ for arbitrary interactions}
The one-body density matrix at $t=0$ (after the Bragg pulse) is given by
\begin{multline}
\label{eq:30}
\langle \hat{U}_B^{\dag} (q,A) {\hat{\Psi}}^{\dag}(x){\hat{\Psi}}(y) \hat{U}_B (q,A)
\rangle \\
= \langle {\hat{\Psi}}^{\dag}(x){\hat{\Psi}}(y)\rangle e^{-i 2 A \sin\left(q \frac{x-y}{2}\right)\sin\left(q \frac{x+y}{2} \right)}.
\end{multline}
The latter equality follows directly from the commutation relations of the Bose
fields with the density and thus holds irrespective of interaction or
geometry.
For the case of the ring geometry, the associated momentum distribution function (MDF) is
\begin{equation}
\label{eq:32}
\langle \hat{n}(k,t=0)\rangle
= \frac{1}{L}\int_0^L d\xi e^{ik\xi}I_0\left(i 2A \sin(q\xi/2)\right)\langle {\hat{\Psi}}^{\dag}(\xi){\hat{\Psi}}(0)\rangle \: ,
\end{equation}
where we defined $\xi = x-y$ and used the integral
\begin{multline}
\label{eq:51}
\frac{1}{L}\int_0^L dy e^{-i 2A\sin(q \xi/2)\sin(qy+q\xi/2) } \\
= I_0\left(i 2A \sin(q \xi/2)\right) \: ,
\end{multline}
under the assumption that $qL/2\pi$ is integer.
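Equation~\eqref{eq:51} can be checked numerically as well (a sketch assuming $L=2\pi$ and $q=3$, so that $qL/2\pi$ is an integer; only numpy is used, and $I_0(iz)=J_0(z)$ for real $z$):

```python
import numpy as np

def bessel_j0(z, m=4096):
    # J_0(z) = (1/2pi) int_{-pi}^{pi} cos(z sin t) dt
    t = 2.0 * np.pi * (np.arange(m) + 0.5) / m - np.pi
    return np.mean(np.cos(z * np.sin(t)))

A, q = 1.2, 3.0
L = 2.0 * np.pi                   # so that qL/(2*pi) is an integer
xi = 0.4
y = L * np.arange(20001) / 20001  # uniform grid over [0, L)
avg = np.mean(np.exp(-2j * A * np.sin(q * xi / 2) * np.sin(q * y + q * xi / 2)))
# I_0(2iA sin(q xi/2)) = J_0(2A sin(q xi/2)) for a real argument
assert abs(avg - bessel_j0(2.0 * A * np.sin(q * xi / 2))) < 1e-10
```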
\begin{figure}[ht]
\includegraphics[width=0.99 \columnwidth]{t0mom}
\caption{The initial MDF ($t = 0$) for $A = 1.5$, $q=3\pi$ and different values of the interaction strength $c$. The finite-$c$ interactions cause a decrease of the width of the satellites but do not influence their relative heights.}
\label{fig:t0mom}
\end{figure}
Using the convolution theorem we obtain
\begin{equation}
\label{eq:53}
\langle \hat{n}(k,t=0)\rangle =\sum_{k'} f(k')\langle \hat{n}(k-k')\rangle_{\rm GS}
\end{equation}
where
\begin{equation}
\label{eq:54}
f(k) = \frac{1}{L}\int_0^L dx e^{ikx}I_0(i 2 A \sin(q x/2))
\end{equation}
and $\langle \hat{n}(k)\rangle_{\rm GS}$ is the MDF of the ground state.
Using the expansion $ I_0 (z) = \sum_{n=0}^{\infty} (\frac{1}{4}z^2)^n/(n!)^2$
one finds
\begin{multline}
\label{eq:56}
I_0(i 2 A \sin(q x/2)) =\sum_{n=0}^{\infty} \frac{ (-1)^n}{(n!)^2}A^{2n}\sin^{2n}(qx/2)\\
=\sum_{n=0}^{\infty}\sum_{l=-n}^n \frac{(-1)^{n+l}(2n)!}{(n!)^2(n-l)!(n+l)!} \left(\frac{A}{2}\right)^{2n}e^{ilqx} \: ,
\end{multline}
where we used the binomial theorem to expand in plane waves.
The order of the sums can now be interchanged. Defining the coefficients
\begin{equation}
\label{eq:57}
c_l(A) = \sum_{n=|l|}^{\infty} \frac{(-1)^{n+l}(2n)!}{(n!)^2(n-l)!(n+l)!} \left(\frac{A}{2}\right)^{2n}
\end{equation}
we obtain $f(k) = \sum_l c_l \delta_{k,lq}$. The coefficients
$c_l(A)$ can in fact be resummed and expressed in terms of a
hypergeometric function
\begin{equation}
\label{eq:59}
c_l (A)=\frac{A^{2|l|}}{(|l|!)^2 2^{2|l|}}\tensor*[_{1}]{\mathrm{F}}{_{2}}\left( \frac{2|l|+1}{2};|l|+1,2|l|+1;-A^2 \right).
\end{equation}
The $t=0$ post-pulse MDF can therefore
be exactly expressed in terms of the MDF before the
pulse $\langle \hat{n}(k)\rangle_{\rm {GS}}$ as
\begin{equation}
\label{eq:nkt0}
\langle \hat{n}(k,t=0) \rangle = \sum_{l=-\infty}^{\infty}c_l(A)\langle \hat{n}(k+ l q)\rangle_{\rm {GS}}\;.
\end{equation}
Note that this result holds for arbitrary interaction strength $c$ with $\langle \hat{n}(k)\rangle_{\rm {GS}}$ the appropriate ground state MDF.
The result is plotted in Fig.~\ref{fig:t0mom} for different values of $c$.
The influence of the finite interactions resides solely in the ground-state MDF $\langle \hat{n}(k + ql) \rangle_{\text{GS}}$, leading to a decreasing width of the peaks as one goes from the hard-core limit ($c\rightarrow \infty$) to the BEC limit ($c\rightarrow 0$). In contrast, Eq.~\eqref{eq:nkt0} shows that their relative heights are completely determined by the value of $A$.
\subsection{Local density approximation}
The local density approximation (LDA) for the gas in a parabolic trap
amounts to replacing the value for the mean density in the Quench Action
result for the short distance fluctuations with a space-dependent
density profile corresponding to the ground state in the
trap. This result is considerably improved when one introduces the
classical harmonic motion of the density profile in accordance with
the exact $t=0$ MDF Eq.~\eqref{eq:nkt0}.
In the thermodynamic limit the ground state density profile in a harmonic trap is given by
\begin{equation}
\label{eq:5}
\rho_{\mathrm{GS}}(x) = \langle \hat{\rho}(x)\rangle_{\mathrm{GS}} =\frac{1}{\pi}\sqrt{m N \omega - m^2\omega^2 x^2}.
\end{equation}
The QA result for the time-evolved density profile yields
\begin{align}
\rho_{\rm QA}(x,t;n) =& \lim\nolimits_\text{th} \matrixel{ \psi_{q,A} (t)}{ \hat{\rho}(x) }{ \psi_{q,A} (t) } \notag \\
=& \frac{n m}{ q \lambda_F t}
\sum_{\beta=-\infty}^{\infty} J_{\beta }(-2A\sin(q^2\beta t/2m)) \times \notag \\
\quad &\cos(xq\beta) \frac{\sin(q \lambda_F \beta t/m )}{\beta } \: ,
\label{eq:supmat_time_evolution_density}
\end{align}
with $n$ the mean density on the ring. The result for the LDA in the trap then reads
\begin{multline}
\label{eq:10}
\rho_{\rm LDA}(x,t) =\\ \sum_l c_l(A)\;\rho_{\rm QA}\left(x - \frac{l q}{\omega m}\sin(\omega t),t; \rho_{\rm GS}(x- \frac{l q}{\omega m}\sin(\omega t))\right) \: ,
\end{multline}
where the coefficients $c_l(A)$ are given in Eq. \eqref{eq:59}.
\end{document}
\section{Introduction}
\label{sec:intro}
Planets form in discs of dust and gas known as `planet-forming' or `protoplanetary' discs. These discs are found around most young stellar objects (YSOs) in star forming regions up to ages of $\sim 3{-}10$~Myr \citep[e.g.][]{Haisch01, Ribas15}. Since 2015, the \textit{Atacama Large Millimetre/sub-Millimetre Array} (ALMA) has instigated a revolution in our understanding of these objects. In particular, the high resolution sub-mm continuum images, tracing the dust content of protoplanetary discs, not only offer an insight into the dust masses and radii \citep[e.g.][]{Ansdell16, Ansdell17, Eisner18, vanTerwisga20, Ansdell20}, but also a wealth of internal sub-structures which serve as a window into planet formation processes \citep[e.g.][]{HLTau15, 2018ApJ...869L..41A, 2021ApJ...916L...2B}. The nearby (distance $D\lesssim 150$~pc) discs in low-mass star forming regions such as Taurus and Lupus can be sufficiently resolved to expose rings, gaps and spirals in dust, as well as kinematic signatures in gas \citep{2018ApJ...860L..12T, 2019NatAs...3.1109P, 2019Natur.574..378T, 2022arXiv220309528P, 2022arXiv220603236C}, which may either be signatures of planets or of the processes that govern their formation.
Prior to ALMA, some of the first images of discs were obtained around stars in the Orion Nebula cluster (ONC), at a distance of $D\sim 400$~pc, much further than the famous HL Tau \citep{Sargent87, HLTau15}. These discs exhibit non-thermal emission in the radio \citep{Churchwell87, Felli93, Felli93b}, with resolved ionisation fronts and disc silhouettes seen at optical wavelengths \citep{O'dell93, O'Dell94, Prosser94, McCaughrean96}. Identified as `solar-system sized condensations' by \citet{Churchwell87}, they were estimated to be undergoing mass loss at a rate of $\dot{M} \sim 10^{-6}\,M_\odot$~yr$^{-1}$. \citet{O'dell93} confirmed them to be irradiated protoplanetary discs, dubbing them with the contraction `proplyds'. While this term has now been dropped for the broader class of protoplanetary disc, it is still used in relation to discs with a bright, resolved ionisation front owing to the external UV radiation field. These objects are of great importance in unravelling what now appears to be a key process in planet formation: \textit{external photoevaporation}.
The process of external photoevaporation is distinguished from the internal photoevaporation (or more commonly just `photoevaporation') of protoplanetary discs via the source that is responsible for driving the outflows. Both processes involve heating of the gas by photons, leading to thermal winds that deplete the discs. However, internal photoevaporation is driven by some combination of FUV (far-ultraviolet), EUV (extreme-ultraviolet) and X-ray photons originating from the central host star \citep[e.g.][]{2008ApJ...688..398E, 2019MNRAS.487..691P, 2022arXiv220409704S}, depleting the disc from inside-out \citep[e.g.][]{Clarke01, Owen10, 2012MNRAS.422.1880O, Jennings18, Coleman22}. \textit{External} photoevaporation, by contrast, is driven by an OB star external to the host star-disc system. The high FUV and EUV luminosities of OB stars, combined with the heating of outer disc material that experiences weaker gravity, can result in extremely vigorous outside-in depletion \citep{Johnstone98}. Indeed, as inferred by \citet{Churchwell87}, the brightest proplyds in the ONC exhibit wind-driven mass loss rates of up to $\dot{M}_{\rm{ext}} \sim 10^{-6}\,M_\odot$~yr$^{-1}$ \citep[see also e.g.][]{Henney98, Henney99}.
At present, external photoevaporation remains an under-studied component of the planet formation puzzle. It has long been thought that the majority of star formation occurs in clusters or associations which include OB stars \citep{Miller78} and that mass loss due to external photoevaporation can be more rapid than the $\dot{M}_\mathrm{acc/int} \sim 10^{-8}\,M_\odot$~yr$^{-1}$ typical of stellar accretion \citep[e.g.][]{Manara12} and internal photoevaporation \citep[e.g.][]{Owen10}. Thus it is reasonable to ask why external photoevaporation has, until now, not featured heavily in models for planet formation. While there is no one answer to this question, two important historical considerations are compelling in this context:
\begin{itemize}
\item The first is a selection bias. Low mass star forming regions (SFRs) are more numerous, so they are also the closest to us \citep[although nearby star formation is probably also affected by the expansion of a local supernova bubble --][]{Zucker22}. Surveys with, for example, ALMA, have rightly prioritised bright and nearby protoplanetary discs in these low mass SFRs. However, when accounting for the relative number of stars, these regions are not typical birth environments for stars and their planetary systems (see Section~\ref{sec:demographics_SFRs}). This has motivated recent studies of more typical birthplaces that experience strong external irradiation from neighbouring massive stars \citep[e.g.][]{Ansdell17,Eisner18,vanTerwisga19}.
\item The second is the well-known `proplyd lifetime problem' in the ONC \citep[e.g.][]{Storzer99b}. The presence or absence of a near infrared (NIR) excess, indicating inner material, was one of the earliest available probes for studying disc physics. The high fraction ($\sim 80$~percent) of stars with a NIR excess in the ONC \citep{Hillenbrand98} was an apparent paradox that appeared to undermine the efficacy of external photoevaporation in influencing disc populations. We discuss this problem in Section~\ref{sec:stellar_dynamics}.
\end{itemize}
Given the apparently high fraction of planet forming discs that undergo at least some degree of external photoevaporation, this process has particular relevance for modern efforts to connect observed disc populations to exoplanets \citep[e.g.][]{2018haex.bookE.143M}. Now is therefore an opportune time to take stock of the findings from the previous three decades.
In this review, we first summarise the theory of external photoevaporation in Section~\ref{sec:theory}, including both analytic estimates that provide an intuition for the problem as well as state-of-the-art simulations and microphysics. We address the observational signatures of disc winds in Section~\ref{sec:obs}, including inferences of individual mass loss rates. We consider the role of external photoevaporation for disc evolution and planet formation in Section~\ref{sec:planet_formation}, including evidence from disc surveys. In Section~\ref{sec:SFRs} we contextualise these studies in terms of the physics and demographics of star forming environments. Finally, we summarise the current understandings and open questions in Section~\ref{sec:summary}.
\section{Theory of externally driven disc winds}
\label{sec:theory}
\subsection{The most basic picture}
We begin by reviewing our theoretical understanding of external photoevaporation. At the very basic level one can determine whether material will be unbound from a potential by comparing the mean thermal velocity of particles (i.e. the sound speed) in the gas with the local escape velocity. In an isothermal system, equating the sound speed and escape velocity yields the gravitational radius, beyond which material is unbound
\begin{equation}
R_{\rm{g}} = \frac{GM_*}{c_{\rm{s}}^2},
\end{equation}
where $M_*$ is the point source potential mass, and $c_{\rm{s}}$ is the sound speed. Therefore if an isothermal disc were to extend beyond $R_{\rm{g}}$, then material would be lost in a wind. Consider now an isothermal disc that is entirely interior to the gravitational radius and is hence fully bound (upper panel of Figure \ref{fig:basicRgrav}). If this disc is then externally irradiated, the temperature and hence sound speed increases, driving down the gravitational radius to smaller values, potentially moving interior to the edge of the disc and unbinding its outer parts (lower panel of Figure \ref{fig:basicRgrav}).
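To attach numbers to this picture, one can evaluate $R_\mathrm{g} = GM_*/c_\mathrm{s}^2$ for representative gas temperatures. The temperatures and mean molecular weights below are illustrative assumptions (roughly EUV-heated ionised gas versus FUV-heated neutral gas), not values taken from a specific model:

```python
# Gravitational radius R_g = G M_* / c_s^2 for a 1 M_sun star, with
# c_s^2 = k_B T / (mu m_H). Temperatures and mean molecular weights below
# are illustrative assumptions, not model results.
G = 6.674e-8        # cm^3 g^-1 s^-2
k_B = 1.381e-16     # erg K^-1
m_H = 1.673e-24     # g
M_sun = 1.989e33    # g
au = 1.496e13       # cm

def R_g_au(T, mu, M_star=M_sun):
    cs2 = k_B * T / (mu * m_H)
    return G * M_star / cs2 / au

print(R_g_au(1e4, 0.6))   # EUV-heated, ionised gas: of order 10 au
print(R_g_au(1e3, 1.3))   # FUV-heated, neutral gas: of order 100 au
```

The order-of-magnitude difference between the two cases is why FUV heating can only unbind the outer disc, while EUV heating can drive flows from much smaller radii.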
\begin{figure}
\centering
\vspace{-0.5cm}
\includegraphics[width=\columnwidth]{Figures/BasicRgrav2.png}
\caption{A schematic of the basic picture of external photoevaporation in terms of the gravitational radius. The gravitational radius is that beyond which mean thermal motions (propagating at the sound speed) exceed the escape velocity and are unbound. In this basic picture, a disc smaller than the gravitational radius will hence not lose mass. External UV irradiation heats the disc, leading to faster mean thermal motions (a higher sound speed), driving the gravitational radius to smaller radii and unbinding material in the disc. }
\label{fig:basicRgrav}
\end{figure}
The details of external photoevaporation do get substantially more complicated than the above picture, for example with pressure gradients helping to launch winds interior to $R_{\rm{g}}$. However, this picture provides a neat basic insight into how external photoevaporation can instigate mass loss in otherwise bound circumstellar planet-forming discs.
\subsection{EUV and FUV driven flows}
\label{sec:EUVvFUV}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/EUV_flowschem.pdf}
\caption{}
\label{fig:EUVschem}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/FUV_flowschem.pdf}
\caption{
}
\label{fig:FUVschem}
\end{subfigure}
\caption{Schematic diagrams of the flow structure in EUV (Figure~\ref{fig:EUVschem}) and FUV (Figure~\ref{fig:FUVschem}) externally driven winds, adapted from \citet{Johnstone98}. In the EUV driven wind, the flow from the disc edge at radius $R_\mathrm{d}$ travels at a subsonic velocity through the thin photodissociation region (PDR) of thickness $xR_\mathrm{d}$ with $x\lesssim 1.5$ before reaching the ionisation front (IF). Mass loss is therefore determined by the thermal pressure at the IF. For $x\gtrsim 1.5$, the wind is launched from the disc edge at a supersonic velocity, producing a shock front that is reached before the IF. In this case, the mass loss rate is determined by the thermal conditions in the PDR. }
\label{fig:schem_wind}
\end{figure}
\subsubsection{Flux units}
Before considering the physics of EUV and FUV driven winds, a note on the units canonically used to measure UV fluxes is necessary. While the ionising flux is usually measured in photon counts per square centimetre per second, FUV flux is normally expressed in terms of the Habing unit, written $G_0$ \citep{1968BAN....19..421H}. This is the flux integrated over $912{-}2400\textup{\AA}$, normalised to the value in the solar neighbourhood, i.e.
\begin{equation}
\left(\frac{F_{FUV}}{G_0}\right) = \int_{912\textup{\AA}}^{2400\textup{\AA}} \frac{F_\lambda d\lambda}{1.6\times10^{-3}\textrm{erg\,s}^{-1}\,\textrm{cm}^{-2}}.
\end{equation}
Another measure of the FUV field strength is the Draine unit \citep{1978ApJS...36..595D}, which is a factor 1.71 larger than the Habing unit. Hence $10^3 \, G_0 \approx 585$~Draines. We highlight both because two similar units that differ by a factor of order unity can, and do, lead to confusion. For reference, the UV environments that discs are exposed to in star forming regions range from $\ll 1\,G_0$ (i.e. embedded discs) to $\sim10^{7}\,G_0$ (discussed further in Section~\ref{sec:SFRs}).
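Since the two units differ only by a constant factor, conversion is a one-liner; the sketch below simply encodes the factor 1.71 quoted above:

```python
# Unit bookkeeping for FUV fields: the Draine unit is 1.71x the Habing unit,
# so the same field has a smaller numerical value when quoted in Draines.
HABING_FLUX = 1.6e-3          # erg s^-1 cm^-2, the Habing (G_0) normalisation
DRAINE_PER_HABING = 1.71

def habing_to_draine(G0):
    return G0 / DRAINE_PER_HABING

def draine_to_habing(chi):
    return chi * DRAINE_PER_HABING

print(habing_to_draine(1e3))  # ~585 Draines, as quoted in the text
```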
\subsubsection{Flow geometry}
The basic physical picture of an externally photoevaporating protoplanetary disc was laid out by \citet{Johnstone98}, and has remained largely the same since. We summarise that picture here because it is useful for what follows, but refer interested readers to the more detailed discussion in that original work as well as that of \citet{Storzer99b}.
The thermal wind may be launched by ionising EUV photons (energies $h\nu >13.6$~eV), heating gas to temperatures $T\sim 10^4$~K, or by photodissociating FUV photons ($6\,\rm{eV}< h\nu < 13.6$~eV), yielding temperatures of $T\sim 100{-}1000$~K \citep{Tielens85, Hollenbach97}. The EUV photons penetrate down to the ionisation front, at radius $R_\mathrm{IF} = (1+x)R_\mathrm{d}$ from the disc-hosting star, with disc outer radius $R_\mathrm{d}$. The ionised gas outside of $R_\mathrm{IF}$ is optically thin to the FUV photons, which penetrate down to $R_\mathrm{d}$, producing a neutral PDR of thickness $x R_\mathrm{d}$. Whichever photons drive the photoevaporative wind, the flow geometry far from the disc surface is approximately spherically symmetric, since it is accelerated by the radial pressure gradient (cf. the Parker wind). We will start with this approximation of spherical geometry, which guides the following analytic intuition.
\subsubsection{EUV driven winds}
For an EUV driven wind, the gravitational radius is $R_\mathrm{g}\sim 10$~au (depending on the stellar host mass), such that we are generally in a regime in the outer disc where $R_\mathrm{d}\gg R_\mathrm{g}$, and a wind can be launched. If the EUV flux is sufficiently strong, the ionisation front (IF) sits close to the disc surface, making the PDR thin ($x \lesssim 1.5$; see Section~\ref{sec:FUVwind}). In this case, the basic geometry of the system is shown in Figure~\ref{fig:EUVschem}. The thermal pressure at the disc surface is determined by the ionisation rate, with the flow proceeding subsonically through the PDR. If we assume isothermal conditions in the PDR, then the density $n_\mathrm{I}$ is constant: $n_\mathrm{I} = n_0 = N_\mathrm{D}/(xR_\mathrm{d})$, where $N_\mathrm{D}$ is the column density of the PDR, i.e. that required to produce an optical depth $\tau_\mathrm{FUV} \sim 1$. This column density is dominated by the base of the wind, $N_\mathrm{D}\approx n_0 R_\mathrm{d}$, but is dependent on the microphysics and dust content (see Section~\ref{sec:microphysics}). For our purposes, we will simply adopt $N_\mathrm{D} \sim 10^{21}$~cm$^{-2}$ \citep[although see the direct calculations of][for example]{Storzer99b}.
Since the density in the PDR is constant, the velocity in the flow follows $v_\mathrm{I} \propto r^{-2}$ to maintain a constant mass flux. In order to conserve mass and momentum flux, the velocity at the ionisation front must be $v_\mathrm{IF} = c_\mathrm{s,I}^2/2c_\mathrm{s,II} \sim 0.5$~km~s$^{-1}$, where $c_\mathrm{s,I} \approx 3$~km~s$^{-1}$ is the sound speed in the PDR and $c_\mathrm{s,II} \approx 10$~km~s$^{-1}$ beyond the IF. We can then write the mass loss rate:
\begin{equation}
\dot{M}_\mathrm{EUV} = 4\pi \mathcal{F} (1+x)^2 R_\mathrm{d}^2 n_\mathrm{I} m_\mathrm{I} \frac{c_\mathrm{s,I}^2}{2c_\mathrm{s,II}} ,
\end{equation}
where $\mathcal{F}$ is a geometric factor and $m_\mathrm{I}$ is the mean molecular mass in the PDR. Since $n_\mathrm{I} \propto 1/x$ this mass loss rate appears to diverge in the limit of a thin PDR. However, $x$ must satisfy the recombination condition such that:
\begin{equation}
\label{eq:recomb}
\frac{f_\mathrm{r} \Phi }{4\pi d^2} = \int_{R_\mathrm{IF}}^\infty \alpha_\mathrm{B} n_{\mathrm{II}}^2 \,\mathrm{d}r = R_\mathrm{IF}^4\left(\frac{m_\mathrm{I} v_{\rm{IF}}}{m_\mathrm{II} v_\mathrm{II}}\right)^2 \int_{R_\mathrm{IF}}^\infty \alpha_\mathrm{B} \frac{n_{\rm{I}}^2}{r^4} \,\mathrm{d}r
\end{equation}where $\Phi$ is the EUV counts of the ionising source at distance $d$, $f_\mathrm{r}$ is the fraction of photons unattenuated by the interstellar medium (ISM), and $\alpha_\mathrm{B} = 2.6 \times 10^{-13}$~cm$^{3}$~s$^{-1}$ is the recombination coefficient for hydrogen with temperature $10^4$~K \citep[e.g.][]{Osterbrock89}. Substituting $R_\mathrm{IF} = (1+x)R_\mathrm{d}$ into equation~\ref{eq:recomb} and adopting typical values, we can write a defining equation for $x$ that can be solved numerically. However, in the limit of $x\ll 1$ we can also simply estimate:
\begin{equation}
\label{eq:Mdot_EUV}
\dot{M}_\mathrm{EUV} = 5.8 \times 10^{-8} \epsilon_\mathrm{EUV} \mathcal{F} \left( \frac{d}{0.1\, \mathrm{pc}}\right)^{-1} \left(\frac{ f_\mathrm{r} \Phi}{10^{49} \, \mathrm{s}^{-1}} \right)^{1/2} \left( \frac{ R_\mathrm{d}}{100 \, \mathrm{au}}\right)^{3/2} \, M_\odot \, \rm{yr}^{-1},
\end{equation}which is the solution for an ionised globule with no PDR physics \citep{Bertoldi90}. Here, $\epsilon_\mathrm{EUV} \sim 1$ is a correction factor that absorbs uncertainties in the PDR physics and geometry. We notice that the EUV driven wind is super-linearly dependent on $R_\mathrm{d}$, but only scales with the square root of the EUV photon count.
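Equation~\ref{eq:Mdot_EUV} is straightforward to evaluate. The sketch below sets $\epsilon_\mathrm{EUV}$ and $\mathcal{F}$ to unity as an illustrative assumption and confirms the quoted scalings:

```python
# Evaluation of the EUV-driven mass loss rate of Eq. (Mdot_EUV).
# eps and F (the correction and geometric factors) default to unity here,
# as an assumption for illustration.
def mdot_euv(d_pc, phi_fr, R_d_au, eps=1.0, F=1.0):
    """EUV-driven mass loss rate in M_sun/yr (Bertoldi-type ionised globule)."""
    return (5.8e-8 * eps * F
            * (d_pc / 0.1) ** -1
            * (phi_fr / 1e49) ** 0.5
            * (R_d_au / 100.0) ** 1.5)

print(mdot_euv(0.1, 1e49, 100.0))   # 5.8e-8 M_sun/yr at the fiducial point
# Scaling checks: halving the disc radius cuts Mdot by 2^(3/2) ~ 2.8,
# while quadrupling the ionising photon output only doubles it.
```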
\subsubsection{FUV driven winds}
\label{sec:FUVwind}
We will here proceed under the assumption that the outer radius and FUV flux are sufficient to produce a supersonic, FUV heated wind. This means that $R_\mathrm{d}\gg R_\mathrm{g}$, where $R_\mathrm{g}$ is the gravitational radius in the PDR. In this case, where the wind mass loss is determined by FUV heating, the neutral wind must launch at (constant) supersonic velocity $v_\mathrm{I} = v_0\gtrsim c_\mathrm{s,I}$ from the disc surface. To conserve mass, the density in the PDR drops as $n_\mathrm{I} \propto r^{-2}$. The wind travels faster than the velocity required at the IF ($v_\mathrm{IF} = c_\mathrm{s,I}^2/2c_\mathrm{s,II}$, as before) and must therefore eventually pass through a shock front at radius $R_\mathrm{shock}$. Assuming this shock is isothermal, the density $n_\mathrm{I}$ increases by a factor $\sim M^2$, where $M = v_0/c_{\mathrm{s,I}}$ is the Mach number. If the region between the shock and ionisation fronts is isothermal, then $n_\mathrm{I}$ is constant and $v_\mathrm{I} \propto r^{-2}$ to conserve mass. This geometry is shown in Figure~\ref{fig:FUVschem}.
Solving mass and momentum conservation requirements for the flow, we have:
\begin{equation}
R_\mathrm{shock} = \sqrt{ \frac{v_0}{2c_\mathrm{s, II}}} R_\mathrm{IF},
\end{equation}which immediately sets a minimum distance below which EUV mass loss dominates over FUV. If $R_\mathrm{shock}< R_\mathrm{d}$ then the shock front is inside the disc, the flow is subsonic at the base, and we are back in the EUV driven wind regime. Given that $\sqrt{{v_0}/{2c_\mathrm{s, II}}} \sim 0.4{-}0.6$ (with $v_0 \sim 3{-}6$~km~s$^{-1}$), we require $R_\mathrm{IF}\gtrsim 2.5 R_\mathrm{d}$ for an FUV driven wind to be launched. Coupled with equation~\ref{eq:recomb}, this gives the minimum distance required for the launching of FUV driven winds. Conversely, a maximum distance exists from the requirement that the gas can be sufficiently heated to escape the host star, meaning that FUV driven winds only occur at intermediate separations from a UV source.
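The quoted numbers follow directly from the shock condition; a quick sketch using the launch speeds and ionised sound speed given in the text:

```python
import math

# R_shock = sqrt(v0 / (2 c_s,II)) * R_IF must exceed R_d for an FUV wind.
# c_s,II ~ 10 km/s and v0 ~ 3-6 km/s are the values quoted in the text.
cs_II = 10.0                                          # km/s
ratios = {v0: math.sqrt(v0 / (2 * cs_II)) for v0 in (3.0, 6.0)}
print(ratios)
# ratios are ~0.39-0.55, so requiring R_shock > R_d translates to
# R_IF > R_d / ratio ~ 1.8-2.6 R_d, i.e. the R_IF >~ 2.5 R_d criterion.
```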
Under the assumption that the FUV flux is sufficient to launch a wind and $R_\mathrm{IF}\gtrsim 2.5 R_\mathrm{d}$, the overall mass loss in the flow does not depend on what is going on outside the base of the FUV-launched wind. The mass loss rate in the FUV dominated case is simply:
\begin{equation}
\label{eq:Mdot_FUV}
\dot{M}_\mathrm{FUV} = 4\pi \mathcal{F} R_\mathrm{d}^2 n_0 v_0 m_\mathrm{I} \approx 1.9 \times 10^{-7} \epsilon_\mathrm{FUV} \mathcal{F} \left( \frac{R_\mathrm{d}}{100\,\rm{au}} \right) \, M_\odot \, \rm{yr}^{-1},
\end{equation}where all of the difficult physics is contained within a convenient correction factor $\epsilon_\mathrm{FUV}$. In reality, this expression is only helpful in the limit of a very extended disc, and computing the mass loss rate in general requires a more detailed treatment that we will discuss in Section~\ref{sec:microphysics}.
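The normalisation in equation~\ref{eq:Mdot_FUV} follows from the fiducial numbers already introduced ($N_\mathrm{D}\sim 10^{21}$~cm$^{-2}$, so $n_0 = N_\mathrm{D}/R_\mathrm{d}$, and $v_0 \sim 3$~km~s$^{-1}$); assuming a mean particle mass of $1.3\,m_\mathrm{H}$ (our assumption) and $\mathcal{F} = \epsilon_\mathrm{FUV} = 1$ recovers the prefactor:

```python
import math

# Reproducing the normalisation of Eq. (Mdot_FUV): 4*pi*F*Rd^2*n0*v0*m_I.
# N_D = 1e21 cm^-2 and v0 = 3 km/s are the fiducial values from the text;
# the mean particle mass m_I = 1.3 m_H is an assumption for illustration.
au, m_H = 1.496e13, 1.673e-24          # cm, g
M_sun, yr = 1.989e33, 3.156e7          # g, s
R_d = 100.0 * au
n0 = 1e21 / R_d                        # cm^-3, from the tau_FUV ~ 1 column
v0 = 3e5                               # cm s^-1
m_I = 1.3 * m_H
mdot = 4 * math.pi * R_d**2 * n0 * v0 * m_I        # g/s
print(mdot * yr / M_sun)               # ~1.9e-7 M_sun/yr, matching Eq. (Mdot_FUV)
```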
Nonetheless, we gain some insight from the estimate in equation~\ref{eq:Mdot_FUV}. First, we see that the mass loss rate is not dependent on the FUV flux $F_\mathrm{FUV}$. In reality, there is some dependence due to the increased temperature in the PDR with increasing $F_\mathrm{FUV}$, but this dependence is weak for $F_\mathrm{FUV}\gtrsim 10^4\, G_0$ \citep{Tielens85}. From equation~\ref{eq:Mdot_EUV}, we also see that the mass loss rate $\dot{M}_\mathrm{FUV}$ scales less steeply with $R_\mathrm{d}$ than $\dot{M}_\mathrm{EUV}$ does. This means that once a disc has been sufficiently truncated by these external winds, the FUV dictates the mass loss rate. Since the time-scale for this depletion can be short (see Section~\ref{sec:gas_evolution}), we expect FUV flux to dominate mass loss over the disc lifetime for reasonable EUV fluxes.
While this picture is useful to gain some insight into the physics of external photoevaporation, accurately computing the mass loss rate in the wind requires more detailed numerical modelling of the PDR physics. We consider efforts in this direction to date as follows.
\subsection{Microphysics of external photoevaporation}
\label{sec:microphysics}
One of the biggest challenges surrounding external photoevaporation is that it depends upon a wide array of complicated microphysics. The wind is launched primarily by the FUV radiation field, and determining the temperature in this launching region, which is critical, requires solving photodissociation region (PDR) microphysics, which in itself can involve many hundreds of species and reactions and complicated thermal processes \citep[e.g.][]{Tielens85, Hollenbach97}. In particular, as we will discuss below, the line cooling is difficult to estimate in 3D, meaning most PDR codes are limited to 1D \citep[e.g.][]{2007A&A...467..187R}. The dust and PAH abundances in externally driven winds also play key roles in determining the mass loss rate, but may differ substantially from the abundances in the ISM \citep[e.g.][]{2013ApJ...765L..38V, Facchini16}. In addition to the complicated PDR-dynamics, EUV photons establish an ionisation front downstream in the wind which affects the observational characteristics. Here we introduce some of the key aspects of the microphysics of external photoevaporation in more detail.
\subsubsection{The theory of dust grain entrainment in external photoevaporative winds}
We begin by considering the dust microphysics of external photoevaporation. First it is necessary to provide some context by discussing briefly how dust evolves in the disc itself. Canonically, in the ISM the dust-to-gas mass ratio is $10^{-2}$ and grains typically follow a size distribution of the form
\begin{equation}
\frac{\mathrm{d} n(a_\mathrm{s})}{\mathrm{d} a_\mathrm{s}} \propto a_\mathrm{s}^{-q}
\end{equation}
\citep{1977ApJ...217..425M} with grain sizes spanning $a_\mathrm{s}\sim10^{-3}-1$\,$\mu$m \citep{2001ApJ...548..296W} and $q\approx3.5$. In protoplanetary discs the dust grains grow to larger sizes which eventually (when the Stokes number is of order unity) dynamically decouples them from the gas, leading to radial drift inwards to the inner disc. This growth proceeds more quickly in the inner disc \citep[e.g.][]{Birnstiel12} and so there is a growth/drift front that proceeds from the inner disc outwards. It is not yet clear how satisfactory our basic models of this process are, particularly in terms of the timescale on which it operates, since if left unhindered by pressure bumps in the disc it quickly results in most of the larger drifting dust being deposited onto the central star \citep[see e.g.][]{Birnstiel12, 2015PASP..127..961A, 2020MNRAS.498.2845S}. However for our purposes the key point is that the abundance of smaller ($\sim \mu$m) grains in the disc ends up depleted relative to the ISM due to grain growth.
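This depletion of small grains matters because, for an MRN-like distribution, the total geometric cross section is dominated by the smallest grains while the mass is dominated by the largest. A quick sketch (with an assumed $a_\mathrm{min}$) of how the cross section per unit mass falls as grains grow:

```python
# For dn/da ∝ a^(-3.5), int a^2 dn/da da is dominated by a_min while
# int a^3 dn/da da is dominated by a_max, so cross section per unit mass
# scales as ~a_max^(-1/2) once a_max >> a_min.
# a_min = 5e-3 um is an assumed value, in the same (arbitrary) units as a_max.
def sigma_per_mass(a_max, a_min=5e-3):
    area = 2.0 * (a_min ** -0.5 - a_max ** -0.5)   # ∝ ∫ a^2 a^-3.5 da
    mass = 2.0 * (a_max ** 0.5 - a_min ** 0.5)     # ∝ ∫ a^3 a^-3.5 da
    return area / mass

# growing a_max from 1 um to 100 um cuts the cross section per mass ~10x
ratio = sigma_per_mass(100.0) / sigma_per_mass(1.0)
print(ratio)
```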
The nature of dust in the external photoevaporative wind is important for three key reasons
\begin{enumerate}
\item The dust in the wind sets the extinction in the wind and hence has a significant impact on the mass loss rate
\item The extraction of dust in the wind could have implications for the mass reservoir in solids for terrestrial planet formation and/or the cores of gas giants.
\item The entrainment of dust in winds could provide observational diagnostics of the external photoevaporation process (we will discuss this further in section \ref{sec:obs}).
\end{enumerate}
and so the key questions are what size, and how much, dust is entrained in an external photoevaporative wind. This problem was addressed in the semi-analytic work of \cite{Facchini16}. They solved the flow structure semi-analytically (we discuss semi-analytic modelling of external photoevaporative winds further in section \ref{sec:semianalytic}) and calculated the maximum entrained grain size. The efficiency of dust entrainment in the wind is dependent on the balance of the \citet{Epstein24} drag exerted on a dust grain of density $\rho_\mathrm{s}$ by the outflowing gas, with thermal velocity $v_\mathrm{th}$, against the gravity from the host star. Thus the condition for a dust grain to be lost to external photoevaporation is \citep{Facchini16}:
\begin{equation}
a_\mathrm{s} < a_\mathrm{ent} =\frac{1}{4\pi \mathcal{F} } \frac{v_\mathrm{th}\dot{M}_\mathrm{ext}}{G M_* \rho_\mathrm{s}} ,
\label{equn:entrainedSize}
\end{equation}
where $4\pi \mathcal{F}$ is the solid angle subtended by the wind \citep[see][]{Adams04}.
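An order-of-magnitude evaluation of equation~\ref{equn:entrainedSize} illustrates the scale involved. The values of $v_\mathrm{th}$, $\dot{M}_\mathrm{ext}$, $\rho_\mathrm{s}$ and $\mathcal{F}$ below are illustrative assumptions, not results from \cite{Facchini16}:

```python
import math

# Order-of-magnitude evaluation of Eq. (entrainedSize) in cgs units.
# v_th, mdot, rho_s and F below are illustrative assumptions.
G, M_sun, yr = 6.674e-8, 1.989e33, 3.156e7
v_th = 3e5                    # cm/s, thermal speed in the PDR-heated wind
mdot = 1e-7 * M_sun / yr      # g/s, a representative external mass loss rate
rho_s = 1.0                   # g/cm^3, internal grain density
F = 1.0                       # fraction of 4*pi subtended by the wind
a_ent = v_th * mdot / (4 * math.pi * F * G * M_sun * rho_s)   # cm
print(a_ent * 1e4)            # ~10 um: only small grains are entrained
```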
The main outcome of the above is that only small grains are entrained in the wind and the mean cross section is reduced. Therefore, when grain growth proceeds to the disc outer edge, the dust-to-gas mass ratio, mean cross section, and hence extinction in the wind all drop substantially. This makes external photoevaporation more effective than previously appreciated when the dust in the wind was treated as ISM-like. This lower cross section in the wind is now accounted for in numerical models of external photoevaporation, assuming some constant low value \citep[e.g. the FRIED grid of mass loss rates of][discussed further in Section~\ref{sec:RHDmodels}]{Haworth18b}. However, what is still missed in models is that the cross section in the wind is actually a function of the mass loss rate \citep{Facchini16}, and so needs to be solved iteratively with the dynamics.
\cite{2021MNRAS.508.2493O} also recently introduced dynamically decoupled dust into 1D isothermal models of externally irradiated discs (discussed more in section \ref{sec:boundaryconditions}) finding that it does indeed lead to a radial decrease in the maximum grain size which could be searched for observationally. This gradient in grain sizes has been observed by \citet{2001Sci...292.1686T} and \citet{Miotello12}, who studied the large proplyd 114-426 on the near side of the ONC, calculating the attenuation of the background radiation through translucent parts of the disc. They found a maximum grain size decreasing from 0.7\,$\mu$m to 0.2\,$\mu$m, moving away from the disc outer edge over a distance of around 250\,au, consistent with \cite{2021MNRAS.508.2493O}.
\subsubsection{Photodissociation region physics for external photoevaporation}
\label{sec:PDRmicrophysics}
The FUV excited photodissociation region (PDR) microphysics determines the composition, temperature and therefore the dynamics of the inner parts of external photoevaporative winds. As discussed above, this FUV/PDR part of the inner wind can determine the mass loss rate from the disc. This is not a review on PDR physics \citep[for further information see e.g. ][]{Tielens85, Hollenbach97, 2008ARA&A..46..289T} but given its importance for setting the temperature, and therefore the dynamics, we provide a brief overview of some relevant processes.
We focus primarily on the main heating and cooling contributions. These are summarised as a function of extinction for an external FUV field of 300\,G$_0$ in Figure \ref{fig:PdrHeatingCooling}, which is taken from \cite{Facchini16}. Note that, as we will discuss below, the exact form of these plots depends on the FUV field strength and the assumed composition, e.g. the metallicity, dust grain properties and polycyclic aromatic hydrocarbon (PAH) abundance.
\begin{figure}
\centering
\vspace{-0.5cm}
\includegraphics[width=0.48\columnwidth]{Figures/heating300fig2.pdf}
\includegraphics[width=0.48\columnwidth]{Figures/cooling300fig2.pdf}
\vspace{-0.2cm}
\caption{A summary of the key heating and cooling mechanisms in a medium irradiated by a 300\,G$_0$ FUV radiation field. PAH driven photoelectric heating dominates up to high $A_\mathrm{V}$, where cosmic ray heating takes over. The key cooling mechanism in the wind is the escape of line photons. From \protect\cite{Facchini16}.}
\label{fig:PdrHeatingCooling}
\end{figure}
The heating mechanism that is anticipated to be most important for external photoevaporation is photoelectric heating (see the left hand panel of Figure \ref{fig:PdrHeatingCooling}), which occurs when PAHs lose electrons following photon absorption, increasing the gas kinetic energy \citep{2008ARA&A..46..289T}. The impact this can have on the mass loss rate is illustrated in Figure \ref{fig:Metallicity_PAH}, which shows the results of numerical models of an externally photoevaporating 100\,au disc around a 1\,M$_\odot$ star in a 1000\,G$_0$ FUV environment as a function of metallicity. Each coloured set of points connected by a line represents a different PAH-to-dust ratio. Reducing the PAH-to-dust ratio has a much larger impact on the mass loss rate than changing the overall metallicity. These models are previously unpublished extensions of the 1D PDR-dynamical calculations of \cite{Haworth18b}, which are discussed further in Section~\ref{sec:RHDmodels}. When the metallicity is reduced, the PAH abundance and heating are also lowered, but so is the line cooling. Changes in metallicity therefore only lead to relatively small changes to the mass loss rate, as the heating and cooling changes compensate. Conversely, changing only the PAH-to-dust ratio can lead to dramatic changes in the mass loss rate.
A key issue for the study of external photoevaporation is that the PAH abundance in the outer parts of discs and in winds is very poorly constrained. For the proplyd HST 10 in the ONC, \cite{2013ApJ...765L..38V} inferred a PAH abundance relative to gas around a factor 50 lower than the ISM and a factor 90 lower than NGC 7023 \citep{2012PNAS..109..401B}. PAH detections around T Tauri stars are generally relatively rare \citep{2006A&A...459..545G, 2007A&A...476..279G}, which leads us to expect that the PAH abundance is depleted in discs irrespective of external photoevaporation. This lower PAH abundance would mean less photoelectric heating, resulting in lower external photoevaporative mass loss rates. Conversely, \cite{2021A&A...653A..21L} demonstrated that PAH \textit{emission} from the inner disc could be suppressed when PAHs aggregate into clumps, which crucially would not suppress the heating contribution from PAHs (Lange, priv. comm.). However, it is unclear if that same model for PAH clustering applies at larger radii in the disc, let alone in the wind itself, and so this is to be addressed in future work (Lange, priv. comm.).
\begin{figure}
\centering
\vspace{-0.5cm}
\includegraphics[width=0.6\columnwidth]{Figures/MdotComparison_fPAH.pdf}
\caption{External photoevaporative mass loss rate as a function of metallicity ($Z/Z_\odot$) for a 100\,au disc around a 1\,M$_\odot$ star irradiated by a 1000$G_0$ radiation field. Each coloured line represents a different value of the base PAH-to-dust mass ratio scaling $f_{PAH}$. These are extensions of the FRIED PDR-dynamical models of \protect\cite{Haworth18b}. When the overall metallicity is scaled, there are changes to both the heating and cooling contributions that broadly cancel out. Conversely, varying the PAH-to-dust ratio (which is very uncertain) can lead to large changes in the mass loss rate. Note that these calculations have a floor value of $10^{-11}$\,M$_\odot$\,yr$^{-1}$. }
\label{fig:Metallicity_PAH}
\end{figure}
Given its potential role as the dominant heating mechanism, determining the PAH abundance in the outer regions of discs is vital for understanding the magnitude of mass loss rates and so should be considered a top priority in the study of external photoevaporation. The \textit{James Webb Space Telescope} (JWST) should be able to constrain abundances by searching for features such as the 15-20\,$\mu$m emission lines \citep[e.g.][]{Boulanger98, 1999ESASP.427..579T, 2000A&A...354L..17M, 2010A&A...511A..32B, 2015PASP..127..584R, 2022A&A...661A..80J}. \cite{2022arXiv220208252E} also demonstrated with synthetic observations that the upcoming \textit{Twinkle} \citep{2019ExA....47...29E} and \textit{Ariel} \citep{2018ExA....46..135T, 2021arXiv210404824T} missions should be able to detect the PAH 3.3\,$\mu$m feature towards discs, at least out to 140\,pc. Even if detections with \textit{Twinkle}/\textit{Ariel} do not succeed in high UV environments because of the larger distances to those targets, constraining PAH abundances in the outer disc regions of nearby, lower UV regions would still provide valuable constraints. These will be an important step towards calibrating models, refining the mass loss rate estimates and hence our understanding of external photoevaporation.
Often the dominant cooling in PDRs is the escape of line photons from species such as CO, C, O and C$^+$. Evaluating this is the most challenging component of PDR calculations, since to estimate the degree of line cooling, line radiative transfer in principle needs to sample all directions ($4\pi$ steradians) from every single point in the calculation. For this reason, most PDR studies to date (even without dynamics) have been 1D, where it is assumed that exciting UV radiation and cooling radiation can only propagate along a single path, with all other trajectories infinitely optically thick \citep[e.g.][]{1999ApJ...527..795K, 2006ApJS..164..506L, 2006MNRAS.371.1865B, 2007A&A...467..187R}. Most dynamical models of external photoevaporation with detailed PDR microphysics have also therefore been 1D. For example \cite{Adams04} used the \cite{1999ApJ...527..795K} 1D PDR code to pre-tabulate PDR temperatures as inputs for 1D semi-analytic dynamical models (we will discuss these in more detail in section \ref{sec:semianalytic}). Note that 2D models of other features of discs have circumvented this issue by assuming a dominant cooling direction, for example vertically through the disc \citep[e.g.][]{2016A&A...586A.103W}, or radially in the case of internal photoevaporation calculations \citep{2017ApJ...847...11W}. This approach is not applicable in multidimensional simulations of an externally driven wind, where there is no obvious or universally applicable dominant cooling direction.
The \textsc{3d-pdr} code developed by \cite{2012MNRAS.427.2100B} and based on the \textsc{ucl-pdr} code \citep{2006MNRAS.371.1865B} was the first code (and to our knowledge remains the only code) able to treat PDR models in 3D. It utilises a \textsc{healpix} scheme \citep{2005ApJ...622..759G} to estimate the line cooling in 3D without assuming preferred escape directions. \textsc{healpix} breaks the sky into samples of equal solid angle at various levels of refinement. For applications to external photoevaporation, \textsc{3d-pdr} was coupled with the Monte Carlo radiative transfer and hydrodynamics code \textsc{torus} \citep{2019A&C....27...63H} in the \textsc{torus-3dpdr} code \citep{2015MNRAS.454.2828B}, making 2D and 3D calculations possible in principle, which we will discuss more in section \ref{sec:RHDmodels}. However, doing 3D ray tracing from every cell in a simulation iteratively with a hydrodynamics calculation is prohibitively expensive. Finding ways to emulate the correct temperature without solving the full PDR chemistry may offer a way to alleviate this problem \citep[e.g.][]{2021A&A...653A..76H}.
\subsection{1D Semi-analytic models of the external photoevaporative wind flow structure}
\label{sec:semianalytic}
In Section \ref{sec:microphysics} we discussed the importance of PDR microphysics for determining the temperature structure and hence the flow structure of externally irradiated discs. We also noted that PDR calculations are computationally expensive and are usually limited to 1D geometries. Until recently, calculations of the mass loss rate that utilise full PDR physics have also been confined to 1D and solved semi-analytically. Here we briefly review those approaches.
First we describe the 1D approach to models of external photoevaporation and the justification for such a geometry. 1D models essentially follow the structure radially outwards along the disc mid-plane into the wind, as illustrated in Figure \ref{fig:1Dgeometry}. The grid is spherical, but the assumption is that the flow only applies over the solid angle subtended by the disc outer edge at $R_{\rm{d}}$. That fraction of $4\pi$ steradians is
\begin{equation}
\mathcal{F} = \frac{H_{\rm{d}}}{\sqrt{H_{\rm{d}}^2+R_{\rm{d}}^2}}
\end{equation}
for a disc scale height $H_\mathrm{d}$ (again see Figure \ref{fig:1Dgeometry}). The mass loss rate at a point in the flow at $R$ with velocity $\dot{R}$ and density $\rho$ is then
\begin{equation}
\dot{M} = 4\pi R^2\mathcal{F} \rho \dot{R}.
\end{equation}
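As an illustrative substitution (using an assumed, representative aspect ratio rather than a value from any specific model), a disc with $H_{\rm{d}}/R_{\rm{d}}=0.1$ gives

```latex
% Illustrative substitution: assumed aspect ratio H_d / R_d = 0.1
\mathcal{F} = \frac{H_{\rm{d}}}{\sqrt{H_{\rm{d}}^2+R_{\rm{d}}^2}}
            = \frac{0.1}{\sqrt{0.1^2 + 1^2}} \approx 0.0995
```

so the flow is taken to occupy only $\sim10$\,percent of the full $4\pi$ steradians, and the mass loss rate $\dot{M} = 4\pi R^2\mathcal{F}\rho\dot{R}$ is correspondingly about a tenth of that of an equivalent fully spherical wind.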
\begin{figure}
\centering
\vspace{-0.1cm}
\includegraphics[width=0.5\columnwidth]{Figures/1Dgeometry.png}
\vspace{-0.1cm}
\caption{A schematic of the 1D semi-analytic model structure and how it is used to estimate total mass loss rates. The flow solution is obtained along the disc mid-plane with appropriate boundary conditions, e.g. at the disc outer edge and at some critical point in the wind. This mid-plane flow is then assumed to apply over the entire solid angle subtended by the disc outer edge. }
\label{fig:1Dgeometry}
\end{figure}
The 1D geometry is justified by the expectation that material is predominantly lost from the disc outer edge. That expectation arises because:
\begin{enumerate}
\item Material towards the disc outer edge is the least gravitationally bound.
\item The vertical scale height is much smaller than the radial extent of the disc, which results in a higher density at the radial sonic point than at the vertical one \citep{Adams04}.
\end{enumerate}
This is demonstrated analytically in the case of compact discs in the appendix of \cite{Adams04}, who show that the ratio of mass lost from the disc surface to disc edge is
\begin{equation}
\frac{\dot{M}_\mathrm{surface}}{\dot{M}_\mathrm{edge}} \approx \left(\frac{R_{\rm{d}}}{R_{\rm{g}}}\right)^{1/2},
\end{equation}
where $\dot{M}_\mathrm{surface}$ and $\dot{M}_\mathrm{edge}$ are the mass loss rates from the disc upper layers and from the outer edge respectively. As before, $R_{\rm{d}}$ and $R_{\rm{g}}$ are the disc outer radius and gravitational radius respectively. That is, for larger, more strongly heated discs there is a more significant contribution from the disc surface. This has also been tested and validated in 2D radiation hydrodynamic simulations by \cite{2019MNRAS.485.3895H} (which we will discuss fully in Section~\ref{sec:RHDmodels}), who showed that, at least in a $10^{3}$G$_0$ environment, the majority of the mass loss comes from the disc outer edge and the rest from the outer 20\,percent of the disc surface. Mass loss rates in 2D and analogous 1D models were also found to agree to within a factor of two, with the 1D mass loss rates being the lower values. Mass loss rates computed in one dimension are therefore expected to be somewhat conservative but reasonable approximations.
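To give a sense of scale for this ratio (with illustrative values of $R_{\rm{d}}/R_{\rm{g}}$, not taken from \cite{Adams04} directly):

```latex
% Illustrative evaluation of the surface-to-edge mass loss ratio
\frac{\dot{M}_\mathrm{surface}}{\dot{M}_\mathrm{edge}}
  \approx \left(\frac{R_{\rm{d}}}{R_{\rm{g}}}\right)^{1/2}
  \approx
  \begin{cases}
    0.32, & R_{\rm{d}} = 0.1\,R_{\rm{g}} \;\text{(compact disc: edge-dominated)},\\
    1,    & R_{\rm{d}} = R_{\rm{g}} \;\text{(comparable contributions)}.
  \end{cases}
```

That is, a compact disc loses roughly three times more mass through its outer edge than through its surface, while for discs approaching the gravitational radius the two channels become comparable.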
\cite{Adams04} took a semi-analytic approach to solving for the flow structure by using pre-tabulated PDR temperatures from the code of \cite{1999ApJ...527..795K} in the flow equations. They found that the flow structure is analogous to a \cite{1965SSRv....4..666P} wind, but non-isothermal and with centrifugal effects. At each point in the flow, the pre-tabulated PDR temperatures are interpolated as a function of local density, incident FUV field and extinction. The boundary conditions used were the conditions at the disc outer edge and at the sonic point in the flow. They demonstrated both that FUV driven winds are dominant in setting the mass loss rate and that winds can be driven interior to the gravitational radius, down to $\sim 0.1-0.2\,R_{\mathrm{g}}$ \citep[see also e.g.][]{Woods96, Liffman03}.
\cite{Facchini16} took a similar approach to 1D semi-analytic models with pre-tabulated PDR temperatures from \cite{2012MNRAS.427.2100B}. As already discussed above, their main focus was on dust entrainment and the impact of grain growth in the disc on the dust properties in the wind. They found that the entrainment of only small grains, coupled with grain growth in the disc, reduces the extinction in the wind and can enhance the mass loss rate. In addition, they used a different approach to the outer boundary condition, finding a critical point in the modified Parker wind solution and taking into account deviations from isothermality at that point. They then integrated from that critical point inwards to the disc. Thanks to this different approach \cite{Facchini16} were able to compute solutions over a wider parameter space than before, particularly down to low FUV field strengths.
Semi-analytic models have offered a powerful and efficient tool for estimating mass loss rates in different regimes. However, there are still regions of parameter space where solutions are not possible, and semi-analytic models remain limited to 1D. To alleviate those issues we require radiation hydrodynamic simulations.
\subsection{Radiation hydrodynamic models of external photoevaporation}
\label{sec:RHDmodels}
Above we discussed semi-analytic calculations of the external wind flow structure. Those have the advantage that they are quick to calculate. However they are limited by being restricted to 1D solutions and by solutions not always being calculable. This leaves a demand for radiation hydrodynamic calculations capable of solving the necessary radiative transfer and PDR temperature structure in conjunction with hydrodynamics. Such calculations can solve for the flow structure in 2D/3D and in any scenario.
The radiation hydrodynamics of external photoevaporation is one of the more challenging problems in numerical astrophysics because the key heating responsible for launching the wind is described by PDR physics. That is, we are required to iteratively solve a chemical network that is sensitive to the temperature with a thermal balance that is sensitive to the chemistry. To make matters worse, the cooling in PDRs is non-local, being dependent on the escape probability of line photons into $4\,\pi$ steradians from any point in the simulation (see Section \ref{sec:PDRmicrophysics}). In other scenarios this does not cause significant issues if there is a clear dominant escape direction. For example, within a protoplanetary disc the main cooling can quite reasonably be assumed to occur vertically through the disc, since other trajectories have a longer path length (and column) through the disc \citep[``1+1D'' models, e.g.][]{2008ApJ...683..287G, 2016A&A...586A.103W}. Similarly, for internal winds there are models with radiation hydrodynamics and PDR chemistry, where the line cooling is evaluated along single paths radially on a spherical grid \citep[][]{2017ApJ...847...11W, 2018ApJ...857...57N,2018ApJ...865...75N, 2019ApJ...874...90W}. In the complicated structure of an external photoevaporative wind, however, this sort of geometric argument cannot be applied and so multiple samplings of the sky ($4\pi$ steradians) are required from every point in a simulation.
Although 3D cooling is ideally required, simulations have been performed using approximations to the cooling, in high UV radiation fields in particular, where the PDR is small. For example \cite{2000ApJ...539..258R} ran 2D axisymmetric simulations of discs irradiated face-on. In their calculations the optical depth and cooling are estimated using a single ray from the cell to the irradiating UV source (this same path is used for calculating the exciting UV and cooling radiation). They also employed a simpler PDR microphysics model than contemporary work such as \cite{Johnstone98}, which enabled the move from 1D to 2D-axisymmetry. \cite{2000ApJ...539..258R} studied the mass loss of proplyds as well as the observational characteristics using intensity maps derived from their dynamical models, some of which are illustrated in Figure \ref{fig:RichlingYorkeIntensities}. They found, in the first geometrically realistic EUV+FUV irradiation models, that rapid disc dispersal gives morphologies in various lines similar to those observed in the ONC.
The \textsc{torus-3dpdr} code \citep{2015MNRAS.454.2828B} is a key recent development in the direct radiation hydrodynamic modelling of external photoevaporation. It was constructed by merging components of the first fully 3D photodissociation region code \textsc{3d-pdr} \citep{2012MNRAS.427.2100B} with the \textsc{torus} Monte Carlo radiative transfer and hydrodynamics code \citep{2019A&C....27...63H}. \textsc{3d-pdr} (and hence \textsc{torus-3dpdr}) addresses the 3D line cooling issue using a \textsc{healpix} scheme, which breaks the sky into regions of equal solid angle. \textsc{torus-3dpdr} has been used to run a range of 1D studies of external photoevaporation. It has been shown to be consistent with semi-analytic calculations \citep{2016MNRAS.463.3616H}. It was used to study external photoevaporation in the case of very low mass stars, with a focus on Trappist-1 \citep{2018MNRAS.475.5460H}. The approach there was to run a grid of models providing the mass loss rate as a function of the UV field strength and disc mass/radius for a 0.08\,M$_\odot$ star, and to interpolate over that grid in conjunction with a disc viscous evolutionary code based on that of \citet{Clarke07} to evolve the disc. The usefulness of such a grid led to the FRIED (\textbf{F}UV \textbf{R}adiation \textbf{I}nduced \textbf{E}vaporation of \textbf{D}iscs) grid of mass loss rates, which has since been employed in a wide range of disc evolutionary calculations by various groups \citep[e.g.][]{Winter19a, Concha-Ramirez19,Sellek20}. Note, given the discussion above on the importance of PAHs, that the FRIED models use a dust-to-gas ratio a factor 33 lower than the canonical $10^{-2}$ for the ISM and a PAH-to-dust mass ratio a further factor of 10 lower than ISM-like. This is conservative (i.e. a PAH-to-gas ratio 1/330 of that in the ISM) compared to the factor $\sim$50 PAH depletion measured by \cite{2013ApJ...765L..38V}. 
Models therefore predict that PDR heating is still capable of driving significant mass loss even when the PAH abundance is heavily depleted compared to the ISM.
These applications were all still 1D, however, and so a growing theoretical framework is based on that geometric simplification. \cite{2019MNRAS.485.3895H} ran 2D-axisymmetric external photoevaporation calculations (with 3D line cooling, utilising the symmetry of the problem) and found that 1D calculations are, if anything, conservative, since the 2D mass loss rates were slightly higher. True 3D calculations, with 3D line cooling and the disc not irradiated isotropically or face-on, are yet to be performed. Though in principle \textsc{torus-3dpdr} is capable of this, in practice the 3D ray tracing of the \textsc{healpix} scheme makes such calculations computationally expensive.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/RichlingAndYorkeGallery.png}
\caption{A gallery of intensity maps ($\log_{10}$erg\,s$^{-1}$\,cm$^{-2}$\,sr$^{-1}$) resulting from the 2D-axisymmetric radiation hydrodynamic simulations of proplyds by \protect\cite{2000ApJ...539..258R}. The simulations utilised a simplified microphysics approach, which enabled them to model proplyds in 2D-axisymmetry with UV radiation incident from the top. These multi-wavelength synthetic line emission maps were compared with, and found to be consistent with, the properties of ONC proplyds. }
\label{fig:RichlingYorkeIntensities}
\end{figure}
\subsubsection{The disc-wind transition}
\label{sec:boundaryconditions}
Most of the models discussed so far impose a disc as a boundary condition from which a wind solution is derived. In reality the wind is not launched from some arbitrarily imposed point in an irradiated disc. \cite{2021MNRAS.508.2493O} recently implemented a model without an imposed inner boundary and a smooth transition from disc to wind using a slim disc approach \citep{1988ApJ...332..646A}. In this first approach they assume an isothermal disc, but demonstrate that while the transition from disc to wind is narrow, it is not negligibly thin. They also introduced dust of different sizes into their model, which predicts a radial gradient in grain size in the outer disc/inner wind. Although the fixed inner boundary models are valid for computing steady state mass loss rates for disc evolutionary models, a worthwhile future development will be to include detailed microphysics in a slim-disc approach like that of \cite{2021MNRAS.508.2493O}.
\begin{mdframed}
\vspace{-0.8cm}
\subsection{Summary and open questions for the theory of externally driven disc winds}
\begin{enumerate}
\item Numerical models of external photoevaporation require some of the most challenging radiation hydrodynamics models in astrophysics. This is primarily because it is necessary to include 3D PDR microphysics, including line cooling with no obvious dominant cooling direction.
\item Limited by the above, 1D models of external photoevaporation are now well established and are used to estimate disc mass loss rates. But 2D and 3D simulations are still limited.
\end{enumerate}
Some of the many open questions and necessary improvements to current models are:
\begin{enumerate}
\item \textit{What is the PAH abundance in external photoevaporative winds and the outer regions of discs? This is key to setting the wind temperature and mass loss rate.}
\item \textit{Including mass loss rate dependent dust-to-gas mass ratios and maximum grain sizes (and hence extinction) in numerical models of external photoevaporation. At present a single representative cross section is assumed, irrespective of the mass loss rate.}
\item \textit{Can accurate temperatures from PDR microphysics be computed at vastly reduced computational expense (e.g. via emulators)? }
\item \textit{What is the interplay between internal and external winds?}
\item \textit{3D simulations of external photoevaporation with full PDR-dynamics.}
\item \textit{Non-isothermal slim-disc models of externally photoevaporating discs. }
\end{enumerate}
\end{mdframed}
\section{Observational properties of externally photoevaporating discs}
\label{sec:obs}
Here we discuss observations to date of individual externally photoevaporating discs. We discuss the diversity in their properties such as UV environment and age, and summarise key diagnostics.
\subsection{Defining the term proplyd}
The term ``proplyd'' was originally used to describe any disc directly imaged in Orion with HST in the mid-90s \citep[e.g.][]{O'dell93, Johnstone98} as a portmanteau of ``protoplanetary disc''. Since then, use of the term has narrowed to refer only to cometary objects resulting from the external photoevaporation of compact systems. However, this use of the term is ambiguous, since a cometary morphology can result from both externally irradiated protoplanetary discs \textit{and} externally irradiated globules which may host no embedded star/disc, such as many of the compact globulettes illustrated in Figure \ref{fig:globulettes} \citep{2007AJ....133.1795G, 2014A&A...565A.107G}. We therefore propose to define a proplyd as follows:
\begin{mdframed}
\textbf{Proplyd:} \\ \textit{A circumstellar disc with an externally driven photoevaporative wind composed of a photodissociation region and an exterior ionisation front with a cometary morphology. } \\
\end{mdframed}
We have chosen this definition such that it makes no distinction as to whether EUV or FUV photons drive the wind, but it specifically requires that, for an object to be a proplyd, the wind must be launched from a circumstellar disc. In the absence of a disc it is a globule or globulette \citep[also sometimes referred to as evaporating gas globules, or EGGs, e.g.][]{MesaDelgado16}. To further clarify, a globule or globulette with an embedded YSO (identified through a jet, for example) and a cometary morphology would also not be defined as a proplyd, since it is the ambient material being stripped rather than the disc.
\begin{figure}
\centering
\includegraphics[width=0.46\columnwidth]{Figures/proplydGallery.jpeg}
\includegraphics[width=0.4\columnwidth]{Figures/Globulettes.png}
\caption{The left hand panel is a gallery of proplyds in the ONC -- evaporating discs around YSOs (Credit: NASA/ESA and L. Ricci). The right hand panel is a gallery of globulettes in Carina from \protect\cite{2014A&A...565A.107G}. Globulettes have radii from hundreds to thousands of au and can also take on a cometary morphology when externally irradiated, though in many cases they do not contain any YSOs, which by our definition would mean that they are not proplyds (note that \protect\cite{2014A&A...565A.107G} never referred to them as such, rather using the term globulette). }
\label{fig:globulettes}
\end{figure}
\subsection{Where and what kind of proplyds have been detected?}
Proplyds were originally discovered in the Orion Nebula Cluster (ONC) and there are now over 150 known in the region
\citep[e.g.][]{O'dell93, Johnstone98, 2005AJ....129..382S, 2008AJ....136.2136R}. The ONC is around 2.5\,Myr old, with the primary UV source being the O6V star $\theta^1$C. In general, if the local EUV exposure is lower, the surface brightness of proplyds is lower and they can be harder to detect. For example, the surface brightness in $\mathrm{H}\alpha$ is \citep{O'dell98}:
\begin{equation}
\label{eq:SHalpha}
\langle S(\mathrm{H}\alpha) \rangle \approx 7 \times 10^{11} \frac{\alpha_{\mathrm{H}\alpha}^{\mathrm{eff}}}{\alpha_B}\frac{\Phi}{10^{49} \, \mathrm{s}^{-1}} \left( \frac{d}{0.1 \, \mathrm{pc}} \right)^{-2} \, \mathrm{s}^{-1} \, \mathrm{cm}^{-2} \, \mathrm{sr}^{-1}
\end{equation} where $\alpha_{\mathrm{H}\alpha}^\mathrm{eff} = 1.2\times 10^{-13}$~cm$^{3}$~s$^{-1}$ and $\alpha_B = 2.6\times 10^{-13}$~cm$^{3}$~s$^{-1}$ at temperature $T=10^4$~K are hydrogen recombination coefficients \citep[e.g.][]{Osterbrock89}. Nonetheless, in recent years there have been important detections of proplyds in lower EUV environments. \cite{Kim16} found proplyds in the vicinity of the B1V star 42 Ori in NGC 1977. This was significant because it clearly demonstrated that B stars can drive external photoevaporative winds. \cite{Haworth21} also presented proplyds in NGC 2024. In that region it appears that both an O8V star and a B star are driving external photoevaporation. The main significance of the proplyds there is the $\sim0.2-0.5$\,Myr age of the subset of the region where they have been discovered. This is important since it implies that external photoevaporation can even be in competition with our earliest-stage evidence for planet formation \citep{Sheehan18, Segura-Cox20}.
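As an illustrative substitution into equation \ref{eq:SHalpha} (with assumed, representative values rather than measurements of any particular proplyd): for $\Phi = 10^{49}$\,s$^{-1}$ at $d = 0.1$\,pc,

```latex
% Illustrative substitution: Phi = 10^49 / s, d = 0.1 pc
\langle S(\mathrm{H}\alpha) \rangle
  \approx 7\times10^{11} \times \frac{1.2\times10^{-13}}{2.6\times10^{-13}}
          \times 1 \times 1
  \approx 3\times10^{11}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{sr}^{-1}.
```

Moving the same proplyd out to $d = 0.3$\,pc reduces this by a factor of nine, illustrating why proplyds become harder to detect as the local EUV exposure falls.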
In these regions proplyds have been detected with host star masses from around solar down to almost (and perhaps even) planetary masses \citep{2010AJ....139..950R, Kim16, 2016ApJ...833L..16F, 2022MNRAS.512.2594H}. The UV fields incident upon these proplyds range from $>10^5$G$_0$ down to possibly around 100G$_0$ \citep{Miotello12}. Mass loss rates are estimated to regularly be greater than $10^{-7}$\,M$_\odot$\,yr$^{-1}$ and sometimes greater than $10^{-6}$\,M$_\odot$\,yr$^{-1}$ \citep[e.g.][]{Henney98, Henney99,2002ApJ...566..315H, Haworth21}. Examples of binary proplyds have also been discovered \citep{2002ApJ...570..222G}. For sufficiently close binaries the disc winds merge to form a so-called interproplyd shell, which was studied by \cite{2002RMxAA..38...71H}. These nearby regions, the ONC, NGC 1977 and NGC 2024, show the clearest evidence for external photoevaporation.
Due to resolution and sensitivity issues, unambiguous evidence for external photoevaporation is more difficult to obtain in more distant star forming regions than the $D \sim400$\,pc of Orion. \cite{Smith10} identified candidate proplyds in Trumpler 14, at a distance of $D\sim2.8\,$kpc, and \cite{MesaDelgado16} subsequently detected discs towards those candidates with ALMA. Although many of those candidates are large evaporating globules (in some cases, with embedded discs detected), some are much smaller and so could be bona fide proplyds.
There are other regions where ``proplyd-like'' objects have been discovered, including Cygnus OB2 \citep{Wright12, 2014ApJ...793...56G}, W5 \citep{2008ApJ...687L..37K}, NGC 3603 \citep{Brandner00}, NGC 2244, IC 1396 and NGC 2264 \citep{2006ApJ...650L..83B}. However, our evaluation of those systems so far is that they are all much larger than ONC proplyds and are likely evaporating globules. Given the high UV environments of those regions and the identification of evaporating globules, we \textit{do} expect external photoevaporation to be significant in those regions. However, it remains unclear for many of these objects whether the winds are launched from an (embedded) star-disc system. Future higher resolution observations and/or new diagnostics of external photoevaporation that do not require spatially resolving the proplyd are needed in these regions.
We provide a further discussion of proplyd demographics and particularly the demographics of discs in irradiated star forming regions in Section \ref{sec:surveys}.
\subsection{Estimating the mass loss rates from proplyds}
\label{sec:MdotEstimate}
As discussed above, proplyds have a cometary morphology, pointing towards the UV source responsible for driving the wind. The leading hemisphere that is directly irradiated by the UV source is referred to as the cometary cusp. On the far side of the proplyd is a trailing cometary tail.
The extent of the cometary cusp is set by the point beyond which all of the incident ionising flux is required to keep the gas ionised under ionisation equilibrium. A higher mass loss rate, and hence denser flow, increases the recombination rate in the wind and moves the ionisation front to larger radii. Conversely, increasing the ionising flux pushes the ionisation front to smaller radii. As a result, the ionisation front radius $R_{\textsc{if}}$ (i.e. the radius of the cometary cusp) is related to the ionising flux incident upon the proplyd and the mass loss rate $\dot{M}_\mathrm{ext}$ \citep{Johnstone98}. This is independent of the actual wind driving mechanism, being enforced simply by photoionisation equilibrium downstream of the launching region of the flow. This provides a means to estimate the mass loss rate from the disc:
\begin{equation}
\left(\frac{\dot{M}_\mathrm{ext}}{10^{-8}\,M_\odot\,\rm{yr}^{-1}}\right) = \left(\frac{1}{1200}\right)^{3/2}\left(\frac{R_{\textsc{if}}}{\textrm{au}}\right)^{3/2}\left(\frac{d}{\textrm{pc}}\right)^{-1}\left(\frac{\Phi}{10^{45}\,\mathrm{s}^{-1}}\right)^{1/2}
\label{equn:Mdot}
\end{equation}
where $\Phi$ is the ionising photons per second emitted by the source at distance $d$ responsible for setting the ionisation front. This has been applied to estimating mass loss rates in NGC 2024 \citep{Haworth21} and NGC 1977 \citep{2022MNRAS.512.2594H}.
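As an illustrative example (with assumed, representative proplyd parameters): for $R_{\textsc{if}} = 100$\,au, $d = 0.1$\,pc and $\Phi = 10^{49}$\,s$^{-1}$ (of order that of an early O star such as $\theta^1$C), equation \ref{equn:Mdot} gives

```latex
% Illustrative substitution: R_IF = 100 au, d = 0.1 pc, Phi = 10^49 / s
\left(\frac{\dot{M}_\mathrm{ext}}{10^{-8}\,M_\odot\,\rm{yr}^{-1}}\right)
  = \left(\frac{100}{1200}\right)^{3/2} \times (0.1)^{-1}
    \times \left(10^{4}\right)^{1/2}
  \approx 0.024 \times 10 \times 100 \approx 24,
```

i.e. $\dot{M}_\mathrm{ext} \approx 2.4\times10^{-7}$\,M$_\odot$\,yr$^{-1}$, consistent with the large mass loss rates inferred for proplyds close to massive stars.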
Note that this neglects extinction between the UV source and the proplyd, and the projected separation could underestimate the true separation between the UV source and the proplyd. Both of these effects would reduce the true ionising flux incident upon the proplyd, and so equation \ref{equn:Mdot} provides an upper limit on the mass loss rate.
Generally, the mass loss rate could alternatively be inferred if one knows the density and velocity through any surface in the wind enclosing the disc. In practice, the ionisation front is very sharp, making it an ideal surface through which to estimate the mass loss rate.
Other more sophisticated model fits to proplyds have been made, such as by \cite{2002ApJ...566..315H}, who used photoionisation, hydrodynamics and radiative transfer calculations to model the proplyd LV 2, but equation \ref{equn:Mdot} provides a quick estimate that gives mass loss rates comparable to those more complex approaches.
\subsection{Multi-wavelength anatomy of proplyds}
Here we provide a brief overview of some key observational tracers of proplyds. We also highlight possible alternative tracers that might prove useful to identify external photoevaporation when proplyds cannot be inferred based on their cometary morphology (i.e. in weaker UV environments and distant massive clusters). A schematic of the overall anatomy and location of various tracers of a proplyd is shown in Figure \ref{fig:proplydAnatomy}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/AnatomyFig.pdf}
\caption{The anatomy of a proplyd, highlighting the observables in different parts of the system. The three main zones are the disc, PDR and ionised gas. }
\label{fig:proplydAnatomy}
\end{figure}
\subsubsection{Ionised gas tracers in absorption and emission}
Proplyds are observed in two ways using ionised gas emission lines. Proplyds are typically found in H\,\textsc{ii} regions, which are associated with widespread ionised emission lines. Proplyds on the near side of the H\,\textsc{ii} region can therefore manifest in absorption of those ionised gas emission lines. In addition, the ionisation front at the rim of the proplyd cusp is itself a source of ionised emission lines that can be directly detected. Ionised gas tracers in emission probe the region close to or outside of the H\,\textsc{i}-H\,\textsc{ii} ionisation front \citep[e.g.][]{Henney99}. These ionised gas tracers detected in emission are valuable for estimating mass loss rates using the procedure discussed in Section \ref{sec:MdotEstimate}. Prominent ionised gas emission lines include H$\,\alpha$, Paschen$\,\alpha$, [O\,\textsc{iii}], [Ne\,\textsc{ii}] and [N\,\textsc{ii}]. Combinations of these photoionised gas tracers have also been used to constrain models of proplyds: for example, \cite{2002ApJ...566..315H} compared simulated images and observations in H$\,\alpha$, [N\,\textsc{ii}] (6583\,\AA) and [O\,\textsc{iii}] (5007\,\AA) to model the proplyd LV 2.
\subsubsection{Disc tracers}
\noindent\textbf{CO} \\
Thanks to its brightness, CO is one of the most common line observations towards protoplanetary discs. \cite{Facchini16} pointed out that because the angular velocity in the wind falls off as $R^{-2}$ (conserving specific angular momentum) rather than following the Keplerian $R^{-3/2}$ profile, non-Keplerian rotation is a signature of external photoevaporation. Although this deviation grows with distance into the wind, it does not get a chance to do so to detectable levels before CO is dissociated \citep{2016MNRAS.463.3616H, 2020MNRAS.492.5030H}. CO is therefore not expected to kinematically provide a good probe of external winds for proplyds (though it may be useful for more extended discs with slow external winds in low UV environments, as we will discuss in Section \ref{sec:nonProplydWinds}). However, this does not preclude CO line ratios or intensities showing evidence of external heating in spatially resolved observations. \\
\noindent\textbf{Dust continuum} \\
External photoevaporation influences the dust by i) entraining small dust grains in the wind \citep{Facchini16} and ii) heating the dust in the disc. Directly observing evidence for grain entrainment would provide a key test of theoretical models of both external photoevaporation and dust-gas dynamics. In addition to the prediction that small grains are entrained \citep{Facchini16}, \cite{2021MNRAS.508.2493O} predict a radial gradient in the grain size distribution.
Evidence for such a radial gradient in grain size was inferred in the ONC 114-426 disc by \cite{Miotello12} \citep[see also][]{2001Sci...292.1686T}. This disc is the largest in the central ONC, on the near side of the H\,\textsc{ii} region. Although the UV field incident upon it is expected to be of order $10^2$\,G$_0$, it is clearly evaporating, with the wind resulting in an extended diffuse foot-like structure (though no clear cometary proplyd morphology). They mapped the absorption properties of the translucent outer parts of the disc, finding evidence for a radially decreasing maximum grain size, which would be consistent with theoretical expectations. Revisiting 114-426 to obtain further constraints on grain entrainment would be valuable, as would searching for the phenomenon in other systems. JWST will offer the capability to similarly study the dust in the outer parts of discs in silhouette in the ONC, comparing JWST Paschen $\alpha$ absorption with HST H$\alpha$ absorption \citep[as part of PID: GTO 1256][]{2017jwst.prop.1256M}.
The dust in discs is also influenced by radiative heating from the environment. If a proplyd is sufficiently close to a very luminous external source, the grain heating can be comparable to, or in some parts of the disc exceed, the heating from the disc's central star. If this is not accounted for when estimating the dust mass in a proplyd (i.e. if one assumes some constant characteristic temperature, typically $T=20\,$K), the mass ends up increasingly overestimated in closer proximity to the luminous external source \citep{Haworth21b}, and this may suppress apparent spatial gradients in disc masses at distances within around 0.1\,pc of an O star like $\theta^1$C \citep[e.g.][]{Eisner18, 2021ApJ...923..221O}.
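The magnitude of this bias can be illustrated with a simple optically thin estimate: the inferred dust mass scales as $1/B_\nu(T)$, so assuming $T=20$\,K for grains that are actually externally heated to a higher temperature inflates the mass by the ratio of Planck functions. A minimal Python sketch (the temperatures and observing wavelength below are illustrative assumptions, not values from the cited works):

```python
import numpy as np

# Physical constants (SI)
H = 6.626e-34    # Planck constant [J s]
K_B = 1.381e-23  # Boltzmann constant [J K^-1]
C = 2.998e8      # speed of light [m s^-1]

def planck_nu(nu, T):
    """Planck function B_nu(T)."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * T))

def mass_overestimate_factor(T_true, T_assumed=20.0, wavelength=1.3e-3):
    """Factor by which an optically thin dust mass is overestimated when
    grains at T_true are analysed assuming T_assumed, since the inferred
    mass scales as 1/B_nu(T): M_inferred/M_true = B_nu(T_true)/B_nu(T_assumed)."""
    nu = C / wavelength
    return planck_nu(nu, T_true) / planck_nu(nu, T_assumed)

# Grains externally heated to 40 K but analysed assuming 20 K at 1.3 mm:
# roughly a factor ~2 overestimate.
factor = mass_overestimate_factor(40.0)
```

In the Rayleigh-Jeans limit the factor tends to $T_\mathrm{true}/T_\mathrm{assumed}$; at millimetre wavelengths and these cold temperatures it is somewhat larger.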
\subsubsection{Photodissociation region tracers}
Photodissociation region (PDR) tracers are valuable because they trace the inner regions of the flow. In particular, as discussed in Section~\ref{sec:PDRmicrophysics}, it is the FUV/dissociation region that determines the mass loss rate. PDR tracers in the wind are also valuable because they are what we will rely on to identify externally photoevaporating discs that are not proplyds (discussed further in Section~\ref{sec:nonProplydWinds}). PDR tracers of external photoevaporation have received a lot of attention in recent years and so are explored somewhat more thoroughly here than the photoionised gas tracers. \\
\noindent\textbf{[OI] 6300\AA} \\
\cite{Storzer98} modelled the [OI] 6300\AA\, emission from proplyds, motivated by the \cite{1998AJ....116..293B} observations of the ONC proplyd 1822-413. They found that in the case of external photoevaporation the line is emitted following the photodissociation of OH, with the resulting excited oxygen having a roughly 50\,per cent chance of de-exciting through the emission of an [OI] 6300\AA\, photon. For the density/temperature believed to be representative of the wind, this dissociation channel is expected to dominate over other OH destruction pathways. This model approximately reproduced the observed flux of 1822-413. Ballabio et al. (in preparation) generalised this, studying how the [OI] 6300\AA\, line strength varies with UV field strength and star-disc parameters. They found that the line is a poor kinematic tracer of external winds because the velocity is too low to distinguish it from [OI] emission from internal winds \citep[though spectro-astrometry, e.g.][and future instrumentation may solve this issue by spatially distinguishing the internal and external winds]{2021ApJ...913...43W}. However, the [OI] luminosity increases significantly with UV field strength. The ratio of [OI] 6300\AA\, luminosity to accretion luminosity is therefore expected to be unusually high in strong UV environments. This could make [OI] a valuable tool for identifying external photoevaporation at large distances where proplyds cannot be spatially resolved.
There are some observational challenges associated with utilising this diagnostic though. One is that the targeted YSO's emission has to be distinguished from emission from the star forming region, for example [OI] emission from a background PDR. Furthermore, estimating the accretion luminosity of proplyds appears to have only been attempted in a handful of cases, with tracers like H\,$\alpha$ also possibly originating from the proplyd wind. \\
\noindent\textbf{C\,I } \\
As we discussed above, the most commonly used disc gas tracer, CO, is dissociated in the wind. This means that it is ineffective for detecting the deviations from Keplerian rotation that are expected in external photoevaporative winds. C\,I primarily resides in a layer outside of the CO abundant zone until it is ionised at larger distances in the wind, and so could trace the deviation from Keplerian rotation. \citet{2020MNRAS.492.5030H} therefore proposed that C\,I offers a means of probing the inner wind kinematically. A key utility of this would be its possible use as an identifier of winds in systems where there is no obvious proplyd morphology. \citet{2022MNRAS.512.2594H} used APEX to try to detect the C\,I 1-0 line in NGC 1977 proplyds, which are known evaporating discs in an intermediate ($\sim3000\,$G$_0$) UV environment. They obtained no detections, which they explain in terms of those proplyd discs being heavily depleted of mass; an alternative explanation would be that the discs are simply depleted in carbon. Distinguishing these scenarios requires independent constraints on the mass in those discs. Overall the utility of C\,I remains to be proven and should be tested on higher mass evaporating discs. Based on the expected flux levels \citep[see also][]{2016A&A...588A.108K} it seems unlikely that C\,I will be suitable for mass surveys searching for the more subtle externally driven winds when there is no ionisation front, though it could be used for targeted studies of extended systems suspected to lie in intermediate UV environments. \\
\noindent\textbf{Far-infrared lines: [C\,II] 158\,$\mu$m and [OI] 63\,$\mu$m} \\
The [C\,II] 158\,$\mu$m and [O\,\textsc{i}] 63\,$\mu$m lines are both bright tracers of the PDR. They have both been observed with \textit{Herschel} \citep{2010A&A...518L...1P} towards a small number of proplyds by \cite{2017A&A...604A..69C}, who compared the line fluxes with uniform density 1D PDR models computed with the \textsc{Meudon} code \citep{2006ApJS..164..506L} to constrain parameters such as the mean flow density, a value supported by ALMA observations. They suggested that the proplyd PDR self-regulates to maintain the H-H$_2$ transition close to the disc surface and sustain a flow at $\sim1000$\,K in the supercritical regime ($R_{\mathrm{d}} > R_{\mathrm{g}}$). Their models also pointed towards a number of heating contributions being comparably important to, or more important than, PAH heating. However, those calculations assumed uniform density and ISM-like dust, whereas we now know the dust is depleted in the wind. Overall this highlights the need for further detail in PDR-dynamical models.
Clearly these PDR tracers do have enormous utility for understanding the conditions in the inner part of external photoevaporative winds. The main limitation to using these far-infrared lines now is the lack of facilities to observe them, with \textit{Herschel} out of commission and SOFIA \citep[e.g.][]{2012ApJ...749L..17Y} due to end soon. We are unaware of any short term concepts to alleviate this, but in the longer term there are at least two relevant far-infrared probe class mission concepts being prepared, FIRSST and PRIMA, which would address this shortfall. However, these missions would not launch until the 2040s.
\subsection{External photoevaporation of discs without an observed ionisation front}
\label{sec:nonProplydWinds}
Proplyds are most easily identified because of their cometary morphology. However, in weaker UV fields it is still possible to drive a significant wind that is essentially all launched from close to the disc outer edge (where material is most weakly bound). Recent years have seen the discovery of possible external winds from very extended discs in weak ($F_\mathrm{FUV}<10\,G_0$) UV environments. The first example of this was the extremely large disc IM Lup, which has CO emission out to $\sim1000\,$au. IM Lup had previously been demonstrated to have an unusual break in the surface density profile at around 400\,au in submillimetre array (SMA) observations by \cite{2009A&A...501..269P}. \cite{2016ApJ...832..110C} then observed IM Lup in the continuum and CO isotopologues with ALMA, similarly finding that the CO intensity profile could not be simultaneously reproduced at all radii by sophisticated chemical models, and inferring a diffuse outer halo of CO. Using \textit{Hipparcos} to map the 3D locations of the main UV sources within 150\,pc of IM Lup and geometrically diluting their UV with various assumptions on the extinction, \cite{2016ApJ...832..110C} determined that the UV field incident upon IM Lup is only $F_\mathrm{FUV}\sim 4\,G_0$, which was not expected to be sufficient to drive an external wind. \cite{2017MNRAS.468L.108H} demonstrated using 1D radiation hydrodynamic models that the disc/halo CO surface density profile could be explained by a slow external photoevaporative wind launched from around 400\,au by a $\sim4$\,G$_0$ external field. This is possible because the disc is so extended that the outer parts are only weakly gravitationally bound, so even modest heating can drive a slow molecular wind. However, 2D models are required to give a more robust geometric comparison between simulations and observations to verify an outer wind in IM Lup.
Another candidate external wind in a $F_\mathrm{FUV}<10G_0$ UV environment was identified in HD~163296 by \cite{2019Natur.574..378T} and \cite{2021ApJS..257...18T}. They developed a framework in which the 3D CO emitting surface of the disc is traced, which can then be translated into a map of the velocity as a function of radius and height in the disc as illustrated in Figure \ref{fig:TeagueHD163296}. Their main focus was the meridional flows identified at smaller radii in the disc, but they serendipitously discovered evidence for an outer wind launched from $\sim350-400$\,au (see the right hand blue box of Figure \ref{fig:TeagueHD163296}). This is yet to be interpreted with any numerical models or synthetic observations which will be necessary to support the interpretation that it is external photoevaporation that is responsible.
\cite{2021ApJS..257...18T} also carried out a similar analysis of the disc MWC 480 but found no evidence of an outer wind, despite that disc having a similar radial extent and environment. Whether this is a consequence of the more face-on orientation ($\sim 33^\circ$ compared to $\sim 45^\circ$ for HD 163296) or because there is genuinely no outer wind remains to be determined. Indeed, the HD 163296 kinematic feature that appears to be an outflow may also be due to some other mechanism. Furthermore, a similar approach is yet to be applied to IM Lup to search for kinematic traces of an outer wind.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Figures/TeagueHD163296.png}
\caption{The azimuthally averaged velocity (vectors) at the height of CO emission as a function of radius in HD 163296 by \protect\cite{2021ApJS..257...18T}. In addition to meridional flows, a possible outer wind is detected at $\sim320-400$\,au, highlighted by the blue dashed box on the right. }
\label{fig:TeagueHD163296}
\end{figure}
Looking ahead, determining whether external irradiation can really launch winds from discs down to FUV fluxes $F_\mathrm{FUV}<10\,G_0$ is important for understanding just how pervasive external photoevaporation is.
\begin{mdframed}
\subsection{Summary and open questions for observational properties of externally photoevaporating discs}
\begin{enumerate}
\item{External photoevaporation has been directly observed (e.g. proplyds) for almost 30 years. The vast majority of these observations are in the Orion Nebula Cluster. }
\item{In recent years direct evidence for external photoevaporation is being identified in more regions, and B stars are now also known to facilitate it. However, the range of environments in which it has been observed is still very limited.}
\end{enumerate}
\noindent Some of the many open questions are:
\begin{enumerate}
\item \textit{What are robust diagnostics and signposts of external winds in weak UV environments?}
\item \textit{What are diagnostics of external photoevaporation in distant clusters where a cometary proplyd morphology is spatially unresolved?}
\item \textit{How widespread and significant are external winds from discs in weak UV environments?}
\end{enumerate}
\end{mdframed}
\section{Impact on disc evolution and planet formation}
\label{sec:planet_formation}
In this section we consider how a protoplanetary disc evolves when exposed to strong external UV fields, and the consequences for planet formation. We only briefly describe some relevant processes for planet formation in isolated star-disc systems. This is a vast topic with several existing review articles on both protoplanetary discs \citep[e.g.][]{Armitage11, Williams11, Andrews20, Lesur22} and the consequences for forming planets \citep[e.g.][]{Kley12, Baruteau14, Zhu21, Drazkowska22}, to which we refer the reader. \citet{Adams10} reviewed general influences on planet formation \citep[see also][]{Parker20}, with a focus on the Solar System. Here we focus on a detailed look at how external photoevaporation affects the formation of planetary systems.
\subsection{Gas evolution}
\label{sec:gas_evolution}
\subsubsection{Governing equations}
The gas dominates over dust by mass in the interstellar medium (ISM) by a factor $\rho_\mathrm{g}/\rho_\mathrm{d}\sim 100$ \citep{Bohlin78, Tricco17}. This ratio is usually assumed to be similar in (young) protoplanetary discs, although CO emission suggests that the gas may be somewhat depleted with respect to the dust \citep{Ansdell16}. Nonetheless, gas is a necessary ingredient in instigating the growth of planetesimals by the streaming instability \citep{Youdin05} and represents the mass budget for assembling the giant planets \citep{Mizuno78, Bodenheimer86}. Thus, how the gas evolves in the protoplanetary disc is one of the first considerations in planet formation physics.
Despite its importance, the gas evolution of the disc remains uncertain. The observed accretion rates of $\dot{M}_\mathrm{acc} \sim 10^{-10} {-}10^{-7} \, M_\odot$~yr$^{-1}$ onto young stars \citep{Gullbring98, Muzerolle98, Manara12}, depending on the stellar mass \citep{Herczeg08, Manara17}, imply radial angular momentum transport. For several decades, this angular momentum transport has widely been assumed to be driven by an effective viscosity, mediated by turbulence that may originate from the magnetorotational instability \citep{Balbus98}. In the absence of perturbations, the surface density $\Sigma_\mathrm{g}$ of the gaseous disc as a function of cylindrical radius $r$ then follows
\citep{Lynden-Bell74}:
\begin{equation}
\label{eq:disc_evol}
{\dot{\Sigma}_\mathrm{g}} = \frac 1 r \partial_r \left[ 3 r^{1/2} \partial_r \left( \nu \Sigma_\mathrm{g} r^{1/2} \right)\right] - \dot{\Sigma}_{\rm{int}} -\dot{\Sigma}_{\rm{ext}}.
\end{equation}The loss rates $\dot{\Sigma}_{\rm{int}}$ and $\dot{\Sigma}_{\rm{ext}}$ are the surface density change due to the internally and externally driven winds respectively. The kinematic viscosity $\nu$ is usually parametrised by an $\alpha$ parameter \citep{Shakura73} such that $ \nu(r) = \alpha c_{\rm{s}}^2/\Omega_\mathrm{K}$, for sound speed $c_\mathrm{s}$ and Keplerian frequency $\Omega_\mathrm{K}$. In a disc with a mid-plane temperature $T\propto r^{-1/2} $, this yields $\nu \propto r$ \citep{Hartmann98}.
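The viscous term and the wind sinks can be combined in a few lines of code. The following Python sketch (an explicit scheme on an illustrative grid, not any published solver) integrates equation~\ref{eq:disc_evol} for prescribed sink terms:

```python
import numpy as np

def evolve_sigma(r, sigma, nu, dt, n_steps,
                 sigma_dot_int=0.0, sigma_dot_ext=0.0):
    """Explicit finite-difference sketch of
        dSigma/dt = (1/r) d/dr[3 r^{1/2} d/dr(nu Sigma r^{1/2})]
                    - Sigma_dot_int - Sigma_dot_ext.
    Illustrative only: a production solver would use an implicit,
    flux-conservative scheme with a proper stability check."""
    sigma = np.array(sigma, dtype=float)
    for _ in range(n_steps):
        inner = np.gradient(nu * sigma * np.sqrt(r), r)  # d/dr(nu Sigma r^1/2)
        visc = np.gradient(3.0 * np.sqrt(r) * inner, r) / r
        sigma = sigma + dt * (visc - sigma_dot_int - sigma_dot_ext)
        sigma = np.maximum(sigma, 0.0)  # forbid negative surface density
    return sigma

# Sanity check: with nu proportional to r, the steady state is
# Sigma proportional to 1/r, so the interior profile barely changes.
r = np.linspace(1.0, 10.0, 200)
sigma = evolve_sigma(r, 1.0 / r, r, dt=1e-5, n_steps=100)
```

The steady-state check mirrors the $\nu \propto r$, $\Sigma \propto r^{-1}$ solution quoted later in the text.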
In the following discussion, we will assume angular momentum transport is viscous. However, several recent empirical studies of discs have suggested a low $\alpha \sim 10^{-4} {-} 10^{-3}$ \citep{Pinte16, Flaherty17, Trapman20}. This is difficult to reconcile with observed accretion rates. Alternative candidates, particularly magnetohydrodynamic (MHD) winds, have been suggested to drive angular momentum transport \citep[e.g.][]{Bai13}. In this case, angular momentum is not conserved but extracted from the gas disc, with consequences for the disc evolution \citep{Lesur21, Tabone22}. In the following we will assume a standard viscous $\alpha$ disc model, with the caveat that future simulations may offer different predictions by coupling the externally driven photoevaporative wind with MHD mediated angular momentum removal.
\subsubsection{Implementing wind driven mass loss}
In order to integrate equation~\ref{eq:disc_evol}, we must define the form of $\dot{\Sigma}_{\rm{int}}$ and $\dot{\Sigma}_{\rm{ext}}$. The internal wind may be driven by MHD effects \citep[e.g.][]{Bai13, Lesur14} or thermally due to a combination of EUV \citep[e.g.][]{Hollenbach94,Alexander06}, X-ray \citep[e.g.][]{Ercolano09,Owen10, Owen11} and FUV \citep[e.g.][]{Gorti09} photons, or probably a combination of the two (MHD and thermal) \citep[e.g.][]{Bai16, Bai17, Ballabio20}. We do not focus on the internally driven wind in this review, but note that the driving mechanism influences the radial profile of $\dot{\Sigma}_{\rm{int}}$ \citep[see][for a review]{Ercolano17}.
Several authors have included the external wind in models of (viscous) disc evolution \citep[e.g.][]{Clarke07,Anderson13,Rosotti17, Sellek20, Concha-Ramirez21, Coleman22}. In general, these studies follow a method similar to that of \citet{Clarke07} in removing mass from the outer edge, because winds are driven far more efficiently where the disc is weakly gravitationally bound to the host star. We discuss the theoretical mass loss rates in Section~\ref{sec:theory}; in brief, the analytic expressions by \citet{Johnstone98} are applied to compute the mass loss rate in the EUV driven wind, while early studies applied the expressions by \citet{Adams04} for an FUV driven wind. The latter has now been improved upon using more detailed models by \citet{Haworth18b}, such that one can interpolate over the {FRIED} grid to find an instantaneous mass loss rate. For typical EUV fluxes, once the disc is (rapidly) sheared down to a smaller size, any severe externally driven mass loss is expected to be driven by FUV rather than EUV photons (see Section~\ref{sec:EUVvFUV}). For this reason, the EUV mass loss is often neglected in studies of disc evolution.
Since the mass loss rate is sensitive to the outer radius $R_\mathrm{out}$, care must be taken when implementing a numerical scheme that the value of $R_\mathrm{out}$ is sensibly chosen. In practice, a sharp outer radius quickly develops for a disc with initial mass loss rate higher than the rate of viscous mass flux (accretion). For a viscous disc with $\nu\propto r$, the surface density $\Sigma \propto r^{-1}$ in the steady state, which is the same profile as adopted for the numerical models in the {FRIED} grid. One can then interpolate using the total disc mass and outer radius. The latter is evolved each time-step by considering the rate of wind-driven depletion versus viscous re-expansion \citep[e.g.][]{Clarke07, Winter18b}. However, the physically correct way to define the outer radius is to find the optically thin/thick transition, since the flow in the optically thin region is set by the wind launched from inside this radius. Mass loss scales linearly with surface density in the optically thin limit \citep{Facchini16}, such that one can define $R_\mathrm{out}$ to be the value of $r$ that gives the maximal mass loss rate for the corresponding $\Sigma(r)$ in the disc evolution model \citep[see discussion by][]{Sellek20}. Under the assumption of a viscous disc with $\nu \propto r$ both approaches yield similar outcomes, but the approach of \citet{Sellek20} should be adopted in general. For example, this prescription would be particularly important in the case of a disc model incorporating angular momentum removal via MHD winds \citep{Tabone22}.
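The maximal mass loss rate prescription can be sketched in a few lines. In the snippet below, \texttt{mdot\_fn} is a hypothetical stand-in for an interpolation over a mass loss grid such as {FRIED}; it is not the published interface, and the toy profile in the usage example is purely illustrative:

```python
import numpy as np

def choose_r_out(r, sigma, mdot_fn):
    """Outer-radius prescription in the spirit of Sellek et al. (2020):
    since mass loss scales linearly with surface density in the optically
    thin limit, take R_out as the radius that maximises the instantaneous
    wind mass loss rate evaluated for the local Sigma(r).
    mdot_fn(r_i, sigma_i) stands in for an interpolation over a mass loss
    grid such as FRIED (hypothetical signature)."""
    rates = np.array([mdot_fn(ri, si) for ri, si in zip(r, sigma)])
    i = int(np.argmax(rates))
    return r[i], rates[i]

# Toy example: mdot proportional to Sigma * r^2 with Sigma = e^{-r}
# peaks at r = 2, so that is the radius the prescription selects.
r = np.linspace(0.5, 10.0, 200)
r_out, mdot = choose_r_out(r, np.exp(-r), lambda ri, si: si * ri**2)
```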
\subsubsection{Viscous disc evolution with external depletion}
\label{sec:visc_evol_extdep}
\begin{figure}
\centering
\includegraphics[width=0.87\textwidth]{Figures/R_Mdot_evol.pdf}
\caption{Evolution of the externally driven mass loss rates $\dot{M}_\mathrm{ext}$ (black, from the {FRIED} grid), accretion rates $
\dot{M}_\mathrm{acc}$ (blue) and radii containing 90~percent of the disc mass (red) for a disc evolving under equation~\ref{eq:disc_evol} with a viscous $\alpha=3\times10^{-3}$, no internal photoevaporation and varying the external FUV flux $F_\mathrm{FUV}= 50 \,G_0$ (dotted), $500\,G_0$ (dashed) and $5000 \, G_0$ (solid). We stop integrating when the mass loss rate or inner surface density reaches the floor in the {FRIED} grid ($\dot{M}_\mathrm{ext} = 10^{-10}\,M_\odot$~yr$^{-1}$ and $\Sigma_\mathrm{g}(1\,\rm{au}) = 0.1$~g~cm$^{-2}$ respectively). }
\label{fig:R_Mdot_evol}
\end{figure}
In Figure~\ref{fig:R_Mdot_evol} we show examples of the evolution of the disc radius that contains 90~percent of the mass, $R_{90}$, and the corresponding externally driven mass loss rates \citep[from][]{Haworth18b}, obtained by numerical integration of equation~\ref{eq:disc_evol} \citep[see also Figures 3 and 4 of][for example]{Sellek20}. To illustrate the variation in mass loss and radius, we choose an initial scale radius $R_\mathrm{s} = 100$~au and mass $M_\mathrm{disc}=0.1\,M_\odot$, truncated outside of $200$~au, and with a viscous $\alpha=3\times 10^{-3}$. For simplicity, we ignore internal winds ($\dot{\Sigma}_\mathrm{int} = 0$ everywhere), in order to highlight the main consequences of the externally driven photoevaporative wind in isolation.
From such simple models, we gain some insights into how we expect disc evolution to be affected by externally driven mass loss. In the first instance, the efficiency of these winds at large $r$ leads to extremely rapid shrinking of the disc. In a viscous disc evolution model, this shrinking continues until the outwards mass flux due to angular momentum transport balances the mass loss due to the wind, such that the accretion rate $\dot{M}_\mathrm{acc} \sim \dot{M}_\mathrm{ext}$ \citep{Winter20a, Hasegawa22}. This offers a potential discriminant for disc evolution models: while $\dot{M}_\mathrm{acc} \sim \dot{M}_\mathrm{ext}$ for a viscously evolving disc in an irradiated environment, this need not be the case if angular momentum is removed from the disc via MHD winds or similar. In either case, the initial rapid mass loss rates of $\dot{M}_\mathrm{ext} \sim 10^{-7} \, M_\odot$~yr$^{-1}$ are only sustained for relatively short time-scales of a few $10^5$~yr. Because the mass loss rate is related to the spatial extent of proplyds (equation~\ref{equn:Mdot}), this implies that easily resolvable proplyds should be short-lived and therefore rare.
This rapid shrinking of the disc has consequences for the apparent viscous depletion time-scale of the disc, leading \citet{Rosotti17} to expound the usefulness of the dimensionless accretion parameter:
\begin{equation}
\eta \equiv \frac{\tau_\mathrm{age} \dot{M}_\mathrm{acc}}{M_\mathrm{disc}},
\end{equation}where $\tau_\mathrm{age}$, $M_\mathrm{disc}$ are the age and mass of the disc respectively, while $\dot{M}_\mathrm{acc}$ is the stellar accretion rate. For disc evolution driven by viscosity, and where the disc can reach
a quasi steady-state, we expect $\eta\approx 1$. Indeed $\eta\approx 1$ for discs in low mass local SFRs when using the dust mass $M_\mathrm{dust}$ (or, more precisely, sub-mm flux) as a proxy for the total disc mass $M_\mathrm{disc} = 100\cdot M_\mathrm{dust}$. While numerous processes can interrupt accretion and yield $\eta<1$, only an outside-in depletion process can yield $\eta >1$.
\citet{Rosotti17} showed that this applies to a number of discs in the ONC, hinting that external photoevaporation has sculpted this population.
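As a worked example of this diagnostic (with illustrative numbers, not values drawn from any particular observation):

```python
def accretion_parameter(age_yr, mdot_acc, m_disc):
    """Dimensionless accretion parameter eta = tau_age * Mdot_acc / M_disc
    (Rosotti et al. 2017). eta of order unity is expected for a
    quasi-steady viscous disc; eta > 1 signals outside-in depletion,
    such as external photoevaporation."""
    return age_yr * mdot_acc / m_disc

# Illustrative numbers: a 1 Myr old disc of 0.003 Msun accreting at
# 1e-8 Msun/yr gives eta of roughly 3, i.e. eta > 1 and consistent
# with outside-in depletion.
eta = accretion_parameter(1e6, 1e-8, 3e-3)
```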
With the inclusion of internal disc winds, a number of disc evolution scenarios become possible for an externally irradiated disc. The internal wind drives mass loss outside of some launching radius $R_\mathrm{launch} \approx 0.2 R_\mathrm{g}$, which is inside of the gravitational radius due to hydrodynamic effects \citep{Liffman03}. Once (viscous) mass flux through the disc becomes sufficiently small ($\dot{M}_\mathrm{acc}
\lesssim \dot{M}_\mathrm{int}$, the mass loss in the internal wind), the internal wind depletes material at $r\approx R_\mathrm{launch}$ faster than it is replenished, leading to gap opening. Subsequently, the inner disc is rapidly drained and inside-out disc depletion proceeds \citep{Skrutskie90, Clarke01}. \citet{Coleman22} discussed how the balance of internal and external photoevaporation can alter this evolutionary picture. In the limit of a vigorous externally driven wind, the outer disc may be depleted down to $R_\mathrm{launch}$. In this case, the internal wind no longer drives inside-out depletion, and the disc is dispersed outside-in. In the intermediate case, external disc depletion erodes the outer disc without reaching $R_\mathrm{launch}$. In models where outer disc material is eventually transported inwards, outer disc depletion should still shorten the disc lifetime to some degree. In Figure~\ref{fig:R_Mdot_evol} we see that only the disc exposed to $F_\mathrm{FUV}=5000\,G_0$ is sheared down to $R_\mathrm{out}< 20$~au before the inner surface density becomes small (lower than the {FRIED} grid range). Thus, if angular momentum transport is inefficient beyond this radius, then sustained exposure to $F_\mathrm{FUV}\gtrsim 5000\, G_0$ should be required to shorten the disc lifetime. In this case, we may expect inside-out depletion for discs exposed to more moderate $F_\mathrm{FUV}$, although the external depletion still reduces their overall mass.
\subsection{Solid evolution}
We now consider how the evolution of solids within the gas disc may be influenced by externally driven winds. The growth of ISM-like dust grains to larger aggregates and eventually planets is the result of numerous inter-related physical processes, covering a huge range of scales. We do not review these processes in detail here, but refer the reader to the recent reviews by \citet[][with a focus on dust-gas dynamics]{Lesur22} and \citet[][with a focus on planet formation]{Drazkowska22}. Due to the complexity of the topic, and the fact that the primary empirical evidence comes from local, low mass SFRs without OB stars, most studies to date have focused on dust growth in isolated protoplanetary discs. Here we consider the results of the few investigations focused on dust evolution specifically in irradiated discs.
\subsubsection{Dust evolution}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/size_comparison_full_1000.pdf}
\caption{Maximum grain size.}
\label{fig:gsize}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/f_wind_contourNEW.pdf}
\caption{Dust entrainment in an externally driven wind.}
\label{fig:fd_wind}
\end{subfigure}
\caption{The effect of external FUV irradiation on the dust content of a protoplanetary disc, adapted from \citet{Sellek20}. Figure~\ref{fig:gsize} shows the maximum grain size, relative to an isolated disc ($F_\mathrm{FUV}=0\, G_0$) as a function of time and radius in an irradiated disc with viscous $\alpha=10^{-3}$, exposed to FUV flux $F_\mathrm{FUV}=1000\, G_0$. Figure~\ref{fig:fd_wind} shows the total fraction of dust removed by the wind over the lifetime of a disc as a function of the viscous $\alpha$ and $F_\mathrm{FUV}$. }
\end{figure}
\citet{Sellek20} investigated the drift and depletion of dust in a viscously evolving protoplanetary disc. Dust is subject to radial drift, wherein dust moves towards pressure maxima \citep[i.e. inwards in the absence of local pressure traps --][]{Weidenschilling77}, as well as grain growth dependent on the local sticking, bouncing and fragmentation properties \citep[see][and references therein]{Birnstiel12}. The drift velocity is determined by the Stokes number, which in this context is the ratio of the stopping time of the dust grain to the largest eddy timescale $\sim \Omega_\mathrm{K}^{-1}$. Near the midplane and in the Epstein regime, this can be approximated:
\begin{equation}
\rm{St} \approx \frac {\pi} {2} \frac{ \rho_\mathrm{s} a_{\mathrm{s}}}{\Sigma_\mathrm{g}},
\end{equation}where $a_\mathrm{s}$ and $\rho_\mathrm{s}$ are the grain size and density respectively. The draining of large dust grains can be understood in terms of the balance of viscous mass flux and radial drift \citep{Birnstiel12}, such that the equilibrium value of $\rm{St}$ is:
\begin{equation}
\rm{St}_{\mathrm{eq}} = 3\alpha \left| \frac{\frac{3}{2} + \frac{\mathrm{d}\ln \Sigma_\mathrm{g}}{\mathrm{d}\ln r} }{\frac{7}{4} - \frac{\mathrm{d}\ln \Sigma_\mathrm{g}}{\mathrm{d}\ln r} } \right|
\end{equation} for the standard $\alpha$ disc model with $\nu \propto r$. For $\rm{St}>\rm{St}_\mathrm{eq}$, dust drifts inwards regardless of the viscosity. At the outer edge of the disc $|{\mathrm{d}\ln \Sigma_\mathrm{g}}/{\mathrm{d}\ln r}| $ becomes large, so that $\rm{St}_{\mathrm{eq}} \rightarrow 3\alpha$. Hence dust in the outer disc that grows to $\rm{St}>3\alpha$ will migrate rapidly inwards. External photoevaporation acts to increase $|{\mathrm{d}\ln \Sigma_\mathrm{g}}/{\mathrm{d}\ln r}|$ in the outer disc, clearing this region of large dust grains. Figure~\ref{fig:gsize}, adapted from \citet{Sellek20}, shows how the external wind can rapidly evacuate the outer disc of large grains, dependent on the value of $\alpha$. Given that this occurs on short time-scales compared to the disc lifetime, it has consequences for planet formation and observational properties of discs, possibly explaining why the sub-mm flux-radius relationship seen in low mass SFRs
\citep{Tripathi17} does not hold in the ONC \citep{Eisner18}.
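The drift criteria above are straightforward to evaluate. A minimal sketch, assuming cgs-like inputs (grain size in cm, densities in g\,cm$^{-3}$ and g\,cm$^{-2}$):

```python
import numpy as np

def stokes_number(a_s, rho_s, sigma_g):
    """Midplane Epstein-regime Stokes number,
    St ~ (pi/2) * rho_s * a_s / Sigma_g, for grain size a_s [cm],
    grain material density rho_s [g/cm^3], gas surface density
    sigma_g [g/cm^2]."""
    return 0.5 * np.pi * rho_s * a_s / sigma_g

def stokes_eq(alpha, dln_sigma_dln_r):
    """Equilibrium Stokes number balancing viscous outward flux against
    radial drift, for an alpha-disc with nu proportional to r:
    St_eq = 3 alpha |(3/2 + p) / (7/4 - p)| with p = dln(Sigma)/dln(r)."""
    p = dln_sigma_dln_r
    return 3.0 * alpha * abs((1.5 + p) / (1.75 - p))

# In the bulk disc (Sigma proportional to 1/r, p = -1) St_eq is a small
# fraction of alpha; at a wind-sharpened outer edge |p| is large and
# St_eq approaches 3*alpha, so grains with St > 3*alpha drift inwards.
```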
The clearing of large grains from the outer disc also has consequences for the quantity of solid material that can be lost in the wind.
\citet{Sellek20} used the entrainment constraints given by equation \ref{equn:entrainedSize} to estimate an entrainment fraction $f_\mathrm{ent}$ for a given grain size distribution. The fraction of dust removed in their global externally irradiated disc model is shown in Figure~\ref{fig:fd_wind}. When viscosity is large ($\alpha \gtrsim 10^{-2}$), this fraction can exceed $50$~percent for $F_\mathrm{FUV}\gtrsim 10^3\, G_0$. This trend of higher dust depletion with higher $\alpha$ is due both to the faster replenishment of disc material in the outer regions where the wind is launched, and to the less efficient inward drift of large grains (higher $\rm{St}_\mathrm{eq}$). In general, however, depletion of dust is not efficient, with typically less than half of the dust mass being lost. Moreover, in the models of \citet{Sellek20} this does not result in an enhancement of the dust-to-gas ratio, due to the enhanced loss of large dust grains via rapid inwards drift.
An important caveat to the above discussion is that it does not consider the role of local pressure traps in halting radial drift \citep[e.g.][]{Pinilla12, Rosotti20}. Local accumulation of solids can also lead to a mutual aerodynamic interaction driving unstable density growth that can seed planet formation \citep{Youdin05}. If sufficient quantities of dust remain when the gas component is depleted by external photoevaporation, then this may serve to initiate planetesimal formation in the outer disc \citep{Throop05}. Future studies may consider how the efficiency of dust trapping in the outer disc affects this picture.
\subsubsection{Planet formation and evolution}
As discussed in the introduction, the influence of external photoevaporation is still rarely included in models for planet formation, despite its apparent prevalence. However, \citet{Ndugu18} studied how increases in the disc temperature due to external irradiation alter the formation process. By increasing the disc scale height (thus decreasing midplane density), this heating reduces the efficiency of pebble accretion and giant planet formation in the outer disc. In this framework, giant planets that do form at high temperatures more frequently orbit with short periods (hot/warm Jupiters) because their planet cores need to form early or in the inner disc. Temperature also has an influence after the formation of a low mass planet core. Low mass planets that have not opened up a gap in the gas disc undergo type I migration, which is due to torques associated with a number of local resonances \citep{Paardekooper11}. These torques are sensitive to thermal diffusion, such that they also depend on local temperature and associated opacity. This may lead to complex migration behaviour for the low mass planets \citep{Coleman14}, which would also be influenced by increasing the disc temperature due to external irradiation \citep{Haworth21b}.
Perhaps more directly, where there is sufficient mass loss from the disc due to an externally driven wind, this also reduces the time and mass budget available for (giant) planet formation and migration. Internal winds have long been suggested to play an important role in stopping inward planet migration \citep{Matsuyama03}. \citet{Armitage02} and \citet{Veras04} investigated how type II migration of giant planets (within a gap) can be severely altered by the loss of the disc material in a photoevaporative wind, even leading to outwards migration if the outer disc material is removed. Since then, a number of authors have investigated how giant planet migration can be halted by internally driven disc winds \citep[e.g.][]{Alexander09,Jennings18, Monsch19}. Planet population synthesis models have now started to implement prescriptions for mass loss due to an external wind \citep[such as in the Bern synthesis models --][]{Emsenhuber21}. However, this is presently based on a single (typical) estimated $\dot{\Sigma}_\mathrm{ext}$ that is constant in time and radius $r$, rather than the more physical outside-in, radially dependent depletion models discussed in Section~\ref{sec:gas_evolution}. Recently, \citet{Winter22} looked at how growth and migration are altered by external FUV flux exposure, and we show the outcomes of some of these models in Figure~\ref{fig:aG_exp}. The outside-in mass loss prescription leads to more dramatic consequences than those of \citet{Emsenhuber21}, curtailing growth and migration early. As a result, low FUV fluxes ($F_\mathrm{FUV}\lesssim 300\, G_0$) produce planets that are massive ($M_\mathrm{p}\gg 100\,M_\oplus$) and on short orbital periods ($P_\mathrm{orb}\ll 10^4$~days), similar to those that are most frequently discovered by radial velocity (RV) surveys \citep[e.g.][]{Mayor11}. 
The typical FUV fluxes in the solar neighbourhood are $\langle F_\mathrm{FUV}\rangle \sim 10^3 {-}10^4\, G_0$ (see Section~\ref{sec:demographics_SFRs}), which yield typical planet masses and orbital periods closer to those of the massive planets in the Solar System. Such relatively low mass planets fall close to, or below, typical RV detection limits, and the anti-correlation between planet mass and semi-major axis may therefore contribute to the inferred dearth of detected planets with periods $P_\mathrm{orb}\gtrsim 10^3$~days \citep{Fernandes19}. Further testing the role of external photoevaporation for populations of planets requires coupling these prescriptions with population synthesis efforts.
While we can try to identify the role of external photoevaporation via correlations in disc populations, more direct ways to connect environment with the present day stellar (and exoplanet) population would be useful. For an example of how this might work in practice, \citet{Roquette21} have highlighted that premature disc dispersal may leave an impact on the rotation period distribution of stars. The rotation of a star is decelerated due to `disc-locking' during the early pre-main-sequence phase, so premature disc dispersal results in faster rotators. Thus, by shortening the inner disc lifetime (see Section~\ref{sec:inner_disc}), external photoevaporation may leave an imprint on the stellar population up to several $100$~Myr after formation. This may explain, for example, the increased number of fast rotating stars in the ONC compared to Taurus \citep{Clarke00}. In future, models and rotation rates may be used to interpret the disc dispersal time-scales for main sequence stars, which may complement investigations into planet populations in stellar clusters \citep[e.g.][]{Gilliland00, Brucalassi16, Mann17}.
\label{sec:giant_planets}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{Figures/comp_obs_review.pdf}
\caption{The final masses $M_\mathrm{p}$ and orbital periods $P_\mathrm{orb}$ of giant planets evolving under a growth and evolution model for planetary cores of mass $10\, M_\oplus$ and formation time of $\tau_\mathrm{form} = 1$~Myr with disc viscosity $\alpha=3\cdot 10^{-3}$ \citep[adapted from][]{Winter22}. The circular points, connected by faint green lines for fixed starting semi-major axis $a_{\mathrm{p},0}$, represent the final location of the planet in $P_\mathrm{orb}-M_\mathrm{p}$ space. Points are coloured by the external FUV flux experienced in that model, with the same colour shown in the surrounding Voronoi cell to emphasise trends. Red crosses show the locations of HARPS radial velocity (RV) planet discoveries presented by \citet{Mayor11}. The 50~percent detection efficiency of the HARPS survey is shown by the red contour. We show the orbital period beyond which \citet{Fernandes19} infer a dearth of planets (dotted black line) and the limit above which planets may no longer form by core accretion \citep[black dashed line --][]{Schlaufman18}. We also show the planets in the Solar System (green triangles). }
\label{fig:aG_exp}
\end{figure}
\subsection{Disc surveys of local star forming regions}
\label{sec:surveys}
So far in this section, we have focused on the theoretical influence of external photoevaporation on planet formation. However, the most important evidence for or against the influence of external photoevaporation on forming planets must be found statistically in surveys of local protoplanetary disc populations. Such surveys and more detailed observations of individual discs offer the most direct way to probe the physics of planet formation. Here we report the evidence in the literature for the role of external photoevaporation in sculpting disc populations. We divide this into studies that probe (inner) disc survival fractions (Section~\ref{sec:inner_disc}) and those studies that probe the outer disc properties (Section~\ref{sec:outer_disc}). For quick reference, in Table~\ref{tab:SFR_props} we summarise the properties of some local SFRs in which some observational evidence of external photoevaporation has been uncovered.
\begin{table}[]
\centering
\rowcolors{2}{gray!25}{white}
\begin{tabular}{c| c c c c c c c}
Name & $D$ & $\log M_{\rm{tot}}$ & $\log \rho_0$ & $\log L_\mathrm{FUV}$ & $T_\mathrm{age}$ & Evidence & References\\
& [kpc] & [$M_\odot$] & [$M_\odot$~pc$^{-3}$] & [erg~s$^{-1}$] & [Myr] & & \\
\hline
NGC 1977 & $0.40$ & $\sim 1.9 $ & - & $37.3$ & - & P & \small{1, 2, 28} \\
NGC 2024 & $0.40$& $\sim 2.1$ & - & $\sim 38.1$ & $\sim 0.5$ & P/DD & \small{3, 4, 5, 6, 7, 28} \\
$\sigma$ Orionis & $0.40$& 2.2 & 2.7 & $38.3$ & $3{-}5$ & P/DD & \small{8, 9, 10, 11, 12, 28, 39} \\
ONC & $0.40$ & $3.6$ & $4.1$ & $38.7$ & $1{-}3$ & P/DD & \small{13, 14, 15, 16, 28} \\
Pismis 24 & $1.7$ & $\sim 2.8$ & $\sim3.8$ & $39.8$ & $1{-}1.5$ & IDL & \small{17, 18, 37} \\
Quintuplet & $8.0$ & $4.0$ & $2.7$ & $39.8$ & $3{-}5$ & IDL & \small{19, 20, 34}\\
Trumpler 14 & $2.8$ & $3.6$ & $4.3$ & $39.9$ & $0.5{-}3$ & P/IDL & \small{21, 22, 23, 38, 40} \\
Cygnus OB2 & $1.6$ & $4.2$ & $1.3$ & $40.0$ & $1{-}3$ & IDL & \small{24, 25, 26, 27, 41} \\
NGC 3603 & $6.7$ & $4.1$ & $4.8$ & $40.3$ & $0.5{-}1$ & IDL & \small{28, 29, 30, 41}\\
Arches & $8.0$ & $4.7$ & $5.3$ & $40.5$ & $2{-}3$ & IDL & \small{31, 32, 33, 34} \\
Westerlund 1 & $3.8$ & $4.8$ & $5.0$ & $40.6$ & $3{-}4$ & - & \small{35, 36, 41} \\
\end{tabular}
\caption{Properties of some SFRs that show evidence of disc populations sculpted by external photoevaporation \citep[except Westerlund 1, which is the subject of an upcoming JWST campaign --][]{JWST_Guarcello}. Columns from left to right are: name of the SFR, heliocentric distance, total stellar mass, central density, total FUV luminosity, age, type of evidence for external disc depletion and references. FUV luminosity is calculated using the luminosity of the most massive members at an age of $1$~Myr. The evidence for external dispersal in each region is listed as proplyds/winds (P), dust/outer disc depletion (DD) or shortened inner disc lifetime (IDL). \\ \small{References -- 1: \citet{Peterson08}, 2: \citet{Kim16}, 3: \citet{Skinner03}, 4: \citet{Bik03}, 5: \citet{Levine06}, 6: \citet{vanTerwisga20}, 7: \citet{Haworth21}, 8: \citet{Caballero08}, 9: \citet{Oliveira02}, 10: \citet{Oliveira06}, 11: \citet{Mauco16}, 12: \citet{Ansdell17}, 13: \citet{Hillenbrand98b}, 14: \citet{Palla99}, \citet{Beccari17}, 15: \citet{Eisner18}, 16: \citet{vanTerwisga19}, 17: \citet{Fang12}, 18: \citet{JWST_RamirezTannus}, 19: \citet{Figer99a}, 20: \citet{Stolte15}, 21: \citet{Sana10}, 22: \citet{Preibisch11}, 23: \citet{Preibisch11b}, 24: \citet{Hanson03}, 26: \citet{Wright15}, 27: \citet{Guarcello16}, 28: \citet{Grosschedl18}, 29: \citet{Stolte04}, 30: \citet{Harayama08}, 31: \citet{Najarro04}, 32: \citet{Espinoza09}, 33: \citet{Harfst10}, 34: \citet{Stolte15}, 35: \citet{Mengel07}, 36: \citet{JWST_Guarcello}, 37: \citet{Getman14a}, 38: \citet{Smith10}, 39: \citet{Schaefer16}, 40: \citet{Cantat-Gaudin20}, 41: \citet{Rate20}}}
\label{tab:SFR_props}
\end{table}
\subsubsection{Disc survival fractions}
\label{sec:inner_disc}
Photometric censuses of young stars in varying SFRs can yield insights into the survival time of discs in regions of similar ages. Young stars exhibit luminous X-ray emission due to magnetically confined coronal plasma \citep{Pallavicini81}, so X-ray surveys with telescopes such as \textit{Chandra} offer the basis for constructing a catalogue of young members of a SFR. These can be coupled with photometric surveys to infer the existence or absence of a NIR excess. Comparison of the disc fractions, either within the same SFR or between different regions with similar ages, allows one to identify regions where discs have shorter lifetimes than the $\sim 3$~Myr typical of low mass SFRs.
While this principle appears simple, several of the steps required to achieve this comparison carry numerous pitfalls. One issue is reliable membership criteria, which were historically photometric or spectroscopic \citep[e.g.][]{Blaauw56, deZeeuw99}, improved recently through proper motions (and to a lesser extent, parallax measurements) from \textit{Gaia} DR2 \citep{Arenou18}. Uncertainty and heterogeneity in age determination for young SFRs also represent a significant challenge, particularly in comparing across different SFRs \citep[e.g.][]{Bell13}. The three dimensional geometry (projection effects) and dynamical mixing in stellar aggregates may also hide correlations between historic FUV exposure and present day disc properties \citep[e.g.][]{Parker21}. Even empirically quantifying the luminosity and spectral type of neighbouring OB stars can be challenging. Massive stars are often found in multiple systems \citep[e.g.][]{Sana09}, which can lead to mistaken characterisation \citep[e.g.][]{MaizApellaniz07}, while these massive stars are also expected to go through transitions in the Hertzsprung–Russell diagram in combination with rapid mass loss \citep[see][for a review]{Vink22}. Statistically, any study attempting to measure correlations in individual regions must choose binning procedures for apparent FUV flux with care. For example, the number of stars per bin must be sufficient such that uncertainties are not prohibitively large, and binning should be performed by projected UV flux rather than separation alone (i.e. controlling for the luminosity of the closest O star). Finally, studies of NIR excess probe the presence of inner disc material, which represents the part of the disc expected to be least affected by external photoevaporation (see discussion at the end of Section~\ref{sec:visc_evol_extdep}). 
Therefore, inner disc survival fractions should be interpreted as the most conservative metric by which to measure the role of external disc depletion.
Despite these difficulties, numerous studies have reported correlations between disc lifetimes and local FUV flux. One of the earliest was \citet{Stolte04}, who found a disc fraction of $20\pm 3$~percent in the central $0.6$~pc of NGC 3603, increasing to $30\pm 10$~percent at separations of $\sim 1$~pc from the centre. Later, \citet{Balog07} presented a \textit{Spitzer} survey of NGC 2244 \citep[of age $\sim {2}$~Myr -- ][]{Hensberge00, Park02}, which found a marginal drop-off in the fraction of disc-hosting stars at separations $d< 0.5$~pc from an O star ($ 27\pm 11$~percent) versus those at greater separations ($45\pm 6$~percent). \citet{Guarcello07} obtained similar results for NGC 6611 (of age $\sim 1$~Myr), finding $31.1\pm 4$~percent survival in their lowest bolometric flux bin, versus $16\pm 3$~percent in their highest \citep[see also][]{Guarcello09}. Similar evidence of shortened inner disc lifetimes has been reported in Arches \citep{Stolte10, Stolte15}, Trumpler 14, 15 and 16 \citep{Preibisch11}, NGC 6357 \citep[or Pismis 24 --][]{Fang12} and the comparatively low density OB association Cygnus OB2 \citep[][]{Guarcello16}.
Interestingly, the study by \citet{Fang12} of Pismis 24 also revealed a stellar mass dependent effect, wherein disc fractions are found to be lower for higher mass stars. This is the opposite of what might be expected from the dependence of $\dot{M}_\mathrm{ext}$ on the stellar mass; lower mass stars are more vulnerable to externally driven winds due to weaker gravitational binding of disc material \citep[e.g.][]{Haworth18b}. This finding, if generally true for irradiated disc populations, would therefore put constraints on how other processes in discs scale with stellar host mass. For example, more rapid (viscous) angular momentum transport for discs around higher mass stars would sustain higher mass loss rates for longer, due to the mass flux balance in the outer disc (see discussion in Section~\ref{sec:gas_evolution}). However, although accretion rates correlate with stellar mass \citep[][]{Herczeg08,Manara17}, this does not necessarily imply faster viscous evolution for discs with high mass host stars \citep{Somigliana22}; thus the physical reason for the \citet{Fang12} findings remains unclear. Whether discs around high mass stars are generally depleted faster than those around low mass stars may be tested by the upcoming JWST GTO program by \citet{JWST_Guarcello}, who aim to map out disc fractions and properties in Westerlund 1 for stars down to brown dwarf masses. This dataset should allow one to control both for stellar mass and location in the starburst cluster.
All of the regions mentioned above, in which evidence of inner disc dispersal has been inferred, share the property that they are sufficiently massive to host many O stars. A notable example for which shortened disc lifetimes are not observed is the ONC. Despite the rapid mass loss rates of the central proplyds, \citet{Hillenbrand98} inferred a high disc fraction of $\sim 80$~percent. This may be due to the large spread in ages \citep[typically estimated at $\sim 1{-}3$~Myr -- e.g.][]{Hillenbrand97, Palla99, Beccari17}, such that the resultant dynamical evolution \citep{Kroupa18, Winter19b} leads to the central concentration of young stars \citep{Hillenbrand97, Getman14, Beccari17}, as discussed in Section~\ref{sec:stellar_dynamics}. However, high survival rates may also reflect the inefficiency of inner disc clearing by photoevaporative winds in intermediate $F_\mathrm{FUV}$ environments. This would hint at inefficient angular momentum transport at large radii (due to dead zones, for example).
{Some other studies have also obtained null results when trying to find evidence of spatial gradients in the fraction of stars with NIR excess within SFRs. Studies searching for spatial correlations in NIR excess fraction are useful because they are not subject to the same large uncertainties in the stellar ages. However, for studies of this kind the question of membership criteria is of utmost importance: for a constant surface density of foreground/background contaminants, the outer regions of a SFR are more affected, presumably suppressing the apparent disc fraction there. One example of a search for these correlations is that of \citet{Richert15}, who found an absence of any correlation in the MYStIX catalogue \citep{Povich13} of NIR sources across several O star hosting SFRs. The methodology of \citet{Richert15} differed from other studies in that they adopted the metric of the aggregate density of disc-hosting or disc-free stars around O stars, rather than distances of these lower mass stars to their nearest O star. Similarly, \citet{Mendigutia22} used \textit{Gaia} photometry and astrometry to compare relative disc fractions in a sample of SFRs, finding no spatial correlations. This study did not use FUV flux/O star proximity, but binned stars into the inner $2$~pc versus the outer $2-20$~pc for each region. Neither study attempted an absolute measure of disc survival fractions, instead comparing relative numbers. They highlight that environmental depletion of inner discs is not necessarily ubiquitous in high mass SFRs. Physical considerations such as age and dynamics may also play an important role in whether correlations in disc survival fractions can be uncovered. }
In Figure~\ref{fig:lifetimes} we show the disc survival fraction versus SFR age for a composite sample, including a number of the massive regions discussed in this section. Setting aside the above caveats for uncertainties, particularly in the stellar ages, this demonstrates a clear shortening of disc lifetimes across numerous massive SFRs. Low mass SFRs typically have $\tau_\mathrm{life}\approx 3$~Myr, while the most massive SFRs may have $\tau_\mathrm{life} \lesssim 1$~Myr. This shortening of disc lifetimes has so far only been found in regions with a total FUV luminosity $L_\mathrm{FUV}\gtrsim 10^{39}$~erg~s$^{-1}$ (see Table~\ref{tab:SFR_props}). In the case of many of these massive regions, the shortening of disc lifetimes can also be seen in a local gradient, for which stars that are far from an O star have a greater probability of hosting a disc. This appears to convincingly demonstrate the influence of external photoevaporation on disc lifetimes. If this is the case, it implies that if gas giants form in these environments, they must do so early \citep[as suggested by some recent results -- e.g.][]{Segura-Cox20, Tychoniec20}.
\subsubsection{Outer disc properties}
\label{sec:outer_disc}
In recent times, ALMA has become the principal instrument for surveying outer disc properties in local star forming regions. Nonetheless, prior to this revolution a number of studies had already demonstrated, by degrees, the role of external photoevaporation in sculpting the outer regions of protoplanetary discs. For example, in the ONC, \citet{Henney98} and \citet{Henney99} used HST images and \textit{Keck} spectroscopy to demonstrate the rapid mass loss rates up to $\sim 10^{-6}\, M_\odot$~yr$^{-1}$ of several proplyds in the core of the region, and their concentration around the O star, $\theta^1$C. Later, \citet{Mann14} used SMA data to show a statistically significant dearth of discs with high dust masses close to $\theta^1$C \citep[see also][]{Mann10}. This result has since been confirmed via ALMA observations towards the core \citep{Eisner18} and outskirts \citep{vanTerwisga19} of the region. This is a clear demonstration that external photoevaporation depletes the dust content, although it remains unclear if this is via instigating rapid radial drift \citep{Sellek20} or via entrainment in the wind \citep{Miotello12}. \citet{Boyden20} also find that the gas discs are more compact than in other SFRs, with sizes correlated with distance to $\theta^1$C, suggestive of truncation by external photoevaporation.
Dust depletion has also been inferred in $\sigma$ Orionis. Here the dominant UV source is the $\sigma$~Ori multiple system, for which the most massive component has a mass $\sim 17\,M_\odot$ \citep{Schaefer16}. From \textit{Herschel} spectral energy distributions of discs in this region, \citet{Mauco16} found evidence of external depletion in the abundance of compact discs ($R_\mathrm{out}<80$~au) consistent with the models of \citet{Anderson13}. The authors also found evidence of ongoing photoevaporation in one disc exhibiting forbidden [Ne\,\textsc{ii}] line emission \citep[see also][]{Rigliaco09}. Later, \citet{Ansdell17} presented an ALMA survey that uncovered an absence of discs with high dust masses at distances $\lesssim 0.5$~pc from $\sigma$~Ori, consistent with models of dynamical evolution and disc depletion \citep{Winter20b}.
In NGC 2024, an early survey of the discs with the SMA presented by \citet{Mann15} revealed a more extended tail of high mass discs than those of other SFRs. The authors suggested this is indicative of a young population of discs that had not (yet) been depleted by external photoevaporation. However, \citet{vanTerwisga20} presented an ALMA survey of the region and found evidence of two distinct disc populations. The eastern population is very young ($\sim 0.2{-}0.5$~Myr old) and still embedded close to a dense molecular cloud that may shield them from irradiation. The western population is slightly older, exposed to UV photons from the nearby stars IRS 1 and IRS 2b. The western discs are lower mass than those in the east and those in similar age SFRs, which is probably due to external depletion. This conclusion is supported by the subsequent discovery of proplyd (candidates) in the region \citep{Haworth21}.
We have focused here on regions where evidence of external depletion appears to be present. It is challenging to convincingly demonstrate the converse by counter example, particularly considering the many potential issues discussed above (e.g. accurate stellar aging, projection effects and dynamical histories). As an example, in an ALMA survey of discs in $\lambda$~Orionis, \citet{Ansdell20} reported no correlation between disc dust mass and separation from the OB star $\lambda$~Ori. As discussed by the authors, a number of explanations for this are possible. One explanation is that, given the older age ($\sim 5$~Myr) of the region, the discs may have all reached a similar state of depletion, with spatial correlations washed out by dynamical mixing \citep[e.g.][]{Parker21}. Meanwhile, in a survey of non-irradiated discs, \citet{vanTerwisga22} demonstrated that discs in SFRs of similar ages appear to have similar dust masses, suggesting that any observed depletion in higher mass SFRs is probably the result of an environmental effect.
\begin{figure}
\vspace{-0.5cm}
\centering
\includegraphics[width=0.9\textwidth]{Figures/dliftimes.pdf}
\caption{The fraction of surviving inner discs as a function of age of star forming regions. Yellow points show a compilation of various star forming regions using data from \citet{Richert18} and \citet{Mamajek09}. We further show some specific SFRs discussed in the text and listed in Table~\ref{tab:SFR_props}. Massive and dense `starburst' clusters are shown as star symbols. Where a local gradient in the disc fraction has been reported in the literature, we show the outer regions as a square symbol. We show the fraction of discs following $f_\mathrm{discs} = \exp(-t_\mathrm{age}/\tau_\mathrm{life})$ for $\tau_\mathrm{life} = 1$ and $3$~Myr (dotted and dashed lines respectively). }
\label{fig:lifetimes}
\end{figure}
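The exponential survival law quoted in the caption of Figure~\ref{fig:lifetimes} is simple to evaluate; the following sketch (ours, for illustration only) compares the two lifetimes plotted there at a representative region age:

```python
import math

def disc_fraction(t_age_myr: float, tau_life_myr: float) -> float:
    """Surviving disc fraction f_discs = exp(-t_age / tau_life)."""
    return math.exp(-t_age_myr / tau_life_myr)

# Compare the two lifetimes shown in the figure at a region age of 2 Myr:
for tau in (1.0, 3.0):
    print(f"tau_life = {tau:.0f} Myr: f_discs(2 Myr) = {disc_fraction(2.0, tau):.2f}")
```

At an age of $2$~Myr this gives survival fractions of $\approx 0.14$ and $\approx 0.51$ for $\tau_\mathrm{life} = 1$ and $3$~Myr respectively, illustrating how sensitively the inferred lifetime depends on accurate stellar ages.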
\subsection{Future prospects}
Notably, we have not discussed chemistry in this section. The chemistry of discs \citep[e.g.][]{Bruderer12, Kamp17} and the planets they produce \citep[e.g.][]{Cridland16, Cridland19, Bitsch20} is a complex function of the stellar metallicity and local disc temperature. For irradiated discs, temperatures increase in the outer regions and disc surface layers with respect to isolated discs, altering the chemistry in these regions, although not necessarily in the disc mid-plane \citep[e.g.][]{Nguyen02, Walsh13}. However, how this alters planet formation and chemistry may depend on disc evolution, and this remains a matter for future investigation. This issue may also soon be addressed empirically via JWST observations, with the GTO programme by \citet{JWST_RamirezTannus} aiming to probe the chemistry of discs of similar age but varying FUV flux histories in NGC 6357.
Apart from chemistry, this section has highlighted the many gaps in the understanding of the role of external photoevaporation in shaping the evolution of the disc and the formation of planets. From both theory and observations, we understand that gas disc lifetimes are shortened by external photoevaporation. This shortened lifetime of the gas disc alone should be enough to influence the formation and disc induced migration of giant planets. Meanwhile, how the aggregation of dust produces planets in irradiated discs has only just started to be addressed \citep[e.g.][]{Ndugu18, Sellek20}. When coupled with the apparent prevalence of external photoevaporation in shaping the overall disc population, the role of star formation environment must be considered as a matter of urgency for planet population synthesis efforts \citep[e.g.][]{Alibert05, Ida05, Mordisini09, Emsenhuber21}. Meanwhile, the connection between stellar rotation periods and premature disc dispersal via external photoevaporation may in future offer a window into the birth environments of mature exoplanetary systems up to $\sim 1$~Gyr after their formation \citep{Roquette21}.
\begin{mdframed}
\vspace{-0.8cm}
\subsection{Summary and open questions for irradiated protoplanetary disc evolution}
While many aspects of disc evolution in irradiated environments remain uncertain, we can be relatively confident in the following conclusions:
\begin{enumerate}
\item External photoevaporation depletes (dust) masses and truncates the outer disc radii in regions of high FUV flux, at least for $F_\mathrm{FUV} \gtrsim 10^3\, G_0$.
\item Inner disc lifetimes appear to be shortened for discs in regions where the total FUV luminosity $L_\mathrm{FUV} \gtrsim 10^{39}$~erg~s$^{-1}$, while they do not appear to be strongly affected in regions with lower $L_\mathrm{FUV}$.
\item Dust evolution, low mass planet formation and giant planet formation all have the potential to be strongly influenced by external photoevaporation through changes in the temperature, mass budget, and time available for planet formation.
\end{enumerate}
\noindent Some of the many open questions in this area include:
\begin{enumerate}
\item \textit{Do the externally driven mass loss rates in photoevaporating discs balance with accretion rates, as expected from viscous angular momentum transport?}
\item \textit{If angular momentum transport is extracted via MHD winds rather than viscously transported, how does this influence the efficiency of external photoevaporation? }
\item \textit{Is the lack of correlation between shortened disc lifetimes and FUV flux in intermediate mass environments related to a dead-zone or similar suppression of angular momentum transport? }
\item \textit{Is the observed dust depletion in discs near O stars due to entrainment in the wind or rapid inward drift? }
\item \textit{In relation to this, how do external winds influence dust trapping and the onset of the streaming instability?}
\item \textit{Conversely, how does the trapping of dust influence the dust depletion and dust-to-gas ratio in discs? }
\item {\textit{How does disc chemistry vary between isolated and externally FUV irradiated discs?}}
\item \textit{How does the local distribution of FUV environments influence the planet population as a whole?}
\end{enumerate}
\end{mdframed}
\section{Demographics of star forming regions}
\label{sec:SFRs}
\label{sec:sf_physics}
In this section, we consider the properties of star forming regions and the discs they host from an observational and theoretical perspective. We are interested in understanding: `what is the role of external photoevaporation of a typical planet-forming disc in the host star's birth environment?' The degree to which protoplanetary discs are influenced by external irradiation depends on exposure to EUV and FUV photons. Hence the overall prevalence of external photoevaporation for discs (and probably the planets that form within them), depends critically on the physics of star formation and the demographics of star forming regions.
\subsection{OB star formation}
\label{sec:OBstar}
OB stars have spectral types earlier than B3, and are high mass stars with luminosities $> 10^3\, L_\odot$ and masses $>8 \, M_\odot$. These stars are responsible for shaping a wide range of physical phenomena, from molecular cloud-scale feedback on Myr timescales \citep{Krumholz14, Dale15} to galactic nucleosynthesis over cosmic time \citep{Nomoto13}. The radiation feedback of these stars on their surroundings already plays a significant role during their formation stage, where stars greater than a few $10\,M_\odot$ in mass must overcome the UV radiation pressure problem \citep{Wolfire87}. Forming OB stars may do this through monolithic turbulent collapse \citep[e.g.][]{McKee03}, early core mergers \citep[e.g.][]{Bonnell98}, or competitive accretion \citep[e.g.][]{Bonnell01} -- see \citet{Krumholz15} for a review.
Two questions regarding OB stars are important with respect to local circumstellar disc evolution: `when do they form?' and `how many of them are there?'. The former determines when the local discs are first exposed to strong external irradiation. The latter determines the strength and, importantly, the spatial uniformity of the UV field (we discuss this point further in Section~\ref{sec:stellar_dynamics}).
The question of the timescale for formation of massive stars is empirically tied to the frequency of ultra-compact HII (UCHII) regions. These regions are the small (diameters $\lesssim 0.1$~pc) and dense (HII densities $\gtrsim 10^4 $~cm$^{-3}$) precursors to massive stars. During the main sequence lifetime of the central massive star, the associated HII region will evolve from the embedded ultra compact state to a diffuse nebula. In the context of the surrounding circumstellar discs, the lifetime of a UCHII region represents the length of time for which the surroundings of massive stars are efficiently shielded from UV photons. These lifetimes can be inferred by comparing the number of main-sequence stars to the number of observed UCHII regions, yielding timescales of a few $10^5$~yrs \citep{Wood89,Mottram11}. The star-less phase prior to excitation of the UCHII region is short \citep[$\sim 10^4$~yr --][]{Motte07, Tige17}, hence the UCHII lifetime represents the effective formation timescale for massive stars in young clusters and associations. This formation timescale is much shorter than the typical lifetime for protoplanetary discs evolving in isolation \citep[$\sim 3$~Myr -- e.g.][]{Haisch01, Ribas15}. Therefore, in regions with many OB stars we expect the local discs to be irradiated throughout this lifetime. However, this need not be the case when the number of OB stars is $\sim 1$, in which case the time at which an OB star forms may be statistically determined by the spread of stellar ages in the SFR. We discuss this point further in Section~\ref{sec:stellar_dynamics}.
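The count-based lifetime argument can be made concrete with a back-of-the-envelope sketch. The numbers below are illustrative placeholders of our own choosing (not figures from the cited surveys), chosen only to show how the ratio of counts, multiplied by the main-sequence lifetime, yields the quoted order of magnitude:

```python
# Statistical lifetime estimate: if UCHII regions and main-sequence O stars
# are drawn from the same steady-state population, the phase lifetime is
#   t_UCHII ~ (N_UCHII / N_MS) * t_MS.
t_ms = 3.0e6     # yr; representative O-star main-sequence lifetime (assumed)
n_uchii = 100    # hypothetical number of observed UCHII regions
n_ms = 1000      # hypothetical number of main-sequence O stars in the same volume

t_uchii = (n_uchii / n_ms) * t_ms
print(f"t_UCHII ~ {t_uchii:.1e} yr")  # a few 1e5 yr, as inferred observationally
```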
The relative contribution of stars of a given mass to the local UV radiation field depends on the relative numbers of stars with this mass, or the initial mass function (IMF), $\mathrm{d} n /\mathrm{d} \log m_*$. We can write the mean total luminosity $\langle L_\mathrm{SFR} \rangle$ of a star forming region (SFR) with $N$ members:
\begin{equation}
\label{eq:L_mean}
\langle L_\mathrm{SFR} \rangle = N\langle L \rangle = N \int \frac{\mathrm{d} n }{\mathrm{d} \log m_* } \cdot L(m_*) \,\mathrm{d} \log m_* ,
\end{equation}where $L$ is the luminosity of a single star. Hence, to understand the contribution of massive stars to the UV luminosity, we are interested in the shape of the IMF. The IMF in local SFRs exhibits a peak at stellar masses $m_* \sim 0.1{-}1\, M_\odot$ and a steep power law $\mathrm{d} n /\mathrm{d} \log m_* \propto m_*^{-\Gamma}$ with $\Gamma \approx 1.35$ at higher masses \citep[see][for a review]{Bastian10}. This power-law appears reasonable at least up to $m_* \sim 100\, M_\odot$ \citep{Massey98, Espinoza09}.
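The dominance of the high-mass tail can be verified with a quick Monte Carlo sketch. This is purely illustrative (it is not the calculation behind Figure~\ref{fig:stellar_lums}): we sample only the $\Gamma=1.35$ power-law portion of the IMF and adopt the crude main-sequence scaling $L \propto m_*^{3.5}$ as a stand-in for the luminosity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (not the paper's model): draw stellar masses from a
# power-law IMF dn/dlog m ~ m^-1.35 over 0.5-100 Msun via
# inverse-transform sampling, with L ~ m^3.5 as a luminosity proxy.
gamma = 1.35                # high-mass IMF slope
m_lo, m_hi = 0.5, 100.0

a = -(gamma + 1.0)          # dn/dm ~ m^a
u = rng.uniform(size=100_000)
m = (m_lo**(a + 1) + u * (m_hi**(a + 1) - m_lo**(a + 1)))**(1.0 / (a + 1))

L = m**3.5                  # crude luminosity proxy (assumption)

massive = m > 30.0
frac_by_number = massive.mean()
frac_by_lum = L[massive].sum() / L.sum()
print(f"stars above 30 Msun: {frac_by_number:.3%} by number, "
      f"{frac_by_lum:.1%} of total luminosity")
```

Even with this toy luminosity scaling, stars above $30\, M_\odot$ make up well under one percent of stars by number yet contribute the large majority of the total luminosity.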
We combine the mass-dependent luminosity for young stars, as in the left panel of Figure~\ref{fig:stellar_lums}, with the \citet{Chabrier03} IMF to produce the right panel of Figure~\ref{fig:stellar_lums}. Here we multiply the stellar mass dependent luminosity by the IMF, as in the integrand on the RHS of equation~\ref{eq:L_mean}, to yield the average contribution of stars of a given mass to the FUV and EUV luminosity of a SFR. Note that this is only accurate for a very massive SFR, where the IMF is well sampled. However, the figure demonstrates that for low mass SFRs -- meaning regions where the IMF is not well sampled for stellar masses $m_*\gtrsim 30\, M_\odot$ -- both EUV and FUV luminosities are dominated by individual massive stars, rather than many low mass stars. In the following, we consider this in the context of the properties of local star forming regions.
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.5\textwidth]{Figures/UV_lums.pdf}}
\subfloat{\includegraphics[width=0.5\textwidth]{Figures/mst_contribution.pdf}}
\caption{\textit{Left panel:} stellar FUV and EUV luminosity as a function of stellar mass. The total luminosity is calculated using the effective temperature and total luminosity from the model grids of \citet{Schaller92}, with metallicity $Z=0.02$ and output time closest to $1$~Myr. We apply the atmosphere models of \citet{Castelli03} to give the wavelength-dependent luminosity. \textit{Right panel:} the relative contribution of stars of a given mass to the total luminosity averaged across the IMF. Here we use a log-normal \citet{Chabrier03} sub-solar IMF, and a power law with $\Gamma=1.35$ for $m_*>1\, M_\odot$. }
\label{fig:stellar_lums}
\end{figure}
\subsection{Demographics of star forming regions}
\label{sec:demographics_SFRs}
We now consider the FUV flux experienced by protoplanetary discs in typical SFRs. We will focus on the FUV, since, as discussed in Section~\ref{sec:gas_evolution}, the FUV is expected to dominate disc evolution over the lifetime of the disc. The word `typical' in the context of SFRs is in fact dependent on the local properties within a galaxy, and we here refer exclusively to the solar neighbourhood (at distances $\lesssim 2$~kpc from the sun). A number of studies have approached this problem in differing ways \citep{Fatuzzo08,Thompson13,Winter20a, Lee20}; however, all such efforts require estimating the statistical distribution ${\mathrm{d} n_\mathrm{SFR}}/{\mathrm{d} N}$ in the number of members $N$ of SFRs, which dictates how many OB stars there are and therefore the local UV luminosity. For example, \citet{Fatuzzo08} used two different approaches to this problem. One was to take the distribution directly from the list of nearby SFRs compiled by \citet{Lada03}. While this is the most direct approach, it suffers from small number statistics for massive SFRs. The other was to assume a log-uniform distribution for the number of stars in SFRs with $N$ members, truncated outside a minimum $N_\mathrm{min}=40$ and maximum $N_\mathrm{max}=10^5$. Alternatively, one can produce a similar distribution using a smooth \citet{Schechter76} function \citep[see e.g.][]{Gieles06b}:
\begin{equation}
\label{eq:SFRmfunc}
\frac{\mathrm{d} n_\mathrm{SFR}}{\mathrm{d} N} \propto N^{-\beta} \exp\left(\frac{-N}{N_\mathrm{max}} \right) \exp\left(-\frac{N_\mathrm{min}}{N} \right),
\end{equation}
where $\beta\approx 2$ is expected from the hierarchical collapse of molecular clouds \citep{Elmegreen96} and is consistent with empirical constraints \citep[e.g.][]{Gieles06b, Fall12, Chandar15}, as well as with the log-uniform distribution adopted by \citet[][n.b. that to obtain the fraction of \textbf{stars}, one must multiply the mass function described by equation~\ref{eq:SFRmfunc} by a factor $N$]{Fatuzzo08}.
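Equation~\ref{eq:SFRmfunc} has no convenient closed-form inverse, but it is straightforward to draw from numerically. The following sketch (our illustration, not code from any cited work) tabulates the mass function on a logarithmic grid and samples from it, using the $N_\mathrm{min}$ and $N_\mathrm{max}$ values adopted later in the text:

```python
import numpy as np

beta, N_min, N_max = 2.0, 280.0, 7e4

# Tabulate the (unnormalised) mass function on a logarithmic grid of N
N_grid = np.logspace(1, 6, 4000)
pdf = N_grid**(-beta) * np.exp(-N_grid / N_max) * np.exp(-N_min / N_grid)
p = pdf * np.gradient(N_grid)   # weight each grid point by its bin width
p /= p.sum()

rng = np.random.default_rng(1)
draws = rng.choice(N_grid, size=50_000, p=p)   # N for each synthetic SFR

print(f"median region has N ~ {np.median(draws):.0f} members")
# Weighting each region by N counts *stars* rather than regions:
print(f"star-weighted mean N ~ {np.average(draws, weights=draws):.0f}")
```

The contrast between the two printed numbers illustrates the point made in the text: most \emph{regions} are small, but most \emph{stars} reside in large regions.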
In order to interpret equation~\ref{eq:SFRmfunc}, we further need to estimate $N_\mathrm{min}$ and $N_\mathrm{max}$. The upper limit $N_\mathrm{max}$ can be inferred empirically \citep{Gieles06b, Gieles06}, or by appealing to theoretical limits. As elucidated by \citet{Toomre64}, the origin of the maximum mass of a SFR can be understood in terms of the maximum size of a region that can overcome the galactic shear (the Toomre length) and the local kinetic pressure (the Jeans length). This length scale may then be combined with the local surface density of a flattened disc to obtain a maximum mass for a SFR. However, in the outer regions of the Milky Way, stellar feedback in massive SFRs can interrupt star formation and further reduce $N_\mathrm{max}$. Adopting the simple model presented by \citet{ReinaCampos17} for this limit with a typical stellar mass $m_*\sim 0.5\,M_\odot$, this yields a maximum $ N_\mathrm{max} \sim 7 \times 10^4$ for all SFRs (not necessarily gravitationally bound). This is broadly consistent with the most massive local young stellar clusters and associations \citep{PortegiesZwart10}.
Meanwhile, for the minimum number of members in a SFR, \citet{Lamers05} used the sample of \citet{Kharchenko05} to infer a dearth of SFRs with $N<N_\mathrm{min} \sim 280$. Such a lower limit might be understood as the point at which the lower mass end of the hierarchical molecular cloud complex merges due to high star formation efficiency and long formation timescales \citep{Truijillo19}. However, obtaining constraints on $N_\mathrm{min}$ in general is made difficult by the lack of completeness in surveys of low mass stellar aggregates, and it remains empirically unclear how $N_\mathrm{min}$ varies with galactic environment. Note that generally SFR demographics may depend on galactic environment, and therefore also vary over cosmic time \citep[see][for a review]{Adamo20}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Figures/typical_regions.pdf}
\caption{\textit{Left panel:} the FUV luminosities at an age of $1$~Myr of $6\times 10^4$ realisations of star forming regions (SFRs), with the number of members drawn from equation~\ref{eq:SFRmfunc} \citep[cf.][Figure 5b]{Fatuzzo08}. We draw individual stellar masses from a \citet{Chabrier03} IMF in the range $0.08{-}100\, M_\odot$. The black points represent a subset of 1000 of the synthetic SFRs. The solid red line follows the mean luminosity as a function of the number of members (equation~\ref{eq:L_mean}), the dashed red line is the median, while the dotted lines bound the $14^\mathrm{th}{-}86^{\mathrm{th}}$ percentile range. The colour bar shows an estimate for the relative density of \textbf{stars} (i.e. density of SFRs multiplied by the number $N$ of members) in logarithmic space using a Gaussian kernel density estimate with a bandwidth $0.1$~dex. We estimate the numbers of stars and total luminosity of three local SFRs: Taurus (green), the ONC (purple) and Westerlund 1 (orange). \textit{Right panel:} the corresponding distribution in the number of stars (red) and SFRs (black) per logarithmic bin in total FUV luminosity space. }
\label{fig:demographics}
\end{figure}
With the above considerations, we can now generate the distribution of UV luminosities experienced by stars in their stellar birth environment. We adopt $N_\mathrm{min}=280$ and $N_\mathrm{max}=7\times10^4$ and couple equation~\ref{eq:SFRmfunc} with the stellar mass dependent luminosity discussed in Section~\ref{sec:OBstar}. The resultant FUV luminosity distribution when drawing $6\times 10^4$ SFRs, each with $N$ members drawn from equation~\ref{eq:SFRmfunc}, is shown in Figure~\ref{fig:demographics}. For context, we estimate the FUV luminosity in Taurus using the census of members by \citet{Luhman18}, assuming the local luminosity is dominated by four B9 and three A0 stars in that sample of 438 members. We also add the Orion Nebula cluster (ONC), whose total UV luminosity is dominated by the O7-5.5 type star $\theta^1$C, with a mass of $\sim 35 \, M_\odot$ \citep[e.g.][]{Kraus09, Balega14}, and Westerlund 1 which has a total mass of $\sim 6\times 10^4 \, M_\odot$ \citep{Mengel07, Lim13} and thus a well sampled IMF. Regions like Taurus with only a few hundred members are common, and therefore we expect to find such regions nearby. However, each one hosts $\sim 1/1000$ as many stars as a starburst cluster like Westerlund 1. Taurus thus represents an uncommon stellar birth environment in terms of the FUV luminosity. The most common type of birth environment for stars lies somewhere between the ONC and Westerlund 1, with FUV luminosity $\sim 10^{40}$~erg~s$^{-1}$ (at an age of $ 1$~Myr).
In order to understand the typical flux experienced by a circumstellar disc, we further need to consider the typical distance between it and nearby OB stars -- i.e. the geometry of the SFR. To this end, \citet[][see also \citealt{Adams06}]{Fatuzzo08} and \citet{Lee20} use the \citet{Larson81} relation that giant molecular clouds and young, embedded SFRs have a size-independent surface density \citep[see also][]{Solomon87}; hence the radius $R$ of the SFR scales as $R \propto \sqrt{N}$ \citep{Adams06, Allen07}. However, this relationship does not hold for very young massive clusters \citep[see][for a review]{PortegiesZwart10}, with several exhibiting comparatively high densities \citep[e.g. Westerlund 1 --][]{Mengel07}, while older clusters follow a shallower mass-radius relationship $R \propto N^\alpha$ with $\alpha \sim 0.2{-}0.3$ \citep{Krumholz19}. Instead of the Larson relation, \citet{Winter20a} attempt to relate SFR demographics to galactic-scale physics by using the lognormal density distribution of turbulent, high Mach number flows combined with a theoretical star formation efficiency. This yields the stellar density distribution in a given galactic environment. In massive local SFRs, the local FUV flux $F_\mathrm{FUV} \approx 1400 \, (\rho_*/10^3 M_\odot\,\rm{pc}^{-3})^{1/2} \, G_0$, for stellar density $\rho_*$, due to their radial density profiles \citep{Winter18b}. Using these two ingredients, it is possible to estimate the distribution of FUV fluxes experienced by stars in the solar neighbourhood from a mass function for SFRs. Despite the differences in the two approaches, both \citet{Fatuzzo08} and \citet{Winter20a} find that, neglecting extinction, young stars in typical stellar birth environments experience external FUV fluxes in the range $\langle F_\mathrm{FUV} \rangle \sim 10^3{-}10^4 \, G_0$.
While these efforts produce some intuition as to the typical FUV radiation fields, this is not the end of the story for understanding how discs in SFRs are exposed to UV photons. In Sections~\ref{sec:extinction} and~\ref{sec:stellar_dynamics} we will discuss the role of interstellar extinction and the dynamical evolution of stars in SFRs.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Figures/extinction.pdf}
\caption{FUV flux experienced by stars formed in the hydrodynamic simulations of \citet{Ali21}, following the collapse of a molecular cloud of mass $10^5 \, M_\odot$ at time $0.5$, $1.0$ and $1.5$~Myr from left to right. \textit{Top row:} the gas column density and location of sink particles as points, with white points representing ionising sources. \textit{Bottom row:} histogram of the number of sources binned by instantaneous FUV flux \citep[see also][]{Qiao22}. Comparatively few stars are born in the early stages of the cluster formation, where attenuation of FUV photons is efficient. }
\label{fig:extinction}
\end{figure}
\subsection{Extinction due to the inter-stellar medium}
\label{sec:extinction}
After stars begin to form, residual gas and, more importantly, the dust in the inter-stellar medium (ISM) attenuates FUV photons such that circumstellar discs may be shielded from external photoevaporation. The column density required for one magnitude of extinction at visual magnitudes is $N_\mathrm{H}/A_\mathrm{V} = 1.8\times10^{21}$~cm$^{-2}$~mag$^{-1}$ \citep{Predehl95}, and this can be multiplied by a factor $A_\mathrm{FUV}/A_\mathrm{V} \approx 2.7$ \citep{Cardelli89} to yield the corresponding FUV extinction. The main problem comes in estimating the column density between OB stars and the cluster members. Both \citet{Fatuzzo08} and \citet{Winter20a} approach this problem by adopting smooth, spherically symmetric density profiles with some assumed star formation efficiency to estimate this column density. However, even during the embedded phase this approach overestimates the role of extinction because a more realistic, clumpy geometry of the gas makes the attenuation inefficient \citep[e.g.][]{Bethell07}. In addition, stellar feedback from massive stars acts to drive gas outflows from the SFR even before the ignition of local supernovae \citep[e.g.][]{Lyon1980,Matzner02, Jeffreson21}, reducing the quantity of attenuating matter.
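The two conversion factors quoted above combine into a one-line estimate of FUV attenuation for a given hydrogen column. The following sketch is purely illustrative; the function name and fiducial column are our own choices:

```python
# Illustrative conversion using the gas-to-extinction ratio and the
# FUV-to-visual extinction ratio quoted in the text.
NH_PER_AV = 1.8e21       # cm^-2 per magnitude of visual extinction
AFUV_OVER_AV = 2.7       # A_FUV / A_V

def fuv_attenuation(N_H):
    """Return (A_FUV in mag, transmitted flux fraction) for a column N_H in cm^-2."""
    A_V = N_H / NH_PER_AV
    A_FUV = AFUV_OVER_AV * A_V
    return A_FUV, 10.0 ** (-0.4 * A_FUV)   # magnitudes -> flux ratio

A_FUV, transmitted = fuv_attenuation(1e21)
print(f"N_H = 1e21 cm^-2  ->  A_FUV = {A_FUV:.2f} mag, transmits {transmitted:.1%}")
```

A column of $10^{21}$~cm$^{-2}$ thus already suppresses the FUV flux by a factor of a few, which is why the clumpiness and dispersal of the residual gas matter so much for the irradiation history.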
Due to the above concerns, understanding the influence of extinction on shielding the young disc population requires direct simulation of feedback in the molecular cloud. To approach this problem, \citet{Ali19} simulated feedback from a single $34\, M_\odot$ mass star in a molecular cloud of mass $10^4 \, M_\odot$, similar to the conditions in the ONC. The authors find that many discs are efficiently shielded for the first $\sim 0.5$~Myr of evolution, while the discs may experience short drops in UV exposure at later stages. This result is somewhat dependent on cloud metallicity, since lower metallicity increases the efficiency of feedback by lengthening the cooling time \citep{Ali21}.
If more than one massive star forms in a SFR, this further increases feedback efficiency and geometrically reduces the efficiency of extinction. \citet{Qiao22} investigated the role of feedback in the simulations of a molecular cloud of mass $10^5 \, M_\odot$ by \citet{Ali21}. The resultant FUV flux experienced by the stellar population is shown in Figure~\ref{fig:extinction}. For such a massive region with several O stars, lower mass disc-hosting stars typically experience extremely strong irradiation by the time they reach an age of $\sim 0.5$~Myr. Although extinction may be efficient for the stars that form early, by the time the majority of stars form they are typically exposed to $F_\mathrm{FUV} \sim 10^5 \, G_0$. Hence the embedded phase of evolution in massive SFRs can only shield discs early on. Any giant planet formation in such a region must therefore initiate early and may be strongly influenced by the environment (see Section~\ref{sec:giant_planets}).
\subsection{The role of stellar dynamics}
\label{sec:stellar_dynamics}
In some instances, the FUV exposure history of stars may strongly depend on the dynamical evolution of the SFR. This is particularly true when only one dominant OB star is present. For example, the difference in $F_\mathrm{FUV}$ for a disc at a separation $d = 0.05$~pc from a single massive star (as for some of the brightest proplyds in the ONC) and those at $d=2$~pc (a typical distance for discs in the ONC) is a factor $1600$. Hence the dynamical evolution of the star and the SFR can play a major role in the historic UV exposure of any observed star-disc system.
Many studies have performed N-body simulations of SFRs to track the exposure of star-disc systems, either with the aim of quantifying general trends \citep{Holden11, Nicholson19,Concha-Ramirez19,Parker21} or modelling specific regions \citep{Scally01, Winter19b, Winter20b}. These studies generally adopt the FUV driven mass loss rates given by the {FRIED} grid \citep{Haworth18b} with an external flux computed directly from N-body simulations. In general, these studies find that SFRs with typical stellar densities $\rho_* \gtrsim 100\, M_\odot\,\rm{pc}^{-3}$ are sufficient to rapidly deplete protoplanetary discs \citep[e.g.][]{Nicholson19, Concha-Ramirez19, Concha-Ramirez21}. Similarly, discs are more rapidly depleted in sub-virial initial stellar populations that undergo cold collapse, as a result of the higher densities and therefore UV flux exposure \citep{Nicholson19}. However, initial substructure has a minimal effect on external photoevaporation since UV fields are generally dominated by the most massive local stars and not necessarily nearest neighbours \citep{Nicholson19, Parker21}. Thus it is volume-averaged rather than star-averaged density measures in a SFR that act as a proxy for the typical external UV flux. This is not necessarily true for the bolometric flux, which may still be dominated by the closest neighbours in a highly structured SFR \citep{Lee20}.
For an example of how the dynamics in SFRs may alter disc properties, we need only look at the closest intermediate mass SFR to the sun: the ONC. This region has historically dominated the study of externally photoevaporating protoplanetary discs since the discovery of proplyds \citep[e.g.][]{O'Dell94}. Surprisingly, up to $\sim 80$~percent of stars exhibit a NIR excess \citep{Hillenbrand98}, indicating inner-disc retention. This finding is apparently inconsistent with the $\sim 1{-}3$~Myr age \citep{Hillenbrand97, Palla99} when accounting for reasonable stellar dynamics, mass loss rates and initial disc masses \citep[e.g.][]{Churchwell87, Scally01}. The ONC contains one O star, $\theta^1$C, that dominates the local UV luminosity, tying the FUV history of the local circumstellar discs precariously to the history of this single star. Indeed, for this reason intermediate mass star forming regions may be subject to multiple bursts of star formation due to the periodic ejection of individual massive stars \citep{Kroupa18}. This possibility seems to be supported by the age distribution of stars in the ONC \citep{Beccari17, Jerabkova19}. \citet{Winter19b} showed how, when considering the gravitational potential of residual gas, such episodic star formation can yield inward migration of young stars and outward migration of older stars. This yields a stellar age gradient as observed in the ONC \citep{Hillenbrand97}. As a result, it is the much younger discs that experience the strongest UV fluxes, such that inner disc survival over their lifetimes is feasible. Similar core-halo age gradients have been reported in other star forming regions \citep[e.g.][]{Getman14}, highlighting the importance of interpreting disc properties through the lens of the dynamical processes in SFRs.
Unlike regions with one O star, dynamical evolution may not be such an important consideration for discs in massive SFRs. For example, tracking the UV fluxes experienced by stars in the neighbourhood of numerous OB stars, \citet{Qiao22} find that the vast majority of stars experience $F_\mathrm{FUV} \sim 10^5 \, G_0$ at an age of $1$~Myr. The relatively uniform flux in regions with multiple OB stars may go some way to explaining the absence of correlation between disc mass and location found in some simulations \citep{Parker21}. By contrast, \citet{Winter20b} reproduce a disc mass-separation correlation in the relatively low mass $\sigma$ Orionis region \citep{Ansdell17}, which is occupied by a single massive star (see Section~\ref{sec:surveys}). In either case, it is clear that the physics of star formation cannot be ignored when interpreting the properties of protoplanetary discs, and probably the resultant planetary systems.
\begin{mdframed}
\vspace{-0.8cm}
\subsection{Summary and open questions for the demographics of star forming regions}
Based on the previous discussion, the following conclusions represent the current understanding of the demographics of SFRs in the context of external disc irradiation:
\begin{enumerate}
\item When accounting for a standard IMF, the total FUV and EUV luminosity of a SFR is dominated by stars of mass $\gtrsim 30 \, M_\odot$.
\item Although most SFRs are low mass and have few OB stars, the majority of stars form in regions with a total FUV luminosity greater than that of the ONC ($\gtrsim 10^{39}$~erg~s$^{-1}$).
\item In the absence of interstellar extinction, typical FUV fluxes experienced by star-disc systems in the solar neighbourhood are $\langle F_\mathrm{FUV} \rangle \sim 10^3{-} 10^4 \, G_0$. This is enough to shorten their lifetime with respect to the typical $\sim 3$~Myr for isolated discs.
\item Extinction due to residual gas in SFRs can shield some young circumstellar discs, with ages $\lesssim 0.5$~Myr. However, at later times extinction is inefficient at shielding discs, and unattenuated FUV photons may reach protoplanetary discs.
\end{enumerate}
Some of the open questions that remain in quantifying the typical external environments for planet formation are the following:
\begin{enumerate}
\item \textit{How important are the dynamics of SFRs in determining FUV exposure for populations of discs, and how much does this vary between SFRs? }
\item \textit{How do FUV exposure and external photoevaporation of typical protoplanetary discs vary with cosmic time?}
\item \textit{Do particular types of planetary system preferentially form in certain galactic environments? }
\end{enumerate}
\end{mdframed}
\label{sec:summary}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Figures/Review_figure.pdf}
\caption{A cartoon for how protoplanetary disc evolution and planet formation proceeds in weakly-irradiated, low mass SFRs (top) and strongly-irradiated, high mass SFRs (bottom). We consider two identical, initially large discs with outer radii of $\sim 100$~au. In the high mass SFR, any neighbouring massive stars may initially be extincted by a UCHII region for $\sim 10^5$~yrs. However, before the star-planet system is $0.5$~Myr old, extinction in the SFR will typically become inefficient and the disc is irradiated by a strong UV field. This produces a bright, extended ionisation front (IF) and rapidly truncates the disc down to a few $10$~au. This may also interrupt the early giant planet formation that proceeds in the outer regions of an isolated disc in a low FUV environment. The irradiated disc is now small, which results in slow mass loss rates and a smaller IF. Given its compact size, the time-scale for viscous draining of the irradiated disc becomes short compared to its isolated counterpart. This can lead to premature clearing of disc material earlier than the typical $\sim 3{-}10$~Myr for which isolated discs survive. }
\label{fig:summary_fig}
\end{figure}
\section{Summary and future prospects}
In this review, we have discussed many aspects of the process of external photoevaporation, covering basic physics, observational signatures, consequences for planet formation and prevalence across typical star forming regions. While numerous open questions remain, with the current understanding we can make an educated guess at how protoplanetary discs evolve in different star forming regions. In Figure~\ref{fig:summary_fig} we show a cartoon of how disc evolution proceeds in low and high FUV flux environments. The early period of efficient extinction in SFRs typically lasts less than $0.5$~Myr of a disc's lifetime \citep{Qiao22}. After this, studies of individual externally irradiated discs in proplyd form have demonstrated mass loss rates up to $\sim 10^{-7}{-}10^{-6} \, M_\odot$~yr$^{-1}$ \citep{Weidenschilling77,O'Dell94, Henney99}. This extended proplyd state is short-lived, but the disc is rapidly eroded outside-in during this period. The influence of the external wind on the outer disc can result in dust loss via entrainment in the wind \citep{Miotello12} or due to rapid inward migration of large grains \citep{Sellek20}. It can also interrupt any giant planet formation that occurs on this time-scale in the outer disc \citep{Sheehan18, Segura-Cox20, Tychoniec20}. Given the shorter viscous time-scale of the compact disc, this can lead to a rapid clearing of the disc material similar to that expected after gap opening in internally photoevaporating discs \citep{Clarke01}. This is corroborated by the observed shortening of inner disc lifetimes in several massive local SFRs \citep[e.g.][]{Preibisch11, Stolte15,Guarcello16}, however only those in which there are several O stars and a total FUV luminosity $L_\mathrm{FUV} \gtrsim 10^{39}$~erg~s$^{-1}$.
In these extreme environments, where high FUV fluxes are sustained over the disc lifetimes, external photoevaporation presumably shuts off accretion onto growing planets and curtails inward migration, possibly leaving behind relatively low mass outer planets \citep{Winter22}.
Within this framework several questions arise, requiring both theoretical and empirical future study. As an example, models for the chemical evolution of planetary discs and planets in irradiated environments, and their expected observational signatures, are urgently needed. Upcoming JWST investigations of high UV environments may shed some light on disc chemistry from an observational perspective \citep[e.g.][]{JWST_RamirezTannus}. Inferring mass loss rates for moderately photoevaporating discs that do not exhibit bright ionisation fronts is also a crucial test of photodissociation region models. Meanwhile, perhaps the biggest problem for planet population synthesis efforts is determining how the solid content evolves differently in photoevaporating discs; when and where do solid cores form in the discs in high mass versus low mass SFRs? These are just some of the numerous questions that remain open towards the goal of understanding external photoevaporation.
In conclusion, the process of external photoevaporation appears to be an important aspect of planet formation, although the full extent of its influence remains uncertain. Future efforts in both theory and observations are required to determine how it alters the evolution of a protoplanetary disc, and the consequences for the observed (exo)planet population.
\section*{Acknowledgements}
We thank Cathie Clarke for helpful comments and Arjan Bik for discussions and tables regarding disc lifetimes. We further thank Stefano Facchini, Sabine Richling, Tiia Grenman, Richard Teague, Andrew Sellek, and Ahmad Ali for permission to use the material in Figures 3, 6, 7, 9 and 16 respectively. AJW thanks the Alexander von Humboldt Stiftung for their support in the form of a Humboldt Fellowship. TJH is funded by a Royal Society Dorothy Hodgkin Fellowship.
\bibliographystyle{apalike}
\section{Introduction and Results}
\subsection{Stable sets and stable manifolds}
One of the most fundamental concepts in the modern geometric theory
of dynamical systems is that of the \emph{stable set} associated to a
point: given a map \( \varphi: M \to M \) on a metric space
\( M \), and a point \(
z\in M \) we define the (global) \emph{ stable set} of \( z \) as
\[
W^{s}(z) = \{x\in M :
d(\varphi^{k} (x),
\varphi^{k}(z)) \to 0 \text{ as } k\to \infty\}.
\]
If \( \varphi \) is invertible, the (global) \emph{unstable set} can be defined
in the same way by taking \( k\to -\infty \). The situation is completely
analogous and so we will concentrate here on stable sets.
This definition gives an equivalence relation on \( M \) which
defines a partition into sets
which are invariant under the action of \( \varphi \) and which are
formed of orbits which have the same asymptotic behaviour. An
understanding of the geometry of the stable and unstable sets of
different points, of how they depend on the base point \( z \), and of
how they intersect, forms the
core of many powerful arguments related to all kinds of
properties of dynamical systems, from ergodicity to structural
stability to estimates on decay of correlations.
In general \( W^{s}(z) \) can be
extremely complicated, both in its intrinsic geometry and
in the way it is embedded in \( M \). A first step towards
understanding this complexity is to focus on the
\emph{local stable set}
\[
W^{s}_{\varepsilon}(z) = \{x \in W^{s}(z): d(\varphi^{k} (x),
\varphi^{k}(z)) \leq \varepsilon \ \forall \ k\geq 0\}.
\]
A key observation is that the local stable
set may, under suitable conditions,
have a regular geometrical structure.
In particular, if \( M \) is a smooth Riemannian manifold
and \( \varphi \) is a differentiable map, a typical
statement of a ``Stable Manifold Theorem''
is the following:
\begin{center}\(
W^{s}_{\varepsilon}(z) \) \emph{is a smooth submanifold of} \( M \).
\end{center}
This implies in particular that the global stable manifold, which can
be written as
\[
W^{s}(z) = \bigcup_{n\geq 0} \varphi^{-n}(W^{s}_{\varepsilon}(z)),
\]
is also a smooth (immersed) submanifold of \( M \); it may however fail to be an
\emph{embedded} manifold (i.e. a manifold in the topology induced from
\( M \)) due to the complicated way in which it may twist
back on itself.
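A minimal linear example (standard, and not taken from the discussion above) makes the statement concrete:

```latex
% For the hyperbolic linear map \varphi(x,y) = (x/2,\, 2y) on
% M = \mathbb{R}^2 with fixed point z = 0, the iterates are
% \varphi^k(x,y) = (2^{-k}x,\, 2^k y), so d(\varphi^k(x,y),0) \to 0
% if and only if y = 0. Hence
\[
W^{s}(0) = \{(x,0) : x \in \mathbb{R}\}, \qquad
W^{u}(0) = \{(0,y) : y \in \mathbb{R}\},
\]
% and W^{s}_{\varepsilon}(0) = \{(x,0) : |x| \le \varepsilon\} is a
% smooth submanifold, exactly as the Stable Manifold Theorem asserts.
```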
\subsection{Historical remarks}
\label{hist}
As befits such a fundamental result, there exists an enormous
literature on the subject, tackling the problem under a number of
different conditions. A key
idea is that of \emph{hyperbolicity}. In the simplest setting
we say that a fixed point \( p \) is hyperbolic if the derivative \(
D\varphi_{p} \) has no eigenvalue on the unit circle. In the
analytic, two-dimensional,
area preserving case Poincar\'e proved that the local stable (and
unstable) sets are analytic submanifolds \cite{Poi86s}. Hadamard
and Perron independently developed more geometric methods
allowing them to assume only a \( C^{1} \)
smoothness condition \cites{Had01s, Had01, Per28, Per30}; the stable manifold
theorem for hyperbolic fixed points is sometimes called the
\emph{Hadamard-Perron Theorem}.
In \cite{Ste55}, Sternberg used a simple geometric
argument, related to Hadamard's technique,
to obtain existence and regularity results assuming
only \emph{partial} hyperbolicity of the fixed point, i.e. assuming
only that the two eigenvalues are real and distinct.
Other early work on the subject includes \cites{BogMit61, BogMit63,
Hal60, Hal61, Dil60, Dil61} and \cite{Sac67} in which the techniques
were generalized to deal with stable manifolds associated to more
general compact sets as opposed to just fixed points.
In the late 60's and early 70's the theory of stable manifolds became
fundamental to the theory of \emph{Uniformly Hyperbolic} dynamics
pioneered by Anosov \cite{Ano67} and Smale \cite{Sma67}: there exists
a \emph{continuous} decomposition
\begin{equation*}\label{split}
T\Lambda = E^{s}\oplus E^{u}
\end{equation*}
of the tangent bundle over some set \( \Lambda \) into subbundles on which
uniform exponential contraction and expansion estimates hold under the action of
the derivative. A straightforward generalization of this set-up is
that of \emph{partial} or \emph{normal} uniform hyperbolicity
which allows for the
possibility of a neutral subbundle
\begin{equation*}\label{splitpart}
T\Lambda = E^{s}\oplus E^{c}\oplus E^{u}.
\end{equation*}
This is a significant weakening of the uniform
hyperbolicity assumptions as it allows the dynamics tangent to \( E^{c} \)
to be quite general.
Such situations have been
systematically and thoroughly investigated using variations and
generalizations of the basic methods of Hadamard and Perron
\cites{Kel67, HirPug69, HirPug70, HirPalPugShu70, Irw70, Fen71,
Fen74}, see \cite{HirPugShu77} for a comprehensive treatment.
An even more general set-up is based
on the \emph{Multiplicative Ergodic Theorem} of Oseledets
\cite{Ose68} which says that there always exists
a \emph{measurable} decomposition
\[
T\Lambda = E^{1}\oplus E^{2} \oplus \cdots \oplus E^{k}
\]
with respect to \emph{any} invariant probability measure \( \mu
\), such that the \emph{asymptotic exponential growth rate}
\[
\lim_{n\to\infty} \frac{1}{n}\log \|D\varphi_{x}^{n}(v)\| = \lambda^{i}
\]
is well defined, and for ergodic \( \mu \) even
independent of \( x \), for all non-zero \( v\in
E^{i} \). The condition \( \lambda^{i}\neq 0 \) for all \( i=1,\ldots,
k \) is a condition of \emph{non-uniform hyperbolicity} (with respect
to the measure \( \mu \)) since it implies that all vectors are
asymptotically contracted or expanded at an exponential rate. The
non-uniformity comes from the fact that the convergence to the limit
is in general highly non-uniform and thus one may have to wait an
arbitrarily long time before this exponential behaviour becomes
apparent. Pesin \cites{Pes76, Pes77} extended many results of the
theory of uniform hyperbolicity concerning stable manifolds
to the non-uniform setting.
There
have also been some recent papers introducing new approaches and
focussing on particular subtleties of interest in various contexts
\cites{AbbMaj01, Cha01, Cha02}. We emphasize however that all these
results assume \emph{exponential} estimates for the derivative on the
relevant subbundles.
\subsection{Very weak hyperbolicity}
The aim of this paper is to develop some techniques suitable for
dealing with situations with \emph{very weak} forms of hyperbolicity.
For \( z\in M \) and \( k\geq 1 \)
let
\[
F_{k}(z) = \|(D\varphi^{k}_{z})\| =
\max_{\|v\|=1}\{\|D\varphi^{k}_{z}(v)\|\},
\]
and
\[
E_{k}(z) =
\|(D\varphi^{k}_{z})^{-1}\|^{-1} =
\min_{\|v\|=1}\{\|D\varphi^{k}_{z}(v)\|\}.
\]
These quantities have a simple geometric interpretation:
since \( D\varphi_{z}^{k}: T_{z}M \to
T_{\varphi^{k}(z)}M \) is a linear map, it sends circles to ellipses;
\( F_{k}(z)\) is precisely half the length of the
major axis of this ellipse and \( E_{k}(z) \) is
precisely half the length of the minor axis. Then let
\[
H_{k}(z) = \frac{E_{k}(z)}
{F_{k}(z)}.
\]
Notice that we always have \( H_{k}(z) \leq 1 \).
The weakest possible \emph{hyperbolicity condition} one could assume
on the orbit of some point \( x \) is perhaps the condition
\[
H_{k}(z) < 1
\]
for all \( k \geq 1 \) (or at least for all \( k \) sufficiently large),
which is equivalent to saying that the image of the unit circle is
a proper (non-circular) ellipse, i.e. that \( D\varphi^{k}_{z} \) is not
conformal.
At the other extreme, perhaps the strongest hyperbolicity condition is
to assume that
\( H_{k}(z) \to 0 \) exponentially fast in \( k \). This is the case
in the classical hyperbolic
setting, both uniform and nonuniform.
In this paper we prove a
stable manifold theorem essentially under the
``summable" hyperbolicity condition
\begin{equation*}
\sum H_{k}(z) < \infty.
\end{equation*}
The precise statement of the results
requires some additional technical conditions which will be given
precisely in the next section, however
the main idea is that the usual exponential decay of \( H_{k} \) is an
unnecessarily strong condition.
Existing
arguments rely
on a contraction mapping theorem in some suitable space of ``candidate''
stable manifolds which yields a fixed
point (corresponding to the real stable manifold) by the observation that a
certain sequence is Cauchy and thus converges. In our
approach we construct a \emph{canonical} sequence of
\emph{finite time local stable manifolds} and use the summability
condition to show directly that this sequence is Cauchy and
thus converges to a real stable manifold.
Also, we make no
a priori assumptions on the existence of any tangent space
decomposition.
\subsection{Finite time local stable manifolds}
Our method is based on the key notion of
\emph{finite time} local stable manifold.
Let \( k\geq 1 \) and suppose that
\( H_{k}(z) < 1 \); then we let
\( e^{(k)}(z) \) and \( f^{(k)}(z) \) denote unit vectors in the
directions which are
\emph{most contracted} and \emph{most expanded} respectively by
\( D\varphi^{k}_{z} \).
Notice that these directions, parametrized as \( v(\theta) =
(\cos\theta, \sin\theta) \), are critical points of
\( \theta \mapsto \|D\varphi_{z}^{k}(v(\theta))\| \), i.e. solutions of
\( d\|D\varphi_{z}^{k}(v)\|/d\theta = 0 \), which are given by
\begin{equation}\label{contractive directions}
\tan 2\theta =
\frac{2 (\pfi x1^{k}\pfi y1^{k} +\pfi x2^{k}\pfi y2^{k})}
{(\pfi x1^{k})^2+(\pfi x2^{k})^2 - (\pfi y1^{k})^2 -(\pfi y2^{k})^2}.
\end{equation}
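The formula can be checked numerically. In the sketch below (our illustration) a random matrix stands in for \( D\varphi^{k}_{z} \), with its entries playing the role of the partial derivatives; the angle produced by the formula is verified to be a critical point of \( \theta \mapsto \|Av(\theta)\| \):

```python
import numpy as np

# A stands in for D(phi^k): A = [[dphi1/dx, dphi1/dy],
#                                [dphi2/dx, dphi2/dy]].
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
(a, b), (c, d) = A

# Angle from the displayed formula; arctan2 selects a valid branch of tan(2*theta).
theta = 0.5 * np.arctan2(2 * (a * b + c * d), a**2 + c**2 - b**2 - d**2)
v = np.array([np.cos(theta), np.sin(theta)])    # candidate critical direction
w = np.array([-np.sin(theta), np.cos(theta)])   # orthogonal direction

# Critical-point condition: d||Av(theta)||^2 / dtheta = 2 (Av).(Aw) = 0,
# and ||Av|| is then a singular value of A (most expanded or most contracted).
crit = abs((A @ v) @ (A @ w))
s = np.linalg.svd(A, compute_uv=False)
```

Since the two critical angles are \( \pi/2 \) apart, the same formula yields both \( e^{(k)} \) and \( f^{(k)} \), which also makes their orthogonality evident.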
In particular, \( e^{(k)} \) and \( f^{(k)} \) are \emph{orthogonal}
and, if \( \varphi^{k} \) is \( C^{2} \),
\emph{continuously differentiable} in some
neighbourhood \( \mathcal N^{(k)}(z) \) in which they are defined.
Therefore they determine
two orthogonal foliations \( \mathcal E^{(k)}\) and \( \mathcal
F^{(k)} \) defined by the integral curves of the unit vector fields
\(e^{(k)}(x) \) and \( f^{(k)}(x) \) respectively. We let
\( \mathcal E^{(k)}(z) \) and \( \mathcal F^{(k)}(z) \)
denote the corresponding leaves through the point \( z \).
These are the natural finite time versions of the local stable and
unstable manifolds of the point \( z \) since they are, in some
sense, the most contracted and most expanded curves through \( z \)
in \( \mathcal N^{(k)}(z) \). Notice that they are uniquely defined
locally. We will show
that under suitable conditions the finite time local stable
manifolds converge to
a real local stable manifold.
The idea of constructing finite time
local stable manifolds is not new.
In the context of Dynamical Systems,
as far as we know it was first introduced in
\cite{BenCar91} and developed further in several papers
including \cites{MorVia93, BenYou93, LuzVia2, HolLuz, WanYou01}
in which systems satisfying some nonuniform hyperbolicity are
considered.
All these papers deal with families of systems in which, initially,
hyperbolicity cannot be guaranteed for all time for all parameters. A
delicate parameter-exclusion argument
requires information about the geometrical structure of stable and
unstable leaves based only on a finite number of iterations and thus
the notion of finite time manifolds as given above is very natural.
We emphasise however that in these papers
the construction is heavily embedded in the global
argument and no particular emphasis is placed on this method as an
algorithm for the construction of real local stable manifolds \emph{per
se}. Moreover the decay rate of \( H_{k} \) there is exponential
and the specific properties of the systems
(such as the small determinant and various other hyperbolicity
and distortion conditions) are heavily
used, obscuring the precise conditions required for the argument to work.
One aim of this paper is to
clarify the setting and assumptions
required for the construction to work and to show that the
main ideas can essentially be turned into a
fully fledged alternative approach to
theory of stable manifolds. Moreover we show that the argument
goes through under much weaker conditions than those
which hold in the papers cited above.
\subsection{Main Results}
\label{results}
We shall consider dynamical systems given by maps
\[
\varphi: M \to M
\]
where
\( M \) is a two-dimensional
Riemannian manifold with Riemannian metric \( d \).
The situation we have in mind is that of
a piecewise \( C^{2} \) diffeomorphism
with singularities:
there exists a set \( \mathcal
S \) of zero measure such that \( \varphi \) is a \( C^{2} \) local
diffeomorphism on \( M\setminus\mathcal S \).
The map \( \varphi \) may be discontinuous on \( \mathcal S \) and/or
the first and second derivatives may become unbounded near \( \mathcal
S \). The precise assumptions will be local and will be formulated
below.
First of all we introduce some notation. For \( x \in M \) let
\[
P_k(x)=\|D\varphi_{\varphi^kx}\|,\quad
Q_k(x)=\|(D\varphi_{\varphi^kx})^{-1}\|,\quad
\widetilde{P}_k(x)=\|D^2\varphi_{\varphi^kx}\|,
\]
and
\[
\mathfrak{D}_k(x)=|\det D\varphi_{\varphi^kx}|,\quad
\tilde{\mathfrak{D}}_k(x)=\|D(\det D\varphi_{\varphi^kx})\|.
\]
Notice that all of these quantities
depend only on the derivatives
of \( \varphi \) at the point \( \varphi^{k}(x) \). If \( \varphi \)
is globally a \( C^{2} \) diffeomorphism then they are all uniformly
bounded above and below and play no essential role in the result.
On the other hand, if the contraction and/or expansion is unbounded
near the singularity set \( \mathcal S \) some control of the
recurrence is implicitly given by some conditions which we impose on
these quantities.
We shall also use the notation
\[ F_{j, k}(x) = \|D\varphi^{k-j-1}_{\varphi^{j+1}(x)}\|. \]
We now give a generalization of the notion of local stable manifold.
For a sequence
\[
\underline\varepsilon=\{\varepsilon_j\}_{j=0}^{\infty}
\]
with
\(
\varepsilon_{j}\geq \varepsilon_{j+1}> 0
\)
for all \( j \geq 0 \),
we let
\[
\mathcal N^{(k)} = \mathcal N^{(k)}_{\underline\varepsilon}(z)
=\{\tilde{z}\in M:\,\|\varphi^{j}(\tilde{z})-\varphi^{j}(z)\|\leq
\varepsilon_j,\forall j\leq k-1\}.
\]
This defines a nested sequence of neighbourhoods of the
point \( z \).
We shall always suppose that for all \( k\geq j \geq 1 \) the
restriction \( \varphi^{j}|
\mathcal N^{(k)} \) is a \( C^{2} \) diffeomorphism onto its
image. In the presence of singularities this may impose a strong
condition on the sequence \( \underline \varepsilon \) whose terms may
be required to decrease very quickly.
We then let
\( \{
p_{k},
q_{k}, \tilde{p}_{k} \}_{k=1}^{\infty}
\)
be uniform upper bounds for the values of \( P_{k}, Q_{k}, \tilde P_{k}
\) respectively in \( \mathcal N^{(k)}(z) \):
\[
p_{k}=\max_{x\in\mathcal N^{(k)}}P_{k}(x), \quad
q_{k}=\max_{x\in\mathcal N^{(k)}}Q_{k}(x), \quad
\tilde p_{k}=\max_{x\in\mathcal N^{(k)}}\tilde P_{k}(x).
\]
These values may be unbounded.
Then let
\(
\{ \gamma_{k}, \gamma^{*}_{k},
\delta_{k} \}_{k=1}^{\infty}
\) be given by
\[
\gamma_{k}= \max_{x\in\mathcal N^{(k)}}
\{H_{k}\},\quad \gamma^{*}_{k}= \max_{x\in\mathcal N^{(k)}}\{E_{k}\},
\]
and
\[
\delta_{k}=
\max_{x\in\mathcal N^{(k)}}
\left\{
\frac{E_k}{F^{2}_{k}}\sum_{j=0}^{k-1} \tilde{P}_{j}F_{j, k}
F^{2}_{j} +
\frac{E_{k}}{F_{k}} \sum_{j=0}^{k-1}
\mathfrak{D}^{-1}_{j}\tilde{\mathfrak{D}}_{j}F_{j}
\right\}.
\]
We are now ready to state our two hyperbolicity conditions. The first
is a hyperbolicity condition
\begin{equation}\tag{\( * \)}
\sum_{k=1}^{\infty}
p_k q_k\gamma_{k+1}+
\tilde{p}_k q^{5}_{k} p_k^3\gamma^{*}_{k+1} +
p_k^{5}q_k^5\delta_{k} + p_k^{2}q_k^2\delta_{k+1}< \infty.
\end{equation}
Notice that if the norm of the derivative is bounded, such as
in the absence of singularities, this
reduces to the more ``user-friendly'' condition
\[
\sum \gamma_{k}+\gamma^{*}_{k}+\delta_{k} < \infty.
\]
The summability of \( \{\gamma^{*}_{k}\} \) is not particularly
crucial and is really only used to ensure that some minimal
contraction is present, so that the presence of a contracting stable
manifolds makes sense. The summability of \( \{\gamma_{k}\} \) is
simply the ``summable hyperbolicity'' assumptions stated above. The
summability of \( \{\delta_{k}\} \) is a quite important technical
assumption related to the ``monotonicity'' of the estimates in \( k
\), it is not overly intuitive but it is easily verified
in standard situations such as in the uniformly hyperbolic setting.
Taking advantage of condition \( (*) \) we define
\[
k_{0} = \min\{j: p_k q_k\gamma_{k+1}< 1/2,\,
\forall\, k\geq j-1\} < \infty
\]
and the sequence
\[
\tilde\gamma_{k} = \gamma^{*}_k+
2 \max_{x\in\mathcal N^{(k)}}
\{F_{k}\}\sum_{i=k}^{\infty}p_iq_i\gamma_{i+1} < \infty.
\]
Our second assumption is that there exists some constant
\( \Gamma >0 \) such that
\begin{equation*}\tag{\( ** \)}
\tilde\gamma_{j} + 4 \max_{\mathcal N^{(j)}} \{F_{j}\}
p_{k}q_{k}\gamma_{k+1} <
\Gamma \varepsilon_{j}
\end{equation*}
for all \( k\geq k_{0} \) and \( j\leq k \).
This is not a particularly intuitive condition but
thinking of it in the simplest setting can be useful.
Supposing for example that we are in a uniformly hyperbolic situation
and that all derivatives are bounded, we have that
the left hand side is \( \lesssim E_{k} \) which specifies that in
some sense, the images of the neighbourhoods of \( z \) under consideration
should not shrink too fast relative to the contraction in these
neighbourhoods.
We now state our main result.
\begin{main theorem}
Let \( z\in M \) and suppose that there exists a sequence
\( \underline \varepsilon \) such that \( \varphi^{k} \)
restricted to \( \mathcal N^{(k)} \) is a \( C^{2} \)
diffeomorphism onto its image for all \( k\geq 1 \), and suppose
also that conditions \( (*) \) and \( (**) \) hold.
Then there exists \(
\varepsilon>0 \) and a
\( C^{1+\text{Lip}} \) embedded one-dimensional submanifold
\( \mathcal{E}^{(\infty)}(z) \) of \( M
\) containing \( z \) such that \( |\mathcal E^{(\infty)}(z)| \geq \varepsilon
\) and such that
there exists a constant \( C>0 \) such that
\( \forall\, w, w' \in \mathcal E^{(\infty)}(z) \) and \( \forall\, k\geq k_0 \) we have
\[
\textstyle{|\varphi^{k}(w)- \varphi^{k}(w')| \leq C \tilde\gamma_{k} |w -w'|.}
\]
In particular if $\tilde\gamma_k\to 0$ then \( |\varphi^{k}(w)- \varphi^{k}(w')| \to 0\) as
\( k\to\infty \) and therefore
\[
\mathcal E^{(\infty)}(z) \subseteq W^{s}_{\varepsilon} (z).
\]
Moreover if
\( F_{k}\to\infty \) uniformly in \( k \),
then
\[ \mathcal E^{(\infty)}(z) =
\bigcap_{k\geq k_0}\mathcal{N}^{(k)}(z).\]
\end{main theorem}
We divide the proof into several sections. In \ref{section_notation}
we introduce some useful notation. In \ref{s:dist} we prove a
technical estimate which shows that the summability condition on \(
\delta_{k} \) implies some uniform distortion bounds on the \(
\mathcal N^{(k)} \). In \ref{point_theory} we study the convergence of
pointwise contracting directions and in \ref{sec-regularity1}
we use these to study the convergence of the local finite time stable
manifolds. In \ref{strategygeom} we show that the limit curve has
positive length. This is not directly implied by the preceding
convergence estimates which give convergence of the leaves on
whichever domain they are defined. Here we need to make sure that
such a domain of definition (i.e. length) of the leaves can be
chosen uniformly. Thus we have to worry about the shrinking of the
sets \( \mathcal N^{(k)}(z) \). Condition \( (**) \) is used
crucially in this section. We remark that the lower bound \( \varepsilon \)
for the length of the local stable manifold is determined in this
section.
In \ref{sec-regularity2} we show that the
limit curve is smooth and in \ref{sec-contraction} that it
``contracts'' and is therefore indeed part of the local stable
manifold. Finally, in \ref{uniqueness} we discuss uniqueness issues.
\section{Hyperbolic fixed points}
\label{fixed_point_case}
As an application of our abstract theorem,
we consider the simplest case of a hyperbolic fixed
point. The result is of course already well-known in this context, but
we show that our conditions are easy to check and that it
therefore follows almost immediately from our general result.
Let \( M \) be a two-dimensional
Riemannian manifold with Riemannian metric \( d \), and let \(
\varphi: M \to M \) be a \( C^{2} \) diffeomorphism.
Suppose that \( p\in M \) is a fixed point. The local stable manifold of $p$ is the set
\( W^{s}(p) \) of points which
remain in a fixed neighbourhood of \( p \) for all forward
iterations: for \( \eta > 0 \) and \( k\geq 1 \) let
\[
\mathcal N^{(k)}_{\eta}=\{x: d(\varphi^{j}(x), p) \leq
\eta \ \forall \ 0\leq j \leq k-1\}
\]
and
\[
\mathcal N^{(\infty)}_{\eta}= \bigcap_{k\geq 1} \mathcal
N^{(k)}_{\eta}.
\]
For \( \eta > 0 \),
we define the \emph{local stable set} of \(
p \) by
\[
W^{s}_{\eta}(p) = W^{s}(p) \cap\mathcal
N^{(\infty)}_{\eta}(p).
\]
We recall that the fixed point \( p \) is \emph{hyperbolic} if the
derivative \( D\varphi_{p} \) has no eigenvalues on the unit circle.
\begin{theorem*}
Let \( \varphi: M \to M \) be a \( C^{2} \) diffeomorphism of a
Riemannian surface and suppose that \( p \) is a hyperbolic
fixed point with eigenvalues
\( 0<| \lambda_{s}|< 1 < | \lambda_{u}| \). Given $\eta>0$,
there exists a constant
\( \varepsilon(\eta) > 0 \) such that the following properties hold:
\begin{enumerate}
\item \( W^{s}_{\eta} (p) \) is a \( C^{1+\text{Lip}} \)
one-dimensional submanifold of \( M \)
tangent to \( E^{s}_{p} \);
\item \( |W^{s}_{\eta} (p)|\geq \varepsilon \)
on either side of \( p \);
\item \( W^{s}_{\eta} (p) \) contracts at an
exponential rate.
\item
\[
W^{s}_{\eta}(p) = \bigcap_{k\geq 0} \mathcal N^{(k)}_{\eta}(p).
\]
\end{enumerate}
\end{theorem*}
\begin{proof}
To prove this result, it suffices to verify the hyperbolicity conditions
stated in section \ref{results}.
First of all, since $\varphi$ is a $C^2$ diffeomorphism, all the first and second partial derivatives
are continuous and bounded. Hence for all $k\geq 0$, there is a uniform constant $K>0$ such that
$$p_k,\,q_k,\,\tilde{p}_k,\,\mathfrak{D}^{-1}_k\tilde{\mathfrak{D}}_k\leq K.$$ To estimate expansion and
contraction rates in $\mathcal N^{(k)}_{\eta}$ we have the following lemma:
\begin{lemma}\label{regular growth}
There exists a constant \( K>0 \) such that
for all \( \delta > 0 \) there exists
\( \eta (\delta) > 0 \)
such that for
all \( k\geq j\geq 0 \) and all \( x\in \mathcal N^{(k)}_{\eta}
\) we have
\begin{equation}\label{eguh}
K ( \lambda_{u}+ \delta)^{j} \geq F_{j} \geq
( \lambda_{u}- \delta)^{j} \geq
(\lambda_{s}+ \delta)^{j}
\geq E_{j} \geq
K^{-1} (\lambda_{s}- \delta)^{j}
\end{equation}
and
\begin{equation}\label{eguh1}
\sum_{j=0}^{k-1} F_{j} \leq K F_{k}; \quad
F_{j} F_{j, k} \leq K F_{k}; \quad\text{ and } \quad
\sum_{i =j}^{\infty} H_{i} \leq K H_{j}.
\end{equation}
In particular
\begin{equation}\label{DUH}
\|D^{2}\varphi^{k}_{x}\| \leq K F^{2}_{k};
\quad \text{ and } \quad
\|D(\det
D\varphi^{k}_{x})\|
\leq K E_{k} F_{k}^{2}.
\end{equation}
\end{lemma}
\begin{proof}
The estimates in \eqref{eguh} and (\ref{eguh1})
follow from standard estimates in the theory of uniform
hyperbolicity. We refer to \cite{KatHas94} for details and proofs.
The estimates in \eqref{DUH} then follow from substituting
\eqref{eguh1} into the estimates of Lemma \ref{second derivative}.
\end{proof}
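In the purely linear model \( D\varphi = \operatorname{diag}(\lambda_{u}, \lambda_{s}) \) the estimates of Lemma \ref{regular growth} hold with \( \delta = 0 \) and explicit constants; the following sketch (ours, illustrative only) verifies \eqref{eguh1} numerically in that model:

```python
import numpy as np

# Linear model: F_j = lam_u^j and E_j = lam_s^j exactly, so the
# summability estimates reduce to geometric-series bounds.
lam_u, lam_s = 2.0, 0.5
F = lambda j: lam_u**j
E = lambda j: lam_s**j
H = lambda j: E(j) / F(j)          # H_j = (lam_s/lam_u)^j

k = 10
K_F = lam_u / (lam_u - 1.0)        # constant for sum_{j<k} F_j <= K F_k
sum_F = sum(F(j) for j in range(k))
K_H = 1.0 / (1.0 - lam_s / lam_u)  # constant for the tail sum of H
tail_H = sum(H(i) for i in range(3, 200))  # truncation of sum_{i>=3} H_i
# F_j F_{j,k} <= K F_k holds here with K = 1 since F_{j,k} = lam_u^{k-j-1}
prod_ok = all(F(j) * lam_u**(k - j - 1) <= F(k) for j in range(k))
```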
Next we verify hyperbolicity conditions $(*)$ and $(**)$. We estimate $\gamma_k,\tilde\gamma_k,\gamma^{*}_k$
and $\delta_k$ for each $k\geq 0$. For $\gamma_k$ and $\gamma^{*}_k$ we have
\begin{gather*}
\gamma_k=\max_{x\in\mathcal N^{(k)}}
\{H_{k}\}\leq K\frac{(\lambda_{s}+ \delta)^{k}}{(\lambda_{u}-\delta)^{k}},\\
\gamma^{*}_k=\max_{x\in\mathcal N^{(k)}}
\{E_{k}\}\leq K(\lambda_{s}+ \delta)^{k},
\end{gather*}
while for $\tilde\gamma_k$ we obtain
\begin{equation*}
\begin{split}
\tilde\gamma_k &= \gamma^{*}_k+
2\max_{x\in\mathcal N^{(k)}}
\{F_{k}\}\sum_{i=k}^{\infty}p_iq_i\gamma_{i+1}\\
&\leq K(\lambda_{s}+ \delta)^{k}+K\biggl[\frac{(\lambda_{u}+\delta)(\lambda_{s}+\delta)}
{(\lambda_{u}-\delta)}\biggr]^k\leq
K(\lambda_{s}+\tilde\delta)^k,
\end{split}
\end{equation*}
where $\tilde\delta$ can be made small with $\delta$ small.
To estimate $\delta_k$, we just use Lemma \ref{regular growth} above to conclude that
\begin{equation*}
\begin{split}
\delta_{k} &=
\max_{x\in\mathcal N^{(k)}}
\left\{
\frac{E_k}{F^{2}_{k}}\sum_{j=0}^{k-1} \tilde{P}_{j}F_{j, k}
F^{2}_{j} +
\frac{E_{k}}{F_{k}} \sum_{j=0}^{k-1}
\mathfrak{D}^{-1}_{j}\tilde{\mathfrak{D}}_{j}F_{j}
\right\}\\
&\leq K(\lambda_{s}+\tilde\delta)^k.
\end{split}
\end{equation*}
In the estimates above, the constant $K$ is uniform and depends only
on $\lambda_s, \lambda_u$ and the bounds for the partial derivatives of $\varphi$.
Condition $(*)$ is now immediate, since for $\delta,\tilde\delta$ sufficiently small, the constants
$\gamma_k,\gamma^{*}_k,\tilde\gamma_k$ and $\delta_k$ all decay exponentially fast. In particular there
exists a constant $L>0$ such that $\mathrm{Lip}(e^{(k)})\leq L$ inside each $\mathcal{N}^{(k)}$.
Let $k_0$ be the constant defined in section \ref{results}. To verify condition $(**)$,
we just need to show that there is a $\Gamma>0$ such that $\forall k\geq k_0$ we have:
\begin{equation}\label{geom_fixedpt}
K\frac{(\lambda_{u}+\delta)^{j}(\lambda_{s}+ \delta)^{k+1}}
{(\lambda_{u}-\delta)^{k+1}}+K(\lambda_{s}+\tilde\delta)^k<\Gamma\eta,\qquad\forall j\leq k+1.
\end{equation}
The existence of $\Gamma$ follows immediately if we
choose $\delta$ sufficiently small so that
$$(\lambda_{u}+\delta)(\lambda_{s}+ \delta)(\lambda_{u}-\delta)^{-1}<1,\quad\textrm{and}
\quad\lambda_{s}+\tilde\delta<1.$$
The conclusions of the theorem now follow.
In particular, the length $\varepsilon$ of the limiting leaf $\mathcal{E}^{(\infty)}$
is determined by equation (\ref{epsilon1}) in section \ref{geom_section}.
\end{proof}
\section{Finite time local stable manifolds}
In this section we prove some estimates concerning the relationships
between finite time local stable manifolds of different orders. In
particular we prove that they form a Cauchy sequence of smooth curves.
Throughout this and the following
section we work under the assumptions of our main
theorem. In particular we consider the orbit of a point \( z \) and
are given a sequence of neighbourhoods \( \mathcal N^{(k)}=\mathcal
N^{(k)}(z) \) in which most contractive and most expanding directions
are defined and thus, in particular, in which the finite time local stable
manifolds \( \mathcal E^{(k)}(z) \) are defined. The key problem
therefore is to show that these finite time local stable manifolds
converge, that they converge to a smooth curve, and that this curve
has non-zero length!
\subsection{Notation}
\label{section_notation}
We shall use \( K \) to denote a generic constant which is
allowed to depend only on the diffeomorphism \( \varphi \).
For any \( j, k\geq 1 \) we let
\[
e^{(k)}_{j}(x) =
D\varphi^{j}_{x}(e^{(k)}(x))
\quad \text{ and } \quad
f^{(k)}_{j}(x) =
D\varphi^{j}_{x}(f^{(k)}(x))
\]
denote the images of the most contracting and most expanding vectors.
To simplify the formulation of angle estimates we introduce the
variable \( \theta \) to define the position of the vectors. We write
\begin{gather*}
e^{(n)}=(\cos\theta^{(n)},\,\sin\theta^{(n)}),\quad
f^{(n)}=(-\sin\theta^{(n)},\,\cos\theta^{(n)}).\\
e^{(n)}_{n}=E_n (\cos\theta^{(n)}_{n},\,\sin\theta^{(n)}_{n}),\quad
f^{(n)}_{n}=F_n (-\sin\theta^{(n)}_{n},\,\cos\theta^{(n)}_{n}).
\end{gather*}
Finally, we let
\[\phi^{(k)}=\measuredangle(e^{(k)},e^{(k+1)})
\text{ and }
\phi^{(k)}_j=\measuredangle(e^{(k)}_j,e^{(k+1)}_j).
\]
We also identify any vector $v$ with $-v$, or equivalently we identify
an angle $\theta$ with the angle $\theta+\pi$.
Important parts of the proof depend on estimating the derivative of
various of these quantities with respect to the base point \( x \). We
shall write \( D\phi^{(k)}, De^{(k)}, D\theta^{(n)}_{j}, \ldots \) to denote
the derivatives with respect to the base point \( x \).
To simplify the notation we let
\begin{equation}
\Xi_{k}(x) :=
\frac{P_k(x)Q_k(x) H_{k+1}(x)}{(1-P_k(x)Q_k(x) H_{k+1}(x))}
\leq \frac{p_kq_k\gamma_{k+1}}{(1-p_kq_k\gamma_{k+1})} :=\xi_k
\end{equation}
Also, all statements
hold uniformly for all \( x\in \mathcal N^{(k)} \).
\subsection{Distortion}
\label{s:dist}
The following distortion estimates follow from completely general
calculations which do not depend on any hyperbolicity assumptions.
The definition of \( \delta_{k} \) is motivated by these estimates
which will be used extensively in section \ref{point_theory}.
\begin{lemma}\label{second derivative}
For all \( k\geq 1 \) and all \( x \) such that \( \varphi^{k} \)
is \( C^{2} \) at \( x \), we
have
\begin{equation*}\tag{D1}\label{D1}
H_k\frac{ \|D^{2}\varphi^{k} \|}{\|D\varphi^{k}\|} \leq
\frac{E_k}{F^{2}_{k}}\sum_{j=0}^{k-1} \tilde{P}_{j}F_{j, k}
F^{2}_{j} \ \ (\leq \delta_{k})
\end{equation*}
and
\begin{equation*}\tag{D2}\label{D2}
\frac{\| D(\det D\varphi^{k}_{z})\|}
{\|D\varphi^{k}\|^{2}} \leq \frac{E_{k}}{F_{k}} \sum_{j=0}^{k-1}
\mathfrak{D}^{-1}_{j}\tilde{\mathfrak{D}}_{j}F_{j} \ \ (\leq \delta_{k})
\end{equation*}
\end{lemma}
\begin{proof}
Let \( A_{j}= D\varphi_{\varphi^{j}z} \) and let \( A^{(k)}=
A_{k-1}A_{k-2}\dots A_{1} A_{0} \). Let \( D A_{j}\) denote
differentiation of \( A_{j} \) with respect to the space variables.
By the product rule for differentiation we have
\begin{equation}\label{productrule}
\begin{split}
D^{2}\varphi^{k}_{z} &= DA^{(k)}
= D(A_{k-1}A_{k-2}\dots
A_{1}A_{0}
) \\ &= \sum_{j=0}^{k-1} A_{k-1}\dots A_{j+1}(DA_{j}) A_{j-1}\dots A_{0}.
\end{split}
\end{equation}
Taking norms on both sides of
\eqref{productrule} and using the fact that
\( A_{k-1}\dots A_{j+1} = D\varphi^{k-j-1}_{\varphi^{j+1}z} \), \(
A_{j-1}\dots A_{0} = D\varphi^{j}_{z} \) and, by the chain rule, \(
DA_{j} = D (D\varphi_{\varphi^{j}z}) = D^{2}\varphi_{\varphi^{j}z}
D\varphi^{j}_{z} \), we get
\[
\|D^{2}\varphi^{k}_{z}\| \leq \sum_{j=0}^{k-1}
\|D\varphi^{k-j-1}_{\varphi^{j+1}z}\| \cdot \|
D^{2}\varphi_{\varphi^{j}z}\| \cdot \|D\varphi^{j}_{z} \|^{2} \leq
\sum_{j=0}^{k-1} \| D^{2}\varphi_{\varphi^{j}z}\| F_{j,k} F_{j}^{2}.
\]
The inequality (D1) now follows. For (D2)
we argue along similar lines, this time letting \(
A_{j}= \det D\varphi_{\varphi^{j}z} \). Then we have, as in
\eqref{productrule} above,
$$
D(\det D\varphi^{k}_{z}) = DA^{(k)} = \sum_{j=0}^{k-1} A_{k-1} \dots
A_{j+1}(DA_{j}) A_{j-1}\dots A_{0}.$$
Moreover we have that
\( A_{k-1}\dots A_{j+1} = \det D\varphi^{k-j-1}_{\varphi^{j+1}z} \),
\( A_{j-1}\dots A_{0} = \det D\varphi^{j}_{z}, \) and by the chain
rule, also: $$ DA_{j} = D(\det D\varphi_{\varphi^{j}z}) = (D \det
D\varphi_{\varphi^{j}z}) D\varphi^{j}_{z}.$$ This gives
\begin{equation}\label{det}
D(\det D\varphi^{k}_{z}) = \sum_{j=0}^{k-1} (\det
D\varphi^{k-j-1}_{\varphi^{j+1}z}) (D \det D\varphi_{\varphi^{j}z})
(\det D\varphi^{j}_{z} )(D\varphi^{j}_{z}).
\end{equation}
By the multiplicative property of the determinant we have
the equality: $$
(\det D\varphi^{k-j-1}_{\varphi^{j+1}z}) (\det D\varphi^{j}_{z} ) =
\det D\varphi^{k}_{z}/\det D\varphi_{\varphi^{j}{z}}.$$ Thus, taking
norms on both sides of \eqref{det} gives
\[
\| D(\det D\varphi^{k}_{z})\| \leq \ |\det D\varphi^{k}_{z} |
\sum_{j=0}^{k-1} \frac{\|D(\det D\varphi_{\varphi^{j}(z)})\|}{|\det
D\varphi_{\varphi^{j}(z)}|} F_{j}.
\]
The inequality (D2) now follows
from the fact that \( |\det D\varphi^{k}| = E_{k} F_{k}\).
\end{proof}
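The final identity can be checked numerically: for any \( 2\times 2 \) matrix the product of its two singular values equals the absolute value of its determinant. A short sketch (ours, not from the paper), with random matrices standing in for the derivatives along an orbit:

```python
import numpy as np

# Random 2x2 matrices play the role of A_j = D(phi) at the orbit points;
# Ak is the composed derivative A^{(k)} = A_{k-1} ... A_1 A_0.
rng = np.random.default_rng(2)
As = [rng.normal(size=(2, 2)) for _ in range(6)]
Ak = np.linalg.multi_dot(As[::-1])

# Singular values come out largest first: F_k, then E_k.
Fk, Ek = np.linalg.svd(Ak, compute_uv=False)
det_err = abs(abs(np.linalg.det(Ak)) - Ek * Fk)   # |det A^{(k)}| = E_k F_k
```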
\subsection{Pointwise convergence}
\label{point_theory}
In this section we prove two key lemmas showing that both the angle
\( \phi^{(k)} \) (Lemma \ref{angleconvergence})
between consecutive most contracted directions and
the norm of its spatial derivative \( D\phi^{(k)} \) (Lemma
\ref{derivativeconvergence}) can be bounded in terms of the
hyperbolicity. In particular, from the summability condition \( (*)
\), we obtain also that the norm \(
\|De^{(k)}\| \) of the spatial
derivative of the contractive directions is uniformly bounded in \( k
\).
\begin{lemma} \label{angleconvergence}
For all \( k\geq k_{0} \) and \( x\in\mathcal N^{(k)} \) we
have
\begin{equation}\label{angle}
|\phi^{(k)}| \leq |\tan\phi^{(k)}| \leq\frac{P_kQ_k H_{k+1}}{
1-P_kQ_k H_{k+1}} \ \ (\leq \xi_{k}).
\end{equation}
Moreover, for all \( k\geq j\geq k_{0} \) we have
\begin{equation}
\label{contractionclaim}
\|e^{(k)}_{j}(x)\|\leq E_{j}(z)+F_{j}(z)\sum_{i=j}^{k-1}\phi^{(i)}(z)
\ \
(\leq \tilde{\gamma}_j).
\end{equation}
\end{lemma}
Notice that the estimate in \eqref{contractionclaim}
gives an upper bound for the contraction which depends only on \( j \)
and not on \( k \).
\begin{proof}
We claim first of all that for all $k\geq k_0$ we have
\begin{equation}\label{half}
\|e^{(k)}_{k+1}\|/F_{k+1}\leq P_kQ_k H_{k+1}\leq 1/2.
\end{equation}
To see this observe that
\(
E_k \leq \|e^{(k+1)}_{k}\|\leq
\|D\varphi^{-1}_{z_k}e^{(k+1)}_{k+1}\|\leq Q_k E_{k+1}\) ,
\( E_{k+1} \leq \|e^{(k)}_{k+1}\|=\|D\varphi_{z_k}e^{(k)}_{k}\|
\leq P_k E_k \),
\( F_k= \|D\varphi^{-1}_{z_k}f^{(k)}_{k+1}\|\leq Q_k F_{k+1} \),
\( F_{k+1} =\|D\varphi_{z_k}f^{(k+1)}_{k}\|\leq P_k F_k \).
Moreover \( H_{k+1}/H_{k}=(E_{k+1}/F_{k+1})/(E_{k}/F_k) \).
Combining these inequalities gives
\begin{equation}\label{minidistortion}
E_{k+1}/E_k\in[Q^{-1}_k,P_k], \quad F_{k+1}/F_k\in[Q^{-1}_k,P_k],
\quad H_{k+1}/H_k\in[(P_kQ_k)^{-1},P_kQ_k].
\end{equation}
Therefore, writing $e^{(k)}_{k+1}=D\varphi_{z_k}e^{(k)}_{k}$
The authors in~\cite{AllocationUAVHarvestLiu2020} derived an optimal strategy for allocating charging time and power to UAVs to wirelessly charge them during operation.
The charging power is supplied by the BS, i.e., it is scavenged from the RF energy typically used for communication.
A different tack was followed in~\cite{AerielEnergySharingUAVsLakew2021} where the authors proposed an energy sharing scheme whereby high-capacity UAVs harvest solar energy and share it with smaller-capacity UAVs.
In some implementations of energy harvesting to support UAV operations, the authors assume that the harvested energy is sufficient to support the communication functions of the UAV, whereas in others, the harvested energy merely supplements the power provided by the UAV's on-board battery.
In~\cite{UAVRelayingHarvestYang2018}, Yang et al. studied the outage problem of a UAV that harvests energy from a ground-based BS and uses the harvested energy to relay data for user terminals on the ground.
\section{Challenges and Open Research Problems}\label{sec:challenges}
In this section, we present the most common challenges and open research issues that are limiting the widespread adoption of UAVs in cellular networks. The list presented is not exhaustive but is meant to guide the reader on some of the most pressing concerns that need to be addressed to fully exploit the advantages of UAVs as an aid to cellular networks.
The reader is referred to the work in~\cite{DesignChallengesMultiUAVShakeri2109} for a more detailed look at UAV network design challenges.
\subsection{Security Challenges}
Security is a big challenge in UAV-aided communication. Due to the small form factor and capability of most UAVs, they are prone to both cyber and physical security threats, such as hijacking.
Moreover, due to the broadcast nature of UAV communications, they are subject to eavesdropping, packet snooping, jamming, spoofing, denial of service attacks, and cyber attacks~\cite{BlockchainUAVsMehta2020, Fotouhi2019}.
The security of UAV-to-UAV communication against multiple eavesdroppers was investigated in~\cite{SecurityUAV2UAVYe2019} where the authors derived expressions for the statistical SNR as well as for the secrecy outage probability.
In extreme cases, interceptors could potentially spoof the control signals and use them to gain control of the UAVs~\cite{SecureUAVLi2019, Fotouhi2019}, or jam such signals to prevent them from reaching the ground control station.
Thus, authentication of users and operators is an issue in UAV-based networks~\cite{ANNCellularUAVChallita2019}.
UAV security issues are categorized under cyber detection, which focuses on identifying intruders within the UAV network, and cyber protection, which focuses on eliminating or reducing external threats to the UAV network~\cite{DetectingUAVCyberAttackSedjelmaci2016}.
The authors in~\cite{UAVOSSecurityIqbal2021} also distinguished different forms of security, such as information and software security, sensors security, and communications security.
Information and software security is related to the security of the UAV operating systems (including its configuration files, mission-related data, and flight control files) as well as its collected data.
Sensor security deals with the security of the various sensors used for the real-time maneuvering of the UAV, such as accelerometers, gyroscopes, GPS, barometers, etc. If attackers gain control of such subsystems, it could lead to the malfunctioning of the UAV or cause it to send erroneous data.
Communications security is related to the security of the communications components of the UAV, including control commands, telemetry data, and transmission of the collected data.
Since these communications happen over the air and some of the packets are unencrypted (e.g., data packets from simple sensors or even GPS data), they pose cybersecurity threats to the successful operation of UAV networks.
Packet routing is another potential source of attacks on UAV communications.
Here, malicious entities can disguise themselves as legitimate elements in the UAV network to steal, modify or drop packets.
There are three common types of routing-related attacks: wormhole, selective forwarding, and sinkhole attacks~\cite{SecureCommsUAVFotohi2020}.
Wormhole attacks involve a malicious entity that captures packets in one location within the network, then tunnels the packets to a third party in another location, where they are modified and retransmitted into the network~\cite{WormholeAttacksYihChun2006}. This attack is common in ad hoc networks that use reactive routing protocols such as shortest-path-first routing, where the routing metric is based on the hop count. Malicious intermediaries pretend to be neighbour nodes of genuine network nodes and route packets through private tunnels. Tunneling ensures that the packets arrive at the destination node over shorter distances or a lower number of hops than genuinely multi-hop-routed packets, thereby misleading the receiver.
Selective forwarding attacks occur when malicious nodes behave like genuine network nodes and correctly forward packets most times, but occasionally selectively drop sensitive packets that may be critical to the functioning of the network. In UAV networks, other UAVs or edge devices can be used to perpetrate selective forwarding attacks.
Since wireless networks randomly drop packets (due to congestion or unreliable links), this type of attack is difficult to detect because it is hard to distinguish if packet drops are due to network faults or malicious attacks~\cite{SelectiveForwardingAttacksRen2016}.
In sinkhole attacks, adversary nodes advertise a false hop count to the sink, tricking their neighbouring nodes into believing that they have found a shorter route to the sink, so that they forward their data packets via the adversary node~\cite{SinkHoleAttacksLiu2020}.
Thus, the adversary node not only gains access to packets of nodes within its radio range but also blocks these nodes from transmitting to the final receiver or genuine network sink. Again, other UAVs, relay nodes or edge devices can be used to perpetrate this attack in UAV-assisted networks.
A potential solution for the communication security challenges in UAV-based networks is to use blockchain technology~\cite{BlockchainUAVsMehta2020}.
Proper authentication will also ensure that only vetted elements are admitted into the network.
Designing a robust intrusion and detection system will also protect UAV-aided networks from malicious attacks~\cite{UAVOSSecurityIqbal2021}.
In addition, innovative communication protocol designs, such as the use of spread spectrum technologies, MIMO techniques, or cooperative UAV features, can also help to protect UAV communication from eavesdroppers. Due to its directionality, mm-wave communication can also reduce the threat of jamming and eavesdropping, since the electromagnetic signal beams are focused on the intended receiver(s)~\cite{PHYSecurityUAVsSun2019}.
Data encryption is very helpful in dealing with information and data security, as is proper design of the configuration files through a process known as \textit{system hardening}~\cite{UAVOSSecurityIqbal2021}.
Wormhole attacks can be detected and defended using packet leashes~\cite{WormholeAttacksYihChun2006}, where some information is added to a packet to restrict the maximum transmission distance per link.
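The intuition behind a temporal leash can be sketched as follows (a simplified illustration, not the exact protocol of~\cite{WormholeAttacksYihChun2006}; the hop-range and clock-error values are assumed): the sender timestamps each packet, and the receiver discards any packet whose implied travel distance exceeds a plausible one-hop range.

```python
C = 3e8  # propagation speed (speed of light), m/s

def within_leash(t_send, t_recv, max_hop_distance, clock_error=1e-6):
    """Accept a packet only if its implied travel distance,
    padded by the clock-synchronization error, fits one hop."""
    travel_time = (t_recv - t_send) + clock_error
    implied_distance = travel_time * C
    return implied_distance <= max_hop_distance

# A genuine ~300 m neighbour link (about 1 us propagation) passes,
# while a packet tunneled tens of kilometres through a wormhole
# arrives "too late" to have travelled a single legitimate hop.
print(within_leash(0.0, 1e-6, max_hop_distance=1000))  # True
print(within_leash(0.0, 1e-4, max_hop_distance=1000))  # False
```

A tunneled packet cannot hide its extra propagation delay, so even a modestly synchronized clock suffices to expose it.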
In addition, ANNs have been proposed to create threat maps of the operating environment of UAVs, with recurrent neural networks used to predict the normal trajectory of the UAV and hence detect any deviations in real-time as suspected cyber attacks~\cite{ANNCellularUAVChallita2019}.
\subsection{Complexity}
Increasingly, UAVs are deployed as a swarm of vehicles to achieve communication, reconnaissance, search/rescue, or monitoring tasks, since the complexity of these tasks usually requires multiple UAVs (also called a swarm of UAVs) to perform them efficiently. The advantages of using multiple UAVs in such scenarios, such as cost and time efficiency, reliability/fault tolerance, flexibility/adaptability to changing requirements, and the ability to perform multiple tasks simultaneously, have been well established~\cite{MultiUAVSurveySkorobogatov2020}.
However, due to the dynamism and uncertainty of the operating environments for such complex tasks, coordinating multiple UAVs to work together is highly challenging~\cite{MultiUAVCoordinationCommunicationsTortonesi2012}.
In addition, UAVs are nowadays subjected to increasingly complicated cyber and physical threats. Thus, they have to be designed with additional software and hardware components to thwart such threats, which further increases their complexity.
The most common issues that arise in multi-UAV coordination have been documented in~\cite{Fotouhi2019, AerialSwarmRoboticsSurveyChung2018, InterferenceCoordinationUAVsShen2020, LiveFlyUAVExptChung2016, ChannelSlottingMultiUAVChen2018}. They include:
\begin{itemize}
\item Algorithmic planning to manage communication and task allocation,
\item Coverage issues and equitable distribution of workload,
\item Aerial manipulation of the vehicles,
\item Power management,
\item Management of the communication infrastructure,
\item Path planning to avoid collisions while ensuring adequate coverage without overlaps,
\item Interference arising from other UAVs,
\item Conflict resolution,
\item Safety issues related to preventing the vehicles from flying into one another's buffer zones,
\item Safety issues related to take-off and landing (in some current implementations, a swarm of fixed-wing UAVs spent less than 20\% of the time simultaneously in the air executing assigned tasks, while the bulk of the time was spent coordinating the flight of the UAVs),
\item Network congestion and channel interference due to multiple UAVs exchanging data to coordinate the execution of assigned tasks.
\end{itemize}
Multi-UAV systems may also require more than a single pilot to manage them, which introduces another layer of complexity to the system~\cite{MultiUAVSurveySkorobogatov2020}.
One of the main challenges facing the implementation of a swarm of UAVs is localization. To be effective as part of a fleet, a UAV needs to be aware of its position in a given map of the environment. The UAV position is either relative to a reference point or relative to other UAVs in the fleet.
As one can imagine, this is a non-trivial task that requires numerous exchanges of communication and control commands. In addition, since the positions of the UAVs are constantly changing, the map of the fleet is also constantly changing. This gives rise to a dynamic 3D map of the environment, rather than a constrained static map with a reference landmark~\cite{AerialSwarmsChallengesAbdelkader2021}. Thus, achieving partial or full localization is both energy- and bandwidth-intensive.
GPS sensors are insufficient to address this problem because they provide position accuracy of only about three meters~\cite{AerialSwarmsChallengesAbdelkader2021}, which is not granular enough to prevent collisions. Alternatives found in the literature include equipping the UAVs with wireless communication modules and inertial navigation systems, coupled with on-board sensor fusion to enable accurate estimation of position. As highlighted already, this incurs a heavy computational, communication (bandwidth), and energy cost; hence, finding innovative ways to do this more efficiently is still an open problem.
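As a toy illustration of why such sensor fusion helps (a one-dimensional sketch with assumed variances, far simpler than a full Kalman filter), inverse-variance weighting of a coarse GPS fix and an inertial estimate already yields a position estimate more accurate than either source alone:

```python
def fuse(gps_pos, ins_pos, gps_var, ins_var):
    """Inverse-variance weighted fusion of two independent
    position estimates (one axis, Gaussian assumption)."""
    w = ins_var / (gps_var + ins_var)  # weight on the GPS estimate
    fused_pos = w * gps_pos + (1 - w) * ins_pos
    fused_var = (gps_var * ins_var) / (gps_var + ins_var)
    return fused_pos, fused_var

# GPS with ~3 m accuracy (variance 9 m^2) fused with an inertial
# estimate of variance 1 m^2: the fused variance drops to 0.9 m^2,
# below both inputs.
pos, var = fuse(gps_pos=102.0, ins_pos=100.0, gps_var=9.0, ins_var=1.0)
print(round(pos, 2), round(var, 2))  # 100.2 0.9
```

In a real swarm this fusion runs per axis and per vehicle at high rate, which is precisely where the computational and bandwidth costs mentioned above come from.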
Path planning is another challenge that arises when multiple UAVs work together to achieve a common objective (this is related to the localization problem). As the number of vehicles in the swarm/fleet grows, it becomes more difficult to plan the trajectory of each UAV from the starting points to the goal points in order to traverse the minimum path (so as to save energy) and avoid collisions with obstacles or other vehicles in the swarm.
In addition, path planning must be executed such that the UAVs maintain connectivity with one another and with the ground control station while performing assigned tasks.
Path planning involves motion planning (to control the path length and turning angles), trajectory planning (involving the speed and kinematics of the vehicle) and navigation (involving localization and obstacle/collision avoidance)~\cite{PathPlanningTechniquesAggarwal2020}.
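The motion-planning component can be illustrated with a minimal A* search on a hypothetical 2D occupancy grid (real UAV planners operate in 3D and must also respect kinematics and connectivity constraints):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (1 = obstacle).
    Returns a shortest obstacle-free path as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no collision-free path exists

# Toy 4x4 map with a wall forcing a detour.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 0))
print(len(path) - 1)  # 9 moves instead of the 3 a straight line would need
```

Even this toy example hints at the combinatorial growth: planning jointly for $N$ UAVs that must also avoid one another turns the single-vehicle search into a far larger joint state space.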
For UAV swarms used as aerial BSs, path planning requires high-rate exchange of positional sensor data, which involves multi-dimensional channel characterization, tracking and communication, interference management, transmit power allocation, resource block assignment, etc.~\cite{CooperativeVehicularNetsZhou2015}.
Path planning for other UAV applications, such as monitoring or target tracking, is complicated by issues such as target location and identification, which arise due to the high mobility of the UAVs.
Some of the techniques used in path planning can be categorized under representative, cooperative, and non-cooperative techniques~\cite{PathPlanningTechniquesAggarwal2020}. Increasingly, ML approaches are employed to address path planning problems~\cite{PathPlanningZear2020}.
Path planning is thus a complex task, whose complexity grows rapidly as the number of UAVs in the swarm increases, since it must simultaneously save on energy, time, computational, and communication costs.
Other key challenges that arise due to the complexity of multi-UAV systems include security, equitable allocation of workload and coverage area.
The higher the number of UAVs in a swarm, the larger the attack surface~\cite{ThreatModelingUAVsAlmulhem2020}. Therefore, modelling potential threats and designing a robust security framework to thwart cyber attacks is a highly complex and challenging task.
\subsection{Data Availability}
ML algorithms thrive on data; the more data we feed the algorithms, the smarter they tend to become~\cite{MLDataVisualizationLi2018}.
To make strategic and intelligent decisions, data is typically required from different sources; these data are integrated and transformed before they are used to make decisions or predictions~\cite{DataIntensiveBiasPark2021}. This process is the so-called data-driven or evidence-based decision-making~\cite{MLTrendseJordan2015}.
In many cases, large data sets are required both to train and test the ML algorithms as well as to evaluate their efficacy in making predictions.
In addition, both labeled and unlabeled data are required to compare the efficacy of different ML algorithms~\cite{MLWirelessNetworksSun2019} and to guide the selection of the best technology tool for a given situation.
However, data is not always available or is available in insufficient quantity or quality.
Due to the criticality of data to the success of businesses, companies are usually reluctant to share data obtained from UAV deployment trials.
There are also cases where it is very difficult to obtain data, especially when a process or technique is still in infancy.
Some of the most useful data sets are not publicly available.
To make predictions that are generalizable, there needs to be a sufficient supply of heterogeneous data repositories that are accessible to many researchers. Failing this, each institution might end up developing individual analytical pipelines that may fail under new circumstances, such as a different operating environment.
Open access data repositories are a key enabler to unlocking insights into addressing the challenges of UAV deployment in cellular networks.
With sufficient data shared collaboratively amongst stakeholders in the UAV research and development industry, robust and reproducible ML solutions can be developed for UAV applications in cellular networks.
The ability to obtain and reuse data will enable easy collaboration among researchers and the industry, save costs, and minimize time to market for new products.
It is desirable to have an open database for UAV-aided cellular communication, similar to the modified National Institute of Standards and Technology (MNIST) database (\textit{c.f.}~\cite{MNISTDeng2012}) widely used in computer vision studies.
\subsection{Limited Energy Storage Capacity}
One of the key challenges of using UAVs to support communication networks operations is that they have limited lifetime due to the low capacity of their batteries, which limits how long they can be deployed~\cite{UAVSwappingBhola2021}.
In fact, most UAVs have an endurance of just a few hours~\cite{UAVITSChallengesMenouar2017}.
To reduce the weight of UAVs and the attendant energy drain, it is often necessary to use smaller batteries so as to avoid expending too much energy on flying, since the energy required to fly the UAV varies with the payload size~\cite{EMFUAVChargingNguyen2020}.
However, small batteries have low storage capacities, further complicating the energy situation of the UAV.
In addition, there exist uncertainties in estimating the remaining battery charge of UAVs~\cite{BatteryHealthUAVSaha2011}, leading to conservative estimates of the remaining flight time to avoid \textit{dead stick conditions}, whereby the UAV runs out of battery power in-flight, which could have disastrous consequences.
Supercapacitors are not an ideal alternative for UAVs either, due to their low energy density~\cite{simic2015investigation}.
Improvements in battery technology are required to enhance the storage capacity of UAVs.
Solutions that have been proposed to address the battery capacity limitations of UAVs include wireless charging~\cite{simic2015investigation}, whereby the UAV is recharged during operation via RF energy harvesting~\cite{EMFUAVChargingNguyen2020} or laser power beaming~\cite{LaserChargedUAVJaafar2021, LaserPowerXmissionJin2019}, and the use of electrical power lines~\cite{UAVPowerlinesChargingLu2017}, either by directly perching on current-carrying cables or by harvesting the electromagnetic energy generated by the cables.
Other alternatives for recharging the UAV when its battery becomes depleted have been explored in~\cite{UAVRechargingOptionsGalkin2019}, including UAV swapping (replacing those with depleted batteries with others that are fully charged)~\cite{UAVSwappingBhola2021} and battery swapping. Alternatively, the UAV could be tethered to a mains power supply~\cite{TetheredUAVSaif2021} or powered with fuel cells~\cite{boukoberine2019critical}.
\subsection{Energy Harvesting Challenges}
It is well known that one of the biggest limitations to the widespread adoption of UAVs in cellular networking is their limited operating lifetime~\cite{PowerTimeAllocationUAVEHLiu2020}.
One of the most popular solutions for dealing with this problem is using energy harvesting technologies to enable UAVs to harvest energy from the environment to support the on-board battery.
However, this technology is still in its infancy and fraught with many natural and technical challenges. For instance, solar-based energy harvesting still depends on climatic conditions and becomes unavailable when the sun is not shining~\cite{WIPTandEHStatusProspectsHuang2019, lu2018wireless}, especially in the winter (which can last several months in the northern hemisphere). Solar energy harvesting solutions are based on the use of photovoltaic arrays, which are only suitable for fixed-wing UAVs~\cite{boukoberine2019critical}. Similarly, the availability of RF energy depends on the density of RF devices within the area.
Most of the energy harvesting technologies in the literature have low energy transfer efficiency due to environmental (such as free space pathloss for RF energy harvesting) or device limitations (such as poor energy conversion ratio). Moreover, since UAVs are usually in motion, they keep losing LOS connection with the charging BS.
As a matter of fact, the amount of energy harvested is still very small compared to the amount of power that can be stored in on-board batteries, especially when such energy comes from RF environment~\cite{UAVEHInterferenceChen2019}.
The output of most energy harvesting techniques is still quite low due to the poor efficiency of the energy conversion and matching circuits~\cite{RFEfficiencyEfficiencyCansiz2019}.
The amount of energy captured from RF signals, for instance, depends on the area of the antenna elements~\cite{RFSOlarHarvestingQuyen2020}.
Since most UAVs come in small form factors to limit their weight, their antennas are also small. Even where solar or other forms of energy are targeted for energy harvesting, the small form factor also implies that a limited area of the UAV is exposed to ambient energy, thereby limiting the amount of energy that can be harvested from such environments.
Moreover, the distance between the UAV and the charging (base) station, as well as whether the UAV has an LOS connection to it, also affects the amount of energy received; the farther the UAV, the lower the received energy~\cite{AerielEnergySharingUAVsLakew2021}. However, since UAVs are deployed to supplement the services provided by MBSs, it is desirable that they operate far from the BS in areas with poor coverage, which limits the power received from the macro cell.
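A back-of-the-envelope calculation with the Friis free-space equation illustrates this distance dependence (the transmit power, antenna gains, frequency, and rectifier efficiency below are assumed values, not figures from the cited works):

```python
import math

def harvested_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m, eff=0.5):
    """Friis free-space received power in dBm, scaled by a rectifier
    efficiency `eff` (assumed constant; real circuits vary widely)."""
    wavelength = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    p_rx_dbm = p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db
    return p_rx_dbm + 10 * math.log10(eff)

# Hypothetical 30 dBm charger at 2.4 GHz with modest antenna gains:
for d in (10, 20, 40):
    print(d, "m:", round(harvested_power_dbm(30, 6, 3, 2.4e9, d), 1), "dBm")
```

Each doubling of the distance adds about 6 dB of free-space path loss, i.e. cuts the harvested power by a factor of four, before any additional NLOS or conversion losses.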
Another limitation associated with energy harvesting is the time it takes to recharge in-flight UAVs from harvested energy due to shortcomings in the charging rates.
Finally, since energy harvesting is still in its infancy, there is a lack of a unified standard for it, which limits the widespread involvement of original equipment manufacturers (OEMs) in developing wireless recharging devices.
One way to improve the amount of energy harvested from the radio waves is to use directional antennas via beamforming~\cite{AerielEnergySharingUAVsLakew2021, BeamformingEnergyTransferChoi2017}, which ensures that more of the transmitted RF energy is received at the UAV.
There are other proposed solutions aimed at optimizing the charging time of UAVs by improving the energy transfer and conversion rates. Solutions related to the above two include optimal or near optimal trajectory planning to guarantee maximum exposure of the UAV to the RF field of the charging BS in cases where energy is harvested from the radio environment.
\subsection{Regulations}
Due to public safety, privacy, and data protection concerns, governments around the globe place strict restrictions on the use of UAVs.
Both international and national regulations are put in place to ensure that UAVs pose minimal risks to other users of the airspace and to protect people and property on the ground~\cite{UAVRegulationsReviewStocker2017}.
For UAVs to gain wider acceptability, the public needs to trust that their deployment is in their best interests and will be used safely.
For instance, there are valid concerns arising from the knowledge that UAVs have been used for illegal activities such as surveilling or tracking people.
In addition, there are concerns over data protection since UAVs can be equipped with cameras and other sensors that can collect data from areas where they do not have authorization.
There are also serious concerns regarding public safety, as UAVs can lose control during flight and collide with other aircraft, buildings, etc. or crash into people on the ground~\cite{Fotouhi2019}.
Moreover, UAVs can be equipped with weapons and used to carry out remote attacks.
There are many regulatory barriers that need to be addressed before the full potential of UAVs as part of wireless networks can be realized.
One of the most common issues is restrictions on the areas where UAVs can operate.
Due to some of the concerns highlighted above, UAVs serving as cellular BSs are not yet approved to operate in many public areas, especially in areas with large crowds.
In addition, there are strict restrictions on how far UAVs can fly.
Even when a UAV is capable of operating autonomously, many municipalities and cities still require that there be a licensed UAV pilot present before a UAV can be deployed, which increases the operating costs of deploying such UAVs and limits their use.
The red tape in approving the use of UAVs has also been highlighted as a challenge militating against their widespread use.
Regulatory and policy processes are usually slow and lag far behind advances in UAV development, which in turn curtails the research, development, and deployment efforts of both the telecommunication industry and the research community~\cite{UAVRegulationsReviewStocker2017}.
Despite some of the current limitations, significant progress has been made in UAV regulations. In Europe, for example, roadmaps have been created on how to integrate UAV operations into the civilian aviation industry~\cite{UAVRegulationsReviewStocker2017}. There are similar efforts around the globe to enact regulations that will both promote the wide adoption of UAVs in civilian applications as well as ensure privacy and public safety~\cite{UAVRegulationsReviewStocker2017}.
\section{Conclusion}\label{sec:conclusion}
This survey paper covers energy optimization techniques for UAV-assisted wireless communication networks, categorizing them in terms of the optimization algorithm employed.
On one hand, there are some well-known optimization methods, such as heuristics, game theory, etc., that have been widely used for energy optimization.
The use of ML for optimization, on the other hand, has been gaining momentum due to its proven capabilities.
Thus, we combined both conventional and ML algorithms in this survey paper in order to cover the literature in a comprehensive and inclusive manner.
The studies on energy optimization in UAV-assisted wireless networking were investigated thoroughly to reveal the state-of-the-art.
Some background information on both the optimization algorithms and the power supply/charging mechanisms of UAVs was given in order to cover the topic in a more complete manner.
Moreover, different types of UAV deployments were also discussed to highlight how divergent UAV-assisted communication networks can be, which increases the level of challenge in optimization.
As one of the most novel parts of this survey, emerging technologies, such as RIS and landing spot optimization, were presented to capture the latest advancements in the literature.
The survey was concluded by the identification of challenges and possible research directions.
This will help focus the research efforts in these areas, thus making UAV-assisted wireless networking a complete and mature concept.
This, in turn, would result in a feasible and applicable concept for wireless communication networks, which has the potential to mitigate the capacity scarcity issue.
\bibliographystyle{IEEEtran}
\section{Introduction}
Survival analysis is a well-defined problem in machine learning that estimates the probability of the occurrence of an event of interest through time. An example of such an event is an organ failure in a recipient after an organ transplant, or the death of a patient admitted to an intensive care unit (ICU). The emergence of deep neural networks (DNNs) and their superior performance in the field of survival analysis \cite{nagpal2021deep, lee2018deephit, lee2019dynamic, miscouridou2018deep} over traditional Cox-based \cite{therneau2000cox} and shallow machine learning models such as logistic regression \cite{efron1988logistic} and random survival forest (RSF) \cite{ishwaran2010consistency} motivated healthcare industry and organizations\footnote{https://impact.canada.ca/en/challenges/deep-space-healthcare-challenge} to utilize DNN models for survival analysis.
As new advances in DNNs become increasingly common in survival analysis applications \cite{lee2019dynamic,nagpal2021deep, katzman2018deepsurv,thiagarajan2020calibrating,ozen2019sanity,hanif2018robust,chung2020deep}, ensuring their operational reliability has become crucial. DNN models usually outperform traditional survival models in estimating the probability density functions (PDFs) of events by learning complex interconnected relationships between the observations and events \cite{rezaei2022deep}.
The decision-making process of traditional survival models such as Cox and RSF is simple and easy to interpret by a human. DNNs, on the other hand, can learn complex and interconnected relationships between features and targets; however, their internal working process resembles a black box and is therefore not easily interpretable by a human. Consequently, the predictions of DNNs, especially in healthcare settings, might not be easily explained or trusted.
There have been a few attempts to gain the trust of healthcare professionals on DNN predictions.
\cite{ribeiro2016should} developed an algorithm named LIME that makes classifier or regression models interpretable by adding an interpreter model, such as a decision tree, to identify a list of important features relevant to the predictions. While the algorithm has been applied to text processing and computer vision, it has not been applied to survival analysis. Applying LIME to survival analysis would be significantly harder, as there is no ground truth for the survival function. The LIME algorithm requires training at least two models at the same time, which makes the training process complex. The other drawback of LIME is that its results are specific to a given problem, and their accuracy cannot be measured statistically.\\
\cite{yousefi2017predicting} introduced a pipeline for interpreting survival analysis results using a DNN. They used Cox partial likelihood as the cost function and back-propagated the calculated risks to the first layer of the network to determine the risk factor of each input feature corresponding to a patient prognosis. The calculated risk for each input provides an interpretation factor for the whole model. Although the model provides an approach to make a DNN model interpretable, it cannot be used when the electronic health record (EHR) cohort has longitudinal measurements, missing values, censored records, or competing risks. Though the proposed pipeline provides a ranked list of the most relevant features, it cannot identify similar patients to a patient of interest based on the similarity of outcomes or EHR records as a source of trust to predictions.\\
In the field of healthcare, identifying patients with clinical measurements and outcomes similar to a case under investigation is considered a source of trust. \cite{gallego2015bringing} developed a process that can provide similar patients from the database to an individual patient based on past clinical decisions and clinical verdicts. \cite{sun2012supervised} used a generalized Mahalanobis distance \cite{de2000mahalanobis} for deriving similar patients based on a physician's feedback. Their method is a supervised learning approach for clustering EHR patients based on key clinical indicators. This approach does not apply to survival analysis, since the output of survival analysis is a survival function with no ground truth. \cite{li2016deep} discussed a few methods for ranking features while training a DNN for genome research, all of which add a regularization term to the cost function of the model to rank the input features. Although this technique is simple and effective for ranking input features, it needs the ground truth for the DNN targets in order to identify the significant features associated with each class. Unfortunately, the assumption that the ground truth is available does not hold in many problems, including survival analysis, where the ground truth is unknown~\cite{nagpal2021deep}. \\
To the best of our knowledge, there is no unified model or framework specifically designed for the interpretation of deep survival models. A comprehensive interpretability tool for a deep survival model can be used for evaluating the reliability of predictions of the model and consequently increase the chance of acceptance of that model by healthcare professionals.
In this research, we propose a framework, reverse survival model (RSM), that provides further insights into the decision-making process of deep survival models. For each prediction, RSM extracts similar clinical measurements and ranks them based on their relevance to the predicted survival PDFs of a deep survival model. For example, RSM can provide a list of similar patients in terms of their survival PDFs and the most relevant clinical observations.
It has been shown in \cite{che2016interpretable,lee2019dynamic,katzman2018deepsurv,nagpal2021deep,miscouridou2018deep} that the estimated PDFs of DNN models usually surpass those of traditional survival models in terms of accuracy and quality. However, when it comes to individual predictions, a physician might not trust a DNN model, since the range of error for an individual prediction is unknown to the physician. In this paper, we try to address the question: \emph{``when should we trust an individual survival prediction by a DNN model?''}.
Our response to this question is to provide a source of trust; a list of \emph{similar patients} from the history of the model, with clinical measurements and outcomes that are similar to the current prediction being made.\\
To interpret the outcomes of a deep survival model, RSM executes three major steps: 1) it finds the features that are most relevant to the predictions made by the deep survival model; 2) it categorizes patients into a few distinct clusters based on the similarity between survival predictions; and 3) it ranks similar patients based on their survival PDFs and the similarities among relevant clinical measurements. RSM applies the Jensen–Shannon divergence (JSD) between survival PDFs as the measure of similarity \cite{fuglede2004jensen}. The smaller the JSD between the PDFs of two patients, the more similar the outcomes for those two patients are \cite{connor2013evaluation}. The significance of clinical measurements is measured by the Kolmogorov-Smirnov statistical test \cite{massey1951kolmogorov}.\\
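The JSD-based similarity measure can be sketched in a few lines (a minimal NumPy illustration; the time bins and example PDFs below are assumed, not taken from the paper's cohorts):

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete PDFs.
    Base-2 logs, so the result lies in [0, 1]."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two discretized survival PDFs over the same time bins: the closer
# the predicted outcomes, the smaller the divergence.
patient_a = [0.05, 0.10, 0.30, 0.35, 0.20]
patient_b = [0.06, 0.12, 0.28, 0.34, 0.20]  # similar prognosis to a
patient_c = [0.50, 0.30, 0.10, 0.07, 0.03]  # very different prognosis
print(jsd(patient_a, patient_b) < jsd(patient_a, patient_c))  # True
```

Note that SciPy's `scipy.spatial.distance.jensenshannon` returns the square root of this quantity (a metric), so the two should not be mixed without squaring.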
The unique advantage of RSM is that it can be applied to any deep survival models that predict the time to an event based on the clinical measurements of an EHR, where the EHR can contain longitudinal measurements, missing values, and censored records. We tested RSM on a synthetic dataset and MIMIC-IV \cite{johnson2020mimic}, a well-known ICU dataset, for survival analysis.
The results prove that for each prediction, RSM can successfully identify similar records from historical data, and then rank them, based on the degree of similarity.\\
The rest of this paper is organized as follows: The pipeline of RSM is described in Section \ref{reversed_section}. Section \ref{section:cohort} introduces the datasets used for evaluating the model. Experimental results are provided in Section \ref{section:experiments}. Finally, Section \ref{section:Discussion} discusses the characteristics and limitations of RSM and concludes the paper.
\section{Methods}\label{reversed_section}
Assume that $D$ consists of a set of tuples $\{(\boldsymbol{x}_i, t_i, \delta_i)\}_{i=1}^N$, where $\boldsymbol{x}_i \in \mathbb{R}^M$ is a vector of $M$ clinical measurements of an individual $i$, and $t_i$ and $\delta_i$ are, respectively, the time and type of an event such as death.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{Figure1.png}
\caption{Schematic view of the RSM model. During the training phase, RSM learns to identify the most significant clinical measures related to the survival PDF of a patient, along with its designated cluster for similarity. For the test path, based on the similarity of survival PDFs and significant clinical measures, Unit 6 suggests the most similar patients from the dataset for a test patient.}
\label{fig:RSM_PIPELINE}
\end{figure}
Given a trained deep survival model, we hypothesize that two patients with similar clinical observations are likely to have similar outcomes. Note that this rests on the assumption that the clinical observations are sufficient for, and relevant to, the clinical events. As shown in Figure \ref{fig:RSM_PIPELINE}, RSM consists of 6 units. Unit 1 estimates the rank of the clinical measurements for the survival PDF prediction. Unit 2 is a deep survival model that predicts the survival PDFs; in this study, we use the Survival Seq2Seq model \cite{survival_Seq2Seq}, a state-of-the-art model for survival analysis. Unit 3 extracts statistics from the estimated PDFs. Unit 4 clusters patients with similar survival PDFs based on the output of Unit 3. Unit 5 ranks the patients in each cluster based on the similarity between their survival PDFs.\\
RSM has two phases, training and testing, that use these six units slightly differently.
Units 1--5 form the training path of RSM. In the training phase, Units 1 and 2 are trained using traditional optimization techniques for deep survival models with the modified loss function defined in Equation \ref{eq-significant-CM}. Unit 4 then learns to cluster the patients using the features that Unit 3 extracts from the predicted survival PDFs (see Section \ref{pdf-similarity-section} for more details). Finally, Unit 6 learns to find the most significant clinical measurements and produces a similarity measure that represents the closeness of these measurements. In the test phase, RSM takes the clinical measurements of a test patient and runs Units 1 to 5 to obtain a similarity score with respect to the other patients in the same designated cluster, i.e., those with similar survival PDFs, based on the most significant clinical measurements. Finally, Unit 6 ranks the patients in the associated cluster that are most similar to the test patient and returns them as the output.
\subsubsection*{PDFs Similarity Measure}
\label{pdf-similarity-section}
There are several similarity measures for PDFs \cite{cha2007comprehensive}, and the JSD is a well-known example. The JSD is a symmetric measure between two PDFs \cite{fuglede2004jensen}, and the associated Jensen–Shannon distance is defined by
\begin{equation}
JS_{dist}(P\parallel Q) =\sqrt{ \frac{1}{2} D(P\parallel M) + \frac{1}{2} D(Q\parallel M)}, \qquad M=\tfrac{1}{2}(P+Q),
\end{equation}
where $P$ and $Q$ are PDFs and $D(\cdot \parallel \cdot)$ is the Kullback–Leibler (KL) divergence.
RSM clusters patients into $K$ clusters based on their pairwise JSD similarity scores: it computes the JS distance between each pair of survival PDFs and uses it as the distance measure for clustering. We used K-means clustering for the sake of simplicity and generalizability; however, alternative clustering algorithms can be used as well.
\subsection*{Significant Input Features (Clinical Measurements) And Their Ranking} \label{signifcant_sect}
Estimating the rank of clinical measurements with respect to a DNN prediction is crucial to the functionality of RSM. Considering all features for measuring the similarity of predictions of two patients could be computationally prohibitive. Therefore, to reduce computational complexity, RSM finds similarities among patients by only considering the features that are most relevant to survival predictions.\\
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{Figure2.png}
\caption{Estimating the significance of clinical measurements by Unit 1. A trainable weight is assigned to each clinical measurement, which represents the importance of each clinical measurement for the survival model predictions. The magnitude of each weight is proportional to the importance of the corresponding clinical measurement. RSM uses $L_1$ regularization for optimizing these weights. Consequently, insignificant measurements are assigned smaller weights, as represented in the right panel by weak connections such as $\omega_2$ and $\omega_M$ for $I_2$ and $I_M$ measurements, respectively.}
\label{fig:fig_3}
\end{figure}
The rank and significance of the clinical measurements are estimated by assigning a trainable weight $\omega$ to each clinical measurement $I$ and then feeding the weighted measurements $E$ into the survival model $f$, as shown in Figure \ref{fig:fig_3}.
After training, the one-to-one layer identifies the significant features through the weight assigned to each feature, $\omega_m$, $m=1,\ldots,M$, where $M$ is the input dimension.
The loss function of the survival model is modified to learn feature weights and is given by
\begin{equation}
\label{eq-significant-CM}
\mathcal{L'}^{(i)}=\mathcal{L}^{(i)}(y^{(i)},f_{\theta}(x^{(i)})) +\frac{\lambda}{M}\sum_{m=1}^{M} |\omega_m|,
\end{equation}
where the term $\mathcal{L}^{(i)}(y^{(i)},f_{\theta}(x^{(i)}))$ is the original loss function of the survival model $f_{\theta}$ for patient $x^{(i)} \in \mathbb{R}^M$ with label $y^{(i)}$, and the term $\frac{\lambda}{M} \sum_{m=1}^{M} |\omega_m|$ is the regularization term. Here, $|\cdot|$ denotes the absolute value of the weight $\omega_m$ associated with the $m^{th}$ input clinical measurement, and the hyperparameter $\lambda$ controls the strength of the regularization. The $L_1$ regularization technique \cite{molchanov2017variational,chang2017dropout} is used to estimate the rank and significance of the clinical measurements: the absolute value of a regularized weight represents the significance of the associated clinical measurement in the predictions of the deep survival model, as shown in Figure \ref{fig:fig_3}.\\
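The effect of the $L_1$ penalty can be sketched with a plain linear model trained by proximal gradient descent (soft-thresholding). This is a simplified stand-in for the deep survival model; all names and constants below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 6
X = rng.normal(size=(n, m))
# Only the first three features influence the target.
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=n)

w = np.zeros(m)
lam, lr = 0.05, 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n          # gradient of the squared loss
    w = w - lr * grad
    # Soft-thresholding: the proximal step for the L1 penalty
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

significance = np.abs(w)                  # |w_m| ranks feature importance
```

After training, the informative features receive clearly larger $|\omega_m|$ than the non-informative ones, which is exactly the ranking signal RSM exploits.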
After the features are ranked, RSM uses the two-sample Kolmogorov–Smirnov (KS) statistical test \cite{massey1951kolmogorov} to identify the most and least significant clinical measurements with respect to the outcomes of the deep survival model (the survival PDFs). The two-sample KS test captures discontinuity, heterogeneity, and dependence across data samples \cite{naaman2021tight}, which is beyond the ability of simpler statistical tests such as the t-test \cite{naaman2021tight}.
The KS test compares two samples and determines whether they follow the same distribution. More specifically, RSM compares the distribution of the significance values of all clinical measurements to the distribution generated by the most significant ones. The latter distribution is built by selecting features from the top of the ranking; this number is increased incrementally until the KS test shows no significant difference between the two distributions. The resulting number of features thus identifies the most significant clinical measurements that represent the distribution of all measurements. After finding the most significant clinical measurements, RSM ranks the patients similar to a test patient based on the ranking of these measurements and on the cluster into which the predicted PDFs fall.
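A simplified sketch of this incremental procedure is given below; for brevity we threshold the two-sample KS statistic directly instead of a p-value, and both function names are ours:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def smallest_representative_top_k(weights, threshold=0.2):
    """Grow the top-k set until its KS distance to the full
    significance distribution drops below the threshold."""
    ranked = np.sort(np.abs(weights))[::-1]
    for k in range(1, len(ranked) + 1):
        if ks_statistic(ranked[:k], ranked) < threshold:
            return k
    return len(ranked)
```

In practice one would use a proper significance level (e.g. via \texttt{scipy.stats.ks\_2samp}) rather than a fixed statistic threshold.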
\subsubsection*{Finding The Most Similar Patients}
\label{PCA_desc_section}
Units 1 to 5 of RSM, explained so far and depicted in Figure \ref{fig:RSM_PIPELINE}, are optimized in the training phase. In the test phase, RSM aims to find the most similar patients in the historical dataset based on the survival PDFs predicted by the survival model. First, RSM finds the cluster of the test patient using the trained clustering model. Then, if the clinical measurements are continuous, RSM compares their values using the Euclidean distance to find and rank the most similar patients. We showcase this approach on a synthetic dataset in later sections.
For larger datasets with hundreds of numerical and categorical features, a simple ranking approach based on the Euclidean distance between the clinical measurements becomes computationally expensive. To reduce the computation, RSM uses principal component analysis (PCA) to describe the clinical measurements by a set of linearly uncorrelated principal components \cite{abdi2010principal}. The number of significant principal components is considerably smaller than the number of input clinical measurements \cite{reddy2020analysis}, so the ranking becomes computationally efficient. RSM ranks patients based on the Euclidean distance of their significant feature representations in the subspace spanned by the largest principal components.\\
In general, the PCA analysis reduces the computational complexity of finding and ranking the patients similar to a test patient. We apply PCA to a matrix whose rows are the training patients restricted to the selected significant clinical measurements. For the patients in each cluster produced by K-means with the JS distance, we compute an eigenvalue-weighted sum over the significant eigenvectors, i.e., each eigenvector is multiplied with the patient's clinical measurements and weighted by its eigenvalue. This yields a score that represents the magnitude of the transformed patient in the significant PCA subspace and can be used to measure the similarity of patients in the PCA space. If we sort all patients in a cluster by this score and compute the same score for the test patient, we can find the most similar patients with a simple binary search. For example, to retrieve 10 similar patients, we locate the position of the test patient in the sorted cluster and take the 5 patients above and the 5 patients below that position.\\
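The eigenvalue-weighted scoring and binary-search ranking described above can be sketched as follows (hypothetical helper names, not the authors' implementation):

```python
import bisect
import numpy as np

def pca_scores(X, n_components=2):
    """Project rows of X onto the leading principal components and
    combine them into a single eigenvalue-weighted score."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:n_components]
    proj = Xc @ eigvec[:, order]                   # projections on top components
    return proj @ eigval[order]                    # eigenvalue-weighted sum

def nearest_by_score(train_scores, test_score, k=4):
    """Binary-search the sorted training scores and take k/2 neighbours
    on each side of the test patient's position."""
    order = np.argsort(train_scores)
    sorted_scores = train_scores[order]
    pos = bisect.bisect_left(sorted_scores, test_score)
    lo, hi = max(0, pos - k // 2), min(len(sorted_scores), pos + k // 2)
    return order[lo:hi]                            # indices of similar patients
```

Sorting each cluster once makes every subsequent query logarithmic in the cluster size, which is the computational benefit claimed above.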
Despite the advantages of PCA, its underlying assumptions should be kept in mind. PCA assumes an affine (linear) relationship among the significant clinical measurements. It also assumes that the samples are independent and identically distributed (iid), which is valid in our analysis since the measurements of each patient are independent of those of other patients. PCA is optimal when these assumptions hold; otherwise, its outcome is sub-optimal. If the linearity assumption between data samples does not hold, kernel-PCA can be used to account for non-linear relationships \cite{rosipal2001kernel}.
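For completeness, a minimal RBF kernel-PCA in the spirit of \cite{rosipal2001kernel} can be written as follows; the kernel width \texttt{gamma} and the normalization are our own illustrative choices:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=0.5):
    """Minimal kernel-PCA with an RBF kernel and double-centred kernel matrix."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one     # double-centre the kernel
    eigval, eigvec = np.linalg.eigh(Kc)            # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:n_components]
    # Scale eigenvectors by sqrt(eigenvalue) to get the projected components
    return eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 1e-12))
```

The returned components can replace the linear PCA projections in the scoring step when the measurements are non-linearly related.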
\section{Experiments}
\label{section:cohort}
We assessed the performance of RSM on two datasets: a synthetic dataset that partially resembles medical datasets and MIMIC-IV, a well-known ICU dataset. A detailed description of each dataset is provided in the following subsections.
\subsection{The Synthetic dataset}
To investigate the ability of RSM to interpret the predictions of deep survival models, we created a synthetic dataset based on a statistical process. We considered $\boldsymbol{x}=(x^{1},...,x^{K})$ as a tuple of $K$ random variables, each of which can be viewed as a clinical measurement with an independent standard normal distribution $\mathcal{N}(0,\boldsymbol{I})$. We modeled the event time $T_i$ of each data sample $i$ as a nonlinear combination of these $K$ random variables, given by
\begin{equation}
\label{synthetic_data}
T_i \sim \exp\left(\boldsymbol{\alpha}^T (\boldsymbol{x}_{i}^{k_1})^2 + \boldsymbol{\beta}^T \boldsymbol{x}_{i}^{k_2}\right),
\end{equation}
where $k_1$ and $k_2$ are two randomly selected disjoint subsets of the $K$ covariates $\{1,..., K\}$. Figure \ref{fig:fig_4} shows the histogram of event times for the Synthetic dataset. By applying an exponential function to the normally distributed features, the event times become exponentially distributed with an average that depends on the parameter sets $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. Notice that the sizes of $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ equal the sizes of the subsets $k_1$ and $k_2$, respectively. In this simulation, we considered $K=10$, $k_1=\{1,3,5,7\}$, and $k_2=\{2,4,6,8,9,10\}$ (so the parameter sets $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ contain 4 and 6 parameters, respectively). We generated 20000 data samples from this stochastic process. \\
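Under our reading of Equation \ref{synthetic_data}, such a dataset can be generated as follows, where the exponential term is used as the per-sample scale of an exponential event-time distribution (an assumption on our part, as are the parameter values):

```python
import numpy as np

rng = np.random.default_rng(42)
N, K = 20000, 10
k1 = [0, 2, 4, 6]            # 0-based indices of {1, 3, 5, 7}
k2 = [1, 3, 5, 7, 8, 9]      # 0-based indices of {2, 4, 6, 8, 9, 10}
alpha = rng.uniform(0.1, 0.5, size=len(k1))   # illustrative parameter values
beta = rng.uniform(0.1, 0.5, size=len(k2))

X = rng.normal(size=(N, K))                   # K i.i.d. standard-normal covariates
scale = np.exp(X[:, k1] ** 2 @ alpha + X[:, k2] @ beta)
T = rng.exponential(scale=scale)              # exponentially distributed event times
```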
In medical examinations and follow-ups of a patient, some clinical measurements may not contribute significantly to the occurrence of the event of interest; we call these insignificant clinical measurements. In the generated dataset, the event times are influenced by all of the clinical measurements in Equation \ref{synthetic_data}. To test the feature-selection capability of RSM, we appended a group of $M$ non-informative clinical measurements $\boldsymbol{x}_p=(x_p^{1},...,x_p^{M})$ that have no effect on the event times, yielding $\boldsymbol{x}'=(x^{1},...,x^{K}, x_p^{1},...,x_p^{M})$. Here we used $M=5$ such non-informative features.
Real-life clinical datasets have some characteristics like containing censored and missing values. We considered such characteristics when generating our synthetic data to make the dataset more realistic \cite{lagakos1979general,leung1997censoring, ibrahim2012missing, sainani2015dealing, nazabal2020handling}. \\
\textbf{Right-censoring}: Right censoring is common in medical datasets: patients are frequently lost to follow-up, and consequently their medical records are not collected after the censoring time \cite{lagakos1979general,leung1997censoring}. To reflect this real-world situation, we randomly selected half of the data, 10000 samples, to be right-censored. Each data sample is therefore represented by $(\boldsymbol{x}'_i, s_i, k_i)$, where $s_i$ indicates whether the event time of the sample is right-censored ($s_i=1$) or not ($s_i=0$), and $k_i$ is the event time for uncensored samples and the lost-to-follow-up time for censored samples.\\
\textbf{Missing values}: Another phenomenon frequently observed in medical data is the presence of missing values. In a longitudinal dataset such as MIMIC-IV, only a few clinical measurements are recorded at any given time, leaving the rest unrecorded \cite{ibrahim2012missing, sainani2015dealing, nazabal2020handling}. It has been noted that missing values and their missing patterns are often correlated with the target labels (informative missingness), which leads to high missing rates in longitudinal datasets \cite{che2018recurrent}. We introduced such not-missing-at-random values into the Synthetic dataset by creating missing patterns for covariates that are correlated with the labels to different extents, accounting for up to 45\% of the values. We also introduced up to 5\% missing-at-random values to the clinical measurements. In sum, the overall missing rate of the Synthetic dataset is 50\%. \\
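A simplified sketch of injecting right-censoring and missing values is shown below. Note that the informative-missingness patterns described above depend on the labels, whereas in this sketch both mechanisms are drawn uniformly at random for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20000, 15
T = rng.exponential(scale=5.0, size=N)            # hypothetical event times

# Right-censor a random half of the samples: they keep only a
# lost-to-follow-up time that precedes the true event time.
censored = np.zeros(N, dtype=bool)
censored[rng.choice(N, size=N // 2, replace=False)] = True
observed_time = np.where(censored, T * rng.uniform(0.1, 1.0, size=N), T)

# Inject ~50% missing values into the covariate matrix.
X = rng.normal(size=(N, M))
mask = rng.random(X.shape) < 0.5
X_missing = np.where(mask, np.nan, X)
```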
\begin{figure}[!tb]
\centering
\includegraphics[width=.8\linewidth]{Figure3.png}
\caption{The histogram of survival times in the Synthetic dataset. The solid blue line shows a kernel density estimation interpolation of the bars that shows the frequency of each quantified survival time \cite{kim2012robust}.}
\label{fig:fig_4}
\end{figure}
\subsection{The MIMIC-IV dataset}
MIMIC-IV is a large, freely-available database comprising de-identified health-related data associated with over 200,000 patients grouped into three modules: core, hosp, and ICU. The documentation of MIMIC-IV is available on its website \footnote{https://mimic.mit.edu/}. In this research, we use the ICU module that contains clinical measurements and outcome events of ICU patients, which can be used for survival analysis \cite{johnson2020mimic} (see appendix section for more details).
\section{Results}
\label{section:experiments}
In this section, we evaluate the ability of RSM in identifying the most relevant variables to the event time and ranking similar patients for MIMIC-IV and Synthetic datasets.
\subsection{Evaluation Approach/Study Design}
To evaluate the performance of a deep survival model, we considered the time-dependent concordance index $\mathbb{C}^{td}(t)$ \cite{antolini2005time} and the mean absolute error (MAE) \cite{chai2014root}. $\mathbb{C}^{td}(t)$ is given by
\begin{equation*}
\mathbb{C}^{td}(t)=P(\hat{F}(t\mid x_i) > \hat{F}(t \mid x_j)\mid \delta_i =1, T_i<T_j, T_i\leq t),
\end{equation*}
where $\hat{F}(t\mid x_i)$ is the estimated cumulative distribution function (CDF) of an event predicted by the model at time $t$, given the clinical measurements $x_i$, and $\delta_i$ is the event indicator, with $\delta_i=1$ identifying uncensored data samples.
The time dependency in $\mathbb{C}^{td}(t)$ allows us to measure how effectively the survival model captures possible changes in risk over time. We report $\mathbb{C}^{td}(t)$ at the $25\%$, $50\%$, $75\%$, and $100\%$ quantiles to roughly cover the whole event horizon. The MAE measure is given by $$MAE=\frac{\sum_{i=1}^{N} \mathbb{I}_{\{\delta_i =1\}}\,|y_i - \hat{y}_i|}{\sum_{i=1}^{N} \mathbb{I}_{\{\delta_i =1\}}}, $$ where $N$ is the sample size in each quantile, $\mathbb{I}_{\{\delta_i =1\}}$ indicates that the $i^{th}$ patient experienced an event of interest, $y_i$ is the true event time, and $\hat{y}_i$ is the expected value of the predicted PDF. Note that the MAE is only calculated for uncensored patients.
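The two metrics can be sketched directly from their definitions. The naive $O(n^2)$ concordance loop below is our own illustration, with \texttt{cdf\_at(i, t)} a hypothetical accessor for the model's predicted CDF:

```python
import numpy as np

def mae_uncensored(y_true, y_pred, delta):
    """MAE over uncensored samples only (delta == 1)."""
    mask = delta == 1
    return np.abs(y_true[mask] - y_pred[mask]).mean()

def c_td(times, deltas, cdf_at):
    """Naive time-dependent concordance: over comparable pairs
    (i uncensored, T_i < T_j), count how often the model assigns
    patient i the higher CDF evaluated at T_i."""
    n = len(times)
    num = den = 0
    for i in range(n):
        if deltas[i] != 1:
            continue
        for j in range(n):
            if times[i] < times[j]:
                den += 1
                if cdf_at(i, times[i]) > cdf_at(j, times[i]):
                    num += 1
    return num / den if den else float("nan")
```

A model that always assigns earlier events a higher CDF attains $\mathbb{C}^{td}=1$.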
\subsection{RSM results on the Synthetic dataset}
To evaluate the performance of RSM, we applied it to the Survival Seq2Seq model described in \cite{survival_Seq2Seq}, a recently proposed state-of-the-art model for survival analysis. The performance of Survival Seq2Seq on the MIMIC-IV and Synthetic datasets is provided in Table \ref{tab:Seq2Seq_result}.
\begin{table}
\centering
\caption{The performance of Survival Seq2Seq \cite{survival_Seq2Seq} on Synthetic and MIMIC-IV datasets. Results are reported with 95\% confidence interval.}
\begin{tabular}{|l|l|llll|}
\hline
& \begin{tabular}[c]{@{}l@{}}Performance\\ Measures\end{tabular} & \multicolumn{4}{c|}{Quantiles} \\ \hline
& & \multicolumn{1}{c|}{25\%} & \multicolumn{1}{c|}{50\%} & \multicolumn{1}{c|}{75\%} & \multicolumn{1}{c|}{100\%} \\ \hline
MIMIC\_IV & MAE & \multicolumn{1}{l|}{34.83±4.1} & \multicolumn{1}{l|}{37.06±4.6} & \multicolumn{1}{l|}{39.53±4.0} & 62.74±3.2 \\ \hline
MIMIC\_IV & CI & \multicolumn{1}{l|}{0.876±0.02} & \multicolumn{1}{l|}{0.882±0.02} & \multicolumn{1}{l|}{0.885±0.02} & 0.906±0.02 \\ \hline
Synthetic & MAE & \multicolumn{1}{l|}{11.85±0.6} & \multicolumn{1}{l|}{12.47±1.2} & \multicolumn{1}{l|}{14.01±1.4} & 15.54±1.8 \\ \hline
Synthetic & CI & \multicolumn{1}{l|}{0.874±0.00} & \multicolumn{1}{l|}{0.777±0.03} & \multicolumn{1}{l|}{0.772±0.05} & 0.807±0.08 \\ \hline
\end{tabular}
\label{tab:Seq2Seq_result}
\end{table}
As described in \cite{survival_Seq2Seq}, Survival Seq2Seq predicts a PDF for each data sample and event in the dataset. Figure \ref{fig:fig_Seq2Seq_result_synth}.A shows an example of the outcome of Survival Seq2Seq for a group of 7 simulated patients.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{Figure4.png}
\caption{RSM identification results on the Synthetic dataset. A) Survival PDFs predicted by survival Seq2Seq for a group of 7 randomly selected patients, relabeled from 1 to 7. B) The clustering results for the group of 7 randomly selected patients whose PDFs are shown in (A).
C) Finding the optimal number of clusters for K-Means. The selected value for K is 3 in this analysis.
D) Significant clinical measurements identified by RSM. The significant features are ranked based on their significant weights in a descending order. The most and least significant features identified using the KS statistical test are indicated by green and red boxes.}
\label{fig:fig_Seq2Seq_result_synth}
\end{figure}
Figure \ref{fig:fig_Seq2Seq_result_synth}.B shows the outcome of clustering the survival PDFs estimated by Survival Seq2Seq. The optimal number of clusters is identified by measuring the mean of squared distances, as shown in Figure \ref{fig:fig_Seq2Seq_result_synth}.C. As previously mentioned, we intentionally added non-informative clinical measurements to the Synthetic dataset. Figure \ref{fig:fig_Seq2Seq_result_synth}.D shows that RSM successfully identifies those non-informative clinical measurements and separates them from the rest of the features by assigning them smaller weights. We considered the features with IDs 16 to 20 as non-informative, and RSM identifies 4 of them as the least significant features. In addition, none of the non-informative features is identified by RSM as a most significant feature. This shows that RSM correctly identifies the significant features of the Synthetic dataset.
\subsection{RSM results on the MIMIC-IV dataset}
The performance of RSM is evaluated using MIMIC-IV dataset as shown in Figure \ref{fig:fig_Seq2Seq_result_MIMIC}.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{Figure5.png}
\caption{The RSM results for the MIMIC-IV dataset. A) The survival PDFs predicted by survival Seq2Seq for a group of 10 random patients. Patients are labelled from 1 to 10. B) Clustering results for a group of 10 randomly selected patients whose PDFs were shown in (A) with the same colors.
C) Finding the optimal number of the clusters for K-Means. The selected value for K is 5 in this analysis.
D) The most significant and top 30\% of the significant clinical measurements identified by RSM are shown and grouped by different boxes. Due to the sheer number of clinical measurements in MIMIC-IV, we only show the significance values of the most significant features. The significant features are ranked based on their significant weights in a descending order.}
\label{fig:fig_Seq2Seq_result_MIMIC}
\end{figure}
Table \ref{tab:sign_perf_MIMIC} shows the top 30\% of the most significant clinical measurements of MIMIC-IV. Medical references that confirm the significance of the clinical measurements identified by RSM are also provided in that table.
\begin{table}[tb]
\small
\centering
\caption{Top 30\% of the most significant clinical measurements of MIMIC-IV in survival analysis, identified by RSM.}
\begin{tabular}{|l|ll|}
\hline
\textit{\begin{tabular}[c]{@{}l@{}}Significance\\ percent\end{tabular}} & \multicolumn{2}{c|}{Clinical Measurements} \\ \hline
1\% & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}} 1-O2 saturation \\ pulseoxymetry \cite{vold2015low}\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}} 2-Mean Airway \\ Pressure \cite{sahetya2020mean}\end{tabular}} \\ \hline
2-3\% & \multicolumn{1}{l|}{3-Anion gap \cite{zhang2022value}} & \begin{tabular}[c]{@{}l@{}} 4-Non Invasive Blood\\ Pressure systolic \cite{lacson2019use}\end{tabular} \\ \hline
4-5\% & \multicolumn{1}{l|}{5-Respiratory Rate \cite{kang2020machine}} & \begin{tabular}[c]{@{}l@{}}6-Non Invasive Blood\\ Pressure mean \end{tabular} \\ \hline
6-7\% & \multicolumn{1}{l|}{7-PEEP set} & 8-Total Bilirubin \cite{chen2021association} \\ \hline
8-9\% & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}9-Non-Invasive Blood \\ Pressure Alarm - Low\end{tabular}} & \begin{tabular}[c]{@{}l@{}}10-Non Invasive Blood\\ Pressure diastolic \cite{greenberg2006blood}\end{tabular} \\ \hline
10-11\% & \multicolumn{1}{l|}{11-Hematocrit (serum) \cite{erikssen1993haematocrit}} & 12-PH (Arterial) \cite{kang2020machine} \\ \hline
12-13\% & \multicolumn{1}{l|}{13-Daily Weight \cite{kang2020machine}} & \begin{tabular}[c]{@{}l@{}}14-Arterial Blood\\ Pressure Alarm - Low\end{tabular} \\ \hline
14-15\% & \multicolumn{1}{l|}{Sodium (whole blood)} & Arterial O2 pressure \\ \hline
16-17\% & \multicolumn{1}{l|}{15-Heart Rate \cite{chen2022development}} & 16-Gentamicin \cite{chen2022development} \\ \hline
18-19\% & \multicolumn{1}{l|}{17-BUN \cite{beier2011elevation}} & 18-Arterial O2 Saturation \\ \hline
20-21\% & \multicolumn{1}{l|}{19-Phosphorous \cite{kestenbaum2005serum}} & \begin{tabular}[c]{@{}l@{}}20-Arterial Blood\\ Pressure mean\end{tabular} \\ \hline
22-23\% & \multicolumn{1}{l|}{21-Platelet Count \cite{msaouel2014abnormal}} & 22-TPN without Lipids \\ \hline
24-25\% & \multicolumn{1}{l|}{23-Calcium non-ionized\cite{miller2010association}} & 24-PTT \cite{reddy1999partial} \\ \hline
26-27\% & \multicolumn{1}{l|}{25-Arterial Base Excess \cite{hamed2019base}} & 26-TCO2 (calc) Arterial \cite{wayne1995use} \\ \hline
28-30\% & \multicolumn{1}{l|}{27-age \cite{ferreira2022two}} & 28-Sodium (serum) \cite{vaa2011influence} \\ \hline
\end{tabular}
\label{tab:sign_perf_MIMIC}
\end{table}
We also trained Survival Seq2Seq using only the top 30\% of the most significant clinical measurements identified by RSM; the outcome is presented in Table \ref{tab:Seq2Seq_result_MIMIC_30}. Our objective was to investigate whether training Survival Seq2Seq on the most significant features would yield an outcome close to the original MIMIC-IV results reported in Table \ref{tab:Seq2Seq_result}; in other words, to verify whether the significant features identified by RSM indeed carry the information most relevant to survival analysis. Table \ref{tab:Seq2Seq_result_MIMIC_30} shows that the performance of Survival Seq2Seq drops only slightly compared to the original results in Table \ref{tab:Seq2Seq_result}, which supports the claim that RSM accurately identifies the most significant clinical measurements of MIMIC-IV.
\begin{table}[tb]
\caption{Performance of Survival Seq2Seq \cite{survival_Seq2Seq} trained on the top 30\% of the most significant clinical measurements of MIMIC-IV. Results are reported with 95\% confidence interval.}
\begin{tabular}{|l|llll|}
\hline
\begin{tabular}[c]{@{}l@{}}Performance\\ Measures\end{tabular} & \multicolumn{4}{c|}{Quantiles} \\ \hline
& \multicolumn{1}{c|}{25\%} & \multicolumn{1}{c|}{50\%} & \multicolumn{1}{c|}{75\%} & \multicolumn{1}{c|}{100\%} \\ \hline
MAE & \multicolumn{1}{l|}{34.78±7.41} & \multicolumn{1}{l|}{36.12±7.35} & \multicolumn{1}{l|}{38.58±6.04} & 64.0±5.72 \\ \hline
CI & \multicolumn{1}{l|}{0.847±0.035} & \multicolumn{1}{l|}{0.846±0.013} & \multicolumn{1}{l|}{0.840±0.019} & 0.651±0.013 \\ \hline
\end{tabular}
\label{tab:Seq2Seq_result_MIMIC_30}
\end{table}
For the MIMIC-IV dataset with hundreds of numerical and categorical clinical measurements, as described in section \ref{PCA_desc_section}, we suggest using PCA or kernel-PCA methods to measure the similarity score between patients. Figure \ref{fig:fig06} shows the histogram of error for the similarity score between the test patients. As expected, kernel-PCA shows smaller errors due to the presence of nonlinear relationships between patients' clinical measurements.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{Figure6.png}
\caption{The histogram of similarity score error for the test patients with A) kernel-PCA and B) PCA analysis.}
\label{fig:fig06}
\end{figure}
In the end, RSM ranks the most similar patients to a patient of interest based on the score measure identified by the kernel-PCA. As an example, the most similar patients to patients with IDs 1 and 2 are shown in Figure \ref{fig:fig07}.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{Figure7.png}
\caption{RSM identifies similar patients based on their survival PDFs and most significant clinical measurements.
A) PCA analysis for 95\% cut-off threshold of the cumulative variance identification for the patients with IDs 1 (left plot) and 2 (right plot).
B) Histogram of the logarithm of the score for the most representative principal components for the patients and associated similar patients. The score range of the most similar patients are identified by a star.
C) Visualization of the first two principal components for the most similar patients those selected from the histogram of distances (B)) for patients with IDs 1 (left plot) and 2 (right plot).}
\label{fig:fig07}
\end{figure}
\section{Discussion}
\label{section:Discussion}
In this study, we proposed RSM, a framework that identifies patients with similar clinical measurements and outcomes to a patient of interest, and therefore verifies the predictions of deep survival models. We applied RSM to the predicted survival PDFs of the Survival Seq2Seq model trained on a synthetic dataset to validate the ability of RSM in recognizing significant clinical measurements and identifying the data samples that are most similar to a given data sample. After validating these capabilities, we tested RSM on Survival Seq2Seq trained on MIMIC-IV to justify the predictions of Survival Seq2Seq. To the best of our knowledge, this is the first time a framework has been specifically designed to interpret the outputs of a trained deep survival model.\\
\subsubsection*{Limitations} Despite adding interpretation capability to deep survival models, a few limitations can affect the performance of RSM. RSM relies on a deep model for performing the survival analysis, so the precision of that model has a direct impact on RSM's performance: RSM cannot properly interpret the outcome of a deep survival model that suffers from poor predictions. Another limitation is that the overall performance of RSM is bounded by the individual performance of its interpretation units, such as the similarity measure, the clustering model, the $L_1$ significant-feature selection, and the statistical tests. Each of these components can be replaced with a more advanced algorithm to achieve a higher overall performance; for example, a clustering algorithm based on deep unsupervised learning \cite{dilokthanakul2016deep} could be used to achieve better clustering. Upgrading one or all of the units employed in RSM is a subject for our future research. In addition, the identification of significant features in RSM is based on the training cohort. As the healthcare industry pursues individualized services, case-specific identification of significant features is potentially more desirable.
\begin{appendices}
\section{MIMIC-IV database}
The MIMIC-IV database contains health-related data of ICU patients of the Beth Israel Deaconess Medical Center between 2008 and 2019. There are a total of 71791 distinct ICU admission records with an average ICU stay of 4 days. The dataset includes vital sign measurements, laboratory test results, medications, and imaging reports of the patients, stored in separate tables. For mortality prediction, relevant covariates have been extracted from the following three tables: 1) INPUTEVENTS (continuous infusions or intermittent administrations), 2) OUTPUTEVENTS (patient outputs including urine, drainage, and so on), and 3) CHARTEVENTS (the patient's routine vital signs and any additional information relevant to their care during the ICU stay). We selected a total of 108 covariates from these three tables, based on feedback from our medical team as well as conventional feature selection techniques applied to the dataset. After feature selection, the number of patients dropped to 66363, with an uncensored (deceased) rate of about $12\%$. Table \ref{tbl:MIMIC-TABLE-DIST} lists the total and selected numbers of covariates from each table.
\begin{table}
\caption{\small \label{tbl:MIMIC-TABLE-DIST} \small MIMIC-IV tables with their corresponding number of total and selected covariates.}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Table} & \textbf{Total \# of Covariates} & \textbf{\# of Selected Covariates} \\
\hline
INPUTEVENTS & 282 & 30 \\
\hline
OUTPUTEVENTS & 69 & 5 \\
\hline
CHARTEVENTS & 1566 & 73 \\
\hline
\textbf{Total} & 1917 & 108 \\
\hline
\end{tabular}
\end{table}
\subsection*{Data Pre-Processing Considerations}
\begin{itemize}
\item Patients with the following diagnoses are excluded from the data: sudden infant death syndrome, unattended death, maternal death affecting fetus or newborn, fetal death from asphyxia or anoxia during labor, and intrauterine death.
\item Invalid measurements were removed from the data using the provided WARNING and ERROR columns.
\item Survival time was defined as the period between the time of admission and time of death (for patients who died in hospital), or time of discharge (for censored patients).
\item Time of observation is defined as the time of recording of the measurement with admission time as the baseline.
\end{itemize}
\end{itemize}
\end{appendices}
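The survival-time convention in the pre-processing list can be sketched as follows. This is only a minimal illustration; the function name, argument names, and timestamp handling are assumptions for the sketch, not the actual MIMIC-IV extraction code:

```python
from datetime import datetime

def survival_record(admit_time, death_time, discharge_time):
    """Return (survival_time_in_days, event) for one ICU admission.

    event = 1 if the patient died in hospital (uncensored),
    event = 0 if the patient was discharged alive (censored).
    Times are datetime objects; all names here are illustrative only.
    """
    if death_time is not None:
        end, event = death_time, 1
    else:
        end, event = discharge_time, 0
    days = (end - admit_time).total_seconds() / 86400.0
    return days, event

# A deceased patient: 3 days from admission to death.
t1, e1 = survival_record(datetime(2019, 1, 1), datetime(2019, 1, 4), None)
# A censored patient: discharged alive after 5 days.
t2, e2 = survival_record(datetime(2019, 1, 1), None, datetime(2019, 1, 6))
```

The observation time would be handled analogously, with the admission time as the baseline.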
\subsubsection*{Acknowledgment}
We would like to sincerely thank Health Canada for their kind support for funding the challenge "Machine learning to improve organ donation rates and make better matches" (Challenge ID: 201906-F0022-C00008). This challenge aims to improve the quality of organ matchmaking and increase the pool of Donation after Circulatory Death (DCD) donors.
\section*{Declarations}
\subsubsection*{Conflict of Interests} The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section{Introduction}
The family of quaternary nickel borocarbides superconductors $R$Ni$_2$B$_2$C, where $R$ is a rare-earth element
or Y, has attracted worldwide attention both because of a relatively high critical temperature T$_{c}$, up to
16\,K for $R$=Lu, and especially from the point of view of competition between superconducting and magnetic
ordered states in the case of $R$=Tm, Er, Ho, Dy, where energy scales for the antiferromagnetic and
superconducting order can be varied over a wide range (see, e.\,g.,\ Refs.\,\cite{Muller,Budko06} and further
Refs.\ therein). The compound with $R$=Er and $T_c\simeq$11\,K is interesting for two reasons \cite{Muller}:
below T$_{\mbox{\tiny N}}\simeq$6\,K an incommensurate antiferromagnetic order with a spin density wave occurs, and
weak ferromagnetism develops below T$_{\mbox{\tiny WFM}}\simeq$2\,K \cite{Canf96}. Both phenomena are, in
general, antagonistic to superconductivity, so that competition between superconducting and the magnetic state
should take place in this compound. Additionally, the superconducting ground state in borocarbide
superconductors is expected to have a multiband nature \cite{Shulga0,Drechsler} with a complex Fermi surface and
different contributions to the superconducting state by different Fermi surface sheets. Therefore, determining
the influence of these magnetic states on a possible multiband superconducting ground state or multiband order
parameter (OP) in ErNi$_2$B$_2$C is a challenge.
Previous tunneling (STM/STS) and point contact (PC) spectroscopy results have left some open questions regarding
the coexistence of superconductivity and magnetism in ErNi$_2$B$_2$C. STM/STS measurements of \cite{Wata} show a
small feature, namely a decrease of the superconducting gap below T$_{\mbox{\tiny N}}$, nearly within error
bars, which was not reproduced in subsequent experiments \cite{Crespo}. Early PC data on polycrystalline samples
\cite{Yanson} indicated that the superconducting gap has roughly a BCS dependence with only a shallow dip around
T$_{\mbox{\tiny N}}\simeq$6\,K. Very recent laser-photoemission spectroscopy data \cite{Baba} show a decrease of the superconducting gap (with remarkably large error bars) below the N\'eel temperature, but at present laser-photoemission spectroscopy does not have enough resolution to resolve further details.
In this paper we report our detailed directional PC Andreev reflection measurements on single-crystal ErNi$_2$B$_2$C along the c-axis and in the ab-plane. Our results show for the first time the presence of two dominating OPs in ErNi$_2$B$_2$C, which differ by a factor of about two, and an appreciable depression of both OPs by the antiferromagnetic transition.
\section{Experimental details}
We have used single crystals of ErNi$_{2}$B$_{2}$C grown by the Ames Laboratory Ni$_2$B high-temperature flux
growth method \cite{Cho95}. PCs were established both along the c axis and in perpendicular direction by
standard ``needle-anvil'' methods \cite{Naid}. The ErNi$_{2}$B$_{2}$C surface was prepared by chemical etching
or cleavage as described in \cite{Bobr06}. As a counter electrode, edged thin Ag wires ($\oslash$=0.15\,mm)
were used to improve mechanical stability of PCs in comparison to use of a bulk Ag piece. We have measured the
temperature dependence of $dV/dI(V)$ characteristics of such N-S PCs (here N denotes a normal metal and S is the
superconductor under study) in the range between 1.45\,K and T$_{c}$ for several contacts oriented both along
the c-axis and in the ab-plane. In the paper we demonstrate results of analysis of 60 $dV/dI(V)$ along the
ab-plane measured for the same PC at different temperatures between 1.45\,K and 11\,K and of 46 $dV/dI(V)$ along
the c-direction for another PC \footnote{The PC resistance is 36 $\Omega$ along the c-axis and 10.5 $\Omega$ in the ab-plane. The PC diameter estimated by the Wexler formula (see \cite{Naid}, pages 9, 31) is about 7\,nm and 14\,nm, respectively, using $\rho l\simeq 10^{-11}\,\Omega$\,cm$^2$ \cite{Shulga0}. At the same time the mean free path $l$ is 28\,nm, using $\rho \simeq 3.5\times10^{-6}\,\Omega$\,cm \cite{Cho95} just above T$_c$. Therefore the mentioned PCs are close to the ballistic limit $d<l$.} in the same temperature range.
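The contact sizes quoted in the footnote can be reproduced from the ballistic (Sharvin) part of the Wexler formula, $d\simeq\sqrt{16\rho l/(3\pi R)}$, together with $l=(\rho l)/\rho$. The following is only a back-of-the-envelope check of these estimates, not the analysis actually used:

```python
import math

RHO_L = 1e-11   # rho*l product, Ohm*cm^2 (value quoted from Shulga et al.)
RHO = 3.5e-6    # resistivity just above Tc, Ohm*cm (value quoted from Cho et al.)

def sharvin_diameter_nm(R_ohm):
    """PC diameter from the ballistic (Sharvin) term of the Wexler formula."""
    d_cm = math.sqrt(16.0 * RHO_L / (3.0 * math.pi * R_ohm))
    return d_cm * 1e7  # cm -> nm

mean_free_path_nm = (RHO_L / RHO) * 1e7   # l = (rho*l)/rho, about 28 nm

d_c = sharvin_diameter_nm(36.0)    # c-axis contact, about 7 nm
d_ab = sharvin_diameter_nm(10.5)   # ab-plane contact, about 13-14 nm
```

Both diameters come out well below the mean free path, consistent with the ballistic-limit statement $d<l$ in the footnote.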
\section{Results and discussion}
To determine the OP from the measured differential resistance curves $dV/dI(V)$ we used a recent theory \cite{Belob} of Andreev reflection in PCs, which includes the pair-breaking effect of magnetic impurities. The latter assumption is reasonable because of the presence of the local magnetic moments of the Er ions.
The measured curves were fitted using equations such as Eq.\,(1) in \cite{Bobr05}. As fit parameters the
superconducting OP $\Delta $ \footnote{Assuming that pair breaking is by magnetic impurities, the energy gap
$\Delta_0$ and the OP $\Delta$ are related as follows: $ \Delta_0=\Delta(1-\gamma^{2/3})^{3/2}$ \cite{Belob}.},
the pair-breaking parameter $\gamma=1/(\tau_s\Delta)$ (here $\tau_s$ is the spin-flip scattering time) and the
dimensionless barrier parameter $Z$ have been used. Although the $dV/dI$ curves shown in Fig.\,\ref{erf1}
exhibit one minimum for each polarity, as in the case of ordinary one gap superconductors \cite{Naid,Naidyuk},
to fit $dV/dI$ in full we had to use a two-OP(gap) approach \footnote{Not only does the one OP
approach give a worse fit of the experimental data (especially at the minima positions and at the maximum, see insets in Fig.\,\ref{erf2}), such that the rms deviation is 2-3 times higher compared to the two-OP fit, it also requires a varying $Z$ parameter. On the contrary,
in the two-OP fit the $Z$ parameters remain constant, equal to 0.77 and 0.6 for the ab-plane and c-direction,
respectively. This is important because there is no physical reason for the barrier parameter $Z$ to be
temperature dependent.}, adding the corresponding conductivities as done in \cite{Bobr06} for LuNi$_{2}$B$_{2}$C.
The contributions of these conductivities account for the part of the Fermi surface containing a particular OP.
Thus, for the two-OP model, experimental curves are fitted \footnote{Before the fit, $dV/dI$ curves were
normalized to the $dV/dI$ curve measured above $T_c$ and symmetrized. The fit was done between $\pm$
8\,mV, to avoid contribution from the phonons seen, e.\,g., as an inflection point around 10\,mV for some curves
in Fig.\,\ref{erf1}. The fit was done in two stages. At first we kept both $\gamma_{1,2}$ coefficients equal to
zero. As a result of this fit the $\Delta_{1,2}$ values shown in Fig.\,3 were obtained, while the variation of $Z$ and
$K$ was within 10\%. Holding $K$ strictly constant at this stage results only in more scatter (noise) for $\Delta_{1,2}$, but their overall behavior remains the same. To improve the fit at the second stage, we used the obtained $\Delta_{1,2}$ and varied $\gamma_{1,2}$ and $K$. The resulting $K$ and $\gamma_{1,2}$ are shown in Figs.\,4 and 5. After this the theoretical curves were almost indistinguishable from the experimental ones (see insets in Fig.\,2).} by the following
expression:
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm,angle=0]{erf1.eps}
\end{center}
\vspace{-0.5cm}
\caption{ (Color online) Raw $dV/dI$ curves of ErNi$_{2}$B$_{2}$C--Ag PCs in the ab-plane and
in the c-direction at indicated temperatures. For clarity, only several representative curves
from the total 60 for the ab-plane and 46 for the c-direction measured at different temperatures are shown.} \label{erf1}
\end{figure}
\begin{equation}
\label{twogap}
\frac{dV}{dI}=\frac{S}{K\frac{dI}{dV}(\Delta_1,\gamma_1,Z_1)+
(1-K)\frac{dI}{dV}(\Delta_2,\gamma_2,Z_2)}
\end{equation}
Here, the coefficient $K$ reflects the contribution of the part of the Fermi surface having the
OP $\Delta_1$, and $S$ is a scaling factor matching the amplitudes of the calculated and experimental curves.
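Equation \eqref{twogap} is a weighted parallel combination of the two partial conductances. The following minimal sketch illustrates this combination and the $\Delta_0$--$\Delta$ relation from the footnote; the two $dI/dV$ channels here are arbitrary placeholder arrays, not actual Andreev-reflection conductances:

```python
import numpy as np

def two_op_dVdI(dIdV1, dIdV2, K, S=1.0):
    """Eq. (1): weighted combination of two partial conductances."""
    return S / (K * dIdV1 + (1.0 - K) * dIdV2)

def energy_gap(delta, gamma):
    """Delta_0 = Delta*(1 - gamma**(2/3))**(3/2): energy gap vs. OP for
    pair breaking by magnetic impurities (valid for gamma <= 1)."""
    return delta * (1.0 - gamma ** (2.0 / 3.0)) ** 1.5

# Placeholder conductance curves on a bias grid (illustrative shapes only).
V = np.linspace(-8.0, 8.0, 401)           # mV
G1 = 1.0 + 0.5 * np.exp(-(V / 2.0) ** 2)  # "large-OP" channel
G2 = 1.0 + 0.3 * np.exp(-(V / 1.0) ** 2)  # "small-OP" channel

dVdI = two_op_dVdI(G1, G2, K=0.7)
```

With $K=1$ the expression reduces to the one-OP case $S\,(dI/dV)_1^{-1}$, and with $\gamma\to0$ the energy gap and the OP coincide.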
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{erf2.eps}
\end{center}
\vspace{-0.5cm} \caption{(Color online) Reduced position of the minima in the raw $dV/dI$ curves for ErNi$_{2}$B$_{2}$C and that for LuNi$_{2}$B$_{2}$C from \cite{Bobr06}. Insets: comparison of two-OP (solid line) and one-OP (dashed line) fitting of the reduced experimental $dV/dI$ (symbols) at 1.45\,K.} \label{erf2}
\end{figure}
Before discussing the fitting results, we point out the unusual specific behavior of the measured $dV/dI$ \footnote{The PCs presented in the paper survived about 36 hours (c-direction) and 50 hours (ab-plane) of measurements. The $dV/dI$ temperature series for these PCs are the most complete, therefore they are presented in the paper. Of course, there were other PCs with $dV/dI$ of lower quality or which did not survive a temperature sweep over the whole range between 1.45\,K and T$_{c}$. Nevertheless, there were a few PCs with $dV/dI$ similar to those presented in the paper, supporting our observations.}
(see Fig.\,\ref{erf1}). First, the distance between $dV/dI$-minima shown in Fig.\,\ref{erf2}, which is often taken as
a rough estimation of the superconducting gap value, increases with temperature before decreasing on approaching
T$_c$ -- quite different behavior from the nonmagnetic LuNi$_2$B$_2$C. Second, the $dV/dI$-minima for
ErNi$_2$B$_2$C persist up to temperatures close to T$_c$ (see Fig.\,\ref{erf2}, upper panel), though with a small amplitude, suggesting the presence of a second OP. From these direct observations, a nontrivial behavior of the superconducting OP parameters is expected in ErNi$_{2}$B$_{2}$C.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm,angle=0]{erf3.eps}
\end{center}
\vspace{-0.5cm} \caption{ (Color online) Temperature dependence of the large OP $\Delta_1$ (circles), small OP $\Delta_2$ (squares) and OP $\Delta$ determined by one OP fit (triangles) for ErNi$_{2}$B$_{2}$C for the two main crystallographic directions. In the upper panel closed (open) symbols show OPs determined during
increasing (decreasing) temperature. The same meaning have closed (open) symbols in Figs.\,\ref{erf4} and \ref{erf5}.} \label{erf3}
\end{figure}
Indeed, from the two-band model fitting both OPs $\Delta_1$ (large) and $\Delta_2$ (small) diminish on entering
the antiferromagnetic state around 6\,K (see Fig.\,\ref{erf3}). The OP determined by the one-OP fit shows qualitatively the same behavior (Fig.\,\ref{erf3}, triangles). This is qualitatively consistent with the
temperature dependence of the superconducting gap determined by tunneling \cite{Wata} and by laser-photoemission spectroscopy \cite{Baba}, and also with the behavior of the upper critical field \cite{Budko00,Budko06} and of the superconducting coherence length \cite{Gammel99} in the vicinity of T$_{\tiny N}$. Theories of the coexistence of superconductivity and the antiferromagnetic state also predict such an OP suppression below T$_{\tiny N}$, \footnote{Various theories of antiferromagnetic superconductors, including the effect of spin fluctuations, molecular field, and impurities on the $\Delta$ behavior (see e.\,g.\ H. Chi and A. D. S. Nagi, J. Low Temp. Phys. {\bf 86}, 139 (1992) and Refs.\ therein), in support of our observation will be discussed in a forthcoming extended publication.} e.\,g., by
antiferromagnetic molecular field \cite{Machida}.
Further, the large OP $\Delta_1(T)$ may be described by a BCS dependence above T$_{\tiny N}\simeq6$\,K in the
paramagnetic state with extrapolated T$_{c}^{*}\simeq$14.5\,K, close to that of nonmagnetic $R$Ni$_{2}$B$_{2}$C
($R$=Lu, Y). On the other hand, the retention of the Andreev reflection minima in $dV/dI$ up to T$_{c}$ (see Fig.\,\ref{erf2}) results in an unconventional abrupt vanishing of $\Delta_1(T)$ near T$_{c}$. Note that to fit the experimental curves, not only the OPs but also the relative contribution $K$ of the large OP (see Eq.\,(\ref{twogap})) must be temperature dependent (see Fig.\,\ref{erf4}). The decrease of $K$ with temperature, shown in Fig.\,\ref{erf4}, points to a diminution of
the ``superconducting'' part of the Fermi surface with the large OP on approaching T$_c$, which results in the collapse of the large OP at this point.
\begin{figure}[t]
\vspace{3cm}
\begin{center}
\includegraphics[width=8.5cm,angle=0]{erf4.eps}
\end{center}
\vspace{-2.5cm} \caption{(Color online) Temperature dependence of the contribution $K$ (see Eq.\,\ref{twogap})
of the larger OP to the $dV/dI$ spectra. Curves represent a polynomial fit simply to guide the eye.} \label{erf4}
\end{figure}
The mentioned decrease of $K$ correlates with behavior of the pair-breaking parameter $\gamma$ shown in
Fig.\,\ref{erf5}. It appears that $\gamma_1$ is always larger than $\gamma_2$ above 2\,K, that is the
pair-breaking effect is stronger in the band with the larger OP. This is in line with the conclusion, that the
different bands are differently affected by magnetic order, made in \cite{Drechsler,Shorikov} by band structure
analysis of the coexistence of superconductivity and magnetism in the related antiferromagnetic superconductor
DyNi$_{2}$B$_{2}$C, {\em i.e.}, some bands provide a basis for superconductivity while others are important for
the magnetic interactions. Here we should add that the $S$ parameter in Eq.\,(1) has a maximal value of 0.5 at low
temperature, the same value as the normalized zero-bias tunnel conductivity obtained by STS study of
ErNi$_{2}$B$_{2}$C \cite{Crespo}. So, both observations suggest that nearly half of the total Fermi surface (or
bands) is nonsuperconducting in ErNi$_{2}$B$_{2}$C. The formation of a superzone gap at the antiferromagnetic transition,
seen in transport measurements \cite{Budko00}, may be responsible for this.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{erf5.eps}
\end{center}
\vspace{-0.5cm}
\caption{(Color online) Temperature dependence of the pair-breaking parameter $\gamma$. Curves represent polynomial fit simply to guide the eye.} \label{erf5}
\end{figure}
From Fig.\,\ref{erf5} it is also seen that $\gamma$ has maxima at temperatures close to $T_{\mbox{\tiny N}}$
and close to the appearance of weak ferromagnetism around 2\,K, which is reasonable. At both of these
temperatures an increase in pair breaking is expected due to increasing spin fluctuations accompanying the
corresponding transitions.
\section{Conclusion}
This study demonstrates that the two-band approximation with two OPs, including
pair-breaking effects, is better suited for describing the PC Andreev reflection spectra of ErNi$_{2}$B$_{2}$C,
pointing for the first time to the presence of a multiband superconducting OP in this compound. The values and the temperature dependencies of the large and the small OPs have been estimated for the ab-plane and the c-direction. It is found that in the paramagnetic state both OPs can be described by a BCS dependence, but the formation of the antiferromagnetic state below T$_{\tiny N}\simeq6$\,K leads to a decrease of both OPs. The pair-breaking effect is found to be different for the large and the small OP, indicating that the different bands are affected differently by magnetic order. This may be the reason for the observed abrupt vanishing of the larger OP at T$_{c}$. It is interesting that extrapolation of the larger OP by a ``conventional'' BCS behavior above T$_{\tiny N}$ results in T$_{c}^{*}\simeq$14.5\,K, similar to nonmagnetic YNi$_{2}$B$_{2}$C, so that $2\Delta^{\mbox{\tiny BCS}}(0)/{\rm k}_{\mbox{\tiny B}}$T$_{c}^{*}\simeq4.25$ and 4.7 for the ab-plane and c-direction, respectively. The BCS extrapolation gives for the small OP $2\Delta^{\mbox{\tiny BCS}}(0)/{\rm k}_{\mbox{\tiny B}}$T$_{c}\simeq4.1$ (ab-plane) and 3.5 (c-direction), while for the one-OP fit $2\Delta^{\mbox{\tiny BCS}}(0)/{\rm k}_{\mbox{\tiny B}}$T$_{c}\simeq 4.6$ (ab-plane) and 4.5 (c-direction), pointing, in general, to a moderately anisotropic and strongly coupled superconducting state in ErNi$_{2}$B$_{2}$C.
\acknowledgments
The support by the State Foundation of Fundamental Research of Ukraine (project $\Phi$16/448-2007), by the
Robert A. Welch Foundation (Grant No A-0514, Houston, TX), and the National Science Foundation (Grant No.
DMR-0422949) are acknowledged. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State
University under Contract No. W-7405-Eng-82.
\section*{Model}
To simulate thermal energy transfers through a one-dimensional particle chain we solve the following space-dependent WKE associated with the $\beta-$FPUT model in the thermodynamic limit~\cite{lvov2018double}, for the spectral action density $n_k=n(x,k,t)$~\cite{nazarenko2011wave,deng2021full,ampatzoglou2021derivation}
\begin{equation}
\label{eqn:wavkin}
\frac{\partial n_k}{\partial t} + v_k\frac{\partial n_k}{\partial x} = \mathcal{I}_k.
\end{equation}
We denote by $x\in[0,L]$, $k\in[0,2\pi)$ and $t>0$ the (macroscopic) physical space, the Fourier space and the time variables, respectively.
The second term on the LHS of \eqref{eqn:wavkin} represents the advection of $n_k$ due to spatial inhomogeneities, where $\omega_k=2\sin(k/2)$ is the linear dispersion relation and $v_k = d\omega_k/dk = \cos(k/2)$ is the group velocity; the RHS of \eqref{eqn:wavkin} is the $4$-wave collision integral
\begin{equation}
\label{eqn:coll}
\begin{aligned}
\mathcal{I}_k &= 4\pi\int_{0}^{2\pi}|T_{k123}|^2n_{k}n_{k_1}n_{k_2}n_{k_3}\biggl(\frac{1}{n_{k}} + \frac{1}{n_{k_1}} \\
&\quad- \frac{1}{n_{k_2}} - \frac{1}{n_{k_3}}\biggr)\delta(\Delta K)\delta(\Delta \Omega)dk_1dk_2dk_3,
\end{aligned}
\end{equation}
where the arguments of the Dirac deltas are defined as $\Delta K = k + k_1 - k_2 - k_3$, $\Delta \Omega = \omega_k + \omega_1 - \omega_2 - \omega_3$, and
$|T_{k123}|^2 = 9\omega_{k}\omega_{1}\omega_{2}\omega_{3}/16$ is the matrix element associated with the $\beta-$FPUT model~\cite{lukkarinen2008anomalous,bustamante2019exact}.
The integration of \eqref{eqn:coll} has only one degree of freedom, since the resonance conditions imposed by the Delta functions constrain the integration to the so-called {\it resonant manifold}, i.e. the subset of possible combinations of $k_1,k_2,k_3$ that are in resonance with mode $k$, representing all the resonant wave quartets.
We provide an explicit expression of this one-dimensional integration in the {\it Materials and Methods} section.
The resonant interactions contained in the collision integral represent the mechanism responsible for the local ({\it i.e.} at fixed $x$)
relaxation to the equilibrium distribution of $n_k$ which, given the two conserved quantities of \eqref{eqn:wavkin}, is given by the Rayleigh-Jeans (RJ)
solution
\begin{equation}\label{eq:5}
n^{(RJ)}_k = \frac{T}{\omega_{k} + \mu}\,.
\end{equation}
Here, $T$ plays the role of the temperature of the system and $\mu$ of the chemical potential; these quantities are associated with the conservation
of the harmonic energy and of the action (or number of particles), respectively. The spatial energy density profile can be
computed by multiplying $n_k$ by $\omega_k$ and integrating over $k$:
\begin{equation}
e(x,t) = \int_0^{2\pi} \omega_k n(x,k,t)dk.
\end{equation}
To avoid confusion, we recall that due to the discreteness of the physical space the Fourier space is periodic and therefore the modes in the interval $[\pi,2\pi)$ can be equivalently interpreted as in $[-\pi,0)$. For this reason, hereafter we refer to the modes near $0$ or $2\pi$ as the {\it low wavenumbers} and to the modes near $\pi$ as the {\it high wavenumbers}.
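As a concrete check of the definitions above: for a homogeneous RJ state with $\mu=0$ the integrand $\omega_k n_k^{(RJ)}$ is identically $T$, so $e=2\pi T$. The following is a minimal numerical sketch using simple midpoint quadrature, not the solver used for the simulations:

```python
import numpy as np

def omega(k):
    """Linear dispersion relation of the chain."""
    return 2.0 * np.sin(k / 2.0)

def rj_energy_density(T, mu, nk=4000):
    """e = int_0^{2pi} omega_k * T/(omega_k + mu) dk, midpoint rule."""
    dk = 2.0 * np.pi / nk
    k = (np.arange(nk) + 0.5) * dk      # midpoints avoid omega_k = 0
    n_rj = T / (omega(k) + mu)          # Rayleigh-Jeans distribution
    return np.sum(omega(k) * n_rj) * dk

e0 = rj_energy_density(T=1.0, mu=0.0)   # analytically 2*pi for mu = 0
```

The same `omega` function also lets one verify numerically that $v_k=\cos(k/2)$ is indeed $d\omega_k/dk$.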
\section*{Results}
In what follows, we will discuss results achieved from two types of numerical simulations of the nonhomogeneous wave kinetic equation: case (A) corresponds to the classical problem of a chain in between two thermostats at different temperatures, and case (B) corresponds to the free evolution of an initial narrow Gaussian energy density profile in $x$. The latter is the typical experiment used to assess the diffusive (or non-diffusive) properties of the system.
\subsection{Anomalous conduction}
\begin{figure*}[ht]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{figure/fig1a.pdf}
\caption{\label{fig:KT}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{figure/fig1b.pdf}
\caption{\label{fig:KL}}
\end{subfigure}
\caption{(a) Thermal conductivity $\mathcal{K}$ as function of non-dimensional time for several values of $L$; (b) Steady state thermal conductivity $\mathcal{K}$
as a function of $L$. For small $L$ most of the modes are non-interacting and the ballistic scaling $\mathcal K\propto L^1$ is recovered. For larger $L$, we observe excellent asymptotic agreement with the scaling $\mathcal K\propto L^{0.4}$. The inset of panel (b) reports the thermal conductivity $\mathcal{K}$ in panel (a) normalized by $L^{0.4}$ as a function of non-dimensional time.\label{fig1}}
\end{figure*}
\begin{figure*}[hbt]
\centering
\includegraphics[width = \textwidth]{figure/fig2.png}
\caption{Color map of $T(x,k,t)=(\omega_k+\mu) n(x,k,t)$. Top row: $L=1$; bottom row: $L=10$. Color ranges from $T_2$ (white) to $T_1$ (red). For $t>0$, two fronts start propagating the temperature of the thermostats, perturbing the initial homogeneous state. The modes in $[0,\pi]$ propagate to the right ($v_k>0$) and the modes in $[\pi,2\pi]$ propagate to the left ($v_k<0$). The upper panels depict a predominantly ballistic situation also at the stationary state (right panel), since $L$ is not sufficiently large for most of the modes to interact. For the larger system in the lower panels, once a steady state is reached the diffusive modes (around $k=\pi$) have equipartitioned at fixed $x$ and are
accompanied by a $k-$independent constant gradient
between the two thermostats. On the other
hand, the ballistic modes (around $k=0$ and $k=2\pi$) carry the energy density
of their originating thermostat all the way
to the opposite side without interactions with
other modes. Here we observe that the width of the ballistic region becomes thinner as $L$ increases (see Fig.~\ref{fig:kjmax}).
\label{fig:Tmap}}
\end{figure*}
To demonstrate numerically anomalous conduction, we consider a domain of size $L$ with two thermostats at its ends at
different temperature $T_1$ and $T_2$. For normal conduction, at the steady state one expects a linear
temperature, $T$, profile (Fourier's law) and the conductivity, $\mathcal{K}$, to be independent of the
size of the domain, $L$. By defining the net spectral energy current as
\begin{equation}
j(k,x,t) = \omega_k v_k[n(k,x,t) - n(-k,x,t)]/2,
\end{equation}
the conductivity can be computed as
\begin{equation}
\label{eqn:cond}
\mathcal{K} = \frac{J L}{\Delta T}
\end{equation}
with
\begin{equation}
J = \frac{1}{L}\int_0^L\int_0^{2\pi} j(k,x,t)dkdx,
\end{equation}
being the spatial average of the integral of $j(k,x,t)$ and $\Delta T = T_1 - T_2$
the temperature difference between the two thermostats. The term $\Delta T/L$ represents the mean
temperature gradient and one can recognize the definition of $\mathcal{K}$ as given by Fourier's law.
Note that, at the steady state, the energy current is independent of $x$.
In Fig. \ref{fig:KT}, we report the time history of the conductivity for several values
of the domain size $L$, keeping $\Delta T$ fixed. The initial ($t=0$) distribution is set to be a RJ distribution at the average temperature of the two thermostats; subsequently, there is an initial transient during which the energy flux grows (and consequently also $\mathcal{K}$), until a stationary state is reached. Note that time is made non-dimensional with the
reference time $L/\mathcal{V}$, with $\mathcal{V}=v_{k=0}=1$ being the maximal ballistic velocity.
In Fig.~\ref{fig:KL}, instead, we report the measured conductivity as a function of the domain size
$L$: the results clearly indicate that for small $L$ the stationary value of the conductivity tends to be proportional to $L$, as
in the purely harmonic system; on the other hand, for $L \to \infty$ the exponent $\alpha$ tends to the
constant value of $0.4$, consistent with the value measured in the microscopic simulations \cite{lepri1998anomalous}. In the inset of the figure, the time history of the conductivity
$\mathcal{K}$ divided by $L^{0.4}$ is drawn to highlight that for large values of $L$ the
curves overlap.
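The ballistic limit $\mathcal K\propto L$ quoted above can be checked directly: if every mode simply carries the RJ distribution of its originating thermostat (right-movers at $T_1$, left-movers at $T_2$, with $\mu=0$), the current density reduces to $j=|v_k|\,\Delta T/2$, so $J=\tfrac{\Delta T}{2}\int_0^{2\pi}|\cos(k/2)|\,dk=2\Delta T$ and $\mathcal K = JL/\Delta T = 2L$. A quick numerical sketch of this limiting estimate (a non-interacting toy, not the full WKE solver used for the figures):

```python
import numpy as np

def ballistic_conductivity(L, T1, T2, nk=100000):
    """K = J*L/dT when each mode carries the mu=0 RJ distribution of its
    originating thermostat: right-movers at T1, left-movers at T2."""
    dk = 2.0 * np.pi / nk
    k = (np.arange(nk) + 0.5) * dk          # midpoints avoid omega_k = 0
    w = 2.0 * np.sin(k / 2.0)               # omega_k
    v = np.cos(k / 2.0)                     # group velocity
    n = np.where(v > 0, T1, T2) / w         # n(k)
    n_rev = np.where(v > 0, T2, T1) / w     # n(-k): the opposite movers
    j = w * v * (n - n_rev) / 2.0           # spectral energy current
    J = np.sum(j) * dk                      # integral over k
    return J * L / (T1 - T2)

K = ballistic_conductivity(L=10.0, T1=1.1, T2=0.9)   # analytically 2*L
```

The result scales exactly linearly in $L$, matching the small-$L$ branch of Fig.~\ref{fig:KL}.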
\begin{figure*}[ht]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width = \columnwidth]{figure/fig3a.pdf}
\caption{\label{fig:jk}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\includegraphics[width = \columnwidth]{figure/fig3b.pdf}
\caption{\label{fig:jLk}}
\end{subfigure}
\caption{(a) Energy current $\langle j\rangle_x$ as function of $k$ for different domain size, once a steady state has been reached. While for the high wavenumbers the energy current contribution decreases as $L$ increases, the low-wavenumber contribution is independent of $L$, in agreement with a ballistic behavior. In the inset, one can appreciate this invariance as $k\to0$.
(b) The energy current $\langle j\rangle_x$ is multiplied by the size of the domain $L$. Now, the renormalized curves tend to converge onto each other independently of $L$ for the high wavenumbers. This behavior is in agreement with Fourier's law (see \eqref{eqn:cond}) prescribing inverse proportionality between energy current and system size.
\label{fig:j}}
\end{figure*}
Via integration of the deterministic microscopic equations, a recent work \cite{Dematteis2020} provided
evidence that the collision integral $\mathcal{I}_k$ brings the system to local equilibrium down to a
critical $k_c$, whereas for lower $k$'s advection is predominant and waves travel across the domain transported
by the group velocity $v_k$, interacting too weakly to relax locally to a RJ spectrum.
Here, we observe this clearly by looking at the evolution of the color map of the temperature in Fig.~\ref{fig:Tmap}, where the temperature spectral density $T(x,k,t)=(\omega_k+\mu)n(x,k,t)$ is defined by inverting \eqref{eq:5}, using the fact that $\mu$ is constant throughout the evolution. Note that for
$k < \pi$ the velocity $v_k$ is positive, while it is negative for $k > \pi$. The initial condition at
$t=0$ is a homogeneous field, with $n(x,k,t=0)=n^{(RJ)}_k$ at temperature $(T_1+T_2)/2$. Due to the presence of the thermostats, as
$t>0$ the waves going to the right start to propagate a hot front from the left thermostat, while
the waves going to the left start propagating a cold front from the right thermostat. The edge of the
front propagates at the maximal speed allowed, which is the speed of the acoustic modes
$v(k\to0^\pm)=\pm1$. For $L = 1$, the energy flows in a ballistic way for almost the entire domain and the
collision integral is not strong enough to bring the system to local equilibrium. As a result, at large
times and at any fixed point $x$, the right-going waves are at temperature $T_1$ and the left-going waves
are at temperature $T_2$, far from local equipartition. For larger system size, $L=10$, instead, advection
is predominant only in a small range of $k$, with the remaining part of the domain dominated by
diffusion: in this region, at large time and at any fixed point $x$, the energetic content of the left-
and right-going waves is equipartitioned, and there is a constant smooth temperature gradient between the
two thermostats. This is reflected in the profile in $k$ of the spatial average of the spectral energy
current $\langle j\rangle_x = \tfrac{1}{L}\int_x j(k,x)dx$, once a steady state is reached, see figure \ref{fig:jk}. For modes with small $k$, $\langle j\rangle_x$ is
independent of the system size since the behavior is purely ballistic~\cite{rieder1967properties}. For
modes with large $k$ the energy current flattens as $L$ increases, accompanied by a reduction of the peak
value.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figure/fig4.pdf}
\caption{Scaling of $k({\rm max}(\langle j\rangle_x))$ (defined as the point of maximum in Fig.~\ref{fig:jk}) as function of $L$. The observed scaling is consistent with $L^{-0.3}$, which implies an anomalous exponent $\alpha=0.4$.
\label{fig:kjmax}}
\end{figure}
In particular, Fig. \ref{fig:jLk} shows how for these modes the energy current is in inverse
proportionality with the chain length $L$ as one would expect from Fourier's law, i.e. \eqref{eqn:cond}
when $\mathcal{K}$ does not depend on $L$. There should then be a critical value $k_c$ above which the
evolution of the energy is diffusion-dominated and below which the predominant transport mechanism is the
purely ballistic one. Considering that at equilibrium the transport term and the collision integral
should balance, and using dimensional arguments, one can find that $k_c \approx L^{-3/10}$
\cite{Dematteis2020}. A scaling consistent with this estimation can be found, in our simulations, for the value of $k$ for which $\langle j\rangle_x$ is maximal, as reported in figure \ref{fig:kjmax}, suggesting that this criterion could be used as
a proxy for the determination of $k_c(L)$, to distinguish between diffusive and ballistic modes.
\subsection{Ballistic and diffusive propagation}
\begin{figure*}
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width = \columnwidth]{figure/fig5a.pdf}
\caption{\label{fig:diffusion_n}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\includegraphics[width = \columnwidth]{figure/fig5b.pdf}
\caption{\label{fig:diffusion_sigma}}
\end{subfigure}
\caption{(a) Spatial distribution of $\langle n\rangle_k(x,t)$ (integrated in $k$) at different times. An initial spatially localized perturbation over a uniform background propagates as two ballistic peaks moving in opposite directions at constant velocity, plus a central decaying peak. (b) Time
history of the variance of the temperature distribution. After the ballistic peaks exit the system, the broadening of the central peak (also known as the heat peak) shows agreement with a diffusive behavior. \label{fig:diffusion}}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width = \textwidth]{figure/fig6a.pdf}
\caption{\label{fig:6a}}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width = \textwidth]{figure/fig6b.pdf}
\caption{\label{fig:6b}}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width = \textwidth]{figure/fig6c.pdf}
\caption{\label{fig:6c}}
\end{subfigure}
\caption{Heat peak evolution as predicted by WKE (\textcolor{blue}{--}), and pure diffusion
(\textcolor{red}{- -}).\label{fig:diffusion_comparison}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{figure/fig7.png}
\caption{Color map of $e(x,k,t)=\omega_kn(x,k,t)$ at four non-dimensional time instants. The second sound emission is clearly seen in the central maps where perturbations at $k\simeq 0$ and $k\simeq 2\pi$ (the low modes) are detaching from the central diffusive peak involving the high modes. The interacting and diffusive character of these modes is evident from the fact that the shape of the central peak remains close to a rectangle during the evolution, tending to populate all modes $k$ with the same energy density at fixed position $x$. \label{fig:energy}}
\end{figure*}
In nonequilibrium statistical physics, the transport coefficients characterizing nonequilibrium steady
states that are not too far from equilibrium can be computed in terms of space-time correlations in an
equilibrium ensemble of realizations of the microscopic dynamics~\cite{kubo2012statistical,lepri2003thermal,aoki2006energy,lukkarinen2016kinetic,spohn2016fluctuating},
via the so-called Kubo integral. Likewise, for the mesoscopic model of~\eqref{eqn:wavkin}, let us now
consider an initial background equilibrium state with constant $T = T_0$ and chemical potential $\mu$.
Let us then consider an initial narrow bell-shaped perturbation $\delta T(x)$ in the center of the
domain, such that $\delta T\ll T_0$. We initialize $n(x,k,t=0)$ with a RJ distribution, see~\eqref{eq:5},
with $T(x)=T_0+\delta T(x)$, {in a domain going from $-L$ to $L$}.
The evolution of $ \langle n \rangle_k(x,t) = \int_0^{2\pi} n(x,k,t) dk$ is shown in Fig.~\ref{fig:diffusion}, where we considered a perturbation having an initial amplitude of $\delta T(x=0)/T_0 = 0.1$.
One can recognize the familiar behavior of a central
peak ({\it heat peak}) and two traveling peaks ({\it acoustic peaks}) which correspond to the emission of the {\it second sound}. This configuration has been studied
using the microscopic dynamics~\cite{lepri2003thermal,lukkarinen2016kinetic} and the
stochastic model known as {\it fluctuating hydrodynamics}~\cite{spohn2016fluctuating}.
For $t>0$, two
peaks separate and propagate in opposite directions with a constant velocity of about $\pm 1$; the central
peak, instead, evolves diffusively in time (Fig.~\ref{fig:diffusion_n}). This is clearly visible by looking at
the time evolution of the variance of the distribution, computed as follows:
\begin{equation}
\sigma^2 = \frac{\int_{x,k} n(k,x,t)\omega(k) x^2 dx dk}{\int_{x,k} n(k,x,t)\omega(k) dx dk}.
\end{equation}
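On a discrete $(k,x)$ grid, this variance can be evaluated directly; a minimal Python sketch (the function and variable names are ours, not from any released code), noting that on uniform grids the spacings cancel in the ratio:

```python
import numpy as np

def energy_weighted_variance(n, omega, x, k):
    """sigma^2 = int n(k,x) omega(k) x^2 dx dk / int n(k,x) omega(k) dx dk,
    approximated by Riemann sums on a uniform (k, x) grid, where the grid
    spacings dx and dk cancel between numerator and denominator."""
    e = n * omega[:, None]          # energy density e(k, x)
    return np.sum(e * x[None, :] ** 2) / np.sum(e)
```

For example, an $x$-profile that is Gaussian with unit variance and separable in $k$ returns $\sigma^2\approx 1$, since the $k$-dependence factors out of the ratio.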
Indeed, as shown in Fig.~\ref{fig:diffusion_sigma}, after the acoustic peaks exit the domain (at
about $t\mathcal{V}/L = 1$) and the central peak is left alone, the variance starts to grow linearly in
time. The time evolution of the central peak follows
regular diffusion, with diffusion coefficient given by half the slope of the asymptote on the right-hand
side of Fig.~\ref{fig:diffusion_sigma}. This result may seem at odds with notable results advocating
a {\it heat peak} that follows {\it fractional diffusion}
(of super-diffusive type). We address this further in the {\it Discussion} section.
We can therefore identify the second sound emission with the non-decaying transport of the ballistic modes, and the
{\it heat peak} with the regular diffusion of the modes that thermalize locally. Further confirmation of
this is found in Fig.~\ref{fig:diffusion_comparison}, where we plot the time
evolution of the small initial Gaussian perturbation on homogeneous background, and we show
that the numerical simulation of the WKE follows closely a diffusive
solution with a diffusion coefficient
of about $0.24$ for this condition. Thus, we can simply refer to the
{\it heat} and the {\it acoustic} peaks as the {\it diffusive} and the {\it ballistic} (or {\it second-sound}) peaks,
respectively, without ambiguity. Fig.~\ref{fig:energy} shows the energy density $ e(x,k,t) $ at various times of the evolution of the perturbation. The low modes, with $k \approx 0$ and $k \approx 2\pi$, are the ones with the highest ballistic
velocity. Hence, they leave the domain on a timescale of the order of $L/\mathcal V$. At longer times, the higher modes start
to diffuse through the collision integral and the distribution follows a diffusive evolution.
\section*{Discussion}
Our direct numerical simulation of the WKE shows that two phononic states coexist in the $\beta$-FPUT chain. The first, involving the low modes, is equivalent to the emission of second sound; the second, involving the higher modes, is purely diffusive. This has been analyzed under the following different points of view.
\begin{itemize}
\item The anomalous scaling of the energy conductivity $\mathcal K\propto L^\alpha$, with $\alpha\simeq0.4$, is confirmed in Figs.~\ref{fig:KT}-\ref{fig:KL}. This is due to the scaling $ k_c(L) $ of the separation between the ballistic modes (low wavenumbers) and the diffusive modes (high wavenumbers), individually contributing towards $\alpha=1$ and $\alpha=0$, respectively. We find that $k_c$ scales with $L$ as $k_c(L)\propto L^{-3/10}$, as shown in Fig.~\ref{fig:kjmax}.
\item The spatial integral of the spectral energy current modal density $\langle j\rangle_x(k)$ is independent of $L$ for the low modes, as predicted for the purely harmonic chain \cite{rieder1967properties}, while it is proportional to $L^{-1}$ for the higher modes, in agreement with Fourier's law. This was shown in Fig.~\ref{fig:j}.
\item A complementary way to analyze energy transport is to look at the evolution of a small localized perturbation of the thermal equilibrium condition. By doing that, we confirm the presence of two {\it acoustic} peaks shooting off in opposite directions and a central {\it heat} peak (Fig.~\ref{fig:diffusion_n}) evolving in agreement with standard Fourier diffusion, as confirmed in Figs.~\ref{fig:diffusion_sigma} and \ref{fig:6a}-\ref{fig:6c}.
\item Existence of the two types of heat transfer,
ballistic and diffusive, is cleanly demonstrated in
Figs.~\ref{fig:Tmap} and \ref{fig:energy}. In Fig.~\ref{fig:Tmap} the qualitative difference between
these two types is most evident near the stationary
nonequilibrium state,
where the horizontal separation between the two different regions gives an intuitive visualization of $k_c$. Finally, in Fig.~\ref{fig:energy} we see an $x-k$ representation of the evolution of a perturbed equilibrium state.
Again, the sharp separation wavenumber, $k_c$, can be observed by eye. Not surprisingly, we discover that the acoustic peaks are made exclusively of noninteracting ballistic modes with low wavenumber, while the heat peak is made exclusively of modes with high wavenumber.
\end{itemize}
Although the separation of scales at $k_c$ observed in the $ x-k $ plots is slightly smeared, the two-state ballistic-diffusive picture analyzed above from these four different angles is robust and does not include super-diffusive propagation with fractional exponents. A fractional diffusion equation with space derivative of order $8/5$ instead of $2$ is an alternative explanation compatible with the conductivity scaling $\mathcal{K} \propto L^{2/5}$, rigorously derived in \cite{mellet2015anomalous}. However, this scaling would predict the propagation of a single peak that finds no correspondence in our observations.
The apparent disagreement may have different origins. For example, the assumption of having only two conserved quantities (energy and action) made in \cite{mellet2015anomalous} is partially violated by the ballistic modes, which effectively preserve momentum and whose propagation is not a part of the heat peak. Although we have assessed overall compatibility of the heat peak evolution with a diffusive behavior, providing a definitive study of this issue is not the aim of the current manuscript.
In close analogy with {\it second sound} propagation in superfluids~\cite{peshkov2013second}, our results show that, if the ballistic phonons are recognized as noninteracting traveling waves~\cite{kuzkin2020ballistic,Dematteis2020,kuzkin2021unsteady}, the two-state ballistic-diffusive picture is compatible with the main observable aspects of energy transport.
Finally, it is worth noticing that second sound in dielectric solids was predicted a long time ago~\cite{chester1963second,prohofsky1964second,chandrasekharaiah1986thermoelasticity}, and later observed for instance in solid He$^3$ and He$^4$ below $4$ K and in NaF below $20$ K, at extremely low temperature. In a dielectric crystal, second sound can be observed when Umklapp resonances are very small, and by lowering the temperature enough, the scattering level is reduced to a point where noninteracting wavelike transport becomes visible on macroscopic scales. It is now well-known that reducing the dimensionality of the material is another way to reduce drastically the number of interactions, in part explaining why it was recently possible to observe second-sound propagation in 2D graphite at temperatures above $100$ K~\cite{huberman2019observation,chen2021non}. Our results further indicate that reducing the dimensionality to (quasi-)1D structures such as nanotubes may offer hope of finally observing second-sound propagation at room temperature.
\bigskip\bigskip
\section*{Materials and Methods}
\subsection*{Integration on the resonant manifold}
In \eqref{eqn:coll}, the equality coming from the momenta Dirac delta has to be interpreted $\mod I$, $I=[0,2\pi)$, to include possible Umklapp resonances. The resonant manifold is the subset of $I\times I\times I\times I$ satisfying at the same time the two conditions
\begin{equation}\label{eq:3m}
\omega_k+\omega_{k_1}-\omega_{k_2}-\omega_{k_3} = 0\,,\qquad k_3 = (k+k_1-k_2)\mod I\,.
\end{equation}
The constraint imposed by integration on the resonant manifold reduces the triple integral of \eqref{eqn:coll} to a one-dimensional integral.
Here, we briefly report some important rigorous results from Ref.~\cite{lukkarinen2008anomalous} (see also~\cite{lukkarinen2016kinetic,pereverzev2003fermi}).
The solutions of the collisional constraints \eqref{eq:3m} are of three types:
\begin{itemize}
\item $k_2 = k$, $k_3=k_1$\,;
\item $k_2 = k_1$, $k_3=k$\,;
\item $k_1=h(k,k_2) \mod I$, where
\begin{equation}
h(x,y) = \frac{y-x}{2} + 2 \arcsin\left( \tan{\frac{|y-x|}{4}} \cos \frac{y+x}{4} \right)\,.
\end{equation}
\end{itemize}
The first two types (perturbative solutions) are trivial resonances that contribute to nonlinear frequency shift and broadening~\cite{lvov2018double}. The third type of solutions (non-perturbative) represents non-trivial resonances that are responsible for irreversible spectral transfers.
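The non-trivial branch can be checked numerically. A minimal Python sketch (the function names are ours), assuming the dispersion $\omega_k=2|\sin(k/2)|$ and the parametrization $k_1=h(k,k_2)$ that appears in the reduced collision integral \eqref{eq:8m}:

```python
import numpy as np

TWO_PI = 2 * np.pi

def omega(k):
    """Phonon dispersion relation of the chain, omega_k = 2|sin(k/2)|."""
    return 2 * np.abs(np.sin(k / 2))

def h(x, y):
    """Non-trivial resonance branch, giving k1 = h(k, k2) mod 2*pi."""
    return (y - x) / 2 + 2 * np.arcsin(np.tan(np.abs(y - x) / 4) * np.cos((y + x) / 4))

def frequency_mismatch(k, k2):
    """Omega(k, h(k,k2), k2): should vanish on the resonant manifold,
    with the momentum constraint taken modulo 2*pi (Umklapp included)."""
    k1 = h(k, k2) % TWO_PI
    k3 = (k + k1 - k2) % TWO_PI
    return omega(k) + omega(k1) - omega(k2) - omega(k3)
```

Evaluating `frequency_mismatch` on a grid of $(k,k_2)$ pairs returns values at machine precision, confirming that the branch solves both constraints in \eqref{eq:3m} simultaneously.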
By integrating analytically in $k_3$, the collision integral of \eqref{eqn:coll} can be written as
\begin{equation}\label{eq:5m}
\mathcal I_k = \int_{0}^{2\pi} dk_1dk_2\; g(k,k_1,k_2) \delta(\Omega(k,k_1,k_2))\,,
\end{equation}
with
\begin{equation}\label{eq:6m}
\begin{aligned}
g(k,k_1,k_2) &= |T_{k,k_1,k_2,k+k_1-k_2}|^2 n_kn_{k_1}n_{k_2}n_{k+k_1-k_2}\\
& \;\;\;\times\left(\frac{1}{n_k}+\frac{1}{n_{k_1}}-\frac{1}{n_{k_2}}-\frac{1}{n_{k+k_1-k_2}} \right)\,,\\
\Omega(k,k_1,k_2) &= \omega_k+\omega_{k_1}-\omega_{k_2}-\omega_{k+k_1-k_2}\,.
\end{aligned}
\end{equation}
In order to integrate out the frequency delta, we exploit the following property of the Dirac delta function:
\begin{equation}\label{eq:7m}
\int dx \;G(x)\delta(f(x)) = \int dx \;G(x) \sum_i \frac{\delta(x-x_i^\star)}{|f'(x_i^\star)|}\,,
\end{equation}
where $x_i^\star$ are all the zeros of $f$. In \eqref{eq:5m}, integrating in the variable $k_1$, we know that all of the zeros of $\Omega = 2[\sin(k/2) + \sin(k_1/2) - \sin(k_2/2) - \sin(|k+k_1-k_2|/2)]$ are of one of the three types above. The trivial solutions give $\Omega'(k_1)=0$ identically, which implies singular denominators. However, as discussed in Ref.~\cite{lukkarinen2008anomalous}, these terms come in pairs of opposite sign (this is easily seen from the symmetries of the integrand of~\eqref{eqn:coll}), which cancel each other and do not contribute. Therefore, the non-vanishing contributions come from the non-trivial resonances, and we obtain
\begin{equation}\label{eq:8m}
\begin{aligned}
\mathcal I_k&= \int_0^{2\pi} dk_2 \int_0^{2\pi}dk_1 \; g(k,k_1,k_2)\frac{\delta(k_1-h(k,k_2))}{|\partial_{k_1}\Omega(k,h(k,k_2),k_2)|}\\
&= \int_0^{2\pi} dk_2 \; \frac{g(k,h(k,k_2),k_2)}{\sqrt{\left(\cos\tfrac{k}{2}+\cos\tfrac{k_2}{2}\right)^2 + 4 \sin\tfrac{k}{2}\sin\tfrac{k_2}{2}}}\,.
\end{aligned}
\end{equation}
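As a sanity check, \eqref{eq:8m} can be evaluated by quadrature: a Rayleigh-Jeans distribution $n_k=T/(\omega_k+\mu)$ makes the bracket in \eqref{eq:6m} proportional to $\Omega$, which vanishes on the resonant manifold, so $\mathcal I_k$ must be zero at equilibrium. A minimal sketch (names are ours; the matrix element $|T_{k,k_1,k_2,k_3}|^2$ is set to $1$ as a placeholder, which does not affect the equilibrium check):

```python
import numpy as np

TWO_PI = 2 * np.pi

def omega(k):
    """Phonon dispersion omega_k = 2|sin(k/2)|."""
    return 2 * np.abs(np.sin(k / 2))

def h(x, y):
    """Non-trivial resonance branch, k1 = h(k, k2) mod 2*pi."""
    return (y - x) / 2 + 2 * np.arcsin(np.tan(np.abs(y - x) / 4) * np.cos((y + x) / 4))

def collision_integral(k, n_of_k, m=2000):
    """Reduced collision integral by midpoint quadrature in k2,
    with |T|^2 = 1 as a placeholder matrix element."""
    k2 = (np.arange(m) + 0.5) * TWO_PI / m          # midpoint grid, avoids endpoints
    k1 = h(k, k2) % TWO_PI
    k3 = (k + k1 - k2) % TWO_PI
    n, n1, n2, n3 = n_of_k(k), n_of_k(k1), n_of_k(k2), n_of_k(k3)
    g = n * n1 * n2 * n3 * (1 / n + 1 / n1 - 1 / n2 - 1 / n3)
    denom = np.sqrt((np.cos(k / 2) + np.cos(k2 / 2)) ** 2
                    + 4 * np.sin(k / 2) * np.sin(k2 / 2))
    return np.sum(g / denom) * TWO_PI / m

# Rayleigh-Jeans equilibrium n_k = T/(omega_k + mu), with T = 0.3, mu = 0.05
# (values taken from the Methods section).
rj = lambda q: 0.3 / (omega(q) + 0.05)
```

On the Rayleigh-Jeans fixed point, `collision_integral` returns a value at the level of quadrature round-off, while a perturbed distribution gives a finite collisional response.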
\subsection*{Numerical details}
Equation \eqref{eqn:wavkin} is solved by a finite-difference approximation in time and space, using the expression
of $\mathcal I_k$ given in \eqref{eq:8m}.
In all simulations we used 100 grid points in $x$ and 1001 points in $k$; the chemical potential is set
to $\mu = 0.05$. The adopted discretization guarantees the conservation of energy and wave action. In {\it Case A} of the {\it Results} we used $T_1=0.4$, $T_2=0.2$. In {\it Case B} of the {\it Results} we used $T_0=0.3$, $\delta T(x=0)/T_0=0.1$.
\section*{Acknowledgements}
M.O. was supported by the ``Departments of Excellence 2018-2022'' Grant awarded by the Italian Ministry of Education, University and Research (MIUR) (L.232/2016). GD and YL gratefully acknowledge funding from ONR grant N00014-17-1-2852. YL also acknowledges support from NSF DMS award 2009418. M.O. was supported by Simons Collaboration on
Wave Turbulence, Grant No. 617006. We thank Gregory Falkovich for pointing out to us the analogy with second-sound propagation. Lamberto Rondoni is also acknowledged for fruitful discussions during the early stages of the work.
\section{Introduction}
Various powerful inference methodologies for continuous-time stochastic processes are based on stationarity. One reason for this is that, in many cases, stationarity is essential to derive asymptotic results. For instance, based on stationarity arguments, different estimation procedures have been successfully applied to flexible and widely used continuous-time models in \cite{BDY2011, C2018, FHM2020, HKLZ2007} and \cite{SS2012b}.\\
However, numerous established models, including processes used in the references above, are inappropriate for modeling data that shows non-stationary behavior. To overcome this issue, \cite{SS2021} recently introduced a general theory on stationary approximations for non-stationary continuous-time processes that allows modeling non-stationary data. Heuristically, this approach follows the intuitive idea of local stationarity as discussed in \cite{D2012, DRW2019, DSR2006, V2012}, and assumes that a sequence of non-stationary processes can be locally approximated by a stationary process. Noticeable examples of time-series models discussed in \cite{SS2021} come from the class of time-varying L\'evy-driven Ornstein-Uhlenbeck processes and time-varying L\'evy-driven state space models.
Since such processes are non-stationary, classical methods used for statistical inference in a stationary setting cannot be applied, and novel estimation procedures are needed.\\
In the present work, we address this issue and provide inference methodologies for sequences of non-parametric non-stationary continuous-time processes that possess a locally stationary approximation. Notably, our results are established in a model-free setting using limit theorems from \cite{SS2021}. We apply these results to study concrete estimators for several well-known non-stationary time series models and analyze their asymptotic properties.\\
To the best of our knowledge, the only comparable results can be found in \cite{KL2012}, where the authors investigate time-varying Gaussian-driven diffusion models and provide asymptotic results of a proposed estimator. Different from \cite{KL2012}, our theory also encompasses non-Gaussian and non-linear time series models. In the discrete-time setting, results similar to our theory have been obtained in \cite{BDW2020} and \cite{DRW2019}, where the authors derive a remarkably versatile theory including various analytical and statistical results for locally stationary processes. \\
More precisely, we introduce a class of kernel-based $M$-estimators whose objective function is a contrast that depends on observations sampled from a sequence of non-stationary processes. To establish consistency and asymptotic normality in a general setting, we impose conditions on the stationary approximation of the sequence and the contrast function. Specifically, the stationary approximation is assumed to be $\theta$-weakly dependent as introduced in \cite{DD2003} and the contrast function is assumed to satisfy identifiability and regularity conditions. In particular, these conditions ensure the existence of a $\theta$-weakly dependent stationary approximation of the contrast. The relative simplicity of the conditions allows us to readily derive asymptotic results for different contrast functions of finite and infinite memory.\\
For instance, we consider a sequence of time-varying L\'evy-driven Ornstein-Uhlenbeck processes and obtain, based on a least squares contrast, a consistent and asymptotically normally distributed $M$-estimator of the underlying coefficient function. The estimator's good performance is demonstrated through a simulation study in a finite sample for different coefficient functions.\\
Moreover, we consider a sequence of time-varying L\'evy-driven state space models, whose locally stationary approximation is a time-invariant L\'evy-driven state space model. The latter processes build a flexible class of continuous-time models that encompasses the well-known class of CARMA processes (see \cite{B2014,MS2007} for an introduction) and allow modeling high-frequency and irregularly spaced data occurring, for example, in finance and turbulence. Recently, a quasi-maximum likelihood and a Whittle estimator for L\'evy-driven state space models sampled at low frequencies have been discussed and compared in \cite{FHM2020}, and \cite{SS2012b}. We use results from these works and establish consistency results for two novel estimators, a localized quasi-maximum likelihood and a localized Whittle estimator. While the localized quasi-maximum likelihood estimator is a time domain $M$-estimator that is based on a log-likelihood contrast, the localized Whittle estimator is a frequency domain estimator constructed from a consistent estimator of the sample autocovariance. We compare both estimators in a simulation study, where their finite sample performances and convergence behaviors are studied.\\
The paper is structured as follows. In Section \ref{sec2}, all technical results needed throughout this work are presented. We review locally stationary approximations, introduce the sampling schemes in use, discuss $\theta$-weak dependence and outline hereditary properties of this measure of dependence and the stationary approximations.
In Section \ref{sec3}, we discuss the aforementioned class of $M$-estimators and establish consistency and asymptotic normality.
In Section \ref{sec4}, we first review elementary properties of L\'evy processes, stochastic integration with respect to them, and time-varying Ornstein-Uhlenbeck processes.
We then apply our results to a least squares contrast and obtain asymptotic results of the corresponding $M$-estimator, where the observations are sampled from a sequence of time-varying Ornstein-Uhlenbeck processes.\\
In Section \ref{sec5}, we first review time-varying L\'evy-driven state space models. Then, for observations sampled from a sequence of such processes, we propose a localized quasi-maximum likelihood estimator in Section \ref{sec5-2} and a localized Whittle estimator in Section \ref{sec5-4}. We show consistency of both estimators and present a truncated version of the localized quasi-maximum likelihood estimator in Section \ref{sec5-3}.\\The outcomes of the simulation study are discussed in Section \ref{sec6} and the proofs of most results are given in Section \ref{sec7}.
\subsection{Notation}
\label{sec1-1}
In this paper, we denote the set of positive integers by $\mathbb{N}$, non-negative integers by $\mathbb{N}_0$, positive real numbers by $\mathbb{R}^+$, non-negative real numbers by $\mathbb{R}_+^0$, the set of $m\times n$ matrices over a ring $R$ by $M_{m\times n}(R)$, and $\mathbf{1}_n$ stands for the $n\times n $ identity matrix. The real part of a complex number $z\in\mathbb{C}$ is written as $\mathfrak{Re}(z)$. For square matrices $A,B\in M_{n\times n}(R)$, $[A,B]=AB-BA$ denotes the commutator of $A$ and $B$. For brevity, we write the transpose of a matrix $A \in M_{m\times n}(\mathbb{R})$ as $A'$, and norms of matrices and vectors are denoted by $\norm{\cdot}$. If the norm is not further specified, we take the Euclidean norm or its induced operator norm, respectively. For a bounded function $h$, $\norm{h}_\infty$ denotes the uniform norm of $h$. In the following, Lipschitz continuous is understood to mean globally Lipschitz. For $u,n\in\mathbb{N}$, let $\mathcal{G}_u^*$ be the class of bounded functions from $(\mathbb{R}^n)^u$ to $\mathbb{R}$ and $\mathcal{G}_u$ be the class of bounded, Lipschitz continuous functions from $(\mathbb{R}^n)^u$ to $\mathbb{R}$ with respect to the distance $\sum_{i=1}^{u}\norm{x_i-y_i}$, where $x,y\in(\mathbb{R}^n)^u$. For $G\in\mathcal{G}_u$ we define
\begin{align*}
Lip(G)=\sup_{x\neq y}\tfrac{|G(x)-G(y)|}{\norm{x_1-y_1}+\ldots+\norm{x_u-y_u}}.
\end{align*}
The Borel $\sigma$-algebras are denoted by $\mathcal{B}(\cdot)$ and $\lambda$ stands for the Lebesgue measure, at least in the context of measures. For a normed vector space $W$, we denote by $\ell^\infty(W)$ the space of all bounded sequences in $W$.
In the following, we will assume all stochastic processes and random variables to be defined on a common complete probability space $(\Omega,\mathcal{F},P)$ equipped with an appropriate filtration if necessary.
Finally, we simply write $L^p$ to denote the space $L^p(\Omega,\mathcal{F},P)$ and $L^p(\mathbb{R})$ to denote the space $L^p(\mathbb{R},\mathcal{B}(\mathbb{R}),\lambda)$ with corresponding norms $\norm{\cdot}_{L^p}$.
\section{Locally stationary approximations and $\theta$-weak dependence}
\label{sec2}
\subsection{Locally stationary approximations}
\label{sec2-1}
Throughout this paper, we consider sequences of processes that can be locally approximated in $L^p$ by a stationary process. This concept is a non-parametric approach to express the intuitive idea of local stationarity, as discussed by Dahlhaus and others (see e.g. \cite{D2012,V2012}). In this paper, we consider locally stationary approximations defined as follows.
\begin{Definition}[{\cite[Definition 2.1]{SS2021}}]\label{definition:statapproxconttime}
Let $Y_N=\{Y_N(t),t\in\mathbb{R}\}_{N\in\mathbb{N}}$ be a sequence of real-valued stochastic processes and $\tilde{Y}=\{\tilde{Y}_u(t),t\in\mathbb{R}\}_{u\in\mathbb{R}^+}$ a family of real-valued stationary processes. We assume that the process $\tilde{Y}_u$ is ergodic for all $u\in\mathbb{R}^+$ and $\sup_{u\in \mathbb{R}^+} \norm{\tilde{Y}_u(0)}_{L^p}<\infty$ for some $p\geq1$. If there exists a constant $C>0$, such that uniformly in $t\in \mathbb{R}$ and $u,v\in\mathbb{R}^+$
\begin{gather}
\lVert\tilde{Y}_u(t)-\tilde{Y}_v(t)\rVert_{L^p}\leq C \abs{u-v} \text{ and }\qquad \lVert Y_N(t)-\tilde{Y}_{t}(Nt)\rVert _{L^p} \leq C \frac{1}{N},\label{assumption:LS}\tag{LS}
\end{gather}
then we call $\tilde{Y}_u$ a \emph{locally stationary approximation} of the sequence $Y_N$ \emph{for} $p$.
\end{Definition}
If $\tilde{Y}_u$ is a locally stationary approximation of $Y_N$ for $p$, then it is also a locally stationary approximation for $p'$, where $1\leq p'\leq p$. \\
Whenever we investigate estimators based on observations from a sequence of processes, we assume the observations to be sampled according to one of the following schemes.
\begin{Assumption}\label{assumption:observations}
For fixed $N\in\mathbb{N}$ and $u\in\mathbb{R}^+$ we assume $Y_N$ to be equidistantly observed at times $\tau_i^N=u+i\delta_N$ with grid size $\delta_N=|\tau_i^N-\tau_{i-1}^N|$ such that $\delta_N\downarrow 0$ for $N\rightarrow\infty$. For a sequence $b_N\downarrow0$ we consider the observation window $[u-b_N,u+b_N]$ and set $m_N=\lfloor b_N/ \delta_N \rfloor$. Thus, the number of observations is given by $2m_N+1=|\{i\in\mathbb{Z}: \tau_i^N\in [u-b_N,u+b_N]\}|$. We require $b_N/\delta_N\rightarrow\infty$ as $N\rightarrow\infty$ and either
\begin{enumerate}[label=\textbf{(O\arabic*)}]
\item $N\delta_N=\delta>0$ for all $N\in\mathbb{N}$ or \label{observations:O1}
\item $N\delta_N\rightarrow\infty$ as $N\rightarrow\infty$.\label{observations:O2}
\end{enumerate}
\end{Assumption}
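The observation grid of Assumption \ref{assumption:observations} can be written down directly; a small illustrative sketch (function and variable names are ours):

```python
import numpy as np

def observation_grid(u, delta_N, b_N):
    """Equidistant observation times tau_i = u + i*delta_N inside the
    window [u - b_N, u + b_N]; returns the grid and m_N = floor(b_N/delta_N),
    so that the grid has 2*m_N + 1 points."""
    m_N = int(np.floor(b_N / delta_N))
    i = np.arange(-m_N, m_N + 1)
    return u + i * delta_N, m_N
```

For example, with $u=1$, $\delta_N=0.25$ and $b_N=1$, the grid consists of $2m_N+1=9$ points covering $[0,2]$.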
Note that these conditions on $N$, $b_N$ and $\delta_N$ immediately imply that $Nb_N\rightarrow\infty$ as $N\rightarrow\infty$. For a comprehensive discussion on the above approximations and observations, including examples of sequences that satisfy Definition \ref{definition:statapproxconttime}, we refer to \cite{SS2021}.
\subsection{$\theta$-weak dependence and hereditary properties}
\label{sec2-2}
In this section we summarize results that are needed throughout the paper. We start with a brief review of the concept of $\theta$-weak dependence.
\begin{Definition}[{\cite{DD2003}}]\label{thetaweaklydependent}
Let $X=\{X(t)\}_{t\in\mathbb{R}}$ be an $\mathbb{R}^n$-valued stochastic process. Then, $X$ is called $\theta$-weakly dependent if
\begin{gather*}
\theta(h)=\sup_{v\in\mathbb{N}}\theta_{v}(h) \underset{h\rightarrow\infty}{\longrightarrow} 0,
\end{gather*}
where
\begin{align*}
\theta_{v}(h)\!=\!\sup\bigg\{\frac{|Cov(F(X(i_1),\ldots,X(i_v)),G(X(j)))|}{\norm{F}_{\infty}Lip(G)}, F\in\mathcal{G}_v^*,G\in\mathcal{G}_1, i_1\leq\ldots\leq i_v\leq i_v+h\leq j \bigg\}.
\end{align*}
We call $(\theta(h))_{h\in\mathbb{R}_0^+}$ the $\theta$-coefficients.
\end{Definition}
Next, we summarize hereditary properties of locally stationary approximations and $\theta$-weak dependence under transformations (see \cite[Section 2.3 and 2.4]{SS2021} for a comprehensive discussion).\\
Let $Y_N$ be a sequence of stochastic processes with locally stationary approximation $\tilde{Y}_u$ for some $p\geq1$. For $k\in\mathbb{N}_0$ we define the infinite and finite memory vectors
\begin{alignat*}{3}
Z_N(t)&=\Big(Y_N(t),Y_N\Big(t-\tfrac{1}{N}\Big),\ldots\Big)&\text{ and } &\tilde{Z}_u(t)=(\tilde{Y}_u(t),\tilde{Y}_u(t-1),\ldots),\text{ as well as}\\
Z_N^{(k)}(t)&=\Big(Y_N(t),Y_N\Big(t-\tfrac{1}{N}\Big),\ldots,Y_N\Big(t-\tfrac{k}{N}\Big)\Big)&\text{ and } &\tilde{Z}_u^{(k)}(t)=(\tilde{Y}_u(t),\tilde{Y}_u(t-1),\ldots,\tilde{Y}_u(t-k)).
\end{alignat*}
For functions from the following two classes we obtain hereditary properties.
\begin{Definition}[{\cite[Definition 2.4]{DRW2019}}]
A measurable function $g:\mathbb{R}^{k+1}\rightarrow\mathbb{R}$ is said to be in the class $\mathcal{L}_{k+1}(M,C)$ for $M\geq0$ and $C\in[0,\infty]$, if
\begin{align*}
\sup_{x\neq y}\frac{|g(x)-g(y)|}{\norm{x-y}_1(1+\norm{x}_1^M+\norm{y}_1^M)}\leq C.
\end{align*}
\end{Definition}
\begin{Definition}[{\cite[Definition 2.10]{SS2021}}]\label{definition:functionclassinfinite}
A measurable function $h:\mathbb{R}^\infty\rightarrow\mathbb{R}^n$ is said to belong to the class $\mathcal{L}_\infty^{p,q}(\alpha)$ for $p,q \geq1$, if there exists a sequence $\alpha=(\alpha_k)_{k\in\mathbb{N}_0}\subset\mathbb{R}_+^0$ satisfying $\sum_{k=0}^\infty\alpha_k<\infty$ and a function $f:\mathbb{R}_0^+\rightarrow\mathbb{R}_0^+$ such that for all sequences $X=(X_k)_{k\in\mathbb{N}_0}\in \ell^\infty(L^q)$ and $Y=(Y_k)_{k\in\mathbb{N}_0}\in\ell^\infty(L^q)$ it holds
\begin{align*}
\norm{h(X)-h(Y)}_{L^p}\leq f\Big(\sup_{k\in\mathbb{N}_0}\{\norm{X_k}_{L^q}\vee \norm{Y_k}_{L^q}\}\Big) \sum_{k=0}^\infty \alpha_k\norm{X_k-Y_k}_{L^q}.
\end{align*}
\end{Definition}
The next proposition is a combination of Proposition 2.7 and 2.11 from \cite{SS2021}.
\begin{Proposition}\label{proposition:inheritanceproperties}
Let $Y_N$ be a sequence of stochastic processes with locally stationary approximation $\tilde{Y}_u$ for some $q\geq1$. Then, for $g\in\mathcal{L}_{k+1}(M,C)$ and a real-valued function $h\in \mathcal{L}_{\infty}^{p,q}(\alpha)$, where $M\geq0$, $C\in [0,\infty)$, $p\geq1$ and $\sum_{k=0}^\infty k\alpha_k<\infty$, it holds:
\begin{enumerate}[label={(\alph*)}]
\item $g(\tilde{Z}_u^{(k)}(t))$ is a locally stationary approximation of the sequence $g(Z_N^{(k)}(t))$ for $\tilde{p}=\frac{q}{M+1}$.
\item If $\tilde{Y}_u$ is $\theta$-weakly dependent with $\theta$-coefficients $\theta_{\tilde{Y}_u}(h)$, then $\tilde{Z}_u^{(k)}$ is $\theta$-weakly dependent with $\theta$-coefficients $\theta_{\tilde{Z}_u^{(k)}}(h)\leq (k+1) \theta_{\tilde{Y}_u}(h-(k+1))$ for $h\geq (k+1)$.
\item If $\tilde{Y}_u$ is $\theta$-weakly dependent with $\theta$-coefficients $\theta_{\tilde{Y}_u}(h)$, satisfies $E[|\tilde{Y}_u(t)|^{(1+M+\gamma)}]<\infty$ for some $\gamma>0$ and additionally $|g(x)|\leq \tilde{C} \norm{x}_1^{M+1}$ for a constant $\tilde{C}>0$, then $g(\tilde{Z}_u^{(k)}(t))$ is $\theta$-weakly dependent with $\theta$-coefficients $\theta_{g(\tilde{Z}_u^{(k)})}(h)=\mathcal{O}\left(\theta_{\tilde{Y}_u}(h)^{\frac{\gamma}{M+\gamma}}\right)$.
\item $h(\tilde{Z}_u(t))$ is a locally stationary approximation of $h(Z_N(t))$ for $p$.
\end{enumerate}
\end{Proposition}
If $\tilde{Y}_u(t)$ is a L\'evy-driven moving average process (see Section \ref{sec4-1}), we give sufficient conditions for $h(\tilde{Z}_u(t))$ to be $\theta$-weakly dependent in Proposition \ref{proposition:infinitememorymovingaverage}.
\section{$M$-estimation of contrast functions based on locally stationary approximations}
\label{sec3}
Let $Y_N$ be a sequence of stochastic processes with locally stationary approximation $\tilde{Y}_u$ as described in Definition \ref{definition:statapproxconttime}.
In this section, we study localized $M$-estimators of contrast functions based on observations of $Y_N$. In a discrete-time setting, such an estimation procedure has recently been investigated in \cite{BDW2020} and \cite{DRW2019}. The contrast functions we investigate are assumed to be of the form
\begin{gather}\label{equation:contrastfunctioninfinitememory}
\Phi\Big(\Big(\tilde{Y}_u(\Delta (1-k))\Big)_{k\in\mathbb{N}_0},\vartheta\Big),
\end{gather}
where $\tilde{Y}=(\tilde{Y}_u(\Delta (1-k)))_{k\in\mathbb{N}_0}$ is a sequence in $L^p$ for $p\geq1, \Delta>0$ and $\vartheta\in\Theta$, where $\Theta\subset\mathbb{R}^d$ is a parameter space. We assume that the true parameter $\vartheta^*$ is identifiable from the contrast, i.e.
\begin{manualassumption}{(M1)}\label{assumption:M1}
Assume that $\Phi(\tilde{Y},\vartheta)\in L^1$ for all $\vartheta\in\Theta$ and that $\vartheta^*$ is the unique minimum in $\Theta$ of the function $\vartheta\mapsto E[\Phi(\tilde{Y},\vartheta)]=M(\vartheta)$.
\end{manualassumption}
\noindent In a stationary setting, the natural choice of an $M$-estimator for $\vartheta^*$ is
\begin{gather}\label{equation:naturalMestimator}
\argmin_{\vartheta\in\Theta}\frac{1}{n}\sum_{i=1}^{n}\Phi\left((\tilde{Y}_u(i+\Delta (1-k)))_{k\in\mathbb{N}_0},\vartheta\right).
\end{gather}
For processes that possess a locally stationary approximation, a localized law of large numbers has recently been proven in \cite[Theorem 3.5 and 3.6]{SS2021}. There, the localization is achieved by using a localizing kernel of the following type.
\begin{Definition}\label{definition:localizingkernel}
Let $K:\mathbb{R}\rightarrow \mathbb{R}$ be a bounded function. If $K$ is of bounded variation, has compact support $[-1,1]$ and satisfies $\int_\mathbb{R} K(x)dx=1$, then we call $K$ a localizing kernel.
\end{Definition}
\noindent From now on, if not otherwise stated, $K$ always denotes a localizing kernel.\\
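For instance, the Epanechnikov kernel $K(x)=\tfrac{3}{4}(1-x^2)\mathbf{1}_{[-1,1]}(x)$ satisfies all requirements of Definition \ref{definition:localizingkernel}; a short illustrative check in Python (not part of any accompanying code):

```python
import numpy as np

def epanechnikov(x):
    """A localizing kernel: bounded, of bounded variation, with compact
    support [-1, 1], and integrating to 1 over the real line."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1, 0.75 * (1 - x ** 2), 0.0)
```

A Riemann sum over a fine grid confirms $\int_\mathbb{R} K(x)\,dx=1$ and that the kernel vanishes outside $[-1,1]$.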
Following this approach we replace the observations of $\tilde{Y}_u$ in (\ref{equation:naturalMestimator}) by observations of the sequence $Y_N$ as defined in Assumption \ref{assumption:observations}, leading to the localized estimator
\begin{align}\label{eq:M_N}
\begin{aligned}
\hat{\vartheta}_N&=\argmin_{\vartheta\in\Theta}M_N(\vartheta), \text{where}\\
M_N(\vartheta)&=\frac{\delta_N}{b_N}\sum_{i=-m_N}^{m_N}K\left(\frac{\tau_i^N-u}{b_N} \right)\Phi\bigg(\bigg(Y_N\bigg(\tau_i^N+\frac{\Delta (1-k)}{N}\bigg)\bigg)_{k\in\mathbb{N}_0},\vartheta\bigg).
\end{aligned}
\end{align}
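The structure of \eqref{eq:M_N} can be illustrated for a one-lag contrast, with a grid search standing in for the $\argmin$; this is a schematic sketch with names of our own choosing, not the estimator implementation used later (contrasts with longer or infinite memory are handled analogously):

```python
import numpy as np

def localized_m_estimate(theta_grid, contrast, obs, taus, u, b_N, delta_N, K):
    """Minimize M_N(theta) over a parameter grid, where M_N is the
    kernel-weighted average of a one-lag contrast Phi(y_i, y_{i-1}; theta)
    over observations obs taken at times taus around the location u."""
    w = (delta_N / b_N) * K((taus - u) / b_N)   # localizing weights
    M = [np.sum(w[1:] * contrast(obs[1:], obs[:-1], th)) for th in theta_grid]
    return theta_grid[int(np.argmin(M))]
```

As a toy check, observations generated by the noiseless recursion $y_i = 0.8\,y_{i-1}$ together with the least squares contrast $(y_i-\vartheta y_{i-1})^2$ recover $\vartheta^*=0.8$ up to the grid resolution.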
\noindent In the next two sections, we derive sufficient conditions on the contrast $\Phi$ that ensure consistency and asymptotic normality of $\hat{\vartheta}_N$. As a first step, we give conditions ensuring that $\Phi$ is integrable.
\begin{Lemma}\label{lemma:integrabilityphi}
Let $\Phi:\mathbb{R}^\infty\times\mathbb{R}^d\rightarrow\mathbb{R}$ be a measurable function. If $\Phi(\cdot,\vartheta)\in \mathcal{L}_\infty^{p,q}(\alpha)$ for all $\vartheta\in\Theta$ and $\sup_{\vartheta\in\Theta}\norm{\Phi(0,\vartheta)}<\infty$, then $\Phi(X,\vartheta)\in L^1$ for all $X=(X_k)_{k\in\mathbb{N}_0}\in \ell^\infty(L^q)$ and $\vartheta\in\Theta$. Moreover, if $X=(X_t)_{t\in\mathbb{R}}$ is a stationary integrable ergodic process, then $\left(\Phi\left((X_{t-k})_{k\in\mathbb{N}_0},\vartheta\right)\right)_{t\in\mathbb{R}}$ is again a stationary integrable ergodic process for all $\vartheta\in\Theta$.
\end{Lemma}
\begin{proof}
For $t\in\mathbb{R}$ and $m\in\mathbb{N}$ we have that $\phi_{t,m}=\Phi(X_{t},\ldots,X_{t-m},0,\ldots,\vartheta)\in L^1$, since $\sup_{\vartheta\in\Theta}\norm{\Phi(0,\vartheta)}<\infty$ and $\Phi(\cdot,\vartheta)\in \mathcal{L}_\infty^{p,q}(\alpha)$. Then, similarly to \cite[Lemma 3.1]{BDW2020}, one can show that $\phi_{t,m}$ is a Cauchy sequence in $L^1$. Noting that $\Phi$ is measurable, we conclude analogously to \cite[Proposition 4.3]{K1985}.
\end{proof}
\noindent Note that Lemma \ref{lemma:integrabilityphi} is often used implicitly in the following.
\subsection{Consistency}
\label{sec3-1}
We now show pointwise convergence of $M_N(\vartheta)$, i.e. $M_N(\vartheta)\overset{P}{\rightarrow} M(\vartheta)$ for all $\vartheta\in\Theta$ as $N\rightarrow\infty$, and stochastic equicontinuity of the sequence $\{M_N(\vartheta)\}_{N\in\mathbb{N}}$. Together, these properties imply $\hat{\vartheta}_N\overset{P}{\longrightarrow}\vartheta^*$ as $N\rightarrow\infty$ along the usual lines.
To show pointwise convergence, we use the localized law of large numbers from \cite{SS2021}. To this end, it is necessary to impose regularity conditions on $\Phi$. We demand that $\Phi(\cdot,\vartheta)$ belongs to $\mathcal{L}_{\infty}^{p,q}$ for each $\vartheta \in \Theta$. Moreover, if \hyperref[observations:O2]{(O2)} holds, it is clear that $\Phi(\tilde{Y},\vartheta)$ has to be $\theta$-weakly dependent for all $\vartheta\in\Theta$ (see \cite[Theorem 3.6]{SS2021}). Besides this, $\Phi$ additionally has to belong to $\mathcal{L}_d$ with respect to the parameter space to ensure stochastic equicontinuity.
\begin{Theorem}\label{theorem:consistency}
Let $Y_N$ be a sequence of stochastic processes with locally stationary approximation $\tilde{Y}_u$ for some $q\geq1$ such that either \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} holds. Besides, for some $p\geq1$ and a compact set $\Theta\subset\mathbb{R}^d$, we assume that
\begin{enumerate}[label={(\alph*)}]
\item $\Phi(\cdot,\vartheta)\in \mathcal{L}_\infty^{p,q}(\alpha)$ for all $\vartheta\in\Theta$, such that $\sum_{k=0}^\infty k \alpha_k<\infty$,
\item $\Phi(x,\cdot)\in \mathcal{L}_d\left(0,D_1(1+\sum_{k=0}^\infty \beta_k|x_k|^q) \right)$ for all real sequences $x=(x_k)_{k\in\mathbb{N}_0}$ and some $D_1\geq0$, where $(\beta_k)_{k\in\mathbb{N}_0}\subset\mathbb{R}^+$ is a sequence such that $\sum_{k=0}^\infty k\beta_k<\infty$,
\item if \hyperref[observations:O2]{(O2)} holds, both $\Phi((\tilde{Y}_u(t+\Delta (1-k)))_{k\in\mathbb{N}_0},\vartheta)$ as well as $g((\tilde{Y}_u(t+\Delta (1-k)))_{k\in\mathbb{N}_0})$ are $\theta$-weakly dependent for all $\vartheta\in\Theta$, where $g(x)=\sum_{k=0}^\infty \beta_k|x_k|^q$,
\item $\sup_{\vartheta\in\Theta}\norm{\Phi(0,\vartheta)}<\infty$ and
\item the identifiability condition \hyperref[assumption:M1]{(M1)} holds.
\end{enumerate}
Then, $\hat{\vartheta}_N$ is consistent, i.e. $\hat{\vartheta}_N\overset{P}{\longrightarrow}\vartheta^*$ as $N\rightarrow\infty$.
\end{Theorem}
\begin{proof}
See Section \ref{sec7-1}.
\end{proof}
\begin{Remark}\label{remark:consistencyfinitememory}
In the case where \hyperref[observations:O2]{(O2)} holds and the contrast function $\Phi$ is of finite memory, i.e. there exists $n\in\mathbb{N}_0$ such that $\Phi((\tilde{Y}_u(\Delta (1-k)))_{k\in\mathbb{N}_0},\vartheta)=\Phi(\tilde{Y}_u(\Delta),\ldots,\tilde{Y}_u(\Delta(1-n)),\vartheta)$, Proposition \ref{proposition:inheritanceproperties} shows that the conditions (c) and (d) of Theorem \ref{theorem:consistency} are implied by the condition that
\begin{enumerate}[label={(\alph*)}]
\item[(c$^*$)] $\Phi(x,\vartheta)\leq C\norm{x}_1^{M+1}$ and $\Phi(\cdot,\vartheta)\in\mathcal{L}_{n+1}(M,C)$ for some $C,M\geq0$ and all $x\in\mathbb{R}^{n+1}$, $\vartheta\in\Theta$. Moreover, $\tilde{Y}_u$ is $\theta$-weakly dependent and $E[|\tilde{Y}_u|^{(q\vee (M+1))+\gamma}]<\infty$ for some $\gamma>0$.
\end{enumerate}
For contrast functions that are of infinite memory, we give sufficient conditions for (c) in Corollary \ref{corollary:thetaweakdependenceinfinitememory}, where we investigate processes whose locally stationary approximation $\tilde{Y}_u$ is a L\'evy-driven moving average.
\end{Remark}
\subsection{Asymptotic normality}
\label{sec3-2}
To establish asymptotic normality of $\hat{\vartheta}_N$ we follow the classical approach (see e.g. \cite[Section 5.3]{V1998}) to show asymptotic normality of an $M$-estimator. We impose conditions on the first and second order partial derivatives of the contrast $\Phi$ and investigate the Taylor expansion of $\nabla_\vartheta M_N$ at $\vartheta^*$. The individual components of the expansion are then shown to either converge to $0$ or to be asymptotically normal. The localization is achieved by using the rectangular kernel
\begin{gather}\label{equation:rectangularkernel}
K_{rect}(x)=\frac{1}{2}\mathbb{1}_{\{x\in[-1,1]\}}.
\end{gather}
It is easy to see that $K_{rect}$ is a localizing kernel. Depending on whether \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} holds, we obtain different asymptotic variances.\\
To establish asymptotic normality of the components of the Taylor expansion we use results from \cite{SS2021}. There, the authors derived central limit type results under the following condition on the $\theta$-coefficients $\theta(h)$ of the locally stationary approximation:
\begin{align*}
\hypertarget{DD}{DD(\varepsilon):} \qquad\sum_{h=1}^\infty\theta(h)h^{\frac{1}{\varepsilon}}<\infty \quad \text{for some }\varepsilon>0.
\end{align*}
Sufficient conditions for \hyperlink{DD}{DD($\varepsilon$)} to hold are for instance $\theta(h)\in\mathcal{O}(h^{-\alpha})$ for some $\alpha>(1+\frac{1}{\varepsilon})$ or $\theta(h)\in\mathcal{O}\big(\big(h\ln(h)\big)^{-1-\frac{1}{\varepsilon}}\big)$.
\begin{Theorem}\label{theorem:asymptoticnormality}
Let $q,\tilde{q},\bar{q}\geq1$ and $Y_N$ be a sequence of stochastic processes with locally stationary approximation $\tilde{Y}_u$ for some $s\geq\max\{q,\tilde{q},\bar{q}\}$ such that either \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} holds. The contrast function $\Phi$ is assumed to be of the form (\ref{equation:contrastfunctioninfinitememory}) such that the Hessian matrix $\nabla_\vartheta^2 \Phi$ of $\Phi$ with respect to $\vartheta$ exists.
Moreover, assume that
\begin{enumerate}[label={(\alph*)}]
\item the parameter space $\Theta\subset\mathbb{R}^d$ is compact, \hyperref[assumption:M1]{(M1)} holds and the unique minimum $\vartheta^*$ is located in the interior of $\Theta$.
\item $\sqrt{m_N}b_N\rightarrow0$ as $N\rightarrow\infty$ and the localizing kernel is given by (\ref{equation:rectangularkernel}).
\item $\Phi(x,\cdot)\in \mathcal{L}_d\left(0,D_0(1+\sum_{k=0}^\infty \beta_k|x_k|^q) \right)$ for all real sequences $x=(x_k)_{k\in\mathbb{N}_0}$ and some $D_0\geq0$, where $(\beta_k)_{k\in\mathbb{N}_0}\subset\mathbb{R}^+$ is a sequence such that $\sum_{k=0}^\infty k\beta_k<\infty$ and $\Phi(\cdot,\vartheta)\in\mathcal{L}_{\infty}^{p,q}(\alpha)$ for all $\vartheta\in\Theta$, where $p\geq1$ and $\sum_{k=0}^\infty k \alpha_k<\infty$.
\item $\frac{\partial}{\partial\vartheta_i}\Phi(\cdot,\vartheta^*)\in\mathcal{L}_{\infty}^{\tilde{p},\tilde{q}}(\tilde{\alpha})$ for all $i=1,\ldots,d$, where $\tilde{p}\geq2$ and $\sum_{k=0}^\infty k \tilde{\alpha}_k<\infty$.
\item the stationary process $\nabla_\vartheta \Phi(t):=\nabla_\vartheta \Phi\left( \left(\tilde{Y}_u(t+\Delta(1-k))\right)_{k\in\mathbb{N}_0},\vartheta^*\right)\in L^{2+\gamma_1}$ for some $\gamma_1>0$. Moreover, $\nabla_\vartheta \Phi(t)$ is $\theta$-weakly dependent with $\theta$-coefficients $\theta(h)$ satisfying \hyperlink{DD}{DD($\gamma_1$)}.
\item $\frac{\partial^2}{\partial\vartheta_i\partial\vartheta_j}\Phi(x,\cdot)\in \mathcal{L}_d\left(0,D_1(1+\sum_{k=0}^\infty \bar{\beta}_k|x_k|^{\bar{q}}) \right)$ for all real sequences $x=(x_k)_{k\in\mathbb{N}_0}$, $i,j=1,\ldots,d$ and some $D_1\geq0$, where $(\bar{\beta}_k)_{k\in\mathbb{N}_0}\subset\mathbb{R}^+$ is a sequence such that $\sum_{k=0}^\infty k\bar{\beta}_k<\infty$ and $\frac{\partial^2}{\partial\vartheta_i\partial\vartheta_j}\Phi(\cdot,\vartheta)\in\mathcal{L}_{\infty}^{\bar{p},\bar{q}}(\bar{\alpha})$ for all $\vartheta\in\Theta$, $i,j=1,\ldots,d$, where $\bar{p}\geq1$ and $\sum_{k=0}^\infty k \bar{\alpha}_k<\infty$.
\item if \hyperref[observations:O2]{(O2)} holds, the following conditions are satisfied:
\begin{enumerate}[label={(\alph*)}]
\item[(g1)] the processes $\Phi\left( \left(\tilde{Y}_u(t+\Delta(1-k))\right)_{k\in\mathbb{N}_0},\vartheta\right)$ and $g\left(\left(\tilde{Y}_u(t+\Delta(1-k))\right)_{k\in\mathbb{N}_0}\right)$ are $\theta$-weakly dependent for all $\vartheta\in\Theta$, where $g((x_k)_{k\in\mathbb{N}_0})=\sum_{k=0}^\infty\beta_k\abs{x_k}^{\bar{q}}$.
\item[(g2)] the processes $\frac{\partial^2}{\partial\vartheta_i\partial\vartheta_j}\Phi\left( \left(\tilde{Y}_u(t+\Delta(1-k))\right)_{k\in\mathbb{N}_0},\vartheta\right)$ and $\bar{g}\left(\left(\tilde{Y}_u(t+\Delta(1-k))\right)_{k\in\mathbb{N}_0}\right)$ are $\theta$-weakly dependent for all $\vartheta\in\Theta$ and $i,j=1,\ldots,d$, where $\bar{g}((x_k)_{k\in\mathbb{N}_0})=\sum_{k=0}^\infty\Bar{\beta}_k\abs{x_k}^{\bar{q}}$.
\end{enumerate}
\item $\sup_{\vartheta\in\Theta}\norm{\Phi(0,\vartheta)}<\infty$ and $\sup_{\vartheta\in\Theta}\norm{\frac{\partial^2}{\partial\vartheta_i\partial\vartheta_j}\Phi(0,\vartheta)}<\infty$ for all $i,j=1,\ldots, d$.
\item the matrices
\begin{align*}
I(u)&=\begin{cases}\frac{1}{2}I(u,0)+\sum_{k=1}^\infty I(u,k), &\text{if \hyperref[observations:O1]{(O1)} holds,}\\
\frac{1}{2}I(u,0), &\text{if \hyperref[observations:O2]{(O2)} holds}\end{cases}\text{ and}\\
V(u)&=E\left[ \nabla_\vartheta^2 \Phi\left(\left(\tilde{Y}_u(\Delta(1-k))\right)_{k\in\mathbb{N}_0},\vartheta^* \right)\right]
\end{align*}
are positive definite, where
\begin{gather*}
I(u,k)=E\left[ \nabla_\vartheta \Phi\left(\left(\tilde{Y}_u(\Delta(1-j))\right)_{j\in\mathbb{N}_0},\vartheta^* \right)\nabla_\vartheta
\Phi\left(\left(\tilde{Y}_u(k\delta+\Delta(1-j))\right)_{j\in\mathbb{N}_0},\vartheta^* \right)'\right].
\end{gather*}
\end{enumerate}
Then, it holds
\begin{gather}\label{eq:asymptoticnormalitythetahat}
\sqrt{\frac{b_N}{\delta_N}} \left(\hat{\vartheta}_N-\vartheta^* \right)\overset{d}{\underset{N\rightarrow\infty}{\longrightarrow}} \mathcal{N}\left(0, V(u)^{-1}I(u)V(u)^{-1}\right).
\end{gather}
\end{Theorem}
\begin{proof}
See Section \ref{sec7-2}.
\end{proof}
\begin{Remark}\label{remark:asymptoticnormalityfinitememory}
If the contrast function $\Phi$ is of finite memory (see Remark \ref{remark:consistencyfinitememory}), Proposition \ref{proposition:inheritanceproperties} and the obvious analog of \cite[Proposition 3.4]{CS2018} for our $\theta$-weak dependence coefficient show that condition (e) is implied by
\begin{enumerate}[label={(\alph*)}]
\item[(e$^*$)] $\nabla_\vartheta \Phi(0,\vartheta^*)=0$, $\frac{\partial}{\partial\vartheta_i} \Phi(\cdot,\vartheta^*)\in\mathcal{L}_{n+1}(M_1,C_1)$ for some $C_1,M_1\geq0$ and $\tilde{Y}_u$ is $\theta$-weakly dependent with $\theta$-coefficients $\theta_{\tilde{Y}_u}(h)\in\mathcal{O}(h^{-\alpha})$ for some $\alpha>(1+\frac{M_1+1}{\gamma_1})(\frac{1+2M_1+\gamma_1}{1+M_1+\gamma_1})$, where $\gamma_1>0$ such that $\tilde{Y}_u(0)\in L^{2(M_1+1)+\gamma_1}$.
\end{enumerate}
If, in addition, \hyperref[observations:O2]{(O2)} holds, condition (g) is implied by
\begin{enumerate}[label={(\alph*)}]
\item[(g1$^*$)] $\Phi(x,\vartheta)\leq C_2\norm{x}^{M_2+1}$ and $\Phi(\cdot,\vartheta)\in\mathcal{L}_{n+1}(M_2,C_2)$ for some $M_2,C_2\geq0$ and all $x\in\mathbb{R}^{n+1}$, $\vartheta\in\Theta$. Moreover, $\tilde{Y}_u$ is $\theta$-weakly dependent and $\tilde{Y}_u\in L^{(q\vee (M_2+1))+\gamma_2}$ for some $\gamma_2>0$.
\item[(g2$^*$)] $\frac{\partial^2}{\partial\vartheta_i\partial\vartheta_j}\Phi(x,\vartheta)\leq C_3\norm{x}^{M_3+1}$ and $\frac{\partial^2}{\partial\vartheta_i\partial\vartheta_j}\Phi(\cdot,\vartheta)\in\mathcal{L}_{n+1}(M_3,C_3)$ for some $M_3,C_3\geq0$ and all $x\in\mathbb{R}^{n+1}$, $i,j=1,\ldots,d$ and $\vartheta\in\Theta$. Moreover, $\tilde{Y}_u$ is $\theta$-weakly dependent and $\tilde{Y}_u\in L^{(\bar{q}\vee (M_3+1))+\gamma_3}$ for some $\gamma_3>0$.
\end{enumerate}
\end{Remark}
\section{Least squares estimation for time-varying L\'evy-driven Orn\-stein-Uhlenbeck processes}
\label{sec4}
In this section, we establish consistency and asymptotic normality of an $M$-estimator using results from Section \ref{sec3} for a least squares contrast. The observations are assumed to be sampled according to Assumption \ref{assumption:observations} from a sequence of time-varying L\'evy-driven Ornstein-Uhlenbeck processes, which possesses a locally stationary approximation.\\
Before we give the definition of (time-varying) L\'evy-driven Ornstein-Uhlenbeck processes, we review L\'evy processes and discuss basic results including stochastic integration with respect to them. For further insight we refer to \cite{A2009} and \cite{S2014}.
\subsection{L\'evy processes and stochastic integration}
\label{sec4-1}
\begin{Definition}
A real-valued stochastic process $L=\{L(t),t\in \mathbb{R}_0^+\}$ is called L\'evy process if
\begin{enumerate}[label={(\alph*)}]
\item $L(0)=0$ almost surely,
\item the random variables $(L(t_0),L(t_1)-L(t_0),\dots,L(t_n)-L(t_{n-1}))$ are independent for any $n\in\mathbb{N}$ and $t_0<t_1<t_2<\dots<t_n$,
\item for all $s,t \geq 0$, the distribution of $L(s+t)-L(s)$ does not depend on $s$ and
\item $L$ is stochastically continuous.
\end{enumerate}
Without loss of generality we additionally consider $L$ to be c\`adl\`ag, i.e. right continuous with finite left limits.
\end{Definition}
Let $L=\{L(t),t\in \mathbb{R}_0^+\}$ be a real-valued L\'evy process. Then, $L(1)$ is an infinitely divisible real-valued random variable with characteristic triplet $(\gamma,\Sigma,\nu)$, where $\gamma \in \mathbb{R}$, $\Sigma>0$ and $\nu$ is a L\'evy measure on $\mathbb{R}$, i.e. $\nu(\{0\})=0$ and $\int_{\mathbb{R}}\left(1\wedge\normabs{x}^2\right)\nu(dx)<\infty$. The characteristic function of $L(t)$ is given by $\varphi_{L(t)}(z)=E[e^{izL(t)}]=e^{t \Psi_{L}(z)}$ with characteristic exponent $\Psi_L(z)= i\gamma z -\frac{\Sigma z^2}{2}+\int_{\mathbb{R} } (e^{i z x}-1-i z x \mathbb{1}_{Z}(x))\nu(dx)$, $z \in \mathbb{R}$ and $Z=\{x \in \mathbb{R}, \normabs{x}\leq 1\}$. If $\nu$ has a finite second moment, i.e.
\begin{gather}\label{eq:condlevyproc}
\int_{\normabs{x}>1}\normabs{x}^2\nu(dx)<\infty \left(\iff\int_{\mathbb{R}}\normabs{x}^2\nu(dx)<\infty\right),
\end{gather}
then $L(t)\in L^2$ for all $t\geq0$ and we have $E[L(t)]=t \left(\gamma+\int_{\normabs{x}>1}x\nu(dx)\right)<\infty$ and $Var(L(t))=t \left(\Sigma +\int_{\mathbb{R}}x^2\nu(dx)\right)<\infty$.
In the remainder we work with two-sided L\'evy processes, i.e. $L(t)=L_1(t)\mathbb{1}_{\{t\geq0\}} - L_2(-t)\mathbb{1}_{\{t<0\}}$, where $L_1$ and $L_2$ are independent copies of a one-sided L\'evy process. Consider
\begin{gather}\label{eq:stochint}
X(t)=\int_\mathbb{R} f(t,s) L(ds),
\end{gather}
where $t\in\mathbb{R}$ and $f:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}^n$ is $\mathcal{B}(\mathbb{R}\times\mathbb{R})-\mathcal{B}(\mathbb{R}^n)$ measurable.
Necessary and sufficient conditions for the stochastic integral (\ref{eq:stochint}) to exist are given in \cite[Theorem 3.3]{S2014}, namely if
\begin{itemize}
\item $\Sigma\int_\mathbb{R} \norm{f(t,s)f(t,s)'}ds<\infty$,
\item $\int_\mathbb{R} \int_{\mathbb{R}}\left((\norm{f(t,s)}x)^2\wedge 1 \right)\nu(dx)ds<\infty$ and
\item $\int_\mathbb{R} \norm{f(t,s)\left( \gamma+\int_{\mathbb{R}}x\left(\mathbb{1}_{[0,1]} \left(\norm{f(t,s)x}\right)- \mathbb{1}_{[0,1]}\left( \normabs{x}\right)\right)\nu(dx) \right)}ds<\infty$
\end{itemize}
are satisfied, then (\ref{eq:stochint}) is well-defined. If $L$ satisfies (\ref{eq:condlevyproc}) and $f(t,\cdot)\in L^1(\mathbb{R})\cap L^2(\mathbb{R})$, then the above conditions are satisfied and the integral $X(t)=\int_\mathbb{R} f(t,s)L(ds)$ exists in $L^2$. If $X=\{X(t), t\in\mathbb{R}\}$ with $X(t)$ as in (\ref{eq:stochint}) is well-defined, $X(t)$ is infinitely divisible with characteristic triplet $(\gamma_{int},\Sigma_{int},\nu_{int})$, where
\begin{itemize}
\item $\gamma_{int}=\int_{\mathbb{R}}f(t,s)\gamma\, ds+ \int_\mathbb{R}\int_{\mathbb{R}}f(t,s)x\left(\mathbb{1}_{[0,1]}(\norm{f(t,s)x})-\mathbb{1}_{[0,1]}(\normabs{x})\right)\nu(dx)\,ds$,
\item $\Sigma_{int}=\Sigma\!\int_{\mathbb{R}}f(t,s)f(t,s)'ds$ and
\item $\nu_{int}(B)= \int_{\mathbb{R}}\int_{\mathbb{R}} \mathbb{1}_B(f(t,s)x)\nu(dx) ds$,\qquad $B\in\mathcal{B}(\mathbb{R}^n)$.
\end{itemize}
The following proposition shows that infinite memory transformations of L\'evy-driven moving average processes, i.e. processes of the form (\ref{eq:stochint}) whose kernel satisfies $f(t,s)=f(t-s)$, are $\theta$-weakly dependent.
\begin{Proposition}\label{proposition:infinitememorymovingaverage}
Let $L$ be a two-sided L\'evy process satisfying (\ref{eq:condlevyproc}) and $\gamma+\int_{\abs{x}>1}x\nu(dx)=0$, and let $g:\mathbb{R}^+\times\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $g(u,\cdot)\in L^1(\mathbb{R})\cap L^q(\mathbb{R})$ for all $u\in\mathbb{R}^+$ and some $q\in\{2,4\}$. For fixed $u\in\mathbb{R}^+$ we define the process $X_u=\{X_u(t),t\in\mathbb{R}\}$ as
\begin{align}\label{equation:infinitememorylevydrivenmovingaverage}
X_u(t)=\int_{-\infty}^t g(u,t-s)L(ds).
\end{align}
Consider an $\mathbb{R}^n$-valued function $\varphi\in\mathcal{L}_\infty^{p,q}(\alpha)$, where $p\geq1$, and define for some $\Delta>0$ the infinite memory vector $Z_u(t)=\left(X_u(t+\Delta),X_u(t),X_u(t-\Delta),\ldots\right)$. Then, the process $\varphi(Z_u(t))$ is $\theta$-weakly dependent with $\theta$-coefficients satisfying
\begin{align}\label{equation:thetacoefficientinequality}
\begin{aligned}
q&=2:\quad\theta_{\varphi(Z_u)}(h)\leq C\sum_{k=\left\lfloor\frac{h}{2\Delta} \right\rfloor}^\infty \alpha_k+C\left(\Sigma_L\int_{-\infty}^{-\frac{h}{2}}g(u,-s)^2ds\right)^{\frac{1}{2}} =\hat{\theta}_{\varphi(Z_u)}(h)\\
q&=4:\quad \theta_{\varphi(Z_u)}(h)\leq C\sum_{k=\left\lfloor\frac{h}{2\Delta} \right\rfloor}^\infty \alpha_k+C\Bigg(\int_{-\infty}^{-\frac{h}{2}}g(u,-s)^4ds\left(\int_{\mathbb{R}}x^4\nu(dx)\right)\\
&\qquad\qquad\qquad\qquad\qquad\qquad\quad\ \ \ + 3\Sigma_L^2\bigg(\int_{-\infty}^{-\frac{h}{2}}g(u,-s)^2ds\bigg)^2\Bigg)^{\frac{1}{4}} =\hat{\theta}_{\varphi(Z_u)}(h)
\end{aligned}
\end{align}
for a constant $C\geq0$ and all $h\geq1$.
\end{Proposition}
\begin{proof}
See Section \ref{sec7-3}.
\end{proof}
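For concrete kernels the bound (\ref{equation:thetacoefficientinequality}) can be evaluated in closed form. As an illustration (the constants $C=\Sigma_L=a=\Delta=1$, the geometric choice $\alpha_k=\rho^k$ and the exponential kernel $g(u,s)=\mathbb{1}_{\{s\geq0\}}e^{-as}$ are our own example, not taken from the paper), the $q=2$ bound reads $\hat\theta(h)=C\rho^{\lfloor h/(2\Delta)\rfloor}/(1-\rho)+C\,(\Sigma_L e^{-ah}/(2a))^{1/2}$, which decays geometrically in $h$, so \hyperlink{DD}{DD($\varepsilon$)} holds for every $\varepsilon>0$:

```python
import math

def theta_bound(h, a=1.0, rho=0.5, delta=1.0, sigma_l=1.0, c=1.0):
    # q = 2 bound: tail sum of the geometric sequence alpha_k = rho^k plus the
    # closed-form tail integral of g(u, -s)^2 = exp(2*a*s) over (-inf, -h/2]
    tail_alpha = rho ** math.floor(h / (2 * delta)) / (1 - rho)
    tail_int = sigma_l * math.exp(-a * h) / (2 * a)
    return c * tail_alpha + c * math.sqrt(tail_int)

vals = [theta_bound(h) for h in range(1, 31)]
```

The computed bounds decrease geometrically, e.g. `vals[-1]` is several orders of magnitude below `vals[0]`.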
\begin{Corollary}\label{corollary:thetaweakdependenceinfinitememory}
Consider $Y_N$, $\tilde{Y}_u$ and $\Phi$ as in Theorem \ref{theorem:consistency} for $q=2$ where $\Phi$ is of infinite memory. Then, condition (c) from Theorem \ref{theorem:consistency} is implied by
\begin{enumerate}
\item[(c$^{**}$)] $\tilde{Y}_u$ satisfies the conditions of Proposition \ref{proposition:infinitememorymovingaverage}.
\end{enumerate}
\end{Corollary}
\begin{Corollary}
Consider $Y_N$, $\tilde{Y}_u$ and $\Phi$ as in Theorem \ref{theorem:asymptoticnormality}, where $q,\bar{q}=2$, $\tilde{q}=4$ and $\Phi$ is of infinite memory. Then, the conditions (e) and (g) are respectively implied by
\begin{enumerate}
\item[(e$^{**}$)] $\nabla_\vartheta \Phi(t)\in L^{2+\gamma_1}$ for some $\gamma_1>0$ and $\tilde{Y}_u$ satisfies the conditions of Proposition \ref{proposition:infinitememorymovingaverage}. Moreover, $\hat{\theta}_{\nabla_\vartheta \Phi(t)}(h)$ from (\ref{equation:thetacoefficientinequality}) satisfies \hyperlink{DD}{DD($\gamma_1$)} and
\item[(g$^{**}$)] $\tilde{Y}_u$ satisfies the conditions of Proposition \ref{proposition:infinitememorymovingaverage}.
\end{enumerate}
\end{Corollary}
\subsection{Time-varying L\'evy-driven Ornstein-Uhlenbeck processes}
\label{sec4-2}
We consider a sequence of time-varying L\'evy-driven Ornstein-Uhlenbeck processes
\begin{align}\label{eq:tvcar}
\begin{aligned}
Y_N(t)&=\int_{-\infty}^{\infty}g_N(Nt,Nt-u)L(du), \text{ with kernel function}\\
g_N(Nt,Nt-u)&=\mathbb{1}_{\{Nt-u\geq0\}} e^{-\int_{u}^{Nt}a\left(\frac{s}{N}\right)ds}= \mathbb{1}_{\{Nt-u\geq0\}} e^{-\int_{-(Nt-u)}^0a\left(\frac{s+Nt}{N}\right)ds},
\end{aligned}
\end{align}
where $a:\mathbb{R}\rightarrow\mathbb{R}_0^+$ is continuous such that $u\mapsto e^{-\int_{-u}^{0}a\left(\frac{s+Nt}{N} \right)ds}\in L^1(\mathbb{R}^+)$ for all $t\in\mathbb{R}$ and $N\in\mathbb{N}$, which ensures the existence of (\ref{eq:tvcar}), since additionally (\ref{eq:condlevyproc}) holds. In the next proposition we review sufficient conditions under which the sequence (\ref{eq:tvcar}) possesses a locally stationary approximation $\tilde{Y}_u$ for $p=2$ and $p=4$ given by
\begin{align}\label{eq:locapproxcar}
\begin{aligned}
\tilde{Y}_{u}(t)&=\int_\mathbb{R} g(u,t-s)L(ds), \text{ with kernel function}\\
g(u,t-s)&=\mathbb{1}_{\{t-s\geq0\}} e^{-a(u)(t-s)}.
\end{aligned}
\end{align}
\begin{Remark}
The process $\tilde{Y}_{u}$ from (\ref{eq:locapproxcar}) is the unique stationary solution to the stochastic differential equation
\begin{align*}
d\tilde{Y}_{u}(t)=-a(u)\tilde{Y}_{u}(t)dt+L(dt).
\end{align*}
\end{Remark}
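This stationary solution can be checked against a simple Euler scheme for the SDE above. A minimal sketch, using Brownian motion as the driving L\'evy process and a constant coefficient $a$ purely as an illustrative special case (step size, horizon and seed are arbitrary choices of ours): for constant $a>0$ and $\Sigma=1$ the stationary variance of $\tilde{Y}_u$ is $1/(2a)$.

```python
import math
import random

def simulate_ou(a, total_time=2000.0, dt=0.01, seed=7):
    # Euler scheme for d Y(t) = -a * Y(t) dt + dL(t) with Brownian L:
    # Y_{k+1} = Y_k - a * Y_k * dt + N(0, dt) increment
    rng = random.Random(seed)
    n = int(total_time / dt)
    y, path = 0.0, []
    sd = math.sqrt(dt)
    for _ in range(n):
        y += -a * y * dt + rng.gauss(0.0, sd)
        path.append(y)
    return path

path = simulate_ou(a=1.0)
mean = sum(path) / len(path)
var = sum((y - mean) ** 2 for y in path) / len(path)
```

For $a=1$ the empirical variance should be close to $1/(2a)=0.5$, up to Monte Carlo and discretization error.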
\begin{Proposition}[{\cite[Proposition 5.3]{SS2021}}]\label{proposition:car1locstat}
Let $Y_N$ be a sequence of time-varying L\'evy-driven Ornstein-Uhlenbeck processes as given in (\ref{eq:tvcar}) such that (\ref{eq:condlevyproc}) holds. Then, $\tilde{Y}_{u}$ as given in (\ref{eq:locapproxcar}) is a locally stationary approximation of $Y_N$ for $p=2$, if
\begin{enumerate}[label={(\alph*)}]
\item the coefficient function $a$ is Lipschitz with constant $L$,
\item $\inf_{s\in\mathbb{R}}a(s)>0$.
\end{enumerate}
If additionally $\int_{\abs{x}>1}x^4\nu(dx)<\infty$, then $\tilde{Y}_{u}$ is also a locally stationary approximation of $Y_N$ for $p=4$.
\end{Proposition}
\subsection{Least squares estimation}
\label{sec4-3}
Let us assume that we observe a sequence of time-varying L\'evy-driven Ornstein-Uhlenbeck processes $Y_N$ as defined in (\ref{eq:tvcar}), where the characteristic triplet of the driving L\'evy process $L$ is known and the observations are sampled according to Assumption \ref{assumption:observations} such that either \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} holds. \\% for some $\delta>0$.\\
Our goal is to estimate the coefficient function at a fixed point $u>0$, i.e. to estimate $a(u)$. To this aim we assume that $a(u)\in\Theta\subset\mathbb{R}^+$, where $\Theta$ is a compact parameter space.\\
For $\vartheta\in\Theta$ and $\Delta>0$ we define the following least squares contrast
\begin{gather}\label{equation:tvCARleastsquaresestimator}
\Phi^{LS}\Big((\tilde{Y}_u(\Delta (1-k)))_{k\in\mathbb{N}_0},\vartheta\Big)=\Phi^{LS}\Big(\tilde{Y}_u(\Delta),\tilde{Y}_u(0),\vartheta\Big)=\Big(\tilde{Y}_u(\Delta)-e^{-\Delta\vartheta}\tilde{Y}_u(0)\Big)^2.
\end{gather}
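For this contrast the localized minimization admits a closed form: differentiating $\vartheta\mapsto\sum_i w_i\,(y_i^+ - e^{-\Delta\vartheta}y_i)^2$ and setting the derivative to zero gives $e^{-\Delta\hat\vartheta}=\sum_i w_i y_i^+ y_i\big/\sum_i w_i y_i^2$, whenever this ratio is positive. A minimal sketch (function and variable names are our own):

```python
import math

def ls_rate(pairs, weights, delta):
    # closed-form least squares minimizer: exp(-delta * theta_hat) equals
    # sum_i w_i * y_plus_i * y_i / sum_i w_i * y_i^2 (assumed positive)
    num = sum(w * yp * y for (y, yp), w in zip(pairs, weights))
    den = sum(w * y * y for (y, yp), w in zip(pairs, weights))
    return -math.log(num / den) / delta

# noiseless check: observations with y_plus = exp(-a*delta) * y recover a exactly
a, delta = 0.7, 0.5
pairs = [(y, math.exp(-a * delta) * y) for y in (1.0, -2.0, 0.5)]
est = ls_rate(pairs, [1.0, 1.0, 1.0], delta)
```

On noiseless pairs the estimator returns the true rate $a$ exactly.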
We show consistency and asymptotic normality of the estimator $\hat{\vartheta}_N$ from (\ref{eq:M_N}), defined with respect to the least squares contrast (\ref{equation:tvCARleastsquaresestimator}), using Theorem \ref{theorem:consistency} and \ref{theorem:asymptoticnormality}.
\begin{Theorem}\label{theorem:asymptoticpropertiesleastsquares}
Let $Y_N$ be a sequence of time-varying L\'evy-driven Ornstein-Uhlenbeck processes as given in (\ref{eq:tvcar}) such that (\ref{eq:condlevyproc}) and either \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} hold. Assume that
\begin{enumerate}[label={(\alph*)}]
\item $\gamma+\int_{\abs{x}>1}x\,\nu(dx)=0$,
\item the parameter space $\Theta\subset\mathbb{R}^+$ is compact,
\item the coefficient function $a$ is Lipschitz and satisfies $\inf_{s\in\mathbb{R}}a(s)>0$ and
\item if \hyperref[observations:O2]{(O2)} holds, $\int_{\abs{x}>1}\abs{x}^{2+\gamma_1}\nu(dx)<\infty$ for some $\gamma_1>0$.
\end{enumerate}
Then, $\hat{\vartheta}_N$ is consistent, i.e. $\hat{\vartheta}_N\overset{P}{\longrightarrow}a(u)$ as $N\rightarrow\infty$.
Moreover, if
\begin{enumerate}[label={(\alph*)}]
\item[(d)] $\int_{\abs{x}>1}x^{4+\gamma_2}\nu(dx)<\infty$ for some $\gamma_2>0$,
\item[(e)] $a(u)$ is in the interior of $\Theta$ and $\sqrt{m_N}b_N\rightarrow0$ as $N\rightarrow\infty$ and
\item[(f)] $\hat{\vartheta}_N$ is defined with respect to the rectangular kernel (\ref{equation:rectangularkernel}),
\end{enumerate}
then $\sqrt{\frac{b_N}{\delta_N}} \left(\hat{\vartheta}_N-a(u) \right)\overset{d}{\longrightarrow} \mathcal{N}\left(0, \Sigma(u)\right)$ as $N\rightarrow\infty$, where
\begin{gather}\label{eq:asymptoticvarianceleastsquarestvCAR1}
\Sigma(u)=\frac{1}{2\Delta^2}\begin{cases} e^{2a(u)\Delta}+2e^{2a(u)\Delta}e^{-2a(u)\delta}\frac{1-e^{-2a(u)\delta(\left\lceil\Delta/\delta \right\rceil-1)}}{1-e^{-2a(u)\delta}}-2\left\lceil\Delta/\delta \right\rceil+1, &\text{if \hyperref[observations:O1]{(O1)} holds},\\
e^{2a(u)\Delta}-1,&\text{if \hyperref[observations:O2]{(O2)} holds}.
\end{cases}
\end{gather}
Recall that $\Delta$ denotes the step size in the contrast function (\ref{equation:contrastfunctioninfinitememory}) and $\delta=N\delta_N$ for the observation scheme \hyperref[observations:O1]{(O1)}. In particular, if $\delta=\Delta$, the asymptotic variance is given by $\Sigma(u)= \frac{1}{2\Delta^2}(e^{2a(u)\Delta}-1)$, independent of whether \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} holds.
\end{Theorem}
\begin{proof}
See Section \ref{sec7-4}.
\end{proof}
\begin{Remark}\label{remark:consistentestimator}
A consistent plug-in estimator of $\Sigma(u)$ can be readily obtained by replacing $a(u)$ in (\ref{eq:asymptoticvarianceleastsquarestvCAR1}) with $\hat\vartheta_N$.
\end{Remark}
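Both branches of (\ref{eq:asymptoticvarianceleastsquarestvCAR1}) are elementary to evaluate, and the stated agreement at $\delta=\Delta$ can be verified directly. A small sketch (the function name and argument conventions are our own; `delta=None` plays the role of scheme (O2)):

```python
import math

def sigma_ls(a_u, Delta, delta=None):
    # asymptotic variance of the localized least squares estimator;
    # delta=None corresponds to the observation scheme (O2)
    if delta is None:
        return (math.exp(2 * a_u * Delta) - 1) / (2 * Delta ** 2)
    m = math.ceil(Delta / delta)  # ceil(Delta / delta) from the (O1) branch
    e = math.exp(2 * a_u * Delta)
    geom = (1 - math.exp(-2 * a_u * delta * (m - 1))) / (1 - math.exp(-2 * a_u * delta))
    return (e + 2 * e * math.exp(-2 * a_u * delta) * geom - 2 * m + 1) / (2 * Delta ** 2)
```

For $\delta=\Delta$ the geometric term vanishes and the two branches coincide, as claimed in the theorem.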
\section{Quasi-maximum likelihood and Whittle estimation for time-varying L\'evy-driven state space models}
\label{sec5}
We consider observations of a time-varying L\'evy-driven state space model that follow the sampling scheme from Assumption \ref{assumption:observations} such that either \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} holds. In this section, we derive consistency results for an $M$-estimator that is based on a log-likelihood contrast. In addition, we establish consistency for a localized Whittle estimator.
\subsection{Time-varying L\'evy-driven state space models}
\label{sec5-1}
We give a brief review of the definition and basic properties of time-varying L\'evy-driven state space models. For further details we refer to \cite{BSS2021,SS2021}.\\
Let $L=\{L(t),t\in\mathbb{R}\}$ be a two-sided L\'evy process with values in $\mathbb{R}$ satisfying (\ref{eq:condlevyproc}). For $p\in\mathbb{N}$ and arbitrary continuous coefficient functions $A(t)\in M_{p\times p}(\mathbb{R})$ and $B(t),C(t)\in M_{p\times1}(\mathbb{R})$, $t\in\mathbb{R}$ we consider the
observation and state equation
\begin{align}\label{eq:tvLDstatespace}
\begin{aligned}
Y(t)=B(t)'X(t) \text{ and}\qquad
dX(t)=A(t)X(t)dt+C(t)L(dt).
\end{aligned}
\end{align}
The solution of (\ref{eq:tvLDstatespace}) is unique and given by (see \cite[Section 4]{BSS2021})
\begin{align*}
X(t)=\int_{-\infty}^t\Psi(t,s)C(s)L(ds)\text{ and }Y(t)=B(t)'\int_{-\infty}^t\Psi(t,s)C(s)L(ds),
\end{align*}
provided that the integrals exist in $L^2$. The matrix $\Psi(t,t_0)$ for $t>t_0$ is the unique solution to the homogeneous initial value problem (IVP)
\begin{align}\label{eq:ivpA}
\begin{aligned}
\frac{d}{dt}\Psi(t,t_0)&=A(t)\Psi(t,t_0),\quad\text{with initial condition}\quad\Psi(t_0,t_0)=\mathbf{1}_p.
\end{aligned}
\end{align}
A comprehensive discussion on the IVP (\ref{eq:ivpA}) can be found in \cite[Section 3 and 4]{B1970} and in the context of L\'evy-driven state space models in \cite[Section 5.2 and 5.3]{SS2021}.
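For time-invariant $A$ the solution of (\ref{eq:ivpA}) is $\Psi(t,t_0)=e^{A(t-t_0)}$, which gives a convenient sanity check for numerical solvers of the IVP. A minimal forward Euler sketch (our own illustration; a production solver would use a higher-order scheme):

```python
import math

def propagator(A, t0, t1, steps=20000):
    # forward Euler for d/dt Psi(t, t0) = A(t) Psi(t, t0), Psi(t0, t0) = I
    n = len(A(t0))
    psi = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    h = (t1 - t0) / steps
    for k in range(steps):
        At = A(t0 + k * h)
        # psi <- psi + h * A(t) @ psi  (plain nested-list matrix product)
        psi = [[psi[i][j] + h * sum(At[i][l] * psi[l][j] for l in range(n))
                for j in range(n)] for i in range(n)]
    return psi

# constant rotation generator: exact solution Psi(t, 0) = [[cos t, sin t], [-sin t, cos t]]
rot = lambda t: [[0.0, 1.0], [-1.0, 0.0]]
psi = propagator(rot, 0.0, 1.0)
```

At $t=1$ the Euler propagator agrees with the matrix exponential to within the discretization error.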
\begin{Definition}
Let $X=\{X(t),t\in\mathbb{R}\}$ be a solution to the state space representation (\ref{eq:tvLDstatespace}). Then, we call $X$ a time-varying L\'evy-driven state space process. If the coefficient functions $A,B$ and $C$ are time-invariant, the solution is called a L\'evy-driven state space process.
\end{Definition}
\begin{Remark}
Notable examples from the class of time-varying L\'evy-driven state space models are time-varying L\'evy-driven CARMA processes, for which the matrix function $C(t)$ is time-invariant, i.e. $C(t)=C$ for all $t\in\mathbb{R}$, and the matrix function $A(t)$ is of the form
\begin{align*}
A(t)&=\left( \begin{array}{cccc}
0 & 1 & \ldots & 0 \\
\vdots & ~ & \ddots & \vdots \\
0 & ~ & ~ & 1 \\
-a_p(t) & -a_{p-1}(t) & \ldots & -a_1(t) \\
\end{array}\right),
\end{align*}
where $a_i(t)$, $i=1,\ldots,p$ are continuous real functions.
\end{Remark}
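The companion structure of $A(t)$ ties its eigenvalues to the autoregressive polynomial $z^p+a_1(t)z^{p-1}+\cdots+a_p(t)$: for any root $\lambda$, the vector $(1,\lambda,\ldots,\lambda^{p-1})'$ is an eigenvector. A small sketch for constant coefficients (our own illustrative check; names are hypothetical):

```python
def companion(coeffs):
    # companion matrix for z^p + a_1 z^{p-1} + ... + a_p, coeffs = [a_1, ..., a_p]
    p = len(coeffs)
    A = [[0.0] * p for _ in range(p)]
    for i in range(p - 1):
        A[i][i + 1] = 1.0            # superdiagonal of ones
    A[p - 1] = [-c for c in reversed(coeffs)]  # last row: -a_p, ..., -a_1
    return A

def matvec(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

# z^2 + 3z + 2 has roots -1 and -2; check the eigenpair for lambda = -1
A = companion([3.0, 2.0])
lam, v = -1.0, [1.0, -1.0]
Av = matvec(A, v)
```

The product `Av` equals `lam * v` componentwise, confirming the eigenpair.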
Now, for continuous coefficient functions $A(t)\in M_{p\times p}(\mathbb{R})$ and $B(t),C(t)\in M_{p\times1}(\mathbb{R})$, $t\in\mathbb{R}$ and a two-sided L\'evy process $L$, we consider a sequence $Y_N$ of time-varying L\'evy-driven state space models, where
\begin{align}\label{eq:seqtvLDstatespacesolution}
\begin{aligned}
Y_N(t)&=\int_\mathbb{R} g_N(Nt,Nt-s)L(ds), \text{ with kernel function}\\
g_N(Nt,Nt-s)&= \mathbb{1}_{\{Nt-s\geq0\}} B(t)' \Psi_{N,t}(0,-(Nt-s))C\left(\frac{-(Nt-s)}{N}+t \right),
\end{aligned}
\end{align}
where $t\in\mathbb{R}$ and $\Psi_{N,t}(0,-(Nt-s))$ is the solution to the IVP
\begin{gather*}
\frac{d}{ds}\Psi_{N,t}(s,s_0)=A\left(\frac{s}{N}+t\right)\Psi_{N,t}(s,s_0),\text{ with initial condition}\quad\Psi_{N,t}(s_0,s_0)= \mathbf{1}_p
\end{gather*}
for $s>s_0$.
We assume that $A(u)$, $u\in\mathbb{R}^+$ has eigenvalues with strictly negative real part and
consider as corresponding locally stationary approximation $\tilde{Y}_{u}$ the process
\begin{align}\label{eq:seqlimLDstatespacesolution}
\begin{aligned}
\tilde{Y}_{u}(t)&= \int_\mathbb{R} g(u,t-s)L(ds), \text{ with kernel function}\\
g(u,t-s)&= \mathbb{1}_{\{t-s\geq0\}} B(u)' e^{A(u)(t-s)}C\left(u \right).
\end{aligned}
\end{align}
\begin{Proposition}[{\cite[Corollary 5.13]{SS2021}}]\label{proposition:tvLDSPhavealocstatapprox}
Let $Y_N$ be a sequence of time-varying L\'evy-driven state space models as given in (\ref{eq:seqtvLDstatespacesolution}).
Then, $\tilde{Y}_{u}$ as given in (\ref{eq:seqlimLDstatespacesolution}) is a locally stationary approximation of $Y_N$ for $p=2$, if
\begin{enumerate}[label={(\alph*)}]
\item the coefficient functions $A,B$ and $C$ are Lipschitz with constants $L_A,L_B$ and $L_C$,
\item $\sup_{s\in\mathbb{R}}\norm{B(s)}<\infty$ and $\sup_{s\in\mathbb{R}}\norm{C(s)}<\infty$,
\end{enumerate}
and either (c1)-(e1) or (c1), (d2) and (e2) hold, where
\begin{enumerate}[label={(\alph*)}]
\item[(c1)] $\{A(t)\}_{t\in\mathbb{R}}$ commutes, i.e. $[A(t),A(s)]=0$ for all $s,t\in\mathbb{R}$,
\item[(d1)] $\norm{e^{\nu\int_s^0A(\frac{\tau}{N}+t)d\tau}}\leq \gamma e^{\nu s \lambda}$, with $\gamma,\lambda>0$ for all $\nu\in[0,1]$, $s<0$, $t\in\mathbb{R}$ and $N\in\mathbb{N}$ and
\item[(e1)] $\norm{e^{-\nu A(t)s}}\leq \tilde\gamma e^{\nu s \tilde\lambda}$, with $\tilde\gamma,\tilde\lambda>0$ for all $\nu\in[0,1]$, $s<0$ and $t\in\mathbb{R}$,
\item[(d2)] the eigenvalues $\lambda_j(t)$ of $A(t)$ for $j=1,\ldots,p$ satisfy $\sup_{t\in\mathbb{R}}\max_{j=1,\ldots,p}\mathfrak{Re}(\lambda_j(t))<0$.
\item[(e2)] $A(t)$ is diagonalizable for all $t\in\mathbb{R}$.
\end{enumerate}
If we additionally assume that $\int_{\mathbb{R}}x^4\nu(dx)<\infty$, then $\tilde{Y}_{u}$ is also a locally stationary approximation of $Y_N$ for $p=4$.
\end{Proposition}
\subsection{Quasi-maximum likelihood estimation}
\label{sec5-2}
Let $\Theta\subset\mathbb{R}^d$ be a compact parameter space and $\vartheta^*=\{\vartheta^*(t), t\in\mathbb{R}\}$ a parameter curve in $\Theta$. Moreover, let $(A_{\vartheta})_{\vartheta\in\Theta}\subset M_{p\times p}(\mathbb{R})$ and $(B_\vartheta)_{\vartheta\in\Theta},(C_\vartheta)_{\vartheta\in\Theta}\subset M_{p\times 1}(\mathbb{R})$ be families of matrices, $(\Sigma_\vartheta)_{\vartheta\in\Theta}$ a family of positive numbers, and $(L_\vartheta)_{\vartheta\in\Theta}$ a family of L\'evy processes such that $Var(L_\vartheta(1))=\Sigma_\vartheta$.\\
Consider a sequence of time-varying L\'evy-driven state space models $(Y_N^{\vartheta^*}(t))_{N\in\mathbb{N}}$ such that $Y_N^{\vartheta^*}(t)$ is defined as in (\ref{eq:seqtvLDstatespacesolution}) with coefficient functions $A(t)=A_{\vartheta^*(t)},B(t)=B_{\vartheta^*(t)}$, $C(t)=C_{\vartheta^*(t)}$ and driving noise $L=L_{\vartheta^*(0)}$, such that $Var(L(1))=\Sigma_{\vartheta^*(0)}$.\\
Based on the families $A_\vartheta,B_\vartheta$, $C_\vartheta$ and $L_\vartheta$ from above we define a family of processes $(\tilde{Y}^\vartheta(t))_{\vartheta\in\Theta}$, where
\begin{gather}\label{equation:statparameterized}
\tilde{Y}^\vartheta(t)=\int_{-\infty}^t B_\vartheta' e^{A_\vartheta(t-s)}C_\vartheta L_\vartheta(ds).
\end{gather}
The following assumption is crucial for all results that we derive in the sequel.
\begin{manualassumption}{(C0)}\label{assumption:C0}
We assume that $\tilde{Y}_u(t)=\tilde{Y}^{\vartheta^*(u)}(t)$ is a locally stationary approximation of $Y_N^{\vartheta^*}(t)$ for some $p\geq1$.
\end{manualassumption}
To obtain a consistent estimator for $\vartheta^*(u)$, $u\in\mathbb{R}^+$, we consider an $M$-estimator of a log-likelihood contrast $\Phi^{LL}$ and use results from Section \ref{sec3}. We derive the contrast $\Phi^{LL}$ using results from \cite{SS2012b}, where the authors investigated a related estimator in a stationary setting.
\begin{manualassumption}{(C1)}\label{assumption:C1}
For each $\vartheta\in\Theta$, it holds that $E[L_\vartheta(1)]=0$ and $E[L_\vartheta(1)^2]=\Sigma_\vartheta<\infty$. Additionally, we rule out the degenerate case where $E[L_\vartheta(1)^2]=0$.
\end{manualassumption}
\begin{manualassumption}{(C2)}\label{assumption:C2}
For each $\vartheta\in\Theta$, the eigenvalues of $A_\vartheta$ have strictly negative real parts.
\end{manualassumption}
Under the previous assumptions, for all $\vartheta\in\Theta$, $\tilde{Y}^\vartheta$ is the unique centered stationary solution to the observation and state equation
\begin{align}
\begin{aligned}\label{equation:sampledprocessY_u}
\tilde{Y}^\vartheta(t)&=B_\vartheta' X(t)\text{ and}\qquad
dX(t)=A_\vartheta X(t)dt+C_\vartheta L_\vartheta(dt),
\end{aligned}
\end{align}
i.e. a L\'evy-driven state space process.
Moreover, for $\Delta>0$ it follows from \cite[Proposition 3.6]{SS2012b} that the sampled process $(\tilde{Y}^{\vartheta,\Delta}(k))_{k\in\mathbb{Z}}$, where $\tilde{Y}^{\vartheta,\Delta}(k)=\tilde{Y}^\vartheta(\Delta k)$ satisfies the state space representation
\begin{align}
\begin{aligned}\label{equation:statespacerepresentationsampled}
\tilde{Y}^{\vartheta,\Delta}(k)=B_\vartheta'X(k) \text{ and}\qquad
X(k)&=e^{\Delta A_\vartheta}X(k-1)+N^{(\Delta)}_\vartheta(k),
\end{aligned}
\end{align}
where $N^{(\Delta)}_\vartheta(k)=\int_{(k-1)\Delta}^{k\Delta}e^{A_\vartheta(k\Delta-s)}C_\vartheta L_\vartheta(ds)$, $k\in\mathbb{Z}$. The sequence $(N^{(\Delta)}_\vartheta(k))_{k\in\mathbb{Z}}$ is i.i.d.\ with mean zero and covariance matrix
\begin{gather}\label{eq:variancesampledprocess}
\cancel{\Sigma}^{(\Delta)}_\vartheta=\Sigma_\vartheta \int_0^\Delta e^{A_\vartheta s}C_\vartheta C_\vartheta' e^{A_\vartheta's} ds.
\end{gather}
Moreover, the spectral density of $\tilde{Y}^{\vartheta,\Delta}$, denoted by $f_{\tilde{Y}}^{(\Delta)}(\omega,\vartheta)$, is given by
\begin{gather}\label{eq:spectraldensitysampledprocess}
f_{\tilde{Y}}^{(\Delta)}(\omega,\vartheta)=\frac{1}{2\pi}B_\vartheta'(e^{i\omega}\mathbf{1}_p-e^{\Delta A_\vartheta})^{-1} \cancel{\Sigma}^{(\Delta)}_\vartheta(e^{-i\omega}\mathbf{1}_p-e^{\Delta A_\vartheta'})^{-1}B_\vartheta, \quad \omega\in[-\pi,\pi].
\end{gather}
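A minimal numerical sketch of (\ref{eq:variancesampledprocess}) and (\ref{eq:spectraldensitysampledprocess}) in the scalar case $p=1$, where both expressions have closed forms; all parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical scalar (p = 1) model: a < 0 for stability.
a, b, c, Sigma, Delta = -0.8, 1.0, 0.5, 0.2, 1.0

# Sigma^(Delta) = Sigma * int_0^Delta e^{as} c^2 e^{as} ds, in closed form.
Sigma_Delta = Sigma * c**2 * (np.exp(2 * a * Delta) - 1) / (2 * a)

def f_sampled(w):
    """Spectral density of the Delta-sampled process, scalar case."""
    h = b / (np.exp(1j * w) - np.exp(Delta * a))  # b'(e^{iw} - e^{Delta a})^{-1}
    return (h * Sigma_Delta * np.conj(h)).real / (2 * np.pi)

w = np.linspace(-np.pi, np.pi, 201)
fw = f_sampled(w)  # nonnegative and symmetric in w
```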
Next, we review some aspects of Kalman filtering that are necessary to define the log-likelihood contrast $\Phi^{LL}$.
\begin{Proposition}[{\cite[Proposition 2.1]{SS2012b}}]\label{proposition:kalmanfilter}
Let $Y=\{Y_n,n\in\mathbb{Z}\}$ be the output process of the state space model $Y_n=B'X_n$ and $X_n=A X_{n-1}+Z_{n-1}$, $n\in\mathbb{Z}$, where $A\in M_{p\times p}(\mathbb{R})$, $B\in M_{p\times 1}(\mathbb{R})$ and $Z_n$ is an $\mathbb{R}^p$-valued centered i.i.d. sequence with covariance matrix $Q$. The linear innovations $\varepsilon=(\varepsilon_n)_{n\in\mathbb{Z}}$ of $Y$ are defined as $\varepsilon_n=Y_n-P_{n-1}Y_n$, where $P_n$ is the orthogonal projection onto $\overline{\text{span}}\{Y_k:k\leq n\}$ and the closure is taken in $L^2$. If the eigenvalues of $A$ are less than $1$ in absolute value, the following hold:
\begin{enumerate}[label={(\alph*)}]
\item The Riccati equation $\Omega=A\Omega A'+Q-(A\Omega B)(B'\Omega B)^{-1}(A\Omega B)'$ has a unique positive semidefinite solution $\Omega\in M_{p\times p}(\mathbb{R})$.
\item The eigenvalues of $A-KB'\in M_{p\times p}(\mathbb{R})$ have absolute value less than one, where $K=(A\Omega B)(B'\Omega B)^{-1}\in M_{p\times 1}(\mathbb{R})$ is the steady-state Kalman gain matrix.
\item The linear innovations $\varepsilon$ of $Y$ are the unique stationary solution to $\tilde{X}_n=(A-KB')\tilde{X}_{n-1}+KY_{n-1}$ and $\varepsilon_n=Y_n-B'\tilde{X}_n$, $n\in\mathbb{Z}$. Moreover, $\varepsilon_n$ can equivalently be written as
\begin{gather}\label{equation:varepsilon}
\varepsilon_n=Y_n-B'\sum_{\nu=1}^\infty (A-KB')^{\nu-1}KY_{n-\nu}.
\end{gather}
For the innovations variance $V=E[\varepsilon_n^2]$ we have $V=B'\Omega B$.
\end{enumerate}
\end{Proposition}
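The steady-state quantities $\Omega$, $K$ and $V$ of the proposition can be approximated by fixed-point iteration of the Riccati equation in (a). The sketch below does this for a hypothetical stable model with $p=2$; it is an illustration, not the estimation procedure of the paper.

```python
import numpy as np

# Hypothetical stable discrete-time state space model (p = 2).
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
Q = np.eye(2)  # covariance of the noise Z_n

# Fixed-point iteration of
# Omega = A Omega A' + Q - (A Omega B)(B' Omega B)^{-1}(A Omega B)'.
Omega = Q.copy()
for _ in range(1000):
    AOB = A @ Omega @ B
    Omega_next = A @ Omega @ A.T + Q - AOB @ AOB.T / (B.T @ Omega @ B)
    if np.max(np.abs(Omega_next - Omega)) < 1e-13:
        Omega = Omega_next
        break
    Omega = Omega_next

K = (A @ Omega @ B) / (B.T @ Omega @ B)   # steady-state Kalman gain
V = (B.T @ Omega @ B).item()              # innovations variance B' Omega B
rho = np.max(np.abs(np.linalg.eigvals(A - K @ B.T)))  # should be < 1 by (b)
```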
\noindent We apply the above proposition to the state space model (\ref{equation:statespacerepresentationsampled}) and obtain the parametrized matrices $\Omega_\vartheta,K_\vartheta$ and $V_\vartheta$. In addition to the assumptions \hyperref[assumption:C1]{(C1)} and \hyperref[assumption:C2]{(C2)}, we impose the following conditions.
\begin{manualassumption}{(C3)}\label{assumption:C3}
The parameter space $\Theta\subset\mathbb{R}^d$ is compact.
\end{manualassumption}
\begin{manualassumption}{(C4)}\label{assumption:C4}
The functions $\vartheta\mapsto A_\vartheta$, $\vartheta\mapsto B_\vartheta$, $\vartheta\mapsto C_\vartheta$ and $\vartheta\mapsto \Sigma_\vartheta$ are continuously differentiable and $B_\vartheta\neq0$ for all $\vartheta\in\Theta$.
\end{manualassumption}
\begin{Lemma}
Assume that \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C4]{(C4)} hold. Then, $ e^{\Delta A_\vartheta}$ has eigenvalues with absolute value less than $1$ and $V_\vartheta=B_\vartheta'\Omega_\vartheta B_\vartheta>C_V$ for a constant $C_V>0$ and all $\vartheta\in\Theta$.
\end{Lemma}
\begin{proof}
Follows from the proof of \cite[Lemma 2.2 and Lemma 3.14]{SS2012b}.
\end{proof}
Following the sensitivity analysis of the Riccati equation in \cite{S1998}, the degree of smoothness of $A_\vartheta,B_\vartheta,C_\vartheta$ and $ \Sigma_\vartheta$, namely $C^1$, carries over to the mapping $\vartheta\mapsto\Omega_\vartheta$. Therefore, from the previous assumptions it follows that the functions $\vartheta\mapsto A_\vartheta$, $\vartheta\mapsto B_\vartheta$, $\vartheta\mapsto C_\vartheta$, $\vartheta\mapsto \Sigma_\vartheta$, $\vartheta\mapsto K_\vartheta$ and $\vartheta\mapsto V_\vartheta$ are Lipschitz.\\
\noindent Under the conditions \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C4]{(C4)} we define for $u\in\mathbb{R}^+$, $\tilde{Y}^{\vartheta^*(u),\Delta}=(\tilde{Y}^{\vartheta^*(u),\Delta}(1-k))_{k\in\mathbb{N}_0}$ and $\tilde{Y}^{\vartheta^*(u),\Delta}(k)=\tilde{Y}^{\vartheta^*(u)}(\Delta k)$ as in (\ref{equation:statespacerepresentationsampled}) the log-likelihood contrast
\begin{gather}\label{equation:quasilogcontrast}
\Phi^{LL}\left( \tilde{Y}^{\vartheta^*(u),\Delta},\vartheta \right) =\log(2\pi)+\log(V_\vartheta)+\frac{\varepsilon_{\vartheta}^2\left(\tilde{Y}^{\vartheta^*(u),\Delta} \right)}{V_\vartheta},
\end{gather}
where $\varepsilon_{\vartheta}(\tilde{Y}^{\vartheta^*(u),\Delta})$ is given by the analogue of (\ref{equation:varepsilon}), i.e.
\begin{align}\label{equation:varepsilontheta}
\begin{aligned}
\varepsilon_{\vartheta}\left(\tilde{Y}^{\vartheta^*(u),\Delta}\right)&=\tilde{Y}^{\vartheta^*(u),\Delta}(1)-B_\vartheta'\sum_{n=1}^\infty \left( e^{\Delta A_\vartheta}-K_\vartheta B_\vartheta' \right)^{n-1}K_\vartheta \tilde{Y}^{\vartheta^*(u),\Delta}(1-n).
\end{aligned}
\end{align}
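For a fixed $\vartheta$, the pseudo-innovation above and the contrast (\ref{equation:quasilogcontrast}) can be evaluated by truncating the series, since the spectral radius of $e^{\Delta A_\vartheta}-K_\vartheta B_\vartheta'$ is less than one. In the sketch below all inputs are hypothetical stand-ins; in practice $K_\vartheta$ and $V_\vartheta$ come from the Riccati equation.

```python
import numpy as np

# Hypothetical ingredients for one fixed theta: sampled transition
# e^{Delta A_theta}, steady-state Kalman gain K_theta, innovations
# variance V_theta, and an observed history y[k] = Y(1 - k), k = 0, 1, ...
eDA = np.array([[0.6, 0.1], [0.0, 0.4]])
K = np.array([[0.3], [0.1]])
B = np.array([[1.0], [0.0]])
V = 1.5
rng = np.random.default_rng(1)
y = rng.standard_normal(200)

def pseudo_innovation(y, eDA, K, B, n_terms=150):
    """Truncation of eps_theta = Y(1) - B' sum_n M^{n-1} K Y(1-n)."""
    M = eDA - K @ B.T
    acc = np.zeros((eDA.shape[0], 1))
    P = np.eye(eDA.shape[0])  # M^{n-1}, starting at n = 1
    for n in range(1, n_terms + 1):
        acc += P @ K * y[n]   # y[n] = Y(1 - n)
        P = P @ M
    return (y[0] - B.T @ acc).item()

eps = pseudo_innovation(y, eDA, K, B)
phi_LL = np.log(2 * np.pi) + np.log(V) + eps**2 / V  # one contrast term
```

Since the matrix powers decay geometrically, the truncation level has little influence once it is moderately large.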
The localized $M$-estimator resulting from the contrast (\ref{equation:quasilogcontrast}) can be considered as a localized quasi-maximum likelihood estimator.
It is given by
\begin{align}
\begin{aligned}\label{equation:thetahatqmle}
\hat{\vartheta}_N&=\argmin_{\vartheta\in\Theta} M_N(\vartheta), \text{ where}\\
M_N(\vartheta)&=\frac{\delta_N}{b_N}\sum_{i=-m_N}^{m_N}K\left(\frac{\tau_i^N-u}{b_N} \right)\Phi^{LL}\left(\left(Y_N^{\vartheta^*}\left(\tau_i^N+\frac{\Delta (1-k)}{N}\right)\right)_{k\in\mathbb{N}_0},\vartheta\right),
\end{aligned}
\end{align}
where $K$ is a localizing kernel. The following proposition shows that if \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C4]{(C4)} hold, $\Phi^{LL}$ satisfies all conditions of Theorem \ref{theorem:consistency} besides the identifiability condition \hyperref[assumption:M1]{(M1)}.
\begin{Proposition}\label{proposition:qmlecontrast}
If \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C4]{(C4)} hold, then
\begin{enumerate}[label={(\alph*)}]
\item $\Phi^{LL}(\cdot,\vartheta)\in \mathcal{L}_\infty^{1,2}(\alpha)$ for all $\vartheta\in\Theta$, where $\alpha=(\alpha_k)_{k\in\mathbb{N}_0}\subset\mathbb{R}^+$ satisfies $\sum_{k=0}^\infty k\alpha_k<\infty$,
\item $\Phi^{LL}(x,\cdot)\in\mathcal{L}_d(0,D_1(1+\sum_{k=0}^\infty \beta_kx_{1-k}^2))$ for all real sequences $x=(x_{1-k})_{k\in\mathbb{N}_0}$ and some $D_1\geq0$, where $(\beta_k)_{k\in\mathbb{N}_0}\subset\mathbb{R}$ satisfies $\sum_{k=0}^\infty k\beta_k<\infty$ and
\item if the observations follow the sampling scheme \hyperref[observations:O2]{(O2)}, condition (c) of Theorem \ref{theorem:consistency} is satisfied.
\end{enumerate}
\end{Proposition}
\begin{proof}
See Section \ref{sec7-5}.
\end{proof}
The following conditions \hyperref[assumption:C5]{(C5)}-\hyperref[assumption:C7]{(C7)} will help to verify the identifiability condition \hyperref[assumption:M1]{(M1)}.\\
First, it is necessary to ensure that the sampled process $\tilde{Y}^{\vartheta,\Delta}$ is not the output process of any state space representation of lower dimension than $p$ for all $\vartheta\in\Theta$. The concept of minimality is suitable for this purpose.
\begin{Definition}
Let $H:\mathbb{R}\rightarrow\mathbb{R}$ be a rational function and $A\in M_{p\times p}(\mathbb{R})$ and $B,C\in M_{p\times 1}(\mathbb{R})$ such that $H(x)=B'(x \mathbf{1}_p-A)^{-1}C$. We then call the triplet $(A,B,C)$ an algebraic realization of $H$ of dimension $p$. If $(A,B,C)$ is an algebraic realization whose dimension is smaller than or equal to the dimension of any other algebraic realization of $H$, we call it minimal. The corresponding dimension of such a minimal algebraic realization is called the McMillan degree.
\end{Definition}
\begin{manualassumption}{(C5)}\label{assumption:C5}
For each $\vartheta\in\Theta$ the triplet $(A_\vartheta,B_\vartheta,C_\vartheta)$ is minimal with McMillan degree $p$.
\end{manualassumption}
\begin{Definition}
An algebraic realization $(A,B,C)$ of dimension $p$ is called controllable if the matrix $[C~~ AC ~\hdots ~A^{p-1}C]\in M_{p\times p}(\mathbb{R})$ has full rank. Moreover, it is called observable if the matrix $[B~~ A'B ~\hdots~ (A')^{p-1}B]\in M_{p\times p}(\mathbb{R})$ has full rank.
\end{Definition}
\begin{Proposition}\label{proposition:controllableobservableminimal}
An algebraic realization $(A,B,C)$ of dimension $p$ is controllable and observable if and only if it is minimal.
\end{Proposition}
\begin{proof}
See \cite[Theorem 2.3.3]{HD1988}.
\end{proof}
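Minimality can thus be checked numerically via the ranks of the controllability and observability matrices. The sketch below does this for a diagonal family of the same form as in Example \ref{example:timevaryingstatespacemodel} below, with hypothetical parameter values $\vartheta=(-0.5,-2)$.

```python
import numpy as np

def controllability_matrix(A, C):
    p = A.shape[0]
    cols = [C]
    for _ in range(p - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)          # [C, AC, ..., A^{p-1}C]

def observability_matrix(A, B):
    p = A.shape[0]
    cols = [B]
    for _ in range(p - 1):
        cols.append(A.T @ cols[-1])
    return np.hstack(cols)          # [B, A'B, ..., (A')^{p-1}B]

# Hypothetical parameter values theta = (-0.5, -2).
t1, t2 = -0.5, -2.0
A = np.diag([t1, t2])
B = np.array([[1.0 / (t2 - t1)], [-1.0 / (t2 - t1)]])
C = np.array([[-t1 * (1 + t2)], [-t2 * (1 + t1)]])

controllable = np.linalg.matrix_rank(controllability_matrix(A, C)) == 2
observable = np.linalg.matrix_rank(observability_matrix(A, B)) == 2
# controllable and observable together imply minimality (Proposition above)
```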
\begin{manualassumption}{(C6)}\label{assumption:C6}
Let $(\tilde{Y}^\vartheta)_{\vartheta\in\Theta}$ be the family of output processes of the observation and state equation (\ref{equation:sampledprocessY_u}). For all $\vartheta_1\neq\vartheta_2$ the spectral densities of the two processes $\tilde{Y}^{\vartheta_1}$ and $\tilde{Y}^{\vartheta_2}$ are different.
\end{manualassumption}
\begin{Proposition}
For $\vartheta\in\Theta$ let $\tilde{Y}^\vartheta$ be the output process of (\ref{equation:sampledprocessY_u}). Then, its spectral density $f_{\tilde{Y}^\vartheta}$ is given by
\begin{gather*}
f_{\tilde{Y}^\vartheta}(\omega)=\frac{1}{2\pi}H_{\vartheta}(i\omega)\Sigma_\vartheta H_\vartheta(-i\omega), ~ \omega\in\mathbb{R},
\end{gather*}
where $H_\vartheta:\mathbb{R}\rightarrow\mathbb{R}$ is the transfer function of $\tilde{Y}^\vartheta$ and defined as $H_\vartheta(x)=B_\vartheta'(x \mathbf{1}_p-A_\vartheta)^{-1}C_\vartheta$.
\end{Proposition}
\begin{proof}
See \cite[Proposition 3.4]{SS2012b}.
\end{proof}
Under the following assumption one can show that \hyperref[assumption:C6]{(C6)} also holds for the sampled process $\tilde{Y}^{\vartheta,\Delta}$.
\begin{manualassumption}{(C7)}\label{assumption:C7}
For each $\vartheta\in\Theta$ the spectrum of $A_\vartheta$ is a subset of $\{z\in\mathbb{C}:\, -\frac{\pi}{\Delta}< \mathfrak{Im}(z) < \frac{\pi}{\Delta}\}$.
\end{manualassumption}
\begin{Proposition}\label{proposition:qmlem1}
Let $\Phi^{LL}$ be as defined in (\ref{equation:quasilogcontrast}) and assume that \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C7]{(C7)} hold. Then, \hyperref[assumption:M1]{(M1)} holds.
\end{Proposition}
\begin{proof}
Follows from \cite[Lemma 2.10 and the proof of Theorem 3.16]{SS2012b}.
\end{proof}
\begin{Theorem}\label{theorem:qmleconsistent}
Assume that \hyperref[assumption:C0]{(C0)} is satisfied for $p=2$ and that either \hyperref[observations:O1]{(O1)} or \hyperref[observations:O2]{(O2)} holds. If \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C7]{(C7)} hold, then $\hat{\vartheta}_N\!\overset{P}{\longrightarrow}\vartheta^*(u)$ as $N\rightarrow\infty$ for all $u\in\mathbb{R}^+$ with $\hat{\vartheta}_N$ as defined in (\ref{equation:thetahatqmle}).
\end{Theorem}
\begin{proof}
According to Propositions \ref{proposition:qmlecontrast} and \ref{proposition:qmlem1}, the contrast $\Phi^{LL}$ as defined in (\ref{equation:quasilogcontrast}) satisfies the conditions of Theorem \ref{theorem:consistency}.
\end{proof}
\begin{Remark}
Theorem \ref{theorem:qmleconsistent} provides an important first result for the statistical inference of time-varying L\'evy-driven state space models, specifically including time-varying CARMA processes.\\
In addition, it is desirable to have results on the estimator's (asymptotic) distribution at hand to construct for instance confidence intervals. In fact, Theorem \ref{theorem:asymptoticnormality} should pave the way to prove asymptotic normality of $\hat{\vartheta}_N$ as defined in (\ref{equation:thetahatqmle}). However, this requires an in-depth analysis of the regularity properties of the contrast function's first and second order partial derivatives, which is beyond the scope of this work.
\end{Remark}
\begin{Remark}\label{remark:qmleconsistent}
If \hyperref[observations:O1]{(O1)} holds for $N\delta_N=\Delta$, the estimator $\hat{\vartheta}_N$ is given by the simpler expression
\begin{gather*}
\hat{\vartheta}_N=\argmin_{\vartheta\in\Theta}\frac{\delta_N}{b_N}\sum_{i=-m_N}^{m_N}K\left(\frac{i\delta_N}{b_N} \right)\Phi^{LL}\left(\left(Y_N^{\vartheta^*}(\tau_{i+1-k}^N)\right)_{k\in\mathbb{N}_0},\vartheta\right).
\end{gather*}
\end{Remark}
In the next example we briefly present an estimation setting that satisfies the conditions \hyperref[assumption:C0]{(C0)}-\hyperref[assumption:C7]{(C7)}.
\begin{Example}\label{example:timevaryingstatespacemodel}
Assume that $\Theta\subset\mathbb{R}^3$ is a properly restricted compact parameter space. For $\vartheta=(\vartheta_1,\vartheta_2,\vartheta_3)\in\Theta$ we consider the following families of matrices and real numbers
\begin{align*}
A_\vartheta=\left(\begin{array}{cc}\vartheta_1 & 0 \\0 & \vartheta_2 \\\end{array}\right),~
B_\vartheta=\left(\begin{array}{c}\frac{1}{\vartheta_2-\vartheta_1} \\ \frac{-1}{\vartheta_2-\vartheta_1}\\ \end{array}\right),~
C_\vartheta=\left(\begin{array}{c}-\vartheta_1(1+\vartheta_2) \\ -\vartheta_2(1+\vartheta_1)\\ \end{array}\right) \text{ and }\Sigma_\vartheta=\vartheta_3>0.
\end{align*}
Moreover, $L_\vartheta$ denotes a family of two-sided L\'evy processes satisfying \hyperref[assumption:C1]{(C1)} for all $\vartheta\in\Theta$.
For a properly restricted compact parameter space $\mathcal{T}\subset\mathbb{R}^5$, we define the family $R_\Theta=\{\tilde{\vartheta}(t)=\left(\tau_1+\tau_2|\sin(t)|,\tau_1+\tau_3|\cos(t)|,\tau_4\right) \}_{\tau=(\tau_1,\tau_2,\tau_3,\tau_4)\in\mathcal{T}}$ of curves in $\Theta$, such that $(\tilde{\vartheta}(t))_{t\in\mathbb{R}}\subset\Theta$ for all $\tilde{\vartheta}\in R_\Theta$.\\
Following Remark \ref{remark:qmleconsistent}, we define the family $(Y_N^{\tilde{\vartheta}}(t))_{\tilde{\vartheta}\in R_\Theta}$ of sequences of time-varying L\'evy-driven state space models such that $Y_N^{\tilde{\vartheta}}(t)$ is defined as in (\ref{eq:seqtvLDstatespacesolution}) with coefficient functions $A(t)=A_{\tilde{\vartheta}(t)},~B(t)=B_{\tilde{\vartheta}(t)},~C(t)=C_{\tilde{\vartheta}(t)}\text{ and driving noise }L_{\tilde{\vartheta}(t)},~\tilde{\vartheta}\in R_\Theta$. All of the above functions are uniformly bounded and Lipschitz in $t$ for all $\tau\in\mathcal{T}$. In addition, it holds $[A(t),A(s)]=0$ for all $s,t\in\mathbb{R}$ and $\tau\in\mathcal{T}$ (see e.g. \cite{WS1976}). All eigenvalues of $A(t)$ are real and strictly negative for a properly restricted $\mathcal{T}$. Overall, (a), (b), (c1), (d2) and (e2) of Proposition \ref{proposition:tvLDSPhavealocstatapprox} hold and $\tilde{Y}^{\tilde{\vartheta}(u)}(t)$ as given in (\ref{equation:statparameterized}) is a locally stationary approximation of $Y^{\tilde{\vartheta}}_N(t)$ for $p=2$ and all $\tau\in\mathcal{T}$. Now, assume that we observed $Y^{\vartheta^*}_N(t)$ for some true parameter curve $\vartheta^*(t)\in R_\Theta$. Then, \hyperref[assumption:C0]{(C0)} holds.\\
By construction, the functions $\vartheta\mapsto A_\vartheta$, $\vartheta\mapsto B_\vartheta$, $\vartheta\mapsto C_\vartheta$ and $\vartheta\mapsto \Sigma_\vartheta$ are continuously differentiable. It is easy to see that the triplet $(A_\vartheta,B_\vartheta,C_\vartheta)$ is controllable and observable and therefore, according to Proposition \ref{proposition:controllableobservableminimal} also minimal for all $\vartheta\in\Theta$. Overall, \hyperref[assumption:C0]{(C0)}-\hyperref[assumption:C5]{(C5)} and \hyperref[assumption:C7]{(C7)} hold. In view of \hyperref[assumption:C6]{(C6)} it is enough to observe that $f_{\tilde{Y}^\vartheta}(\omega)=\frac{\vartheta_3}{2\pi}\frac{(\omega-i \vartheta_1\vartheta_2)(\omega+i \vartheta_1\vartheta_2)}{(\omega+i \vartheta_1)(\omega+i\vartheta_2)(\omega-i \vartheta_1)(\omega-i\vartheta_2)}$, which satisfies \hyperref[assumption:C6]{(C6)} whenever $\vartheta_1\neq\vartheta_2$, $\vartheta_i\neq\overline\vartheta_i$ and $\mathfrak{Re}(\vartheta_i)<0$ for $i=1,2$.
Finally, for all $u\in\mathbb{R}^+$ Theorem \ref{theorem:qmleconsistent} implies that $\hat{\vartheta}_N$ is a consistent estimator of $\vartheta^*(u)$. If the true parameter curve $\vartheta^*$ can be estimated at $0<u_1<u_2$, one can solve a system of equations to obtain estimators for $\tau$, such that the whole curve $\vartheta^*$ can be reconstructed.\\
It is interesting to note that \cite[Theorem 3.3]{SS2012a} immediately shows that $\tilde{Y}^{\vartheta}$ is a CARMA(2,1) process with AR polynomial $p(z)=(z-\vartheta_1)(z-\vartheta_2)$ and
MA polynomial $q(z)=z-\vartheta_1\vartheta_2$.\\
We note that the form of $B$ and $C$ ensures that the transfer function of $\tilde{Y}^{\vartheta}$ is properly normalized and that the AR and MA polynomials are monic, which helps to obtain the identifiability condition from the spectral density.
\end{Example}
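The agreement between the state space form and the spectral density displayed in the example can be checked numerically by evaluating $\frac{\vartheta_3}{2\pi}|H_\vartheta(i\omega)|^2$; the parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical theta = (t1, t2, t3) for the example's matrix family.
t1, t2, t3 = -0.5, -2.0, 1.0
A = np.diag([t1, t2])
B = np.array([[1.0 / (t2 - t1)], [-1.0 / (t2 - t1)]])
C = np.array([[-t1 * (1 + t2)], [-t2 * (1 + t1)]])

def H(z):
    """Transfer function B'(z 1_p - A)^{-1} C."""
    return (B.T @ np.linalg.solve(z * np.eye(2) - A, C)).item()

w = np.linspace(-5.0, 5.0, 101)
f_state = np.array([t3 / (2 * np.pi) * abs(H(1j * wi))**2 for wi in w])
f_closed = (t3 / (2 * np.pi) * (w**2 + (t1 * t2)**2)
            / ((w**2 + t1**2) * (w**2 + t2**2)))
# f_state and f_closed coincide, confirming the displayed rational form
```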
\subsection{Truncated quasi-maximum likelihood estimation}
\label{sec5-3}
We consider observations as described in Assumption \ref{assumption:observations} that follow the sampling scheme \hyperref[observations:O1]{(O1)} for $N\delta_N=\Delta$.\\
It is clear that in practice one does not observe the full history of $Y_N^{\vartheta^*}$ as assumed in (\ref{equation:thetahatqmle}). As unobserved sampling points must not contribute to the estimator, we set $Y_N^{\vartheta^*}(\tau)$ to $0$ if $\tau$ is not included in the observation window $[u-b_N,u+b_N]$.
Thus, for $Y_N^{\vartheta^*}=(Y_N^{\vartheta^*}(\tau_{i+1-k}^N))_{k\in\mathbb{N}_0}$ we define
\begin{align*}
\tilde\Phi^{LL}_{i,m_N}\left(Y_N^{\vartheta^*},\vartheta\right)&=\bigg(\log(2\pi)+\log(V_\vartheta)+\frac{\tilde{\varepsilon}_{\vartheta,i,m_N}^2\big(Y_N^{\vartheta^*}\big)}{V_\vartheta}\bigg)
\text{ and}\\
\tilde{\varepsilon}_{\vartheta,i,m_N}\left(Y_N^{\vartheta^*}\right)&=Y_N^{\vartheta^*}\left(\tau_{i+1}^N\right)-B_\vartheta'\sum_{n=1}^{m_N+i+1} \left(e^{\Delta A_\vartheta}-K_\vartheta B_\vartheta' \right)^{n-1}K_\vartheta Y_N^{\vartheta^*}\left(\tau_{i+1-n}^N\right),\quad i\in\mathbb{Z}.
\end{align*}
This leads to the truncated estimator
\begin{align}\label{equation:modifiedestimator}
\begin{aligned}
\hat{\vartheta}^{mod}_N&=\argmin_{\vartheta\in\Theta}\tilde{M}_N(\vartheta),\\
\end{aligned}
\end{align}
where $\tilde{M}_N(\vartheta)=\frac{\delta_N}{b_N}\sum_{i=-m_N}^{m_N-1}K\left(\frac{\tau_i^N-u}{b_N} \right)\tilde\Phi^{LL}_{i,m_N}\left(Y_N^{\vartheta^*},\vartheta\right)$. The following theorem extends the consistency result from Theorem \ref{theorem:qmleconsistent} to the truncated estimator $\hat{\vartheta}^{mod}_N$.
\begin{Theorem}\label{equation:consistencymodifiedQMLE}
Assume that \hyperref[assumption:C0]{(C0)} is satisfied for $p=2$. If \hyperref[observations:O1]{(O1)} as well as \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C7]{(C7)} hold, then $\hat{\vartheta}^{mod}_N\overset{P}{\longrightarrow}\vartheta^*(u)$ as $N\rightarrow\infty$ for all $u\in\mathbb{R}^+$ with $\hat{\vartheta}^{mod}_N$ as defined in (\ref{equation:modifiedestimator}).
\end{Theorem}
\begin{proof}
See Section \ref{sec7-6}.
\end{proof}
\subsection{Whittle estimation}
\label{sec5-4}
In this section, we investigate under the same setting as in Section \ref{sec5-2} a localized Whittle estimator for $\vartheta^*$. Before we prove consistency of this estimator, we briefly review the Whittle estimator in a stationary setting \cite{FHM2020}.\\
Let $X=\{X(t),t\in\mathbb{R}\}$ be a real-valued centered square integrable L\'evy-driven state space model given by $X(t)=\int_{-\infty}^t B' e^{A(t-s)}CL(ds)$, where $A\in M_{p\times p}(\mathbb{R})$ and $B,C\in M_{p\times 1}(\mathbb{R})$.
For some $\Delta>0$ we consider the sampled process $X^{\Delta}=\{ X^{\Delta}(k), k\in \mathbb{N}_0\}$, where $X^{\Delta}(k)=X(\Delta k)$.
The spectral density $f_X^{(\Delta)}$ is defined as the Fourier transform of the autocovariance function $\Gamma^{(\Delta)}_X(h)=E[X^\Delta(h)X^\Delta(0)]$, $h\in\mathbb{Z}$, i.e.
\begin{gather*}
f_X^{(\Delta)}(\omega)=\frac{1}{2\pi}\sum_{h\in\mathbb{Z}}\Gamma^{(\Delta)}_X(h)e^{-ih\omega}, \qquad \omega\in[-\pi,\pi],
\end{gather*}
and conversely, using the inverse Fourier transform, $\Gamma_X^{(\Delta)}(h)=\int_{-\pi}^\pi f_X^{(\Delta)}(\omega) e^{ih\omega}\,d\omega$, $h\in\mathbb{Z}$, where we make the additional convention that $\Gamma^{(\Delta)}_X(-h)=\Gamma^{(\Delta)}_X(h)$. Based on the sample autocovariance $\overline{\Gamma}_n(h)=\frac{1}{n}\sum_{k=1}^{n-h}X^\Delta(k+h)X^\Delta(k)$, $h\in\mathbb{Z}$, where $\overline{\Gamma}_n(-h):=\overline{\Gamma}_n(h)$, we define the periodogram $I_n:[-\pi,\pi]\rightarrow [0,\infty)$ as
\begin{gather}\label{eq:periodogram}
I_n(\omega)=\frac{1}{2\pi n} \left( \sum_{j=1}^n X^{\Delta}(j)e^{-ij\omega} \right)\left( \sum_{k=1}^n X^{\Delta}(k)e^{ik\omega} \right)=\frac{1}{2\pi} \sum_{h=-n+1}^{n-1} \overline{\Gamma}_n(h)e^{-ih\omega},\ \omega\in [-\pi,\pi].
\end{gather}
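The two representations of $I_n(\omega)$ in (\ref{eq:periodogram}) coincide, which the following sketch verifies numerically with simulated Gaussian noise as a stand-in for $X^{\Delta}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)  # x[k-1] plays the role of X^Delta(k)

def periodogram_dft(x, w):
    """(1/2 pi n) |sum_j X(j) e^{-ijw}|^2 (left-hand form)."""
    n = len(x)
    j = np.arange(1, n + 1)
    s = np.sum(x * np.exp(-1j * j * w))
    return (s * np.conj(s)).real / (2 * np.pi * n)

def sample_acov(x, h):
    """(1/n) sum_{k=1}^{n-|h|} X(k+|h|) X(k)."""
    n, h = len(x), abs(h)
    return np.sum(x[h:] * x[:n - h]) / n

def periodogram_acov(x, w):
    """(1/2 pi) sum_{h=-n+1}^{n-1} Gamma_n(h) e^{-ihw} (right-hand form)."""
    n = len(x)
    return sum(sample_acov(x, h) * np.exp(-1j * h * w)
               for h in range(-n + 1, n)).real / (2 * np.pi)
```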
The periodogram $I_n(\omega)$ can be considered as the empirical version of the spectral density and is the main part of the Whittle estimator. \\
Now, let $\Theta\subset\mathbb{R}^d$ be a compact parameter space. For $\vartheta\in\Theta$ let $X^\vartheta$ be a real-valued centered square integrable state space model of the form (\ref{equation:statparameterized}) and $f_X^{(\Delta)}(\omega,\vartheta)$ the corresponding spectral density. In this stationary setting the Whittle function is defined as
\begin{gather}\label{eq:whittlefunction}
W_n^{stat}(\vartheta)=\frac{1}{2n}\sum_{j=-n+1}^n\left(\frac{I_n(\omega_j)}{f_X^{(\Delta)}(\omega_j,\vartheta)}+\log\left(f_X^{(\Delta)}(\omega_j,\vartheta)\right)\right), \qquad \vartheta\in\Theta,
\end{gather}
where $\omega_j=\frac{\pi j}{n}$ for $j=-n+1,\ldots,n$. Based on this Whittle function, the Whittle estimator is defined as $\hat{\vartheta}_n^{stat}=\argmin_{\vartheta\in\Theta}W_n^{stat}(\vartheta)$. For more information on the Whittle estimator including conditions that ensure consistency, we refer to \cite{FHM2020}.\\
Let $(Y_N^{\vartheta^*}(t))_{N\in\mathbb{N}}$ denote a sequence of time-varying L\'evy-driven state space models as considered in Section \ref{sec5-2} and $(\tilde{Y}^\vartheta(t))_{\vartheta\in\Theta}$ a family of L\'evy-driven state space models in the form of (\ref{equation:statparameterized}). Based on observations of $Y_N^{\vartheta^*}$, we now give a localized version of the Whittle estimator.\\% as defined in (\ref{eq:whittleestimator}). \\
We fix $\Delta>0$ and assume that the available observations follow the sampling scheme introduced in Assumption \ref{assumption:observations} such that \hyperref[observations:O1]{(O1)} holds for $N\delta_N=\Delta$.
For a positive localizing kernel $K$ we consider a localized version $I_N^{loc}:[-\pi,\pi]\rightarrow [0,\infty)$ of the periodogram (\ref{eq:periodogram}) which is given by
\begin{align}\label{eq:localperiodogram}
\begin{aligned}
\!\! I_N^{loc}(\omega)\!&= \!\frac{\delta_N}{2\pi b_N} \!\! \left(\sum_{j=-m_N}^{m_N} \!\!\! \sqrt{K\left(\frac{\tau_j^N-u}{b_N}\right)}Y_N^{\vartheta^*}(\tau_j^N)e^{-ij\omega} \! \right) \!\! \left(\sum_{j=-m_N}^{m_N} \!\!\! \sqrt{K\left(\frac{\tau_j^N-u}{b_N}\right)}Y_N^{\vartheta^*}(\tau_j^N)e^{ij\omega} \! \right)\\
&=\frac{1}{2\pi} \sum_{h=-2m_N}^{2m_N}\hat{\Gamma}_N^{loc}(h)e^{-ih\omega},
\end{aligned}
\end{align}
where $\omega\in[-\pi,\pi]$ and
\begin{gather*}
\hat{\Gamma}_N^{loc}(h)=\frac{\delta_N}{b_N}\sum_{j=-m_N}^{m_N-h}\sqrt{K\left(\frac{\tau_{j+h}^N-u}{b_N}\right)K\left(\frac{\tau_j^N-u}{b_N}\right)}Y_N^{\vartheta^*}(\tau_j^N)Y_N^{\vartheta^*}(\tau_{j+h}^N),\qquad h\in\mathbb{Z},
\end{gather*}
with the convention $\hat{\Gamma}_N^{loc}(-h)=\hat{\Gamma}_N^{loc}(h)$. Based on the localized periodogram we define the localized Whittle function $W_N(\vartheta)$ as
\begin{gather*}
W_N(\vartheta)=\frac{1}{4m_N+2}\sum_{j=-2m_N}^{2m_N+1} \left( \frac{I_N^{loc}(\omega_j)}{f_{\tilde{Y}}^{(\Delta)}(\omega_j,\vartheta)}+\log\left(f_{\tilde{Y}}^{(\Delta)}(\omega_j,\vartheta)\right)\right), \qquad \vartheta\in\Theta,
\end{gather*}
where $\omega_j=\frac{\pi j}{2m_N+1}$ for $j=-2m_N,\ldots,2m_N+1$ and $f_{\tilde{Y}}^{(\Delta)}(\cdot,\vartheta)$ denotes the spectral density of $\tilde{Y}^{\vartheta,\Delta}$ given by (\ref{eq:spectraldensitysampledprocess}). Then, the localized Whittle estimator is defined as
\begin{gather}\label{eq:localizedwhittleestimator}
\hat{\vartheta}_N=\argmin_{\vartheta\in\Theta} W_N(\vartheta).
\end{gather}
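Analogously to (\ref{eq:periodogram}), the two representations of the localized periodogram (\ref{eq:localperiodogram}) coincide. A sketch with hypothetical data and an Epanechnikov localizing kernel:

```python
import numpy as np

rng = np.random.default_rng(2)
m, delta, b, u = 30, 0.1, 1.0, 0.0
j = np.arange(-m, m + 1)
tau = u + j * delta                          # observation times around u
y = rng.standard_normal(2 * m + 1)           # stand-in for Y_N(tau_j^N)
Kw = 0.75 * np.clip(1 - ((tau - u) / b)**2, 0.0, None)  # kernel weights

def I_loc_dft(w):
    """Localized periodogram via the weighted DFT form."""
    s = np.sum(np.sqrt(Kw) * y * np.exp(-1j * j * w))
    return delta / (2 * np.pi * b) * (s * np.conj(s)).real

def gamma_loc(h):
    """Localized sample autocovariance hat Gamma_N^loc(h)."""
    h = abs(h)
    jj = np.arange(-m, m - h + 1)            # j = -m, ..., m - h
    a, c = jj + m, jj + h + m                # array positions of j and j + h
    return delta / b * np.sum(np.sqrt(Kw[c] * Kw[a]) * y[a] * y[c])

def I_loc_acov(w):
    """Localized periodogram via the autocovariance form."""
    return sum(gamma_loc(h) * np.exp(-1j * h * w)
               for h in range(-2 * m, 2 * m + 1)).real / (2 * np.pi)
```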
Under the same conditions as in Section \ref{sec5-2} we obtain consistency.
\begin{Theorem}\label{theorem:whittleconsistent}
Assume that \hyperref[assumption:C0]{(C0)} is satisfied for $p=2$, that \hyperref[observations:O1]{(O1)} holds for $N\delta_N=\Delta$ and that the localizing kernel $K$ is either continuous and positive or the (non-continuous) rectangular kernel (\ref{equation:rectangularkernel}). If additionally \hyperref[assumption:C1]{(C1)} - \hyperref[assumption:C7]{(C7)} hold, then $\hat{\vartheta}_N\overset{P}{\longrightarrow}\vartheta^*(u)$ as $N\rightarrow\infty$ for all $u\in\mathbb{R}^+$ with $\hat{\vartheta}_N$ as defined in (\ref{eq:localizedwhittleestimator}).
\end{Theorem}
\begin{proof}
See Section \ref{sec7-7}.
\end{proof}
\begin{Remark}
In contrast to Theorem \ref{theorem:qmleconsistent}, we cannot readily adapt the above theorem for observations following the sampling scheme \hyperref[observations:O2]{(O2)}, as $\hat{\Gamma}_N^{loc}(h)$ does not necessarily converge to $E[\tilde{Y}_u(0)\tilde{Y}_u(h)]$ in this setting (see also Lemma \ref{lemma:convergencegamma}).
\end{Remark}
\begin{Remark}
For time-invariant L\'evy-driven state space models, asymptotic normality of the Whittle estimator has been shown in \cite[Theorem 2]{FHM2020} following an approach that is similar to the proof of Theorem \ref{theorem:asymptoticnormality}. More precisely, the authors approximated the periodogram (\ref{eq:periodogram}) in the Whittle function (\ref{eq:whittlefunction}) by the corresponding periodogram with respect to the process $N^{(\Delta)}_\vartheta$ as defined in (\ref{equation:statespacerepresentationsampled}) (see \cite[Lemma 3]{FHM2020}). It is worth noting that this technique does not immediately carry over to our non-stationary setting.
\end{Remark}
\section{Simulation study}
\label{sec6}
In this section, we study the finite sample behavior and the convergence of the estimators introduced in Section \ref{sec4-3}, Section \ref{sec5-3} and Section \ref{sec5-4} in a simulation study.\\
More precisely, we perform a Monte Carlo study for each estimator and different data generating processes. Using an Euler-Maruyama scheme on the interval $[0,2000]$ we simulate $400$ independent paths of different time-varying L\'evy-driven state space models that start at $0$. For each simulated path we estimate the coefficient function at equidistant points $(u_i)_{i=1,\ldots,101}$, where $u_1=400$ and $u_{101}=1600$.\\
The driving L\'evy process is either a centered Gaussian L\'evy process or a centered normal-inverse Gaussian (NIG) L\'evy process (see e.g. \cite{BN1997} and \cite{R1997} for more details). The distribution of the increments $L(t)-L(t-1)$ of an NIG L\'evy process $L$ is characterized by the density
\begin{align*}
f_{NIG}(x,\mu,\alpha,\beta,\delta_{NIG})=\frac{\delta_{NIG}}{2\pi}\frac{(1+\alpha g(x))}{g(x)^3}e^{\kappa+\beta x -\alpha g(x)},\qquad x\in\mathbb{R}
\end{align*}
with $g(x)=\sqrt{\delta_{NIG}^2+(x-\mu)^2}$ and $\kappa=\sqrt{\alpha^2-\beta^2}$,
where $\mu\in\mathbb{R}$ is a location parameter, $\alpha\geq0$ is a shape parameter, $\beta\in\mathbb{R}$ is a symmetry parameter and $\delta_{NIG}\geq0$ is a scale parameter.\\
It is clear that the step size used for the Euler-Maruyama scheme has a strong impact on the accuracy of the simulated solution of the L\'evy-driven state space model and hence also on any sample taken from it.
Even for a constant step size, a sample becomes inaccurate if the distance between two observations shrinks, as is the case for $\delta_N$ when $N$ increases. To overcome possible distortions of our simulation study caused by this issue, we adapt the step size to the sampling scheme by considering a ratio of $1$:$1000$ between the sampled and simulated points, i.e. every $1000$th simulated point is sampled.\\
The observations are sampled according to \hyperref[observations:O1]{(O1)}, where
\begin{gather*}
\delta_N=\frac{1}{N},\qquad b_N=\frac{400}{\sqrt{N}}, \text{ such that}\qquad\Delta=1 \text{ and }\qquad m_N=\lfloor 400\sqrt{N}\rfloor.
\end{gather*}
The bandwidth parameter $b_N$ has to be chosen. Our choice of $b_N$ satisfies the conditions imposed in Theorems \ref{theorem:consistency} and \ref{theorem:asymptoticnormality}. Moreover, the constant $400$ aims to establish a sample size known to provide good results in a stationary setting (see \cite{FHM2020, SS2012b}). A systematic investigation of the bandwidth choice is an interesting topic for further research.\\
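The observation-scheme quantities follow directly from $N$; a small helper (naming is our own, and the observation count $2m_N+1$ reflects the index range $i=-m_N,\ldots,m_N$ used later in the proofs):

```python
import math

def sampling_scheme(N):
    """Parameters of observation scheme (O1) as used in the study."""
    delta_N = 1 / N                       # observation spacing
    b_N = 400 / math.sqrt(N)              # bandwidth
    m_N = math.floor(400 * math.sqrt(N))  # window half-length in indices
    n_obs = 2 * m_N + 1                   # observations i = -m_N, ..., m_N
    return delta_N, b_N, m_N, n_obs

for N in (1, 4, 16, 64, 256):
    delta_N, b_N, m_N, n_obs = sampling_scheme(N)
    assert N * delta_N == 1               # Delta = 1 for every N
```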
In our simulation study we investigate the values $N=1,4,16,64,256$. If the driving L\'evy process is Gaussian (i.e. $L(1)\sim\mathcal{N}(\mu,\sigma^2)$), we assume that $\mu=0$ and $\sigma^2=0.2$. In the case of an NIG L\'evy process as driving noise we consider the parameters $\alpha=3,\beta=1, \delta_{NIG}=2$ and $\mu=-2/\sqrt{8}$, which implies that $E[L(1)]=0$ and $\Sigma_L=9\sqrt{2}/16\approx 0.7955$. As localizing kernel we consider the rectangular kernel (\ref{equation:rectangularkernel}) or the Epanechnikov kernel
\begin{align}\label{equation:epankernel}
K_{epan}(x)=\frac{3}{4}(1-x^2) \mathbb{1}_{\{x\in[-1,1]\}}.
\end{align}
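Both localizing kernels are straightforward to implement. The rectangular kernel below is assumed to be $K_{rect}(x)=\tfrac12\mathbb{1}_{\{x\in[-1,1]\}}$, since equation (\ref{equation:rectangularkernel}) is not reproduced in this section:

```python
import numpy as np

def k_rect(x):
    """Rectangular kernel on [-1, 1] (assumed form; integrates to 1)."""
    x = np.asarray(x, dtype=float)
    return 0.5 * (np.abs(x) <= 1)

def k_epan(x):
    """Epanechnikov kernel from the equation above."""
    x = np.asarray(x, dtype=float)
    return 0.75 * (1 - x**2) * (np.abs(x) <= 1)
```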
To measure the quality of the coefficient function estimates, we use the mean integrated squared error (MISE), where the integral in the MISE over the interval $[400,1600]$ is replaced by a Riemann sum over the equidistant partition based on the estimation points $(u_i)_{i=1,\ldots,101}$.\\
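The discretized MISE can be sketched as follows; the names and array layout are our own:

```python
import numpy as np

def mise(estimates, truth, u):
    """Riemann-sum MISE over [u[0], u[-1]] on the equidistant grid u
    (here u_1 = 400, ..., u_101 = 1600), averaged over simulated paths.

    estimates: array (n_paths, len(u)) of estimated coefficient values
    truth:     array (len(u),) of true coefficient values
    """
    du = u[1] - u[0]                            # equidistant spacing
    ise = np.sum((estimates - truth)**2, axis=1) * du
    return ise.mean()

u = np.linspace(400, 1600, 101)
```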
All simulations have been conducted in MATLAB on the BwUniCluster. For numerical optimization, a differential evolution optimization routine has been used.
\subsection{Simulation Study: least squares estimation}
\label{sec6-1}
We simulate a sequence of time-varying Ornstein-Uhlenbeck processes as defined in (\ref{eq:tvcar}) for three different coefficient functions
\begin{align*}
a^{(1)}(t)=\frac{1}{10}+\frac{1}{2} \abs{\cos\left(\frac{t}{500}\right)} ,\quad a^{(2)}(t)=1+\frac{1}{10}\sin\left(\frac{t}{150}\right)\text{ and} \quad a^{(3)}(t)=\frac{1}{2}-\frac{t}{5000},
\end{align*}
$t\in[0,2000]$. The characteristic triplet of the driving L\'evy process is assumed to be known. Using the rectangular kernel as localizing kernel, we compute for the above coefficient functions the localized least squares estimators $\hat{a}^{(1)}(u_i),\hat{a}^{(2)}(u_i)$ and $\hat{a}^{(3)}(u_i)$, $i=1,\ldots,101$. Since the conditions of Theorem \ref{theorem:asymptoticpropertiesleastsquares} are satisfied, all estimators are consistent and asymptotically normal.\\
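For reference, the three coefficient functions translate directly into code:

```python
import numpy as np

def a1(t):
    """a^(1)(t) = 1/10 + 1/2 |cos(t/500)|"""
    return 0.1 + 0.5 * np.abs(np.cos(t / 500))

def a2(t):
    """a^(2)(t) = 1 + 1/10 sin(t/150)"""
    return 1 + 0.1 * np.sin(t / 150)

def a3(t):
    """a^(3)(t) = 1/2 - t/5000"""
    return 0.5 - t / 5000
```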
Indeed, Figure \ref{figure:CAR_NIG} reflects the consistency of $\hat{a}^{(1)}$ and $\hat{a}^{(2)}$. As $N$ grows, the mean over $400$ estimates recovers the respective coefficient function with increasing accuracy. Moreover, the MISE of the estimated coefficient functions decreases across all coefficient functions and driving noises as $N$ increases (see Table \ref{table:1}).\\
Qualitatively, we observe a higher bias for estimates near extreme points of the coefficient functions (see Figure \ref{figure:CAR_NIG}). This arises from the fact that the localizing kernel smoothes the estimation at each fixed estimation point $u_i$ over the window $[u_i-b_N,u_i+b_N]$. If the average of the respective coefficient function on the estimation window deviates from the value of the coefficient function at $u_i$, we observe a comparably high bias (see $N=1,16$ in Figure \ref{figure:CAR_NIG}). For our coefficient functions, this effect is most pronounced at extreme points. Since $b_N \downarrow0$, the smoothing window $[u_i-b_N,u_i+b_N]$ shrinks, which eventually ensures a low bias also at extreme points (see $N=256$ in Figure \ref{figure:CAR_NIG}).\\
As an example, we investigate the performance of the estimates $\hat{a}^{(1)},\hat{a}^{(2)}$ and $\hat{a}^{(3)}$ at the fixed estimation points $u_i$, $i=25,50,75$ in Table \ref{table:2}. The MSE values presented in Table \ref{table:2} decrease across all estimation points, coefficient functions and noises as $N$ increases.\\
It is not surprising that we find large differences in the MSE across the estimation points for each of the coefficient curves $a^{(1)}$ and $a^{(2)}$ at low values of $N$. Again, the main driver of this effect is an increased bias at estimation points close to extreme points of the coefficient functions.
As $a^{(3)}$ is a linear function, we do not find the same effect for this coefficient function.\\
Moreover, in Figure \ref{figure:CAR_QQ}, we compare the empirical distribution of the standardized estimation error of the estimates $\hat{a}^{(1)}$, $\hat{a}^{(2)}$ and $\hat{a}^{(3)}$ at $u_{25}$ for $N=256$ with a standard normal distribution through a Q-Q plot. For standardization we use the consistent estimator from Remark \ref{remark:consistentestimator}. All Q-Q plots show that the sample quantiles of the standardized estimation error are close to those of a standard normal law.\\
Overall, the investigated least squares estimators perform well across all coefficient functions and noises and the finite sample behavior is very well described by the asymptotic results established in Theorem \ref{theorem:asymptoticpropertiesleastsquares}.
\begin{table}
\center
\begin{tabular}{ |c||cc|cc|cc|}
\hline
&\multicolumn{2}{ c| }{$\hat{a}^{(1)}$}& \multicolumn{2}{c|}{$\hat{a}^{(2)}$}& \multicolumn{2}{c|}{$\hat{a}^{(3)}$} \\
\cline{2-3}\cline{4-5}\cline{6-7}
$N$ &Gaussian &
\multicolumn{1}{c|}{NIG} &
Gaussian&
\multicolumn{1}{c|}{NIG} &
\multicolumn{1}{c}{Gaussian}&
\multicolumn{1}{c|}{NIG} \\
\hline
1 & 6.2142 & 6.4279 & 7.5200 & 8.9841 & 1.0293 & 1.1089 \\
4 & 1.3816 & 1.3120 & 2.2083 & 2.8325 & 0.4889 & 0.5416 \\
16 & 0.3550 & 0.3868 & 0.9882 & 1.2945 & 0.2407 & 0.2562 \\
64 & 0.1400 & 0.1543 & 0.4837 & 0.6278 & 0.1203 & 0.1290 \\
256 & 0.0651 & 0.0722 & 0.2402 & 0.3085 & 0.0588 & 0.0650 \\
\hline
\end{tabular}
\vspace*{-2mm}
\caption{\small{MISE of $\hat{a}^{(1)},\hat{a}^{(2)},\hat{a}^{(3)}$ for $N=1,4,16,64,256$ using the rectangular kernel (\ref{equation:rectangularkernel}). As driving noise we use either a Gaussian or NIG L\'evy process.}\label{table:1}}
\end{table}
\begin{figure}
\includegraphics[width=16cm]{CAR_estimation}
\vspace*{-7mm}
\caption{\small{First row: coefficient function $a^{(1)}$ with five realizations of $\hat{a}^{(1)}$ and the mean over $400$ realizations of $\hat{a}^{(1)}$ respectively for $N=1,16,256$. Second row: coefficient function $a^{(2)}$ with five realizations of $\hat{a}^{(2)}$ and the mean over $400$ realizations of $\hat{a}^{(2)}$ respectively for $N=1,16,256$. For simulation we considered an NIG L\'evy process. In addition, we indicate $u_{25},u_{50}$ and $u_{75}$ for $N=256$.}\label{figure:CAR_NIG}}
\end{figure}
\vspace{-11pt}
\begin{figure}[H]
\includegraphics[width=16cm]{CAR_QQ}
\vspace*{-7mm}
\caption{\small{Normal Q-Q plots of the standardized estimation error of $\hat{a}^{(1)}(u_{25})$, $\hat{a}^{(2)}(u_{25})$ and $\hat{a}^{(3)}(u_{25})$ for $N=256$, where we considered either a Gaussian or an NIG L\'evy process as driving noise.}\label{figure:CAR_QQ}}
\end{figure}
\begin{table}
\center
\begin{tabular}{ |c||ccc|ccc|ccc|}
\hline
&\multicolumn{3}{ c| }{$\hat{a}^{(1)}$}& \multicolumn{3}{c|}{$\hat{a}^{(2)}$}& \multicolumn{3}{c|}{$\hat{a}^{(3)}$} \\
\cline{2-4}\cline{5-7}\cline{8-10}
$N$ & $u_{25}$ & $u_{50}$ & $u_{75}$ & $u_{25}$ & $u_{50}$ & $u_{75}$ & $u_{25}$ & $u_{50}$ & $u_{75}$ \\
\hline
1 & 3.5077 & 1.5266 & 5.6332 & 10.6488 & 5.1043 & 8.4159 & 1.0586 & 0.8761 & 0.7687 \\
4 & 0.3127 & 1.5355 & 0.7543 & 2.4044 & 2.1300 & 2.6313 & 0.4783 & 0.4464 & 0.3483 \\
16 & 0.2698 & 0.2551 & 0.3058 & 0.9397 & 1.0692 & 1.1790 & 0.2917 & 0.2272 & 0.1739 \\
64 & 0.0597 & 0.0868 & 0.1576 & 0.4116 & 0.5218 & 0.5996 & 0.1205 & 0.1134 & 0.0767 \\
256 & 0.0338 & 0.0470 & 0.0776 & 0.2163 & 0.2569 & 0.2948 & 0.0662 & 0.0591 & 0.0417 \\
\hline
\end{tabular}
\vspace*{-2mm}
\caption{\small{MSE$\times10^{3}$ of the estimators $\hat{a}^{(1)}(u_{i}),\hat{a}^{(2)}(u_{i})$ and $\hat{a}^{(3)}(u_{i})$ for $N=1,4,16,64,256$ and $i=25,50,75$ using the rectangular kernel (\ref{equation:rectangularkernel}). As driving noise we consider an NIG L\'evy process.}\label{table:2}}
\end{table}
\subsection{Simulation study: quasi-maximum likelihood and Whittle estimation}
\label{sec6-2}
We simulate a sequence of time-varying L\'evy-driven state space models as defined in (\ref{eq:seqtvLDstatespacesolution}) for the matrix functions
\begin{align*}
A(t)=\left(\begin{array}{cc}\vartheta_1(t) & 0 \\0 & \vartheta_2(t) \\\end{array}\right),~
B(t)=\left(\begin{array}{c}\frac{1}{\vartheta_2(t)-\vartheta_1(t)} \\ \frac{-1}{\vartheta_2(t)-\vartheta_1(t)}\\ \end{array}\right),
C(t)=\left(\begin{array}{c}-\vartheta_1(t)(1+\vartheta_2(t)) \\ -\vartheta_2(t)(1+\vartheta_1(t))\\ \end{array}\right),
\end{align*}
and $\Sigma_L=\vartheta_3(t)$, where
\begin{align*}
\vartheta_1(t)=-\tfrac{1}{2}+0.1\abs{\sin(\tfrac{t}{500})}\quad\text{and}\quad
\vartheta_2(t)=-3-0.2\abs{\cos(\tfrac{t}{500})},\quad t\in[0,2000].
\end{align*}
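The matrix functions can be assembled pointwise from $\vartheta_1$ and $\vartheta_2$; the sketch below reproduces the displayed definitions (in particular, $C(t)$ is written as a column, exactly as above):

```python
import numpy as np

def state_space_matrices(t):
    """Matrix functions A, B, C of the time-varying state space model,
    evaluated pointwise from the definitions above."""
    th1 = -0.5 + 0.1 * abs(np.sin(t / 500))
    th2 = -3.0 - 0.2 * abs(np.cos(t / 500))
    A = np.array([[th1, 0.0], [0.0, th2]])
    B = np.array([[1.0 / (th2 - th1)], [-1.0 / (th2 - th1)]])
    C = np.array([[-th1 * (1 + th2)], [-th2 * (1 + th1)]])
    return A, B, C
```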
We consider $\vartheta_3(t)=0.2$ in the Gaussian case and $\vartheta_3(t)=\frac{9\sqrt{2}}{16}\approx 0.7955$ in the NIG case. Using either the rectangular kernel (\ref{equation:rectangularkernel}) or the Epanechnikov kernel (\ref{equation:epankernel}) as localizing kernel, we compute for the aforementioned coefficient functions the
\begin{alignat*}{5}
&\text{quasi-maximum likelihood estimators }\qquad &&(\hat{\vartheta}_1^{QML}(u_i),&&\hat{\vartheta}_2^{QML}(u_i),&&\hat{\vartheta}_3^{QML}(u_i))\qquad &&\text{and the}\\
&\text{Whittle estimators }&&(\hat{\vartheta}_1^{W}(u_i),&&\hat{\vartheta}_2^{W}(u_i),&&\hat{\vartheta}_3^{W}(u_i)),\qquad &&i=1,\ldots,101,
\end{alignat*}
from Section \ref{sec5-3} and Section \ref{sec5-4}, respectively. All conditions of Theorem \ref{equation:consistencymodifiedQMLE} and Theorem \ref{theorem:whittleconsistent} are satisfied (see also Example \ref{example:timevaryingstatespacemodel}) such that both estimators are consistent.\\
Figures \ref{figure:CARMA_NIG_QML} and \ref{figure:CARMA_NIG_W} are in line with our theoretical findings and reflect the estimators' consistency. For both estimators, we observe that the mean over $400$ estimates recovers the respective true coefficient function more precisely as $N$ increases, independently of the driving noise and the localizing kernel. We note that the Epanechnikov kernel has a stronger smoothing effect than the rectangular kernel (see Figures \ref{figure:CARMA_NIG_QML} and \ref{figure:CARMA_NIG_W}).\\
Table \ref{table:3} and Table \ref{table:4} show that the MISE of all estimated coefficient functions likewise decreases as $N$ increases for both estimators, all localizing kernels and driving noises. However, the rectangular kernel yields a lower MISE.\\
As an example, we compare in Figure \ref{figure:CARMA_QQ_QML} and Figure \ref{figure:CARMA_QQ_W} for fixed estimation points $u_i$, $i=25,75$ the empirical distribution of the estimation error with a standard normal distribution, where we consider different localizing kernels and driving noises for the quasi-maximum likelihood and Whittle estimator. The results show that the distribution of the estimation error of both estimators is well approximated by a normal distribution, strengthening the hypothesis that the estimators are asymptotically normal.\\% Indeed, an application of Theorem \ref{theorem:asymptoticnormality} should pave the way to establish asymptotic normality of both estimators.\\
Overall, the performances of the quasi-maximum likelihood and the Whittle estimator are very similar, and neither of the estimators is preferable.
\begin{figure}[H]
\center
\includegraphics[width=16cm]{CARMA_estimation_QML}
\vspace*{-7mm}
\caption{\small{Coefficient functions $\vartheta_1, \vartheta_2, \vartheta_3$ and the mean over $400$ realizations of $\hat{\vartheta}_1^{QML}, \hat{\vartheta}_2^{QML}$ and $\hat{\vartheta}_3^{QML}$ for $N=16,256$ using either the rectangular or the Epanechnikov kernel.
For the simulation we considered an NIG L\'evy process.}\label{figure:CARMA_NIG_QML}}
\end{figure}
\vspace{-6pt}
\begin{figure}[H]
\center
\includegraphics[width=16cm]{CARMA_estimation_W}
\vspace*{-7mm}
\caption{\small{Coefficient functions $\vartheta_1, \vartheta_2, \vartheta_3$ and the mean over $400$ realizations of $\hat{\vartheta}_1^{W}, \hat{\vartheta}_2^{W}$ and $\hat{\vartheta}_3^{W}$ for $N=16,256$ using either the rectangular or the Epanechnikov kernel.
For the simulation we considered an NIG L\'evy process.}\label{figure:CARMA_NIG_W}}
\end{figure}
\begin{table}[H]
\center
\begin{tabular}{ |cc||ccc|ccc|}
\hline
& &\multicolumn{3}{ c| }{$K_{rect}$}& \multicolumn{3}{c|}{$K_{epan}$}\\
\cline{3-5}\cline{6-8}
& \multicolumn{1}{|c||}{} &&& &&& \\[-11pt]
Noise& \multicolumn{1}{|c||}{$N$} & $\hat{\vartheta}_1^{QML}$ & $\hat{\vartheta}_2^{QML}$ & $\hat{\vartheta}_3^{QML}$ &$\hat{\vartheta}_1^{QML}$ & $\hat{\vartheta}_2^{QML}$ & $\hat{\vartheta}_3^{QML}$ \\
\hline
\multirow{5}{*}{Gaussian}& \multicolumn{1}{|c||}{1} & 12.2178 & 954.9114 & 1.5844 & 15.9248 & 1074.9837 & 1.6684 \\
&\multicolumn{1}{|c||}{4} & 5.6206 & 527.5252 & 0.8054 & 6.7165 & 623.1291 & 0.9417 \\
&\multicolumn{1}{|c||}{16} & 2.7979 & 257.3131 & 0.3921 & 3.4222 & 314.5225 & 0.4845 \\
&\multicolumn{1}{|c||}{64} & 1.3190 & 125.3640 & 0.1887 & 1.6603 & 150.1066 & 0.2357 \\
&\multicolumn{1}{|c||}{256} & 0.6793 & 62.2253 & 0.0965 & 0.7965 & 74.7023 & 0.1159 \\
\hline
\multirow{5}{*}{NIG}& \multicolumn{1}{|c||}{1} & 13.3915 &962.2520 & 25.2487 & 15.1466 & 1081.8759 & 28.8395 \\
& \multicolumn{1}{|c||}{4} & 5.8316 & 528.2882 & 13.0114 & 6.8370 & 632.4238 & 15.7627 \\
& \multicolumn{1}{|c||}{16} & 2.8093 & 267.8274 & 6.5422 & 3.2770 & 310.6784 & 7.5447 \\
& \multicolumn{1}{|c||}{64} & 1.3502 & 125.6270 & 3.1771 & 1.6305 & 155.1777 & 3.8812 \\
& \multicolumn{1}{|c||}{256} & 0.6743 & 63.4687 & 1.6086 & 0.8241 & 76.5732 & 1.9886 \\
\hline
\end{tabular}
\vspace*{-2mm}
\caption{\small{MISE of the estimators $\hat{\vartheta}_1^{QML}, \hat{\vartheta}_2^{QML}$ and $\hat{\vartheta}_3^{QML}$ for $N=1,4,16,64,256$ using either the rectangular kernel (\ref{equation:rectangularkernel}) or the Epanechnikov kernel (\ref{equation:epankernel}). As driving noise we consider either a Gaussian or an NIG L\'evy process.}\label{table:3}}
\end{table}
\begin{figure}[H]
\includegraphics[width=16cm]{CARMA_QQ_QML}
\vspace*{-6mm}
\caption{\small{First row: normal Q-Q plots of the estimation error of the estimates $\hat{\vartheta}_1^{QML}(u_{25}),\hat{\vartheta}_2^{QML}(u_{25})$ and $\hat{\vartheta}_3^{QML}(u_{25})$ for $N=256$ using the rectangular kernel (\ref{equation:rectangularkernel}) and a Gaussian L\'evy process as driving noise.
Second row: normal Q-Q plots of the estimation error of the estimates $\hat{\vartheta}_1^{QML}(u_{75}),\hat{\vartheta}_2^{QML}(u_{75})$ and $\hat{\vartheta}_3^{QML}(u_{75})$ for $N=256$ using the Epanechnikov kernel (\ref{equation:epankernel}) and an NIG L\'evy process as driving noise.}\label{figure:CARMA_QQ_QML}}
\end{figure}
\begin{table}[H]
\center
\begin{tabular}{ |cc||ccc|ccc|}
\hline
& &\multicolumn{3}{ c| }{$K_{rect}$}& \multicolumn{3}{c|}{$K_{epan}$}\\
\cline{3-5}\cline{6-8}
& \multicolumn{1}{|c||}{} &&& &&& \\[-11pt]
Noise& \multicolumn{1}{|c||}{$N$} & $\hat{\vartheta}_1^{W}$ & $\hat{\vartheta}_2^{W}$ & $\hat{\vartheta}_3^{W}$ &$\hat{\vartheta}_1^{W}$ & $\hat{\vartheta}_2^{W}$ & $\hat{\vartheta}_3^{W}$ \\
\hline
\multirow{5}{*}{Gaussian}& \multicolumn{1}{|c||}{1} & 12.8997 & 1082.6087 & 1.5627 & 14.3332 & 1143.9525 & 1.8873 \\
&\multicolumn{1}{|c||}{4} & 5.2176 & 526.1243 & 0.7166 & 6.9049 & 626.2685 & 0.9188 \\
&\multicolumn{1}{|c||}{16} & 2.8774 & 256.1971 & 0.4000 & 3.3788 & 295.5396 & 0.4658 \\
&\multicolumn{1}{|c||}{64} & 1.3734 & 125.1810 & 0.1950 & 1.6296 & 153.6250 & 0.2351 \\
&\multicolumn{1}{|c||}{256} & 0.6737 & 61.8317 & 0.0960 & 0.8060 & 75.4662 & 0.1164 \\
\hline
\multirow{5}{*}{NIG}& \multicolumn{1}{|c||}{1} & 12.1261 & 960.6616 & 22.9088 & 14.0046 & 1127.7369 & 27.4745 \\
& \multicolumn{1}{|c||}{4} & 5.6328 & 485.4932 & 12.8253 & 6.8520 & 639.1451 & 15.1066 \\
& \multicolumn{1}{|c||}{16} & 2.7545 & 255.6482 & 6.4252 & 3.4720 & 313.6670 & 7.8922 \\
& \multicolumn{1}{|c||}{64} & 1.3238 & 124.9270 & 3.1781 & 1.5885 & 155.3078 & 3.8583 \\
& \multicolumn{1}{|c||}{256} & 0.6855 & 63.3847 & 1.6103 & 0.8083 & 77.1361 & 1.9350 \\
\hline
\end{tabular}
\vspace*{-2mm}
\caption{\small{MISE of the estimators $\hat{\vartheta}_1^{W}, \hat{\vartheta}_2^{W}$ and $\hat{\vartheta}_3^{W}$ for $N=1,4,16,64,256$ using either the rectangular kernel (\ref{equation:rectangularkernel}) or the Epanechnikov kernel (\ref{equation:epankernel}). As driving noise we consider a Gaussian or an NIG L\'evy process.}\label{table:4}}
\end{table}
\begin{figure}[H]
\includegraphics[width=16cm]{CARMA_QQ_W}
\vspace*{-6mm}
\caption{\small{First row: normal Q-Q plots of the estimation error of the estimates $\hat{\vartheta}_1^{W}(u_{25}),\hat{\vartheta}_2^{W}(u_{25})$ and $\hat{\vartheta}_3^{W}(u_{25})$ for $N=256$ using the rectangular kernel (\ref{equation:rectangularkernel}) and a Gaussian L\'evy process as driving noise.
Second row: normal Q-Q plots of the estimation error of the estimates $\hat{\vartheta}_1^{W}(u_{75}),\hat{\vartheta}_2^{W}(u_{75})$ and $\hat{\vartheta}_3^{W}(u_{75})$ for $N=256$ using the Epanechnikov kernel (\ref{equation:epankernel}) and an NIG L\'evy process as driving noise.}\label{figure:CARMA_QQ_W}}
\end{figure}
\section{Proofs}
\label{sec7}
\subsection{Proof for Section \ref{sec3-1}}
\label{sec7-1}
\begin{proof}[Proof of Theorem \ref{theorem:consistency}]
Clearly, Lemma \ref{lemma:integrabilityphi} implies that $\Phi$ is integrable. In the following we verify the sufficient conditions of \cite[Theorem 5.7]{V1998}.\\
First, we note that $M(\vartheta)$ is continuous, since for $\vartheta_1,\vartheta_2\in\Theta$ we have
\begin{align}\label{eq:continuityMforlater}
\begin{aligned}
\abs{M(\vartheta_1)-M(\vartheta_2)}&\leq\norm{\Phi(\tilde{Y},\vartheta_1)-\Phi(\tilde{Y},\vartheta_2)}_{L^1}\\
&\leq \norm{\vartheta_1-\vartheta_2} 3D_1\left(1+\sum_{k=0}^\infty \beta_k E[|\tilde{Y}_u(\Delta (1-k))|^q]\right).
\end{aligned}
\end{align}
To show uniform convergence in probability of $M_N(\vartheta)$ we use \cite{N1991}. For all $\vartheta \in\Theta$ Proposition \ref{proposition:inheritanceproperties} and Lemma \ref{lemma:integrabilityphi} imply that $\Phi((\tilde{Y}_u(t+\Delta (1-k)))_{k\in\mathbb{N}_0},\vartheta)$ is a locally stationary approximation of $\Phi\big(\big(Y_N\big(t+\Delta \frac{(1-k)}{N}\big)\big)_{k\in\mathbb{N}_0},\vartheta\big)$ for $p\geq1$. An application of \cite[Theorem 3.5]{SS2021} in the case of \hyperref[observations:O1]{(O1)} and \cite[Theorem 3.6]{SS2021} if \hyperref[observations:O2]{(O2)} holds, gives
\begin{gather*}
\norm{M_N(\vartheta)-M(\vartheta)}_{L^1}\underset{N\rightarrow\infty}{\longrightarrow}0, \text{ for all } \vartheta\in\Theta.
\end{gather*}
It is left to show stochastic equicontinuity of the family $(M_N(\vartheta))_{N\in\mathbb{N}}$. Define $g:\mathbb{R}^\infty\rightarrow\mathbb{R}$ by $g(x)=\sum_{k=0}^\infty \beta_k|x_k|^q$. Using the mean value theorem, we obtain $||y|^q-|z|^q|\leq q|y-z|(1+|y|^{q-1}+|z|^{q-1})$ for all $y,z\in\mathbb{R}$. An application of H\"older's inequality ensures $g\in\mathcal{L}_\infty^{1,q}(\beta)$. Since $\sum_{k=0}^\infty k\beta_k<\infty$, Proposition \ref{proposition:inheritanceproperties} implies that $g((\tilde{Y}_u(t+\Delta (1-k)))_{k\in\mathbb{N}_0})$ is a locally stationary approximation of $g\big(\big(Y_N\big(t+\frac{\Delta (1-k)}{N}\big)\big)_{k\in\mathbb{N}_0}\big)$ for $p=1$. Noting that $\frac{|K(x)|}{\int |K(x)|dx}$ is again a localizing kernel, we obtain from either \cite[Theorem 3.5]{SS2021} or \cite[Theorem 3.6]{SS2021} that \begin{align}
\begin{aligned}\label{equation:stochequiproof1}
&\frac{\delta_N}{b_N}\sum_{i=-m_N}^{m_N} \left|K\left(\frac{\tau_i^N-u}{b_N}\right)\right| \left(1+g\left(\left(Y_N\left(\tau_i^N+\frac{\Delta (1-k)}{N}\right)\right)_{k\in\mathbb{N}_0}\right)\right)\\
&\quad\underset{N\rightarrow\infty}{\overset{P}{\longrightarrow}}E\left[1+g\left(\left(\tilde{Y}_u\left(\Delta (1-k)\right)\right)_{k\in\mathbb{N}_0}\right)\right]\int_\mathbb{R} |K(x)|dx
=:E.
\end{aligned}
\end{align}
Then, for $\lambda=\frac{\eta}{6D_1 E}$ it holds
\begin{align}
\begin{aligned}\label{equation:stochequiproof2}
&P\left( \sup_{\norm{\vartheta_1-\vartheta_2}<\lambda}\left|M_N(\vartheta_1)-M_N(\vartheta_2) \right|>\eta\right)\\
&\leq P\left( \left| \frac{\delta_N}{b_N}\!\!\sum_{i=-m_N}^{m_N} \left|K\left(\frac{\tau_i^N-u}{b_N}\right)\right| \left(\!1\!+\!g\left(\!\left(Y_N\left(\tau_i^N\!+\!\frac{\Delta(1- k)}{N}\right)\right)_{k\in\mathbb{N}_0}\right)\!\right)\!-\!E \right| \!>\!E\!\right)\!\!\underset{N\rightarrow\infty}{\longrightarrow}\!0.
\end{aligned}
\end{align}
It follows that $\sup_{\vartheta\in\Theta}\norm{M_N(\vartheta)-M(\vartheta)}\underset{N\rightarrow\infty}{\overset{P}{\longrightarrow}}0$. Finally, we conclude with \cite[Theorem 5.7]{V1998}.
\end{proof}
\subsection{Proof for Section \ref{sec3-2}}
\label{sec7-2}
\begin{proof}[Proof of Theorem \ref{theorem:asymptoticnormality}]
For $M_N$ as defined in (\ref{eq:M_N}) we investigate the Taylor expansion of $\nabla_\vartheta M_N$ at $\vartheta^*$, which is given by
\begin{gather*}
\sqrt{\frac{b_N}{\delta_N}}\Big( \nabla_\vartheta M_N\left(\vartheta^*\right)\Big)=\sqrt{\frac{b_N}{\delta_N}} \left(\nabla_\vartheta M_N(\hat{\vartheta}_N) \right) - \sqrt{\frac{b_N}{\delta_N}} \left(\hat{\vartheta}_N-\vartheta^* \right) \left(\nabla_\vartheta^2 M_N(\tilde{\vartheta}) \right)
\end{gather*}
for some $\tilde{\vartheta}\in\Theta$ satisfying $\lVert\tilde{\vartheta}-\vartheta^*\rVert\leq\lVert\hat{\vartheta}_N-\vartheta^*\rVert$. From the definition of $\hat{\vartheta}_N$ it follows for sufficiently large $N$ that $\nabla_\vartheta M_N(\hat{\vartheta}_N)=0$. Hence, for $\bar{Y}_N=\big(Y_N\big(\tau_i^N+\Delta\frac{(1-k)}{N}\big)\big)_{k\in\mathbb{N}_0}$ and $\bar{Y}_u=(\tilde{Y}_u(N\tau_i^N+\Delta(1-k)))_{k\in\mathbb{N}_0}$ we obtain
\begin{align*}
\sqrt{\frac{b_N}{\delta_N}}\left(\hat{\vartheta}_N-\vartheta^* \right)&= -\sqrt{\frac{b_N}{\delta_N}}\left( \nabla_\vartheta M_N\left(\vartheta^*\right)\right)\left(\nabla_\vartheta^2 M_N(\tilde{\vartheta}) \right)^{-1}\\
&=-\Bigg(\sqrt{\frac{b_N}{\delta_N}} \sum_{i=-m_N}^{m_N} K_{rect}\left( \frac{\tau_i^N-u}{b_N}\right) \left( \nabla_\vartheta \Phi\left(\bar{Y}_N,\vartheta^*\right)-E\left[\nabla_\vartheta \Phi\left(\bar{Y}_N,\vartheta^*\right)\right]\right)\\
&\qquad \qquad +\sqrt{\frac{b_N}{\delta_N}} \sum_{i=-m_N}^{m_N} K_{rect}\left( \frac{\tau_i^N-u}{b_N}\right) E\left[\nabla_\vartheta \Phi\left(\bar{Y}_N,\vartheta^*\right)\right]\Bigg)\left(\nabla_\vartheta^2 M_N(\tilde{\vartheta}) \right)^{-1}\\
&=:-(P_1+P_2)\left(\nabla_\vartheta^2 M_N(\tilde{\vartheta}) \right)^{-1}.
\end{align*}
As a first step, we show asymptotic
}$ & EER & & $C^{\rm Prm}_{\rm min}$ & EER\\
\midrule
G-PLDA, LN & 0.26 & 2.5 & & 0.97 & 16.5 & & \textbf{0.68} & 9.7 & & 0.99 & 21.0\\
G-PLDA, no LN & 0.33 & 4.0 & & 0.97 & 17.8 & & 0.69 & 11.5 & & 0.98 & 21.3 \\
\midrule
HT-PLDA $\nu=2$, initialized from G-PLDA & 0.30 & 2.9 & & 0.96 & 16.7 & & 0.68 & 10.0 & & 0.98 & 21.1 \\
HT-PLDA $\nu=2$, trained with BXE & \textbf{0.21} & \textbf{2.1} & & \textbf{0.90} & \textbf{15.1} & & 0.74 & \textbf{9.3} & & \textbf{0.97} & \textbf{20.2} \\
\midrule
HT-PLDA, train $\nu=\infty$, test $\nu=2$ & 0.30 & 2.7 & & 0.97 & 16.7 & & 0.68 & 10.0 & & 0.98 & 21.2\\
HT-PLDA, train $\nu=2$, test $\nu=2$ & 0.31 & 3.2 & & 0.97 & 16.9 & & 0.69 & 10.4 & & 0.99 & 21.3\\
\bottomrule
\end{tabular}}
\end{table*}
\subsection{HT-PLDA for x-vectors}
\subsubsection{x-vector extractor}
The x-vector system is a modified version of the DNN in \cite{DS_ICASSP18}. The features are 23 dimensional MFCCs with a frame-length of 25ms,
mean-normalized over a sliding window of up to 3 seconds. An energy SAD is used to filter out nonspeech frames. The first few layers of the x-vector extractor operate on sequences of frames. They are a hierarchy of convolutional layers (only convolving in time) that provide a long temporal context (23 frames, 11 to each side of the center frame) with reduced complexity. Their outputs are processed by fully connected layers and followed by a statistics pooling layer that aggregates across time by computing the mean and standard deviation. This process aggregates information so that subsequent layers operate on the entire segment. The mean and standard deviation are concatenated together
and propagated through segment-level layers and finally the softmax output layer. The nonlinearities are all rectified linear units (ReLUs).
The DNN is trained to classify the $N$ speakers in the training data.
A training example consists of a chunk of speech features (about 3 seconds average),
and the corresponding speaker label.
After training, x-vectors (512 dimension) are extracted from an affine layer 2 levels above the pooling layer.
The software framework has been made available in the Kaldi toolkit.
An example recipe is in the main branch of Kaldi at \url{https://github.com/kaldi-asr/kaldi/tree/master/egs/sre16/v2} and a pretrained x-vector system can be downloaded from \url{http://kaldi-asr.org/models.html}.
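The statistics pooling step described above can be sketched as follows; this is a simplified NumPy version, and the actual Kaldi implementation differs in details:

```python
import numpy as np

def statistics_pooling(frames):
    """Aggregate frame-level activations of shape (T, D) into a single
    segment-level vector by concatenating the mean and the standard
    deviation over time, so subsequent layers see the whole segment."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    return np.concatenate([mean, std])   # shape (2*D,)

# e.g. 300 frames of 512-dimensional activations -> one 1024-dim vector
pooled = statistics_pooling(np.random.default_rng(0).normal(size=(300, 512)))
```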
\subsubsection{Experimental setup}
The DNN training data consists of both telephone and microphone speech (mostly English). All wideband audio is downsampled to 8kHz. We pooled data from Switchboard, Fisher, Mixer (SRE 2004-2010), and VoxCeleb\footnote{We removed the 60 speakers that overlap with SITW since we evaluate on it.} \cite{Nagrani17} datasets yielding approximately 175K recordings from 15K speakers. Additionally, the recordings were augmented (using noise, reverb, and music) to produce 450K examples. From this augmented set, 15K chunks of 2 to 4 seconds were extracted for each speaker to form minibatches (64 chunks). We sampled equally for each speaker (i.e., balanced the training data per speaker) and trained for 3 epochs.
The G-PLDA and HT-PLDA classifiers are trained on a subset of the augmented data (we removed Switchboard and Fisher data) comprising 7K speakers and 230K recordings. For all experiments, we use a speaker subspace of dimension $d=150$. To explore the effects of LN on x-vectors we present results with and without it. More precisely, although LN comprises multiple steps (center, whitening, and projection onto unit-sphere) we use the notation ``no LN'' to refer to the lack of projection. We always center and whiten the data. Finally, the scores are normalized using adaptive symmetric score normalization (ass-norm)~\cite{sturim2005snorm}.
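The preprocessing variants compared here (center, whiten, and optionally project onto the unit sphere) can be sketched as follows; function names and the Cholesky-based whitening are our own choices, not the authors' implementation:

```python
import numpy as np

def fit_ln(train):
    """Estimate the centering mean and a whitening transform from the
    PLDA training x-vectors (rows of `train`)."""
    mu = train.mean(axis=0)
    cov = np.cov(train - mu, rowvar=False)
    # x -> (x - mu) @ W has identity sample covariance
    W = np.linalg.inv(np.linalg.cholesky(cov)).T
    return mu, W

def apply_ln(x, mu, W, project=True):
    """Center and whiten; `project=False` corresponds to the "no LN"
    setting above, i.e. the projection onto the unit sphere is skipped."""
    y = (x - mu) @ W
    if project:
        y = y / np.linalg.norm(y, axis=-1, keepdims=True)
    return y
```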
We report results on SITW core-core condition
\cite{mclaren2016} and the Cantonese subset of NIST SRE'16~\cite{NIST_SRE_2016} to characterize the system behavior under microphone and telephone recording conditions. Each of these sets provides development data that we use for centering the evaluation data and computing ass-norm. The PLDA training set is always centered to its own mean and used to estimate the whitening transform. Note that this transformation does not have any impact if no projection is applied to the x-vectors. Additionally, for the SRE'16 set, we also show results applying unsupervised domain adaptation~\cite{unsup_adapt} of the PLDA parameters.
\subsubsection{Results}
The x-vector results are presented in tables~\ref{tbl:results_xvecs_sitw} and~\ref{tbl:results_xvecs_sre16}. To the best of our knowledge, these are the best numbers published on both tasks. Moreover, the HT-PLDA classifier with no LN outperforms G-PLDA (even with domain adaptation for SRE'16). It is interesting to note that LN is detrimental to the HT-PLDA performance. Recall that the precision scaling factors $b_{ij}$ in~\eqref{eq:Banda} are determined by $\mathbf{r}'_{ij}\mathbf{G}\mathbf{r}_{ij}$, the energy of the x-vectors in the complement of the speaker subspace. This yields scaling factors that get smaller as the energy of the x-vectors outside of the speaker subspace grows (which is consistent with the phenomenon that our model is trying to capture). Projecting the x-vectors onto the unit sphere interferes with this process, and the results indicate that it is detrimental. The G-PLDA classifier seems suboptimal for these tasks, but still benefits from LN. This is more noticeable for the SRE'16 results than for SITW, where LN does not seem to have much effect. This is an indication that x-vectors behave differently from i-vectors in this regard, which requires further investigation. Finally, unsupervised domain adaptation using parameter interpolation works quite well for both G-PLDA and the HT-PLDA model.
\begin{table}
\caption{\label{tbl:results_xvecs_sitw} G-PLDA vs HT-PLDA on eval part of SITW core-core using x-vectors.}
\vspace{3mm}
\centerline{
\begin{tabular}{l c c }
\toprule
System & $\rm minDCF_{\rm 0.01}$ & EER \\
\midrule
G-PLDA, LN & 0.34 & 3.3 \\
G-PLDA, no LN & 0.34 & 3.4\\
\midrule
HT-PLDA, $\nu=2$, LN& 0.34& 3.4 \\
HT-PLDA, $\nu=2$, no LN& \textbf{0.33} & \textbf{2.7} \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}
\caption{\label{tbl:results_xvecs_sre16} G-PLDA vs HT-PLDA (with and without adaptation) on SRE 2016 Cantonese using x-vectors. The performance metrics are the balanced $C^{\rm Prm}_{\rm min}$, as computed by the NIST scoring tool, $\rm minDCF_{\rm 0.01}$ and EER (\%).}
\vspace{3mm}
\centerline{
\begin{tabular}{l c c c}
\toprule
System & $C^{\rm Prm}_{\rm min}$ (bal.) & $\rm minDCF_{\rm 0.01}$ & EER \\
\midrule
G-PLDA, LN & 0.30 & 0.31 & 4.5 \\
G-PLDA, no LN & 0.33 & 0.32 & 4.7 \\
HT-PLDA, $\nu=2$, LN & 0.31& 0.31 & 4.5 \\
HT-PLDA, $\nu=2$, no LN & \textbf{0.30}& \textbf{0.30} & \textbf{3.8} \\
\midrule
+ unsupervised adaptation \\
\midrule
G-PLDA, LN & 0.27 & 0.27 & 3.9 \\
G-PLDA, no LN & 0.29 & 0.28 & 4.3 \\
HT-PLDA, $\nu=2$, LN & 0.27 & 0.28 & 4.2 \\
HT-PLDA, $\nu=2$, no LN & \textbf{0.25 }& \textbf{0.26} & \textbf{3.2} \\
\bottomrule
\end{tabular}}
\end{table}
\section{Discussion}
In this paper and in our previous work~\cite{Brummer_Odyssey18}, we revisit heavy-tailed PLDA and re-engineer it to provide a computationally attractive alternative to the existing state of the art given by Gaussian PLDA with length normalization. Our experiments show benefits to HT-PLDA on both i-vectors and x-vectors on three different evaluation sets. In the case of i-vectors, discriminative training worked better than generative training. In the case of x-vectors, only generative training was tried to date, and this gave record performance on SRE'16 and SITW.
In future work, we plan to try discriminative training for the HT-PLDA backend also on x-vectors. After that, we want to backpropagate the discriminative training, \emph{through} this backend and also into the x-vector extractor. The idea is that the variable precisions of HT-PLDA should serve as a vehicle for uncertainty propagation from the input MFCCs to the output scores, as more fully motivated in~\cite{Brummer_Odyssey18,meta_embeddings}.
\section{Acknowledgements}
This work was started at the Johns Hopkins University HLTCOE SCALE 2017 Workshop. The authors thank the workshop organizers for inviting us to attend and (in the case of Niko Br\"ummer) for generous travel funding. The work was also supported by Czech Ministry of Interior project No. VI20152020025 ``DRAPAK'', the Google Faculty Research Award program, Technology Agency of the Czech Republic project No. TJ01000208 ``NOSICI'', and by Czech Ministry of Education, Youth and Sports from the National Programme of Sustainability (NPU II) project ``IT4Innovations excellence in science - LQ1602''.
\bibliographystyle{IEEEtran}
\subsubsection*{References}}
\usepackage{mathtools}
\usepackage{booktabs}
\usepackage{tikz}
\usepackage{algorithm2e}
\usepackage{multirow}
\usepackage{listings}
\usetikzlibrary{calc}
\usepackage[many]{tcolorbox}
\tcbuselibrary{listings}
\newcommand{\swap}[3][-]{#3#1#2}
\title{PASOCS: A Parallel Approximate Solver for Probabilistic Logic Programs under the Credal Semantics}
\author[1]{\href{mailto:David Tuckey <[email protected]>?Subject=Your UAI 2021 paper}{David Tuckey}{}}
\author[2]{Alessandra Russo}
\author[2]{Krysia Broda}
\affil[1,2]{%
Department of Computing\\
Imperial College London\\
London, UK
}
\affil{%
[email protected]
}
\begin{document}
\maketitle
\begin{abstract}
The Credal semantics is a probabilistic extension of the answer set semantics which can be applied to programs that may or may not be stratified. It assigns to atoms a set of acceptable probability distributions, characterised by its lower and upper bounds. Performing exact probabilistic inference under the Credal semantics is computationally intractable. This paper presents PASOCS (Parallel Approximate SOlver for the Credal Semantics), a first solver, based on sampling, for probabilistic inference under the Credal semantics. PASOCS performs both exact and approximate inference for queries given evidence. Approximate solutions can be generated using any of the following sampling methods: naive sampling, Metropolis-Hastings and Gibbs Markov chain Monte Carlo. We evaluate the fidelity and performance of our system when applied to both stratified and non-stratified programs. We perform a sanity check by comparing PASOCS to available systems for stratified programs, where the semantics agree, and show that our system is competitive on unstratified programs.
\end{abstract}
\section{Introduction}\label{sec:intro}
Probabilistic logic programming (PLP) is a subfield of Artificial Intelligence aimed at handling uncertainty in formal arguments by combining logic programming with probability theory~\citep{Riguzzi2018}. Traditional logic programming languages are extended with constructs, such as probabilistic facts and annotated disjunctions, to model complex relations over probabilistic outcomes. Different probabilistic semantics for PLP have been proposed in the literature of which Sato's distribution semantics \citep{Sato1995} is to date the most prominent one. It underpins a variety of PLP systems including Problog \citep{Fierens2015}, PRISM \citep{Sato2001} and Cplint \citep{Riguzzi2007}, which extend Prolog-based logic programming with probabilistic inference. Probabilistic extensions have also been proposed within the context of Answer Set Programming (ASP), leading to systems such as P-log \citep{Baral2004}, $\mathrm{LP}^{\mathrm{MLN}}$ \citep{Lee2015} and PrASP \citep{Nickles2015}. Although all these approaches and systems make use of different methodologies ranging from knowledge compilation \citep{Fierens2015} to various forms of sampling \citep{Nampally2014,Azzolini2020,Shterionov2010} to compute probabilistic outcomes, they all share the characteristic of computing point probabilities, assuming a single probability distribution over the outcomes of a PLP.
In real-world applications, uncertainty comes not only in the form of likelihoods of truth or falsity of Boolean random variables, but also as a result of incomplete knowledge or non-deterministic choices, which are not quantifiable probabilistically. These concepts are expressible in Answer Set Programming, for instance through negative loops and choice rules, and semantically lead to multiple models (answer sets \citep{Gelfond1988}) for a given answer set program. Probabilistic inference has, in this case, to cater for the possibility of multiple answer sets for a given total choice of probabilistic variables. The Credal semantics captures this property and generalises Sato's distribution semantics \citep{Lukasiewicz2005, Cozman2020}. It attributes to the elements of a probabilistic logic program a set of probability measures instead of single point values. This allows incomplete knowledge to be represented on top of stochastic behaviours \citep{Halpern2018a} without having to make any stochastic assumptions on missing information. The result is that each element of the program has lower and upper bound probabilities, akin to worst- and best-case scenarios. To the best of our knowledge, no probabilistic inference system has been proposed for the Credal semantics.
This paper addresses this problem by presenting the first approximate solver for the Credal semantics, named PASOCS (Parallel Approximate SOlver for the Credal Semantics)\footnote{\href{https://github.com/David-Tuc/pasocs}{https://github.com/David-Tuc/pasocs}}. It is applicable to stratified PLPs, for which the Credal semantics collapses to the point probabilities of Sato's distribution semantics, but also to more general problems, where the logic programs involved have multiple answer sets and PASOCS returns probability bounds as described by the Credal semantics. Performing probabilistic inference under the Credal semantics is inherently a hard computational task \citep{Maua2020}. Though our solver supports exact inference through the computation of every possible world, its true aim is to allow for approximate inference by sampling the space of possible worlds and using Clingo \citep{Gebser2014} to solve the resulting logic programs. The system incorporates different sampling methods: naive sampling, Metropolis-Hastings and Gibbs Markov chain Monte Carlo (MCMC) sampling. Solving calls are made in parallel to leverage the availability of multi-core processing nodes and scale to high-throughput computing clusters.
We evaluate the fidelity and performance of our system when applied to both stratified and non-stratified programs. We perform a sanity check by comparing PASOCS to available PLP systems for stratified programs, where the different semantics agree, and show that our system is competitive while working with a less restrictive semantics on unstratified programs.
The paper is structured as follows: Section \ref{sec::background} introduces the credal semantics to allow for the presentation of PASOCS in Section \ref{sec::PASOCS}. We evaluate our system in Section \ref{sec::experiments}. We then present related works in Section \ref{sec::related_works} and conclude in Section \ref{sec::conclusion}.
\section{Background}
\label{sec::background}
This section introduces the main two notions used throughout the paper: the syntax and semantics for Answer Set Programming (ASP)~\citep{Gelfond1988}, and the Credal semantics \citep{Lukasiewicz2005, Cozman2020} for PLP.
\subsection{Answer set semantics}
We assume the following subset of the ASP language\footnote{For the full syntax and semantics of ASP please see~\citep{Gebser2014}.}. A {\em literal} can be either an atom of the form $p(t_1,...,t_n)$, where $p$ is a predicate of arity $n$ and $t_i$ are either constants or variables, or its {\em default negation} $not\textnormal{ }p(t_1,...,t_n)$, where $not$ is {\em negation as failure}. A rule $r$ is of the form $A_1\leftarrow B_1, ..., B_n,not\textnormal{ }B_{n+1}, ..., not\textnormal{ }B_m$ where $A_1$ and all $B_{i}$ are atoms. We call $A_1$ the {\em head} of the rule (denoted $h(r)$) and
$B_1, ..., B_n,not\textnormal{ }B_{n+1}, ..., not\textnormal{ }B_m$ the {\em body} of the rule. Specifically, we refer to
$\{B_1,...,B_n\}$ as the {\em positive body} literals of the rule $r$ (denoted $B^+(r)$) and to $\{not\textnormal{ }B_{n+1},...,not\textnormal{ }B_m\}$ as the {\em negative body} literals of the rule $r$ (denoted $B^-(r)$). A rule $r$ is said to be {\em definite} if $B^-(r)=\emptyset$. A
{\em fact} is a rule with an empty body. An ASP program is a finite set of rules.
Given an ASP program $P$, the Herbrand Base of $P$, denoted as $HB_{P}$, is the set of all ground (variable free) atoms that can be formed from the predicates and constants that appear in $P$. The grounding of an ASP program $P$, denoted $gr(P)$, is the program composed of all possible ground rules constructed from the rules in $P$ by substituting variables with constants that appear in $P$.
Given an ASP program $P$ and a set $I\subseteq HB_P$, the {\em reduct} of $P$ with respect to $I$, denoted $P^{I}$, is the ground program constructed from $gr(P)$ by removing any rule whose body contains a negative body literal $not\textnormal{ }B_j$ with $B_j\in I$, and removing all negative literals from the remaining rules. Note that $P^{I}$ is, by construction, a ground definite program, i.e., composed only of definite rules and facts. A model of $P^{I}$ is an interpretation $I^{'}\subseteq HB_P$ (a set of ground atoms) such that for each rule $r \in P^I$, $h(r) \in I'$ or $B^+(r) \not\subseteq I'$. The Least Herbrand Model of $P^{I}$, denoted $LHM(P^{I})$, is the minimal model (with respect to set inclusion) of $P^{I}$. An interpretation $A\subseteq HB_P$ is an {\em Answer Set} of an ASP program $P$ if and only if $A=LHM(P^{A})$. The set of Answer Sets of a program $P$ is denoted $AS(P)$.
An ASP program $P$ is \textit{stratified} if it can be written as $P = P_1 \cup P_2 \cup ... \cup P_k$ such that $P_i\cap P_j=\emptyset$ for any $1\leq i,j\leq k$ with $i\neq j$, any positive body literal $B_i$ of a rule in $P_i$ is the head of a rule in $P_j$ with $j \leq i$, and the atom of any negative body literal $not\textnormal{ }B_j$ of a rule in $P_i$ is the head of a rule in $P_j$ with $j<i$.
A stratified ASP program $P$ has only one Answer Set $A$ (i.e., $|AS(P)|=1$), which is the unique interpretation such that $A=LHM(P^{A})$. Non-stratified ASP programs may have multiple Answer Sets (i.e., $|AS(P)|> 1$).
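As an illustration of these definitions, the reduct and answer sets of a small ground program can be brute-forced directly. The following Python sketch uses our own toy encoding of ground rules as (head, positive body, negative body) triples; it is purely illustrative and not PASOCS code:

```python
from itertools import combinations

def reduct(program, I):
    """Gelfond-Lifschitz reduct P^I: drop rules with a negative body atom
    in I, strip the negative literals from the remaining rules."""
    return [(h, pos, frozenset()) for (h, pos, neg) in program if not (neg & I)]

def least_model(definite_program):
    """Least Herbrand model of a ground definite program (fixpoint iteration)."""
    M = set()
    changed = True
    while changed:
        changed = False
        for h, pos, _ in definite_program:
            if pos <= M and h not in M:
                M.add(h)
                changed = True
    return M

def answer_sets(program, atoms):
    """Brute force: I is an answer set iff I == LHM(P^I)."""
    found = []
    for r in range(len(atoms) + 1):
        for I in combinations(sorted(atoms), r):
            I = frozenset(I)
            if least_model(reduct(program, I)) == set(I):
                found.append(set(I))
    return found

# Toy program:  a.   p :- a, not q.   q :- not p.
P = [("a", frozenset(), frozenset()),
     ("p", frozenset({"a"}), frozenset({"q"})),
     ("q", frozenset(), frozenset({"p"}))]
```

Here the negative loop between $p$ and $q$ yields two answer sets, $\{a,p\}$ and $\{a,q\}$, matching the definition above.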
\subsection{Credal semantics}
\label{sec::cs}
\begin{figure}[t]
\centering
\begin{lstlisting}[]
0.3::a.
p :- not q, a.
q :- not p.
\end{lstlisting}
\caption{Example of a simple probabilistic logic program. $a$ is true with probability $0.3$ and false with probability $0.7$. This program has only two total choices: $\{a\}$ and $\{\}$. With the total choice where $a$ is false, there is only one answer set, $\{q\}$; when $a$ is true there are two possible answer sets, $\{p, a\}$ and $\{q, a\}$. Hence $\underline{P}(q) = 0.7$, $\overline{P}(q) = 1$, $\underline{P}(p) = 0$ and $\overline{P}(p) = 0.3$. }
\label{fig::ex}
\end{figure}
The Credal semantics is a generalisation of Sato's distribution semantics \citep{Sato1995} which allows probabilistic logic programs to be non-stratified. In this paper, we consider PLP programs $P=\langle P_l, P_f\rangle$ that are composed of a (possibly non-stratified) logic program $P_l$ and a set $P_f$ of {\em probabilistic facts} of the form $pr::B$, where $B$ is a ground atom, representing a Boolean random variable, and $pr\in[0,1]$ defines its probability distribution: $p(B =\mbox{true}) = pr$ and $p(B=\mbox{false})=1-pr$. Informally, $B$ is true with probability $pr$ and false with probability $1-pr$. Probabilistic facts are assumed to be independent Boolean random variables and to have unique ground atoms (i.e., an atom $B$ of a probabilistic fact may appear only once in $P_f$). PLP programs $P=\langle P_l, P_f\rangle$ are also assumed to satisfy the {\em disjoint condition}, that is, atoms of probabilistic facts cannot appear as head atoms of any rule in $gr(P_l)$.
Atoms of probabilistic facts can be ``chosen'' to be true or false. This choice can be seen as a probabilistic outcome of their truth value governed by their probability distribution. We refer to a {\em total choice} $C\in 2^{P_f}$ as a subset of probabilistic facts that are chosen to be true: $B$ is considered to be true in a total choice $C$ if $(pr::B)\in C$. Each total choice $C$ has an associated probability given by
$Prob(C) = \prod_{(pr_i::B_i) \in C}\; pr_i\;\prod_{(pr_j::B_j) \in P_f\setminus C}\;(1-pr_j)$, since probabilistic facts are assumed to be independent. Given a total choice $C$, we denote with $C_p =\{B|(pr::B)\in C\}$ the set of ground atoms that appear in $C$ and with $\bigwedge C =\bigwedge_{(pr::B)\in C}\textnormal{ } B$ the logical conjunction of the atoms chosen to be true in $C$. Given a PLP $P=\langle P_l, P_f\rangle$, we define its Herbrand Base as $HB_{P} = HB_{P_l}\cup\{B|(pr::B) \in P_f\}$.
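As an illustration, the total choices of a small $P_f$ and their probabilities can be enumerated directly. The following Python sketch uses illustrative names (a plain atom-to-probability map), not PASOCS's API:

```python
from itertools import combinations

def total_choices(facts):
    """Enumerate all 2^|P_f| total choices of a map atom -> probability,
    with Prob(C) the product over the independent probabilistic facts."""
    atoms = sorted(facts)
    for r in range(len(atoms) + 1):
        for chosen in combinations(atoms, r):
            C = set(chosen)
            prob = 1.0
            for atom, pr in facts.items():
                prob *= pr if atom in C else (1.0 - pr)
            yield C, prob

# e.g. two independent facts 0.3::a and 0.6::b give four total choices
choices = list(total_choices({"a": 0.3, "b": 0.6}))
assert abs(sum(p for _, p in choices) - 1.0) < 1e-12  # probabilities sum to 1
```

Since the facts are independent, the probabilities of the $2^{|P_f|}$ total choices always sum to $1$.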
For a PLP $P$, probability measures $Pr$ are defined over its interpretations $I\subseteq HB_P$.
Given an atom $B$ in the language of a PLP program $P$, we define its probability (by abuse of notation) as $Pr(B) = \sum_{I\subseteq HB_P, \;I\models B} Pr(I)$. We also define the probability of a conjunctive formula $F$ as $Pr(F) = \sum_{I\subseteq HB_P,\;I\models F} Pr(I)$, the sum of the probabilities of the interpretations in which $F$ is true. A PLP is said to be consistent under the Credal semantics if for all total choices $C$ we have $AS(P_l \cup C_p) \neq \emptyset$. The Credal semantics links the notion of probability of a total choice with the notion of answer sets \citep{Cozman2017}. In particular, given a PLP $P$, a \textit{probability model} $Pr$ is a probability measure such that every interpretation $I\subseteq HB_P$ with $Pr(I)>0$ is an answer set of $P_l\cup C_p$ for some total choice $C$ ($Pr(I)>0\Rightarrow\exists C \in 2^{P_f} \textnormal{ s.t. }I\in AS(P_l\cup C_p)$), and for all total choices $C$, $Pr(\bigwedge C) = Prob(C)$.
The Credal semantics of a PLP program $P=\langle P_l, P_f\rangle$ is its set of probability models. Given a query with evidence $q = (Q|E)$, where $Q$ and $E$ are sets of truth assignments to ground atoms in $HB_P$, the Credal semantics associates to it a set of probability measures, characterised by an upper bound $\overline{P}(Q|E)$ and a lower bound $\underline{P}(Q|E)$. An example is given in Figure \ref{fig::ex}. \citet{Cozman2017} provide an algorithm to compute the upper and lower bounds of a given query with evidence, which we report in Algorithm \ref{alg::cs}. We say that a PLP $P$ is stratified when $P_l$ is stratified. For such programs, $\underline{P}(Q|E) = \overline{P}(Q|E)$ for any query $(Q|E)$.
\begin{algorithm}[t]
\SetAlgoLined
\KwData{PLP $P=\langle P_l, P_f\rangle$ and query $(Q|E)$}
\KwResult{$[\underline{P}(Q|E),\overline{P}(Q|E)]$}
a,b,c,d = 0\;
\ForEach{Total choice C}{
\lIf{$Q\cup E$ is true in every answer set of $AS(P_l\cup C_p)$}{
a $\leftarrow$ a + $Prob(C)$}
\lIf{$Q\cup E$ is true in some answer set of $AS(P_l\cup C_p)$}{
b $\leftarrow$ b + $Prob(C)$}
\lIf{$Q$ is false and $E$ is true in every answer set of $AS(P_l\cup C_p)$}{
c $\leftarrow$ c + $Prob(C)$}
\lIf{$Q$ is false and $E$ is true in some answer set of $AS(P_l\cup C_p)$}{
d $\leftarrow$ d + $Prob(C)$}
}
\uIf{$ b+c=0 $ and $ d>0 $}{\KwRet [0,0]}
\uElseIf{$ a+d=0 $ and $ b>0 $}{\KwRet [1,1]}
\Else{\KwRet $[a/(a+d), b/(b+c)]$}
\caption{Algorithm to compute the lower and upper bounds of a query from answer sets, from \citet{Cozman2017}.}
\label{alg::cs}
\end{algorithm}
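A direct transcription of Algorithm \ref{alg::cs} is short. The Python sketch below uses our own names (\texttt{worlds} pairs each total choice's probability with its answer sets) and reproduces the bounds of the program in Figure \ref{fig::ex}; it is illustrative, not PASOCS's optimized implementation:

```python
def credal_bounds(worlds, query_true, evidence_true):
    """Lower/upper bounds of P(Q|E), transcribing Algorithm 1.

    `worlds` pairs each total choice's probability with its answer sets;
    `query_true(A)` / `evidence_true(A)` test one answer set A."""
    a = b = c = d = 0.0
    for prob, ans_sets in worlds:
        qe = [query_true(A) and evidence_true(A) for A in ans_sets]
        nqe = [(not query_true(A)) and evidence_true(A) for A in ans_sets]
        if all(qe):
            a += prob
        if any(qe):
            b += prob
        if all(nqe):
            c += prob
        if any(nqe):
            d += prob
    if b + c == 0 and d > 0:
        return (0.0, 0.0)
    if a + d == 0 and b > 0:
        return (1.0, 1.0)
    return (a / (a + d), b / (b + c))

# Figure 1: total choice {} (prob 0.7) yields one answer set {q};
# total choice {a} (prob 0.3) yields {p,a} and {q,a}.
worlds = [(0.7, [{"q"}]), (0.3, [{"p", "a"}, {"q", "a"}])]
```

With no evidence, querying $q$ over these worlds gives the bounds $[0.7, 1]$ and querying $p$ gives $[0, 0.3]$, as in Figure \ref{fig::ex}.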
\subsection{Sampling}
Sampling is a process by which one randomly selects a subset of the population to estimate a distribution \citep{Montgomery1994}. It has been applied to different probabilistic logic programming settings under various semantics \citep{Shterionov2010,Nickles2015,Azzolini2019,Nampally2014}.
A well-known family of sampling methods is Markov chain Monte Carlo (MCMC), where the next sample is built from the current one instead of being drawn independently from the distribution. In this paper, we use the well-known Metropolis-Hastings (MH) \citep{Hastings1970} and Gibbs \citep{Geman1984} algorithms, which are both MCMC methods. The MH MCMC method creates samples using an intermediate Markov chain, which is easier to traverse, and accepts each new sample with a certain ``acceptance'' probability. Gibbs MCMC sampling re-samples one (or a fixed number $k$, in the case of block Gibbs sampling) of the random variables from its conditional distribution at each new sample $n+1$. It is a special case of MH sampling where the acceptance probability is always $1$. For both methods, it is common practice to perform a \textit{burn-in} step, i.e., to take $b$ sampling steps at the beginning and discard them, in order to minimize the impact of the initialization.
\section{PASOCS}
\label{sec::PASOCS}
We now present our solver PASOCS (Parallel Approximate SOlver for the Credal Semantics), a sampling-based system for probabilistic inference under the Credal semantics. PASOCS is aimed at computing the lower and upper bounds of queries with evidence for probabilistic ASP programs. The system allows for exact and approximate inference by computing, or sampling, the set of total choices and solving the resulting ASP programs using the ASP solver Clingo \citep{Gebser2008}. The calls to Clingo are parallelized, which allows scaling the computation over multiple CPUs. In what follows, we present the input probabilistic ASP language accepted by PASOCS and give details of the sampling parameters.
\subsection{Input Language}
\label{sec::input}
A PASOCS program $P=P_{ASP} \cup P_A$ is an ASP program ($P_{ASP}$) extended with a set of rules ($P_A$) annotated with probabilities. Annotated rules are of the following forms, where $p$, $p1$, $p2$, $p3$ are ground atoms, $pr$, $pr1$, $pr2$, $pr3$ are probabilities and $L_1,...,L_m$ are literals.
\vspace{-3mm}
\begin{equation*}
\begin{split}
&pr::p. \textnormal{ (1)} \\
&pr::p \textnormal{ :- }L_1, ..., L_m. \textnormal{ (2)} \\
&pr1::p1;pr2::p2;pr3::p3. \textnormal{ (3)}\\
&pr1::p1;pr2::p2;pr3::p3\textnormal{ :- }L_1, ..., L_m. \textnormal{ (4)}
\end{split}
\end{equation*}
A probabilistic ASP program $P=P_{ASP} \cup P_A$ can be translated into an equivalent PLP program $\langle P_l, P_f\rangle$ as follows. Firstly, we initialise $P_l=P_{ASP}$. An annotated rule of type (1) is already a probabilistic fact, so we add $pr::p$ to $P_f$. An annotated rule of type (2) is syntactic sugar for a ``probabilistic clause'': we add
a probabilistic fact $pr::p_i$ to $P_f$ and the clause $p \textnormal{ :- } L_1, ..., L_m, p_i$ to $P_l$, where $i$ is a unique identifier. An annotated rule of type (3) is an annotated disjunction, meaning that a ``probabilistic fact'' takes value $p1$ with probability $pr1$, value $p2$ with probability $pr2$, and so on. The sum of the probabilities in the annotated disjunction has to be less than or equal to $1$. An annotated rule of type (3) is transformed, using the mechanism described in \citep[Chapter~3]{Gutmann2011a}, into a set of clauses, which are added to $P_l$, and related probabilistic facts, which are added to $P_f$ (see Figure \ref{fig::disjunction} for an example translation).
Annotated rules of type (4) are translated similarly to rules of type (3), with the addition that the body literals $L_1,...,L_m$ are added to the body of all the rules created from the disjunctive head.
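The renormalisation in Figure \ref{fig::disjunction} generalises to any number of disjuncts. The following Python sketch makes the scheme explicit; every name in it (the \texttt{pf} fact names, the tuple encoding of rules) is illustrative rather than PASOCS's internal representation:

```python
def translate_disjunction(head, uid=""):
    """Translate an annotated disjunction pr1::p1; ...; prn::pn into
    probabilistic facts and rules, following the renormalised switch
    probabilities of Figure 2 (a = pr1, b = pr2/(1-pr1), ...).
    `head` is a list of (atom, probability) pairs."""
    prob_facts, rules = {}, []
    remaining = 1.0        # probability mass not yet claimed
    negated = []           # switches that must have failed earlier
    for i, (atom, pr) in enumerate(head, start=1):
        pf = f"pf{uid}{i}"
        prob_facts[pf] = pr / remaining   # e.g. pr2 / (1 - pr1)
        rules.append((atom, [pf] + [f"not {n}" for n in negated]))
        negated.append(pf)
        remaining -= pr
    return prob_facts, rules
```

For instance, translating $0.2::p1; 0.3::p2; 0.5::p3$ gives switch probabilities $0.2$, $0.3/0.8 = 0.375$ and $0.5/0.5 = 1$, with each rule negating the earlier switches, matching the figure.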
\begin{figure}[t]
\centering
\begin{lstlisting}[]
a::pf1.
b::pf2.
c::pf3.
p1 :- pf1.
p2 :- pf2, not pf1.
p3 :- pf3, not pf1, not pf2.
\end{lstlisting}
\caption{Translation of annotated rule (3) in Section \ref{sec::input} following the method in \citep[Chapter~3]{Gutmann2011a}. The probabilistic facts have probabilities $a=pr1$, $b = \dfrac{pr2}{1-pr1}$ and $c = \dfrac{pr3}{1-pr1-pr2}$.}
\label{fig::disjunction}
\end{figure}
Queries can be written as part of the program and are expressed using the $\#query$ keyword. An example query would be:
\begin{equation*}
\#query(p1,not p2, p3|p4:true, p5:false,p6:true).
\end{equation*}
\vspace{-3mm}
where $p1, p2, p3, p4, p5$ and $p6$ are ground atoms. $p1, not\textnormal{ }p2, p3$ is the query and $p4:true, p5:false, p6:true$ is the provided evidence. In this example, we want to know the probability of $p1$ and $p3$ being true and $p2$ being false (at the same time), given that $p4$ and $p6$ are true and $p5$ is false.
In PASOCS, we can specify multiple queries at the same time, which can all have a different set of evidence. One advantage of PASOCS is that it can run all the queries at the same time and does not require multiple calls to answer queries with different evidence.
\subsection{Exact Inference}
The PASOCS system performs exact inference for a given set of queries in three steps: firstly, it computes the set of all total choices from the transformed program $\langle P_l, P_f\rangle$; secondly, it generates the answer sets corresponding to each of these total choices; and finally, it uses Algorithm \ref{alg::cs} to obtain the lower and upper bound probabilities. We have evaluated PASOCS's exact inference and compared it with other existing methods in the case of stratified PLP programs. Results, given in Section~\ref{sec::experiments}, show that the computational time is, as expected, exponential in the number of probabilistic variables (see Table~\ref{Table1}): given $n$ probabilistic facts there are $2^n$ total choices. To address this problem, PASOCS uses sampling for approximate solving of probabilistic inference tasks.
\subsection{Approximate solving}
PASOCS performs approximate solving by sampling total choices, using one of three sampling algorithms: naive sampling, Metropolis-Hastings MCMC and Gibbs MCMC. These three sampling methods share the same user-defined stopping criteria. Total choices are sampled and queries are evaluated on the answer sets resulting from each sampled total choice. All queries are evaluated simultaneously on the same samples. If a query has evidence, PASOCS ignores a sample for that query when the evidence is not true in any of the resulting answer sets. This means that different queries, even if evaluated on the same set of samples, may not count the same number of samples. To evaluate the upper and lower bounds of a query, PASOCS uses Algorithm \ref{alg::cs} where the probability of a total choice, $Prob(C)$, is replaced by $1$, and the resulting values $a$, $b$, $c$ and $d$ are divided by the number of samples that the query has counted.
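This sample-based estimation can be sketched as a variant of Algorithm \ref{alg::cs}. In the Python sketch below (our own names, not the actual implementation), \texttt{samples} holds the list of answer sets obtained for each sampled total choice:

```python
def bounds_from_samples(samples, query_true, evidence_true):
    """Estimate the bounds of P(Q|E) from sampled total choices.

    As described above, Prob(C) is replaced by 1 and the counts a, b, c, d
    are divided by the number of samples the query has counted; samples
    whose answer sets never satisfy the evidence are ignored."""
    a = b = c = d = n = 0
    for ans_sets in samples:
        if not any(evidence_true(A) for A in ans_sets):
            continue  # this query ignores the sample
        n += 1
        qe = [query_true(A) and evidence_true(A) for A in ans_sets]
        nqe = [(not query_true(A)) and evidence_true(A) for A in ans_sets]
        a += all(qe)
        b += any(qe)
        c += all(nqe)
        d += any(nqe)
    if n == 0:
        return None  # no sample satisfied the evidence
    a, b, c, d = a / n, b / n, c / n, d / n
    if b + c == 0 and d > 0:
        return (0.0, 0.0)
    if a + d == 0 and b > 0:
        return (1.0, 1.0)
    return (a / (a + d), b / (b + c))
```

On a sample set whose empirical frequencies match the total-choice probabilities of Figure \ref{fig::ex}, this recovers the exact bounds $[0.7, 1]$ for $q$.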
PASOCS considers the predicted lower and upper bounds of a query separately and computes the uncertainty on each independently. Since each bound is estimated as the mean of a Bernoulli random variable, we formulate the uncertainty as \citep{Montgomery1994}:
\begin{equation}
U = 2\cdot perc\cdot\sqrt{\dfrac{p(1-p)}{N}}
\end{equation}
where $p$ is the estimated lower or upper bound for the query, $N$ is the number of samples the query has counted and $perc$ is a user-defined parameter giving the percentile to use (by default $1.96$, for the 95\% confidence bounds). The system stops sampling when the uncertainty of the lower and upper bounds (computed separately) of all queries is under a certain \textit{threshold} defined by the user. For $p=0$ or $p=1$, the uncertainty is $0$; in these cases the system does not consider the user-defined threshold but instead continues sampling until a minimum number of samples $min_{sample}$ has been counted for each query. Finally, a user-defined parameter $max_{sample}$ gives the maximum number of samples the system is allowed to take in total. The rest of this section describes the three sampling methods.
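The stopping rule just described can be sketched as follows; the function and parameter names are our own where the paper does not fix them:

```python
from math import sqrt

def uncertainty(p, n, perc=1.96):
    """U = 2 * perc * sqrt(p (1 - p) / N) for an estimated bound p."""
    return 2 * perc * sqrt(p * (1 - p) / n)

def should_stop(bounds, counts, threshold, min_sample, max_sample, total):
    """Sketch of the stopping rule (illustrative names).

    Stop when every query's lower and upper bounds are estimated within
    `threshold`, requiring at least `min_sample` counted samples whenever
    a bound is degenerate (p = 0 or p = 1), and never exceeding
    `max_sample` draws in total."""
    if total >= max_sample:
        return True
    for (lower, upper), n in zip(bounds, counts):
        for p in (lower, upper):
            if p in (0.0, 1.0):
                if n < min_sample:
                    return False
            elif uncertainty(p, n) > threshold:
                return False
    return True
```

For example, a bound estimated at $0.5$ from $10000$ counted samples has uncertainty $U \approx 0.0196$, so a threshold of $0.02$ would already be met.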
\paragraph{Naive Sampling.}
The naive sampling algorithm samples total choices by sampling each probabilistic fact independently, using its associated probability in $P_f$. Each sample is independent from the next.
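A naive sampler is thus a direct Bernoulli draw per probabilistic fact (a sketch with illustrative names):

```python
import random

def naive_sample(facts, rng=random):
    """One total choice: flip each probabilistic fact independently with
    its probability from P_f (atom -> probability map)."""
    return {atom for atom, pr in facts.items() if rng.random() < pr}
```

Over many independent draws, the empirical frequency of each fact converges to its annotated probability.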
\paragraph{Metropolis-Hastings Sampling.}
For the MH MCMC algorithm, PASOCS randomly initialises the value of each probabilistic fact and performs a user-defined number of burn-in steps. To build a sample $n+1$ from an existing sample $n$, PASOCS switches the value of each probabilistic fact with probability $p_{change}$ (user defined, default is $0.3$). Since this proposal is symmetric, the acceptance probability for sample $S_{n+1}$ is $\min\left(1, \dfrac{Prob(S_{n+1})}{Prob(S_n)}\right)$, where $Prob$ is the probability of a total choice as defined in Section \ref{sec::cs}.
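This proposal-and-acceptance scheme can be sketched as follows (Python, our own names, not the actual implementation). Because the flip proposal is symmetric, the Hastings correction cancels and the acceptance ratio reduces to the ratio of total-choice probabilities:

```python
import random

def choice_prob(choice, facts):
    """Probability of a total choice (product over independent facts)."""
    p = 1.0
    for atom, pr in facts.items():
        p *= pr if atom in choice else (1.0 - pr)
    return p

def mh_step(current, facts, p_change=0.3, rng=random):
    """One MH step: propose by flipping each fact's value with probability
    p_change, then accept with min(1, Prob(proposal)/Prob(current))."""
    proposal = {a for a in facts
                if (a in current) != (rng.random() < p_change)}
    ratio = choice_prob(proposal, facts) / choice_prob(current, facts)
    if rng.random() < min(1.0, ratio):
        return proposal
    return current
```

Iterating this step yields a Markov chain whose stationary distribution over total choices is exactly $Prob(C)$.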
\paragraph{Gibbs sampling.}
PASOCS can also use Gibbs MCMC sampling, in particular block Gibbs MCMC sampling, where the size $bk$ of the block is user defined. The system initialises the values of the probabilistic facts randomly and performs a burn-in phase at the beginning of the sampling. Then, at each iteration, $bk$ probabilistic facts are re-sampled using their respective probabilities defined in $P_f$ while keeping the others fixed. By default, $bk=1$.
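Because probabilistic facts are mutually independent, the Gibbs conditional of a fact given all the others is just its own probability, so a block Gibbs step can be sketched as (illustrative names, not the actual implementation):

```python
import random

def gibbs_step(current, facts, bk=1, rng=random):
    """One block Gibbs step: re-sample `bk` randomly chosen probabilistic
    facts from their own probability, keeping the others fixed.  The facts
    are independent, so this is the exact conditional distribution."""
    state = set(current)
    for atom in rng.sample(sorted(facts), k=min(bk, len(facts))):
        state.discard(atom)
        if rng.random() < facts[atom]:
            state.add(atom)
    return state
```

Since every re-sampled fact is drawn from its exact conditional, the acceptance probability is always $1$, as noted in the background section.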
\section{Experiments}
\label{sec::experiments}
We demonstrate the capabilities of PASOCS on two tasks designed to evaluate the sampling mechanisms. We compare against Problog, Cplint (PITA \citep{Riguzzi2011} and MCINTYRE \citep{Riguzzi2011b}), Diff-SAT and $\mathrm{LP}^{\mathrm{MLN}}$, both quantitatively and qualitatively. We report the running times of all systems when relevant. We ran all systems (apart from Cplint\footnote{Which we ran from http://cplint.eu}) on a computing node with 24 CPUs.
Throughout this section, we use two example tasks. The first one, Task 1 (see Listing \ref{list:task1}), taken from \citep{Lee2017b}, consists of finding the probability that a path exists between two nodes in a graph whose edges are probabilistic. This task also appears in \citep{Azzolini2020} and its representation as a PLP is stratified. The second task, Task 2 (see Listing \ref{list:task2}), is the same path-finding exercise, except that an unknown agent can now decide whether a node is included in the graph, and we make no assumption about this agent's behaviour: we do not put probabilities on the nodes. We instead represent it as a choice rule\footnote{A choice rule $\{p\}\leftarrow$ is similar to a negative loop $p \leftarrow not\textnormal{ } q$, $q\leftarrow not\textnormal{ } p$, yielding two answer sets, one with $p$ and one without. The difference is that the choice rule does not make the atom $q$ appear in the answer set without $p$.} in ASP, meaning that for each total choice there will be multiple answer sets, some in which the node is present and some in which it is not. In both tasks there are 6 nodes, edges are generated randomly with random probabilities, and we always query the path between node 1 and node 5. Depending on the experiment we vary the number of edges. The listings for the other systems are given in Appendix \ref{ap::t1} and \ref{ap::t2}\footnote{\href{https://github.com/David-Tuc/pasocs\_solve\_exp}{https://github.com/David-Tuc/pasocs\_solve\_exp}}.
\subsection{Convergence}
We empirically show the convergence of the sampling methods on stratified and unstratified programs. In the case where the program is stratified (see Figure \ref{fig:task1sampling}) the lower and upper bound probabilities are equal and we obtain a point probability. In the case of an unstratified program, the predicted lower and upper bounds are different (see Figure \ref{fig:task2sampling}). Both of these graphs were obtained by running each sampling method 10 times for each number of samples on the programs with 20 edges.
We see that the standard deviation decreases as the number of samples increases and that our sampling methods converge towards the right estimate given enough samples. Figures \ref{fig:task1sampling} and \ref{fig:task2sampling} also show that the running time is linear in the number of samples and is the same for all three methods (from about $0.57$s for $1000$ samples to about 10 minutes for 1 million samples). For both graphs, default parameters were used: $p_{change}=0.3$, $bk=1$ and 100 samples were discarded as burn-in for the MCMC samplings.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{task1.eps}
\caption{Predicted probability and processing time on task 1 with 20 edges running the three sampling methods for given number of samples. Results are averaged over 10 runs and standard deviation is displayed.}
\label{fig:task1sampling}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{task2.eps}
\caption{Predicted lower and upper probability and processing time on task 2 with 20 edges running the three sampling methods for given number of samples. Results are averaged over 10 runs and standard deviation is displayed.}
\label{fig:task2sampling}
\end{figure}
\subsection{Sanity Check - stratified programs}
\begin{figure}[t]
\begin{lstlisting}[frame=single,breaklines=true,
label={list:task1},caption={PASOCS Task 1, only two edges are included} ,basicstyle=\small]
0.3576::edge(1, 6).
0.8565::edge(5, 3).
node(1..6).
path(X,Y) :- path(X,Z), path(Z, Y), Y != Z.
path(X,Y) :- node(X), node(Y), edge(X,Y).
#query(path(1,5)).
\end{lstlisting}
\end{figure}
We compare here PASOCS with the following systems on Task 1: Problog, Cplint, $\mathrm{LP}^{\mathrm{MLN}}$ (LPMLN2ASP \citep{Lee2017b}) and Diff-SAT. The aim is to show that PASOCS performs as expected and has accuracy similar to other PLP solvers. In the context of stratified programs, the semantics of all these systems agree with Sato's distribution semantics, thus yielding the same results.
Table \ref{tab:task1exact} shows the running times of exact inference for the given systems. As expected, the running time of PASOCS's exact inference is exponential, as it computes each total choice, and results are consistent with $\mathrm{LP}^{\mathrm{MLN}}$, which uses the same technique. Problog2 and PITA use knowledge compilation for exact inference, which is of course much faster provided the system manages to translate the program. In our examples Problog2 was not able to cope with cycles in the represented graph, which explains the timeouts past 6 edges.
\begin{table}[t]
\centering
\caption{Task 1 time comparison for exact inference (in seconds)}\label{tab:task1exact}
\begin{tabular}{c c c c c c}
\toprule
\bfseries Edges & \bfseries PASOCS & \bfseries PITA & \bfseries Problog2 & \bfseries $\mathbf{LP}^{\mathbf{MLN}}$\\
\midrule
6 & 0.106 & 0.031 & 1.44 & 1.9 \\
10 & 0.68 & 0.032 & Timeout & 2.7\\
16 & 37.34 & 0.031 & Timeout & 64.7\\
20 & 502 & 0.035 & Timeout & 893\\
24 & 8748 & 0.037 & Timeout & 15182\\
\bottomrule
\end{tabular}
\label{Table1}
\end{table}
We also ran approximate inference on Task 1 with 20 edges and report the results in Table \ref{tab:task1samp}, using $100000$ samples. All systems find a similar estimate and PASOCS's standard error is only marginally above the other systems'. This experiment shows that PASOCS's approximate inference estimate is similar in precision to its competitors on stratified programs, where the Credal semantics matches their respective semantics. Considering the running times, MCINTYRE performs the sampling in $1.25$ seconds while PASOCS and Diff-SAT take about one minute on average. Problog2 took more than an hour and $\mathrm{LP}^{\mathrm{MLN}}$ about two hours to compute the estimate.
\begin{table}[t]
\centering
\caption{Task 1 approximate inference comparison with 100000 samples for 20 edges, averaged over 10 runs and reported with the standard error.}\label{tab:task1samp}
\begin{tabular}{c c c}
\toprule
Systems & \bfseries Probability & \bfseries err \\% & \bfseries avg time (sec) \\
\midrule
\bfseries PASOCS & 0.9463 & $1.3\times10^{-3}$\\
\bfseries MCINTYRE & 0.9459 & $4\times10^{-4}$\\
\bfseries Problog2 & 0.9464 & $4.84\times10^{-4}$\\
\bfseries $\mathbf{LP}^{\mathbf{MLN}}$ & 0.9459 & $8.25\times10^{-4}$\\
\bfseries Diff-SAT & 0.9473 & $1.3\times10^{-5}$\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Comparison - unstratified programs }
\begin{figure}[t]
\begin{lstlisting}[frame=single,breaklines=true,
label={list:task2},caption={PASOCS Task 2, only two edges are included},basicstyle=\small]
0.3576::edge(1, 6).
0.8565::edge(5, 3).
node(1).
{node(2)}.
{node(3)}.
node(4).
node(5).
node(6).
path(X,Y) :- path(X,Z), path(Z, Y), Y != Z.
path(X,Y) :- node(X), node(Y), edge(X,Y).
#query(path(1,5)).
\end{lstlisting}
\end{figure}
While PASOCS is capable of dealing with stratified programs, its novelty is that it computes the lower and upper bound probabilities for unstratified programs following the Credal semantics. In this case the only systems we can compare with are $\mathrm{LP}^{\mathrm{MLN}}$ and Diff-SAT. We ran each system on Task 2, which includes choice rules. PASOCS returns a lower and an upper bound on the probability of the query.
We report the results of running these systems with $100000$ samples in Table \ref{tab:task2samp}. The lower bound computed by PASOCS has a higher standard error than the upper bound, as it is more dependent on the actual sampled probabilistic facts. We note that the lower and upper bounds computed by PASOCS capture the probabilities computed by the other systems: Diff-SAT's and $\mathrm{LP}^{\mathrm{MLN}}$'s estimates lie approximately at the mean of the two bounds. PASOCS and Diff-SAT take around a minute for that number of samples while $\mathrm{LP}^{\mathrm{MLN}}$ takes more than five hours.
\begin{table}[t]
\centering
\caption{Task 2 approximate inference comparison with 100000 samples for 20 edges, averaged over 10 runs}\label{tab:task2samp}
\begin{tabular}{c c c c}
\toprule
Systems & & \bfseries Probability & \bfseries err \\
\midrule
\multirow{2}{4em}{\bfseries PASOCS} & lower & 0.8282 & $2.4\times10^{-3}$ \\
& upper & 0.9460 & $1.5\times10^{-3}$ \\
\bfseries $\mathbf{LP}^{\mathbf{MLN}}$& & 0.8887 & $1.13\times10^{-3}$ \\
\bfseries Diff-SAT & & 0.8982 & $5.38\times10^{-4}$ \\
\bottomrule
\end{tabular}
\end{table}
In summary, PASOCS achieves similar if not better running times than other systems dealing with non-stratified logic programs, while allowing more expressiveness in the output probabilities. It returns estimates of the lower and upper bound probabilities as described by the Credal semantics, thus allowing complex programs to be solved without having to make assumptions about unknown probabilities.
\section{Related work}
\label{sec::related_works}
There exist many probabilistic logic programming systems. We have compared with a few in this paper, and we now detail some of their inner workings and differences from PASOCS. Problog \citep{Fierens2015} and Cplint \citep{Riguzzi2007} work under the distribution semantics \citep{Sato1995}. Problog is a popular solver that uses knowledge compilation to compute the answer to a query (with Problog2), translating the program into a more ordered form which allows for much faster weighted model counting. When this translation succeeds, evaluating the queries is done in linear time. We also compared with its sampling method \citep{Dries2015}. Cplint is a SWI-Prolog \citep{Colmerauer1990, Wielemaker2012} package that implements both exact and approximate inference: exact inference is done through PITA \citep{Riguzzi2011a} using knowledge compilation, and sampling through MCINTYRE \citep{Riguzzi2011b} using MCMC methods.
There exist other semantics for PLP which are based on the answer set semantics for logic programs. P-log \citep{Baral2004} is based on a semantics close to the distribution semantics, which assigns a single probability value to queries; when multiple answer sets correspond to the same total choice, it distributes the probability evenly among them. $\mathrm{LP}^{\mathrm{MLN}}$ \citep{Lee2015} is a semantics that takes inspiration both from stable models and from Markov logic networks \citep{Richardson2006a}. It annotates logical rules with weights, which in turn assign weights to interpretations depending on whether they are an answer set of a subset of the PLP. Weights are then normalised to give a point probability to each interpretation. PrASP \citep{Nickles2015} is a generalisation of the answer set semantics which solves an optimization problem defined by a PLP. Its more recent iteration, Diff-SAT \citep{Nickles2021}, is a more complex system capable of computing probabilities of atoms in a PLP through sampling and gradient descent. Like $\mathrm{LP}^{\mathrm{MLN}}$ and P-log, Diff-SAT assigns point probabilities to queries.
\section{Conclusion}
\label{sec::conclusion}
We presented the new approximate solver PASOCS for probabilistic logic programs under the Credal semantics. It computes the lower and upper bounds of queries given evidence using sampling algorithms. It is aimed at unstratified programs, but we showed that on stratified programs, where the Credal semantics agrees with the distribution semantics, its performance is similar to that of other PLP solvers.
In the future, PASOCS will aim at solving the equivalents of the Most Probable Explanation and Maximum-A-Posteriori (MAP) tasks under the Credal semantics: cautious explanation and cautious MAP. The system itself will be kept under constant development, to optimize the running time on large clusters and to implement MPI so that multiple computing nodes can be used simultaneously.
\section{Introduction}
In \cite{ak} Akbulut and King proved that ``All knots are algebraic''. Since the word \emph{algebraic} carries several different meanings, their title could cause confusion. Besides links that are built from rational tangles as studied by Conway \cite{conway} the term ``algebraic link'' is nowadays usually reserved for links of isolated singularities of complex hypersurfaces. These are known to be unions of certain iterated cables of torus links. So clearly not all knots are algebraic in this sense. If the word ``algebraic'' is interpreted as ``an algebraic set'', then Akbulut's and King's title is a true statement, but also a classical and well-known result in algebraic geometry, see for example the Nash-Tognoli theorem \cite{bcr:1998real}.
The correct interpretation of Akbulut's and King's ``algebraic knots'' lies somewhere between the notion of the link of an isolated singularity and an algebraic set. Consider a real polynomial map $f:\mathbb{R}^4\to\mathbb{R}^2$. A critical point of $f$ is a point $p\in\mathbb{R}^4$ where the real Jacobian matrix $Df(p)$ of $f$ does not have full rank. We say that the origin $O\in\mathbb{R}^4$ is a \textit{weakly isolated singularity} of $f$ if $f(O)=0$, $Df(O)=0$ (i.e., the $2\times4$-matrix with zero entries) and there is some neighbourhood $U$ of the origin such that $(U\backslash\{O\})\cap f^{-1}(0)$ contains no critical points of $f$. Hence $f$ is allowed to have a line of critical points pass through the origin, but the origin should be an isolated intersection point of $f^{-1}(0)$ and the critical set.
Every weakly isolated singularity can be associated with a link, since the link type of the intersection of $f^{-1}(0)$ and the 3-sphere $S_{\rho}^3$ of radius $\rho$ does not depend on the radius $\rho>0$, provided it is sufficiently small. We call $L_f=f^{-1}(0)\cap S_{\rho}^3$ the link of the singularity.
Akbulut and King prove that every link in the 3-sphere arises as the link of a weakly isolated singularity of a real polynomial map $f:\mathbb{R}^4\to\mathbb{R}^2$. Thus their interpretation of the term ``algebraic'' does involve singularities, but of real polynomial maps instead of complex ones, and their definition of an isolated singularity is (as the name suggests) so weak that there is no restriction in the type of links that can be obtained this way.
Note that by composing an inverse stereographic projection $\mathbb{R}^3\to S^3_{\rho}$ with $f$ and clearing the denominator we obtain a real polynomial map on $\mathbb{R}^3$ whose variety is isotopic to the link of the singularity $L_f$ of $f$, so that Akbulut's and King's proof also establishes $L_f$ as an algebraic set in $\mathbb{R}^3$.
In \cite{bode:ralg}, we discuss a construction of weakly isolated singularities for certain links. It produces functions $f:\mathbb{C}^2\to\mathbb{C}$ that can be written as polynomials in complex variables $u$, $v$ and the complex conjugate $\bar{v}$. Hence they are holomorphic with respect to one complex variable, but not necessarily with respect to the other. We call such functions \textit{semiholomorphic}. They form an interesting family of mixed polynomials \cite{oka}, lying between the complex and the real setting. However, the construction in \cite{bode:ralg} only works for links that satisfy certain symmetry constraints, such as being the closure of a 2-periodic braid. This is necessary in order to obtain polynomials with the desired properties rather than more general real analytic maps.
In this paper we offer a constructive proof of Akbulut's and King's result, an algorithm that takes a braid word as input and produces a polynomial with a weakly isolated singularity, whose link is ambient isotopic to the closure of the given braid. Furthermore, all of the constructed polynomials are semiholomorphic.
\begin{theorem}\label{thm:ak}
Algorithm 1 (outlined in Section 3) constructs for any given braid $B$ on $s$ strands a semiholomorphic polynomial $f:\mathbb{C}^2\to\mathbb{C}$ with $\deg_u(f)=s$ that has a weakly isolated singularity at the origin and whose link $L_f$ is ambient isotopic to the closure of $B$.
\end{theorem}
The algorithm is based on trigonometric interpolation, which allows us to prove upper bounds on the polynomial degrees of the constructed functions.
\begin{theorem}\label{thm:bound}
Let $B$ be a braid with $s$ strands, $\ell$ crossings and let $\mathcal{C}$ denote the set of components of its closure, which by assumption is not the unknot. Let $s_C$ denote the number of strands of the component $C\in\mathcal{C}$. Then the degree of the polynomial $f$ that Algorithm 1 constructs from the input $B$ is at most:
\begin{equation}
\deg(f)\leq s\ell(2+s)+1+\underset{C\in\mathcal{C}}{\sum}s_C^2\ell.
\end{equation}
\end{theorem}
\begin{corollary}\label{cor}
If the closure of $B$ is a non-trivial knot, the degree of the polynomial $f$ constructed by Algorithm 1 is bounded by
\begin{equation}
\deg(f)\leq 2s\ell(s+1)+1.
\end{equation}
\end{corollary}
We would also like to point out that there is a stronger notion of isolation of singularities of real polynomial maps. We say that the origin is an \emph{isolated singularity} if $f(O)=0$, $Df(O)=0$ and there is some neighbourhood $U$ of the origin such that $U\backslash\{O\}$ contains no critical points of $f$. Typically the set of critical points of $f$ is 1-dimensional, so that polynomials with isolated singularities are very rare. The links that arise from isolated singularities, the \emph{real algebraic links}, have not been classified yet and are conjectured to be equal to the set of fibered links \cite{benedetti}. Some constructions of isolated singularities have been put forward to make progress on this conjecture \cite{bode:ralg, looijenga, perron}, but the family of links that are known to be real algebraic is still comparatively small. A construction similar to Algorithm 1, which produces isolated singularities for a large family of fibered links, will be the subject of a future paper.
Our algorithm can be interpreted as a deformation of a Newton degenerate mixed function in the sense of \cite{oka} or \cite{eder}. Our results can thus be viewed in the broader context of the question: How do deformations of real polynomial mappings affect the topology of their zeros close to singular points? Some work has been done on this question regarding so-called inner Newton non-degenerate mixed functions \cite{eder} and complex polynomial mappings \cite{fukui, king, saeki}, but the problem is still wide open in the general setting.
The remainder of this paper is structured as follows. Section \ref{sec2} reviews some useful background and introduces notation and conventions. In Section \ref{sec:algo} we give an overview of the algorithm that constructs weakly isolated singularities for any given link. The individual steps of the algorithm are discussed in Section \ref{sec:steps}, where we illustrate that all the steps can indeed be performed algorithmically. We prove our main result, Theorem \ref{thm:ak}, in Section \ref{sec:proof} by showing that the described algorithm constructs weakly isolated singularities for any given link. The bounds on the polynomial degrees are shown in Section \ref{sec:bounds}.
\ \\
\textbf{Acknowledgments:} The author is grateful to Raimundo Nonato Araújo dos Santos and Eder Leandro Sanchez Quiceno for discussions and feedback on the paper. This work is supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No 101023017.
\section{Background}\label{sec2}
Semiholomorphic polynomials are a special type of \textit{mixed polynomials} as introduced by Oka \cite{oka}. In the dimensions that we are interested in, the set of mixed polynomials $f:\mathbb{C}^2\to\mathbb{C}$ consists of polynomials in two complex variables $u$ and $v$, and their complex conjugates, $\overline{u}$ and $\overline{v}$, so that $f$ takes the form
\begin{equation}
f(u,v)=\underset{i,j,k,\ell}{\sum}c_{i,j,k,\ell}u^i\overline{u}^jv^k\overline{v}^\ell,
\end{equation}
with all but finitely many $c_{i,j,k,\ell}\in\mathbb{C}$ equal to zero. Note that every polynomial map from $\mathbb{R}^4$ to $\mathbb{R}^2$ can be written as a mixed polynomial. A mixed polynomial is semiholomorphic if and only if $c_{i,j,k,\ell}\neq0$ implies $j=0$. Thus a semiholomorphic polynomial is holomorphic with respect to the variable $u$, but not necessarily with respect to the variable $v$.
Semiholomorphic polynomials lend themselves to constructions like the one discussed in this paper for two reasons. First, the holomorphicity in one variable grants us a certain rigidity and control over the behaviour of zeros that is usually associated with complex functions. For instance, we know that for any fixed value of $v=v_*$ the number of zeros of $f(\cdot,v_*)$ is equal to its degree. The second advantage of working with semiholomorphic polynomials is that it is comparatively easy to prove that a point is a regular point, i.e., that the real Jacobian matrix has full rank, and consequently to prove that a singularity is weakly isolated. It suffices to show that the origin $O$ is the only zero of $f$ where $\tfrac{\partial f}{\partial u}$ vanishes.
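As a toy numerical illustration of this rigidity (our own sketch, not code from any work discussed here): once $v=v_*$ is fixed, the fiber $f(\cdot,v_*)$ is an honest complex polynomial in $u$, so all of its zeros can be found with a standard root finder.

```python
import numpy as np

def fiber_zeros(coeffs, v):
    """Zeros in u of a semiholomorphic polynomial at a fixed value of v.

    coeffs: coefficient functions c_k(v, vbar) of the powers of u, ordered
    from the leading coefficient down to the constant term.  Since f is
    holomorphic in u, the fiber f(., v) is a complex polynomial and has
    exactly deg_u(f) zeros counted with multiplicity.
    """
    vbar = np.conjugate(v)
    return np.roots([c(v, vbar) for c in coeffs])

# toy example (not from the paper): f(u, v) = u^2 - v*vbar, fiber at v = 1+i
zeros = fiber_zeros([lambda v, vb: 1.0,
                     lambda v, vb: 0.0,
                     lambda v, vb: -v * vb], 1 + 1j)   # zeros u = ±sqrt(2)
```

The same fiberwise view underlies the regularity criterion: away from the origin one only has to track where $\partial f/\partial u$ vanishes along the zeros of each fiber.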
As for complex polynomials there is the notion of a Newton polyhedron for mixed polynomials \cite{oka}. For every weight vector $P=(p_1,p_2)\in\mathbb{N}^2$ we can define the radially weighted degree of a mixed monomial $M=c_{i,j,k,\ell}u^i\overline{u}^jv^k\overline{v}^\ell$ with respect to $P$ as $d(P;M):=p_1(i+j)+p_2(k+\ell)$. A mixed polynomial $f$ is \textit{radially weighted homogeneous} of degree $d(P;f)$ if there is a weight vector $P$ such that all non-zero monomials $M$ in $f$ have the same radially weighted degree $d(P;M)=d(P;f)$ with respect to $P$.
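The radially weighted degree is a simple function of the exponent vectors, so radial weighted homogeneity is easy to check symbolically. A minimal sketch in our own notation, with a mixed monomial $u^i\overline{u}^jv^k\overline{v}^\ell$ encoded as the tuple $(i,j,k,\ell)$:

```python
def radial_degree(P, monomial):
    """Radially weighted degree d(P; M) = p1*(i+j) + p2*(k+l) of the mixed
    monomial u^i ubar^j v^k vbar^l, encoded as the exponent tuple (i, j, k, l)."""
    p1, p2 = P
    i, j, k, l = monomial
    return p1 * (i + j) + p2 * (k + l)

def is_radially_weighted_homogeneous(P, monomials):
    """True if all monomials share the same radially weighted degree w.r.t. P."""
    return len({radial_degree(P, m) for m in monomials}) == 1
```

For instance, with weight vector $P=(2,1)$ the monomials $u$, $v^2$ and $v\overline{v}$ all have radially weighted degree 2.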
Our algorithm is based on braids and the fact that every link is the closure of some braid \cite{alexander:1923lemma}. A braid on $s$ strands is a collection of $s$ disjoint curves $(u_j(t),t)\subset\mathbb{C}\times [0,2\pi]$, $j=1,2,\ldots,s$, parametrized by their height coordinate $t$ going from $0$ to $2\pi$. The functions $u_j:[0,2\pi]\to\mathbb{C}$ are assumed to be smooth and to satisfy that for every $j\in\{1,2,\ldots,s\}$ there is a unique $i\in\{1,2,\ldots,s\}$ such that $u_j(2\pi)=u_i(0)$.
Identifying the $t=0$- and the $t=2\pi$-plane results in a link in $\mathbb{C}\times S^1$, the closed braid. Embedding the open solid torus $\mathbb{C}\times S^1$ as an untwisted neighbourhood of the unknot in the 3-sphere $S^3$ defines a link in $S^3$, the closure of the braid, whose link type is well-defined.
Projecting curves via the map $(u,t)\mapsto(\text{Re}(u),t)$ into $\mathbb{R}\times[0,2\pi]$ results in $s$ intersecting curves. A \textit{braid diagram} is such a projection where every intersection is transverse and involves exactly two strands. We keep track of the information about the $\text{Im}(u)$-coordinate of these two strands at the crossing by deleting the strand with the larger $\text{Im}(u)$-coordinate in a neighbourhood of the crossing. The strand with the smaller $\text{Im}(u)$-coordinate is thus the overcrossing strand. This is an arbitrary choice and in previous papers we have not been consistent with our choices (although consistent in each individual paper). Changing this convention only means that several signs throughout this paper need to be reversed. Obviously, the results are not affected by this.
If two strands cross at $t=t_k$ and for all small $\varepsilon>0$ the overcrossing strand has smaller $\text{Re}(u)$-coordinate for all $t\in(t_k-\varepsilon,t_k)$, this crossing is a positive crossing. Non-positive crossings are negative.
The braid isotopy classes of braids on $s$ strands form a group generated by the Artin generators $\sigma_j$, $j=1,2,\ldots,s-1$, where $\sigma_j$ denotes a positive crossing between the strand with the $j$th smallest $\text{Re}(u)$-coordinate (the ``$j$th strand'') and the strand with the next larger $\text{Re}(u)$-coordinate (the $(j+1)$st strand). The square $B^2$ of a braid $B$ is represented by the double repeat of its braid word, i.e., two copies of the same word concatenated.
A braid diagram can be interpreted as a \textit{singular braid} on $s$ strands, that is, a collection of curves that are allowed to intersect transversely in finitely many points, whose image under the projection map $(u,t)\mapsto(\text{Re}(u),t)$ into $\mathbb{R}\times[0,2\pi]$ is a braid diagram (see Figure \ref{fig:diag}). The singular braid monoid is generated by the Artin generators, their inverses $\sigma_j^{-1}$ and the singular generators $\tau_{j}$, $j=1,2,\ldots,s-1$, which correspond to intersection points between the $j$th strand and the $(j+1)$st strand. Thus a singular braid that lies in $\mathbb{R}\times[0,2\pi]$ (i.e., one that is its own braid diagram) is represented by a word that only consists of $\tau_j$s.
The projection map that associates to every geometric braid a braid diagram can thus be understood as a function from the set of braid words to the set of singular braid words mapping a generator $\sigma_j^{\varepsilon}$, $\varepsilon\in\{\pm 1\}$ to $\tau_j$, regardless of the sign $\varepsilon$.
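On braid words this projection is a purely forgetful map. A sketch in our own encoding, with $\sigma_j^{\pm1}$ represented by the signed integer $\pm j$ and $\tau_j$ by the plain integer $j$:

```python
def project_braid_word(word):
    """Project a braid word onto the singular braid word of its diagram.

    A braid word is encoded as a list of signed generator indices: +j for
    sigma_j and -j for its inverse sigma_j^{-1}.  The projection forgets
    the sign, sending both to the singular generator tau_j (encoded as j).
    """
    return [abs(g) for g in word]
```

For example, the word $\sigma_1\sigma_2^{-1}\sigma_2\sigma_1^{-1}$ projects to $\tau_1\tau_2\tau_2\tau_1$.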
\begin{figure}
\centering
\labellist
\large
\pinlabel a) at 100 1200
\pinlabel b) at 1500 1200
\endlabellist
\includegraphics[height=5cm]{Eder_algorithm1}
\caption{a) A braid diagram. b) The corresponding singular braid \label{fig:diag}}
\end{figure}
For some given geometric braid its image under the projection map $(u,t)\mapsto(\text{Re}(u),t)$ into $\mathbb{R}\times[0,2\pi]$ might not be a braid diagram. We call a collection of curves $(u_j(t),t)$, $j=1,2,\ldots,s$, in $\mathbb{C}\times[0,2\pi]$ a \textit{generalized singular braid} if for every $j=1,2,\ldots,s$, there is a unique $i\in\{1,2,\ldots,s\}$ with $u_j(0)=u_i(2\pi)$ and all intersection points between the different strands $(u_j(t),t)$ are isolated points. Hence the intersection points are allowed to be tangential and to involve more than two strands. The condition that the intersections are isolated is always satisfied if the strands are not identical and parametrized by real-analytic functions.
We say that a singular crossing of a generalized singular braid is \textit{generic} if it is a transverse intersection between exactly two strands. Otherwise we call it \textit{non-generic}. Non-generic crossings thus consist of more than two strands or are a tangential intersection of at least two strands. We also say that the functions $u_j$ that parametrize a generalized singular braid are non-generic if it has non-generic singular crossings.
\section{An outline of the algorithm}\label{sec:algo}
In \cite{bodepoly} we describe an algorithm that finds for any given braid a semiholomorphic polynomial $f:\mathbb{C}^2\to\mathbb{C}$ whose vanishing set intersects the 3-sphere of unit radius transversely in the closure of the given braid. In \cite{bode:ralg} this construction is modified so that it produces a semiholomorphic polynomial $f$ with a weakly isolated singularity, whose link is the closure of the square $B^2$ of the given braid word $B$.
The first step in both of these constructions is to find (via trigonometric interpolation) a parametrization of the given braid $B$ (up to isotopy) in terms of trigonometric polynomials. Let $\mathcal{C}$ denote the set of connected components of the closure of $B$ and let $s_C$ denote the number of strands that make up the component $C\in\mathcal{C}$. The first step of the algorithm in \cite{bodepoly} finds for every $C\in\mathcal{C}$ a pair of trigonometric polynomials $F_C, G_C:[0,2\pi]\to\mathbb{R}$ such that $B$ is parametrized by
\begin{equation}
\bigcup_{t\in[0,2\pi]}\bigcup_{C\in\mathcal{C}}\bigcup_{j=1}^{s_C}\left(F_C\left(\frac{t+2\pi j}{s_C}\right)+\rmi G_C\left(\frac{t+2\pi j}{s_C}\right),t\right)\subset\mathbb{C}\times[0,2\pi].
\end{equation}
Note that since we use the projection $(u,t)\mapsto(\text{Re}(u),t)$ to obtain braid diagrams and braid words, the real part of the parametrized strands $F_C\left(\tfrac{t+2\pi j}{s_C}\right)$ determines the crossing pattern (i.e., the braid word without the signs of the crossings) and the imaginary part $G_C\left(\frac{t+2\pi j}{s_C}\right)$ determines the signs of the crossings.
In particular, the first step of the algorithm in \cite{bodepoly} yields via trigonometric interpolation a set of trigonometric polynomials $F_C$ such that the corresponding curves $\left(F_C\left(\frac{t+2\pi j}{s_C}\right),t\right)$ parametrize a generalized singular braid.
As in \cite{bodepoly} we would like to point out that the braid diagram for the braid parametrized by the $F_C$s and the $G_C$s is not necessarily identical to the braid diagram of $B$ that we started with. However, the braids are guaranteed to be braid isotopic. The functions $G_C$ are also found via trigonometric interpolation.
Once we have found a parametrization of a braid that is isotopic to $B$ in terms of $F_C$ and $G_C$, we define $g:\mathbb{C}\times S^1\to\mathbb{C}$ via
\begin{equation}
g(u,\rme^{\rmi t})=\prod_{C\in\mathcal{C}}\prod_{j=1}^{s_C}\left(u-F_C\left(\frac{2t+2\pi j}{s_C}\right)-\rmi G_C\left(\frac{2t+2\pi j}{s_C}\right)\right).
\end{equation}
Note the factor 2 in front of the variable $t$ in the expression above. It means that as $t$ varies between 0 and $2\pi$ we are traversing the braid $B$ twice. In other words, the vanishing set of $g$ is (up to isotopy) the closed braid $B^2$.
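Numerically, $g$ can be evaluated directly from its defining product. A sketch, with the trigonometric polynomials $F_C$, $G_C$ supplied as hypothetical callables; the factor 2 that doubles the braid is visible in the argument:

```python
import numpy as np

def g_value(u, t, components):
    """Evaluate g(u, e^{it}) as a product over all strands.

    components: list of triples (F, G, s_C), one per link component, with
    F and G the trigonometric polynomials parametrizing the component and
    s_C its number of strands.  The factor 2 in (2t + 2*pi*j)/s_C means
    the braid is traversed twice as t runs from 0 to 2*pi, so the zero
    set of g is the closed braid B^2.
    """
    val = 1.0 + 0.0j
    for F, G, s in components:
        for j in range(1, s + 1):
            arg = (2 * t + 2 * np.pi * j) / s
            val *= u - F(arg) - 1j * G(arg)
    return val
```

By construction `g_value` vanishes at height $t$ exactly when $u$ lies on one of the doubled strands.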
Expanding the product above results in a polynomial expression for $g$ with respect to the complex variable $u$, as well as with respect to $\rme^{\rmi 2t}$ and $\rme^{-\rmi 2t}$. We define $p_k:\mathbb{C}^2\to\mathbb{C}$ by
\begin{equation}
p_k(u,r\rme^{\rmi t})=r^{2ks}g\left(\frac{u}{r^{2k}},\rme^{\rmi t}\right),
\end{equation}
where $k$ is a sufficiently large integer. Note that by writing $v=r\rme^{\rmi t}$ this becomes a mixed polynomial $p_k(u,v,\bar{v})=(v\bar{v})^{ks}g\left(\frac{u}{(v\bar{v})^k},\frac{v}{\sqrt{v\bar{v}}}\right)$ if $2ks$ is greater than the degree of $g$ with respect to $\rme^{\rmi t}$ and $\rme^{-\rmi t}$. Note that the exponents of $\rme^{\rmi t}$ and $\rme^{-\rmi t}$ in $g$ are even, so that the term $\sqrt{v\bar{v}}$ always comes with an even exponent.
The constructed polynomials $p_k$ are semiholomorphic and radially weighted homogeneous with respect to $P=(2k,1)$ with degree $d(P;f)=2ks$.
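The scaling identity implied by radial weighted homogeneity with respect to $P=(2k,1)$, namely $p_k(\lambda^{2k}u,\lambda v)=\lambda^{2ks}p_k(u,v)$ for $\lambda>0$, can be checked numerically. A sketch on a toy single-strand $g$ of our own choosing (not one produced by the algorithm):

```python
import numpy as np

def p_k(u, v, g, k, s):
    """p_k(u, v) = r^{2ks} g(u / r^{2k}, e^{it}), where v = r e^{it}."""
    r = abs(v)
    return r ** (2 * k * s) * g(u / r ** (2 * k), v / r)

# toy example: g(u, w) = u - w^2 (only even powers of e^{it}),
# with k = 2, s = 1, so that 2ks = 4 exceeds the degree of g in e^{it}
g = lambda u, w: u - w ** 2
k, s = 2, 1

# scaling u by lam^{2k} and v by lam should multiply p_k by lam^{2ks}
u0, v0, lam = 0.5 + 0.2j, 0.7 * np.exp(0.4j), 1.7
lhs = p_k(lam ** (2 * k) * u0, lam * v0, g, k, s)
rhs = lam ** (2 * k * s) * p_k(u0, v0, g, k, s)
```

In this toy case $p_k$ expands to the mixed polynomial $u-v^3\bar{v}$, whose two monomials both have radially weighted degree $4=2ks$ with respect to $P=(4,1)$.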
Since all roots of $g(\cdot,\rme^{\rmi t})$ are simple, the singularity at the origin is weakly isolated. The link of the singularity is the closure of $B^2$. An explicit isotopy between $p_k^{-1}(0)\cap S_{\rho}^3$ and a projection of $p_{k}^{-1}(0)\cap(\mathbb{C}\times \rho S^1)$ to $S_{\rho}^3$, which is known to be the closure of $B^2$, can be constructed as in \cite{bodepoly}.
\begin{figure}[H]
\labellist
\large
\pinlabel a) at 100 1100
\pinlabel b) at 1000 1100
\pinlabel c) at 2000 1100
\pinlabel d) at 400 -100
\pinlabel e) at 1800 -100
\endlabellist
\includegraphics[height=4.3cm]{Eder_algorithm14}
\vspace{0.5cm}
\includegraphics[height=8.6cm]{algorithm13}
\caption{a) The generalized singular braid parametrized by the $F_C$s is not necessarily a singular braid. b) We can make the $F_C$s generic. The resulting functions $\tilde{F}_C$ parametrize a singular braid $B_{sing}$. c) The braid diagram of a braid $B'$ that is obtained from an appropriate choice of crossing signs for the singular crossings in $B_{sing}$. The braid $B'$ is braid isotopic to $B$ in Figure \ref{fig:diag}a). d) $B_{sing}^2$, the vanishing set of $g$. e) A resolution of the singular crossings of $B_{sing}^2$ that results in $B'$, whose closure is the link of the singularity of $f$. \label{fig:braids}}
\end{figure}
Algorithm 1 below, which constructs a weakly isolated singularity for any given link, is based on the same ideas. However, it uses parametrizations of singular braids instead of classical braids.
Figure \ref{fig:braids} shows the parametrized braids and vanishing sets of functions at various steps throughout the algorithm.
We start with a braid diagram of a braid $B$ that closes to the link that we want to construct, such as shown in Figure \ref{fig:diag}a). Via the same trigonometric interpolation procedure as in \cite{bodepoly} we obtain trigonometric polynomials $F_C$ that parametrize curves that form a generalized singular braid, see Figure \ref{fig:braids}a). The functions $F_C$ are not necessarily generic. Via small modifications we can make the $F_C$s generic and obtain a singular braid $B_{sing}$ (Figure \ref{fig:braids}b)) that has the property that there exists a choice of signs for each of its singular crossings that turns $B_{sing}$ into a classical braid that is isotopic to $B$, see Figure \ref{fig:braids}c).
As in \cite{bodepoly} we define a function $p_k$. It is a radially weighted homogeneous mixed polynomial with a singularity at the origin. The intersection $p_k^{-1}(0)\cap S_{\rho}^3$ is the singular braid $B_{sing}^2$ for all $\rho>0$, shown in Figure \ref{fig:braids}d). In particular, it does not have a weakly isolated singularity, since the singular crossings correspond to lines of critical points of $p_k$ through the origin.
However, we can add a term $r^m A(\rme^{\rmi t})$, where $v=r\rme^{\rmi t}$ and $A$ is a finite Fourier series, that makes the singularity weakly isolated. This term has to be constructed so that all singular crossings of $B_{sing}^2$ are resolved in such a way that the link of the singularity is the closure of a braid isotopic to $B$, which is displayed in Figure \ref{fig:braids}e).
\begin{algorithm}[H]
\caption{Construction of weakly isolated singularities}
\begin{steps}
\item From the given braid word $B$ find the trigonometric polynomials $F_C$ via trigonometric interpolation as in \cite{bodepoly}.
\item Make $F_C$ generic. Call the resulting functions $\tilde{F}_C$.
\item Define $g(u,\rme^{\rmi t})=\underset{C\in\mathcal{C}}{\prod}\underset{j=1}{\overset{s_C}{\prod}}\left(u-\tilde{F}_C\left(\tfrac{2t+2\pi j}{s_C}\right)\right)$.
\item Define $p_k(u,r\rme^{\rmi t})=r^{2ks}g\left(\tfrac{u}{r^{2k}},\rme^{\rmi t}\right)$ with $2ks$ greater than the degree of $g$ with respect to $\rme^{\rmi t}$ and $\rme^{-\rmi t}$.
\item Solve the trigonometric interpolation problem $(*)$ in Section \ref{sec:A} for $A:S^1\to\mathbb{C}$.
\item Define $f(u,r\rme^{\rmi t})=p_k(u,r\rme^{\rmi t})+r^mA(\rme^{\rmi t})$, where $m$ is odd and larger than the degree of $A$ with respect to $\rme^{\rmi t}$ and $\rme^{-\rmi t}$ and larger than $2ks$.
\end{steps}
\end{algorithm}
The idea behind Algorithm 1 can be understood as a natural consequence of \cite{eder}, where we introduce certain non-degeneracy conditions of mixed functions and study links of their (weakly) isolated singularities. We show that for such non-degenerate mixed polynomials adding terms above the boundary of the Newton polygon does not change the topology of the link. This seems to suggest that not all links can be obtained as links of weakly isolated singularities of non-degenerate mixed polynomials (and it is an interesting question for which links this is possible). Algorithm 1 thus constructs a degenerate polynomial $p_k$ and adds an appropriate term above the Newton boundary.
In the following sections we explain the individual steps. In particular, we show that each of the steps can be performed algorithmically. Then we show that the algorithm indeed constructs weakly isolated singularities with the closure of $B$ as the link of the singularity.
\section{The individual steps}\label{sec:steps}
Step 1 is identical to the corresponding procedure in \cite{bodepoly}. Note however that the resulting trigonometric polynomials $F_C$ are not necessarily generic. This is not a problem for the construction in \cite{bodepoly}; for the construction in Algorithm 1, however, we need generic parametrizations. This is achieved in Step 2, which is explained in detail in Section \ref{sec:generic}. Steps 3 and 4 are then simply definitions of functions. Step 5 is arguably the most important part of this algorithm. It will be discussed in detail in Section \ref{sec:A}. Step 6 is again simply the definition of a function $f$. Thus if Step 2 and Step 5 can be performed algorithmically, Algorithm 1 is indeed an algorithm.
\subsection{Generic parametrizations of singular braids (Step 2)}\label{sec:generic}
The set of trigonometric polynomials $F_C$ that result in generic parametrizations is dense in the set of trigonometric polynomials. So we should expect that the trigonometric polynomials $F_C$ found via the method from \cite{bodepoly} almost always have this property. However, there is no guarantee. If the $F_C$s are not generic, we have to make some adjustments to make them generic. Again we would like to emphasize that in practice, this is usually not necessary.
As an alternative to the method from \cite{bodepoly}, trigonometric approximation can be used in Step 1 to find a trigonometric parametrization of the braid. If the approximated original braid parametrization is generic, i.e., the corresponding projection gives a braid diagram, then a sufficiently close approximation is generic, too. Therefore, Step 2 of the algorithm is not needed if trigonometric approximation is used in Step 1. However, in contrast to the method from \cite{bodepoly}, trigonometric approximation does not allow us to give bounds on the degrees of the trigonometric polynomials that parametrize the strands.
To a given set of trigonometric polynomials $F_C$, $C\in\mathcal{C}$, with given values of $s_C$, and for any closed interval $[a,b]$ in $[0,2\pi]$ (with $a$ and $b$ away from the values of $t$ for which there are intersections between the $F_C\left(\tfrac{t+2\pi j}{s_C}\right)$s) we call the permutation of the $s=\underset{C\in\mathcal{C}}{\sum}s_C$ curves parametrized by
\begin{equation}
\bigcup_{t\in[a,b]}\bigcup_{C\in\mathcal{C}}\bigcup_{j=1}^{s_C}\left(F_C\left(\frac{t+2\pi j}{s_C}\right),t\right)
\end{equation}
the \textit{permutation associated to the $F_C$s in the interval $[a,b]$}. It is thus an element of the symmetric group on $s$ elements. Note that this is possible, because the $F_C$s are real analytic. This is why even at tangential intersections of strands, we can uniquely determine which incoming arc corresponds to which outgoing arc.
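Away from the intersection times this permutation can be computed by comparing the ordering of the strand values at the two endpoints of the interval. A sketch, assuming each strand is given as a callable $t\mapsto F_C\left(\tfrac{t+2\pi j}{s_C}\right)$:

```python
import numpy as np

def interval_permutation(strands, a, b):
    """Permutation of strand positions between t = a and t = b.

    strands: one callable t -> F_C((t + 2*pi*j)/s_C) per strand.  Returns
    perm with perm[p] = q if the strand at position p (ordered by value,
    smallest first) at t = a sits at position q at t = b.  Assumes a and
    b avoid the intersection times, so both orderings are strict.
    """
    start = np.argsort([f(a) for f in strands])  # strand index per position at t = a
    end = np.argsort([f(b) for f in strands])    # strand index per position at t = b
    pos_at_b = {strand: pos for pos, strand in enumerate(end)}
    return [pos_at_b[strand] for strand in start]
```

The condition of Lemma \ref{lem:perm} then amounts to checking that the permutation returned on $[t_j,t_{j+1}]$ swaps positions $i_j$ and $i_j+1$ and fixes the rest.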
\begin{lemma}[\cite{bodepoly}]\label{lem:perm}
Let $F_C$, $C\in\mathcal{C}$ be trigonometric polynomials and let $B=\underset{j=1}{\overset{\ell}{\prod}}\sigma_{i_j}^{\varepsilon_j}$ be a braid, whose closure has $|\mathcal{C}|$ components. Then there exist trigonometric polynomials $G_C$, $C\in\mathcal{C}$, such that
\begin{equation}
\bigcup_{t\in[0,2\pi]}\bigcup_{C\in\mathcal{C}}\bigcup_{j=1}^{s_C}\left(F_C\left(\frac{t+2\pi j}{s_C}\right)+\rmi G_C\left(\frac{t+2\pi j}{s_C}\right),t\right)
\end{equation}
parametrizes a braid that is braid isotopic to $B$ if there exist values $t_j\in[0,2\pi]$, $j=1,2,\ldots,\ell+1$, $t_1=0$, $t_{\ell+1}=2\pi$, $t_j<t_{j+1}$, such that the permutation associated to the $F_C$s in the interval $[t_j,t_{j+1}]$ is the transposition $(i_j\leftrightarrow i_j+1)$.
\end{lemma}
The algorithm in \cite{bodepoly} finds trigonometric polynomials $F_C$ such that the condition in Lemma \ref{lem:perm} is satisfied. We would like to make the $F_C$s generic, while maintaining this property.
The $F_C$s being non-generic could mean that there are tangential intersections between strands of the corresponding generalized singular braid $B_{sing}$ or that there are more than two strands involved in a singular crossing of $B_{sing}$.
\begin{figure}
\centering
\labellist
\large
\pinlabel a) at 100 1600
\pinlabel b) at 1900 1600
\pinlabel c) at 100 800
\pinlabel d) at 1900 800
\endlabellist
\includegraphics[height=6cm]{generic}
\caption{The elimination of non-generic crossings. a) A tangential intersection between strands from different components gets eliminated. b) An intersection between more than 2 strands from different components gets eliminated. c) A tangential intersection between strands from the same component gets eliminated. d) An intersection between more than two strands, all of which are from the same component, is eliminated. \label{fig:elim}}
\end{figure}
Having explicit expressions for the functions $F_C$ we can find all values $t=t_{k}'$, $k=1,2,\ldots,M$, for which there are non-generic crossings. The fact that there are only finitely many of these follows from the real analyticity of the functions. It will simplify our notation if we adopt the convention of updating our variables throughout the modifications outlined below. That is to say, when we change the function $F_C$ for example by adding a term, the resulting function will again be called $F_C$. The values $t_k'$, $k=1,2,\ldots,M$, are again defined as the values of $t$ at which the new collection of functions $F_C$ has non-generic crossings. Note that their number $M$ can change throughout our modification, and will eventually be 0.
A tangential intersection between the strands $(C,j)$ and $(C',j')$ at $t=t_k'$ corresponds to a non-simple root of $F_{C}\left(\tfrac{t+2\pi j}{s_C}\right)-F_{C'}\left(\tfrac{t+2\pi j'}{s_{C'}}\right)$ at $t=t_k'$, and a singular crossing between more than two strands $(C_i,j_i)$, $i=1,2,\ldots,m'$, at $t=t_k'$ corresponds to a common root of $F_{C_i}\left(\tfrac{t+2\pi j_i}{s_{C_i}}\right)-F_{C_{i'}}\left(\tfrac{t+2\pi j_{i'}}{s_{C_{i'}}}\right)$ at $t=t_k'$. We can thus check numerically whether any given collection of trigonometric parametrizations $F_C$ is generic. Likewise, we can check numerically whether the $F_C$s satisfy the condition from Lemma \ref{lem:perm} for the same values $t_j$, $j=1,2,\ldots,\ell+1$, as the original functions $F_C$.
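Such a numerical check can be sketched as follows (an illustrative stand-in for the actual root-finding, assuming a strand difference function is supplied; the grid size and tolerance are ad hoc):

```python
import numpy as np

# Hedged numerical sketch: classify the zeros of a strand difference
# d(t) = F_C((t+2*pi*j)/s_C) - F_C'((t+2*pi*j')/s_C') on a grid over
# [0, 2*pi].  A sign change flags a transverse crossing; |d| dipping
# below tol at a local minimum without a sign change flags a candidate
# tangential intersection.
def classify_crossings(d, n=20000, tol=1e-6):
    t = np.linspace(0, 2*np.pi, n)
    v = d(t)
    transverse, tangential = [], []
    for i in range(n - 1):
        if v[i] == 0 or v[i]*v[i+1] < 0:
            transverse.append(t[i])
        elif abs(v[i]) < tol and (i == 0 or abs(v[i-1]) >= abs(v[i])) and abs(v[i+1]) >= abs(v[i]):
            tangential.append(t[i])
    return transverse, tangential
```

For example, $\sin t$ produces only transverse zeros, while $\cos(t-c)+1$ has a single tangential (double) zero.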
We can remove the tangential intersection points between two strands of different components $C$ and $C'$ by adding small constants $\varepsilon_{C,1}$ to each $F_C$. This is displayed in Figure \ref{fig:elim}a). This requires $\varepsilon_{C,1}\neq\varepsilon_{C',1}$ if $C\neq C'$. We can choose $\varepsilon_{C,1}$ sufficiently small so that the resulting trigonometric polynomials $F_C$ still satisfy the property from Lemma \ref{lem:perm} for the same values $t_j$, $j=1,2,\ldots,\ell+1$. Moreover, the addition of $\varepsilon_{C,1}$ should not introduce any new non-generic crossings. This can be achieved by choosing the values of the different $\varepsilon_{C,1}$s successively, i.e., for an arbitrary ordering of the components $C_1,C_2,\ldots,C_{|\mathcal{C}|}$ we first choose $\varepsilon_{C_1,1}$ such that it removes tangential intersection points involving strands from $C_1$ without introducing new ones, then we choose $\varepsilon_{C_2,1}$ and so on. We also add a small constant to every $F_C$ that is constant. Such components consist only of a single vertical strand, and adding a small constant guarantees that none of them are involved in any non-generic crossings.
Note that sufficient values for $\varepsilon_{C,1}$ can be found explicitly knowing the values of $t$ for which we have generic or non-generic crossings as well as maxima and minima of the functions $F_{C}\left(\tfrac{t+2\pi j}{s_C}\right)-F_{C'}\left(\tfrac{t+2\pi j'}{s_{C'}}\right)$, $C,C'\in\mathcal{C}$, $j\in\{1,2,\ldots,s_C\}$, $j'\in\{1,2,\ldots,s_{C'}\}$. Alternatively, since we can check numerically if the $F_C$s are generic or not, we can take $\varepsilon_{C,1}$ to be an element of a non-zero sequence converging to $0$ and if the resulting $F_C$ is non-generic, we redefine $\varepsilon_{C,1}$ to be the next element in that sequence.
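The fallback search described above can be sketched as follows (a hypothetical helper; `is_generic` stands in for the numerical genericity and Lemma \ref{lem:perm} checks and is an assumption of this sketch):

```python
# Hedged sketch of the fallback choice of eps_{C,1}: walk down a
# non-zero sequence converging to 0 until the perturbed family passes
# the user-supplied genericity test.
def find_epsilon(is_generic, eps0=0.1, max_steps=60):
    eps = eps0
    for _ in range(max_steps):
        if is_generic(eps):
            return eps
        eps /= 2.0
    raise RuntimeError("no admissible epsilon found")
```

With a toy test that only accepts sufficiently small non-zero perturbations, the search halts at the first element of the sequence below the threshold.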
By taking $F_C(t+\varepsilon_{C,2})$ instead of $F_C(t)$ as the trigonometric polynomial for the component $C$ with appropriately chosen small $\varepsilon_{C,2}$, we obtain a parametrization where every singular crossing that involves
more than two strands only involves strands from the same component. This is achieved by choosing $\varepsilon_{C,2}\neq\varepsilon_{C',2}$ if $C\neq C'$ and each $\varepsilon_{C,2}$ sufficiently small. We do not introduce any new non-generic crossings in doing this, since the intersections are transverse, the curves are real analytic and the intersections do not involve any constant strands. The effect is shown in Figure \ref{fig:elim}b). How small we have to choose each $\varepsilon_{C,2}$ can be determined from the values of $t$ for which there are crossings. Note in particular that the $\varepsilon_{C,2}$s can be chosen such that the condition from Lemma \ref{lem:perm} is still satisfied for the same values $t_j$, $j=1,2,\ldots,\ell+1$, as before. As with the $\varepsilon_{C,1}$s, we find the values of $\varepsilon_{C,2}$ successively: we choose a value for one component $C$ and only then decide on the value for the next component $C'$, and so on.
Thus the only remaining non-generic crossings are between strands of the same component. Suppose we have a tangential intersection between $(C,j)$ and $(C,j')$ at $t=t_k'$. Then we add $\varepsilon\cos\left(t-\frac{t_k'+2\pi j}{s_C}\right)$ to $F_C$, where as usual $\varepsilon$ is small and its sign is determined by the sign of $F_{C}\left(\tfrac{t+2\pi j}{s_C}\right)-F_{C}\left(\tfrac{t+2\pi j'}{s_{C}}\right)$ in a neighbourhood of $t=t_k'$.
By adding $\varepsilon\cos\left(t-\frac{t_k'+2\pi j}{s_C}\right)$ we know that $F_{C}\left(\tfrac{t+2\pi j}{s_C}\right)-F_{C}\left(\tfrac{t+2\pi j'}{s_{C}}\right)$ is non-zero in a neighbourhood of $t=t_k'$ (independent of $\varepsilon$) for all $\varepsilon$ of the correct sign and sufficiently small modulus. We have thus reduced the number of tangential intersections by one, see Figure \ref{fig:elim}c). Proceeding like this we eliminate all tangential intersections between strands, including tangential intersections that are part of non-generic crossings with more than two strands. Again we can choose $\varepsilon$ sufficiently small so that Lemma \ref{lem:perm} is still satisfied for $t_j$, $j=1,2,\ldots,\ell+1$.
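A toy computation (hypothetical values for $s_C$, $j$, $j'$ and $t_k'$, here called `t_k`) confirms that the added cosine term separates the two strands at the former tangential intersection:

```python
import numpy as np

# The term eps*cos(t - (t_k + 2*pi*j)/s_C) added to F_C shifts the
# difference of the strands (C, j) and (C, j') at t = t_k by
# eps*(1 - cos(2*pi*(j' - j)/s_C)), which is non-zero since j and j'
# are not congruent mod s_C.
s_C, j, jp, t_k, eps = 3, 0, 1, 1.0, 0.01
phase = (t_k + 2*np.pi*j)/s_C
pert = lambda u: eps*np.cos(u - phase)           # contribution of the new term at parameter u
shift = pert((t_k + 2*np.pi*j)/s_C) - pert((t_k + 2*np.pi*jp)/s_C)
predicted = eps*(1 - np.cos(2*np.pi*(jp - j)/s_C))
```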
Thus the only remaining non-generic crossings are crossings that involve more than two strands and all of them are from the same component. Suppose we have such a crossing at $t=t_k'$ and two of the strands involved in that crossing are $(C,j)$ and $(C,j')$. Then we add $\varepsilon'\cos\left(t-\varphi\right)$, where $\varphi=t_k'/s_C-\pi+\pi(j+j')/s_C$. This value is chosen such that $\cos\left(\tfrac{t_k'+2\pi j}{s_C}-\varphi\right)=\cos\left(\tfrac{t_k'+2\pi j'}{s_C}-\varphi\right)$, while $\cos\left(\tfrac{t_k'+2\pi j}{s_C}-\varphi\right)\neq\cos\left(\tfrac{t_k'+2\pi j''}{s_C}-\varphi\right)$ for all $j''\notin\{j,j'\}$. Thus after adding $\varepsilon'\cos\left(t-\varphi\right)$ we still have a crossing at $t=t_k'$ between $(C,j)$ and $(C,j')$, but no other strand is involved in that crossing. Hence it is a generic crossing. The other strands that used to be part of that crossing have been moved aside and could form other non-generic crossings with more than two strands. Therefore, we have not necessarily reduced the number of non-generic crossings in this step. However, we have reduced the total number of strands involved in non-generic crossings, summed over all non-generic crossings. Thus repeating this step we can eliminate all non-generic crossings and obtain a generic parametrization $F_C$. This elimination process is illustrated in Figure \ref{fig:elim}d).
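The condition on $\varphi$ can be verified numerically. In this toy check (hypothetical values $s_C=7$, $j=1$, $j'=4$, $t_k'=0.9$) the phase is taken as $\varphi=t_k'/s_C-\pi+\pi(j+j')/s_C$, with the sum $j+j'$ in the index-dependent shift, which is what makes the two cosines agree:

```python
import numpy as np

# With phi = t_k/s_C - pi + pi*(j + j')/s_C the cosine term takes the
# same value on the strands (C, j) and (C, j') at t = t_k and a
# different value on every other strand (C, j'').
s_C, j, jp, t_k = 7, 1, 4, 0.9
phi = t_k/s_C - np.pi + np.pi*(j + jp)/s_C
vals = [np.cos((t_k + 2*np.pi*m)/s_C - phi) for m in range(s_C)]
```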
It is more difficult to give an explicit formula for a sufficient value of $\varepsilon'$. Since we are not particularly concerned with achieving an optimal run-time for our algorithm, we may again resort to the approach of using a non-zero sequence converging to $0$ and checking at each value of $\varepsilon'$ if the resulting parametrization is generic and satisfies Lemma \ref{lem:perm}.
Let now $\tilde{F}_C$, $C\in\mathcal{C}$, denote the generic trigonometric polynomials that we obtain from this procedure and let $B_{sing}=\underset{k=1}{\overset{\ell'}{\prod}}\tau_{j_k}$ be the singular braid that is parametrized by the $\tilde{F}_C$s.
\begin{lemma}\label{lem:signs}
There is a choice of signs $\varepsilon_k\in\{\pm 1\}$, $k=1,2,\ldots,\ell'$, such that the input braid $B$ is braid isotopic to $\underset{k=1}{\overset{\ell'}{\prod}}\sigma_{j_k}^{\varepsilon_k}$.
\end{lemma}
\begin{proof}
We have selected the values of the different $\varepsilon_{C,1}$s, $\varepsilon_{C,2}$s, $\varepsilon$s and $\varepsilon'$s such that the $\tilde{F}_C$s still satisfy the condition from Lemma \ref{lem:perm} for the same values $t_j$, $j=1,2,\ldots,\ell+1$, as the original $F_C$s. Therefore, by Lemma \ref{lem:perm} there exist trigonometric polynomials $\tilde{G}_C$, $C\in\mathcal{C}$, such that $\tilde{F}_C+\rmi \tilde{G}_C$ parametrizes a braid $B'$ that is braid isotopic to $B$. Since the $\tilde{F}_C$s are generic, $B_{sing}$ is obtained from a braid diagram of $B'$ by forgetting the information about the signs of the crossings. Thus the value of $\varepsilon_k$ is the sign of the corresponding crossing in the braid $B'$.
\end{proof}
Note that throughout Step 2 we only add terms of degree 0 or 1 with respect to $\rme^{\rmi t}$ and $\rme^{-\rmi t}$ and the degree 1 terms are only necessary for components with more than one strand. Therefore, the degrees of the trigonometric polynomials $F_C$ are not affected by the procedure above and the degree of $\tilde{F}_C$ is equal to the degree of $F_C$.
\subsection{Trigonometric interpolation (Step 5)}\label{sec:A}
For a given generic collection of trigonometric polynomials $\tilde{F}_C$ the roots of
\begin{equation}\label{eq:defg}
g(u,\rme^{\rmi t})=\underset{C\in\mathcal{C}}{\prod}\underset{j=1}{\overset{s_C}{\prod}}\left(u-\tilde{F}_C\left(\tfrac{2t+2\pi j}{s_C}\right)\right)
\end{equation}
form a singular braid $B_{sing}^2$ that is the square of the singular braid $B_{sing}$. Both of these singular braids have $s=\underset{C\in\mathcal{C}}{\sum}s_C$ strands.
Let $t_k$, $k=1,2,\ldots,\ell'$ denote the values of $t\in[0,2\pi]$ for which there are singular crossings in $B_{sing}$. By a shift of the variable $t$, we can always guarantee that $t_k\neq \pi$ for all $k$.
Denote by $(C_1(k),j_1(k))$ and $(C_2(k),j_2(k))$ the two strands that form the crossing at $t=t_k$. Which of these strands carries which label is not important, but by convention we choose the labels such that $\tilde{F}_{C_1(k)}\left(\tfrac{t+2\pi j_1(k)}{s_{C_1(k)}}\right)<\tilde{F}_{C_2(k)}\left(\tfrac{t+2\pi j_2(k)}{s_{C_2(k)}}\right)$ for all $t\in(t_k-\varepsilon,t_k)$ for some small $\varepsilon>0$.
Note that $g$ from Eq.~\eqref{eq:defg} has only real roots and is therefore a real polynomial. Since all roots of $g(\cdot,\rme^{\rmi t})$ are simple when $t\neq t_k$, $k=1,2,\ldots,\ell'$, there is a critical point of $g$ between each neighbouring pair of roots of $g$, i.e., if $u_1,u_2\in\mathbb{R}$ are roots of $g(\cdot,\rme^{\rmi t})$ and there is no root of $g(\cdot,\rme^{\rmi t})$ in the open interval $(u_1,u_2)$, there is a unique critical point $c\in(u_1,u_2)$. We call $\sign(g(c,\rme^{\rmi t}))$ the sign of the critical point $c$. As $t$ varies, the critical points of $g(\cdot,\rme^{\rmi t})$ move on the real line, but they remain distinct and maintain their sign for all $t\neq t_k$, $k=1,2,\ldots,\ell'$.
At $t=t_k$ two roots and their intermediate critical point $c$ collide. We say that $c$ is the critical point associated with the crossing.
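For a fixed $t$ this structure is easy to compute numerically. A sketch (assuming simple real roots and using generic `numpy` root finding, nothing specific to $g$):

```python
import numpy as np

# For a monic real polynomial with simple real roots, locate the unique
# critical point between each neighbouring pair of roots and record the
# sign of the polynomial there.
def critical_signs(roots):
    roots = sorted(roots)
    g = np.poly1d(np.poly(roots))           # monic polynomial with these roots
    crit = np.sort(g.deriv().roots.real)    # all critical points are real here
    signs = []
    for u1, u2 in zip(roots[:-1], roots[1:]):
        c = crit[(crit > u1) & (crit < u2)][0]   # unique critical point in (u1, u2)
        signs.append(int(np.sign(g(c))))
    return signs
```

For $g(u)=u(u+1)(u-2)$ the two critical values alternate in sign, starting with the local maximum between the two smallest roots.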
Step 5 of Algorithm 1 is to solve the following trigonometric interpolation problem $(*)$:\\
The set of data points takes the form $(t_k,y_k,z_k)$, $k=1,2,\ldots,\ell'$, where $t_k$, $k=1,2,\ldots,\ell'$, are, as above, the values of $t$ for which there are crossings of $B_{sing}$. The value $y_k$ is such that $\tfrac{y_k}{\cos\left(\tfrac{t_k}{2}\right)}$ is a non-zero real number that has the same sign as the critical point associated with the crossing at $t=t_k$.
We know from Lemma \ref{lem:signs} that for every crossing of $B_{sing}=\underset{k=1}{\overset{\ell'}{\prod}}\tau_{j_k}$ there is a choice of sign $\varepsilon_k\in\{\pm 1\}$ such that $B'=\prod_{k=1}^{\ell'}\sigma_{j_k}^{\varepsilon_{k}}$ is braid isotopic to $B$ and thus closes to the desired link. The value of $z_k$ is set to $\varepsilon_k$.
\ \\
\textbf{The interpolation problem $(*)$}: Find a trigonometric polynomial $\tilde{A}:S^1\to\mathbb{C}$ such that $\tilde{A}(\rme^{\rmi t_k})=\tfrac{y_k}{\cos\left(\tfrac{t_k}{2}\right)}$ and $\tfrac{\partial \arg(\tilde{A})}{\partial t}(\rme^{\rmi t_k})=z_k$ for all $k\in\{1,2,\ldots,\ell'\}$.
\ \\
Since
\begin{equation}
\frac{\partial \arg(\tilde{A})}{\partial t}(\rme^{\rmi t_k})=\left.\left(\frac{\text{Re}(\tilde{A})\frac{\partial \text{Im}(\tilde{A})}{\partial t}-\text{Im}(\tilde{A})\frac{\partial \text{Re}(\tilde{A})}{\partial t}}{|\tilde{A}|^2}\right)\right|_{t=t_k},
\end{equation} the interpolation problem above can be written as an interpolation where the values of the data points correspond to values of the desired function $\tilde{A}$ and its first derivative. Such an interpolation always has a solution that can be found via explicit formulas such as the one in \cite{nathan}. The degree of the solution is equal to $\ell'$.
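Such a value-and-derivative interpolation can also be set up numerically as a small linear system (a generic sketch, not the explicit formula of \cite{nathan}; the least-squares solve returns an exact interpolant here because the system is underdetermined):

```python
import numpy as np

# Hedged sketch: solve a trigonometric Hermite interpolation
# T(t_j) = v_j, T'(t_j) = w_j for a real trigonometric polynomial
# T(t) = a_0 + sum_{k=1}^d (a_k cos kt + b_k sin kt) of degree
# d = number of data points.
def trig_hermite(ts, vs, ws):
    ts = np.asarray(ts, dtype=float)
    d = len(ts)
    ks = np.arange(1, d + 1)
    # rows enforcing the values T(t_j) = v_j
    row_v = np.hstack([np.ones((len(ts), 1)),
                       np.cos(np.outer(ts, ks)), np.sin(np.outer(ts, ks))])
    # rows enforcing the derivatives T'(t_j) = w_j
    row_w = np.hstack([np.zeros((len(ts), 1)),
                       -ks*np.sin(np.outer(ts, ks)), ks*np.cos(np.outer(ts, ks))])
    M = np.vstack([row_v, row_w])
    coef, *_ = np.linalg.lstsq(M, np.concatenate([vs, ws]), rcond=None)
    a0, a, b = coef[0], coef[1:d+1], coef[d+1:]
    return lambda t: a0 + np.sum(a*np.cos(ks*t)) + np.sum(b*np.sin(ks*t))
```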
We then define $A(\rme^{\rmi t}):=\tilde{A}(\rme^{\rmi 2t})\cos(t)$, which satisfies $A(\rme^{\rmi t_k/2})=y_k$ and
$\tfrac{\partial \arg(A)}{\partial t}(\rme^{\rmi t_k/2})$ has the same sign as $\varepsilon_k$ for all $k\in\{1,2,\ldots,\ell'\}$.
Since $A$ is odd, i.e., $A(\rme^{\rmi (t+\pi)})=-A(\rme^{\rmi t})$, it automatically also satisfies $A(\rme^{\rmi (t_{k}/2+\pi)})=-y_k$ and $\tfrac{\partial \arg(A)}{\partial t}(\rme^{\rmi (t_{k}/2+\pi)})$ also has the same sign as $\varepsilon_k$ for all $k\in\{1,2,\ldots,\ell'\}$.
\section{Weakly isolated singularities}\label{sec:proof}
In this section we prove that Algorithm 1 does what it is supposed to do: It constructs weakly isolated singularities with the desired link as the link of the singularity. Thereby we establish a proof of Theorem \ref{thm:ak}.
We use the same notation as in the previous sections. $\tilde{F}_C$ is a generic trigonometric parametrization of the singular braid $B_{sing}=\underset{k=1}{\overset{\ell'}{\prod}}\tau_{j_k}$. Let $\varepsilon_k\in\{\pm 1\}$, $k=1,2,\ldots,\ell'$ and let $A:S^1\to\mathbb{C}$ be the trigonometric polynomial found via the interpolation procedure in Step 5 of Algorithm 1. Let $g$, $p_k$ and $f=p_k+r^mA(\rme^{\rmi t})$ be defined as in the description of Algorithm 1.
\begin{lemma}
For every fixed and sufficiently small $r_*>0$ the vanishing set of $f|_{r=r_*}:\mathbb{C}\times S^1\to\mathbb{C}$ is the closed braid $\underset{k=1}{\overset{\ell'}{\prod}}\sigma_{j_k}^{\varepsilon_k}$.
\end{lemma}
\begin{proof}
The vanishing set of $f|_{r=r_*}$ corresponds (up to an overall scaling in the $u$-coordinate) to the vanishing set of $g+r_*^{m-2ks}A$, which is equal to $(g_t)^{-1}(-r_*^{m-2ks}A(\rme^{\rmi t}))$.
Since $g_t$ is monic and real and its critical points are distinct for all values of $t\in[0,2\pi]$, there is a diffeomorphism $h:\mathbb{C}\times S^1\to\mathbb{C}\times S^1$ that is the identity outside of $\{(u,\rme^{\rmi t}):|u|<R\}$ for some $R>0$ and that preserves the fibers of the projection map onto the second factor $\mathbb{C}\times S^1\to S^1$, and a disk $D$ such that $(g_t(h))^{-1}(\mathbb{R})\cap D$ is the union of the real line ($\{(u,\rme^{\rmi t}):\text{Im}(u)=0\}\cap D$) and $s-1$ straight, ``vertical'' lines orthogonal to the real line for every $t\in[0,2\pi]$. This is displayed in Figure \ref{fig:fol}. Note that the vertical lines intersect the real line in the critical points of $g_t$. Since the critical points vary with $t$, so do the vertical lines.
\begin{figure}
\centering
\labellist
\large
\pinlabel $\mathbb{R}$ at 1950 1150
\pinlabel $D$ at 1250 550
\pinlabel $(g_t(h))^{-1}(\mathbb{R})$ at -200 1600
\endlabellist
\includegraphics[height=6cm]{hD}
\caption{The preimage $(g_t(h))^{-1}(\mathbb{R})$ in the complex plane for a fixed value of $t\in[0,2\pi]$ and the disk $D$.\label{fig:fol}}
\end{figure}
Let $t_k$, $k=1,2,\ldots,2\ell'$, denote the values of $t$ for which there are crossings of $B_{sing}^2$. Note that for $k\leq\ell'$ these differ from the values of $t_k$ in the previous section, corresponding to the crossings of $B_{sing}$, by a factor of $1/2$. By symmetry we have $t_{k+\ell'}=t_k+\pi$. Figure \ref{fig:motion} shows subsets of the complex plane in a neighbourhood of a singular crossing at $t=t_k$, and at $t=t_k+\pi$. The black lines are the preimage set $(g_t(h))^{-1}(\mathbb{R})$ with the horizontal line being a segment of the real line. The red points are the roots of $g_t$ at values $t=t_k-2\varepsilon$, $t=t_k-\varepsilon$, $t=t_k$, $t=t_k+\varepsilon$ and $t=t_k+2\varepsilon$. By symmetry the corresponding roots at $t_{k+\ell'}$ are the same. The blue points indicate the roots of $g_t(h)+\delta A(\rme^{\rmi t})$, which are the preimage points $(g_t(h))^{-1}(-\delta A(\rme^{\rmi t}))$, for some small $\delta>0$.
\begin{figure}
\centering
\labellist
\large
\pinlabel a) at 100 2400
\footnotesize
\pinlabel $t=t_k-2\varepsilon+\pi$ at 400 2200
\pinlabel $t=t_k-2\varepsilon$ at 400 1000
\endlabellist
\includegraphics[height=4.5cm]{Ederalgorithm5a}
\labellist
\large
\pinlabel b) at 100 2400
\footnotesize
\pinlabel $t=t_k-\varepsilon+\pi$ at 400 2200
\pinlabel $t=t_k-\varepsilon$ at 400 1000
\endlabellist
\includegraphics[height=4.5cm]{Ederalgorithm6a}
\labellist
\large
\pinlabel c) at 100 2400
\footnotesize
\pinlabel $t=t_k+\pi$ at 400 2200
\pinlabel $t=t_k$ at 400 1000
\endlabellist
\includegraphics[height=4.5cm]{Ederalgorithm7a}
\labellist
\large
\pinlabel d) at 100 2400
\footnotesize
\pinlabel $t=t_k+\varepsilon+\pi$ at 400 2200
\pinlabel $t=t_k+\varepsilon$ at 400 1000
\endlabellist
\includegraphics[height=4.5cm]{Ederalgorithm8a}
\labellist
\large
\pinlabel e) at 100 2400
\footnotesize
\pinlabel $t=t_k+2\varepsilon+\pi$ at 400 2200
\pinlabel $t=t_k+2\varepsilon$ at 400 1000
\endlabellist
\includegraphics[height=4.5cm]{Ederalgorithm9a}
\caption{The motion of the roots of $g_t$ (in red) and of $g_t+r^{m-2ks}A$ (in blue) in the complex plane in a neighbourhood of singular crossings at $t=t_k$ and $t=t_k+\pi$, $k\in\{1,2,\ldots,\ell'\}$. For each subfigure the lower part shows the behaviour near $t=t_k$ and the upper part shows the behaviour near $t=t_k+\pi$. a) At $t=t_k-2\varepsilon$ and $t=t_k+\pi-2\varepsilon$. b) At $t=t_k-\varepsilon$ and $t=t_k+\pi-\varepsilon$. c) At $t=t_k$ and $t=t_k+\pi$. d) At $t=t_k+\varepsilon$ and $t=t_k+\pi+\varepsilon$. e) At $t=t_k+2\varepsilon$ and $t=t_k+\pi+2\varepsilon$. \label{fig:motion}}
\end{figure}
By construction $A(\rme^{\rmi t_k})$, $k=1,2,\ldots,\ell'$, is real and has the same sign as the critical point associated with the crossing at $t=t_k$. Hence the two preimage points $(g_{t_k}(h))^{-1}(-\delta A(\rme^{\rmi t_k}))$ lie on the real line on opposite sides of the vertical line for all values of $\delta=r^{m-2ks}>0$ as indicated in the lower part of Figure \ref{fig:motion}c).
Since the derivative of the argument of $A$ is non-zero at $t=t_k$, there is a neighbourhood $U$ of $t_k$ independent of $\delta$ such that $t=t_k$ is the only point in the neighbourhood where $\arg(\delta A)$ is 0 or $\pi$. Thus $t=t_k$ is the only point in $U$ for which the roots of $g_t+\delta A(\rme^{\rmi t})$ lie on $g_t^{-1}(\mathbb{R})$. The two roots (which are the preimage points $(g_{t_k}(h))^{-1}(-\delta A(\rme^{\rmi t_k}))$) lie on opposite sides of the vertical line at $t=t_k$ and cannot cross the vertical line while $t$ is in $U$.
Recall that a crossing only occurs when two strands have the same $\text{Re}(u)$-coordinate. Since the two preimage points remain on opposite sides of the vertical line throughout $U$, there is no crossing between the strands that are formed by the two preimage points $(g_{t}(h))^{-1}(-\delta A(\rme^{\rmi t}))$ in a neighbourhood of the original crossing for all sufficiently small $\delta>0$.
Thus all crossings at $t=t_k$, $k=1,2,\ldots,\ell'$, are resolved as in Figure \ref{fig:resolution}a), that is, there are no more crossings in the lower half of the braid.
\begin{figure}
\centering
\labellist
\large
\pinlabel a) at 100 2600
\pinlabel b) at 100 1600
\pinlabel $\varepsilon_k=1$ at 1300 1300
\pinlabel $\varepsilon_k=-1$ at 1300 400
\endlabellist
\includegraphics[height=7.5cm]{resolution}
\caption{Resolution of singular crossings. a) A singular crossing is resolved into two strands without a crossing. b) A singular crossing is resolved into a classical crossing with sign $\varepsilon_k$.\label{fig:resolution}}
\end{figure}
By symmetry $A(\rme^{\rmi t_{k+\ell'}})=A(\rme^{\rmi (t_k+\pi)})$, $k=1,2,\ldots,\ell'$, is real and has the opposite sign to the critical point associated with the crossing at $t=t_{k+\ell'}$. Therefore, the two preimage points $(g_{t}(h))^{-1}(-\delta A(\rme^{\rmi t}))$ both lie on the vertical line, one ``above'' (with positive imaginary part) the real line and one ``below'' (negative imaginary part), see the upper part of Figure \ref{fig:motion}c). Furthermore, we know that the sign of $\tfrac{\partial \arg(A)}{\partial t}$ is the sign of the desired crossing. Suppose that this sign is positive. Then the point above the real line is moving from right to left and the point below is moving from left to right, relative to the motion of the vertical line. That is, there is an $\varepsilon>0$ such that for all $t\in(t_{k+\ell'}-\varepsilon,t_{k+\ell'})$ the point above the real line is in the upper right quadrant and the point below is in the lower left quadrant, while for all $t\in(t_{k+\ell'},t_{k+\ell'}+\varepsilon)$ the point above the real line is in the upper left quadrant and the point below the real line is in the lower right quadrant.
Recall again that there is a crossing if and only if the two points have the same $\text{Re}(u)$-coordinate. This means that in $(t_{k+\ell'}-\varepsilon,t_{k+\ell'}+\varepsilon)$ there is a unique crossing, which occurs at $t=t_{k+\ell'}$. By our sign convention the sign of this crossing is positive, since the point below the real line passes the point above the real line from left to right.
Likewise, if the desired sign $\varepsilon_{k}$ of the crossing is negative, then the point above the real line is moving from left to right and the point below is moving from right to left. That is, there is an $\varepsilon>0$ such that for all $t\in(t_{k+\ell'}-\varepsilon,t_{k+\ell'})$ the point above the real line is in the upper left quadrant and the point below is in the lower right quadrant, while for all $t\in(t_{k+\ell'},t_{k+\ell'}+\varepsilon)$ the point above the real line is in the upper right quadrant and the point below the real line is in the lower left quadrant. Thus there is a unique crossing at $t=t_{k+\ell'}$ and it has a negative sign.
In either case we obtain a classical crossing of the required sign as in Figure \ref{fig:resolution}b). Note that the $\varepsilon$-neighbourhood can be chosen independently of $\delta$ and thus independently of $r$, so that we have the correct crossing for all small values of $r$. Outside of the discussed neighbourhoods of $t_k$, $k=1,2,\ldots,2\ell'$, we can guarantee that there are no crossings when $r$ is sufficiently small. It follows that the zeros of $g_t(h)+r^{m-2ks}A(\rme^{\rmi t})$ form a closed braid in $\mathbb{C}\times S^1$ as $t$ varies from $0$ to $2\pi$.
Since the singular crossings in the first half of $B_{sing}^2$ at $t=t_k$, $k=1,2,\ldots,\ell'$, are all resolved into strands without crossings and the singular crossings in the second half of $B_{sing}^2$ at $t=t_k+\pi$, $k=1,2,\ldots,\ell'$, are resolved as desired, i.e., $\tau_{j_k}\mapsto \sigma_{j_k}^{\varepsilon_k}$, the braid formed by the roots of $g_t(h)+r^{m-2ks}A(\rme^{\rmi t})$ is represented by the word $\underset{k=1}{\overset{\ell'}{\prod}}\sigma_{j_k}^{\varepsilon_k}$, which by construction is braid isotopic to the braid $B$ that we used as input. Since $h$ is a diffeomorphism that preserves the fibers of the projection map onto the second factor $\mathbb{C}\times S^1\to S^1$, the roots of $g_t+r^{m-2ks}A(\rme^{\rmi t})$ form a braid that is isotopic to $B$ as a closed braid.
\end{proof}
\begin{lemma}
The constructed semiholomorphic polynomial $f$ has a weakly isolated singularity whose link is the closure of the given braid $B$.
\end{lemma}
\begin{proof}
At $v=0$ we have that $f(u,v)=u^s$, so that the origin is the only critical point with $v=0$.
We have shown that the roots of $f|_{|v|=r}$ form a braid as $t$ varies from $0$ to $2\pi$ for small values of $r>0$. In particular, all roots of $f|_{|v|=r}$ are simple, which means that $\tfrac{\partial f}{\partial u}\neq 0$ on $f^{-1}(0)\backslash \{O\}$. Thus $f$ has a weakly isolated singularity at the origin.
We also have that $f^{-1}(0)\cap(\mathbb{C}\times rS^1)$ is isotopic to the closed braid $B$ for all sufficiently small values of $r$. As $r$ goes to zero, the $u$-coordinates of all strands converge to zero. As in \cite{bodepoly} we can construct an explicit isotopy between the projection of $f^{-1}(0)\cap(\mathbb{C}\times rS^1)$ to $S_r^3$ and $f^{-1}(0)\cap S_r^3$ for small values of $r$, which shows that the closure of $B$ is the link of the singularity.
\end{proof}
\section{Upper bounds on the degree}\label{sec:bounds}
In this section we prove Theorem \ref{thm:bound} and Corollary \ref{cor}. The proof of the upper bound on the degree of the constructed polynomials is very similar to the one of the bound obtained in \cite{bodepoly}.
\begin{proof}
Since $F_C$ is found via trigonometric interpolation, its degree (as a trigonometric polynomial) is equal to $\left\lfloor\tfrac{x}{2}\right\rfloor$, where $x$ is the number of data points used in the interpolation and $\left\lfloor y\right\rfloor$ is the floor function that maps any real number $y$ to the largest integer less than or equal to $y$. As in \cite{bodepoly} we need $s_C\ell$ data points for the interpolation for $F_C$, so that the degree of $F_C$ is $\left\lfloor\tfrac{s_C\ell}{2}\right\rfloor$. (In \cite{bodepoly} this was erroneously stated as $\left\lfloor\tfrac{s_C\ell-1}{2}\right\rfloor$.)
We can assume that $\ell>1$, since the closure of the braid is not an unknot. As observed in Section \ref{sec:generic} the degree of $\tilde{F}_C$ is equal to the degree of $F_C$. The degree of $g$ is $\underset{C\in\mathcal{C}}{\sum}\max\{s_C,2\deg(F_C)\}\leq s\ell$. Thus $k=\left\lceil\ell/2\right\rceil$ is a choice that guarantees that $p_k$ is a polynomial, where $\left\lceil y\right\rceil$ is the smallest integer greater than or equal to $y$.
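Spelling out the middle estimate: since $\deg(F_C)=\left\lfloor\tfrac{s_C\ell}{2}\right\rfloor$, we have $2\deg(F_C)\leq s_C\ell$ and $s_C\leq s_C\ell$ for $\ell\geq 1$, so
\[
\deg(g)=\underset{C\in\mathcal{C}}{\sum}\max\{s_C,2\deg(F_C)\}\leq \underset{C\in\mathcal{C}}{\sum}s_C\ell=s\ell.
\]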
The degree of $p_k$ is then equal to $2ks\leq s(\ell+1)$.
The trigonometric polynomial $\tilde{A}$ is found via trigonometric interpolation, where for every singular crossing of $B_{sing}$ there is one data point for the value of $\tilde{A}$ and one data point for its derivative. The degree of $\tilde{A}$ is then equal to $\ell'$, where $\ell'$ is the number of singular crossings of $B_{sing}$ \cite{nathan}. Recall that $\ell'$ could be strictly greater than $\ell$.
Singular crossings of $B_{sing}$ correspond to intersections of the curves parametrized by $\tilde{F}_C$, which correspond to the zeros of certain complex polynomials on the unit circle as in \cite{bodepoly}. It was shown in \cite{bodepoly} that the number of singular crossings that involve two strands from the same component $C$ is bounded above by $(s_C+1)s_C\ell$. (Following the mistake in \cite{bodepoly} mentioned above, this bound was originally stated as $(s_C+1)(s_C\ell-1)$.)
It is also shown in \cite{bodepoly} that there are at most $\ell s_Cs_{C'}$ singular crossings with one strand from the component $C$ and the other strand from a component $C'\neq C$. The total number of singular crossings, and hence the degree of $\tilde{A}$, is bounded from above by
\begin{align}
\deg(\tilde{A})&\leq \underset{C\in\mathcal{C}}{\sum}(s_C+1)s_C\ell+\frac{1}{2}\underset{C\in\mathcal{C}}{\sum}\underset{C'\neq C}{\sum}\ell s_C s_{C'}\nonumber\\
&= \underset{C\in\mathcal{C}}{\sum}(s_C+1)s_C\ell+\frac{1}{2}\underset{C\in\mathcal{C}}{\sum}\ell s_C (s-s_C)\nonumber\\
&=\underset{C\in\mathcal{C}}{\sum}\frac{1}{2}s_C^2\ell+s\ell(1+\frac{s}{2}).
\end{align}
We need to choose $m$, which will equal the degree of $f$, to be greater than the degree of $p_k$ and at least as large as the degree of $A$. The degree of $A$ is $2\deg(\tilde{A})+1$ and the degree of $p_k$ is at most $s(\ell+1)$. Thus
\begin{equation}
m=\underset{C\in\mathcal{C}}{\sum}s_C^2\ell+s\ell(2+s)+1>2s\ell>s(\ell+1)
\end{equation}
is a sufficient choice.
Therefore, the degree of $f$ may be chosen so that
\begin{equation}
\deg(f)\leq\underset{C\in\mathcal{C}}{\sum}s_C^2\ell+s\ell(2+s)+1.
\end{equation}
\end{proof}
If the closure of $B$ is a knot, we have that $|\mathcal{C}|=1$ and $s_C=s$. Corollary \ref{cor} follows immediately.
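Explicitly, substituting $|\mathcal{C}|=1$ and $s_C=s$ into the bound above gives
\[
\deg(f)\leq s^2\ell+s\ell(2+s)+1=2s\ell(s+1)+1.
\]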
\section{Theoretically clean vs the rest of the observables}
Recent LHCb measurements have indicated tensions with the Standard Model (SM) predictions in a number of $b \to s$ decays.
There are tensions in the angular observables of the $B \to K^* \mu^+\mu^-$ decay with the most significant tension in the $P_5^{\prime}$ observable~\cite{LHCb:2020lmf}. Similar tensions have also been measured in the $B^+ \to K^{*+}\mu^+\mu^-$ decay~\cite{LHCb:2020gog}.
Furthermore, the branching ratio of several $B$-decays such as $B\to K \mu^+\mu^-, B_s\to \phi \mu^+\mu^-$ and $\Lambda_b \to \Lambda \mu^+\mu^-$ have been measured to be below the SM prediction~\cite{LHCb:2014cxe,LHCb:2015tgy,LHCb:2021zwz}.
The recent LHCb measurement on the lepton flavour universality violating (LFUV) observable $R_K$ has confirmed the tension with the SM with $3.1\sigma$ significance~\cite{LHCb:2021trn}. LHCb has measured similar deviations in $R_{K^*}$ in the two low $q^2$ bins with $2.3$ and $2.5\sigma$ significance~\cite{LHCb:2017avl}.
To study the New Physics (NP) implication of these measurements, all the relevant $B$-decay observables should be considered.
However, the precision of the theoretical predictions is not the same for all these observables.
Due to the cancellation of hadronic uncertainties in the numerator and the denominator,
the LFUV observables $R_{K^{(*)}}={\rm BR}(B\to K^{(*)}\mu\mu)/{\rm BR}(B\to K^{(*)}ee)$ are predicted very precisely in the SM, with a theoretical uncertainty of less than 1~(3)\% for the $q^2~\in~[1.1,6] ([0.045,1.1])$ GeV$^2$ bin. Another clean observable with small theoretical uncertainty (less than 5\%) is the branching ratio of the $B_s \to \mu^+ \mu^-$ decay. On the other hand, the rest of the $b \to s$ observables in general suffer from larger theoretical uncertainties due to hadronic contributions. Although less sensitivity to local form factor uncertainties is achievable with an appropriate choice of angular observables, there are still contributions from power corrections of non-local hadronic effects, which are not well known within QCD factorisation and are usually ``guesstimated'' (for a study of the impact of the local and non-local hadronic uncertainties on NP fits see Ref.~\cite{Hurth:2016fbr}).
Therefore, we separate the theoretically ``clean observables''
from the rest of the $b \to s$ observables and compare the NP implications
and coherence of NP fits to these two data sets.
For the analysis we have used the {\ttfamily SuperIso} public program~\cite{Mahmoudi:2007vz}.
From Table~\ref{tab:Clean_vs_rest_1D}, we see that in the one-dimensional fits to the clean observables there are several NP scenarios explaining the data with more than $4\sigma$ significance better than the SM~\cite{Hurth:2021nsi}.
\begin{table}[bh!]
\begin{center}
\setlength\extrarowheight{0pt}
\scalebox{0.75}{
\begin{tabular}{|l|r|r|c|}
\hline
\multicolumn{4}{|c|}{\footnotesize \qquad Only $R_{K^{(*)}}, B_{s,d} \to \mu^+ \mu^-$ \qquad ($\chi^2_{\rm SM}= 28.19 $)}\\ \hline
& b.f. value & $\chi^2_{\rm min}$ & ${\rm Pull}_{\rm SM}$ \\
\hline \hline
$\delta C_{9} $ & $ -1.00 \pm 6.00 $ & $ 28.1 $ & $ 0.2 \sigma $ \\[-4pt]
$\delta C_{9}^{e} $ & $ 0.80 \pm 0.21 $ & $ 11.2 $ & $ 4.1 \sigma $ \\[-2pt]
$\delta C_{9}^{\mu} $ & $ -0.77 \pm 0.21 $ & $ 11.9 $ & $ 4.0 \sigma $ \\
\hline
$\delta C_{10} $ & $ 0.43 \pm 0.24 $ & $ 24.6 $ & $ 1.9 \sigma $ \\[-4pt]
$\delta C_{10}^{e} $ & $ -0.78 \pm 0.20 $ & $ 9.5 $ & $ 4.3 \sigma $ \\[-2pt]
$\delta C_{10}^{\mu} $ & $ 0.64 \pm 0.15 $ & $ 7.3 $ & $ 4.6 \sigma $ \\
\hline
$\delta C_{\rm LL}^e$ & $ 0.41 \pm 0.11 $ & $ 10.3 $ & $ 4.2 \sigma $ \\[-4pt]
$\delta C_{\rm LL}^\mu$ & $ -0.38 \pm 0.09 $ & $ 7.1 $ & $ 4.6 \sigma $ \\
\hline
\end{tabular} \qquad
\begin{tabular}{|l|r|r|c|}
\hline
\multicolumn{4}{|c|}{\footnotesize All obs. except $R_{K^{(*)}}, B_{s,d}\to\mu^+ \mu^-$ \; ($\chi^2_{\rm SM}= 200.1 $)}\\ \hline
& b.f. value & $\chi^2_{\rm min}$ & ${\rm Pull}_{\rm SM}$ \\
\hline \hline
$\delta C_{9} $ & $ -1.01 \pm 0.13 $ & $ 158.2 $ & $ 6.5 \sigma $ \\[-4pt]
$\delta C_{9}^{e} $ & $ 0.70 \pm 0.60 $ & $ 198.8 $ & $ 1.1 \sigma $ \\[-2pt]
$\delta C_{9}^{\mu} $ & $ -1.03 \pm 0.13 $ & $ 156.0 $ & $ 6.6 \sigma $ \\
\hline
$\delta C_{10} $ & $ 0.34 \pm 0.23 $ & $ 197.7 $ & $ 1.5 \sigma $ \\[-4pt]
$\delta C_{10}^{e} $ & $ -0.50 \pm 0.50 $ & $ 199.0 $ & $ 1.0 \sigma $ \\[-2pt]
$\delta C_{10}^{\mu} $ & $ 0.41 \pm 0.23 $ & $ 196.5 $ & $ 1.9 \sigma $ \\
\hline
$\delta C_{\rm LL}^e$ & $ 0.33 \pm 0.29 $ & $ 198.9 $ & $ 1.1 \sigma $ \\[-4pt]
$\delta C_{\rm LL}^\mu$ & $ -0.75 \pm 0.13 $ & $ 167.9 $ & $ 5.7 \sigma $ \\
\hline
\end{tabular}
}
\caption{\small Comparison of one operator NP fits to clean observables on the left and to the rest of the $b\to s$ observables on the right (assuming 10\% error for the power corrections).
\label{tab:Clean_vs_rest_1D}}
\end{center}
\end{table}
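For the one-dimensional fits, the quoted Pull$_{\rm SM}$ values can be reproduced from the $\chi^2$ values alone: for a single free Wilson coefficient, the pull reduces to $\sqrt{\chi^2_{\rm SM}-\chi^2_{\rm min}}$. A minimal standard-library sketch, with input numbers taken from the left panel of Table~\ref{tab:Clean_vs_rest_1D}:

```python
from math import sqrt

def pull_sm_1dof(chi2_sm: float, chi2_min: float) -> float:
    """SM pull in Gaussian sigmas for a fit with one free Wilson coefficient.

    For one degree of freedom the p-value of Delta chi^2 maps onto
    sqrt(Delta chi^2) standard deviations.
    """
    return sqrt(chi2_sm - chi2_min)

# Clean observables, chi2_SM = 28.19 (left panel of the table):
print(round(pull_sm_1dof(28.19, 7.3), 1))   # delta C10^mu scenario -> 4.6
print(round(pull_sm_1dof(28.19, 11.2), 1))  # delta C9^e scenario   -> 4.1
```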
\vspace{-0.2cm}
For the one-dimensional fit to all $b \to s$ observables except the clean ones (right panel of Table~\ref{tab:Clean_vs_rest_1D}), the most favoured scenario is NP in $C_9^{(\mu)}$ with a significance of about $6.5\sigma$. However, this significance depends on the choice of form factors as well as on the guesstimated size of the non-factorisable power corrections (here assumed to be 10\% of the leading-order QCD factorisation contributions).
Comparing the fits to the two data sets, there are favoured scenarios, such as NP in $C_9^\mu$, with coherent best-fit values for both the clean observables and the rest of the $b\to s$ observables.
This is also the most favoured scenario in the global fit where the clean observables and the rest of the $b\to s$ observables are considered together~\cite{Hurth:2021nsi} (see Refs.~\cite{Geng:2021nhg,Altmannshofer:2021qrr,Alguero:2021anc} for other recent global fits).
\section{NP or hadronic contributions in $B\to K^* \mu \mu$ observables}
The impact of the guesstimated size of power corrections on the significance of NP in $C_9$ can be clearly seen by describing the $B \to K^* \mu^+\mu^-$ decay in terms of helicity amplitudes, with NP effects in $C_9$ (and $C_7$) and power corrections $h_\lambda$, both contributing to the vectorial helicity amplitude~\cite{Jager:2012uw}
\begin{align}
H_V(\lambda) &=-i\, N^\prime \Big\{ C_9^{\rm eff} \tilde{V}_{\lambda} - C_{9}' \tilde{V}_{-\lambda}
+ \frac{m_B^2}{q^2} \Big[\frac{2\,\hat m_b}{m_B} (C_{7}^{\rm eff} \tilde{T}_{\lambda} - C_{7}' \tilde{T}_{-\lambda})
- 16 \pi^2 \left(\text{LO QCDf}+h_\lambda\right) \Big] \Big\} \,.
\end{align}
Instead of making assumptions on the size of the power corrections, these contributions can be parameterised by a number of free parameters and fitted directly to the data.
A general description of the power corrections involves several free parameters~\cite{Ciuchini:2015qxb,Chobanova:2017ghn}, which, with the current experimental data, results in loosely constrained fitted parameters~\cite{Hurth:2020rzx}.
A minimalistic description of the hadronic effect is given by~\cite{Hurth:2020rzx,Neshatpour:2017qvi}
\begin{align}
h_\lambda (q^2)= -\frac{\tilde{V}_\lambda(q^2)}{16 \pi^2} \frac{q^2}{m_B^2} \Delta C_9^{\lambda,\rm{PC}}\,,
\end{align}
which involves only three real free parameters, one for each helicity $\lambda=0,\pm$ (six if assumed complex). This description with fewer degrees of freedom (dof) in principle has a better chance of giving a constrained fit and can be considered a null test for NP: if the three fitted hadronic parameters differ from each other, NP via $\delta C_9^{\rm NP}$ can be ruled out. Although it is possible for the fitted power corrections of the three helicities to be very similar and thus mimic NP in $\delta C_9^{\rm NP}$, this is highly improbable; furthermore, there are theoretical arguments that the positive-helicity amplitude should be suppressed compared to the other two helicities~\cite{Jager:2014rwa}.
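That this ansatz acts exactly like a helicity-dependent shift of $C_9$ can be checked numerically: the prefactors of the $-16\pi^2\,(m_B^2/q^2)\,h_\lambda$ term cancel against those in $h_\lambda$. The sketch below uses hypothetical numbers for $\tilde V_\lambda$, $q^2$ and $m_B^2$ (they drop out of the comparison), keeping only the $C_9$ and non-local pieces of the amplitude:

```python
from math import pi

def hv_c9_part(c9, v_lambda, h_lambda, q2, mB2):
    """C9 piece of the vectorial helicity amplitude plus the non-local term
    -16 pi^2 (mB^2/q^2) h_lambda (overall normalisation dropped)."""
    return c9 * v_lambda - 16 * pi**2 * (mB2 / q2) * h_lambda

def h_min(v_lambda, q2, mB2, dc9_pc):
    """Minimal ansatz h_lambda = -V_lambda q^2 DeltaC9^{PC} / (16 pi^2 mB^2)."""
    return -v_lambda / (16 * pi**2) * (q2 / mB2) * dc9_pc

# Hypothetical inputs: V_lambda = 0.35 at q2 = 4 GeV^2, mB^2 = 27.9 GeV^2.
v, q2, mB2, dc9 = 0.35, 4.0, 27.9, -1.0
with_pc = hv_c9_part(4.1, v, h_min(v, q2, mB2, dc9), q2, mB2)
shifted = hv_c9_part(4.1 + dc9, v, 0.0, q2, mB2)
assert abs(with_pc - shifted) < 1e-9  # identical amplitudes
```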
\begin{table}[b!]
\ra{1.}
\rb{1.3mm}
\begin{center}
\setlength\extrarowheight{2pt}
\scalebox{0.85}{
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{Best fit values of hadronic power corrections}\\ \hline \hline
$\Delta C_9^{+,{\rm PC}}$ & $\Delta C_9^{-,{\rm PC}}$ & $\Delta C_9^{0,{\rm PC}}$ \\
\hline
$ 5.43 \pm 6.22 $ & $ -1.06 \pm 0.21 $ & $ -0.73 \pm 0.52 $ \\
\hline
\end{tabular} \qquad
\begin{tabular}{|l|c|c|}
\hline
\multicolumn{3}{|c|}{Significance of NP and hadronic p.c. fits} \\
\hline \hline
nr. of dof & $\footnotesize 1\; (\delta C_9^{\rm NP})$ & $\footnotesize 3\; (\Delta C_9^{\lambda,{\rm PC}})$ \\ [1pt]
\hline
0 (plain SM) & $6.0\sigma$ & $5.4\sigma$ \\
1 {\small(Real $\delta C_9$)} & $\text{---}$ & $0.6\sigma$\\
\hline
\end{tabular}
}
\caption{\small
On the left, fit of the hadronic power corrections for the three helicities ($\lambda=0,\pm$) with real $\Delta C_9^{\lambda,{\rm PC}}$, using the data on $B\to K^* \mu^+\mu^-$ and $B\to K^*\gamma$ observables with $q^2$ bins $\leqslant 8\text{ GeV}^2$. On the right, the significance of the improved description of the hadronic fit as well as of the NP fit compared to the SM and to each other.
\label{tab:DeltaC9pc}
}
\end{center}
\end{table}
For the fit to the data, we consider only the experimental measurements of $B \to K^* \mu^+ \mu^-$ observables in the $q^2 \leq 8$ GeV$^2$ bins, since the power corrections in the low- and high-$q^2$ regions are not necessarily the same.
From the left panel of Table~\ref{tab:DeltaC9pc} it is clear that, although the central values of the best-fit points for the three helicities differ, the three free parameters cannot be strongly constrained and are compatible with each other within the 68\% confidence interval.
As given in the right panel of Table~\ref{tab:DeltaC9pc}, including either NP contributions ($\delta C_9^{\rm NP}$) or power corrections ($\Delta C_9^{\lambda,{\rm PC}}$) yields a better description of the data, with a significance of more than $5\sigma$ compared to the SM.
It should be noted that the NP scenario with $\delta C_9^{\rm NP}$ contributions is embedded in the hadronic fit, hence a statistical comparison between the two fits is possible. As given in the right panel of Table~\ref{tab:DeltaC9pc}, the improvement of the hadronic fit over the NP description is less than $1\sigma$, suggesting that there is no indication for introducing two more dof in the hadronic fit\footnote{Assigning the global $\delta C_9$ as a nuisance parameter to take into account unknown power corrections -- as done for example in Ref.~\cite{Isidori:2021vtc} -- is inappropriate, as there is no theory indication that the three helicities would be described by a common hadronic effect. Even considering the weak sensitivity to the positive helicity, at least two independent free parameters would be necessary to describe the power corrections.}.
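Since the NP hypothesis is nested in the hadronic one, Wilks' theorem maps the $\chi^2$ improvement for the two extra parameters onto a Gaussian significance. A minimal standard-library sketch of this conversion (for two extra dof, where the $\chi^2$ survival function is a simple exponential):

```python
from math import exp
from statistics import NormalDist

def nested_significance_2dof(delta_chi2: float) -> float:
    """Significance (in sigma) of a chi^2 improvement when the larger nested
    model has two extra parameters: Wilks' theorem gives the p-value
    p = exp(-Delta chi^2 / 2), converted here to two-sided Gaussian sigmas."""
    p = exp(-delta_chi2 / 2.0)
    return NormalDist().inv_cdf(1.0 - p / 2.0)

# e.g. a chi^2 improvement of 11.83 for two extra dof is about a 3 sigma effect
print(round(nested_significance_2dof(11.83), 2))
```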
\section{Future projections of clean observables}
We consider three benchmark points for the planned LHCb upgrades and make predictions for the clean observables. For the benchmarks, we consider the two LHCb upgrades with 50 and 300~fb$^{-1}$ integrated luminosity as well as an intermediate stage with 18 fb$^{-1}$ of data. Assuming that the current experimental central values remain unchanged in future measurements, the reduced experimental uncertainties (see~\cite{Hurth:2021nsi} for details) make it impossible to obtain acceptable fits.
\begin{table}[t!]
\begin{center}
\setlength\extrarowheight{3pt}
\scalebox{0.85}{
\begin{tabular}{|c||c|c|c|}
\hline
\multicolumn{4}{|c|}{Pull$_{\rm SM}$ with $R_{K^{(*)}}$ and ${\rm BR}(B_s\to \mu^+ \mu^-)$ prospects} \\ [-2pt]
\hline
LHCb lum. & 18 fb$^{-1}$ & 50 fb$^{-1}$ & 300 fb$^{-1}$ \\[-4pt]
\hline
$\delta C_{9}^{\mu}$ & $ 6.5\sigma $ & $ 14.7\sigma $ & $ 21.9\sigma $ \\[-6pt]
$\delta C_{10}^{\mu}$ & $ 7.1\sigma $ & $ 16.6\sigma $ & $ 25.1\sigma $ \\[-6pt]
$\delta C_{LL}^{\mu}$ & $ 7.5\sigma $ & $ 17.7\sigma $ & $ 26.6\sigma $ \\
\hline
\end{tabular}
}\vspace*{0.1cm}
\caption{\small
Predictions of Pull$_{\rm SM}$ for the LHCb upgrade scenarios with 18, 50 and 300 fb$^{-1}$ luminosity collected, for the fit to $\delta C_9^\mu$, $\delta C_{10}^\mu$ and $\delta C_{LL}^\mu$ (as given in the left panel of Table~\ref{tab:Clean_vs_rest_1D}).
\label{tab:LFUV_Bsmumu_projections}}
\end{center}
\end{table}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{C9mu30Plot.png}
\includegraphics[width=0.45\textwidth]{C10mu30Plot.png}
\caption{\small
Significance of Pull$_{\rm SM}$ for each of the projected LFUV observables, individually.
\label{fig:projections}}
\end{center}
\end{figure}
Instead, we make an equally strong assumption: we presume that the future data correspond to the observables projected with the current best-fit values of each of the three most favoured scenarios of the left panel of Table~\ref{tab:Clean_vs_rest_1D}. As given in Table~\ref{tab:LFUV_Bsmumu_projections}, already with 18 fb$^{-1}$ of data the NP significance will exceed $6\sigma$ in all three scenarios.
However, the significance is quite dependent on the presumed reduction of the statistical uncertainties, as can be seen in Fig.~\ref{fig:projections}, where Pull$_{\rm SM}$ is shown for each individual LFUV observable, assuming that the current central value of $C_9^\mu$ ($C_{10}^\mu$) from the clean observables remains unchanged.
The lower [upper] limit of each band assumes that the current systematic uncertainties do not improve [an ultimate systematic uncertainty of 1\% for the LFUV observables and 5\% for BR($B_s \to \mu^+ \mu^-$)].
For the $C_9^\mu$ ($C_{10}^\mu$) scenario, $R_K$ alone can reach $5\sigma$ significance with $\sim 15$ ($20$) fb$^{-1}$ integrated luminosity.
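The shape of these bands follows from the standard assumption that statistical uncertainties scale as $1/\sqrt{L}$ while systematic uncertainties set a floor. A hedged sketch of such a projection (not the actual extrapolation machinery of the analysis; `sys_fraction`, the systematic share of the current total uncertainty, is an illustrative parameter):

```python
from math import sqrt

def projected_pull(pull_now, lumi_now, lumi_future, sys_fraction=0.0):
    """Naive projection of a single-observable SM pull, assuming the central
    value stays fixed and the statistical error scales as 1/sqrt(L)."""
    stat = sqrt(1.0 - sys_fraction**2)           # relative statistical part
    scaled = sqrt(stat**2 * lumi_now / lumi_future + sys_fraction**2)
    return pull_now / scaled

# Purely statistical errors: the pull grows like sqrt(L)
assert abs(projected_pull(3.0, 9.0, 36.0) - 6.0) < 1e-9
# A systematic floor saturates the reachable significance
print(round(projected_pull(3.0, 9.0, 3600.0, sys_fraction=0.5), 2))
```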
\section{Multidimensional global fit and look-elsewhere effect}
NP does not necessarily present itself in only one or two operator structures, and in principle all the 20 relevant Wilson coefficients can receive NP contributions.
Furthermore, while a look-elsewhere effect (LEE) can be introduced by focusing on a subset of observables, it can also arise from an a posteriori choice of one or two operators.
However, when the fit includes all relevant observables and the maximum number
of Wilson coefficients is left free, the LEE is avoided, since no a posteriori
decisions are made and the p-values account for the number of degrees of
freedom; insensitive parameters and flat directions can then be eliminated on the basis of profile likelihoods and the correlations of the fit.
\begin{table}[h!]
\begin{center}
\setlength\extrarowheight{0pt}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{All observables with $\chi^2_{\rm SM}= 225.8 $} \\
\multicolumn{4}{|c|}{$\chi^2_{\rm min}= 151.6 ;\quad {\rm Pull}_{\rm SM}= 5.5 (5.6) \sigma$} \\
\hline \hline
\multicolumn{2}{|c|}{$\delta C_7$} & \multicolumn{2}{c|}{$\delta C_8$}\\
\multicolumn{2}{|c|}{$ 0.05 \pm 0.03 $} & \multicolumn{2}{c|}{$ -0.70 \pm 0.40 $}\\
\hline
\multicolumn{2}{|c|}{$\delta C_7^\prime$} & \multicolumn{2}{c|}{$\delta C_8^\prime$}\\
\multicolumn{2}{|c|}{$ -0.01 \pm 0.02 $} & \multicolumn{2}{c|}{$ 0.00 \pm 0.80 $}\\
\hline
$\delta C_{9}^{\mu}$ & $\delta C_{9}^{e}$ & $\delta C_{10}^{\mu}$ & $\delta C_{10}^{e}$ \\
$ -1.16 \pm 0.17 $ & $ -6.70 \pm 1.20 $ & $ 0.20 \pm 0.21 $ & degenerate w/ $C_{10}^{\prime e}$ \\
\hline\hline
$\delta C_{9}^{\prime \mu}$ & $\delta C_{9}^{\prime e}$ & $\delta C_{10}^{\prime \mu}$ & $\delta C_{10}^{\prime e}$ \\
$ 0.09 \pm 0.34 $ & $ 1.90 \pm 1.50 $ & $ -0.12 \pm 0.20 $ & degenerate w/ $C_{10}^{ e}$ \\
\hline\hline
$C_{Q_{1}}^{\mu}$ & $C_{Q_{1}}^{e}$ & $C_{Q_{2}}^{\mu}$ & $C_{Q_{2}}^{e}$ \\
$ 0.04 \pm 0.10 $ & $ -1.50 \pm 1.50 $ & $ -0.09 \pm 0.10 $ & $ -4.10 \pm 1.5 $ \\
\hline\hline
$C_{Q_{1}}^{\prime \mu}$ & $C_{Q_{1}}^{\prime e}$ & $C_{Q_{2}}^{\prime \mu}$ & $C_{Q_{2}}^{\prime e}$ \\
$ 0.15 \pm 0.10 $ & $ -1.70 \pm 1.20 $ & $ -0.14 \pm 0.11 $ & $ -4.20 \pm 1.2 $ \\
\hline
\end{tabular}
}
\caption{\small 20-dimensional global fit to the $b \to s$ data, assuming 10\% error for the power corrections.
\label{tab:ALL_20D_C78910C12primes}}
\end{center}
\end{table}
\vspace{-0.5cm}
In Table~\ref{tab:ALL_20D_C78910C12primes} we present the 20-dimensional global fit, where we obtain Pull$_{\rm SM}=5.5\sigma$. However, considering that two of the Wilson coefficients are degenerate and taking into account the criterion presented in Refs.~\cite{Arbey:2018ics,Hurth:2018kcq}, the effective number of degrees of freedom is 19, resulting in Pull$_{\rm SM}=5.6\sigma$.
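The quoted pull corresponds to converting $\Delta\chi^2 = 225.8 - 151.6$ with 20 dof into two-sided Gaussian sigmas. A standard-library check (the closed-form $\chi^2$ survival function used below holds for an even number of dof):

```python
from math import exp
from statistics import NormalDist

def chi2_sf_even(x: float, dof: int) -> float:
    """Survival function of the chi^2 distribution for an even number of dof:
    sf(x) = exp(-x/2) * sum_{i<dof/2} (x/2)^i / i!"""
    assert dof % 2 == 0 and dof > 0
    k, y = dof // 2, x / 2.0
    term, s = 1.0, 1.0
    for i in range(1, k):
        term *= y / i
        s += term
    return exp(-y) * s

def pull_sm(chi2_sm, chi2_min, dof):
    """Convert a chi^2 difference into two-sided Gaussian sigmas."""
    p = chi2_sf_even(chi2_sm - chi2_min, dof)
    return NormalDist().inv_cdf(1 - p / 2)

# 20-dimensional fit: chi2_SM = 225.8, chi2_min = 151.6
print(round(pull_sm(225.8, 151.6, 20), 1))  # ~5.5 sigma
```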
\section{Conclusions}
The $R_K$ and $R_{K^{*}}$ ratios measured by the LHCb collaboration suggest lepton flavour universality violating new physics.
This implication is reinforced by considering the rest of the $b\to s$ observables. However, some of the latter observables might suffer from underestimated non-local hadronic uncertainties.
We suggested a minimal description of these contributions which can work as a null test for new physics. Nonetheless, with the current data no conclusive judgment is possible.
Moreover, we showed that, assuming any of the favoured new physics scenarios persists, future LHCb measurements of lepton flavour universality violating observables can establish beyond the Standard Model physics with more than 5$\sigma$ significance already with 18~fb$^{-1}$ of data.
Furthermore, for an unbiased determination of the new physics structure, we also considered a 20-dimensional fit, still finding a large significance for the new physics description of the $b\to s$ data.
\vspace{-0.1cm}
\section{Introduction}
The magnetocaloric effect (MCE) is among the most intensely investigated subjects in the physics of magnetic systems\cite{mcalbook,Tishin2014,Gschneidner2005,Szymczak}, mainly owing to its high potential for applications in modern, much-anticipated solid-state refrigeration and for the construction of energy-saving devices. The works focus on an extensive search for novel materials exhibiting better performance \cite{Sandeman2012,Manosa2013}. In parallel, there is a constant need for theoretical models giving physical insight into the phenomenon and allowing one to describe the experimental results. The recently used analytical approaches to the theory of MCE usually exploit scaling relations and employ scaling-based equations of state, making use of the mean field approximation for constructing the thermodynamics of magnetic systems \cite{Pelka2013,Franco2008,Franco2010,Franco2012,AmaralInTech,Amaral2007,deOliveira2010,deOliveira2014,Dong2008,Basso2014}. Such approaches are very successful from the practical point of view, enabling for example extrapolation of the experimental results and allowing for the construction of universal dimensionless functions \cite{Amaral2007,Franco2008,Franco2012,AmaralInTech,Pelka2013}. However, they are not free from empirical parameters. On the other hand, Monte Carlo simulations are also used to predict the magnetocaloric properties of materials theoretically \cite{Nobrega2005,Nobrega2006,Nobrega2007,Nobrega2011,Buchelnikov2011,Singh2013}.
Let us also mention the existence of a number of exactly solved spin models for which magnetocaloric quantities have been discussed within various approaches, such as Jordan-Wigner transformation \cite{Zhitomirsky2004,Topilko2012}, Bethe ansatz-based quantum transfer matrix and non-linear integral equations method \cite{Trippe2010,Ribeiro2010}, or generalized classical transfer-matrix method and decoration-iteration mapping \cite{Canova2006,Pereira2009,Strecka2014,Ohanyan2012,Canova2014}.
Among the various systems exhibiting MCE, ferrimagnets draw considerable attention. In such magnets an inverse MCE can occur, consisting in a decrease of the entropy when the external magnetic field changes isothermally from a non-zero to zero value, whereas for the normal MCE the entropy rises under such conditions. This phenomenon is observed experimentally (e.g. \cite{Burzo2010,Zhang2010,Reis2010}), for instance in the following materials: $\rm MnBr_2\cdot 4H_2O$, $\rm Yb_3Fe_5O_{12}$, $\rm Ni_{50}Mn_{34}In_{16}$, $\rm CoMnSi$, $\rm Mn_{1.82}V_{0.18}Sb$, and its existence has been qualitatively explained \cite{vonRanke2009a,vonRanke2009b,vonRanke2010,Biswas2013a,Biswas2013b,Alho2010} and predicted for some classes of non-trivial magnets (like low-dimensional \cite{Qi2012} and frustrated systems \cite{Zukovic2013}). On the other hand, one of the directions in studies of MCE is turning attention to magnetic multilayers \cite{Caballero2012,Florez2013}. In particular, layered magnets with antiferromagnetic coupling between the layers can exhibit a ferrimagnetic ground state. The temperature dependence of the total magnetization of such systems may show the presence of
magnetic compensation, which has been extensively studied in various classes of layered and similar magnets \cite{Kaneyoshi1993,Kaneyoshi1995,Kaneyoshi1996,Kaneyoshi2011,Kaneyoshi2012a,Kaneyoshi2012b,Kaneyoshi2013a,
Kaneyoshi2013b,Kaneyoshi2013c,Kantar2014,Jascur1997a,Jascur1997b,Yuksel2013,Bobak2011,Oitmaa2003,Oitmaa2005,Veiller2000,Balcerzak2014}. Also, MCE in the vicinity of the compensation temperature has been a subject of studies \cite{Ma2013}. We mention that the coexistence of normal and inverse MCE is in general possible in a class of magnets exhibiting a rich structure of the ground-state phase diagram, in particular in Ising-Heisenberg chains \cite{Zhitomirsky2004,Topilko2012,Trippe2010,Ribeiro2010,Canova2006,Pereira2009,Strecka2014,Ohanyan2012,Canova2014} or even some zero-dimensional structures \cite{Cisarova2014}. However, we consider the multilayer geometry to be of particular interest.
Motivated by the findings connected with normal and inverse MCE in layered magnets, we present a theoretical study of a magnetic spin-1/2 multilayer with antiferromagnetic interlayer couplings.
We assume that the planes A and B forming the multilayer are magnetically non-equivalent, having different exchange parameters and different anisotropy. Moreover, one of the planes (the magnetically stronger one) is randomly diluted. In such a system the compensation phenomenon can occur, whose characteristic compensation temperature can be much lower than the N\'eel temperature and can be modified by the degree of dilution \cite{Balcerzak2014}. Since in the vicinity of the compensation temperature the inverse MCE can be expected, the temperature range of occurrence of this effect (and its strength) could be, to some extent, controlled by the dilution parameter, even though the other material parameters (exchange integrals) remain constant. Apart from purely theoretical interest in the model, we think that such a possibility may also inspire its experimental realization.\\
The paper develops a theoretical model and its thermodynamic description. Then the conditions for presence of magnetic compensation phenomenon are discussed and the magnetocaloric properties (the isothermal entropy change and the adiabatic cooling rate) are analysed. The numerical results are extensively illustrated in plots.
\section{Theoretical model}
\begin{figure}
\includegraphics[scale=0.25]{fig1}
\caption{\label{fig:fig1}A schematic view of the multilayer consisting of two inequivalent kinds of planes, $A$ and $B$. The intraplanar couplings are $J^{AA}_{x}=J^{AA}_{y}=J^{AA}_{\perp}$, $J^{AA}_{z}$ and $J^{BB}_{x}=J^{BB}_{y}=J^{BB}_{\perp}$, $J^{BB}_{z}$, respectively. The interplanar coupling is $J^{AB}_{z}$. The plane $B$ is randomly diluted. }
\end{figure}
The subject of our interest is a magnetic multilayer in which the spins are located at the sites of a simple cubic (sc) crystalline lattice. The system in question is composed of inequivalent parallel monolayers which are stacked alternately and are further called A and B planes. Each single plane is therefore a simple quadratic lattice. All sites of plane A are populated by magnetic atoms, while plane B is site-diluted, so that the concentration of magnetic atoms there equals $p$. All magnetic atoms are assumed to have spin 1/2. A schematic view of the multilayer is presented in Fig.~\ref{fig:fig1}.
The Hamiltonian of the system takes the following form:
\begin{eqnarray}
\mathcal{H}&=&-\sum_{\left\langle i\in A,j\in A \right\rangle}^{}{\left[J_{\perp}^{AA}\left(S_{x}^{i}S_{x}^{j}+S_{y}^{i}S_{y}^{j}\right)+J_{z}^{AA}S_{z}^{i}S_{z}^{j}\right]}-\sum_{\left\langle i\in A,j\in B \right\rangle}^{}{J_{z}^{AB}S_{z}^{i}S_{z}^{j}\xi_j}\nonumber\\
&&-\sum_{\left\langle i\in B,j\in B \right\rangle}^{}{\left[J_{\perp}^{BB}\left(S_{x}^{i}S_{x}^{j}+S_{y}^{i}S_{y}^{j}\right)+J_{z}^{BB}S_{z}^{i}S_{z}^{j}\right]\xi_i\xi_j}-h\sum_{i\in A}^{}{S^{i}_{z}}-h\sum_{j\in B}^{}{S^{j}_{z}\xi_j}.
\label{eq1}
\end{eqnarray}
The exchange integrals characterizing the interactions between various nearest-neighbour (NN) spins are indicated schematically in Fig.~\ref{fig:fig1}. In particular, $J^{AA}_{x}=J^{AA}_{y}=J^{AA}_{\perp}$ and $J^{AA}_{z}$ are intraplanar couplings in plane A, while $J^{BB}_{x}=J^{BB}_{y}=J^{BB}_{\perp}$ and $J^{BB}_{z}$ are intraplanar couplings in plane B. All the intraplanar interactions are ferromagnetic. Spins in different neighbouring planes interact antiferromagnetically, and the corresponding exchange integral is $J^{AB}_{z}<0$. External magnetic field is denoted by $h$.
In our study we consider the intraplanar couplings in planes A and B to exhibit easy-axis anisotropy, i.e. $0\leq J^{\gamma \gamma}_{\perp} \le J^{\gamma \gamma}_{z}$ for $\gamma=A,B$. This range interpolates between the limiting cases of purely Ising interactions (when $J^{\gamma \gamma}_{\perp}=0$) and isotropic Heisenberg couplings (with $J^{\gamma \gamma}_{\perp}=J^{\gamma \gamma}_{z}$). The interaction between the planes is taken in Ising form ($J^{AB}_{\perp}=0$), since a non-zero $J^{AB}_{\perp}$ might lead to some non-classical magnetic ground state of the system. The site occupation operators $\xi_{j}$ take the values $\xi_j=0,1$ and are introduced to describe random site dilution in plane $B$, whereas the concentration of magnetic atoms in plane B equals their configurational average, $\left< \xi_j \right> =p$.
In order to describe the thermodynamic properties of the model in question, we use the Pair Approximation (PA), a method we have employed for the characterization of various spin-1/2 magnetic systems \cite{Balcerzak2009a,Balcerzak2009b,Szalowski2011,Szalowski2012,Szalowski2013,Balcerzak2014}, including magnetocaloric studies \cite{Szalowski2011,Szalowski2013} and studies of layered magnets \cite{Szalowski2012,Szalowski2013,Balcerzak2014}. The method yields the Gibbs free energy within a fully self-consistent approach, which allows for further determination of all the magnetic properties of interest by employing thermodynamic identities.
In order to obtain the Gibbs energy we employ the general relationship:
\begin{equation}
G=\left<\mathcal{H}\right> - ST
\label{neweq1}
\end{equation}
where $\left<\mathcal{H}\right>$ is the average value of Hamiltonian (\ref{eq1}) containing external field (i.e., enthalpy of the system), and $S$ is the total entropy. The entropy is obtained by the cumulant technique where only single-site and pair entropy cumulants are taken into account:
\begin{equation}
S=\frac{N}{2}\left(\sigma^A+p\sigma^B\right)+N\left[\left(\sigma^{AA}-2\sigma^A\right)+p^2\left(\sigma^{BB}-2\sigma^B\right)+p\left(\sigma^{AB}-\sigma^A-\sigma^B\right)\right].
\label{neweq2}
\end{equation}
In Eq.~(\ref{neweq2}), $N$ is the total number of lattice sites, whereas $\sigma^\gamma$ and $\sigma^{\gamma \delta}$ ($\gamma=A,B$; $\delta=A,B$) are single-site and pair entropies, respectively. Eq.~(\ref{neweq2}) can be re-written in a more convenient form presenting the entropy per lattice site:
\begin{equation}
\frac{S}{N}=
\sigma^{AA}+p\sigma^{AB}+p^2\sigma^{BB}-\left(\frac{3}{2}+p\right)\sigma^A-p\left(\frac{1}{2}+2p\right)\sigma^B.
\label{neweq3}
\end{equation}
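The coefficients in Eq.~(\ref{neweq3}) follow from collecting the $\sigma^A$ and $\sigma^B$ terms of Eq.~(\ref{neweq2}); a quick numerical consistency check with random entropy values (standard-library Python only):

```python
import random

random.seed(1)
for _ in range(100):
    p = random.random()
    sA, sB, sAA, sBB, sAB = (random.uniform(-1, 1) for _ in range(5))
    # Eq. (neweq2), divided by N:
    lhs = 0.5 * (sA + p * sB) + (sAA - 2 * sA) + p**2 * (sBB - 2 * sB) \
          + p * (sAB - sA - sB)
    # Eq. (neweq3):
    rhs = sAA + p * sAB + p**2 * sBB - (1.5 + p) * sA - p * (0.5 + 2 * p) * sB
    assert abs(lhs - rhs) < 1e-12
```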
The single-site and pair entropies are defined by the expressions:
\begin{equation}
\sigma^\gamma=-k_{\rm B}\left<\ln \rho^{\gamma}_{i}\right>
\label{neweq4}
\end{equation}
and
\begin{equation}
\sigma^{\gamma \delta}=-k_{\rm B}\left<\ln \rho^{\gamma \delta}_{i j}\right> ,
\label{neweq5}
\end{equation}
where $\rho^{\gamma}_{i}$ and $\rho^{\gamma \delta}_{i j}$ are single-site and pair density matrices, respectively. These matrices are of the form:
\begin{equation}
\rho^{\gamma}_{i}=e^{\beta\left(G^{\gamma} -\mathcal{H}^{\gamma}_{i}\right)}
\label{neweq6}
\end{equation}
and
\begin{equation}
\rho^{\gamma \delta}_{i j}=e^{\beta\left(G^{\gamma \delta} -\mathcal{H}^{\gamma \delta}_{i j}\right)}
\label{neweq7}
\end{equation}
($\beta=1/k_{\rm B}T$). \\
In Eqs.~(\ref{neweq6}) and (\ref{neweq7}), $\mathcal{H}^{\gamma}_{i}$ and $\mathcal{H}^{\gamma \delta}_{i j}$ denote single-site ($i \in \gamma$) and NN-pair ($i\in \gamma$, $j\in \delta$) cluster Hamiltonians \cite{Szalowski2012}, whereas $G^{\gamma}$ and $G^{\gamma \delta}$ are the corresponding cluster Gibbs energies. With the help of expressions (\ref{neweq4}) and (\ref{neweq5}) the cluster entropies can also be expressed as:
\begin{equation}
-\sigma^{\gamma}T=G^{\gamma} -\left<\mathcal{H}^{\gamma}_{i}\right>
\label{neweq8}
\end{equation}
and
\begin{equation}
-\sigma^{\gamma \delta}T=G^{\gamma \delta} -\left<\mathcal{H}^{\gamma \delta}_{i j}\right>.
\label{neweq9}
\end{equation}
Substituting Eqs.~(\ref{neweq8}) and (\ref{neweq9}) into the expression for the total entropy (\ref{neweq3}), the total Gibbs energy per lattice site is finally expressed from Eq.~(\ref{neweq1}) in terms of single-site and NN-pair Gibbs energies as:
\begin{equation}
\frac{G}{N}=G^{AA}+pG^{AB}+p^{2}G^{BB}-\left(\frac{3}{2}+p\right)G^{A}-p\left(\frac{1}{2}+2p\right)G^{B},
\label{eq2}
\end{equation}
where the terms representing cluster enthalpies $\left< \mathcal{H}^{\gamma}_{i}\right>$ and $\left<\mathcal{H}^{\gamma \delta}_{i j}\right>$ have been cancelled in Eq.~(\ref{neweq1}) with the total enthalpy term $\left<\mathcal{H}\right>$.
The Gibbs energies are then obtained from normalization condition for density matrices (\ref{neweq6}) and (\ref{neweq7}) and are given by \cite{Szalowski2012}:
\begin{equation}
G^{\gamma}=-k_{\rm B}T \,\ln\left\{2\cosh\left[\frac{\beta}{2}\left(\Lambda^{\gamma}+h\right)\right]\right\}
\label{eq3}
\end{equation}
for single-site clusters ($\gamma =A,B$), while for NN-pairs the Gibbs energies are of the form:
\begin{equation}
G^{\gamma \delta}=-k_{\rm B}T \,\ln\left\{2\exp\left(\frac{\beta J^{\gamma \delta}_{z}}{4}\right)\cosh\left[\beta\left(\Lambda^{\gamma \delta}+h\right)\right]+2\exp\left(-\frac{\beta J^{\gamma \delta}_{z}}{4}\right)\cosh\left[\frac{\beta}{2}\sqrt{\left(\Delta^{\gamma \delta}\right)^2+\left(J_{\perp}^{\gamma \delta}\right)^2}\,\right]\right\}
\label{eq4},
\end{equation}
where $\gamma=A,B$ and $\delta=A,B$.\\
The above equations contain the parameters $\Lambda^{\gamma}$, $\Lambda^{\gamma \delta}$ and $\Delta^{\gamma \delta}$, which can be expressed by four independent variational parameters $\lambda^{AA}$, $\lambda^{BB}$, $\lambda^{AB}$ and $\lambda^{BA}$ in the following way:
\begin{eqnarray}
\Lambda^{A}&=&4\lambda^{AA}+2p\lambda^{AB}\nonumber\\
\Lambda^{B}&=&4p\lambda^{BB}+2\lambda^{BA}\nonumber\\
\Lambda^{AA}&=&3\lambda^{AA}+2p\lambda^{AB}\nonumber\\
\Lambda^{BB}&=&\left(4p-1\right)\lambda^{BB}+2\lambda^{BA}\nonumber\\
\Lambda^{AB}&=&2\left(\lambda^{AA}+p\lambda^{BB}\right)+\frac{1}{2}\left(2p-1\right)\lambda^{AB}+\frac{1}{2}\lambda^{BA}\nonumber\\
\Delta^{AA}&=&0 \nonumber\\
\Delta^{BB}&=&0 \nonumber\\
\Delta^{AB}&=&4\left(\lambda^{AA}-p\lambda^{BB}\right)+\left(2p-1\right)\lambda^{AB}-\lambda^{BA}
\label{eq5}.
\end{eqnarray}
The parameters are found from the variational principle for the Gibbs energy minimization, $\partial G/\partial \lambda^{\gamma \delta}=0$. As a result, the following set of four self-consistent equations for parameters $\lambda^{\gamma \delta}$ is obtained:
\begin{eqnarray}
\tanh\left[\frac{\beta}{2}\left(\Lambda^{A}+h\right)\right]&=&\frac{\exp\left(\frac{\beta J^{AA}_{z}}{4}\right)\sinh\left[\beta\left(\Lambda^{AA}+h\right)\right]}{\exp\left(\frac{\beta J^{AA}_{z}}{4}\right)\cosh\left[\beta\left(\Lambda^{AA}+h\right)\right]+\exp\left(-\frac{\beta J^{AA}_{z}}{4}\right)\cosh\left(\frac{\beta J^{AA}_{\perp}}{2}\right)}
\label{eq7}
\\
\tanh\left[\frac{\beta}{2}\left(\Lambda^{A}+h\right)\right]&=&\frac{\exp\left(\frac{\beta J^{AB}_{z}}{4}\right)\sinh\left[\beta\left(\Lambda^{AB}+h\right)\right]+ \exp\left(-\frac{\beta J^{AB}_{z}}{4}\right)\sinh\left(\frac{\beta \Delta^{AB}}{2}\right)}{\exp\left(\frac{\beta J^{AB}_{z}}{4}\right)\cosh\left[\beta\left(\Lambda^{AB}+h\right)\right]+\exp\left(-\frac{\beta J^{AB}_{z}}{4}\right)\cosh\left(\frac{\beta \Delta^{AB}}{2}\right)}
\label{eq8}
\\
\tanh\left[\frac{\beta}{2}\left(\Lambda^{B}+h\right)\right]&=&\frac{\exp\left(\frac{\beta J^{BB}_{z}}{4}\right)\sinh\left[\beta\left(\Lambda^{BB}+h\right)\right]}{\exp\left(\frac{\beta J^{BB}_{z}}{4}\right)\cosh\left[\beta\left(\Lambda^{BB}+h\right)\right]+\exp\left(-\frac{\beta J^{BB}_{z}}{4}\right)\cosh\left(\frac{\beta J^{BB}_{\perp}}{2}\right)}
\label{eq9}
\\
\tanh\left[\frac{\beta}{2}\left(\Lambda^{B}+h\right)\right]&=&\frac{\exp\left(\frac{\beta J^{AB}_{z}}{4}\right)\sinh\left[\beta\left(\Lambda^{AB}+h\right)\right]- \exp\left(-\frac{\beta J^{AB}_{z}}{4}\right)\sinh\left(\frac{\beta \Delta^{AB}}{2}\right)}{\exp\left(\frac{\beta J^{AB}_{z}}{4}\right)\cosh\left[\beta\left(\Lambda^{AB}+h\right)\right]+\exp\left(-\frac{\beta J^{AB}_{z}}{4}\right)\cosh\left(\frac{\beta \Delta^{AB}}{2}\right)}
\label{eq10}.
\end{eqnarray}
The above set of non-linear, self-consistent equations can be solved only numerically; for this purpose we used the Mathematica software package \cite{Mathematica}.
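As an illustration of how such a self-consistency condition can be iterated, the sketch below solves Eq.~(\ref{eq7}) by fixed-point iteration in the simplified undiluted Ising limit ($p=0$, $J^{AA}_{\perp}=0$, $h=0$), where Eqs.~(\ref{eq5}) give $\Lambda^{A}=4\lambda^{AA}$ and $\Lambda^{AA}=3\lambda^{AA}$; units $J^{AA}_{z}=k_{\rm B}=1$. This is a toy reduction, not the full four-parameter solver used in the paper:

```python
from math import atanh, cosh, exp, sinh, tanh

def solve_lambda(T, Jz=1.0, seed=0.1, n_iter=400):
    """Fixed-point iteration for the variational parameter lambda^{AA}
    in the p = 0, J_perp = 0, h = 0 limit of Eq. (eq7)."""
    beta = 1.0 / T
    lam = seed
    for _ in range(n_iter):
        num = exp(beta * Jz / 4) * sinh(beta * 3 * lam)
        den = exp(beta * Jz / 4) * cosh(beta * 3 * lam) + exp(-beta * Jz / 4)
        rhs = num / den                # right-hand side of Eq. (eq7)
        lam = atanh(rhs) / (2 * beta)  # invert tanh[(beta/2) * 4 * lam]
    return lam

def magnetization(T):
    """m_A = (1/2) tanh[(beta/2) Lambda^A] with Lambda^A = 4 lambda^{AA}."""
    lam = solve_lambda(T)
    return 0.5 * tanh(2.0 / T * lam)

print(magnetization(0.2))   # deep in the ordered phase: close to 1/2
print(magnetization(2.0))   # above the critical temperature: ~0
```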
The Gibbs free energy (Eq.~\ref{eq2}) is in principle a function of the external field $h$ and the temperature $T$. The self-consistency of the thermodynamic description makes it possible to determine all the other thermodynamic quantities of interest from the appropriate identities. For instance, the total magnetization per site can be found from:
\begin{equation}
m_{tot}=-\left(\frac{\partial G}{\partial h}\right)_{T},
\label{eq11}
\end{equation}
leading finally to:
\begin{equation}
m_{tot}=\frac{1}{2}\left(m_A+pm_B\right),
\label{eq12}
\end{equation}
where
\begin{equation}
m_{\gamma}=\frac{1}{2}\tanh\left[\frac{\beta}{2}\left(\Lambda^{\gamma}+h\right)\right]
\label{eq13}
\end{equation}
for $\gamma = A,B$, where $m_A$ and $m_B$ are the magnetizations per occupied lattice site in planes A and B, respectively.
For the antiferromagnetic interaction $J^{AB}_{z}$ between the planes $A$ and $B$, the magnetizations $m_A$ and $m_B$ have opposite signs. As the magnetizations vary with temperature, a compensation phenomenon can take place, which means that $m_{tot}=\frac{1}{2}\left(m_A+pm_B\right)=0$ for $m_{A},m_{B}\neq 0$ and $T<T_{c}$, where $T_{c}$ is the phase transition temperature. Let us mention that the occurrence of such a phenomenon in a magnetic bilayer has been a subject of our extensive study in Ref.~\cite{Balcerzak2014}. The conditions for its existence in the present system, i.e. the magnetic multilayer, will be discussed later in the paper. We also mention that the critical temperature of layered ferro- and ferrimagnets with anisotropic interactions has been a subject of numerous recent studies (e.g. \cite{Szalowski2012,Balcerzak2014,Akinci}).
The magnetocaloric effect can be characterized mostly by the entropy change $\Delta S_{T}$ in the process of isothermal demagnetization between the external field $h>0$ and $h=0$ \cite{mcalbook,Szalowski2011}. It can be expressed as $\Delta S_{T}=S\left(T,h\right)-S\left(T,h=0\right)$. The magnetic entropy of the system is found from the identity $S=-\left(\partial G/\partial T\right)_{h}$. In the adopted convention, a positive value of $\Delta S_{T}$ corresponds to normal MCE, while a negative value denotes inverse MCE. Another magnetocaloric quantity of interest is the temperature change vs. magnetic field in an adiabatic (isentropic) process, usually characterized by the cooling rate $\Gamma_{S}=\left(\partial T/\partial h\right)_{S}=-\left(\partial m/\partial T\right)_{h}/\left(\partial S/\partial T\right)_{h}$ \cite{mcalbook,Szalowski2011}.
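These identities can be made concrete on a closed-form toy model. The sketch below uses a hypothetical spin-1/2 paramagnet Gibbs energy (not the multilayer $G$ of Eq.~\ref{eq2}, with $k_B=1$) and evaluates $S$, $\Delta S_{T}$ and $\Gamma_{S}$ by central-difference differentiation.

```python
import math

def G(T, h):
    """Toy Gibbs energy per site of a spin-1/2 paramagnet (k_B = 1);
    a hypothetical stand-in used only to illustrate the identities."""
    return -T * math.log(2.0 * math.cosh(h / (2.0 * T)))

def S(T, h, dT=1e-5):
    # S = -(dG/dT)_h by central difference
    return -(G(T + dT, h) - G(T - dT, h)) / (2.0 * dT)

def m(T, h, dh=1e-5):
    # m = -(dG/dh)_T by central difference
    return -(G(T, h + dh) - G(T, h - dh)) / (2.0 * dh)

def delta_S_T(T, h):
    # isothermal entropy change between field h and h = 0
    return S(T, h) - S(T, 0.0)

def Gamma_S(T, h, dT=1e-4):
    # cooling rate Gamma_S = -(dm/dT)_h / (dS/dT)_h
    dm_dT = (m(T + dT, h) - m(T - dT, h)) / (2.0 * dT)
    dS_dT = (S(T + dT, h) - S(T - dT, h)) / (2.0 * dT)
    return -dm_dT / dS_dT
```

For this toy model the entropy depends on $h/T$ only, so the adiabats are straight lines and $\Gamma_{S}=T/h$ exactly, which makes the numerical scheme easy to validate before applying it to a model without closed-form entropy.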
\section{Numerical results and discussion}
\begin{figure}
\includegraphics[scale=0.25]{fig2}
\caption{\label{fig:fig2} Phase diagram for the multilayer ferrimagnetic system showing the phase boundaries between the ranges of parameters in which the magnetic compensation phenomenon is present and absent. The ferrimagnetic phase without compensation is the phase to the right of the boundary line. Various concentrations of the magnetic component in plane B are considered. The intraplanar couplings are of Ising type (a) and isotropic Heisenberg type (b).}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig3}
\caption{\label{fig:fig3} The temperature dependence of total magnetization for the multilayer system in various external magnetic fields. The interaction parameters situate the system (for zero external field) in the ferrimagnetic phase with compensation (a) and without compensation (b). The inset in (a) shows the dependence of the compensation temperature on the normalized external field.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig4}
\caption{\label{fig:fig4} The temperature dependence of the magnetization of planes A and B as well as the total magnetization for the multilayer system. The interaction parameters indicated in the plots locate the system (for zero external field) in the ferrimagnetic phase with compensation (a,b) and without compensation (c,d). The external field in the cases (a,c) is slightly below the critical field for the metamagnetic transition, while for (b,d) it is slightly above this critical field.}
\end{figure}
At the beginning, it is instructive to determine the range of the model parameters (exchange integrals and concentrations of magnetic atoms) for which a compensation phenomenon is present and is manifested in the temperature dependences of the total system magnetization, at external magnetic field $h=0$. For all couplings of Ising type, this phase diagram is presented in Fig.~\ref{fig:fig2}(a). Each solid line corresponds
to some fixed concentration $p$ of the magnetic component in plane B. The area of the $J^{AB} - J^{AA}$ plane to the left of each line contains the phase diagram points for which a compensation phenomenon takes place at some temperature. On the contrary, this phenomenon is absent for the phase diagram points to the right of each solid line. It is visible that for large concentrations of the magnetic component in plane B, e.g. $p=0.9$, the line separating the two phases is strongly inclined and the critical value of $J^{AB}$ is very sensitive to the intralayer coupling $J^{AA}$. When the concentration $p$ drops, the inclination of the phase boundary decreases and the phase with compensation tends to occupy only the upper left corner of the diagram (i.e. the area with weak interlayer couplings $J^{AB}$ and weak intralayer interactions $J^{AA}$). For concentrations $p \lesssim 0.5$, no compensation takes place anywhere in the diagram.
The situation is somewhat different if the intraplanar couplings $J^{AA}$ and $J^{BB}$ are both of isotropic Heisenberg type, which is the case illustrated in Fig.~\ref{fig:fig2}(b). There, the boundary separating the phases with and without compensation is almost vertical for high $p$, unless the interplanar coupling is very weak (in which case the boundary tends to be almost horizontal, with a small slope). All the phase boundaries meet at the point at which both $J^{AA}$ and $J^{AB}$ vanish. When $p$ decreases, the phase boundary shifts to lower values of $J^{AA}$. Unlike in the case of all-Ising couplings, the phase with compensation survives in the diagram range with weak interactions $J^{AA}$ in principle for all considered values of $J^{AB}$, provided that $p\gtrsim 0.70$. If $p\lesssim 0.70$, only the phase without compensation remains.
In order to describe the behaviour of the system in question in an external magnetic field $h>0$, let us first analyse the dependence of the total magnetization on temperature for two distinct points of the phase diagram, corresponding either to the case with compensation or to the case without it. For illustration, the system with all-Ising couplings was selected. Fig.~\ref{fig:fig3}(a) presents the influence of the external magnetic field, which in the ground state is chosen as opposite to the total magnetization, on the temperature dependence of this magnetization. The system parameters correspond here to the occurrence of the compensation phenomenon (weak interlayer coupling $J^{AB}$ and weak intralayer coupling $J^{AA}$) at a rather low concentration $p$.
It should be explained here that in zero external magnetic field ($h=0$) the system possesses up-down symmetry. This means that there exist two solutions for the total magnetization, which are of the same magnitude but have opposite signs. According to the Landau theory of phase transitions, these symmetrical solutions correspond to the same energy, and in the ground state they are separated by an energy maximum. The presence of this separating maximum opens the possibility of applying a small external magnetic field, oriented parallel or antiparallel to the total magnetization, without changing the magnetization direction in the ground state. However, when the field is applied, the energy becomes asymmetric with respect to the total magnetization and, for instance, the solution with $m_{tot}<0$, which is opposite to the field $h>0$, corresponds to a metastable state. In this case, when the field increases and reaches its critical value, a spin-flip transition takes place and the total magnetization becomes re-oriented parallel to the field. After this transition, which is of first order, the signs of $m_{tot}$ and $h$ are the same, and the system falls into the stable state, where the Gibbs energy attains its absolute minimum. Taking this into account, we think that such a transition from metastable to stable states is interesting to study in the context of MCE.
To begin with, the temperature behaviour of the total magnetization should be calculated. It is known that temperature itself does not change the symmetry of the Gibbs potential; however, it diminishes the energy barrier between metastable and stable states. Thus, increasing temperature enables the spin-flip transition. In Fig.~\ref{fig:fig3}(a) the dependences $m_{tot}(T)$ for the fields $h/J^{BB}_{z}<0.06$ indicate compensation at the temperature $T_{comp}$, which shifts to lower values when $h$ increases. The inset in Fig.~\ref{fig:fig3}(a) shows the dependence of the compensation temperature on the external magnetic field and demonstrates the linear character of the dependence $T_{comp}(h)$ up to some critical field $h/J^{BB}_{z}\simeq 0.06$. When $h$ approaches this critical value, the reorientation of the magnetization direction becomes increasingly abrupt. Above the critical magnetic field, the compensation vanishes and the magnetization becomes a monotonic, decreasing function of temperature, with a kink in the vicinity of the former compensation temperature. However, this kink disappears quickly when $h$ increases further.
If the intralayer interaction $J^{AA}$ becomes strong, as in Fig.~\ref{fig:fig3}(b), the qualitative behaviour of the magnetization vs. temperature changes. Let us recall that no magnetic compensation is predicted for that range of parameters. Below the critical field the total magnetization first rises and then drops with temperature, but always keeps the same sign. If the critical field $h/J^{BB}\simeq 0.1$ is exceeded, the magnetization dependence on temperature again becomes monotonic and bears similarity to the previous case shown in Fig.~\ref{fig:fig3}(a).
The behaviour of the total magnetization can be explained in detail by focusing on the magnetizations of both types of magnetic planes, A and B. Their temperature dependences for representative parameters are presented in Fig.~\ref{fig:fig4}. First let us analyse the behaviour of the system in the parameter range for which compensation occurs, just below the critical magnetic field $h/J^{BB}\simeq 0.06$ (Fig.~\ref{fig:fig4}(a)).
In accordance with Fig.~\ref{fig:fig3}(a), such a situation corresponds to the metastable state, where at low temperature the magnetizations of both planes are oriented antiferromagnetically, and the total magnetization is opposite to the external field. The magnetization of plane A, with weaker intraplanar couplings than plane B, has a rectangular-like temperature dependence and abruptly reaches very low values close to the compensation temperature. In the vicinity of this temperature $m_{A}$ varies quasi-linearly and changes its orientation to parallel to $m_{B}$ at a temperature slightly higher than $T_{comp}$. The magnetization of plane B is almost constant in that temperature range and starts to decrease only at temperatures significantly higher than the compensation temperature. Moreover, it keeps the same orientation at all temperatures.
For an external magnetic field exceeding $h/J^{BB}\simeq 0.06$, as illustrated in Fig.~\ref{fig:fig4}(b), the situation changes qualitatively, since no compensation occurs. Starting from the lowest temperatures, both magnetizations, $m_{A}$ and $m_{B}$, indicate parallel orientation. The temperature dependence of $m_{B}$ is very similar to the previous case (Fig.~\ref{fig:fig4}(a)). The magnetization $m_{A}$ again shows a rectangular-like thermal dependence at low temperatures. However, after the drop to low values, it remains constant in some range of temperatures in the vicinity of the former $T_{comp}$, and then increases slightly at higher temperatures. Therefore, for the set of parameters allowing for compensation at sufficiently low fields $h$, the behaviour of the total magnetization, with its abrupt change at some temperature close to $T_{comp}$, is due to the reorientation of the magnetization of plane A, which is characterized by weaker intraplanar couplings than plane B and no dilution.
The situation is different for the set of parameters for which the compensation phenomenon does not take place at any temperature and magnetic field, as illustrated in Fig.~\ref{fig:fig4}(c) and (d). Below the critical magnetic field $h/J^{BB}\simeq 0.1$, it is the diluted plane B which exhibits a rectangular-like temperature dependence of the magnetization, with a plateau at intermediate temperatures and a change of orientation. In principle, the behaviour is analogous to the one presented in Fig.~\ref{fig:fig4}(a), but with the roles of the A and B planes swapped. Exactly the same comment applies to the case of the stronger external field, exceeding the critical value, shown in Fig.~\ref{fig:fig4}(d). In both figures, the magnetization of the undiluted plane has a weaker temperature dependence and a constant sign. The reorientation transition concerns the diluted plane B, and since the absolute value of its magnetization is reduced with respect to plane A, no change of sign is visible in the total magnetization of the system.
The main quantity whose knowledge allows one to predict the magnetocaloric properties is the magnetic entropy as a function of temperature and external magnetic field. Therefore, let us illustrate its dependence on these parameters in the form of a plot of isentropes. Such results are presented in Fig.~\ref{fig:fig10} for two sets of parameters characterizing the system, i.e. intra- and interplanar exchange integrals and the magnetic component concentration. Namely, Fig.~\ref{fig:fig10}(a) corresponds to the occurrence of magnetic compensation at zero external field, while for Fig.~\ref{fig:fig10}(b) no compensation occurs (the parameters are the same as in Fig.~\ref{fig:fig3}(a) and \ref{fig:fig3}(b), respectively). In both plots it is visible that in the high-temperature range the entropy always decreases with increasing magnetic field. However, at lower temperatures, the field first causes an increase in entropy, and after some critical field is exceeded the tendency reverses. The decrease of the magnetic entropy with the magnetic field gives rise to normal MCE, while the increase marks the occurrence of inverse MCE. Therefore, the occurrence of inverse MCE can be expected in the considered system in general for low external magnetic fields. The pronounced dip in the isentropes marks the critical field separating both regimes. It should be stated that for the case with possible magnetic compensation, the non-monotonicity of the entropy as a function of field extends up to higher entropy values. The behaviour of the entropy as a function of the external field is fully consistent with the results shown in Fig.~\ref{fig:fig3}. Let us emphasize here the thermodynamic identity $\left(\partial S/\partial h\right)_{T}=\left(\partial m/\partial T\right)_{h}$, with the help of which the dependencies $m(T,h)$ and $S(T,h)$ can be analysed jointly.
In particular, inverse MCE corresponds to $\left(\partial m/\partial T\right)_{h}<0$, a condition fulfilled at low magnetic fields both for the phase with and without compensation, as seen in Fig.~\ref{fig:fig3}.
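The quoted Maxwell-type identity can be checked numerically on any smooth model Gibbs energy. The sketch below does so for a hypothetical spin-1/2 paramagnet (with $k_B=1$, not the multilayer model), taking both mixed derivatives by central differences.

```python
import math

def G(T, h):
    # toy Gibbs energy per site (spin-1/2 paramagnet, k_B = 1); illustration only
    return -T * math.log(2.0 * math.cosh(h / (2.0 * T)))

d = 1e-5
S = lambda T, h: -(G(T + d, h) - G(T - d, h)) / (2.0 * d)  # S = -(dG/dT)_h
m = lambda T, h: -(G(T, h + d) - G(T, h - d)) / (2.0 * d)  # m = -(dG/dh)_T

# hypothetical evaluation point
T0, h0 = 0.7, 0.3
dS_dh = (S(T0, h0 + d) - S(T0, h0 - d)) / (2.0 * d)
dm_dT = (m(T0 + d, h0) - m(T0 - d, h0)) / (2.0 * d)
# the two mixed derivatives agree: (dS/dh)_T = (dm/dT)_h
```

For the paramagnet both derivatives are negative at this point, i.e. the field reduces the entropy, which in the paper's terminology is the normal-MCE regime.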
\begin{figure}
\includegraphics[scale=0.25]{fig5}
\caption{\label{fig:fig10} The isentropes for a few values of normalized entropy in temperature-magnetic field variables. All couplings are of Ising type. The system for zero external field is in phase with magnetic compensation (a) or without magnetic compensation (b).}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig6}
\caption{\label{fig:fig5} The normalized isothermal entropy change as a function of normalized temperature for fixed external magnetic field amplitude and varying intraplanar coupling in plane A. All couplings are of Ising type. The system for zero external field is in phase with magnetic compensation (a) or without magnetic compensation (b).}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig7}
\caption{\label{fig:fig6} The normalized isothermal entropy change as a function of normalized temperature for fixed external magnetic field amplitude and varying intraplanar coupling in plane A. All intraplanar couplings are of isotropic Heisenberg type. The system for zero external field is in phase with magnetic compensation (a) or without magnetic compensation (b).}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig8}
\caption{\label{fig:fig7} The normalized isothermal entropy change as a function of normalized temperature for fixed external magnetic field amplitude and varying intraplanar coupling in plane A. All intraplanar couplings are of Ising type (solid lines) or isotropic Heisenberg type (dashed lines). The system for zero external field is in phase with magnetic compensation ($J^{AA}_{z}/J^{BB}_{z}=0.2$) or without magnetic compensation ($J^{AA}_{z}/J^{BB}_{z}=0.8$).}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig9}
\caption{\label{fig:fig11} The normalized isentropic cooling ratio as a function of normalized temperature for fixed external magnetic field and varying intraplanar coupling in plane A. All couplings are of Ising type. The system for zero external field is in phase with magnetic compensation (a) or without magnetic compensation (b).}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig10}
\caption{\label{fig:fig8} The normalized isothermal entropy change as a function of normalized temperature for varying external magnetic field amplitude. The system for zero field is in ferrimagnetic phase without compensation. The inverse MCE increases in magnitude for (a) and decreases for (b). All couplings are of Ising type.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig11}
\caption{\label{fig:fig9} The normalized isothermal entropy change as a function of normalized temperature for varying external magnetic field amplitude. The system for zero field is in ferrimagnetic phase with compensation. The inverse MCE increases in magnitude for (a) and decreases for (b). All couplings are of Ising type.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.25]{fig12}
\caption{\label{fig:fig12} (a) The normalized isothermal entropy change and (b) the normalized isentropic cooling ratio as a function of concentration of magnetic component in plane B, for various normalized temperatures. All couplings are of Ising type.}
\end{figure}
Having discussed the magnetic entropy vs. temperature and magnetic field, let us focus on the main aim of the study, which is to characterize the magnetocaloric properties of the system. Let us commence with an analysis of the magnitude of the entropy change when the external field is changed between $h$ and $0$, as a function of temperature. The results for fixed $h/J^{BB}=0.03$, fixed interplanar coupling $J^{AB}/J^{BB}=-0.5$ and various intraplanar couplings $J^{AA}$ are presented in Fig.~\ref{fig:fig5} (the concentration of magnetic atoms in plane B is $p=0.7$). For the range $0\leq J^{AA}/J^{BB}\lesssim 0.46$ (Fig.~\ref{fig:fig5}(a)), the system is within the regime in which compensation takes place. It is evident that at lower temperatures a distinct range of inverse magnetocaloric effect exists, which converts into normal magnetocaloric effect when the temperature is elevated. The minimum in the entropy change, corresponding to the strongest inverse MCE, is quite sensitive to the coupling $J^{AA}$, since it becomes shallower and shifts significantly towards higher temperatures when this intraplanar coupling increases. At the same time, a slight shift in the position of the maximum corresponding to normal MCE is also observed, and its magnitude is strongly reduced when $J^{AA}$ tends to the critical value at which the compensation phenomenon vanishes. At that point normal MCE completely disappears and only a pronounced minimum reflecting inverse MCE exists. The situation changes when $J^{AA}$ exceeds the critical value. Just above the critical coupling, it is the inverse MCE that totally disappears, leaving only a broad maximum corresponding to normal MCE exactly at the position of the former inverse-MCE minimum. Actually, this maximum resembles a mirror reflection of that minimum. When the intraplanar coupling in plane A increases further, the mentioned maximum tends to vanish and at the same time a peak of normal MCE builds up at higher temperatures. For very strong $J^{AA}$, we deal again with a shallow minimum (inverse MCE) at low temperatures and a pronounced, sharp maximum (normal MCE) at higher temperatures.
If the intraplanar couplings take the isotropic Heisenberg form, the behaviour of the MCE is presented in Fig.~\ref{fig:fig6}, for concentration $p=0.9$ and $J^{AB}/J^{BB}=-0.2$. The qualitative features are quite similar to those observed for purely Ising interactions (as in Fig.~\ref{fig:fig5}). Again, if the interaction parameters are such that the compensation phenomenon occurs, then a low-temperature range of inverse MCE and a high-temperature range of normal MCE are present. If the coupling $J^{AA}$ increases, the high-temperature maximum tends to disappear and the inverse-MCE minimum shifts towards higher temperatures and becomes less pronounced. After crossing the critical value of $J^{AA}/J^{BB}\simeq 0.65$, at which compensation disappears, the high-temperature peak of normal MCE is rebuilt. However, the low-temperature range of inverse MCE does not vanish in the vicinity of the critical $J^{AA}$, as was observed for Ising interactions. A shallow minimum at the lowest temperatures survives and gradually extends towards higher temperatures without a considerable change in magnitude when $J^{AA}$ rises. Therefore, the inverse MCE appears slightly more robust for Heisenberg intraplanar couplings than for all-Ising interactions.
Both cases of intraplanar couplings (Ising or isotropic Heisenberg) are compared for the same interaction parameter $J^{AB}/J^{BB}=-0.5$ and concentration $p=0.8$ in Fig.~\ref{fig:fig7}. The plots present the entropy change as a function of temperature for weak ($J^{AA}/J^{BB}=0.2$) and strong ($J^{AA}/J^{BB}=0.8$) intraplanar interaction within plane A. The former value corresponds to the case with compensation for both interaction anisotropies, while the latter implies no compensation phenomenon. In general, the qualitative shape of the curves for Ising and isotropic Heisenberg couplings is analogous. The extrema for the isotropic coupling are less pronounced and occur at lower temperatures compared to the Ising couplings.
In order to complete the study, we also analyse the behaviour of another magnetocaloric quantity of interest, namely the adiabatic cooling rate $\Gamma_{S}$. Its temperature dependence is shown in Fig.~\ref{fig:fig11} for various strengths of the interplanar antiferromagnetic coupling. Fig.~\ref{fig:fig11}(a) corresponds to the presence of compensation (at zero field), while in Fig.~\ref{fig:fig11}(b) the compensation is absent. The ranges of interaction parameters are the same as in the case of Fig.~\ref{fig:fig5} showing the temperature dependence of the isothermal entropy change, while the external magnetic field is set to $h/J^{BB}_{z}=0.03$. The general qualitative behaviour of $\Gamma_{S}$ is quite similar to the behaviour of $\Delta S_{T}$. For weaker interplanar couplings (the case with possible compensation), a pronounced inverse MCE is seen at low temperatures, with a tendency for the minimum to shift towards higher temperatures as the $J^{AB}$ interaction increases. On the contrary, the maximum corresponding to normal MCE tends to be completely reduced in magnitude when the interplanar interaction becomes stronger. Close to the boundary between the phases with and without compensation (in zero field), the value of the cooling rate remains almost constant (and corresponds to normal MCE) above the critical temperature, whereas below it a range of inverse MCE is present. In the case of absence of compensation, for interplanar couplings strong enough, the inverse MCE is absent in any temperature range. A further increase in $J^{AB}$ causes the maximum to build up again (with a strong shift towards higher temperatures). In parallel, a range of inverse MCE is recovered at low temperatures. The described changes mimic the behaviour of $\Delta S_{T}$ to a large extent (compare Fig.~\ref{fig:fig5}).
It is also interesting to follow the evolution of the entropy change magnitude when the amplitude of the external magnetic field $h$ is changed. Such dependencies are presented in Figs.~\ref{fig:fig8} and~\ref{fig:fig9}. The first of them concerns the case of the multilayer with strong intraplanar interactions in plane A; thus it corresponds to the system exhibiting no compensation phenomenon (Fig.~\ref{fig:fig8}(a,b)). For the range of external fields not exceeding the critical field of $h/J^{BB}\simeq 0.20$ (Fig.~\ref{fig:fig8}(a)), a broadened low-temperature minimum of inverse MCE and a sharp high-temperature maximum of normal MCE occur. When the external field amplitude $h$ increases, the magnitudes of both extrema also increase and remain roughly comparable. The high-temperature maximum has a quite stable position, while the low-temperature minimum tends to shift to lower temperatures. If the field amplitude exceeds the critical field (Fig.~\ref{fig:fig8}(b)), the situation changes. Namely, the inverse-MCE range tends to decrease in magnitude and then vanishes completely, so that only a single maximum (showing normal MCE) remains. This maximum increases in magnitude monotonically when $h$ rises. Therefore, the inverse MCE is present only for a limited range of external fields $h$.
If the system is in the regime of parameters with compensation, as in Fig.~\ref{fig:fig9}(a,b), the situation is qualitatively similar. However, below the critical magnetic field ($h/J^{BB}\simeq 0.06$ in this case), the inverse MCE dominates over the normal MCE when the magnitude of the effect is taken into account, as the low-temperature minimum becomes very deep (Fig.~\ref{fig:fig9}(a)). Above the critical field (Fig.~\ref{fig:fig9}(b)), this low-temperature range of inverse MCE again tends to vanish completely, while the high-temperature maximum builds up monotonically. For $h$ large enough, the low-temperature minimum transforms into a kind of maximum or kink, whose position corresponds to increasingly higher temperatures and finally tends to merge with the main maximum. We can conclude that for this choice of interaction parameters (Fig.~\ref{fig:fig9}(a)) the inverse MCE is more pronounced, but occurs in a narrower range of field amplitudes $h$, whereas for larger fields a broader maximum of normal MCE emerges (Fig.~\ref{fig:fig9}(b)).
In order to illustrate the influence of the magnetic dilution of plane B on the magnetocaloric properties, we show Fig.~\ref{fig:fig12}. In Fig.~\ref{fig:fig12}(a) the isothermal entropy change between $h/J^{BB}_{z}=0.03$ and $h/J^{BB}_{z}=0$ is plotted against the concentration of the magnetic component in plane B for three normalized temperatures, while Fig.~\ref{fig:fig12}(b) shows the analogous dependence of the normalized isentropic cooling rate $\Gamma_{S}$ at $h/J^{BB}_{z}=0.03$. Let us mention that at zero magnetic field, for $p\lesssim 0.683$ magnetic compensation is absent, while for $p\gtrsim 0.683$ the compensation phenomenon takes place (compare with Fig.~\ref{fig:fig2}(a)). It is visible that for the highest considered temperature, the MCE remains normal in the whole range of concentrations (for both studied magnetocaloric quantities) and achieves a pronounced maximum when this particular temperature becomes the critical temperature of the system. For the lower temperature, the MCE is an inverse one at high concentrations $p$, and dilution causes the effect to switch to a normal one. For the lowest studied temperature, again the entropy change and the cooling rate are negative at high $p$. When $p$ is reduced, they change sign in a discontinuous way while crossing the boundary between the phases without and with compensation. For lower concentrations they reach a minimum and then increase significantly. This plot supports the statement that at high temperatures the MCE in the studied system is a normal one, with a maximum corresponding to the critical temperature. On the other hand, at low temperatures the effect is inverse for the phase with compensation and can be switched to a normal one by crossing the boundary towards the phase where the compensation is absent.
\section{Conclusions}
In the paper, the coexistence of normal and inverse MCE has been analysed for a magnetic multilayer with antiferromagnetic interlayer couplings and selective dilution of one kind of inequivalent magnetic planes. The thermodynamics of the model was described within the Pair Approximation method, which is superior to the commonly applied molecular field approximation in the characterization of magnetocaloric properties \cite{Szalowski2011}. In particular, it takes into account the interaction anisotropy in spin space, allowing one to distinguish between Ising and isotropic Heisenberg couplings, which is beyond the scope of a molecular field-based description. Moreover, the method provides a nontrivial description of diluted magnets \cite{Balcerzak2009a,Szalowski2012}, including a nonzero critical concentration and a nonlinear dependence of the critical temperature on the magnetic component concentration. Within the accepted approach to the thermodynamics, the phase diagram and the conditions for the occurrence of the magnetic compensation phenomenon (the concentration of the magnetic component as well as the inter- and intraplanar couplings) were discussed for the system in question. The presence of normal as well as inverse MCE has been found in the isothermal entropy change. It was found that the inverse MCE may be present at temperatures lower than the normal MCE, mainly in the range of parameters for which compensation is possible. Very close to the boundary between the phases with and without compensation, it is possible to observe only the inverse MCE or only the normal MCE, depending on the side from which the boundary is approached. The existence of inverse MCE is limited by the amplitude of the external magnetic field, since the increase of the field amplitude first enhances and then reduces the inverse MCE, finally promoting only the normal MCE.
It should be stressed that for the cases embracing the compensation phenomenon we studied only the most interesting case, namely the one where the external field enforces the existence of metastable states. These states undergo a discontinuous transition to stable solutions above some critical field values. The first-order transitions are accompanied by pronounced changes in all magnetic properties, and their description was possible on the basis of the expression for the Gibbs energy. This possibility can be regarded as an advantage of the method. Studies of other possible situations in the external field, for instance when exclusively stable solutions in the ground state are taken into account, should still be performed. Such studies require further extensive numerical calculations and will be presented elsewhere.
The presented results allow for indicating the range of parameters of the model for which either both effects or just one of them can be observed as a function of temperature. Moreover, they show how sensitive the MCE is and how it can be controlled by varying the magnetic interactions and the concentration of magnetic atoms in a layered system. This is vital in the context of the possible design and optimization of multilayer ferrimagnets to achieve desired magnetocaloric properties. Moreover, the formalism could be extended, for example, to the case of more than two uniform magnetic subsystems or to systems with long-range interactions.
\begin{acknowledgments}
This work has been supported by the Polish Ministry of Science and Higher Education on a special purpose grant to fund the research and development activities and tasks associated with them, serving the development of young
scientists and doctoral students.
\end{acknowledgments}
\section{\bf Introduction}
In this work, we prove some results for viscosity solutions of some doubly nonlinear parabolic equations. The main focus is Trudinger's equation, but we will also state some results for a parabolic equation involving the infinity-Laplacian. This is a follow-up of the works in \cite{BM1, BM2}.
To describe our results more precisely, we introduce some definitions and notation. We take $n\ge 2$ in this work.
Letters like $x,\;y,\;z$, etc., denote the spatial variable, $s,\;t$ the time variable, and $o$ stands for the origin in $\mathbb{R}^n$. Let $\overline{A}$ denote the closure of a set $A$. The ball of radius $R>0$ and center $x\in \mathbb{R}^n$ is denoted by $B_R(x)$.
Let $\Omega\subset \mathbb{R}^n$ be a bounded domain and $0<T<\infty$. We define
$\Omega_T=\Omega\times (0,T)$ and its parabolic boundary as $P_T=(\overline{\Omega}\times\{0\})\cup (\partial \Omega \times (0,T)).$ Also set $H=\mathbb{R}^n$, $H_T=H\times(0,\infty)$.
For $1<p<\infty$, define the $p$-Laplacian $\D_p$ and the infinity-Laplacian $\D_{\infty}$ as
\eqRef{1.1}
\D_p u=\mbox{div}(|Du|^{p-2}Du)\;\;\;\mbox{and}\;\;\;\D_{\infty} u=\sum_{i,j=1}^{n}D_i u\, D_j u\, D_{ij}u,
\end{equation}
where $u=u(x)$. We now define the parabolic operators of interest to us. Call
\eqRef{1.2}
\Gamma_p u=\D_p u-(p-1) u^{p-2} u_t,\;\;2\le p<\infty,\;\;\;\mbox{and}\;\;\;\Gamma_{\infty} u=\D_{\infty} u-3 u^2 u_t,
\end{equation}
where $u=u(x,t)$. The equation $\Gamma_p u=0,\;2\le p<\infty,$ is the well-known Trudinger's equation \cite{TR}. See also
\cite{BM1,BM2} and the references therein. The operators $\Gamma_p,\;2< p\le \infty,$ are doubly nonlinear and degenerate, and, in this work, solutions will be understood in the viscosity
sense. Note that we use $p=\infty$ as a label: it is not clear to us what the limit of $\Gamma_p$ (and of $G_p$, see below) is as $p\rightarrow \infty$. For a detailed discussion of nonlinear parabolic equations, see \cite{ED}.
Suppose that $0<T<\infty$. Let $f\in C(\overline{\Omega})$ and $g(x,t)\in C( \partial\Omega\times[0,T))$. For ease of notation, we define
\eqRef{1.30}
h(x,t)=\left\{ \begin{array}{ll} f(x), & \forall x\in \overline{\Omega},\\ g(x,t), & \forall (x,t)\in \partial\Omega\times[0,T). \end{array}\right.
\end{equation}
In most of this work, we take
\eqRef{1.3}
0<\inf_{P_T} h(x,t)\le \sup_{P_T}h(x,t)<\infty.
\end{equation}
For $2\le p\le \infty$, we consider positive viscosity solutions $u\in C(\Omega_T\cup P_T)$ of
\eqRef{1.4}
\Gamma_p u=0,\;\;\mbox{in $\Omega_T$ and $u=h$ on $P_T$.}
\end{equation}
In \cite{BM1} (see Theorem 5.2), we showed the existence of positive viscosity solutions of (\ref{1.4}) for $p=\infty$. The work \cite{BM2} showed the existence of positive viscosity solutions
for $2\le p<\infty$; see Theorems 1.1 and 1.2 therein. For $2\le p\le n$, this result is proven for domains $\Omega$ that satisfy a uniform outer ball condition; for $n<p<\infty$, it holds for any $\Omega$.
We will also have occasion to work with equations related to $\Gamma_p$. As observed in Lemma 2.2 in \cite{BM1} and Lemma 2.1 in \cite {BM2},
if $u>0$ solves the doubly nonlinear equation $\Gamma_p u=0,\;2\le p\le \infty$ (see (\ref{1.2})), then $v=\log u$ solves
$$\Delta_pv+(p-1)|Dv|^p-(p-1)v_t=0,\;\;2\le p<\infty,\;\;\mbox{and}\;\;\;\D_{\infty} v+|Dv|^4-3v_t=0.$$
For convenience of presentation, call
\eqRef{1.5}
G_p w=\Delta_pw+(p-1)|Dw|^p-(p-1)w_t,\;\;2\le p<\infty,\;\;\mbox{and}\;\;G_\infty w=\D_{\infty} w+|Dw|^4-3w_t.
\end{equation}
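For smooth positive $u$, the passage from $\Gamma_p$ to $G_p$ can be verified by a direct (formal) computation; the viscosity-solution version is handled by the cited lemmas. Writing $u=e^v$,

```latex
Du=e^{v}Dv,\qquad |Du|^{p-2}Du=e^{(p-1)v}|Dv|^{p-2}Dv,\qquad
(p-1)u^{p-2}u_t=(p-1)e^{(p-1)v}v_t,
```

so taking divergences gives $\Delta_p u=e^{(p-1)v}\bigl(\Delta_p v+(p-1)|Dv|^{p}\bigr)$ and hence $\Gamma_p u=e^{(p-1)v}G_p v$, for $2\le p<\infty$. Similarly, using $u_{ij}=e^v(v_{ij}+v_iv_j)$, one gets $\Gamma_\infty u=e^{3v}G_\infty v$.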
We now introduce the notion of a viscosity solution. The set $usc(A)$ denotes the
set of all upper semi-continuous functions on a set $A$ and $lsc(A)$ the set of all lower semi-continuous functions on $A$. We say $u\in usc(\Omega_T),\;u>0,$ is a sub-solution of $\Gamma_p w=0$, in $\Omega_T$, or $\Gamma_p u\ge 0$ (see (\ref{1.2}))
if for any function $\psi(x,t)$, $C^2$ in $x$ and $C^1$ in $t$, such that $u-\psi$ has a local maximum
at some $(y,s)\in \Omega_T$, we have
$$\D_p \psi(y,s)-(p-1)u(y,s)^{p-2} \psi_t(y,s)\ge 0.$$
Similarly, $u\in lsc(\Omega_T),\;u>0,$ is a super-solution of $\Gamma_p w=0$ in $\Omega_T$ or $\Gamma_pu\le 0$ (see (\ref{1.2}))
if for any function $\psi(x,t)$, $C^2$ in $x$ and $C^1$ in $t$, such that $u-\psi$ has a local minimum
at some $(y,s)\in \Omega_T$, we have $\D_p \psi(y,s)-(p-1)u(y,s)^{p-2} \psi_t(y,s)\le 0.$ A function $u\in C(\Omega_T)$ is a solution of $\Gamma_p w=0$, in $\Omega_T$, or $\Gamma_p u=0$, if $u$ is both a sub-solution and a super-solution. Analogous definitions can be provided for the equation $G_p w=0$, see (\ref{1.5}).
Next, we say $u\in usc(\Omega_T\cup P_T),\;u>0,$ is a viscosity sub-solution of (\ref{1.4}) if $\Gamma_p u\ge 0$, in $\Omega_T,$ and $u\le h$ on $P_T$. Similarly, $u\in lsc(\Omega_T\cup P_T),\;u>0,$ is a viscosity super-solution of (\ref{1.4}) if
$\Gamma_p u\le 0$ in $\Omega_T$, and $u\ge h$ on $P_T$. A function $u\in C(\Omega_T\cup P_T),\;u>0,$ is a solution of (\ref{1.4}) if $\Gamma_p u=0$ in $\Omega_T$ and $u=h$ on $P_T$.
We now state the main results of this work. Let $\lambda_{\Om}$ be the first eigenvalue of $\Delta_p$ on $\Omega$.
\begin{thm}\label{1.6} Let $2\le p<\infty$ and $\Omega\subset \mathbb{R}^n,\;n\ge 2,$ be a bounded domain. Call
$\Omega_\infty= \Omega\times(0, \infty)$ and
$P_\infty$ its parabolic boundary. Suppose that $h\in C(P_\infty)$ is as defined in (\ref{1.30}) with
$h\ge 0$ and $\sup_{P_\infty} h<\infty$.
Let
$u\in usc(\Omega_\infty\cup P_\infty),\;u\ge 0,$ solve
$$\Gamma_p u\ge 0,\;\;\mbox{in $\Omega_\infty$ and $u\le h$ on $P_\infty$}.$$
(i) If $\lim_{t\rightarrow \infty} (\sup_{\partial\Omega}g(x,t) )=0$ then $\lim_{t\rightarrow \infty} (\sup_{\Omega\times [t, \infty)}u)=0.$
(ii) Moreover, if $g(x,t)=0,\forall(x,t)\in \partial\Omega\times[T_0,\infty)$, for some $T_0\ge 0$, then
$$\lim_{t\rightarrow \infty} \frac{\log(\sup_\Omega u(x,t) )}{t} \le -\frac{\lambda_\Omega}{p-1}. $$
\end{thm}
The above result is an analogue of the asymptotic result proven in Theorem 4.4 and Lemma 4.7 in \cite{BM1} for
$\Gamma_\infty u\ge 0$. We provide an example where the rate $\exp(-\lambda_{\Om} t/(p-1))$ is attained, see Remark
\ref{3.15}. Note that we do not address existence for $h\ge 0$. We also show
\begin{thm}\label{1.61} Let $2\le p\le \infty$, $\Omega\subset \mathbb{R}^n,\;n\ge 2,$ be a bounded domain,
$\Omega_\infty= \Omega\times(0, \infty)$ and
$P_\infty$ be its parabolic boundary. Suppose that $h\in C(P_\infty)$ is as defined in (\ref{1.30}).
Assume that $0<\inf_\Omega f\le 1\le \sup_\Omega f<\infty$ and $g(x,t)=1,\;\forall(x,t)\in \partial\Omega\times[0,\infty)$.
If $u\in C(\Omega_\infty\cup P_\infty),\;u>0,$ solves
$$\Gamma_p u= 0,\;\;\mbox{in $\Omega_\infty$, $u(x,0)=f(x),\;\forall x\in \overline{\Omega}$ and $u(x,t)=1,\;\forall(x,t)\in \partial\Omega\times[0,\infty)$},$$
then for every $x\in \Omega$, $\lim_{t\rightarrow \infty} u(x,t)=1.$
\end{thm}
From the proof, it follows that, as $t\rightarrow \infty$, (i) $u(x,t)=\exp(O(t^{-s}))$, for $p=2$ and any $s>0$, (ii) $u(x,t)=\exp(O(t^{-1/(p-2)}))$, for $2<p<\infty$, and (iii)
$u(x,t)=\exp(O(t^{-1/2}))$, for $p=\infty$.
From the works in \cite{AJK, JL}, one sees that (i) for $\Delta_\infty u=u_t$ the asymptotic decay rate is $t^{-1/2}$, and (ii) for $\D_p u=u_t$ the rate is $t^{-1/(p-2)}$. These rates do appear to agree with ours if we consider $G_p$.
However, at this time, it is not clear whether the asymptotic rates in Theorem \ref{1.61} are optimal, or whether $u$ tends to a $p$-harmonic function when $g(x,t)=g(x)$, for all $t>0$.
We now state a Phragm\'en-Lindel\"of type result for the unbounded domain $H_T$. A version was shown in Theorem 4.1 in \cite{BM1} for $\Gamma_\infty$. We show an analogue for $\Gamma_p$, $2\le p < \infty$, and include an improvement for $\Gamma_\infty$.
\begin{thm}\label{1.7} Let $2\le p\le \infty$, $T>0$, $H=\mathbb{R}^n$ and $H_T=\mathbb{R}^n\times (0,T)$.
Assume that $0<\inf_{H}f(x)\le \sup_H f(x)<\infty$.
Suppose that $u\in C(H\times\{0\} \cup H_T )$, $2\le p\le \infty,$ solves
$$\Gamma_p u=0,\;\;\mbox{in $H_T$},$$
and as $R\rightarrow \infty$,
\begin{eqnarray*}
\sup_{0\le |x|\le R,\;0\le t\le T}u(x,t)\le \left\{ \begin{array}{llr} \exp\left( o( R^{p/(p-1)})\right), &\mbox{for $2\le p<\infty$, and}\\
\exp\left( o( R^{4/3})\right),& \mbox{for $p=\infty$}.\end{array}\right.
\end{eqnarray*}
It follows that $\inf_{H}f(x)\le u(x,t) \le\sup_H f(x),\;\forall(x,t)\in H_T$.
\end{thm}
It is not clear if the result in the theorem is optimal.
The proofs of Theorems \ref{1.6}, \ref{1.61} and \ref{1.7} employ appropriate auxiliary functions and the comparison principle. We have divided our work as follows. Section 2 contains some previously proven results and includes some useful calculations. Proofs of Theorems \ref{1.6} and \ref{1.61} are in Section 3. Theorem \ref{1.7} is proven in Section 4. Section 5 contains a discussion of the eigenvalue problem for $\Delta_p$ in the viscosity setting and has relevance for Theorem \ref{1.6}.
\section{Preliminaries and some observations}
We recall some previously proven results and include some useful calculations.
See Section 3 in \cite{BM1, BM2} for proofs of Lemmas \ref{2.1}, \ref{2.3} and \ref{2.5}, and Theorems \ref{2.4} and \ref{2.2}. Lemma \ref{2.1} and Theorem \ref{2.4} hold regardless of the sign of $u$.
\begin{lem}\label{2.1}{(Maximum principle)} Let $\Omega\subset \mathbb{R}^n,\;n\ge 2$ be a bounded domain and $T>0$.\\
(a) If $u\in usc(\Omega_T\cup P_T)$ solves
$$\Delta_p u- (p-1)|u|^{p-2} u_t\ge 0,\;\;2\le p<\infty,\;\;\mbox{or}\;\;\D_{\infty} u-3u^2 u_t\ge 0,\;\; \mbox{in $\Omega_T$},$$
then $\sup_{\Omega_{T}}u\le\sup_{P_T} u=\sup_{\Omega_T\cup P_T}u.$\\
(b) If $\phi\in lsc(\Omega_T\cup P_T)$ and
$$\Delta_p \phi- (p-1)|\phi|^{p-2}\phi_t\le 0,\;\;2\le p<\infty,\;\;\mbox{or}\;\;\D_{\infty} \phi-3\phi^2\phi_t\le 0,\;\;\mbox{in $\Omega_T$},$$
then
$\inf_{\Omega_T} \phi\ge \inf_{P_T}\phi=\inf_{\Omega_T\cup P_T}\phi.$
\end{lem}
We present a comparison principle for $G_p$ (see (\ref{1.5})) that leads to Theorem \ref{2.2}.
\begin{thm}\label{2.4}{(Comparison Principle)}
Let $2\le p\le \infty$. Suppose that $\Omega\subset \mathbb{R}^n,\;n\ge 2,$ is a bounded domain and $T>0$. Let $u\in usc(\Omega_T\cup P_T)$ and $v\in lsc(\Omega_T\cup P_T)$ satisfy
$$G_p u\ge 0,\;\;\mbox{and}\;\;G_p v\le 0,\;\;\mbox{in $\Omega_T$}.$$
If $u,\;v$ are bounded and $u\le v$ on $P_T$, then $u\le v$ in $\Omega_T$.
\end{thm}
The next is a comparison principle for $\Gamma_p$ (see (\ref{1.2})) that applies to positive solutions.
\begin{thm}\label{2.2}{(Comparison Principle)}
Suppose that $\Omega\subset \mathbb{R}^n,\;n\ge 2,$ is a bounded domain and $T>0$. Let $u\in usc(\Omega_T\cup P_T)$ and $v\in lsc(\Omega_T\cup P_T)$ satisfy
$$\Gamma_p u\ge 0,\;\;\mbox{and}\;\;\Gamma_p v\le 0,\;\;\mbox{in $\Omega_T$},\;\;\;\;2\le p\le \infty.$$
Assume that $\min(\inf_{\Omega_T\cup P_T} u, \inf_{\Omega_T\cup P_T} v)>0$.
If $\sup_{P_T}v<\infty$ then
$$\sup_{\Omega_T} u/v=\sup_{P_T}u/v.$$
In particular, if $u\le v$ on $P_T$, then $u\le v$ in $\Omega_T$. Clearly, solutions to (\ref{1.4}) are unique.
\end{thm}
\begin{rem}\label{2.20} We extend Theorem \ref{2.2} to the case $u\ge 0$ on $P_T$. Let $v$ be as in Theorem \ref{2.2}.
(i) If $u=0$ on $P_T$, then by Lemma \ref{2.1}, $u=0$, in $\Omega_T$, and the conclusion holds.
(ii) Let $u\ge 0$ be a sub-solution (see Theorem \ref{2.2}) and $\sup_{\Omega_T}u>0$; clearly,
$\sup_{P_T}u>0$, by Lemma \ref{2.1}.
Let $\varepsilon>0$ be small. Define $u_\varepsilon(x,t)=\max(u(x,t),\;\varepsilon),\;\forall(x,t)\in \Omega_T\cup P_T.$ We show that $u_\varepsilon\in usc(\Omega_T\cup P_T)$ and $\Gamma_p u_\varepsilon\ge 0,$ in $\Omega_T$.
Let $(y,s)\in \Omega_T\cup P_T$. Since $\limsup_{(x,t)\rightarrow (y,s)}u(x,t)\le u(y,s)$, we have
$\limsup_{(x,t)\rightarrow (y,s)}u_\varepsilon(x,t)\le u_\varepsilon(y,s)$ and $u_\varepsilon\in usc(\Omega_T\cup P_T)$. Next, let $\psi$, $C^2$ in $x$ and $C^1$ in $t$, and $(y,s)\in \Omega_T$ be such that $u_\varepsilon-\psi$ has a maximum at $(y,s)$. If $u_\varepsilon(y,s)=u(y,s)(\ge \varepsilon)$ then $u-\psi$ has a maximum at $(y,s)$ (since $u\le u_\varepsilon$).
Since $u$ is sub-solution,
we get
$$\Delta_p\psi(y,s)-(p-1)u(y,s)^{p-2} \psi_t(y,s)=\Delta_p\psi(y,s)-(p-1)u_\varepsilon(y,s)^{p-2} \psi_t(y,s)\ge 0.$$
Next, assume that $u_\varepsilon(y,s)=\varepsilon$. Rewriting $(u_\varepsilon-\psi)(x,t)\le \varepsilon-\psi(y,s)$,
$$0\le u_\varepsilon(x,t)-\varepsilon\le \langle D\psi(y,s), x-y\rangle+\psi_t(y,s)(t-s)+o(|x-y|+|t-s|),$$
as $(x,t)\rightarrow (y,s)$, where $(x,t)\in \Omega_T$. Clearly, $D\psi(y,s)=0$ and $\psi_t(y,s)=0$. Thus,
$\Delta_p\psi(y,s)-(p-1)u_\varepsilon(y,s)^{p-2} \psi_t(y,s)=0$, if $p>2$. For $p=2$, we write the above Taylor expansion as $0\le \langle D^2\psi(y,s)(x-y),x-y\rangle/2 +o(|x-y|^2+|t-s|)$, as $(x,t)\rightarrow (y,s)$. It is clear that $\Delta\psi(y,s)\ge 0$. Thus, $u_\varepsilon$ solves $\Gamma_pu_\varepsilon \ge 0$, in $\Omega_T$, for $2\le p<\infty$.
We now apply Theorem \ref{2.2} to obtain that $\sup_{\Omega_T}u_\varepsilon/v\le \sup_{P_T}u_\varepsilon/v$, for every small $\varepsilon>0$. Since $u_\varepsilon\le u+\varepsilon$ (note that $u\ge 0$), we have
$\sup_{\Omega_T}u/v\le \sup_{\Omega_T}u_\varepsilon/v\le \sup_{P_T}(u+\varepsilon)/v\le \sup_{P_T} u/v+\sup_{P_T} \varepsilon/v.$
The conclusion of Theorem \ref{2.2} holds by letting $\varepsilon\rightarrow 0.$ $\Box$
\end{rem}
Next we state a change of variables result which relates $\Gamma_p$ to $G_p$, see (\ref{1.2}) and (\ref{1.5}).
\begin{lem}\label{2.3} Let $\Omega\subset \mathbb{R}^n,\;n\ge 2,$ be a domain, $T>0$, and $2\le p\le \infty$. Suppose $u:\Omega_T\rightarrow \mathbb{R}^+$ and $v:\Omega_T\rightarrow \mathbb{R}$ are such that
$u=e^v$. The following hold.\\
(a) $u\in usc(\Omega_T\cup P_T)$ and $\Gamma_p u\ge 0$ if and only if $v\in usc(\Omega_T\cup P_T)$ and $G_p v\ge 0$.\\
(b) $u\in lsc(\Omega_T\cup P_T)$ and $\Gamma_p u\le 0$ if and only if $v\in lsc(\Omega_T\cup P_T)$ and $G_p v\le 0$.
\end{lem}
We now present a separation of variable result that will be used for proving Theorem \ref{1.6}.
See Lemma 2.14 in \cite{BM1} and Lemma 2.3 in \cite{BM2}.
\begin{lem}\label{2.5} Let $\lambda\in \mathbb{R}$, $\mu \in \mathbb{R}$, $T>0$, and $\psi:\Omega\rightarrow \mathbb{R}^+$.
(a) Suppose that for some $2\le p<\infty$, $\psi\in usc(lsc)(\Omega)$ solves
$\Delta_p\psi+\lambda \psi^{p-1}\ge (\le) 0$ in $\Omega$. If $u(x,t)=\psi(x) e^{-\mu t/(p-1) }$ then
$\Gamma_p u\ge (\le) 0,\;\mbox{where $\mu\ge(\le) \lambda.$}$
(b) Suppose that $\psi\in usc(lsc)(\Omega)$ solves
$\D_{\infty} \psi+\lambda \psi^{3}\ge (\le) 0$ in $\Omega$. If $u(x,t)=\psi(x) e^{-\mu t/3 }$ then
$\Gamma_\infty u\ge (\le) 0,\;\mbox{where $\mu\ge(\le) \lambda.$} $
\end{lem}
We include two results that will be used in Theorem \ref{1.7}. We recall the radial form of $\D_p$: if $r=|x|$, then, for $2\le p\le \infty$,
\eqRef{2.50}
\D_p v(r)=|v^{\prime}(r)|^{p-2} \left( (p-1) v^{\prime\prime}(r)+\frac{n-1}{r} v^{\prime}(r) \right)\;\;\mbox{and}\;\;\D_{\infty} v(r)= \left(v^{\prime}(r)\right)^2 v^{\prime\prime}(r).
\end{equation}
\begin{lem}\label{2.6} Let $R>0$; set
$r=|x|,\;\forall x\in \mathbb{R}^n$ and $h(x)=1-(r/R)^2,\; \forall 0\le r\le R.$
(i) For $2\le p<\infty$, take
$$k=p+n-2,\;\;\;\alpha=\frac{2p+k-1}{2(p-1)},\;\;\;\theta^2=\frac{k}{k+1}\;\;\;\mbox{and}
\;\;\;\lambda_p= \frac{k\theta^{p-2}}{R^p} \left( \frac{2\alpha}{1-\theta^2}\right)^{p-1}.$$
Call $\eta(x)=h(x)^\alpha$ and $\phi_p(x,t)=\eta(x) e^{-\lambda_p t/(p-1)},\;\;\forall 0\le r\le R.$
(ii) For $p=\infty$, define
$$\theta=1/\sqrt{2},\;\;\;\mbox{and}\;\;\;\lambda_\infty=2^8/R^4. $$
Set $\eta=h(x)^2$ and $\phi_\infty(x,t)=\eta(x)e^{-\lambda_\infty t/3},\;\forall 0\le r\le R.$
Then, for $2\le p\le \infty$,
$\Gamma_p \phi_p\ge 0,$ in $B_R(o)\times(0,\infty)$, $\phi_p(o,0)=1$ and $\phi_p(x, t)=0$, on $|x|=R$ and $t\ge 0.$
\end{lem}
\begin{proof} Our goal is to show that $\D_p \eta+\lambda_p \eta^{p-1}\ge 0$, for $2\le p<\infty$, and $\D_{\infty} \eta+\lambda_\infty \eta^{3}\ge 0$, in $0\le r\le R$.
{\bf Part (i): $2\le p<\infty$.} Observe that $\alpha>1$. Differentiating $\eta$, by using (\ref{2.50}), and
setting $H=h^{(\alpha-1)(p-1)-1}$,
\begin{eqnarray}\label{2.61}
\D_p \eta&+&\lambda_p \eta^{p-1}=\D_p h^\alpha+\lambda_p h^{\alpha(p-1)}\nonumber\\
&=&\alpha^{p-1}\left( h^{(\alpha-1)(p-1)} \D_p h+(\alpha-1)(p-1)h^{(\alpha-1)(p-1)-1}|Dh|^p\right)
+\lambda_p h^{\alpha(p-1)} \nonumber\\
&=&h^{(\alpha-1)(p-1)-1} \left[ \lambda_p h^p+\alpha^{p-1} \left\{ (\alpha-1)(p-1)|Dh|^p+h\D_p h\right\}\right]\nonumber\\
&=&H\left[ \lambda_p h^p+\alpha^{p-1} \left\{ (\alpha-1)(p-1) \left(\frac{2r}{R^{2}}\right)^p - \frac{2^{p-1}r^{p-2}( p+n-2)h}{R^{2(p-1)}} \right\} \right] \nonumber \\
&=&H \left[\lambda_p h^{p} + \alpha^{p-1} \left\{ (\alpha-1)(p-1) \left(\frac{2r}{R^{2}}\right)^p- \frac{2^{p-1}r^{p-2}kh}{R^{2(p-1)}}
\right\} \right].
\end{eqnarray}
We now estimate the right hand side of (\ref{2.61}) in $0\le r\le \theta R$ and in $\theta R\le r\le R$ separately.
In $0\le r\le \theta R$, disregard the middle term in (\ref{2.61}) and take $r=\theta R$, to see
\begin{eqnarray*}
\D_p \eta+\lambda_p \eta^{p-1}&\ge& H \left( \lambda_p h^{p}- \frac{(2\alpha)^{p-1}r^{p-2}kh}{R^{2(p-1)}}\right)=hH \left( \lambda_p h^{p-1}-\frac{(2\alpha)^{p-1}r^{p-2}k}{R^{2(p-1)}}\right)\\
&\ge &hH \left( \lambda_p\left(1-\theta^2\right)^{p-1} -\frac{(2\alpha)^{p-1}\theta^{p-2}k}{R^p}\right)= 0.
\end{eqnarray*}
In $\theta R\le r\le R$, disregard the $\lambda_p h^p$ term in (\ref{2.61}), set $r=\theta R$ in the second term and $h=1$ in the third term to obtain
\begin{eqnarray*}
\D_p \eta+\lambda_p \eta^{p-1}&\ge& \alpha^{p-1}H \left\{ (p-1)(\alpha-1) \frac{2^{p}r^p}{R^{2p}}-\frac{2^{p-1}r^{p-2}k}{R^{2(p-1)}} h
\right\}\\
&\ge & \frac{(2\alpha)^{p-1}Hr^{p-2}}{R^{2(p-1)}} \left\{ \frac{2(\alpha-1)(p-1)r^2}{R^{2}}-k \right\}\\
&=&\frac{(2\alpha)^{p-1}Hr^{p-2}}{R^{2(p-1)}} \left\{ 2(\alpha-1)(p-1)\theta^2-k \right\}=0,
\end{eqnarray*}
since $2(p-1)(\alpha-1)\theta^2=k$.
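The last identity is immediate from the definitions of $\alpha$ and $\theta$ in the statement of the lemma:

```latex
\alpha-1=\frac{2p+k-1}{2(p-1)}-1=\frac{k+1}{2(p-1)},
\qquad\mbox{so that}\qquad
2(p-1)(\alpha-1)\theta^2=(k+1)\cdot\frac{k}{k+1}=k.
```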
{\bf Part (ii): $p=\infty$.} The work is similar to Part (i).
\begin{eqnarray}\label{2.62}
\D_{\infty} \eta+\lambda_\infty \eta^3&=&\D_{\infty} h^2+\lambda_\infty h^6=8h^3\D_{\infty} h+ 8h^2|Dh|^4+\lambda_\infty h^6\nonumber\\
&=&h^2\left[ \lambda_\infty h^4+8\left( |Dh|^4+h\D_{\infty} h\right) \right]
=h^2\left[ \lambda_\infty h^4+8\left\{ \left(\frac{2r}{R^2}\right)^4-\frac{8r^2}{R^6}h\right\} \right]
\end{eqnarray}
We estimate (\ref{2.62}) in $0\le r\le \theta R$,
\begin{eqnarray*}
\D_{\infty} \eta+\lambda_\infty \eta^3\ge h^2\left( \lambda_\infty h^4-\frac{64r^2}{R^6}h \right)=h^3\left( \lambda_\infty h^3-\frac{64r^2}{R^6} \right)\ge h^3\left( \lambda_\infty (1-\theta^2)^3-\frac{64\theta^2}{R^4} \right)=0.
\end{eqnarray*}
From (\ref{2.62}), if $\theta R\le r\le R$ then
\begin{eqnarray*}
\D_{\infty} \eta+\lambda_\infty \eta^3\ge 8h^2\left\{ \left(\frac{2r}{R^2}\right)^4-\frac{8r^2}{R^6}h\right\}
\ge \frac{64h^2r^2}{R^6} \left(\frac{2r^2}{R^2}-1\right)\ge \frac{64h^2r^2}{R^6}
\left(2\theta^2-1\right)=0.
\end{eqnarray*}
The claim holds by an application of Lemma \ref{2.5}.
\end{proof}
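As an independent sanity check (not part of the proof), the inequality $\D_p \eta+\lambda_p \eta^{p-1}\ge 0$ of Part (i) can be tested numerically through the radial form (\ref{2.50}); the values $p=3$, $n=2$, $R=1$ below are arbitrary sample choices.

```python
import sympy as sp

# Arbitrary sample values (for illustration only): p = 3, n = 2, R = 1.
p, n, R = 3, 2, 1
r = sp.symbols('r', positive=True)
k = p + n - 2
alpha = sp.Rational(2*p + k - 1, 2*(p - 1))
theta2 = sp.Rational(k, k + 1)
# lambda_p = k * theta^(p-2) / R^p * (2*alpha/(1 - theta^2))^(p-1), as in Lemma 2.6(i)
lam = k * sp.sqrt(theta2)**(p - 2) / R**p * (2*alpha/(1 - theta2))**(p - 1)
h = 1 - (r/R)**2
eta = h**alpha
ep = sp.diff(eta, r)
# radial p-Laplacian (2.50): |eta'|^(p-2) * ((p-1) eta'' + (n-1)/r * eta')
expr = sp.Abs(ep)**(p - 2)*((p - 1)*sp.diff(eta, r, 2) + (n - 1)/r*ep) + lam*eta**(p - 1)
vals = [expr.subs(r, sp.Rational(i, 100)).evalf() for i in range(1, 100)]
print(min(vals) >= -1e-9)
```

Other small values of $p$, $n$ and $R$ can be checked in the same way.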
We record a calculation we use in the various auxiliary functions we employ in our work.
\begin{rem}\label{2.70} Let $f(t)\in C^1,$ in $t\ge 0,$ and $f(t)\ge 0$. Set $r=|x|$ and
$$u(x,t)=\pm f(t) r^{p/(p-1)},\;\;2\le p<\infty,\;\;\mbox{and}\;\;u(x,t)=\pm f(t) r^{4/3},\;\;p=\infty.$$
Call $A=n(p/(p-1))^{p-1}$ and $B=(p/(p-1))^p.$
We show that in $r\ge 0$,
$$G_pu=\left\{ \begin{array}{lcr} \pm \left\{Af^{p-1}-(p-1)r^{p/(p-1)}f^{\prime}\right\}+(p-1)Bf^pr^{p/(p-1)},& 2\le p<\infty,\\ \pm\left\{(4^3/3^4)f^3-3f^{\prime}r^{4/3}\right\}+(4^4/3^4)f^4 r^{4/3},& p=\infty,
\end{array}\right. $$
where the terms even in $Du$ retain their sign. We prove the above for the $+$ case; the $-$ case is similar. By (\ref{2.50}), the formula holds in $r>0$; for $p=2$, $u$ is $C^2$ and it holds everywhere. We check $r=0$, for $2<p\le \infty$.
Suppose that $\psi$, $C^1$ in $t$ and $C^2$ in $x$, is such that $u-\psi\le u(o,s)-\psi(o,s)$, for $(x,t)$ near $(o,s)$. Thus,
$u(x,t)\le \langle D\psi(o,s), x\rangle+\psi_t(o,s)(t-s)+o(|x|+|t-s|),$ as $(x,t)\rightarrow (o,s)$. Clearly, $\psi_t(o,s)=0$ and $D\psi(o,s)=0$. Using the expansion
$$u(x,t)=f(t)|x|^{p/(p-1)}\le \langle D^2\psi(o,s)x, x\rangle/2+\psi_t(o,s)(t-s)+o(|x|^2+|t-s|),$$
as $(x,t)\rightarrow (o,s)$, we see that no such $C^2$ function $\psi$ can exist, since $p/(p-1)<2$. Hence, $u$ is a sub-solution.
Next, let $\psi$, $C^1$ in $t$ and $C^2$ in $x$, be such that $u-\psi\ge u(o,s)-\psi(o,s)$, for $(x,t)$ near $(o,s)$. Thus,
$u(x,t)\ge \langle D\psi(o,s), x\rangle+\psi_t(o,s)(t-s)+o(|x|+|t-s|),$ as $(x,t)\rightarrow (o,s)$. Clearly, $D\psi(o,s)=0$, $\psi_t(o,s)=0$ and $G_p\psi(o,s)=0$.
Hence, $u$ is a super-solution. A similar argument works for $G_\infty$. $\Box$
\end{rem}
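As a sanity check on the formula above (again, not part of the argument), the $+$ case can be verified symbolically in $r>0$, where the radial form (\ref{2.50}) applies; the values $p=3$, $n=2$ are arbitrary sample choices.

```python
import sympy as sp

# Arbitrary sample values: p = 3, n = 2; '+' case of Remark 2.70, checked in r > 0.
p, n = 3, 2
r, t = sp.symbols('r t', positive=True)
f = sp.Function('f')(t)
q = sp.Rational(p, p - 1)             # the exponent p/(p-1)
u = f * r**q
ur = sp.diff(u, r)                     # radial derivative; here ur > 0, so |Du| = ur
Dp_u = ur**(p - 2)*((p - 1)*sp.diff(u, r, 2) + (n - 1)/r*ur)   # radial form (2.50)
Gp_u = Dp_u + (p - 1)*ur**p - (p - 1)*sp.diff(u, t)
A = n*q**(p - 1)
B = q**p
claimed = A*f**(p - 1) + (p - 1)*B*f**p*r**q - (p - 1)*r**q*sp.diff(f, t)
print(sp.simplify(Gp_u - claimed))     # expect 0
```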
The next is an auxiliary function which is employed in the proof of Theorem \ref{1.7}.
\begin{lem}\label{2.7} Let $T>0$ and $2\le p\le \infty$. Set $r=|x|$ and, for any fixed $\alpha>0$, define, $\forall x\in \mathbb{R}^n$ and any $0\le t\le T$,
\begin{eqnarray*}
&&\phi_p(x,t)=\exp\left( a\left[ (t+1)^{\alpha (p-1)+1}-1\right]+ b (t+1)^\alpha r^{p/(p-1)} \right),\;\;2\le p<\infty,\\
&&\phi_\infty(x,t)=\exp\left( a [ (t+1)^{3\alpha+1}-1]+ b (t+1)^{\alpha}r^{4/3} \right),\;\;\;\;p=\infty.
\end{eqnarray*}
(i) For $2\le p<\infty$, take $a$ and $b$ such that
$$a= \frac{n p^{p-1}b^{p-1}}{(p-1)^p\{1+ \alpha (p-1)\}}\;\;\mbox{and}\;\;0<b^{p-1} \left( \frac{p}{p-1}\right)^p(T+1)^{\alpha (p-1)+1}<\alpha.$$
(ii) For $p=\infty$, take $a$ and $b$ such that
$$a=\frac{4^3b^3}{3^5(3\alpha+1)}\;\;\mbox{and}\;\; 0<b^3\left(\frac{4^4}{3^5}\right)(T+1)^{3\alpha+1}<\alpha.$$
Then for $2\le p\le \infty$, $\phi_p(0,0)=1$ and $\Gamma_p \phi_p\le 0$, in $\mathbb{R}^n\times (0,T)$.
\end{lem}
\begin{proof} We set $v=\log \phi_p$ and use Lemma \ref{2.3} and Remark \ref{2.70} to show that
$G_pv\le 0$, in $\mathbb{R}^n\times(0,T).$
(i) Let $2\le p<\infty$. Then
$$v=\log\phi_p= a\left[ (t+1)^{\alpha (p-1)+1}-1\right]+ b (t+1)^\alpha r^{p/(p-1)}.$$
A simple calculation shows that $\D_p r^{p/(p-1)}= n[ p/(p-1)]^{p-1},\;0\le r<\infty,\;2<p<\infty$ and $\Delta r^2=2n,\;0\le r<\infty$, see Remark \ref{2.70}.
Using the above, calculating in $0< r<\infty$ and $0< t< T$, and using the definitions of $a$ and $b$, we see that
\begin{eqnarray*}
&&\Delta_pv+(p-1)|Dv|^p-(p-1)v_t\\
&&=n\left(\frac{p}{p-1}\right)^{p-1}b^{p-1}(t+1)^{\alpha(p-1)} +(p-1) \left(\frac{p}{p-1}\right)^p b^p (t+1)^{\alpha p} r^{p/(p-1)}\\
&&\qquad\qquad\qquad\qquad\qquad -(p-1) \left\{ a (\alpha(p-1)+1) (t+1)^{\alpha(p-1)}+ \alpha b(t+1)^{\alpha-1} r^{p/(p-1)} \right\}\\
&&=\left[ n\left( \frac{p }{p-1}\right)^{p-1}b^{p-1} -a(p-1)(\alpha(p-1)+1)\right] (t+1)^{\alpha(p-1)}\\
&&\qquad\qquad\qquad\qquad\qquad+ (p-1) r^{p/(p-1)}\left[ \left(\frac{p}{p-1}\right)^pb^p(t+1)^{\alpha p}-\alpha b (t+1)^{\alpha-1} \right]\\
&&=b(p-1)r^{p/(p-1)}(t+1)^{\alpha-1} \left(\left(\frac{p}{p-1}\right)^p b^{p-1}(t+1)^{\alpha (p-1)+1}-\alpha\right)\le 0.
\end{eqnarray*}
Thus, $\phi_p$ is a super-solution in $0\le r<\infty$ and $0<t<T$.
We now show part (ii). Set
$$v=\log\phi_\infty= a [ (t+1)^{3\alpha+1}-1]+ b (t+1)^{\alpha}r^{4/3}.$$
Noting that $\D_{\infty} r^{4/3}=4^3/3^4$, in $0\le r<\infty$, and calculating,
\begin{eqnarray*}
\D_{\infty} v+|Dv|^4-3v_t&=&\frac{4^3}{3^4}b^3(t+1)^{3\alpha} +b^4 \left(\frac{4}{3}\right)^4(t+1)^{4\alpha}r^{4/3}\\
&&\qquad\qquad\qquad\quad\qquad\qquad -3\left\{a(3\alpha+1) (t+1)^{3\alpha}+b \alpha r^{4/3}(t+1)^{\alpha-1} \right\}\\
&&\le (t+1)^{3\alpha} \left( \frac{4^3b^3}{3^4}-3a(3\alpha+1) \right)\\
&&\qquad\qquad\qquad\qquad+b r^{4/3}(t+1)^{\alpha-1}\left( b^3\left( \frac{4}{3}\right)^4(T+1)^{3\alpha+1}-3\alpha\right)\le 0,
\end{eqnarray*}
where we have used the definitions of $a$ and $b$. The rest of the proof is similar to part (i).
\end{proof}
We now extend existence results in \cite{BM1, BM2} to cylindrical domains $\Omega\times (0,\infty)$. Set $\Omega_\infty=\Omega\times(0,\infty)$ and $P_\infty$ its parabolic boundary.
\begin{lem}\label{2.8} Let $\Omega\subset \mathbb{R}^n$ be bounded and $h\in C(P_\infty)$ with
$0<\inf_{P_\infty}h\le \sup_{P_\infty}h<\infty$. Suppose that, for any $T>0$,
$\Gamma_p v=0,\;\mbox{in $\Omega_T$ and $v=h$ on $P_T$,}\;2\le p\le \infty,$
has a unique positive solution. Then the problem
\eqRef{2.80}
\Gamma_p v=0,\;\;\mbox{in $\Omega_\infty$ and $v=h$ on $P_\infty$,}
\end{equation}
has a unique positive solution $u\in C(\Omega_\infty\cup P_\infty).$ Moreover,
$\inf_{P_\infty} h\le u \le \sup_{P_\infty}h.$
In particular, existence holds for any $\Omega$, if $p>n$, and for any $\Omega$ satisfying a uniform outer ball condition, if $2\le p\le n.$
\end{lem}
\begin{proof}
For any $T>0$, let $u_T$ be the unique positive solution of $\Delta_p u_T-(p-1) u_T^{p-2} (u_T)_t=0,\;\mbox{in $\Omega_T$, with $u_T=h$ on $P_T$.}$
By Theorem \ref{2.2}, $u_{T_1}=u_{T}$ in $\Omega_{T_1}$, for any $T>T_1>0$. Define
$u=\lim_{T\rightarrow \infty} u_T.$
Since $u=u_T$ in $\Omega_T$, for every $T>0$, $u$ solves the problem in $\Omega_\infty$. To show uniqueness, if $v$ is any other positive solution, then
$v=u_T=u$ in $\Omega_T$, for every $T>0$, by Theorem \ref{2.2}. The maximum principle in Lemma \ref{2.1} shows that $\inf_{P_\infty}h\le \inf_{P_T}h\le u_T\le \sup_{P_T}h\le \sup_{P_\infty} h.$
\end{proof}
\begin{rem}\label{2.9} We record the following kernel functions of $\Gamma_p$, for $2\le p\le \infty$. Define the functions $K_p$, in $\mathbb{R}^n\times(0,\infty)$, as follows.
\begin{eqnarray*}
&&K_p(x,t)=t^{-n/p(p-1)}\exp\left(- \left(\frac{p-1}{p^{p/(p-1)}}\right)\left(\frac{|x|^{p}}{t}\right)^{1/(p-1)} \right),\;\;\;2\le p<\infty,\\
&&K_\infty(x,t)=t^{-1/12} \exp\left( - \left(\frac{3}{4} \right)^{4/3}\left( \frac{|x|^4}{t} \right)^{1/3} \right),\;\;\;\;\;p=\infty.
\end{eqnarray*}
For $p=2$, $K_2(x,t)=t^{-n/2}\exp( -|x|^2/(4t))$ is the well-known heat kernel for the heat equation.
Also,
$$K_p(x,0^+)=0,\;x\ne o,\;\;\lim_{t\rightarrow 0^+}K_p(o, t)=\infty,\;\;\mbox{and}\;\;\lim_{|x|+t\rightarrow \infty} K_p(x,t)=0.$$
We omit the proof that $\Gamma_p K_p=0$. $\Box$
\end{rem}
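For $p=2$, the assertion $\Gamma_2 K_2=0$ reduces to the heat equation $\Delta K_2=(K_2)_t$ and is easy to check symbolically; the dimension $n=2$ below is an arbitrary sample choice.

```python
import sympy as sp

# Check that K_2(x,t) = t^(-n/2) exp(-|x|^2/(4t)) solves Delta K - K_t = 0.
# Sample dimension n = 2 (assumed for illustration).
x1, x2, t = sp.symbols('x1 x2 t', positive=True)
n = 2
K = t**sp.Rational(-n, 2)*sp.exp(-(x1**2 + x2**2)/(4*t))
residual = sp.diff(K, x1, 2) + sp.diff(K, x2, 2) - sp.diff(K, t)
print(sp.simplify(residual))  # expect 0
```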
\section{Proofs of Theorems \ref{1.6} and \ref{1.61}}
{\bf Proof of Theorem \ref{1.6}.} The proof is a somewhat simplified version of the one in \cite{BM1}, and we use Remark \ref{2.20}. Let $\lambda_{\Om}$ be the first eigenvalue of
$\D_p$ on $\Omega$, see Section 5.
Define $M=\sup_{P_\infty} h$. By Lemma \ref{2.8}, we obtain $u\ge 0$ and $\sup_{\Omega_\infty}u\le M.$
For every $t>0$, set
\eqRef{3.1}
\mu(t)=\sup_{\overline{\Omega}} u(x,t)\;\;\;\mbox{and}\;\;\;\nu(t)=\sup_{\partial\Omega} g(x,t).
\end{equation}
{\bf Part (i).} We observe by Remarks \ref{6.13} and \ref{6.15} that
for a fixed $0<\lambda<\lambda_\Omega$, one can find a solution $\psi_\lambda\in C(\overline{\Omega}),\;\psi_\lambda>0,$ such that
\eqRef{3.3}
\D_p \psi_\lambda+\lambda \psi_\lambda^{p-1}=0,\;\;\mbox{in $\Omega$, and $\psi_\lambda=M$, on $\partial\Omega$.}
\end{equation}
By Remark \ref{6.10},
\eqRef{3.4}
\psi_\lambda>M,\;\;\mbox{in $\Omega$, and}\;\;\psi_\lambda(x)\ge u(x,T_1),\;\forall x\in \overline{\Omega}.
\end{equation}
We now construct an auxiliary function for the proof. Let $0<S<T<\infty$; define
\begin{eqnarray}\label{3.40}
\beta(t,T)=\exp\left( \frac{\lambda (T-t)}{p-1} \right),\;\;\forall\;S\le t\le T.
\end{eqnarray}
In the rest of Part (i), we always choose $S$ and $T$ such that $\beta(S,T)\ge 2.$ Next, define the function
\eqRef{3.5}
F(t; S, T)=\frac{1}{2}\left[1+ \frac{\beta(t,T)-1 }{ \beta(S,T)-1 }\right]=\frac{1}{2}\left[\frac{\beta(S,T)-2}{\beta(S,T)-1}+ \frac{\beta(t,T)}{ \beta(S,T)-1}\right],\;\;\forall t\in [S, T].
\end{equation}
Using (\ref{3.40}) and (\ref{3.5}), we get $\forall t\in [S,T]$,
\eqRef{3.6}
F_t=-\frac{\lambda \beta(t,T)}{2(p-1) (\beta(S,T)-1)},\;\;F(S; S,T)=1,\;\;F(T;S,T)=\frac{1}{2},\;\;\mbox{and}\;\;\frac{1}{2}\le F(t; S,T)\le 1.
\end{equation}
Let $\phi=\psi_\lambda(x) F(t; S, T),$ $\forall(x,t)\in \Omega\times(S,T)$, where $\psi_\lambda$ is as in (\ref{3.3}). Using Lemma \ref{2.5}, (\ref{3.5}) and (\ref{3.6}), we get
\begin{eqnarray}\label{3.7}
\Gamma_p\phi&=&F^{p-1}\D_p \psi_\lambda-(p-1)\psi_\lambda^{p-1}F^{p-2} F_t=-\lambda F^{p-2} \psi_\lambda^{p-1}
\left(F{+}\frac{p-1}{\lambda}F_t\right)\nonumber\\
&=&-\lambda F^{p-2} \psi_\lambda^{p-1}\left\{ F- \frac{1}{2}
\left( \frac{\beta(t,T) }{ \beta(S,T)-1 } \right) \right\}\nonumber\\
&=&-\frac{\lambda \psi_\lambda^{p-1} F(t)^{p-2}}{2} \left( \frac{\beta(S,T)-2 }{ \beta(S,T)-1 } \right)\le 0, \;\;\;\;S<t< T,
\end{eqnarray}
where we have used that $\beta(S,T)\ge 2.$
Recalling (\ref{3.1}) and the hypothesis of the theorem, there are $1<T_1<\cdots<T_m<\cdots<\infty$ such that for $m=1,2,\cdots,$
\begin{eqnarray}\label{3.8}
(a)\;\;\beta(T_m, T_{m+1})\ge 2,\;\;\mbox{and}\;\;(b)\;\;0\le \nu(t)\le \frac{M}{2^{m}},\;\forall t\ge T_m,
\end{eqnarray}
see (\ref{3.40}). Note that (\ref{3.8}) (a) implies that $\lim_{m\rightarrow \infty}T_m =\infty$.
For each $m=1,2,\cdots,$ set
\begin{eqnarray}\label{3.9}
&&\mbox{$I_m=\Omega\times(T_m, T_{m+1})$, $J_m$ the parabolic boundary of $I_m$, } \\
&&\eta_m(t)= F(t; T_m, T_{m+1}) ,\;\forall t\in [T_m,\;T_{m+1}], \;\;\mbox{and}\;\;\phi_m(x,t)=\frac{\psi_\lambda(x) \eta_m(t)}{2^{m-1}},\;\forall(x,t)\in \overline{I}_m,\nonumber
\end{eqnarray}
see (\ref{3.5}).
Taking $m=1$, so that $\phi_1(x,t)=\psi_\lambda(x) \eta_1(t),$ and using (\ref{3.3}), (\ref{3.4}), (\ref{3.6}) and (\ref{3.9}),
\eqRef{3.10}
\phi_1(x, T_1)\ge M,\;\forall x\in \overline{\Omega}, \;\;\mbox{and}\;\;\;\frac{M}{2}\le \phi_1(x,t)\le M,\;\;\forall(x,t)\in \partial\Omega\times [T_1,T_2].
\end{equation}
Also, by (\ref{3.4}) and (\ref{3.8})(b),
\eqRef{3.101}
u(x,T_1)\le \phi_1(x,T_1),\;\forall x\in \overline{\Omega},\;\;\mbox{and}\;\;u(x,t)\le \nu(t)\le \frac{M}{2},\;\forall(x,t)\in \partial\Omega\times[T_1, \infty).
\end{equation}
Thus, $u\le \phi_1$, on $J_1$, and as $\Gamma_p \phi_1\le 0$, in $I_1$, (see (\ref{3.7})), Theorem \ref{2.2} and Remark \ref{2.20} imply that $u\le \phi_1(x,t)$ in $I_1.$
We claim that $u\le \phi_1(x,t)$, in $\overline{I}_1$ ($u$ is upper semi-continuous). Take $\hat{T}_2>T_2$ and near $T_2$. The function
$\hat{\phi}_1(x,t)=\psi_\lambda(x) F(t; T_1, \hat{T}_2)$ (see (\ref{3.6}) and (\ref{3.9})) satisfies the conclusions in (\ref{3.10}) and (\ref{3.101}) if we replace $\phi_1$ by $\hat{\phi}_1$.
Thus, $u\le \hat{\phi}_1$, in $\Omega\times(T_1,\hat{T}_2)$ and the conclusion that $u\le \phi_1$, in $\overline{I}_1$, now follows by letting $\hat{T}_2\rightarrow T_2$.
Clearly,
\eqRef{3.102}
u(x,T_2)\le \phi_1(x,T_2)=\psi_\lambda(x)\eta_1(T_2)=\frac{\psi_\lambda(x)}{2},\;\forall x\in \overline{\Omega},
\end{equation}
where we have used (\ref{3.6}). Moreover, since $F$ is decreasing in $t$ (see (\ref{3.6})), recalling (\ref{3.1}), we have
$$\mu(t)\le \sup_{\Omega}\psi_\lambda,\;\forall t\in [T_1,\;T_2],\;\;\mbox{and}\;\;\mu(T_2)\le \frac{\sup_{\Omega} \psi_\lambda}{2}.$$
We now use induction and suppose that for some $m=1,2\cdots$,
\eqRef{3.11}
u(x,T_m)\le \frac{\psi_\lambda(x)}{2^{m-1}},\;\;\forall x\in \overline{\Omega},
\end{equation}
(note that (\ref{3.11}) holds for $m=1,2$, see (\ref{3.4}) and (\ref{3.102})). We will prove that
\eqRef{3.12}
u(x,t)\le \frac{\psi_\lambda(x)}{2^{m-1}},\;\forall(x,t)\in \overline{I}_m, \;\;\;\mbox{and}\;\;\;u(x,T_{m+1})\le \frac{\psi_\lambda(x) } {2^m},
\end{equation}
thus proving part (i) of the theorem.
By (\ref{3.7}) and (\ref{3.9}),
$\Gamma_p\phi_m\le 0, \;\mbox{in $I_m$}.$ By (\ref{3.6}), (\ref{3.9}) and (\ref{3.11}),
\begin{eqnarray*}
&&u(x,T_m)\le \frac{\psi_\lambda(x)\eta_m(T_m)}{2^{m-1}}=\phi_m(x,T_m),\;\forall x\in \overline{\Omega},\;\;\mbox{and}\\
&&0\le g(x,t)\le \frac{M}{2^{m}}\le \phi_m(x,t),\;\forall(x,t)\in \partial\Omega\times[T_m, T_{m+1}).
\end{eqnarray*}
Thus, $\phi_m\ge u$ on $J_m$. Using Theorem \ref{2.2} and Remark \ref{2.20}, $u\le \phi_m$ in
$\overline{I}_m$, and using (\ref{3.6})
$$u(x,t)\le \frac{\psi_\lambda(x) \eta_m(t)}{2^{m-1}}\le \frac{\psi_\lambda(x)}{2^{m-1}},\;\forall(x,t)\in \overline{I}_m,\;\;\mbox{and}\;\;u(x,T_{m+1})\le \frac{\psi_\lambda(x)}{2^m},\;\forall x\in \overline{\Omega}.$$
Thus, (\ref{3.12}) holds and part (i) is proven.
{\bf Part (ii).} Let $g(x,t)=0$, $\forall(x,t)\in \partial\Omega\times[T_0,\infty)$, for some $T_0>0$.
We make some elementary observations. From (\ref{3.1}) one sees that
$$M=\sup_{P_\infty} h=\max( \sup_{\overline{\Omega}} f, \; \sup_{\partial\Omega\times[0,T_0]} g(x,t)).$$
Lemma \ref{2.1} implies that $0\le u(x,t)\le M,\;\forall (x,t)\in \Omega_{T},$ for any $T>0$.
We claim that
$\mu(t)$ is decreasing in $[T_0,\infty)$. Let $T_0\le T<\hat{T}<\infty$. Since $g=0$ on
$\partial\Omega\times[T,\hat{T})$, by Lemma \ref{2.1}, $\sup_{\Omega\times(T,\hat{T})}u\le \mu(T)$. Since $u\in usc(\Omega_\infty\cup P_\infty),\;u\ge 0$, it follows
that $\mu(t)\le \mu(T),\;T<t<\hat{T}$. Combining this with Part (i), we obtain
\eqRef{3.13}
\mbox{$\mu(t)$ is decreasing, in $t\ge T_0$, and $\lim_{t\rightarrow \infty} \left(\sup_{\overline{\Omega}}u(x,t)\right)=0.$}
\end{equation}
Next, let $T>T_0$ be large enough that $\mu(T)$ is small and $\mu(T)>0$ (if $\mu(T)=0$, Part (ii) holds by Lemma \ref{2.1} and (\ref{3.13})).
By Remarks \ref{6.13} and \ref{6.15}, for any $0<\lambda<\lambda_\Omega$, there is a $\psi_\lambda \in C(\overline{\Omega}),\;\psi_\lambda>0,$ that solves
$$\D_p \psi_\lambda+\lambda |\psi_\lambda|^{p-2}\psi_\lambda=0,\;\;\mbox{in $\Omega$, with $\psi_\lambda=\mu(T)$ on $\partial\Omega$.}
$$
By Remark \ref{6.10}, $\psi_\lambda\ge \mu(T)$, in $\overline{\Omega},$ and $\psi_\lambda(x)\ge u(x,T),\;\forall x\in \overline{\Omega}.$
Call $D_T=\Omega\times(T,\infty)$ and $Q_T$ its parabolic boundary. We fix $\lambda<\lambda_{\Om}$, close to $\lambda_{\Om}$, in what follows. Define
$$L(x,t)=\psi_\lambda(x) \exp\left(-\lambda (t-T)/(p-1)\right),\;\;\mbox{in $\overline{D}_T$,}$$
and note that
$$L(x,T)\ge u(x,T),\;\forall x\in \overline{\Omega},\;\; L(x,t)>0,\;\forall(x,t)\in \partial\Omega\times[T,\infty).$$
By Lemma \ref{2.5}, $\Gamma_p L=0$, in $D_T$. Since $L\ge u$ on $Q_T$, Theorem \ref{2.2} and Remark \ref{2.20}
imply that $L\ge u$ in $\overline{D}_T,$ and
$$\lim_{t\rightarrow \infty} \frac{\log(\sup_\Omega u(x,t) )}{t}\le \lim_{t\rightarrow \infty} \frac{\log(\sup_\Omega L(x,t) )}{t}= -\frac{\lambda}{p-1}.$$
Choosing $\lambda$ arbitrarily close to $\lambda_\Omega$, we conclude
$$\lim_{t\rightarrow \infty} \frac{\log(\sup_\Omega u(x,t) ) }{t}\le -\frac{\lambda_{\Om}}{p-1}.\;\;\;\;\;\;\;\Box$$
\begin{rem}\label{3.14} Let $2\le p\le \infty$.
The requirement that $u$ be a sub-solution is necessary in Theorem \ref{1.6}. To see this, we take
$\Omega=B_R(o)$, and
$$\psi(x,t)=R^2-r^2,\;\;(x,t)\in \overline{B}_R(o)\times(0,\infty).$$
One sees that $\Gamma_p \psi<0$, except perhaps at $r=0$. Clearly, $\psi$ does not decay in $t$. $\Box$
\end{rem}
\begin{rem}\label{3.15} The decay rate in Part (i) of Theorem \ref{1.6} may depend on the decay rate along $\partial\Omega\times(0,\infty)$. Take $\ell(t):\mathbb{R}^+\rightarrow \mathbb{R}^+\cup\{0\}$,
$C^2$ in $t$ and decreasing to $0$, as $t\rightarrow \infty$.
Part (ii) shows that if $u=0$ on $\partial\Omega\times(0,\infty)$ then
the slowest rate of decay is $e^{-\lambda_{\Om} t/(p-1)}$.
Let $\Omega=B_R(o)$; set $u(x,t)=\psi(x)\exp(-\lambda_{\Om} t/(p-1))$, where $\psi>0$ is a first eigenfunction of $\D_p$ on $B_R(o)$, see Remark \ref{6.19}. By Lemma \ref{2.5}, $\Gamma_pu=0$, in $B_R(o)\times (0,\infty)$. The decay rate in Theorem \ref{1.6} is attained.
\quad $\Box$
\end{rem}
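For completeness, the claim $\Gamma_p u=0$ in Remark \ref{3.15} can also be checked by hand from the identity for $G_p$ used in the proof of Theorem \ref{1.6}, together with the eigenfunction equation $\D_p\psi=-\lambda_{\Om}\psi^{p-1}$ (we assume, as in Remark \ref{6.19}, that $\psi>0$). This is only a sketch; set $v=\log u$:
\begin{eqnarray*}
v&=&\log\psi(x)-\frac{\lambda_{\Om}\, t}{p-1},\qquad v_t=-\frac{\lambda_{\Om}}{p-1},\\
G_p v&=&\Delta_p v+(p-1)|Dv|^p-(p-1)v_t\;=\;u^{1-p}\,\D_p u+\lambda_{\Om}\\
&=&\frac{e^{-\lambda_{\Om} t}\,\D_p\psi}{\psi^{p-1}e^{-\lambda_{\Om} t}}+\lambda_{\Om}\;=\;-\lambda_{\Om}+\lambda_{\Om}\;=\;0,
\end{eqnarray*}
where we used $\D_p(e^v)=e^{(p-1)v}\left(\Delta_p v+(p-1)|Dv|^p\right)$ and the $(p-1)$-homogeneity $\D_p(c\,\psi)=c^{p-1}\D_p\psi$, for constants $c>0$.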
Before presenting the proof of Theorem \ref{1.61} we make a remark.
\begin{rem}\label{3.17} Let $0\le S<T$ and $O=\Omega\times(S,T)$.
We look at three possibilities. Let $u\in C(\Omega_\infty),\;u>0$ solves
$\Gamma_pu=0,\;u(x,0)=f(x),\;\forall x\in \overline{\Omega},$ and $g(x,t)=1,\;\forall(x,t)\in \partial\Omega\times[0,\infty).$ Set $\mu(t)=\sup_{\overline{\Omega}} u(x,t)$ and $m(t)=\inf_{\overline{\Omega}} u(x,t)$. We apply Lemma \ref{2.1}
{\bf (a) $\inf_\Omega f=1$:} For every $t>0$, $m(t)=1$ and $1\le \mu(t)\le \sup_\Omega f$. Then $u(x,t)\le \mu(S)$, in $O$, and $\mu(T)\le \mu(S)$. Hence, $\mu(t)$ is decreasing in $t$.
{\bf (b) $\sup_\Omega f= 1$:} Clearly, $\mu(t)=1$ and $m(t)\le 1$, for every $t>0$.
Clearly, $m(t)$ is increasing in $t$, since $u(x,t)\ge m(S)$, in $O$, and $m(T)\ge m(S)$.
{\bf (c) $\inf_\Omega f\le 1\le \sup_\Omega f$:} Then $m(t)\le 1\le \mu(t),\;\forall t>0$. Arguing as in (a) and (b) we see that $m(t)$ is increasing and $\mu(t)$ is decreasing in $t$. \qquad $\Box$
\end{rem}
{\bf Proof of Theorem \ref{1.61}.} Let $u>0$ be a solution, as stated in Theorem \ref{1.61}. We assume that
$0<\inf_\Omega f< 1< \sup_\Omega f,$ and set
\eqRef{3.18}
m=\inf_\Omega f\;\;\mbox{and}\;\;M=\sup_\Omega f.
\end{equation}
Let $B_R(z)$ be the out-ball of $\Omega$, where $z\in \mathbb{R}^n$; define $r=|x-z|$. Part (i) addresses the case $2\le p<\infty$, and Part (ii) discusses $p=\infty$. Recall
Remark \ref{2.70}.
{\bf Part (i): $2\le p<\infty$.} Call
\eqRef{3.19}
A=A(p,n)=n\left(p/(p-1)\right)^{p-1},\;\; B=(p-1)\left(p/(p-1)\right)^p R^{p/(p-1)}.
\end{equation}
{\bf Upper Bound.} Let $T_0>0$, to be determined later. Recalling (\ref{3.19}), take
\eqRef{3.20}
\phi(x,t)=\exp\left[ a \left( \frac{R^{p/(p-1)}-r^{p/(p-1)}+b}{(1+t)^{\alpha}} \right) \right],\;\;\;0\le r\le R,
\end{equation}
where
\begin{eqnarray}\label{3.201}
&& (i)\;\;0<\alpha\le \frac{1}{p-2},\;\;\mbox{for $2<p<\infty$, and}\;\;(ii)\;\;0<\alpha<\infty,\;\;\mbox{for $p=2$,}\nonumber\\
&& ab=(1+T_0)^{\alpha}\log M\;\;\;\mbox{and}\;\;\;a=\frac{A(p-1) (1+T_0)^{\alpha}}{pB}.
\end{eqnarray}
To make the calculations easier, we use Lemma \ref{2.3} and work with $v=\log \phi$.
Recalling that $G_p v=\Delta_p v+(p-1) |Dv|^p-(p-1) v_t ,$ using (\ref{3.19}), the value of $ab$ (see (\ref{3.201})) and setting $C=\alpha(p-1)$, we get in $0\le r\le R,$ and $t>0$,
\begin{eqnarray}\label{3.21}
G_p v&\le&-\frac{A a^{p-1}}{(1+t)^{\alpha(p-1)}}+\frac{B a^p}{(1+t)^{\alpha p }} + \frac{\alpha(p-1)a(R^{p/(p-1)}-r^{p/(p-1)}+b)}{(1+t)^{\alpha+1}} \nonumber\\
&\le & C\left(\frac{aR^{p/(p-1)}+(1+T_0)^{\alpha}\log M}{(1+t)^{\alpha+1}}\right) +\frac{B a^p}{(1+t)^{\alpha p}}
-\frac{A a^{p-1}}{(1+t)^{\alpha(p-1)}}\nonumber\\
&=&\frac{1}{(1+t)^{\alpha(p-1)}}\left[ C\left(\frac{aR^{p/(p-1)}+(1+T_0)^{\alpha}\log M}{(1+t)^{1-\alpha(p-2)}} \right)+a^{p-1}\left( \frac{B a}{(1+t)^{\alpha}}
-A \right) \right].
\end{eqnarray}
Using (\ref{3.201}) and calling $K=K(\alpha,p,n,R)>0$ (see below), we calculate in $t\ge T_0$,
\begin{eqnarray*}
a^{p-1}\left( \frac{B a}{(1+t)^{\alpha}}-A \right)&=&a^{p-1}A \left( \frac{p-1}{p} \left(\frac{1+T_0}{1+t}\right)^{\alpha}
-1\right)
\le a^{p-1} A\left( \frac{p-1}{p}-1\right)\\
&=&-\frac{a^{p-1}A}{p}=-K(1+T_0)^{\alpha(p-1)}.
\end{eqnarray*}
Using the above in (\ref{3.21}) together with the value of $a$ in (\ref{3.201}), we obtain in $t\ge T_0$,
\begin{eqnarray}\label{3.22}
G_pv&\le& \frac{1}{(1+t)^{\alpha(p-1)}}\left[C\left( \frac{aR^{p/(p-1)}+(1+T_0)^\alpha\log M }{(1+t)^{1-\alpha(p-2)}}\right) -K(1+T_0)^{\alpha(p-1)} \right] \nonumber\\
&\le&\frac{1}{(1+t)^{\alpha(p-1)}}\left( \frac{\bar{K}(1+T_0)^{\alpha}}{(1+T_0)^{1-\alpha(p-2)}} -K(1+T_0)^{\alpha(p-1)}\right).
\end{eqnarray}
frac{N^{RuZr}(y^{(0)})}{N^{ZrRu}(y^{(0)})}$
as a
function of the normalized rapidity $y^{(0)}=y/y_{proj}$ of free protons
(left panels) and free neutrons (right panels) for the $NL\rho$
and $NL\rho\delta$ models, for central ($b\leq 2~fm$)
$Ru(Zr)+Zr(Ru)$-collisions
at $0.4$ (top) and $1.528~AGeV$ (bottom) beam energies.
The experimental data are taken from the $FOPI$ collaboration
\cite{HongPRC66}. The ratio in the initial system is shown by the horizontal
dotted line.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.30]{ch8_fig8.eps}
\caption{\label{reldyn8}
Proton imbalance ratio as a
function of the normalized rapidity for central ($b\leq 2~fm$)
$Ru(Zr)+Zr(Ru)$-collisions
at $0.4~AGeV$ as in the top-left part of Fig.\ref{reldyn7}.
A curve is added (empty squares) corresponding to a $NL\rho$ calculation
with reduced nucleon-nucleon cross sections (half the free values,
$\sigma_{NN}(E) = \frac{1}{2} \sigma_{free}$).
}
\end{center}
\end{figure}
In Fig. \ref{reldyn7}
we show the rapidity dependence of
the imbalance ratio
for free protons ($R(p)$) and
free neutrons ($R(n)$) at the two energies
$0.4$ and $1.528~AGeV$.
The imbalance ratio approaches
unity at mid-rapidity for all particle types due to
symmetry
in the mid-rapidity region, as expected in central collisions.
Going from target- to mid-rapidity the ratio nicely rises for protons,
and decreases for neutrons, a good signature of isospin transparency.
The effect is more evident for the $NL\rho\delta$ interaction.
The observed difference between
the two models can be understood
since, within the $NL\rho\delta$ picture,
neutrons experience
a more repulsive iso-vector mean field than protons, particularly at high
densities, and consequently there is much less nucleon stopping in the
colliding system.
However the influence of the iso-$EOS$ on the imbalance ratio of protons
and neutrons is
not very large. At low intermediate energies ($0.4~AGeV$)
one deals with moderate compressions of $\rho_{B} < 2 \cdot \rho_{sat}$
where the differences in the iso-vector $EOS$ arising from
the $\delta$ meson are small.
At higher
incident energies ($1.528~AGeV$, Fig.\ref{reldyn7}-bottom)
where a larger effect is expected,
we actually see a slightly higher
isospin effect on the imbalance ratios, at least for protons.
In fact, with increasing beam energy, the
opening of inelastic channels via the
production and decay of $\Delta$ resonances into pions,
as discussed in the last section,
also contributes to the final result.
This interpretation is
confirmed by other studies \cite{BaoPRC67}.
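For orientation, the imbalance ratio itself is a simple double ratio of yields. The following Python sketch (with purely hypothetical binned proton yields, not the $FOPI$ data or our transport output) illustrates its construction and its normalization to unity at mid-rapidity:

```python
import numpy as np

# Hypothetical binned free-proton yields N(y0) for the two mirror systems;
# in a real analysis these come from the transport simulation or from data.
y0 = np.linspace(-1.0, 1.0, 5)                      # normalized rapidity y/y_proj
N_RuZr = np.array([ 80., 100., 120., 140., 160.])   # Ru projectile on Zr target
N_ZrRu = np.array([160., 140., 120., 100.,  80.])   # Zr projectile on Ru target

# Imbalance ratio R(y0) = N^{RuZr}(y0) / N^{ZrRu}(y0); complete stopping
# (full isospin mixing) would give R = 1 at all rapidities.
R = N_RuZr / N_ZrRu
print(R)  # equals 1 at mid-rapidity by construction of this toy input
```

Complete stopping drives $R\to 1$ at all rapidities, while transparency keeps $R$ below (above) unity at target (projectile) rapidity for the toy input above.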
Reduced in-medium Nucleon-Nucleon ($NN$) cross sections, in particular
$\sigma_{np}$, will also increase the isospin transparency.
This possibility can be investigated by considering a factor
of two reduction, $\sigma = \frac{1}{2} \sigma_{free}$, of the free
$NN$-cross section values used before \cite{GaitanosPLB595}. We note
that such a reduction
represents rather an upper limit of in-medium effects as compared to
recent microscopic Dirac-Brueckner estimations \cite{FuchsPRC64}.
In Fig.\ref{reldyn8} we show the results for the proton
imbalance ratios at $0.4~AGeV$ with $0.5\sigma_{free}$ in the $NL\rho$
case. We see an overall slightly increased transparency but not enough
to reproduce the trend of the experimental values in the target rapidity
region. On the other hand, the reduction of the $NN$ cross sections,
and in particular of $\sigma_{np}$, leads to too large a
transparency in the proton rapidity distributions for central collisions
of the charge symmetric $Ru~+~Ru$ case, as is seen in our
simulations and was already remarked in previous $IQMD$ calculations
\cite{HongPRC66}. Thus we can exclude further reductions
of the $NN$ cross sections.
Since the calculations are performed with the same $EOS$ for the
symmetric nuclear matter, the same compressibility and momentum dependence,
the observed transparency appears to be uniquely
related to $isovector-EOS$ effects, i.e. to the isospin dependence
of the nucleon self-energies at high baryon densities.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.30]{ch8_fig9.eps}
\caption{Imbalance ratio of $t$ to ${}^{3}He$ for the
same collision as in Fig.~\ref{reldyn7} for $0.4~AGeV$ beam energy.
}
\label{reldyn9}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.30]{ch8_fig10.eps}
\caption{ Imbalance ratio for the ratio of negative
to positive charged pions for the
same collision as in Fig.~\ref{reldyn7} for $0.4~AGeV$ beam energy.
}
\label{reldyn10}
\end{center}
\end{figure}
The fact that protons {\it and} neutrons
exhibit an {\it opposite behavior} for the imbalance
ratios at target
rapidity suggests that the detection of the imbalance observable
$R(t/{}^{3}He)$, i.e. the double ratio of the
$t/{}^{3}He$ yields, should reveal a larger sensitivity.
The correct and practical
method for properly describing light fragment formation is still
controversial. We report here on results obtained in ref.
\cite{GaitanosPLB595} with the
simplest algorithm,
namely a phase space coalescence model \cite{GaitanosEPJA12}.
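The idea of phase-space coalescence is that nucleons close to each other in both coordinate and momentum space at freeze-out are bound into a fragment. The following is only a minimal Python sketch of such a criterion, with purely illustrative cutoffs $r_0$, $p_0$ (the actual algorithm and parameters of \cite{GaitanosEPJA12} are not reproduced here):

```python
import numpy as np

def coalesce(r, p, r0=3.0, p0=0.3):
    """Group nucleons into clusters: i and j belong to the same cluster
    when |r_i - r_j| < r0 (fm) and |p_i - p_j| < p0 (GeV/c).
    Union-find over all pairs; returns a list of index lists."""
    n = len(r)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if (np.linalg.norm(r[i] - r[j]) < r0
                    and np.linalg.norm(p[i] - p[j]) < p0):
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Two nucleons close in phase space plus one spectator far away:
r = np.array([[0., 0., 0.], [1., 0., 0.], [50., 0., 0.]])
p = np.array([[0., 0., 0.], [0.1, 0., 0.], [0., 0., 0.]])
clusters = coalesce(r, p)
```

A $t$ ($^{3}He$) candidate is then a three-nucleon cluster with charge one (two); the imbalance ratio $R(t/{}^{3}He)$ follows from counting such clusters in the two mirror systems.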
Fig. \ref{reldyn9} shows the imbalance ratio $R(t/{}^{3}He)$
for central
collisions at $0.4~AGeV$ incident energy. The isospin effect
from the
inclusion of the iso-vector, scalar $\delta$ meson in the
$NL\rho\delta$ model
is found to be very large near target rapidities. We note that the
effect can hardly be
seen from the separate imbalance ratios for protons and neutrons
at the same
rapidity (see Fig. \ref{reldyn7}, top panels), apart from the difficulties
of neutron detection.
It would therefore be of great interest to measure
this quantity directly in experiment.
Finally, another sensitive observable should be, from the
discussion in the last Section,
the imbalance
ratio of charged pions
$R(\pi^{-}/\pi^{+})$ (Fig. \ref{reldyn10}).
At variance with the previous results for neutrons and light isobars,
this ratio is reduced at target rapidity with the $NL\rho\delta$ model.
This effect is consistent with our understanding of the
$\pi^{-}/\pi^{+}$ ratios.
Pions are produced from the decay of $\Delta$ resonances formed
during the high density phase, see Fig.\ref{reldyn5}.
The $\pi^{-}$ abundance is then linked to the neutron-excess of the
high density matter,
as discussed in the previous Section. We recall that
the contribution
of the $\delta$
meson leads to a more repulsive field for neutrons at supra-normal densities
and consequently to less neutron collisions and finally to
a smaller $\pi^{-}/\pi^{+}$ ratio.
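The link between the $\pi^{-}$ yield and the neutron excess can be made semi-quantitative in the simple first-chance collision picture, which is not the full transport result but a useful baseline: counting the isospin Clebsch-Gordan weights for $\Delta$ production and decay gives the often-quoted estimate
\begin{equation*}
\frac{\pi^{-}}{\pi^{+}}\;\simeq\;\frac{5N^2+NZ}{5Z^2+NZ},
\end{equation*}
with $N$ and $Z$ the neutron and proton numbers of the participant matter. Dynamical effects, in particular the iso-vector mean field discussed here and pion rescattering, modify this naive counting.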
We have analyzed the isospin
transparency in relativistic collisions.
We have observed that this observable is sensitive to the microscopic Lorentz
structure of the symmetry term. Effective interactions whose symmetry energies
are not much different at 2-3 times normal density $\rho_{sat}$
can nevertheless predict large differences in the $isospin-transparencies$, depending on
the relative contribution of the various charged vector and scalar fields.
Intermediate energy heavy-ion collisions
with radioactive beams can give information on the symmetry energy
at high baryon density and on its detailed microscopic structure.
We have shown that such experiments provide
a unique tool to investigate the strength of the $\delta-like$ field.
The sensitivity is enhanced relative to the static property $E_{sym}(\rho_B)$
because of the covariant nature of the fields
involved in collision dynamics.
\section{Conclusion and outlook}\label{out}
\markright{Chapter \arabic{section}: out}
Nuclear reactions with neutron-rich (or radioactive) nuclei have opened the
possibility to learn about the behaviour of the nuclear interaction and, in
particular, of the symmetry energy in a wide spectrum of conditions of
density and temperature. This study appears extremely important: indeed
a deeper understanding of the behaviour of neutrons and protons in a
charge asymmetric nuclear medium is essential to test and to extend
our present knowledge of the nuclear interaction and is of highest
importance for the modelling of astrophysical processes, like supernova
explosions or neutron stars.
In the
context of nuclear reactions, one has to identify the most sensitive
observables that may provide information on the behaviour of the symmetry
energy in several conditions of density and temperature.
This is the main line followed along this Report.
In Section \ref{eos} we discuss
the symmetry energy dependence around normal density
and how this can affect important properties of neutron-rich nuclei, such as
compressibility (and monopole frequency), saturation density and neutron
skin.
The behaviour of asymmetric matter at low density has been investigated in
Section \ref{rpa}. We discuss in
particular the phase diagram of asymmetric matter
and the relevant features of instabilities.
This subject appears important in connection to the possibility to observe a
liquid-gas phase transition in violent heavy ion reactions, where, after the
initial collisional shock, low density regions can be easily reached during
the expansion phase. New important features, such as the isospin distillation
effect, are predicted and some experimental evidences have already
appeared along this direction, though a more careful analysis is still needed
to disentangle the distillation from other possible contributions,
such as pre-equilibrium effects, and to really prove the mechanism driving
the fragmentation process.
In Section \ref{fastflows} we have focused on features of the early stage of
the reaction dynamics between neutron-rich nuclei.
Pre-equilibrium emission and collective flows appear particularly sensitive
also to the momentum-dependent part of the interaction.
In asymmetric matter a splitting of neutron and proton effective
masses is observed. The sign of the splitting is quite controversial, since
the behaviour of neutron and proton optical potential at large energy
has not been experimentally measured yet.
Hence it appears very important to try to extract information on this
fundamental question from nuclear collisions, where one can use
probes, such as pre-equilibrium particles, particularly sensitive to
the high density phase, where also high momenta are reached.
In Section \ref{fermi}, we have explored several
fragmentation mechanisms, occurring
at the Fermi energies, in the framework of a stochastic mean-field approach.
We discuss the features of multifragmentation in neutron-rich systems,
and in particular the isotopic content of fragments. This can be connected
to the behaviour (the slope) of the symmetry energy at low density.
For semi-peripheral reactions an interesting neutron enrichment of the
overlap (``neck'') region appears, due to the neutron migration from
higher (spectator regions) to lower (neck) density regions. Also this effect
is nicely connected to the slope of the symmetry energy.
A careful comparison with experimental data would give important indications
on the fragmentation mechanism and on the behaviour of the symmetry energy.
In sections 6-7 we have discussed static properties and
dynamical mechanisms in the context of relativistic approaches
including isovector, both vector and scalar, channels
(the $\rho$ and $\delta$ mesons).
The contribution of these two channels to static or dynamical properties
has a different weight, leading to interesting effects on the frequency
of the stable isovector modes, that is not simply related to the value of the
symmetry energy (see Sections \ref{qhd}, \ref{relin}).
In the low density region, instabilities and the distillation mechanisms are
observed, in close parallelism with the results of the non-relativistic
treatment discussed above.
Finally in Section \ref{reldyn} we discuss isospin
effects in relativistic heavy ion
collisions.
The observable consequences of the inclusion of the $\delta$ meson are
enhanced by the Lorentz structure of the effective nuclear interaction in
the isovector channel.
In particular, effects
on particle production, such as pions or kaons,
light particle collective flows and
isospin diffusion are investigated.
Once again, from the comparison with experimental data,
one could get important information on the structure of the isovector
interaction.
\subsection{Outlook: The Eleven Observables}
In conclusion,
the study of isospin effects on static and dynamical nuclear properties
appears as a very rich and stimulating field.
Several probes can be used to get an insight on the behaviour of the
symmetry energy in different conditions of density and temperature.
A joint effort, from the experimental and theoretical side, should allow us
to extract relevant information on fundamental properties of the nuclear
interaction.
We would like to suggest a selection
of {\it Eleven Observables}, from low to relativistic energies, that
we expect to be particularly sensitive
to the microscopic structure of the {\it in medium }interaction
in the isovector channel, i.e. to the symmetry energy and its
``fine structure'':
\noindent
{\it 1. Competition of Reaction Mechanisms}. Interplay of low-energy
dissipative mechanisms, e.g. fusion (incomplete) vs.
deep-inelastic vs. neck fragmentation: a stiff symmetry term leads to
a more repulsive dynamics.(Sect.\ref{fermi})
\noindent
{\it 2. Energetic particle production}. N/Z of fast nucleon emission:
symmetry repulsion of the
neutron/proton mean field in various density regions. Moreover at the Fermi
energies we expect to see also effects from the $n/p$ splitting of
the effective masses. Even the spectra and yields of hard photons
produced via $(n,p)$ bremsstrahlung should be sensitive to the
density and momentum dependence of the symmetry fields.(Sect.\ref{fastflows})
\noindent
{\it 3. Neutron/Proton correlation functions}. Time-space structure
of the fast particle emission and its relation to the baryon density
of the source. Again combined effects of density and momentum
dependence of the symmetry term are expected (Sect.\ref{fastflows}).
\noindent
{\it 4. E-slope of the Lane Potential}. A systematic study of the
energy dependence of the $(n/p)$ optical potentials on asymmetric nuclei
will shed light on the effective mass splitting, at least around
normal density.(Sects.\ref{eos} and \ref{qhd})
\noindent
{\it 5. Isospin Distillation (Fractionation)}. Isospin content
of the Intermediate Mass Fragments in central collisions. Test of
the symmetry term in dilute matter and connection to
the possibility to observe a liquid-gas phase transition.(Sect.\ref{fermi})
\noindent
{\it 6. Properties of Neck-Fragments}. Mid-rapidity $IMF$ produced
in semicentral collisions: correlations between $N/Z$, alignment and
size. Isospin effects on the reaction dynamics and ``Isospin
Migration''.(Sect.\ref{fermi})
\noindent
{\it 7. Isospin Diffusion}. Measure of charge equilibration
in the ``spectator'' region in semicentral collisions. Test of
the interplay between concentration and density gradients in
the isospin dynamics.(Sect.\ref{fermi})
\noindent
{\it 8. Neutron-Proton Collective Flows}. Together with
light isobar flows. Check of symmetry transport effects.
Test of the momentum dependence (relativistic structure) of the
interaction in the isovector channel.
Measurements also for different $p_t$ selections.
(Sects.\ref{fastflows} and \ref{reldyn})
\noindent
{\it 9. Isospin Transparency}. Measure of isospin properties in
Projectile/Target rapidity regions in central collisions
of ``mirror'' ions at intermediate energies. Similar effects, but due to
the nucleon-nucleon cross section, are expected to be smaller
(Sect.\ref{reldyn}).
\noindent
{\it 10. $\pi^-/\pi^+$ Yields}. Since $\pi^-$ are mostly produced
in $nn$ collisions we can expect a reduction for highly repulsive
symmetry terms at high baryon density. Importance of a $p_t$ selection.
Similar studies for mesons with smaller rescattering effects
would be of great interest. (Sect.\ref{reldyn})
\noindent
{\it 11. Deconfinement Precursors}. Signals of a mixed phase formation
(quark-bubbles) in high baryon density regions reached with asymmetric
$HIC$ at intermediate energies, versus the properties of the
interaction considered (Sect.\ref{reldyn}).
We stress again the richness of the phenomenology and nice opportunities of
getting several cross-checks from completely different experiments.
For the points $3,~5,~6,~7,~8,~9,~10$,
the transport simulations discussed here presently give
some indications of {\it asy-stiff} behaviors, i.e. an increasingly
repulsive density dependence of the symmetry term, but no more
fundamental details. Moreover, all the available data are obtained
with stable beams, i.e. within low asymmetries.
\vskip 1.0cm
{\bf Acknowledgements}
\vskip 0.5cm
This report is deeply related to ideas and results partially reached in
very pleasant and fruitful collaborations with very nice people:
L.W.Chen, G.Fabbri,
G.Ferini, Th.Gaitanos, C.M.Ko, A.B.Larionov, B.A.Li, R.Lionti,
B.Liu, S.Maccarone, F.Matera, M.Zielinska-Pfabe', J.Rizzo, L.Scalone
and H.H.Wolter. We have learnt a lot
from all of them in physics as well as in human relationships.
We are particularly grateful to H.H.Wolter for many stimulating suggestions
during the preparation of the report and for a very accurate reading of the
manuscript.
Finally we warmly thank our experimental colleagues for the richness of
mutual interaction and for the exciting possibility of discussing new
results even before publication. We like to mention in particular the
Medea/Multics and Chimera Collaboration at the LNS-Catania and the
FOPI Collaboration at the GSI-Darmstadt.
\section{Introduction: main results of this paper}\label{IntroductionSEC}
\noindent{\bf Part I : About the study of stationary Navier-Stokes flow with isotropic streamlines on a round sphere}\\
Yoden and Yamada \cite{YY} studied a two dimensional flow on a rotating sphere without any external force.
They investigated the morphology of the stream-function and the vorticity field at several rotation rates.
As the rotation rate increases, the temporal evolution of the flow field changes drastically. In particular, they observed in \cite{YY} that an easterly circumpolar vortex starts to appear in high latitudes, and that the flow field becomes anisotropic in all the latitudes.
However, to the best of our knowledge, it appears that these observations made in \cite{YY} have not been investigated by means of a purely mathematical approach in the research literature (see however a recent article \cite{Cheng} for an analytical study of time-dependent solutions to the Euler equation on a rotating round sphere).
In this paper we investigate the existence of solutions to the following $2$-dimensional stationary Navier-Stokes equation on a rotating sphere, written in the language of differential $1$-forms on the sphere.
\begin{equation}\label{NavierStokesintroduction}
\begin{split}
\nu ((-\triangle)u^* -2 Ric (u^*) ) +\beta \cos (ar) * u^* +[\overline{\nabla}_{u}u]^* + dP & = 0, \\
d^* u^* & = 0 .
\end{split}
\end{equation}
In \eqref{NavierStokesintroduction}, $*$ is the Hodge-star operator sending the space of differential $1$-forms into itself. Intuitively, the action $u^* \to * u^*$ should be interpreted as the rotational action $u \to u^{\perp}$ by the angle $\frac{\pi}{2}$ in the anti-clockwise direction. So, the term $\beta \cos (ar) * u^*$ which appears in \eqref{NavierStokesintroduction} represents the effect upon the velocity field $u$ due to the rotation of the sphere with some \emph{constant} rotational speed $\beta \geqslant 0$. The operator $d^*$ acting on the space of smooth differential $1$-forms on the sphere should be interpreted as the operator $- div$ acting on the space of smooth vector fields on the sphere. Notice also that the viscosity term of \eqref{NavierStokesintroduction} is represented as the linear combination of the standard Hodge Laplacian $(-\triangle ) = dd^* + d^*d$ acting on the space of $1$-forms on the sphere and $-2Ric (u^*)$, with $Ric$ being the standard Ricci tensor of the sphere in differential geometry. The multiplicative \emph{constant} $\nu > 0$ stands for the viscosity coefficient of the stationary Navier-Stokes flows governed by equation \eqref{NavierStokesintroduction}. Moreover, the symbol $\overline{\nabla}$ appearing in the nonlinear convection term of \eqref{NavierStokesintroduction} is the Levi-Civita connection (which operates on the space of smooth vector fields) induced by the intrinsic Riemannian geometry of the sphere. \\
\begin{remark}
For the precise definitions of the operators $(-\triangle)$, $d^*$, $*$, and the Levi-Civita connection $\overline{\nabla}$ in the general Riemannian manifold setting, we refer our readers to \textbf{Definitions \ref{VolumeformDef}, \ref{HodgeDefinition} , \ref{LeviCivitadefinition}} in \textbf{Section \ref{BasicsphereSEC}} and \textbf{Definition \ref{dstarDef}} in \textbf{Section \ref{Spheremainsection}} of this paper. For the intuitive meaning of these operators, we refer our readers to \textbf{Remarks \ref{remarkgradient}, \ref{curlremark}, and \ref{divergenceremark}} in \textbf{Section \ref{BasicsphereSEC}} of this paper.
\end{remark}
\begin{remark}\label{1formremark}
Here, we would like to explain the meaning of the notation $u^*$ which appears in \eqref{NavierStokesintroduction}. In the case of a general $N$-dimensional Riemannian manifold $M$ equipped with a Riemannian metric $g(\cdot , \cdot )$, for any given smooth vector field $u$ on $M$, we can always construct its associated $1$-form, namely $u^* \in C^{\infty}(T^*M)$, in accordance with the following relation.
\begin{equation}
u^* = g(u , \cdot ).
\end{equation}
Since by definition, the Riemannian metric $g(\cdot,\cdot )$ on $M$ is actually a smoothly varying way of assigning to each $p\in M$ the positive definite inner product $g_p(\cdot , \cdot )$ on the tangent space $T_pM$ of $M$ at $p$, the above construction of the associated $1$-form $u^*$ on $M$ for each smooth vector field $u$ on $M$ actually provides a one-to-one correspondence between the space of smooth vector fields and the space of smooth $1$-forms on $M$.
\end{remark}
\begin{remark}\label{NSremark}
The structure of \eqref{NavierStokesintroduction} is based on the standard stationary Navier-Stokes equation as given in \eqref{NavierStokeshyperbolic}, with an extra term $\beta \cos (ar) * u^*$ included in order to take into account the effect of the rotational action of the sphere. The structure of the Navier-Stokes equation as specified in \eqref{NavierStokeshyperbolic} first appeared in the historical work \cite{EbinMarsden} by D. Ebin and J. Marsden. Since then, equation \eqref{NavierStokeshyperbolic} (with the extra term $\partial_{t}u^*$ in the time-dependent case) has been accepted as the standard form of the Navier-Stokes equation on a general finite dimensional Riemannian manifold $M$. A discussion of the research literature on the Navier-Stokes equation on a general Riemannian manifold is beyond the scope of this paper. However, for further historical remarks, we refer our readers to the textbook \cite{MichaelTaylor} by M. Taylor, and also to some recent works on this subject, such as the work \cite{DindosMitrea} by M. Dindos and M. Mitrea, the work \cite{nonuniquenesshyperbolic} by the first author and M. Czubak, and the work \cite{Khesin} by B. Khesin and G. Misiolek.
\end{remark}
\noindent
The first goal of our paper is to show that there exists a stationary Navier-Stokes flow solving equation \eqref{NavierStokesintroduction} (the same stationary flow for all rotation rates) with isotropic streamlines in all the latitudes on a (rotating) sphere, and with given boundary values near the north pole of the sphere.
Moreover, we show that there is \emph{no} stationary flow with a quadratic profile (Poiseuille flow profile).
Here, we just mention that in the case when the background manifold is the standard Euclidean $2$-dimensional space $\mathbb{R}^2$, the analogous investigation of the behavior of parallel laminar Navier-Stokes flows around a circular-arc boundary portion of an obstacle in $\mathbb{R}^2$ has been carried out in a recent paper \cite{Y} by the second author by means of elementary methods. \\
\noindent
Let us be more precise. Consider the $2$-dimensional space form $S^2(a^2)$ of positive sectional curvature $a^2$, which can be realized as the standard sphere $\{ (x_1 ,x_2 , x_3 ) : x_{1}^2 + x_{2}^2 + x_{3}^2 = \frac{1}{a^2} \}$ of radius $\frac{1}{a}$ in $\mathbb{R}^3$. Consider the selected point $O = (0,0,\frac{1}{a}) \in S^2(a^2)$, which can be regarded as the North Pole of the sphere $S^2(a^2)$, and introduce the standard \emph{normal polar coordinate system} $(r,\theta )$ about the base point $O$ on $S^2(a^2)$ via the exponential map $\exp_{O} : T_{O}S^2(a^2) \rightarrow S^2(a^2)$ (see \textbf{Definition \ref{expsphereDef}} for the precise meaning of $\exp_O$, and \textbf{Definition \ref{PolarDefnSphere}} for the precise meaning of $(r,\theta)$ on $S^2(a^2)$).\\
\noindent
More specifically, the first part of this paper studies the existence and non-existence of a locally defined parallel laminar stationary Navier-Stokes flow on some local exterior region near the circular-arc \emph{boundary portion} of some compact obstacle $K$ in $S^2(a^2)$.
As a preparation for the statement of Theorem \ref{Noparallel}, let $K$ be a compact set in $S^2(a^2)$ whose boundary $\partial K$ has a \emph{circular-arc boundary portion}. $K$ will represent an obstacle around which stationary Navier-Stokes flows occur. We will study stationary Navier-Stokes flows in some \emph{simply-connected} exterior region whose boundary shares the same circular-arc boundary portion with $\partial K$. More precisely, suppose that, for some positive numbers $\delta \in (0, \frac{\pi}{a})$ and $\tau \in (0, 2\pi)$, $\partial K$ contains the following circular-arc portion
\begin{equation}
C_{\delta , \tau} = \{p\in S^2(a^2) : d (p, O) = \delta , 0 < \theta (p) < \tau \}.
\end{equation}
That is, we have $C_{\delta , \tau } \subset \partial K$. Here, the symbol $d(p,O)$ stands for the geodesic distance between $p$ and $O$ on the sphere $S^2(a^2)$. For convenience, we will also assume that $K$ is confined within the closed geodesic ball $\overline{B_{O}(\delta)} = \{ p \in S^2(a^2) : d(p,O) \leqslant \delta\}$. That is, we have $K \subset \overline{B_O(\delta )}$. Then, consider the following \emph{sector-shaped} open region $R_{\delta , \tau , \epsilon_{0}}$ in $S^2(a^2)-K$
\begin{equation}\label{exteriorregion}
R_{\delta , \tau , \epsilon_0 } = \{p\in S^2(a^2): \delta<d(p,O)< \delta + \epsilon_{0} , 0 < \theta (p) < \tau \},
\end{equation}
where $\epsilon_0$ is any small positive number which satisfies $\delta + \epsilon_{0} < \frac{\pi}{a}$.
Let $\{\frac{\partial }{\partial r} , \frac{\partial }{\partial \theta }\}$ be the natural coordinate frame induced by the normal polar coordinate system $(r,\theta)$ on $S^2(a^2)$ (see \textbf{Definition \ref{coordFRAMEsphere}} for the precise meaning of $\{\frac{\partial }{\partial r} , \frac{\partial }{\partial \theta }\}$ on $S^2(a^2)$). Then, the following assertion holds:
\begin{thm}\label{Noparallel}
Consider the simply-connected exterior region $R_{\delta , \tau , \epsilon_0 }$ as defined in \eqref{exteriorregion}.
For the quadratic profile (Poiseuille flow profile) $h(\lambda ) = \alpha_1 \lambda - \frac{\alpha_2}{2} \lambda^2$, with both $\alpha_1 > 0$ and $\alpha_2 > 0$, there does not exist any parallel laminar flow in the form of $u = -h(r-\delta ) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ which solves the stationary Navier-Stokes equation \eqref{NavierStokesintroduction} in the region $R_{\delta , \tau , \epsilon_0 }$ as specified in \eqref{exteriorregion}.
\end{thm}
\begin{remark}
In the statement of Theorem \ref{Noparallel}, a velocity field in the form of $u = -h(r-\delta ) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ is called a parallel laminar flow along the circular-arc boundary portion $C_{\delta , \tau}$ of the given obstacle $K$ in $S^2(a^2)$, exactly because each streamline of such a velocity field $u$ is by itself a circular arc which keeps a constant geodesic distance from the boundary portion $C_{\delta , \tau}$. In general, a smooth velocity field $u$ on some open flow region near a smooth boundary portion $\Gamma$ of some obstacle $K$ in a $2$-D manifold $M$ is called a parallel laminar flow along the smooth boundary portion $\Gamma$ if every single streamline of $u$ keeps a constant geodesic distance from $\Gamma$. This notion of parallel laminar flows as employed here is consistent with the definition of parallel laminar flow as given in the recent work \cite{Y} by the second author.
\end{remark}
\noindent
Here, let us say something about the idea of the proof of Theorem \ref{Noparallel}, which will be given in detail in \textbf{Section \ref{Spheremainsection}}. The first step of the argument is to obtain an explicit formula for the following expression, where $u = -h(r-\delta ) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$:
\begin{equation}\label{dclosedtest}
d \{ \nu ((-\triangle)u^* -2 Ric (u^*) ) + \overline{\nabla}_{u}u^* \}.
\end{equation}
In \eqref{dclosedtest}, the symbol $u^*$ stands for the associated $1$-form of the vector field $u$, which is defined by the relation $u^* = g(u,\cdot )$, where $g(\cdot, \cdot )$ is the Riemannian metric of the sphere $S^2(a^2)$.
Also, the symbol $d$ stands for the exterior differential operator $d : C^{\infty}(T^*M) \rightarrow C^{\infty}(\wedge^2 T^*M)$ sending smooth $1$-forms to smooth $2$-forms on a Riemannian manifold $M$, which in the case of Theorem \ref{Noparallel} is taken to be $M = S^2(a^2)$.\\
So, \textbf{Step 1} and \textbf{Step 2} in \textbf{Section \ref{Spheremainsection}} are carried out in order to compute the term $(-\triangle )u^*$ and the nonlinear convection term $\overline{\nabla}_{u}u^*$ involved in equation \eqref{NavierStokesintroduction}. These efforts eventually lead to the following representation formula for expression \eqref{dclosedtest} in \textbf{Step 3} of \textbf{Section \ref{Spheremainsection}}.
\begin{equation}\label{ONE}
\begin{split}
d \big \{ \nu (-\triangle u^* -2 a^2 u^*) + \overline{\nabla}_{u}u^* \big \}
& = \nu \bigg \{ h'''(r-\delta ) \frac{\sin (ar)}{a} + 2 h''(r-\delta ) \cos (ar) \\
& + a h'(r-\delta ) (\sin (ar) -\frac{1}{\sin (ar)}) \\
& + a^2 h(r-\delta ) \cos (ar)(2+ \frac{1}{\sin^2(ar)}) \bigg \} dr\wedge d\theta .
\end{split}
\end{equation}
Moreover, since $\beta\cos(a r) *u^*=\beta h(r-\delta)\cos (ar)\frac{\sin(ar)}{a}dr$, we can easily get
\begin{equation}\label{TWO}
d\{\beta \cos (ar) *u^*\}=0.
\end{equation}
Then, expressions \eqref{ONE} and \eqref{TWO} together will imply the following representation formula for $d \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta\cos(a r) *u^* + \overline{\nabla}_{u}u^* \}$.
\begin{equation}\label{generalexpINTRO}
\begin{split}
& d \big \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta\cos(a r) *u^* + \overline{\nabla}_{u}u^* \big \} \\
& = \nu \bigg \{h'''(r-\delta ) \frac{\sin (ar)}{a} + 2 h''(r-\delta ) \cos (ar) \\
& + a h'(r-\delta ) (\sin (ar) -\frac{1}{\sin (ar)}) \\
& + a^2 h(r-\delta ) \cos (ar)(2+ \frac{1}{\sin^2(ar)}) \bigg \} dr\wedge d\theta .
\end{split}
\end{equation}
The representation formula \eqref{generalexpINTRO} is valid for \emph{any smooth function} $h$ defined on any open interval around the point $\delta$.
So, the representation formula \eqref{generalexpINTRO} is the key tool which allows us to decide whether or not $u = -h(r-\delta ) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ is a solution to \eqref{NavierStokesintroduction}. This is because, in accordance with basic facts from differential topology, the \emph{vanishing} of
$d \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta\cos(a r) *u^* + \overline{\nabla}_{u}u^* \}$ over some simply-connected open region (say for instance $R_{\delta , \tau ,\epsilon_0}$ as specified in \eqref{exteriorregion}) near the circular-arc portion of the boundary of the obstacle $K$ will immediately imply the existence of some locally defined smooth pressure $P$ which solves \eqref{NavierStokesintroduction} on the same simply-connected open region. In other words, this basic idea of \textbf{$d$-closed implies $d$-exact on simply-connected region} in differential topology allows us to reduce our problem to the one of testing whether
or not $d \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta\cos(a r) *u^* + \overline{\nabla}_{u}u^* \}$ totally vanishes on some prescribed simply-connected open region near the circular-arc portion of $\partial K$. So, in \textbf{Step 3} of \textbf{Section \ref{Spheremainsection}}, we finish the proof of Theorem \ref{Noparallel} by showing, case by case, that for the quadratic function $h(\lambda ) = \alpha_1 \lambda - \frac{\alpha_2}{2} \lambda^2$, the expression $d \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta\cos(a r) *u^* + \overline{\nabla}_{u}u^* \}$ never vanishes identically on the simply-connected region $R_{\delta , \tau , \epsilon_0}$ as specified in \eqref{exteriorregion}, \emph{no matter how small the positive number $\epsilon_0$ may be}. In this way, we get a neat and clean argument showing that $u = -h(r-\delta ) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ is not a solution to \eqref{NavierStokesintroduction}, for any quadratic profile $h(\lambda ) = \alpha_1 \lambda - \frac{\alpha_2}{2} \lambda^2$ with $\alpha_1 >0$ and $\alpha_2 > 0$.\\
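As a quick, purely illustrative sanity check on this non-vanishing claim (not part of the proof), one can evaluate the bracketed expression of \eqref{generalexpINTRO} for the quadratic profile at sample parameter values; the choices $a=1$, $\delta=1$, $\alpha_1=\alpha_2=1$ below are arbitrary assumptions, and the function name \texttt{bracket} is ours.

```python
import math

def bracket(r, a=1.0, delta=1.0, alpha1=1.0, alpha2=1.0):
    """Bracketed expression in the representation formula, evaluated
    for the quadratic profile h(lam) = alpha1*lam - (alpha2/2)*lam**2."""
    lam = r - delta
    h = alpha1 * lam - 0.5 * alpha2 * lam**2
    hp = alpha1 - alpha2 * lam          # h'
    hpp = -alpha2                       # h''
    hppp = 0.0                          # h''' vanishes for a quadratic
    s, c = math.sin(a * r), math.cos(a * r)
    return (hppp * s / a
            + 2.0 * hpp * c
            + a * hp * (s - 1.0 / s)
            + a**2 * h * c * (2.0 + 1.0 / s**2))

# The bracket is already nonzero at r = delta, so the 2-form cannot
# vanish identically on any region R_{delta, tau, eps_0}:
print(bracket(1.0), bracket(1.05))
```

Of course, checking a few sample points is no substitute for the case-by-case argument of \textbf{Step 3}; it merely illustrates the mechanism.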
\noindent
As a by-product of the approach used in the proof of Theorem \ref{Noparallel}, we also make another important observation: the function appearing on the right-hand side of \eqref{generalexpINTRO} is a third-order linear differential operator $L$ acting on the unknown function $Y (r) = h (r-\delta )$. In other words, for a general parallel laminar flow in the form of $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$, the vanishing of $d \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta\cos(a r) *u^* + \overline{\nabla}_{u}u^* \}$ over a certain simply-connected region near the circular-arc portion of $\partial K$ is equivalent to saying that the unknown function $Y(r) = h(r-\delta )$ solves the following third-order linear ODE over some interval $[\delta , \delta + \epsilon )$.
\begin{equation}\label{thirdorderODE}
\begin{split}
0 & = Y'''(r) \frac{\sin (ar)}{a} + 2 Y''(r) \cos (ar) \\
& + a Y'(r) (\sin (ar) -\frac{1}{\sin (ar)}) + a^2 Y(r) \cos (ar)(2+ \frac{1}{\sin^2(ar)}) .
\end{split}
\end{equation}
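For readers who wish to experiment, here is a minimal numerical sketch (not part of the paper's argument) integrating \eqref{thirdorderODE} as a first-order system with a hand-rolled classical Runge-Kutta step; the values $a=1$, $\delta=1$, $\alpha_1=\alpha_2=1$ are illustrative assumptions, and the function names are ours.

```python
import math

def rhs(r, y, a=1.0):
    # The third-order ODE solved for Y''' and written as a
    # first-order system for (Y, Y', Y'').
    Y, Yp, Ypp = y
    s, c = math.sin(a * r), math.cos(a * r)
    Yppp = -(a / s) * (2.0 * Ypp * c
                       + a * Yp * (s - 1.0 / s)
                       + a**2 * Y * c * (2.0 + 1.0 / s**2))
    return (Yp, Ypp, Yppp)

def rk4(r0, y0, r1, n=2000, a=1.0):
    # Classical 4th-order Runge-Kutta march from r0 to r1.
    h = (r1 - r0) / n
    r, y = r0, tuple(y0)
    for _ in range(n):
        k1 = rhs(r, y, a)
        k2 = rhs(r + h / 2, tuple(y[i] + h / 2 * k1[i] for i in range(3)), a)
        k3 = rhs(r + h / 2, tuple(y[i] + h / 2 * k2[i] for i in range(3)), a)
        k4 = rhs(r + h, tuple(y[i] + h * k3[i] for i in range(3)), a)
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
        r += h
    return y

# Initial data Y(delta) = 0, Y'(delta) = alpha1, Y''(delta) = -alpha2:
delta, alpha1, alpha2 = 1.0, 1.0, 1.0
print(rk4(delta, (0.0, alpha1, -alpha2), delta + 0.5))
```

The coefficients are smooth away from the zeros of $\sin(ar)$, so the march is well behaved on $[\delta, \delta+\tfrac12] \subset (0,\pi)$ for these sample values.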
Since all the coefficient functions involved in the above third-order ODE are real-analytic over the interval $(0, \frac{\pi}{a})$, the basic existence theorem (Theorem \ref{ODEtheorem} in \textbf{Section \ref{Spheremainsection}}) in the theory of linear ODEs ensures, for the prescribed initial data $Y(\delta ) = 0$, $Y'(\delta) = \alpha_{1}$, and $Y''(\delta ) = - \alpha_{2}$, the existence of a unique smooth function $Y (r) = h(r-\delta) $ over $[\delta , \frac{\pi}{a})$ which solves equation \eqref{thirdorderODE} on $(0, \frac{\pi}{a})$, and which at the same time turns out to be real-analytic on $[\delta , \frac{\pi}{a} )$. This leads to our second basic theorem (Theorem \ref{existenceEasy}), which says that, for any prescribed constants $\alpha_1>0 $, $\alpha_2>0 $, there exists a unique smooth function $Y \in C^{\infty}([\delta , \frac{\pi}{a} ))$ satisfying $Y(\delta ) =0$, $Y'(\delta ) = \alpha_1$, and $Y''(\delta ) = -\alpha_2$, real-analytic over $[\delta , \frac{\pi}{a})$, such that the associated parallel laminar flow $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ is a solution to \eqref{NavierStokesintroduction} on some simply-connected open region near the circular-arc portion of the boundary of the compact obstacle $K$.
\begin{thm}\label{existenceEasy}
Consider the space form $S^2(a^2) = \{ (x_1 ,x_2 , x_3 ) : x_{1}^2 + x_{2}^2 + x_{3}^2 = \frac{1}{a^2} \}$, with $a > 0$. Let $O \in S^2(a^2)$ be a selected base point, and let $(r, \theta )$ be the normal polar coordinate system on $S^2(a^2)$ about the base point $O$, which is introduced through the standard exponential map $\exp_{O} : \{ v \in T_OS^2(a^2) : \|v\| < \frac{\pi}{a} \} \rightarrow S^2(a^2)$.\\
Consider a fixed positive number $\delta \in (0 ,\frac{\pi}{a})$, and let $K$ be some \emph{compact} region which is a subset of $\{p\in S^2(a^2): d(p,O)\leq \delta \}$, and which plays the role of an obstacle in $S^2(a^2)$. Suppose further that for some positive number $\tau \in (0, 2\pi )$, the circular arc $C_{\delta , \tau} = \{p\in S^2(a^2) : d (p, O) = \delta , 0 < \theta (p) < \tau \}$ constitutes a smooth boundary portion of $\partial K$ in that
\begin{equation}
\{p\in S^2(a^2) : d (p, O) = \delta , 0 < \theta (p) < \tau \} \subset \partial K .
\end{equation}
Then, it follows that, for any prescribed positive numbers $\alpha_1 > 0$, $\alpha_2 > 0$, there exists some unique smooth function $Y : [\delta , \frac{\pi}{a}) \rightarrow \mathbb{R}$ satisfying $Y(\delta ) = 0$, $Y'(\delta ) = \alpha_1$, $Y''(\delta ) = -\alpha_2$, such that the associated parallel laminar flow $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ will solve \eqref{NavierStokesintroduction} in the following simply connected exterior region
\begin{equation}\label{OMEGASPHERE}
\Omega_{\delta, \tau} = \{p\in S^2(a^2) : \delta < d (p, O) < \frac{\pi}{a} , 0 < \theta (p) < \tau \} ,
\end{equation}
which shares the same circular-arc boundary portion $C_{\delta , \tau }$ with the boundary of the compact obstacle $K$. Moreover, this unique smooth function $Y : [\delta , \frac{\pi}{a}) \rightarrow \mathbb{R}$ turns out to be \textbf{real-analytic} on $[\delta , \frac{\pi}{a})$.
\end{thm}
\begin{remark}
Indeed, according to the basic existence theorem (Theorem \ref{ODEtheorem}) in O.D.E. theory, the conclusion of Theorem \ref{existenceEasy} remains valid even if we generalize $h$ to $h(\lambda)=\alpha_0+\alpha_1\lambda-\frac{\alpha_2}{2}\lambda^2$, with three prescribed parameters $\alpha_0 \geq 0$, $\alpha_1 > 0$, and $\alpha_2 > 0$.
\end{remark}
\noindent
Indeed, Theorem \ref{existenceEasy} is a by-product which follows from an application of representation formula \eqref{generalexpINTRO} together with the use of the basic existence and uniqueness theorem (Theorem \ref{ODEtheorem} in \textbf{Section \ref{Spheremainsection}}) for third-order ODEs with real-analytic coefficients. The very brief and easy proof of Theorem \ref{existenceEasy} will be given in \textbf{Step 4} of \textbf{Section \ref{Spheremainsection}}.\\
\noindent
At first glance, the conclusion of Theorem \ref{existenceEasy} may seem opposite to that of Theorem \ref{Noparallel}. However, this is \textbf{not} the case, since Theorem \ref{Noparallel} only rules out the existence of a stationary Navier-Stokes flow of the form $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ with the \emph{quadratic profile (Poiseuille flow profile)} $Y (r) = \alpha_{1} (r-\delta ) - \frac{\alpha_2}{2} (r-\delta )^2$.
However, Theorem \ref{existenceEasy} says that, as long as \emph{higher order terms} beyond the quadratic power $(r-\delta )^2$ are allowed in the Taylor series expansion
of the unknown function $Y(r)$, the basic existence and uniqueness theory for third-order linear ODEs will ensure the existence of a unique real-analytic function $Y$ for which $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ will solve \eqref{NavierStokesintroduction} in some simply-connected open region near the circular-arc portion of $\partial K$. Before we leave Part I of the introduction, let us make another interesting remark which helps to relate the result in Theorem \ref{existenceEasy} to the observations made in \cite{YY} by Yoden and Yamada.
\begin{remark}
It is worthwhile to notice that, in Theorem \ref{existenceEasy}, the unique real-analytic function $Y$ on $[\delta , \frac{\pi}{a} )$ which makes $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ become a solution to \eqref{NavierStokesintroduction} is \emph{independent} of the constant rotational speed $\beta > 0$ of the round sphere $S^2(a^2)$. In other words, for the \emph{same} real-analytic function $Y$ on $[\delta , \frac{\pi}{a} )$ satisfying equation \eqref{thirdorderODE} and the initial values $Y(\delta )=0$, $Y'(\delta )= \alpha_1$, and $Y''(\delta ) = -\alpha_2$, the velocity field $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ can be realized as a stationary Navier-Stokes flow on the rotating sphere $S^2(a^2)$ with any prescribed rotational speed $\beta > 0$. This remark seems even more interesting when one compares the existence result given in Theorem \ref{existenceEasy} with the numerical experiments carried out in the work \cite{YY} by Yoden and Yamada, according to which an easterly circumpolar vortex starts to appear in high latitudes when the rotational speed $\beta > 0$ increases, and the flow field becomes anisotropic at all latitudes. So, by combining the result given in Theorem \ref{existenceEasy} and the observation made by Yoden and Yamada in \cite{YY}, it is very tempting to speculate that a large rotational speed $\beta$ of the sphere $S^2(a^2)$ about its North-South axis may have some considerable effect in \emph{stabilizing} the flow pattern of the (real-analytic) stationary Navier-Stokes flow $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ ensured by Theorem \ref{existenceEasy}.
\end{remark}
\noindent{\bf Part II : About the study of stationary parallel laminar Navier-Stokes flows around an obstacle in a hyperbolic manifold with constant negative sectional curvature.}\\
\noindent
In the second part of the introduction, we will study the existence and non-existence of stationary Navier-Stokes flows with circular-arc streamlines around some compact obstacle $K$ in a $2$-dimensional space-form $\mathbb{H}^2(-a^2)$ of constant negative sectional curvature $-a^2$. On such a $2$-dimensional space-form $\mathbb{H}^2(-a^2)$, which will also be called the $2$-dimensional hyperbolic space with constant sectional curvature $-a^2 < 0$, we will study the following stationary Navier-Stokes equation, which again is formulated in the language of differential $1$-forms on $\mathbb{H}^{2}(-a^2)$ instead of that of smooth vector fields (again, see \textbf{Remark \ref{1formremark}}).
\begin{equation}\label{NavierStokeshyperbolic}
\begin{split}
\nu ((-\triangle)u^* -2 Ric (u^*) ) +[\overline{\nabla}_{u}u]^* + dP & = 0, \\
d^* u^* & = 0 .
\end{split}
\end{equation}
In equation \eqref{NavierStokeshyperbolic}, $\overline{\nabla}$ stands for the standard Levi-Civita connection (covariant derivative) acting on the space of smooth vector fields on $\mathbb{H}^2(-a^2)$. Again, the operator $d^*$ sending smooth $1$-forms into the space of smooth functions on $\mathbb{H}^2(-a^2)$ is interpreted as $-div$. The viscosity term in \eqref{NavierStokeshyperbolic} consists of two terms, namely $(-\triangle )u^*$ and $-2 Ric (u^*)$, where $(-\triangle ) = dd^* + d^*d$ is the standard Hodge Laplacian acting on the space of $1$-forms on $\mathbb{H}^2(-a^2)$ and $Ric$ is the standard Ricci tensor with respect to the Riemannian metric of the hyperbolic manifold $\mathbb{H}^2(-a^2)$.\\
\noindent
Here, let us explain a little bit about why we would like to extend our study of stationary parallel laminar flows around the circular-arc boundary portion of some obstacle to the case in which the background space is a hyperbolic manifold $\mathbb{H}^2(-a^2)$ of constant negative sectional curvature $-a^2$. Indeed, it is understandable that a P.D.E. specialist with a more practical mind (or a scientist working in the area of fluid dynamics in general) may find it difficult to comprehend or appreciate the meaning or significance of studying Navier-Stokes flows on such a hyperbolic manifold. This kind of negative attitude towards the study of Navier-Stokes flows on hyperbolic manifolds is understandable because a classical theorem due to Hilbert \cite{Hilbert} and Efimov \cite{Efimov} states that a complete $2$-dimensional Riemannian manifold with sectional curvature bounded above by a negative constant cannot be isometrically embedded into the standard Euclidean space $\mathbb{R}^3$ equipped with the standard Euclidean metric. However, if one looks at this issue from the viewpoint of pure mathematics, a hyperbolic space $\mathbb{H}^2(-a^2)$ with constant negative sectional curvature $-a^2 < 0$ is exactly the ``negative counterpart'' of the sphere $S^2(a^2)$ with radius $\frac{1}{a}$, which is the space form of constant positive sectional curvature $a^2$. So, methodologically speaking, if a set of methods, such as those we used in \textbf{Section \ref{Spheremainsection}}, works well in the study of parallel laminar flows around an obstacle in the round sphere $S^2(a^2)$, one expects that the same set of methods, once adapted to the setting of a hyperbolic space $\mathbb{H}^2(-a^2)$, should work equally well and should yield equally interesting results analogous to those obtained in the spherical case $S^2(a^2)$.
Indeed, if one compares the mathematical content of \textbf{Section \ref{Spheremainsection}}, which contains the proofs of Theorems \ref{Noparallel} and \ref{existenceEasy}, with that of \textbf{Section \ref{hyperMAINSECTION}}, which contains the proof of Theorem \ref{existenceHyperbolic}, the similarities between the spherical case and its hyperbolic counterpart are striking. In other words, from the mathematical viewpoint, it is completely natural to state and prove Theorem \ref{existenceHyperbolic}, which is analogous to the results of Theorems \ref{Noparallel} and \ref{existenceEasy} in the spherical case.
\begin{thm}\label{existenceHyperbolic}
Consider the $2$-dimensional space form $\mathbb{H}^2(-a^2)$ of constant negative sectional curvature $-a^2$, with $a > 0$ given. Let $O \in \mathbb{H}^2(-a^2)$ be a selected base point, and let $(r, \theta )$ be the normal polar coordinate system on $\mathbb{H}^2(-a^2)$ about the base point $O$, which is introduced through the standard exponential map $\exp_{O} : T_{O}\mathbb{H}^2(-a^2) \rightarrow \mathbb{H}^2(-a^2)$ (see \textbf{Definition \ref{exphyper}} for the precise meaning of $\exp_O$, and also \textbf{Definition \ref{polarDefn}} for the precise meaning of $(r,\theta )$ on $\mathbb{H}^2(-a^2)$).\\
Consider a fixed choice of positive number $\delta \in (0 , \infty )$, and let $K$ be some \emph{compact} region which is a subset of $\{p\in \mathbb{H}^2(-a^2): d(p,O)\leq \delta \}$, and which plays the role of an obstacle in $\mathbb{H}^2(-a^2)$. Here, $d(p,O)$ stands for the geodesic distance between $O$ and $p$ in $\mathbb{H}^2(-a^2)$. Suppose further that for some positive number $\tau \in (0, 2\pi )$, the circular arc $C_{\delta , \tau} = \{p\in \mathbb{H}^2(-a^2) : d (p, O) = \delta , 0 < \theta (p) < \tau \}$ constitutes a smooth boundary portion of $\partial K$ in that
\begin{equation}
\{p\in \mathbb{H}^2(-a^2) : d (p, O) = \delta , 0 < \theta (p) < \tau \} \subset \partial K .
\end{equation}
Moreover, let $\frac{\partial}{\partial r}$ and $\frac{\partial}{\partial \theta }$ be the two natural vector fields induced by the normal polar coordinate system $(r,\theta )$ on $\mathbb{H}^2(-a^2)$ about the base point $O$, which are defined in \textbf{Definition \ref{FRAMEHyperDef}} of \textbf{Section \ref{HYPERBOLIDSECTION}}. Then, it follows that the following two assertions are valid.\\
\noindent
\textbf{Assertion I} For the quadratic profile $h(\lambda ) = \alpha_1 \lambda -\frac{\alpha_2}{2} \lambda^2$ with any prescribed
constants $\alpha_1 > 0$ and $\alpha_2 > 0$, the velocity field $u = -h(r-\delta ) \frac{a}{\sinh (ar)} \frac{\partial}{\partial \theta }$ does not satisfy equation \eqref{NavierStokeshyperbolic} on any sector-shaped region $R_{\delta , \tau , \epsilon_0 }$ of $\mathbb{H}^2(-a^2)$ specified as follows, regardless of how small the positive number $\epsilon_0 > 0$ may be:
\begin{equation}\label{RegionHYPER}
R_{\delta , \tau , \epsilon_0 } = \{ p \in \mathbb{H}^2(-a^2) : \delta < d(p,O) < \delta + \epsilon_0, 0 < \theta (p) < \tau \}.
\end{equation}
\noindent
\textbf{Assertion II} For any prescribed positive numbers $\alpha_1 > 0$, $\alpha_2 > 0$, there exists some unique smooth function $Y : [\delta , \infty ) \rightarrow \mathbb{R}$ satisfying $Y(\delta ) = 0$, $Y'(\delta ) = \alpha_1$, $Y''(\delta ) = -\alpha_2$, such that the associated parallel laminar flow $u = -Y(r) \frac{a}{\sinh (ar)}\frac{\partial}{\partial \theta }$ will satisfy equation \eqref{NavierStokeshyperbolic} on the following simply connected exterior region:
\begin{equation}\label{OMEGAhyper}
\Omega_{\delta, \tau} = \{p\in \mathbb{H}^2(-a^2) : d (p, O) > \delta , 0 < \theta (p) < \tau \} ,
\end{equation}
which shares the same circular-arc boundary portion $C_{\delta , \tau }$ with the boundary of the compact obstacle $K$. Moreover, this unique smooth function $Y : [\delta , \infty ) \rightarrow \mathbb{R}$ turns out to be \textbf{real-analytic} on $[\delta , \infty )$.
\end{thm}
We just remark that the proof of Theorem \ref{existenceHyperbolic} will be given in \textbf{Section \ref{hyperMAINSECTION}}.
\noindent
The last mathematical result which we are going to give concerns the existence or non-existence of stationary parallel laminar Navier-Stokes flows along a geodesic representing the ``straight edge'' boundary of some obstacle in $\mathbb{H}^2(-a^2)$. In order to state this last mathematical result (Theorem \ref{parallelflowhyperbolicThm}), we need to consider another pair of vector fields $\frac{\partial}{\partial \tau}$ and $\frac{\partial}{\partial s}$ on $\mathbb{H}^2(-a^2)$ as defined in expression \eqref{intuitivehyp} of \textbf{Section \ref{Cartesianhyperbolicsection}}.
\begin{thm}\label{parallelflowhyperbolicThm}
Let $\mathbb{H}^2(-a^2)$ be the $2$-dimensional space form with constant negative sectional curvature $-a^2$. Consider the ``Cartesian coordinate system'' $\Phi : \mathbb{R}^2 \rightarrow \mathbb{H}^2(-a^2)$ on $\mathbb{H}^2(-a^2)$, which is constructed in \textbf{Section \ref{Cartesianhyperbolicsection}} of this paper, with the two geodesics $\tau \to c(\tau )$ and $s \to \gamma (s)$ playing the roles of the ``$x$-axis'' and the ``$y$-axis'' on $\mathbb{H}^2(-a^2)$ respectively (for the precise definition of $\Phi (\tau , s)$ and the roles played by the geodesics $c$ and $\gamma$, see expression \eqref{coordinatesystemhyper} and the discussion in \textbf{Section \ref{Cartesianhyperbolicsection}}). Here, the geodesic $\gamma$ playing the role of the ``$y$-axis'' of the coordinate system $\Phi (\tau, s)$ will also represent the ``straight-edge'' boundary of an obstacle $K$ occupying the infinite region lying on the ``left-hand side'' of $\gamma$ $($see \eqref{K} in \textbf{Section \ref{Cartesianhyperbolicsection}} for a precise definition of $K$$)$. Let $\frac{\partial}{\partial \tau }$ and $\frac{\partial}{\partial s } $ be the natural vector fields on $\mathbb{H}^2(-a^2)$, which we define in expression \eqref{intuitivehyp} of \textbf{Section \ref{Cartesianhyperbolicsection}}. Then, we claim that the following two assertions hold.\\
\noindent
\textbf{Assertion I} For the quadratic function $h(\tau ) = \alpha_1 \tau - \frac{\alpha_2}{2}\tau^2$, with any given constants $\alpha_1 > 0$ and $\alpha_2 >0$, the parallel laminar flow given by $u = -h(\tau ) \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s }$ does not satisfy equation \eqref{NavierStokeshyperbolic} on any simply connected region $\Omega_{\tau_{0}} \subset \mathbb{H}^2(-a^2)-K$ in the following form.
\begin{equation}
\Omega_{\tau_{0}} = \{p \in \mathbb{H}^2(-a^2) : 0 < \tau (p) < \tau_0 \} = \Phi ((0 ,\tau_0 )\times \mathbb{R}),
\end{equation}
where $\tau_0 > 0$ is an arbitrary constant, and $\Phi (\tau , s)$ is the coordinate system on $\mathbb{H}^2(-a^2)$ given in \eqref{coordinatesystemhyper}. \\
\noindent
\textbf{Assertion II} For any prescribed positive numbers $\alpha_1 > 0$, and $\alpha_2 > 0$, there exists a unique smooth function $Y \in C^{\infty} \big ( [0, \infty )\big ) $ satisfying $Y(0)=0$, $Y'(0)= \alpha_1$, and $Y''(0) = -\alpha_2$ such that the associated parallel laminar flow
$u = -Y(\tau ) \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s}$ will solve equation \eqref{NavierStokeshyperbolic} in the whole exterior region $\mathbb{H}^2(-a^2)- K = \Phi \big ( (0,\infty )\times \mathbb{R} \big )$. Moreover, this unique smooth function $Y : [0,\infty ) \rightarrow \mathbb{R}$ turns out to be \textbf{real-analytic} on $[0,\infty )$.
\end{thm}
The proof of Theorem \ref{parallelflowhyperbolicThm} will be given in \textbf{Section \ref{ProofofHypstraightSec}}. We also point out that, in the case when the background manifold is the round sphere $S^2(a^2)$, the study of stationary parallel laminar Navier-Stokes flows along a \emph{geodesic} representing the boundary of some obstacle in $S^2(a^2)$ is essentially covered by Theorem \ref{Noparallel} and Theorem \ref{existenceEasy}, simply due to the fact that the great circle $\{p\in S^2(a^2) : d(p,O) = \frac{\pi}{2a}\}$ is a geodesic on $S^2(a^2)$ (and is the only possible type of geodesic on $S^2(a^2)$ up to the group of rotational isometries).
\section{Some Basic Geometry and Geometric Construction on the Space Form $S^{2}(a^{2})$ with Positive Constant Sectional Curvature $a^{2}$ }\label{BasicsphereSEC}
\noindent
As a preparation for the proofs of Theorems \ref{Noparallel} and \ref{existenceEasy} in \textbf{Section \ref{Spheremainsection}}, we will discuss first the \emph{normal polar coordinate system} and then the \emph{Hodge Laplacian} on the space form $S^{2}(a^{2})$ with positive sectional curvature $a^{2}$, where $a > 0$ is a given constant. We would like to stress that all the material presented in this section, as well as that in \textbf{Sections \ref{HYPERBOLIDSECTION}} and \textbf{\ref{Cartesianhyperbolicsection}}, pertains to standard, fundamental working knowledge which is basic and well-known to researchers working in differential geometry, geometric analysis, or differential topology. However, it seems to us that many of our potential readers, including P.D.E. specialists working in the area of Navier-Stokes equations, may not be familiar with the basic geometric language which we employ in this paper. Based on this consideration, we include some necessary geometric background here as a preparation for the differential-geometric computations carried out in the proofs of Theorems \ref{Noparallel} and \ref{existenceEasy} in \textbf{Section \ref{Spheremainsection}}. \\
\noindent{\bf The normal polar coordinate system $(r, \theta )$ on $S^2(a^2)$, with the induced natural moving frame $\{\frac{\partial}{\partial r} , \frac{\partial}{\partial \theta }\}$ on $S^2(a^2)$.}\\
\noindent
Geometrically, the space form $S^{2}(a^{2})$ can be \emph{realized} as the sphere in $\mathbb{R}^{3}$ with radius $\frac{1}{a}$ and centered at the origin $(0,0,0)$ of $\mathbb{R}^{3}$. That is, we have the following identification
\begin{equation}
S^{2}(a^{2}) = \left\{ (x_1 ,x_2 , x_3 ) : x_{1}^2 + x_{2}^2 + x_{3}^2 = \frac{1}{a^2} \right \} ,
\end{equation}
where the sphere $\{ (x_1 ,x_2 , x_3 ) : x_{1}^2 + x_{2}^2 + x_{3}^2 = \frac{1}{a^2} \}$ is understood to be a submanifold of $\mathbb{R}^{3}$, equipped with the standard induced Riemannian metric $g(\cdot ,\cdot )$ inherited from the background Euclidean space $\mathbb{R}^3$. Before we introduce the normal polar coordinate system on $S^{2}(a^{2})$, we choose the point $O = (0,0, \frac{1}{a} ) \in S^{2}(a^{2})$, which is the North Pole of the sphere $S^{2}(a^{2})$, to be the reference point at which our normal polar coordinate system will be based. Let $T_{O}S^{2}(a^{2})$ be the tangent space of $S^{2}(a^{2})$ at $O \in S^{2}(a^{2})$.
Intuitively, $T_{O}S^{2}(a^2)$ can be realized as the two dimensional plane in $\mathbb{R}^{3}$ that passes through $(0,0, \frac{1}{a})$ and that is parallel to $\{ (x,y,0) : x , y \in \mathbb{R} \}$. So, it is natural to identify $T_{O}S^{2}(a^2)$ with $\mathbb{R}^{2} = \{ (x,y) : x,y \in \mathbb{R} \}$. Recall that
the tangent space $T_{p}M$ of a manifold $M$ at $p \in M$ is, by definition, a vector space of the same dimension as that of the manifold $M$. \\
\noindent
Here, we need to introduce the concept of \emph{exponential map} $\exp_{O} : T_{O} S^2 (a^2) \rightarrow S^2 (a^2)$.
\begin{defn}\label{expsphereDef}
\textbf{The exponential map $\exp_O$ about the reference point $O\in S^2(a^2)$:}
For any vector $v \in T_{O} S^2 (a^2)$, $\exp_{O}(v)$ is defined as
\begin{equation}\label{exp}
\exp_{O}(v) = \gamma_{v}(1),
\end{equation}
where $\gamma_{v} : [0,\infty ) \rightarrow S^2 (a^2)$ is the unique geodesic on $S^2 (a^2)$ which satisfies $\gamma_{v}(0) = O$ and $\frac{d \gamma_{v} }{dt}|_{t=0} = v$.
\end{defn}
\noindent
Indeed, in accordance with \eqref{exp}, it is plain to see that the following relation holds for any vector $v \in T_{O} S^2 (a^2)$ with $|v| = 1$ and any $t > 0$.
\begin{equation}
\exp_{O}(t v) = \gamma_{v}(t).
\end{equation}
It is a well-known fact that the exponential map $\exp_{O}$, once restricted to the open disc
$D_{0} (\frac{\pi }{a}) = \{ v \in T_{O} S^2 (a^2) : |v| < \frac{\pi}{a} \} $, becomes a \emph{diffeomorphism} of $D_{0} (\frac{\pi }{a})$ onto $S^{2}(a^{2})-\{(0,0,-\frac{1}{a})\}$. This means that $\exp_{O} : D_{0} (\frac{\pi }{a}) \rightarrow S^{2}(a^{2})-\{(0,0,-\frac{1}{a})\}$ is a smooth bijective map with a smooth inverse map $\exp_{O}^{-1} : S^{2}(a^{2})-\{(0,0,-\frac{1}{a})\} \rightarrow D_{0} (\frac{\pi }{a}) $. Since we have the natural identification of the space form $S^{2}(a^{2})$ with $\{ (x_1 ,x_2 , x_3 ) : x_{1}^2 + x_{2}^2 + x_{3}^2 = \frac{1}{a^2} \}$, the map $\exp_{O} : T_{O} S^2 (a^2) \rightarrow S^2 (a^2)$ can be given explicitly as follows. Since $T_{O} S^2 (a^2)$ is naturally identified with $\mathbb{R}^{2}$, any vector $v \in T_{O} S^2 (a^2)$ with $|v| = 1$ can be represented as $v = (\cos \lambda , \sin \lambda )$, for some $\lambda \in [0 , 2\pi )$. Then, the unique geodesic $\gamma_{v} : [0,\infty ) \rightarrow S^2 (a^2)$ satisfying $\gamma_{v}(0) = O$ and $\frac{d \gamma_{v}}{dt}|_{t=0} = v$ is given explicitly by
\begin{equation}
\gamma_{v}(t) = \frac{1}{a} (\sin (ta) \cos \lambda , \sin (ta) \sin \lambda , \cos (ta)).
\end{equation}
Hence, $\exp_{O} : D_{0} (\frac{\pi }{a}) \rightarrow S^{2}(a^{2})-\{(0,0,-\frac{1}{a})\}$ can be given explicitly as follows.
\begin{equation}
\exp_{O} (t v) = \frac{1}{a} (\sin (t a) \cos \lambda , \sin (ta) \sin \lambda , \cos (ta)) ,
\end{equation}
where $v = (\cos \lambda , \sin \lambda )$ is a unit vector in $T_{O} S^2 (a^2)$, and $t \in [0 , \frac{\pi}{a})$.\\
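The explicit formula above lends itself to a quick numerical sanity check. The minimal Python sketch below (the sample values of $a$ and $\lambda$ are our own illustrative choices, not from the text) verifies that the claimed curve stays on the sphere of radius $\frac{1}{a}$, starts at $O = (0,0,\frac{1}{a})$, and has initial velocity $v$:

```python
# Numerical sanity check (illustrative only, not part of the paper's argument):
# the claimed geodesic gamma_v(t) = (1/a)(sin(ta)cos(lam), sin(ta)sin(lam), cos(ta))
# lies on the sphere of radius 1/a, starts at O, and has initial velocity v.
import math

a, lam = 2.0, 0.7  # sample curvature parameter and direction (assumptions)

def gamma(t):
    return (math.sin(t * a) * math.cos(lam) / a,
            math.sin(t * a) * math.sin(lam) / a,
            math.cos(t * a) / a)

# gamma_v(t) lies on x1^2 + x2^2 + x3^2 = 1/a^2 for all t
for t in (0.0, 0.3, 1.0):
    x = gamma(t)
    assert abs(sum(c * c for c in x) - 1.0 / a**2) < 1e-12

# gamma_v(0) = O = (0, 0, 1/a)
assert max(abs(c - o) for c, o in zip(gamma(0.0), (0.0, 0.0, 1.0 / a))) < 1e-12

# finite-difference initial velocity agrees with v = (cos(lam), sin(lam), 0)
h = 1e-6
vel = tuple((g1 - g0) / h for g1, g0 in zip(gamma(h), gamma(0.0)))
v = (math.cos(lam), math.sin(lam), 0.0)
assert max(abs(c - w) for c, w in zip(vel, v)) < 1e-4
```

The same check runs for any $a > 0$ and $\lambda \in [0, 2\pi)$, since the formula is exact.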
\noindent
We can now introduce the \emph{normal polar coordinate system} on $S^2 (a^2)$, through the use of the exponential map $\exp_O$, as follows.
\begin{defn}\label{PolarDefnSphere}
\textbf{The normal polar coordinate system on the sphere $S^2(a^2)$ :} The normal polar coordinate system on $S^2 (a^2)$ about the base point $O \in S^2 (a^2)$ is the bijective smooth map $( r,\theta ) : S^2 (a^2)-\{(x_1 , 0 , x_3) : x_{1}^2 + x_{3}^2 = \frac{1}{a^2}, x_1 \geqslant 0 \} \rightarrow
(0 , \frac{\pi}{a})\times (0 , 2\pi )$ defined by
\begin{equation}
(r, \theta ) = (\overline{r} , \overline{\theta}) \circ \exp_{O}^{-1} ,
\end{equation}
where $(\overline{r} , \overline{\theta})$ stands for the \emph{standard polar coordinate system} $(\overline{r} , \overline{\theta }) : \mathbb{R}^{2}-\{(x_1 , 0) : x_{1} \geqslant 0 \} \rightarrow (0, \infty )\times (0 , 2\pi)$ on $\mathbb{R}^{2}$. In other words, the normal polar coordinate system $(r,\theta )$ on $S^2(a^2)$ is \emph{defined} to be the composite of the standard polar coordinate system $(\overline{r} , \overline{\theta})$ on $\mathbb{R}^2$ with
$\exp_{O}^{-1} : S^2 (a^2)-\{(x_1 , 0 , x_3) : x_{1}^2 + x_{3}^2 = \frac{1}{a^2}, x_1 \geqslant 0 \} \rightarrow D_{0}(\frac{\pi}{a}) - \{ (x_{1} , 0) : 0 \leqslant x_1 < \frac{\pi}{a} \}$.
\end{defn}
\noindent
With such a normal polar coordinate system $(r , \theta )$ on $S^2(a^2)$, we now \emph{define} two \emph{natural, everywhere linearly independent vector fields} $\frac{\partial}{\partial r}$, and $\frac{\partial}{\partial \theta}$ on $S^2(a^2)$ as follows.
\begin{defn}\label{coordFRAMEsphere}
\textbf{Natural coordinate frame $\big \{ \frac{\partial}{\partial r} , \frac{\partial}{\partial \theta} \big \}$ on the sphere $S^2(a^2)$ : }For any point $p \in S^2 (a^2)-\{(x_1 , 0 , x_3) : x_{1}^2 + x_{3}^2 = \frac{1}{a^2}, x_1 \geqslant 0 \}$, the vectors $\frac{\partial}{\partial r}|_{p}$,
$\frac{\partial}{\partial \theta}|_{p}$ in the tangent space $T_{p}(S^2(a^2))$ can be regarded as \emph{linear functionals} on the space $C^{\infty}(S^{2}(a^{2}))$ of smooth functions on $S^2 (a^2)$, and hence, $\frac{\partial}{\partial r}|_{p}$, $\frac{\partial}{\partial \theta}|_{p}$ in $T_{p}S^{2}(a^2)$ can be \emph{defined} through the following characterization, where $f \in C^{\infty}(S^{2}(a^2))$.
\begin{equation}\label{naturalvector}
\begin{split}
\frac{\partial}{\partial r}\bigg |_{p} f &= \frac{\partial}{\partial \overline{r}}\bigg |_{(r,\theta )(p)}[f \circ (r,\theta)^{-1}] =
\frac{\partial}{\partial \overline{r}}\bigg |_{(r,\theta )(p)}[f \circ \exp_{O} \circ (\overline{r} , \overline{\theta})^{-1} ] \\
\frac{\partial}{\partial \theta }\bigg |_{p} f & = \frac{\partial}{\partial \overline{\theta}}\bigg |_{(r,\theta )(p)}[f \circ (r,\theta)^{-1}] =
\frac{\partial}{\partial \overline{\theta}}\bigg |_{(r,\theta )(p)}[f \circ \exp_{O} \circ (\overline{r} , \overline{\theta})^{-1} ].
\end{split}
\end{equation}
\end{defn}
\noindent
The two vector fields $\frac{\partial}{\partial r}$, and $\frac{\partial}{\partial \theta}$ on $S^2(a^2)$, which are characterized by relation \eqref{naturalvector}, are everywhere linearly independent on $S^2(a^2)$, and they together constitute \emph{the natural moving frame} induced by the normal polar coordinate system on $S^{2}(a^{2})$. Now, by means of the identification $S^{2}(a^2) = \{(x_1 , x_2 , x_3) \in \mathbb{R}^3 : x_1^2 + x_2^2 + x_3^2 = \frac{1}{a^2} \}$, we can express the vector fields $\frac{\partial}{\partial r}$, and $\frac{\partial}{\partial \theta}$ concretely as follows. \\
\noindent
Since $\exp_{O} : \{v \in T_{O}S^2(a^2) : |v| < \frac{\pi}{a}\} \rightarrow S^2(a^{2}) -\{(0,0,-\frac{1}{a})\}$ is a diffeomorphism, any given $p \in S^{2}(a^2) - \{(0,0, -\frac{1}{a})\}$ which is away from the base point $O$ can be represented as $p = \exp_{O}(rv)$, with some uniquely determined unit vector $v = (\cos \lambda , \sin \lambda ) \in T_{O}S^2(a^2)$, and $r \in (0, \frac{\pi}{a})$. It turns out that $\frac{\partial}{\partial r}|_p$, and $\frac{\partial}{\partial \theta}|_p$ can be expressed concretely as follows, with $\gamma_{v} : [0, \infty ) \rightarrow S^2(a^2)$ the unique geodesic satisfying $\gamma_{v}(0) = O$ and $\frac{d \gamma_{v}}{dt}|_{t=0} =v$, and
$v^{\perp} = (-\sin \lambda , \cos \lambda )$.
\begin{equation}\label{vectorexplicit}
\begin{split}
\frac{\partial}{\partial r}\bigg |_p &= \frac{d \gamma_{v}}{dt} \bigg |_{t=r} = \cos (ar) (v,0) - \sin (ar) (0,0,1) \in T_{p}S^2(a^2) \\
\frac{\partial}{\partial \theta }\bigg |_p &= \frac{1}{a}\sin (ar) (v^{\perp} , 0) = \frac{1}{a} \sin (ar) (-\sin \lambda , \cos \lambda ,0 ) \in T_{p}S^2(a^2).
\end{split}
\end{equation}
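The expressions in \eqref{vectorexplicit} can be checked numerically by differentiating the explicit exponential map with respect to $r$ and $\theta$ (which coincides with $\lambda$ here). The sketch below, with sample values chosen by us for illustration, also confirms the orthonormality of $e_1$ and $e_2$ asserted in the surrounding text:

```python
# Illustrative numerical check of the coordinate vector fields d/dr and
# d/dtheta at p = exp_O(r v), via central finite differences of the explicit
# exponential map; sample values of a, lam, r are our own assumptions.
import math

a, lam, r = 2.0, 0.7, 0.5  # sample point (assumed values)

def P(r, lam):  # exp_O(r v), with v = (cos lam, sin lam)
    return (math.sin(r * a) * math.cos(lam) / a,
            math.sin(r * a) * math.sin(lam) / a,
            math.cos(r * a) / a)

def diff(f, x, h=1e-6):  # central finite difference of a curve in R^3
    return tuple((p - q) / (2 * h) for p, q in zip(f(x + h), f(x - h)))

d_dr = diff(lambda t: P(t, lam), r)
d_dth = diff(lambda t: P(r, t), lam)

claim_dr = (math.cos(a * r) * math.cos(lam),
            math.cos(a * r) * math.sin(lam),
            -math.sin(a * r))                     # cos(ar)(v,0) - sin(ar)(0,0,1)
claim_dth = (-math.sin(a * r) * math.sin(lam) / a,
             math.sin(a * r) * math.cos(lam) / a,
             0.0)                                 # (1/a) sin(ar) (v-perp, 0)
assert max(abs(x - y) for x, y in zip(d_dr, claim_dr)) < 1e-6
assert max(abs(x - y) for x, y in zip(d_dth, claim_dth)) < 1e-6

# e1 = d/dr and e2 = (a / sin(ar)) d/dtheta are orthonormal
dot = lambda u, w: sum(x * y for x, y in zip(u, w))
e1 = d_dr
e2 = tuple(a / math.sin(a * r) * x for x in d_dth)
assert abs(dot(e1, e1) - 1.0) < 1e-5
assert abs(dot(e2, e2) - 1.0) < 1e-5
assert abs(dot(e1, e2)) < 1e-8
```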
From \eqref{vectorexplicit}, it follows at once that the vector fields $e_{1} = \frac{\partial}{\partial r}$, and $e_{2} = \frac{a}{\sin (ar)} \frac{\partial}{\partial \theta }$ together constitute a moving frame on $S^{2}(a^2)$ which is \emph{everywhere orthonormal} on $S^{2}(a^2)$. That is, we know that $|e_{1}(p)|=1$, $|e_{2}(p)|=1$, and $g(e_{1}(p), e_{2}(p)) = 0$ holds for all $p \in S^2(a^2)$. Here, of course the symbol $g(\cdot , \cdot )$ denotes the Riemannian metric on $S^2(a^2)$, with $|v| = [g(v,v)]^{\frac{1}{2}}$. We also observe that the vector field $e_{2}$ satisfies the following relation, where $v \in T_{O}S^2(a^2) $ is a unit vector, and $0 < r < \frac{\pi}{a}$.
\begin{equation}
e_{2} |_{\exp_{O}(rv)} = (v^{\perp} , 0),
\end{equation}
which indicates that the vector field $e_{2}$, once being restricted to the geodesic ray $\gamma_{v}$ starting from $O$, is \emph{parallel along} $\gamma_{v}$. So, it follows that $\overline{\nabla}_{\frac{d \gamma }{dt}} (e_{2}) = 0$, with
$\overline{\nabla } : C^{\infty }(TS^{2}(a^2)) \rightarrow C^{\infty} (T^{*}S^2(a^2)\otimes TS^2(a^2)) $ being the Levi-Civita connection induced by the Riemannian metric $g (\cdot , \cdot )$ of $S^{2}(a^2)$ (see \textbf{Definition \ref{LeviCivitadefinition}} in \textbf{Section \ref{Spheremainsection}} for a precise definition of the Levi-Civita connection as induced by the intrinsic geometry of a Riemannian manifold). Upon this setting, we can now discuss the Hodge-star operator and the Hodge Laplacian on $S^2(a^2)$ in the following subsection.\\
\noindent{\bf The Hodge-star operator and Hodge Laplacian on $S^2(a^2)$ in terms of the normal polar coordinate on $S^2(a^2)$.}
\noindent
The Hodge Laplacian $(-\triangle ) = d d^* + d^* d$ on a Riemannian manifold $M$ is an operator sending the space $C^{\infty}(T^{*}M)$ of all smooth $1$-forms into $C^{\infty}(T^*M)$ itself. So, the Hodge Laplacian $(-\triangle )$ on $S^2(a^2)$ acts on differential $1$-forms \emph{instead of vector fields} on $S^2(a^2)$. This leads (\emph{actually forces}) us to consider the natural identification of the space $C^{\infty}(TS^2(a^2))$ of smooth vector fields on $S^2(a^2)$ with the space
$C^{\infty}(T^*S^2(a^2))$ of smooth $1$-forms, which identifies a smooth vector field $v \in C^{\infty}(TS^2(a^2))$ with the associated $1$-form $v^* = g(v, \cdot ) \in C^{\infty}(T^*S^2(a^2))$. Here, $g(\cdot , \cdot )$ is the Riemannian metric of the sphere $S^2(a^2)$.\\
\noindent
Based upon the normal polar coordinate system $(r, \theta )$ on $S^2(a^2)$ as given in \textbf{Definition \ref{PolarDefnSphere}}, we have two smooth vector fields $e_{1} = \frac{\partial}{\partial r}$, and $e_{2} = \frac{a}{\sin (ar)} \frac{\partial}{\partial \theta }$, which together constitute a positively oriented orthonormal moving frame on\\
$S^2(a^2)-\{(0,0, \frac{1}{a}) , (0,0,\frac{-1}{a})\}$ for the tangent bundle $TS^2(a^2)$ on $S^2(a^2)$. Then, the associated $1$-forms $e_1^* = g (e_1, \cdot )$, and $e_2^* = g(e_2 , \cdot )$ together constitute a positively oriented orthonormal moving \emph{co-frame} for the cotangent bundle $T^{*}S^2(a^2)$ on $S^2(a^2)$. Indeed, the $1$-forms $e_{1}^*$, and $e_2^*$ can be expressed by
\begin{equation}\label{dualcoframe}
\begin{split}
e_{1}^{*} &= dr ,\\
e_{2}^{*} & = \frac{\sin ar}{a} d\theta .
\end{split}
\end{equation}
Here, we first define the volume form on a general oriented $2$-dimensional Riemannian manifold $M$, which is an everywhere non-vanishing, globally defined $2$-form on $M$ induced by the intrinsic Riemannian geometry of $M$.
\begin{defn}\label{VolumeformDef}
\textbf{Volume form on a $2$-dimensional oriented Riemannian manifold :}
Let $M$ be an oriented $2$-dimensional Riemannian manifold equipped with a Riemannian metric $g(\cdot , \cdot )$. Consider two locally defined vector fields $e_{1}$, $e_2$ on some open region $U$ of $M$ which together constitute a positively oriented orthonormal moving frame $\{e_1 , e_2\}$ of the tangent bundle $TM$ of $M$ over $U$. Then, the (locally defined) associated $1$-forms $e_{1}^* = g(e_1 ,\cdot )$, $e_2^* = g(e_2 , \cdot )$ constitute the so-called orthonormal co-frame for the cotangent bundle $T^*M$ over $U$. The volume form $Vol_{M}$ can then be locally defined through the relation
\begin{equation}
Vol_M = e_1^* \wedge e_2^*.
\end{equation}
It is an elementary fact in differential geometry that such a local definition of $Vol_{M}$ turns out to be \textbf{independent} of the choice of the locally defined positively oriented orthonormal frame $\{e_1 , e_2\}$. Hence, our local construction gives a globally defined volume form $Vol_M$ on the whole $2$-dimensional manifold $M$ (see Chapter 2 of \cite{Jost} for a more general discussion).
\end{defn}
\noindent
Hence, the \emph{globally defined} volume form $Vol_{S^2(a^2)}$ on $S^{2}(a^2)$ is the $2$-form which can \emph{locally be expressed} by $Vol_{S^{2}(a^2)} = e_{1}^{*}\wedge e_{2}^* = \frac{\sin ar}{a} dr\wedge d\theta \in C^{\infty}(\wedge^2 T^*S^2(a^2))$. Here, the symbol $C^{\infty}(\wedge^2 T^*S^2(a^2))$ stands for the space of all smooth $2$-forms on $S^2(a^2)$.\\
\noindent
Before we talk about the Hodge-star operator on $S^{2}(a^2)$, we recall the general definition of the Hodge-Star operator on a given $N$-dimensional manifold (See Chapter $2$ of Jost \cite{Jost}).
\begin{defn}\label{HodgeDefinition}
\textbf{The Hodge-Star operator :}
In the case of a general oriented Riemannian manifold $M$ of dimension $N$, for each integer $0 \leqslant k \leqslant N$, the Hodge-star operator $* : C^{\infty }(\wedge^k T^*M ) \rightarrow C^{\infty }(\wedge^{N-k} T^*M ) $, sending the space of $k$-forms on $M$ to the space of $(N-k)$-forms on $M$, is characterized by the following relation, which holds for any $k$-forms $\alpha$, $\beta$. Here, $\overline{g} (\cdot , \cdot )$ stands for the metric on the vector bundle $\wedge^{k}T^*M$ induced by the Riemannian metric $g(\cdot , \cdot )$ on $M$, and $Vol_{M}$ is the volume form on $M$ as defined in \textbf{Definition \ref{VolumeformDef}}.
\begin{equation}
\alpha \wedge *\beta = \overline{g}(\alpha , \beta ) Vol_{M}.
\end{equation}
\end{defn}
\noindent
Since the dimension of $S^2(a^2)$ is just $2$, in the case of $M = S^{2}(a^2)$ the Hodge-star operator $*$ can easily be described as follows.
First, $* : C^{\infty}(T^{*}S^{2}(a^2)) \rightarrow C^{\infty}(T^{*}S^{2}(a^2)) $ is characterized by the following relation
\begin{equation}\label{Hodgestar}
\begin{split}
*e_{1}^* &= e_{2}^* \\
*e_{2}^* & = -e_{1}^{*}.
\end{split}
\end{equation}
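One can confirm that the rule \eqref{Hodgestar} is consistent with the defining relation $\alpha \wedge *\beta = \overline{g}(\alpha , \beta ) Vol_{M}$ of Definition \ref{HodgeDefinition}. The following is a minimal sketch in coefficients over the orthonormal co-frame $\{e_1^*, e_2^*\}$ (a toy model of our own, not the paper's notation):

```python
# Consistency check (sketch): with *e1* = e2* and *e2* = -e1*, the defining
# relation alpha ^ (*beta) = g(alpha, beta) Vol holds for 1-forms written in
# the orthonormal co-frame. A 1-form is a pair of (e1*, e2*) coefficients; the
# wedge of p e1* + q e2* with x e1* + y e2* has Vol-coefficient p*y - q*x.
def star(b):           # (b1, b2) -> coefficients of *(b1 e1* + b2 e2*)
    return (-b[1], b[0])

def wedge_vol(u, w):   # coefficient of e1* ^ e2* = Vol in u ^ w
    return u[0] * w[1] - u[1] * w[0]

for alpha in [(1.0, 0.0), (0.0, 1.0), (0.3, -0.7)]:
    for beta in [(1.0, 0.0), (2.0, 0.5)]:
        g = alpha[0] * beta[0] + alpha[1] * beta[1]   # g(alpha, beta)
        assert abs(wedge_vol(alpha, star(beta)) - g) < 1e-12
```

The check is exact, since the orthonormal-frame coefficients reduce the relation to plane linear algebra.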
Since $* : C^{\infty }(\wedge^k T^*M ) \rightarrow C^{\infty }(\wedge^{N-k} T^*M ) $ is \emph{tensorial}, in that the relation
$*(f \alpha ) = f *(\alpha)$ holds for any smooth function $f \in C^{\infty}(M)$ and $k$-form $\alpha$ on a Riemannian manifold $M$, it follows that in the case of
$M = S^{2}(a^2)$, we have
\begin{equation}
\begin{split}
* dr &= e_2^* = \frac{\sin ar}{a} d\theta ,\\
* d\theta &= *[\frac{a}{\sin ar} e_{2}^*] = -\frac{a}{\sin ar} e_{1}^* = -\frac{a}{\sin ar} dr.
\end{split}
\end{equation}
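In the same spirit, the action of $*$ on $dr$ and $d\theta$ can be modeled by a small helper that converts between the coordinate co-frame $\{dr, d\theta\}$ and the orthonormal co-frame $\{e_1^*, e_2^*\}$. A sketch, with sample values of $a$ and $r$ assumed by us:

```python
# A small model (illustrative) of the Hodge star on 1-forms over S^2(a^2):
# store a 1-form as coefficients (c_dr, c_dth) in the co-frame {dr, dtheta},
# rewrite it in e1* = dr, e2* = (sin(ar)/a) dtheta, apply *e1* = e2*,
# *e2* = -e1*, and convert back.
import math

def hodge_1form(c_dr, c_dth, r, a):
    """Return (dr, dtheta) coefficients of *(c_dr dr + c_dth dtheta)."""
    s = math.sin(a * r) / a            # e2* = s * dtheta
    return (-c_dth / s, c_dr * s)

a, r = 2.0, 0.5   # sample values (assumptions)
s = math.sin(a * r) / a
assert hodge_1form(1.0, 0.0, r, a) == (0.0, s)          # *dr = (sin(ar)/a) dtheta
assert abs(hodge_1form(0.0, 1.0, r, a)[0] + 1.0 / s) < 1e-12   # *dtheta = -(a/sin(ar)) dr
assert hodge_1form(0.0, 1.0, r, a)[1] == 0.0

# applying * twice on a 1-form gives -identity (dimension 2)
c = hodge_1form(*hodge_1form(0.3, -0.7, r, a), r, a)
assert abs(c[0] + 0.3) < 1e-12 and abs(c[1] - 0.7) < 1e-12
```

The double-star assertion reflects the general fact $** = (-1)^{k(N-k)}$ on $k$-forms, which is $-1$ for $1$-forms in dimension $2$.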
In the structure of the Hodge Laplacian sending $C^{\infty}(T^*S^2(a^2))$ into itself, one also encounters the Hodge-star operator
$* : C^{\infty}(\wedge^{2} T^* S^2(a^2)) \rightarrow C^{\infty}(S^2(a^2))$ sending smooth $2$-forms to smooth functions on $S^2(a^2)$, which can be characterized via the following relation.
\begin{equation}
*Vol_{S^2(a^2)} = * (e_1^* \wedge e_2^*) = 1,
\end{equation}
where we note that the $2$-form $e_1^* \wedge e_2^*$ is the volume form $Vol_{S^2(a^2)}$ on $S^2(a^2)$. By using the tensorial property of $*$, it follows that $* (dr \wedge d\theta ) = \frac{a}{\sin ar}$.
Now, we can discuss the Hodge Laplacian $(-\triangle) = d d^* + d^* d$, where the exterior differential operators $d : C^{\infty}(\wedge^kT^*S^2(a^2)) \rightarrow C^{\infty}(\wedge^{k+1}T^*S^2(a^2))$ and their associated adjoint operators $d^* : C^{\infty}(\wedge^{k+1}T^*S^2(a^2)) \rightarrow C^{\infty}(\wedge^kT^*S^2(a^2))$ are involved for $k =0,1$. The operator $d : C^{\infty}(S^2(a^2)) \rightarrow C^{\infty}(T^*S^2(a^2))$ sends a smooth function $f$ to the $1$-form $df$, which can locally be expressed, in terms of the natural co-frame $\{dr , d \theta \}$, as follows.
\begin{equation}
df = \frac{\partial f}{\partial r} dr + \frac{\partial f}{\partial \theta } d \theta ,
\end{equation}
where the smooth functions $\frac{\partial f}{\partial r}$ and $\frac{\partial f}{\partial \theta } $ are defined via \eqref{naturalvector}. It is an elementary fact that $df$ is \emph{independent} of the choice of the coordinate frame being used in its characterization. Here, we express $df$ in terms of the normal polar coordinate system $(r, \theta )$, since this will give us a quick and easy way of computing $(-\triangle) u^*$, where $u^* = g (u, \cdot )$ is the associated $1$-form of the vector field $u$ representing a parallel laminar flow near some arc-shaped boundary of some obstacle in the sphere $S^2(a^2)$.
\begin{remark}\label{remarkgradient}
On a general $N$-dimensional Riemannian manifold $M$, one should \textbf{interpret} the $1$-form $df$ as the gradient field $\nabla f$, in the sense that the gradient of $f$ is the unique vector field $\nabla f$ on $M$ characterized by the following relation, where $g(\cdot , \cdot )$ is the Riemannian metric on $M$.
\begin{equation}
df = g(\nabla f , \cdot ).
\end{equation}
\end{remark}
\noindent
Next, one may characterize the operator $d : C^{\infty}(T^* S^2(a^2) ) \rightarrow C^{\infty}(\wedge^2 T^*S^2(a^2)) $ sending smooth $1$-forms to $2$-forms in terms of the normal polar coordinate $(r, \theta )$ as follows. Every smooth $1$-form $\alpha$ on $S^2(a^2)$ can locally be expressed as
\begin{equation}
\alpha = \alpha_{r} dr + \alpha_{\theta} d\theta ,
\end{equation}
where $\alpha_{r}$ and $\alpha_{\theta}$ are some locally defined smooth functions on $S^2(a^2)$. Then, the $2$-form $d\alpha$ is locally expressed by
\begin{equation}
\begin{split}
d \alpha &= d \alpha_{r} \wedge dr + d \alpha_{\theta} \wedge d\theta \\
& = \frac{\partial \alpha_{r}}{\partial \theta } d\theta \wedge dr + \frac{\partial \alpha_{\theta}}{\partial r} dr \wedge d\theta \\
& = \big \{ \frac{\partial \alpha_{\theta}}{\partial r} - \frac{\partial \alpha_{r}}{\partial \theta } \big \} dr \wedge d\theta \\
& = \frac{a}{\sin (ar)} \big \{ \frac{\partial \alpha_{\theta}}{\partial r} - \frac{\partial \alpha_{r}}{\partial \theta } \big \} Vol_{S^2(a^2)} .
\end{split}
\end{equation}
where in the above computation, we have implicitly used the facts that $dr \wedge dr = 0$, $d\theta \wedge d\theta = 0$, and $d\theta \wedge dr = -dr \wedge d\theta$, each of which follows directly from the definition $\alpha \wedge \beta = \frac{1}{2} \{\alpha \otimes \beta - \beta \otimes \alpha \}$, for two given smooth $1$-forms $\alpha$, and $\beta$ on a smooth manifold $M$.\\
\begin{remark}\label{curlremark}
We should think of the operator $d$ sending the space of $1$-forms into the space of $2$-forms on an oriented $2$-dimensional Riemannian manifold $M$ as the \textbf{curl-operator}, which sends each smooth vector field $u$ to its vorticity function $\omega$ on $M$. More precisely, consider $u$ to be a smooth vector field on an oriented $2$-dimensional Riemannian manifold $M$ with Riemannian metric $g(\cdot, \cdot )$. Then, by taking the operator $d$ on the associated $1$-form $u^* = g(u,\cdot )$ of the vector field $u$, we obtain $du^* = \omega Vol_{M}$, with some uniquely determined smooth function $\omega$ on $M$, which is exactly the vorticity of the vector field $u$. So, we should really think of the operator $d$ sending $1$-forms into $2$-forms as the natural generalization of the \textbf{curl-operator}.
\end{remark}
\noindent
We can now easily characterize the co-adjoint operators $d^* : C^{\infty}(\wedge^p T^*S^2(a^2)) \rightarrow C^{\infty}(\wedge^{p-1} T^*S^2(a^2))$, with $p = 1, 2$, by means of the following standard definition in differential geometry (see for instance, Chapter 2 of \cite{Jost}).
\begin{defn}\label{dstarDef}
\textbf{The co-adjoint operators $d^*$ :}
Let $M$ be a $2$-dimensional oriented manifold equipped with a Riemannian metric $g(\cdot , \cdot )$. For each $p= 1, 2$, the coadjoint operator $d^* : C^{\infty}(\wedge^p T^*M) \rightarrow C^{\infty}(\wedge^{p-1} T^*M)$ sending the space of $p$-forms on $M$ into the space of $(p-1)$-forms on $M$ is defined through the following relation.
\begin{equation}
d^{*} = (-1)^{2(p + 1) + 1}* d * = -*d* ,
\end{equation}
where the symbol $*$ stands for the Hodge-Star operators as defined in \textbf{Definition \ref{HodgeDefinition}}.
\end{defn}
\noindent
Two remarks about \textbf{Definition \ref{dstarDef}} are in order here.
\begin{remark}
We remark that, in the above formula for $d^*$, we have included the extra factor $2(p +1)$ in the exponent of $(-1)$, simply because for a general $N$-dimensional Riemannian manifold $M$, the formula for $d^*$ acting on smooth $p$-forms is exactly $d^* = (-1)^{N(p+1)+1}*d*$.
\end{remark}
\begin{remark}\label{divergenceremark}
On an oriented $2$-dimensional Riemannian manifold $M$ equipped with Riemannian metric $g(\cdot, \cdot )$, the co-adjoint operator $d^* : C^{\infty}(T^*M) \rightarrow C^{\infty}(M)$, which sends $1$-forms into smooth functions, should be interpreted as the \textbf{divergence operator $-div$} acting on smooth vector fields on $M$ in the following sense. Let $u$ be a smooth vector field on $M$, with associated $1$-form $u^* = g(u ,\cdot )$. Then, it is a standard fact in Riemannian geometry that the following relation holds for any smooth test function $f \in C^{\infty}_c(M)$ (see Chapter $2$ of \cite{Jost}, for instance).
\begin{equation}\label{integrationbypartI}
\int_{M} g(u , \nabla f ) Vol_{M} = \int_{M} f d^*u^* Vol_{M} .
\end{equation}
However, we know that $-div (u)$ also satisfies the following integration by parts formula, for any test function $f \in C^{\infty}_c(M)$,
\begin{equation}\label{integrationbyparttwo}
\int_{M} g(u , \nabla f ) Vol_{M} = - \int_{M} f div(u) Vol_{M} .
\end{equation}
So, by comparing \eqref{integrationbypartI} with \eqref{integrationbyparttwo}, we are forced to conclude that $d^*u^*= -div (u)$.
\end{remark}
\section{Proof of Theorem \ref{Noparallel}, and the proof of Theorem \ref{existenceEasy} }\label{Spheremainsection}
\noindent
To begin the proof of Theorem \ref{Noparallel}, let us consider the space form $S^2(a^2)$ of constant sectional curvature $a^2 > 0$. Let $O \in S^2(a^2)$ be a selected reference point on $S^2(a^2)$, and let $(r,\theta )$ be the normal polar coordinate system on $S^2(a^2)$ about the base point $O$, which we introduced in the previous section.
Here, let $K$ be some compact region in $S^2(a^2)$, which is contained in the closed geodesic ball
$\overline{B_{O}(\delta )} = \{p\in S^2(a^2) : d(p,O) \leq \delta \}$ for some positive radius $0 < \delta < \frac{\pi}{a}$.
As in the hypothesis of Theorem \ref{Noparallel}, we assume that $\partial K$ contains a circular-arc portion in that, for some positive angle
$\tau \in (0, 2\pi )$ the circular arc $C_{\delta , \tau }= \{p \in S^2(a^2) : r(p)= \delta , 0< \theta (p)< \tau \}$ is contained in $\partial K$.\\
\noindent
Now, let $u$ be some smooth vector field defined over the simply-connected open region $R_{\delta , \tau , \epsilon_0}$ as specified in \eqref{exteriorregion} of \textbf{Section \ref{IntroductionSEC}}, where $\epsilon_0 > 0$ is some positive number. Again, remember that the exterior region $R_{\delta , \tau , \epsilon_0}$ shares the same circular-arc boundary portion $C_{\delta , \tau}$ with $K$. Suppose that $u$ is a \emph{parallel laminar flow} on $R_{\delta , \tau , \epsilon_0}$ in the sense that $u$ can be expressed as
\begin{equation}\label{circularStreamlines}
u = - h( r -\delta ) e_{2} = - h( r -\delta ) \frac{a}{\sin ar} \frac{\partial}{\partial \theta }.
\end{equation}
In the expression \eqref{circularStreamlines}, $r$, which is the first component of the normal polar coordinate $(r, \theta )$, measures the distance of a point $p \in S^{2}(a^2)$ from the base point $O$. That is, $r(p) = d(p,O)$, where $d(p,O)$ is the Riemannian distance of $p$ from $O$ in the Riemannian manifold $S^2(a^2)$. Also, in \eqref{circularStreamlines}, $e_{2} = \frac{a}{\sin (ar)} \frac{\partial}{\partial \theta }$ is the second vector field in the orthonormal moving frame
$e_{1}= \frac{\partial}{\partial r}$, $e_{2} = \frac{a}{\sin (ar)} \frac{\partial}{\partial \theta }$. Recall that from \eqref{dualcoframe}, the natural dual coframe of the moving frame $\{e_{1}, e_{2}\}$ is given by $e_{1}^* = dr$, and $e_{2}^* = \frac{\sin (ar)}{a} d\theta$. Hence the volume form on $S^2(a^2)$ is $Vol_{S^2(a^2)} = e_{1}^*\wedge e_{2}^* = \frac{\sin ar}{a} dr \wedge d\theta$.\\
\noindent
Against such a setting, the first step which we take is to compute $(-\triangle )u^*$, where $u^* = g(u, \cdot )$ is the associated $1$-form of the parallel laminar flow $u$.\\
\noindent{\bf Step 1 : The divergence free property $d^* u^* = 0$ and the computation of $(-\triangle )u^*$, for $u$ as given in \eqref{circularStreamlines}}
\noindent
Here, it follows from \eqref{circularStreamlines} that the associated $1$-form $u^* = g (u, \cdot )$ is locally given by
\begin{equation}\label{associatedform}
u^* = -h(r-\delta ) e_{2}^* = - h(r-\delta )\frac{\sin (ar)}{a} d \theta .
\end{equation}
First, we point out that the \emph{divergence free condition} $d^*u^* = 0$ is just a direct consequence which follows from the following computation.
\begin{equation}\label{divfreesphere}
\begin{split}
d^* u^* &= (-1)^{2(1 +1) +1} *d* \{ -h(r-\delta ) \frac{\sin (ar) }{a} d\theta \} \\
& = *d*[h(r-\delta ) e_{2}^{*}] \\
& = -* d[h(r-\delta ) dr ]\\
& = -* \{ d[h(r-\delta )] \wedge dr \}\\
& = -* \{ [\frac{\partial h(r - \delta )}{\partial r} dr + \frac{\partial h(r - \delta )}{\partial \theta } d\theta] \wedge dr \} \\
& = -* \{ \frac{\partial h(r - \delta )}{\partial r} dr \wedge dr \}\\
&=0.
\end{split}
\end{equation}
In the above calculation, the third equality follows from \eqref{Hodgestar}, the sixth equality follows from the fact that $\frac{\partial h(r - \delta )}{\partial \theta } = 0$, and the last equality is due to the fact that $dr \wedge dr = 0$. Here, $d^* u^* = 0$ means that the vector field $u$ is divergence free, since one thinks of $d^*$ as $-\dv$ when the language of smooth $1$-forms is \emph{translated} back to the language of smooth vector fields via the correspondence $u^* = g(u , \cdot )$, just as one thinks of the $1$-form $df$ as the gradient field $\nabla f$ in the language of vector fields.\\
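As an illustration, the divergence-free property $d^*u^* = 0$ can also be confirmed numerically, by implementing $d^* = -*d*$ on $1$-form coefficients with finite differences. The profile $h$ and the sample point below are our own choices:

```python
# Illustrative finite-difference implementation of d* = -*d* on 1-forms over
# S^2(a^2), in (dr, dtheta) coefficients; h and the sample point are assumed.
import math

a, delta = 2.0, 0.4
s = lambda r: math.sin(a * r) / a        # e2* = s(r) dtheta
h = lambda x: math.sin(x) + 2.0          # arbitrary smooth profile (assumption)

# u* = -h(r - delta) s(r) dtheta, i.e. coefficients (alpha_r, alpha_theta):
alpha_r = lambda r, th: 0.0
alpha_th = lambda r, th: -h(r - delta) * s(r)

def dstar(alpha_r, alpha_th, r, th, eps=1e-5):
    """d* alpha = -*d*alpha at (r, th), via central finite differences."""
    # *alpha = -(alpha_th / s) dr + (alpha_r s) dtheta
    star_dr = lambda r, th: -alpha_th(r, th) / s(r)
    star_dth = lambda r, th: alpha_r(r, th) * s(r)
    # d(*alpha) = [d_r(star_dth) - d_th(star_dr)] dr ^ dtheta
    d_r = (star_dth(r + eps, th) - star_dth(r - eps, th)) / (2 * eps)
    d_th = (star_dr(r, th + eps) - star_dr(r, th - eps)) / (2 * eps)
    # *(dr ^ dtheta) = 1/s, and the overall sign of -*d* is -1
    return -(d_r - d_th) / s(r)

assert abs(dstar(alpha_r, alpha_th, 0.9, 0.6)) < 1e-8    # u is divergence free
# a theta-dependent alpha_theta is generally not divergence free
assert abs(dstar(lambda r, th: 0.0,
                 lambda r, th: math.sin(th), 0.9, 0.6)) > 1e-3
```

The first assertion reflects exactly the cancellation in \eqref{divfreesphere}: both coefficient functions of $*u^*$ are independent of the variable being differentiated.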
\noindent
Now, the divergence free property $d^* u^* = 0$ implies that $(-\triangle )u^* = (dd^* + d^*d)u^* = d^*du^*$, since the term $dd^*u^*$ vanishes.
Then we have
\begin{equation}
\begin{split}
du^{*} &= -\frac{\partial}{\partial r} [h(r-\delta ) \frac{\sin (ar)}{a}] dr \wedge d\theta \\
& = -\frac{\partial}{\partial r} [h(r-\delta ) \frac{\sin (ar)}{a}] \cdot \frac{a}{\sin (ar)} Vol_{S^2(a^2)} .
\end{split}
\end{equation}
As a result, we have the following direct computation, in accordance with the definition of the operators $d$, and $d^*$ as discussed in the previous section.
\begin{equation}
\begin{split}
d^{*}du^* &= (-1)^{2(2 + 1 ) +1}*d*\{ -\frac{\partial}{\partial r} [h(r-\delta ) \frac{\sin (ar)}{a}]\cdot \frac{a}{\sin (ar)} Vol_{S^2(a^2)} \}\\
& = * d \{ \frac{\partial}{\partial r} [h(r-\delta ) \frac{\sin (ar)}{a}]\cdot \frac{a}{\sin (ar)} \}\\
&= * \frac{\partial}{\partial r} \{ \frac{\partial}{\partial r} [h(r-\delta ) \frac{\sin (ar)}{a}]\cdot \frac{a}{\sin (ar)} \} dr \\
& = \frac{\partial}{\partial r} \{ \frac{\partial}{\partial r} [h(r-\delta ) \frac{\sin (ar)}{a}]\cdot \frac{a}{\sin (ar)} \} *e_{1}^* \\
&= \frac{\partial}{\partial r} \{ \frac{\partial}{\partial r} [h(r-\delta ) \frac{\sin (ar)}{a}]\cdot \frac{a}{\sin (ar)} \} e_{2}^* \\
& = \frac{\partial}{\partial r} \{ \frac{\partial h(r-\delta )}{\partial r} + a h(r-\delta ) \frac{\cos (ar)}{\sin (ar)} \} e_{2}^{*}.
\end{split}
\end{equation}
That is, we have the following local expression for $(-\triangle)u^{*}$
\begin{equation}\label{triangleu}
\begin{split}
(-\triangle )u^* & = \frac{\partial}{\partial r} \{ \frac{\partial h(r-\delta )}{\partial r} + a h(r-\delta ) \frac{\cos (ar)}{\sin (ar)} \} e_{2}^{*}\\
& = \frac{\partial}{\partial r} \{ \frac{\partial h(r-\delta )}{\partial r} + a h(r-\delta ) \frac{\cos (ar)}{\sin (ar)} \} \cdot \frac{\sin (ar)}{a} d\theta \\
& = \{ h''(r-\delta ) \frac{\sin (ar)}{a} + h'(r-\delta ) \cos (ar) - \frac{a}{\sin (ar)} h(r-\delta ) \} d\theta .
\end{split}
\end{equation}
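The expansion in the last line of \eqref{triangleu} can be double-checked numerically for a sample profile $h$, here $h = \exp$, chosen because its derivatives are known exactly; all numerical values are assumptions for illustration:

```python
# Illustrative check that d/dr { h'(r-delta) + a h(r-delta) cot(ar) } * sin(ar)/a
# equals h'' sin(ar)/a + h' cos(ar) - (a/sin(ar)) h, for a sample profile.
import math

a, delta, r = 2.0, 0.4, 0.9      # sample values (assumptions)
h = lambda x: math.exp(x)        # profile with h = h' = h''
hp = h
hpp = h

def inner(r):  # h'(r - delta) + a h(r - delta) cot(ar)
    return hp(r - delta) + a * h(r - delta) * math.cos(a * r) / math.sin(a * r)

eps = 1e-5
lhs = (inner(r + eps) - inner(r - eps)) / (2 * eps) * math.sin(a * r) / a
rhs = (hpp(r - delta) * math.sin(a * r) / a
       + hp(r - delta) * math.cos(a * r)
       - a / math.sin(a * r) * h(r - delta))
assert abs(lhs - rhs) < 1e-7
```

The only non-trivial step hidden in the expansion is $\frac{d}{dr}\cot(ar) = -\frac{a}{\sin^2(ar)}$, which the finite-difference comparison confirms implicitly.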
Next, we need to compute the convection term $[\overline{\nabla }_{u}u]^{*}$, where $\overline{\nabla} : C^{\infty}(TS^2(a^2)) \rightarrow C^{\infty} (T^*S^2(a^2) \otimes TS^2(a^2) )$ is the Levi-Civita connection (covariant derivative) acting on the space of smooth vector fields on $S^2(a^2)$
(See \textbf{Definition \ref{LeviCivitadefinition}} for the precise meaning of $\overline{\nabla}$).\\
\noindent{\bf Step 2 : The computation of $[\overline{\nabla}_{u}u]^*$, for $u$ as given in \eqref{circularStreamlines}.}
\noindent
For an $N$-dimensional Riemannian manifold $M$ equipped with a Riemannian metric $g (\cdot , \cdot )$ in general, the Levi-Civita connection
$\overline{\nabla} : C^{\infty}(TM) \rightarrow C^{\infty}(T^*M\otimes TM)$ is uniquely characterized as follows.
\begin{defn}\label{LeviCivitadefinition}
The Levi-Civita connection $\overline{\nabla} : C^{\infty}(TM) \rightarrow C^{\infty}(T^*M\otimes TM)$ acting on the space of smooth vector fields on an $N$-dimensional Riemannian manifold $M$ $($equipped with a Riemannian metric $g(\cdot , \cdot )$$)$ is uniquely determined by the following characterizing properties:
\begin{itemize}
\item \textbf{(1)} $($compatibility condition with the Riemannian metric $g(\cdot , \cdot )$ on $M$$)$ The relation $X(g(Y,Z)) = g(\overline{\nabla}_{X}Y , Z) + g(Y, \overline{\nabla}_{X}Z)$, holds for any smooth vector fields $X$, $Y$, $Z$ on $M$.
(Here, the notation $X(g(Y,Z))$ stands for the directional derivative of the function $g(Y,Z)$ along the direction of $X$. See also \textbf{Remark \ref{LastRemark}}.)
\item \textbf{(2)} $($torsion free property of the connection$)$ $\overline{\nabla}$ is \emph{torsion free} in that the relation $\overline{\nabla}_{X}Y - \overline{\nabla}_{Y}X - [X,Y] = 0$, holds for any smooth vector fields $X$, $Y$ on $M$. Here, $[X,Y] = XY-YX$ is the Lie Bracket of $X$ and $Y$.
\item \textbf{(3)} $($Leibniz rule of $\overline{\nabla}$ as a covariant derivative$)$ For any smooth function $f$ on $M$, and any smooth vector fields $X$, $Y$ on $M$, the relation $\overline{\nabla}_X\big ( f Y\big ) = X\big ( f \big )\cdot Y + f \overline{\nabla}_XY $ holds $($here, the symbol $X\big ( f \big )$ stands for the derivative of $f$ along the direction of the vector field $X$$)$.
\item \textbf{(4)} $($Tensorial property of $\overline{\nabla}$$)$ For any smooth function $f$ on $M$, and any smooth vector fields $X$, $Y$ on $M$, we always have $\overline{\nabla}_{fX} Y = f \overline{\nabla}_{X}Y $.
\end{itemize}
\end{defn}
\begin{remark}\label{LastRemark}
In conditions \textbf{(1)}, and \textbf{(3)} as given in \textbf{Definition \ref{LeviCivitadefinition}}, we have employed, for a given smooth vector field $X$ and a smooth function $f$ on $M$, the notation $Xf$, which is the \emph{rate of change of $f$ in the direction of $X$}. More precisely, for each $p\in M$, $Xf|_{p}$ is \emph{defined as} $Xf|_{p} = \frac{d}{dt}(f\circ \gamma )|_{t=0}$, where $\gamma : [0, \epsilon ) \rightarrow M$ is any smooth path with $\gamma (0) = p \in M$, and $\frac{d\gamma }{dt}|_{t=0} = X_{p}$.
\end{remark}
\noindent
Now, in the case of $M = S^2(a^2)$, we again consider the locally defined parallel laminar flow $u = -h(r-\delta )e_{2}$ near some circular arc portion of $\partial \Omega$, as defined in \eqref{circularStreamlines}, and compute $\overline{\nabla}_{u}u$, where $\overline{\nabla} : C^{\infty}(TS^2(a^2)) \rightarrow C^{\infty}(T^*S^2(a^2) \otimes TS^2(a^2))$ is the Levi-Civita connection with respect to the standard metric $g(\cdot , \cdot )$ on $S^2(a^2)$. As a first step in computing $\overline{\nabla}_{u}u$, we set
\begin{equation}
\overline{\nabla}_{u}u = A \frac{\partial}{\partial r} + B e_{2},
\end{equation}
where $A$ and $B$ are some smooth functions on the simply-connected open region $R_{\delta , \tau , \epsilon_0}$ as specified in \eqref{exteriorregion}. Since $\frac{\partial}{\partial r}$ and $e_{2} = \frac{a}{\sin (ar)} \frac{\partial}{\partial \theta }$ constitute an orthonormal moving frame on $S^2(a^2)$, it follows that
\begin{equation}
\begin{split}
A &= g(\overline{\nabla}_{u}u , \frac{\partial}{\partial r} ) , \\
B & = g( \overline{\nabla}_{u}u , e_{2} ).
\end{split}
\end{equation}
Now, since $|u|^2 = g(u, u) = [h(r-\delta )]^2$ is independent of the $\theta$ variable, it follows from condition \textbf{(1)} of
\textbf{Definition \ref{LeviCivitadefinition}} that we have
\begin{equation}
0 = e_{2}(g(u,u)) = 2 g (\overline{\nabla}_{e_{2}} u , u),
\end{equation}
from which it follows that
\begin{equation}
\begin{split}
B & = g( \overline{\nabla}_{u}u , e_{2} ) \\
& = -h(r-\delta ) g ( \overline{\nabla}_{e_2}u , e_{2}) \\
& = g (\overline{\nabla}_{e_2}u , -h(r-\delta ) e_{2}) \\
& = g (\overline{\nabla}_{e_{2}} u , u)\\
& = 0 ,
\end{split}
\end{equation}
where the second equality follows from the tensorial property of $\overline{\nabla}$ as specified in condition \textbf{(4)} in \textbf{Definition \ref{LeviCivitadefinition}}.
In the same way, we have
\begin{equation}\label{computeA}
\begin{split}
A & = g (\overline{\nabla}_{u} u , \frac{\partial}{\partial r}) \\
& = -h(r-\delta ) \frac{a}{\sin (ar)} g ( \overline{\nabla}_{\frac{\partial}{\partial \theta}} u ,\frac{\partial}{\partial r} ) \\
& = h(r-\delta ) \frac{a}{\sin (ar)} g (u , \overline{\nabla}_{\frac{\partial}{\partial \theta}} \frac{\partial}{\partial r} ) \\
& = h(r-\delta ) \frac{a}{\sin (ar)} g (u , \overline{\nabla}_{\frac{\partial}{\partial r}} \frac{\partial}{\partial \theta } ).
\end{split}
\end{equation}
In the above calculation, the fourth equal sign follows from the torsion-free property of $\overline{\nabla}$ $($condition \textbf{(2)} in \textbf{Definition \ref{LeviCivitadefinition}}$)$ which gives
$ \overline{\nabla}_{\frac{\partial}{\partial \theta}} \frac{\partial}{\partial r} = \overline{\nabla}_{\frac{\partial}{\partial r}} \frac{\partial}{\partial \theta }$. Yet the validity of the third equal sign is based on the following observation, which on its own is a direct consequence of condition \textbf{(1)} of \textbf{Definition \ref{LeviCivitadefinition}}.
\begin{equation}
0 = \frac{\partial}{\partial \theta } g(u,\frac{\partial}{\partial r} ) = g ( \overline{\nabla}_{\frac{\partial}{\partial \theta}} u ,\frac{\partial}{\partial r} ) +
g (u , \overline{\nabla}_{\frac{\partial}{\partial \theta}} \frac{\partial}{\partial r} ) .
\end{equation}
Now, to compute the term $\overline{\nabla}_{\frac{\partial}{\partial r}} \frac{\partial}{\partial \theta }$, recall that $\overline{\nabla}_{\frac{\partial}{\partial r}} e_{2} = 0$ holds, since the restriction of $e_{2}$ to each geodesic ray $\gamma$ emanating from $O \in S^2(a^2)$ is parallel along $\gamma$. Hence, it follows from the Leibniz rule $($condition \textbf{(3)} in \textbf{Definition \ref{LeviCivitadefinition}}$)$ of the connection $\overline{\nabla}$ that the following relation holds.
\begin{equation}
\overline{\nabla}_{\frac{\partial}{\partial r}} \frac{\partial}{\partial \theta } = \overline{\nabla}_{\frac{\partial}{\partial r}} (\frac{\sin (ar)}{a} e_{2})
= \cos (ar) e_{2} .
\end{equation}
Hence, it follows from \eqref{computeA} that
\begin{equation}
\begin{split}
A & = h(r-\delta ) \frac{a}{\sin (ar)} g (u, \cos (ar) e_{2}) \\
& = h(r-\delta ) \frac{a}{\sin (ar)} \cos (ar) g (-h(r-\delta ) e_{2} , e_{2})\\
& = -[h(r-\delta )]^2 \frac{a \cos (ar)}{\sin (ar)}.
\end{split}
\end{equation}
So, finally, we have the following expression for $\overline{\nabla}_{u}u$
\begin{equation}
\overline{\nabla}_{u}u = -[h(r-\delta )]^2 \frac{a \cos (ar)}{\sin (ar)} \frac{\partial}{\partial r} ,
\end{equation}
which immediately gives the following expression for $[\overline{\nabla}_{u}u ]^*$,
\begin{equation}\label{connectionform}
[\overline{\nabla}_{u}u]^* = -[h(r-\delta )]^2 \frac{a \cos (ar)}{\sin (ar)} dr .
\end{equation}
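Since much of what follows hinges on this formula for $\overline{\nabla}_{u}u$, it can be double-checked by computer algebra. The following SymPy sketch is our own verification, not part of the paper: it recomputes $\overline{\nabla}_{u}u$ for $u = -h(r-\delta)\frac{a}{\sin(ar)}\frac{\partial}{\partial\theta}$ from the Christoffel symbols of the geodesic polar metric on $S^2(a^2)$, which we assume takes the standard form $g = dr^2 + \frac{\sin^2(ar)}{a^2}\,d\theta^2$ (consistent with $\frac{\partial}{\partial\theta} = \frac{\sin(ar)}{a}e_2$).

```python
import sympy as sp

r, th, a, delta = sp.symbols('r theta a delta', positive=True)
h = sp.Function('h')

# Assumed geodesic polar form of the round metric on S^2(a^2)
g = sp.Matrix([[1, 0], [0, sp.sin(a*r)**2 / a**2]])
ginv = g.inv()
x = [r, th]

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection of g
def Gamma(k, i, j):
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i]) - sp.diff(g[i, j], x[l]))
        for l in range(2))

# Components of u = -h(r - delta) * (a / sin(ar)) * d/dtheta
u = [sp.Integer(0), -h(r - delta) * a / sp.sin(a*r)]

# (nabla_u u)^k = u^i d_i u^k + Gamma^k_{ij} u^i u^j
nabla_uu = [sp.simplify(sum(u[i] * sp.diff(u[k], x[i]) for i in range(2))
            + sum(Gamma(k, i, j) * u[i] * u[j] for i in range(2) for j in range(2)))
            for k in range(2)]

expected_r = -h(r - delta)**2 * a * sp.cos(a*r) / sp.sin(a*r)
print(sp.simplify(nabla_uu[0] - expected_r))  # 0
print(sp.simplify(nabla_uu[1]))               # 0
```

Both components agree with the displayed formula: the $r$-component is $-[h(r-\delta)]^2\frac{a\cos(ar)}{\sin(ar)}$ and the $\theta$-component vanishes.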
\noindent{\bf Step 3 : The proof of Theorem \ref{Noparallel}. }
\noindent
With relations \eqref{triangleu} and \eqref{connectionform} at hand, we can now complete the proof of Theorem \ref{Noparallel}. Assume towards a contradiction that there exists some sufficiently small positive number $\epsilon_{0} < \frac{\pi}{a} - \delta $ and some smooth function $P$ defined on the sector-shaped region $R_{\delta , \tau , \epsilon_{0}}$ as specified in \eqref{exteriorregion} which solves the following stationary Navier-Stokes equation on $R_{\delta , \tau , \epsilon_{0}}$, where $\beta$ is a given positive constant.
\begin{equation}\label{Navier1}
\nu (-\triangle u^* -2Ric (u^*)) + \beta \cos (ar) *u^* + \overline{\nabla}_{u}u^* + dP = 0.
\end{equation}
On the space form $S^2(a^2)$, we have $Ric (u^*) = (2-1)a^2 u^* = a^2 u^*$, so we can rephrase equation \eqref{Navier1} as follows.
\begin{equation}\label{Navier2}
\nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar) *u^* + \overline{\nabla}_{u}u^* + dP = 0.
\end{equation}
Since $d\circ d = 0$, the existence of a smooth function $P$ solving equation \eqref{Navier2} on the simply-connected open region $R_{\delta , \tau , \epsilon_{0}} $ as specified in \eqref{exteriorregion} immediately implies that the following relation holds everywhere in the region $R_{\delta , \tau , \epsilon_{0}}$,
\begin{equation}\label{wrong1}
d \big \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar) *u^* + \overline{\nabla}_{u}u^* \big \} = 0.
\end{equation}
Recall that $u^* = - h(r-\delta ) \frac{\sin ar}{a} d\theta$. Hence, we have, through direct computation, that
\begin{equation}\label{curlu}
d u^* = -\{h'(r-\delta ) \frac{\sin (ar)}{a} + h(r-\delta ) \cos (ar)\} dr\wedge d\theta.
\end{equation}
Moreover, we deduce from \eqref{connectionform} that
\begin{equation}\label{trivialnew}
d (\overline{\nabla}_{u}u^*) = 0.
\end{equation}
In addition, the following relation always holds
\begin{equation}\label{drotat}
d \big \{ \beta \cos (ar) *u^* \big \} = \beta \frac{\partial}{\partial \theta } \big \{ \cos (ar) h(r-\delta )\big \} d\theta \wedge dr = 0 .
\end{equation}
Now, in accordance with \eqref{triangleu}, \eqref{curlu}, \eqref{trivialnew}, and \eqref{drotat}, we have
\begin{equation}\label{generalexp}
\begin{split}
&d \big \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar) *u^* + \overline{\nabla}_{u}u^* \big \}\\
& = \nu \bigg\{h'''(r-\delta ) \frac{\sin (ar)}{a} + 2 h''(r-\delta ) \cos (ar) + a h'(r-\delta ) \left(\sin (ar) -\frac{1}{\sin (ar)}\right) \\
& + a^2 h(r-\delta ) \cos (ar)\left(2+ \frac{1}{\sin^2(ar)}\right)\bigg\} dr\wedge d\theta .
\end{split}
\end{equation}
Now, recall that $h(\lambda)$ is chosen to be $h(\lambda) = \alpha_{1} \lambda -\frac{\alpha_{2}}{2}\lambda^2$ in Theorem \ref{Noparallel}. So, we have $h'(\lambda ) = \alpha_{1} - \alpha_{2} \lambda$, $h''(\lambda ) = - \alpha_{2}$, and $h'''(\lambda ) = 0$, for every $\lambda \geqslant 0$. In this case, \eqref{generalexp} reduces to
\begin{equation}
\begin{split}
&d \big \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar) *u^* + \overline{\nabla}_{u}u^* \big \}\\
& = \nu \bigg\{ -2 \alpha_{2} \cos (ar) + a (\alpha_{1} -\alpha_{2}(r-\delta )) \left(\sin (ar) -\frac{1}{\sin (ar)}\right) \\
& + a^2 (r-\delta )\left[\alpha_{1} -\frac{\alpha_{2}(r-\delta )}{2}\right] \cos (ar)\left(2+ \frac{1}{\sin^2(ar)}\right)\bigg\} dr\wedge d\theta .
\end{split}
\end{equation}
For convenience, we will use the following abbreviation
\begin{equation}\label{expressionF}
\begin{split}
F_{\alpha_{1}, \alpha_{2}, \delta } (r)
& = \bigg\{ -2 \alpha_{2} \cos (ar) \\
& + a (\alpha_{1} -\alpha_{2}(r-\delta )) \left(\sin (ar) -\frac{1}{\sin (ar)}\right) \\
& + a^2 (r-\delta )\left[\alpha_{1} -\frac{\alpha_{2}(r-\delta )}{2}\right] \cos (ar)\left(2+ \frac{1}{\sin^2(ar)}\right)\bigg\} ,
\end{split}
\end{equation}
so that $F_{\alpha_{1}, \alpha_{2}, \delta }$ is a smooth function defined on $(0,\frac{\pi}{a})$, and we have
\begin{equation}\label{cheapexpression}
d \big \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar) *u^* + \overline{\nabla}_{u}u^* \big \} = \nu F_{\alpha_{1}, \alpha_{2}, \delta } (r) dr\wedge d\theta .
\end{equation}
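As a quick sanity check outside the proof, one can verify with SymPy (our own computation, not part of the paper) that substituting the quadratic profile $h(\lambda)=\alpha_1\lambda-\frac{\alpha_2}{2}\lambda^2$ into the bracket of \eqref{generalexp} reproduces exactly the expression $F_{\alpha_1,\alpha_2,\delta}$ of \eqref{expressionF}:

```python
import sympy as sp

r, a, delta, a1, a2 = sp.symbols('r a delta alpha1 alpha2', positive=True)
lam = r - delta
h = a1*lam - a2/2*lam**2   # the quadratic profile of the theorem

# Bracket of (generalexp); note h''' = 0 for this profile
bracket = (sp.diff(h, r, 3)*sp.sin(a*r)/a + 2*sp.diff(h, r, 2)*sp.cos(a*r)
           + a*sp.diff(h, r)*(sp.sin(a*r) - 1/sp.sin(a*r))
           + a**2*h*sp.cos(a*r)*(2 + 1/sp.sin(a*r)**2))

# F_{alpha1, alpha2, delta}(r) as in (expressionF)
F = (-2*a2*sp.cos(a*r)
     + a*(a1 - a2*lam)*(sp.sin(a*r) - 1/sp.sin(a*r))
     + a**2*lam*(a1 - a2*lam/2)*sp.cos(a*r)*(2 + 1/sp.sin(a*r)**2))

print(sp.simplify(bracket - F))  # 0
```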
Now, by insisting on the existence of a smooth $P$ which solves \eqref{Navier2} on the simply-connected region $R_{\delta , \tau , \epsilon_{0}}$ as specified in \eqref{exteriorregion}, the validity of \eqref{wrong1} on $R_{\delta , \tau , \epsilon_{0}}$ follows as a by-product, which in turn forces the following relation to hold for every $r \in [\delta , \delta + \epsilon_{0} )$,
\begin{equation}\label{vanishingofF}
F_{\alpha_{1}, \alpha_{2}, \delta } (r) = 0.
\end{equation}
That is, $F_{\alpha_{1}, \alpha_{2}, \delta } $ should vanish identically on $[\delta , \delta + \epsilon_{0} )$. In what follows, we split our argument into three cases: the case $0< \delta < \frac{\pi}{2a}$, the case $\delta = \frac{\pi}{2a}$, and finally the case $\frac{\pi}{2a} < \delta < \frac{\pi}{a}$. In each of these cases, we derive a contradiction to the vanishing of $F_{\alpha_{1}, \alpha_{2}, \delta } $ on $[\delta , \delta + \epsilon_{0} )$.\\
\noindent
\textbf{Case One.} We first discuss the case of $0 < \delta < \frac{\pi}{2a} $. In this case, we simply observe that we have $\cos (a\delta ) > 0$, and $\frac{1}{\sin (a\delta )} - \sin (a\delta ) > 0$. Based upon such an observation, we deduce at once that the following property holds as long as $\alpha_{1} > 0$, and $\alpha_{2} > 0$,
\begin{equation}
F_{\alpha_{1}, \alpha_{2}, \delta }(\delta ) = -2\alpha_{2} \cos (a \delta ) -a \alpha_{1} \left(\frac{1}{\sin (a\delta )} - \sin (a\delta )\right) < 0.
\end{equation}
The validity of the above relation at once implies that, for some sufficiently small $\epsilon_{1} \in (0,\epsilon_{0})$, we will have the following property
\begin{itemize}
\item $F_{\alpha_{1}, \alpha_{2}, \delta }(r) < 0$ holds, for all $r \in [\delta , \delta + \epsilon_{1})$,
\end{itemize}
which directly contradicts the \emph{everywhere vanishing property} of $F_{\alpha_{1}, \alpha_{2}, \delta }(r)$ on $[\delta , \delta + \epsilon_{0})$. So, in this case, a contradiction has been reached, which ensures the non-existence of a smooth $P$ solving \eqref{Navier2} on the sector-shaped region $R_{\delta , \tau , \epsilon_{0}}$ as specified in \eqref{exteriorregion}, regardless of how small its angle $\tau$ or thickness $\epsilon_{0}$ may be.\\
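The sign claim of Case One is easy to confirm numerically; the sample values $a=1$, $\delta=\frac{7}{10}<\frac{\pi}{2}$, $\alpha_1=\alpha_2=1$ below are our own choice (not taken from the paper), and the check is for reassurance only:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
a_v, d_v, a1_v, a2_v = 1, sp.Rational(7, 10), 1, 1   # sample: 0 < delta = 0.7 < pi/2
lam = r - d_v
F = (-2*a2_v*sp.cos(a_v*r)
     + a_v*(a1_v - a2_v*lam)*(sp.sin(a_v*r) - 1/sp.sin(a_v*r))
     + a_v**2*lam*(a1_v - a2_v*lam/2)*sp.cos(a_v*r)*(2 + 1/sp.sin(a_v*r)**2))

print(float(F.subs(r, d_v)))                                           # about -2.44 < 0
print(all(float(F.subs(r, d_v + t)) < 0 for t in (0.01, 0.05, 0.1)))   # True
```

So $F$ is strictly negative at $r=\delta$ and stays negative slightly to the right of $\delta$, as the argument requires.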
\noindent
\textbf{Case Two.} We now deal with the case of $\delta = \frac{\pi}{2a}$. The problem involved here is that $F_{\alpha_{1}, \alpha_{2}, \frac{\pi}{2a} }(\frac{\pi}{2a}) =0 $. So, we look at the quantity $\frac{\partial}{\partial r}(\sin^2(ar)F_{\alpha_{1}, \alpha_{2}, \frac{\pi}{2a} })|_{r= \frac{\pi}{2a}}$ instead.
First, we have
\begin{equation}\label{expressionSinarF}
\begin{split}
\sin^2(ar) F_{\alpha_{1}, \alpha_{2}, \frac{\pi}{2a} } (r)
& = -2 \alpha_{2} \cos (ar) \sin^2(ar) \\
& + a \left(\alpha_{1} -\alpha_{2}\left(r- \frac{\pi}{2a}\right )\right) (\sin^3(ar) -\sin (ar)) \\
& + a^2\left (r- \frac{\pi}{2a}\right )\left[\alpha_{1} -\frac{\alpha_{2}}{2}\left(r- \frac{\pi}{2a} \right)\right] \cos (ar)(2 \sin^2(ar)+ 1 ) .
\end{split}
\end{equation}
Now, observe that
\begin{equation}\label{boring}
\begin{split}
\frac{\partial }{\partial r} \{ -2 \alpha_{2} \cos (ar) \sin^2(ar) \}\bigg |_{r= \frac{\pi}{2a}} &= 2a \alpha_{2} , \\
\frac{\partial }{\partial r} \left\{ a \left(\alpha_{1} -\alpha_{2}\left(r- \frac{\pi}{2a} \right)\right) (\sin^3(ar) -\sin (ar)) \right \} \bigg |_{r= \frac{\pi}{2a}} & = 0 ,\\
\frac{\partial }{\partial r} \left\{a^2 \left(r- \frac{\pi}{2a} \right)\left[\alpha_{1} -\frac{\alpha_{2}}{2}\left(r- \frac{\pi}{2a} \right)\right] \cos (ar)(2 \sin^2(ar)+ 1 )\right\} \bigg |_{r= \frac{\pi}{2a}} & = 0 .
\end{split}
\end{equation}
Hence, it follows from \eqref{expressionSinarF}, and \eqref{boring} that we have
\begin{equation}
\frac{\partial}{\partial r}(\sin^2(ar)F_{\alpha_{1}, \alpha_{2}, \frac{\pi}{2a} })|_{r= \frac{\pi}{2a}} = 2a\alpha_2 > 0 ,
\end{equation}
which, together with $F_{\alpha_{1}, \alpha_{2}, \frac{\pi}{2a} }(\frac{\pi}{2a}) =0 $, implies that the following property holds for some sufficiently small $\epsilon_{1} \in (0, \epsilon_{0})$,
\begin{itemize}
\item $\sin^2(ar)F_{\alpha_{1}, \alpha_{2}, \frac{\pi}{2a} } > 0 $ holds for all $r \in (\frac{\pi}{2a} , \frac{\pi}{2a} + \epsilon_{1}) $.
\end{itemize}
However, the above property is again in direct conflict with the fact that $F_{\alpha_{1}, \alpha_{2}, \frac{\pi}{2a} }$ should vanish identically on $[\frac{\pi}{2a} , \frac{\pi}{2a} + \epsilon_{0})$, should there be a smooth $P$ solving equation \eqref{Navier2} on the sector-shaped region $R_{\delta , \tau , \epsilon_{0}}$ as specified in \eqref{exteriorregion}. Again, this contradiction ensures the non-existence of such a smooth function $P$.\\
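The two facts used in Case Two, namely $F_{\alpha_1,\alpha_2,\frac{\pi}{2a}}(\frac{\pi}{2a})=0$ and $\frac{\partial}{\partial r}(\sin^2(ar)F_{\alpha_1,\alpha_2,\frac{\pi}{2a}})|_{r=\frac{\pi}{2a}}=2a\alpha_2$, can likewise be confirmed symbolically; this SymPy check is ours, not part of the argument:

```python
import sympy as sp

r, a, a1, a2 = sp.symbols('r a alpha1 alpha2', positive=True)
delta = sp.pi/(2*a)                      # Case Two: delta = pi/(2a)
lam = r - delta
F = (-2*a2*sp.cos(a*r)
     + a*(a1 - a2*lam)*(sp.sin(a*r) - 1/sp.sin(a*r))
     + a**2*lam*(a1 - a2*lam/2)*sp.cos(a*r)*(2 + 1/sp.sin(a*r)**2))

G = sp.sin(a*r)**2 * F
print(sp.simplify(G.subs(r, delta)))               # 0: F vanishes at r = pi/(2a)
print(sp.simplify(sp.diff(G, r).subs(r, delta)))   # 2*a*alpha2
```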
\noindent
\textbf{Case Three.} We now consider the case of $\frac{\pi}{2a} < \delta < \frac{\pi}{a}$, which is the most delicate of the three. The delicate issue here is that $\cos (a \delta ) < 0$. We will use the basic theory of second-order linear ODEs to treat this case. As in the previous cases, insisting on the existence of a smooth $P$ solving \eqref{Navier2} on $R_{\delta , \tau , \epsilon_{0}}$ as specified in \eqref{exteriorregion} leads to the consequence that the function $F_{\alpha_1 , \alpha_2 , \delta }$ must vanish identically on $[\delta , \delta + \epsilon_{0})$. But this is the same as saying that the quadratic function $Y(r) = \alpha_1 (r-\delta) -\frac{\alpha_2}{2}(r-\delta )^2$
will be a \emph{local solution} to the following linear second order ODE on the interval $[\delta , \delta + \epsilon_{0} )$.
\begin{equation}\label{linearODE}
y''(r) + Q_{1}(r) y'(r) + Q_{2}(r) y(r) = 0,
\end{equation}
where $Q_1$, $Q_2$ are the real-analytic functions on $(\frac{\pi}{2a} , \frac{\pi}{a})$ defined by
\begin{equation}
\begin{split}
Q_1(r) & = \frac{a}{2\cos (ar)} \left(\sin (ar) - \frac{1}{\sin (ar)} \right ) ,\\
Q_2(r) & = \frac{a^2}{2} \left(2+ \frac{1}{\sin^2(ar)}\right).
\end{split}
\end{equation}
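One can verify symbolically that the vanishing of $F_{\alpha_1,\alpha_2,\delta}$ is indeed equivalent to the ODE \eqref{linearODE}: a direct SymPy computation (ours, for reassurance) confirms the algebraic identity $F = 2\cos(ar)\,\big(Y'' + Q_1 Y' + Q_2 Y\big)$ with $Y(r)=h(r-\delta)$, and $\cos(ar)\neq 0$ on $(\frac{\pi}{2a},\frac{\pi}{a})$:

```python
import sympy as sp

r, a, delta, a1, a2 = sp.symbols('r a delta alpha1 alpha2', positive=True)
Y = a1*(r - delta) - a2/2*(r - delta)**2
Q1 = a/(2*sp.cos(a*r)) * (sp.sin(a*r) - 1/sp.sin(a*r))
Q2 = a**2/2 * (2 + 1/sp.sin(a*r)**2)

F = (-2*a2*sp.cos(a*r)
     + a*(a1 - a2*(r - delta))*(sp.sin(a*r) - 1/sp.sin(a*r))
     + a**2*(r - delta)*(a1 - a2*(r - delta)/2)*sp.cos(a*r)*(2 + 1/sp.sin(a*r)**2))

ode_lhs = sp.diff(Y, r, 2) + Q1*sp.diff(Y, r) + Q2*Y
print(sp.simplify(F - 2*sp.cos(a*r)*ode_lhs))  # 0, so F = 0 iff Y'' + Q1*Y' + Q2*Y = 0 where cos(ar) != 0
```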
In accordance with the basic existence and uniqueness theory for second-order linear ODEs (Theorem \ref{ODEtheorem} below), the local solution $Y$ on $[\delta , \delta + \epsilon_{0})$ represented by the quadratic function $Y(r) = \alpha_1 (r-\delta) -\frac{\alpha_2}{2}(r-\delta )^2$ extends uniquely to a global solution $Z$ of equation \eqref{linearODE} on the whole interval $(\frac{\pi}{2a} , \frac{\pi}{a})$. In addition, since the coefficient functions $Q_1(r)$ and $Q_2(r)$ are real-analytic on $(\frac{\pi}{2a} , \frac{\pi}{a})$, such a global solution $Z$ must also be real-analytic on $(\frac{\pi}{2a} , \frac{\pi}{a}) $.
So, the power series representation $Z(r) = \sum_{k=0}^{\infty}a_{k} (r - \frac{3\pi}{4a})^k$ of the real-analytic solution $Z$ about the point $\frac{3\pi}{4a}$ has radius of convergence at least $\frac{\pi}{4a}$. That is, $Z$ can be represented by $\sum_{k=0}^{\infty}a_{k} (r - \frac{3\pi}{4a})^k$, which converges absolutely for all $r \in (\frac{\pi}{2a} , \frac{\pi}{a})$. Now, we consider the following two holomorphic functions, which are the complexifications of the real-analytic functions $Y(r) = \alpha_1 (r-\delta) -\frac{\alpha_2}{2}(r-\delta )^2$ and $Z(r) = \sum_{k=0}^{\infty}a_{k} (r - \frac{3\pi}{4a})^k$ respectively.
\begin{equation}
\begin{split}
\textbf{Y}(w) &= \alpha_1 (w-\delta) -\frac{\alpha_2}{2}(w-\delta )^2 , \\
\textbf{Z} (w) & = \sum_{k=0}^{\infty}a_{k} \left(w - \frac{3\pi}{4a}\right)^k .
\end{split}
\end{equation}
Since the radius of convergence of $\sum_{k=0}^{\infty}a_{k} (r - \frac{3\pi}{4a})^k$ is at least $\frac{\pi}{4a}$, the holomorphic function $\textbf{Z} (w) = \sum_{k=0}^{\infty}a_{k} (w - \frac{3\pi}{4a})^k$ is well-defined \emph{at least} on the open ball $\{w \in \mathbb{C} : |w - \frac{3\pi}{4a}| < \frac{\pi}{4a}\}$ in $\mathbb{C}$. Recall that the real analytic solution $Z$ to \eqref{linearODE} arises as the unique extension of the local solution $Y(r) = h(r-\delta )$ in that
\begin{equation}
Z|_{[\delta , \delta + \epsilon_{0})} =Y ,
\end{equation}
which means the same as saying that the two holomorphic functions $\textbf{Z}$ and $\textbf{Y}$ coincide on the line segment $\{r : \delta < r < \delta + \epsilon_{0} \}$, which is itself contained in the open disc $\{w \in \mathbb{C} : |w - \frac{3\pi}{4a}| < \frac{\pi}{4a}\}$. So, it follows from the \emph{identity theorem} of complex function theory that $\textbf{Z}$ must coincide with the entire function $\textbf{Y}$ on this whole open disc; that is, the power series
$\sum_{k=0}^{\infty}a_{k} (w - \frac{3\pi}{4a})^k$ is identical to $ \alpha_1 (w-\delta) -\frac{\alpha_2}{2}(w-\delta )^2 $ there. So, we deduce that the global real-analytic solution $Z$ to \eqref{linearODE} must be \emph{identical to} the quadratic function $\alpha_1 (r-\delta) -\frac{\alpha_2}{2}(r-\delta )^2$ over the \emph{whole} interval $(\frac{\pi}{2a} , \frac{\pi}{a})$, which is the same as saying that the following identity holds for \emph{all} $r \in (\frac{\pi}{2a} , \frac{\pi}{a})$, with $Y(r)$ the quadratic function $Y(r) = \alpha_1 (r-\delta) -\frac{\alpha_2}{2}(r-\delta )^2$,
\begin{equation}\label{goodidentity}
Y''(r) + Q_{1}(r)Y'(r) + Q_2(r)Y(r) = 0.
\end{equation}
To finish the argument, we will derive a contradiction against identity \eqref{goodidentity} through investigating the limiting behavior of $Y''(r) + Q_{1}(r)Y'(r) + Q_2(r)Y(r)$ as $r \rightarrow \frac{\pi}{a}^-$. Now, we will further split the discussion into two subcases subordinate to \textbf{Case Three}.
First, we consider the subcase when $\alpha_1 - \frac{\alpha_2}{2}(\frac{\pi}{a} -\delta ) $ is \emph{not} zero. In this subcase, we have the following relations, which follow from direct computations.
\begin{equation}
\begin{split}
\lim_{r\rightarrow \frac{\pi}{a}^-} \sin (ar) Y''(r)& = 0, \\
\lim_{r\rightarrow \frac{\pi}{a}^-} \sin (ar) Q_{1}(r) Y'(r) & = \frac{a}{2}\left[\alpha_1 - \alpha_2 \left(\frac{\pi}{a} -\delta \right)\right] , \\
\lim_{r\rightarrow \frac{\pi}{a}^-} \big | \sin (ar) Q_2(r) Y(r) \big | & = \infty ,
\end{split}
\end{equation}
from which it follows at once that
\begin{equation}
\lim_{r\rightarrow \frac{\pi}{a}^-} \big | \sin(ar) [Y''(r) + Q_{1}(r)Y'(r) + Q_2(r)Y(r)] \big | = \infty ,
\end{equation}
since the third term diverges (with the sign of $Y(\frac{\pi}{a}) \neq 0$) while the first two remain bounded. This is in direct conflict with identity \eqref{goodidentity}, which is supposed to hold on the \emph{whole interval} $(\frac{\pi}{2a} , \frac{\pi}{a})$. So, in this subcase, we can rule out the possibility of having a smooth function $P$ solving \eqref{Navier2} on the simply-connected open region $R_{\delta , \tau , \epsilon_{0}}$ as specified in \eqref{exteriorregion}.\\
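The three limits above can be confirmed with SymPy; this is our own check, and the sample values $a=1$, $\delta=2$, $\alpha_1=\alpha_2=1$ used for the divergent term (which satisfy $\alpha_1-\frac{\alpha_2}{2}(\frac{\pi}{a}-\delta)\neq 0$) are our own choice:

```python
import sympy as sp

r, a, delta, a1, a2 = sp.symbols('r a delta alpha1 alpha2', positive=True)
Y = a1*(r - delta) - a2/2*(r - delta)**2
Q1 = a/(2*sp.cos(a*r)) * (sp.sin(a*r) - 1/sp.sin(a*r))
Q2 = a**2/2 * (2 + 1/sp.sin(a*r)**2)

# The first two limits hold for general parameters:
L1 = sp.limit(sp.sin(a*r)*sp.diff(Y, r, 2), r, sp.pi/a, '-')
L2 = sp.limit(sp.sin(a*r)*Q1*sp.diff(Y, r), r, sp.pi/a, '-')
print(L1)                                                  # 0
print(sp.simplify(L2 - a/2*(a1 - a2*(sp.pi/a - delta))))   # 0

# Divergence of the third term, at the sample values a=1, delta=2, alpha1=alpha2=1:
vals = {a: 1, delta: 2, a1: 1, a2: 1}
L3 = sp.limit((sp.sin(a*r)*Q2*Y).subs(vals), r, sp.pi, '-')
print(L3)  # oo
```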
\noindent
Next, we deal with the remaining subcase when $\alpha_1 = \frac{\alpha_2}{2}(\frac{\pi}{a} -\delta )$. For this case, it is immediate to see that
\begin{equation}
\begin{split}
\alpha_1 - \frac{\alpha_2}{2}(r-\delta ) &= \frac{\alpha_2}{2} \left(\frac{\pi}{a} - r\right), \\
\alpha_{1} - \alpha_2 (r-\delta ) &= \frac{\alpha_2}{2} \left( \frac{\pi}{a} -2r + \delta \right).
\end{split}
\end{equation}
Hence, it follows that
\begin{equation}\label{cosexpression}
\begin{split}
& 2\cos (ar) [Y''(r) + Q_1(r)Y'(r) + Q_2(r)Y(r)] \\
&= -2 \alpha_2 \cos (ar) + \frac{a \alpha_2}{2}\left(\frac{\pi}{a} - 2r + \delta \right) \sin (ar) + a^2 \alpha_2 (r-\delta )
\left(\frac{\pi}{a} -r \right)\cos (ar) \\
& + \frac{a \alpha_2}{2 \sin^2(ar)}\left\{ \left(2r - \frac{\pi}{a} - \delta \right) \sin (ar) -a (r-\delta )\left(r- \frac{\pi}{a}\right) \cos (ar) \right \} .
\end{split}
\end{equation}
However, we see that the following relations follow from direct computation.
\begin{equation}
\begin{split}
& \lim_{r\rightarrow \frac{\pi}{a}^-} \left\{-2 \alpha_2 \cos (ar) + \frac{a \alpha_2}{2}\left(\frac{\pi}{a} - 2r + \delta \right) \sin (ar) + a^2 \alpha_2 (r-\delta )\left(\frac{\pi}{a} -r \right)\cos (ar) \right\} = 2 \alpha_2 \\
& \lim_{r\rightarrow \frac{\pi}{a}^-} \frac{a \alpha_2}{2 \sin^2(ar)}\left\{ \left(2r - \frac{\pi}{a} - \delta \right) \sin (ar) -a (r-\delta )\left(r- \frac{\pi}{a}\right) \cos (ar) \right \}
= -\frac{\alpha_2}{2} ,
\end{split}
\end{equation}
from which we deduce that
\begin{equation}
\lim_{r\rightarrow \frac{\pi}{a}^-} 2\cos (ar) [Y''(r) + Q_1(r)Y'(r) + Q_2(r)Y(r)] = \frac{3 \alpha_2}{2} \in \mathbb{R}-\{0\},
\end{equation}
which again is in direct conflict with identity \eqref{goodidentity}, which is supposed to hold on the whole interval $(\frac{\pi}{2a} , \frac{\pi}{a})$. So, this contradiction rules out the possibility of having a smooth $P$ solving \eqref{Navier2} on the sector-shaped region $R_{\delta , \tau ,\epsilon_0 }$ as specified in \eqref{exteriorregion}. This finishes the proof of Theorem \ref{Noparallel}.
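The nonzero limit $\frac{3\alpha_2}{2}$ in this last subcase involves a delicate cancellation of the $\frac{1}{\sin(ar)}$-type divergences, so a symbolic confirmation is reassuring. The SymPy check below is ours; the sample values $a=1$, $\delta=2\in(\frac{\pi}{2},\pi)$ are our own choice, while $\alpha_2$ stays symbolic:

```python
import sympy as sp

r, a, delta, a2 = sp.symbols('r a delta alpha2', positive=True)
a1 = a2/2*(sp.pi/a - delta)      # the subcase constraint alpha1 = (alpha2/2)(pi/a - delta)
Y = a1*(r - delta) - a2/2*(r - delta)**2

# F equals 2*cos(ar)*(Y'' + Q1*Y' + Q2*Y); we take its one-sided limit at r = pi/a
F = (-2*a2*sp.cos(a*r)
     + a*sp.diff(Y, r)*(sp.sin(a*r) - 1/sp.sin(a*r))
     + a**2*Y*sp.cos(a*r)*(2 + 1/sp.sin(a*r)**2))

Fc = F.subs({a: 1, delta: 2})    # sample values with pi/(2a) < delta < pi/a
lim = sp.limit(Fc, r, sp.pi, '-')
print(lim)  # 3*alpha2/2
```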
In the proof of Theorem \ref{Noparallel}, we used the following basic existence and uniqueness theorem for linear ODEs with real-analytic coefficients, which can be found in standard ODE textbooks.
\begin{thm}\label{ODEtheorem}
Consider the following linear equation about the unknown solution $Y(t)$:
\begin{equation}\label{ODE}
Y^{(n)}(t) + Q_{n-1}(t) Y^{(n-1)}(t) + Q_{n-2}(t) Y^{(n-2)}(t) + \cdots + Q_0(t) Y(t) = 0 ,
\end{equation}
where the $Q_{j}$ are prescribed real-analytic coefficient functions defined on some open interval $(a, b)$. Here, the symbol $Y^{(j)}(t)$ stands for the $j$-th order derivative of $Y$. Let $\delta \in (a,b)$ be some selected base point in $(a,b)$. Then, for any prescribed real numbers $\beta_0$, $\beta_{1}$, $\beta_{2}$, \dots , $\beta_{n-1}$, there exists a unique real-analytic solution $Y : (a, b) \rightarrow \mathbb{R}$ to the linear ODE \eqref{ODE} which at the same time satisfies the initial conditions $Y^{(j)}(\delta ) = \beta_j$, for every $0 \leq j \leq n-1$.
\end{thm}
\noindent
Now, with the help of Theorem \ref{ODEtheorem}, we can give a very brief proof of Theorem \ref{existenceEasy} as follows.\\
\noindent{\bf Step 4 : The proof of Theorem \ref{existenceEasy}.}
\noindent
To begin, we again consider the following representation formula, obtained through the efforts spent in \textbf{Step 1} and \textbf{Step 2} of this section, which is valid for \emph{any} parallel laminar flow $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$, where $Y(r) = h(r-\delta )$ is \textbf{any} smooth function defined on some open interval about the base point $\delta$.
\begin{equation}\label{generalexpSECOND}
\begin{split}
&d \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar)* u^* + \overline{\nabla}_{u}u^* \} \\
& = \nu \bigg\{Y'''(r) \frac{\sin (ar)}{a} + 2 Y''(r) \cos (ar) + a Y'(r) \left(\sin (ar) -\frac{1}{\sin (ar)}\right) \\
& + a^2 Y(r) \cos (ar)\left(2+ \frac{1}{\sin^2(ar)}\right)\bigg\} dr\wedge d\theta .
\end{split}
\end{equation}
As we have already mentioned in the introduction, to see whether equation \eqref{Navier2} admits a solution of the form $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ on the simply-connected open region $\Omega_{\delta , \tau }$ as specified in \eqref{OMEGASPHERE}, with $Y(r) = h(r-\delta )$ an unknown function, it is enough to check whether the smooth $2$-form $d \big \{ \nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar) *u^* + \overline{\nabla}_{u}u^* \big \}$ vanishes identically over the same simply-connected open region $\Omega_{\delta , \tau } \subset S^2(a^2)$. This is because the $d$-closed property of $\nu (-\triangle u^* -2 a^2 u^*) + \beta \cos (ar)* u^* + \overline{\nabla}_{u}u^*$ over any simply-connected open region in $S^2(a^2)-K$ at once ensures the existence of a smooth pressure function $P$ which solves equation \eqref{Navier2} on the same region. In other words, $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ solves equation \eqref{Navier2} on the simply-connected open region $\Omega_{\delta , \tau }$ as specified in \eqref{OMEGASPHERE} if and only if the unknown function $Y(r) = h(r-\delta )$ satisfies the following third-order linear ODE on $[\delta , \frac{\pi}{a})$.
\begin{equation}\label{thirdorderODESECOND}
Y'''(r) + 2 a Y''(r) \frac{\cos (ar)}{\sin (ar)} + a^2 Y'(r) \left(1 -\frac{1}{\sin^2 (ar)}\right) + a^3 Y(r) \frac{\cos (ar)}{\sin (ar)}\left(2+ \frac{1}{\sin^2(ar)}\right) = 0 .
\end{equation}
However, all the coefficient functions appearing in the above third-order linear ODE are real-analytic on the open interval $(0,\frac{\pi}{a})$. As a result, Theorem \ref{ODEtheorem} ensures that there exists a \emph{unique} real-analytic function $Y$ defined on the whole interval $(0,\frac{\pi}{a})$ which solves the third-order ODE \eqref{thirdorderODESECOND}, and which satisfies the prescribed initial conditions $Y(\delta ) = 0$, $Y'(\delta ) = \alpha_1$, and $Y''(\delta ) = - \alpha_2$, where $\alpha_1$, $\alpha_2$ are arbitrary given positive numbers. For such a unique analytic solution $Y : (0,\frac{\pi}{a}) \rightarrow \mathbb{R}$ to equation \eqref{thirdorderODESECOND}, the associated parallel laminar flow $u = -Y(r) \frac{a}{\sin (ar)}\frac{\partial}{\partial \theta }$ solves equation \eqref{Navier2} on the simply-connected open region $\Omega_{\delta , \tau }$ as specified in \eqref{OMEGASPHERE}. This completes the proof of Theorem \ref{existenceEasy}.
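The local solution asserted by Theorem \ref{ODEtheorem} can also be produced numerically. The following sketch (our own illustration, not part of the paper) integrates the third-order ODE \eqref{thirdorderODESECOND} with SciPy, using sample values $a=1$, $\delta=1$, $\alpha_1=\alpha_2=1$ of our own choice, on an interval staying inside $(0,\frac{\pi}{a})$ where the coefficients are analytic:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sample values (our own choice, not from the paper)
a, delta, alpha1, alpha2 = 1.0, 1.0, 1.0, 1.0

def rhs(r, y):
    """First-order system y = [Y, Y', Y''] for the third-order linear ODE."""
    Y, Yp, Ypp = y
    s, c = np.sin(a*r), np.cos(a*r)
    Yppp = -(2*a*Ypp*c/s + a**2*Yp*(1 - 1/s**2) + a**3*Y*(c/s)*(2 + 1/s**2))
    return [Yp, Ypp, Yppp]

# Initial conditions Y(delta) = 0, Y'(delta) = alpha1, Y''(delta) = -alpha2
sol = solve_ivp(rhs, (delta, 2.5), [0.0, alpha1, -alpha2], rtol=1e-10, atol=1e-12)
print(sol.success, bool(np.all(np.isfinite(sol.y))))  # True True
```

The resulting profile $Y$ then yields the parallel laminar flow $u = -Y(r)\frac{a}{\sin(ar)}\frac{\partial}{\partial\theta}$ of the theorem on the corresponding region.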
\section{Basic geometry of $\mathbb{H}^2(-a^2)$ : The visualization of $\mathbb{H}^2(-a^2)$ through the use of the hyperboloid model, and the geodesic normal polar coordinate on $\mathbb{H}^2(-a^2)$.}\label{HYPERBOLIDSECTION}
\noindent
The objective of this section is to give a brief introduction to the geometry of the space form $\mathbb{H}^2(-a^2)$ of constant negative sectional curvature $-a^2 < 0$. Again, all the material presented in this section consists of standard facts well known to differential geometers. The purpose of this section, however, is to spell out the basic notions and geometric language which we will use in the differential-geometric calculations of \textbf{Section \ref{hyperMAINSECTION}}. We first give a description of $\mathbb{H}^2(-a^2)$ through the use of the so-called hyperboloid model.\\
\noindent
In the following presentation, we closely follow the standard construction of the hyperboloid model of $\mathbb{H}^2(-a^2)$ as given in pages 201 to 202 of \cite{Jost}.\\
\begin{defn}\label{hyperbolid}
\textbf{(The characterization of $\mathbb{H}^2(-a^2)$ by means of the hyperboloid model).} Consider the \emph{linear space}
$\mathbb{V}^3 = \{ (x_0, x_1 , x_2) : x_0, x_1, x_2 \in \mathbb{R} \}$ which is equipped with the following quadratic form $<\cdot , \cdot > : \mathbb{V}^3\otimes \mathbb{V}^3 \rightarrow \mathbb{R}$.
\begin{equation}\label{quadraticform}
<x,y> = -x_0y_0 + x_1y_1 + x_2y_2 ,
\end{equation}
where $x$ and $y$ are any two elements of the linear space $\mathbb{V}^3$.
\end{defn}
\noindent
Notice that $\mathbb{V}^3$ equipped with the quadratic form \eqref{quadraticform} is \textbf{not} the same as the Euclidean space $\mathbb{R}^3$. Then, we define $\mathbb{H}^2(-a^2)$ as follows.
\begin{equation}
\mathbb{H}^2(-a^2) = \big \{ x \in \mathbb{V}^3 : <x,x> = \frac{-1}{a^2}, x_0 > 0 \big \} .
\end{equation}
Then, it is clear that $\mathbb{H}^2(-a^2)$ is represented as a branch of the hyperboloid $<x,x> = \frac{-1}{a^2}$. Hence, topologically, $\mathbb{H}^2(-a^2)$, as a differentiable manifold in its own right, is diffeomorphic to $\mathbb{R}^2$ (be careful: $\mathbb{H}^2(-a^2)$ as a \textbf{Riemannian manifold} is \textbf{not} the same as the Euclidean space $\mathbb{R}^2$). Next, we construct the Riemannian metric $g(\cdot, \cdot )$ on the $2$-dimensional manifold $\mathbb{H}^2(-a^2)$ as follows. For any point $p \in \mathbb{H}^2(-a^2)$, we consider the symmetric bilinear form $\big ( -dx_0\otimes dx_0 + dx_1\otimes dx_1 + dx_2\otimes dx_2 \big )\big |_{p}$ acting on the tangent space $T_p\mathbb{V}^3$ of $\mathbb{V}^3$ at $p$, which is described as follows.
\begin{equation}
\big ( -dx_0\otimes dx_0 + dx_1\otimes dx_1 + dx_2\otimes dx_2 \big )\big |_{p}(v,w) = -v_0w_0 + v_1w_1 + v_2w_2 ,
\end{equation}
where $v$, $w \in T_p\mathbb{V}^3$. Then, for each point $p \in \mathbb{H}^2(-a^2)$, we consider the positive definite inner product $g_{p}(\cdot, \cdot ) $ on the tangent space $T_{p}\mathbb{H}^2(-a^2)$, \emph{defined} to be the restriction of the symmetric bilinear form $\big ( -dx_0\otimes dx_0 + dx_1\otimes dx_1 + dx_2\otimes dx_2 \big )\big |_{p}$ to the vector subspace $T_{p}\mathbb{H}^2(-a^2)$ of $T_p\mathbb{V}^3$. This smoothly varying family of positive definite inner products $g_{p}(\cdot , \cdot ) : T_{p}\mathbb{H}^2(-a^2)\otimes T_{p}\mathbb{H}^2(-a^2) \rightarrow \mathbb{R}$, for $p \in \mathbb{H}^2(-a^2)$, \emph{constitutes} the Riemannian metric $g(\cdot , \cdot )$ on $\mathbb{H}^2(-a^2)$.
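The assertion that the restriction of the indefinite form $-dx_0\otimes dx_0+dx_1\otimes dx_1+dx_2\otimes dx_2$ to each tangent plane of $\mathbb{H}^2(-a^2)$ is positive definite can be illustrated concretely. The following SymPy sketch is our own illustration (the curve of sample points and the tangent frame are our choice, not from the text):

```python
import sympy as sp

a, t = sp.symbols('a t', positive=True)
Q = sp.diag(-1, 1, 1)   # matrix of the form <x,y> = -x0*y0 + x1*y1 + x2*y2

# A curve of points on H^2(-a^2): p(t) = (cosh(t)/a, sinh(t)/a, 0)
p = sp.Matrix([sp.cosh(t)/a, sp.sinh(t)/a, 0])
print(sp.simplify((p.T*Q*p)[0]))   # -1/a**2, so p(t) lies on the hyperboloid

# Two vectors tangent to the hyperboloid at p (they satisfy <v, p> = 0):
v1 = sp.Matrix([sp.sinh(t), sp.cosh(t), 0])
v2 = sp.Matrix([0, 0, 1])
print(sp.simplify((v1.T*Q*p)[0]), sp.simplify((v2.T*Q*p)[0]))    # 0 0

# The restriction of <.,.> to this tangent plane is positive definite:
print(sp.simplify((v1.T*Q*v1)[0]), sp.simplify((v2.T*Q*v2)[0]))  # 1 1
```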
In order to further clarify the content of the above definition of $\mathbb{H}^2(-a^2)$, a few remarks are in order here.
\begin{remark}
The $2$-dimensional space form $\mathbb{H}^2(-a^2)$ with constant sectional curvature $-a^2 < 0$ is often called the $2$-dimensional hyperbolic space (or hyperbolic manifold) of constant sectional curvature $-a^2$. If we consider the group $O(2,1)$ consisting of all those linear maps on $\mathbb{V}^3$ which leave the quadratic form $<\cdot, \cdot >$ as specified in \eqref{quadraticform} invariant, and which map the $x_0$-axis onto itself, then this group $O(2,1)$ leaves $\mathbb{H}^2(-a^2)$ invariant. Actually, $O(2,1)$ is exactly the group of isometries which acts transitively on $\mathbb{H}^2(-a^2)$ (see the discussion on page 202 of the textbook \cite{Jost}). So, the geometric structure of the hyperbolic space $\mathbb{H}^2(-a^2)$ around any selected point $p \in \mathbb{H}^2(-a^2)$ looks \textbf{exactly the same}, regardless of where the selected point $p$ of reference is. That is, for any two points $p$, $q$ in $\mathbb{H}^2(-a^2)$, the geometric structure of $\mathbb{H}^2(-a^2)$ around $p$ is identical to the geometric structure of $\mathbb{H}^2(-a^2)$ around $q$, up to an isometry $T$ in $O(2,1)$ sending $p$ to $T(p)=q$. So, in the following discussion, we may, \emph{without loss of generality}, choose the preferred reference point $O$ in $\mathbb{H}^2(-a^2)$ to be $O = (\frac{1}{a},0,0)$.
\end{remark}
\noindent
Here, for the clarity of our presentation, we select our preferred base point $O$ in $\mathbb{H}^2(-a^2)$ to be $O = (\frac{1}{a},0,0)$. That is, $O$ is located at the vertex of the hyperboloid branch $\{x \in \mathbb{V}^3 : <x,x> = \frac{-1}{a^2} , x_0 > 0 \}$ (note that indeed $<O,O> = -\frac{1}{a^2}$). We stress again that such a choice of $O$ involves no loss of generality, due to the homogeneous structure of the hyperbolic manifold $\mathbb{H}^2(-a^2)$. Now, we will give a concrete description of the exponential map $\exp_{O} : T_{O}\mathbb{H}^2(-a^2)\rightarrow \mathbb{H}^2(-a^2)$ which maps the tangent space $T_{O}\mathbb{H}^2(-a^2)$ of $\mathbb{H}^2(-a^2)$ at $O$ onto the manifold $\mathbb{H}^2(-a^2)$ itself. Recall that such an exponential map is defined abstractly in the following manner.
\begin{defn}\label{exphyper}
\textbf{The exponential map on the hyperbolic space $\mathbb{H}^2(-a^2)$ :} For any $v \in T_{O}\mathbb{H}^2(-a^2)$ with $\|v\| = 1$, we consider the uniquely determined (\emph{unit speed}) geodesic $c_{v}: [0,\infty )\rightarrow \mathbb{H}^2(-a^2)$ which satisfies $c_v(0) = O$ and
$\dot{c}_v(0) = v$. Then, for any $r > 0$, $\exp_{O}(rv)$ is \emph{defined} to be
\begin{equation}\label{exponentialmaphyper}
\exp_{O}(rv) = c_v(r) .
\end{equation}
\end{defn}
\noindent
It is a basic differential-geometric fact that the exponential map $\exp_{O} : T_{O}\mathbb{H}^2(-a^2)\rightarrow \mathbb{H}^2(-a^2) $ as specified in \textbf{Definition \ref{exphyper}} is a smooth bijective map of the tangent space $T_{O}\mathbb{H}^2(-a^2)$ onto $\mathbb{H}^2(-a^2)$, whose inverse map $\exp_{O}^{-1} : \mathbb{H}^2(-a^2) \rightarrow T_{O}\mathbb{H}^2(-a^2)$ is also smooth. That is, $\exp_{O}$ is a diffeomorphism from $T_{O}\mathbb{H}^2(-a^2) \cong \mathbb{R}^2$ onto $\mathbb{H}^2(-a^2)$.\\
\noindent
Now, by means of the concrete hyperboloid model of $\mathbb{H}^2(-a^2)$ as given in \textbf{Definition \ref{hyperbolid}}, for the reference point
$O = (\frac{1}{a} , 0 , 0)$, the exponential map $\exp_{O} : T_{O}\mathbb{H}^2(-a^2)\rightarrow \mathbb{H}^2(-a^2)$ can be expressed concretely as follows. Notice that the tangent space $T_{O}\mathbb{H}^2(-a^2)$ can naturally be identified with the plane $\{ (0, v_1, v_2) : v_1 , v_2 \in \mathbb{R} \}$ in the linear space $\mathbb{V}^3$. So, for any vector $v = (0, v_1 ,v_2) \in T_{O}\mathbb{H}^2(-a^2)$ with $\|v\| = 1$, the geodesic
$c_v : [0,\infty ) \rightarrow \mathbb{H}^2(-a^2)$ satisfying $c_v(0) = O$ and $\dot{c}_v(0) = v$ is expressed concretely as follows.
\begin{equation}\label{concretegeohyper}
c_v(r) = \cosh (ar) (\frac{1}{a} , 0,0) + \frac{\sinh (ar)}{a} (0, v_1 , v_2).
\end{equation}
Hence, for any unit vector $v$ in $T_{O}\mathbb{H}^2(-a^2)$ and any $r > 0$, the term $\exp_O(rv)$ is given explicitly as follows.
\begin{equation}\label{concreteexphyper}
\exp_{O}(rv) = \cosh (ar) (\frac{1}{a} , 0,0) + \frac{\sinh (ar)}{a} (0, v_1 , v_2).
\end{equation}
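One can verify directly (a SymPy check of ours, not in the text) that the formula \eqref{concreteexphyper} indeed lands on $\mathbb{H}^2(-a^2)$, is unit speed, and satisfies $\ddot{c}_v = a^2 c_v$, so that the acceleration is purely normal to the hyperboloid, which is exactly the geodesic condition:

```python
import sympy as sp

a, r, phi = sp.symbols('a r phi', positive=True)
Q = sp.diag(-1, 1, 1)
ip = lambda x, y: sp.simplify((x.T*Q*y)[0])

O = sp.Matrix([1/a, 0, 0])
v = sp.Matrix([0, sp.cos(phi), sp.sin(phi)])   # a general unit vector in T_O H^2(-a^2)
c = sp.cosh(a*r)*O + sp.sinh(a*r)/a*v          # the curve c_v(r) of (concretegeohyper)

print(ip(c, c))                              # -1/a**2 : c_v(r) stays on H^2(-a^2)
print(ip(c.diff(r), c.diff(r)))              # 1       : c_v is unit speed
print(sp.simplify(c.diff(r, 2) - a**2*c).T)  # zero row: c_v'' = a^2 c_v, normal to the hyperboloid
```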
We can now define the \emph{normal polar coordinate system} $(r , \theta )$ on $\mathbb{H}^2(-a^2)$ about the reference point $O$ as follows.
\begin{defn}\label{polarDefn}
\textbf{The normal polar coordinate system on $\mathbb{H}^2(-a^2)$ :} The normal polar coordinate system on $\mathbb{H}^2(-a^2)$ about the point $O$ is the smooth map $(r,\theta ) : \mathbb{H}^2(-a^2) \rightarrow (0, \infty )\times (0, 2\pi )$ defined by
\begin{equation}
(r,\theta) = (\overline{r} , \overline{\theta }) \circ \exp_{O}^{-1} ,
\end{equation}
in which $(\overline{r} , \overline{\theta })$ is the standard polar coordinate system on the Euclidean $2$-space $\mathbb{R}^2$ (here, we identify the tangent space $T_{O}\mathbb{H}^2(-a^2)$ with $\mathbb{R}^2$).
\end{defn}
\noindent
By means of such a normal polar coordinate system $(r,\theta )$ on $\mathbb{H}^2(-a^2)$, we can define two natural vector fields
$\frac{\partial}{\partial r}$ and $\frac{\partial}{\partial \theta}$ on $\mathbb{H}^2(-a^2)$ as follows.
\begin{defn}\label{FRAMEHyperDef}
\textbf{Natural coordinate frame $\{ \frac{\partial}{\partial r} , \frac{\partial}{\partial \theta } \}$ on $\mathbb{H}^2(-a^2)$ :} For each $p \in \mathbb{H}^2(-a^2)$, the vectors $\frac{\partial}{\partial r}\big |_{p}$ and $\frac{\partial}{\partial \theta } \big |_p$ in $T_p \mathbb{H}^2(-a^2)$ are defined as linear derivations acting on the space $C^{\infty}(\mathbb{H}^2(-a^2))$ of smooth functions on $\mathbb{H}^2(-a^2)$ through the following relations.
\begin{equation}\label{naturalvectorhyper}
\begin{split}
\frac{\partial}{\partial r}\bigg |_{p} f &= \frac{\partial}{\partial \overline{r}}\bigg |_{(r,\theta )(p)}[f \circ (r,\theta)^{-1}] =
\frac{\partial}{\partial \overline{r}}\bigg |_{(r,\theta )(p)}[f \circ \exp_{O} \circ (\overline{r} , \overline{\theta})^{-1} ] \\
\frac{\partial}{\partial \theta }\bigg |_{p} f & = \frac{\partial}{\partial \overline{\theta}}\bigg |_{(r,\theta )(p)}[f \circ (r,\theta)^{-1}] =
\frac{\partial}{\partial \overline{\theta}}\bigg |_{(r,\theta )(p)}[f \circ \exp_{O} \circ (\overline{r} , \overline{\theta})^{-1} ] ,
\end{split}
\end{equation}
where $f$ can be any smooth function on $\mathbb{H}^2(-a^2)$.
\end{defn}
\noindent
Now, thanks to the concrete expression for the exponential map $\exp_O$ as given in \eqref{concreteexphyper}, we can give concrete expressions for
the natural vector fields $\frac{\partial}{\partial r}$ and $\frac{\partial}{\partial \theta }$ on $\mathbb{H}^2(-a^2)$ as follows.
\begin{equation}\label{concreteexpression}
\begin{split}
\frac{\partial}{\partial r}\bigg |_{\exp_O(rv)} & = \frac{d}{dr}\big ( \exp_O(rv)\big ) = \sinh (ar) (1,0,0) + \cosh (ar)v , \\
\frac{\partial}{\partial \theta } \bigg |_{\exp_O(rv)} & = \frac{\sinh (ar)}{a} v^{\perp} ,
\end{split}
\end{equation}
where $v = (0, v_1 ,v_2) \in T_{O}\mathbb{H}^2(-a^2)$ is any unit vector in $T_{O}\mathbb{H}^2(-a^2)$, with $v^{\perp} = (0, -v_2 ,v_1)$, and $r > 0$ is any positive number. Now if we consider the smooth vector field $e_2$ on $\mathbb{H}^2(-a^2)$ which is defined by
\begin{equation}
e_2 = \frac{a}{\sinh (ar)} \frac{\partial}{\partial \theta },
\end{equation}
then the restriction of such a smooth vector field $e_2$ \emph{along each geodesic $c_v$} must be a \emph{parallel vector field} along $c_v$, simply because the second expression in \eqref{concreteexpression} informs us that
\begin{equation}
e_2 \big |_{c_{v}(r)} = e_2 \big |_{\exp_O (rv)} = v^{\perp} .
\end{equation}
The above expression further informs us that we must have $\overline{\nabla}_{\dot{c_{v}}} e_2 = 0$, for any direction indicated by the unit vector $v = (0, v_1, v_2) \in T_{O}(\mathbb{H}^2(-a^2))$. Here, the symbol $\overline{\nabla}$ stands for the Levi-Civita connection acting on the space of smooth vector fields on $\mathbb{H}^2(-a^2)$, which is naturally induced by the intrinsic Riemannian geometry of $\mathbb{H}^2(-a^2)$.
So, it turns out that the vector fields $e_1 = \frac{\partial}{\partial r}$, $e_{2} = \frac{a}{\sinh (ar)}\frac{\partial}{\partial \theta }$ constitute a positively oriented orthonormal moving frame $\{e_1 , e_2\}$ of the tangent bundle $T\mathbb{H}^2(-a^2)$ of $\mathbb{H}^2(-a^2)$.\\
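The parallelism claim above can also be checked directly in the ambient hyperboloid model: along each $c_v$ the field $v^{\perp}$ is constant in $\mathbb{V}^3$ and remains tangent to the hyperboloid, so its covariant derivative (the tangential projection of the ambient derivative) vanishes. A short sympy sketch (an independent check, not part of the text's argument):

```python
import sympy as sp

a, r, v1, v2 = sp.symbols('a r v1 v2', positive=True)
c = sp.Matrix([sp.cosh(a*r)/a, sp.sinh(a*r)/a*v1, sp.sinh(a*r)/a*v2])
vperp = sp.Matrix([0, -v2, v1])   # e_2 restricted to c_v, per the text

def mink(x, y):
    # Minkowski pairing of signature (-,+,+) on the ambient space V^3
    return -x[0]*y[0] + x[1]*y[1] + x[2]*y[2]

# v_perp is Minkowski-orthogonal to the position c_v(r) (hence tangent to the
# hyperboloid) and to the velocity c_v'(r); its ambient r-derivative vanishes,
# so its covariant derivative along c_v vanishes as well.
print(sp.simplify(mink(vperp, c)),
      sp.simplify(mink(vperp, c.diff(r))),
      list(vperp.diff(r)))  # 0 0 [0, 0, 0]
```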
\noindent
Since we will work with differential operators, such as the Hodge Laplacian $(-\triangle ) = dd^* + d^*d$ and the exterior differential operator $d$, which naturally operate on differential $1$-forms on $\mathbb{H}^2(-a^2)$, we will consider the two associated $1$-forms $e_1^* = g(e_1 ,\cdot )$ and $e_2^* = g(e_2 , \cdot )$ of the vector fields $e_1$, $e_2$ respectively, which together constitute the orthonormal co-frame of the cotangent bundle $T^*\mathbb{H}^2(-a^2)$. Indeed, the associated $1$-forms $e_1^*$ and $e_2^*$ on $\mathbb{H}^2(-a^2)$ are given by
\begin{equation}\label{coframehyper}
\begin{split}
e_1^* & = dr , \\
e_2^* & = \frac{\sinh (ar)}{a} d\theta .
\end{split}
\end{equation}
Then, the volume form $Vol_{\mathbb{H}^2(-a^2)}$ on the hyperbolic space $\mathbb{H}^2(-a^2)$ (see \textbf{Definition \ref{VolumeformDef}}) can be locally expressed by
\begin{equation}
Vol_{\mathbb{H}^2(-a^2)} = e_1^* \wedge e_2^* = \frac{\sinh (ar)}{a} dr \wedge d\theta .
\end{equation}
In dealing with the Hodge Laplacian $(-\triangle) = d d^* + d^*d$ and the co-adjoint operator $d^*$ of $d$, we will encounter the Hodge-star operator $* : C^{\infty}(T^*\mathbb{H}^2(-a^2)) \rightarrow C^{\infty}(T^*\mathbb{H}^2(-a^2))$, which sends $1$-forms to $1$-forms on $\mathbb{H}^2(-a^2)$, and also the Hodge-star operator $* : C^{\infty}(\wedge^2T^*\mathbb{H}^2(-a^2)) \rightarrow C^{\infty}(\mathbb{H}^2(-a^2))$, which sends $2$-forms to smooth functions on $\mathbb{H}^2(-a^2)$ (see \textbf{Definition \ref{HodgeDefinition}} for the precise definitions of these Hodge-star operators).\\
\noindent
Indeed, the Hodge-star operator $* : C^{\infty}(T^*\mathbb{H}^2(-a^2)) \rightarrow C^{\infty}(T^*\mathbb{H}^2(-a^2))$ can be locally expressed in the following way, through the use of the orthonormal co-frame $\{e_1^* , e_2^*\}$ as specified in \eqref{coframehyper}.
\begin{equation}\label{Hodgerotathyp}
\begin{split}
* e_1^* & = e_2^* ,\\
* e_2^* & = -e_1^* .
\end{split}
\end{equation}
Then, in accordance with the tensorial property $*(f \omega ) = f *( \omega )$ of the Hodge-star operator $*$, where $f$ is a smooth function and $\omega$ is a differential form on $\mathbb{H}^2(-a^2)$, it is plain to see from \eqref{Hodgerotathyp} that
$* dr = \frac{\sinh (ar)}{a} d \theta $, and that $*d \theta = \frac{a}{\sinh (ar)} *e_2^* = -\frac{a}{\sinh (ar)}dr$. \\
\noindent
On the other hand, the Hodge-star operator $*: C^{\infty}(\wedge^2T^*\mathbb{H}^2(-a^2)) \rightarrow C^{\infty}(\mathbb{H}^2(-a^2))$, sending smooth $2$-forms to smooth functions on $\mathbb{H}^2(-a^2)$, is determined by the following single relation, where $Vol_{\mathbb{H}^2(-a^2)}$ is the standard volume form on $\mathbb{H}^2(-a^2)$.
\begin{equation}
* Vol_{\mathbb{H}^2(-a^2)} = 1 .
\end{equation}
\section{About stationary Navier-Stokes flows with circular-arc streamlines around an obstacle in $\mathbb{H}^2(-a^2)$ : The proof of Theorem \ref{existenceHyperbolic} }\label{hyperMAINSECTION}
\noindent
To begin the argument, let $K$ be a given compact region in $\mathbb{H}^2(-a^2)$ which is entirely contained in $\overline{B_O(\delta )}$, where
$B_O(\delta) = \{ p \in \mathbb{H}^2(-a^2) : d(p, O) < \delta \}$ is the open geodesic ball centered at $O$ with radius $\delta > 0$ in the hyperbolic manifold $\mathbb{H}^2(-a^2)$. Let $(r , \theta )$ be the normal polar coordinate system on $\mathbb{H}^2(-a^2)$ about the reference point $O \in \mathbb{H}^2(-a^2)$ as specified in \textbf{Definition \ref{polarDefn}}. Suppose further that $\partial K$ contains a circular-arc portion $C_{\delta , \tau } = \{ p \in \mathbb{H}^2(-a^2) : r(p) = d(p , O ) = \delta ,\ 0 < \theta (p) < \tau \}$, for some angle $\tau \in (0, 2\pi )$. Let $\frac{\partial}{\partial r}$, $\frac{\partial}{\partial \theta }$ be the two natural vector fields on $\mathbb{H}^2(-a^2)$ induced by the normal polar coordinate system $(r , \theta )$ (see \textbf{Definition \ref{FRAMEHyperDef}}). Under this setting, we now consider a velocity field of the following form, where $h \in C^{\infty} ( [0 , \epsilon_{0} ))$ is a smooth function defined on an interval $[0 , \epsilon_{0} )$ of length $\epsilon_{0} > 0$.
\begin{equation}\label{circulararchyperbolic}
u = -h(r- \delta ) e_{2} = - h(r-\delta ) \frac{a}{ \sinh (ar )} \frac{\partial}{\partial \theta } .
\end{equation}
Recall that $e_1 = \frac{\partial}{\partial r}$, $e_2 = \frac{a}{ \sinh (ar )} \frac{\partial}{\partial \theta }$ together constitute a positively oriented orthonormal moving frame $\{e_1 , e_2\}$ on $\mathbb{H}^2(-a^2)$, whose orthonormal coframe $\{ e_1^* , e_2^* \}$ is constituted by
the differential $1$-forms $e_1^* =dr$, and $e_2^* = \frac{\sinh (ar)}{a} d\theta$. So, the associated $1$-form $u^*$ of the velocity field $u$ in
\eqref{circulararchyperbolic} is just given by
\begin{equation}\label{associated1fromhyper}
u^* = -h(r- \delta ) e_{2}^* = - h(r-\delta ) \frac{\sinh (ar)}{a} d\theta .
\end{equation}
Notice that under this setting, both the velocity field $u$ as specified in \eqref{circulararchyperbolic} and its associated $1$-form $u^*$ are defined on the sector-shaped open region $R_{\delta , \tau , \epsilon_0 } = \{ p \in \mathbb{H}^2(-a^2) : \delta < d(p,O) < \delta + \epsilon_0 ,\ 0 < \theta (p) < \tau \}$ (the same open region as specified in \eqref{RegionHYPER}), whose boundary shares the circular-arc portion $C_{\delta , \tau }$ with $\partial K$.
In accordance with expression \eqref{associated1fromhyper} for $u^*$, we will compute $(-\triangle ) u^*$ and $\overline{\nabla}_{u}u$ step by step, just as we did in the spherical case.\\
\noindent{\bf Step 1 : Checking the divergence free property of the velocity field $u$ as specified in \eqref{circulararchyperbolic}. }
For $u$ as given in \eqref{circulararchyperbolic}, in order to verify the divergence-free property $d^* u^* = 0$ on the sector-shaped open region $R_{\delta , \tau , \epsilon_0 }$ of $\mathbb{H}^2(-a^2)$ as specified in \eqref{RegionHYPER}, we just carry out the following straightforward computation, which is \emph{formally} identical to the computation done in \eqref{divfreesphere}.
\begin{equation}
d^* u^* = -*d* \big \{ - h(r-\delta ) e_{2}^* \big \} = -*d \{ h(r-\delta ) dr\} = 0 .
\end{equation}
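The divergence-free property can also be double-checked in coordinates, using the classical formula $\operatorname{div} u = \frac{1}{\sqrt{\det g}}\,\partial_i\big(\sqrt{\det g}\, u^i\big)$ for the polar metric $g = dr^2 + \frac{\sinh^2(ar)}{a^2}d\theta^2$. A small sympy sketch (an independent check, not part of the text):

```python
import sympy as sp

a, r, theta, delta = sp.symbols('a r theta delta', positive=True)
h = sp.Function('h')

# In normal polar coordinates the metric is g = dr^2 + (sinh(ar)/a)^2 dtheta^2,
# so sqrt(det g) = sinh(ar)/a, and div u = (1/sqrt(g)) d_i (sqrt(g) u^i).
sqrt_g = sp.sinh(a*r)/a
u_r, u_theta = sp.Integer(0), -h(r - delta)*a/sp.sinh(a*r)   # u = -h e_2

div_u = (sp.diff(sqrt_g*u_r, r) + sp.diff(sqrt_g*u_theta, theta))/sqrt_g
print(sp.simplify(div_u))  # 0: u is divergence free
```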
\noindent{\bf Step $2$ : The computation of $(-\triangle ) u^*$, for $u$ as given in \eqref{circulararchyperbolic}.}
Since we have $d^* u^* = 0$ on the open region $R_{\delta , \tau , \epsilon_0 }$ of $\mathbb{H}^2(-a^2)$ as specified in \eqref{RegionHYPER}, it follows that $(-\triangle )u^* = (dd^* + d^* d)u^* = d^* du^*$. So, as in the spherical case, we first compute $du^*$, for the velocity field $u$ given in \eqref{circulararchyperbolic}, as follows.
\begin{equation}\label{vorticityDischyp}
\begin{split}
du^* & = -\frac{\partial}{\partial r} \bigg \{ \frac{\sinh (ar)}{a} h(r-\delta ) \bigg \} dr \wedge d\theta \\
& = - \frac{a}{\sinh (ar)} \frac{\partial}{\partial r} \bigg \{ \frac{\sinh (ar)}{a} h(r-\delta ) \bigg \} Vol_{\mathbb{H}^2(-a^2)} \\
& = - \bigg \{ h'(r-\delta ) + \frac{a \cosh (ar)}{\sinh (ar)} h(r-\delta ) \bigg \} Vol_{\mathbb{H}^2(-a^2)} ,
\end{split}
\end{equation}
where we recall that $Vol_{\mathbb{H}^2(-a^2)} = e_1^* \wedge e_2^* = \frac{\sinh (ar)}{a} dr \wedge d\theta$ is the volume form on $\mathbb{H}^2(-a^2)$. For $u$ as given in \eqref{circulararchyperbolic}, we now compute $(-\triangle )u^* = d^* du^*$ as follows.
\begin{equation}\label{Laplacianhyper}
\begin{split}
(-\triangle )u^* & = *d* \bigg \{ h'(r-\delta ) + \frac{a \cosh (ar)}{\sinh (ar)} h(r-\delta ) \bigg \} Vol_{\mathbb{H}^2(-a^2)} \\
& = * d \bigg \{ h'(r-\delta ) + \frac{a \cosh (ar)}{\sinh (ar)} h(r-\delta ) \bigg \} \\
& = \bigg \{ h''(r- \delta ) + \frac{a \cosh (ar)}{\sinh (ar)} h'(r-\delta ) + \frac{\partial}{\partial r}\bigg (\frac{a \cosh (ar)}{\sinh (ar)} \bigg ) h(r-\delta ) \bigg \} * dr \\
& = \bigg \{ \frac{\sinh (ar)}{a} h''(r-\delta ) + \cosh (ar) h'(r-\delta ) - \frac{a}{\sinh (ar)} h(r-\delta ) \bigg \} d\theta ,
\end{split}
\end{equation}
in which the second equal sign holds due to the fact that $* Vol_{\mathbb{H}^2(-a^2)} = 1$, and the last equal sign holds since
$* dr = e_2^* = \frac{\sinh (ar)}{a} d\theta$.
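As an independent symbolic check of \eqref{Laplacianhyper} (assuming the sign conventions used above), one can reproduce the $d\theta$-coefficient of $(-\triangle)u^*$ from the $dr\wedge d\theta$-coefficient of $du^*$ computed in \eqref{vorticityDischyp}. A sympy sketch, not part of the text's derivation:

```python
import sympy as sp

a, r, delta = sp.symbols('a r delta', positive=True)
h = sp.Function('h')
H = h(r - delta)
sh, ch = sp.sinh(a*r), sp.cosh(a*r)

# du* = -(h' + a coth(ar) h) Vol; following the steps of the displayed
# computation, (-Laplacian)u* has dtheta-coefficient (sinh(ar)/a) * D'(r),
# where D(r) = h'(r - delta) + a coth(ar) h(r - delta).
D = sp.diff(H, r) + a*ch/sh*H
computed = sh/a*sp.diff(D, r)

# The claimed closed form from the last line of the computation:
claimed = sh/a*sp.diff(H, r, 2) + ch*sp.diff(H, r) - a/sh*H
print(sp.simplify(sp.expand(computed - claimed)))  # 0
```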
To prepare for the proof of Theorem \ref{existenceHyperbolic}, we will need the expression of $d \big \{ (-\triangle )u^* -2Ric (u^*) \big \}$, for $u$ as specified in \eqref{circulararchyperbolic}. Since $Ric (X^*) = -a^2 X^*$ always holds for any smooth vector field $X$ on $\mathbb{H}^2(-a^2)$, it follows from \eqref{Laplacianhyper} and a direct computation that we have the following expression of $d \big \{ (-\triangle )u^* -2Ric (u^*) \big \}$, for $u$ as specified in \eqref{circulararchyperbolic}.
\begin{equation}\label{VorticityhyperDisc}
\begin{split}
d \big \{ (-\triangle )u^* -2Ric (u^*) \big \} & = \bigg \{ \frac{\sinh (ar)}{a}h'''(r-\delta ) + 2 \cosh (ar) h''(r-\delta ) \\
& -a\bigg ( \sinh (ar) + \frac{1}{\sinh (ar)} \bigg ) h'(r-\delta ) \\
& + a^2\cosh (ar) \bigg ( \frac{1}{\sinh^2(ar)} -2 \bigg ) h(r-\delta ) \bigg \} dr \wedge d \theta .
\end{split}
\end{equation}
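The bracket in \eqref{VorticityhyperDisc} can likewise be verified symbolically, using $Ric(u^*) = -a^2u^*$ together with the $d\theta$-coefficient of $(-\triangle)u^*$ obtained above. This sympy sketch is only a sanity check of the displayed identity:

```python
import sympy as sp

a, r, delta = sp.symbols('a r delta', positive=True)
h = sp.Function('h')
H = h(r - delta)
sh, ch = sp.sinh(a*r), sp.cosh(a*r)

# dtheta-coefficient of (-Laplacian)u* - 2 Ric(u*) = (-Laplacian)u* + 2a^2 u*,
# with u* = -h(r - delta)(sinh(ar)/a) dtheta; applying d then multiplies this
# coefficient by d/dr (against dr wedge dtheta).
coeff = (sh/a*sp.diff(H, r, 2) + ch*sp.diff(H, r) - a/sh*H) + 2*a**2*(-H*sh/a)
computed = sp.diff(coeff, r)

# The claimed bracket of the displayed formula:
claimed = (sh/a*sp.diff(H, r, 3) + 2*ch*sp.diff(H, r, 2)
           - a*(sh + 1/sh)*sp.diff(H, r) + a**2*ch*(1/sh**2 - 2)*H)
print(sp.simplify(sp.expand(computed - claimed)))  # 0
```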
\noindent{\bf Step 3 : The computation of the nonlinear convection term $\overline{\nabla}_uu$, for $u$ as given in \eqref{circulararchyperbolic}.}
\noindent
To compute $\overline{\nabla}_uu$ for $u$ as given in \eqref{circulararchyperbolic}, we express $\overline{\nabla}_uu$ as follows.
\begin{equation}\label{convectionhyper}
\overline{\nabla}_uu = A \frac{\partial}{\partial r} + B \frac{\partial}{\partial \theta } ,
\end{equation}
where $A$ and $B$ are the two component functions on $\mathbb{H}^2(-a^2)$. To compute $B$, we take the inner product with $\frac{\partial}{\partial \theta }$ on both sides of \eqref{convectionhyper} and carry out the following computation.
\begin{equation}
\begin{split}
B \bigg (\frac{\sinh^2 (ar)}{a^2}\bigg ) & = g( \overline{\nabla}_uu , \frac{\partial}{\partial \theta } ) \\
& = -\bigg ( \frac{\sinh (ar)}{a} \bigg ) \frac{1}{h(r-\delta )} g( \overline{\nabla}_uu , u ) \\
& = -\bigg ( \frac{\sinh (ar)}{a} \bigg ) \frac{1}{2 h(r-\delta )} u \big ( |u|^2 \big ) \\
& = \frac{1}{2} \frac{\partial}{\partial \theta } \big ( (h(r-\delta ))^2 \big ) = 0 .
\end{split}
\end{equation}
In the above computation, the third equal sign follows from property \textbf{(1)} of the Levi-Civita connection $\overline{\nabla}$ as stated in \textbf{Definition \ref{LeviCivitadefinition}}. The symbol $u \big ( |u|^2 \big )$ stands for the derivative of the function $|u|^2$ along the direction of the vector field $u$.\\
\noindent
Next, we compute the component $A$ which appears in \eqref{convectionhyper}, by taking the inner product with $\frac{\partial}{\partial r}$ on both sides of \eqref{convectionhyper}, and we get
\begin{equation}
\begin{split}
A & = g( \overline{\nabla}_uu , \frac{\partial}{\partial r} ) \\
& = -h(r-\delta ) \frac{a}{\sinh(ar)} g ( \overline{\nabla}_{\frac{\partial}{\partial \theta } }u , \frac{\partial}{\partial r} ) \\
& = h(r-\delta ) \frac{a}{\sinh(ar)} g ( u , \overline{\nabla}_{\frac{\partial}{\partial \theta } } \frac{\partial}{\partial r} ) \\
& = h(r-\delta ) \frac{a}{\sinh(ar)} g ( u , \overline{\nabla}_{\frac{\partial}{\partial r}}\frac{\partial}{\partial \theta } ) .
\end{split}
\end{equation}
In the above computation, the second equal sign follows directly from the tensorial property (property \textbf{(4)} in \textbf{Definition \ref{LeviCivitadefinition}}) of $\overline{\nabla}$. The third equal sign follows from a direct application of property \textbf{(1)} in \textbf{Definition \ref{LeviCivitadefinition}} of $\overline{\nabla}$. Finally, the last equal sign follows from the torsion-free property (property \textbf{(2)} of \textbf{Definition \ref{LeviCivitadefinition}}) of $\overline{\nabla}$.\\
\noindent
Here, recall that $e_2 = \frac{a}{\sinh(ar)} \frac{\partial}{\partial \theta }$, once restricted to each geodesic ray starting from the base point $O \in \mathbb{H}^2(-a^2)$ of the normal polar coordinate system $(r,\theta )$, is \emph{parallel} along that geodesic ray. This simply means that we have $\overline{\nabla}_{\frac{\partial}{\partial r}} e_2 = 0$. Hence, we can carry out the following computation in accordance with the Leibniz rule of the connection $\overline{\nabla}$ (property \textbf{(3)} of \textbf{Definition \ref{LeviCivitadefinition}}).
\begin{equation}
\overline{\nabla}_{\frac{\partial}{\partial r}} \frac{\partial}{\partial \theta } = \frac{\partial}{\partial r} \bigg ( \frac{\sinh (ar)}{a}\bigg ) e_2 + \frac{\sinh (ar)}{a} \overline{\nabla}_{\frac{\partial}{\partial r}} e_2
= \cosh (ar) e_2 .
\end{equation}
Hence, it follows that the component function $A$ which appears in \eqref{convectionhyper} is given by
\begin{equation}
A = h(r-\delta ) \frac{a}{\sinh(ar)} g(u , \cosh (ar) e_2 ) = -a \bigg ( \frac{\cosh (ar)}{\sinh (ar)} \bigg ) \big ( h(r-\delta )\big )^2.
\end{equation}
So, it follows that for $u$ as given in \eqref{circulararchyperbolic}, the term $\overline{\nabla}_uu$ is given by
\begin{equation}
\overline{\nabla}_uu = -a \bigg ( \frac{\cosh (ar)}{\sinh (ar)} \bigg ) \big ( h(r-\delta )\big )^2 \frac{\partial}{\partial r} ,
\end{equation}
whose associated $1$-form $[\overline{\nabla}_uu]^*$ is given by
\begin{equation}\label{nonlinearconvectionhyper}
[\overline{\nabla}_uu]^* = -a \bigg ( \frac{\cosh (ar)}{\sinh (ar)} \bigg ) \big ( h(r-\delta )\big )^2 dr .
\end{equation}
So, by taking the operator $d$ on both sides of \eqref{nonlinearconvectionhyper}, it follows that the following relation holds on the sector-shaped region $R_{\delta , \tau , \epsilon_0}$ of $\mathbb{H}^2(-a^2)$ as specified in \eqref{RegionHYPER}, for $u$ given by \eqref{circulararchyperbolic}.
\begin{equation}\label{dconzerohyper}
d [\overline{\nabla}_uu]^* = 0 .
\end{equation}
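The computation of $\overline{\nabla}_u u$ in Step 3 can be cross-checked against the coordinate formula $(\overline{\nabla}_u u)^k = u^i\partial_i u^k + \Gamma^k_{ij}u^iu^j$, with the Christoffel symbols of the polar metric $g = dr^2 + \frac{\sinh^2(ar)}{a^2}d\theta^2$. A sympy sketch (an independent check, not part of the text's argument):

```python
import sympy as sp

a, r, theta, delta = sp.symbols('a r theta delta', positive=True)
h = sp.Function('h')
x = [r, theta]

# Metric of H^2(-a^2) in normal polar coordinates, and its inverse
g = sp.diag(1, sp.sinh(a*r)**2/a**2)
ginv = g.inv()

# Christoffel symbols Gamma[k][i][j] of the Levi-Civita connection
Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                           - sp.diff(g[i, j], x[l]))/2 for l in range(2))
           for j in range(2)] for i in range(2)] for k in range(2)]

# Components of u = -h(r - delta) (a/sinh(ar)) d/dtheta
u = [sp.Integer(0), -h(r - delta)*a/sp.sinh(a*r)]

# (nabla_u u)^k = u^i d_i u^k + Gamma^k_{ij} u^i u^j
conv = [sp.simplify(sum(u[i]*sp.diff(u[k], x[i]) for i in range(2))
                    + sum(Gamma[k][i][j]*u[i]*u[j]
                          for i in range(2) for j in range(2)))
        for k in range(2)]

claimed_r = -a*sp.cosh(a*r)/sp.sinh(a*r)*h(r - delta)**2
print(sp.simplify(conv[0] - claimed_r), conv[1])  # 0 0
```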
\noindent{\bf Step 4 : The proof of \textbf{Assertion I} in Theorem \ref{existenceHyperbolic}}
\noindent
Here, we will give a simple proof of \textbf{Assertion I} in Theorem \ref{existenceHyperbolic}, which states that for any quadratic profile $h(\lambda ) = \alpha_1 \lambda - \frac{\alpha_2}{2} \lambda^2$, with prescribed constants $\alpha_1 > 0$ and $\alpha_2 > 0$, the velocity field $u$ as specified in \eqref{circulararchyperbolic} does not satisfy equation \eqref{NavierStokeshyperbolic} on the sector-shaped region $R_{\delta , \tau , \epsilon_0 }$ of $\mathbb{H}^2(-a^2)$ as specified in \eqref{RegionHYPER}, regardless of how small $\epsilon_0 > 0$ is.
Now, assume towards contradiction that for a certain choice of constants $\alpha_1 >0$, $\alpha_2 > 0$, the velocity field $u$ as given in \eqref{circulararchyperbolic} does satisfy equation \eqref{NavierStokeshyperbolic} on the sector-shaped region $R_{\delta , \tau , \epsilon_0 }$ of $\mathbb{H}^2(-a^2)$ as given in \eqref{RegionHYPER}, for some $\epsilon_0 > 0$. Then, for such a $u$ as given in \eqref{circulararchyperbolic}, we take the operator $d$ on both sides of the main equation in \eqref{NavierStokeshyperbolic}, and deduce the following vorticity equation from \eqref{vorticityDischyp} and \eqref{dconzerohyper}.
\begin{equation}\label{VorticityequationDischyp}
\begin{split}
0 & = d \bigg \{ \nu \big ( (-\triangle )u^* - 2Ric(u^*) \big ) + [\overline{\nabla}_uu]^* + dP \bigg \} \\
& = \nu d\bigg \{ \big ( (-\triangle )u^* - 2Ric(u^*) \big ) \bigg \} \\
& = \nu \bigg \{ \frac{\sinh (ar)}{a}h'''(r-\delta ) + 2 \cosh (ar) h''(r-\delta ) \\
& -a\bigg ( \sinh (ar) + \frac{1}{\sinh (ar)} \bigg ) h'(r-\delta ) \\
& + a^2\cosh (ar) \bigg ( \frac{1}{\sinh^2(ar)} -2 \bigg ) h(r-\delta ) \bigg \} dr \wedge d \theta .
\end{split}
\end{equation}
For the quadratic profile $h(\lambda ) = \alpha_1 \lambda -\frac{\alpha_2}{2} \lambda^2$, the vorticity equation \eqref{VorticityequationDischyp} reduces to the following form
\begin{equation}\label{Vortsimplehyper}
0 = G_{\alpha_1 , \alpha_2 , \delta} (r) dr \wedge d \theta ,
\end{equation}
where the function $G_{\alpha_1 , \alpha_2 , \delta} (r)$ is given by the following expression.
\begin{equation}\label{TrivialhyperDisc}
\begin{split}
G_{\alpha_1 , \alpha_2 , \delta} (r) & = -2\alpha_2 \cosh (ar) -a \bigg ( \sinh (ar) + \frac{1}{\sinh(ar)} \bigg ) \big ( \alpha_1 - \alpha_2 (r-\delta ) \big ) \\
& + a^2 \cosh (ar) \bigg ( \frac{1}{\sinh^2(ar)} -2 \bigg ) \bigg ( \alpha_1 (r-\delta ) -\frac{\alpha_2}{2} (r-\delta )^2 \bigg ) .
\end{split}
\end{equation}
So, if $u = - h(r-\delta ) \frac{a}{\sinh (ar)} \frac{\partial}{\partial \theta }$ does satisfy equation \eqref{NavierStokeshyperbolic} on the sector-shaped region $R_{\delta , \tau , \epsilon_0 }$ of $\mathbb{H}^2(-a^2)$ as specified in \eqref{RegionHYPER}, then it must follow, in accordance with \eqref{Vortsimplehyper}, that the function $G_{\alpha_1 , \alpha_2 , \delta} (r)$ vanishes identically on the interval $(\delta , \delta + \epsilon_0 )$. However, we observe that
\begin{equation}\label{Gnegative}
G_{\alpha_1 , \alpha_2 , \delta} (\delta ) = -2\alpha_2 \cosh (a \delta ) - a \alpha_1 \bigg ( \sinh(a\delta ) + \frac{1}{\sinh (a\delta )} \bigg ) < 0 .
\end{equation}
Since $G_{\alpha_1 , \alpha_2 , \delta}$ is continuous on $[\delta ,\infty )$, \eqref{Gnegative} immediately implies that $G_{\alpha_1 , \alpha_2 , \delta}< 0 $ holds on some interval $[\delta , \delta + \epsilon_1)$, for some $0 < \epsilon_1 < \epsilon_0$, which directly violates \eqref{Vortsimplehyper}. A contradiction has been reached, and this completes the proof of \textbf{Assertion I} of Theorem \ref{existenceHyperbolic}.\\
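For the quadratic profile, both the boundary value \eqref{Gnegative} and its negativity can be confirmed symbolically with sympy; the numerical values $a=\delta=\alpha_1=\alpha_2=1$ below are an arbitrary illustration, not taken from the text:

```python
import sympy as sp

a, r, delta, al1, al2 = sp.symbols('a r delta alpha_1 alpha_2', positive=True)
lam = r - delta
h = al1*lam - al2/2*lam**2     # quadratic profile h(lam) = a1*lam - (a2/2)*lam^2
sh, ch = sp.sinh(a*r), sp.cosh(a*r)

# Bracket of the vorticity equation with this profile plugged in
G = (sh/a*sp.diff(h, r, 3) + 2*ch*sp.diff(h, r, 2)
     - a*(sh + 1/sh)*sp.diff(h, r) + a**2*ch*(1/sh**2 - 2)*h)

G_delta = sp.simplify(G.subs(r, delta))
expected = -2*al2*sp.cosh(a*delta) - a*al1*(sp.sinh(a*delta) + 1/sp.sinh(a*delta))
print(sp.simplify(G_delta - expected))                          # 0
print(float(G_delta.subs({a: 1, delta: 1, al1: 1, al2: 1})))    # negative
```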
\noindent{\bf Step 5 : The proof of Assertion II in Theorem \ref{existenceHyperbolic}}
\noindent
Here, for any prescribed positive constants $\alpha_1 > 0$ and $\alpha_2>0$, we consider the velocity field $u = - Y(r) \frac{a}{\sinh (ar)} \frac{\partial}{\partial \theta }$ on the region $\Omega_{\delta , \tau }$ as given in \eqref{OMEGAhyper} of Theorem \ref{existenceHyperbolic}, where $Y \in C^{\infty } ([\delta ,\infty ))$ is a smooth function on $[\delta ,\infty )$ satisfying $Y(\delta ) =0$, $Y'(\delta ) = \alpha_1$, and $Y''(\delta ) = -\alpha_2$. Since the sector-shaped region $\Omega_{\delta , \tau }$ as given in \eqref{OMEGAhyper} is simply connected in $\mathbb{H}^2(-a^2)$, we know that such a velocity field
$u = - Y(r) \frac{a}{\sinh (ar)} \frac{\partial}{\partial \theta }$ will satisfy equation \eqref{NavierStokeshyperbolic} with some globally defined smooth pressure $P$ on $\Omega_{\delta , \tau }$ \emph{if and only if} the vorticity equation \eqref{VorticityequationDischyp} holds on the simply connected region $\Omega_{\delta , \tau }$ of $\mathbb{H}^2(-a^2)$. However, saying that the vorticity equation \eqref{VorticityequationDischyp} holds on the simply connected region $\Omega_{\delta , \tau }$ as specified in \eqref{OMEGAhyper} of Theorem \ref{existenceHyperbolic} is equivalent to saying that the smooth function $Y \in C^{\infty } ([\delta ,\infty ))$ is a solution to the following third-order O.D.E. on $[\delta ,\infty )$ with initial conditions $Y (\delta ) = 0$, $Y'(\delta )= \alpha_1$, and $Y''(\delta ) =-\alpha_2$.
\begin{equation}\label{ODEhyperbolicDisc}
\begin{split}
0 & = \frac{\sinh (ar)}{a}Y'''(r) + 2 \cosh (ar) Y''(r) -a\bigg ( \sinh (ar) + \frac{1}{\sinh (ar)} \bigg ) Y'(r) \\
& + a^2\cosh (ar) \bigg ( \frac{1}{\sinh^2(ar)} -2 \bigg ) Y(r).
\end{split}
\end{equation}
However, in accordance with the basic existence theorem in the theory of linear O.D.E. (i.e.\ Theorem \ref{ODEtheorem}), we know that there exists a unique smooth solution $Y \in C^{\infty } ([\delta ,\infty ))$ to \eqref{ODEhyperbolicDisc} satisfying the initial conditions $Y (\delta ) = 0$, $Y'(\delta )= \alpha_1$, and $Y''(\delta ) =-\alpha_2$. Moreover, since the coefficient functions involved in \eqref{ODEhyperbolicDisc} are all real analytic on $[\delta , \infty )$, such a unique solution $Y \in C^{\infty } ([\delta ,\infty ))$ must also be \textbf{real-analytic} on $[\delta , \infty )$. So, according to these observations, we deduce that, for any prescribed positive constants $\alpha_1 > 0$ and $\alpha_2 >0$, there exists a unique smooth function $Y \in C^{\infty } ([\delta ,\infty ))$, necessarily real-analytic on $[\delta , \infty )$, with $Y (\delta ) = 0$, $Y'(\delta )= \alpha_1$, and $Y''(\delta ) =-\alpha_2$, such that $u = - Y(r) \frac{a}{\sinh (ar)} \frac{\partial}{\partial \theta }$ is a solution to equation \eqref{NavierStokeshyperbolic} on the simply connected region $\Omega_{\delta , \tau }$ as specified in \eqref{OMEGAhyper}. This completes the proof of \textbf{Assertion II} of Theorem \ref{existenceHyperbolic}.
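Although Assertion II rests on the classical existence theorem for linear O.D.E., the solution $Y$ of \eqref{ODEhyperbolicDisc} can also be produced numerically by rewriting the equation as a first-order system. The scipy sketch below uses the sample values $a=\delta=\alpha_1=\alpha_2=1$, which are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sample values (illustrative assumptions): curvature a, inner radius delta,
# and the prescribed initial data alpha1, alpha2.
a, delta, alpha1, alpha2 = 1.0, 1.0, 1.0, 1.0

def rhs(r, y):
    # First-order system for (Y, Y', Y''); the third-order ODE is solved
    # for Y''' after dividing by the leading coefficient sinh(ar)/a != 0.
    Y, Yp, Ypp = y
    sh, ch = np.sinh(a*r), np.cosh(a*r)
    Yppp = -(a/sh)*(2*ch*Ypp - a*(sh + 1/sh)*Yp
                    + a**2*ch*(1/sh**2 - 2)*Y)
    return [Yp, Ypp, Yppp]

sol = solve_ivp(rhs, [delta, delta + 5.0], [0.0, alpha1, -alpha2],
                dense_output=True, rtol=1e-10, atol=1e-12)
print(sol.success)  # the unique solution Y exists on [delta, delta + 5]
```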
\section{The ``Cartesian coordinate system'' on the hyperbolic space $\mathbb{H}^2(-a^2)$}\label{Cartesianhyperbolicsection}
\noindent
The purpose of this section is to introduce a natural coordinate system $\Phi : \mathbb{R}^2 \rightarrow \mathbb{H}^2(-a^2)$ on the hyperbolic space $\mathbb{H}^2(-a^2)$, with $\mathbb{R}^2 = \{(\tau , s) : \tau , s \in (-\infty , \infty )\}$ serving as the parameter space used to parameterize the manifold $\mathbb{H}^2(-a^2)$. Since the natural coordinate system $\Phi(\tau , s )$ which we proceed to construct here is the closest possible analog of the standard Cartesian coordinate system on the Euclidean space $\mathbb{R}^2$, we will simply refer to it as a ``Cartesian coordinate system'' on $\mathbb{H}^2(-a^2)$. Again, such a ``Cartesian coordinate system'' $\Phi (\tau , s)$ on $\mathbb{H}^2(-a^2)$ is another piece of standard knowledge in Riemannian geometry. Indeed, our discussion in this section can be viewed as a special case of the well-known procedure of constructing Jacobi fields along geodesics on a general Riemannian manifold, and we refer the interested reader to pages 185--193 of the textbook by Jost \cite{Jost}.\\
\noindent
To begin the construction, let $O \in \mathbb{H}^2(-a^2)$ be any selected point in the hyperbolic space $\mathbb{H}^2(-a^2)$ of constant negative sectional curvature $-a^2$. Recall that $\mathbb{H}^2(-a^2)$ (just as $\mathbb{R}^2$, or $S^2(a^2)$) is a symmetric space, in that \emph{its geometric structure looks exactly the same at any selected reference point, up to isometries preserving the Riemannian metric on $\mathbb{H}^2(-a^2)$}. So, we can select any reference point $O \in \mathbb{H}^2(-a^2)$, which will play the role of the origin of the Cartesian coordinate system introduced below.\\
With such a reference point $O$ in $\mathbb{H}^2(-a^2)$ chosen and fixed, consider a geodesic $c : (-\infty , \infty ) \rightarrow \mathbb{H}^2(-a^2)$ which passes through $O$ in that $c(0)=O$, and which travels towards the East direction as the parameter $\tau \in (-\infty , \infty )$ increases. Recall that a geodesic on a Riemannian manifold is really a straight line with respect to the Riemannian structure of that manifold. Such a geodesic $c(\tau )$ will play the role of the \emph{$x$-axis} of our Cartesian coordinate system on the hyperbolic space $\mathbb{H}^2(-a^2)$. In order to specify the appropriate $y$-axis, we regard $\mathbb{H}^2(-a^2)$ as an oriented manifold, with the orientation compatible with the anti-clockwise rotation. Then, choose $w \in T_{O}\mathbb{H}^2(-a^2)$ to be the unit vector which, together with $\dot{c}(0) = \frac{d}{d\tau }c|_{\tau =0} \in T_{O}\mathbb{H}^2(-a^2)$, constitutes a positively oriented \emph{orthonormal} basis $\{\dot{c}(0) , w \}$ of $T_{O}\mathbb{H}^2(-a^2)$ compatible with the anti-clockwise rotation on $T_{O}\mathbb{H}^2(-a^2)$. Next, we consider the geodesic $\gamma :(-\infty , \infty ) \rightarrow \mathbb{H}^2(-a^2)$ which satisfies $\gamma (0) = O $ and $\dot{\gamma}(0) = \frac{d}{ds}\gamma|_{s= 0}=w$. So, $\gamma$ is a straight line (i.e.\ geodesic) which passes through $O$ and which travels towards the North direction as the parameter $s \in (-\infty , \infty )$ increases. The geodesic $\gamma$, which intersects the geodesic $c$ orthogonally at the reference point $O$, will play the role of the $y$-axis of our Cartesian coordinate system on $\mathbb{H}^2(-a^2)$. For a technical purpose, consider $V(s)$ to be the \emph{parallel vector field} along the geodesic $\gamma$ which satisfies $V(0) = \dot{c}(0)$, in the sense specified in the following definition.
\begin{defn}
Let $\gamma : (a, b) \rightarrow M$ be a geodesic on an $N$-dimensional Riemannian manifold $M$. A parallel vector field $V(s)$ along $\gamma$ is a smooth map $s \in (a,b) \to V(s)\in T_{\gamma (s)}M $ for which the property $\overline{\nabla}_{\dot{\gamma}}V = 0$ holds on $(a,b)$. Here, $\overline{\nabla}$ is the Levi-Civita connection (covariant derivative) acting on the space of smooth vector fields on $M$.
\end{defn}
Now, with such a parallel vector field $V(s)$ along the geodesic $\gamma$ with $V(0) = \dot{c}(0)$, we consider, for each real value $s\in \mathbb{R}$, the geodesic $c_{s}: (-\infty , \infty ) \rightarrow \mathbb{H}^2(-a^2) $ which passes through the point $\gamma (s)$ in that $c_{s}(0) = \gamma (s)$, and which satisfies the condition $\dot{c_{s}}(0) = \frac{d}{d\tau }c_{s}|_{\tau =0} = V(s) \in T_{\gamma(s)}\mathbb{H}^2(-a^2)$. In accordance with the definition of the exponential map about a reference point on a Riemannian manifold, we can express the geodesic $c_{s}$ in terms of the exponential map $\exp_{\gamma (s)} : T_{\gamma (s)} \mathbb{H}^2(-a^2) \rightarrow \mathbb{H}^2(-a^2)$ as follows, where $\tau \in (-\infty , \infty )$ is the parameter of the geodesic $c_{s}$:
\begin{equation}
c_{s}(\tau ) = \exp_{\gamma (s)}(\tau V(s)) .
\end{equation}
We can now define the smooth bijective map $\Phi : \mathbb{R}^2 \rightarrow \mathbb{H}^2(-a^2)$ in accordance with the following rule, where $\tau ,s \in \mathbb{R}$ are arbitrary real parameters.
\begin{equation}\label{coordinatesystemhyper}
\Phi (\tau , s) = \exp_{\gamma (s)}(\tau V(s)) = c_{s}(\tau ) .
\end{equation}
Indeed, the smooth map $\Phi$, which maps the parameter space $\mathbb{R}^2$ bijectively onto $\mathbb{H}^2(-a^2)$ with smooth inverse $\Phi^{-1}$, is exactly the coordinate system which we introduce on the hyperbolic manifold $\mathbb{H}^2(-a^2)$. Next, by means of this natural coordinate system $\Phi( \tau ,s)$, we will define two natural vector fields $\frac{\partial}{\partial \tau }$ and $\frac{\partial}{\partial s}$ \textbf{on the hyperbolic space} $\mathbb{H}^2(-a^2)$ as follows.
\begin{defn}
$($Natural vector fields $\frac{\partial}{\partial \tau }$, and $\frac{\partial}{\partial s}$ on $\mathbb{H}^2(-a^2)$ via the coordinate system $\Phi(\tau ,s)$ as given in \eqref{coordinatesystemhyper}$)$.\\
For any point $p \in \mathbb{H}^2(-a^2)$, the vectors $\frac{\partial}{\partial \tau }|_{p}$, $\frac{\partial}{\partial s}|_{p}$ in the tangent space $T_{p}\mathbb{H}^2(-a^2)$ of $\mathbb{H}^2(-a^2)$ at $p$ are characterized (as linear derivations acting on smooth functions) by the following rules.
\begin{equation}\label{naturalrelationhyp}
\begin{split}
\frac{\partial}{\partial \tau } \bigg |_{p} f &= \frac{\partial}{\partial \tau }(f \circ \Phi ) \bigg |_{\Phi^{-1}(p)} , \\
\frac{\partial}{\partial s} \bigg |_{p} f & = \frac{\partial}{\partial s}(f \circ \Phi ) \bigg |_{\Phi^{-1}(p)} ,
\end{split}
\end{equation}
where $f \in C^{\infty}(\mathbb{H}^{2}(-a^2))$ is any smooth function on $\mathbb{H}^{2}(-a^2)$. Here, we remark that the same symbol $\frac{\partial}{\partial \tau}$ means totally \emph{different} things on the two sides of relation \eqref{naturalrelationhyp}. The symbol $\frac{\partial}{\partial \tau } |_{p}$ on the left stands for the vector in $T_{p}\mathbb{H}^2(-a^2)$ which is to be defined through the right-hand side, while the symbol $\frac{\partial}{\partial \tau }$ appearing on the right-hand side is just the ordinary partial derivative acting on the Euclidean space $\mathbb{R}^2$ at the point $\Phi^{-1}(p) \in \mathbb{R}^2$. The same remark also applies to the symbol $\frac{\partial}{\partial s}$ appearing in the second line of \eqref{naturalrelationhyp}.
\end{defn}
\noindent
In accordance with the above rigorous definition for the vector fields $\frac{\partial}{\partial \tau }$, and $\frac{\partial}{\partial s}$ on $\mathbb{H}^2(-a^2)$, for each pair of parameters $(\tau ,s) \in \mathbb{R}^2$, we can \emph{think of} the two vectors $\frac{\partial}{\partial \tau }|_{\Phi (\tau ,s)}$ and $\frac{\partial}{\partial s}|_{\Phi (\tau ,s)}$ in $T_{\Phi (\tau ,s )} \mathbb{H}^2(-a^2)$ in the following intuitive manner.
\begin{equation}\label{intuitivehyp}
\begin{split}
\frac{\partial}{\partial \tau } \bigg |_{\Phi (\tau ,s)} & = \partial_{\tau}\{ \exp_{\gamma (s)}(\tau V(s)) \} = \partial_{\tau } \{ c_{s}(\tau ) \} , \\
\frac{\partial}{\partial s} \bigg |_{\Phi (\tau ,s)} & = \partial_{s}\{ \exp_{\gamma (s)}(\tau V(s)) \}
\end{split}
\end{equation}
As a result of relation \eqref{intuitivehyp}, it follows that for any point $p \in \mathbb{H}^{2}(-a^2)$, which is parameterized by the pair $(\tau ,s) \in \mathbb{R}^2$ $($i.e. $(\tau ,s)= \Phi^{-1}(p)$ $)$, we will have the following relation
\begin{equation}
\frac{\partial}{\partial \tau } \bigg |_{p} = \dot{c_{s}}(\tau ) .
\end{equation}
Notice that $\dot{c_{s}}(\tau )$ itself is a tangential parallel vector field along the geodesic $c_{s}$. So, it turns out that $\frac{\partial}{\partial \tau }$ must be of unit length. That is, we have $\|\frac{\partial}{\partial \tau } \| = 1$ holds everywhere on $\mathbb{H}^{2}(-a^2)$.\\
\noindent
Next, in accordance with basic knowledge in Riemannian geometry, the vector field $\frac{\partial}{\partial s}$, when being restricted to each geodesic $c_{s}$, is the \emph{uniquely determined} Jacobi field $W^{(s)}(\tau )$
along $c_{s}$ satisfying the initial conditions $W^{(s)}(0) = \dot{\gamma}(s)$, and $\overline{\nabla}_{\dot{c_{s}}}W^{(s)}|_{\tau = 0} = 0 $, in the sense of the following definition.
\begin{defn}\label{Jacobifield}
On an $N$-dimensional Riemannian manifold $M$, let $c : (a,b) \rightarrow M$ be a geodesic. A vector field $X(\tau )$ along the geodesic $c(\tau )$ is called a Jacobi field along $c$ if $X$ satisfies the following Jacobi-field equation for all $\tau \in (a,b)$.
\begin{equation}
\overline{\nabla}_{\dot{c}}\overline{\nabla}_{\dot{c}} X + R(X,\dot{c})\dot{c} = 0 ,
\end{equation}
in which the symbol $\overline{\nabla}$ is again the Levi-Civita connection acting on the space of all smooth vector fields on $M$, and the symbol $R(\cdot ,\cdot )$ stands for the Riemannian curvature tensor which is defined in the following relation.
\begin{equation}
R(X,Y)Z = \overline{\nabla}_{X} \overline{\nabla}_{Y} Z - \overline{\nabla}_{Y} \overline{\nabla}_{X} Z - \overline{\nabla}_{[X,Y]}Z ,
\end{equation}
with $X$, $Y$, $Z$ smooth vector fields on $M$, and $[X, Y]$ the smooth vector field given by $[X,Y] = XY-YX$.
\end{defn}
\noindent
The above definition of Jacobi fields along a geodesic on a given manifold may look strange to readers who are not familiar with Riemannian geometry. A detailed discussion of the precise geometric meaning of the concept of Jacobi fields on a Riemannian manifold is outside the scope of this paper. Here, we simply mention that, on an intuitive level, the magnitude of a non-tangential Jacobi field along a geodesic encodes the growth rate of the spatial structure of the Riemannian manifold in the far range. Fortunately, the very special symmetric structure of $\mathbb{H}^2(-a^2)$ ensures that the Riemannian curvature tensor $R(\cdot, \cdot )$ on $\mathbb{H}^2(-a^2)$ satisfies the following simple relation.
\begin{equation}\label{simplecurvaturetensor}
R(X, \dot{c}) \dot{c} = -a^2 X ,
\end{equation}
where $c :(-\infty , \infty ) \rightarrow \mathbb{H}^2(-a^2)$ is a geodesic on $\mathbb{H}^2(-a^2)$, and $X$ is a smooth vector field along the geodesic $c$. So, in the case of the hyperbolic manifold $\mathbb{H}^2(-a^2)$, we can take relation \eqref{simplecurvaturetensor} for granted, and hence the Jacobi-field equation as in Definition \ref{Jacobifield} reduces to the following one for a smooth vector field $X$ defined along a geodesic $c$ on $\mathbb{H}^2(-a^2)$.
\begin{equation}\label{Jacobifieldshyper}
\overline{\nabla}_{\dot{c}}\overline{\nabla}_{\dot{c}} X - a^2 X = 0.
\end{equation}
With the Jacobi-field equation as in \eqref{Jacobifieldshyper}, we can give a geometric description of the vector field $\frac{\partial}{\partial s}$ as follows. For each fixed $s \in (-\infty , \infty )$, let $e_2^{(s)}(\tau )$ be the parallel vector field along the geodesic $c_{s}$ which satisfies the initial condition $e_2^{(s)}(0) = \frac{d}{ds}\gamma|_{s} = \dot{\gamma}(s) $. Recall that, from our construction, we have
\begin{equation}
\dot{c_{s}}(0) = \frac{d}{d\tau }c_{s}\bigg |_{\tau =0 } = V(s) ,
\end{equation}
where $V(s)$ is the parallel unit vector field along the geodesic $\gamma$ which satisfies $V(0) = \frac{d}{d\tau }c|_{\tau = 0}= \dot{c}(0)$. Since $V(s)$, as a parallel vector field along a geodesic, always preserves its angle with $\dot{\gamma}(s)$, the vector $\dot{c_{s}}(0) = V(s)$ must be \emph{orthogonal} to $\dot{\gamma}(s) = e_2^{(s)}(0)$ in the tangent space $T_{\gamma (s)}\mathbb{H}^2(-a^2)$. That is, we have $e_2^{(s)}(0)\perp \dot{c_{s}}(0)$. Then $e_2^{(s)}(\tau )$, as a parallel vector field along $c_{s}$, must preserve its right angle with $\dot{c_{s}}(\tau )$. That is, the parallel vector field $e_2^{(s)}(\tau )$ along $c_{s}$ is everywhere orthogonal to the geodesic $c_s$ itself. Now, since the coordinate system $\Phi : \mathbb{R}^2 \rightarrow \mathbb{H}^2(-a^2)$, as specified in \eqref{coordinatesystemhyper}, maps $\mathbb{R}^2$ bijectively onto $\mathbb{H}^2(-a^2)$, we know that each point $p \in \mathbb{H}^2(-a^2)$ is passed through by exactly one geodesic $c_{s}$. So, we can define a global smooth vector field $e_{2}$ on $\mathbb{H}^2(-a^2)$ by the following relation, where $\tau$, $s$ are arbitrary parameters.
\begin{equation}\label{e2hyper}
e_2(\Phi (\tau , s ) ) = e_2^{(s)}(\tau ) .
\end{equation}
Again, the vector field $e_2$ as defined above is everywhere orthonormal to $\frac{\partial}{\partial \tau }$ (just recall that $\frac{\partial}{\partial \tau }|_{\Phi(\tau ,s )} = \dot{c_s}(\tau )$ ). Now, we claim that the vector field $\frac{\partial}{\partial s}$ is related to $e_2$ through the following relation.
\begin{equation}\label{coshgrowth}
\frac{\partial}{\partial s} \bigg |_{\Phi(\tau , s)} = \cosh (a\tau ) e_2(\Phi (\tau ,s )) .
\end{equation}
To justify the above relation, we just have to recall that, for each $s \in (-\infty , \infty )$, the restriction of $\frac{\partial}{\partial s}$ on the geodesic $c_{s}$ is known to be the unique Jacobi field $W(\tau )$ along $c_{s}$ which satisfies the following initial conditions
\begin{equation}\label{initialvalues}
\begin{split}
W(0) & = \dot{\gamma}(s) ,\\
(\overline{\nabla}_{\dot{c}} W) \bigg |_{\tau = 0} & = 0.
\end{split}
\end{equation}
So, we just have to show that the vector field $W^{(s)}(\tau )$ along $c_s$ as \emph{defined} by $W^{(s)} (\tau ) = \cosh (a\tau ) e_2(\Phi (\tau ,s ))$ is also a Jacobi field satisfying the two initial conditions specified in \eqref{initialvalues}. Once this is done, the uniqueness property of Jacobi fields will immediately give the validity of relation \eqref{coshgrowth}. Now, observe that we must have $\overline{\nabla}_{\dot{c_s}} e_2^{(s)} = 0$ everywhere on $c_s$, simply because $e_2^{(s)}$ is a parallel vector field along $c_s$. So, by means of the product rule
$\overline{\nabla}_{X}(f Y) = (Xf) Y + f \overline{\nabla}_{X}Y$, which is one of the characteristic properties of any covariant derivative, it follows that
\begin{equation}
\overline{\nabla}_{\dot{c_s}}W^{(s)} = a \sinh (a\tau ) e_2^{(s)} ,
\end{equation}
from which we immediately deduce that $W^{(s)}$ satisfies the second initial condition as specified in \eqref{initialvalues}. Of course, $W^{(s)}$ clearly also satisfies the first condition in \eqref{initialvalues}. Now, by taking one more covariant derivative $\overline{\nabla}_{\dot{c_s}}$ on both sides of the above equation and using the fact that $\overline{\nabla}_{\dot{c_s}} e_2^{(s)} = 0$, we immediately obtain
\begin{equation}
\overline{\nabla}_{\dot{c_s}} \overline{\nabla}_{\dot{c_s}}W^{(s)} = a^2 \cosh (a\tau ) e_2^{(s)} = a^2 W^{(s)},
\end{equation}
from which we see immediately that $W^{(s)}$ clearly satisfies the Jacobi-field equation \eqref{Jacobifieldshyper}. As a result, $W^{(s)}$ is really a Jacobi field along $c_s$ which satisfies the same initial conditions as the Jacobi field $\frac{\partial}{\partial s}$ along $c_s$. Hence, by uniqueness, relation \eqref{coshgrowth} is true.
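Since $e_2^{(s)}$ is parallel, the computation above reduces to the scalar O.D.E. $w'' = a^2 w$ for the coefficient $w(\tau ) = \cosh (a\tau )$, with $w(0) = 1$ and $w'(0) = 0$. As a quick numerical sanity check of this reduction (a sketch only; the value of $a$ below is an arbitrary sample, not part of the argument), one can verify the scalar Jacobi equation by finite differences:

```python
import math

a = 0.7       # sample curvature parameter (assumption, for illustration only)
eps = 1e-4    # finite-difference step

def w(t):
    # scalar coefficient of the claimed Jacobi field W(tau) = cosh(a*tau) e_2
    return math.cosh(a * t)

for tau in [0.0, 0.5, 1.3, 2.0]:
    w2 = (w(tau + eps) - 2 * w(tau) + w(tau - eps)) / eps**2  # approx w''(tau)
    assert abs(w2 - a**2 * w(tau)) < 1e-5   # Jacobi equation: w'' = a^2 w

# initial conditions matching W(0) = gamma'(s) and (nabla W)(0) = 0
assert abs(w(0.0) - 1.0) < 1e-12
assert abs((w(eps) - w(-eps)) / (2 * eps)) < 1e-6
```

Such a check only illustrates the identity; the argument above is, of course, purely analytic.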
\section{The proof of Theorem \ref{parallelflowhyperbolicThm}: about parallel laminar flow along a straight edge of an obstacle in $\mathbb{H}^2(-a^2)$}\label{ProofofHypstraightSec}
\noindent
The goal of this section is to study parallel laminar flows along a geodesic, which represents the boundary (straight edge) of an obstacle, in the $2$-dimensional space form $\mathbb{H}^2(-a^2)$ with constant sectional curvature $-a^2$.\\
\noindent
Now, let us consider the ``Cartesian coordinate system'' $\Phi : \mathbb{R}^2 \rightarrow \mathbb{H}^2(-a^2)$ as given in \eqref{coordinatesystemhyper}, which we have just constructed in the previous section. For each point $p$, $\tau (p)$ and $s(p)$ stand for the first and second components of $\Phi^{-1}(p)$ in $\mathbb{R}^2$ respectively. That is, we have $\Phi^{-1}(p) = (\tau (p) , s(p))$. Then, we will consider the following solid region
\begin{equation}\label{K}
K = \{ p \in \mathbb{H}^2(-a^2) : \tau (p) \leqslant 0 \} ,
\end{equation}
which will represent a solid obstacle with the straight edge $\partial K$ along which we study parallel laminar flow under the ``no-slip'' condition. According to the setting as given in \textbf{Section \ref{Cartesianhyperbolicsection}}, the straight edge $\partial K$ is exactly the geodesic $\gamma$, which represents the ``$y$-axis'' of the ``Cartesian coordinate system'' $\Phi (\tau , s)$ on $\mathbb{H}^2(-a^2)$. Now, we consider the following vector field as defined on $\mathbb{H}^2(-a^2) - K = \{ p \in \mathbb{H}^2(-a^2) : \tau (p) > 0 \}$.
\begin{equation}\label{parallelhypone}
u (p) = - h (\tau (p)) e_{2}(p) = - h(\tau (p)) \frac{1}{\cosh (a\tau (p))} \frac{\partial}{\partial s}\bigg |_{p} .
\end{equation}
From now on, the two real-valued \emph{smooth functions} $p \mapsto \tau (p)$, and $p \mapsto s(p)$ on $\mathbb{H}^2(-a^2)$ will simply be denoted by $\tau$ and $s$ respectively. Here, we remind our readers that, from now on, the symbols $\tau$ and $s$ stand for the first- and second-component functions of the map $\Phi^{-1} : \mathbb{H}^2(-a^2) \rightarrow \mathbb{R}^2$. (So, our readers should not confuse them with the use of the same symbols ``$\tau $'' and ``$s$'' in $\mathbb{R}$ which represent the parameters of the geodesics $c_{s}$ and $\gamma$ as in the previous section.) With this convention for our notations, we can write expression \eqref{parallelhypone} in the following ``short-hand'' form.
\begin{equation}\label{parallelhyptwo}
u = -h(\tau ) e_2 = -h(\tau ) \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s} .
\end{equation}
Again, let $e_{2}$ be the globally defined smooth unit vector field on $\mathbb{H}^2(-a^2)$ which we constructed in \textbf{Section \ref{Cartesianhyperbolicsection}}. Recall that, in terms of the notations as specified in the previous section, $e_{2}$ is everywhere orthogonal to the unit vector field $\frac{\partial}{\partial \tau } = \dot{c_s}$. Now, let us denote the vector field $\frac{\partial}{\partial \tau}$ on $\mathbb{H}^2(-a^2)$ by the symbol $e_1$. On the other hand, we recall, from \eqref{coshgrowth} of the previous section, that we have
$e_{2} = \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s}$. Then, in accordance with our construction in the previous section, we know that
\begin{itemize}
\item $e_1 = \frac{\partial}{\partial \tau}$ and $e_2 = \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s}$ constitute a positively oriented orthonormal moving frame on $\mathbb{H}^2(-a^2)$.
\end{itemize}
Then, it follows that the associated $1$-forms $e_1^* = d\tau$, and $e_2^* = \cosh (a\tau ) ds$ will constitute an orthonormal coframe with respect to the induced Riemannian metric on the cotangent bundle $T^*\mathbb{H}^2(-a^2)$ of $\mathbb{H}^2(-a^2)$. Then, the volume form on $\mathbb{H}^2(-a^2)$ can be expressed as
\begin{equation}\label{volumehyptwo}
Vol_{\mathbb{H}^2(-a^2)} = e_1^*\wedge e_2^* = \cosh (a\tau ) d\tau \wedge ds .
\end{equation}
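Equivalently, since $\{e_1^*, e_2^*\} = \{d\tau , \cosh (a\tau )\, ds\}$ is an orthonormal coframe, the hyperbolic metric takes the following warped-product form in the coordinates $(\tau , s)$; we record this standard reformulation only for the reader's convenience.

```latex
g_{\mathbb{H}^2(-a^2)}
  = e_1^* \otimes e_1^* + e_2^* \otimes e_2^*
  = d\tau \otimes d\tau + \cosh^2(a\tau)\, ds \otimes ds .
```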
Then, the Hodge-star operator $* : C^{\infty}(\wedge^2T^* \mathbb{H}^2(-a^2)) \rightarrow C^{\infty}(\mathbb{H}^2(-a^2))$ sending $2$-forms back to the space of smooth functions is characterized by the \emph{defining} relation $* Vol_{\mathbb{H}^2(-a^2)} = 1 $, and the tensorial property $*(f\omega ) = f *\omega$, with $f$ a smooth function and $\omega$ a differential form. We also recall that the Hodge-star operator
$* : C^{\infty}(T^* \mathbb{H}^2(-a^2)) \rightarrow C^{\infty}(T^* \mathbb{H}^2(-a^2))$ sending $1$-forms to $1$-forms can now be represented by the following rules.
\begin{equation}\label{rotationhyper}
\begin{split}
*(d\tau ) = *e_1^* & = e_2^* = \cosh (a\tau ) ds \\
*e_2^* = -e_1^* &= -d\tau .
\end{split}
\end{equation}
Now, we can proceed to compute $(-\triangle )u^*$, where $u^* = g(u, \cdot )$ is the associated $1$-form of the vector field $u$ as specified in \eqref{parallelhyptwo}.
\noindent{\bf Step $1$: Checking the divergence free property $d^*u^* = 0$ for $u$ as given in \eqref{parallelhyptwo}.}
Recall that the operator $d^* = -*d*$ which sends $1$-forms on $\mathbb{H}^2(-a^2)$ to the space of smooth functions on $\mathbb{H}^2(-a^2)$ is just the operator $- div$ acting on the space of smooth vector fields on $\mathbb{H}^2(-a^2)$. So, the desired divergence free property for $u$ as given in \eqref{parallelhyptwo} is expressed in the form of $d^* u^* =0$, which can easily be obtained through the following computation.
\begin{equation}
\begin{split}
d^*u^* & = *d[h(\tau ) * e_2^*] \\
& = - * d [h(\tau ) d\tau ] \\
& = - * \left\{ \frac{\partial}{\partial s}(h(\tau )) ds \wedge d\tau \right\} \\
& = 0.
\end{split}
\end{equation}
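The same conclusion follows from the classical coordinate formula $\operatorname{div} u = \frac{1}{\sqrt{g}} \partial_i ( \sqrt{g}\, u^i )$ with $\sqrt{g} = \cosh (a\tau )$, $u^{\tau } = 0$, and $u^{s} = -h(\tau )/\cosh (a\tau )$: the only surviving term is an $s$-derivative of a function of $\tau$ alone. A small numerical sketch (the profile $h$ and the constant $a$ below are arbitrary samples chosen only for illustration):

```python
import math

a = 0.5                                  # sample curvature parameter (assumption)
def h(t): return 1.3 * t - 0.2 * t**2    # sample velocity profile (assumption)

def div_u(tau, s, eps=1e-5):
    # div u = (1/sqrt(g)) [ d/dtau (sqrt(g) u^tau) + d/ds (sqrt(g) u^s) ],
    # with sqrt(g) = cosh(a*tau), u^tau = 0, u^s = -h(tau)/cosh(a*tau)
    sqrt_g = math.cosh(a * tau)
    flux = lambda t, s_: math.cosh(a * t) * (-h(t) / math.cosh(a * t))
    d_ds = (flux(tau, s + eps) - flux(tau, s - eps)) / (2 * eps)
    return d_ds / sqrt_g

for tau, s in [(0.4, 0.0), (1.2, 2.0), (2.5, -1.0)]:
    assert abs(div_u(tau, s)) < 1e-12    # the flow is divergence-free
```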
\noindent{\bf Step $2$: The computation of $(-\triangle ) u^*$ for the velocity field $u$ given by \eqref{parallelhyptwo}.}
\noindent
Recall that the Hodge Laplacian $(-\triangle )$, which sends the space of smooth $1$-forms into itself, is given by $(-\triangle ) = dd^* + d^* d$. Since the velocity field $u$ as given in \eqref{parallelhyptwo} satisfies the divergence-free property $d^* u^* = 0$ on $\mathbb{H}^2(-a^2) - K$, with $K$ the solid obstacle with a straight-edge boundary as specified in \eqref{K}, it follows that we have the following relation.
\begin{equation}
(-\triangle )u^* = d^* du^* .
\end{equation}
Now, we first carry out the following computation for the $2$-form $du^*$, which represents the vorticity of $u$ on $\mathbb{H}^2(-a^2)-K$.
\begin{equation}
\begin{split}
du^* & = d \{ - h(\tau ) \cosh (a\tau ) ds \} \\
& = - \frac{\partial}{\partial \tau } [h(\tau ) \cosh (a\tau )] d\tau \wedge ds \\
& = -\frac{1}{\cosh (a\tau )} \frac{\partial}{\partial \tau } [h(\tau ) \cosh (a\tau )] Vol_{\mathbb{H}^2(-a^2)} ,
\end{split}
\end{equation}
where $Vol_{\mathbb{H}^2(-a^2)}$ is the volume-form on $\mathbb{H}^2(-a^2)$ as described in \eqref{volumehyptwo}. Since $* Vol_{\mathbb{H}^2(-a^2)} = 1$, it follows that
\begin{equation}\label{Viscosityhyper}
\begin{split}
(-\triangle )u^* & = d^* du^* \\
& = -*d* \bigg \{ -\frac{1}{\cosh (a\tau )} \frac{\partial}{\partial \tau } [h(\tau ) \cosh (a\tau )] Vol_{\mathbb{H}^2(-a^2)} \bigg \} \\
& = * d \bigg \{ \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial \tau } [h(\tau ) \cosh (a\tau )] \bigg \} \\
& = * d \bigg \{ h'(\tau ) + a h(\tau ) \cdot \frac{\sinh (a\tau )}{\cosh (a\tau )} \bigg \} \\
& = * \frac{\partial}{\partial \tau } \bigg \{ h'(\tau ) + a h(\tau ) \cdot \frac{\sinh (a\tau )}{\cosh (a\tau )} \bigg \} d \tau \\
& = \frac{\partial}{\partial \tau } \bigg \{ h'(\tau ) + a h(\tau ) \cdot \frac{\sinh (a\tau )}{\cosh (a\tau )} \bigg \} * d\tau \\
& = \frac{\partial}{\partial \tau } \bigg \{ h'(\tau ) + a h(\tau ) \cdot \frac{\sinh (a\tau )}{\cosh (a\tau )} \bigg \} \cosh (a\tau ) ds \\
& = \bigg \{ h''(\tau ) \cosh (a\tau ) + a \sinh (a \tau ) h'(\tau ) + \frac{a^2}{\cosh (a\tau )} h(\tau ) \bigg \} ds .
\end{split}
\end{equation}
In the above computation, the symbol $h'(\tau )$ stands for $h'(\tau ) = \frac{\partial h}{\partial \tau }$. In the same way, $h''(\tau )$ and $h'''(\tau)$ stand for $h''(\tau ) = \big (\frac{\partial}{\partial \tau }\big )^2 h$ and $h'''(\tau ) = \big ( \frac{\partial}{\partial \tau }\big )^3 h$ respectively. In the sixth equality of the above calculation, we have used the basic tensorial property $* (f \omega ) = f * (\omega )$ of the Hodge-star operator $*$, with $f$ a smooth function and $\omega$ a smooth $1$-form.
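Because the profile $h$ enters \eqref{Viscosityhyper} only through elementary calculus identities, the final coefficient of $ds$ can be cross-checked numerically: differentiate the bracket $h'(\tau ) + a h(\tau ) \tanh (a\tau )$ by finite differences and compare with the closed-form last line. The constants below are arbitrary samples, and the sketch is not part of the proof:

```python
import math

a, c1, c2 = 0.8, 1.5, 0.6                   # sample constants (assumptions)

def h(t):  return c1 * t - 0.5 * c2 * t**2  # sample quadratic profile
def h1(t): return c1 - c2 * t               # h'
def h2(t): return -c2                       # h''

def bracket(t):
    # sixth line of the computation: h' + a h sinh/cosh
    return h1(t) + a * h(t) * math.tanh(a * t)

eps = 1e-5
for tau in [0.3, 1.1, 2.4]:
    # seventh line: (d/dtau of the bracket) * cosh(a tau)
    lhs = (bracket(tau + eps) - bracket(tau - eps)) / (2 * eps) * math.cosh(a * tau)
    # last line: the claimed closed-form coefficient of ds
    rhs = (h2(tau) * math.cosh(a * tau)
           + a * math.sinh(a * tau) * h1(tau)
           + a**2 / math.cosh(a * tau) * h(tau))
    assert abs(lhs - rhs) < 1e-6
```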
\noindent{\bf Step 3: The computation of the nonlinear convection term $(\overline{\nabla}_{u}u)^*$, for $u$ as given in \eqref{parallelhyptwo}.}
\noindent
In the computation for the nonlinear convection term $(\overline{\nabla}_{u}u)^*$, it is convenient for us to work directly with the computation of $\overline{\nabla}_{u}u$ at the level of smooth vector fields. Recall that the natural vector fields $\frac{\partial}{\partial \tau }$ and
$\frac{\partial}{\partial s }$ are everywhere orthogonal to each other on $\mathbb{H}^2(-a^2)$. So, we now express the vector field $\overline{\nabla}_{u}u$ in terms of the linear combination of $\frac{\partial}{\partial \tau }$ and
$\frac{\partial}{\partial s }$ as follows.
\begin{equation}\label{LeviCivitahyper}
\overline{\nabla}_{u}u = A \frac{\partial}{\partial \tau } + B \frac{\partial}{\partial s },
\end{equation}
where $A$ and $B$ are smooth functions on $\mathbb{H}^2(-a^2)$ which we will now determine. First, by taking the inner product with $\frac{\partial}{\partial s }$ on both sides of \eqref{LeviCivitahyper}, we deduce that
\begin{equation}
\begin{split}
\cosh^2(a\tau ) B & = g(\overline{\nabla}_{u}u , \frac{\partial}{\partial s } ) \\
& = - \frac{\cosh (a\tau )}{h(\tau )} g(\overline{\nabla}_{u}u , u )\\
& = - \frac{\cosh (a\tau )}{h(\tau )} \cdot \frac{1}{2} u \big ( |u|^2 \big ) \\
& = \frac{1}{2} \frac{\partial}{\partial s } \big [ (h(\tau ))^2 \big ] = 0,
\end{split}
\end{equation}
from which we immediately get $B = 0$ on $\mathbb{H}^2(-a^2) - K$. We remark that in the above computation, the symbol $u \big ( |u|^2 \big )$ stands for \emph{the derivative of the function $|u|^2$ along the direction of the vector field $u$}. On the other hand, by taking the inner product with $\frac{\partial}{\partial \tau }$ on both sides of \eqref{LeviCivitahyper}, we can compute the smooth function $A$ as follows.
\begin{equation}
A = g( \overline{\nabla}_{u}u , \frac{\partial}{\partial \tau } )
= \frac{-h(\tau )}{\cosh (a\tau )} g( \overline{\nabla}_{\frac{\partial}{\partial s }}u , \frac{\partial}{\partial \tau } )
\end{equation}
However, since $0= g(u , \frac{\partial}{\partial \tau } )$ holds on $\mathbb{H}^2(-a^2) - K$, it follows that we have
\begin{equation}
0 = \frac{\partial}{\partial s } g(u , \frac{\partial}{\partial \tau } ) = g(\overline{\nabla}_{\frac{\partial}{\partial s }} u , \frac{\partial}{\partial \tau } ) + g(u , \overline{\nabla}_{\frac{\partial}{\partial s }} \frac{\partial}{\partial \tau } ) .
\end{equation}
Hence, we can resume the calculation for $A$ as follows.
\begin{equation}\label{C1hyper}
A = \frac{-h(\tau )}{\cosh (a\tau )} g( \overline{\nabla}_{\frac{\partial}{\partial s }}u , \frac{\partial}{\partial \tau } )
= \frac{h(\tau )}{\cosh (a\tau )} g(u , \overline{\nabla}_{\frac{\partial}{\partial s }} \frac{\partial}{\partial \tau } )
= \frac{h(\tau )}{\cosh (a\tau )} g(u , \overline{\nabla}_{\frac{\partial}{\partial \tau }} \frac{\partial}{\partial s } ) ,
\end{equation}
where in the last equality sign, we have used the property $\overline{\nabla}_{\frac{\partial}{\partial s }} \frac{\partial}{\partial \tau } = \overline{\nabla}_{\frac{\partial}{\partial \tau }} \frac{\partial}{\partial s }$, which follows from the torsion-free property of the Levi-Civita connection. Also, according to the Leibniz rule satisfied by the Levi-Civita connection, we can carry out the following computation for $\overline{\nabla}_{\frac{\partial}{\partial \tau }} \frac{\partial}{\partial s }$, with $\overline{\nabla}_{\frac{\partial}{\partial \tau }}e_{2} = 0$ (which is true since $e_2$ is parallel along each geodesic $c_s$) being taken into account in the calculation.
\begin{equation}\label{C2hyper}
\overline{\nabla}_{\frac{\partial}{\partial \tau }} \frac{\partial}{\partial s } =
\overline{\nabla}_{\frac{\partial}{\partial \tau }} \big [ \cosh (a \tau ) e_2 \big ] =
a\sinh (a\tau ) e_2 =
\frac{a \sinh(a\tau )}{\cosh(a\tau )} \frac{\partial}{\partial s } .
\end{equation}
Hence, it follows from \eqref{C1hyper}, and \eqref{C2hyper} that
\begin{equation}
A = \frac{h(\tau )}{\cosh (a\tau )} g\big ( - h(\tau ) e_2 , a \sinh(a\tau ) e_2 \big )
= \frac{-a \sinh(a\tau )}{\cosh (a\tau )} (h(\tau ))^2 .
\end{equation}
As a result, we finally conclude that
\begin{equation}
\overline{\nabla}_u u = \frac{-a \sinh(a\tau )}{\cosh (a\tau )} (h(\tau ))^2 \frac{\partial}{\partial \tau } ,
\end{equation}
with associated $1$-form $\big ( \overline{\nabla}_u u \big )^*$ to be given by
\begin{equation}\label{nonlinearhyper}
\big ( \overline{\nabla}_u u \big )^* = \frac{-a \sinh(a\tau )}{\cosh (a\tau )} (h(\tau ))^2 d\tau .
\end{equation}
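The same answer for $\overline{\nabla}_u u$ can be reached in coordinates: for the warped-product metric $g = d\tau^2 + \cosh^2 (a\tau )\, ds^2$ determined by the orthonormal coframe above, the only Christoffel symbol contributing for a field with $u^{\tau } = 0$ is $\Gamma^{\tau}_{ss} = -\frac{1}{2}\partial_{\tau} \cosh^2 (a\tau ) = -a \cosh (a\tau ) \sinh (a\tau )$, so that $(\overline{\nabla}_u u)^{\tau} = (u^{s})^2 \Gamma^{\tau}_{ss}$. A numerical sketch with sample values (assumed only for illustration):

```python
import math

a = 0.7                                  # sample curvature parameter (assumption)
def h(t): return 0.9 * t - 0.3 * t**2    # sample profile (assumption)

def convection_tau(tau):
    # (nabla_u u)^tau = (u^s)^2 * Gamma^tau_ss in the coordinates (tau, s)
    u_s = -h(tau) / math.cosh(a * tau)                     # s-component of u
    gamma_tau_ss = -a * math.cosh(a * tau) * math.sinh(a * tau)
    return u_s**2 * gamma_tau_ss

for tau in [0.3, 1.0, 1.8]:
    # compare with the coefficient of d/dtau found above
    claimed = -a * math.sinh(a * tau) / math.cosh(a * tau) * h(tau)**2
    assert abs(convection_tau(tau) - claimed) < 1e-12
```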
Now, for our forthcoming applications of these calculations in the proof of Theorem \ref{parallelflowhyperbolicThm}, we will need concrete expressions of $d \big ( (-\triangle )u^* -2 Ric (u^*) \big )$ and $d \big ( \overline{\nabla}_uu \big )^*$. Indeed, by taking into account the basic fact in Riemannian geometry that $Ric (X^*) = -a^2 X^*$ always holds for any smooth vector field $X$ on $\mathbb{H}^2(-a^2)$, we can use expression \eqref{Viscosityhyper} to derive the following expression for $d \big ( (-\triangle )u^* -2 Ric (u^*) \big )$.
\begin{equation}\label{vorticitytermhyper}
\begin{split}
&d \big \{(-\triangle )u^* -2Ric (u^*) \big \} \\
&= d\bigg \{ h''(\tau ) \cosh (a\tau ) + a \sinh (a \tau ) h'(\tau ) + a^2 \bigg ( \frac{1}{\cosh (a\tau )} - 2 \cosh (a\tau )\bigg ) h(\tau ) \bigg \} \wedge ds \\
&= \bigg \{ h'''(\tau ) \cosh (a\tau ) + 2a \sinh (a\tau ) h''(\tau ) - \frac{a^2\sinh^2(a\tau)}{\cosh(a\tau)}h'(\tau ) \\
& - a^3\sinh(a\tau ) \bigg ( 2 + \frac{1}{\cosh^2(a\tau )} \bigg ) h(\tau ) \bigg \} d\tau \wedge ds .
\end{split}
\end{equation}
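The differentiation leading to \eqref{vorticitytermhyper} combines several product-rule terms with the identity $\frac{1}{\cosh (a\tau )} - \cosh (a\tau ) = -\frac{\sinh^2 (a\tau )}{\cosh (a\tau )}$; it can be cross-checked by comparing a finite-difference derivative of the bracket with the expanded coefficient. The constants below are arbitrary samples (a sketch, not part of the proof):

```python
import math

a, p1, p2 = 0.9, 1.2, 0.7                 # sample constants (assumptions)

def h(t):  return p1 * t - 0.5 * p2 * t**2
def h1(t): return p1 - p2 * t             # h'
def h2(t): return -p2                     # h''
def h3(t): return 0.0                     # h''' (quadratic profile)

def bracket(t):
    # coefficient of ds before applying the exterior derivative d
    c, s = math.cosh(a * t), math.sinh(a * t)
    return h2(t) * c + a * s * h1(t) + a**2 * (1 / c - 2 * c) * h(t)

def expanded(t):
    # claimed coefficient of d tau ^ ds after differentiation
    c, s = math.cosh(a * t), math.sinh(a * t)
    return (h3(t) * c + 2 * a * s * h2(t)
            - a**2 * s**2 / c * h1(t)
            - a**3 * s * (2 + 1 / c**2) * h(t))

eps = 1e-5
for tau in [0.2, 0.9, 1.7]:
    numeric = (bracket(tau + eps) - bracket(tau - eps)) / (2 * eps)
    assert abs(numeric - expanded(tau)) < 1e-6
```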
Also, by applying the exterior differential $d$ to both sides of \eqref{nonlinearhyper}, we immediately obtain the following relation,
\begin{equation}\label{TRIVIALhyper}
d \big ( \overline{\nabla}_u u \big )^* = \frac{\partial}{\partial s}\bigg \{ \frac{-a \sinh(a\tau )}{\cosh (a\tau )} (h(\tau ))^2 \bigg \} ds \wedge d\tau = 0.
\end{equation}
\noindent{\bf Step 4: The proof of \textbf{Assertion I} in Theorem \ref{parallelflowhyperbolicThm}}\\
\noindent
With the preparations in the previous steps of this section, we are now ready to give a proof of \textbf{Assertion I} in Theorem \ref{parallelflowhyperbolicThm} as follows.\\
To begin the argument, assume towards a contradiction that the parallel laminar flow $u = - h(\tau ) \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s}$, with the quadratic profile $h(\tau ) = \alpha_1 \tau -\frac{\alpha_2}{2}\tau^2$, does satisfy equation \eqref{NavierStokeshyperbolic} on the simply-connected region $\Omega_{\tau_0} = \Phi \big ( (0,\tau_0 )\times \mathbb{R} \big )$ of $\mathbb{H}^2(-a^2)$, for some positive constants $\tau_0 > 0$, $\alpha_1 > 0$, and $\alpha_2 > 0$. That is, the associated one-form $u^* = -h(\tau ) \cosh (a\tau ) ds$ will satisfy the following stationary Navier-Stokes equation on $\Omega_{\tau_0} = \Phi \big ( (0,\tau_0 )\times \mathbb{R} \big )$.
\begin{equation}\label{NShyperbolictwo}
\nu ((-\triangle)u^* -2 Ric (u^*) ) + \big ( \overline{\nabla}_{u}u \big )^* + dP = 0 .
\end{equation}
Now, by taking the exterior differential operator $d$ on both sides of \eqref{NShyperbolictwo}, we deduce from \eqref{vorticitytermhyper} and \eqref{TRIVIALhyper} that the following vorticity equation would hold on $\Omega_{\tau_0} = \Phi \big ( (0,\tau_0 ) \times \mathbb{R} \big )$.
\begin{equation}\label{vorticityequationhyperplane}
\begin{split}
0 & = d \big \{ (-\triangle)u^* -2 Ric (u^*) \big \} \\
& = \bigg \{ h'''(\tau ) \cosh (a\tau ) + 2a\sinh (a\tau ) h''(\tau ) \\
&- \frac{a^2\sinh^2(a\tau )}{\cosh (a\tau )} h'(\tau )
- a^3 \sinh (a\tau ) \bigg ( 2+ \frac{1}{\cosh^2(a\tau ) }\bigg ) h(\tau ) \bigg \} d\tau \wedge ds .
\end{split}
\end{equation}
For convenience, we will consider the following smooth function
\begin{equation}\label{GoodFunctionhyperplane}
\begin{split}
F(\tau ) & = \bigg \{ 2a\sinh (a\tau ) h''(\tau )
- \frac{a^2\sinh^2(a\tau )}{\cosh (a\tau )} h'(\tau )
- a^3 \sinh (a\tau ) \bigg ( 2+ \frac{1}{\cosh^2(a\tau ) }\bigg ) h(\tau ) \bigg \} .
\end{split}
\end{equation}
Since $h'''(\tau ) = 0$ for all $\tau \in \mathbb{R}$ when $h(\tau ) = \alpha_1 \tau -\frac{\alpha_2}{2}\tau^2$, it follows that, for the velocity field $u = - h(\tau ) \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s}$ with this quadratic profile, equation \eqref{vorticityequationhyperplane} on $\Omega_{\tau_0} = \Phi \big ( (0,\tau_0 )\times \mathbb{R} \big ) $ is equivalent to the following equation on $\Omega_{\tau_0}$.
\begin{equation}\label{VorticityeqContradiction}
0 = F(\tau ) d\tau \wedge ds ,
\end{equation}
where $F(\tau )$ is the real-analytic function as given in \eqref{GoodFunctionhyperplane}. Since the differential $2$-form $d\tau \wedge ds$ is everywhere non-vanishing on $\mathbb{H}^2(-a^2)$, the validity of equation \eqref{VorticityeqContradiction} on $\Omega_{\tau_0} = \Phi \big ( (0,\tau_0 )\times \mathbb{R} \big )$ is equivalent to saying that
\begin{itemize}
\item The real-analytic function $F(\tau )$ as given in \eqref{GoodFunctionhyperplane} must vanish identically over $\Omega_{\tau_0} = \Phi \big ( (0,\tau_0 ) \times \mathbb{R} \big )$. That is, we should have $F (\tau ) = 0$ for all $\tau \in [0, \tau_0 )$.
\end{itemize}
So, to finish the proof of \textbf{Assertion I} of Theorem \ref{parallelflowhyperbolicThm}, we just need to arrive at a contradiction with the \emph{everywhere-vanishing property of $F(\tau )$ over the interval $[0, \tau_{0})$}. To achieve this, we simply compute the term $\frac{\partial F}{\partial \tau} \big |_{\tau = 0} = F'(0)$ as follows. Indeed, for the quadratic profile $h(\tau ) = \alpha_1 \tau -\frac{\alpha_2}{2}\tau^2$, we can carry out the following straightforward computations.
\begin{equation}\label{calculuscomphyper}
\begin{split}
\frac{\partial}{\partial \tau }\bigg \{ 2a\sinh (a\tau )h''(\tau ) \bigg \} \bigg |_{\tau = 0} & = 2a^2h''(0) = -2a^2\alpha_2 , \\
\frac{\partial}{\partial \tau }\bigg \{ - \frac{a^2\sinh^2(a\tau )}{\cosh (a\tau )} h'(\tau ) \bigg \} \bigg |_{\tau = 0} & = 0 ,\\
\frac{\partial}{\partial \tau }\bigg \{ - a^3 \sinh (a\tau ) \bigg ( 2+ \frac{1}{\cosh^2(a\tau ) }\bigg ) h(\tau ) \bigg \} \bigg |_{\tau = 0} & = 0 ,
\end{split}
\end{equation}
where in the above computation, we have taken the information $h(0)=0$ and $h''(0) = -\alpha_2$ into account. So, in light of \eqref{calculuscomphyper}, we can easily deduce from \eqref{GoodFunctionhyperplane} that we must have
\begin{equation}
\frac{\partial F}{\partial \tau} \big |_{\tau = 0} = F'(0) = -2a^2 \alpha_2 < 0 ,
\end{equation}
since $\alpha_2 > 0$ by our hypothesis. The above relation clearly indicates that the function $F(\tau )$ must be strictly decreasing in some small open interval $(-\epsilon_0 , \epsilon_0 )$ around the point $\tau =0$. This fact, together with $F(0) = 0$, implies that the function $F(\tau )$ must be strictly negative for all $\tau \in (0, \epsilon_0 )$. This clearly violates the everywhere-vanishing property: $F(\tau ) = 0$, for all $\tau \in [0, \tau_0 )$. So, we have derived a contradiction against the validity of the vorticity equation \eqref{VorticityeqContradiction}. Hence, we conclude that the velocity field $u = - h(\tau ) \frac{1}{\cosh (a\tau )} \frac{\partial}{\partial s}$, with the quadratic profile $h(\tau ) = \alpha_1 \tau -\frac{\alpha_2}{2}\tau^2$, is not a solution to equation \eqref{NavierStokeshyperbolic} on $\Omega_{\tau_0} = \Phi \big ((0,\tau_0 )\times \mathbb{R} \big )$, \emph{no matter which positive constants $\tau_0 > 0$, $\alpha_1 > 0$, $\alpha_2 > 0$ we take}.\\
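The two key facts used in the argument above, namely $F(0) = 0$ and $F'(0) = -2a^2 \alpha_2 < 0$, can also be confirmed numerically. The positive constants below are arbitrary samples (a sketch, not part of the proof):

```python
import math

a, alpha1, alpha2 = 0.6, 1.0, 0.5        # sample positive constants (assumptions)

def h(t):  return alpha1 * t - 0.5 * alpha2 * t**2
def h1(t): return alpha1 - alpha2 * t
def h2(t): return -alpha2

def F(t):
    # the function F(tau) from the vorticity equation (the h''' term drops out)
    c, s = math.cosh(a * t), math.sinh(a * t)
    return (2 * a * s * h2(t)
            - a**2 * s**2 / c * h1(t)
            - a**3 * s * (2 + 1 / c**2) * h(t))

assert abs(F(0.0)) < 1e-15               # F(0) = 0
eps = 1e-6
Fp0 = (F(eps) - F(-eps)) / (2 * eps)     # F'(0) by central difference
assert abs(Fp0 - (-2 * a**2 * alpha2)) < 1e-6
assert F(0.01) < 0                       # F dips below zero just right of tau = 0
```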
\noindent{\bf Step $5$: The proof of \textbf{Assertion II} in Theorem \ref{parallelflowhyperbolicThm}}\\
\noindent
To begin the proof of \textbf{Assertion II} in Theorem \ref{parallelflowhyperbolicThm}, recall that all the computations in \textbf{Section \ref{ProofofHypstraightSec}} up to \eqref{vorticitytermhyper} are valid for a velocity field $u = -Y(\tau ) \frac{1}{\cosh (a\tau )}\frac{\partial}{\partial s}$, with $Y (\tau )$ any smooth function on $[0,\infty )$. Now, we are interested in the question of whether a velocity field of the form $u = -Y(\tau ) \frac{1}{\cosh (a\tau )}\frac{\partial}{\partial s}$, with $Y :[0,\infty )\rightarrow \mathbb{R}$ a smooth function, is a solution to equation \eqref{NavierStokeshyperbolic} on the whole exterior region $\mathbb{H}^2(-a^2)-K = \{ \Phi (\tau , s) \in \mathbb{H}^2(-a^2) : \tau > 0 , s \in \mathbb{R} \}$, with the obstacle $K = \{\Phi (\tau ,s ) \in \mathbb{H}^2(-a^2) : \tau \leqslant 0 , s \in \mathbb{R} \}$. Since $ \mathbb{H}^2(-a^2)-K = \Phi \big ( (0,\infty )\times \mathbb{R} \big )$ is clearly simply connected, there exists a globally defined smooth function $P$ on $\mathbb{H}^2(-a^2)-K$ for which $u$ satisfies equation \eqref{NavierStokeshyperbolic} on $\mathbb{H}^2(-a^2)-K$ \emph{if and only if} $u = -Y(\tau ) \frac{1}{\cosh (a\tau )}\frac{\partial}{\partial s}$ satisfies the following vorticity equation on $\mathbb{H}^2(-a^2)-K$.
\begin{equation}\label{vorticityhyperLASTeq}
d \big \{ \nu \big ( (-\triangle ) u^* -2 Ric (u^*) \big ) + \big ( \overline{\nabla}_{u}u \big )^* \big \} = 0 ,
\end{equation}
which, in accordance with \eqref{vorticitytermhyper} and \eqref{TRIVIALhyper}, is equivalent to the following third-order O.D.E. with real-analytic coefficients in the variable $\tau \in \mathbb{R}$.
\begin{equation}\label{3rdODEhyperplane}
\begin{split}
0 & = Y'''(\tau ) \cosh (a\tau ) + 2a\sinh (a\tau ) Y''(\tau ) \\
& - \frac{a^2\sinh^2(a\tau )}{\cosh (a\tau )} Y'(\tau )
- a^3 \sinh (a\tau ) \bigg ( 2+ \frac{1}{\cosh^2(a\tau ) }\bigg ) Y(\tau ).
\end{split}
\end{equation}
In other words, $u = -Y(\tau ) \frac{1}{\cosh (a\tau )}\frac{\partial}{\partial s}$ will satisfy equation \eqref{NavierStokeshyperbolic} on the simply-connected region $\mathbb{H}^2(-a^2)-K$ with some globally defined pressure $P$ on $\mathbb{H}^2(-a^2)-K$ \emph{if and only if} the smooth function $Y \in C^{\infty }\big ( [0,\infty ) \big )$ is a solution to the third-order O.D.E. \eqref{3rdODEhyperplane}. However, in accordance with the basic existence theorem in the theory of linear O.D.E. $($i.e. Theorem \ref{ODEtheorem}$)$, we deduce that for any prescribed positive constants $\alpha_1 > 0$ and $\alpha_2 > 0$, there exists a unique smooth solution $Y \in C^{\infty}\big ( [0,\infty ) \big )$ to \eqref{3rdODEhyperplane} which satisfies the initial values $Y(0) =0$, $Y'(0) = \alpha_1$, and $Y''(0) = -\alpha_2$. Since the coefficients in the third-order O.D.E. \eqref{3rdODEhyperplane} are all real-analytic on $\mathbb{R}$, it turns out that this unique solution $Y \in C^{\infty}\big ( [0,\infty ) \big )$ to \eqref{3rdODEhyperplane} must itself also be real-analytic on $[0,\infty )$. With such a real-analytic solution $Y$ to \eqref{3rdODEhyperplane} available to us, we can now conclude that: for any given $\alpha_1 > 0$ and $\alpha_2 > 0$, the real-analytic solution $Y :[0,\infty ) \rightarrow \mathbb{R}$ to \eqref{3rdODEhyperplane} satisfying the initial values $Y(0) = 0$, $Y'(0) = \alpha_1$, $Y''(0) = - \alpha_2$ is the one and only smooth function on $[0,\infty )$ for which the velocity field $u = -Y(\tau ) \frac{1}{\cosh (a\tau )}\frac{\partial}{\partial s}$ satisfies equation \eqref{NavierStokeshyperbolic} $($with some globally defined pressure function $P$$)$ on the simply-connected open region $\mathbb{H}^2(-a^2)-K = \Phi \big( (0,\infty ) \times \mathbb{R} \big )$. This completes the proof of \textbf{Assertion II} of Theorem \ref{parallelflowhyperbolicThm}.
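The solution $Y$ produced by Theorem \ref{ODEtheorem} can be illustrated by integrating the third-order O.D.E. \eqref{3rdODEhyperplane} numerically from the initial data $Y(0) = 0$, $Y'(0) = \alpha_1$, $Y''(0) = -\alpha_2$. The sketch below rewrites \eqref{3rdODEhyperplane} as a first-order system for $(Y, Y', Y'')$ and applies a classical fourth-order Runge-Kutta step; the constants are arbitrary samples, and the numerics play no role in the proof:

```python
import math

a = 0.8                      # sample curvature parameter (assumption)
alpha1, alpha2 = 1.0, 0.4    # sample initial data (assumptions)

def rhs(t, y):
    # first-order system for (Y, Y', Y''); solve the O.D.E. for Y'''
    Y, Y1, Y2 = y
    c, s = math.cosh(a * t), math.sinh(a * t)
    Y3 = (-2 * a * s * Y2
          + (a**2 * s**2 / c) * Y1
          + a**3 * s * (2 + 1 / c**2) * Y) / c
    return (Y1, Y2, Y3)

def rk4_step(t, y, dt):
    # one classical Runge-Kutta step of order 4
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, tuple(yi + dt / 2 * ki for yi, ki in zip(y, k1)))
    k3 = rhs(t + dt / 2, tuple(yi + dt / 2 * ki for yi, ki in zip(y, k2)))
    k4 = rhs(t + dt, tuple(yi + dt * ki for yi, ki in zip(y, k3)))
    return tuple(yi + dt / 6 * (u + 2 * v + 2 * w + x)
                 for yi, u, v, w, x in zip(y, k1, k2, k3, k4))

def solve(t_end, n_steps=1000):
    dt = t_end / n_steps
    y = (0.0, alpha1, -alpha2)   # Y(0) = 0, Y'(0) = alpha1, Y''(0) = -alpha2
    for i in range(n_steps):
        y = rk4_step(i * dt, y, dt)
    return y

Y_end = solve(1.0)
assert all(math.isfinite(v) for v in Y_end)   # the profile exists on [0, 1]
assert 0.0 < Y_end[0] < 5.0                   # and stays positive there
```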
\section{Introduction}
The $^{133}$Ce nucleus has been the focus of extensive experimentation and theoretical investigations for a long time and a number of important collective phenomena have been uncovered. Most recently, this nucleus was studied using the $^{116}$Cd($^{22}$Ne,5$n$) reaction and the Gammasphere array \cite{133ce-a}, leading to the identification of three new dipole bands, which represent the first experimental evidence for the multiple chiral doublet bands (M$\chi$D) phenomenon. Prior works on $^{133}$Ce reported results mainly on the medium-spin states in this nucleus. In the earliest experiment, seven bands were identified and the level scheme extended up to spin 49/2 \cite{ma1987}. The observed rotational bands were discussed in the framework of the cranking model and configurations based on one- and three-quasiparticle excitations were assigned. One sequence with quadrupole transitions only was observed, but not linked to low-lying states. Its three-quasiparticle configuration was suggested to involve either the $ \nu i_{13/2}$ or the $ \nu f_{7/2}$ orbital. Subsequently, the lifetimes of the states of the $ \nu h_{11/2}$ yrast band and one of the three quasiparticle bands were measured and the results confirmed the previously proposed configuration assignments~\cite{emediato1997}.
The first study of the high-spin level structure of $^{133}$Ce was performed using the Gammasphere array, and revealed the existence of three superdeformed bands \cite{karl1995}. The interpretation in the cranking approximation suggested superdeformed configurations involving one $\nu i_{13/2}$ or $ \nu f_{7/2}$ neutron coupled to the $^{132}$Ce superdeformed core. In addition, six new rotational structures were identified at high spins, with characteristics of triaxial configurations \cite{karl1996}. However, none of these bands were linked to low-lying states. The measured lifetimes of one of these bands permitted the extraction of a transitional quadrupole moment of 2.2 $\it e$b, thus confirming the triaxial interpretation~\cite{joss1998}.
The present paper reports on new experimental results that relate to both high- and low-spin structures in $^{133}$Ce. First, the high-spin triaxial bands reported in Ref.~\cite{karl1996} are now firmly connected to low-lying states through the identification of several linking transitions, thereby establishing their excitation energies, spins and parities. However, many transitions of the previously reported triaxial sequences are now placed differently. The level scheme is also extended to a higher excitation energy and spin of 22.8 MeV and $93/2$ $\hbar$, respectively. Second, two dipole bands and four rotational sequences of $\Delta I=2$ transitions are newly identified, and the angular-distribution coefficients and anisotropies of several transitions have been determined. The observed collective structures are extensively discussed in the framework of the Cranked Nilsson-Strutinsky (CNS) model, as described in Refs.~\cite{ing-phys-rep,Ben85,Afa95,Car06}, and one band of dipole character is interpreted using the tilted axis cranking covariant density functional theory (TAC-CDFT)~\cite{Zhao2011Phys.Lett.B181,Meng2013FrontiersofPhysics55}. A consistent interpretation of most of the observed bands is achieved. The observed level structure of $^{133}$Ce illustrates the ability of nuclei in the $A=130$ mass region to acquire different shapes and to rotate around either the principal or tilted axes of the intrinsic frame, as is the case in the neighboring
$^{138-141}$Nd nuclei for which new results were recently reported~\cite{138-low,138-switch,138-high,140-high,141nd}.
\section{\label{sec-exp} Experimental details}
The present work documents new and extended results at medium and high spins and continues the study of the $^{133}$Ce nucleus, a salient feature of which, the observation of multiple chiral bands, was published previously~\cite{133ce-a}. Both studies are based on the same measurement and hence, the experimental procedure and the analysis methods are similar, but with more information provided here.
The experiment was performed at the ATLAS facility at Argonne National Laboratory. Medium- and high-spin states in $^{133}$Ce were populated in two separate experiments following the $^{116}$Cd($^{22}$Ne,5$n$) reaction. In the first, a 112-MeV beam of $^{22}$Ne bombarded a 1.48 mg/cm$^{2}$-thick target foil of isotopically enriched $^{116}$Cd, sandwiched between a 50 $\mathrm{\mu g/cm^{2}}$ thick front layer of Al and a 150 $\mathrm{\mu g/cm^{2}}$ Au backing. The second experiment used the same beam and a target of the same enrichment and thickness evaporated onto a 55 $\mathrm{\mu g/cm^{2}}$-thick Au foil. A combined total of $4.1\times10^{9}$ four- and higher-fold prompt $\gamma$-ray coincidence events were accumulated using the Gammasphere array~\cite{LEE1990c641}, which comprised 101 (88) active Compton-suppressed HPGe detectors during the first (second) experiment. The accumulated events were unfolded and sorted into fully symmetrized, three-dimensional ($E_{\gamma}$-$E_{\gamma}$-$E_{\gamma}$) and four-dimensional ($E_{\gamma}$-$E_{\gamma}$-$E_{\gamma}$-$E_{\gamma}$) histograms and analyzed using the \textsc{radware}~\cite{rad1,rad2} analysis package.
Multipolarity assignments were made on the basis of extensive angular-distribution measurements~\cite{Iacob199757} and, for weak transitions, on a two-point angular-correlation ratio, $R_{ac}$~\cite{KramerFlecken1989333,Chiara.75.054305}. The angular-distribution analysis was performed using coincidence matrices sorted such that energies of $\gamma$ rays detected at specific Gammasphere angles (measured with respect to the beam direction) $E_{\gamma}(\theta)$, were incremented on one axis, while the energies of coincident transitions detected at any angle, $E_{\gamma}(any)$, were placed on the other. To improve statistics, adjacent rings of Gammasphere and those corresponding to angles symmetric with respect to $90^\circ$ in the forward and backward hemispheres were combined. A total of seven matrices (with the angles 17.3$^\circ$, 34.6$^\circ$, 50.1$^\circ$, 58.3$^\circ$, 69.8$^\circ$, 80.0$^\circ$, and 90.0$^\circ$) were created. After gating on the $E_{\gamma}(any)$ axis, background-subtracted and efficiency-corrected spectra were generated. From these, the intensities of transitions of interest were extracted and fitted to the usual angular distribution function $W(\theta)=a_{0}[1+a_2P_2(\cos\theta) + a_4P_4(\cos\theta)]$, where $P_2$ and $P_4$ are Legendre polynomials. The extracted coefficients, $
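For illustration, the fitted angular-distribution function can be evaluated at the seven ring angles listed above; the coefficient values in the sketch below are arbitrary placeholders, not measured values from this work.

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Gammasphere ring angles used in the analysis (degrees).
angles_deg = [17.3, 34.6, 50.1, 58.3, 69.8, 80.0, 90.0]

def W(theta_deg, a0, a2, a4):
    """Angular distribution W(theta) = a0 [1 + a2 P2(cos t) + a4 P4(cos t)].

    legval evaluates sum_i c[i] P_i(x), so c = [1, 0, a2, 0, a4].
    """
    x = np.cos(np.radians(theta_deg))
    return a0 * legval(x, [1.0, 0.0, a2, 0.0, a4])

# Placeholder coefficients loosely typical of a stretched quadrupole
# transition (illustrative only; see the text for fitted values).
w = W(np.array(angles_deg), a0=1.0, a2=0.3, a4=-0.1)
```

Evaluating `W` on the angle array yields the intensity pattern against which the extracted peak intensities are fitted.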
ll}
i + \min\{n,j\} & \mbox{~~~if $i \bmod n = 0$},\\
n \lceil i/n \rceil & \mbox{~~~if $i \bmod n \neq 0$}\\
\end{array}
\right.
\]
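A minimal sketch of the displayed case analysis, treating $i$, $j$, and $n$ as given non-negative integers in the roles they play above; the function name `g_branch` is illustrative only.

```python
import math

def g_branch(i: int, j: int, n: int) -> int:
    """Evaluate the piecewise expression displayed above.

    If i mod n == 0 the value is i + min(n, j);
    otherwise it is n * ceil(i / n).
    """
    if i % n == 0:
        return i + min(n, j)
    return n * math.ceil(i / n)
</imports>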
Define $| \phi_t \rangle$ to be the state in the $t^{th}$ template
such that the superposition of states defined by the $0/1$ subscripts
of the computation bits corresponds to the superposition of
states in the circuit after $g(t)$ gates
have been applied, assuming that the input to the quantum computation
is $|0\rangle$. Thus, $| \phi_0 \rangle = T_R(Q_0)^n N^{L-n+1}$.
For each $t \in \{0, \ldots, T-1 \}$, ${\cal{F}}| \phi_t \rangle =
| \phi_{t+1} \rangle$ and
${\cal{F}}| \phi_T \rangle = 0$.
Also, for each $t \in \{1, \ldots, T \}$, ${\cal{B}} | \phi_t \rangle =
| \phi_{t-1} \rangle$ and
${\cal{B}}| \phi_0 \rangle = 0$.
Now define ${\cal{S}}$ to be the subspace spanned by the states
$|\phi_t \rangle$ for all $t \in \{0, \ldots, T \}$.
These states form an orthonormal basis of ${\cal{S}}$.
${\cal{S}}$ is closed under $H_{prop}$.
The matrix representation of $H_{prop}$ in the $|\phi \rangle$ basis
is exactly $P_{T+1}$.
Recall the definition of $P_r$ from Section \ref{sec:SpectralGaps}.
The unique ground state of
$H_{final}$ ( $= H_{prop}$) is
$$| \phi_{final} \rangle = \frac{1}{ \sqrt{T+1 }} \sum_{t=0}^{T} |\phi_{t} \rangle.$$
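As a quick numerical sanity check, the uniform superposition is annihilated by a tridiagonal propagation matrix. The sketch below assumes the standard form (diagonal entries $\tfrac12, 1, \ldots, 1, \tfrac12$, off-diagonal entries $-\tfrac12$), which may differ in normalization from the $P_r$ defined in Section \ref{sec:SpectralGaps}.

```python
import numpy as np

def prop_matrix(r: int) -> np.ndarray:
    """Tridiagonal propagation matrix: diagonal (1/2, 1, ..., 1, 1/2),
    off-diagonal entries -1/2 (a standard assumed form, half the path-graph
    Laplacian)."""
    P = np.eye(r) - 0.5 * (np.eye(r, k=1) + np.eye(r, k=-1))
    P[0, 0] = P[-1, -1] = 0.5
    return P

T = 20
P = prop_matrix(T + 1)
uniform = np.ones(T + 1) / np.sqrt(T + 1)   # the state |phi_final>
assert np.allclose(P @ uniform, 0)          # eigenvalue 0, as claimed
```

The matrix is positive semi-definite, so the uniform superposition is indeed its ground state.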
Now we need to define $H_{init}$:
$$H_{init} = (I - |T_R \rangle \langle T_R | )_0.$$
Note however, that while $H_{init}$ in its current form can distinguish between
$|\phi_0 \rangle$ and the other $|\phi \rangle$'s, it does not ensure that the
initial configuration starts out in ${\cal{S}}$.
To address this problem, we add an additional state $S$ (for Start) and some
extra constraints. The initial configuration will be
$T_R S^n N^{L-n+1}$.
We will add a constraint to $H_{init}$ that enforces the condition that if
particle $0$ is in state $T_R$, then particle $1$ is in state $S$.
This is achieved by adding in $|T_R X \rangle \langle T_R X |_0$,
where we sum over all states $X$ such that $X \neq S$.
Similarly for $i=1$ through $n-1$, we add the constraint that if
particle $i$ is in state $S$, then $i+1$ is also in state $S$.
We also add the constraint that if particle $n$ is in state $S$, then
particle $n+1$ is in state $N$. Finally for all $i=n+1$ through $L$,
we add the constraint that if particle $i$ is in state $N$, then particle
$i+1$ is in state $N$.
All of these constraints are added into $H_{init}$.
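The local implications enforced by $H_{init}$ can be checked classically. The sketch below (a hypothetical helper, encoding $T_R$ as the single character `T` and ignoring the $0/1$ subscripts) accepts exactly the strings obeying each implication.

```python
def satisfies_h_init(config: str, n: int) -> bool:
    """Check the local constraints added to H_init, character-wise:
    'T' (for T_R) at position 0 forces 'S' at position 1; an 'S' at
    position 1 <= i < n forces 'S' at i+1; 'S' at position n forces
    'N' at n+1; and 'N' propagates rightward."""
    for i, c in enumerate(config[:-1]):
        nxt = config[i + 1]
        if i == 0 and c == 'T' and nxt != 'S':
            return False
        if c == 'S' and 1 <= i < n and nxt != 'S':
            return False
        if c == 'S' and i == n and nxt != 'N':
            return False
        if c == 'N' and nxt != 'N':
            return False
    return True
```

Only the intended initial configuration $T_R S^n N^{\ldots}$ passes all checks among strings beginning with $T_R$.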
We need to alter $H_{prop}$ slightly in order to work with the $S$
states. In $H^0_{prop}$, we replace $H^0_{T_R Q \leftrightarrow FG}$
with $H^0_{T_R S \leftrightarrow F G_0}$. (Recall that the input is
all $0$'s). Then for $i=1$ through $n-1$, we replace
$H^i_{G Q \leftrightarrow BG}$ with $H^i_{G S \leftrightarrow B_0 G}$.
Templates $1$ through $n$ are now of the form
$F (B_0)^j G_0 S^{n-j-1} N^{L-n+1}$.
Lemma \ref{lem:state-change}
still holds. Furthermore, $H_{init}$ ensures that
there is only one eigenstate with eigenvalue $0$ and it
is our desired initial configuration.
\begin{lemma}
\label{lem:init}
The matrix representation of $H_{init}$ restricted to ${\cal{S}}$
and expressed in the basis of $| \phi_{t} \rangle$'s
is a diagonal matrix. The first diagonal entry is $0$ and the
others are $1$.
\end{lemma}
\proof
All templates start with $F$ except the first one,
which starts with $T_R$. All the templates satisfy the
other conditions in $H_{init}$.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
To establish the spectral gap of $H_{{\cal{S}}_0}(s)$, we simply
invoke Lemma \ref{lem:sgap} which says that
the spectral gap of the restriction of $H(s)$ to ${\cal{S}}_0$
satisfies $\Delta(H_{{\cal{S}}_0}(s)) = \Omega ( (Ln)^{-2})$.
Using the Adiabatic Theorem, we get that
the running time of the algorithm will be $O(||H(s)|| \epsilon^{-\delta}
(Ln)^{4+2 \delta} )$. Note that $||H(s)||$ is $O(L)$ which gives an overall
running time of $O(\epsilon^{-\delta} L (Ln)^{4+2 \delta} )$.
However, this only produces a final state that is $\epsilon$ close
to $ | \phi_{final} \rangle$. This is a superposition of the
$| \phi_t \rangle$'s and only $| \phi_{T} \rangle$
encodes the desired output.
This can be corrected by adding $L/\epsilon$ identity gates to the
computation, so that only a fraction $\epsilon$ of the
$| \phi_{t} \rangle$'s encode intermediate stages of the computation.
This makes the final running time
$O(\epsilon^{-(5+3\delta)} L (Ln)^{4+2 \delta} )$.
In order to reduce the number of states from 14 to 10,
we observe that we can identify pairs $(N,F)$,
$(T_R, T_L)$, $(B_0,Q_0)$ and $(B_1,Q_1)$ and Lemma \ref{lem:state-change}
still holds.
\section{8-local Hamiltonian on a Chain is QMA-complete}
We now turn to the local Hamiltonian promise problem.
We will continue to work with $13$-state particles as in the previous section.
(We will not use the $S$ state added at the end of the previous section).
The ground state will now encode the computation performed by a
quantum verifier $V$ which works on input $|x \xi \rangle$
for input $x$ and witness $\xi$.
The total length of the input is $n$ qubits.
We will construct a Hamiltonian such that if there exists a witness
$\xi$ such that $V$ accepts with probability at least $1 - \epsilon$
on input $|0 \xi \rangle$, then the lowest eigenvalue of $H$ will be
less than $\epsilon$.
However, if for every $\xi$, $V$ accepts with probability at most
$\epsilon$, then the lowest eigenvalue will be at least 1 over a
polynomial in $n$ and $L$.
For this problem, we need to show that there is a large spectral gap
over the entire space of the particles, not just when restricted
to a particular subspace.
The Hamiltonian $H$ will consist of five components:
$$H = H_{prop} + H_{input} + H_{out} + H_{valid}+ H_{legal}.$$
$H_{prop}$ will be the same as in the previous section (with the stipulation
that we do not use the changes made at the end of the
previous section to incorporate the additional state
$S$).
We will use the term {\em template} here to refer to any $L+2$-character
string over the alphabet $$\{ F, N, B, Q, L, R, G, T_R, T_L \}.$$
As before, each template will designate a subspace according to the
unspecified $0/1$ subscripts for the computation states.
This subspace will have dimension $2^m$ if the template has $m$
computation bits.
We will use various Hamiltonians to enforce that only certain templates
will be valid or legal.
We start with a set of constraints enforced in
$H_{valid}$. We will enforce through $H_{valid}$
that a template must have a form which is a string of
length $L+2$ and is specified by one of the three regular expressions:
$$F^+ B^* (R + G + L) Q^* N^+,~~~
F^* T_R Q^+ N^+, ~~~F^+ B^+ T_L N^+.$$
Another way to denote these constraints is that a string
corresponding to a valid template must be a path in the
graph in Figure \ref{fig:valid} from the Start node
to the End node consisting of $L+2$ internal nodes.
$R$, $L$, $G$ are grouped together for clarity.
A path through this node can use either $R$, $L$ or $G$.
\begin{figure}
\label{fig:valid}
\centerline{\psfig{figure=Valid.eps,width=5.0in}}
\caption{Graph indicating set of valid templates.}
\end{figure}
These constraints are enforced by having a set of allowed
pairs, where an edge in the
graph corresponds to an allowed pair.
We will need some additional constraints to enforce
that the first character must be $F$ or $T_R$ and that the last character
must be $N$.
For ease of notation, we will omit
the subscripts. Therefore, the pair $RQ$ represents four possible
pairs for the four possible combinations of subscripts.
When summing over a set, summing over all possible combinations of subscripts
will be assumed.
\begin{eqnarray*}
S & = & \{ FF, FB, BB, FR, FL, FG, BR, BL, BG, RQ, GQ, LQ, \\
& & QQ, QN, NN, RN, GN, LN, FT_R, T_R Q, BT_L, T_L N\}\\
\end{eqnarray*}
Another way to express the set $S$ is that it is the set below, where the character $X$
can be either $R$, $L$, or $G$:
\begin{eqnarray*}
S & = & \{ FF, FB, BB, FX, BX, XQ, QQ, QN, NN, XN, FT_R, T_R Q, BT_L, T_L N\}\\
\end{eqnarray*}
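One can verify mechanically that every string matching the regular expressions for valid templates decomposes into pairs from $S$. The sketch below encodes $T_R$ as `t` and $T_L$ as `u` (an encoding assumed for this sketch, with subscripts ignored), writing the patterns so that $T_R$ follows $F$ and $T_L$ follows $B$, as the pair set $S$ dictates.

```python
import re
from itertools import product

# Single-character encoding (an assumption of this sketch): t = T_R, u = T_L.
VALID = [re.compile(p) for p in
         (r'F+B*[RGL]Q*N+\Z', r'F*tQ+N+\Z', r'F+B+uN+\Z')]

# The allowed-pair set S, with X expanded over R, L, G.
PAIRS = set('FF FB BB QQ QN NN Ft tQ Bu uN'.split())
for X in 'RLG':
    PAIRS |= {'F' + X, 'B' + X, X + 'Q', X + 'N'}

def all_pairs_allowed(s: str) -> bool:
    """True if every adjacent pair of characters in s belongs to S."""
    return all(s[i:i + 2] in PAIRS for i in range(len(s) - 1))

# Exhaustive check on short strings: matching a regex implies allowed pairs.
for length in range(2, 6):
    for chars in product('FBRGLQNtu', repeat=length):
        s = ''.join(chars)
        if any(p.match(s) for p in VALID):
            assert all_pairs_allowed(s)
```

The exhaustive loop mirrors the one-to-one correspondence between allowed pairs and edges of the graph in Figure \ref{fig:valid}.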
There is a one-to-one correspondence between the above pairs and the
edges in Figure \ref{fig:valid}.
For any $i$ from $0$ to $L$, we have
$$ H^{i}_{valid} = I - \sum_{\alpha \beta \in S} | \alpha \beta \rangle \langle \alpha \beta |_{i,i+1}
. $$
We then sum these together and add some additional constraints at the beginning and end of the chain.
$$H_{valid} = (I - |F\rangle \langle F| - |T_R \rangle \langle T_R|)_0
+ (I - | N \rangle \langle N | )_{L+1} + \sum_{i=0}^L H^{i}_{valid} .$$
Note that $H_{valid}$ is a diagonal matrix with non-negative integers
along the diagonal.
\begin{lemma}
Consider a state $| \phi \rangle$ contained in a subspace
specified by a template. $H_{valid} | \phi \rangle = \lambda | \phi \rangle $
for some non-negative integer $\lambda$.
If $\lambda = 0$,
then the template must have a form specified by one of the following
three regular expressions:
$$F^+ B^* (R + G + L) Q^* N^+,~~~
F^* T_R Q^+ N^+, ~~~F^+ B^+ T_L N^+.$$
\end{lemma}
\proof
If $H_{valid} | \phi \rangle = 0$,
then every consecutive pair of characters in
the template containing $ |\phi \rangle$ must be an allowable pair.
Furthermore, the first character must be $F$ or $T_R$ and the last
character must be $N$.
The graph in Figure \ref{fig:valid}
has every allowable pair
labelled as a directed edge. Furthermore, the graph
enforces that any path from {\em Start} to
{\em End} must begin with an $F$ or a $T_R$ and
must end with an $N$.
Therefore, a valid template must correspond to a path that
starts at the {\em Start} node, ends at the {\em End}
node and traverses $L+2$ intermediate nodes.
The length of the template is enforced by the physical length
of the chain of particles.
The set of all such paths correspond to the three
regular expressions above with the constraint that the length must be
$L+2$.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
We will label each valid template with a pair $(m,t)$.
$m$ will designate the number of computation bits
in a template. We describe here
how to determine $t$ for a particular $m$.
We describe how to determine two integers $i$ and
$j$ and then let $t = i(2m+3) + j$. The labelling will have four
distinct cases, depending on the form of the template:
$$
\begin{array}{|c|c|c|}
\hline
F^k T_R Q^m N^{L-m-k+1} & i \leftarrow k & j \leftarrow 0 \\
\hline
F^{k+1} B^{l} (G + R) Q^{m-l-1} N^{L-m-k+1} & i \leftarrow k & j \leftarrow l+1 \\
\hline
F^{k+1} B^m T_L N^{L-m-k} & i \leftarrow k & j \leftarrow m+1 \\
\hline
F^{k+1} B^{m-l} L Q^{l} N^{L-m-k} & i \leftarrow k & j \leftarrow m+2+l \\
\hline
\end{array}
$$
Conversely, Figure \ref{fig:gen-templates} shows for a given pair
$(m,t)$ the form of the templates corresponding to that pair,
where $i = \lfloor t/(2m+3) \rfloor$ and $j = t \bmod (2m+3)$.
Note that for each pair, there is exactly one template unless
$1 \le j \le m$ in which case there are two templates, depending
on whether the particle in the control state is in a $G$ or an
$R$ state.
Define $T_m = (2m+3)(L-m)+m+1$.
Also note that for a given $m$, and any
$t \in \{0, \ldots, T_m \}$, $(m,t)$ corresponds to a valid template,
but having $t > T_m$ implies that the last particle
will not be in state $N$, which makes the template invalid.
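The labelling $t = i(2m+3)+j$, its inverse, and the bound $T_m$ can be exercised in a few lines; the function names are illustrative only.

```python
def label(i: int, j: int, m: int) -> int:
    """Combine block index i and offset j into t = i(2m+3) + j."""
    assert 0 <= j < 2 * m + 3
    return i * (2 * m + 3) + j

def unlabel(t: int, m: int):
    """Recover (i, j): i = floor(t/(2m+3)), j = t mod (2m+3)."""
    return divmod(t, 2 * m + 3)

def t_max(m: int, L: int) -> int:
    """T_m = (2m+3)(L-m) + m + 1, the label of the final template."""
    return (2 * m + 3) * (L - m) + m + 1
```

A round trip through `label` and `unlabel` recovers every pair, confirming that the labelling is a bijection onto $\{0, \ldots, T_m\}$ ranges.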
If a template is labelled $(m,0)$, then we will call it an
{\em initial} template.
If it is labelled $(m, T_m)$, then we will call it
a {\em final} template.
Figure \ref{fig:gen-templates} also shows
for each template which terms in $H_{prop}$
apply in the forward direction and which apply in the backward direction.
The condition $(boundary)$ indicates whether a pair straddles two
blocks of particles; that is, whether the pair of particles located in positions
$i$ and $i+1$ has $i \bmod n = 0$.
\begin{figure}
\label{fig:gen-templates}
$$
\begin{array}{|c|ccc|c|c|}
\hline
\mbox{Condition} & & \mbox{Template} & &\mbox{Forward} & \mbox{Backward}\\
\hline
\hline
& {\scriptstyle 0 \cdots i }& {\scriptstyle (i+1) \cdots (m+i) }& {\scriptstyle (m+i+1) \cdots (L+1)} & &\\
\hline
& & & & \scriptstyle{ if~(not ~~boundary)} & \\
j = 0 & {F \cdots F} ~T_R &{Q ~~~ \cdots ~~~~ Q}~ & N {N \cdots N} & T_R Q \rightarrow FR & FL \leftarrow FT_R \\
& & & & \scriptstyle{ if~(boundary)} & \\
& & & & T_R Q \rightarrow FG & \\
\hline
& & & & & \scriptstyle{ if~(not ~~boundary)} \\
j = 1
& {F \cdots F} ~F & R ~{Q \cdots QQ \cdots Q} & N {N \cdots N} & RQ \rightarrow BR & T_R Q \leftarrow FR \\
\hline
& & & & & \scriptstyle{ if~(boundary)} \\
j = 1 & {F \cdots F} ~F & G ~{Q \cdots QQ \cdots Q} & N {N \cdots N} & RQ \rightarrow BG & T_R Q \leftarrow FG \\
\hline
1 < j < n
& {F \cdots F} ~F & ~\underbrace{B \cdots B}_{j} R
~\underbrace{Q \cdots Q}_{m-j-1}& N~ {N \cdots N} & RQ \rightarrow BR & RQ \leftarrow BR\\
\hline
1 < j < n
& {F \cdots F} ~F & ~\underbrace{B \cdots B}_{j} G
~\underbrace{Q \cdots Q}_{m-j-1}& N~ {N \cdots N} & GQ \rightarrow BG & GQ \leftarrow BG \\
\hline
& & & & \scriptstyle{ if~(not ~~boundary)} &\\
j = n
& {F \cdots F} ~F & ~ {B \cdots B} R & N~ {N \cdots N} & RN \rightarrow BT_L & RQ \leftarrow BR\\
\hline
& & & & \scriptstyle{ if~(boundary)} & \\
j = n
& {F \cdots F} ~F & B \cdots B G & N~ {N \cdots N} & GN \rightarrow BT_L & GQ \leftarrow BG\\
\hline
& & & & & \scriptstyle{ if~(not ~~boundary)} \\
j = n+1 & {F \cdots F} ~F & ~ {B \cdots B} & T_L {N \cdots N} & T_L N \rightarrow LN & RN \leftarrow BT_L\\
& & & & & \scriptstyle{ if~(boundary)} \\
& & & & & GN \leftarrow B T_L \\
\hline
j = n+2 & {F \cdots F} ~F & ~ {B \cdots B} & L {N \cdots N} & BL \rightarrow LQ & T_L N \leftarrow LN\\
\hline
n+2 < j < 2n+2 & {F \cdots F} ~F & ~ \underbrace{B \cdots B}_{m-j} L ~\underbrace{Q \cdots Q}_{j-1}
& Q
\underbrace{N \cdots N}_{L-i-m} & BL \rightarrow LQ & BL \leftarrow LQ \\
\hline
j = 2n+2 & {F \cdots F} ~F & L \underbrace{Q \cdots Q}_{m-1}~ & Q {N \cdots N}
& FL \rightarrow FT_R & LQ \leftarrow BL \\
\hline
\end{array}
$$
\caption{Generalized Templates}
\end{figure}
\begin{lemma}
\label{lem:state-change2}
\label{lem:valid}
Consider a valid template ${\cal{T}}$ labelled with $(m,t)$.
At most one term in $H_{prop}$
applies to ${\cal{T}}$ in the forward direction.
Furthermore, when ${\cal{F}}$ is applied to
a state in that template, the result is $0$ or a state in an $(m,t+1)$-template.
At most one term in $H_{prop}$
applies to ${\cal{T}}$ in the backward direction.
Furthermore, when ${\cal{B}}$ is applied to
a state in that template, the result is $0$ or a state in an $(m,t-1)$-template.
\end{lemma}
\proof
The proof consists of verifying that for each template in Figure \ref{fig:gen-templates},
at most one term applies in the forward direction (and in only one place)
and at most one term applies in the backward direction (again in only one place).
When the term is applied in the forward direction, the result is the next template
in the sequence and when the term is applied in the backward direction, the result is
the previous template in the sequence.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
\begin{lemma}
\label{lem:end}
Consider a state $| \phi \rangle$ contained in the subspace corresponding to a template ${\cal{T}}$.
If ${\cal{T}}$ is an initial template, then ${\cal{B}} | \phi \rangle = 0$.
If ${\cal{T}}$ is a final template, then ${\cal{F}} | \phi \rangle = 0$.
\end{lemma}
\proof
There is no term in $H_{prop}$ that applies to an initial template
in the backward direction. Furthermore,
since $H^L_{GN \leftrightarrow BT_L}$ is removed
from $H^L_{prop}$, there is no term in $H_{prop}$ that applies to a
final template in the forward direction.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
We define another Hamiltonian $H_{legal}$ which penalizes any
template for which there is no term in $H_{prop}$ that applies in the
backward direction or for which there is no term that applies in the forward direction
(unless it is an initial or final template, respectively).
Using the table in Figure
\ref{fig:gen-templates}, we want to forbid pairs $RN$ and $FR$ from crossing a boundary.
We will also forbid pairs $GQ$ and $BG$ from crossing a boundary.
In addition, we want to forbid pairs $GN$ and $FG$ unless they cross a boundary.
For any $i$ such that $i \bmod n = 0$, we have
$$ H^{i}_{legal} = | RN \rangle \langle RN |_i + | FR \rangle \langle FR |_i
+ | GQ \rangle \langle GQ |_i + | BG \rangle \langle BG |_i.$$
For any $i$ such that $i \bmod n \neq 0$, we have
$$ H^{i}_{legal} = | GN \rangle \langle GN |_i + | FG \rangle \langle FG |_i.$$
As usual, summing over all the subscripts of the computation states is assumed.
Finally, we sum these together
$$H_{legal} = \sum_{i=0}^L H^{i}_{legal} .$$
We say that a template is legal if any state $| \phi \rangle$
in that template has $H_{legal} | \phi \rangle = 0$.
Otherwise, $H_{legal} | \phi \rangle \ge 1$ and we say the template is
{\em illegal}.
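The position-dependent penalties of $H_{legal}$ amount to a simple classical check on a template string; the sketch below (hypothetical names, subscripts ignored) counts the violated terms.

```python
BOUNDARY_FORBIDDEN = {'RN', 'FR', 'GQ', 'BG'}   # penalized when i % n == 0
INTERIOR_FORBIDDEN = {'GN', 'FG'}               # penalized when i % n != 0

def h_legal_penalty(template: str, n: int) -> int:
    """Number of H_legal terms a template violates: R-type pairs are
    forbidden at block boundaries, G-type pairs away from them."""
    penalty = 0
    for i in range(len(template) - 1):
        pair = template[i:i + 2]
        forbidden = BOUNDARY_FORBIDDEN if i % n == 0 else INTERIOR_FORBIDDEN
        if pair in forbidden:
            penalty += 1
    return penalty
```

A template is legal exactly when the returned penalty is zero.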
\begin{lemma}
\label{lem:legal}
Consider a template ${\cal{T}}$ that is both valid and legal.
If ${\cal{T}}$ is not a final template, then there is a term in $H_{prop}$
that applies to ${\cal{T}}$ in the forward direction.
Similarly, if ${\cal{T}}$ is not an initial template, there is a
term in $H_{prop}$ that applies to ${\cal{T}}$ in the backward
direction.
\end{lemma}
\proof
The proof consists of the observation that any template
in Figure \ref{fig:gen-templates} for which there is no
term in $H_{prop}$ that applies in the forward direction is made
illegal by $H_{legal}$. Similarly, any template for which there is no
term in $H_{prop}$ that applies in the backward direction is made
illegal by $H_{legal}$.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
We can think of all the valid templates as nodes in a directed graph.
There is a directed edge from templates ${\cal{T}}$ to ${\cal{T}}'$ if
applying ${\cal{F}}$ to some state in ${\cal{T}}$ results in a state in
${\cal{T}}'$. (By definition then, applying ${\cal{B}}$ to a state in ${\cal{T}}'$
results in a state in ${\cal{T}}$).
We will call this graph the {\em template graph} and will refer
to nodes and templates interchangeably.
Lemmas \ref{lem:valid}, \ref{lem:end} and \ref{lem:legal}
imply that this graph consists of a set of disjoint chains.
All the nodes in a chain correspond to templates with the same
number of computation particles. Furthermore, the starting node
in a maximal chain is either an initial template or an illegal one.
The last node in a maximal chain is either a final template or an illegal one.
\begin{lemma}
There is exactly one chain in the template graph that contains no
illegal nodes. Furthermore, templates in this chain have $n$ computation
particles.
\end{lemma}
\proof
Consider a maximal chain with no illegal nodes. This chain must
begin with an initial template and end with a final one.
The initial template has the form $T_R Q^m N^{L-m+1}$ for some $m$.
By the forward rule $T_R Q \rightarrow FG$, the next node in the chain
is $F G Q^{m-1} N^{L-m+1}$. If $m < n$, then $m-1$ applications of
the forward rule $GQ \rightarrow BG$ will result in
$F B^{m-1} G N^{L-m+1}$. Since $m < n$, this will result in a
pair $GN$ in locations $m$ and $m+1$
which does not straddle a block boundary. This is made illegal
in $H_{legal}$.
If $m > n$, then $n-1$ applications of the forward rule $GQ \rightarrow BG$
to $F G Q^{m-1} N^{L-m+1}$ will result in
$F B^{n-1} G Q^{m-n} N^{L-m+1}$. This will result in a pair
$GQ$ in locations $n$ and $n+1$ which is also disallowed in $H_{legal}$.
Finally, if $m=n$, there is exactly one chain from an initial template
to a final one (because there is exactly one initial template).
The application of the forward rules does not result in any illegal templates.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
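The case analysis in this proof can be replayed as a string-rewriting experiment: starting from $F G Q^{m-1}$ followed by $N$'s and applying $GQ \to BG$ until the control letter meets an $N$, one checks whether the resulting $GN$ pair sits at a block boundary. The sketch below ignores the distinction between $R$ and $G$ and the boundary-dependent choice of rule, so it is a simplification of the actual dynamics.

```python
def sweep_right(m: int, n: int, L: int):
    """Start from F G Q^(m-1) N^(L-m); apply GQ -> BG until G meets N.
    Return (position of G, whether the GN pair sits at a block boundary)."""
    s = list('F' + 'G' + 'Q' * (m - 1) + 'N' * (L - m))
    while True:
        g = s.index('G')
        if s[g + 1] != 'Q':           # rule GQ -> BG no longer applies
            break
        s[g], s[g + 1] = 'B', 'G'     # one application of GQ -> BG
    return g, (g % n == 0)
```

With $m < n$ the sweep stops with $GN$ off a boundary (illegal), while with $m = n$ it stops exactly at a boundary, matching the argument above.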
It will be convenient at this point to define the remaining two terms in
$H$. The input to the quantum verifier will be $n$ qubits.
$n_1$ qubits will be ancillary qubits that are initialized to $0$
and $n_2$ qubits will be the witness $\xi$.
$n_1 + n_2 = n$. We force $x=0$ with the
following Hamiltonian:
$$ H_{input} = \sum_{i=1}^{n_1} \Big(
| Q_1 \rangle \langle Q_1 |_i + | R_1 \rangle \langle R_1 |_i +
| G_1 \rangle \langle G_1 |_i
+ | B_1 \rangle \langle B_1 |_i \Big).$$
We will assume that the output will be present in the rightmost
qubit of the computation. Therefore, we define
$H_{out}$ to be $ | G_0 \rangle \langle G_0 |_{L}$.
Observe that $H_{input}$ and $H_{out}$ are both closed over the subspace
defined by each template.
A maximal chain in the template graph defines a subspace which is just the
subspace spanned by all the subspaces defined by the templates along the chain.
$H_{prop}$ is closed over the subspace defined by any maximal chain in the
template graph. All the other terms in $H$ are closed over
the subspace defined by each template.
Define ${\cal{S}}_{legal}$ to be the subspace defined by
the unique chain containing only legal nodes.
Let ${\cal{S}}_{legal}^{\perp}$ be the orthogonal subspace to ${\cal{S}}_{legal}$.
Since $H$ is closed under ${\cal{S}}_{legal}$, it is also closed
under ${\cal{S}}_{legal}^{\perp}$
and any eigenvector of $H$ must be
contained in ${\cal{S}}_{legal}$ or ${\cal{S}}_{legal}^{\perp}$.
\begin{lemma}
Any eigenvector of $H$ in ${\cal{S}}_{legal}^{\perp}$
will have an eigenvalue of at least $\Omega(1/L^4)$.
\end{lemma}
\proof
Define ${\cal{S}}_{valid}$ to be the subspace spanned by all the valid
templates. Since $H$ is closed over ${\cal{S}}_{valid}$, it is also closed
under the orthogonal space ${\cal{S}}_{valid}^{\perp}$.
Any eigenstate of $H_{valid}$ in ${\cal{S}}_{valid}^{\perp}$
will have an eigenvalue of at least $1$. Since the remaining terms
in $H$ are positive semi-definite, any eigenstate of $H$ in
${\cal{S}}_{valid}^{\perp}$ will also have an eigenvalue of at least $1$.
Now we can carve up ${\cal{S}}_{valid}$ into the subspaces defined by the maximal
chains; $H$ is closed on each such subspace. We
focus on an arbitrary such maximal chain (except the one
containing only legal nodes) and the subspace it defines.
The chain goes from an
$(m,t_1)$ template to an $(m,t_2)$-template for some $m$, $t_1$ and $t_2$.
To specify a state in the first template, we specify an $m$-bit string $x$
that determines the subscripts of particles in the computation states.
We call this state $| \phi_{x,m,t_1} \rangle$.
The set of these states for all $x$ forms a basis of the first template.
Since ${\cal{F}}$ is unitary,
when we apply ${\cal{F}}$ to all the $| \phi \rangle$'s, we get a basis of
the next template in the chain.
Applying ${\cal{F}}$ $i-1$ times gives a basis
of the $i^{th}$ template in the chain.
We will focus on a sequence
$| \phi_{x,m,t_1} \rangle, \ldots, | \phi_{x,m,t_2} \rangle$,
where ${\cal{F}}^i | \phi_{x,m,t_1} \rangle = | \phi_{x,m,t_1+i} \rangle$.
The subspace spanned by these states is closed under $H_{prop}+H_{legal}$.
Furthermore $H_{prop}$ when restricted to this subspace and expressed
in the basis of $\phi$'s is $P_r$, for $r = t_2 - t_1 +1$.
$H_{legal}$ when expressed in this basis is diagonal with non-negative
integer entries. Furthermore, we know that there is
at least one positive entry on the diagonal because the
chain has at least one illegal node.
By Lemma \ref{lem:gap}, we know that the lowest eigenvalue of any
eigenstate in the subspace defined by the chain must have an eigenvalue
that is at least $\Omega(1 / (T_n)^3 )$.
Since the remaining terms in $H$ are all positive semi-definite,
the lower bound holds for $H$ as well.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
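Numerically, one can observe the phenomenon used in this proof: adding a single unit penalty on the diagonal lifts the lowest eigenvalue of the propagation matrix away from zero by an inverse polynomial in the chain length. The sketch assumes the standard tridiagonal form for $P_r$ (diagonal $\tfrac12, 1, \ldots, 1, \tfrac12$, off-diagonal $-\tfrac12$), an assumption standing in for the actual definition in Section \ref{sec:SpectralGaps}.

```python
import numpy as np

def p_matrix(r: int) -> np.ndarray:
    """Standard tridiagonal propagation matrix (an assumed form of P_r)."""
    P = np.eye(r) - 0.5 * (np.eye(r, k=1) + np.eye(r, k=-1))
    P[0, 0] = P[-1, -1] = 0.5
    return P

for r in (8, 16, 32):
    penalty = np.zeros(r)
    penalty[r // 2] = 1.0                 # one "illegal" node on the chain
    H_chain = p_matrix(r) + np.diag(penalty)
    lam = np.linalg.eigvalsh(H_chain)[0]
    assert lam > 1e-9                     # the zero mode is lifted
    assert lam > 0.1 / r**3               # consistent with an Omega(1/r^3) bound
```

Without the penalty the uniform vector is a zero mode; one positive diagonal entry suffices to open the gap.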
We will prove the following theorem:
\begin{theorem}
\label{th:complete}
If the circuit $V$ accepts with probability at least $1 - \epsilon$ on
some input $| 0 \xi \rangle$, then $H$ has an eigenvalue smaller than $\epsilon$.
If the circuit $V$ accepts with probability less than $\epsilon$ on all inputs
$| 0 \xi \rangle$, then all eigenvalues of $H$ are larger than $1$
over a polynomial in $n$ and $L$.
\end{theorem}
The input to $V$ consists of $n_1$ auxiliary qubits and $n_2$ witness qubits, where
$n_1 + n_2 = n$.
Together a string $x$ of $n_1$ bits and $\xi$ of $n_2$ bits defines an input to
$V$.
Let $|\phi_{x, \xi, 0} \rangle$ be the state in template $(n,0)$ such that
the input bits are set to $x$ and $\xi$.
Let $|\phi_{x, \xi, t} \rangle = {\cal{F}}^t |\phi_{x, \xi, 0} \rangle$ for
$t \in \{1, \ldots, T_n \}$.
The space ${\cal{S}}_{legal}$ is spanned by these $|\phi \rangle$'s.
We define
$$|\nu_{x,\xi} \rangle = \frac{1}{ \sqrt{T_n + 1} } \sum_{t=0}^{T_n}
|\phi_{x,\xi,t} \rangle .$$
The following lemma establishes Theorem \ref{th:complete}
in one direction.
\begin{lemma}
If there is a $\xi$ such that $V$ accepts with probability at least $1 - \epsilon$
on input $| 0 \xi \rangle$,
then the smallest eigenvalue of $H$ is at most $\epsilon$.
\end{lemma}
\proof
Let $|\nu \rangle = | \nu_{0,\xi} \rangle$.
$$ \langle \nu | H_{prop} |\nu \rangle =
\langle \nu | H_{legal} |\nu \rangle =
\langle \nu | H_{valid} |\nu \rangle =
\langle \nu | H_{input} |\nu \rangle =0.$$
For $t \in \{0, \ldots, T_n -1 \}$, particle $L$ is not in a $G$
state, so the conditions of $H_{out}$ are satisfied.
For $t = T_n$, if the system is in state
$|\phi_{0,\xi,t} \rangle$ and the $L^{th}$ particle is measured,
the probability that the outcome is $G_0$ is at most $\epsilon$.
Therefore $\langle \nu | H_{out} |\nu \rangle \le \epsilon$.
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
The final step is the following lemma:
\begin{lemma}
If for all $\xi$,
$V$ accepts with probability at most $\epsilon \le 1/2$ on
input $| 0 \xi \rangle$,
then the smallest eigenvalue of $H$ in ${\cal{S}}_{legal}$
is at least an
inverse polynomial in $L$.
\end{lemma}
\proof
We will use Lemma \ref{lem:geom} with $H_1 = H_{prop}$ and $H_2 = H_{in} + H_{out}$.
Every state in ${\cal{S}}_{legal}$ has zero energy with respect to the other two terms,
$H_{legal}$ and $H_{valid}$.
The ground space of $H_{prop}$ is spanned by
the $|\nu_{x,\xi} \rangle$'s, and the smallest non-zero eigenvalue
is $\Omega(1 / (T_n)^2)$.
Similarly for $H_2$: its ground space is spanned by the states that have $G_1$ in
particle $L$ and do not have $Q_1$, $R_1$, $G_1$ or $B_1$ in
particles $1$ through $n$.
Any non-zero eigenvalue is at least $1$.
Let $P_2$ be the projection onto the ground space of $H_2$.
The squared cosine of the angle between any state $|\phi \rangle$
and the ground space of $H_2$ is just
$\langle \phi | P_2 | \phi \rangle$.
We will show that for any $|\nu_{x,\xi} \rangle$, the
squared cosine of the angle between $|\nu_{x,\xi} \rangle$ and the ground space of $H_2$
satisfies
$$\langle \nu_{x,\xi} | P_2 | \nu_{x,\xi}\rangle \le 1 - \frac {1 - \epsilon} {T_n + 1}.$$
Thus, for small enough $\epsilon$,
the sine of the angle between the ground space of $H_1$ and the
ground space of $H_2$ is at least $\Omega(1/T_n)$.
The amplitude of $ | \phi_{x,\xi,0} \rangle$ in $| \nu_{x,\xi} \rangle$
is at least $1 / \sqrt{T_n +1}$. If $x \neq 0$, this state has a $Q_1$ somewhere
in the first $n$ particles and $\langle \phi_{x,\xi,0} | P_2 | \phi_{x,\xi,0} \rangle =0$.
Therefore,
$\langle \nu_{x,\xi} | P_2 | \nu_{x,\xi}\rangle \le 1 - \frac 1 {T_n + 1} $.
If $x=0$, then we know that for all $\xi$,
$$\langle \phi_{0,\xi,T_n} | (| G_1 \rangle \langle G_1 |)_L |\phi_{0,\xi,T_n} \rangle \le \epsilon,$$
because this is the probability that the outcome on input $\xi$ is $1$.
This means that
$$\langle \nu_{0,\xi} | P_2 | \nu_{0,\xi}\rangle \le 1 - \frac {1 - \epsilon} {T_n + 1} .$$
\mbox{\ \ \ }\rule{6pt}{7pt} \medskip
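The two spectral facts used above, that the uniform history state has zero energy for the propagation term and that the smallest non-zero eigenvalue of $H_{prop}$ is $\Omega(1/T_n^2)$, can be checked numerically. The sketch below is a simplification (our assumption, not the construction in the text): it keeps only a clock register of $T+1$ time steps, writing the propagation term in its standard random-walk form, since conjugating by the circuit's unitaries leaves the spectrum unchanged.

```python
import numpy as np

def clock_propagation_hamiltonian(T):
    # H = (1/2) * sum_{t=0}^{T-1} (|t> - |t+1>)(<t| - <t+1|) on the clock
    # register alone; the circuit's unitaries are rotated away, which does
    # not change the spectrum.
    N = T + 1
    H = np.zeros((N, N))
    for t in range(T):
        v = np.zeros(N)
        v[t], v[t + 1] = 1.0, -1.0
        H += 0.5 * np.outer(v, v)
    return H

T = 20
H = clock_propagation_hamiltonian(T)
evals = np.linalg.eigvalsh(H)

# The uniform history state spans the ground space (energy 0) ...
history = np.ones(T + 1) / np.sqrt(T + 1)
print(history @ H @ history)  # ~ 0

# ... and the spectral gap is 1 - cos(pi/(T+1)) = Theta(1/T^2).
print(evals[1], 1 - np.cos(np.pi / (T + 1)))
```

The eigenvalues of this tridiagonal matrix are $1-\cos(\pi k/(T+1))$, $k=0,\ldots,T$, so the gap is approximately $\pi^2/2(T+1)^2$, consistent with the $\Omega(1/T_n^2)$ claim.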
\bibliographystyle{plain}
\section{Introduction and statement of main results}
Tensors appear in numerous mathematical and scientific contexts. The two contexts
most relevant for this paper are quantum information theory and algebraic
complexity theory, especially the study of the complexity of matrix
multiplication.
There are numerous notions of {\it rank} for tensors. One such, {\it analytic rank}, introduced in
\cite{MR2773103} and developed further in \cite{MR3964143}, is defined only over finite
fields.
In \cite{kopparty2020geometric} a new notion of rank for tensors is defined that
is valid over arbitrary fields and is an asymptotic limit (as one enlarges the field)
of analytic rank; they call it {\it geometric rank} (\lq\lq geometric\rq\rq\ in contrast to \lq\lq analytic\rq\rq ), and they establish its basic properties.
In this paper we begin a systematic study of geometric rank and what it reveals about the
geometry of tensors.
Let $T\in \mathbb C^\mathbf{a}{\mathord{ \otimes } } \mathbb C^\mathbf{b}{\mathord{ \otimes } } \mathbb C^\mathbf{c}$ be a tensor and let
$GR(T)$ denote the geometric rank of $T$ (see Proposition/Definition \ref{GRdef} below for the definition). For all tensors, one has $GR(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, and when $GR(T)<\operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, we say
$T$ has {\it degenerate geometric rank}. The case of geometric rank one was previously understood,
see Remark \ref{GRone}.
Informally, a tensor is
{\it concise} if it cannot be written as a tensor in a smaller ambient space (see
Definition \ref{concisedef} below for the
precise definition).
Our main results are:
\begin{itemize}
\item Classification of tensors with geometric rank two.
In particular, in $\mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$ there are exactly two concise tensors
of geometric rank two, and in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$, $m> 3$, there is a unique
concise tensor with geometric rank two (Theorem
\ref{GRtwo}).
\item Concise $1_*$-generic tensors (see Definition \ref{1stardef})
in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ with geometric rank at most three have tensor rank at least $2m-3$
and all other concise tensors of geometric rank at most three have
tensor rank at least $m+\lceil \frac {m-1}2 \rceil -2$ (Theorem \ref{GRthree}).
\end{itemize}
We also compute the geometric ranks of numerous tensors of interest
in \S\ref{Exsect}, and analyze the geometry associated to tensors with
degenerate geometric rank in \S\ref{GRgeom}, where we also point out especially intriguing
properties of tensors in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ of minimal border rank.
\subsection*{Acknowledgements} We thank Vladimir Lysikov for numerous
comments and pointing out a gap in the proof of Theorem \ref{GRthree}
in an earlier draft, Filip Rupniewski for pointing out
an error in an earlier version of Proposition \ref{comprex},
Hang Huang for providing a more elegant proof of Lemma \ref{linelem}
than our original one,
Harm Derksen for Remark \ref{gstablerem},
and Fulvio Gesmundo, Hang Huang, Christian Ikenemeyer and Vladimir Lysikov
for useful conversations. We also thank the anonymous referee for useful suggestions.
\section{Definitions and Notation}
Throughout this paper we give our vector spaces names: $A=\mathbb C^\mathbf{a},B=\mathbb C^\mathbf{b}, C=\mathbb C^\mathbf{c}$
and we often will take $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$. Write $\operatorname{End}(A)$ for
the space of linear maps $A\rightarrow A$ and $GL(A)$ for the invertible linear maps.
The dual space to $A$ is denoted $A^*$, its
associated projective space is $\mathbb P A$, and for $a\in A\backslash 0$,
we let $[a]\in \mathbb P A$ be its projection to projective space.
For a subspace $U\subset A$, $U^\perp\subset A^*$ is its annihilator.
For a subset $X\subset A$, $\langle X\rangle\subset A$ denotes its linear span.
We write $GL(A)\times GL(B)\times GL(C) \cdot T\subset A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$
for the orbit of $T$, and similarly for the images of $T$ under endomorphisms.
For a set $X$, $\overline{X}$ denotes its closure in the Zariski topology (which, for
all examples in this paper, will also be its closure in the Euclidean topology).
Given $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$, we let $T_A: A^*\rightarrow B{\mathord{ \otimes } } C$ denote the corresponding
linear map, and similarly for $T_B,T_C$. We omit the subscripts when there is no ambiguity. As examples, $T(A^*)$ means $T_A(A^*)$, and given $\beta\in B^*$, $T(\beta)$ means $T_B(\beta)$.
Fix bases $\{a_i\}$, $\{b_j\}$, $\{ c_k\}$ of $A,B,C$, let $\{\alpha_i\}$, $\{\beta_j\}$, $\{ \gamma_k\}$ be the corresponding dual bases of $A^*,B^*$ and $C^*$. The linear space $T(A^*)\subset B\otimes C$ is considered as a space of matrices, and is often presented as the image of a general point $\sum_{i =1}^\mathbf{a} x_i\alpha_i\in A^*$, i.e. a $\mathbf{b}\times \mathbf{c}$ matrix of linear forms in variables $\{x_i\}$.
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$. $T$ has {\it rank one} if
there exists nonzero $a\in A$, $b\in B$, $c\in C$ such that $T=a{\mathord{ \otimes } } b{\mathord{ \otimes } } c$.
For $r\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, write $M_{\langle 1\rangle}^{\oplus r}=\sum_{\ell=1}^r a_\ell{\mathord{ \otimes } } b_\ell{\mathord{ \otimes } } c_\ell$.
We review various notions of rank for tensors:
\begin{definition} \
\begin{enumerate}
\item The smallest $r$ such that $T$ is a sum of $r$ rank one
tensors is called the {\it tensor rank} (or {\it rank}) of $T$ and is denoted $\bold R(T)$. This is
the smallest $r$ such that, allowing $T$ to be in a larger space, $T\in \operatorname{End}_r\times \operatorname{End}_r\times\operatorname{End}_r\cdot M_{\langle 1\rangle}^{\oplus r}$.
\item The smallest $r$ such that $T$ is a limit of rank $r$ tensors
is called the {\it border rank} of $T$ and is denoted $\underline{\mathbf{R}}(T)$. This is the smallest
$r$ such that, allowing $T$ to be in a larger space, $T\in \overline{GL_r\times GL_r\times GL_r\cdot M_{\langle 1\rangle}^{\oplus r}}$.
\item $(\bold{ml}_A,\bold{ml}_B,\bold{ml}_C):= ({\mathrm {rank}}\, T_A,{\mathrm {rank}}\, T_B,{\mathrm {rank}}\, T_C)$
are the three {\it multi-linear ranks} of $T$.
\item The largest $r$ such that $M_{\langle 1\rangle}^{\oplus r}\in \overline{GL(A)\times GL(B)\times GL(C)\cdot T}$
is called the {\it border subrank} of $T$ and denoted $\underline{\bold Q}(T)$.
\item The largest $r$ such that $M_{\langle 1\rangle}^{\oplus r}\in \operatorname{End}(A)\times \operatorname{End}(B)\times \operatorname{End}(C)\cdot T$ is called
the {\it subrank} of $T$ and denoted $\bold Q(T)$.
\end{enumerate}
\end{definition}
We have the inequalities
$$\bold Q(T)\leq \underline{\bold Q}(T)\leq \operatorname{min}\{\bold{ml}_A,\bold{ml}_B,\bold{ml}_C\}
\leq \operatorname{max}\{\bold{ml}_A,\bold{ml}_B,\bold{ml}_C\}\leq \underline{\mathbf{R}}(T)\leq \bold R(T),
$$
and all inequalities may be strict.
For example $M_{\langle 2\rangle}$ of Example \ref{mmex} satisfies
$\underline{\bold Q}(M_{\langle 2\rangle})=3$ \cite{kopparty2020geometric} and $\bold Q(M_{\langle 2\rangle})=2$ \cite[Prop. 15]{MR4210715}
and all multilinear ranks are $4$. Letting $\mathbf{b}\leq \mathbf{c}$, $T=a_1{\mathord{ \otimes } } (\sum_{j=1}^\mathbf{b} b_j{\mathord{ \otimes } } c_j)$
has $\bold{ml}_A(T)=1$, $\bold{ml}_B(T)=\mathbf{b}$.
A generic tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ satisfies $ \bold{ml}_A=\bold{ml}_B=\bold{ml}_C=m$
and $\underline{\mathbf{R}}(T)=O(m^2)$. The tensor
$T=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_1 {\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_1+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1$ satisfies
$\underline{\mathbf{R}}(T)=2$ and $\bold R(T)=3$.
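The last example can be checked by machine. The following sketch (a NumPy computation with 0-indexed bases, so $a_1$ is index $0$) verifies that $T=a_1{\mathord{\otimes}} b_1{\mathord{\otimes}} c_2+a_1 {\mathord{\otimes}} b_2{\mathord{\otimes}} c_1+a_2{\mathord{\otimes}} b_1{\mathord{\otimes}} c_1$ has all three multilinear ranks equal to $2$ and exhibits it as a limit of rank-two tensors, consistent with $\underline{\mathbf{R}}(T)=2<3=\bold R(T)$:

```python
import numpy as np

# W = a1⊗b1⊗c2 + a1⊗b2⊗c1 + a2⊗b1⊗c1, with 0-indexed bases (a1 -> index 0).
W = np.zeros((2, 2, 2))
W[0, 0, 1] = W[0, 1, 0] = W[1, 0, 0] = 1

# The three multilinear ranks are the ranks of the three flattenings.
ml = [int(np.linalg.matrix_rank(np.moveaxis(W, k, 0).reshape(2, 4)))
      for k in range(3)]
print(ml)  # [2, 2, 2]

# Border rank 2: W = lim_{t->0} ((e0 + t*e1)^{⊗3} - e0^{⊗3}) / t.
def rank2_approx(t):
    e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    u = e0 + t * e1
    return (np.einsum('i,j,k->ijk', u, u, u)
            - np.einsum('i,j,k->ijk', e0, e0, e0)) / t

for t in (1e-1, 1e-2, 1e-3):
    print(np.abs(rank2_approx(t) - W).max())  # error shrinks like O(t)
```

Each `rank2_approx(t)` is a sum of two rank-one terms, and the maximum entrywise error is exactly $t$, so the approximants converge to $W$ while no rank-two tensor equals $W$.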
We remark that very recently Kopparty and Zuiddam (personal
communication) have shown that a generic
tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ has subrank at most $3m^{\frac 23}$.
In contrast, the corresponding notions for matrices all coincide.
\begin{definition} \label{concisedef}
A tensor $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ is {\it concise} if
$\bold{ml}_A=\mathbf{a}$, $\bold{ml}_B=\mathbf{b}$, and $\bold{ml}_C=\mathbf{c}$.
\end{definition}
The rank and border rank of a tensor $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ measure the complexity of
evaluating the corresponding bilinear map $T: A^*\times B^* \rightarrow C$ or trilinear form
$T: A^*\times B^*\times C^*\rightarrow \mathbb C$. A concise tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ of rank $m$ (resp. border rank $m$),
is said to be of {\it minimal rank} (resp. {\it minimal border rank}). It is a longstanding
problem to characterize tensors of minimal border rank, and how much larger
the rank can be than the border rank. The largest rank of any explicitly known
sequence of tensors
is $3m-o(m)$ \cite{MR3025382}. While tests exist to bound the ranks of tensors,
prior to this paper there was no general geometric criterion that would
lower bound tensor rank (now see Theorem \ref{GRthree} below).
The border rank is measured by a classical geometric object: secant
varieties of Segre varieties. The border subrank, to our knowledge,
has no similar classical object. In this paper we discuss how
geometric rank is related to classically studied questions in algebraic
geometry: linear spaces of matrices
with large intersections with the variety of matrices
of rank at most $r$. See Equation \eqref{Xi} for a precise statement.
Another notion of rank for tensors is the
{\it slice rank} \cite{Taoblog}, denoted by $\mathrm{SR}(T)$: it is the smallest $r$ such that there exist
$r_1,r_2,r_3$ such that $r=r_1+r_2+r_3$,
$A'\subset A$ of dimension $r_1$, $B'\subset B$ of dimension $r_2$, and
$C'\subset C$ of dimension $r_3$, such that
$T\in A'{\mathord{ \otimes } } B{\mathord{ \otimes } } C+ A{\mathord{ \otimes } } B'{\mathord{ \otimes } } C+ A{\mathord{ \otimes } } B{\mathord{ \otimes } } C'$.
It was originally introduced in the context of the cap set problem but has
turned out (in its asymptotic version) to be important for quantum information
theory and Strassen's laser method, more precisely, Strassen's theory
of asymptotic spectra, see \cite{MR3826254}.
\begin{remark}\label{gstablerem}
In \cite{derksen2020gstable} a notion of rank for tensors inspired by invariant theory,
called {\it $G$-stable rank} is introduced. Like geometric rank, it is bounded
above by the slice rank and below by the border subrank. Its relation to
geometric rank appears to be subtle: the $G$-stable rank of
the matrix multiplication tensor
$M_{\langle \mathbf{n} \rangle}$ equals $n^2$, which is greater than the geometric rank (see Example \ref{mmex}), but
the $G$-stable rank of $W:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_1+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1$
is $1.5$ ($G$-stable rank need not be integer valued), while $GR(W)=2$.
\end{remark}
Like multi-linear rank,
geometric rank generalizes row rank and column
rank of matrices, but unlike multi-linear rank, it salvages the fundamental theorem
of linear algebra that row rank equals column rank.
Let $Seg(\mathbb P A^*\times \mathbb P B^*)\subset \mathbb P (A^*{\mathord{ \otimes } } B^*)$ denote the
{\it Segre variety} of rank one elements.
Let $\Sigma^{AB}_T = \{ ([\alpha],[\beta])\in \mathbb P A^*\times \mathbb P B^* \mid T(\alpha,\beta,\cdot)=0\}$, so
\begin{align}
\label{sigmaab} Seg(\Sigma^{AB}_T)= \mathbb P (T(C^*)^{\perp})\cap Seg(\mathbb P A^*\times \mathbb P B^*)\end{align}
and let $\Sigma^A_j=\{[\alpha]\in \mathbb P A^*\mid {\mathrm {rank}}(T(\alpha))\leq \operatorname{min}\{\mathbf{b},\mathbf{c}\}-j\}$.
Let $\pi^{AB}_A: \mathbb P A^*\times \mathbb P B^* \rightarrow \mathbb P A^*$ denote the projection.
\begin{proposition/definition}\cite{kopparty2020geometric}\label{GRdef}
The following quantites are all equal and called the {\it geometric rank} of $T$, denoted
$GR(T)$:
\begin{enumerate}
\item $\text{codim} (\Sigma^{AB}_T, \mathbb P A^*\times \mathbb P B^*)$
\item $\text{codim} (\Sigma^{AC}_T, \mathbb P A^*\times \mathbb P C^*)$
\item $\text{codim} (\Sigma^{BC}_T, \mathbb P B^*\times \mathbb P C^*)$
\item $ \mathbf{a}+\operatorname{min}\{\mathbf{b},\mathbf{c}\} -1 - \operatorname{max}_{j}(\operatorname{dim}\Sigma^A_j +j) $
\item $ \mathbf{b}+\operatorname{min}\{\mathbf{a},\mathbf{c}\}-1 - \operatorname{max}_{j}(\operatorname{dim}\Sigma^B_j +j) $
\item $ \mathbf{c}+\operatorname{min}\{\mathbf{a},\mathbf{b}\} -1 - \operatorname{max}_{j}(\operatorname{dim}\Sigma^C_j +j) $.
\end{enumerate}
\end{proposition/definition}
\begin{proof}
The classical row rank equals column rank theorem implies that
when $\Sigma^A_j\neq \Sigma^A_{j+1}$,
the fibers of $\pi^{AB}_A$ are $\pp{j-1}$'s if $\mathbf{b}\geq \mathbf{c}$ and
$\pp{j-1+\mathbf{b}-\mathbf{c}}$'s when $\mathbf{b}<\mathbf{c}$.
The variety $\Sigma^{AB}_T$
is the union of the $(\pi^{AB}_A){}^{-1}(\Sigma^A_j)$, which
have dimension $\operatorname{dim}\Sigma^A_j+j-1$ when $\mathbf{b}\geq \mathbf{c}$ and $\operatorname{dim}\Sigma^A_j+j-1+\mathbf{b}-\mathbf{c}$
when $\mathbf{b}<\mathbf{c}$.
The dimension of a variety is the dimension of a largest dimensional irreducible component.
\end{proof}
\begin{remark} In \cite{kopparty2020geometric}
they work with $\hat\Sigma^{AB}_T:=\{ (\alpha,\beta)\in A^*\times B^* \mid T(\alpha,\beta,\cdot)=0\}$
and define
geometric rank to be
$
GR(T):=\text{codim}(\hat\Sigma^{AB}_T, A^*\times B^*)$. This is equivalent to our
definition. The equivalence is clear except for one point: $0\times B^*$ and $A^*\times 0$ are always contained in $\hat\Sigma^{AB}_T$, which implies $ {GR}(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b}\}$ and by symmetry
$GR(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$, whereas there is no corresponding subset of the projective variety $\Sigma^{AB}_T$. Since \eqref{sigmaab} implies
\begin{align*}
\operatorname{dim} \Sigma^{AB}_T&\geq
\operatorname{dim} \mathbb P (T(C^*)^\perp) + \operatorname{dim} Seg(\mathbb P A^*\times \mathbb P B^*)- \operatorname{dim} \mathbb P (A^*{\mathord{ \otimes } } B^*)\\
&= \mathbf{a}\mathbf{b}-\mathbf{c}-1+\mathbf{a}+\mathbf{b}-2-(\mathbf{a}\mathbf{b}-1) \\
&=\mathbf{a}+\mathbf{b}-\mathbf{c}-2
\end{align*}
we still have $ {GR}(T)\leq \mathbf{c}$ and by symmetry $GR(T)\leq \operatorname{min}\{\mathbf{a},\mathbf{b},\mathbf{c}\}$ using our definition. We note that for tensors with more factors, one must be more careful when working
projectively.
\end{remark}
One has $\underline{\bold Q}(T)\leq GR(T)\leq SR(T)$ \cite{kopparty2020geometric}.
In particular, one may use geometric rank to bound the border subrank.
An example of such a bound
was an important application in \cite{kopparty2020geometric}.
\begin{remark}\label{GRone}
The set of tensors with slice rank one is the
set of tensors living in some $\mathbb C^1{\mathord{ \otimes } } B{\mathord{ \otimes } } C$
(after possibly re-ordering and re-naming factors), and the same is true for tensors with geometric rank one. Therefore for any tensor $T$, $GR(T)=1$ if and only if $\mathrm{SR}(T)=1$.
\end{remark}
\begin{definition}\label{1stardef} Let $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$.
A tensor is {\it $1_A$-generic} if $T(A^*)\subset B{\mathord{ \otimes } } C$ contains
an element of full rank $m$, {\it binding} if it is at least two of $1_A$-, $1_B$-, $1_C$-generic,
{\it $1_*$-generic} if it is $1_A$-, $1_B$- or $1_C$-generic,
and it is {\it $1$-generic} if it is
$1_A,1_B$ and $1_C$-generic. A tensor is {\it $1_A$-degenerate} if it is not $1_A$-generic.
Let $1_A-degen$ denote the variety
of tensors that are not $1_A$-generic,
and let $1-degen$ denote the variety of tensors that are $1_A$, $1_B$ and $1_C$ degenerate.
\end{definition}
$1_A$-genericity is important in the
study of tensors as
Strassen's equations \cite{Strassen505} and more generally
Koszul flattenings \cite{MR3081636} fail
to give good lower bounds for tensors that are $1$-degenerate.
Binding tensors are those that arise as structure tensors of algebras, see \cite{MR3578455}.
Defining equations for $1_A-degen$
are given by the module $S^mA^*{\mathord{ \otimes } } \La m B^*{\mathord{ \otimes } } \La m C^*$, see \cite[Prop. 7.2.2.2]{MR2865915}.
\begin{definition} \label{bndrkdef} A subspace $E\subset B{\mathord{ \otimes } } C$ is of {\it bounded rank $r$}
if for all $X\in E$,
${\mathrm {rank}}(X)\leq r$.
\end{definition}
\section{Statements of main results}
Let ${\mathcal G}{\mathcal R}_{s}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)\subset \mathbb P (A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ denote
the set of tensors of geometric rank at most $s$ which is Zariski closed \cite{kopparty2020geometric},
and write ${\mathcal G}{\mathcal R}_{s,m}:={\mathcal G}{\mathcal R}_{s}(\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m)$.
By Remark \ref{GRone}, ${\mathcal G}{\mathcal R}_{1}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the variety of tensors that
live in some $\mathbb C^1{\mathord{ \otimes } } B{\mathord{ \otimes } } C$, $A{\mathord{ \otimes } } \mathbb C^1{\mathord{ \otimes } } C$,
or $A{\mathord{ \otimes } } B{\mathord{ \otimes } } \mathbb C^1$.
In what follows, a statement of the form \lq\lq there exists a unique tensor...\rq\rq,
or \lq\lq there are exactly two tensors...\rq\rq,
means up to the action of $GL(A)\times GL(B)\times GL(C)\rtimes \FS_3$.
\begin{theorem} \label{GRtwo} For $\mathbf{a},\mathbf{b},\mathbf{c}\geq3$,
the variety $\mathcal{GR}_{2}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the variety of tensors $T$ such that $T(A^*)$, $T(B^*)$,
or $T(C^*)$ has bounded rank 2.
There are exactly two concise tensors in $\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3$ with $GR(T)=2$:
\begin{enumerate}
\item The unique up to scale skew-symmetric tensor $T=\sum_{\sigma\in \FS_3}\tsgn(\sigma) a_{\sigma(1)}{\mathord{ \otimes } } b_{\sigma(2)}{\mathord{ \otimes } } c_{\sigma(3)}\in \La 3\mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$
and
\item $T_{utriv,3}:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1 + a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_2+ a_1{\mathord{ \otimes } } b_3{\mathord{ \otimes } } c_3
+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_3{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_3\in S^2\mathbb C^3{\mathord{ \otimes } } \mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$.
\end{enumerate}
There is a unique concise tensor $T\in \mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
satisfying $GR(T)=2$ when $m>3$,
namely
$$
T_{utriv,m}:=
a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1+ \sum_{\rho=2}^m [a_1{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_\rho + a_\rho{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_\rho].
$$
This tensor satisfies $\underline{\mathbf{R}}(T_{utriv,m})=m$ and $\bold R(T_{utriv,m})=2m-1$.
\end{theorem}
In the $m=3$ case (1) of Theorem \ref{GRtwo} we have $\Sigma^{AB}_T\cong \Sigma^{AC}_T\cong \Sigma^{BC}_T\cong \mathbb P A^*\subset \mathbb P A^*\times \mathbb P A^*$
embedded diagonally and $\Sigma^A_1=\Sigma^B_1=\Sigma^C_1=\mathbb P A^*$.
In the $m=3$ case (2) of Theorem \ref{GRtwo} we have
\begin{align*}
\Sigma^{AB}_T&=\mathbb P\langle \alpha_2,\alpha_3\rangle \times \mathbb P \langle \beta_2,\beta_3\rangle=
\pp 1\times \pp 1\\
\Sigma^{AC}_T&=\{([ s\alpha_2+t\alpha_3 ] , [
u\gamma_1 + v( -t\gamma_2+s\gamma_3)])\in \mathbb{P}A\times\mathbb{P}C \mid [s,t]\in \pp 1, [u,v]\in \pp 1\} \\
\Sigma^{BC}_T&=\{([ s\beta_2+t\beta_3 ], [
u\gamma_1 + v( -t\gamma_2+s\gamma_3)])\in\mathbb{P}B\times\mathbb{P}C \mid [s,t]\in \pp 1, [u,v]\in \pp 1 \}.
\end{align*}
If one looks at the scheme structure,
$\Sigma^A_2$, $\Sigma^B_2$ are lines with multiplicity three and $\Sigma^C_1=\mathbb P C^*$.
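The description of $\Sigma^{AB}_T$ for $T_{utriv,3}$ can be spot-checked numerically: $T(\alpha,\beta,\cdot)$ vanishes exactly when $\alpha\in\langle\alpha_2,\alpha_3\rangle$ and $\beta\in\langle\beta_2,\beta_3\rangle$. A minimal sketch (0-indexed coordinates, so $a_1$ is index $0$):

```python
import numpy as np

# T_{utriv,3} in 0-indexed bases: a_1 -> index 0, etc.
T = np.zeros((3, 3, 3))
T[0, 0, 0] = 1
for r in (1, 2):
    T[0, r, r] = 1  # a_1 ⊗ b_r ⊗ c_r
    T[r, 0, r] = 1  # a_r ⊗ b_1 ⊗ c_r

def pair(alpha, beta):
    # the vector T(alpha, beta, .) in C
    return np.einsum('i,j,ijk->k', alpha, beta, T)

# Vanishes when alpha and beta have no a_1 / b_1 component ...
print(pair(np.array([0.0, 1.0, 2.0]), np.array([0.0, 3.0, -1.0])))  # zero vector
# ... and not otherwise:
print(pair(np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```

Indeed, the $k$-th coordinate of $T(\alpha,\beta,\cdot)$ is $\alpha_1\beta_1$ for $k=1$ and $\alpha_1\beta_k+\alpha_k\beta_1$ for $k>1$, so it vanishes identically if and only if $\alpha_1=\beta_1=0$.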
\begin{remark}
The tensor $T_{utriv,m}$ has appeared several times in the literature:
it is the structure tensor of the trivial algebra with unit (hence the name), and
it has the largest symmetry group of any binding tensor \cite[Prop. 3.2]{2019arXiv190909518C}.
It is also closely related to Strassen's tensor of \cite{MR882307}: it
is the sum of Strassen's tensor with a rank one tensor.
\end{remark}
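Conciseness of $T_{utriv,m}$ (so that the statements above apply to a genuinely concise tensor) is easy to verify by machine: all three flattenings have rank $m$. Note also that the defining expression exhibits $T_{utriv,m}$ as a sum of $2m-1$ rank one terms, matching $\bold R(T_{utriv,m})=2m-1$ as an upper bound. A sketch with 0-indexed bases:

```python
import numpy as np

def T_utriv(m):
    # 0-indexed: a_1 -> index 0
    T = np.zeros((m, m, m))
    T[0, 0, 0] = 1
    for r in range(1, m):
        T[0, r, r] = 1  # a_1 ⊗ b_r ⊗ c_r
        T[r, 0, r] = 1  # a_r ⊗ b_1 ⊗ c_r
    return T

for m in (3, 4, 5):
    T = T_utriv(m)
    ml = [int(np.linalg.matrix_rank(np.moveaxis(T, k, 0).reshape(m, m * m)))
          for k in range(3)]
    print(m, ml)  # all three flattening ranks equal m, i.e. T is concise
```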
Theorem \ref{GRtwo} is proved in \S\ref{GRtwopf}.
\begin{theorem} \label{GRthree}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ be concise and assume $\mathbf{c}\geq\mathbf{b}\geq \mathbf{a}>4$.
If $GR(T)\leq 3$, then
$\bold R(T)\geq \mathbf{b}+ \lceil {\frac {\mathbf{a}-1}2}\rceil -2$.
If moreover $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$ and $T$ is
$1_*$-generic, then $\bold R(T)\geq 2m-3$.
\end{theorem}
In contrast to ${\mathcal G}{\mathcal R}_{1,m}$ and ${\mathcal G}{\mathcal R}_{2,m}$, the variety ${\mathcal G}{\mathcal R}_{3,m}$
is not just the set of tensors $T$ such that $T(A^*),T(B^*)$ or $T(C^*)$ has bounded rank 3. Other examples include
the structure tensor for $2\times 2$ matrix multiplication $M_{\langle 2\rangle}$ (see Example \ref{mmex}),
the large and small Coppersmith-Winograd tensors (see Examples
\ref{CWqex} and \ref{cwqex}) and others (see \S\ref{moregr3}).
Theorem \ref{GRthree} gives the first algebraic way to lower bound tensor rank. Previously,
the only technique to bound tensor rank beyond border rank was the {\it substitution
method} (see \S\ref{submethrev}), which is neither algebraic nor systematically implementable.
Theorem \ref{GRthree} is proved in \S\ref{GRthreepf}.
\section{Remarks on the geometry of geometric rank}\label{GRgeom}
\subsection{Varieties arising in the study of geometric rank}
Let $G(m,V)$ denote the Grassmannian of $m$-planes through the origin in the
vector space $V$.
Recall the correspondence (see, e.g., \cite{MR3682743}):
\begin{align}\nonumber
&\{ A\text{-concise tensors}\ T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } }
C\}/\{ GL(A)\times GL(B)\times GL(C)-{\rm equivalence}\}\\
&\label{tscorresp} \leftrightarrow\\
&\nonumber \{ \mathbf{a}-{\rm planes} \ E\in G(\mathbf{a} , B{\mathord{ \otimes } } C)\}/\{ GL(B)\times GL(C)-{\rm equivalence}\}.
\end{align}
It makes sense to study the $\Sigma^A_j$
separately, as they have different geometry.
To this end define $GR_{A,j}(T)= \mathbf{a}+\operatorname{min}\{\mathbf{b},\mathbf{c}\}-1 - \operatorname{dim}\Sigma^A_j -j$.
Let ${\mathcal G}{\mathcal R}_{r, A,j}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)=\{[T]\in \mathbb P (A{\mathord{ \otimes } } B{\mathord{ \otimes } } C ) \mid GR_{A,j}(T)\leq r\}$.
Let $\sigma_r(Seg(\mathbb P B\times \mathbb P C))\subset \mathbb P (B{\mathord{ \otimes } } C)$ denote
the variety of $\mathbf{b}\times \mathbf{c}$ matrices of rank at most $r$.
By the correspondence \eqref{tscorresp}, the study of ${\mathcal G}{\mathcal R}_{r,A,j}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the
study of the variety
\begin{equation}\label{Xi}
\{ E\in G(\mathbf{a} ,B^*{\mathord{ \otimes } } C^*)
\mid \operatorname{dim}(\mathbb P E\cap \sigma_{\operatorname{min}\{\mathbf{b},\mathbf{c}\}-j}(Seg(\mathbb P B^*\times \mathbb P C^*)))\geq \mathbf{a}+\operatorname{min}\{\mathbf{b},\mathbf{c}\} -j-1-r\}.
\end{equation}
The following is immediate from the definitions, but since it is significant
we record it:
\begin{observation} ${\mathcal G}{\mathcal R}_{\mathbf{a}-1, A,1}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C) = 1_A-degen$. In particular, tensors that are
$1_A$, $1_B$, or $1_C$ degenerate have degenerate geometric rank.
${\mathcal G}{\mathcal R}_{\mathbf{a}-1 , A,\mathbf{a}}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the set of tensors that fail to be $A$-concise. In particular,
non-concise tensors do not have maximal geometric rank.
\end{observation}
It is classical that $\operatorname{dim} \sigma_{m-j}(Seg(\pp{m-1}\times \pp{m-1}))=m^2-j^2-1$.
Thus for a general tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$, $\operatorname{dim}(\Sigma^A_j)=m-j^2-1$. In particular, it
is empty when $j\geq \sqrt{m}$.
\begin{observation}\label{Rmin} If $T\in\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ is concise and $GR(T)<m$, then
$\bold R(T)>m$.
\end{observation}
\begin{proof} If $T$ is concise then $\bold R(T)\geq m$, and if
equality holds then $T$ can be written as $\sum_{j=1}^m a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j$ for some bases $\{a_j\},\{b_j\}$ and $\{c_j\}$ of $A,B$ and $C$ respectively. But $GR(\sum_{j=1}^m a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j)=m$.
\end{proof}
\begin{question} For concise $1_*$-generic tensors $T\in \mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } }\mathbb C^m$,
is $\bold R(T)\geq 2m-GR(T)$?
\end{question}
\subsection{Tensors of minimal border rank}
If $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C=\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$ is concise of minimal border rank $m$,
then there exist complete flags $0\subset A_1^*\subset A_2^*\subset
\cdots \subset A_{m-1}^*\subset A^*$ in $A^*$, and similarly in $B^*$ and $C^*$,
such that $T|_{A_j^*{\mathord{ \otimes } } B_j^*{\mathord{ \otimes } } C_j^*}$ has border rank at most $j$, see \cite[Prop. 2.4]{CHLlaser}.
In particular, $\operatorname{dim}( \mathbb P T(A^*)\cap \sigma_j(Seg(\mathbb P B\times \mathbb P C)))\geq j-1$.
If the inequality is strict for some $j$, say equal to $j-1+q$, we say {\it the $(A,j)$-th flag condition
for minimal border rank is passed with excess $q$}.
\begin{observation} The geometric rank of
a concise tensor in $\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
is $m$ minus the largest excess of the $(A,j)$ flag conditions for minimal
border rank.
\end{observation}
We emphasize that a tensor with degenerate geometric rank need not have minimal
border rank, and need not pass all the $A$-flag conditions for minimal border rank;
it is only guaranteed that one of the conditions is passed with excess.
\section{Examples of tensors with degenerate geometric ranks}\label{Exsect}
\subsection{Matrix multiplication and related tensors}
\begin{example}[Matrix multiplication]\label{mmex} Set $m=n^2$. Let $U,V,W=\mathbb C^n$.
Write $A=U^*{\mathord{ \otimes } } V$, $B=V^*{\mathord{ \otimes } } W$, $C=W^*{\mathord{ \otimes } } U$.
The structure tensor of matrix multiplication is
$T=M_{\langle \mathbf{n} \rangle}=\operatorname{Id}_U{\mathord{ \otimes } }\operatorname{Id}_V{\mathord{ \otimes } } \operatorname{Id}_W$ (re-ordered), where $\operatorname{Id}_U\in U^*{\mathord{ \otimes } } U$ is
the identity.
When $n=2$, $\Sigma^{AB}=Seg(\mathbb P U^*\times \mathcal I_V\times \mathbb P W)$,
where $\mathcal I_V=\{ [v]\times [\nu]\in \mathbb P V\times \mathbb P V^* \mid \nu(v)=0\}$ has dimension $3$, so $GR(M_{\langle 2\rangle})=6-3=3$.
Note that
$\Sigma^A_2=
\Sigma^A_1=Seg(\mathbb P U^*\times \mathbb P V)=Seg(\pp 1\times\pp 1)$ (with multiplicity two).
For $[\mu{\mathord{ \otimes } } v]\in \Sigma^A_2$, $(\pi^{AB}_A){}^{-1}[\mu{\mathord{ \otimes } } v]=
\mathbb P (\mu{\mathord{ \otimes } } v{\mathord{ \otimes } } v^\perp{\mathord{ \otimes } } W)\cong \pp 1$. Since
the tensor is $\BZ_3$-invariant the same holds for $\Sigma^B,\Sigma^C$.
For larger $n$, the dimension of the fibers of $\pi^{AB}_A$ varies with the
rank of $X\in \{\operatorname{det}_{n}=0\}$. The fiber is
$[X]\times \mathbb P ({\rm Rker}(X){\mathord{ \otimes } } W)$, which has dimension $(n-{\mathrm {rank}}(X))n-1$.
Write $r={\mathrm {rank}}(X)$.
Each $r$ gives rise to a $(n-r)n-1+(2nr-r^2-1)=n^2-r^2+nr-2$ dimensional component
of $\Sigma^{AB}$. There are $n-1$ components, the largest dimension
is attained when $r=\lceil \frac n2\rceil$, where the
dimension is $n^2+\lceil \frac n2\rceil\lfloor \frac n2\rfloor -2$
and we recover the result of \cite{kopparty2020geometric} that $GR(M_{\langle \mathbf{n} \rangle})=\lceil \frac 34 n^2\rceil=\lceil \frac 34 m\rceil$, caused by $\Sigma^A_{\lceil \frac n 2\rceil}$.
\end{example}
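The key computation in the example, that for $M_{\langle n\rangle}$ the slice $T_A(X): Y\mapsto XY$ has rank $n\cdot{\mathrm{rank}}(X)$ (equivalently, kernel of dimension $(n-{\mathrm{rank}}(X))n$, which gives the fiber dimension $(n-{\mathrm{rank}}(X))n-1$), can be confirmed numerically. A sketch (NumPy; the index pair $(i,j)$ is flattened to $i\cdot n+j$):

```python
import numpy as np

n = 3
m = n * n
# Structure tensor of matrix multiplication via tr(XYZ) = sum X_ij Y_jk Z_ki,
# with index pairs flattened as (i, j) -> i*n + j.
T = np.zeros((m, m, m))
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + j, j * n + k, k * n + i] = 1

def slice_rank_of(X):
    # rank of T_A(X), the m x m matrix sum_a X_a T[a, :, :];
    # as a linear map on n x n matrices it is Y -> XY.
    M = np.einsum('a,abc->bc', X.reshape(m), T)
    return int(np.linalg.matrix_rank(M))

rng = np.random.default_rng(0)
for r in range(1, n + 1):
    X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank r
    print(r, slice_rank_of(X))  # expect n * r
```

In particular the rank of $T_A(X)$ drops below $n^2$ exactly on $\{\operatorname{det}_n=0\}$, in steps of size $n$, which is what produces the components of $\Sigma^{AB}$ counted above.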
\begin{example}[Structure Tensor of $\mathfrak{sl}_n$] Set $m=n^2-1$.
Let $U=\mathbb C^n$ and let $A=B=C={\mathfrak {sl}}_n={\mathfrak {sl}}(U)$. For $a,b\in {\mathfrak {sl}}_n$, $[a,b]$ denotes their commutator.
Let $T_{{\mathfrak {sl}}_n}\in\mathfrak{sl}_n(\mathbb{C})^{\otimes 3}$ be
the structure tensor of ${\mathfrak {sl}}_n$: $T_{{\mathfrak {sl}}_n}=\sum_{i,j=1}^{n^2-1} a_i\otimes b_j\otimes [a_i,b_j]$. Then $\hat{\Sigma}^{AB} =\{(x,y)\in A^*\times B^*|[x,y]=0\}$.
Let $C(2,n):=\{(x,y)\in U^*{\mathord{ \otimes } } U\times U^*{\mathord{ \otimes } } U\,|\,xy=yx\}$. In \cite{MR86781} it was shown that
$C(2,n)$ is irreducible. Its dimension is
$n^2+n$, which
was computed in \cite[Prop. 6]{MR1753173}.
Therefore
$\hat{\Sigma}^{AB}=(\mathfrak{sl}_n(\mathbb{C})\times\mathfrak{sl}_n(\mathbb{C}))\cap C(2,n)$
has dimension $n^2+n-2$, and $GR(T_{{\mathfrak {sl}}_n})=\mathrm{dim}(\mathfrak{sl}_n(\mathbb{C})\times\mathfrak{sl}_n(\mathbb{C}))-\mathrm{dim}\hat{\Sigma}^{AB}=n^2-n
=m+1-\sqrt{m+1}$.
\end{example}
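The dimension count for the commuting variety can be probed numerically at a regular semisimple point. A small sketch (ours; assumes numpy, and the diagonal choice of $x$ is for illustration only):

```python
import numpy as np

# Numerical check of dim hat-Sigma^{AB} = n^2 + n - 2: for a regular
# semisimple x in sl_n, the centralizer in gl_n is the diagonal matrices
# (dimension n, the nullity of ad_x), hence dimension n - 1 in sl_n.
def gr_sl(n):
    lam = np.arange(1, n + 1, dtype=float)
    x = np.diag(lam - lam.mean())                          # traceless, distinct eigenvalues
    ad = np.kron(np.eye(n), x) - np.kron(x.T, np.eye(n))   # ad_x acting on vec(y)
    nullity = n * n - np.linalg.matrix_rank(ad)            # centralizer dim in gl_n (= n)
    dim_sigma = (n * n - 1) + (nullity - 1)                # base sl_n plus generic fiber
    return 2 * (n * n - 1) - dim_sigma                     # codimension = GR

vals = [gr_sl(n) for n in range(2, 7)]                     # expect n^2 - n
```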
\begin{example}[Symmetrized Matrix Multiplication] Set $m=n^2$.
Let $A=B=C=U^*{\mathord{ \otimes } } U$, with $\operatorname{dim} U=n$.
Let $T=SM_{\langle n\rangle}\in (U^*{\mathord{ \otimes } } U)^{\otimes 3}$ be
the symmetrized matrix multiplication tensor: $SM_{\langle n\rangle}(X,Y,Z):=\mathrm{tr}(XYZ)+\mathrm{tr}(YXZ)$.
In \cite{MR3829726} it was shown that the exponent of $SM_{\langle n\rangle}$ equals the exponent
of matrix multiplication. On the other hand, $SM_{\langle n\rangle}$ is a cubic polynomial and thus
may be studied with more tools from classical algebraic geometry, which raises the hope of new
paths towards determining the exponent.
Note that $SM_{\langle n\rangle}(X,Y,\cdot)=0$ if and only if $XY+YX=0$. So $\hat{\Sigma}^{AB}=\{(X,Y)\in U^*{\mathord{ \otimes } } U\times U^*{\mathord{ \otimes } } U\,|\,XY+YX=0\}$.
Fix any matrix $X$, let $M_X$ and $M_{-X}$ be two copies of $\mathbb{C}^n$ with $\mathbb{C}[t]$-module structures: $t\cdot v:=Xv,\forall v\in M_X$ and $t\cdot w:=-Xw,\forall w\in M_{-X}$, where $\mathbb{C}[t]$ is the polynomial ring.
For any linear map $\varphi:M_X\rightarrow M_{-X}$,
\begin{align*}
\varphi\in\mathrm{Hom}_{\mathbb{C}[t]}(M_X,M_{-X}) & \iff\varphi(tv)=t\varphi(v),\forall v\in M_X\\
&\iff\varphi(Xv)=-X\varphi(v), \forall v\in M_X\\
&\iff\varphi X=-X \varphi.
\end{align*}
This gives a vector space isomorphism $(\pi^{AB}_A)^{-1}(X):=\{Y|XY+YX=0\}\cong \mathrm{Hom}_{\mathbb{C}[t]}(M_X,M_{-X})$.
By the structure theorem of finitely generated modules over
principal ideal domains, $M_X$ has a primary decomposition:
$$M_X\cong \quot{\mathbb{C}[t]}{(t-\lambda_1)^{r_1}}\oplus\cdots\oplus\quot{\mathbb{C}[t]}{(t-\lambda_k)^{r_k}}$$
for some $\lambda_i\in\mathbb{C}$ and $\sum r_i=n$. Replacing $t$ with $-t$ we get a decomposition of $M_{-X}$:
$$M_{-X}\cong \quot{\mathbb{C}[t]}{(t+\lambda_1)^{r_1}}\oplus\cdots\oplus\quot{\mathbb{C}[t]}{(t+\lambda_k)^{r_k}}.
$$
We have the decomposition $\mathrm{Hom}_{\mathbb{C}[t]}(M_X,M_{-X})\cong \bigoplus\limits_{i,j}\mathrm{Hom}_{\mathbb{C}[t]}(\quot{\mathbb{C}[t]}{(t-\lambda_i)^{r_i}},\quot{\mathbb{C}[t]}{(t+\lambda_j)^{r_j}})$. For each $i,j$:
\begin{align*}&\mathrm{Hom}_{\mathbb{C}[t]}\left(\quot{\mathbb{C}[t]}{(t-\lambda_i)^{r_i}},\quot{\mathbb{C}[t]}{(t+\lambda_j)^{r_j}}\right)\\
&=\left\{
\begin{array}{ll}
\langle 1\mapsto(t-\lambda_i)^l\,|\,0\leq l\leq r_j-1\rangle &\mathrm{if}\; \lambda_i+\lambda_j=0\;\mathrm{and}\;r_i\geq r_j;\\
\langle 1\mapsto(t-\lambda_i)^l\,|\,r_j-r_i\leq l\leq r_j-1\rangle &\mathrm{if}\; \lambda_i+\lambda_j=0\;\mathrm{and}\;r_i< r_j;\\
0&\mathrm{otherwise.}
\end{array}
\right.
\end{align*}
Let $d_{ij}(X)$
denote its dimension; then $d_{ij}(X)=\left\{
\begin{array}{ll}
\mathrm{min}\{r_i,r_j\}&\mathrm{if}\; \lambda_i+\lambda_j=0;\\
0&\mathrm{otherwise.}
\end{array}
\right.$
Thus $\mathrm{dim}((\pi^{AB}_A)^{-1} (X))=\sum\limits_{i,j} d_{ij}(X)$.
Each direct summand $\quot{\mathbb{C}[t]}{(t-\lambda_i)^{r_i}}$ of $M_X$ corresponds to a Jordan block of the Jordan canonical form of $X$ with size $r_i$ and eigenvalue $\lambda_i$, denoted as $J_{\lambda_i}(r_i)$.
Assume $X$ has eigenvalues $\pm\lambda_1, \cdots,\pm\lambda_k,\lambda_{k+1},\cdots,\lambda_{l}$ such that $\lambda_i\neq\pm\lambda_j$ whenever $i\neq j$. Let $q_{X,1}(\lambda)\geq q_{X,2}(\lambda) \geq \cdots$ be the decreasing sequence of sizes of the Jordan blocks of $X$ corresponding to the eigenvalue $\lambda$. Let $W(X)$ be the set of matrices $X'$ with eigenvalues $\pm\lambda'_1, \cdots,\pm\lambda'_k,\lambda'_{k+1},\cdots,\lambda'_{l}$ such that $\lambda'_i\neq\pm\lambda'_j$ whenever $i\neq j$, and $q_{X,j}(\pm \lambda_i)=q_{X',j}(\pm \lambda'_i)\,\forall i,j$. Then $W(X)$ is quasi-projective and irreducible of dimension $\mathrm{dim}W(X)=\mathrm{dim}\{P^{-1}XP\,|\,\mathrm{det}P\neq 0\}+l$, and $(\pi^{AB}_A)^{-1}(X')$ is
of the same dimension as $(\pi^{AB}_A)^{-1}(X)$ for all $X'\in W(X)$.
By results in \cite{MR1355688}, the codimension of the orbit of $X$
under the adjoint action of $GL(U)$ is $c_{Jor}(X):=\sum_{\lambda}[q_{X,1}(\lambda)+3q_{X,2}(\lambda)+5q_{X,3}(\lambda)+\cdots]$. Then
$$\mathrm{dim}\hat{\Sigma}^{AB}=\max_X(\mathrm{dim}W(X)+\mathrm{dim}(\pi^{AB}_{A})^{-1}(X))=\max_X(n^2-c_{Jor}(X)+ \operatorname{dim}(\pi_{A}^{AB}){}^{-1}(X)+l)$$
because
$\hat \Sigma^{AB}=\cup_X(\pi^{AB}_A){}^{-1} (W(X))$ is a finite union.
It is easy to show that $\operatorname{dim}(\pi_{A}^{AB}){}^{-1}(X)-c_{Jor}(X)$ attains its maximum $0$ when for every eigenvalue $\lambda_i$ of $X$, $-\lambda_i$ is also an eigenvalue of $X$ and $q_{X,j}(\lambda_i)=q_{X,j}(-\lambda_i),\forall i,j$. So the total maximum is achieved when $X$ has the maximum possible number of distinct pairs $\pm\lambda_i$, i.e.,
$$X\cong\left\{
\begin{array}{ll}
\mathrm{diag}(\lambda_1,-\lambda_1,\lambda_2,-\lambda_2,\cdots,\lambda_{\frac{n}{2}},-\lambda_{\frac{n}{2}}) &\mathrm{if}\; n\; \mathrm{is\;even};\\
\mathrm{diag}(\lambda_1,-\lambda_1,\lambda_2,-\lambda_2,\cdots,\lambda_{\frac{n-1}{2}},-\lambda_{\frac{n-1}{2}},0) &\mathrm{if}\; n\; \mathrm{is\;odd}.
\end{array}
\right.
$$
In both cases $\mathrm{dim}\hat{\Sigma}^{AB}=n^2+\lfloor \frac{n}{2}\rfloor$. We conclude that $GR(SM_{\langle n\rangle})=n^2-\lfloor \frac{n}{2}\rfloor=m-\lfloor \frac {\sqrt{m}}2\rfloor$.
\end{example}
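The fiber dimension entering the count above can be checked numerically via the vectorized Sylvester equation. A sketch (ours; assumes numpy):

```python
import numpy as np

# For X = diag(l_1, -l_1, ..., l_{n/2}, -l_{n/2}) with distinct l_i, the
# fiber {Y : XY + YX = 0} should have dimension n: one free matrix entry
# for each pair of eigenvalues summing to zero.
def anticommutant_dim(X):
    n = X.shape[0]
    # vec(XY + YX) = (I kron X + X^T kron I) vec(Y), vec = column stacking
    M = np.kron(np.eye(n), X) + np.kron(X.T, np.eye(n))
    return n * n - np.linalg.matrix_rank(M)

d = anticommutant_dim(np.diag([1.0, -1.0, 2.0, -2.0, 3.0, -3.0]))  # n = 6
```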
\subsection{Large border rank and small geometric rank}
The following example shows that border rank can be quite large while
geometric rank is small:
\begin{example}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C=\mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
have the form
$T=a_1{\mathord{ \otimes } } (b_1{\mathord{ \otimes } } c_1+\cdots + b_m{\mathord{ \otimes } } c_m)+ T'$
where $T'\in A'{\mathord{ \otimes } } B'{\mathord{ \otimes } } C':=\operatorname{span}\{ a_2, \hdots , a_m\}
{\mathord{ \otimes } } \operatorname{span}\{ b_1, \hdots , b_{\lfloor \frac m2\rfloor}\}
{\mathord{ \otimes } } \operatorname{span}\{ c_{\lceil \frac m2\rceil}, \hdots , c_m\}$ is generic.
It was shown in \cite{MR3682743} that $\bold R(T)= \bold R(T')+m$ and
$\underline{\mathbf{R}}(T)\geq \frac{m^2}8$.
We have
$$T(A^*)\subset \begin{pmatrix}
x_1 & & & & & \\
& \ddots & & & & \\
& & x_1 & & & \\
* & \cdots & * & x_1 & &\\
\vdots & \vdots & \vdots & & \ddots & \\
* &\cdots & * & & & x_1\end{pmatrix}.
$$
Setting $x_1=0$, we see that
a component of $\Sigma^A_{\lfloor \frac m2\rfloor}\subset \mathbb P A^*$ is a hyperplane,
so $GR(T)\leq \lceil \frac m2\rceil+1$.
\end{example}
\subsection{Tensors arising in Strassen's laser method}
\begin{example}[Big Coppersmith-Winograd tensor] \label{CWqex}
The following tensor has been used to obtain every new upper bound on the
exponent of matrix multiplication since 1988:
$$T_{CW,q}=\sum_{j=1}^q a_0{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j
+a_j{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_j+a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_0+a_0{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_{q+1}
+a_0{\mathord{ \otimes } } b_{q+1}{\mathord{ \otimes } } c_0+a_{q+1}{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_0.
$$
One has
$\bold R(T_{CW,q})=2q+3=2m-1$ \cite[Prop. 7.1]{MR3682743}
and $\underline{\mathbf{R}}(T_{CW,q})=q+2=m$ \cite{MR91i:68058}.
Note
$$
T_{CW,q}(A^*)=
\begin{pmatrix}
x_{q+1}& x_1 &\cdots & x_q &x_0\\
x_1 & x_0 & & & \\
x_2 & & \ddots & & \\
\vdots & & & & \\
x_q & & & x_0 & \\
x_0 & & & & 0
\end{pmatrix}
\ \ \simeq \ \
\begin{pmatrix}
x_0& x_1 &\cdots & x_q &x_{q+1}\\
& x_0 & & & x_1\\
& & \ddots & & x_2\\
& & & & \vdots\\
& & & x_0 & x_q\\
& & & & x_0
\end{pmatrix}
$$
where $\simeq$ means equal up to changes of bases. So we have
$\Sigma^A_{1}=\Sigma^A_2=\cdots =\Sigma^A_{q}=\{x_0=0\}$ and $\Sigma^A_{q+1}=\{x_0=\cdots=x_q=0\}$.
Therefore $GR(T_{CW,q})=2(q+2)-1-(\mathrm{dim}\Sigma^A_{q}+q)=3$.
\end{example}
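The rank stratification of $T_{CW,q}(A^*)$ used above can be verified numerically for small $q$. A sketch (ours; assumes numpy, and uses the first matrix form displayed above):

```python
import numpy as np

q = 3                                  # m = q + 2 = 5
def cw_slice(x):                       # x = (x_0, x_1, ..., x_{q+1})
    M = np.zeros((q + 2, q + 2))
    M[0, :] = [x[q + 1]] + [x[i] for i in range(1, q + 1)] + [x[0]]
    for i in range(1, q + 1):
        M[i, 0] = x[i]                 # first column: x_1, ..., x_q
        M[i, i] = x[0]                 # x_0 on the diagonal
    M[q + 1, 0] = x[0]                 # last row: x_0, 0, ..., 0
    return M

generic = np.linalg.matrix_rank(cw_slice([1, 2, 3, 4, 5]))   # full rank q + 2
on_x0   = np.linalg.matrix_rank(cw_slice([0, 2, 3, 4, 5]))   # rank 2 on {x_0 = 0}
deep    = np.linalg.matrix_rank(cw_slice([0, 0, 0, 0, 5]))   # rank 1 on {x_0=...=x_q=0}
```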
\begin{example}[Small Coppersmith-Winograd tensor]\label{cwqex} The following tensor was
the second tensor used in the laser method and for $2\leq q\leq 10$, it
could potentially prove the exponent is less than $2.3$:
$T_{cw,q}=\sum_{j=1}^q a_0{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j
+a_j{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_j+a_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_0$.
It satisfies
$\bold R(T_{cw,q})=2q+1=2m-1$ \cite[Prop. 7.1]{MR3682743}
and $\underline{\mathbf{R}}(T_{cw,q})=q+2=m+1$ \cite{MR91i:68058}. We again have
$GR(T_{cw,q})=3$ as e.g., $\Sigma^{AB}=\{ x_0=y_0=\sum_{j\geq 1} x_j y_j=0\}\cup\{\forall j\geq 1, x_j=y_j=0\}$.
\end{example}
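The description of $\Sigma^{AB}$ can be spot-checked by contracting the tensor against a point of the first component. A sketch (ours; assumes numpy):

```python
import numpy as np

q = 3
T = np.zeros((q + 1, q + 1, q + 1))   # T[i, j, k], indices 0..q
for j in range(1, q + 1):
    T[0, j, j] = 1                    # a_0 (x) b_j (x) c_j
    T[j, 0, j] = 1                    # a_j (x) b_0 (x) c_j
    T[j, j, 0] = 1                    # a_j (x) b_j (x) c_0

# A point of the component {x_0 = y_0 = sum_j x_j y_j = 0}:
x = np.array([0., 1., 2., 1.])
y = np.array([0., 2., -1., 0.])       # 1*2 + 2*(-1) + 1*0 = 0
contraction = np.einsum('ijk,i,j->k', T, x, y)   # T_{cw,q}(x, y, .)
```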
\begin{example} [Strassen's tensor] \label{strassenten} The following is the
first tensor that was used in the laser method:
$T_{str,q}=\sum_{j=1}^q a_0{\mathord{ \otimes } } b_j{\mathord{ \otimes } }
c_j + a_j{\mathord{ \otimes } } b_0{\mathord{ \otimes } } c_j\in \mathbb C^{q+1}{\mathord{ \otimes } } \mathbb C^{q+1}{\mathord{ \otimes } } \mathbb C^q$.
It satisfies $\underline{\mathbf{R}}(T_{str,q})=q+1$ and $\bold R(T_{str,q})=2q$ \cite{MR3682743}.
Since
$$
T_{str,q}(A^*)=
\begin{pmatrix} x_1& \cdots & x_q\\
x_0 & & \\
& \ddots & \\
& & x_0\end{pmatrix}
$$
we see $GR(T_{str,q})=2$, caused by $\Sigma^{A}_q=\mathbb P \langle\alpha_1, \hdots , \alpha_q\rangle$.
\end{example}
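The rank pattern of $T_{str,q}(A^*)$ is easy to confirm numerically. A sketch (ours; assumes numpy):

```python
import numpy as np

q = 4
def str_slice(x):                     # x = (x_0, x_1, ..., x_q)
    M = np.zeros((q + 1, q))
    M[0, :] = x[1:]                   # first row: x_1, ..., x_q
    for i in range(q):
        M[i + 1, i] = x[0]            # x_0 times the identity below
    return M

generic = np.linalg.matrix_rank(str_slice(np.array([1., 2., 3., 4., 5.])))  # rank q
on_x0   = np.linalg.matrix_rank(str_slice(np.array([0., 2., 3., 4., 5.])))  # rank 1
```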
\subsection{Additional examples of tensors with geometric
rank $3$}\label{moregr3}
\begin{example} \label{1ggr3} The following tensor
was shown in \cite{MR3682743} to take minimal values for Strassen's functional (called maximal
compressibility in \cite{MR3682743}):
$$
T_{maxsymcompr,m}
=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1
+\sum_{\rho=2}^m a_1{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_\rho
+ a_\rho{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_\rho + a_\rho{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_1.
$$
Note
$$T_{maxsymcompr,m}(A^*)=\begin{pmatrix}
x_1 & x_2& x_3& \cdots & x_m\\
x_2 & x_1&0&\cdots&0 \\
x_3 & 0 & x_1 &&\vdots \\
\vdots & \vdots & & \ddots &0\\
x_m& 0&\cdots &0& x_1
\end{pmatrix}.
$$
Restricting to the hyperplane $x_1=0$, we obtain
a space of bounded rank two, i.e., $\Sigma^A_{m-2}\subset \mathbb P A^*$
is a hyperplane. We conclude, assuming $m\geq 3$, that
$GR(T)= 3$.
\end{example}
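A quick numerical confirmation of the bounded-rank statement (ours; assumes numpy):

```python
import numpy as np

m = 5
def slice_mat(x):                     # x = (x_1, ..., x_m), stored 0-indexed
    M = np.diag(np.full(m, x[0]))     # x_1 on the diagonal
    M[0, :] = x                       # first row: x_1, ..., x_m
    M[:, 0] = x                       # first column: x_1, ..., x_m
    return M

generic = np.linalg.matrix_rank(slice_mat(np.array([1., 2., 3., 4., 5.])))  # rank m
on_hyp  = np.linalg.matrix_rank(slice_mat(np.array([0., 2., 3., 4., 5.])))  # rank 2
```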
\begin{example}\label{badex}
Let $m=2q$ and let
$$
T_{gr3,1deg,2q}:=\sum_{s=1}^{q} a_s{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_s+ \sum_{t=2}^{q}a_{t+q-1}{\mathord{ \otimes } } b_t{\mathord{ \otimes } } c_1
+a_m{\mathord{ \otimes } } (\sum_{u=q+1}^m b_u{\mathord{ \otimes } } c_u),
$$
so
\begin{equation}\label{badA}
T_{gr3,1deg,m}(A^*)=
\begin{pmatrix} x_1 & x_2 & \cdots & x_q & 0 & \cdots & 0\\
x_{q+1}& 0 & \cdots & 0 &0 &\cdots & 0 \\
\vdots & \vdots & &\vdots &\vdots & &\vdots \\
x_{m-1}& 0 & \cdots &0 &0 &\cdots&0 \\
0 &0 &\cdots & 0 &x_m & & \\
\vdots &\vdots & & \vdots & &\ddots & \\
0 &0 &\cdots & 0 & & &x_m \end{pmatrix}.
\end{equation}
Then $GR(T_{gr3,1deg,m})=3$ (set $x_m=0$) and $\bold R(T_{gr3,1deg,m})=\frac 32m-1$;
the upper bound is clear from the expression and the lower bound is given in
Example \ref{badgrex}.
\end{example}
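The rank behavior of \eqref{badA} can be confirmed numerically for small $q$. A sketch (ours; assumes numpy):

```python
import numpy as np

q, m = 3, 6
def bad_slice(x):                     # x = (x_1, ..., x_m), stored 0-indexed
    M = np.zeros((m, m))
    M[0, :q] = x[:q]                  # first row: x_1, ..., x_q
    for i in range(1, q):
        M[i, 0] = x[q + i - 1]        # first column: x_{q+1}, ..., x_{m-1}
    for i in range(q, m):
        M[i, i] = x[m - 1]            # lower right block: x_m times identity
    return M

generic = np.linalg.matrix_rank(bad_slice(np.arange(1., m + 1)))                # q + 2
on_xm   = np.linalg.matrix_rank(bad_slice(np.array([1., 2., 3., 4., 5., 0.])))  # 2
```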
\begin{example}\label{badex2}
Let $m=2q-1$ and let
$$
T_{gr3,1deg,2q-1}:=\sum_{s=2}^{q} a_s{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_s+ a_{s+q-1}{\mathord{ \otimes } } b_s{\mathord{ \otimes } } c_1
+a_1{\mathord{ \otimes } } (\sum_{u=q+1}^m b_u{\mathord{ \otimes } } c_u),
$$
so
\begin{equation}\label{badA2}
T_{gr3,1deg,2q-1}(A^*)=
\begin{pmatrix} 0 & x_2 & \cdots & x_q & 0 & \cdots & 0\\
x_{q+1}& 0 & \cdots & 0 &0 &\cdots & 0 \\
\vdots & \vdots & &\vdots &\vdots & &\vdots \\
x_{m}& 0 & \cdots &0 &0 &\cdots&0 \\
0 &0 &\cdots & 0 &x_1 & & \\
\vdots &\vdots & & \vdots & &\ddots & \\
0 &0 &\cdots & 0 & & &x_1\end{pmatrix}.
\end{equation}
Then $GR(T_{gr3,1deg,2q-1})=3$ (set $x_1=0$) and $\bold R(T_{gr3,1deg,2q-1})= m+\frac{m-1}2-1$;
the upper bound is clear from the expression and the lower bound is given in
Example \ref{badgrex}.
\end{example}
\subsection{Kronecker powers of tensors with degenerate geometric rank}
For tensors $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ and $T'\in A'{\mathord{ \otimes } } B'{\mathord{ \otimes } } C'$, the {\it Kronecker product} of $T$ and $T'$ is the tensor $T\boxtimes T' := T {\mathord{ \otimes } } T' \in (A{\mathord{ \otimes } } A'){\mathord{ \otimes } } (B{\mathord{ \otimes } } B'){\mathord{ \otimes } } (C{\mathord{ \otimes } } C')$, regarded as a $3$-way tensor. Given $T \in A \otimes B \otimes C$, the {\it Kronecker powers} of $T$ are $T^{\boxtimes N} \in A^{\otimes N} \otimes B^{\otimes N} \otimes C^{\otimes N}$, defined iteratively. Rank and border rank are submultiplicative under the Kronecker product, while
subrank and border subrank are super-multiplicative under the Kronecker product.
Geometric rank is neither sub- nor super-multiplicative under the Kronecker product.
We already saw the lack of sub-multiplicativity with $M_{\langle \mathbf{n} \rangle}$ (recall $M_{\langle \mathbf{n} \rangle}^{\boxtimes 2}=
M_{\langle n^2\rangle}$):
$\lceil\frac 34 n^4\rceil=GR(M_{\langle \mathbf{n} \rangle}^{\boxtimes 2})> \lceil\frac 34 n^2\rceil^2=GR(M_{\langle \mathbf{n} \rangle})^2$.
An indirect example of the failure
of super-multiplicativity is given in \cite{kopparty2020geometric} where they point out that some power of
$W:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_1+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1$ is
strictly sub-multiplicative. We make this explicit:
\begin{example}
With basis indices ordered $22,21,12,11$ for $B^{{\mathord{ \otimes } } 2},C^{{\mathord{ \otimes } } 2}$, we have
$$
W^{\boxtimes 2} (A^{{\mathord{ \otimes } } 2*})=
\begin{pmatrix}
x_{11}& x_{12}&x_{21}&x_{22}\\
0 & x_{11} & 0 & x_{21}\\
0 & 0 &x_{11} & x_{12}\\
0 &0&0& x_{11}
\end{pmatrix}
$$
which is $T_{CW,2}$ after permuting basis vectors (see Example \ref{CWqex}) so
$ GR(W^{\boxtimes 2})=3<4=GR(W)^2$.
\end{example}
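This identification can be checked numerically: the slice matrices of $W$ are $s_1=b_1{\mathord{ \otimes } } c_2+b_2{\mathord{ \otimes } } c_1$ and $s_2=b_1{\mathord{ \otimes } } c_1$, and those of $W^{\boxtimes 2}$ are their pairwise Kronecker products. A sketch (ours; assumes numpy; the labeling of the distinguished coordinate is ours):

```python
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])   # slice of W dual to a_1
s2 = np.array([[1., 0.], [0., 0.]])   # slice of W dual to a_2
slices = [s1, s2]

def square_slice(x):
    # Generic slice of W^{boxtimes 2}: x is a 2 x 2 array of coordinates,
    # paired with the Kronecker products of the slices of W.
    return sum(x[i, j] * np.kron(slices[i], slices[j])
               for i in range(2) for j in range(2))

# kron(s1, s1) is a permutation matrix; its coordinate plays the role of
# x_0 in T_{CW,2}: full rank generically, rank 2 when it vanishes.
generic    = np.linalg.matrix_rank(square_slice(np.array([[1., 2.], [3., 4.]])))
degenerate = np.linalg.matrix_rank(square_slice(np.array([[0., 2.], [3., 4.]])))
```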
\section{Proofs of main theorems}
In this section, after reviewing facts about spaces of matrices of bounded rank and
the substitution method for bounding tensor rank, we prove
a result lower-bounding the tensor rank of tensors associated to compression
spaces (Proposition \ref{comprex}), a lemma on linear sections of
$\sigma_3(Seg(\mathbb P B\times \mathbb P C))$ (Lemma \ref{linelem}), and
Theorems \ref{GRtwo} and \ref{GRthree}.
\subsection{Spaces of matrices of bounded rank}\label{detspaces}
Spaces of matrices of bounded rank (Definition \ref{bndrkdef}) form a classical subject dating back at least to
\cite{MR136618}.
The results
most relevant here are from
\cite{MR587090,MR695915}, and they were recast in the language
of algebraic geometry in \cite{MR954659}. We review notions
relevant for our discussion.
A large class of spaces of matrices of bounded rank
$E\subset B{\mathord{ \otimes } } C$ are the {\it compression spaces}.
In bases, the space takes the block format
\begin{equation}\label{comprformat}
E=\begin{pmatrix} *& * \\ * & 0\end{pmatrix}
\end{equation}
where if the $0$ is of size $(\mathbf{b}-k)\times (\mathbf{c} -\ell)$,
the space is of bounded rank $k+\ell$.
If $m$ is odd, then any linear subspace of $\La 2 \mathbb C^m$ is
of bounded rank $m-1$.
More generally one can use the multiplication in any graded algebra
to obtain spaces of bounded rank, the case of $\La 2 \mathbb C^m$ being the
exterior algebra.
Spaces of bounded rank at most three are classified in \cite{MR695915}:
for spaces of bounded rank two there are only the compression spaces and
the three-dimensional space of skew-symmetric matrices $\La 2\mathbb C^3\subset
\mathbb C^3{\mathord{ \otimes } } \mathbb C^3$.
\subsection{Review of the substitution method}\label{submethrev}
\begin{proposition}\cite[Appendix
B]{MR3025382} \label{prop:slice}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$.
Fix a basis $a_1, \hdots , a_{\mathbf{a}}$ of $A$, with dual basis $\alpha^1, \hdots , \alpha^\mathbf{a}$.
Write $T=\sum_{i=1}^\mathbf{a} a_i\otimes M_i$, where $M_i =T(\alpha^i)\in B\otimes C$.
Suppose $\bold R(T)=r$ and $M_1\neq 0$. Then there exist constants $\lambda_2,\dots,
\lambda_\mathbf{a}$, such that the tensor
$$T' :=\sum_{j=2}^\mathbf{a} a_j\otimes(M_j-\lambda_j M_1)\in \operatorname{span}\{ a_2, \hdots , a_{\mathbf{a}}\} {\mathord{ \otimes } }
B{\mathord{ \otimes } }
C,$$
has rank at most $r-1$. I.e., $\bold R(T)\geq 1+\bold R(T')$.
The analogous assertions hold exchanging the role of $A$ with that of $B$ or $C$.
\end{proposition}
A visual tool for using the substitution method is to write
$T(B^*)$ as a matrix of linear forms. Then the $i$-th row of $T(B^*)$ corresponds to a tensor $a_i{\mathord{ \otimes } } M_i\in \mathbb C^1{\mathord{ \otimes } } B {\mathord{ \otimes } } C$. One adds unknown multiples of the first row of $T(B^*)$ to all other rows and deletes the first row; the resulting matrix represents a tensor $T'$ with $T'(B^*)\subset \operatorname{span}\{ a_2, \hdots , a_{\mathbf{a}}\}{\mathord{ \otimes } } C$.
In practice one applies Proposition \ref{prop:slice} iteratively, obtaining a sequence
of tensors in spaces of shrinking dimensions. See \cite[\S 5.3]{MR3729273} for a discussion.
For a positive integer $k\leq\mathbf{b}$, if the last $k$ rows of $T(A^*)$ are linearly independent, then one can apply Proposition \ref{prop:slice} $k$ times on the last $k$ rows. In this way, the first $\mathbf{b} - k$ rows are modified by unknown linear combinations of the last $k$ rows, and the last $k$ rows are deleted. Then one obtains a tensor $T'\in A\otimes \mathrm{span}\{b_1,\cdots,b_{\mathbf{b}-k}\}\otimes C$ such that $\bold R(T')\leq \bold R(T)-k$.
\begin{proposition}\label{comprex}
Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ be a concise tensor with $T(A^*)$ a bounded rank $\rho$
compression space. Then $\bold R(T)\geq \mathbf{b}+\mathbf{c}-\rho$. \end{proposition}
\begin{proof} Consider \eqref{comprformat}. Add to the first $k$ rows of $T(A^*)$ unknown linear combinations of the last $\mathbf{b}-k$ rows, each of which is
nonzero by conciseness. Then delete the last $\mathbf{b}-k$ rows. Note that the last $\mathbf{c}-\ell$ columns
are untouched, and (assuming the most disadvantageous combinations
are chosen) we obtain a tensor $T'\in A{\mathord{ \otimes } } \mathbb C^k{\mathord{ \otimes } } C$
satisfying $\bold R(T)\geq (\mathbf{b}-k)+\bold R(T')$. Next add to the first $\ell$ columns of $T'(A^*)$ unknown linear combinations of the last $\mathbf{c}-\ell$ columns, then delete the last $\mathbf{c}-\ell$ columns.
The resulting tensor $T''$ could very well be zero, but
we nonetheless have $\bold R(T')\geq (\mathbf{c}-\ell) +\bold R(T'')$ and thus $\bold R(T)
\geq (\mathbf{b}-k)+(\mathbf{c}-\ell)=\mathbf{b}+\mathbf{c}-\rho$.
\end{proof}
Here are the promised lower bounds for $T_{gr3,1deg,m}$:
\begin{example}\label{badgrex}
Consider \eqref{badA}. Add to the first row unknown linear combinations of the last $m-1$ rows, then delete the last $m-1$ rows. The resulting tensor is still $\langle a_1, \hdots , a_{q}\rangle$-concise,
so we have $\bold R(T_{gr3,1deg,2q})\geq m-1+ \frac m2$. The case of $T_{gr3,1deg,2q-1}$ is similar.
\end{example}
\subsection{Lemmas on linear sections of $\sigma_{r}(Seg(\mathbb P B\times \mathbb P C))$}\label{linelempf}
\begin{lemma} \label{turbolinelem} Let $E\subset B{\mathord{ \otimes } } C$
be a linear subspace. If $\mathbb P E\cap
\sigma_{r}(Seg(\mathbb P B\times \mathbb P C))$ is a hypersurface in $\mathbb P E$ of degree $r+1$ (counted with multiplicity) and does not contain any hyperplane of $\mathbb P E$, then
$\mathbb P E \subset \sigma_{r+1}(Seg(\mathbb P B\times \mathbb P C))$.
\end{lemma}
\begin{proof}
Write $E=(y^i_j)$ where $y^i_j=y^i_j(x_1, \hdots , x_{q})$, $1\leq i\leq \mathbf{b}$, $1\leq j \leq \mathbf{c}$ and $q=\operatorname{dim} E$.
By hypothesis, all size $r+1$ minors are, up to scale, equal to a polynomial $S$ of degree $r+1$. No linear
polynomial divides $S$, since otherwise the intersection would contain a hyperplane.
Since $\mathbb P E\not\subset \sigma_{r}(Seg(\mathbb P B\times \mathbb P C))$, there must be a size $r+1$ minor that is nonzero when restricted to $\mathbb P E$. Assume it is the $(1, \hdots , r+1)\times (1, \hdots , r+1)$-minor.
\smallskip
Consider the vector consisting of the first $r+1$ entries of the $(r+2)$-nd column. In order that all size $r+1$ minors of the upper left $(r+1)\times(r+2)$ block be multiples of $S$, this vector must be a linear combination
of the first $r+1$ columns, we may make these entries zero. Similarly, we may make
all other entries in the first $r+1$ rows zero. By the same argument, we may do
the same for the first $r+1$ columns.
Our space now has the form
\begin{equation}\label{boxformturbo}
\begin{pmatrix} y^1_1 & \cdots & y^1_{r+1} & 0 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
y^{r+1}_1 & \cdots & y^{r+1}_{r+1} & 0 & \cdots & 0\\
0 & \cdots & 0 & y^{r+2}_{r+2} & \cdots & y^{r+2}_\mathbf{c}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & \cdots & 0 & y^\mathbf{b}_{r+2} & \cdots & y^\mathbf{b}_\mathbf{c}\end{pmatrix}.
\end{equation}
If $\mathbb P E\not\subset \sigma_{r+1}(Seg(\mathbb P B\times \mathbb P C))$, some entry in the
lower $(\mathbf{b}-r-1)\times (\mathbf{c}-r-1)$ block is nonzero. Take one such entry and
consider the size $r+1$ minor formed by it together with an $r\times r$ submatrix of
the upper left block. We obtain a polynomial with a linear factor, so it cannot be a multiple
of $S$, giving a contradiction.
\end{proof}
\begin{lemma} \label{linelem} Let $\mathbf{b},\mathbf{c}>4$. Let $E\subset B
{\mathord{ \otimes } } C$
be a linear subspace of dimension $q>4$. Say $\operatorname{dim}(\mathbb P E\cap
\sigma_{2}(Seg(\mathbb P B\times \mathbb P C)))=q-2$ and $\mathbb P E\not\subset \sigma_{3}(Seg(\mathbb P B\times \mathbb P C))$.
Then either all components of $\mathbb P E\cap
\sigma_{2}(Seg(\mathbb P B\times \mathbb P C))$ are linear $\pp {q-2}$'s, or $E\subset\mathbb C^5\otimes\mathbb C^5$.
\end{lemma}
The proof is similar to the argument for Lemma \ref{turbolinelem}, except
that we work in a local ring.
\begin{proof}
Write $E=(y^i_j)$ where $y^i_j=y^i_j(x_1, \hdots , x_{q})$, $1\leq i\leq \mathbf{b}$, $1\leq j \leq \mathbf{c}$.
Assume, to get a contradiction, that there is an irreducible
component of degree greater than one in the intersection, given by an irreducible polynomial $S$ of degree
two or three that divides all size $3$ minors.
By Lemma \ref{turbolinelem},
$\operatorname{deg}(S)=2$.
Since $\mathbb P E\not\subset \sigma_{2}(Seg(\mathbb P B\times \mathbb P C))$, there must be some size $3$ minor that is nonzero when restricted to $\mathbb P E$. Assume it is the $(123)\times (123)$
minor.
\smallskip
Let $\Delta^I_J$ denote a (signed) size $3$ minor restricted to $E$, where $I=(i_1i_2 i_3)$,
$J=(j_1j_2 j_3)$, so $\Delta^I_J=L^I_JS$, for some $L^I_J\in E^*$.
Set $I_0=(123)$.
Consider the $(st4)\times I_0$ minors, where $1\leq s<t\leq 3$.
Using the Laplace expansion, we may write them as
\begin{equation}\label{delta}
\begin{pmatrix}
\Delta^{I_0\backslash 1}_{I_0\backslash 1} &\Delta^{I_0\backslash 1}_{I_0\backslash 2} & \Delta^{I_0\backslash 1}_{I_0\backslash 3}
\\
\Delta^{I_0\backslash 2}_{I_0\backslash 1} &\Delta^{I_0\backslash 2}_{I_0\backslash 2} & \Delta^{I_0\backslash 2}_{I_0\backslash 3}
\\
\Delta^{I_0\backslash 3}_{I_0\backslash 1} & \Delta^{I_0\backslash 3}_{I_0\backslash 2} & \Delta^{I_0\backslash 3}_{I_0\backslash 3}
\end{pmatrix}
\begin{pmatrix}
y^{4}_1 \\ y^{4}_2 \\ y^{4}_3
\end{pmatrix}
=
S \begin{pmatrix}
L^{234}_{I_0}\\
L^{134}_{I_0}\\
L^{124}_{I_0}
\end{pmatrix}
\end{equation}
Choosing signs properly, the matrix on the left is just the cofactor matrix of the
$(123)\times(123)$ submatrix, so its inverse is the transpose
of the original submatrix divided
by the determinant (which is nonzero by hypothesis).
Thus we may write
$$
\begin{pmatrix}
y^{4}_1 \\ y^{4}_2 \\ y^{4}_3
\end{pmatrix}
=
\frac{S}{\Delta^{I_0}_{I_0}}\begin{pmatrix}
y^1_1 & y^2_1 & y^3_1\\
y^1_2 & y^2_2 & y^3_2 \\
y^1_3 & y^2_3 & y^3_3\end{pmatrix}\begin{pmatrix}
L^{234}_{I_0}\\
L^{134}_{I_0}\\
L^{124}_{I_0}
\end{pmatrix}.
$$
In particular, each $y^{4}_s$, $1\leq s\leq 3$, is a rational function
of the $L^{(I_0\backslash t),4}_{I_0}$ and the $y^u_v$, $1\leq u,v\leq 3$:
$$
(y^{4}_1, y^4_2, y^{4}_3)=\sum_{t=1}^3 \frac{ L^{(I_0\backslash t),4}_{I_0} }{L^{I_0}_{I_0} }
(y^{t}_1, y^{t}_2, y^{t}_3).
$$
Note that the coefficients $\frac{ L^{(I_0\backslash t),4}_{I_0} }{L^{I_0}_{I_0} }$
are degree zero rational functions in $L^{(I_0\backslash t),4}_{I_0}$ and $y^u_v$, $1\leq u,v\leq 3$.
The same is true for all $(y^{\ell}_1, y^{\ell}_2, y^{\ell}_3)$ for $\ell\geq 4$.
Similarly, working with the first $3$ rows we get
$(y^{ 1}_\ell, y^{ 2}_\ell, y^{3}_\ell)$ written in terms of
the $(y_{t}^1, y_{t}^2, y_{t}^3)$ with coefficients degree zero rational functions in the $y^s_t$.
Restricting to the Zariski open subset of $E$ where $L^{I_0}_{I_0} \neq 0$,
we may subtract rational multiples of the first three rows and
columns to normalize our space to the form \eqref{boxform}, analogous to \eqref{boxformturbo}:
\begin{equation}\label{boxform}
\begin{pmatrix} y^1_1 & y^1_2 & y^1_3 & 0 & \cdots & 0\\
y^2_1 & y^2_2 & y^2_3 & 0 & \cdots & 0\\
y^3_1 & y^3_2 & y^3_3 & 0 & \cdots & 0\\
0 & 0 & 0 & y^4_4 & \cdots & y^4_\mathbf{c}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & y^\mathbf{b}_4 & \cdots & y^\mathbf{b}_\mathbf{c}\end{pmatrix}.
\end{equation}
Since a Zariski open subset of $\mathbb P E$
is not contained in $\sigma_2(Seg(\mathbb P B\times \mathbb P C))$, at least one entry
in the lower right block must be nonzero. Say it is $y^{4}_{4}$.
On the Zariski open set $L^{I_0}_{I_0}\neq 0$,
for all $ 1\leq s<t\leq 3$, $1\leq u<v\leq 3$,
we have $y^4_4\Delta^{st}_{uv}=Q^{st}_{uv}S/L^{I_0}_{I_0}$, where $Q^{st}_{uv}$ is a quadratic polynomial (when $\Delta^{st}_{uv}\neq 0$) or zero (when $\Delta^{st}_{uv}=0$). Then either $y^4_4$ is a nonzero multiple of $S/L^{I_0}_{I_0}$, or all $\Delta^{st}_{uv}$'s are multiples of $S$, because $S$ is irreducible.
If all $\Delta^{st}_{uv}$'s are multiples of $S$,
at least one must be nonzero, say $\Delta^{12}_{12}\neq 0$. Then by a change of bases we set $y^3_1,y^3_2,y^1_3,y^2_3$ to zero. At this point,
for all $1\leq \alpha,\beta\leq 2$, $\Delta^{\alpha 3}_{\beta 3}$ becomes $y^\alpha_\beta y^3_3$. By hypothesis $\Delta^{\alpha 3}_{\beta 3}$ is a multiple of the irreducible quadratic polynomial $S$, so $y^\alpha_\beta y^3_3=0$. Therefore either all $y^\alpha_\beta$'s are zero, which contradicts $\Delta^{12}_{12}\neq 0$, or $y^3_3=0$, in which case all entries in the third row and the third column are zero, contradicting
our assumption that the first $3\times 3$ minor is nonzero.
If there exists $\Delta^{st}_{uv}\neq0$ that is not a multiple of $S$, change bases such that it is $\Delta^{12}_{12}$. Note that $y^4_4=\Delta^{1234}_{1234}/\Delta^{123}_{123}=(\Delta^{1234}_{1234}/S)/L^{I_0}_{I_0}$, where $\Delta^{1234}_{1234}/S$ is a quadratic polynomial. By hypothesis $S$ divides $\Delta^{124}_{124}=\Delta^{12}_{12}y^4_4$. Since $S$ is irreducible and $\Delta^{12}_{12}$ is not a multiple of $S$, $\Delta^{1234}_{1234}/S$ must be a multiple of $S$. Therefore $\Delta^{1234}_{1234}$ is a multiple of $S^2$. This is true for all size $4$ minors, so we may apply Lemma \ref{turbolinelem}. By the proof of Lemma \ref{turbolinelem}, all entries of $E$ can be set to zero except those in the upper left $5\times 5$ block, so $E\subset\mathbb C^5\otimes\mathbb C^5$.
\end{proof}
\begin{remark}
The normalization in the case $\operatorname{deg}(S)=2$ is not possible in general without the restriction
to the open subset where $L^{I_0}_{I_0}\neq 0$. Consider
$
T(A^*)
$
such that the upper $3\times 3$ block
is
$$\begin{pmatrix} x_1& x_2&x_3\\ -x_2 & x_1 & x_4\\
-x_3& -x_4& x_1\end{pmatrix}.
$$
Then the possible entries in the first three
columns of the fourth row are not limited to the span of
the first three rows. The element
$(x_4,-x_3,x_2)$ is also possible.
\end{remark}
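This can be verified symbolically: appending the row $(x_4,-x_3,x_2)$ keeps every size $3$ minor divisible by the irreducible quadric $S=x_1^2+x_2^2+x_3^2+x_4^2$. A sketch (ours; assumes sympy):

```python
import itertools
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
# The 3 x 3 block from the remark, with the extra row (x4, -x3, x2) appended.
M = sp.Matrix([[x1, x2, x3],
               [-x2, x1, x4],
               [-x3, -x4, x1],
               [x4, -x3, x2]])
S = x1**2 + x2**2 + x3**2 + x4**2
# Every 3 x 3 minor should be divisible by S (polynomial remainder zero).
remainders = [sp.div(M[list(rows), :].det(), S, x1, x2, x3, x4)[1]
              for rows in itertools.combinations(range(4), 3)]
```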
\subsection{Proof of Theorem \ref{GRtwo}}\label{GRtwopf}
We recall the statement:
\begin{customthm} {\ref{GRtwo}} For $\mathbf{a},\mathbf{b},\mathbf{c}\geq3$,
the variety $\mathcal{GR}_{2}(A{\mathord{ \otimes } } B{\mathord{ \otimes } } C)$ is the variety of tensors $T$ such that $T(A^*)$, $T(B^*)$,
or $T(C^*)$ has bounded rank 2.
There are exactly two concise tensors in $\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3$ with $GR(T)=2$:
\begin{enumerate}
\item The unique up to scale skew-symmetric tensor $T=\sum_{\sigma\in \FS_3}\tsgn(\sigma) a_{\sigma(1)}{\mathord{ \otimes } } b_{\sigma(2)}{\mathord{ \otimes } } c_{\sigma(3)}\in \La 3\mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$
and
\item $T_{utriv,3}:=a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1 + a_1{\mathord{ \otimes } } b_2{\mathord{ \otimes } } c_2+ a_1{\mathord{ \otimes } } b_3{\mathord{ \otimes } } c_3
+a_2{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_2+a_3{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_3\in S^2\mathbb C^3{\mathord{ \otimes } } \mathbb C^3\subset \mathbb C^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$.
\end{enumerate}
There is a unique concise tensor $T\in \mathbb C^m{\mathord{ \otimes } } \mathbb C^m{\mathord{ \otimes } } \mathbb C^m$
satisfying $GR(T)=2$ when $m>3$,
namely
$$
T_{utriv,m}:=
a_1{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_1+ \sum_{\rho=2}^m [a_1{\mathord{ \otimes } } b_\rho{\mathord{ \otimes } } c_\rho + a_\rho{\mathord{ \otimes } } b_1{\mathord{ \otimes } } c_\rho].
$$
This tensor satisfies $\underline{\mathbf{R}}(T_{utriv,m})=m$ and $\bold R(T_{utriv,m})=2m-1$.
\end{customthm}
\begin{proof} For simplicity, assume $\mathbf{a}\leq \mathbf{b}\leq \mathbf{c}$.
A tensor $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ has geometric rank 2 if and only if
$\mathbb{P}T(A^*)\not\subset Seg(\mathbb P B\times \mathbb P C)$,
and either $\mathbb{P}T(A^*)\cap Seg(\mathbb P B\times \mathbb P C)$ has dimension $\mathbf{a}-2$ or $\mathbb{P}T(A^*)\subset\sigma_2( Seg(\mathbb P B\times \mathbb P C))$.
For the case $\mathbb{P}T(A^*)\subset\sigma_2( Seg(\mathbb P B\times \mathbb P C))$, $T(A^*)$ is of bounded rank $2$. By the classification of spaces of bounded rank $2$, up to equivalence $T(A^*)$ must be in one of the following forms:
\begin{center}
$\begin{pmatrix}
* & \cdots & * \\
* & \cdots & * \\
0 & \cdots & 0 \\
\vdots & &\vdots\\
0 & \cdots & 0
\end{pmatrix}$,
$\begin{pmatrix}
* & * &\cdots & * \\
* & 0 &\cdots & 0 \\
\vdots &\vdots & &\vdots\\
* & 0 &\cdots & 0
\end{pmatrix}$, or
$\begin{pmatrix}
0&x&y& 0 &\cdots & 0\\
-x&0&z& 0 &\cdots & 0\\
-y&-z&0& 0 &\cdots & 0\\
0 & 0 & 0 & 0 &\cdots & 0\\
\vdots & \vdots & \vdots & \vdots & &\vdots\\
0 & 0 & 0 & 0 &\cdots & 0
\end{pmatrix}$.
\end{center}
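As a quick numerical sanity check (ours, not part of the classification), the third form is indeed of bounded rank $2$: every $3\times3$ skew-symmetric matrix has vanishing determinant, for all values of $x,y,z$.

```python
# Sketch: the determinant of the 3x3 skew-symmetric block vanishes
# identically, so its rank is at most 2 for every choice of x, y, z.

import random

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

for _ in range(100):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    S = [[0, x, y], [-x, 0, z], [-y, -z, 0]]
    assert abs(det3(S)) < 1e-12  # zero up to floating-point roundoff
```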
When $T$ is concise, it must be of the second form, or of the third
with $\mathbf{a}=\mathbf{b}=\mathbf{c}=m=3$. If it is of the third form, $T$ is the unique up to scale skew-symmetric tensor in $ \mathbb{C}^3{\mathord{ \otimes } } \mathbb C^3{\mathord{ \otimes } } \mathbb C^3$. If it is of the second form and $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$, the entries in the first column must be linearly independent, as must the entries in the first row. Thus we may choose a basis of $A$ such that $T=a_1\otimes b_1\otimes c_1 + \sum^m_{i>1} y_i\otimes b_i\otimes c_1 + \sum^m_{i>1}z_i\otimes b_1 \otimes c_i$, where the $y_i$ and $z_i$ are linear combinations of $a_2,\cdots,a_m$. Then by changes of basis in $b_2,\cdots,b_m$ and $c_2,\cdots,c_m$ respectively, we obtain $T_{utriv,m}$.
For the case $\operatorname{dim}(\mathbb{P}T(A^*)\cap Seg(\mathbb{P} B\times \mathbb{P} C))=\mathbf{a}-2$, by Lemma \ref{turbolinelem}, if this intersection is an irreducible quadric, we are
reduced to the case
$\mathbb{P}T(A^*)\subset\sigma_2( Seg(\mathbb{P} B\times \mathbb{P} C))$.
Otherwise, all $2\times2$ minors of $T(A^*)$ have a common linear factor.
Assume
the common factor is $x_1$. Then $\mathbb{P}T(A^*)\cap Seg(\mathbb{P} B\times \mathbb{P} C)\supset\{x_1=0\}$. Hence $\mathbb{P}T(\langle\alpha_2,\cdots,\alpha_\mathbf{a}\rangle)\subset Seg(\mathbb{P} B\times \mathbb{P} C)$, i.e., $T(\langle\alpha_2,\cdots,\alpha_\mathbf{a}\rangle)$ is of bounded rank one. By a change of bases in $B,C$, and possibly exchanging $B$ and $C$, all entries outside the first row of $T(\langle\alpha_2,\cdots,\alpha_\mathbf{a}\rangle)$ become zero. Then all entries of $T(C^*)$ outside the first column and the first row are zero, so $T(C^*)$ is of bounded rank $2$.
When $T$ is concise and $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$, the change of bases as in the case when $T(A^*)$ is of bounded rank $2$
shows that up to a reordering of $A$, $B$ and $C$ we obtain $T_{utriv,m}$.
\end{proof}
\subsection{Proof of Theorem \ref{GRthree}}\label{GRthreepf}
We recall the statement:
\begin{customthm}{\ref{GRthree} } Let $T\in A{\mathord{ \otimes } } B{\mathord{ \otimes } } C$ be concise and assume $\mathbf{c}\geq\mathbf{b}\geq \mathbf{a}>4$.
If $GR(T)\leq 3$, then
$\bold R(T)\geq \mathbf{b}+ \lceil {\frac {\mathbf{a}-1}2}\rceil -2$.
If moreover $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$ and $T$ is
$1_*$-generic, then $\bold R(T)\geq 2m-3$.
\end{customthm}
\begin{proof}
In order for $GR(T)=3$, either $\mathbb{P} T(A^*)\subset \sigma_3(Seg(\mathbb{P} B\times \mathbb{P} C))$,
$\mathbb{P} T(A^*)\cap \sigma_2(Seg(\mathbb{P} B\times \mathbb{P} C))$ has dimension $\mathbf{a}-2$,
or $\mathbb{P} T(A^*)\cap Seg(\mathbb{P} B\times \mathbb{P} C)$ has dimension $\mathbf{a}-3$.
Case $\mathbb{P} T(A^*)\subset \sigma_3(Seg(\mathbb{P} B\times \mathbb{P} C))$: Since $\mathbf{a}>3$,
it must be a compression space. We conclude by Proposition \ref{comprex}.
\medskip
Case
$\mathbb{P} T(A^*)\cap \sigma_2(Seg(\mathbb{P} B\times \mathbb{P} C))$ has dimension $\mathbf{a}-2$:
By Lemma \ref{linelem}, there exists $a\in A$
such that $\mathbb{P} T( a^\perp)\subset \sigma_2(Seg(\mathbb{P} B\times \mathbb{P} C))$.
Write $T(A^*)=x_1Z+U$, where $Z$ is a matrix of scalars
and $U=U(x_2, \hdots , x_\mathbf{a})$ is a matrix of linear forms of bounded rank two.
As discussed in \S\ref{detspaces}, there are two possible normal forms for $U$ up to symmetry.
If $U$ is zero outside of the first two rows,
add to the first two rows an unknown combination of the last $\mathbf{b}-2$
rows (each of which is nonzero by conciseness), so that the resulting tensor $T'$ satisfies
$\bold R(T)\geq \mathbf{b}-2+\bold R(T')$. Now the last $\mathbf{b}-2$ rows only contained
multiples of $x_1$ so $T'$ restricted to $a_1^\perp$ is $\langle a_2, \hdots , a_\mathbf{a}\rangle$-concise
and thus of rank at least $\mathbf{a}-1$, so $\bold R(T)\geq \mathbf{a}+\mathbf{b}-3$.
Now say $U$ is zero outside its first row and column.
Subcase: $\mathbf{a}=\mathbf{b}=\mathbf{c}$ and $T$ is $1_*$-generic. Then either $T$ is $1_A$-generic, or the first
row or column of $U$ consists of linearly independent entries.
If $T$ is $1_A$-generic, we may change bases so that $Z$ is of full rank.
Consider $T(B^*)$. Its first row consists
of linearly independent entries. Write $\mathbf{a}=\mathbf{b}=\mathbf{c}=:m$ and apply the substitution method to delete the last $m-1$ rows (each
of which is nonzero by conciseness). Call the resulting tensor
$T'$, so $\bold R(T)\geq \bold R(T')+m-1$.
Let $T''=T'|_{A^*\otimes\mathrm{span}\{\beta_2,\cdots,\beta_m\} \otimes\mathrm{span}\{\gamma_2,\cdots,\gamma_m\} }$. Then $T''(A^*)$ equals the matrix obtained by removing the first column and the first row from $x_1Z$, so $\bold R(T'')\geq \bold R(x_1Z)-2=m-2$. Thus $\bold R(T)\geq (m-1)+m-2$ and we conclude.
If the first row of $U$ consists of linearly independent entries, then the
same argument, using $T(A^*)$, gives the bound.
Subcase: $T$ is $1$-degenerate or $\mathbf{a},\mathbf{b},\mathbf{c}$ are not all equal.
By $A$-conciseness, either the first row or the first column
must contain at least $\lceil\frac{\mathbf{a}-1 }2\rceil$ linearly independent entries in $\mathrm{span}\{x_2,\cdots,x_{\mathbf{a}}\}$. Say it is the
first row.
Then apply the substitution method, adding an unknown combination
of the last $\mathbf{b}-1$ rows to the first and then deleting the last $\mathbf{b}-1$ rows. Note that all entries in the first
row except the $(1,1)$ entry are only
altered by multiples of $x_1$, so there are at least $\lceil\frac{\mathbf{a}-1 }2\rceil -1$ linearly independent entries in the resulting matrix.
We obtain
$\bold R(T)\geq \mathbf{b}-1+ \lceil\frac{\mathbf{a}-1 }2\rceil -1$.
\medskip
Case
$\operatorname{dim}(\mathbb{P} T(A^*)\cap Seg(\mathbb{P} B\times \mathbb{P} C))=\mathbf{a}-3$:
We split this into three sub-cases based on the dimension
of the span of the intersection:
\begin{enumerate}
\item $\operatorname{dim} \langle\mathbb{P} T(A^*)\cap Seg(\mathbb{P} B\times \mathbb{P} C)\rangle =\mathbf{a}-3$
\item $\operatorname{dim} \langle\mathbb{P} T(A^*)\cap Seg(\mathbb{P} B\times \mathbb{P} C)\rangle =\mathbf{a}-2$
\item $\operatorname{dim} \langle\mathbb{P} T(A^*)\cap Seg(\mathbb{P} B\times \mathbb{P} C)\rangle =\mathbf{a}-1$
\end{enumerate}
Sub-case (1): the intersection must be a linear space.
We may choose bases such that
$$
T(A^*)=
\begin{pmatrix}
0 & 0 & x_3& \cdots & x_\mathbf{a}\\
0& 0& 0 & \cdots & 0\\
& & \ddots & & \\
& & & \ddots & \\
& & & & 0\end{pmatrix} + x_1Z_1+ x_2Z_2
$$
where $Z_1,Z_2$ are $\mathbf{b}\times \mathbf{c}$ scalar matrices.
Add to the first row an unknown combination of the last $\mathbf{b}-1$ rows
(each of which is nonzero by conciseness) and delete those rows
to obtain a tensor of rank at least $\mathbf{a}-2$, giving $\bold R(T)\geq \mathbf{a}+\mathbf{b}-3$.
Sub-case (2): By Lemma \ref{turbolinelem} the intersection must
contain a $\mathbb{P}^{\mathbf{a}-2}$, and one argues
as in sub-case (1), except that there is just $x_1Z_1$ and $x_2$ also appears in the first row.
\medskip
Sub-case (3):
$T(A^*)$ must have a basis of elements of rank one.
The only way $T$ can be concise is if $\mathbf{a}=\mathbf{b}=\mathbf{c}=m$, the $m$ elements
of the $B$ factor form a basis, and similarly for the $C$
factor. Changing bases, we
have $T=\sum_{j=1}^ma_j{\mathord{ \otimes } } b_j{\mathord{ \otimes } } c_j$, which intersects
the Segre only in points, so this case cannot occur.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}\label{sec:intro}
Recent progress in the control and measurement of small-scale systems
provides the possibility of examining the extension of thermodynamics \cite{Ritort,Seifert,Campisi,Brandao}
and the foundations of statistical mechanics \cite{Trotzky,Gemmer} in nanomaterials.
In particular, elucidating how purely quantum mechanical phenomena
such as quantum entanglement and coherence are manifested in thermodynamics
has become a topic of growing interest over the past two decades. \cite{Nori0, Nori, Nori1,Brunner,Chotorlishvili,Uzdin,Lostaglio,Goold}
Investigations of these types of phenomena may provide new insights into our world
and aid in the development of quantum heat machines operating beyond classical bounds.
Such developments could lead to more efficient methods of energy usage.
Quantum heat transport problems involving quantum heat machines
have been studied with approaches developed through application of open quantum dynamics theory.
Quantum master equation (QME) approaches are frequently used to investigate the dynamics of quantum heat machines. \cite{Kosloff14,Gelbwaser15}
Although the QME approach is consistent with the laws of thermodynamics, \cite{Alicki,Kosloff13,Breuer}
its applicability is limited to cases in which
the system-bath interaction is treated as a second-order perturbation
and the Markov approximation is employed.
Thus, these investigations have been carried out only in weak coupling regimes.
Recent theoretical and experimental works \cite{Huelga,Engel} have demonstrated the importance of the interplay between the quantum nature of systems and environmental noise.
For example, it has been shown that the optimal conditions for excitation energy transfer in light-harvesting complexes
are realized in the non-perturbative, non-Markovian regime, in which a description beyond perturbative approaches
is essential to properly understand the quantum dynamics displayed by the system.
To date, the approaches used to study this regime include
the QME employing a renormalized system-plus-bath Hamiltonian derived with the polaron transformation \cite{Gelbwaser}
or the reaction-coordinate mapping, \cite{Strasberg,Newman} the functional integral approach, \cite{Carrega}
the non-equilibrium Green's function method, \cite{Esposito15PRL,Esposito15PRB,Nitzan} and the stochastic Liouville-von Neumann equation approach.\cite{Schmidt}
In most cases, however, such attempts are limited to the nearly Markovian case, slow driving cases, or the investigation of the short-time behavior.
In the present study, we employ the hierarchal equations of motion (HEOM) approach \cite{Tanimura88,Ishizaki05,Tanimura06,Tanimura14, Tanimura15, Kato15}
to investigate heat transport and quantum heat engine problems.
This approach allows us to treat systems subject to external driving fields in a numerically rigorous manner under non-Markovian and
non-perturbative system-bath coupling conditions. We must choose the definition of the heat current carefully, however, to satisfy various thermodynamic requirements,
for example, to obtain the correct thermal equilibrium limit. \cite{Esposito15PRB}
While several researchers have studied the role of the heat current between subsystems, \cite{Tannor,Segal,Castro}
introduced by partitioning a many-body system such as a chain model,
here we consider the heat current between the system and the baths.
Thus we employ two definitions of the heat current, one in terms of the system energy (system heat current) and the other in terms of the bath energy (bath heat current). Although both definitions are frequently used in the literature for a variety of systems, including chain models, we carefully examine their validity for a non-equilibrium spin-boson model and a three-level heat engine model in the case of strong system-bath coupling,
because the existence of non-commuting couplings to different baths through the system
contributes to the heat current even in the steady state.
This effect has not been investigated in previous studies.
The organization of this paper is as follows.
In Sec. \ref{sec:formula}, we introduce the two definitions of the heat current that we investigate, the system heat current (SHC), and the bath heat current (BHC).
In Sec. \ref{sec:thermodynamics}, we present the first and the second laws of thermodynamics as obtained through consideration of the BHC.
In Sec. \ref{sec:reduced}, we analytically derive reduced expressions for the SHC and BHC.
In Sec. \ref{sec:heom}, we explain the HEOM approach and demonstrate a method employing it to calculate the SHC and BHC numerically in a rigorous manner.
In Sec. \ref{sec:numerics}, we apply our formulation to a non-equilibrium spin-boson model and a three-level heat engine model.
Through numerical investigation of the HEOM, we investigate the cycle behavior of the quantum heat engine under a periodic driving field.
It is shown that the efficiency of the heat engine decreases as the system-bath coupling increases.
Section \ref{sec:conclusion} is devoted to concluding remarks.
\section{System Heat Current and Bath Heat Current}\label{sec:formula}
We consider a system coupled to multiple heat baths at different temperatures.
With $K$ heat baths, the total Hamiltonian is written
\begin{align}
\hat{H}(t)
= \hat{H}_\mathrm{S}(t)
+ \sum_{k=1}^K \left(
\hat{H}_\mathrm{I}^k + \hat{H}_\mathrm{B}^k \right),
\label{eq:H_total}
\end{align}
where $\hat{H}_\mathrm{S}(t)$ is the system Hamiltonian,
whose explicit time dependence originates from the coupling with the external driving field.
The Hamiltonian of the $k$th bath and the Hamiltonian representing the interaction between the system and the $k$th bath are given by
$\hat{H}_\mathrm{B}^k = \sum_j \hbar \omega_{k_j} \hat{b}_{k_j}^\dagger \hat{b}_{k_j}$
and $\hat{H}_\mathrm{I}^k = \hat{V}_k \sum_{j} g_{k_j}( \hat{b}_{k_j}^\dagger + \hat{b}_{k_j})$, respectively,
where $\hat{V}_k$ is the system operator that describes the coupling to the $k$th bath.
Here, $\hat{b}_{k_j}, \hat{b}_{k_j}^\dagger, \omega_{k_j}$ and $g_{k_j}$ are
the annihilation operator, creation operator, frequency, and coupling strength for the $j$th mode of the $k$th bath, respectively.
Due to the bosonic nature of the bath, all bath effects on the system are determined by the noise correlation function,
$C_k(t) \equiv \langle \hat{X}_k(t) \hat{X}_k(0) \rangle_\mathrm{B}$,
where $\hat{X}_k \equiv \sum_j g_{k_j}( \hat{b}_{k_j}^\dagger + \hat{b}_{k_j} )$ is the collective bath coordinate of the $k$th bath
and $\langle \ldots \rangle_\mathrm{B}$ represents the average taken with respect to the canonical density operator of the baths.
The noise correlation function is expressed in terms of the bath spectral density, $J_k(\omega)$, as
\begin{align}
C_k(t)
= \int_0^\infty d\omega \, \frac{ J_k(\omega) }{ \pi }
\left[ \coth\left( \frac{\hbar\omega}{2k_BT_k} \right) \cos(\omega t)
- i \sin(\omega t) \right],
\label{eq:noise}
\end{align}
where $J_k(\omega) \equiv \pi \sum_j g_{k_j}^2 \delta(\omega - \omega_{k_j})$,
and $T_k$ is the temperature of the $k$th bath.
For the system described above, we derive two rigorous expressions for the heat current,
which are convenient for carrying out simulations of reduced system dynamics.
One of these expressions is derived through consideration of conservation of the system energy,
and for this reason, we call it the ``system heat current'' (SHC).
This current is defined as
\begin{align}
\frac{d}{dt}\langle \hat{H}_\mathrm{S}(t) \rangle - \dot{W}(t)
= \sum_{k=1}^K \dot{Q}_\mathrm{S}^k(t),
\end{align}
where $\dot{W}(t) \equiv \langle (\partial \hat{H}_\mathrm{S}(t)/\partial t) \rangle$ is the power,
i.e., the time derivative of the work,
and
\begin{align}
\dot{Q}_\mathrm{S}^k(t) = \frac{i}{\hbar} \langle [ \hat{H}_\mathrm{I}^k(t), {\hat{H}_\mathrm{S}}(t) ] \rangle
\label{eq:SHC_def}
\end{align}
is the change in the system energy due to the coupling with the $k$th bath.
This is identical to the definition used in the QME approach,
in which the system Hamiltonian is identified as the internal energy.
The second expression for the heat current is derived through consideration of the rate of decrease of the bath energy,
$\dot{Q}_\mathrm{B}^k(t) \equiv - d\langle \hat{H}_\mathrm{B}^k(t) \rangle/dt$.
We call this current the ``bath heat current'' (BHC).
Using the Heisenberg equations, the BHC can be rewritten as
\begin{align}
\dot{Q}_\mathrm{B}^k(t)
= \dot{Q}_\mathrm{S}^k(t)
+ \frac{d}{dt}\langle \hat{H}_\mathrm{I}^k(t) \rangle
+ \sum_{k' \ne k} \dot{q}_{k,k'}(t),
\label{eq:bath_heat}
\end{align}
where
\begin{align}
\dot{q}_{k,k'}(t)
= \frac{i}{\hbar} \langle [ \hat{H}_\mathrm{I}^k(t), \hat{H}_\mathrm{I}^{k'}(t) ] \rangle.
\label{eq:current_I_def}
\end{align}
The second term on the right hand side of Eq.(\ref{eq:bath_heat}) vanishes under steady-state conditions and in the limit of a weak system-bath coupling.
The third term contributes to the heat current even under steady-state conditions,
while it vanishes in the weak coupling limit.
The third term, which is of purely quantum mechanical origin, as can be seen from the definition,
is the main difference between the SHC and the BHC.
This term plays a significant role in the case that
the $k$th and $k'$th system-bath interactions are non-commuting and each system-bath coupling is strong.
We also note that because this third term is of fourth or higher order in the system-bath interaction,
it does not appear in the second-order QME approach.
Therefore, only non-perturbative approaches, \cite{Gelbwaser,Strasberg,Newman,Carrega,Esposito15PRL,Esposito15PRB,Schmidt} including higher-order QME approaches, \cite{Wu} can reveal the features that we discuss in the present study.
Hereafter, we refer to this term as the ``correlation among the system-bath interactions'' (CASBI).
For a mesoscopic heat-transport system, such as a nanotube or nanowire, each system component is coupled to a different bath
(i.e., each $\hat{V}_k$ acts on a different Hilbert space),
and for this reason, the CASBI contributions vanish.
By contrast, for a microscopic system, such as a single-molecule junction or a superconducting qubit, the CASBI contribution plays a significant role. Using our two definitions of the heat current, we are able to elucidate the important dynamic properties of microscopic systems and clearly demonstrate how their quantum mechanical nature is manifested.
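The role of non-commuting coupling operators can be made concrete with a toy example (our own, not from the text): for a spin coupled to two baths through $\hat V_1=\sigma_z$ and $\hat V_2=\sigma_x$, the commutator that generates $\dot q_{k,k'}$ is nonzero, whereas two baths attached through the same operator give no CASBI contribution.

```python
# Sketch (assumed example): the CASBI term dq_{k,k'} is generated by the
# commutator [V_k, V_{k'}] of the coupling operators, Eq. (current_I_def).

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

sigma_z = [[1, 0], [0, -1]]
sigma_x = [[0, 1], [1, 0]]

# Non-commuting couplings: [sigma_z, sigma_x] = 2i sigma_y != 0,
# so the CASBI survives and distinguishes the BHC from the SHC.
assert commutator(sigma_z, sigma_x) == [[0, 2], [-2, 0]]

# Identical couplings (e.g. both baths attached through sigma_z):
# the generator of dq_{k,k'} vanishes identically.
assert commutator(sigma_z, sigma_z) == [[0, 0], [0, 0]]
```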
\section{The First and Second Laws of Thermodynamics}\label{sec:thermodynamics}
We can obtain the first law of thermodynamics by summing Eq.(\ref{eq:bath_heat}) over all $k$:
\begin{align}
\sum_{k=1}^K \dot{Q}_\mathrm{B}^k(t)
= \frac{d}{dt} \langle \hat{H}_\mathrm{S}(t) + \sum_{k=1}^{K} \hat{H}_\mathrm{I}^k(t) \rangle
- \dot{W}(t).
\label{eq:first_law}
\end{align}
The quantity $\hat{H}_\mathrm{S}(t) + \sum_{k=1}^K \hat{H}_\mathrm{I}^k (t)$ is identified as the internal energy,
because the contributions of $\dot{q}_{k,k'}$ cancel out.
The derivation of the second law of thermodynamics is presented in Appendix \ref{sec:second_law}.
In a steady state without external driving forces, the second law is expressed as
\begin{align}
- \sum_{k=1}^K \frac{\dot{Q}_\mathrm{B}^k}{T_k} \ge 0,
\end{align}
while with a periodic external driving force, it is given by
\begin{align}
- \sum_{k=1}^K \frac{Q_\mathrm{B}^{\mathrm{cyc},k}}{T_k} \ge 0,
\end{align}
where $Q_\mathrm{B}^{\mathrm{cyc},k} = \oint_\mathrm{cyc} dt\, \dot{Q}_\mathrm{B}^k(t)$ is the heat absorbed or released per cycle.
The second law with a periodic external driving force can be rewritten in terms of the SHC as
\begin{align}
- \sum_{k=1}^K \frac{Q_\mathrm{S}^{\mathrm{cyc},k} }{T_k}
\ge \sum_{k,k'=1}^K \frac{ q_{k,k'}^\mathrm{cyc} }{ T_k }.
\label{eq:second_law2}
\end{align}
When the right-hand side (rhs) of Eq.(\ref{eq:second_law2}) is negative,
the left-hand side (lhs) can also take negative values.
However, this contradicts the Clausius statement of the second law,
i.e., that heat never flows spontaneously from a cold body to a hot body.
As we show in Sec. \ref{sec:numerics},
it is necessary to include the $\dot{q}_{k,k'}$ terms to have a thermodynamically valid description.
\section{Reduced Description of Heat Currents}\label{sec:reduced}
In order to calculate the heat current from the reduced system dynamics,
we must evaluate the expectation value of the collective bath coordinate.
To do so, we adapt a generating functional approach \cite{Schwinger} by adding the source term, $f_k(t)$,
for the $k$th interaction Hamiltonian as
\begin{align}
\hat{V}_k \hat{X}_k \to \hat{V}_{k,f}(t) \hat{X}_k \equiv ( \hat{V}_k + f_k(t) ) \hat{X}_k.
\end{align}
Here, in order to evaluate the expectation value, we add the source term to the ket (left) side of the density operator, which does not change the role of the system-bath interaction in the time-evolution operator.
The interaction representation of any operator $\hat A$ with respect to the non-interacting Hamiltonian, $\hat{H}_\mathrm{S}(t) + \sum_k \hat{H}_\mathrm{B}^k$, is expressed as $\tilde{A}(t)$. The total density operator with the source term is then denoted by $\tilde{\rho}_{\mathrm{tot},f}(t)$.
This source term enables us to express the action of the collective bath coordinate as a functional derivative:
\begin{align}
\tilde{X}_k(t) \tilde{\rho}_\mathrm{tot}(t)
= i \hbar \frac{\delta}{\delta f_k(t)} \left. \tilde{\rho}_{\mathrm{tot},f}(t) \right|_{f = 0}.
\end{align}
Then, for example, the SHC and the interaction Hamiltonian are expressed in terms of the operators in the system space as
\begin{align}
\dot{Q}_\mathrm{S}^k(t)
= \mathrm{Tr_S}\left \{ \left[ \tilde{H}_\mathrm{S}(t), \tilde{V}_k \right]
\frac{\delta}{\delta f_k(t)} \tilde{\rho}_f(t)|_{f=0} \right \}
\label{eq:SHC_functional}
\end{align}
and
\begin{align}
\left\langle \hat{H}_\mathrm{I}^k(t) \right\rangle
= i \hbar \, \mathrm{Tr_S} \left \{
\tilde{V}_k \frac{\delta}{\delta f_k(t)} \tilde{\rho}_f(t)|_{f=0} \right \},
\label{eq:Hint_functional}
\end{align}
where $\tilde{\rho}(t) \equiv \mathrm{Tr_B}\{ \tilde{\rho}_\mathrm{tot}(t)\}$
is the reduced density operator of the system obtained by tracing out all bath degrees of freedom.
This is expressed in the form of a second-order cumulant expansion,
which is exact in the present case, due to the bosonic nature of the bath:
\begin{align}
\tilde{\rho}_f(t)
= \mathcal{T}_+ \left\{ \mathcal{U}_{\mathrm{IF},f}(t) \right\} \hat{\rho}(0).
\end{align}
Here, $\mathcal{U}_\mathrm{IF}(t) = \prod_{k=1}^K \exp[ \int_0^t ds\, W_k(s) ]$
is the Feynman-Vernon influence functional in operator form,
and $\mathcal{T}_+\{ \ldots \}$ is the time-ordering operator,
where the operators in $\{ \ldots \}$ are arranged in chronological order.
Here, the initial state is taken to be the product state of the reduced system and the bath density operators,
$\hat{\rho}_\mathrm{tot}(0) = \hat{\rho}(0)\prod_{k=1}^K \hat{\rho}_\mathrm{B}^{k,\mathrm{eq}}$,
where $\hat{\rho}_\mathrm{B}^{k,\mathrm{eq}}$ is the canonical density operator for the $k$th bath.
Note that, by generalizing the influence functional, \cite{Tanimura14, Tanimura15}
we can extend the present result to the case of a mixed initial state of the reduced system and the bath.
The operators of the influence phase are defined by
\begin{align}
W_k(s)
= &\, \int_0^s du\, \tilde{\Phi}_k(s)
\left[ C_{k}^\mathrm{R}(s-u) \tilde{\Phi}_k(u) \right.
\notag \\
&\, \left. - C_{k}^\mathrm{I}(s-u) \tilde{\Psi}_k(u) \right],
\label{eq:W_IF}
\end{align}
where we have introduced the two superoperators $\hat{\Phi}_k \hat{\rho} = (i/\hbar) [\hat{V}_k, \hat{\rho} ]$
and $\hat{\Psi}_k \hat{\rho} = (1/\hbar) \{ \hat{V}_k, \hat{\rho} \}$,
and $C_k^\mathrm{R}(s)$ and $C_k^\mathrm{I}(s)$ are the real and imaginary parts of $C_k(s)$.
Thus, by applying the functional derivative with respect to $f_k(t)$, to the reduced density operator,
we obtain the following relation:
\begin{align}
i \hbar \frac{\delta}{\delta f_k(t)} \tilde{\rho}_f(t)|_{f=0}
= &\, - \mathcal{T}_+\left\{ \int_0^t ds
\left[ C_k^\mathrm{R}(t-s) \tilde{\Phi}_k(s) \right. \right.
\notag \\
&\, \left.\left. - C_k^\mathrm{I}(t-s) \tilde{\Psi}_k(s) \right]
\mathcal{U}_\mathrm{IF}(t) \right\} \hat{\rho}(0).
\label{eq:IF_equality}
\end{align}
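For readers implementing the formalism, the two superoperators $\hat{\Phi}_k$ and $\hat{\Psi}_k$ introduced above are straightforward to realize numerically. The sketch below (an assumed toy implementation with $\hbar=1$ and an arbitrary $2\times2$ example, not code from the paper) verifies that the commutator superoperator is traceless, as probability conservation requires, while the trace of the anticommutator gives twice the expectation value of $\hat V_k$.

```python
# Sketch: Phi rho = (i/hbar)[V, rho] and Psi rho = (1/hbar){V, rho},
# acting on a 2x2 density matrix, with hbar = 1 (an assumed unit choice).

HBAR = 1.0

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def Phi(V, rho):
    """Commutator superoperator: (i/hbar) [V, rho]."""
    VR, RV = matmul(V, rho), matmul(rho, V)
    n = len(V)
    return [[1j / HBAR * (VR[i][j] - RV[i][j]) for j in range(n)] for i in range(n)]

def Psi(V, rho):
    """Anticommutator superoperator: (1/hbar) {V, rho}."""
    VR, RV = matmul(V, rho), matmul(rho, V)
    n = len(V)
    return [[(VR[i][j] + RV[i][j]) / HBAR for j in range(n)] for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

V = [[0, 1], [1, 0]]               # coupling operator, here sigma_x
rho = [[0.75, 0.1], [0.1, 0.25]]   # a valid 2x2 density matrix

# The commutator part is traceless (probability conserving); the
# anticommutator part traces to twice the expectation value of V.
assert abs(trace(Phi(V, rho))) < 1e-12
assert abs(trace(Psi(V, rho)) - 2 * trace(matmul(V, rho))) < 1e-12
```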
From the generating functional, for the SHC, we obtain the expression
\begin{align}
\dot{Q}_\mathrm{S}^k(t)
= &\, \int_0^t ds \left( C_{k}^\mathrm{R}(t-s)
\frac{i}{\hbar} \langle [ \hat{A}_k(t), \hat{V}_k(s) ] \rangle \right.
\notag \\
&\, \left. - C_{k}^\mathrm{I}(t-s)
\frac{1}{\hbar} \langle \{ \hat{A}_k(t), \hat{V}_k(s) \} \rangle \right),
\label{eq:current_h_exp}
\end{align}
where $\hat{A}_k(t) \equiv (i/\hbar)[ \hat{H}_\mathrm{S}(t), \hat{V}_k(t) ]$.
In order to obtain an explicit expression for the BHC given in Eq.\eqref{eq:SHC_functional},
we need to evaluate the expectation value of the interaction energy.
From the generating functional, this is obtained as
\begin{align}
\left\langle \hat{H}_\mathrm{I}^k(t) \right\rangle
= &\, - \int_0^t ds \left( \bar{C}_k^\mathrm{R}(t-s)
\frac{i}{\hbar} \langle [ \hat{V}_k(t), \hat{V}_k(s) ] \rangle \right.
\notag \\
&\, \left. - C_k^\mathrm{I}(t-s)
\frac{1}{\hbar} \langle \{ \hat{V}_k(t), \hat{V}_k(s) \} \rangle \right).
\label{eq:Hint_exp}
\end{align}
In order to obtain this expression,
we have divided the real part of the noise correlation function
into a short-time correlated part, expressed by a delta-function, and the remaining part, as
$C_{k}^\mathrm{R}(s) = \bar{C}_{k}^\mathrm{R}(s) + 2\Delta_k \delta(s)$.
The imaginary part of the noise correlation function is similarly divided into a delta-function part and the remaining part,
but in this case, we incorporate this delta-function part into the system Hamiltonian by renormalizing the frequency.
Taking the time derivative of Eq.(\ref{eq:Hint_exp}), we obtain the following:
\begin{align}
\frac{d}{dt} \left\langle \hat{H}_\mathrm{I}^k(t) \right\rangle
- \left \langle \frac{d\hat{V}_k(t)}{dt} \hat{X}_k(t) \right \rangle
= &\, - \int_0^t ds \left( \dot{\bar{C}}_k^\mathrm{R}(t-s)
\frac{i}{\hbar} \langle [ \hat{V}_k(t), \hat{V}_k(s) ] \rangle \right.
\notag \\
&\, \left. - \dot{C}_k^\mathrm{I}(t-s)
\frac{1}{\hbar} \langle \{ \hat{V}_k(t), \hat{V}_k(s) \} \rangle \right)
\notag \\
&\, + \frac{i}{\hbar} \Delta_k
\left\langle \left[ \frac{d \hat{V}_k(t)}{dt}, \hat{V}_k(t) \right] \right\rangle
\notag \\
&\, + \frac{2}{\hbar} C_{k}^\mathrm{I}(0) \langle \hat{V}_k^2(t) \rangle.
\label{eq:dHint_exp}
\end{align}
The lhs of the above equation is identical to $\dot{Q}_\mathrm{B}^k(t)$,
because we have
\begin{align}
\left\langle \frac{ d\hat{V}_k(t) }{dt} \hat{X}_k(t) \right\rangle
= - \dot{Q}_\mathrm{S}^k(t) - \sum_{k' \ne k} \dot{q}_{k,k'}(t),
\label{eq:dVdt_equality1}
\end{align}
which is derived from the Heisenberg equation of motion for $\hat{V}_k$,
\begin{align}
\frac{d\hat{V}_k(t)}{dt}
= \hat{A}_k(t)
- \sum_{k' \ne k} \frac{i}{\hbar} [ \hat{V}_k(t), \hat{V}_{k'}(t) ] \hat{X}_{k'}(t).
\label{eq:V_Heisenberg}
\end{align}
Accordingly, using the relation
\begin{align}
\frac{i}{\hbar} \left\langle \left[ \frac{d \hat{V}_k(t)}{dt}, \hat{V}_k(t) \right] \right\rangle
= &\, \sum_{k' \ne k} \int_0^t ds \left( C_{k'}^\mathrm{R}(t-s)
\frac{i}{\hbar} \left \langle [ \hat{B}_{k,k'}(t), \hat{V}_{k'}(s) ] \right\rangle \right.
\notag \\
&\, \left. - C_{k'}^\mathrm{I}(t-s)
\frac{1}{\hbar} \langle \{ \hat{B}_{k,k'}(t), \hat{V}_{k'}(s) \} \rangle \right)
\notag \\
&\, + \frac{i}{\hbar} \langle [ \hat{A}_k(t), \hat{V}_k(t) ] \rangle,
\label{eq:dVdt_equality2}
\end{align}
where $\hat{B}_{k,k'} \equiv (i/\hbar)^2 [ [ \hat{V}_k, \hat{V}_{k'} ], \hat{V}_k ]$,
we obtain the following as the final expression for the BHC:
\begin{align}
\dot{Q}_\mathrm{B}^k(t)
= &\, - \int_0^t ds \left( \dot{\bar{C}}_k^\mathrm{R}(t-s)
\frac{i}{\hbar} \langle [ \hat{V}_k(t), \hat{V}_k(s) ] \rangle \right.
\notag \\
&\, \left. - \dot{C}_k^\mathrm{I}(t-s)
\frac{1}{\hbar} \langle \{ \hat{V}_k(t), \hat{V}_k(s) \} \rangle \right)
\notag \\
&\, + \frac{2}{\hbar} C_k^\mathrm{I}(0) \langle \hat{V}_k^2(t) \rangle
+ \frac{i}{\hbar} \Delta_k \langle [ \hat{A}_k(t), \hat{V}_k(t) ] \rangle
\notag \\
&\, + \Delta_k \sum_{k' \ne k} \int_0^t ds \left( C_{k'}^\mathrm{R}(t-s)
\frac{i}{\hbar} \langle [ \hat{B}_{k,k'}(t), \hat{V}_{k'}(s) ] \rangle \right.
\notag \\
&\, \left. - C_{k'}^\mathrm{I}(t-s)
\frac{1}{\hbar} \langle \{ \hat{B}_{k,k'}(t), \hat{V}_{k'}(s) \} \rangle \right).
\label{eq:dQ_final}
\end{align}
Now that we have obtained the explicit expressions for the SHC and BHC,
the remaining task is to evaluate these expressions in
a numerically rigorous manner.
This was carried out using the HEOM approach.
\section{Hierarchal Equations of Motion Approach}\label{sec:heom}
When the noise correlation function, Eq. \eqref{eq:noise},
is written as a linear combination of exponential functions and a delta function,
$C_k(t) = \sum_{j=0}^{J_k} ( c'_{k_j} + i c''_{k_j}) e^{-\gamma_{k_j}|t|} + 2\Delta_k \delta(t)$,
which is realized for the Drude, \cite{Tanimura88,Ishizaki05,Tanimura06,Tanimura14, Tanimura15, Kato15}
Lorentz, \cite{KramerFMO,Nori12}
and Brownian \cite{TanakaJPSJ09, YanBO12, Liu} cases (and combinations thereof \cite{TanimruaJCP12}),
we can obtain the reduced equations of motion as the HEOM.
The HEOM are expressed in terms of auxiliary density operators (ADOs), defined as
\begin{align}
\hat{\rho}_{\vec{n}_1, \ldots, \vec{n}_K}(t)
\equiv &\, \mathcal{T}_+\left\{
\exp\left[ - \frac{i}{\hbar} \int_0^t ds\, \mathcal{L}(s) \right] \right\}
\notag \\
&\, \times \mathcal{T}_+\left\{ \prod_{k=1}^K \prod_{j=0}^{J_k}
\left[ - \int_0^t ds\, e^{-\gamma_{k_j}(t-s)} \tilde{\Theta}_{k_j}(s) \right]^{n_{k_j}}
\mathcal{U}_\mathrm{IF}(t) \right\}
\notag \\
&\, \times \hat{\rho}(0).
\label{eq:ADO}
\end{align}
Here, we have $\hat{\Theta}_{k_j} \equiv c'_{k_j} \hat{\Phi}_k - c''_{k_j} \hat{\Psi}_k$
and $\mathcal{L}(t) \hat{\rho} = [ \hat{H}_\mathrm{S}(t), \hat{\rho} ]$.
Each ADO is specified by the index $\vec{n}_k = ( n_{k_0}, \ldots, n_{k_{J_k}})$ with $k = 1, \ldots, K$,
where each element takes a non-negative integer value.
The ADO for which all elements are zero, $\vec{n}_1 = \cdots = \vec{n}_K = \vec{0}$,
corresponds to the actual reduced density operator.
Taking the time derivative of Eq.(\ref{eq:ADO}),
the equations of motion for the ADOs are obtained as
\begin{align}
\frac{\partial}{\partial t} \hat{\rho}_{\vec{n}_1, \ldots, \vec{n}_K}(t)
= &\, - \left[ \frac{i}{\hbar} \mathcal{L}(t)
+ \sum_{k=1}^K \sum_{j=0}^{J_k} n_{k_j} \gamma_{k_j} \right]
\hat{\rho}_{\vec{n}_1, \ldots, \vec{n}_K}(t)
\notag \\
&\, - \sum_{k=1}^K \Delta_k \hat{\Phi}_k^2
\hat{\rho}_{\vec{n}_1, \ldots, \vec{n}_K}(t)
\notag \\
&\, - \sum_{k=1}^K \hat{\Phi}_k \sum_{j=0}^{J_k}
\hat{\rho}_{\vec{n}_1, \ldots, \vec{n}_k + \vec{e}_j, \ldots, \vec{n}_K}(t)
\notag \\
&\, - \sum_{k=1}^K \sum_{j=0}^{J_k} n_{k_j} \hat{\Theta}_{k_j}
\hat{\rho}_{\vec{n}_1, \ldots, \vec{n}_k - \vec{e}_j, \ldots, \vec{n}_K}(t),
\label{eq:HEOM}
\end{align}
where $\vec{e}_j$ is the unit vector along the $j$th direction.
The HEOM consist of an infinite number of equations,
but they can be truncated at finite depth by discarding all ADOs for which the total index
$\sum_{k}\sum_{j} n_{k_j}$ exceeds some appropriately large value $N$.
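As a bookkeeping illustration (our own sketch; the function name and parameters are hypothetical, not part of the HEOM literature), the truncated index set can be enumerated directly, which also makes explicit how the number of retained ADOs, $\binom{N+d}{d}$ for $d$ total exponential modes, sets the memory cost of the method.

```python
# Sketch: enumerate all ADO multi-indices (n_{k_j}) over the
# K x (J_k + 1) bath/Matsubara modes with total depth at most N.

import itertools
import math

def truncated_indices(num_modes, N):
    """All tuples n with len(n) == num_modes and sum(n) <= N."""
    return [n for n in itertools.product(range(N + 1), repeat=num_modes)
            if sum(n) <= N]

# Example: two baths (K = 2) with two exponentials each -> 4 modes, depth N = 3.
ados = truncated_indices(4, 3)

# The count equals binomial(N + d, d) for d modes.
assert len(ados) == math.comb(3 + 4, 4)

# The all-zero index is the physical reduced density operator.
assert (0, 0, 0, 0) in ados
```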
Employing the noise decomposition of the HEOM approach
for the noise correlation functions in Eqs. (\ref{eq:current_h_exp}) and (\ref{eq:dQ_final}),
and comparing the resulting expressions with the definition of the ADOs given in Eq.(\ref{eq:ADO}),
we can evaluate the SHC and BHC in terms of the ADOs as
\begin{align}
\dot{Q}_\mathrm{S}^k(t)
= &\, -\sum_{j=0}^{J_k} \mathrm{Tr}\{
\hat{A}_k(t) \hat{\rho}_{\vec{0},\ldots,\vec{e}_j,\ldots,\vec{0}}(t) \}
\notag \\
&\, + \Delta_k \frac{i}{\hbar}
\mathrm{Tr}\{ [ \hat{A}_k(t), \hat{V}_k ] \hat{\rho}(t) \}
\end{align}
and
\begin{align}
\dot{Q}_\mathrm{B}^k(t)
= &\, - \sum_{j=0}^{J_k} \gamma_{k_j} \mathrm{Tr}\{
\hat{V}_k \hat{\rho}_{\vec{0}, \ldots, \vec{e}_j, \ldots, \vec{0}}(t) \}
\notag \\
&\, + \frac{2}{\hbar} C_{k}^\mathrm{I}(0)
\mathrm{Tr}\{ \hat{V}_k^2 \hat{\rho}(t) \}
+ \frac{i}{\hbar} \Delta_k
\mathrm{Tr}\{ [ \hat{A}_k(t), \hat{V}_k ] \hat{\rho}(t) \}
\notag \\
&\, + \Delta_k \sum_{k' \ne k} \sum_{j=0}^{J_{k'}} \mathrm{Tr}\{
\hat{B}_{k,k'} \hat{\rho}_{\vec{0}, \ldots, \vec{e}_j, \ldots, \vec{0}}(t) \}
\notag \\
&\, + \Delta_k \sum_{k' \ne k } \frac{i}{\hbar} \Delta_{k'}
\mathrm{Tr}\{ [ \hat{B}_{k,k'}, \hat{V}_{k'} ] \hat{\rho}(t) \}.
\end{align}
It is important to note that the steady-state solution of the HEOM is an entangled state of the system and the baths;
for example, for a static system coupled to a single bath ($k=1$),
the steady-state solution of the HEOM takes the form
$\hat{\rho} \propto \mathrm{Tr_B}\{ \exp[-\beta(\hat{H}_\mathrm{S}+\hat{H}_\mathrm{I}^1+\hat{H}_\mathrm{B}^1)] \}$.
\cite{Tanimura14,Tanimura15}
\section{Numerical Illustration}\label{sec:numerics}
To demonstrate the role of the CASBI in the heat current,
we carried out numerical simulations for a non-equilibrium spin-boson model \cite{Segal05,Thoss,Ruokola,Saito,Wang}
and a three-level heat engine model \cite{Scovil,Geva,Correa,Xu} with the HEOM approach (Fig. \ref{fig:model}).
{Here, we focus on investigating the steady-state heat currents, which are computed by numerically integrating Eq.~(\ref{eq:HEOM}) with the Runge--Kutta method until convergence to the steady state is realized.}
We assume that the spectral density of each bath takes the Drude form,
$J_k(\omega) = \eta_k \gamma^2 \omega/( \omega^2 + \gamma^2 )$,
where $\eta_k$ is the system-bath coupling strength, and $\gamma$ is the cutoff frequency.
A Pad{\'e} spectral decomposition scheme is employed to obtain the expansion coefficients of the noise correlation functions. \cite{YanPade10A,YanPade10B,Hu}
{
The accuracy of numerical results is checked by increasing the values of $J_1, \ldots, J_K$ and $N$ until convergence is reached.}
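For reference, the Drude form above can be transcribed directly; a minimal sketch (the constants below are arbitrary, not the simulation parameters):

```python
def drude_spectral_density(omega, eta, gamma):
    """Drude spectral density J(w) = eta * gamma^2 * w / (w^2 + gamma^2);
    eta is the system-bath coupling strength and gamma the cutoff frequency."""
    return eta * gamma**2 * omega / (omega**2 + gamma**2)
```

Note that $J(\omega)$ peaks at $\omega = \gamma$ with value $\eta\gamma/2$ and decays as $1/\omega$ at high frequency.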
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig1_model.eps}
\caption{ Schematic depiction of (a) the non-equilibrium spin-boson model and (b) the three-level heat engine model.}
\label{fig:model}
\end{figure}
\subsection{Non-equilibrium spin-boson model}
The non-equilibrium spin-boson model studied here consists of a two-level system
coupled to two bosonic baths at different temperatures.
This model has been employed extensively as the simplest heat-transport model.
The system Hamiltonian is given by $\hat{H}_\mathrm{S} = ( \hbar \omega_0/2 ) \sigma_z$.
We consider the case in which the system is coupled to the first bath through $\hat{V}_1 = \sigma_x$
and to the second bath through $\sigma_x$ and $\sigma_z$ in the form
$\hat{V}_2 = ( s_x \sigma_x + s_z \sigma_z)/( s_x^2 + s_z^2 )^{1/2}$.
In order to investigate the difference between the SHC and BHC,
we consider the case $s_z \ne 0$,
because otherwise the CASBI term vanishes,
and thus the SHC coincides with the BHC.
(This is the case considered in most previous investigations.)
It should be noted that the CASBI has a purely quantum origin, as can be seen from its definition.
We chose $T_1 = 2.0 \hbar\omega_0/k_B, T_2 = 1.0\hbar\omega_0/k_B, \gamma = 2.0\omega_0$ and $s_z = 1$.
Figure \ref{fig:nesb_sx} depicts the role of the non-commuting components of the $\hat{V}_1$ and $\hat{V}_2$ interactions
in the SHC and BHC processes in the steady state, as functions of $s_x$.
Here, the system-bath coupling strengths are set to $\eta_1 = \eta_2 = 0.01\omega_0$.
Even when the system Hamiltonian commutes with the second interaction Hamiltonian in the case $s_x = 0$
(i.e., even when the system couples to the second bath in a non-dissipative manner with the interaction $s_z \sigma_z$
as $[ \hat{H}_\mathrm{S}, \hat{H}_\mathrm{I}^2]=0$),
a non-zero heat current arises due to the CASBI contribution, $\dot{q}_{1,2}$.
This is because the Hamiltonian of the system plus the system-bath interactions does not commute with the second interaction
(i.e., $[ \hat{H}_\mathrm{S} + \sum_{k=1,2} \hat{H}_\mathrm{I}^k, \hat{H}_\mathrm{I}^2 ] = [ \hat{H}_\mathrm{I}^1, \hat{H}_\mathrm{I}^2 ] \ne 0$),
even though the system Hamiltonian and the second interaction Hamiltonian themselves commute.
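The commutator structure invoked here is easy to verify with the Pauli matrices; a small sketch (hypothetical helper code, not part of the simulations):

```python
# Pauli matrices as 2x2 nested lists of complex numbers.
sx = [[0, 1], [1, 0]]
sz = [[1, 0], [0, -1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def is_zero(M):
    return all(abs(M[i][j]) < 1e-12 for i in range(2) for j in range(2))

# For s_x = 0 the second interaction is ~ sigma_z, which commutes with
# H_S ~ sigma_z, but not with the first interaction V_1 = sigma_x:
assert is_zero(commutator(sz, sz))
assert not is_zero(commutator(sx, sz))
```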
Figure \ref{fig:nesb_eta} depicts the heat currents
as functions of the system-bath coupling strength for the case $s_x = s_z = 1$.
In the weak system-bath coupling regime,
the SHC and BHC increase linearly with the coupling strength in a similar manner.
It thus seems that in this case, the CASBI contribution is minor.
As the strength of the system-bath coupling increases,
the difference between the SHC and BHC becomes large:
While $\dot{Q}_\mathrm{S}^1$ decreases after reaching a maximum value near $\eta_1 = \eta_2 = 0.2\omega_0$,
the CASBI contribution, $\dot{q}_{1,2}$, dominates the BHC,
and as a result, it remains relatively large.
Thus, in this regime, the SHC becomes much smaller than the BHC.
In the very strong coupling regime,
the SHC eventually becomes negative,
which indicates the violation of the second law.
In order to eliminate such non-physical behavior, we have to include the $\dot{q}_{1,2}$ term in the definition of the SHC.
Note that the differences between the SHC and BHC described above vanish for $s_z = 0$,
and hence in this case, there is no negative current problem.
This is the case considered in most previous investigations.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig2_nesb_sx.eps}
\caption{The heat currents of the non-equilibrium spin-boson model are plotted
as functions of $s_x$ in order to illustrate the effect of the non-commutativity of $\hat{V}_1$ and $\hat{V}_2$.
The solid (blue) and dashed (black) curves represent $\dot{Q}_\mathrm{B}^1$
and $\dot{Q}_\mathrm{S}^1$, which correspond to the BHC and SHC, respectively.}
\label{fig:nesb_sx}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig3_nesb_eta.eps}
\caption{The SHC and BHC corresponding to $\dot{Q}_\mathrm{S}^1$ (black dashed curve) and $\dot{Q}_\mathrm{B}^1$ (blue solid curve)
for the non-equilibrium spin-boson model as functions of the system-bath coupling.}
\label{fig:nesb_eta}
\end{figure}
\subsection{Three-level heat engine model}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig4_three_time.eps}
\caption{The SHC of the first bath, $\dot{Q}_\mathrm{S}^1$ (red curve),
that of the second bath, $\dot{Q}_\mathrm{S}^2$ (blue curve),
and the power, $\dot{W}$ (black curve), are plotted as functions of time for (a) weak ($\eta_1 = 0.01\omega_1$),
(b) intermediate ($\eta_1 = 0.10\omega_1$),
and (c) strong ($\eta_1 = 1.00\omega_1$) coupling to the first bath
with fixed weak coupling to the second bath ($\eta_2 = 0.001\omega_1$).
The temperatures of the first and second baths are $T_1=10\hbar\omega_1/k_\mathrm{B}$ and $T_2=\hbar\omega_1/k_\mathrm{B}$.
The time period of $1$ corresponds to one cycle of the external force.
Each curve is shifted vertically as
$\dot{Y} \to \dot{Y} - \frac{1}{2}( \max_t\{ \dot{Y}(t) \} + \min_t\{ \dot{Y}(t) \})$
for $Y = Q_\mathrm{S}^1, Q_\mathrm{S}^2$, and $W$, so that it is centred about zero. }
\label{fig:three_time}
\end{figure}
The three-level heat engine model considered here consists of three states,
denoted by $| 0 \rangle$, $| 1 \rangle$, and $| 2 \rangle$, coupled to two bosonic baths.
The system is driven by a periodic external field with frequency $\Omega$.
The system Hamiltonian is expressed as
\begin{align}
\hat{H}_\mathrm{S}(t)
= \sum_{i=0,1,2} \hbar \omega_i | i \rangle \langle i | + g ( e^{-i\Omega t} | 1 \rangle \langle 2 | + \mathrm{H.c.})
\end{align}
with $\omega_1 > \omega_2 > \omega_0$.
The system-bath interactions are defined as
$\hat{V}_1 = | 0 \rangle \langle 1| + | 1 \rangle \langle 0|$
and $\hat{V}_2 = | 0 \rangle \langle 2 | + | 2 \rangle \langle 0 |$.
We set $\omega_0 = 0$ without loss of generality.
Roughly stated, this model acts as a quantum heat engine
when population inversion between the two excited states, $|1\rangle$ and $|2\rangle$, occurs.
This can be realized in the case that
the temperature of the first bath, $T_1$, is sufficiently higher than that of the second bath, $T_2$.
Using this model, we analyze the work and the heat per cycle,
i.e., $Y^{\mathrm{cyc}} = \lim_{ t \to \infty } \int_t^{ t + 2\pi/\Omega } \dot{Y}(t') dt'$
for $Y = W, Q_\mathrm{S}^{k}$, and $Q_\mathrm{B}^{k}$ with $k=1$ or $2$.
We set $\omega_2 = 0.5\omega_1$, $\Omega = 0.5\omega_1, \gamma = 2.0\omega_1$, and $g = 0.1\hbar\omega_1$.
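The per-cycle quantities $Y^{\mathrm{cyc}}$ can be estimated from a long time trace by integrating the rate over one driving period once transients have decayed; an illustrative sketch with an artificial rate function (not the simulated currents):

```python
import math

def cycle_average(rate_func, t_start, period, num=2000):
    """Trapezoidal estimate of Y_cyc = int_{t}^{t+period} (dY/dt) dt',
    evaluated at large t_start so that transients have decayed."""
    h = period / num
    vals = [rate_func(t_start + i * h) for i in range(num + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

Omega = 0.5
period = 2.0 * math.pi / Omega
# Artificial rate: the oscillatory part averages out over a full cycle,
# leaving only the mean flow per cycle.
Y_cyc = cycle_average(lambda t: math.sin(Omega * t) + 0.3, 100.0, period)
```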
In Figure \ref{fig:three_time}, we illustrate the time dependences of the two SHCs,
$\dot{Q}_\mathrm{S}^1$ and $\dot{Q}_\mathrm{S}^2$, and the power, $\dot{W}$,
that arise from transitions between $|0 \rangle$ and $|1 \rangle$, $|0 \rangle$ and $|2 \rangle$, and $|1 \rangle$ and $|2 \rangle$, respectively,
for several values of the first bath coupling,
with the second bath coupling fixed ($\eta_2 = 0.001\omega_1$).
The figure depicts $\dot{Q}_\mathrm{S}^1, \dot{Q}_\mathrm{S}^2$, and $\dot{W}$ for one cycle of the external force.
We set $t=0$ and $t=1$ to correspond to the maxima of the cyclic driving field.
The time delays observed for $\dot{Q}_\mathrm{S}^1$, $\dot{Q}_\mathrm{S}^2$, and $\dot{W}$
imply that the transition $|0\rangle \to |1\rangle \to |2\rangle$ is cyclic.
This behavior can be regarded as a microscopic manifestation of a quantum heat engine.
The periods of the currents and power are, however, half as long as the period of the driving field.
This is because the work-producing transition is induced by an even number of system-field interactions,
so that the dominant second-order contribution involves components oscillating at the frequency $2\Omega$,
twice that of the driving field.
The phases of $\dot{Q}_\mathrm{S}^1$, $\dot{Q}_\mathrm{S}^2$, and $\dot{W}$ depend on the first bath coupling.
Because the strength of the second bath coupling and the external fields are weak,
all of these changes are a consequence of the change in $\dot{Q}_\mathrm{S}^1$.
When the first bath coupling is weak,
the first SHC, $\dot{Q}_\mathrm{S}^1$, which is the current from the high-temperature heat bath,
cannot follow the change of the external field and hence exhibits a delay in its response to the decrease of the heat current
that occurs at the maxima of the field intensity.
As a result, the power, $\dot{W}$, and the heat current for the low-temperature bath, $\dot{Q}_\mathrm{S}^2$,
exhibit successively delayed responses.
When the first bath coupling is strong,
$\dot{Q}_\mathrm{S}^1$ closely follows the variation of the field.
The time delays of $\dot{Q}_\mathrm{S}^2$ and $\dot{W}$ also decrease as the first bath coupling increases.
In Fig. \ref{fig:three_eff},
we depict the system efficiency, $\epsilon_\mathrm{S} \equiv -W^\mathrm{cyc}/Q_\mathrm{S}^{\mathrm{cyc},1}$,
and the bath efficiency, $\epsilon_\mathrm{B} \equiv -W^\mathrm{cyc}/Q_\mathrm{B}^{\mathrm{cyc},1}$,
as functions of the strength of the coupling to the first bath
with fixed strength of the coupling to the second bath, $\eta_2=0.001\omega_1$. Here, $W^{\mathrm{cyc}}$ and $Q^{\mathrm{cyc}}$ represent the time average of $W$ and $Q$ per cycle. We consider a high temperature case with $T_2=\hbar\omega_1/k_\mathrm{B}$
and a low temperature case with $T_2=0.1\hbar\omega_1/k_\mathrm{B}$,
with the fixed ratio $T_2 / T_1 = 0.1$.
While the system efficiency is weakly dependent on the strength of the first bath coupling,
regardless of the temperature,
the bath efficiency decreases as the strength of the first bath coupling increases,
in particular in the low temperature case.
The reason for this can be understood from Fig. \ref{fig:three_work}.
There, it is seen that $Q_\mathrm{S}^{\mathrm{cyc},1}$ decreases as the strength of the first bath coupling increases,
as a result of the strong suppression of the thermal activation by dissipation.
{The overall profiles of the work, $W^{\mathrm{cyc}}$ as well as $Q_\mathrm{S}^{\mathrm{cyc},2}$ (not shown) are similar to $Q_\mathrm{S}^{\mathrm{cyc},1}$. The $\eta_1$ dependence of $Q_\mathrm{S}^{\mathrm{cyc},1}$ and $W^{\mathrm{cyc}}$ follows that of $Q_\mathrm{S}^{\mathrm{cyc},2}$, because, under the present weak system and second bath interaction, the heat flow and work are determined by the capability of the second bath to drain the heat.\cite{Correa} }
Because the strength of the second bath coupling and of the external fields is weak,
the work $-W^\mathrm{cyc}$ tends to follow the behavior of $Q_\mathrm{S}^{\mathrm{cyc},1}$,
as illustrated in Fig. \ref{fig:three_time},
whereas $Q_\mathrm{B}^{\mathrm{cyc},1}$ increases as the strength of the coupling to the first bath increases,
due to the CASBI contribution, $\dot{q}_{1,2}$.
As a result, the system efficiency, $-W^\mathrm{cyc}/Q_\mathrm{S}^{\mathrm{cyc},1}$,
does not change significantly,
whereas the bath efficiency, $-W^\mathrm{cyc}/Q_\mathrm{B}^{\mathrm{cyc},1}$,
decreases as a function of the first coupling strength.
In the strong coupling regime,
the bath efficiency in the low temperature case is larger than that in the high temperature case,
as depicted in Fig. \ref{fig:three_eff},
because as the temperature decreases,
the system coupled to the low temperature bath becomes less activated.
Note that if the strength of the coupling to the second bath is sufficiently large
that the system is in the non-perturbative regime,
the system efficiency decreases as the strength of the coupling to the first bath increases,
because in this case, a part of $-W$ goes to $Q_\mathrm{S}^{\mathrm{cyc},2}$.
Unlike in the non-equilibrium spin-boson case, the system efficiency is physically meaningful.
We believe, however, that the bath efficiency is more appropriate as the rigorous definition of the heat efficiency,
because the system efficiency does not include the contribution to the energy from the system-bath interactions, which must be regarded as a part of the system. The decrease of efficiency can be regarded as a quantum effect, because it originates from the discretization of the energy eigenstates.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig5_three_eff.eps}
\caption{The efficiencies of the three-level heat engine
calculated as functions of the coupling to the first bath, $\eta_1$,
with fixed weak coupling to the second bath ($\eta_2 = 0.001\omega_1$).
Here, we consider $\epsilon_\mathrm{S}=-W^\mathrm{cyc}/Q_\mathrm{S}^{\mathrm{cyc},1}$ (solid line)
and $\epsilon_\mathrm{B}=-W^\mathrm{cyc}/Q_\mathrm{B}^{\mathrm{cyc},1}$ (curve with circles) in the high temperature case,
with $T_2 = 1.0\hbar\omega_1/k_\mathrm{B}$,
and $\epsilon_\mathrm{S}$ (dashed curve) and $\epsilon_\mathrm{B}$ (dash-dotted line)
in the low temperature case, with $T_2 = 0.1\hbar\omega_1/k_\mathrm{B}$.}
\label{fig:three_eff}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig6_three_work.eps}
\caption{The work and heat per cycle of the three-level heat engine
calculated as functions of the coupling to the first bath, $\eta_1$,
with fixed weak coupling to the second bath ($\eta_2 = 0.001\omega_1$).
Here, we consider only the high temperature case, with $T_2 = 1.0\hbar\omega_1/k_\mathrm{B}$.
The solid, dashed, and circle curves represent the work,
the system heat, $Q_\mathrm{S}^{\mathrm{cyc},1}$,
and the bath heat, $Q_\mathrm{B}^{\mathrm{cyc},1}$, respectively.}
\label{fig:three_work}
\end{figure}
\section{Concluding Remarks}\label{sec:conclusion}
In this paper,
we introduced an explicit expression for the bath heat current (BHC),
which includes contributions from the correlations among the system-bath interactions (CASBI).
The BHC reduces to the widely used system heat current (SHC) derived in terms of the system energy
under conditions of a weak system-bath coupling
or in the case that all system-bath interactions commute.
Our definition of the BHC can be applied to any system with any driving force
and any strength of the system-bath coupling.
We numerically examined the role of the CASBI using the HEOM approach.
We demonstrated this approach in the case of a non-equilibrium spin-boson system in which the CASBI contribution is necessary
to maintain consistency with thermodynamic laws in the strong system-bath coupling regime.
In the three-level heat-engine model,
we observed cyclic time evolution of the high-temperature heat current, the work, and the low-temperature heat current,
as in a classical heat engine.
When the system-bath coupling is weak,
there is a time delay between the variation of the external field
and the heat current of the high-temperature bath,
because this bath cannot follow variations of the system, due to the weakness of the system-bath coupling.
In contrast, the heat current does not exhibit any time delay in the strong system-bath coupling case.
The efficiency defined using the BHC,
which is regarded as physically more appropriate than that defined using the SHC, decreases as the strength of the system-bath coupling increases.
{Although the definition of the heat current under non-steady-state condition is not clear,\cite{Esposito15PRB}} we can also apply our formulation to analysis of transient behavior,
in which the variation of the bath energy in time is experimentally measurable.
Because the HEOM approach is capable of calculating various physical quantities in non-equilibrium situations,
it would also be interesting to extend the present investigation to other quantum transport problems \cite{Ye}
by calculating higher-order cumulants \cite{Cerrillo}
and non-linear optical signals \cite{Agarwalla,Gao1,Gao2} to reveal the detailed physical properties of the dynamics.
We leave such problems to future studies.
\acknowledgments
Financial support from a Grant-in-Aid for Scientific Research (A26248005) from the Japan Society for the Promotion of Science is acknowledged.
\section{Introduction}
The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve the quality and safety of medical care, for patient benefit. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision. The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in its services. One such resource is
the National Reporting and Learning System (NRLS), a central repository of patient safety incident reports from England and Wales collected since 2004, which now contains more than 13 million detailed records. The incidents are reported using a set of standardised categories and contain a wealth of organisational and spatiotemporal information (structured data) as well as, crucially, a substantial component of free text (unstructured data).
The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission and discharge to serious untoward incidents, such as retained foreign objects after operations. The review of such data provides critical insight into complex procedures in healthcare with an aim towards service improvement.
Although statistical analyses are routinely performed on the structured components of the data, the free text component remains largely unused. Free text can be read manually but this task is time consuming, hence often ignored in practice. Methods that provide automatic, content-based categorisation of incidents from the free text could help sidestep difficulties in assigning incident categories from \textit{a priori} pre-defined lists, reducing human error and burden, as well as offering a unique insight into the root cause analysis of incidents that could improve safety, quality of care and efficiency.
Here, we showcase an algorithmic methodology that detects content-based groups of records in an unsupervised manner, based only on the free, unstructured textual description of the incidents. To do so, we combine deep neural-network high-dimensional text-embedding algorithms (Doc2vec) with network-theoretical methods (multiscale Markov Stability (MS) community detection) applied to a sparsified geometric similarity graph of documents derived from text vector similarities.
Traditional natural language processing tools have generally used bag-of-words representations of documents followed by statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents. More recent approaches have used deep neural network based language models, without a full multiscale graph analysis~\citep{Hashimoto2016TopicReviews}, while previous applications of network theory to text analysis
~\citep{PhysRevX.5.011007} were carried out at a single scale and used bag-of-words arrays lacking the power of neural network text embeddings. In contrast, multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting to pre-designed classifications.
Our analysis starts by training a Doc2vec vector text embedding using the 13 million NRLS records. (We have also trained models with 1 and 2 million records, and the results are similar.) Once the text embedding is obtained, we use it to produce document vectors for a subset of 3229 incident reports from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014. Our graph clustering method is then applied to cluster these records across different levels of resolution, revealing multiple levels of intrinsic structure in the topics of the dataset, as shown by the extraction of relevant word descriptors from the groups of records.
Upon reporting by the operator, the records had been independently hand-coded with up to 170 features per record, including a two-level manual classification of the incidents: 15 categories at Level 1; 95 sub-categories at Level 2.
We carried out an \textit{a posteriori} comparison against the hand-coded categories assigned by the reporter. Several of our content-based clusters exhibit good correspondence with well-defined hand-coded categories at both levels; yet our results also provide further resolution in certain areas and a complementary characterisation of the incidents, not defined in the \textit{a priori} external classification.
We find that our methodology provides improved performance over LDA models, as quantified by the Uncertainty Coefficient against the hand-coded categories.
\section{Methodology}
{\bf{Text Pre-processing and Doc2Vec Model Training.}}
The pre-processing of the raw text consisted of: lowering capital letters; tokenising sentences into words; stemming; and removing punctuation, stop-words and all numeric tokens. We then trained Doc2Vec models~\cite{doc2vec} using the PV-DBOW method from the Gensim~\cite{gensim} library. To ascertain the effect of the training of the Doc2Vec model on performance, we repeated the training with a broad range of parameter sets (window size, minimum count, subsampling). We carried out Doc2Vec training both on a standard (generic, non-specialised) Wikipedia English corpus and on the full NRLS dataset (13+ million records with specialised language).
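A toy version of this pre-processing step, using only the standard library (a production pipeline would use a proper stemmer, e.g. a Porter stemmer, and a full stop-word list; the list and suffix rules below are illustrative only):

```python
import re

STOP_WORDS = {"the", "a", "an", "was", "to", "of", "and", "in", "on"}  # toy list

def preprocess(text):
    """Lower-case, tokenise, drop punctuation/stop-words/numeric tokens,
    then apply a crude suffix-stripping stemmer."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOP_WORDS]
    stemmed = []
    for t in tokens:
        for suffix in ("ing", "ed", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[:-len(suffix)]
                break
        stemmed.append(t)
    return stemmed
```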
Table~\ref{table:d2v} shows that, while the Wikipedia corpus is useful for training models with sub-optimal parameter sets, the NRLS corpus performs better for optimised parameter sets. Once optimised, the Doc2Vec model trained on the NRLS corpus is used to infer vectors for each of the $N=3229$ records in our analysis dataset.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
\multicolumn{3}{|c|}{\textbf{Model Parameters}} & \multicolumn{2}{c|}{\textbf{Training Corpus}} \\ \hline
\textbf{\begin{tabular}[c]{@{}l@{}}Window\\ Size\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Minimum \\ Count\end{tabular}} & \textbf{Subsampling} & \textbf{Wikipedia} & \textbf{NRLS} \\ \hline
5 & 20 & 0.00001 & 465 & 379 \\ \hline
15 & 20 & 0.00001 & 424 & 387 \\ \hline
5 & 5 & 0.001 & 580 & 798 \\ \hline
5 & 20 & 0.001 & 587 & 809 \\ \hline
15 & 20 & 0.001 & 532 & 832 \\ \hline
15 & 5 & 0.001 & 531 & \bf{836} \\ \hline
\end{tabular}
\caption{The last two columns show the scores of Doc2Vec paragraph vector models trained using different hyper-parameter sets on different corpora. The scores are obtained by: (i) calculating centroids for the 15 hand-coded categories;
(ii) selecting the 100 nearest reports for each centroid;
(iii) counting the number of incidents (out of 1500) correctly assigned to their centroid.}
\label{table:d2v}
\end{table}
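The scoring procedure in the table caption can be sketched as follows (an illustrative re-implementation on tiny synthetic data; the paper uses the 15 hand-coded categories with $k=100$ nearest reports per centroid):

```python
from math import sqrt
from collections import defaultdict

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def centroid_score(vectors, labels, k):
    """(i) centroid per category; (ii) k nearest reports per centroid
    by cosine similarity; (iii) count reports matching the category."""
    groups = defaultdict(list)
    for vec, lab in zip(vectors, labels):
        groups[lab].append(vec)
    score = 0
    for lab, vecs in groups.items():
        dim = len(vecs[0])
        cen = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
        ranked = sorted(range(len(vectors)),
                        key=lambda i: cosine(vectors[i], cen), reverse=True)
        score += sum(1 for i in ranked[:k] if labels[i] == lab)
    return score
```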
{\bf{Graph Construction.}} We constructed a normalised similarity matrix $\hat{S}$ based on the cosine similarity by: computing the matrix of cosine similarities between all pairs of records, $S_\text{cos}$; transforming it into a distance matrix $D_{cos} = 1-S_{cos}$; normalising element-wise by the largest entry to obtain $\hat{D}=D_{cos}/\|D_{cos}\|_{max}$; and obtaining the normalised similarity matrix $\hat{S} = 1-\hat{D}$, which has values in $[0,1]$.
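This normalisation chain can be transcribed directly; a sketch (in practice one would vectorise it with an array library):

```python
from math import sqrt

def normalised_similarity(vectors):
    """S_cos -> D_cos = 1 - S_cos -> D_hat = D_cos / max(D_cos)
    -> S_hat = 1 - D_hat, with all entries in [0, 1]."""
    n = len(vectors)

    def cos(u, v):
        return sum(a * b for a, b in zip(u, v)) / (
            sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

    D = [[1.0 - cos(vectors[i], vectors[j]) for j in range(n)]
         for i in range(n)]
    dmax = max(max(row) for row in D) or 1.0
    return [[1.0 - D[i][j] / dmax for j in range(n)] for i in range(n)]
```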
This (full) similarity matrix can be viewed as a completely connected, weighted graph. However, such a graph contains many edges with small weights (i.e., weak similarities) since, in high-dimensional noisy datasets, even the least similar nodes present a non-negligible degree of similarity. We thus apply a simple geometric sparsification to the normalised distance matrix $\hat{D}$ using the MST-kNN method~\cite{mstknn}, a geometric heuristic that preserves the global connectivity of the graph while retaining the local geometry of the dataset.
The MST-kNN graph is a weighted graph obtained as the union of the minimum spanning tree (MST) of $\hat{D}$
and the edges connecting each node to its $k$ nearest nodes (kNN).
We scanned $k = 1, 5, 13, 17$ for the graph construction and found that the MST-kNN graph with $k=13$ presents a reasonable balance between local and global structure in the dataset; we therefore analyse it with the multi-scale graph partitioning framework. The results are, however, robust to the choice of $k$ within this range of values. Note that the MST-kNN method avoids global similarity thresholding.
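A compact sketch of the MST-kNN construction (Prim's algorithm for the MST plus a $k$-nearest-neighbour union; illustrative, not the cited implementation):

```python
def mst_knn_edges(D, k):
    """Return the edge set of the MST-kNN graph for distance matrix D:
    the minimum spanning tree united with each node's k nearest neighbours."""
    n = len(D)
    edges = set()
    # Prim's algorithm for the MST.
    in_tree = {0}
    while len(in_tree) < n:
        _, i, j = min((D[i][j], i, j) for i in in_tree
                      for j in range(n) if j not in in_tree)
        edges.add((min(i, j), max(i, j)))
        in_tree.add(j)
    # Union with kNN edges.
    for i in range(n):
        nbrs = sorted((j for j in range(n) if j != i),
                      key=lambda j: D[i][j])[:k]
        for j in nbrs:
            edges.add((min(i, j), max(i, j)))
    return edges
```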
{\bf{Markov Stability Multiscale Graph Partitioning.}}
We apply Markov Stability (MS), a multiscale community detection method, to the MST-kNN graph in order to detect clusters of documents with similar content
at different levels of resolution. MS
is an unsupervised method that scans across all scales to detect robust and stable graph partitions using a continuous-time diffusion process on the graph. The method does not need to choose {\it a priori} the number or type of relevant subgraphs; the only input is the diffusive flow evolved over the Markov time $t$.
Hence $t$ acts as a resolution parameter revealing relevant partitions that persist over particular time scales in an unsupervised manner. For more details see~\citep{pnasStability,Schaub2012ZoomingLens,LambiotteMarkovProcess,bacik_celegans}.
Briefly, the method optimises the MS function $r(t,H)$ over the space of graph partitions $H$ at each time $t$. MS
is defined as the trace of the clustered autocovariance matrix of the diffusion process, Eq.~\eqref{eq:MS}:
\begin{equation}
r(t,H) = \text{trace} \left[R(t,H)\right] = \text{trace} \left[H^T[\Pi e^{-t \mathcal{L}}-\pi\pi^T]H \right], \label{eq:MS}
\end{equation}
where $H$ is the membership matrix of the partition, $\mathcal{L}$ is the random walk Laplacian of the graph, $\pi$ is the steady-state distribution of the process and $\Pi=\text{diag}(\pi)$. Our method searches for the partition $H^*(t)$ that maximises $r(t,H)$ at each Markov time. The partition $H^*(t)$ is formed by communities (subgraphs) that tend to preserve the flow within themselves over time $t$, since in that case the diagonal (off-diagonal) elements of $R(t,H)$ will be large (small). Although the maximisation of~\eqref{eq:MS} is NP-hard, there are optimisation methods that work well in practice. Here we use the Louvain algorithm~\citep{louvain}, which is efficient and known to give good results on benchmarks. We look to obtain robust partitions, i.e., partitions that are relevant across scales (i.e., consistently found over extended Markov time) and highly reproducible (i.e., consistently found by the Louvain optimisation). This is achieved by running the Louvain algorithm 500 times with different initialisations at each Markov time, picking the 50 with the highest MS value, and computing the variation of information $VI(t)$~\citep{Meila2007} of this ensemble. In addition, we also compute the variation of information between the optimised partitions found across time, $VI(t,t')$.
Robust partitions are indicated by dips of $VI(t)$ and extended plateaux of $VI(t,t')$, indicating a partition that is robust to the optimisation and valid over extended scales~\citep{bacik_celegans,LambiotteMarkovProcess}.
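To make Eq.~\eqref{eq:MS} concrete, a pure-Python sketch that evaluates $r(t,H)$ for a small undirected graph (illustrative only; the matrix exponential is computed by a Taylor series, which is adequate for tiny graphs and moderate $t$):

```python
def markov_stability(A, partition, t, terms=40):
    """r(t,H) = trace(H^T (Pi exp(-t L) - pi pi^T) H) for the continuous-time
    random walk with L = I - D^{-1} A; partition[i] is node i's community."""
    n = len(A)
    deg = [sum(row) for row in A]
    total = float(sum(deg))
    pi = [d / total for d in deg]
    L = [[(1.0 if i == j else 0.0) - A[i][j] / deg[i] for j in range(n)]
         for i in range(n)]
    # exp(-t L) via its Taylor series.
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in E]
    for m in range(1, terms):
        term = [[sum(term[i][q] * (-t) * L[q][j] for q in range(n)) / m
                 for j in range(n)] for i in range(n)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    r = 0.0
    for c in set(partition):
        nodes = [i for i in range(n) if partition[i] == c]
        for i in nodes:
            for j in nodes:
                r += pi[i] * E[i][j] - pi[i] * pi[j]
    return r

# 3-node path graph; the singleton partition at t = 0 gives 1 - sum(pi^2).
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

As a sanity check, $r(0,H)$ for the all-singleton partition equals $1 - \sum_i \pi_i^2$, and $r$ decays as the diffusion mixes across communities.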
{\bf{Visualization and interpretation of results.} }
\textit{Tracking membership through Sankey diagrams.} Sankey diagrams visualise the flow of node memberships across different partitions and categories.\nocite{sankey}
We use Sankey diagrams with two different objectives:
\textit{(i)} a multilayer Sankey diagram to represent the results of the multi resolution MS community detection across different scales (Fig.~\ref{fig:MSPlot});
\textit{(ii)} two-layer Sankey diagrams to indicate the correspondence between MS clusters and the hand-coded external categories at a given level of resolution (Fig.~\ref{fig:MS_17}).
\textit{Normalized contingency tables.} Normalised contingency tables allow us to compare the relative membership of content clusters in terms of the external categories. We plot contingency tables as heatmaps of a z-score (Fig.~\ref{fig:MS_17}). We score the quality of this correspondence using the uncertainty coefficient, an informational theoretical measure of similarity between groupings.
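The uncertainty coefficient can be computed directly from the two labelings; a sketch using natural-log entropies:

```python
from math import log
from collections import Counter

def uncertainty_coefficient(x, y):
    """U(X|Y) = I(X;Y) / H(X): the fraction of the entropy of grouping X
    explained by grouping Y (0 for independent groupings, 1 when Y
    determines X)."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    hx = -sum((c / n) * log(c / n) for c in px.values())
    ixy = sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
              for (a, b), c in pxy.items())
    return ixy / hx if hx > 0 else 1.0
```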
\textit{Word clouds of increased intelligibility through lemmatisation.} To visualise the content of the document clusters, which can be understood as a type of \emph{topic detection}, we used Word Clouds weighted through lemmatisation as an intuitive way to summarise content and compare \textit{a posteriori} with hand-coded categories. Word clouds can also provide an aid for monitoring when used by practitioners.
\section{Results}
\begin{figure}[h!]
\includegraphics[width=1\linewidth,angle=0]{figures/MSPlot.pdf}
\caption{
(Top) Markov Stability (MS) analysis across Markov time $t$: number of clusters of the optimised partitions (red line), $VI(t)$ for the ensemble of Louvain optimised solutions at each $t$ (blue line); $VI(t,t')$ between optimised partitions across Markov times (background colourmap). Relevant partitions (indicated by numbers and blue vertical lines) correspond to dips of $VI(t)$ and extended plateaux of $VI(t,t')$. (Bottom) The Sankey diagram illustrates the quasi-hierarchical relationship of the communities of documents (indicated by numbers and colours) across levels of resolution.
}
\label{fig:MSPlot}
\end{figure}
We applied full MS across an extended, finely sampled span of Markov times (0.01--100 in steps of 0.01) to the similarity MST-kNN graph of 3229 NRLS incident records.
Figure~\ref{fig:MSPlot} presents a summary of this analysis including the number of clusters and the two metrics of variation of information across all Markov times. The existence of several long plateaux in $VI(t,t')$ coupled to the existence of dips in the $VI(t)$ imply the presence of robust partitions at different levels of resolution. We choose several robust partitions, from finer to coarser, and examine their structure and content. Their relative node membership is shown with a multi-level Sankey diagram.
{\bf{Quasi-hierarchical structure: themes at different resolution levels. }}
The MS analysis in Figure~\ref{fig:MSPlot} reveals a rich multi-level structure of partitions, with a strong quasi-hierarchical organization. It is important to remark that, although the Markov time acts as a natural resolution parameter from finer to coarser partitions, our process of optimization does not impose such a hierarchical structure. Hence this organisation is intrinsic to the data and implies the existence of content communities which naturally integrate with each other as sub-themes of larger thematic categories. The detection of intrinsic scales within the graph at which robust partitions exist, thus allows us to obtain thematically-based clusters of records at different levels of resolution.
{\bf{Interpretation of the MS communities: Word clouds and \textit{a posteriori} comparison against hand-coded categories.}} To ascertain the relevance and relationship between the layers of MS clusters, we examined in detail the five levels of resolution in Figure~\ref{fig:MSPlot}. For each level, we prepared word clouds (lemmatised for increased intelligibility) and a Sankey diagram and contingency table linking content clusters with the hand-coded categories assigned by the operator. Note that this comparison was only done \textit{a posteriori}, i.e., the external categories were not used in our analysis, hence our approach is truly unsupervised.
As an example, Figure~\ref{fig:MS_17} shows the 17-community partition with word clouds for all clusters and the comparison with the 15 hand-coded categories in Level 1.
\begin{figure}[h]
\includegraphics[width=1.05\linewidth,angle=0]{figures/Analysis_MB.pdf}
\caption{
Summary of the 17-community MS partition. The content of each cluster is summarised with a word cloud (name tags given by us based only on the word cloud). The \textit{a posteriori} comparison to the 15 hand-coded categories (indicated by names and colours) is presented in two equivalent ways: a Sankey diagram showing the correspondence between categories and communities and the heatmap of a z-score contingency table.
}
\label{fig:MS_17}
\end{figure}
{\bf{Comparison of the MS content clusters against other NLP methods.}}
We compared the MS document partitioning results against LDA models with a similar range of numbers of topics. LDA models are trained and inferred using the Gensim module on the same set of documents. We then quantify the match of the MS clusters with the hand-coded categories using the uncertainty coefficient \cite{uncoeff}
$$U(T|C)=\frac{H(T)-H(T|C)}{H(T)}=\frac{I(T;C)}{H(T)},$$ where $H(T)$ is the entropy of the hand-coded categories, $H(T|C)$ is their conditional entropy given the clustering, and $I(T;C)$ is the mutual information between the two groupings. The MS clusters show improved correspondence with both Level 1 and 2 categories as compared to LDA and spectral clustering methods (Fig.~\ref{fig:UC}).
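A minimal sketch of this score (illustrative only, not the paper's code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a labelling, in nats."""
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def uncertainty_coefficient(truth, clusters):
    """U(T|C) = I(T;C) / H(T): the fraction of the entropy of the
    hand-coded categories T that is explained by the clustering C."""
    mi = entropy(truth) + entropy(clusters) \
        - entropy(list(zip(truth, clusters)))  # I(T;C) = H(T)+H(C)-H(T,C)
    return mi / entropy(truth)

# a clustering that reproduces the categories exactly scores 1
print(uncertainty_coefficient(["a", "a", "b", "b"], [0, 0, 1, 1]))  # → 1.0
```

Unlike raw mutual information, $U(T|C)$ is normalised by $H(T)$, so scores are comparable across clusterings with different numbers of communities.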
\begin{figure}[h]
\includegraphics[width=.9 \linewidth,angle=0]{figures/mlmh_spectral_lin.png}
\caption{The Uncertainty Coefficient measures the matching of LDA and MS clusterings over different scales against the hand-coded Level 1 and Level 2 categories. MS communities are consistently more coherent with the hand-coded categories than LDA and spectral clustering. The vertical dashed lines indicate the number of hand-coded categories for Level 1 and 2 (i.e., 15 and 95, respectively).
}
\label{fig:UC}
\end{figure}
\section{Discussion}
This work has applied a multiscale graph partitioning algorithm (MS) to determine topic clusters for a textual dataset of healthcare safety incident reports in an unsupervised manner at different levels of resolution. The method uses paragraph vectors to represent the records and constructs a similarity graph of documents from their content.
This method brings the advantage of multi-resolution algorithms capable of capturing clusters without imposing \textit{a priori} their number or structure. Furthermore, different levels of resolution of the clustering can be selected to suit the requirements of each task, depending on the level of detail required.
The \textit{a posteriori} analysis against the hand-coded categories showed that the method recovers meaningful categories and outperformed LDA at both categorisation levels. Furthermore, some of the MS content clusters capture topics of medical relevance, which provide complementary information to the external classifications.
The nuanced information and classifications extracted from free text analysis suggest a complementary axis to existing approaches to characterise patient safety incident reports, as
the method allows for the discovery of emerging topics or classes of incidents directly from the data when such events do not fit the pre-assigned categories.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The origin of nucleon spin is one of the major puzzles in
modern particle physics. The well-known European Muon Collaboration experiment~\cite{EuropeanMuon:1987isl} triggered interest in understanding the nucleon spin from the contributions of the spin and the orbital angular momentum (OAM) of each of its constituents.
In this context, how the total angular momentum (TAM) is split into separate quark and gluon (parton) contributions is intrinsically debatable due to quark-gluon couplings and the non-uniqueness of the decomposition~\cite{Leader:2013jra,Wakamatsu:2014zza,Liu:2015xha}. Meanwhile, it has become clear that the generalized parton distributions (GPDs)~\cite{Diehl:2003ny,Belitsky:2005qn,Goeke:2001tz}, appearing in the description of hard exclusive reactions like deeply virtual Compton scattering or deeply virtual meson production, provide us with essential information about the spatial distributions and orbital motion of partons inside the nucleon, and allow us to draw three-dimensional pictures of the nucleon. For more than two decades, the GPDs have been attracting numerous dedicated experimental and theoretical efforts, as many observables can be connected to them. The GPDs are functions of three variables, namely, the longitudinal momentum fraction ($x$) of the constituent, the skewness ($\zeta$), which measures the longitudinal momentum transferred, and the square of the total momentum transferred ($t$). Their first moments are linked to the electromagnetic form factors, whereas they reduce to the ordinary parton distributions in the forward limit ($t=0$). The second moments of the GPDs correspond to the gravitational form factors, which are linked to matrix elements of the energy-momentum tensor (EMT). Being off-forward matrix elements, the GPDs do not have probabilistic interpretations. Meanwhile, for zero skewness the Fourier transform (FT) of the GPDs with respect to the momentum transfer in the transverse direction provides the impact parameter dependent GPDs, which do have a probabilistic interpretation~\cite{Burkardt:2000za,Burkardt:2002hr}.
The impact parameter dependent GPDs encode the correlations in
spatial and momentum distributions of partons in the nucleon. They contain the information about partonic distributions in the transverse position space for a given longitudinal momentum fraction carried by the constituent.
Ji has shown that the partonic contribution to the total angular momentum of the nucleon can be calculated using the second moment of the GPDs~\cite{Ji:PRL}.
Since the GPDs provide the spatial distribution of the constituents inside the nucleon, it is credible that they also carry knowledge about the spatial distribution of angular momentum~\cite{Lorce:2017wkb,Polyakov:2002yz,Adhikari:2016dir,Kumar:2017dbf}. The angular momentum distribution in three-dimensional coordinate space was first introduced in Ref.~\cite{Polyakov:2002yz}. However, the three-dimensional distribution suffers from an ambiguity due to relativistic corrections, which can be avoided by defining a two-dimensional distribution in the infinite momentum frame~\cite{Adhikari:2016dir,Leader:2013jra}. Different techniques to calculate the angular momentum distributions in the transverse plane were prescribed in Ref.~\cite{Adhikari:2016dir}, where it was concluded that none of them agrees at the density level. Meanwhile, a more detailed discussion of the various definitions of angular momentum has been reported in Ref.~\cite{Lorce:2017wkb}, whose authors identified all the missing terms that hinder a proper comparison. They illustrated explicitly, using a scalar diquark model, that there is no discrepancy between the different definitions of angular momentum densities. Later, the distributions of quark angular momentum in a light-front quark-diquark model (with both scalar and axial vector diquarks) motivated by soft-wall AdS/QCD were investigated in Ref.~\cite{Kumar:2017dbf}.
In this paper, we investigate the spatial distributions of quark angular momentum inside the proton from its valence light-front wave functions (LFWFs), which feature the spin, flavor, and three-dimensional spatial information of all three active quarks on the same footing. Our theoretical framework for exploring the nucleon structure is rooted in basis light-front quantization (BLFQ)~\cite{Vary:2009gt}, which provides a computational framework for solving relativistic many-body bound-state problems in quantum field theories~\cite{Vary:2009gt,Li:2021jqb,Zhao:2014xaa,Wiecki:2014ola,Li:2015zda,Li:2017mlw,Jia:2018ary,Lan:2019vui,Lan:2019rba,Lekha,Tang:2018myz,Tang:2019gvn,Mondal:2019jdg,Xu:2021wwj,Lan:2019img,Qian:2020utg,Lan:2021wok}. We evaluate the valence quark GPDs of the proton in both momentum space and position space using the LFWFs based on BLFQ, with only the valence Fock sector of the proton considered. BLFQ provides a Hamiltonian formalism that incorporates the advantages of light-front dynamics~\cite{Brodsky:1997de}. Our effective Hamiltonian includes a three-dimensional confinement potential consisting of light-front holography in the transverse direction~\cite{Brodsky:2014yha}, a longitudinal confinement~\cite{Li:2015zda}, and a one-gluon exchange interaction with fixed coupling to account for the spin structure~\cite{Mondal:2019jdg}. The nonperturbative solutions for the three-body LFWFs are given by the recent BLFQ study of the nucleon~\cite{Mondal:2019jdg}. These LFWFs have been applied successfully to predict the electromagnetic and axial form factors, radii, parton distribution functions (PDFs), and many other quantities of the nucleon~\cite{Mondal:2019jdg,Xu:2021wwj,Mondal:2021wfq}. Here, we extend those investigations to study the proton GPDs and their application to the description of angular momentum distributions.
The paper is organized as follows. We briefly summarize the BLFQ formalism for the nucleon in Sec.~\ref{sec:formalism}. We then present a detailed description of the angular momentum and the associated GPDs in Sec.~\ref{sec:AM}. Sec.~\ref{sec:results} details our numerical results for the GPDs, and different angular momentum densities. At the end, we provide a brief summary and conclusions in Sec.~\ref{sec:summary}.
\section{Light-front effective Hamiltonian for the proton}\label{sec:formalism}
The LFWFs that encode the structure of hadronic bound states are obtained as the eigenfunctions of the eigenvalue equation of the Hamiltonian:
$
H_{\rm LF}\vert \Psi\rangle=M_{\rm h}^2\vert \Psi\rangle
$
where $H_{\rm LF}$ represents the light-front Hamiltonian of the hadron with mass squared eigenvalue $M_{\rm h}^2$. With quarks being the only explicit degrees of freedom, the effective Hamiltonian we employ for the proton includes a two-dimensional harmonic oscillator (`2D-HO') transverse confining potential along with a longitudinal confinement and an effective one-gluon exchange (OGE) interaction~\cite{Mondal:2019jdg}
\begin{align}\label{hami}
H_{\rm eff}=&\sum_a \frac{{\vec k}_{\perp a}^2+m_{a}^2}{x_a}+\frac{1}{2}\sum_{a\ne b}\kappa^4 \Big[x_ax_b({ \vec r}_{\perp a}-{ \vec r}_{\perp b})^2-\frac{\partial_{x_a}(x_a x_b\partial_{x_b})}{(m_{a}+m_{b})^2}\Big]
\nonumber\\&+\frac{1}{2}\sum_{a\ne b} \frac{C_F 4\pi \alpha_s}{Q^2_{ab}} \bar{u}(k'_a,s'_a)\gamma^\mu{u}(k_a,s_a)\bar{u}(k'_b,s'_b)\gamma^\nu{u}(k_b,s_b)g_{\mu\nu}\,,
\end{align}
where $x_a$ and ${\vec k}_{\perp a}$ represent the longitudinal momentum fraction and the relative transverse momentum carried by quark $a$, $m_{a}$ is the mass of quark $a$, and $\kappa$ defines the strength of the confinement. The variable $\vec{r}_\perp={ \vec r}_{\perp a}-{ \vec r}_{\perp b}$ is the transverse separation between two quarks. The last term in the effective Hamiltonian corresponds to the one-gluon exchange (OGE) interaction, where $Q^2_{ab}=-q^2=-(1/2)(k'_a-k_a)^2-(1/2)(k'_b-k_b)^2$ is the average momentum transfer squared, $C_F =-2/3$ is the color
factor, $\alpha_s$ is the coupling constant, and $g_{\mu\nu}$ is the metric tensor. ${u}(k_a,s_a)$ represents the spinor with momentum $k_a$ and spin $s_a$.
For the BLFQ basis representation, the 2D-HO function is adopted for the transverse direction, while we employ a discretized plane-wave basis in the longitudinal direction~\cite{Vary:2009gt,Zhao:2014xaa}. Diagonalizing the Hamiltonian, Eq.~(\ref{hami}), in our chosen basis space gives the eigenvalues as the squares of the bound-state masses and the eigenstates that specify the LFWFs. The lowest eigenstate is naturally identified as the nucleon state, denoted as $\ket{P, {\Lambda}}$, with $P$ and $\Lambda$ being the momentum and the helicity of the state.
In terms of the basis function the LFWFs of the nucleon are expressed as
\begin{align}
\Psi^{\Lambda}_{\{x_i,\vec{k}_{i\perp},\lambda_i\}}=\sum_{\{n_i,m_i\}} \psi^{\Lambda}_{\{x_{i},n_{i},m_{i},\lambda_i\}} \prod_i \phi_{n_i,m_i}(\vec{k}_{i\perp};b) \,,\label{wavefunctions}
\end{align}
where $\psi^{\Lambda}_{\{x_{i},n_{i},m_{i},\lambda_i\}}=\braket{P, {\Lambda}|\{x_i,n_i,m_i,\lambda_i\}}$ is the LFWF in the BLFQ basis, obtained by diagonalizing Eq.~(\ref{hami}) numerically. The 2D-HO function we adopt as the transverse basis function is
\begin{align}
\phi_{n,m}(\vec{k}_{\perp};b)
=\frac{\sqrt{2}}{b(2\pi)^{\frac{3}{2}}}\sqrt{\frac{n!}{(n+|m|)!}}e^{-\vec{k}_{\perp}^2/(2b^2)}\left(\frac{|\vec{k}_{\perp}|}{b}\right)^{|m|}L^{|m|}_{n}(\frac{\vec{k}_{\perp}^2}{b^2})e^{im\theta}\label{ho}\,,
\end{align}
with $b$ as its scale parameter; $n$ and $m$ are the principal and orbital quantum
numbers, respectively, and $L^{|m|}_{n}$ is the associated Laguerre polynomial. In the discretized plane-wave basis, the longitudinal momentum fraction $x$ is defined as
$
x_i=p_i^+/P^+=k_i/K,
$
where the dimensionless quantities $k_i$ take the half-integer values $k=\frac{1}{2}, \frac{3}{2}, \frac{5}{2}, \dots$, corresponding to the choice of antiperiodic boundary conditions, and $K=\sum_i k_i$. The multi-body basis states have selected values of the total angular momentum projection
$
M_J=\sum_i\left(m_i+\lambda_i\right),
$
where $\lambda$ is used to label the quark helicity. The transverse basis truncation is specified by the dimensionless parameters $N_{\rm max}$, such that $\sum_i (2n_i+| m_i |+1)\le N_{\rm max}$. The basis cutoff $N_{\rm max}$ acts implicitly as the ultraviolet (UV) and infrared (IR) regulators for the LFWFs in the transverse direction, with a UV cutoff $\Lambda_{\rm UV}\approx b \sqrt{N_{\rm max}}$ and an IR cutoff $\Lambda_{\rm IR}\approx b /\sqrt{N_{\rm max}}$. The longitudinal basis cutoff $K$ controls the numerical resolution and regulates the longitudinal direction.
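For illustration, the single-particle transverse quantum numbers admitted by this truncation can be enumerated; a minimal sketch (a hypothetical helper, not part of the actual BLFQ code — the many-body truncation sums $2n_i+|m_i|+1$ over all quarks):

```python
def transverse_states(n_max):
    """Single-particle 2D-HO quantum numbers (n, m) allowed by
    the truncation 2n + |m| + 1 <= N_max."""
    return [(n, m)
            for n in range(n_max)
            for m in range(-(n_max - 1), n_max)
            if 2 * n + abs(m) + 1 <= n_max]

# the number of states grows as the triangular numbers N(N+1)/2
for n_max in (1, 2, 3, 4):
    print(n_max, len(transverse_states(n_max)))  # → 1, 3, 6, 10
```

The quadratic growth of the single-particle space with $N_{\rm max}$ is what drives the basis-size (and hence computational) cost of raising the UV cutoff $\Lambda_{\rm UV}\approx b\sqrt{N_{\rm max}}$.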
Parameters in the model Hamiltonian are fixed to reproduce the ground state mass of the nucleon and to fit the Dirac flavor form factors~\cite{Xu:2021wwj}. The LFWFs in this model have been successfully applied to compute a wide class of different
and related nucleon observables, e.g., the electromagnetic and axial form factors, radii, PDFs, helicity asymmetries, transverse momentum dependent parton distribution functions etc., with remarkable overall success~\cite{Mondal:2019jdg,Xu:2021wwj,Mondal:2021wfq}.
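As a simple numerical cross-check of the basis setup, the 2D-HO functions of Eq.~(\ref{ho}) satisfy $\int \text{d}^{2}\vec{k}_{\perp}\,|\phi_{n,m}|^{2}=(2\pi)^{-2}$ for every state; a sketch (the scale $b=0.6$ GeV is illustrative, not the fitted parameter):

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre

def phi_sq_radial(k, n, m, b):
    """|phi_{n,m}(k_perp; b)|^2 following Eq. (ho); the e^{i m theta}
    phase cancels, leaving a function of k = |k_perp| only."""
    norm = 2.0 / (b**2 * (2.0 * np.pi)**3) \
        * math.factorial(n) / math.factorial(n + abs(m))
    u = k**2 / b**2
    return norm * np.exp(-u) * u**abs(m) * genlaguerre(n, abs(m))(u)**2

b = 0.6  # illustrative HO scale in GeV (not the fitted value)
for n, m in [(0, 0), (1, 0), (0, 2), (2, 1)]:
    val, _ = quad(lambda k: 2.0 * np.pi * k * phi_sq_radial(k, n, m, b),
                  0.0, np.inf)
    print(n, m, (2.0 * np.pi)**2 * val)  # → 1.0 for every state
```

The common normalization across $(n,m)$ follows from the orthogonality of the associated Laguerre polynomials under the weight $e^{-u}u^{|m|}$.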
\section{Angular momentum distributions} \label{sec:AM}
In this section, we introduce our notation and briefly review the derivation of angular momentum distribution following Ref.~\cite{Lorce:2017wkb}. In field theory, the generalized angular momentum tensor operator is written as follows
\begin{equation}
J^{\mu\alpha\beta}(y)=L^{\mu\alpha\beta}(y)+S^{\mu\alpha\beta}(y) \; . \label{J}
\end{equation}
Both of the contributions are antisymmetric under $\alpha\leftrightarrow\beta$. When $\alpha,\beta$ are spatial components, $L^{\mu\alpha\beta}(y)$ and $S^{\mu\alpha\beta}(y)$ are identified with the OAM and spin operators, respectively. The first contribution can be expressed in terms of the EMT as
\begin{equation}
L^{\mu\alpha\beta}(y)= y^{\alpha}T^{\mu\beta}(y)-y^{\beta}T^{\mu\alpha}(y) \; .
\end{equation}
Note that $T^{\mu\nu}$ is referred to as the canonical EMT; it is, in general, neither gauge invariant nor symmetric. Meanwhile, the TAM can also be expressed in a pure orbital form,
\begin{equation}
J^{\mu\alpha\beta}_{\text{Bel}}(y)=y^{\alpha}T_{\text{Bel}}^{\mu\beta}(y)-y^{\beta}T_{\text{Bel}}^{\mu\alpha}(y)\,,
\end{equation}
using the Belinfante-improved EMT~\cite{Belinfante1, Belinfante2,Rosenfeld}, which is defined by adding a term to the definition of $T^{\mu\nu}$ as
\begin{align}
T^{\mu\nu}_{\text{Bel}}(y)&= T^{\mu\nu}(y)+\partial_{\lambda}G^{\lambda\mu\nu}(y)\,,\label{tbel}
\end{align}
where $G^{\lambda\mu\nu}$ is given by
\begin{equation}\label{G}
G^{\lambda\mu\nu}(y)=\frac{1}{2}\left[S^{\lambda\mu\nu}(y)+S^{\mu\nu\lambda}(y)+S^{\nu\mu\lambda}(y)\right]=-G^{\mu\lambda\nu}(y) \;.
\end{equation}
The additional term revises the definition of the local density without changing the TAM. The Belinfante-improved tensor $T^{\mu\nu}_{\text{Bel}}$ is conserved, symmetric and gauge invariant.
The Belinfante-improved tensors can be seen as effective densities, where the effects of spin are imitated by a superpotential contribution to the angular momentum. In order to determine the angular momentum distributions, we are interested in the matrix elements of the above mentioned operator densities.
For a spin-$\frac{1}{2}$ target, the matrix elements of the general local asymmetric $T^{\mu\nu}$ are parametrized in terms of several gravitational form factors~\cite{Leader:2013jra}:
\begin{align}
&\langle P', {\bf\Lambda}'\lvert T^{\mu\nu}(0) \rvert P, {\bf\Lambda}\rangle = \bar{u}(P', {\bf\Lambda}')\Big[\frac{\bar{P}^{\mu}\bar{P}^\nu}{M}\,A(t)+\frac{\bar{P}^{\mu}i\sigma^{\nu\lambda}\Delta_{\lambda}}{4M}\,(A+B+D)(t)\nonumber\\ &\quad\quad\quad\quad
+\frac{\Delta^{\mu}\Delta^{\nu}-g^{\mu\nu}\Delta^2}{M}\,C(t)+Mg^{\mu\nu}\,\bar{C}(t)+\frac{\bar{P}^{\nu}i\sigma^{\mu\lambda}\Delta_{\lambda}}{4M}\,(A+B-D)(t)\Big]u(P, {\bf\Lambda}) \; , \label{dec4}
\end{align}
where $\bar{P}=\frac{1}{2}(P'+P)$, $\Delta=P'-P$, $t=\Delta^2$, $M$ is the system mass, the three-vector ${\bf\Lambda}({\bf\Lambda}')$ denotes the rest-frame polarization of the initial (final) state, and $u(P, \Lambda)$ is the spinor. The gravitational form factors $A(t)$, $B(t)$ and $C(t)$ can be related to leading-twist
GPDs, which are accessible in exclusive processes~\cite{Ji:2004gf}. Meanwhile, the form factor $\bar{C}(t)$, obtainable from the trace of the energy-momentum tensor, is related to the $\sigma_{\pi N}$ term extracted from pion-nucleon scattering amplitudes~\cite{Alarcon:2011zs,Hoferichter:2015dsa}.
On the other hand, the matrix elements of the quark spin operator
\begin{align}
S^{\mu\alpha\beta}_{q}(y)=\frac{1}{2}\,\varepsilon^{\mu\alpha\beta\lambda}\,\overline{\psi}(y)\gamma_{\lambda}\gamma_{5}\psi(y)\,,
\end{align}
with $\psi(y)$ and $\bar\psi(y)$ being the quark fields, are parametrized as
\begin{equation}
\langle P', {\bf\Lambda}'\lvert S^{\mu\alpha\beta}_q(0)\rvert P, {\bf\Lambda}\rangle =\frac{1}{2}\,\varepsilon^{\mu\alpha\beta\lambda}\,\overline{u}(P', {\bf\Lambda}')\left[\gamma_{\lambda}\gamma_5\, G^q_{A}(t)+\frac{\Delta_{\lambda}\gamma_5}{2M}\,G^q_{P}(t)\right]u(P, {\bf\Lambda})\,, \label{spar}
\end{equation}
where $G^q_{A}(t)$ and $G^q_{P}(t)$ are the axial vector and pseudoscalar form factors, respectively, with the convention $\varepsilon^{0123}=+1$. According to Refs.~\cite{Bakker:2004ib,Leader:2013jra}, the axial form factor is connected to the gravitational form factor associated with the antisymmetric part of the quark EMT,
$
D_q(t)=-G^q_A(t).
$
The axial form factor is measurable from quasi-elastic neutrino scattering and pion electroproduction processes~\cite{Bernard:2001rs}. The different angular momentum distributions can thus be defined through the combination of the gravitational form factors and the axial form factor.
\subsection{Distributions in the transverse plane on the light front}
In the light-front (LF) formalism, the impact-parameter distributions of kinetic OAM and spin in the Drell-Yan (DY) frame
are given by~\cite{Lorce:2017wkb}
\begin{align}
\langle L^{z}\rangle({b}_\perp)&=-i\varepsilon^{3jk}\int\frac{\text{d}^{2}\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}\left.\frac{\partial\langle T^{+k}\rangle}{\partial\Delta^{j}_\perp}\right|_\text{DY}\nonumber\\
&=\Lambda^z\int\frac{\text{d}^2\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}\left[L(t)+t\,\frac{\text{d} L(t)}{\text{d} t}\right]_{t=-\vec \Delta^2_\perp},\label{eq:AM}\\
\langle S^{z}\rangle({b}_\perp)&=\frac{1}{2}\,\varepsilon^{3jk}\int\frac{\text{d}^{2}\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}\left.\langle S^{+jk}\rangle\right|_\text{DY} \nonumber\\
&=\frac{\Lambda^z}{2}\int\frac{\text{d}^{2}\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}G_{A}(-\vec \Delta^2_\perp) \; ,\label{eq:spin}
\end{align}
respectively, where
${2\sqrt{P'^+P^+}}\langle T^{\mu\nu}\rangle\equiv \langle P',{\bf\Lambda}\lvert T^{\mu\nu}(0)\rvert P, {\bf\Lambda}\rangle$ and $L(t)$ is the combination of the energy-momentum form factors and the axial form factor,
\begin{align}
L(t)&=\frac{1}{2}\left[A(t)+B(t)+D(t)\right]=\frac{1}{2}\left[A(t)+B(t)-G_A(t)\right]\,.
\end{align}
The variable ${\vec b}_\perp$ is the Fourier conjugate to the transverse momentum transfer ${\vec \Delta}_\perp$. The impact parameter ${b}_\perp$ corresponds to the transverse displacement of the active quark from the center of momentum of the nucleon.
Meanwhile, the Belinfante-improved TAM and the total divergence in impact-parameter space are defined as~\cite{Lorce:2017wkb}
\begin{align}
\langle J^{z}_\text{Bel}\rangle({b}_\perp)&=-i\varepsilon^{3jk}\int\frac{\text{d}^{2}\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i \vec{\Delta}_\perp\cdot\vec{b}_\perp}\left.\frac{\partial\langle T^{+k}_\text{Bel}\rangle}{\partial\Delta^{j}_\perp}\right|_\text{DY}\nonumber\\
&=\Lambda^z\int\frac{\text{d}^2\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}\left[J(t)+t\,\frac{\text{d} J(t)}{\text{d} t}\right]_{t=-\vec \Delta^2_\perp}, \label{eq:AM_Beli}\\
\langle M^{z}\rangle({b}_\perp)&=\frac{1}{2}\,\varepsilon^{3jk}\int\frac{\text{d}^{2}\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}\,\Delta^l_\perp\left.\frac{\partial\langle S^{l+k}\rangle}{\partial\Delta^j_\perp}\right|_\text{DY}\nonumber\\
&=-\frac{\Lambda^z}{2}\int\frac{\text{d}^{2}\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}\left[t\,\frac{\text{d} G_{A}(t)}{\text{d} t}\right]_{t=-\vec \Delta^2_\perp}\,, \label{eq:M2}
\end{align}
respectively, where
\begin{align}
J(t)&=\frac{1}{2}\left[A(t)+B(t)\right]\,.
\end{align}
Using the two-dimensional Fourier transform of the form factors defined as
\begin{equation}
\tilde{F}(b_\perp)=\int\frac{\text{d}^2 \vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec\Delta_\perp\cdot\vec b_\perp}\, F(-\vec \Delta^2_\perp) \; , \label{eq:Fourier}
\end{equation}
Eqs.~(\ref{eq:AM})-(\ref{eq:M2}) can be rewritten as
\begin{align}
\langle L^{z}\rangle({b}_\perp)&=-\frac{\Lambda^z}{2}\,b_{\perp}\,\frac{\text{d} \tilde{L}(b_{\perp})}{\text{d} b_{\perp}} \; ,\label{lip}\\
\langle S^{z}\rangle({b}_\perp)&=\frac{\Lambda^z}{2}\,\tilde{G}_{A}(b_{\perp}) \; ,\label{spin}\\
\langle J^{z}_\text{Bel}\rangle({b}_\perp)&=-\frac{\Lambda^z}{2}\,b_{\perp}\,\frac{\text{d} \tilde{J}(b_{\perp})}{\text{d} b_{\perp}} \; ,\label{JBel}\\
\langle M^{z}\rangle({b}_\perp)&=\frac{\Lambda^z}{2}\left[\tilde{G}_{A}(b_{\perp})+\frac{1}{2}\,b_{\perp}\frac{\text{d}\tilde{G}_A(b_{\perp})}{\text{d} b_{\perp}}\right] \; .\label{divb}
\end{align}
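The passage from Eqs.~(\ref{eq:AM}) and (\ref{eq:AM_Beli}) to Eqs.~(\ref{lip}) and (\ref{JBel}) rests on the following identity for a radially symmetric form factor, obtained by integrating by parts in $\Delta=|\vec{\Delta}_\perp|$ (the boundary term vanishes for form factors decaying at large $-t$):
\begin{align}
\int\frac{\text{d}^{2}\vec{\Delta}_\perp}{(2\pi)^2}\,e^{-i\vec{\Delta}_\perp\cdot\vec{b}_\perp}\left[F(t)+t\,\frac{\text{d} F(t)}{\text{d} t}\right]_{t=-\vec \Delta^2_\perp}&=\frac{1}{2\pi}\int_0^\infty\text{d}\Delta\,\Delta\, J_0(\Delta b_\perp)\left[F+\frac{\Delta}{2}\,\frac{\text{d} F}{\text{d}\Delta}\right]\nonumber\\
&=\frac{b_\perp}{4\pi}\int_0^\infty\text{d}\Delta\,\Delta^{2}\, J_1(\Delta b_\perp)\,F=-\frac{b_\perp}{2}\,\frac{\text{d}\tilde{F}(b_\perp)}{\text{d} b_\perp}\,,
\end{align}
where the last equality uses $\text{d} J_0(z)/\text{d} z=-J_1(z)$ applied to Eq.~(\ref{eq:Fourier}); setting $F=L$ or $F=J$ reproduces Eqs.~(\ref{lip}) and (\ref{JBel}), respectively.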
The total angular momentum density $\langle J^{z}\rangle({b}_\perp)$ is then given by
\begin{align}
\langle J^z\rangle(b_\perp)=\langle L^{z}\rangle({b}_\perp)+ \langle S^{z}\rangle({b}_\perp)=\langle J^{z}_\text{Bel}\rangle({b}_\perp)+ \langle M^{z}\rangle({b}_\perp)\,,
\end{align}
which differs from the ``naive'' density, defined as the two-dimensional Fourier transform of $J(t)$,
\begin{align}
\langle J^z_{\text{naive}}\rangle (b_{\perp})=\Lambda^z\tilde{J}(b_{\perp})\,,
\label{naive}
\end{align}
by a correction term
\begin{align}
\langle J^{z}_{\text{corr}}\rangle({b}_\perp)=-\Lambda^z\left[\tilde{L}(b_{\perp})+\frac{1}{2}\,b_{\perp}\,\frac{\text{d} \tilde{L}(b_{\perp})}{\text{d} b_{\perp}}\right] \; .
\label{corrb}
\end{align}
Besides the densities mentioned above, the Belinfante-improved TAM can also be formulated as the sum of monopole and quadrupole contributions~\cite{Lorce:2017wkb}
\begin{align}
\langle J_{\text{Bel}}^{z(\text{mono})}\rangle(b_{\perp})&=\frac{\Lambda^z}{3}\left[\tilde{J}(b_{\perp})-b_{\perp}\,\frac{\text{d}\tilde{J}(b_{\perp})}{\text{d} b_{\perp}}\right] \;,\label{lbur} \\
\langle J_{\text{Bel}}^{z(\text{quad})}\rangle(b_{\perp})&=-\frac{\Lambda^z}{3}\left[\tilde{J}(b_{\perp})+\frac{1}{2}\,b_{\perp}\,\frac{\text{d} \tilde{J}(b_{\perp})}{\text{d} b_{\perp}}\right] \; .\label{lquadrup}
\end{align}
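As a consistency check, the monopole and quadrupole contributions add up to the full Belinfante-improved density:
\begin{align}
\langle J_{\text{Bel}}^{z(\text{mono})}\rangle(b_{\perp})+\langle J_{\text{Bel}}^{z(\text{quad})}\rangle(b_{\perp})=\frac{\Lambda^z}{3}\left[\tilde{J}-b_{\perp}\,\frac{\text{d}\tilde{J}}{\text{d} b_{\perp}}\right]-\frac{\Lambda^z}{3}\left[\tilde{J}+\frac{b_{\perp}}{2}\,\frac{\text{d}\tilde{J}}{\text{d} b_{\perp}}\right]=-\frac{\Lambda^z}{2}\,b_{\perp}\,\frac{\text{d}\tilde{J}(b_{\perp})}{\text{d} b_{\perp}}\,,
\end{align}
in agreement with Eq.~(\ref{JBel}).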
The monopole contribution, Eq.~(\ref{lbur}), is the projection of the expression used by Polyakov and collaborators~\cite{Polyakov:2002yz,Goeke:2007fp} onto the transverse plane. It has later been studied as the Polyakov--Goeke distribution in Ref.~\cite{Adhikari:2016dir}. The quadrupole contribution, Eq.~(\ref{lquadrup}), is likewise the 2D projection of the 3D quadrupole contribution to the Belinfante-improved TAM~\cite{Lorce:2017wkb}, which arises from the breaking of spherical symmetry down to axial symmetry due to the polarization of the state.
Note that the total divergence (Eq.~\eqref{divb}), the correction (Eq.~\eqref{corrb}), and the quadrupole (Eq.~\eqref{lquadrup}) terms vanish when integrated over $\vec b_\perp$. This clarifies how the different definitions lead to the same integrated total angular momentum, even though they are distinct from each other at the density level~\cite{Lorce:2017wkb,Kumar:2017dbf,Adhikari:2016dir}.
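For instance, the integrand of Eq.~\eqref{divb} is a total derivative, so
\begin{align}
\int \text{d}^{2}\vec{b}_\perp\,\langle M^{z}\rangle({b}_\perp)=\pi\Lambda^z\int_0^\infty \text{d} b_{\perp}\,\frac{\text{d}}{\text{d} b_{\perp}}\left[\frac{b_{\perp}^{2}}{2}\,\tilde{G}_{A}(b_{\perp})\right]=\frac{\pi\Lambda^z}{2}\Big[b_{\perp}^{2}\,\tilde{G}_{A}(b_{\perp})\Big]_0^\infty=0\,,
\end{align}
provided $b_{\perp}^{2}\,\tilde{G}_{A}(b_{\perp})\to 0$ at large $b_{\perp}$; the correction and quadrupole terms vanish in the same way.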
\subsection{Generalized parton distributions}
In general, the GPDs are defined through the off-forward matrix elements of the bilocal operators between hadronic states. The unpolarized and helicity dependent quark GPDs for the nucleon are parameterized as~\cite{Ji:1998pc}
\begin{align}
&\int\frac{\text{d} y^-}{8\pi} e^{ixP^+y^-/2}\braket{P^{\prime},\Lambda^{\prime}|\bar{\psi}(0)\gamma^+\psi(y)|P,\Lambda}|_{y^+=\vec{y}_{\perp}=0}\nonumber\\
&\quad\quad\quad\quad\quad\quad=\frac{1}{2\bar{P}^+} \bar{u}(P^{\prime},\Lambda^{\prime})\left[H^q(x,\zeta,t)\gamma^+
+ E^q(x,\zeta,t) \frac{i\sigma^{+j}\Delta_j}{2M}\right]u(P,\Lambda)\,,\label{HE}\\
&\int\frac{\text{d} y^-}{8\pi} e^{ixP^+y^-/2}\braket{P^{\prime},\Lambda^{\prime}|\bar{\psi}(0)\gamma^+\gamma_5\psi(y)|P,\Lambda}|_{y^+=\vec{y}_{\perp}=0}\nonumber\\
&\quad\quad\quad\quad\quad\quad=\frac{1}{2\bar{P}^+} \bar{u}(P^{\prime},\Lambda^{\prime})\Big[\widetilde{H}^q(x,\zeta,t)\gamma^+\gamma_5
+ \widetilde{E}^q(x,\zeta,t) \frac{\gamma_5\Delta^+}{2M}\Big]u(P,\Lambda)\,.\label{HtEt}
\end{align}
Here $H$ and $E$ are the unpolarized quark GPDs, whereas $\widetilde{H}$ and $\widetilde{E}$ represent the helicity dependent GPDs. The kinematical variables are
$\bar{P}=(P'+P)/2$, $\Delta=P'-P$, $\zeta=-\Delta^+/2\bar{P}^+$
and $t=\Delta^2$. For $\zeta=0$, $t=-\vec{\Delta}_\perp^2$. We consider the light-cone gauge $A^+ = 0$, in which the gauge link between the quark fields in Eqs.~(\ref{HE}) and (\ref{HtEt}) is unity. In this paper, we concentrate only on the GPDs relevant to the angular momentum densities, i.e., $H$, $E$ and $\widetilde{H}$ in the zero-skewness limit. Note that one has to consider nonzero skewness to compute the GPD $\widetilde{E}$, which is not needed for this work.
Substituting the nucleon states within the valence Fock sector
\begin{align}
\ket{P,{\Lambda}} =& \int \prod_{i=1}^{3} \left[\frac{{\rm d}x_i{\rm d}^2 \vec{k}_{i\perp}}{\sqrt{x_i}16\pi^3}\right] 16\pi^3\delta \left(1-\sum_{i=1}^{3} x_i\right) \delta^2 \left(\sum_{i=1}^{3}\vec{k}_{i\perp}\right)\nonumber\\
&\times \Psi^{\Lambda}_{\{x_i,\vec{k}_{i\perp},\lambda_i\}} \ket{\{x_iP^+,\vec{k}_{i\perp}+x_i\vec{P}_{\perp},\lambda_i\}}\,,\label{wavefunction_expansion}
\end{align}
and the quark field operators in Eqs.~(\ref{HE}) and (\ref{HtEt}) leads to the GPDs in terms of the overlap of the LFWFs
\begin{align}
H^q(x,0,t)=&
\sum_{\{\lambda_i\}} \int \left[{\rm d}\mathcal{X} \,{\rm d}\mathcal{P}_\perp\right]\, \Psi^{\uparrow *}_{\{x^{\prime}_i,\vec{k}^{\prime}_{i\perp},\lambda_i\}}\Psi^{\uparrow}_{\{x_i,\vec{k}_{i\perp},\lambda_i\}} \delta(x-x_1)\,, \label{eq:H} \\
E^q(x,0,t)=& -\frac{2M}{(\Delta^1-i\Delta^2)}
\sum_{\{\lambda_i\}} \int \left[{\rm d}\mathcal{X} \,{\rm d}\mathcal{P}_\perp\right]\, \Psi^{\uparrow *}_{\{x^{\prime}_i,\vec{k}^{\prime}_{i\perp},\lambda_i\}}\Psi^{\downarrow}_{\{x_i,\vec{k}_{i\perp},\lambda_i\}} \delta(x-x_1)\,, \label{eq:E}\\
\widetilde{H}^q(x,0,t)=&
\sum_{\{\lambda_i\}} \int \left[{\rm d}\mathcal{X} \,{\rm d}\mathcal{P}_\perp\right]\, \lambda_1\, \Psi^{\uparrow *}_{\{x^{\prime}_i,\vec{k}^{\prime}_{i\perp},\lambda_i\}}\Psi^{\uparrow}_{\{x_i,\vec{k}_{i\perp},\lambda_i\}} \delta(x-x_1)\,, \label{eq:Ht}
\end{align}
where
\begin{align}
\left[{\rm d}\mathcal{X} \,{\rm d}\mathcal{P}_\perp\right]=\prod_{i=1}^3 \left[\frac{{\rm d}x_i{\rm d}^2 \vec{k}_{i\perp}}{16\pi^3}\right] 16 \pi^3 \delta \left(1-\sum_{i=1}^{3} x_i\right) \delta^2 \left(\sum_{i=1}^{3}\vec{k}_{i\perp}\right),
\end{align}
and the light-front momenta are $x^{\prime}_1=x_1$, $\vec{k}^{\prime}_{1\perp}=\vec{k}_{1\perp}+(1-x_1)\vec{\Delta}_{\perp}$ for the struck quark ($i=1$) and $x^{\prime}_i={x_i}$, $\vec{k}^{\prime}_{i\perp}=\vec{k}_{i\perp}-{x_i} \vec{\Delta}_{\perp}$ for the spectators ($i\ne1$), while $\lambda_1=1~(-1)$ denotes struck-quark helicity up (down). The proton helicity is designated by $\Lambda=\uparrow(\downarrow)$, where $\uparrow$ and $\downarrow$ correspond to $+1$ and $-1$, respectively.
Integrating over $x$ the non-local matrix elements that parameterize the GPDs leads to the local matrix elements yielding the form factors. In the Drell-Yan frame, the expressions for the form factors are very similar to those for the GPDs, except that the longitudinal momentum fraction $x$ of the struck quark is not integrated out in the GPD expressions. Thus, the GPDs defined in Eqs.~(\ref{eq:H}), (\ref{eq:E}) and (\ref{eq:Ht}) are also known as momentum-dissected form factors and measure the contribution of the struck quark with momentum fraction $x$ to the corresponding form factors. The electromagnetic form factors are related to the first moments of the unpolarized GPDs for the nucleon by the light-front sum rules
\begin{align}
F_1^q(t)=&\int \text{d} x \, H^q(x,0,t)\,;\quad
F_2^q(t)=\int \text{d} x \, E^q(x,0,t)\,,\label{eq:FF}
\end{align}
where $F_1^q(t)$ and $F_2^q(t)$ are the Dirac (charge) and the Pauli (magnetic) form factors, respectively, whereas the axial form factor is connected to the helicity-dependent GPD as
\begin{align}
G_A^q(t)=&\int \text{d} x \, \widetilde{H}^q(x,0,t)\,.\label{eq:gA}
\end{align}
Meanwhile, the gravitational form factors, which are parameterized through the matrix elements of the EMT, are linked to the second moments of the GPDs
\begin{align}
A^q(t)=\int \text{d} x \, x\,H^q(x,0,t)\,;\quad\quad
B^q(t)=\int \text{d} x \, x\, E^q(x,0,t)\,.\label{eq:GF}
\end{align}
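For orientation, these sum rules can be checked numerically; the following sketch (with a toy forward-limit GPD, not the BLFQ result) evaluates the zeroth and first moments of $H$ at $t=0$ by a composite trapezoid rule:

```python
def moment(gpd, order, a=0.0, b=1.0, n=2000):
    # Composite trapezoid rule for the x^order moment of a GPD slice,
    # i.e. the sum rules for F_1 (order 0) and A (order 1) above.
    h = (b - a) / n
    s = 0.5 * ((a ** order) * gpd(a) + (b ** order) * gpd(b))
    s += sum(((a + i * h) ** order) * gpd(a + i * h) for i in range(1, n))
    return h * s

# Toy forward limit H(x,0,0) = 6 x (1 - x), normalized to unit charge;
# this stands in for the actual model GPD.
H = lambda x: 6.0 * x * (1.0 - x)

F1 = moment(H, 0)  # zeroth moment: Dirac form factor at t = 0
A = moment(H, 1)   # first moment: gravitational form factor A(0)
```

For this normalized toy profile the charge sum rule gives $F_1(0)=1$ and the momentum moment gives $A(0)=1/2$.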
The transverse impact parameter dependent GPDs are obtained via the FT of the GPDs with respect to the momentum transfer along the transverse direction $\vec{\Delta}_\perp$~\cite{Burkardt:2002hr}:
\begin{align}
{F}(x, {b}_\perp)& =
\int \frac{\text{d}^2{\vec \Delta}_\perp}{(2\pi)^2}
e^{-i {\vec \Delta}_\perp \cdot {\vec b}_\perp }
F(x,0,t)\,,\label{eq:Hb}
\end{align}
with $F$ being any of the GPDs $H$, $E$ and $\widetilde{H}$. The GPD $H(x,b_\perp)$
describes the density of unpolarized quarks in the unpolarized proton, while $E(x,b_\perp)$ encodes the deformation of that density in a transversely polarized proton.
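For an azimuthally symmetric GPD, Eq.~(\ref{eq:Hb}) is a two-dimensional Fourier transform that can be evaluated numerically. The sketch below (an illustration assuming a toy Gaussian $t$-dependence, not our model profile) checks the midpoint-rule transform against the known analytic result:

```python
import math

def impact_profile(Ft, b, dmax=8.0, n=200):
    # 2D Fourier transform of an azimuthally symmetric GPD, with
    # b_perp chosen along the x-axis so only the cosine survives:
    # F(x,b) = \int d^2Delta/(2 pi)^2 cos(Delta_x b) F(x,0,-Delta^2)
    h = 2.0 * dmax / n
    total = 0.0
    for i in range(n):
        dx = -dmax + (i + 0.5) * h
        for j in range(n):
            dy = -dmax + (j + 0.5) * h
            total += math.cos(dx * b) * Ft(math.hypot(dx, dy))
    return total * h * h / (2.0 * math.pi) ** 2

# Toy Gaussian t-dependence (an assumption for illustration only).
Lam = 0.8  # GeV
Ft = lambda D: math.exp(-D * D / (4.0 * Lam * Lam))
numeric = impact_profile(Ft, 1.0)  # b = 1 GeV^-1
exact = (Lam ** 2 / math.pi) * math.exp(-(Lam * 1.0) ** 2)  # analytic FT
```

A Gaussian in $t$ transforms into a Gaussian in $b_\perp$, which makes the truncation and discretization errors of the numerical transform easy to control.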
The transverse distortion can be linked to Ji's angular momentum relation. Ji has shown that the TAM of quarks and gluons can be expressed in terms of GPDs~\cite{Ji:PRL}:
\begin{align}
J^z=\frac{1}{2} \int \text{d} x \, x\,[H(x,0,0)+E(x,0,0)]\,.
\end{align}
This sum rule holds in the forward limit of the GPDs and gives the $z$-component of the TAM of the constituents in a nucleon polarized along the $z$-direction. In the impact parameter space, the sum rule again has a simple interpretation
for a transversely polarized nucleon~\cite{Burkardt:2005hp}. The term involving $E(x,0,0)$ arises from the transverse deformation of the distribution in the center-of-momentum frame, whereas the term containing $H(x,0,0)$ is an overall transverse shift when going from the transversely polarized nucleon in instant form to the front form. Meanwhile, the helicity-dependent GPD $\widetilde{H}$ in the impact parameter space reflects the difference in the density of partons with helicity equal or opposite to the nucleon helicity~\cite{Diehl:2005jf,Boffi:2007yc,Pasquini:2007xz,Mondal:2017wbf}. This GPD has a direct connection with the partonic spin contribution to the TAM of the nucleon.
We can now rewrite the distributions defined in Eqs.~(\ref{lip})-(\ref{lquadrup}) using the impact parameter dependent GPDs, where $\tilde{L}(b_\perp)$, $\tilde{J}(b_\perp)$ and $\tilde{G}_A(b_\perp)$ are given by
\begin{align}
\tilde{L}({b}_\perp) &=\frac{1}{2}
\int \text{d} x \, \left\{x\left[{H}(x,b_\perp)+{E}(x,b_\perp)\right]-\widetilde{{H}}(x,b_\perp)\right\}\,,\label{eq:Lb}\\
\tilde{J}({b}_\perp) &=\frac{1}{2}
\int \text{d} x \, x\left[{H}(x,b_\perp)+{E}(x,b_\perp)\right]\,,\label{eq:Jb}\\
\tilde{G}_A({b}_\perp) &=
\int \text{d} x \,\widetilde{{H}}(x,b_\perp)\,.\label{eq:Gb}
\end{align}
\begin{figure}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{GPD_H_u.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{GPD_H_d.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{GPD_E_u.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{GPD_E_d.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{GPD_Htilde_u.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{GPD_Htilde_d.pdf}}
\end{tabular}
\caption{The valence quark GPDs of the proton: (a) $H(x,0,t)$, (c) $E(x,0,t)$, and (e) $\widetilde{H}(x,0,t)$ are for the valence up quark; (b), (d) and (f) are the same as (a), (c), and (e), respectively, but for the valence down quark, as functions of $x$ and $-t$.}
\label{Fig:gpds}
\end{figure}
\begin{figure}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{H_u_b2.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{H_d_b2.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{E_u_b2.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{E_d_b3.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{Htilde_u_b2.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.35]{Htilde_d_b3.pdf}}
\end{tabular}
\caption{The valence quark GPDs of the proton in the transverse impact parameter space: (a) $H(x, b_\perp)$, (c) $E(x, b_\perp)$, and (e) $\widetilde{H}(x, b_\perp)$ are for the valence up quark; (b), (d) and (f) are the same as (a), (c), and (e), respectively, but for the valence down quark, as functions of $x$ and $b_\perp$.}
\label{Fig:impact_gpds}
\end{figure}
\section{Numerical results and discussions} \label{sec:results}
The LFWFs of the valence quarks in the proton have been obtained in the BLFQ framework with the basis truncation $N_{\rm max}=10$ and $K=16.5$, the model parameters $\{m_{\rm q/KE},~m_{\rm q/OGE},~\kappa,~\alpha_s\}=\{0.3~{\rm GeV},~0.2~{\rm GeV},~0.34~{\rm GeV},~1.1\pm 0.1\}$, and the HO scale parameter $b=0.6$ GeV.
The parameters in our model are fixed by fitting the nucleon mass and the flavor
Dirac form factors~\cite{Mondal:2019jdg}. We assign an uncertainty to the coupling that accounts for model selection and the major fitting uncertainties; the uncertainty in $\alpha_s$ decreases with increasing basis cutoff $N_{\rm{max}}$~\cite{Xu:2021wwj}. We employ the resulting wave functions to investigate the GPDs of the proton. We insert the valence wave functions given by Eq.~(\ref{wavefunctions}) into Eqs.~(\ref{eq:H}), (\ref{eq:E})
definition and
generates the corresponding output, this procedure itself will satisfy the same
privacy definition.
These two properties act as \emph{sanity checks} for our privacy definition.
\begin{prop}
$\varepsilon$-sketch privacy above cardinality $N$ satisfies transformation
invariance and convexity.
\end{prop}
\begin{proof}
The proof is similar to the proof of Theorem 5.1 in~\cite{kifer2012rigorous},
proved in Appendix B of the same paper.
\end{proof}
\section{Private cardinality estimators are imprecise\label{sec:main-result}}
Let us return to our privacy problem: someone with access to a sketch wants to
know whether a given individual belongs to the aggregated individuals in the
sketch. Formally, given a target $t$ and a sketch $M_{E}$, the attacker must
guess whether $t\in E$ with high probability. In
Section~\ref{subsec:main-result-deterministic}, we explain how the attacker can
use a simple test to gain significant information if the cardinality estimator
is deterministic. Then, in Section~\ref{subsec:main-result-probabilistic}, we
reformulate the main technical lemma in probabilistic terms, and prove an
equivalent theorem for probabilistic cardinality estimators.
\subsection{Deterministic case\label{subsec:main-result-deterministic}}
Given a target $t$ and a sketch $M_E$, the attacker can perform the following
simple attack to guess whether $t \in E$. She can try to add the target $t$ to
the sketch $M_{E}$, and observe whether the sketch changes. In other words, she
checks whether $\mathit{add}\left(M_{E},t\right)=M_{E}$. If the sketch changes,
this means with certainty that $t \notin E$. Thus, Bayes' law indicates that if
$\mathit{add}\left(M_{E},t\right)=M_{E}$, then the probability of $t\in E$
cannot decrease.
How large is this increase? Intuitively, it depends on how likely it is that
adding an element to a sketch does not change it \emph{if the element has not
previously been added to the sketch}. Formally, it depends on
$\mathbb{P}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]$.
\begin{itemize}
\item If $\mathbb{P}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]$
is close to $0$, for example if the sketch is a list of all elements seen
so far, then observing that $\mathit{add}\left(M_{E},t\right)=M_{E}$ will lead
the attacker to believe with high probability that $t\in E$.
\item If $\mathbb{P}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]$
is close to $1$, it means that adding an element to a sketch often
does not change it. The previous attack does not reveal much information.
But then, it also means that many elements are ignored when they are added
to the sketch, that is, the sketch does not change when
adding the element. Intuitively, the accuracy of an estimator based solely on a
sketch that ignores many elements cannot be very good.
\end{itemize}
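The size of this Bayesian update is easy to quantify. The following sketch (our illustration; the function name is ours) applies Bayes' law to the observation $\mathit{add}\left(M_{E},t\right)=M_{E}$, using the idempotence property $\mathbb{P}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\in E\right]=1$:

```python
def posterior(prior, q):
    # Bayes update after observing add(M_E, t) == M_E.
    # q = P[add(M_E, t) = M_E | t not in E]; by idempotence,
    # P[add(M_E, t) = M_E | t in E] = 1.
    return prior / (prior + (1.0 - prior) * q)

# List-like sketch (q close to 0): the observation is near-conclusive.
near_certain = posterior(0.01, 1e-6)
# Sketch that ignores most new elements (q close to 1): almost no gain.
near_prior = posterior(0.01, 0.99)
```

With a $1\%$ prior, the first case pushes the posterior above $0.999$, while the second barely moves it, matching the two extremes described above.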
We formalize this intuition in the following theorem.
\begin{figure*}
\includegraphics[height=4cm]{weakprivacyaccuracyloss-100}
\hfill
\includegraphics[height=4cm]{weakprivacyaccuracyloss-500}
\caption{\label{fig:minstderr}Minimum standard error for a cardinality estimator
with $\varepsilon$-sketch privacy above cardinality $100$ (left) and $500$ (right).
The blue line is the relative standard error of HyperLogLog with standard parameters.}
\end{figure*}
\begin{thm}\label{thm:main-result}
An unbiased deterministic cardinality estimator that satisfies
$\varepsilon$-sketch privacy above cardinality $N$ is not precise. Namely, its
variance is at least $\frac{1-c^{k}}{c^{k}}\left(n-k\cdot N\right)$, for any
$n\geq N$ and $k\leq\frac{n}{N}$, where $c = 1-e^{-\varepsilon}$.
\end{thm}
Note that if we were using differential privacy, this result would be trivial:
no deterministic algorithm can ever be differentially private. However, this is
not so obvious for our definition of privacy: prior
work~\cite{bhaskar2011noiseless,bassily2013coupled,grining2017towards} shows
that when the attacker is assumed to have some uncertainty about the data, even
deterministic algorithms can satisfy the corresponding definition of privacy.
Figure~\ref{fig:minstderr} shows plots of the lower bound on the standard error of
a cardinality estimator with $\varepsilon$-sketch privacy at two
cardinalities (100 and 500). It shows that the standard error increases
exponentially with the number of elements added to the sketch. This demonstrates that
even if we require the privacy property for a large value of $N$ (500) and a
large $\varepsilon$ (which is generally less than $1$), the standard error of a
cardinality estimator will become unreasonably large after 20,000 elements.
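The bound of Theorem~\ref{thm:main-result} can be evaluated directly; the sketch below (our own helper, maximizing the bound over the admissible $k$) reproduces this blow-up:

```python
import math

def variance_lower_bound(n, N, eps):
    # Theorem bound: Var >= (1 - c^k)/c^k * (n - k*N), c = 1 - e^{-eps},
    # valid for every integer 1 <= k <= n/N, so take the best k.
    c = 1.0 - math.exp(-eps)
    return max((1.0 - c ** k) / c ** k * (n - k * N)
               for k in range(1, n // N + 1))

def relative_stderr(n, N, eps):
    # lower bound on the standard error divided by the true cardinality
    return math.sqrt(variance_lower_bound(n, N, eps)) / n

# With eps = 1 and N = 500, the relative standard error already
# exceeds 100% at n = 20000.
```

The exponential growth in $k$ dominates the linear factor $n-kN$, which is why the plotted lower bounds climb so steeply.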
\vspace{\topsep}
\begin{proof}[Proof of Theorem~\ref{thm:main-result}]
The proof consists of three steps, following the intuition given above.
\begin{enumerate}
\item We show that a sketch $M_{E}$, computed from a random set
$E$ with an $\varepsilon$-sketch private estimator above
cardinality $N$, will ignore many elements after $N$ (Lemma~\ref{lem:ignores-elements}).
\item We prove that if a cardinality estimator ignores a certain ratio of
elements after adding $n=N$ elements, then it will ignore an even larger ratio
of elements as $n$ increases (Lemma~\ref{lem:ignores-more-and-more-elements}).
\item We conclude by proving that an unbiased cardinality estimator that
ignores many elements must have a large variance (Lemma~\ref{lem:bad-variance}).
\end{enumerate}
The theorem follows directly from these lemmas.
\end{proof}
\begin{lem}\label{lem:ignores-elements}
Let $t\in\mathcal{U}$. A deterministic cardinality estimator with
$\varepsilon$-sketch privacy above cardinality $N$ satisfies
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]\geq e^{-\varepsilon}$
for $n\geq N$.
\end{lem}
\begin{proof}
We first prove that such an estimator satisfies
\begin{multline*}
\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\in E\right]{}
\\
\leq e^{\varepsilon}\cdot\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right].
\end{multline*}
We decompose the left-hand side of the inequality over all possible values of
$M_{E}$ such that $\mathit{add}\left(M_{E},t\right)=M_{E}$. If we call
this set $\mathcal{I}_{t}=\left\{ M \mid
\mathit{add}\left(M,t\right)=M\right\}$, we have:
\begin{align*}
\mathbb{P}_{n}&\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\in E\right]
\\
& {} = \sum\nolimits_{M\in\mathcal{I}_{t}}\mathbb{P}_{n}\left[M_E=M \mid t\in E\right]
\\
& {} \leq e^{\varepsilon}\cdot\sum\nolimits_{M\in\mathcal{I}_{t}}\mathbb{P}_{n}\left[M_E=M \mid t\notin E\right]
\\
& {} \leq e^{\varepsilon}\cdot\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right],
\end{align*}
where the first inequality is obtained directly from the definition of
$\varepsilon$-sketch privacy.
Now, Lemma~\ref{ce-property} gives $\mathbb{P}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\in E\right]=1$,
and finally $\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]\geq e^{-\varepsilon}$.
\end{proof}
\begin{lem}\label{lem:ignores-more-and-more-elements}
Let $t\in\mathcal{U}$. Suppose a deterministic cardinality estimator satisfies
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]\geq p$
for any $n\geq N$. Then for any integer $k\geq1$, it also satisfies
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]\geq1-{\left(1-p\right)}^{k}$,
for $n\geq k\cdot N$.
\end{lem}
\begin{proof}
First, note that if $F\subseteq E$, and
$\mathit{add}\left(M_{F},t\right)=M_{F}$, then
$\mathit{add}\left(M_{E},t\right)=M_{E}$. This is a direct consequence of
Lemma~\ref{lem:merge-properties}:
$M_{E}=\mathit{merge}\left(M_{E\backslash F},M_{F}\right)$, so:
\begin{align*}
\mathit{add}\left(M_{E},t\right) & {} = \mathit{merge}\left(M_{E\backslash F},\mathit{add}\left(M_{F},t\right)\right)
\\
& {} = \mathit{merge}\left(M_{E\backslash F},M_{F}\right)
\\
& {} = M_{E}
\end{align*}
We show next that when
$n\geq k\cdot N$, generating a set $E\in\mathcal{P}_{n}\left(\mathcal{U}\right)$
uniformly randomly can be seen as generating $k$ \emph{independent} sets in
$\mathcal{P}_{N}\left(\mathcal{U}\right)$, then merging them. Indeed, generating
such a set can be done as follows:
\begin{enumerate}
\item For $i\in\left\{ 1,\ldots,k\right\} $, generate a set
$E_{i}\subseteq\mathcal{P}_{N}\left(\mathcal{U}\right)$ uniformly randomly. Let
$E_{\cup}=\bigcup_{i}E_{i}$.
\item Count the number of elements appearing in multiple $E_{i}$:
$d=\#\left\{ x\in E_{i}|\exists j<i,x\in E_{j}\right\} $. Generate a set
$E^{\prime}\in\mathcal{P}_{n-d}\left(\mathcal{U}\backslash E_{\cup}\right)$
uniformly randomly.
\end{enumerate}
$E$ is then defined by $E=E_{\cup}\cup E^{\prime}$. Step $1$ ensures
that we used $k$ \emph{independent} sets of cardinality $N$ to generate
$E$, and step $2$ ensures that $E$ has exactly $n$ elements.
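This two-step generation procedure can be sketched as follows (a toy illustration with hypothetical names, not code from the paper):

```python
import random

def generate_as_union(universe, n, N, k, rng):
    # Step 1: k independent uniform N-subsets of the universe.
    parts = [rng.sample(universe, N) for _ in range(k)]
    union = set().union(*parts)
    # Step 2: top up with uniform elements outside the union so that
    # |E| = n exactly, compensating for collisions between the subsets.
    rest = [x for x in universe if x not in union]
    extra = rng.sample(rest, n - len(union))
    return parts, union | set(extra)

rng = random.Random(0)
parts, E = generate_as_union(list(range(100_000)), 600, 100, 5, rng)
```

Each `parts[i]` plays the role of one of the independent sets $E_i$ of cardinality $N$, and the returned set $E$ has exactly $n$ elements.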
Intuitively, each time we generate a set $E_i$ of cardinality $N$ uniformly at
random in $\mathcal{U}$, we have \emph{one chance} that $t$ will be ignored by
$E_i$ (and thus by $E$). So $t$ can be ignored by $M_{E}$ with a certain
probability because it was ignored by $M_{E_1}$. Similarly, it can also be
ignored because of $M_{E_2}$, etc. Since the choice of $E_i$ is independent of
the choice of elements in $\bigcup_{j\neq i}E_j$, we can rewrite:
\begin{align*}
\mathbb{P}_{n}&\left[\mathit{add}\left(M_{E},t\right)\neq M_{E} \mid t\notin E\right]\\
& {} \leq\prod_{i=1}^{k}\mathbb{P}_{n}\left[\mathit{add}\left(M_{E_{i}^{0}},t\right)\neq M_{E_{i}^{0}} \mid t\notin E\right]\\
& {} \leq\prod_{i=1}^{k}\left(1-\mathbb{P}_{n}\left[\mathit{add}\left(M_{E_{i}^{0}},t\right)=M_{E_{i}^{0}} \mid t\notin E_{i}\right]\right)\\
& {} \leq{\left(1-p\right)}^{k}
\end{align*}
using the hypothesis of the lemma. Thus:
\begin{equation*}
\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]\geq1-{\left(1-p\right)}^{k}.
\end{equation*}
\end{proof}
\begin{lem}\label{lem:bad-variance}
Suppose a deterministic cardinality estimator satisfies
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]\geq1-p$
for any $n\geq N$ and all $t$. Then its variance for $n\geq N$ is at least
$\frac{1-p}{p}\left(n-N\right)$.
\end{lem}
\begin{proof}
The proof's intuition is as follows. The hypothesis of the lemma requires
that the cardinality estimator, on average, \emph{ignores} a proportion $1-p$ of
new elements added to a sketch (once $N$ elements have been added): the sketch
is not changed when a new element is added. The best thing that the cardinality
estimator can do, then, is to store all elements that it does not ignore, count
the number of unique elements among these, and multiply this number by $1/p$ to
correct for the elements ignored. It is well-known that estimating the size $k$
of a set based on the size of a uniform sample of sampling ratio $p$ has a
variance of $\frac{1-p}{p}k$. Hence, our cardinality estimator has a variance of
at least $\frac{1-p}{p}\left(n-N\right)$.
Formalizing this idea requires some additional technical steps. The full proof
is given in Appendix~\ref{lem:bad-variance-proof}.
\end{proof}
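The sampling-variance fact invoked here is easy to verify empirically; the following Monte Carlo sketch (parameter values are arbitrary, chosen for illustration) estimates a set of size $k$ from a uniform sample with sampling ratio $p$ and rescales by $1/p$:

```python
import random

def sampled_size_estimate(k, p, rng):
    # Keep each of k distinct elements independently with probability p,
    # then rescale the observed sample size by 1/p (unbiased estimator).
    kept = sum(1 for _ in range(k) if rng.random() < p)
    return kept / p

rng = random.Random(42)
k, p, trials = 1000, 0.2, 20000
est = [sampled_size_estimate(k, p, rng) for _ in range(trials)]
mean = sum(est) / trials
var = sum((e - mean) ** 2 for e in est) / trials
# theory: E[estimate] = k, Var[estimate] = (1 - p)/p * k = 4000 here
```

The empirical mean sits at $k$ and the empirical variance at $\frac{1-p}{p}k$, the quantity used in the lower bound.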
All existing cardinality estimators satisfy our axioms and their standard error
remains low even for large values of $n$. Theorem~\ref{thm:main-result} shows,
for all of them, that there are some users whose privacy loss is
significant. In Section~\ref{subsec:averaging-epsilon}, we quantify this
precisely for HyperLogLog.
\subsection{Probabilistic case\label{subsec:main-result-probabilistic}}
Algorithms that add noise to their output, or more generally, are allowed to use
a source of randomness, are often used in privacy contexts. As such, even though
all cardinality estimators used in practical applications are deterministic, it
is reasonable to hope that a probabilistic cardinality estimator could satisfy
our very weak privacy definition. Unfortunately, this is not the case.
In the deterministic case, we showed that for any element $t$, the probability
that $t$ has an influence on a random sketch $M$ decreases exponentially with
the sketch size. Or, equivalently, the distribution of sketches of size $kn$
that do not contain $t$ is ``almost the same'' (up to a density of probability
${\left(1-e^{-\varepsilon}\right)}^{k}$) as the distribution of sketches of the
same size, but containing $t$.
The following lemma establishes the same result in the probabilistic setting.
Instead of reasoning about the probability that an element $t$ is ``ignored'' by
a sketch $M$, we reason about the probability that $t$ has a \emph{meaningful}
influence on this sketch. We show that this probability decreases exponentially,
even if $\mathbb{P}\left[M\neq\mathit{add}\left(M,t\right)\right]$ is very high.
First, we prove a technical lemma on the \emph{structure} that the
$\mathit{merge}$ operation imposes on the space of sketch distributions. Then,
we find an upper bound on the ``meaningful influence'' of an element $t$, when
added to a random sketch of cardinality $n$. We then use this upper bound,
characterized using the statistical distance, to show that the estimator
variance is as imprecise as for the deterministic case.
\begin{defn}
Let $\mathbb{D}$ be the real vector space spanned by the family $\left\{ \mathcal{D}_{E}|E\subseteq\mathcal{U}\right\} $
(seen as vectors of $\mathbb{R}^{\mathcal{M}}$). For any probability
distributions $\mathcal{A},\mathcal{B}\in\mathbb{D}$, we denote $\mathcal{A}\cdot\mathcal{B}=\mathit{merge}\left(\mathcal{A},\mathcal{B}\right)$.
We show in Lemma~\ref{lem:merge-algebra} that this notation makes sense: on
$\mathbb{D}$, we can do computations as if $\mathit{merge}$ was a multiplicative
operation.
\end{defn}
\begin{lem}\label{lem:merge-algebra}
The $\mathit{merge}$ operation defines a commutative and associative
algebra on $\mathbb{D}$.
\begin{proof}
By the properties required from probabilistic cardinality estimators in
Definition~\ref{defn:probabilistic}, the $\mathit{merge}$ operation is commutative
and associative on the family
$\left\{\mathcal{D}_{E}|E\subseteq\mathcal{U}\right\}$. By linearity of the
$\mathit{merge}$ operation, these properties are preserved for any linear
combination of vectors $\mathcal{D}_{E}$.
\end{proof}
\end{lem}
\begin{lem}\label{lem:statistical-distance}
Suppose a cardinality estimator satisfies $\varepsilon$-sketch privacy above
cardinality $N$, and let $t\in\mathcal{U}$. Let $\mathcal{D}_{\text{out},n}$ be
the distribution of sketches obtained by adding $n$ uniformly random elements of
$\mathcal{U}\backslash\left\{ t\right\}$ into $M_{\varnothing}$ (or,
equivalently,
$\mathcal{D}_{\text{out},n}\left(M\right)=\mathbb{P}_{n}\left[M_E=M|t\notin E\right]$).
Then:
\[
\upsilon\left(\mathcal{D}_{\text{out},kn},\mathit{add}\left(\mathcal{D}_{\text{out},kn},t\right)\right)\leq{\left(1-e^{-\varepsilon}\right)}^{k}
\]
where $\upsilon$ is the statistical distance between probability distributions.
\end{lem}
\begin{proof}
Let $\mathcal{D}_{\text{in},n}$ be the distribution of sketches obtained
by adding $t$, then $n-1$ uniformly random elements of $\mathcal{U}$
into $M$ (or, equivalently, $\mathcal{D}_{\text{in},n}\left(M\right)=\mathbb{P}_{n}\left[M_E=M|t\in E\right]$).
Then the definition of $\varepsilon$-sketch privacy gives that for
every sketch $M$, $\mathcal{D}_{\text{out},n}\left(M\right)\geq e^{-\varepsilon}\mathcal{D}_{\text{in},n}\left(M\right)$.
So we can express $\mathcal{D}_{\text{out},n}$ as the \emph{sum}
of two distributions:
\[
\mathcal{D}_{\text{out},n}=e^{-\varepsilon}\mathcal{D}_{\text{in},n}+\left(1-e^{-\varepsilon}\right)\mathcal{R}
\]
for a certain distribution $\mathcal{R}$.
First, we show that
$\mathcal{D}_{\text{out},kn}={\left(\mathcal{D}_{\text{out},n}\right)}^{k}\cdot\mathcal{C}$
for a certain distribution $\mathcal{C}$. Indeed, to generate a sketch
of cardinality $kn$ that does not contain $t$ uniformly randomly, one can use
the following process.
\begin{enumerate}
\item Generate $k$ random sketches of cardinality $n$ which do not contain $t$,
and merge them.
\item For all $E\subseteq\mathcal{U}$, denote by $p_{E}$ the probability that
the $k$ sketches were generated with the elements in $E$. There might be
``collisions'' between the $k$ sketches: if several sketches were generated
using the same element, $\left|E\right|<kn$. When this happens, we need to
``correct'' the distribution, and add additional elements. Enumerating all the
options, we denote $\mathcal{C}=\sum p_{E}\mathcal{D}_{E,nk}^{\text{c}}$, where
$\mathcal{D}_{E,nk}^{\text{c}}$ is obtained by adding $nk-\left|E\right|$
uniformly random elements in $\mathcal{U}\backslash E$ to
$\mathcal{M}_{\varnothing}$. Thus,
$\mathcal{D}_{\text{out},kn}={\left(\mathcal{D}_{\text{out},n}\right)}^{k}\cdot\mathcal{C}$.
\end{enumerate}
All these distributions are in $\mathbb{D}$:
$\mathcal{D}_{\text{out},n}=\text{avg}_{E\in\mathcal{P}_{n}\left(\mathcal{U}\right),t\notin E}\mathcal{D}_{E}$,
$\mathcal{D}_{\text{in},n}=\text{avg}_{E\in\mathcal{P}_{n}\left(\mathcal{U}\right),t\in E}\mathcal{D}_{E}$,
$\mathcal{R}={\left(1-e^{-\varepsilon}\right)}^{-1}\left(\mathcal{D}_{\text{out},n}-e^{-\varepsilon}\mathcal{D}_{\text{in},n}\right)$,
etc. Thus:
\begin{align*}
\mathcal{D}_{\text{out},kn} & ={\left(\mathcal{D}_{\text{out},n}\right)}^{k}\cdot\mathcal{C}\\
& ={\left(e^{-\varepsilon}\mathcal{D}_{\text{in},n}+\left(1-e^{-\varepsilon}\right)\mathcal{R}\right)}^{k}\cdot\mathcal{C}\\
& =\sum_{i=0}^{k}\binom{k}{i}e^{-i\cdot\varepsilon}{\left(1-e^{-\varepsilon}\right)}^{k-i}\mathcal{D}_{\text{in},n}^{i}\cdot\mathcal{R}^{k-i}\cdot\mathcal{C}.
\end{align*}
Denoting
$\mathcal{A}=\sum_{i=1}^{k}\binom{k}{i}e^{-i\cdot\varepsilon}{\left(1-e^{-\varepsilon}\right)}^{k-i}\mathcal{D}_{\text{in},n}^{i-1}\cdot\mathcal{R}^{k-i}\cdot\mathcal{C}$
and $\mathcal{B}=\mathcal{R}^{k}\cdot\mathcal{C}$, this
gives us:
\[
\mathcal{D}_{\text{out},kn}=\mathcal{A}\cdot\mathcal{D}_{\text{in},n}+{\left(1-e^{-\varepsilon}\right)}^{k}\mathcal{B}.
\]
Finally, we can compute $\mathit{add}\left(\mathcal{D}_{\text{out},kn},t\right)$:
\begin{align*}
\mathit{add}\left(\mathcal{D}_{\text{out},kn},t\right) &
=\mathcal{A}\cdot\mathcal{D}_{\text{in},n}\cdot\mathcal{D}_{\left\{ t\right\} }+{\left(1-e^{-\varepsilon}\right)}^{k}\mathcal{B}\cdot\mathcal{D}_{\left\{ t\right\} }\\
& =\mathcal{A}\cdot\mathcal{D}_{\text{in},n}+{\left(1-e^{-\varepsilon}\right)}^{k}\mathcal{B}\cdot\mathcal{D}_{\left\{ t\right\} }.
\end{align*}
Note that since
$\mathcal{D}_{\text{in},n}=\text{avg}_{E\in\mathcal{P}_{n}\left(\mathcal{U}\right),t\in E}\mathcal{D}_{E}$,
we have
$\mathcal{D}_{\text{in},n}\cdot\mathcal{D}_{\left\{t\right\}}=\mathcal{D}_{\text{in},n}$
by idempotence, and:
\begin{align}
\nonumber
\upsilon&\left(\mathcal{D}_{\text{out},kn},\mathit{add}\left(\mathcal{D}_{\text{out},kn},t\right)\right)
\\
\nonumber
& {} = \frac{1}{2}\left\Vert \mathcal{D}_{\text{out},kn}-\mathit{add}\left(\mathcal{D}_{\text{out},kn},t\right)\right\Vert _{1}
\\
\nonumber
& {} = \frac{1}{2}\left\Vert {\left(1-e^{-\varepsilon}\right)}^{k}\mathcal{B}-{\left(1-e^{-\varepsilon}\right)}^{k}\mathcal{B}\cdot\mathcal{D}_{\left\{ t\right\} }\right\Vert _{1}
\\
\nonumber
& {} \leq \frac{{\left(1-e^{-\varepsilon}\right)}^{k}}{2}\left(\left\Vert \mathcal{B}\right\Vert _{1}+\left\Vert \mathcal{B}\cdot\mathcal{D}_{\left\{ t\right\} }\right\Vert _{1}\right)
\\
\nonumber
& {} \leq {\left(1-e^{-\varepsilon}\right)}^{k}.
\end{align}
\end{proof}
Lemma~\ref{lem:statistical-distance} is the probabilistic equivalent of
Lemmas~\ref{lem:ignores-elements} and~\ref{lem:ignores-more-and-more-elements}.
Now, we state the equivalent of Lemma~\ref{lem:bad-variance}, and explain why
its intuition still holds in the probabilistic case.
\begin{lem}\label{lem:bad-variance-probabilistic}
Suppose that a cardinality estimator satisfies for any $n\geq N$ and all $t$,
$\upsilon\left(\mathcal{D}_{\text{out},n},\mathit{add}\left(\mathcal{D}_{\text{out},n},t\right)\right)\leq p$.
Then its variance for $n\geq N$ is at least
$\frac{1-p}{p}\left(n-N\right)$.
\end{lem}
\begin{proof}
The condition
``$\upsilon\left(\mathcal{D}_{\text{out},n},\mathit{add}\left(\mathcal{D}_{\text{out},n},t\right)\right)\leq p$''
is equivalent to the condition of Lemma~\ref{lem:bad-variance}: with
probability $\left(1-p\right)$, the cardinality estimator ``ignores'' a new
element $t$ when it is added to a sketch. Just like in Lemma~\ref{lem:bad-variance}'s
proof, we can convert this constraint into estimating the size of a set based on
a uniform sample of it. The best known estimator for this problem is deterministic, so
allowing the cardinality estimator to be probabilistic does not help improve
the optimal variance.
The same result as in Lemma~\ref{lem:bad-variance} follows.
\end{proof}
Lemmas~\ref{lem:statistical-distance} and~\ref{lem:bad-variance-probabilistic}
together immediately lead to the equivalent of Theorem~\ref{thm:main-result} in
the probabilistic case.
\begin{thm}\label{thm:main-result-probabilistic}
An unbiased probabilistic cardinality estimator that satisfies
$\varepsilon$-sketch privacy above cardinality $N$ is not precise. Namely, its
variance is at least $\frac{1-c^{k}}{c^{k}}\left(n-k\cdot N\right)$, for any
$n\geq N$ and $k\leq\frac{n}{N}$, where $c = 1-e^{-\varepsilon}$.
\end{thm}
Somewhat surprisingly, allowing the algorithm to add noise to the data seems to
be pointless from a privacy perspective. Indeed, given the same privacy
guarantee, the lower bound on accuracy is the same for deterministic and
probabilistic cardinality estimators. This suggests that the constraints of
these algorithms (idempotence and commutativity) require them to somehow keep a
trace of who was added to the sketch (at least for some users), which is
fundamentally incompatible with even weak notions of privacy.
\section{Weakening the privacy definition\label{sec:weaker-versions}}
Our main result is negative: no cardinality estimator satisfying our privacy
definition can maintain a good accuracy. Thus, it is natural to wonder whether
our privacy definition is too strict, and if the result still holds for weaker
variants.
In this section, we consider two weaker variants of our privacy definition: one
allows a small probability of privacy loss, while the other averages the privacy
loss across all possible outputs. We show that these natural relaxations do not
help as close variants of our negative result still hold.
\subsection{Allowing a small probability of privacy loss}
\noindent
As Lemma~\ref{lem:equivalent-definition} shows, $\varepsilon$-sketch
differential privacy provides a bound on how much information the attacker can
gain in the worst case. A natural relaxation is to accept a small
\emph{probability of failure}: requiring a bound on the information gain in
\emph{most cases}, and accepting a potentially unbounded information gain with low
probability.
We introduce a new parameter, called $\delta$, similar to the use of $\delta$ in
the definition of $\left(\varepsilon,\delta\right)$-differential privacy:
$\mathcal{A}$ is $\left(\varepsilon,\delta\right)$-differentially private if and
only if for any databases $D_1$ and $D_2$ that only differ by one element and
any set $S$ of possible outputs,
$\mathbb{P}\left[\mathcal{A}\left(D_{1}\right)\in S\right]
\leq e^{\varepsilon}\cdot\mathbb{P}\left[\mathcal{A}\left(D_{2}\right)\in S\right]+\delta$.
\begin{defn}\label{def:epsilon-delta-sketch-privacy}
A cardinality estimator satisfies \emph{$\left(\varepsilon,\delta\right)$-sketch
privacy above cardinality $N$} if for every $\mathcal{S\subseteq M}$, $n\geq N$,
and $t\in\mathcal{U}$,
\[
\mathbb{P}_{n}\left[M_{E}\in\mathcal{S} \mid t\in E\right]
\leq e^{\varepsilon}\cdot\mathbb{P}_{n}\left[M_{E}\in\mathcal{S} \mid t\notin E\right]+\delta.
\]
\end{defn}
Unfortunately, our negative result still holds for this variant of the
definition. Indeed, we show that a close variant of
Lemma~\ref{lem:ignores-elements} holds, and the rest follows directly.
\begin{lem}\label{lem:ignores-elements-with-delta}
Let $t\in\mathcal{U}$. A cardinality estimator that satisfies
$\left(\varepsilon,\delta\right)$-sketch privacy above cardinality
$N$ satisfies
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]
\geq\left(\frac{1}{2}-\delta\right)\cdot e^{-\varepsilon}$
for $n\geq N$.
\end{lem}
The proof of Lemma~\ref{lem:ignores-elements-with-delta} is given in
Appendix~\ref{lem:ignores-elements-with-delta-proof}. We can then deduce a
theorem similar to our negative result for our weaker privacy definition.
\begin{thm}\label{thm:negative-result-with-delta}
An unbiased cardinality estimator that satisfies $\left(\varepsilon,\delta\right)$-sketch
privacy above cardinality $N$ has a variance at least
$\frac{1-c^{k}}{c^{k}}\left(n-k\cdot N\right)$ for any $n\geq N$ and
$k\leq\frac{n}{N}$, where $c = 1-\left(\frac{1}{2}-\delta\right)\cdot e^{-\varepsilon}$.
It is therefore not precise if $\delta < \frac{1}{2}$.
\end{thm}
\begin{proof}
This follows from
Lemmas~\ref{lem:ignores-elements-with-delta},~\ref{lem:ignores-more-and-more-elements}
and~\ref{lem:bad-variance}.
\end{proof}
\subsection{Averaging the privacy loss}
\noindent
Instead of requiring that the attacker's information gain is bounded by
$\varepsilon$ for every possible output, we could bound the \emph{average}
information gain. This is equivalent to accepting a larger privacy loss in some
cases, as long as other cases have a lower privacy loss.
This intuition is captured by the use of Kullback-Leibler divergence, which is
often used in similar
contexts~\cite{rebollo2010optimized,rebollo2010t,diaz2002towards,dwork2010boosting}.
In our case, we adapt it to maintain the asymmetry of our original privacy
definition. First, we give a formal definition of the \emph{privacy loss} of a user
$t$ given output $M$.
\begin{defn}\label{def:privacy-loss}
Given a cardinality estimator, the \emph{positive privacy loss of $t$ given
output $M$ at cardinality $n$} is defined as
\[
\varepsilon_{n,t,M}=\max\left(\log\left(\frac{\mathbb{P}_{n}\left[M_E=M \mid t\in
E\right]}{\mathbb{P}_{n}\left[M_E=M \mid t\notin E\right]}\right),0\right).
\]
\end{defn}
This privacy loss is never negative: this is equivalent to discarding the case
where the attacker gains \emph{negative} information. Now, we bound this average
over all possible values of $M_E$, given $t\in E$.
\begin{defn}
A cardinality estimator satisfies \emph{$\varepsilon$-sketch average
privacy above cardinality $N$} if for every $n\geq N$ and $t\in\mathcal{U}$,
we have
\[
\sum_{M}\mathbb{P}_{n}\left[M_E=M \mid t\in E\right]\cdot\varepsilon_{n,t,M}\leq\varepsilon.
\]
\end{defn}
It is easy to check that $\varepsilon$-sketch average privacy above cardinality
$N$ is strictly weaker than $\varepsilon$-sketch privacy above cardinality $N$.
Unfortunately, this definition is also stronger than
$\left(\varepsilon_\delta,\delta\right)$-sketch privacy above cardinality $N$
for certain values of $\varepsilon$ and $\delta$, and as such,
Lemma~\ref{lem:ignores-elements-with-delta} also applies. We prove this in the
following lemma.
\begin{lem}\label{lem:average-implies-delta}
If a cardinality estimator satisfies $\varepsilon$-sketch average privacy above
cardinality $N$, then it also satisfies
$\left(\frac{\varepsilon}{\delta},\delta\right)$-sketch privacy above
cardinality $N$ for any $\delta>0$.
\end{lem}
The proof is given in Appendix~\ref{lem:average-implies-delta-proof}. This lemma
leads to a similar version of the negative result.
\begin{thm}
An unbiased cardinality estimator that satisfies $\varepsilon$-sketch
average privacy above cardinality $N$ has a variance at least
$\frac{1-c^{k}}{c^{k}}\left(n-k\cdot N\right)$ for any $n\geq N$ and
$k\leq\frac{n}{N}$, where $c = 1-\frac{e^{-4\varepsilon}}{4}$.
It is thus not precise.
\end{thm}
\begin{proof}
This follows directly from Lemma~\ref{lem:average-implies-delta} with
$\delta=\frac{1}{4}$, and Theorem~\ref{thm:negative-result-with-delta}.
\end{proof}
Recall that all existing cardinality estimators satisfy our axioms and have a
bounded accuracy. Thus, an immediate corollary is that for all cardinality
estimators used in practice, there are some users for which the \emph{average}
privacy loss is very large.
\begin{rem}\label{rem:renyi-privacy}
This idea of \emph{averaging $\varepsilon$} is similar to the idea behind Rényi
differential privacy~\cite{mironov2017renyi}. The parameter $\alpha$ of Rényi
differential privacy determines the averaging method used (geometric mean,
arithmetic mean, quadratic mean, etc.). Using KL-divergence corresponds to
$\alpha=1$, while $\alpha=2$ averages all possible values of $e^\varepsilon$.
Increasing $\alpha$ strengthens the privacy
definition~\cite[Prop.~9]{mironov2017renyi}, so our negative result still
holds.
\end{rem}
\section{Privacy loss of individual users\label{sec:individual-users}}
So far, we only considered definitions of privacy that give the same guarantees
for all users. What if we allow certain users to have less privacy than others,
or if we were to average the privacy loss \emph{across users} instead of
averaging over all possible outcomes for each user?
Such definitions would generally not be sufficiently convincing to be used in
practice: one typically wants to protect \emph{all users}, not just a majority
of them. In this section, we show that even if we relax this requirement,
cardinality estimators would in practice leak a significant amount of
information.
\subsection{Allowing unbounded privacy loss for some users}
\noindent
What happens if we allow some users to have unbounded privacy loss? We could
achieve this by requiring the existence of a subset of users
$\mathcal{T}\subseteq\mathcal{U}$ of density $1-\delta$, such that every user
in $\mathcal{T}$ is protected by $\varepsilon$-sketch privacy above cardinality
$N$. In this case, a ratio $\delta$ of possible targets are not protected.
This approach only makes sense if the attacker cannot choose the target $t$. For
our attacker model, this might be realistic: suppose that the attacker wants to
target just one particular person. Since all user identifiers are hashed before
being passed to the cardinality estimator, this person will be associated to a
hash value that the attacker can neither predict nor influence. Thus, although
the attacker picks $t$, the true target of the attack is $h(t)$, which the
attacker cannot choose.
Unfortunately, this drastic increase in privacy risk for some users does not
lead to a large increase in accuracy. Indeed, the best possible use of this
ratio $\delta$ of users from an accuracy perspective would simply be to count
exactly the users in a sample of sampling ratio $\delta$.
Estimating the total cardinality based on this sample, similarly to the optimal
estimator in the proof of Lemma~\ref{lem:bad-variance}, leads to a variance of
$\frac{1-\delta}{\delta}\cdot\left(n-N\right)$. If $\delta$ is very small (say,
$\delta\simeq10^{-4}$), this variance is too large for counting small values of
$n$ (say, $n\simeq1000$ and $N\simeq100$). This is not surprising: if $99.99\%$
of the values are ignored by the cardinality estimator, we cannot expect it to
count values of $n$ on the order of thousands. But even this value of $\delta$
is larger than what is often used with
$\left(\varepsilon,\delta\right)$-differential privacy, where typically,
$\delta=o(1/n)$.
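To see why the text calls this variance too large, one can plug the example numbers into $\frac{1-\delta}{\delta}\cdot\left(n-N\right)$ directly (a plain evaluation of the formula above):

```python
def sampling_variance(delta, n, N):
    """Variance (1 - delta)/delta * (n - N) of the delta-sampling estimator."""
    return (1 - delta) / delta * (n - N)

var = sampling_variance(1e-4, 1000, 100)   # delta = 10^-4, n = 1000, N = 100
std = var ** 0.5                           # standard deviation ~ 3000, i.e. 3n
```

The standard deviation is about three times the cardinality being estimated, so the estimate is useless at this scale.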
But in our running example, sketches must yield a reasonable accuracy both at
small and large cardinalities, if many sketches are aggregated. This implicitly
assumes that the service operates at a large scale, say with at least $10^{7}$
users. With $\delta=10^{-4}$, this means that thousands of users are not covered
by the privacy property. This is unacceptable for most applications.
\subsection{Averaging the privacy loss across users\label{subsec:averaging-epsilon}}
\noindent
Instead of requiring the same $\varepsilon$ for every user, we could require
that the \emph{average} information gain by the attacker is bounded by
$\varepsilon$. In this section, we take the example of HyperLogLog to show that
accuracy is not incompatible with this notion of average privacy, but that
cardinality estimators used in practice do not preserve privacy even if we
average across all users.
First, we define this notion of average information gain across users.
\begin{defn}
Recall the definition of the positive privacy loss $\varepsilon_{n,t,M}$ of $t$ given output $M$ at
cardinality $n$ from Definition~\ref{def:privacy-loss}.
The maximum privacy loss of $t$ at cardinality $n$ is defined as
$\varepsilon_{n,t} = \max_{M}\left(\varepsilon_{n,t,M}\right)$.
A cardinality estimator satisfies \emph{$\varepsilon$-sketch privacy on average}
if we have, for all $n$,
$\frac{1}{\left|\mathcal{U}\right|}\sum_{t\in\mathcal{U}}\varepsilon_{n,t}\leq\varepsilon$.
\end{defn}
In this definition, we accept that some users might have less privacy as long as
the \emph{average} user satisfies our initial privacy definition.
Remark~\ref{rem:renyi-privacy} is still relevant: we chose to average over all
values of $\varepsilon$, but other averaging functions are possible and would
lead to strictly stronger definitions.
We show that HyperLogLog satisfies this definition and we consider the value of
$\varepsilon$ for various parameters and their significance. Intuitively, a
HyperLogLog cardinality estimator puts every element in a random \emph{bucket},
and each bucket counts the \emph{maximum number of leading zeroes} of elements
added in this bucket. More details are given in
Appendix~\ref{thm:hll:average-proof}.
HyperLogLog cardinality estimators have a parameter $p$ that determines their
memory consumption, their accuracy, and, as we will see, their level of average
privacy.
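To make the bucket/leading-zeroes intuition concrete, here is a minimal HyperLogLog-style $\mathit{add}$ and $\mathit{merge}$ in Python. This is an illustrative sketch, not the implementation analyzed in the paper; the hash function and bit widths are arbitrary choices.

```python
import hashlib

P = 15                          # precision: 2**P buckets (the production default)
BITS = 64                       # total hash bits (illustrative choice)

def h(t):
    """64-bit hash standing in for the estimator's hash function."""
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest()[:8], "big")

def rho(w, bits=BITS - P):
    """1 + number of leading zeroes of w in a `bits`-bit word."""
    for i in range(1, bits + 1):
        if w & (1 << (bits - i)):
            return i
    return bits + 1

def add(M, t):
    x = h(t)
    bucket = x >> (BITS - P)                 # first P bits choose the bucket
    rest = x & ((1 << (BITS - P)) - 1)       # remaining bits feed rho
    M = list(M)
    M[bucket] = max(M[bucket], rho(rest))    # bucket stores the max rho seen
    return tuple(M)

def merge(M1, M2):
    return tuple(max(a, b) for a, b in zip(M1, M2))

M = add(tuple([0] * (1 << P)), "alice")
assert add(M, "alice") == M                  # idempotent: duplicates are ignored
```

The bucket-wise maximum makes $\mathit{add}$ idempotent and $\mathit{merge}$ commutative, which is exactly the structure exploited by the negative results.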
\begin{thm}\label{thm:hll:average}
Assuming a sufficiently large $\left|\mathcal{U}\right|$, a HyperLogLog
cardinality estimator of parameter $p$ satisfies $\varepsilon_{n}$-sketch
privacy above cardinality $N$ on average, where for $n\ge N$,
\[
\varepsilon_{n} \simeq -\sum_{k\geq1}2^{-k}\log\left(1-{\left(1-2^{-p-k}\right)}^{n}\right).
\]
\end{thm}
The assumption that the set of possible elements is very large and its
consequences are explained in more detail in the proof of this theorem,
given in Appendix~\ref{thm:hll:average-proof}.
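For a numerical sense of Theorem~\ref{thm:hll:average}, the series for $\varepsilon_n$ can be evaluated directly. This is a sketch: the truncation depth is an arbitrary choice, and in double precision the tail underflows to zero well before it anyway.

```python
import math

def eps_n(p, n, kmax=64):
    """Evaluate eps_n ~ -sum_{k>=1} 2^{-k} * log(1 - (1 - 2^{-(p+k)})^n)."""
    total = 0.0
    for k in range(1, kmax + 1):
        q = 1.0 - (1.0 - 2.0 ** -(p + k)) ** n
        if q <= 0.0:          # 2^{-(p+k)} is below float resolution: stop
            break
        total -= 2.0 ** -k * math.log(q)
    return total

# Consistent with the text: p = 9 gives eps_n of about 1 at n = 1000,
# while the production default p = 15 is far larger at the same n.
```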
How does this positive result fit practical use cases?
Figure~\ref{fig:averagedepsilon} plots $\varepsilon_{n}$ for three different
HyperLogLog cardinality estimators. It shows two important results.
\begin{figure}
\begin{centering}
\includegraphics[height=4cm]{averagedepsilon}
\par\end{centering}
\caption{\label{fig:averagedepsilon}$\varepsilon_{n}$ as a function of $n$,
for HyperLogLog cardinality estimators of different $p$ parameters.
The blue line is $2$, corresponding to the commonly recommended value
of $\varepsilon=\ln\left(2\right)$ in differential privacy.}
\end{figure}
First, cardinality estimators used in practice do not preserve privacy. For
example, the default parameter used for production pipelines at Google and on
the BigQuery service~\cite{bigqueryhll} is $p=15$. For this value of $p$, an
attacker can determine with significant accuracy whether a target was added to a
sketch; not only in the worst case, but for the \emph{average} user too. The
average risk only becomes reasonable for $n\geq10,000$, a threshold too large
for most data analysis tasks.
Second, by sacrificing some accuracy, it is possible to obtain a reasonable
\emph{average privacy}. For example, a HyperLogLog sketch for which $p=9$ has a
relative standard error of about $5\%$, and an $\varepsilon_{n}$ of about $1$
for $n=1000$. Unfortunately, even when the average risk is acceptable, some
users will still be at a higher risk: users $e$ with a large number of leading
zeroes are much more identifiable than the average. For example, if $n=1000$,
there is a $98\%$ chance that at least one user has $\rho(e)\geq8$. For this
user, $\varepsilon_{n,t}\simeq5$, a very high value.
Our calculations yield only an approximation of $\varepsilon_{n}$ that is an
upper bound on the actual privacy loss in HyperLogLog sketches. However, these
alarming results can be confirmed experimentally. We simulated
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin
E\right]$, for uniformly random values of $t$, using HyperLogLog sketches with
the parameter $p=15$, the default used for production pipelines at Google and on
the BigQuery service~\cite{bigqueryhll}. For each cardinality $n$, we generated
10,000 different random target values, and added each one to 1,000 HyperLogLog
sketches of cardinality $n$ (generated from random values). For each target, we
counted the number of sketches that ignored it.
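The experiment can be reproduced in miniature with the following sketch. It is a scaled-down re-implementation, not the paper's actual code: it idealizes HyperLogLog by drawing a uniform bucket and a geometric $\rho$ per element, and uses far smaller parameters than $p=15$ and 10,000 targets so that it runs quickly.

```python
import random

def simulate_ignored_fraction(p, n, n_sketches=200, seed=0):
    """Estimate P_n[add(M_E, t) = M_E | t not in E] for a random target t,
    using idealized HyperLogLog sketches: each element gets a uniform
    bucket and a geometric rho (1 + number of leading zeroes)."""
    rng = random.Random(seed)
    m = 1 << p

    def draw():
        bucket = rng.randrange(m)
        r = 1
        while rng.random() < 0.5:
            r += 1
        return bucket, r

    ignored = 0
    for _ in range(n_sketches):
        M = [0] * m
        for _ in range(n):            # build a sketch of n random elements
            b, r = draw()
            M[b] = max(M[b], r)
        b, r = draw()                 # a fresh random target
        if M[b] >= r:                 # sketch unchanged: target ignored
            ignored += 1
    return ignored / n_sketches
```

Even at this scale the qualitative effect is visible: at small cardinalities almost no sketch ignores a new element, while at large cardinalities most do.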
Figure~\ref{fig:ignoredelements} plots some percentile values. For example, the
all-targets curve (100th percentile) has a value of 33\% at cardinality $n$ =
10,000. This means that each of the 10,000 random targets was ignored by at most
33\% of the 1,000 random sketches of this cardinality, i.e.,
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin
E\right] \leq 33\%$ for all $t$. In other words, an attacker observes with at
least 67\% probability a change when adding a random target to a random sketch
that did not contain it. Similarly, the 10th-percentile at $n$ = 10,000 has a
value of 3.8\%. So 10\% of the targets were ignored by at most 3.8\% of the
sketches, i.e., $\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid
t\notin E\right] \leq 3.8\%$ for 10\% of all users $t$. That is, for the average
user $t$, there is a 10\% chance that a sketch with 10,000 elements changes with
likelihood at least 96.2\% when $t$ is first added.
\begin{figure}
\begin{centering}
\includegraphics[height=5cm]{ignoredelementshll}
\par\end{centering}
\caption{\label{fig:ignoredelements} Simulation of
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin E\right]$,
for uniformly chosen values of $t$, using HyperLogLog sketches with parameter
$p=15$.}
\end{figure}
For small cardinalities ($n<1,000$), adding an element that has not yet been
added to the sketch will almost certainly modify the sketch: an attacker
observing that a sketch does not change after adding $t$ can deduce with
near-certainty that $t$ was added previously.
Even for larger cardinalities, there is always a constant number of people with
high privacy loss. For $n$ = 1,000, no target was ignored by more than
5.5\% of the sketches. For $n$ = 10,000, 10\% of the users were ignored by at
most 3.8\% of the sketches. Similarly, the 1st percentile at $n$ = 100,000 and
the 1st permille at $n$ = 1,000,000 are 4.6\% and 4.5\%, respectively. In
summary, across all cardinalities $n$, at least 1,000 users $t$ have
$\mathbb{P}_{n}\left[\mathit{add}\left(M_{E},t\right)=M_{E} \mid t\notin
E\right] \leq 0.05$. For these users, the corresponding privacy loss is
$e^\varepsilon=\frac{1}{0.055} \simeq 18$. Concretely, if the attacker initially
believes that $\mathbb{P}_{n}\left[t\in E\right]$ is 1\%, this number grows to
15\% after observing that $\mathit{add}\left(M_{E},t\right)=M_{E}$. If it is
initially 10\%, it grows to 66\%. And if it is initially 25\%, it grows to 86\%.
\section{Mitigation strategies\label{sec:discussion}}
\noindent
A corollary of Theorem~\ref{thm:main-result} and of our analysis of
Section~\ref{subsec:averaging-epsilon} is that the cardinality estimators used
in practice do not preserve privacy. How can we best protect cardinality
estimator sketches against insider threats, in realistic settings? Of course,
classical data protection techniques are relevant: encryption, access controls,
auditing of manual accesses, etc. But in addition to these best practices,
cardinality estimators like HyperLogLog allow for specific risk mitigation
techniques, which restrict the attacker's capabilities.
\subsection{Salting the hash function with a secret\label{subsec:salting-the-hash}}
\noindent
As explained in Section~\ref{subsec:def-cardinality-estimators}, most
cardinality estimators use a hash function $h$ as the first step of the
$\mathit{add}$ operation: $\mathit{add}\left(M,t\right)$ only depends on $M$ and
the hash value $h(t)$. This hash can be salted with a secret value. This salt can
be made inaccessible to humans, with access controls restricting access to
production binaries compiled from trusted code. Thus, an adversary cannot learn
all the relevant parameters of the cardinality estimator and can no longer add
users to sketches. Of course, to avoid a salt reconstruction attack, a
cryptographic hash function must be used.
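A minimal sketch of the salting idea follows. The salt value and hash choice here are hypothetical; any keyed cryptographic hash (such as an HMAC) fits the description above.

```python
import hmac
import hashlib

# Hypothetical secret; in the setting described, it would be held only by
# trusted production binaries, not by human operators.
SECRET_SALT = b"example-secret"

def salted_hash(t, salt=SECRET_SALT):
    """Keyed hash h(t): without `salt`, an attacker cannot recompute it."""
    return hmac.new(salt, str(t).encode(), hashlib.sha256).digest()

# The rotation problem below in miniature: sketches built under different
# salts see the same user as two different hash values, so merging them
# would double-count that user.
assert salted_hash("alice", b"salt-v1") != salted_hash("alice", b"salt-v2")
```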
The use of a salt does not hinder the usefulness of sketches: they can still be
merged (for all cardinality estimators given as examples in
Section~\ref{subsec:def-cardinality-estimators}) and the cardinality can still
be estimated without accuracy loss. However, if an attacker gains direct access
to a sketch $M$ with the aim of targeting a user $t$ and does not know the
secret salt, then she cannot compute $h\left(t\right)$ and therefore cannot
compute $\mathit{add}\left(M,t\right)$. This prevents the previous obvious
attack of adding $t$ to $M$ and observing whether the result is different.
However, this solution has two issues.
\begin{description}
\item[Secret salt rotation] The secret salt must be the same for all sketches as
otherwise sketches cannot be merged. Indeed, if a hash function $h_{1}$ is
used to create a sketch $M_{1}$ and $h_{2}$ is used to create $M_{2}$, then if
$h_{1}\left(t\right)\neq h_{2}\left(t\right)$ for some $t$ that is added to both
$M_{1}$ and $M_{2}$, $t$ will be seen as a \emph{different user} in $M_{1}$ and
$M_{2}$: the cardinality estimator no longer ignores duplicates. Good key
management practices also recommend regularly rotating secret keys. In this
context, changing the key requires recomputing all previously computed sketches.
This requires keeping the original raw data, makes pipelines more complex, and
can be computationally costly.
\item[Sketch intersection] For most cardinality estimators given as examples in
Section~\ref{subsec:def-cardinality-estimators}, it is possible for an attacker
to guess $h\left(t\right)$ from a family of sketches ($M_{1}$, \ldots, $M_{k}$)
for which the attacker \emph{knows} that $t\in M_{1}$. For example, intersecting
the lists stored in K-Minimum Values sketches can provide information on which
hashes come from users that have been in \emph{all} sketches. For HyperLogLog,
one can use the leading zeroes in non-empty buckets to get partial information
on the hash value of users who are in all sketches. Moreover,
HyperLogLog++~\cite{heule2013hyperloglog} has a \emph{sparse mode} that stores
full hashes when the sketch contains a small number of values; this makes
intersection attacks even easier.
Intersection attacks are realistic, although they are significantly more complex
than simply checking if $\mathit{add}\left(M,t\right)=M$. In our running
example, sketches come from counting users across locations and time periods. If
an internal attacker wants to target someone she knows, she can gather
information about where they went using side channels like social media posts.
This gives her a series of sketches $M_{1}, \ldots, M_{k}$ that she \emph{knows}
her target belongs to, and from these, she can get information on $h(t)$ and use
it to perform an attack equivalent to checking whether
$\mathit{add}\left(M,t\right)=M$.
\end{description}
Another possible risk mitigation technique is homomorphic encryption. Each
sketch could be encrypted in a way that allows sketches to be merged, and new
elements to be added; while ensuring that an attacker cannot do any operation
without some secret key. Homomorphic encryption typically has significant
overhead, so it is likely to be too costly for most use cases. Our impossibility
results assume a computationally unbounded attacker; however, it is possible
that an accurate sketching mechanism using homomorphic encryption could provide
privacy against \emph{polynomial-time} attackers. We leave this area of research
for future work.
\subsection{Using a restricted API\label{subsec:restricted-api}}
\noindent
Using cardinality estimator sketches to perform data analysis tasks only
requires access to two operations: $\mathit{merge}$ and $\mathit{estimate}$. So
a simple option is to process the sketches over an API that only allows this
type of operation. One option is to provide a SQL engine on a database, and only
allow SQL functions that correspond to $\mathit{merge}$ and $\mathit{estimate}$
over the column containing sketches. In the BigQuery SQL engine, this
corresponds to allowing HLL\_COUNT.MERGE and HLL\_COUNT.EXTRACT functions, but
not other functions over the column containing sketches~\cite{bigqueryhll}.
Thus, the attacker cannot access the raw sketches.
Under this technique, an attacker who only has access to the API can no longer
directly check whether $\mathit{add}\left(M,t\right)=M$. Since she does not have
access to the sketch internals, she cannot perform the intersection attack
described previously either. To perform the check, her easiest option is to
impersonate her target within the service, interact with the service so that a
sketch $M_{\{t\}}$ containing \emph{only} her target is created in the sketch
database, and compare the estimates obtained from $M$ and
$\mathit{merge}\left(M,M_{\{t\}}\right)$.
Following the intuition given in Section~\ref{subsec:main-result-deterministic},
if these estimates are the same, then the target is more likely to be in the
dataset. How much information the attacker gets this way depends on
$\mathbb{P}\left[\mathit{estimate}\left(\mathit{add}\left(M_{E},t\right)\right)
= \mathit{estimate}\left(M_{E}\right) \mid t\notin E\right]$.
We can increase this quantity by rounding the result of the $\mathit{estimate}$
operation, thus limiting the accuracy of the external attacker. This would make
the attack described in this work slightly more difficult to execute, and less
efficient. However, it is likely that the attack could be adapted, for example
by repeating it multiple times with additional fake elements.
This risk mitigation technique can be combined with the previous one. The
restricted API protects the sketches during normal use by data analysts, i.e.,
against external attackers. The hash salting mitigates the risk of manual access
to the sketches, e.g., by internal attackers. This type of direct access is not
needed for most data analysis tasks, so it can be monitored via other means.
\section{Conclusion}
We formally defined a class of cardinality estimator algorithms with an
associated system and attacker model that captures the risks associated with
processing personal data in cardinality estimator sketches. Based on this model,
we proposed a privacy definition that expresses that the attacker cannot gain
significant knowledge about a given target.
We showed that our privacy definition, which is strictly weaker than any
reasonable definition used in practice, is incompatible with the accuracy and
aggregation properties required for practical uses of cardinality estimators. We
proved similar results for even weaker definitions, and we measured the privacy
loss associated with the HyperLogLog cardinality estimator, commonly used in
data analysis tasks.
Our results show that designing accurate privacy-preserving cardinality
estimator algorithms is impossible, and that the cardinality estimator sketches
used in practice should be considered as sensitive as raw data. These negative
results are a consequence of the \emph{structure} imposed on cardinality
estimators: idempotence, commutativity, and existence of a well-behaved merge
operation. This result shows a fundamental incompatibility between accurate
aggregation and privacy. A natural question is whether other sketching
algorithms have similar incompatibilities, and which minimal axiomatizations
lead to similar impossibility results.
\section{Acknowledgements}
The authors thank Jakob Dambon for providing insights which helped us prove
Lemma~\ref{lem:bad-variance}, Esfandiar Mohammadi for numerous fruitful
discussions, as well as Christophe De Cannière, Pern Hui Chia, Chao Li, Aaron
Johnson and the anonymous reviewers for their helpful comments. This work was
partially funded by Google, and done while Andreas Lochbihler was at ETH Zurich.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Topological strings have been studied quite intensively as a toy
model of ordinary string theory. Besides displaying a rich
mathematical structure, they partially or completely control
certain BPS quantities in ordinary string theory, and as such have
found applications, e.g., in the study of BPS black holes and
non-perturbative contributions to superpotentials.
Unfortunately, a full non-perturbative definition of topological
string theory is still lacking, but it is clear that it will
involve ingredients from both the A- and B-model, and that both
open and closed topological strings will play a role. Since
M-theory is crucial in understanding the strong coupling limit and
nonperturbative properties of string theory, one may wonder
whether something similar is true in the topological case, i.e.
does there exist a seven-dimensional topological theory which
reduces to topological string theory in six dimensions when
compactified on a circle? And could such a seven-dimensional
theory shed light on the non-perturbative properties of
topological string theory?
In order to find such a seven-dimensional theory one can use
various strategies. One can try to directly guess the spacetime
theory, as in \cite{Dijkgraaf:2004te,Nekrasov:2005bb}, or one can
try to construct a topological membrane theory as in
\cite{Grassi:2004xr,Bao:2005yx,Anguelova:2005cv,Bonelli:2005rw,Ikeda:2006pd}
(after all, M-theory appears to be a theory of membranes, though
the precise meaning of this sentence remains opaque). In this
paper we will follow a different approach and study a topological
version of strings propagating on a manifold of $G_2$ holonomy,
following \cite{deBoer:2005pt} (for an earlier work on $G_2$
sigma-models see \cite{Shatashvili:1994zw}). In
\cite{deBoer:2005pt} the topological twist was defined using the
extended worldsheet algebra that sigma-models on manifolds with
exceptional holonomy possess \cite{Howe:1991ic}. For manifolds of
$G_2$ holonomy the extended worldsheet algebra contains the
$c=7/10$ superconformal algebra \cite{Shatashvili:1994zw} that
describes the tricritical Ising model, and the conformal block
structure of this theory was crucial in defining the twist. In
\cite{deBoer:2005pt} it was furthermore shown that the BRST
cohomology of the topological $G_2$ string is equivalent to the
ordinary de Rham cohomology of the seven-manifold, and that the
genus zero three-point functions are the third derivatives of a
suitable prepotential, which turned out to be equal to the
seven-dimensional Hitchin functional of \cite{Hitchin:2000jd}. The
latter also features prominently in
\cite{Dijkgraaf:2004te,Nekrasov:2005bb}, suggesting a close
connection between the spacetime and worldsheet approaches.
In the present paper we will study open topological strings on
seven-manifolds of $G_2$ holonomy, using the same twist as in
\cite{deBoer:2005pt}. There are several motivations to do this.
First of all, we hope that this formalism will eventually lead to
a better understanding of the open topological string in six
dimensions. Second, some of the results may be relevant for the
study of realistic compactifications of M-theory on manifolds of
$G_2$ holonomy\footnote{This will require an extension of our
results to singular manifolds which is an interesting direction
for future research.}, for a recent discussion of the latter see
e.g. \cite{Acharya:2006ia}. Third, by studying branes wrapping
three-cycles we may establish a connection between topological
strings and topological membranes in seven dimensions. And
finally, for open topological strings one can completely determine
the corresponding open string field theory \cite{Witten:1992fb},
from which one can compute arbitrary higher genus partition
functions and from which one can also extract highly non-trivial
all-order results for the closed topological string using
geometric transitions \cite{Gopakumar:1998vy}. Repeating such an
analysis in the $G_2$ case would allow us to use open $G_2$ string
field theory to perform computations at higher genus in both the
open and closed topological $G_2$ string. This is of special
importance since the definition and existence of the topological
twist at higher genus has not yet been rigorously established in
the $G_2$ case.
Along the way we will run into various interesting mathematical
structures and topological field theories in various dimensions
that may be of interest in their own right.
The outline and summary of this paper is as follows. We will first
briefly review the closed topological $G_2$ string and its Hilbert
space. We will then consider open topological strings and their
boundary conditions. Consistent boundary conditions are those
which preserve one copy of the non-linear $G_2$ worldsheet algebra
and were previously analyzed in \cite{Becker:1996ay,Howe:2005je}.
One finds that there are topological zero-, three-, four- and
seven-branes in the theory\footnote{It is unclear to us how we
could incorporate coisotropic six-branes in our theory, whose
existence is suggested in \cite{Bao:2006ef}.}. The three- and
four-branes wrap associative and coassociative cycles respectively
and are calibrated by the covariantly constant three-form and its
Hodge-dual which define the $G_2$ structure.
Next, we compute the topological open string spectrum in the
presence of these branes. For a seven-brane, the spectrum has a
simple geometric interpretation in terms of the Dolbeault
cohomology of the $G_2$ manifold. To define the Dolbeault
cohomology, we need to use the fact that $G_2\subset SO(7)$ acts
naturally on differential forms, and we can decompose them into
$G_2$ representations. In this paper, we will use the notation
$\pi^p_{\bf n}$ to denote the projection of the space of $p$-forms
$\Lambda^p$ onto the irreducible representation ${\bf n}$ of
$G_2$. The Dolbeault complex is then
\begin{equation}
0 \longrightarrow \Lambda^0 \stackrel{d}{\longrightarrow}
\Lambda^1 \stackrel{\pi^2_{\bf 7} d}{\longrightarrow} \pi^2_{\bf
7}(\Lambda^2) \stackrel{\pi^3_{\bf 1} d}{\longrightarrow}
\pi^3_{\bf 1}(\Lambda^3) \longrightarrow 0 \; .
\end{equation}
The topological open string BRST cohomology is the cohomology of
this complex and yields states at ghost numbers $0,1,2,3$. For
zero-, three- and four-branes the cohomology is obtained by
reducing the above complex to the brane in question.
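For the reader's convenience, we recall the standard $G_2$ decompositions of two- and three-forms that underlie these projections (standard linear algebra of $G_2$ structures):
\begin{equation}
\Lambda^2 = \pi^2_{\bf 7}(\Lambda^2)\oplus\pi^2_{\bf 14}(\Lambda^2)\,,\qquad
\pi^2_{\bf 7}(\Lambda^2)=\{\omega \mid \ast(\phi\wedge\omega)=2\omega\}\,,
\end{equation}
\begin{equation}
\Lambda^3 = \pi^3_{\bf 1}(\Lambda^3)\oplus\pi^3_{\bf 7}(\Lambda^3)\oplus\pi^3_{\bf 27}(\Lambda^3)\,,\qquad
\pi^3_{\bf 1}(\Lambda^3)=\mathbb{R}\,\phi\,,
\end{equation}
where $\pi^2_{\bf 14}$ projects onto the $-1$ eigenspace of $\omega\mapsto\ast(\phi\wedge\omega)$, which is isomorphic to the adjoint representation of $G_2$.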
In section~4 we will verify explicitly that the BRST cohomology in
ghost number one contains the space of (generalized) flat
connections on the brane, but also contains the infinitesimal
moduli of the topological brane. In particular, we will see that
the topological open string reproduces precisely the results in
the mathematics literature \cite{McLean:1998} regarding
deformations of calibrated cycles in manifolds of $G_2$ holonomy.
We briefly discuss scattering amplitudes in section~5 and use them
to construct the open topological string field theory following
methods discussed in \cite{Witten:1992fb} in section~6. The final
answer for the open topological string field theory turns out to
be very simple. For seven-branes we obtain the associative
Chern-Simons (CS) action
\begin{equation}
S =\int_Y \ast\phi\wedge CS_3(A) \; , \label{act}
\end{equation}
with $CS_3 (A)$ the standard Chern-Simons three-form and
$\ast\phi$ the harmonic four-form on the $G_2$ manifold $Y$. For
the other branes we obtain the dimensional reduction of this
action to the appropriate brane. The action (\ref{act}) was first
considered in \cite{Donaldson:1996kp,Baulieu:1997nj}, and it is
gratifying to have a direct derivation of this action from string
theory. We will also discuss the dimensional reduction of this
theory on $CY_3 \times S^1$, which leads to various real versions
of the open A- and B-model, depending on the brane one is looking
at. The situation is very similar to the closed topological $G_2$
string, which also reduced to a combination of real versions of
the A- and B-models. It is presently unclear to us whether this
means that the partition functions of the open and closed
topological $G_2$ strings should not be interpreted as wave
functions, in contrast to the partition functions of the open and
closed A- and B-models, which are most naturally viewed as wave
functions.
The last subject we discuss in section~6 is the emergence of
worldsheet instanton contributions of the topological string
theory on Calabi-Yau manifolds from the topological $G_2$ string
on $CY_3 \times S^1$. Though our analysis is not yet conclusive,
it appears that these worldsheet instanton effects arise from
wrapped branes in the $G_2$ theory and not directly from
worldsheet instantons.
Finally, in section 7 we make a preliminary investigation of the gauge-fixing
and quantization of (\ref{act}) and its reductions to four- and
three-dimensional branes. As was the case in the open string field theory for
A-branes, the gauge-fixed actions look very similar to the action (\ref{act})
once we replace the ghost number one field $A$ by a field of arbitrary ghost
number. We also study the one-loop partition functions of the various open
string field theories, and find that they tend to have the effect of shifting
the tree-level theories in a rather simple way. This is similar to the one-loop
shift $k \rightarrow k+h(G)$ of the level $k$ in ordinary Chern-Simons theory,
with $h(G)$ the dual Coxeter number of the gauge group $G$. In particular, we
find that ${*\phi}$ in (\ref{act}) is shifted by a four-form proportional to the
first Pontrjagin class of the manifold $Y$. We have not yet attempted to
determine whether (\ref{act}) is renormalizable and well-defined as a quantum
theory (which, by naive power-counting, it is not) but we expect that it should
be as it is equivalent to a string theory (a similar issue occurs for
holomorphic Chern-Simons in the B-model).
We conclude with a list of open problems and have collected
various technical results in the appendices.
We will adhere to the following conventions: $M$ will refer to a
calibrated submanifold of dimension 3 or 4 (i.e. calibrated by
$\phi$ or $*\phi$, respectively); these are known, respectively,
as associative and coassociative submanifolds. The ambient $G_2$
manifold will be denoted $Y$.
\section{A brief review of the closed topological $G_2$ string}
Let us briefly review the definition of the topological $G_2$
string found in \cite{deBoer:2005pt}. We will cover only
essential points. For further details we refer the reader to
\cite{deBoer:2005pt}.
\subsection{Sigma model for the $G_2$ string}
The topological $G_2$ string constructs a topological string
theory with target space a seven-dimensional $G_2$-holonomy
manifold $Y$. This topological string theory is defined in terms
of a topological twist of the relevant sigma-model. In order to
have $\mathcal{N}=1$ target space supersymmetry, one starts with an $\mathcal{N} =
(1,1)$ sigma model on a $G_2$ holonomy manifold. The special
holonomy of the target space implies an extended supersymmetry
algebra for the worldsheet sigma-model \cite{Howe:1991ic}. That
is, additional conserved supercurrents are generated by pulling
back the covariantly constant 3-form $\phi$ and its Hodge dual
$*\phi$ to the worldsheet as
$$ \phi_{\mu\nu\rho}( {\mathbf X}) D {\mathbf X}^\mu D {\mathbf X}^\nu D {\mathbf X}^\rho \; , $$
where $ {\mathbf X}$ is a worldsheet chiral superfield, whose bosonic
component corresponds to the world-sheet embedding map. From the
classical theory it is then postulated that the extended symmetry
algebra survives quantization, and is present in the quantum
theory. This postulate is also based on analyzing all possible
quantum extensions of the symmetry algebra compatible with
spacetime supersymmetry and $G_2$ holonomy.
A crucial property of the extended symmetry algebra is that it contains an $\mathcal{N}
=1$ SCFT sub-algebra, which has the correct central charge of $c = 7/10$ to
correspond to the tri-critical Ising unitary minimal model. Unitary minimal
models have central charges in the series $c = 1 - \frac{6}{p(p+1)}$ (for
$p$ an integer) so the tri-critical Ising model has $p=4$.
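As a quick check of this counting (our own bookkeeping, not part of the original argument): a seven-dimensional $\mathcal{N} = (1,1)$ sigma model carries one scalar multiplet per target dimension, each contributing $c = 3/2$ in one chiral sector, so the tri-critical Ising factor accounts for $7/10$ of the total central charge:

```latex
c_{\rm total} = 7 \times \tfrac{3}{2} = \tfrac{21}{2}
  = \underbrace{\tfrac{7}{10}}_{c_I} + \underbrace{\tfrac{49}{5}}_{c_r} \; ,
\qquad
c_I = 1 - \frac{6}{p(p+1)}\bigg|_{p=4} = \tfrac{7}{10} \; .
```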
The conformal primaries for such models are labelled by two integer Kac labels,
$n'$ and $n$, as $\phi_{(n',n)}$ where $1 \leq n' \leq p$ and $1 \leq n < p$.
The Kac labels determine the conformal weight of the state as $h_{n',n} =
\frac{[pn' - (p+1)n]^2 -1}{4p(p+1)}$. The Kac table for this minimal model is
reproduced in \cite[Table 1]{deBoer:2005pt}. Note that primaries with label
$(n', n)$ and $(p+1 -n', p-n)$ are equivalent. This model has six conformal
primaries with weights $h_I =0,1/10, 6/10, 3/2$ (for the NS states) and
$h_I=7/16, 3/80$ (for the R states).
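For concreteness, the six weights just listed can be recovered directly from the Kac formula. The following Python sketch (our own check; the helper name is ours) enumerates the $p=4$ Kac table modulo the identification $(n', n) \sim (p+1-n', p-n)$:

```python
from fractions import Fraction

def kac_weight(p, n_prime, n):
    # h_{n',n} = ([p n' - (p+1) n]^2 - 1) / (4 p (p+1))
    return Fraction((p * n_prime - (p + 1) * n) ** 2 - 1, 4 * p * (p + 1))

p = 4  # tri-critical Ising model
classes = set()   # Kac labels modulo (n', n) ~ (p+1-n', p-n)
weights = set()
for n_prime in range(1, p + 1):      # 1 <= n' <= p
    for n in range(1, p):            # 1 <= n <  p
        pair = frozenset({(n_prime, n), (p + 1 - n_prime, p - n)})
        if pair not in classes:
            classes.add(pair)
            weights.add(kac_weight(p, n_prime, n))

print(len(classes), sorted(weights))
# six primaries, with h = 0, 1/10, 6/10, 3/2 (NS) and 7/16, 3/80 (R)
```

This reproduces the NS weights $0, 1/10, 6/10, 3/2$ and the R weights $7/16, 3/80$.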
The conformal block structure of the weight $1/10$ primary, $\phi_{(2,1)}$, and of the
weight $7/16$ primary, $\phi_{(1,2)}$, is particularly simple,
\begin{eqnarray}
\phi_{(2,1)} \times \phi_{(n',n)} &=& \phi_{(n'-1, n)} +
\phi_{(n'+1, n)} \; ,
\nonumber \\
\phi_{(1,2)} \times \phi_{(n',n)} &=& \phi_{(n', n-1)} +
\phi_{(n', n+1)} \; ,\nonumber
\end{eqnarray}
\noindent where $\phi_{(n',n)}$ is any primary. This conformal block decomposition is
schematically denoted as
\begin{eqnarray}
\Phi_{(2,1)} &=& \Phi_{(2,1)}^{\downarrow} \oplus
\Phi_{(2,1)}^{\uparrow} \; ,
\nonumber \\
\Phi_{(1,2)} &=& \Phi_{(1,2)}^{-} \oplus \Phi_{(1,2)}^{+} \; .
\end{eqnarray}
\noindent The conformal primaries of the full sigma-model are
labelled by their tri-critical Ising model highest weight, $h_I$,
and the highest weight corresponding to the rest of the algebra,
$h_r$, as $|h_I, h_r \rangle$. This is possible because the stress
tensors, $T_I$, of the tri-critical Ising sub-algebra and of the \lq rest'
of the algebra, $T_r = T - T_I$ (where $T$ is the stress tensor of
the full algebra), satisfy $T_I \cdot T_r \sim 0$.
\subsection{The $G_2$ twist}
The standard $\mathcal{N} = (2,2)$ sigma-models can be twisted by making
use of the U(1) R-symmetry of their algebra. Using the U(1)
symmetry, the twisting can be regarded as changing the worldsheet
sigma-model with a Calabi-Yau target space by the addition of the
following term:
\begin{equation}\label{eqn_twist_insert}
\pm \frac{\omega}{2} \overline{\psi} \psi \; ,
\end{equation}
with $\omega$ the spin connection on the world-sheet. This
effectively changes the charge of the fermions under worldsheet
gravity to be integral, resulting in the topological A/B-model
depending on the relative sign of the twist in the left and right
sector of the theory (for fermions with holomorphic or
anti-holomorphic target space indices). Here $\overline{\psi}$ and $\psi$ can
be either left- or right-moving worldsheet fermions. In the topological theory,
before coupling to gravity, there are no ghosts or anti-ghosts so
these are the only spinors/fermions in the system.
This twist has been re-interpreted \cite{Bershadsky:1993cx,
Witten:1991zz} as follows. First think of the exponentiation of
(\ref{eqn_twist_insert}) as an insertion in the path integral
rather than a modification of the action. By bosonising the
world-sheet fermions we can write $\overline{\psi} \psi = \partial H$ for a free
boson field $H$, so the above becomes
\begin{equation}
\int \frac{\omega}{2} \partial H = - \int H \frac{\partial \omega}{2} = \int H
R\; ,
\end{equation}
\noindent where $R$ is the curvature of the world-sheet. We can
always choose a gauge for the metric such that $R$ will only have
support on a number of points given by the Euler number of the
worldsheet.
For closed strings on a sphere the curvature can be chosen to have support at two
points, which we take to be $0$ and $\infty$ (in the CFT defined on the sphere), so
the correlation functions in the topological theory can be calculated in terms
of the original CFT using the following dictionary:
\begin{equation}
\braket{\dots}_{twisted} = \braket{e^{H(\infty)}\dots
e^{H(0)}}_{untwisted}\; .
\end{equation}
\noindent The \lq untwisted' theory should not be confused with
the physical theory, because it does not include integration over
world-sheet metrics and hence has no ghost or superghost system
and also it is still not at the critical dimension. The equation
above simply relates the original untwisted $\mathcal{N} = 2$ sigma-model
theory to the twisted one.
In \cite{deBoer:2005pt} a related prescription is given to define
the twisted \lq topological' sigma-model on a 7-dimensional target
space with $G_2$ holonomy. Here the role of the U(1) R-symmetry is
played by the tri-critical Ising model sub-algebra. However, a
difference is that the topological $G_2$ sigma-model is formulated
in terms of conformal blocks rather than in terms of local
operators. In particular the operator $H$ in the above is replaced
by the conformal block $\Phi^{+}_{(1,2)}$.
The main point of the topological twisting is to redefine the theory in such a
way that it contains a scalar BRST operator. In the $G_2$ sigma model, the BRST
operator is related to the conformal block of the weight 3/2 current $G(z)$ of
the super stress-energy tensor\footnote{The super stress-energy tensor is given
as ${\mathbf T}(z, \theta) = G(z) + \theta T(z)$. The current $G(z)$ can be
further decomposed as $G(z) = \Phi_{(2,1)} \otimes \Psi_{{14 \over 10}}$, in
terms of the tri-critical Ising-model part and the rest of the algebra,
respectively. Since its tri-critical Ising model part contains only the primary
$\Phi_{(2,1)}$, it can be decomposed into conformal blocks accordingly.}, $$ Q
= G_{-{1 \over 2}}^{\downarrow} \; .$$ It should be pointed out that in
\cite{deBoer:2005pt} it was not possible to explicitly construct the twisted
stress tensor, and although there is circumstantial evidence that the
topological theory does exist beyond tree level this statement remains
conjectural.
\subsection{The $G_2$ string Hilbert space}\label{subsec_g2_hilb}
In a general CFT the set of states can be generated by acting with
primary operators and their descendants on the vacuum state,
resulting in an infinite dimensional Fock space. In string sigma
models this Fock space contains unphysical states, and so the
physical Hilbert space is given by the cohomology of the BRST
operator acting on this Fock space; the result is still generally
infinite-dimensional.
In the topological A- and B-models a localization argument
\cite{Witten:1991zz} implies that only BRST fixed-points
contribute to the path integral and these correspond to
holomorphic and constant maps, respectively. Thus the set of
field configurations that, when quantized, generate states in the
Hilbert space is restricted to this subclass of all field
configurations and so the Fock space is much smaller. Upon
passing to BRST cohomology this space actually becomes
finite-dimensional.
In the $G_2$ string the localization argument cannot be made
rigorous, because the action of the BRST operator on the
worldsheet fields is inherently quantum, and so is not well
defined on the classical fields. Neglecting this issue and
proceeding naively, however, one can construct a localization
argument for $G_2$ strings that suggests that the path integral
localizes on the space of constant maps \cite{deBoer:2005pt}.
Thus we will take our initial Hilbert space to consist of states
generated by constant modes $X^\mu_0$ and $\psi^\mu_0$ on the
world-sheet (in the NS-sector there is no constant fermionic mode
but the lowest energy mode $\psi^\mu_{-\frac{1}{2}}$ is used
instead). These correspond to solutions of worldsheet equations
of motion with minimal action which dominate the path integral in
the large volume limit.
In \cite{Witten:1991zz} the fact that the path integral can be
evaluated by restricting to the space of BRST fixed points is
related to another feature of the A/B-models: namely the
coupling-invariance (modulo topological terms) of the worldsheet
path integral. Variations of the path integral with respect to
the inverse string coupling constant $t \propto (\alpha')^{-1}$ are
$Q$-exact, so one may freely take the weak coupling limit $t \rightarrow
\infty$ in which the classical configurations dominate. This
limit is equivalent to rescaling the target space metric, and so
we will refer to it as the large volume limit.
Accordingly, all the calculations in the A- and B- model can be
performed in the limit where the Calabi-Yau space has a large
volume relative to the string scale, and the worldsheet theory can
be approximated by a free theory. The $G_2$ string also has the
characteristics of a topological theory, such as correlators being
independent of the operators' positions, and the fact that the
BRST cohomology corresponds to chiral primaries. On the other
hand since the theory is defined in terms of the conformal blocks,
it is difficult to explicitly check the coupling constant independence.
Based on the topological arguments, and on the postulate of the
quantum symmetry algebra, in this paper we will assume the
coupling constant independence and the validity of localization
arguments. Even if these arguments should fail for subtle
reasons, the results of this paper are always valid in the large
volume limit.
\subsection{The $G_2$ string and geometry}\label{subsec_g2_geom}
As in the topological A- and B-model, for the topological $G_2$
string there is a one-to-one correspondence between local
operators of the form $O_{\omega_p}= \omega_{i_1 \ldots i_p}
\psi^{i_1} \ldots \psi^{i_p}$ and target space $p$-forms $\omega_p
= \omega_{i_1 \ldots i_p} dx^{i_1} \wedge \ldots \wedge dx^{i_p}$.
In \cite{deBoer:2005pt} it is found that the BRST cohomology of
the left (right) sector alone maps to a certain refinement of the
de Rham cohomology described by the \lq $G_2$ Dolbeault' complex
\begin{equation}\label{eqn_g2_complex}
0 \rightarrow \Lambda^{0}_{\bf 1} \xrightarrow{\check{D}}
\Lambda^1_{\bf 7} \xrightarrow{\check{D}} \Lambda^2_{\bf 7}
\xrightarrow{\check{D}} \Lambda^3_{\bf 1} \rightarrow 0\; .
\end{equation}
The notation is that $\Lambda^p_{\bf n}$ denotes differential forms of degree
$p$, transforming in the irreducible representation ${\bf n}$ of $G_2$. The
operator $\check{D}$ acts as the exterior derivative on 0-forms, and as
\begin{eqnarray} {\check D} (\alpha) &=& \pi^2_{\bf 7} (d \alpha) \quad {\rm if}
\quad \alpha \in \Lambda^1 \; , \nonumber \\ {\check D} (\beta) &=& \pi^3_{\bf
1} (d \beta) \quad {\rm if} \quad \beta \in \Lambda^2 \; , \nonumber
\end{eqnarray} where $\pi^2_{\bf 7}$ and $\pi^3_{\bf 1}$ are projectors onto the
relevant representations. The explicit expressions for the projectors and the
standard decomposition of the de Rham cohomology are included in appendix
\ref{app_conventions}. Thus, the BRST operator $G_{-1/2}^{\downarrow}$ maps to
the differential operator of the complex $\check{D}$. In the closed theory,
combining the left- and right-movers, one obtains the full cohomology of the
target manifold, accounting for all geometric moduli: metric deformations, the
$B$-field moduli, and rescaling of the associative 3-form $\phi$. The relevant
cohomology for the open string states will be worked out in the following
sections.
\section{Open string cohomology}
We will now consider the $Q$ cohomology of the open string states.
Later, we will interpret part of this cohomology in terms of
geometric and non-geometric (gauge field) moduli on calibrated 3-
and 4-cycles.
In \cite{deBoer:2005pt} states in the $G_2$ CFT were shown to satisfy a certain
non-linear bound in terms of $h_I$ and $h_r$ and states saturating this bound
are argued to fall into shorter, BPS, representation of the non-linear $G_2$
operator algebra. Such states are referred to as chiral primaries. Analogous to
the $\mathcal{N} = 2$ case, it is the physics of these primaries that the twist is
intended to capture and thus they are the states that occur in the BRST
cohomology. The chiral primaries in the NS sector have $h = 0, 1/2, 1, 3/2$ and
$h_I = 0, 1/10, 6/10, 3/2$ and they are the image of the RR ground states under
spectral flow.
Recall that we are working in the zero mode approximation (corresponding to the
large volume limit, $t\rightarrow \infty$, where oscillator modes can be neglected) and
in this limit a general state is of the form $A_{\mu_1 \dots \mu_n} (X_0)
\psi_0^{\mu_1} \dots \psi_0^{\mu_n}$. On such states $L_0$ acts as $t \Box +
\frac{n}{2}$ so states with $h = 0, 1/2, 1, 3/2$ correspond to 0, 1, 2, and 3
forms ($f(X_0)$, $A_\mu(X_0) \psi^\mu_0$, \dots). As argued in
\cite{deBoer:2005pt} we can thus consider $Q$-cohomology on the space of 0, 1,
2, and 3 forms restricted to those that have $h_I = 0, 1/10, 6/10, 3/2$,
respectively.
In general we are interested in harmonic representatives of the $Q$ cohomology
so we will look for operators (corresponding to states) that are both $Q$- and
$Q^\dag$-closed. The results we obtain are essentially the same as those for one
side of the closed worldsheet theory \cite{deBoer:2005pt}.
\subsection{Degree one}\label{subsec_deg_one}
We will start by looking at the $h = 1/2$ state, because it is the only one that
will generate a marginal deformation of the theory. A general state with $h =
1/2$ is of the form $A_\mu(X)\psi^\mu$ so long as\footnote{Although we will
sometimes use the full fields $X$ and $\psi$ in the CFT and also consider OPE's
which generate derivatives of these fields, the reader should recall that we are
always working in the large volume limit where these reduce to $X_0$ and
$\psi_0$. }
\begin{equation}
[L_0, A_\mu(X)] = t\Box A_\mu(X) = 0 \; .
\end{equation}
It also satisfies
$$[L_0^I, A_\mu(X)\psi^\mu ] = \frac{1}{10}
A_\mu(X)\psi^\mu \; ,$$ so it is a chiral primary (i.e. it
saturates the chiral bound). Because it is a chiral primary, it
has to be $Q$-closed \cite{deBoer:2005pt}. Rather than proceed along these
lines, however, we will consider the $Q$-cohomology directly from the definition
of $Q$.
Let us determine the $Q$-cohomology of 1-forms $\mathcal{A} = A_\mu (X)\psi^\mu$. We
first calculate $\{G_{-\frac{1}{2}}, A_\mu(X)\psi^\mu\}$ in the CFT on the complex plane
with $z$ complex \lq bulk' coordinates and $y$ \lq boundary' coordinates on the
real line
\begin{equation}
\begin{split}
\{G_{-1/2}, A_\mu(X) \psi^\mu \} &= \oint dz \; G(z) \cdot A_\mu(X) \psi^\mu(y) \; , \\
G(z) \cdot A_\mu(X) \psi^\mu(y) &= g_{\rho\sigma}(X) \psi^\rho \partial X^\sigma(z) \cdot A_\mu(X)
\psi^\mu(y) \\
&\sim \partial(\ln |z - y|^2 + \ln |\overline{z} - y|^2)
\nabla_\rho A_\mu \psi^\rho(z) \psi^\mu(y) \\
&+ \frac{1}{z-y} \partial X^\mu(z) A_\mu(X(y)) \; .
\end{split}
\end{equation}
\noindent This gives\footnote{We have not been careful about the relative normalizations of the
bosonic and fermionic bulk-boundary OPE's, but this is not relevant as in all
computations of this type that occur below, we will only end up keeping one of
the terms.}
\begin{equation}
\{G_{-1/2}, A_\mu(X) \psi^\mu \}= A_\mu \partial X^\mu(y) + \frac{1}{2} \partial_{[\mu} A_{\nu]} \psi^\mu \psi^\nu \; .
\end{equation}
To compute the action of $Q$ we now project onto the $\downarrow$ part, which
includes only the part with tri-critical Ising weight $6/10$. The term $A_\mu
\partial X^\mu$ vanishes in the zero mode limit so we only need to consider the
second term. The condition that this term has $h_I = \frac{6}{10}$ is
\cite{deBoer:2005pt}
\begin{equation}
(\pi_{\bf 14}^2)^{\rho\sigma}_{\phantom{\rho\sigma}\mu\nu} \partial_{[\rho}
A_{\sigma]} = 0 \; ,
\end{equation}
\noindent where $\pi_{\bf 14}^2$ is the projector onto the 2-form
subspace $\Lambda^2_{\bf 14} \subset \Lambda^2$, in the {\bf 14}
representation of $G_2$.
This result implies that the $\frac{6}{10}$ part of $\partial_{[\rho}
A_{\sigma]}$ (or any 2-form) is in $\Lambda_{\bf 7}^2$,
so on a 1-form we can define $Q$ as
\begin{equation}\label{eqn_Q_closure}
\{Q, A_\mu\psi^\mu\} = (\pi_{\bf 7}^2) \{G_{-\frac{1}{2}}, A_\mu\psi^\mu\} = 6
\phi_{\mu\nu}^{\phantom{\mu\nu}\gamma}\phi_{\gamma}^{\phantom{\gamma}\rho\sigma} \partial_{[\rho}
A_{\sigma]} dx^\mu \wedge dx^\nu = \check{D} A =
0\; ,
\end{equation}
\noindent where we have used
\begin{equation}\label{eqn_7_proj}
(\pi_{\bf 7}^2)^{\rho\sigma}_{\phantom{\rho\sigma}\mu\nu} = 4 (*\phi)^{\rho\sigma}_{\phantom{\rho\sigma}\mu\nu} + \frac{1}{6}
(\delta^\rho_\mu \delta^\sigma_\nu - \delta^\sigma_\mu \delta^\rho_\nu) = 6 \phi_{\mu\nu}^{\phantom{\mu\nu}\gamma}\phi_{\gamma}
^{\phantom{\gamma}\rho\sigma} \; .
\end{equation}
\noindent Note that $Q$ acting on 1-forms has reduced essentially to
$\check{D}$; the same will occur for forms of other degrees.
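The ${\bf 7} \oplus {\bf 14}$ split of 2-forms underlying these projectors can be checked numerically in flat space. The sketch below is our own illustration, not part of the original text: it uses Joyce's coordinate convention for $\phi$ and self-contained normalizations (which differ from the projector coefficients quoted above), builds the operator $\alpha_{ab} \mapsto \frac{1}{2}(*\phi)_{ab}{}^{cd}\alpha_{cd}$ on $\Lambda^2(\mathbb{R}^7)$, and verifies that its eigenspaces have dimensions 7 and 14.

```python
import itertools
import numpy as np

def parity(seq):
    # Sign of the permutation taking sorted(seq) to seq (inversion count).
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

n = 7
# Flat associative 3-form, 0-indexed Joyce convention:
# phi = e^123 + e^145 + e^167 + e^246 - e^257 - e^347 - e^356.
triples = [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), 1), ((1, 3, 5), 1),
           ((1, 4, 6), -1), ((2, 3, 6), -1), ((2, 4, 5), -1)]
phi = np.zeros((n, n, n))
for tri, s in triples:
    for perm in itertools.permutations(tri):
        phi[perm] = s * parity(perm)

# Hodge dual: (*phi)_{abcd} = (1/3!) eps_{abcdefg} phi^{efg}.
eps = np.zeros((n,) * 7)
for perm in itertools.permutations(range(n)):
    eps[perm] = parity(perm)
star_phi = np.einsum('abcdefg,efg->abcd', eps, phi) / 6.0

# Matrix of alpha |-> (1/2)(*phi)_{ab}^{cd} alpha_{cd} on the 21 basis 2-forms.
pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
M = np.array([[star_phi[a, b, c, d] for (c, d) in pairs] for (a, b) in pairs])

vals = np.round(np.linalg.eigvalsh(M)).astype(int)
counts = {int(v): int((vals == v).sum()) for v in set(vals)}
print(counts)  # two eigenspaces, of dimensions 7 and 14
```

The two eigenspaces are exactly $\Lambda^2_{\bf 7}$ and $\Lambda^2_{\bf 14}$ (the latter being the Lie algebra of $G_2$), so any projector onto either is a polynomial in this operator.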
Let us now consider $Q$-coclosure. The inner product of
states
$$\braket{A_{[\mu\nu]}\psi^\mu \psi^\nu|B_{[\alpha\beta]} \psi^\alpha
\psi^\beta} \; ,$$
\noindent becomes the inner product of forms $\int_Y (*A \wedge
B)$, so $Q^\dag$ acting on $A$ is given by \newline $\braket{Q
\cdot f(X) |A_\mu(X) \psi^\mu} = \braket{f(X)| Q^\dag \cdot A_\mu
\psi^\mu}$, which can be determined as
\begin{equation}
\braket{Q \cdot f(X) |A_\mu(X) \psi^\mu} = \int \sqrt{g} \, \partial_\mu f(X)
A^\mu(X)
= - \int \sqrt{g} \, f(X)
\nabla_\mu A^\mu(X) \; .
\end{equation}
\noindent So if $A_\mu$ is also required to satisfy
\begin{equation}\label{eqn_Qdag_closure}
Q^\dag \cdot A_\mu(X) \psi^\mu = - \nabla_\mu A^\mu(X) = 0 \; ,
\end{equation}
\noindent then it is $Q$- and $Q^\dag$-closed and hence a harmonic
representative of $Q$-cohomology.
\subsection{Degree zero}
The cohomology in degree zero is rather trivial. Given a degree
zero mode $f(X)$ we have $\{Q, f(X)\} = \partial_\mu f(X) \psi^\mu$.
This follows from $Q = G_{-\frac{1}{2}}^\downarrow = G_{-\frac{1}{2}}$, because the projection onto
the $\downarrow$ component is trivial since all operators of the
form $A_\mu(X) \psi^\mu$ automatically have $L_0^I$ weight
$\frac{1}{10}$. So $Q$-closure implies
\begin{equation}
\partial_\mu f(X) = 0 \; .
\end{equation}
\noindent The $Q^\dag$-closure here is vacuous since there are no lower degree fields.
\subsection{Degree two}
In degree two we start with a two form $\omega_{\rho \sigma} \psi^\rho
\psi^\sigma$ which should have $L_0^I$ weight $\frac{6}{10}$, so it
should satisfy $\pi^2_{\bf 7}(\omega) = \omega$. The need to restrict
$\omega \in \Lambda^2_{\bf 7}$ comes from the way $Q$ is defined in
\cite{deBoer:2005pt}. We must once more calculate the action of
$G_{-\frac{1}{2}}$,
and then project it onto the $\downarrow$ part
\begin{align}
\{G_{-\frac{1}{2}}, \omega\} &= \oint dz \; g_{\mu\nu}\psi^\mu \partial X^\nu(z) \cdot \omega_{\rho
\sigma} \psi^\rho \psi^\sigma \nonumber \\
&= \oint dz \; \frac{1}{z} g_{\mu\nu}\partial^\nu \omega_{\rho \sigma} \psi^\mu \psi^\rho \psi^\sigma
+ \frac{1}{z} g_{\mu\nu} \partial X^\nu \omega_{\rho \sigma} g^{\mu\rho} \psi^\sigma
- \frac{1}{z} g_{\mu\nu} \partial X^\nu \omega_{\rho \sigma} g^{\mu\sigma} \psi^\rho \nonumber \\
&= \partial_\mu \omega_{\rho \sigma} \psi^\mu \psi^\rho \psi^\sigma + 2 \omega_{\rho \sigma} \partial
X^\rho \psi^\sigma \; .
\end{align}
\noindent Once more we can drop the second term in the large volume limit in
which we are working. We use the result in \cite{deBoer:2005pt} that the
projector onto the $L_0^I$ weight $\frac{3}{2}$ corresponds to the projector
onto $\Lambda^3_{\bf 1}$, and is given by contracting with the associative 3-form
$\phi$. So for $\Omega \in \Lambda^3_{\bf 1}$
\begin{equation}
\phi^{\alpha\beta\gamma}\Omega_{\alpha\beta\gamma} \phi_{\mu\nu\rho} = 7 \Omega_{\mu\nu\rho} \; .
\end{equation}
\noindent In particular, we can project onto the $\frac{3}{2}$ part of
$\{G_{-\frac{1}{2}}, \omega\} = \partial_\mu \omega_{\rho \sigma} \psi^\mu \psi^\rho \psi^\sigma$ using
$\phi^{\alpha\beta\gamma}$, so $Q$-closure implies
\begin{equation}\label{eqn_Q2_closure}
\phi^{\alpha\beta\gamma} \partial_{[\alpha} \omega_{\beta \gamma]} = 0 \; .
\end{equation}
\noindent Note that this once again can be written as $\check{D}\omega = 0$.
We will now derive the $Q^\dag$-closure condition. This is done in
exactly the same way as was done for the degree one components
\begin{equation}
\begin{split}
\braket{\omega |Q \cdot A_\mu(X) \psi^\mu}
& = \int \sqrt{g} \omega^{\mu\nu} (\pi_{\bf 7}^2)^{\alpha\beta}_{\phantom{\alpha\beta}\mu\nu} \partial_{\alpha} A_{\beta}
= - \int \sqrt{g} A_{\beta} \bigl( (\pi_{\bf 7}^2)^{\alpha\beta}_{\phantom{\alpha\beta}\mu\nu}
\nabla_\alpha \omega^{\mu\nu} \bigr) \; ,
\end{split}
\end{equation}
\noindent so
\begin{equation}
Q^\dag \cdot \omega = -(\pi_{\bf 7}^2)^{\mu\nu}_{\phantom{\mu\nu}\alpha\beta}
\nabla^\alpha \omega_{\mu\nu}
dx^\beta = -6\phi^{\mu\nu}_{\phantom{\mu\nu}\gamma} \phi^{\gamma}_{\phantom{\gamma}\alpha\beta} \nabla^\alpha
\omega_{\mu\nu} dx^\beta = -\nabla^\alpha \omega_{\alpha\beta} dx^\beta = 0 \; .
\end{equation}
\noindent Here we have used $\pi^2_{\bf 7}(\omega) = \omega$.
\subsection{Degree three}
A 3-form $\Omega_{\mu\nu\rho}\psi^\mu \psi^\nu \psi^\rho$ is first projected onto
its $\Lambda^3_{\bf 1}$ component by $Q$, so we take $\pi^3_{\bf 1}(\Omega) = \Omega$,
which means that $\Omega$ is a function times $\phi$. From the definition of $Q$
it is evident that it acts trivially on $\Omega$ since there is no higher $L_0^I$
eigenstate in the NS sector for $Q$ to project onto. This implies $Q = 0$ on
three forms which matches (\ref{eqn_g2_complex}). Thus we see that the action
of $Q$ on states in the zero mode approximation maps into the complex
(\ref{eqn_g2_complex}) as anticipated in Section \ref{subsec_g2_geom}.
The $Q$-coclosure of $\Omega$ is derived similarly to the 1- and
2-form case and gives
\begin{equation}
Q^\dag \cdot \Omega = \nabla^\mu \Omega_{\mu\nu \rho} dx^\nu \wedge dx^\rho = 0 \; .
\end{equation}
\subsection{Harmonic constraints}
In the previous subsections we considered the conditions for $Q$-
and $Q^\dag$-closure on the states in the $G_2$ CFT. These
conditions are all linear in derivatives but they must be enforced
simultaneously to generate unique representatives of
$Q$-cohomology. As $Q$ corresponds to the operator ${\check D}$
discussed in section \ref{subsec_g2_geom}, it generates the
Dolbeault complex (\ref{eqn_g2_complex}) which is known to be
elliptic \cite{ReyesCarrion:1998si, Donaldson:1996kp} and so can
be studied using Hodge theory. This implies that physical states
in the theory correspond to the kernel of the Laplacian operator
$\{Q, Q^\dag\}$, so one can equivalently consider this single
non-linear condition instead of the two separate linear conditions
imposed by $Q$ and $Q^\dag$.
These $Q$-harmonic conditions (derived from the actions of $Q$ and
$Q^\dag$) are
\begin{align}
\{Q, Q^\dag\} \cdot f &= \nabla_\mu \partial^\mu f =0 \; , \nonumber \\
\{Q, Q^\dag\} \cdot A_\nu \psi^\nu &= \bigl(\nabla_\nu \nabla_\mu A^\mu +
(\pi^2_{\bf 7})_{\nu}^{\phantom{\nu}\gamma\mu\sigma} \nabla_\gamma \nabla_\mu A_\sigma
\bigr) \psi^\nu =0 \; , \label{eqn_deg1_harm} \nonumber \\
\{Q, Q^\dag\} \cdot \omega_{\mu \nu} \psi^\mu \psi^\nu &=
\bigl( (\pi^2_{\bf 7})_{\mu\nu}^{\phantom{\mu\nu}\alpha\beta}
\nabla_\alpha \nabla^\gamma \omega_{\beta\gamma} +
(\pi^3_{\bf 1})_{\mu\nu\rho}^{\phantom{\mu\nu\rho}\alpha\beta\gamma} \nabla^\rho \nabla_\alpha
\omega_{\beta\gamma} \bigr) \psi^\mu \psi^\nu =0 \; .
\end{align}
\noindent We have used $\pi^2_{\bf 7}(\omega) = \omega$ to simplify the last
expression above.
\section{Open string moduli}\label{subsec_moduli}
In a general topological theory one can use elements of degree one
cohomology to deform the theory using descendant operators. If
$\mathcal{O}$ is a degree one operator, in the A/B-model this means that
it has ghost number one, whereas in the $G_2$ string this means
that it corresponds to one \lq +' conformal block. Then one can
deform the action by adding a term $\int_{\partial \Sigma} \{G_{-\frac{1}{2}}^\uparrow,
\mathcal{O}\}$, which is $Q = G_{-\frac{1}{2}}^\downarrow$ closed and of degree 0. Thus the
elements of $H_Q^1$ cohomology should correspond to possible
deformations of the theory or tangent vectors to the moduli space
of open topological $G_2$ strings.
Since open strings correspond to supersymmetric\footnote{In the
sense of preserving the extended worldsheet superalgebra.} branes,
the full moduli space should include both the moduli space of the
field theory on the brane as well as the geometric moduli of the
branes. For $G_2$ manifolds the latter are simply the moduli of
associative and coassociative 3- and 4-cycles, respectively, which
have been studied in \cite{McLean:1998}. Below we will show that
the operators $\mathcal{O}$
corresponding to normal modes do satisfy the correct constraints to be
deformations of the relevant calibrated submanifolds. Since a
priori it is not known what the field theory on these branes will
be, in the topological case we will study the constraints on the
tangential modes (which in physical strings would correspond to
gauge fields on the brane), and attempt to interpret these as
infinitesimal deformations in the moduli space of some gauge
theory on the brane.
\subsection{Calibrated geometry}
In order to preserve the extended symmetry algebra (such as $\mathcal{N} =
2$ or $G_2$) of the worldsheet SCFT in the presence of a boundary,
certain constraints must be imposed on the worldsheet currents.
These have been studied in \cite{Ooguri:1996ck}
\cite{Becker:1996ay}, and more extensively in
\cite{Albertsson:2001dv} \cite{Albertsson:2002qc}
\cite{Howe:2005je}. One imposes the boundary condition on the
left- and right-moving components of the worldsheet fermions,
$\psi_L^\mu = R^\mu_\nu(X) \psi^\nu_R$, and then conservation of
the worldsheet currents in the presence of the boundary implies
that, on the subspace $M$ where open strings can end,
\begin{equation}
\begin{split}
\phi_{\mu\nu\sigma} &= \eta_\phi R^\alpha_\mu R^\beta_\nu R^\gamma_\sigma \phi_{\alpha\beta\gamma} \; , \\
(*\phi)_{\mu\nu\sigma\lambda} &= \eta_\phi R^\alpha_\mu R^\beta_\nu R^\gamma_\sigma R^\rho_\lambda
(*\phi)_{\alpha\beta\gamma\rho}\det(R) \\
&= R^\alpha_\mu R^\beta_\nu R^\gamma_\sigma R^\rho_\lambda
(*\phi)_{\alpha\beta\gamma\rho} \; .
\end{split}
\end{equation}
\noindent Note that $R^\alpha_\mu(X)$ (for any $X \in M$) is
generally a position-dependent invertible matrix, but locally it
can be diagonalized with eigenvalues $+1$ in Neumann directions
and $-1$ in Dirichlet directions. $\eta_\phi = \pm 1$ gives two
different possible boundary conditions with the choice of
$\eta_\phi = 1$ corresponding to open strings ending on a
calibrated 3-cycle, while $\eta_\phi = -1$ corresponds to strings
on a calibrated 4-cycle \cite{Becker:1996ay}. Calibrated
submanifolds, first studied in \cite{Harvey:1982xk}, are
characterized by the property that their volume form induced by
the metric in the ambient space is the pull-back of particular
global forms, in this case
$\phi$ (for associative 3-cycles) or $*\phi$ (for coassociative 4-cycles). This
implies the volume of the calibrated submanifold is minimal in its homology class. \\
\\
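The volume-minimizing property follows from the original calibration argument of \cite{Harvey:1982xk}, which we sketch for the closed associative case: the comass condition states that $\phi$ restricted to any oriented 3-plane is at most the induced volume form, with equality precisely on associative planes. Hence for any $M'$ homologous to an associative $M$,

```latex
\mathrm{Vol}(M') \;\ge\; \int_{M'} \phi \;=\; \int_{M} \phi \;=\; \mathrm{Vol}(M) \; ,
```

where the middle equality uses $d\phi = 0$ and Stokes' theorem applied to a 4-chain interpolating between $M$ and $M'$.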
\noindent{\bf Remark.} There are several subtleties regarding
boundary conditions in topological sigma-models that deserve to be
mentioned. Below, we will advocate the perspective that any
boundary condition preserving the extended algebra\footnote{To be
precise the boundary conditions preserve some linear combination
of the extended algebra in the left/right sector of the
worldsheet. So a brane may reduce an $\mathcal{N} = (2, 2)$ theory to an
$\mathcal{N} = 2$ theory.} should also be a boundary condition of the
topological theory, because the presence of an extended algebra
allows one to define a twisted theory. In the A- and B-model,
however, although both the A- and B-brane boundary conditions
preserve the $\mathcal{N} = 2$ algebra, each is compatible with only one
of the twists, so a given topological twist is not necessarily
compatible with an arbitrary algebra-preserving boundary
condition. Moreover, a given topological twist might only depend
on the existence of a subalgebra of the full extended algebra, so
may be possible even with boundary conditions that do not preserve
the full extended algebra. A concrete example of this is the
Lagrangian boundary condition for the A-model branes proposed by
Witten \cite{Witten:1992fb}. This condition is considerably less
restrictive than the special Lagrangian condition required to
preserve the full $\mathcal{N} = 2$ algebra in the physical string
\cite{Ooguri:1996ck} and reflects the fact that the A-model is
well-defined for any K\"{a}hler manifold and does not require a
strict Calabi-Yau target space. While similar subtleties might,
in principle, exist for the $G_2$ twist they are concealed by the
fact that the twist does not have a classical realization that we
know of. So we will tentatively assume the correct boundary
conditions are those that preserve the full $G_2$ algebra on one
half of the worldsheet theory.
\subsection{Normal modes}\label{sec_normal_modes}
Let us now consider the cohomology of open strings ending on a
D-brane which wraps either an associative 3-cycle or a
co-associative 4-cycle. We adopt the convention that $I, J, K,
\dots$ are indices normal to the brane while $a, b, c, \dots$ are
tangential, and Greek letters run over all indices. The
state $A_\mu \psi^\mu$ decomposes into normal and tangential modes
which will be denoted $\theta_I \psi^I$ and $A_a \psi^a$
respectively; all momenta are tangential, denoted by $k_a$. The
normal modes will have the form $\mathcal{A} = \theta_I(X^a) \psi^I$ so
$G_{-1/2} \cdot \mathcal{A} = \partial_a \theta_I(X^b) \psi^a \psi^I$. Here
$\mathcal{A}$ will denote a {\em general} operator/state in the CFT and
should not be confused with the gauge field (or operator) $A_\mu
\psi^\mu$.
\paragraph{Associative 3-cycles.}
Let us now consider the $Q$-cohomology when restricted to an
associative 3-cycle $M$. On the 3-cycle the form $\phi$ must
satisfy \cite{Howe:2005je}
\begin{equation}
\begin{split}
\phi_{\mu\nu\sigma} &= R^\alpha_\mu R^\beta_\nu R^\gamma_\sigma \phi_{\alpha\beta\gamma} \; . \\
\end{split}
\end{equation}
\noindent Since $M$ is associative, $\phi$ acts as a volume form
on this cycle and, from the above, it is only non-vanishing for an
odd number of tangential indices\footnote{Here, and throughout the paper, we
will take $\epsilon$ to be the volume form on the (sub)manifold not merely the
antisymmetric tensor.}
\begin{equation}
\begin{split}
\phi_{abc} &= \epsilon_{abc} \; ,\\
\phi_{Ibc} &= 0 \; ,\\
\phi_{IJK} &= 0 \; .
\end{split}
\end{equation}
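As a concrete check (not taken from the paper: this uses the standard coordinate $G_2$ 3-form on $\mathbb{R}^7$ with unit normalization, which may differ from the conventions here by overall factors), one can build $\phi$ numerically and verify this vanishing pattern for an associative 3-plane, along with the contraction identity $\phi_{\mu ab}\phi_{\nu}{}^{ab} = 6\, g_{\mu\nu}$:

```python
import itertools
import numpy as np

# Standard G2 3-form on R^7 (0-indexed, unit normalization):
# phi = e^012 + e^034 + e^056 + e^135 - e^146 - e^236 - e^245
TERMS = [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), 1),
         ((1, 3, 5), 1), ((1, 4, 6), -1), ((2, 3, 6), -1), ((2, 4, 5), -1)]

def perm_sign(p):
    """Sign of a permutation of distinct numbers, via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return (-1) ** inv

phi = np.zeros((7, 7, 7))
for idx, val in TERMS:                       # fully antisymmetrize each term
    for p in itertools.permutations(idx):
        phi[p] = val * perm_sign(p)

tang, norm = [0, 1, 2], [3, 4, 5, 6]         # associative 3-plane + normal dirs

# phi restricted to the 3-plane is the volume form eps_abc ...
eps3 = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps3[p] = perm_sign(p)
assert np.allclose(phi[np.ix_(tang, tang, tang)], eps3)

# ... and components with one or three normal indices vanish:
assert np.allclose(phi[np.ix_(norm, tang, tang)], 0)   # phi_Ibc = 0
assert np.allclose(phi[np.ix_(norm, norm, norm)], 0)   # phi_IJK = 0

# Contraction identity in this normalization: phi_iab phi_jab = 6 delta_ij
assert np.allclose(np.einsum('iab,jab->ij', phi, phi), 6 * np.eye(7))
```

The same array `phi` can be reused for the coassociative checks below; only the split into tangential and normal index sets changes.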
\noindent The $Q$-closure of normal modes is given by (\ref{eqn_Q_closure})
\begin{equation}\label{eqn_3c_norm_cond}
\phi_{bK}^{\phantom{bK}J}\phi_{J}^{\phantom{J}aI} \nabla_{a} \theta_{I} =
0 \; ,
\end{equation}
\noindent where the index structure is enforced by the requirement that $\phi$
has an even number of normal indices.
To understand the geometric significance of equation
(\ref{eqn_3c_norm_cond}) in the abelian theory, recall that
$\theta^I$ is just a section of the normal bundle $NM$ of $M$ in
$Y$, which by the tubular neighborhood theorem can be identified
with an infinitesimal deformation of $M$. This equation is the
linear condition on $\theta^I$ such that the exponential map
(defined by flowing along a geodesic in $Y$ defined by $\theta^I$)
$\exp_\theta(M)$ takes $M$ to a new associative submanifold $M'$.
This is just a reformulation of the condition given in
\cite{McLean:1998}.
In \cite{McLean:1998} McLean defines a functional on the space of (integrable)
normal bundle sections by
\begin{equation}
F_{\gamma}(\theta) = (*\phi(x))_{\mu\nu\rho\gamma} \frac{\partial x^\mu}{\partial \sigma^a}
\frac{\partial x^\nu}{\partial \sigma^b} \frac{\partial x^\rho}{\partial \sigma^c}
\epsilon^{abc} \propto (*\phi(x))_{\mu\nu\rho\gamma} \frac{\partial x^\mu}{\partial \sigma^a}
\frac{\partial x^\nu}{\partial \sigma^b} \frac{\partial x^\rho}{\partial \sigma^c} \phi^{abc} \; .
\end{equation}
\noindent Here $x(t, \theta, \sigma) = {\mbox{exp}}_\theta(\sigma, t)$ is a geodesic curve
parameterized by the variable $0 < t < t_1$, which starts at a point $\sigma \in M$
with $\dot{x}(\sigma) = \theta$ at $t=0$, and flows after a fixed time to $x(t=t_1,
\theta, \sigma) \in M'$, the new putative associative submanifold. The functional is
just the pull-back\footnote{More precisely we are pulling back $\chi \in
\Omega^3(Y, TY)$, a tangent bundle valued 3-form, defined using the $G_2$ metric
$\chi^\alpha_{\phantom{\alpha}\mu\nu\rho} = g^{\alpha\beta} (*\phi)_{\beta\mu\nu\rho}$.} of
$*\phi$ from $M'$ to $M$ and it should vanish if $M'$ is associative.
For $M'$ to be associative it turns out to be sufficient to require that the
time derivative of $F$ at $t=0$ vanishes, which gives
\begin{equation}
\dot{F}_{\gamma}(\theta)|_{t=0} = (*\phi(x))_{Ibc\gamma} \partial_a \theta^I \phi^{abc} =
\phi_{I\gamma}^{\phantom{I\gamma}a} \partial_a \theta^I.
\end{equation}
\noindent This is equivalent to (\ref{eqn_3c_norm_cond}) since
each choice of $bK$ indices in that equation gives only one
non-vanishing term.
The space of such deformations is generally not a smooth manifold
and currently the moduli space of associative submanifolds of a
given $G_2$ manifold is not well understood (but see
\cite{akbulut-2004} for some recent work on this).
At first glance (\ref{eqn_3c_norm_cond}) looks like the linearized
equation (4.7) in \cite{Anguelova:2005cv} but the fields in that
action are actually embedding maps which are non-linear, whereas
the $\theta^I$ above are more closely related to linearized
fluctuations around fixed embedding maps
\footnote{In \cite{Anguelova:2005cv}, maps $x: \Sigma_3 \rightarrow Y$ from
an arbitrary three-manifold to a $G_2$ manifold are considered and
a functional which localizes on associative embeddings is defined.
There a reference associative embedding $x_0$ is chosen and used
to define a local coordinate splitting of $x^\mu$ into tangential
$x^a$ and normal $y^I$ parts. This is different from the present
situation where $\theta^I$ is an infinitesimal normal deformation of
an associative cycle. $\theta^I$ can be identified with a section of
the normal bundle (via the tubular neighborhood theorem) and is
essentially a linear object, whereas the $y^I$ above are a local
coordinate representation of a non-linear map. Basically
$\theta^I$ here are related to the linear variation $\delta y^I
|_{x_0}$ (evaluated at $x= x_0$) in \cite{Anguelova:2005cv}.}
.

\noindent {\bf Remark.} The harmonic condition for normal modes, as follows
from (\ref{eqn_deg1_harm}), is
\begin{equation}\label{eqn_normal_scnd}
(\pi^2_{\bf 7} )^{a\phantom{I}bJ}_{\phantom{a}I} \nabla_a \nabla_b \theta_J = 0 \; .
\end{equation}
This also has a nice geometric interpretation: it is the condition for
vector fields $\theta^I$ to extremize the action
\begin{equation}
\int_M \braket{Q \cdot \theta , Q \cdot\theta} \; ,
\end{equation}
\noindent on the associative 3-cycle. Theorem 5-3 in
\cite{McLean:1998} shows that the zeros of this action (which are
extrema since it is positive semi-definite) correspond to a family
of deformations through minimal submanifolds.
\paragraph{Coassociative 4-cycles.}
The consideration of the 4-cycle $M$ is similar to that of the
3-cycle, but now in the boundary condition we have $\eta_\phi =
-1$, so the non-vanishing components of $\phi$ must have an odd
number of normal indices and
\begin{equation}
\phi_{abc} = 0 \; .
\end{equation}
\noindent Let us first consider the $Q$-closure of $\theta_I$
\begin{equation}\label{eqn_norm_cond}
\phi_{Ic}^{\phantom{Ic}b}\phi_{b}^{\phantom{b}aJ} \partial_{[a}
\theta_{J]} = 0 \; .
\end{equation}
\noindent These are 24 equations depending on a choice of $I$ and $c$. Examining
the index structure, (\ref{eqn_norm_cond}) reduces to 4 independent equations
\begin{equation}\label{eqn_norm_cond2}
\phi_{b}^{\phantom{b}aJ} \nabla_{a} \theta_{J} = 0 \; ,
\end{equation}
\noindent where we have replaced the antisymmetrized partial derivative with
the covariant derivative on $M$ in the induced metric.
Following \cite{McLean:1998}, let us observe an isomorphism
between the normal bundle $NM$ of the 4-cycle $M$, and the space
of self-dual 2-forms $\Lambda^2_+(M)$ on $M$, given by
\begin{align}
\theta^I &\rightarrow \theta^I \phi_{Iab} \equiv \Omega_{ab} \; , \\
\Omega_{ab} &\rightarrow \phi^{Iab} \Omega_{ab} = \phi^{Iab} \phi_{Jab} \theta^J =
\frac{1}{6} \theta^I \; , \label{eqn_norm_sd}
\end{align}
\noindent where we have used the first identity in (\ref{eqn_g2_ident}).
To see that $\Omega_{ab}$ is self-dual we use the second identity in
(\ref{eqn_g2_ident}) and the fact that ${*\phi}_{ab}^{\phantom{ab}cd}
\propto \epsilon_{ab}^{\phantom{ab}cd}$ on $M$, so that
\begin{equation}\label{eqn_selfdaul}
(*_4 \Omega )_{ab} \propto {*\phi}_{ab}^{\phantom{ab}cd} \Omega_{cd} = \phi_{ab}^{\phantom{ab}cd} \theta^I
\phi_{Icd} = \frac{1}{6} \phi_{Iab} \theta^I = \frac{1}{6}
\Omega_{ab}\; .
\end{equation}
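As a hedged numerical illustration (again with the standard unit-normalized $\phi$ on $\mathbb{R}^7$, so the paper's factors of $1/6$ do not appear, and with a choice of orientation on the 4-plane), one can verify that $\Omega_{ab} = \theta^I \phi_{Iab}$ is self-dual on a coassociative 4-plane for every normal direction $\theta$:

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation of distinct numbers, via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return (-1) ** inv

# Standard unit-normalized G2 3-form on R^7 (0-indexed)
TERMS = [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), 1),
         ((1, 3, 5), 1), ((1, 4, 6), -1), ((2, 3, 6), -1), ((2, 4, 5), -1)]
phi = np.zeros((7, 7, 7))
for idx, val in TERMS:
    for p in itertools.permutations(idx):
        phi[p] = val * perm_sign(p)

tang, norm = [3, 4, 5, 6], [0, 1, 2]   # coassociative 4-plane, 3 normal dirs

eps4 = np.zeros((4, 4, 4, 4))          # volume form of the 4-plane
for p in itertools.permutations(range(4)):
    eps4[p] = perm_sign(p)

def hodge4(omega):
    """4d Hodge star on a 2-form in an orthonormal frame:
    (*omega)_ab = (1/2) eps_abcd omega_cd."""
    return 0.5 * np.einsum('abcd,cd->ab', eps4, omega)

# For each normal direction theta = e_I, Omega_ab = phi_Iab is self-dual on M
for I in norm:
    omega = phi[I][np.ix_(tang, tang)]
    assert not np.allclose(omega, 0)
    assert np.allclose(hodge4(omega), omega)
```

This realizes the isomorphism $NM \cong \Lambda^2_+(M)$ explicitly on a flat model of the coassociative cycle.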
\noindent Let us now use (\ref{eqn_norm_sd}) to see what
(\ref{eqn_norm_cond2}) implies for $\Omega_{ab}$;
\begin{equation}
0 = \phi_{b}^{\phantom{b}aJ} \nabla_{a} \phi_J^{\phantom{J}cd} \Omega_{cd} =
\nabla_a \bigl(\phi_{b}^{\phantom{b}aJ} \phi_J^{\phantom{J}cd} \Omega_{cd}\bigr)
= \nabla^a \Bigl( \frac{1}{9} \Omega_{ba}+ \frac{1}{18} \Omega_{ba} \Bigr)
= \frac{1}{6} \nabla^a \Omega_{ba} \; ,
\end{equation}
\noindent where the second equality uses that $\phi$ is covariantly constant
on a manifold of $G_2$ holonomy.
This equation is just $d^\dag \Omega = 0$, and since $\Omega$ is
self-dual, it also implies $d\Omega = 0$ so that $\Omega$ must be
harmonic. Thus the $Q$-cohomology for the normal modes is given by
$\theta^I$ which map to harmonic self-dual 2-forms on $M$.
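To spell out the step from co-closure to closure: on an oriented Riemannian
4-manifold one has $d^\dagger = -{*}\,d\,{*}$ on 2-forms, so for self-dual
$\Omega$ (i.e.\ ${*}\Omega = \Omega$)
\begin{equation}
d^\dagger \Omega = -{*}\, d \,{*}\,\Omega = -{*}\, d\, \Omega \; ,
\end{equation}
\noindent and hence $d^\dagger \Omega = 0$ forces ${*}\,d\Omega = 0$, i.e.\
$d\Omega = 0$, so that $\Delta \Omega = (d d^\dagger + d^\dagger d)\Omega = 0$.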
Since the $Q^\dag$-cohomology on the normal modes is trivial
(eqn. (\ref{eqn_Qdag_closure}) is trivially true for normal directions), such
$\theta^I$ are $Q$-closed and co-closed, and hence $Q$-harmonic. Thus their
$Q$-cohomology is isomorphic to the de Rham cohomology group $H^2_+(M)$ of
harmonic self-dual 2-forms on $M$. This corresponds to the geometric moduli
space of deformations of a coassociative 4-cycle, determined by McLean in
\cite{McLean:1998}.
\subsection{Tangential modes}
For the tangential modes the $Q$- and $Q^\dag$-closure conditions are
just (\ref{eqn_Q_closure}) and (\ref{eqn_Qdag_closure}) with all
the indices replaced by worldvolume indices $a, b, c, \dots$.
\paragraph{Associative 3-cycles.}
On the 3-cycle it is convenient to represent the $Q$-closure
condition using the projector $\pi^2_{\bf 7}$ in terms of $\phi$
which gives
\begin{equation}
\phi_{ab}^{\phantom{ab}c}\phi_{c}^{\phantom{c}de} \partial_{[d} A_{e]} =
0 \; .
\end{equation}
\noindent When pulled back to the associative cycle, $\phi$ is
proportional to the volume form and so this is
\begin{equation}
\epsilon_{ab}^{\phantom{ab}c}\epsilon_{c}^{\phantom{c}de} \partial_{[d} A_{e]} = 0
\; ,
\end{equation}
\noindent which is just multiple copies of the equation $
\partial_{[d} A_{e]} = 0$. Therefore any tangential deformation
corresponds to a flat connection on the 3-cycle.
Requiring the deformation $A_a \psi^a$ to also be $Q^\dag$-closed,
and hence a harmonic representative of $Q$-cohomology, implies
(\ref{eqn_Qdag_closure}), which can be viewed as enforcing a
covariant gauge condition.
Together this means that the $Q$-cohomology for tangential modes on $M$
is spanned by the space of gauge-inequivalent flat connections on $M$. This
matches the result for Lagrangian submanifolds in the A-model and also
the results derived using $\kappa$-symmetry for physical branes in
\cite{Marino:1999af}.
\paragraph{Coassociative 4-cycles.}
On the 4-cycle it is easier to use the representation of
$Q$-closure
\begin{equation}
\begin{split}
\bigl( (\delta^a_c \delta^b_d - \delta^b_c \delta^a_d) + 24
(*\phi)^{ab}_{\phantom{ab}cd} \bigr)
\partial_{[a} A_{b]} \psi^c \psi^d = 0 \; ,
\end{split}
\end{equation}
\noindent in terms of the 4-form $*\phi$, which is now
proportional to the volume form on $M$. Defining $F_{ab} =
\partial_{[a} A_{b]}$ to be the field strength of the $U(1)$ gauge
field, the equation above implies
\begin{equation}
( *_4 F)_{ab} = 12 (*\phi)^{cd}_{\phantom{cd}ab} F_{cd} = - F_{ab} \; .
\end{equation}
\noindent Thus $F_{ab}$ is constrained to be anti-self-dual (ASD)
on $M$. Therefore any tangential deformation on the 4-cycle is
given by a gauge field with ASD field strength. Note an important
difference with the case of normal modes. In the latter case each
$\theta^I$ is mapped uniquely to a harmonic self-dual 2-form
$\Omega_{ab}$ on $M$, so there are exactly $b^2_+ (M)$ such modes. In
this case however the tangential mode $A_a$ is the potential for a
gauge field with ASD field strength (i.e. an (anti-)instanton
configuration). Hence the tangential modes correspond to tangent
vectors on the moduli space of instanton configurations on $M$.
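One can also check numerically (same unit-normalized conventions as before, so the paper's factors of $12$ and $24$ are absorbed into the normalization) that $*\phi$ restricts to the volume form on the coassociative 4-plane, and that contracting a 2-form against it therefore acts as $\pm 2$ on self-dual/anti-self-dual parts, which is what singles out ASD field strengths:

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation of distinct numbers, via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return (-1) ** inv

TERMS = [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), 1),
         ((1, 3, 5), 1), ((1, 4, 6), -1), ((2, 3, 6), -1), ((2, 4, 5), -1)]
phi = np.zeros((7, 7, 7))
for idx, val in TERMS:
    for p in itertools.permutations(idx):
        phi[p] = val * perm_sign(p)

# Hodge dual 4-form: (*phi)_abcd = (1/3!) eps_abcdefg phi_efg
starphi = np.zeros((7, 7, 7, 7))
for p in itertools.permutations(range(7)):
    a, b, c, d, e, f, g = p
    starphi[a, b, c, d] += perm_sign(p) * phi[e, f, g] / 6.0

tang = [3, 4, 5, 6]                      # coassociative 4-plane
eps4 = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps4[p] = perm_sign(p)

# *phi restricted to the 4-plane is its volume form; phi_abc vanishes there
assert np.allclose(starphi[np.ix_(tang, tang, tang, tang)], eps4)
assert np.allclose(phi[np.ix_(tang, tang, tang)], 0)

# eps_abcd F_cd = 2 (*_4 F)_ab: eigenvalue +2 on SD, -2 on ASD 2-forms
F_sd, F_asd = np.zeros((4, 4)), np.zeros((4, 4))
F_sd[0, 1], F_sd[1, 0], F_sd[2, 3], F_sd[3, 2] = 1, -1, 1, -1      # e^01 + e^23
F_asd[0, 1], F_asd[1, 0], F_asd[2, 3], F_asd[3, 2] = 1, -1, -1, 1  # e^01 - e^23
assert np.allclose(np.einsum('abcd,cd->ab', eps4, F_sd), 2 * F_sd)
assert np.allclose(np.einsum('abcd,cd->ab', eps4, F_asd), -2 * F_asd)
```

With the paper's normalization the $\pm 2$ eigenvalues become the $\mp 1$ of the displayed eigenvalue equation for $*_4 F$.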
Again the condition $\nabla_a A^a = 0$ for $Q^\dag$-closure is simply
a gauge choice, implying that each $Q$-harmonic representative is
associated to a unique orbit of the gauge group (up to Gribov
ambiguity in the path integral). In fact, these harmonic
constraints $*_4 F = -F$, $d^\dagger A =0$ are precisely
(linearized versions of) the conditions cited in equation (5.22)
of \cite{Birmingham:1991ty} as defining the deformations of an
instanton configuration.
In physical string theory the anti-self-duality constraint on the field strength
of a coassociative brane has been determined in \cite{Marino:1999af} using
$\kappa$-symmetry of the DBI action. In \cite{Leung:2002qs}, a topological
field theory is proposed on calibrated 4-cycles whose total moduli space is a
product of the moduli space of geometric deformations with the moduli space of
ASD connections on $M$. We will see shortly that this is indeed the worldvolume
theory on coassociative 4-cycles for the open $G_2$ string.
\section{Scattering amplitudes}
Before considering the nature of the worldvolume theory of the
calibrated 3- and 4-cycles, it will be useful to consider some
scattering amplitudes in the open $G_2$ theory, as these can be
compared with field theoretic scattering amplitudes and will help
constrain the interaction terms in the worldvolume action. In
fact, as will be discussed in the next section, these interactions
can actually be related to string field theory, not just to
effective field theory, if one accepts that the $G_2$ string is
independent of its coupling constant, as argued in
\cite{deBoer:2005pt}.
\subsection{3-point amplitude}\label{subsec_3_pt}
The simplest amplitudes to calculate (and the only ones we will
need) are the 3-point functions of degree one fields $A_\mu
\psi^\mu$, which are essentially already calculated in
\cite{deBoer:2005pt}. Introducing Chan-Paton factors into the
calculation performed there gives the 3-point function of three
ghost number one fields as
\begin{equation}
\lambda^3 \frac{3}{2} f_{jik} \int_Y \phi^{\alpha\beta\gamma}(x) A_\alpha^i(x) A_\beta^j(x)
A_\gamma^k(x)\; ,
\end{equation}
\noindent where $f_{ijk}$ are the structure functions for the Lie
algebra of the gauge group $G$ and $\lambda$ is the normalization of
the bulk-boundary 2-point function in the $G_2$ CFT (these are
generally not relevant and will not be treated with a great deal
of care).
\paragraph{Tangential modes.}
For an associative 3-cycle embedding $i : M \rightarrow Y$, we
have the relation $i^*(\phi) = \epsilon$, where $\epsilon$ is the volume
form on $M$. If we now consider the previous calculation with the fields
$\psi^\mu$ restricted to lie along the 3-brane (so they carry indices
$a, b, c, \dots$), we find that
\begin{equation}
\braket{A A A} = \lambda^3 \frac{3}{2} f_{jik} \int_M \epsilon^{abc}(x) A_a^i(x) A_b^j(x)
A_c^k(x)\; .
\end{equation}
\noindent As will be discussed in the next section, this is an
interaction vertex for Chern-Simons theory, which is the part of
the effective worldvolume theory for the 3-cycle.
As mentioned in the previous section, on a coassociative 4-cycle
$\phi^{abc} = 0$ so the 3-point function of tangential modes
vanishes.
\paragraph{Normal and mixed modes.}
We can now consider a mixture of normal and tangential modes
in the 3-point function. The boundary conditions on the open
$G_2$ string, preserving the extended algebra on a 3-cycle, imply
\cite{Becker:1996ay} that only $\phi^{abc}$ and $\phi^{IJc}$ are
non-vanishing. Thus $\phi$ is only non-vanishing for an even
number of indices in Dirichlet directions, so we can only scatter
two normal modes and one tangential mode. This gives
\begin{equation}
\braket{\theta \theta A} = \lambda^3 \frac{3}{2} f_{jik} \int_M \phi^{IJc}(x) \theta_I^i(x) \theta_J^j(x) A_c^k(x) \; .
\end{equation}
\noindent On a 4-cycle the non-vanishing components of $\phi$ have
an odd number of normal indices, and it is easy to see that the
only non-vanishing 3-point functions of degree one modes are
\begin{equation}
\begin{split}
\braket{\theta A A} = \lambda^3 \frac{3}{2} f_{jik} \int_M \phi^{Iab}(x)
\theta_I^i(x) A_a^j(x) A_b^k(x) \; , \\
\braket{\theta \theta \theta} = \lambda^3 \frac{3}{2} f_{jik} \int_M \phi^{IJK}(x)
\theta_I^i(x) \theta_J^j(x) \theta_K^k(x) \; .
\end{split}
\end{equation}
\section{Worldvolume theories}\label{sec_worldvolume_theory}
We have already determined the BRST cohomology of normal and
tangential modes on 3- and 4-cycles. These should be thought of
as marginal deformations of the theory preserving the twisting on
the worldsheet (by general arguments that map an element of BRST
cohomology to a descendant that can generate a deformation). When
considered from the spacetime perspective, the elements of BRST
cohomology should translate into spacetime fields and we expect
the BRST closure condition to correspond to the {\em linearized}
spacetime equations of motion. This is true in physical string
theory and can be derived more rigorously via open string field
theory for topological strings, as will be reviewed below.
For the normal modes, the BRST cohomology condition can be
translated into constraints on deformations of the calibrated
submanifolds, such that these modes correspond to tangent vectors
on the moduli space of (co)associative cycles in the $G_2$
manifold.
For tangential modes, the BRST cohomology condition looks
different for the different cycles. On the 3-cycle, BRST closure
and co-closure of the tangential mode $A_a$ imply $dA = 0$ and
$d^\dagger A =0$, so that $A$ is a flat connection in a fixed
gauge, and we expect a gauge theory whose solutions correspond to
gauge-inequivalent flat connections. On the 4-cycle, BRST closure
and co-closure of $A_a$ imply
\begin{equation}\label{eqn_4cycle_tangent}
*_4 dA = - dA \; , \qquad \qquad d^\dagger A = 0 \; .
\end{equation}
These equations are the linearization of the condition for a
variation of a gauge field to be a deformation of an instanton
solution (c.f. equation (5.50) in \cite{Birmingham:1991ty}). This
suggests, in analogy with the geometric moduli, that the theory on
the worldvolume should be a gauge theory localizing on instantons
and that marginal tangential deformations of the worldsheet theory
should correspond to tangent vectors on the moduli space of
instantons.
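As a brief sketch of the linearization mentioned above (in our own notation:
the background connection $A_0$ and fluctuation $a$ are illustrative labels,
not taken from \cite{Birmingham:1991ty}), writing $F = dA + A \wedge A$ and
$A = A_0 + a$ for an instanton background $A_0$, the anti-self-duality
equation and gauge condition linearize to
\begin{equation}
(1 + *_4)\, d_{A_0} a = 0 \; , \qquad d_{A_0}^\dagger a = 0 \; ,
\end{equation}
\noindent where $d_{A_0}$ is the covariant exterior derivative; for the
trivial background $A_0 = 0$ this reduces to $*_4\, dA = -dA$ and
$d^\dagger A = 0$.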
In the case of both the 3- and 4-cycle, the worldvolume theory
will include contributions from the normal and tangential modes,
and so should result in a theory whose moduli space includes the
normal and tangential deformations that we have determined in
section \ref{subsec_moduli}. We also expect that the other
physical states, which are massless in the twisted theory, may
still play a role in the spacetime action even though they cannot
be used to generate boundary deformations of the CFT\footnote{Only
a ghost number one state has a 1-form descendant with ghost number
0; ghost number $p$ states have $p$-form descendants with ghost
number zero, so to preserve the ghost number in the worldsheet
action we would have to integrate them over a $p$-cycle on the
worldsheet.}, and hence are not moduli of the theory.
To determine the relevant spacetime actions and how the normal and tangential
moduli, as well as the higher ghost number fields, come into play we will start
by considering Witten's derivation of Chern-Simons theory from open string field
theory (OSFT). We will find that by restricting our attention to tangential
modes on a calibrated 3-cycle we can re-derive Witten's Chern-Simons theory
simply by following the arguments of \cite{Witten:1992fb}. We will then attempt
to generalize this derivation to include normal modes. Their contribution is
expected to be related to the topological theories in \cite{Anguelova:2005cv,
Bonelli:2005rw}, whose actions also localize on the moduli space of associative
3-cycles (though, as we will see, this relation is mostly at the level of
equations of motion). Following a comment in \cite{Witten:1992fb}, we expect the
higher string modes to be related to additional fields generated by gauge-fixing
the CS action. This is discussed in appendix \ref{app_ghosts}.
Once we have transplanted Witten's arguments for special
Lagrangian branes in a Calabi-Yau to associative branes in a $G_2$
manifold, we will apply them to branes wrapping coassociative
cycles and branes wrapping all of $Y$.
\subsection{Chern-Simons theory as a string theory}
In \cite{Witten:1992fb} Witten argues that the open A-model on
$T^*M$ reduces exactly to Chern-Simons theory on $M$, for any
3-manifold $M$. There are several arguments supporting this claim
and we will attempt to generalize them below to the $G_2$ case.
Before doing so, we first review them briefly.
The first argument concerns $Q$-invariance of a boundary term in
the string path integral. In general the open string path
integral can be augmented by coupling to a \lq classical'
background gauge field. This is done by including an additional
piece in the integrand of the path integral which is of the form
\begin{equation}\label{eqn_boundry_coupling}
\textrm{Tr} P \exp\left( \oint_{\partial \Sigma} X^*(A) \right) \; .
\end{equation}
\noindent Here $A$ is a (non-abelian) connection defined on the
brane $M$ and the term above is a Wilson loop for the pull-back of
this connection along the boundary of the worldsheet $\Sigma$.
Requiring that this new term preserve the $Q$-invariance of the
action implies that the field strength $F = dA + A\wedge A$ must
vanish. Hence open strings in the A-model can only couple to flat
connections.
To more rigorously establish that the relevant spacetime theory is
Chern-Simons theory, Witten considers the OSFT action
\begin{equation}\label{eqn_osft_action}
\int \mathcal{A} \star Q \mathcal{A} + \frac{2}{3} \mathcal{A} \star \mathcal{A} \star \mathcal{A} \; ,
\end{equation}
\noindent where $\mathcal{A}$ is a functional of the open string modes
quantized on a fixed time slice, and $Q$ is the appropriate BRST
operator of the theory. The integration measure is defined by the
path integral over the disc\footnote{There is a subtlety here. In
OSFT for the bosonic string this measure involves gluing together
several discs using conformal transformations, but in the setting
of a topological theory all the states have conformal weight zero
under the twisted stress-tensor so they do not transform under
conformal transformations.}. The linearized equations of motion
(coming from the quadratic part of the OSFT action) enforce the
requirement that physical states are BRST-closed on-shell:
\begin{equation}
Q \mathcal{A} = 0 \; .
\end{equation}
\noindent In the large coupling constant limit ($t \rightarrow \infty$) the
$Q$-cohomology can be studied by restricting to functionals $\mathcal{A}$ that depend
only on the string zero-modes, $X^\mu_0$ and $\psi^\mu_0$. The BRST operator,
$Q$, acting on such states, reduces to the exterior derivative $d$ on $T^* M$
(which we can write in terms of the zero modes)
\begin{equation}
d = dx^\mu \frac{\partial}{\partial x^\mu} = \psi^\mu_0 \frac{\partial}{\partial X^\mu_0} \; .
\end{equation}
\noindent Since the $t \rightarrow \infty$ limit is exact in the A-model (modulo
world-sheet instantons which are not present when the target space is $T^*M$),
these identifications are not approximations but rather exact statements. This
allows one to identify the string field action with Chern-Simons theory.
To make this identification one must identify the string field
$\mathcal{A}$ with the target space gauge field $A_\mu(x) dx^\mu$. The
general form for $\mathcal{A}$ at large $t$ is given by the expansion
\begin{equation}\label{eqn_str_fld}
\mathcal{A}(X^\mu, \psi^\mu) = f(X_0) + A_\mu(X_0) \psi_0^\mu + \beta_{\mu\nu}(X_0)
\psi_0^\mu \psi_0^\nu + C_{\mu\nu\rho}(X_0)\psi_0^\mu \psi_0^\nu
\psi_0^\rho\; ,
\end{equation}
\noindent in 3 dimensions. The reason that $\mathcal{A}$ reduces to
$A_\mu(X_0)\psi_0^\mu$ is simply that only ghost number one string
fields should be considered, and here ghost number coincides with
fermion number. Witten comments that it is possible to relate the
other terms in the expansion to ghost and anti-ghost fields
derived from gauge-fixing CS theory \cite{Axelrod:1991vq}, or
alternatively gauge-fixing OSFT. In appendix \ref{app_ghosts} we
will show that this is indeed the case when we repeat this
derivation on an associative cycle in a $G_2$ manifold.
Witten provides a final argument for CS theory as the string field
theory for the A-model, namely that the open string propagator on
the strip reduces to the CS propagator in the large $t$ limit.
This is essentially the statement that $\frac{b_0}{L_0} =
\frac{d^\dag}{\Box}$. For the topological string, $b_0$ is
replaced by the superpartner of the stress-energy tensor in the
twisted theory (i.e. $Q^\dag$ in $T = \{Q, Q^\dag\}$). In the
$G_2$ case this would be (tentatively) $G_{-\frac{1}{2}}^\uparrow$
\cite{deBoer:2005pt}.
We will now attempt to establish the validity of these arguments
for the open $G_2$ string ending on a calibrated 3-cycle. Before
doing so we should mention that what is missing in this treatment
is a discussion of the normal modes on the brane. It is not
immediately clear whether these modes modify the Chern-Simons
action on the special Lagrangian cycle (though one would imagine
they should in order to capture the dependence of the theory on
the geometric moduli of the brane).
\subsection{Chern-Simons theory on calibrated submanifolds}
If we consider only the tangential modes on a calibrated cycle
then the $Q$-closure conditions become (in the free field
approximation)
\begin{align}
\partial_a f(X) = 0 \; ,\label{eqn_q_close_f} \\
\epsilon^{abc} \partial_{a} A_{b} = 0 \; ,\nonumber \\
\epsilon^{abc} \partial_{a} \beta_{bc} = 0 \; ,\label{eqn_q_close_bt}
\end{align}
\noindent for the degree 0, 1, and 2 components of the string field. Here we
have already used that $\phi_{abc} \propto \epsilon_{abc}$ on the 3-cycle. This is
consistent with the notion that $Q = G_{-\frac{1}{2}}^\downarrow = d$ in the large $t$ limit. More
generally, the complex (\ref{eqn_g2_complex}), which encodes the BRST
cohomology, reduces, when restricted to the tangential directions on an
associative 3-cycle, to the de Rham complex so $Q = d$ and $Q^\dag = d^\dag$.
Recall, from the discussion in Section \ref{subsec_g2_hilb}, that, in contrast
to the situation in the A-model, we do not have an explicit worldsheet action to
work with and hence do not have a Hamiltonian formulation which might directly
establish the $t$ invariance of the action. Assuming this invariance
nonetheless, the equations above imply that the quadratic part of the string
field action reduces to the quadratic part of Chern-Simons theory. That is, in
the large $t$ limit, the $Q$-closure constraint becomes the linearized CS
equation of motion. Here we have also considered modes with fermion number
different from one; these will be discussed in appendix \ref{app_ghosts} in
relation to gauge-fixing Chern-Simons theory.
Also in this limit (the free string approximation), the
$Q^\dag$-closure constraints become
\begin{align}
\nabla_a A^a = 0 \; , \nonumber \\
\nabla_a \beta^{ab} = 0 \; . \label{eqn_qdag_close_bt}
\end{align}
\noindent The first equation is just the gauge choice $d^\dag A = 0$. We will
discuss the spacetime interpretation of $\beta_{ab}$ in appendix \ref{app_ghosts}
and it will be clear why it satisfies this constraint. Let us now translate the
rest of Witten's arguments to the $G_2$ case.
The argument is essentially that open string field theory with the
action (\ref{eqn_osft_action}) reduces to Chern-Simons theory in
the large $t$ limit, if one restricts the string field to have
ghost number 1 (which, in the $G_2$ case, translates into fermion
number 1 because the ghost number is the grading for the
$Q$-cohomology, and that is given by the fermion number). That
this holds for the kinetic term follows because we have shown
that the linearized CS action is the same as the linearized
$Q$-closure condition.
For the interaction term this just follows from the fact that the 3-pt function
of the ghost number one parts of $\mathcal{A}$ reduces to the wedge products of the Lie
algebra valued 1-forms, $A_a(x)dx^a$. This is because, at large $t$, $\mathcal{A}$
depends only on the zero modes so the ghost number one part has the form
$A_a(X_0) \psi^a_0$ which can be mapped to one-forms in spacetime. We showed in
section \ref{subsec_3_pt} that the 3-pt function of these modes is just the 3-pt
correlator of CS theory.
Witten also shows that the propagator of the OSFT reduces, in the
$t \rightarrow \infty$ limit, to the CS propagator. We will reproduce
this argument briefly here for the $G_2$ case. A much more
complete treatment (of the analogous A/B-model argument) can be
found in section 4.2 of \cite{Witten:1992fb}. The open string
propagator is simply given by the partition function of a finite
strip, of length $T$ and width 1 with the standard metric
\begin{equation}
ds^2 = d\sigma^2 + d\tau^2 \; .
\end{equation}
\noindent In OSFT the moduli space of open Riemann surfaces is
built by gluing such strips together. The strip has one modulus,
namely its length, so in calculating the partition function, one
insertion of $G_{-\frac{1}{2}}^\uparrow$ folded against a Beltrami differential $\mu$
is required \cite{deBoer:2005pt}
\begin{equation}
\int d\sigma d\tau \; \mu(\sigma, \tau) G^\uparrow(\sigma, \tau) \; .
\end{equation}
\noindent The Beltrami differential here corresponds to a change of the metric
that alters the length of the strip; it is given by a function $f(\tau) =
\delta T \cdot \delta(\tau - \tau_0)$ for any $\tau_0$ on the strip. Here $\delta T$
is the infinitesimal change in the length of the strip generated by this
differential. Thus the insertion becomes
\begin{equation}
\int d\sigma d\tau \; \delta T \cdot \delta (\tau - \tau_0) G^\uparrow(\sigma,
\tau) = \delta T \int d\sigma \; G^\uparrow(\sigma, \tau_0) \; .
\end{equation}
\noindent Because we have been working in the NS sector, the integral of the
current $G^\uparrow(z)$ around a contour (given by fixed $\tau_0$ which maps to a
half-circle in the complex plane) will just give a $G_{-\frac{1}{2}}^\uparrow$ insertion in the
world-sheet path integral, so its overall form is
\begin{equation}
\int_0^\infty dT \, (G_{-\frac{1}{2}}^\uparrow) e^{-TL_0} = \frac{G_{-\frac{1}{2}}^\uparrow}{L_0} \; .
\end{equation}
\noindent By our previous identification of $G_{-\frac{1}{2}}^\uparrow$ with $d^\dag$ (this becomes
$d^\dag$ on $M$ for tangential modes) and $L_0$ with $\Box$ in the large $t$
limit, this becomes $\frac{d^\dag}{\Box}$ which is the CS propagator
\cite{Witten:1992fb}. One should note that in the A-model this follows rather
directly but in the $G_2$ string it depends on the fact that $\phi_{abc} \propto
\epsilon_{abc}$ on the associative cycle (so, as previously mentioned, $Q=\check{D}$
reduces to $d$) and thus, in particular, might not hold on a coassociative
cycle.
There is a final argument one can make in favour of CS theory, though it is more
heuristic. We want to argue, as Witten has, that coupling the worldsheet to a
classical background gauge field via a term such as (\ref{eqn_boundry_coupling})
requires this background to satisfy $F = 0$ which is the equation of motion for
Chern-Simons theory.
In \cite{deBoer:2005pt}, a heuristic version of the twisted $G_2$ action is
derived using the decomposition of worldsheet fermions into $\uparrow$ and $\downarrow$
components, $\psi = \psi^\uparrow + \psi^\downarrow$. This is heuristic because this
decomposition is essentially quantum and is not understood at the level of
classical fields. Using this decomposition we can check Witten's argument for
the BRST-invariance of a boundary coupling to a classical configuration of the
gauge field
\begin{equation}\label{eqn_gauge_coupling}
\textrm{Tr} P \exp\left( \oint_{\partial \Sigma} A_\mu \partial_t X^\mu \right) \; .
\end{equation}
\noindent The variation of this factor in the partition function under $[Q,
X^\mu] = \delta X^\mu$ is given by
\begin{equation}
\textrm{Tr} P \oint_{\partial \Sigma} \delta X^\mu \partial_t X^\nu F_{\mu\nu} d\tau \cdot
\exp\left(
\int_{\partial \Sigma; \tau} A_\mu \partial_t X^\mu \right) \; ,
\end{equation}
\noindent where the contour in the exponent must start and end at the point
$\tau$ \cite{Witten:1992fb}. To make this variation vanish requires that the
first term vanish and since \cite{deBoer:2005pt}
\begin{equation}
\delta X^\mu = i \epsilon_L \psi^{\downarrow\mu}_L + i \epsilon_R
\psi^{\uparrow\mu}_R\; ,
\end{equation}
\noindent this implies that $F_{\mu\nu} = \partial_{[\mu} A_{\nu]} + [A_\mu,
A_\nu] = 0$ for classical configurations of the background gauge field $A$.
This is, of course, the Chern-Simons equation of motion.
In the physical theory one could also couple to a term of the form
$C_{\mu\nu}\psi^\mu\psi^\nu$, but no such terms seem to affect the
derivation of $F = 0$ above in the A-model, because any such
coupling results in a variation which cannot cancel the
gauge-field coupling.
The only boundary term in a topological theory should be generated by the
descent procedure starting from a $Q$-closed ghost number one field whose
descendant is a ghost number zero one-form that is given by
\begin{equation}
\{G_{-\frac{1}{2}}^\uparrow, A_\mu \psi^\mu\} = A_\mu \partial_t X^\mu + \pi^2_{\bf 14}(\partial_\mu A_\nu
\psi^\mu\psi^\nu) \; .
\end{equation}
\noindent Both these terms have conformal weight 1 and, by virtue
of a standard descent argument, are $Q$-closed up to a total
derivative. To apply Witten's argument here it is necessary to
understand why the second term cannot appear on the boundary. This
follows because we are considering modes tangential to an
associative cycle and one can check that on such a cycle
$\Lambda^2 T^*M \subset \iota^*(\Lambda^2_7(Y))$ (here $\iota: M \rightarrow Y$ is the
embedding of the 3-cycle into the ambient $G_2$ manifold).
To derive the Chern-Simons action we have considered only the ghost number one
part of the string field $\mathcal{A}$ as this is the standard prescription in OSFT. In
some cases, however, it is desirable to consider the full expansion of $\mathcal{A}$ and
include fields of all ghost number in the action. This is because the higher
modes just play the role of ghosts in gauge-fixing the OSFT action
\cite{Thorn:1986qj}. This is a special feature of Chern-Simons like theories
\cite{Axelrod:1991vq} and so will apply for all the brane theories that we
derive. We include an appendix \ref{app_ghosts} describing the general form of
the gauge-fixed actions for these theories that we will need when we consider
their one-loop partition functions.
\subsection{Normal mode contributions}
In the previous section we argued that the tangential modes of the
$G_2$ worldsheet correspond to gauge fields in a CS theory on the
3-cycle and that, when higher string modes are included, this becomes a
gauge-fixed CS theory.
We are also interested in terms in the effective action that include the normal
modes. The most direct way to get at a normal mode action is simply to
expand the terms $\mathcal{A} \star Q \mathcal{A} $ and $\mathcal{A} \star \mathcal{A} \star \mathcal{A} $ in the OSFT
action. Ignoring the higher string modes, we have
\begin{equation}
\begin{split}
\mathcal{A} &= A_a\psi^a + \theta_I \psi^I \; ,\\
Q \mathcal{A} &= \{Q, A_a \psi^a\} + \{Q, \theta_I \psi^I\} \\
&= \phi_{IJc}\phi^{cde} \nabla_d A_e \psi^I \psi^J
+ \phi_{abc}\phi^{cde} \nabla_d A_e \psi^a \psi^b
+ \phi_{aIJ}\phi^{JbK} \nabla_b \theta_K \psi^a \psi^I \; .
\end{split}
\end{equation}
\noindent Recall that the integration of expressions involving
string fields, $\mathcal{A}$, in the OSFT action corresponds to evaluating
the correlator of the integrand, decomposed in individual string
modes on the disc. In the $G_2$ string only certain combinations
of string modes will have a non-vanishing 3-pt function depending
on the conformal blocks the modes correspond to (see
\cite{deBoer:2005pt}). In our calculation of the 3-pt functions
above, this translates into non-vanishing 3-pt functions when we
can contract the spacetime indices of the string modes with the
3-form $\phi$. From our previous calculation of three point
functions in section \ref{subsec_3_pt} (see also
App. \ref{subsec_ghost_correl}) we find the generic form of a 3-pt
function on the disc
\begin{equation}\label{eqn_gen_3pt}
\begin{split}
\braket{\lambda \omega} = \int_M \phi^{\mu\nu\rho} \textrm{Tr} \left( \lambda_\mu \omega_{\nu\rho} \right) \; ,\\
\braket{\alpha \beta \gamma} = \int_M \phi^{\mu\nu\rho} \textrm{Tr} \left( \alpha_\mu \beta_\nu
\gamma_\rho \right) \; ,
\end{split}
\end{equation}
\noindent (where, e.g. $\omega = 1/2 \, \omega_{\mu\nu} (x) \psi^\mu \psi^\nu$). Doing this gives the following action
\begin{equation}\label{eqn_cs_normal_action}
S_{\textrm{deg 1}} = \int_M
\phi^{abc} \, \textrm{Tr} \left( A_a \nabla_b A_c + \frac{2}{3} A_a A_b A_c \right)
+ \phi^{IaJ} \, \textrm{Tr} \bigl( \theta_I ( \nabla_a \theta_J + [ A_a , \theta_J ] )
\bigr)\; ,
\end{equation}
\noindent where the trace $\textrm{Tr}$ is over Lie algebra indices. The interaction terms can be calculated directly in string
perturbation theory by checking 3-pt disc amplitudes whereas the kinetic terms
coming from $\mathcal{A} \star Q \mathcal{A}$ vanish in perturbation theory because on-shell
string modes satisfy $Q \mathcal{A} = 0$. To determine them we either simply consider
all the terms of the correct degree in the string mode decomposition of $\mathcal{A}
\star Q \mathcal{A}$ or \lq formally' calculate 3-pt functions assuming the field $\mathcal{A}$
is off-shell. Both result in the same action and, as a consistency check, the
linearized equations of motion for this action correspond to the BRST closure of the
string modes.
We have not been too careful with the coefficients in (\ref{eqn_cs_normal_action})
but this is because most coefficients either follow from gauge invariance or can
be absorbed into field redefinitions.
The equations of motion for this action are
\begin{align}
\epsilon^{abc} F_{bc} &= \phi^{IaJ} [ \theta_I , \theta_J ] \; , \label{eqn_F_nonflat} \\
\phi^{aIJ} \bigl( \nabla_a \theta_J + [ A_a , \theta_J ] \bigr) &= 0 \; . \label{eqn_gauged_normal}
\end{align}
\noindent In the abelian case this just reduces to $F=0$ and the
geometric constraint (\ref{eqn_norm_cond2}) on the normal modes
describing associative deformations. In the non-abelian case this
is no longer true but of course in this setting we have lost the
simple association of $\theta_I$ with normal deformations of the
brane, as the string modes become matrix-valued.
At first glance the equations above look similar in form to the Seiberg-Witten
type equations (32) and (40) in \cite{akbulut-2004}. This reference is concerned
with resolving the singular structure of the moduli space of deformations of
associative submanifolds in a general $G_2$ manifold by considering a larger
space of deformations where one is allowed to also deform the induced connection
on the normal bundle to make the deformed submanifold associative. This amounts
to a choice of complex structure on the normal bundle, for each deformation of
the 3-submanifold, such that its reduced structure group $U(2) \subset SO(4)$ in
the $G_2$ manifold is compatible with the induced metric connection. This
additional topological restriction on the $G_2$ manifold is something we have
not assumed and indeed, for general gauge group, there is no obvious relation
between ({\ref{eqn_F_nonflat}}), ({\ref{eqn_gauged_normal}}) and the purely
geometric equations in \cite{akbulut-2004}\footnote{It is possible that the
$U(1)$ part of our gauge connection could be related to the $U(1) \subset U(2)$
part of the induced connection on the normal bundle with fixed complex structure
in \cite{akbulut-2004}.}.
\subsection{Anti-self-dual connections on coassociative submanifolds}
We expect that the worldvolume theory on the 4-cycle should have
equations of motion corresponding to the BRST closure of the
associated string modes. Let us consider the following action
\begin{equation}\label{eqn_4cycle_actfull}
S[A,\theta ] = \int_M \phi^{Iab} \textrm{Tr} \bigl( \theta_I F_{ab} \bigr) +
\frac{2}{3} \, \phi^{IJK} \textrm{Tr} \bigl( \theta_I \theta_J \theta_K
\bigr) \; .
\end{equation}
As with the action on a 3-cycle we cannot directly check the quadratic terms by
considering a string correlator because the relevant correlators vanish for
on-shell states as dictated by the fact that the quadratic terms in the action
determine the BRST closure condition. Rather, we can compare the linearized
equations of motion (generated purely by the quadratic terms) and the string
BRST closure condition and these should match.
The abelian $\theta_I$ equation of motion is now just $\phi^{Iab} F_{ab} =
0$, which implies anti-self-duality of $F$ and so matches the BRST
closure condition. The $A_a$ equation of motion is
\begin{equation}\label{eqn_gauged_normal2}
\phi^{abI} D_b \theta_I = 0 \; ,
\end{equation}
\noindent where $D_a = \nabla_a + [ A_a , ]$ on $M$. This equation is more
conveniently expressed in terms of the self-dual 2-form $\omega_{ab}
= \phi_{abI} \theta^I$ on $M$. At the linear level, the equation
above implies $\omega$ is co-closed, and hence also closed
since it is self-dual. Thus we have the correct linearized
condition for coassociative deformations found by McLean.
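\noindent Explicitly (a short check, with signs depending on orientation conventions): since $\nabla \phi = 0$ on the $G_2$ manifold, at the linear level the equation above gives
\begin{equation}
\nabla^b \omega_{ba} = \phi_{baI}\, \nabla^b \theta^I = - \phi^{abI}\, \nabla_b \theta_I = 0 \; ,
\end{equation}
\noindent while for a self-dual 2-form in 4 dimensions $d^\dagger \omega = - * d \, {*\omega} = - * d \omega$, so co-closure and closure coincide.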
We can also consider the formal structure of the term $\mathcal{A} \cdot Q
\mathcal{A}$ in the OSFT action, letting $\mathcal{A}$ go \lq off-shell', and
indeed we find matching.
As a further check we should compare the interaction term to
string scattering amplitudes. The 3-pt function for a general
degree one vertex operator in the topological theory is given by
\begin{equation}
\lambda^3 \frac{3}{2} \int_Y \phi^{\alpha\beta\gamma}(x) \textrm{Tr} \bigl( A_\alpha (x) A_\beta
(x) A_\gamma (x) \bigr) \; .
\end{equation}
\noindent On the 4-cycle the only non-vanishing components of
$\phi$ must have an even number of tangential indices, which
implies the following non-vanishing amplitudes
\begin{equation}\label{eqn_3pt_4_cycle}
\begin{split}
\lambda^3 \frac{3}{2} \int_M \phi^{Iab} \textrm{Tr} \bigl( \theta_I A_a A_b \bigr) \; , \\
\lambda^3 \frac{3}{2} \int_M \phi^{IJK} \textrm{Tr} \bigl( \theta_I
\theta_J \theta_K \bigr) \; .
\end{split}
\end{equation}
The first line above corresponds to the cubic interaction $\theta A A$ in the
first term of (\ref{eqn_4cycle_actfull}), while the second correlator in
(\ref{eqn_3pt_4_cycle}) implies the cubic vertex in the second term. This last
term, of course, only corrects the non-abelian instanton equation of motion
\begin{equation}\label{eqn_FnonSD}
\phi^{Iab} F_{ab} = - \phi^{IJK} [ \theta_J , \theta_K ] \; ,
\end{equation}
\noindent and so has no effect on the geometric interpretation in the abelian
case.
In \cite{Leung:2002qs} Leung proposes a 1-form on the space $\mathcal{C} =
{\mbox{Map}}(M, Y) \times \mathcal{A}(M)$ where $M$ is a 4-manifold, $Y$ is a $G_2$
7-manifold and $\mathcal{A}(M)$ is the space of Hermitian connections on the gauge
bundle $E \rightarrow M$ (with fibre $G$)
\begin{equation}
S(f, D_E)(v, B) = \int_M \textrm{Tr} \left( f^*(\iota_v \phi) \wedge F_E +
f^*(\phi) \wedge B \right) \; .
\end{equation}
\noindent Here $(f, D_E) \in \mathcal{C}$ and $(v, B) \in T_{(f, D_E)} \mathcal{C}$ with $v$ a
section of $TY$, $B \in \Lambda^1(M, {\mbox{ad}} \, G)$ and $F_E$ the curvature
of $D_E$ (here $f$ is an element of ${\mbox{Map}}(M, Y)$ and should not be
confused with the $f$ used to denote the zero fermion component of the string
field). The one-form $S$ is invariant under diffeomorphisms of $M$ and its zeros
correspond to coassociative embeddings $f(M) \subset Y$ with anti-self-dual
connections on them. This follows from the fact that $S$ must vanish when
evaluated on arbitrary vectors, $B$, implying $f^*(\phi) = 0$, and arbitrary $v$
implying that $F_E = -{*F}_E$.
To compare with our theory we do not want to consider the space of all such
maps, but only the local deformations of a given coassociative
$f(M)$ in $Y$, so we only consider fluctuations around a fixed
coassociative submanifold. Thus we will take $f$ to be a
coassociative embedding implying that the second term in the
action above vanishes and ${*\phi}$ defines the volume form on the
embedded coassociative 4-cycle $f(M)$. Thus, we rewrite Leung's
functional to generate the following action
functional\footnote{More precisely, Leung's one-form, $\Phi_0$,
descends to a closed one-form on the space \lq
$\mathcal{C}/\textrm{Diffeo}(M)$' and this form is locally the derivative
of a functional, $\mathcal{F}$, whose critical points are zeros of
$\Phi_0$. Our action is most closely related to this functional.}
\begin{equation}\label{eqn_4cycle_action}
S^0[A , \theta ] = \int_M \textrm{Tr} \left( f^*(\iota_\theta \phi) \wedge F
\right)
= \int_M \phi^{Iab} \textrm{Tr} \bigl( \theta_I \left( \partial_a A_b + A_a A_b \right)
\bigr)\; ,
\end{equation}
\noindent using the identity $\epsilon_{abcd} \, \phi^{cdI} = 2 \,
\phi_{ab}^{\;\;\;\; I}$ on the coassociative cycle.
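\noindent The second equality can be checked in components (up to an overall normalization, which we are not tracking): restricting to $M$ one has $(f^*(\iota_\theta \phi))_{ab} = \theta^I \phi_{Iab}$, so
\begin{equation}
\textrm{Tr} \left( f^*(\iota_\theta \phi) \wedge F \right)
= \frac{1}{4} \, \epsilon^{abcd} \, \phi_{Iab} \, \textrm{Tr} \bigl( \theta^I F_{cd} \bigr) \, d{\mbox{vol}}_M
= \frac{1}{2} \, \phi^{Icd} \, \textrm{Tr} \bigl( \theta_I F_{cd} \bigr) \, d{\mbox{vol}}_M \; ,
\end{equation}
\noindent where the last step uses the identity quoted above.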
Thus we see that the open $G_2$ string has reproduced the action Leung suggested
in order to study SYZ in the $G_2$ setting and it has also introduced an
additional term that is not present in Leung's action.
\subsection{Seven-cycle worldvolume theory}
As in physical string theory, it is natural to expect the 3- and 4-cycle theory
to look like the dimensional reduction of a theory on the whole 7-manifold
(which is trivially calibrated by its volume form $\phi \wedge *\phi$). Lee et
al \cite{Lee:2002fa}, who propose theories closely related to our 3- and 4-cycle
theories, claim that this theory should be related to (deformed)
Donaldson-Thomas theory \cite{Donaldson:1996kp}.
The 7-cycle theory can be determined exactly the same way as the 3- and 4- cycle
theory. For the interaction term we just calculate the 3-pt functions of the
(ghost number one) terms in $\braket{\mathcal{A} \star \mathcal{A} \star \mathcal{A}}$ given by
(\ref{eqn_gen_3pt}). The kinetic terms, defining the linearized equations of
motion, should correspond to $Q \mathcal{A} = 0$ and they should match $\mathcal{A} \star Q
\mathcal{A}$.
This gives the following action
\begin{equation}\label{sevenwv}
\begin{split}
S &= \int_Y \phi^{\mu\nu\rho} \textrm{Tr} \left( A_\mu \partial_\nu A_\rho + \frac{2}{3} A_\mu A_\nu A_\rho \right)
= \int_Y *\phi \wedge CS_3(A) \; .
\end{split}
\end{equation}
\noindent The equation of motion for this action is
\begin{equation}
*\phi \wedge F = 0 \; .
\end{equation}
\noindent This is one of the equations in \cite{Donaldson:1996kp} where it is
argued to be associated with the 7-dimensional generalization of Chern-Simons
theory. In the abelian theory this equation of motion is simply $\check{D}A =
0$, which has no global solutions that are not exact (i.e. $A = df$), because
$H^1_7(Y) = 0$ for $G_2$ manifolds. Of course, as a gauge field $A$ need not be
a global one-form, in which case this result no longer applies. This is similar to
the situation one finds for Chern-Simons theory on a simply connected manifold.
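\noindent The equation of motion above follows from a one-line variation of (\ref{sevenwv}): using cyclicity of the trace and $d{*\phi} = 0$ on a $G_2$ manifold to discard the total-derivative term, one finds
\begin{equation}
\delta S = 2 \int_Y *\phi \wedge \textrm{Tr} \bigl( \delta A \wedge F \bigr) \; , \qquad F = dA + A \wedge A \; ,
\end{equation}
\noindent so stationarity for arbitrary $\delta A$ requires $*\phi \wedge F = 0$.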
Note that for the action (\ref{sevenwv}) to be gauge invariant under
large gauge transformations $*\phi$ must actually be an integral cohomology
class. A similar issue arises in holomorphic Chern-Simons theory as mentioned
by Nekrasov in \cite{Nekrasov:2005bb} but, as the three-form $\Omega$ is
holomorphic, it is not clear that it can always be normalized to be integral.
Nekrasov notes, however, that the integrality condition is precisely the
condition on the complex moduli of the CY to be solutions of the attractor
equations. It would be interesting to understand if the integrality of $*\phi$
has a similar interpretation.
In \cite{Lee:2002fa} the authors want to consider solutions to the deformed
Donaldson-Thomas equation
\begin{equation}
\label{}
*\phi \wedge F = \frac{1}{6} F \wedge F \wedge F \; ,
\end{equation}
\noindent which would involve adding a term $CS_7(A)$ to the Lagrangian above.
It is not at all clear why such a term would appear in OSFT but in Section
\ref{sec_quant} we see that such a term does emerge in a rather interesting way
when quantizing this theory.
\subsection{Dimensional reduction, A- and B-branes}
Reducing the open topological $G_2$ string on $CY_3 \times S^1$ gives rise to both special Lagrangian A-branes and holomorphic B-branes on $CY_3$. This
follows from the decomposition of $\phi$ and $*\phi$ in terms of the holomorphic
3-form and K\"{a}hler form on $CY_3$ (see appendix \ref{app_conventions}). The A-branes
arise when reducing the associative 3-cycle action (\ref{eqn_deg2_3cycle}) in the
normal direction. The resulting action
\begin{equation}
\int_{M} \epsilon^{abc} \textrm{Tr} \left( A_a \nabla_b A_c + \frac{2}{3}
A_a A_b A_c \right) +\rho^{aij} \textrm{Tr} \bigl( \theta_i ( \nabla_a
\theta_j + [ A_a , \theta_j ] ) \bigr) \; ,
\end{equation}
\noindent is the real part of complex Chern-Simons
theory, where the indices $a,b,c=1,2,3$ are tangent to the special Lagrangian while $i,j=4,5,6$ lie in the
normal direction. The normal modes appear quadratically and can be integrated
out (see Section \ref{sec_quant} for a discussion of this issue on an associative
cycle).
Similarly we can reduce the 4-cycle action (\ref{eqn_4cycle_actfull}) in the
tangential direction. This is again a special Lagrangian brane in $CY_3$ but now
calibrated by $\hat{\rho}$ instead of $\rho$, and the worldvolume action is given by the
imaginary part of complex Chern-Simons theory
\begin{equation}
\int_{M} \rho^{iab} \textrm{Tr}(\theta_i F_{ab}) + {2 \over 3} \rho^{ijk}
\textrm{Tr}(\theta_i \theta_j \theta_k) \; ,
\end{equation}
with the additional constraint $D_a \theta_i =0$ for the
normal modes.
The B-branes are simplest to find starting from the 7-cycle worldvolume
theory (\ref{sevenwv}) and reducing on the $CY_3$. We find
\begin{equation}
\begin{split}
S &= \int_{CY_3} { \hat{\rho} \wedge CS(A) + k \wedge k \wedge \textrm{Tr} ( \lambda F ) } \\
&= {1 \over 2i} \int_{CY_3} \Omega \wedge CS(A) - {1 \over 2i}
\int_{CY_3} {\bar \Omega} \wedge {CS(\bar{A})} + \int_{CY_3} k
\wedge k \wedge \textrm{Tr} ( \lambda F ) \; ,
\end{split}
\end{equation}
\noindent where $*\phi = {\hat \rho} \wedge dt + {1 \over 2} k \wedge k$, $t$
parametrizes the circle direction, $\Omega = \rho + i {\hat \rho}$ is the
holomorphic 3-form of the Calabi-Yau, and $\lambda= A_t$ is the scalar component of
the gauge field in the reduction. The action is the sum of B-model 6-brane and
${\bar {\mbox{B}}}$-model 6-brane actions (the appearance of the imaginary part of the
holomorphic 3-form rather than the real part is just a matter of convention).
The extra term in the action comes with the Lagrange multiplier $\lambda$, and so
it expresses the constraint
$$
k \wedge k \wedge F =0 \; .
$$
This extra condition is related to stability of the brane (it complexifies the
$U(N)$ gauge symmetry). Lower-dimensional 4-branes and 2-branes then follow by
further dimensional reduction, where again we obtain B- and ${\bar {\mbox{B}}}$-model
actions together with a stability condition.
It is remarkable that, like closed topological M-theory, the open topological
string also contains the A- and B+${\bar {\mbox{B}}}$-models. Perturbatively the B+${\bar {\mbox{B}}}$-models are decoupled, and it would be interesting to understand if there is
a non-perturbative coupling between them.
\section{Gauge-fixing and quantization}\label{sec_quant}
Let us now consider the full expansion of the OSFT action without any constraint
on the ghost number of the fields. As found in appendix \ref{app_ghost_7cycle},
this gives the following expression for the action in seven dimensions
\begin{equation}\label{sevenwv_gh}
\begin{split}
S_{(7)} &= \int_Y \phi^{\mu\nu\rho} \, \textrm{Tr} \left(A_\mu \partial_\nu A_\rho + \frac{2}{3} A_\mu A_\nu A_\rho +
\beta_{\mu\nu} \partial_\rho f + \beta_{\mu\nu} [ A_\rho ,f ] + \frac{1}{2} C_{\mu\nu\rho} \{ f , f \} \right)\\
&= \int_Y *\phi \wedge \textrm{Tr} \left( A \wedge dA + \frac{2}{3} A \wedge A
\wedge A + \beta \wedge D f + \frac{1}{2} C \{ f , f\}
\right) \; ,
\end{split}
\end{equation}
\noindent where $f \in \Lambda^0_{\bf 1}$, $\beta \in \Lambda^2_{\bf 7}$ and $C
\in \Lambda^3_{\bf 1}$ are respectively the degree zero, two and three modes of
the string field $\mathcal{A}$ in the adjoint representation of the gauge group and $D =
d + A$ is the gauge-covariant derivative. The purely bosonic (i.e. ghost number
one field) part of the action above has appeared (in conjunction with additional
bosonic terms) in topological quantum field theories studied in
\cite{Baulieu:1997jx} and \cite{Acharya:1997fos}. The interpretation of the
action above in terms of the Batalin-Vilkovisky antifield formalism is detailed
in appendix \ref{app_ghost_7cycle}.
\subsection{Weak coupling limit}
To help us understand the structure of the gauge theories we have
found for open strings ending on (co)associative calibrated
branes, we are mainly interested in the quantization of the
quadratic part of the non-linear action $S [A] = \int_Y *\phi
\wedge CS(A)$, expanded around solutions of the classical
equations of motion
\begin{equation}
*\phi \wedge F = 0 \; .
\end{equation}
The partition function of this simplified theory corresponds to a
stationary phase approximation of the full theory in the weak
coupling limit. For the gauge theory on associative 3-cycles, we
will investigate how the normal modes modify the corresponding
calculation done by Witten \cite{Witten:1988hf} for pure
Chern-Simons theory.
The equation $*\phi \wedge F =0$ has been considered already in
\cite{Donaldson:1996kp} where it is argued to be the 7-dimensional
generalization of Chern-Simons theory that might provide an analog
of Casson/Floer theory for 7-manifolds. It is related to an
instanton equation for a gauge field on the $Spin(7)$ 8-manifold
$Y \times \mathbb{R}$. This relationship is directly analogous to the way
solutions of the Chern-Simons equation of motion $F=0$ on a
3-manifold $M$ correspond to critical points of the gradient flow
equations coming from the instanton equations $F=*F$ on $M \times
\mathbb{R}$. This fact will be important when we come to consider the
non-trivial phase factor in the path integral of this gauge
theory.
Expanding $S[A]$, for $A = A^0 + B$, to quadratic order in $B$
around a classical solution $A^0$ gives
\begin{equation}
S [A] = S [ A^0 ] + \int_Y *\phi \wedge \textrm{Tr} \left( B
\wedge DB
\right) \; ,
\end{equation}
\noindent where $D = d + [A^0,]$ is here with respect to the background gauge
field solving $\phi^{\mu\nu\rho} F^0_{\nu\rho} = 0$. The linear term is of
course absent since it gives the $A^0$ equation of motion. Performing the BV
analysis of the quadratic action $S_{cl} [B] = \int_Y *\phi \wedge \textrm{Tr} ( B
\wedge DB )$ is straightforward and is given in appendix \ref{app_ghost_7cycle}
(it is also related to a linearization of the structure described for the full
theory in appendix \ref{app_ghost_7cycle}).
The resulting gauge-fixed action takes the familiar form
\begin{equation}
\int_Y *\phi \wedge \textrm{Tr} \left( B \wedge DB \right) + \textrm{Tr} \left(
\varphi D^\mu B_\mu + {\bar c} D^\mu D_\mu c \right) \; ,
\end{equation}
\noindent with $\varphi$ acting as Lagrange multiplier imposing the
gauge-fixing constraint in the action while ${\bar c}$, $c$
correspond to the fermions from the Faddeev-Popov determinant.
Formally the analysis of this gauge theory in 7 dimensions has
been almost identical to Witten's analysis of pure Chern-Simons in
3 dimensions. Indeed we can also use Schwarz's method of
evaluating the partition function for degenerate quadratic
classical actions to obtain the contribution
\begin{equation}
{\mbox{exp}}(ik S[A^0]) \; \frac{{\mbox{det}}(D_\mu
D^\mu)}{\sqrt{{\mbox{det}}(L)}} \; ,
\end{equation}
\noindent to the partition function of $ik \int_Y *\phi \wedge CS(A)$ (in the
weak coupling limit of large $k$) coming from a given gauge-equivalence class of
solutions $A^0$ of $*\phi \wedge F =0$. We should stress that the structure of
the moduli space of solutions to $*\phi \wedge F =0$ is not understood so well
as that for flat connections in 3 dimensions. Witten \cite{Witten:1988hf}
restricts attention to Chern-Simons theory on 3-manifolds $M$ with the property
that the moduli space of flat connections, determined by equivalence classes of
homomorphisms from $\pi_1 (M)$ to the gauge group $G$, be finite. We do not know
whether one can take the moduli space of gauge-inequivalent solutions of $*\phi
\wedge F =0$ to be zero-dimensional by suitable choice of $G_2$ manifold $Y$.
Thus we cannot say whether the partition function can be expressed as a finite
sum over contributions of the form above.
The operator appearing in the denominator above is defined as $L = *(*\phi \wedge
D) + D*$ and is understood as an antisymmetric 8x8 matrix of linear differential
operators mapping $\Lambda^1_{\bf 7} \oplus \Lambda^7_{\bf 1}$ to itself. This interpretation
follows by collecting $B_\mu$ and $\varphi$ in the first two terms in the
gauge-fixed quadratic action into an 8-vector. One can check that this
definition implies $L$ is elliptic and self-adjoint. It seems to be the natural
generalisation of the elliptic self-adjoint operator $L_- = *D + D*$ (restricted
to forms of odd degree in 3 dimensions) used by Witten in \cite{Witten:1988hf}
\footnote{In both dimensions 3 and 7, the addition of the $D*$ term in $L$ is
essential in order for it to be elliptic. This is simply because without it the
Pfaffian of the corresponding antisymmetric symbol matrices of odd rank would
vanish identically and so there could exist no inverse. The way of understanding
the need for ellipticity in physics terms is that we require the kinetic
operator in the quadratic action to be the inverse propagator. The propagator
only exists for the gauge-fixed action.}
. Another technical point we are
overlooking is whether $L = *(*\phi \wedge D) + D*$ is a regular operator. We
need not get into the precise definition; suffice it to say that regularity of an
operator guarantees one has a precise definition of its determinant in terms of
regularised zeta functions.
As explained in \cite{Witten:1988hf}, the contribution to the
partition function of Chern-Simons theory in 3 dimensions around a
given flat connection at weak coupling is closely related to the
partition function of an abelian 1-form gauge theory in 3
dimensions, which has been explicitly calculated by Schwarz and
shown to give the Ray-Singer analytic torsion of the de Rham
complex of the 3-manifold, and is thus a topological invariant.
However, this relation to Ray-Singer torsion is generally only
guaranteed for topological actions of the form $\int \omega \wedge
d \omega$, where $\omega$ is a bosonic/fermionic $p$-form of
odd/even degree in $(2p+1)$ dimensions. Thus we should not expect
the partition function of the 7-dimensional quadratic theory above
to be obviously related to Ray-Singer torsion. On the other hand,
since we are still in odd dimensions, a theorem of Schwarz
\cite{Schwarz:2001sf} does suggest the partition function for this
gauge theory should be a topological invariant. In fact this
statement is only true modulo possible obstructions related to
non-trivial phase factors that we will now discuss.
\subsection{Phase of the determinant}
An important subtlety in both 3 and 7 dimensions is the role of
the phase of the determinant of the operator $L$. The theories
described by Schwarz are insensitive to this since they compute
absolute values of ratios of determinants of elliptic operators.
The Laplacian $D_\mu D^\mu$ appearing in the numerator is real and
positive-definite so there is no possible phase coming from its
determinant. We will now investigate the structure of this phase
for the 7-dimensional theory.
The expression for the phase in terms of the Atiyah-Patodi-Singer
$\eta$-invariant follows in the same way in both 3 and 7
dimensions, as the limit of a regularised sum over the non-zero
eigenvalues $\lambda_i$ of the operator $L$ (at a given background
solution $A^0$ of $*\phi \wedge F =0$). In particular, as in
\cite{Witten:1988hf}, we find
\begin{equation}
\frac{1}{\sqrt{\det(L)}} = \frac{1}{|\sqrt{\det(L)}|} \, \exp \left( \frac{i\pi}{2} \eta_L(A^0)
\right)\; ,
\end{equation}
\noindent where
\begin{equation}
\eta_L (A^0) = \frac{1}{2} \, \underset{s \rightarrow 0}{\mbox{lim}} \sum_i {\mbox{sign}}
\lambda_i \, | \lambda_i |^{-s} \; ,
\end{equation}
\noindent denotes the $\eta$-invariant for the elliptic operator $L$ at
solution $A^0$.
In 3 dimensions Witten \cite{Witten:1988hf} uses the
Atiyah-Patodi-Singer index theorem for the classical twisted spin
complex ($L_-$ can be interpreted as a twisted Dirac operator) to
compute the difference of $\eta$-invariants between two flat
connections, $A = A^0$ and $A=0$, to be proportional to the
Chern-Simons action $\int_M CS(A^0)$ itself at $A^0$. The
proportionality factor is the dual Coxeter number $h (G)$ of the
gauge group $G$. This has the beautiful interpretation of the
level shift $k \rightarrow k + h (G)$ in the quantum
Chern-Simons action, that one also observes for current algebras
of conformal field theories in 2 dimensions.
The identification of $L_- = *D+D*$ in 3 dimensions with a twisted
Dirac operator follows by collecting the differential operators in
$L_-$ into a 4x4 antisymmetric matrix acting on the 4-dimensional
vector space $\Lambda^1 \oplus \Lambda^3$. This allows one to
write $L_- = \gamma^a D_a$ in terms of the 3 4x4 antisymmetric
matrices $\gamma_a$, with components $( \gamma_a )_{bc} = -
\epsilon_{abc}$, $( \gamma_a )_{b4} = - \delta_{ab}$. These
matrices generate a subgroup $SU(2) \subset SO(4)$ and in an
appropriate basis can be written $\Gamma_a = i \sigma_2 \otimes
\sigma_a$ (in terms of Pauli matrices $\sigma_a$). Together with
$\Gamma_4 = i \sigma_1 \otimes 1$, they generate a representation
of the Clifford algebra acting on Dirac spinors in 4 dimensions.
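\noindent As an elementary check in this basis, the Clifford relations follow directly from the Pauli matrix algebra:
\begin{equation}
\Gamma_a \Gamma_b + \Gamma_b \Gamma_a = - 1 \otimes \{ \sigma_a , \sigma_b \} = - 2\, \delta_{ab} \; , \qquad
\Gamma_4^2 = -1 \; , \qquad
\Gamma_a \Gamma_4 + \Gamma_4 \Gamma_a = - \{ \sigma_2 , \sigma_1 \} \otimes \sigma_a = 0 \; ,
\end{equation}
\noindent so that $\{ \Gamma_M , \Gamma_N \} = -2\, \delta_{MN}$ for $M,N = 1,\ldots,4$ (in a negative-definite signature convention).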
By constructing the interpolating gauge field $A(t)$, for $t \in
[0,1]$ on $M\times [0,1]$ between 2 flat gauge fields $A(1)=A^1$
and $A(0)=A^0$ on $M$, this provides a suitable lift of $L_-$ on
$M$ to the twisted Dirac operator ${\tilde L}_- = \Gamma^a D_a
(A(t)) + \Gamma^4 \partial_t$ on $M\times [0,1]$. It is the
Atiyah-Patodi-Singer index theorem for ${\tilde L}_-$ that allows
Witten to compute the change in $\eta_{L_-}$ between 2 flat
connections.
We will now show that a similar structure follows for $L = *(*\phi
\wedge D) + D*$ in 7 dimensions. Again collecting the differential
operators in $L$ into an 8x8 antisymmetric matrix acting on the
8-dimensional vector space $\Lambda^1 \oplus \Lambda^7$ allows one
to express $L = \gamma^\mu D_\mu$ in terms of the 7 8x8
antisymmetric matrices $\gamma_\mu$, with components $( \gamma_\mu
)_{\nu\rho} = - \phi_{\mu\nu\rho}$, $( \gamma_\mu )_{\nu 8} = -
\delta_{\mu\nu}$. It should be noted that the sub-matrices $(
\gamma_\mu )_{\nu\rho}$ do not form the adjoint representation of
the imaginary octonions despite the fact that they are identical
to the structure constants of this algebra. This is simply because
the octonions are not associative. This is to be contrasted with
the submatrices $( \gamma_a )_{bc}$ in 3 dimensions which give the
adjoint representation of the imaginary quaternions (i.e. the Lie
algebra of $SU(2)$). Nonetheless, together with $\gamma_8 = i 1$,
the full 8x8 matrices $\gamma_\mu$ generate a representation of
the Clifford algebra acting on Weyl spinors in 8 dimensions. The
corresponding action on Dirac spinors in 8 dimensions can be
expressed in terms of the 16x16 anti-Hermitian matrices
$\Gamma_\mu = \sigma_2 \otimes \gamma_\mu$, $\Gamma_8 = i \sigma_1
\otimes 1$. Thus by constructing the interpolating gauge field
$A(t)$ on $Y\times [0,1]$ between 2 solutions $A(1)=A^1$ and
$A(0)=A^0$ of $*\phi \wedge F =0$ on $Y$ we have a suitable lift
of $L$ on the $G_2$ manifold $Y$ to the twisted Dirac operator
${\tilde L} = \Gamma^\mu D_\mu (A(t)) + \Gamma^8 \partial_t$ on $Y
\times [0,1]$.
Before obtaining the change in $\eta_L$ from the
Atiyah-Patodi-Singer index theorem for ${\tilde L}$, it may be
illuminating to make a brief digression explaining how this lift
of $L$ is related to the elliptic complex
\begin{equation}
0 \longrightarrow {\mbox{ad}}\, G \otimes \Lambda^0 \overset{D}{\longrightarrow} {\mbox{ad}}\, G \otimes \Lambda^1
\overset{\frac{1}{4}(1-*\Psi \wedge) D}{\longrightarrow}
{\mbox{ad}}\, G \otimes \Lambda^2_{\bf 7} \longrightarrow 0 \; ,
\end{equation}
\noindent on an 8-manifold $X$ of $Spin(7)$ holonomy, with Cayley
4-form $\Psi = *\Psi$, when $X = Y \times [0,1]$. This complex has
been used in the study of 8-dimensional topological quantum field
theories in \cite{Baulieu:1997jx}. The operator $\pi^2_{\bf 7} =
\frac{1}{4}(1-*\Psi \wedge)$ projects a 2-form in 8 dimensions
onto the 7-dimensional irreducible representation of $Spin(7)$.
The adjoint operators mapping to the left of the complex are
$D^\dagger$. As noted by Donaldson and Thomas, solutions of $*\phi
\wedge F =0$ on the $G_2$ manifold $Y$ correspond to fixed points
of the gradient flow from the $Spin(7)$ instanton equation $*F =
\Psi\wedge F$ on $Y \times \mathbb{R}$ (i.e. elements of the kernel of
$\pi^2_{\bf 7} D$).
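\noindent Concretely (in temporal gauge $A_t = 0$, and up to normalization and sign conventions), the $Spin(7)$ instanton equation on $Y \times \mathbb{R}$ takes the gradient flow form
\begin{equation}
\frac{\partial A}{\partial t} = - * \bigl( *\phi \wedge F \bigr) \; ,
\end{equation}
\noindent whose fixed points are precisely the solutions of $*\phi \wedge F = 0$ on $Y$.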
The relation of this complex to the twisted spin complex for ${\tilde L}$
follows by observing the isomorphisms ${\sf S}_+ = \Lambda^0_{\bf 1} \oplus
\Lambda^2_{\bf 7}$ and ${\sf S}_- = \Lambda^1_{\bf 8}$ for the positive and
negative chirality spin bundles ${\sf S}_{\pm}$ on a $Spin(7)$ manifold (using
the conventions of \cite{Acharya:1997fos} where the $Spin(7)$-invariant spinor
$\theta \in {\sf S}_+$). The explicit isomorphisms following from Fierz
identities give $\psi_+ = \eta \theta - \frac{1}{4} \chi_{MN} \Gamma^{MN}
\theta$ and $\psi_- = - \psi_M \Gamma^M \theta$ ($M,N=1,\ldots,8$) for any
$\psi_\pm \in {\sf S}_{\pm}$, where $\eta = \theta^t \psi_+$ is a scalar,
$\chi_{MN} = \frac{1}{2} \theta^t \Gamma_{MN} \psi_+$ is a 2-form obeying the
identity $\pi^2_{\bf 7} \chi = \chi$ and $\psi_M = \theta^t \Gamma_M \psi_-$ is
a 1-form. The action of the twisted Dirac operator $\Gamma^M D_M : {\sf S}_-
\rightarrow {\sf S}_+$ on these expressions gives $\Gamma^M D_M \psi_- = ( D^M
\psi_M ) \theta - ( \pi^2_{\bf 7} D \psi )^{MN} \Gamma_{MN} \theta$ hence
equating $\Gamma^M D_M$ acting on ${\sf S}_-$ with $\pi^2_{\bf 7} D + D^\dagger$
acting on $\Lambda^1_{\bf 8}$ in the complex above. This is consistent with the
reduction of the lifted ${\tilde L}$ on $Y \times [0,1]$ to $L=*(*\phi \wedge D)
+ D*$ on $Y$. Using this identification, one can check that the index of the
whole $Spin(7)$ complex above is identical to that for the twisted Dirac
operator on a $Spin(7)$ manifold.
This identification has been used by Reyes-Carri\'{o}n \cite{ReyesCarrion:1998si} to calculate the Atiyah-Singer index
\begin{equation}
\int_{X} ch({\mbox{ad}}\, G) \hat{A}(TX) = \int_X {\mbox{dim}} (G) \, {\hat A}_2 (TX) +
\frac{1}{24} \left( p_1 (TX) \wedge c_2 ({\mbox{ad}}\, G) + 2 \, (c_2({\mbox{ad}}\, G))^2 -4\, c_4({\mbox{ad}}\, G)
\right)\; ,
\end{equation}
\noindent of the $Spin(7)$ complex above, on a closed $Spin(7)$ 8-manifold $X$.
The A-roof genus $\int_X {\hat A}_2$ here corresponds to the number of parallel
spinors on $X$ and so equals 1 if the holonomy is exactly $Spin(7)$ (and not a
subgroup thereof). For convenience, it is assumed in the formula above that the
gauge group is chosen such that the Chern classes $c_1 ({\mbox{ad}}\, G)$ and
$c_3 ({\mbox{ad}}\, G)$ both vanish (e.g. for $G = SU(N)$).
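The degree-8 coefficient quoted above can be reproduced directly from the standard expansions of $ch$ and $\hat{A}$ via Newton's identities (a symbolic sketch; $r$ denotes the rank of the twisting bundle, here ${\mbox{dim}}(G)$, with $c_1 = c_3 = 0$ as assumed in the text):

```python
import sympy as sp

r, c2, c4, p1, p2 = sp.symbols('r c2 c4 p1 p2')

# Power sums s_k of the Chern roots via Newton's identities, with c1 = c3 = 0
c1, c3 = 0, 0
s1 = c1
s2 = c1 * s1 - 2 * c2
s3 = c1 * s2 - c2 * s1 + 3 * c3
s4 = c1 * s3 - c2 * s2 + c3 * s1 - 4 * c4

# Chern character components ch_k = s_k / k!  (degree-2k forms)
ch2 = s2 / 2
ch4 = s4 / 24

# A-roof genus: 1 - p1/24 + (7 p1^2 - 4 p2)/5760
A1 = -p1 / 24
A2 = (7 * p1**2 - 4 * p2) / 5760

# degree-8 part of ch(E) * Ahat(TX): ch0*A2 + ch2*A1 + ch4
deg8 = r * A2 + ch2 * A1 + ch4
claimed = r * A2 + sp.Rational(1, 24) * (p1 * c2 + 2 * c2**2 - 4 * c4)
```

The two expressions agree identically, confirming the relative coefficients of $p_1 \wedge c_2$, $(c_2)^2$ and $c_4$.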
Consider now $X = Y \times [0,1]$ where $A(t)$ interpolates
between two solutions $A= A^0$ and $A=0$ of $*\phi \wedge F =0$ on
$Y$. The Atiyah-Patodi-Singer index theorem for ${\tilde L}$ is
\begin{equation}
{\mbox{ind}} ( {\tilde L} ) = \int_{Y \times [0,1]} ch({\mbox{ad}}\, G) \hat{A}(T(Y \times [0,1])) -
\frac{1}{2} [\eta_L (A^{0}) - \eta_L (0)] \; .
\end{equation}
The bulk integral can be evaluated using the Reyes-Carri\'{o}n
result on $X = Y\times [0,1]$. This is equal to the continuous
part of $\frac{1}{2} [\eta_L (A^{0}) - \eta_L (0)]$ and is given
by
\begin{equation}
{\mbox{dim}} (G) + \frac{1}{24} \left( \frac{1}{2\pi} \right)^4 \int_Y
\left[ - \frac{1}{2} \textrm{Tr} (R \wedge R) \wedge CS_3( A^0 ) + CS_7 ( A^0 )
\right]\; ,
\end{equation}
\noindent as an integral over the $G_2$ manifold $Y$ with Riemann curvature $R$.
The Chern-Simons forms are
\begin{align}
CS_3 (A) &= \textrm{Tr} \left( A \wedge dA + \frac{2}{3} \, A^3 \right) \; , \nonumber \\
CS_7 (A) &= \textrm{Tr} \left( A \wedge (dA)^3 + \frac{8}{5} \, A^3 \wedge (dA)^2 +
\frac{4}{5} A^2 \wedge dA \wedge A \wedge dA +
2\, A^5 \wedge dA + \frac{4}{7} \, A^7 \right) \; .
\end{align}
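Recall that these are transgression forms: with the normalization above (the same one that fixes $CS_3$), they are characterized up to exact terms by the property
\begin{equation}
d\, CS_3 (A) = \textrm{Tr} \left( F \wedge F \right) \; , \qquad
d\, CS_7 (A) = \textrm{Tr} \left( F \wedge F \wedge F \wedge F \right) \; ,
\end{equation}
\noindent with $F = dA + A \wedge A$, which is the form in which they enter the index density.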
In general $\frac{1}{2} [\eta_L (A^{0}) - \eta_L (0)]$ can also
have a discontinuous contribution, corresponding to the spectral
flow of $L$, and is equal to (minus) the index of the lifted
operator ${\tilde L}$ itself. This has the effect of shifting the
continuous part of $\frac{1}{2} [\eta_L (A^{0}) - \eta_L (0)]$ by
$\pm 1$ if the eigenvalues $\lambda_i (t)$ of $L(A(t))$
(understood as a function of $t$) change sign when $t$ is varied
between 0 and 1 (a +1 shift corresponds to a change $\lambda_i <0$
to $\lambda_i >0$).
The addition of \lq constant' terms (that do not depend on the
particular choice of solutions $A^1$ and $A^0$) to $\frac{1}{2}
[\eta (A^{1}) - \eta (A^{0})]$ will have a trivial effect that can
be factored out of the overall phase structure of the theory and
ignored. Thus the effect of the spectral flow of a given operator
can only be ignored if it is a constant in this sense. This is the
case for Witten's analysis of $L_-$ in 3 dimensions. This is
obviously also true for the constant ${\mbox{dim}}(G)$ in the
change in the $\eta$-invariant above. It is not clear to us
whether the effect of the spectral flow of $L$ in 7 dimensions
will be significant and we will overlook this subtlety here.
Therefore it is clear that the phase structure of the
7-dimensional theory is much more complicated than just the level
shift that occurs in 3 dimensions. Nonetheless, let us examine
some of the terms in $\frac{1}{2} [\eta_L (A^{0}) - \eta_L (0)]$ in a bit more detail.
The term $\textrm{Tr} (R \wedge R)$, proportional to the first Pontrjagin
class of $Y$, which ordinarily can be a general element of
$H^4(Y,\mathbb{R} )$, is here somewhat constrained due to the fact that
$Y$ must have holonomy in $G_2$. In particular, this constrains
the curvature such that $\pi^2_{\bf 7} R =0$ (or
$R_{\mu\nu\alpha\beta} \phi^{\alpha\beta\gamma} =0$ in components)
so that the holonomy algebra is contained in the Lie algebra of
$G_2$. Decomposing
\begin{equation}
H^4(Y,\mathbb{R} ) = H^4_{\bf 1} (Y,\mathbb{R} ) \oplus H^4_{\bf 7} (Y,\mathbb{R} )
\oplus H^4_{\bf 27} (Y,\mathbb{R} ) \; ,
\end{equation}
\noindent into irreducible representations of $G_2$, one can check that the
constraint above implies $\textrm{Tr} (R \wedge R)$ has no ${\bf 7}$ part. (This also
follows from lemma 1.1.2 in \cite{Joyce:1996}, although only compact
manifolds with full $G_2$ holonomy are considered there and so one has the
stronger constraint $b^4_{\bf 7} =0$ which we need not assume here.)
The cohomology group $H^4_{\bf 1} (Y,\mathbb{R} ) = \mathbb{R}$ has a very simple
structure, being spanned by constant multiples of the harmonic
4-form $*\phi$. Moreover, one can prove the identity $\textrm{Tr} (R
\wedge R) \wedge \phi = - |R|^2 \, {\mbox{vol}}$ implying the
constant multiplying the ${\bf 1}$ part of the first Pontrjagin
class is negative definite and vanishes only if the $G_2$ metric
is flat (this also follows from lemma 1.1.2 in \cite{Joyce:1996}). Hence the
contribution to the expression for $\eta$ above coming from this
term will cause a positive shift in the effective coupling
constant $k$ for the action $\int_Y *\phi \wedge CS(A^0)$,
reminiscent of the level shift in 3-dimensional Chern-Simons
theory.
The final contribution to the first Pontrjagin class coming from
$H^4_{\bf 27} (Y,\mathbb{R} )$ is more complicated and generally will not
vanish. Recall it is precisely elements of $H^3_{\bf 27} (Y,\mathbb{R} ) =
H^4_{\bf 27} (Y,\mathbb{R} )$ that parameterize deformations of a given
$G_2$ manifold such that the deformed manifold is also $G_2$.
Hence this contribution would vanish for \lq rigid' $G_2$
manifolds with no deformation moduli (or, of course, for special
$G_2$ manifolds whose first Pontrjagin class has no ${\bf 27}$
part).
The effect on the partition function from the contribution to
$\eta_L$ from $CS_7 (A^0)$ is also rather complicated. We will
simply note that the equations of motion arising from a
modification to the classical action of this kind would be of the
form
\begin{equation}
\label{}
*\phi \wedge F = \lambda\, F \wedge F \wedge F \; ,
\end{equation}
\noindent for some constant $\lambda$, which were considered by Leung et al.\ as a deformed version of
Donaldson-Thomas theory.
Just as in 3 dimensions, we expect that the overall $\eta_L (0)$ exponential
prefactor in the partition function will not be a topological invariant. The
task of finding a different regularisation that preserves general covariance is
much more difficult in 7 dimensions and we will not attempt this here.
\subsection{3-cycle worldvolume theory}
Let us now repeat the analysis of the previous section as far as
possible to describe the quantization of the 3-cycle theory. The
effective action for this theory
\begin{equation}
\begin{split}
S_{(3)} = \int_M \epsilon^{abc} \textrm{Tr} \left( A_a \partial_b A_c + \frac{2}{3} A_a A_b A_c + \beta_{ab} D_c f + \frac{1}{2} C_{abc} [ f,f ]
\right) \\
+\phi^{aIJ} \textrm{Tr} \left( \theta_I D_a \theta_J + 2\, \beta_{aI} [ \theta_J , f ]
\right) \; , \\
\end{split}
\end{equation}
\noindent (derived from OSFT in appendix \ref{app_ghost_3_4_cycle}) is
essentially pure Chern-Simons theory for the gauge field $A_a$ on $M$, which is
a completely solvable theory, plus additional normal mode contributions from
$\theta_I$, whose effect we shall investigate ($D_a = \nabla_a + [A_a,- ]$ on
$M$). It can also be understood as the dimensional reduction of the 7-cycle
action $S_{(7)}$ (after appropriately rescaling $\beta$ and $C$).
In principle a similar modification by normal modes may occur for
open strings ending on special Lagrangian 3-cycles in Calabi-Yau
manifolds in the A-model, though this is not discussed in
\cite{Witten:1992fb}. There is considerable evidence, however,
that the worldvolume theory on a special Lagrangian is essentially
just Chern-Simons theory (up to possible worldsheet instanton
corrections), as this is used, for instance, in open-closed
transitions \cite{Gopakumar:1998ki}. An essential point is that,
aside from $A_a$, none of the other fields in the 3-cycle action
appears at higher than quadratic order in the Lagrangian so they
can be integrated out exactly.
\subsubsection{1-loop partition function}
Let us again simplify matters by quantizing the quadratic part of
the non-linear action $S_{(3)}$, expanded around solutions of the
equations of motion ({\ref{eqn_F_nonflat}}),
({\ref{eqn_gauged_normal}}) for the classical part $S[A,\theta ] =
\int_M CS(A) + \phi^{aIJ} \textrm{Tr} ( \theta_I D_a \theta_J )$ of
$S_{(3)}$.
Expanding $S[A,\theta ]$, for $A_a = A^0_a + B_a$, $\theta_I =
\theta^0_I + \xi_I$, to quadratic order in $(B, \xi )$, around a
classical solution $( A^0 , \theta^0 )$ gives
\begin{equation}
S [A,\theta ] = S [ A^0 , \theta^0 ] + \int_M \epsilon^{abc} \textrm{Tr} \left( B_a D_b B_c \right) + \phi^{aIJ} \textrm{Tr} \left( \xi_I D_a \xi_J + \theta^0_I [ B_a , \xi_J ]
\right)\; ,
\end{equation}
\noindent where $D_a = \nabla_a + [ A^0_a ,-]$. The BV structure of the quadratic
action
\begin{equation}
S_{cl} [B, \xi ] = \int_M \epsilon^{abc} \textrm{Tr} ( B_a D_b B_c ) + \phi^{aIJ} \textrm{Tr}
( \xi_I D_a \xi_J + \theta^0_I [ B_a , \xi_J ] ) \; ,
\end{equation}
\noindent is detailed in appendix \ref{app_ghost_3_4_cycle}. The resulting
gauge-fixed action takes the expected form
\begin{equation}
S_{cl}[B,\xi ] + \int_M \textrm{Tr} \left( \varphi D_a B^a + {\bar c} D_a D^a c
\right)\; .
\end{equation}
\noindent To compare this quantum theory with Witten's analysis of pure
Chern-Simons theory, let us begin by calculating the contribution to the path
integral from a flat connection (i.e. $A^0$ is flat and $\theta^0 = 0$, solving
({\ref{eqn_F_nonflat}}) and ({\ref{eqn_gauged_normal}})). The modification to
equation (2.8) of \cite{Witten:1988hf} (for the contribution from a flat
connection $A^0$ in pure Chern-Simons theory) due to the normal modes is given
by
\begin{equation}
\mu ( A^0 , 0) = {\mbox{exp}}(ik S[A^0 ,0]) \; \frac{{\mbox{det}}(D_a D^a)}{\sqrt{{\mbox{det}}(L_- \oplus \phi^{aIJ} D_a
)}}\; ,
\end{equation}
\noindent where $\phi^{aIJ} D_a$ is understood as a $4\times 4$ antisymmetric matrix of
differential operators. We can go further by making use of the important
identity $\phi^{aIJ} D_a \phi^{bJK} D_b = - \delta^{IK} D_a D^a$, which follows
using $F^0_{ab} =0$. This is related to the fact that $\phi^{aIJ}$, understood
as three $4\times 4$ matrices, generate an $SU(2)$ subgroup of the $SO(4)$ structure group
of the normal bundle of $M$ and can be understood as Pauli matrices. This allows
us to identify $\phi^{aIJ} D_a$ as a twisted Dirac operator acting on a
4-dimensional vector space, just as Witten did for $L_-$. Hence going through
the usual Atiyah-Patodi-Singer analysis of the phase factor for the direct sum
of two identical twisted spin complexes over $M \times [0,1]$ (both twisted by
$A^0$) implies the difference of $\eta$-invariants between two flat connections $A
= A^0$ and $A=0$ will also be proportional to the pure Chern-Simons action at
$A^0$. Hence this will give essentially the same 1-loop effective action as for
pure Chern-Simons theory except the shift in the level will effectively be
doubled.
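The squaring identity used above can be checked in a finite-dimensional model where the three $4\times 4$ antisymmetric matrices $\phi^{aIJ}$ are taken to be the self-dual 't Hooft symbols $\eta^a$ (an assumed convention for illustration; any such $SU(2)\subset SO(4)$ triple is conjugate to this one). They obey $\eta^a \eta^b + \eta^b \eta^a = -2\delta^{ab}\mathbf{1}$, and contracting with a symmetric stand-in for $D_a D_b$ (symmetric precisely because $F^0_{ab}=0$) yields $-\delta^{IK} D_a D^a$:

```python
import numpy as np

# Levi-Civita symbol on 3 indices
eps = np.zeros((3, 3, 3))
for (i, j, k), s in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                     (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps[i, j, k] = s

# Self-dual 't Hooft symbols: eta^a_{bc} = eps_{abc}, eta^a_{b4} = delta_{ab}
eta = np.zeros((3, 4, 4))
for a in range(3):
    eta[a, :3, :3] = eps[a]
    eta[a, a, 3] = 1.0
    eta[a, 3, a] = -1.0

# Symmetric stand-in for D_a D_b (derivatives commute when F^0 = 0)
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
S = S + S.T

# (eta^a D_a)(eta^b D_b): only the anticommutator survives contraction with S
lhs = sum(eta[a] @ eta[b] * S[a, b] for a in range(3) for b in range(3))
```

Only the anticommutator part of $\eta^a \eta^b$ survives the contraction with the symmetric tensor, reproducing $-\delta^{IK} D_a D^a$.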
Understanding the effect of contributions from more general
solutions of ({\ref{eqn_F_nonflat}}) and
({\ref{eqn_gauged_normal}}) is a more difficult task since not
much is known about this moduli space other than that it contains
flat connections. Formally the contribution from a general
solution $\mu ( A^0 , \theta^0 )$ will be similar to $\mu ( A^0 ,
0 )$ but for replacing $S [ A^0 ,0]$ by $S [ A^0 , \theta^0 ]$ in
the exponential and including an off-block-diagonal component
$\phi^{aIJ} \theta^0_J$ for the determinant in the denominator. It
may prove more convenient to understand such contributions from
the 7-dimensional perspective.
\subsection{4-cycle worldvolume theory}
The action ({\ref{eqn_4cycle_actfull}}) also follows from reduction of the
7-dimensional action $\int_Y *\phi \wedge CS(A)$ on the 4-cycle. The ghost
structure of this theory is derived from OSFT in appendix
\ref{app_ghost_3_4_cycle}, just as for the 3-cycle theory, and again follows
from dimensional reduction of the 7-dimensional theory (up to suitable field
re-scalings) to give the full 4-cycle action
\begin{equation}
\begin{split}
S_{(4)} = \int_M \phi^{Iab} \textrm{Tr} \bigl( \theta_I F_{ab} \bigr) +
\frac{2}{3} \, \phi^{IJK} \textrm{Tr} \bigl( \theta_I \theta_J \theta_K
\bigr) + \frac{1}{2} \phi^{IJK} \textrm{Tr} \bigl( C_{IJK} [ f,f ] \bigr) \\
+ 2\, \phi^{Iab} \textrm{Tr} \bigl( \beta_{Ia} D_b f \bigr) + \phi^{IJK} \textrm{Tr} \bigl( \beta_{IJ} [\theta_K , f] \bigr) \; . \\
\end{split}
\end{equation}
\subsubsection{1-loop partition function}
Proceeding as in the previous sections, we quantize the quadratic
part of $S_{(4)}$ by expanding around solutions of
({\ref{eqn_gauged_normal2}}), ({\ref{eqn_FnonSD}}) for the
classical part of $S_{(4)}$
\begin{equation}
S[A,\theta ] = \int_M \phi^{Iab} \textrm{Tr} \bigl( \theta_I F_{ab} \bigr) + \frac{2}{3}
\, \phi^{IJK} \textrm{Tr} \bigl( \theta_I \theta_J \theta_K \bigr) \; .
\end{equation}
\noindent Expanding $S[A,\theta ]$, for $A_a = A^0_a + B_a$, $\theta_I =
\theta^0_I + \xi_I$, to quadratic order in $(B, \xi )$, around a
classical solution $( A^0 , \theta^0 )$ gives
\begin{equation}
S [A,\theta ] = S [ A^0 , \theta^0 ] + 2\, \int_M \phi^{Iab} \textrm{Tr} \bigl( \xi_I
D_a B_b + \theta^0_I B_a B_b \bigr) + \phi^{IJK} \textrm{Tr} \bigl(
\theta^0_I \xi_J \xi_K \bigr) \; ,
\end{equation}
\noindent where $D_a = \nabla_a + [ A^0_a ,-]$. The BV analysis of
the quadratic action
\begin{equation}
S_{cl} [B, \xi ] = \int_M \phi^{Iab} \textrm{Tr}
\bigl( \xi_I D_a B_b + \theta^0_I B_a B_b \bigr) + \phi^{IJK}
\textrm{Tr} \bigl( \theta^0_I \xi_J \xi_K \bigr)
\end{equation}
\noindent is given in appendix \ref{app_ghost_3_4_cycle}, leading to the
expected gauge-fixed action
\begin{equation}
S_{cl}[B,\xi ] + \int_M \textrm{Tr} \left( \varphi D_a B^a + {\bar c} D_a D^a c
\right)\; .
\end{equation}
We will now begin to analyse the quantum structure of this theory
by calculating the contribution to the path integral from an
instanton configuration (i.e. $A^0$ obeys $\phi^{Iab} F_{ab} = 0$
and $\theta^0 = 0$, solving ({\ref{eqn_gauged_normal2}}) and
({\ref{eqn_FnonSD}})). The contribution is given by
\begin{equation}
\mu ( A^0 , 0) = \frac{{\mbox{det}}(D_a D^a)}{{\mbox{det}}(\phi^{abI} D_b \oplus
D*)}\; ,
\end{equation}
\noindent where $\phi^{abI} D_b$ is understood as a $4\times 3$ matrix of
differential operators which, together with $D*$ acting on
4-forms, makes up a square $4\times 4$ antisymmetric matrix that provides
an involutive mapping $\Lambda^0 (NM) \oplus \Lambda^4 (M)
\rightarrow \Lambda^1 (M)$. The reason there is no square root in
the denominator is that the differential operator appearing in the
gauge-fixed action is an $8\times 8$ matrix (acting on $B_a$, $\xi_I$ and
$\varphi$) with zeros in the $4\times 4$ block-diagonal entries and the
$4\times 4$ operators above in both off-block-diagonal entries. It is not
clear to us if this determinant can be simplified further or
whether it contributes a non-trivial phase factor. The structure
of $\theta^0_I \neq 0$ contributions is also unclear.
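The absence of the square root can be illustrated with finite-dimensional linear algebra (a sketch with a random matrix standing in for the $4\times 4$ operator block): for a matrix with zero diagonal blocks and $M$, $-M^T$ off-diagonally, the determinant is $(\det M)^2$, so the Gaussian integral produces $1/\det M$ rather than an additional square root.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))  # stand-in for the 4x4 operator block

# 8x8 antisymmetric operator with zero block-diagonal entries
full = np.block([[np.zeros((4, 4)), M],
                 [-M.T,             np.zeros((4, 4))]])

det_full = np.linalg.det(full)
det_M = np.linalg.det(M)
```

For $4\times 4$ blocks the row-exchange sign $(-1)^{4\cdot 4}$ is $+1$, so $\det$ of the full operator is exactly $(\det M)^2$.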
\section{Remarks and open problems}
So far in this paper we have determined the spectrum of the open $G_2$ string
and related it to the worldvolume field theories of branes in a $G_2$ manifold.
In this section we would like to conclude by making some final remarks regarding
issues that still need to be resolved as well as interesting directions for
further research.
\subsection{Holomorphic instantons on special Lagrangians}
In dimensionally reducing the $G_2$ branes on a Calabi-Yau $Z$
times a circle, we have found that we almost reproduce the real
versions of the gauge theories for the open A- and B-models. There
is a discrepancy, however. If one considers a special Lagrangian
$M \subset Z$, with holomorphic open curves $\Sigma \subset Z$ ending on
$M$ so that $\partial \Sigma \subset M$, then the A-model branes will
receive worldsheet instanton corrections to the standard
Chern-Simons action. A naive dimensional reduction of the
associative theory on a $G_2$ manifold $Y = Z \times S^1$ gives a
special Lagrangian in $Z$ with the Chern-Simons action without
instanton corrections.
This issue is already present in the closed topological $G_2$
string. When reducing on $CY_3 \times S^1$, the closed $G_2$
string gives a combination of A and B+${\bar {\mbox{B}}}$ models.
But it is non-trivial to see where the worldsheet instanton
corrections in the A-model would come from, given that the $G_2$
theory appears to localize on constant maps. A possible resolution
suggested in \cite{deBoer:2005pt} is that since, unlike a generic
$G_2$ manifold, the manifold $CY_3 \times S^1$ has 2-cycles,
worldsheet instantons may now wrap these 2-cycles. However, upon
closer inspection, this possibility appears rather unlikely. A
much more straightforward explanation is that the worldsheet
instanton contribution is due to topological membranes (i.e.
topological 3-branes of the type discussed in this paper) that
wrap associative cycles of the form $\Sigma \times S^1$ in $CY_3
\times S^1$. Such 3-cycles are indeed associative as long as
$\Sigma$ is a holomorphic curve in the Calabi-Yau manifold.
Returning to the open worldsheet instanton contribution to branes
in the A-model, there are two ways to obtain these from the
topological $G_2$ string on $CY_3 \times S^1$. The first way is to
lift the A-model brane together with the open worldsheet instanton
to a single associative cycle in $CY_3 \times S^1$. This is
similar to the M-theory lift in terms of a single M2-brane of a
configuration of a fundamental string ending on a D2-brane in type
IIA string theory. To describe it, we take a special Lagrangian
3-cycle $C$ in a Calabi-Yau manifold $X$, plus an open holomorphic
curve $\Sigma$. We denote the boundary of $\Sigma$ by
$\gamma\subset C$. We first lift $C$ to $X\times S^1$, which we
describe in terms of a map $C\rightarrow X\times S^1$ which takes
$x\in C$ to $(x,\theta(x)) \in X\times S^1$. Here, $\theta(x)$
describes an $S^1$-valued function on $C$ which we want to have
the property that it winds once around the $S^1$ as we wind once
around the curve $\gamma\subset C$. The lift is therefore
one-to-many, as the image of a point in $\gamma$ is an entire
circle, and because of this the lift of $C$ is an open submanifold
of $X\times S^1$ with boundary $\gamma \times S^1$. We can now
glue the naive lift of $\Sigma$, which is $\Sigma\times S^1$, to
the lift of $C$ to form a closed 3-manifold $M$, since the
boundary of $\Sigma \times S^1$ is also $\gamma \times S^1$. In
this way we have obtained a closed 3-manifold $M\subset X\times
S^1$ which projects down to $C$ and $\Sigma$ upon reduction over
the $S^1$. The 3-manifold $M$ is not calibrated, but we can
compute the integral of $\phi$ over $M$. The result is simply
$\int_C \rho + \int_{\Sigma} k$ if we normalize the size of the
$S^1$ appropriately. The fact that the lift of $C$ winds around
the circle does not yield any additional contribution to $\int_M
\phi$ because the restriction of $k$ to $C$ vanishes identically.
We have thus constructed a closed 3-cycle $M$ such that the integral of $\phi$
over it has the correct structure, geometrically, to yield the worldsheet
instanton contribution. The final step is to minimize the volume of $M$ while
keeping its homology class fixed. This will not change $\int_M \phi$ but
presumably lead to the sought-for associative 3-cycle with the right properties.
In order to push this program further and relate $\int_\Sigma k$ to the
(exponentiated) weight of a holomorphic instanton we note that maps $\theta(x)$
which wind about $\gamma$ $n$ times will generate contributions such as $n \int_\Sigma
k$. Carefully summing over all lifts of this form with the appropriate weight
might properly reproduce the instanton contributions.
An entirely alternative approach is to lift both $C$ and $\Sigma$
to $C\times S^1$ and $\Sigma \times S^1$. In this way we obtain an
open associative 3-cycle ending on a coassociative 4-cycle in
$X\times S^1$. To analyze whether this makes sense, we consider
the simple example of an open 3-brane in ${\mathbb{R}}^7$
stretched along the 123-direction, ending on a coassociative cycle
stretching in the 2345-direction. If we vary the action
(\ref{eqn_cs_normal_action}) on the 3-brane we obtain a boundary
term
\begin{equation}
S_{\rm boundary} = \int dx^2 dx^3 {\rm tr}(A_3 \delta A_2 -
A_2\delta A_3 + \theta_5 \delta \theta_4 - \theta_4 \delta
\theta_5 +\theta_7 \delta \theta_6 - \theta_6 \delta \theta_7) \;
.
\end{equation}
We obviously want Dirichlet boundary conditions for $\theta_6$ and
$\theta_7$ so that the endpoint of the open 3-brane is confined to
lie in the 4-brane. We also want $\theta_4$ and $\theta_5$ to be
unconstrained at the boundary. If we therefore choose the boundary
condition
\begin{equation}
A_2 = \theta_5 \qquad A_3=\theta_4 \; ,
\end{equation}
the variations all cancel. To preserve these boundary conditions
under a gauge transformation, we need to restrict the gauge
parameter in such a way that its derivatives in the $2,3$ vanish
at the boundary. In this way we indeed find a consistent open
3-brane ending on a 4-brane.
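The cancellation of the boundary variation under these conditions is a one-line symbolic check (scalar stand-ins for the Lie-algebra-valued fields; the trace structure plays no role in the cancellation):

```python
import sympy as sp

# scalar stand-ins for the fields and their boundary variations
th4, th5 = sp.symbols('theta4 theta5')
dth4, dth5 = sp.symbols('dtheta4 dtheta5')

# Dirichlet conditions: delta theta_6 = delta theta_7 = 0, and the
# identification A_2 = theta_5, A_3 = theta_4 (so dA_2 = dtheta_5 etc.)
A2, A3, dA2, dA3 = th5, th4, dth5, dth4

# remaining boundary integrand after dropping the theta_6, theta_7 terms
integrand = A3 * dA2 - A2 * dA3 + th5 * dth4 - th4 * dth5
```

The first pair of terms becomes $\theta_4\,\delta\theta_5 - \theta_5\,\delta\theta_4$, which cancels the second pair identically.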
\subsection{Extensions}
The actions we have discovered on topological branes wrapping cycles in a $G_2$
manifold are variants of Chern-Simons theories derived from OSFT. OSFT itself,
as a generator of perturbative string amplitudes, might need to be augmented by
terms that are locally BRST trivial but none-the-less have global meaning
deriving from the topological structure of the space of string fields. In the
bosonic open string such questions are currently inaccessible but in the
topological case we see some motivation for local total derivative terms to be
added to the action. One such potential term is
\begin{equation}\label{eqn_FF_term}
\int_Y F \wedge F \wedge \phi
\end{equation}
\noindent that might describe lower dimensional branes dissolved in the seven
dimensional brane. Such terms might be motivated by analogy with the
Wess-Zumino terms on physical branes. Note, also, that this reduces to
$F \wedge F \wedge k$ in six dimensions, a term which appears in the A-model
K\"ahler quantum foam theory \cite{Iqbal:2003ds} which Nekrasov suggests should
be related to holomorphic Chern-Simons theory \cite{Nekrasov:2005bb} (the latter
is, of course, related to our theory by dimensional reduction). It would be
interesting to try and probe for the existence of such terms directly in the
$G_2$ world-sheet or OSFT theory.
The appearance of the $CS_7(A)$ term in the one-loop partition function suggests
that perhaps this term appears in quantizing the theory and so should have been
included in the original classical action.
Understanding if such terms do actually appear in these effective actions is
interesting as it may play a role in the conjectured S-duality of the A/B model
topological strings. In the latter it seems that one may need to consider both
the open and closed theory simultaneously and then terms such as
(\ref{eqn_FF_term}) might play a role in coupling these theories.
\subsection{Relation to twists of super Yang-Mills}
The theories we have found on $G_2$ branes are all topological
theories of the Schwarz type (see \cite{Birmingham:1991ty} for the
terminology) which is no doubt linked to the fact that they are
generated by OSFT. A similar statement holds for branes in the A-
and B-model.
The worldvolume theory on a brane in a $G_2$ or Calabi-Yau
manifold in a physical model is a twisted, dimensionally reduced
super Yang-Mills (SYM) theory \cite{Bershadsky:1995qy} whose
ground states are topological in nature. These are related to the
topological field theories that can be constructed by twisting SYM
and considering only the supersymmetric states (by promoting the
twisted supercharge to a BRST operator). Such theories include the
topological action for Donaldson-Witten theory
\cite{Witten:1988ze} as well as its generalizations to higher
dimensions \cite{Baulieu:1997jx}. These are generally field
theories of the Witten type meaning that the action is itself a
BRST commutator plus a locally trivial term.
Aside from the obvious connection to Chern-Simons theory via OSFT it would be
interesting to understand why the topological theories on branes in topological
string theory are generally of the Schwarz type (which are locally non-trivial)
while the supersymmetric states of the twisted theories on a physical brane can
be studied in a theory that is of the Witten type.
\subsection{Geometric invariants}
One of the most interesting open directions is to investigate the
geometric or topological invariants our open worldvolume gauge
theories compute, and perhaps use them, via open-closed duality,
to discover the connection to the closed topological $G_2$ theory.
It would be interesting to explore the full quantum open string
partition function on a few examples of $G_2$ manifolds. The
theory on the 3-cycle is basically Chern-Simons theory, while on
the 4-cycle the gauge theory of ASD connections will be related
naturally to Donaldson theory. It would be very interesting to find a
role for the partition functions in terms of the full physical
string theory, as well as deepen connections with the mathematics
results in \cite{Leung:2002qs}. Another open problem is to analyze
these invariants in the special case of $CY_3 \times S^1$, and
find a physical understanding of related mathematical invariants
such as the one proposed by Joyce \cite{Joyce:1999tz} counting
special Lagrangian cycles in a Calabi-Yau manifold.
\subsection{Geometric transitions}
Open-closed duality techniques have proven very useful for topological string
theory on Calabi-Yau manifolds. In particular, geometric transitions provide
nice examples where closed topological string amplitudes can be computed from
the gauge theory on the branes, which in this case is just Chern-Simons theory
with possible worldsheet instanton corrections. Geometric transitions on $G_2$
manifolds in general are less studied, but interesting examples from the full
string theory point of view are exhibited in e.g.
\cite{Gukov:2002zg,Acharya:1997rh}. In the present
paper we derived the relevant worldvolume gauge theory actions from open
topological strings and so, one of the immediate applications of our results is
to study geometric transitions from the topological $G_2$ string point of view.
\subsection{Mirror symmetry for $G_2$}
Mirror symmetry on a Calabi-Yau 3-fold can be described in terms
of the Strominger-Yau-Zaslow (SYZ) conjecture. One starts with a
special Lagrangian fibration, and then the mirror manifold is
conjectured to be the dual torus fibration over the same base. In
physics language, the action of mirror symmetry on the fibres is
T-duality. In \cite{Lee:2002fa}, a $G_2$ version of the SYZ
conjecture was suggested, relating coassociative to associative
geometry. Evidence for the $G_2$ mirror symmetry was also found in
$G_2$ compactifications of the physical IIA/IIB string theory on
$G_2$ holonomy manifolds \cite{Acharya:1997rh,Gukov:2002jv}. It would be
interesting to explore the action of mirror symmetry in the case
of the topological $G_2$ models. A good starting point for this is
by examining automorphisms of the closed $G_2$ string algebra such as those
discussed in \cite{Roiban:2002iv}.
\subsection{Zero Branes}
Although we have not attempted a treatment here it should be possible to reduce
the action (\ref{sevenwv}) to zero dimensions to determine the world-volume of
$D0$-branes on the $G_2$ manifold. This will be a matrix model which may be
related in an interesting way to the $G_2$ geometry.
\section*{Acknowledgments}
We would like to thank Robbert Dijkgraaf, Jos\'{e} Figueroa-O'Farrill, Lotte
Hollands, Dominic Joyce, Amir-Kian Kashani-Poor, Asad Naqvi, Nikita Nekrasov,
Martin Ro\v{c}ek, Assaf Shomer, and Erik Verlinde for helpful discussions. The
work of PdM was supported in part by DOE grant DE-FG02-95ER40899 and currently
by a Seggie Brown fellowship. The work of JdB and SES is supported financially
by the Foundation of Fundamental Research on Matter (FOM).
\section{Supplemental Materials}
This Supplementary Information provides additional details on the experimental protocol, the determination of the joint distribution for work and heat, and the analysis of the experimental data.
\subsection{Thermal states initialization}
Spatial average techniques \cite{oli07,bat14,bat15,mic19} were used to initialize the engine states,
which are local pseudo-thermal states encoded in the $^{1}$H and
$^{13}$C nuclei. We present in Table \ref{tab:1} the populations
and the respective local spin temperatures in the eigenbasis of Hamiltonians $\mathcal{H}_{0}^{\text{H}}$
and $\mathcal{H}_{0}^{\text{C}}$.
\begin{table}[h]
\begin{centering}
\begin{tabular}{c|ccc}
$^{1}$H nucleus & $p_{0}^{\text{H}}$ & $p_{1}^{\text{H}}$ & $k_{B}T_{2}$ (peV)\tabularnewline
\hline
& 0.67 $\pm$ 0.01 & 0.33 $\pm$ 0.01 & 21.5 $\pm$ 0.4\tabularnewline
$^{13}$C nucleus & $p_{0}^{\text{C}}$ & $p_{1}^{\text{C}}$ & $k_{B}T_{1}$ (peV)\tabularnewline
\hline
& 0.78 $\pm$ 0.01 & 0.22 $\pm$ 0.01 & 6.6 $\pm$ 0.1\tabularnewline
\end{tabular}
\par\end{centering}
\caption{Populations and spin temperatures of the initial states of the $^{1}$H and $^{13}$C nuclei. The corresponding off-diagonal elements are zero within the measurement errors.}
\label{tab:1}
\end{table}
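The spin temperatures in Table \ref{tab:1} follow from the two-level Gibbs relation $p_1/p_0 = e^{-\Delta E / k_B T}$. A minimal sketch of this inversion (the $15$ peV gap used here is a placeholder for illustration, not the experimental Larmor splitting, which is quoted in the main text):

```python
import math

def populations(gap, kBT):
    """Gibbs populations (p0, p1) of a two-level system with energy gap `gap`."""
    b = math.exp(-gap / kBT)
    return 1.0 / (1.0 + b), b / (1.0 + b)

def spin_temperature(p0, p1, gap):
    """k_B T inferred from the population ratio: p1/p0 = exp(-gap/kBT)."""
    return gap / math.log(p0 / p1)

# round trip with a hypothetical gap of 15 peV at k_B T = 21.5 peV
p0, p1 = populations(15.0, 21.5)
```

Applying `spin_temperature` to measured populations recovers the local spin temperature for any assumed gap.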
\subsection{Compression and expansion protocols}
The energy gap compression and expansion protocols are implemented with a time-modulated amplitude and phase transverse rf-pulse on resonance with the $^{13}$C nuclear spin in order to produce effectively the time-dependent driving Hamiltonian $\mathcal{H}^{\text{C}}({\tau})$ described in the main text. The intensities of the transverse field at the beginning and end of the driving protocol were properly calibrated in order to have the associated frequencies given in the main text. The duration of the modulated transverse pulse was varied from $100$~$\mu$s to $700$~$\mu$s in different implementations of the quantum heat engine cycle.
\subsection{Heating protocol}
The thermalization process used to heat the $^{13}$C nuclear spin of the quantum Otto engine during the second stroke has
the local effect of a linear non-unitary
map $\varepsilon(\rho_{j})=\text{Tr}_{k\neq j}\left[\mathcal{U}_{\tau}\left(\rho_{H}^{0}\otimes\rho_{C}^{0}\right)\mathcal{U}_{\tau}^{\dagger}\right]$, with $(j,k) = (H,C)$.
It can be written in the operator-sum (Kraus) representation \cite{mic19}
\begin{equation}
\varepsilon(\rho_{j})=\sum_{\ell=1}^{4}K_{\ell}\,\rho_{j}\,K_{\ell}^{\dagger},
\end{equation}
with the Kraus operators
\begin{align}
K_{1} & =\sqrt{1-p}\left(\begin{array}{cc}
1 & 0\\
0 & 0
\end{array}\right),K_{2}=\sqrt{p}\left(\begin{array}{cc}
0 & 0\\
0 & 1
\end{array}\right)\\
K_{3} & =\sqrt{1-p}\left(\begin{array}{cc}
0 & 1\\
0 & 0
\end{array}\right),K_{4}=\sqrt{p}\left(\begin{array}{cc}
0 & 0\\
-1 & 0
\end{array}\right).
\end{align}
The parameter $p$ denotes the population of the excited state of the hydrogen nucleus. The above Kraus operators correspond to the generalized amplitude damping of a single spin-$1/2$ system. From a local point of view, the map thus implements complete thermalization. The NMR pulse sequence used in the heat exchange protocol in the experiment is shown in Fig.~\ref{figs1a}, where the hydrogen nucleus is used as a heat bus.
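As a minimal numerical check (using NumPy, with an illustrative value of $p$ rather than the experimental one), one can verify that the Kraus operators above form a trace-preserving map and that any input state is mapped to $\mathrm{diag}(1-p,\,p)$, i.e. that the map indeed implements complete local thermalization:

```python
import numpy as np

def kraus_ops(p):
    """Generalized amplitude-damping Kraus operators, as written in the text."""
    K1 = np.sqrt(1 - p) * np.array([[1, 0], [0, 0]])
    K2 = np.sqrt(p)     * np.array([[0, 0], [0, 1]])
    K3 = np.sqrt(1 - p) * np.array([[0, 1], [0, 0]])
    K4 = np.sqrt(p)     * np.array([[0, 0], [-1, 0]])
    return [K1, K2, K3, K4]

def channel(rho, p):
    """Apply the map eps(rho) = sum_l K_l rho K_l^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops(p))

p = 0.33  # illustrative excited-state population of the heat bus
# Trace preservation: sum_l K_l^dagger K_l = identity
assert np.allclose(sum(K.conj().T @ K for K in kraus_ops(p)), np.eye(2))
# Complete thermalization: an arbitrary state is mapped to diag(1-p, p)
rho = np.array([[0.9, 0.3], [0.3, 0.1]])
assert np.allclose(channel(rho, p), np.diag([1 - p, p]))
```

The fixed output state is independent of the input, which is the defining property of full thermalization used in the second stroke.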
\begin{figure}[t]
\centering \includegraphics[width=0.98\columnwidth]{figs1} \caption{NMR pulse
sequence used in the heat exchange protocol. The outcome of this sequence
(which takes about $7$~ms) is an effective full
thermalization described by a completely positive trace preserving
(CPTP) map on the reduced density operator of the carbon nucleus, $\varepsilon^{\text{hot}}:\rho_{1}^{\text{noneq}}\rightarrow e^{-\beta_{2}H_{2}^{\text{C}}}/Z_{2}$,
leading it to an equilibrium state at the hot inverse temperature
$\beta_{2}$. Orange connections represent free evolutions under the
scalar interaction during the time displayed above the symbol. Blue
(red) circles stand for $x$ ($y$) rotations by the displayed angle
implemented by transverse rf pulses.}
\label{figs1a}
\end{figure}
\begin{table}[t]
\begin{tabular}{c|c|c|c|c|c|c}
History & stroke & stroke & stroke & stroke & $W/h\,\pm\,0.15$ & $Q/h\,\pm\,0.15$\tabularnewline
& 1 & 2 & 3 & 4 & (kHz) & (kHz)\tabularnewline
\hline
1 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & 0 & 0\tabularnewline
\hline
2 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & -2.0 & 0\tabularnewline
\hline
3 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & 3.6 & 3.6\tabularnewline
\hline
4 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & 1.6 & 3.6\tabularnewline
\hline
5 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & -3.6 & -3.6\tabularnewline
\hline
6 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & -5.6 & -3.6\tabularnewline
\hline
7 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & 0 & 0\tabularnewline
\hline
8 & $|\Psi_{-}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & -2.0 & 0\tabularnewline
\hline
9 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & 2.0 & 0\tabularnewline
\hline
10 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & 0 & 0\tabularnewline
\hline
11 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & 5.6 & 3.6\tabularnewline
\hline
12 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & 3.6 & 3.6\tabularnewline
\hline
13 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & -1.6 & -3.6\tabularnewline
\hline
14 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & -3.6 & -3.6\tabularnewline
\hline
15 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{-}^{1}\rangle$ & 2.0 & 0\tabularnewline
\hline
16 & $|\Psi_{+}^{1}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{2}\rangle$ & $|\Psi_{+}^{1}\rangle$ & 0 & 0\tabularnewline
\end{tabular}
\caption{All transition histories between the instantaneous
eigenstates $|\Psi_\pm^i\rangle$ $(i=1,2)$ for each stroke of the heat engine, together with the corresponding values of work and heat.}
\label{table2}
\end{table}
\subsection{Joint distribution for work and heat - theory}
The joint distribution for the total work $W$ and the absorbed heat $Q$ may be determined by performing energy measurements on the engine at the beginning and at the end of the expansion, heating and compression strokes \cite{den20}, as depicted in Fig.~1\textbf{b} of the main text.
We first consider the case of ideal projective measurements. By performing projective energy measurements at the beginning and at the end of the expansion step, the distribution of the expansion work $W_2$ reads \cite{tal07},
\begin{equation}\label{eq:W1}
P(W_2)=\sum_{j,k} \delta \left[W_2 - (E_k^{\tau}- E_j^0)\right] p_{jk}^\text{exp} p_j^0,
\end{equation}
where $E_j^0$ and $E_k^{\tau}$ are the respective initial and final energy eigenvalues, $p_j^0= \exp({-\beta_1 E_j^0})/Z^0$ is the initial thermal occupation probability with partition function $Z^0$ and $p_{j k}^\text{exp}= |\bra{j}U_\text{exp}(\tau)\ket{k}|^2$ denotes the transition probability between the instantaneous eigenstates $\ket{j}$ and $\ket{k}$ in time $\tau$ with the corresponding unitary $U_\text{exp}$.
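For a two-level engine, Eq.~\eqref{eq:W1} can be evaluated directly. The following sketch (with illustrative gap, temperature, and transition-probability values, not the experimental ones) collects the probability weight on each of the four discrete expansion-work values:

```python
import numpy as np

# Illustrative parameters: energy gaps h*nu1, h*nu2 in kHz, an inverse
# temperature beta1 in 1/kHz, and a transition probability xi.
nu1, nu2, beta1, xi = 2.0, 3.6, 0.5, 0.1

E0 = np.array([-nu1 / 2, +nu1 / 2])   # eigenvalues at the start of expansion
Et = np.array([-nu2 / 2, +nu2 / 2])   # eigenvalues at the end of expansion
p0 = np.exp(-beta1 * E0); p0 /= p0.sum()       # thermal occupations p_j^0
pjk = np.array([[1 - xi, xi], [xi, 1 - xi]])   # transition probabilities p_jk

# P(W2): accumulate probability on each discrete value W2 = E_k^tau - E_j^0
PW2 = {}
for j in range(2):
    for k in range(2):
        w = Et[k] - E0[j]
        PW2[w] = PW2.get(w, 0.0) + p0[j] * pjk[j, k]

assert abs(sum(PW2.values()) - 1.0) < 1e-12   # distribution is normalized
```

The support of $P(W_2)$ consists of the four values $\pm(\nu_2-\nu_1)/2$ and $\pm(\nu_2+\nu_1)/2$ (in units of $h$), the latter two corresponding to nonadiabatic transitions.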
Similarly, the probability density of the heat $Q=Q_3$ during the following heating step, given the expansion work $W_2$, is equal to the conditional distribution \cite{jar04},
\begin{equation}\label{eq:Q2}
P(Q|W_2)=\sum_{i,l} \delta \left[Q -(E_l^{\tau} - E_i^{\tau}) \right]p_{i l}^\text{hea} p_i^{\tau},
\end{equation}
where the occupation probability at time $\tau$ is $p_i^{\tau} = \delta_{k i}$ when the system is in eigenstate $\ket{k}$ after the second projective energy measurement.
The quantum work distribution for compression, given the expansion work $W_2$ and the heat $Q$, similarly reads,
\begin{equation}\label{eq:W3}
P(W_4|Q,W_2)= \sum_{r,m} \delta \left[ W_4 - (E_m^0 - E_r^{\tau}) \right] p_{r m}^\text{com} p_r^{\tau},
\end{equation}
with the occupation probability $p_r^{\tau} = \delta_{r l}$ when the system is in eigenstate $\ket{l}$ after the third projective energy measurement. The transition probability $p_{r m}^\text{com}=|\bra{r}U_\text{com}(\tau)\ket{m}|^2$ is fully specified by the unitary time evolution operator for compression $U_\text{com}$.
The joint probability of having certain values of $W_4$, $Q$ and $W_2$ during a cycle of the quantum engine now follows from the chain rule for conditional probabilities, $P(W_4,Q,W_2) = P(W_4|Q,W_2)P(Q|W_2)P(W_2)$ \cite{pap91}. Using Eqs.~\eqref{eq:W1}, \eqref{eq:Q2} and \eqref{eq:W3}, we find,
\begin{eqnarray}\label{eq:p_tot}
P(W_2,Q,W_4)& =& \sum_{j,k,l,m} \delta \left[W_2 - (E_k^{\tau}- E_j^0)\right] \nonumber \\
&\times& \delta \left[Q -(E_l^{\tau} - E_k^{\tau}) \right] \delta \left[ W_4 - (E_m^0 - E_l^{\tau}) \right] \nonumber \\
&\times& p_{j}^{0}\,p_{jk}^{\text{exp}}\,p_{kl}^{\text{hea}}\,p_{lm}^{\text{com}} .
\end{eqnarray}
Introducing the total extracted work $W=-(W_2+W_4)$ and integrating over all work values $W_2$ and $W_4$, the joint distribution for work and heat is given by,
\begin{equation}
P(W,Q) = \int dW_2 dW_4 ~ \delta[W+(W_2+W_4)] P(W_2,Q,W_4).
\end{equation}
Using the explicit expression \eqref{eq:p_tot}, we finally obtain,
\begin{eqnarray}\label{eq:p_tot_2}
P(W,Q) & = & \sum_{j,k,l,m}\Delta(W,j,k,l,m,\tau,\gamma)\Delta(Q,j,k,l,m,\tau,\gamma)\nonumber \\
& &\times p_{j}^{0}\,p_{jk}^{\text{exp}}\,p_{kl}^{\text{hea}}\,p_{lm}^{\text{com}}.
\end{eqnarray}
For ideal projective measurements, each spectral peak is infinitely sharp ($\gamma = 0$) and the two functions $\Delta$ associated with work and heat, $X\in\{W, Q\}$,
are simply Dirac peaks, $\Delta(X,j,k,l,m,\tau,0)=\delta\left(X-x_{jklm} \right)$, with $w_{jklm} =E_{j}^{0} -E_{k}^{\tau}+E_{l}^{\tau} -E_{m}^{0}$ and $q_{jklm}=E_{l}^{\tau}-E_{k}^{\tau}$.
In the experiment, each pair of energy measurements is effectively implemented using a Ramsey-like interferometric scheme \cite{dor13,maz13,bat14,bat15}. In this case, spectral peaks have a finite width $\gamma$ and are well fitted by a Lorentzian distribution, $\Delta(X,j,k,l,m,\tau,\gamma)=1/\{\pi\gamma[1+ (X-x_{jklm})^2/\gamma^2]\}$ \cite{bat14,bat15}. This is the form we consider in the present experiment.
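The Lorentzian form of $\Delta$ used here is normalized, so replacing each Dirac peak by a finite-width peak redistributes but does not change the probability weight of each transition. A short numerical sketch (illustrative peak position and width):

```python
import numpy as np

def delta_lorentz(x, x0, gamma):
    """Finite-width spectral peak replacing the Dirac delta, gamma > 0."""
    return 1.0 / (np.pi * gamma * (1.0 + (x - x0) ** 2 / gamma ** 2))

# Each broadened peak still carries unit weight: check the normalization
# numerically on a wide grid (illustrative values x0 = 1.6, gamma = 0.2).
x = np.linspace(-500.0, 500.0, 2_000_001)
dx = x[1] - x[0]
area = delta_lorentz(x, 1.6, 0.2).sum() * dx
assert abs(area - 1.0) < 1e-3
```

In the limit $\gamma \to 0$ the Lorentzian tends to $\delta(x - x_0)$, recovering the ideal-projective-measurement expressions above.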
\begin{figure*}
\includegraphics[width=.56\textwidth]{figs2}\caption{Reconstructed joint probability distribution for work and heat $P(W,Q)$, Eq.~\eqref{eq:p_tot_2}, for
the following values of the driving times, $\tau=200$, $260$, $320$,
$260$, $420$, $500$, and $700\,\mu$s (left), together with the corresponding density plots (right).}
\label{figs2}
\end{figure*}
\subsection{Joint distribution for work and heat - experiment}
We denote the instantaneous energy eigenstates of the two-level system with energy gap $h\nu_i$ ($i=1,2$) as $|\Psi_{\pm}^i\rangle$. The corresponding transition probabilities during the expansion and compression strokes are accordingly given by
\begin{equation}\label{eq:notrans}
|\bra{\Psi_{-}^{1}} U \ket{\Psi_{-}^{2}}|^2 = |\bra{\Psi_{+}^{1}} U \ket{\Psi_{+}^{2}}|^2 = 1 - \xi,
\end{equation}
when there is no transition between states, and by
\begin{equation}\label{eq:trans}
|\bra{\Psi_{-}^{1}} U \ket{\Psi_{+}^{2}}|^2 = |\bra{\Psi_{+}^{1}} U \ket{\Psi_{-}^{2}}|^2 = \xi,
\end{equation}
when there is a change of state. The operator $U$ stands for the expansion or compression unitary. Adiabatic driving corresponds to $\xi=0$.
Table \ref{table2} presents all sixteen possible combinations of
energy transitions of the quantum Otto heat engine during one cycle, together with the
respective stochastic values of work and heat.
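With instantaneous eigenenergies $\pm h\nu_i/2$, the entries of Table \ref{table2} follow by enumerating the four measured states of a cycle. The gap values below, $h\nu_1 = 2.0$~kHz and $h\nu_2 = 3.6$~kHz, are inferred here purely for illustration as being consistent with the table entries:

```python
from itertools import product

# Gaps (in kHz, i.e. units of W/h and Q/h) consistent with Table [table2]:
nu1, nu2 = 2.0, 3.6

def energy(sign, nu):
    """Instantaneous eigenvalue of the two-level system: sign * h*nu / 2."""
    return sign * nu / 2.0

# Enumerate histories in the same (-, +) lexicographic order as the table.
histories = []
for s1, s2, s3, s4 in product((-1, +1), repeat=4):
    W2 = energy(s2, nu2) - energy(s1, nu1)   # expansion work
    Q  = energy(s3, nu2) - energy(s2, nu2)   # heat absorbed in the hot stroke
    W4 = energy(s4, nu1) - energy(s3, nu2)   # compression work
    W  = -(W2 + W4)                          # total extracted work
    histories.append((W, Q))

def close(pair, ref):
    return all(abs(a - b) < 1e-9 for a, b in zip(pair, ref))

assert close(histories[0], (0.0, 0.0))    # history 1
assert close(histories[2], (3.6, 3.6))    # history 3
assert close(histories[5], (-5.6, -3.6))  # history 6
```

Each tuple reproduces the corresponding $(W/h, Q/h)$ row of the table within rounding.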
The reconstructed joint distributions $P(W,Q)$ are displayed in Fig.~\ref{figs2} for the following values of the driving time, $\tau=200$, $260$, $320$,
$260$, $420$, $500$, and $700\,\mu$s.
\subsection{Efficiency distribution}
The stochastic efficiency is defined as $\eta= W/Q$. Its distribution may be obtained from the joint distribution $P(W,Q)$, Eq.~\eqref{eq:p_tot_2}, by integrating over $W$ and $Q$,
as
\begin{eqnarray}
P(\eta)&=&\int dQdW~\delta\left(\eta-\frac{W}{Q}\right)P(W,Q) \nonumber\\
&=&\sum_{j,k,l,m} p_{j}^{0}\,p_{jk}^{\text{exp}}\,p_{kl}^{\text{hea}}\,p_{lm}^{\text{com}}L(w,q,\gamma,\eta)
\end{eqnarray}
with Lorentz-like peaks,
\begin{widetext}
\begin{eqnarray}
L(w,q,\gamma,\eta) & = & \frac{\gamma}{\pi^{2}\left(\gamma^{2}(\eta-1)^{2}+\eta^{2}q^{2}+2\eta qw+w^{2}\right)\left(\gamma^{2}(\eta+1)^{2}+\eta^{2}q^{2}+2\eta qw+w^{2}\right)}\nonumber \\
& \times & \left\{ \gamma\left(-\gamma^{2}+\eta^{2}\left(\gamma^{2}+q^{2}\right)-w^{2}\right)\left(\log\left(\eta^{2}\right)+\log\left(\gamma^{2}+q^{2}\right)-\log\left(\gamma^{2}+w^{2}\right)\right)\right.\nonumber \\
& + & \left.2\tan^{-1}\left(\frac{q}{\gamma}\right)\left(\eta^{2}q\left(\gamma^{2}+q^{2}\right)+2\eta w\left(\gamma^{2}+q^{2}\right)+q\left(\gamma^{2}+w^{2}\right)\right)\right.\nonumber \\
& + & \left.2\tan^{-1}\left(\frac{w}{\gamma}\right)\left(\eta^{2}w\left(\gamma^{2}+q^{2}\right)+2\eta q\left(\gamma^{2}+w^{2}\right)+w\left(\gamma^{2}+w^{2}\right)\right)\right\}
\end{eqnarray}
\end{widetext}
where we have dropped the indices of $w$ and $q$ for better readability.
\begin{figure}[b!]
\centering \includegraphics[width=0.98\columnwidth]{figs5} \caption{Comparison of the microscopic mean efficiency $\langle \eta \rangle = \langle W/Q\rangle $ (experimental red dots) and the macroscopic efficiency $\eta_\text{th}= \langle W\rangle /\langle Q\rangle$ (simulated blue line) as a function of the driving time $\tau$. The macroscopic efficiency increases as the adiabatic regime is approached, while the microscopic average efficiency decreases.}
\label{figs5}
\end{figure}
\subsection{Microscopic versus macroscopic efficiencies}
A comparison of the microscopic mean efficiency $\langle \eta \rangle = \langle W/Q\rangle $ and the macroscopic efficiency $\eta_\text{th}= \langle W\rangle /\langle Q\rangle$ is displayed in Fig.~\ref{figs5} as a function of the driving time $\tau$. The macroscopic efficiency $\eta_\text{th}$ (simulated blue line) increases as the adiabatic regime is approached and irreversible losses induced by quantum friction are reduced. By contrast, the microscopic mean efficiency $\langle \eta \rangle$ (experimental red dots) decreases near the adiabatic regime; it is hence larger for nonadiabatic driving. This counterintuitive behavior is due to the presence of peaks above $\eta_\text{th}$ and, thus, to random events that violate the macroscopic second law of thermodynamics.
\subsection{Detailed fluctuation relation}
A test of the detailed quantum fluctuation relation,
\begin{equation}
\frac{P(W,Q)}{P(-W,-Q)}=e^{-\Delta\beta Q-\beta_{1}W},
\end{equation}
was presented in the main text for the driving time $\tau=200\,\mu$s. Figure \ref{Sup03} exhibits similar tests for $\tau=200$,
$260$, $260$, $320$, $420$, $500$, and $700\,\mu$s, showing that the fluctuation theorem for work and heat is obeyed for all the driving times realized in the experiment.
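The detailed fluctuation relation can also be checked directly on the sharp-peak ($\gamma = 0$) model. The sketch below assumes, for illustration only, the same transition probability $\xi$ for expansion and compression, complete thermalization in the hot stroke, and arbitrary (non-experimental) values of the gaps and inverse temperatures; the relation holds identically in these parameters:

```python
import numpy as np
from itertools import product

# Illustrative parameters (not the experimental values): gaps in kHz,
# inverse temperatures in 1/kHz, one transition probability xi.
nu1, nu2 = 2.0, 3.6
beta1, beta2 = 1.0, 0.3          # cold and hot inverse temperatures
xi = 0.15

def thermal(beta, nu):
    p = np.exp(-beta * np.array([-nu / 2, nu / 2]))
    return p / p.sum()

p0   = thermal(beta1, nu1)                      # initial cold thermal state
phot = thermal(beta2, nu2)                      # state after full thermalization
ptr  = np.array([[1 - xi, xi], [xi, 1 - xi]])   # unitary transition matrix

E0 = np.array([-nu1 / 2, nu1 / 2])
Et = np.array([-nu2 / 2, nu2 / 2])

P = {}                                          # joint distribution P(W, Q)
for j, k, l, m in product(range(2), repeat=4):
    W = E0[j] - Et[k] + Et[l] - E0[m]           # total extracted work
    Q = Et[l] - Et[k]                           # absorbed heat
    prob = p0[j] * ptr[j, k] * phot[l] * ptr[l, m]
    key = (round(W, 9), round(Q, 9))
    P[key] = P.get(key, 0.0) + prob

# Detailed fluctuation relation: P(W,Q)/P(-W,-Q) = exp(-dbeta*Q - beta1*W)
dbeta = beta2 - beta1
for (W, Q), prob in P.items():
    assert abs(prob / P[(-W, -Q)] - np.exp(-dbeta * Q - beta1 * W)) < 1e-9
```

Each forward history and its time-reversed partner contribute the same ratio $e^{-\Delta\beta Q - \beta_1 W}$, so the relation survives the sum over degenerate histories.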
\begin{figure*}
\includegraphics[width=.85 \textwidth]{figs3}
\caption{Experimental verification of the detailed quantum fluctuation relation for work and heat for the
following values of the driving time, $\tau=200$, $260$, $320$, $260$,
$420$, $500$, and $700\,\mu$s.}
\label{Sup03}
\end{figure*}
\subsection{Numerical simulations}
The numerical simulation of the experiment was implemented using in-house developed Python code and the QuTiP
(the Quantum Toolbox in Python) package \cite{Python}. We effectively
simulated the finite-time quantum Otto cycle described in the main
text with the thermalization strokes being solved using the theoretical
thermalization of a qubit with a Markovian thermal reservoir in terms
of the Bloch vector components \cite{Chakraborty2019}. The time-dependent unitary dynamics of the energy gap expansion and compression strokes were solved numerically. In order to obtain the theoretical transition probability, we ran the simulation from $\tau=100~\mu$s to $\tau=700~\mu$s considering
$50$ time steps, which was sufficient to generate smooth curves for
the theoretical quantities and for the confirmation of the quantum fluctuation relations.\\
\textit{Acknowledgements.} We acknowledge financial support from the Federal
University of ABC (UFABC), the Brazilian National Council for Scientific
and Technological Development (CNPq), the Brazilian Federal Agency for
Support and Evaluation of Graduate Education (CAPES), the S\~ao
Paulo Research Foundation (FAPESP) (Grant number 19/04184-5) and the German Science Foundation (DFG) (Project FOR 2724). This research was performed as
part of the Brazilian National Institute of Science and Technology
for Quantum Information (INCT-IQ). We also thank the Multiuser Central
Facilities of UFABC.
\newpage
\section{Introduction}
Communication has always been the mainstay of functional human interaction in any society. The hearing-impaired community uses sign language for effective interpersonal communication. Whilst sign language works well as a means of communication within the hearing-impaired community, the same does not hold when attempting to communicate with people outside of that community.
In 2012, the WHO estimated that over 5.3\% of the world's population (about 430 million people) have hearing disabilities, and that these are most prevalent in sub-Saharan Africa \citep{world}. Despite this prevalence, facilities to support their communication with the larger society are still lacking, most especially in developing countries. Children with hearing loss and deafness often do not receive formal schooling; adults with hearing loss suffer a higher unemployment rate; and a higher percentage of those employed are in the lower grades of employment compared with the general workforce. There are also other impacts, including social isolation, loneliness, and stigmatization.
Although computer vision and artificial intelligence have evolved to create innovations that address hearing disability problems, very few of these solutions are targeted towards developing countries \cite{inclusive}. This is mostly due to two factors: (i) the sign language data in the region is low-resourced, and (ii) increasingly complex and advanced tools are required to deploy these solutions in real-life environments.
\paragraph{Related Work} Datasets are one of the common needs that could blend the recognition, translation, and generation technologies for sign language, since modern, data-driven machine learning techniques work best in data-rich scenarios \cite{bragg2019sign}. As a high-resource sign language, American Sign Language has an abundance of publicly available datasets, the most recent being a large-scale Word-Level American Sign Language dataset \cite{li2020word}. Sign languages, just like spoken languages, vary considerably from one location to another. INCLUDE \cite{sridhar2020include}, an Indian Sign Language dataset, contains 0.27 million frames across 4,287 videos over 263 word signs from 15 different word categories, but when it comes to the region with the highest number of hearing-impaired people, i.e. sub-Saharan Africa, there is a dearth of sign language datasets. Aside from an instance of a slightly-complex South African Sign Language dataset which uses kinetic gloves \cite{mcinnes2014south} and a very small corpus of Ghanaian Sign Language which serves as a proof of concept \cite{odartey2019ghanaian}, no further work has been done in creating standard datasets for sub-Saharan African sign languages.
In this research project, we facilitate the creation of low-resource sign language datasets for countries where hearing impairments are most prevalent, using the Nigerian Sign Language as a case study. A good dataset should sufficiently represent a challenging problem to make it both useful and to ensure its longevity -- to the authors' knowledge this dataset is the first of its kind for the low-resource sign languages of sub-Saharan Africa. The dataset images were annotated for object detection using LabelImg \cite{lbl}, in both YOLO and PASCAL VOC formats. Furthermore, two object detection models and a classification model were trained and compared across multiple evaluation metrics.
The results of this work clearly demonstrate that if provided the availability of otherwise low-resource data, the communication barrier between the hearing impaired community and the larger society can be bridged using the sign-to-speech machine learning techniques that can further be deployed to work in real-time.
\section{Nigerian Sign Language Dataset}
The dataset comprises around 5000 images covering 137 sign words, including the 27 alphabet letters. It should be noted that sign words and phrases depend not just on the signs being performed, but also on the facial expression, the body posture, the relative position of the signs to the body, and motion. In fact, some signs are disambiguated based only on motion, making it challenging to create a dataset that is non-complex and easy to reuse. Hence, for signs that are heavily dependent on motion, we tried to capture the point in the motion that is peculiar to the specific sign only. Supplementary materials, including code and model configs, are made available on Github {{\href{https://github.com/SteveKola/Sign-to-Speech-for-Sign-Language-Understanding}{\color{blue}here}}}.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.6 \textwidth]{E1.png}
\caption{{A mosaic of a fraction of the dataset in diverse backgrounds and lighting conditions.}}
\end{figure*}
\subsection{Dataset Creation}
The dataset focuses on the Nigerian Sign Language; the initial data was created by a TV sign language broadcaster from the Ogun State Broadcasting Corporation. Further work to expand the dataset, with the aim of creating a wider dispersion in the dataset's distribution, was carried out in conjunction with teachers and students, totalling 20 individuals, from two special education schools in Abeokuta and Lagos -- both cities in Nigeria. Figure 1 reflects the diverse backgrounds and lighting conditions in which the images were captured.
The Nigerian Sign Language, like most other sign languages in the world, is quite influenced by the American Sign Language, with a chunk of the language adopted from the British Sign Language and a smattering of vernacular signs.
As shown in Figure 1, an initial 8000 static images of 137 sign words/phrases, including the 27 alphabet letters, were created with 20+ individuals in diverse settings and under different lighting conditions, to account for the different environments in which the work might be applied.
\subsection{Preprocessing \& Annotation}
Data cleaning was performed to weed out excessively blurry images and images where the signs were not totally contained in the image frames. This reduced the dataset from 8000 to 5000 images. All images were resized to 640 x 640, therefore having a shape of (640, 640, 3). Images of the same class were stored in folders named after the class, with filenames formatted as ``\texttt{classname\_id.jpg}''.
A core part of computer vision tasks is to label the data, making it essential for the dataset to be annotated before it can be used with object detection models. This was done using LabelImg \cite{lbl}, a graphical image labeling tool, to draw bounding boxes around the signs performed in each image. These rectangular bounding boxes define the location of the target object (the sign in this case), with each bounding box specified by the $x$ and $y$ coordinates of its corners (xmin, ymin: top left; xmax, ymax: bottom right) \cite{everingham2010pascal}.
The annotations were created in both PASCAL VOC format and the YOLO format. PASCAL VOC stores the annotations as \texttt{.xml} files while the YOLO labeling format stores \texttt{.txt} files.
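The two formats encode the same box differently: PASCAL VOC stores absolute pixel corners, while YOLO stores a normalized center and size. The standard conversion between them (a generic sketch, not code from this project) is:

```python
def voc_to_yolo(box, img_w, img_h):
    """Convert a PASCAL VOC box (xmin, ymin, xmax, ymax, in pixels) to the
    YOLO format (x_center, y_center, width, height, normalized to [0, 1])."""
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2 / img_w,   # normalized box-center x
            (ymin + ymax) / 2 / img_h,   # normalized box-center y
            (xmax - xmin) / img_w,       # normalized box width
            (ymax - ymin) / img_h)       # normalized box height

# A box on one of the 640 x 640 images of the dataset:
cx, cy, w, h = voc_to_yolo((160, 200, 480, 600), 640, 640)
assert (cx, cy, w, h) == (0.5, 0.625, 0.5, 0.625)
```

The YOLO \texttt{.txt} line for a labeled object is then the class index followed by these four normalized values.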
\section{Modelling Experiments and Evaluation}
We designed three models, namely two object detectors based on the YOLO and Single-Shot-Detector architectures and a classifier obtained by fine-tuning a pre-trained model, and compared results across the three models.
\paragraph{Evaluation Metrics} The Precision is calculated as the ratio between the number of positive samples correctly classified to the total number of samples classified as positive (either correctly or incorrectly) while the recall is calculated as the ratio between the number of positive samples correctly classified as positive to the total number of positive samples. The recall measures the model's ability to detect positive samples and the precision measures the model's accuracy in classifying a sample as positive. A precision-recall (PR) curve shows the trade-off between the precision and recall values for different thresholds and the mAP is a way to summarize the PR-curve into a single value representing the average of all precisions. Training an object detection model usually requires two inputs which are the image and the ground-truth bounding boxes for the object in each image. When the model predicts the bounding box, it is expected that the predicted box will not match exactly the ground-truth box. IOU (Intersection over Union) is calculated by dividing the area of intersection between the two boxes by the area of their union which means that higher IOU scores generally translate into better predictions. Specifically, in this paper, we measure where there is a 50\% overlap (@0.5) and where there is a 95\% overlap (@0.95).
\subsection{Object Detection Using YOLO}
YOLO's unified architecture is an extremely fast architecture that reframes object detection as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. Using this architecture, you only look once (YOLO) at an image to predict what objects are present and where they are \cite{redmon2016you}.
We made use of the YOLOv5m implementation with the annotations in YOLO format and trained across 150 epochs. Data augmentation techniques, including scaling, left-right flipping, and HSV (Hue, Saturation, and Value) manipulation among others, were performed on the data.
After the training process, we evaluated the performance of the model using several metrics, including Precision, Recall, and mAP (mean Average Precision) at IOU thresholds of 0.5 (50\%) and 0.95 (95\%). Figure 2 shows the graphs of the metric curves as training progresses.
After evaluation, the YOLO model had a validation precision score of 0.8057, a recall score of 0.95, as well as mAP scores of 0.95 and 0.64 at @0.5IOU and @0.95IOU respectively. This result confirms the effectiveness of our approach in correctly predicting signs performed in diverse environments.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8 \textwidth]{g1e.jpg}
\caption{{Graph of Precision, Recall, and mAP as YOLOv5 training progresses.}}
\end{figure*}
\subsection{Object Detection using Single-Shot Detector}
The Single Shot Detector (SSD) architecture uses a single deep neural network to predict category scores and box offsets for a fixed set of default bounding boxes, and achieves high detection accuracy by producing predictions at different scales from feature maps of different scales, explicitly separating predictions by aspect ratio \cite{liu2016ssd}.
We used the TensorFlow Object Detection API to access and finetune the SSD ResNet50 V1 FPN model, which was pre-trained on the COCO 2017 dataset \cite{lin2014microsoft}, and we fed the model our dataset converted into the TensorFlow Record format. We made use of only horizontal flips and image cropping for data augmentation, and we trained across 40,000 train steps. We then evaluated the model on the test set using the Precision, Recall, and mAP metrics. After evaluation, the SSD model achieved a Precision score of 0.6414, a Recall score of 0.7075, and mAP scores of 0.9535 and 0.6412 for 50\% and 95\% overlap respectively. This result is not as good as the YOLO model's result. Further investigations revealed that the SSD model is a little bit ``fond'' of imagining signs from random shapes. We deduced that the SSD model might require a much larger dataset to produce results that are on par with the YOLO model's result.
\subsection{Classification using Transfer Learning}
\begin{figure*}[htbp]
\centering
\includegraphics[width= 0.35\textwidth]{netw.png}
\caption{{Model architecture using MobileNet V2 as the pre-trained model.}}
\end{figure*}
We also decided to model the problem as a classification problem and train on the dataset while discarding the annotations. The images were resized to 224 by 224 pixels and fed to a pre-trained image classification model, MobileNet V2 \cite{sandler2018mobilenetv2}, which was then customized in two ways.
\paragraph{Feature Extraction on a Pretrained Model}
We used the representation learned by MobileNetV2 when pre-trained on the ImageNet dataset \cite{sandler2018mobilenetv2} to extract features from our data. We built a small model atop the pre-trained model without training any layers from the pre-trained model. Upon evaluation after 60 epochs, our model achieved a test accuracy score of 0.4397, a precision score of 0.8640, and a recall score of 0.1401.
\paragraph{Finetuning on the Pretrained Model}
The poor performance indicated that the model required fine-tuning. MobileNetV2 has 154 layers; hence, we kept only the first 100 layers frozen and trained both the newly-added classifier layers and the last layers of the pre-trained model. This helped fine-tune the higher-order representations in the base model, thereby making them more relevant for our specific task. We evaluated the model after training for an additional 140 epochs, and the results are: a 0.9115 accuracy score, a 0.9355 precision score, and a 0.9063 recall score.
\begin{figure*}[ht]
\centering
\includegraphics[width= 0.7\textwidth]{g2e.png}
\caption{Graphs comparing the feature extraction performance with the fine-tuned performance.}
\end{figure*}
By finetuning the pretrained model, we reduced the loss markedly, and both the accuracy score and the precision score skyrocketed, as seen in Figure 4 and Table 1.
\begin{table}[htbp]
\label{tbl}
\centering
\resizebox{0.4\columnwidth}{!}
{
\begin{tabular}{|c|c|c|c|}
\hline
\textsc{Metrics} & \textsc{YOLO} & \textsc{SSD} & \textsc{Classification} \\
\hline
Recall & 0.9512 & 0.7075 & 0.9063 \\
\hline
Precision & 0.806 & 0.6414 & 0.9355 \\
\hline
mAP:@0.5 & 0.9533 & 0.9535& N/A \\
\hline
mAP:@0.95 &0.6439&0.6412&N/A\\
\hline
\end{tabular}}
\medskip
\caption{Comparison of Precision, Recall, and mAP across different models.}
\end{table}
\section{Conclusion} In this paper, we created a dataset for a low-resource sign language, experimented with different models, and deployed the best model for real-time sign-to-speech synthesis. In pursuance of the objective of this paper, which is to bridge the communication barrier between the hearing-impaired community and the larger society, we enabled text-to-speech synthesis and deployed the YOLO model for real-time usage in production. This is done by converting the sign texts (or labels) returned by the model into an equivalent voice of words using Pyttsx3, a text-to-speech conversion library in Python, and deploying the model on a DeepStack server for real-time usage. The authors hope that this work sufficiently demonstrates how much the alienation of the hearing-impaired community by the larger society in developing countries can be mitigated using resource-efficient machine learning techniques and tools.
\section*{Acknowledgements} Many thanks to Amanda Bibire of Ogun State Broadcasting Corporation, for volunteering to create the first batch of the dataset. We appreciate the teachers and students of the Special Education School, Saint Peter's College, Abeokuta, for graciously dedicating 4 class sessions to create the later and larger batch of the dataset. Finally, we thank the ML Collective community for the generous computational support, as well as helpful discussions, ideas, and feedback on experiments.
the \(H^1(Y;\bbZ)\subset
H^1(Y;\bbR)\)-action on \(H^1(Y;\bbR)\). Thus, with the metric on
\(Y\) fixed, one has
a canonical isomorphism from \(\pi \, (\mathop{\mathrm{Conn}}\nolimits /\scrG;
b, b')\) (or more generally \(\pi _1(\operatorname{C}(Y), b, b')\)) to an orbit of \(H^1(Y;\bbZ)\) in \(H^1(Y;\bbR)\). Let
\(t_{\operatorname{C}}(b, b')\) denote the corresponding element in the orbit
space, \(H^1(Y;\bbR)/H^1(Y;\bbZ)=\bbT_Y\). Summarizing, we have a map
\[
t_{\operatorname{C}}\colon\thinspace
\operatorname{C}(Y)\times \operatorname{C}(Y)\to\bbT_Y.
\]
This map is an analog of the map \(t_\scrH\) defined after Lemma
\ref{gr:hR-isom} in Section \ref{sec:h-class}.
It is continuous with respect to the current topology on \(
\mathop{\mathrm{Conn}}\nolimits /\scrG\) and \(\operatorname{C}(Y)\).
Furthermore, recalling from (\ref{eq:CD1}) and (\ref{CD:1.5})
the structure of \(
\mathop{\mathrm{Conn}}\nolimits /\scrG\), \(\operatorname{C}(Y)\) as \(\bbT_Y\)-bundles, \(t_{\operatorname{C}}\)
is \(\bbT_Y\)-equivariant both with respect to the \(\bbT_Y\)-action
on the right factor of \(\operatorname{C}(Y)\times
\operatorname{C}(Y)\), and the \(\bbT_Y^{op}\)-action on the left factor of \(\operatorname{C}(Y)\times
\operatorname{C}(Y)\) (or \(( \mathop{\mathrm{Conn}}\nolimits /\scrG)\times
(\mathop{\mathrm{Conn}}\nolimits /\scrG)\)). (\(\bbT_Y^{op}\) above refers to the inverse
action of the torus group.) It maps the diagonal to the identity
element \(0\in
\bbT_Y\).
\begin{lemma}\label{sw:h-isom}
(a) Suppose \(b, b'\in \operatorname{C}(Y)\) are sufficiently close in the sense that \(t_{\operatorname{C}}
(b, b')\) falls in the ball \(B_{0}(1/2)\subset
\bbT_{Y}\). Then there is a distinguished element \(\tilde{o}_Y(b, b')\) in \(\pi
(\operatorname{C}(Y);b, b')\). This distinguished element is independent of the
metric on \(Y\), though the notion of being ``sufficiently close'' does depend on it.
(b) Fix \(X_\bullet \subset \overline{X}\).
Suppose \(\{b_i\}_i\), \(\{b'_i\}_i\in \prod_{i\in
\grY_{X_\bullet}}\operatorname{C}(Y_i)\) are sufficiently close in the sense
described above.
Given any \(\grc_i\in \Pi ^{-1} b_i\),
\(\grc_i'\in \Pi ^{-1} b_i'\), let \(o_{Y_i}(\grc_i, \grc_i'):=(\Pi _*)^{-1}\tilde{o}_{Y_i}(b_i, b'_i)\in \pi _{Y_i}
(\grc_i, \grc_i')\). Then the concatenation map
\(\operatorname{c}_{\{o_{Y_i}(\grc_i, \grc_i')\}_i}\) defines a canonical isomorphism from \(\pi
_0\scrB_{X_\bullet} (\{\grc_i\}_i)\) to \(\pi
_0\scrB_{X_\bullet} (\{\grc_i'\}_i)\) as affine spaces under \(\pi _{X_\bullet}\).
\end{lemma}
\noindent{\it Proof.}\enspace Item (a) follows directly from the discussion preceding
the statement of the lemma. Item (b) generalizes Item (b) of
Lemma \ref{rem:rel_class}, and follows from Item (a) above together
with Part (i) in the proof of Lemma \ref{rem:rel_class}.
\hfill$\Box$\medbreak
\begin{rem}\label{rem:rel-htpy-convention}
The argument in the proof of the preceding lemma
also establishes the following: A choice of basis for \(H^1(Y; \bbZ)\) gives a
way of (simultaneously) identifying all sets of relative
homotopy classes \(\pi _Y (\grc, \grc')\), \(\grc, \grc'\in \scrB_Y\),
with \(H^1(Y;\bbZ)\) (as affine spaces). (Recall that such a choice
(cf. (\ref{eq:deltab1})) is
required to define the normalized Coulomb gauge, and has been fixed
implicitly in this article.) To see this, observe that as explained in Part (i) in the proof of
Lemma \ref{rem:rel_class}, it suffices to identify sets of relative
homotopy classes \(\pi \, (\operatorname{C}(Y); b, b')\),
with \(H^1(Y;\bbZ)\) (as affine spaces) for every pair \(b, b'\in
\operatorname{C}(Y)\).
This is equivalent to
(consistently) choosing a base point in \(\pi \, (\operatorname{C}(Y); b,
b')\) for each pair \(b, b'\in
\operatorname{C}(Y)\). To do so, note
that a choice of basis for \(H^1(Y;\bbZ)\) defines an isomorphism
\[
i_h \colon\thinspace \bbR^{b^1} \stackrel{\sim}{\to} H^1(Y;\bbR).
\]
Recall also that \(t_{\operatorname{C}}(b, b')\) corresponds to an isomorphism from \(\pi \, (\operatorname{C}(Y); b,
b')\) to an orbit of
\(H^1(Y;\bbZ)\) in \(H^1(Y;\bbR)\). We define the
base point in \(\pi \, (\operatorname{C}(Y); b,b')\) to be the unique element
whose image under this isomorphism lies in \(i_h\, ([0,1)^{\times
b^1})\subset H^1(Y;\bbR)\). Denote this base point by \(\tilde{o}_Y(b,
b')\), and given \(\grc, \grc'\in \scrB_Y\), let
\[
o_{Y}(\grc, \grc'):=(\Pi _*)^{-1}\tilde{o}_Y(\Pi \grc, \Pi \grc')\in
\pi _Y (\grc, \grc')\]
denote the corresponding base point in \(\pi _Y (\grc, \grc')\).
The argument for Item (b) of Lemma
\ref{sw:h-isom} then implies that once a choice of basis is fixed for
every \(H^1(Y_i; \bbZ )\), then for any given \(X_\bullet\), \(\pi
_0\scrB_{X_\bullet} (\{\grc_i\}_i)\), \(\pi
_0\scrB_{X_\bullet} (\{\grc_i'\}_i)\) are identified for every pair
\(\{\grc_i\}_i, \{\grc_i'\}_i\) in \(\scrB_{\partial\overline{X_\bullet}}\).
\end{rem}
\begin{rem}\label{rem:based-rel-htpy}
Fix a reference connection \(A_0\) on \(\bbS^+\) as in
(\ref{eq:A_0}). Then for every given \(X_\bullet\), \((A_0,
0)|_{X_\bullet}\) determines a base
point in \(\pi_0(\scrB_{X_\bullet}(\{\grc_{0,i}\}_i))\), where \(i\in
\grY_{X_\bullet}\), and \(\grc_{0,i}\in \scrB_{Y_i}\) denotes the
restriction of \((A_0,
0)\) to the \(i\)-th connected component of
\(\partial\overline{X_\bullet}\). This in turn defines an
isomorphism
\[
\imath_{A_0}\colon\thinspace \pi_0(\scrB_{X_\bullet}(\{\grc_{0,i}\}_i))\stackrel{\sim}{\to }\pi
_{X_\bullet}
\]
as affine spaces.
Thus, a choice of \(A_0\) determines, for any given \(X_\bullet\subset \overline{X}\), a way of simultaneously identifying \(\pi_0(\scrB_{X_\bullet}(\{\grc_i\}_i))\simeq \pi
_{X_\bullet }\) for all \(\{\grc_i\}_i\in \scrB_{\partial\overline{X_\bullet}}\): Given any \(\{\grc_i\}_i\in \prod_{i\in
\grY_{X_\bullet}}\scrB_{Y_i}\), let \(h_{A_0}\) denote the
isomorphism
\[
h_{A_0}:=\imath_{A_0}\circ \operatorname{c}_{\{o_{Y_i}(\grc_i,
\grc_{0,i})\}_{i\in \grY_{X_\bullet}}}\colon\thinspace
\pi_0(\scrB_{X_\bullet}(\{\grc_{i}\}_{i\in \grY_{X_\bullet}}))\stackrel{\sim}{\to }\pi
_{X_\bullet}.
\]
When \(X_\bullet\) is of the form \(\hat{Y}_{i, I}\), this isomorphism
agrees with the isomorphism \(\pi _{Y_i}(\grc,
\grc')\stackrel{\sim}{\to } H^1(Y_i;\bbZ)\) introduced in Remark
\ref{rem:rel-htpy-convention} under our assumptions on \(A_0\).
\end{rem}
We next compare the relative homology classes of
t-curves and the relative homotopy classes of Seiberg-Witten quotient
configurations. Recall from
Section \ref{sec:h-class} the definition of \(\scrH^\bbR(X^{'a}, \nu , \grs,
\{\tilde{\gamma}_i\})\) and various other notions. Let \(c=[(A,
\Psi)]\in \scrB_X(\{\grc_i\}_i)\), with \(\grc_i=[(B_i, \Phi_i)]\in
\scrB_{Y_i}\). Assign to \(c\) the 2-current \(\tilde{c}=\frac{i}{2\pi} F_{A^E}\) on \(X^{'a}\). This
current determines a class in \(\scrH^\bbR(X^{'a}, \nu , \grs,
\{\tilde{\grc}_i\}_{i\in \grY_m})\), where
\(\tilde{\grc}_i=\frac{i}{2\pi} F_{B_i^E}\in\scrZ(Y_i, \nu _i,
\grs_i)\subset
\scrZ^\bbR(Y_i, \nu _i, \grs_i)\). This class only depends on the
relative homotopy class of \(c\), and so in this way we have a map
\[
\grh' \colon\thinspace \pi _0\scrB_X(\{\grc_i\}_{i\in \grY})\to \scrH^\bbR((X^{'a}, \nu , \grs),
\{\tilde{\grc}_i\}_{i\in \grY_m}).
\]
This map intertwines the \(\pi _X\)-action on \(\pi
_0\scrB_X(\{\grc_i\}_{i\in\grY})\) with the \(\operatorname{h}(\pi _X)\subset
\scrH^\bbR_X\)-action on \(\scrH^\bbR((X^{'a}, \nu , \grs),
\{\tilde{\grc}_i\}_{i\in\grY_m})\). (The map \(\operatorname{h}\colon\thinspace \pi _X\to \scrH_X\) is as defined in Remark
\ref{rem:b_1=0}.) It is also natural with respect to
the concatenation maps on both sides. In the special case when \((X, \nu
)=(\bbR\times Y_i, \pi _2^*\nu _i)\), \(i\in\grY_m\), is cylindrical,
the map
\(\grh' \colon\thinspace \pi _{Y_i} (\grc, \grc')\to \scrH^\bbR(Y_i, \nu _i, \grs_i; \tilde{\grc},
\tilde{\grc}')\) by construction factors through a map \(\underline{\grh}\colon\thinspace \pi \,
(\operatorname{C}(Y_i), \Pi \grc, \Pi \grc')\to \scrH^\bbR(Y_i, \nu _i, \grs_i; \tilde{\grc},
\tilde{\grc}')\): \(\grh'=\underline{\grh}\circ \Pi
_*\). It follows from construction that in this case,
\[
\jmath_c=\iota_{PD}\circ I_{\scrH}\circ\underline{\grh},
\]
where \(\iota_{PD}\) again denotes the Poincar\'e map.
Next, consider the case when (\ref{b_1=0}) holds.
Recall from Remark \ref{rem:b_1=0} that
\(\pi_X\stackrel{\rmh}{\simeq }\scrH_X\) in this case.
\begin{lemma}\label{lem:htpy}
Assume (\ref{b_1=0}); namely, all vanishing ends of \((X, \nu)\) have
zero first Betti numbers. For each \(i\in \grY_m\), let
\(\{\grc_{i, r}\}_r\in \scrB_{Y_i}\) be a sequence that strongly
t-converges to a t-orbit \(\pmb{\gamma }_i\), and for each \(i\in
\grY_v\), let \(\grc_{i,r}=\grc_i\in \scrB_{Y_i}\) be independent of
\(r\).
Let \(r_0>1\) be as in Lemma \ref{rem:rel_class}.
Then:
(a) For all \(r>r_0\), there is a canonical
isomorphism \(\grh\) from \(\pi_0\scrB_X(\{\grc_{i,r}\}_{i\in
\grY})\) to an
\(\scrH_X\)-orbit in \(\scrH^\bbR\big((X^{'a}, \nu),\{\tilde{\gamma }_i\}_i\big)\) as affine spaces under
\(\pi_X\simeq \scrH_X\).
(b) Suppose moreover that the sequences \(\{\grc_{i,r}\}_r\), \(i\in \grY_m\),
arise as the \(Y_i\)-end limits of a sequence of admissible solutions
\(\{(A_r, \Psi _r)\}_r\) to \(\grS_{\mu _r, \hat{\grp}}(A_r, \Psi
_r)=0\), and that the
assumptions of Theorem \ref{thm:l-conv} hold. Then the
image of the map \(\grh\) is \(\scrH\big((X^{'a}, \nu),\{\tilde{\gamma
}_i\}_i\big)\subset \scrH^\bbR\big((X^{'a}, \nu),\{\tilde{\gamma
}_i\}_i\big)\). Namely, in this case \(\grh\) is a canonical
isomorphism from \(\pi_0\scrB_X(\{\grc_{i,r}\}_i)\) to \(\scrH\big((X^{'a},
\nu),\{\tilde{\gamma }_i\}_i\big)\) as affine spaces under \(\pi_X\simeq \scrH_X\).
\end{lemma}
\noindent{\it Proof.}\enspace By construction, we have the following commutative diagram under
the assumptions of the lemma: For any \(r, r'>r_0\)
\begin{equation}\label{CD:grh}
\xymatrix{
\pi_0\scrB_X(\{\grc_{i,r'}\}_{i\in
\grY})\ar@{->}[r]^{}\ar@{->}[d]^{\grh'}&
\pi_0\scrB_X(\{\grc_{i,r}\}_{i\in \grY}) \ar@{->}[d]^{\grh'}
\\
\scrH ^\bbR\big((X^{'a}, \nu),\{\tilde{\grc}_{i,r'}\}_{i\in \grY_m}\big)
\ar@{->}[r]^{} & \scrH ^\bbR\big((X^{'a},
\nu),\{\tilde{\grc}_{i,r}\}_{i\in \grY_m}\big) \ar@{->}[r]^{i_\infty} & \scrH ^\bbR\big((X^{'a}, \nu),\{\tilde{\gamma }_i\}_i\big).
}
\end{equation}
where all the maps are morphisms of affine spaces under \(\pi
_X\simeq\scrH_X\subset \scrH_X^\bbR\); the horizontal map in the top
row is the canonical isomorphism from Lemma \ref{rem:rel_class}; the
horizontal maps in the bottom
row are the canonical isomorphisms from Lemma \ref{gr:hR-isom}.
The composition \(i_\infty\circ \grh'\) maps
\(\pi_0\scrB_X(\{\grc_{i,r}\}_{i\in \grY})\) to another orbit of the
\(\scrH_X\)-action in \(\scrH ^\bbR\big((X^{'a}, \nu),\{\tilde{\gamma
}_i\}_i\big)\). The canonical isomorphism claimed in Item (a) of the
statement of the present lemma is defined to be this composition map:
\(\grh:=i_\infty\circ \grh'\).
The proof of Item (b) will follow as a by-product of the proof of
Theorem \ref{thm:g-conv} (c), and is deferred to Section
\ref{sec:g-conv:a}.
\hfill$\Box$\medbreak
\subsection{Taubes's proof for \(SW\Rightarrow Gr\): A synopsis}\label{sec:synopsis}
The proof of Theorem \ref{thm:l-conv} follows the outline of Taubes's
arguments in \cite{T}. To serve as a roadmap for the remainder of this
article, a brief summary of the ingredients in \cite{T} is
provided here for the reader's convenience. For each step listed below, we
indicate where it takes place in \cite{T}, \cite{Ts}, and
the present article, in their respective contexts.
Let \((X, \nu)\) be an admissible pair and let
\((A, \Psi)=(A_r, \Psi_r)\) be as in the statement of Theorem
\ref{thm:l-conv}. Write \[
\Psi=(r/2)^{1/2} \psi, \quad \text{and \(\psi=(\alpha, \beta)\)}\]
according to the decomposition \(\bbS^+=E\oplus E\otimes K^{-1}\) on
\(X^{'a}\). The \(\mathop{\mathrm{Spin}}\nolimits^c\)-connection corresponding to \(A\) induces a connection
on \(E\), and together with the Levi-Civita connection, also a
connection on \(E\otimes K^{-1}\). We use \(\nabla_A\alpha\),
\(\nabla_A\beta\) to denote the covariant derivative with respect to
the aforementioned induced connections.
Taubes's proof proceeds with the steps listed below:
\begin{itemize}
\item[(1)] Obtaining pointwise estimates of \(|\beta|, |F_A|, |\nabla_A\alpha|,
|\nabla_A\beta|\) in terms of the ``energy
density'' \(r|\varpi|:=r||\nu|-|\alpha|^2|\).
The estimates show that these quantities are significant only where \(\alpha\) is
small. Cf. \cite{Ts} Sections 3 b)-e), \cite{T} Section I.2.
Section \ref{sec:pt-est} in this article contains the corresponding estimates
on a MCE.
\item[(2)] Obtaining an integral bound for \(r|\varpi|\) in terms of constants depending
only on the \(\mathop{\mathrm{Spin}}\nolimits^c\) structure and the metric. This is basically
equivalent to an integral bound on
\(\frac{iF_A}{2\pi}\wedge\nu\). Cf. \cite{Ts} Section 3 a) and
references therein. The
corresponding results are established in \textsection \ref{sec:int-est} below.
\item[(3)] Establishing a monotonicity formula for \(\scrW_B\),
which is a certain notion of ``energy''
over a ball \(B\) in \(X\).
Cf. \cite{Ts} Section 4 and \cite{T} Section I.3 and the beginning of
Section \ref{sec:mono} below.
Roughly speaking, \(\scrW_B\) is the integral of the ``energy
density'' \(r|\varpi|\) over \(B\). Note that this notion of energy
is however of a
different nature from other, more typical notions of energy for
Seiberg-Witten theory on MCE, such as
the Chern-Simons-Dirac functional, or more generally, the topological
energy \(\scrE_{top}\) introduced in \cite{KM}. This \(\scrW_B\) should be regarded
as an analogue of the notion of energy on the Gromov side, namely the
area of the holomorphic curves. As a result of the monotonicity
formula, one obtains an \(r\)-independent bound on the
2-dimensional Hausdorff measure of \(\alpha^{-1}(0)\), and hence also
on that of the t-curve that \(\alpha^{-1}(0)\) geometrically converges to as
\(r\to \infty\). The relevant monotonicity formula in our context is
given in Section \ref{sec:mono} below.
\item[(4)] Rescaling \((A, \psi)\) over a ball of radius \(O(r^{-1/2})\)
yields an approximate solution to the version of the
Seiberg-Witten equation (\ref{eq:SW}) on the Euclidean space \(X={\mathbb R}^4=\{(x_1, \cdots, x_4)\}\) with
\(\mu^+=\frac{1}{2}(dx_1\wedge dx_2+dx_3\wedge dx_4)\). The behavior of
Seiberg-Witten solutions for such \((X, \mu^+)\) is
well understood in terms of vortex solutions on \(\bbR^2\), and these
give the local models
for \((A, \psi)\). In particular, combining with Items (1)--(3) above, this
implies the exponential decay of \(|\beta|, |F_{A^E}|, |\nabla_A\alpha|,
|\nabla_A\beta|\) away from \(\alpha^{-1}(0)\). This step appeared in
Section 6 of \cite{Ts} and Section I.4 of \cite{T}. This
adapts easily to our context; see Sections \ref{sec:exp} and
\ref{sec:local_m} below.
\item[(5)] Regarding \(\frac{i}{2\pi}F_{A^E}\) as a 2-current, Items (1) and (2) above imply that this
current is bounded independently of \(r\). Thus, Alaoglu's theorem
(cf. e.g. \cite{Rd} Theorem 10.6.17) implies that these currents
converge to a current \(\mathcal{F}\) in weak\(^*\)
topology. Furthermore, via (3) and (4) it is shown that the support of
\(\mathcal{F}\), denoted temporarily by \(C\), is a closed set of finite 2-dimensional measure,
and is the \(r\to\infty\) limit of \(\alpha^{-1}(0)\) in the sense of
(\ref{eq:dist-conv}). Cf. the first half of Section I.5 of \cite{T};
Sections 7(a) and (b) of \cite{Ts}, and Section \ref{sec:l-conv} in
this article.
\item[(6)] Using Item (4), it is shown that the current \(\mathcal{F}\) defines a
``positive cohomology assignment'', namely, a map from a
certain class of generic disks in \(X\) (the ``admissible disks'') to \(\bbZ\), whose
value is positive for admissible pseudo-holomorphic disks which
intersect \(C\). Taubes showed that this fact guarantees that \(C\) is
a pseudo-holomorphic curve. Cf. Section 6 of \cite{T} and Sections
7(c)-(e) of \cite{Ts}. This part of Taubes's argument applies directly in our setting with
no need for modification.
\end{itemize}
\section{\(L^2_{1, loc}\)-bounds and integral estimates}\label{sec:4}
The main results of this section come in two groups.
The first group of main results, Propositions
\ref{prop:SW-L2-bdd} and \ref{est-L^2_1}, provide
\(L^2_1\)-bounds on solutions \((A_r, \Psi _r)\) to the Seiberg-Witten
equation \(\grS_{\mu _r, \hat{\grp}}(A_r, \Psi _r)=0\), with careful
control on the growth rate of these bounds in \(r\). While (for each
fixed \(r\)) such
\(L^2_1\)-bounds usually serve as the starting point of typical
proofs of compactness results in Seiberg-Witten
theory (cf. e.g. Corollary 10.6.2 and Theorem 5.2.1
(ii) of \cite{KM}), the usual arguments to derive them (e.g. those in \cite{KM})
do not provide sufficient bounds on the \(r\)-growth rate needed for t-convergence.
The second group of main results, Lemmas \ref{co:E-omega-bdd3} and
\ref{T:lem3.1}, supply the type of ``energy bounds'' required to begin the proof
of \(t\)-convergence results. This corresponds to Step (2) of the list
in Section \ref{sec:synopsis}.
\subsection { \(\scrE_{top}\) and \(\scrE_{anal}\): the topological energy
and the analytic energy}\label{sec:top-energy}
We begin with some general definitions and observations.
Similar to what was done in \cite{KM} (Definition 4.5.4 therein), we introduce two
Floer-theoretic notions of ``energy'', \(\scrE_{anal}(A, \Psi)\) and
\(\scrE_{top}(A,\Psi)\) below. They are related when \((A, \Psi )\)
satisfies a Seiberg-Witten equation. (The subscripts ``anal'' and ``top'' signify
``analytical'' and ``topological'' respectively.) As is typical in
Floer theories, \(\scrE_{anal}\) is useful for estimating
\(L^2_1\)-norms, while \(\scrE_{top}\) depends only on the end points
of the Floer trajectory and its relative homotopy class. In the
cylindrical case,
\(\scrE_{top}\) is the difference in the values of the CSD functional at end points
(cf. \textsection \ref{sec:adm-SW} for the definition of CSD functionals).
Let \(X\) be a \(\mathop{\mathrm{Spin}}\nolimits^c\) MCE with ending 3-manifolds \(Y_i\), and let \(\mu \)
be a closed 2-form on \(X\) that has a closed 2-form \(\mu _i\) as the
\(Y_i\)-end limit for each \(i\in \grY\), in the sense defined in
Section \ref{sec:convention}. Recall also from Section \ref{sec:convention} the
definition of \(X_\bullet\). (Note in particular that the boundary
components of \(X_\bullet\) consist of slices of the form \(Y_{i:l}\)).
\begin{defn}
Fix a compact \(X_\bullet\subset X\). Given \((A, \Psi)\in \scrC (X_\bullet)\), let
\begin{equation}\label{eq:E-an}
\begin{split}
\scrE_{anal}^\mu(X_{\bullet})(A, \Psi)= &\frac{1}{4}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\int_{X_{\bullet}}\Big|\frac{i}{4}\rho(\mu^+)-(\Psi\Psi^*)_0\Big|^2\\
&\qquad
+\int_{X_{\bullet}}\frac{R_g}{4}|\Psi|^2-\frac{i}{4}\int_{X_{\bullet}}F_A\wedge*_4\mu,
\\
\end{split}\end{equation}
where \(R_g\) denotes the scalar curvature.
Over a slice
\(Y_{i:s}:=\mathfrc{s}_i^{-1}(s)\simeq Y_i\) of \(\bar{X}\), let \((B,
\Phi)=(B(s),
\Phi(s))=(A, \Psi )\big|_{Y_{i:s}}\) denote
the restriction of \((A, \Psi)\) to \(\mathfrc{s}_i^{-1}(s)\subset
X\).
We define
\begin{equation}\label{eq:E-top}
\scrE_{top}^\mu(X_{\bullet})(A, \Psi)=\frac{1}{4}\int_{X_{\bullet}}F_A\wedge
F_A-\int_{\partial \overline{X_\bullet}}\langle \Phi,
\mbox{\mbox{\(\partial\mkern-7.5mu\mbox{/}\mkern 2mu\)}}_B\Phi\rangle+\frac{i}{4}\int_{X_{\bullet}}F_A\wedge \mu.
\end{equation}
We then define
\begin{equation}\label{def:E_top nonlocal}
\scrE_{top}^{\mu, \hat{\grp}}(X_\bullet)(A,
\Psi)=\scrE_{top}^\mu (X_{\bullet})(A,
\Psi)-2\, f_{\hat{\grp}}((A, \Psi )|_{\partial\overline{X_\bullet}}).
\end{equation}
\end{defn}
The preceding definition of \(\scrE_{top}^\mu \) and \(\scrE^\mu
_{anal}\) is motivated by the following identity:
\begin{equation}\label{eq:E-top=anal}
\|\grS_{\mu}(A, \Psi) \|_{L^2(X_{\bullet})}^2=\scrE_{anal}^{\mu}(X_{\bullet})(A, \Psi)-\scrE_{top}^{\mu}(X_{\bullet})(A, \Psi).
\end{equation}
In particular, when \((A, \Psi )\in \scrC(X_\bullet)\)
is a solution to the
Seiberg-Witten equation \(\grS_{\mu, \hat{\grp}}(A, \Psi)=0\), one
has:
\[
\scrE_{anal}^{\mu}(X_\bullet)(A,
\Psi
=\scrE_{top}^{\mu, \hat{\grp}}(X_\bullet)(A, \Psi)+\|\hat{\grp}(A, \Psi )\|_{L^2(X_\bullet)}^2.
\]
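The identity (\ref{eq:E-top=anal}) arises from the standard completion-of-squares computation for the Seiberg-Witten map; the following schematic, which suppresses exact constants, the perturbation term, and the precise sign convention for the \(\mu \)-term, follows the model computation in \cite{KM} (Section 4.5 therein) and indicates where the terms of (\ref{eq:E-an}) and (\ref{eq:E-top}) originate. Expanding the \(L^2\)-norms of the two components of the Seiberg-Witten map gives
\[
\|\grS_{\mu}(A,\Psi)\|_{L^2(X_\bullet)}^2
=\Big\|\tfrac{1}{2}\rho(F_A^+)-(\Psi\Psi^*)_0+\tfrac{i}{4}\rho(\mu^+)\Big\|_{L^2(X_\bullet)}^2
+\|D_A^+\Psi\|_{L^2(X_\bullet)}^2.
\]
The Weitzenb\"ock formula converts \(\|D_A^+\Psi\|_{L^2}^2\) into \(\|\nabla_A\Psi\|_{L^2}^2+\int_{X_\bullet}\frac{R_g}{4}|\Psi|^2\) plus a curvature cross term and a boundary Dirac term; the cross terms cancel against those from expanding the first square, while the pointwise identity \(|F_A^+|^2=\frac{1}{2}|F_A|^2+\frac{1}{2}\,{*}(F_A\wedge F_A)\) trades the remaining self-dual curvature square for \(\frac{1}{4}\int_{X_\bullet}|F_A|^2-\frac{1}{4}\int_{X_\bullet} F_A\wedge F_A\). What survives, after collecting the \(\mu \)-terms, is precisely \(\scrE_{anal}^{\mu}(X_\bullet)(A,\Psi)-\scrE_{top}^{\mu}(X_\bullet)(A,\Psi)\).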
This identity is used in Part 1 of the next subsection to bound the square
terms in the first line of (\ref{eq:E-an}) in terms of
\(\scrE_{top}\). What follows are some observations demonstrating the
topological nature of \(\scrE_{top}\).
Let \(A_0\), \(B_{0,i}\) be the reference connections from
(\ref{eq:A_0}). Recall the definitions of CSD functionals from (\ref{eq:CSD_q}),
(\ref{eq:CSD}), and Remark \ref{rem:M-disconn}. It is often convenient to re-express the topological
energies \(\scrE_{top}^\mu \), \(\scrE_{top}^{\mu , \hat{\grp}}\) in
terms of the CSD functionals:
By Stokes' theorem,
\begin{equation}\label{eq:E-top-q}
\begin{split}
\scrE_{top}^{\mu}(X_\bullet ) (A, \Psi) &=\frac{1}{4}\int_{X_{\bullet}}F_{A_0}\wedge
F_{A_0}+ \frac{i}{4}\int_{X_{\bullet}}F_{A_0}\wedge\mu
-2\operatorname{CSD}_{\mu}^{\partial \overline{X_\bullet }}(B, \Phi);\\
\scrE_{top}^{\mu, \hat{\grp}}(X_\bullet ) (A, \Psi) &=\frac{1}{4}\int_{X_{\bullet}}F_{A_0}\wedge
F_{A_0}+ \frac{i}{4}\int_{X_{\bullet}}F_{A_0}\wedge\mu
-2\operatorname{CSD}_{\mu, \hat{\grp}}^{\partial \overline{X_\bullet }}(B, \Phi).\\
\end{split}
\end{equation}
Given the formulas (\ref{eq:E-top}), (\ref{def:E_top nonlocal}),
\(\scrE_{top}^\mu \) and \(\scrE_{top}^{\mu , \hat{\grp}}\) are by
definition gauge-invariant. Moreover, \(\scrE_{top}^\mu (X_\bullet)(A, \Psi
)\) and \(\scrE_{top}^{\mu , \hat{\grp}}(X_\bullet)(A, \Psi
)\)
can be computed from the relative homotopy class of
\([(A, \Psi )]\) via the following explicit formulas:
Recalling that \([(B, \Phi )]_c\in \scrC(Y)\) denotes the representative of \([(B,
\Phi )]\in \scrB(Y)\)
in the normalized Coulomb gauge, one may re-express (\ref{eq:E-top-q}) as
\begin{equation}\label{eq:E-top-h}
\begin{split}
& \scrE_{top}^{\mu}(X_\bullet ) (A, \Psi)=\frac{1}{4}\int_{X_{\bullet}}F_{A_0}\wedge
F_{A_0}+ \frac{i}{4}\int_{X_{\bullet}}F_{A_0}\wedge\mu\\
& \qquad \quad -2\operatorname{CSD}_{\mu}^{\partial \overline{X_\bullet }}([(B,
\Phi)]_c)-2i^*\big(
\pi c_1(\grs_X|_{X_\bullet})-\frac{\mu |_{X_\bullet}}{4}\big)\cdot
h\,(A, \Psi );\\
& \scrE_{top}^{\mu, \hat{\grp}}(X_\bullet ) (A, \Psi)=\frac{1}{4}\int_{X_{\bullet}}F_{A_0}\wedge
F_{A_0}+ \frac{i}{4}\int_{X_{\bullet}}F_{A_0}\wedge\mu\\
& \qquad \quad -2\operatorname{CSD}_{\mu, \hat{\grp}}^{\partial \overline{X_\bullet }}([(B,
\Phi)]_c)-2i^*\big(
\pi c_1(\grs_X|_{X_\bullet})-\frac{\mu |_{X_\bullet}}{4}\big)\cdot
h\,(A, \Psi ),
\end{split}
\end{equation}
where \(h\, (A, \Psi )\in H^1(\partial \overline{X_\bullet}; \bbZ)/\mathop{\mathrm{Im}\, }\nolimits i^*\simeq \pi
_{X_\bullet}\) denotes the image of the relative homotopy class of
\([(A, \Psi )]\) under the map \(h_{A_0}\) from Remark \ref{rem:based-rel-htpy}, and \(i^* \colon\thinspace
H^*(\overline{X_\bullet}; \bbZ)\to H^*(\partial \overline{X_\bullet };\bbZ)\) is part of the
relative long exact sequence of the pair \((\overline{X_\bullet}, \partial\overline{
X_\bullet})\). Note that the pairing between \(H^1(\partial \overline{X_\bullet};
\bbZ)/\mathop{\mathrm{Im}\, }\nolimits i^*\simeq H_2(\partial \overline{X_\bullet};
\bbZ)/\mathop{\mathrm{Ker}\,}\nolimits i\) and \(\mathop{\mathrm{Im}\, }\nolimits (i^*\colon\thinspace H^2(\overline{X_\bullet}; \bbZ)\to H^2(\partial \overline{X_\bullet};\bbZ))\) is
well-defined because the subspace \( \mathop{\mathrm{Im}\, }\nolimits (i^*\colon\thinspace H^2(\overline{X_\bullet}; \bbZ)\to H^2(\partial\overline{
X_\bullet};\bbZ))\) pairs trivially with \( \mathop{\mathrm{Ker}\,}\nolimits (i\colon\thinspace H_2(\partial \overline{X_\bullet}; \bbZ)\to H_2(\overline{
X_\bullet};\bbZ))\).
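The trivial pairing claimed here is a one-line consequence of the naturality of the Kronecker pairing with respect to the inclusion \(\partial\overline{X_\bullet}\hookrightarrow \overline{X_\bullet}\): for any \(\alpha \in H^2(\overline{X_\bullet};\bbZ)\) and \(b\in \mathop{\mathrm{Ker}\,}\nolimits (i\colon\thinspace H_2(\partial \overline{X_\bullet}; \bbZ)\to H_2(\overline{X_\bullet};\bbZ))\),
\[
\langle i^*\alpha , b\rangle = \langle \alpha , i\, b\rangle = \langle \alpha , 0\rangle = 0,
\]
so every class in \(\mathop{\mathrm{Im}\, }\nolimits i^*\) pairs trivially with every class in \(\mathop{\mathrm{Ker}\,}\nolimits i\), and the pairing descends to the quotients above.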
To summarize:
\begin{lemma}
Let \(X\), \(\mu \), \(\hat{\grp}\) be as before. Given arbitrary
(possibly non-compact) \(X_\bullet \subset X\) and an admissible \((A,
\Psi)\in \scrC(X_\bullet)\), the topological
energies \(\scrE_{top}^\mu(X_\bullet )(A, \Psi)\) and \(\scrE_{top}^{\mu , \hat{\grp}}(X_\bullet )(A, \Psi)\) are
well-defined.
Moreover, their values depend only on the
\(\mathop{\mathrm{Spin}}\nolimits^c\)-structure of \(X\), \([(A, \Psi )|_{\partial
\overline{X_\bullet}}]\in \scrB(\partial
\overline{X_\bullet})\), and the relative
homotopy class of \([(A, \Psi)]\).
\end{lemma}
Moreover, combining
(\ref{eq:E-top-h})
and Lemmas \ref{lem:CSD-est}, \ref{lem:htpy}, one
has:
\begin{lemma}\label{lem:E_topX}
Adopt the assumptions and notations in the statement of Theorem
\ref{thm:l-conv}. (In particular, recall that \((A_r, \Psi_r)\) are
Seiberg-Witten solutions \(\grS_{\mu _r, \hat{\grp}}(A_r, \Psi _r)=0\)
with \(Y_i\)-end limits prescribed by \(\{\pmb{\gamma
}_i\}_{i\in\grY_m}\), \(\{(B_i, \Phi _i)\}_{i\in \grY_v}\),
and with fixed relative homotopy class
\(\mathfrc{h}\).)
Then there exists a constant
\(\textsc{e}\in\bbR^+\) depending only on \(\|\nu \|_{C^1}\),
\(\varsigma_w\), and:
\BTitem\label{dep:rel-htpy}
\item the \(\mathop{\mathrm{Spin}}\nolimits^c\)-structure on
\(X\),
\item the relative homotopy class \(\mathfrc{h}\), or equivalently
\(\grh(\mathfrc{h})\) (and hence also
implicitly on \(\{\pmb{\gamma
}_i\}_{i\in\grY_m}\)),
\item the cohomology class \([\nu ]\),
\ETitem
such that for
all \(r\geq r_0\) (\(r_0\) being greater than or equal to that in Lemma \ref{lem:CSD-est}),
\begin{equation}\label{eq:CSD-est}
|\scrE_{top}^{\mu_r}(X)(A_r, \Psi_r)|\leq \textsc{e} r,\quad\text{and}
\quad
|\scrE_{top}^{\mu_r, \hat{\grp}}(X)(A_r, \Psi_r)|\leq \textsc{e} \, r.
\end{equation}
\end{lemma}
The next lemma says more about the coefficient \(\textsc{e}\) above.
\begin{lemma}
(a) Let \(Y_i\) be a Morse end.
Suppose \(\{(B_r, \Phi _r)\}_r\) is a sequence of
Seiberg-Witten solutions \(\grF_{\mu _{i,r}}(B_r, \Phi _r)=0\) that
strongly t-converges to \(\pmb{\gamma }\). Then the
limit \[\lim_{r\to \infty}(r^{-1} \operatorname{CSD}_{\mu _{i,r}}([B_r, \Phi
_r]_c))=\lim_{r\to \infty}(r^{-1} \operatorname{CSD}_{\mu _{i,r}, \grq_i}([B_r, \Phi _r]_c))\]
exists, and
\begin{equation}\label{ineq:CSD}
\Big|\lim_{r\to \infty}(r^{-1} \operatorname{CSD}_{\mu _{i,r}}([B_r, \Phi
_r]_c))\Big|\leq \frac{\pi }{2}\sqrt{b^1(Y_i)}\, |[\nu _i]|.
\end{equation}
(b) Adopt the notations and assumptions of Theorem \ref{thm:l-conv}. Then the
limit
\[
\bbE:=\lim_{r\to \infty}(r^{-1} \scrE_{top}^{\mu _r, \hat{\grp}}(A_r, \Psi
_r))
\]
exists, and equals \(\lim_{r\to \infty}(r^{-1} \scrE_{top}^{\mu _r}(A_r, \Psi
_r))\). It is determined by the items listed in (\ref{dep:rel-htpy})
via the formula (\ref{eq:bbE}) below.
\end{lemma}
\noindent{\it Proof.}\enspace {\em (a):} Let \(\grc_r\) denote the gauge equivalence class of \((B_r,
\Phi _r)\).
Recall the map \(\jmath_c\colon\thinspace \pi \, (\mathop{\mathrm{Conn}}\nolimits /\scrG; [B_0], \Pi
\grc_r)\to H^1(Y_i;\bbR)\simeq H_2(Y_i; \bbR)\) from Section \ref{sec:SW-class} and let
\(\tilde{c}_0(\grc_r)\in \pi \, (\mathop{\mathrm{Conn}}\nolimits /\scrG; [B_0], \Pi \grc_r)\) be the element represented
by the path \(s\mapsto B_0+(1-\chi (s))(B_r-B_0)\) on \(\mathop{\mathrm{Conn}}\nolimits (Y_i)\). Then
\begin{equation}\label{ineq:CSD_r}
\begin{split}
r^{-1} \operatorname{CSD}_{\mu _{i,r}}([B_r, \Phi _r]_c)& =r^{-1} \operatorname{CSD}_{\mu
_{i,r}, \grq_i}([B_r, \Phi _r]_c)\\
& = r^{-1} \operatorname{CSD}_{w_{i,r}}(\grc_r)
-\frac{i}{8}\int_{Y_i} ([B_r]_c-B_0)\wedge\nu _i\\
& =r^{-1} \operatorname{CSD}_{w_{i,r}}(\grc_r)
-\frac{\pi }{2}[\nu _i]\cdot \jmath_c (\tilde{c}_0(\grc_r)).\\
\end{split}
\end{equation}
The assumption that \((B_r, \Phi _r)\) strongly t-converges implies that
the limit
\(\lim_{r\to\infty}\jmath_c (\tilde{c}_0(\grc_r))\) exists, and its norm is bounded
by \(\sqrt{b^1(Y_i)}\). Denote this limit by
\(
\jmath_h (\pmb{\gamma })\in
H_2(Y_i;\bbR)\).
Together with Lemma
\ref{lem:CSD-est}, this implies that for \(i\in \grY_m\),
\[
\lim_{r\to \infty}(r^{-1} \operatorname{CSD}_{\mu _{i,r}}([B_r, \Phi
_r]_c))=-\frac{\pi }{2}[\nu _i]\cdot \jmath_h (\pmb{\gamma }),
\]
and hence Assertion (a) of the lemma.
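Explicitly, (\ref{ineq:CSD}) follows from this limit, the bound \(\|\jmath_h (\pmb{\gamma })\|\leq \sqrt{b^1(Y_i)}\) noted above, and the Cauchy-Schwarz inequality:
\[
\Big|\lim_{r\to \infty}\big(r^{-1} \operatorname{CSD}_{\mu _{i,r}}([B_r, \Phi _r]_c)\big)\Big|
=\frac{\pi }{2}\,\big|[\nu _i]\cdot \jmath_h (\pmb{\gamma })\big|
\leq \frac{\pi }{2}\,\|\jmath_h (\pmb{\gamma })\|\, |[\nu _i]|
\leq \frac{\pi }{2}\sqrt{b^1(Y_i)}\, |[\nu _i]|.
\]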
{\em (b):} By (\ref{eq:E-top-h}),
\[
\begin{split}
r^{-1} \scrE_{top}^{\mu _r}(A_r, \Psi
_r)& =\frac{1}{4r}\int_{X}F_{A_0}\wedge
F_{A_0}+ \frac{i}{4r}\int_{X}F_{A_0}\wedge\mu_r\\
& \qquad -2r^{-1}\sum_{i\in \grY}\operatorname{CSD}_{\mu_{i,r}}^{Y_i}([(B_{i,r},
\Phi_{i,r})]_c)+i^*[\nu ]\cdot h_{A_0}(\op{\mathfrc{h}})/2;\\
r^{-1} \scrE_{top}^{\mu _r, \hat{\grp}}(A_r, \Psi
_r)& =\frac{1}{4r}\int_{X}F_{A_0}\wedge
F_{A_0}+ \frac{i}{4r}\int_{X}F_{A_0}\wedge\mu_r\\
& \qquad -2r^{-1}\sum_{i\in \grY}\operatorname{CSD}_{\mu_{i,r}, \grq_i}^{Y_i}([(B_{i,r},
\Phi_{i,r})]_c)+i^*[\nu ]\cdot h_{A_0}(\op{\mathfrc{h}})/2,
\end{split}
\]
where \(h_{A_0}\) is the map defined in Remark \ref{rem:based-rel-htpy}.
Combined with Assertion (a) of the lemma and Lemma
\ref{lem:CSD-est}, this gives
\begin{equation}\label{eq:bbE}
\begin{split}
\bbE & :=\lim_{r\to \infty}\big(r^{-1} \scrE_{top}^{\mu _r, \hat{\grp}}(A_r, \Psi
_r)\big)=\lim_{r\to \infty}\big(r^{-1} \scrE_{top}^{\mu _r}(A_r, \Psi
_r)\big)\\
& = \frac{i}{4}\int_{X}F_{A_0}\wedge\nu
+\pi \sum_{i\in \grY_m}[\nu _i]\cdot \jmath_h (\pmb{\gamma }_i)+ i^*[\nu ]\cdot h_{A_0}(\op{\mathfrc{h}})/2.
\end{split}
\end{equation}
\hfill$\Box$\medbreak
\begin{remarks}\label{rmk:E-dep}
(a) By Theorem \ref{thm:strong-t} and the fact that there are finitely
many t-orbits with a fixed \(\mathop{\mathrm{Spin}}\nolimits^c\) structure, the bound
(\ref{ineq:CSD}) holds for any sequence of solutions to the 3-dimensional
Seiberg-Witten equations \(\grF_{\mu _{i,r}}(B_r, \Phi _r)=0\).
(b) By Item (b) of the previous lemma, the constant \(\textsc{e}\) in Lemma
\ref{lem:E_topX} may be chosen to depend only on
(\ref{dep:rel-htpy}) (though \(r_0\) may still depend on \(\|\nu
\|_{C^1}\) and \(\varsigma_w\)).
\end{remarks}
To obtain the first group of results mentioned in the beginning of the
present section, we need a generalization of Lemma \ref{lem:E_topX} to general
\(X_\bullet\subset X\), with counterparts of the coefficient \(\textsc{e}\) in
(\ref{eq:CSD-est}) independent of both \(r\) and \(X_\bullet\). This
is much more difficult to achieve, mainly due to the fact that the
perturbation form \(\nu \) is not translation-invariant on the ends of
\(X\); cf. the second paragraph of Section \ref{sec:literature}. In
fact, instead of bounding \(\scrE_{top}^{\mu _r}\) and
\(\scrE_{top}^{\mu _r, \hat{\grp}}\), we find it more convenient to
work with a modified version of them, which agree with them in the
case when \(\nu \) is translation-invariant on the ends.
First, on each
\(Y_{i:s}\subset X-X_c\), let \(\nu _+=\nu_+(s)\), \(\mu _+=\mu_+(s)\) respectively
denote the \(s\)-dependent closed 2-forms on \(Y_i\):
\[
\nu _+(s):=2\nu ^+|_{Y_{i:s}}; \quad \mu _+(s)=2(\mu _r^+)|_{Y_{i:s}}=r\nu _++w_{i,r}.
\]
Modifying (\ref{eq:E-top-q}), we set
\begin{equation}\label{def:E'}
\begin{split}
\scrE_{top}^{' \mu_r}(X_{\bullet})(A, \Psi) & :=
\scrE_{top}^{\mu_r}(X_\bullet ) (A, \Psi)+\frac{i}{4}\int_{\partial \overline{X_\bullet}
}(A-A_0)\wedge (*_4\mu_r)\\
&=\frac{1}{4}\int_{X_{\bullet}}F_{A_0}\wedge
F_{A_0}+ \frac{i}{4}\int_{X_{\bullet}}F_{A_0}\wedge\mu_r -2\operatorname{CSD}_{\mu_+}^{\partial \overline{X_\bullet }}(B, \Phi).\\
\end{split}
\end{equation}
Define \(\scrE_{top}^{' \mu_r,\hat{\grp}}(X_{\bullet})\) similarly by
replacing the term \(\scrE_{top}^{\mu_r}(X_\bullet )\) above with
\(\scrE_{top}^{\mu_r, \hat{\grp}}(X_\bullet )\).
Note that \[
\scrE_{top}^{'\mu_r}(X)= \scrE_{top}^{\mu_r}(X);\qquad
\scrE_{top}^{'\mu_r,\hat{\grp}}(X)= \scrE_{top}^{\mu_r,
\hat{\grp}}(X).\]
Therefore the bounds in (\ref{eq:CSD-est}) hold
for \(\scrE_{top}^{'\mu_r}(X)\),
\(\scrE_{top}^{'\mu_r,\hat{\grp}}(X)\) as well.
A preliminary version of the aforementioned generalization of Lemma
\ref{lem:E_topX} is stated in terms of \(\scrE_{top}^{'\mu _r}\) and
\(\scrE_{top}^{'\mu _r, \hat{\grp}}\) as follows:
\begin{lemma}\label{lem:Etop-bdd1}
Let \((A, \Psi )=(A_r, \Psi _r)\) be an admissible solution to the Seiberg-Witten
equation \(\grS_{\mu _r,
\hat{\grp}}(A_r, \Psi _r)=0\) on \(X\) that satisfies in addition
\begin{equation}\label{assume:EtopX-ubdd}
\text{either (a)} \,\, \scrE_{top}^{\mu_r}(X)(A_r,
\Psi_r)\leq \textsc{e} \, r\quad \text{ or (b)}\, \, \scrE_{top}^{\mu_r, \hat{\grp}}(X)(A_r,
\Psi_r)\leq \textsc{e} \, r
\end{equation}
for
a positive constant \(\textsc{e}\) independent of \(r\) and \((A, \Psi
)\). (In particular, according to Lemma \ref{lem:E_topX} and Remarks \ref{rmk:E-dep}, this
holds for those \((A_r, \Psi _r)\) from the statement of Theorem
\ref{thm:l-conv}, with \(\textsc{e}\) determined by (\ref{dep:rel-htpy}) via
(\ref{eq:bbE}).)
Then there exist constants
\(\zeta>0\), \(\zeta'> 0\), a function \(\pmb{\hat{\mathop{\mathrm{l}}}}\colon\thinspace \grY\to \bbR^+\) depending only
on \(\nu \), and an \(r_0>8\) depending only on the constants
\(\textsc{e}\), \(\grl'_i\), \(i\in \grY_v\),
and \(\nu \), such that for all \(r\geq r_0\) and
\(\bfl_r:=(\ln r)\, \pmb{\hat{\mathop{\mathrm{l}}}}\),
\begin{equation}\label{eq:E_top-M1}
\begin{split}
\text{(i) } & \begin{cases}
-\zeta 'r\ln r\leq \scrE_{top}^{' \mu_r }(X_\bullet)(A_r, \Psi_r)\leq
r(\textsc{e}+\zeta) &\text{assuming (\ref{assume:EtopX-ubdd}) (a)}\\
-\zeta '_pr\ln r\leq \scrE_{top}^{' \mu_r, \hat{\grp}}(X_\bullet)(A_r, \Psi_r)\leq
r(\textsc{e}+\zeta_p) &\text{assuming (\ref{assume:EtopX-ubdd}) (b)}
\end{cases}
\quad \text{\(\forall X_\bullet\supset X_{\bfl_r}\)};\\
\text{(ii) } & -\zeta '_5 r\leq \scrE_{top}^{' \mu_r} (X_\bullet)(A_r, \Psi_r) =\scrE_{top}^{' \mu_r, \hat{\grp}}(X_\bullet)(A_r, \Psi_r)\leq r(\textsc{e}+ \zeta _e\ln r)
\quad \text{\(\forall X_\bullet\subset X-\mathring{X}_{\bfl_r}\)}.
\end{split}
\end{equation}
The positive constants
\(\zeta \), \(\zeta '\), \(\zeta '_5\), \(\zeta _e\) above depend only on the metric, the \(\mathop{\mathrm{Spin}}\nolimits^c\)
structure, \(\nu \), \(\varsigma _w\), \(B_0\), and the constants \(\textsl{z}_i\)
in Lemma \ref{lem:F-L_1}. \(\zeta _p\), \(\zeta '_p\) are positive
constants that depend only on the preceding
list of parameters, together with the constant \(\textsl{z}_p\) in
Assumption \ref{assume}.
In particular, these constants as well as \(\pmb{\hat{\mathop{\mathrm{l}}}}\) and \(r_0\) above
are independent of \(X_\bullet\) and \(r\).
\end{lemma}
A proof of the preceding lemma will be given in Section
\ref{sec:pf-E_top-bdd0}. The undesirable factors of \(\ln r\) in
(\ref{eq:E_top-M1}) appear due to the previously mentioned trouble
with non-translation-invariant \(\nu \) at the ends
(cf. (\ref{bdd:CSD-lower}) and remarks that follow). They
will eventually be removed in Section \ref{sec:improved}. (See Proposition
\ref{prop:integral-est2}.)
Looking ahead, the definitions and lemmas above are relevant to the first group of
results mentioned in the beginning of this section in the following
manner: The typical first step towards such results is to use the
relation between analytical and topological energies to bound \(L^2(X_\bullet)\)-norms of
gauge invariant terms such as \(F_A\) and \(\nabla_A\Psi \) in terms
of the topological energy over \(X_\bullet\). (In our context, this appears as Lemmas \ref{lem:E-top-bdd0} and
\ref{E-top-bdd-ends} below.) Upper bounds on the topological energy over \(X_\bullet\), such
as those given in Lemma \ref{lem:Etop-bdd1} and Proposition
\ref{prop:integral-est2} in our context, then give rise to upper
bounds on the aforementioned squares of \(L^2(X_\bullet)\)-norms of
gauge invariant terms. The latter are used to obtain \(L^2_{1,
A}(X_\bullet)\) bounds on Seiberg-Witten solutions. (This is done in
Section \ref{sec:L^_1-bdd} in our context.)
In existing literature, \((X, \nu )\) is cylindrical on the ends, and upper
bounds on the topological energy over \(X_\bullet\) follow directly
from a corresponding bound over \(X\). (The analog of Lemma
\ref{lem:E_topX} in our context.) This is due to the fact that the \(\operatorname{CSD}\)
functional is decreasing on a Floer trajectory, which can be
interpreted as the gradient flow line of the \(\operatorname{CSD}\)
functional. In the more general
setting of ours, local upper bounds for \(\scrE_{top}^\mu\) likewise follow from
a lower bound on \(\scrE_{top}^\mu\) for those \(X_\bullet\) that are
contained in the cylindrical ends of \(X\); see Section
\ref{sec:E_top-lower}. This lower bound makes use of an
interpretation of the Floer trajectory as a gradient flow line of a
{\em time-dependent} \(\operatorname{CSD}\)-functional; see Section
\ref{sec:SW-grad_flow} below.
\subsection{ \(L^2_{local}\) bounds on gauge invariant terms in terms of
(modified) topological energies
}\label{sec:4.2}
Let \((A_r, \Psi_r)=(A, \Psi)\) be an admissible solution to the Seiberg-Witten
equation \(\grS_{\mu _r, \hat{\grp}}(A_r, \Psi _r)=0\) with
\(Y_i\)-end limit \((B_i, \Phi _i)\),
and let \(A_0\), \(\{B_{0,i}\}_i\) be the reference
connections fixed previously.
Recall the definition of \(|X_\bullet|\) from Section
\ref{sec:convention} and let
\[
|X_\bullet|_1:=\min \{ 1, |X_\bullet|\}.
\]
Recall also the definitions of \(X^{'a}\), \(X''\) from Definition
\ref{def:adm}, and note that \(\hat{\grp}(A,
\Psi)\) is supported on the vanishing ends; in fact, on
\(X-X''\supset X-X^{'a}\). Let
\[
X_{\bullet, v}:=X_\bullet \cap (X-X'')\qquad \text{and}\qquad X_{\bullet
,m}:=X_\bullet\cap X''.
\]
In this subsection, we show that:
\begin{lemma}\label{lem:E-top-bdd0}
Let \(X_\bullet\subset X\) be compact
and let \(r\geq 1\).
Then there exist
positive constants \(\zeta _0\), \(\zeta '\), \(\zeta '_p\), \(\zeta ''\), \(\zeta
''_p\) that depend only on
\BTitem\label{parameters}
\item the metric,
\item the \(\mathop{\mathrm{Spin}}\nolimits^c\)-structure on \(X\),
\item the cohomology class of \(\nu \),
\item the constant \(\varsigma _w\) in Assumption \ref{assume},
\item the constant \(\textsl{z}_\grp\) in Assumption \ref{assume},
\ETitem
such that the following hold:
\begin{equation}\label{eq:E-top-bdd0}
\begin{split}
\text{(a)} & \quad
\frac{1}{8}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{X_{\bullet}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\qquad \quad
\leq \scrE_{top}^{' \mu_r }(X_{\bullet})(A, \Psi)+r\, (\zeta _0|
X'_{\bullet,m}|+\zeta '|X_{\bullet}|_1)+\zeta''|X_{\bullet}|;\\
\text{(b)} &\quad \frac{1}{16}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\frac{1}{4}\int_{X_{\bullet}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\qquad \quad
\leq \scrE_{top}^{' \mu_r, \hat{\grp}}(X_{\bullet})(A, \Psi)+r\, (\zeta _0|
X'_{\bullet,m}|+\zeta '_p|X_{\bullet}|_1)+\zeta''_p |X_{\bullet}|.
\end{split}
\end{equation}
\end{lemma}
\noindent{\it Proof.}\enspace
To begin, combine
(\ref{eq:E-top=anal}) with the Seiberg-Witten equation \(\grS_{\mu_r,
\hat{\grp}}(A_r, \Psi_r)=0\) to get:
\begin{equation}\label{X_c-energy}
\begin{split}
& \frac{1}{4}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\int_{X_{\bullet}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\quad
=-\int_{X_{\bullet}}\frac{R_g}{4}|\Psi|^2+\frac{i}{4}\int_{X_{\bullet}}F_A\wedge*_4\mu_r+\scrE_{top}^{\mu_r}
(X_{\bullet})(A,\Psi)+\|\hat{\grp}(A, \Psi)\|_{L^2(X_{\bullet})}^2\\
& \quad =-\int_{X_{\bullet}}\frac{R_g}{4}|\Psi|^2+\|\hat{\grp}(A,
\Psi)\|_{L^2(X_{\bullet})}^2+\frac{i}{4}\int_{X_{\bullet}}F_A\wedge
*_4w_r\\
&\qquad \quad +\frac{ir}{4}\int_{X_{\bullet
}}F_{A_0}\wedge*_4\nu
+\scrE_{top}^{' \mu_r }
(X_{\bullet})(A, \Psi).
\end{split}
\end{equation}
The terms in the third and fourth line above are bounded in Steps
(1)-(3) below.
(1) The terms \(-\int_{X_{\bullet}}\frac{R_g}{4}|\Psi|^2\), \( \|\hat{\grp}(A,
\Psi)\|_{L^2(X_{\bullet})}^2\) are bounded by the same general trick.
Observe that for any real
valued function \(\grf\),
\[\begin{split}
0& \leq
\Big|\frac{i}{4}\rho(\mu^+_r)-\big(1-\frac{\grf}{2|\Psi|^2}\big)(\Psi\Psi^*)_0\Big|^2\\
& \leq\Big|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big|^2+\frac{\grf^2}{4}-\grf|\Psi|^2+\frac{|\grf|}{8}|\mu_r|
\end{split}\]
wherever \(\Psi\neq 0\).
So, for general \(\Psi \),
\begin{equation}\label{f-trick}
\grf|\Psi|^2\leq \Big|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big|^2+\frac{\grf^2}{4}+\frac{|\grf|}{8}|\mu_r|.
\end{equation}
Taking \(\grf=\pm R_g\) in (\ref{f-trick}) then gives us
\begin{equation}\label{bdd(1)}
\begin{split}
\Big|-\int_{X_{\bullet}}\frac{R_g}{4}|\Psi|^2\Big|& \leq
\frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(X_\bullet)}+\int_{X_{\bullet}}\frac{R_g^2}{16}+\int_{X_{\bullet}}\frac{|R_g|}{32}|\mu_r|\\
& \leq
\frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(X_\bullet)}+\zeta
_g|X_\bullet|+\zeta _h \, r(|X_{\bullet,m}|+|X_{\bullet}|_1).
\end{split}
\end{equation}
In the above, \(\zeta _g\), \(\zeta _h\) are positive constants
depending only on the metric, \(\varsigma_w\), and \(\nu \). To go from the
first line to the second line, we also made use
of the well-known fact that \(\nu \) exponentially decays to \(\nu _i\) on the
\(Y_i\)-end for each \(i\in \grY\) (cf. (\ref{eq:xi-exp})).
Meanwhile by (\ref{eq:q-bdd}),
\[\begin{split}
& \|\hat{\grp}(A, \Psi )\|_{L^2(X_\bullet)}^2 \leq z\big( \|
\Psi\|_{L^2(X_{\bullet, v})}^2+|X_{\bullet, v}|\big),
\end{split}\]
where \(z\) is a positive constant depending only on the constant
\(\textsl{z}_\grp\) and the metric on the vanishing ends. Taking
\(\grf=4z\) in (\ref{f-trick}) then gives us
\begin{equation}\label{bdd(1')}\begin{split}
\|\hat{\grp}(A, \Psi )\|_{L^2(X_\bullet)}^2 & \leq
\frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(X_{\bullet,
v})}+z_1|X_{\bullet, v}|_1+ z_2 r\int_{X_{\bullet, v}} |\nu |\\
& \leq \frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(X_{\bullet,
v})}+z_1|X_{\bullet, v}|_1+ z_3 r|X_{\bullet,v}|_1,
\end{split}
\end{equation}
noting that \(|\nu |\) decays exponentially to 0 on vanishing
ends. (For a more precise statement, see (\ref{eq:xi-exp}).) In
the above, the \(z_i\) are positive constants that depend only on \(\nu\),
the metrics, and the constants \(\varsigma _w\), \(\textsl{z}_\grp \).
(2) The term \(\frac{i}{4}\int_{X_{\bullet}}F_A\wedge *_4w_r \) in
(\ref{X_c-energy}) is bounded as follows.
\begin{equation}\label{bdd(2)}
\frac{i}{4}\int_{X_{\bullet}}F_A\wedge *_4w_r\leq
\frac{1}{16}\|F_A\|_{L^2(X_\bullet)}^2+\|w_r\|_{L^2(X_\bullet)}^2\leq \frac{1}{16}\|F_A\|_{L^2(X_\bullet)}^2+z_4|X_\bullet|,
\end{equation}
where \(z_4\) depends only on the constant \(\varsigma _w\).
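For orientation, the first inequality in (\ref{bdd(2)}) is an instance of Cauchy-Schwarz followed by Young's inequality \(ab\leq \frac{a^2}{4}+b^2\):
\[
\frac{i}{4}\int_{X_{\bullet}}F_A\wedge *_4w_r\leq
\frac{1}{4}\|F_A\|_{L^2(X_\bullet)}\|w_r\|_{L^2(X_\bullet)}\leq
\frac{1}{16}\|F_A\|_{L^2(X_\bullet)}^2+\frac{1}{4}\|w_r\|_{L^2(X_\bullet)}^2,
\]
which is slightly stronger than what is needed; the final bound in (\ref{bdd(2)}) then follows from the pointwise bound on \(w_r\) provided by the constant \(\varsigma _w\).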
(3) The term \(\frac{ir}{4}\int_{X_{\bullet}}F_{A_0}\wedge*_4\nu\) in
(\ref{X_c-energy}) is bounded as follows. Recall that by assumption,
\(F_{A_0}\) is the pull-back of \(F_{B_{0, i}}\) on the ends
\(\hat{Y}_i\), and that \(\wp_i=0\) for vanishing ends. Thus,
\begin{equation}\label{bdd(3)}
\begin{split}
& \frac{ir}{4}\int_{X_{\bullet}}F_{A_0}\wedge*_4\nu \\
& \qquad =\frac{ir}{4}\int_{X_{\bullet}\cap
X_c}F_{A_0}\wedge*_4\nu+\sum_{i\in \grY}
r\wp_i|X_{\bullet}\cap\hat{Y}_i|+r\sum_{i\in \grY}
\int_{X_{\bullet}\cap\hat{Y}_i}F_{A_0}\wedge*_4(\nu-\nu _i)\\
& \qquad \leq \frac{ir}{4}\int_{X_{\bullet}\cap
X_c}F_{A_0}\wedge*_4\nu+\sum_{i\in \grY_m}
r\wp_i|X_{\bullet}\cap\hat{Y}_i|+r\, \zeta _5 \sum_{i\in \grY}|X_{\bullet}\cap\hat{Y}_i|_1\\
& \qquad \leq r\, (z_5 |X_{\bullet, m}|+\zeta '_5).
\end{split}
\end{equation}
Here, \(\zeta _5\) and \(\zeta _5'\) are positive constants depending only on the choice of
\(A_0\) and \(\nu \), while \(z_5\) is a positive constant depending only on the choice of
\(A_0\), the \(\mathop{\mathrm{Spin}}\nolimits^c\)-structure, and \(\nu \). In estimating the
last term in the second line above, we also used the exponential decay
of \(\nu \) on the ends of \(X\). (Cf. (\ref{eq:xi-exp}) for a precise statement.)
Combining the bounds (\ref{bdd(1)}), (\ref{bdd(1')}), (\ref{bdd(2)}),
(\ref{bdd(3)}) with (\ref{X_c-energy}) and re-arranging, we arrive at
item (a) of (\ref{eq:E-top-bdd0}). The inequality in item (b)
follows from item (a) when \(X_\bullet\subset X^{'a}\). By the additivity of
\(\scrE_{top}^{' \mu_r ,\hat{\grp}}\), it remains to verify the
inequality on the vanishing ends. The proof in this case is similar and is deferred
to the next subsection.
\hfill$\Box$\medbreak
Recall the definition of \(\wp_i\) from Lemma \ref{lem:F-L_1}. For each \(i\in \grY\), set
\[
\wp_i^+:=
\begin{cases}
\wp_i+\frac{\pi }{2}\|*\nu _i\|_T& \text{when \(\hat{Y}_i\) is a Morse
end};\\
0 & \text{when \(\hat{Y}_i\) is a vanishing end.}
\end{cases}
\]
Note that according to Proposition \ref{prop:t-conv3d} and Remarks
\ref{rmk:thurston},
\(\wp_i^+\geq \frac{\pi }{2}(\|*\nu _i\|_T+\zeta _{*\nu _i})\) in our
context, with the right hand side vanishing in many cases, such as in the
context of \cite{LT}.
When \(X_\bullet=\hat{Y}_{i, [l, L]}\), the bound
(\ref{eq:E-top-bdd0}) (a) can be refined as follows.
\begin{lemma}\label{E-top-bdd-ends}
Let \((A, \Psi )=(A_r, \Psi _r)\) be as in Lemma
\ref{lem:E-top-bdd0}.
When \(X_\bullet=\hat{Y}_{i, [l, L]}\), we have:
\begin{equation}\label{eq:E-top-bdd2}
\begin{split}
& \frac{1}{8}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{X_{\bullet}}\Big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\Big|^2\\
&\qquad \quad
\leq \scrE_{top}^{' \mu_r }(X_{\bullet})(A, \Psi)+r\, ( \wp_i^++z_i)
|X_{\bullet}|+rz'_i|X_\bullet|_1+\zeta ''|X_\bullet|,
\end{split}
\end{equation}
where \(z_i\) is a non-negative constant depending only on the metric
on \(Y_i\), and \(z'_i\) is a positive constant that only depends on the
metric on \(X\), the cohomology class of \(\nu \), and the constant \(\varsigma _w\) in Assumption
\ref{assume}. (In particular, \(z_i\) and \(z'_i\) are independent of \(r\) and the \(\mathop{\mathrm{Spin}}\nolimits^c\)
structure.) Moreover, the constant \(z_i\) has the following
properties: \(z_i=0=\wp_i^++z_i\) when \(\hat{Y}_i\) is a vanishing
end. When \(\hat{Y}_i\) is a Morse end and \(Y_i\) is irreducible and atoroidal, then for any \(\epsilon >0\), there exists a metric
on \(Y_i\) so that its corresponding \(z_i\) is less than \(\epsilon
\). Meanwhile, \(\zeta ''\) is a positive constant that depends only
on the metric and \(\varsigma_w\).
\end{lemma}
\noindent{\it Proof.}\enspace
To begin, recall \cite{KM:sc}'s
Theorem 2. Reformulated as \cite{BD}'s Theorem 5.4, it asserts that
when \(Y_i\) is irreducible and atoroidal,
\begin{equation}\label{eq:KM}
\|*\nu _i\|_T=\frac{1}{4\pi }\inf_h \{\|R_h\|_h \, \|\theta _h\|_h\},
\end{equation}
where \(h\) ranges through all Riemannian metrics on \(Y_i\). In the above, \(\|\cdot\|_h\)
denotes \(L^2\)-norm with respect to \(h\), and \(\theta _h\) denotes
the harmonic representative of the cohomology class \([*\nu _i]\) with
respect to the metric \(h\). \(R_h\) denotes the scalar curvature on
\(Y_i\) corresponding to \(h\). Recall also that \(\|*\nu
_i\|_T=\|[*\nu _i]\|_T\) is the Thurston semi-norm of \([*\nu _i]\in
H^1(Y_i;\bbR)\).
By assumption, over \(\hat{Y}_i\simeq [0, \infty)\times Y_i\), \(g\) is
the product metric of the standard affine metric on \(\bbR\) and a
metric \(g_i\) on \(Y_i\). Thus, \(R_g|_{Y_{i: s}}=R_{g_i}\) for any
\(s\geq 0\). Combined with the asymptotic behavior of
\(\nu \) (described explicitly in (\ref{eq:dnu}) below), (\ref{eq:KM}) implies that for any
\(s\geq 0\),
\[
\|R_g\|_{L^2(Y_{i: s})}\|\nu|_{Y_{i: s}}\|_{L^2(Y_{i: s})}\leq 4\pi \|*\nu _i\|_T+z_i+\grz_i(s),
\]
where \(z_i=z_{Y_i, \nu _i}:=\|R_{g_i}\|_{L^2(Y_i)}\|\nu _i\|_{L^2(Y_i)}-4\pi \|*\nu
_i\|_T\). According to (\ref{eq:KM}), \(z_i\) has the properties
asserted in the statement of the lemma.
\( \grz_i(s)\) is
a positive function on \([0, \infty)\) that depends only on \(\nu \)
and exponentially decays to \(0\) as \(s\to \infty\). (Explicitly, it
is a multiple of the right hand side of (\ref{eq:xi-exp}).)
Use the preceding inequality to replace (\ref{bdd(1)}) in this context by:
\begin{equation}\label{bdd(1-)}
\begin{split}
& \Big|-\int_{X_{\bullet}}\frac{R_g}{4}|\Psi|^2\Big|\\
& \quad \leq
\frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(X_\bullet)}+\int_{X_{\bullet}}\frac{R_g^2}{16}+\int_{X_{\bullet}}\frac{|R_g|}{32}|\mu_r|\\
& \quad \leq
\frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(X_\bullet)}+\zeta
_1|X_\bullet|+\frac{r}{32}\big(\int _l^Lds \|R_g\|_{L^2(Y_{i:
s})}\|\nu|_{Y_{i: s}}\|_{L^2(Y_{i: s})}\big)\\
& \quad \le
\frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(X_\bullet)}+\zeta
_1|X_\bullet|+ r\, \big( \frac{\pi }{2}\|*\nu
_i\|_T+z_i\big)|X_\bullet|+ r\zeta _2|X_\bullet|_1,
\end{split}
\end{equation}
where \(\zeta _1\), \(\zeta _2\) are positive constants depending only on \(R_g\)
and \(\varsigma_w\). (In particular, they are independent of \(r\) and
\(\grs_i\).)
Meanwhile, replace (\ref{bdd(3)}) in this context
by
\begin{equation}\label{bdd(3)'}
\frac{ir}{4}\int_{X_{\bullet}}F_{A_0}\wedge*_4\nu \leq
r\wp_i|X_{\bullet}|+r\zeta _5|X_\bullet|_1,
\end{equation}
\(\zeta _5\) being a positive constant depending only on \(\nu \).
Combining the bounds (\ref{bdd(1-)}), (\ref{bdd(2)}),
(\ref{bdd(3)'}) with (\ref{X_c-energy}) and rearranging, we arrive at
(\ref{eq:E-top-bdd2}).
\hfill$\Box$\medbreak
\begin{remarks}
Since both \(r\) and \(|X_\bullet|\) can be arbitrarily large,
applying (\ref{eq:E-top-bdd2}) to the case when \(X=\bbR\times Y\) and
\((A_r, \Psi _r)=(\hat{B}_r, \hat{\Phi }_r)\) where \((B_r, \Phi _r)\)
is a sequence of solutions to the 3-dimensional Seiberg-Witten
equation \(\grF_{\mu _r}(B_r, \Phi _r)=0\) from Theorem
\ref{thm:strong-t} implies that \(c_1(\grs)\cdot [*\nu ]+\|*\nu
\|_T\geq -\frac{2z_{Y, \nu }}{\pi }\) when the
sequence of \((B_r, \Phi _r)\) in Theorem \ref{thm:strong-t} exists.
This bound is comparable to \cite{KM}'s
Propositions 40.1.1 and 40.1.3, where certain non-existence results of
Seiberg-Witten solutions
are obtained under similar constraints on \(c_1(\grs)\). In
particular, in the case when \(b_1(Y)>0\), the
aforementioned propositions in \cite{KM} were used to prove that for irreducible
\(Y\), the dual
Thurston polytope is the convex hull of the ``Seiberg-Witten
basic classes'' (cf. \cite{KM:sc} Theorem 1 and \cite{KM} Theorem
41.5.2).
\end{remarks}
\subsection{Seiberg-Witten solutions as gradient flow lines of
time-dependent CSD functionals}\label{sec:SW-grad_flow}
We begin with some preliminary observations on the behavior of the form \(\nu \)
on the ends, and introduce some notations along the way. In
particular, the previously mentioned exponential decay of
\(\nu \) on the ends of \(X\) is made precise.
Consider an end \(\hat{Y}_i\)
of \(X\).
Write the harmonic 2-form
\(\nu\) as \[\nu=\underline{\nu}+ds \wedge *_Y\grv \ \quad \text{on
\(\hat{Y}_i\),}
\]
where \(\underline{\nu}\), \(\grv\) are \(s\)-dependent 2-forms on \(Y:=Y_i\),
\(*_Y\) denotes the 3-dimensional Hodge dual on \(Y_i\), and \(s=\mathfrc{s}_i\). The harmonicity of \(\nu\) implies
that both \(\underline{\nu}\) and \(\grv\) are closed 2-forms.
Moreover, \(\grv\) and \(\underline{\nu}-\nu_i\) are \(s\)-dependent
exact 2-forms on \(Y_{i:s}=\mathfrc{s}_i^{-1}(s)\), and satisfy
\begin{equation}\label{eq:dnu}
\partial_s(\underline{\nu}-\nu_i)=-d_Y*_Y\grv\quad
\text{and}\quad \partial_s\grv=-d_Y*_Y(\underline{\nu}-\nu_i).
\end{equation}
Here, \(d_Y\) denotes the
exterior derivative on \(Y_i\).
Let
\[
\xi_\nu:=\underline{\nu}-\nu_i+\grv;\quad
\xi_\nu':=\underline{\nu}-\nu_i-\grv,
\]
so that in this notation, \(\nu _+=\nu _i+\xi_\nu \) on
\(\hat{Y}_i\). \(\xi_\nu\) and \(\xi_\nu '\) are exact
2-forms satisfying
\begin{equation}\label{DE:xi}
\partial _s\xi_\nu=-d_Y*_Y\xi_\nu; \quad \partial
_s\xi_\nu'=d_Y*_Y\xi_\nu'.
\end{equation}
As \(d_Y*_Y\) is a self-adjoint operator on
the space of \(L^2\) exact 2-forms and \(Y_i\) is closed, it has
discrete real eigenvalues \(\{\kappa_j^{(i)}\}_j\) with
\(\min_j|\kappa_j^{(i)}|=:\kappa_i>0\). Thus, there exists a constant \(\zeta_i>0\)
such that
\begin{equation}\label{eq:xi-exp}
\begin{split}
& \|\xi_\nu\|_{C^k(Y_{i:s})}+\|\underline{\nu}-\nu_i\|_{C^k(Y_{i:s})}+\|\grv\|_{C^k(Y_{i:s})}\leq
\zeta_ie^{-\kappa _is}; \quad \text{and via (\ref{DE:xi}), }\\
& \|\nu-\pi _2^*\nu_i\|_{C^k(Y_{i:s})}\leq \, \zeta_ie^{-\kappa _is}.
\end{split}
\end{equation}
The (\(s\)-dependent) form \(\nu _+\)
on \(Y_i\), introduced previously in Section \ref{sec:top-energy}, is
expressed in the notation of this subsection as:
\[
\nu_+ =\underline{\nu}+\grv =\nu _i+\xi _\nu .
\]
Returning now to the task of interpreting Seiberg-Witten solutions
on the ends of \(X\), we choose to work in a gauge such that \((A,
\Psi)|_{\hat{Y}_i}\) is in the temporal gauge over each end \(\hat{Y}_i\)
of \(X\). Namely, over the ends \(X-X_c\) we can write
\begin{equation}\label{temp-gauge}
(A, \Psi)=(\partial_s+B,
\Phi),
\end{equation}
where \((B(s), \Phi(s))\), \(s\geq 0\), is a path in \(\mathop{\mathrm{Conn}}\nolimits (Y_i)\times
\Gamma (\bbS_{Y_i})\). Recall that the reference connection \(A_0\) is
chosen such that
over the ends of \(X\), \(A_0=\partial_s +B_0\), where
\(B_0=B_{0,i}\) on the \(Y_i\)-end.
Restricting to an end \(\hat{Y}_i\) of \(X\),
the 4-dimensional
Seiberg-Witten equation \(\grS_{\mu_r, \hat{\grp}}(A, \Psi)=0\)
may be re-expressed in terms of \((B(s), \Phi(s))\) over \(\hat{Y}_i\)
as
\begin{equation}\label{t-dep-flow}
\big(\frac{1}{2}\partial_sB,
\partial _s\Phi\big)+\mathfrak{F}_{\mu_+, \hat{\grp}}(B, \Phi)=0,
\end{equation}
where \(\hat{\grp}=\hat{\grp}(s)=\chi_i(s)\grq_i+\lambda_i(s)\grp'_i\) is regarded as a path of tame
perturbations as in (\ref{eq:hatp}).
Square both sides of the previous equation and use integration by parts plus
(\ref{eq:dnu})
to get:
\begin{equation}\label{eq:flow-energy}\begin{split}
& \|\partial_sB\|_{L^2(\hat{Y}_{[l, L]})}^2 +4\|\partial_s\Phi\|^2_{L^2(\hat{Y}_{[l, L]})}+4\|\mathfrak{F}_{\mu_+,
\hat{\grp}} (B, \Phi)\|_{L^2(\hat{Y}_{[l,
L]})}^2\\%& \qquad +i\int_{\hat{Y}_{[l,L]}}ds \, \grw\wedge(\partial_s B+*_3(F_B-F_{B_0}))\\
& \quad\quad = 8\operatorname{CSD}_{\mu_+(l), \hat{\grp}(l)}^{Y}(B,
\Phi)|_{Y_l}-8\operatorname{CSD}_{\mu_+(L), \hat{\grp}(L)}^{Y}(B,
\Phi)|_{Y_L}\\
& \qquad\quad
-ir\int_{\hat{Y}_{[l,L]}}ds\,( *_Y\xi_\nu)\wedge(F_B-F_{B_0})
-8\int_{\hat{Y}_{[l,L]}}(\partial_sf_{\hat{\grp}(s)})(B, \Phi)
\,ds
\end{split}
\end{equation}
for any interval \([l, L]\subset [0, \infty]\). (Here \(Y\) stands for
any of the ending 3-manifolds \(Y_i\).)
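Schematically, (\ref{eq:flow-energy}) arises as follows (with normalization constants suppressed): since \(\mathfrak{F}_{\mu_+, \hat{\grp}}\) is the formal \(L^2\)-gradient of \(\operatorname{CSD}_{\mu_+(s), \hat{\grp}(s)}^{Y}\) at fixed \(s\), the chain rule gives
\[
\frac{d}{ds}\operatorname{CSD}_{\mu_+(s), \hat{\grp}(s)}^{Y}(B(s), \Phi(s))
=\big\langle \operatorname{grad}\operatorname{CSD}, (\partial_sB, \partial_s\Phi)\big\rangle_{L^2(Y)}
+\big(\partial_s\operatorname{CSD}_{\mu_+(s), \hat{\grp}(s)}^{Y}\big)(B, \Phi),
\]
where the last term accounts for the explicit \(s\)-dependence of the functional: by (\ref{eq:dnu}), the \(\mu_+\)-contribution produces the \(\xi_\nu\)-integral in (\ref{eq:flow-energy}), while the \(\hat{\grp}\)-contribution produces the \(\partial_sf_{\hat{\grp}(s)}\)-term. Substituting \((\frac{1}{2}\partial_sB, \partial_s\Phi)=-\mathfrak{F}_{\mu_+, \hat{\grp}}(B, \Phi)\) from (\ref{t-dep-flow}), integrating over \([l, L]\), and completing the square on the left-hand side then yields (\ref{eq:flow-energy}).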
In what follows we use the preceding formula in two ways: (i) In the
remainder of this subsection, we use the upper
bound on the right hand side of (\ref{eq:flow-energy}) to get an upper bound on the square terms
on the left hand side; in particular, this leads to the completion of
the proof of
(\ref{eq:E-top-bdd0}) (b). (ii) In the subsequent subsection, we use the non-negativity of the left hand
side to obtain a lower bound of the first two terms on the right hand
side, and hence a lower
bound on \(\scrE^{' \mu_r, \hat{\grp}}_{top}(\hat{Y}_{[l,L]}) (A,\Psi)\),
generalizing the positivity results in the cylindrical case mentioned
towards the end of Section \ref{sec:top-energy}.
\paragraph{\it Verifying the remaining case of (\ref{eq:E-top-bdd0}) (b). }
As observed in the previous subsection, it remains to verify the claimed inequality for
the case when
\(X_\bullet \) lies in vanishing ends.
Nevertheless, assume for the moment that \(X_\bullet=\hat{Y}_{[l,L]}\), where
\(\hat{Y}=\hat{Y}_i\) is a general end of \(X\) (either
vanishing or Morse).
To begin, recall from Assumption \ref{assume} (3) and use the
3-dimensional Weitzenb\"ock formula (\ref{eq:Weit3d}) to get
\[
\begin{split}
& \frac{1}{4}\|\partial_sB\|_{L^2(\hat{Y}_{[l, L]})}^2
+\|\partial_s\Phi\|^2_{L^2(\hat{Y}_{[l, L]})}+\|\mathfrak{F}_{\mu_+} (B, \Phi)\|_{L^2(\hat{Y}_{[l,
L]})}^2\\
&\quad
=\frac{1}{4}\int_{\hat{Y}_{[l,L]}}|F_A|^2+\int_{\hat{Y}_{[l,L]}}|\nabla_A\Psi|^2+\int_{\hat{Y}_{[l,L]}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
& \qquad +\int_{\hat{Y}_{[l,L]}}\frac{R_g}{4}|\Psi|^2-\frac{i}{4}\int_{\hat{Y}_{[l,L]}}F_A\wedge
*_4w_r-\frac{ir}{4}\int_{\hat{Y}_{[l,L]}} ds \, F_B\wedge
(*_Y\nu_+).\\
\end{split}
\]
Inserting both the preceding formula and (\ref{eq:flow-energy}) into
the following inequality
\[\begin{split}
& \frac{1}{4}\|\partial_sB\|_{L^2(\hat{Y}_{[l, L]})}^2
+\|\partial_s\Phi\|^2_{L^2(\hat{Y}_{[l, L]})}+\|\mathfrak{F}_{\mu_+} (B, \Phi)\|_{L^2(\hat{Y}_{[l,
L]})}^2\\
& \qquad \leq
\frac{1}{4}\|\partial_sB\|_{L^2(\hat{Y}_{[l, L]})}^2 +\|\partial_s\Phi\|^2_{L^2(\hat{Y}_{[l, L]})}+\|\mathfrak{F}_{\mu_+,
\hat{\grp}} (B, \Phi)\|_{L^2(\hat{Y}_{[l,
L]})}^2+ \| \hat{\grp} (A, \Psi )\|_{L^2(\hat{Y}_{[l,
L]})}^2,
\end{split}
\]
we have
\begin{equation}\label{bdd:sq}
\begin{split}
&
\frac{1}{4}\int_{\hat{Y}_{[l,L]}}|F_A|^2+\int_{\hat{Y}_{[l,L]}}|\nabla_A\Psi|^2+\int_{\hat{Y}_{[l,L]}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
& \quad \leq -2 \operatorname{CSD}_{\mu _+, \hat{\grp}}^{\partial
\hat{Y}_{[l,L]}}(B, \Phi ) +r\wp_i (L-l)
-2\int_{\hat{Y}_{[l,L]}}(\partial_s\, f_{\hat{\grp}(s)})(B, \Phi)+ \|
\hat{\grp} (A, \Psi)\|_{L^2(\hat{Y}_{[l,
L]})}^2\\
& \qquad -\int_{\hat{Y}_{[l,L]}}\frac{R_g}{4}|\Psi|^2+\frac{i}{4}\int_{\hat{Y}_{[l,L]}}F_A\wedge
*_4w_r+\frac{ir}{4}\int_{\hat{Y}_{[l,L]}}ds\,( *_Y\xi_\nu )\wedge F_{B_0}.
\end{split}
\end{equation}
Arguing as in (\ref{bdd(1-)}), (\ref{bdd(1')}), (\ref{bdd(2)}), (\ref{bdd(3)}) to bound the
fourth through last terms on the right hand side of the preceding
formula and rearranging, we get:
\begin{equation}\label{energy-v0}
\begin{split}
&
\frac{1}{8}\int_{\hat{Y}_{[l,L]}}|F_A|^2+\int_{\hat{Y}_{[l,L]}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{\hat{Y}_{[l,L]}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
& \quad \leq -2 \operatorname{CSD}_{\mu _+, \hat{\grp}}^{\partial
\hat{Y}_{[l,L]}}(B, \Phi ) +r (\wp_i^++z_i)|\hat{Y}_{[l,L]}|+\zeta
r|\hat{Y}_{[l,L]}|_1+\zeta '|\hat{Y}_{[l,L]}|\\
& \quad \quad -2\int_{\hat{Y}_{[l,L]}}(\partial_s\, f_{\hat{\grp}(s)})(B, \Phi) , \\
\end{split}
\end{equation}
where \(\zeta , \zeta '\) are positive constants that depend only on
the choice of reference connection \(A_0\), the metric, \(\nu \), and the constants
\(\varsigma _w\),
\(\textsl{z}_\grp\). Note that over
\(\hat{Y}_{[l,L]}\),
\begin{equation}\label{diff:CSD-E}\begin{split}
& -2 \operatorname{CSD}_{\mu _+, \hat{\grp}}^{\partial
\hat{Y}_{[l,L]}}(B, \Phi )-\scrE^{' \mu_r ,\hat{\grp}}(
\hat{Y}_{[l,L]}) (A, \Psi )\\
& \quad =-\frac{i}{4}\int_{\hat{Y}_{[l,L]}}
F_{B_0}\wedge \mu _r
\leq \zeta
_e r\, e^{-\kappa _il} |\hat{Y}_{[l,L]}|_1 ,
\end{split}
\end{equation}
where \(\zeta _e\) is a positive constant depending
only on \(B_0\) and \(\nu \).
Thus, the bound (\ref{energy-v0}) implies
\begin{equation}\label{energy-v}
\begin{split}
&
\frac{1}{8}\int_{\hat{Y}_{[l,L]}}|F_A|^2+\int_{\hat{Y}_{[l,L]}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{\hat{Y}_{[l,L]}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
& \quad \leq \scrE^{' \mu_r, \hat{\grp}}(
\hat{Y}_{[l,L]}) (A, \Psi ) +r (\wp_i^++z_i)|\hat{Y}_{[l,L]}|+\zeta
_or|\hat{Y}_{[l,L]}|_1+\zeta '_o|\hat{Y}_{[l,L]}|\\
& \quad \quad -2\int_{\hat{Y}_{[l,L]}}(\partial_sf_{\hat{\grp}})(B, \Phi), \\
\end{split}
\end{equation}
where \(\zeta _o, \zeta '_o\) are likewise positive constants that depend only on
the choice of reference connection \(A_0\), the metric, \(\nu \), and the constants
\(\varsigma _w\), \(\textsl{z}_\grp\).
We now estimate the last term above,
\(-2\int_{\hat{Y}_{[l,L]}}(\partial_s\, f_{\hat{\grp}(s)})(B,
\Phi)\). This vanishes unless \(\hat{Y}_i\) is a vanishing end; so we assume
from now on that \(\hat{Y}=\hat{Y}_i\) is a vanishing end.
According to (\ref{eq:hatp}) (and recalling the definitions and
assumptions on \(\chi _i\), \(\lambda _i\) there),
\[
(\partial_sf_{\hat{\grp}})(B,
\Phi)=\chi '_i(s)\,f_{\grq_i}(B, \Phi)+\lambda
'_i (s) \, f_{\grp'}(B, \Phi).
\]
This is supported on \(\hat{Y}_{[\grl_i,
\grl_i']}\), and by our assumption on the cutoff functions \(\chi
_i\) and \(\lambda _i\), its absolute value is bounded by
\(|f_{\grq_i}(B(s), \Phi(s))|+|f_{\grp'}(B(s), \Phi(s))|\). At each \(s\in [\grl_i, \grl_i']\),
\(|f_{\grq_i}(B(s), \Phi(s))|\) and \(|f_{\grp'}(B(s), \Phi(s))|\) can
be bounded via a variant of Lemma \ref{lem:f_q}: Applying the
triangle inequality differently in the last step of
(\ref{ineq:f_q}) and recalling Assumption \ref{assume} 4d), one may arrange so that:
\[\begin{split}
|f_{\grq_i}(B(s), \Phi(s))|+|f_{\grp'}(B(s), \Phi(s))|&
\leq \frac{1}{32}\|F_B-F_{B_0}\|_{L^2(Y_{i: s})}^2+\zeta _2'\big( \|\Phi
\|_{L^2(Y_{i: s})}^2+1\big),
\end{split}
\]
where \(\zeta '_2\) is a positive constant depending only on \(\textsl{z}_\grp\)
and the metric on \(Y_i\). Integrating the preceding
inequality over \(I:=[l,L]\cap [\grl_i, \grl_i']\subset \bbR\), and
appealing again to (\ref{f-trick}), we get
\begin{equation}\label{bdd:f-ps}
\begin{split}
& \Big|-2\int_{\hat{Y}_{[l,L]}}(\partial_sf_{\hat{\grp}})(B,
\Phi)\Big|\\
&\quad \leq \frac{1}{16}\|F_B-F_{B_0}\|_{L^2(\hat{Y}_{I})}^2+\zeta '_3\big(\|\Psi
\|^2_{L^2(\hat{Y}_{I})}+ |\hat{Y}_{I}|\big)\\
& \quad \leq
\frac{1}{16}\|F_B-F_{B_0}\|_{L^2(\hat{Y}_{I})}^2+\frac{1}{4}\Big\|\frac{i}{4}\rho(\mu^+_r)-(\Psi\Psi^*)_0\Big\|^2_{L^2(\hat{Y}_I)}+\zeta
'_4r|\hat{Y}_{I}|,
\end{split}
\end{equation}
where \(\zeta _3'\) is a positive constant depending only on
\(\textsl{z}_\grp\)
and the metric on \(Y_i\), and \(\zeta _4'\) is a
positive constant depending only on \(\nu \), and \(\varsigma _w\),
\(\textsl{z}_\grp\),
and the metric.
Inserting the preceding estimate into (\ref{energy-v}),
we arrive at \((\ref{eq:E-top-bdd0})\) (b). \hfill$\Box$\medbreak
\subsection{Lower bounds for \(\scrE^{'\mu_r, \hat{\grp}}_{top}(\hat{Y}_{[l,L]})\)}\label{sec:E_top-lower}
We need a lower bound that
is independent of \(\hat{Y}_{[l,L]}\), \([l, L]\subset[0,\infty]\)
being arbitrary. When \(\hat{Y}=\hat{Y}_i\) is a vanishing end,
suppose in addition that \(l\geq \grl_i'\),
so that the last term of
(\ref{eq:flow-energy}) vanishes on such \(\hat{Y}_{[l,L]}\). Then, for
an admissible solution \((A,
\Psi )=(A_r, \Psi _r)\) to \(\grS_{\mu _r, \hat{\grp}}(A_r, \Psi
_r)=0\), we have:
\begin{equation}\label{bdd:CSD-lower}
\begin{split}
& -2\operatorname{CSD}_{\mu_+, \hat{\grp}}^{\partial \hat{Y}_{[l,L]}}(B,\Phi)\\
&\quad \quad =\frac{1}{2}\|\partial_sB\|_{L^2(\hat{Y}_{[l,L]})}^2
+2\|\partial_s\Phi\|^2_{L^2(\hat{Y}_{[l,
l})}-\frac{ir}{4}\int_{\hat{Y}_{[l,L]}}ds\,(
*_Y\xi_\nu)\wedge(F_B-F_{B_0})\\
& \quad \quad \geq -\frac{ir}{4}\int_{\hat{Y}_{[l,L]}}ds\,(
*_Y\xi_\nu)\wedge(F_B-F_{B_0}).
\end{split}
\end{equation}
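The equality in (\ref{bdd:CSD-lower}) can be checked as follows: by (\ref{t-dep-flow}), \((\frac{1}{2}\partial_sB, \partial_s\Phi)=-\mathfrak{F}_{\mu_+, \hat{\grp}}(B, \Phi)\), so (with the product \(L^2\)-norm on the two factors)
\[
4\|\mathfrak{F}_{\mu_+, \hat{\grp}}(B, \Phi)\|_{L^2(\hat{Y}_{[l,L]})}^2
=\|\partial_sB\|_{L^2(\hat{Y}_{[l,L]})}^2+4\|\partial_s\Phi\|^2_{L^2(\hat{Y}_{[l,L]})}.
\]
Under the standing assumptions the last term of (\ref{eq:flow-energy}) vanishes, and dividing (\ref{eq:flow-energy}) by 4 gives the asserted equality.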
Ideally, \(1/r\) times the last term above should be bounded
independently of both
\(r\) and \(\hat{Y}_{[l,L]}\). Unfortunately, this is far from
straightforward. Note that similar terms do not appear in the setting
of \cite{LT} or \cite{Arnold2}. What follows is a first attempt to
tackle this trouble-making term.
To proceed, use (\ref{eq:flow-energy}), (\ref{DE:xi}), (\ref{eq:xi-exp}),
Lemma \ref{lem:F-L_1} to get that for \(r\geq r_0\),
\[
\begin{split}
& \frac{1}{2}\|\partial_sB\|_{L^2(\hat{Y}_{i, l})}^2 +2\|\partial_s\Phi\|^2_{L^2(\hat{Y}_{i, l})}
\\
& \quad = -2\operatorname{CSD}_{\mu_+, \hat{\grp}}^{\partial \hat{Y}_{i,[l,\infty]}}(B, \Phi )
+\frac{ir}{4}\int_{\hat{Y}_{i, l}}ds\,( *_Y\xi_\nu)\wedge(F_{B_i}-F_{B_0})
\\
& \qquad\quad -\frac{ir}{4}\int_l^\infty \, ds\int_{Y_{i:
s}}( \partial_s\xi_\nu)\wedge\big(\int_s^\infty\partial_tB(t) dt\big)\\
& \quad \leq -2\operatorname{CSD}_{\mu_+, \hat{\grp}}^{\partial \hat{Y}_{i,[l,\infty]}}(B, \Phi
) +\zeta _ir\, e^{-\kappa _i l} +
\frac{1}{4}\|\partial_sB\|_{L^2(\hat{Y}_{i, l})}^2+\zeta '_i r^2e^{-2\kappa _i l},
\end{split}
\]
where \(\zeta _i\), \(\zeta _i'\) are positive constants; \(\zeta _i\) depends only on \(\nu \), \(B_0\), and the constant \(\textsl{z}_i\)
in Lemma \ref{lem:F-L_1}; \(\zeta _i'\) depends only on \(\nu\).
Rearranging, we get
\[
-2\operatorname{CSD}_{\mu_+, \hat{\grp}}^{\partial \hat{Y}_{i,[l, \infty]}}(B, \Phi
) \geq- \zeta _i'' (r^2e^{-2\kappa _i l}+1),
\]
and by (\ref{diff:CSD-E}), \(\scrE^{'\mu _r}_{top}(\hat{Y}_{i,l})(A,
\Psi )=\scrE^{'\mu_r, \hat{\grp}}_{top}(\hat{Y}_{i,l})(A, \Psi )\) satisfies
a similar inequality:
\[
\scrE^{'\mu_r, \hat{\grp}}_{top}(\hat{Y}_{i,l})(A, \Psi )\geq - \underline{\zeta }_i
(r^2e^{-2\kappa _i l}+1).
\]
In particular, setting \(\hat{\mathop{\mathrm{l}}}_i:=\kappa _i^{-1}\), and choosing
\(r_0\) to be greater than
\(\max_{i\in \grY_v}\exp \, (4\grl_i'/\hat{\mathop{\mathrm{l}}}_i)\), we have that for \(r\geq
r_0\),
\begin{equation}\label{bdd:E_l-lower}
\begin{split}
-2\operatorname{CSD}_{\mu_+, \hat{\grp}}^{\partial \hat{Y}_{i,[l, \infty]}}(B, \Phi
) & \geq -2\, \zeta _i'' r ;\\
\scrE^{'\mu _r}_{top}(\hat{Y}_{i,l})(A,
\Psi )= \scrE^{'\mu_r, \hat{\grp}}_{top}(\hat{Y}_{i,l})(A, \Psi )& \geq - 2\underline{\zeta
}_i r\qquad \forall l\geq \big(\frac{\ln r}{2}\big)\hat{\mathop{\mathrm{l}}}_i.
\end{split}
\end{equation}
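For the reader's convenience, we spell out how the threshold on \(l\) is used: since \(\hat{\mathop{\mathrm{l}}}_i=\kappa _i^{-1}\), the condition \(l\geq \big(\frac{\ln r}{2}\big)\hat{\mathop{\mathrm{l}}}_i\) gives
\[
r^2e^{-2\kappa _i l}\leq r^2 e^{-\ln r}=r,
\]
so \(\underline{\zeta }_i\,(r^2e^{-2\kappa _i l}+1)\leq \underline{\zeta }_i\,(r+1)\leq 2\underline{\zeta }_i\, r\) for \(r\geq 1\).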
In the above, the positive constants \(\zeta _i'', \underline{\zeta
}_i >0\) depend only on \(\nu \), \(B_0\), and the constant \(\textsl{z}_i\)
in Lemma \ref{lem:F-L_1}.
Let \(\pmb{\hat{\mathop{\mathrm{l}}}}\colon\thinspace
\grY\to \bbR^+\) be the function given by
\(\pmb{\hat{\mathop{\mathrm{l}}}}(i)=\hat{\mathop{\mathrm{l}}}_i\), and write \(\bfl_r:=(\ln r)\,
\pmb{\hat{\mathop{\mathrm{l}}}}\).
Under the assumption (\ref{assume:EtopX-ubdd}), the preceding pair of
inequalities then implies that for \(r\geq r_0\),
\begin{equation}\label{bdd:E-up-i}
\begin{split}
\scrE^{'\mu_r, \hat{\grp}}_{top}(X_{\bf l})(A, \Psi
)& =\scrE^{'\mu_r, \hat{\grp}}_{top}(X)(A, \Psi
)-\sum_{i\in \grY}\scrE^{'\mu_r, \hat{\grp}}_{top}(\hat{Y}_{i,l})(A, \Psi )\\
& \leq r \, (\textsc{e}+2\sum_{i\in \grY}\underline{\zeta
}_i ) \quad \forall \, {\bf l}\geq \frac{\bfl_r}{2} \quad
\text{assuming (\ref{assume:EtopX-ubdd}) (b); similarly, }
\\
\scrE^{'\mu_r}_{top}(X_{\bf l})(A, \Psi
) & \leq r \, (\textsc{e} +2\sum_{i\in \grY}\underline{\zeta
}_i ) \quad \forall \, {\bf l}\geq \frac{\bfl_r}{2} \quad
\text{assuming (\ref{assume:EtopX-ubdd}) (a).}
\end{split}
\end{equation}
Combining the preceding upper bounds for
\(\scrE^{'\mu_r, \hat{\grp}}_{top}(X_{\bf l})\),
\(\scrE^{'\mu_r}_{top}(X_{\bf l})\) with (\ref{eq:E-top-bdd0}), we
have for any \(r\geq r_0\) and any \({\bf
l}\geq \bfl_r/2\) with \(\bfl(i)=:l_i\),
\begin{equation}\label{bdd:E_Xl}
\begin{split}
&
\frac{1}{8}\int_{X_{\bfl}}|F_A|^2+\int_{X_{\bfl}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{X_{\bfl}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\qquad \quad
\leq r \, (\textsc{e}+\zeta )+\zeta '\sum_{i\in \grY}r^{\mathrm{a}_i}l_i \quad
\text{assuming (\ref{assume:EtopX-ubdd}) (a)};\\
& \frac{1}{16}\int_{X_{\bfl}}|F_A|^2+\int_{X_{\bfl}}|\nabla_A\Psi|^2+\frac{1}{4}\int_{X_{\bfl}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\qquad \quad
\leq r \, (\textsc{e}+\zeta _p)+\zeta '_p\sum_{i\in \grY}r^{\mathrm{a}_i}l_i\quad
\text{assuming (\ref{assume:EtopX-ubdd}) (b).}
\end{split}
\end{equation}
(As in Lemma \ref{lem:CSD-est}, \(\mathrm{a}_i=1\) when \(\hat{Y}_i\) is a
Morse end, and \(\mathrm{a}_i=0\) when it is a vanishing end.)
In particular, given \(i\in \grY\), by taking \(\bfl\) above to be
such that \(\bfl(j)=(\frac{\ln r}{2})\, \hat{\mathop{\mathrm{l}}}_j\) when
\(j\neq i\) and \(\bfl(i)=l\geq (\frac{\ln r}{2})\, \hat{\mathop{\mathrm{l}}}_i\), we have
\[\begin{split}
\frac{1}{16}\int_{\hat{Y}_{i, [l-1, l]}}|F_A|^2& <
\frac{1}{16}\int_{X_{\bfl}}|F_A|^2\\
& \leq r \, (\textsc{e}+\zeta _1\ln
r)+\zeta _2 r^{\mathrm{a}_i} \big(l-\frac{\ln r}{2}\hat{\mathop{\mathrm{l}}}_i\big)\qquad
\forall\, l\geq \frac{\ln r}{2}\hat{\mathop{\mathrm{l}}}_i.
\end{split}
\]
The positive
constants \(\zeta _1, \zeta _2\), as well as \(\zeta \), \(\zeta '\) in
(\ref{bdd:E_Xl}), depend only on the metric, the \(\mathop{\mathrm{Spin}}\nolimits^c\)
structure, \(\nu \), \(\varsigma _w\), \(B_0\), and \(\max_{j\in \grY}
\textsl{z}_j\), \(\textsl{z}_j\) being the constants
in Lemma \ref{lem:F-L_1}. In particular, they are
independent of \(r\), \(l\), and \(i\).
Combining the preceding bound with (\ref{bdd:CSD-lower}), and
choosing \(r_0\) to be sufficiently large (depending on \(\textsc{e}\)), we have for all
\(l\geq\frac{\ln r}{2}\, \hat{\mathop{\mathrm{l}}}_i\)
and any \(L\geq l\),
\[\begin{split}
-2\operatorname{CSD}_{\mu_+, \hat{\grp}}^{\partial \hat{Y}_{i, [l,L]}}(B,\Phi)
& \geq -\zeta r\sum_{n=0}^{\infty} e^{-\kappa
_i(l+n)}\|F_B-F_{B_0}\|_{L^2(\hat{Y}_{i, [l+n, l+n+1]})}\\
& \geq -\sum_{n=0}^{\infty}e^{-\kappa
_i(l+n)}\big( \zeta 'r^2+r \, (\textsc{e}+\zeta _1\ln
r)+\zeta _2 r^{\mathrm{a}_i} (l+n)\big)\\
& \geq- \zeta _3 \, r^2 e^{-\kappa
_il}-\zeta _4\, r e^{-\kappa
_il/2}\quad \text{when \(r\geq r_0\).}
\end{split}
\]
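(Here the summations over \(n\) are evaluated via standard geometric-series estimates: with \(x:=e^{-\kappa _i}\in(0,1)\),
\[
\sum_{n=0}^{\infty}e^{-\kappa _i(l+n)}=\frac{e^{-\kappa _i l}}{1-x},\qquad
\sum_{n=0}^{\infty}(l+n)\,e^{-\kappa _i(l+n)}\leq \frac{e^{-\kappa _i l}}{1-x}\Big(l+\frac{1}{1-x}\Big),
\]
and factors of the form \(l\,e^{-\kappa _i l}\) are absorbed into \(e^{-\kappa _i l/2}\) using \(l\,e^{-\kappa _i l/2}\leq 2/(e\kappa _i)\).)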
In particular, for any \(r\geq r_0\), \(l\geq (\ln
r) \, \hat{\mathop{\mathrm{l}}}_i\), and \(L\geq l\), we have
\begin{equation}\label{bdd:E-lower-ii}
\begin{split}
-2\operatorname{CSD}_{\mu_+, \hat{\grp}}^{\partial \hat{Y}_{i,
[l,L]}}(B,\Phi)& \geq -\zeta _5 \, r\quad \text{and equivalently (by (\ref{diff:CSD-E})), }\\
\scrE^{' \mu _r}_{top} (\hat{Y}_{i,
[l,L]})(A, \Psi )& =\scrE^{' \mu _r, \hat{\grp}}_{top} (\hat{Y}_{i,
[l,L]})(A, \Psi ) \geq -\zeta '_5 \, r.\quad
\end{split}
\end{equation}
The positive constants \(\zeta _3, \zeta _4, \zeta _5, \zeta _5'\)
above depend only on the metric, the \(\mathop{\mathrm{Spin}}\nolimits^c\)
structure, \(\nu \), \(\varsigma _w\), \(B_0\), and \(\max_{j\in \grY}
\textsl{z}_j\).
\subsection{Lemma \ref{lem:Etop-bdd1} and some of its variants
}\label{sec:pf-E_top-bdd0}
In this part we prove Lemma \ref{lem:Etop-bdd1}
and some of its variants.
\paragraph{\it Proof of Lemma \ref{lem:Etop-bdd1}.}
The rightmost inequalities in (\ref{eq:E_top-M1}) (i) and the
leftmost
inequalities in (\ref{eq:E_top-M1}) (ii) follow respectively from
(\ref{bdd:E-up-i}) and
(\ref{bdd:E-lower-ii}). To get the remaining two inequalities in
(\ref{eq:E_top-M1}), we need a lower bound on \(\scrE^{'\mu_r}_{top}(X_{\bf l})(A, \Psi
)\) or \(\scrE^{'\mu_r, \hat{\grp}}_{top}(X_{\bf l})(A, \Psi
)\). Here is a preliminary bound from (\ref{eq:E-top-bdd0}): for all \({\bf l}\colon\thinspace \grY\to
\bbR^+\) with \(\bfl(i)=:l_i\):
\[\begin{split}
& \scrE^{'\mu_r, \hat{\grp}}_{top}(X_{\bf l})(A, \Psi
)\geq - \zeta _p (r+ \sum_{i\in \grY}r^{\mathrm{a}_i}l_i);\\
& \scrE^{'\mu_r}_{top}(X_{\bf l})(A, \Psi
)\geq - \zeta (r+\sum_{i\in \grY}r^{\mathrm{a}_i}l_i),
\end{split}
\]
where \(\zeta\), \(\zeta_p\) are positive constants depending only on
the parameters listed in (\ref{parameters}).
Combined with (\ref{bdd:E-lower-ii}), this implies: for any \(r\geq r_0\)
and any \(\bfl\geq \bfl_r\), with \(l_i:=\bfl(i)\) (possibly
\(\infty\)),
\begin{equation}\label{bdd-lower-i}
\begin{split}
\scrE^{'\mu_r}_{top}(X_{\bf l})(A, \Psi
)& =\scrE^{'\mu_r}_{top}(X_{\bfl _r})(A, \Psi
)+\sum_{i\in \grY}\scrE^{'\mu_r}_{top}(\hat{Y}_{i, [\bfl_r(i), l_i]})(A, \Psi )\\
& \geq - \zeta 'r\ln r;\quad \text{similarly,} \\
\scrE^{'\mu_r, \hat{\grp}}_{top}(X_{\bf l})(A, \Psi
)&\geq - \zeta _p'r\ln r,
\end{split}
\end{equation}
where \(\zeta'\), \(\zeta'_p\) are positive constants depending only on
the parameters listed in (\ref{parameters}),
\(B_0\), and the constants \(\textsl{z}_i\)
in Lemma \ref{lem:F-L_1}. This is precisely what was asserted in the
leftmost inequalities in (\ref{eq:E_top-M1}) (i).
Meanwhile, combining (\ref{bdd-lower-i}) and (\ref{bdd:E-up-i}),
one has the following: for any \(\hat{Y}_{i, [l, L]}\subset X-\mathring{X}_{\bfl
_r}\), \(L\) possibly \(\infty\),
\begin{equation}\label{eq:4.38+}
\begin{split}
\scrE^{'\mu_r}_{top}(\hat{Y}_{i, [l, L]})(A, \Psi
)& =\scrE^{'\mu_r}_{top}(X_{\bfL_i})(A, \Psi
)-\scrE^{'\mu_r}_{top}(X_{\bfl_i})(A, \Psi
)\\
& \leq r \, (\textsc{e} + \zeta _e\ln r), \\
\end{split}
\end{equation}
where \(\bfl_i, \bfL_i\colon\thinspace \grY\to \bbR^+\) are such that
\(\bfl_i(j)=\bfL_i(j)=\bfl_r(j)\) for \(j\neq i\), and
\(\bfl_i(i)=l\); \(\bfL_i(i)=L\). In the above,
\(\zeta _e\) depends only on
the parameters listed in (\ref{parameters}),
\(B_0\), and the constants \(\textsl{z}_i\)
in Lemma \ref{lem:F-L_1}. The preceding inequality leads directly to
the second inequality in (\ref{eq:E_top-M1}) (ii). \hfill$\Box$\medbreak
A straightforward corollary of Lemma \ref{lem:Etop-bdd1} is:
\begin{cor}\label{cor:E_top(X')}
Adopt the assumptions and notation of Lemma \ref{lem:Etop-bdd1}. There is a positive constant \(\zeta \) independent of
\(r\), \(X_\bullet\), and \((A, \Psi )\) such that
\[\begin{split}
\text{(i) } & \scrE^{'\mu_r}_{top}(X_\bullet)(A, \Psi )\geq -\zeta \, r\quad \text{for all
\(r\geq r_0\) and \(X_\bullet\subset X-X''=X_v\); consequently}\\
\text{(ii) } & \scrE^{'\mu_r}_{top}(X'')(A, \Psi )\leq r\, (\textsc{e} +\zeta );
\end{split}
\]
similarly for \(\scrE^{'\mu_r,\hat{\grp}}_{top}(X_\bullet)(A, \Psi
)\).
\end{cor}
\noindent{\it Proof.}\enspace We shall prove only the statement for
\(\scrE^{'\mu_r}_{top}(X_\bullet)(A, \Psi )\), since the proof for
\(\scrE^{'\mu_r, \hat{\grp}}_{top}(X_\bullet)(A, \Psi )\) is the same.
According to (\ref{eq:E-top-bdd0}), for all
\(X_\bullet\subset X_v\)
\[
\scrE^{'\mu_r}_{top}(X_\bullet)(A, \Psi )\geq -(\zeta 'r+\zeta_0
|X_\bullet|).
\]
It follows that for any \(X_\bullet \subset X_v\),
\[
\begin{split}
& \scrE^{'\mu_r}_{top}(X_\bullet)(A, \Psi )\\
& \quad =\scrE^{'\mu_r}_{top}(X_\bullet\cap
X_{\bfl_r})(A, \Psi )+\scrE^{'\mu_r}_{top}(X_\bullet- X_{\bfl_r})(A, \Psi )\\ &\quad \geq-(\zeta 'r+\zeta '_0 \ln r)-\zeta '' r\geq -\zeta r.
\end{split}
\]
In the above, we applied the preceding inequality to bound the first term in
the second line, and used the first inequality in (\ref{eq:E_top-M1})
(ii) to bound the second term. This proves Inequality (i)
asserted by the corollary. To obtain Inequality (ii), simply
write \(\scrE^{'\mu_r}_{top}(X'')(A, \Psi )=\scrE^{'\mu_r}_{top}(X)(A, \Psi
)-\scrE^{'\mu_r}_{top}(X-X'')(A, \Psi )\), then combine with the bounds from (i)
and (\ref{assume:EtopX-ubdd}).
\hfill$\Box$\medbreak
For future reference, note that the arguments above in fact establish the following
generalization of Lemma \ref{lem:Etop-bdd1}:
\begin{lemma}
\label{lem:Etop-bddf}
Adopt the assumptions and notation of Lemma \ref{lem:Etop-bdd1}.
Suppose
\(\grt(r)\) is a function from \([r_0, \infty)\)
to \((1, \infty)\) such that the lower bound on
\(\scrE^{'\mu_r}_{top}(X_\bullet)\) in (\ref{bdd:E-lower-ii}) holds for any
given
\(r\geq r_0\) and \(X_\bullet\subset X'-X_{\grt(r)
\pmb{\hat{\mathop{\mathrm{l}}}},m}\). That is, assume that
\begin{equation}\label{assume:t(r)}
\scrE^{'\mu_r}_{top} (X_\bullet)(A, \Psi ) \geq -\zeta '_5 \, r\quad \forall X_\bullet\subset X'-X_{\grt(r)
\pmb{\hat{\mathop{\mathrm{l}}}},m}, r\geq r_0.
\end{equation}
(In particular, it follows from Lemma \ref{lem:Etop-bdd1} that this holds
with \(\grt(r)=\ln r\).)
Then there exist positive constants \(\zeta \), \(\zeta _1\), \(\zeta_2\), \(\zeta _3\)
such that for any
\(r\geq r_0\),
\begin{equation}\label{eq:E_top-Mf}
\begin{split}
\text{(i) } &
-r(\zeta _1+\zeta _2 \grt(r))\leq \scrE_{top}^{'\mu_r}(X_\bullet)(A,
\Psi )\leq r\, (\textsc{e}+\zeta)
\quad \text{for all \(X_\bullet\supset X_{\grt(r) \pmb{\hat{\mathop{\mathrm{l}}}},m}\)};\\
\text{(ii) } & -\zeta _1 r\leq \scrE_{top}^{'\mu_r}(X_\bullet)(A, \Psi
) \leq r\, (\textsc{e}+
\zeta _2\grt (r)+\zeta _3)
\quad \text{for
all \(X_\bullet\subset X-\mathring{X}_{\grt(r)\pmb{\hat{\mathop{\mathrm{l}}}},m}\)}.
\end{split}
\end{equation}
The positive constants
\(\zeta \), \(\zeta _1\), \(\zeta_2\), \(\zeta _3\) above depend only on the metric, the \(\mathop{\mathrm{Spin}}\nolimits^c\)
structure, \(\nu \), \(\varsigma _w\), \(B_0\), and the constants \(\textsl{z}_i\)
in Lemma \ref{lem:F-L_1}.
In particular, these constants
are independent of \(X_\bullet\) and \(r\).
A similar statement holds for \(\scrE_{top}^{'\mu_r,
\hat{\grp}}(X_\bullet)(A, \Psi ) \), with the constants in the
inequalities depending additionally on \(\textsl{z}_\grp\).
\end{lemma}
\noindent{\it Proof.}\enspace We shall again prove only the inequalities for
\(\scrE_{top}^{'\mu_r}(X_\bullet)(A, \Psi )\), since the proof for \(\scrE_{top}^{'\mu_r,
\hat{\grp}}(X_\bullet)(A, \Psi ) \) is identical.
The first inequality in (\ref{eq:E_top-Mf}) (ii) follows from the
assumption (\ref{assume:t(r)}) and Inequality (i) in Corollary \ref{cor:E_top(X')}.
Replacing (\ref{bdd:E-lower-ii}) and (\ref{bdd:E_l-lower}) by this inequality, the arguments in
the previous subsection and in the proof of Lemma \ref{lem:Etop-bdd1} above can be
repeated to replace (\ref{bdd:E-up-i}), (\ref{bdd-lower-i}), and (\ref{eq:4.38+})
with their respective companion versions. These are respectively the
second inequality in (\ref{eq:E_top-Mf}) (i), the first inequality in
(\ref{eq:E_top-Mf}) (i), and the
second inequality in (\ref{eq:E_top-Mf}) (ii).
\hfill$\Box$\medbreak
\subsection{\(L^2_{1, loc /A}\)-bounds for \((A-A_0, \Psi)\)}\label{sec:L^_1-bdd}
It follows from the discussion in the preceding subsection that:
\begin{prop}\label{prop:SW-L2-bdd}
Adopt the notation and assumptions of Lemma \ref{lem:Etop-bdd1}.
Let \(\zeta _0\), \(\zeta ''\), \(\zeta ''_p\)
be the constants from Lemma
\ref{lem:E-top-bdd0}.
Then there exist constants
\(\zeta _1\), \(\zeta _1'\)
such that the following hold for any \(r\geq
r_0\), any \((A, \Psi )\), and any compact \(X_\bullet\subset X\)
satisfying either \(\partial X_\bullet \subset
X-\mathring{X}_{\bfl_r}\) or \(X_\bullet\subset X_{\bfl_r}\):
\begin{equation}\label{eq:L^2_1}
\begin{split}
& \frac{1}{8}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{X_{\bullet}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\qquad
\leq r \, \big(\zeta _0|X_{\bullet,m}|+\textsc{e}+\zeta _1\ln r
\big) +\zeta ''|X_\bullet| \quad \text{assuming
(\ref{assume:EtopX-ubdd}) (a)};
\\
& \frac{1}{16}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\frac{1}{4}\int_{X_{\bullet}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\qquad
\leq r \, \big(\zeta _0|X_{\bullet,m}|+\textsc{e} +\zeta '_1\ln r \big)
+\zeta ''_p|X_\bullet| \quad \text{assuming
(\ref{assume:EtopX-ubdd}) (b)}.
\end{split}
\end{equation}
The constants \(\zeta _1\), \(\zeta _1'\) above depend only on the parameters listed in
(\ref{parameters}), together with \(B_0\) and the constants \(\textsl{z}_i\) in
Lemma \ref{lem:F-L_1}.
In particular, they are independent of \(r\) and
\(X_\bullet\).
Moreover, if the assumption (\ref{assume:t(r)}) in Lemma \ref{lem:Etop-bddf} holds, then
the statement above holds with
all appearances of \(\ln r \) replaced by \(\grt(r)\).
\end{prop}
\noindent{\it Proof.}\enspace When \(X_\bullet \) is such that
\(\partial X_\bullet \subset X-\mathring{X}_{\bfl_r}\), the claim of
the proposition
follows directly from Lemmas \ref{lem:Etop-bdd1}, \ref{lem:Etop-bddf} and
\ref{lem:E-top-bdd0}. The case when \(X_\bullet \subset X_{\bfl_r}\) follows from the preceding
case, together with the observation that when \(X_\bullet\subset
X_{\bfl_r}\),
\[\begin{split}
& \frac{1}{8}\int_{X_{\bullet}}|F_A|^2+\int_{X_{\bullet}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{X_{\bullet}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2\\
&\qquad \leq \frac{1}{8}\int_{X_{\bfl_r}}|F_A|^2+\int_{X_{\bfl_r}}|\nabla_A\Psi|^2+\frac{1}{2}\int_{X_{\bfl_r}}\big|\frac{i}{4}\rho(\mu_r^+)-(\Psi\Psi^*)_0\big|^2.
\end{split}
\]
\hfill$\Box$\medbreak
As a consequence,
\begin{prop}\label{est-L^2_1}
Adopt the assumptions and notation of Lemma \ref{lem:Etop-bdd1},
and recall the reference connection \(A_0\) from (\ref{eq:A_0}).
Fix a compact \(X_\bullet \subset X\) and an \(r\geq r_0\), \(r_0\) being
as in the previous proposition. Let \(u_r\in
C^\infty(X_\bullet, S^1)\) be such that
\(u_r\cdot (A_r, \Psi_r)\) is in the gauge specified in
\cite{KM}'s Equations (5.2) and (5.3). Then there exist positive
constants \(\zeta_2\), \(\zeta _2'\) depending only on the parameters listed in
(\ref{parameters}), such that
\begin{equation}\label{est:L^2_1}\begin{split}
& \|(u_r\cdot A_r-A_0, \Psi_r)\|^2_{L_{1,
A_r}^2(X_{\bullet})}
\leq \zeta_2
r \, \big(|X_{\bullet,m}|+\textsc{e} +\ln r \big)
+\zeta _2' |X_\bullet|.
\end{split}
\end{equation}
Again, if the assumption (\ref{assume:t(r)}) in Lemma \ref{lem:Etop-bddf} holds, then
the statement above holds with
all appearances of \(\ln r \) replaced by \(\grt(r)\).
\end{prop}
\noindent{\it Proof.}\enspace It suffices to establish (\ref{est:L^2_1}) for those
\(X_\bullet\) of the form \(X_c\) or \(\hat{Y}_{i, [l, l+1]}\).
Combining (\ref{eq:L^2_1}) with standard elliptic estimates (cf. e.g. pp. 101-104 of \cite{KM}),
one may find a positive
constant \(\zeta \) depending only on the metric on \(X\) such that
\[
\|u_r\cdot A_r-A_0\|^2_{L_{1}^2(X_{\bullet})} +\|\nabla_A\Psi\|^2_{L^2(X_\bullet)}\leq
\zeta \cdot (\text{RHS of (\ref{eq:L^2_1}) (b)}).
\]
Meanwhile, a combination of (\ref{eq:L^2_1}) with (\ref{f-trick})
(with \(\grf\) set to be 1) implies that
\[\begin{split}
\|\Psi\|^2_{L^2(X_\bullet)}& \leq 4\cdot (\text{RHS of
(\ref{eq:L^2_1}) (b)})+ |X_\bullet|+\frac{1}{2}\int_{X_\bullet}|\mu
_r|\\
& \leq \zeta_3
r \, \big(|X_{\bullet,m}|+\textsc{e} +\ln r \big)
+\zeta _3' |X_\bullet|,
\end{split}
\]
where \(\zeta _3, \zeta _3'\) depend only on the parameters listed in
(\ref{parameters}). Together with the preceding inequality, we arrive
at (\ref{est:L^2_1}).
\hfill$\Box$\medbreak
\begin{rem}\label{rem:CNgauge}
The gauge transformation \(u_r\) in (\ref{est:L^2_1}) depends on \(X_\bullet\). Specifically, it is
determined, via Equations (5.2) and (5.3) in \cite{KM}, by \(A_0\) together with a choice of \(\{q_1, q_2, \ldots,
q_{b^1(X_\bullet)}\}\), where each \(q_k\), \(k\in \{1, \ldots, b^1(X_\bullet)\}\) is
a closed 3-form supported in a compact set in the interior of
\(X_\bullet\) and \(\{[q_k]\}_k\subset H^3(X_\bullet , \partial
X_\bullet;\bbR)\) forms a basis of \(H^3(X_\bullet , \partial
X_\bullet;\bbZ)/\text{Tors}\). As in \cite{KM}, we say that \((A'_r, \Psi
'_r)=u_r\cdot (A_r, \Psi _r)\) or \(A'_r=u_r\cdot A_r\) is in a {\em Coulomb-Neumann} gauge
(with respect to \(A_0\)) if it satisfies Equation (5.2) of
\cite{KM}. We say that it is in the {\em normalized Coulomb-Neumann} gauge
(with respect to \(A_0\) and \(\{q_k\}_{k=1}^{b^1(X_\bullet)}\)) if it
satisfies both Equations (5.2) and (5.3) in \cite{KM}. Note that if
\((A'_r, \Psi _r')\) is in a Coulomb-Neumann gauge with respect to
\(A_0\), then \((B', \Phi ')=(A'_r, \Psi _r')|_{\partial X_\bullet}\)
is in a Coulomb gauge with respect to \(B_0=A_0|_{\partial
X_\bullet}\).
In the case
when \(X_\bullet\) is of the form \(\hat{Y}_{[l, L]}\), for some
\(Y=Y_i\), we have \(H^3(X_\bullet , \partial
X_\bullet;\bbZ)\simeq H^2(Y;\bbZ)\), and we choose \(\{q_k\}_k\) so
that \(q_k=d(\chi _+(s) *_3 h_k)\), with \(h_k\), \(k\in \{1, \ldots,
b^1(Y)\}\) being the harmonic
1-forms on \(Y\) appearing in (\ref{eq:deltab1}), and \(\chi _+\colon\thinspace [l,
L]\to [0,1]\) is a smooth non-decreasing function that is 0 on a
neighborhood of \(\{l\}\subset [l,L]\) and is 1 on a neighborhood of
\(\{L\}\subset [l,L]\). If \((A_r', \Psi _r')\) is in the normalized
Coulomb-Neumann gauge with respect to such \(\{q_k\}_k\) (and
\(A_0\)), then \((A_r', \Psi _r')|_{\hat{Y}_{:L}}\) is in the
normalized Coulomb gauge with respect to \(\{h_k\}_k\) (and
\(B_0=A_0|_{\hat{Y}_{:L}}\)).
\end{rem}
\subsection{Some integral estimates}\label{sec:int-est}
This subsection contains some integral estimates which will be
instrumental to the pointwise estimates in the next section.
\begin{lemma}\label{co:E-omega-bdd}
Adopt the assumptions and notation of Lemma \ref{lem:Etop-bdd1}
and recall from (\ref{eq:A_0}) that \(A_0=\hat{B}_{0,i}\) on the
\(\hat{Y}_i\)-end.
Fix \(r\geq r_0\) and \(L_i\geq 1\). Let \((B, \Phi)\) denote
the restriction of \((A_r, \Psi_r)\) to \(Y_{i:L_i}\subset X\) and
let \((B', \Phi ')\) be the representative of the gauge equivalence
class \([(B, \Phi )]\) in the normalized Coulomb gauge. Let
\(\hat{\grp}| (B, \Phi ) \) denote the restriction of \(\hat{\grp}(A,
\Psi )\) to \(Y_{i:L_i}\subset X\).
Then
there exist positive constants \(\zeta', \zeta _0, \zeta _w, \zeta '_\grp\) such that
\begin{equation}\label{eq:CSD-bdd}
\begin{split}
{\rm (1)} & \, \|(B'-B_{0,i}, \Phi)\|_{L^2_{1/2,B}(Y_{i:L_i})}^2 \leq \zeta' r\, (\textsc{e} +\ln r);\\
{\rm (2)} &\, |\operatorname{CSD}^{Y_i}_0(B', \Phi')| \leq \zeta_0 \, r\, (\textsc{e} +\ln r);\\
& \, |\operatorname{CSD}^{Y_i}_{w_r}(B', \Phi')| =|\operatorname{CSD}^{Y_i}_{w_r}(B, \Phi)|
\leq \zeta_w \, r\, (\textsc{e} +\ln r);\\
{\rm (3)} & \, |\, f_{\hat{\grp}|} (B, \Phi)\, | \leq \zeta '_\grp
\, r\, (\textsc{e} +\ln r).
\end{split}\end{equation}
The positive
constants \(\zeta', \zeta _0, \zeta _w, \zeta '_\grp\) depend only on the parameters listed in
(\ref{parameters}), together with \(B_{0,i}\) and the constants \(\textsl{z}_i\) in
Lemma \ref{lem:F-L_1}. As before, the factors of
\(\ln r\) in (\ref{eq:CSD-bdd}) can be replaced by \(\grt(r)\)
if the assumption of Lemma \ref{lem:Etop-bddf}
holds.
\end{lemma}
\noindent{\it Proof.}\enspace
{\bf (1)}: Let \((A'_r, \Psi '_r)=u_r\cdot (A_r, \Psi _r)\) be in
the normalized Coulomb-Neumann gauge on \(\hat{Y}_{i,
[L_i-1, L_i]}\), with respect to the choices specified in Remark
\ref{rem:CNgauge}. As remarked above, with such choices, \((A'_r, \Psi
'_r)|_{Y_{i:L_i}}=(B', \Phi ')\). Note that \(\|(B'-B_{0,i}, \Phi)\|_{L^2_{1/2, B}(Y_{i:
L_i})}\leq \zeta_b \|(A_r'-A_0, \Psi'_r)\|_{L^2_{1, A}(\hat{Y}_{i,
[L_i-1, L_i]})}\), then appeal to Proposition \ref{est-L^2_1}.
{\bf (2)}:
This is a
direct consequence of {\bf (1)} above.
{\bf (3)}: Recall the proof of Lemma \ref{lem:f_q} and the notation therein. From the first
inequality in (\ref{ineq:f_q}), (\ref{eq:q-bdd}) and {\bf (1)} above, we have:
\[ \begin{split}
|f_{\hat{\grp}|}(B, \Phi )|& \leq \zeta _\grp\|(\delta B, \Phi )\|_{L^2}(\|\Phi_t\|_{L^2}+1)\quad \text{for a certain \(t\in [0,1]\)}\\
& \leq \zeta '_\grp \, r\, (\textsc{e} +\ln r),
\end{split}\]
where \(\delta B:=B'-B_{0,i}\). \hfill$\Box$\medbreak
\begin{lemma}\label{co:E-omega-bdd3}
Adopt the assumptions and notation of Lemma \ref{lem:Etop-bdd1}.
Let \(r_0\),
\(\grt(r)\) be as in Lemma \ref{lem:Etop-bddf}. Fix \(r\geq r_0\).
Then there are positive constants \(\zeta\), \(\zeta _h\) independent of \(r\)
and \(X_\bullet\) such that
\begin{equation}\label{eq:E-omega-bdd3}
\Big| \int_{X_{\bullet}}i F_{A_r}\wedge \omega \Big| \leq
\zeta _h\,(\textsc{e} +\ln r)+\zeta \,|X_{\bullet,m}| \quad \text{for compact
\(X_\bullet\subset X\).}
\end{equation}
The constants \(\zeta , \zeta_h\) above only depend on the parameters listed in (\ref{parameters}).
Moreover, if the assumption (\ref{assume:t(r)}) in Lemma \ref{lem:Etop-bddf} holds, then
the statement above holds with
the factor of \(\ln r \) replaced by \(\grt(r)\).
\end{lemma}
\noindent{\it Proof.}\enspace
Write
\begin{equation}\label{eq:F-omega}
\begin{split}
\int_{X_{\bullet}}i F_A\wedge \omega &= i\int_{X_{\bullet}}
F_{A}\wedge*\nu +i\int_{X_{\bullet}}
F_{A}\wedge\nu \\
& =i \int_{X_\bullet}
F_{A_0}\wedge*\nu
+
4r^{-1}\scrE_{top}^{'\mu_r}(X_{\bullet})(A,
\Psi)-4r^{-1}\scrE_{top}^{w_r}(X_{\bullet})(A, \Psi)\\
& =i \int_{X_\bullet}
F_{A_0}\wedge*\nu
+
4r^{-1}\scrE_{top}^{'\mu_r, \hat{\grp}}(X_{\bullet})(A,
\Psi)-4r^{-1}\scrE_{top}^{w_r, \hat{\grp}}(X_{\bullet})(A, \Psi).\\
\end{split}
\end{equation}
Consider the second equality of (\ref{eq:F-omega}) in the case of
(\ref{assume:EtopX-ubdd}) (a), and the third equality in the case of
(\ref{assume:EtopX-ubdd}) (b).
We bound the right hand side of these formulas term by term. The
first term on the right hand side in both cases is
bounded via (\ref{bdd(3)}) as:
\[
\Big| i \int_{X_\bullet}
F_{A_0}\wedge*\nu \Big| \leq \zeta _1\, |X_{\bullet,m}|+\zeta _1'.
\]
For the second term, use (\ref{eq:E_top-M1})
to obtain:
\[\begin{cases}
\big| 4r^{-1}\scrE_{top}^{'\mu_r}(X_{\bullet})(A,
\Psi)\big|\leq 4 \textsc{e} +\zeta _2\ln r &\text{assuming
(\ref{assume:EtopX-ubdd}) (a)};\\
\big| 4r^{-1}\scrE_{top}^{'\mu_r, \hat{\grp}}(X_{\bullet})(A,
\Psi)\big|\leq 4 \textsc{e} +\zeta '_2\ln r &\text{assuming
(\ref{assume:EtopX-ubdd}) (b)}.\\
\end{cases}
\]
To bound the last term,
note that by our
assumptions on \(A_0\) and \(w_r\),
\[
\begin{split}
\scrE_{top}^{w_r}(X_{\bullet}) (A, \Psi) & = \frac{1}{4}\int_{X_{\bullet}}F_{A_0}\wedge
(F_{A_0}+iw_r)-2\, \operatorname{CSD}^{\partial X_\bullet}_{w_r}(B,
\Phi)\\
&= \frac{1}{4}\int_{X_{\bullet}\cap X_c}F_{A_0}\wedge
(F_{A_0}+iw_r)-2\, \operatorname{CSD}^{\partial X_\bullet}_{w_r}(B,
\Phi);\\
\scrE_{top}^{w_r, \hat{\grp}}(X_{\bullet}) (A, \Psi) &= \frac{1}{4}\int_{X_{\bullet}\cap X_c}F_{A_0}\wedge
(F_{A_0}+iw_r)-2\, \operatorname{CSD}^{\partial X_\bullet}_{w_r,
\hat{\grp}|}(B,
\Phi).
\end{split}
\]
Combined with (\ref{eq:CSD-bdd}) (2) and (3), this gives
\begin{equation}\label{eq:E_w}\begin{split}
4r^{-1}\big| \scrE_{top}^{w_r}(X_{\bullet}) (A, \Psi) \big|
&\leq \zeta_3 \, (\textsc{e}+\ln r)\qquad \text{assuming
(\ref{assume:EtopX-ubdd}) (a)};\\
4r^{-1}\big| \scrE_{top}^{w_r, \hat{\grp}}(X_{\bullet}) (A, \Psi) \big|
&\leq \zeta'_3 \, (\textsc{e}+\ln r)\qquad \text{assuming
(\ref{assume:EtopX-ubdd}) (b)}.
\end{split}
\end{equation}
Plugging all these back to (\ref{eq:F-omega}), we have the asserted
inequality (\ref{eq:E-omega-bdd3}).
The assertion regarding the general case assuming the
condition of Lemma \ref{lem:Etop-bddf} follows from the same argument, with
the role of Lemma \ref{lem:Etop-bdd1} above played by Lemma
\ref{lem:Etop-bddf} instead.
\hfill$\Box$\medbreak
The next lemma is an analog of \cite{Ts}'s Lemma 3.1. Let \(\psi:=(r/2)^{-1/2}\Psi\).
\begin{lemma}\label{T:lem3.1}
Adopt the assumptions and notation of Lemma \ref{lem:Etop-bdd1}
and let \(X_\bullet\subset X\) be arbitrary. Then
there exists a positive constant \(\zeta\) independent of \(r\) and
\(\bullet\), such that
\begin{gather*}
r \int_{X_{\bullet}}(|\nu|-|\psi|^2)^2\leq
\zeta ( \textsc{e} +\ln r+|X_{\bullet,m}|).
\end{gather*}
If in addition, \(X_\bullet\subset X''\), then
\[
r\int_{X_{\bullet}}|\nu |\Big||\nu |-|\psi|^2\Big|\leq \zeta' (\textsc{e} +\ln r+|X_\bullet|)
\]
for a positive constant \(\zeta'\) independent of \(r\) and
\(\bullet\). The constants \(\zeta , \zeta '\) above only depend on the parameters listed in (\ref{parameters}).
Moreover, the factor of
\(\ln r\) above can be replaced by \(\grt(r)\)
if the assumption of Lemma \ref{lem:Etop-bddf}
holds.
\end{lemma}
\noindent{\it Proof.}\enspace
Using the first line of the Seiberg-Witten equation, the fact that \(\omega\) is
self-dual, our assumption on \(\mu_r\), and (\ref{eq:E-omega-bdd3}), one has:
\begin{equation}\label{T:3.3}\begin{split}
\frac{r}{2}\int_{X_{\bullet}}|\nu |(|\nu|-|\psi|^2-\zeta'
r^{-1})& \leq \int _{X_\bullet}i F_A\wedge\omega \\
& \leq \zeta_h (\textsc{e}+\ln r)+\zeta\, |X_{\bullet,m}|.
\end{split}
\end{equation}
Meanwhile, arguing as in Equation (3.4) of \cite{Ts}, one has:
\begin{equation}\label{T:eq3.4}\begin{split}
2^{-1} d^*d |\psi|^2+|\nabla_A\psi|^2+4^{-1}
r|\psi|^2(|\psi|^2-|\nu |)\leq \zeta_2 |\psi|^2.
\end{split}\end{equation}
Hence, with \(u:=|\psi|^2-|\nu|\),
\[
2^{-1} d^*d |\psi|^2+4^{-1}r|\nu| u+4^{-1} ru^2\leq \zeta_2 |\psi|^2.
\]
Integrating this and using (\ref{T:3.3}) and Proposition \ref{est-L^2_1}, one has
\[\begin{split}
r \int_{X_{\bullet}} u^2 &\leq
\zeta_4\, (|X_{\bullet,m}|+\ln r )+2\Big|\int_{\partial X_\bullet} \partial_s |\psi|^2\Big|.
\end{split}\]
Meanwhile, using the Seiberg-Witten equation and item (1) of
(\ref{eq:CSD-bdd}),
\[\Big|\int_{\partial X_\bullet} \partial_s |\psi|^2\Big|\leq
\zeta_4\, \|\psi\|_{L^2_{1/2}(\partial X_\bullet)}^2\leq \zeta_5\, \ln
r.\]
These together imply the first inequality asserted in the lemma.
To derive the second inequality in the lemma, follow Taubes's argument in
\cite{Ts}. The harmonicity of \(\nu \) implies that over \(X''\),
\begin{equation}\label{eq:omega-ineq}
-\zeta |\nu|\leq d^*d|\nu|+|\nu|^{-1}|\nabla\nu |^2\leq
\zeta |\nu|,
\end{equation}
and the analog of Equation (3.9) in \cite{Ts} reads:
\[
\begin{split}
r \int_{X_{\bullet}} |\nu| u_+ &\leq \zeta \int_{X_{\bullet}}
(|\nu|^{-1} |\nabla\nu|^2+\zeta )+\zeta'\Big|\int_{\partial X_{\bullet}}
\partial_s u\Big|\\
& \leq \zeta''\Big(|X_\bullet|+\Big|\int_{\partial
X_\bullet} \partial_s |\psi|^2\Big|+\Big|\int_{\partial
X_\bullet} \partial_s |\nu|\Big|\Big)\\
&\leq \zeta_3\, (|X_\bullet|+ \ln r),
\end{split}
\]
where \(u_+:=\max \, (u, 0)\). Combine this with (\ref{T:3.3}).
The assertion regarding the general case assuming the
condition of Lemma \ref{lem:Etop-bddf} follows from the same
argument.
\hfill$\Box$\medbreak
\section{A priori pointwise estimates}\label{sec:pt-est}
This section consists largely of refinements and extensions of the
pointwise estimates in Section 3 of \cite{Ts}, which are in turn based
on Section I.2 in \cite{T}. Familiarity with
these references is assumed. We begin with some preliminaries.
First, note that by assumption, there exist positive constants \(\textsl{z}_v\geq 1\),
\(\textsl{z}_v '\geq 1\) that depend only on \(\nu \), such that
\begin{equation}\label{eq:v-sig}
\begin{split}
(\textsl{z}'_{v})^{-1} \leq & \inf_{x\in X''\cap \nu^{-1}(0)}|\nabla \nu|\leq \sup_{x\in X''\cap \nu^{-1}(0)}
|\nabla\nu|\leq \textsl{z}'_v; \\
& \textsl{z}^{-1}_v \tilde{\sigma} \leq |\nu|\leq
\textsl{z}_v \, \tilde{\sigma}, \quad \text{over \(X''\),}
\end{split}
\end{equation}
where \(\tilde{\sigma}\) is a function on \(X''\) defined as follows:
Suppose that \(\nu^{-1}(0)\neq\emptyset\). Let \(\sigma(\cdot)\) denote the distance function to
\(\nu ^{-1}(0)\) on \(X''\), and set
\[
\tilde{\sigma}:=\chi(\sigma)\, \sigma+(1-\chi(\sigma)).
\]
When \(\nu ^{-1}(0)=\emptyset\), let \(\sigma =\infty\) and
\(\tilde{\sigma }=1\).
Let \(\gamma_a\) be a smooth
cutoff function on \(X\) that equals \(1\) on \(X^{'a}\) and agrees
with \(\chi (\mathfrak{s}_j-a+1)\) over each vanishing end \(j\in
\grY_v\).
Recall also that \(Z^{'a}:=X^{'a}\, \cap \nu^{-1}(0)\). Given
\(\delta>0\),
\[\text{\(X_\delta^{'a}:=\{x\, |\, x\in X^{'a}, \sigma (x)\geq\delta\}\), and \(Z_\delta^{'a}:=X^{'a}-X^{'a}_\delta\).}
\]
In the case when \(a=0\) (resp. \(a=10\)), the spaces \(Z^{'a}\),
\(Z_\delta^{'a}\), \(X_\delta^{'a}\) introduced above are
alternatively denoted by \(Z'\),
\(Z'_\delta\), \(X'_\delta\) (resp. \(Z''\),
\(Z_\delta''\), \(X_\delta''\)). We use \(\scrX^a_\delta \subset
X^{'a}_\delta \) to denote a manifold with smooth boundary obtained by
``rounding corners''. In particular, \(\partial \scrX^a_\delta \) is smooth
and \(\mathop{\mathrm{dist}\, }\nolimits (\partial\scrX^a_\delta , \partial X^{'a}_\delta )\ll
2^{-8}\). Let
\(\gamma _{a,\delta }\) be a smooth cutoff function on \(X\) that is
supported on \(\scrX^{a+1}_{\delta }\) and equals 1 on
\(X^{'a}_{2\delta }\), such that \(\|\gamma _{a, \delta }-\gamma _a \,
(1-\chi (\sigma /\delta ))\|_{C^2}\ll 2^{-8}\).
\begin{remarks}
(1) As in \cite{Ts}, the pointwise estimates provided in this section
typically
hold over domains of the form \(X_\delta \) (or more generally,
\(X^{'a}_\delta \)). They are \(\delta\)-dependent, and constants appearing
the relevant inequalities usually depend on \(\delta\). In \cite{Ts}, the dependence
of the constants on \(\delta\) are left unspecified. For our
purpose, this dependence is important and therefore shall be made explicit
below. As mentioned in Section \ref{sec:convention}, in what follows the
notation \(\zeta\) and its decorated variants such as \(\zeta '\),
\(\zeta _i\), are reserved for constants
independent of \(\delta\) (and also independent of \(r\) and \((A_r, \Psi _r)\)).
(2) Typically, we improve the pointwise estimates in Section 3 of \cite{Ts} by replacing
factors of constants \(\delta^{-1}\) therein by the function
\(\tilde{\sigma}^{-1}\leq \delta^{-1}\). This is often made possible with the
help of the following observation: Given \(\rmm>0\), there are
constants \(\zeta _\rmm\), \(\zeta _\rmm'\) depending only on \(\nu \), the
metric, and \(\rmm\), such that
\begin{equation}\label{ts-loBdd}
d^*d (\tilde{\sigma}^{-k})+\frac{r|\nu|}{\rmm}(\tilde{\sigma}^{-k})\geq\zeta _\rmm\,
(-\zeta'_\rmm\, \tilde{\sigma}^{-k-2}+r\tilde{\sigma}^{-k+1} )>\zeta _\rmm\, r
\tilde{\sigma}^{-k+1}/2
\end{equation}
over \(X''_\delta \) when \(r\delta^3>\zeta '_\rmm\). (In what follows, \(\rmm\) is typically
taken to be \(2^k\), \(k=1, 2, 3, 4\).) This enables one to replace
\(\delta^{-k}\) by \(\tilde{\sigma}^{-k}\) as comparison functions in various
comparison principle arguments.
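To indicate where (\ref{ts-loBdd}) comes from: by the chain rule, for \(k>0\),
\[
d^*d (\tilde{\sigma}^{-k})=-k\, \tilde{\sigma}^{-k-1}\, d^*d\tilde{\sigma}
-k(k+1)\, \tilde{\sigma}^{-k-2}\, |d\tilde{\sigma}|^2
\geq -\zeta '_\rmm\, \tilde{\sigma}^{-k-2},
\]
where the last step uses the facts (implicit in the construction of
\(\tilde{\sigma}\)) that \(\tilde{\sigma}\leq 1\) and that \(|d\tilde{\sigma}|\),
\(|d^*d\tilde{\sigma}|\) are bounded. Combined with a lower bound of the form
\(|\nu |\geq \zeta \, \tilde{\sigma}\) on \(X''\), this gives the first
inequality in (\ref{ts-loBdd}); the second follows since \(\zeta '_\rmm\,
\tilde{\sigma}^{-k-2}\leq 2^{-1}\, r\tilde{\sigma}^{-k+1}\) once
\(r\tilde{\sigma}^3\geq r\delta ^3>\zeta '_\rmm\), after adjusting constants.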
\end{remarks}
The pointwise estimates in this section are made simpler thanks to the
next lemma, which motivated the introduction of Item (5) in Assumption \ref{assume}.
\begin{lemma}
Let \((X, \nu )\) be an admissible pair, and let \(w_r\)
and \(\hat{\grp}_r\) be \(r\)-parametrized families of
closed 2-forms and of nonlocal
perturbations on \(X\), respectively, satisfying Assumption
\ref{assume}. Let \(\mu
_r=r\nu+w_r\) as before, and let \((A, \Psi)=(A_r, \Psi_r)\) be as in Lemma
\ref{lem:Etop-bdd1}. Then for any given \(\textsc{e}>0\), there is a constant
\(r_\textsc{e}\geq 8\) such that
\begin{equation}\label{eq:nonlocalptwise}
\|\hat{\grp}_r\, (A, \Psi)\|_{C^k_{A}(X)}\leq \zeta _\grp
\end{equation}
for all \(r\geq r_\textsc{e}\) and admissible solutions \((A, \Psi )=(A_r,
\Psi _r)\) to
\(\grS_{\mu _r, \hat{\grp}_r}(A, \Psi )=0\) satisfying the energy
bound (\ref{assume:EtopX-ubdd}).
\end{lemma}
\noindent{\it Proof.}\enspace Invoke Assumption \ref{assume} (5). If \(X\) is cylindrical, then
\(\grY_v=\emptyset\) by our assumption on \(X\), and so
\(\hat{\grp}_r=\hat{\grq}_r\equiv 0\). When \(X\) is non-cylindrical, \(\hat{\grp}(A, \Psi )\) is
supported on \(\bigcup_{i\in \grY_v}\hat{Y}_{i, [\grl_i, \grl_i']}\),
so it suffices to examine it on each \(\hat{Y}_{i, [\grl_i,
\grl_i']}\). In the present setting, \(\upsilon (r)=\zeta
r\). Meanwhile, applying Proposition \ref{prop:SW-L2-bdd} to \(X_\bullet=\hat{Y}_{i, [\grl_i,
\grl_i']}\), we see that there is an \(r_\textsc{e}\geq \max (2^8 \textsc{e}, 8)\) such
that Conditions (i) and (ii) in Assumption \ref{assume}
(5) hold for \((A, \Psi )=(A_r, \Psi _r)\) when \(r\geq
r_\textsc{e}\). The assertion (\ref{eq:nonlocalptwise}) follows directly
from Assumption \ref{assume} (5) and the properties of
\(\hat{\grp}_r\) and \(w_r\) prescribed in items (4) and (1) in
Assumption \ref{assume}.
\hfill$\Box$\medbreak
We shall apply the preceding lemma to those \((A_r, \Psi _r)\) from the statement of Theorem
\ref{thm:l-conv}, with the constant \(\textsc{e}\) taken to be that
given by (\ref{eq:CSD-est}).
The value of each occurrence of \(r_0\)
in the rest of this article will be taken to be larger or equal to all
its predecessors and the version of
\(r_\textsc{e}\) corresponding to this value of \(\textsc{e}\).
Throughout the rest of this section, we tacitly invoke
the bound (\ref{eq:nonlocalptwise}) to omit terms arising from the
nonlocal perturbation \(\hat{\grp}\) by adjusting the coefficients in
the inequalities.
Write \(\Psi=(r/2)^{1/2}\psi\), and write \(\psi=(\alpha, \beta)\) with
respect to the decomposition \(\bbS^+\simeq E\oplus E\otimes
K^{-1}\) over \(X-\nu^{-1}(0)\). Throughout this section, \((A,
\Psi)=(A_r, \Psi_r)\) is an admissible solution to the Seiberg-Witten equation
\(\grS_{\mu_r, \hat{\grp}}(A_r, \Psi_r)=0\) satisfying the assumptions
of Lemma \ref{lem:Etop-bdd1}.
\subsection{Estimates for \(|\psi|^2\)}\label{assume6}
With our assumptions on \(\nu\), \(w_r\), and \(\hat{\grp}\), an
\(L^\infty\)-estimate on \(\psi\) over \(X\) may be established easily.
\begin{lemma}\label{lem:4dPsi:l-infty}
Let \((A, \Psi)\), \(\psi\) be as described immediately preceding this
subsection. Then
\begin{equation}\label{psi-infty}
\|\psi\|_{L^\infty(X)}\leq \zeta_\infty \quad \text{over \(X\)},
\end{equation}
where \(\zeta_\infty\) is a positive constant depending only on \(\sup_X
R_g\), \(\sup_X |\nu|\), and the constants \(\varsigma_w\),
\(\zeta _\grp\) from Assumption \ref{assume}.
\end{lemma}
\noindent{\it Proof.}\enspace
We argue as in the
proof of the Morse-end case of Lemma
\ref{lem:3d-Phi}.
By the first line of the Seiberg-Witten equation
\(\grS_{\mu_r,\hat{\grp}}(A, \Psi)=0\), one has \[\langle \psi,
\bar{\mbox{\(\partial\mkern-7.5mu\mbox{/}\mkern 2mu\)}}^-_A\bar{\mbox{\(\partial\mkern-7.5mu\mbox{/}\mkern 2mu\)}}^+_A\psi\rangle=-r^{-1/2}\langle \psi ,
\hat{\grp}(A, \Psi )\rangle.\]
It then follows from the Weitzenb\"ock formula, the
second line of the Seiberg-Witten equation and (\ref{eq:nonlocalptwise}) that
\begin{equation}\label{ineq:psi}\begin{split}
\frac{1}{2}d^*d|\psi|^2 & +|\nabla_A\psi|^2+\frac{r}{4}
|\psi|^2(|\psi|^2-|\nu+r^{-1}w_r|)+\frac{R_g}{4}|\psi|^2\\ & \leq
\zeta_1|\psi|^2+\zeta_2\, r^{-1}
\end{split}
\end{equation}
where \(\zeta _1\), \(\zeta _2\) are positive constants depending only
on \(\zeta _\grp\).
The smooth function \(|\psi|^2\) either attains a maximum at a certain
point \(x_M\in X\), or it is
bounded by \(r^{-1}\|\Phi_i\|^2_{L^\infty(Y_i)}\) for a certain \(i\), where \((B_i,
\Phi_i)\) is the \(Y_i\)-end limit of \((A, \Psi)\). In the former
case, consider the previous inequality at \(x_M\) and rearrange to
get
\[
|\psi(x_M)|^2\big(|\psi( x_M)|^2-|\nu(x_M)|\big)\leq
\zeta_3 |\psi(x_M)|^2+\zeta_4,
\]
where \(\zeta _3\), \(\zeta _4\) are positive constants depending only
on \(\zeta _\grp\), \(\varsigma_w\), and \(\sup_X
R_g\).
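The final step below rests on the following elementary observation: if \(t\geq
0\) satisfies \(t(t-a)\leq bt+c\) with \(a, b, c\geq 0\), then completing the
square gives
\[
t\leq \tfrac{1}{2}\big(a+b+\sqrt{(a+b)^2+4c}\, \big)\leq a+b+\sqrt{c};
\]
this is applied here with \(t=|\psi(x_M)|^2\).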
Hence, \(|\psi(x_M)|^2\leq \zeta_5\) for a positive constant \(\zeta
_5\) depending only
on \(\zeta _\grp\), \(\varsigma_w\), \(\sup_X
R_g\), and \(\sup_X |\nu|\). In the latter case, invoke
Lemma \ref{lem:3d-Phi}. Either way, Equation (\ref{psi-infty}) holds.
\hfill$\Box$\medbreak
Over \(X^{'9}\), a better pointwise bound on \(|\psi|^2\) may be obtained from the
\(L^\infty\) bound in Lemma \ref{lem:4dPsi:l-infty}.
\begin{prop}\label{T:lem3.2}
There is a constant \(\zeta\) depending only on
the metric, \(\nu \), \(\zeta
_\grp\), and \(\varsigma_w\),
such that over \(X^{'a}\), \(0\leq a\leq 9\),
\begin{eqnarray}
(a) & |\psi|^2 &\leq |\nu|+\zeta r^{-1/3};\\
(b) & |\psi|^2 &\leq |\nu|+\zeta r^{-1}(\sigma^{-2}+1). \label{ineq:psi^2}
\end{eqnarray}
\end{prop}
\noindent{\it Proof.}\enspace This is an analog of Lemma 3.2 of \cite{Ts}\footnote{Equation (3.11)
of \cite{Ts} contains some errors/typos which are easy to
fix.}.
From Equation (\ref{ineq:psi}) and Lemma \ref{lem:4dPsi:l-infty} we have
\begin{equation}\label{u-ineq1}
\frac{1}{2}d^*d|\psi|^2+|\nabla_A\psi|^2+\frac{r}{4}
|\psi|^2(|\psi|^2-|\nu|)\leq \zeta_0,
\end{equation}
where the constant \(\zeta _0>0\) depends on the parameters listed in
the previous lemma.
Let \(\mathfrc{v}:=\gamma_a |\nu|+(1-\gamma_a)(\zeta_\infty^2+1)\),
\(\zeta_\infty\) being the constant from the
previous lemma. Then by (\ref{eq:omega-ineq})
\[
-d^*d \mathfrc{v}\leq
\zeta_1\gamma_a|\nu|^{-1}|\nabla\nu|^2+\zeta_2\gamma_{a+1},
\]
where \(\zeta _1\), \(\zeta _2\) depend only on \(\nu \) and \(\zeta
_\infty\).
Combine the above two inequalities, and setting
\(u:=|\psi|^2-\mathfrc{v}\), we have the following analog of (3.12) in
\cite{Ts}:
\begin{equation}\label{ineq:u}
2^{-1}d^*du+4^{-1} r|\nu| u\leq \zeta'(\gamma_a\, \tilde{\sigma}^{-1}+\gamma_{a+1}),
\end{equation}
where \(\zeta '\) depends only on the metric, \(\nu \), \(\zeta
_\grp\), and \(\varsigma_w\).
Set \(\chi_R:=\chi(r^{1/3}\sigma/R)\), and argue similarly to
(3.14)-(3.15) of \cite{Ts}.
We may find positive constants \(\zeta_3,
\zeta_4\) such that with \begin{equation}
\hat{u}:=u+\zeta_3\chi_1\sigma-\zeta_4\,
r^{-1/3}\quad \text{on \(X''\)},
\end{equation}
one has
\[
2^{-1} d^*d\hat{u}+4^{-1}r|\nu|\hat{u}\leq 0\quad \text{on \(X''\)}.
\]
If \(\hat{u}\) has a maximum in the interior of \(X''\), then the
above inequality implies that \(\hat{u}<0\) at this maximum. Otherwise,
\(\sup_{x\in X''}\hat{u}(x)\) appears in \(\partial X''\) or as a
value of
\(\hat{u}_i:=r^{-1}|\Phi_i|^2-|\nu_i|+\zeta_3\chi_1\sigma-\zeta_4\,
r^{-1/3}\) for one of the Morse ends \(\hat{Y}_i\). In the first case,
\(\hat{u}\leq 0\) by our choice of \(\mathfrc{v}\), \(\zeta_3\), and
\(\zeta_4\). In the second case, this value is also nonpositive by
(\ref{eq:Phi-ptws}). (Adjust the values of \(\zeta_3, \zeta_4\) if necessary).
The first inequality of the proposition now follows.
For the second inequality claimed, argue as in \cite{Ts} using
(\ref{ineq:u}) to find positive constants \(\zeta_5, \zeta_6\)
so that for \(\check{u}:=u-\zeta_6\, r^{-1}(\tilde{\sigma}^{-2}+1)\), \[2^{-1}
d^*d\check{u}+4^{-1}r|\nu|\check{u}\leq 0
\]
over the region \(\{x\, |\, \sigma(x)\geq
\zeta_5\, r^{-1/3}, \, x\in X''\}\). Now apply the maximum principle type
arguments
over this region as in the proof of the first inequality, using the first
inequality to ensure that \(\check{u}\leq 0\) on the boundary of this region.
When \(\nu^{-1}(0)=\emptyset\), simply set
\(\mathfrc{v}=|\nu|\). Then \(-d^*d\mathfrc{v}\leq \zeta\) in this
case. The argument above may then be simplified by dropping all terms
involving \(\gamma_a\) or \(\tilde{\sigma}\), \(\sigma\). In this case
\(|\psi|^2\leq |\nu|+\zeta r^{-1}\) over \(X\).
\hfill$\Box$\medbreak
\subsection{Estimates for \(|\beta|^2\)}
Coming up next is an analog of Proposition 3.1 of \cite{Ts}.
\begin{prop}\label{T:prop3.1-}
There exist positive constants \(\textsc{o}\geq 8\), \(c\), \(c'\),
\( \zeta_0, \zeta '_0\geq 1\) that depend only on
the metric, \(\nu \), \(\varsigma_w\), and \(\zeta _\grp\),
such that the following hold: Suppose \(r>1\) and \(\delta >0\) are such that
\(\delta\geq\textsc{o}r^{-1/3}\). Then
\begin{equation}\label{ineq:beta0}
\begin{split}
|\beta|^2& \leq 2 c \, \tilde{\sigma}^{-3}r^{-1} (
|\nu |-|\alpha|^2)+\zeta _0\, \tilde{\sigma}^{-5} r^{-2};\\
|\beta|^2& \leq 2 c' \tilde{\sigma}^{-3}r^{-1} (
|\nu |-|\psi|^2)+\zeta '_0\, \tilde{\sigma}^{-5} r^{-2}
\end{split}
\end{equation}
on \(X^{'a}_\delta\), \(0\leq a\leq \frac{25}{3}\).
\end{prop}
\noindent{\it Proof.}\enspace
Proceed as in \cite{Ts} to get the following analog of (3.19) of
\cite{Ts}.\footnote{There is a sign error in (3.19) of \cite{Ts}.}
\begin{eqnarray}
& 2^{-1} d^*d |\beta|^2 &+|\nabla_A\beta|^2+4^{-1} r|\nu|\,
|\beta|^2+ 4^{-1} r (|\alpha|^2\, |\beta|^2+|\beta|^4)\nonumber\\
&& \leq (\zeta +\zeta_0b^2)|\beta|^2 +\zeta_1
\big(|b|\, |\nabla_A\alpha|+ \zeta _1
\big)|\beta|,
\label{ineq:beta}\\
& 2^{-1} d^*d |\alpha|^2 &+|\nabla_A\alpha|^2-4^{-1} r|\nu|\,
|\alpha|^2+ 4^{-1} r (|\alpha|^2\, |\beta|^2+|\alpha|^4)\nonumber\\
&& \leq \zeta'|\alpha|^2 +\zeta_2
\big(|\nabla_A(b\beta)|+\zeta _2
\big)|\alpha|
\label{ineq:alpha}
\end{eqnarray}
on \(X''\), where \(b\) arises from
\(\nabla J\) and can be bounded by
\(|b|\leq\zeta_0\, \tilde{\sigma}^{-1}\), \(|\nabla b|\leq
\zeta'_0\, \tilde{\sigma}^{-2}\) on \(X''\). In the above, the positive constants,
\(\zeta _i, \zeta _i'\), \(i=0,1,2\) depend only on
the metric, \(\nu \), \(\varsigma_w\), and \(\zeta _\grp\).
Judicious use of the
triangle inequality shows that there exist constants \(\epsilon
_1<1\), \(c\geq 1\), \(\zeta _1\), \(\zeta _1'\) depending only on
the metric, \(\nu \), \(\varsigma_w\), and \(\zeta _\grp\), such that
when \(r^{-1}\delta^{-3}<\epsilon _1\), the inequalities
(\ref{3.20}),
(\ref{3.21}) below hold over \(X''_{\delta/3}\):
\begin{equation}\label{3.20}\begin{split}
2^{-1} d^*d |\beta|^2 &+|\nabla_A\beta|^2+8^{-1} r|\nu|\,
|\beta|^2+ 4^{-1} r (|\alpha|^2\, |\beta|^2+|\beta|^4)\\
& \leq c\, (r^{-1}\tilde{\sigma}^{-3} |\nabla_A \alpha |^2+r^{-1}\tilde{\sigma}^{-1})
\end{split}\end{equation}
Set \(\varpi:=|\nu|-|\alpha|^2\). Combine (\ref{ineq:alpha}) and
(\ref{eq:omega-ineq}) and use Lemma \ref{lem:4dPsi:l-infty}
to get
\begin{equation}\label{3.21}\begin{split}
2^{-1}d^*d(-\varpi) +|\nabla_A\alpha|^2+8^{-1} &
r|\nu|\,(-\varpi)+4^{-1} r\varpi^2\\
&\leq \zeta_1|\nabla_A\beta|^2+\zeta_1' \tilde{\sigma}^{-1} \quad \text{over
\(X^{'a}_{\delta/3}\), \(0\leq a\leq 9\).}
\end{split}\end{equation}
(To get the last term above, we used Proposition \ref{T:lem3.2}
to bound
\(\tilde{\sigma}^{-2}|\alpha|^2\leq \zeta_3\, \tilde{\sigma}^{-1}\) and invoked
\(r^{-1}\delta^{-3}<\epsilon _1\) to simplify terms.)
A combination of the previous two inequalities then yields that when
\(r^{-1}\delta^{-3}<\epsilon _2:=\min\, (\epsilon _1, (4c\zeta
_1)^{-1})\),
\begin{equation}\label{de:beta}
2^{-1}|\nabla_A\beta|^2+cr^{-1}\delta ^{-3} |\nabla_A \alpha |^2+2^{-1}d^*du_1+8^{-1} r|\nu|u_1\leq 0,\quad \text{over
\(X^{'a}_{\delta/3}\), \(0\leq a\leq 9\)}
\end{equation}
for a suitable constant \(\zeta \geq 1\) and
\begin{equation}\label{def:u_1}
u_1:=|\beta|^2-2cr^{-1}\delta^{-3}\varpi-\zeta r^{-2}\delta^{-5}.
\end{equation}
Fix an \(x\in X^{'a}_\delta\), \(0\leq a\leq \frac{25}{3}\), and use the abbreviation \(\tilde{\sigma}_x=\tilde{\sigma} (x)\) below. Then \(B(x, \tilde{\sigma}_x/3)\subset B(x,
2\tilde{\sigma}_x/3)\subset X_{\delta/3}^{'9}\). Define the function
\(\lambda_x(\cdot):=\chi(3\mathop{\mathrm{dist}\, }\nolimits (x, \cdot)/\tilde{\sigma}_x)\). Then \(\lambda_x
u_1\) is supported on \(B(x, 2\tilde{\sigma}_x/3)\) and satisfies
\[
2^{-1}d^*d(\lambda_xu_1)+8^{-1} r|\nu|(\lambda_xu_1)\leq \xi,
\]
where \(\xi\) is supported
on the shell \(A_{\tilde{\sigma}_x}:=B(x, 2\tilde{\sigma}_x/3)-B(x, \tilde{\sigma}_x/3)\), and is
bounded by
\[
|\xi|\leq \tilde{\sigma}_x^{-2}(\zeta '_3 |\psi |^2+\zeta '_4 r^{-1}\delta ^{-3}+\zeta '_5
r^{-2}\delta ^{-5}).
\]
By Lemma \ref{lem:4dPsi:l-infty}, this means that \(|\xi|\leq \zeta
'_2\tilde{\sigma}_x^{-2}\) when \(r^{-1}\delta ^{-3}<\epsilon _3\) for a certain
\(\epsilon _3\leq \epsilon _2\).
Let \(\zeta>0\) be a
constant so that \(|\nu|\big|_{X_{\tilde{\sigma}_x/3}}\geq 4\zeta\tilde{\sigma}_x\), and
let \(\mu_x\) be the
solution to the equation
\[
2^{-1}d^*d\mu_x+\zeta r\tilde{\sigma}_x \mu_x=|\xi|
\]
on \(B(x, 2\tilde{\sigma}_x/3)\) with Dirichlet boundary condition. Then
\(\lambda_x u_1\leq \mu_x\) by the comparison principle. Meanwhile,
using \(G_r\) to denote the integral kernel of the operator
\(2^{-1}d^*d+\zeta r\tilde{\sigma}_x \) on \(B(x, 2\tilde{\sigma}_x/3)\) with Dirichlet
boundary condition,
\[
\begin{split}
\mu_x(x) &=\int_{B(x, 2\tilde{\sigma}_x/3)}G_r(x, y)|\xi|(y) dy\\ & \leq \zeta'_1\tilde{\sigma}_x^{-2}
\int_{A_{\tilde{\sigma}_x}} \mathop{\mathrm{dist}\, }\nolimits (x, y)^{-2}\exp \big(-\zeta_2 \mathop{\mathrm{dist}\, }\nolimits (x, y) (r\tilde{\sigma}_x)^{1/2}\big) \, dy\\
&\leq \zeta_4 \tilde{\sigma}_x^2\exp \big(-\zeta'_2
(r\tilde{\sigma}_x^3)^{1/2}\big) \leq
\zeta_5 r^{-2}\tilde{\sigma}_x^{-4}
\end{split}
\]
when \(r^{-1}\tilde{\sigma}_x^{-3}\leq r^{-1}\delta ^{-3}<\epsilon _4\leq
\epsilon _3\) for certain constant \(\epsilon _4\).
Combining this with the bound
\(u_1(x)\leq \mu_x(x)\), we have
\begin{equation}\label{ineq:beta3}
|\beta|^2 \leq 2 c \delta ^{-3}r^{-1} (
|\nu |-|\alpha|^2)+\zeta _0\, \delta ^{-5} r^{-2}\quad \text{over \(X^{'a}_\delta \)}
\end{equation}
when \(r^{-1}\delta ^{-3}\leq\epsilon _5\), where
\(\zeta _0\) and \(\epsilon _5\leq \epsilon _4\) are certain constants
which are independent of \(r,
\delta \), and \((A, \Psi )\). Given \(r\), \(\delta _0\) satisfying
\(r^{-1}\delta ^{-3}_0<\epsilon _5\),
fix \(x\in X^{'a}_{2\delta_0}\) and set
\(\delta=\tilde{\sigma}_x/2\) in (\ref{ineq:beta3}). It follows that
\[
|\beta|^2 (x) \leq 2 c \tilde{\sigma}_x^{-3}r^{-1} (
|\nu |-|\alpha|^2)+\zeta _0 \, \tilde{\sigma}^{-5}_x r^{-2}\quad \text{over \(X^{'a}_{2\delta _0}\)}.
\]
This implies the first inequality in (\ref{ineq:beta0}), with
\(\textsc{o}=2\epsilon _5^{-1/3}\).
The second inequality in (\ref{ineq:beta0}) follows from the first: since
\(|\nu |-|\alpha|^2=(|\nu |-|\psi|^2)+|\beta|^2\), the resulting term \(2c\,
\tilde{\sigma}^{-3}r^{-1}|\beta |^2\) may be absorbed into the left hand side
after possibly increasing \(\textsc{o}\), at the cost of doubling the
constants. \hfill$\Box$\medbreak
\subsection{Estimating \(|F_{A}|\)}
We use Lemma \ref{T:lem3.1} to obtain a preliminary bound on
\(|F_A|\) on \(X^{'a}\).
Let \(\tilde{r}\) denote the function \[x\mapsto
\tilde{r}_x:=r\tilde{\sigma} (x)\] on \(X\). Note that \(\frac{1}{2}\ln r\leq \ln
\tilde{r}_x\leq \ln r\)
when \(x\in X_{\delta }\) for \(\delta \leq 1\) and
\(r\geq\delta ^{-2}\).
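Indeed, \(\ln \tilde{r}_x=\ln r+\ln \tilde{\sigma}(x)\); assuming (as in the
construction of \(\tilde{\sigma}\)) that \(\delta \leq \tilde{\sigma}\leq 1\) on
\(X_\delta \), the constraint \(r\geq \delta ^{-2}\) gives \(\ln
\tilde{\sigma}(x)\geq \ln \delta \geq -\tfrac{1}{2}\ln r\), while
\(\tilde{\sigma}\leq 1\) gives \(\ln \tilde{r}_x\leq \ln r\).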
\begin{prop}\label{T:prop3.2}
{\bf (a)} There exist positive constants \(\zeta_2\),
\(\zeta'_2\) that depend only on \(\varsigma_w\) and \(\zeta
_\grp\),
such that
\begin{equation}\label{eq:F+bdd}
|F_A^+| \leq 2^{-3/2}r\, (|\nu|-|\psi|^2)+\zeta _2r
|\beta|^2+\zeta'_2\quad \text{over \(X\). }
\end{equation}
{\bf (b)}
There exist positive constants \(r_0>8\),
\(\textsc{o}\geq 8\), \(\zeta \), \(\zeta '\), \(\zeta_1\), \(\zeta _1'\)
that depend only on the metric, \(\nu \), \(\varsigma_w\) and \(\zeta
_\grp\),
so that the following holds: Suppose \(r>r_0\), and let \(\delta
_0:=\textsc{o}r^{-1/3}\). Then for \(0\leq a\leq 8\),
\begin{equation}\label{bdd:ss}
\begin{split}
\text{(i)} \quad |F_A^-| &\leq
(2^{-3/2}+\varepsilon_0) \, r\, (|\nu|-|\psi|^2) +K_0
\quad \text{over \(X^{'a}_{\delta_0}\)}; \\
\text{(ii)} \quad |F_A^-| &\leq
\zeta _1 r \delta +\zeta _1' \delta^{-2}\ln\, (\delta /\sigma )
\quad \text{over \(Z^{'a}_{\delta}\) for any \(\delta\geq \delta_0\)},\\
\end{split}
\end{equation}
where
\[\begin{split}
\varepsilon_0& :=
\begin{cases}
\big(r^{-4/3}\tilde{\sigma}^{-1} (\ln r+\textsc{e})\big)^{2/7} & \text{when \(\sigma
\geq 3 r^{-1/6} (\ln r+\textsc{e} )^{1/6} \)};\\
r^{-1/6}\, \tilde{\sigma}\, (\ln r+\textsc{e} )^{-1/2}+\zeta '_5\textsc{o}_r^{-3} &\text{otherwise};
\end{cases}\\
K_0 &:= \begin{cases}
\zeta' (r\tilde{\sigma}^{-1})^{5/7}\, (\ln r+\textsc{e} )^{2/7} & \text{when \(\sigma
\geq 3 r^{-1/6} \, (\ln r+\textsc{e} )^{1/6} \)};\\
\zeta '' r^{1/2}\tilde{\sigma}^{-2}(\ln r+\textsc{e} )^{1/2} &\text{otherwise}.
\end{cases}
\end{split}
\]
{\bf (c)}
For \(r>r_0\), there exist positive constants \(\zeta _3\), \(\zeta
_3'\) that depend only on the metric, \(\nu \), \(\varsigma_w\) and \(\zeta
_\grp\),
so that the following holds over \(X^{'a}_{\delta _0}\):
\[\begin{split}
|F_A^-| &\leq\zeta _3 \, r\tilde{\sigma}; \\
|F_A^-| &\leq
(2^{-3/2}+\varepsilon_0)\, r\, (|\nu|-|\psi|^2) +K_1,
\end{split}
\]
where \[
K_1:=\min \, (K_0, \zeta _3 \, r\tilde{\sigma}) \leq
\zeta_3'\, r^{5/6}\, (\ln r +\textsc{e} )^{1/6}.
\]
\end{prop}
\noindent{\it Proof.}\enspace The estimate for \(|F_A^+|\) is a direct consequence of the
Seiberg-Witten equation \(\grS_{\mu_r,\hat{\grp }}(A, \Psi)=0\) and (\ref{eq:nonlocalptwise}).
Let \(\textsc{s}:=|F_A^-|\).
The arguments leading to \cite{T}'s (I.2.19), together with
(\ref{eq:nonlocalptwise}) and (\ref{psi-infty}) give:
\begin{equation}\label{eq:DE-s}
\begin{split}
& \big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) \textsc{s}\\
& \quad \leq z_4\, \textsc{s} +2^{-3/2}r|\nabla_A\psi|^2
+\zeta _1r\tilde{\sigma}^{-2}|\psi|\, |\beta|+\zeta_2r\tilde{\sigma}^{-1}(|\nabla_A\psi|\, |\beta |+|\alpha |\,
|\nabla_A\beta |)+\zeta_0\\
& \quad \leq z_4\, \textsc{s} +2^{-3/2} r|\nabla_A\psi|^2
+\zeta_2r\tilde{\sigma}^{-1}|\nabla_A\psi|\, |\beta| \\
& \qquad \qquad \quad +\zeta_2\, r|\nabla_A\beta |^2+\zeta '_1r\tilde{\sigma}^{-2}|\psi |^2 +\zeta_0\quad \text{over \(X^{'9}\),}\\
\end{split}
\end{equation}
where the constants \(z_4\), \(\zeta _1\), \(\zeta _1'\), \(\zeta _2\)
depend only on the metric and \(\nu \); the constant \(\zeta_0\)
depends on the same parameters, and additionally on \(\varsigma_w\)
and \(\zeta _\grp\).
(This is a refinement of \cite{Ts}'s
(3.32).)
It follows from (\ref{ineq:beta}) that
\[
\begin{split}
& \big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) |\beta|^2 +|\nabla_A\beta|^2+4^{-1} r|\nu|\,
|\beta|^2\\
& \qquad \quad \leq \zeta \tilde{\sigma}^{-2}|\beta|^2
+\zeta'\tilde{\sigma}^{-1}|\nabla_A\alpha|\, |\beta|\qquad
\text{ over \(X''\)},
\end{split}
\]
where \(\zeta \), \(\zeta '\) depend only on the metric, \(\nu \), \(\varsigma_w\)
and \(\zeta _\grp\). So
\begin{equation}\label{ineq:beta2}
\begin{split}
& \big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) (\textsc{s} +\zeta _2 r|\beta
|^2)+ 4^{-1} \zeta _2r^2|\nu|\,|\beta|^2\\
& \quad \leq z_4\, \textsc{s} +2^{-3/2} r|\nabla_A\psi|^2
+\zeta_3r\tilde{\sigma}^{-1}|\nabla_A\psi|\, |\beta| +\zeta '_3r\tilde{\sigma}^{-2}|\psi
|^2 +\zeta _0\\
& \quad \leq z_4 \, \textsc{s} +c_\varepsilon\, r |\nabla_A\psi|^2 +4^{-1}\zeta _3^{2}\varepsilon^{-1}r \tilde{\sigma}^{-2}\,|\beta|^2+\zeta '_3r\tilde{\sigma}^{-2}|\psi
|^2 +\zeta _0\quad \text{over \(X^{'9}\),}\\
\end{split}
\end{equation}
where \(\varepsilon \) is an arbitrary positive number smaller than \(8\), and
\[
c_\varepsilon := 2^{-3/2}+\varepsilon .
\]
In the above inequalities as well as for the
rest of this proof, all constants denoted in the form of \(\zeta _*\),
\(z_*\) depend only on the metric, \(\nu \), \(\varsigma_w\)
and \(\zeta _\grp\); in particular, they are independent of \(\varepsilon \) (as well as \(r\),
\(\delta \), \((A, \Psi )\)).
Let \(\delta _0:=\textsc{o} r^{-1/3}\),
where \(\textsc{o}\) is as in Proposition
\ref{T:prop3.1-}, and let \(\textsc{o}_r:=r^{1/3}\tilde{\sigma}\). Writing
\(u:=|\psi|^2-|\nu|\) and
appealing to
Propositions \ref{T:lem3.2} and
\ref{T:prop3.1-}, Equation (\ref{ineq:beta2}) implies:
\begin{equation}\label{ineq:beta3b}
\begin{split}
& \big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) (\textsc{s} +\zeta _2 r|\beta
|^2) \leq z_4 \textsc{s} +c_\varepsilon\, r |\nabla_A\psi|^2 \\
& \quad \quad +
(1-\chi(\delta_0^{-1}\sigma))\,\Big( \zeta _3'r\tilde{\sigma}^{-2}|\psi|^2
+\zeta \, \varepsilon ^{-1}\textsc{o}_r^{-3} r\tilde{\sigma}^{-2}
(-u) +\zeta '\varepsilon ^{-1}\textsc{o}_r^{-6}r \tilde{\sigma}^{-1}+\zeta_0\Big)\\
& \qquad\qquad
+ \zeta'_2 \chi (\delta_0^{-1}\sigma)\, \varepsilon^{-1}\big(r\tilde{\sigma}^{-1}+r^{2/3} \tilde{\sigma}^{-2}\big)
\quad \text{over \(X^{'9}\).}\\
\end{split}
\end{equation}
Meanwhile, by (\ref{u-ineq1}), (\ref{eq:omega-ineq}) we have
\begin{equation}\label{u-ineq2a}
\begin{split}
& \big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) \,
u+|\nabla_A\psi|^2\leq \zeta'_0\, \tilde{\sigma}^{-1}\qquad
\text{ over \(X''\).}
\end{split}
\end{equation}
Let
\begin{equation}\label{def:q0}
q_0=q_0^{(\varepsilon )}:=\textsc{s} +c_\varepsilon r\, u+\zeta
_2r|\beta |^2.
\end{equation}
A combination of (\ref{u-ineq2a}) and the preceding differential inequality for \(\textsc{s} +\zeta _2 r|\beta |^2\) then gives:
\begin{equation}\begin{split}\label{ineq:q_0}
\big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) q_0
& \leq z_4q_0+ \zeta'_1 \chi (\delta_0^{-1}\sigma)\,
\varepsilon^{-1}(r\tilde{\sigma}^{-1}+r^{2/3} \tilde{\sigma}^{-2}\big)\\
& \quad + \zeta ' \, (1-\chi
(\delta_0^{-1}\sigma))\, \Big( r\tilde{\sigma}^{-2}(-u) \big(c_\varepsilon\tilde{\sigma}^2+\zeta \, \varepsilon ^{-1}\textsc{o}_r^{-3} )\\
& \qquad \qquad \qquad + \zeta _3'r\tilde{\sigma}^{-2}|\psi|^2+r \tilde{\sigma}^{-1}( \zeta _0'+\zeta '\varepsilon ^{-1}\textsc{o}_r^{-6})+\zeta_0\Big)\\
\end{split}
\end{equation}
on \(X^{'9}\). Note that for any \(\delta
\geq\textsc{o} r^{-1/3}\) and \(k>-1\),
\[\begin{split}
& \big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) \big(-(1-\chi (\delta^{-1}\sigma ))\, \tilde{\sigma}^{-k}\big)
\leq -\frac{r|\psi|^2}{4} \big(1-\chi (\delta^{-1}\sigma )\big)\, \tilde{\sigma}^{-k}\\
& \qquad \qquad +(1-\chi (2\delta^{-1}\sigma
))\,\big(\zeta_7\, \tilde{\sigma}^{-k-2}+\zeta_7' \delta^{-2} \chi (\delta
^{-1}\sigma /2)\,\tilde{\sigma}^{-k}\big)\\
& \quad \leq \big(1-\chi (2\delta^{-1}\sigma )\big)r\tilde{\sigma}^{-k}\, \big( -(1/4-\zeta
\textsc{o}^{-3}_r)\, |\psi|^2+\zeta
'\textsc{o}^{-3}_r(-u)+\zeta_7' \delta^{-2} \chi (\delta
^{-1}\sigma /2)\big)\\
& \quad \leq \big(1-\chi (2\delta^{-1}\sigma )\big)\, r \Big(
-\zeta_1\tilde{\sigma}^{-k+1} +\tilde{\sigma}^{-k}\big(\zeta _1'(-u) +\zeta_7' \delta^{-2} \chi (\delta
^{-1}\sigma /2)\big)\Big).
\end{split}
\]
Combining (\ref{ineq:q_0}) with the preceding inequality and making
use of Proposition \ref{T:lem3.2},
we see that there exist constants \(\zeta _5\), \(\zeta _5'\) that
are independent
of \(r\) and \(\delta \), such that with
\begin{equation}\label{def:q}
\begin{split}
q=q^{(\varepsilon )}& :=q_0^{(\varepsilon )}- (1-\chi (\delta_0 ^{-1}\sigma ))\, \tilde{\sigma}^{-2}(\zeta _5+\zeta_5'
\varepsilon ^{-1} \textsc{o}_r^{-6}),\\
\end{split}
\end{equation}
one has:
\begin{equation}\label{DE:q}
\begin{split}
& \big(\frac{d^*d}{2}+\frac{r|\psi|^2}{4}\big) q\leq z_4
q+ (\zeta _8'+\zeta \, \varepsilon ^{-1}\textsc{o}_r^{-3} )(1-\chi (2\delta_0 ^{-1}\sigma ))\, r\tilde{\sigma}^{-2}(-u+z_8r^{-1}) \\
& \qquad +\zeta _8\, \varepsilon ^{-1}\, \chi (\delta_0
^{-1}\sigma /2) \,(r\tilde{\sigma}^{-1} +\delta _0^{-2} \tilde{\sigma}^{-2})
\quad \text{over \(X^{'9}\),}
\end{split}
\end{equation}
for certain \(r\)-independent constants \(\zeta \), \(\zeta
_8\), \(\zeta '_8\), \(z_8\).
Thus, writing
\[
\begin{split}
& \eta_1:=z_8\, (\zeta _8'+\zeta \, \varepsilon ^{-1}\textsc{o}_r^{-3} )\, (1-\chi
(2\delta_0 ^{-1}\sigma ))\, \tilde{\sigma}^{-2};\\
& \eta_2:=(\zeta _8'+\zeta \, \varepsilon ^{-1}\textsc{o}_r^{-3} )\, (1-\chi (2\delta_0 ^{-1}\sigma ))\, r\tilde{\sigma}^{-2}(-u);\\
& \xi := \zeta_8\, \varepsilon ^{-1}\chi (\delta_0 ^{-1}\sigma /2)\,
( r\tilde{\sigma}^{-1}+\delta _0^{-2}\tilde{\sigma}^{-2}),
\end{split}
\]
one has
\[
d^*dq_+-z_4q_+\leq \eta_1+\eta_2+ \xi\quad
\text{over \(X^{'9}\).}
\]
\footnote{The formula above is regarded as an
inequality between distributions. See e.g.\ \cite{Ts} p.~187 for justification.}
Fix \(x\in X^{'a}\), \(0\leq a\leq 8\). Let \(\rho_*>0\) be such that \(\rho _*\leq 1/2\)
and the operator \(d^*d-z_4\) with Dirichlet
boundary condition on \(B(x, \rho)\) has positive spectrum for all
\(\rho\leq 2\rho_*\). Suppose that \(\rho _0>0\) is no larger than
\(\rho _*\).
Multiply both sides of the preceding differential inequality by \(\chi (\rho_0^{-1}\mathop{\mathrm{dist}\, }\nolimits (x,\cdot))\) times \(G(x, \cdot)\), the
Dirichlet Green's function for \(d^*d-z_4\) on a ball of radius \(2\rho_0\), and integrate
over \(B (x,2\rho_0)\). We then have
\begin{equation}\label{eq:q-est}
\begin{split}
q(x) & \leq c_{0}\, \rho_0^{-4} \int_{B(x, 2\rho_0)-B(x,\rho_0)}q_++c_1\int_{B(x,2\rho_0)}
\xi \mathop{\mathrm{dist}\, }\nolimits (x,\cdot)^{-2}\\
& \qquad \quad
+c_2\int_{B(x,2\rho_0)}
\eta _1\mathop{\mathrm{dist}\, }\nolimits (x,\cdot)^{-2}+c_3\int_{B(x,2\rho_0)}
\eta _2\mathop{\mathrm{dist}\, }\nolimits (x,\cdot)^{-2}.
\end{split}
\end{equation}
Note that by Propositions \ref{T:prop3.1-} and \ref{T:lem3.2},
\begin{equation}\label{bdd:q-0}
\begin{split}
q& \leq \textsc{s} +\zeta \chi (\delta _0^{-1}\sigma )\, r\tilde{\sigma}+\zeta '\tilde{\sigma}^{-2}
\quad \text{over \(X^{'9}\).}\\
\end{split}
\end{equation}
Thus, the first term on the right hand side of (\ref{eq:q-est}) may be
bounded via the facts that
\[
\int_{B(x, 2\rho_0)}\textsc{s} \leq \zeta \, r^{1/2}\, (\ln r+\textsc{e} )^{1/2}\quad \text{by
(\ref{eq:L^2_1});}
\]
and
\begin{equation}\label{bdd:B_rho0}
\int_{B(x,2\rho_0)}\chi (d_0^{-1}\sigma )\, \tilde{\sigma}^{k} \leq \zeta
d_0^{k+3}\rho_0\quad \text{for any \(d_0>0\) and \(k>-3\)}.
\end{equation}
This gives:
\[
\rho ^{-4}_0\int_{B(x,2\rho_0)-B(x,\rho_0)}q_+\leq \zeta_0'\, r^{1/2}\, (\ln
r +\textsc{e} )^{1/2}\rho_0^{-4}.
\]
Now suppose \(x\in X^{'a}_{4\delta }\), \(\delta \geq \delta _0:=\textsc{o}
r^{-1/3}\). Choose \(\rho _0>0\) to be such that
\(\rho _0< \rho _*\).
The remaining integrals in (\ref{eq:q-est}) are bounded in this case as
follows. We adopt the shorthand
\(\sigma _y:=\sigma (y)\) and \(\tilde{\sigma}_y:=\tilde{\sigma} (y)\) in what follows.
When \(\rho _0\) is chosen to be sufficiently small, the second integral on the right hand side of
(\ref{eq:q-est}) can be bounded by the following computation:
Use \(z\) to parametrize \(\nu ^{-1}(0)\cap B(x,2\rho_0)\), and for \(y\in B(x,2\rho_0)\), let \(z_y:= (4\rho _0^2-(\sigma _y-\sigma
_x)^2)^{1/2}\).
\[\begin{split}
\int_{B(x,2\rho_0)} \xi \mathop{\mathrm{dist}\, }\nolimits (x,
\cdot)^{-2}
& \leq \zeta \varepsilon ^{-1}\int_0^{2\delta _0}\int^{z_y}_{-z_y}\frac{r \tilde{\sigma}_y+\delta_0^{-2}}{(\sigma _y-\sigma _x)^2+z^2}\, dz\, d\sigma _y \\
& \leq \zeta ' \varepsilon ^{-1}r\delta_0^{2} \sigma _x^{-1}
\quad \text{when \(x\in X^{'a}_{4\delta}\)}.
\end{split}
\]
To bound the third integral in (\ref{eq:q-est}), take \(\rho
_1=\min \, (\tilde{\sigma}(x)/4, \rho _0)\) for \(x\in X^{'a}_{4\delta }\) and
separate \(B(x,2\rho_0)\) into two regions: \(\mathop{\mathrm{dist}\, }\nolimits (x, \cdot)\leq
\rho _1\) on the first region, and \(\mathop{\mathrm{dist}\, }\nolimits (x, \cdot)>
\rho _1\) on the second. Integrate over the two regions separately to
get:
\[
\int_{B(x,2\rho_0)} \eta_1\mathop{\mathrm{dist}\, }\nolimits (x,\cdot)^{-2}\leq
\zeta _1+\zeta_2'\, \varepsilon ^{-1}\textsc{o}_r^{-3}+\zeta_3 \, \tilde{\sigma}^{-2}\, ( \rho
_0+\varepsilon ^{-1} r^{-1}\delta_0^{-2})\, \rho _0
\quad \text{when \(x\in X^{'a}_{4\delta}\).}
\]
To bound the last integral in (\ref{eq:q-est}),
choose small positive numbers \(\rho_1, \delta_2\), such that
\(\rho _1\leq\rho _0 \)
and \(\sigma (x)/2 >\delta_2\geq\delta_0 \).
Separate \(B(x,2\rho_0)\) into the
three regions: \(\scrR_1:=B(x, \rho_1)\cap X^{'a}_{\delta _2}\), \(\scrR_2:=X^{'a}_{\delta_2}\cap (B(x,2\rho_0)-B(x,\rho_1)) \),
\(\scrR_3:=(Z^{'a}_{\delta_2}-Z^{'a}_{\delta _0/2})\cap
B(x,2\rho_0)\), and integrate separately. Using
the fact that
\(
\eta _2\leq \zeta r\tilde{\sigma}^{-1}(1+\varepsilon ^{-1} \textsc{o}_r^{-3})
\) over \(\scrR_1\) and \(\scrR_3\), and Lemma \ref{T:lem3.1}
over \(\scrR_2\), we get:
\begin{equation}\label{int:eta2}
\begin{split}
\int_{B(x,2\rho_0)}\eta _2\mathop{\mathrm{dist}\, }\nolimits (x,\cdot)^{-2}
& \leq \zeta \,
r\tilde{\sigma}^{-1}_x\, \rho _1^2 \, (1+\varepsilon ^{-1} \textsc{o}_r^{-3})
+ \zeta '\, \rho_1^{-2} \delta _2^{-3}(\ln r)\, (1+\varepsilon ^{-1} r^{-1}\delta_2^{-3})\\
&\qquad \quad
+ \zeta _3'\int_{\delta _0}^{\delta
_2}\int^{z_y}_{-z_y}\frac{r \tilde{\sigma}_y+\varepsilon^{-1} \tilde{\sigma}^{-2}_y
}{(\sigma _y-\sigma _x)^2+z^2}\, dz\, d\sigma _y \\
& \leq \zeta \,
r\tilde{\sigma}^{-1}_x\, \rho _1^2 \, (1+\varepsilon ^{-1} \textsc{o}_r^{-3})
+ \zeta '\, \rho_1^{-2} \delta _2^{-3}(\ln r +\textsc{e} )\,
(1+\varepsilon ^{-1} r^{-1}\delta_2^{-3})\\
&\qquad \quad +\zeta _4'\,
\big( r\, \delta _2^2+\varepsilon^{-1} \delta_0^{-1}\big) \, \sigma _x^{-1}.\\
\end{split}
\end{equation}
Put together, we have \(q(x)\leq f_1+\varepsilon ^{-1} f_2\) for
\(x\in X^{'a}_\delta \), where \(f_1\), \(f_2\) are given as follows. For \(x\in X^{'a}_{4\delta _0}\),
\[\begin{split}
f_1& =\zeta _1\tilde{\sigma}^{-2}_x+\zeta _2 r\tilde{\sigma}^{-1}_x\rho _1^2+\zeta _3\, \rho
_1^{-2}\delta _2^{-3} (\ln r +\textsc{e} )+\zeta _4\, r \delta _2^2\, \sigma _x^{-1}\\
&\leq \zeta _1\tilde{\sigma}^{-2}_x+\zeta _2'\, r^{1/2}\tilde{\sigma}^{-1/2}_x\delta
_2^{-3/2} (\ln r+\textsc{e} )^{1/2}+\zeta _4\, r \delta _2^2\, \sigma _x^{-1}\\
&\leq \begin{cases}
\zeta _3'\, (r\tilde{\sigma}_x^{-1})^{5/7}(\ln r+\textsc{e} )^{2/7}\quad & \text{when \(\sigma
_x\geq 3 r^{-1/6} (\ln r+\textsc{e} )^{1/6} \)};\\
\zeta '_4 r^{1/2}\tilde{\sigma}_x^{-2}(\ln r+\textsc{e} )^{1/2}
&\text{otherwise}
\end{cases}
\end{split}
\]
with \(\rho _1=r^{-1/4}\tilde{\sigma}^{1/4}_x\delta _2^{-3/4} (\ln
r+\textsc{e} )^{1/4}\) and
\[
\delta _2=\begin{cases}
r^{-1/7}\tilde{\sigma}_x^{1/7} (\ln r+\textsc{e} )^{1/7} & \text{when \(\sigma
_x\geq 3 r^{-1/6} (\ln r+\textsc{e} )^{1/6} \)};\\
\sigma_x/3 &\text{otherwise}.
\end{cases}
\]
Note that such
choice of \(\delta _2\) satisfies the constraint that \(\sigma (x)/2
>\delta_2\geq\delta_0 \) for all sufficiently large \(r\) and \(x\in X^{'a}_{4\delta _0}\).
Meanwhile, \(f_2\) is of the form
\[
\begin{split}
f_2& =\zeta '_1r^{1/3}\tilde{\sigma}^{-1}_x+\zeta '_2 \tilde{\sigma}^{-4}_x\rho _1^2+\zeta '_3\, \rho
_1^{-2}r^{-1}\delta _2^{-6} (\ln r+\textsc{e} )+\zeta _4\, \delta _0^{-1}\, \sigma _x^{-1}\\
&\leq \zeta _5\, \tilde{\sigma}^{-1}_x\big( r^{1/3}+\zeta _5'r^{-1/2}(\ln r+\textsc{e} )^{1/2}\tilde{\sigma}_x^{-4}\big),
\end{split}
\]
with \(\rho _1^2=r^{-1/2}\delta _2^{-3}(\ln r+\textsc{e} )^{1/2}\tilde{\sigma}^{2}_x\), and \(\delta
_2=\sigma _x/3\).
For a fixed \(x\in X^{'a}_{4\delta _0}\), set \[
\varepsilon=\varepsilon _x:= \begin{cases}
\big(r^{-4/3}\tilde{\sigma}^{-1}_x (\ln r+\textsc{e})\big)^{2/7} & \text{when \(\sigma
_x\geq 3 r^{-1/6} (\ln r+\textsc{e} )^{1/6} \)};\\
r^{-1/6}\, \tilde{\sigma}_x\, (\ln r+\textsc{e} )^{-1/2}+\zeta '_5\textsc{o}_r^{-3} &\text{otherwise},
\end{cases}
\]
we then have:
\begin{equation}\label{ineq:q}
\begin{split}
2 q^{(\varepsilon _x)}(x) & \leq \begin{cases}
\zeta' (r\tilde{\sigma}^{-1}_x)^{5/7}\, (\ln r+\textsc{e} )^{2/7} & \text{when \(\sigma
_x\geq 3 r^{-1/6} \, (\ln r+\textsc{e} )^{1/6} \)};\\
\zeta '' r^{1/2}\tilde{\sigma}_x^{-2}(\ln r+\textsc{e} )^{1/2} &\text{otherwise}
\end{cases}\\
& =: K_0\, \quad \text{over \(X^{'a}_{4\delta _0}\). }
\end{split}
\end{equation}
Noting that for all sufficiently large \(r\), \(\tilde{\sigma}^{-2}<K_0\) on
\(X^{'a}_\delta \), we have
\[
\begin{split}
\textsc{s} +\zeta_2r|\beta |^2\leq (2^{-3/2}+\varepsilon _0)\, r\, (-u)+K_0
\quad \text{over \(X^{'a}_{4\delta _0}\). }
\end{split}
\]
The first inequality in (\ref{bdd:ss}) now follows with \(\textsc{o}\)
renamed as \(4\textsc{o}\).
To verify the second inequality in (\ref{bdd:ss}), fix \(x\in
Z^{'a}_{\delta }\). Notice that (\ref{DE:q}) also holds with the
constant \(\delta _0\) therein replaced by \(4\delta \), with the
definition of \(q\) in (\ref{def:q}) correspondingly modified. For the
rest of this proof, let \(q=:q_\delta \), \(\eta_1\), \(\eta_2\), \(\xi\) denote
the correspondingly modified versions, with \(\varepsilon \) set to be
1. In particular, \(\xi\) is now supported on \(Z^{'a}_{8\delta }\), and
\(\eta_1\), \(\eta_2\) are both supported on \(X^{'a}_{2\delta }\).
Then in this case \(q\, (x)\) is still bounded by (\ref{eq:q-est}), and the first
term on its right hand side is bounded as before. Namely, it is bounded
by a positive multiple of \(r^{1/2}( \ln r+\textsc{e} )^{1/2}\). The
second to the fourth terms on the right hand side are bounded differently as
follows.
To bound the second integral in (\ref{eq:q-est}), divide \(B(x, 2\rho
_0)\cap Z^{'a}_{8\delta }\) into
three regions: the first region \(\scrR_1:=B(x, \sigma _x/2)\); the
second region \(\scrR_2:=\{y\, |\, \sigma _y\leq \frac{3}{4}\sigma _x,\,
y\in B(x, 2\rho
_0)-\scrR_1\}\); the third region \(\scrR_3:=B(x, 2\rho
_0)\cap Z^{'a}_{8\delta }-\scrR_1-\scrR_2\).
Make use of the following facts: For \(y\in \scrR_1\), \(\tilde{\sigma}_y\geq \tilde{\sigma}_x/2\)
and so \(\xi\leq \zeta r\tilde{\sigma}^{-1}_x +\zeta ' \delta
^{-2}\tilde{\sigma}^{-2}_x\); for \(y\in \scrR_2\), \(|\sigma _y-\sigma
_x|\geq\sigma _x/4\); for \(y\in \scrR_3\), \(|\sigma _y-\sigma
_x|\geq \sigma _y/2\). One has:
\[\begin{split}
\int_{B(x,2\rho_0)} \xi \mathop{\mathrm{dist}\, }\nolimits (x,
\cdot)^{-2}& \leq \zeta _2r\delta +\zeta '_2 \, \delta ^{-2}\ln \, (\delta
/\sigma _x),
\end{split}
\]
where \(\zeta _2\), \(\zeta _2'\) are independent of \(r\), \(\delta
\), and \(x\in Z^{'a}_\delta \).
To bound the remaining two integrals in (\ref{eq:q-est}), note that
the distance from \(x\) to the support of either \(\eta_1\) or \(\eta_2\) is no
less than \(\delta \geq \sigma _x\).
\[
\begin{split}
\int_{B(x,2\rho_0)} \eta_1\mathop{\mathrm{dist}\, }\nolimits (x,
\cdot)^{-2}& \leq \zeta \int_{2\delta }^{\sigma _x+2\rho _0}\int _{-2\rho _0}^{2\rho _0}\frac{\sigma _y^{-2} dz}{(\sigma _y-\sigma
_x)^2+z^2}\, \sigma _y^2 \, d\sigma _y\\
& \leq \zeta '\int_{2\delta }^{\sigma _x+2\rho _0}\frac{d\sigma _y}{\sigma _y-\sigma _x}\\
& \leq \zeta _3\, \ln \, (\delta ^{-1})
\quad \text{when \(x\in Z^{'a}_{\delta}\)}.
\end{split}
\]
To bound the last integral in (\ref{eq:q-est}), choose \(\delta _2>0\), and divide
\(B(x,2\rho_0)\cap X^{'a}_{2\delta }\) into two regions: \(\sigma \geq\delta _2\) on the
first region, and \(\sigma \leq\delta _2\) on the second. (The second
region is empty when \(\delta _2\leq 2\delta \).)
Over the first region, use Lemma \ref{T:lem3.1} together with the
observation that \(\mathop{\mathrm{dist}\, }\nolimits (y, x)\geq \sigma _y/2\geq\delta _2/2\) for any \(y\) in
this region.
Over the second region, use the fact that
by Proposition
\ref{T:lem3.2},
\(
\eta _2\leq \zeta r\tilde{\sigma}^{-1}
\) over \(X^{'a}_{2\delta }\).
Then, setting \(\delta _2=r^{-1/7}\delta^{1/7}(\ln r+\textsc{e} )^{1/7}\), one has:
\[
\begin{split}
\int_{B(x,2\rho_0)} \eta_2\mathop{\mathrm{dist}\, }\nolimits (x,
\cdot)^{-2}& \leq
\zeta '\, \delta _2^{-2} \delta _2^{-3}(\ln r+\textsc{e})+\zeta
_1\int_{2\delta}^{\delta _2}\int _{-2\rho _0}^{2\rho _0}\frac{r\sigma _y
\, dz \, d\sigma _y}{(\sigma _y-\sigma
_x)^2+z^2}\,\\
& \leq \zeta '\, \delta _2^{-5}(\ln r+\textsc{e})+ \zeta '_1\delta^{-1}\int_{2\delta }^{\delta _2}r\sigma _y
\, d\sigma _y\\
& \leq \zeta '\, \delta _2^{-5}(\ln r+\textsc{e})+\zeta r \, \delta^{-1}\delta _2^{2}\\
& \leq \zeta _4\, r^{5/7}\delta^{-5/7}(\ln r+\textsc{e} )^{2/7}\leq \zeta_4' r\delta
\quad \text{when \(x\in Z^{'a}_{\delta}\)}.
\end{split}
\]
Gathering all the termwise bounds obtained,
we have:
\[
q_\delta \leq \zeta \, r \delta +\zeta ' \delta^{-2}\ln\, (\delta /\sigma )
\quad \text{over \(Z^{'a}_{\delta}\)},
\]
where \(\zeta \), \(\zeta '\) are positive constants independent of
\(r\) and \(\delta \). Hence, by Proposition \ref{T:lem3.2}
\[
\textsc{s} =q_\delta +c_1r\, u-\zeta _2 r|\beta |^2\leq \zeta _1r \delta +\zeta '_1 \delta^{-2}\ln\, (\delta /\sigma)
\quad \text{over \(Z^{'a}_{\delta}\), }
\]
as asserted by
the second inequality in (\ref{bdd:ss}).
Having established Item (b) of the assertions of the proposition, Item
(c) follows directly. Given \(x\in X^{'a}_{\delta _0}\), set the parameter
\(\delta \) in (\ref{bdd:ss}) (ii) to be \(\delta =\sigma _x\); observing
that \(r^{2/3}\leq \textsc{o}^{-1}r\tilde{\sigma}_x\) over \(X^{'a}_{\delta _0}\),
one arrives at the first inequality in Item (c). The second inequality
asserted in Item (c) follows from a combination of the first
inequality and (\ref{bdd:ss}) (ii).
\hfill$\Box$\medbreak
We shall repeatedly improve the estimate for \(|F_A^-|\). The first of
these improvements makes use of a comparison
function named \(v_2\), which we now describe.
Let \(u=|\psi|^2-|\nu|\) as before. Let \(X^*_{\delta}\subset X''_{\delta}\)
denote the subspace consisting of points where
\(|\psi|^2/|\nu|\geq 1/2\), and let \(\textsc{w}:=|\nu |/2-|\psi |^2\).
\begin{lemma}\label{lem:v_2}
Let \(u:=|\psi|^2-|\nu|\).
There exist positive constants \(\textsc{o}\geq 8\), \(\zeta_i\), \(i=1,
\ldots, 5\), \(\zeta'_3, \zeta _4'\), that are independent of
\(r\), \(\delta \), and \((A, \Psi )\), with the following
significance: Suppose \(r>1\), \(\delta >0\) are such that
\(\delta\geq\textsc{o}r^{-1/3}\), then
given a
positive constant \(\epsilon<1\), there is a function \(v_2\) on \(X''_{\delta}\) which
satisfies:
\begin{itemize}
\item Over \(X''_\delta \),
\[\begin{split}
v_2& \geq \zeta_1\tilde{\sigma}^{-\epsilon}(-u)_+\geq
\zeta_1\tilde{\sigma}^{-\epsilon}\, (|\nu |/2+\textsc{w}).
\end{split}
\]
In particular,
\(v_2\geq \zeta'_1\tilde{\sigma}^{1-\epsilon}\) over \(X_\delta ''-X^*_\delta
\).
\item \(v_2\geq \zeta _2r^{-1}\tilde{\sigma}^{-2-\epsilon}\).
\item Over \(X''_\delta \),
\[\begin{split}
(d^*d+r|\psi|^2/2)\, v_2 & \geq
\tilde{\sigma}^{-\epsilon}\big( \zeta _3 \epsilon r
|\psi|^2\,(-u)_+-\zeta _3'(1-\epsilon )\, r \tilde{\sigma}^{-2}\textsc{w}\big)\\
&\geq \tilde{\sigma}^{-\epsilon}\big( \zeta _3 \epsilon r
|\nu |\,(-u)_+/2-\zeta _4'r \tilde{\sigma}^{-2}\textsc{w}\big).
\end{split}
\]
In particular, \((d^*d+r|\psi|^2/2)\, v_2\geq\zeta _3 \epsilon
\tilde{\sigma}^{-\epsilon}r
|\psi|^2\,(-u)_+\) over \(X^*_\delta \).
\item \(v_2\leq\zeta_4r^{\epsilon}\tilde{\sigma}^{2\epsilon }(-u+\zeta r^{-1}\tilde{\sigma}^{-2})\) on \(X''_\delta\).
\item \(v_2\leq \zeta_5 \tilde{\sigma}^{1-\epsilon}\) on \(X''_\delta\).
\end{itemize}
\end{lemma}
\noindent{\it Proof.}\enspace
From (\ref{u-ineq2a}) we have
\begin{equation}\label{u-ineq2}
(2^{-1}d^*d+\frac{r}{4}|\psi|^2)(-u)\geq-\zeta _0\, \tilde{\sigma}^{-1}\quad
\text{over \(X''\)}.
\end{equation}
As \(|\psi|^2\geq|\nu|/2\) on \(X^*_{\delta}\), it follows from
(\ref{ts-loBdd}) that
\begin{equation}\label{ts-loBdd-p}
\begin{split}
& d^*d (r^{-1}\tilde{\sigma}^{-k})+\frac{r|\psi|^2}{2}(r^{-1}\tilde{\sigma}^{-k})\\
& \quad =
\big(d^*d +\frac{r}{4}|\nu|-\frac{r\textsc{w}}{2} \big)(r^{-1}\tilde{\sigma}^{-k})
\geq\begin{cases}
\zeta''\tilde{\sigma}^{-k+1}-\frac{\textsc{w}}{2}\, \tilde{\sigma}^{-k} & \text{on \(X''_\delta\);}\\
\zeta''\tilde{\sigma}^{-k+1}& \text{on \(X^*_\delta\).}
\end{cases}
\end{split}
\end{equation}
Adding a suitable multiple of the \(k=2\) version of the preceding inequality to
(\ref{u-ineq2}) and combining with Proposition \ref{T:lem3.2}, one
can find a positive constant
\(\zeta_2\) such that
\[
v_1:=-u+\zeta _2r^{-1}\tilde{\sigma}^{-2}
\]
satisfies:
\BTitem\label{ineq:v1}
\item \((d^*d+r |\psi|^2/2)\, v_1\geq -\zeta r\tilde{\sigma}^{-2} \textsc{w}\) on
\(X''_\delta\). In particular, \((d^*d+r |\psi|^2/2)\, v_1\geq 0\) on \(X_\delta^*\);
\item \(v_1\geq \zeta r^{-1}\tilde{\sigma}^{-2}\) on \(X''_\delta\),
\item \(v_1\geq (-u)_+\geq |\nu |/2+\textsc{w}\) on \(X''_\delta\),
\item \(v_1<\zeta\tilde{\sigma}\) on \(X''_\delta\).
\ETitem
Now take \(v_2:=v_1^{1-\epsilon}\).
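That \(v_2=v_1^{1-\epsilon}\) inherits the listed properties from (\ref{ineq:v1}) follows from the elementary identity for powers of a positive function (stated with \(d^*d\) denoting the nonnegative Laplacian, as elsewhere in the text):

```latex
% For v_1 > 0 and 0 < \epsilon < 1:
\[
d^*d\,\big(v_1^{1-\epsilon}\big)
 = (1-\epsilon)\, v_1^{-\epsilon}\, d^*d\, v_1
   + \epsilon(1-\epsilon)\, v_1^{-1-\epsilon}\, |dv_1|^2
 \geq (1-\epsilon)\, v_1^{-\epsilon}\, d^*d\, v_1,
\]
% so that, splitting off an \epsilon-fraction of the zeroth order term,
\[
\big(d^*d+\tfrac{r|\psi|^2}{2}\big)\, v_2
 \geq (1-\epsilon)\, v_1^{-\epsilon}\,\big(d^*d+\tfrac{r|\psi|^2}{2}\big)\, v_1
   + \epsilon\,\tfrac{r|\psi|^2}{2}\, v_1^{1-\epsilon}.
\]
```

Combining this with (\ref{ineq:v1}) and the upper bound \(v_1<\zeta \tilde{\sigma}\) (so that \(v_1^{-\epsilon}\geq \zeta ^{-\epsilon}\tilde{\sigma}^{-\epsilon}\)) yields the bullets above.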
\hfill$\Box$\medbreak
Recall the notation \(\tilde{r}=r\tilde{\sigma}\).
\begin{prop}\label{prop:curv-varpi}
Let \(u:=|\psi|^2-|\nu|\), and let \(\varepsilon _0\), \(K_0\),
\(K_1\) be as
in Proposition \ref{T:prop3.2}.
There exist \(r\)-independent positive constants \(\zeta_O\),
\(r_0>8\), \(\zeta \), \(\zeta'\), \(\zeta_1\)
that only depend on the metric, \(\nu \), \(\varsigma_w\), \(\zeta
_\grp\) with the following significance:
Let
\(\delta _0':=\zeta_O \, r^{-1/3}(\ln r)^{2/3}\), \(\varepsilon _1:=K_1 \,
\tilde{r}^{-1}\), and \(\varepsilon':= \varepsilon
_0+\zeta \varepsilon_1\). Then for any \(r\geq r_0\) one has:
\begin{equation}\label{eq:curv-varpi}
\begin{split}
|F_A^-|& \leq (2^{-3/2}+\varepsilon ')\,
r \, (-u)_+ +\zeta
\, \tilde{\sigma}^{-2}
\quad
\text{over \(X^{'a}_{\delta'_0}\), \(0\leq a\leq 7\)}.
\end{split}
\end{equation}
The constants
\(\textsc{o}\), \(r_0\), \(\zeta \)
and \(\zeta '\) above
depend on the \(\mathop{\mathrm{Spin}}\nolimits^c\) structure and the relative homotopy
class of \((A, \Psi)\).
\end{prop}
Note that \(\varepsilon _1+\varepsilon '\leq \zeta _0'\) over
\(X^{'a}_{\delta _0'}\) for an \(r\)-independent constant \(\zeta _0'\).
\noindent{\it Proof.}\enspace Let \(\textsc{o}\) be the larger of that in Proposition \ref{T:prop3.2}
and that in the preceding lemma. Set \(\delta _0=\textsc{o} r^{-1/3}\). Fix \(\delta \geq \delta _0\). Let \(X^*_{\delta}\subset X_{\delta}''\)
denote the subspace consisting of points where
\(|\psi|^2/|\nu|\geq 1/2\) as before. Note that over \(X''_\delta -X_\delta ^*\), \(|\nu |\leq -2u\) and \(
\tilde{\sigma}^{-1}\leq \textsc{o}^{-1} r^{1/3}\). Let \( \varepsilon_{0, \delta }\),
\(K_{0, \delta } \), \(K_{1, \delta } \) be defined by replacing every
occurrence of \(\tilde{\sigma}\) and \(\sigma \) in the respective formulas defining \(\varepsilon_0\),
\(K_0\), \(K_1\), by the number \(\delta \); and let \(K'_{1, \delta } :=\min\,
(K_{0,\delta }, r\tilde{\sigma})\).
Let \(q_0=q_0^{(\varepsilon)}\) be as in (\ref{def:q0}) with
\(\varepsilon \) set to be \(\varepsilon_{0, \delta }\). Then by the
proof of Proposition
\ref{T:prop3.2}
and the fact that \(\delta ^{-2}< K'_{1,\delta }\) over \(X_\delta ''\)
for \(\delta \geq \delta _0\) and all sufficiently large \(r\), there is a positive constant
\(\zeta '\) that depends only on the metric, \(\nu \),
\(\varsigma_w\), and \(\zeta _\grp\),
such that
\begin{equation}\label{bdd:q_0b}
\begin{split}
q_0& \leq \zeta
K'_{1, \delta } \quad \text{ over \(X^{'a}_{\delta }\)}, \quad 0\leq a\leq 8
\end{split}
\end{equation}
for all sufficiently large \(r\) and \(\delta \geq \delta
_0\).
Combined with (\ref{ineq:q_0}) (replacing \(\delta _0\) therein by \(\delta _0/2\)), this gives:
\begin{equation}\label{DE:q_0-1}
\begin{split}
\big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) q_0
& \leq (\zeta_4 +\zeta _4' \varepsilon^{-1}_{0,\delta }\,\textsc{o}_r^{-3} \tilde{\sigma}^{-2}) \, r\, (-u)
+r \tilde{\sigma}^{-1}( \zeta _1+\zeta '_1\varepsilon ^{-1}_{0,\delta }\textsc{o}_r^{-6})\\
& \leq \zeta''_4 \, r\tilde{\sigma}^{-2} (-u)
+\zeta ''_1r \tilde{\sigma}^{-1}\quad \text{on \(X^{'a}_{\delta }\).}
\end{split}
\end{equation}
Given \(i\in \grY_m\), let \(q_{0,i}\) denote the \(Y_i\)-end
limit of \(q_0\). This is a function on \(Y_i\), and is bounded via
the 3-dimensional Seiberg-Witten equation \(\grF_{\mu _{i,r}}(B_i,
\Phi _i)=0\) and Lemma \ref{lem:3d-ptws} by
\begin{equation}\label{bdd:q_0i}
q_{0,i}\leq \zeta _i\,(1- \chi (\sigma _i/\delta _{0,i}))\, \tilde{\sigma}_i^{-2}+\zeta _i'\chi (\sigma _i/\delta _{0,i})
\, r^{2/3}
\end{equation}
for certain \(r\)-independent constants
\(\zeta _i\), \(\zeta _i'\).
In the above, \((B_i, \Phi _i)\) denotes the \(Y_i\)-end limit of
\((A, \Psi )\), and \(\sigma _i\), \(\tilde{\sigma}_i\), \(\delta _{0,i}\) are
respectively what were denoted by \(\sigma \), \(\tilde{\sigma}\), \(\delta _0\)
in Lemma \ref{lem:3d-ptws}. According to observations in Section \ref{sec:end-limit}, \(q_0\Big|_{Y_{i:L}}\)
approaches \(q_{0,i}\) as \(L\to \infty\) in \(C^k(Y_i)\) topology.
Combining (\ref{DE:q_0-1}) with (\ref{ts-loBdd}),
and making use of (\ref{bdd:q_0i}) and
(\ref{bdd:q_0b}), one may find constants \(\zeta _2, \zeta_2'\) independent
of \(\delta \), and \(r\), such that the function
\[
q':=q_0-\zeta _2\tilde{\sigma}^{-2}
\]
satisfies:
\begin{equation}\label{DE:q'}
\begin{split}
& \big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) \, q'
\leq \zeta _2' \, r\tilde{\sigma}^{-1} =:\xi'
\quad \text{on \(X^{'a}_\delta \)};\\
& \quad q'\leq 0 \quad \text{over \(\hat{Y}_{i, L}\), \(\forall i\in
\grY_m\); }\\
& \quad q' \leq \zeta
K'_{1,\delta } \quad \text{ over \(X^{'a}_{\delta }\)}, \quad 0\leq
a\leq 8, \quad \delta \geq\delta _0.
\end{split}
\end{equation}
(The number \(L\) above may depend on \(r\) and \((A, \Psi )\), but is
independent of \(\delta \). This dependence does not affect our
subsequent discussion, and unless otherwise specified, all constants
below are independent of \(L\).) Now let
\[
q'_\delta :=\gamma _{a, \delta } \, q', \quad 0\leq a\leq 7
\]
where \(\gamma _{a,\delta }\) is the cutoff function introduced in the beginning of this
section. The function \(q'_\delta \) satisfies:
\begin{equation}\label{DE:q'_d}
\big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) \, q'_\delta
\leq \gamma _{a,\delta }\, \xi' +
\frac{ d^*d \, q'_\delta }{2}-\gamma _{a,\delta }
\frac{ d^*dq'}{2} =: \xi'_\delta .
\end{equation}
\((q'_\delta )_+\) is supported within the compact space \(U:=\scrX^{a+1}_{\delta
}-\bigcup_{i\in \grY_m} \hat{Y}_{i, L}\subset X''_\delta \). Let
\[
\scrX^a_{\delta, l}\subset \scrX^a_{\delta}-\bigcup_{i\in \grY_m} \hat{Y}_{i, l}
\]
denote the manifold with boundary obtained by ``rounding the corners''
of \(\scrX^a_{\delta}-\bigcup_{i\in \grY_m} \hat{Y}_{i, l}\).
More precisely, it satisfies: \(\partial\scrX^a_{\delta, l}\) is
smooth, and \(
\scrX^a_{\delta}-\bigcup_{i\in \grY_m}
\hat{Y}_{i,l}-\scrX^a_{\delta, l}\subset \hat{Y}_{i, [l-\epsilon ,
l]},\)
where \(\epsilon \ll 2^{-8}\).
Let \(V:=\scrX^{a+1}_{\delta, L+1}\supset U\).
Then \(q'_\delta \leq 0\)
on \(\partial V\).
Let \(q_1\) be a solution to the following
Dirichlet boundary value problem:
\[
\big(\frac{ d^*d}{2}+\frac{r|\nu |}{4}\big) \, q_1
=\xi'_\delta \, \, \, \text{ over \(V\)};\qquad
q_1|_{\partial V}=0.
\]
The (Dirichlet) Green's function for \(\frac{ d^*d}{2}+\frac{r|\nu |}{4}\), denoted \(G_r\) below, satisfies
\[
|G_r(x, \cdot)|+\mathop{\mathrm{dist}\, }\nolimits(x,\cdot)\, |d G_r(x, \cdot)|\leq \zeta\mathop{\mathrm{dist}\, }\nolimits(x,\cdot)^{-2}e^{-\zeta_g
(r\delta )^{1/2}\mathop{\mathrm{dist}\, }\nolimits (x,\cdot)}
\]
for certain positive constants \(\zeta \) and \(\zeta _g\). Thus,
\[
\begin{split}
|q_1(x)|& \leq \zeta _1\int_U \mathop{\mathrm{dist}\, }\nolimits(x,\cdot )^{-2}e^{-\zeta_g
(r\delta )^{1/2}\mathop{\mathrm{dist}\, }\nolimits (x,\cdot )} |\xi'| \\
& \qquad +\zeta _2\int_{X^{'a+1}_\delta -X^{'a}_\delta }K'_{1,\delta }\mathop{\mathrm{dist}\, }\nolimits(x,\cdot )^{-3}e^{-\zeta_g
(r\delta )^{1/2}\mathop{\mathrm{dist}\, }\nolimits (x,\cdot )} \\
& \qquad +\int_{(Z^{'a}_{2\delta }-Z^{'a}_\delta)\cap U}K'_{1,\delta }\big( \zeta _3\,
\delta^{-2}\mathop{\mathrm{dist}\, }\nolimits(x,\cdot )^{-2}+\zeta '_3\, \delta^{-1}\mathop{\mathrm{dist}\, }\nolimits(x,\cdot
)^{-3}\big) e^{-\zeta_g
(r\delta )^{1/2}\mathop{\mathrm{dist}\, }\nolimits (x,\cdot )}\\
& \leq \zeta _1' \delta ^{-1}\tilde{\sigma}^{-1}(x)+ \zeta _2'\, K'_{1,\delta }(x) \, (r\delta
)^{-1/2} e^{-\zeta _g(r\delta )^{1/2}\mathop{\mathrm{dist}\, }\nolimits (x,\, X^{' a+1}_\delta
-X^{'a}_\delta )}\\
&\qquad \qquad +\zeta _3'' K'_{1,\delta }(x)\, (r\delta ^{3})^{-1/2}
e^{-\zeta _g(r\delta )^{1/2}\mathop{\mathrm{dist}\, }\nolimits (x,\, Z^{'a}_{2\delta }-Z^{'a}_\delta) } \\
& \leq \zeta _1' \delta ^{-1}\tilde{\sigma}^{-1}(x)+ \zeta _4 \, K'_{1, \delta }(x) \, (r\delta ^{3})^{-1/2}.
\]
This means that there exist constants \(r_0>8\),
\(\zeta _O\), \(\zeta ''_1\) and \(\zeta _4'\), which are independent of \(r\), \(\delta \), and
\((A, \Psi )\), such that for all \(r\geq r_0\), and \(\delta \geq
\delta _0':=\zeta _O\, r^{-1/3}\, (\ln r)^{2/3}>\delta _0\),
\[
\begin{split}
|q_1| & \leq \zeta '_4 \, K'_{1,\delta }(\ln r)^{-1}\quad \text{over \(V\);} \\
|q_1| & \leq \zeta ''_1 \delta ^{-1}\tilde{\sigma}^{-1} \quad \text{over \(U':=X^{'a}_{4\delta }\cap V\).}
\end{split}
\]
Assume \(\delta
>\delta _0'\) from now on. Then by
the preceding estimates for \(|q_1|\) and \(q'\),
the function \(q_2:= q'_\delta -q_1\) satisfies:
\[\begin{split}
\big(\frac{ d^*d}{2}+\frac{r|\psi|^2}{4}\big) \, q_2& =-\frac{ru}{4}
q_1\leq \frac{\zeta '_4}{4} \, K'_{1,\delta }(\ln r)^{-1} r\, |u|; \\
q_2|_{\partial V} & \leq 0;\\
q_2 & \leq \zeta _5 \, K'_{1,\delta }.
\end{split}
\]
Choose \(\epsilon =(\ln (r\delta ))^{-1}\), and note that \(\delta
^{-1+\epsilon }K_{1,\delta }\geq\zeta \tilde{\sigma}^{-1+\epsilon }K'_{1,\delta
}\). Appeal to the first and the third bullets in Lemma \ref{lem:v_2}
to find positive constants \(\zeta _1\), \(\zeta _2\), \(\zeta _3\)
(independent of \(r\), \(\delta \)), such that with
\[
q_3:=q_2
-\zeta _1\delta ^{-1+\epsilon } K_{1, \delta } \, v_2,
\]
one has:
\[\begin{split}
\big(2^{-1}d^*d+\frac{r}{4}|\psi|^2\big)\, q_3& \leq \zeta _2\,
r\tilde{\sigma}^{-2} \delta
^{-1} K_{1, \delta } \textsc{w} +\frac{\zeta '_4}{4} \, K_{1,\delta }\, (\ln r)^{-1} r\,
u_+\\
q_3|_{\partial V}& \leq 0; \\
q_3 & \leq -\zeta _3 \, \delta ^{-1} K_{1, \delta } \textsc{w}. \\
\end{split}
\]
By the maximum principle, \(q_3\leq 0\) over \(V\). Thus, applying the fourth
bullet in Lemma \ref{lem:v_2}, one has:
\[\begin{split}
q_0& \leq \zeta _1' \delta ^{-1} \tilde{r}^\epsilon
K_{1, \delta }\, (-u+\zeta r^{-1}\tilde{\sigma}^{-2}) +\zeta _2' \delta ^{-1}\tilde{\sigma}^{-1}\\
& \leq \zeta _1''\, \delta ^{-1}K_{1, \delta }\, (-u+\zeta
r^{-1}\tilde{\sigma}^{-2}) +\zeta _2' \delta ^{-1}\tilde{\sigma}^{-1}
\end{split}
\]
over \(X^{'a}_{4\delta }\) for all sufficiently large $r$. All the
constants \(\zeta _*\) above are independent of \(r\), \(\delta \),
and \((A, \Psi )\). Given \(x\in X^{'a}_{8\delta _0}\), set \(\delta
=\tilde{\sigma}(x)/2\) in the preceding expression. It then gives
\[\begin{split}
q_0
& \leq \zeta _7\, \varepsilon _{1}r\, (-u)+\zeta _8\, \tilde{\sigma}^{-2}.
\end{split}
\]
This leads directly to the conclusion of the proposition. \hfill$\Box$\medbreak
\subsection{Estimating \(|\nabla_A\alpha|\) and \(|\nabla_A\beta|\)}
The next proposition is an analog of \cite{Ts}'s Proposition 3.3 and
\cite{T}'s Proposition I.2.8, and the proof is an adaptation of the latter. Let
\(\underline{\alpha}:=|\nu|^{-1/2}\alpha\) and
\(\varpi=|\nu|-|\alpha|^2\).
\begin{prop}\label{lem:est-1st-der}
There exist positive constants \(r_1\), \(\zeta _O\), \(\zeta', \zeta''\), that are independent of
\(r\) and \((A, \Psi )\), with the following significance:
Let \(\delta _0'=\zeta _O r^{-1/3}(\ln r)^{2/3}\). For any \(r>r_1\), one has:
\[\begin{split}
|\nabla_A\underline{\alpha}|^2+r\tilde{\sigma}^2|\nabla_A\beta|^2&\leq \zeta'r
\varpi+\zeta''\tilde{\sigma}^{-2} \quad \text{over \(X^{'a}_{\delta'_0}\)}, \quad
0\leq a\leq \frac{13}{2}.
\end{split}\]
\end{prop}
\noindent{\it Proof.}\enspace Let \(\delta _0'\) be as in Proposition \ref{prop:curv-varpi}. Argue as in p.191 of
\cite{Ts} (which is itself a modification of the arguments in p.20 of
\cite{T}),\footnote{Caveat: Equations (3.50)--(3.53) in \cite{Ts}
contain typos and errors.} replacing the use of
Propositions 3.1, 3.2 therein by their counterparts in our setting,
Propositions \ref{T:prop3.1-} and \ref{prop:curv-varpi}.
This leads to the following
variant of (3.53) in \cite{Ts} (which in turn is based on \cite{T}'s
(I.2.38) and (I.2.40)):
\begin{equation}\label{eq:DE-alpha'}\begin{split}
2^{-1}d^*d &|\nabla_A\underline{\alpha}|^2+2^{-1}|\nabla_A\nabla_A\underline{\alpha}|^2+8^{-1}r|\nu||\nabla_A\underline{\alpha}|^2\\
&\leq \zeta \, (\tilde{\sigma}^{-2}+r\varpi_+)
|\nabla_A\underline{\alpha}|^2
+\zeta_2 (r\tilde{\sigma}^{-1}|\beta|^2+ r^{-1}\tilde{\sigma}^{-6}) \, |\nabla_A\beta|^2\\
& \qquad \quad +
\zeta_3 r^{-1}\tilde{\sigma}^{-4}|\nabla_A\nabla_A\beta|^{2}
+\zeta_5r^{-1}\tilde{\sigma}^{-7}+\zeta_6r\tilde{\sigma}^{-3}\varpi^2;\\
2^{-1}d^*d &
|\nabla_A\beta|^2+|\nabla_A\nabla_A\beta|^2+8^{-1}r|\nu|\, |\nabla_A\beta|^2\\
& \leq \zeta (\tilde{\sigma}^{-2}+r\varpi_+)
|\nabla_A\beta|^2
+\zeta_2'(r|\beta|^2+ r^{-1}\tilde{\sigma}^{-5}) \, |\nabla_A\alpha|^2\\
& \qquad \quad +\zeta_3'r^{-1}\tilde{\sigma}^{-3}|\nabla_A\nabla_A\alpha|^2+\zeta_4'r\tilde{\sigma}^{-1}|\beta|^2+\zeta_5'r^{-1}\tilde{\sigma}^{-1}
\end{split}
\end{equation}
on \(X^{'a}_{\delta_0'}\), \(0\leq a\leq 7\). Re-introduce the notation \(\tilde{\sigma}_x=\tilde{\sigma}(x)\) and
\(\tilde{r}_x=\tilde{r}(x)=r\tilde{\sigma}_x\), and note that
\begin{equation}\label{est-ts}
1/2\leq \tilde{\sigma} /\tilde{\sigma}_x\leq 3/2\quad
\text{over \(B\, (x, \tilde{\sigma}_x/2) \). }
\end{equation}
Fix \(x\in X^{'a}_{2\delta _0'}\), \(0\leq a\leq\frac{13}{2}\). Use (\ref{eq:DE-alpha'}), (\ref{3.20}),
(\ref{3.21}), Proposition \ref{T:prop3.1-} and (\ref{est-ts}) to find constants \(\zeta \), \(\zeta
'\), \(\zeta ''\), \(\zeta _1\) that are independent of \(r\), \(x\),
and \((A, \Psi )\), such that
\begin{equation}\label{def:y}
\begin{split}
& \quad d^*d y_x+4^{-1}r|\nu|y_x\leq 0 \quad \text{over \(B\, (x,
\tilde{\sigma}_x/2) \), with }\\
&y_x:=\max \, \big(|\nabla_A\underline{\alpha}|^2+\zeta
r\tilde{\sigma}^2_x|\nabla_A\beta|^2-\zeta' r\varpi+\zeta'' r^2\tilde{\sigma}^3_x|\beta|^2-\zeta_1\tilde{\sigma}^{-2}, 0\big).
\end{split}
\end{equation}
Note that by Propositions \ref{T:lem3.2} and \ref{T:prop3.1-} and (\ref{est-ts}),
\begin{equation}\label{bdd:y0}
y_x\leq \zeta _2\, \tilde{\sigma}^{-1}|\nabla_A\alpha|^2+\zeta_2'
r\tilde{\sigma}^2|\nabla_A\beta|^2+\zeta_3' \tilde{\sigma}^{-2}+\zeta _1'r \varpi .
\end{equation}
Multiply both sides of the differential inequality (\ref{def:y}) by
\(\chi (4\mathop{\mathrm{dist}\, }\nolimits (x, \cdot)/\tilde{\sigma}_x)\, G_r\),
where \(G_{r}\) denotes the
Green's function of \(d^*d+4^{-1}r|\nu|\) on \(B\, (x, \tilde{\sigma}_x/2)\) with Dirichlet
boundary condition, then integrate over \(B\, (x, \tilde{\sigma}_x/2)\). This
Green's function satisfies a bound of the form:
\[
\Big|G_r(x, x')\Big|+\mathop{\mathrm{dist}\, }\nolimits (x, x')\, \Big|\, dG_r(x,
x')\Big|\leq \zeta_0\mathop{\mathrm{dist}\, }\nolimits(x,x')^{-2}\, e^{-2\zeta'_0
\tilde{r}_x^{1/2}\mathop{\mathrm{dist}\, }\nolimits (x,x')}.
\]
It follows from integration by parts that
\begin{equation}\label{eq:y-bdd0}
\begin{split}
y_x(x)& \leq\zeta _3 \tilde{\sigma}_x^{-4}e^{-\zeta'_0 \tilde{r}_x^{1/2}\tilde{\sigma}_x/2}\int_{A_x}y_x,
\end{equation}
where \(A_x= B(x, \tilde{\sigma}_x/2)-B(x, \tilde{\sigma}_x/4)\).
By (\ref{bdd:y0}), (\ref{est-ts}), Propositions \ref{prop:SW-L2-bdd},
\ref{T:prop3.1-},
and Lemma \ref{T:lem3.1},
\begin{equation}\label{eq:y-bdd1}
\int_{A_x}y_x\leq\zeta '\, \tilde{\sigma}_x^{-1} \, (\ln r+\textsc{e})
+\zeta r\tilde{\sigma}^2_x\int_{A_x}|\nabla_A\beta|^2.
\end{equation}
To estimate the last term above,
multiply both sides of (\ref{3.20}) by \(\chi\big(2\mathop{\mathrm{dist}\, }\nolimits (x, \cdot)/\tilde{\sigma}_x\big)\), then
integrate over \(B(x,\tilde{\sigma}_x)\).
This gives
\[\begin{split}
\int_{A_x}|\nabla_A\beta|^2& \leq \zeta _4\tilde{\sigma}_x^{-2} \int_{B\,
(x, \tilde{\sigma}_x)}|\beta|^2 +\zeta _4'r^{-1}\tilde{\sigma}_x^{-3}\int_{B\,
(x, \tilde{\sigma}_x)}|\nabla_A\alpha |^2+ \zeta _4''r^{-1}\tilde{\sigma}_x^3\\
& \leq \zeta _5 \, r^{-1}\tilde{\sigma}_x^{-3}(\ln r+\textsc{e}).
\end{split}
\]
In the above, (\ref{est-ts}), Propositions \ref{T:prop3.1-},
\ref{prop:SW-L2-bdd},
and Lemma \ref{T:lem3.1} are used to derive the second line from the
first line. Inserting the preceding inequality into
(\ref{eq:y-bdd1}) and combining with (\ref{eq:y-bdd0}),
we have
\[
y_x(x) \leq\zeta _6\, \tilde{\sigma}_x^{-5}e^{-\zeta_6'(r\tilde{\sigma}_x^3)^{1/2}} (\ln
r+\textsc{e} ),
\]
where \(\zeta _6\), \(\zeta _6'\) are independent of \(r\), \(x\), and \((A, \Psi )\).
This implies the existence of constants \(r_1\), \(\zeta _7\), \(\zeta _7'\),
also independent of \(r\), \(x\), and \((A, \Psi )\),
such that \(y_x(x)\leq \zeta _7' \tilde{\sigma}^{-2}_x\) when \(r\geq r_1\) and \(\sigma_x\geq
\zeta _7 \, r^{-1/3}(\ln r)^{2/3}\). Rename \(\zeta _O\)
to be the larger of this \(\zeta _7\) and twice of the version from Proposition
\ref{prop:curv-varpi}, and re-set \(\delta _0'=\zeta _O\, r^{-1/3}(\ln
r)^{2/3}\). \hfill$\Box$\medbreak
\begin{rem}
It is basically equivalent to estimate either of
\(|\nabla_A\underline{\alpha}|\) and \(|\nabla_A\alpha|\), as they
are related by
\[
\Big| |\nabla_A\alpha|^2-|\nu|\,
|\nabla_A\underline{\alpha}|^2\Big|\leq \zeta \tilde{\sigma}^{-1}\quad \text{on
\(X_{\delta_0}^{'a}\supset X_{\delta_0'}^{'a}\)}.
\]
In fact, a slight modification of the argument in the preceding proof
would yield a direct, similar pointwise bound for
\(|\nabla_A\alpha|\). However, the bound for \(|\nabla_A\underline{\alpha}|\)
given in the previous proposition is slightly better, and it is more
amenable to our applications later.
\end{rem}
\section{Monotonicity and its consequences}\label{sec:mono}
This section contains variants and counterparts of Steps (3) and (4) of
Taubes's proof, as outlined in
Section \ref{sec:synopsis} above. Intermediate steps of these
arguments are used to improve various integral bounds and pointwise
estimates in the previous two sections.
Let \((A, \Psi)=(A_r, \Psi_r)\) be as in the statement of Lemma
\ref{lem:Etop-bdd1},
and let \(B\subset X\) be an open set. Following Taubes, we define the
(local) {\em energy} of \((A, \Psi)\in \mathop{\mathrm{Conn}}\nolimits (\bbS^+)\times \Gamma(\bbS^+)\) on \(B\) to
be:
\[
\mathcal{W}_B(A, \Psi):=4^{-1}r\int_B |\nu| \, \Big||\nu|-|\psi|^2\Big|.
\]
Taubes' proof of the monotonicity formula hinges on a bound for
\(\mathcal{W}_X\). In contrast to \cite{Ts}, \(\mathcal{W}_X\) is infinite in our
situation. However, Lemma \ref{T:lem3.1} provides us with bounds on
\(\mathcal{W}_{X_\bullet}\) for any compact \(X_\bullet\subset
X'\).
\subsection{Bounding \(\mathcal{W}_{Z_\delta }\)}\label{sec:W_z}
Recall the notation
\(\tilde{r}=r\tilde{\sigma}\).
Let \(\delta _1:= 2r^{-1/6} (\ln r+\textsc{e})^{3/4}\). Over \(X^{'a}_{\delta _1/2}\), \(\tilde{\sigma}^{-5}\leq
\tilde{r}\,
(\ln r+\textsc{e})^{-9/2}\). This ensures that there is an \(r\)-independent
constant \(\zeta _e\) such that the functions \(\varepsilon _1\) and \(\varepsilon
'\leq \zeta \varepsilon _1\) from Proposition
\ref{prop:curv-varpi} have the property that
\begin{equation}\label{bdd:eps_1}
(\ln r+\textsc{e} ) \, (\varepsilon _1+\varepsilon ') \leq \zeta _e \quad \text{over \(X^{'a}_{\delta
_1/2}\)}.
\end{equation}
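The pointwise inequality \(\tilde{\sigma}^{-5}\leq \tilde{r}\,(\ln r+\textsc{e} )^{-9/2}\) quoted above is elementary; writing \(L:=\ln r+\textsc{e}\), it follows from the lower bound \(\tilde{\sigma}\geq \delta _1/2\) on \(X^{'a}_{\delta _1/2}\):

```latex
% On X^{'a}_{\delta_1/2} one has \tilde{\sigma} \geq \delta_1/2 = r^{-1/6} L^{3/4}, hence
\[
\tilde{\sigma}^{-5}
 = \tilde{\sigma}^{-6}\,\tilde{\sigma}
 \leq \big(r^{-1/6}\, L^{3/4}\big)^{-6}\,\tilde{\sigma}
 = r\, L^{-9/2}\,\tilde{\sigma}
 = \tilde{r}\, L^{-9/2}.
\]
```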
Fix an \(x\in \nu ^{-1}(0)\cap X^{'a}\). Choose local coordinates \((t,
x_1, x_2, x_3)\) centered at \(x\) so that
\(\nu^{-1}(0)\) is identified with the \(t\)-axis, and that
\(\omega=2\nu ^+=d\eta\), where
\begin{equation}\label{eq:eta0}
\begin{split}
\eta& =-\big(\frac{x_1^2+x_2^2}{2}\big) \,
dt-x_3(x_1dx_2-x_2dx_1)+O\, (\sigma ^3), \\
|\eta|& \leq \sigma \, |\nu |/2+O\, (\sigma ^3).
\end{split}
\end{equation}
Let \(Z_x(\delta , l)\subset
X''\) denote the cylinder consisting of points with distance no greater
than \(\delta \) from \(\nu ^{-1}(0)\), and whose \(t\)-coordinate
lies in the interval
Left
inset: spatial contrast in the recovered signal.}
\label{fig_1}
\end{figure}
In the numerical solution, the propagation of an initial Gaussian
wave-packet with energy $(k_{o}a)^{2}V$ is recorded by a single transducer
placed in the waveguide consisting of a single propagating mode (in
practice, this waveguide is well represented by a one-dimensional chain).
The time evolution was performed through a second order Trotter-Suzuki
algorithm \cite{DeRaedt}. Fig.(\ref{fig_1}) shows the recovered signal at
the focal point, \textit{i.e.} the centroid of the original wave-packet. The
focalization functions for both methods are close to the exact reversed
propagation. However, a slight broadening of the focalized wave-packet is
observed in the TRM. In this case, the differences between PIF and TRM
procedures are small because the initial excitation is mainly composed of
states whose group velocity in the outgoing channel remains almost constant.
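Since the text only names the algorithm, here is a minimal, self-contained sketch of a second-order Trotter-Suzuki split step for a Gaussian packet on a periodic tight-binding chain. The lattice size, hopping amplitude, time step, and packet parameters are illustrative and not taken from the simulation described above.

```python
import numpy as np

def trotter2_step(psi, expV_half, expK):
    """One second-order Trotter-Suzuki step:
    exp(-iV dt/2) * exp(-iK dt) * exp(-iV dt/2)."""
    psi = expV_half * psi
    psi = np.fft.ifft(expK * np.fft.fft(psi))  # kinetic factor, diagonal in k-space
    return expV_half * psi

# Hypothetical parameters: N sites, unit hopping, lattice constant a = 1.
N, dt, steps = 256, 0.05, 200
x = np.arange(N)
k = 2.0 * np.pi * np.fft.fftfreq(N)
V = np.zeros(N)            # free chain; a local well here would model the resonator
Ek = -2.0 * np.cos(k)      # tight-binding dispersion; group velocity dEk/dk = 2 sin k
expV_half = np.exp(-0.5j * V * dt)
expK = np.exp(-1j * Ek * dt)

# Normalized Gaussian wave-packet with mean momentum k0.
k0, x0, w = np.pi / 2.0, N / 4.0, 8.0
psi = np.exp(-((x - x0) ** 2) / (2.0 * w**2) + 1j * k0 * x)
psi /= np.linalg.norm(psi)

for _ in range(steps):
    psi = trotter2_step(psi, expV_half, expK)

# Each factor is unitary, so the norm is preserved to machine precision,
# while the centroid drifts at the group velocity 2*sin(k0).
centroid = float(np.sum(x * np.abs(psi) ** 2))
```

For \(V=0\) the splitting is exact; with a nonzero potential the local error per step is \(O(dt^{3})\), which is what makes the scheme second order.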
\subsection{PIF in a classical Helmholtz resonator}
The PIF procedure for a classical wave equation can be readily deduced using
our Green's function strategy by considering its finite difference version.
This is best visualized as a system of coupled oscillators \cite{Economou}.
By including an inhomogeneity, it becomes a simple model for a Helmholtz
resonator coupled to an acoustic waveguide. Here, the lowest frequency
$\omega _{0}$ of the $\lambda /2$ mode in the resonator is represented by a
single mass $m_{0}$ and its corresponding spring.\ The waveguide is modelled
by a semi-infinite chain of identical masses $m=\alpha m_{0}$ placed at the
equilibrium points $x_{n}=na$ (with $a$ the lattice constant and $n$\ a
positive integer). The nearest neighbor spring constant is $K=m\omega _{
\mathrm{x}}^{2}$. The equations of motion for the corresponding
displacements\ $u_{n}$ can be written in a matrix form which, in the
frequency domain, reads
\begin{equation}
\mathbb{D}^{-1}(\omega )\mathbf{u}(\omega )=\left(
\begin{array}{ccc}
\omega ^{2}-\tilde{\omega}_{0}^{2} & \alpha \omega _{\mathrm{x}}^{2} & \cdots
\\
\omega _{\mathrm{x}}^{2} & \omega ^{2}-2\omega _{\mathrm{x}}^{2} & \\
\vdots & & \ddots
\end{array}
\right) \mathbf{u}(\omega )=0,
\end{equation}
where $\tilde{\omega}_{0}^{2}=\omega _{0}^{2}+\alpha \omega _{\mathrm{x}}^{2}$. Here, $\mathbb{D}(\omega )$ represents a momentum-displacement
response function, which is the resolvent of the dynamical matrix.\ Any
diagonal element has a simple expression in terms of continued fractions. In
particular, a far-field transducer is placed at the site $x_{s}$ on the
waveguide, where the response function is
\begin{equation}
D_{s,s}(\omega )=\frac{1}{\omega ^{2}-2\omega _{\mathrm{x}}^{2}-\Delta
_{L}(\omega )-\left[ \Delta _{R}(\omega )-\mathrm{i}\omega \eta (\omega )
\right] }.
\end{equation}
Here, the mean-field frequency $2\omega _{\mathrm{x}}^{2}$ appears shifted
by the dynamical effect from the oscillators at both sides of $x_{s}$. The
imaginary shift indicates that excitation components of different
frequencies would eventually escape through the waveguide at the right with
group velocities
\begin{equation}
v_{g}(\omega )=\frac{a}{2}\eta (\omega )=\frac{a}{2}\sqrt{4\omega _{\mathrm{x}}^{2}-\omega ^{2}}.
\end{equation}
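This closed form is consistent with the standard monatomic-chain dispersion \(\omega (k)=2\omega _{\mathrm{x}}\sin (ka/2)\), for which \(d\omega /dk=a\,\omega _{\mathrm{x}}\cos (ka/2)=\frac{a}{2}\sqrt{4\omega _{\mathrm{x}}^{2}-\omega ^{2}}\). A quick numerical cross-check (parameter values are arbitrary):

```python
import numpy as np

a, omega_x = 1.0, 1.0  # lattice constant and frequency scale (arbitrary units)

def omega(k):
    # Dispersion relation of the monatomic chain.
    return 2.0 * omega_x * np.sin(k * a / 2.0)

def v_g(w):
    # Closed form from the text: v_g(omega) = (a/2) * sqrt(4 omega_x^2 - omega^2).
    return (a / 2.0) * np.sqrt(4.0 * omega_x**2 - w**2)

# Compare against a centered finite-difference derivative d(omega)/dk
# at several wavenumbers inside the band (0 < k < pi/a).
h = 1e-6
for k in (0.3, 1.0, 2.0):
    dwdk = (omega(k + h) - omega(k - h)) / (2.0 * h)
    assert abs(dwdk - v_g(omega(k))) < 1e-8
```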
The propagation of an excitation originating in the resonator $x_{0}$ is
detected in $x_{s}$ inside the waveguide. The recorded signal presents a
strong component in $\tilde{\omega}_{0}$, which is the \textquotedblleft
carrier\textquotedblright\ frequency as can be appreciated from the density
of states at the resonator's site. The displacement at the transducer,
resulting from an initial excitation $\xi _{0}$ in the resonator, can be
expressed as
\begin{equation}
u_{s}(t)=G_{s,0}(t)\xi _{0},\ t>0, \label{eq_reg}
\end{equation}
where $G_{s,0}(t)$ is the Green's function describing the
displacement-displacement response. In general, the connection between the
Green's function and the momentum-displacement response is given by
\begin{equation}
G_{i,j}(\omega )=\mathrm{i}\omega D_{i,j}(\omega ).
\end{equation}
As before, the complete evolution $\tilde{u}_{s}(t)$ in the transducer can
be conceived as the forward evolution corresponding to the positive times
and a backward evolution accounting for the negative ones. In this sense,
\begin{equation}
\tilde{u}_{s}(t)=\left\{
\begin{array}{cc}
u_{s}(-t), & t<0, \\
u_{s}(t), & t\geq 0
\end{array}
\right.
\end{equation}
and the injection that produces the desired reversion can be obtained as
\begin{equation}
\delta u_{s}(\omega )=\frac{\tilde{u}_{s}^{\ast }(\omega )}{G_{s,s}(\omega )}.
\end{equation}
According to eq.(\ref{eq_reg}), the complete evolution reads in the
frequency domain as
\begin{align}
\tilde{u}_{s}^{\ast }(\omega )& =u_{s}^{\ast }(\omega )+u_{s}(\omega ) \\
& =\left[ G_{s,0}^{\ast }(\omega )+G_{s,0}(\omega )\right] \Delta u_{0}.
\end{align}
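The usefulness of the even extension is that its spectrum is real, which is why $\tilde{u}_{s}^{\ast }(\omega )$ reduces to the sum of partial evolutions in the relation just displayed. A discrete analogue of this property (our own construction, with a random recorded signal):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64
u_pos = rng.standard_normal(M)            # signal recorded for t >= 0

# Even extension u(-t) = u(t): in DFT terms x[n] = x[(N - n) mod N],
# with N = 2M - 1 samples in total.
x = np.concatenate([u_pos, u_pos[1:][::-1]])

X = np.fft.fft(x)                         # spectrum of the extended signal
U = np.fft.fft(u_pos, n=len(x))           # spectrum of the t >= 0 part alone
```

Here `X` is real and equals `U + U*` up to the shared $t=0$ sample, the discrete counterpart of $\tilde{u}_{s}^{\ast }(\omega )=u_{s}^{\ast }(\omega )+u_{s}(\omega )$.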
Here again, the key tool is the Dyson equation connecting the two subspaces
delimited by the transducer. We can write
\begin{equation}
G_{s,0}(\omega )=\frac{G_{s,s}(\omega )}{G_{s,s}^{\ast }(\omega )}\,
G_{s,0}^{\ast }(\omega ),
\end{equation}
and the PIF injection can be rewritten in terms of the partial evolution
corresponding to the detected signal
\begin{align}
\delta u_{s}^{\mathrm{PIF}}(\omega )& =\left[ \frac{1}{G_{s,s}(\omega )}-
\frac{1}{G_{s,s}^{\ast }(\omega )}\right] u_{s}^{\ast }(\omega ) \\
& =\frac{1}{\mathrm{i}\omega }\left[ \frac{1}{D_{s,s}(\omega )}-\frac{1}{
D_{s,s}^{\ast }(\omega )}\right] u_{s}^{\ast }(\omega ) \\
& =\eta (\omega )u_{s}^{\ast }(\omega ).
\end{align}
As in the quantum case, we denote $2\omega _{\mathrm{x}}u_{s}^{\ast }(\omega
)$ as the Fourier transform $\delta u_{s}^{\mathrm{TRM}}(\omega )$ of the
injection function in the TRM protocol. In consequence, the perfect time
reversal is obtained only once a further filter $v_{g}(\omega )/v_{\max }$
is applied. Hence, the PIF formula for an internal source in the acoustic
case is
\begin{equation}
\delta u_{s}^{\mathrm{PIF}}(\omega )=\frac{v_{g}(\omega )}{v_{\max }}\delta
u_{s}^{\mathrm{TRM}}(\omega ).
\end{equation}
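In practice the prescription amounts to a band-limited multiplication in the frequency domain. A sketch (our own; the Gaussian injection spectrum is purely illustrative, and units $a=\omega _{\mathrm{x}}=1$ are assumed):

```python
import numpy as np

a, omega_x = 1.0, 1.0
v_max = a * omega_x                # maximum of v_g, reached at the band center

def pif_filter(omega):
    # v_g(omega)/v_max inside the band |omega| < 2*omega_x, zero outside.
    band = np.clip(4.0 * omega_x**2 - omega**2, 0.0, None)
    return 0.5 * a * np.sqrt(band) / v_max

omega = np.linspace(-3.0, 3.0, 601)
du_trm = np.exp(-((omega - 1.0) ** 2))    # hypothetical TRM injection spectrum
du_pif = pif_filter(omega) * du_trm       # PIF injection from the formula above
```

The filter leaves the band center untouched and suppresses the slow band-edge components, which is the correction PIF adds to the TRM injection.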
Notably, the prescription remains exactly the same as that of the quantum
version (see eq.(\ref{eq_final})). This implies that the effectiveness of
the filter will depend on the structure of the waveguide and the initial
wave-packet, regardless of the details of the cavity. However, for cases in
which the group velocity in free space is constant, it is clear that
$\delta u_{s}^{\mathrm{PIF}}(\omega )\equiv \delta u_{s}^{\mathrm{TRM
}}(\omega )$.
A numerical simulation of the reconstruction of the initial excitation was
performed using the Pair Partitioning method (PPM) \cite{CalvoMC}, which
yields the complete dynamics by alternating among the evolutions of pairs of
coupled masses. Similarly to the Trotter algorithm, PPM approximates
the actual evolution determined by the Hamiltonian $H_{1}+H_{2}$ during a
small time step $\delta t$ as a sequence of unitary transformations $U\left[
(H_{1}+H_{2})\delta t\right] \simeq U\left[ H_{1}\delta t\right] U\left[
H_{2}\delta t\right] $ and results in a perfectly reversible algorithm. The
calculated dynamics is best depicted, as in fig.(\ref{fig_2}), by analyzing
the local energy which avoids the fast fluctuations shown by displacement
and momentum. Here, the left panel shows (in a log scale) that the recovered
resonator's local energy coincides with the exact reversal over all the
relevant ranges. Since in this model the waveguide has a cut-off frequency,
as would be the case in a grated waveguide, the PIF filter improves the TRM
focusing when it comes to reproducing the low intensity signals. This is
particularly evident in the reproduction of the time-reversed \textit{
survival collapse}. This is a sudden dip in the local energy resulting from
the interference between the excitation surviving in the resonator and that
returning from the waveguide. This surprising phenomenon was originally
described in the context of quantum spin channels \cite{Rufeil}. The perfect
contrast of the time reversed signal provided by PIF is evidenced in the
right panel by the exact cancellation of displacements except for the
resonator. It is also interesting to notice that both procedures produce a
phantom signal outside the \textquotedblleft silence
region\textquotedblright. While the TRM has an evident imperfection in the
localization of the signal, the PIF\ procedure yields this absolute
cancellation even outside the cavity region defined by the far field
transducer at $x_{s}$. The resulting localization, which
corresponds to $\lambda /2$ of the carrier signal, was only enabled by
filtering out the band edge components in the emitted signal. From this
perspective, the PIF procedure contributes to the goal of achieving focusing
beyond the diffraction limit \cite{FinkSCIENCE}.
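The time reversibility exploited above is shared by any symmetric splitting scheme. The sketch below (our own construction; a plain kick-drift-kick step on a periodic chain rather than the actual pair partitioning of \cite{CalvoMC}) evolves a chain forward and then with negated time step, recovering the initial state to machine precision:

```python
import numpy as np

def split_step_chain(x, p, dt, n_steps, k=1.0, m=1.0):
    # Symmetric (kick-drift-kick) splitting for a periodic chain of
    # masses with nearest-neighbour springs. As with PPM, each sub-step
    # is solved exactly, and the symmetric composition is time
    # reversible: evolving with -dt undoes the evolution with +dt.
    x, p = x.copy(), p.copy()
    for _ in range(n_steps):
        f = k * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1))
        p += 0.5 * dt * f                 # half kick
        x += dt * p / m                   # full drift
        f = k * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1))
        p += 0.5 * dt * f                 # half kick
    return x, p
```

Running the scheme backward reproduces the initial displacements and momenta up to floating-point roundoff, which is the property that makes the simulated reversal in fig.(\ref{fig_2}) exact.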
\begin{figure}[tbp]
\includegraphics[angle=90,width=8cm]{resonator.eps}
\caption{(color online). (a) Balls and springs model for a Helmholtz
resonator coupled to a waveguide. (b) Focalization signal in the resonator.
Left panel: local energy recovered as a function of time. Both methods, the
TRM (red dashed) and PIF (blue dotted) are contrasted with the exact
reversal (solid). PIF and exact curves coincide for the whole temporal
range. Right Panel: Displacements at the focalization time.}
\label{fig_2}
\end{figure}
\section{Conclusions}
We presented an expression of the PIF procedure which results particularly
simple when the excitation is generated by an internal source. We observed
that, contrary to what happens when the source is external, the prescription
does not involve internal details of the cavity but only the simpler
information about the propagation in the outer region. Hence, this filter
applies to two physically relevant situations: 1) When the excitation is
actually originated in the interior of the cavity. In this case, one obtains
a perfect reversal of the wave function for all the times after the source
has been turned off. 2) When one uses an external source in a situation that
allows for a clear separation between incoming and outgoing waves. In that
case, one can use the recording of the outgoing wave to perfectly reverse
the whole excursion of the excitation through the cavity. This condition is
achieved when the boundaries are placed far enough from the reverberant
region, as in the quantum and acoustic bazooka devices. An interesting
consequence of our result in the acoustical case is that TRM produces a
perfect time reversal within the cavity, provided that the propagation
beyond its contour is free of further reverberances.
A numerical assessment of the reversion fidelity shows that PIF constitutes a
notable improvement over the TRM prescription in cases where the
dispersion relation is non-linear, as for typical
excitations described by the Schr\"{o}dinger equation or, in acoustic
systems, when the escape velocity is not constant. This occurs in the
presence of multiple propagating channels or when collisions outside the
cavity have a relevant contribution as in grated waveguides.
The PIF prescription becomes an analytical tool to design specific
excitations well beyond the discussed TRM context, thus providing an
alternative strategy for coherent control. In particular, it allows the
design of quantum bazookas that shoot wave packet excitations. Feasible
implementations of this concept include the local generation of vibrational
waves in nanoelectromechanical structures \cite{Schwab}. There, the
excitation is injected through the coupling of a Cooper-pair box to a
nanocantilever. In Bose-Einstein condensates confined to an optical lattice
\cite{Chu}, there is a well-defined set of quantum states with tunable
couplings. Thus, the generation of macroscopic wavefunctions would benefit
from our simple and consistent PIF prescription. Last but not least, in NMR,
the injection of targeted spin-wave packets in chains of interacting spins
\cite{Madi-Ernst} is possible through the local injection of the
polarization stored in a rare $^{13}$C nucleus \cite{Alvarez}.
\acknowledgments We thank R.A. Jalabert and L.E.F. Foa Torres for valuable
discussions, F. Vignon for helpful correspondence and F. Pastawski for
comments on the manuscript. We acknowledge financial support of CONICET,
ANPCyT and SECyT-UNC.
\section{Introduction}
Bimetric theory~\cite{Hassan:2011zd} is a model for a massive spin-2 field interacting with a massless one.
It generalizes both general relativity (GR), which describes a massless spin-2 field,
as well as nonlinear massive gravity~\cite{deRham:2010kj}, which describes a massive
spin-2 field alone. For reviews of bimetric theory and massive gravity, see~\cite{Schmidt-May:2015vnx}
and~\cite{Hinterbichler:2011tt, deRham:2014zqa}, respectively. Bimetric theory can be further generalized
to multimetric theory~\cite{Hinterbichler:2012cn} which always includes one massless spin-2 and several
massive spin-2 degrees of freedom. This is in agreement with the fact that interacting theories for more
than one massless spin-2 field cannot exist~\cite{Boulanger:2000rq}.
The bi- and multimetric actions are formulated in terms of symmetric rank-2 tensor fields,
whose fluctuations around maximally symmetric backgrounds do not coincide with the
spin-2 mass eigenstates~\cite{Hassan:2012wr}.
The form of their interactions is strongly constrained by demanding the
absence of the Boulware-Deser ghost instability~\cite{Boulware:1973my}.
The ghost also makes it impossible to couple more than one of the
tensor fields to the same matter sector, at least not through a standard minimal coupling,
mimicking that of GR~\cite{Yamashita:2014fga, deRham:2014naa}.
A consequence of this is that the gravitational force is necessarily mediated by a superposition
of the massless and the massive spin-2 modes and not by a massless field alone, as one might expect.
It is an interesting open question whether more general matter couplings can be realized in
bimetric theory without re-introducing the ghost.
It has been shown that the ghost does not appear at low energies
if one couples the two tensor fields $g_{\mu\nu}$ and $f_{\mu\nu}$ of bimetric theory
to the same matter source through an ``effective metric" of the form~\cite{deRham:2014naa},
\begin{eqnarray}\label{effmetr}
G_{\mu\nu}=a^2g_{\mu\nu}+2ab\, g_{\mu\rho}\big(\sqrt{g^{-1}f}\,\big)^\rho_{~\nu}+b^2f_{\mu\nu}\,.
\end{eqnarray}
Here, $a$ and $b$ are two arbitrary real constants and the square-root matrix $\sqrt{g^{-1}f}$ is defined via
$\big(\sqrt{g^{-1}f}\,\big)^2=g^{-1}f$.
Ref.~\cite{Noller:2014sta} suggested a similar expression for
bimetric theory formulated in terms of the vierbeine
$e^a_{~\mu}$ and $v^a_{~\mu}$ in $g_{\mu\nu}=e^a_{~\mu}\eta_{ab}e^b_{~\nu}$
and $f_{\mu\nu}=v^a_{~\mu}\eta_{ab}v^b_{~\nu}$. Namely, they couple the metric,
\begin{eqnarray}\label{effmetrvb}
\tilde{G}_{\mu\nu}=\big(ae^a_{~\mu} +bv^a_{~\mu}\big)\eta_{ab}\big(ae^b_{~\nu} +bv^b_{~\nu}\big)\,,
\end{eqnarray}
to matter. This metric coincides with (\ref{effmetr}) if and only if the symmetrization condition,
\begin{eqnarray}\label{symcond}
e^a_{~\mu}\eta_{ab} v^b_{~\nu}=v^a_{~\mu}\eta_{ab} e^b_{~\nu}\,,
\end{eqnarray}
holds. The latter is equivalent to imposing the existence of the square-root matrix
$\sqrt{g^{-1}f}$~\cite{Deffayet:2012zc} which appears in (\ref{effmetr}) as well as in
the interaction potential of bimetric theory. However, in bimetric theory in
vierbein formulation with matter coupled to the metric $\tilde{G}_{\mu\nu}$, the condition
(\ref{symcond}) is incompatible with the equations of motion~\cite{Hinterbichler:2015yaa}.
Hence, the two couplings cannot be made equivalent.
This implies in particular that the vierbein theory
with effective matter coupling does not possess a formulation in terms of metrics.
The two effective matter couplings above have been extensively studied in the literature
and their phenomenology has already been widely explored in the context of cosmology
(see, e.g., \cite{Enander:2014xga, Comelli:2015pua, Gumrukcuoglu:2015nua} for early works).
The effective theory avoids the ghost at low energies but at high energies it is not consistent
and requires a ghost-free completion.
Finding such a completion is of particular interest because the effective
metrics have the interesting property that they can couple the massless spin-2 mode
alone to matter~\cite{Schmidt-May:2014xla}.
The aim of the present work is to construct a symmetric coupling for the two tensor fields $g_{\mu\nu}$ and $f_{\mu\nu}$
of bimetric theory to the same matter source, keeping the theory free from the Boulware-Deser ghost even at high energies.
We obtain this matter coupling by integrating out a non-dynamical field in ghost-free {\bf tri}metric theory.
For low energies our result reduces to the known coupling through the effective metric~(\ref{effmetrvb}).
At high energies, the coupling in the bimetric setup is highly nontrivial.
In particular, it does not possess the same form as in GR. Nevertheless, it is always possible to express the
theory in a simple way (and in terms of a GR coupling) using the trimetric action,
which essentially provides a formulation in terms of auxiliary fields.
\section{Ghost-free trimetric theory}
\subsection{Trimetric action}
We will work with the following ghost-free trimetric action for the
three symmetric tensor fields $g_{\mu\nu}$, $f_{\mu\nu}$ and $h_{\mu\nu}$,
\begin{eqnarray}\label{trimact}
S[g,f,h]=
S_\mathrm{EH}[g]+S_\mathrm{EH}[f]+S_\mathrm{EH}[h]
+S_\mathrm{int}[h,g]
+S_\mathrm{int}[h,f]
+\epsilon S_\mathrm{matter}[h, \phi_i]\,.
\end{eqnarray}
It includes the Einstein-Hilbert terms,
\begin{eqnarray}
S_\mathrm{EH}[g]=m_g^2\int\mathrm{d}^4x~\sqrt{g}~R(g)\,,
\end{eqnarray}
with ``Planck mass" $m_g$ and the bimetric interactions,
\begin{eqnarray}\label{intact}
S_\mathrm{int}[h,g]=-2\int\mathrm{d}^4x~\sqrt{h}~\sum_{n=0}^4\beta^{g}_n\,e_n\big(\sqrt{h^{-1}g}\big)\,,
\end{eqnarray}
with parameters $\beta^g_n$ (and $\beta^f_n$ for $S_\mathrm{int}[h,f]$).
In our parameterization these interaction parameters
carry mass dimension 4. The scalar functions $e_n$ are the elementary symmetric
polynomials, whose general form will not be relevant in the following.
For later use, we only note that they satisfy,
\begin{eqnarray}\label{deten}
\det (\mathbb{1}+X)=\sum_{n=0}^4e_n(X)\,,
\qquad
e_n(\lambda X)=\lambda^n e_n(X)\,,~~\lambda\in\mathbb{R}\,.
\end{eqnarray}
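Both properties can be checked numerically by computing the $e_n$ from the eigenvalues. A sketch (our own construction, for a random $4\times 4$ matrix):

```python
import numpy as np

def elementary_symmetric(X):
    # e_n(X) for a 4x4 matrix: expanding prod_i (1 + t*lambda_i) in t
    # gives poly[n] = e_n(X), since e_n is the n-th elementary symmetric
    # polynomial of the eigenvalues.
    poly = np.array([1.0 + 0.0j])
    for lam in np.linalg.eigvals(X):
        poly = np.convolve(poly, np.array([1.0, lam]))
    return poly

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))
e = elementary_symmetric(X)
```

The sum of the five coefficients reproduces $\det (\mathbb{1}+X)$, and rescaling $X\rightarrow \lambda X$ rescales $e_n\rightarrow \lambda ^n e_n$, as stated in the text.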
$S_\mathrm{matter}[h, \phi_i]$ is a standard matter coupling (identical to the one in GR)
for the metric $h_{\mu\nu}$.\footnote{Throughout the whole paper we will use a notation for the matter
action which suggests that the source contains only bosons.
For fermions, it is the vierbein of $h_{\mu\nu}$ that appears in the matter coupling. However,
since we will anyway work in the vierbein formulation later on, this is not a problem and the
matter coupling to fermions is also covered by our analysis.}
For later convenience, we have included it in the action with a dimensionless parameter $\epsilon$ in front.
As already mentioned in the introduction, consistency does not allow
the other two metrics to couple to the same matter sector.
The structure of the action is dictated by the absence of the
Boulware-Deser ghost. At this stage, (\ref{trimact}) is the most general trimetric theory
known to be free from this instability.
In particular, the interactions between the three metrics can only be pairwise through the
above bimetric potentials and must not form any loops~\cite{Hinterbichler:2012cn, Nomura:2012xr}.
Moreover, they only contain five free parameters each
and are functions of the square-root matrices $\sqrt{h^{-1}g}$ and $\sqrt{h^{-1}f}$.
The existence of real square-root matrices
is in general not guaranteed and needs to be imposed on the theory as additional
constraints for the action to be well-defined.
At the same time, these constraints ensure a compatible causal structure of the two
metrics under the square root~\cite{Hassan:2017ugh}.
In this paper we will focus on a particular model with
$\beta^{g}_n=\beta^{f}_n=0$ for $n\geq 2$ in the limit
$m^2_h\rightarrow0$. The choice of interaction parameters significantly
simplifies the equations and the limit makes the field $h_{\mu\nu}$ non-dynamical.
The potential in this case simply reads,
\begin{eqnarray}\label{intact2}
S_\mathrm{int}[h,g]=-2\int\mathrm{d}^4x~\sqrt{h}~\Big(\beta^{g}_0+\beta^{g}_1\mathrm{Tr}\sqrt{h^{-1}g}\Big)\,,
\end{eqnarray}
and similar for $S_\mathrm{int}[h,f]$.
\subsection{Vierbein formulation}\label{sec:vb}
It will become necessary later on to work in the vierbein formulation
first introduced in~\cite{Hinterbichler:2012cn}.
Therefore we define the vierbeine for the three metrics,
\begin{eqnarray}\label{defvb}
g_{\mu\nu}=e^a_{~\mu}\eta_{ab} e^b_{~\nu}\,,\qquad
f_{\mu\nu}=v^a_{~\mu}\eta_{ab} v^b_{~\nu}\,,\qquad
h_{\mu\nu}=u^a_{~\mu}\eta_{ab} u^b_{~\nu}\,.
\end{eqnarray}
Existence of the square-root matrices in the interaction
potential requires them to satisfy the following symmetry
constraints~\cite{Hinterbichler:2012cn, Deffayet:2012zc},
\begin{eqnarray}\label{symconstr}
e^\mathrm{T}\eta u=u^\mathrm{T}\eta e\,,\qquad
v^\mathrm{T}\eta u=u^\mathrm{T}\eta v\,,
\end{eqnarray}
which we have expressed using matrix notation.
When they are imposed the square-roots can be evaluated to give,
\begin{eqnarray}
\sqrt{h^{-1}g}=u^{-1}e\,,\qquad
\sqrt{h^{-1}f}=u^{-1}v\,.
\end{eqnarray}
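That $u^{-1}e$ indeed squares to $h^{-1}g$ hinges on the constraints (\ref{symconstr}). The sketch below (our own construction; the vierbeine are random matrices built to satisfy the first constraint) verifies the square-root property numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

u = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # random invertible vierbein
S = rng.standard_normal((4, 4))
S = S + S.T                                          # arbitrary symmetric matrix

# Choose e such that e^T eta u = S is symmetric (the constraint above).
e = np.linalg.inv(eta @ u).T @ S

g = e.T @ eta @ e
h = u.T @ eta @ u
root = np.linalg.inv(u) @ e        # candidate square root of h^{-1} g
```

Without the symmetry constraint the last identity fails, which is why the constraints have to be imposed (or derived dynamically, as below) for the potential to be well-defined.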
The interaction potential in $S_\mathrm{int}[h,g]+S_\mathrm{int}[h,f]=-\int\mathrm{d}^4x\,V$
can then be written in the form,
\begin{eqnarray}\label{potvb}
V(e,v,u)=2(\det u)~\Big(\beta^{g}_0+\beta^{f}_0+\beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v]\Big)\,.
\end{eqnarray}
In our particular trimetric model, the constraints (\ref{symconstr}) follow dynamically
from the equations of motion for $e$ and $v$, which was already noticed
in Refs.~\cite{Hinterbichler:2012cn, Deffayet:2012zc}.
We review the underlying argument in a bit more detail because
it will become relevant for our analysis later.
Namely, the equations for $e$ contain six constraints
arising from local Lorentz symmetry.
In order to make this more precise, we split up the Lagrangian
$\mathcal{L}=\mathcal{L}_\mathrm{sep}+\mathcal{L}_\mathrm{sim}$
into terms $\mathcal{L}_\mathrm{sep}$ that are invariant
under \textit{separate} Lorentz transformations and $\mathcal{L}_\mathrm{sim}$ that are only invariant under
\textit{simultaneous} Lorentz transformations of the three vierbeine.
Invariance under separate linearized Lorentz transformations of $e$
can be used to show that the terms $\mathcal{L}_\mathrm{sep}$ satisfy the identity,
\begin{eqnarray}\label{idlor}
\frac{\delta \mathcal{L}_\mathrm{sep}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T}
-e^{-1}\eta^{-1} \left(\frac{\delta \mathcal{L}_\mathrm{sep}}{\delta e}\right)^\mathrm{T}=0
\,.
\end{eqnarray}
The equations of motion
$\frac{\delta \mathcal{L}_\mathrm{sep}}{\delta e}+\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta e}=0$
then imply that the remaining terms $\mathcal{L}_\mathrm{sim}$ in the action will be constrained
to satisfy~(\ref{idlor}) on-shell,
\begin{eqnarray}\label{constlor}
\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T}
-e^{-1}\eta^{-1} \left(\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta e}\right)^\mathrm{T}=0\,.
\end{eqnarray}
Using the same arguments, we get a similar constraint for $v$,\footnote{Due to one overall Lorentz
invariance of the action, the constraint obtained from the equations for $u$ will be equivalent to
(\ref{constlor}) and (\ref{constlorv}).}
\begin{eqnarray}\label{constlorv}
\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta v}\,\eta^{-1} \,(v^{-1})^\mathrm{T}
-v^{-1}\eta^{-1} \left(\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta v}\right)^\mathrm{T}=0\,.
\end{eqnarray}
Finally, with $\mathcal{L}_\mathrm{sim}=-\int\mathrm{d}^4 x ~V(e,v,u)$ and (\ref{potvb}), it is straightforward to
show that (\ref{constlor}) and (\ref{constlorv}) imply the symmetry of $u^{-1}\eta^{-1} (e^{-1})^\mathrm{T}$
and $u^{-1}\eta^{-1} (v^{-1})^\mathrm{T}$, which is equivalent to the constraints (\ref{symconstr}).\footnote{The
last statement follows trivially from $\mathbb{1}=\mathbb{1}^\mathrm{T}=(SS^{-1})^\mathrm{T}
=(S^{-1})^\mathrm{T}S^\mathrm{T}=(S^{-1})^\mathrm{T}S$ for any symmetric matrix $S$.}
\subsection{Equations of motion}
From now on we focus on the limit $m^2_h\rightarrow0$ which freezes out the dynamics
of the metric $h_{\mu\nu}$ by removing its kinetic term $S_\mathrm{EH}[h]$ from the action.
In this limit we can solve the equation of motion for $h_{\mu\nu}$ (or its vierbein $u^a_{~\mu}$) algebraically
and integrate out the nondynamical field. The trimetric action hence assumes the form
of a bimetric theory augmented by an auxiliary field.\footnote{Note that this limit is
conceptually different from the ones studied in the context of bimetric theory in earlier
works~\cite{Baccetti:2012bk, Hassan:2014vja} since it freezes out the metric that is coupled to the matter sector.}
Technically, it would be sufficient to assume that $m_h$
is negligible compared to all other relevant energy scales in the theory
(the two other Planck masses, the spin-2 masses and the energies of matter particles).
All our findings can thus also be thought of as being a zeroth-order approximation to trimetric
theory with very tiny values for $m_h\neq 0$.
For $m^2_h=0$ the equations of motion obtained by varying the action~(\ref{intact2})
with respect to the inverse vierbein $u_a^{~\mu}$ are~\cite{Hassan:2011vm},
\begin{eqnarray}\label{withmatter}
\beta_1^g e^a_{~\mu}+\beta_1^f v^a_{~\mu}
-\Big(\beta^{g}_0+\beta^{f}_0+\beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v]\Big)u^a_{~\mu}
=-\epsilon\, T^a_{~\mu}\,,
\end{eqnarray}
where we have introduced the ``vierbein" stress-energy tensor,
\begin{eqnarray}
T^a_{~\mu}\equiv T^a_{~\mu}(u,\phi_i)\equiv-\frac{1}{2\det u}\frac{\delta S_\mathrm{matter}}{\delta u_a^{~\mu}}\,.
\end{eqnarray}
It will be easier to work with a form of the equations without the traces appearing.
Tracing equation (\ref{withmatter}) with $u_a^{~\mu}$ gives,
\begin{eqnarray}
\beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v]
=-\frac{4(\beta^g_0+\beta^f_0)}{3}+\frac{\epsilon}{3} u_a^{~\mu}T^a_{~\mu}\,.
\end{eqnarray}
We insert this into (\ref{withmatter}) and obtain,
\begin{eqnarray}\label{withmatter2}
\beta_1^g e^a_{~\mu}+\beta_1^f v^a_{~\mu}+\frac{\beta^g_0+\beta^f_0}{3}u^a_{~\mu}
=\epsilon\mathcal{T}^a_{~\mu}\,,
\end{eqnarray}
with,
\begin{eqnarray}
\mathcal{T}^a_{~\mu}=\mathcal{T}^a_{~\mu}(u,\phi_i)\equiv
\frac{1}{3}u^a_{~\mu} u_b^{~\rho}T^b_{~\rho}-T^a_{~\mu}\,.
\end{eqnarray}
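The elimination of the trace terms can be verified with random data: if $T^a_{~\mu}$ is chosen such that (\ref{withmatter}) holds exactly, then (\ref{withmatter2}) must follow identically. A sketch (our own construction; all numerical values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
b1g, b1f, b0g, b0f, eps = 0.7, -0.3, 1.1, 0.4, 0.2
e = rng.standard_normal((4, 4))
v = rng.standard_normal((4, 4))
u = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
ui = np.linalg.inv(u)

# Define T so that eq. (withmatter) holds exactly for this random data.
tr_term = b0g + b0f + b1g * np.trace(ui @ e) + b1f * np.trace(ui @ v)
T = -(b1g * e + b1f * v - tr_term * u) / eps

# Eq. (withmatter2) should then follow, with curly-T as defined in the text.
calT = (np.trace(ui @ T) / 3.0) * u - T
lhs = b1g * e + b1f * v + ((b0g + b0f) / 3.0) * u
```

The residual `lhs - eps * calT` vanishes to machine precision, confirming the trace manipulation.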
Our aim in the following is to solve equation (\ref{withmatter2}) for $u^a_{~\mu}$,
plug back the solution into the trimetric action and interpret the result
as an effective bimetric theory with modified matter coupling.
\section{Vacuum solutions}
\subsection{Exact solution for $h_{\mu\nu}$}
In vacuum with $\epsilon=0$, equation (\ref{withmatter2})
straightforwardly gives the solution for the vierbein $u$ in terms of $e$ and $v$.
In matrix notation it reads,
\begin{eqnarray}\label{usol}
u=-\frac{3}{\beta^g_0+\beta^f_0}\Big(\beta_1^ge +\beta_1^fv\Big)\,.
\end{eqnarray}
The corresponding expression for the metric is,
\begin{eqnarray}\label{hsolsc}
h=u^\mathrm{T}\eta u
=
\Big( ae +bv\Big)^\mathrm{T}\eta\Big(ae +bv\Big)\,,
\end{eqnarray}
with constants,
\begin{eqnarray}\label{defab}
a\equiv\frac{3\beta_1^g}{\beta^g_0+\beta^f_0}\,,\qquad
b\equiv\frac{3\beta_1^f}{\beta^g_0+\beta^f_0}\,.
\end{eqnarray}
The solution (\ref{hsolsc}) has the same form as the effective metric (\ref{effmetrvb}).
The additional symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$
is equivalent to the existence of the square-root matrix $\sqrt{g^{-1}f}$.
But, in general, it is not obvious that the existence of this matrix is
automatically guaranteed by the existence of both $\sqrt{h^{-1}g}$ and
$\sqrt{h^{-1}f}$. However, in our setup, the symmetrization constraint
is ensured to be satisfied dynamically.
To see this, we simply insert the solution (\ref{usol}) for $u$ into one of the
dynamical trimetric constraints (\ref{symconstr}). This gives,
\begin{eqnarray}\label{symconstr2}
0&=&e^\mathrm{T}\eta u-u^\mathrm{T}\eta e\nonumber\\
&=&\frac{3}{\beta^g_0+\beta^f_0}\left[\big(\beta_1^ge +\beta_1^fv\big)^\mathrm{T}
\eta e-e^\mathrm{T}\eta \big(\beta_1^ge +\beta_1^fv\big)\right]
=\frac{3\beta_1^f}{\beta^g_0+\beta^f_0}\left[v^\mathrm{T}\eta e-e^\mathrm{T}\eta v\right]\,,
\end{eqnarray}
which thus directly implies,
\begin{eqnarray}\label{constraint}
e^\mathrm{T}\eta v-v^\mathrm{T}\eta e=0\,.
\end{eqnarray}
The fact that $e^\mathrm{T}\eta v$ is guaranteed to be symmetric dynamically
will become important in the following.
As already stated in the introduction, when (\ref{constraint}) holds,
we can write the right-hand side in terms of metrics,
\begin{eqnarray}\label{solh}
h=a^2g+2ab\, g\big(\sqrt{g^{-1}f}\,\big)+b^2f\,.
\end{eqnarray}
The solution for $h_{\mu\nu}$ thus also coincides with the effective metric (\ref{effmetr}).
\subsection{Effective bimetric potential}\label{sec:effpot}
We now compute the effective potential for the two dynamical vierbeine.
To this end, we insert the solution (\ref{usol}) for $u$
into the trimetric potential (\ref{potvb}).
This gives the effective potential,
\begin{eqnarray}\label{veffvac}
V_\mathrm{eff}(e,v)
=-\frac{54}{\big(\beta^{g}_0+\beta^{f}_0\big)^3}\det \Big(\beta_1^ge +\beta_1^fv\Big)
=\det e\sum_{n=0}^4\beta_n e_n\big(e^{-1}v\big)
\,,
\end{eqnarray}
with interaction parameters,
\begin{eqnarray}\label{betan}
\beta_n\equiv B \left(\frac{\beta_1^f}{\beta_1^g}\right)^n\,,
\qquad
B\equiv-\frac{54(\beta_1^g)^4}{\big(\beta^{g}_0+\beta^{f}_0\big)^3}\,.
\end{eqnarray}
In the second equality of (\ref{veffvac}) we have used (\ref{deten}).
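The equality of the two expressions in (\ref{veffvac}) is easily confirmed numerically (our own sketch; vierbeine near the identity are assumed so that all matrices are safely invertible):

```python
import numpy as np

rng = np.random.default_rng(5)
b1g, b1f, b0g, b0f = 0.8, 0.5, 1.2, 0.6
B0 = b0g + b0f
e = np.eye(4) + 0.2 * rng.standard_normal((4, 4))
v = np.eye(4) + 0.2 * rng.standard_normal((4, 4))

# Left: -54/B0^3 det(b1g*e + b1f*v), obtained by inserting u into V.
lhs = -54.0 / B0**3 * np.linalg.det(b1g * e + b1f * v)

# Right: det(e) * sum_n beta_n e_n(e^{-1} v), with beta_n = B (b1f/b1g)^n.
B = -54.0 * b1g**4 / B0**3
poly = np.array([1.0 + 0.0j])
for lam in np.linalg.eigvals(np.linalg.inv(e) @ v):
    poly = np.convolve(poly, np.array([1.0, lam]))   # poly[n] = e_n(e^{-1}v)
betas = B * (b1f / b1g) ** np.arange(5)
rhs = np.linalg.det(e) * np.sum(betas * poly)
```

Both sides agree, so the effective potential really is of the ghost-free bimetric form with the $\beta_n$ of (\ref{betan}).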
The vacuum action for $e$ and $v$ with potential (\ref{veffvac}) is consistent if and only if
the symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ holds~\cite{deRham:2015cha}.
This is a crucial point: An inconsistent theory without this constraint in vacuum
should not arise from our consistent trimetric setup.
The issue gets resolved because the constraint
is implied by the equations of motion, as we saw in the previous subsection.
Invoking this symmetry constraint we can replace
$e^{-1}v=\sqrt{g^{-1}f}$ in (\ref{veffvac}) which gives back a ghost-free bimetric theory
with $\beta_n$ parameters given as in~(\ref{betan}).
In conclusion, the effective theory obtained by integrating out $h_{\mu\nu}$ in vacuum
is identical to a ghost-free bimetric theory.
Of course, it must also be possible to obtain the constraint (\ref{constraint})
in the effective theory, i.e.~without using the equations for $u$ derived in the trimetric setup.
We will verify this in the following by revisiting the arguments given at the end of section~\ref{sec:vb}.
In the present case, the Einstein-Hilbert kinetic terms for $e$ and $v$
belong to $\mathcal{L}_\mathrm{sep}$ while $\mathcal{L}_\mathrm{sim}=-V_\mathrm{eff}$.
Thus the constraints arising from the equations of motion after using
the identity~(\ref{idlor}) set the antisymmetric part of
$\frac{\delta }{\delta e}V_\mathrm{eff}\eta^{-1} (e^{-1})^\mathrm{T}$ to zero,
\begin{eqnarray}\label{consteq}
\frac{\delta V_\mathrm{eff}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T}
-e^{-1}\eta^{-1}\left(\frac{\delta V_\mathrm{eff}}{\delta e}\right)^\mathrm{T}=0\,.
\end{eqnarray}
We now solve this constraint explicitly. The variation of (\ref{veffvac}) with respect to $e$ is,
\begin{eqnarray}
\frac{\delta V_\mathrm{eff}}{\delta e}=
B\det \left(e + \frac{\beta_1^f}{\beta_1^g}v\right) \left(e + \frac{\beta_1^f}{\beta_1^g}v\right)^{-1}\,.
\end{eqnarray}
We thus have that,
\begin{eqnarray}\label{vareveff}
\frac{\delta V_\mathrm{eff}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T}
= B\det \left(e + \frac{\beta_1^f}{\beta_1^g}v\right)
\left(e + \frac{\beta_1^f}{\beta_1^g} v\right)^{-1}\eta^{-1} \,(e^{-1})^\mathrm{T}\,.
\end{eqnarray}
The expression on the right-hand side is a matrix with two upper coordinate indices
which is constrained to be symmetric by (\ref{consteq}). But this implies that
also its inverse must be symmetric. The inverse of (\ref{vareveff}) is,
\begin{eqnarray}
e^\mathrm{T}\eta\left(\frac{\delta V_\mathrm{eff}}{\delta e}\right)^{-1}
= B^{-1}\det \left(e + \frac{\beta_1^f}{\beta_1^g}v\right)^{-1}
e^\mathrm{T}\eta\left(e + \frac{\beta_1^f}{\beta_1^g} v\right)
\end{eqnarray}
whose antisymmetric part is precisely
proportional to $(e^\mathrm{T}\eta v-v^\mathrm{T}\eta e)$. The latter hence vanishes dynamically
and we re-obtain (\ref{constraint}).
The symmetrization constraint remains the same if one couples matter to $e$ or $v$ alone because this
coupling is invariant under separate Lorentz transformations and therefore does not contribute to
equation (\ref{consteq}). More general matter couplings involving both $e$ and $v$ are only invariant under
simultaneous Lorentz transformations of the vierbeine and thus give rise to extra terms in (\ref{consteq}).
We will encounter such a situation below.
\section{Perturbative solution in the presence of matter}\label{sec:pert}
\subsection{Solution for $h_{\mu\nu}$}
In order to derive the solution for the nondynamical field in the presence of a matter source,
we again work in the vierbein formulation with $e$, $v$ and $u$ defined as in (\ref{defvb})
and with the constraints (\ref{symconstr}) imposed.
For $\epsilon>0$ we now solve the full equation (\ref{withmatter2}),
\begin{eqnarray}\label{withmatter3}
\beta_1^g e^a_{~\mu}+\beta_1^f v^a_{~\mu}+\frac{\beta^g_0+\beta^f_0}{3}u^a_{~\mu}
=\epsilon\mathcal{T}^a_{~\mu}\,.
\end{eqnarray}
We can rewrite this in the form (again switching to matrix notation),
\begin{eqnarray}\label{usolmat}
u=\frac{3}{\beta^g_0+\beta^f_0}\Big(\epsilon\mathcal{T}-\beta_1^ge -\beta_1^fv\Big)\,.
\end{eqnarray}
Note that, unlike in the vacuum case, this form does not allow us to express $u$ in terms of
$e$ and $v$ directly, since $u$ still appears on the right-hand side in the stress-energy tensor.
Nevertheless, we can now solve the equations perturbatively.
From now on we shall assume $\epsilon\ll1$, in which case the matter source
can be treated as a small perturbation to the vacuum equations. This allows us to
obtain the solution for $u$ and $h$ as a perturbation series in $\epsilon$.
To this end, we make the ansatz,
\begin{eqnarray}\label{pertsol}
u=\sum_{n=0}^\infty \epsilon^n u^{(n)}=u^{(0)}+\epsilon u^{(1)}+\mathcal{O}(\epsilon^2)\,,\qquad
h=\sum_{n=0}^\infty \epsilon^n h^{(n)}=h^{(0)}+\epsilon h^{(1)}+\mathcal{O}(\epsilon^2)\,.
\end{eqnarray}
The lowest order of the solution is obtained from (\ref{usolmat}) with $\epsilon=0$,
\begin{eqnarray}\label{lou}
u^{(0)}=-\frac{3}{\beta^g_0+\beta^f_0}\Big(\beta_1^ge +\beta_1^fv\Big)\,,
\end{eqnarray}
which of course coincides with the solution obtained in vacuum, c.f.~equation (\ref{usol}).
Then the corresponding lowest order in the metric $h_{\mu\nu}$
is also the same as in equation (\ref{hsolsc}),
\begin{eqnarray}\label{hsolmat}
h^{(0)}=\big(u^{(0)}\big)^\mathrm{T}\eta u^{(0)}
=\frac{9}{(\beta^g_0+\beta^f_0)^2}
\Big(\beta_1^ge +\beta_1^fv\Big)^\mathrm{T}\eta\Big(\beta_1^ge +\beta_1^fv\Big)\,.
\end{eqnarray}
In order to re-arrive at the form (\ref{solh}) in terms of metrics alone we would again have to invoke the
symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ which is enforced dynamically
only at lowest order in perturbation theory. At higher orders, it will be replaced by a new constraint that
needs to be re-computed from the final effective action. Hence, $h^{(0)}$ coincides with the effective metric
(\ref{effmetr}) up to corrections of order $\epsilon$ (which are thus shifted into $h^{(1)}$).
The solutions for the higher orders $u^{(n)}$ in the expansion~(\ref{pertsol}) are given by,
\begin{eqnarray}\label{fullsolu}
u^{(n)}
&=&\frac{3}{\beta^g_0+\beta^f_0}\,\frac{1}{n!}\left.\frac{\mathrm{d}^n}{\mathrm{d}\epsilon^n}\Big(\epsilon\mathcal{T}(u, \phi_i)
-\beta_1^ge -\beta_1^fv\Big)\right|_{\epsilon=0}\,,
\end{eqnarray}
where in $\mathcal{T}(u,\phi_i)$ one needs to replace $u=\sum_{l=0}^{n-1}\epsilon^l u^{(l)}$, using
the lower-order solutions and further expand in $\epsilon$.
In other words, we can solve for $u^{(n)}$ recursively, using the already
constructed solutions up to $u^{(n-1)}$.
For instance, the next order $u^{(1)}$ is obtained from (\ref{fullsolu})
with $u$ in the stress-energy tensor replaced by $u^{(0)}$,
which gives,
\begin{eqnarray}\label{usolfo}
u^{(1)}=\frac{3}{\beta^g_0+\beta^f_0}\mathcal{T}(u^{(0)},\phi_i)
\,.
\end{eqnarray}
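As a sanity check on this recursive structure, consider a toy one-dimensional version (ours, not from the paper): we replace the matrix equation (\ref{usolmat}) by the scalar fixed-point equation $u=\tfrac{3}{S}\big(\epsilon T(u)-B\big)$ with a made-up source $T(u)=u^2$ and the illustrative choices $S=3$, $B=1$, and compare the truncated series with the exact root.

```python
# Toy illustration (ours): the recursion behind the perturbative solution,
# for a single scalar degree of freedom obeying u = eps*u**2 - 1.
import math

eps = 1e-2
u0 = -1.0              # lowest order: u^(0) = -B  (the "vacuum" solution)
u1 = u0**2             # next order:   u^(1) = T(u^(0))

# exact solution of eps*u^2 - u - 1 = 0 on the branch with u -> -1 as eps -> 0
u_exact = (1 - math.sqrt(1 + 4*eps)) / (2*eps)

# the truncated series agrees with the exact root up to O(eps^2)
assert abs(u_exact - (u0 + eps*u1)) < 10*eps**2
print(u0 + eps*u1, u_exact)
```

The toy model mirrors the text: the source is evaluated on the lower-order solution, and the truncation error is of the first neglected order.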
The corresponding next order in the metric is therefore,
\begin{eqnarray}\label{hsolfo}
h^{(1)}
&=&\big(u^{(0)}\big)^\mathrm{T}\eta u^{(1)}+\big(u^{(1)}\big)^\mathrm{T}\eta u^{(0)}\nonumber\\
&=&-\frac{9}{\big(\beta^g_0+\beta^f_0\big)^2}\,
\Big[\big(\beta_1^ge+\beta_1^fv\big)^\mathrm{T}\eta\,\mathcal{T}(u^{(0)},\phi_i)+\big(\mathcal{T}(u^{(0)},\phi_i)\big)^\mathrm{T}\eta\,\big(\beta_1^ge+\beta_1^fv\big)\Big]\,.
\end{eqnarray}
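The relation between the orders of $h$ and $u$ can be checked symbolically. The following sketch (ours, with random integer matrices standing in for the vierbeine) expands $h=u^\mathrm{T}\eta u$ with $u=u^{(0)}+\epsilon u^{(1)}$ and reads off the coefficients.

```python
# Sanity check (ours): expanding h = u^T eta u with u = u0 + eps*u1 must give
# h^(0) = u0^T eta u0 at order eps^0 and
# h^(1) = u0^T eta u1 + u1^T eta u0 at order eps^1.
import sympy as sp

eps = sp.symbols('epsilon')
eta = sp.diag(-1, 1, 1, 1)
u0 = sp.randMatrix(4, 4, -5, 5, seed=1)   # stand-in for u^(0)
u1 = sp.randMatrix(4, 4, -5, 5, seed=2)   # stand-in for u^(1)

u = u0 + eps*u1
h = (u.T * eta * u).expand()

h0 = u0.T * eta * u0
h1 = u0.T * eta * u1 + u1.T * eta * u0

assert h.applyfunc(lambda x: x.coeff(eps, 0)) == h0
assert h.applyfunc(lambda x: x.coeff(eps, 1)) == h1
```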
The explicit derivation of the next order requires
making an assumption for the precise form of the matter source.
Since the solution for $u^{(1)}$ is sufficient to write down the first correction
to the effective action in vacuum, we stop here.
\subsection{Effective action}
Plugging back the solutions for $u$ (or $h$) into the action with potential
(\ref{potvb}) results in an effective bimetric theory,
perturbatively expanded in $\epsilon$ and written in terms of vierbeine,
\begin{eqnarray}
S_\mathrm{eff}&=& S_\mathrm{EH}[g]+S_\mathrm{EH}[f]\nonumber\\
&~&-2\int\mathrm{d}^4x~\Big(\det u\Big)~\Big(\beta^{g}_0+\beta^{f}_0+\beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v]\Big)
+\epsilon S_\mathrm{matter}[h, \phi_i]\,,
\end{eqnarray}
with $u=\sum_{n=0}^\infty\epsilon^nu^{(n)}$ and $u^{(n)}$ given by (\ref{fullsolu}).
Expanding in $\epsilon$, we find that the lowest order terms read,
\begin{eqnarray}
S_\mathrm{eff}
&=& S_\mathrm{EH}[g]+S_\mathrm{EH}[f]\nonumber\\
&-&2\int\mathrm{d}^4x~\Big(\det u^{(0)}\Big)\Big(1+\epsilon\,\mathrm{Tr} \Big[(u^{(0)})^{-1}u^{(1)}\Big]\Big)~\Big(\beta^{g}_0+\beta^{f}_0
+\mathrm{Tr}\Big[(u^{(0)})^{-1}(\beta^{g}_1e+\beta^{f}_1v)\Big]\Big)\nonumber\\
&+&{2\epsilon}\int\mathrm{d}^4x\,\Big(\det u^{(0)}\Big)~
\mathrm{Tr}\Big[(u^{(0)})^{-1}u^{(1)}(u^{(0)})^{-1}(\beta^{g}_1e+\beta^{f}_1v)\Big]\nonumber\\
&+&\epsilon S_\mathrm{matter}[h^{(0)}, \phi_i]
~+~\mathcal{O}(\epsilon^2)\,.
\end{eqnarray}
A short computation shows that, after inserting the expressions
(\ref{lou}) and (\ref{usolfo}) for $u^{(0)}$ and $u^{(1)}$, this simply becomes,
\begin{eqnarray}\label{effact}
S_\mathrm{eff}
&=&S_\mathrm{EH}[g]+S_\mathrm{EH}[f]+S_\mathrm{int}[e,v]+\epsilon S_\mathrm{matter}[h^{(0)}, \phi_i]
~+~\mathcal{O}(\epsilon^2)\,.
\end{eqnarray}
Here, the interaction potential is the one which we already found in section~\ref{sec:effpot},
\begin{eqnarray}\label{effactpot}
S_\mathrm{int}[e,v]\equiv
-\int\mathrm{d}^4x~\det e\sum_{n=0}^4\beta_n e_n\big(e^{-1}v\big)\,,
\qquad
\beta_n\equiv -\frac{54(\beta_1^g)^4}{\big(\beta^{g}_0+\beta^{f}_0\big)^3}
\left(\frac{\beta_1^f}{\beta_1^g}\right)^n\,.
\end{eqnarray}
Note that this is not the most general ghost-free bimetric potential since the five $\beta_n$
are not independent. They satisfy $\beta_n=\beta_0(\beta_1/\beta_0)^n$ for $n\geq 2$ and hence
the potential in $S_\mathrm{int}[e,v]$ really contains only two free parameters.
Moreover, the effective metric $h^{(0)}$ in the matter coupling is of the form (\ref{effmetrvb})
but the coefficients $a$ and $b$ are not fully independent of the interaction parameters
$\beta_n$ in the potential. More precisely, they satisfy $b/a=\beta_1/\beta_0$.
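The stated relation among the $\beta_n$ follows directly from their definition in (\ref{effactpot}); a quick symbolic check (ours, with placeholder symbol names):

```python
# Check (ours): the coefficients beta_n = -54 (b1g)^4/S^3 * (b1f/b1g)^n
# satisfy beta_n = beta_0 * (beta_1/beta_0)^n, here with S = beta_0^g + beta_0^f.
import sympy as sp

b1g, b1f, S = sp.symbols('beta_1^g beta_1^f S', positive=True)
beta = lambda n: -54*b1g**4/S**3 * (b1f/b1g)**n

for n in range(2, 5):
    assert sp.simplify(beta(n) - beta(0)*(beta(1)/beta(0))**n) == 0
```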
\subsection{Symmetrization constraints}
The symmetrization constraints~(\ref{symconstr}) in trimetric theory
(which in our model follow from the trimetric equations of motion even in the presence
of matter) can be treated perturbatively in a straightforward way.
Using (\ref{pertsol}) we expand them as follows,
\begin{eqnarray}\label{pertcons}
\sum_{n=0}^\infty \epsilon^ne^\mathrm{T}\eta u^{(n)}
=\sum_{n=0}^\infty \epsilon^n(u^{(n)})^\mathrm{T}\eta e\,,\qquad
\sum_{n=0}^\infty \epsilon^nv^\mathrm{T}\eta u^{(n)}
=\sum_{n=0}^\infty \epsilon^n(u^{(n)})^\mathrm{T}\eta v\,.
\end{eqnarray}
Comparing orders of the expansion parameter $\epsilon$, we obtain, for all $n$,
\begin{eqnarray}
e^\mathrm{T}\eta u^{(n)}
=(u^{(n)})^\mathrm{T}\eta e\,,\qquad
v^\mathrm{T}\eta u^{(n)}
=(u^{(n)})^\mathrm{T}\eta v\,.
\end{eqnarray}
These constraints on $u^{(n)}$ imply that at each order
in the perturbation series the square-root matrices exist and we have that,
\begin{eqnarray}
\sqrt{(h^{(n)})^{-1}g}=(u^{(n)})^{-1}e\,,
\qquad
\sqrt{(h^{(n)})^{-1}f}=(u^{(n)})^{-1}v\,,
\end{eqnarray}
ensuring the perturbative equivalence of the metric and vierbein formulations
in the trimetric theory.
The situation in the effective theory (\ref{effact}) obtained by integrating out
$h_{\mu\nu}$ is more subtle. Namely, the constraint (\ref{constraint}) is obtained dynamically
only in vacuum. In the presence of matter, it will receive corrections of order
$\epsilon$ and higher. As a consequence, the effective action will in general not
be expressible in terms of metrics.
The corrections to the vacuum constraint can again be
straightforwardly obtained by inserting the solution
for the vierbein $u$ into either of the symmetrization constraints in (\ref{pertcons}).
This gives the effective constraint as a perturbation series in $\epsilon$,
\begin{eqnarray}\label{corrsc}
0&=&\tfrac{\beta^g_0+\beta^f_0}{3}\big(e^\mathrm{T}\eta u-u^\mathrm{T}\eta e\big)\nonumber\\
&=&\beta_1^f\big(v^\mathrm{T}\eta e-e^\mathrm{T}\eta v\big)
+ \epsilon \left[e^\mathrm{T}\eta \mathcal{T}(u^{(0)},\phi_i)
-\left(\mathcal{T}(u^{(0)},\phi_i)\right)^\mathrm{T}\eta e\right]
+\mathcal{O}(\epsilon^2)\,.
\end{eqnarray}
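The expansion of the antisymmetric combination can be cross-checked numerically (our sketch, in a two-dimensional toy setting with random integer matrices; note the overall normalization $(\beta^g_0+\beta^f_0)/3$ needed for the two lines to match term by term):

```python
# Cross-check (ours): inserting u = (3/S)*(eps*T - bg1*e - bf1*v) into the
# antisymmetric part of e^T eta u reproduces the two displayed terms; the
# bg1*e contribution drops out because e^T eta e is compared with itself.
import sympy as sp

eps, bg1, bf1, S = sp.symbols('epsilon beta_1^g beta_1^f S')
eta = sp.diag(-1, 1)
e = sp.randMatrix(2, 2, -3, 3, seed=6)
v = sp.randMatrix(2, 2, -3, 3, seed=7)
T = sp.randMatrix(2, 2, -3, 3, seed=8)   # stand-in for the stress-energy source

u = (3/S)*(eps*T - bg1*e - bf1*v)
lhs = (S/3)*(e.T*eta*u - u.T*eta*e)
rhs = bf1*(v.T*eta*e - e.T*eta*v) + eps*(e.T*eta*T - T.T*eta*e)
assert (lhs - rhs).expand() == sp.zeros(2, 2)
```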
In principle, this equation can again be solved recursively and the $\mathcal{O}(\epsilon)$
correction is obtained by using the lowest-order solution
$v^\mathrm{T}\eta e=e^\mathrm{T}\eta v$ in the terms proportional
to $\epsilon$. It demonstrates that in the effective theory with matter coupling,
the antisymmetric part of $e^\mathrm{T}\eta v$ is no longer zero but proportional to an
antisymmetric matrix depending on the matter stress-energy tensor.
\section{Features of the low-energy theory}
\subsection{Validity of the effective description}
Using a specific trimetric setup, we have explicitly constructed a
ghost-free completion for the effective bimetric action,
\begin{eqnarray}\label{effact}
S_\mathrm{eff}
&=&S_\mathrm{EH}[g]+S_\mathrm{EH}[f]-\int\mathrm{d}^4x~\det e\sum_{n=0}^4\beta_n e_n\big(e^{-1}v\big)
+\epsilon S_\mathrm{matter}[\tilde{G}, \phi_i]
\,,
\end{eqnarray}
with matter coupling in terms of the effective metric,
\begin{eqnarray}
\tilde{G}_{\mu\nu}=\big(ae^a_{~\mu} +bv^a_{~\mu}\big)^\mathrm{T}\eta_{ab}\big(ae^b_{~\nu} +bv^b_{~\nu}\big)
\,.
\end{eqnarray}
The parameters in (\ref{effact}) obtained in our setup are not all independent but satisfy the relations,
\begin{eqnarray}\label{parconst}
\beta_n=\beta_0(\beta_1/\beta_0)^n\quad \text{for}~n\geq 2\,,
\qquad b/a=\beta_1/\beta_0\,.
\end{eqnarray}
The effective description is valid for small energy densities in the matter sector, which we
have parameterized via $\epsilon\ll 1$. It corresponds precisely to the action
proposed in Refs.~\cite{Noller:2014sta, Hinterbichler:2015yaa}. For higher energies (where the action
(\ref{effact}) is known to propagate the Boulware-Deser ghost~\cite{deRham:2015cha})
the corrections become important. For these energy regimes, parameterized via
$\epsilon\gtrsim 1$, it is simplest to work in the manifestly ghost-free trimetric
formulation (\ref{trimact}) with $m_h=0$
(for instance, if one wants to derive solutions to the equations in the full theory).
Even though the decoupling limits of bimetric theory with matter coupling to $G_{\mu\nu}$
and the vierbein theory with matter coupling to $\tilde{G}_{\mu\nu}$ are identical~\cite{deRham:2015cha},
the two couplings are not equivalent to first order in $\epsilon$.
Namely, the corrections to the symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ in (\ref{corrsc}),
are of $\mathcal{O}(\epsilon)$ and thus equally important as the matter coupling itself.
As a consequence, the effective metric $G_{\mu\nu}$ defined in (\ref{effmetr}) differs from $\tilde{G}_{\mu\nu}$
in (\ref{effact}) at $\mathcal{O}(\epsilon)$. Replacing $\tilde{G}_{\mu\nu}$ by $G_{\mu\nu}$
in the matter coupling introduces correction terms of $\mathcal{O}(\epsilon^2)$
which we have anyway suppressed in (\ref{effact}). However, the additional terms coming from (\ref{corrsc})
will show up in the interaction potential, which contains the antisymmetric components
$(e^\mathrm{T}\eta v-v^\mathrm{T}\eta e)$, and contribute at $\mathcal{O}(\epsilon)$.
Therefore, even when $\mathcal{O}(\epsilon^2)$ terms are neglected,
the theory with action (\ref{effact}) is not equivalent to bimetric theory
with matter coupling via the effective metric~$G_{\mu\nu}$.
This picture is consistent with the results in Ref.~\cite{Hinterbichler:2015yaa}, which
essentially already discussed the $\mathcal{O}(\epsilon)$ correction
in (\ref{corrsc}) and stated that the vacuum constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$
cannot be imposed when matter is included via the effective vierbein coupling.
\subsection{The massless spin-2 mode}
Interestingly, our interaction parameters in (\ref{effact})
subject to the constraints (\ref{parconst}) satisfy,
\begin{eqnarray}\label{condpbg}
\frac{cm_f^2}{m_g^2}(\beta_0+3c\beta_1+3c^2\beta_2+c^3\beta_3)&=&\beta_1+3c\beta_2+3c^2\beta_3+c^3\beta_4\,,
\qquad c\equiv\frac{m_g^2}{m_f^2}\frac{b}{a}\,.
\end{eqnarray}
This condition was derived in Ref.~\cite{Schmidt-May:2014xla} to ensure that
proportional background solutions of the form $\bar{f}_{\mu\nu}=c^2\bar{g}_{\mu\nu}$
exist in bimetric theory with effective matter coupling through the metric $G_{\mu\nu}$ in (\ref{effmetr}).
In our case with metric $\tilde{G}_{\mu\nu}$, the proportional backgrounds are only solutions in vacuum since
the corrections to the symmetrization constraint in (\ref{constraint})
are in general not compatible with $\bar{v}^a_{~\mu}=c\bar{e}^a_{~\mu}$.
This situation is comparable to ordinary bimetric theory with matter coupling via
$g_{\mu\nu}$ or $f_{\mu\nu}$.
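That the constrained parameters (\ref{parconst}) satisfy the condition (\ref{condpbg}) can be verified symbolically; a short check (ours):

```python
# Symbolic check (ours): with beta_n = beta_0*(beta_1/beta_0)**n and
# c = (m_g^2/m_f^2)*(b/a), b/a = beta_1/beta_0, the proportional-background
# condition holds identically.
import sympy as sp

b0, b1, mg2, mf2 = sp.symbols('beta_0 beta_1 m_g^2 m_f^2', positive=True)
r = b1/b0                      # beta_1/beta_0 = b/a
beta = [b0*r**n for n in range(5)]
c = (mg2/mf2)*r

lhs = c*mf2/mg2*(beta[0] + 3*c*beta[1] + 3*c**2*beta[2] + c**3*beta[3])
rhs = beta[1] + 3*c*beta[2] + 3*c**2*beta[3] + c**3*beta[4]
assert sp.simplify(lhs - rhs) == 0
```

Both sides reduce to $\beta_0\,r\,(1+cr)^3$ with $r=\beta_1/\beta_0$, which is why the condition holds for any value of the remaining free parameters.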
Around the proportional vacuum solutions,
the massless spin-2 fluctuation is~\cite{Hassan:2012wr},
\begin{eqnarray}\label{masslessfluc}
\delta g+\frac{m_f^2}{m_g^2}\delta f
=\delta e^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta e
+ \frac{cm_f^2}{m_g^2}\left(\delta v^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta v\right)\,.
\end{eqnarray}
Around the same background, our effective metric $\tilde{G}_{\mu\nu}$
which couples to matter in the effective action~(\ref{effact}) has fluctuations,
\begin{eqnarray}
\delta\tilde{G}_{\mu\nu}&=&
(a+bc)\Big(\big(a\delta e+b\delta v\big)^\mathrm{T}\eta\,\bar{e}
+\bar{e}^\mathrm{T}\eta\big(a\delta e+b\delta v\big)\Big)\nonumber\\
&=&\left(1+\frac{c^2m_f^2}{m_g^2}\right)\left(\delta e^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta e
+ \frac{cm_f^2}{m_g^2}\left(\delta v^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta v\right)\right)\,,
\end{eqnarray}
where we have used (\ref{condpbg}) in the second equality.
The fluctuations of $\tilde{G}_{\mu\nu}$ are proportional to~(\ref{masslessfluc}) and
thus they are purely massless, without containing contributions from the massive spin-2
mode.\footnote{The fluctuations of $G_{\mu\nu}$ in (\ref{effmetr}) are also proportional
to~(\ref{masslessfluc}) when the parameters satisfy (\ref{condpbg})~\cite{Schmidt-May:2014xla}.}
We conclude that in the effective theory with action (\ref{effact}), matter interacts only with
the massless spin-2 mode.
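The proportionality can be verified explicitly; the following sketch (ours, in a two-dimensional toy setting) substitutes $b=a\,c\,m_f^2/m_g^2$ (equivalent to $b/a=\beta_1/\beta_0$ together with the definition of $c$) and finds the overall factor $a^2(1+c^2m_f^2/m_g^2)$, which matches the displayed prefactor under the normalization $a=1$ (our assumption).

```python
# Numerical check (ours): around vbar = c*ebar, the fluctuation of
# Gtilde = (a e + b v)^T eta (a e + b v) is proportional to the massless
# combination, once b = a*c*k with k = m_f^2/m_g^2 is used.
import sympy as sp

a, c, k = sp.symbols('a c k', positive=True)   # k = m_f^2/m_g^2
b = a*c*k
eta = sp.diag(-1, 1)
ebar = sp.randMatrix(2, 2, -3, 3, seed=3)
de   = sp.randMatrix(2, 2, -3, 3, seed=4)   # delta e
dv   = sp.randMatrix(2, 2, -3, 3, seed=5)   # delta v
vbar = c*ebar

dG = ((a*de + b*dv).T*eta*(a*ebar + b*vbar)
      + (a*ebar + b*vbar).T*eta*(a*de + b*dv))
massless = (de.T*eta*ebar + ebar.T*eta*de
            + c*k*(dv.T*eta*ebar + ebar.T*eta*dv))

assert (dG - a**2*(1 + c**2*k)*massless).expand() == sp.zeros(2, 2)
```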
\section{Summary \& discussion}
We have presented a trimetric setup which at high energies delivers a ghost-free completion
for a well-studied effective matter coupling in bimetric theory.
Our results suggest that even though both effective metrics ${G}_{\mu\nu}$
and $\tilde{G}_{\mu\nu}$ can be coupled to matter without re-introducing the Boulware-Deser
ghost in the decoupling limit, the vierbein coupling via the latter is probably the preferred choice since
it can be rendered ghost-free by adding additional terms to the action.
Properties of the theory at high energies (description of the Early Universe, black holes, etc.)
are easy to study in our trimetric formulation and it would be interesting
to revisit phenomenological investigations that have been carried out in the effective theory.
Our results further demonstrate that the metric $\tilde{G}_{\mu\nu}$ in the matter coupling possesses
massless fluctuations around the maximally symmetric vacuum solutions.
This is of phenomenological relevance because we expect it to avoid constraints arising from the so-called
vDVZ discontinuity \cite{vanDam:1970vg, Zakharov:1970cc}, which forces the ratio of Planck
masses $m_f/m_g$ to be small in bimetric theory with ordinary matter coupling~\cite{Enander:2015kda, Babichev:2016bxi}.
These constraints usually arise at distance scales larger than the Vainshtein radius~\cite{Vainshtein:1972sx},
but in case of matter interacting only with the massless spin-2 mode there is no need to invoke the
Vainshtein mechanism in order to cure the discontinuity.
By the same argument, linear cosmological perturbations are expected to behave similarly to GR.
Subtleties could arise due to the highly nontrivial symmetrization constraint~(\ref{corrsc})
and the phenomenology needs to be worked out in detail to explicitly confirm these expectations.
A generalization of our construction to more than two fields in vacuum is studied in~\cite{Hassan:2018mcw} and
leads to new consistent multi-vierbein interactions.
It would also be interesting to generalize our setup to other values of the interaction parameters
in the trimetric action and include more terms in~(\ref{intact2}).
For the most general set of parameters, it seems difficult to integrate out the vierbein $u$ for the
non-dynamical metric. There may however be simplifying choices different from~(\ref{intact2}) which
allow us to obtain an effective theory with parameters different from (\ref{parconst}).
It would also be interesting to see whether one can find more general forms for effective metrics in this way.
Possibly, these could be the metrics identified in~\cite{Heisenberg:2014rka} and we leave these interesting
investigations for future work.
\acknowledgments
We thank Mikica Kocic for useful comments on the draft and are particularly grateful to
Fawad Hassan for making very valuable suggestions to improve the presentation of our results.
This work is supported by a grant from the Max Planck Society.
\section{Introduction}
\par\smallskip
The three-dimensional rotations form a Lie group, usually denoted
by $SO(3)$. A fascinating feature of our world is that the
fundamental group of $SO(3)$ is $\mbox{\bBB Z} _2$, or in short $\pi
_1(SO(3))\cong \mbox{\bBB Z} _2$. This means that all closed paths in
$SO(3)$ starting and ending at the identity fall into two homotopy
classes --- those that are homotopic to the constant path and
those that are not. Composing two paths from the second class
yields a path from the first class. This topological property of
the space parametrizing rotations can also be stated as follows
--- a complete rotation of an object (corresponding to a closed path in $SO(3)$) may or
may not be continuously deformable to the trivial motion (i.e., no
motion at all); the composition of two motions that are not
deformable to the trivial one gives a motion which is.\par A
manifold with a fundamental group $\mbox{\bBB Z} _2$ is a challenge to our
imagination --- it is easy to visualize spaces with fundamental
group $\mbox{\bBB Z}$ (the punctured plane), or the free group $\mbox{\bBB Z} * \mbox{\bBB Z} *\cdots
* \mbox{\bBB Z}$ (plane with several punctures), or even $\mbox{\bBB Z}\oplus
\mbox{\bBB Z}$ (torus), but there is no two-dimensional manifold embedded
in $\mbox{\bBB R}^3$ whose fundamental group is $\mbox{\bBB Z} _2$.\par The peculiar
structure of $SO(3)$ plays a crucial role in our physical world.
We know that there are two quite different types of elementary
particles, bosons and fermions. The quantum state of a boson is
described by a (possibly multi-component) wave function, which
remains unchanged when a full ($360^{\circ}$) rotation of the
coordinate system is performed, while the wave function of a
fermion gets multiplied by $-1$ under a complete rotation.
Somewhat loosely speaking, this comes from the fact that only the
modulus of the wave function has a direct physical meaning and
therefore the wave function need not transform under a true
representation of $SO(3)$ but just under a projective
representation, which is a true representation of its universal
covering group $SU(2)$ \cite{Wigner, Bargmann}.\par The standard
way of showing that $\pi _1(SO(3))\cong \mbox{\bBB Z} _2$ is to prove that
$SO(3)$ is homeomorphic to $S^3/i$, where $i$ denotes the
identification of diametrically opposite points of $S^3$. Once the
latter is known, it is quite easy to see that paths starting from
one pole of $S^3$ and ending at the other pole will be closed
paths in $S^3/i$, which are not contractible, but the composition
of two such paths gives a contractible path. In order to find the
topological structure of $SO(3)$ one normally uses Lie group and
Lie algebra theory. Namely, it is shown that the group $SU(2)$ of
unitary $2\times 2$ matrices with determinant 1 is homeomorphic to
$S^3$ and that there is a 2--1 homomorphism $SU(2)\to SO(3)$,
which is a local isomorphism, and which sends diametrically
opposite points in $SU(2)$ to the same point in $SO(3)$. \par
There is a more direct way to exhibit the topological structure of
the parameter space of three-dimensional rotations. It only uses
the fact that rotations are represented by $3\times 3$ matrices
and any such matrix must have at least one real eigenvalue and a
corresponding eigenvector. Then one shows easily that this element
of $SO(3)$ must be a rotation around this eigenvector. In other
words any element of $SO(3)$ is a rotation around some axis and we
need to specify the angle of rotation and the orientation of that
axis in $\mbox{\bBB R}^3$.\par In the present paper we present an
alternative way of proving that $\pi _1(SO(3))\cong \mbox{\bBB Z} _2$. It
does not use Lie group theory or even matrices. It is purely
algebraic-topological in nature and very visual. It displays a
simple connection between full rotations (closed paths in $SO(3)$)
and braids. We believe that this may be an interesting way of
showing a nontrivial topological result to students in
introductory geometry and topology courses as well as a suitable
way of introducing braids and braid groups. \par The goal of this
paper is mostly pedagogical --- presenting in a self-contained and
accessible way a set of results that are basically known to
algebraic topologists and people studying braid groups. The fact
that the first homotopy group of $SO(3)$ can be related to
spherical braids is a special case (in disguise) of the following
general statement \cite{Fad1}: ``The configuration space of three
points on an $r$-sphere is homotopically equivalent to the Stiefel
manifold of orthogonal two-frames in $r+1$-dimensional Euclidean
space''. Fadell \cite{Fad1} considers a particular element of $\pi
_1(SO(3))$ and uses the fact that it has order 2 to prove a
similar statement for a corresponding braid. Our direction is
opposite: we analyze braids to deduce topological properties of
$SO(3)$.\par In the next section we describe a simple experiment
that actually demonstrates the $\mbox{\bBB Z} _2$ in three-dimensional
rotations. Then in section 3 we give a formal treatment of that
experiment. We construct a map from $\pi _1(SO(3))$ into a certain
factor group of a subgroup of the braid group on three strands.
We prove that this map is an isomorphism and that the image is
$\mbox{\bBB Z} _2$.
\par
\section {The experiment}
Take a ball (a tennis ball will do) and attach three strands to three different points on its surface. Attach the other ends of the strands to three different points on the surface of your desk (Figure \ref{Fig1}).
\begin{figure} [h]
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.4
]{Fig1.eps} \hfil \includegraphics[scale=0.4]{Fig1b.eps}
\caption{Rotating a ball with strands attached}
\label{Fig1}
\end{figure}
\bigskip
Perform an arbitrary number of full rotations of the ball around arbitrary axes. You will get an entangled ``braid''. Now keep the orientation of the ball fixed. If the total number of full rotations is even, you can always untangle the ``braid'' by flipping strands around the ball. If the number of rotations is odd you will never be able to untangle it, but you can always reduce it to one simple configuration, e.g., the one obtained by rotating the ball around the first point and twisting the second and third strands around each other. \par
As is to be expected, rotations that can be continuously deformed to the trivial rotation (i.e., no rotation) lead to trivial braiding. At this point we can only conjecture from our experiment that the fundamental group of $SO(3)$ contains as a factor $\mbox{\bBB Z} _2$. \par
\section{Relating three-dimensional rotations to braids}
With each closed path in $SO(3)$ we associate three closed paths in $\mbox{\bBB R}^3$ starting at the sphere with radius 1 and ending at the sphere with radius 1/2. We may think of continuously rotating a sphere from time $t=0$ to time $t=1$ so that the sphere ends up with the same orientation as the initial one. Simultaneously we shrink the radius of the sphere from 1 to 1/2 (see Figure \ref{Fig2}).
\begin{figure} [h]
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.30]{Fig2.eps} \includegraphics[scale=0.30]{Fig2a.eps} \includegraphics[scale=0.50]{Fig2b.eps}
\caption{A ``spherical braid'' and a normal braid}
\label{Fig2}
\end{figure}
\bigskip
Any three points on the sphere will trace three continuous paths in $\mbox{\bBB R}^3$, which do not intersect each other. Furthermore, for fixed $t$ the three points on these paths lie on the sphere with radius $1-t/2$.
To formalize things, let $\omega(t), \ \ t\in[0,1]$ be any continuous path in $SO(3)$ with $\omega(0)=\omega(1)=I$.
$\omega(t)$ acts on vectors (points) in $\mbox{\bBB R}^3$.
Take three initial points in $\mbox{\bBB R}^3$, e.g., $\hbox{\bf x}_0^1=(1,0,0)$, $\hbox{\bf x}_0^2=(-1/2,\sqrt 3 /2,0)$,
$\hbox{\bf x}_0^3=(-1/2,-\sqrt 3 /2,0)$. Define three continuous paths by
$$
\hbox{\bf x}^i(t):=(1-t/2)\omega (t)(\hbox{\bf x}_0^i),\quad t\in [0,1],\quad i=1,2,3\ \ .
$$
In this way we get an object that will be called a {\it spherical braid} --- several distinct points on a sphere and the same number of points, in the same positions, on a smaller sphere, connected by strands in such a way that the radial coordinate of each strand is monotonic in $t$.
\par\smallskip\noindent
{\bf Note} One can multiply two spherical braids by connecting the ends of the first to the beginnings of the second (and rescaling the parameter). When one considers classes of isotopic spherical braids one obtains the so called {\it braid group of the sphere} \cite{Fad2}, which algebraically is $B_3/R$ (see below). This is known as the mapping-class group of the sphere (with 0 punctures and 0 boundaries) and has been studied by topologists.
\par\smallskip
We can map our spherical braid to a conventional one using stereographic projection (Figure \ref{Fig2}). First we choose a ray starting at the origin and not intersecting any strand. Then we map stereographically any point on a sphere with radius $1/2 \leq \rho \leq 1$ (except the point where the ray intersects that sphere, with respect to which we project) to a point in a corresponding horizontal plane. Finally we define the $z$-coordinate of the image to be $z=-\rho$.\par
Recall the usual notion of {\it braids}, introduced by Artin \cite{Artin}. (See also \cite{Bir} for a contemporary review of the theory of braids and its relations to other subjects.) One takes two planes in $\mbox{\bBB R}^3$, let's say parallel to the
$XY$ plane, fixes $n$ distinct points on each plane and connects each point on the lower plane with a point on the upper plane by a continuous path (strand). The strands do not intersect each other. In addition the $z$-coordinate of each strand is a monotonic function of the parameter of the strand and thus $z$ can be used as a common parameter for all strands. Two different braids are considered equivalent or {\it isotopic} if there exists a homotopy of the strands (keeping the endpoints fixed), so that for each value of the homotopy parameter $s$ you get a braid, for $s=0$ you get the initial braid and for $s=1$ the final one. When the points on the lower and the upper plane have the same positions (their $x$ and $y$ coordinates are the same), we can multiply braids by stacking one on top of the other. Considering classes of isotopic braids with the multiplication just defined, the {\it braid group} is obtained. Artin showed that the braid group $B_n$ on $n$ strands has a presentation with $n-1$ generators and a simple set of relations - Artin's braid relations. We give them for the case $n=3$ since this is the one we are mostly interested in. In this case the braid group $B_3$ is generated by the generators $\sigma_1$, corresponding to twisting of the first and the second strands, and $\sigma_2$, corresponding to twisting of the second and the third strands (the one to the left always passing behind the one to the right) (Figure \ref{Fig3}).
\begin{figure}
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{20pt}
\centering
\includegraphics[scale=0.35]{Fig3a.eps} \hskip80pt \includegraphics[scale=0.35]{Fig3b.eps}
\caption{The generators $\sigma_1$ and $\sigma_2$ of $B_3$ }
\label{Fig3}
\end{figure}
\bigskip
These generators are subject to a single braid relation (Figure \ref{Fig4}):
\begin{figure}
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.40]{Fig4b.eps} \quad\quad \includegraphics[scale=0.20]{Fig4c.eps}\quad\quad
\includegraphics[scale=0.40]{Fig4a.eps}
\caption{The braid relation for $B_3$ }
\label{Fig4}
\end{figure}
\bigskip
\begin{equation}
\sigma_2\sigma_1\sigma_2=\sigma_1\sigma_2\sigma_1
\label{Artinrel}
\end{equation}
We say that $B_3$ has a {\it presentation} with generators $\sigma_1$ and $\sigma_2$ and defining relation given by Equation \ref{Artinrel}, or in short:
\begin{equation}
B_3=\langle \sigma_1,\,\sigma_2\,;\sigma_1\sigma_2\sigma_1\sigma_2^{-1}\sigma_1^{-1}\sigma_2^{-1}\rangle
\end{equation}
\par
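The braid relation can be checked independently in the reduced Burau representation of $B_3$, which is known to be faithful; the following sketch (ours) uses a formal parameter $t$, so matrix equality certifies equality in $B_3$.

```python
# Independent check (ours) of Artin's relation sigma_1 sigma_2 sigma_1 =
# sigma_2 sigma_1 sigma_2 via the reduced Burau representation of B_3.
import sympy as sp

t = sp.symbols('t')
s1 = sp.Matrix([[-t, 1], [0, 1]])   # Burau image of sigma_1
s2 = sp.Matrix([[1, 0], [t, -t]])   # Burau image of sigma_2

assert (s1*s2*s1 - s2*s1*s2).expand() == sp.zeros(2, 2)
```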
In our case, since a full rotation of the sphere returns the three
points to their original positions, we always get {\it pure
braids}, i.e., braids for which any strand connects a point on the
lower plane with its translate on the upper plane. Pure braids
form a subgroup of $B_3$ which is denoted by $P_3$. Note that
intuitively there is a homomorphism $\pi$ from $B_3$ to the
symmetric group $S_3$ since any braid from $B_3$ permutes the
three points. Formally one defines $\pi$ on the generators by
\begin{equation}
\pi(\sigma_1)(1,2,3)=(2,1,3),\quad \pi(\sigma_2)(1,2,3)=(1,3,2)
\end{equation}
and then extends it to the whole group $B_3$ (it is important that $\pi$ maps Equation \ref{Artinrel} to the trivial identity). Pure braids are precisely those that do not permute the points and therefore we can give the following algebraic characterization of $P_3$:
$$P_3:=\mathrm{Ker}\, \pi$$
Alternatively, $S_3$ is the quotient of $B_3$ by the additional
equivalence relations $\sigma_i^2\sim I, \ i=1,2$ and if $N$ is
the minimal normal subgroup containing $\sigma_i^2$, then
$\pi:B_3\to B_3/N$ is the natural projection. It is then easy to
see that the kernel of $\pi$ has to be a product of words of the
following type:
$$
\sigma_{i_1}^{\pm 1}\sigma_{i_2}^{\pm 1}\cdots\sigma_{i_k}^{\pm 1}\sigma_{i_{k+1}}^{\pm 2}\sigma_{i_k}^{\mp 1}
\cdots\sigma_{i_2}^{\mp 1}\sigma_{i_1}^{\mp 1}
$$
The whole subgroup $P_3$ can in fact be generated by the following three {\it twists} (Figure \ref{Fig5})
\begin{figure} [h]
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.45]{Fig5a.eps} \hskip60pt \includegraphics[scale=0.37]{Fig5b.eps} \hskip60pt
\includegraphics[scale=0.4]{Fig5c.eps}
\caption{The generators $a_{12}$, $a_{13}$ and $a_{23}$ of $P_3$ }
\label{Fig5}
\end{figure}
\begin{equation}
a_{12}:=\sigma_1^2,\quad a_{13}:=\sigma_2\sigma_1^2\sigma_2^{-1}=\sigma_1^{-1}\sigma_2^2\sigma_1,\quad a_{23}:=\sigma_2^{2}
\label{gener}
\end{equation}
\par
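The two expressions given for $a_{13}$ in Equation \ref{gener} can again be checked in the faithful reduced Burau representation (our sketch):

```python
# Check (ours): sigma_2 sigma_1^2 sigma_2^{-1} = sigma_1^{-1} sigma_2^2 sigma_1
# in B_3, verified via the reduced Burau representation with formal t.
import sympy as sp

t = sp.symbols('t')
s1 = sp.Matrix([[-t, 1], [0, 1]])
s2 = sp.Matrix([[1, 0], [t, -t]])

lhs = s2*s1**2*s2.inv()
rhs = s1.inv()*s2**2*s1
assert (lhs - rhs).applyfunc(sp.cancel) == sp.zeros(2, 2)
```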
In our construction so far we mapped any closed path in $SO(3)$ to a spherical braid and then, using stereographic projection, to a conventional pure braid. The last map, however, depends on a choice of a ray in $\mbox{\bBB R}^3$ and, what is worse, spherical braids that are isotopic (in the obvious sense) may map to nonisotopic braids. To mend this, we will identify certain classes of braids in
$P_3$. Namely, we introduce the following equivalence relations (see Figure \ref{Fig6}):
\begin{figure} [h]
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.50]{Fig6a.eps} \hskip40pt
\includegraphics[scale=0.45]{Fig6c.eps} \hskip40pt
\includegraphics[scale=0.50]{Fig6b.eps}
\caption{The flips $r_1$, $r_2$ and $r_3$ }
\label{Fig6}
\end{figure}
\begin{equation}
r_1:=\sigma_1\sigma_2^2\sigma_1\sim I,\quad r_2:=\sigma_1^2\sigma_2^2\sim I,\quad
r_3:=\sigma_2\sigma_1^2\sigma_2\sim I. \label{r}
\end{equation}
On our model with the tennis ball from Section 2 $r_i,\,
i=1,2,3$ correspond to {\it flips} of the $i$-th strand above and around the ball. Such motions lead to isotopic spherical braids, as will be shown later.
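Under the projection $\pi$ to the symmetric group, the braid relation survives and the flips $r_i$ of Equation \ref{r} map to the identity permutation, so they indeed lie in $P_3=\mathrm{Ker}\,\pi$. A small self-contained check (ours), with permutations written 0-indexed:

```python
# Check (ours): pi respects the braid relation, and r1, r2, r3 project to
# the identity permutation, i.e. they are pure braids.

def compose(p, q):              # apply p first, then q
    return tuple(q[i] for i in p)

s1 = (1, 0, 2)                  # pi(sigma_1): swap strands 1,2
s2 = (0, 2, 1)                  # pi(sigma_2): swap strands 2,3
ident = (0, 1, 2)

def word(*gens):
    p = ident
    for g in gens:
        p = compose(p, g)
    return p

# braid relation survives the projection
assert word(s1, s2, s1) == word(s2, s1, s2)
# r1 = s1 s2^2 s1, r2 = s1^2 s2^2, r3 = s2 s1^2 s2 are pure
assert word(s1, s2, s2, s1) == ident
assert word(s1, s1, s2, s2) == ident
assert word(s2, s1, s1, s2) == ident
```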
\par\smallskip\noindent
{\bf Note} Strictly speaking, we have to make sure that our final result will not change when any strand crosses at any part of the braid the ray which we use for the stereographic projection. This means that we have to factorize by the normal closure in $B_3$ (not in $P_3$ !) of the generators $r_i,\ i=1,2,3$, i.e., the smallest normal subgroup in $B_3$ containing these three generators. This would then allow us to set to $I$ any $r_i$ in any part of a word. One sees immediately that only one of the generators is needed then, since the other two will be contained in the normal closure of the first. We noticed, however, that we managed to untie any trivial braid just by a sequence of the three flips $r_i$ defined in Equation \ref{r} and their inverses, performed at the end of the braid. At the same time a nontrivial braid, corresponding to an odd number of rotations, cannot be untied even if we allow flips in any part of the braid. This can only be true if the flips $r_i$ generate a normal subgroup in $B_3$ (which of course then coincides with the normal closure of any of the $r_i$ and is also normal in $P_3$).
\par\bigskip\noindent
{\bf Lemma 1}\par\noindent
{\it The subgroup $R\subset P_3$, generated by $r_1$, $r_2$, $r_3$ is normal in $B_3$.}\par\smallskip\noindent
{\bf Proof:}
We need to show that we can represent all conjugates of $r_i$ with respect to the generators of $B_3$ and their inverses as products of the $r_i$ and their inverses. Straightforward calculations, using multiple times Artin's braid relation (Equation \ref{Artinrel}) give the following identities:
$$
\begin{array}{l ll}
\sigma_1r_1\sigma_1^{-1}=r_2, &\sigma_2r_1\sigma_2^{-1}=\sigma_2^{-1}r_1\sigma_2=r_1, \nonumber\\
\sigma_1r_2\sigma_1^{-1}=r_2r_1r_2^{-1}, &\sigma_2r_2\sigma_2^{-1}=r_3,\nonumber\\
\sigma_1r_3\sigma_1^{-1}=\sigma_1^{-1}r_3\sigma_1=r_3, &\sigma_2r_3\sigma_2^{-1}=r_1^{-1}r_2r_1=r_3r_2r_3^{-1},\nonumber\\
\sigma_1^{-1}r_1\sigma_1=r_1^{-1}r_2r_1, &\sigma_1^{-1}r_2\sigma_1=r_1, \nonumber\\
\sigma_2^{-1}r_2\sigma_2=r_1r_3r_1^{-1}=r_2^{-1}r_3r_2, \quad\quad &\sigma_2^{-1}r_3\sigma_2=r_2.\nonumber\\
\end{array}
$$
We demonstrate as an example the proof of the first identity in the second line. We have
\[\sigma_{1}\sigma_{2}\sigma_{1} = \sigma_{2}\sigma_{1}\sigma_{2} \]
\[\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{1} = \sigma_{2}^{2}\sigma_{1}\sigma_{2}^{2}\sigma_{1} \]
\[\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{1}\sigma_{2} = \sigma_{2}^{2}\sigma_{1}\sigma_{2}^{2}\sigma_{1} \]
\[\sigma_{1}\sigma_{2}^{2}\sigma_{1}\sigma_{2}^{2} = \sigma_{2}^{2}\sigma_{1}\sigma_{2}^{2}\sigma_{1} \]
\[\sigma_{1}\sigma_{2}^{2}\sigma_{1} = \sigma_{2}^{2}\sigma_{1}\sigma_{2}^{2}\sigma_{1}\sigma_{2}^{-2} \]
\[\sigma_{1}^{3}\sigma_{2}^{2}\sigma_1^{-1} = \sigma_1^2\sigma_{2}^{2}\sigma_{1}\sigma_{2}^{2}\sigma_{1}\sigma_{2}^{-2}\sigma_1^{-2} \]
and therefore
\[\sigma_1r_2\sigma_1^{-1}=
\sigma_{1}\cdot \sigma_{1}^{2} \sigma_{2}^{2}\cdot \sigma_{1}^{-1} = \sigma_{1}^{2}\sigma_{2}^{2} \cdot \sigma_{1}\sigma_{2}^{2}\sigma_{1} \cdot \sigma_{2}^{-2}\sigma_{1}^{-2}=r_2r_1r_2^{-1}\]
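As a sanity check (not part of the proof), these word identities can be tested numerically in the reduced Burau representation of $B_3$, which is faithful for three strands, so equal braid words must map to equal matrices. The following Python sketch, with an arbitrary generic parameter $t$, verifies Artin's relation and a few of the conjugation identities from the table; the helper names are ours, not from the paper.

```python
import numpy as np

# Reduced Burau representation of B_3 (faithful for n = 3), evaluated at a
# generic complex parameter t; equal braid words must give equal matrices.
t = 0.6 + 0.8j
s1 = np.array([[-t, 1.0], [0.0, 1.0]], dtype=complex)   # sigma_1
s2 = np.array([[1.0, 0.0], [t, -t]], dtype=complex)     # sigma_2
inv = np.linalg.inv

def word(*ms):
    """Multiply a sequence of matrices left to right."""
    out = np.eye(2, dtype=complex)
    for m in ms:
        out = out @ m
    return out

r1 = word(s1, s2, s2, s1)   # sigma_1 sigma_2^2 sigma_1
r2 = word(s1, s1, s2, s2)   # sigma_1^2 sigma_2^2
r3 = word(s2, s1, s1, s2)   # sigma_2 sigma_1^2 sigma_2

# Artin's braid relation and some of the conjugation identities above
assert np.allclose(word(s1, s2, s1), word(s2, s1, s2))
assert np.allclose(word(s1, r1, inv(s1)), r2)
assert np.allclose(word(s1, r2, inv(s1)), word(r2, r1, inv(r2)))
assert np.allclose(word(s2, r2, inv(s2)), r3)
```

Since the representation is faithful, agreement of the matrices here is equivalent to equality of the braid words.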
\par\bigskip
By suitable full rotations we obtain all generators of $P_3$. For
example, $a_{12}$ is obtained by rotating around the vector
$\hbox{\bf x}_0^3=(-1/2,-\sqrt 3 /2,0)$ and it twists the first
and the second strand. Furthermore, homotopies between closed
paths in $SO(3)$ correspond to isotopies of the spherical braids
and thus homotopic closed paths in $SO(3)$ will be mapped to the
same element in the factorgroup $P_3/R$.\par\bigskip\noindent {\bf
Proposition 1}\par\noindent {\it The factorgroup $P_3/R$ is
isomorphic to $\mbox{\bBB Z}_2$.}\par\smallskip\noindent {\bf Proof:} To
make notations simpler we use the same letter to denote both a
representative of a class in $P_3/R$ and the class itself, hoping
that the meaning is clear from the context. In $P_3/R$ we have
$$\sigma_1\sigma_2^2=\sigma_1^{-1}=\sigma_2^2\sigma_1,$$
and
$$\sigma_2\sigma_1^2=\sigma_2^{-1}=\sigma_1^2\sigma_2.$$
Now we obtain the following sequence of identities:
$$
\begin{array}{l l}
\sigma_2\sigma_1^2=\sigma_1^2\sigma_2, &\sigma_1\sigma_2\sigma_1^2=\sigma_1^3\sigma_2, \nonumber \\
\sigma_2\sigma_1\sigma_2\sigma_1=\sigma_1^3\sigma_2, &\sigma_1\sigma_2\sigma_1\sigma_2\sigma_1=\sigma_1^4\sigma_2, \nonumber \\
\sigma_1\sigma_2^2\sigma_1\sigma_2=\sigma_1^4\sigma_2, &I=\sigma_1^4. \nonumber
\end{array}
$$
We have used twice the braid relation (Equation \ref{Artinrel}) and the first equivalence relation in Equation \ref{r}.
In completely the same way we prove
$$
\sigma_2^4=I.
$$
Combining the last two results with the equivalence relations (Equation \ref{r}) we finally get
\begin{equation}
\sigma_1^2=\sigma_1^{-2}=\sigma_2^2=\sigma_2^{-2}
\label{sigma}
\end{equation}
It is now clear that in $P_3/R$ the three generators, defined in Equation \ref{gener}, reduce to one element of order 2. Therefore they generate $\mbox{\bBB Z}_2$. This completes the proof.\par\smallskip
So far we have constructed a map $\pi_1(SO(3))\to P_3/R$, which is onto by construction, and we have shown that the image is isomorphic to $\mbox{\bBB Z}_2$. We now show that this map is actually an isomorphism.
\par\bigskip\noindent
{\bf Proposition 2}\par\noindent {\it The map $\pi_1(SO(3))\to
P_3/R$ is a monomorphism.}\par\smallskip\noindent {\bf Proof:} We
have to show that if a closed continuous path in $SO(3)$ is mapped
to a braid in $R$, then this path is homotopic to the constant
path. The proof basically reduces to the following observation ---
any spherical braid which is pure (the strands connect each point
on the outer sphere with the same point on the inner sphere)
determines a closed path in $SO(3)$. Two isotopic spherical pure
braids determine homotopic closed paths in $SO(3)$. Indeed, recall
that for a spherical braid we can parametrize the points on each
strand with a single parameter $t$ and that for a fixed $t$ all
three points lie on a sphere with radius $1-t/2$. These three
ordered points $\hbox{\bf x}^i(t), i=1,2,3$ give for every fixed
$t$ a nondegenerate triangle, oriented somehow in $\mbox{\bBB R}^3$. Let
$\hbox{\bf l}(t)$ be the vector, connecting the center of mass of
the triangle with the vertex $\hbox{\bf x}^1(t)$, i.e., $\hbox{\bf
l}(t)=\hbox{\bf x}^1-(\hbox{\bf x}^1(t)+\hbox{\bf
x}^2(t)+\hbox{\bf x}^3(t))/3$ and define $\hbox{\bf
e}^1(t):=\hbox{\bf l}(t)/||\hbox{\bf l}(t)||$. Let $\hbox{\bf
e}^3(t)$ be the unit vector, perpendicular to the plane of the
triangle, in a positive direction relative to the orientation
(1,2,3) of the boundary. Finally, let $\hbox{\bf e}^2(t)$ be the
unit vector, perpendicular to both $\hbox{\bf e}^1(t)$ and
$\hbox{\bf e}^3(t)$, so that the three form a right-handed frame.
Then there is a unique element $\omega (t)\in SO(3)$ sending the
vectors $\hbox{\bf e}_0^1=(1,0,0)$, $\hbox{\bf e}_0^2=(0,1,0)$,
$\hbox{\bf e}_0^3=(0,0,1)$ to the triple $\hbox{\bf e}^i(t)$.
According to our definitions, $\omega(0)=\omega(1)=I$ and we get a
continuous function $\omega: [0,1]\to SO(3)$, where continuity
should be understood relative to some natural topology on $SO(3)$,
e.g., the strong operator topology.\par
Now, if we have two isotopic spherical braids, by definition there are continuous functions $\hbox{\bf x}^i(t,s), i=1,2,3$ such that
$\hbox{\bf x}^i(t,s)$ is a braid for any fixed $s\in[0,1]$,
$\hbox{\bf x}^i(0,s)=\hbox{\bf x}_0^i$, $\hbox{\bf
x}^i(1,s)=\hbox{\bf x}_0^i /2$, $\hbox{\bf x}^i(t,0)$ give the
initial braid and $\hbox{\bf x}^i(t,1)$ give the final braid. By
assigning an element $\omega(t,s)$ to any triple $\hbox{\bf
x}^i(t,s)$ as described we get a homotopy between two closed paths
in $SO(3)$.\par Let $\omega'(t)$ be a closed path in $SO(3)$ which
is mapped to a braid $b$ in the class $r_1\in R$. We can
construct a spherical braid, whose image is isotopic to that
braid. Let $\hbox{\bf z}$ be the point on the unit sphere with
respect to which we perform the stereographic projection. This can
always be chosen to be the north pole or a point very close to the
north pole (in case a strand is actually crossing the axis passing
through the north pole). Note that the points $\hbox{\bf x}_0^i,
i=1,2,3$ are on the equator. Construct a simple closed path on the unit sphere
starting and ending at $\hbox{\bf x}_0^1$ and going around
$\hbox{\bf z}$ in a negative direction (without crossing the
equator except at the endpoints). Thus we have two continuous
functions $\varphi(t),\ \theta(t), t\in [0,1]$ --- the spherical
(angular) coordinates describing this path. Let $\hbox{\bf
x}^1(t)$ be the point in $\mbox{\bBB R}^3$ whose spherical coordinates are
$\rho(t):=1-t/2,\ \varphi(t),\ \theta(t)$ and let $\hbox{\bf
x}^i(t):=(1-t/2)\hbox{\bf x}_0^i, i=2,3$. These three paths give
the required spherical braid. It is isotopic to the trivial braid,
coming from the constant path in $SO(3)$ and at the same time it
is isotopic to the preimage of $b$ under the stereographic
projection. In this way we see that $\omega'(t)$ must be homotopic to
the constant path. Obviously a similar argument holds with $r_1$
replaced by $r_2$ or $r_3$ or their inverses. Since any element in
$R$ is a product of these generators and since products of
isotopic braids give isotopic braids, this completes the
proof.\par
\section{Further results and generalizations}
There is an obvious generalization of some of the results of the
previous sections to the case $n>3$. The minimal number of strands
that is needed to capture the nontrivial fundamental group of
$SO(3)$ is $n=3$. When $n>3$ any full rotation will give rise to a pure
spherical braid but the whole group of pure braids will not be
generated in this way. It is relatively easy to see that in this
way, after projecting stereographically, one will obtain a
subgroup of $P_n$, generated by a single {\it full twist} $d$ of all strands around an external
point and a set of $n$ flips $r_i$:
$$\begin{array}{l}
d:=(\sigma_{n-1}\cdots \sigma_2\sigma_1)^n,\\
r_1:=\sigma_1\sigma_2\cdots \sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots \sigma_1,\\
r_2:=\sigma_1^2\sigma_2\cdots\sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots \sigma_2,\\
r_i:=\sigma_{i-1}\cdots\sigma_2\sigma_1^2\sigma_2\cdots\sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots\sigma_i,
\quad i=2,3,\ldots, n-1,\\
r_n:=\sigma_{n-1}\sigma_{n-2}\cdots\sigma_2\sigma_1^2\sigma_2\cdots\sigma_{n-2}\sigma_{n-1}.\\
\end{array}
$$
Figure
\ref{Fig7} shows a full twist for the case with 3 strands while Figure \ref{Fig8} shows a generic flip.
\begin{figure} [h]
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.28]{Fig7.eps}
\caption{The full twist $d$ in the case $n=3$ }
\label{Fig7}
\end{figure}
\begin{figure} [h]
\setlength{\abovecaptionskip}{20pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.45]{Fig8.eps}
\caption{The flip $r_i$ }
\label{Fig8}
\end{figure}
Straightforward calculations give the following generalization of Lemma 1:
\par\bigskip\noindent
{\bf Lemma 1$'$}\par\noindent
{\it The subgroup $R\subset P_n$, generated by $r_i$, $i=1,\ldots n$, is normal in $B_n$.}\par\smallskip\noindent
{\bf Proof:} As in the proof of Lemma 1 we exhibit explicit formulas for the conjugates of all flips $r_i$:
$$
\begin{array}{l }
\sigma_j r_i \sigma_j^{-1}=\sigma_j^{-1}r_i \sigma_j=r_i,\quad i-j>1\ \hbox{or}\ j-i>0,\\
\sigma_{i-1}r_i\sigma_{i-1}^{-1}=r_i r_{i-1} r_i^{-1},\\
\sigma_{i-1}^{-1} r_i \sigma_{i-1}=r_{i-1},\\
\sigma_i r_i \sigma_i^{-1}=r_{i+1},\quad i<n-1\\
\sigma_i^{-1} r_i \sigma_i = r_i^{-1}r_{i+1} r_i,\quad i<n-1\\
\end{array}
$$
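These formulas can also be spot-checked numerically in the unreduced Burau representation of $B_n$, in which $\sigma_i$ acts as the block $\begin{pmatrix}1-t & t\\ 1 & 0\end{pmatrix}$ on coordinates $(i,i+1)$. For $n>3$ this representation need not be faithful, so equal matrices only give a necessary-condition check; the Python sketch below (our own helper names) runs the case $n=4$.

```python
import numpy as np

# Unreduced Burau representation of B_4: equal braid words give equal
# matrices, so matrix equality is a necessary condition for word equality.
t = 0.6 + 0.8j
n = 4

def sigma(i):
    m = np.eye(n, dtype=complex)
    m[i-1:i+1, i-1:i+1] = [[1.0 - t, t], [1.0, 0.0]]
    return m

s1, s2, s3 = sigma(1), sigma(2), sigma(3)
inv = np.linalg.inv

def word(*ms):
    out = np.eye(n, dtype=complex)
    for m in ms:
        out = out @ m
    return out

# The flips r_1, ..., r_4 for n = 4, from the formulas above
r = {1: word(s1, s2, s3, s3, s2, s1),
     2: word(s1, s1, s2, s3, s3, s2),
     3: word(s2, s1, s1, s2, s3, s3),
     4: word(s3, s2, s1, s1, s2, s3)}

assert np.allclose(word(s1, s2, s1), word(s2, s1, s2))   # braid relation
assert np.allclose(word(s2, r[1], inv(s2)), r[1])        # j - i > 0 case
assert np.allclose(word(inv(s2), r[3], s2), r[2])        # sigma_{i-1}^{-1} r_i sigma_{i-1}
assert np.allclose(word(s3, r[3], inv(s3)), r[4])        # sigma_i r_i sigma_i^{-1}
```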
\par\bigskip
Let us denote by $S$ the subgroup, generated by $d$ and $r_i$.
Using purely topological information, namely that $\pi_1(SO(3))\cong\mbox{\bBB Z}_2$, we can deduce the following
generalization of Proposition 1:
\par\bigskip\noindent
{\bf Proposition 1$'$}\par\noindent {\it The factorgroup $S/R$ is
isomorphic to $ \mbox{\bBB Z}_2$.}\par\smallskip\noindent An equivalent
statement is that $d^2 \in R$.
\par It is tempting to try
generalizing the main result of this paper to higher-dimensional
rotations. A straightforward generalization fails since one would
produce braids in higher than three-dimensional space and these
can always be untangled. Note that in three dimensions we are able
to attribute a path in $SO(3)$ to any spherical braid with 3
strands but this is not the case for $n>3$ (4 points on $S^3$ may
not determine an orientation of the orthonormal frame in
$\mbox{\bBB R}^4$.) It remains to be seen if a refinement of the methods
used can render meaningful results.
\section{Introduction}
The accuracy of cosmological observations has improved significantly in the past two decades. The base six-parameter $\Lambda$CDM model is strongly supported by the precise measurements of the anisotropies of the cosmic microwave background (CMB) \cite{Komatsu:2008hk,Ade:2015xua}. Type Ia supernovae (SNe) \cite{Conley:2011ku,Suzuki:2011hu} and Baryon Acoustic Oscillations (BAO) \cite{Cole:2005sx,Eisenstein:2005su}, as geometric complements, directly encode the expansion history of the late-time Universe. The Hubble constant, the key parameter characterizing today's expansion rate, is directly measured by the Hubble Space Telescope (HST) \cite{Riess:2016jrr}.
The BAO signal is a periodic relic of fluctuations in the baryonic matter density of the Universe. It serves as a standard ruler and provides an independent way to constrain models.
In previous observations, BAO was traced directly by galaxies at low redshift and measured indirectly through analysis of the Lyman-$\alpha$ (Ly$\alpha$) forest in quasar spectra at high redshift.
Recently, the extended Baryon Oscillation Spectroscopic Survey (eBOSS) \cite{Dawson:2015wdb} released another percent-level BAO measurement at $z=1.52$, obtained directly from the auto-correlation of quasars and referred to as the DR14 quasar sample \cite{Ata:2017dya}. This new way of extracting BAO features makes DR14 the first BAO distance measurement in the range $1<z<2$.
The higher the redshift at which BAO is measured, the more sensitive it is to the Hubble parameter. We can therefore expect improved constraints on the equation of state (EOS) of dark energy (DE) and a more precise description of the expansion history by including DR14.
On the other hand, with increasing total active neutrino mass at fixed $\theta_*$, the spherically averaged BAO distance $D_V(z\lesssim1)$ increases, while $D_V(z>1)$ falls \cite{Ade:2013zuv}. This implies that DR14 may improve the constraint on the total active neutrino mass. Conversely, with increasing effective number of relativistic species $N_{\textrm{eff}}$ at fixed $\theta_*$ and fixed redshift of matter-radiation equality $z_{\textrm{eq}}$, $D_V(z)$ decreases for all BAO measurements \cite{Ade:2013zuv}. Therefore, DR14 can improve the constraint on $N_{\textrm{eff}}$ as well.
In addition, the spatial curvature of our Universe can also be constrained better, because the geometry of space directly affects the BAO distance measurements and the newly released DR14 measurement fills the gap at $1<z<2$.
In this paper, we update the constraints on the EOS of DE, the active neutrino masses, the dark radiation and the spatial curvature using the Planck data and the BAO measurements, including the DR14 quasar sample at $z=1.52$. The paper is organized as follows. In Sec.~\ref{method}, we explain our methodology and the data we use. In Sec.~\ref{results}, the results for different models are presented. Finally, a brief summary and discussion are given in Sec.~\ref{sum}.
\section{Data and Methodology}
\label{method}
We use the combined data of CMB and BAO measurements to constrain the parameters in the different models. Concretely, we use Planck TT,TE,EE+lowP released by the Planck Collaboration in 2015 \cite{Ade:2015xua}, namely P15, as well as the BAO measurements at $z=0.106,\ 0.15,\ 0.32,\ 0.57,\ 1.52$, namely 6dFGS \cite{Beutler:2011hx}, MGS \cite{Ross:2014qpa}, DR12 BOSS LOWZ, DR12 BOSS CMASS \cite{Cuesta:2015mqa,Gil-Marin:2015nqa}, and the DR14 eBOSS quasar sample \cite{Ata:2017dya}, respectively.
To describe the BAO data we use, we briefly introduce the basic model of the BAO signal. The volume-averaged distance is measured, in \cite{Eisenstein:2005su}, by
\begin{eqnarray}
D_V(z)=\[(1+z)^2 D_A^{2}(z) \frac{cz}{H(z)}\]^{1/3}
\end{eqnarray}
where $c$ is the light speed, $D_A(z)$ is the proper angular diameter distance \cite{Aubourg:2014yra}, given by
\begin{eqnarray}
D_A(z)=\dfrac{c}{1+z} S_k\(\int_{0}^{z} { dz'\over H(z')}\)\label{con:da}
\end{eqnarray}
where $S_k(x)$ is
\begin{eqnarray}S_k(x)=
\begin{cases}
\sin(\sqrt{-\Omega_k}x)/\sqrt{-\Omega_k},&\Omega_k<0, \\ \sinh(\sqrt{\Omega_k}x)/\sqrt{\Omega_k},&\Omega_k>0, \\ x, &\Omega_k=0
\end{cases}
\end{eqnarray}
and $H(z)$ is
\begin{eqnarray}
H(z)=H_0 \[\Omega_r (1+z)^4 + \Omega_m (1+z)^3 +\Omega_k (1+z)^2 +(1-\Omega_r -\Omega_m-\Omega_k)f(z)\]^{1/2}
\end{eqnarray}
where
\begin{eqnarray}
f(z)\equiv {\rho_\text{DE}(z)\over \rho_\text{DE}(0)}=\exp\[\int_0^z3(1+w(z')){dz'\over 1+z'}\],
\end{eqnarray}
and $w(z)\equiv p_\text{DE}/\rho_\text{DE}$ is the EOS of DE.
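The formulas above are straightforward to implement. The following Python sketch (our own function names; illustrative parameter values close to a flat $\Lambda$CDM fit, with radiation neglected at these redshifts) evaluates $H(z)$, $D_A(z)$ and $D_V(z)$ for a constant EOS $w$; in the flat case used here $S_k(x)=x$.

```python
import numpy as np

# Illustrative sketch of the BAO distance formulas for a constant-w DE model.
# Parameter values are examples only, not the fits reported in this paper.
c = 299792.458                      # speed of light in km/s
H0, Om, Ok, Orad, w = 67.8, 0.31, 0.0, 0.0, -1.0

def S_k(x, Ok):
    if Ok < 0:
        return np.sin(np.sqrt(-Ok) * x) / np.sqrt(-Ok)
    if Ok > 0:
        return np.sinh(np.sqrt(Ok) * x) / np.sqrt(Ok)
    return x                        # flat case

def H(z):
    f = (1.0 + z) ** (3.0 * (1.0 + w))    # rho_DE(z)/rho_DE(0) for constant w
    return H0 * np.sqrt(Orad * (1 + z)**4 + Om * (1 + z)**3
                        + Ok * (1 + z)**2 + (1 - Orad - Om - Ok) * f)

def D_A(z, n=4000):
    zs = np.linspace(0.0, z, n)
    invH = 1.0 / H(zs)
    # trapezoidal rule for int_0^z dz'/H(z')
    chi = (zs[1] - zs[0]) * (invH[:-1] + invH[1:]).sum() / 2.0
    return c * S_k(chi, Ok) / (1.0 + z)

def D_V(z):
    return ((1 + z)**2 * D_A(z)**2 * c * z / H(z)) ** (1.0 / 3.0)
```

With these example parameters, $D_V(1.52)$ comes out in the few-thousand-Mpc range probed by the DR14 quasar sample.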
\section{Results}
\label{results}
In this section, we present our new constraints on the dark energy, the neutrino masses, the dark radiation and the spatial curvature of the Universe separately.
\subsection{Constraints on Dark Energy}
In this subsection, we constrain the cosmological parameters in the $\Lambda$CDM model, the $w$CDM model and the $w_0 w_a$CDM model \cite{Chevallier:2000qy,Linder:2002et} respectively. Our results are summarized in Tab.~\ref{tablede}. We run CosmoMC \cite{Lewis:2002ah} in the $\Lambda$CDM model as the basic model, with six free cosmological parameters ${\lbrace\Omega_b h^2,\Omega_c h^2,100\theta_{MC},\tau,n_s,\ln(10^{10}A_s)\rbrace}$. Here $\Omega_b h^2$ is the baryon density today, $\Omega_c h^2$ is the cold dark matter density today, $100\theta_{MC}$ is 100 times the approximate ratio of the sound horizon to the angular diameter distance at last scattering, $\tau$ is the optical depth, $n_s$ is the scalar spectral index and $A_s$ is the amplitude of the power spectrum of primordial curvature perturbations.
\begin{table}
\caption{The 68$\%$ limits for the cosmological parameters in different DE models from P15+BAO.}\label{tablede}
\begin{tabular}{p{3 cm}<{\centering}|p{3.5cm}<{\centering} p{3.5cm}<{\centering} p{3.5cm}<{\centering} }
\hline
& $\Lambda$CDM & $w$CDM & $w_0 w_a$CDM \\
\hline
$\Omega_b h^{2}$ & $0.02233\pm 0.00014$ & $0.02229\pm 0.00015$ & $0.02224\pm 0.00015$ \\
$\Omega_c h^{2}$ & $0.1186\pm 0.0010$ & $0.1191\pm 0.0013$ & $0.1200\pm 0.0014$ \\
$100\theta_{MC}$ & $1.04091\pm 0.00030 $ &$1.04086\pm 0.00031 $ & $1.04075\pm 0.00031 $ \\
$\tau$ & $0.085\pm 0.016$ & $0.082\pm 0.017$ & $0.075\pm 0.017$ \\
$\ln(10^{10}A_s)$ &$3.102\pm 0.032$& $3.098_{-0.032}^{+0.035}$& $3.086\pm 0.033 $ \\
$n_s$ & $0.9677\pm 0.0040 $ & $0.9664 \pm 0.0046 $ & $0.9640\pm 0.0046 $ \\
$w$ & - &$-1.036\pm 0.056$ & - \\
$w_0$ & - & - & $-0.25\pm 0.32 $ \\
$w_a$ & - & - & $-2.29_{-0.91}^{+1.10}$\\
\hline
$H_0$ [km s$^{-1}$ Mpc$^{-1}$]&$67.81_{-0.46}^{+0.47}$ &$68.66_{-1.55}^{+1.41}$ &$62.56_{-2.74}^{+2.42}$\\
\hline
\end{tabular}
\end{table}
The EOS of DE is $w=-1.036\pm0.056$ in the $w$CDM model at $68\%$ confidence level (C.L.). The triangular plot of $H_0$, $w_0$ and $w_a$ in the $w_0 w_a$CDM model is shown in Fig.~\ref{fig:ww}; it indicates that the prediction of $\Lambda$CDM lies within the $68\%$ confidence region, which may seem to conflict with the $w_0$, $w_a$ values in Tab.~\ref{tablede}. In fact, the values in Tab.~\ref{tablede} are marginalized: each is obtained by integrating the probability over all parameters except the one of interest. Because of the strong correlation between $w_0$ and $w_a$, the consistency of the $\Lambda$CDM prediction with the datasets should be checked in the $w_0$--$w_a$ 2D contour plot.
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.5]{wwCDM12_tri.pdf}
\end{center}
\caption{The triangular plot of $H_0$, $w_0$ and $w_a$ in the $w_0 w_a$CDM model. The point ($w_0=-1$, $w_a=0$) locates in the $w_0 w_a$CDM 68$\%$ C.L. region.}
\label{fig:ww}
\end{figure}
Marginalizing over the other cosmological parameters, we also plot the evolution of the normalized Hubble parameter $H(z)$ in Fig.~\ref{fig:H} where the Hubble parameter is normalized by comparing to those in the best-fit $\Lambda$CDM model.
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.9]{H-f.pdf}
\end{center}
\caption{The normalized $H(z)$ plot in the $\Lambda$CDM model, $w$CDM model and $w_0 w_a$CDM model. The grey band represents the $68\%$ confidence range in the $\Lambda$CDM model allowed by P15+BAO. The ranges between the two blue dash-dotted lines and the two red dashed lines represent the $68\%$ confidence ranges of the $w$CDM model and $w_0 w_a$CDM model allowed by P15+BAO respectively.
The black and orange error bars denote the Hubble parameters measured by HST (named R16) in \cite{Riess:2016jrr} and the Ly$\alpha$ forest of BOSS DR11 quasars (named Ly$\alpha$ Forest) in \cite{Delubac:2014aqe}.
}
\label{fig:H}
\end{figure}
The Hubble constant is $H_0=67.81_{-0.46}^{+0.47}$ km s$^{-1}$ Mpc$^{-1}$ in the $\Lambda$CDM model, $H_0=68.66_{-1.55}^{+1.41}$ km s$^{-1}$ Mpc$^{-1}$ in the $w$CDM model, and $H_0=62.56_{-2.74}^{+2.42}$ km s$^{-1}$ Mpc$^{-1}$ in the $w_0w_a$CDM model, all at 68$\%$ C.L. Overall, there is a significant tension between the Hubble constant from the global fit to P15+BAO and the direct measurement by HST \cite{Riess:2016jrr} (named R16), which gives $H_0=73.24\pm 1.74$ km s$^{-1}$ Mpc$^{-1}$. Even though this tension is slightly relaxed in the $w$CDM model, it is aggravated in the $w_0 w_a$CDM model. To relax it significantly, a more dramatic design of the EOS of DE is needed \cite{Qing-Guo:2016ykt}. In addition, a roughly $2\sigma$ tension remains between the Hubble parameter at $z=2.34$ predicted by these three DE models constrained by P15+BAO and the measurement from the Ly$\alpha$ forest of BOSS DR11 quasars \cite{Delubac:2014aqe}, which gives $H(z=2.34)=222\pm 7$ km s$^{-1}$ Mpc$^{-1}$.
\subsection{Constraints on the total mass of active neutrinos}
The neutrino oscillation implies that the active neutrinos have mass splittings
\begin{eqnarray}
\Delta m_{21}^2=m_2^2-m_1^2,
\end{eqnarray}
\begin{eqnarray}
\vert \Delta m_{31}^2\vert=\vert m_3^2-m_1^2\vert.
\end{eqnarray}
where $\Delta m_{21}^2\simeq 7.54\times 10^{-5}$ eV$^2$ and $\vert \Delta m_{31}^2\vert\simeq 2.46\times 10^{-3}$ eV$^2$ \cite{Olive:2016xmw}. That is to say, there are two possible mass hierarchies: if $m_1<m_2<m_3$, it is a normal hierarchy (NH); if $m_3<m_1<m_2$, it is an inverted hierarchy (IH).
The neutrino mass spectrum is expressed as
\begin{eqnarray}(m_1,m_2,m_3)=
\begin{cases}
(m_1,\sqrt{m_1^2+\Delta m_{21}^2},\sqrt{m_1^2+\vert \Delta m_{31}^2\vert}),& \text{for\ NH,\ where \ }\text{$m_1$ \ is \ the\ minimum}, \\ (\sqrt{m_3^2+\vert \Delta m_{31}^2\vert},\sqrt{m_3^2+\Delta m_{21}^2+\vert \Delta m_{31}^2\vert},m_3),& \text{for \ IH,\ where \ $m_3$ \ is \ the\ minimum},\\
\end{cases}
\end{eqnarray}
and the total mass satisfies
\begin{eqnarray}\sum m_{\nu}=m_1+m_2+m_3\gtrsim
\begin{cases}
0.059,& \text{for\ NH}, \\ 0.101,& \text{for \ IH.}\\
\end{cases}
\end{eqnarray}
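The spectrum above is simple to evaluate numerically. A minimal Python sketch (our own function names), using the quoted splittings, reconstructs $(m_1,m_2,m_3)$ from the lightest state and reproduces the lower bounds on the sum up to rounding of the input splittings:

```python
import numpy as np

# Mass splittings in eV^2, as quoted in the text
dm21_sq = 7.54e-5
dm31_sq = 2.46e-3

def spectrum(m_min, hierarchy):
    """(m1, m2, m3) given the lightest mass, for 'NH' or 'IH'."""
    if hierarchy == "NH":                      # m1 is the minimum
        m1 = m_min
        return (m1, np.sqrt(m1**2 + dm21_sq), np.sqrt(m1**2 + dm31_sq))
    m3 = m_min                                 # IH: m3 is the minimum
    return (np.sqrt(m3**2 + dm31_sq), np.sqrt(m3**2 + dm21_sq + dm31_sq), m3)

def total_mass(m_min, hierarchy):
    return sum(spectrum(m_min, hierarchy))

# Lower bounds on the sum, reached for a massless lightest state:
print(total_mass(0.0, "NH"))   # close to the 0.059 eV bound quoted above
print(total_mass(0.0, "IH"))   # close to the 0.101 eV bound quoted above
```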
Here we set the minimum of the three neutrino masses as a free parameter and the sum of the neutrino masses as a derived parameter. Our results are summarized in Tab.~\ref{tab:neutrino}.
\begin{table}
\caption{The 68$\%$ limits for the cosmological parameters in the $\nu_\text{NH}\Lambda$CDM model and the $\nu_\text{IH}\Lambda$CDM model from P15+BAO. }
\begin{tabular}{p{3 cm}<{\centering}|p{3.5cm}<{\centering} p{3.5cm}<{\centering} p{3.5cm}<{\centering} }
\hline
\ & $\nu_\text{NH}\Lambda$CDM &$\nu_\text{IH}\Lambda$CDM \\
\hline
$\Omega_b h^{2}$ & $0.02234\pm 0.00014$ &$0.02235\pm 0.00014$ \\
$\Omega_c h^{2}$ & $0.1184\pm 0.0011$ &$0.1181\pm 0.0010$ \\
$100\theta_{MC}$ & $1.04093\pm 0.00029 $ &$1.04093\pm 0.00029 $ \\
$\tau$ & $0.087\pm 0.0165$ &$0.090_{-0.016}^{+0.018}$ \\
$\ln(10^{10}A_s)$ & $3.106\pm 0.033 $ &$3.111_{-0.032}^{+0.035} $ \\
$n_s$ & $0.9682^{+0.0041}_{-0.0040} $ &$0.9689\pm 0.0041$ \\
$m_{\nu,min}$ ($95\%$ C.L.) & $<$ 0.047 eV & $<$ 0.049 eV \\
$\Sigma m_{\nu}$ ($95\%$ C.L.) & $<$ 0.16 eV &$<$ 0.19 eV \\
\hline
$H_0$ [km s$^{-1}$ Mpc$^{-1}$]&$67.65\pm{0.50}$ &$67.43\pm 0.48$ \\
\hline
\end{tabular}
\label{tab:neutrino}
\end{table}
The likelihood distribution of $\sum m_\nu$ for the NH and IH are illustrated in Fig.~\ref{fig:mnu}.
\begin{figure}[]
\begin{center}
\includegraphics[scale=0.9]{mnu.pdf}
\end{center}
\caption{The likelihood distributions of $\sum m_{\nu}$ for the NH and IH neutrinos in the $\nu \Lambda$CDM model. The dashed lines denote the minimal allowed values of $\sum m_{\nu}$, namely 0.059 eV for NH and 0.101 eV for IH.}
\label{fig:mnu}
\end{figure}
In summary, the masses of the lightest neutrinos in NH and IH are $m_{\nu,min}<0.047$ eV and $m_{\nu,min}<0.049$ eV at 95$\%$ C.L. respectively. The total active neutrino masses are given by $\sum m_{\nu}<0.16$ eV and $\sum m_{\nu}<0.19$ eV for the NH and IH, and the NH is slightly preferred with $\Delta\chi^2\equiv \chi^2_\text{NH}-\chi^2_\text{IH}=-1.25$. Our new results are slightly tighter than those without the DR14 quasar sample \cite{Huang:2015wrx}. See some other related investigations in \cite{Xu:2016ddc,Guo:2017hea,Li:2017iur,Capozzi:2017ipn,Feng:2017nss,Feng:2017mfs,Wang:2017htc,Vagnozzi:2017ovm,Giusarma:2016phn}.
\subsection{Constraints on the dark radiation}
The total energy density of radiation in the Universe is given by
\begin{eqnarray}
\rho_r=\[1+\dfrac{7}{8}(\dfrac{4}{11})^{4/3} N_\text{eff} \] \rho_\gamma
\end{eqnarray}
where $\rho_\gamma$ is the CMB photon energy density and $N_\text{eff}$ denotes the effective number of relativistic degrees of freedom in the Universe. For the three standard-model neutrinos, the contribution to $N_\text{eff}$ is 3.046 due to non-instantaneous decoupling corrections. An additional relativistic degree of freedom, $\Delta N_\text{eff}\equiv N_\text{eff}-3.046$, would imply the existence of some other, unknown source of relativistic energy density. $\Delta N_\text{eff}<0$ would result from incompletely thermalized neutrinos or from photons produced after neutrino decoupling, which is less motivated. There are, however, many scenarios with $\Delta N_\text{eff}>0$. If a species of additional massless particles has not interacted with the others since the epoch of recombination, its energy density evolves exactly like radiation and thus contributes $\Delta N_\text{eff}=1$. Values $0<\Delta N_\text{eff}<1$ arise in non-thermal cases and for bosonic particles: a thermalized massless boson that decoupled at $0.5\ \textrm{MeV}<T<100\ \textrm{MeV}$ contributes $\Delta N_\text{eff}\simeq 0.57$, and $\Delta N_\text{eff}\simeq 0.39$ if it decoupled before $T=100$ MeV \cite{Weinberg:2013kea}.
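For reference, the bracket in the equation above can be evaluated directly; a one-line sketch (our own function name) gives the familiar value $\rho_r/\rho_\gamma\simeq 1.692$ for the standard $N_\text{eff}=3.046$:

```python
# rho_r / rho_gamma from the equation above: 1 + (7/8) (4/11)^(4/3) N_eff
def rho_r_over_rho_gamma(n_eff):
    return 1.0 + 7.0 / 8.0 * (4.0 / 11.0) ** (4.0 / 3.0) * n_eff

print(rho_r_over_rho_gamma(3.046))          # ~1.692 for three standard neutrinos
print(rho_r_over_rho_gamma(3.046 + 0.57))   # adding one boson decoupled at 0.5-100 MeV
```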
In the $N_\text{eff}+\Lambda $CDM model, $N_\text{eff}$ is taken as a free parameter. The results are summarized in Tab.~\ref{paramsnnu}.
\begin{table}
\caption{The 68$\%$ limits for the cosmological parameters in the $N_\text{eff}+\Lambda $CDM model from P15+BAO.}\label{paramsnnu}
\begin{tabular}{p{3 cm}<{\centering}|p{3.5cm}<{\centering} p{3.5cm}<{\centering} p{3.5cm}<{\centering} }
\hline
\ & $N_\textsf{eff}+\Lambda$CDM \\
\hline
$\Omega_b h^{2}$ &$0.02236\pm 0.00019$ \\
$\Omega_c h^{2}$ &$0.1193\pm 0.0031$ \\
$100\theta_{MC}$ &$1.04085\pm 0.00044 $ \\
$\tau$ &$0.086\pm 0.017$ \\
$\ln(10^{10}A_s)$ &$3.105\pm 0.035$ \\
$n_s$ &$0.9691_{-0.0075}^{+0.0076}$\\
$N_\textsf{eff} $ &$3.09_{-0.20}^{+0.18}$\\
\hline
$H_0$ [km s$^{-1}$ Mpc$^{-1}$]&$68.07_{-1.20}^{+1.21}$\\
\hline
\end{tabular}
\end{table}
Our results give $N_\text{eff}=3.09_{-0.20}^{+0.18}$ at 68$\%$ C.L., which is consistent with the fact that there are only three active neutrinos in the Universe.
On the other hand, for example in \cite{Riess:2016jrr}, the dark radiation is proposed to relax the tension on the Hubble constant between the global fitting P15+BAO and the direct measurement by HST. Here we illustrate the constraints on $H_0$ and $N_\text{eff}$ in Fig.~\ref{fig:nnu}.
\begin{figure}[]
\centering
\includegraphics[scale=0.9]{dark-radiation12_2D.pdf}
\caption{The 2D contour plot of $H_0$ and $N_\text{eff}$ in the $N_\text{eff}+\Lambda$CDM model. The shaded rectangle represents the direct measurement $H_0=73.24\pm 1.74$ km s$^{-1}$ Mpc$^{-1}$. The two regions overlap only at the 95$\%$ C.L.}\label{fig:nnu}
\end{figure}
From Fig.~\ref{fig:nnu}, we find that the dark radiation cannot really solve this tension.
\subsection{Constraints on the curvature}
According to Eq.~(\ref{con:da}), the spatial geometry affects the distance measurements, and hence the spatial curvature parameter $\Omega_k$ can be constrained by using BAO data. In the $\Omega_k$+$\Lambda$CDM model, $\Omega_k$ is taken as a free parameter. The constraints on the cosmological parameters in the $\Omega_k$+$\Lambda$CDM model are given in Tab.~\ref{paramsomegak}.
\begin{table}
\centering
\caption{The $68\%$ limits for the cosmological parameters in the $\Omega_k+\Lambda$CDM model from P15+BAO }\label{paramsomegak}
\begin{tabular}{p{3 cm}<{\centering}| p{5cm}<{\centering}}
\hline
\ & $\Omega_k$+$\Lambda$CDM \\
\hline
$\Omega_b h^{2}$ &$0.02226\pm 0.00016$ \\
$\Omega_c h^{2}$ &$0.1196\pm 0.0015$ \\
$100\theta_{MC}$ &$1.04078\pm 0.00030 $\\
$\tau$ &$0.081\pm 0.017$\\
$\ln(10^{10}A_s)$ &$3.097\pm 0.033 $\\
$n_s$ &$0.9652 \pm 0.0048 $\\
$\Omega_k$ &$(1.8\pm 1.9)\times10^{-3}$\\
\hline
$H_0$ [km s$^{-1}$ Mpc$^{-1}$]&$68.27_{-0.95}^{+0.68}$ \\
\hline
\end{tabular}
\end{table}
We find that the spatial curvature is tightly constrained, namely $\Omega_k=(1.8\pm 1.9)\times10^{-3}$ at $68\%$ C.L. and $\Omega_k=(1.8_{-3.8}^{+3.9})\times10^{-3}$ at $95\%$ C.L., nicely consistent with a spatially flat universe. Adopting P15 alone, the constraint on the spatial curvature is $\Omega_k=(-40_{-41}^{+38})\times10^{-3}$ at $95\%$ C.L., around one order of magnitude looser than our new result. However, our result improves little on the Planck$+$BAO result in the Planck table, $\Omega_k=(0.2\pm 2.1)\times10^{-3}$ at $68\%$ C.L., which implies that the DR14 sample helps little in constraining the curvature. The constraints on $\Omega_\Lambda$ and $\Omega_m$ are illustrated in Fig.~\ref{fig:omegak}.
\begin{figure}[]
\centering
\includegraphics[scale=0.9]{omegak12_2D-2.pdf}
\caption{The contour plot of $\Omega_m$ and $\Omega_{\Lambda}$ in the $\Omega_k+\Lambda$CDM model. The dashed line indicates $\Omega_m+\Omega_{\Lambda}=1$.}
\label{fig:omegak}
\end{figure}
\section{Summary and discussion}
\label{sum}
In this paper we provide new constraints on the cosmological parameters in several extensions of the base six-parameter $\Lambda$CDM model by combining P15 and BAO data, including the DR14 quasar sample measurement recently released by eBOSS. We do not find any signal beyond this base cosmological model.
We explore the EOS of DE in two extended models, namely the $w$CDM and $w_0 w_a$CDM models, and find $w=-1.036\pm 0.056$ at $68\%$ C.L. in the $w$CDM model, and $w_0=-0.25 \pm 0.32$, $w_a=-2.29^{+1.10}_{-0.91}$ at $68\%$ C.L. in the $w_0 w_a$CDM model, with $w=-1$ located within the $68\%$ C.L. region. However, the tension on the Hubble constant between the direct measurement by HST and the global fit to P15+BAO cannot be significantly relaxed in the $w$CDM model, and the $w_0 w_a$CDM model makes it even worse. The neutrino mass normal hierarchy is slightly preferred over the inverted hierarchy, by $\Delta\chi^2\equiv \chi^2_\text{NH}-\chi^2_\text{IH}=-1.25$, and the $95\%$ C.L. upper bounds on the sum of the three active neutrino masses are $\sum m_\nu<0.16$ eV for the normal hierarchy and $\sum m_\nu<0.19$ eV for the inverted hierarchy. Three active neutrinos are nicely consistent with the constraint on the effective number of relativistic degrees of freedom, $N_\text{eff}=3.09_{-0.20}^{+0.18}$ at $68\%$ C.L., and a spatially flat Universe is preferred.
\vspace{5mm}
\noindent {\bf Acknowledgments}
We acknowledge the use of HPC Cluster of ITP-CAS.
This work is supported by grants from NSFC (grant NO. 11335012, 11575271, 11690021), Top-Notch Young Talents Program of China, and partly supported by Key Research Program of Frontier Sciences, CAS.
\section{Introduction}
\label{sec:introduction}
There are many applications in which it is
necessary to estimate the probability density
function (pdf) from a finite sample of
$n$ observations $\{x_1, \, x_2, \, \ldots, \, x_n\}$
whose true pdf is $f(x)$. Here we consider the generic
case in which the identically distributed (but not
necessarily independent) random variables have a
compact support $x_k \in [a,b]$.
The usual starting point for a pdf estimation is the naive
estimate
\begin{equation}
\label{eq:rawpdf}
\hat{f}_{\delta}(x) = \frac{1}{n}
\sum_{i=1}^n \delta(x-x_i) \ ,
\end{equation}
where $\delta(.)$ stands for the Dirac delta function.
Although this definition has a number of advantages, it
is useless for practical purposes
since a smooth functional is needed.
Our problem consists in finding an estimate
$\hat{f}(x)$ whose integral over an interval of
given length converges toward that of the true pdf
as $n \rightarrow \infty$. Many solutions have been
developed for that purpose: foremost among these are
kernel techniques in which the estimate $\hat{f}_{\delta}(x)$
is smoothed locally using a kernel function $K(x)$
\cite{Tapia78,Silverman86,Izenman91}
\begin{equation}
\label{eq:kernel}
\hat{f}(x) = \int_a^b \frac{1}{w} K
\left( \frac{x-y}{w} \right) \,
\hat{f}_{\delta}(y) \, {\rm d} y \ ,
\end{equation}
whose width is controlled by the parameter $w$.
The well-known histogram is a variant of this technique.
Although kernel approaches are by far
the most popular ones, the choice of a suitable width
remains a basic problem for which visual guidance
is often needed. More generally, one faces the
problem of choosing a good partition.
Some solutions include Bayesian
approaches \cite{Wolpert95}, polynomial fits
\cite{Ventzek94} and methods based on wavelet
filtering \cite{Donoho93}.
An alternative approach, considered by many authors
\cite{Cenkov62,Schwartz67,Kronmal68,Crain74,Pinheiro97},
is a projection of the pdf on orthogonal functions
\begin{equation}
\label{eq:projection}
\hat{f}(x) = \sum_k \alpha_k \; g_k(x) \ ,
\end{equation}
where the partition problem is now treated in dual space.
This parametric
approach has a number of interesting properties:
a finite expansion often suffices to obtain a good
approximation of the pdf and the convergence
of the series versus the sample size $n$ is generally
faster than for kernel estimates. A strong point
is its global character, since the pdf is fitted globally,
yielding estimates that are better behaved in
regions where the lack of statistics causes kernel
estimates to perform poorly.
Such a property is particularly
relevant for the analysis of turbulent wavefields, in
which the tails of the distribution are of great
interest (e.g. \cite{Frisch95}).
These advantages, however, should be weighed against
a number of downsides. Orthogonal series do not
provide consistent estimates
of the pdf since, as the number of terms increases, they
converge toward $\hat{f}_{\delta}(x)$ instead of
the true density $f(x)$ \cite{Devroye85}.
Furthermore, most series can only handle continuous or
piecewise continuous densities.
Finally, the pdf estimates obtained that way
are not guaranteed to be nonnegative (see
for example the problems encountered in \cite{Nanbu95}).
The first problem is not a major obstacle, since
most experimental distributions are smooth anyway.
The second one is more problematic.
In this paper we show how it can be partly overcome by
using a Fourier series expansion of the pdf and
seeking a maximization of the likelihood
\begin{equation}
\hat{L} = \int_a^b
\log \hat{f}(x) \; {\rm d} x \ .
\end{equation}
The problem of choosing an appropriate partition
then reduces to that of fitting the pdf with
a positive definite Pad\'e approximant \cite{Graves72}.
Our motivation for presenting this particular
parametric approach stems from its
robustness, its simplicity and the originality
of the computational scheme it leads to. The latter,
as will be shown later,
is closely related to the problem of estimating
power spectral densities with
autoregressive (AR) or maximum entropy methods
\cite{Priestley81,Haykin79,Percival93}.
To the best of our knowledge, the only earlier
reference to similar work is that by
Carmichael \cite{Carmichael76};
here we emphasize the relevance of the method
for estimating pdfs and propose a criterion for choosing
the optimum number of basis functions.
\section{The Maximum likelihood approach}
\label{sec:method}
The method we now describe basically involves
a projection of the pdf on a Fourier series.
The correspondence between the continuous pdf $f(x)$
and its discrete characteristic function $\phi_k$
is established by \cite{Oppenheim89}
\begin{eqnarray}
\label{eq:IFT2}
\phi_k &=& \int_{-\pi}^{+\pi} f(x) \; e^{j k x}
\; {\rm d} x \\
\label{eq:DFT2}
f(x) &=& 2 \pi \sum_{k=-\infty}^{+\infty}
\phi_k \; e^{-j k x} \ ,
\end{eqnarray}
where $\phi_k = \phi_{-k}^* \in \mathbb{C}$ is hermitian
\cite{Comment1}.
Note that we have applied a linear transformation
to convert the support from $[a,b]$ to $[-\pi, \pi]$.
For a finite sample, an unbiased estimate of the
characteristic function is obtained by inserting
eq.~\ref{eq:rawpdf} into eq.~\ref{eq:IFT2}, giving
\begin{equation}
\label{eq:charfctn}
\hat{\phi}_k = \frac{1}{n} \sum_{i=1}^{n} e^{jkx_i} \ .
\end{equation}
The main problem now consists in recovering the pdf from
eq.~\ref{eq:DFT2} while avoiding the infinite
summation. By working in dual space we have traded the
partition choice problem for that of selecting the
number of relevant terms in the Fourier series expansion.
The simplest choice would be to truncate
the series at a given ``wave number'' $p$ and discard
the other ones
\begin{equation}
\label{eq:DFT3}
\hat{f}(x) = 2 \pi \sum_{k=-p}^{+p}
\hat{\phi}_k \; e^{-j k x} \ .
\end{equation}
Such a truncation is equivalent
to keeping the lowest wave numbers and thus filtering
out small details of the pdf.
Incidentally, this solution is equivalent to a kernel
filtering with $K(x) = \sin(\pi x)/\pi x$ as kernel.
This kernel is usually avoided because it suffers from
many drawbacks such as the generation of spurious
oscillations.
An interesting improvement was suggested by Burg in the
context of spectral density estimation
(see for example~\cite{Priestley81,Haykin79}).
The heuristic idea is to keep some
of the low wave number terms while the remaining ones,
instead of being set to zero, are left as free parameters:
\begin{eqnarray}
\label{eq:DFT4}
\hat{f}(x) = 2 \pi \sum_{k=-\infty}^{+\infty}
\hat{\alpha}_k \; e^{-j x k} \\
\mbox{with} \ \ \hat{\alpha}_k = \hat{\phi}_k,
\ \ \ |k| \leq p \nonumber \ .
\end{eqnarray}
The parameters $\hat{\alpha}_k$, for $|k| > p$, are then fixed
self-consistently according to some criterion.
We make use of this freedom
to constrain the solution to a particular class of
estimates. Without
any prior information at hand, a reasonable choice is to
select the estimate that contains the least
possible information or is the most likely. It is therefore
natural to seek a maximization of an entropic quantity
such as the sample entropy
\begin{equation}
\hat{H} = - \int_{-\pi}^{+\pi}
\hat{f}(x) \log \hat{f}(x) \; {\rm d} x \ ,
\end{equation}
or the sample likelihood
\begin{equation}
\label{eq:likelihood}
\hat{L} = \int_{-\pi}^{+\pi}
\log \hat{f}(x) \; {\rm d} x \ .
\end{equation}
We are a priori inclined to choose the entropy
because our objective is the estimation of
the pdf and not that of the characteristic function.
However, numerical investigations done in the context of
spectral density estimation rather lend support to
the likelihood criterion \cite{Johnson84}.
A different and stronger motivation for preferring
a maximization of the likelihood comes from the
simplicity of the computational scheme it gives rise to.
This maximization means that the tail of the characteristic
function is chosen subject to the constraint
\begin{equation}
\label{eq:differential}
\frac{\partial \hat{L}}{\partial \hat{\alpha}_k} = 0,
\ \ \ |k|>p \ .
\end{equation}
From eqs.~\ref{eq:DFT4} and \ref{eq:likelihood} the
likelihood can be rewritten as
\begin{equation}
\label{eq:likelihood2}
\hat{L} = \int_{-\pi}^{+\pi} \log
\left( 2 \pi \sum_{k=-\infty}^{+\infty}
\hat{\alpha}_k \; e^{-j x k}
\right) \, {\rm d} x \ .
\end{equation}
As shown in the appendix, the likelihood is maximized
when the pdf can be expressed by the functional
\begin{equation}
\label{eq:ck}
\hat{f}_p(x) = \frac{1}{\sum_{k=-p}^{p} c_k e^{-jkx}} \ ,
\end{equation}
which is a particular case of a Pad\'e approximant
with poles only and no zeros \cite{Graves72}.
Requiring that $\hat{f}_p(x)$ is real and bounded,
it can be rewritten as
\begin{equation}
\label{eq:pdfestim}
\hat{f}_p(x) = \frac{\varepsilon_0}
{2 \pi} \, \frac{1}{\left|1 + a_1
e^{-jx} + \cdots + a_p
e^{-jpx} \right|^2} \ .
\end{equation}
The values of the coefficients $\{a_1,\ldots,a_p\}$
and of the normalization constant $\varepsilon_0$
are set by the condition that the Fourier transform
of $\hat{f}_p(x)$ must match the sample characteristic
function $\hat{\phi}_k$ for $|k| \le p$.
This solution has a number of remarkable properties,
some of which are deferred to the appendix. Foremost among
these are its positive definite character and the simple
relationship which links
the polynomial coefficients $\{a_1,\ldots, a_p\}$
to the characteristic function on which they
perform a regression. Indeed, we have
\begin{eqnarray}
\label{eq:ARmodel}
\hat{\phi}_k + a_1 \hat{\phi}_{k-1} +
a_2 \hat{\phi}_{k-2} + \cdots
+ a_p \hat{\phi}_{k-p} = &0&, \\
1\leq k\leq &p& \nonumber \ .
\end{eqnarray}
This can be cast in a set of Yule-Walker equations
whose unique solution contains the polynomial coefficients
\begin{equation}
\label{eq:Yule}
\left[ \begin{array}{cccc}
\hat{\phi}_0 & \hat{\phi}_{-1}
& \cdots & \hat{\phi}_{-p+1} \\
\hat{\phi}_1 & \hat{\phi}_0
& \cdots & \hat{\phi}_{-p+2} \\
\vdots & \vdots
& & \vdots \\
\hat{\phi}_{p-1}&\hat{\phi}_{p-2}
& \cdots & \hat{\phi}_0
\end{array} \right] \;
\left[ \begin{array}{c}
a_1 \\ a_2 \\ \vdots \\ a_p
\end{array} \right] = -
\left[ \begin{array}{c}
\hat{\phi}_1 \\ \hat{\phi}_2 \\
\vdots \\ \hat{\phi}_p
\end{array} \right] \ .
\end{equation}
Advantage can be taken here of the Toeplitz
structure of the matrix. The proper normalization
($\int_{-\pi}^{+\pi} \hat{f}(x) \, {\rm d} x = 1$) of the pdf is
ensured by the value of $\varepsilon_0$, which is given by
a variant of eq.~\ref{eq:ARmodel}
\begin{equation}
\hat{\phi}_0 + a_1 \hat{\phi}_{-1} +
a_2 \hat{\phi}_{-2} + \cdots
+ a_p \hat{\phi}_{-p} = \varepsilon_0 \ .
\end{equation}
Equations \ref{eq:pdfestim} and \ref{eq:Yule}
illustrate the simplicity of the method.
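As a sketch of the full recipe (our own illustration, not code from the paper), the Yule-Walker solve and the evaluation of the Padé form fit in a few lines of Python; SciPy's `solve_toeplitz` exploits the Toeplitz structure noted above:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def mle_pdf(phi, x):
    """Maximum likelihood density estimate of order p = len(phi) - 1.

    phi : complex array [phi_0, ..., phi_p] of characteristic function
          estimates (phi_{-k} = conj(phi_k) is implied by hermitianity).
    x   : grid of points in [-pi, pi] at which to evaluate the estimate.
    """
    p = len(phi) - 1
    # Yule-Walker system: the matrix entry (i, j) is phi_{i-j}, so its first
    # column is phi_0..phi_{p-1} and its first row is their conjugates.
    a = solve_toeplitz((phi[:p], np.conj(phi[:p])), -phi[1:])
    # Normalization: eps_0 = phi_0 + a_1 phi_{-1} + ... + a_p phi_{-p}.
    eps0 = (phi[0] + np.dot(a, np.conj(phi[1:]))).real
    # f_p(x) = eps_0 / (2 pi |1 + a_1 e^{-jx} + ... + a_p e^{-jpx}|^2).
    poly = 1.0 + np.exp(-1j * np.outer(x, np.arange(1, p + 1))) @ a
    return eps0 / (2.0 * np.pi * np.abs(poly) ** 2)
```

Since the model's Fourier coefficients match the sample's for $|k| \le p$, and $\hat{\phi}_0 = 1$ by construction, the resulting estimate integrates to one over $[-\pi, \pi]$.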
\section{Some properties}
\label{sec:properties}
A clear advantage of the method over
conventional series expansions
is the automatic positive definite character of
the pdf. Another asset is the close
resemblance with autoregressive or maximum entropy methods
that are nowadays widely used in the estimation of spectral
densities. Both methods have in common
the estimation of a positive function by means of
a Pad\'e approximant whose coefficients directly
issue from a regression (eq.~\ref{eq:ARmodel}). This
analogy allows us to exploit here some results previously
obtained in the framework of spectral analysis.
One of these concerns the statistical properties of the
maximum likelihood estimate. These properties are poorly known
because the nonlinearity of the problem impedes
any analytical treatment. The analogy with spectral densities,
however, reveals that the
estimates are asymptotically normally distributed
with a standard deviation \cite{Berk74,Parzen74}
\begin{equation}
\sigma_{\hat{f}} \propto \hat{f} \ .
\end{equation}
This scaling should be compared against that of
conventional kernel estimates, for which
\begin{equation}
\sigma_{\hat{f}} \propto \sqrt{\hat{f}} \ .
\end{equation}
The key point is that kernel estimates are relatively
less reliable in low density regions than in the bulk of the
distribution, whereas the relative uncertainty of maximum likelihood
estimates is essentially constant.
The latter property is obviously preferable when the tails
of the distribution must be investigated,
e.g. in the study of rare events.
Some comments are now in order. By choosing a Fourier series
expansion, we have implicitly assumed that the pdf was
$2 \pi$-periodic, which is not necessarily the case. Thus special
care is needed to enforce periodicity, since otherwise
wraparound may result \cite{Scargle89}.
The solution to this problem depends on how easily
the pdf can be extended periodically.
In most applications, the tails of the distribution progressively
decrease to zero, so periodicity may be enforced simply by artificially
padding the tails with a small interval in which the density vanishes.
We do this by rescaling the support from $[a,b]$
to an interval which is slightly smaller than $2 \pi$, say
$[-3, 3]$ \cite{Comment2}.
Once the Pad\'e approximant is known,
the $[-3, 3]$ interval is scaled back to $[a,b]$.
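This bookkeeping can be sketched as follows (our own helper functions, with the half-width 3 of the text as an assumed default); note the Jacobian factor the density picks up when it is mapped back to $[a,b]$:

```python
import numpy as np

def to_padded_support(x, a, b, c=3.0):
    """Affine map of the sample from [a, b] to [-c, c], with c slightly below
    pi, leaving empty tails in (-pi, -c) and (c, pi) that enforce periodicity.
    (c = 3 is the illustrative choice used in the text.)"""
    return (2.0 * np.asarray(x, dtype=float) - (a + b)) * c / (b - a)

def from_padded_support(f_scaled, a, b, c=3.0):
    """Scale a density estimated on [-c, c] back to the original support
    [a, b]: the density picks up the Jacobian factor 2c / (b - a)."""
    return f_scaled * 2.0 * c / (b - a)
```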
If there is no natural periodic extension to the pdf,
(for example if $f(a)$ strongly differs from $f(b)$)
then the choice of Fourier basis functions in
eq.\ \ref{eq:projection} becomes questionable
and, not surprisingly, the quality of the fit
degrades. Even in this case, however, the results
can still be improved by using ad hoc solutions
\cite{Comment3}.
We mentioned before that the maximum likelihood method
stands out for its computational simplicity. Indeed, a maximization
of the entropy would lead to the solution
\begin{equation}
\label{eq:pdfestim2}
\log \hat{f}_p(x) \propto
\sum_{k=-p}^{p} c_k e^{-jkx} \ ,
\end{equation}
whose numerical implementation requires
an iterative minimization and is therefore
considerably more demanding.
Finally, the computational cost is found to be
comparable to, or even lower (for large data sets) than, that of
conventional histogram estimates.
Most of the computation time goes into the
calculation of the characteristic function, for which
the number of operations scales as the sample size $n$.
\section{Choosing the order of the model}
\label{sec:order}
The larger the order $p$ of the model, the finer the
details in the pdf estimate. Finite-sample
effects, however, also increase with $p$.
It is therefore of
prime importance to find a compromise.
Conventional criteria for selecting the best compromise between
model complexity and quality of the fit,
such as the Final Prediction Error and the Minimum Description
Length \cite{Priestley81,Haykin79,Percival93} are
not applicable here because they require the series
of characteristic functions $\{ \phi_k \}$ to be normally
distributed, which they are not.
Guided by the way these empirical criteria have been chosen,
we have defined a new one, which is based on the
following observation: as $p$ increases starting from 0,
the pdfs $\hat{f}_p(x)$ progressively converge toward
a stationary shape; after some optimal order, however,
ripples appear and the shapes start diverging again.
It is therefore reasonable to compare the
pdfs pairwise and determine how close they are.
A natural measure of closeness between
two positive distributions $\hat{f}_p(x)$ and
$\hat{f}_{p+1}(x)$ is the Kullback-Leibler entropy
or information gain \cite{Kullback51,Beck93}
\begin{equation}
\hat{I}(\hat{f}_{p+1},\hat{f}_p) =
\int_{-\pi}^{+\pi} \, \hat{f}_{p+1}(x)
\, \log \frac{\hat{f}_{p+1}(x)}
{\hat{f}_p(x)} \; {\rm d} x \ ,
\end{equation}
which quantifies the amount of information gained
by changing the probability density describing
our sample from $\hat{f}_p$ to $\hat{f}_{p+1}$.
In other words,
if $H_p$ (or $H_{p+1}$) is the hypothesis that $x$
was selected from the population whose probability
density is $\hat{f}_p$ ($\hat{f}_{p+1}$), then
$\hat{I}(\hat{f}_{p+1},\hat{f}_p)$
is given as the mean information for discriminating
between $H_{p+1}$ and $H_{p}$ per observation
from $\hat{f}_{p+1}$ \ \cite{Kullback51}.
Notice that the information gain is not a distance
between distributions; it is nevertheless
nonnegative and vanishes if and only if
$\hat{f}_p \equiv \hat{f}_{p+1}$. We now proceed as
follows: starting from $p=0$, the order is
incremented until the information gain
reaches a clear minimum; this corresponds, as has
been checked numerically, to convergence toward
a stationary shape; the corresponding order
is then taken as the requested compromise. Clearly,
there is some arbitrariness in the definition of
such a minimum, since visual inspection and common
sense are needed. In most cases, however, the solution
is evident and the search can be automated.
Optimal orders usually range between
2 and 10; larger values may be needed to model
discontinuous or complex shaped densities.
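This selection rule can be automated once the estimates $\hat{f}_p$ have been evaluated on a common grid; the following is our own sketch, not the authors' code:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule on a (possibly non-uniform) grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def information_gain(f_new, f_old, grid):
    """Kullback-Leibler information gain I(f_new, f_old) between two
    densities sampled on a common grid."""
    return trapezoid(f_new * np.log(f_new / f_old), grid)

def select_order(fits, grid):
    """Given successive density estimates fits[p] for p = 0, 1, ..., return
    the order at which the pairwise gain I(f_{p+1}, f_p) is smallest,
    together with the list of gains (for visual inspection)."""
    gains = [information_gain(fits[p + 1], fits[p], grid)
             for p in range(len(fits) - 1)]
    return int(np.argmin(gains)) + 1, gains
```

Returning the full list of gains preserves the possibility of visual inspection advocated above, while `argmin` provides the automated choice.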
\section{Some examples}
\label{sec:examples}
Three examples are now given in order to illustrate
the limits and the advantages of the method.
\subsection{General properties}
First, we consider a normal
distribution with exponential tails
as often encountered in turbulent
wavefields.
We simulated a random sample with $n=2000$ elements
and the main results appear in Fig.~\ref{figpdf1}.
The information gain
(Fig.~\ref{figpdf1}b) decreases as
expected until it reaches a well defined
minimum at $p=7$, which therefore sets
the optimal order of our model. Since the true
pdf is known, we can test this result
against a common measure of the quality of
the fit, which is the Mean Integrated Squared Error
(MISE)
\begin{equation}
{\rm MISE}(p) = \int_a^b [ f(x)-\hat{f}_p(x) ]^2
{\rm d} x \ .
\end{equation}
The MISE, which is displayed in
Fig.~\ref{figpdf1}b, also reaches a minimum at
$p=8$ and thus supports the choice of the
information gain as a reliable indicator for
the best model. Tests carried out on other types of
distributions confirm this good agreement.
Now that the optimum pdf has been found, its
characteristic function can be computed and
compared with the measured one,
see Fig.~\ref{figpdf1}a. As expected,
the two characteristic functions coincide
for the $p$ lowest wave numbers (eq.~\ref{eq:ARmodel});
they diverge at higher wave numbers, for
which the model tries to extrapolate the characteristic
function self-consistently. The fast falloff of the
maximum likelihood estimate explains the
relatively smooth shape of the resulting pdf.
Finally, the quality of the pdf can be
visualized in Fig.~\ref{figpdf1}d,
which compares the measured pdf
with the true one, and an estimate
based on a histogram with 101 bins. An excellent
agreement is obtained, both in the bulk of the
distribution and in the tails, where the
exponential falloff is correctly reproduced.
This example illustrates the ability
of the method to get reliable estimates in regions
where standard histogram approaches have a lower
performance.
\subsection{Interpreting the characteristic function}
\label{subsec:interpretation}
The shape of the characteristic function in
Fig.~\ref{figpdf1}a is reminiscent of spectral
densities consisting of a low wave number (band-limited)
component embedded in broadband noise. A straightforward
calculation of the expectation of $|\phi_k|$ indeed
reveals the presence of a bias which is due to
the finite sample size
\begin{equation}
{\rm E} [ |\hat{\phi}_k| ] = |\phi_k| +
\frac{\gamma }{\sqrt{n}} \ ,
\end{equation}
where $\gamma$ depends on the degree of independence
between the samples in $\{x\}$.
This bias is
illustrated in Fig.~\ref{figpdf2} for independent
variables drawn from a normal distribution,
showing how the wave number resolution gradually
degrades as the sample size decreases. Incidentally,
a knowledge of the bias level could be used
to obtain confidence intervals for the pdf estimate. This
would be interesting insofar as no assumptions have
to be made about possible correlations in the data set.
We found this approach, however, to be too inaccurate
on average to be useful.
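The $1/\sqrt{n}$ noise floor is easy to reproduce numerically (our own sketch, assuming i.i.d. uniform variates, for which the true $\phi_k$ vanishes for $k \neq 0$):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (200, 2000, 20000):
    x = rng.uniform(-np.pi, np.pi, n)   # flat pdf: true phi_k = 0 for k != 0
    k = np.arange(10, 30)               # high wave numbers carry pure noise
    phi = np.exp(1j * np.outer(k, x)).mean(axis=1)
    # |phi_hat_k| is then a walk of n unit phasors divided by n, so its mean
    # length scales as 1/sqrt(n); rescaled by sqrt(n) it is roughly constant.
    print(n, np.abs(phi).mean() * np.sqrt(n))
```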
The presence of a bias also gives an
indication of the smallest
scales (in terms of amplitude of $x$) one can reliably
distinguish in the pdf. For a set of 2000 samples
drawn from a normal distribution, for example,
components with wave numbers in excess of
$k=3$ are hidden by noise and hence the
smallest meaningful scales in the pdf are
of the order of $\delta x=0.33$.
These results could possibly be further
improved by Wiener filtering.
\subsection{Influence of the sample size}
To investigate the effect of the sample length $n$, we
now consider a bimodal distribution consisting of two
normal distributions with different means and standard
deviations. Such distributions are known to be difficult
to handle with kernel estimators.
Samples with respectively $n=200$, $n=2000$
and $n=20000$ elements were generated; their
characteristic functions and the resulting pdfs are
displayed in Fig.~\ref{figpdf3}. Clearly, finite
sample effects cannot be avoided for small samples
but the method nevertheless succeeds relatively
well in capturing the true pdf and in particular the
small peak associated with the narrow distribution. An
analysis of the MISE shows that it is
systematically lower for maximum likelihood
estimates than for standard histogram estimates,
supporting the former.
\subsection{A counterexample}
The previous examples gave relatively good results
because the true distributions were rather smooth.
Although such smooth distributions are
generic in most applications, it may be instructive to look at
a counterexample, in which the method fails.
Consider the distribution which
corresponds to a cut through an annulus
\begin{equation}
f(x) = \left\{ \begin{array}{cl}
\frac{1}{2} & 1 \leq |x| \leq 2 \\
0 & \mbox{elsewhere}
\end{array} \right. \ .
\end{equation}
A sample was generated with $n=2000$ elements
and the resulting information gains are shown
in Fig.~\ref{figpdf4}. There is an ambiguity
in the choice of the model order and indeed the
convergence of the pdf estimates toward the true
pdf is neither uniform nor in the mean.
Increasing the order improves the fit of the
discontinuity a little but also increases the
oscillatory behavior known as the
Gibbs phenomenon. This problem is related to the
fact that the pdf is discontinuous and hence
the characteristic function is not
absolutely summable.
Similar problems are routinely encountered in the
design of digital filters,
where steep responses cannot
be approximated with infinite impulse response filters
that have a limited number of poles
\cite{Oppenheim89}. The bad performance of the
maximum likelihood approach in this case also comes
from its inability to handle densities that vanish
over finite intervals. A maximization of the
entropy would be more appropriate here.
\section{Conclusion}
\label{sec:conclusion}
We have presented a parametric procedure for
estimating univariate densities using a positive definite
functional. The method proceeds by maximizing the
likelihood of the pdf subject to the constraint that the
characteristic functions of the sample and estimated pdfs
coincide for a given number of terms. Such
a global approach to the estimation of pdfs is in
contrast to the better known local methods (such as
non-parametric kernel methods) whose performance is poorer in
regions where there is a lack of statistics, such as the tails
of the distribution. This difference makes the maximum
likelihood method relevant for the analysis of short
records (with typically hundreds or thousands of samples).
Other advantages include a simple computational
procedure that can be tuned with
a single parameter. An entropy-based criterion has been
developed for selecting the latter.
The method works best with densities that are at least once
continuously differentiable and that can be extended periodically.
Indeed, the shortcomings of the method are essentially the same
as for autoregressive spectral estimates, which
give rise to the Gibbs phenomenon if the density is discontinuous.
The method can be extended to multivariate densities,
but the computational procedures are not yet within the
realm of practical usage.
Its numerous analogies with the design of digital filters
suggest that it is still open to improvements.
\acknowledgments
We gratefully acknowledge the dynamical systems team at the
Centre de Physique Th\'eorique for many
discussions as well as D. Lagoutte and B. Torr\'esani for
making comments on the manuscript. E. Floriani acknowledges
support by the EC under grant \mbox{nr. ERBFMBICT960891}.
\section{Introduction}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figures/diff_md.pdf}
\caption{\small Performance of fixed mechanisms paired with adaptive participants in the 12 matrix games we consider. The horizontal axis shows the learning steps of the participant agent, while the vertical axis shows the return collected by the fixed mechanism agent. Our IO-loop agent, here trained with Diff-MD, tracks the best alternative strategy in all games but the Prisoner's Dilemma, where it significantly outperforms it. Plots are produced averaging mechanism returns over 5 random seeds.}
\label{fig:diff-md}
\end{figure*}
Modern institutions often serve two distinct and equally important roles in our society. First, they mediate and foster economic or social interactions among citizens (e.g. taxation policies ensure governments receive enough funds to build roads and schools). Second, they foster behaviors that bring us closer to our aspirations as a society (e.g. charitable donations are tax-free). As artificial learning agents mediate more and more interactions among humans, firms, and organizations, it is paramount that we study how to construct adaptive systems that can fulfil both roles.
However, while multiagent learning has received considerable attention in recent years, the standard evaluation scheme pairs our artificial agents with other fixed, and potentially adversarial co-players (e.g. exploitability)~\shortcite{vinyals2019grandmaster,muller2019generalized,goodfellow2014generative}. While this evaluation scheme has merit, it fails to capture the dynamics faced by modern institutions that are often paired with learning constituents, and where agents must take into account not only what other agents will do next, but also, in the long run, how they will adapt to the current strategies present in the system.
Here we address this shortcoming and construct low-exploitability agents that do well when paired with learning co-players in the general-sum setting. We construct players that, through their behavior, are able to influence what others will learn to do, and explicitly leverage the link between one agent's actions and another agent's learning trajectory. In other words, we construct agents (``mechanisms'') that learn to act so as to shepherd participants' strategies both at equilibrium and during learning.
Our proposed method takes the form of an inner-outer loop learning process. In the inner loop, participant agents respond to a fixed mechanism strategy, while in the outer-loop our mechanism agent adapts its policy based on experience. Unlike previous work, our mechanisms make very few assumptions on the preferred strategies, outcomes, or learning capabilities of the participants, and only have access to the consequences of their own behavior on the learning of others.
We investigate the performance of our mechanism agents with both artificial and human co-players in simple 2-player 2-strategy repeated games, and in a stylized resource allocation problem. We study how our method can be adapted when mechanisms are granted access to the inner workings of participants, and when that is not the case.
Our results show that our mechanism agents successfully shepherd the learning of others towards desirable outcomes, that the direction presented here is promising for agent-agent interactions, and that it withstands a transfer to the agent-human interaction setting.
In the broader context of AI in modern day institutions, our methods and ideas show that adaptive agents can successfully shepherd the learning of their co-players towards desirable outcomes and behavior, opening the door to learning-based institutions that fine-tune the incentives faced by their constituents in pursuit of group level goals.
\section{Related work}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figures/es_md.pdf}
\caption{\small Performance of fixed mechanisms paired with adaptive participants in the 12 matrix games we consider. The horizontal axis shows the learning steps of the participant agent, while the vertical axis shows the return collected by the fixed mechanism agent. Our IO-loop agent, here trained with ES-MD, tracks the best alternative strategy in all games but the Prisoner's Dilemma, where it significantly outperforms it. Plots are produced averaging mechanism returns over 5 random seeds.}
\label{fig:es-md}
\end{figure*}
The inner-outer loop method that we propose here provides insights into two challenges for multiagent reinforcement learning.
The first challenge is the non-stationarity of the environment. When training multiple policies simultaneously, the environment is non-stationary from the point of view of any agent due to the change in the other players' policies. Common approaches to mitigate this non-stationarity involve building populations of agents~\shortcite{brown1951iterative,muller2019generalized,vinyals2019grandmaster} or exploiting knowledge of the learning dynamics of others~\shortcite{balduzzi2018mechanics,hemmat2020lead}. Both these approaches have often focused on competitive zero-sum games.
Here we focus on a setting where the environment is always stationary from the point of view of all agents, since we do not update policies concurrently in the same training loop.
Second is the challenge of \textit{equilibrium selection}. Generally, and particularly in non-zero-sum games, multiple Nash Equilibria may exist. This leads to the problem of both finding and selecting among possibly unequal equilibria, with the goal of a) biasing learning towards outcomes preferred by one agent (shaping), or b) generalizing with unseen co-players. Progress towards this has been made in recent years through \textit{centralized learning with decentralized execution}, a framework for multiagent RL where agents can exploit privileged knowledge about other agents during training, but not at time of deployment. Centralized learning can be useful for equilibrium selection: for example, access to a centralized value function provides a recipe to construct agents that are able to coordinate at execution in cooperative settings~\shortcite{sunehag2017value}; coupled training of multiple agents can improve learning of communication protocols~\shortcite{foerster2016learning,foerster2019bayesian}; learning from interactions with agents at different stages of training can improve generalization at evaluation with human participants~\shortcite{strouse2022collaborating}; and exploiting information about how other agents update their behaviour can be used to shape them, both within an episode~\shortcite{lerer2018maintaining,peysakhovich2018consequentialist} and across training~\shortcite{foerster2017learning,yang2020learning}.
In our method, we do not separate training from execution. We make no assumptions about the learning rule of the co-players, or how they will adapt to the strategies currently present in the system. Instead, we infer the relationship between the mechanism's actions and the participant's learning directly from the observed interactions.
\section{Methods}
We consider the problem of constructing agents (``mechanisms'') that shepherd the learning of others (``participants''). We use repeated symmetric two-player, two-strategy games as our initial test-bed as they are easy to analyze and train on. We then move on to a simple resource allocation game. We start by assuming that the mechanism agent has access to the inner workings of participant agents in Differentiable mechanism design, and then extend our methods to remove this assumption using Evolutionary Strategies~\shortcite{salimans2017evolution}.
\subsection{Environments}
We tackle iterated 2-player, 2-strategy symmetric matrix games, and a simple resource allocation game with one mechanism agent and four participant agents.
\subsubsection{Iterated matrix games}
\begin{figure*}[ht]
\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pgg.pdf}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pgg_human.pdf}
\end{subfigure}
\caption{\small Performance of fixed mechanisms paired with artificial adaptive participants (left) or human participants (right) in the resource allocation game we consider. The horizontal axis shows the learning steps of the participant agents (independent learning), while the vertical axis shows the return collected by the fixed mechanism agent. Our IO-loop agent, here trained with Diff-MD, outperforms alternative mechanisms proposed in the economics literature for this game with both artificial learning and human co-players. The left panel was produced by averaging mechanism returns over 50 random seeds, while the right panel shows the average return over each experiment; shaded areas indicate standard error.}
\label{fig:pgg}
\end{figure*}
We consider the 12 symmetric 2-player, 2-strategies matrix games identified in~\shortcite{wiki:Normal-form_game,robinson2005topology}, with payouts scaled down by 4 (our payouts are between -3 and 0), and consider iterated interactions between a mechanism (row) player and a participant (column) player with a single memory step. Our naming convention is most easily followed when focusing on the Prisoner's Dilemma game.
For each game we define a Markov Decision Process with state space $\mathcal{S} = (s_0, CC, CD, DC, DD)$, where $s_0$ is the initial state and the remaining states record the joint action taken in the previous step (e.g. cooperate, cooperate; cooperate, defect; and so on). Our agents' one-memory policies can be represented by a 5-tuple $\theta$ corresponding to the probability of cooperating in each state. In these simple repeated games, the transition kernel $\mathcal{T}$ takes the form of a matrix whose entries describe the probability of the next state as a function of the previous state, and can be derived analytically from the one-memory policy parameters of both players. The reward functions are specified on arrival at each state as $r_m = (0, r_{R}, r_{S}, r_{T}, r_{P})$ and $r_p =(0, r_{R}, r_{T}, r_{S}, r_{P})$, where $r_{P}$, $r_{R}$, $r_{S}$, $r_{T}$ correspond to the punishment, reward, sucker and temptation payoffs respectively. The returns $R_m$ and $R_p$ that the mechanism and participant aim to maximize correspond to the state value of the initial state $s_0$.
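To make this concrete, the following is a minimal sketch (ours, not the paper's code) of the transition matrix and the state value of $s_0$ for one-memory policies; the discount factor $\delta$ and the specific payoff values are illustrative assumptions, since the text does not state them.

```python
import numpy as np

# One-memory policies: probability of cooperating in each of the five
# states (s0, CC, CD, DC, DD); CD/DC are ordered (mechanism, participant).
def transition_matrix(theta_m, theta_p):
    T = np.zeros((5, 5))
    for s in range(5):
        pm, pp = theta_m[s], theta_p[s]
        T[s, 1] = pm * pp              # -> CC
        T[s, 2] = pm * (1 - pp)        # -> CD
        T[s, 3] = (1 - pm) * pp        # -> DC
        T[s, 4] = (1 - pm) * (1 - pp)  # -> DD
    return T

def state_value(theta_m, theta_p, r, delta=0.96):
    """Discounted return from s0, with rewards paid on arrival at a state:
    V = T r + delta * T V, solved as a linear system."""
    T = transition_matrix(theta_m, theta_p)
    V = np.linalg.solve(np.eye(5) - delta * T, T @ r)
    return V[0]

# Illustrative Prisoner's Dilemma payoffs in [-3, 0]: (s0, R, S, T, P)
r_m = np.array([0.0, -1.0, -3.0, 0.0, -2.0])   # row (mechanism) player
tft = np.array([1.0, 1.0, 1.0, 0.0, 0.0])      # participant Tit-for-Tat
all_d = np.zeros(5)                             # always defect
# One temptation payoff, then mutual defection forever (about -48 here)
v = state_value(all_d, tft, r_m)
```

Against Tit-for-Tat, always-defect collects the temptation payoff once and the punishment payoff thereafter, which is why one-memory strategies suffice to express retaliation in these games.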
\subsubsection{Resource Allocation game}
We further consider a modification of the classic Public Goods Game (as described in~\shortcite{koster2022humancentered}). The game consists of a single interaction between four participants $i = 1,2,3,4$ and a single mechanism agent. Each participant receives an endowment $e_i$ and allocates a fraction $\rho_i$ of it to a common investment pool, which is then grown by a fixed constant factor ($1.6$) and redistributed to participants in full. The specific amount received by each participant $i$ is denoted as $p_i$ and is determined by the mechanism. Participants seek to maximize their individual welfare (i.e. $R_{p_i} = p_i + (1-\rho_i)e_i$), while the mechanism seeks to maximize total participant welfare (i.e. $R_m=\frac{1}{N}\sum_i^N R_{p_i}$, with $N$ the number of participants). When interacting with naive or poorly designed redistribution mechanisms, the incentive to free-ride may tempt each player away from contributing to the common pool, which in turn decreases total welfare.
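The welfare computations above can be sketched as follows (function and variable names are ours, and the uniform payout is just one example mechanism):

```python
import numpy as np

def pgg_returns(e, rho, p, growth=1.6):
    """Welfare for the resource allocation game sketched above.

    e   : endowments e_i
    rho : fractions rho_i contributed to the common pool
    p   : payouts p_i chosen by the mechanism; the grown pool must be
          redistributed in full, i.e. sum(p) == growth * sum(rho * e)
    """
    e, rho, p = map(np.asarray, (e, rho, p))
    pool = growth * np.sum(rho * e)
    assert np.isclose(np.sum(p), pool), "pool must be redistributed in full"
    R_p = p + (1.0 - rho) * e       # individual participant welfare
    R_m = float(np.mean(R_p))       # mechanism return: average welfare
    return R_m, R_p

# Uniform mechanism with the unequal endowments used in the human study
e = np.array([1.0, 0.5, 0.4, 0.3])
rho = np.ones(4)                    # everyone contributes fully
pool = 1.6 * np.sum(rho * e)        # 1.6 * 2.2 = 3.52
R_m, R_p = pgg_returns(e, rho, np.full(4, pool / 4))
```

With full contributions and a uniform split, every participant receives the same payout regardless of endowment, which illustrates why redistribution design matters for the free-riding incentive.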
\subsection{The Inner-Outer Loop Algorithm}
\begin{algorithm}[ht]
\begin{algorithmic}
\Require $\mathcal{MDP}, T_m, T_p, \mathfrak{R}, \theta_m^0 ,\gamma_m, \gamma_p$
\For{$t_{m} \text{ in } 0 :: T_{m}$}
\State $\theta_p^0 \sim \mathfrak{R};\quad \bar{R}_m \gets 0$
\For{$t_{p} \text{ in } 0 :: T_{p}$}
\State $(R_m, R_p) \gets \mathcal{MDP}(\theta_m^{t_m}, \theta_p^{t_p})$
\State $\bar{R}_m \gets \bar{R}_m + R_m$
\State $\theta_p^{t_p+1}\gets \theta_p^{t_p} + \gamma_p\nabla_{\theta_p^{t_p}} R_p$
\EndFor
\State $\theta_m^{t_m+1} \gets \theta_m^{t_m} + \gamma_m\nabla_{\theta_m^{t_m}} \bar{R}_m$
\EndFor
\end{algorithmic}
\caption{\small Inner-outer loop algorithm. Given an underlying game function ($\mathcal{MDP}$) that takes policy parameters $\theta_m$ and $\theta_p$ for the mechanism and participants respectively and computes their returns $R_m$ and $R_p$, a random participant parameter generator $\mathfrak{R}$, and initial mechanism parameters $\theta_m^0$, this algorithm produces a policy $\theta_m^{T_m}$ for the mechanism player.}\label{algo:ioloop}
\end{algorithm}
Our learning process takes the form of an inner-outer loop that exposes the mechanism to the consequences of its actions on the learning of others. In the inner loop, participant agents repeatedly interact with a fixed mechanism, and use independent gradient ascent to improve their own policies. In the outer loop, mechanism agents update their strategies based on the experience they acquired in the inner loop.
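A minimal sketch of this inner-outer loop on a toy differentiable game, with finite-difference gradients standing in for the automatic differentiation used in Diff-MD; the quadratic game and all names here are illustrative assumptions, not the paper's environments:

```python
import numpy as np

def run_inner_loop(mdp, theta_m, theta_p0, T_p, gamma_p, eps=1e-5):
    """Inner loop: the participant performs independent gradient ascent on
    its own return against a fixed mechanism, while the mechanism
    accumulates its return over the participant's learning."""
    theta_p, R_bar = theta_p0.copy(), 0.0
    for _ in range(T_p):
        R_m, _ = mdp(theta_m, theta_p)
        R_bar += R_m
        # participant gradient step (finite differences stand in for autodiff)
        g = np.array([(mdp(theta_m, theta_p + eps * e)[1]
                       - mdp(theta_m, theta_p - eps * e)[1]) / (2 * eps)
                      for e in np.eye(len(theta_p))])
        theta_p = theta_p + gamma_p * g
    return R_bar

def outer_loop(mdp, theta_m0, T_m, T_p, gamma_m, gamma_p, rng, eps=1e-4):
    """Outer loop: ascend the return accumulated over a whole inner loop,
    i.e. differentiate through the participant's learning."""
    theta_m = theta_m0.copy()
    for _ in range(T_m):
        theta_p0 = rng.uniform(size=theta_m.shape)   # theta_p^0 ~ R
        g = np.array([(run_inner_loop(mdp, theta_m + eps * e, theta_p0, T_p, gamma_p)
                       - run_inner_loop(mdp, theta_m - eps * e, theta_p0, T_p, gamma_p))
                      / (2 * eps)
                      for e in np.eye(len(theta_m))])
        theta_m = theta_m + gamma_m * g
    return theta_m

# Toy differentiable "game": the participant is drawn towards 1, and the
# mechanism is rewarded for matching the participant's strategy.
toy_mdp = lambda tm, tp: (-float(np.sum((tm - tp) ** 2)),
                          -float(np.sum((tp - 1.0) ** 2)))
theta = outer_loop(toy_mdp, np.zeros(1), T_m=100, T_p=10,
                   gamma_m=0.01, gamma_p=0.3, rng=np.random.default_rng(0))
```

In this toy, the mechanism learns to anticipate where the participant's learning will take it, rather than just best-responding to the participant's initial strategy.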
\subsubsection{Differentiable mechanism design}
We first consider the case where the mechanism agent can directly compute the gradient of its return $\bar{R}_m$ with respect to its policy parameters $\theta_m$. Inspecting Algorithm~\ref{algo:ioloop} we note that, at any given $\theta_m^{t_m}$ update, the mechanism return $\bar{R}_m$ depends on the mechanism parameters $\theta_m^{t_m}$, as well as on the entire trajectory of participant parameters over the inner loop, $\theta_p^0,\ldots,\theta_p^{T_p}$. In Differentiable mechanism design (Diff-MD), we let the mechanism update have gradient access to the entire trajectory, as well as to the environment transition kernel $\mathcal{T}$.
In practice, this can easily be implemented using a tensor library with auto-differentiation (such as JAX~\shortcite{jax2018github}).
\subsubsection{Evolutionary strategies for non-differentiable mechanism design}
When the mechanism agent cannot take derivatives through the environment, we use evolutionary strategies (ES). In this case, the inner loop is repeated $N_p$ times so as to form an experience ``batch'', with the mechanism parameters slightly perturbed at the beginning of each inner loop ($t_p=0$) as $\theta_{m}^{p} = \theta_{m} + \epsilon_p$, where $\epsilon_p \sim \mathcal{N}(0,\sigma_{m}^{2})$ with $\sigma_m$ a hyper-parameter ($1$ in our experiments). Given an experience batch, the mechanism policy gradient, estimated as $\nabla_{\theta_{m}} \bar{R}_m \approx \sum_{p=1}^{N_p} \frac{\epsilon_{p} \bar{R}_m^{p}}{N_{p} \sigma_{m}}$ (with $\bar{R}_m^{p}$ the return accumulated under the $p$-th perturbation), moves the mechanism parameters in the direction of those used in episodes that led to positive outcomes (see~\shortcite{salimans2017evolution} for details). We refer to this method as ES-MD.
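One ES-MD outer step can be sketched as follows; the stand-in inner-loop return replaces the actual environment, and the function names and default hyper-parameters are ours:

```python
import numpy as np

def es_update(theta_m, inner_loop_return, N_p=256, sigma_m=1.0, gamma_m=0.05,
              rng=np.random.default_rng(0)):
    """One ES-MD outer step (in the style of Salimans et al., 2017): perturb
    the mechanism parameters, run a full inner loop per perturbation, and
    move theta_m towards perturbations whose episodes scored well."""
    eps = rng.normal(scale=sigma_m, size=(N_p,) + theta_m.shape)
    R_bar = np.array([inner_loop_return(theta_m + e) for e in eps])
    grad_est = (eps * R_bar[:, None]).sum(axis=0) / (N_p * sigma_m)
    return theta_m + gamma_m * grad_est

# Stand-in for the accumulated inner-loop return: concave with maximum at 1
toy_return = lambda t: -float(np.sum((t - 1.0) ** 2))
theta = es_update(np.zeros(1), toy_return)   # moves towards the optimum
```

Because only scalar returns are needed, this estimator works even when the inner loop contains non-differentiable components such as discrete environment dynamics or human participants.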
\subsection{Learning with Opponent-Learning Awareness (LOLA)}
We implemented LOLA~\shortcite{foerster2017learning} as a baseline in our experiments. In LOLA, the mechanism agent projects the learning of the participants forward in time. In contrast to the original paper, in which both agents are assumed to be using LOLA, here we only let the mechanism agent be learning-aware.
\section{Results}
Here we show how a trained mechanism performs when paired with learning participants. We report the return collected by a (fixed) mechanism in each episode over the learning trajectory of its co-players. Figures~\ref{fig:diff-md} and~\ref{fig:pgg} show how mechanisms trained with Diff-MD perform in the 12 matrix games and resource allocation game respectively, while Figure~\ref{fig:es-md} shows the performance of a mechanism trained with ES-MD in the 12 matrix games we consider.
\subsection{Matrix games}
In the matrix games we compare our mechanism (labelled as IO-Loop in the legends) to well-known one-memory strategies (e.g. Tit-for-Tat) and pure strategies (e.g. Selfish). Additionally, in Diff-MD we also compare against a mechanism trained with LOLA. In all games, and for both Diff-MD and ES-MD, our IO-loop mechanism achieves the same performance as the best alternative available strategy, with the exception of the Prisoner's Dilemma, where it significantly outperforms it. We used the following hyper-parameters for both experiments: $T_m=10000, T_p=50, \gamma_m=0.1, \gamma_p=10,\theta_m^0=[0.5, 0.5, 0.5, 0.5, 0.5]$; in the ES-MD experiment we further set $N_p=256$ and $\sigma_m=1$.
\subsection{Resource allocation game}
In the resource allocation game we report mechanism performance when paired both with artificial learning agents and with human co-players. In particular, in Fig.~\ref{fig:pgg}, we consider unequal endowments with $e_i \in [0.2,1.0]$, and compare our mechanism with four alternative redistribution strategies: Absolute Proportional and Relative Proportional redistribute funds proportionally to the absolute contribution $\rho_i e_i$ or to the fraction of endowment $\rho_i$ contributed by each participant, while the Uniform and Random mechanisms redistribute the funds equally and randomly, respectively. Figure~\ref{fig:pgg} shows that Diff-MD finds a mechanism policy that shepherds the participants towards higher-welfare outcomes. In this experiment we represented each participant's policy as their propensity to contribute to the public fund, $\theta_{p_i}=\rho_i$, and the mechanism policy as an MLP with a single 32-unit hidden layer. We further set $T_m=5000, T_p=10, \gamma_m=0.01, \gamma_p=0.1$, and $\theta_m^0$ to the default MLP initialization.
\subsection{Evaluation with human co-players}
Figure~\ref{fig:pgg} (right) shows the performance of our mechanism, and alternative baselines, when paired with human co-players in the stylized resource allocation game outlined above (endowment condition for the 4 players: $[1.0, 0.5, 0.4, 0.3]$). We used crowd-sourcing platforms to collect data, and all participants gave informed consent to participate in the experiment. Participants were organized in groups of 4 and, after a tutorial phase, they played the resource allocation game outlined above, completing the 10 steps constituting our ``inner loop''. The tutorial round explained the mechanics of the game, instructed participants on how to use the web interface, and outlined how participants would be rewarded in real money: participants received a base compensation for completing the experiment, and a bonus proportional to their aggregate return over the course of the experiment. After one game with the Uniform mechanism, each group of participants interacted with two mechanisms, either the one resulting from our training or one of the baseline mechanisms outlined above, in counterbalanced order. We collected data from 236 non-overlapping groups. If a participant dropped out during the experiment, their actions were replaced with random actions, which were subsequently removed in the analysis ($39\%$ of responses were removed this way). The results presented in the right-hand panel of Figure~\ref{fig:pgg} show that our mechanism could withstand a basic transfer to interacting with human co-players, and that its performance remained consistent with what we observed in simulation.
\section{Conclusion}
We have shown here that our inner-outer loop algorithm can provide an oracle-style benchmark to test agents' ability to shepherd the behavior of learning co-players to desired outcomes, and that in simple environments, our agents transfer to human co-players.
As more and more of the systems we use and deploy become adaptive, it becomes increasingly important to 1) construct agents that can plan and act taking into account the fact that others are learning around them, and 2) construct agents that can shape the incentives faced by co-players in pursuit of group-wide objectives.
The ideas and results presented here show that exposing agents to the consequences of their actions on the learning of others is a sensible first step toward these goals. Moreover, the transfer to human co-players we were able to showcase suggests that our method contains the basic elements required to design adaptive institutions that can fulfill their basic ``mechanical'' mediation function in society, as well as shepherd their constituents towards more desirable strategies and behaviors.
\newpage
\bibliographystyle{plain}
\section{Introduction}
Since the late 1970s, it has been suggested that the Milky Way's halo
has been assembled from the merging and accretion of many smaller
systems \citep[e.g.][]{Searle78}. In this paradigm, it has long been
assumed that the Galaxy's satellites are remnants of this population of
building blocks. However, first analyses of the chemical abundances
of the stellar populations in the surviving dwarf spheroidal galaxies
(dSphs) appeared inconsistent with this idea
{ \citep[e.g.,][]{Shetrone01,Venn04}}. In particular, the metal-poor tail of the
dSph metallicity distribution seemed significantly different from that
of the Galactic halo \citep{Helmi06}. Since then, extremely
metal-poor stars have been discovered in Galactic satellite
galaxies such as Sculptor \citep[e.g.:][]{Frebel10}. Both the
classical and the ultra-faint dSphs are now known to have a wide
abundance dispersion and to host some of the most extreme metal-poor
stars known. In fact, $\sim$ 30\% of the known stars with [Fe/H]
$\leq -3.5$ are found in dwarf galaxies
\citep[e.g.][]{Kirby10,Kirby12,Frebel10a}.
The connection between the surviving dwarfs and those that dissolved
to form the halo can partially be addressed by examining in detail the
stellar kinematics and chemical compositions of present-day dwarf
galaxies. Establishing the detailed chemical histories of these
systems can provide constraints on their dominant enrichment events
and timescales. In addition, modelling of the kinematics can yield
powerful information on the dark matter profile, especially in the
case of multiple populations \citep[e.g.:][]{Walker11, Amorisco12a}.
A multitude of independent studies have addressed the evolutionary
complexities of dwarf spheroidal galaxies in the Local Group.
Since \citet{Harbeck01}, stellar population gradients have been observed
in many satellites of both the Milky Way and Andromeda -- for example
Sculptor \citep{Tolstoy04}, Fornax \citep{Battaglia06}, Sextans \citep{Lee03},
Tucana \citep{Harbeck01}, Draco \citep{Faria07}, and And II \citep{Ho12}.
Whether such gradients are ubiquitous remains an open question, as
dwarfs like Leo I, And I, And III and Carina seem to be characterised
by more homogeneous stellar populations, with milder radial
dependences \citep{Harbeck01, Koch06, Koch07}.
A much closer look into specific dwarfs is allowed by large
spectroscopic datasets. For example, in Sculptor, Fornax and Sextans,
dedicated spectroscopic studies have allowed the identification of
independent stellar subpopulations, with chemo-dynamically distinct
properties { \citep{Battaglia06, Battaglia08, Battaglia11, Walker11,
Amorisco12b, Hendricks14}}. By providing a bridge between chemical and kinematical
properties of several hundreds of bright stars, chemo-dynamical
analyses yield important constraints on the complex processes that
govern the formation and evolution of such puny galaxies. For
instance, subpopulations with a higher average metallicity are more
centrally concentrated and kinematically colder. Whether the dwarf is
characterised exclusively by old stellar populations -- as in the case
of Sculptor \citep[see e.g.][]{deBoer12a} -- or whether multiple
populations of different ages are present at the same time -- as in
Fornax \citep[e.g.,][]{deBoer12b} -- the parallel {\it ordering} of
metallicity, characteristic scale of the stellar distribution and
kinematical state seems to be a fundamental outcome of the dSphs'
evolutionary histories.
The importance of Carina is that it provides one of the cleanest
examples of an episodic star formation (SF) history. This is evident
from its colour-magnitude diagram \citep{Tolstoy09, Stetson11}, which clearly
shows at least three different main sequence turn-offs. A number of
investigations have inferred the SF history of Carina using
photometric data \citep[e.g.,][]{Hurleykeller98, Rizzi03, Dolphin05}, with evidence
for at least two major SF episodes, one at old times ($>8$ Gyr
ago), a second at intermediate ages ($4-6\Gyr$ ago), and perhaps continuing into
even more recent activity ($2\Gyr$ ago). Using both wide-field photometry and spectroscopic
data, \citet{deBoer14b} find that about $60\%$ of the stars in Carina
formed in the SF episode at intermediate ages. Interestingly, like the
oldest episode, the intermediate one also enriched its stars starting
from low metallicities.
Given such a complex and episodic SF history, one would expect to
detect similarly clear population gradients and chemo-dynamical
differences in Carina's stellar population. Based on deep multicolour
photometry, \citet{Harbeck01} find that red clump (RC) stars are indeed more
concentrated than the older horizontal branch stars. However, no
distinction in the spatial distribution of red and blue horizontal
branch stars can be found \citep[a gradient that is instead very clear
in Sculptor,][]{Tolstoy04}. More recently, \citet{Battaglia12} observe an age
gradient by selecting stellar populations of very different ages with
cuts in the colour magnitude diagram (CMD). This confirms the existence of
a marked age gradient over a baseline that is essentially as long as
the age of the universe.
However, the presence of any gradient within the population of old red
giant stars is substantially less clear, as is the existence of a
corresponding metallicity gradient within the same population. By
using a set of 437 radial velocity members, \citet{Koch06} measure a
very mild chemical gradient, with metal-poor stars having an only
slightly more extended spatial distribution. Analogously,
\citet{Walker11} could not identify any statistically significant
distinction (in either spatial distribution or kinematics) by looking
for two different subpopulations in a sample of more than 700 member
red giants (RGs).
Because of their shallow potential wells, the evolutionary histories
of dSphs are critically dependent on both properties before infall --
for instance virial mass and gas content -- and any subsequent
environmental factors, mainly driven by the details of their
orbits. In contrast to the wide orbits of Sculptor and Fornax,
Carina's proper motion \citep{Piatek03} suggests a pericenter of only a
few tens of $\kpc$ from the centre of the Milky Way, which makes it more
prone to tidal disturbances. Indeed, several investigations have
identified a component of `extratidal' stars around Carina
\citep[e.g.,][]{Majewski05, Munoz06}. Recent deep photometric studies
\citep{Battaglia12} suggest the presence of tenuous but extended
tails, and \citet{deBoer14b} has shown that these have compatible properties with Carina's metal poor
stars \citep[although see also][]{McMonigal14}. Interestingly, the perturbations arising from a tidal field
have been shown to be capable of systematically weakening chemical gradients,
by mixing any distinct stellar subpopulations { \citep{Sales10,Pasetto11}}.
In this paper, we present a new dataset that we use in order to bring
further constraints on Carina's history, based on intermediate
resolution spectra of 956 stars. We describe how the analysis of the
spectra has been done in order to derive the atmospheric parameters of
the stars, namely the effective temperature (T$_{\rm eff}$), surface gravity
($\log g$) and global metallicity (\hbox{[M/H]}), and how these parameters are
combined with high precision photometry to derive an age estimate for
the stars belonging to Carina. We find evidence for the presence of
three distinct stellar subpopulations, and show the age-metallicity
diagram of Carina. Finally we comment on the possible effects of
external disturbances.
The paper is organised as follows: in Sect.~\ref{sec:data} we describe
the datasets used. Section~\ref{sec:parameters} explains how the
atmospheric parameters have been derived and the ages have been
estimated. Finally, Sect.~\ref{sec:description} shows the results and
Sect.~\ref{sec:conclusions} concludes.
\section{Presentation of the data}
\label{sec:data}
The available data include $1.3 \times 10^4$ medium resolution spectra
of 956 stars from the ESO-VLT large programmes, 180.B-0806, 084.B-0836
(P.I.: G.\,Gilmore) and our VISTA $JHK_s$ photometry. In addition, the
photometric data is complemented by the high-precision optical $UBVRI$
photometry of \citet{Stetson11}.
The VLT datasets extend over 5 years, with some multiple observations
of the same stars. The targets have been selected in order to sample
the red clump and the giant branch of Carina, and have been observed
using the LR8 setup of the FLAMES-GIRAFFE instrument
($820.6-940.0\,{\rm nm}$). For the spectra reduction and sky subtraction, we
refer the reader to \citet{Koch06}, since the same method has been
applied here too. The radial velocities have been estimated using the
algorithm of \citet{Koposov11}, which delivers precise, accurate,
radial velocities from moderate resolution spectroscopy. The algorithm
works by fitting synthetic templates from \citet{Munari05} covering a
large range of stellar parameters ($-2.5 <$[Fe/H] $< 0.5$, 3000 $<$
T$_{\rm eff}$ $<$ 80000, 1.5 $<$ $\log g$ $< $5) to our spectra from each
exposure.
Once each exposure has been put in the rest frame, we have stacked the
multiple exposures corresponding to each star in order to increase the
signal-to-noise ratio (SNR) of the data. The final SNR spans values
from 2 to 50 per pixel. Figure~\ref{Fig:Vrad_histogram} shows the
radial velocity distribution of the total sample of 956\ stars
observed with the LR8 setup. From this, we can see that most of the
targeted stars peak at a common radial velocity, as expected. The
Gaussian fit of the histogram has a mean radial velocity of
$221.4\,{\rm km~s^{-1}}$ with a dispersion of $10\,{\rm km~s^{-1}}$, consistent with previous
determinations of the radial velocity of Carina star members ranging
from $220.4\,{\rm km~s^{-1}}$ to $224\,{\rm km~s^{-1}}$ and dispersions ranging from $6.8\,{\rm km~s^{-1}}$
to $11.7\,{\rm km~s^{-1}}$ \citep{Mateo98, Majewski05, Helmi06, Koch06, Fabrizio11,
Lemasle12}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Carina_Vrad}
\end{center}
\caption{Histogram of the radial velocities for the total sample of
stars observed at the LR8 setup (956~stars). The fitted
Gaussian is plotted in red, and the fitting parameters are written
on the top left corner of the figure. The mean value and dispersion
of the radial velocity of Carina are in good agreement with the
literature values.}
\label{Fig:Vrad_histogram}
\end{figure}
\section{Extraction of the atmospheric parameters and selection of the Carina subsample}
\label{sec:parameters}
Once the spectra are stacked, we used an updated version of the
automatic parameterisation pipeline presented in \citet{Kordopatis11a}
to obtain the effective temperature, surface gravity and metallicity
of the targets. The pipeline allows us to apply soft priors according
to the observed selection function, by removing from the solution
space combinations of parameters that are not expected to be found.
The method is based on a grid of synthetic spectra used during the
learning phase of the algorithm. Only the wavelength regions
$840-877.5\,{\rm nm}$ and $880.1-882.0\,{\rm nm}$ are selected for the
parameterisation, the discontinuity being introduced to avoid strong
contamination by telluric lines { and to keep within our wavelength range the MgI line at 8807\AA, which is known to be sensitive to surface gravity variations} \citep[see][]{Kordopatis11b}.
{ Furthermore, the cores of the Calcium triplet lines are removed from the wavelength region (two pixels for the first line, and three for the other two lines, corresponding to 0.8 and 1.2\AA, respectively), in order to avoid a mis-match between the synthetic spectra and the observations, due to non local thermodynamical equilibrium effects.}
The learning grid has a constant metallicity step of $0.25\,{\rm dex}$, and
spans effective temperatures from $[3000-8000]$\,K, surface gravities
from $[0-5]$($\,{\rm cgs}$ units) and metallicities from
$[-5.0,+1.0]$. Finally, the $\alpha-$enhancement of the considered
templates is not a free parameter, but it varies with metallicity
($[\alpha/$Fe$]=-0.4\times {\rm {[Fe/H]}}$ in the range $-1\leq {\rm {[Fe/H]}} \leq0$).
{ We note that this adopted $\alpha$-enhancement has no consequences for the determination of \hbox{[M/H]}\, in the case where a star does not follow the same trend \citep[as is expected to be the case for Carina dSph stars, see][]{Venn12}. In such a case the measurement of $\hbox{[M/H]}$\, will still be sound, but \hbox{[M/H]}$\ne$[Fe/H] \citep[see][for further details]{Kordopatis13b}.}
We used the calibration relation established for RAVE DR4 \citep[][see
also \citealt{Kordopatis15a}]{Kordopatis13b}, which employs the same
grid of synthetic spectra on a very similar wavelength range, to
correct the metallicities of the pipeline. The calibration is a simple
low-order polynomial of two variables, the surface gravity and the
metallicity itself, and roughly corrects the metallicity of the giants
by $\sim0.3\,{\rm dex}$ and the one of the dwarfs by $\sim 0.1\,{\rm dex}$. The
adopted relation, giving the calibrated metallicity,
\hbox{[M/H]}$_c$, in terms of the one derived by the pipeline, \hbox{[M/H]}$_p$, is:
\begin{small}
\begin{equation}
\begin{split}
\hbox{[M/H]}_{c} - \hbox{[M/H]}_{p}=-0.076 - 0.006 \times \log g + 0.003\times\log^2g \\
- 0.021\times \hbox{[M/H]}_{p}\times\log g
+0.582\times \hbox{[M/H]}_{p}+0.205\times \hbox{[M/H]}_{p}^2.
\end{split}
\end{equation}
\end{small}
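The calibration above can be applied directly; the sketch below (function name ours) transcribes the correction term by term:

```python
def calibrate_metallicity(mh_p, logg):
    """Calibrated [M/H] from the pipeline value [M/H]_p and log g,
    transcribing the polynomial correction given above."""
    correction = (-0.076 - 0.006 * logg + 0.003 * logg ** 2
                  - 0.021 * mh_p * logg
                  + 0.582 * mh_p + 0.205 * mh_p ** 2)
    return mh_p + correction

# Example: a giant with log g = 1.5 and pipeline [M/H]_p = -2.0
mh_c = calibrate_metallicity(-2.0, 1.5)   # shifted by roughly -0.36 dex
```

For metal-poor giants the quadratic terms dominate, which is why the typical correction for giants ($\sim0.3$ dex) is larger than for dwarfs.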
In what follows, only the corrected metallicities will be used, and
therefore we will note the calibrated metallicities simply as [M/H].
As noted at the beginning of the Section, soft priors have been
imposed on the expected results, by removing some parameter
combinations from the solution space. For the adopted priors, we
imposed:
\begin{itemize}
\item
an effective temperature between $4000-6500$~K,
\item
a surface gravity lower than $3.75$,
\item
a metallicity range between $-5$ and $+1$ (i.e. all the available metallicity range of the templates).
\end{itemize}
The temperature range can be justified by the colour selection of
Carina's candidates ($0.5<B-V<1.2$, see Fig.\,\ref{Fig:CMD}). As far as the assumption on the
surface gravity is concerned, the reason lies in our {\it a priori }
knowledge of the properties of the observed stars. Indeed, given the
distance modulus of Carina \citep[$m-M\sim20.1$, e.g.][]{Dallora03},
all of Carina's main sequence stars should be outside the observed
magnitude range. In other words, all the stars belonging to Carina
are expected to be either red clump stars or on the red giant
branch, hence with a $\log g$\ lower than 3.5.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Carina_CMD_BV}
\end{center}
\caption{Colour-magnitude diagram of the final adopted Carina sample
(731\ members). The RC is at $(B-V)<0.7$ and $V>20$,
whereas the RGB contains all the stars with $(B-V)>0.7$ up to
$V=18$. }
\label{Fig:CMD}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Carina_SNR}
\end{center}
\caption{Histogram of the mean signal-to-noise ratio per pixel of the
spectra of the final Carina sample that is considered for the
analysis (731\ members).}
\label{Fig:SNR_histogram}
\end{figure}
That said, one must still understand the effect of removing regions of
the solution space on the parameterisation of foreground stars that
might have the same radial velocity as Carina, and contaminate our
sample (the risk here being to mis-parameterise a dwarf star as a
giant). A first statement that can be made is that given the radial
velocity of Carina ($221\,{\rm km~s^{-1}}$), foreground stars having a similar
radial velocity are expected to be mainly halo stars, and therefore
giants. For the few foreground stars that might still have
$\log g$$>$3.75, the algorithm will always match, by design, the
observations with the closest template (the latter being the one
having the parameters closest to those of the true spectrum). This implies that
the derived parameters of a dwarf star will therefore be at the
boundaries of the grid and easily identifiable. In order to make sure
that such contaminators are excluded from our analysis, we further
decided to discard all the stars for which the surface gravity is
greater than $3.25$, as well as the stars whose spectra have an SNR
lower than 5\,pixel$^{-1}$, because of the large uncertainty in the
derived parameters. Figure~\ref{Fig:SNR_histogram} shows the SNR
histogram of the Carina sample that is considered in what follows. The
bulk of the spectra have SNR$\sim$10~pixel$^{-1}$, with some of them
reaching SNR$\sim$40~pixel$^{-1}$.
{ A selection of three spectra, at SNR$\sim5, 10, 20$~pixel$^{-1}$, together with their best-fit solutions, is shown in Fig.~\ref{Fig:Obs_spectra}.}
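The membership cuts described above amount to a simple mask (a sketch with names of our choosing):

```python
import numpy as np

def select_members(logg, snr, logg_max=3.25, snr_min=5.0):
    """Quality cuts described above: drop probable foreground dwarfs
    (log g > 3.25, i.e. stars parameterised near the grid boundary) and
    spectra with SNR below 5 per pixel."""
    logg, snr = np.asarray(logg), np.asarray(snr)
    return (logg <= logg_max) & (snr >= snr_min)
```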
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Carina_spec_low_SNR}\\
\includegraphics[width=0.5\textwidth]{Carina_spec_med_SNR}\\
\includegraphics[width=0.5\textwidth]{Carina_spec_high_SNR}\\
\end{center}
\caption{{ Observed spectra (black) and best-fit synthetic spectra (red), i.e. the synthetic spectra corresponding to the derived parameters of each star, for three stars with SNR$\sim5, 10, 20$~pixel$^{-1}$. The wavelength range has been truncated at the blue end by $40$\AA\, to make the plots easier to visualise.}}
\label{Fig:Obs_spectra}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Carina_errors}
\end{center}
\caption{{ Errors on the effective temperature, surface gravity, uncalibrated metallicity and calibrated metallicity as a function of signal-to-noise, for the sample of stars that we have identified as Carina members and with SNR greater than 5\,pixel$^{-1}$. }}
\label{Fig:SNR_errors}
\end{figure}
Typical uncertainties on the parameters were obtained using the error
spectrum of each target, producing 10 Monte-Carlo realisations of
observed spectra and re-deriving the parameters. For T$_{\rm eff}$, $\log g$\ and
[M/H], the median uncertainties are of the order of 226\,{\rm K}, 0.48\,{\rm dex}\,
and 0.29\,{\rm dex}, respectively, { with the errors being the largest for the lowest SNR values, as expected (see Fig.~\ref{Fig:SNR_errors})}.
Figure~\ref{Fig:CMD} shows the CMD for
Carina for our adopted sample, colour-coded according to the
metallicity of the stars. We find that the mean metallicity for the
whole sample is $\hbox{[M/H]}\approx-1.78$, with a large span in the derived
values, ranging from $\sim-4$ to $\sim-0.5$. This result is in very good
agreement with previous studies \citep[see for example][]{Koch06}.
\subsection{Age determination of the red giant branch stars}
\label{sec:ages}
A rough estimate of the age of the stars can be obtained by comparing the colour,
magnitudes and atmospheric parameters of the stars with theoretical
isochrones. Following \citet{Kordopatis11b}, we constructed a library
of isochrones with a constant step in age ($0.5\Gyr$) and metallicity
($0.1\,{\rm dex}$). The step in metallicity has been chosen in order to be
smaller than the typical error on this parameter. The isochrones have
been computed using the online
interpolator\footnote{\url{http://stev.oapd.inaf.it/cgi-bin/cmd}} of
the Padova group, based on the \citet{Marigo08} sets, for a
metallicity range of $[-2.1;0.1]$.
Since the isochrones do not extend to metallicities as low as those of the most metal-poor stars in our sample, we do not attempt to derive ages for stars whose metallicity does not reach $-2.1$ within $2\sigma_{\tiny \hbox{[M/H]}}$, where $\sigma_{\tiny \hbox{[M/H]}}$ is the uncertainty on the derived metallicity.
The expected age, $\bar{a}$, of a star with parameters $\hat \theta_k$ ($k \equiv \hbox{[M/H]}$, $V$, $B-V$, T$_{\rm eff}$, $\log g$), has been obtained as follows. First, we select the set of isochrones within \hbox{[M/H]}$\pm \sigma_{\tiny \hbox{[M/H]}}$. Then, we assign to each point $i$ on the selected isochrones a Gaussian weight $W_i$, which depends on the distance between the point on the isochrones and the measured observables or derived parameters. In practice, $W_i$ is computed as:
\begin{equation}
W_i= \mathrm{exp}\left(-\sum_k \frac{(\theta_{i,k} - \hat \theta_k)^2}{2\sigma^2_{\hat \theta_k}}\right)
\label{eqn:weight_isochrones}
\end{equation}
where $\theta_{i,k}$ corresponds to the considered parameters of the isochrones and
$\sigma_{\hat \theta_k}$ to the associated uncertainties of the
measurements. It is worth mentioning that we did not include in this
weight any additional multiplicative factor proportional to the mass
of the stars, as suggested by \citet[][see also
\citealt{Kordopatis11b}]{Zwitter10}, because this factor is useful
only for surveys containing a mixture of dwarfs and giants. Indeed, this
factor, defined as the stellar mass difference between two adjacent
points on the isochrones, is introduced in order to give additional
weight to the likelihood of observing a dwarf, because dwarfs are
characterised by slower evolutionary phases. Since our survey includes
only giant stars, this factor has been ignored.
The expected age $\bar{a}$ of a given star is then obtained by computing the weighted mean:
\begin{equation}
\bar{a}=\frac{\sum_{i} W_{i} \cdot a_i}{\sum_{i} W_i },
\end{equation}
where $a_i$ are the associated ages of the points on the isochrones. The associated error of the expected age is obtained by:
\begin{equation}
\sigma(\bar{a})=\sqrt{\frac{\sum_{i} W_i \cdot [\bar{a} - a_i ]^2}{\sum_{i} W_i }}.
\end{equation}
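For concreteness, the weighting scheme above (Gaussian weights, weighted mean and its associated error) can be sketched in a few lines of NumPy; the function name and array layout are ours, not those of the authors' pipeline:

```python
import numpy as np

def expected_age(iso_params, iso_ages, obs, obs_err):
    """Weighted-mean age of one star from a set of isochrone points.

    iso_params : (N, K) parameters (e.g. [M/H], V, B-V) of N isochrone points
    iso_ages   : (N,)   age (in Gyr) attached to each isochrone point
    obs        : (K,)   the star's measured parameters
    obs_err    : (K,)   1-sigma uncertainties on those measurements
    """
    # Gaussian weight W_i of each isochrone point
    chi2 = np.sum((iso_params - obs) ** 2 / (2.0 * obs_err ** 2), axis=1)
    w = np.exp(-chi2)
    # weighted mean age and its associated error
    a_bar = np.sum(w * iso_ages) / np.sum(w)
    sigma_a = np.sqrt(np.sum(w * (a_bar - iso_ages) ** 2) / np.sum(w))
    return a_bar, sigma_a
```

In practice the isochrone grid would be restricted beforehand to the points within \hbox{[M/H]}$\pm \sigma_{\tiny \hbox{[M/H]}}$, as described above.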
We assumed a distance modulus for Carina of $m-M=20.1$ and a line-of-sight reddening of $E(B-V)=0.03$~mag, as
estimated by, e.g. \citet{Dallora03} and \citet{Karczmarek15}\footnote{ {We note that our age estimation is rather robust to the adopted distance modulus. Indeed, assuming for example $m-M=20.3$\,mag \citep[see][for a discussion of possible values of the distance modulus]{Vandenberg15}, leads to a median change in the final ages of $0.3\Gyr$ with a $\sigma$ of $1.6\Gyr$.}}. The
considered age-range of the isochrones has been set to be between
$1\Gyr$ and $13.7\Gyr$. We have tested two different configurations:
one where only $V$, $(B-V)$ and the metallicity are taken into
account, and one where we additionally consider the information
related to the effective temperatures and surface gravities.
Figure~\ref{Fig:Ages} shows the differences in the age estimations
according to these two approaches for the RGB stars only (for the RC stars, see the discussion below). We can see that our sample can
be separated at least into two populations, one old ($\sim 13\Gyr$), and one of
intermediate age having a peak at $\sim7.5\Gyr$ and with a much
broader age range. This result is in good agreement with, for
example \citet{Stetson11}, who fit the turn-off stars in the CMD and
estimate that Carina has at least two populations: one of $12\Gyr$ and
one of $4-6\Gyr$, or with Norris et al. (in prep) who are completing a similar age-metallicity analysis using high-resolution spectra of giant stars.
In particular, we find that this intermediate age
population has a tail extending to young ages, down to $1\Gyr$. Taking
into account the atmospheric parameters in
Eq.~\ref{eqn:weight_isochrones} does not change the age distributions drastically,
as can be seen from the dotted (with T$_{\rm eff}$, $\log g$) and plain
(without T$_{\rm eff}$, $\log g$) histograms of Fig.~\ref{Fig:Ages}. This
similarity in the age estimations shows that our effective
temperatures, gravities and metallicities are consistent with the CMD
of Carina. However, since the atmospheric parameters can have large covariances, and since the ages are not fundamentally changed when T$_{\rm eff}$\ and $\log g$ are taken into account, in what follows we have considered only the ages computed
without taking into account the atmospheric parameters.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Carina_RGB_Ages}
\end{center}
\caption{Age distribution of the RGB stars of Carina. The dotted
(plain) line corresponds to age estimations with (without) taking into
account the information on the effective temperature and gravity. }
\label{Fig:Ages}
\end{figure}
We note that for 10 stars, no age estimation was possible. The reason
is that they are too far from the isochrones, either because they are
foreground contaminators, binary stars, or stars for which the
measurements (metallicity and/or photometry) are of poor quality and
with underestimated errors.
\subsection{Age determination of the red clump-region stars}
The RC stars are stars which have
passed through the explosive ignition of helium at the RGB tip and
are now burning helium in their cores. Once all the helium in the core is
exhausted, the stars begin their AGB phase. Their absolute
magnitude depends mildly on metallicity and age, with the
oldest and most metal-poor ones also being the faintest.
The age of the stars in the RC region of the CMD ($B-V\leq 0.7$) is generally difficult to determine by projection on the
isochrones, because all the isochrones pass through that region.
A first run of our pipeline on
the stars in the RC region has shown that the estimated ages of the bulk of these stars are
similar to those of the intermediate-age population of the RGB ($\sim 7.5\Gyr$
for the RGB, $\sim 6\Gyr$ for the RC region), as expected from the observing
biases (at these magnitudes, we do not see the oldest RC stars, which have
still fainter magnitudes). Nevertheless, we find a non-negligible
fraction of stars with ages lower than $4\Gyr$.
{ Since these stars do not have a counterpart (in terms of star counts) on the RGB,
it seems clear that the low-metallicity stars in the RC region should be much older than derived (they are therefore likely the oldest stars that have left the horizontal branch and are now on the AGB)}. If these stars follow the derived age-metallicity relation of the RGB stars, then their estimated ages should be $\sim13\Gyr$. Given these facts we decide, in what follows, not to take into account the ages of the RC-region stars in our analysis.
\section{Chemodynamical separation of the stellar populations}
\label{sec:description}
In order to identify and separate any chemodynamic stellar
subpopulations, we use the maximum likelihood technique presented by
\citet{Walker11} and later developed in \citet{Amorisco12b}.
We model the spectroscopic target as the combination of multiple subpopulations with different intrinsic properties, including metallicity, kinematics and spatial distribution.
{ The crucial advances in this technique are that:
\begin{enumerate}
\item
all available information can be used at the same time, improving
the quality of the population division with respect to a method that only uses one dimension at a time (for example metallicity or kinematics, separately);
\item
as a result of the population division, each single star is tagged with its membership probability to all identified subpopulations, so that, compared with hard cuts that suffer from cross-contamination, such a mixture model can better disentangle the properties of the different components;
\item
the number of independent subpopulations needed to best describe the data can be determined objectively, by comparing the gain in likelihood due to the increased number of populations to the growth in the number of free parameters of the model;
\item
any selection function can be explicitly taken into account.
\end{enumerate}
Details on the technique can be found in \citet{Walker11} and
\citet{Amorisco14b}.} The mapping of the chemodynamical subpopulations
onto the space of stellar ages is presented in
Sect.~\ref{sec:age_decomposition}.
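A minimal sketch of the membership step of such a mixture model, restricted for illustration to a single observable (metallicity); all names are hypothetical, and the actual method of \citet{Walker11} also models positions, velocities and the selection function:

```python
import numpy as np

def membership_probs(x, x_err, means, sigmas, fracs):
    """E-step of a 1-D Gaussian mixture: per-star membership probabilities.

    Measurement errors are added in quadrature to the intrinsic spread
    of each component; each row of the returned array sums to one.
    """
    x = np.atleast_1d(x)[:, None]                    # (n_stars, 1)
    var = sigmas ** 2 + np.atleast_1d(x_err)[:, None] ** 2
    like = fracs * np.exp(-0.5 * (x - means) ** 2 / var) \
           / np.sqrt(2.0 * np.pi * var)              # (n_stars, n_components)
    return like / like.sum(axis=1, keepdims=True)
```

A star would then be tagged as a high-probability member of a component when its membership probability exceeds some threshold (e.g. 0.75, as used for the coloured symbols in the figures).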
\begin{table*}
\centering
\caption{Three distinct chemo-dynamical subpopulations in the Carina dSph: the structural parameters.}
\begin{tabular}{@{}lcccccc@{}}
\hline
Subpop & $\langle[{\rm M/H}]\rangle$ & StD$([{\rm M/H}])$ & $R_{\rm h}/$pc & $e$ & $\langle \sigma_{LOS}\rangle/$kms$^{-1}$ & $f$\\
\hline
RGB Metal Poor (MP) & $-2.4\pm0.1$ & $0.21\pm0.08$ & $500^{+150}_{-75}$ & $0.5\pm0.15$ & $10.4\pm1$ & $0.21\pm0.06$\\
RGB Intermed. Met. (IM) & $-1.84\pm0.05$ & $0.10\pm0.07$ & $185\pm20$ & $\lesssim0.2$ & $7.6\pm0.5$ & $0.59\pm0.04$\\
RGB Metal Rich (MR) & $-1.0\pm0.1$ & $0.23\pm0.07$ & $400^{+100}_{-50}$ & $0.45^{+0.15}_{-0.1}$ & $8.5\pm0.8$ & $0.20\pm0.05$\\
\hline
Red Clump & $-1.50\pm0.05$ & $0.50\pm0.05$ & $225 \pm 20$ & $0.30 \pm0.06 $ & $8.6 \pm 0.3$ & -- \\
\hline
\end{tabular}
\label{tab:pop_numbers}
\end{table*}
\subsection{Three distinct red giant branch subpopulations}
\begin{figure*}
\centering
\hspace{-.5cm}
\includegraphics[width=.5\textwidth]{carinapops}
\includegraphics[width=.4\textwidth]{carinapops0}
\caption{Three distinct subpopulations.
Left upper panel: metallicity distribution function decomposed into three populations.
Black: the MDF of the entire RGB sample. Blue, purple and red: respectively, the
contributions of the metal-poor, intermediate-metallicity and metal-rich populations
to the global RGB MDF. Note that the distributions appear broader than the measured
intrinsic spreads because of uncertainties on the discrete metallicity measurements.
Left lower panel: the inference for the average velocity dispersion of the three RGB subpopulations.
Right panel: the spatial distribution of the { stars having more than 75 percent probability of being members}
(metal rich population
in red, intermediate metallicity in purple, metal poor in blue). Coloured ellipses display the best-fitting half-light radii
for each subpopulation; additionally, the red clump population is shown in green, and black lines display the position angle
of the photometry and the direction of Carina's tidal tails. }
\label{pops}
\end{figure*}
We first restrict ourselves to the subset of high-probability Carina members
belonging to the RGB ($B-V>0.7$, 400 members)
and investigate any chemo-dynamical sub-divisions. We find that a
two-population model is preferred over one with a single population,
but a model including three populations represents the best description of the
data (the probability of obtaining an analogous gain in the likelihood
function by pure chance is negligible, despite the additional degrees of freedom
of the three-population model). Table\,\ref{tab:pop_numbers} collects the properties of the three
identified subpopulations.
We find that chemistry is the main driver of the division: we identify
a metal poor (MP) population, with average metallicity of about
$\langle[{\rm M/H}]\rangle\approx-2.4$, a population of intermediate
metallicity (IM, $\langle[{\rm M/H}]\rangle\approx-1.8$) and a more
metal-rich (MR) population, with $\langle[{\rm M/H}]\rangle\approx-1$.
The upper-left panel in Fig.~\ref{pops} displays the metallicity
distribution function of our sample of red giant stars and illustrates
the division in subpopulations. The bulk of Carina's red giants
belong to the population of intermediate metallicity, which collects a
fraction $f_{\rm IM}\approx60\%$ of the total RGB population,
similar to what was found by \citet{deBoer14b}. Note that this plot
includes both (i) a convolution with the individual observational
uncertainty of each metallicity measurement and (ii) the partial
weighting of each star with its membership probability to each
subpopulation. The first is responsible for the larger metallicity
spread of each population (with respect to the intrinsic spread listed
in Table~\ref{tab:pop_numbers}), while the second is responsible for
the skewness and overlap between the different populations { due to those stars that have similar probabilities of belonging to either population (caused by both observational uncertainties in metallicity and by the similarity between the recovered kinematical properties of the populations)}.
The right panel of Fig.~\ref{pops} shows the spatial distribution of
the three subpopulations, with symbols identifying the high
probability members { ($p\geq0.75$)}. Each ellipse corresponds to the best fitting
elliptical half-light radius $R_{\rm h}$, with its ellipticity and
position angle.
{ The properties of the spatial distributions of the different populations are obtained by assuming a parametric functional form (Plummer profile) and by fitting for the spatial distribution of discrete spectroscopic targets, following \citet{Walker11}, but also allowing for a non zero ellipticity.}
The MP population is considerably elongated and
extended. In fact, we are not entirely able to measure its half-light
radius, as we find that its distribution in the radial range probed by
our data is almost flat. We find that MP stars are approximately
aligned with Carina's tidal tails, as measured by
\citet{Battaglia12}. The MR population is somewhat less extended,
although also quite spread out over our radial coverage. On the
contrary, the IM population is substantially more compact and
centrally peaked, with a well-defined half-light radius and a
decreasing number-count profile.
In contrast with all other previously studied dSphs, we find that the MR
population is more extended than the IM population, and, accordingly, its
characteristic velocity dispersion is most likely larger, as shown in the
lower-left panel of Fig.~\ref{pops}.
{ This inversion can be captured even with hard cuts in metallicity, i.e. without the use of the likelihood division method. Although such a binning makes the signal weaker, due to the large errors in metallicities, the trends shown in Fig.~\ref{Fig:Simple_decompositions} indicate that the high-metallicity tail is at least as kinematically hot as the bulk of the stars in the intermediate-metallicity bin. The same cuts can be used to show that the spatial distribution of the stars in the intermediate-metallicity bin is more compact than that of the high-metallicity tail, showing that our result is robust.}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{met-kin_new}
\end{center}
\caption{ Projected velocity dispersion versus metallicity, obtained by simply binning the spectroscopic data for the RGBs. Black points are obtained using subsets of 50 RGBs each (successive bins shift by 25 stars at a time, so that not all data points are independent). All measurements are obtained through a maximum likelihood method, and the vertical error bars show $1\sigma$ uncertainties. The wider and coloured error bars group RGB stars into wider bins, and are fully independent from each other (respectively, the three bins contain 75, 250, 75 RGBs).}
\label{Fig:Simple_decompositions}
\end{figure}
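The binned dispersions of Fig.~\ref{Fig:Simple_decompositions} come from a maximum-likelihood fit; below is a minimal sketch of the standard error-deconvolved estimator (an assumption on our part, since the authors do not spell out their implementation), using scipy.optimize:

```python
import numpy as np
from scipy.optimize import minimize

def ml_dispersion(v, v_err):
    """ML mean velocity and intrinsic dispersion of one bin of stars.

    Each measured velocity v_i is assumed drawn from
    N(mu, sigma^2 + e_i^2): the intrinsic dispersion sigma is
    deconvolved from the individual measurement errors e_i.
    """
    def nll(params):
        mu, log_sig = params
        var = np.exp(2.0 * log_sig) + v_err ** 2
        return 0.5 * np.sum(np.log(var) + (v - mu) ** 2 / var)
    res = minimize(nll, x0=[np.mean(v), np.log(np.std(v) + 1e-3)])
    mu, log_sig = res.x
    return mu, np.exp(log_sig)
```

Fitting $\log\sigma$ rather than $\sigma$ keeps the dispersion positive without explicit bounds.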
This represents the first
case in which it is not possible to identify a complete parallel {\it ordering}
of metallicity, characteristic scale of the stellar distribution and
kinematical state. The MR population is more extended than the
IM population, and, at the same time, at least as kinematically hot.
In turn, this may provide a justification for previous measurements of a very
limited global chemical gradient in both \citet{Koch06} and \citet{Walker11}.
\subsection{Comparison with the Red Clump population}
We have decided to carry out a separate analysis of the RC
population, keeping it distinct from the RGB population.
We do not try to separate subpopulations based on chemistry within the RC, and
instead only measure global properties of their spatial distribution,
kinematics and chemistry.
We use more than 400 line-of-sight velocity measurements and
associated spatial positions for RC stars that belong to Carina with
high probability. As listed in Table~\ref{tab:pop_numbers}, we find that the RC population
bears significant similarities with the population of intermediate
metallicity in the red giant branch. Its half light radius is
comparable with the half light radius of the photometry and so are its
ellipticity and position angle (as shown in the right panel of
Fig.~\ref{pops}). Note that, being on average slightly younger, it is likely
that the RC population probes a combination of the IM and MR red giant
subpopulations. Therefore, the fact that the IM population is even
more concentrated than the RC population is a confirmation of the
inversion described in the previous section. The same reasoning
applies to the kinematics of the RC population, whose velocity
dispersion is slightly higher than that of the IM population.
\subsection{Age decomposition of the red giant branch stars}
\label{sec:age_decomposition}
In this section, we investigate how the chemo-dynamical
population splitting we have just obtained projects into the
space of stellar ages. Grey points in Fig.~\ref{pops_ages} illustrate the
age-metallicity relation for the RGB stars we have derived
and presented in Sect.~\ref{sec:ages} (error-bars are also
shown for a selection of precise measurements, where
uncertainties are lower than $1.5\Gyr$). As previously noted,
despite Carina's narrow RGB and the well known age-metallicity
degeneracy which makes the uncertainty on the age of any
single RG substantial, we identify a well defined age-metallicity
relation on the whole population, thanks to the precise photometry
of \citet{Stetson11} and metallicity estimates of our analysis.
High-probability members of each subpopulation are highlighted
in different colours in the lower panel of Fig.~\ref{pops_ages}. They do not clearly
separate out in age, but the presence of a gradient in the mean
ages of the three populations is clear. The upper panel of Fig.~\ref{pops_ages}
directly projects the population split into the space of stellar ages,
by showing the distributions of the ages of members of each stellar
subpopulation (as for plots in Fig.~\ref{pops}, distributions in Fig.~\ref{pops_ages} are
convolved with the uncertainties of each single age measurement, which
contribute to broaden each probability distribution substantially).
We find that the MP stellar population almost exclusively contains stars
that are associated with the oldest isochrone in our library ($13.7\Gyr$),
the IM has a considerable component of intermediate age stars
($6 - 10\Gyr$), while, to continue the gradient, the MR population
extends up to recent times.
Even though the task of individual age estimation remains challenging,
we find that the chemodynamical division into subpopulations of the
Carina dSph is globally compatible with the independent picture presented
by its SF history. As the photometrically derived SF history suggests,
Carina has experienced three major SF episodes. Indeed, we find
independent and corroboratory evidence for this in the spectroscopy:
three stellar subpopulations are identified, with ages compatible with
those indicated by the SF history.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{Caragedistrc2}
\caption{Subpopulations and ages. Upper panel: the probability
distribution for the age of the three distinct chemo-dynamical
subpopulations in Carina (metal rich in red, intermediate in purple,
metal poor in blue, total in black). Lower panel: the
age-metallicity diagram for the RGB stars in the spectroscopic
sample; error bars are shown only for stars with a precise age
estimate ($\delta_{\rm age}\leq1.5$ Gyr); colored points identify
high-probability members { (probability of being a member larger than 75 percent)} for each chemo-dynamical subpopulation.}
\label{pops_ages}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:conclusions}
We have shown that, similar to Fornax and Sculptor, the Carina dSph is
characterised by the presence of multiple stellar populations, with
distinct chemical properties, spatial distributions and kinematical
states. However, Carina appears substantially
more mixed than either Sculptor or Fornax, which is reflected in the
comparatively smaller differences among the characteristic scale
lengths (as well as velocity dispersions) of its subpopulations. The
MP population is more extended (and kinematically hotter) than the two
younger populations (respectively IM and MR). What is more notable is
that we find that the youngest MR population is more extended and at
least as kinematically hot as the IM population, with evidence for an
{\it inversion} of the usual ordering. Accordingly, the IM metallicity
population is also more concentrated than the RC population.
While more metal-rich stellar subpopulations are
generally more spatially concentrated and accordingly kinematically
colder, the evolutionary history of Carina partially broke this common
parallel ordering. This opens the question as to whether Carina's stellar
populations were formed with the properties we observe today -- and
then any difference from Fornax or Sculptor has to be found in the
intrinsic properties and triggers of the SF episodes -- or whether Carina
initially had a more pronounced chemical gradient but this was subsequently
perturbed by environmental factors.
\citet{Sales10} have shown that strong enough tidal disturbances may
be capable of homogenising the properties of multiple
subpopulations. They suggest that, by removing a substantial fraction
of the outer dark matter envelope, stripping causes the velocity
dispersion of extended populations to decrease, weakening any
kinematical difference with more concentrated tracers. Indeed,
Carina's proper motion suggests a pericenter of only a few tens of
\kpc, which is considerably smaller than either Fornax or
Sculptor. Also, the presence of an extended extratidal component
would suggest tidal disturbance as a feasible mechanism to mix the
originally more segregated subpopulations of Carina after infall.
However, it remains unclear whether tides alone can be responsible for
Carina's present configuration. In particular, tidal effects are
strongest on more extended populations which implies that they should
naturally preserve any original outside-in ordering in the
characteristic scales of stellar subpopulations: it seems somewhat
puzzling that tides are capable of causing an inversion in the
ordering of the subpopulations as observed here. As a consequence, it
is tempting to discuss other possible mechanisms that may cause this
phenomenology.
If the current distribution of the MR population has not been altered
by tides, then the gas from which it originated was initially
more spread out than the gas from which the IM population was born. In
the following we list mechanisms that may be held responsible for
this.
\begin{itemize}
\item{Ram pressure: if the MR population formed after infall, the gas may have been
disturbed by the interaction with the corona of the Milky Way,
in the form of ram pressure, perhaps also triggering star formation { \citep[see also][on how Milky Way feedback can affect the evolution of dSph satellites, even at $100\kpc$]{Nayakshin13}}.}
\item{Stellar feedback from the IM population: if not energetic enough
to entirely remove the remaining gas, feedback would naturally result
in the gas being distributed on more energetic orbits { \citep[an illustration of this effect has been recently provided by][]{El-Badry15}}. It is unclear however if the cooling necessary
to restart star formation would newly concentrate it in the central regions.}
\item{
Interaction with another dwarf galaxy or dark halo, { likely before infall,} which may have triggered the star formation and perturbed the gas by dynamical interaction.}
\end{itemize}
\citet{Donghia08} have suggested that low-mass galaxies only light up with
star formation when belonging to groups at intermediate redshifts.
Indeed, star formation is quite challenging to achieve in such small
haloes \citep[e.g.,][]{Read06} and the larger virial mass of a group
may help dwarfs to more easily retain their gas after episodes of
intense stellar feedback \citep[e.g.,][]{Penarrubia12,Amorisco14}.
Here we note that interactions between low-mass dwarfs may also
contribute to triggering and facilitating star formation, in a
systematic way across the population of low-mass galaxies. Such interactions
are not infrequent in a $\Lambda$CDM universe \citep[e.g.,][]{Deason14,Wetzel15}, and would
justify the fact that a fraction of the low mass members
of the Local Group show signs suggesting a violent and active past \citep{Kleyna03,
Coleman04, Amorisco14b, deBoer14b}.
\section*{Acknowledgments}
We thank the anonymous referee for comments that helped improve the quality of the paper.
It is a great pleasure to also thank Mike Irwin, Thomas de Boer and Else Starkenburg for many
discussions and precious insight. The Dark Cosmology centre is funded
by the DNRF.
\bibliographystyle{mnras}
\section{Introduction}
Bose-Einstein condensates in double-well potentials have attracted
much attention in the past decades. As a paradigm model for studying
the competition effect of quantum tunneling and interaction, the
double well systems have been widely studied from many aspects
\cite{PRA77063601,PRA593868,JPA381235,PRA78013604,Smerzi,PRA554318,RMP73307,Liujie,Raghavan,Mahmud}.
Due to the experimental progress in manipulating ultracold atomic
gases,
both the trap potential and interaction between atoms can be
implemented with unprecedented tunability \cite{RMP}, and thus the
dynamics of many-body quantum states of interacting bosons can be
experimentally explored by loading the ultracold atoms in double
wells. For the atomic double-well system, the atom-atom interactions
in Bose-Einstein condensates have been found to play important roles
in the dynamics of the system. The competition between tunneling and
interaction leads to many rich and interesting effects, such as the
Josephson oscillation and self-trapping phenomena
\cite{Smerzi,PRA554318,RMP73307,Liujie,PRA593868,Raghavan,Mahmud}.
Moreover, novel correlated tunneling dynamics in interacting atomic
systems characterized by a small number of particles has recently
been observed experimentally \cite{Folling}, which attracted
particular attention in the study of the dynamics of few-atom
systems \cite{Zollner,Liang}.
So far, most theoretical works on double-well systems are based on the
two-mode approximation \cite{Smerzi,PRA554318,RMP73307,Liujie,Raghavan,PRA593868}.
For atoms confined in a double well potential, if all of them are
prepared in one well initially, they will oscillate in the form of
Josephson oscillation for weak enough interaction, or they will stay
in one well (self-trapping) when the interaction is above a critical
value. Both the Josephson oscillation and self-trapping phenomena
can be understood within the two-mode approximation and have been
observed experimentally in cold atomic systems \cite{Albiez}.
However, if interactions are strong enough, the mean-field theory
and the two-mode approximation are no longer expected to be valid, as
higher orbitals become occupied. In the strongly interacting regime,
strong interactions between atoms may fundamentally alter the tunneling
dynamics and result in correlated
tunneling, which was explored most recently in ultracold atoms
\cite{Zollner,Liang}. Theoretically, the correlated pairing tunneling was
studied by the multi-configuration time-dependent Hartree method
\cite{Zollner,Sakmann} and also within the scheme of the extended
Bose-Hubbard model with an additional correlated pair-tunneling term
\cite{Liang}. In order to understand the dynamics of double-well systems
from the weakly to the strongly interacting regime in a unified scheme, in
this work we study the dynamical properties of a few bosons confined
in a one-dimensional (1D) split hard wall trap with the repulsive
interaction strength varying from zero to infinity. Experimentally,
the effective interaction strength can be tuned by using Feshbach
resonance or the confinement-induced resonance to the strongly
interacting Tonks-Girardeau (TG) limit \cite{Paredes,Weiss}, which
makes it possible to explore the novel dynamics even in the TG
limit.
In the TG limit, the bosonic systems exhibit the feature of
fermionization
\cite{Girardeau,Cederbaum,Hao1,Hao2,Zollner2,Deuretzbacher}. In the
strongly interacting regime, the mean field theory generally fails
to describe the properties of fermionization. In order to
characterize the crossover from weakly interacting condensation to
strongly interacting TG gas, sophisticated theoretical methods,
such as the exact diagonalization method
\cite{Deuretzbacher,Hao2,PRA78013604} and multi-orbital
self-consistent Hartree method \cite{Zollner2,Cederbaum} have been
applied to study the static few-boson systems. In this work, we
shall apply the exact diagonalization method to study the dynamical
problem in the 1D double-well system. The exact diagonalization
method can produce numerically exact results and allows us to give
a unified description of both the weakly and strongly interacting
regimes. As a comparison, we also investigate the dynamics based on
the two-site Bose-Hubbard model, considering both the two-mode and
multi-mode approximations. Comparing the results obtained from the
different methods, we conclude that the one-band (two-mode) BHM
approximation is efficient for describing the dynamics in the
weak-interaction regime, but the multi-band BHM approximation is
needed to describe the dynamics of bosons with large interaction
quantitatively. We also confirm that pair tunneling
makes an important contribution to the tunneling in the
large-interaction regime.
\section{Model and method}
We consider a few bosons with mass $m$ confined in a one-dimensional split
hard wall trap, which is described by the Hamiltonian ($\hbar =m=1$)
\begin{eqnarray}
\widehat{H} &=&\int \widehat{\psi }^{\dagger }(x) \left[ -\frac{1}{2}\frac{\partial ^{2}}{\partial x^{2}}+V(x)+\kappa \delta (x) \right] \widehat{\psi }(x)\, dx + \nonumber \\
&& c\int \widehat{\psi }^{\dagger }(x)\widehat{\psi }^{\dagger }(x^{\prime })\delta (x-x^{\prime })\widehat{\psi }(x)\widehat{\psi }(x^{\prime })\, dx\, dx^{\prime }. \label{Hamiltonian1}
\end{eqnarray}
Here $V(x)$ is a hard-wall trap which is zero in the region $(-a,a)$ and
infinite outside, $\kappa $ is a tunable parameter describing the
strength of the zero-range barrier at the center of the trap, and $c$ is the
interaction strength between particles, determined by the effective
1D s-wave scattering length. Here the double well is modeled by the
1D split hard-wall trap with a $\delta$-type barrier located at the
origin, and the tunneling amplitude between the left and right wells
can be tuned by the barrier strength $\kappa $
\cite{PRA77063601,PRA78013604}. To study the tunneling dynamics, the
barrier strength $\kappa$ is initially set to infinity and the
system is prepared in the ground state of the left well. At time $t=0$,
we suddenly change $\kappa$ to a finite value and study the dynamical
evolution of the initially prepared system.
For $c=0$, the single particle stationary Schr\"{o}dinger equation
associated with the Hamiltonian (\ref{Hamiltonian1}) can be written as
\begin{equation}
\left[ -\frac{1}{2}\frac{\partial ^{2}}{\partial x^{2}}+V(x)+\kappa
\delta (x)\right] \varphi _{n}(x)=\epsilon _{n}\varphi _{n}(x),
\end{equation}
where $\varphi _{n}(x)$ are the complete set of orthonormal
eigenfunctions and $\epsilon _{n}$ the corresponding eigenenergies.
Here $n=1,2,3,\cdots $ gives the ordering number of the
single-particle energies. According to the parity symmetry of the
eigenfunctions, the state $\varphi _{n}(x)$ is symmetric for odd $n$
($n=2i-1$) and antisymmetric for even $n$ ($n=2i$). The
single-particle energies are ordered alternatively corresponding to
symmetric and antisymmetric states. For convenience, we also
represent $\varphi _{2i-1}(x)=\varphi _{i,S}(x)$ and $\varphi
_{2i}(x)=\varphi _{i,A}(x)$ with the subscript $S$ ($A$) indicating
the symmetric (antisymmetric) function. The single-particle
antisymmetric eigenfunctions are $\varphi _{i,A}(x)=\frac{1}{\sqrt{a}}\sin
\frac{i\pi x}{a}),$ $i=1,2,3,...$ with their corresponding eigenenergies
\epsilon_{i,A}=\epsilon_{2i}=(i\pi /a)^{2}/2$, and the
single-particle symmetric eigenfunctions are $\varphi
_{i,S}(x)=C[\cos (px)-\frac{\kappa }{p}\sin (px)]\theta (-x)+C[\cos
(px)+\frac{\kappa }{p}\sin (px)]\theta (x)$ with their corresponding
eigenenergies $\epsilon_{i,S}=\epsilon_{2i-1}=p^{2}/2$, where the
wave vector $p$ is determined by the transcendental equation $p/\kappa +\tan
(pa)=0$ and $C$ is the normalization constant. Note that the
barrier influences only the symmetric eigenfunctions, leaving the
antisymmetric eigenfunctions unaffected for any barrier strength
$\kappa $. Further, the $(2i-1)$-th eigenenergy gradually approaches the
$2i$-th eigenenergy ($i=1,2,3...$) with
the increase of the barrier strength, and they become degenerate in the limit
$\kappa \rightarrow \infty$. For simplicity, we set $a=1$ and
discuss a large barrier strength $\kappa=50$ in this paper. In this
case, single-particle eigenenergies in split hard wall trap are
$\epsilon _{1}=4.74341$, $\epsilon _{2}=\pi ^{2}/2$, $\epsilon
_{3}=18.9764$, $\epsilon _{4}=2\pi ^{2}$, $\epsilon
_{5}=42.7073,...$ respectively; see Fig.\ref{f1}(a). In contrast to
the small energy gap between the $(2i-1)$-th state and the $2i$-th
state, there is a relatively very large energy gap between the
$2i$-th state and $(2i+1)$-th state, {\it i.e.},
$\epsilon_{i,A}-\epsilon_{i,S} \ll \epsilon_{i+1,S}-\epsilon_{i,A}$.
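As a check on the quoted spectrum, the transcendental equation for the symmetric states can be solved numerically; a minimal sketch (the bracketing intervals are our choice, not part of the text):

```python
import numpy as np
from scipy.optimize import brentq

kappa, a = 50.0, 1.0   # barrier strength and well parameter used in the text

# symmetric states: the wave vector p solves p/kappa + tan(p*a) = 0;
# the i-th root lies between the pole at (2i-1)*pi/2 and i*pi
def f(p):
    return p / kappa + np.tan(p * a)

eps_S = []
for i in (1, 2):
    p = brentq(f, (2 * i - 1) * np.pi / 2 + 1e-6, i * np.pi - 1e-6)
    eps_S.append(p**2 / 2)

print(round(eps_S[0], 5))            # 4.74341, epsilon_1 quoted in the text
print(round(eps_S[1], 4))            # 18.9764, epsilon_3
print(round((np.pi / a)**2 / 2, 4))  # 4.9348, epsilon_2 = pi^2/2, kappa-independent
```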
Expanding field operators as $\widehat{\Psi }(x)=\sum_{n=1}^{\infty
}\varphi _{n}(x)a_{n}$, the many-body Hamiltonian
(\ref{Hamiltonian1}) takes the form
\begin{equation}
\widehat{H}=\sum_{n}\epsilon _{n}a_{n}^{\dag
}a_{n}+c\sum_{n,m,p,q}I_{nmpq}a_{n}^{\dag }a_{m}^{\dag }a_{p}a_{q},
\label{Hamiltonian2}
\end{equation}
where $a_{n}^{\dag }(a_{n})$ is the bosonic creation (annihilation)
operator for a particle in the single-particle energy eigenstate
$\varphi _{n}$. The
interaction integral parameters $I_{nmpq}$ are calculated through
$I_{nmpq}=\int_{-a}^{a}\varphi _{n}(x)\varphi _{m}(x)\varphi
_{p}(x)\varphi _{q}(x)dx$. The eigenstates of this Hamiltonian can be
obtained by numerically exact diagonalization in the subspace spanned by the
energetically lowest eigenstates of the noninteracting many-particle
system \cite{Deuretzbacher,PRA78013604}.
When the barrier strength $\kappa $ is large, the split hard wall
trap can be considered as a double well. Similar to the case of
optical lattices, the local Wannier functions $W_{L}^{i}(x)$
$(W_{R}^{i}(x))$ at the left
(right) well with the energy band indices $i$ can be defined as
$W_{L}^{i}(x)=1/\sqrt{2}(\varphi _{i,S}(x)+\varphi _{i,A}(x))$ and
$W_{R}^{i}(x)=1/\sqrt{2}(\varphi _{i,S}(x)-\varphi _{i,A}(x))$. From the
symmetry of $\varphi _{i,S}(x)$ and $\varphi _{i,A}(x)$, one observes that
$W_{L}^{i}(x)=W_{R}^{i}(-x)$. If we expand the bosonic field operator as
$\widehat{\psi
}(x)=\sum_{i}a_{i,L}W_{L}^{i}(x)+\sum_{i}a_{i,R}W_{R}^{i}(x)$,
where $a_{i,L\left( R\right) }$ is the bosonic annihilation operator
for a particle at left (right) well, the Hamiltonian can be written
as the form of two-site Bose Hubbard model
\begin{widetext}
\begin{eqnarray}
\widehat{H}=\sum_{i,j}(J_{LL}^{ij}a_{i,L}^{\dagger}a_{j,L}+J_{RR}^{ij}a_{i,R}^{\dagger}a_{j,R})
+\sum_{i,j}(J_{LR}^{ij}a_{i,L}^{\dagger}a_{j,R}+ J_{RL}^{ij}
a_{i,R}^{\dagger}a_{j,L})
+\sum_{i,j,k,l}\sum_{\alpha,\beta,\gamma,\delta}
U^{i,j,k,l}_{\alpha,\beta,\gamma,\delta}
a_{i,\alpha}^{\dagger}a_{j,\beta}^{\dagger}a_{k,\gamma}a_{l,\delta}
\label{Hamiltonian3}
\end{eqnarray}
\end{widetext}
where the integral $J_{\alpha \beta }^{ij}=\int_{-\infty
}^{\infty }dx(W_{\alpha }^{i}(x))^{\ast }H_{0}W_{\beta }^{j}(x)$, with
$H_{0}=-\frac{1}{2}\frac{\partial ^{2}}{\partial x^{2}}+V(x)+\kappa \delta (x)
$ and the interaction integral $U_{\alpha ,\beta ,\gamma ,\delta
}^{i,j,k,l}=c\int dx(W_{\alpha }^{i}(x))^{\ast }(W_{\beta }^{j}(x))^{\ast
}W_{\gamma }^{k}(x)W_{\delta }^{l}(x)$. The subscripts $\alpha ,\beta
,\gamma ,\delta \in \{L,R\}$ are the well indices and the superscripts
$i,j,k,l\in \{1,2,3,...\}$ are the energy band indices. Here we
note $J^{ij}_{RR}=J^{ij}_{LL}$ for the symmetric double well. The
Hamiltonian (\ref{Hamiltonian3}) can be divided into intraband and
interband parts, that is
\begin{equation}
\widehat{H}=\sum_{i}\widehat{H}_{i}+ \widehat{H}_{interband} .
\label{Hamiltonian4}
\end{equation}
The $i$-th intraband Hamiltonian can be written as
\begin{widetext}
\begin{eqnarray}
\widehat{H}_{i}&=&(\epsilon_{i,L} n_{i,L}+\epsilon_{i,R} n_{i,R})+
[J_{i}+2(n_{i,L}+n_{i,R}-1)J'_{i}]
(a_{i,L}^{\dagger}a_{i,R}+a_{i,R}^{\dagger}a_{i,L}) + \nonumber\\
& &
U_{0}^{i}[n_{i,L}(n_{i,L}-1)+n_{i,R}(n_{i,R}-1)]+U_{LR}^{i}(a_{i,L}^{\dagger}a_{i,L}^{\dagger}
a_{i,R} a_{i,R} +a_{i,R}^{\dagger} a_{i,R}^{\dagger} a_{i,L} a_{i,L}
+4 n_{i,L} n_{i,R}),\label{1bandBHM}
\end{eqnarray}
\end{widetext}
where $n_{i,L}=a_{i,L}^{\dagger}a_{i,L}$,
$n_{i,R}=a_{i,R}^{\dagger}a_{i,R}$, $ \epsilon_{i,L} = J_{LL}^{ii}$,
$ \epsilon_{i,R} = J_{RR}^{ii}$, $J_{i} = J_{LR}^{ii} = J_{RL}^{ii}$
is the intraband hopping energy between left and right wells,
$J'_{i} = U_{LLLR}^{iiii}=U_{RRRL}^{iiii}$, $U_{0}^{i} =
U_{LLLL}^{iiii}=U_{RRRR}^{iiii}$ is the on site interaction energy
and $U_{LR}^{i} \equiv U_{LLRR}^{iiii}$ is the intraband pair
hopping energy. It is easy to check that $\epsilon_{i,L}
=\epsilon_{i,R}=(\epsilon_{i,S} + \epsilon_{i,A} )/2 = \mu_i $ and
$J_{i}=(\epsilon_{i,S} - \epsilon_{i,A} )/2$. The interband
Hamiltonian reads
\begin{widetext}
\begin{equation}
\widehat{H}_{interband}=\sum_{i \neq
j}\sum_{\alpha,\beta,\gamma,\delta}
U^{i,j}_{\alpha,\beta,\gamma,\delta}
(a_{i,\alpha}^{\dagger}a_{j,\beta}^{\dagger}a_{i,\gamma}a_{j,\delta}+a_{i,\alpha}^{\dagger}a_{j,\beta}^{\dagger}a_{j,\gamma}a_{i,\delta})
+\sum_{i,j,k,l}'\sum_{\alpha,\beta,\gamma,\delta}
U^{i,j,k,l}_{\alpha,\beta,\gamma,\delta}
a_{i,\alpha}^{\dagger}a_{j,\beta}^{\dagger}a_{k,\gamma}a_{l,\delta}
\end{equation}
\end{widetext}
where $U^{i,j}_{\alpha,\beta,\gamma,\delta} =
U^{i,j,i,j}_{\alpha,\beta,\gamma,\delta}=
U^{i,j,j,i}_{\alpha,\beta,\gamma,\delta} $ and the summation $\sum'$
in the second part contains three-band terms where only three
different energy band indices exist and four-band terms with $i\neq
j\neq k \neq l$. Most interband interaction terms are very small
except for terms of $U_{LLLL}^{ij}$ and $U_{RRRR}^{ij}$ defined on
the same site and between the $i$th and $j$th bands. For the split
system with barrier strength $\kappa =50$, we have $\epsilon_{1,L}
=\epsilon_{1,R} =4.83911$, $J_{1}=-0.0957$, $U_{0}^{1}=1.48446c$,
$J'_{1} =-0.00366c$, $U_{LR}^{1}=0.0003c$ and
$U_{LLLL}^{12}=0.98866c$, while the interaction strengths containing
three or four energy band indices are very small, for example
$U_{LLLL}^{1233}=-0.000969c$ and $U_{LLRR}^{1234}=-0.00107c$.
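The identities $\epsilon_{i,L}=\epsilon_{i,R}=\mu_i$ and $J_i=(\epsilon_{i,S}-\epsilon_{i,A})/2$ can be checked directly against the single-particle energies quoted above:

```python
import numpy as np

eps_1S = 4.74341          # epsilon_1, symmetric ground state (kappa = 50)
eps_1A = np.pi**2 / 2     # epsilon_2, first antisymmetric state

mu_1 = (eps_1S + eps_1A) / 2   # on-site energy epsilon_{1,L} = epsilon_{1,R}
J_1 = (eps_1S - eps_1A) / 2    # intraband hopping between the wells

print(round(mu_1, 5))  # 4.83911, matching the value quoted in the text
print(round(J_1, 4))   # -0.0957
```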
Now we turn to the dynamical behavior of the $N$-boson
system. Initially (for $t<0$), the barrier strength $\kappa $ is set
to infinity and $N$ bosons are prepared in the ground state of the
left well. In this case, the split hard-wall trap actually reduces
to two separated hard-wall traps, so the eigenvalues and
eigenfunctions of a single particle in the left well are $(\pi
n/a)^{2}/2$ and $ \varphi _{n}(x)=-\sqrt{2/a}\sin (n\pi x/a)$, with
$n=1,2,3...$ respectively. Then at $t=0$, $\kappa$ is suddenly
changed to a finite value, for example $\kappa=50$ in the present
work, and the corresponding Hamiltonian is
$\widehat{H}_{f}=\widehat{H}(\kappa=50)$. After $\kappa $ is
changed, the time-dependent wave function is given by
\begin{equation}
|\Psi (t)\rangle =e^{-i\widehat{H}_{f}t}|\Psi (0)\rangle
=\sum_{n=1}^{\infty }C_{n}e^{-iE_{n}t}|\Phi _{n}\rangle ,
\label{Psit}
\end{equation}
in which the weight coefficients are $C_{n}=\langle \Phi _{n}|\Psi
(0)\rangle $ and satisfy the normalization condition
$\sum_{n=1}^{\infty }C_{n}^{2}=1$. Here $\Phi _{n}$ and $E_{n}$ are
the eigenstates and eigenvalues of $\widehat{H}_{f}$, respectively.
In order to see how the initial state trapped in the left well
evolves, we shall use the revival probability
\begin{eqnarray}
F(t) &=&|\langle \Psi (t)|\Psi (0)\rangle |^{2} \nonumber \\
&=&1-4\sum_{n<m}C_{n}^{2}C_{m}^{2}\sin ^{2}[(E_{n}-E_{m})t/2],
\label{Fidelity}
\end{eqnarray}
the reduced single-particle density matrix
\begin{equation}
\rho (x,x^{\prime },t)=\langle \Psi (t)|\widehat{\Psi }^{\dagger }(x)
\widehat{\Psi }(x^{\prime })|\Psi (t)\rangle , \label{rhot}
\end{equation}
and the pair correlation function
\begin{equation}
g^{(2)}(x_{1},x_{2},t)=\langle \Psi (t)|\widehat{\Psi }^{\dagger }(x_{1})
\widehat{\Psi }^{\dagger }(x_{2})\widehat{\Psi }(x_{1})\widehat{\Psi }
(x_{2})|\Psi (t)\rangle \label{g2}
\end{equation}
after time $t$ to describe the dynamics in the split hard-wall trap
system.
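As a sanity check, the expanded form of $F(t)$ in Eq.(\ref{Fidelity}) follows from the normalization $\sum_n C_n^2 = 1$; it can be verified numerically with arbitrary made-up real coefficients (the values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.uniform(0, 10, size=5)   # hypothetical eigenenergies E_n
C = rng.uniform(size=5)
C /= np.linalg.norm(C)           # real weights with sum C_n^2 = 1

t = 1.7
# direct form: F(t) = |sum_n C_n^2 exp(-i E_n t)|^2
F_direct = abs(np.sum(C**2 * np.exp(-1j * E * t)))**2

# expanded form: F(t) = 1 - 4 sum_{n<m} C_n^2 C_m^2 sin^2((E_n - E_m) t / 2)
F_expanded = 1.0
for n in range(5):
    for m in range(n + 1, 5):
        F_expanded -= 4 * C[n]**2 * C[m]**2 * np.sin((E[n] - E[m]) * t / 2)**2

assert np.isclose(F_direct, F_expanded)
```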
\begin{figure}[tbp]
\includegraphics[height=8cm,width=8cm]{f1}
\caption{(a) Single-particle energy levels for the split hard wall
trap with $\kappa=50$. (b) The eigenenergies of two interacting bosons
as functions of the interaction strength $c$ in the split hard wall
trap with $\kappa=50$.} \label{f1}
\end{figure}
\section{results and discussions}
Before studying the quantum dynamics of many-body systems, we first
recall the tunneling dynamics of a single atom. If there is only one
boson in this split hard wall trap, the initial state is just the
ground state of left well, that is $\Psi (0)=-\sqrt{2}\sin (\pi x)$
with energy $\pi ^{2}/2$. After the barrier strength $\kappa $
switches on to a finite but large value, the weight coefficients
of the ground and the
first excited states of the final Hamiltonian $H_{f}$ are
$C_{1}\approx \sqrt{2}/2$ and $C_{2}\approx \sqrt{2}/2$. At time $t$, the wavefunction
reads
\begin{eqnarray}
|\Psi (t)\rangle &\approx& \frac{\sqrt{2}}{2} e^{-i\epsilon_{1,S}
t}|\varphi_{1,S}\rangle + \frac{\sqrt{2}}{2} e^{-i\epsilon_{1,A}
t}|\varphi_{1,A}\rangle \nonumber \\
&=& e^{-i\mu_1
t} [ \cos(J_1 t) |W_{L}^1\rangle + i \sin(J_1 t) |W_{R}^1\rangle ],
\label{single-psi}
\end{eqnarray}
where $\mu_1=(\epsilon_{1,S} +\epsilon_{1,A})/2$. It is obvious that
the boson stays in the left well with probability $\cos^2(J_1
t)$ and in the right well with probability $\sin^2(J_1
t)$. Consequently, the boson oscillates back and forth between the two
wells with period $\tau =2\pi /(\epsilon_{1,A}-\epsilon _{1,S})=-
\pi/J_1$, which is influenced by the barrier strength $\kappa $
through controlling the energy gap between ground state and the
first excited state. Correspondingly, the fidelity $F(t) \approx
\cos^2(J_1 t)$ oscillates periodically between $1$ and $0$.
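Eq.(\ref{single-psi}) amounts to two-level dynamics in the Wannier basis $\{|W_L^1\rangle, |W_R^1\rangle\}$, where the Hamiltonian is $\mu_1\mathbb{1}+J_1\sigma_x$; a quick numerical check (parameter values for $\kappa=50$ taken from the text):

```python
import numpy as np
from scipy.linalg import expm

mu_1, J_1 = 4.83911, -0.0957   # on-site energy and hopping for kappa = 50

# single-particle Hamiltonian in the {|W_L>, |W_R>} basis
H = np.array([[mu_1, J_1],
              [J_1, mu_1]])

t = 3.0
psi0 = np.array([1.0, 0.0])          # boson starts in the left well
psi_t = expm(-1j * H * t) @ psi0

P_left = abs(psi_t[0])**2
assert np.isclose(P_left, np.cos(J_1 * t)**2)   # cos^2(J_1 t), as in the text
```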
For a many-body system, no analytical expression like
Eq.(\ref{single-psi}) is available. Nevertheless, when the atom
number is small, we can resort to the full exact diagonalization
method to calculate the energy spectrum and eigenstates via directly
diagonalizing the Hamiltonian (\ref{Hamiltonian2}). Consequently the
time-dependent wavefunction, revival probability, single-particle
density matrix and correlation function are then straightforwardly
calculated via Eqs.(\ref{Psit})-(\ref{g2}). For a continuum system,
we need to truncate the set of single-particle basis functions to the
lowest $L$ orbitals (modes), and the basis dimension of an
$N$-particle system with $L$ orbitals (modes) is given by $D=\left(
N+L-1\right) !/[N!\left( L-1\right) !]$. In general, one needs $L
\gg N$ and it is a formidable task to get the full spectrum of the
many-particle system as the particle number $N$ becomes large.
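The dimension formula above is the bosonic stars-and-bars count $D=\binom{N+L-1}{N}$, which makes the growth easy to tabulate:

```python
from math import comb

def dim(N, L):
    # number of ways to distribute N bosons over L orbitals (modes)
    return comb(N + L - 1, N)

print(dim(2, 10))   # 55   : two bosons, ten modes
print(dim(4, 18))   # 5985 : four bosons, eighteen modes
print(dim(10, 50))  # grows combinatorially with N and L
```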
Therefore, despite the fact that the exact diagonalization method
can deal with interacting boson systems in a
numerically exact way for all relevant interaction strengths, it
is restricted to systems with small particle numbers. When the interaction
strength is weak, the two-site Bose-Hubbard Hamiltonian under
single-band (two-mode) approximation is widely taken to be the model
system for the study of the dynamics of the double-well system. One
of the advantages of the two-site Bose-Hubbard Hamiltonian
(\ref{Hamiltonian3}) is that every term in the Hamiltonian has a
straightforward physical meaning which can help us to understand the
physical consequence of different terms. Furthermore, in the scheme
of the two-site Bose-Hubbard model, the system is much more
tractable both analytically and numerically and a large system can
be studied. In the following, we shall first present exact numerical
results by exact diagonalization and then results based on the
two-site Bose-Hubbard model under two-mode (single-band) and
multi-mode (multi-band) approximations.
\subsection{Exact result by exact diagonalization}
We first consider the two-boson case. The two bosons are initially
prepared in the ground state of the left well, and this initial state
can be obtained by the exact diagonalization method: through
diagonalizing the second-quantized initial Hamiltonian
$H_{in}$ in the Hilbert space spanned by the single-particle
eigenstates, we get the initial state $\Psi (0)$. Similarly the
eigenenergies and eigenvectors of $H_{f}$ can also be obtained. As
an example, we plot the lowest five eigenenergies of two interacting
bosons in the split hard wall with $\kappa=50$ versus the
interaction strength $c$ in Fig. \ref{f1}(b).
\begin{figure}[tbp]
\includegraphics[height=10cm,width=\linewidth]{f2}
\caption{The revival probability $F(t)$ for various $c$.} \label{f2}
\end{figure}
\begin{figure}[tbp]
\includegraphics[height=12cm,width=\linewidth]{f3}
\caption{(Color online) Reduced single-particle density matrix $\protect\rho
(x,x^{\prime },t)$ of two interacting bosons for (a1-a3) $c=0,t=0,5,15$, (b1-b3)
$c=5,t=0,5,15$, and (c1-c6) $c=\infty ,t=0,5,16,21,22.5,32$. Each
plot spans the range $-1<x,x^{\prime }<1$. } \label{f4}
\end{figure}
When the interaction is absent, bosons just oscillate back and forth
between the two wells and return to their initial state after a Rabi
period $\tau $, the same as for a single boson. For interacting
bosons, there will be many differences. The revival probability
$F(t)$ as a function of time $t$ is shown in Fig.\ref{f2} for
various $c$. When interaction strength $c$ is very small, the
revival probability $F(t)$ still displays the oscillating feature
and the system returns to their initial state with probability close
to $1$ after a longer period. The revival time becomes longer with
the increase of the interaction strength $c$. When $c$ reaches a
certain value ($c\sim 5$), $F(t)$ stays close to $1$ with tiny
oscillations over a very large time scale, which is known as the
self-trapping phenomenon. In this regime, the tunneling to the right
well is dynamically suppressed and two bosons stay in the left well
stably. As the interaction strength $c$ increases further to the
stronger regime, $F(t)$ begins to decrease more quickly and two
bosons can tunnel to the right well again. In the limit of $c
\rightarrow \infty$, $F(t)$ approaches zero quickly and then
oscillates between $0$ and $0.521$, finally it approaches $1$ after
almost a Rabi period. To see clearly how the atoms tunnel between
the left and right wells, we display the corresponding
time-dependent reduced single particle density matrix $\rho
(x,x^{\prime },t)$ in Fig.\ref{f4} for several typical $c$. The
diagonal part of $\rho (x,x^{\prime },t)$ along $x=x^{\prime }$
is just the single-particle density distribution. In Fig.\ref{f5},
the pair correlation function $g^{(2)}(x_{1},x_{2},t)$ is also
displayed. The pair correlation function $g^{(2)}(x_{1},x_{2},t)$
gives the probability of finding one particle at point $x_{1}$ and
another particle at point $x_{2}$ in one measurement. As shown in
$(a1)$, $(a2)$ and $(a3)$ of Fig.\ref{f4} and Fig.\ref{f5}, the
non-interacting bosons can tunnel from the left well to the right well, and
then go back to the left well, which forms a period of Rabi oscillation.
However, as shown in $(b1)$-$(b3)$ of Fig.\ref{f4} and Fig.\ref{f5},
the single particle density distribution and the pair correlation
function $g^{(2)}(x,x^{\prime },t)$ have no obvious change within a
Rabi period, and no oscillation between the left and right traps is
observed in this self trapping regime with $c=5$.
While in the fermionization limit, the oscillation phenomenon appears again in
$(c1)$-$(c6)$. Both bosons tunnel to the right well at $t=21$ (see
$(c4)$) and go back to the left well by $t=32$ (see $(c6)$). Our
results are consistent with results in \cite{Zollner} based on the
multi-configuration time-dependent Hartree method.
\begin{figure}[tbp]
\includegraphics[height=12cm,width=\linewidth]{f4}
\caption{(Color online) The pair correlation function $g^{(2)}(x_{1},x_{2},t)
$ for (a1-a3) $c=0,t=0,5,15$, (b1-b3) $c=5,t=0,5,15$, and (c1-c6) $c=\infty
,t=0,5,16,21,22.5,32$. Each plot spans the range $-1<x,x^{\prime }<1$. }
\label{f5}
\end{figure}
\begin{figure}[tbp]
\includegraphics[height=6cm,width=\linewidth]{f5}
\caption{The revival probability $F(t)$ of the three-particle system
as a function of time $t$ for different interaction strengths.} \label{f3}
\end{figure}
Next we consider the dynamics of systems with more bosons. The
dynamics of the $N=3$ system is similar to the $N=2$ case, except that
the system enters the self-trapping regime earlier than for $N=2$. As shown
in Fig. \ref{f3}, when $c=0.5$, the system already displays the
feature of self trapping, with the fidelity $F(t) \sim 1$ up to tiny
oscillations over a very large time scale. Similarly, in the TG
limit, bosons can tunnel to the right well more easily and return to
the left well approximately after a Rabi period. The tunneling
dynamics of hard-core bosons is very similar to that of the
corresponding free fermions \cite{Salasnich}.
In Fig.\ref{f6}, we plot the revival probability $F(t)$ for systems
with $N=1$ to $N=4$ in the TG limit. It is shown that there is an
obvious peak around the Rabi oscillation periods for various $N$,
which implies that the system can return to the left well with
probability close to $1$ after a Rabi period. One can understand
this from the Bose-Fermi mapping, i.e., bosons in the TG limit can
be mapped into a spinless free Fermi system \cite{Salasnich}. For
the case with $\kappa=50$, we can check that $J_i \approx i^2 J_1$
for $i = 1, 2, 3, 4$. The $N$ atoms initially occupy the $N$-lowest
single-particle levels of the left well, and roughly speaking, each
particle tunnels with the Rabi period $\tau_i = - \pi/J_i$. However,
when the particle number $N$ becomes large, the relations of $J_i
\approx i^2 J_1$ break down and even the dynamics in the TG limit
can be quite complex.
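The approximate scaling $J_i \approx i^2 J_1$ quoted above can be checked by solving the transcendental equation band by band; a sketch (the bracketing intervals are our choice):

```python
import numpy as np
from scipy.optimize import brentq

kappa, a = 50.0, 1.0

def J(i):
    # J_i = (eps_{i,S} - eps_{i,A})/2, with eps_{i,S} from the transcendental
    # equation p/kappa + tan(p*a) = 0 and eps_{i,A} = (i*pi/a)^2/2
    p = brentq(lambda p: p / kappa + np.tan(p * a),
               (2 * i - 1) * np.pi / 2 + 1e-6, i * np.pi - 1e-6)
    return (p**2 / 2 - (i * np.pi / a)**2 / 2) / 2

J1 = J(1)
ratios = [J(i) / (i**2 * J1) for i in (2, 3, 4)]
print([round(r, 3) for r in ratios])   # all close to 1, so J_i ~ i^2 J_1
```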
\begin{figure}[tbp]
\includegraphics[height=6cm,width=\linewidth]{f6}
\caption{The revival probability $F(t)$ of the $N$-boson system
as a function of time $t$ in the limit of infinite interaction.} \label{f6}
\end{figure}
\subsection{BHM approximation}
Now we consider the two-site BHM described by the Hamiltonian
(\ref{Hamiltonian3}). If the interaction strength is much smaller
than the level spacing between the first band and the second band
defined as $\Delta=\mu_2 - \mu_1$, one may expect that the system
can be approximately described by the single-band BHM. Under the
one-band approximation, the Bose-Hubbard model is described by
Eq.(\ref{1bandBHM}) with $i=1$. We note that the pair hopping term
$U_{LR}^{1}$ in Eq.(\ref{1bandBHM}) is generally very small in
comparison with the on-site interaction, for example, in the present
work we have $U_0^{1} \approx 4948 U_{LR}^1 $. Therefore in many
previous works, the pair hopping term is omitted and a simplified
single-band BHM given by
\begin{eqnarray}
\widehat{H} &=& \mu_{1} (n_{1,L}+ n_{1,R})+ \tilde{J_{1}}
(a_{1,L}^{\dagger}a_{1,R}+a_{1,R}^{\dagger}a_{1,L}) + \nonumber \\
& & U_{0}^{1}[n_{1,L}(n_{1,L}-1)+n_{1,R}(n_{1,R}-1)],\label{SBHM}
\end{eqnarray}
has been widely used \cite{JPA381235,Ziegler,BHM,Wangli}. Here
$\tilde{J_{1}}=J_{1}+2(n_{1,L}+n_{1,R}-1)J'_{1}$. Since $U_{LR}^1
\ll U_{0}^1$ in the whole interacting regime, the pair-hopping
term is not expected to significantly change the static properties.
However, when the term $U_{LR}^1$ is comparable with the hopping
amplitude $J_1$, it may give significant contribution to the
dynamics, which has been emphasized in Ref.~\cite{Liang}. In the
weakly interacting regime, the term $J'_{1}$ is also usually
neglected, as its correction to the hopping energy can be absorbed into
$J_{1}$. As we shall illustrate later, when the interaction strength
is very large, the contribution of $J'_{1}$ cannot be neglected
since $J'_{1} \propto c$.
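For $N=2$ the intraband Hamiltonian of Eq.(\ref{1bandBHM}) acts on only three Fock states, $|2,0\rangle, |1,1\rangle, |0,2\rangle$, so the dynamics reduces to a $3\times 3$ matrix; a minimal sketch (parameter values for $\kappa=50$ from the text, the value of $c$ is our illustrative choice, and the matrix elements follow from standard bosonic ladder-operator algebra):

```python
import numpy as np
from scipy.linalg import expm

c = 2.0                              # illustrative interaction strength
J1, Jp = -0.0957, -0.00366 * c
U0, ULR = 1.48446 * c, 0.0003 * c
Jt = J1 + 2 * Jp                     # effective hopping for n_L + n_R = 2

# H_1 in the Fock basis {|2,0>, |1,1>, |0,2>}, dropping the constant
# term 2*mu_1 (a global phase)
s2 = np.sqrt(2)
H = np.array([[2 * U0,  s2 * Jt, 2 * ULR],
              [s2 * Jt, 4 * ULR, s2 * Jt],
              [2 * ULR, s2 * Jt, 2 * U0]])

psi0 = np.array([1.0, 0.0, 0.0])     # both bosons start in the left well

Fs = []
for t in (0.0, 5.0, 15.0):
    psi_t = expm(-1j * H * t) @ psi0
    Fs.append(abs(psi_t.conj() @ psi0)**2)   # revival probability F(t)
print([round(F, 3) for F in Fs])
```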
\begin{figure}[tbp]
\includegraphics[height=10cm,width=8cm]{f7}
\caption{The revival probability $F(t)$ for the two-boson system
as a function of time $t$ under the one-band BHM approximation.} \label{f7}
\end{figure}
By using the single-band BHM approximation, we study the tunneling
dynamics of the two-site BHM described by $\widehat{H}_1$ given by
Eq.(\ref{1bandBHM}) with all particles prepared in one site
initially. The revival probability $F(t)$ as a function of time $t$
under the one-band BHM approximation is shown in Fig.\ref{f7} for the
two-boson system. As shown in Fig.\ref{f7}a, when the interaction
strength is not very large, the single-band BHM gives quantitatively
consistent description of the dynamics in comparison with the exact
results by exact diagonalization (see Fig.\ref{f2}a). With further
increase in the interaction, as shown in Fig.\ref{f7}b, although the
single-band BHM with the pair-hopping term can correctly describe the
enhancement of tunneling, it does not provide quantitatively
consistent results in comparison with results of exact
diagonalization in Fig.\ref{f2}b. In order to see clearly the effect
of the pair-tunneling term, we also study the dynamics governed by
the simplified BHM of Eq.(\ref{SBHM}) without the pair-tunneling
term. To see the effect of the term of $J'_1$, we consider both
cases for the Hamiltonian of Eq.(\ref{SBHM}) with or without the
term of $J'_1$. When the interaction strength is not so strong (for
example, $c < 5$), we find that the results are almost the same as
those presented in Fig.\ref{f7}a. That means the pair-tunneling
and $J'_1$ terms are not important when the interaction is weak.
However as shown in Fig.\ref{f8} (a) and (b), the dynamics in the
strongly interacting regime shows quite different behaviors if the
pair tunneling term is absent. Comparing Fig.\ref{f7}b and
Fig.\ref{f8}, we can conclude that the pair tunneling term of
$U_{LR}^{1}$ gives an important contribution to tunneling in the
large interaction regime. Comparing Fig.\ref{f8}a and Fig.\ref{f8}b,
we find that the term of $J'_1$ also plays an important role in
enhancing the tunneling.
The dynamics for the three-boson system is shown in Fig.\ref{f9}.
Comparing with the exact dynamical results in Fig.\ref{f3}, we find
that the one-band BHM approximation can describe the dynamics well
only when the interaction strength is small so that $U_{0}^{1} \ll
\Delta $. In contrast to the two-boson system, the pair-tunneling
term has a less significant effect on the enhancement of the
tunneling. For very large $c$, although the system can tunnel to the
right well, it does not give quantitatively consistent results in
comparison with results of the numerical exact diagonalization.
\begin{figure}[tbp]
\includegraphics[height=7cm,width=\linewidth]{f8}
\caption{The revival probability $F(t)$ for the two-boson system
as a function of time $t$ under the one-band BHM approximation with (a)
$U_{LR}^1=0$ and $J'_1 =0$. (b) $U_{LR}^1=0$. } \label{f8}
\end{figure}
\begin{figure}[tbp]
\includegraphics[height=6cm,width=\linewidth]{f9}
\caption{The revival probability $F(t)$ for the three-boson system
as a function of time $t$ under the one-band BHM approximation.} \label{f9}
\end{figure}
From the above results, we know that the one-band BHM approximation is
not enough to give a quantitative description of the dynamics of
interacting bosons in the large interaction regime. To get better
results, we need to keep more band levels and use the multi-band BHM
given by Eq.(\ref{Hamiltonian3}) in our calculation. In
Fig.\ref{f10}, we show the ground-state energy of two bosons versus
the interaction strength obtained by the exact diagonalization method and the
one-band, two-band, three-band and four-band BHM approximations,
respectively. It is shown that the one-band BHM approximation can
describe the ground-state energy very well when the interaction $c<1$, and
the two-band BHM approximation works well in the region $c<10$,
while the three-band and four-band BHM approximations can
describe the ground-state energy of two bosons well even for $c=100$. For
the dynamics problem, in order to get quantitatively consistent
results with the exact diagonalization results, we find that more
bands are needed in comparison with the static problem. In
Fig.\ref{f11}, we display the results of $F(t)$ for the two-particle
systems with various $c$ within the multi-band BHM approximation. As
shown in the figure, the result based on a five-band BHM
approximation for $c=10$ already quantitatively agrees with the
exact numerical result. For $c=50$, a ten-band BHM approximation is
required for a quantitatively consistent result. The result for
$c=300$ based on an eighteen-band BHM approximation is also given in
Fig.\ref{f11}. Comparing with Fig.\ref{f2}b and Fig.\ref{f7}b, we find that
there exists only a qualitative agreement with the exact
diagonalization result, although it is much better than the result of
the single-band BHM approximation.
\begin{figure}[tbp]
\includegraphics[height=6cm,width=\linewidth]{f10}
\caption{The ground-state energy for the two-boson system obtained by
exact diagonalization method and $i$-band BHM approximation with
$i=1,2,3,4$.} \label{f10}
\end{figure}
\begin{figure}[tbp]
\includegraphics[height=6cm,width=\linewidth]{f11}
\caption{The revival probability $F(t)$ as a function of time $t$ under
multi-band BHM approximations, including a five-band BHM
approximation for $c=10$, a seven-band BHM approximation for $c=20$,
a ten-band BHM approximation for $c=50$ and an eighteen-band BHM
approximation for $c=300$. } \label{f11}
\end{figure}
\section{Summary}
In summary, we have studied the dynamical properties of a few bosons
confined in a one-dimensional split hard wall trap by both the
exact diagonalization method and the approximate method based on the
two-site Bose-Hubbard model. The system is initially prepared in the
left well of the trap by setting the barrier strength of the split
hard wall trap to infinity, and then it is suddenly changed to a
finite value. With the increase in the interaction strength of
bosons, the system displays the Josephson-like oscillations, self
trapping and correlated tunneling in turn. Comparing results
obtained by two different methods, we conclude that the one-band BHM
approximation can quantitatively describe the dynamics in the weakly
interacting regime, but the multi-band BHM approximation is needed
if we want to describe the dynamics of bosons with large interaction
quantitatively. We also verify that correlated tunneling
gives an important contribution to the tunneling dynamics
in the large interaction regime.
\section{Introduction: Motivation and Background}
In many scientific studies, often the main problem of interest is to compare
different population groups. In medical studies, for example, the primary
research problem could be to test for the difference between the location
parameters of two different populations receiving two different drugs, treatments or
therapies, or having two different preconditions. The normal
distribution often provides the basic setup for statistical analyses in
medical studies (as well as in other disciplines). Inference
procedures based on the sample mean, the standard deviation and the one and
two-sample $t$-tests are often the default techniques for the scenarios where
they are applicable. In particular, the two sample $t$-test is the most
popular technique in testing for the equality of two means, performed under
the assumption of equality of variances. Its applicability in real life
situations is, however, tempered by the
known lack of robustness of this
test against model perturbations. Even a small deviation from the ideal
conditions can make the test completely meaningless and lead to nonsensical
results. This problem is caused by the fact that the $t$-test is based on
the classical estimates of the location and scale parameters (the sample
mean and the sample standard deviation). Large outliers tend to distort the
mean and inflate the standard deviation. This may lead to false results of
both types, i.e., detecting a difference when there is none, and failing to
detect a true difference.
In this paper we are going to develop a class of robust tests for the two
sample problem which evolves from an appropriate minimum distance technique
in a natural way. This class of tests is indexed by two real parameters
$\beta $ and $\gamma $, and we will constrain each of these parameters to lie
within the $[0,
s; \\
\mbox{}[a_{i},a_{j}]=1\mbox{\ otherwise for\ }1\leq i<j\leq t; \\
\mbox{} [a_{1},b]=\cdots =[a_{2s},b]=[a_{2s+2},b]=\cdots =[a_{t},b]=1; \\
\mbox{} [a_{2s+1},b]=b^{p^{n-1}}.
\end{array}$$
Notice that for a fixed $n\geq 2$ and $t\geq 1$ there are $\lfloor (t+1)/2\rfloor$ such groups. Notice also that when $n\geq 3$ then the group is powerfully nilpotent as $\langle b^{p}\rangle\leq Z(G)$ and $[G,G]\leq \langle b^{p^{2}}\rangle$.
For $n=2$ this is not the case but the group is still powerfully solvable as we have a powerfully abelian chain $G>\langle b\rangle>1$ with $[G,G]\leq \langle b^{p}\rangle$. We are now ready for groups of order $p^{4}$. In the following we will omit writing relations of the form $[x,y]=1$. \\ \\
{\bf Groups of order $p^{4}$}. From our analysis of non-abelian groups of rank $2$ we get two such groups:
$$G_{2}=\langle a,b:\, a^{p^{2}}=b^{p^{2}}=1,\, [a,b]=a^{p}\rangle\mbox{\ \ and\ \ }G_{3}=\langle a,b:\, a^{p^{3}}=b^{p}=1,\ [a,b]=a^{p^{2}}\rangle.$$
Here $G_{3}$ is furthermore powerfully nilpotent. The only non-abelian groups apart from these are of type $(1,1,2)$ and from the analysis of such groups above we know there are two groups:
$$G_{4}=A(2,2,1)=\langle a,b,c:\, a^{p}=b^{p}=c^{p^{2}}=1,\, [a,b]=c^{p}\rangle,$$
and
$$G_{5}=B(2,2,0)=\langle a,b,c:\,a^{p}=b^{p}=c^{p^{2}}=1,\,[a,c]=c^{p}\rangle .$$
%
Apart from these there are 5 abelian groups and we thus get in total $\boldsymbol{9}$ groups. \\ \\
{\bf Groups of order $p^{5}$}. Again our analysis of groups of rank $2$ and those of type $(1,1,3)$ and $(1,1,1,2)$ gives
us the following non-abelian powerfully solvable groups:
$$\begin{array}{l}G_{6}=\langle a,b:\,a^{p^{2}}=b^{p^{3}}=1,\, [a,b]=a^{p}\rangle,\
G_{7}=\langle a,b:\,a^{p^{3}}=b^{p^{2}}=1,\, [a,b]=a^{p}\rangle, \\
G_{8}=\langle a,b:\,a^{p^{3}}=b^{p^{2}}=1,\,[a,b]=a^{p^{2}}\rangle,\ G_{9}=
\langle a,b:\,a^{p^{4}}=b^{p}=1,\ [a,b]=a^{p^{3}}\rangle,\end{array}$$
and
$$\begin{array}{l}
G_{10}=A(3,2,1)=\langle a,b,c:\,a^{p}=b^{p}=c^{p^{3}}=1,\,[a,b]=c^{p^{2}}\rangle; \\
G_{11}=B(3,2,0)=\langle a,b,c:\,a^{p}=b^{p}=c^{p^{3}}=1,\,[a,c]=c^{p^{2}}\rangle; \\
G_{12}=A(2,3,1)=\langle a,b,c,d:\,a^{p}=b^{p}=c^{p}=d^{p^{2}}=1,\,[a,b]=d^{p}\rangle; \\
G_{13}=B(2,3,0)=\langle a,b,c,d:\,a^{p}=b^{p}=c^{p}=d^{p^{2}}=1,\, [c,d]=d^{p}\rangle; \\
G_{14}=B(2,3,1)=\langle a,b,c,d:\,a^{p}=b^{p}=c^{p}=d^{p^{2}}=1,\,[a,b]=d^{p}, [c,d]=d^{p}\rangle.
\end{array}$$
Here $G_{8},G_{9},G_{10},G_{11},G_{12}$ are furthermore powerfully nilpotent. Apart from these $9$ groups, there are $7$ abelian groups.
We are now only left with the non-abelian groups of type $(1,2,2)$ that will contain a number of different groups and we need to deal with a number of subcases. \\ \\
Suppose that we have generators $a,b,c$ of orders $p,p^{2},p^{2}$. \\ \\
{\it Case 1}. ($Z(G)^{p}\not =1$). Notice that we then must have $|Z(G)^{p}|=p$ as otherwise $G/Z(G)$ is cyclic and thus $G$ abelian.
We can assume that $c\in Z(G)$ and that $Z(G)^{p}=\langle c^{p}\rangle$. Notice also that $[G,G]=\langle [a,b]\rangle$ is cyclic. There are two possibilities. On the one hand, if $[G,G]\leq Z(G)^{p}$, then we can choose our generators so that we get
a group with the following presentation:
$$G_{15}=\langle a,b,c:\, a^{p}=b^{p^{2}}=c^{p^{2}}=1,\, [a,b]=c^{p}\rangle.$$
On the other hand if $[G,G]\not\leq Z(G)^{p}$, it is not difficult to see that we can pick our generators so that we get a group with the presentation
$$G_{16}=\langle a,b,c:\, a^{p}=b^{p^{2}}=c^{p^{2}}=1,\, [a,b]=b^{p}\rangle.$$
Notice that both these groups are powerfully solvable and that $G_{15}$ is furthermore powerfully nilpotent. \\ \\
{\it Case 2}. ($Z(G)^{p}=1$ and $G/Z(G)$ has rank $2$). Then we must have $a\in Z(G)$. It is not difficult to see that
in this case we can choose $b,c$ such that $[b,c]=c^{p}$ and we get the powerfully solvable group
$$G_{17}=\langle a,b,c:\,a^{p}=b^{p^{2}}=c^{p^{2}}=1,\, [b,c]=c^{p}\rangle.$$
Before considering further cases, we first show that if $Z(G)^{p}=1$ and $G/Z(G)$ has rank $3$, then we must have
$[G,G]=G^{p}$.
Note that $|G^p|=p^2$, so suppose, by contradiction, that $|G'|=p$. Observe that $G^p\le Z(G)$, so $G/Z(G)$ is a vector space over $\mathbb{F}_p$. Then the commutator map in $G$ induces a non-degenerate alternating form on $G/Z(G)$, and
so $\dim_{\mathbb{F}_p}(G/Z(G))$ is even. This is a contradiction since $G/Z(G)$ has rank $3$.
We have thus shown that $[G,G]=G^{p}$. In order to distinguish further between different cases, we next turn our attention to $[\Omega_{1}(G),G]$. Notice that $\Omega_{1}(G)=\langle a\rangle G^{p}$.
As $a\not\in Z(G)$, the subgroup $[\Omega_{1}(G),G]$ has order $p$ or $p^{2}$. \\ \\
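The parity step above (a non-degenerate alternating form forces even dimension) amounts to the fact that an anti-symmetric matrix of odd size over ${\mathbb F}_p$, $p$ odd, is always singular, since $\det(A)=\det(A^{t})=\det(-A)=(-1)^{n}\det(A)$. A quick numerical illustration (a sketch with a sample prime; not part of the proof):

```python
import random

def det3_mod(A, p):
    """Determinant of a 3x3 matrix modulo p, by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % p

p = 7  # any odd prime works here
random.seed(0)
for _ in range(100):
    # random anti-symmetric (= alternating, as p is odd) 3x3 matrix over F_p
    x, y, z = (random.randrange(p) for _ in range(3))
    A = [[0, x, y], [(-x) % p, 0, z], [(-y) % p, (-z) % p, 0]]
    assert det3_mod(A, p) == 0  # alternating forms in odd dimension are degenerate
```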
{\it Case 3}. ($Z(G)^{p}=1$, $G/Z(G)$ of rank $3$ and $|[\Omega_{1}(G),G]|=p$). Without loss of generality we can assume that $[\Omega_{1}(G),G]=\langle c^{p}\rangle$. There are two possibilities. Either $c\in C_{G}\left(\Omega_1\left(G\right)\right)=C_{G}(a)$ or not.
Suppose first that
$c\in C_G\left(\Omega_{1}(G)\right)$. Then we have $[a,c]=1$, and we can pick $b$ such that $[a,b]=c^{p}$.
Replacing $b$ by $bc^{l}$ does not change these relations and thus we can assume that $[b,c]=b^{p\alpha}$ for some $0<\alpha<p$. If we let $\beta$ be the inverse of $\alpha$ modulo $p$ and we replace $a,c$ by $a^{\beta},c^{\beta}$, then we arrive at a group with presentation
$$G_{18}=\langle a,b,c:\,a^{p}=b^{p^2}=c^{p^{2}}=1,\, [a,b]=c^{p},\,[b,c]=b^{p}\rangle.$$
Notice that this is a powerfully solvable group with a powerfully abelian chain $G>\langle b,c\rangle>\langle b\rangle>1$.
Suppose now $c\not\in C_G\left(\Omega_1(G)\right)$. Since $|[\Omega_1(G),G]|=|[a,G]|=p$, it follows that the conjugacy class of $a$ has order $p$, and so $|G:C_G(a)|=p$. Thus, we can pick $b$ such that $b\in C_G(a)$ and $[a,b]=1$. Replacing $a$ by a suitable power of $a$ we can suppose that $[a,c]=c^p$. As before, replacing $b$ by $bc^{l}$ does not change these relations, so we can also assume $[b,c]=b^{\alpha p}$ for some $0<\alpha<p$. Finally, if we let $\beta$ be the inverse of $\alpha$ modulo $p$ and we replace $c$ by $c^{\beta}$, we arrive at a group with presentation
$$
G_{19}=\langle a,b,c:\,a^{p}=b^{p^2}=c^{p^{2}}=1,\,[a,c]=c^p,\,[b,c]=b^{p}\rangle.
$$
This group is powerfully solvable with powerfully abelian chain $G>\langle b,c\rangle >\langle b\rangle >1$.
\\ \\
{\it Case 4}. ($Z(G)^{p}=1$, $G/Z(G)$ of rank $3$ and $|[\Omega_{1}(G),G]|=p^2$). In this case, commutation with $a$ induces a bijective linear map
\begin{eqnarray*}
F_{a}:G/\Omega_{1}(G) &\longrightarrow & G^{p} \\
x\Omega_{1}(G) &\longmapsto & [a,x].
\end{eqnarray*}
Identifying $x\Omega_{1}(G)$ with $x^{p}$, we can think of $F_{a}$ as a linear operator on a two dimensional vector space over $\mathbb{F}_p$. Also replacing $b,c$ by a suitable $ba^{r},ca^{s}$ we can assume throughout that $[b,c]=1$. All the groups are going to be powerfully solvable with powerfully abelian chain $G>\langle b,c\rangle >1$.
\\ \\
{\it Case 4.1}. ($F_{a}$ is a scalar multiplication). Notice that this property still holds if we replace $a$ by any power of $a$ and thus it is independent of what $a$ we pick in $\Omega_{1}(G)\setminus G^{p}$. This is thus a characteristic property of $G$. Replacing $a$ with
a power of itself we can assume that $F_{a}$ is the identity map. This gives us the group
$$G_{20}=\langle a,b,c:\, a^{p}=b^{p^{2}}=c^{p^{2}}=1,\, [a,b]=b^{p},\,[a,c]=c^{p}\rangle.$$
{\it Case 4.2}. ($F_{a}$ is not a scalar multiplication). Again we see that this is a characteristic property of $G$. We can now pick $b$ and $c$ such that
$$[a,b]=c^{p},\ [a,c]=b ^{p\alpha}c^{p\beta}.$$
Notice that the matrix for $F_{a}$ is
$$\left[\begin{array}{cc}
0 & \alpha \\
\mbox{} 1 & \beta
\end{array}\right]$$
with determinant $-\alpha$. This is an invariant for the given $a$ that does not depend on our choice of $b$ and $c$. If we replace $a$ by $a^{r}$ and $c$ by $c^r$ then we get
$$[a,b]=c^p,\ [a,c]=b^{p\alpha r^2}c^{p\beta r},$$
and the new determinant becomes $-\alpha r^{2}$. Pick some fixed $\tau$ such that $-\tau$ is a non-square in $\mathbb{F}_p$.
With appropriate choice of $r$ we can then assume that the determinant of $F_{a}$ is $-\alpha$ where either $\alpha=-1$
or $\alpha=\tau$. We thus have a group with one of the two presentations
$$G_{21}(\beta)=\langle a,b,c:\,a^{p}=b^{p^{2}}=c^{p^{2}}=1,\ [a,b]=c^{p},\ [a,c]=b^{-p}c^{p\beta},\,[b,c]=1\rangle,$$
and
$$G_{22}(\beta)=\langle a,b,c:\,a^{p}=b^{p^{2}}=c^{p^{2}}=1,\ [a,b]=c^{p},\ [a,c]=b^{p\tau}c^{p\beta},\,[b,c]=1\rangle.$$
Suppose we pick a different $\bar{b}=b^{r}c^{s}$. Then for $\alpha\in \{-1,\tau\}$ we have
$$[a,\bar{b}]=[a,b]^{r}[a,c]^{s}=c^{pr}(b^{p{\alpha}}c^{p\beta})^{s}=b^{ps\alpha}c^{p(r+s\beta)}=\bar{c}^{p}$$
where $\bar{c}=b^{s\alpha}c^{r+s\beta}$. Then
\begin{eqnarray*}
[a,\bar{c}] & = & [a,b]^{s\alpha}[a,c]^{r+s\beta} \\
& = & c^{ps\alpha}(b^{p \alpha}c^{p\beta})^{r+s\beta} \\
& = & (b^{r}c^{s})^{p\alpha}\cdot (b^{s\alpha}c^{r+s\beta})^{p\beta} \\
& = & \bar{b}^{p\alpha}\bar{c}^{p\beta}.
\end{eqnarray*}
This shows that for the given $\alpha \in \{-1,\tau\}$, the constant $\beta\in \mathbb{F}_p$ is an invariant and we get $p$ distinct groups $G_{21}(\beta)$ and $p$ distinct groups $G_{22}(\beta)$. \\ \\
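The computation above is pure exponent arithmetic in $G^{p}\cong {\mathbb F}_p^{2}$ (with basis $b^{p},c^{p}$), where commutation with $a$ acts as the linear map $F_{a}$. A numerical check of the change-of-basis computation over sample parameters (a sketch; the prime and the random sampling are illustrative, not from the text):

```python
import random

random.seed(1)
p = 11  # sample odd prime

def F_a(v, alpha, beta, p):
    """Commutation with a on exponent vectors: the element b^u c^w (mod Omega_1)
    maps to [a, b^u c^w] = b^{p*alpha*w} c^{p*(u + beta*w)}."""
    u, w = v
    return ((alpha * w) % p, (u + beta * w) % p)

for _ in range(200):
    alpha, beta = random.randrange(1, p), random.randrange(p)
    r, s = random.randrange(p), random.randrange(p)
    if (r, s) == (0, 0):
        continue
    bar_b = (r, s)                                  # \bar b = b^r c^s
    bar_c = ((s * alpha) % p, (r + s * beta) % p)   # \bar c = b^{s alpha} c^{r + s beta}
    # [a, \bar b] = \bar c^p:
    assert F_a(bar_b, alpha, beta, p) == bar_c
    # [a, \bar c] = \bar b^{p alpha} \bar c^{p beta}:
    expected = tuple((alpha * x + beta * y) % p for x, y in zip(bar_b, bar_c))
    assert F_a(bar_c, alpha, beta, p) == expected
```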
Adding up we have $7$ abelian groups and the groups $G_{6},\ldots , G_{20}, G_{21}(\beta), G_{22}(\beta)$, giving us in total
$\boldsymbol{22+2p}$ groups of order $p^{5}$. \\ \\
Notice that we have seen that all powerful groups of order up to and including $p^{5}$ are powerfully solvable. Now take a powerful group of order $p^{6}$. Suppose it has a generator $a$ of order $p$, say $G=\langle a,H\rangle$
where $H<G$. Notice that $H$ is then powerful of order $p^{5}$ and thus powerfully solvable. As $[G,G]\leq H^{p}$ we then
see that $G$ is powerfully solvable. Thus all powerful groups of order $p^{6}$ are powerfully solvable with the possible exceptions of some groups of type $(2,2,2)$. We will see later that there are a number of groups of type $(2,2,2)$ that are not powerfully solvable.
\mbox{}
\section{Growth}
Let $G$ be a powerfully solvable group of order $p^{n}$. From Theorem \ref{theorem presentation} and the discussion in Section 3, we know that we may assume that $G=\langle a_{1},\ldots ,a_{y},
a_{y+1},\ldots ,a_{y+x}\rangle$ where $o(a_{1})=\cdots =o(a_{y})=p$ and $o(a_{y+1})=\cdots =o(a_{y+x})=p^{2}$. Furthermore the generators can be chosen such that $|G|=p^{y+2x}$ and
$$[a_{j},a_{i}]=a_{i+1}^{p\alpha_{i+1}(i,j)}\cdots a_{y+x}^{p\alpha_{y+x}(i,j)},$$
for $1\leq i<j\leq y+x$, where $0\leq \alpha_{k}(i,j)\leq p-1$ for $k=i+1,\ldots ,y+x$. For each such pair $(i,j)$ where $1\leq i\leq y$ there are $p^{x}$ possible relations for $[a_{j},a_{i}]$. There are $yx+{y\choose 2}$ such pairs. On the other hand, for a pair $(i,j)$
where $y+1\leq i\leq y+x$, for each given $i$ there are $y+x-i$ such pairs and $p^{y+x-i}$ possible relations $[a_{j},a_{i}]$. Adding up we see that the number of solvable presentations is $p^{h(x)}$ where
\begin{eqnarray*}
h(x) & = & \left(yx+{y\choose 2}\right)x+1^{2}+2^{2}+\cdots +(x-1)^{2} \\
& = & \left(
(n-2x)x+{n-2x\choose 2}\right)x+\frac{x(2x-1)(x-1)}{6} \\
& = & \frac{1}{3}x^{3}-\frac{(2n-1)}{2}x^{2}+\frac{3n(n-1)+1}{6}x.
\end{eqnarray*}
Thus
$$
h'(x)=x^{2}-(2n-1)x+\frac{3n(n-1)+1}{6},
$$
whose roots are $\frac{2n-1}{2}-\sqrt{\frac{1}{2}n^{2}-\frac{n}{2}+\frac{1}{12}}$ and $\frac{2n-1}{2}+\sqrt{\frac{1}{2}n^{2}-\frac{n}{2}+\frac{1}{12}}$. For large values of $n$ we have that the first root is between $0$ and $n/2$ whereas the latter is greater than $n$.
Thus, for large $ n$, the largest value of $h$ in the interval between $0$ and $n/2$ is $h(x(n))$ where $x(n)=\frac{2n-1}{2}-\sqrt{\frac{1}{2}n^{2}-\frac{n}{2}+\frac{1}{12}}$. Now $\lim_{n\rightarrow \infty}x(n)/n=1-\frac{1}{\sqrt{2}}$. Therefore
$$\lim_{n\rightarrow \infty}\frac{h(x(n))}{n^{3}}=\lim_{n\rightarrow \infty}\left(\frac{1}{3}(x(n)/n)^{3}-(x(n)/n)^{2}+\frac{1}{2}(x(n)/n)\right)=\frac{-1+\sqrt{2}}{6}.$$
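The closed form for $h$ and the limiting exponent can be verified numerically with exact rational arithmetic (a sketch using only the standard library; the choice $n=10^{6}$ below is just a large sample value):

```python
from fractions import Fraction
from math import comb, isclose, sqrt

def h_count(n, x):
    """Exponent h(x) obtained by summing the relation counts pair by pair."""
    y = n - 2 * x
    return (y * x + comb(y, 2)) * x + sum(m * m for m in range(1, x))

def h_closed(n, x):
    """Closed form h(x) = x^3/3 - (2n-1)/2 * x^2 + (3n(n-1)+1)/6 * x."""
    n, x = Fraction(n), Fraction(x)
    return x**3 / 3 - Fraction(2 * n - 1, 2) * x**2 + Fraction(3 * n * (n - 1) + 1, 6) * x

# the direct count and the closed form agree exactly
for n in range(4, 40):
    for x in range(n // 2 + 1):
        assert h_count(n, x) == h_closed(n, x)

# maximiser x(n)/n -> 1 - 1/sqrt(2) and limit h(x(n))/n^3 -> (sqrt(2)-1)/6
n = 10**6
t = ((2 * n - 1) / 2 - sqrt(n * n / 2 - n / 2 + 1 / 12)) / n  # x(n)/n
assert isclose(t, 1 - 1 / sqrt(2), rel_tol=1e-3)
assert isclose(t**3 / 3 - t**2 + t / 2, (sqrt(2) - 1) / 6, rel_tol=1e-4)
```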
We now argue in a similar way as in [5]. Let $n$ be fixed. For any integer $x$ where $0\leq x\leq n/2$, let ${\mathcal P}(n,x)$ be the collection of all powerfully solvable presentations as above. It is not difficult to see that those presentations are consistent and thus the resulting group is of order $p^{n}$ and rank $n-x$. Furthermore $a_{1}^{p}=\cdots =a_{n-2x}^{p}=1$ and $a_{n-2x+1}^{p^{2}}=
\cdots =a_{n-x}^{p^{2}}=1$. We have just seen that, for large values of $n$, if we pick $x(n)$ such that the number of presentations is maximal then
$$|{\mathcal P}(n,x(n))|=p^{\alpha n^{3}+o(n^{3})}$$
where $\alpha=\frac{-1+\sqrt{2}}{6}$. Let ${\mathcal P}_{n}$ be the set of all powerfully solvable presentations as above with
$0\leq x\leq n/2$. Then ${\mathcal P}_{n}={\mathcal P}(n,0)\cup {\mathcal P}(n,1)\cup \cdots \cup {\mathcal P}(n,\lfloor n/2\rfloor)$ and thus
$$p^{\alpha n^{3}+o(n^{3})}=|{\mathcal P}(n,x(n))|\leq |{\mathcal P}_{n}|=|{\mathcal P}(n,0)|+\cdots +|{\mathcal P}(n,\lfloor n/2\rfloor)|\leq
n|{\mathcal P}(n,x(n))|=p^{\alpha n^{3}+o(n^{3})}.$$
This shows that $|{\mathcal P}_{n}|=p^{\alpha n^{3}+o(n^{3})}$. Let us show that this is also the growth of powerfully
solvable groups of exponent $p^{2}$ with respect to the order $p^{n}$. Clearly $p^{\alpha n^{3}+o(n^{3})}$ gives us an upper bound. We want to show that this is also a lower bound. Let $x=x(n)$ be as above and let $a_{1},\ldots ,a_{n-x}$ be a set of generators
for a powerfully solvable group $G$ where $a_{1}^{p}=\cdots =a_{n-2x}^{p}=1$ and $a_{n-2x+1}^{p^{2}}=
\cdots =a_{n-x}^{p^{2}}=1$. Notice that $\langle a_{1},\ldots ,a_{n-2x}\rangle G^{p}=\Omega_1(G)$, which is a characteristic subgroup of $G$. It will be useful to consider a larger class of presentations for powerfully solvable groups of order $p^{n}$ where we still require $a_{1}^{p}=\cdots =a_{n-2x}^{p}=1$ and $a_{n-2x+1}^{p^{2}}=\cdots =a_{n-x}^{p^{2}}=1$.
We let ${\mathcal Q}(n,x)={\mathcal Q}(n,x(n))$ be the collection of all presentations with additional commutator relations
$$[a_{i},a_{j}]=a_{1}^{p\alpha_{1}(i,j)}\cdots a_{n-x}^{p\alpha_{n-x}(i,j)}.$$
The presentation is included in ${\mathcal Q}(n,x)$ provided the resulting group is powerfully solvable of order $p^{n}$. Notice
that $G^{p}\leq Z(G)$ and as a result the commutator relations above only depend on the cosets $\overline{a_{1}}=a_{1}G^{p},\ldots , \overline{a_{n-x}}=a_{n-x}G^{p}$ and not on the exact values of $a_{1},\ldots ,a_{n-x}$. Consider the vector space
$V=G/G^{p}$ over $\mathbb{F}_p$ and let $W={\mathbb F}_{p}\overline{a_{1}}+\cdots +{\mathbb F}_{p}\overline{a_{n-2x}}$. Then let
$$H=\{\phi \in \mbox{GL}(n-x,p):\,\phi(W)=W\}.$$
There is now a natural action of $H$ on ${\mathcal Q}(n,x)$. Suppose we have some presentation with generators $a_{1},\ldots ,a_{n-x}$ as above. Let $\phi\in H$ and suppose
$$\overline{a_{i}}^{\phi}=\beta_{1}(i)\overline{a_{1}}+\cdots +\beta_{n-x}(i)\overline{a_{n-x}}.$$
We then get a new presentation in ${\mathcal Q}(n,x)$ for $G$ with respect to the generators $b_{1},\ldots ,b_{n-x}$ where
$b_{i}=a_{1}^{\beta_{1}(i)}\cdots a_{n-x}^{\beta_{n-x}(i)}$. \\ \\
Suppose there are $l$ powerfully solvable groups of exponent $p^{2}$ and order $p^{n}$ where furthermore $|G^{p}|=p^{x}$. Pick powerfully solvable presentations $p_{1},\ldots ,p_{l}\in {\mathcal P}(n,x)$ for these. Let $q$ be a powerfully solvable presentation in ${\mathcal P}(n,x)$ of a group $K$ with generators $b_{1},\ldots ,b_{n-x}$. Then $q$ is also a presentation for an isomorphic group $G$
with presentation $p_{i}$ and generators $a_{1},\ldots ,a_{n-x}$. Let $\phi:K\rightarrow G$ be an isomorphism and let $\psi:K/K^{p}\rightarrow G/G^{p}$ be the corresponding linear isomorphism. This gives us a linear automorphism $\tau\in H$ induced by $\tau(\overline{a_{i}})=\psi(\overline{b_{i}})$. Thus $q=p_{i}^{\tau}$. Therefore
$${\mathcal P}(n,x)\subseteq p_{1}^{H}\cup p_{2}^{H}\cup\cdots \cup p_{l}^{H}.$$
From this we get
$$p^{\alpha n^{3}+o(n^{3})}=|{\mathcal P}(n,x)|\leq |p_{1}^{H}|+\cdots +|p_{l}^{H}|\leq lp^{n^{2}},$$
and it follows that $l\geq p^{\alpha n^{3}+o(n^{3})}$. We thus get the following result.
\begin{theo} The number of powerfully solvable groups of exponent $p^{2}$ and order $p^{n}$ is
$p^{\alpha n^{3}+o(n^{3})}$, where $\alpha=\frac{-1+\sqrt{2}}{6}$.
\end{theo}
\noindent As mentioned in [5] the growth of all powerful $p$-groups of exponent $p^{2}$ and order $p^{n}$ is $p^{\frac{2}{27}n^{3}+o(n^{3})}$. This claim was, however, not proved there and we fill in the details here. \\ \\
As before we consider a group $G$ of order $p^{n}=p^{y+2x}$ with generators $a_{1},\ldots ,a_{y+x}$ where $o(a_{1})=\cdots =o(a_{y})=p$ and $o(a_{y+1})=\cdots =o(a_{y+x})=p^{2}$. This time, however, we can include all powerful relations
$$[a_{j},a_{i}]=a_{y+1}^{p\alpha_{y+1}(i,j)}\cdots a_{y+x}^{p\alpha_{y+x}(i,j)}$$
for $1\leq i<j\leq y+x$, where $0\leq \alpha_{k}(i,j)\leq p-1$ for $k=y+1,\ldots ,y+x$. For each such pair $(i,j)$ there are $p^{x}$ possible relations for $[a_{j},a_{i}]$. We thus see that the number of presentations is $p^{h(x)}$ where
$$h(x) = {y+x\choose 2}x = {n-x\choose 2}x= \frac{x^{3}}{2}-\frac{(2n-1)}{2}x^{2}+\frac{n(n-1)}{2}x.$$
Thus
$$h'(x)=\frac{3}{2}\left(x^{2}-\frac{2(2n-1)}{3}x+\frac{n(n-1)}{3}\right)$$
and using the same kind of analysis as before we see that for a large $n$, $h$ takes its maximal value for $x(n)=\frac{2n-1}{3}-
\sqrt{\frac{n^{2}}{9}-\frac{n}{9}+\frac{1}{9}}$. Notice that $\lim_{n\rightarrow \infty}\frac{x(n)}{n}=1/3$. Therefore
$$\lim_{n\rightarrow \infty}\frac{h(x(n))}{n^{3}}
=\lim_{n\rightarrow \infty}\frac{1}{2}
\cdot\left(\frac{n-x(n)}{n}\right)
\cdot\left(\frac{n-1-x(n)}{n}\right)
\cdot\frac{x(n)}{n}
=2/27.$$
The same argument as above shows then that the growth of all powerful groups of exponent $p^{2}$ with respect to order $p^{n}$ is
$p^{\frac{2}{27}n^{3}+o(n^{3})}$. \\ \\
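The same kind of numerical sanity check works for the count of all powerful presentations (a sketch with a large sample value of $n$, not part of the argument):

```python
from fractions import Fraction
from math import comb, isclose, sqrt

def h_pow(n, x):
    """Exponent h(x) = C(n-x, 2) * x counting all powerful relations."""
    return comb(n - x, 2) * x

def h_pow_closed(n, x):
    """Closed form h(x) = x^3/2 - (2n-1)/2 * x^2 + n(n-1)/2 * x."""
    n, x = Fraction(n), Fraction(x)
    return x**3 / 2 - Fraction(2 * n - 1, 2) * x**2 + Fraction(n * (n - 1), 2) * x

# the two expressions agree exactly
for n in range(4, 40):
    for x in range(n // 2 + 1):
        assert h_pow(n, x) == h_pow_closed(n, x)

# maximiser x(n)/n -> 1/3 and limit h(x(n))/n^3 -> 2/27
n = 10**6
x_n = (2 * n - 1) / 3 - sqrt(n * n / 9 - n / 9 + 1 / 9)
assert isclose(x_n / n, 1 / 3, rel_tol=1e-3)
assert isclose(h_pow(n, round(x_n)) / n**3, 2 / 27, rel_tol=1e-3)
```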
Later on we will be working with a special subclass ${\mathcal P}$ of powerful $p$-groups, namely those that are of type {$(2,\stackrel{r}{\ldots} ,2)$ with $r\ge 1$}. In this case the number of presentations for groups of order $p^{n}$, $n$ even, is $p^{h(n)}$ where $h(n)={n/2 \choose 2}n/2$ and
$$\lim_{n\rightarrow \infty}\frac{h(n)}{n^{3}}=\lim_{n\rightarrow \infty}\frac{\frac{n}{2}\left(\frac{n}{2}-1\right)\frac{n}{2}}{2n^{3}}=1/16.$$
Thus the growth here is $p^{\frac{1}{16}n^{3}+o(n^{3})}$.
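For the class ${\mathcal P}$ the corresponding check is a one-liner (again with an illustrative large $n$):

```python
from math import comb, isclose

def h_P(n):
    """Exponent h(n) = C(n/2, 2) * (n/2) for groups of type (2,...,2), n even."""
    r = n // 2
    return comb(r, 2) * r

assert h_P(6) == 9  # 3 generators of order p^2: C(3,2) pairs, p^3 relations each
n = 10**6
assert isclose(h_P(n) / n**3, 1 / 16, rel_tol=1e-3)
```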
\section{Groups of type $(2,\ldots,2)$}
We have seen that powerful nilpotence and powerful solvability are preserved under taking quotients. These properties, however, behave badly under taking subgroups. Our next two results underscore this.
\begin{prop} Let $G$ be any powerful $p$-group of exponent $p^{2}$. There exists a powerfully nilpotent group $H$ of exponent $p^{2}$ and powerful class $2$ such that $G$ is powerfully embedded in $H$.
\end{prop}
\noindent{\bf Proof}\ \ Suppose $G=\langle a_{1},\ldots ,a_{r}\rangle$ where $a_{1}^{p}=\cdots =a_{s}^{p}=1$, $a_{s+1}^{p^{2}}=
\cdots =a_{r}^{p^{2}}=1$ and where $|G|=p^{s+2(r-s)}$. Let $N=\langle x_{s+1}\rangle \times \cdots\times \langle x_{r}\rangle$
be a direct product of cyclic groups of order $p^{2}$. Let $H=(G\times N)/M$, where $M=\langle a_{s+1}^{p}x_{s+1}^{-p},
\ldots ,a_{r}^{p}x_{r}^{-p}\rangle$. Notice that $[G,H]=[G,G]\leq G^{p}$ and thus $G$ is powerfully embedded in $H$. Also, as $[H,H]=[G,G]\leq G^{p}=N^{p}$,
we see that
$$1\leq \langle x_{s+1},\ldots ,x_{r}\rangle\leq H$$
is a powerfully central chain and thus $H$ is powerfully nilpotent of powerful class at most $2$. $\Box$ \\ \\
{\bf Remark}. (1) There exist powerful $p$-groups of exponent $p^{2}$ that are not powerfully solvable and thus a powerfully embedded subgroup of a powerfully nilpotent group of powerful class $2$ does not
even need to be powerfully solvable. \\
(2) There exist powerfully nilpotent groups of exponent $p^{2}$ that are of arbitrarily large powerful class and so the proposition above shows that a powerfully nilpotent group of powerful class $2$ can have
a powerfully embedded powerfully nilpotent subgroup of arbitrarily large powerful class. \\ \\
The next result shows that the subgroup structure of a powerfully nilpotent group of powerful class $2$ is even more arbitrary. Notice that such a group is in particular nilpotent of class $2$ and it turns out that any finite $p$-group
of class $2$ can occur as a subgroup.
\begin{prop} Let $G$ be any finite $p$-group of nilpotency class $2$. There exists a powerfully nilpotent group $H$ of powerful class $2$ that contains $G$ as a subgroup.
\end{prop}
\noindent{\bf Proof}\ \ Suppose $[G,G]$ has a basis $a_{1},\ldots ,a_{m}$ as an abelian group where $o(a_{i})=p^{j_i}$. Let $N=\langle x_{1}\rangle\times\cdots \times \langle x_{m}\rangle$ be a direct product
of cyclic groups where $o(x_{i})=p^{j_i+1}$. Now let $H=(G\times N)/M$ where $M=\langle a_{1}x_{1}^{-p},\ldots, a_{m}x_{m}^{-p}\rangle$. Then $G$ embeds as a subgroup
of $H$. Notice also that
$$1\leq \langle x_{1},\ldots ,x_{m}\rangle \leq H$$
is powerfully central and thus $H$ is powerfully nilpotent of powerful class $2$. $\Box$ \\ \\
Thus powerful nilpotence and powerful solvability are in general not as well-behaved notions for powerful groups as nilpotence
and solvability are for the class of all groups. For a rich subclass of powerful groups, however, things turn out much better. This is the class ${\mathcal P}$ of all powerful groups of type $(2,\stackrel{r}{\ldots} ,2)$ that we considered in Section 5. \\ \\
For a group $G\in {\mathcal P}$ we have that $G^{p}\leq Z(G)$. It follows that the map $G/G^{p}\rightarrow G^{p},\,aG^{p}\mapsto a^{p}$ is a bijection and therefore, for any $H\geq G^{p}$, we have $|H/G^{p}|=|H^{p}|$.
\begin{lemm} Let $G\in {\mathcal P}$ and $H,K\leq G$ where $G^{p}\leq K$. Then $H^{p}\cap K^{p}=(H\cap K)^{p}$.
\end{lemm}
\noindent{\bf Proof}\ \ We have
\begin{eqnarray*}
|H^{p}\cap K^{p}|
= |(HG^{p})^{p}\cap K^{p}|
& = & \frac{|(HG^{p})^{p}|\cdot|K^{p}|}{|(HK)^{p}|}\\
& = & \frac{|HG^{p}/G^{p}|\cdot |K/G^{p}|}{|HK/G^{p}|}\\
& = & |(HG^{p}\cap K)/G^{p}| \\
& = & |(H\cap K)G^{p}/G^{p}|\\
& = & |(H\cap K)^{p}|.
\end{eqnarray*}
As $(H\cap K)^{p}\leq H^{p}\cap K^{p}$ it follows that $H^{p}\cap K^{p}=(H\cap K)^{p}$. $\Box$
\begin{theo} Let $G$ be a powerfully nilpotent group in ${\mathcal P}$ and let $H$ be a powerful subgroup of $G$. Then $H$ is
powerfully nilpotent of powerful class less than or equal to the powerful class of $G$.
\end{theo}
\noindent{\bf Proof}\ \ Suppose $G$ has powerful nilpotence class $c$ and that we have a powerfully central chain $G=G_{0}>G_{1}>\cdots >G_{c}=1$. As $G^{p}\leq Z(G)$ and $(G^{p})^{p}=1$, multiplying a term by $G^{p}$ makes no difference. Also as the
powerful class is $c$ we get a strictly decreasing powerfully central chain $G=G_{0}>G_{1}G^{p}>\cdots >G_{c-1}G^{p}>1$. Without loss of generality we can thus assume that $G_{1},\ldots ,G_{c-1}$ contain $G^{p}$ as a subgroup. We claim that
$$H=H\cap G_{0}\geq H\cap G_{1}\geq \cdots \geq H\cap G_{c-1}\geq 1$$
is powerfully central. Using Lemma 6.3 we have
$$
[H\cap G_{i},H]\leq [H,H]\cap [G_{i},G]\leq H^{p}\cap G_{i+1}^{p}=(H\cap G_{i+1})^{p},
$$
for $0\leq i\leq c-1$. Hence $H$ is powerfully nilpotent of powerful class at most $c$. $\Box$ \\ \\
\begin{theo} Let $G$ be a powerfully solvable group in ${\mathcal P}$ and let $H$ be a powerful subgroup of $G$. Then $H$ is powerfully solvable of powerful derived length less than or equal to the powerful derived length of
$G$.
\end{theo}
\noindent{\bf Proof}\ \ Suppose the powerful derived length of $G$ is $d$ and that we have a powerfully abelian chain $G=G_{0}>G_{1}>\cdots >G_{d}=1$. Arguing as in the proof of the previous theorem, we can assume that $G_{1},\ldots ,G_{d-1}$ contain $G^{p}$. We show that
$$H=H\cap G_{0}\geq H\cap G_{1}\geq \cdots \geq H\cap G_{d-1}\geq H\cap G_{d}=1$$
is a powerfully abelian chain. Using Lemma 6.3, we have $[H\cap G_{i},H\cap G_{i}]\leq [H,H]\cap [G_{i},G_{i}]\leq H^{p}\cap G_{i+1}^{p}=(H\cap G_{i+1})^{p}$. This shows that $H$ is powerfully solvable of powerful derived length at most $d$. $\Box$ \\ \\
We introduce some useful notation. We use $H\leq_{\mathcal P} G$ to stand for $H,G\in {\mathcal P}$ and $H\leq G$. We use $H\unlhd_{\mathcal P}G$ for $H,G\in {\mathcal P}$ and
$H$ powerfully embedded in $G$. The notations $H<_{\mathcal P}G$ and $H\lhd _{\mathcal P}G$ are defined naturally in a similar way. \\ \\
Let $G$ be a powerful $p$-group in $\mathcal{P}$ and let $V=G/G^{p}$ be the associated vector space over $\mathbb{F}_p$.
The structure of $G$ is determined by the commutator relations
\begin{equation}
[a,b]=c^{p},
\end{equation}
where for each pair $a,b\in G$ there exists some $c\in G$ satisfying this relation. Notice that $[a,b]$ and $c^{p}$ only depend on the cosets $aG^{p},bG^{p}$ and $cG^{p}$. Identifying
the two vector spaces $G/G^{p}$ and $G^{p}$ under the map $G/G^{p}\rightarrow G^{p},xG^{p}\mapsto x^{p}$, we get a natural alternating product on $V$ with
the relations (1) translating to
$$[aG^{p},bG^{p}]=cG^{p}.$$
Let ${\mathcal G}$ be the collection of all powerful subgroups of $G$ that are of type $(2,\ldots ,2)$ and let ${\mathcal V}$ be the collection of all the alternating
subalgebras of $V$. So for $U$ to be a subalgebra of $V$ it needs to be a subspace where $[U,U]\leq U$. Notice that $[H,H]\leq H^{p}$ translates to
$[HG^{p}/G^{p},HG^{p}/G^{p}]\leq HG^{p}/G^{p}$. Recall that for $H,K\in {\mathcal G}$ we write $H\trianglelefteq_{\mathcal P} K$ for $H$ powerfully embedded in $K$. For $U,W\in {\mathcal V}$ we likewise
write $U\trianglelefteq W$ for $U$ an ideal of $W$. Notice that $[H,K]\leq H^{p}$ translates to $[HG^{p}/G^{p},KG^{p}/G^{p}]\leq HG^{p}/G^{p}$. \\ \\
If $H,G\in{\mathcal P}$ are such that $G$ is powerfully nilpotent and $H\trianglelefteq_{\mathcal{P}} G$, then the quotient $G/H$ naturally has the structure
of a powerful group of type $(2,\ldots ,2)$ with $[aH,bH]=[a,b]H$. \\ \\
{\bf Definition}. We say that a group $G\in\mathcal{P}$ is {\it powerfully simple} if $G\not =1$ and if $H\lhd_{\mathcal P} G$ implies that $H=1$. \\ \\
{\bf Definition}. Let $H,G\in\mathcal{P}$ with $H\lhd_{\mathcal P}G$. We say that $H$ is
a maximal powerfully embedded ${\mathcal P}$-subgroup of $G$ if there is no $H<K<G$ such that $K\unlhd_{\mathcal P}G$.
\begin{lemm}
Let $G\in\mathcal{P}$ and let $H\lhd_{\mathcal P}G$. Then $H$ is a maximal powerfully embedded $\mathcal{P}$-subgroup of $G$ if and only if $G/H$ is powerfully simple.
\end{lemm}
\noindent{\bf Proof}\ \ Let $H<K<G$. Now as $H$ is powerful of type $(2,\ldots,2)$ we have $H\cap G^{p}=H^{p}$. Therefore
$[K,G]\leq K^{p}H$ if and only if
$$[K,G]\leq (K^{p}H)\cap G^{p}=K^{p}(H\cap G^{p})=K^{p}H^{p}=K^{p}.$$
The result follows from this. $\Box$ \\ \\
{\bf Remark}. Let $H,K\in {\mathcal G}$ and let $U$ and $W$ be the associated alternating algebras in ${\mathcal V}$. Suppose that $H$ is powerfully
embedded in $K$. Then $K/H$ is powerfully simple if and only if $W/U$ is a simple alternating algebra and the latter happens if and only if $U$ is
a maximal ideal of $W$. \\ \\
We will next prove a Jordan-H\"{o}lder type theorem for alternating algebras. Suppose $A,B,a,b\in {\mathcal V}$ where $A\lhd B$ and $a\lhd b$. Let
${\mathcal I}_{A}^{B}=\{Z:A\leq Z\leq B\}$ and ${\mathcal I}_{a}^{b}=\{x:a\leq x\leq b\}$. We get natural projections $P:{\mathcal I}_{a}^{b}\rightarrow
{\mathcal I}_{A}^{B}$ and $Q:{\mathcal I}_{A}^{B}\rightarrow {\mathcal I}_{a}^{b}$ given by
$$P(x)=A+B\cap x\mbox{\ and\ }Q(Z)=a+b\cap Z.$$
\begin{lemm}We have $P(a)\trianglelefteq P(b)$ and $Q(A)\trianglelefteq Q(B)$. Furthermore $P(b)/P(a)$ is isomorphic to $Q(B)/Q(A)$.
\end{lemm}
\noindent{\bf Proof}\ \ Notice that $P(a)=A+B\cap a, P(b)=A+B\cap b, Q(A)=a+b\cap A$ and $Q(B)=a+b\cap B$. As $A\trianglelefteq B$, we have $[P(a),P(b)]=[A+B\cap a,A+B\cap b]\leq A+[B\cap a,B\cap b]$. Now as $B$ is a subalgebra and $a\trianglelefteq b$ we have that
this is contained in $A+B\cap a$. The second claim follows from this by symmetry. \\ \\
Now for $P(b)/P(a)$, notice first that we have
$$
B\cap b\cap (A+B\cap a)=B\cap b\cap A+B\cap a=A\cap b+B\cap a,
$$
and for $u,v,w\in B\cap b$ it follows that
$$ [u,v]+A+B\cap a=w+A+B\cap a \Leftrightarrow [u,v]+A\cap b+B\cap a=w+A\cap b+B\cap a.$$
By symmetry
%
$$[u,v]+a+b\cap A=w+a+b\cap A \Leftrightarrow [u,v]+a\cap B+b\cap A=w+a\cap B+b\cap A.$$
The isomorphism of $P(b)/P(a)$ and $Q(B)/Q(A)$ follows from this. $\Box$ \\ \\
The Jordan-H\"{o}lder theorem for alternating algebras is proved from this in the standard way. \\ \\
{\bf Definition}. Let $V$ be an alternating algebra. A chain $0=U_{0}\lhd U_{1} \lhd \cdots \lhd U_{n}=V$ is a {\it composition series} for $V$ if all the factors $U_{1}/U_{0},\ldots ,U_{n}/U_{n-1}$ are simple alternating algebras.
\begin{theo} Let $V$ be an alternating algebra. Then all composition series have the same length and same composition factors up to order.
\end{theo}
\noindent{\bf Definition}. Let $G\in {\mathcal P}$. A chain $1=H_{0}\lhd_{\mathcal P} H_{1}\lhd_{\mathcal P}\cdots \lhd_{\mathcal P} H_{n}=G$ is a {\it powerful composition series} for $G$ if all the factors $H_{1}/H_{0},
\ldots ,H_{n}/H_{n-1}$ are powerfully simple.
\begin{theo}Let $G$ be a powerful $p$-group of type $(2,\ldots ,2)$ with two powerful composition series, say
$$1=H_{0}\lhd_{\mathcal P}H_{1}\lhd_{\mathcal P}\cdots \lhd_{\mathcal P}H_{n}=G$$
and
$$1=K_{0}\lhd_{\mathcal P}K_{1}\lhd_{\mathcal P}\cdots \lhd_{\mathcal P}K_{m}=G.$$
Then $m=n$ and the powerfully simple factors $H_{1}/H_{0},H_{2}/H_{1},\ldots ,H_{n}/H_{n-1}$ are isomorphic to $K_{1}/K_{0}$, $K_{2}/K_{1}$, $\ldots ,K_{n}/K_{n-1}$
(in some order).
\end{theo}
\noindent{\bf Proof}\ \ Replace the terms $H_{i},K_{j}$ by their associated alternating algebras $U_{i}, V_{j}$. The result now follows from the Jordan-H\"{o}lder theorem for alternating algebras. $\Box$ \\ \\
{\bf Definition}. We refer to the unique factors of a powerful composition series of a group $G\in {\mathcal P}$ as the {\it powerful composition factors} of $G$.
\begin{coro}
A group $G\in {\mathcal P}$ is powerfully solvable if and only if the powerful composition factors are cyclic of order $p^{2}$.
\end{coro}
\noindent{\bf Proof}\ \ Any powerfully abelian chain of $G$ can be refined to a powerful composition series whose factors are cyclic of order $p^{2}$. Conversely, such a powerful composition series is itself a powerfully abelian chain. $\Box$
\section{The classification of powerfully simple groups of type $(2,2,2)$.}
From the previous section we know that this task is equivalent to classifying all simple alternating algebras of dimension $3$. \\ \\
Following [4], any given alternating algebra of dimension $3$
over ${\mathbb F}_p$ can be represented by a $3\times 3$ matrix over ${\mathbb F}_p$. Here the matrix
$$\left[\begin{array}{lll}
\alpha_{11} & \alpha_{12} & \alpha_{13} \\
\alpha_{21} & \alpha_{22} & \alpha_{23} \\
\alpha_{31} & \alpha_{32} & \alpha_{33}
\end{array}\right]$$
corresponds to the $3$-dimensional alternating algebra ${\mathbb F}_{p}v_{1}+{\mathbb F}_{p}v_{2}+{\mathbb F}_{p}v_{3}$ where
\begin{eqnarray*}
v_{2}v_{3} & = & \alpha_{11}v_{1}+\alpha_{21}v_{2}+\alpha_{31}v_{3} \\
v_{3}v_{1} & = & \alpha_{12}v_{1}+\alpha_{22}v_{2}+\alpha_{32}v_{3} \\
v_{1}v_{2} & = & \alpha_{13}v_{1}+\alpha_{23}v_{2}+\alpha_{33}v_{3}.
\end{eqnarray*}
From the last section we know that this corresponds to a powerful $p$-group of order $p^{6}$ with generators $a_{1},a_{2},a_{3}$ of order $p^{2}$ satisfying the relations:
\begin{eqnarray*}
[a_{2},a_{3}] & = & a_{1}^{p\alpha_{11}}a_{2}^{p\alpha_{21}}a_{3}^{p\alpha_{31}} \\
\mbox{} [a_{3},a_{1}] & = & a_{1}^{p\alpha_{12}}a_{2}^{p\alpha_{22}}a_{3}^{p\alpha_{32}} \\
\mbox{} [a_{1},a_{2}] & = & a_{1}^{p\alpha_{13}}a_{2}^{p\alpha_{23}}a_{3}^{p\alpha_{33}}.
\end{eqnarray*}
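Extending the basis products bilinearly and anti-symmetrically, the product of two vectors $u,w$ in the algebra encoded by $A$ is $A(u\times w)$, where $\times$ is the formal cross product over ${\mathbb F}_p$ (its values on basis pairs, $e_{2}\times e_{3}=e_{1}$ etc., match the indexing of the columns of $A$). A minimal computational model (a sketch; the prime and matrix below are arbitrary examples, not from the text):

```python
def cross_mod(u, w, p):
    """Formal cross product on F_p^3: the universal alternating bilinear map,
    with e2 x e3 = e1, e3 x e1 = e2, e1 x e2 = e3."""
    return ((u[1] * w[2] - u[2] * w[1]) % p,
            (u[2] * w[0] - u[0] * w[2]) % p,
            (u[0] * w[1] - u[1] * w[0]) % p)

def alt_product(u, w, A, p):
    """Product in the alternating algebra encoded by A: the columns of A are
    the basis products v2*v3, v3*v1, v1*v2, so u*w = A (u x w)."""
    x = cross_mod(u, w, p)
    return tuple(sum(A[i][j] * x[j] for j in range(3)) % p for i in range(3))

p = 5
A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]  # an arbitrary sample matrix over F_5
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert alt_product(e2, e3, A, p) == (1, 0, 4)    # first column of A
for u, w in [((1, 2, 3), (4, 0, 1)), ((2, 2, 0), (0, 1, 4))]:
    assert alt_product(u, u, A, p) == (0, 0, 0)  # the product is alternating
    neg = tuple((-z) % p for z in alt_product(w, u, A, p))
    assert alt_product(u, w, A, p) == neg        # and anti-symmetric
```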
In [4] it is shown that two such matrices $A,B$ represent the
same alternating algebra with respect to different bases if and only if there exists an invertible $3\times 3$ matrix $P$ such that
$$B=\frac{1}{\mbox{det\,}(P)}P^{t}AP.$$
We write $B\simeq A$ if they are related in this way. This turns out to be slightly more general than being congruent (that is $B=P^{t}AP$).
\begin{lemm} Let $\lambda \in {\mathbb F}_p^{*}$. Then $\lambda A\simeq A$.
\end{lemm}
\noindent{\bf Proof}\ \ Let $P=\frac{1}{\lambda}I$. Then $\frac{1}{\det(P)}P^{t}AP=\lambda^{3}\cdot \frac{1}{\lambda^{2}}A=\lambda A$. $\Box$ \\ \\
From this we easily get the following corollary.
\begin{prop}
We have $B\simeq A$ if and only if there exists $C$ such that $A$ is congruent to $C$ and $B=\lambda C$.
\end{prop}
\noindent In particular two matrices that are congruent are equivalent. We can write each such matrix $A$ in a unique way as a sum of a symmetric and an anti-symmetric matrix, namely $A_{s}=\frac{A+A^{t}}{2}$ and $A_{a}=\frac{A-A^{t}}{2}$. As shown in [4] we have that
$A\simeq B$ if and only if $A_{s}\simeq B_{s}$ and $A_{a}\simeq B_{a}$. We will determine all the equivalence classes and therefore all
powerful $p$-groups of type $(2,2,2)$. From this we will then single out those that are powerfully simple. \\ \\
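The lemma above ($\lambda A\simeq A$) and the decomposition $A=A_{s}+A_{a}$ can be checked with modular matrix arithmetic (a sketch over a sample prime and matrix; inverses are computed via Fermat's little theorem, which assumes $p$ prime):

```python
p = 7  # sample odd prime

def mat_mul(X, Y, p):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) % p for j in range(3)] for i in range(3)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def det_mod(X, p):
    (a, b, c), (d, e, f), (g, h, i) = X
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % p

def equiv_image(A, P, p):
    """(1/det P) * P^t * A * P mod p, with the inverse of det P via Fermat."""
    d_inv = pow(det_mod(P, p), p - 2, p)
    B = mat_mul(mat_mul(transpose(P), A, p), P, p)
    return [[(d_inv * B[i][j]) % p for j in range(3)] for i in range(3)]

A = [[1, 2, 0], [5, 3, 1], [0, 6, 4]]  # arbitrary sample matrix over F_7
lam = 3
lam_inv = pow(lam, p - 2, p)
P = [[lam_inv if i == j else 0 for j in range(3)] for i in range(3)]  # P = (1/lambda) I
assert equiv_image(A, P, p) == [[(lam * A[i][j]) % p for j in range(3)] for i in range(3)]

# unique decomposition into symmetric and anti-symmetric parts (2 is invertible mod p)
half = pow(2, p - 2, p)
A_s = [[(half * (A[i][j] + A[j][i])) % p for j in range(3)] for i in range(3)]
A_a = [[(half * (A[i][j] - A[j][i])) % p for j in range(3)] for i in range(3)]
assert all((A_s[i][j] + A_a[i][j]) % p == A[i][j] for i in range(3) for j in range(3))
```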
{\bf Classification of the symmetric matrices}.
It is known that every symmetric matrix is congruent to a diagonal matrix and furthermore to exactly one of the following: $D(1,1,1)$, $ D(\tau,1,1)$, $D(1,1,0)$, $ D(\tau, 1, 0)$, $D(1,0,0)$, $D(\tau,0,0)$ and $D(0,0,0)$, where $\tau$ is a fixed non-square in ${\mathbb F}_p^{*}$ and $D(\alpha,\beta,\gamma)$ is the $3\times 3$ matrix with $\alpha, \beta$ and $\gamma$ on the diagonal (compare [1, Chapter 6, Theorem 2.7]). Now $D(1,1,1)$ is equivalent to $\tau D(1,1,1)$, and the latter has determinant $\tau$ modulo $({\mathbb F}_p^{*})^{2}$. Hence $D(1,1,1)$ and $D(\tau,1,1)$ are equivalent. Also $D(\tau,0,0)=\tau D(1,0,0)$ is equivalent to $D(1,0,0)$. When the rank is $2$, multiplying the matrix by a constant $\lambda\in {\mathbb F}_p^{*}$ does not change the determinant of the non-degenerate $2\times 2$ block modulo $({\mathbb F}_p^{*})^{2}$. Hence $D(1,1,0)$ and
$D(\tau,1,0)$ are not equivalent. Up to equivalence we thus get only $5$ matrices:
$$D(1,1,1),\, D(1,1,0),\, D(\tau,1,0),\, D(1,0,0)\, \mbox{and }D(0,0,0).$$ \\
{\bf Classification of the anti-symmetric matrices.}
The situation regarding the anti-symmetric matrices is simpler as there are only two equivalence classes: one contains the zero matrix and the other contains all the non-zero anti-symmetric matrices. This comes from the fact that there are only two alternating forms (up to isomorphism) on a $3$-dimensional vector space $V$. Either $V^{\perp}$ has dimension $3$ or $1$. \\ \\
{\bf Classification of the alternating algebras.}
Let $A$ be some $3\times 3$ matrix over ${\mathbb F}_p$ and let $V$ be the corresponding alternating algebra. The symmetric part of $A$ equips $V$ with a corresponding symmetric bilinear form $\langle \ ,\ \rangle_{s}$ and the anti-symmetric part of $A$ equips
$V$ with a corresponding alternating form $\langle \ ,\ \rangle_{a}$. Now there are two possibilities for $\langle \ ,\ \rangle_{a}$. If it is zero then $A$ is symmetric and we get $\boldsymbol{5}$ alternating algebras corresponding to the $5$ diagonal matrices listed above. From now
on we can thus assume that $\langle \ ,\ \rangle_{a}$ is non-zero. Thus $V^{\perp_{a}}$ is of dimension $1$. Say
$$V^{\perp_{a}}={\mathbb F}_pv_{3}$$
so that
$$
V=({\mathbb F}_{p}v_{1}+{\mathbb F}_{p}v_{2})\operp_{a} {\mathbb F}_{p}v_{3}
$$
for some $v_1,v_2\in V$. For our classification we will divide first into $3$ cases. For Case 1, we have $\langle v_{3},v_{3}\rangle_{s}\not =0$. For Case 2, we have $\langle v_{3},v_{3}\rangle_{s}=0$ and $(V^{\perp_{a}})^{\perp_{s}}=({\mathbb F}_pv_{3})^{\perp_{s}}=V$. Finally
for Case 3, we have $\langle v_{3},v_{3}\rangle_{s}=0$ and $(V^{\perp_{a}})^{\perp_{s}}=({\mathbb F}_pv_{3})^{\perp_{s}}<V$. \\ \\
{\it Case 1}.
We can here find a basis $v_{1},v_{2},v_{3}$ for $V$ where
$$V={\mathbb F}_{p}v_{1}\operp_{s} {\mathbb F}_{p}v_{2}\operp_{s} {\mathbb F}_{p}v_{3}.$$
{\it Case 1.1}. Suppose first that the rank of $\langle\ ,\ \rangle_{s}$ is $1$. In this case it is easy to see that we can pick our basis further so that
$$\begin{array}{lll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, \\
\langle v_{1},v_{1}\rangle_{s}=0, & \langle v_{2},v_{2}\rangle_{s}=0, & \langle v_{3},v_{3}\rangle_{s}=1.
\end{array}$$
Indeed, notice that we can always multiply the relevant matrix by a constant to get $\langle v_{3},v_{3}\rangle_{s}=1$ and then it is easy to pick our $v_{1},v_{2}$ such that $\langle v_{1},v_{2}\rangle_{a}=1$. In this case we thus have only $\boldsymbol{1}$ algebra. \\ \\
{\it Case 1.2}. Suppose next that the rank of $\langle\ ,\ \rangle_{s}$ is $2$. Here again by multiplying by a constant we can assume that $\langle v_{3},v_{3}\rangle_{s}=1$ and we can assume that $\langle v_{2},v_{2}\rangle_{s}=0$. Now $\langle v_{1},v_{1}\rangle_{s}=\lambda^{2}$ or $\langle v_{1},v_{1}\rangle_{s}=\tau\lambda^{2}$ for some $\lambda\in {\mathbb F}_p^{*}$. By replacing $v_{1}$ by $\frac{1}{\lambda}v_{1}$ we can assume that $\langle v_{1},v_{1}\rangle_{s}$ is either $1$ or $\tau$. Notice that we have also seen above that these cases are genuinely distinct. Now that $v_{1}$ has been chosen we can replace $v_{2}$ by a suitable multiple to ensure that $\langle v_{1},v_{2}\rangle_{a}=1$. We thus get $\boldsymbol{2}$ algebras
$$\begin{array}{lll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, \\
\langle v_{1},v_{1}\rangle_{s}=1, & \langle v_{2},v_{2}\rangle_{s}=0, & \langle v_{3},v_{3}\rangle_{s}=1.
\end{array}$$
and
$$\begin{array}{lll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, \\
\langle v_{1},v_{1}\rangle_{s}=\tau, & \langle v_{2},v_{2}\rangle_{s}=0, & \langle v_{3},v_{3}\rangle_{s}=1.
\end{array}$$ \\ \\
{\it Case 1.3}. We are then only left with the case where the rank of $\langle \ ,\ \rangle_{s}$ is $3$. It is not difficult to see that in this case we can pick $v_{1},v_{2},v_{3}$ such that
$$\begin{array}{lll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, \\
\langle v_{1},v_{1}\rangle_{s}=\alpha, & \langle v_{2},v_{2}\rangle_{s}=1, & \langle v_{3},v_{3}\rangle_{s}=1,
\end{array}$$
where $\alpha\in {\mathbb F}_p^{*}$. We want to see when we get an equivalent algebra by changing $\alpha$ to $\beta$. If we multiply the presentation by a constant it must be by a square if we still want $\langle v_{3},v_{3}\rangle_{s}=1$. Say, we multiply
by $\lambda^{2}$ and then replace $v_{3}$ by $\frac{1}{\lambda} v_{3}$. Notice that we now have
$$\langle v_{1},v_{2}\rangle_{a}=\lambda^{2},\ \langle v_{1},v_{1}\rangle_{s} =\alpha\lambda^{2},\ \langle v_{2},v_{2}\rangle_{s}=\lambda^{2}.$$
We are now looking for all possible $\bar{v}_{1}=av_{1}+bv_{2}$ and $\bar{v}_{2}=cv_{1}+dv_{2}$ where $\langle \bar{v}_{1},\bar{v}_{2}\rangle_{a}=1$, $\langle \bar{v}_{1},\bar{v}_{2}\rangle_{s}=0$ and $\langle \bar{v}_{2},\bar{v}_{2}\rangle_{s}=1$. This gives us the following system of equations:
\begin{eqnarray*}
\lambda^{2}(ad-bc) & = & 1 \\
\lambda^{2}(\alpha ac+bd) & = & 0 \\
\lambda^{2}(\alpha c^{2}+d^{2}) & = & 1.
\end{eqnarray*}
We look first for all the solutions where $c=0$. Notice that in this case we must have $\lambda^{2} ad=1$, $\lambda^2d^2=1$ and $b=0$. Thus $\langle \bar{v}_{1},\bar{v}_{1}\rangle_{s}=\lambda^{2}(\alpha a^{2}+b^{2})=\lambda^{2}\frac{\alpha}{d^{2}\lambda^{4}}=\frac{\alpha}{\lambda^{2}d^{2}}=\alpha$. \\ \\
Next we look for solutions where $c\not =0$ but $d=0$. Then we must have $\lambda^{2}bc=-1$, $\lambda^2\alpha c^2=1$ and $a=0$. Here $\langle \bar{v}_{1},\bar{v}_{1}\rangle_{s}=\lambda^{2}(\alpha a^{2}+b^{2})= \frac{\lambda^{2}}{c^{2}\lambda^{4}}=\frac{\alpha}{\alpha c^{2}\lambda^{2}}=\alpha$. \\ \\
Finally we are left with finding all solutions where $cd\not =0$. Then $a=-\frac{bd}{\alpha c}$, and the first equation above gives us
$$1=-\lambda^{2}\left(\frac{bd^{2}}{\alpha c}+bc\right)=-\frac{b}{\alpha c}\cdot \lambda^{2}(d^{2}+\alpha c^{2})=-\frac{b}{\alpha c}.$$
Thus $b=-\alpha c$ and $a=-\frac{bd}{\alpha c}=d$. Therefore
\begin{eqnarray*}
\langle \bar{v}_{1},\bar{v}_{1}\rangle_{s} & = & \lambda^{2}(\alpha a^{2}+b^{2}) \\
& = & \lambda^{2}(\alpha d^{2}+\alpha^{2}c^{2}) \\
& = & \alpha \lambda^{2}(\alpha c^{2}+d^{2}) \\
& = & \alpha.
\end{eqnarray*}
We have thus seen that the value of $\alpha$ doesn't change and we have $\boldsymbol{p-1}$ different algebras here. \\ \\
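The exhaustive case analysis above can also be checked numerically. The following sketch is illustrative only (the prime $p=7$ and the value $\alpha=3$ are arbitrary choices): it enumerates all $\lambda,a,b,c,d$ satisfying the three constraint equations and confirms that $\langle \bar{v}_{1},\bar{v}_{1}\rangle_{s}=\alpha$ in every case.

```python
p, alpha = 7, 3            # illustrative: any odd prime and any alpha in F_p^*

solutions = 0
for lam in range(1, p):
    l2 = lam * lam % p
    for a in range(p):
        for b in range(p):
            for c in range(p):
                for d in range(p):
                    # the three constraint equations of Case 1.3
                    if (l2 * (a * d - b * c)) % p != 1:
                        continue
                    if (l2 * (alpha * a * c + b * d)) % p != 0:
                        continue
                    if (l2 * (alpha * c * c + d * d)) % p != 1:
                        continue
                    solutions += 1
                    # the value <v1bar, v1bar>_s is always alpha
                    assert (l2 * (alpha * a * a + b * b)) % p == alpha
assert solutions > 0       # e.g. lam = d = a = 1, b = c = 0 is a solution
```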
{\it Case 2}.
Here we are assuming that $\langle v_{3},v_{3}\rangle_{s}=0$ and that $v_{3}$ is orthogonal to $v_{1},v_{2}$ as well. Again we consider a few subcases. \\ \\
{\it Case 2.1}. Suppose that the rank of $\langle \ ,\ \rangle_{s}$ is zero. Then clearly we have $\boldsymbol{1}$ algebra.
$$\begin{array}{lll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, \\
\langle v_{1},v_{1}\rangle_{s}=0, & \langle v_{2},v_{2}\rangle_{s}=0, & \langle v_{3},v_{3}\rangle_{s}=0.
\end{array}$$ \\ \\
{\it Case 2.2}. Suppose next that the rank of $\langle \ ,\ \rangle_{s}$ is $1$. By multiplying by a suitable constant we can assume that $\langle v_{1},v_{1}\rangle_{s}=1$ and $\langle v_{2},v_{2}\rangle_{s}=0$. Finally replacing $v_{2}$ by an appropriate
multiple we can also assume that $\langle v_{1},v_{2}\rangle_{a}=1$. We thus also get here only $\boldsymbol{1}$ algebra
$$\begin{array}{lll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, \\
\langle v_{1},v_{1}\rangle_{s}=1, & \langle v_{2},v_{2}\rangle_{s}=0, & \langle v_{3},v_{3}\rangle_{s}=0.
\end{array}$$ \\ \\
{\it Case 2.3}. Finally we are left with the case when the rank of $\langle \ ,\ \rangle_{s}$ is $2$. Here it is easy to see that we can pick our basis further so that
$$\begin{array}{lll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, \\
\langle v_{1},v_{1}\rangle_{s}=\alpha , & \langle v_{2},v_{2}\rangle_{s}=1, & \langle v_{3},v_{3}\rangle_{s}=0.
\end{array}$$
Similar calculations as for Case 1.3 show that we get distinct algebras for different values of $\alpha$. Thus here we have $\boldsymbol{p-1}$ algebras. \\ \\
{\it Case 3}.
Here we are assuming that $\langle v_{3},v_{3}\rangle_{s}=0$ but that $v_{3}$ is not orthogonal to everything in $V$ with respect to $\langle\ ,\ \rangle_s$. Thus $({\mathbb F}_pv_{3})^{\perp_{s}}$ has dimension $2$. Suppose
$$({\mathbb F}_pv_{3})^{\perp_{s}}={\mathbb F}_pv_{2}+{\mathbb F}_pv_{3}.$$
It is not difficult to see that we can pick our basis such that
$$\begin{array}{llll}
%
\langle v_{1},v_{2}\rangle_{a}=1, & \langle v_{1},v_{3}\rangle_{a}=0, & \langle v_{2},v_{3}\rangle_{a}=0, &\\
\langle v_{1},v_{1}\rangle_{s}=0, & \langle v_{1},v_{2}\rangle_{s}=0, & \langle v_{1},v_{3}\rangle_{s}=1, & \langle v_{2},v_{3}\rangle_{s}=0.
\end{array}$$
Now there are two subcases. \\ \\
{\it Case 3.1}. If the rank of $\langle\ ,\ \rangle_{s}$ is $2$ then we must have $\langle v_{2},v_{2}\rangle_{s}=0$ and this gives us $\boldsymbol{1}$ algebra. \\ \\
{\it Case 3.2}. If the rank of $\langle \ , \ \rangle_{s}$ is $3$ then $\langle v_{2},v_{2}\rangle_{s}\not =0$ and after multiplying by a suitable constant we can assume that this value is $1$ (and then afterwards adjust things so that the other assumptions hold again).
Thus we get again $\boldsymbol{1}$ algebra. \\ \\
Adding up we see that in total we get $12+2(p-1)$ algebras and thus the same number of powerful $p$-groups of type $(2,2,2)$.
Before listing these we state and prove a proposition that shows how we can see which of these are powerfully simple.
\begin{prop} An alternating algebra $V$ over ${\mathbb F}_{p}$ of dimension $3$ is simple if and only if $V\cdot V=V$.
\end{prop}
\noindent{\bf Proof}\ \ This condition is clearly necessary as $V\cdot V$ is an ideal of $V$. To see that it is sufficient, suppose $V\cdot V=V$ and let
$I$ be a proper ideal. We want to show that $I=0$. We argue by contradiction and suppose $I$ is an ideal of dimension either $1$ or $2$. If $I$ is of dimension $2$, then $V/I$ is $1$-dimensional and thus we get the contradiction that $V\cdot V\leq I<V$. Now suppose $I$ is of dimension $1$, say $V=I+{\mathbb F}_{p}v_{1}+{\mathbb F}_{p}v_{2}$. Then $V\cdot V\leq I+{\mathbb F}_{p}v_{1}v_{2}$. But the dimension of $I+{\mathbb F}_{p}v_{1}v_{2}$ is at most $2$ and we get the contradiction that $V\cdot V<V$. $\Box$ \\ \\
We have thus determined the presentation matrices up to equivalence and got in total $12+2(p-1)$. As we described at the beginning of the section, this gives us a classification of all the alternating algebras of dimension $3$ over ${\mathbb F}_{p}$, which in turn gives us a classification of all the powerful $p$-groups of type $(2,2,2)$. Furthermore, the last proposition tells us how we read from the presentation whether a given alternating algebra is simple and thus whether the corresponding powerful group is powerfully simple.
The work above gives us the following list of powerful $p$-groups of type $(2,2,2)$. As the power relations for all of these are
$a_{1}^{p^{2}}=a_{2}^{p^{2}}=a_{3}^{p^{2}}=1$ we omit these below. Here $\tau$ is a fixed non-square in ${\mathbb F}_p$. \\
$$\begin{array}{l}
A_{1}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p},\,[a_{3},a_{1}]=a_{2}^{p},\,[a_{1},a_{2}]=a_{3}^{p}\rangle; \\
A_{2}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p},\,[a_{1},a_{2}]=a_{3}^{p}\rangle; \\
A_{3}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p}a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p},\,[a_{1},a_{2}]=a_{3}^{p}\rangle; \\
A_{4}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p\tau}a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p},\,[a_{1},a_{2}]=a_{3}^{p}\rangle; \\
A_{5}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{2}^{-p}a_{3}^{p},\,[a_{3},a_{1}]=a_{1}^{p}a_{2}^{p},\,[a_{1},a_{2}]=a_{1}^{p}\rangle; \\
A_{6}(\alpha)=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p\alpha}a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p}a_{2}^{p},\,[a_{1},a_{2}]=a_{3}^{p}\rangle,\ 1\leq\alpha\leq p-2; \\
B_{1}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{-p}a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p}a_{2}^{p},\,[a_{1},a_{2}]=a_{3}^{p}\rangle; \\
B_{2}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p},\,[a_{3},a_{1}]=a_{2}^{p},\,[a_{1},a_{2}]=1\rangle; \\
B_{3}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p\tau},\,[a_{3},a_{1}]=a_{2}^{p},\,[a_{1},a_{2}]=1\rangle; \\
B_{4}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p},\,[a_{1},a_{2}]=1\rangle; \\
B_{5}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p}a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p},\,[a_{1},a_{2}]=1\rangle; \\
B_{6}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{2}^{-p}a_{3}^{p},\,[a_{3},a_{1}]=a_{1}^{p},\,[a_{1},a_{2}]=a_{1}^{p}\rangle; \\
B_{7}(\alpha)=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p\alpha}a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p}a_{2}^{p},\,[a_{1},a_{2}]=1\rangle,\ 1\leq \alpha\leq p-2;\\
C_{1}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{-p}a_{2}^{-p},\,[a_{3},a_{1}]=a_{1}^{p}a_{2}^{p},\,[a_{1},a_{2}]=1\rangle; \\
C_{2}=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=a_{1}^{p},\,[a_{3},a_{1}]=1,\,[a_{1},a_{2}]=1\rangle; \\
D=\langle a_{1},a_{2},a_{3}:\,[a_{2},a_{3}]=1,\,[a_{3},a_{1}]=1,\,[a_{1},a_{2}]=1\rangle.
\end{array}$$
\mbox{}\\
Of these $12+2(p-1)$ groups, the groups $A_{1},\ldots ,A_{6}(\alpha)$ are the powerfully simple groups. There are $5+(p-2)$ of these.
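Reading off the presentation matrices above (row $i$ holding the exponents of $a_{1}^{p},a_{2}^{p},a_{3}^{p}$ in the $i$-th commutator relation), the rows span $V\cdot V$, so by the proposition an algebra is simple exactly when its presentation matrix is invertible over ${\mathbb F}_{p}$. The sketch below, with the illustrative choices $p=5$, $\tau=2$ and $\alpha=1$, checks this criterion against the list.

```python
p, tau, alpha = 5, 2, 1    # illustrative: tau = 2 is a non-square mod 5

# rows: exponent vectors of (a1^p, a2^p, a3^p) in the relations for
# [a2,a3], [a3,a1], [a1,a2], read off from the presentations above
mats = {
    'A1': [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    'A2': [[0, -1, 0], [1, 0, 0], [0, 0, 1]],
    'A3': [[1, -1, 0], [1, 0, 0], [0, 0, 1]],
    'A4': [[tau, -1, 0], [1, 0, 0], [0, 0, 1]],
    'A5': [[0, -1, 1], [1, 1, 0], [1, 0, 0]],
    'A6': [[alpha, -1, 0], [1, 1, 0], [0, 0, 1]],
    'B1': [[-1, -1, 0], [1, 1, 0], [0, 0, 1]],
    'B2': [[1, 0, 0], [0, 1, 0], [0, 0, 0]],
    'B3': [[tau, 0, 0], [0, 1, 0], [0, 0, 0]],
    'B4': [[0, -1, 0], [1, 0, 0], [0, 0, 0]],
    'B5': [[1, -1, 0], [1, 0, 0], [0, 0, 0]],
    'B6': [[0, -1, 1], [1, 0, 0], [1, 0, 0]],
    'B7': [[alpha, -1, 0], [1, 1, 0], [0, 0, 0]],
    'C1': [[-1, -1, 0], [1, 1, 0], [0, 0, 0]],
    'C2': [[1, 0, 0], [0, 0, 0], [0, 0, 0]],
    'D':  [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
}

def det3(M):               # determinant of a 3x3 matrix mod p
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])) % p

simple = {name for name, M in mats.items() if det3(M) != 0}
assert simple == {'A1', 'A2', 'A3', 'A4', 'A5', 'A6'}
```

Note that $B_{1}$ is exactly the $\alpha=p-1$ instance of the $A_{6}$ family, which is why the simple range for $A_{6}(\alpha)$ stops at $p-2$.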
\section{Introduction}
The superconducting state in most low-temperature superconductors is induced by the electron-phonon interaction \cite{Carbotte}, \cite{Carbotte1}. The coupling between the electron and phonon systems can be modeled with the use of the Fr\"{o}hlich Hamiltonian \cite{Frohlich}. Let us notice that, by using a canonical transformation to eliminate the phonon degrees of freedom in the Fr\"{o}hlich operator, it is possible to obtain the Hamiltonian of the BCS theory \cite{TransKanon}, \cite{BCS}. It should be clearly stated that the BCS model is able to accurately describe the thermodynamic properties of the low-temperature superconductors only in the weak-coupling limit, i.e. for $\lambda<0.2$, where $\lambda$ denotes the electron-phonon coupling constant.
In order to precisely estimate the thermodynamic parameters of superconductors with a greater value of $\lambda$, the approach proposed by Eliashberg should be used \cite{Eliashberg}. In the Eliashberg scheme the analysis of superconductivity starts directly from the Fr\"{o}hlich Hamiltonian, which is first written in the Nambu notation \cite{Nambu}. Next, with the use of the matrix Matsubara functions, the Dyson equations are determined; the self-energy of the system is calculated to second order in the equations of motion \cite{Carbotte}, \cite{Carbotte1}, \cite{Eliashberg}. In the last step, the Eliashberg set is determined in a self-consistent way.
The set of the Eliashberg equations generalizes the fundamental equation of the BCS model. In particular, one can take into consideration the complicated form of the electron-phonon interaction with the use of the Eliashberg function. Moreover, the application of the full version of the self-consistent method makes it possible to estimate the electron band effective mass in the presence of the electron-phonon interaction, as well as the energy shift function that renormalizes the electron band energy \cite{Carbotte}.
From the mathematical point of view, the solution of the Eliashberg set is a truly complicated matter. For that reason, the effect of the electron band energy renormalization is usually omitted, which reduces the number of equations by one third. In most cases this approximation does not affect the final results in a significant manner, as evidenced by the good agreement between theoretical predictions and the experimental data \cite{Szczesniak}. It should be noted that even the strict solution of the simplified Eliashberg set is not easy and can be performed only with the use of a powerful computer and highly advanced numerical methods \cite{Szczesniak1}.
The electron-phonon coupling constant for the $\rm{YNi_{2}B_{2}C}$ superconductor is equal to $0.676$ \cite{Jarosik}. In this case, the exact estimation of the thermodynamic parameters is possible only in the framework of the Eliashberg approach. In the related literature there are some reports that the simplest version of the Eliashberg equations might not be able to describe all relevant physical quantities. In particular, the dependence of the upper critical field ($H_{C2}$) on the temperature \cite{Shulga}, \cite{Doh} and the results obtained with the use of directional point-contact spectroscopy \cite{Muhhopadhyay}, \cite{Bashlakov} suggest that a two-band model is necessary. On the other hand, some researchers, based on thermal and spectroscopic experiments, argue for one-band models with a non-trivial wave symmetry ($s+g$ or even $d$-wave symmetry) \cite{Maki}-\cite{Nohara2}. The issue is clouded by the fact that the calculations conducted so far for the one-band Eliashberg model were not strict; the studies were based on a very simplified form of the Eliashberg function \cite{Gonnelli} or on approximate analytical formulas \cite{Michor}.
For this reason, in the presented paper, the most important thermodynamic parameters of the $\rm{YNi_{2}B_{2}C}$ superconductor were analyzed exactly in the framework of the one-band Eliashberg model. The calculations were based on the effective Eliashberg function:
$\left[\alpha^{2}F\left(\Omega\right)\right]_{\rm{eff}}=1.283 \left[\alpha^{2}F\left(\Omega\right)\right]_{\rm{tr}}$, where the transport function
($\left[\alpha^{2}F\left(\Omega\right)\right]_{\rm{tr}}$) was determined by Gonnelli {\it et al.} in \cite{Gonnelli} (for details see also \cite{Jarosik}).
\section{The Eliashberg equations}
The Eliashberg equations on the imaginary axis can be written in the following form \cite{Eliashberg}:
\begin{equation}
\label{r1}
\Delta_{n}Z_{n}=\frac{\pi}{\beta} \sum_{m=-M}^{M}
\frac{K\left(\omega_{n}-\omega_{m}\right)}
{\sqrt{\omega_m^2+\Delta_m^2}}
\Delta_{m},
\end{equation}
\begin{equation}
\label{r2}
Z_n=1+\frac {\pi}{\beta\omega _n }\sum_{m=-M}^{M}
\frac{K\left(\omega_{n}-\omega_{m}\right)}
{\sqrt{\omega_m^2+\Delta_m^2}}\omega_m,
\end{equation}
where $\Delta_{n}\equiv\Delta\left(i\omega_{n}\right)$ represents the order parameter and
$Z_{n}\equiv Z\left(i\omega_{n}\right)$ is the wave function renormalization factor; the $n$-th Matsubara frequency is denoted by: $\omega_{n}\equiv \frac{\pi}{\beta}\left(2n-1\right)$, where $\beta\equiv 1/k_{B}T$ ($k_{B}$ is the Boltzmann constant).
In the Eliashberg formalism the coupling between the electron and phonon systems is described by the pairing kernel:
\begin{equation}
\label{r3}
K\left(\omega_{n}-\omega_{m}\right)\equiv 2\int_0^{\Omega_{\rm{max}}}d\Omega\frac{\left[\alpha^{2}F\left(\Omega\right)\right]_{\rm{eff}}\Omega}
{\left(\omega_n-\omega_m\right)^2+\Omega ^2}.
\end{equation}
The effective Eliashberg function $\left[\alpha^{2}F\left(\Omega\right)\right]_{\rm{eff}}$ for the $\rm{YNi_{2}B_{2}C}$ superconductor is shown in Fig.\fig{f1}. On the basis of the presented data, one can see an exceptionally strong coupling between the electron gas and the crystal lattice vibrations at frequencies of about $20$ and $50$ meV. This effect was confirmed with the use of point-contact spectroscopy measurements in \cite{Naidyuk}. The value of the maximum phonon frequency ($\Omega_{\rm{max}}$) is equal to $67.51$ meV.
\begin{figure} [ht]
\includegraphics[scale=0.15]{Rys1.eps}
\caption{\label{f1}
The form of the effective Eliashberg function determined by using the transport function obtained by Gonnelli {\it et al.} \cite{Gonnelli}.}
\end{figure}
From the mathematical point of view, the strict analysis of the Eliashberg equations on the imaginary axis, however complicated, is much simpler than on the real axis. In the imaginary domain the arguments of the functions $\Delta_{n}$ and $Z_{n}$ take discrete values, so all problems connected with the numerical analysis of the non-linear integral equations are eliminated \cite{Eliashberg}. On the imaginary axis the Eliashberg equations can be solved in the following way \cite{Szczesniak1}: in the first step one defines the functions $K_{n,m}^{\left(+\right)}$ and $K_{n,m}^{\left(-\right)}$ for the quantities $\phi_{n}\equiv\Delta_{n}Z_{n}$ and $Z_{n}$ respectively, where: $K_{n,m}^{\left(\pm\right)}\equiv K\left(\omega_{n}-\omega_{m}\right)\pm K\left(\omega_{n}-\omega_{-m+1}\right)$. Next, the fact that the functions $\phi_{n}$ and $Z_{n}$ are symmetric is used. The Eliashberg equations then take the form:
\begin{equation}
\label{r4}
\phi_{n}=\sum_{m=1}^{M}J_{1}\left(\omega_{n},\omega_{m},Z_{m},\phi_{m}\right)\phi_{m},
\end{equation}
\begin{equation}
\label{r5}
Z_{n}=\sum_{m=1}^{M}J_{2}\left(\omega_{n},\omega_{m},Z_{m},\phi_{m}\right)Z_{m},
\end{equation}
where:
\begin{equation}
\label{r6}
J_{1}\left(\omega_{n},\omega_{m},Z_{m},\phi_{m}\right)\equiv \frac{\pi}{\beta}\frac{K_{n,m}^{\left(+\right)}}
{\sqrt{\left(Z_{m}\omega_{m}\right)^{2}+\phi^{2}_{m}}}
\end{equation}
and
\begin{equation}
\label{r7}
J_{2}\left(\omega_{n},\omega_{m},Z_{m},\phi_{m}\right)\equiv
\frac{\delta_{n,m}}{Z_{m}}+\frac{\pi}{\beta}\frac{\omega_{m}}{\omega_{n}}\frac{K_{n,m}^{\left(-\right)}}
{\sqrt{\left(Z_{m}\omega_{m}\right)^{2}+\phi^{2}_{m}}}.
\end{equation}
The symbol $\delta_{n,m}$ appearing in Eq. \eq{r7} denotes the Kronecker delta. Let us notice that the parameter $M$ must be chosen in such a way that the solutions of the Eliashberg equations for large values of $n$ and $T=\left[T\right]_{\rm{min}}$ take their asymptotic form: $\Delta_{n}\simeq 0$ and $Z_{n}\simeq 1$. In the case of $\rm{YNi_{2}B_{2}C}$ it is enough to assume $M=800$. Finally, the Eliashberg set is solved in an iterative way.
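To illustrate the iterative procedure, the sketch below solves Eqs. \eq{r1}-\eq{r2} for a toy Einstein spectrum, $\alpha^{2}F(\Omega)=(\lambda\Omega_{E}/2)\,\delta(\Omega-\Omega_{E})$, for which the kernel \eq{r3} has the closed form $K(\omega)=\lambda\Omega_{E}^{2}/(\omega^{2}+\Omega_{E}^{2})$. The coupling constant, phonon frequency, temperature and cutoff are illustrative choices, not the $\rm{YNi_{2}B_{2}C}$ values.

```python
import numpy as np

lam, WE = 1.0, 10.0        # assumed coupling constant and Einstein frequency (meV)
kBT, M = 0.3, 64           # assumed temperature k_B*T (meV) and Matsubara cutoff

beta = 1.0 / kBT
m = np.arange(-M + 1, M + 1)                # symmetrized set of 2M indices
w = (np.pi / beta) * (2 * m - 1)            # omega_m = (pi/beta)(2m - 1)
K = lam * WE**2 / ((w[:, None] - w[None, :])**2 + WE**2)

delta = np.ones_like(w)                     # initial guess for Delta_n
for _ in range(300):                        # under-relaxed fixed-point iteration
    R = np.sqrt(w**2 + delta**2)
    Z = 1.0 + (np.pi / (beta * w)) * (K @ (w / R))
    delta = 0.5 * delta + 0.5 * (np.pi / beta) * (K @ (delta / R)) / Z

i1 = M                                      # position of n = 1 in the arrays
# expected physics: mass enhancement Z_1 > 1 and a finite gap 0 < Delta_1 < WE
assert Z[i1] > 1.0 and 0.0 < delta[i1] < WE
assert np.allclose(delta[i1], delta[i1 - 1])   # Delta_n = Delta_{-n+1} symmetry
```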
\section{The numerical results}
The Eliashberg equations were solved for the temperature range from $k_{B}\left[T\right]_{\rm{min}}=0.2$ meV to $k_{B}T_{C}=1.335$ meV. The form of the order parameter for selected values of the temperature is presented in Fig.\fig{f2}. It is easy to notice that, as the temperature decreases, the function $\Delta_{n}$ takes higher maximum values (always at $n=1$) and becomes wider. The dependence of the order parameter on the temperature can be traced most conveniently by plotting the function $\Delta_{n=1}\left(T\right)$; see the inset in Fig.\fig{f2}. In the considered case we have taken $350$ accurate numerical values of the order parameter for $n=1$. We notice that the function $\Delta_{n=1}\left(T\right)$ can be fitted by the simple formula:
\begin{equation}
\label{r8}
\Delta_{n=1}\left(T\right)=\Delta_{n=1}\left(0\right)\sqrt{1-\left(\frac{T}{T_{C}}\right)^{\beta}},
\end{equation}
where $\Delta_{n=1}\left(0\right)\equiv \Delta_{n=1}\left(k_{B}T=0.2\quad\rm{meV}\right)=2.559$ meV and the fitting exponent is $\beta=3.21$ (not to be confused with the inverse temperature).
\begin{figure} [ht]
\includegraphics[scale=0.15]{Rys2.eps}
\caption{\label{f2}
The order parameter function on the imaginary axis for the selected values of the temperature; the first $100$ values of $\Delta_{n}$ are presented. The dependence of the order parameter $\Delta_{n=1}$ on the temperature is plotted in the inset.
}
\end{figure}
Knowledge of the form of the function $\Delta_{n}$ for $k_{B}T = 0.2$ meV enables the calculation of the value of the order parameter near the temperature of zero Kelvin ($\Delta\left(0\right)$). In particular, it was assumed that $\Delta\left(0\right)\simeq\Delta\left(k_{B}T = 0.2\quad\rm{meV}\right)$. To achieve that, the order parameter on the imaginary axis needs to be analytically continued to the real axis ($\Delta\left(\omega\right)$) by deriving the coefficients $p_{\Delta j}$ and $q_{\Delta j}$ in the expression \cite{Beach}, \cite{Vidberg}:
\begin{equation}
\label{r9}
\Delta\left(\omega\right)=\frac{p_{\Delta 1}+p_{\Delta 2}\omega+...+p_{\Delta r}\omega^{r-1}}
{q_{\Delta 1}+q_{\Delta 2}\omega+...+q_{\Delta r}\omega^{r-1}+\omega^{r}},
\end{equation}
where $r = 400$. The plot of the real (Re) and imaginary (Im) part of the function $\Delta\left(\omega\right)$ is shown in Fig.\fig{f3}. In the last step, the parameter $\Delta\left(0\right)$ is calculated from the equation \cite{Carbotte}, \cite{Carbotte1}, \cite{Eliashberg}: $\Delta\left(T\right)={\rm Re}\left[\Delta\left(\omega=\Delta\left(T\right),T\right)\right]$. The value $2.584$ meV was obtained.
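The continuation \eq{r9} can be carried out with the continued-fraction Pad\'{e} scheme of Vidberg and Serene \cite{Vidberg}. A minimal sketch of that algorithm is given below; it is tested here on a simple rational function with known values on the real axis, not on the actual $\Delta_{n}$ data.

```python
import numpy as np

def pade_continuation(z_in, u_in, z_out):
    """Continued-fraction Pade interpolation of u(z_in), evaluated at z_out."""
    N = len(z_in)
    g = np.zeros((N, N), dtype=complex)
    g[0] = u_in
    for q in range(1, N):      # recursion for the coefficient table
        g[q, q:] = (g[q-1, q-1] - g[q-1, q:]) / ((z_in[q:] - z_in[q-1]) * g[q-1, q:])
    a = np.diag(g)             # a_q = g_q(z_q)
    out = []
    for z in np.atleast_1d(z_out):
        val = 1.0 + 0j         # evaluate the continued fraction from the inside out
        for q in range(N - 1, 0, -1):
            val = 1.0 + a[q] * (z - z_in[q-1]) / val
        out.append(a[0] / val)
    return np.array(out)

# sanity check: continue a simple rational function from 6 "Matsubara" points
wn = np.pi * 0.3 * (2 * np.arange(1, 7) - 1)
G = lambda z: 1/(z + 1) + 0.5/(z + 2) + 0.25/(z + 3)
approx = pade_continuation(1j * wn, G(1j * wn), np.array([0.7 + 0j]))
assert abs(approx[0] - G(0.7)) < 1e-6
```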
The determination of the parameter $\Delta\left(0\right)$ allows us to estimate the dimensionless ratio $R_{1}\equiv\frac{2\Delta\left(0\right)}{k_{B}T_{C}}$, which in the BCS theory is a universal constant of the model, $\left[R_{1}\right]_{\rm{BCS}}=3.53$ \cite{BCS}. In the case of $\rm{YNi_{2}B_{2}C}$, the greater value $R_{1}=3.87$ was obtained.
With reference to the result predicted by the two-band Eliashberg model, the value of $R_{1}$ estimated in the framework of the one-band model should be interpreted as a resultant quantity. In particular, the two-band Eliashberg model, which reconstructs the experimental value of $R_{1}$ very well, predicts: $R_{1}=0.71\left[R_{1}\right]_{l}+0.29\left[R_{1}\right]_{s}=3.78$, where $\left[R_{1}\right]_{l}$ and $\left[R_{1}\right]_{s}$ denote the ratios for the large and the small value of the order parameter respectively \cite{Huang}. Comparing the obtained results, it is easy to notice that the one-band model only insignificantly overestimates $R_{1}$.
\begin{figure} [ht]
\includegraphics[scale=0.15]{Rys3.eps}
\caption{\label{f3}
The real and imaginary part of the order parameter on the real axis; $k_{B}T = 0.2$ meV was assumed.}
\end{figure}
In Fig.\fig{f4} the dependence of the wave function renormalization factor on the Matsubara frequency is presented for selected values of the temperature. On the basis of the achieved results it has been found that the function $Z_{n}$ always takes its maximum at $n=1$. In the framework of the Eliashberg formalism the value $Z_{n=1}$ plays a very important role, because it determines the ratio $Z_{n=1}=m_{e}^{*}/m_{e}$, where $m_{e}^{*}$ denotes the electron band effective mass in the presence of the electron-phonon coupling and $m_{e}$ is the electron band effective mass in the absence of the electron-phonon interaction. Based on the results presented in the inset of Fig.\fig{f4}, it has been concluded that $m_{e}^{*}$ takes its maximum value, equal to $1.676 m_{e}$, at $T = T_{C}$. Let us note that in the case $T = T_{C}$ the electron band effective mass in the presence of the electron-phonon coupling can be calculated with the use of the formula:
\begin{equation}
\label{r10}
m_{e}^{*}=\left(1+\lambda\right)m_{e}.
\end{equation}
On the basis of expression \eq{r10} it has been found that the calculated value of $m_{e}^{*}$ is identical with the value obtained using the Eliashberg equations. This result partially confirms the accuracy of the advanced numerical calculations.
\begin{figure} [ht]
\includegraphics[scale=0.15]{Rys4.eps}
\caption{\label{f4}
The wave function renormalization factor for the selected values of the temperature; the first $400$ values of $Z_{n}$ are presented. The dependence of the wave function renormalization factor on the temperature for the first Matsubara frequency is plotted in the inset.}
\end{figure}
Next, we have calculated the following ratios:
\begin{equation}
\label{r11}
R_{2}\equiv\frac{\Delta C\left(T_{C}\right)}{C^{N}\left(T_{C}\right)}
\end{equation}
and
\begin{equation}
\label{r12}
R_{3}\equiv\frac{T_{C}C^{N}\left(T_{C}\right)}{H_{C}^{2}\left(0\right)},
\end{equation}
where $\Delta C\equiv C^{S}-C^{N}$ stands for the difference between the specific heat of the superconducting and normal state. The dependence of $\Delta C$ on the temperature is determined with the use of the expression:
\begin{equation}
\label{r13}
\frac{\Delta C\left(T\right)}{k_{B}\rho\left(0\right)}
=-\frac{1}{\beta}\frac{d^{2}\left[\Delta F/\rho\left(0\right)\right]}
{d\left(k_{B}T\right)^{2}}.
\end{equation}
In Eq. \eq{r13} the symbol $\Delta F$ denotes the difference between the free energy of the superconducting and normal state; $\rho\left(0\right)$ is the value of the electron density of states at the Fermi level.
On the other hand, the specific heat in the normal state can be calculated using the formula: $\frac{C^{N}\left(T\right)}{ k_{B}\rho\left(0\right)}=\frac{\gamma}{\beta}$, where $\gamma\equiv\frac{2}{3}\pi^{2}\left(1+\lambda\right)$.
Finally, the thermodynamic critical field ($H_{C}$) should be estimated in accordance with the expression:
$\frac{H_{C}}{\sqrt{\rho\left(0\right)}}=\sqrt{-8\pi\left[\Delta F/\rho\left(0\right)\right]}$.
Once the solutions of the Eliashberg equations are known, $\Delta F$ is determined directly from \cite{Bardeen}:
\begin{eqnarray}
\label{r14}
\frac{\Delta F}{\rho\left(0\right)}&=&-\frac{2\pi}{\beta}\sum_{n=1}^{M}
\left(\sqrt{\omega^{2}_{n}+\Delta^{2}_{n}}- \left|\omega_{n}\right|\right)\\ \nonumber
&\times&\left(Z^{{\rm S}}_{n}-Z^{{\rm N}}_{n}\frac{\left|\omega_{n}\right|}
{\sqrt{\omega^{2}_{n}+\Delta^{2}_{n}}}\right),
\end{eqnarray}
where symbols $Z^{{\rm S}}_{n}$ and $Z^{{\rm N}}_{n}$ denote the wave function renormalization factor for the superconducting state and the normal state respectively.
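Given $\Delta F/\rho\left(0\right)$ on a temperature grid, Eq. \eq{r13} and the formula for $H_{C}$ reduce to simple numerical differentiation. The sketch below uses an assumed toy profile for $\Delta F$ (a two-fluid-like form, not the computed $\rm{YNi_{2}B_{2}C}$ data) purely to show this post-processing step.

```python
import numpy as np

Tc = 1.335                                   # k_B*T_C in meV (from the text)
kT = np.linspace(0.2, Tc, 350)               # k_B*T grid, as in the calculations
# assumed toy profile of DeltaF/rho(0); the real input comes from Eq. (14)
dF = -0.5 * (1 - (kT / Tc)**2)**2
# Eq. (13): DeltaC/(k_B rho(0)) = -(1/beta) d^2[dF]/d(k_B T)^2, with 1/beta = k_B T
dC = -kT * np.gradient(np.gradient(dF, kT), kT)
# thermodynamic critical field: H_C/sqrt(rho(0)) = sqrt(-8*pi*dF)
HC = np.sqrt(-8 * np.pi * dF)

assert HC[0] > HC[-1] == 0.0                 # H_C decreases to zero at T_C
assert dC[-1] > 0                            # specific-heat jump at T_C
```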
\begin{figure} [ht]
\includegraphics[scale=0.15]{Rys5.eps}
\caption{\label{f5}
(A) The dependence of the free energy difference between the superconducting and normal state on the temperature.
(B) The specific heat of the superconducting and normal state as a function of the temperature.}
\end{figure}
The determined form of $\Delta F\left(T\right)$ is presented in Fig.\fig{f5} (A), while in Fig.\fig{f5} (B) we show the dependence of the specific heat on the temperature for the superconducting and normal state. The characteristic jump of the specific heat that appears at the critical temperature is marked with a vertical line. We notice that the specific heat of the superconducting state was obtained on the basis of the $350$ accurate values of $\Delta F/\rho\left(0\right)$.
On the basis of the determined thermodynamic functions, the parameters $R_{2}$ and $R_{3}$ were calculated. In the case of $\rm{YNi_{2}B_{2}C}$ these coefficients take the values $1.79$ and $0.159$ respectively. Let us notice that $R_{2}$ and $R_{3}$ are universal constants of the BCS model, with $\left[R_{2}\right]_{\rm{BCS}}=1.43$ and $\left[R_{3}\right]_{\rm{BCS}}=0.168$ \cite{BCS}. The above results clearly show that for the $\rm{YNi_{2}B_{2}C}$ superconductor the value of the ratio $R_{2}$ significantly deviates from the BCS prediction.
The experimental values of the ratios $R_{2}$ and $R_{3}$ were determined in \cite{Michor}: $\left[R_{2}\right]_{\rm{exp}}=1.77$ and $\left[R_{3}\right]_{\rm{exp}}=0.160$. Comparing our theoretical values of $R_{2}$ and $R_{3}$ with the experimental ones, it can easily be noticed that the one-band Eliashberg model properly describes the experimental data.
\section{Concluding Remarks}
In the framework of the one-band Eliashberg model, selected thermodynamic properties of the $\rm{YNi_{2}B_{2}C}$ superconductor were calculated. In particular, the fundamental ratios $R_{1}\sim R_{3}$ were determined. It has been found that, in the case of $R_{1}$, the one-band Eliashberg model predicts a value only insignificantly higher than the value determined by the two-band model, which reproduces $R_{1}$ with very high accuracy. This result is connected with the simplified description of the superconducting phase in the one-band model in comparison with the two-band model; we have taken one effective Eliashberg function instead of the four Eliashberg functions and the four elements of the Coulomb pseudopotential matrix.
In the case of the two remaining ratios, the agreement between the predictions of the one-band Eliashberg model and the experimental data is very good. It is worth noticing that, in contrast to $R_{1}$ and $R_{3}$, the value of $R_{2}$ significantly deviates from the prediction of the BCS model.
In the last part of the summary, let us turn the reader's attention to the problem of the correct determination of the upper critical field. Based on the papers cited in the introduction, one can suppose with high probability that the exact form of the function $H_{C2}\left(T\right)$ can be reproduced only in the framework of the two-band theory. Let us recall that the two-band approach was successfully used by us for the description of the thermodynamic properties of the superconducting state induced in $\rm{MgB_{2}}$ \cite{MgB2}. However, in the case of $\rm{YNi_{2}B_{2}C}$ the discussed problem is far more complicated, because the appropriate Eliashberg functions have not been calculated in the literature (only electron-phonon coupling constants are used in the existing two-band approach). At present this issue is being intensively studied by us with the use of the {\it ab initio} approach \cite{AbInitio}.
\begin{acknowledgments}
The authors wish to thank Prof. K. Dzili{\'n}ski for the creation of the excellent working conditions and providing the financial support. Some computational resources have been provided by the RSC Computing Center.
\end{acknowledgments}
\section{Background and Related work}
\label{sec:relatedworks}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{ExpSetup-eps-converted-to.pdf}
\caption{Experiment Task Domains: (Left) Household tasks with the Fetch Research Robot picking and placing objects (top left) and indoor navigation (bottom left) (Right) Autonomous Driving tasks in the Virtual Reality simulation system. Tasks included parking and various navigation scenarios.
}
\label{fig:domains}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.80\textwidth]{ExpDesign_small-eps-converted-to.pdf}
\caption{Trust Transfer Experiment Design. Two categories of tasks were used: (A) picking and placing different objects, and (B) navigation in a room, potentially with people and obstacles. Participants were surveyed on their trust in the robot's ability to successfully perform three different tasks (red boxes) \emph{before} and \emph{after} being shown demonstrations of two tasks. The two {demonstrated/observed} tasks were always selected from the same cell (blue boxes; cell randomly assigned, with either both successes or both failures). The tested tasks were randomly selected from three different cells---the (i) same category and difficulty level, (ii) same category but different difficulty level, and (iii) different category but same difficulty level--- compared to the observed tasks.}
\label{fig:ExpDesign}
\end{figure*}
Research into trust in robots (and automation) is a large interdisciplinary endeavor spanning multiple fields including human-factors, psychology, and human-robot interaction. {This paper extends a prior conference version}~\citep{SohRSS18} {with additional discussion and analyses of the human-subject experiment. In addition, we include a new computational trust model --- that hybridizes the neural and Bayesian methods --- with updated experimental results and expanded discussion on the computational models, learned task-spaces, and word-based task descriptions.} In this section, we provide relevant background on trust and computational trust models.
\paragraph{Key Concepts and Definitions.} Trust is a multidimensional concept, with many factors characterizing human trust in robots, e.g., the human's technical expertise and the complexity of the robot~\citep{lee1994trust,muir1994trust,hancock2011meta}. Of these factors, two of the most prominent are the performance and integrity of the machine. Similar to existing work in robotics~\citep{Xu2016,xu2015optimo}, we assume that robots are not intentionally deceptive and focus on characterizing trust based on robot performance. We view trust as a belief in the competence and reliability of another agent to complete a task.
There are two types of trust that differ in their situation specificity. The first is dispositional trust or trust propensity, which is an individual difference for how willing one is to trust another. The second is situational or \emph{learned} trust, that results from interaction between the agents concerned. For example, the more you use your new autonomous vehicle, the more you may learn to trust it. In this paper, we will be concerned mostly with situational trust in robots.
\paragraph{Trust Measurement.} Trust is a latent dynamic entity, which presents challenges for measurement~\citep{Billings2012}. Previous work has derived survey instruments and methods for quantifying trust, including binary measures~\citep{binary}, continuous measures~\citep{desai2012a,Xu2016,LEE1992}, ordinal scales~\citep{Muir1989,Hoffman2013a,Jian2000}, and an Area Under Trust Curve (AUTC) measure~\citep{Desai13,yang2017evaluating}, which captures a participant's trust over the entire interaction with the robot by integrating binary trust measures over time. In this paper, we use a self-reported measure of trust \citep[similar to][]{xu2015optimo} and Muir's questionnaire~\citep{muir1994trust}.
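To make the AUTC idea concrete, here is a minimal sketch (our own illustration, not the cited authors' implementation) that integrates binary trust reports over an interaction with the trapezoidal rule and normalizes by the interaction's duration:

```python
# Sketch of an Area Under Trust Curve (AUTC) style measure: integrate the
# participant's (binary) trust reports over the interaction and normalize
# by its duration, so the score lies in [0, 1].
def autc(times, trust):
    """times: increasing timestamps; trust: trust report (0/1) at each time."""
    assert len(times) == len(trust) and len(times) >= 2
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += 0.5 * (trust[i] + trust[i - 1]) * dt  # trapezoidal rule
    return area / (times[-1] - times[0])
```

A participant who reports full trust throughout receives an AUTC of 1.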
\paragraph{Computational Models of Trust.} Previous work has explored explanatory models \citep[e.g.,][]{Castelfranchi2010,lee2004trust} and predictive models of trust. Recent models have focused on dynamic modeling, for example, a recent predictive model---OPTIMo~\citep{xu2015optimo}---is a Dynamic Bayesian Network with linear Gaussian trust updates trained on data gathered from an observational study. OPTIMo was shown to outperform an Auto-Regressive and Moving Average Value (ARMAV) model~\citep{lee1994trust}, and stepwise regression~\citep{Xu2016}. Because trust is treated as a ``global'' real-valued scalar in these models, they are appropriate when tasks are fixed (or have little variation). However, as our results will show, trust can differ substantially between tasks. As such, we develop models that capture both the dynamic property of trust and its variation across tasks. We leverage upon recurrent neural networks that have been applied to a variety of sequential learning tasks~\citep[e.g.,][]{Soh2017a} and online Gaussian processes that have been previously used in robotics~\citep{Soh2015,Soh2013a,Soh2014a}.
\paragraph{Application of Trust Models.} Trust emerges naturally in collaborative settings. In human-robot collaboration~\citep{nikolaidis2017human,Nikolaidis:2017:GMH:2909824.3020253}, trust models can be used to enable more natural interactions. For example, \cite{Chen2018Hri} proposed a decision-theoretic model that incorporates a predictive trust model, and showed that policies that took human trust into consideration led to better outcomes. The models presented in this work can be integrated into such frameworks to influence robot decision-making across different tasks.
\subsection{A GP-Neural Trust Model}
Both the neural and Bayesian models assume Markovian trust updates and that trust summarizes past experience with the robot. They differ principally in terms of the inherent flexibility of the trust updates. In the RNN model, the update parameters, i.e., the gate matrices, are learnt. As such, it is able to adapt trust updates to best fit the observed data. However, the resulting update equations do not lend themselves easily to interpretation. On the other hand, the GP employs fixed-form updates that are constrained by Bayes rule. While this can hamper predictive performance (humans are not thought to be fully Bayesian), the update is interpretable and may be more robust with limited data.
A natural question is whether we can formulate a ``structured'' trust update that combines the simplicity of the Bayes update, while allowing for some additional flexibility. Here, we examine a variant of the GP model that incorporates a neural component in the mean function update. In particular, we modify Eq. (\ref{eq:updatealpha}) with an additional term:
\begin{align}
\boldsymbol{\alpha}_{t} = & \boldsymbol{\alpha}_{t-1} + \gamma_1(\mathbf{C}_{t-1} \mathbf{k}_{t} + \mathbf{e}_{t}) + u(\boldsymbol{\alpha}_{t-1}, \mathbf{C}_{t-1}\mathbf{k}_{t}, \boldsymbol{\Lambda}\mat{x}_{t-1}, c^\ensuremath{a}_{t-1})
\label{eq:bayesnnupdate}
\end{align}
where $u(\cdot)$ models any residual not fully captured by the Bayes update. The function $u$ takes as input the previous mean parameters $\boldsymbol{\alpha}_{t-1}$, $\mathbf{C}_{t-1}\mathbf{k}_{t}$, the latent task vector $\mathbf{z}_{t-1} = \boldsymbol{\Lambda}\mat{x}_{t-1}$, and the robot performance $c^\ensuremath{a}_{t-1}$. The key idea here is that a neural component may modify the posterior distribution in a data-driven (non-Bayesian) manner to better capture the intricacies of human trust updates. In our experiments, $u(\cdot)$ is a simple feed-forward neural network, but alternative models can be used without changing the overall framework. For example, using a GRU-based network would enable non-Markovian updates and may further improve performance.
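The structured update above can be sketched as follows (a minimal NumPy illustration under assumed shapes; the residual network's architecture and weights are placeholders, not the trained model):

```python
import numpy as np

def residual_u(alpha, Ck, z, c, W1, W2):
    """Feed-forward residual u(.): a tanh layer over the concatenated inputs
    (previous mean parameters, C k, latent task vector, performance)."""
    h = np.tanh(W1 @ np.concatenate([alpha, Ck, z, [c]]))
    return W2 @ h

def gpnn_mean_update(alpha, C, k, e, gamma1, z, c, W1, W2):
    """alpha_t = alpha_{t-1} + gamma_1 (C k + e) + u(alpha_{t-1}, C k, z, c)."""
    Ck = C @ k
    return alpha + gamma1 * (Ck + e) + residual_u(alpha, Ck, z, c, W1, W2)
```

With $u \equiv 0$ (e.g., zero output weights) the update reduces to the pure Bayesian update, which is a useful sanity check.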
\section{Experiments}
\label{sec:experiments}
Our experiments were designed to establish whether the proposed trust models that incorporate inter-task structure outperform existing baseline methods. In particular, we sought to answer four questions:
\begin{itemize}
\item [\textbf{Q1}] Is it necessary to model trust transfer, i.e., do the proposed function-based models perform better than existing approaches when tested on unseen participants?
\item [\textbf{Q2}] Do the models generalize to unseen tasks?
\item [\textbf{Q3}] Is it necessary to model differences in initial bias, specifically perceptions of task difficulty?
\item [\textbf{Q4}] Does incorporating additional flexibility into the GP trust updates improve performance?
\end{itemize}
\subsection{Experimental Setup}
To answer these questions, we conducted two separate experiments. \textbf{Experiment E1} was a variant of the standard 10-fold cross-validation where we held-out data from 10\% of the participants (3 people) as a test set. This allowed us to test each model's ability to generalize to unseen participants on the same tasks. To answer question \textbf{Q2}, we performed a leave-one-out test on the tasks (\textbf{Experiment E2}); we held-out all trust data associated with one task and trained on the remaining data. This process yielded 12 folds, one per task.
\vspace{0.5em} %
\paragraph{Trust Models.} We evaluated seven models in our experiments:
\begin{itemize}
\item \textbf{GP}: A constant-mean Gaussian process trust model;
\item \textbf{PMGP}: The GP trust model with prior mean function;
\item \textbf{POGP}: The GP trust model with prior pseudo-observations;
\item \textbf{RNN}: The neural RNN trust model;
\item \textbf{GPNN}: The Bayesian GP-neural trust model with prior pseudo-observations;
\item \textbf{LG}: A linear Gaussian trust model similar to the updates used in OPTIMo~\citep{xu2015optimo};
\item \textbf{CT}: A baseline model with constant trust.
\end{itemize}
{The baseline CT and LG models did not utilize task features as they do not explicitly consider trust variation across tasks. The general LG model applies linear Gaussian updates}:
\begin{align}
p(\ensuremath{\tau^a}_t | \ensuremath{\tau^a}_{t-1}, c^a_{t-1}, c^a_{t-2}) = \mathcal{N}\left(\mathbf{w}_{\textsc{LG}}^\top
\begin{bmatrix}
\ensuremath{\tau^a}_{t-1} \\
c^a_{t-1} \\
c^a_{t-1} - c^a_{t-2}
\end{bmatrix}, \sigma_{\textsc{LG}}^2 \right)
\end{align}
{where $\mathbf{w}_{\textsc{LG}}$ and $\sigma_{\textsc{LG}}^2$ are learned parameters. In our dataset, the robot $a$'s performance in the two time steps was the same (both success/failure). Hence, the updated trust only depends on the previous trust $\ensuremath{\tau^a}_{t-1}$ and robot performance $c^a_{t-1}$.}
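The LG predictive mean can be sketched in a few lines (illustrative weights; the actual $\mathbf{w}_{\textsc{LG}}$ is learned from data):

```python
import numpy as np

def lg_update_mean(w, trust_prev, c_prev, c_prev2):
    """Mean of the linear Gaussian update:
    w^T [tau_{t-1}, c_{t-1}, c_{t-1} - c_{t-2}]."""
    features = np.array([trust_prev, c_prev, c_prev - c_prev2])
    return float(w @ features)
```

When the two performances are equal, the third feature vanishes, so the prediction depends only on the previous trust and the last outcome, as noted above.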
We implemented all the models using the PyTorch framework~\citep{paszke2017automatic}. %
Preliminary cross-validation runs were conducted to find good parameters for the models. The RNN used a two layer fully-connected neural network with 15 hidden neurons and Tanh activation units to project tasks to a 30-dimensional latent task space (Eqn. (\ref{eq:NNproj})). The trust updates, Eqn. (\ref{eq:RNNupdate}), were performed using two layers of GRU cells. A smaller 3-dimensional latent task space was used for the GP models. GP parameters were optimized during learning, except the length-scales matrix, which was set to the identity matrix $\mat{L} = \mat{I}$; fixing $\mat{L}$ resulted in a smoother optimization process. For the GPNN, $u(\cdot)$ was set as a simple two-layer feed-forward neural network with 20 neurons-per-layer and Tanh activation units.
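For readers unfamiliar with GRU cells, the following NumPy sketch shows a single-cell step of the kind used in the trust updates (illustrative gate convention and weights; the actual model uses PyTorch's stacked GRU layers):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: gated interpolation between the old state and a candidate."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand
```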
\paragraph{Datasets.} The models were trained using the data collected in our human subjects study. The RNN and GP-based models were \emph{not} given direct information about the difficulty and group of the tasks since this information is typically not known at test time. Instead, each task was associated with a 50-dimensional GloVe word vector~\citep{pennington2014glove} computed from the task descriptions in Fig. \ref{fig:tasks} (the average of all the word vectors in each description). {Complete task descriptions and code to derive the features are available in the online supplementary material}~\citep{SuppMat}.
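The task featurization reduces to averaging word vectors over each description. A minimal sketch with toy 3-dimensional stand-ins for the 50-dimensional GloVe vectors:

```python
import numpy as np

# Toy stand-ins for GloVe embeddings (illustrative values only).
toy_glove = {
    "pick": np.array([0.1, 0.4, 0.0]),
    "up": np.array([0.0, 0.1, 0.2]),
    "a": np.array([0.0, 0.0, 0.1]),
    "lemon": np.array([0.3, 0.2, 0.5]),
}

def task_features(description, embeddings):
    """Average the embedding vectors of the in-vocabulary words."""
    vecs = [embeddings[w] for w in description.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0)
```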
\paragraph{Training.} In these experiments, we predict how each individual's trust is dynamically updated. The tests are not conducted with a single ``monolithic'' trust model across all participants. Rather, training entails learning the latent task space and model parameters, which are shared among participants, e.g., $\boldsymbol\beta$ and $\boldsymbol{\Lambda}$ for the PMGP and the gate matrices for the GRU. However, each participant's model is updated \emph{only} with the tasks and outcomes that the participant observes.
To learn these parameters, all models are trained ``end-to-end''. We applied maximum likelihood estimation (MLE) and optimized model parameters $\boldsymbol\theta$ using the Bernoulli likelihood of observing the normalized trust scores (as soft labels):
\begin{align}
\mathcal{L}(\boldsymbol\theta) &= -\sum \left[ \hat{\tau}^a\log\bigl(\tau^a(\mathbf{x})\bigr) + (1-\hat{\tau}^a)\log\bigl(1-\tau^a(\mathbf{x})\bigr) \right]
\end{align}
where $\hat{\tau}^a$ is the observed normalized trust score. In more general settings where trust is \emph{not} observed, the models can be trained using observed human actions, e.g., in~\citep{ChenMin2017}. We employed the \textsc{Adam} algorithm~\citep{kingma2014adam} for a maximum of 500 epochs, with early stopping using a validation set comprising 15\% of the training data.
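The soft-label Bernoulli objective can be sketched directly (a NumPy illustration of the loss; the actual training uses PyTorch autograd):

```python
import numpy as np

def trust_nll(tau_hat, tau_pred, eps=1e-12):
    """Cross-entropy between observed (soft) and predicted trust scores."""
    tau_pred = np.clip(tau_pred, eps, 1.0 - eps)  # guard against log(0)
    return float(-np.sum(tau_hat * np.log(tau_pred)
                         + (1.0 - tau_hat) * np.log(1.0 - tau_pred)))
```

The loss is minimized when the predicted trust matches the observed normalized score; for a perfectly predicted hard label it approaches zero.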
\paragraph{Evaluation.} Evaluation is carried out on both pre- and post-update trust. For both experiments, we computed two measures: the average Negative Log-likelihood (NLL) and Mean Absolute Error (MAE). However, we observed that these scores varied significantly across the folds (each participant/task split). To mitigate this effect, we also computed relative Difference from Best (DfB) scores: $
\mathrm{NLL}_\mathrm{DfB}(i,k) = \mathrm{NLL}(i,k) - \mathrm{NLL}^*(i)$,
where $\mathrm{NLL}(i,k)$ is the NLL achieved by model $k$ on fold $i$ and $\mathrm{NLL}^*(i)$ is the best score among the tested models on fold $i$. $\mathrm{MAE}_\mathrm{DfB}$ is similarly defined.
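Computing DfB scores amounts to subtracting, per fold, the best score among the compared models (a minimal sketch):

```python
def dfb(scores):
    """scores: {model: {fold: score}} -> {model: {fold: score - fold best}}."""
    folds = next(iter(scores.values())).keys()
    best = {f: min(scores[m][f] for m in scores) for f in folds}
    return {m: {f: scores[m][f] - best[f] for f in folds} for m in scores}
```

By construction, the best model on each fold receives a DfB of zero.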
\subsection{Results}
Results for \textbf{E1} are summarized in Tbl. \ref{tbl:E1} with boxplots of $\mathrm{MAE}_\mathrm{DfB}$ shown in Fig. \ref{fig:partsplits}. In brief, the GPNN, RNN, and POGP outperform the other models on both datasets across the performance measures. The POGP makes better predictions on the Household dataset, whilst the RNN performed better on the Driving dataset. The GPNN, however, obtains good performance across the datasets. In addition, the GP achieves better or comparable scores on average relative to LG and CT. Taken together, these results indicate that the answer to \textbf{Q1} is in the affirmative: accounting for trust transfer between tasks leads to better trust predictions.
\begin{figure}
\centering
\includegraphics[width=0.50\columnwidth]{3participant_MAE-crop-eps-converted-to.pdf}
\caption{$\mathrm{MAE}_\mathrm{DfB}$ scores for experiment \textbf{E1} with medians (lines) and means (triangles) shown. The proposed task-dependent trust models (GPNN, RNN, and POGP) are superior at predicting trust scores on unseen test participants. The GPNN achieves the lowest average $\mathrm{MAE}_\mathrm{DfB}$ scores across the two domains.\label{fig:partsplits}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.50\columnwidth]{LOOtask_MAE-crop-eps-converted-to.pdf}
\caption{$\mathrm{MAE}_\mathrm{DfB}$ scores for experiment \textbf{E2} with medians (lines) and means (triangles) shown. The proposed task-dependent trust models (GPNN, RNN, and POGP) are superior at predicting trust scores on unseen test tasks. Similar to Fig. \ref{fig:partsplits}, the GPNN achieves the lowest average $\mathrm{MAE}_\mathrm{DfB}$ scores across the two domains. \label{fig:tasksplits}}
\end{figure}
Next, we turn our attention to \textbf{E2}, which is potentially the more challenging experiment. The GPNN, RNN, and POGP again outperform the other models (see Tbl. \ref{tbl:E2} and Fig. \ref{fig:tasksplits}). These models are able to make accurate trust predictions on \emph{unseen} tasks (\textbf{Q2}), suggesting that (i) the word vectors are informative of the task, and (ii) the models learnt reasonable projections into the task embedding space.
To answer \textbf{Q3} (whether modeling initial human bias is required), we examined the differences between the POGP, PMGP, and GP. The PO/PMGP achieved lower or similar scores to the GP model across the experiments and domains, indicating that difficulty modeling enabled better performance. The pseudo-observation technique POGP always outperformed the linear mean function approach PMGP, suggesting initial bias is nonlinearly related to the task features. Potentially, using a non-linear mean function may allow PMGP to achieve similar performance to POGP.
Finally, we observed that including additional flexibility in the GP mean update improved model performance (\textbf{Q4}). As stated above, the GPNN achieves similar or better performance on both datasets compared to the other GP variants.
\begin{table}
\begin{center}
\caption{Model Performance on Held-out Participants (Experiment \textbf{E1}). Average Negative log-likelihood (NLL) and Mean Absolute Error (MAE) scores shown with standard errors in brackets. Best scores in \textbf{bold}.\label{tbl:E1}}
\begin{tabular}{ c c c c c }
\hline
\multirow{2}{*} {Models} & \multicolumn{2}{c} {Household} & \multicolumn{2}{c} {Driving}\\
& NLL & MAE & NLL & MAE \\
\hline
\multirow{2}{*} {GPNN} & \textbf{0.558}& \textbf{0.158}& 0.555 & \textbf{0.172}\\
& (\textbf{0.028})& (\textbf{0.011})& (0.026) & (\textbf{0.010})\\
\multirow{2}{*} {RNN} & 0.571 & 0.173 & \textbf{0.549}& 0.175 \\
& (0.023) & (0.010) & (\textbf{0.024})& (0.011) \\
\multirow{2}{*} {POGP} & 0.558 & 0.161 & 0.553 & 0.176 \\
& (0.027) & (0.013) & (0.025) & (0.012) \\
\multirow{2}{*} {PMGP} & 0.577 & 0.176 & 0.567 & 0.197 \\
& (0.019) & (0.010) & (0.018) & (0.011) \\
\multirow{2}{*} {GP} & 0.575 & 0.175 & 0.588 & 0.208 \\
& (0.023) & (0.013) & (0.022) & (0.012) \\
\multirow{2}{*} {LG} & 0.578 & 0.182 & 0.584 & 0.208 \\
& (0.023) & (0.011) & (0.022) & (0.011) \\
\multirow{2}{*} {CT} & 0.662 & 0.249 & 0.649 & 0.266 \\
& (0.029) & (0.016) & (0.017) & (0.010) \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Model Performance on Held-out Tasks (Experiment \textbf{E2}). Average Negative log-likelihood (NLL) and Mean Absolute Error (MAE) scores shown with standard errors in brackets. Best scores in \textbf{bold}. \label{tbl:E2}}
\begin{tabular}{ c c c c c }
\hline
\multirow{2}{*} {Models} & \multicolumn{2}{c} {Household} & \multicolumn{2}{c} {Driving}\\
& NLL & MAE & NLL & MAE \\
\hline
\multirow{2}{*} {GPNN} & \textbf{0.533}& \textbf{0.156}& 0.542 & \textbf{0.163}\\
& (\textbf{0.014})& (\textbf{0.007})& (0.019) & (\textbf{0.009})\\
\multirow{2}{*} {RNN} & 0.542 & 0.166 & \textbf{0.531}& 0.174 \\
& (0.012) & (0.006) & (\textbf{0.016})& (0.014) \\
\multirow{2}{*} {POGP} & 0.542 & 0.164 & 0.562 & 0.186 \\
& (0.014) & (0.007) & (0.018) & (0.008) \\
\multirow{2}{*} {PMGP} & 0.564 & 0.174 & 0.574 & 0.202 \\
& (0.014) & (0.009) & (0.015) & (0.008) \\
\multirow{2}{*} {GP} & 0.551 & 0.174 & 0.586 & 0.207 \\
& (0.010) & (0.009) & (0.013) & (0.009) \\
\multirow{2}{*} {LG} & 0.568 & 0.195 & 0.584 & 0.222 \\
& (0.014) & (0.009) & (0.013) & (0.010) \\
\multirow{2}{*} {CT} & 0.669 & 0.273 & 0.661 & 0.284 \\
& (0.008) & (0.007) & (0.013) & (0.005) \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Discussion}
In summary, our experiments show that modeling trust correlations across tasks improves predictions. Our Bayesian and neural models achieve better performance than existing approaches that treat trust as a single scalar value. To be clear, neither model attempts to represent exact trust processes in the human brain; rather, they are computational analogues. Both modeling approaches offer conceptual frameworks for capturing the principles of trust formation and transfer. From one perspective, the GP model extends the single global trust variable used in \cite{Chen2018Hri} and ~\cite{xu2015optimo} to a \emph{collection} of latent trust variables. In the following, we highlight matters relating to the learned feature projections, {changes in trust during a task}, and the neural-GP trust updates.
\paragraph{Learned Trust-Task Spaces.} Although the neural and Bayesian models differ conceptually and in details, they both leverage upon the idea of a vector task space $Z \subseteq \mathbb{R}^k$ in which similarities---and thus, trust transfer---between tasks can be easily computed. For the RNN, $Z$ is a dot-product space. For the GP, similarities are computed via the kernel function; the kernel linearly projects the task features into a lower dimensional space ($\mat{z} = \boldsymbol{\Lambda}\mat{x}$) where an anisotropic squared exponential kernel is applied. As an example, Figs. \ref{fig:LSHousehold} and \ref{fig:LSDriving} show the learned GPNN latent task points for the Household and Driving domains respectively; we observe that tasks in the same group are clustered together. Furthermore, the easy and difficult tasks within each task group are also positioned closer together. This structure is consistent with the use of a squared exponential kernel where distances between the latent points determine covariance: the closer the points (i.e., the more similar the tasks), the more similar the latent function values at those points.
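The GP task kernel described above can be sketched as follows (illustrative projection matrix and length-scales; recall the experiments fix $\mat{L} = \mat{I}$):

```python
import numpy as np

def task_kernel(x1, x2, Lam, lengthscales):
    """Project features (z = Lam x), then apply an anisotropic squared
    exponential kernel in the latent space."""
    z1, z2 = Lam @ x1, Lam @ x2
    d = (z1 - z2) / lengthscales
    return float(np.exp(-0.5 * np.dot(d, d)))
```

The kernel equals 1 for identical tasks and decays with latent-space distance, so nearby latent points imply strongly correlated trust values.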
\paragraph{Generalization across Task Word Descriptions.} In our experiments, we used word vector representations as task features, which we found to enable reasonable generalization across similar descriptions. For example, after seeing the robot successfully navigate around obstacles, but \emph{failing} to pick up a lemon, the model predicts sensible trust values for the following tasks:
\begin{itemize}
\item ``\emph{Navigate while following a path}'': 0.81
\item ``\emph{Go to the table}'': 0.86
\item ``\emph{Pick up a banana}'': 0.61
\end{itemize}
We posit that this results from the fact that vector-based word representations are generally effective at capturing synonyms and word relationships. {Given a latent space with sufficiently large dimensionality, we expect the model to scale to a larger number of task categories and domains; there is evidence that moderately-sized latent spaces ($<1000$) yield accurate models for complex tasks such as language translation}~\citep{Sutskever2014} {and image captioning}~\citep{ren2017deep}. {Given longer task descriptions, more sophisticated techniques from NLP} \citep[e.g.,][]{bowman2016generating} {beyond the simple averaging used in our experiments can be adapted to construct usable task features of reasonable length.}
A prevailing issue is that the current word/sentence representations may not distinguish subtle semantics, e.g., the task features lack a notion of physical constraints. As such, the model may make unreasonable predictions when given task descriptions that are syntactically similar but semantically different. As an example, the same model predicts the human highly trusts the robot's capability to ``\emph{Navigate to the moon}'' ($\tau^a = 0.83$). To remedy this issue, we can use alternative features; more informative vector-based features can be used without changing the methods described. Applying structured feature representations (e.g., graphs) would require different kernels and embedding techniques. Future work may also examine more sophisticated hierarchical space representations.
\begin{figure}
\centering
\includegraphics[width=0.50\columnwidth]{LatentSpacePlots_Household-crop-eps-converted-to.pdf}
\caption{The task space for the Household domain where each point is a task. Tasks of a similar type and difficulty are clustered together; tasks labelled A correspond to Pick-and-Place tasks and B are Indoor Navigation tasks. The lower-numbered tasks (1-3) were considered easier by participants.}
\label{fig:LSHousehold}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.50\columnwidth]{LatentSpacePlots_Driving-crop-eps-converted-to.pdf}
\caption{The task space for the Driving domain. Similar to Fig. \ref{fig:LSHousehold} above, similar tasks are closer together. Points labelled C correspond to Navigation tasks (e.g., lane merging) and points labelled D to Parking tasks. The lower-numbered tasks (1-3) were considered easier by participants.}
\label{fig:LSDriving}
\end{figure}
\subsubsection{Trust Variations within a Task.} {The presented computational models are ``event-based'' in that trust was updated after each complete task performance. However, prior work has shown that trust can change even as the task is being carried out}~\citep{Desai13,yang2017evaluating}. {To accommodate intra-task trust variability, the presented Bayesian and neural models can be easily altered to be updated at user-defined intervals with a corresponding observation. These modifications and studies to validate such models would make for interesting future work.}
\paragraph{GP-Neural Updates.} Finally, we sought to better understand the relationship between the Bayesian and neural components of the GPNN mean update. Was the neural network $u(\cdot)$ only making minor ``corrections'' to the trust update or was it playing a larger role? To answer this question, we compared the relative norms of the second term ($\eta_{\textrm{GP}} = {\| \gamma_1(\mathbf{C}_{t-1} \mathbf{k}_{t} + \mathbf{e}_{t})\|}/{\|\boldsymbol{\alpha}_{t-1} \|}$) and third term ($\eta_{\textrm{NN}} = {\| u(\boldsymbol{\alpha}_{t-1}, \mathbf{C}_{t-1}\mathbf{k}_{t}, \boldsymbol{\Lambda}\mat{x}_{t-1}, c^\ensuremath{a}_{t-1}) \|}/{\|\boldsymbol{\alpha}_{t-1} \|}$) on the RHS of Eq. (\ref{eq:bayesnnupdate}) during updates across randomly sampled tasks. Fig. \ref{fig:GPNNnorms} shows a scatter plot of the relative norms of both components. We find a positive correlation (Kendall tau $= 0.2$, $p$-value $=10^{-39}$), but the relationship is clearly nonlinear. Interestingly, $\eta_{\textrm{NN}}$ could be relatively large when $\eta_{\textrm{GP}}$ was close to zero, indicating that the neural component was playing a significant role in the trust update. We also experimented with completely removing the Bayesian portion of the update, but this modification had poorer performance, potentially due to limited data. This suggests that trust is not purely Bayesian and a non-trivial correction is needed to achieve better performance.
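The diagnostic itself is straightforward: both contributions to the mean update are normalized by the magnitude of the previous mean parameters (a minimal sketch):

```python
import numpy as np

def relative_norms(alpha_prev, bayes_term, nn_term):
    """eta_GP and eta_NN: update-term norms relative to ||alpha_{t-1}||."""
    base = np.linalg.norm(alpha_prev)
    return np.linalg.norm(bayes_term) / base, np.linalg.norm(nn_term) / base
```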
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{GPNNnorms-eps-converted-to.pdf}
\caption{The relative norms of the GP component $\eta_{\textrm{GP}}$ (x-axis) and neural component $\eta_{\textrm{NN}}$ (y-axis) for updates across randomly sampled tasks. Blue points are for the Household domain and orange +'s for Driving. There is a general positive correlation between the norms, but the relationship is nonlinear.}
\label{fig:GPNNnorms}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
This paper takes a first step towards conceptualizing and formalizing predictive models of human trust in robots across multiple tasks. It presents findings from a human-subjects study in two separate domains and shows the effects of task similarity and difficulty on trust formation and transfer.
The experimental findings lead to three novel models that capture the form and evolution of trust as a latent function. Our experiments show that the function-based models achieved better predictive performance on both unseen participants and unseen tasks. These results indicate that (i) a task-dependent functional trust model more accurately reflects human trust across tasks, and (ii) it is possible to accurately predict trust transfer by leveraging upon a shared task space representation and update process.
Formalizing trust as a function opens up several avenues for future research. In particular, we aim to fully exploit this characterization by incorporating other contexts. Does trust transfer when the environment changes substantially or a new, but similar robot appears? Proper experimental design to elicit and measure trust is crucial. Our current experiments employ relatively short interactions with the robot and rely on subjective self-assessments. Future experiments could employ behavioral measures, such as operator take-overs and longer-term interactions where trust is likely to play a more significant role.
This work limits the investigation to trust resulting from complete observations of the robot's performance/capabilities. {However, in real-world collaborative settings, the human user may not observe all the robot's successes or failures: how humans infer the robot's performance under conditions of partial observability remains an interesting open question.} It is also essential to examine trust in the robot's ``intention'', e.g., its policy~\citep{Huang2018} and decision-making process. Arguably, trust is most crucial in new and uncertain situations whereby both robot capability and intention can influence outcomes. In very recent work~\citep{Xie2019Hri}, we have begun to examine how human mental models of these factors influence decisions to trust robots.
Finally, it is important to investigate how these trust models enhance human-robot interaction. {In our current experiments, the human does not get involved in task completion. Our trust model can be used without modification in the collaborative setting where the human and the robot work together to complete a task, provided that the shared goal is unaffected by the change in trust.} Embedding trust models in a decision-theoretic framework enables a robot to adapt its behavior according to a human teammate's trust and, as a result, promotes fluent long-term collaboration. We have begun a preliminary investigation using trust transfer models in a Partially Observable Markov Decision Process (POMDP), extending the work in~\citep{Chen2018Hri}. {We are particularly interested in how trust models impact decision-making in assistive tasks}~\citep{gombolay2018robotic,Soh2015}.
\section*{Acknowledgements}
This work was supported in part by a NUS Office of the Deputy President (Research and Technology) Startup Grant and in part by MoE AcRF Tier 2 grant MOE2016-T2-2-068. Thank you to Indu Prasad for her help with the data analysis.
\subsection{Summary of Findings and Discussion}
Our main findings support the intuition that human trust transfers across tasks, but to different degrees. More specifically, similar tasks are more likely to share a similar level of trust (\textbf{H1}). Observations of robot performance change trust both in the observed task and in similar yet unseen tasks (\textbf{H2}). Finally, trust transfer is asymmetric: positive trust transfers more easily to simpler tasks than to more difficult tasks (\textbf{H3}). These findings suggest that to infer human trust accurately in a collaboration across multiple tasks, robots should consider the similarity and difficulty of previous tasks.\par
\subsubsection{Qualitative Analyses.} Participant justifications for their trust scores were found to be consistent with the above findings. For example, a participant who was previously shown the robot successfully picking and placing a plastic bottle and then asked about her trust in the robot to pick and place a can, stated ``\emph{I trust this robot because the shape of the can of chips is similar to the bottle of soda}'', whilst another participant who observed failures stated he distrusted the robot because the task was ``\emph{highly similar to the last two failed tasks}''.
The justifications also revealed that some participants were more willing to trust initially (higher dispositional trust), e.g., ``\emph{Yes, I first gave the robot the benefit of the doubt on a task I saw that similar robot can perform some of the time; then I revised my trust completely based on what it actually did on a similar task}''. Differences in perceived task difficulty also played a role in initial trust, ``\emph{I trust the robot because this seems like a simple enough task.}'' and in trust transfer, for example, ``\emph{Robot failed much easier task of navigating around a stationary item, so I don't think it can follow a moving object}''.
\subsubsection{Trust and Participant Characteristics.}
Finally, we examine how participant characteristics may affect dispositional and situational trust. Specifically, we analyzed the effect of four independent variables---gender, amount of computer usage, prior experience with video games, and prior experience with robots---on the average initial trust and the average trust change. All four independent variables were self-reported; participants indicated their level of computer usage per week by selecting one of the following five choices: $<$10 hours, 10-20 hours, 20-30 hours, 30-40 hours, $>$40 hours. Prior experience with video games and robots was measured using the agreement questions, ``\emph{I'm experienced with video games}'' and ``\emph{I'm experienced with robots}'' using a 5-point Likert scale. We used the scores collected using Muir's questionnaire to compute the average initial trust and trust change. {A polynomial contrast model}~\citep{Saville1991} {was applied since the independent variables are ordinal and the true metric intervals between the levels are unknown\footnote{Polynomial contrast allows ordinal variables to enter not only linearly but also with higher order to better ascertain monotonic effects.}.} We also ran tests against the task-specific trust but the results were not significant; this was potentially due to participants being exposed to very different tasks.
Table \ref{tbl:regression} summarizes our results. {After correcting for family-wise error, we found a moderately significant association ($\alpha = 0.1$) between initial trust and prior experience with robots. Participants who had prior exposure to robots were more likely to trust the robots in our experiments. More experience with video games is significantly associated with trust changes. Although not statistically significant, it may also be negatively associated with initial trust (Holm-corrected $p = 0.12$). These results suggest that participant characteristics, such as their prior experience with technology, do play a role in trust formation and dynamics. However, the relationships do not appear straightforward and we leave further examination of these factors to future work. }
\subsubsection{Study Limitations.} {In this work, each participant only observed the robot performing two tasks; we plan to investigate longer interactions involving multiple trust updates in future work. Furthermore, our reported results are based on subjective self-assessments in non-critical tasks. We believe our results remain valid even when the robot's actions affect the human participant's goals. Our recent work}~\citep{Xie2019Hri,Chen2018Hri} {includes behavioral measures, such as operator take-overs, and greater forms of risk (e.g., performance bonuses/penalties); these experiments also provide evidence for trust transfer across tasks.}
\subsection{Bayesian Gaussian Process Trust Model}
\label{sec:BayesianTrustModel}
In our first model, we view trust formation as a cognitive process, specifically, human function learning~\citep{Griffiths2009}. {We adopt a rational Bayesian framework, i.e., the human is learning about the robot capabilities by combining prior beliefs about the robot with evidence (observations of performance) via Bayes rule. More formally, let us denote the task at time $t$ as $\mathbf{x}_{t}$, and the robot $a$'s corresponding performance as $c^a_t$. Given binary success outcomes (where $c^\ensuremath{a}_t = 1$ indicates success), we introduce a latent function $f^a$ and model trust in the robot as,}
\begin{align}
\tau^a_t(\mat{x}_t) = & \int P(c^a_t = 1| f^a, \mat{x}_t)p_{t}(f^a)df^a
\end{align}
{where $p_{t}(f^a)$ is the human's current belief over $f^a$, and $P(c^\ensuremath{a}_{t}|f^a, \mathbf{x}_{t})$ is the likelihood of observing the robot performance $c^\ensuremath{a}_{t}$ given the task $\mathbf{x}_{t}$. Intuitively, $f^a$ can be thought of as a latent ``unnormalized'' trust function that has range over the real number line. Given an observation of robot performance, the human's trust is updated via Bayes rule,}
\begin{align}
p_{t}(f^a|\mathbf{x}_{t-1}, c^\ensuremath{a}_{t-1}) = \frac{P(c^\ensuremath{a}_{t-1}|f^a, \mathbf{x}_{t-1})p_{t-1}(f^a)}{\int P(c^\ensuremath{a}_{t-1}|f^a, \mathbf{x}_{t-1})p_{t-1}(f^a) df^a},
\label{eq:bayesupdate}
\end{align}
{where $p_{t}$ is the posterior distribution over $f^a$.} %
{To use this model, we need to specify the prior $p_0(f^a)$ and likelihood $P(c^a_i| f^a, \mat{x})$ functions.} Similar to prior work in human function learning~\citep{Griffiths2009}, we place a Gaussian process (GP) prior over $f^a$,
\begin{align}
p_{0}(f^a) = \mathcal{N}( m(\mat{x}), k(\mat{x}, \mat{x}')),
\end{align}
where $m(\cdot)$ is the prior mean function, and $k(\cdot, \cdot)$ is the kernel or covariance function. {The literature on GPs is large and we refer readers wanting more detail to} \cite{williams2006gaussian}. {In brief, a GP is a collection of random variables, of which any finite subset is jointly Gaussian}. {In this model, any given task feature $\mathbf{x}$ indexes a random variable representing the real function value $f^a(\mathbf{x})$ at specific location $\mathbf{x}$. The nice properties of Gaussians enable us to perform efficient marginalization, which makes the model especially attractive for predictive purposes. Note that the GP is completely parameterized by its mean and kernel functions, which we describe below.}
\paragraph{Covariance Function.} The kernel function is an essential ingredient for GPs and quantifies the similarities between inputs (tasks). {Popular kernel functions include the squared exponential and Mat\'{e}rn kernels} \citep{williams2006gaussian}. Although our task features are generally high dimensional (e.g., the word features used in our experiments), we consider tasks to live on a low-dimensional manifold, i.e., a psychological task space. With this in mind, we use a projection kernel:
\begin{align}
k(\mat{x}, \mat{x}') = \exp(-(\mat{x} - \mat{x}')^\top\mat{M}(\mat{x} - \mat{x}'))
\end{align}
with a low rank matrix $\mat{M} = \boldsymbol{\Lambda}\mat{L}\boldsymbol{\Lambda}^\top$
where $\boldsymbol{\Lambda} \in \mathbb{R}^{d \times k}$ and $\mat{L}$ is a diagonal matrix of length-scales capturing axis-aligned relevances in the projected task space.
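The projection kernel above can be sketched in a few lines. The dimensions, projection matrix, and length-scales below are illustrative assumptions, not learned values:

```python
import numpy as np

def projection_kernel(x, x_prime, Lam, lengthscales):
    """k(x, x') = exp(-(x - x')^T M (x - x')) with M = Lam @ diag(L) @ Lam^T.

    Lam          : (d, k) projection into the latent task space
    lengthscales : (k,) positive diagonal of L (axis-aligned relevances)
    """
    proj = Lam.T @ (x - x_prime)            # project the difference into R^k
    return float(np.exp(-np.sum(lengthscales * proj ** 2)))

# Toy usage with made-up dimensions: d = 5 task features, k = 2 latent dims.
rng = np.random.default_rng(0)
Lam = rng.normal(size=(5, 2))
ls = np.array([1.0, 0.5])
x1, x2 = rng.normal(size=5), rng.normal(size=5)
assert np.isclose(projection_kernel(x1, x1, Lam, ls), 1.0)  # identical tasks
assert 0.0 < projection_kernel(x1, x2, Lam, ls) <= 1.0      # similarity in (0, 1]
```

Because $(\mat{x} - \mat{x}')^\top\mat{M}(\mat{x} - \mat{x}')$ factors through $\boldsymbol{\Lambda}^\top(\mat{x}-\mat{x}')$, only the low-dimensional projected difference ever needs to be formed.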
\paragraph{Capturing Prior Estimations of Task Difficulty and Initial Bias.} As our studies have shown, perceived difficulty results in an asymmetric transfer of trust (\textbf{H3}), which presents a difficulty for standard zero/constant-mean GPs given symmetric covariance functions. To address this issue, we explore two different approaches:
\begin{enumerate}
\item First, the mean function is a convenient way of incorporating a human's prior estimation of task difficulty; tasks which are presumed to be difficult (beyond the robot's capability) will have low values. Here, we have used a data-dependent linear function, $m(\mat{x}) = \boldsymbol{\beta}^\top\mat{x}$ where $\boldsymbol\beta$ is learned along with other GP parameters.
\item A second approach is to use pseudo-observations $\mat{x}^+$ and $\mat{x}^-$ and associated $f^a$'s to bias the initial model. Intuitively, $\mat{x}^+$ ($\mat{x}^-$) summarizes the initial positive (negative) experiences that a person may have had. {The pseudo-observations are implemented simply as pre-observed data-points that the models are seeded with, prior to any trust updates.} Similar to $\boldsymbol\beta$, these parameters are learned during training. {In our experiments, the pseudo-observations are trained using data from all the individuals in each training set, and thus, represent the ``average'' initial experience.}
\end{enumerate}
Both approaches allow the GP to accommodate the aforementioned asymmetry; the evidence has to counteract the prior mean function or pseudo-observations respectively.
\paragraph{Observation Likelihood.} In standard regression tasks, the observed ``outputs'' are real-valued. However, participants in our experiments observed {binary outcomes (the robot succeeded or failed)} and thus, we use the probit likelihood~\citep{neal1997monte},
\begin{align}
P(c^\ensuremath{a}_{t}|f^a, \mathbf{x}_{t}) = \Phi \left( \frac{c^\ensuremath{a}_{t}(f^a(\mat{x}_{t}) - m(\mat{x}_t))}{\sigma_n} \right)
\label{eq:likelihood}
\end{align}
where $\Phi(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^y \exp\left(-\frac{t^2}{2}\right)dt$ is the CDF of the standard normal,
and $\sigma_n^2$ is the noise variance. {Here, $\Phi(y)$ is a response function that ``squashes'' the function value $y = f^a(\mat{x}) \in (-\infty, \infty)$ onto the range $[0,1]$. Alternative likelihoods can be used without changing the overall framework.}
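As a sketch, the probit likelihood can be evaluated directly from the error function. Here we assume the standard probit encoding $c \in \{-1, +1\}$ (the paper writes $c=1$ for success), with the latent value $f$, prior mean $m$, and noise scale $\sigma_n$ at a fixed task:

```python
import math

def norm_cdf(y):
    """CDF of the standard normal, Phi(y), via the error function."""
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def probit_likelihood(c, f, m, sigma_n):
    """P(c | f, x) = Phi(c * (f(x) - m(x)) / sigma_n).

    c is the outcome, +1 (success) or -1 (failure) -- the usual probit
    convention, an assumption here. f and m are the latent function value
    and prior mean evaluated at the task features x.
    """
    return norm_cdf(c * (f - m) / sigma_n)

# A latent value above the prior mean makes success the likelier outcome.
p_success = probit_likelihood(+1, f=1.0, m=0.0, sigma_n=1.0)
p_failure = probit_likelihood(-1, f=1.0, m=0.0, sigma_n=1.0)
assert abs(p_success + p_failure - 1.0) < 1e-12  # outcomes are complementary
assert p_success > 0.5
```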
\vspace{0.5em} %
\paragraph{Trust Updates via Approximate Bayesian Inference.} Unfortunately, the Bayesian update (\ref{eq:bayesupdate}) under the probit likelihood is intractable and yields a posterior process that is non-Gaussian. To address this problem and enable iterative trust updates, we employ approximate Bayesian inference: the posterior process is projected onto the closest GP as measured by the Kullback-Leibler divergence, $\mathrm{KL}(p_t||q)$, and $q$ is our GP approximation~\citep{csato2002a}. Minimizing the KL divergence is equivalent to matching the first two moments of $p_t$ and $q$, which can be performed analytically. The update equations in their natural parameterization forms are given by:
\begin{align}
\mu_t(\mat{x}) & = \boldsymbol{\alpha}_t^\top\mathbf{k}(\mat{x}) \\
k_t(\mat{x},\mat{x}') & = k(\mat{x},\mat{x}') + \mathbf{k}(\mat{x})^\top\mathbf{C}_t\mathbf{k}(\mat{x}')
\label{eq:updatek}
\end{align}
where the vector $\boldsymbol{\alpha}$ and the matrix $\mathbf{C}$ are updated using:
\begin{align}
\boldsymbol{\alpha}_{t} & = \boldsymbol{\alpha}_{t-1} + \gamma_1(\mathbf{C}_{t-1} \mathbf{k}_{t} + \mathbf{e}_{t}) \label{eq:updatealpha}\\
\mathbf{C}_{t} & = \mathbf{C}_{t-1} + \gamma_2(\mathbf{C}_{t-1} \mathbf{k}_{t} + \mathbf{e}_{t})(\mathbf{C}_{t-1} \mathbf{k}_{t} + \mathbf{e}_{t})^{\top} \label{eq:updateC}
\end{align}
where $\mathbf{k}_{t} = [k(\mat{x}_{1}, \mat{x}_{t}), ..., k(\mat{x}_{t-1}, \mat{x}_{t})]$, $\mathbf{e}_{t}$ is the $t^{\mathrm{th}}$ unit vector, and the scalar coefficients $\gamma_1$ and $\gamma_2$ are given by:
\begin{align}
\gamma_1 & = \partial_{f^a} \log \int P(c^\ensuremath{a}_{t}|f^a, \mathbf{x}_{t})\, p_{t-1}(f^a)\, df^a = \frac{c^a_t \partial\Phi}{\sigma_\mat{x} \Phi}\\
\gamma_2 & = \partial^2_{f^a} \log \int P(c^\ensuremath{a}_{t}|f^a, \mathbf{x}_{t})\, p_{t-1}(f^a)\, df^a = \frac{1}{\sigma_\mat{x}^2}\left[ \frac{\partial^2\Phi}{\Phi} - \left( \frac{\partial\Phi}{\Phi} \right)^2 \right]
\end{align}
where $\partial\Phi$ and $\partial^2\Phi$ are the first and second derivatives of $\Phi$ evaluated at $\frac{c^a_t (\mu_t(\mat{x}_t) - m(\mat{x}_t))}{\sigma_\mat{x}}$.
\paragraph{Trust Predictions.} Given (\ref{eq:updatealpha}) and (\ref{eq:updateC}), predictions can be made with the probit likelihood (\ref{eq:likelihood}) in closed-form:
\begin{align}
\tau^a_t(\mat{x}) = & \int P(c^a = 1| f^a, \mat{x})p_{t}(f^a)df^a \nonumber \\
= & \Phi\left(\frac{\mu_t(\mat{x}) - m(\mat{x})}{\sigma_\mat{x}}\right)
\end{align}
where $\sigma_\mat{x} = \sqrt{ \sigma_n^2 + k_t(\mat{x}, \mat{x})}$.
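The update and prediction equations above can be put together into a minimal, self-contained sketch. This is an illustration, not the authors' implementation: it assumes a zero prior mean $m(\mat{x})=0$, no pseudo-observations, outcomes encoded as $c \in \{-1,+1\}$, and a toy kernel.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def Phi(y):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def phi(y):
    """Standard normal PDF."""
    return exp(-0.5 * y * y) / sqrt(2.0 * pi)

class OnlineGPTrust:
    """Online GP trust sketch in the style of Csato & Opper (2002)."""

    def __init__(self, kernel, sigma_n=1.0):
        self.k, self.sigma_n = kernel, sigma_n
        self.X = []                  # observed task features
        self.alpha = np.zeros(0)     # posterior mean coefficients
        self.C = np.zeros((0, 0))    # posterior covariance correction

    def _moments(self, x):
        kv = np.array([self.k(xi, x) for xi in self.X])
        mu = self.alpha @ kv
        var = self.k(x, x) + kv @ self.C @ kv
        return kv, mu, var

    def predict(self, x):
        """tau(x) = Phi(mu_t(x) / sqrt(sigma_n^2 + k_t(x, x)))."""
        _, mu, var = self._moments(x)
        return Phi(mu / sqrt(self.sigma_n ** 2 + var))

    def update(self, x, c):
        """Observe outcome c in {-1, +1} on task x; moment-match the posterior."""
        kv, mu, var = self._moments(x)
        s = sqrt(self.sigma_n ** 2 + var)
        z = c * mu / s
        g1 = c * phi(z) / (Phi(z) * s)                           # gamma_1
        g2 = -phi(z) * (z + phi(z) / Phi(z)) / (Phi(z) * s ** 2)  # gamma_2
        v = np.append(self.C @ kv, 1.0)  # (C_{t-1} k_t + e_t), extended by the new point
        self.X.append(x)
        self.alpha = np.append(self.alpha, 0.0) + g1 * v
        self.C = np.pad(self.C, ((0, 1), (0, 1))) + g2 * np.outer(v, v)

# Toy usage: observing a success on a task raises trust in that task.
ker = lambda a, b: float(np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2)))
gp = OnlineGPTrust(ker)
task = [0.3, -0.1]
before = gp.predict(task)   # 0.5 under the zero-mean prior
gp.update(task, +1)         # robot succeeds
assert gp.predict(task) > before
```

Because the posterior is always projected back onto a GP, each update is a rank-one correction to $(\boldsymbol{\alpha}, \mathbf{C})$ and prediction stays closed-form.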
\section{Human Subjects Study}
\label{sec:HumanSubjectsStudy}
In this section, we describe our human subjects study, which was designed to evaluate if and when human trust transfers between tasks. Our general intuition was that human trust generalizes and evolves in a structured manner. We specifically hypothesized that:
\begin{itemize}
\item \textbf{H1:} Trust in the robot is more similar for tasks of the same category, compared to tasks in a different category.
\item \textbf{H2:} Observations of robot performance have a greater effect on the \emph{change in human trust} over similar tasks compared to dissimilar tasks.
\item \textbf{H3:} Trust in a robot's ability to perform a task transfers more readily to easier tasks, compared to more difficult tasks.
\item \textbf{H4:} \emph{Distrust} in the robot's ability to perform a task generalizes more readily to difficult tasks, compared to easier tasks.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{Tasks_small-eps-converted-to.pdf}
\vspace{0.5cm}
\footnotesize
\begin{tabular}{ l| c|c | l }
\hline
\textbf{Domain} & \textbf{Category} & \textbf{ID} & \textbf{Task} \\
\hline
\multirow{12}{4.5em}{Household} & \multirow{6}{5em}{A. Pick \& Place}
& 1 & a bottle of soda \\
& & 2 & a plastic cup \\
& & 3 & a can of chips \\
& & 4 & an apple \\
& & 5 & a glass cup \\
& & 6 & a lemon \\
\cline{2-4}
& \multirow{6}{5em}{B. Indoor Navigation}
& 1 & to the table \\
& & 2 & to the door \\
& & 3 & to living room \\
& & 4 & with people moving around \\
& & 5 & following a person\\
& & 6 & while avoiding obstacles\\
\cline{1-4}
\multirow{12}{4.5em}{Driving} & \multirow{6}{5em}{C. Parking}
& 1 & Forwards, empty lot (aligned) \\
& & 2 & Backwards, empty lot (misaligned)\\
& & 3 & Forwards, empty lot (misaligned)\\
& & 4 & Backwards, with cars (aligned) \\
& & 5 & Backwards, with cars (misaligned) \\
& & 6 & Forwards, with cars (misaligned)\\
\cline{2-4}
& \multirow{6}{5em}{D. Navigation}
& 1 & Lane merge \\
& & 2 & T-junction \\
& & 3 & Roundabout \\
& & 4 & Roundabout with other cars \\
& & 5 & Lane merge with other cars \\
& & 6 & T-junction with other cars \\
\hline
\end{tabular}
\end{center}
\caption{Tasks in the Household and Driving Domains. Tasks with IDs 1 to 3 are generally perceived to be easier than tasks labelled with IDs 4 to 6.}
\label{fig:tasks}
\end{figure}
\subsection{Experimental Design}
\label{sec:method}
An overview of our experimental design is shown in Fig. \ref{fig:ExpDesign}. We explored three factors as independent variables: task category, task difficulty, and robot performance. Each independent variable consisted of two levels: two task categories, easy/difficult tasks, and robot success/failure. We used tasks in two domains, each with an appropriate robot (Fig. \ref{fig:tasks}):
\begin{itemize}
\item \textbf{Household}, which included two common categories of household tasks, i.e., picking and placing objects, and navigation in an indoor environment. The robot used was a real-world Fetch research robot with a 7-DOF arm, which performed \emph{live} demonstrations of the tasks in a lab environment that resembles a studio apartment.
\item \textbf{Driving}, where we used a Virtual Reality (VR) environment to simulate an autonomous vehicle (AV) performing tasks such as lane merging and parking, potentially with dynamic and static obstacles. Participants interacted with the simulation system via an Oculus Rift headset, which {provided a first-person viewpoint from the driver seat of the AV}.
\end{itemize}
The robots were different in both settings and there were no cross-over tasks; in other words, the same experiment was conducted independently in each domain with the same protocol. Obtaining data from two separate experiments enabled us to discern if our hypotheses held in different contexts.
In both domains, we developed pre-programmed success and failure demonstrations of robot performance {for all tasks}. ``Catastrophic'' failures were avoided to mitigate complete distrust of the robot; for the household navigation tasks, the robot was programmed to fail by moving to the wrong location. For picking and placing, the robot failed to grasp the target object. The autonomous car failed to park by stopping too far ahead of the lot, and failed to navigate (e.g., lane merge) by driving off the road and stopping (Fig. \ref{fig:variable-success-failure}). %
The primary dependent variables were the participants' subjective trust in the robot $a$'s capability to perform specific tasks. Participants indicated their degree of trust {given robot $a$} and task $x$ at time $t$, denoted as $\ensuremath{\tau^a}_{x,t}$, %
via a 7-point Likert scale in response to the agreement question: ``\emph{The robot is going to perform the task} [$x$]. \emph{I trust that the robot can perform the task successfully}''. In our form, the left-most point (1) indicated ``\emph{Strongly Disagree}'' and the right-most point (7) indicated ``\emph{Strongly Agree}''.
From these task-dependent trust scores, we computed two derivative scores:
\begin{itemize}
\item \textbf{Trust distance across tasks} $d_{\tau,t}(x,x') = |\ensuremath{\tau^a}_{x,t}-\ensuremath{\tau^a}_{x',t}|$, i.e., the 1-norm distance between scores for $x$ and $x'$ at time $t$.
\item \textbf{Trust change over time} $\Delta \ensuremath{\tau^a}_x(t_1, t_2) = |\ensuremath{\tau^a}_{x,t_1}-\ensuremath{\tau^a}_{x,t_2}|$, i.e., the 1-norm distance between the scores for $x$ at $t_1$ and $t_2$.
\end{itemize}
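The two derivative scores are straightforward to compute; a sketch with hypothetical Likert scores keyed by (task, time):

```python
def trust_distance(tau, x, x_prime, t):
    """d_{tau,t}(x, x') = |tau[x, t] - tau[x', t]|: distance across tasks at time t."""
    return abs(tau[(x, t)] - tau[(x_prime, t)])

def trust_change(tau, x, t1, t2):
    """Delta tau_x(t1, t2) = |tau[x, t1] - tau[x, t2]|: change over time for task x."""
    return abs(tau[(x, t1)] - tau[(x, t2)])

# Hypothetical 7-point Likert scores keyed by (task, time).
tau = {("pick bottle", 0): 5, ("pick can", 0): 4, ("pick bottle", 1): 7}
assert trust_distance(tau, "pick bottle", "pick can", 0) == 1
assert trust_change(tau, "pick bottle", 0, 1) == 2
```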
As a general measure of trust, participants were also asked to complete Muir's questionnaire~\citep{muir1994trust, muir1996trust} pre-and-post exposure to the robot demonstrations. We also asked the participants to provide free-text justifications for their trust scores.
\subsection{Robot Systems Setup}
For both the Fetch Robot and Autonomous Driving simulator, we developed our experimental platforms using the Robot Operating System (ROS). On the Fetch robot, we used the MoveIt motion planning framework and the Open Motion Planning Library~\citep{sucan2012} to pick and place objects, and the ROS Navigation stack for navigation in indoor environments.
The VR simulation platform was developed using the Unity 3D engine. Control of the autonomous vehicle was achieved using the hybrid A* search algorithm~\citep{dolgov2010path} and a proportional-integral-derivative (PID) controller.
\begin{figure}
\centering
\begin{tabular}{c c}
\centering
\begin{subfigure}{.48\columnwidth}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.9\linewidth]{lanemerge-success-eps-converted-to.pdf}
\\
\\
\includegraphics[width=0.9\linewidth]{lanemerge-failure-eps-converted-to.pdf}
\end{tabular}
\end{subfigure}
\begin{subfigure}{.48\columnwidth}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.9\linewidth]{parking-success-eps-converted-to.pdf}
\\
\\
\includegraphics[width=0.9\linewidth]{parking-failure-eps-converted-to.pdf}
\end{tabular}
\end{subfigure}
\end{tabular}
\caption{(Left Column) Autonomous car success (top) and failure (bottom) in the lane merge task. In the failure condition, the car drives off road and stops. (Right column) Autonomous car success (top) and failure (bottom) in the forwards parking task. In the failure condition, the car stops short of the parking spot.}
\label{fig:variable-success-failure}
\end{figure}
\subsection{Study Procedure}
We recruited 32 individuals (Mean age: 24.09 years, $SD = 2.37$, 46\% female) through an online advertisement and printed flyers on a university campus. {Experiments were conducted in our lab where participants were shown live demonstrations of the Fetch robot performing the tasks, or observed the AV's behavior using the driving simulator}. After signing a consent form and providing standard demographic data, participants were introduced to the robot. {Specifically, they were provided information about the robot's parts and basic functions, and then asked questions to ensure that they were paying attention and understood the information.} They then continued with the experiment's four stages:
\begin{enumerate}
\item\textbf{Category and Difficulty Grouping:} To gain better control of the factors, participants were asked to group the 12 tasks evenly into the four cells shown in Fig. \ref{fig:ExpDesign}. As such, chosen observations matched a participant's own prior estimations. {We found that participant groupings were consistent --- the same grouping was observed across participants --- but there were individual differences within each difficulty group, e.g., some participants thought picking and placing a lemon was comparatively more difficult than a glass cup.}
\item\textbf{Pre-Observation Questionnaire:} Participants were asked to indicate their subjective trust on the three tested tasks using the measure instruments described above.
\item\textbf{Observation of Robot Performance:} Participants were randomly assigned to observe two tasks from a specific category and difficulty, and were asked to indicate their trust if the robot were to repeat the observed task. {The revised trust score is the baseline from which we evaluate the trust distance to tested tasks.}
\item\textbf{Post-Observation Questionnaire and Debrief:} Finally, participants were asked to re-indicate their subjective trust on the three tested tasks, answered attention/consistency check questions\footnote{{Participants were asked several questions (e.g., what the last survey question was regarding) and to indicate their initial stated category/difficulty for a subset of the tasks.}}, and underwent a short de-briefing.
\end{enumerate}
\subsection{Results}
\label{sec:results}
{In the following, we first report our primary findings using the task-dependent trust distance and change scores defined above. Then, we discuss the relationship between task-dependent trust and general trust (via comparison to the scores obtained from Muir's questionnaire).} For the driving domain, one participant's results were removed due to a failure to pass attention/consistency check questions.
{For the Household domain, an ANOVA showed that the effect of the category group on the trust distance was significant, $F(1, 89) = 24.22, p < 10^{-5}$, as was the effect of difficulty, $F(1,89) = 14.05, p < 0.001$. The interaction between these two factors was moderately significant, $F(1,89) = 2.94, p = 0.089$. We also found a significant effect of category on trust change $F(1, 89) = 24.89, p < 10^{-6}$, and of success/failure outcomes on trust change $F(1, 89) = 10.04, p = 0.002$. Similar results were found for the driving domain.}
Fig. \ref{fig:H1} clearly shows that tasks in the same category (SG) shared similar scores (supporting \textbf{H1}); the {post-observation trust distances (from the observed task to the tested tasks)} are significantly lower $(M = 0.28, SE = 0.081)$ compared to tasks in other categories (DG) $(M = 1.78, SE = 0.22)$, $t(31) = -5.82$, $p < 10^{-5} $ for the household tasks. Similar statistically significant differences are observed for the driving domain, $t(30) = -2.755$, $p < 10^{-2}$. {For both domains, we observe moderate effect sizes ($\approx 1$ on a Likert scale of 7), which suggests practical significance; the relative difference in trust may potentially affect subsequent decisions to delegate tasks to the robot.}
Fig. \ref{fig:H2} shows that the \emph{change} in human trust due to performance observations of a given task was moderated by the perceived similarity of the tasks (\textbf{H2}). The trust change {(between the pre-observation and post-observation trust scores for the three tested tasks)} is significantly greater for {tested tasks in the same group as the observed task (SG) than tasks in a different group (DG)}; $t(31) = 6.25$, $p < 10^{-6}$ for household and $t(30) = 3.46$, $p < 10^{-2}$ for driving. Note also that the trust change for DG was non-zero (one-sample $t$-test, $p < 10^{-2}$ across both domains for successes and failures), indicating that trust transfers even between task categories, albeit to a lesser extent. {Similar to the trust distances, the trust change effect sizes were moderately large indicating practical significance}.
\begin{figure}
\centering
\includegraphics[width=0.4\columnwidth]{H2_clean-crop-eps-converted-to.pdf}
\caption{Trust distance between a given task and tasks in the same category group (SG) compared to tasks in a different category (DG). Trust in robot capabilities was significantly more similar for tasks in the same group.}
\label{fig:H1}
\end{figure}
We analyzed the relationship between perceived difficulty and trust transfer (\textbf{H3}) by first splitting the data into two conditions: participants who received successful demonstrations, and those who observed failures (Fig. \ref{fig:H3}). For the success condition, the trust distance among the household tasks was significantly less for tasks perceived to be easier than the observed task $(M = 0.5, SE = 0.27)$, compared to tasks that were perceived to be more difficult $(M = 2.0, SE = 0.27)$, $t(14) = 4.58$, $p < 10^{-3}$. The hypothesis also holds in the driving domain, $M = 1.25$ $(SE = 0.25)$ vs. $M = 2.43$ $(SE = 0.42)$, $t(14)=3.6827$, $p < 10^{-3}$. For the failure condition (\textbf{H4}), the results were not statistically significant at the $\alpha=1\%$ level, but suggest that the effect was reversed; belief in robot \emph{inability} transfers more readily to difficult tasks than to simpler tasks.
\begin{figure}
\centering
\includegraphics[width=0.40\columnwidth]{H1_clean-crop-eps-converted-to.pdf}
\caption{Trust change due to observations of robot performance. Trust increased (or decreased) significantly more for the {tested tasks in the same group (SG) as the observed task versus tasks in different groups (DG).}}
\label{fig:H2}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.70\textwidth]{H3_4_clean-crop-eps-converted-to.pdf}
\caption{Trust distance when generalizing from the observed task to a more difficult task (Ez $\rightarrow$ Df) compared to generalizing to a simpler task (Df $\rightarrow$ Ez). Participants who observed successful demonstrations of a difficult task trusted the robot to perform simpler tasks, but not vice-versa.
}
\label{fig:H3}
\end{figure*}
Thus far, we have focused on task-specific trust; a key question is how this task-dependent trust differs from a ``general'' notion of trust in the robot as measured by Muir's questionnaire. Fig. \ref{fig:Muir} sheds light on this question; overall, task-specific and general trust are positively correlated but the degree of correlation depends greatly on the similarity of the task to previous observations. In other words, while general trust is predictive of task-specific trust, it does not capture the range or variability of human trust across multiple tasks.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{CorrWithMuir-eps-converted-to.pdf}
\caption{\cite{muir1994trust}'s Trust Score vs Task-specific trust scores (for tested tasks). The scores are positively correlated across the three task types, but with different strengths; the general measure is less predictive of task-specific trust for tasks in different categories (Pearson correlation, $\rho = 0.5-0.6$) compared to tasks with same category and difficulty ($\rho=0.93-0.94$).}
\label{fig:Muir}
\vspace{-0.5em}
\end{figure}
\section{Introduction}
\label{sec:intro}
As robots enter our homes and workplaces, interactions between humans and
robots become ubiquitous. \textit{Trust} plays a prominent role in shaping these interactions and
directly affects the degree of autonomy rendered to
robots~\citep{sheridan1984research}. This has led to significant
efforts in conceptualizing and measuring human trust in robots
and automation~\citep{muir1994trust,lee1994trust,Castelfranchi2010,yang2017evaluating}.
A crucial gap, however, remains in understanding
when and how human trust in robots \textit{transfers} across multiple tasks based on the {human's prior knowledge of robot task capabilities and past experiences}. Understanding trust in the multi-task
setting is crucial as robots transition from single-purpose machines in
controlled environments---such as factory floors---to general-purpose partners
performing diverse functions.
{The mathematical formalization of trust across tasks lays the foundation of trust-mediated robot decision making for fluent human-robot
interaction}. In particular, it leads to robot policies that mitigate under-trust or over-trust by humans when interacting with robots~\citep{Chen2018Hri}.
In this work, we take a first step towards formalizing trust transfer across tasks for human-robot interaction.
We adopt the definition of trust as a {psychological
attitude}~\citep{Castelfranchi2010} and {focus on \emph{trust in robot
capabilities}, i.e., the belief in a robot's competence to complete a task. Capability is a primary factor in determining overall trust in
robots}~\citep{muir1994trust}, { and this work investigates how trust in robot capabilities varies and transfers across a range of tasks.}
%
{Our first contribution is a human-subject study ($n=32$) where our goal is to uncover the role of task similarity and difficulty in the formation and dynamics of trust}. We present results in two task domains: household tasks and autonomous driving (Fig. \ref{fig:domains}). The two disparate domains allow us to validate the robustness of our findings. We show that inter-task trust transfer depends on perceived task similarity, difficulty, and observed robot performance. These results are consistent across both domains, even though the robots and the contexts are markedly different: the household domain involves a Fetch robot that navigates and picks and places everyday objects, while the driving domain involves a virtual reality (VR) simulation of an autonomous vehicle performing driving and parking maneuvers. To our knowledge, this is the first work showing concrete evidence for trust transfer across tasks in the context of human-robot interaction. {We have made our data and code freely available online for further research}~\citep{SuppMat}.
Based on our experimental findings, we propose to conceptualize trust as a \emph{context-dependent latent dynamic function}. This viewpoint captures the main findings of our experiments and is supported by prior socio-cognitive research showing the dependence of trust on task properties and on the agent to be trusted~\citep{Castelfranchi2010}. We focus on characterizing the \emph{structure} of this ``trust function'' and its \emph{dynamics}, i.e., how it changes with observations of robot performance across tasks. An earlier version of this paper~\citep{SohRSS18} presents two formal models: (i) a Bayesian Gaussian process (GP)~\citep{rasmussen06} model, and (ii) a connectionist recurrent neural model based on recent advances in deep learning. The GP model explicitly encodes a specific assumption about how human trust evolves, via Bayes rule. In comparison, the neural model is data-driven and places few constraints on how trust evolves with observations. Both models leverage latent task space representations (a \emph{psychological task space}) learned using word vector descriptions of tasks, e.g., ``Pick and place a glass cup''. Experiments show both models accurately predict trust across unseen tasks and users. {This paper introduces a third model that combines the Bayesian and neural approaches and shows that the hybrid model achieves improved predictions. All three models are differentiable and can be trained using standard off-the-shelf optimizers}.
In comparison with prevailing computational models \cite[e.g.,][]{lee1994trust, xu2015optimo}, a key benefit of these trust models is their ability to leverage inter-task structure in multi-task application settings.
As \emph{predictive} models, they can be operationalized in decision-theoretic frameworks to calibrate trust during collaboration with human teammates~\citep{Chen2018Hri,Wang2016,Nikolaidis:2017:GMH:2909824.3020253,Huang2018}. Trust calibration is crucial for preventing over-trust that engenders unwarranted reliance in robots~\citep{Robinette2016,Singh1993}, and under-trust that can cause poor utilization~\citep{lee2004trust}. To summarize, this paper makes the following key contributions:
\begin{itemize}
\item A novel formalization of trust as a latent dynamic function and efficient computational differentiable models that capture and predict human trust in robot capabilities across multiple tasks;
\item Empirical findings from a human subjects study showing the influence of three factors on human trust transfer across tasks, i.e., perceived task similarity, difficulty, and robot performance;
\item Systematic evaluation showing the proposed methods outperform existing methods, indicating the importance of modeling trust formation and transfer across tasks.
\end{itemize}
%
\subsection{Neural Trust Model}
\label{sec:NeuralTrustModel}
The Gaussian process trust model is based on the assumption that human trust is essentially Bayesian in nature. However, this assumption may be too restrictive since humans are not thought to be fully rational or Bayesian\footnote{Moreover, whether brains are truly Bayesian remains a matter of debate within the cognitive sciences~\citep{bowers2012bayesian}.}. Here we consider an alternative ``data-driven'' approach based on recent advances in deep neural models.
The architecture of our neural trust model is illustrated in Fig. \ref{fig:NeuralTrustTransfer}. We leverage a learned task representation or ``embedding'' space $Z \subseteq \mathbb{R}^k$ and model trust as a parameterized function over this space. {The key idea is that (un-normalized) trust for a task is obtained via an inner-product between two components: a trust-vector $\ensuremath{\boldsymbol{\theta}}_t$ and a task representation $\mathbf{z}$. The trust vector $\ensuremath{\boldsymbol{\theta}}_t$ is a compressed representation of the human's prior interaction history (observations of robot performance $o^a_t$) and is derived using a recurrent neural network. The task representations $\mathbf{z}$ are derived using a learned function $f_z(\mathbf{x})$ over task features $\mathbf{x}$. To obtain a normalized trust score between $[0,1]$, we use the sigmoid function,}
\begin{align}
\tau^a_t(\mathbf{x}; \ensuremath{\boldsymbol{\theta}}_t) = \mathrm{sigm}(\ensuremath{\boldsymbol{\theta}}_t^\top f_z(\mathbf{x})) = \mathrm{sigm}(\ensuremath{\boldsymbol{\theta}}_t^\top\mathbf{z}).
\end{align}
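As a minimal numerical sketch of this scoring step (the dimensions, weight values, and task embedding below are hypothetical, not taken from the paper):

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def trust_score(theta_t, z):
    """Normalized trust in [0, 1]: sigmoid of the inner product
    between the trust vector and the task representation."""
    return sigm(theta_t @ z)

theta_t = np.array([0.5, -0.2, 1.0])   # hypothetical trust vector
z = np.array([1.0, 0.0, 0.5])          # hypothetical task embedding
tau = trust_score(theta_t, z)          # sigm(0.5 - 0.0 + 0.5) = sigm(1.0)
```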
The trust function $\tau^a_t$ is fully parameterized by $\ensuremath{\boldsymbol{\theta}}_t$ and its linear form has benefits: it is efficient to compute given a task representation $\ensuremath{\mathbf{z}}$ and is interpretable in that the latent task space $Z$ can be examined, similar to other dot-product spaces, e.g., word embeddings~\citep{mikolov2013}. Similar to the GP, $Z$ can be seen as a psychological task space in which the similarities between tasks can be easily ascertained.
\paragraph{Task Projection.} Whilst it is possible to train the model to learn separate task representations $\mathbf{z}$ for each task in the training set, this approach limits the model to only seen tasks. Our aim was to create a general model that potentially generalizes to new tasks. One could use the task features $\mathbf{x}$ directly in the trust function, but there is no guarantee that the task features would form a space in which dot products would give rise to meaningful trust scores. As such, we project observed task features $\ensuremath{\mathbf{x}}$ into $Z$ via a nonlinear function, specifically, a fully-connected neural network,
\begin{align}
\ensuremath{\mathbf{z}} &= f_z(\mathbf{x}) = \mathrm{NN}(\mat{x}; \theta_z)
\label{eq:NNproj}
\end{align}
where $\theta_z$ is the set of network parameters. Similarly, the robot's performance $\ensuremath{c^\robot} $ is projected via a neural network, $\ensuremath{\mathbf{c}^\robot} = \mathrm{NN}( \ensuremath{c^\robot} ; \theta_\ensuremath{c^\robot} )$. During trust updates, both the task and performance representations are concatenated, $\hat{\ensuremath{\mathbf{z}}}_t = [ \ensuremath{\mathbf{z}} ; \ensuremath{\mathbf{c}^\robot}]$, as input to the RNN's memory cells.
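The projection and concatenation steps can be sketched as follows; the layer sizes, the single tanh hidden layer, and the random (untrained) weights are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_project(x, W1, W2):
    """Minimal fully connected projection network (one tanh hidden layer)."""
    return np.tanh(W2 @ np.tanh(W1 @ x))

d, h, k = 300, 64, 32                     # hypothetical layer sizes
W1 = rng.normal(scale=0.05, size=(h, d))  # hypothetical, untrained weights
W2 = rng.normal(scale=0.05, size=(k, h))
Wc = rng.normal(scale=0.05, size=(4, 1))  # performance projection weights

x = rng.normal(size=d)                    # word-vector task features
c = np.array([1.0])                       # robot performance outcome (success)

z = nn_project(x, W1, W2)                 # task representation z = f_z(x)
c_emb = np.tanh(Wc @ c)                   # performance representation
z_hat = np.concatenate([z, c_emb])        # concatenated RNN input [z ; c]
```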
\vspace{0.5em} %
\paragraph{Trust Updating via Memory Cells.} We model the trust update function $g$ using an RNN with parameters $\theta_g$,
\begin{align}
\ensuremath{\boldsymbol{\theta}}_t &= \mathrm{RNN}(\ensuremath{\boldsymbol{\theta}}_{t-1}, \hat{\mat{z}}_{t-1}; \theta_g).
\label{eq:RNNupdate}
\end{align}
In this work, we leverage the Gated Recurrent Unit (GRU)~\citep{cho2014}, which is a variant of long short-term memory~\citep{Hochreiter1997} with strong empirical performance~\citep{jozefowicz2015empirical}. In brief, the GRU learns to control two internal ``gates''---the update and reset gates---that affect what it remembers and forgets. Intuitively, the previous hidden state is forgotten when the reset gate's value nears zero. As such, cells that have active reset gates have learnt to model short-term dependencies. In contrast, cells that have active update gates model long-term dependencies~\citep{cho2014}. Our model uses an array of GRU cells that embed the interaction history up to time $t$ as a ``memory state'' $\mat{h}_t$, which serves as our trust parameters $\ensuremath{\boldsymbol{\theta}}_t$.
More formally, a GRU cell $k$ that has state $h_{t-1}^{(k)}$ and receives a new input $\hat{\mat{z}}_t$, is updated via
\begin{align}
h_t^{(k)} = (1-v_t^{(k)})h^{(k)}_{t-1} + v_t^{(k)}\tilde{h}_{t}^{(k)},
\end{align}
i.e., an interpolation of its previous state and a candidate activation $\tilde{h}_{t}^{(k)}$. This interpolation is affected by the update gate $v_t^{(k)}$, which is parameterized by matrices $\mathbf{W}_v$ and $\mathbf{U}_v$,
\begin{align}
v_t^{(k)} = \textrm{sigm}\big([\mat{W}_v\hat{\mat{z}}_{t} + \mat{U}_v\mat{h}_{t-1}]_k\big).
\end{align}
The candidate activation $\tilde{h}_{t}^{(k)}$ is given by
\begin{align}
\tilde{h}_{t}^{(k)} = \mathrm{tanh}([\mat{W}\hat{\mat{z}}_t + \mat{U}(\mat{r}_t \odot \mat{h}_{t-1})]_k )
\end{align}
where $\odot$ denotes element-wise multiplication. The reset gate $r^{(k)}_t = [\mat{r}_t]_k$ is parameterized by two matrices $\mat{W}_r$ and $\mat{U}_r$,
\begin{align}
r^{(k)}_t = \textrm{sigm}\big([\mat{W}_r\hat{\mat{z}}_{t} + \mat{U}_r\mat{h}_{t-1}]_k\big).
\end{align}
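The GRU equations above can be sketched directly in code; the state and input sizes, the random weights, and the omitted bias terms are simplifying assumptions:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, z_in, Wv, Uv, Wr, Ur, W, U):
    """One GRU update following the equations above (bias terms omitted)."""
    v = sigm(Wv @ z_in + Uv @ h_prev)               # update gate
    r = sigm(Wr @ z_in + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(W @ z_in + U @ (r * h_prev))  # candidate activation
    return (1.0 - v) * h_prev + v * h_tilde         # gated interpolation

rng = np.random.default_rng(1)
n_h, n_in = 8, 5                                    # hypothetical sizes
Wv, Wr, W = (rng.normal(scale=0.1, size=(n_h, n_in)) for _ in range(3))
Uv, Ur, U = (rng.normal(scale=0.1, size=(n_h, n_h)) for _ in range(3))

h = np.zeros(n_h)                                   # initial memory state
for _ in range(3):                                  # three observed interactions
    h = gru_step(h, rng.normal(size=n_in), Wv, Uv, Wr, Ur, W, U)
```

Since each new state is a convex combination of the previous state and a tanh candidate, the memory state stays bounded in $(-1, 1)$.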
\section{Computational Models for Trust Across Multiple Tasks}
\label{sec:ComputationalModels}
The results from our human subjects study indicate that trust is a relatively rich mental construct. Perceived similarity between tasks plays a crucial role in determining trust transfer. Although we consider trust to be a useful information processing ``bottleneck'' in that it summarizes past experience with the robot, it does appear to be task-specific and hence is more than a simple scalar quantity~\citep[as assumed in prior work][]{Chen2018Hri,Xu2016}.
In this section, we present a richer model where trust is a \emph{task-dependent latent dynamic function} $\tau^\ensuremath{a}_t(\mathbf{x}): \mathbb{R}^d \rightarrow [0,1]$ that maps task features, $\mathbf{x}$, to {the continuous interval $[0,1]$} indicating trustworthiness of the robot to perform the task. {We assume that the task features are given and are sufficiently informative of the underlying tasks; for example, our experiments utilized word-vector features derived from English-language task descriptions, but visual features extracted from images or structured task descriptions may also be used.}
This functional view of trust enables us to naturally capture trust differences across tasks, and can be extended to include other contexts; $\mathbf{x}$ can represent other factors, e.g., the current environment, robot characteristics, and observer properties. To model the dynamic nature of trust, we propose a Markovian function $g$ that updates trust,
\begin{align}
\tau^\ensuremath{a}_t = g(\tau^\ensuremath{a}_{t-1}, o^{\ensuremath{a}}_{t-1})
\end{align}
where $o^{\ensuremath{a}}_{t-1} = (\mathbf{x}_{t-1}, c^\ensuremath{a}_{t-1})$ is the observation of robot $\ensuremath{a}$ performing a task with features $\mathbf{x}_{t-1}$ at time $t-1$ with performance outcome $c^\ensuremath{a}_{t-1}$. The function $g$ serves to change trust given observations of robot performance, and as such, is a function over the space of trust functions. In this work, we consider binary outcomes $c^\ensuremath{a}_{t-1}\in \{+1, -1\}$ indicating success and failure respectively, but differences in performance can be directly accommodated via ``soft labels'' $c^\ensuremath{a}_{t-1}\in [-1, +1]$ without significant modifications to the presented methods.%
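This functional view can be summarized with an interface sketch; the particular update rule below is only an illustrative stand-in for $g$, not one of the three models proposed in this work:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

class TrustFunction:
    """Sketch of trust as a latent dynamic function tau_t(x) -> [0, 1].
    The update rule is a hypothetical stand-in for g: it nudges the
    predicted trust toward the observed outcome."""
    def __init__(self, d):
        self.theta = np.zeros(d)

    def __call__(self, x):
        return sigm(self.theta @ x)

    def update(self, x, c):
        # c in {+1, -1}; map to a target in {1, 0} and move toward it
        self.theta += 0.5 * (0.5 * (c + 1) - self(x)) * x

tau = TrustFunction(3)
x = np.array([1.0, 0.5, 0.0])   # hypothetical task features
before = tau(x)                 # 0.5 with zero-initialized parameters
for _ in range(20):
    tau.update(x, +1)           # repeated successes on this task
after = tau(x)                  # trust in this task increases
```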
The principal challenge is then to determine appropriate forms for $\tau^\ensuremath{a}_t$ and $g$. In this work, we propose and evaluate three different approaches: (i) a Bayesian approach where we model a probability distribution over latent functions via a Gaussian process, (ii) a connectionist approach utilizing a recurrent neural network (RNN), and (iii) a hybrid approach that combines the aforementioned two methods. All three models leverage a learned \emph{psychological task space} in which similarities between tasks can be efficiently computed. Furthermore, as differentiable models, they can be trained using standard off-the-shelf gradient optimizers, and can be ``plugged'' into larger models or decision-making frameworks (e.g., deep networks trained via reinforcement learning).
\begin{figure*}
\centering
%
\includegraphics[width=0.70\textwidth]{NeuralArch-crop-eps-converted-to.pdf}
\caption{A High-level Schematic of the Neural Trust Model. The trust vector is updated using GRU cells as memory of previously observed robot performance. The model uses feed-forward neural networks to project tasks into a dot-product space in which trust for a task can be efficiently computed.}
\label{fig:NeuralTrustTransfer}
\end{figure*}
\input{GaussianProcessTrust.tex}
\input{NeuralTrust.tex}
\input{BayesNeuralTrust.tex}
\section{Introduction}
\label{sec:intro}
When a camera captures real 3D scenes, \ali{the 2D projection on the image plane tends to present defocused regions} due to the optical characteristics of lenses. Since the defocus blur amount depends on the distance of the captured objects to the focal plane, it generally varies from region to region in the image. This spatially varying blur is often represented by a defocus (blur) map that contains the size of the Circle of Confusion (CoC), which is typically characterized as a disc or Gaussian and described by a radius/scale parameter $C_{\sigma}$ or $\sigma$.
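For intuition, the CoC diameter for an out-of-focus point can be computed from the standard thin-lens relation; this formula and the numbers below are textbook values given for illustration, not quantities taken from this paper:

```python
def coc_diameter_mm(f, N, S1, S2):
    """Circle-of-confusion diameter on the sensor (standard thin-lens
    relation). f: focal length, N: f-number, S1: focus distance,
    S2: object distance; all in millimetres."""
    A = f / N                              # aperture diameter
    return A * abs(S2 - S1) / S2 * f / (S1 - f)

# Illustrative: a 50 mm f/2 lens focused at 2 m, object at 4 m
c = coc_diameter_mm(f=50.0, N=2.0, S1=2000.0, S2=4000.0)
```

A point on the focal plane ($S_2 = S_1$) yields a CoC of zero, and the CoC grows as the object moves away from the focal plane, which is why the blur amount varies from region to region.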
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.95\columnwidth]{DOF_Draw_NA2.pdf}
\vskip -0.250cm
\caption{\label{fig:dofdesig} Pictorial representation of how Circles of Confusion (CoC) form from \textit{pattern} and \textit{depth} edges in a thin lens model.}
\end{center}
\vskip -0.50cm
\end{figure}
Estimating the local defocus blur amount in images has many potential applications, including depth estimation \cite{Gur_2019_CVPR}, in-focus object detection ~\cite{8537933,park2017unified}, salience region detection~\cite{UFO}, image retargeting~\cite{retargeting_our}, and non-blind deconvolution~\cite{Fortunato2014}. A typical categorization of existing defocus blur estimation methods is two-fold: edge-based and region-based methods. Edge-based methods use edge locations (which correspond to the high-frequency information and are more affected by blur) to estimate the local blur amount, then (optionally) propagate the defocus blur estimates from edge locations to the whole image. Region-based methods utilize local image patches and generally produce dense blur maps directly without any propagation scheme. In general, region-based methods are slower than edge-based methods since the defocus blur amount has to be estimated for each pixel location, while edge-based methods (initially) make estimates at only a fraction of the image pixels (i.e., the image edges)~\cite{OurTIP2018, karaaliPHD_dissertation}.
The robustness of conventional defocus blur estimation methods~\cite{DefocusPaper,TIPpaper2012,jvcedgebased} is highly dependent on the strength and/or isolation level of edges, making these methods prone to error since natural images tend to present complex edges with multiple intensity and isolation levels. Also, the vast majority of existing edge-based defocus blur estimation methods follow a similar blur approximation model as presented in Pentland's work~\cite{Pentland87}, which relates the blur scale to the depth of an imaged 3D point based on a thin lens model.
The main drawback of this formulation is that it is not valid for points that lie at depth discontinuities (e.g., boundaries between objects at different depth values), and the formulation should include a mixture of different blur parameters. Hence, assigning a single blur scale parameter to each edge point is ambiguous since the edge might be projected from a 3D point located at an object boundary with depth discontinuity.
Fig.~\ref{fig:dofdesig} shows a pictorial representation of this behavior based on a thin lens model. Rays coming from an edge point formed by the object boundary ($P_d$) will be affected by other rays coming from other background (or foreground) objects, and the blur pattern will no longer be a simple circle. Instead, it will be formed by a combination of different circular patterns with different radii ($P_d'$ in Fig.~\ref{fig:dofdesig}). On the other hand, $P_1$ and $P_2$ are \textit{pattern} edges related to object texture, presenting a relatively constant depth neighborhood and generating a well-defined CoC on the sensor plane.
Our main contribution in this paper is the introduction of a Convolutional Neural Network (CNN) feature learning approach that jointly tackles: i) the
discrimination of \textit{depth edges} (i.e., edges that lie at depth discontinuities) from \textit{pattern edges} (i.e., edges that lie at relatively constant depth values); ii) multi-scale blur estimation for \textit{pattern edges} that uses input patches with varying sizes to account for different local edge patterns. As an additional contribution, we adapt a fast edge-aware guided filter to propagate blur information estimated at \textit{pattern edge} points to homogeneous regions, at the same time penalizing the propagation over \textit{depth edges}. As will be discussed in the experimental results (Section~\ref{sec:experiments}), the final blur maps estimated by the proposed method for natural images yield competitive accuracy compared to all other SOTA methods in terms of the traditional error metric (MAE).
\section{Related Work}
Defocus blur estimation methods can be divided into two main groups: edge- and region-based methods. Also, deep learning strategies have been developed for this task in recent years.
\textbf{Edge-based methods} generally follow a common strategy: estimating the unknown defocus blur amount along image edges, obtaining a \textit{sparse blur map}, and then (optionally) propagating these blur estimates to the whole image via some interpolation/extrapolation methods to obtain a dense blur map (\textit{full blur map}).
Pentland~\cite{Pentland87}, one of the pioneers of blur research, modeled a blurry edge by convolving a sharp edge with a Gaussian kernel. The standard deviation of the Gaussian kernel (i.e., the unknown blur scale) is then calculated using the intensity change rate along the edges.
Elder and Zucker~\cite{elderandzucker98} proposed a simultaneous edge detection and blur estimation method. The method computes the blur scales measuring the zero-crossing of the third-order Gaussian derivatives along the gradient direction (using steerable filters). Both methods opted not to interpolate the blur estimates (e.g., estimated \textit{sparse blur maps} only). Later on, Zhuo and Sim~\cite{DefocusPaper} used the gradient magnitude ratio between the original and a re-blurred version (of the original image) to estimate the local blur amount at edge points. A bilateral filter was employed to smooth the sparse blur map, and alpha Laplacian Matting~\cite{LaplacianM} was chosen to propagate the estimated blur amounts to the rest of the image. The promising results of gradient magnitude usage (introduced in~\cite{DefocusPaper}) inspired many other researchers. For example, the use of more than one re-blurring parameter was proposed in~\cite{fixedsigma1,karaalijungICIP2016} to deal with noise, while a multi-scale approach was explored by Karaali and Jung~\cite{OurTIP2018} to deal with edge-scale ambiguity. On the other hand, Chen et al.~\cite{icip2016fast} proposed a very fast \textit{full blur map} estimation method, using the same sparse estimation scheme as in~\cite{DefocusPaper} with a novel propagation scheme. They first over-segmented the image using superpixels~\cite{SLICpixel} and then computed an affinity matrix that measures the similarity between adjacent superpixels to assign a blur value to superpixels that do not overlap with image edges. Their method tends to produce piece-wise constant \textit{full blur maps}.
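The re-blurring idea of~\cite{DefocusPaper} can be illustrated in 1D: for a step edge blurred by an unknown Gaussian $\sigma$ and re-blurred with a known $\sigma_0$, the gradient magnitude ratio at the edge is $R=\sqrt{\sigma^2+\sigma_0^2}/\sigma$, so $\sigma = \sigma_0/\sqrt{R^2-1}$. A sketch on a synthetic signal (all parameter values illustrative):

```python
import numpy as np

def gaussian_blur1d(x, sigma):
    # discrete Gaussian kernel, truncated at 4 sigma
    r = int(np.ceil(4 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(x, k / k.sum(), mode="same")

# synthetic step edge defocused with unknown sigma = 2.0
edge = gaussian_blur1d(np.repeat([0.0, 1.0], 200), 2.0)

sigma0 = 1.0                              # known re-blurring scale
reblur = gaussian_blur1d(edge, sigma0)
g1 = np.abs(np.gradient(edge))
g2 = np.abs(np.gradient(reblur))
i = np.argmax(g1)                         # edge location
R = g1[i] / g2[i]                         # gradient magnitude ratio
sigma_est = sigma0 / np.sqrt(R**2 - 1)    # recovered blur scale
```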
\textbf{Region-based methods} typically examine local image patches to estimate the unknown blur amount, and they mostly employ a post-regularization scheme to produce visually coherent results.
In the pioneering work of Chakrabarti et al.~\cite{CVPR2010FREQ}, the blur identification problem (both defocus and motion) was tackled in the frequency domain by exploring the \textit{convolution} theorem. They computed a likelihood function for a given candidate Point Spread Function (PSF), formulating a sub-band decomposition and Gaussian Scale Mixtures. This method later inspired Zhu et al.~\cite{ZhuFREQ}, who explored a continuous probability function to assess the unknown blur scale at each pixel location, analyzing the localized Fourier spectrum. Additionally, they incorporated color edge information and smoothness constraints to produce a coherent defocus blur map. Similar to~\cite{ZhuFREQ}, D'Andr\`{e}s et al.~\cite{TIPpaper20161} proposed the use of the localized Fourier spectrum, but they modeled the defocus blur estimation problem as image labeling. More specifically, they proposed labeling each image pixel with a discrete defocus blur scale using a machine learning method (regression tree fields -- RTFs), which provides a global consistency in the estimated defocus map. Moreover, D'Andr\`{e}s et al. in~\cite{TIPpaper20161} created a naturally defocused image dataset with known (disk) defocus blur scales using a Lytro camera. Liu and colleagues~\cite{liu2020defocus} recently presented an extension of the RTF model by including edge information, which improved the results.
\textbf{Deep learning} research has shown promising results in many areas ranging from image classification~\cite{ILSVRC15},
to super-resolution~\cite{Li_2019_CVPR} and image deblurring~\cite{Gao_2019_CVPR}, to mention just a few
applications. In recent years, interesting deep strategies have also been developed for defocus blur estimation.
Zeng et al.~\cite{nnblur1} proposed a CNN architecture to learn meaningful local features in a superpixel level for blurry region estimation, and Zhang et al.~\cite{understandblur} explored the blur ``desirability'' in terms of image quality (at three levels -- Good, OK and Bad) using a huge manually labeled dataset, which is not currently publicly available. On the other hand, Park et al.~\cite{park2017unified} proposed combining hand-crafted features with deep features to boost the performance following a similar strategy to edge-based methods (e.g., sparse blur map estimation along edges followed by an interpolation). Lee et al.~\cite{Lee_2019_CVPR} recently presented an end-to-end CNN architecture for the defocus blur estimation problem, which uses an additional \textit{Domain Adaptation}~\cite{domain} technique to transfer features from naturally defocused images to synthetically defocused images. Tang and colleagues~\cite{tang2020r2mrf} also presented an end-to-end network based on a series of residual refinements, but focusing on blur detection (i.e., separating in-focus from out-of-focus regions).
In particular, end-to-end methods (such as~\cite{Lee_2019_CVPR,tang2020r2mrf}) can produce dense blur maps (as traditional region-based methods) while exploring tensor-based parallelism for fast execution, particularly when high-end GPUs are available. However, the performance tends to degrade as larger input images are used, as they might not fit entirely into the GPU memory.
In general, region-based approaches are costlier and more accurate than edge-based methods. In this paper, we present an edge-based method that outperforms existing edge- or region-based methods in terms of accuracy. The core idea of our method is to use only \textit{pattern} edge patches for defocus blur estimation to avoid CoC ambiguities at \textit{depth} edge points.
For this purpose, our solution explores deep architectures that first distinguish \textit{pattern} from \textit{depth} edges using multi-scale image patches (edge classification task), then estimate the blur values along \textit{pattern} edges (blur estimation task), and finally generate a full blur map by using a fast propagation scheme that respects \textit{depth} edges. To the best of our knowledge, the issue of CoC ambiguity at depth edge points has only been dealt with by Liu et al.~\cite{TIP20162}, who model an edge point with two parameters (one parameter for each side of the edge). Although their method produces visually plausible results, using a two-sided blur model for depth edges is an oversimplification of the thin lens model since an edge point on a \textit{depth edge} might contain a mixture of different Circles of Confusion~\cite{LensCOC,Lee_2019_CVPR}. Instead, we only use depth edges to prevent blur propagation across different objects.
\section{The Proposed Approach}
\label{sec:proposed}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=\textwidth]{overview22.pdf}
\vskip -0.250cm
\caption{\label{fig:flowchart} Overview of our deep defocus blur estimation method.}
\end{center}
\vskip -0.750cm
\end{figure*}
Given a defocused input image $I^B$, our algorithm starts by computing an edge map of the input image. In this work, we used the well-known Canny detector~\cite{canny_ref}, but any other edge detector can be used. Multi-scale image patches centered at the estimated edge locations are then fed to the first Convolutional Neural Network, called \textit{\textbf{E-NET}}, which classifies an edge as \textit{depth}- or \textit{pattern}-related.
The next step is to feed only patches related to \textit{pattern} edges to another Convolutional Neural Network, called \textit{\textbf{B-NET}}, which estimates the unknown defocus blur amount for a given edge point. Finally, a fast image-guided filter that propagates the sparse blur estimates to the whole image while penalizing propagation over \textit{depth} edge points (which are related to object boundaries) is applied to obtain the final dense map. Fig.~\ref{fig:flowchart} presents an overview of our method.
Although the \textit{pattern} vs. \textit{depth} edge classification problem is related to the 3D geometry of the scene, our method does not explore structural information as in single-image stereo approaches, such as~\cite{eigen2014depth}. Instead, we explore local blur information caused by defocus, which typically occurs when shallow Depth-of-Field (DoF) cameras are used. As such, both tasks (edge classification and blur estimation) are expected to share low-level features, which suggests some kind of communication between the two networks. Since blur estimation is a consolidated problem with existing annotated datasets (unlike the proposed edge classification task), we hypothesize that the low-level features learned in \textit{\textbf{B-NET}} can improve the results of \textit{\textbf{E-NET}}. \ali{In fact, our initial tests explored one distinct network for each task (no weight sharing) and also a single network that branches off into the two tasks (full sharing of initial layers), but our partial weight sharing, as described next, presented the best results.}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=\textwidth]{NetworkDesign.pdf}
\vskip -0.125cm
\caption{\label{fig:networkdesig} Overview of the feature extraction networks for \textit{depth} and \textit{pattern} edge separation and defocus blur estimation.}
\end{center}
\vskip -0.750cm
\end{figure*}
\subsection{Blur Estimation Network}
Our blur estimation network (\textbf{\textit{B-NET}}) is fed with image patches centered at \textit{pattern} edges. As noted in~\cite{patchsize1}, determining an appropriate patch size is a key issue in order to avoid patch scale ambiguities. In the context of blur estimation, Park et al.~\cite{park2017unified} used an edge strength measure to determine suitable patch sizes, assuming that strong edges are likely to be less blurry than weak edges. Claiming that blurrier patches require more spatial information in the representation, they used larger patches for weak edges and smaller patches for strong edges (since their network uses fixed-input patches, a padding procedure is adopted for small patches).
Despite the good results shown by the authors, we contend that edge strength does not necessarily correlate to blurriness. In fact, a sharp low-contrast edge can be detected as a weak edge. In this work, we do not explicitly select a desired input patch size, but instead we propose a multi-scale model to fuse information at different resolution levels.
\textbf{\textit{B-NET}} consists of two cascaded sub-networks: \textit{f1-NET} (green shaded box in Fig.~\ref{fig:networkdesig}) and \textit{b-NET} (yellow shaded box in Fig.~\ref{fig:networkdesig}). Sub-network \textit{f1-NET} receives three patches of different sizes ($P^B_1 = 41 \times 41$, $P^B_2 = 27 \times 27$ and $P^B_3 = 15 \times 15$), which are extracted by centering the different-sized patches at the same edge location. These multi-scale patches, after a series of convolutional filters, Rectified Linear Units (ReLUs) and max pooling layers, are concatenated at the point where they reach the same spatial size (i.e., information at different resolution levels is fused), aiming to extract low-level blur information regarding the edges in a multi-scale way. It is important to note that we tried the same architecture using a single patch size (we tested different sizes), and the multi-scale approach gave better results than any individual patch size.
The output of \textit{f1-NET} is then fed to \textit{b-NET}, which consists of another sequence of convolutional filters with ReLU activation functions. The goal of \textit{b-NET} is to extract deep features $f_B$ specialized to encode the blur level of the multi-scale patches. These features $f_B \in \mathbb{R} ^ {64 \times 1}$ are then sent to a classification network called ``Classification Network I'', which consists of three fully connected layers. This network classifies the input feature vector $f_B \in \mathbb{R} ^ {64 \times 1}$ as one of the quantized blur levels ($23$ levels\footnote{Please check Section~\ref{subsec:dataprep} for details}) through two hidden layers ($300$ and $150$ nodes, respectively) and one output layer with $23$ nodes. ReLU activation functions are used in both hidden layers, and the \textit{softmax} classifier is used in the last layer.
\subsection{Depth vs. Pattern Edge Classification}
Typical edge-based defocus blur estimation methods~\cite{DefocusPaper,TIPpaper2012,icip2016fast,fixedsigma1} start by applying an edge detection scheme to the input image, and estimate a blur value for \textit{every} edge pixel. However, as discussed in Section~\ref{sec:intro}, edges related to a depth discontinuity do not present a well-defined blur value, since they are affected by more than one CoC. In this work, we distinguish \textit{depth} from \textit{pattern} edges using a deep CNN called \textbf{\textit{E-NET}}, with an architecture as illustrated in the top of Fig.~\ref{fig:networkdesig}.
\textbf{\textit{E-NET}} is a network designed to extract local deep features to distinguish \textit{pattern} from \textit{depth} edge points. Our hypothesis is that the low-level features extracted by \textit{f1-NET} using multi-scale patches encode relevant information not only for blur estimation, but also for edge classification. As such, \textbf{\textit{E-NET}} inherits all the weights from \textit{f1-NET}, and it also includes a separate branch that is fed with a fixed-sized patch ($P^E_1 = 41 \times 41$) that aims to extract features tailored to the edge classification problem. This branch is called sub-network \textit{f2-NET} and is shown in a purple shaded box in Fig.~\ref{fig:networkdesig}.
The outputs of \textit{f1-NET} (low-level blur information) and \textit{f2-NET} (edge classification features)
are then fused together when they reach the same spatial resolution (see ``Information Fusion'' in Fig.~\ref{fig:networkdesig}). Fused information is then sent to \textit{e-NET}, which consists of a set of convolutional layers with ReLU activation functions to extract deep features $f_E$ specialized for \textit{pattern} and \textit{depth} edge classification. Finally, these deep features $f_E \in \mathbb{R} ^ {64 \times 1}$ are sent to a classification network called ``Classification Network II'', which consists of two fully connected hidden layers with $300$ and $150$ nodes, respectively, and ReLU activation functions, similar to the classification layers of \textbf{\textit{B-NET}}. However, the output layer presents only two nodes (\textit{pattern} or \textit{depth} edge) with \textit{softmax} activation.
It is also important to note that there is significantly more labeled data for blur estimation than for edge classification (in fact, we had to label data for this task). Hence, \textit{f1-NET} learns low-level blur features with a large amount of data, while \textit{f2-NET} learns specific low-level information for edge classification with a more modest amount of data.
\subsection{Full Blur Map}
The output of \textbf{\textit{B-NET}} is a sparse and quantized blur map computed only at \textit{pattern} edges provided by \textbf{\textit{E-NET}}. As in most edge-based methods~\cite{DefocusPaper,karaalijungICIP2016,TIP20162,park2017unified,OurTIP2018}, a propagation scheme is used to obtain a full blur map.
For this task, the majority of edge-based defocus blur estimation methods~\cite{DefocusPaper,karaalijungICIP2016,TIP20162,park2017unified} adopted the Laplacian-based colorization scheme~\cite{LaplacianM}.
However, this propagation scheme is time-consuming, and it respects the edges of the original blurry image $I^B$ even
at \textit{pattern} edges, causing visible artifacts in the full blur map even when depth (and blur) does not vary.
An alternative approach would be to use a cross-domain filter that acts on both the sparse blur map $I_S$ and the binary edge map $E$, assuming that $I_S(\bm{x}) = 0$ when $E(\bm{x}) = 0$. If function $\beta$ defines an adaptive weight for each pair of pixels $(\bm{x}, \bm{y})$, the interpolated blur map $B$ is given by
\begin{equation}
B(\bm{x}) = \frac{\sum_{\bm{y}\in \mathcal{N}_{\bm{x}}} \beta(\bm{x},\bm{y})I_S(\bm{y})}
{\sum_{\bm{y}\in \mathcal{N}_{\bm{x}}} \beta(\bm{x},\bm{y})E(\bm{y})} =
\frac{\mathcal{F}(I_S, I)}
{\mathcal{F}(E, I)},
\label{eq:interp}
\end{equation}
where $\mathcal{N}_{\bm{x}}$ is the neighborhood around pixel $\bm{x}$ that specifies the data propagation region (and can be the whole image), and $\mathcal{F}(J,I)$ denotes the cross-domain filtering of image $J$ using image $I$ as the guidance (the selection of the filtering approach impacts the weights $\beta$). The normalization (denominator) ensures that the contribution of all valid values within the neighborhood $\mathcal{N}_{\bm{x}}$
adds up to one.
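Eq.~\eqref{eq:interp} can be sketched in 1D, with a plain Gaussian filter standing in for the guided cross-domain filter $\mathcal{F}$ (the actual method uses an edge-aware guided filter; this stand-in only illustrates the normalized interpolation):

```python
import numpy as np

def gaussian_filter1d(x, sigma):
    # discrete Gaussian kernel, truncated at 4 sigma
    r = int(np.ceil(4 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(x, k / k.sum(), mode="same")

# sparse blur values I_S known only at edge positions marked in E
E = np.zeros(100)
E[[20, 70]] = 1.0
I_S = np.zeros(100)
I_S[20], I_S[70] = 2.0, 5.0

num = gaussian_filter1d(I_S, sigma=10.0)     # F(I_S, I)
den = gaussian_filter1d(E, sigma=10.0)       # F(E, I)
B = num / np.maximum(den, 1e-12)             # normalized interpolation
```

Near each edge position the interpolated map recovers the sparse value, and in between the two estimates it blends smoothly toward their weighted average.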
Edge-aware filters (such as the bilateral filter) consist of adaptive weights $\beta$ that try to prevent mixing information across image edges. In our problem, however, we can (and should) allow propagation across \textit{pattern} edges, but want to prevent propagation across \textit{depth} edges, since they typically separate objects at different depths (and hence, blur values).
One edge-aware filter that allows a suitable adaptation for treating \textit{pattern} and \textit{depth} edges in a different way is the
Domain Transform (DT) filter presented in~\cite{Gastal}. The core idea of the DT filter is to perform 1D domain transformations such that samples considered ``different'' are spatially ``pushed apart'' from each other. With this transformation, a convolution kernel with a fixed size acts like an edge-preserving filter (similarly to the bilateral filter), and two-dimensional filtering is achieved by alternating filtering along rows and columns. For more details on the DT filter, the reader is directed to~\cite{Gastal}.
In this work, we propose a simple modification of the domain transformation that adds a penalty to \textit{depth} edges.
The proposed domain transformation function $ct$ (in the continuous domain) that measures the distance between two 1D points $u \leq w$ is given by
\begin{equation}
ct(u,w) = \int_{u}^{w} 1 + \Psi(x) + \frac{\sigma_s}{\sigma_r} \sum_{k=1}^{c} | I'_k(x) | dx,
\label{eq:domain_transform_modified}
\end{equation}
where $I'_k$ denotes the derivative of the $k$-th color channel of guide image $I$ (along a row or column), and $\sigma_s$, $\sigma_r$ are parameters that control the spatial and color range of the kernel, as defined in~\cite{Gastal}. Note that the value of $ct(u,w)$ progressively increases as do the spatial and color distances between points $u$ and $w$. Our modification to~\cite{Gastal} is the introduction of the cost term
$\Psi(x)$ given by
\begin{equation}
\Psi(x) = \left\{
\begin{array}{ll}
\psi, & \text{~if~} x \text{~is a depth edge}\\% is } E_d(x) = 1 \\
0, & \text{otherwise}
\end{array}
\right .,
\label{eq:costval}
\end{equation}
and $\psi$ is a constant that defines the penalty introduced by depth edges. Note that $1 + \psi$ becomes a lower bound for the distance between any two points $u$ and $w$ separated by a \textit{depth} edge. Hence, if $\psi$ is sufficiently large, blur propagation between these two points is virtually stopped. The domain-transformed signal is then convolved with a low-pass filter with variance $\sigma_s^2$ (a very fast implementation is possible if box filters are used).
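A discrete 1D sketch of the modified transform in Eqs.~\eqref{eq:domain_transform_modified} and~\eqref{eq:costval}, with forward differences approximating $I'_k$; the function and argument names are ours, and the actual implementation in~\cite{Gastal} differs in details:

```python
import numpy as np


def transformed_domain(row, depth_edges, sigma_s, sigma_r, psi=100.0):
    """Discrete 1D domain transform with depth-edge penalty.

    row:         (W, C) array, one scanline of the guidance image.
    depth_edges: (W,)  binary mask, 1 where x is a depth edge.
    Returns t with ct(u, w) = t[w] - t[u] for u <= w."""
    d = np.abs(np.diff(row, axis=0)).sum(axis=1)        # sum_k |I'_k|
    step = 1.0 + psi * depth_edges[1:] + (sigma_s / sigma_r) * d
    return np.concatenate([[0.0], np.cumsum(step)])
```

Any pair of samples straddling a depth edge is at least $1+\psi$ apart in the transformed domain, so a fixed-size kernel of spatial scale $\sigma_s$ effectively cannot bridge it.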
Since fine textures and/or noise can block blur propagation across \textit{pattern} edges, the input blurry image $I^B$ is first simplified by filtering with the edge-aware filter itself (i.e., $\hat{I^B} = \mathcal{F}(I^B, I^B)$), and the result is then used as the guidance image in the propagation given by Eq.~\eqref{eq:domain_transform_modified}. Note that a similar strategy was employed in~\cite{OurTIP2018}, but without considering \textit{depth} edges.
In terms of complexity, \textbf{\textit{B-NET}} and \textbf{\textit{E-NET}} require, respectively, 123.46 and 149.16 MFLOPs. Since these networks are fed with edge-centered patches, the main bottleneck of the proposed method relates to the total number of edges. The interpolation step is very fast, since the DT-filter can be implemented in linear time~\cite{Gastal}.
\section{Experiments}
\label{sec:experiments}
\subsection{Data Preparation}
\label{subsec:dataprep}
We use images from the ILSVRC~\cite{ILSVRC15}, MS-COCO~\cite{MSCOCO} and HKU-IS~\cite{saliencyData} datasets to train the proposed network architectures. Due to the lack of annotated datasets with blur data, we strongly rely on synthetic data.
\textbf{Synthetic data for defocus blur estimation:} in order to generate a dataset with known ground-truth blur values (i.e., the defocus blur amount is known for a given pixel) to train \textit{\textbf{B-NET}}, we first manually select sharp (all-in-focus) images that do not contain any visually detectable blurry pixels (250 from ILSVRC and 250 from MS-COCO), and then convolve each selected sharp image $I^S$ with a blur kernel. {Although the actual kernel depends on the camera and lens system, a disk kernel models a perfect lens system with a circular aperture, and was shown to be a good approximation in experiments with real images conducted in~\cite{ZhuFREQ}. Here, we used disk kernels $C_{R_{i}}$ to generate blurry images $I^B$ through
\begin{equation}
\label{eq:blurringtheimage}
I^B_i=I^S * C_{R_{i}},~ i \in \{1,2,\cdots, S\},
\end{equation}
where $R_i$ denotes the blur level (i.e., the kernel size), and $S$ is the number of quantized blur values.
Following~\cite{TIPpaper20161}, we set $S=23$, starting from \ali{$R_1 = 0.5$} and increasing up to $R_{23} = 6$ with a step size of $0.25$.
As noted in~\cite{liu2020defocus}, the chosen range covers the expected blur amount in most image resolutions, except for extremely defocused regions in ultra-high resolution images. Then, we extract approximately 5M patches from the edge points of synthetically blurred images, assuring that they are equally distributed for each blur level. Although blurring the whole image with a single spatially-invariant blur kernel is clearly an oversimplification since it does not impose any blur variations due to depth changes, this approach has generalized well to the blur estimation problem, especially in region-based methods~\cite{CVPR2010FREQ,ZhuFREQ}. Furthermore, the separation of \textit{depth} and \textit{pattern} edges from each other assures that
the proposed network will only be fed by patches that do not present strong depth variations.
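The disk-kernel blurring of Eq.~\eqref{eq:blurringtheimage} can be sketched as follows. The hard-thresholded rasterization of the disk is an assumption of this sketch (anti-aliased disks are smoother for sub-pixel radii); function names are ours:

```python
import numpy as np
from scipy.ndimage import convolve


def disk_kernel(radius):
    """Normalized disk PSF C_R modelling a circular aperture."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()


# 23 quantized blur levels R_1 = 0.5 up to R_23 = 6 with step 0.25
radii = np.arange(0.5, 6.0 + 1e-9, 0.25)


def blur_image(sharp, radius):
    """I^B_i = I^S * C_{R_i} (grayscale; apply per channel for color)."""
    return convolve(sharp, disk_kernel(radius), mode='reflect')
```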
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.3\columnwidth]{18_1_0.png}
\includegraphics[width=0.3\columnwidth]{18_2_0.png}
\includegraphics[width=0.3\columnwidth]{18_3_0.png} \\
\includegraphics[width=0.3\columnwidth]{18_4_0.png}
\includegraphics[width=0.3\columnwidth]{18_5_0.png}
\includegraphics[width=0.3\columnwidth]{18_6_0.png}
\caption{\label{fig:synforback} \ali{Synthesized foreground-background images. Foreground blur level $R_{k_1}$, background blur level $R_{k_2}$. First row: $R_{k_1}=0$ and $R_{k_2}=1,3,5$, from left to right. Second row: $R_{k_1}=1$ and $R_{k_2}=3$, $R_{k_1}=1$ and $R_{k_2}=5$, and $R_{k_1}=3$, $R_{k_2}=5$ from left to right. }}
\end{center}
\vskip -0.750cm
\end{figure}
\textbf{Synthetic data for depth vs. pattern edge separation:} to train \textit{\textbf{E-NET}}, which distinguishes \textit{depth} edges from \textit{pattern} edges, we need edge points that present different blurriness (i.e. depth) levels at different sides. Although a database with similar characteristics is reported in~\cite{understandblur}, it was not made publicly available. Due to the lack of annotated data, we produced synthetic scenes $I^{FB}$ in a foreground-background manner using 200 salient regions as foreground objects from the HKU-IS~\cite{saliencyData} dataset, which provides images $S^I$ with binary masks $S^B$ indicating salient objects. We also use 100 images from ILSVRC~
\cite{ILSVRC15} and 100 images from MS-COCO~\cite{MSCOCO} datasets to compose the background of our dataset.
To generate our synthetic dataset, we first crop the salient object from the salient image $S^I$ using the provided binary mask $S^B$. Then, we blur the cropped region and its corresponding mask image with a disk blur kernel \ali{$C_{R_{k_1}}$}, while simultaneously blurring a sharp (focused) image with a different disk blur kernel \ali{$C_{R_{k_2}}$} (with \ali{$R_{k_1} < R_{k_2}$}), which will be the background image. Finally, we alpha-blend the blurry cropped image onto the blurry background image using the blurred binary mask as alpha values. We used four levels of blur scale for this task:
\ali{$R_{1}=0$} for no blur, \ali{$R_{2}=1$} for low blur, \ali{$R_{3}=3$} for medium blur, and \ali{$R_{4}=5$} for high blur.
For each salient image -- background image pair, we synthesize $N=6$ synthetic foreground-background images $I^{FB}_n, n \in \{1, 2, \ldots,N\}$, \ali{as illustrated in Fig.~\ref{fig:synforback}}.
Finally, we extract around 2M \textit{depth} edge patches from the boundary of projected objects. Since using only synthetic images could overfit the network, we also manually labeled \textit{depth} edges in real images, \ali{as illustrated in Fig.~\ref{fig:realdata}.}
More precisely, we labeled 100 real defocused images chosen from Flickr, from which 500K \textit{depth} edge patches were extracted. We also perform data augmentation by randomly rotating the images by $-90$, $-30$, $60$, $135$, or $180$ degrees.
Since noise did not seem to be an issue with the blurry images in our datasets, we did not add noise in the augmentation process.
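The foreground-background synthesis described above can be sketched as an alpha-blend; `blur(img, r)` is any disk-blur routine (an assumed interface, with $r=0$ meaning no blur), and the function name is ours:

```python
import numpy as np


def synth_fb(fg, fg_mask, bg, blur, r_fg, r_bg):
    """Blur the foreground object and its binary mask with radius r_fg,
    blur the background with r_bg (r_fg < r_bg), then alpha-blend the
    blurry foreground onto the blurry background using the blurred
    mask as alpha values."""
    assert r_fg < r_bg
    fg_b = blur(fg, r_fg)
    alpha = np.clip(blur(fg_mask.astype(float), r_fg), 0.0, 1.0)
    bg_b = blur(bg, r_bg)
    return alpha * fg_b + (1.0 - alpha) * bg_b
```

The blurred mask softens the compositing boundary consistently with the foreground blur, which keeps the synthetic \textit{depth} edge visually plausible.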
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.45\columnwidth]{blurryim_028.png}
\includegraphics[width=0.45\columnwidth]{edge028.png} \\
\caption{\label{fig:realdata} Manually labeled depth edges from a defocused image in Flickr~\cite{flick_photo}.}
\end{center}
\end{figure}
As for \textit{pattern} edge patches, which form the other class of \textit{\textbf{E-NET}}, we extract approximately 2.5M patches from 100 sharp (focused) images of the ILSVRC~\cite{ILSVRC15} and 100 sharp (focused) images of the MS-COCO~\cite{MSCOCO} datasets, synthetically blurring them with different blur patterns such as uniform blur, gradually changing blur, and step-wise changing blur (with step size $0.25$ to simulate small blur changes at the same depth layers).
\subsection{Model Training}
As described in Section~\ref{sec:proposed}, although there is some weight sharing between \textbf{\textit{B-NET}} and \textbf{\textit{E-NET}}, the two networks are trained separately. Both models were implemented using Tensorflow with $80-20\%$ train-validation data separation, with the \textit{softmax cross entropy} as the loss function. We used the Adam~\cite{adamop} optimizer with a batch size of $256$.
We start by first training \textbf{\textit{B-NET}}, using an initial learning rate of $10^{-3}$ that is divided by $10$ every $10$ epochs, yielding convergence after $75$ epochs. In a subsequent step we trained \textbf{\textit{E-NET}} with the same initial learning rate, but divided by $10$ at every $5$ epochs instead of $10$. It is important to recall that during the training of \textbf{\textit{E-NET}}, the shared weights from \textbf{\textit{B-NET}} (called \textit{f1-NET}, as shown in Fig.~\ref{fig:networkdesig}) are frozen, and the remaining weights of \textbf{\textit{E-NET}} are learned. With this strategy, the network converged after $30$ epochs.
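The staircase learning-rate schedule described above can be written as a small helper (illustrative only; function and parameter names are ours, and the actual training uses the Adam~\cite{adamop} optimizer as stated):

```python
def step_lr(epoch, initial=1e-3, drop_every=10, factor=0.1):
    """Initial learning rate divided by 10 every `drop_every` epochs
    (B-NET: drop_every=10, E-NET: drop_every=5)."""
    return initial * factor ** (epoch // drop_every)
```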
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.3\columnwidth]{orig_20.png}
\includegraphics[width=0.3\columnwidth]{imgm0_20.png}
\includegraphics[width=0.3\columnwidth]{depth_pattern_edges_20.png} \\
\includegraphics[width=0.3\columnwidth]{orig_22.png}
\includegraphics[width=0.3\columnwidth]{imgm0_22.png}
\includegraphics[width=0.3\columnwidth]{depth_pattern_edges_22.png} \\
\caption{\label{fig:resultEdge} \textit{Depth} and \textit{pattern} edge separation via \textbf{\textit{E-NET}}. From left to right (in pairs): original images (from ~\cite{TIPpaper20161}), ground truth blur map, and corresponding \textit{depth} edges (red) and \textit{pattern} edges (blue).}
\end{center}
\vskip -0.75cm
\end{figure}
\textbf{Implementation details:} Regarding defocus blur interpolation, we used $\sigma_{r_1} = 0.5$ and $\sigma_{s_1} = 7$ to get the simplified image $\hat{I^B}$, and $\sigma_{r_2} = 3.75$ and $\sigma_{s_2} = \min\{H,W\}/8$ to propagate sparse blur estimates, where $H$ and $W$ are the height and width of the simplified defocus image $\hat{I^B}$, respectively. Before the simplification operation, the image pixels are normalized to the range $[0, 1]$. Since the data interpolation has to be propagated to the whole image from edge pixels (which might be very sparse), we chose a large spatial kernel size $\sigma_{s_2}$. The range kernel $\sigma_{r_2}$ is also a relatively large value, chosen to increase pixel similarity and allow some propagation over \textit{pattern} edges. For the penalization term of \textit{depth} edges, we empirically set $\psi = 100$ (which blocks propagation almost entirely) and used this default value in all experiments -- larger values for $\psi$ show almost no difference from the default value.
\begin{table*}[!ht]
\caption{Quantitative evaluation of blur maps for the dataset provided in~\cite{TIPpaper20161} in terms of Average MAE $\pm$ Standard Deviation (STD) \ali{of Raw Blur}, \ali{Average MAE of Relative Blur}, Running Time and Implementation.}
\centering
\scalebox{0.95}{
\begin{tabular} {c c c c c c c c}
\hline
&~\cite{DefocusPaper}&~\cite{TIPpaper20161}&~\cite{OurTIP2018}&~\cite{park2017unified}&~\cite{Lee_2019_CVPR} &~\cite{liu2020defocus} &Our\\ [0.5ex]
\hline\hline
MAE $\pm$ STD \ali{of Raw Blur} & $0.629\pm 0.205$ & $0.197\pm 0.072$ & $0.378\pm 0.170$ & $0.307\pm 0.121$ & $0.611\pm 0.217$ & $0.180 \pm \mathbf{0.067}$ & $\mathbf{0.169}\pm 0.068$ \\
\ali{MAE of Relative Blur} & 0.098 & 0.033 & 0.063 & 0.049 & 0.101 & 0.030 & \textbf{0.028} \\
Time (sec.) & $5.60$ & $\thicksim 114$ & $0.69$ & $7.02$ & $\mathbf{0.14}$ & $\thicksim 184^*$ & $4.55$ \\
\hline
Impl. & Matlab & C++ \& Matlab & Matlab & Matlab & Python/GPU & Not informed & Python/GPU \\
\hline
\multicolumn{8}{l}{$^*$ Running time extracted from \cite{liu2020defocus}, where a different hardware was used. Authors report running time higher than~\cite{TIPpaper20161}.}
\end{tabular}
}
\label{table:1}
\vskip -0.25cm
\end{table*}
\subsection{Experimental Validation}
In order to validate the proposed defocus blur estimation method, we use the dataset introduced in~\cite{TIPpaper20161} that contains $22$ real defocused images with resolution $360 \times 360$ captured by a Light Field camera, and it is (to our knowledge) the only dataset in the literature that provides the ground truth defocus blur values for each pixel. Although it also presents images corrupted by artificial noise, we restricted our analysis to the subset with natural image noise only since our model was not trained with artificial noise. We use the popular Mean Absolute Error (MAE) to quantitatively compare our defocus blur estimation approach with SOTA methods, and show edge map images with highlighted \textit{depth} edges for qualitative evaluation of \textbf{\textit{E-NET}} on the same dataset.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.21\columnwidth]{orig_12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m1_im12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m2_im12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m3_im12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m4_im12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m5_im12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m6_im12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m7_im12.png}
\includegraphics[width=0.21\columnwidth]{bmap_m8_im12.png} \\
\includegraphics[width=0.21\columnwidth]{orig_14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m1_im14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m2_im14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m3_im14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m4_im14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m5_im14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m6_im14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m7_im14.png}
\includegraphics[width=0.21\columnwidth]{bmap_m8_im14.png} \\
\includegraphics[width=0.21\columnwidth]{orig_15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m1_im15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m2_im15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m3_im15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m4_im15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m5_im15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m6_im15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m7_im15.png}
\includegraphics[width=0.21\columnwidth]{bmap_m8_im15.png} \\
\includegraphics[width=0.21\columnwidth]{orig_22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m1_im22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m2_im22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m3_im22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m4_im22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m5_im22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m6_im22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m7_im22.png}
\includegraphics[width=0.21\columnwidth]{bmap_m8_im22.png} \\
\caption{\label{fig:resultOnTipDataOrig} Comparison of blur estimation algorithms using the dataset provided in~\cite{TIPpaper20161}. From left to right: original image, ground truth, results of~\cite{DefocusPaper},~\cite{TIPpaper20161},~\cite{OurTIP2018},~\cite{park2017unified},~\cite{Lee_2019_CVPR},~\cite{liu2020defocus} and the proposed method. }
\end{center}
\vskip -0.75cm
\end{figure*}
We first illustrate examples of \textit{pattern} vs. \textit{depth} edge classification produced by \textbf{\textit{E-NET}} in Fig.~\ref{fig:resultEdge}.
In the first blurry image, three different depth layers can be seen (from left to right), and \textbf{\textit{E-NET}} manages to distinguish most of the edge points that present depth discontinuities (abrupt blur change).
In the second image, there is an abrupt depth transition from the red wall to the background, and \textbf{\textit{E-NET}} correctly labels these boundary points as \textit{depth} edges, with a few false negatives. Note that edge classification is an intermediate step of our approach, and it will be evaluated implicitly by showing that it does improve the final
goal (dense blur estimation), as will be shown next. Furthermore, the amount of data used to train \textbf{\textit{E-NET}} is rather limited, so we only split the data into train and validation (no test set). For the sake of illustration, the accuracy in the validation set was 88\%.
We computed the MAE \ali{of the raw blur values} for each of the $22$ images in the database and reported the average MAE and the standard deviation of the full blur maps obtained by the proposed method\footnote{Our code is available at \url{github.com/alikaraali/DepthEdgeAwareBENet}} and competitive approaches in Table~\ref{table:1}. \ali{We also computed and reported the MAE of the relative blur values by re-scaling the raw blur values to $[0,1]$, as done in~\cite{Lee_2019_CVPR}}. We can see that our method outperforms all the competitive approaches \ali{in both raw and relative blur}\footnote{\ali{Although we used the official implementation of~\cite{Lee_2019_CVPR} from \url{github.com/ake/DMENet}, we obtained an average relative blur MAE slightly different from the value reported in their paper.}}. It is slightly better than region-based methods~\cite{TIPpaper20161,liu2020defocus}, which are considerably slower. Please note that since the method of~\cite{Lee_2019_CVPR} produces a ``relative blur map'', its authors normalize the GT blur map by the maximum value to report their accuracy. For a fair comparison with the other approaches, we do the opposite: multiply their blur map by the maximum blur value to report the raw blur accuracy. Since the methods described in~\cite{DefocusPaper,OurTIP2018,park2017unified} model the blur with a Gaussian PSF, we re-scale their results from Gaussian scale to disk radii via the mapping function provided by~\cite{TIPpaper20161}. Also, since the method presented in~\cite{park2017unified} is trained with a maximum Gaussian blur scale $\sigma_g=2.0$, we clipped the ground truth values to $2$ when computing the results of~\cite{park2017unified}, which favored their results significantly. The average running times for all the analyzed methods considering all the images are also shown in Table~\ref{table:1}.
Although~\cite{Lee_2019_CVPR} has the fastest execution speed, our method presents a very good compromise between MAE and running time when compared to other SOTA methods. The supplementary material provides a comprehensive study of the dataset.
Since an important application related to blur estimation is deblurring, we have also evaluated how well our method integrates in this task.
Following~\cite{TIPpaper20161}, we used the combination of the methods proposed in \cite{laplacianpriors} and \cite{levin2007image} as a deblurring baseline approach,
and used as input the ground truth data provided in~\cite{TIPpaper20161}, the defocus maps produced by SOTA approaches and by the proposed method.
The average PSNR and SSIM values are summarized in Table~\ref{table:deblur}, showing that our method produced the highest average gain for both PSNR and SSIM, being inferior only to the deblurring results obtained with GT blur estimates.
Fig.~\ref{fig:deblur} shows a visual comparison of some deblurring results, highlighting regions with high structural
or textural information. Our method presents visual results similar to those obtained with the ground truth blur map and the blur map produced with~\cite{liu2020defocus}, while being much faster than~\cite{liu2020defocus}. A detailed analysis of this experiment, along with the visual results of all methods, is provided in the supplementary material.
\begin{table*}[ht]
\caption{Average PSNR \& SSIM for the deblurred images on the dataset provided in~\cite{TIPpaper20161} using different blur maps. Best value shown in bold, second best in italic.}
\centering
\scalebox{0.99}{
\begin{tabular} {c c c c c c c c c c}
\hline
& Blurry Im. & GT Data &~\cite{DefocusPaper}&~\cite{TIPpaper20161}&~\cite{OurTIP2018}&~\cite{park2017unified}&~\cite{Lee_2019_CVPR} &~\cite{liu2020defocus} &Our\\ [0.5ex]
\hline\hline
PSNR/SSIM & 24.20/0.782 & \textbf{27.69}/\textbf{0.885} & 23.96/0.786 & 26.45/0.862 & 25.72/0.833 & 25.41/0.839 & 24.70/0.801 & 26.64/0.866 & \textit{27.00}/\textit{0.870} \\
Gain & N/A & \textbf{3.48}/\textbf{0.102} & -0.24/0.004 & 2.25/0.080 & 1.51/0.050 & 1.20/0.056 & 0.49/0.018 & 2.44/0.084 & \textit{2.80}/\textit{0.088} \\
\hline
\end{tabular}
}
\label{table:deblur}
\end{table*}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=0.31\columnwidth]{deblurred_image_09_method01.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_09_method03.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_09_method02.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_09_method07.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_09_method09.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_09_method08.png} \\
\vskip .01cm
\includegraphics[width=0.31\columnwidth]{deblurred_image_18_method01.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_18_method03.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_18_method02.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_18_method07.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_18_method09.png}
\includegraphics[width=0.31\columnwidth]{deblurred_image_18_method08.png} \\
\begin{tabular}{UUUUUUU}
(a) & (b) & (c) & (d) & (e) & (f)
\end{tabular}
\caption{\label{fig:deblur} Deblurring results using different blur maps as input. (a)-(b) sharp and blurry images. (c)-(f): results using the GT blur map,~\cite{Lee_2019_CVPR},~\cite{liu2020defocus} and the proposed method. }
\end{center}
\vskip -0.75cm
\end{figure*}
\begin{figure}[!hb]
\vskip -0.25cm
\begin{center}
\includegraphics[width=1.0\columnwidth]{prcurve.pdf}
\caption{\label{fig:prcurve} Precision-recall curves and best accuracy for \cite{DefocusPaper}, \cite{OurTIP2018}, \cite{park2017unified}, \cite{Lee_2019_CVPR} and the proposed method on the CUHK defocus detection dataset \cite{cuhkdata}.}
\end{center}
\vskip -0.5cm
\end{figure}
Finally, we tested our method in the related task of defocus blur detection (DBD), which aims to find in-focus and out-of-focus regions of a given image. For this task, we use the CUHK blur detection dataset~\cite{cuhkdata}, which contains 704 defocused images along with the corresponding binary blur maps as ground truth. Since the main focus of the proposed method is defocus blur estimation (i.e., finding the actual blur level at each pixel), DBD is performed by thresholding the estimated blur maps.
More precisely, we used the same adaptive thresholding approach adopted in~\cite{park2017unified} and~\cite{Lee_2019_CVPR}, which consists of defining a threshold
\begin{equation}
\tau = \alpha v_{max} + (1 - \alpha) v_{min},
\end{equation}
where $v_{max}$ and $v_{min}$ are the maximum and the minimum blur values of the estimated defocus blur map, and $\alpha$ is an empirically chosen parameter, which is set so as to maximize the accuracy of each method individually.
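As a sketch, the adaptive thresholding can be implemented as follows; taking the in-focus region as the pixels with blur below $\tau$ is a convention we assume here (the function name is ours):

```python
import numpy as np


def detect_defocus(blur_map, alpha):
    """Binarize an estimated blur map with the adaptive threshold
    tau = alpha * v_max + (1 - alpha) * v_min; pixels with blur below
    tau are marked as in-focus (1), the rest as out-of-focus (0)."""
    v_min, v_max = blur_map.min(), blur_map.max()
    tau = alpha * v_max + (1 - alpha) * v_min
    return (blur_map < tau).astype(np.uint8)
```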
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.32\columnwidth]{b1.png}
\includegraphics[width=0.32\columnwidth]{edge1sided.png}
\includegraphics[width=0.32\columnwidth]{edge2sided.png}
\includegraphics[width=0.32\columnwidth]{dense_map0.png}
\includegraphics[width=0.32\columnwidth]{dense_map1.png}
\includegraphics[width=0.32\columnwidth]{dense_map0L.png}
\vskip .05cm
\includegraphics[width=0.32\columnwidth]{out_of_focus0005.jpg}
\includegraphics[width=0.32\columnwidth]{e2dge1sided.png}
\includegraphics[width=0.32\columnwidth]{e2dge2sided.png}
\includegraphics[width=0.32\columnwidth]{d2ense_map0.png}
\includegraphics[width=0.32\columnwidth]{d2ense_map1.png}
\includegraphics[width=0.32\columnwidth]{d2ense_map0L.png}
\vskip .05cm
\includegraphics[width=0.32\columnwidth]{out_of_focus0003.jpg}
\includegraphics[width=0.32\columnwidth]{e3dge1sided.png}
\includegraphics[width=0.32\columnwidth]{e3dge2sided.png}
\includegraphics[width=0.32\columnwidth]{d3ense_map0.png}
\includegraphics[width=0.32\columnwidth]{d3ense_map1.png}
\includegraphics[width=0.32\columnwidth]{d3ense_map0L.png}
\caption{\label{fig:extra1} Columns 1-3: original image, pattern edges, and depth edges. Columns 4-5: full blur maps via DT-Propagation without and with \textit{depth} edge aware propagation. Last column: full blur map via Laplacian-based propagation}
\end{center}
\vskip -0.75cm
\end{figure*}
Fig.~\ref{fig:prcurve} shows the precision-recall curves for our approach and competitive methods, along with the corresponding accuracy (see legend of the figure). Since only a subset of 200 images was used to test the method presented in~\cite{Lee_2019_CVPR} (the remaining 504 were used to train it), all results refer to this smaller test subset for a fair comparison.
It is important to mention that the proposed method, as well as~\cite{DefocusPaper}, \cite{OurTIP2018}, and \cite{park2017unified}, does not use any images from CUHK dataset at any part of the algorithm design or training, while \cite{Lee_2019_CVPR} uses images from CUHK dataset in the training phase for domain adaptation. As an illustration, the last two rows of Fig.~\ref{fig:extra1} show some results produced by our method on images from the CUHK dataset (before thresholding), and more results are provided in the supplementary material.
\subsection{Parameter Settings and Ablation Studies}
This section studies the effect of adding/removing some steps of the proposed approach, as well as the effect of changing some parameters.
We start by noting that most of the existing edge-based defocus blur estimation methods adopt a post-processing scheme on the sparse blur estimates in order to smooth the results and get rid of outliers. For instance, Zhuo and Sim~\cite{DefocusPaper} used a joint bilateral filter with the original input image as the reference; Karaali and Jung~\cite{OurTIP2018} proposed a Connected Edge Filter (CEF) scheme that regularizes the blur estimates along connected edges; while Park et al.~\cite{park2017unified} proposed a probabilistic joint bilateral filter (PJBF), modifying the post-processing approach used in~\cite{DefocusPaper} by including the confidence values of the blur estimates provided by their network. We tested all these three options on our sparse blur map, and although some of them yielded a small MAE drop on the sparse estimates, none provided any significant error changes in the final full blur maps. More precisely, the error (\ali{raw blur MAE}) of the sparse estimates decreased from $0.112$ to $0.110$ when CEF post-filtering was used, but increased to $0.116$ when PJBF was used. On the other hand, the \ali{raw blur MAE} of the corresponding full blur maps is $0.169$ and $0.171$, respectively, which is equivalent to the results without any post-processing reported in Table~\ref{table:1}.
Another important point to be addressed is the importance of distinguishing \textit{pattern} from \textit{depth} edges, and its relationship with the propagation scheme required
to obtain the full blur map. In Table~\ref{table:1}, we showed that our average \ali{raw blur MAE} for the natural blur dataset was 0.169. We have also tested our propagation scheme without penalization across \textit{depth} edges, and with the popular Laplacian-based colorization scheme~\cite{LaplacianM} used in several competitive approaches~\cite{DefocusPaper,karaalijungICIP2016,TIP20162,park2017unified}. The average \ali{raw blur MAE} for these two propagation schemes are 0.188, and 0.288, respectively. We show in Fig.~\ref{fig:extra1} the result from \textit{\textbf{E-NET}} and its impact on the dense blur map with and without propagation penalization across \textit{depth} edges, along with the Laplacian-based scheme, for three images taken from \cite{DefocusPaper} and \cite{cuhkdata}. Note that our scheme presents sharper object boundaries in the blur map for the in-focus objects. Also, note that not penalizing depth edges means setting $\psi = 0$ in Eq.~\eqref{eq:domain_transform_modified}, so that changing $\psi$ from zero to 100 (default value) leads to increasing blockage of blur propagation across depth edges.
\section{Conclusions}
In this work, we have introduced a novel deep edge-based defocus blur estimation method. The main contributions of the proposed method were: the introduction of a network that distinguishes \textit{pattern} and \textit{depth} edges; use of only \textit{pattern} edge points for blur estimation via another neural network architecture to avoid CoC ambiguity; and penalizing the propagation of sparse blur values at \textit{depth} edge points to avoid mixing blur values from objects at different depths.
Quantitative results using the traditional MAE metric of raw \ali{and relative} defocus blur estimates showed that the proposed blur estimation method produced more accurate results for natural blurry images than all the tested SOTA approaches. Furthermore, experiments involving image deblurring based on the estimated blur maps showed that our method outperformed competitive approaches in terms of the PSNR and SSIM of deblurred images. Finally, experiments conducted on the related task of defocus blur detection showed that the proposed method gave promising results with good cross-dataset generalization capabilities. We believe this is a remarkable achievement for an edge-based approach, particularly considering that we generated synthetic training data with a fixed kernel shape (disk) that is only an approximation of the actual blur kernel.
As future work, we plan to use different kernel shapes (e.g., disk and Gaussian) for generating training data with more variability aiming to further improve the generalization of the network. We also intend to develop another deep network that learns how to adequately propagate sparse blur information to the whole image given the original input color image.
\section*{Acknowledgment}
This work was partly funded by the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13/RC/2016) and is co-funded by the European Regional Development Fund, and also partly supported by Brazilian agencies CAPES (Finance Code 001) and CNPq.
\bibliographystyle{IEEEtran}
\section{Introduction}
Pedestrian flows are dynamical systems. Numerous models exist \citep{hamacher-2001,antonini-2006,chraibi-2011} on both the macroscopic \citep{hughes-2001,hoogendoorn-2004} and the microscopic level \citep{helbing-1995,chraibi-2010,seitz-2012}. In the latter, two approaches seem to dominate: ordinary differential equation (ODE) models and cellular automata (CA).
ODE models are particularly well suited to describe dynamical systems because they can formally and concisely describe the change of a system over time. The mathematical theory for ODEs is rich, both on the analytic and the numerical side.
In CA models, pedestrians are confined to the cells of a grid. They move from cell to cell according to certain rules. This is computationally efficient, but there is only little theory available \citep{boccara-2003}.
Many CA models employ a floor field to steer individuals around obstacles \cite{burstedde-2001,ezaki-2012}.
The use of floor fields for pedestrian navigation in ODE models is only sparingly described in the literature. In \citep{helbing-1995}, pedestrians steer towards the edges of a polygonal path; in \citep{hoogendoorn-2003}, optimal control is applied.
In addition, most of the ODE models are derived from molecular dynamics where the direction of motion is gradually changed by the application of a force. This leads to various problems mostly caused by inertia \citep{chraibi-2011, koster-2013}.
Cellular automata and more recent models in continuous space, like the Optimal Steps Model \citep{seitz-2012}, deviate from this approach and directly modify the direction of motion. This is also true for some ODE models in robotics, where movement is very controlled and precise and thus inertia is negligible \citep{starke-2002,starke-2011}.
The direct change of the velocity constitutes a strong deviation from molecular dynamics and hence from force based models.
This paper proposes an application of this model type to pedestrian dynamics: the Gradient Navigation Model (GNM). The GNM is a system of ODEs that describe movement and navigation of pedestrians on a microscopic level. Similar to CA models, pedestrians steer directly towards the direction of steepest descent on a given navigation function. This function combines a floor field and local information like the presence of obstacles and other pedestrians in the vicinity.
The paper is structured as follows: In the model section, three main assumptions about pedestrian dynamics are stated. They lead to a system of differential equations. A brief mathematical analysis of the model is given in the next section where we use a plausibility argument to reduce the number of free parameters in the ODE system from four to two. We constructed the model functions so that they are smooth. Thus, using standard mathematical arguments, existence and uniqueness of the solution follow directly.
In the simulations section the calibrated model is validated against several scenarios from empirical research: congestion in front of a bottleneck, lane formation, stop-and-go waves and speed-density relations. We also demonstrate computational efficiency using high-order accurate numerical solvers like MATLAB's \textsf{ode45} \citep{dormand-1980b} that need the smoothness of the solution to perform correctly. We conclude with a discussion of the results and possible next steps.
\section{\label{sec-GNM}Model}
The Gradient Navigation Model (GNM) is composed of a set of ordinary differential equations to determine the position $x_i\in\mathbb{R}^2$ of each pedestrian $i$ at any point in time. A navigational vector $\vec{N}_i$ is used to describe an individual's direction of motion.
The model is constructed using three main assumptions.
\begin{description}
\item[Assumption 1] Crowd behavior in normal situations is governed by the decisions of the individuals rather than by physical, interpersonal contact forces.
\end{description}
This assumption is based on the observation that even in very dense but otherwise normal situations, people try not to push each other but rather stand and wait.
It enables us to neglect physical forces altogether and focus on personal goals. If needed in the future, this assumption could be weakened and additional physics could be added similar to \cite{hoogendoorn-2003} who split up what they call the \textit{physical model} and the \textit{control model}.
Note that this assumption sets the GNM apart from models for situations of very high densities, where pedestrian flow becomes similar to a fluid \cite{hughes-2001,helbing-2007}.
\begin{description}
\item[Assumption 2] Pedestrians want to reach their respective targets in as little time as possible based on their information about the environment.
\end{description}
Most models for pedestrian motion are designed with this assumption. Differences remain regarding the optimality criteria for `little time' as well as the amount of information each pedestrian possesses. \cite{helbing-1995} uses a polygonal path around obstacles for navigation, \cite{hoogendoorn-2004b} solve a Hamilton-Jacobi equation, incorporating other pedestrians.
In this paper, we use the eikonal equation similar to \cite{hughes-2001, hartmann-2010} to find shortest arrival times $\sigma$ of a wave emanating from the target region. This allows us to compute the part of the direction of motion $\vec{N}_{i,T}$ that minimizes the time to the target:
\begin{equation}\label{eq:NT}
\vec{N}_{i,T}=-\nabla \sigma
\end{equation}
\begin{description}
\item[Assumption 3] Pedestrians alter their desired velocity as a reaction to the presence of surrounding pedestrians and obstacles. They do so after a certain reaction time.
\end{description}
The relation of speed and local density has been studied numerous times and its existence is well accepted. The actual form of this relation, however, differs between cultures, even between different scenarios \cite{seyfried-2005,chattaraj-2009,jelic-2012}.
Note that assumption 3 not only claims the existence of such a relation but makes it part of the thought process. In our model, we implement this by modifying the desired direction of motion with a vector $\vec{N}_{i,P}$ so that pedestrians keep a certain distance from each other and from obstacles. In models using velocity obstacles, this issue is addressed further \cite{fiorini-1998,shiller-2001,berg-2011,curtis-2014}.
Attractants like windows of a store or street performers could also be modelled as proposed by \cite[p. 49]{molnar-1996}, but are not considered in this paper.
\begin{equation}\label{eq:NP}
\vec{N}_{i,P}=-(\underbrace{\sum_{j\neq i} \nabla P_{i,j}}_\text{influence of pedestrians} + \underbrace{\sum_B \nabla P_{i,B}}_\text{influence of obstacles})
\end{equation}
$\nabla P_{i,j}$ and $\nabla P_{i,B}$ are gradients of functions that are based on the distance to another pedestrian $j$ and obstacle $B$ respectively. Their norm decreases monotonically with increasing distance. To model this, we introduce a smooth exponential function with compact support of radius $R>0$ and maximum value $p/e>0$ (see Fig. \ref{fig:calibrate_potentials}):
\begin{equation}\label{eq:h}
h(r;R,p)=\begin{cases}
p\ \exp\parenth{\frac{1}{(r/R)^2-1}}&|r/R|<1\\
0&\text{otherwise}
\end{cases}
\end{equation}
\begin{figure}
\includegraphics[width=0.4\textwidth]{fig1_calibrate_potentials.pdf}
\caption{\label{fig:calibrate_potentials}The graph of $h$, which depends on the distance between pedestrians $r$ as well the maximal value $p/e$ and support $R$ of $h$.}
\end{figure}
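The bump function $h$ can be sketched directly from its definition (a minimal Python illustration; argument order $r;R,p$ as in Eq. \ref{eq:h}):

```python
import math

def h(r, R, p):
    """Smooth bump with compact support [-R, R] and maximum p/e at r = 0."""
    if abs(r / R) >= 1.0:
        return 0.0
    return p * math.exp(1.0 / ((r / R) ** 2 - 1.0))

# The maximum p/e is attained at r = 0, and h vanishes smoothly
# at the boundary of its support: h(0, R, p) == p / e, h(R, R, p) == 0.
```

Because the exponent diverges to $-\infty$ as $|r/R| \to 1$, all derivatives vanish at the support boundary, which is what makes the model functions smooth.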
To take the viewing angle of pedestrians into account, we scale $\nabla P_{i,j}$ by
\begin{equation}
s_{i,j}=\tilde{g}(\cos(\kappa(\phi_{i,\sigma}-\phi_j)))
\end{equation}
The function $\tilde{g}$ is a shifted logistic function (see appendix \ref{eq:tildeg}) and $(\phi_{i,\sigma}-\phi_j)$ is the angle between the direction $\vec{N}_{i,T}$ and the vector from $x_i$ to $x_j$ (see Fig. \ref{fig:viewingdirection}). $\kappa$ is a positive constant that sets the angle of view to $\approx 200^\circ$.
\begin{figure}
\includegraphics[width=0.4\textwidth]{fig2_viewingdirection.pdf}
\caption{\label{fig:viewingdirection}Isolines of the function $s_{i,j}h(\|x_i-x_j\|;1,1)$ with $x_i=0$ and $x_j\in\mathbb{R}^2$. This function represents the field of view of a pedestrian in the origin together with his or her comfort zone. If $x_j$ is close and in front of $x_i$, the function values are maximal, meaning least comfort for $x_i$.}
\end{figure}
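The viewing-angle scaling can be sketched in a few lines. The exact shifted logistic $\tilde{g}$ is given in the appendix; the slope and shift used below are illustrative stand-ins, not the calibrated values:

```python
import math

KAPPA = 0.6  # sets the field of view to roughly 200 degrees (see text)

def g_tilde(x, slope=5.0, shift=0.0):
    """Shifted logistic; slope and shift are assumed stand-ins for the
    exact function defined in the paper's appendix."""
    return 1.0 / (1.0 + math.exp(-slope * (x - shift)))

def viewing_scale(delta_phi, kappa=KAPPA):
    """Scale factor s_ij for a neighbour seen at angle delta_phi
    relative to the desired direction N_T."""
    return g_tilde(math.cos(kappa * delta_phi))
```

A neighbour straight ahead ($\Delta\phi = 0$) is weighted much more strongly than one behind ($\Delta\phi = \pi$), reproducing the anisotropy shown in Fig. \ref{fig:viewingdirection}.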
Using $h$ and $s_{i,j}$ (see Fig. \ref{fig:viewingdirection}), the gradients in Eq. \ref{eq:NP} are now defined by
\begin{eqnarray}\label{eq:gradient_deltaP}
\nabla P_{i,j}&=&h_\epsilon(\|x_i-x_j\|;R_j,p_j)s_{i,j}\frac{x_j-x_i}{\|x_j-x_i\|}\\\label{eq:gradient_deltaO}
\nabla P_{i,B}&=&h_\epsilon(\|x_i-x_B\|;R_B,p_B)\frac{x_B-x_i}{\|x_B-x_i\|}
\end{eqnarray}
where $p_j,R_j,p_B,R_B$ are positive constants that represent comfort zones between pedestrian $i$, pedestrian $j$ and obstacle $B$. To avoid (mostly artificially induced) situations where pedestrians stand exactly on top of each other \cite{koster-2013}, we replace $h$ by $h_\epsilon$:
\begin{equation}
h_\epsilon(\|x_i-x_j\|;R,p)=h(\|x_i-x_j\|;R,p)-h(\|x_i-x_j\|;\epsilon,p)
\end{equation} where $\epsilon>0$ is a small constant. For $\epsilon\to 0$, $h_\epsilon(\cdot;R,p)\to h(\cdot;R,p)$. For $\|x_i-x_j\|=0$, we also define $\nabla P_{i,j}=0$ and $\nabla P_{i,B}=0$.
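The regularization can be sketched on top of $h$ (self-contained Python, argument order $r;R,p$ as in the definition of $h$; the default $\epsilon$ is an illustrative value). Subtracting the narrow bump makes the repulsion vanish exactly at zero distance, consistent with setting $\nabla P_{i,j}=0$ there:

```python
import math

def h(r, R, p):
    """Smooth bump with compact support [-R, R] (as defined earlier)."""
    if abs(r / R) >= 1.0:
        return 0.0
    return p * math.exp(1.0 / ((r / R) ** 2 - 1.0))

def h_eps(r, R, p, eps=1e-2):  # eps value is illustrative
    """Regularized bump: subtracting a bump of narrow support eps removes
    the repulsion at (artificial) zero distance between pedestrians."""
    return h(r, R, p) - h(r, eps, p)
```

For distances larger than $\epsilon$ the correction term is identically zero, so $h_\epsilon$ agrees with $h$ everywhere except in a tiny neighbourhood of the origin.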
To model the second part of assumption 3, we use the result of \cite{moussaid-2009b}: pedestrians react to an event that changes their motivation only after a certain reaction time $\tau$. The relaxed speed adaptation is modeled by a multiplicative, time-dependent, scalar variable $w:\mathbb{R}^+_0\to\mathbb{R}$, which we call the relaxed speed. Its derivative with respect to time, $\dot{w}$, is similar to acceleration in one dimension.
Eq. \ref{eq:NT} and \ref{eq:NP} enable us to construct a relation between the desired direction $\vec{N}$ of a pedestrian and the underlying floor field as well as other pedestrians:
\begin{equation}\label{eq:naviation_vector}
\vec{N}=g(g(\vec{N}_T)+g(\vec{N}_P))
\end{equation}
The function $g:\mathbb{R}^2\to\mathbb{R}^2$ scales the length of a given vector to lie in the interval $[0,1]$. For the exact formula see appendix A. Note that with definition (\ref{eq:naviation_vector}), $\vec{N}$ need not have length one, but can also be shorter. This enables us to scale it with the desired speed of a pedestrian to get the velocity vector:
\begin{equation}
\dot{\vec{x}}=\vec{N} w
\end{equation}
With initial conditions $\vec{x}_0=\vec{x}(0)$ and $w_0=w(0)$ the Gradient Navigation Model is given by the equations of motion for every pedestrian $i$:
\begin{equation}\label{eq:GNMequations}
\begin{array}{rcl}
\dot{\vec{x}}_i(t)&=& w_i(t)\vec{N}_i(\vec{x}_i,t)\\
\dot{w}_i(t)&=&\frac{1}{\tau}\parenth{v_i(\rho(\vec{x}_i))\|\vec{N}_i(\vec{x}_i,t)\|-w_i(t)}\\
\end{array}
\end{equation}
The position $\vec{x}_i:\mathbb{R}\to\mathbb{R}^2$ and the one-dimensional relaxed speed $w_i:\mathbb{R}\to\mathbb{R}$ are functions of time $t$.
$v_i(\rho(\vec{x}_i))$ represents the individual's desired speed, which depends on the local crowd density $\rho(\vec{x}_i)$ (see assumption 3). Since the reason for the relation between velocity and density is still an open question \cite{seyfried-2005,jelic-2012}, we choose a very simple relation in this paper: $v_i(\rho)$ is constant in $\rho$ and normally distributed across pedestrians with mean $1.34$ and standard deviation $0.26$, i.e. $v_i(\rho)=v_i^\text{des}\sim N(1.34,0.26)$. The choice of this distribution is based on a meta-study of several experiments \cite{weidmann-1993}.
With these equations, the direction of pedestrian $i$ changes independently of physical constraints, similar to heuristics in \cite{moussaid-2011}, many CA models and the Optimal Steps Model \cite{seitz-2012}. The speed in the desired direction is determined by the norm of the navigation function $\vec{N}_i$ and the relaxed speed $w_i$.
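The relaxed-speed equation in (\ref{eq:GNMequations}) is a linear relaxation towards $v\|\vec{N}\|$ with time constant $\tau$, so it has the closed form $w(t) = v\|\vec{N}\| + (w_0 - v\|\vec{N}\|)e^{-t/\tau}$. A quick numerical cross-check with SciPy's general-purpose solver (assuming, for the sketch only, a constant $\|\vec{N}\| = 1$):

```python
import math
from scipy.integrate import solve_ivp

TAU = 0.5     # reaction time from experiment (see text)
V_DES = 1.34  # desired speed in m/s
N_NORM = 1.0  # |N| held constant here purely for illustration

def w_dot(t, w):
    """Right-hand side of the relaxed-speed equation."""
    return (V_DES * N_NORM - w) / TAU

sol = solve_ivp(w_dot, (0.0, 5.0), [0.0], rtol=1e-8, atol=1e-10)
w_end = sol.y[0, -1]

# closed form with w(0) = 0: w(t) = v|N| (1 - exp(-t/tau))
w_exact = V_DES * N_NORM * (1.0 - math.exp(-5.0 / TAU))
```

After ten relaxation times the numerical and analytical solutions agree to solver tolerance, and $w$ has settled at the desired speed.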
\section{\label{subsec:staticnavigationfield}The navigation field}
Similar to \cite{hughes-2001} and later \cite{hoogendoorn-2003,hartmann-2010}, we use the solution $\sigma:\mathbb{R}^2\to\mathbb{R}$ to the eikonal equation (\ref{eq:eikonal_equation}) to steer pedestrians to their targets. $\sigma$ represents first arrival times (or walking costs) in a given domain $\Omega\subset\mathbb{R}^2$:
\begin{equation}\label{eq:eikonal_equation}
\begin{array}{rcl}
G(x)\|\nabla \sigma(x)\| &=& 1,\ x\in\Omega\\
\sigma(x)&=&0,\ x\in\Gamma\subset\partial\Omega
\end{array}
\end{equation}
$\Gamma\subset\partial\Omega$ is the union of the boundaries of all possible target regions for one pedestrian. Static properties of a geometry (for example rough terrain or an obstacle) can be modelled by modifying the speed function $G:\mathbb{R}^2\to (0,+\infty)$. \cite{hartmann-2014b} include the pedestrian density in $G$. This enables pedestrians to locate congestions and then take a different exit route. \cite{treuille-2006} used the eikonal equation to steer very large virtual crowds.
If $G(x)=1\ \forall x$, $\sigma$ represents the length of the shortest path to the closest target region. This does not take into account that pedestrians cannot get arbitrarily close to obstacles. Therefore, we slow down the wave close to obstacles by reducing $G$ in the immediate vicinity of walls. The influence of walls on $\sigma$ is chosen similar to $\|\nabla P_{i,B}\|$, so that pedestrians incorporate the distance to walls into their route.
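First arrival times of this kind are usually computed with a front-propagation scheme such as fast marching. As a rough, self-contained sketch of the idea only, a Dijkstra sweep on a grid with 4-connectivity approximates $\sigma$ (it converges to the Manhattan rather than the Euclidean metric, so it is an illustration, not the scheme used for the results):

```python
import heapq
import numpy as np

def arrival_times(speed, sources, hstep=1.0):
    """Dijkstra approximation of first arrival times sigma on a grid.
    speed: 2D array of G(x) > 0; sources: list of (i, j) target cells."""
    sigma = np.full(speed.shape, np.inf)
    heap = [(0.0, s) for s in sources]
    for _, s in heap:
        sigma[s] = 0.0
    heapq.heapify(heap)
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > sigma[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                cand = t + hstep / speed[ni, nj]
                if cand < sigma[ni, nj]:
                    sigma[ni, nj] = cand
                    heapq.heappush(heap, (cand, (ni, nj)))
    return sigma
```

With $G \equiv 1$ the arrival time along a grid axis is exact; lowering $G$ in cells next to walls delays the front there, which is precisely how wall distance enters the routes.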
Being a solution to the eikonal equation (\ref{eq:eikonal_equation}), the floor field $\sigma$ is Lipschitz-continuous \cite{evans-1997}. In the given ODE setting, however, it is desirable to smooth $\sigma$ to ensure differentiability and thus existence of the gradient at all points in the geometry. We employ mollification theory \cite{evans-1997} with a mollifier $\eta$ (similar to $h$ in Eq. \ref{eq:h}) on compact support $B(x)$ to get a mollified $\nabla\sigma$, which we call $\nabla\tilde{\sigma}$:
\begin{equation}\label{eq:smooth_nablasigma}
\nabla\tilde{\sigma}(x)=\nabla(\eta * \sigma)(x)=\int_{B(x)} \nabla\eta(y)\sigma(x-y)dy\in C^\infty(\mathbb{R}^2,\mathbb{R}^2)
\end{equation}
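The convolution in Eq. \ref{eq:smooth_nablasigma} is evaluated numerically by Gauss--Legendre quadrature (see the simulations section). A one-dimensional sketch shows the effect on the Lipschitz but non-smooth function $|x|$; the 1D setting, mollifier width and node count below are simplifications for illustration:

```python
import numpy as np

def mollify(f, x, delta=0.1, npts=21):
    """Approximate (eta * f)(x) with a compactly supported bump mollifier
    eta on (-delta, delta), evaluated by Gauss-Legendre quadrature."""
    y, w = np.polynomial.legendre.leggauss(npts)
    y, w = delta * y, delta * w        # map nodes/weights to (-delta, delta)
    eta = np.exp(1.0 / ((y / delta) ** 2 - 1.0))
    eta /= np.sum(w * eta)             # normalize so the integral of eta is 1
    return np.sum(w * eta * f(x - y))

smooth_abs = lambda x: mollify(np.abs, x)
# away from the kink the mollification reproduces |x| exactly;
# at the kink it returns a small positive value, rounding the corner off
```

Applied to $\nabla\sigma$ in 2D, the same construction yields the $C^\infty$ field $\nabla\tilde{\sigma}$ required by the ODE solver.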
\section{\label{sec-ANA}Mathematical Analysis and Calibration}
Existence and uniqueness of a solution to Eq. \ref{eq:GNMequations} follow from the theorem of Picard and Lindel\"of when using the method of vanishing viscosity to solve the eikonal equation \cite{evans-1997} and mollification theory to smooth $\nabla\sigma$ (see Eq. \ref{eq:smooth_nablasigma}).
The system of equations in Eq. \ref{eq:GNMequations} contains several parameters.
\cite{moussaid-2009b,johansson-2007} conducted experiments to find the parameters $\tau$ (relaxation constant, $\approx 0.5$) and $\kappa$ (viewing angle $\approx 200^\circ$, which corresponds to a value of $\kappa\approx 0.6$ here). The following free parameters remain:
\begin{itemize}
\item $p_p$ and $R_p$ define maximum and support of the norm of the pedestrian gradient $\|\nabla P_{i,j}\|$
\item $p_B$ and $R_B$ define maximum and support of the norm of the obstacle gradient $\|\nabla P_{i,B}\|$
\end{itemize}
We use an additional assumption to find relations between these four free parameters:
\begin{description}
\item[Assumption 4] A pedestrian who is enclosed by four stationary other persons on the one side and by a wall on the other side, and who wants to move parallel to the wall, does not move in any direction (see Fig. \ref{fig:calibrate_model}).
\end{description}
This scenario is very common in pedestrian simulations and involves many elements that are explicitly modeled: other pedestrians, walls and a target direction. The setup also covers other scenarios: when the wall is replaced by two other pedestrians, the one in the center also does not move if assumption 4 holds. This is because the vertical movement is canceled out by the symmetry of the scenario.
\begin{figure}
\begingroup%
\makeatletter%
\providecommand\rotatebox[2]{#2}%
\ifx\svgwidth\undefined%
\setlength{\unitlength}{160bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{\svgwidth}%
\fi%
\global\let\svgwidth\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,1)%
\put(0,0){\includegraphics[width=\unitlength]{fig3_calibrate_model.pdf}}%
\put(0.3871858,0.09985346){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{-$\nabla\delta$}}}%
\put(0.82093761,0.79565877){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{-$\nabla\sigma$}}}%
\end{picture}%
\endgroup
\caption{\label{fig:calibrate_model}(Color online) Setup used to reduce the number of parameters. The pedestrian in the center (gray) wants to reach a target on the right (black, thick arrow, $-\nabla\sigma$) but is enclosed by four other pedestrians who are not moving and a wall (thick line at the bottom). Together, the pedestrians and the wall act on the gray pedestrian via $-\nabla\delta$ (sum of red, slim arrows). The red cross on the wall marks the position on the wall that is closest to the gray pedestrian.}
\end{figure}
Using assumption 4, we can simplify the system of equations (\ref{eq:GNMequations}) to find dependencies between parameters. First, the direction vectors $\vec{N}_{i,T}$ and $\vec{N}_{i,P}$ are computed based on the given scenario. The gray pedestrian wants to walk parallel to the wall in positive x-direction, that means
\begin{eqnarray}\label{eq:calibrate_sigma}
\vec{N}_{i,T}(x_i)=-\nabla\sigma(x_i)&=&\left[\begin{array}{c}
1\\0
\end{array}\right]
\end{eqnarray}
The remaining function $\vec{N}_{i,P}$ is composed of the repulsive effect of the four enclosing pedestrians and the wall. We simplify equations \ref{eq:gradient_deltaP} and \ref{eq:gradient_deltaO} by taking the limit $\epsilon\to 0$, which is reasonable since the pedestrians do not overlap in the scenario.
\begin{align}\label{eq:calibrate_delta}
\vec{N}_{i,P}=\underbrace{\sum_{j=1}^4 h_p(\|x_\text{gray}-x_j\|)s_{\text{gray},j}\frac{x_\text{gray}-x_j}{\|x_\text{gray}-x_j\|}}_{\text{influence of the `white' pedestrians}}\nonumber\\
+\underbrace{h_B(\|x_\text{gray}-x_B\|)\left[\begin{array}{c}
0\\1
\end{array}\right]}_{\text{influence of the wall}}
\end{align}
Using Eq. (\ref{eq:calibrate_sigma}), (\ref{eq:calibrate_delta}) and assumption 4, the system of differential equations (\ref{eq:GNMequations}) for the pedestrian in the center yields
\begin{eqnarray}
\label{eq:calibration_xdot}\dot{\vec{x}}_i=w \vec{N}_i=w g(g(\vec{N}_{i,T})+g(\vec{N}_{i,P}))&=&0\\
\dot{w}=\frac{1}{\tau}(v(\rho)\|\vec{N}\|-w)&=&0
\end{eqnarray}
The second equality yields
\begin{equation}
w=v(\rho)\|\vec{N}\|\implies w\geq 0
\end{equation}
Since assumption 4 does not imply $w=0$, Eq. (\ref{eq:calibration_xdot}) holds true generally if
\begin{equation}\label{eq:calibrate_sigmadelta}
g(\vec{N}_{i,T})=-g(\vec{N}_{i,P})
\end{equation}
Since all $\phi_i$, $x_i$, $x_\text{gray}$ and $x_B$ are known in the given scenario, the only free variables in Eq. (\ref{eq:calibrate_sigmadelta}) are the free parameters of the model: the height and width of $h_p$ (named $p_p$ and $R_p$), as well as $h_B$ (named $p_B$ and $R_B$). With only two equations for four parameters, system (\ref{eq:calibrate_sigmadelta}) is underdetermined and thus we choose $R_B=0.25$ (according to \citep{weidmann-1993}) and $R_p=\sqrt{3}r(\rho_\text{max})$, where $r(\rho_\text{max})$ is the distance of pedestrians in a dense lattice with pedestrian density $\rho_\text{max}$. This choice for $R_p$ ensures that pedestrians adjacent to the enclosing ones have no influence on the one in the center. Note that if this condition is weakened in assumption 4, the model behaves differently on a macroscopic scale (see Fig. \ref{fig:FD2D_speed_twoneighbors}).
With two of the four parameters fixed, we use Eq. (\ref{eq:calibrate_sigmadelta}) to fix the remaining two. Table \ref{tab:calibrate_parameters} shows numerical values of all parameters, assuming $\rho_{max}=7 P/m^2$, which leads to $r(\rho_{max})\approx 0.41$.
\begin{table}
\begin{tabular}{c|c|l}
Parameter&Value&Description\\
\hline
$\kappa$&0.6&Viewing angle\\
$\tau$&0.5&Relaxation constant\\
\hline
$p_p$&3.59&Height of $h_p$\\
$p_B$&9.96&Height of $h_B$\\
\hline
$R_p$&0.70&Width of $h_p$\\
$R_B$&0.25&Width of $h_B$\\
\end{tabular}
\caption{\label{tab:calibrate_parameters}Numerical values of all parameters of the Gradient Navigation Model using assumption 4 and $\rho_{max}=7P/m^2$. The first two were determined by experiment \cite{johansson-2007,moussaid-2009b}.}
\end{table}
\section{\label{sec-VALIDATION}Simulations}
To solve Eq. (\ref{eq:GNMequations}) numerically, we use the step-size controlling Dormand-Prince-45 integration scheme \cite{dormand-1980b} with $\text{tol}_\text{abs}=10^{-5}$ and $\text{tol}_\text{rel}=10^{-4}$. Employing this scheme is possible because the derivatives are designed to depend smoothly on $x$, $w$ and $t$. Unless otherwise stated, all simulations use the parameters given in Tab. \ref{tab:calibrate_parameters}. The desired speeds $v_i^\text{des}$ are normally distributed with mean $1.34ms^{-1}$ and standard deviation $0.26ms^{-1}$ as observed in experiments \cite{weidmann-1993}. $v_i^\text{des}$ is cut off at $0.3ms^{-1}$ and $3.0ms^{-1}$ to avoid negative or unreasonably high values.
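The cut-off normal distribution for the desired speeds can be sampled directly with SciPy's truncated normal (note that \texttt{truncnorm} takes the bounds in standardized units):

```python
from scipy.stats import truncnorm

MU, SD = 1.34, 0.26  # mean and standard deviation of desired speed (m/s)
LO, HI = 0.3, 3.0    # cut-offs used in the simulations

# truncnorm expects the bounds relative to loc/scale
a, b = (LO - MU) / SD, (HI - MU) / SD
v_des = truncnorm(a, b, loc=MU, scale=SD).rvs(size=1000, random_state=0)
```

Since the lower cut-off lies four standard deviations below the mean, the truncation barely changes the mean and spread of the sampled speeds.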
We used the fast marching method \cite{sethian-1999} to solve the eikonal equation (Eq. \ref{eq:eikonal_equation}). The mollification of $\nabla\sigma$ (Eq. \ref{eq:smooth_nablasigma}) is computed using Gauss-Legendre quadrature with $21\times21$ grid points.
All simulations were conducted on a machine with an Intel Xeon X5672 processor (3.20 GHz) and with the Java-based simulator VADERE. Simulations of scenarios with over 1000 pedestrians were possible in real time under these conditions.
We validate the model quantitatively by comparing the flow rates of 180 simulated pedestrians in bottleneck scenarios (see Fig. \ref{fig:liddle_cone}) of different widths with experimentally determined data from \cite{kretz-2006,seyfried-2009,liddle-2011}. The length of the bottleneck is $4m$ in all runs.
Fig. \ref{fig:flow_liddle} shows that, regarding flow rates, the simulation is in good quantitative agreement with data from \cite{kretz-2006,seyfried-2009,liddle-2011} for all bottleneck widths.
\begin{figure}
\includegraphics[width=0.35\textwidth]{fig4_flow_liddle.pdf}
\caption{\label{fig:flow_liddle}(Color online) Flow rate of the GNM compared to experiments of Kretz \cite{kretz-2006}, Seyfried \cite{seyfried-2009} and Liddle \cite{liddle-2011}. We use the parameters from Tab. \ref{tab:calibrate_parameters} and the normal distribution $N(1.34ms^{-1},0.26ms^{-1})$ to find desired velocities as proposed by \cite{weidmann-1993}.}
\end{figure}
Also, the formation of a crowd in front of a bottleneck matches observations well (see Fig. \ref{fig:liddle_cone}): in front of the bottleneck, the pedestrians form a cone as observed by \cite{kretz-2006,seyfried-2009b,schadschneider-2011b}. Note that this is different from the behavior described in \cite{helbing-2000}, which tries to capture the dynamics in stress situations. Our simulations suggest that the desired velocity is the most important parameter for this experiment: when we change its distribution to $N(1.57ms^{-1},0.15ms^{-1})$ as found by \cite{gerhardt-2011}, the flow is $\approx 1s^{-1}$ higher for small widths and $\approx 1s^{-1}$ lower for larger widths.
\begin{figure}
\includegraphics[width=0.4\textwidth]{fig5_liddle_cone.pdf}
\caption{\label{fig:liddle_cone}The pedestrians in the GNM simulation form a cone in front of the bottleneck as observed by \cite{kretz-2006,seyfried-2009b,schadschneider-2011b}.}
\end{figure}
The GNM can be calibrated to match the relation of speed and density in a given fundamental diagram. Fig. \ref{fig:FD2D_speed_oneneighbor} shows that for the calibration with only one layer of neighbors, pedestrians do not slow down with increasing densities as quickly as suggested in \cite{weidmann-1993}. When calibrating with one additional layer of pedestrians in the scenario shown in Fig. \ref{fig:calibrate_model}, the curves match much better (see Fig. \ref{fig:FD2D_speed_twoneighbors}). We use the method introduced by \cite{liddle-2011} to measure local density.
\begin{figure}
\includegraphics[width=0.4\textwidth]{fig6_FD2D_speed_oneneighbor.pdf}
\caption{\label{fig:FD2D_speed_oneneighbor}(Color online) Speed-density relation in unidirectional flow compared to experimental data from metastudy (Weidmann, \cite{weidmann-1993}). The corridor was $40m$ long and $4m$ wide with periodic boundary conditions. Each cross (labeled `simulation') represents a local measurement at the position of a pedestrian. We use the method introduced by \cite{liddle-2011} to measure local density. The parameter set in these simulations was fixed with the procedure shown in Fig. \ref{fig:calibrate_model} and thus incorporates one layer of four pedestrians.}
\end{figure}
\begin{figure}
\includegraphics[width=0.4\textwidth]{fig7_FD2D_speed_twoneighbors.pdf}
\caption{\label{fig:FD2D_speed_twoneighbors}(Color online) Speed-density relation in unidirectional flow compared to experimental data from metastudy (Weidmann, \cite{weidmann-1993}). The corridor was $40m$ long and $4m$ wide with periodic boundary conditions. Each cross (labeled `simulation') represents a local measurement at the position of a pedestrian. We use the method introduced by \cite{liddle-2011} to measure local density. The parameter set in these simulations was adjusted with a similar procedure as in Fig. \ref{fig:calibrate_model} to incorporate neighbors of neighbors in the computation of $\nabla\delta$: $R_p=1.0$, $p_p=1.79$, $R_B=0.25$ and $p_B=11.3$.}
\end{figure}
\cite{gaididei-2013,marschler-2013} compute the deviation of distances between drivers to analyze stop-and-go waves in car traffic. No deviation implies no stop-and-go waves since all distances are equal. A large deviation hints at the existence of a wave since there must be regions with large and regions with small distances between drivers. For pedestrian dynamics, \cite{helbing-2007, portz-2011, jelic-2012} found stop-and-go waves experimentally. Similar to the wave analysis in traffic, we use the deviation of individual speeds to measure stop-and-go waves. Fig. \ref{fig:stopandgo_mu_sigma} and \ref{fig:stopandgo_scenario} show that the GNM also produces stop-and-go waves when a certain global density is reached.
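The indicator used here is just the standard deviation of individual speeds, normalized by the mean; a minimal sketch of this order parameter:

```python
import numpy as np

def speed_deviation(speeds):
    """Normalized standard deviation of individual speeds: near zero for
    homogeneous flow, clearly positive when stop-and-go waves are present."""
    speeds = np.asarray(speeds, dtype=float)
    return np.std(speeds) / np.mean(speeds)

# homogeneous flow -> 0; half the pedestrians standing still -> large value
```

Evaluating this quantity over time for increasing global densities reveals the onset of the waves.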
\begin{figure}
\includegraphics[width=0.4\textwidth]{fig8_stopandgo_mu_sigma.pdf}
\caption{\label{fig:stopandgo_mu_sigma}Normalized standard deviation of the individual speeds as an indicator for stop-and-go waves.}
\end{figure}
forced convection of fluids leads to many important
fluid dynamics problems across science and engineering (\cite{bird2007transport,bejan2013convection,lienhard2013heat}). For example, several
studies have proposed methods for increasing heat transfer efficiency
to alleviate constraints on computer processor speeds due to internal heating (\cite{nakayama1986thermal,zerby2002final,mcglen2004integrated,ahlers2011aircraft}).
One way to improve heat transfer is to change the spatial and temporal configurations of heat sources and sinks
(\cite{campbell1997optimal,da2004optimal,gopinath2005integrated}). Another, which we
pursue in this work, is to optimize a convecting fluid flow, subject to suitable constraints,
for a given configuration of heat sources and sinks. Similar flow problems have been approached in a variety
of ways depending on what is optimized and the assumptions about the underlying fluid flow
(\cite{karniadakis1988minimum,mohammadi2001applied,zimparov2006thermodynamic,chen2013entransy}). A related problem is
the optimal flow for the mixing of a passive scalar in a fluid (\cite{chien1986laminar,caulfield2001maximal,tang2009prediction,foures2014optimal,camassa2016optimal}).
The most standard geometries for convective heat transfer can be classified
based on whether the convecting flows are
internal or external (\cite{rohsenow1998handbook,bird2007transport}). External flows
are used to transfer heat from the external surfaces of a
heated object (e.g. flat plate (\cite{lienhard2013heat}),
cylinder (\cite{karniadakis1988numerical}), or sphere (\cite{kotouvc2008loss})).
Internal flows are typically used to transfer heat
from the internal surfaces of a heated pipe or duct (\cite{rohsenow1998handbook}).
The simplest internal flows
are approximately unidirectional, with the flow profile depending strongly
on the duct geometry and
whether the flow is laminar or turbulent, and developing or fully-developed
(\cite{lienhard2013heat}).
One set of recent work has studied heat transfer enhancement by modifying channel flows from a quasi-unidirectional flow.
Obstacles such as rigid bluff bodies or oscillating plates or flags (active or
passive) are inserted into the flow, and vorticity emanates
from their separating boundary layers (\cite{fiebig1991heat,sharma2004heat,accikalin2007characterization,gerty2008fluidic,hidalgo2010heat,shoele2014computational,jha2015small}). The vortices
enhance the mixing of the advected temperature field and disrupt the
thermal and viscous boundary layers close to the heated surfaces (\cite{biswas1996numerical}). The
question is whether it is better to create
vortical structures in a unidirectional background flow or simply
increase the speed of the unidirectional flow, if both options have
the same energetic cost.
In this work we consider the optimal flows
for convection cooling of heated surfaces with a range of geometries. We focus on steady 2D flows, a first step towards applying
the methods to unsteady 3D flows. We adopt the framework of \cite{hassanzadeh2014wall}, which was motivated by the
problem of optimal heat transport in
Rayleigh-B{\'e}nard convection. Optimal flow solutions for Rayleigh-B{\'e}nard convection were computed by \cite{waleffe2015heat} and \cite{sondak2015optimal} and in a truncated model
(the Lorenz equations) by \cite{souza2015maximal,souza2015transport}.
We adapt the framework of \cite{hassanzadeh2014wall} to a more general class of geometries including those relevant to
convection cooling in technological applications.
By using a convenient choice of coordinates, we show that the optimal
flows in a wide range of geometries are simply those found by
\cite{hassanzadeh2014wall} mapped to the new coordinates, when a fixed
kinetic energy budget is imposed. With a fixed enstrophy budget, a geometry dependent
term enters the equations, but the new coordinates facilitate numerical solutions and
the qualitative understanding of the flow features.
\section{General framework and application to exterior cooling}\label{sec:model}
We consider 2D regions for which the boundaries are solid walls with
fixed temperature ($T = 0$ or 1) or else insulated ($\partial_n T = 0$, where
$n$ is the coordinate normal to the boundary, increasing into the fluid domain).
We attempt to find the 2D steady incompressible
flow of a given total kinetic energy which maximizes the heat transferred out of the hot boundaries (those with $T = 1$), which is also the heat transferred into
the cold boundaries (those with $T = 0$).
\cite{hassanzadeh2014wall} solved this problem when the surfaces are horizontal parallel lines extending to infinity. We extend their results to different geometries using
a change of coordinates.
\begin{figure}
\centerline{\includegraphics[width=12cm]{AnnulusFig.pdf}}
\caption{A) Annular domain between the inner (hot) boundary at radius 1 and the
outer (cold) boundary
at radius $R$. The curvilinear grid plots equicontours of the conformal coordinates
$\alpha = 1 - \log{r}/\log{R}$ and $\beta = -\theta/\log{R}$. B) The flow domain
in ($\alpha, \beta$) coordinates.
}
\label{fig:Annulus}
\end{figure}
We illustrate the general approach using a simple case in which the
boundaries are concentric circles (see figure \ref{fig:Annulus}).
The inner circle (radius 1) is the hot surface ($T$ = 1), and
the outer circle (radius $R > 1$) is the cold surface ($T$ = 0),
which can be taken to represent a cold reservoir to which heat is transferred. We solve the problem for all $R > 1$, but the typical application would be
the cooling of the exterior surface of an isolated heated body (\cite{karniadakis1988numerical}), in which case the reservoir is far away ($R \gg 1$).
The flow $\mathbf{u}(x,y) = (u(x,y), v(x,y))$ satisfies incompressibility
($\nabla \cdot \mathbf{u} = 0$) so we write it in terms of the stream function $\psi(x,y)$ as
$\mathbf{u} = -\nabla^\perp \psi = (\partial_y \psi, -\partial_x \psi)$. For a given flow field, the temperature field is obtained by solving
\begin{align}
\mathbf{u} \cdot \nabla T - \kappa \Delta T &= 0. \label{AdvDiff}
\end{align}
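The stream-function representation guarantees $\nabla \cdot \mathbf{u} = 0$ by construction, since the mixed partial derivatives cancel. A quick symbolic check (sympy, with an arbitrary smooth $\psi$ chosen only for illustration):

```python
import sympy as sp

x, y = sp.symbols("x y")
psi = sp.sin(x) * sp.cos(2 * y)  # any smooth stream function works here

# u = -grad^perp psi = (d psi / d y, -d psi / d x)
u = sp.diff(psi, y)
v = -sp.diff(psi, x)

# divergence: d u / d x + d v / d y = psi_xy - psi_yx = 0 identically
div = sp.simplify(sp.diff(u, x) + sp.diff(v, y))
```

This is why the optimization can be carried out over scalar fields $\psi$ rather than over constrained vector fields $\mathbf{u}$.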
\noindent The flow field is constrained to have fixed kinetic energy $KE$:
\begin{align}
\frac{1}{2}\rho W\iint |\mathbf{u}|^2 dA = KE, \label{KE}
\end{align}
\noindent where $\rho$ is the fluid density and
$W$ is the domain width in the out-of-plane ($z$) direction, along which all quantities
are assumed to be invariant. The flow field is not
explicitly constrained to satisfy the Navier-Stokes equations. However,
any (sufficiently smooth) incompressible flow satisfies the incompressible Navier-Stokes
equations with a suitable volume forcing term $\mathbf{f}$:
\begin{align}
\frac{D\mathbf{u}}{Dt} = -\frac{1}{\rho}\nabla p + \nu \Delta \mathbf{u} + \frac{1}{\rho}\mathbf{f}
\quad ; \quad \nabla \cdot \mathbf{u} = 0. \label{NS}
\end{align}
\noindent Given $\mathbf{u}$, (\ref{NS}) determines $\mathbf{f}$ and $p$.
The flow can be considered to be driven by $\mathbf{f}$.
We maximize the (steady) rate of heat flux out of the hot boundary:
\begin{align}
Q = -\int_{r = 1} k\, \partial_n T ds. \label{Cond}
\end{align}
\noindent Here $k$ is the thermal conductivity of the fluid, $n$ is the coordinate normal to the boundary, increasing into the fluid domain, and $s$
is the arc length coordinate along the boundary,
here just the negative of the angular coordinate along the boundary, increasing from 0 to $2\pi$ moving along the circle in the clockwise direction.
Nondimensionalizing lengths by a typical length scale $L$ (in the present case, the radius of the inner cylinder),
and time by a diffusion time scale $L^2/\kappa$, and writing $\mathbf{u}$ in terms of $\psi$, (\ref{AdvDiff}) becomes
\begin{align}
\partial_y \psi \partial_x T - \partial_x \psi \partial_y T - \Delta T &= 0, \label{AdvDiffND}
\end{align}
\noindent and (\ref{KE}) becomes
\begin{align}
\iint |\nabla \psi|^2 dA = Pe^2, \label{Pe}
\end{align}
\noindent where the Peclet number $Pe = \sqrt{2 KE/(\rho W \kappa^2)}$ measures the strength of advection relative to diffusion of heat, and is different from
the definition of \cite{hassanzadeh2014wall} (which uses the average flow speed) because we will deal with unbounded
fluid domains. We have already
nondimensionalized temperature by the temperature of the hot boundary
(also the temperature difference between the boundaries). Having chosen scales for
length, time and temperature, we need to choose a typical mass scale to nondimensionalize (\ref{Cond}). Since mass enters the thermal conductivity,
for simplicity we instead choose a thermal conductivity scale to be that
of the fluid, so that in dimensionless form, (\ref{Cond}) becomes
\begin{align}
Q = -\int_{r = 1} \partial_n T ds. \label{Conda}
\end{align}
\noindent We maximize $Q$ over all steady 2D incompressible flow fields (given by $\psi$) of a given kinetic energy
by taking the variation of the Lagrangian
\begin{align}
\mathcal{L} = -\int_{r = 1} \partial_n T ds + \iint m(x,y) \left(-\nabla^\perp \psi \cdot \nabla T - \Delta T\right) dA +
\lambda \left(\iint |\nabla \psi|^2 dA - Pe^2 \right). \label{L}
\end{align}
\noindent Here $m$ and $\lambda$ are Lagrange multipliers enforcing (\ref{AdvDiffND}) and (\ref{Pe}) respectively. The area integrals
are over the fluid domain, the annulus in figure \ref{fig:Annulus}A. The optimal $\psi$ is found
by setting to zero the variations of $\mathcal{L}$ with respect to $T$, $\psi$, $m$, and $\lambda$. Taking the variations and
integrating by parts, we obtain the following equations and boundary conditions:
\begin{align}
0 = \frac{\delta\mathcal{L}}{\delta m} &= -\nabla^\perp \psi \cdot \nabla T - \Delta T\quad , \quad T\; \mbox{given on boundaries} \label{T} \\
0 = \frac{\delta\mathcal{L}}{\delta T} &= \nabla^\perp \psi \cdot \nabla m - \Delta m\quad , \quad \mbox{$m = T$ on boundaries} \label{m} \\
0 = \frac{\delta\mathcal{L}}{\delta \psi} &= -\nabla^\perp T \cdot \nabla m - 2\lambda\Delta \psi\quad , \quad \psi = \mbox{const. on boundaries} \label{psi} \\
0 = \frac{\delta\mathcal{L}}{\delta \lambda} &= \iint |\nabla \psi|^2 dA - Pe^2. \label{mu}
\end{align}
\noindent The boundary conditions for (\ref{psi}) are due to the fact that the boundaries are solid walls with no flow penetration.
There is slip along the walls, and the flow may be taken as an approximation of a flow which would occur with no-slip
boundary conditions. No-slip boundary conditions arise naturally when we consider the
problem with fixed enstrophy, later in this work. The constant
for $\psi$ may be taken to be zero on one boundary (the inner cylinder), to remove
arbitrariness in $\psi$.
On other boundaries (here, the outer cylinder), the constant values of $\psi$ are unknowns set by the equations
\begin{align}
\int \left( m \frac{\partial T}{\partial s} + 2 \lambda \frac{\partial \psi}{\partial n} \right) ds = 0
\end{align}
\noindent on each such boundary. Our approach is essentially the same as that of
\cite{hassanzadeh2014wall} so far. To solve the problem in this annular
geometry, and in a general class of geometries, it is convenient to change
coordinates. Let $T_0$ denote the temperature with the given boundary
conditions and no flow, determined purely by conduction: $\Delta T_0 = 0$.
$T_0$ is a harmonic function, here given by $1 - \log{r}/\log{R}$.
It has a harmonic
conjugate function $\beta = -\theta/\log{R}$ which is related to
$T_0$ by the Cauchy-Riemann equations: $\partial_x T_0 = \partial_y \beta$,
$\partial_y T_0 = -\partial_x \beta$. Because the
flow domain is doubly-connected, $\beta$ is multi-valued; for simply
connected domains, a single-valued harmonic conjugate always exists
(\cite{brown1996complex}). $\beta$ is unique up to an additive constant.
$(T_0, \beta)$ are conformal coordinates,
and we will show that the equations (\ref{T})--(\ref{mu}) are essentially unchanged in these
coordinates. However, the flow domain transforms into a rectangle, and we will
show that this maps the problem to the one solved by \cite{hassanzadeh2014wall}.
For notational convenience, we set $\alpha(x,y) \equiv T_0(x,y)$ and use $\alpha$
as the coordinate name. The metric which gives the density of $(\alpha, \beta)$ equicontours in the $(x,y)$-plane is
\begin{align}
h = \|\partial\mathbf{X}/\partial\alpha\| = \|\partial\mathbf{X}/\partial\beta\|, \quad\mbox{with}
\quad\mathbf{X} = (x(\alpha, \beta), y(\alpha, \beta)). \label{h}
\end{align}
\noindent Writing $\nabla$, $\nabla^\perp$, and $\Delta$ in
$(\alpha, \beta)$ coordinates using standard formulas for differential
operators in orthogonal
curvilinear coordinates (\cite{acheson1990elementary}) we have
\begin{align}
-\nabla_{x,y}^\perp \psi \cdot \nabla_{x,y} T - \Delta_{x,y} T =
\frac{1}{h^2}\left(-\nabla_{\alpha,\beta}^\perp \psi \cdot \nabla_{\alpha,\beta} T - \Delta_{\alpha,\beta} T\right).
\end{align}
\noindent Therefore equations (\ref{T})--(\ref{psi}) are unchanged in
$(\alpha, \beta)$ coordinates and so is (\ref{mu}) when the integral is
written in $(\alpha, \beta)$ coordinates:
\begin{align}
0 = \iint |\nabla_{\alpha,\beta} \psi|^2 dA_{\alpha,\beta} - Pe^2. \label{mua}
\end{align}
\noindent The boundary conditions are also unchanged, except that they are
given on the sides of the rectangle (figure \ref{fig:Annulus}B). Thus
the problem is essentially the same as that solved by
\cite{hassanzadeh2014wall} with $(\alpha, \beta)$ here corresponding
to $(1-z, x)$ there. Instead of maximizing (\ref{Conda}) they maximized
the convective heat flux integrated over the flow domain. In Appendix
\ref{QAlt} we show that the two are equal in curvilinear coordinates, so
that in place of (\ref{Conda}) we can use
\begin{align}
Q = \Delta \beta - \int_{\beta_{min}}^{\beta_{max}} \int_0^1 \partial_\beta \psi\, T \,d\alpha d\beta. \label{Q2}
\end{align}
\noindent Here $\Delta\beta \equiv \beta_{max}-\beta_{min}$ is the extent of the domain in $\beta$.
Equation (\ref{Q2}) allows one to compute
the leading contributions to $Q$ in the small-$Pe$ limit
from the solution to a single eigenvalue problem.
In the limit of small $Pe$, the solutions can be written as asymptotic series
in powers of $Pe$. When $Pe = 0$, the solutions to (\ref{T})--(\ref{mu}) are
\begin{align}
m_0 = T_0 = \alpha = 1 - \frac{\log r}{\log R} \quad , \quad \psi_0 = 0 \quad , \quad \lambda \; \mbox{undetermined}.
\end{align}
\noindent Asymptotic solutions to (\ref{T})--(\ref{mu}) for $0 < Pe \ll 1$ are therefore posed as
\begin{align}
T &= T_0 + Pe \, T_1 + O(Pe^2) \label{Texpn} \\
m &= m_0 + Pe\, m_1 + O(Pe^2) \label{mexpn} \\
\psi &= Pe \,\psi_1 + O(Pe^2) \label{Psiexpn}
\end{align}
\noindent Expanding (\ref{T})--(\ref{mu}) to $O(Pe)$ yields linearized equations for $T_1, m_1,\psi_1$:
\begin{align}
0 &= -\nabla^\perp \psi_1 \cdot \nabla T_0 - \Delta T_1, \quad T_1 = 0\; \mbox{on boundaries} \label{T1} \\
0 &= \nabla^\perp \psi_1 \cdot \nabla m_0 - \Delta m_1, \quad m_1 = 0\; \mbox{on boundaries} \label{m1} \\
0 &= -\nabla^\perp T_0 \cdot \nabla m_1 -\nabla^\perp T_1 \cdot \nabla m_0 - 2\lambda\Delta \psi_1\quad , \quad \psi_1 = \mbox{const. on boundaries} \label{psi1} \\
0 &= \iint |\nabla \psi_1|^2 dA - 1. \label{mu1}
\end{align}
\noindent Examining (\ref{T1}) and (\ref{m1}) and using $m_0 = T_0$ we find
that $m_1 = -T_1$, so we can eliminate $m_0$ and $m_1$ and reduce (\ref{T1})--(\ref{mu1}) to
\begin{align}
0 &= -\nabla^\perp \psi_1 \cdot \nabla T_0 - \Delta T_1, \quad T_1 = 0\; \mbox{on boundaries} \label{T1a} \\
0 &= -\nabla^\perp T_1 \cdot \nabla T_0 - \lambda\Delta \psi_1\quad , \quad \psi_1 = \mbox{const. on boundaries} \label{psi1a} \\
0 &= \iint |\nabla \psi_1|^2 dA - 1. \label{mu1a}
\end{align}
\noindent Using $\nabla T_0 = \hat{\mathbf{e}}_\alpha/h$, the system simplifies to
\begin{align}
0 &= \partial_\beta \psi_1 - \Delta_{\alpha,\beta} T_1, \quad T_1 = 0\; \mbox{on boundaries} \label{T1b} \\
0 &= \partial_\beta T_1 - \lambda\Delta_{\alpha,\beta} \psi_1\quad , \quad \psi_1 = \mbox{const. on boundaries} \label{psi1b} \\
0 &= \iint |\nabla_{\alpha,\beta} \psi_1|^2 d\alpha d\beta - 1. \label{mu1b}
\end{align}
\noindent Because the boundary
conditions are periodic in $\beta$, the eigenfunctions may be found by taking a Fourier transform of
(\ref{T1b})--(\ref{psi1b}) in $\beta$ (or $x$ in \cite{hassanzadeh2014wall}). We obtain
\begin{align}
\psi_1 &= A_{kn} \sin(k\pi\alpha)\sin(2\pi n\beta/\Delta\beta) \label{psi1def} \\
T_1 &= B_{kn} \sin(k\pi\alpha)\cos(2\pi n\beta/\Delta\beta) \label{T1def}
\end{align}
\noindent where $\Delta\beta = 2\pi/\log{R}$.
We may add an arbitrary constant phase to $\beta$ which simply rotates the solutions without
changing the total kinetic energy or the heat transferred.
The flow corresponding to (\ref{psi1def}) is an array of convection rolls in $(\alpha, \beta)$
space.
Inserting (\ref{psi1def})--(\ref{T1def}) into (\ref{T1b})--(\ref{psi1b}) we obtain $A_{kn}$, $B_{kn}$, and $\lambda$:
\begin{align}
A_{kn} &= \left( \frac{4/\Delta\beta}{(\pi k)^2 + (2\pi n/\Delta\beta)^2} \right)^{1/2},\\
B_{kn} &= \frac{-(4/\Delta\beta)^{1/2}(2\pi n/\Delta\beta)}
{\left[(\pi k)^2 + (2\pi n/\Delta\beta)^2\right]^{3/2}},\\
\lambda &= \frac{-(2\pi n/\Delta\beta)^2}
{\left[(\pi k)^2 + (2\pi n/\Delta\beta)^2\right]^{2}}.
\end{align}
\noindent The heat transferred by each mode (\ref{Q2}) can
be expanded in orders of $Pe$. The zeroth order term
is $Q_0 = \Delta \beta$. The first order term $Q_1$
vanishes for each mode (\ref{psi1def})--(\ref{T1def}),
so the leading-order term to be maximized is
\begin{align}
Q_2 = - \int_{\beta_{min}}^{\beta_{max}} \int_0^1 \partial_\beta \psi_1\, T_1 \,d\alpha d\beta
= -\lambda = \frac{(2\pi n/\Delta\beta)^2}
{\left[(\pi k)^2 + (2\pi n/\Delta\beta)^2\right]^{2}}.
\end{align}
\noindent $Q_2$ is maximized at $k = 1$; if $\Delta\beta/2$ is an integer, the maximizing mode number is $n = \Delta\beta/2$,
in which case $Q_{2, max} = 1/4\pi^2$. If $\Delta\beta/2$ is not an integer but lies between the integers $l$ and $l+1$,
then the $Q_2$-maximizing $n$ is $l$ if $\Delta\beta/2 < \sqrt{l(l+1)}$ and $l+1$ otherwise.
The $Q_2$-maximizing flow is a mode (\ref{psi1def}) consisting of approximately
square convection rolls in $(\alpha, \beta)$ space. In figure \ref{fig:AnnulusOptimaFig} we plot the streamlines at equal
intervals of the streamfunction for
$Q_2$-maximizing flows when the outer boundary radius is $R$ = 1.5, 4, 10, and 1000.
\begin{figure}
\centerline{\includegraphics[width=14cm]{AnnulusOptimaFig.pdf}}
\caption{Streamlines for optimal convection cooling flows in annular domains,
with the outer (cold) boundary
at different radii $R$ = 1.5 (A), 4 (B), 10 (D), and 1000 (F). The inner (hot) boundary has radius 1.
Panels C and E show the flows in panels B and D in $(\alpha, \beta)$ coordinates. The streamlines are plotted without arrows because there is no change in the heat transferred
when the flow direction is reversed. The streamlines are plotted at nine equally spaced values
between the extrema of the stream function.
}
\label{fig:AnnulusOptimaFig}
\end{figure}
When $R$ is close to 1 (panel A), we have a thin gap between the hot and cold surfaces, and the flows
tend to the square convection rolls in a straight channel found by \cite{hassanzadeh2014wall}. Since
$h$ in (\ref{h}) is $r \log{R}$, which varies little in the thin gap, there is relatively little distortion when the vortices are mapped
from the $(\alpha, \beta)$-plane to the $(x,y)$-plane.
The case $R \gg 1$
corresponds to the application of the convection cooling of the exterior of a circular body.
In this limit (e.g. panel F), the optimal flow is
\begin{align}
\psi_{max} = -A_{11} \sin{\left(\pi\left(1-\frac{\log{r}}{\log{R}}\right)\right)}\sin(\theta). \label{psi1opt}
\end{align}
\noindent This flow consists of a pair of oppositely-signed vortices forming a dipole near the body.
The amount of vortex distortion is given by $h = r\log{R}$, so the vortices are
strongly stretched moving outward from the body.
The centers of
the vortices are located at $r = \sqrt{R}$, the local extrema of $\psi_{max}$. Differentiating
(\ref{psi1opt}) with respect to $r$, we find that the azimuthal flow speed $u_\theta$ is proportional to
$1/r$ in the neighborhood of the body ($1 \leq r \ll R$). When
$\Delta\beta/2 = \pi/\log{R}$ is an integer, $Q_{2, max} = 1/4\pi^2$. For large
$R$, $Q_{2, max}$ is attained with $n = k = 1$.
Consequently, with a fixed total kinetic energy, the
optimal heat transferred by convection ($Pe^2 Q_{2, max} \sim Pe^2\log^{-2}{R}$)
decays slowly as the outer radius increases. Physically, it seems reasonable
that the optimal heat transferred with fixed kinetic energy
should decrease as the hot and cold surfaces become more distant.
We have discussed optimal convection cooling using the specific example of an annular geometry, but
similar results can be obtained for a wide range of geometries. The first step is to
compute $(\alpha, \beta)$ for a given geometry. In the next two sections we discuss
the solutions for two geometries which are relevant to applications. We find that a slight
modification is needed in cases where the hot and cold surfaces meet.
\section{Hot plate embedded in a cold surface}\label{sec:plate}
The next geometry we consider is a hot plate embedded in a cold surface
(see figure \ref{fig:PlateFig}). The hot plate gives a simple model of a flat heated surface such as a computer processor.
An important difference with the previous case is that the cold and hot surfaces are now connected (or nearly so, as we will
discuss). We nondimensionalize by the
length of the plate.
\begin{figure}
\centerline{\includegraphics[width=15cm]{PlateFig.pdf}}
\caption{A) A portion of the flow domain--the upper half
plane--near the hot boundary. The hot boundary extends nearly to
the limits of the interval ($|x| \leq 1/2, y = 0$). The
cold boundaries are approximately ($|x| > 1/2, y = 0$). They are joined
by small insulating boundaries which are approximately semi-circular.
The curvilinear grid plots equicontours of the conformal coordinates
$\alpha = \frac{1}{\pi}\left(\arg{\left(z-\frac{1}{2}\right)}
-\arg{\left(z+\frac{1}{2}\right)} \right)$ and $\beta = -\frac{1}{\pi}\log{\left|\frac{z-1/2}{z+1/2}\right|}$.
B) Close-up near $z = 1/2$. The approximate semi-circle closest to 1/2 (light grey or green)
is the contour $\beta = 2$. This
is one of the insulating boundaries of the truncated domain, with the other
located near $z = -1/2$.
C) The flow domain in ($\alpha, \beta$) space. The letters D, E, F, G show corresponding
points in panels A) and B).
}
\label{fig:PlateFig}
\end{figure}
The plate (where $T = 1$) occupies $-1/2 \leq x \leq 1/2$ on the real axis, and the cold surface (where $T = 0$) is $|x| > 1/2$. The fluid lies in the upper half plane $y > 0$.
In terms of
the complex coordinate $z = x+iy$, the pure conduction solution (with $Pe$ = 0) is
\begin{align}
T_0 = \frac{1}{\pi}\left(\arg{\left(z-\frac{1}{2}\right)} -\arg{\left(z+\frac{1}{2}\right)} \right).
\end{align}
\noindent Following the same procedure as before,
\begin{align}
\alpha = \frac{1}{\pi}\left(\arg{\left(z-\frac{1}{2}\right)}
-\arg{\left(z+\frac{1}{2}\right)} \right), \quad \beta = -\frac{1}{\pi}\log{\left|\frac{z-1/2}{z+1/2}\right|}.
\label{abplate}
\end{align}
\noindent These are also known as bipolar coordinates, with foci at $x = \pm1/2$. In these coordinates, the flow domain is $0 < \alpha < 1$,
$-\infty < \beta < \infty$. The problem is no longer periodic in $\beta$,
and new boundary
conditions are required at $\beta = \pm\infty$. To proceed, we first truncate
the domain to $0 < \alpha < 1$,
$\beta_{min} < \beta < \beta_{max}$, introducing new boundaries at
finite values $\beta_{min} < 0$ and $\beta_{max} > 0$.
By (\ref{abplate}), the boundaries' distances from $z = \pm 1/2$ decay
exponentially fast ($\sim e^{-|\beta_{min}|\pi}$, $e^{-|\beta_{max}|\pi}$)
as $\beta_{min}$ and $\beta_{max}$ grow in magnitude. Therefore with moderate
values of $\beta_{min}$ and $\beta_{max}$
we obtain a good approximation to the original non-truncated domain in
the $(x,y)$ plane.
We find
that the leading-order solution for $T$ is still $T_0$ if we make the new
boundaries insulating, so $0 = \partial_n T = \partial_\beta T/h$. $T_0$
satisfies this boundary condition because $\partial_\beta T_0 =
\partial_\beta \alpha = 0$ ($\alpha$ and $\beta$ are orthogonal coordinates).
Recomputing the variation of $\mathcal{L}$ with these Neumann (instead of periodic)
conditions at the $\beta$ boundaries,
we find that $m$ has the same boundary conditions as $T$, and thus as before,
$m_0 = T_0$, $m_1 = -T_1$. We assume the insulating boundaries are
solid walls that the flow does not penetrate. Because they
are connected to the hot and cold boundaries, this requires $\psi = 0$ on all four
sides of the rectangular domain in $(\alpha, \beta)$ space. Thus we obtain
the same eigenvalue problem as (\ref{T1b})--(\ref{mu1b}) but with
$\partial_n T_1 = \psi_1 = 0$ on the insulating
sides of the rectangle.
The eigenfunctions are slightly modified from (\ref{psi1def})--(\ref{T1def}):
\begin{align}
\psi_1 &= A_{kn} \sin(k\pi\alpha)\sin(\pi n(\beta-\beta_{min})/\Delta\beta), \label{psi2def} \\
T_1 &= B_{kn} \sin(k\pi\alpha)\cos(\pi n(\beta-\beta_{min})/\Delta\beta) \label{T2def}
\end{align}
\noindent with
\begin{align}
A_{kn} &= \left( \frac{1/\Delta\beta}{(\pi k)^2 + (\pi n/\Delta\beta)^2} \right)^{1/2},\\
B_{kn} &= \frac{-(1/\Delta\beta)^{1/2}(\pi n/\Delta\beta)}
{\left[(\pi k)^2 + (\pi n/\Delta\beta)^2\right]^{3/2}},\\
\lambda &= \frac{-(\pi n/\Delta\beta)^2}
{\left[(\pi k)^2 + (\pi n/\Delta\beta)^2\right]^{2}}.
\end{align}
\noindent Because the Neumann boundaries are solid insulating walls,
(\ref{Q2}) still holds, and $Q_2$, the leading contribution to the heat transferred by a given mode,
is
\begin{align}
Q_2 = -\lambda = \frac{ (\pi n/\Delta\beta)^2}
{\left[(\pi k)^2 + (\pi n/\Delta\beta)^2\right]^{2}}.
\end{align}
\noindent $Q_2$ is maximized when $k = 1$ and $n$ is one of the two integers closest to
$\Delta\beta$, which again results in convection rolls which are square (or nearly so, if
$\Delta\beta$ is not an integer) in $(\alpha, \beta)$ space. In
figure \ref{fig:PlateFlowsFig} we show examples
of optimal flows when $\Delta\beta$ is even (panel A) and odd
(panels C and F).
\begin{figure}
\centerline{\includegraphics[width=15cm]{PlateFlowsFig.pdf}}
\caption{A) Optimal heat transferring flow with $\beta_{min}$ = -2
and $\beta_{max}$ = 2. The flow has four vortices: one pair of large
vortices of the order of the plate length, and one pair of small vortices of the order of the insulating
boundary length.
B) Flow from panel A in $(\alpha, \beta)$ coordinates.
C) Optimal heat transferring flow with $\beta_{min}$ = -2.5
and $\beta_{max}$ = 2.5. The flow has five vortices, one
large vortex centered over the plate and two pairs symmetric about the y-axis.
D) Flow from panel C in $(\alpha, \beta)$ coordinates.
E) Close-up of the flow in panel C near $z = 1/2$, showing one of the smaller vortices, near the insulated boundary (light gray or
green). F) Example of an asymmetric flow with $\beta_{min} = -2.75$, $\beta_{max} = 2.25$. Five
vortices are present, and the two nearest the insulated boundaries are too
small to be visible.
}
\label{fig:PlateFlowsFig}
\end{figure}
We have a chain of vortices
which are $O(1)$ in size above the hot plate, and shrink exponentially
in size as they approach the insulated boundaries at the hot-cold interface. In
panel A, $\beta_{min}$ = -2
and $\beta_{max}$ = 2, and there are four vortices, one large pair and one
small pair. The smallest
vortices are proportional in size to the insulated boundaries. Panel B shows the
same flow in the $(\alpha, \beta)$ plane. Panel C shows the optimal flow with $\beta_{min}$ = -2.5
and $\beta_{max}$ = 2.5, so there are five vortices (the smallest pair is not
visible). Panel D shows this flow in the $(\alpha, \beta)$ plane. Panel E shows
the flow near one of the small vortices in panel C at the insulated boundary
near $z = 1/2$. Panel F shows an asymmetric case, $\beta_{min}$ = -2.75
and $\beta_{max}$ = 2.25. Again there are five vortices, but now they are
asymmetric with respect to the plate center.
The size of each vortex is
proportional to its distance from the insulated boundary, and the
typical flow speed within each vortex is inversely proportional to its typical length.
Therefore, as the vortices become smaller, the flow speed increases such that the total
kinetic energy of each vortex (proportional to its area times typical flow speed squared)
is constant.
Once again, $Q_{2, max} = 1/4\pi^2$ (when
$\Delta\beta$ is an integer), so the optimal heat transferred by convection is essentially
the same when we vary the positions of the insulating boundaries. The optimal flow varies considerably with the positions of the insulating boundaries, however, because they set the vortices' positions. It is somewhat
surprising that the optimal heat transferred by convection does not increase as the hot and cold surfaces are brought together (for the annulus we found a slow decrease in the heat transferred, but only when the surfaces are very distant). However, the heat transferred by conduction ($\Delta\beta$) diverges
logarithmically as the distance between the hot and cold surfaces tends to zero.
\section{Channel with hot interior and cold exterior}\label{sec:channel}
A classic configuration for
heat transfer is the flow through a heated
channel
(\cite{eagle1930coefficient,dipprey1963heat,bird2007transport,lienhard2013heat,bejan2013convection}).
Recent works have studied the dynamics of
flapping flags and vortex streets in
channels (\cite{alben2015flag,wang2015dynamics}) with the goal of improving
heat transfer efficiency by using vortices to mix
the temperature field (\cite{gerty2008fluidic,shoele2014computational,jha2015small}).
Here we consider a heated channel in an unbounded fluid (figure
\ref{fig:ChannelFig}A).
\begin{figure}
\centerline{\includegraphics[width=11cm]{ChannelFig.pdf}}
\caption{A) Heated channel in an unbounded flow domain. The inside surfaces of the channel walls have temperature $T = 1$ and outside surfaces have $T = 0$.
B) The flow domain in $\alpha, \beta$ space for the case $L = 2, \beta_{min} = -0.5,$ and
$\beta_{max} = 0.5$. The light blue (light gray) dot marks the point
at infinity and the black triangle marks the origin in $x,y$ space.
The green boundaries are insulated surfaces, and the red and blue
boundaries are the hot and cold surfaces of the upper channel wall.
C) The upper half of the flow domain in $x,y$ space. The red, blue, green, and yellow
lines and triangle correspond to those
in panel B. The curvilinear grid plots equicontours of $\alpha$ and $\beta$.
}
\label{fig:ChannelFig}
\end{figure}
The inside surfaces of the channel walls are
fixed at temperature $T = 1$ and the outside surfaces are fixed at
$T = 0$. If we assume that the channel flow is a
perturbation of unidirectional flow, we could possibly truncate the
fluid domain to the interior of the channel, and use inflow and outflow
boundary conditions for the fluid and temperature
(\cite{shoele2014computational,wang2015dynamics}). However, we wish to
make minimal assumptions about the flow and therefore we represent
the flow both inside and outside the channel. Energy is needed
to drive the flow through the entire fluid domain, so it makes sense to
include the outside flow in the optimization calculation.
The analytic function $\alpha + i\beta$ can be found numerically
by using complex analysis and boundary integral methods.
Because $T$ (and therefore $\alpha$)
has a jump across the channel walls, we can use the Plemelj formula
(\cite{estrada2012singular}) to write $\alpha + i\beta$ in
the form
\begin{align}
\alpha(z) + i\beta(z) = \frac{1}{2\pi i}\int_C \frac{\gamma(s) + i\eta(s)}{z - \zeta(s)} ds + C_1 \label{plemelj}
\end{align}
\noindent where $C$ is a complex contour parametrized by arc length as
$\zeta(s)$, representing the union of the two channel walls, $s \pm i/2, -L/2 < s < L/2$.
The real constant $C_1$ will be chosen shortly.
The function $\gamma(s) + i\eta(s)$ corresponds to the jump in $\alpha(s) + i\beta(s)$ along
the contour.
From the boundary conditions on $\alpha$ (those of $T$) we have
$\gamma(s) = \pm 1$ on $\zeta(s) = s \pm i/2$. It remains to find $\eta(s)$. We
note that (\ref{plemelj}) provides the average value of $\alpha + i\beta$ on the
countour when $z$ is a point on the contour and the integral is changed from an
ordinary integral to a
principal value integral. We solve
for $\eta(s)$ by the condition that the average value of $\alpha(s)$ on the contour equals
$1/2$:
\begin{align}
1/2 = Re\left[\frac{1}{2\pi i}~-\hspace{-.35cm}\int_{-L/2}^{L/2} \left(\frac{1 + i\eta(s)}{s' - s}
+ \frac{-1 + i\eta(s)}{s' - s +i}\right) \,ds \right] + C_1, \;\;-L/2 < s' < L/2. \label{nu}
\end{align}
\noindent We have written the contributions from the two channel walls in (\ref{plemelj}) separately and
used symmetry considerations to deduce that $\eta$ on the lower wall equals
that on the upper wall. We use the Chebyshev collocation method (\cite{golberg1990ins})
to solve (\ref{nu}) with $s$ at points on a Chebyshev-Lobatto grid and $s'$ on a
Chebyshev-Gauss grid. We need two additional constraints to uniquely specify $\eta$ and $C_1$
in (\ref{nu}) (\cite{golberg1990ins}). These can be given as:
\begin{align}
\lim_{s \rightarrow \pm L/2} \eta(s) \sqrt{L^2/4-s^2} = 0.
\end{align}
\noindent These conditions are preferred because they make $\eta$ a bounded continuous function.
Having solved for $\eta$ and
$C_1$ and knowing $\gamma$, we evaluate $\alpha$ and $\beta$ using (\ref{plemelj}). As with the
single heated plate,
we find that $\beta \rightarrow \pm \infty$ at the plate edges where the hot and cold boundaries
meet. So as before, we cut off
the domain with small insulated boundaries at $\beta = \beta_{min}$ and $\beta_{max}$. In
figure \ref{fig:ChannelFig}B and C we show $(\alpha,\beta)$ equicontours and their location in the $(x,y)$
plane, respectively, for the case $L = 2$. Here we use $\beta_{min} = -0.5, \beta_{max} = 0.5$. These
values are relatively small in magnitude so that the insulated boundaries are large enough to
be clearly visible.
As with the single heated plate example, the insulated boundaries approach the plate
edges exponentially fast as $|\beta_{min}|$ and $\beta_{max}$ increase.
In panel C, we show only the upper half plane in $(x,y)$. In the lower half plane,
the values of $\alpha$ and $\beta$ can be smoothly continued from the upper half plane, by reflecting the values in the origin. Thus the flow is also smoothly continued in
the lower half-plane by reflecting $\psi$ in the origin.
\begin{figure}
\centerline{\includegraphics[width=11cm]{ShortChannelFlowFig.pdf}}
\caption{Streamlines of optimal heat transferring flows for
short channels. The channel lengths are $L = 0.5$ in the top
row (A, D, G) and 1 in the bottom row (C, F, I). The middle row (B, E, H)
shows the flows from the top and bottom rows in $(\alpha, \beta)$
coordinates, where they are the same.
Moving from left to right, the positions of the insulated
boundaries (green) varies: $\beta_{min} = -0.5,$ and
$\beta_{max} = 0.5$ (A-C), $\beta_{min} = -1,$ and
$\beta_{max} = 1$ (D-F, too close to the plate edges to be visible), and
$\beta_{min} = -0.25,$ and $\beta_{max} = 0.75$ (G-I).}
\label{fig:ShortChannelFlowFig}
\end{figure}
In figure \ref{fig:ShortChannelFlowFig} we show optimal flows
for ``short'' channels, with lengths less than or equal to the
channel height. The top row (A, D, G) shows optimal flows when
the channel length is half the height. The only difference between
A, D, and G is the location of the insulating boundaries. As we
have seen with the single plate,
this makes a big difference in the optimal flow. In panel A, the flow
consists of rolls that move around the plates. There is a strong net flow
through the channel, which is fast near the channel walls, and slow
in the middle of the channel. The corresponding flow in the
$(\alpha, \beta)$ plane is shown in panel B. It is a single vortex which
is mapped twice onto the $(x, y)$ plane in panel A by reflection
in the origin, as mentioned previously. In panel D we move the insulating
boundaries closer to the plate edges, and obtain two pairs of vortices moving
around the edges of the channel. The flow is fastest near the plate edges,
where the hot and cold surfaces are closest. Panel E shows the corresponding
flow in the $(\alpha, \beta)$ plane. Panel G shows an example
in which the insulating boundaries are asymmetric in $\beta$, with the
$(\alpha, \beta)$ representation shown in panel H.
The bottom row (C, F, I) shows optimal flows when the channel length
equals its height. The corresponding flows in the $(\alpha, \beta)$ plane
are again those in panels B, E, and H.
In panel C, the flow has a stronger circulatory component that moves
across the channel openings, from top to bottom and vice versa,
around the outside of the entire channel, compared to that in panel A.
In panels F and I, the flow is also stronger near the centers of the
channel openings than in the corresponding flows of the top row (D and G).
\begin{figure}
\centerline{\includegraphics[width=11cm]{LongChannelFlowFig.pdf}}
\caption{Streamlines of optimal heat transferring flows for
longer channels (lengths $L = 2$ (A-C)
and 4 (D-F) in units of channel height).
For the length-2 channels (A-C), the insulated boundaries (green)
are located at A) $\beta_{min} = -0.5,$ and
$\beta_{max} = 0.5$, B) $\beta_{min} = -1,$ and
$\beta_{max} = 1$ (too close to the plate edges to be visible), and
C) $\beta_{min} = -0.25,$ and $\beta_{max} = 0.75$.
For length-4 channels (D-F), the insulated boundaries (green)
are located at D) $\beta_{min} = -0.5,$ and
$\beta_{max} = 0.5$, E) $\beta_{min} = -1,$ and
$\beta_{max} = 1$ (too close to the plate edges to be visible), and
F) $\beta_{min} = -0.25,$ and $\beta_{max} = 0.75$.
}
\label{fig:LongChannelFlowFig}
\end{figure}
In figure \ref{fig:LongChannelFlowFig} we show optimal flows
when the channel length is increased to two (A-C) and four (D-F)
times the channel height. Now the flows are strongly confined to the
channel openings. There is very little flow in the center of the
channel. The reason is that the temperature due to pure
conduction ($T_0 = \alpha$) is nearly uniform in the center of
the channel. There is little to be gained by using energy to transport fluid through
this region of nearly uniform temperature. Since $\alpha$ changes little
in this region, $h = \|\partial\mathbf{X}/\partial\alpha\| = \|\partial\mathbf{X}/\partial\beta\| \gg 1$ and the flow speed
$\|\mathbf{u}\| = \|-\nabla^\perp_{x,y} \psi\| =\| -\nabla^\perp_{\alpha,\beta} \psi/h\|
\sim 1/h \ll 1.$ The lack of flow through the channel is a significant
difference from typical channel convection cooling flows
(\cite{bird2007transport}). This can be attributed to our objective of maximizing the total heat flux out of the hot walls. In these solutions,
most of the flux
is close to the channel openings, near the cold exterior.
\section{Large amplitude flows}\label{sec:LargeKE}
We now consider the case where $Pe$ is not small by returning to equations
(\ref{T})--(\ref{mu}), which hold for all $Pe \geq 0$.
We have noted that this is the same as the constant-kinetic-energy
problem considered by \cite{hassanzadeh2014wall} under the transformation
$(\alpha, \beta) \rightarrow (1-z, x)$. They computed numerically the solutions
from $Pe$ small to large and derived a boundary layer solution for
the optimal flow in the limit of large $Pe$. Their derivation works
for both the periodic and Neumann
boundary conditions (in $\beta$) that we used here. We simply translate
the complete asymptotic solution for $\psi$ that they found into an
$(\alpha, \beta)$ rectangle with $\beta$-width $\Delta\beta$:
\begin{align}
\psi &\sim \frac{f(\mu_H,\Gamma)}{\sqrt{2\mu_H}}\sin\left(\frac{N\pi(\beta-\beta_{min})}{\Delta\beta}\right)
\tanh\left(\frac{\pi f(\mu_H,\Gamma)(1-\alpha)}{2\sqrt{2\mu_H}\,\Gamma}\right)
\tanh\left(\frac{\pi f(\mu_H,\Gamma)\alpha}{2\sqrt{2\mu_H}\,\Gamma}\right)
\nonumber \\
& \mbox{where} \quad f(\mu_H,\Gamma) = 1 - \frac{\pi\sqrt{2\mu_H}}{2\Gamma}. \label{largePe}
\end{align}
\noindent In the limit of large $Pe$, equal-aspect-ratio convection rolls are no longer optimal.
Instead, the convection rolls become very narrow in the $\beta$ direction,
with narrow boundary layers of fast-moving fluid along the hot and cold walls.
The size of the boundary layers and the $\beta$-width of the rolls are set by the parameters
$\Gamma$ and $\mu_H$ (= $2\lambda$) in (\ref{largePe}). For $Pe \gg 1$
\cite{hassanzadeh2014wall} found that for the optimal rolls,
\begin{align}
\Gamma \sim 3.8476\, Pe^{-1/2}\Delta\beta^{1/4} \quad ; \quad \mu_H = \Gamma^2/(8\pi^2) ,
\end{align}
\noindent where we have added the $\Delta\beta$ term due to our different definition of $Pe$.
This solution is valid for our problems with Neumann conditions at the
$\beta$ boundaries when $\Delta\beta/\Gamma$ is an
integer, $N$ in (\ref{largePe}). For periodic boundary conditions, $N$ must
be even.
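As a concrete check of (\ref{largePe}), the asymptotic stream function with the scalings above can be evaluated numerically. The following Python sketch is ours (the function name, default $\beta$ bounds, and the illustrative choice $N = 4$ are assumptions; in practice $N$ is fixed by the integer condition above):

```python
import numpy as np

# Evaluate the large-Pe asymptotic stream function (largePe).
# Parameter defaults are illustrative, not taken from the text.
def psi_large_pe(alpha, beta, Pe, beta_min=-0.5, beta_max=0.5, N=4):
    dbeta = beta_max - beta_min
    Gamma = 3.8476 * Pe**-0.5 * dbeta**0.25      # optimal roll width scaling
    mu_H = Gamma**2 / (8 * np.pi**2)
    f = 1 - np.pi * np.sqrt(2 * mu_H) / (2 * Gamma)
    arg = np.pi * f / (2 * np.sqrt(2 * mu_H) * Gamma)
    return (f / np.sqrt(2 * mu_H)
            * np.sin(N * np.pi * (beta - beta_min) / dbeta)
            * np.tanh(arg * (1 - alpha)) * np.tanh(arg * alpha))
```

By construction $\psi$ vanishes on the hot and cold walls ($\alpha = 0, 1$) and on the roll boundaries in $\beta$.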
\begin{figure}
\centerline{\includegraphics[width=15cm]{LargeAmplitudeFlowsFig.pdf}}
\caption{A comparison of optimal flows with small and large kinetic energy
($Pe$) across the three geometries considered in this work. Top row (A-D):
optimal flows in an annulus with outer radius $R = 10$. A) Small-$Pe$
optimal flow in $(\alpha, \beta)$. B) Small-$Pe$
optimal flow in $(x, y)$. C) Large-$Pe$ ($Pe \approx$ 210)
optimal flow in $(x, y)$. D) Large-$Pe$ ($Pe \approx$ 210)
optimal flow in $(\alpha, \beta)$.
Middle row (E-H): optimal flows over a hot plate in a cold surface.
E) Small-$Pe$
optimal flow in $(\alpha, \beta)$. F) Small-$Pe$
optimal flow in $(x, y)$. G) Large-$Pe$ ($Pe \approx$ 182)
optimal flow in $(x, y)$. H) Large-$Pe$ ($Pe \approx$ 182)
optimal flow in $(\alpha, \beta)$.
Bottom row (I-L): optimal flows around a channel with hot interior
and cold exterior. I) Small-$Pe$
optimal flow in $(\alpha, \beta)$. J) Small-$Pe$
optimal flow in $(x, y)$. K) Large-$Pe$ ($Pe \approx$ 237)
optimal flow in $(x, y)$. L) Large-$Pe$ ($Pe \approx$ 237)
optimal flow in $(\alpha, \beta)$.
}
\label{fig:LargeAmplitudeFlows}
\end{figure}
In figure \ref{fig:LargeAmplitudeFlows} we plot a few examples of optimal flows at moderately large $Pe$ for the geometries we have considered, together with the corresponding
small-$Pe$ optima. The top row shows flows past a cylinder with outer boundary
at $R = 10$ at small $Pe$ in $(\alpha, \beta)$ (A) and $(x, y)$ (B), the same
flows as in figure \ref{fig:AnnulusOptimaFig}E and D, respectively. The large-$Pe$ optimum
is shown in panels C and D of figure \ref{fig:LargeAmplitudeFlows}.
$Pe \approx 210$ so that $N = 8$. The streamlines are approximately rectangles
in $(\alpha, \beta)$ (D), which map onto approximate wedges in $(x, y)$ (C).
In the middle row, we show optimal flows over a hot plate
at small $Pe$ (E, F, similar to those in figure \ref{fig:PlateFlowsFig})
and $Pe \approx 182$ (G, H). A given rectangular streamline in $(\alpha, \beta)$ (H)
maps onto a streamline which approximately follows one larger circle and one smaller
circle (in the opposite direction) in the bipolar coordinate system (figure \ref{fig:PlateFig}A) connected by segments along the hot and cold plates.
In the bottom row, optimal flows in a channel of length 2 are shown at small
$Pe$ (I, J) and $Pe \approx 237$ (K, L), giving $N = 4$. In this case a rectangular
streamline in $(\alpha, \beta)$ maps onto a streamline which follows two arcs
which are roughly circles centered at the channel edges. The arcs are again
connected by segments along the hot and cold plates.
\section{More general geometries}\label{sec:General}
We have studied three examples which are representative of a more
general class of problems which can be solved with the same method. The main
step is to solve for
$(\alpha(x,y), \beta(x,y))$. The first case,
flow through an annulus, can be extended to any doubly connected region which
lies between one simple closed curve on which the temperature is 1 and another
on which the temperature is 0. The flow region may include the point at infinity.
All such regions correspond to a rectangle in $(\alpha, \beta)$ space on
which the boundary conditions for $\psi$ and $T$ are Dirichlet on the
$\alpha = 0$, $\alpha = 1$
sides and periodic on the $\beta = \beta_{min}$, $\beta = \beta_{max}$
sides.
The second case, flow over a heated plate, can be generalized to the flow within
any region bounded by a simple closed curve which is partitioned into four
arcs. On the four arcs the boundary
conditions are $T = 0$, $\partial_n T = 0$, $T = 1$, and $\partial_n T = 0$, moving
continuously around the curve. On the entire curve $\psi = 0$. The curve may
include the point at infinity (as for the heated plate), and in general we may
use any simple closed curve on the Riemann sphere. On the arcs where $\partial_n T = 0$,
we have $0 = \partial_n T_0 = \partial_n \alpha = -\partial_s \beta$, so $\beta$ is
constant. All such cases correspond
to a rectangle in $(\alpha, \beta)$ space on
which the boundary conditions are Dirichlet for $\psi$, and Dirichlet and Neumann for
$T$ on the $\alpha$ and $\beta$ sides, respectively.
The green arcs in figures 3-7 were chosen because they followed lines of constant
$\beta$ for functions which could be written in closed form or computed relatively
easily, but the shapes of the insulated boundaries can be arbitrary in general.
The third case, flow through a heated channel, is an example where the flow lies
in the region between two simple closed curves, each of which is partitioned into
four arcs as above. Here by symmetry the same rectangle in $(\alpha, \beta)$ was used
to cover the $(x,y)$ space twice. This case can be generalized to nonsymmetrical
configurations of $n$ closed curves, and the curves may contain multiple arcs on which
$T = 0$ and $T = 1$.
\begin{figure}
\centerline{\includegraphics[width=14cm]{ExoticRegionFig.pdf}}
\caption{Optimal small kinetic energy flow (small $Pe$) over a surface
with two hot plates (red) separated by cold plates (blue) and insulated
boundaries (green), in $(\alpha, \beta)$ (A)
and $(x,y)$ (B). The flow is described by (\ref{psi2def}) with
$\beta_{min} = -0.9$,
$\beta_{max} = 0.8$, and
($\alpha, \beta$) given by (\ref{multiplate}).
}
\label{fig:ExoticRegion}
\end{figure}
Figure \ref{fig:ExoticRegion} shows an example where the flow domain is the upper
half plane, bounded by two hot and three cold segments along the
$x$ axis. On the Riemann sphere this is one closed curve on which
the temperature is 1 on two arcs and 0 on two arcs. Here
\begin{align}
\alpha = \frac{1}{\pi}\arg{\left(\frac{(z-6)(z-3)}{(z-5)z}\right)},
\quad \beta = -\frac{1}{\pi}\log{\left|\frac{(z-6)(z-3)}{(z-5)z}\right|}.
\label{multiplate}
\end{align}
\noindent The rectangle with sides $\alpha = 0$, $\alpha = 1$, $\beta = -0.9$,
$\beta = 0.8$ maps twice onto the flow region. Moving around the
rectangle counterclockwise twice in panel A corresponds to moving along the boundary on
the $x$ axis and green arcs in panel B from $x = -\infty$ to $+\infty$.
In the small-$Pe$ limit
the stream function is (\ref{psi2def}) with $n = 2$.
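The conformal pair (\ref{multiplate}) is straightforward to evaluate; a brief Python sketch (the function name and test points are ours), using the principal branch of the logarithm:

```python
import numpy as np

# Evaluate (alpha, beta) of (multiplate) for the two-hot-plate geometry
# in the upper half plane.
def alpha_beta(z):
    w = (z - 6) * (z - 3) / ((z - 5) * z)
    alpha = np.angle(w) / np.pi        # = arg(w)/pi
    beta = -np.log(np.abs(w)) / np.pi  # = -log|w|/pi
    return alpha, beta
```

With this choice of factors, $w$ is negative real on the hot segments of the $x$ axis ($0 < x < 3$ and $5 < x < 6$), giving $\alpha \approx 1$ there, and positive real on the cold segments, giving $\alpha \approx 0$.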
\section{Fixed enstrophy flows}\label{sec:Enstrophy}
\cite{hassanzadeh2014wall} also considered the optimization problem
with fixed enstrophy instead of fixed kinetic energy. Fixed
enstrophy is equivalent to a fixed rate of viscous energy
dissipation, which is proportional to
the enstrophy for our flow boundary conditions (solid walls or periodic boundaries)
(\cite{lamb1932hydrodynamics}). In place of
(\ref{Pe}) we have:
\begin{align}
\frac{1}{2}\iint \nabla \mathbf{u}:\nabla \mathbf{u} \, dA =
\iint (\Delta \psi)^2 dA = Pe^2. \label{Enstrophy}
\end{align}
\noindent Here $Pe = \sqrt{\dot{E}/\mu W}$ is redefined to
include the (steady) total rate of viscous energy dissipation $\dot{E}$ instead of the total kinetic
energy, the fluid viscosity $\mu$, and the out-of-plane width $W$.
With (\ref{Enstrophy}) instead of (\ref{Pe}) in the Lagrangian (\ref{L}),
the equation for $\psi$ becomes
\begin{align}
0 = -\nabla^\perp T \cdot \nabla m - 2\lambda\Delta^2 \psi\quad ,
\quad \psi = \mbox{const.}, \; \partial_n \psi = 0 \; \mbox{on boundaries}, \label{psi2}
\end{align}
instead of (\ref{psi}). Equation (\ref{psi2}) includes the biharmonic operator and therefore
an additional boundary condition on $\psi$, the no-slip condition corresponding to a viscous flow.
In $(\alpha,\beta)$ coordinates, (\ref{Enstrophy}) becomes
\begin{align}
\iint \frac{1}{h^2} (\Delta_{\alpha,\beta} \psi)^2 dA_{\alpha,\beta} = Pe^2, \label{EnstrophyAB}
\end{align}
\noindent and (\ref{psi2}) becomes
\begin{align}
0 = -\nabla_{\alpha,\beta}^\perp T \cdot \nabla_{\alpha,\beta} m - 2\lambda\Delta_{\alpha,\beta}
\left(\frac{1}{h^2}\Delta_{\alpha,\beta} \psi \right) ,
\quad \psi = \mbox{const.}, \; \partial_n \psi = 0 \; \mbox{on boundaries}. \label{psi2ab}
\end{align}
\noindent The metric term ($1/h^2$) in (\ref{psi2ab}) is a function of $(\alpha,\beta)$
given by (\ref{h}),
different for each geometry, and therefore
unlike for the fixed kinetic energy case, the solutions are not geometry-independent
when viewed in $(\alpha,\beta)$ coordinates.
The case of a straight channel between
hot and cold walls (where $h \equiv 1$) was solved by
\cite{hassanzadeh2014wall} with free-slip
boundary conditions instead of no-slip conditions, to simplify the problem.
In this case, for small $Pe$ the optimal flows are sinusoidal convection
rolls with the same form as for the fixed kinetic energy case, but with a
different aspect ratio ($\sqrt{2}$ versus 1). At large $Pe$, the boundary
layer structure is different, but the outer flow has the same form as
in the fixed kinetic energy case. In the no-slip case, the optimal
flows with fixed enstrophy were
computed numerically by \cite{souza2016optimal} and compared to the
free-slip case. The solutions are similar qualitatively in both
cases at small and large $Pe$. In both cases there is a transition from convection
rolls with ``round'' streamlines and a moderate aspect ratio at small $Pe$ to
narrow rectangular convection
rolls at large $Pe$. At large $Pe$
there are significant differences in the boundary layer structure,
and a small difference in the power-law scaling of heat transferred ($Q$)
with respect to $Pe$: $Q \sim Pe^{0.58}$ for the stress-free optima while
$Q \sim Pe^{0.54}$ for the no-slip optima.
For more general geometries, the fixed-enstrophy problem requires a numerical
solution. The $(\alpha,\beta)$ coordinates are useful here because they map the
flow domain onto a rectangle, where the differential operators can be discretized
by finite differences on uniform grids. The ($1/h^2$) factor
in (\ref{EnstrophyAB}) can be viewed as a weight that decreases the cost of
vorticity where $h$ is large. By (\ref{h}), $h = \|\partial_x \alpha,
\partial_y \alpha\|^{-1} = \| \nabla T_0 \|^{-1}$, so the cost of vorticity
is decreased where $\| \nabla T_0 \|$ is
small. Typically this occurs far from the hot surface.
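As an illustration of the discretization step, here is a minimal uniform-grid 5-point Laplacian on the $(\alpha,\beta)$ rectangle (our sketch; names and grid handling are assumptions):

```python
import numpy as np

# Second-order 5-point Laplacian on a uniform (alpha, beta) grid,
# with grid spacings da and db; boundary values are left untouched.
def laplacian_ab(psi, da, db):
    lap = np.zeros_like(psi)
    lap[1:-1, 1:-1] = (
        (psi[2:, 1:-1] - 2 * psi[1:-1, 1:-1] + psi[:-2, 1:-1]) / da**2
        + (psi[1:-1, 2:] - 2 * psi[1:-1, 1:-1] + psi[1:-1, :-2]) / db**2)
    return lap
```

The metric factor $1/h^2$ then enters pointwise as a weight when assembling (\ref{EnstrophyAB}) and (\ref{psi2ab}).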
\begin{figure}
\centerline{\includegraphics[width=14cm]{AnnulusEnstrophyFig.pdf}}
\caption{Comparison of small-$Pe$ optimal flows in an annulus with fixed kinetic energy
(left two columns) and fixed enstrophy (right two columns). The outer
boundary radius is $R$ = 20 (top row, A-D), 100 (middle row, E-H), and
1000 (bottom row, I-L). The fixed kinetic energy flows
are shown in the first column (A, E, I) in the $(\alpha, \beta)$ plane
and in the second column (B, F, J) in the $(x,y)$ plane. The fixed-enstrophy
flows are shown in the third column (C, G, K) in the $(x,y)$ plane
and in the fourth column (D, H, L) in the $(\alpha, \beta)$ plane.
}
\label{fig:AnnulusEnstrophy}
\end{figure}
We now discuss the solutions to the small-$Pe$ eigenvalue
problem with fixed enstrophy for the annular
domains with various outer radii $R$. In the third column of
figure \ref{fig:AnnulusEnstrophy} we present the optimal flows at
$R$ = 20 (C), 100 (G), and 1000 (K) in the
$(x,y)$ plane. When lengths are scaled by $R$, the flows converge to a
common solution in the large-$R$ limit, consisting of a smooth vortex dipole
which is the size of the domain
(and obeys no-slip at the boundaries). In the fourth column, the same flows
are shown in the $(\alpha,\beta)$ plane. As $R$ increases (moving from panel D to
H to L), the vorticity moves towards $\alpha = 0$,
where $h = R^{1-\alpha}\log(R)$ is largest. In the second column (B, F, J) we plot the
optimal flows with fixed kinetic energy. In these cases the vortex dipole
is stretched towards the inner cylinder, with each vortex center at a distance
$\sim \sqrt{R}$ from the origin. In the first column, the same flows are shown in the
$(\alpha,\beta)$ plane, where they become identical under a rescaling
of the $\beta$ axis.
With fixed kinetic energy, we have larger vorticity near
the inner cylinder. This vorticity is spread more uniformly in the
fixed-enstrophy solutions. We now show that for these flows
(i.e. panels C, G, and K), the kinetic energy diverges in the limit of large $R$.
For the fixed-enstrophy flows, we have $\iint (\Delta_{x,y} \psi)^2 dA = Pe^2$,
so $(\Delta_{x,y} \psi)^2 R^2 \sim Pe^2$ and $Pe/R \sim \Delta_{x,y} \psi \sim \psi/R^2$.
Hence $\psi \sim R$ and $\mathbf{u} = -\nabla_{x,y}^\perp \psi \sim \mathbf{1}$: the flow
speed is of order 1 over an area $\sim R^2$, so the kinetic energy diverges as
$R^2$ as $R$ becomes large.
\begin{figure}
\centerline{\includegraphics[width=14cm]{PlateEnstrophyFig.pdf}}
\caption{Comparison of small-$Pe$ optimal flows with fixed kinetic energy
(top row) and fixed enstrophy (bottom row) in a half-disk with radius
10 above a heated plate of unit length centered at the origin. Optimal
flow with fixed kinetic energy in the $(x,y)$ plane (A) and the
$(\alpha, \beta)$ plane (B). Optimal
flow with fixed enstrophy in the $(x,y)$ plane (C) and the
$(\alpha, \beta)$ plane (D). Here we use $\beta_{min} = -2$,
$\beta_{max} = 2$.}
\label{fig:PlateEnstrophy}
\end{figure}
For the optimal flow over a heated plate in section \ref{sec:plate}, the domain is infinite.
We have $h = \frac{\pi}{2}\left(\cosh(\pi \beta) - \cos(\pi \alpha)\right)^{-1}$.
As $(\alpha,\beta)\to (0,0)$, $h \sim \|(\alpha,\beta)\|^{-2}$. Thus
the small-$Pe$ eigenvalue problem is singular. Solving the problem
numerically in the $(\alpha,\beta)$ rectangle, we find that
the solutions have infinite kinetic energy and do not satisfy
$\psi \to 0$ as $(\alpha,\beta)\to (0,0)$ (or $x + iy \to \infty$). To examine the situation further,
we consider a regularized version of
the problem, in which the domain is cut off by a large semi-circle of radius
$R$ on which $T = 0$ (see figure \ref{fig:PlateEnstrophy}A). We compute
$\alpha$ and $\beta$ for this domain by modifying the unbounded
definition (\ref{abplate}):
\begin{align}
\alpha + i\beta = \frac{-i}{\pi}\log{\left(\frac{z-1/2}{z+1/2}\right)}
-i\sum_{k = 1}^\infty a_k z^k.
\label{abplateMod}
\end{align}
\noindent We have added a power series, which gives a general representation of
an analytic function which converges inside the disk of radius $R$.
We truncate the series at $k = N$ and choose the $a_k$ so that
the $\sin{k\theta}$-components of $\alpha$ are zero
on $|z| = R$. We find that
the $a_k$ converge rapidly to 0 with increasing $k$, so just a
few terms are needed to obtain $\sim 10^{-12}$ precision for $R = 10$.
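The coefficient choice admits a simple implementation: with real $a_k$, the series contributes $a_k R^k \sin k\theta$ to $\alpha$ on $|z| = R$, so each $a_k$ just cancels the corresponding sine coefficient of the unbounded $\alpha$. A Python sketch (the function name, quadrature size, and real-$a_k$ assumption are ours):

```python
import numpy as np

# Choose the series coefficients a_k in (abplateMod) so that the
# sin(k*theta) components of alpha vanish on |z| = R.
def series_coeffs(R=10.0, N=8, M=512):
    theta = 2 * np.pi * np.arange(M) / M
    z = R * np.exp(1j * theta)
    # alpha of the unbounded map on the circle |z| = R
    alpha0 = np.real(-1j / np.pi * np.log((z - 0.5) / (z + 0.5)))
    a = np.empty(N)
    for k in range(1, N + 1):
        b_k = 2 * np.mean(alpha0 * np.sin(k * theta))  # sine coefficient
        a[k - 1] = -b_k / R**k
    return a
```

For $R = 10$ the coefficients indeed decay very rapidly with $k$, consistent with the precision quoted above.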
In figure \ref{fig:PlateEnstrophy} we compare the optimal flows with
fixed kinetic energy and enstrophy in the small-$Pe$ limit with $R = 10$.
The flow with fixed kinetic energy (panel A) is almost identical to that in the
unbounded case (figure \ref{fig:PlateFlowsFig}A), because the flow
is very weak far from the plate in that case. Panel B shows the same
flow in the $(\alpha,\beta)$ plane. The optimal flow with fixed
enstrophy (panel C) has a single vortex whose size is roughly that of the
domain. Panel D shows this flow in the $(\alpha,\beta)$ plane. The vorticity is
concentrated near $(\alpha,\beta) = (0,0)$, where $h$ is largest.
In summary, the optimal flows with fixed enstrophy have vortices which are
much larger in scale than those for flows with fixed kinetic energy.
In the examples shown here, the fixed-enstrophy flows
are of the same size as the physical domain, while the fixed-energy
flows are of the order of the size of the hot surface.
\section{Summary and possible extensions}\label{Conclusion}
We have shown that a certain class of optimal flows
for heat transfer can be generalized to a wide
range of geometries using a change of coordinates. We presented solutions
in three basic situations: the exterior cooling of
a cylinder, a hot flat plate embedded in a cold surface, and
a channel with hot interior and cold exterior.
Steady 2D flows were sufficient to present the basic idea,
but unsteady flows can be addressed by retaining the
unsteady term in the advection-diffusion equation. It seems likely
that the coordinate change would be useful in certain 3D geometries as well,
though these problems have not yet been solved.
\bigskip
We acknowledge helpful discussions with Professor Charles R. Doering.
\section{Introduction and motivation}
The magnetic field of many celestial bodies is maintained by the dynamo effect, which requires that the body contains a fluid conductor executing a non-trivial motion. The energy source for this motion is frequently assumed to be thermal convection. Accordingly, convection driven dynamos in rotating spherical shells have been extensively studied \citep{busse1,roberts1,wicht,jones}. \citet{bullard} first pointed out that precession is a possible energy source for the geodynamo. Precession driven flows, their stability and dynamo properties have been investigated independently of thermal convection \citep{poincare,malkus,busse2,vanyo,tilgner4,noir,tilgner1,tilgner7,tilgner3,tilgner2}. More recently, some attention has also been paid to other types of mechanical forcing, such as tides, libration and collisions \citep{weiss,bars1,dwyer}.
For the Earth and the other planets and moons of the solar system, the mechanical forcing is well known, while it is uncertain if their cores are convecting. In general, it will always be necessary to consider buoyancy and a mechanical forcing such as precession in conjunction. The thermal stratification can be either unstable or stable depending on which epoch in the thermal history of a celestial body is being considered. Typically, the surface cooling of a recently formed body leads to a superadiabatic gradient. As the heat generated during accretion and formation of the body is progressively lost, the internal temperature gradient eventually drops below the adiabatic gradient, yielding a stable thermal stratification. Small bodies such as planetesimals or the Moon have already reached this point in the past.
In this paper, we investigate the hydrodynamics of stably and unstably stratified fluid in a precessing spherical shell. This is intended as a model of the Earth, whose precession is maintained by the gravitational torque of the Moon and the Sun, but also as a model of planetesimals in the early solar system which undergo force free precessional motion after collisions if the direction of their angular momentum does not coincide with one of their principal axes of inertia.
In section 2 we formulate the problem and introduce the numerical methods. In section 3 we investigate the interaction of stable stratification and precession. In section 4 we investigate the interaction of unstable stratification and precession. We conclude with some discussion in section 5.
\section{Equations and numerical methods}
We consider a fluid in a spherical shell of inner radius $r_i$ and outer radius $r_o$. Suppose that the spherical shell spins at the angular velocity $\bm\Omega_s$ and precesses at the angular velocity $\bm\Omega_p$. A background temperature $T_b$ is imposed on the fluid, which we assume to be a linear function of radius,
\begin{equation}\label{tb}
T_b=\frac{T_o-T_i}{d}(r-r_o)+T_o,
\end{equation}
where $T_i$ and $T_o$ are the temperature at $r_i$ and $r_o$, respectively, and $d=r_o-r_i$ is the thickness of the gap. Such a background temperature is maintained by a heat source or sink varying inversely with radius (because $\nabla^2 T_b = 2(T_o-T_i)/(rd)$). An alternative model in which the stratification is imposed by fixed temperatures at the boundaries in the absence of a heat source leads to a non-uniform stratification and has not been considered for simplicity. In the frame moving with the boundary, the dimensional Navier-Stokes equation of a Boussinesq fluid reads
\begin{align}\label{ns1}
&\frac{\partial\bm u}{\partial t}+\bm u\bm\cdot\bm\nabla\bm u+\bm\Omega_{ref}\times(\bm\Omega_{ref}\times\bm r)+2\bm\Omega_{ref}\times\bm u+(\bm\Omega_p\times\bm\Omega_s)\times\bm r= \nonumber\\
&-\frac{1}{\rho_o}\bm\nabla p+\nu\nabla^2\bm u+\frac{\rho}{\rho_o}\bm g, ~~~ \nabla \cdot \bm u =0,
\end{align}
where $\bm\Omega_{ref}=\bm\Omega_s+\bm\Omega_p$ is the angular velocity of the moving frame with respect to the inertial frame and $\rho_o$ the fluid density at $r_o$. In the Boussinesq approximation, density variations are only retained in the buoyancy term and the density deviation is proportional to the temperature deviation,
\begin{equation}\label{boussinesq}
\frac{\rho-\rho_o}{\rho_o}=-\alpha(T-T_o)=-\alpha(\Theta+T_b-T_o),
\end{equation}
where $\Theta=T-T_b$ is the temperature deviation from the conductive profile and $\alpha$ the thermal expansion coefficient. In a sphere of constant density, the gravitational acceleration is proportional to radius,
\begin{equation}\label{g}
\bm g=-g_o\frac{\bm r}{r_o},
\end{equation}
where $g_o$ is the gravity at $r_o$.
Substituting (\ref{boussinesq}) and (\ref{g}) into (\ref{ns1}) and normalising length with $d$, time with $\Omega_s^{-1}$, $\bm u$ with $\Omega_sd$ and $\Theta$ with $(T_i-T_o)$, we obtain the dimensionless Navier-Stokes equation
\begin{equation}\label{ns2}
\frac{\partial\bm u}{\partial t}+\bm u\bm\cdot\bm\nabla\bm u=-\bm\nabla\Phi+Ek\nabla^2\bm u+2\bm u\times(\hat{\bm z}+Po\hat{\bm\Omega}_p)+Po(\hat{\bm z} \times\hat{\bm\Omega}_p)\times\bm r+\widetilde{Ra}\Theta\bm r,
\end{equation}
where all the curl-free terms, e.g. the centrifugal force and the term associated with $T_b$, are already absorbed into the total pressure $\Phi$. In (\ref{ns2}) there are three dimensionless parameters. The Ekman number
\begin{equation}\label{ek}
Ek=\frac{\nu}{\Omega_sd^2}
\end{equation}
measures the ratio of the spin time scale to the viscous time scale, and the Poincar\'e number
\begin{equation}\label{po}
Po=\frac{\Omega_p}{\Omega_s}
\end{equation}
measures the precession rate. Despite its unusual form, we choose to call the parameter in the buoyancy term the Rayleigh number
\begin{equation}\label{ra}
\widetilde{Ra}=\frac{\alpha g_o(T_i-T_o)}{\Omega_s^2r_o},
\end{equation}
which is proportional to the imposed stratification. A negative $\widetilde{Ra}$ corresponds to a stable stratification and a positive $\widetilde{Ra}$ to an unstable stratification. For a stable stratification, $|\widetilde{Ra}|$ is just the square of the dimensionless Brunt-V\"ais\"al\"a frequency. The unit vector along the precession axis in Cartesian coordinates $(x,y,z)$ is
\begin{equation}\label{omega_p}
\hat{\bm\Omega}_p=\sin\beta\cos t\,\hat{\bm x}-\sin\beta\sin t\,\hat{\bm y}+\cos\beta\,\hat{\bm z},
\end{equation}
where $\hat{\bm z}$ is the spin axis and $\beta$ the angle between the spin axis $\hat{\bm z}$ and the precession axis $\hat{\bm\Omega}_p$. The boundary condition of velocity is no-slip at the outer boundary, whereas it is stress-free at the inner boundary to approximate a full sphere.
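For reference, the precession axis (\ref{omega_p}) is a unit vector that makes the constant angle $\beta$ with the spin axis $\hat{\bm z}$ at all times; a trivial Python sketch (the function name is ours):

```python
import numpy as np

# Precession-axis unit vector (omega_p) in the frame attached to the
# boundary, at dimensionless time t and precession angle beta.
def omega_p_hat(t, beta):
    return np.array([np.sin(beta) * np.cos(t),
                     -np.sin(beta) * np.sin(t),
                     np.cos(beta)])
```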
Accordingly, the dimensionless equation of temperature deviation is
\begin{equation}\label{theta}
\frac{\partial\Theta}{\partial t}+\bm u\bm\cdot\bm\nabla\Theta-u_r=\frac{Ek}{Pr}\nabla^2\Theta.
\end{equation}
In (\ref{theta}) the Prandtl number
\begin{equation}\label{pr}
Pr=\frac{\nu}{\kappa}
\end{equation}
measures the ratio of the kinematic viscosity to the thermal diffusivity. The boundary condition is $\Theta=0$ at both inner and outer boundaries.
Equations (\ref{ns2}) and (\ref{theta}) are solved numerically. In the calculations we fix the aspect ratio $r_i/r_o$ to $0.1$, such that the spherical shell is almost a full sphere and hence the inner core is nearly negligible, and the angle $\beta$ to $60^\circ$. We investigate only retrograde precession ($Po<0$), which is geophysically relevant. We fix the Prandtl number to $Pr=1$. In the calculations of stable stratification we fix $Po=-0.3$ and vary $Ek$ and $\widetilde{Ra}$ to investigate how the stable stratification influences the precessional instability. In the calculations of unstable stratification, we select two Ekman numbers, $Ek=5\times10^{-3}$ and $1\times10^{-3}$, which are large enough that the flow is stable to the precessional instability, and vary $Po$ and $\widetilde{Ra}$ to investigate the interaction of precessional and convective instabilities.
The numerical calculations are carried out in the spherical coordinates $(r,\theta,\phi)$ with the parallel pseudo-spectral code provided by \citet{tilgner5}. The toroidal-poloidal decomposition method is employed such that the divergence free condition of fluid flow $\bm\nabla\bm\cdot\bm u=0$ is automatically satisfied. The spherical harmonics $P_l^m(\cos\theta)e^{{\mathrm i}m\phi}$ are used on the spherical surface $(\theta,\phi)$ and the Chebyshev polynomials $T_k(r)$ are used in the radial direction. Resolutions as high as $256$ (radial $r$), $128$ (colatitude $\theta$) and $64$ (longitude $\phi$) are used. A semi-implicit scheme is employed for time stepping, using an Adams-Bashforth scheme for the nonlinear and a Crank-Nicolson scheme for the diffusive terms.
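The semi-implicit scheme can be illustrated on a toy one-dimensional problem $\partial_t u = N(u) + \nu\,\partial_x^2 u$ with periodic $x$: the nonlinear term is advanced with second-order Adams-Bashforth and the diffusive term with Crank-Nicolson, here in Fourier space (our sketch, not the production code of \citet{tilgner5}; all names are ours):

```python
import numpy as np

# One AB2/CN step for a Fourier mode with wavenumber k:
# (1 - dt*nu*L/2) u^{n+1} = (1 + dt*nu*L/2) u^n + dt*(3/2 N^n - 1/2 N^{n-1}),
# where L = -k^2 is the diffusion symbol.
def ab2_cn_step(u_hat, n_hat, n_hat_prev, k, nu, dt):
    L = -nu * k**2
    return ((1 + 0.5 * dt * L) * u_hat
            + dt * (1.5 * n_hat - 0.5 * n_hat_prev)) / (1 - 0.5 * dt * L)
```

In a full solver the first step is bootstrapped (e.g. with an Euler step), since Adams-Bashforth needs the previous nonlinear term.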
\section{Stably stratified precessional flow}
The precessional flow in the interior of a spherical container is mainly a solid body rotation with angular velocity $\bm\Omega_f$, which is in general different from both $\bm\Omega_s$ and $\bm\Omega_p$ \citep{busse2}. At the boundary the fluid rotation matches the spin rate $\bm\Omega_s$, such that there exists an Ekman boundary layer where the poloidal flow is generated by Ekman pumping. In addition to the boundary layers, there also exist internal shear layers that are spawned at the critical latitude \citep{tilgner4}. A well-understood instability mechanism in these flows is a triad resonance between the basic flow and two inertial modes \citep{kerswell1,tilgner1,tilgner7,tilgner3}.
We now consider the interaction of a stable stratification and precession. Radial stratification tends to suppress fluid motion in the radial direction, which suggests that stable stratification suppresses the precessional instability. However, stable stratification leads to gravito-inertial waves whose frequencies may be closer to a perfect triad resonance than the pure inertial waves of the unstratified case, so that destabilisation through stable stratification is possible, too \citep{kerswell2}. Only stabilisation is observed in our numerical calculations. In the numerical calculations we fix $Po=-0.3$ and calculate various combinations of $Ek$ and $\widetilde{Ra}$, namely $Ek$ ranges from $5\times10^{-3}$ to $1\times10^{-4}$ and $\widetilde{Ra}$ from $0$ (purely precessional flow) to $-1$. Because a stable precessional flow is anti-symmetric about the centre, i.e. $\bm u(-\bm r)=-\bm u(\bm r)$, we use symmetries \citep{tilgner7} to detect the precessional instability: the flow is separated into two parts, $\bm u_s(\bm r)=\left[\bm u(\bm r)-\bm u(-\bm r)\right]/2$ and $\bm u_a(\bm r)=\left[\bm u(\bm r)+\bm u(-\bm r)\right]/2$, and the kinetic energy of $\bm u_a$ represents the instability.
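This symmetry separation is a cheap post-processing step. On a Cartesian sampling grid centred at the origin it reads as follows (our sketch; a spectral code would instead apply the reflection $\bm r \to -\bm r$ through the parities of its basis functions):

```python
import numpy as np

# Split a vector field u, sampled on a grid symmetric about the centre,
# into the part anti-symmetric about the origin (the stable basic flow
# obeys u(-r) = -u(r)) and the symmetric remainder, whose kinetic
# energy flags the instability. u has shape (3, n, n, n); reversing all
# three spatial axes maps r -> -r.
def split_symmetries(u):
    u_ref = u[..., ::-1, ::-1, ::-1]
    u_s = 0.5 * (u - u_ref)   # u(-r) = -u(r) part
    u_a = 0.5 * (u + u_ref)   # u(-r) = +u(r) part (instability)
    return u_s, u_a
```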
Figure \ref{energy} shows the total kinetic energy $E$ (figure \ref{E}) and the ratio of the instability energy $E_a$ to the total energy (figure \ref{Ea}) against the Ekman number $Ek$ at different Rayleigh numbers $\widetilde{Ra}$ (for time dependent flows, $E$ and $E_a$ are calculated by averaging over time). Figure \ref{E} shows that increasing $Ek$ reduces the energy $E$. This behavior is well known from previous studies and is due to the fact that the viscous term is more important at higher $Ek$, which damps any motion excited by the Poincar\'e force. It is also seen from figure \ref{E} that a larger $|\widetilde{Ra}|$ reduces the energy. This is consistent with figure \ref{omega1}, which shows the modulus of the fluid rotation vector $|\bm\omega|=\sqrt{\omega_x^2+\omega_y^2+\omega_z^2}$ and indicates that a lower $|\widetilde{Ra}|$ corresponds to a stronger fluid rotation. In order to explain the data in figures \ref{E} and \ref{omega}, one would have to extend Busse's calculation \citep{busse2} to a stably stratified medium, which is not attempted here.
Figure \ref{omega} shows the radial dependence of the fluid rotation vector. The shear in the basic flow is due to viscous corrections to the solution of the inviscid equation, which is simply a solid-body rotation. Viscosity introduces Ekman pumping and shear layers crossing the entire fluid volume, all of which possess a radial velocity component. Since stratification suppresses radial motion, it also reduces the shear. In addition to the modulus of the fluid rotation vector, we also calculated the angle $\gamma$ between the fluid rotation vector and its average, $\gamma=\arccos\left(\bm\omega\cdot\overline{\bm\omega}/(|\bm\omega||\overline{\bm\omega}|)\right)$; $\gamma$ is less than around $10^\circ$ in the interior and reaches around $30^\circ$ near the boundary. Moreover, it is interesting that figure \ref{omega2} indicates a counter-rotation of the fluid in the precession frame ($\omega_z+1<0$), which is located in the vicinity of the inner boundary.
The orientation of the fluid rotation axis is best characterised in the precession frame in which the geographic and precession axes are stationary. Figure \ref{angle} shows the angle formed by the rotation axis of the fluid (averaged over the fluid volume) with the geographic axis (figure \ref{colatitude}) and with the meridian of the precession axis (figure \ref{longitude}) in the precession frame. Figure \ref{colatitude} parallels the total kinetic energy shown in figure \ref{E}, since the kinetic energy increases with the increasing angle between the fluid and boundary rotation axes. The key point in figure \ref{colatitude} is that the stable stratification does not noticeably modify the rotation axis until $\widetilde{Ra}=-1$, and therefore we may conclude that a stable stratification affects the precessional flow only for $|\widetilde{Ra}|>1$.
The energy in the unstable modes is shown in figure \ref{Ea}; $E_a \neq 0$ indicates instability. The onset of instability is shifted towards smaller $Ek$ as $|\widetilde{Ra}|$ is increased. This can be seen in more detail in figure \ref{Ek-Ra}, which shows the critical Ekman number, below which the flow is unstable, as a function of $\widetilde{Ra}$. The critical Ekman number decreases monotonically with increasing $|\widetilde{Ra}|$. Even though this is intuitive (stable stratification suppresses radial motion and hence retards the onset of instability), the opposite could have happened if the changes in frequency of the inertial modes due to stratification had brought a combination of them closer to a triad resonance. There is an indication of this effect at finite amplitude, since the hierarchy among the different $\widetilde{Ra}$ is not preserved in figure \ref{Ea} at $Ek=10^{-3}$.
The onset of the instability is to a large degree just a matter of the energy in the basic flow. The growth rate of a triad resonance depends on the shear in the basic flow, which increases with the energy of the basic flow. This point is demonstrated in figure \ref{Ea-E}, which shows that the onsets of instability for different $\widetilde{Ra}$ nearly coincide if $E_a$ is plotted as a function of $E$, with the exception of $\widetilde{Ra}=-1$. This again shows that stable stratification is significant only for $|\widetilde{Ra}|=1$ or larger.
\begin{figure}
\begin{center}
\subfigure[]
{\includegraphics[scale=0.46]{fig1a.ps}\label{E}}
\subfigure[]
{\includegraphics[scale=0.46]{fig1b.ps}\label{Ea}}
\end{center}
\caption{\footnotesize Stably stratified precessional flow at Poincar\'e number $Po=-0.3$. The total kinetic energy $E$ (a) and the ratio of instability energy $E_a$ to total energy (b) as a function of Ekman number $Ek$ at different Rayleigh numbers $\widetilde{Ra}$.}\label{energy}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[]
{\includegraphics[scale=0.46]{fig2a.ps}\label{omega1}}
\subfigure[]
{\includegraphics[scale=0.46]{fig2b.ps}\label{omega2}}
\end{center}
\caption{\footnotesize Stably stratified precessional flow. Radial dependence of the fluid rotation vector at Ekman number $Ek=2\times10^{-3}$ and Poincar\'e number $Po=-0.3$. Modulus $|\bm\omega|$ (a) and $z$ component $\omega_z$ (b).}\label{omega}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[]
{\includegraphics[scale=0.46]{fig3a.ps}\label{colatitude}}
\subfigure[]
{\includegraphics[scale=0.46]{fig3b.ps}\label{longitude}}
\end{center}
\caption{\footnotesize Stably stratified precessional flow at Poincar\'e number $Po=-0.3$. Position of fluid rotation axis as a function of Ekman number $Ek$ at different Rayleigh numbers $\widetilde{Ra}$. Colatitude (a) and longitude (b) of fluid rotation axis in the precession frame. The dashed line denotes the position of precession axis.}\label{angle}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[]
{\includegraphics[scale=0.46]{fig4a.ps}\label{Ek-Ra}}
\subfigure[]
{\includegraphics[scale=0.46]{fig4b.ps}\label{Ea-E}}
\end{center}
\caption{\footnotesize Stably stratified precessional flow at Poincar\'e number $Po=-0.3$. (a) The squares show points determined numerically to lie on the stability limit. The points are connected by a line to guide the eye. (b) The instability energy $E_a$ as a function of the total energy $E$ for different Rayleigh numbers $\widetilde{Ra}$.}
\end{figure}
\section{Unstably stratified precessional flow}
For the study of the fluid core of most planets it is likely more relevant to investigate the interaction of unstable stratification with precession. In \citet{bars2} the geometry of an infinitely long cylinder with an elliptical cross section was analytically investigated and it was shown that the elliptical instability and the convective instability may either stabilise or destabilise each other in different parameter regimes. In this section we numerically calculate unstably stratified precessional flow in spherical geometry to study the interaction of precession with unstable stratification.
We select two Ekman numbers, $Ek=5\times10^{-3}$ and $1\times10^{-3}$, such that the purely precessional flow is stable, namely stable at least up to $|Po|=1$ for $Ek=5\times10^{-3}$ and $|Po|=0.3$ for $Ek=1\times10^{-3}$. We then test various combinations of $Po$ and $\widetilde{Ra}$ to determine the neutral stability curves of the critical Rayleigh number $\widetilde{Ra}_c$ against $|Po|$ at both Ekman numbers, and to calculate some supercritical flows at $Ek=5\times10^{-3}$. $\widetilde{Ra}_c$ is sought by increasing or decreasing $\widetilde{Ra}$ in steps of $0.01$ at a given $Po$. We distinguish two different types of flows by the dominant azimuthal wavenumbers in the spectra of the velocity field. These spectra are computed in a frame of reference in which the axes of precession and boundary rotation are stationary and a $z'$-axis points along the rotation axis of the fluid. The azimuthal angle $\phi'$ is measured around the $z'$-axis and the azimuthal wavenumbers are denoted by $m'$. The onset of convection occurs at $m'=3$, so we call the flow convective if the kinetic energy is concentrated at $m'=3$. If on the contrary the spectrum has its largest contributions at $m'<3$, we call the flow precessional. Flows undergoing precessional instability have their kinetic energy concentrated in these wavenumbers. The motion with wavenumbers $m'=1$ and $2$ can be identified as two inertial modes by the same technique as used by \citet{tilgner7}. These two modes, together with the spin-over mode, fulfill the requirements of a triad resonance. Their parameters (latitudinal wavenumber $l'$ and frequency $\varpi'$) are close to those of two analytically determined eigenmodes with $m'=1$, $l'=6$, $\varpi'=-0.537$ and $m'=2$, $l'=6$, $\varpi'=-1.093$ \citep{greenspan}.
Figure \ref{RaPo1} shows an approximate stability diagram of unstably stratified precessional flows at $Ek=5\times10^{-3}$. The solid line denotes the neutral stability curve, above which the flow is unstable and below which it is stable; circles denote convective flows and squares precessional flows. In the absence of precession ($Po=0$), $\widetilde{Ra}_c$ for the onset of convective instability is $0.45$ and the dominant mode is $m'=3$. When $|Po|$ increases to $0.1$, $\widetilde{Ra}_c$ decreases and the instability is still convective. The instability becomes precessional for $|Po|=0.2$ or larger. $\widetilde{Ra}_c$ decreases until it reaches a minimum of $\widetilde{Ra}_c=0.19$ at $|Po|=0.45$. The flow becomes more stable with stronger precession for $|Po|>0.45$. At $|Po|=0.6$ the flow is so stable that $\widetilde{Ra}_c$ reaches $1$. At such a high $\widetilde{Ra}$ the flow pattern is convective again.
Precession apparently has a dual role. On the one hand, both precession and convection can lead to instabilities on their own, so that a superposition of both can be expected to be even less stable. This is observed in figure \ref{RaPo1} around $-Po \approx 0.45$, where the critical Rayleigh number is reduced by more than a factor of 2 by precession, even though precession alone in the isothermal fluid (corresponding to $\widetilde{Ra}=0$) is stable. On the other hand, precession introduces shear on top of the global rotation, and the onset of convective instability has now to be computed for a sheared basic flow, which can lead to a higher critical Rayleigh number. An example of this phenomenon is seen in figure \ref{RaPo1} for $-Po$ around 0.6.
We then investigate the lower Ekman number of $Ek=1\times10^{-3}$. We only seek the neutral stability curve and do not calculate any supercritical flows, and we calculate up to $|Po|=0.3$, beyond which the purely precessional flow ($\widetilde{Ra}=0$) is already unstable (figure \ref{Ea}). Figure \ref{RaPo2} shows this neutral stability curve. As in figure \ref{RaPo1}, circles denote the convective instability and squares the precessional instability. For purely convective flows ($Po=0$), the critical Rayleigh number $\widetilde{Ra}_c=0.07$ at $Ek=1\times10^{-3}$ is much lower than $\widetilde{Ra}_c=0.45$ at $Ek=5\times10^{-3}$. This is consistent with the asymptotic scaling law of previous analytical \citep{roberts2, busse3, zhang} and numerical calculations \citep{zhang, tilgner6}. Notice that the definition of $Ra$ in \citet{tilgner6} has to be translated to our definition (equation \ref{ra}), so that the asymptotic scaling law reads $\widetilde{Ra}_c=O(Ek^{2/3})$ (this scaling law does not hold precisely in our numerical calculations because the Ekman numbers we choose are too high). For $Ek=1\times10^{-3}$, $\widetilde{Ra}_c$ has a maximum of $\widetilde{Ra}_c=0.21$ at $|Po|=0.2$, which shows that precession stabilises the flow at small $|Po|$ whereas it destabilises it at large $|Po|$. We know that the neutral stability curve at $Ek=5\times10^{-3}$ (figure \ref{RaPo1}) eventually decreases when $|Po|$ is large enough for precession alone to be unstable. In summary, at both the high and low Ekman numbers the neutral stability curve is not monotonic, with a minimum at the high $Ek$ and a maximum at the low $Ek$.
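As a rough consistency check (a sketch using only the numbers quoted in this section), the measured critical Rayleigh numbers at $Po=0$ can be compared with the $\widetilde{Ra}_c=O(Ek^{2/3})$ scaling:

```python
# Compare the critical Rayleigh numbers quoted above at Po = 0 with the
# asymptotic scaling Ra_c = O(Ek^(2/3)). As noted in the text, the Ekman
# numbers used here are too high for the scaling to hold precisely.
Ek_hi, Ek_lo = 5e-3, 1e-3
Ra_hi, Ra_lo = 0.45, 0.07   # measured critical Rayleigh numbers at Po = 0

predicted_ratio = (Ek_hi / Ek_lo) ** (2.0 / 3.0)
measured_ratio = Ra_hi / Ra_lo
print(predicted_ratio)  # ~2.9
print(measured_ratio)   # ~6.4
```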
\begin{figure}
\begin{center}
\subfigure[]
{\includegraphics[scale=0.48]{fig5a.ps}\label{RaPo1}}
\subfigure[]
{\includegraphics[scale=0.48]{fig5b.ps}\label{RaPo2}}
\end{center}
\caption{\footnotesize Diagram of convective and precessional stability for unstably stratified flow at $Ek=5\times10^{-3}$ (a) and $Ek=1\times10^{-3}$ (b). The solid lines denote the neutral stability curves. Circles denote convective flows and squares precessional flows.}\label{RaPo}
\end{figure}
To end this section we discuss the Nusselt number $Nu$ which measures the ratio of the total heat transfer to the thermal conduction in the fluid at rest. The stratifying linear temperature profile assumed here is maintained by a heat source so that the Nusselt number depends on radius.
The Nusselt number is conveniently computed at the boundaries as
\begin{equation}
Nu=\frac{\langle \partial T/\partial r \rangle}{\Delta T/d}.
\end{equation}
The brackets denote an average over the spherical surface of radius $r_o$ or $r_i$ and time. Because of the internal heat source, the Nusselt numbers at the two boundaries are related through
\begin{equation}\label{twonu}
Nu|_{r=r_o}-\eta^2Nu|_{r=r_i}=1-\eta^2,
\end{equation}
where $\eta=r_i/r_o=0.1$ is the aspect ratio used in our calculations.
It is verified by our numerical calculations that equation (\ref{twonu}) holds precisely. Since $Nu$ at the inner boundary can be deduced from the one at the outer boundary, we only show $Nu$ at the outer boundary. Figure \ref{nu} shows $Nu$ at the outer boundary against $Po$ at $Ek=5\times10^{-3}$ for $\widetilde{Ra}=0$ and $\widetilde{Ra}=0.5$. Because a purely precessional flow ($\widetilde{Ra}=0$) has a poloidal component, it also transfers heat; in this case the temperature deviation is a passive scalar. Figure \ref{nu} contains several points with $Nu$ around $1.05$ which are compared in table \ref{urtheta}. At equal poloidal kinetic energy, one flow may be more efficient at transporting heat than another because of a better correlation between radial velocity and temperature. According to this criterion, convection is more efficient than precession at advecting heat. Adding precession to a convective flow reduces this correlation by a factor of about $4$, as shown in table \ref{urtheta}.
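Deducing the inner-boundary Nusselt number from the outer one via equation (\ref{twonu}) is a one-line computation; a minimal sketch (function name hypothetical):

```python
def nu_inner(nu_outer, eta=0.1):
    """Inner-boundary Nusselt number implied by the outer-boundary one,
    from Nu_o - eta^2 * Nu_i = 1 - eta^2 (internal heat source, eta = r_i/r_o)."""
    return (nu_outer - (1.0 - eta ** 2)) / eta ** 2

# pure conduction has Nu = 1 at both boundaries
print(nu_inner(1.0))   # ~1.0
# a 5% enhancement at the outer boundary is strongly amplified at the inner one
print(nu_inner(1.05))  # ~6.0
```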
\begin{figure}
\begin{center}
\includegraphics[scale=0.48]{fig6.ps}
\end{center}
\caption{\footnotesize Nusselt number $Nu$ at the outer boundary as a function of $Po$ at Ekman number $Ek=5\times10^{-3}$ and Rayleigh numbers $\widetilde{Ra}=0$ and $0.5$.}\label{nu}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lllllll}
$\widetilde{Ra}$ & $Po$ & $Nu$ & $\langle\int u_r\Theta dV\rangle$ & $\langle\int u_r^2dV\rangle$ & $\langle\int\Theta^2dV\rangle$ & $\frac{\langle\int u_r\Theta dV\rangle}{\sqrt{\langle\int u_r^2dV\rangle\langle\int\Theta^2dV\rangle}}$ \\
$0$ & $-0.1$ & $1.006$ & $6.08\times10^{-4}$ & $3.81\times10^{-3}$ & $4.22\times10^{-3}$ & $0.151$ \\
$0$ & $-0.2$ & $1.034$ & $2.57\times10^{-3}$ & $1.33\times10^{-2}$ & $1.91\times10^{-2}$ & $0.161$ \\
$0$ & $-0.3$ & $1.069$ & $4.24\times10^{-3}$ & $1.50\times10^{-2}$ & $2.76\times10^{-2}$ & $0.209$ \\
$0$ & $-0.4$ & $1.029$ & $2.07\times10^{-3}$ & $4.78\times10^{-3}$ & $1.10\times10^{-2}$ & $0.285$ \\
$0.5$ & $0$ & $1.036$ & $2.04\times10^{-3}$ & $4.73\times10^{-4}$ & $1.29\times10^{-2}$ & $0.826$ \\
$0.5$ & $-0.1$ & $1.053$ & $3.07\times10^{-3}$ & $4.60\times10^{-3}$ & $2.07\times10^{-2}$ & $0.315$ \\
$0.5$ & $-0.2$ & $1.084$ & $4.81\times10^{-3}$ & $1.56\times10^{-2}$ & $3.40\times10^{-2}$ & $0.209$
\end{tabular*}
\end{center}
\caption{\footnotesize Correlations $\langle\int u_r\Theta dV\rangle$, $\langle\int u_r^2dV\rangle$ and $\langle\int\Theta^2dV\rangle$ of purely precessional and convective flows. The brackets denote the time average. The first four flows at $\widetilde{Ra}=0$ are precessionally stable and the next three flows at $\widetilde{Ra}=0.5$ are convectively unstable as shown in figure \ref{RaPo1}.}\label{urtheta}
\end{table}
\section{Discussion}
We investigated the interaction of precession with thermal stratification. The heat transport in precessional flow is less than in convective flow at equal rms of the radial velocity component due to a smaller correlation between radial velocity and temperature. Unstable stratification together with precession can be either more stable or more unstable than the stratification or precession acting alone. The same could have been expected from the combination of precession with stable stratification, but in this case, we only found examples in which the stratification stabilises the flow. The presence of stable stratification becomes relevant for precession if the ratio of the Brunt-V\"ais\"al\"a frequency to the rotation frequency is near $1$.
We present a rough estimate to determine what this criterion implies for various celestial bodies. Consider the extreme case of a body which has cooled down so much that it is nearly isothermal. Within the Boussinesq approximation used here, this corresponds to a stable stratification with a temperature gradient equal to the adiabatic gradient, given by
\begin{equation}
\left(\frac{\partial T}{\partial z}\right)_{\rm ad}=-\frac{g\alpha T}{c_p},
\end{equation}
where $g$ is the gravitational acceleration, $\alpha$ the thermal expansion coefficient and $c_p$ the specific heat capacity at constant pressure.
The gravitational acceleration at the surface of a sphere with radius $R$ is given by
\begin{equation}
g=\frac{4}{3}\pi G\rho R,
\end{equation}
where $G$ is the gravitational constant. The Brunt-V\"ais\"al\"a frequency for the cooled body under consideration is given by
\begin{equation}\label{N}
N=\sqrt{-g\alpha\left(\frac{\partial T}{\partial z}\right)_{\rm ad}}=g\alpha\sqrt{\frac{T}{c_p}}=\frac{4}{3}\pi G\rho\alpha R\sqrt{\frac{T}{c_p}}.
\end{equation}
If we accept the following values to be representative of planetary cores
\begin{equation}\label{values}
\rho=11390\,{\rm kg/m^3},\hspace{3mm}\alpha=1.5\times10^{-5}\,{\rm K^{-1}},\hspace{3mm}T=4000\,{\rm K},\hspace{3mm}c_p=860\,{\rm J/(kg\cdot K)},
\end{equation}
we obtain
\begin{equation}
N=1.03\times10^{-10}R\;{\rm m^{-1}s^{-1}}.
\end{equation}
If we take as an example planetesimals with a typical radius of $R=10\,{\rm km}$ or $R=100\,{\rm km}$, we find $N=10^{-6}\,{\rm s^{-1}}$ or $N=10^{-5}\,{\rm s^{-1}}$, respectively. The rotation of planetesimals has a typical period of about $10$ hours \citep{weiss}, i.e.\ a rotation rate of $\Omega=1.7\times10^{-4}\,{\rm s^{-1}}$, which implies that buoyancy is negligible in this context. The situation is different for the Earth, with a core of radius $3.4 \times 10^6\,{\rm m}$ and a rotation period of $24$ hours. A stabilising stratification with a gradient $4\%$ less than the adiabatic gradient would be enough for the stratification to significantly influence the flow driven by precession.
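The numerical estimate above is easy to reproduce; the following sketch evaluates equation (\ref{N}) with the values of (\ref{values}):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
rho = 11390.0    # density, kg/m^3
alpha = 1.5e-5   # thermal expansion coefficient, 1/K
T = 4000.0       # temperature, K
cp = 860.0       # specific heat capacity, J/(kg K)

# N / R = (4/3) * pi * G * rho * alpha * sqrt(T / cp)
N_over_R = (4.0 / 3.0) * math.pi * G * rho * alpha * math.sqrt(T / cp)
print(N_over_R)  # ~1.03e-10 per metre of radius

# planetesimal with R = 100 km and a rotation period of ~10 hours
N = N_over_R * 1e5                     # ~1e-5 s^-1
Omega = 2.0 * math.pi / (10 * 3600)    # ~1.7e-4 s^-1
print(N / Omega)                       # N/Omega << 1: buoyancy negligible
```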
\bibliographystyle{jfm}
\section{Introduction}
Object detection, which needs to localize and recognize each object in images or videos, is a fundamental and critical problem in computer vision.
With the development of deep learning, current object detectors are usually based on convolutional neural
networks, like FCOS~\cite{tian2019fcos}, RepPoints~\cite{yang2019reppoints,chen2020reppointsv2}, TOOD~\cite{feng2021tood}, Faster R-CNN\cite{ren2016faster}, DETR~\cite{carion2020end}, DeFCN~\cite{wang2021end}, and Sparse R-CNN~\cite{sun2020sparse}. They can be roughly divided into anchor-free and anchor-based methods
based on how they generate detection results. Anchor-based methods first place many anchors with different scales and aspect ratios,
and then refine these anchors to generate proposals or detection results, while anchor-free methods directly generate them from feature maps.
\begin{figure}[t]
\centering
\subfigure[]{\label{aadi_for_rpn:a}\includegraphics[width=0.31\linewidth]{images/hand_designed.eps}}
\subfigure[]{\label{aadi_for_rpn:b}\includegraphics[width=0.31\linewidth]{images/aadi_anchors.eps}}
\subfigure[]{\label{aadi_for_rpn:c}\includegraphics[width=0.31\linewidth]{images/proposals.eps}}
\caption{Visualization of AADI for RPN. (a) The original anchors of the standard RPN. (b) The anchors augmented by AADI. (c) The proposals generated by AADI-RPN.}
\label{aadi_for_rpn}
\end{figure}
The anchor-free methods were once widely regarded as the standard paradigm for future object detectors. However, ATSS~\cite{zhang2020bridging} has demonstrated that anchor-free methods are not more effective than anchor-based ones. For example, with the same tricks and appropriate strategies for judging positive and negative samples of hand-designed anchors, the anchor-based ATSS can perform better than the anchor-free FCOS. Based on ATSS, PAA~\cite{paa-eccv2020} proposes a method of judging positive and negative samples with a Gaussian Mixture Model and achieves large improvements, which further supports the conclusion above. Both of these models estimate the critical point of the distribution of positive and negative anchors. However, even if the best critical point is estimated, there is no guarantee that the positive and negative samples will be completely separated, and placing more hand-designed anchors cannot solve this problem. Therefore, how to improve the quality of anchors has become an important problem.\footnotemark[1]
\footnotetext[1]{More details are given in the appendix.}
\begin{figure}[t]
\centering
\subfigure[$d$=1.]{\label{anchor_dilation:a}\includegraphics[width=0.3\linewidth]{images/anchor_d1.eps}}
\hspace{0.03\linewidth}
\subfigure[$d$=2.]{\label{anchor_dilation:b}\includegraphics[width=0.3\linewidth]{images/anchor_d2.eps}}
\hspace{0.03\linewidth}
\subfigure[$d$=3.]{\label{anchor_dilation:c}\includegraphics[width=0.3\linewidth]{images/anchor_d3.eps}}
\caption{Anchors under different dilations; the kernel of the convolution layer is fixed to $3\times3$.}
\label{anchor_dilation}
\end{figure}
In current anchor-based object detectors' paradigm, high-quality anchors usually lead to better performance, because they make it easier to separate positive and negative anchors, which is important to improve the recall rate of ground truth boxes and reduce the number of false positive predictions.
In the standard RPN, anchors with an IoU greater than 0.7 are marked as positive, those with an IoU below 0.3 are marked as negative, and the others are not considered in training. Thus, placing more anchors with different aspect ratios and scales will improve the performance of RPN. However, since the aspect ratio and scale can be any positive number, anchor design becomes a non-trivial problem. Therefore, some methods try to learn high-quality anchors with a neural network module, like AttractioNet~\cite{gidaris2016attend}, GA-RPN, and Cascade RPN.
All these methods can be viewed as gradient-based anchor augmentation methods, and their overall training and inference overheads almost double due to their strategy of dense prediction and the large amount of memory access required for feature alignment.
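For illustration, the standard RPN labelling rule quoted above (IoU $\geq 0.7$ positive, $< 0.3$ negative, otherwise ignored) can be sketched in a few lines (boxes in $(x_1, y_1, x_2, y_2)$ format; function names hypothetical):

```python
def iou(box, gt):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box[0], gt[0]), max(box[1], gt[1])
    x2, y2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box) + area(gt) - inter)

def label_anchor(anchor, gt, pos_thr=0.7, neg_thr=0.3):
    """1 = positive, 0 = negative, -1 = ignored during training."""
    v = iou(anchor, gt)
    return 1 if v >= pos_thr else (0 if v < neg_thr else -1)

gt = (0, 0, 10, 10)
print(label_anchor((0, 0, 10, 10), gt))    # 1  (IoU = 1.0)
print(label_anchor((20, 20, 30, 30), gt))  # 0  (IoU = 0.0)
print(label_anchor((0, 0, 10, 20), gt))    # -1 (IoU = 0.5, ignored)
```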
In this paper, we propose a gradient-free anchor augmentation method named AADI, which stands for {\bf A}ugmenting {\bf A}nchors by the {\bf D}etector {\bf I}tself. AADI can be applied to both two-stage and single-stage methods, and Figure~\ref{aadi_for_rpn} shows an example of AADI for RPN~\cite{ren2016faster}. AADI comprises two processes, an augmentation process and a refinement process. During training, AADI first uses the RPN to improve the quality of the hand-designed anchors in the augmentation process, and then uses the augmented anchors and the ground truth to train the parameters of the RPN in the refinement process. Intuitively, AADI can be viewed as a variational EM algorithm, which is typically used to solve problems with latent variables; the augmentation and refinement processes correspond to its E-step and M-step, respectively. Here, we iterate it only once to improve the computational efficiency. Moreover, in the augmentation process, AADI uses dilated convolution~\cite{yu2015multi} to augment each hand-designed anchor, and in the refinement process, it employs RoI Align~\cite{he2017mask} to extract features for the augmented anchors and uses an equivalent fully connected layer to refine them. Note that the dilated convolution and the fully connected operation share parameters.
To allow the RPN parameters to be reused in these two processes, we need to carefully design the dilation and kernel size of the convolution layer to ensure that the convolution layer and the fully connected layer receive the same input features for a specific anchor. As Figure~\ref{anchor_dilation} shows, supposing that the dilation of a convolution layer is $d$, the operation of a 3$\times$3 convolution layer is the same as that of an RoI Align for a (3$\times d$)$\times$(3$\times d$) anchor. Furthermore, the operation of a 3$\times$3 convolution layer for each anchor is equal to a fully connected layer whose output dimension is 9. Thus, unlike the ineffective masked deformable convolution layer in GA-RPN or the adaptive convolution layer in Cascade RPN, which are used to tackle the feature misalignment between hand-designed and learned anchors, AADI avoids this problem by determining the scale and aspect ratio of anchors according to the dilation and kernel size of the convolution operation.
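The parameter-sharing argument can be checked numerically: the response of a $3\times3$ convolution with dilation $d$ at one location equals the same weights applied, fully-connected style, to the $3\times3$ grid of features sampled with stride $d$. The following single-channel NumPy sketch illustrates the idea only (the actual model uses RoI Align on the $(3d)\times(3d)$ box):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((32, 32))  # single-channel feature map
W = rng.standard_normal((3, 3))    # shared 3x3 kernel / FC weight
d = 2                              # dilation

def dilated_conv_at(F, W, y, x, d):
    """Dilated 3x3 convolution response centred at (y, x)."""
    out = 0.0
    for i in range(3):
        for j in range(3):
            out += W[i, j] * F[y + (i - 1) * d, x + (j - 1) * d]
    return out

def roi_fc_at(F, W, y, x, d):
    """'RoI + FC' view: sample the 3x3 grid covering the (3d)x(3d)
    anchor, then apply the same weights as a fully connected layer."""
    patch = F[y - d : y + d + 1 : d, x - d : x + d + 1 : d]  # 3x3 samples
    return float(patch.ravel() @ W.ravel())

y, x = 10, 10
assert abs(dilated_conv_at(F, W, y, x, d) - roi_fc_at(F, W, y, x, d)) < 1e-12
```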
Consequently, the hyper-parameters of AADI are the kernel size and dilation of the convolution layer; that is, AADI converts the scale and aspect ratio of anchors from a continuous space to a discrete space, which greatly alleviates the anchor design problem.
During training, the augmentation process does not need to calculate gradients, which makes the training of AADI fast. During inference, AADI only takes the top 2000 augmented anchors for further refinement, which makes inference fast. Extensive experiments on the MS COCO dataset show that AADI can efficiently improve the detection performance of anchor-based object detectors; specifically, AADI achieves at least 2.4 box AP improvement on Faster R-CNN, 2.2 box AP on Mask R-CNN, 1.8 box AP on RetinaNet, and 0.9 box AP on Cascade Mask R-CNN, while running only slightly slower than the standard RPN. Our main contributions are as follows:
\begin{itemize}[itemsep=0pt,topsep=0pt,parsep=0pt,leftmargin=9.3pt]
\item We propose a novel and effective method, AADI, which augments anchors by the detector itself and can significantly improve anchor quality. It can be applied to all anchor-based object detectors without adding any parameters or hyper-parameters.
\item AADI determines the scale of anchors according to the kernel size and dilation of a convolution layer, and the aspect ratio according to the shape of the kernel, which greatly eases the difficulty of anchor design.
\item We carry out experiments on the MS COCO dataset to verify the effectiveness of AADI. With only a little extra computation cost, AADI achieves significant performance improvements on many representative object detectors with different backbones.
\end{itemize}
\section{Related Works}
Current object detection methods can be roughly categorized into two classes: anchor-based methods and anchor-free methods. Anchor-based methods first place a lot of hand-designed anchors on the feature maps, and then perform classification and box regression on these anchors. Therefore, better anchors can greatly alleviate the difficulty of these two tasks. AttractioNet~\cite{gidaris2016attend} uses an ARN to iteratively refine the hand-designed anchors several times. However, the ARN needs to employ RoI Pooling to extract features for each anchor, and it is not a lightweight network, which makes its training and inference slow. GA-RPN~\cite{wang2019region} first predicts a suitable anchor for each position, and then uses the predicted anchors to train the RPN. Cascade RPN~\cite{cai2018cascade} stacks two RPNs: the first refines the hand-designed anchors without classification, and the second plays the same role as a standard RPN. GA-RPN and Cascade RPN effectively improve the quality of proposals by improving the quality of anchors. However, they both need to use DCN\cite{dai2017deformable} or its variants to align the features and anchors, and this operation involves a lot of memory access, resulting in slower training and inference. In this paper, we first filter a small set of augmented anchors, and then use RoI Align to extract features for them, which significantly reduces the memory access.
Compared with anchor-free methods, anchor-based methods are more intuitive, and they do not need to determine which ground truth should be placed on which level of the pyramid features. However, FCOS~\cite{tian2019fcos} finds that determining the scale and aspect ratio of anchors is not a trivial issue, and these two parameters have a great influence on performance. Therefore, FCOS proposes an anchor-free paradigm to generate the final results. For each position located in the center region of a ground truth box, it directly predicts the distances from the position to the four sides of the box. RepPoints~\cite{yang2019reppoints} is also an anchor-free method. It no longer directly predicts bounding boxes, but predicts a set of representative points for each ground truth, and then groups them into bounding boxes. DETR and Deformable DETR~\cite{zhu2020deformable} use Transformers to directly predict the detection results, avoiding the anchor design problem.
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}[t]
\caption{Pseudocode of AADI}
\label{alg_aadi_formula}
\begin{algorithmic}[1]
\REQUIRE ~~\\
$\mathcal{A}$ is a set of hand-designed anchors;\\
$\mathcal{A}_a$ is a set of augmented anchors;\\
$\mathcal{G}$ is a set of ground truth boxes on the image;\\
$\mathcal{F}$ is the feature maps obtained by the backbone;\\
$f$ is the RPN;\\
$\theta$ is the set of parameters of the RPN;\\
$\epsilon$ is learning rate;\\
\ENSURE ~~ \\
$\mathcal{P}$ is a set of proposals;\\
\WHILE{not converge}
\STATE $\mathcal{A}_a \gets f(\mathcal{A}, \mathcal{F}, \theta)$
\STATE $\mathcal{P} \gets f(\mathcal{A}_a, \mathcal{F}, \theta)$
\STATE $\mathcal{L} \gets loss(\mathcal{P}, \mathcal{G})$
\STATE $\bigtriangleup_{\theta} \gets \frac{\partial{\mathcal{L}}}{\partial{f(\mathcal{A}_a, \mathcal{F}, \theta)}} \times \frac{\partial{f(\mathcal{A}_a, \mathcal{F}, \theta)}}{\partial{\theta}}$
\STATE $\theta \gets \theta - \epsilon \times \bigtriangleup_{\theta}$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
Although the hyper-parameter adjustment of anchor-free methods is simpler, ATSS and PAA demonstrate that anchor-based methods can significantly outperform anchor-free methods with suitable anchor assignment strategies. However, the anchors of both ATSS and PAA are predefined; although both methods are very robust to the scale and aspect ratio of anchors, it is hard to further improve their performance by placing more anchors with different scales and aspect ratios. AADI, in contrast, can adaptively augment the hand-designed anchors to improve their quality.
\begin{figure*}[t]
\centering
\subfigure[RPN.]{
\begin{minipage}[t]{0.40\linewidth}
\centering
\label{rpn:a}
\includegraphics[width=0.9\linewidth]{images/rpn.eps}
\end{minipage}
}
\subfigure[AADI-RPN.]{
\begin{minipage}[t]{0.40\linewidth}
\centering
\label{aadi-rpn:b}
\includegraphics[width=0.9\linewidth]{images/aadi_rpn.eps}
\end{minipage}
}
\subfigure[Box subnet of RetinaNet.]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\label{retinanet:c}
\includegraphics[width=0.9\linewidth]{images/retina.eps}
\end{minipage}
}
\subfigure[Box subnet of AADI-RetinaNet.]{
\begin{minipage}[t]{0.50\linewidth}
\centering
\label{aadi-retinanet:d}
\includegraphics[width=0.9\linewidth]{images/model__aadi__for__retinanet.eps}
\end{minipage}
}
\caption{The pipeline of AADI for RPN and RetinaNet.}
\label{AADI}
\end{figure*}
\section{Method}
\subsection{AADI}
\label{overview}
High-quality anchors are critical for all anchor-based object detectors, so how to adaptively generate better anchors is an important question. Unlike previous gradient-based methods, AADI uses the detector itself to augment the hand-designed anchors without computing gradients, and it can be used in both two-stage and single-stage object detectors. The overall pipeline of AADI for a common two-stage object detector is shown in Figure~\ref{aadi-rpn:b}, and its pseudocode is shown in Algorithm~\ref{alg_aadi_formula}. Compared with AttratioNet, AADI does not need to calculate gradients during anchor augmentation. The IBBR ({\bf I}terative {\bf B}ounding {\bf B}ox {\bf R}egression) of IoU-Net~\cite{jiang2018acquisition} has a similar inference pipeline, but the key difference is that AADI works during both training and inference, while IBBR works only at inference. Taking AADI for two-stage object detectors as an example, AADI comprises two processes, an augmentation process and a refinement process, which are detailed as follows.
\noindent\textbf{Augmentation process.}
As shown in Figure~\ref{aadi-rpn:b}, AADI first takes the anchors and the feature maps from the backbone as input, and then uses RPN$_c$ to augment the anchors. RPN$_c$ is the same as the standard RPN, and it does not need to calculate gradients during training. Since most augmented anchors are either redundant or negative samples, we remove them by Non-Maximum Suppression (NMS) and keep only the 2000 anchors with the highest scores to improve the model's efficiency. As shown in Figure~\ref{aadi_for_rpn}, the augmented anchors have a higher Intersection over Union (IoU) with the ground truth than the original anchors.
Note that there can be multiple RPN$_c$ with different dilations, and these RPN$_c$s can share parameters.
\noindent\textbf{Refinement process.}
The refinement process takes the augmented anchors from the augmentation process and the feature maps from the backbone as input, and then uses RPN$_f$ to predict proposals. RPN$_f$ and RPN$_c$ share the same parameters, but they are implemented in different ways. RPN$_c$ is the same as the standard RPN and uses convolution to produce the augmented anchors, while RPN$_f$ is implemented in a fully connected way. In addition, RPN$_f$ employs RoI Align to extract features for each augmented anchor. Note that different RPN$_c$s correspond to different RPN$_f$s if their parameters are not shared. Moreover, RPN$_f$ needs to calculate gradients during training, so RPN$_c$ is also trained indirectly.
The key to making this paradigm work is that the features obtained by the two RPNs must be the same. For RPN$_c$, the features for each anchor are determined by the kernel size and dilation of a convolution layer, while for RPN$_f$, the features are determined by RoI Align. So two conditions must be met. First, the output size of RoI Align must equal the kernel size of the convolution layer, which is easy to satisfy. Second, the dilation and the scale of each anchor must be carefully designed to align the RoI, which is hard to satisfy.
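As a sanity check on this alignment condition, the following sketch (ours, not code from the paper) verifies that a $k \times k$ convolution with dilation $d$ samples the same relative grid as a $k \times k$ RoI Align with one sampling point per bin over an anchor of side $kd$ centered at the same position:

```python
# Illustrative check: dilated-conv taps vs. RoI Align bin centers.
# Function names are ours; the paper's implementation is not shown here.

def conv_offsets(k=3, d=2):
    # Sampling offsets of a k x k dilated convolution, relative to its center.
    half = (k - 1) // 2
    return {(d * i, d * j) for i in range(-half, half + 1)
                           for j in range(-half, half + 1)}

def roi_align_offsets(k=3, side=6.0):
    # Bin centers of a k x k RoI Align (one sample per bin) over a square
    # box of the given side length, centered at the origin.
    bin_size = side / k
    coords = [-side / 2 + (i + 0.5) * bin_size for i in range(k)]
    return {(round(x, 6), round(y, 6)) for x in coords for y in coords}

# A 3x3 conv with dilation 2 matches a 3x3 RoI Align over an anchor of side 6.
assert conv_offsets(3, 2) == roi_align_offsets(3, side=6.0)
```

This is exactly why the anchor side must be tied to $kd$: any other box size would place the RoI Align bin centers off the convolution's sampling grid.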
To address this issue, we propose to set the scale and aspect ratio of the hand-designed anchors by Eq.~\ref{eq:scale_determiniation} and Eq.~\ref{eq:aspect_ratio_determiniation}, respectively, where $m$ and $n$ are the height and width of the kernel of a convolution layer, and $d$ is the dilation. In Figure~\ref{anchor_dilation}, we visualize the anchors and their features sampled by a 3$\times$3 convolution layer under different dilations, and it is easy to see that the features sampled by a 3$\times$3 convolution layer are the same as the features extracted by a 3$\times$3 RoI Align. In this way, the aspect ratio of the anchors is also determined by the shape of the kernel. For example, for a 3$\times$3 convolution layer with dilation 2, the scale of the anchors is $2 \times \sqrt{3 \times 3} = 6$ and the aspect ratio is $\frac{3}{3}=1$. Since $m$, $n$, and $d$ must all be integers, the design space of anchors is converted from a continuous space to a discrete space, which greatly alleviates the problem of anchor design.
\begin{equation}
scale = d \times \sqrt{m \times n}
\label{eq:scale_determiniation}
\end{equation}
\begin{equation}
aspect\_ratio = \frac{n}{m}
\label{eq:aspect_ratio_determiniation}
\end{equation}
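The two equations above map directly to code; the following helper (ours, for illustration) computes the anchor scale and aspect ratio implied by a kernel shape and dilation:

```python
import math

def anchor_from_kernel(m, n, d):
    """Anchor scale and aspect ratio implied by an m x n convolution kernel
    with dilation d, per the two equations in the text:
        scale = d * sqrt(m * n),  aspect_ratio = n / m."""
    scale = d * math.sqrt(m * n)
    aspect_ratio = n / m
    return scale, aspect_ratio

# Example from the text: 3x3 kernel, dilation 2 -> scale 6, aspect ratio 1.
assert anchor_from_kernel(3, 3, 2) == (6.0, 1.0)
```

Because $m$, $n$, and $d$ are integers, enumerating a few kernel shapes and dilations enumerates the entire anchor design space.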
\subsection{Training}
If AADI is trained only on the augmented anchors, its robustness to hand-designed anchors is relatively poor, which reduces the quality of anchor augmentation. Therefore, during training, we first select the anchor with the largest IoU for each ground truth and then append these anchors to the augmented anchors, which guides AADI in augmenting the hand-designed anchors. We call this strategy {\bf Anchor Guided}.
The number of positive augmented anchors is very large; if all of them are used directly, the memory will be exhausted. In addition, we found that some augmented anchors overlap heavily. Therefore, we employ NMS to filter out the redundant anchors according to their IoUs with the ground truth, and the training strategy of the refinement process is the same as that of the standard RPN. Similar to RPN, AADI can be trained in an end-to-end manner using the following multi-task loss:
\begin{equation}
\mathcal{L}_{rpn} = \lambda \mathcal{L}_{reg} + \mathcal{L}_{cls}
\label{eq:multi_task}
\end{equation}
where $\mathcal{L}_{reg}$ and $\mathcal{L}_{cls}$ are the box regression loss and the classification loss, respectively, and the two terms are balanced by $\lambda$. In our implementation, binary cross-entropy loss and Smooth L1 loss~\cite{girshick2015fast} are used as the classification and box regression losses, respectively, and $\lambda$ is empirically set to 5.
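The structure of Eq.~\ref{eq:multi_task} can be sketched in a few lines. This scalar version (function names are ours) only mirrors the form of the loss, not the paper's batched implementation:

```python
import math

def smooth_l1(pred, target, beta=1.0):
    # Smooth L1 as in Fast R-CNN: quadratic below beta, linear above.
    diff = abs(pred - target)
    return 0.5 * diff ** 2 / beta if diff < beta else diff - 0.5 * beta

def bce(prob, label, eps=1e-7):
    # Binary cross entropy on a single predicted probability.
    prob = min(max(prob, eps), 1.0 - eps)
    return -(label * math.log(prob) + (1 - label) * math.log(1 - prob))

def rpn_loss(reg_preds, reg_targets, cls_probs, cls_labels, lam=5.0):
    # L = lambda * L_reg + L_cls, with lambda = 5 as in the text.
    l_reg = sum(smooth_l1(p, t) for p, t in zip(reg_preds, reg_targets)) / len(reg_preds)
    l_cls = sum(bce(p, y) for p, y in zip(cls_probs, cls_labels)) / len(cls_probs)
    return lam * l_reg + l_cls
```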
\subsection{Inference}
During inference, the augmentation process is the same as obtaining proposals with a standard RPN, and the refinement process is just a two-layer fully connected network that refines the augmented anchors. Here, AADI only takes the top 2000 augmented anchors for refinement, while Cascade RPN needs to augment all regressed anchors, which leads to higher computational overhead.
\subsection{AADI-RetinaNet}
AADI can be naturally deployed in anchor-based single-stage object detectors. Due to limited time and resources, we only apply AADI to RetinaNet (AADI-RetinaNet). Since RetinaNet uses 4-layer convolution subnets to extract features for the classification and regression tasks respectively, and the kernel of every convolution layer is 3$\times$3, we can use the last layer of the box subnet to extract features for the augmented anchors. The pipeline of AADI-RetinaNet is shown in Figure~\ref{aadi-retinanet:d}, and the training process of AADI-RetinaNet is similar to that of AADI-RPN. However, during inference, since NMS is relatively time-consuming, we select the top 2000 augmented anchors without NMS for further refinement. Here, the IoU threshold of the post-processing NMS is set to 0.6. Furthermore, Group Normalization~\cite{wu2018group} is used to make the training process smoother.
\section{Experiments}
\subsection{Dataset and Evaluate Metrics}
Our experiments are conducted on the challenging Microsoft COCO 2017~\cite{lin2014microsoft} dataset. It consists of 118k training images ({\it train-2017}) and 5k validation images ({\it val-2017}).
There are also 20k images without annotations for testing in {\it test-dev}.
We train all our models on {\it train-2017}, and report ablation studies on {\it val-2017} and final results on {\it test-dev}. In this paper, Average Recall (AR) is used to measure the quality of proposals, which is the average of recalls over different IoU thresholds; AR$_x$ denotes the recall at an IoU threshold of $x$\%, and AR$_s$, AR$_m$, and AR$_l$ denote the AR for small, medium, and large objects, respectively. The detection results are evaluated with the standard COCO-style Average Precision (AP) metrics, which include AP (the mean of AP over all IoU thresholds), AP$_{50}$, AP$_{75}$, AP$_s$, AP$_m$, and AP$_l$, where AP$_{50}$ and AP$_{75}$ are the APs at IoU thresholds of 50\% and 75\%, respectively, and AP$_s$, AP$_m$, and AP$_l$ are the APs for small, medium, and large objects, respectively. Moreover, AP$^\text{box}$ and AP$^\text{mask}$ denote the AP for object detection and instance segmentation, respectively.
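Both AR and AP are built on box IoU. A minimal reference implementation (ours, for illustration, with boxes given as corner coordinates) is:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7   # 1 overlap / 7 union
```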
\subsection{Implementation Details}
All experiments are based on detectron2~\cite{wu2019detectron2}, and our code runs on a machine with 8 Tesla V100-SXM2-16GB GPUs and Intel Xeon Platinum 8163 CPUs. Our software environment mainly includes Ubuntu 18.04 LTS, CUDA 10.1, and PyTorch~\cite{paszke2017automatic} 1.6.0.
Unless otherwise noted, the hyper-parameters of our models follow detectron2, and all models are based on FPN~\cite{lin2017feature}. For AADI-RPN, the dilations of the two RPN$_c$s are set to 2 and 4, and they do not share parameters. For AADI-RetinaNet, the dilation is set to 2. Moreover, the Anchor Guided strategy is used in all models.
As a common practice~\cite{girshick2014rich}, all network backbones are pretrained on ImageNet1k classification set~\cite{5206848}, and their pre-trained models are publicly available. With 1x~\cite{wu2019detectron2} training schedule and ResNet-50 (R-50) as backbone, it takes about 4.2 hours for our model to converge, and about 2.9 hours with Mixed Precision training~\cite{micikevicius2017mixed}.
\subsection{Results}
\begin{table}
\begin{center}
\resizebox{0.45\textwidth}{18mm}{
\begin{tabular}{lcccccccc}
\toprule
& & & & & & & \multicolumn{2}{c}{Speed}\\
\cmidrule(r){8-9}
Method & Backbone & AR$_{100}$& AR$_s$ & AR$_m$ & AR$_l$ & AP$^\text{box}$ & Train(ms) & Test(FPS)\\
\midrule
AttratioNet & VGG-16 & 53.3 & 31.5 & 62.2 & 77.7 & 34.3 & - & -\\
ZIP & Inception & 53.9 & 31.9 & 63.0 & 78.5 & - & - & -\\
C-RPN & R-50 & 54.0 & 35.6 & 62.7 & 78.5 & - & - & -\\
\midrule
RPN & R-50 & 45.7 & 31.4 & 52.7 & 61.1 & 37.9 & 186.0 & 31.3\\
PAA & R-50 & 49.7 & 31.4 & 59.4 & 68.3 & 39.3 & 342.6 & 31.4\\
GA-RPN & R-50 & 59.1 & 40.7 & 68.2 & 78.4 & 39.6 & 448.1 & 14.7\\
Cascade RPN & R-50 & 61.1 & 42.1 & 69.3 & 82.8 & 40.4 & 436.4 & 15.1\\
\midrule
AADI-RPN & R-50 & 55.3 & 35.4 & 62.7 & 79.7 & 41.3 & 170.2 & 25.6\\
\bottomrule
\end{tabular}}
\end{center}
\caption{Region proposal results on COCO {\it val-2017}. All results are trained with 1x schedule.}
\label{proposal_recall}
\end{table}
{\bf Region Proposal Performance.} We first apply AADI on RPN (AADI-RPN) of Faster R-CNN to verify its performance on region proposal, and the model is trained on COCO {\it train-2017} with 1x training schedule.
Since it is difficult to reproduce the performance of AttratioNet, ZIP~\cite{li2019zoom}, and C-RPN~\cite{fan2019siamese}, we only compare recall rates with their published results. The other models are tested on the same machine; the training time is the average time cost per batch, and the test speed is the average inference speed on COCO {\it val-2017}. All region proposal results are shown in Table~\ref{proposal_recall}. The results show that AADI-RPN outperforms the standard RPN in terms of AR by a large margin under all settings, and it achieves the highest AP$^{\text{box}}$ among the compared models. Furthermore, although AADI-RPN has a lower recall rate than the current state-of-the-art Cascade RPN, its inference speed is about 70\% faster. In terms of training time, AADI-RPN achieves the fastest training speed, even faster than RPN. This is mainly because only a small set of the augmented anchors participates in the training process, and the gradient computation is changed from a convolution to a fully connected operation, which significantly reduces the training overhead.
\begin{table*}
\begin{center}
\resizebox{0.78\textwidth}{25mm}{
\begin{tabular}{ccc|cccccc|c}
\hline
Method & Backbone & AADI & AP$^{\text{box}}$ & AP$_{50}^{\text{box}}$ & AP$_{75}^{\text{box}}$ & AP$^{\text{mask}}$ & AP$_{50}^{\text{mask}}$ & AP$_{75}^{\text{mask}}$ & Inference Speed(FPS)\\
\hline
\multirow{4}{*}{RetinaNet} & R-50 & & 39.6 & 59.3 & 42.2 & - & - & - & 22.2\\
& R-50 & \checkmark & \textbf{41.4} & \textbf{59.3} & \textbf{45.2} & - & - & - & 23.0\\\cline{4-10}
& Swin-T & & 44.9 & \textbf{65.8} & 48.0 & - & - & - & 18.6\\
& Swin-T & \checkmark & \textbf{47.0} & 65.6 & \textbf{51.4} & - & - & - & 18.1\\
\hline
\multirow{4}{*}{Faster R-CNN} & R-50 & & 40.4 & 61.4 & 43.9 & - & - & - & 26.0\\
& R-50 & \checkmark & \textbf{42.8} & \textbf{61.4} & \textbf{46.9} & - & - & - & 18.7\\\cline{4-10}
& Swin-T & & 45.0 & 67.2 & 49.1 & - & - & - & 19.6\\
& Swin-T & \checkmark & \textbf{47.7} & \textbf{67.5} & \textbf{52.4} & - & - & - & 15.1\\
\hline
\multirow{4}{*}{Mask R-CNN} & R-50 & & 41.0 & 61.5 & 44.9 & 37.2 & 58.6 & 39.9 & 22.8\\
& R-50 & \checkmark & \textbf{43.2} & \textbf{61.7} & \textbf{47.7} & \textbf{38.1} & \textbf{58.9} & \textbf{41.2} & 15.8\\\cline{4-10}
& Swin-T & & 46.0 & \textbf{68.1} & 50.3 & 41.6 & \textbf{65.1} & 44.9 & 17.9\\
& Swin-T & \checkmark & \textbf{48.4} & 67.6 & \textbf{53.2} & \textbf{42.8} & 65.0 & \textbf{46.6} & 13.4\\
\hline
\multirow{4}{*}{\makecell{Cascade \\Mask R-CNN}}
& R-50 & & 44.4 & 61.7 & 48.3 & 38.2 & 59.0 & 41.1 & 12.5\\
& R-50 & \checkmark & \textbf{45.4} &\textbf{62.4} & \textbf{49.2} & \textbf{38.9} & \textbf{59.4} & \textbf{42.0} & 9.9 \\\cline{4-10}
& Swin-T & & 50.3 & 69.0 & 54.9 & 44.0 & 66.5 & 47.8 & 10.7\\
& Swin-T & \checkmark & \textbf{51.2} & \textbf{69.2} & \textbf{55.8} & \textbf{44.1} & \textbf{66.6} & \textbf{48.1} & 8.9\\
\hline
\end{tabular}}
\end{center}
\caption{Object detection results on COCO {\it val-2017}. All results are trained with 3x schedule.}
\label{object_detection}
\end{table*}
{\bf Object Detection Performance.} We conducted extensive experiments on four typical object detection frameworks, RetinaNet, Faster R-CNN, Mask R-CNN, and Cascade Mask R-CNN, as well as two different backbone networks, R-50 and tiny Swin Transformer (Swin-T)~\cite{liu2021Swin}, to verify the anchor augmentation capability of AADI. All detection results with the 3x training schedule are shown in Table~\ref{object_detection}. We can see that AADI achieves significant performance boosts in all detectors with different backbones, which demonstrates its strong robustness. In particular, when using Swin-T as the backbone, we observe +2.1 AP$^\text{box}$, +2.7 AP$^\text{box}$, +2.4 AP$^\text{box}$, and +0.9 AP$^\text{box}$ gains over RetinaNet, Faster R-CNN, Mask R-CNN, and Cascade Mask R-CNN, respectively. Furthermore, their AP$^\text{mask}$ values are also improved. It is worth noting that Swin Transformer based object detectors are currently state-of-the-art, and AADI still achieves significant performance boosts with only a little extra computation cost, which reflects its efficiency. Meanwhile, we find that the performance gains on AP$_{50}^{\text{box}}$ are small, while large gains are obtained on AP$_{75}^{\text{box}}$; a similar finding holds for GA-RPN and Cascade RPN. Therefore, the main advantage of AADI is improving the quality of anchors, and thereby the detection performance.
\section{Ablation Study}
We conduct our ablation study on COCO {\it train-2017}, and report results on the COCO {\it val-2017} dataset. All models use an R-50 backbone unless otherwise noted.
It can be seen from Eq.~\ref{eq:scale_determiniation} and Eq.~\ref{eq:aspect_ratio_determiniation} that the scale and aspect ratio of the anchors are determined by the kernel and dilation of a convolution layer. For RPN, since the kernel is usually fixed to 3$\times$3, we only change the scale of the anchors by adjusting the dilation. Here, we test the region proposal performance of RPN with dilations of 2, 3, and 4, respectively. Furthermore, the effect of the proposed {\bf Anchor Guided} training strategy in the augmentation process is also tested. All results are shown in Table~\ref{ablation_proposal_recall}.
From Table~\ref{ablation_proposal_recall}, we can conclude that the larger the dilation, the worse the performance on small objects and the better the performance on large objects, and vice versa. This is because the scale of the anchors grows with the dilation, and large anchors are beneficial for large objects but harmful for small objects during training. Furthermore, the Anchor Guided strategy consistently improves the performance of AADI-RPN for all dilations, especially for large objects. Compared with the standard RPN, AADI-RPN consistently performs better by a large margin under different settings, which indicates that AADI is an effective method.
\begin{table}
\begin{center}
\resizebox{0.45\textwidth}{14mm}{
\begin{tabular}{ccccccc}
\toprule
Dilation & Anchor Guided & AR$_{100}$ & AR$_{1000}$ & AR$_s$ & AR$_m$ & AR$_l$\\
\midrule
2 & & 54.8 & 64.7 & 39.0 & 63.1 & 70.6\\
2 & \checkmark & 56.3 & 66.7 & 39.5 & 64.9 & 73.4\\
3 & & 53.7 & 64.0 & 35.4 & 62.1 & 73.9\\
3 & \checkmark & 55.6 & 67.6 & 36.1 & 64.3 & 77.6\\
4 & & 52.2 & 60.5 & 30.9 & 61.3 & 76.6\\
4 & \checkmark & 54.4 & 65.5 & 33.0 & 63.7 & 78.9\\
\midrule
\multicolumn{2}{c}{RPN} & 45.7 & 58.0 & 31.4 & 52.7 & 61.1\\
\bottomrule
\end{tabular}}
\end{center}
\caption{Region proposal results of AADI-RPN with different settings. All results are trained with 1x schedule.}
\label{ablation_proposal_recall}
\end{table}
We find that there is not much difference in the performance of AADI-RPN with different dilations. Therefore, we test detection performance to determine which dilation or its combination is the optimal choice. We further test whether the parameters are shared or not when using different dilation combinations. All detection results are shown in Table~\ref{ablation_object_detection}, and the strategy of Anchor Guided is used for all tests.
From Table~\ref{ablation_object_detection}, we empirically find that setting the dilation to 3 (an anchor scale of 9) achieves the best detection performance when only one dilation is used, which may correspond to the dominant scale of objects in the dataset. We also observe a small performance gap among different dilations, which indicates the robustness of AADI. Moreover, since different dilations correspond to different anchor scales, we expect that using two or more RPNs with different dilations will bring further performance gains. Therefore, we deploy two or more RPNs with different dilations in AADI and test the performance of these dilation combinations. The results show that when two RPNs share the same parameters, there is almost no performance gain compared to using only one RPN. When the parameters are not shared, the performance improves substantially, and we obtain the highest AP of 41.3 when the dilations of the two RPNs are set to 2 and 4, respectively. On top of this, we add another RPN with dilation 3 to test whether the performance can be further improved. However, a lower AP of 41.0 is obtained, probably because the newly added RPN parameters raise the difficulty of optimization. Moreover, compared with Faster R-CNN, our best practice of AADI achieves a 3.4 AP gain.
\begin{table}
\begin{center}
\resizebox{0.45\textwidth}{18mm}{
\begin{tabular}{cccccccc}
\toprule
Dilation & RPN Parameter & AP & AP$_{50}$ & AP$_{75}$ & AP$_s$ & AP$_m$ & AP$_l$\\% & AR$_{100}$ & AR$_s$ & AR$_m$ & AR$_l$\\
\midrule
2 & - & 40.3 & 59.3 & 44.3 & 24.2 & 43.3 & 52.2\\% & 57.7 & 40.5 & 60.9 & 72.0\\
3 & - & 40.8 & 59.5 & 45.0 & 24.0 & 44.6 & 53.1\\% & 58.4 & 39.7 & 62.7 & 72.7\\
4 & - & 40.5 & 58.7 & 44.6 & 23.2 & 44.8 & 52.7\\% & 57.9 & 37.9 & 63.0 & 74.1\\
\midrule
2,3 & share & 40.8 & 59.6 & 45.2 & 24.4 & 44.0 & 53.0 \\
3,4 & share & 40.7 & 59.2 & 44.5 & 23.3 & 44.6 & 54.0 \\
2,4 & share & 40.6 & 59.0 & 44.5 & 23.7 & 44.0 & 52.4 \\
\midrule
2,3 & not share & 40.7 & 59.5 & 44.7 & \textbf{24.5} & 44.1 & 52.8\\
3,4 & not share & 41.2 & 59.7 & \textbf{46.0} & 23.7 & 45.0 & 54.0\\
2,4 & not share & \textbf{41.3} & \textbf{59.9} & 45.5 & 23.9 & \textbf{45.4} & \textbf{54.0}\\
2,3,4 & not share & 41.0 & 59.6 & 45.1 & 24.4 & 44.7 & 53.6\\
\midrule
Faster R-CNN & - & 37.9 & 58.8 & 41.1 & 22.4 & 41.1 & 49.1\\% & 52.4 & 34.2 & 55.7 & 65.1\\
\bottomrule
\end{tabular}}
\end{center}
\caption{Detection performance of AADI for Faster R-CNN with different dilations. All results are trained with 1x schedule.}
\label{ablation_object_detection}
\end{table}
\section{Conclusion}
In this paper, we propose a gradient-free anchor augmentation method named AADI. It does not add any parameters or hyper-parameters, and it does not significantly affect training and inference speed.
We conduct experiments on RetinaNet and different R-CNNs to verify its performance.
Specifically, AADI brings at least 2.2 box AP improvements on Faster/Mask R-CNN, 1.8 box AP improvements on RetinaNet, and 0.9 box AP gains on Cascade Mask R-CNN. In our future work, we will try to combine AADI and other anchor-based methods together to further improve detection performance.
\section{Introduction}
Fitting correlators is one of the essential procedures in studying
non-perturbative physics in Lattice QCD. The hadron spectrum and decay
constants can be calculated by fitting two-point correlation
functions, and hadronic form factor and matrix elements can be
calculated by fitting three-point functions. Here we present our
method for fitting two-point meson correlators with HISQ.
The meson correlators are computed using the HISQ action\cite{Follana:2006rc},
which is an ${\cal O}(a^2)$-improved staggered quark action, with smaller
taste-symmetry breakings than in the asqtad action we
used previously.
For the pseudoscalar meson correlators, we use both random-wall
sources and Coulomb gauge fixed wall sources,
with the point operator as the sink. We do
combined fits to the correlators of both sources. This helps us to
isolate the ground states that we are interested in. More details about
the correlators are given in Sec.~\ref{sec:cor}.
The HISQ lattices include the vacuum polarization of four dynamical
quarks: up, down, strange, and charm quarks
\cite{Bazavov:2010ru,Bazavov:2010pi}. There are, in total, nineteen HISQ
ensembles with different lattice spacings, quark
masses, and spatial volumes. The parameters of the HISQ lattices are
tabulated in Table \ref{tab:lat}. Note that we have physical sea-quark
mass ensembles, where all sea-quark masses are tuned to take the
physical values. These ensembles eliminate the need for a chiral
extrapolation, and they can be used, together with higher unphysical-mass ensembles,
to correct for slight mistunings of the sea quark masses.
Since elements of the correlators at different time slices are highly
correlated, we use the full covariance matrix, which includes both
diagonal and off-diagonal elements, to define the objective function
$\chi^2$. We use Gaussian constraints, or priors in the language of Bayesian fits,
on some parameters of heavier states. This is
useful especially when multiple states are needed to get reliable fits.
We divide the correlators into two groups: light-light and heavy-light.
In this terminology, light refers to the up, down, and
strange quarks, while heavy refers to the charm quark. We find that we can
get good fits with a single state for light-light correlators
as long as large enough distances are chosen for the fitting range. However, we
find multiple state fits are efficient for heavy-light correlators to get
good fits with small statistical errors. Our fitting methods are
discussed in more detail in Sec.~\ref{sec:lsf}.
We block the correlators in the Monte Carlo trajectory to account
for the autocorrelations. To find an optimal size of the blocks,
we look at how the covariance matrix scales as the size of the blocks
is increased. In the course of this analysis, we find that the
autocorrelations depend on (Euclidean) time, and that
they can be understood as a consequence of the way the
source time slices are chosen. A short discussion about this is given in
Sec.~\ref{sec:autocor}.
The systematic error due to the excited-state contamination is tested by
varying the fitting ranges and priors. Changes from these variations are small
compared to our statistical errors. This is briefly discussed
in Sec.~\ref{sec:syserr}.
\section{Meson Correlators}
\label{sec:cor}
We use the point and wall operators to create the pseudoscalar mesons,
\begin{align}
O_P(\vec{x}, t) &=
\bar{\chi}_A (\vec{x}, t)
\epsilon(\vec{x}, t) \chi_B (\vec{x}, t) \,,
\label{eq:pop} \\
O_W(t) &=
\sum_{\vec{x}, \vec{y}}
\bar{\chi}_A (\vec{x}, t)
\epsilon(\vec{x}+\vec{y}, t) \chi_B (\vec{y}, t) \,,
\end{align}
where $\chi$ is the HISQ field, and $A$, $B$ are flavor
indices. $\epsilon(\vec{x}, t) = (-1)^{\sum_i x_i + t}$
corresponds to the projection to the Goldstone pion with staggered quarks
\cite{Golterman:1985dz}. Coulomb gauge fixing must be used for the wall
operator, so we call it the Coulomb wall operator.
Using the point sink, we can define two correlators
depending on the source,
\begin{align}
C_{R}(t) &=
\Big\langle
\frac{1}{V_S}\sum_{\vec{x}} O_P(\vec{x},t)
\frac{1}{3 V_S} \sum_{\vec{y}} O_P^{\dag}(\vec{y}, 0) \Big\rangle \,, \\
C_{W}(t) &=
\Big\langle
\frac{1}{V_S}\sum_{\vec{x}} O_P(\vec{x},t)
O_W^{\dag}(0) \Big\rangle \,.
\end{align}
The summation over the sources for the point operator is
implemented using random noise, so we call $C_R$ the random-wall-source correlator.
These correlators can be expressed in terms of masses and amplitudes
of relevant excitations. We adopt the following parameterization.
\begin{align}
C_\text{R}(t) &=
\sum_{j=0}^{J-1} A_j M_j^3 \Big(e^{-M_j t} + e^{-M_j (N_T - t)}\Big) +
\sum_{k=0}^{K-1} (-1)^t A'_k {M'}_k^3 \Big( e^{-M'_k t} + e^{-M'_k (N_T - t)}\Big) \,, \\
C_\text{W}(t) &=
\sum_{j=0}^{J-1} B_j M_j^3 \Big(e^{-M_j t} + e^{-M_j (N_T - t)}\Big) +
\sum_{k=0}^{K-1} (-1)^t B'_k {M'}_k^3 \Big(e^{-M'_k t} + e^{-M'_k (N_T - t)}\Big) \,,
\end{align}
where $J$ and $K$ are the numbers of ordinary and alternate (opposite-parity) states, respectively. The backward propagation from the image source at $N_T$,
the temporal size of the lattice, is included because of the periodic
boundary condition on the mesons.
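For concreteness, the fit form above can be evaluated directly. The following sketch (ours, illustrative, with all quantities in lattice units) implements the ordinary and oscillating alternate terms and the backward image at $N_T$:

```python
import math

def corr_model(t, NT, masses, amps, alt_masses, alt_amps):
    """Staggered two-point fit form: ordinary states plus (-1)^t alternate
    (opposite-parity) states, each with its backward image at NT."""
    c = sum(A * M ** 3 * (math.exp(-M * t) + math.exp(-M * (NT - t)))
            for A, M in zip(amps, masses))
    c += (-1) ** t * sum(A * M ** 3 * (math.exp(-M * t) + math.exp(-M * (NT - t)))
                         for A, M in zip(alt_amps, alt_masses))
    return c
```

With no alternate states the model is exactly symmetric about $N_T/2$, which is a quick consistency check on any implementation of this form.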
Following \cite{Aubin:2004fs,Bazavov:2009bb}, we fit the two correlators simultaneously with common masses. It helps us to isolate the ground
states of the point source since the Coulomb wall operator is less contaminated from excited states. Once the masses and amplitudes are fitted,
the decay constant is given by
\cite{Kilcup:1986dg}
\begin{align}
af_{AB} = (am_A + am_B) \sqrt{3 V_S A_0/2} \,,
\end{align}
where $V_S$ is the spatial volume. Note that bare values of masses and amplitude can be used without renormalization thanks to the partially conserved axial current (PCAC) relation of HISQ. Hence, the uncertainty from renormalization is absent.
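Given fitted values, the decay constant formula above is a one-liner. This sketch (ours) assumes lattice units throughout:

```python
import math

def lattice_decay_constant(am_A, am_B, A0, VS):
    """a*f_AB = (a*m_A + a*m_B) * sqrt(3 * V_S * A0 / 2), with no
    renormalization factor thanks to the PCAC relation for HISQ."""
    return (am_A + am_B) * math.sqrt(3 * VS * A0 / 2)
```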
\begin{table}
\begin{center}
\begin{tabular}{cccccc}
\toprule
$\beta$ & $\approx a$ (fm) & $am_l$ & $am_s$ & $L^3 \cdot T$ & $N_\text{lat}$ \\
\midrule
\multirow{3}{*}{580} & \multirow{3}{*}{0.15} & 0.013 & 0.065 & $16^3 \cdot 48$ & 1020 \\
& & 0.0064 & 0.064 & $24^3 \cdot 48$ & 1000 \\
& & 0.00235 & 0.064 & $32^3 \cdot 48$ & 1000 \\
\midrule
\multirow{11}{*}{600} & \multirow{11}{*}{0.12} & 0.0102 & 0.0509 & $24^3 \cdot 64$ & 1040 \\
& & 0.00507 & 0.0507 & $24^3 \cdot 64$ & 1020 \\
& & 0.00507 & 0.0507 & $32^3 \cdot 64$ & 1000 \\
& & 0.00507 & 0.0507 & $40^3 \cdot 64$ & 1030 \\
& & 0.00184 & 0.0507 & $48^3 \cdot 64$ & 840\\
& & 0.00507 & 0.0304 & $32^3 \cdot 64$ & 1020 \\
& & 0.00507 & 0.022815 & $32^3 \cdot 64$ & 1020 \\
& & 0.00507 & 0.012675 & $32^3 \cdot 64$ & 1020 \\
& & 0.00507 & 0.00507 & $32^3 \cdot 64$ & 1020 \\
& & 0.00507/0.012675 & 0.022815 & $32^3 \cdot 64$ & 1020 \\
& & 0.0088725 & 0.022815 & $32^3 \cdot 64$ & 1020 \\
\midrule
\multirow{3}{*}{630} & \multirow{3}{*}{0.09} & 0.0074 & 0.037 & $32^3 \cdot 96$ & 1011 \\
& & 0.00363 & 0.0363 & $48^3 \cdot 96$ & 1000 \\
& & 0.0012 & 0.0363 & $64^3 \cdot 96$ & 702 \\
\midrule
\multirow{2}{*}{672} & \multirow{2}{*}{0.06} & 0.0048 & 0.024 & $48^3 \cdot 144$ & 1000 \\
& & 0.0024 & 0.024 & $64^3 \cdot 144$ & 655 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Parameters of HISQ ensembles are tabulated. With a few intentional
exceptions, the light quark masses ($m_l$) are approximately 0.2, 0.1, and
0.037 times the physical strange quark mass. The exceptions appear in the
ensembles with unphysical values of the strange quark mass and non-degenerate
up and down quark masses.
The HISQ ensembles also include three ensembles to study
finite-volume effects, where all parameters are the same except the
lattice volume.
\label{tab:lat}}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ccc}
\toprule
& light-light & heavy-light \\
\midrule
No. of states & $J+K = 1+0$ & $J+K = 2+1$ \\
\midrule
\multirow{2}{*}{Priors} &
\multirow{2}{*}{No} & $M_1-M_0 = 700 \pm 140 $ MeV \\
& & $M'_0-M_0 = 400 \pm 200 $ MeV \\
\midrule
\multirow{1}{*}{Fit range} &
\begin{tabular}{lll}
$a$ (fm) & $t_\text{min}$ & $t_\text{max}$ \\
0.15 & 16 & 23 \\
0.12 & 20 & 31 \\
0.09 & 30 & 47 \\
0.06 & 40 & 71 \\
\end{tabular}
&
\begin{tabular}{lll}
$a$ (fm) & $t_\text{min}$ & $t_\text{max}$ \\
0.15 & 8 & 23 \\
0.12 & 10 & 31 \\
0.09 & 15 & 47 \\
0.06 & 20 & 71 \\
\end{tabular} \\
\bottomrule
\end{tabular}
\end{center}
\caption{Our fit methods are summarized. $J+K$ stands for the
number of states included in the fits, where $J$ ($K$) is the number of
ordinary (alternate) states. For heavy-light, the priors are
applied to the mass gaps. The minimum distances are about
2.4 fm for light-light and 1.2 fm for heavy-light. \label{tab:fit_sum}}
\end{table}
\section{Least-squares Fits}
\label{sec:lsf}
We use least-squares minimization to find the best fits.
The objective function to be minimized is the augmented $\chi^2$ function,
which includes the Gaussian priors as well as usual $\chi^2$,
\begin{align}
\chi^2_\text{aug} (\theta) = \chi^2(\theta)
+ \sum_{\alpha}
\frac{(\theta_\alpha - \mu_{\alpha})^2}{\sigma_{\alpha}^2} \,,
\label{eq:chi2}
\end{align}
where $\theta$ represents a set of parameters to be estimated, and
$\alpha$ runs over parameters constrained with central value
$\mu_\alpha$ and width $\sigma_\alpha$.
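A minimal implementation of the augmented $\chi^2$ could look like the following sketch (ours, illustrative; the actual fits use the full covariance matrix inside $\chi^2(\theta)$):

```python
def augmented_chi2(theta, chi2, priors):
    """chi^2_aug = chi^2(theta) + sum_a ((theta_a - mu_a) / sigma_a)^2.

    `chi2`   : callable giving the data chi^2 for parameter vector theta.
    `priors` : dict mapping a parameter index to its (mu, sigma) constraint;
               only the constrained parameters appear in the penalty sum."""
    penalty = sum(((theta[a] - mu) / sigma) ** 2
                  for a, (mu, sigma) in priors.items())
    return chi2(theta) + penalty
```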
Table \ref{tab:fit_sum} summarizes our fit methods. We divide the
correlators into two groups depending on the valence quark masses:
light-light and heavy-light, where by ``heavy'' we mean charm and
``light'', up, down, and strange. For light-light, we use a
1+0-state fit with large minimum distances ($\sim 2.4$ fm). For
heavy-light, however, we use a 2+1-state fit with constrained masses and
reduced minimum distances ($\sim 1.2$ fm). We choose the minimum distances
small enough to provide an adequate statistical signal for the included states
but not so small as to be polluted by neglected excited states.
The main reason for the differences in the two groups of fits is the
fractional errors (noise-to-signal) of the correlators
\cite{Lepage:1989hd}. Figure \ref{fig:fe} shows an example of the
fractional errors for various valence quark combinations. One can
easily see that the fractional error grows much faster for heavy-light.
This means that good signals are not expected at large
distances for heavy-light. Thus, it is better to reduce the minimum distance to get
better statistical precision, while including additional heavier
states in the fitting function.
\begin{figure}
\begin{center}
\includegraphics[width=0.54\textwidth]{frac.pdf}
\caption{An example of the fractional errors for various
valence quark mass combinations is shown. The circle and
square symbols stand for random and Coulomb wall sources,
respectively. Here, ``light'' stands for up and down
quarks.\label{fig:fe}}
\end{center}
\end{figure}
The central values of the priors are taken to be
$M_{D(2550)}-M_D \approx 700$ MeV for the ordinary excited
state and $M_{D^*}-M_{D} \approx 400$ MeV for the alternate state
\cite{pdg:2012}. The widths are chosen narrow enough to effectively
stabilize the fits, but not so narrow as to significantly influence our
determination of the ground states, the states of interest.
We find that these priors allow us to get good confidence levels of fits (or
$p$-values) over all heavy-light fits.
\begin{figure}
\includegraphics[width=0.5\textwidth]{rand_covratio_4src.pdf}
\includegraphics[width=0.5\textwidth]{rand_covratio_1src_v2.pdf}
\caption{The ratios of diagonal elements of the covariance matrix for
the random wall source correlator on the 0.12 fm physical quark mass
ensemble are plotted for various values of block size $b$. On the
left, the correlators are averaged over four measurements on each
lattice. On the right, however, only a single measurement is used to
measure the correlators. For details, please see the text.
\label{fig:covratio}} \end{figure}
\section{Autocorrelation of Correlators}
\label{sec:autocor}
We use the blocking method to compute the correlators and their statistical
errors from the Monte Carlo samples. The blocking method divides the measurements
in a time series into contiguous blocks and uses their means as
independent measurements in the subsequent analysis.
As a proxy for autocorrelation, we examine
ratios of the covariance matrix of the correlators as a function of the
block size,
\begin{align}
r^{[b]}_{ij} \equiv C^{[b]}_{ij}/C^{[1]}_{ij} \,,
\label{eq:covratio}
\end{align}
where $b$ denotes the size of blocks, and $C^{[b]}_{ij}$ represents the
covariance matrix computed with the block size $b$.
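The effect of blocking on an autocorrelated time series can be sketched as follows (a minimal one-observable toy, so the ``covariance matrix'' reduces to the variance of the mean; the AR(1) series and all parameters are our own illustrative choices, not the actual lattice data):

```python
# Sketch of the blocking diagnostic r^[b] = C^[b]/C^[1] for a single
# observable.  An AR(1) series stands in for autocorrelated Monte Carlo
# measurements; r > 1 signals that the naive (b = 1) error estimate is
# too small.
import random

def variance_of_mean(xs, b):
    """Estimate Var(mean) treating contiguous blocks of size b as independent."""
    nb = len(xs) // b
    blocks = [sum(xs[i * b:(i + 1) * b]) / b for i in range(nb)]
    mean = sum(blocks) / nb
    return sum((x - mean) ** 2 for x in blocks) / (nb * (nb - 1))

random.seed(1)
phi, n = 0.8, 200_000
xs, x = [], 0.0
for _ in range(n):
    x = phi * x + random.gauss(0.0, 1.0)   # autocorrelated measurements
    xs.append(x)

r = variance_of_mean(xs, 16) / variance_of_mean(xs, 1)
# for positively autocorrelated data r grows with the block size and
# approaches the integrated-autocorrelation factor (1+phi)/(1-phi) = 9
```

Choosing the block size is then the trade-off described below: large enough that $r^{[b]}$ has saturated, but small enough to leave sufficient blocks for the covariance estimate.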
Figure \ref{fig:covratio} shows examples of the diagonal elements of
the ratio matrix. On the left plot, we can see that the autocorrelation
has a peculiar (Euclidean) time dependence. This happens
because we measure the correlators from four source times and average them
to get one estimator on each gauge configuration. Then, when we proceed
to the next gauge configuration, we move the locations of sources by
a constant amount $N_T/8$ plus a small displacement $\delta = \pm 1$,
which prevents moving back and forth to the same places. Note that the
positions of peaks coincide with the ``moving distance'', $N_T/8 +
\delta$, or its multiple. We can compare these with the results
of the single measurement (with the moving distance $\sim N_T/4$),
which are plotted on the right in Fig.~\ref{fig:covratio}. Hence,
we find that these autocorrelations are induced
by the way we place the multiple sources throughout the lattices.
To suppress the effects of the autocorrelations, we need to increase
the size of the blocks. However, as the block size increases we
have fewer blocks, and our error estimates get less accurate.
In this trade-off, we find that an optimal block size is four, which
gives us enough statistics to estimate the covariance matrices of large dimension
while reducing the underestimation of error due to the autocorrelations
for the ensembles where the number of configurations is about a thousand.
We use a block size of two or one, depending on the number of configurations,
when it is less than a thousand.
\section{Excited State Effects}
\label{sec:syserr}
To address the systematic uncertainty due to neglected excited states, we
vary the fitting ranges and priors and test for changes in the quantities
of interest. For measurements of the decay constants on the 0.09 fm
physical sea quark mass ensemble, for example, no deviation beyond
statistical precision is found with any of the following variations:
reduced prior widths ($\sim$ 50 MeV) for the mass gaps, shifts of the prior central values
($\sim\pm$100 MeV), additional constraints on the amplitudes, and
changes of the minimum distances ($\sim\pm$2). This
leads to our conservative estimates of the systematic uncertainty
from the excited states in Ref.~\cite{Doug:2012}.
\acknowledgments \vspace{-2.0mm}
This work was supported by the U.S. Department of Energy and National
Science Foundation.
Computation for this work was done at
the Texas Advanced Computing Center (TACC),
the National Center for Supercomputing Applications (NCSA),
the National Institute for Computational Sciences (NICS),
the National Center for Atmospheric Research (NCAR),
the USQCD facilities at Fermilab,
and the National Energy Research Scientific Computing Center (NERSC),
under grants from the NSF and DOE.
\section{Introduction}
Supersingular Isogeny Graphs were proposed for use in cryptography in 2006 by Charles, Goren, and Lauter~\cite{lauter:2009}. Supersingular isogeny graphs are examples of Ramanujan graphs, i.e.\ optimal expander graphs. This means that relatively {\it short} walks on the graph approximate the uniform distribution, i.e. walks of length approximately equal to the logarithm of the graph size. Walks on expander graphs are often used as a good source of randomness in computer science, and the reason for using {\it Ramanujan} graphs is to keep the path length short. But the reason these graphs are important for cryptography is that {\it finding paths} in these graphs, i.e.\ {\it routing,} is hard: there are no known subexponential algorithms to solve this problem, either classically or on a quantum computer. For this reason, systems based on the hardness of problems on Supersingular Isogeny Graphs are currently under consideration for standardization in the NIST Post-Quantum Cryptography (PQC) Competition~\cite{PQC}.
The paper~\cite{lauter:2009} proposed a general construction for cryptographic hash functions based on the hardness of inverting a walk on a graph. The path-finding problem is the following: given fixed starting and ending vertices representing the start and end points of a walk on the graph of a fixed length, find a path between them. A hash function can be defined by using the input to the function as directions for walking around the graph: the output is the label for the ending vertex of the walk. Finding collisions for the hash function is equivalent to finding cycles in the graph, and finding pre-images is equivalent to path-finding in the graph. Backtracking is not allowed in the walks by definition, to avoid trivial collisions.
In~\cite{lauter:2009}, two concrete examples of families of optimal expander graphs (Ramanujan graphs) were proposed, the so-called Lubotzky--Phillips--Sarnak (LPS) graphs~\cite{lubotzky:1988}, and the Supersingular Isogeny Graphs (Pizer) \cite{pizer:1998}, where the path finding problem was supposed to be hard. Both graphs were proposed and presented at the 2005 and 2006 NIST Hash Function workshops, but the LPS hash function was quickly attacked and broken in two papers in 2008, a collision attack~\cite{Zemor-Tillich} and a pre-image attack~\cite{SCN:PetLauQui08}. The preimage attack gives an algorithm to efficiently find paths in LPS graphs, a problem which had been open for several decades. The PLQ path-finding algorithm uses the explicit description of the graph as a Cayley graph in ${\mathrm{PSL}_2}({\mathbb{F}}_p)$, where vertices are $2 \times 2$ matrices with entries in ${\mathbb{F}}_p$ satisfying certain properties. Given the swift discovery of attacks on the LPS path-finding problem, it is natural to investigate whether this approach is relevant to the path-finding problem in Supersingular Isogeny (Pizer) Graphs.
In 2011, De Feo--Jao--Pl\^ut \cite{defeo:2014} devised a cryptographic system based on supersingular isogeny graphs, proposing a Diffie--Hellman protocol as well as a set of five hard problems related to the security of the protocol. It is natural to ask what is the relation between the problems stated in~\cite{defeo:2014} and the path-finding problem on Supersingular Isogeny Graphs proposed in~\cite{lauter:2009}.
In this paper we explore these two questions related to the security of cryptosystems based on these Ramanujan graphs. In Part \ref{part1} of the paper, we study the relation between the hard problems proposed by De Feo--Jao--Pl\^ut and the hardness of the Supersingular Isogeny Graph problem which is the foundation for the CGL hash function. In Part \ref{part2} of the paper, we study the relation between the Pizer and LPS graphs by viewing both from a number theoretic perspective.
In particular, in Part \ref{part1} of the paper, we clearly explain how the security of the Key Exchange protocol relies on the hardness of the path-finding problem in SSIG, proving a reduction (Theorem~\ref{lemma::reduction}) between the Supersingular Isogeny Diffie--Hellman (SIDH) Problem and the path-finding problem in SSIG. Although this fact and this theorem may be clear to the experts (see for example the comment in the introduction to a recent paper on this topic~\cite{Menezes}), this reduction between the hard problems is not written anywhere in the literature. Furthermore, the Key Exchange (SIDH) paper~\cite{defeo:2014} states five hard problems, including (SSCDH), with relations proved between some but not all of them, and mentions the paper \cite{lauter:2009} only in passing (on page 17), with no clear statement of the relationship to the overarching hard problem of path-finding in SSIG.
Our Theorem~\ref{lemma::reduction} shows that the security of the proposed post-quantum key exchange relies on the hardness of the path-finding problem in SSIG stated in~\cite{lauter:2009}. Theorem~\ref{thm::number-chains} counts the chains of isogenies of fixed length. Its proof relies on elementary group theory results and facts about isogenies, proved in Section \ref{sec::correspondence}.
In Part \ref{part2} of the paper, we examine the LPS and Pizer graphs from a number theoretic perspective with the aim of highlighting the similarities and differences between the constructions.
Both the LPS and Pizer graphs considered in~\cite{lauter:2009} can be thought of as graphs on
\begin{equation}\label{eq:intro_double_coset}
\Gamma \backslash {\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l}),
\end{equation}
where $\Gamma$ is a discrete cocompact subgroup obtained from a quaternion algebra $B$. We show how different input choices for the construction lead to different graphs. In the LPS construction one may vary $\Gamma$ to get an infinite family of Ramanujan graphs. In the Pizer construction one may vary $B$ to get an infinite family. In the LPS case, we always work in the Hamiltonian quaternion algebra. For this particular choice of algebra we can
rewrite the graph
as a Cayley graph. This explicit description is key for breaking the LPS hash function. For the Pizer graphs we do not have such a description. On the Pizer side the graphs may, via Strong Approximation, be viewed as graphs on ad\`{e}lic double cosets which are in turn the class group of an order of $B$ that is related to the cocompact subgroup $\Gamma$. From here one obtains an isomorphism with supersingular isogeny graphs. For LPS graphs the local double cosets are also isomorphic to ad\`{e}lic double cosets, but
in this case the corresponding set of ad\`{e}lic double cosets is smaller relative to the quaternion algebra
and we do not have the same chain of isomorphisms.
Part \ref{part2} has the following outline. Section \ref{subsect:LPS} follows \cite{lubotzky:2010} and presents the construction of LPS graphs from three different perspectives: as a Cayley graph, in terms of local double cosets, and, to connect these two, as a quotient of an infinite tree. The edges of the LPS graph are explicit in both the Cayley and local double coset presentation. In Section \ref{app:sect:explicitisom} we give an explicit bijection between the natural parameterizations of the edges at a fixed vertex. Section \ref{subsect:StrongApprox} is about Strong Approximation, the main tool connecting the local and adelic double cosets for both LPS and Pizer graphs. Section \ref{sect:Pizer} follows \cite{pizer:1998} and summarizes Pizer's construction. The different input choices for LPS and Pizer constructions impose different restrictions on the parameters of the graph, such as the degree. $6$-regular graphs exist in both families. In Section \ref{subsect:Pizer_primes} we give a set of congruence conditions for the parameters of the Pizer construction that produce a $6$-regular graph. In Section \ref{subsect:LPS_Pizer_compare} we summarize the similarities and differences between the two constructions.
\subsection{Acknowledgments}
This project was initiated at the Women in Numbers 4 (WIN4) workshop at the Banff International Research Station in August, 2017. The authors would like to thank BIRS and the WIN4 organizers. In addition, the authors would like to thank the Clay Mathematics Institute, PIMS, Microsoft Research, the Number Theory Foundation and the NSF-HRD 1500481 - AWM ADVANCE grant for supporting the workshop. We thank John Voight, Scott Harper, and Steven Galbraith for helpful conversations, and the anonymous referees for many helpful suggestions and edits.
\part{\Large Cryptographic applications of supersingular isogeny graphs}\label{part1}
In this section we investigate the security of the \cite{defeo:2014}
key-exchange protocol. We show a reduction to the path-finding problem in supersingular isogeny
graphs stated in \cite{lauter:2009}. The hardness of this problem is the
basis for the CGL cryptographic hash function, and we show here that if this problem is not hard, then the
key exchange presented in \cite{defeo:2014} is not secure.
We begin by recalling some basic facts about
isogenies of elliptic curves and the key-exchange construction. Then, we give a reduction between two hardness assumptions. This reduction is based on a correspondence between a path representing the composition of $m$ isogenies of degree $\ell$ and an isogeny of degree
$\ell^m$.
\section{Preliminaries} \label{sec::crypto-prelim}
We start by recalling some basic and well-known results about isogenies. They
can all be found in \cite{silverman}. We try to be as concrete and constructive
as possible, since we would like to use these facts to do computations.
An elliptic curve is a curve of genus one with a specified base point $\mathcal{O}$,
which can be used to define a group law. We will not go into the details of
this; see for example \cite{silverman}. If $E$ is an elliptic curve defined over a field
$K$ and $\mathrm{char}(\bar{K})\neq 2,3$, we can write the equation of $E$ as
\[ E: y^2 = x^3 +a\cdot x + b, \]
where $a, b \in K$.
Two important quantities related to an elliptic curve are its discriminant
$\Delta$ and its $j$-invariant, denoted by $j$. They are defined as follows.
\[ \Delta = -16 \cdot (4\cdot a^3 + 27\cdot b^2) \quad\text{and}\quad j = -1728\cdot\frac{(4a)^3}{\Delta}. \]
Two elliptic curves are isomorphic over $\bar{K}$ if and only if they have the same $j$-invariant.
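These quantities are easy to compute directly; a short sketch over $\mathbb{Q}$, in the standard normalization $\Delta = -16(4a^3 + 27b^2)$ and $j = -1728\,(4a)^3/\Delta$, checked on two classical curves:

```python
# Discriminant and j-invariant of y^2 = x^3 + a*x + b (sketch over Q;
# exact rational arithmetic via Fraction).
from fractions import Fraction

def discriminant(a, b):
    return -16 * (4 * a ** 3 + 27 * b ** 2)

def j_invariant(a, b):
    return Fraction(-1728 * (4 * a) ** 3, discriminant(a, b))

# Two classical examples:
assert j_invariant(0, 1) == 0       # y^2 = x^3 + 1 has j = 0
assert j_invariant(1, 0) == 1728    # y^2 = x^3 + x has j = 1728
```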
\begin{definition} Let $E_0$ and $E_1$ be two elliptic curves. An isogeny from
$E_0$ to $E_1$ is a surjective morphism
\[ \phi: E_0 \rightarrow E_1, \]
which is a group homomorphism.
\end{definition}
An example of an isogeny is the multiplication-by-$m$ map $[m]$,
\begin{align*}
[m] : E &\rightarrow E \\
P & \mapsto m\cdot P.
\end{align*}
The degree of an isogeny is defined as the degree of the finite extension
$\bar{K}(E_0)/\phi^*(\bar{K}(E_1))$, where $\bar{K}(*)$ is the
function field of the curve, and $\phi^*$ is the map of function fields induced by the isogeny
$\phi$. By convention, we set
\[ \deg([0]) = 0. \]
The degree map is multiplicative under composition of isogenies:
\[ \deg(\phi \circ \psi) = \deg(\phi) \cdot \deg(\psi) \]
for all chains $E_0 \xrightarrow{\phi} E_1 \xrightarrow{\psi} E_2$, and for an integer $m > 0$, the multiplication-by-$m$ map has degree $m^2$.
\begin{theorem}{\cite{silverman}} Let $\phi: E_0 \rightarrow E_1$ be an
isogeny of degree $m$. Then, there exists a unique isogeny
\[ \hat{\phi}: E_1 \rightarrow E_0 \]
such that $\hat{\phi} \circ \phi = [m]$ on $E_0$, and $\phi \circ
\hat{\phi}=[m]$ on $E_1$. We call $\hat{\phi}$ the dual isogeny to $\phi$.
We also have that
\[ \deg(\hat{\phi}) = \deg(\phi). \]
\end{theorem}
For an isogeny $\phi$, we say $\phi$ is separable if the field extension
$\bar{K}(E_0)/\phi^*(\bar{K}(E_1))$ is separable. We then have the following lemma.
\begin{lemma} Let $\phi: E_0 \rightarrow E_1$ be a separable isogeny. Then
\[ \deg(\phi) = \#\ker(\phi). \]
\end{lemma}
In this paper, we only consider separable isogenies and frequently use this convenient fact.
From the above, it follows that a point $P$ of order $m$ defines an isogeny $\phi$
of degree $m$,
\[ \phi: E \rightarrow E/\langle P \rangle. \]
We will refer to such an isogeny as a cyclic isogeny (meaning that its kernel is a cyclic subgroup of $E$). For $\ell$ prime, we also say that
two curves $E_0$ and $E_1$ are $\ell$-isogenous if there exists an isogeny
$\phi: E_0 \rightarrow E_1$ of degree $\ell$.
We define $E[m]$,
the $m$-torsion subgroup of $E$, to be the kernel of the multiplication-by-$m$ map.
If $\mathrm{char}(K)>0$ and $m \geq 2$ is an integer coprime to $\mathrm{char}(K)$, or if
$\mathrm{char}(K)=0$, then the points of $E[m]$ are
\[ E[m] = \{ P \in E(\bar{K}): m \cdot P = \mathcal{O} \} \cong {\mathbb{Z}}/m{\mathbb{Z}} \times {\mathbb{Z}}/m{\mathbb{Z}}. \]
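The structure of the torsion ties together two facts above: $\deg([m]) = m^2$ and $\deg = \#\ker$ for separable isogenies. A purely group-theoretic sketch (modeling the full torsion abstractly as $(\mathbb{Z}/N)^2$; no actual curve arithmetic is involved) verifies that the kernel of multiplication-by-$m$ has exactly $m^2$ elements:

```python
# Model E[N] abstractly as (Z/N)^2 and count the points killed by m for
# each m | N: the kernel of multiplication-by-m has m^2 elements,
# consistent with deg([m]) = m^2 and deg = #ker for separable isogenies.
# (Abstract sketch, not elliptic curve arithmetic.)

def mtorsion_count(N, m):
    return sum(1 for x in range(N) for y in range(N)
               if (m * x) % N == 0 and (m * y) % N == 0)

N = 12
for m in (1, 2, 3, 4, 6, 12):
    assert mtorsion_count(N, m) == m * m
```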
If an elliptic curve $E$ is defined over a field of characteristic $p>0$ and its
endomorphism ring over $\bar{K}$ is an order in a quaternion algebra, we say that $E$ is
supersingular. Every isomorphism class over $\bar{K}$ of supersingular elliptic curves in characteristic $p$ has a representative defined over ${\mathbb{F}}_{p^2}$, thus we will often let $K={\mathbb{F}}_{p^2}$ (for some fixed prime $p$).
We mentioned above that an $\ell$-torsion point $P$ induces an isogeny of degree $\ell$. More generally, a finite subgroup
$G$ of $E$ generates a unique isogeny of degree $\# G$, up to automorphism.
Supersingular isogeny graphs were introduced into cryptography in \cite{lauter:2009}. To define a supersingular isogeny graph, fix a finite field $K$ of characteristic $p$, a supersingular elliptic curve $E$ over $K$, and a prime $\ell \ne p$. Then the corresponding isogeny graph is constructed as follows. The vertices are the $\bar{K}$-isomorphism classes of elliptic curves which are $\bar{K}$-isogenous to $E$. Each vertex is labeled with the $j$-invariant of the curve. The edges of the graph correspond to the $\ell$-isogenies between the elliptic curves. As the vertices are isomorphism classes of elliptic curves, isogenies that differ by composition with an automorphism of the image are identified as edges of the graph. That is, if $E_0,E_1$ are $\bar{K}$-isogenous elliptic curves, $\phi:E_0\rightarrow E_1$ is an $\ell$-isogeny and $\epsilon\in {\mathrm{Aut}}(E_1)$ is an automorphism, then $\phi$ and $\epsilon\circ \phi $ are identified and correspond to the same edge of the graph.
If $p \equiv 1 \pmod{12}$, we can uniquely identify an isogeny with its dual to make it an undirected graph. It is a multigraph in the sense that there can be multiple edges if no extra conditions are imposed on $p$. Three important properties of these graphs follow from deep theorems in number theory:
\begin{enumerate}
\item The graph is connected for any $\ell \ne p$
(special case of~\cite[Theorem 4.1]{lauter:2009r}).
\item A supersingular isogeny graph has roughly $p/12$ vertices~\cite[Theorem 4.1]{silverman}.
\item Supersingular isogeny graphs are optimal expander graphs; in particular, they are Ramanujan (special case of~\cite[Theorem 4.2]{lauter:2009r}).
\end{enumerate}
\begin{remark} \label{Rem: multiple edges}
In order to avoid trivial collisions in cryptographic hash functions based on isogeny graphs, it is best if the graph has no short cycles. Charles, Goren, and Lauter show in \cite{lauter:2009} how to ensure that isogeny graphs do not have short cycles by carefully choosing the finite field one works over. For example, they compute that a $2$-isogeny graph does not have double edges (i.e.\ cycles of length $2$) when working over ${\mathbb{F}}_p$ with $p \equiv 1 \bmod 420$. Similarly, we computed that a $3$-isogeny graph does not have double edges for $p \equiv 1 \bmod 9240$. Given that $420 = 2^2 \cdot 3 \cdot 5 \cdot 7$ and $9240 = 2^3 \cdot 3 \cdot 5 \cdot 7 \cdot 11$, we conclude that neither the 2-isogeny graph nor the 3-isogeny graph has double edges for $p \equiv 1 \bmod 9240$.
For our experiments (described in Section \ref{sec::correspondence}), we were interested in studying short walks, for example of length $4$, in a setting relevant to the Key-Exchange protocol described below. The smallest prime $p$ with the property $p \equiv 1 \bmod 9240$ that also satisfies $2^4\cdot3^4 \mid p-1$ is
$$p = 2^4\cdot 3^4 \cdot 5 \cdot 7 \cdot 11 +1.$$
\end{remark}
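The congruence conditions in the remark can be checked directly. Since $9240 = 2^3\cdot 3\cdot 5\cdot 7\cdot 11$, requiring both $p \equiv 1 \bmod 9240$ and $2^4\cdot 3^4 \mid p-1$ amounts to $p \equiv 1 \bmod \mathrm{lcm}(9240,\, 2^4\cdot 3^4) = 498960$, and the smallest candidate $498961$ is prime (a sketch; trial division is plenty at this size):

```python
# Verify the congruence conditions of the remark and the primality of
# p = 2^4 * 3^4 * 5 * 7 * 11 + 1 = 498961 by trial division.
from math import gcd, isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

modulus = 9240 * (2**4 * 3**4) // gcd(9240, 2**4 * 3**4)
assert modulus == 2**4 * 3**4 * 5 * 7 * 11        # = 498960
p = modulus + 1                                    # = 498961
assert p % 9240 == 1 and (p - 1) % (2**4 * 3**4) == 0
assert is_prime(p)
```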
\section{The \cite{defeo:2014} key-exchange}
Let $E$ be a supersingular elliptic curve defined over ${\mathbb{F}}_{p^2}$,
where $p =\ell_A^n\cdot \ell_B^m\pm 1$, $\ell_A$ and $\ell_B$ are primes, and $n \approx m$.
We have players $A$ (for Alice) and $B$ (for Bob), representing the two parties who wish to engage in a key-exchange protocol with the goal of establishing a shared secret key by communicating via a (possibly) insecure channel.
The two players $A$ and $B$ generate their public parameters by each picking two points $P_A$, $Q_A$ such that $\langle P_A, Q_A \rangle = E[\ell_A^n]$ (for $A$), and two points $P_B$, $Q_B$ such that $\langle P_B, Q_B \rangle = E[\ell_B^m]$ (for $B$).
Player $A$ then secretly picks two random integers $0\leq m_A, n_A< \ell_A^n$. These two integers (and the isogeny they generate) will be player $A$'s secret parameters. $A$ then computes the isogeny $\phi_A$
\[ E \xrightarrow{\phi_A} E_A :=E/\langle[m_A]P_A + [n_A]Q_A\rangle. \]
Player $B$ proceeds in a similar fashion and secretly picks $0\leq m_B, n_B < \ell_B^m$.
Player $B$ then generates the (secret) isogeny
\[ E \xrightarrow{\phi_B} E_B :=E/\langle[m_B]P_B + [n_B]Q_B\rangle. \]
So far, $A$ and $B$ have constructed the following diagram.
\[
\begin{tikzcd}{}
& E_A\arrow[leftarrow]{dl}{\phi_A}
& \\
E \arrow{dr}{\phi_B}
& & \\
& E_B
&
\end{tikzcd}
\]
To complete the diamond, we proceed to the exchange part of the protocol.
Player
$A$ computes the points $\phi_A(P_B)$ and $\phi_A(Q_B)$ and sends $\{ \phi_A(P_B), \phi_A(Q_B), E_A \}$
to player $B$. Similarly, player $B$ computes and sends $\{ \phi_B(P_A), \phi_B(Q_A), E_B \}$ to player $A$. Both players now have enough information to construct the following diagram,
\begin{equation}\label{diagram}
\begin{tikzcd}{}
& E_A\arrow[leftarrow]{dl}{\phi_A}\arrow[rightarrow]{dr}{\phi'_{A}}
& \\
E \arrow{dr}{\phi_B}
& & E_{AB} \arrow[leftarrow]{dl}{\phi'_{B}} \\
& E_B
&
\end{tikzcd}
\end{equation}
where
\[ E_{AB} \cong E/\langle [m_A]P_A + [n_A]Q_A, [m_B]P_B + [n_B]Q_B \rangle. \]
Player $A$ can use the knowledge of the secret information $m_A$ and $n_A$ to compute the isogeny $\phi'_{B}$, by quotienting $E_B$ by $\langle[m_A]\phi_B(P_A) + [n_A]\phi_B(Q_A) \rangle$ to obtain $E_{AB}$.
Player $B$ can use the knowledge of the secret information $m_B$ and $n_B$ to compute the isogeny $\phi'_{A}$, by quotienting $E_A$ by $\langle[m_B]\phi_A(P_B) + [n_B]\phi_A(Q_B) \rangle$ to obtain $E_{AB}$.
A separable isogeny is determined by its kernel, and so both ways of going around the diagram from $E$ result in computing the same elliptic curve $E_{AB}$.
The players then use the $j$-invariant of the curve $E_{AB}$ as a shared secret.
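The reason both orders of quotienting agree can be seen purely group-theoretically: $E_{AB}$ is the quotient by the subgroup generated by both secret kernel points, and that subgroup is symmetric in $R_A = [m_A]P_A + [n_A]Q_A$ and $R_B = [m_B]P_B + [n_B]Q_B$. A minimal sketch, modeling points of $E$ abstractly as elements of $(\mathbb{Z}/N)^2$ (our own toy model; no curve arithmetic and certainly no security):

```python
# Group-theoretic heart of the key exchange: the subgroup <R_A, R_B>
# is the same whichever secret kernel point is adjoined first, so both
# parties arrive at the same quotient E_AB.

def generated_subgroup(N, gens):
    """Closure of gens under addition in (Z/N)^2."""
    sub = {(0, 0)}
    frontier = list(gens)
    while frontier:
        g = frontier.pop()
        added = {((g[0] + h[0]) % N, (g[1] + h[1]) % N) for h in sub} - sub
        sub |= added
        frontier.extend(added)
    return frozenset(sub)

N = 36                 # toy stand-in for l_A^n * l_B^m = 4 * 9
R_A = (9, 18)          # order-4 element: "Alice's" kernel generator
R_B = (12, 24)         # order-3 element: "Bob's" kernel generator
assert generated_subgroup(N, [R_A, R_B]) == generated_subgroup(N, [R_B, R_A])
```

In the actual protocol the analogous statement is that quotienting by $\ker\phi_A$ and then by the image of $\ker\phi_B$ (or vice versa) yields curves with the same kernel in $E$, hence the same $j$-invariant.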
\begin{remark} Given a list of points specifying a kernel, one can explicitly compute
the associated isogeny using V\'{e}lu's formulas \cite{velu:1971}. In principle, this is how the
two parties engaging in the key-exchange above can compute $\phi_A$, $\phi_B$, $\phi'_{A}$, $\phi'_{B}$.
However, in practice, for subgroups of cryptographic size this is infeasible, and thus
a different approach is taken, based on breaking the isogenies into $n$ (resp.\ $m$) steps, each of degree $\ell_A$ (resp.\ $\ell_B$). This equivalence is explained below.
\end{remark}
\subsection{Hardness assumptions}
The security of the key-exchange protocol is based on the following hardness assumption, which was introduced in \cite{defeo:2014} and called the Supersingular Computational Diffie--Hellman (SSCDH) problem.
\begin{problem}(Supersingular Computational Diffie--Hellman (SSCDH)):
\label{pbm:sscdh}
Let $p$, $\ell_A$, $\ell_B$, $n$, $m$, $E$, $E_A$, $E_B$, $E_{AB}$, $P_A$, $Q_A$, $P_B$, $Q_B$ be as above.
Let $\phi_A$ be an isogeny from $E$ to $E_A$ whose kernel is equal to $\langle [m_A]P_A + [n_A]Q_A \rangle$, and let $\phi_B$ be an isogeny from $E$ to $E_B$ whose kernel is equal to $\langle [m_B]P_B + [n_B]Q_B \rangle$, where $m_A$,$n_A$ (respectively $m_B$,$n_B$) are integers chosen at random between $0$ and $\ell_A^n$ (respectively $\ell_B^m$), and not both divisible by $\ell_A$ (resp.\ $\ell_B$).
Given the curves $E_A$, $E_B$ and the points $\phi_A(P_B)$, $\phi_A(Q_B)$, $\phi_B(P_A)$, $\phi_B(Q_A)$, find the $j$-invariant of
\[ E_{AB} \cong E/\langle [m_A]P_A + [n_A]Q_A, [m_B]P_B + [n_B]Q_B \rangle; \]
see diagram \eqref{diagram}.
\end{problem}
In~\cite{lauter:2009}, a cryptographic hash function was defined: $$h:\{ 0,1\}^r \rightarrow \{ 0,1\}^s$$ based on the Supersingular Isogeny Graph (SSIG) for a fixed prime $p$ of cryptographic size, and a fixed small prime $\ell \ne p$. The hash function processes the input string in blocks which are used as directions for walking around the graph starting from a given fixed vertex. The output of the hash function is the $j$-invariant of an elliptic curve over ${\mathbb{F}}_{p^2}$, which requires $2\log(p)$ bits to represent, so $s = 2 \lceil \log(p)\rceil$.
For the security of the hash function, it is necessary to avoid the generic {\it birthday attack}. This attack runs in time proportional to the square root of the size of the graph, which is the {\it Eichler class number}, roughly $\lfloor p/12 \rfloor$. So in practice, we must pick $p$ so that $\log(p) \approx 256$.
The integer $r$ is the length of the bit string input to the hash function. If $\ell = 2$, which is the easiest case to implement and a common choice, then $r$ is precisely the number of steps taken on the walk in the graph, since the graph is $3$-regular with no backtracking allowed, so the input is processed bit-by-bit. In order to ensure that the walk reaches a sufficiently random vertex in the graph, the number of steps should be roughly $\log(p) \approx 256$. A CGL hash function is thus specified by giving the primes $p$ and $\ell$, the starting vertex of the walk, and the integers $r \approx 256$ and $s$. (Extra congruence conditions were imposed on $p$ to make it an undirected graph with no small cycles.)
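The walking mechanism can be sketched on any $3$-regular graph (a toy stand-in we chose ourselves for the supersingular $2$-isogeny graph, whose vertices would be $j$-invariants; this illustrates only the mechanics and has no security):

```python
# Toy CGL-style hash: walk a small 3-regular graph, one input bit per
# step, never backtracking.  The circulant graph on 20 vertices with
# neighbors i-1, i+1, i+10 is our own stand-in for an isogeny graph.

V = 20
def neighbors(v):
    return [(v - 1) % V, (v + 1) % V, (v + V // 2) % V]

def toy_cgl_hash(bits, start=0):
    prev, cur = None, start
    for bit in bits:
        # edges that do not backtrack; after the first step there are
        # exactly two, and the input bit selects one of them
        choices = [w for w in neighbors(cur) if w != prev]
        prev, cur = cur, choices[bit]
    return cur   # final vertex plays the role of the output j-invariant

h0 = toy_cgl_hash([0, 1, 1, 0, 1])
h1 = toy_cgl_hash([0, 1, 1, 0, 0])   # flipping a bit moves the endpoint
```

In the real construction each step is a $2$-isogeny computed by V\'{e}lu's formulas, and the expansion of the graph is what makes short walks mix well.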
The hard problems stated in~\cite{lauter:2009} corresponded to the important security properties of {\it collision} and {\it preimage resistance} for this hash function. For preimage resistance, the problem~\cite[Problem 3]{lauter:2009} stated was: given $p$, $\ell$, $r>0$, and two supersingular $j$-invariants modulo $p$, to find a path of length $r$ between them:
\begin{problem}(Path-finding \cite{lauter:2009})
\label{pbm:path}
Let $p$ and $\ell$ be distinct prime numbers, $r> 0$, and $E_0$ and $E_1$ two supersingular elliptic curves over $\mathbb{F}_{p^2}$.
Find a path of length $r$ in the $\ell$-isogeny graph corresponding to a composition of $r$ $\ell$-isogenies leading from $E_0$ to $E_1$ (i.e. an isogeny of degree $\ell^r$ from $E_0$ to $E_1$).
\end{problem}
It is worth noting that, to break the preimage resistance of the specified hash function, one must find a path of length exactly $r$; this is analogous to the situation for breaking the security of the key-exchange protocol. However, the problem of finding {\it any} path between two given vertices in SSIG graphs also remains open. For the LPS graphs, the algorithm presented in~\cite{SCN:PetLauQui08} did not find a path of a specific given length, but it was still considered a ``break'' of the hash function.
Furthermore, the diameter of these graphs, both LPS and SSIG, has been extensively studied. It is known that the diameter is roughly $\log(p)$; more precisely, it is $c\log(p)$ for a constant $c$ between $1$ and $2$ (see for example~\cite{S17}). That means that if $r$ is greater than $c\log(p)$, then given two vertices, a path of length $r$ between them is likely to exist. The fact that walks of length greater than $c\log(p)$ approximate the uniform distribution very closely means that one is not likely to miss any significant fraction of the vertices with paths of that length, because that would constitute a bias. Also, if $r \gg \log(p)$, then there may be many paths of length $r$. However, if $r$ is much less than $\log(p)$, say $\frac{1}{2}\log(p)$, there may be {\it no path} of such a short length between two given vertices. See~\cite{LP15} for a discussion of the ``sharp cutoff'' property of Ramanujan graphs.
But in the cryptographic applications, given an instance of the key-exchange protocol to be attacked, we {\it know} that there exists a path of length $n$ between $E$ and $E_A$, and the hard problem is to find it.
The set-up for the key-exchange requires $p=\ell_A^n \ell_B^m \pm 1$, where $n$ and $m$ are roughly the same size, and $\ell_A$ and $\ell_B$ are very small, such as $\ell_A = 2$ and $\ell_B = 3$. It follows that $n$ and $m$ are both approximately half the diameter of the graph (which is roughly $\log(p)$), so a path of length $n$ or $m$ between two random vertices is unlikely to exist. If a path of length $n$ exists and Algorithm A (the path-finding algorithm in the proof below) finds a path, then it is very likely to be the one which was constructed in the key exchange. If not, then Algorithm A can be repeated any constant number of times. So we have the following reduction:
\begin{theorem}
\label{lemma::reduction}
Assume as for the Key Exchange set-up that $p =\ell_A^n\cdot \ell_B^m + 1$ is a prime of cryptographic size, i.e. $\log(p) \ge 256$, that $\ell_A$ and $\ell_B$ are small primes, such as $\ell_A = 2$ and $\ell_B = 3$, and that $n \approx m$.
Then an algorithm that solves Problem \ref{pbm:path} (path-finding) can be used to solve Problem \ref{pbm:sscdh} (key exchange) with overwhelming probability. The failure probability is roughly $$\frac{\ell_A^n + \ell_A^{n-1}}{p} \approx \frac{\sqrt{p}}{p}.$$
\end{theorem}
\begin{proof}
Given an algorithm (Algorithm A) to solve Problem~\ref{pbm:path}, we can use this to solve Problem~\ref{pbm:sscdh} as follows. Given $E$ and $E_A$, use Algorithm A to find a path of length $n$ between these two vertices in the $\ell_A$-isogeny graph. Now use Lemma \ref{lemma::isogeny-composition-2} below to produce a point $R_A$ which generates the $\ell_A^n$-isogeny between $E$ and $E_A$. Repeat this to produce the point $R_B$ which generates the $\ell_B^m$-isogeny between $E$ and $E_B$ in the $\ell_B$-isogeny graph. Because the subgroups generated by $R_A$ and $R_B$ have smooth order, it is easy to write $R_A$ in the form $[m_A]P_A + [n_A]Q_A$ and $R_B$ in the form $[m_B]P_B + [n_B]Q_B$.
Using the knowledge of $m_A$, $n_A$, $m_B$, $n_B$, we can construct $E_{AB}$ and recover the $j$-invariant of $E_{AB}$, allowing us to solve Problem~\ref{pbm:sscdh}.
The reason for the qualification ``with overwhelming probability'' in the statement of the theorem is that it is possible that there are multiple paths of the same length between two vertices in the graph. If there are multiple paths of length $n$ (or $m$) between the two vertices, it suffices to repeat Algorithm A to find another path. This approach is sufficient to break the Key Exchange if there are only a small number of paths to try. As explained above, with overwhelming probability, there are {\it no} other paths of length $n$ (or $m$) in the Key Exchange setting.
In the SSIG corresponding to $(p, \ell_A)$, the vertices $E$ and $E_A$ are a distance of $n$ apart. Starting from the vertex $E$ and considering all paths of length $n$, the number of possible endpoints is at most $\ell_A^n + \ell_A^{n-1}$ (see Corollary~\ref{corollary::number-isogenies} below). Since the number of vertices in the graph is roughly $\lfloor p/12 \rfloor$, the probability that a given vertex such as $E_A$ is the endpoint of one of the walks of length $n$ is roughly
$$\frac{\ell_A^n + \ell_A^{n-1}}{p} \approx \frac{\sqrt{p}}{p} \le 2^{-128}.$$
This estimate does not use the Ramanujan property of the SSIG graphs. While a generic random graph could potentially have a topology which creates a bias towards some subset of the nodes, Ramanujan graphs cannot, as shown in~\cite[Theorem 3.5]{LP15}.
\end{proof}
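As a quick numerical sanity check of the failure-probability estimate above, the following Python snippet evaluates $(\ell_A^n + \ell_A^{n-1})/p$ exactly for toy parameters of the stated shape $p = \ell_A^n \ell_B^m + 1$ with $\ell_A^n \approx \ell_B^m$. The parameter values are illustrative only, and primality of $p$ plays no role in the arithmetic:

```python
from fractions import Fraction

# Illustrative parameters of the shape p = lA^n * lB^m + 1 with lA^n ~ lB^m;
# they are not a standardized parameter set, and p is not checked for primality.
lA, lB, n, m = 2, 3, 256, 162      # 3^162 is roughly 2^256.8

p = lA**n * lB**m + 1
fail = Fraction(lA**n + lA**(n - 1), p)

# (lA^n + lA^(n-1))/p ~ sqrt(p)/p is negligible for parameters of this size
assert fail < Fraction(1, 2**250)
```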
\section{Composing isogenies}\label{sec::correspondence}
Let $k$ be a positive integer. Every separable $k$-isogeny $\phi : E_0 \rightarrow E_1$ is determined by its kernel up to composition with an automorphism of the elliptic curve $E_1.$ Thus the edge corresponding to $\phi$ is uniquely determined by $\ker(\phi )$ and vice versa. This kernel is a subgroup of the $k$-torsion $E_0[k]$, and the latter is isomorphic to ${\mathbb{Z}}/k{\mathbb{Z}} \times {\mathbb{Z}}/k{\mathbb{Z}}$ if $k$ is coprime to the characteristic of the field we are working over.
Hence, fixing a prime $\ell$ and working over a finite field ${\mathbb{F}}_q$ which has characteristic different from $\ell$, the number of $\ell$-isogenies $\phi : E_0 \rightarrow E_1$ that correspond to different edges of the graph is equal to the number of subgroups of ${\mathbb{Z}}/\ell{\mathbb{Z}} \times {\mathbb{Z}}/\ell{\mathbb{Z}}$ of order $\ell$. It is well known that this number is equal to $\ell+1$. In other words, $E$ is $\ell$-isogenous to precisely $\ell+1$ elliptic curves.
However, some of these $\ell$-isogenous curves may be isomorphic. Therefore, in the isogeny graph (where nodes represent isomorphism classes of curves), $E$ has degree $\ell+1$ and may have $\ell+1$ neighbors or fewer.
Using V\'{e}lu's formulas, the equations for an edge can be computed from its kernel. Hence for computational purposes, it is important to write down this kernel explicitly. This is best done by specifying generators. Let $P, Q \in E_0$ be the generators of $E_0[\ell] \cong {\mathbb{Z}}/\ell{\mathbb{Z}} \times {\mathbb{Z}}/\ell{\mathbb{Z}}$. Then the subgroups of order $\ell$ are generated by $Q$ and $P + iQ$ for $i = 0,\ldots,\ell-1$.
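This enumeration of kernels is easy to mirror in the abstract group ${\mathbb{Z}}/\ell{\mathbb{Z}} \times {\mathbb{Z}}/\ell{\mathbb{Z}}$. The short Python sketch below (a group-theoretic model, not tied to any particular curve) lists the $\ell+1$ order-$\ell$ subgroups from the generators $Q$ and $P+iQ$ and checks that no subgroup is missed:

```python
l = 5                                       # any small prime

P, Q = (1, 0), (0, 1)                       # standard generators of Z/l x Z/l

def span(g):
    """Cyclic subgroup of Z/l x Z/l generated by g."""
    return frozenset(((a * g[0]) % l, (a * g[1]) % l) for a in range(l))

# the l+1 subgroups of order l: <Q> and <P + iQ> for i = 0, ..., l-1
gens = [Q] + [((P[0] + i * Q[0]) % l, (P[1] + i * Q[1]) % l) for i in range(l)]
subgroups = {span(g) for g in gens}

assert len(subgroups) == l + 1              # pairwise distinct
assert all(len(H) == l for H in subgroups)  # each has order l
# every nonzero element generates one of them, so the list is complete
assert all(span((x, y)) in subgroups
           for x in range(l) for y in range(l) if (x, y) != (0, 0))
```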
We now study isogenies obtained by composition, and isogenies whose degree is a prime power. It turns out that these correspond to each other under certain conditions. The first condition is that the isogeny is cyclic. Since every group of prime order is cyclic, all $\ell$-isogenies are cyclic (meaning they have cyclic kernel). However, this is not necessarily true for isogenies whose degree is not prime. The second condition is that there is no backtracking, defined as follows:
\begin{definition}
For a chain of isogenies $\phi_m \circ \phi_{m-1} \circ \ldots \circ \phi_1$ ($\phi_i:E_{i-1}\rightarrow E_i$), we say that it has \emph{no backtracking} if $\phi_{i+1}\ne \epsilon\circ \hat\phi_i $ for all $i = 1,\ldots,m-1$ and any $\epsilon\in {\mathrm{Aut}} (E_{i+1}) $, since this corresponds to a walk in the $\ell$-isogeny graph without backtracking.
\end{definition}
In the following, we show that chains of $\ell$-isogenies of length $m$ without backtracking correspond to cyclic $\ell^m$-isogenies. Recall that we are only considering separable isogenies throughout.
\begin{lemma}\label{lemma::isogeny-composition-1}
Let $\ell$ be a prime, and let $\phi$ be a separable $\ell^m$-isogeny with cyclic kernel. Then there exist cyclic $\ell$-isogenies $\phi_1,\ldots,\phi_{m}$ such that
$ \phi = \phi_m \circ \phi_{m-1} \circ \ldots \circ \phi_1 $ without backtracking.
\end{lemma}
\begin{proof}
Assume that $\phi : E_0 \rightarrow E$, and that its kernel is $\langle P_0 \rangle \subseteq E_0$, where $P_0$ has order $\ell^m$. For $i = 1,\ldots,m$, let
$$ \phi_i : E_{i-1} \rightarrow E_i $$
be an isogeny with kernel $\langle \ell^{m-i}P_{i-1} \rangle$, where $P_i = \phi_{i}(P_{i-1}).$
We show that $\phi_i$ is an $\ell$-isogeny for $i \in \{1,\ldots,m\}$ by observing that $\ell^{m-i}P_{i-1}$ has order $\ell$. The statement is trivial for $i=1$. For $i \geq 2$, clearly $\ell^{m-i}P_{i-1} = \ell^{m-i}\phi_{i-1}(P_{i-2}) = \phi_{i-1}(\ell^{m-i}P_{i-2}) \neq \O$, since $\ell^{m-i}P_{i-2} \notin \ker \phi_{i-1} = \langle \ell^{m-(i-1)}P_{i-2} \rangle = \{ \ell^{m-(i-1)}P_{i-2}, 2\ell^{m-(i-1)}P_{i-2},\ldots,(\ell-1)\ell^{m-(i-1)}P_{i-2} \} $. Furthermore, $\ell \cdot \ell^{m-i}P_{i-1} = \ell^{m-(i-1)}\phi_{i-1}(P_{i-2}) = \phi_{i-1}(\ell^{m-(i-1)}P_{i-2}) = \O$, using the definition of $\ker \phi_{i-1}$.
Next, we show by induction that $\phi_i \circ \ldots \circ \phi_1$ has kernel $\langle \ell^{m-i}P_0 \rangle$. Then it follows that $\phi_m \circ \ldots \circ \phi_1$ is the same as $\phi$ up to an automorphism $\epsilon$ of $E$, since the two have the same kernel. Replacing $\phi_m$ with $\epsilon\circ \phi_m$ if necessary we have $ \phi = \phi_m \circ \phi_{m-1} \circ \ldots \circ \phi_1 .$
The case $i = 1$ is trivial: $\phi_1 : E_0 \rightarrow E_1$ has kernel $\langle \ell^{m-1}P_0 \rangle$ by definition. Now assume the statement is true for $i-1$. Then, we have $\langle \ell^{m-i}P_0 \rangle \subseteq \ker \phi_{i} \circ \ldots \circ \phi_1$. Conversely, let $Q \in \ker \phi_i\circ\ldots\circ\phi_1$. Then $\phi_{i-1}\circ\ldots\circ\phi_1(Q) \in \ker\phi_i = \langle \ell^{m-i}P_{i-1} \rangle = \phi_{i-1}(\langle \ell^{m-i}P_{i-2} \rangle) = \ldots = \phi_{i-1}\circ\ldots\circ\phi_1(\langle \ell^{m-i}P_0 \rangle)$ and hence $Q \in \langle \ell^{m-i}P_0 \rangle + \ker\phi_{i-1}\circ\ldots\circ\phi_1 = \langle \ell^{m-i}P_0 \rangle + \langle \ell^{m-(i-1)}P_0 \rangle = \langle \ell^{m-i}P_0 \rangle$.
Finally, we show that there is no backtracking in $\phi_m\circ\ldots\circ\phi_1$. Suppose to the contrary that there is an $i \in \{1,\ldots,m-1\}$ and $\epsilon\in {\mathrm{Aut}}(E_{i+1})$ such that $\phi_{i+1}=\epsilon\circ \hat \phi_i$. Then, since $\ker (\phi_{i+1}\circ \phi_i )=\ker(\epsilon\circ \hat\phi_i\circ \phi_i )= \ker [\ell]$, we have $\ker(\phi_{i+1}\circ \phi_i \circ\phi_{i-1}\circ\ldots\circ\phi_1) =\ker ([\ell]\circ\phi_{i-1}\circ\ldots\circ\phi_1)$. Notice that $[\ell]$ commutes with all $\phi_j$, and hence $E_0[\ell] \subseteq \ker(\phi_{i+1}\circ \phi_i \circ\phi_{i-1}\circ\ldots\circ\phi_1)\subseteq \ker(\phi_{m}\circ \phi_i \circ\phi_{i-1}\circ\ldots\circ\phi_1)=\ker\phi$. Since $E_0[\ell] \cong {\mathbb{Z}}/\ell{\mathbb{Z}} \times {\mathbb{Z}}/\ell{\mathbb{Z}}$, the kernel of $\phi$ cannot be cyclic, a contradiction.
\end{proof}
\begin{remark}
It is clear that in the above lemma, if $\phi$ is defined over a finite field ${\mathbb{F}}_q$, then all $\phi_i$ are also defined over this field. Namely, if $E_0$ is defined over ${\mathbb{F}}_q$ and the kernel is generated by an ${\mathbb{F}}_q$-rational point, then by V\'elu we obtain ${\mathbb{F}}_q$-rational formulas for $\phi_1$, which means that $\phi_1$ is defined over ${\mathbb{F}}_q$, and so on.
\end{remark}
\begin{lemma}\label{lemma::isogeny-composition-2}
Let $\ell$ be a prime, let $E_i$ be elliptic curves for $i = 0,\ldots,m$, and let $\phi_i : E_{i-1} \rightarrow E_i$ be $\ell$-isogenies for $i = 1,\ldots,m$ such that $\phi_{i+1}\ne \epsilon\circ \hat\phi_i $ for $i = 1,\ldots,m-1$ and any $\epsilon\in {\mathrm{Aut}} (E_{i+1}) $ (i.e.\ there is no backtracking). Then $\phi_m \circ \ldots \circ \phi_1$ is a cyclic $\ell^m$-isogeny.
\end{lemma}
\begin{proof}
The degree of isogenies multiplies when they are composed, see e.g.\ \cite[Ch.\ III.4]{silverman}. Hence we are left with proving that the composition of the isogenies is cyclic.
First note that all $\phi_i$ are cyclic since they have prime degree, and denote by $P_{i-1} \in E_{i-1}$ the generators of the respective kernels. Let $Q_{m-1}$ be a point on $E_{m-1}$ such that $\ell Q_{m-1} = P_{m-1}$. Notice that such a point always exists over the algebraic closure of the field of definition of the curve. Let $R_{m-2} = \hat{\phi}_{m-1}(Q_{m-1})$, where the hat denotes the dual isogeny. Then $\phi_m \circ \phi_{m-1}(R_{m-2}) = \phi_m \circ \phi_{m-1} \circ \hat{\phi}_{m-1}(Q_{m-1}) = \phi_m \circ [\ell] (Q_{m-1}) = \phi_m(\ell Q_{m-1}) = \phi_m(P_{m-1}) = \mathcal{O}$, and hence $R_{m-2}$ is in the kernel of $\phi_m \circ \phi_{m-1}$.
Next we show that $R_{m-2}$ has order $\ell^2$, which implies that it generates the kernel of $\phi_m \circ \phi_{m-1}$. Suppose that $\ell R_{m-2} = \O$. Then $\O = \ell R_{m-2} = \ell \hat{\phi}_{m-1}(Q_{m-1}) = \hat{\phi}_{m-1}(P_{m-1})$. Since $P_{m-1}$ has order $\ell$, this implies that $P_{m-1}$ generates the kernel of $\hat{\phi}_{m-1}$. However, $P_{m-1}$ also generates the kernel of $\phi_m$, so $\epsilon\circ \hat{\phi}_{m-1} = \phi_m$ for some $\epsilon\in {\mathrm{Aut}}(E_m)$. But this is a contradiction to the assumption of no backtracking.
By iterating this argument, we obtain a point $R_0$ which generates the kernel of $\phi_m\circ\ldots\circ\phi_1$, and hence this isogeny is cyclic.
\end{proof}
Combining Lemmas \ref{lemma::isogeny-composition-1} and \ref{lemma::isogeny-composition-2}, we obtain the following correspondence.
\begin{corollary}\label{corollary::correspondence}
Let $\ell$ be a prime and $m$ a positive integer. There is a one-to-one correspondence between cyclic separable $\ell^m$-isogenies and chains of separable $\ell$-isogenies of length $m$ without backtracking. (Here we do not distinguish between isogenies that differ by composition with an automorphism on the image.)
\end{corollary}
Next, we investigate how many such isogenies there are. We start by studying $\ell^m$-isogenies. The following group theory result is crucial.
\begin{lemma}\label{lemma::any-subgroups}
Let $\ell$ be a prime and $m$ a positive integer. Then the number of subgroups of ${\mathbb{Z}}/\ell^m{\mathbb{Z}} \times {\mathbb{Z}}/\ell^m{\mathbb{Z}}$ of order $\ell^m$ is $\frac{\ell^{m+1}-1}{\ell-1}$, and $\ell^m + \ell^{m-1}$ of these subgroups are cyclic.
\end{lemma}
\begin{proof}
Every subgroup of ${\mathbb{Z}}/\ell^m{\mathbb{Z}} \times {\mathbb{Z}}/\ell^m{\mathbb{Z}}$
is isomorphic to ${\mathbb{Z}}/\ell^i{\mathbb{Z}} \times {\mathbb{Z}}/\ell^j{\mathbb{Z}}$ for $0 \leq i \leq j \leq m$. The number of subgroups
which are isomorphic to ${\mathbb{Z}}/\ell^i{\mathbb{Z}} \times {\mathbb{Z}}/\ell^j{\mathbb{Z}}$ is 1 if $i=j$ and $\ell^{j-i} + \ell^{j-i-1}$
otherwise.
A direct consequence of the above statement is that there are
\[ \sum_{i=0}^{\floor{\frac{m-1}{2}}} \left( \ell^{m-2i} + \ell^{m-2i-1} \right) + \epsilon_m = \sum_{t=0}^{m} \ell^t \]
subgroups, where $\epsilon_m = 1$ if $m$ is even and $0$ otherwise. This proves the first statement.
For the second statement, let $H$ be a cyclic subgroup of ${\mathbb{Z}}/\ell^m{\mathbb{Z}} \times {\mathbb{Z}}/\ell^m{\mathbb{Z}}$
of order $\ell^m$. Then $H$ is generated by an element of ${\mathbb{Z}}/\ell^m{\mathbb{Z}} \times {\mathbb{Z}}/\ell^m{\mathbb{Z}}$
of order $\ell^m$, and contains $\ell^m - \ell^{m-1}$ elements of order $\ell^m$. Therefore, the number
of such subgroups is the number of elements of ${\mathbb{Z}}/\ell^m{\mathbb{Z}} \times {\mathbb{Z}}/\ell^m{\mathbb{Z}}$ of order
$\ell^m$ divided by $\ell^m - \ell^{m-1}$.
Let $(a,b)$ be an element of ${\mathbb{Z}}/\ell^m{\mathbb{Z}} \times {\mathbb{Z}}/\ell^m{\mathbb{Z}}$ of order $\ell^m$. Then at least one of
$a$ or $b$ has order $\ell^m$. If $a$ has order $\ell^m$, then there are $\varphi(\ell^m)=\ell^m - \ell^{m-1}$ choices for
$a$, and $\ell^m$ for $b$. That is, there are $\ell^m\cdot(\ell^m-\ell^{m-1})$ choices in total.
Otherwise, there are $\ell^{m-1}$ choices for $a$ (the number of elements of
order at most $\ell^{m-1}$), and $\ell^m - \ell^{m-1}$ choices for $b$. That is, there are
$\ell^{m-1}\cdot(\ell^m-\ell^{m-1})$ choices in total. This means the total number of cyclic
subgroups of ${\mathbb{Z}}/\ell^m{\mathbb{Z}} \times {\mathbb{Z}}/\ell^m{\mathbb{Z}}$ of order $\ell^m$ is
\[ \frac{\ell^m\cdot(\ell^m-\ell^{m-1}) + \ell^{m-1}\cdot(\ell^m-\ell^{m-1})}{\ell^m - \ell^{m-1}}= \ell^m + \ell^{m-1}. \]
\end{proof}
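For small parameters the two counts in the lemma can be verified by brute force; the following Python sketch is exhaustive (and hence only feasible for tiny $\ell$ and $m$), using the fact that every subgroup of ${\mathbb{Z}}/n{\mathbb{Z}} \times {\mathbb{Z}}/n{\mathbb{Z}}$ is generated by at most two elements:

```python
from itertools import product

def subgroup_counts(l, m):
    """(#subgroups of order l^m, #cyclic ones) in Z/l^m x Z/l^m, by brute force."""
    n = l**m
    elems = list(product(range(n), repeat=2))

    def gen(g, h=(0, 0)):
        # subgroup generated by g and h: all combinations a*g + b*h
        return frozenset(((a * g[0] + b * h[0]) % n, (a * g[1] + b * h[1]) % n)
                         for a in range(n) for b in range(n))

    # every subgroup of Z/n x Z/n is generated by at most two elements
    subs = {H for g in elems for h in elems for H in [gen(g, h)] if len(H) == n}
    cyclic = {H for H in subs if any(gen(g) == H for g in H)}
    return len(subs), len(cyclic)

for l, m in [(2, 1), (2, 2), (2, 3), (3, 1), (3, 2)]:
    total, cyc = subgroup_counts(l, m)
    assert total == (l**(m + 1) - 1) // (l - 1)
    assert cyc == l**m + l**(m - 1)
```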
\begin{remark}
One could also see the first statement in the lemma above by noting that this is the same as the degree of the Hecke operator $T_{\ell^m}$ which is $\sigma_1(\ell^m)$. We thank the referee for pointing this out.
\end{remark}
\begin{corollary}\label{corollary::number-isogenies}
There are $\frac{\ell^{m+1}-1}{\ell-1}$ separable $\ell^m$-isogenies originating at a fixed elliptic curve, and $\ell^m + \ell^{m-1}$ of them are cyclic. (Here, as before, we do not distinguish between isogenies that differ by composition with an automorphism of the image.)
\end{corollary}
Using the correspondence from Corollary \ref{corollary::correspondence}, we then obtain the following.
\begin{theorem}\label{thm::number-chains}
The number of chains of $\ell$-isogenies of length $m$ without backtracking is $\ell^m+ \ell^{m-1}$. (Here we do not distinguish between isogenies that differ by composition with an automorphism on the image.)
\end{theorem}
This last result can be observed in a much more elementary way, which is also enlightening. We consider chains of $\ell$-isogenies of length $m$. To analyze the situation, it is helpful to draw a graph similar to an $\ell$-isogeny graph but that does {\it not} identify isomorphic curves.
This graph is an $(\ell+1)$-regular tree of depth $m$. The root of the tree has $\ell+1$ children, and every other node (except the leaves) has $\ell$ children. The leaves have depth $m$. It is easy to work out that the number of leaves in this tree is $(\ell+1)\ell^{m-1}$, and this is also equal to the number of paths of length $m$ without backtracking, as stated in Theorem \ref{thm::number-chains}.
Finally, this graph also helps us count the number of chains of $\ell$-isogenies of length $m$ including those that backtrack. By examining the graph carefully, we can see that the number of distinct endpoints of such walks is $\ell^m + \ell^{m-1} + \ldots + \ell + 1$, and according to Corollary \ref{corollary::number-isogenies}, this corresponds to the number of $\ell^m$-isogenies that are not necessarily cyclic.
These results were also observed experimentally using Sage. The numbers match the results of our experiments for small values of $\ell$ and $m$, over various finite fields and for different choices of elliptic curves, see Table \ref{table::endpoints}. Notice that the images under isogenies with distinct kernels may be isomorphic, leading to double edges in an isogeny graph that identifies isomorphic curves. Hence, the number of isomorphism classes of images (i.e.\ the number of neighbors in the isogeny graph) may be smaller than the number of isogenies stated in the table.
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
$\ell$ & $m$ & number of isogenies & number of isogenies \\
& & without backtracking & with backtracking \\\hline
2 & 4 & 24 & 31 \\
2 & 5 & 48 & 63 \\
2 & 6 & 96 & 127 \\
2 & 7 & 192 & 255 \\
3 & 4 & 108 & 121 \\
3 & 5 & 324 & 364 \\\hline
\end{tabular}
\end{center}
\caption{For small fixed $\ell$ and $m$, values obtained experimentally for the number of $\ell$-isogeny-chains of length $m$ starting at a fixed elliptic curve $E$ without and with backtracking.}
\label{table::endpoints}
\end{table}
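Both columns of Table \ref{table::endpoints} can also be reproduced, independently of any elliptic-curve computation, by enumerating walks in the tree described above. In the Python sketch below a vertex is encoded by its path from the root; note that in a tree the first upward step of any walk immediately reverses the preceding downward step, so the walks without backtracking are exactly the strictly descending ones:

```python
def walk_counts(l, m):
    """For walks of length m from the root of the (l+1)-regular tree, return
    (#walks without backtracking, #distinct endpoints over all walks).

    A vertex is its path from the root (a tuple of child labels); a step
    either descends to a child or pops back to the parent.  The first upward
    step of a walk immediately reverses the preceding downward step, so the
    non-backtracking walks are the strictly descending ones.
    """
    endpoints = set()
    no_backtrack = 0

    stack = [((), m, False)]            # (current vertex, steps left, went up?)
    while stack:
        path, steps, went_up = stack.pop()
        if steps == 0:
            endpoints.add(path)
            if not went_up:
                no_backtrack += 1
            continue
        for c in range(l + 1 if not path else l):   # descend to each child
            stack.append((path + (c,), steps - 1, went_up))
        if path:                                    # ascend to the parent
            stack.append((path[:-1], steps - 1, True))
    return no_backtrack, len(endpoints)

# rows of the table: (l, m, without backtracking, with backtracking)
for l, m, nb, wb in [(2, 4, 24, 31), (2, 5, 48, 63), (2, 6, 96, 127),
                     (2, 7, 192, 255), (3, 4, 108, 121), (3, 5, 324, 364)]:
    assert walk_counts(l, m) == (nb, wb)
```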
\part{\Large Constructions of Ramanujan graphs}\label{part2}
\newcommand{{\mathbf{e}}}{{\mathbf{e}}}
\newcommand{{\mathbf{v}}}{{\mathbf{v}}}
\newcommand{{\mathbf{u}}}{{\mathbf{u}}}
\newcommand{{\mathrm{GL}}_2}{{\mathrm{GL}}_2}
\newcommand{{\mathrm{SL}}_2}{{\mathrm{SL}}_2}
\newcommand{{\QQ_v}}{{{\mathbb{Q}}_v}}
\newcommand{{\ZZ_v}}{{{\mathbb{Z}}_v}}
\newcommand{{\QQ_p}}{{{\mathbb{Q}}_p}}
\newcommand{{\ZZ_p}}{{{\mathbb{Z}}_p}}
In this section we review the constructions of two families of Ramanujan graphs, LPS graphs and Pizer graphs. Ramanujan graphs are optimal expanders; see Section \ref{subset:autoprelim} for some related background. The purpose is twofold. On the one hand we wish to explain how equivalent constructions of the same object highlight different significant properties. On the other hand, we wish to explicate the relationship between LPS graphs and Pizer graphs.
Both families (LPS and Pizer) of Ramanujan graphs can be viewed (cf.\ \cite[Section 3]{li:1996}) as a set of ``local double cosets'', i.e.\ as a graph on
\begin{equation}\label{eq:doublecoset-PGL}
\Gamma \backslash {\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l}),
\end{equation}
where $\Gamma $ is a discrete cocompact subgroup.
In both cases, one has a chain of isomorphisms that are used to show these graphs are Ramanujan, and in both cases one may in fact vary parameters to get an infinite family of Ramanujan graphs.
To explain this better, we introduce some notation. Let us choose a pair of distinct primes $p$ and $l$ for an $(l+1)$-regular graph whose size depends on $p.$ (An infinite family of Ramanujan graphs is formed by varying $p.$) Let us fix a quaternion algebra $B$ defined over ${\mathbb{Q}}$ and ramified at exactly one finite prime and at $\infty ,$ together with an order $\O$ of $B.$ Let $\mathbb{A}$ denote the ad\`{e}les of ${\mathbb{Q}}$ and $\mathbb{A}_f$ denote the finite ad\`{e}les. For precise definitions see Section \ref{subset:autoprelim}.
In the case of Pizer graphs, let $B=B_{p,\infty}$ be ramified at $p$ and $\infty ,$ and take $\O$ to be a maximal order (i.e.\ an order of level $p$).\footnote{A similar construction exists for a more general $\O .$ However, to relate the resulting graph to supersingular isogeny graphs, we require $\O$ to be maximal. } Then we may construct (as in \cite{pizer:1998}) a graph by giving its adjacency matrix as a Brandt matrix. (The Brandt matrix is given via an explicit matrix representation of a Hecke operator associated to $\O$.) Then we have (cf.\ \cite[(1)]{lauter:2009r}) a chain of isomorphisms connecting \eqref{eq:doublecoset-PGL} with supersingular isogeny graphs (SSIG) discussed in Part \ref{part1} above:
\begin{equation}\label{eq:whatEyalwrote}
(\O[l^{-1}])^{\times }\backslash {\mathrm{GL}}_2({\QQ_l})/{\mathrm{GL}}_2({\ZZ_l}) \cong B^{\times }({\mathbb{Q}})\backslash B^{\times }(\mathbb{A}_f)/B^{\times }(\hat{{\mathbb{Z}}})\cong {\mathrm{Cl}} \O \cong \text{SSIG}.
\end{equation}
This can be used (cf.\ \cite[5.3.1]{lauter:2009r}) to show that the supersingular $l$-isogeny graph is connected, as well as the fact that it is indeed a Ramanujan graph.
In the case of LPS graphs the choices are very different. Let $B=B_{2,\infty }$ now be the Hamiltonian quaternion algebra. The group $\Gamma $ in \eqref{eq:doublecoset-PGL} is chosen as a congruence subgroup dependent on $p.$ This leads to a larger graph whose construction fits into the following chain of isomorphisms:
\begin{equation}\label{eq:LPSchain1}
{\mathrm{PSL}_2}({\mathbb{F}}_p)\cong \Gamma(2p)\backslash \Gamma(2)\cong \Gamma(2p)\backslash T\cong \Gamma(2p)\backslash {\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l})\cong G'({\mathbb{Q}})\backslash H_{2p}/G'({\mathbb{R}})K_0^{2p}.
\end{equation}
The isomorphic constructions and their relationship will be made explicit in Sections \ref{subsubsect:CaylePres}-\ref{subsubsect:localdoublecosets} and Section \ref{subsubsect:SA_LPS}.
We shall also explain how properties of the graph, such as its regularity, connectedness and the Ramanujan property, are highlighted by this chain of isomorphisms. For now we give only an overview, to be able to compare this case with that of Pizer graphs. The quotient ${\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l})$ has a natural structure of an infinite tree $T.$ This tree can be defined in terms of homothety classes of rank two lattices of ${\QQ_l}\times {\QQ_l}$ (see Section \ref{subsubsect:infinitelatticetree}). One may define a group $G'=B^{\times }/Z(B^{\times })$ and its congruence subgroups $\Gamma (2)$ and $\Gamma (2p),$ and show that the discrete group $\Gamma (2)$ acts simply transitively on the tree $T,$ and hence $\Gamma(2p)\backslash T$ is isomorphic to the finite group $ \Gamma(2)/\Gamma(2p) .$ Using the Strong Approximation theorem, this turns out to be isomorphic to the group ${\mathrm{PSL}_2}({\mathbb{F}}_p) .$ The latter has a structure of an $(l+1)$-regular Cayley graph. A second application of the Strong Approximation Theorem with $K_0^{2p}$, an open compact subgroup of $G'(\mathbb{A}_f)$, shows that $H_{2p}$ is a finite index normal subgroup of $G'(\mathbb{A})$.
Note that an immediate distinction between Pizer and LPS graphs is that the quaternion algebras underlying the constructions are different: they ramify at different finite primes ($p$ and $2,$ respectively). In addition, the size of the discrete subgroup $\Gamma $ determining the double cosets of \eqref{eq:doublecoset-PGL} is different in the two cases. Accordingly, the size of the resulting graphs is different as well. We shall see
that (under appropriate assumptions on $p$ and $l$) the Pizer graph has $\frac{p-1}{12}$ vertices, while the LPS graph has order $|{\mathrm{PSL}_2}({\mathbb{F}}_p)|=\frac{p(p^2-1)}{2}.$ One may consider an order $\O_{LPS}$ such that $(\O_{LPS}[l^{-1}])^{\times }\cong \Gamma (2p)$
analogously to the relationship of $\O $ and $\Gamma $ in the Pizer case and \eqref{eq:whatEyalwrote}. However, this order $\O_{LPS}$ is unlike the Eichler order from the Pizer case. (It has a much higher level.) In particular, there is a discrepancy between the order of the class set ${\mathrm{Cl}} \O_{LPS}$ and the order of the LPS graph. This is a numerical obstruction indicating that an analogue of the chain \eqref{eq:whatEyalwrote} for LPS graphs is at the very least not straightforward.
The rest of the paper has the following outline. In Section \ref{subsect:LPS} we explore the isomorphic constructions of LPS graphs from \eqref{eq:LPSchain1}. We give the construction as a Cayley graph in Section \ref{subsubsect:CaylePres}. The infinite tree of homothety classes of lattices is given in Section \ref{subsubsect:infinitelatticetree}. In Section \ref{subsubsect:localdoublecosets} we explain how local double cosets of the Hamiltonian quaternion algebra connect these constructions. Section \ref{app:sect:explicitisom} makes one step of the chain of isomorphisms in \eqref{eq:LPSchain1} completely explicit in the case of $l=5$ and $l=13,$ and describes how the same can be done in general. In Section \ref{subsect:StrongApprox} we give an overview of how Strong Approximation plays a role in proving the isomorphisms and the connectedness and Ramanujan property of the graphs. In Section \ref{sect:Pizer} we turn briefly to Pizer graphs. We summarize the construction, and explain how various restrictions on the prime $p$ guarantee properties of the graph. Section \ref{subsect:Pizer_primes} contains the computation of a prime $p$ where the existence of both an LPS and a Pizer construction is guaranteed (for $l=5$). In Section \ref{subsect:LPS_Pizer_compare} we say a bit more about the relationship of Pizer and LPS graphs, having introduced more of the objects mentioned in passing above.
Throughout this part of the paper we aim to only include technical details if we can make them fairly self-contained and explicit, and otherwise to give a reference for further information.
\section{Background on Ramanujan graphs and ad\`{e}les}\label{subset:autoprelim}
In this section we fix notation and review some definitions and facts that we will be using for the remainder of Part \ref{part2}.
Expander graphs are graphs where small sets of
vertices have many neighbors. For many applications of expander graphs, such as in Part \ref{part1}, one wants $(l+1)$-regular expander graphs $X$ with $l$ small and the number of vertices of $X$ large.
If $X$ is an $(l+1)$-regular graph (i.e.\ every vertex has degree $l+1$), then $l+1$ is an eigenvalue of the adjacency matrix of $X.$ All eigenvalues $\lambda $ satisfy $-(l+1)\leq \lambda \leq l+1$, and $-(l+1)$ is an eigenvalue if and only if $X$ is bipartite.
Let $\lambda(X)$ be the second largest eigenvalue in absolute value of the adjacency matrix. The smaller $\lambda(X)$ is, the better an expander $X$ is. Alon--Boppana proved that for an \emph{infinite} family of $(l+1)$-regular graphs of increasing size, $\liminf_{X}\lambda (X)\geq 2 \sqrt{l}$ \cite{alon:1986}. An $(l+1)$-regular graph $X$ is called Ramanujan if $\lambda(X)\leq 2 \sqrt l$. Thus an infinite family of Ramanujan graphs consists of asymptotically optimal expanders.
For a finite prime $p$, let ${\mathbb{Q}}_p$ denote the field of $p$-adic numbers and ${\mathbb{Z}}_p$ its ring of integers. Let ${\mathbb{Q}}_\infty={\mathbb{R}}$. We denote the ad\`{e}le ring of ${\mathbb{Q}}$ by $\mathbb{A}$ and recall that it is defined as a restricted direct product in the following way,
\[
\mathbb{A}=\sideset{}{'}\prod_p{\mathbb{Q}}_p=\left\{ (a_p)\in \prod_p {\mathbb{Q}}_p : a_p \in {\mathbb{Z}}_p \text{ for all but a finite number of $p< \infty$}\right\}.
\]
We denote the ring of finite ad\`{e}les by $\mathbb{A}_f$, that is
\[
\mathbb{A}_f=\sideset{}{'}\prod_{p<\infty}{\mathbb{Q}}_p=\left\{ (a_p)\in \prod_{p<\infty} {\mathbb{Q}}_p : a_p \in {\mathbb{Z}}_p \text{ for all but a finite number of $p$}\right\}.
\]
Let $\mathbb{A}^{\times}$ denote the id\`{e}le group of ${\mathbb{Q}}$, the group of units of $\mathbb{A}$,
\[
\mathbb{A}^{\times}=\sideset{}{'}\prod_p{\mathbb{Q}}_p^{\times}=\left\{ (a_p)\in \prod_p {\mathbb{Q}}_p^\times : a_p \in {\mathbb{Z}}_p^\times \text{ for all but a finite number of $p< \infty$}\right\}.
\]
Let $B$ be a quaternion algebra over ${\mathbb{Q}}$, $B^\times$ the invertible elements of $B$ and $\O$ an order of $B$. For a prime $p$ let $\O_p=\O\otimes_{{\mathbb{Z}}}{\mathbb{Z}}_p$. Then let
\[
B^\times(\mathbb{A})=\sideset{}{'}\prod_pB^\times({\mathbb{Q}}_p)=\left\{(g_p)\in \prod_p B^\times({\mathbb{Q}}_p) : g_p\in \O_p^\times \text{ for all but a finite number of $p< \infty$}\right\}.
\]
More generally for an indexed set of locally compact groups $\{G_v\}_{v\in I}$ with a corresponding indexed set of compact open subgroups $\{K_v\}_{v\in I}$ we may define the restricted direct product of the $G_v$ with respect to the $K_v$ by the following
\[G:=\sideset{}{'}\prod_{v\in I} G_v=\left\{ (g_v)\in \prod_{v\in I} G_v : g_v \in K_v \text{ for all but a finite number of $v$} \right\}.
\]
If we define a neighborhood base of the identity as
\[
\left\{\prod_v U_v : U_v \text{ neighborhood of identity in $G_v$ and $U_v=K_v$ for all but a finite number of $v$}\right\}
\]
then $G$ is a locally compact topological group.
\section{LPS Graphs}\label{subsect:LPS}
We describe the LPS graphs used in \cite{lauter:2009} for a proposed hash function. They were first considered in \cite{lubotzky:1988}, for further details see also \cite{lubotzky:2010}. We shall examine the objects and isomorphisms in \eqref{eq:LPSchain1} in more detail. We review constructions of these graphs in turn as Cayley graphs and graphs determined by rank two lattices or, equivalently, local double cosets. Throughout this section, let $l$ and $p$ be distinct, odd primes both congruent to $1$ modulo $4.$ We shall give constructions of $(l+1)$-regular Ramanujan graphs whose size depends on $p.$
We shall also assume for convenience\footnote{If $p$ is not a square modulo $l,$ then the constructions described below result in bipartite Ramanujan graphs with twice as many vertices.} that $\left(\frac{p}{l}\right)=1,$ i.e. that $p$ is a square modulo $l.$
\subsection{Cayley graph over ${\mathbb{F}}_p.$}\label{subsubsect:CaylePres}
This description follows \cite[Section 2]{lubotzky:1988}. The graph we are interested in is the Cayley graph of the group ${\mathrm{PSL}_2}({\mathbb{F}}_p).$ We specify a set of generators $S$ below. The vertices of the graph are the $\frac{p(p^2-1)}{2}$ elements of ${\mathrm{PSL}_2}({\mathbb{F}}_p).$ Two vertices $g_1,g_2\in {\mathrm{PSL}_2}({\mathbb{F}}_p)$ are connected by an edge if and only if $g_2=g_1h$ for some $h\in S.$
Next we give the set of generators $S.$ Since $l\equiv 1\mod{4}$ it follows from a theorem of Jacobi \cite[Theorem 2.1.8]{lubotzky:2010} that there are $l+1$ integer solutions to
\begin{equation}\label{eq:lsquaresumconds}
l=x_0^2+x_1^2+x_2^2+x_3^2;\ \ 2\nmid x_0;\ \ x_0>0.
\end{equation}
In this case we will also have $2|x_i$ for all $i>0.$
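Jacobi's count, and the parity observation just made, are easy to confirm by exhaustive search; a small Python sketch for a few primes $l \equiv 1 \pmod 4$:

```python
from itertools import product

def jacobi_solutions(l):
    """All integer solutions of l = x0^2 + x1^2 + x2^2 + x3^2 with x0 odd, x0 > 0."""
    r = int(l**0.5)
    return [(x0, x1, x2, x3)
            for x0 in range(1, r + 1, 2)
            for x1, x2, x3 in product(range(-r, r + 1), repeat=3)
            if x0 * x0 + x1 * x1 + x2 * x2 + x3 * x3 == l]

for l in (5, 13, 17, 29):
    sols = jacobi_solutions(l)
    assert len(sols) == l + 1                                # exactly l+1 solutions
    assert all(x % 2 == 0 for (_, *xs) in sols for x in xs)  # x1, x2, x3 even
```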
Let $S$ be the set of solutions of \eqref{eq:lsquaresumconds}. Since $p\equiv1\mod{4}$ we have $\left(\frac{-1}{p}\right)=1.$ Let $\varepsilon \in {\mathbb{Z}} $ such that $\varepsilon^2\equiv -1\mod{p}.$ Then to each solution of \eqref{eq:lsquaresumconds} we assign an element of ${\mathrm{PGL}_2}({\mathbb{Z}})$ as follows:
\begin{equation}\label{eq:elementsofSmappedtomtcesmodp}
(x_0,x_1,x_2,x_3)\mapsto \tbtmtx{x_0+x_1\varepsilon}{x_2+x_3\varepsilon}{-x_2+x_3\varepsilon}{x_0-x_1\varepsilon}.
\end{equation}
Note that the matrix on the right-hand side has determinant $l\mod{p}.$ Since $\left(\frac{l}{p}\right)=1$ this determines an element of ${\mathrm{PSL}_2}({\mathbb{F}}_p).$
The $l+1$ elements of ${\mathrm{PSL}_2}({\mathbb{F}}_p)$ determined by \eqref{eq:elementsofSmappedtomtcesmodp} form the set of Cayley generators. Let us abuse notation and denote this set by $S$ as well.
This graph is connected. To prove this fact, one may use the theory of quadratic Diophantine equations \cite[Proposition 3.3]{lubotzky:1988}. Alternately, the chain of isomorphisms \eqref{eq:LPSchain1} proves this fact by relating this Cayley graph to a quotient of a connected graph \cite[Theorem 7.4.3]{lubotzky:2010}: the infinite tree we shall describe in the next section.
The solutions $(x_0,x_1,x_2,x_3)$ and $(x_0,-x_1,-x_2,-x_3)$ correspond to elements of $S$ that are inverses in ${\mathrm{PSL}_2}({\mathbb{F}}_p).$ Since $|S|=l+1$ this implies that the generators determine an undirected $(l+1)$-regular graph.
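The whole construction fits in a few lines of Python. The sketch below is for the illustrative choice $l=5$, $p=29$ with $\varepsilon = 12$ (so $\varepsilon^2 \equiv -1 \pmod{29}$, and $\left(\frac{p}{l}\right)=1$ holds); it builds $S$, normalizes matrices to canonical representatives in ${\mathrm{PGL}_2}({\mathbb{F}}_p)$, and confirms by breadth-first search that the Cayley graph is connected with vertex set of size $|{\mathrm{PSL}_2}({\mathbb{F}}_p)| = p(p^2-1)/2$:

```python
from itertools import product
from collections import deque

p, l, eps = 29, 5, 12              # eps^2 = 144 = -1 (mod 29)

def canon(M):
    """Canonical representative of M's class in PGL2(F_p):
    scale by the inverse of the first nonzero entry."""
    for x in M:
        if x % p:
            inv = pow(x, p - 2, p)
            return tuple(y * inv % p for y in M)

def mul(A, B):
    a, b, c, d = A
    e, f, g, h = B
    return ((a*e + b*g) % p, (a*f + b*h) % p, (c*e + d*g) % p, (c*f + d*h) % p)

# Cayley generators: one matrix for each solution of l = x0^2+x1^2+x2^2+x3^2
r = int(l**0.5)
S = [canon(((x0 + x1*eps) % p, (x2 + x3*eps) % p,
            (-x2 + x3*eps) % p, (x0 - x1*eps) % p))
     for x0 in range(1, r + 1, 2)
     for x1, x2, x3 in product(range(-r, r + 1), repeat=3)
     if x0*x0 + x1*x1 + x2*x2 + x3*x3 == l]
assert len(S) == l + 1

# breadth-first search from the identity: the Cayley graph is connected,
# with vertex set (the canonical representatives of) PSL2(F_p)
start = canon((1, 0, 0, 1))
seen, queue = {start}, deque([start])
while queue:
    g = queue.popleft()
    for s in S:
        h = canon(mul(g, s))
        if h not in seen:
            seen.add(h)
            queue.append(h)
assert len(seen) == p * (p * p - 1) // 2       # |PSL2(F_29)| = 12180
```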
\subsection{Infinite tree of lattices}\label{subsubsect:infinitelatticetree}
Next we shall work over ${\QQ_l}.$ We give a description of the same graph in two ways: in terms of homothety classes of rank two lattices, and in terms of local double cosets of the multiplicative group of the Hamiltonian quaternion algebra. The description follows \cite[5.3, 7.4]{lubotzky:2010}. Let $B=B_{2,\infty}$ be the Hamiltonian quaternion algebra defined over ${\mathbb{Q}}.$
First we review the construction of an $(l+1)$-regular infinite tree on homothety classes of rank two lattices in ${\QQ_l}\times {\QQ_l} $ following \cite[5.3]{lubotzky:2010}. The vertices of this infinite graph are in bijection with ${\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l}).$ To obtain a finite graph, we then consider two subgroups $\Gamma(2)$ and $\Gamma(2p)$ in $B^{\times}/Z(B^{\times })$. It turns out that $\Gamma (2)$ acts simply transitively on the infinite tree, and orbits of $\Gamma (2p)$ on the tree are in bijection with the finite group $\Gamma (2)/\Gamma (2p).$ Under our assumptions the latter turns out to be in bijection with ${\mathrm{PSL}_2}({\mathbb{F}}_p)$ above and the finite quotient of the tree is isomorphic to the Cayley graph above.
We begin with the infinite tree, following \cite[5.3]{lubotzky:2010}. Consider the two dimensional vector space ${\QQ_l}\times {\QQ_l}$ with standard basis ${\mathbf{e}}_1={}^t\langle 1,0 \rangle ,$ ${\mathbf{e}}_2={}^t\langle 0,1 \rangle .$ A {\em{lattice}} is a rank two ${\ZZ_l}$-submodule $L\subset {\QQ_l}\times {\QQ_l}.$ It is generated (as a ${\ZZ_l}$-module) by two column vectors ${\mathbf{u}},{\mathbf{v}}\in {\QQ_l}\times {\QQ_l}$ that are linearly independent over ${\QQ_l}.$ We shall consider homothety classes of lattices, i.e.\ we say lattices $L_1$ and $L_2$ are equivalent if there exists a nonzero $\alpha \in {\QQ_l}$ such that $\alpha L_1=L_2.$ Writing ${\mathbf{u}},{\mathbf{v}}$ in the standard basis ${\mathbf{e}}_1,{\mathbf{e}}_2$ maps the lattice $L$ to an element $M_L\in{\mathrm{GL}}_2({\QQ_l}).$ Let ${\mathbf{u}}_1,{\mathbf{v}}_1,{\mathbf{u}}_2,{\mathbf{v}}_2\in {\QQ_l}\times {\QQ_l}$ and let $L_i={\mathrm{Span}}_{{\ZZ_l}}\{{\mathbf{u}}_i,{\mathbf{v}}_i\}$ ($i=1,2$) be the lattices generated by these respective pairs of vectors, with $M_{L_1}$ and $M_{L_2}$ the corresponding matrices. Let $M\in {\mathrm{GL}}_2({\QQ_l})$ so that $M_{L_1}M=M_{L_2}.$ Then $L_1=L_2$ (as subsets of ${\QQ_l}\times {\QQ_l}$) if and only if $M\in {\mathrm{GL}}_2({\ZZ_l}).$ It follows that the homothety classes of lattices are in bijection with ${\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l}).$ Equivalently, ${\mathrm{PGL}_2}({\QQ_l})$ acts transitively on the homothety classes of lattices, and the stabilizer of the class of the standard lattice is ${\mathrm{PGL}_2}({\ZZ_l}).$
The vertices of the infinite graph $T$ are homothety classes of lattices. The classes $[L_1],[L_2]$ are adjacent in $T$ if and only if there are representatives $L_i'\in [L_i]$ ($i=1,2$) such that $L_2'\subset L_1'$ and $[L_1':L_2']=l.$ We show that this relation defines an undirected $(l+1)$-regular graph. By the transitive action of ${\mathrm{GL}}_2({\QQ_l})$ on lattices we may assume that $L_1'={\ZZ_l}\times {\ZZ_l}={\mathrm{Span}}_{{\ZZ_l}}\{{\mathbf{e}}_1,{\mathbf{e}}_2\},$ the {\em{standard lattice}} and $L_2'\subset {\ZZ_l}\times {\ZZ_l}.$ The map ${\ZZ_l}\rightarrow{\ZZ_l}/l{\ZZ_l}\cong {\mathbb{F}}_l$ induces a map from ${\ZZ_l}\times {\ZZ_l}$ to ${\mathbb{F}}_l^2.$ Since the index of $L_2'$ in ${\ZZ_l}\times {\ZZ_l}$ is $l,$ the image of $L_2'$ is a one-dimensional vector subspace of ${\mathbb{F}}_l^2.$ This implies that $L_2'\supset \{l{\mathbf{e}}_1,l{\mathbf{e}}_2\},$ i.e.\ $L_2'\supset lL_1'$ and the graph is undirected.\footnote{I.e.\ the adjacency relation defined above is symmetric.} Furthermore, since there are $l+1$ one-dimensional subspaces of ${\mathbb{F}}_l^2,$ the graph is $(l+1)$-regular.
The $l+1$ neighbors of the standard lattice can be described explicitly by the following matrices:
\begin{equation}\label{eq:mtces_lattice_neighbours}
M_l=\tbtmtx{1}{0}{0}{l},\ M_h=\tbtmtx{l}{h}{0}{1}\text{ for }0\leq h\leq l-1
\end{equation}
For each of the matrices $M_t$ ($0\leq t\leq l$), the columns of $M_t$, reduced modulo $l,$ span a different one-dimensional subspace of ${\mathbb{F}}_l\times {\mathbb{F}}_l.$ The matrices determine the neighbors of any other lattice by a change of basis in ${\QQ_l}\times {\QQ_l}.$
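As a quick sanity check (ours, not part of the construction in \cite{lubotzky:2010}), the following Python sketch verifies for $l=5$ that the $l+1$ matrices of \eqref{eq:mtces_lattice_neighbours}, reduced mod $l$, have pairwise distinct one-dimensional column spans in ${\mathbb{F}}_l\times{\mathbb{F}}_l$:

```python
# For a small prime l, check that the l+1 matrices M_l, M_h of
# (eq:mtces_lattice_neighbours), reduced mod l, have column spans hitting
# each one-dimensional subspace of F_l^2 exactly once.
l = 5

def span_mod(M, l):
    """Return the set of vectors in the column span of M over F_l."""
    vecs = set()
    for a in range(l):
        for b in range(l):
            v = ((a * M[0][0] + b * M[0][1]) % l,
                 (a * M[1][0] + b * M[1][1]) % l)
            vecs.add(v)
    return frozenset(vecs)

matrices = [[[1, 0], [0, l]]] + [[[l, h], [0, 1]] for h in range(l)]
spans = {span_mod(M, l) for M in matrices}

# l+1 distinct lines, each containing l vectors (including 0... of size l)
assert len(spans) == l + 1
assert all(len(s) == l for s in spans)
```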
By the above we can already see that $T$ is isomorphic to the graph on ${\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l})$ with edges corresponding to multiplication by generators \eqref{eq:mtces_lattice_neighbours} above. To show that $T$ is a tree it suffices to show that there is exactly one path from the standard lattice ${\ZZ_l}\times {\ZZ_l} $ to any other homothety class. This follows from the uniqueness of the Jordan--H\"{o}lder series in a finite cyclic $l$-group as in \cite[p.\ 69]{lubotzky:2010}.
In the next section, we show that the above infinite tree is isomorphic to a Cayley graph of a subgroup of $B^{\times }/Z(B^{\times }).$ In Section \ref{app:sect:explicitisom} we give an explicit bijection between the Cayley generators and the matrices given in \eqref{eq:mtces_lattice_neighbours} above.
\subsection{Hamiltonian quaternions over a local field}\label{subsubsect:localdoublecosets}
To turn the above infinite tree into a finite, $(l+1)$-regular graph we shall define a group action on its vertices. Let $B$ be the algebra of Hamiltonian quaternions defined over ${\mathbb{Q}}.$ Let $G'$ be the ${\mathbb{Q}}$-algebraic group $B^{\times}/Z(B^{\times }).$ In this subsection we shall follow \cite[7.4]{lubotzky:2010} to define normal subgroups $\Gamma (2p)\subset \Gamma (2)$ of $\Gamma =G'({\mathbb{Z}}[l^{-1}])$ such that $\Gamma (2)$ acts simply transitively on the graph $T.$ The quotient $\Gamma(2p)\backslash T $ will be isomorphic to the Cayley graph of the finite quotient group $\Gamma (2)/\Gamma(2p).$ This graph is isomorphic to the Cayley graph of ${\mathrm{PSL}_2}({\mathbb{F}}_p)$ defined in Section \ref{subsubsect:CaylePres} above. Thus we have the following equation.
\begin{equation}\label{eq:chainofcongs_LPS_inHamiltsubsubsect}
{\mathrm{PSL}_2}({\mathbb{F}}_p)\cong \Gamma(2p)\backslash \Gamma(2)\cong \Gamma(2p)\backslash T\cong \Gamma(2p)\backslash {\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l}).
\end{equation}
We first define the groups $\Gamma ,$ $\Gamma (2),$ $\Gamma (2p)$ and then examine their relationship with $T.$ Recall that $B=B_{2,\infty }$, i.e.\ $B$ is ramified at $2$ and $\infty .$ For a commutative ring $R$ define $B(R)={\mathrm{Span}}_{R}\{1,{\mathbf{i}}, {\mathbf{j}},{\mathbf{k}}\}$ where ${\mathbf{i}}^2={\mathbf{j}}^2=-1$ and ${\mathbf{i}}{\mathbf{j}}=-{\mathbf{j}}{\mathbf{i}}={\mathbf{k}}.$ We introduce the notation $b_{x_0,x_1,x_2,x_3}:=x_0+x_1{\mathbf{i}}+x_2{\mathbf{j}}+x_3{\mathbf{k}}.$ Recall that for $b=b_{x_0,x_1,x_2,x_3}$ we may define $\bar{b}=b_{x_0,-x_1,-x_2,-x_3}$ and the {\em{reduced norm}} of $b$ as $N(b)=b\bar{b}=x_0^2+x_1^2+x_2^2+x_3^2.$ For a (commutative, unital) ring $R$ an element $b\in B(R)$ is invertible in $B(R)$ if and only if $N(b)$ is invertible in $R.$ (Then $b^{-1}=(N(b))^{-1}\bar{b}.$) Furthermore
\begin{equation}\label{eq:commutatorb}
[b_{x_0,x_1,x_2,x_3},b_{y_0,y_1,y_2,y_3}]= 2(x_2y_3-x_3y_2){\mathbf{i}} + 2(x_3y_1-x_1y_3){\mathbf{j}} +2(x_1y_2-x_2y_1){\mathbf{k}},
\end{equation}
and hence if $R$ has no zero divisors then $Z(B(R))=R.$
In particular $Z(B^{\times }({\mathbb{Z}}[l^{-1}]))=\{\pm l^k\mid k\in {\mathbb{Z}}\}.$
Recall that $S$ was the set of $l+1$ integer solutions of \eqref{eq:lsquaresumconds}. Any solution $x_0,x_1,x_2,x_3$ determines a $b=b_{x_0,x_1,x_2,x_3}\in B({\mathbb{Z}}[l^{-1}])$ such that $N(b)=l .$ Since $l$ is invertible in ${\mathbb{Z}}[l^{-1}]$ we in fact have $b\in B^{\times }({\mathbb{Z}}[l^{-1}]).$ Let $\Gamma =G'({\mathbb{Z}}[l^{-1}])=B^{\times }({\mathbb{Z}}[l^{-1}])/Z(B^{\times }({\mathbb{Z}}[l^{-1}])) $ and let us denote the image of $S$ in $\Gamma $ by $S$ as well. Since $B^{\times }({\mathbb{Z}}[l^{-1}])=\{b\in B({\mathbb{Z}}[l^{-1}])\mid N(b)=l^k,\ k\in {\mathbb{Z}} \},$ if $[b]\in \Gamma $ for $b\in B^{\times }({\mathbb{Z}}[l^{-1}])$ then it follows from \cite[Corollary 2.1.10]{lubotzky:2010} that $b$ is a unit multiple of an element of $\langle S \rangle .$ It follows that
$\Gamma =\langle S \rangle \{[1],[{\mathbf{i}}],[{\mathbf{j}}],[{\mathbf{k}}] \}$ and the index of $\langle S \rangle$ in $\Gamma $ is $4.$ In fact observe that if $b\in S$ then $b^{-1}\in S$ and \cite[Corollary 2.1.11]{lubotzky:2010} states that $\langle S \rangle $ is a free group on $\frac{l+1}{2}$ generators. We shall see that $\langle S \rangle $ agrees with a congruence subgroup $\Gamma (2).$
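The count $|S|=l+1$ can be checked directly for a small prime. The sketch below is ours and assumes, as in the LPS construction for $l\equiv 1 \pmod 4$, that the congruence conditions of \eqref{eq:lsquaresumconds} (not reproduced here) single out solutions with $x_0>0$ odd and $x_1,x_2,x_3$ even:

```python
# Count integer solutions of x0^2+x1^2+x2^2+x3^2 = l with x0 > 0 odd and
# x1, x2, x3 even (the LPS normalization for l = 1 mod 4; assumed to match
# the paper's eq. (lsquaresumconds)).
import itertools

l = 5
r = int(l ** 0.5) + 1
sols = [
    (x0, x1, x2, x3)
    for x0, x1, x2, x3 in itertools.product(range(-r, r + 1), repeat=4)
    if x0 ** 2 + x1 ** 2 + x2 ** 2 + x3 ** 2 == l
    and x0 > 0 and x0 % 2 == 1
    and x1 % 2 == 0 and x2 % 2 == 0 and x3 % 2 == 0
]
assert len(sols) == l + 1   # 6 solutions for l = 5
```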
Now let $N=2M$ be coprime to $l$ and let $R={\mathbb{Z}}[l^{-1}]/N{\mathbb{Z}}[l^{-1}].$ The quotient map ${\mathbb{Z}}[l^{-1}]\rightarrow R$ determines a map $B({\mathbb{Z}}[l^{-1}])\rightarrow B(R).$ This restricts to a map $B^{\times }({\mathbb{Z}}[l^{-1}])\rightarrow B^{\times }(R).$ Observe that if $M=1$ then $B^{\times }(R)$ is commutative. If $M=p$ then the subgroup
$$Z:=\left\lbrace b_{x_0,0,0,0}\in B^{\times }({\mathbb{Z}}[l^{-1}]/2p{\mathbb{Z}}[l^{-1}])\mid p\nmid x_0,2\nmid x_0 \right\rbrace$$
(cf.\ \cite[p. 266]{lubotzky:1988}) is central in $B^{\times }(R).$ Consider the commutative diagram:
\begin{equation}\label{eq:commdiag}
\begin{array}{ccccc}
B({\mathbb{Z}}[l^{-1}])^{\times }&\longrightarrow & B^{\times }({\mathbb{Z}}[l^{-1}]/2{\mathbb{Z}}[l^{-1}])& \longrightarrow & B^{\times }({\mathbb{Z}}[l^{-1}]/2p{\mathbb{Z}}[l^{-1}])\\
\downarrow &&\downarrow&&\downarrow\\
\Gamma & \stackrel{\pi_{2} }{\longrightarrow} & B^{\times}({\mathbb{Z}}[l^{-1}]/2{\mathbb{Z}}[l^{-1}]) &
\stackrel{\pi_{p} }{\longrightarrow} & B^{\times}({\mathbb{Z}}[l^{-1}]/2p{\mathbb{Z}}[l^{-1}])/Z
\end{array}
\end{equation}
and define\footnote{The definition here agrees with the choices in \cite{lubotzky:1988} as well as $\Gamma (N)=\ker (G'({\mathbb{Z}}[l^{-1}])\rightarrow G'({\mathbb{Z}}[l^{-1}]/N{\mathbb{Z}}[l^{-1}]))$ in \cite{lubotzky:2010}. Here $G'=B^{\times }/Z(B^{\times })$ as a ${\mathbb{Q}}$-algebraic group. Note however that by \eqref{eq:commutatorb} the center $Z(B^{\times }(R))$ for $R={\mathbb{Z}}[l^{-1}]/N{\mathbb{Z}}[l^{-1}],$ $N=2M$ may not be spanned by $1+N{\mathbb{Z}}[l^{-1}].$ In fact from \eqref{eq:commutatorb} $B^{\times }(R)$ is commutative for $M=1$ and for $M=p$ we have $Z(B^{\times }(R))=Z\oplus [p]{\mathbf{i}} +[p]{\mathbf{j}}+[p]{\mathbf{k}} .$ However the image of $\langle S\rangle $ in $ B^{\times }(R)$ is trivial if $M=1$ and intersects the center in $Z$ when $M=p.$} $\pi _{2p}:=\pi _{p}\circ \pi _{2}$ and $\Gamma (2):=\ker \pi _2$ and $\Gamma (2p)=\ker \pi _{2p}.$ Observe that by the congruence conditions (cf. \eqref{eq:lsquaresumconds}) $S\subseteq \Gamma $ is contained in $\Gamma (2)$ and in fact $\langle S\rangle =\Gamma (2)\supseteq\Gamma (2p).$ As mentioned above this implies that $\Gamma (2)$ is a free group with $\frac{l+1}{2}$ generators.
To see the action of $\Gamma (2)$ on $T$ note that $B$ splits over ${\QQ_l}$ and hence $B({\QQ_l})\cong M_2({\QQ_l}).$ Since $-1\in ({\mathbb{F}}_l^{\times})^2$ there exists an $\epsilon\in {\ZZ_l}$ such that $\epsilon^2=-1.$ Then we have an isomorphism $\sigma : B({\QQ_l})\rightarrow M_2({\QQ_l})$ \cite[p. 95]{lubotzky:2010} given by
\begin{equation}\label{eq:unramlocalisomdef}
\sigma (x_0+x_1{\mathbf{i}}+x_2{\mathbf{j}}+x_3{\mathbf{k}}) = \tbtmtx{x_0+x_1\epsilon}{x_2+x_3\epsilon}{-x_2+x_3\epsilon}{x_0-x_1\epsilon}.
\end{equation}
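That $\sigma $ is multiplicative and carries the reduced norm to the determinant can be verified numerically modulo $l$; the following sketch is ours and works in ${\mathbb{Z}}/5{\mathbb{Z}}$ with $\epsilon =2$ (so $\epsilon ^2\equiv -1$), a stand-in for the first $l$-adic digit of $\epsilon \in {\ZZ_l}$:

```python
# Check, mod l, that sigma of (eq:unramlocalisomdef) is a ring homomorphism
# and sends the reduced norm N(b) to det(sigma(b)).
import itertools

l, eps = 5, 2
assert (eps * eps + 1) % l == 0     # eps^2 = -1 in Z/lZ

def qmul(x, y):
    """Hamilton quaternion product with i^2 = j^2 = -1, ij = -ji = k."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return (x0*y0 - x1*y1 - x2*y2 - x3*y3,
            x0*y1 + x1*y0 + x2*y3 - x3*y2,
            x0*y2 - x1*y3 + x2*y0 + x3*y1,
            x0*y3 + x1*y2 - x2*y1 + x3*y0)

def sigma(x):
    x0, x1, x2, x3 = x
    return [[(x0 + x1*eps) % l, (x2 + x3*eps) % l],
            [(-x2 + x3*eps) % l, (x0 - x1*eps) % l]]

def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) % l for j in range(2)]
            for i in range(2)]

for x, y in itertools.product([(1, 2, 0, 0), (0, 1, 1, 1), (3, 0, 2, 1)], repeat=2):
    assert sigma(qmul(x, y)) == mmul(sigma(x), sigma(y))
    A = sigma(x)   # det(sigma(x)) = N(x) = x0^2+x1^2+x2^2+x3^2 (mod l)
    assert (A[0][0]*A[1][1] - A[0][1]*A[1][0]) % l == sum(t*t for t in x) % l
```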
Observe that $\sigma (B^{\times }({\mathbb{Z}}[l^{-1}]))\subseteq {\mathrm{GL}}_2({\QQ_l})$ and that $\sigma $ maps elements of the center to scalar matrices; hence $\sigma $ defines an action of $\Gamma $ (and thus of $\Gamma (2)$ and $\Gamma(2p)$) on $T,$ and this action preserves the graph structure. Observe also that $\sigma $ maps the elements of $\langle S \rangle\subseteq \Gamma $ into the congruence subgroup of ${\mathrm{PGL}_2}({\ZZ_l})$ modulo $2.$ Then we have the following.
\begin{proposition}\label{prop:simpletrans_on_tree}
\cite[Lemma 7.4.1]{lubotzky:2010} The action of $\Gamma (2)$ on the tree $T={\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l})$ is simply transitive (and respects the graph structure).
\end{proposition}
\begin{proof}
See {\em{loc.cit.}}\ for details of the proof. Transitivity follows from the fact that $T$ is connected and elements of $S$ map a vertex of $T$ to its distinct neighbors. The group $\Gamma (2)=\langle S\rangle$ is a discrete free group, hence its intersection with a compact stabilizer ${\mathrm{PGL}_2}({\ZZ_l})$ is trivial. This implies that the neighbors are distinct and the stabilizer of any vertex is trivial.
\end{proof}
The above implies that the orbit space of the $\Gamma (2p)$-action on $T$ has the structure of the Cayley graph of $\Gamma (2)/\Gamma (2p)$ with respect to the generators $S.$ We can see from the maps in \eqref{eq:commdiag} that $\Gamma (2)/\Gamma (2p)$ is isomorphic to a subgroup of $G'({\mathbb{Z}}/2p{\mathbb{Z}})\cong G'({\mathbb{Z}}/2{\mathbb{Z}})\times G'({\mathbb{Z}}/p{\mathbb{Z}}).$ (This last isomorphism follows from the Chinese Remainder Theorem.) Since the image of $\Gamma (2)$ in $G'({\mathbb{Z}}/2{\mathbb{Z}})$ is trivial, we may identify $\Gamma (2)/\Gamma (2p)$ with a subgroup of $G'({\mathbb{Z}}/p{\mathbb{Z}}).$ Here $G'({\mathbb{Z}}/p{\mathbb{Z}})\cong {\mathrm{PGL}_2}({\mathbb{F}}_p).$ (For an explicit isomorphism take an analogue of $\sigma $ in \eqref{eq:unramlocalisomdef} with $\epsilon \in {\mathbb{Z}}/p{\mathbb{Z}}$ such that $\epsilon ^2=-1.$) The image of $\Gamma (2)$ agrees with ${\mathrm{PSL}_2}({\mathbb{F}}_p)$ as a consequence of the Strong Approximation Theorem \cite[Lemma 7.4.2]{lubotzky:2010}. We shall discuss this in the next section.
We summarize the contents of this section.
\begin{theorem}\label{thm:LPS-summary}
\cite[Theorem 7.4.3]{lubotzky:2010}
Let $l$ and $p$ be primes so that $l\equiv p\equiv 1\mod{4}$ and $l$ is a quadratic residue modulo $2p.$ Let $S\subset {\mathrm{PSL}_2}({\mathbb{F}}_p)$ be the $(l+1)$-element set corresponding to the solutions of \eqref{eq:lsquaresumconds} via the map \eqref{eq:elementsofSmappedtomtcesmodp} and $Cay({\mathrm{PSL}_2}({\mathbb{F}}_p),S)$ the Cayley graph determined by the set of generators $S$ on the group ${\mathrm{PSL}_2}({\mathbb{F}}_p).$ Let $T$ be the graph on ${\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l})$ with edges corresponding to multiplication by elements listed in \eqref{eq:mtces_lattice_neighbours}. Let $B$ be the Hamiltonian quaternion algebra over ${\mathbb{Q}}$ and $\Gamma (2p)$ the kernel of the map $\pi_{2p}$ in \eqref{eq:commdiag} (a cocompact congruence subgroup). Then $\Gamma (2p)$ acts on the infinite tree $T$ and we have the following isomorphism of graphs:
\begin{equation}\label{eq:thm:LPS}
Cay({\mathrm{PSL}_2}({\mathbb{F}}_p),S)\cong \Gamma (2p) \backslash {\mathrm{PGL}_2}({\QQ_l})/{\mathrm{PGL}_2}({\ZZ_l}).
\end{equation}
These are connected, $(l+1)$-regular, non-bipartite, simple graphs on $\frac{p^3-p}{2}$ vertices.
\end{theorem}
\subsection{Explicit isomorphism between generating sets}\label{app:sect:explicitisom}
We have seen above that the LPS graph can be interpreted as a finite quotient of the infinite tree of homothety classes of lattices. In this case, the edges are given by matrices that take a ${\ZZ_l}$-basis of one lattice to a ${\ZZ_l}$-basis of one of its neighbors. On the other hand, the edges can be given in terms of the set of generators $S.$
Proposition \ref{prop:simpletrans_on_tree} states that $\Gamma (2)=\langle S\rangle \subset G'({\mathbb{Z}}[l^{-1}])$ acts (through $\sigma $) simply transitively on the tree $T.$ The proof of the proposition (cf. \cite[Lemma 7.4.1]{lubotzky:2010}) implicitly shows that there exists a bijection between elements of $\sigma(S)\subset {\mathrm{PGL}_2}({\ZZ_l})$ and the matrices given in \eqref{eq:mtces_lattice_neighbours}.
In this section we wish to make this bijection more explicit. For a fixed $\alpha \in S$ we find the matrix from the list \eqref{eq:mtces_lattice_neighbours} determining the same edge of $T$. As in Section \ref{subsubsect:localdoublecosets} we write $\sigma (\alpha )\in {\mathrm{PGL}_2}({\ZZ_l})$ for the elements of $\sigma(S).$
This amounts to finding the matrix $M$ from the list in \eqref{eq:mtces_lattice_neighbours} such that $\sigma(\alpha) ^{-1}M\in {\mathrm{PGL}_2}({\ZZ_l}).$
To pair up matrices from \eqref{eq:mtces_lattice_neighbours} with the corresponding elements of $S,$ we introduce the following notation. Let us number the solutions to $\alpha \overline{\alpha }=l$ as $\alpha _0,\ldots ,\alpha_{l-1},\alpha_l$ so that we have the correspondence $\sigma(\alpha _h) ^{-1}M_h\in {\mathrm{PGL}_2}({\ZZ_l})$ for $0\leq h\leq l.$
By giving an explicit correspondence, we mean that given an $\alpha \in \sigma ^{-1}(S),$ we determine $0\leq h\leq l$ such that $\alpha =\alpha _h.$
Elements of $\sigma(S)\subset {\mathrm{PGL}_2}({\ZZ_l})$ are given in terms of an $\epsilon \in {\ZZ_l}$ such that $\epsilon^2=-1.$ Let $a,$ $b$ be the positive integers such that $a^2+b^2=l$ and $a$ is odd. Let $0\leq e\leq l-1$ be such that $eb\equiv a\pmod{l}.$ Then in ${\ZZ_l}$ we have either $\epsilon\in e+l{\ZZ_l}$ and $\epsilon^{-1}=-\epsilon\in -e+l{\ZZ_l}$ or $\epsilon\in -e+l{\ZZ_l}$ and $\epsilon^{-1}=-\epsilon\in e+l{\ZZ_l}.$
Let $\alpha =x_0+x_1{\mathbf{i}}+x_2{\mathbf{j}}+x_3{\mathbf{k}}$ so that $\sigma (\alpha )\in S,$ and $a,b,e, \epsilon$ are as above. Let
$$\alpha_h =x_0^{(h)}+x_1^{(h)}{\mathbf{i}}+x_2^{(h)}{\mathbf{j}}+x_3^{(h)}{\mathbf{k}}$$ for $0\leq h\leq l.$ Here $x_0,x_1,x_2,x_3$ are integers; it is convenient to think about them (as well as $x_0^{(h)},x_1^{(h)},x_2^{(h)},x_3^{(h)}$ for $0\leq h\leq l$) as being in ${\mathbb{Z}} \subset {\ZZ_l} .$
Then
\begin{equation}\label{eq:sigmainv_alpha}
\sigma(\alpha) ^{-1}=\frac{1}{l}\tbtmtx{x_0-x_1\epsilon}{-x_2-x_3\epsilon}{x_2-x_3\epsilon}{x_0+x_1\epsilon}
\end{equation}
and
\begin{equation}\label{eq:timesmtx2}
\begin{split}
\sigma(\alpha) ^{-1}\cdot \tbtmtx{l}{h}{0}{1}= & \tbtmtx{x_0-x_1\epsilon}{l^{-1}\left(h(x_0-x_1\epsilon)+(-x_2-x_3\epsilon)\right)}{x_2-x_3\epsilon}{l^{-1}\left(h(x_2-x_3\epsilon)+(x_0+x_1\epsilon)\right)}
\end{split}
\end{equation}
The profile log-likelihood is
\[
\ell_{\mathrm{PL}}(\tau) = -\tau G(\hat{\pi}_1) - (T - \tau) G(\hat{\pi}_2),
\]
where $G(x) = x (1 - \log x)$, $\hat{\pi}_1 = S_\tau/\tau$, and $\hat{\pi}_2 = (S_T - S_\tau)/(T - \tau)$, with $S_t = \sum_{s=1}^t X_s$. Define
\[
\mathcal{T}_c = \min_{1 \le t \le T - 1} \bigg[t G\bigg(\frac{S_t}{t}\bigg) + (T - t) G\bigg(\frac{S_T - S_t}{T - t}\bigg)\bigg].
\]
Then the LR statistic is
\begin{equation}\label{eq:LR-count}
\mathcal{T}_{\mathrm{LR}}^{(c)} = -2(\ell_0 - \ell_1) = -2 (- TG(S_T/T) + \mathcal{T}_c).
\end{equation}
We would reject $H_0$ for large values of this statistic.
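A minimal Python sketch of this statistic (ours, using the convention $G(0)=0$, the limit of $G$ at $0^+$):

```python
# Count-data LR statistic (eq:LR-count) with G(x) = x(1 - log x), G(0) = 0.
import math

def G(x):
    return 0.0 if x == 0 else x * (1.0 - math.log(x))

def lr_count(X):
    T = len(X)
    S = [0]
    for x in X:
        S.append(S[-1] + x)            # S[t] = X_1 + ... + X_t
    T_c = min(t * G(S[t] / t) + (T - t) * G((S[T] - S[t]) / (T - t))
              for t in range(1, T))
    return -2.0 * (-T * G(S[T] / T) + T_c)

stat = lr_count([0, 1, 0, 2, 5, 4, 6, 5])   # toy series with a visible jump
assert stat >= 0.0                          # always holds: ell_1 >= ell_0
```

Note that the statistic is nonnegative by construction, since the split-sample likelihood is at least the pooled one.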
\vskip10pt
\noindent
\textbf{CUSUM statistic.}
\vskip5pt
\noindent
A well-known and often-used statistic in changepoint problems is the so-called CUSUM statistic. For $0< a < b <1$, suppose $aT$ and $bT$ are known lower and upper bounds on the location of the potential changepoint. For $0\le \delta \le 1$, the CUSUM statistic is defined as
\begin{equation}\label{eq:CUSUM}
\mathcal{T}_{\mathrm{CUSUM}}^{(\delta)} = \max_{ aT \le t \le bT } \left[ \frac{t}{T}\left(1-\frac{t}{T}\right) \right]^{\delta} \Bigg | \frac{S_t}{t} - \frac{S_T - S_t}{T - t} \Bigg |.
\end{equation}
This can be used with both binary and count data.
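The definition \eqref{eq:CUSUM} translates directly into code; the following sketch is ours (window and weights as in the display):

```python
# CUSUM statistic (eq:CUSUM) over the window aT <= t <= bT.
def cusum(X, delta=0.5, a=0.1, b=0.9):
    T = len(X)
    partial, S = [], 0
    for x in X:
        S += x
        partial.append(S)               # partial[t-1] = S_t
    total = partial[-1]
    best = 0.0
    for t in range(max(1, int(a * T)), min(T - 1, int(b * T)) + 1):
        w = (t / T * (1 - t / T)) ** delta
        gap = abs(partial[t - 1] / t - (total - partial[t - 1]) / (T - t))
        best = max(best, w * gap)
    return best

assert cusum([3] * 20) == 0.0           # constant series: all partial means agree
assert cusum([0] * 10 + [5] * 10) > 0.0 # clear level shift
```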
\subsection{Asymptotic tests}
First, we will consider an asymptotic test based on the CUSUM statistic \eqref{eq:CUSUM}. The asymptotic null distribution can be calculated using a Brownian bridge approximation. For details see, e.g., \cite{brodsky2013nonparametric}.
\begin{proposition}\label{prop:BB}
Let $B^0(t)$ denote a standard Brownian bridge. Under $H_0$ (i.e., $\pi_i = \pi$ for all $i$),
\[
\frac{\sqrt{T} \, \mathcal{T}_{\mathrm{CUSUM}}^{(\delta)}}{\sqrt{\pi(1-\pi)}} \xrightarrow[T \to \infty]{ \mathcal{L}} M_{ab}^{(\delta)},
\]
where $M_{ab}^{(\delta)} = \max_{a \le t \le b} \frac{|B^0(t)|}{(t(1 - t))^{1 - \delta}}$.
\end{proposition}
\begin{corollary}\label{cor:BBapprox}
Let $\widehat{\pi} = \frac{1}{T} \sum_{s=1}^T X_s$, so that $\widehat{\pi} \xrightarrow[T \to \infty]{a.s.} \pi$ under $H_0$. Then, under $H_0$,
\[
\frac{\sqrt{T} \, \mathcal{T}_{\mathrm{CUSUM}}^{(\delta)}}{\sqrt{\widehat{\pi}(1-\widehat{\pi})}} \xrightarrow[T \to \infty]{ \mathcal{L}} M_{ab}^{(\delta)}.
\]
\end{corollary}
Using this result we can perform an asymptotic test for $H_0$ in the ``large sample'' regime where $T$ is large. This test would be good when $\pi$ is not too small (so that the underlying normal approximations to the partial sums $\sum_{s = 1}^t X_s$ go through). It is well-known that in this asymptotic framework the choice $\delta = 1/2$ is the best for estimation (see, e.g., \cite{brodsky2013nonparametric}, Chapter 3), while $\delta = 1$ is the best for minimizing Type I error, and $\delta = 0$ for minimizing Type II error. However, in the small sample situations explored in this paper we do not see such a clear-cut distinction (see Section~\ref{sec:simu}).
\subsection{Conditional tests}
\label{sec:conditionaltests}
Our exact tests are based on the following simple lemma.
\begin{lemma}\label{lem:suff}
Suppose $X_1$, \ldots, $X_T$ are independent Bin$(n_i, \pi)$ (or Poisson$(\pi)$). Let $S_i = \sum_{j = 1}^i X_j$. Then the joint distribution of $(S_1, \ldots, S_{T - 1})$ given $S_T$ does not depend on $\pi$.
\end{lemma}
\begin{proof}
Since $S_T$ is sufficient for $\pi$, the distribution of $(X_1, \ldots, X_T)$ given $S_T$ does not depend on $\pi$. Hence the same holds for $(S_1, \ldots, S_{T - 1})$.
\end{proof}
\noindent
\textbf{Approach 1.}
\vskip5pt
\noindent
By Lemma~\ref{lem:suff}, the conditional distribution of $\mathcal{T}_{\mathrm{LR}}$ given $S_T$ does not depend on $\pi$ under $H_0$. So we can do an exact conditional test. In fact, we can use the statistic $\mathcal{T}_{b}$ (or $\mathcal{T}_c$), which is equivalent to $\mathcal{T}_{\mathrm{LR}}^{(b)}$ (resp.\ $\mathcal{T}_{\mathrm{LR}}^{(c)}$) for conditional testing. Similarly, we can do a CUSUM-based exact test, since the CUSUM statistic $\mathcal{T}_{\mathrm{CUSUM}}^{(\delta)}$ is a function of the partial sums $S_t$, $t < T$.
\vskip10pt
\noindent
\textbf{Approach 2.}
\vskip5pt
\noindent
Note that we can decompose $H_1$ as a disjoint union of the following $(T - 1)$ hypotheses:
\[
H_{1i}: \tau = i, 1 \le i \le T - 1,
\]
and test each of these separately against $H_0$, finally rejecting $H_0$ if any one of these $T - 1$ hypotheses is rejected.
\vskip5pt
\noindent
\textbf{Binary data:}
Suppose $X_i\sim\mathrm{Ber}(\pi_i)$. Note that if we use $S_i = \sum_{j = 1}^i X_j$ as a test statistic for testing $H_0$ against $H_{1i}$, then, under $H_0$,
\[
S_i \mid S_T \sim \mathrm{Hypergeometric}(i, S_T, T).
\]
Therefore, we get a $p$-value $p_i$ from this conditional distribution as
\[
p_i = \sum_{q \,:\, f(q;\, i, S_T, T) \le f(S_i;\, i, S_T, T)} f(q;\, i, S_T, T),
\]
where $f(q;\, i, S_T, T)$ is the PMF of the Hypergeometric$(i, S_T, T)$ distribution.
\vskip5pt
\noindent
\textbf{Count data:}
For count data $X_i \sim$ Poisson$(\pi_i)$, we can use the same procedure as above using the observation that, under $H_0$,
\[
S_i \mid S_T \sim \mathrm{Binomial}\bigg(S_T, \frac{i}{T}\bigg).
\]
In this case, we get a $p$-value $p_i$ from the above conditional distribution as
\[
p_i = \sum_{q \,:\, g(q;\, S_T, i/T) \le g(S_i;\, S_T, i/T)} g(q;\, S_T, i/T),
\]
where $g(q;\, S_T, i/T)$ is the PMF of the Binomial$(S_T, i/T)$ distribution.
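Both exact $p$-values follow the same recipe: sum the PMF over all outcomes no more likely than the observed $S_i$. A self-contained Python sketch (ours; the hypergeometric parameterization matches the display for binary data, the binomial one that for count data):

```python
# Exact conditional p-values of Approach 2: sum the pmf over all outcomes
# whose probability does not exceed that of the observed S_i.
from math import comb

def hypergeom_pmf(q, i, ST, T):        # draw of size i, ST successes among T
    if q < max(0, ST - (T - i)) or q > min(i, ST):
        return 0.0
    return comb(ST, q) * comb(T - ST, i - q) / comb(T, i)

def binom_pmf(q, n, p):
    if q < 0 or q > n:
        return 0.0
    return comb(n, q) * p**q * (1 - p)**(n - q)

def exact_pvalue(pmf, support, observed):
    p_obs = pmf(observed)
    return sum(pmf(q) for q in support if pmf(q) <= p_obs)

# Binary data: S_i | S_T ~ Hypergeometric(i, S_T, T)
i, ST, T, Si = 5, 8, 20, 4
p_bin = exact_pvalue(lambda q: hypergeom_pmf(q, i, ST, T), range(i + 1), Si)
# Count data: S_i | S_T ~ Binomial(S_T, i/T)
p_cnt = exact_pvalue(lambda q: binom_pmf(q, ST, i / T), range(ST + 1), Si)
assert 0.0 < p_bin <= 1.0 and 0.0 < p_cnt <= 1.0
```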
\vskip10pt
\noindent
\textbf{Multiplicity correction:}
Once we get hold of the individual $p$-values, we can try to control the \emph{familywise error rate} (FWER). It follows from Lemma~\ref{lem:suff} that the distribution of $(p_1, \ldots, p_{T - 1})$ given $S_T$ does not depend on $\pi$ under $H_0$. Thus we can exactly simulate the null distribution of the minimum $p$-value $p_{(1)}$ using Monte Carlo. Denoting by $r_{\alpha, T}$ the lower $\alpha$-quantile of $p_{(1)}$, we reject $H_0$ if $p_{(1)} \le r_{\alpha, T}$.
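For binary data the Monte Carlo calibration is especially simple: conditionally on $S_T$, the null law of the series is uniform over its rearrangements. A sketch of this (ours, with a deliberately small number of replicates):

```python
# Monte Carlo calibration of p_(1): conditionally on S_T the binary series
# is exchangeable under H_0, so shuffle and record the minimum p-value.
import random
from math import comb
random.seed(0)

def hg_pmf(q, i, ST, T):
    if q < max(0, ST - (T - i)) or q > min(i, ST):
        return 0.0
    return comb(ST, q) * comb(T - ST, i - q) / comb(T, i)

def min_p(X):
    T, ST = len(X), sum(X)
    S, pmin = 0, 1.0
    for i in range(1, T):
        S += X[i - 1]
        p_obs = hg_pmf(S, i, ST, T)
        p = sum(hg_pmf(q, i, ST, T) for q in range(i + 1)
                if hg_pmf(q, i, ST, T) <= p_obs)
        pmin = min(pmin, p)
    return pmin

T, ST, alpha = 15, 6, 0.1
base = [1] * ST + [0] * (T - ST)
draws = []
for _ in range(200):                        # small B, for illustration only
    random.shuffle(base)
    draws.append(min_p(base))
r = sorted(draws)[int(alpha * len(draws))]  # lower alpha-quantile of p_(1)
assert 0.0 <= r <= 1.0
```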
\subsection{Changepoint estimation}
\label{sec:estimation}
While we are interested in changepoint detection, the testing methods give bona fide estimators of the underlying changepoint. For example, the likelihood ratio statistics are based on maximizing the profile log-likelihood $\ell_{\mathrm{PL}}(\tau)$, and the maximizer gives an estimate of $\tau$. Similarly, for the CUSUM statistic, the maximizing index in the definition gives an estimate. As for the conditional testing approach, the minimizing index of the individual $p$-values gives an estimate of the changepoint. One can show that, under a single-changepoint model, these estimates are consistent, because all these objective functions are based on the cumulative average $S_t/t$, and one can use the fact that a properly rescaled version of this process converges to a Brownian motion under the null hypothesis of no changepoints. For example, an analysis of the CUSUM estimator along these lines can be found in \cite{brodsky2013nonparametric}. In Section~\ref{sec:realdata}, we obtain channel-specific estimates of changepoints in this way and plot their histograms (see Figures~\ref{fig:uss_hist}, \ref{fig:mit_hist}, and \ref{fig:mitdeg_hist}).
A statistically valid procedure for simultaneous detection and estimation may be obtained by using an even-odd sample splitting: separate out the observations with even and odd time indices, use the even ones for testing, and, based on the decision, use the odd ones for further estimation. However, as with any sample splitting method, this method will suffer a loss of power in small sample scenarios.
\section{The multichannel case: global vs. local testing}
\label{sec:mult}
Suppose we observe an $m$-variate $(m>1)$ independent time-series
\[
\boldsymbol{X}_1, \dots, \boldsymbol{X}_{\tau} \protect\overset{\mathrm{i.i.d.}}{\sim} F_1, \, \boldsymbol{X}_{\tau+1}, \dots, \boldsymbol{X}_{T} \protect\overset{\mathrm{i.i.d.}}{\sim} F_2,
\]
where $F_1$ and $F_2$ are $m$-variate distributions.
We would like to test the global null $H_0:$ ``no change in the $m$-variate time-series'', i.e., $H_0: \tau = T$. Since permutations of $\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{T}$ are equally likely under $H_0$, a natural approach for testing $H_0$ is to adopt a permutation test using the global CUSUM statistic
\[
C^{(\delta)} = \max_{1 \le t \le T-1} \left[ \frac{t}{T}\left(1-\frac{t}{T}\right) \right]^{\delta} \bigg|\bigg| \frac{1}{t} \sum_{i=1}^t \boldsymbol{X}_i - \frac{1}{T-t} \sum_{i=t+1}^T \boldsymbol{X}_i \bigg|\bigg| \, \text{ for } \delta \in [0,1],
\]
where $\|\cdot\|$ denotes any suitable norm in $\mathbb{R}^m$. Permuting the time-series multiple times, we construct a randomized size-$\alpha$ test that rejects $H_0$ for large values of the observed $C^{(\delta)}$. However, this test cannot determine which channels were responsible for the global change.
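A sketch of this permutation test (ours, with the Euclidean norm and a toy two-channel series):

```python
# Permutation test for the global null using C^(delta) with the Euclidean
# norm; p-value = fraction of permuted statistics at least the observed one.
import random
random.seed(1)

def global_cusum(X, delta=0.5):        # X: list of m-vectors (tuples)
    T = len(X)
    total = [sum(col) for col in zip(*X)]
    csum = [0.0] * len(X[0])
    best = 0.0
    for t in range(1, T):
        csum = [c + x for c, x in zip(csum, X[t - 1])]
        diff = [c / t - (tot - c) / (T - t) for c, tot in zip(csum, total)]
        w = (t / T * (1 - t / T)) ** delta
        best = max(best, w * sum(d * d for d in diff) ** 0.5)
    return best

X = [(0, 0)] * 10 + [(3, 2)] * 10      # toy 2-channel series with a change
obs = global_cusum(X)
B = 200
perm_stats = []
for _ in range(B):
    Y = X[:]
    random.shuffle(Y)
    perm_stats.append(global_cusum(Y))
pval = (1 + sum(s >= obs for s in perm_stats)) / (B + 1)
assert 0.0 < pval <= 1.0
```

Adding one to numerator and denominator keeps the permutation $p$-value strictly positive, a standard finite-sample correction.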
Now suppose that $\boldsymbol{X}_i = (X_{1, i}, \ldots, X_{m, i})$ for $i = 1, \ldots,T$, where $X_{j, i}$ denotes channel $j$ at time $i$, and the time-series for the $j$-th channel is
\[
X_{j, 1}, \ldots, X_{j, \tau} \protect\overset{\mathrm{i.i.d.}}{\sim} F_{j, 1}, \, X_{j, \tau + 1}, \ldots, X_{j, T} \protect\overset{\mathrm{i.i.d.}}{\sim} F_{j, 2} \, \text{ for } j = 1, 2, \ldots, m.
\]
In this article, $F_{j, 1} \equiv \mathrm{Ber}(p_{j, 1}) \text{ or } \mathrm{Pois}(\lambda_{j, 1})$ and $F_{j, 2} \equiv \mathrm{Ber}(p_{j, 2}) \text{ or } \mathrm{Pois}(\lambda_{j, 2})$ depending on whether we deal with binary or count data. The global null $H_0$ is equivalent to $\cap_{j=1}^m H_{0, j}$ where $H_{0, j}:$ ``no change in the $j$-th channel''. A local approach for testing $H_0$ would be to compute $p$-values corresponding to $H_{0, j}$ for $j = 1, \ldots, m$, and apply some suitable multiple testing procedure controlling FWER or FDR. Note that FDR equals FWER under $\cap_{j=1}^m H_{0, j}$. Since FDR controlling methods are known to be more powerful than traditional FWER controlling methods such as Bonferroni and Holm's methods (\cite{holm1979simple}) when $m$ is large, we use some popular methods for FDR control.
In this article, we consider the celebrated Benjamini-Hochberg (BH) step-up procedure proposed by \cite{benjamini1995controlling}. \cite{BenjaminiYekutieli01} proved that the BH method controls FDR at a pre-fixed level when $p$-values are mutually independent or they have certain positive dependence.
Let $\mathcal{R} = \{1\le j \le m: H_{0, j} \text{ is rejected} \}$ be the rejection set obtained from some FDR controlling procedure. The global null $H_0$ is rejected if and only if $\mathcal{R}$ is nonempty. This test for $H_0$ is referred to as a local test $\phi := \ind{\mathcal{R} \neq \emptyset}$.
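The BH step-up rule used to form $\mathcal{R}$ can be sketched in a few lines (ours; the test case is hypothetical):

```python
# Benjamini-Hochberg step-up: reject the k smallest p-values, where k is the
# largest rank with p_(k) <= k * alpha / m.
def benjamini_hochberg(pvals, alpha):
    m = len(pvals)
    order = sorted(range(m), key=lambda j: pvals[j])
    k = 0
    for rank, j in enumerate(order, start=1):
        if pvals[j] <= rank * alpha / m:
            k = rank
    return {order[r] for r in range(k)}     # indices of rejected H_{0,j}

R = benjamini_hochberg([0.01, 0.50, 0.03, 0.04], alpha=0.10)
assert R == {0, 2, 3}
```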
\begin{remark}
If FDR is controlled at level $\alpha$, then $\phi$ is a valid level-$\alpha$ test for the global null $H_0$ since $P_{H_0}( H_0 \text{ rejected}) = P_{H_0}(\mathcal{R} \neq \emptyset) = P_{H_0}( \cup_{j=1}^m H_{0, j} \text{ rejected})=$ FWER $=$ FDR $\le \alpha$.
\end{remark}
\begin{remark}
The local testing approach enjoys a few advantages over the global testing approach. First, local testing is much more informative in the sense that channels responsible for the global change, if any, are also determined. Second, under the rare signal regime where signals are available only in a few out of a large number of channels, global tests may fail to detect a change whereas local tests are more likely to detect the change as they scrutinize all channels. These points are empirically demonstrated in the simulations of Section~\ref{sec:glob-loc}.
\end{remark}
\begin{remark}
Although we have formulated the local testing approach for a single global changepoint so as to compare it to the global testing approach, it is clear that the former applies to situations where individual channels have different changepoints. This advantage of the local testing approach over the global testing approach will be clear in Section~\ref{sec:realdata}, where we plot histograms of detected local changepoints.
\end{remark}
\section{Simulations}
\label{sec:simu}
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[scale = 0.45]{figures/berpowT50tau25p14.pdf} &
\includegraphics[scale = 0.45]{figures/berpowT50tau40p146.pdf} \\
(a) $T=50, \tau=25, \pi_1 =0.4$ & (b) $T=50, \tau=40, \pi_1 =0.46$ \\
\includegraphics[scale = 0.45]{figures/berpowT200tau100p108.pdf} &
\includegraphics[scale = 0.45]{figures/berpowT200tau160p1148.pdf} \\
(c) $T=200, \tau=100, \pi_1 =0.08$ & (d) $T=200, \tau=160, \pi_1 =0.148$
\end{tabular}
\caption{Comparison of change detection probabilities of exact tests, asymptotic tests and the cptmean test with $\alpha=0.1$ in the time-series $X_1, \ldots, X_{\tau} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Ber}(\pi_1)$, $X_{\tau+1}, \ldots, X_{T} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Ber}(\pi_2)$.}
\label{fig:bern.local.comp}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[scale = 0.45]{figures/poispowT10tau5lam12.pdf} &
\includegraphics[scale = 0.45]{figures/poispowT10tau2lam11.pdf} \\
(a) $T=10, \tau=5, \lambda_1 =2$ & (b) $T=10, \tau=2, \lambda_1 =1$ \\
\includegraphics[scale = 0.45]{figures/poispowT50tau25lam1015.pdf} &
\includegraphics[scale = 0.45]{figures/poispowT50tau10lam1015.pdf} \\
(c) $T=50, \tau=25, \lambda_1 =0.15$ & (d) $T=50, \tau=10, \lambda_1 =0.15$
\end{tabular}
\caption{Comparison of change detection probabilities of exact tests, asymptotic tests and the cptmeanvar test with $\alpha=0.1$ in the time-series: $X_1,\dots,X_{\tau} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Pois}(\lambda_1)$, $X_{\tau+1},\dots,X_{T} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Pois}(\lambda_2)$.}
\label{fig:pois.local.comp}
\end{figure}
\subsection{Exact vs. asymptotic tests in a single channel}
\label{sec:exact-asymp}
We first compare the proposed exact level-$\alpha$ tests against the asymptotic level-$\alpha$ tests in a single channel. Exact conditional tests using the $\min_{1\le i \le T-1} p_i$ statistic, as discussed in Section \ref{sec:conditionaltests}, are referred to as ``minP'' tests, conditional tests based on $\mathcal{T}_{\mathrm{LR}}^{(b)}$ and $\mathcal{T}_{\mathrm{LR}}^{(c)}$ are referred to as ``LR'' tests, and conditional tests based on the CUSUM statistics $\mathcal{T}_{\mathrm{CUSUM}}^{(0.5)}$ and $\mathcal{T}_{\mathrm{CUSUM}}^{(1)}$ are referred to as the ``CU.5'' test and the ``CU1'' test respectively. For these exact tests, $\alpha$-th quantiles of the respective test statistics under null are estimated from 50,000 Monte Carlo samples.
Asymptotic tests based on Brownian bridge approximations (see Corollary~\ref{cor:BBapprox}) are considered for $\delta = 0.5$ and $\delta = 1$, and are referred to as the ``BB.5'' test and the ``BB1'' test respectively.
Additionally, we consider two tests based on the functions \texttt{cpt.mean} and \texttt{cpt.meanvar} in the \textbf{R} package \texttt{changepoint}, which estimate the number of changepoints in univariate time-series. These are referred to as the ``cptmean'' test and the ``cptmeanvar'' test respectively. These tests\footnote{The ``cptmean'' test in Figure~\ref{fig:bern.local.comp} applies the \texttt{cpt.mean} function with the ``BinSeg'' method and the CUSUM statistic. The ``cptmeanvar'' test in Figure~\ref{fig:pois.local.comp} applies the \texttt{cpt.meanvar} function with the ``BinSeg'' method and the ``Poisson'' statistic.} detect a change if the number of estimated changepoints is at least one.
Figure~\ref{fig:bern.local.comp} considers the Bernoulli case. We see that the exact conditional tests perform well in both sparse and dense situations and always outperform the cptmean test. The asymptotic tests (especially BB1) also provide reasonable power if the sample size $T$ is large and the changepoint $\tau$ is near the middle (Figures~\ref{fig:bern.local.comp}(a) and \ref{fig:bern.local.comp}(c)). However, if the changepoint is closer to the boundary (Figures~\ref{fig:bern.local.comp}(b) and \ref{fig:bern.local.comp}(d)), then the exact conditional tests minP and LR perform significantly better than the asymptotic tests.
Figure~\ref{fig:pois.local.comp} considers the Poisson case. The proposed exact tests perform well even when the sample size is as small as $T=10$, and, in this case, they uniformly outperform the asymptotic tests and the cptmeanvar test. If the changepoint is close to the boundary, then the exact tests (especially minP and LR) yield much higher power than their competitors (see Figures~\ref{fig:pois.local.comp}(b) and \ref{fig:pois.local.comp}(d)). For large sample sizes (e.g., $T=50$), when the Brownian bridge approximations kick in, asymptotic tests become comparable to the exact tests in terms of performance.
\subsection{Global vs. local testing in multiple channels}
\label{sec:glob-loc}
Global testing of $H_0$ is done by permutation tests using $C^{(\delta)}$ with Euclidean norm as discussed in Section~\ref{sec:mult}. The $m$-variate time-series $\boldsymbol{X}_1,\dots, \boldsymbol{X}_T$ is permuted $B=1000$ times to obtain a randomized size-$\alpha$ test. For power comparisons, two tests ``gCU.5'' and ``gCU1'' are considered that are obtained using the global CUSUM statistics $C^{(0.5)}$ and $C^{(1)}$ respectively.
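A minimal sketch of such a global permutation test follows. The Euclidean-norm CUSUM below is an illustrative stand-in for the paper's $C^{(\delta)}$; the time axis is permuted jointly across channels so that cross-channel dependence at each time-point is preserved.

```python
import numpy as np

def global_cusum(X, delta=1.0):
    """Euclidean norm of the m channel-wise CUSUM deviations, maximized
    over candidate changepoints.  X has shape (m, T)."""
    m, T = X.shape
    t = np.arange(1, T)
    S = np.cumsum(X, axis=1)[:, :-1]                 # (m, T-1) partial sums
    dev = S - np.outer(X.sum(axis=1), t / T)         # centered partial sums
    w = (t * (T - t) / T) ** delta
    return np.max(np.linalg.norm(dev, axis=0) / w)

def global_permutation_test(X, delta=1.0, B=1000, seed=0):
    """Randomized test: permute time-points (columns) jointly across all
    channels and compare the observed statistic to the permutation law."""
    rng = np.random.default_rng(seed)
    T = X.shape[1]
    obs = global_cusum(X, delta)
    null = np.array([global_cusum(X[:, rng.permutation(T)], delta)
                     for _ in range(B)])
    return (1 + np.sum(null >= obs)) / (B + 1)
```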
To test each channel for possible changepoints, we consider three exact conditional tests, namely minP, LR and CU1. After computing $p$-values from these tests, we employ the BH procedure to obtain $\mathcal{R} = \{1\le j \le m: H_{0, j} \text{ is rejected} \}$. Henceforth, we refer to these local tests as minP-BH, LR-BH and CU1-BH respectively. Figures~\ref{fig:berpowglobloc} and \ref{fig:poispowglobloc} compare probabilities of global change detection (gCD), i.e. probabilities of rejecting $H_0$ (this is $P(\mathcal{R} \neq \emptyset)$ for local tests) for global and local tests. Figure~\ref{fig:berpowglobloc} considers Bernoulli channels while Figure~\ref{fig:poispowglobloc} deals with Poisson channels. We find that the local tests are significantly more powerful than the global tests in the rare signal regime where $n_{\text{cp}}$ is small or moderate. Also, the power advantage is larger and persists over a longer range of $n_{\mathrm{cp}}$ when the changepoint is near the boundary. The local and global tests have comparable power for large $n_{\text{cp}}$, as expected.
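The BH step-up rule used to form $\mathcal{R}$ is the standard one; a minimal sketch (the per-channel $p$-values would come from the exact conditional tests):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Standard BH step-up procedure: with sorted p-values p_(1) <= ... <=
    p_(m), reject the k* smallest where k* = max{k : p_(k) <= alpha k / m}.
    Returns the (sorted) indices of the rejected hypotheses."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    if not below.any():
        return np.array([], dtype=int)    # R is empty: no global detection
    kstar = np.max(np.nonzero(below)[0])
    return np.sort(order[: kstar + 1])
```

A global change is detected precisely when the returned index set is non-empty.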
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[scale = 0.45]{figures/berGlobLoc_m1000T200tau100pre05post25.pdf} &
\includegraphics[scale = 0.45]{figures/berGlobLoc_m1000T200tau160pre05post25.pdf} \\
(a) $\tau=100$ & (b) $\tau=160$
\end{tabular}
\caption{Comparison of P(gCD) of global and local tests in $m=1000$ independent Bernoulli series: $X_{j, 1}, \ldots, X_{j, \tau} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Ber}(\pi_1 = 0.05)$,\, $X_{j, \tau+1}, \ldots, X_{j, T} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Ber}(\pi_2 = 0.25)$ where $T=200$. Tests are conducted at level $\alpha=0.1$ and $n_{\text{cp}}$ channels undergo change at time-point $\tau$.}
\label{fig:berpowglobloc}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[scale = 0.45]{figures/poisGlobLoc_m200T20tau10pre25post15.pdf} &
\includegraphics[scale = 0.45]{figures/poisGlobLoc_m200T20tau15pre25post15.pdf} \\
(a) $\tau=10$ & (b) $\tau=15$
\end{tabular}
\caption{Comparison of P(gCD) of global and local tests in $m=200$ independent Poisson series: $X_{j, 1}, \ldots, X_{j, \tau} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Pois}(\lambda_1=0.25)$,\, $X_{j, \tau+1}, \ldots, X_{j, T} \protect\overset{\mathrm{i.i.d.}}{\sim} \mathrm{Pois}(\lambda_2=1.5)$ where $T=20$. Tests are conducted at level $\alpha=0.1$ and $n_{\text{cp}}$ channels undergo change at time-point $\tau$.}
\label{fig:poispowglobloc}
\end{figure}
In a simulation study presented in the appendix, we consider two additional FDR controlling methods, namely the adaptive Benjamini-Hochberg (ABH) and the adaptive Storey-Taylor-Siegmund (STS) methods. Performances of these methods are comparable to that of the vanilla BH method as the simulation study is done under the rare signal regime. Both the ABH and the STS methods are implemented using the \textbf{R} package \texttt{mutoss} \citep{mutoss17}.
\section{Real data}
\label{sec:realdata}
We now analyze two datasets that are naturally summarized by networks. These give real examples of multichannel binary and count data with potential changepoints.
\subsection{US senate rollcall data}
\label{sec:USvote}
From the US senate rollcall dataset \citep{voteview2020}, we construct networks where nodes represent US senate seats. Each epoch represents a proposed bill on which votes were taken. An edge between two seats is formed if they voted similarly on that bill. We have $n=100$ nodes. We consider $T = 50$ time-points between August 10, 1994 and January 24, 1995.
There are $m = \binom{n}{2} = 4950$ channels (i.e. edges). Of these, $622$ channels are ignored while analyzing this data since those channels contain too many zeros or ones (more than 45). We applied the BH procedure to simultaneously test the remaining $4328$ channels controlling FDR at level $\alpha = 0.05$. For each significant channel, the corresponding changepoint location is also reported (see the discussion in Section~\ref{sec:estimation}).
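The channel screening just described amounts to dropping near-constant binary series; a sketch with the threshold used in the text ($45$ out of $T = 50$):

```python
import numpy as np

def usable_channels(E, max_extreme=45):
    """E: (m, T) array of 0/1 edge series.  A channel is kept only if it
    has at most `max_extreme` zeros and at most `max_extreme` ones, i.e.
    near-constant channels are screened out before testing."""
    zeros = np.sum(E == 0, axis=1)
    ones = np.sum(E == 1, axis=1)
    return np.flatnonzero((zeros <= max_extreme) & (ones <= max_extreme))
```

The surviving channels are then passed to the per-channel exact tests and the BH procedure.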
In Figure~\ref{fig:uss_hist}, we plot the histograms of the changepoint locations of the significant channels. Note particularly the peak near time-point $24$ (which corresponds to December 1, 1994). There is a historically well-documented change near December 1994, which saw the end of the conservative coalition (see, e.g., \cite{moody2013portrait}). Interestingly, the global method also detected a changepoint at $t = 24$. Changepoints at nearly the same location were found earlier in \cite{roy2017change} and \cite{mukherjee2018thesis}. However, the local methods have the advantage of identifying the channels that underwent a change. The number of significant channels, $n_{s}$, is reported below each histogram. A number of channels had extremely small $p$-values. For example, Figure~\ref{fig:uss_twocha}(a) depicts the time-series of edges $(4, 5)$ and $(4, 6)$. Changes are visible to the naked eye. Seats $5$ and $6$ are in Arizona, while seat 4 is in Arkansas. Clearly, seat 4 went from agreeing with seats 5 and 6 to disagreeing. On the other hand, seat 3 is also from Arkansas, and no changepoints were found in the channels $(3, 5)$ and $(3, 6)$ (see Figure~\ref{fig:uss_twocha}(b)).
\begin{figure}[!ht]
\centering
\begin{tabular}{ccc}
\includegraphics[scale = 0.28]{figures/USvoteHist_locminP.pdf} &
\includegraphics[scale = 0.28]{figures/USvoteHist_loclr.pdf} &
\includegraphics[scale = 0.28]{figures/USvoteHist_loccu1.pdf} \\
(a) minP-BH, $n_{s}$ = 1950 & (b) LR-BH, $n_{s}$ = 1980 & (c) CU1-BH, $n_{s}$ = 1987
\end{tabular}
\caption{Distribution of detected changepoint locations in the US senate rollcall data. All three methods report a mode at $t = 24$.}
\label{fig:uss_hist}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[scale = 0.45]{figures/USvote_edgetimeseries_45_46.pdf} &
\includegraphics[scale = 0.45]{figures/USvote_edgetimeseries_35_36.pdf} \\
(a) & (b)
\end{tabular}
\caption{(a) Edges $(4, 5)$ and $(4, 6)$. Seats $5$ and $6$ are in Arizona, while seat 4 is in Arkansas. Clearly, seat 4 went from agreeing with seats 5 and 6 to disagreeing. (b) Edges $(3, 5)$ and $(3, 6)$. Seat $3$ is also in Arkansas. No changepoints are detected in these channels.}%
\label{fig:uss_twocha}
\end{figure}
\subsection{MIT reality mining data}
We use the MIT reality mining data \citep{eagle2006reality} to construct a series of networks involving $n = 90$ individuals (staff and students at the university). The data consists of call logs between these individuals from 20th July 2004 to 14th June 2005. We construct $T = 48$ weekly networks, where a weighted edge between nodes $u$ and $v$ reports the number of phone calls between them during the corresponding week. There are $m = \binom{n}{2} = 4005$ channels (i.e. edges). 3945 channels are ignored while analyzing this data since those channels contain too many zeros (more than 44). The remaining 60 channels are tested for possible changepoints.
We model the weighted edges as Poisson variables and apply the exact tests minP, LR or CU1 on each channel. Then we apply the BH method to simultaneously test the 60 channels controlling FDR at level $\alpha = 0.05$. Figure~\ref{fig:mit_hist} contains the histograms of the detected changepoint locations. Note particularly the peaks near $t = 20$ and $t = 33$. For comparison, a changepoint at $t = 24$ was found by a global algorithm in \cite{mukherjee2018thesis}. The graph-based multivariate (global) change detection methods of \cite{chen2015} found changepoints at approximately $t = 22$ and $t = 25$ (their analyses were on daily networks). The global algorithms consider global characteristics, and thus it is not surprising that they find changepoints somewhat in the middle of the predominant local changepoints near $t = 20$ and $t = 33$. It turns out that $t = 20$ is just before the start of the Winter break, and $t = 33$ is just before the start of the Spring break.
We also perform changepoint analysis with node-degrees as channels, modeled as a Poisson series. Figure~\ref{fig:mitdeg_hist} shows the histograms of the detected changepoint locations. Analyses of edge and degree time-series detect 44 and 46 common nodes (i.e. detected by all three tests: minP-BH, LR-BH and CU1-BH) respectively. Among these, $40$ nodes are declared significant by both analyses.
Finally, in Figure~\ref{fig:mit_graph}, we show the average networks before and after time-point 20. The 46 channels (i.e. nodes) declared to have changepoints by all three tests under the degree-based analysis are shown as green circles. A structural change is clearly visible.
\begin{figure}[!ht]
\centering
\begin{tabular}{ccc}
\includegraphics[scale = 0.28]{figures/MITphoneHist_locminP.pdf} &
\includegraphics[scale = 0.28]{figures/MITphoneHist_loclr.pdf} &
\includegraphics[scale = 0.28]{figures/MITphoneHist_loccu1.pdf} \\
(a) minP-BH, $n_{s}=42$ & (b) LR-BH, $n_{s}=45$ & (c) CU1-BH, $n_{s}=39$
\end{tabular}
\caption{Changepoint locations from edge-based analysis of the MIT reality mining data.}
\label{fig:mit_hist}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{tabular}{ccc}
\includegraphics[scale = 0.28]{figures/MITphoneDegreeHist_locminP.pdf} &
\includegraphics[scale = 0.28]{figures/MITphoneDegreeHist_loclr.pdf} &
\includegraphics[scale = 0.28]{figures/MITphoneDegreeHist_loccu1.pdf} \\
(a) minP-BH, $n_{s}=47$ & (b) LR-BH, $n_{s}=50$ & (c) CU1-BH, $n_{s}=47$
\end{tabular}
\caption{Changepoint locations from degree-based analysis of the MIT reality mining data.}
\label{fig:mitdeg_hist}
\end{figure}
\begin{figure}[!ht]
\begin{tabular}{cc}
\includegraphics[scale = 0.45]{figures/AvgNetworkChangeMIT_pre.pdf} &
\includegraphics[scale = 0.45]{figures/AvgNetworkChangeMIT_post.pdf} \\
\quad(a) & \quad(b)
\end{tabular}
\caption{Average networks (a) before and (b) after time-point 20 in the MIT reality mining data. Green nodes represent significant channels.}
\label{fig:mit_graph}
\end{figure}
\section{Discussion}
\label{sec:discuss}
In this article, we have considered the problem of changepoint detection for binary and count data, and proposed exact tests that perform significantly better than Brownian-bridge based asymptotic tests in small samples. We have also considered multichannel data and used a multiple testing approach to test for changes in all channels simultaneously. This local approach significantly outperforms the global approach of treating all channels together as a single object in the case of rare signals (i.e. when the number of channels with a changepoint is much smaller than the total number of channels).
Although the methods we propose are technically for single changes, they work quite well for multiple changepoints, especially when there is one large change. This is empirically demonstrated in the appendix.
\bibliographystyle{apalike}
\section{Introduction}
Understanding the symmetries of nature is a fundamental open question. The fact that we can shed new light on this subject by reorganizing how we study scattering processes is central to the recently conjectured duality between quantum gravity in asymptotically flat spacetimes and a celestial conformal field theory living on the codimension-two sphere at null infinity.
Hints for such a duality were uncovered by studying infrared aspects of gauge theory and gravity. A key insight~\cite{Strominger:2017zoo} is the fact that asymptotic symmetries not only present an infinite dimensional enhancement of the familiar global symmetries, but also are realized perturbatively in quantum field theory via soft theorems~\cite{Weinberg:1965nx}.
However, the real power of this framework comes from pushing the correspondence to involve a full quantum theory of gravity. This is highlighted by the fact that the holographic map mixes infrared and ultraviolet limits of bulk observables.
In this framework, $\mathcal{S}$-matrix elements get mapped to conformal correlators~\cite{Pasterski:2016qvg,Pasterski:2017kqt,Pasterski:2017ylz,deBoer:2003vf,Cheung:2016iub} by transforming the external states to boost, rather than energy, eigenstates. For the massless case, the map is simply a Mellin transform in the energies of each external scattering state
\begin{equation}
\langle O^\pm_{\Delta_1}(z_1,{\bar z}_1)...O^\pm_{\Delta_n}(z_n,{\bar z}_n)\rangle=\prod_{i=1}^n \int_0^\infty d\omega_i \omega_i^{\Delta_i-1} \langle out|\mathcal{S}|in\rangle\,,
\end{equation}
which trades the energy for a conformal dimension $\Delta$ while the 4D helicity gives the 2D spin $J$.
It is thus necessary to incorporate ultraviolet physics in order for basic observables like the gravitational $\mathcal{S}$-matrix to be well-defined in the new basis, making it natural to expect that string theory plays an important role.
For instance, when recast as a celestial correlator, the scattering amplitude for four radiative gravitons diverges. As shown in~\cite{Stieberger:2018edy}, this divergent behavior is softened in string theory. Moreover, the worldsheet of the string is pinned to the celestial CFT in certain limits.
Another recent insight is that the holographic symmetries of the MHV sector of gravitational scattering involve a ${\rm w}_{1+\infty}$ algebra, hinting at a potential connection to $\mathcal{N}=2$ string theory~\cite{Strominger:2021lvk}. A full understanding of the relation between the string worldsheet CFT and celestial CFT, and the embedding of celestial holography into string theory are open problems.
An essential step in building our holographic dictionary is to understand the spectrum and the symmetries of celestial CFT. These questions go hand in hand.
Data on the principal series $\Delta\in 1+i\mathbb{R}$ form a basis for finite energy states~\cite{Pasterski:2017kqt}. However, a special role is played by $\Delta\in\frac{1}{2}\mathbb{Z}$, since soft theorems in momentum space map to conformally soft theorems at these values of the conformal dimension~\cite{Cheung:2016iub}.
Each Goldstone/Goldstino of a spontaneously broken asymptotic symmetry corresponds to a conformally soft factorization theorem. This has been shown for gauge theory~\cite{Fan:2019emx,Nandan:2019jas,Pate:2019mfs} and gravity~\cite{Adamo:2019ipt,Puhm:2019zbl,Guevara:2019ypd} where the asymptotic symmetries are a large U(1) Kac-Moody symmetry~\cite{Strominger:2013lka,He:2014cra,He:2015zea} and BMS supertranslations~\cite{Strominger:2013jfa,He:2014laa} and superrotations~\cite{Kapec:2014zla,Kapec:2016jld,Himwich:2019qmj}. The corresponding symmetries are then generated by celestial currents constructed via suitable bulk inner products with conformal primary wavefunctions with the appropriate $\Delta$~\cite{Donnay:2018neh,Donnay:2020guq}. For each of these symmetries, the conformal primary wavefunctions selecting the currents are pure gauge.
The question of whether or not these modes with real (half-)integer conformal dimension $\Delta$ augment the conformal basis on the principal continuous series of the SL(2,$\mathbb{C}$) Lorentz group~\cite{Pasterski:2017kqt} was addressed in~\cite{Donnay:2020guq}, where we showed how one can analytically continue the external states to these conformal dimensions. The freedom to do so not only lets us explore conformally soft dimensions, but also has the effect of rendering perturbative celestial amplitudes for non-scale invariant theories well defined in a distributional sense~\cite{Puhm:2019zbl,Ball:2019atb}. With the full range of the complex plane to explore, it is possible to look for additional conformally soft theorems beyond those that can be traced to a (large) gauge symmetry origin.
Indeed, there appear to be more soft factorization theorems than asymptotic gauge symmetries account for. In the plane wave basis, such examples appear at subleading orders in a Taylor expansion near zero energy compared to the ones that correspond to gauge symmetries.
These include the subleading soft photon/gluon theorem~\cite{Lysov:2014csa,Campiglia:2016hvg} and the subsubleading soft graviton theorem~\cite{Campiglia:2017dpg}, as well as possible scalar asymptotic symmetries~\cite{Campiglia:2017dpg}. In supersymmetric theories, in addition to the soft gravitino which has an interpretation as large supersymmetry~\cite{Awada:1985by,Lysov:2015jrs,Avery:2015iix}, there exist soft gluino and subleading soft gravitino theorems - see e.g.~\cite{Liu:2014vva,Rao:2014zaa,Chen:2014xoa,Dumitrescu:2015fej}. In the conformal basis, all of these modes are nicely captured by celestial diamonds~\cite{Pasterski:2021fjn,Pasterski:2021dqe}.
For special values of the conformal dimension celestial primary operators will have SL(2,$\mathbb{C}$) primary descendants~\cite{Penedones:2015aga,Banerjee:2019aoy,Banerjee:2019tam}. Celestial diamonds~\cite{Pasterski:2021fjn,Pasterski:2021dqe} provide a celestial CFT interpretation for the structure of these conformal multiplets. As observed in~\cite{Pasterski:2021fjn}, this structure persists beyond the conformally soft theorems with asymptotic symmetry interpretations, capturing both the most subleading soft theorems, as well as the infinite towers of primaries giving symmetry enhancements in the single helicity sector~\cite{Guevara:2021abz,Strominger:2021lvk}.
Infinite-dimensional fermionic symmetries have received much less attention in the celestial CFT context than their bosonic counterparts~\cite{Donnay:2018neh,Donnay:2020guq,Pasterski:2021dqe,Banerjee:2019aoy,Banerjee:2019tam,Banerjee:2020zlg}. Moreover, subleading conformally soft theorems which do not have an obvious gauge symmetry origin have also remained elusive~\cite{Lysov:2014csa,Campiglia:2016hvg,Campiglia:2016jdj,Campiglia:2016efb,Himwich:2019dug,Banerjee:2021cly}. The purpose of this paper is to remedy this. Building off the fermionic soft theorems of~\cite{Dumitrescu:2015fej} for photinos and~\cite{Lysov:2015jrs,Avery:2015iix} for the gravitino, the conformal primary fermions of~\cite{Muck:2020wtx,Narayanan:2020amh,Pasterski:2020pdk}, as well as the results on conformally soft theorems in $\mathcal{N}=1$ supergravity by~\cite{Fotopoulos:2020bqj}, we will see that celestial diamonds naturally provide a footing to study the conformally soft sector of celestial CFT when supersymmetry is incorporated.
This paper examines fermionic symmetries and the associated celestial diamonds relevant to supergravity and supersymmetric gauge theories. We identify the generator for large supersymmetry transformations as the conformally soft gravitino primary operator with SL(2,$\mathbb{C}$) dimension
$\Delta=\frac{1}{2}$ and its shadow with conformal dimension
$\Delta=\frac{3}{2}$ which form the left and right corners of the celestial gravitino diamond.\footnote{Note that while these values of the conformal dimension have already been identified in~\cite{Fotopoulos:2020bqj} from conformally soft factorization, here we explicitly construct the 2D operators as bona fide conformal primaries.} The structure of the diamond is chiral and only one of the two corners yields a Ward identity that is isomorphic to the conformally soft gravitino theorem. The primary descendant operator at the bottom corner of the gravitino diamond corresponds to the fermionic analogue of the soft charge operators of~\cite{Banerjee:2018fgd,Banerjee:2019aoy,Banerjee:2019tam}.
Since the formalism set up in~\cite{Pasterski:2020pdk} extends naturally to any spin, we take advantage of this to identify conformal primary operators of spin-$\frac{1}{2}$ and spin-$\frac{3}{2}$ which correspond to known soft theorems but which do not have a gauge symmetry origin. This includes the subleading soft gravitino and soft photino which correspond to degenerate celestial diamonds. Despite the absence of a gauge symmetry in these cases we can construct fermionic boundary operators that generate a shift in bulk fields akin to those generated by spontaneously broken asymptotic symmetries. For the conformally soft photino with $\Delta=\frac{1}{2}$
our 2D operator is consistent with the soft charge considered in~\cite{Dumitrescu:2015fej}. Meanwhile, a spacetime interpretation for the subleading soft gravitino has not been considered before in the literature; we fill this gap by computing a soft charge for the $\Delta=-\frac{1}{2}$ conformally soft gravitino. These operators are further shown to select modes corresponding to the fermionic conformally soft theorems considered in~\cite{Fotopoulos:2020bqj}.
Because their existence is tied to supersymmetry~\cite{Fotopoulos:2020bqj,Pasterski:2020pdk} and double copy relations~\cite{Casali:2020vuy,Pasterski:2020pdk}, which connect fields of lower spin to those with large gauge symmetries, we expect these operators to play an important role in understanding the celestial CFT analog of the spin-shifting relations among amplitudes. Indeed, we see that these fermionic and bosonic diamonds glue together into a celestial pyramid, which nicely unites the spin-shifting analysis of~\cite{Pasterski:2020pdk} and the conformal multiplet studies of~\cite{Pasterski:2021dqe,Pasterski:2021fjn}.
This paper is organized as follows. We set up the conformal primary wavefunctions and operators for celestial fermions in section~\ref{sec:celestialfermions}. We then examine the behavior of the bulk fields near null infinity in section~\ref{sec:BulkBoundary} to establish the relevant extrapolate dictionary. With this technology we are ready to examine the Goldstino mode corresponding to the leading soft graviton theorem in section~\ref{sec:gravitino}. We then use the structure of celestial diamonds to extend these constructions to fermions with conformally soft theorems but no corresponding conformal Goldstino modes in section~\ref{sec:subgoldstinos}. We conclude by examining how supersymmetry relates the fermionic and bosonic examples in section~\ref{celestialpyramids}. Further computational details can be found in the appendix.
\section{Celestial Fermions}
\label{sec:celestialfermions}
In this section, we set up the conformal primary wavefunctions for spin-$\frac{1}{2}$ and spin-$\frac{3}{2}$ fields and use these to construct the celestial operators corresponding to single particle scattering states.
\subsection{Conformal Primary Wavefunctions}\label{sec:CPW}
{\it Conformal primary wavefunctions} are functions on $\mathbb{R}^{1,3}\times S^2$ depending on a spacetime vector $X^\mu\in \mathbb{R}^{1,3}$ and a point $(w,{\bar w}) \in S^2$ which, under simultaneous SO(1,3)$\simeq$ SL(2,$\mathbb{C}$) Lorentz transformations of $X$ and M\"{o}bius transformations of $(w,{\bar w})$ on the celestial sphere
\begin{equation}\label{mobius}
X^\mu\mapsto \Lambda^\mu_{~\nu}X^\nu\,,~~~ w\mapsto \frac{a w+b}{cw+d}\,,~~~{\bar w}\mapsto \frac{{\bar a} {\bar w}+{\bar b}}{{\bar c}{\bar w}+{\bar d}}\,,
\end{equation}
transform as 2D conformal primaries with SL(2,$\mathbb{C}$) conformal dimension~$\Delta$ and 2D spin~$J$.
We focus here on fermionic wavefunctions and consider two types: {\it radiative} conformal primary wavefunctions, which satisfy the vacuum equations of motion for massless spin-$s$ fields and have $J=\pm s$, and {\it generalized} conformal primary wavefunctions, which have $|J|\leq s$ and in principle allow for sources and non-analytic behavior, though here we restrict ourselves to analytic wavefunctions.
\subsubsection*{A Covariant Tetrad and Spin Frame}
Let us first introduce some useful notation. We start by embedding the celestial sphere into the $\mathbb{R}^{1,3}$ lightcone via the null reference direction
\begin{equation}\label{qmu}
q^\mu=(1+w{\bar w},w+{\bar w},i({\bar w}-w),1-w{\bar w})\,,
\end{equation}
from which we construct the following null tetrad for Minkowski space~\cite{Pasterski:2020pdk}
\begin{equation}\label{tetrad}
l^\mu=\frac{q^\mu}{-q\cdot X}\,, ~~~n^\mu=X^\mu+\frac{X^2}{2}l^\mu\,, ~~~m^\mu=\epsilon^\mu_++(\epsilon_+\cdot X) l^\mu\,, ~~~\bar{m}^\mu=\epsilon^\mu_-+(\epsilon_-\cdot X) l^\mu\,,
\end{equation}
where $\epsilon_+^\mu=\frac{1}{\sqrt{2}}\partial_w q^\mu$ and $\epsilon_-^\mu=\frac{1}{\sqrt{2}}\partial_{{\bar w}} q^\mu$.
The vectors of this tetrad satisfy standard normalization conditions $l\cdot n=-1\,,~~m\cdot\bar{m}=1$ with all other inner products vanishing and have the property that they transform covariantly under the SL(2,$\mathbb{C}$)
\begin{equation}\label{tetradcov}
l^\mu\mapsto \Lambda^{\mu}_{~\nu} l^\nu\,,~~n^\mu\mapsto \Lambda^{\mu}_{~\nu} n^\nu\,,
~~m^\mu\mapsto \frac{cw+d}{{\bar c}{\bar w}+{\bar d}}\Lambda^{\mu}_{~\nu} m^\nu\,,~~{\bar m}^\mu\mapsto \frac{{\bar c}{\bar w}+{\bar d}}{cw+d}\Lambda^{\mu}_{~\nu} {\bar m}^\nu\,,
\end{equation}
where $\Lambda^{\mu}_{~\nu}$ is the corresponding vector representation of SO(1,3)$\simeq$ SL(2,$\mathbb{C}$).
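The nullity and normalization properties of this tetrad are easy to verify numerically. In the sketch below, the mostly-plus metric $\eta=\mathrm{diag}(-1,1,1,1)$ and the sample values of $w$ and $X^\mu$ are our assumptions for illustration.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # mostly-plus Minkowski metric (assumed)
dot = lambda a, b: a @ eta @ b

w = 0.3 + 0.7j
wb = np.conj(w)
# q^mu = (1 + w wbar, w + wbar, i(wbar - w), 1 - w wbar): real-valued and null
q = np.array([1 + w*wb, w + wb, 1j*(wb - w), 1 - w*wb])
eps_p = np.array([wb, 1, -1j, -wb]) / np.sqrt(2)   # (1/sqrt 2) d_w q
eps_m = np.conj(eps_p)                             # (1/sqrt 2) d_wbar q

X = np.array([2.0, 0.4, -0.1, 0.5])           # generic point with q.X != 0
l = q / (-dot(q, X))
n = X + dot(X, X) / 2 * l
m = eps_p + dot(eps_p, X) * l
mbar = eps_m + dot(eps_m, X) * l
```

With these definitions one finds $q\cdot q=0$, $l\cdot n=-1$, $m\cdot\bar m=1$, and all other inner products vanish, as stated in the text.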
To discuss operators and wavefunctions of half-integer spin it is convenient to further decompose the tetrad into a spin frame
\begin{equation}\label{spinframe}
l_{a{\dot b}}=o_a\bar{o}_{\dot b}\,,~~n_{a{\dot b}}=\iota_a\bar{\iota}_{\dot b}\,,~~m_{a{\dot b}}=o_a{\bar\iota}_{\dot b}\,,~~\bar{m}_{a{\dot b}}=\iota_a{\bar o}_{\dot b}\,,
\end{equation}
where $v_{a{\dot b}}=v_\mu\sigma^\mu_{a\dot b}$ for $\sigma^\mu_{a\dot b}=(\mathds{1},\sigma^i)_{a\dot b}$ with the Pauli matrices $\sigma^i$. This yields
\begin{equation}\label{spinframespinors}
o_a=\sqrt{\frac{2}{q\cdot X}}\left(\begin{array}{c} {\bar w} \\-1 \end{array}\right)\,,~~~\iota_a=\sqrt{\frac{1}{q\cdot X}}\left(\begin{array}{c}X^0-X^3-w(X^1-iX^2) \\-X^1-iX^2+w(X^0+X^3) \end{array}\right)\,,
\end{equation}
up to an overall phase ambiguity which is fixed by setting ${\bar o}_{\dot a}=(o_a)^*$ and ${\bar \iota}_{\dot a}=(\iota_a)^*$ in the region where $q\cdot X>0$ and analytically continued from there. The elements of the spin frame transform as
\begin{equation}
o_a\mapsto (cw+d)^{\frac{1}{2}}({\bar c}{\bar w}+{\bar d})^{-\frac{1}{2}}(Mo)_a\,, \quad \iota_a\mapsto (cw+d)^{-\frac{1}{2}}({\bar c}{\bar w}+{\bar d})^{\frac{1}{2}}(M\iota)_a\,,
\end{equation}
where $M$ is an element of $\overline{SL(2,\mathbb{C})}$. We refer to appendix~\ref{app:Conventions} for our spinor conventions.
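The decomposition~\eqref{spinframe} can also be checked numerically. The sketch below assumes the mostly-plus metric, $\sigma^\mu=(\mathds{1},\sigma^i)$, and a sample point with $q\cdot X>0$ so that the reality conditions on the spin frame apply directly.

```python
import numpy as np

sig = [np.eye(2),
       np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]            # sigma^mu = (1, sigma^i)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])           # mostly-plus metric (assumed)

def lower_slash(v):                            # v_{a bdot} = v_mu sigma^mu_{a bdot}
    vl = eta @ v
    return sum(vl[mu] * sig[mu] for mu in range(4))

w = 0.3 + 0.1j; wb = np.conj(w)
q = np.array([1 + w*wb, w + wb, 1j*(wb - w), 1 - w*wb])
X = np.array([0.0, 0.2, -0.3, 1.5])            # chosen so that q.X > 0
qX = q @ eta @ X

l = q / (-qX)
n = X + (X @ eta @ X) / 2 * l
eps_p = np.array([wb, 1, -1j, -wb]) / np.sqrt(2)
m = eps_p + (eps_p @ eta @ X) * l

o = np.sqrt(2 / qX) * np.array([wb, -1])
iota = np.sqrt(1 / qX) * np.array([X[0] - X[3] - w*(X[1] - 1j*X[2]),
                                   -X[1] - 1j*X[2] + w*(X[0] + X[3])])
obar, ibar = np.conj(o), np.conj(iota)
```

Comparing `lower_slash` of $l$, $n$ and $m$ with the outer products $o_a\bar o_{\dot b}$, $\iota_a\bar\iota_{\dot b}$ and $o_a\bar\iota_{\dot b}$ confirms~\eqref{spinframe} at this sample point.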
\subsubsection*{Radiative Conformal Primary Wavefunctions}
The scalar conformal primary wavefunction corresponds to the Mellin transform of a plane wave,
\begin{equation}\label{varphi}
\varphi^{\Delta,\pm} =\frac{1}{(-q\cdot X_\pm)^\Delta}=\frac{1}{(\mp i)^\Delta \Gamma(\Delta)}\int_0^\infty d\omega \omega^{\Delta-1} e^{\pm i \omega q\cdot X- \varepsilon q^0\omega}\,,
\end{equation}
where $X^\mu_\pm=X^\mu\pm i\varepsilon\{-1,0,0,0\}$ is used as a regulator. Except as necessary, we will omit the $\pm$~label henceforth. In terms of~\eqref{varphi} together with the null tetrad~\eqref{tetrad} and the spin frame~\eqref{spinframe} the radiative spin-$\frac{1}{2}$ and spin-$\frac{3}{2}$ wavefunctions are given by~\cite{Pasterski:2020pdk}
\begin{equation}\begin{array}{ll}\label{psichi}
\psi_{\Delta,J=+\frac{1}{2}}=o\varphi^\Delta\,,&~~~{\bar\psi}_{\Delta,J=-\frac{1}{2}}={\bar o}\varphi^\Delta\,,\\
\chi_{\Delta,J=+\frac{3}{2};\mu}=m_\mu o\varphi^\Delta\,,&~~~{\bar\chi}_{\Delta,J=-\frac{3}{2};\mu}={\bar m}_\mu{\bar o}\varphi^\Delta\,,
\end{array}
\end{equation}
with the expressions on the left denoting left-handed spinors, while the ones on the right correspond to right-handed spinors. To simplify the notation we have omitted the spinor indices.
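The Mellin representation~\eqref{varphi} can be verified numerically on the $+$ branch. In the sketch below, the values of $\Delta$, of $a$ (standing for $q\cdot X$), and of the regulator are illustrative choices, and $q^0$ has been absorbed into the regulator; SciPy is assumed available.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

Delta, a, eps = 1.5, 0.7, 0.3     # a stands for q.X; eps is the regulator

# Integrand of the Mellin representation (+ branch); the e^{-eps om} factor
# makes the integral absolutely convergent, so truncating at om = 120 is safe.
f = lambda om: om**(Delta - 1) * np.exp(1j*om*a - eps*om)
re, _ = quad(lambda om: f(om).real, 0, 120, limit=500)
im, _ = quad(lambda om: f(om).imag, 0, 120, limit=500)
mellin = (re + 1j*im) / ((-1j)**Delta * gamma(Delta))

# Direct evaluation of 1/(-q.X_+)^Delta with the i eps prescription
direct = (-(a + 1j*eps))**(-Delta)
```

The two complex numbers agree to quadrature precision, confirming that the branch choices in~\eqref{varphi} are mutually consistent.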
Related to the conformal primaries~\eqref{varphi}-\eqref{psichi} via a shadow transform are wavefunctions of flipped and shifted conformal dimension and flipped spin.
The scalar shadow wavefunction is given by
\begin{equation}\label{SHvarphi}
{\widetilde{\varphi}}^{\Delta}=(-X^2)^{\Delta-1}\varphi^{\Delta}\,,
\end{equation}
while we claimed in~\cite{Pasterski:2020pdk} that the shadow wavefunctions for spin-$\frac{1}{2}$ and spin-$\frac{3}{2}$ are given by\footnote{We have chosen the normalization conventions of~\cite{Pasterski:2021fjn} which differ from those of~\cite{Pasterski:2017kqt,Pasterski:2020pdk} by an overall sign in the $s=\frac{1}{2}$ shadow wavefunctions.}
\begin{equation}\begin{array}{ll}\label{SHpsichi}
{\widetilde \psi}_{\Delta,J=-\frac{1}{2}}=-\sqrt{2}\iota(-X^2)^{\Delta-\frac{3}{2}}\varphi^\Delta\,,&~~~\widetilde{{\bar\psi}}_{\Delta,J=+\frac{1}{2}}=-\sqrt{2}{\bar \iota}(-X^2)^{\Delta-\frac{3}{2}}\varphi^\Delta\,,\\
{\widetilde \chi}_{\Delta,J=-\frac{3}{2};\mu}=\sqrt{2}{\bar m}_\mu \iota(-X^2)^{\Delta-\frac{3}{2}}\varphi^\Delta\,,&~~~\widetilde{{\bar\chi}}_{\Delta,J=+\frac{3}{2};\mu}=\sqrt{2}{ m}_\mu{\bar \iota}(-X^2)^{\Delta-\frac{3}{2}}\varphi^\Delta\,.\\
\end{array}
\end{equation}
We will explicitly verify the expressions~\eqref{SHpsichi} in an expansion near null infinity in appendix~\ref{app:radexpCPW}.
\subsubsection*{Generalized Conformal Primary Wavefunctions}
Generalized bulk wavefunctions for half-integer spins with $|J|\leq s$ were discussed in~\cite{Pasterski:2020pdk,Pasterski:2021fjn}. We focus here on the analytic wavefunctions built from the generalized scalar $\varphi^{gen}_\Delta=f(X^2) \varphi^\Delta$. For $s=\frac{1}{2}$ they are given by
\begin{equation}
\psi^{gen}_{\Delta,+\frac{1}{2}}=o \varphi^{gen}_\Delta\,, \quad \psi^{gen}_{\Delta,-\frac{1}{2}}=\iota \varphi^{gen}_\Delta\,.
\end{equation}
Enforcing the vacuum Weyl equation $\bar{\sigma}^\mu\partial_\mu \psi^{gen}_{\Delta,\pm\frac{1}{2}}=0$ yields the radiative conformal primary wavefunctions~\eqref{psichi} and their shadows~\eqref{SHpsichi} for generic $\Delta \in \mathbb{C}$ and $J=\pm \frac{1}{2}$. For $s=\frac{3}{2}$ they are given by
\begin{equation}
\chi^{gen}_{\Delta,+\frac{3}{2};\mu}=m_\mu o\varphi_{\Delta}^{gen}\,, \quad \chi^{gen}_{\Delta,-\frac{3}{2};\mu}=\bar{m}_\mu\iota\varphi_{\Delta}^{gen}\,,
\end{equation}
and
\begin{equation}
\chi^{gen}_{\Delta,+\frac{1}{2};\mu}=l_\mu o\varphi_{\Delta}^{gen,1}+n_\mu o\varphi_{\Delta}^{gen,2}+m_\mu \iota\varphi_{\Delta}^{gen,3}\,, \quad \chi^{gen}_{\Delta,-\frac{1}{2};\mu}=l_\mu \iota\varphi_{\Delta}^{gen,1}+n_\mu \iota\varphi_{\Delta}^{gen,2}+\bar{m}_\mu o\varphi_{\Delta}^{gen,3}\,,
\end{equation}
where $\varphi^{gen,i}_\Delta=f_i(X^2) \varphi^\Delta$.
Enforcing the chiral projection of the Rarita-Schwinger equation $\varepsilon^{\mu\nu\rho\kappa}\bar{\sigma}_\nu\nabla_\rho \chi^{gen}_{\Delta,J;\kappa}=0$ as well as the gauge conditions
\begin{equation} \nabla^\mu \chi^{gen}_{\Delta,J;\mu}=0\,,~~ X^\mu \chi^{gen}_{\Delta,J;\mu}=0\,,~~ \bar{\sigma}^\mu \chi^{gen}_{\Delta,J;\mu}=0\,,\end{equation}
yields the radiative conformal primary wavefunctions~\eqref{psichi} and their shadows~\eqref{SHpsichi} for generic $\Delta \in \mathbb{C}$ and $J=\pm \frac{3}{2}$, as well as a discrete set of solutions $\Delta=\frac{5}{2}$ and $J=\pm \frac{1}{2}$ which we will come back to in section~\ref{sec:gravitino}.
\subsection{Conformal Primary Operators}\label{sec:CPO}
Given a 4D operator of spin-$s$ in the Heisenberg picture, we can define a 2D celestial CFT operator via a suitable inner product with the conformal primary wavefunctions. We need three ingredients: our primary wavefunctions, a bulk operator, and an appropriate inner product. We examined spinorial primary wavefunctions in the previous section, so let us move on to the operator mode expansions.
\paragraph{Bulk Operator Mode Expansions}
In the momentum basis, we have the following mode expansions for the spin-$\frac{1}{2}$
and spin-$\frac{3}{2}$ left-handed Weyl components of the Majorana fields.
Parametrizing null momenta as $k^\mu=\omega q^\mu$ we can express them as a product of spinor-helicity variables
\begin{equation}
k_{a \dot a}=\sigma^\mu_{a\dot a} k_\mu = -|k]_a \langle k |_{\dot a}\,,
\end{equation}
where
\begin{equation}
|k]_a=\sqrt{\omega}|q]_a\,, \quad \langle k|_{\dot a}=\sqrt{\omega}\langle q|_{\dot a}\,,
\end{equation}
with
\begin{equation}
|q]_a=\sqrt{q\cdot X}o_a\,, \quad \langle q|_{\dot{a}}=\sqrt{q\cdot X}\bar{o}_{\dot{a}}\,.
\end{equation}
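Combining the above, the momentum bispinor is expressed directly in terms of the spin frame:
\begin{equation}
k_{a\dot a}=-|k]_a\langle k|_{\dot a}=-\omega\,|q]_a\langle q|_{\dot a}=-\omega\,(q\cdot X)\,o_a\bar{o}_{\dot a}\,.
\end{equation}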
For the left-handed massless Weyl photino we have the mode expansion
\begin{equation}\label{hatpsiBulk}
\hat \psi_a(X)=e\int \frac{d^3k}{(2\pi)^3}\frac{|k]_a}{2k^0}\left[a_- e^{ik\cdot X}+a_+^\dagger e^{-ik\cdot X}\right]\,,
\end{equation}
while for the left-handed massless Weyl gravitino we have
\begin{equation}\label{hatchiBulk}
\hat{\chi}_{\mu a}(X)=\kappa\int \frac{d^3k}{(2\pi)^3}\frac{|k]_a}{2k^0}\epsilon^+_\mu\left[ a_- e^{ik\cdot X}+a_+^\dagger e^{-ik\cdot X}\right]\,.
\end{equation}
These mode operators obey the standard anti-commutation relations
\begin{equation}
\{a_\pm(k),a^\dagger_\pm(k')\}=(2\pi)^3 (2k^0)\delta^{(3)}(k-k')\,.
\end{equation}
The opposite helicity modes are captured by the right-handed Weyl spinors.
\paragraph{From Weyl to Dirac and Majorana Primaries}
The spin $s=\frac{1}{2}$ and $\frac{3}{2}$ Weyl spinors corresponding to the photino and gravitino can be embedded into Dirac spinors
\begin{equation}\label{embed1}
\Psi^{s=\frac{1}{2}}=\left(\begin{array}{c}
\psi_a \\ \bar{\psi}^{\dot a}
\end{array}\right)\,, \quad \Psi^{s=\frac{3}{2}}_{\mu}=\left(\begin{array}{c}
\chi_{\mu a} \\ \bar{\chi}_{\mu}^{\dot a}
\end{array}\right)\,,
\end{equation}
on which we impose a Majorana condition (see appendix~\ref{app:Conventions} for more details)
\begin{equation}\label{majorana}
\bar{\psi}^{\dot a}=\varepsilon^{\dot a \dot b}(\psi^\dagger)_{\dot b}\,, \quad \bar{\chi}^{\dot a}_\mu=\varepsilon^{\dot a \dot b}(\chi^\dagger)_{\mu\dot b}\,.
\end{equation}
The conformal primary wavefunctions of section~\ref{sec:CPW} embed into Dirac spinors of definite conformal dimension and spin
\begin{equation}\label{chiralspinor}
\Psi_{\Delta,J}^{s=\frac{1}{2}}=\left(
\begin{array}{c}
\psi_{\Delta,J} \\
\bar{\psi}_{\Delta,J}
\end{array}\right)\,, \quad
\Psi_{\Delta,J;\mu}^{s=\frac{3}{2}}=\left(
\begin{array}{c}
\chi_{\Delta,J;\mu} \\
\bar{\chi}_{\Delta,J;\mu}
\end{array}\right)\,,
\end{equation}
where the component spinors are given by~\eqref{psichi}. Thus for fixed sign of the spin only one Weyl component is non-vanishing. Because these primaries obey
\begin{equation}
(\Psi_{\Delta,J})^C=\Psi_{\Delta^*,-J}\,,
\end{equation}
where the charge conjugate $\Psi^C$ is defined in~\eqref{chargeconjugate}, we can construct a Majorana primary for either $s=\frac{1}{2}$ or $s=\frac{3}{2}$ (omitting the spacetime index) as
\begin{equation}\label{embed}
\Psi_{\rm M}(\Delta,J)=\Psi_{\Delta,J}+\Psi_{\Delta^*,-J}\,.
\end{equation}
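Indeed, since charge conjugation is an involution and $(\Delta^*)^*=\Delta$, the combination~\eqref{embed} is manifestly Majorana:
\begin{equation}
\left(\Psi_{\rm M}(\Delta,J)\right)^C=(\Psi_{\Delta,J})^C+(\Psi_{\Delta^*,-J})^C=\Psi_{\Delta^*,-J}+\Psi_{\Delta,J}=\Psi_{\rm M}(\Delta,J)\,.
\end{equation}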
\subsubsection*{Fermionic 2D Operators}
With these ingredients, as well as the inner products which we review in appendix~\ref{app:IP}, we are now ready to construct fermionic primary operators generalizing the bosonic construction of~\cite{Donnay:2020guq,Pasterski:2021fjn}. In terms of the Dirac spinors~\eqref{chiralspinor} we define the 2D fermionic operator
\begin{equation}\label{Ffermionic}
\mathcal{O}^{s,\pm}_{\Delta,J}(w,{\bar w})\equiv i(\hat{\Psi}^s(X^\mu),\Psi^s_{\Delta^*,-J}(X_\mp^\mu;w,{\bar w}))\,,
\end{equation}
where $\pm$ on the operator indicates whether it corresponds to an {\it in} or an {\it out} state. The inner products $(\cdot\,,\cdot)$ for $s=\frac{1}{2}$ and $\frac{3}{2}$ Dirac spinors are given in~\eqref{IPphotino} and~\eqref{IPgravitino}, respectively. In terms of our Weyl spinors these take the explicit form
\begin{equation}\label{Fphotinos}
\begin{alignedat}{2}
\mathcal{O}^{s,\pm}_{\Delta,J}(w,{\bar w})&=i\int d\Sigma_\nu \left({\psi}_{\Delta,J} (X_\pm;w,{\bar w}) \sigma^\nu \hat {{\psi}}^\dagger(X)+\bar{\psi}_{\Delta,J} (X_\pm;w,{\bar w}) \bar\sigma^\nu \hat {\psi}(X)\right)\,,
\end{alignedat}
\end{equation}
for the photino and
\begin{equation}\label{Fgravitinos}
\begin{alignedat}{2}
\mathcal{O}^{s,\pm}_{\Delta,J}(w,{\bar w})&=i\int d\Sigma_\nu \left(\chi_{\Delta,J;\mu} (X_\pm;w,{\bar w}) \sigma^\nu \hat \chi^{\dagger\mu}(X)+ \bar{\chi}_{\Delta,J;\mu}(X_\pm;w,{\bar w})\bar\sigma^\nu \hat{\chi}^\mu(X) \right)\,,
\end{alignedat}
\end{equation}
for the gravitino, where we note that for fixed $\Delta,J$ only one of the two terms in each expression will be non-zero. We will use $\widetilde{\mathcal{O}}^{s,\pm}_{\Delta,J}$ to denote the shadow operator constructed from~\eqref{Ffermionic} with~\eqref{SHpsichi} replacing~\eqref{psichi} in the chiral Dirac spinor~\eqref{chiralspinor}.
For integer spins $s=1,2$ we showed in~\cite{Donnay:2020guq} that for certain values of $\Delta$ the operators $\mathcal{O}^s_{\Delta,J}(w,{\bar w})$ correspond to soft charges in the full (matter-coupled) theory when the Cauchy slice on which they are defined is taken to null infinity and the wavefunctions $\Phi^s_{\Delta,J}$ are the Goldstone modes of the spontaneously broken asymptotic symmetries in gauge theory and gravity. Here, the operator~\eqref{Ffermionic} generates the shift\footnote{Recall that the mode operators are Grassmann valued while our conventions are such that the fermionic primary wavefunctions are not, as such one should multiply these generators by a Grassmann valued parameter to implement the supersymmetry transformations. See~\cite{Fotopoulos:2020bqj,Jiang:2021xzy,Brandhuber:2021nez,Hu:2021lrx} for recent work on celestial superfields.}
\begin{equation}\label{shift2}
\{\mathcal{O}^{s,\pm}_{\Delta,J}(w,{\bar w}),\hat \Psi^s(X)\}=i\Psi^s_{\Delta,J}(X_\mp;w,{\bar w})\,.
\end{equation}
In section~\ref{sec:gravitino} we will show that the spin-$\frac{3}{2}$ operator~\eqref{Ffermionic} for certain values of $\Delta$ corresponds to the soft charge for spontaneously broken large supersymmetry whose Ward identity is equivalent to the conformally soft gravitino theorem. Moreover, even in the absence of an asymptotic symmetry, we will identify~\eqref{Ffermionic} for $s=\frac{1}{2},\frac{3}{2}$ with soft charges associated to the conformally soft photino theorem and the subleading conformally soft gravitino theorem.
\section{From Bulk to Boundary}\label{sec:BulkBoundary}
Evaluating the fermionic 2D operators at null infinity requires knowing both the large-$r$ expansions of the 4D field operators and the conformal primary wavefunctions.
Near $\mathcal I^+$ we use retarded Bondi coordinates $(u,r,z,{\bar z})$ which are related to the Cartesian coordinates $(X^0,X^1,X^2,X^3)$ by the transformation
\begin{equation}\label{eq:co}
X^0=u+r\,, \quad X^i=r \hat{X}^i(z,{\bar z})\,, \quad \hat{X}^i(z,{\bar z})=\frac{1}{1+z \bar z}(z+{\bar z},i({\bar z}-z),1-z{\bar z})\,,
\end{equation}
which maps the line element to
\begin{equation}\label{gBondi}
ds^2=-du^2-2du dr+2r^2 \gamma_{z \bar z} dz d\bar z \quad \text{with}\quad \gamma_{z \bar z}=\frac{2}{(1+z \bar z)^2}\,.
\end{equation}
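As a quick consistency check of the coordinate transformation~\eqref{eq:co} and the line element~\eqref{gBondi}, one can pull back the flat metric symbolically; the following is our own verification sketch (not part of the derivation), treating $z$ and ${\bar z}$ as independent variables:

```python
import sympy as sp

u, r = sp.symbols('u r', real=True)
z, zb = sp.symbols('z zbar')  # z and zbar treated as independent variables

# Unit vector hat{X}^i(z, zbar) from the coordinate transformation
N = 1 + z*zb
Xhat = sp.Matrix([(z + zb)/N, sp.I*(zb - z)/N, (1 - z*zb)/N])

# Cartesian coordinates: X^0 = u + r, X^i = r * hat{X}^i
X = sp.Matrix([u + r]).col_join(r*Xhat)

coords = [u, r, z, zb]
eta = sp.diag(-1, 1, 1, 1)            # flat metric, mostly-plus signature

# Pull back: g_ab = (dX^mu/dx^a) eta_{mu nu} (dX^nu/dx^b)
J = X.jacobian(coords)
g = sp.simplify(J.T * eta * J)

gamma = 2/N**2                        # round-sphere metric gamma_{z zbar}
expected = sp.Matrix([[-1, -1, 0, 0],
                      [-1,  0, 0, 0],
                      [ 0,  0, 0, r**2*gamma],
                      [ 0,  0, r**2*gamma, 0]])
assert sp.simplify(g - expected) == sp.zeros(4, 4)
```

The off-diagonal $g_{z{\bar z}}=r^2\gamma_{z{\bar z}}$ entries reproduce $2r^2\gamma_{z{\bar z}}\,dz\,d{\bar z}$, and $g_{rr}=0$ confirms that $r$ is a null coordinate.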
To discuss spinor fields we use the flat frame
\begin{equation}
e^0=e^0_\mu dX^\mu\equiv du+dr\,, \quad e^i=e^i_\mu dX^\mu \equiv \hat X^idr+r\partial_z \hat X^i dz+r\partial_{\bar z} \hat X^i d{\bar z}\,,
\end{equation}
for which the spin connection vanishes. The Gamma matrices in retarded Bondi coordinates are of the form $\gamma_\mu\equiv e_\mu^0 \gamma_0+e_\mu^i \gamma_i$ yielding
\begin{equation}
\gamma_u =\gamma_0\,, \quad \gamma_r=\gamma_0+\hat X^i \gamma_i\,, \quad \gamma_z=r\partial_z \hat X^i \gamma_i\,, \quad \gamma_{\bar z}=r\partial_{\bar z} \hat X^i \gamma_i\,.
\end{equation}
\subsection{Operator Expansion}\label{sec:OPexpansion}
At the center of the equivalence between soft theorems and Ward identities of asymptotic symmetries lies the
large-$r$ saddle point approximation
\begin{equation}\label{saddlept}
\lim_{r\rightarrow\infty}\sin\theta e^{i\omega q^0r(1-\cos\theta)}=\frac{i}{\omega q^0 r}\delta(\theta)+\mathcal{O}((\omega q^0 r)^{-2})\,,
\end{equation}
which we use to evaluate the integrals~\eqref{hatpsiBulk}-\eqref{hatchiBulk} and their Hermitian conjugates.
For spin-$\frac{1}{2}$ we have
\begin{equation}\label{psiScri}
\lim\limits_{r\rightarrow\infty} r\hat{\psi}=-\frac{i e}{2(2\pi)^2} (1+z{\bar z}) \int_0^\infty d\omega {\omega}^{\frac{1}{2}}\left[ a_-(\omega,z,{\bar z})e^{-i\omega (1+z{\bar z}) u} - a^\dagger_+(\omega,z,{\bar z})e^{i\omega (1+z{\bar z}) u}\right]|x]\,,
\end{equation}
while for spin-$\frac{3}{2}$ we get
\begin{equation}\label{chiScri}
\lim\limits_{r\rightarrow\infty} \hat{\chi}_{\bar z}=-\frac{i \kappa}{\sqrt{2}(2\pi)^2} \int_0^\infty d\omega {\omega}^{\frac{1}{2}}\left[ a_-(\omega,z,{\bar z})e^{-i\omega (1+z{\bar z}) u} - a^\dagger_+(\omega,z,{\bar z})e^{i\omega (1+z{\bar z}) u}\right]|x]\,,
\end{equation}
with the other components subleading. Replacing $a_- \mapsto a_+$, $a^\dagger_+\mapsto a^\dagger_-$ and $|x]_a \mapsto |x\rangle^{\dot a}$ yields the Hermitian conjugate Weyl spinors $\hat{\psi}^\dagger$ and $\hat{\chi}^\dagger_z$, where we note that $ \epsilon^+_{\bar z}=\epsilon^-_z=\frac{\sqrt{2}r}{1+z{\bar z}}$ while $\epsilon^+_z= \epsilon^-_{\bar z}=0$.
The saddle point approximation has localized $(w,{\bar w})\mapsto (z,{\bar z})$ and we have introduced
\begin{equation}
|x]_a=\sqrt{2}\left(\begin{array}{c} {\bar z}\\-1\end{array}\right)\,, \quad | x\rangle^{\dot a}=-\sqrt{2}\left(\begin{array}{c} 1\\z\end{array}\right)\,.
\end{equation}
The above expressions~\eqref{psiScri}-\eqref{chiScri} give the bulk operators near null infinity using a momentum basis mode expansion. The creation and annihilation operators with definite $(\Delta,J)$ are obtained from $a_\pm$ and $a^\dagger_\pm$ via a Mellin transform
\begin{equation}\label{mellinmode}
a_{\Delta,\pm s}=\int_0^\infty d\omega \omega^{\Delta-1}a_\pm(\omega)\,,\quad a^\dagger_{\Delta,\mp s}=\int_0^\infty d\omega \omega^{\Delta-1}a^\dagger_\pm(\omega)\,,
\end{equation}
while the inverse Mellin transform
\begin{equation}\label{inversemellinmode}
a_\pm(\omega)=\frac{1}{2\pi} \int_{1-i\infty}^{1+i\infty} (-id\Delta) \omega^{-\Delta}a_{\Delta,\pm s}\,,\quad a^\dagger_\mp(\omega)=\frac{1}{2\pi} \int_{1-i\infty}^{1+i\infty} (-id\Delta) \omega^{-\Delta}a^\dagger_{\Delta,\pm s}\,,
\end{equation}
takes us back to the momentum basis. We have labeled both the creation and annihilation Mellin operators with the $\Delta$ and $J$ of the corresponding state on the celestial sphere.
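Writing $\Delta=1+i\lambda$ and using $\int_{-\infty}^{\infty}d\lambda\, e^{i\lambda x}=2\pi\delta(x)$, one checks that~\eqref{inversemellinmode} indeed inverts~\eqref{mellinmode}:
\begin{equation}
\frac{1}{2\pi}\int_{1-i\infty}^{1+i\infty}(-id\Delta)\,\omega^{-\Delta}\int_0^\infty d\omega'\,\omega'^{\Delta-1}a_\pm(\omega')=\frac{1}{\omega}\int_0^\infty d\omega'\,\delta\!\left(\ln\tfrac{\omega'}{\omega}\right)a_\pm(\omega')=a_\pm(\omega)\,.
\end{equation}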
\subsection{Wavefunction Expansion}\label{sec:CPWexpansion}
Finally, we need the expansion of the conformal primary wavefunctions near null infinity in order to evaluate the 2D operators and identify them for special values of $\Delta$ with soft charges. To obtain the large~$r$ expansion for the fermions we note that the components of the spin frame can be written in terms of the scalar primary as
\begin{equation}\label{oiotaexpansion}
o=\sqrt{2}i\varphi^\frac{1}{2}\left( \begin{array}{c} {\bar w} \\ -1 \end{array}\right)\,, \quad \iota=i\varphi^{\frac{1}{2}}\left[2r \frac{z-w}{1+z{\bar z}} \left( \begin{array}{c} {\bar z} \\ -1 \end{array}\right) +u \left( \begin{array}{c} 1 \\ w \end{array}\right) \right]\,,
\end{equation}
while the relevant tetrad vector components can be expressed as
\begin{equation}\label{mmbarexpansion}
m_{z}=-\sqrt{2}r(2r+u)\varphi^{1}\frac{({\bar z}-{\bar w})^2}{(1+z{\bar z})^2},~~~m_{{\bar z}}=\sqrt{2}ru\varphi^{1}\frac{(1+z{\bar w})^2}{(1+z{\bar z})^2}\,,
\end{equation}
with the expressions for $\bar{m}$ following from complex conjugation. The behavior near null infinity of the fermionic conformal primaries is thus dictated by the large-$r$ expansion of the scalar wavefunction.
For the Mellin transformed plane wave the saddle-point approximation gives
\begin{equation}\begin{alignedat}{3}\label{eq:sp}
\lim_{r\rightarrow\infty} \int_0^\infty d\omega \omega^{\Delta-1}e^{\pm i\omega q\cdot X-\varepsilon\omega (1+z{\bar z})}
&=r^{-1}\frac{u^{1-\Delta}\Gamma(\Delta-1)}{(\pm i)^{\Delta}(1+z{\bar z})^{\Delta-2}} \pi \delta^{(2)}(z-w)+\mathcal{O}(r^{-2})\,,
\end{alignedat}\end{equation}
which implies for the scalar primary
\begin{equation}
\lim_{r\to \infty}\varphi^\Delta\Big|_{z= w}=r^{-1}\frac{u^{1-\Delta}}{\Delta-1} (1+z{\bar z})^{2-\Delta}\pi \delta^{(2)}(z-w)+\mathcal{O}(r^{-2})\,.
\end{equation}
Note that the $\omega$ integral in~\eqref{eq:sp} converges so long as Re$(\Delta)>1$, and we have taken the large~$r$ limit before integrating over~$\omega$.
Going from the plane wave to the conformal basis, it is easy to see that away from $z=w$ we have the expansion
\begin{equation}
\lim_{r\to \infty}\varphi^\Delta\Big|_{z\neq w}=\left(\frac{1+z{\bar z}}{2(z-w)({\bar z}-{\bar w})}\right)^\Delta r^{-\Delta}+\mathcal{O}(r^{-\Delta-1})\,,
\end{equation}
which for Re$(\Delta)< 1$ is leading compared to the contact term~\eqref{eq:sp}.
For Re$(\Delta)= 1$ the above expressions contribute at the same order.\footnote{The appearance of similar ``resonances'' was discussed e.g.\ in~\cite{Kutasov:1999xu}.} For the shadow transformed scalar primary~\eqref{SHvarphi} the roles of the contact and non-contact terms are reversed. Using $-X^2=u(2r+u)$ we find
\begin{equation}
\lim_{r\to \infty} \widetilde{\varphi}^\Delta\Big|_{z= w}=r^{\Delta-2}\frac{2^{\Delta-1}}{\Delta-1}(1+z{\bar z})^{2-\Delta}\pi\delta^{(2)}(z-w)+\mathcal{O}(r^{\Delta-3})\,,
\end{equation}
with the $\omega$-integral~\eqref{eq:sp} again converging so long as Re$(\Delta)>1$, while
\begin{equation}
\lim_{r\to \infty} \widetilde{\varphi}^\Delta\Big|_{z\neq w}=r^{-1}u^{-1+\Delta} \frac{1}{2}\left(\frac{1+z{\bar z}}{(z-w)({\bar z}-{\bar w})}\right)^\Delta +\mathcal{O}(r^{-2})\,.
\end{equation}
Notice that at generic points ($z\neq w$) the shadow primaries have the standard radiative fall-offs near null infinity, e.g.\ $\sim 1/r$ for scalar wavefunctions, and this continues to hold for non-zero spin. Meanwhile the contact terms of the non-shadowed primaries are of radiative order.
The above ingredients determine the large-$r$ behavior of the photino and gravitino and their shadow transforms (see appendix~\ref{app:radexpCPW} for their detailed expressions). In particular, the photino arises from the scalar with conformal dimension shifted by $\frac{1}{2}$ while the gravitino is proportional to the scalar with conformal dimension shifted by $\frac{3}{2}$.
In section~\ref{sec:gravitino} and~\ref{sec:subgoldstinos} we will be interested in the large-$r$ behavior for special half-integer values of the conformal dimension. For the leading conformally soft gravitino with $\Delta=\frac{1}{2}$ both contact and non-contact terms will appear at the same order and yield finite contributions, while for the conformally soft photino with $\Delta=\frac{1}{2}$ and the subleading conformally soft gravitino $\Delta=-\frac{1}{2}$ the contact terms have simple poles, respectively, $\frac{1}{\Delta-\frac{1}{2}}$ and $\frac{1}{\Delta+\frac{1}{2}}$.
\subsection{Extrapolate-Style Dictionary for Fermions}\label{sec:CPOexpansion}
We close this section by commenting on an extrapolate-style dictionary for fermions. Notice that states of definite conformal dimension are prepared with bulk operators integrated along a light ray of the form
\begin{equation}\label{lightray}
\mathcal{O}^{\frac{1}{2},-}_{\Delta,+\frac{1}{2}}\propto\lim_{r\rightarrow \infty} r\int du u_+^{-\Delta+\frac{1}{2}} \hat{\psi}(u,r,z,{\bar z}), \qquad \mathcal{O}^{\frac{3}{2},-}_{\Delta,+\frac{3}{2}}\propto\lim_{r\rightarrow \infty} \int du u_+^{-\Delta+\frac{1}{2}} \hat{\chi}_{{\bar z}}(u,r,z,{\bar z}) \, ,
\end{equation}
for the $+$ helicity modes, and similarly with the $z$ component for the $-$ helicity modes.
Here we have defined $u_\pm=u\pm i \varepsilon$. We note that a cancellation of phases selects the annihilation or creation operator in the integrals, namely
\begin{equation}\scalemath{0.98}{\label{uplus}
\int_{-\infty}^\infty du u_+^{-\Delta+\frac{1}{2}} \int_0^\infty d\omega {\omega^{\frac{1}{2}}}\left[a_\pm e^{-i\omega(1+z{\bar z}) {u}}-a^\dagger_{\mp}e^{i\omega(1+z{\bar z}) {u}}\right]
=\frac{2\pi (1+z{\bar z})^{\Delta-\frac{3}{2}} }{{i^{\Delta-\frac{1}{2}}}\Gamma(\Delta-\frac{1}{2})} a_{\Delta,\pm s}\,,
}\end{equation}
and
\begin{equation}\scalemath{0.98}{\label{uminus}
\int_{-\infty}^\infty du u_-^{-\Delta+\frac{1}{2}} \int_0^\infty d\omega{\omega^{\frac{1}{2}}} \left[a_\pm e^{-i\omega(1+z{\bar z}) {u}}-a^\dagger_{\mp}e^{i\omega(1+z{\bar z}) {u}}\right]={-}\frac{2\pi (1+z{\bar z})^{\Delta-\frac{3}{2}} }{{(-i)^{\Delta-\frac{1}{2}}}\Gamma(\Delta-\frac{1}{2})} a_{\Delta,\pm s}^\dagger
\,.}
\end{equation}
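The phase selection in~\eqref{uplus} and~\eqref{uminus} follows from the standard contour-deformation identities (recorded here for convenience; $s=\Delta-\frac{1}{2}$, $\omega>0$, and in our case $\omega\to\omega(1+z{\bar z})$)
\begin{equation}
\int_{-\infty}^{\infty}du\,(u\mp i\varepsilon)^{-s}\,e^{\pm i\omega u}=\frac{2\pi}{\Gamma(s)}\,(\pm i)^{s}\,\omega^{s-1}\,,\qquad \int_{-\infty}^{\infty}du\,(u\pm i\varepsilon)^{-s}\,e^{\pm i\omega u}=0\,,
\end{equation}
where the vanishing integrals follow by closing the $u$-contour in the half plane where the exponential decays and the integrand is analytic.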
Light ray operators at future (past) null infinity (with an appropriate analytic continuation off the real manifold) thus create the \emph{out} (\emph{in}) states from the vacuum.
For example, the standard outgoing momentum eigenstate of a positive helicity photino is prepared via
\begin{equation}\label{pout}
\langle p,+|{\propto}\lim\limits_{r\rightarrow\infty} r \int du\, e^{i\omega(1+z{\bar z}) u}\langle 0|\hat{\psi}\,,
\end{equation}
where $p_\mu=\omega q_\mu(z,{\bar z})$. Here we are concerned with the state in the Hilbert space. The right hand side is also proportional to the spinor $|p]$, which we can remove by contracting with $\frac{1}{\omega(1+z{\bar z})}\langle p|\bar{\sigma}_u$.
Its analog for an outgoing boost eigenstate of SL(2,$\mathbb{C}$) spin $J=+\frac{1}{2}$ is
\begin{equation}
\langle \Delta,z,{\bar z},+|{\propto}\lim_{r\rightarrow \infty} r \int du \,u_+^{-\Delta+\frac{1}{2}} \langle 0|\hat{\psi}(u_+,r,z,{\bar z})\, .
\end{equation}
Similar expressions are obtained for the outgoing state of the gravitino with $J=+\frac{3}{2}$.
\section{Large Supersymmetry and Conformally Soft Gravitino}\label{sec:gravitino}
In this section we investigate the celestial diamond corresponding to the leading soft gravitino theorem and large supersymmetry transformations.
{Recall that the massless Rarita-Schwinger equation has a fermionic gauge symmetry. Namely, it is invariant under the gauge transformation
\begin{equation}\label{LargeSusy}
\Psi_{\mu} \mapsto \Psi_{\mu}+\nabla_\mu \lambda\,,
\end{equation}
where $\lambda$ is an anti-commuting spinor. This is the local supersymmetry of supergravity.\footnote{{We assume a purely bosonic asymptotically flat background and have dropped the corresponding transformation on the frame field. Note that while the spinor $\lambda$ here is anti-commuting, in the rest of the section we restrict to commuting component spinors which should be dressed with Grassmann variables to compare to~\eqref{LargeSusy}.}} When the asymptotic behavior of $\lambda$ is such that the inhomogeneous shift goes to an arbitrary function in $(z,{\bar z})$ near null infinity, the gauge transformation~\eqref{LargeSusy} corresponds to a (spontaneously broken) {\it large} supersymmetry transformation and the Goldstino is the (conformally) soft gravitino.}
\subsection{Leading Conformally Soft Gravitino}\label{sec:confsoftgravitinos}
For conformal dimension $\Delta=\frac{1}{2}$ the left-handed conformal primary wavefunction with $J=+\frac{3}{2}$ reduces to pure gauge~\cite{Pasterski:2020pdk,Pasterski:2021fjn}
\begin{equation}\label{chisoft}
\chi^{\rm G}_{\frac{1}{2},+\frac{3}{2};\mu}=\nabla_\mu \Lambda_{\frac{1}{2},+\frac{3}{2}}\,, \quad \Lambda_{\frac{1}{2},+\frac{3}{2}}=(\epsilon_+\cdot X)o\varphi^{\frac{1}{2}}\,.
\end{equation}
Besides the harmonic and radial gauge conditions, $\nabla^\mu \chi_\mu=0$ and $X^\mu \chi_\mu=0$, the conformally soft gravitino~\eqref{chisoft} satisfies the gauge condition $\bar{\sigma}^\mu\chi_\mu=0$~\cite{Lysov:2015jrs}.
At future null infinity its angular components take the form
\begin{equation}\label{chisoftscri}
\chi^{\rm G}_{\frac{1}{2},+\frac{3}{2};z}
=\frac{-i}{(z-w)^2}\left(
\begin{array}{c}
{\bar w} \\ -1
\end{array}\right)
\,,\quad
\chi^{\rm G}_{\frac{1}{2},+\frac{3}{2},{\bar z}}
=2\pi i \delta^{(2)}(z-w)\left(
\begin{array}{c}
{\bar w} \\ -1
\end{array}\right)
\,,
\end{equation}
while its temporal and radial components behave, respectively, as $\mathcal{O}(1/r)$ and $\mathcal{O}(1/r^2)$. Hence the gravitino wavefunction $\chi_{\frac{1}{2},+\frac{3}{2};\mu}$ obeys the standard fall-off conditions and obviously has vanishing `field strength' $f_{\mu\nu}=\nabla_\mu \chi_\nu-\nabla_\nu \chi_\mu$. Comparing this to~\eqref{LargeSusy} we recognize the $(\Delta,J)=(\frac{1}{2},+\frac{3}{2})$ gravitino~\eqref{chisoftscri} as the Goldstino for spontaneously broken large supersymmetry.
Related to the conformally soft gravitino~\eqref{chisoft} by a shadow transform is the conformal shadow primary of spin $J=-\frac{3}{2}$ and conformal dimension $\Delta=\frac{3}{2}$ which also reduces to pure gauge, albeit with a more complicated potential,
\begin{equation}\label{tchisoft}
\widetilde{\chi}^{\rm G}_{\frac{3}{2},-\frac{3}{2};\mu}=\nabla_\mu \Lambda_{\frac{3}{2},-\frac{3}{2}}\,, \quad \Lambda_{\frac{3}{2},-\frac{3}{2}}=\sqrt{2}\left((\epsilon_-\cdot X)\iota-{\textstyle \frac{1}{2}}(\epsilon_-\cdot X)^2o\right)\varphi^{\frac{3}{2}}\,,
\end{equation}
and satisfies the harmonic and radial gauge condition as well as $\bar{\sigma}^\mu \widetilde \chi_\mu=0$.
At future null infinity its angular components take the form\footnote{Using~\eqref{2dShadowTransform} it is straightforward to check that the leading $z$ component in the large-$r$ expansion of~\eqref{tchisoft} is the shadow transform of~\eqref{chisoft}, while to show this for the ${\bar z}$ component it is convenient to write~\eqref{chisoft} as a sum over (derivatives of) conformal integrals and use~\eqref{I2} to solve them.}
\begin{equation}\label{tchisoftscri}
\widetilde{\chi}^{\rm G}_{\frac{3}{2},-\frac{3}{2};z}
=\pi i \delta^{(2)}(z-w)
\left(\begin{array}{c}
1 \\ 0
\end{array}\right)
-\pi i \partial_{\bar z} \delta^{(2)}(z-w)
\left(\begin{array}{c}
{\bar z} \\ -1
\end{array}\right)
\,,\quad
\widetilde{\chi}^{\rm G}_{\frac{3}{2},-\frac{3}{2};{\bar z}}
=\frac{-i}{({\bar z}-{\bar w})^3}
\left(
\begin{array}{c}
{\bar z} \\ -1
\end{array}\right)
\,,
\end{equation}
while its temporal and radial components behave, respectively, as $\mathcal{O}(1/r)$ and $\mathcal{O}(1/r^2)$ and the `field strength' again vanishes. We recognize the $(\Delta,J)=(\frac{3}{2},-\frac{3}{2})$ gravitino~\eqref{tchisoftscri} as the (shadow) Goldstino for spontaneously broken large supersymmetry.
\subsection{Celestial Gravitino Diamond}
The conformally soft gravitinos~\eqref{chisoft} and~\eqref{tchisoft} form the left and right corners of the celestial gravitino diamond shown in figure~\ref{fig:gravitinodiamond} and descend to the same generalized conformal primary at the bottom of the diamond
\begin{equation}
\partial_w \widetilde \chi^{\rm G}_{\frac{3}{2},-\frac{3}{2}}=\chi^{gen,{\rm G}}_{\frac{5}{2},-\frac{1}{2}} =\frac{1}{2!} \partial_{\bar w}^2 \chi^{\rm G}_{\frac{1}{2},+\frac{3}{2}}\,.
\end{equation}
We can complete the diamond at the top corner with a generalized conformal primary wavefunction that descends to the radiative conformal primaries\footnote{The ambiguity in defining the top corner is discussed in~\cite{Pasterski:2021fjn}.}
\begin{equation}
\widetilde \chi^{\rm G}_{\frac{3}{2},-\frac{3}{2}}=\frac{1}{2!}\partial_{\bar w}^2 \chi^{gen,{\rm G}}_{-\frac{1}{2},+\frac{1}{2}}\,, \quad \chi^{\rm G}_{\frac{1}{2},+\frac{3}{2}}=\partial_w \chi^{gen,{\rm G}}_{-\frac{1}{2},+\frac{1}{2}}\,.
\end{equation}
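These descendancy relations are consistent with simple weight counting: with $(h,\bar h)=(\frac{\Delta+J}{2},\frac{\Delta-J}{2})$, each $\partial_w$ raises $h$ by one and each $\partial_{\bar w}$ raises $\bar h$ by one, so starting from the top corner $(\Delta,J)=(-\frac{1}{2},+\frac{1}{2})$, i.e.~$(h,\bar h)=(0,-\frac{1}{2})$,
\begin{equation}
\partial_w:\ (h,\bar h)=(1,-\tfrac{1}{2})\leftrightarrow(\tfrac{1}{2},+\tfrac{3}{2})\,,\quad \partial_{\bar w}^2:\ (h,\bar h)=(0,\tfrac{3}{2})\leftrightarrow(\tfrac{3}{2},-\tfrac{3}{2})\,,\quad \partial_w\partial_{\bar w}^2:\ (h,\bar h)=(1,\tfrac{3}{2})\leftrightarrow(\tfrac{5}{2},-\tfrac{1}{2})\,,
\end{equation}
reproducing the right, left and bottom corners of table~\ref{table:Ggravitinos}.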
This is summarized in table~\ref{table:Ggravitinos}.
\begin{figure}[ht!]
\centering
\begin{tikzpicture}[scale=1.2]
\definecolor{red}{rgb}{.5, 0.5, .5};
\draw[thick,->] (-1+.05,1-.05)node[left]{${\widetilde \chi}^{{\rm G}}_{\frac{3}{2},-\frac{3}{2}}$} --node[below left]{} (-.05,.05) ;
\draw[thick,->] (2-.05,2-.05) node[right]{${\chi}^{{\rm G}}_{\frac{1}{2},+\frac{3}{2}}$} --node[below right]{} (1+.1414/2,1+.1414/2);
\draw[thick,->] (1-.1414/2,1-.1414/2)-- (.05,.05);
\filldraw[black] (0,0) circle (2pt) node[below]{$\chi^{gen,{\rm G}}_{\frac{5}{2},-\frac{1}{2}}~~~$};
\filldraw[white] (0,0) circle (1pt) ;
\filldraw[black] (-1,1) circle (2pt) ;
\filldraw[black] (2,2) circle (2pt) ;
\draw[thick] (1+.1414/2,1+.1414/2) arc (45:-135:.1);
\draw[thick] (0+.1414/2,2+.1414/2) arc (45:-135:.1);
\draw[thick,->] (1,3) node[above,black]{$~~~\chi^{gen,{\rm G}}_{-\frac{1}{2},+\frac{1}{2}}$} -- (2-.05,2+.05);
\draw[thick,->] (0-.1414/2,2-.1414/2)-- (-1+.05,1.05);
\draw[thick,->] (1,3) -- (0+.1414/2,2+.1414/2);
\node[fill=black,regular polygon, regular polygon sides=4,inner sep=1.6pt] at (1,3) {};
\node[fill=white,regular polygon, regular polygon sides=4,inner sep=.8pt] at (1,3) {};
\filldraw[red,thick] (1,0) circle (2pt);
\filldraw[white] (1,0) circle (1pt) node[below,black]{$~~~{\bar \chi}^{gen,{\rm G}}_{\frac{5}{2},+\frac{1}{2}}$};
\filldraw[red,thick] (-1,2) circle (2pt) node[left,black]{$-{\bar \chi}^{{\rm G}}_{\frac{1}{2},-\frac{3}{2}}$};
\draw[red, thick] (0-.1414/2,1+.1414/2) arc (135:315:.1);
\draw[red, thick] (1-.1414/2,2+.1414/2) arc (135:315:.1);
\draw[->,red,thick] (-1,2) -- (0-.1414/2,1+.1414/2);
\draw[->,red,thick] (0+.1414/2,1-.1414/2) -- (1-.05,0.05);
\filldraw[red,thick] (2,1) circle (2pt) ;
\draw[->,red,thick] (2,1) node[right,black]{${\widetilde{\bar{\chi}}}^{{\rm G}}_{\frac{3}{2},+\frac{3}{2}}$} -- (1+.05,0.05);
\draw[->,red,thick] (0,3) node[above,black]{${\bar \chi}^{gen,{\rm G}}_{-\frac{1}{2},-\frac{1}{2}}~~~$} -- (1-.1414/2,2+.1414/2);
\draw[->,red,thick] (1+.1414/2,2-.1414/2) -- (2-.05,1+.05);
\draw[->,red,thick] (0,3) -- (-1+.05,2+.05);
\node[fill=red,regular polygon, regular polygon sides=4,inner sep=1.6pt] at (0,3) {};
\node[fill=white,regular polygon, regular polygon sides=4,inner sep=.8pt] at (0,3) {};
\end{tikzpicture}
\caption{Goldstino diamond for the leading soft gravitino theorem~\cite{Pasterski:2021fjn}.
}
\label{fig:gravitinodiamond}
\end{figure}
\vspace{1em}
\begin{table}[ht!]
\renewcommand*{\arraystretch}{1.3}
\centering
\begin{tabular}{l|l|l|l|l}
Corner & $\Delta$ & $J$
& $\chi^{{\rm G}}_{\Delta,J}$ & $\Lambda_{\Delta,J}$\\
\hline
Top&$-\frac{1}{2}$&$+\frac{1}{2}$
& $\frac{1}{\sqrt{2}} ol_\mu\varphi^{-\frac{1}{2}}$&$- \frac{1}{\sqrt{2}}o\varphi^{-\frac{1}{2}}\log\varphi^{-1}$\\
Left&$~~\frac{3}{2}$&$-\frac{3}{2}$
&$\sqrt{2}\iota{\bar m}_\mu \varphi^{\frac{3}{2}}$&$\frac{1}{2!}\partial_{\bar w}^2\Lambda_{-\frac{1}{2},+\frac{1}{2}}$\\
Right&$~~\frac{1}{2}$&$+\frac{3}{2}$
&$o m_\mu \varphi^\frac{1}{2}$&$\partial_w\Lambda_{-\frac{1}{2},+\frac{1}{2}}$\\
Bottom&$~~\frac{5}{2}$&$-\frac{1}{2}$
&${2}\left[\left(\frac{X^2}{2}l_\mu+n_\mu\right) \iota +\frac{X^2}{2} o\bar{m}_\mu\right] \varphi^{\frac{5}{2}}$
&$\frac{1}{2!}\partial_w\partial_{\bar w}^2\Lambda_{-\frac{1}{2},+\frac{1}{2}}$\\
\end{tabular}
\caption{Elements of the celestial diamond corresponding to large supersymmetry~\cite{Pasterski:2020pdk,Pasterski:2021fjn}.}
\label{table:Ggravitinos}
\end{table}
\subsection{Soft Charge for Large Supersymmetry}\label{sec:gravitinocharge}
In the language of the covariant phase space formalism, computing the soft part of the large supersymmetry charge amounts to computing the symplectic structure for a generic gravitino perturbation with radiative fall-offs at null infinity and the Goldstino associated to large supersymmetry. This corresponds to the 2D operator~\eqref{Ffermionic} for~\eqref{Fgravitinos} with the conformally soft gravitinos~\eqref{chisoftscri} or~\eqref{tchisoftscri}.\footnote{See appendix~\ref{app:Omega} for details. Comparing to the results of appendix~\ref{app:IP}, we see that \begin{equation}\mathcal{O}^{s,\pm}_{\Delta,J}(w,{\bar w})\equiv i(\hat{\Psi}^s(X^\mu),\Psi^s_{\Delta^*,-J}(X_\mp^\mu;w,{\bar w}))=\Omega(\hat{\Psi}^s(X^\mu),\Psi^s_{\Delta,J}(X_\pm^\mu;w,{\bar w}))\,,\end{equation}
after an appropriate complexification of the symplectic product to allow pairings between the Majorana field operators and the primaries~\eqref{chiralspinor}.}
The final result for the soft charge of spontaneously broken large supersymmetry is given by expression~\eqref{Omegaleadingchi} evaluated at null infinity.
For the left-handed Goldstino diamond, only the first term in~\eqref{Fgravitinos} contributes and we have (omitting the $s$ label)
\begin{equation}\label{softgravitinocharge}
\mathcal{O}_{\Delta,J}=i\int du d^2z \,\hat{\chi}^{\dagger(0)}_z \bar{\sigma}_u \chi^{\rm G}_{\Delta,J;{\bar z}}\,,
\end{equation}
generating the shift~\eqref{shift2}. This yields $\mathcal{O}_{\frac{1}{2},+\frac{3}{2}}$ for the Goldstino
\begin{equation}\label{chicontact}
\chi^{\rm G}_{\frac{1}{2},+\frac{3}{2},{\bar z}}
=2\pi i \delta^{(2)}(z-w)\left(
\begin{array}{c}
{\bar w} \\ -1
\end{array}\right)\,,
\end{equation}
which establishes the equivalence between the large supersymmetry Ward identity and the (conformally) soft gravitino theorem~\cite{Fotopoulos:2020bqj}. We see this as follows. After using the inverse Mellin transform to express the creation and annihilation operators in~\eqref{chiScri} in terms of $a_{\Delta,+\frac{3}{2}}(z,{\bar z})$ and $a^\dagger_{\Delta,+\frac{3}{2}}(z,{\bar z})$, the $\omega$-integral takes the form
\begin{equation}
\int_0^\infty d\omega \omega^{\frac{1}{2}-\Delta} e^{\pm i \omega(1+z{\bar z})u_\pm}=(\mp i)^{\Delta-\frac{3}{2}}{\textstyle \Gamma(\frac{3}{2}-\Delta)} u_\pm^{\Delta-\frac{3}{2}}(1+z{\bar z})^{\Delta-\frac{3}{2}}\,,
\end{equation}
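The $\omega$-integral above is the Euler integral $\int_0^\infty d\omega\,\omega^{s-1}e^{-b\omega}=\Gamma(s)\,b^{-s}$ with $s=\frac{3}{2}-\Delta$ and $b=\mp i(1+z{\bar z})u_\pm$. The following numerical sanity check is our own; the sample values $\Delta=1$, $a=2$, $u_+=1+0.3i$ are assumptions made only for the test:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Verify  int_0^inf dw w^{1/2 - Delta} e^{+i w a u_+}
#       = (-i)^(Delta - 3/2) Gamma(3/2 - Delta) (a u_+)^(Delta - 3/2)
# for sample values (assumed for this test only).
Delta = 1.0
a = 2.0                 # stands in for (1 + z*zbar)
u_plus = 1.0 + 0.3j     # u + i*eps: the analytic continuation ensuring convergence

f = lambda w: w**(0.5 - Delta) * np.exp(1j*w*a*u_plus)

def cquad(func, lo, hi):
    """Complex-valued quadrature via separate real and imaginary parts."""
    re, _ = quad(lambda w: func(w).real, lo, hi, limit=300)
    im, _ = quad(lambda w: func(w).imag, lo, hi, limit=300)
    return re + 1j*im

# Split off the integrable w^{-1/2} endpoint singularity at w = 0
lhs = cquad(f, 0.0, 1.0) + cquad(f, 1.0, np.inf)
rhs = (-1j)**(Delta - 1.5) * gamma(1.5 - Delta) * (a*u_plus)**(Delta - 1.5)
assert abs(lhs - rhs) < 1e-5
```

The $i\varepsilon$ in $u_+$ supplies the exponential damping $e^{-\varepsilon a\omega}$ that makes both the analytic formula and the numerical quadrature converge.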
where we analytically continue $u\mapsto u_\pm =u\pm i\varepsilon$ to guarantee convergence. Then using the generalized distribution~\cite{Donnay:2020guq}
\begin{equation}
\int_0^\infty du u^{\Delta-\frac{3}{2}}=2\pi {\textstyle \boldsymbol{\delta}(i(\Delta-\frac{1}{2}))}\,,
\end{equation} we find that the $u$-integrals in the soft charge ${\cal O}_{\Delta,J}$ for $\Delta=\frac{1}{2}$, $J=+\frac{3}{2}$ give\footnote{Setting $\Delta=\frac{1}{2}$ before evaluating the integral in~\eqref{lightray} gives the same factor of $\frac{1}{2}$ that was encountered in the Ward identity papers~\cite{He:2014laa}. This is effectively taking the average of the $u_+$ and $u_-$ integrals, giving us both positive and negative frequency contributions for the gravitino zero modes.}
\begin{equation}
\int_0^\infty d\omega \omega^{\frac{1}{2}} a_+(\omega)\int_{-\infty}^{+\infty} du e^{-i \omega(1+z{\bar z})u_-}=\pi \lim_{\Delta \to \frac{1}{2}} {\textstyle (\Delta -\frac{1}{2})} a_{\Delta,+\frac{3}{2}} (1+z{\bar z})^{-1}\,,
\end{equation}
and
\begin{equation}
\int_0^\infty d\omega \omega^{\frac{1}{2}} a^\dagger_-(\omega)\int_{-\infty}^{+\infty} du e^{+i \omega(1+z{\bar z})u_+}=\pi \lim_{\Delta \to \frac{1}{2}} {\textstyle (\Delta -\frac{1}{2})} a^\dagger_{\Delta,+\frac{3}{2}} (1+z{\bar z})^{-1}\,.
\end{equation}
Plugging this into~\eqref{softgravitinocharge} and integrating over $(z,{\bar z})$ yields the soft charge
\begin{equation}\label{contact}
\mathcal{O}_{\frac{1}{2},+\frac{3}{2}}(w,{\bar w})={\textstyle\frac{i\kappa}{2}}\lim_{\Delta \to \frac{1}{2}} {\textstyle \left(\Delta-\frac{1}{2}\right)}( a_{\Delta,+\frac{3}{2}}-a^\dagger_{\Delta,+\frac{3}{2}})\,.
\end{equation}
Meanwhile, the 2D operator $\widetilde{\mathcal{O}}_{\frac{3}{2},-\frac{3}{2}}(w,{\bar w})$ obtained from~\eqref{softgravitinocharge} for the $\Delta=\frac{3}{2}$ shadow Goldstino
\begin{equation}\label{tchinoncontact}
\widetilde{\chi}^{\rm G}_{\frac{3}{2},-\frac{3}{2};{\bar z}}
=\frac{-i}{({\bar z}-{\bar w})^3}
\left(
\begin{array}{c}
{\bar z} \\ -1
\end{array}\right)\,,
\end{equation}
gives rise to the $(h,{\bar h})=(0,\frac{3}{2})$ supercurrent.
This is similar to the situation in gravity where\footnote{The Bondi news is $N_{AB}=\lim\limits_{r\to \infty}\frac{1}{r}\partial_u h_{AB}$.} the $\Delta=0$ Goldstone mode $N^{\rm G}_{0,+2;{\bar z}\bz}=-2\pi\delta^{(2)}(z-w)$ generating Diff($S^2$) symmetry yields equivalence with the subleading (conformally) soft graviton theorem while the $\Delta=2$ shadow Goldstone mode $\widetilde N^{\rm G}_{2,-2;{\bar z}\bz}=-\frac{1}{({\bar z}-{\bar w})^4}$ gives rise to the $(h,{\bar h})=(0,2)$ stress tensor~\cite{Donnay:2020guq}. The soft charges generated by these superrotation and Diff($S^2$) modes were shown to be related by a 2D shadow transform in celestial CFT. Here, similarly, the soft charges for the Goldstinos~\eqref{chicontact} and~\eqref{tchinoncontact} are related by the shadow transform~\eqref{2dShadowTransform} as
\begin{equation}
\widetilde{\mathcal{O}}_{\frac{3}{2},-\frac{3}{2}}(w,{\bar w})={\frac{1}{2\pi}}\int d^2w' \frac{1}{({\bar w}-{\bar w}')^3} \mathcal{O}_{\frac{1}{2},+\frac{3}{2}}(w',{\bar w}')\,,
\end{equation}
and the inverse
\begin{equation}
\mathcal{O}_{\frac{1}{2},+\frac{3}{2}}(w,{\bar w})={-}\frac{1}{\pi}\int d^2w' \frac{({\bar w}-{\bar w}')}{(w-w')^2} \widetilde{\mathcal{O}}_{\frac{3}{2},-\frac{3}{2}}(w',{\bar w}')\,.
\end{equation}
\subsection{Soft Operator in the Gravitino Diamond}\label{sec:softOpgravitino}
Starting from~\eqref{contact}, we see, much as in the superrotation/Diff$(S^2)$ case, that one of the corners of the diamond is isomorphic to the soft theorem. We can use descendancy relations to take us to the bottom corner of the diamond. Letting
\begin{equation}\label{osoft}
[\mathcal{O}_{soft}|_{{\bar z}}^{a}= -\frac{\pi}{\sqrt{2}} D_{{\bar z}}^2 \int du \hat{\chi}^\dagger_{z\dot{c}} \bar{\sigma}^{\dot{c} a}_u\,,
\end{equation}
we construct the object
\begin{equation}\label{feta}
\mathcal{Q}(\eta)=\int d^2z \,[\mathcal{O}_{soft}|\eta]\,,
\end{equation}
where we suppress the tensor index contractions between the soft operator~\eqref{osoft} and the spinor $\eta^{{\bar z}}_{a}$ since $\eta^{z}_{a}=0$. In terms of the creation and annihilation operators,
\begin{equation}\label{Osoft}
\begin{alignedat}{3}
[\mathcal{O}_{soft}|&={\textstyle \frac{i \kappa}{8}}\lim_{\Delta \to \frac{1}{2}} {\textstyle \left(\Delta-\frac{1}{2}\right)}D_{\bar z}^2\left[(1+z{\bar z})^{-1}( a_{\Delta,+\frac{3}{2}}-a^\dagger_{\Delta,+\frac{3}{2}})\langle x|\bar{\sigma}_u\right]\\
&={\textstyle \frac{i \kappa}{8}}\lim_{\Delta \to \frac{1}{2}} {\textstyle \left(\Delta-\frac{1}{2}\right)}(1+z{\bar z})^{-1}\partial_{\bar z}^2\left[( a_{\Delta,+\frac{3}{2}}-a^\dagger_{\Delta,+\frac{3}{2}})\right]\langle x|\bar{\sigma}_u\,.
\end{alignedat}
\end{equation}
We see that the operator $[\mathcal{O}_{soft}|x]$ is a level-2 primary descendant operator with $\Delta=\frac{5}{2}$ and $J=-\frac{1}{2}$, and the smeared operator~\eqref{feta} gives the fermionic analogue of the charge operators of~\cite{Banerjee:2018fgd,Banerjee:2019aoy,Banerjee:2019tam}. Much like the spin-1 and spin-2 cases studied in~\cite{Pasterski:2021dqe}, the spacetime descendants on the round sphere map to flattened celestial sphere descendants of the Mellin transformed modes.
Moreover, we see from~\eqref{Osoft} that
\begin{equation}
{\cal Q}(\frac{{\bar w}-{\bar z}}{w-z}|x])=
\pi{\cal O}_{\frac{1}{2},+\frac{3}{2}},~~~
{\cal Q}(\frac{1}{{\bar w}-{\bar z}}|x])=2\pi\widetilde{\cal O}_{\frac{3}{2},-\frac{3}{2}}\,,
\end{equation}
while
\begin{equation}
{\cal Q}(\delta^{(2)}(z-w)|x])=\partial_w{{\widetilde{\cal O}}}_{\frac{3}{2},-\frac{3}{2}}=\frac{1}{2!}\partial_{\bar w}^2{\cal O}_{\frac{1}{2},+\frac{3}{2}}.
\end{equation}
This form of the charge, where $\eta=\varepsilon(z,{\bar z})|x]$, is consistent with that of~\cite{Avery:2015gxa,Lysov:2015jrs}. We are able to use $\eta$ with such a simple form because there are various kernels of the integrated charge~\eqref{feta} coming from the kernels of the descendancy relations of the celestial diamond, as well as that of the spinor product.\footnote{Note that the gauge potential of the conformally soft gravitino~\eqref{chisoft} at null infinity can indeed be written as
\begin{equation}\label{eta}
\Lambda_{\frac{1}{2},+\frac{3}{2}}|_{\mathcal{I}^+}\simeq D_A \eta^A_+\,,\quad
\eta^z_+=0\,, \quad \eta^{\bar z}_+=i\frac{{\bar z}-{\bar w}}{z-w}\frac{1+z{\bar z}}{1+w{\bar w}}\left(\begin{array}{c} {\bar w}\\-1\end{array}\right)\,,
\end{equation}
where the equivalence is up to the kernel of the sphere derivatives giving $\chi_{A}$. The angular components of the Goldstino can then be expressed as
\begin{equation}
\chi^G_{\frac{1}{2},+\frac{3}{2};z}=D_z D_{\bar z} \eta^{\bar z}_+\,, \quad \chi^G_{\frac{1}{2},+\frac{3}{2};{\bar z}}=D_{\bar z}^2 \eta^{\bar z}_+\,.
\end{equation}
The soft charge~\eqref{softgravitinocharge} takes the form
\begin{equation}
\mathcal{O}_{\frac{1}{2},+\frac{3}{2}}={i}\int du d^2z \, D_{\bar z}^2\,\hat{{\chi}}^{\dagger(0)}_{z}\, \bar \sigma_u \eta^{\bar z}_+ \,.
\end{equation}}
The soft operator lies at the bottom of the soft gravitino memory diamond, conveniently captured by the following diagram.
\begin{equation}
\raisebox{-2cm}{
\begin{tikzpicture}[scale=0.6]
\definecolor{red}{rgb}{.5, 0.5, .5};
\draw[thick,->] (-1+.05,1-.05)node[left]{$\widetilde \O_{\frac{3}{2},-\frac{3}{2}}=\frac{1}{2!}\partial_{\bar w}^2 \mathcal{O}_{-\frac{1}{2},+\frac{1}{2}}$} --node[below left]{} (-.05,.05) ;
\draw[thick,->] (2-.05,2-.05) node[right]{$\mathcal{O}_{\frac{1}{2},+\frac{3}{2}}=\partial_w \mathcal{O}_{-\frac{1}{2},+\frac{1}{2}}$} --node[below right]{} (1+.1414/2,1+.1414/2);
\draw[thick,->] (1-.1414/2,1-.1414/2)-- (.05,.05);
\filldraw[black] (0,0) circle (2pt) node[below]{$ [\mathcal{O}_{soft}|x]_{\frac{5}{2},-\frac{1}{2}}~~~$};
\filldraw[white] (0,0) circle (1pt) ;
\filldraw[black] (-1,1) circle (2pt) ;
\filldraw[black] (2,2) circle (2pt) ;
\draw[thick] (1+.1414/2,1+.1414/2) arc (45:-135:.1);
\draw[thick] (0+.1414/2,2+.1414/2) arc (45:-135:.1);
\draw[thick,->] (1,3) node[above,black]{$~~~\mathcal{O}_{-\frac{1}{2},+\frac{1}{2}}$} -- (2-.05,2+.05);
\draw[thick,->] (0-.1414/2,2-.1414/2)-- (-1+.05,1.05);
\draw[thick,->] (1,3) -- (0+.1414/2,2+.1414/2);
\node[fill=black,regular polygon, regular polygon sides=4,inner sep=1.6pt] at (1,3) {};
\node[fill=white,regular polygon, regular polygon sides=4,inner sep=.8pt] at (1,3) {};
\end{tikzpicture}
}
\end{equation}
As shown in the diagram, from the definitions of the large supersymmetry current and its shadow we can formally write $\widetilde \O_{\frac{3}{2},-\frac{3}{2}}$ and $\mathcal{O}_{\frac{1}{2},+\frac{3}{2}}$ as descendants of $\mathcal{O}_{-\frac{1}{2},+\frac{1}{2}}$, a generalized primary operator with conformal dimension $\Delta=-\frac{1}{2}$ and spin $J=+\frac{1}{2}$ that lies at the top of the gravitino memory diamond. This operator is formally defined via
\begin{equation}
\mathcal{O}_{-\frac{1}{2},+\frac{1}{2}}=i(\hat{\Psi}^s,\Psi^{s=\frac{3}{2}}_{\frac{1}{2},-\frac{1}{2}})\,,~~~~~~\Psi^{s=\frac{3}{2}}_{\frac{1}{2},-\frac{1}{2}}=\left(\begin{array}{c}
0 \\
\frac{1}{\sqrt{2}}\bar{o}l_\mu\varphi^{-\frac{1}{2}}
\end{array}\right).
\end{equation}
This operator is, strictly speaking, not part of the spectrum, but it plays an important role in the theory, e.g. it can be used to define the other primary operators and (primary) descendants in the theory -- see~\cite{Pasterski:2021fjn,Pasterski:2021dqe} for a related discussion.
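As a quick weight-bookkeeping check of the diagram, $\mathcal{O}_{-\frac{1}{2},+\frac{1}{2}}$ carries $(h,{\bar h})=(0,-\frac{1}{2})$, and since $\partial_w$ and $\partial_{\bar w}$ raise $h$ and ${\bar h}$ by one unit respectively,
\begin{equation}
\partial_w:\,(0,-{\textstyle\frac{1}{2}})\mapsto(1,-{\textstyle\frac{1}{2}})\;\Leftrightarrow\;(\Delta,J)=({\textstyle\frac{1}{2}},+{\textstyle\frac{3}{2}})\,,\qquad
{\textstyle\frac{1}{2!}}\partial_{\bar w}^2:\,(0,-{\textstyle\frac{1}{2}})\mapsto(0,{\textstyle\frac{3}{2}})\;\Leftrightarrow\;(\Delta,J)=({\textstyle\frac{3}{2}},-{\textstyle\frac{3}{2}})\,,
\end{equation}
matching the corners occupied by $\mathcal{O}_{\frac{1}{2},+\frac{3}{2}}$ and $\widetilde{\mathcal{O}}_{\frac{3}{2},-\frac{3}{2}}$.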
We close this section by noting that the spin~1, $\frac{3}{2}$ and~2 symmetry currents
\begin{equation}\label{symcurr}
\widetilde{\cal O}_{1,-1},\widetilde{\cal O}_{\frac{3}{2},-\frac{3}{2}},\widetilde{\cal O}_{2,-2}\,,
\end{equation}
corresponding to large U(1), large SUSY and superrotation symmetry, respectively, all have the same symmetry parameter $\propto \frac{1}{{\bar z}-{\bar w}}$ for their appropriately normalized charges (called $\varepsilon$ and $Y^z$ for spin~1 and~2 in~\cite{Pasterski:2021dqe}, and $\eta$ for spin~$\frac{3}{2}$ above), since their first $\partial_w$ descendants are the respective soft charges. We will investigate the relationship between the descendancy relations in celestial diamonds and the spin-shifting relations of SUSY in section~\ref{celestialpyramids}.
\section{Soft Charges without Goldstinos}\label{sec:subgoldstinos}
In this section we examine the degenerate celestial diamonds corresponding to the soft photino\footnote{Generalizing to the gluino case is straightforward for the soft charges constructed herein. Since they are linear in the perturbation, this amounts to adding a Lie algebra index.} and subleading soft gravitino.
\subsection{Conformally Soft Photino}\label{sec:photino}
The conformally soft spin $s=\frac{1}{2}$ Weyl spinor $\psi_{\frac{1}{2},+\frac{1}{2}}$ and its shadow $\widetilde \psi_{\frac{3}{2},-\frac{1}{2}}$ correspond to the zero-area celestial diamonds shown in figure~\ref{fig:photinodiamond}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=.9]
\filldraw[black] (1,1) circle (2pt) ;
\filldraw[black] (0,2) circle (2pt) ;
\filldraw[black] (0,1) circle (2pt) ;
\filldraw[black] (1,2) circle (2pt) ;
\draw[thick,->] (1,2)-- (0+.05,1+.05);
\draw[thick,->] (0,2)-- (1-.05,1+.05);
\filldraw[black] (0,2) circle (2pt) ;
\node at (-1,1) {$\frac{3}{2}$};
\node at (-1,2) {$\frac{1}{2}$};
\node at (-0.6,3) {$J$};
\node at (-1,2.87) {$\tiny{\Delta}$};
\draw[thick] (-1+.01,3+.2) -- (-1+.28,3-.3);
\node at (0,3) {$-\frac{1}{2}~$};
\node at (1,3) {$\frac{1}{2}$};
\end{tikzpicture}
\caption{Photino `diamonds'.}
\label{fig:photinodiamond}
\end{figure}
Using appendix~\ref{app:Omega} we can write a 2D operator analogous to the soft charge~\eqref{softgravitinocharge}. Namely, evaluating expression~\eqref{Omegapsi} at null infinity yields
\begin{equation}\label{SoftChargePhotino}
\mathcal{O}_{\frac{1}{2},+\frac{1}{2}}=i\int_{\mathcal{I}^+} du d^2z \sqrt{\gamma}
\left[\hat{\psi}^{\dagger(1)} \bar{\sigma}_u \psi_{\frac{1}{2},+\frac{1}{2}}^{(1)}\right]\,,
\end{equation}
where the superscript on the photinos indicates the coefficient of the $r^{-1}$ term in their large-$r$ expansions. The form of~\eqref{SoftChargePhotino} is consistent with the soft charge in~\cite{Dumitrescu:2015fej}.
At coincident points on the celestial sphere the contribution from the conformally soft photino is
\begin{equation}\label{confsoftpsiScricontact}
\psi_{\frac{1}{2},+\frac{1}{2}}^{(1)}\Big|_{z=w}=i\pi\lim_{\Delta\to\frac{1}{2}}\frac{u^{\frac{1}{2}-\Delta}}{\Delta-\frac{1}{2}}(1+z{\bar z})\delta^{(2)}(z-w)|x]\,,
\end{equation}
while at non-coincident points we have
\begin{equation}\label{confsoftpsiScrinoncontact}
\psi_{\frac{1}{2},+\frac{1}{2}}^{(1)}\Big|_{z\neq w}=\frac{i}{2}\frac{1+z\bar{z}}{(z-w)(\bar{z}-{\bar w})}|q]\,.
\end{equation}
Analogous expressions exist for the opposite helicity photino yielding $\mathcal{O}_{\frac{1}{2},- \frac{1}{2}}$. Furthermore, as indicated in figure~\ref{fig:photinodiamond}, the $\Delta=\frac{1}{2}$ conformally soft photino descends to the $\Delta=\frac{3}{2}$ shadow photino as
\begin{equation}
\partial_{\bar w} \psi_{\frac{1}{2},+\frac{1}{2}}=-\widetilde \psi_{\frac{3}{2},-\frac{1}{2}}\,,
\quad \partial_w \bar{\psi}_{\frac{1}{2},-\frac{1}{2}}=-\widetilde{\bar{\psi}}_{\frac{3}{2},+\frac{1}{2}}\,,
\end{equation}
for which we can write the 2D operators $\widetilde \O_{\frac{3}{2},\pm \frac{1}{2}}$.
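This descendancy is consistent with the weights: $\psi_{\frac{1}{2},+\frac{1}{2}}$ carries $(h,{\bar h})=(\frac{1}{2},0)$, so a single $\partial_{\bar w}$ yields
\begin{equation}
(h,{\bar h})=({\textstyle\frac{1}{2}},1)\;\Leftrightarrow\;(\Delta,J)=({\textstyle\frac{3}{2}},-{\textstyle\frac{1}{2}})\,,
\end{equation}
which are precisely the weights of the shadow photino $\widetilde\psi_{\frac{3}{2},-\frac{1}{2}}$.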
Note that the isomorphism with the conformally soft photino theorem~\cite{Fotopoulos:2020bqj} follows from the contact term~\eqref{confsoftpsiScricontact}. Indeed, if we renormalize the operator by $(\Delta-\frac{1}{2})$, the soft charge becomes\footnote{Without this renormalization, the limit $\Delta \to \frac{1}{2}$ furthermore yields a contribution to~\eqref{confsoftpsiScricontact} involving a logarithm in~$u$ which will be discussed elsewhere.}
\begin{equation}
\mathcal{O}^{ren}_{\frac{1}{2},+\frac{1}{2}}(w,{\bar w})
=- 2 \pi(1+w{\bar w})^{-1} \int du
\hat{\psi}^{\dagger(1)}\bar{\sigma}_u |q]
={\frac{ie}{2} }\lim_{\Delta \to \frac{1}{2}} (\Delta-\frac{1}{2}) (a_{\Delta,+\frac{1}{2}}-a^\dagger_{\Delta,+\frac{1}{2}})\,.
\end{equation}
\subsection{Subleading Conformally Soft Gravitino}\label{sec:subgravitino}
The conformally soft spin $s=\frac{3}{2}$ Weyl spinor $\chi_{-\frac{1}{2},+\frac{3}{2};\mu}$ and its shadow $\widetilde \chi_{\frac{5}{2},-\frac{3}{2};\mu}$ correspond to the zero-area celestial diamonds shown in figure~\ref{fig:subleadinggravitinodiamond}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=.9]
\definecolor{red}{rgb}{.5, 0.5, .5};
\draw[thick,red,->] (-1+.05,1-.05)node[left]{
} --node[below left]{
} (-.05,.05) ;
\draw[thick,red,->] (2-.05,2-.05)node[above]{
} --node[below right]{
} (1+.1414/2,1+.1414/2);
\draw[thick,red,->] (1-.1414/2,1-.1414/2)-- (.05,.05);
\filldraw[red] (0,0) circle (2pt);
\filldraw[red] (-1,1) circle (2pt) ;
\filldraw[red] (2,2) circle (2pt) ;
\draw[red,thick] (1+.1414/2,1+.1414/2) arc (45:-135:.1);
\draw[red,thick] (0+.1414/2,2+.1414/2) arc (45:-135:.1);
\node at (-2,0) {$\frac{5}{2}$};
\node at (-2,1) {$\frac{3}{2}$};
\node at (-2,2) {$\frac{1}{2}$};
\node at (-2,3) {$-\frac{1}{2}~~$};
\node at (-1.6,4) {$J$};
\node at (-2,3.87) {$\tiny{\Delta}$};
\draw[red,thick] (-2+.01,4+.2) -- (-2+.28,4-.3);
\node at (-1,4) {$-\frac{3}{2}$};
\node at (0,4) {$-\frac{1}{2}$};
\node at (1,4) {$\frac{1}{2}$};
\node at (2,4) {$\frac{3}{2}$};
\filldraw[black] (1,0) circle (2pt) ;
\filldraw[red,thick] (1,0) circle (2pt) ;
\filldraw[red,thick] (-1,2) circle (2pt) ;
\draw[red, thick] (0-.1414,1+.1414) arc (135:315:.2);
\draw[red, thick] (1-.1414,2+.1414) arc (135:315:.2);
\draw[black, thick] (0-.1414,2+.1414) arc (135:315:.2);
\draw[black, thick] (1-.1414,1+.1414) arc (135:315:.2);
\draw[->,red,thick] (-1,2) -- (0-.1414,1+.1414);
\draw[->,red,thick] (0+.1414,1-.1414) -- (1-.05,0.05);
\filldraw[red,thick] (2,1) circle (2pt) ;
\filldraw[black,thick] (-1,0) circle (2pt) ;
\filldraw[black,thick] (-1,3) circle (2pt) ;
\filldraw[black,thick] (2,3) circle (2pt) ;
\filldraw[black,thick] (2,0) circle (2pt) ;
\draw[->,thick] (-1,3) -- (0-.1414,2+.1414);
\draw[->,thick] (1+.1414,1-.1414) -- (2-.05,0+.05);
\draw[->,thick] (0+.1414,2-.1414) -- (1-.1414,1+.1414);
\draw[black,thick] (0+.1414/2,1+.1414/2) arc (45:-135:.1);
\draw[black,thick] (1+.1414/2,2+.1414/2) arc (45:-135:.1);
\draw[->,thick] (0-.1414/2,1-.1414/2) -- (-1+.05,0+.05);
\draw[->,thick] (1-.06,2-.06) -- (0+.06,1+.06);
\draw[->,thick] (2,3) -- (1+.06,2+.06);
\draw[->,red,thick] (2,1) -- (1+.05,0.05);
\draw[->,red,thick] (0,3) -- (1-.1414,2+.1414);
\draw[->,red,thick] (1+.1414,2-.1414) -- (2-.05,1+.05);
\draw[->,red,thick] (0,3) -- (-1+.05,2+.05);
\draw[red,thick,->] (1,3)-- (2-.05,2+.05);
\filldraw[black] (1,3) circle (2pt) ;
\draw[red,thick,->] (0-.1414/2,2-.1414/2)-- (-1+.05,1.05);
\draw[red,thick,->] (1,3) -- (0+.1414/2,2+.1414/2);
\filldraw[red,thick] (0,3) circle (2pt) ;
\filldraw[white] (1,0) circle (1pt) ;
\filldraw[white] (0,0) circle (1pt) ;
\node[fill=red,regular polygon, regular polygon sides=4,inner sep=1.6pt] at (0,3) {};
\node[fill=white,regular polygon, regular polygon sides=4,inner sep=.8pt] at (0,3) {};
\node[fill=red,regular polygon, regular polygon sides=4,inner sep=1.6pt] at (1,3) {};
\node[fill=white,regular polygon, regular polygon sides=4,inner sep=.8pt] at (1,3) {};
\end{tikzpicture}
\caption{Subleading gravitino `diamonds' (black) on top of leading gravitino diamonds (grey).}
\label{fig:subleadinggravitinodiamond}
\end{figure}
Using again appendix~\ref{app:Omega}, we can write the 2D~operator corresponding to a soft charge for the subleading conformally soft gravitino, namely expression~\eqref{Omegasubleadingchi} evaluated at null infinity. For the gravitino at $\Delta=-\frac{1}{2}$ this yields
\begin{equation}
\begin{aligned}
\mathcal{O}_{-\frac{1}{2},+\frac{3}{2}}=i\int_{\mathcal I^+} du d^2z &\Big\{\sqrt{\gamma}\left[\delta\chi^{\dagger\, (2)}_r\left(\frac{1}{2}\bar{\sigma}_r-\bar{\sigma}_u\right)\chi^{(0)}_{-\frac{1}{2},+\frac{3}{2};\,u}+\delta\chi^{\dagger\, (1)}_u\left(\frac{1}{2}\bar{\sigma}_r-\bar{\sigma}_u\right)\chi^{(1)}_{-\frac{1}{2},+\frac{3}{2};\,r}\right]\\
&\qquad -\delta\chi^{\dagger\, (1)}_{\bar z} \left(\frac{1}{2}\bar{\sigma}_r-\bar{\sigma}_u\right)\chi^{(-1)}_{-\frac{1}{2},+\frac{3}{2};\,z}+\delta\chi_z^{\dagger\, (0)}\bar{\sigma}_u\chi^{(0)}_{-\frac{1}{2},+\frac{3}{2};\,{\bar z}} \Big\}\,.
\end{aligned}
\end{equation}
Curiously, while the presence of a soft theorem at this dimension has been noticed in the amplitudes literature~\cite{Liu:2014vva,Fotopoulos:2020bqj}, on the asymptotic symmetry side so far only the large supersymmetry charge for the leading soft gravitino has been computed~\cite{Fuentealba:2021xhn,Avery:2015iix,Lysov:2015jrs}.
We are happy to contribute a spacetime interpretation of the subleading (conformally) soft gravitino theorem and its corresponding charge.
At coincident points on the celestial sphere the subleading conformally soft gravitino contributes
\begin{equation}
\chi_{-\frac{1}{2},+\frac{3}{2};{\bar z}}^{(0)}\Big|_{z=w}=\sqrt{2}\pi i\lim_{\Delta\to-\frac{1}{2}}\frac{u^{\frac{1}{2}-\Delta}}{\Delta+\frac{1}{2}}(1+z{\bar z})\delta^{(2)}(z-w)|q]\,,
\end{equation}
while all other components are subleading in the large~$r$ expansion. Again the superscript $(n)$ indicates the coefficient of the $r^{-n}$ term. At non-coincident points the conformally soft gravitino has non-radiative fall-offs
\begin{equation}
\chi_{-\frac{1}{2},+\frac{3}{2};u}^{(0)}\Big|_{z\neq w}=-i\frac{1+{\bar w} z}{\sqrt{2}(z-w)}|q]\,,\quad
\chi_{-\frac{1}{2},+\frac{3}{2};r}^{(1)}\Big|_{z\neq w}=iu\frac{1+{\bar w} z}{\sqrt{2}(z-w)}|q]\,,
\end{equation}
and
\begin{equation} \chi_{-\frac{1}{2},+\frac{3}{2};z}^{(-1)}\Big|_{z\neq w}=-\sqrt{2}i\frac{{\bar z}-{\bar w}}{(z-w)(1+z \bar{z})}|q]\,,
\end{equation}
in addition to
\begin{equation}\scalemath{0.95}{
\chi_{-\frac{1}{2},+\frac{3}{2};z}^{(0)}\Big|_{z\neq w}=iu\frac{(1+{\bar w} z)(1+{\bar z} w)}{(z-w)^2(1+z\bar{z})}|q]
\,,\quad \chi_{-\frac{1}{2},+\frac{3}{2};{\bar z}}^{(0)}\Big|_{z\neq w}=iu\frac{(1+{\bar w} z)^2}{\sqrt{2}(z-w)({\bar z}-{\bar w})(1+z\bar{z})}|q]\,}.
\end{equation}
Analogous expressions exist for the opposite helicity gravitino yielding $\mathcal{O}_{-\frac{1}{2},- \frac{3}{2}}$. Furthermore, as indicated in figure~\ref{fig:subleadinggravitinodiamond}, the $\Delta=-\frac{1}{2}$ conformally soft gravitinos descend to the $\Delta=\frac{5}{2}$ shadow gravitinos as
\begin{equation}
\frac{1}{3!}\partial_{\bar w}^3 \chi_{-\frac{1}{2},+\frac{3}{2}}=-\widetilde \chi_{\frac{5}{2},-\frac{3}{2}}\,,\quad \frac{1}{3!}\partial_w^3 \bar \chi_{-\frac{1}{2},-\frac{3}{2}}=-\widetilde{\bar{\chi}}_{\frac{5}{2},+\frac{3}{2}}\,,
\end{equation}
for which we can write the 2D operators $\widetilde \O_{\frac{5}{2},\pm \frac{3}{2}}$.
As in the case of the photino, the isomorphism with the subleading conformally soft gravitino theorem~\cite{Fotopoulos:2020bqj} follows from the contact term. We can renormalize the charge operator by $(\Delta+\frac{1}{2})$ such that the limit $\Delta \to -\frac{1}{2}$ is finite\footnote{Again, without renormalizing, the limit $\Delta \to -\frac{1}{2}$ yields a logarithm in~$u$.}
\begin{equation}
\mathcal{O}^{ren}_{-\frac{1}{2},+\frac{3}{2}}(w,{\bar w})=-\sqrt{2}\pi(1+w\bar{w})\int du u
\hat\chi_w^{\dagger\, (0)}\bar{\sigma}_u|q]={\frac{\kappa}{2}} \lim_{\Delta \to -\frac{1}{2}}{(\Delta+\frac{1}{2})} (a_{\Delta,+\frac{3}{2}}+a^\dagger_{\Delta,+\frac{3}{2}})\,.
\end{equation}
\section{Celestial Pyramids}\label{celestialpyramids}
In this paper we have completed the soft charge analysis for each of the fermionic celestial diamonds. This gives us a nice opportunity to merge the investigations of spin-shifting relations in~\cite{Pasterski:2020pdk} and SL($2,\mathbb{C}$) descendants in~\cite{Pasterski:2021dqe}. Namely, when one adds supersymmetry to the mix, the celestial diamonds stack into a celestial pyramid.
The starting point is the observation~\cite{Fotopoulos:2020bqj} that the SUSY charges map conformally soft theorems to one another. For radiative primaries (i.e. $s=|J|$) they found the following sequences of soft limits for the gauge multiplet
\begin{equation}
{\cal O}_{0,1}
\xrightarrow[]{\overline{Q}} {\cal O}_{\frac{1}{2},\frac{1}{2}}\xrightarrow[] { Q}{
\cal O}_{1,1} \,,
\end{equation}
and for the gravity multiplet
\begin{equation}
{\cal O}_{-1,2}\xrightarrow[]{\overline{Q}}{\cal O}_{-\frac{1}{2},\frac{3}{2}}\xrightarrow[]{{Q}} {\cal O}_{0,2}\xrightarrow[]{\overline{Q}}{\cal O}_{\frac{1}{2},\frac{3}{2}}\xrightarrow[]{{Q}} {\cal O}_{1,2}\,.
\end{equation}
These are summarized in a more geometric manner in figure~\ref{fig:shiftingdiamonds}, where we see that they are glued together by additional sequences starting from the fermionic primaries (which will require $\mathcal{N}>1$ SUSY). In the Mellin basis, the $\mathcal{N}=1$ supercharges have the following representation
\begin{equation}\label{supercharges}
Q=\partial_\theta |q\rangle e^{\partial_\Delta/2},~~~\overline{Q}=\theta| q] e^{\partial_\Delta/2}.
\end{equation}
\begin{figure}[tb!]
\centering
\begin{tikzpicture}
\node (A) at (4,2) {$~~~~~~\mathcal{O}_{-1,2}$};
\node (B) at (3,1) {$~~~~~~\mathcal{O}_{-\frac{1}{2},\frac{3}{2}}$};
\node (C) at (2,0) {$~~~~\mathcal{O}_{0,1}$};
\node (D) at (1, -1) {$~~~~\mathcal{O}_{\frac{1}{2},\frac{1}{2}}$};
\node (E) at (0,-2) {$~~~~\mathcal{O}_{1,0}$};
\node (F) at (4,0) {$~~~~\mathcal{O}_{0,2}$};
\node (G) at (3,-1) {$~~~~\mathcal{O}_{\frac{1}{2},\frac{3}{2}}$};
\node (H) at (4,-2) {$~~~~\mathcal{O}_{1,2}$};
\node (I) at (2,-2) {$~~~~\mathcal{O}_{1,1}$};
\draw[thick,red,->] (4-.1,2-.1) -- node[above left]{$\overline{Q}$} (3+.2,1+.2);
\draw[thick,red,->] (3-.1,1-.1) --(2+.2,0+.2);
\draw[thick,red,->] (2-.1,0-.1) --(1+.2,-1+.2);
\draw[thick,red,->] (1-.1,-1-.1) --(0+.2,-2+.2);
\draw[thick,red,->] (4-.1,0-.1) --(3+.2,-1+.2);
\draw[thick,red,->] (3-.1,-1-.1) --(2+.2,-2+.2);
\draw[thick, blue, ->] (3+.2,1-.2) -- node[below left]{$Q$} (4-.15,0+.15);
\draw[thick, blue, ->] (2+.2,0-.2) -- (3-.15,-1+.15);
\draw[thick, blue, ->] (1+.2,-1-.2) -- (2-.15,-2+.15);
\draw[thick, blue, ->] (3+.2,-1-.2) -- (4-.15,-2+.15);
\draw[thick,->] (-1,2) -- node[left]{$\Delta$} (-1,1) ;
\draw[thick,->] (-1,2) -- node[above]{$J$} (0,2) ;
\end{tikzpicture}
\begin{tikzpicture}[scale=.5]
\node (A) at (-6.5,0){};
\draw[fill=white!95!gray] (2,2) -- (-4,-4) -- (-2,-6) -- (4,0) --(2,2);
\draw[fill=white!90!gray] (1,1) -- (-3,-3)-- (-1,-5)--(3,-1) --(1,1);
\draw[fill=white!80!gray] (0,0) -- (-2,-2)-- (0,-4)--(2,-2) --(0,0);
\draw[] (-4,2) --(4,-6);
\draw[] (4,2) --(-4,-6);
\draw[] (-2,2) -- (4,-4) -- (2,-6) -- (-4,0) --(-2,2);
\draw[thick] (2,2) -- (-4,-4) -- (-2,-6) -- (4,0) --(2,2);
\draw[] (0,2) -- (4,-2) -- (0,-6) -- (-4,-2) --(0,2);
\draw[thick] (1,1) -- (3,-1);
\draw[thick] (0,0) -- (2,-2);
\draw[thick] (-2,-2) -- (0,-4);
\draw[thick] (-3,-3) -- (-1,-5);
\draw[red, thick ,->] (4,2) -- (3,1);
\draw[red, thick ,->] (3,1) -- (2,0);
\draw[red, thick ,->] (2,0) -- (1,-1);
\draw[red, thick ,->] (1,-1) -- (0,-2);
\draw[red, thick ,->] (4,0) -- (3,-1);
\draw[red, thick ,->] (3,-1) -- (2,-2);
\draw[blue, thick ,->] (2,0) -- (3,-1);
\draw[blue, thick ,->] (3,-1)--(4,-2);
\draw[blue, thick ,->] (3,1) -- (4,0);
\draw[blue, thick ,->] (1,-1)--(2,-2);
\draw[red, thick ,->] (-4,2) -- (-3,1);
\draw[red, thick ,->] (-3,1) -- (-2,0);
\draw[red, thick ,->] (-2,0) -- (-1,-1);
\draw[red, thick ,->] (-1,-1) -- (-0,-2);
\draw[red, thick ,->] (-4,0) -- (-3,-1);
\draw[red, thick ,->] (-3,-1) -- (-2,-2);
\draw[blue, thick ,->] (-2,0) -- (-3,-1);
\draw[blue, thick ,->] (-3,-1)--(-4,-2);
\draw[blue, thick ,->] (-3,1) -- (-4,0);
\draw[blue, thick ,->] (-1,-1)--(-2,-2);
\end{tikzpicture}
\caption{The SUSY generators shift between radiative primaries corresponding to conformally soft theorems within the same helicity sector. On the left is a zoomed in view of the action on the positive helicity sector. This corresponds to the upper right hand corner in our top-down view of the celestial pyramid. The diamonds corresponding to the positive helicity subleading soft graviton, leading soft gravitino, and leading soft photon are highlighted. }
\label{fig:shiftingdiamonds}
\end{figure}
We would now like to explore how these supercharges interact with our descendancy relations. From~\eqref{supercharges} we quickly see that
\begin{equation}\label{0der}
[\partial_{\bar w},{Q}]=[\partial_w,\overline{{Q}}]=0.
\end{equation}
These observations follow from the $\mathcal{N}=1$ super-Poincar\'e algebra which adds the (anti)-commutation relations
\begin{equation}
\begin{alignedat}{3}
\{\overline{Q}_a,{Q}_{\dot b}\}=\sigma^\mu_{a\dot b}P_\mu,~~~[M^{\mu\nu},\overline Q_a]=\frac{1}{2}(\sigma^{\mu\nu})_a^{~b}\overline Q_b.
\end{alignedat}
\end{equation}
In terms of the SL$(2,\mathbb{C})$ indexing, we have the global part of the $\mathfrak{sbms}_4$ algebra, namely
\begin{equation}\begin{alignedat}{3}
\{G_m,\bar{G}_n\}=P_{m,n},~~~[L_m,G_n]=(\frac{1}{2}m-&n)G_{m+n},~~~[\bar{L}_m,\bar{G}_n]=(\frac{1}{2}m-n)\bar{G}_{m+n}\,,
\end{alignedat}\end{equation}
in addition to the bosonic $\mathfrak{bms}_4$ subalgebra
\begin{equation}\begin{alignedat}{3}
\label{BMSalg}
[L_m,L_n]=(m-n)L_{m+n},~~~&[\bar{L}_m,\bar{L}_n]=(m-n)\bar{L}_{m+n}\,,\\
[L_{n},P_{k,l}]=(\frac{1}{2}n-k)P_{k+n,l},~~~&[\bar{L}_{n},P_{k,l}]=(\frac{1}{2}n-l)P_{k,l+n}\,,\\
\end{alignedat}\end{equation}
restricted to $\{P_{\pm\frac{1}{2},\pm\frac{1}{2}}, G_{\pm\frac{1}{2}},\bar{G}_{\pm\frac{1}{2}},L_{i},\bar{L}_i\}$.
The observation~\eqref{0der} is just the relation
\begin{equation}\label{comm} [L_{m},\bar{G}_n]=[\bar{L}_m,G_n]=0.
\end{equation}
Our question about how the supercharges interact with the SL$(2,\mathbb{C})$ multiplets is simply a question about descendancy relations for a larger algebra.
The conditions for Poincar\'e primaries follow from a relaxation of the BMS primary construction in~\cite{Banerjee:2020kaa} for the Mellin transformed massless amplitudes. See also~\cite{Stieberger:2018onx,Law:2019glh,Law:2020xcf}. A Poincar\'e primary is annihilated by
\begin{equation}
L_1,\bar{L}_1,P_{\frac{1}{2},\frac{1}{2}},P_{\frac{1}{2},-\frac{1}{2}},P_{-\frac{1}{2},\frac{1}{2}}\,,
\end{equation}
and has weights $h,\bar{h}$ under $L_0,\bar{L}_0$
\begin{equation}
L_0|h,\bar{h}\rangle=h|h,\bar{h}\rangle,~~~\bar{L}_0|h,\bar{h}\rangle=\bar{h}|h,\bar{h}\rangle\,,
\end{equation}
while descendants are generated by
\begin{equation}\label{gen1}
L_{-1},\bar{L}_{-1},P_{-\frac{1}{2},-\frac{1}{2}}\,.
\end{equation}
Analogously, a massless $\mathcal{N}=1$ super-Poincar\'e primary is annihilated by
\begin{equation}
L_1,\bar{L}_1,P_{\frac{1}{2},\frac{1}{2}},P_{\frac{1}{2},-\frac{1}{2}},P_{-\frac{1}{2},\frac{1}{2}},G_{\frac{1}{2}},\bar{G}_{\frac{1}{2}}\,,
\end{equation}
and has weights $h,\bar{h}$ under $L_0,\bar{L}_0$, while descendants are generated by
\begin{equation}\label{gen}
L_{-1},\bar{L}_{-1},G_{-\frac{1}{2}},\bar{G}_{-\frac{1}{2}}\,,
\end{equation}
where we have used
$\{G_{-\frac{1}{2}},\bar{G}_{-\frac{1}{2}}\}=P_{-\frac{1}{2},-\frac{1}{2}}$ to remove the translation in~\eqref{gen1} from our list. Here we will focus on the anti-chiral subalgebra spanned by $\bar{L}_i,\bar{G}_{\pm\frac{1}{2}}$, since this is enough to connect the fermionic soft charges examined in this paper to their supersymmetrically related bosonic counterparts.
\begin{figure}[b!]
\centering
\begin{tikzpicture}[scale=0.95]
\draw[thick,->] (-3,-1) -- node[right]{$s$} (-3,0) ;
\draw[thick,->] (-3,-1) -- node[below]{$\bar{h}$} (-4,-1) ;
\node at (4,-1) {};
\draw[thick,<-] (-2,-4) node[left]{$\widetilde{\mathcal{O}}_{3,-2}$} --(0,-4) ;
\draw[thick,<-] (0,-4)--(2,-4);
\draw[thick,<-] (2,-4)--(4,-4);
\draw[thick,<-] (4,-4)--(6,-4)node[right]{$\mathcal{O}_{-1,2}$};
\draw[thick,<-] (-1,-3) node[left]{$\widetilde{\mathcal{O}}_{\frac{5}{2},-\frac{3}{2}}$} --(1,-3);
\draw[thick,<-] (1,-3)--(3,-3);
\draw[thick,<-](3,-3)--(5,-3) node[right]{$\mathcal{O}_{-\frac{1}{2},\frac{3}{2}}$};
\draw[thick,<-] (0,-2) node[left]{$\widetilde{\mathcal{O}}_{2,-1}$} -- (2,-2) ;
\draw[thick,<-] (2,-2) -- (4,-2)node[right]{$\mathcal{O}_{0,1}$} ;
\draw[thick,<-] (1,-1) node[left] {$\widetilde{\mathcal{O}}_{\frac{3}{2},-\frac{1}{2}}$} --node[above]{$\bar{L}_{-1}$} (3,-1) node[right] {$\mathcal{O}_{\frac{1}{2},\frac{1}{2}}$} ;
\filldraw[] (2,0) circle (2pt) node[above]{$\mathcal{O}_{1,0}$};
\filldraw[] (-1,-3) circle (2pt);
\filldraw[] (0,-2) circle (2pt);
\filldraw[] (1,-1) circle (2pt);
\filldraw[] (-2,-4) circle (2pt);
\filldraw[] (6,-4) circle (2pt);
\filldraw[] (5,-3) circle (2pt);
\filldraw[] (4,-2) circle (2pt);
\filldraw[] (3,-1) circle (2pt);
\draw[thick,red,->] (6,-4) -- node[left]{$\bar{G}_{-\frac{1}{2}}$} (5,-3) ;
\draw[thick,red,->] (5,-3) -- (4,-2) ;
\draw[thick,red,->] (4,-2) -- (3,-1) ;
\draw[thick,red,->] (3,-1) -- (2,0) ;
\draw[thick,white!50!red,->] (4,-4) -- (3,-3) ;
\draw[thick,white!50!red,->] (3,-3) -- (2,-2) ;
\draw[thick,white!50!red,->] (2,-2) -- (1,-1) ;
\draw[thick,white!50!red,->] (2,-4) -- (1,-3) ;
\draw[thick,white!50!red,->] (1,-3) -- (0,-2) ;
\draw[thick,white!50!red,->] (0,-4) -- (-1,-3) ;
\filldraw[] (4,-4) circle (2pt);
\filldraw[white] (4,-4) circle (1pt);
\filldraw[] (2,-4) circle (2pt);
\filldraw[white] (2,-4) circle (1pt);
\filldraw[] (0,-4) circle (2pt);
\filldraw[white] (0,-4) circle (1pt);
\filldraw[] (2,-2) circle (2pt);
\filldraw[white] (2,-2) circle (1pt);
\filldraw[] (3,-3) circle (2pt);
\filldraw[white] (3,-3) circle (1pt);
\filldraw[] (1,-3) circle (2pt);
\filldraw[white] (1,-3) circle (1pt);
\end{tikzpicture}
\caption{Vertical section of the celestial pyramid corresponding to the $\Delta=1-J$ diagonal of figure~\ref{fig:shiftingdiamonds}. The commuting generators $\bar{G}_{-\frac{1}{2}}$ and $\bar{L}_{-1}$ map between states in the positive helicity degenerate celestial diamonds corresponding to the most subleading soft theorems. More generally, the $\Delta=k+1-J$ sections connect radiative $J=+s$ primaries of different spins and their descendant $\mathcal{O}_{soft}$. }
\label{fig:shiftingdiamonds2}
\end{figure}
Starting from a conformal primary $\mathcal{O}_{\Delta,J}$ annihilated by $\bar{L}_{+1}$ and $\bar{G}_{+\frac{1}{2}}$
\begin{equation}
\bar{L}_{1}\mathcal{O}_{\Delta,J}(0,0)=0,~~~\bar{G}_{\frac{1}{2}}\mathcal{O}_{\Delta,J}(0,0)=0
\end{equation}
we have the $\mathcal{N}=1$ doublet
\begin{equation}\label{doublet}
\left(\mathcal{O}_{\Delta,J},\bar{G}_{-\frac{1}{2}}\mathcal{O}_{\Delta,J}\right)\,,
\end{equation}
where we will suppress the $(w,{\bar w})=(0,0)$ coordinate here and in what follows.
In terms of
\begin{equation}
h=\frac{1}{2}(\Delta+J),~~~\bar{h}=\frac{1}{2}(\Delta-J)\,,
\end{equation}
we see that the second operator in the doublet has weights $(h',\bar{h}')=(h,\bar{h}+\frac{1}{2})$. Since
\begin{equation}
\bar{L}_{1}\bar{G}_{-\frac{1}{2}}\mathcal{O}_{\Delta,J}=[\bar{L}_{1},\bar{G}_{-\frac{1}{2}}]\mathcal{O}_{\Delta,J}=\bar{G}_{\frac{1}{2}}\mathcal{O}_{\Delta,J}=0\,,
\end{equation}
this is also an ${\mathrm{SL}}(2,\mathbb{C})$ primary.
The operators corresponding to positive helicity soft theorems have $J=+s$ and $\Delta\in\{1-s,...,s+1\}\cap(-\infty,1]$, which we can label by the value $\bar{k}\in\mathbb{Z}$
\begin{equation}
\bar{k}=1+J-\Delta\,,
\end{equation}
the level of their $\bar{L}_{-1}$ primary descendant. Meanwhile, the superpartner $\bar{G}_{-\frac{1}{2}}\mathcal{O}_{\Delta,J}$ will have a primary descendant at level $\bar{k}'=\bar{k}-1$.
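The level shift follows directly from the definition of $\bar k$: since $\bar{G}_{-\frac{1}{2}}$ shifts $(\Delta,J)\to(\Delta+\frac{1}{2},J-\frac{1}{2})$,
\begin{equation}
\bar{k}'=1+J'-\Delta'=1+(J-{\textstyle\frac{1}{2}})-(\Delta+{\textstyle\frac{1}{2}})=\bar{k}-1\,.
\end{equation}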
This is consistent with the structure of spin-$s$ celestial diamonds. The observation that~\cite{Fotopoulos:2020bqj}
\begin{equation}\label{susymult}
\bar{G}_{-\frac{1}{2}}\mathcal{O}^s_{\Delta,s}=\mathcal{O}^{s-\frac{1}{2}}_{\Delta+\frac{1}{2},s-\frac{1}{2}},
\end{equation}
for an appropriate normalization of the operators, can be phrased in terms of the soft operators $\mathcal{O}^s_{soft}$ residing at the bottom corner of the spin-$s$ diamonds. From section~3 of~\cite{Pasterski:2021fjn} we have the following descendancy relations
\begin{equation}
\mathcal{O}^s_{soft}=\frac{1}{\bar k!} \partial_{\bar w}^{\bar k} \mathcal{O}^s_{\Delta,s}\,,\quad \mathcal{O}^{s-\frac{1}{2}}_{soft} =\frac{1}{(\bar k-1)!} \partial_{\bar w}^{(\bar k-1)} \mathcal{O}^{s-\frac{1}{2}}_{\Delta+\frac{1}{2},s-\frac{1}{2}}\,.
\end{equation}
Using the $\mathfrak{sbms}_4$ algebra above, this implies that the soft charges are related via
\begin{equation}\label{softop}
\bar k \,\bar{G}_{-\frac{1}{2}}\mathcal{O}^s_{soft}=\bar{L}_{-1}\mathcal{O}^{s-\frac{1}{2}}_{soft}.
\end{equation}
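To see this explicitly, note that $[\bar{L}_{-1},\bar{G}_{-\frac{1}{2}}]=(\frac{1}{2}(-1)-(-\frac{1}{2}))\bar{G}_{-\frac{3}{2}}=0$, so $\bar{G}_{-\frac{1}{2}}$ commutes with $\partial_{\bar w}$. Applying it to the first descendancy relation and using~\eqref{susymult},
\begin{equation}
\bar{G}_{-\frac{1}{2}}\mathcal{O}^s_{soft}=\frac{1}{\bar{k}!}\partial_{\bar w}^{\bar{k}}\,\mathcal{O}^{s-\frac{1}{2}}_{\Delta+\frac{1}{2},s-\frac{1}{2}}=\frac{1}{\bar{k}}\,\bar{L}_{-1}\left(\frac{1}{(\bar{k}-1)!}\partial_{\bar w}^{\bar{k}-1}\mathcal{O}^{s-\frac{1}{2}}_{\Delta+\frac{1}{2},s-\frac{1}{2}}\right)=\frac{1}{\bar{k}}\,\bar{L}_{-1}\mathcal{O}^{s-\frac{1}{2}}_{soft}\,.
\end{equation}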
Moreover, from the celestial diamond perspective, it is clear that the observation~\eqref{symcurr} about the relation between the symmetry parameters of different spin-$s$ asymptotic symmetry currents extends to general $\bar k$. This is because the shadows of both of the operators $\left(\mathcal{O}^s_{\Delta,s},\mathcal{O}^{s-\frac{1}{2}}_{\Delta+\frac{1}{2},s-\frac{1}{2}}\right)$ appearing in the doublet~\eqref{doublet}, namely
\begin{equation}
\left(\widetilde{\mathcal{O}}^s_{2-\Delta,-s},\widetilde{\mathcal{O}}^{s-\frac{1}{2}}_{\frac{3}{2}-\Delta,\frac{1}{2}-s}\right)\,,
\end{equation}
are both level $k=k'=\Delta+s-1$ ascendants of the corresponding soft operators. We can further see that~\eqref{softop} together with~\eqref{comm} and the shadow/descendancy relations of~\cite{Pasterski:2021fjn} imply
\begin{equation}\label{shadowmult}
\bar k \,\bar{G}_{-\frac{1}{2}}\widetilde{\mathcal{O}}^s_{2-\Delta,-s}
=-\bar{L}_{-1}\widetilde{\mathcal{O}}^{s-\frac{1}{2}}_{\frac{3}{2}-\Delta,\frac{1}{2}-s}.
\end{equation}
This analog of~\eqref{susymult} for the shadow modes plays an important role in celestial CFT since both the stress tensor and the super-current are constructed via shadow transforms. For the particular case of $\Delta=0,s=2$, the relation~\eqref{shadowmult} which involves these two symmetry currents is a statement about how the global symmetry generators act on the representation of $\mathfrak{sbms}_4$ arising from the soft theorem, consistent with
\begin{equation}
[\bar{G}_{-\frac{1}{2}},\bar{L}_{-2}]-\frac{1}{2} [\bar{L}_{-1} ,\bar{G}_{-\frac{3}{2}}]=0.
\end{equation}
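This vanishing can be checked directly, assuming the standard NS-sector commutation relation $[\bar{L}_m,\bar{G}_r]=\left(\tfrac{m}{2}-r\right)\bar{G}_{m+r}$ (central terms do not arise at these levels):

```latex
[\bar{G}_{-\frac{1}{2}},\bar{L}_{-2}]
=-\Big(-1+\tfrac{1}{2}\Big)\bar{G}_{-\frac{5}{2}}
=\tfrac{1}{2}\,\bar{G}_{-\frac{5}{2}}\,,\qquad
[\bar{L}_{-1},\bar{G}_{-\frac{3}{2}}]
=\Big(-\tfrac{1}{2}+\tfrac{3}{2}\Big)\bar{G}_{-\frac{5}{2}}
=\bar{G}_{-\frac{5}{2}}\,,
```

so the two terms cancel level by level.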
We thus see how the spin shifting relations of~\cite{Pasterski:2021dqe} and the celestial diamond story of~\cite{Pasterski:2021dqe,Pasterski:2021fjn} fit together. The global SL$(2,\mathbb{C})$ descendants shift within the $(\Delta,J)$ plane while the SUSY charges translate in the transverse $s$-direction. This structure explains the connection between the gauge parameters used to demonstrate soft theorem/Ward identity equivalences for different spins. While we have focused on the $\mathcal{N}=1$ case connecting pairs of soft theorems, this structure can straightforwardly be extended to the $\mathcal{N}>1$ case.
\section*{Acknowledgements}
The work of Y.P. is supported by the PhD track fellowship of Ecole Polytechnique.
The work of S.P. is supported by the Sam B. Treiman Fellowship at the Princeton Center for Theoretical Science.
The work of A.P. is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 852386).
\section{Introduction}
Observations reveal much evidence of unusual processes occurring
in the Galactic center (GC) region. Examples include the enigmatic 511 keV
annihilation emission discovered by INTEGRAL \citep[see
e.g.][]{knoed}, whose origin is still debated, and the hot plasma with a
temperature of about 10 keV, which cannot be confined in the GC,
so that sources with a power of about $10^{41}$ erg s$^{-1}$
are required to heat the plasma \citep{koyama07}. Indeed, plasma outflows with velocities $\ga
100$ km s$^{-1}$ are observed in the nuclear regions of our Galaxy
\citep{crocker2} and of Andromeda
\citep{gilf}. Time variations of the 6.4 keV line and of the X-ray
continuum emission observed in the direction of molecular clouds
in the GC are interpreted as the reflection of a giant X-ray
flare that occurred in the GC several hundred years ago \citep{inui,ponti,terr}. HESS observations of the GC
in the TeV energy range indicate an explosive injection of cosmic rays (CR)
there, which might be associated with the supermassive black hole
Sgr A$^\ast$ (e.g. Aharonian et al. 2006).
Recent analysis of the
Fermi LAT data \citep[see][]{meng,dobler2} uncovered new
evidence of GC activity. They found two giant features of
gamma-ray emission in the range 1--100 GeV, extending 50 degrees
($\sim 10$ kpc) above and below the Galactic center, the Fermi
Bubble (FB). They presented a list of mechanisms that may
contribute
to the energy release and particle production necessary to
explain the gamma-ray emission from the bubble. They noticed,
however, that most likely the Fermi bubble structure was created
by some large episode of energy injection in the GC, such as a
past accretion event onto the central supermassive black hole
(SMBH) in the last $\sim 10$ Myr. They cast doubt on the idea that
the Fermi bubble was generated by previous starburst activity in
the GC because there was no evidence of massive supernova
explosions ($\sim 10^{4}-10^{5}$) in the past $\sim 10^7$ yr
towards the GC.
Besides, these supernova remnants should be traced by the line
emission of radioactive $^{26}$Al. Observations do not show
significant concentration of $^{26}$Al line towards the GC
\citep{diehl}.
\citet{crocker} and \citet{crocker2,crocker1} argued that the
procedure used by \citet{meng} did not remove the contribution of CR
interactions with the ionised gas, so that gamma-rays could be produced
by proton interactions with the fully ionised plasma in the halo.
\citet{crocker} also argued that the lifetime of these protons can
be very long because the plasma is extremely turbulent in this
region; protons could therefore be trapped there on a time scale
$\tau_{pp}\ga 10^{10}$ yr, and the observed gamma-rays can then be
explained with an injected SN power of $\sim 10^{39}$ erg
s$^{-1}$.
In this letter we propose that the FB emission may result from star
capture processes, which have been developed by \citet{cheng1,cheng2} and
\citet{dog_pasj1,dog_pasj,dog_aa} to explain a wide range of X-ray and gamma-ray
emission phenomena from the GC.
\section{Observations}
The procedure of separation of the bubble emission from the total
diffuse emission of the Galaxy is described in \citet{meng}. It is important to note that the bubble
structure is seen when components of gamma-ray emission
produced by cosmic ray (CR) interaction with the background gas,
i.e. by CR protons ($\pi^o$ decay) and electrons
(bremsstrahlung), are removed. \citet{meng} concluded that the
bubble emission was of the inverse Compton (IC) origin generated
by relativistic electrons.
Here we summarize the multi-wavelength observational constraints of the FB:
\begin{itemize}
\item The spectral shape and intensity of the gamma-rays are almost
constant over the bubble region, which suggests a uniform production
of gamma-rays in the FB. The total gamma-ray flux from the bubble at energies
$E_\gamma>1$ GeV is $F_\gamma\sim 4\times 10^{37}$ erg s$^{-1}$,
and the photon spectrum is a power law,
$dN_\gamma/dE_\gamma\propto E_\gamma^{-2}$, in the range 1--100
GeV \citep{meng};
\item In the radio range the bubble is seen in the WMAP data at tens of GHz
as a residual spherical microwave excess \emph{(``the
microwave haze'')} above the GC, $\sim4$ kpc in radius
\citep{fink}. Its spectrum in the frequency range 23--33
GHz is a power law, $\Phi_{\nu}\propto \nu^{-0.5}$. For
a magnetic field strength $H\sim
10~\mu$G, the electrons responsible for this radio emission have energies in
the range 20--30 GeV and a spectrum $dN_e/dE_e\propto
E^{-2}_e$;
\item The ROSAT 1.5 keV X-ray data clearly show a
characteristic bipolar structure \citep{cohen} that
aligns well with the edges of the Fermi bubble. The ROSAT
structure is explained by a fast wind which drove a shock
into the halo gas with a velocity $v_{sh}\sim 10^8$ cm s$^{-1}$.
This phenomenon requires an energy release of about $10^{55}$
erg at the GC, and this activity should be periodic on a
timescale of $10-15$ Myr;
\item The similarities of the morphology of the radio, X-ray and
gamma-ray structures strongly suggest their common origin.
\end{itemize}
In the leptonic (electron) model of \citet{meng},
gamma-rays are produced by the scattering of relativistic
electrons on soft background photons, i.e. relic, IR, and optical
photons from the disk.
\section{The bubble origin in a model of multiple star captures by the central black hole}
In this section we present our ideas about the origin of the Fermi Bubble in the
framework of star capture by the central SMBH. The
process of gamma-ray emission from the bubble is determined by a
number of stages of energy transformation. Each of these stages
actually involves complicated physical processes. The exact
details of these processes are still not understood very well.
Nevertheless, their qualitative features do not depend on these
details. In the following, we only briefly describe these
processes and give their qualitative interpretations. We begin to
describe processes of star capture by the central black hole as
presented in \citet{dog_aa}.
\subsection{Star capture by the central black hole}
As observations show, there is a supermassive black hole (Sgr
A$^\ast$) at the center of our Galaxy with a mass of $\sim
4\times10^6~M_{\odot}$. A total tidal disruption of a star occurs
when the penetration parameter $b^{-1}\gg 1$, where $b$ is the
ratio of the periapse distance $r_p$ to the tidal radius $R_T$.
The tidal disruption rate can be approximated, to within an
order of magnitude, as $\nu_s\sim 10^{-4}-
10^{-5}$~yr$^{-1}$ \citep[see the review of][]{alex05}.
The energy budget of a tidal disruption event follows from
analysis of star matter dynamics.
Initially, about 50\% of the stellar mass becomes tightly bound to
the black hole, while the remaining 50\% is
forcefully ejected \citep[see, e.g.][]{ayal}. The kinetic energy
carried by the ejected debris is a function of the penetration
parameter $b^{-1}$ and can significantly exceed that released by a
normal supernova ($\sim 10^{51}$~erg) if the orbit is highly
penetrating \citep{alex05},
\begin{equation}\label{energy}
W\sim 4\times 10^{52}\left(\frac{M_\ast}{M_\odot}\right)^2
\left(\frac{R_\ast}{R_\odot}\right)^{-1}\left(\frac{M_{\rm bh}/M_\ast}{10^6}\right)^{1/3}
\left(\frac{b}{0.1}\right)^{-2}~\mbox{erg}\,.
\end{equation}
Thus, the mean kinetic energy per escaping nucleon is estimated as
$E_{\rm esc}\sim 42 \left(\frac{\eta}{0.5}\right)^{-1} \left(\frac{M_\ast}{M_\odot}\right)
\left(\frac{R_\ast}{R_\odot}\right)^{-1}\left(\frac{M_{\rm bh}/M_\ast}{10^6}\right)^{1/3}
\left(\frac{b}{0.1}\right)^{-2}~\mbox{MeV}$,
where $\eta M_\ast$ is the mass of the escaping material. From $W$ and
$\nu_s$ we obtain the average power released in the form of a flux of
subrelativistic protons: for $W\sim 3\times 10^{52}$ erg and $\nu_s\sim
3\times 10^{-5}$ yr$^{-1}$, we get $\dot{W}\sim 3\times 10^{40}$ erg s$^{-1}$.
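As a sanity check, this average power follows directly from the energy per capture and the capture rate (a minimal sketch using the illustrative values quoted above; the function name is ours):

```python
# Order-of-magnitude check: average power W_dot = W * nu_s, with the
# illustrative values from the text (W ~ 3e52 erg per capture,
# nu_s ~ 3e-5 captures per year).
YEAR_S = 3.156e7  # seconds per year

def mean_power_erg_s(W_erg, nu_per_yr):
    """Average power (erg/s) for energy W_erg released at rate nu_per_yr."""
    return W_erg * nu_per_yr / YEAR_S

W_dot = mean_power_erg_s(3e52, 3e-5)
print(f"W_dot ~ {W_dot:.1e} erg/s")  # ~3e40 erg/s
```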
In \citet{dog_pasj} we described the injection spectrum of protons
generated by processes of star capture as monoenergetic. This is a
simplification of the injection process, because a stream of
charged particles is influenced by various plasma
instabilities, as was shown by \citet{chech} for the case of
relativistic jets. At first the jet material moves by inertia.
Then, due to the excitation of plasma instabilities in the flux, the
particle distribution functions, which were initially delta
functions both in angle and in energy, transform into distributions with complex
angular and energy dependencies.
\subsection{Plasma heating by subrelativistic protons}
Subrelativistic protons lose
their energy mainly by Coulomb collisions, i.e.
$\frac{dE_p}{dt}=-\frac{4\pi ne^4}{m_e
\mathrm{v}_{\rm{p}}}\ln\Lambda$, where $\mathrm{v}_{\rm{p}}$ is
the proton velocity, and $\ln\Lambda$ is the Coulomb logarithm. In
this way the protons
transfer almost all their energy to the background plasma and
heat it. This process was analysed in \citet{dog_pasj,dog_aa}.
For the GC parameters, the average Coulomb-loss time for protons of
several tens of MeV is
several million years. The radius of the plasma heated by the protons
is estimated from the diffusion equation describing propagation
and energy losses of protons in the GC \citep[][]{dog_pasj2}. This
radius is about 100 pc. The temperature of heated plasma is
determined by the energy which these protons transfer to the
background gas. For $\dot{W}\sim 10^{40}-10^{41}$erg s$^{-1}$ the
plasma temperature is about 10 keV \citep{koyama07} just as
observed by Suzaku for the GC. The plasma with such a high
temperature cannot be confined at the GC and therefore it expands
into the surrounding medium.
\subsection{The hydrodynamic expansion stage}
Hydrodynamics of gas expansion was described in many monographs
and reviews \citep[see e.g.][]{kogan}. As the interval between star captures may be smaller than
the proton energy-loss time, we have an almost stationary energy
release in the central region with a power $\dot{W}\sim 3\times
10^{40}$ erg s$^{-1}$. This situation is very similar to that
described by \citet{weav77} for a stellar wind expanding into a
uniform-density medium. In this model, a star begins at time $t=0$
to blow a spherically symmetric wind with velocity $V_w$, mass-loss rate
$dM_w/dt=\dot{M_w}$, and luminosity $L_w=\dot{M_w}V_w^2/2$, which is
analogous to the power $\dot{W}$ produced by the star-capture processes.
Most of the evolution time is occupied by the so-called snowplow phase,
when a thin shock front propagates through the medium. The
shock expands as
\begin{equation}
R_{sh}(t)=\alpha \left(\frac{L_wt^3}{\rho_0}\right)^{1/5}
\label{rho}
\end{equation}
where $\rho_0=n_0m_p$ and $\alpha$ is a constant of order
unity. The velocity distribution $u(z)$ inside the expanding
region is nonuniform.
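Equation (2) can be evaluated numerically (a hedged sketch with the illustrative values used in this letter: $L_w\sim 3\times 10^{40}$ erg s$^{-1}$, halo density $n_0\sim 10^{-3}$ cm$^{-3}$, $\alpha\sim 1$, and a characteristic time of order 15 Myr):

```python
# Evaluation of the wind-bubble shock radius, equation (2):
# R_sh(t) = alpha * (L_w t^3 / rho_0)^(1/5).
M_P = 1.67e-24   # proton mass [g]
KPC = 3.086e21   # cm per kpc
MYR = 3.156e13   # seconds per Myr

def shock_radius_cm(L_w, t, n0, alpha=1.0):
    """Shock radius in cm for wind luminosity L_w [erg/s], time t [s],
    ambient number density n0 [cm^-3]."""
    rho0 = n0 * M_P
    return alpha * (L_w * t**3 / rho0) ** 0.2

R = shock_radius_cm(3e40, 15 * MYR, 1e-3)
print(f"R_sh ~ {R / KPC:.1f} kpc")  # a few kpc, comparable to the FB size
```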
Our extrapolation of this hydrodynamic solution onto the Fermi
bubble is, of course, rather rough. First, the gas distribution in
the halo is nonuniform. Second, the analysis does not take into
account particle acceleration by the shock. A significant fraction
of the shock energy is spent on acceleration, which modifies the shock
structure. Nevertheless, this model presents a qualitative picture
of a shock in the halo.
\subsection{Shock wave acceleration phase and non-thermal emission}
The theory of particle acceleration by shocks is described in many
publications. This theory is well developed, and extensive numerical
calculations have been performed to compute the spectra of
particles accelerated by supernova (SN) shocks and the emission
produced by the accelerated electrons and protons, from the
radio band up to TeV gamma-rays \citep[see e.g.][]{volk}. Nevertheless,
many aspects of these processes are still unclear. For instance,
the ratio of electrons to protons accelerated by shocks is not
reliably estimated \citep[see][]{byk}.
We note that the energy of the shock front expected in the GC is nearly two
orders of magnitude larger than the energy
released in a SN explosion. Therefore, the process of particle
acceleration, in terms of envelope size, number of accelerated
particles, maximum energy of accelerated particles, etc., may
differ significantly from that for SNe. Below we present
simple estimates of electron acceleration by shocks.
The injection spectrum of electrons accelerated in shocks is
power-law, $Q(E_e)\propto
E_e^{-2}$, and the maximum energy of accelerated electrons can be
estimated from a kinetic equation describing their spectrum at the
shock \citep{ber90}, $E_{max}\simeq {v_{sh}^2}/{D\beta}$, where
$v_{sh}\sim 10^8$cm/s is the velocity of shock front, $D$ is the
diffusion coefficient at the shock front and the energy losses of
electrons (synchrotron and IC) are: $dE_e/dt=-\beta E_e^2$. Recall
$\beta$ is a function of the magnetic and background radiation
energy densities, $\beta \sim w\sigma_Tc/(m_ec^2)^2$, where
$\sigma_T$ is Thompson cross section and $w=w_{ph}+w_B$ is the
combined energy density of background photons $w_{ph}$ and the
magnetic energy density $w_B$ respectively. It is difficult to
estimate the diffusion coefficient near the shock. For a qualitative
estimate, we can use the Bohm diffusion coefficient, $D\sim r_L(E_e)c$,
where $r_L$ is the Larmor radius of the electrons. Using typical
values of these parameters, we obtain $E_{max}\sim
1~\mbox{TeV}~v_8B_{-5}^{1/2}w_{-12}^{-1/2}$, where $v_8$ is the shock
velocity in units of $10^8$ cm s$^{-1}$, $B_{-5}$ is the magnetic field at
the shock in units of $10^{-5}$ G, and $w_{-12}$ is the energy
density in units of $10^{-12}$ erg cm$^{-3}$.
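This scaling can be bundled into a small helper for exploring the parameter dependence (a sketch; the 1 TeV prefactor absorbs the order-unity constants, and the function name is ours):

```python
def E_max_TeV(v8, B_m5, w_m12):
    """Maximum electron energy in TeV from the scaling
    E_max ~ 1 TeV * v_8 * B_{-5}^(1/2) * w_{-12}^(-1/2)."""
    return 1.0 * v8 * B_m5**0.5 * w_m12**-0.5

print(E_max_TeV(1.0, 1.0, 1.0))  # 1.0 TeV at the fiducial values
# A stronger radiation/magnetic background (larger w) lowers E_max:
print(E_max_TeV(1.0, 1.0, 4.0))  # 0.5 TeV
```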
The spectrum of electrons in the bubble is modified
by processes of energy losses and escape. It can be derived from
the kinetic equation
\begin{equation}
\frac{d}{dE}\left(\frac{dE}{dt}N\right)+\frac{N}{T}=Q(E)
\end{equation}
where $dE/dt=\beta E^2+\nabla u(z)E$ describes the inverse
Compton, synchrotron and adiabatic (because of wind velocity
variations) energy losses, $T$ is the time of particle escape from
the bubble, and $Q(E)=KE^{-2}\theta(E_{max}-E)$ describes
the particle injection spectrum in the bubble. As one can see, in the
general case the spectrum of electrons in the bubble cannot be
described by a single power law, as assumed by \citet{meng}. The
spectrum of electrons has a break at the energy $E_b\sim 1/\beta
T$, where $T$ is the characteristic time
of either particle escape from the bubble or of the adiabatic
losses; e.g. for $\nabla u=\alpha$ the break position follows from
$T\sim 1/\alpha$. By solving equation (3), we can see that the electron spectrum cannot be described by a
single power law even in the case of power-law injection \citep[see e.g.][]{ber90}.
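The origin of the break can be made explicit by the stationary solution of equation (3). Neglecting the adiabatic term, escape dominates below $E_b\sim 1/\beta T$ and radiative losses dominate above it, giving (a sketch for the injection $Q=KE^{-2}$):

```latex
N(E)\;\simeq\;
\begin{cases}
\,Q(E)\,T = K T\,E^{-2}\,, & E\ll E_b\,,\\[4pt]
\,\dfrac{1}{\beta E^{2}}\displaystyle\int_{E}^{E_{max}} K E'^{-2}\,dE'
\;\simeq\; \dfrac{K}{\beta}\,E^{-3}\,, & E\gg E_b\,,
\end{cases}
```

i.e. the cooled part of the spectrum steepens by one power of $E$.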
The distribution of background photons can be derived from the
GALPROP program. The average energy densities of background photons
in the halo are $w_o=2$ eV cm$^{-3}$ for optical and $w_{IR}=0.34$
eV cm$^{-3}$ for IR photons. These soft-photon energy densities
are clearly not negligible in comparison with $w_{CMB}=0.25$ eV
cm$^{-3}$ for the relic photons, and they are also comparable with the
magnetic energy density ($\sim 1\,(H/5\times 10^{-6}~\mbox{G})^2$
eV cm$^{-3}$). The expected IC energy flux of gamma-rays and
synchrotron radiation emitted from the same population of
electrons described above are shown in Fig. \ref{emission}
for different values of $E_b$ and $E_{max}$. The
Klein-Nishina IC cross-section \citep{gould} is used. The observed
spectrum of the radio emission in the range 5--200 GHz and the gamma-ray data are
taken from \citet{dobler1} and \citet{meng}, respectively.
The inverse Compton gamma ray spectrum is formed by
scattering on three different components of the background photons.
When these three components are combined (see Fig.
\ref{emission}b), they mimic a photon spectrum $E_\gamma^{-2}$ and
describe well the data shown in Fig. 23 of \citet{meng}.
We remark that a single power law with a spectral
index between 1.8 and 2.4, over the electron energy range from 0.1 to 1000 GeV,
can also
explain both the Fermi data and the radio data, as suggested
by Su et al. (2010). Theoretically, however, a more complicated electron spectrum
develops when the cooling time scale is comparable with the
escape time, even if the electrons are injected with a single power law, as shown by equation (3).
\subsection{The thermal emission from heated plasma}
In our model, hot 10 keV plasma is injected into the bubbles with a power
$\dot{W} \sim 3\times 10^{40}$ erg s$^{-1}$. Part of this
energy is used to accelerate charged particles at the shock,
but a good fraction goes into heating the gas in
the halo through Coulomb collisions. The temperature of the halo gas
can be estimated from $nd^3kT\approx \dot{W}d/v_w$, which gives
$kT\sim 1.5\, (\dot{W}/3\times
10^{40}~\mbox{erg s}^{-1})\,v_8^{-1}\,(d/5~\mbox{kpc})^{-2}\,(n/10^{-3}~\mbox{cm}^{-3})^{-1}$ keV. The thermal
radiation power per unit volume of the heated halo gas is given by
$L_{th}=1.4\times 10^{-27}n_en_iZ^2T^{1/2}$ erg cm$^{-3}$ s$^{-1}$
\citep{rybicki}.
Using $kT=1.5$ keV, $n_e=n_i=10^{-3}$ cm$^{-3}$ and $Z=1$, we find
$L_x\sim 10^{38}$ erg s$^{-1}$.
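This estimate can be reproduced in a few lines (a hedged sketch; the sphere of radius 5 kpc standing in for the emitting volume is our illustrative assumption):

```python
import math

# Rough free-free (bremsstrahlung) luminosity of the heated halo gas,
# using the emissivity 1.4e-27 * n_e * n_i * Z^2 * sqrt(T) erg cm^-3 s^-1
# with kT = 1.5 keV, n = 1e-3 cm^-3, and a ~5 kpc emitting sphere.
KPC = 3.086e21        # cm per kpc
KEV_PER_K = 8.617e-8  # keV per kelvin

kT_keV, n, Z = 1.5, 1e-3, 1
T = kT_keV / KEV_PER_K                                # ~1.7e7 K
emissivity = 1.4e-27 * n * n * Z**2 * math.sqrt(T)    # erg cm^-3 s^-1
L_x = emissivity * 4.0 / 3.0 * math.pi * (5 * KPC)**3
print(f"L_x ~ {L_x:.1e} erg/s")  # ~1e38 erg/s
```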
\section{Discussion}
The observed giant structure of the FB is difficult to explain by other processes.
We suggest that periodic star-capture processes by the central SMBH can inject
$\sim 3\times 10^{40}$ erg s$^{-1}$ of hot plasma into the Galactic halo. The hot gas can
expand hydrodynamically and form shocks that accelerate electrons to relativistic
energies. Synchrotron radiation and inverse Compton scattering on the background
soft photons then produce the observed radio and gamma-ray emission, respectively. Acceleration of protons by the same shocks
may contribute to ultra-high-energy cosmic rays; this will be considered in future work.
It is interesting to point out that the mean free path of TeV electrons is
$\lambda \sim \sqrt{D/(\beta E_e)}\sim 50\, D_{28}^{1/2}\tau_5^{1/2}$ pc, where
$D_{28}$ and $\tau_5$ are the diffusion coefficient and the cooling time of TeV
electrons in units of $10^{28}$ cm$^2$ s$^{-1}$ and $10^5$ yr, respectively. This estimated
mean free path is much shorter than
the size of the bubble. In our model a capture occurs roughly once every $\sim 3\times 10^4$ yr,
so we expect nearly 100 captures in 3 million years. Each of these
captures can produce an individual shock front; therefore the gamma-ray radiation can
be emitted uniformly over the entire bubble.
Furthermore we can estimate the shape of the bubble, if we simplify
the geometry of our model as follows. After each capture a disk-like
hot gas will be ejected from the GC. Since the gas pressure in the halo
($n(r)kT\sim 10^{-14}(n/3\times10^{-3}~\mbox{cm}^{-3})(T/3\times 10^4~\mbox{K})$ erg cm$^{-3}$) is low and decreases quadratically
for distances larger than 6 kpc (Paczynski 1990),
we can assume that the hot gas can escape freely vertically, which is defined as the z-direction and hence the z-component of the wind velocity
$v_{wz} = constant$ or $z=v_{wz}t$. The ejected disk has a thickness $\Delta z=v_{wz} t_{cap}$,
where $t_{cap}=3\times 10^4$yr is the
capture time scale. On the other hand the hot gas can also expand laterally and its radius along the direction of the galactic disk is given by
$x(t)=v_{wx} t + x_0\approx v_{wx} t$, where $x_0\sim 100$ pc (cf. Section 3.2). When the expansion speed is supersonic, a
shock front can form at the edge of the ejected disk. In the vertical co-moving frame of the ejected disk
the energy of the disk is
$\Delta E$, which is approximately constant if the radiation loss is small. The energy
conservation gives $\Delta E={1\over2}m v_{wx}^2$ with $m=m_0+m_s(t)=m_0+\pi x^2\Delta
z\rho$, where $m_0\approx 2\Delta E/v_w^2$ is the initial mass in the ejected disk, $m_s$ is the swept-up mass from the surrounding gas when the disk is expanding laterally, and $\rho=m_pn$ is the density of the medium
surrounding the bubble. Combining the above equations, we obtain
$\Delta E={1\over2}[m_0+\pi (v_{wx}t)^2\Delta z\rho]
v_{wx}^2$.
There are two characteristic stages: a free-expansion stage, in which $v_{wx}\approx$ constant for $m_0>m_s(t)$, and a deceleration stage for $m_0<m_s(t)$. The time scale for switching from free expansion to deceleration is given by $m_0=m_s(t_s)$, i.e. $t_s=\sqrt{m_0/(\pi \Delta z \rho v_{wx}^2)}$. In the free-expansion stage, we obtain
$x=\frac{v_{wx}}{v_{wz}}z \sim z$
for $x<v_{wx} t_s=x_s$.
In the deceleration stage, $\Delta E\approx{1\over2}\pi (v_{wx}t)^2\Delta z\rho\,
v_{wx}^2$, and we obtain
$(x/v_w t_s)=\left(\frac{\Delta E}{\pi t_s^2 v_{wx}^4 \Delta z \rho}\right)^{1/4}\left(\frac {z}{v_w t_s}\right)^{1/2}
\approx 0.9 \left(\frac {z}{v_w t_s}\right)^{1/2}$, where we have approximated $v_{wx}\sim v_{wz}\sim v_w\sim 10^8$ cm s$^{-1}$, $\Delta E=3\times 10^{52}$ erg and $\rho/m_p=3\times 10^{-3}$ cm$^{-3}$.
The switching from the linear to the quadratic relation takes place at $z_s\sim v_w t_s \sim 300$ pc. The quasi-periodic injection of disks into the halo can form a sharp edge, where shock fronts result from the laterally expanding disks, with the quadratic shape $z\sim x^2$. Fitting the gamma-ray spectrum gives $E_b \sim 50$ GeV, which corresponds to a characteristic time scale of either adiabatic loss or particle escape of $\sim 15$ Myr. Using equation (2), the characteristic radius of the FB is then about 5 kpc, quite close to the observed size of the FB.
\acknowledgments We thank the anonymous referee for very useful suggestions and Y.W. Yu for useful discussion.
KSC is supported by a grant under HKU
7011/10p. VAD and DOC are partly supported by
the NSC-RFBR Joint Research Project RP09N04
and 09-02-92000-HHC-a. CMK is supported in part by the National Science
Council, Taiwan, under grants NSC 98-2923-M-008-001-MY3 and NSC
99-2112-M-008-015-MY3.
\begin{figure}
\epsscale{.80} \plotone{fig1.eps} \caption{(a) Energy fluxes produced by
synchrotron and IC emission with $E_{max}=1$ TeV and different $E_b$. (b) The gamma-ray
spectra with $E_b=50$ GeV and different $E_{max}$.
\label{emission}}
\end{figure}
\section{Introduction}
The \textit{advection-diffusion equation} (ADE) is a common tool for describing various transport phenomena.
In its most basic form (see Eq.~\eqref{eq:ADE}), the equation represents the evolution in time of a scalar quantity of interest due to convection and diffusion.
The ADE has been employed to describe problems encountered in a variety of scientific fields. For example, transport of contaminants in the environment~\cite{boeker2011book}, chemical reaction engineering~\cite{augier2010}, filtration ~\cite{JCH2017}, semi-conductor physics~\cite{rhoderick1982}, cognitive psychology~\cite{ratcliff2004} and biological systems~\cite{murray2002book}.
In many applications, the solution of the ADE is obtained by numerical simulations. The numerical approaches can be roughly subdivided into two categories: Eulerian and Lagrangian~\cite{zhang2007}.
In an Eulerian framework, the equation is solved over a fixed grid~\cite{P001,TE2004}~(usually with a finite-volume or finite-element method).
In a Lagrangian framework, the unknown variable (e.g. concentration of a chemical species) is modeled by a collection of \textit{particles} which are transported in the domain and may change their properties (e.g. mass) over time ~\cite{majohnson2013,marol2017}.
A number of hybrid methods, i.e. Eulerian-Lagrangian methods, are also popular~\cite{younes2005,celia1990,neuman1984}.
Among the different Lagrangian methods, one of the most common is \textit{particle tracking} (PT). In this approach, the distribution of ``particles'' represents (in an approximate manner) the concentration.
The essence of PT is to update the position of the particles every time step via a Langevin equation, i.e. by a combination of a deterministic jump which represents the advection term and a random walk (RW) which represents the diffusion term.
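The update rule just described can be sketched in a few lines (a minimal illustration, not the authors' code; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def pt_step(x, u, D, dt):
    """One particle-tracking step: deterministic advection u*dt plus a
    Gaussian random-walk displacement with variance 2*D*dt."""
    return x + u * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)

# Pure diffusion check: the ensemble variance should grow as 2*D*t.
x = np.zeros(100_000)
D, dt, n_steps = 1e-3, 0.1, 100
for _ in range(n_steps):
    x = pt_step(x, u=0.0, D=D, dt=dt)
print(x.var())  # ~ 2 * D * n_steps * dt = 0.02
```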
Lagrangian approaches in general, and RW schemes in particular, are very popular for solving the ADE,
due to various advantages over Eulerian schemes.
In Eulerian codes, the concentration is homogenized over a numerical cell, or varies in the cell but in a restricted fashion (e.g. a linear change).
This may smear out sharp concentration fronts, i.e. cause numerical diffusion.
The remedy to this problem is to increase the mesh resolution, but this has a significant computational cost.
Sharp concentration fronts are very common in a wide variety of applications, such as mixing-controlled or very fast chemical reactions, or advection-dominated problems (i.e. problems characterized by high P\'eclet numbers).
In a different vein, when non-uniform or ``noisy'' initial conditions must be employed, a RW scheme is quite efficient in describing such conditions~\cite{paster2014connecting}.
Moreover, Lagrangian approaches can tackle problems with an infinite spatial domain.
This is a key difference with respect to Eulerian codes, which need a complete description and discretization of the computational domain.
In RW codes (being gridless methods) no such restriction applies; note, however, that the velocity field (i.e., the advection term) must be known in order to solve the ADE.
In infinite domains, the velocity is known everywhere in the domain only for some special cases where an analytical solution of flow exists.
We shall discuss one such case in the sequel.
Reactive systems can also be solved by RW codes.
Both homogeneous and heterogeneous reactions can be included in RW codes ~\cite[e.g.,][]{sanchez2015}.
Consider a first order homogeneous reaction (e.g.: a radioactive decay) \ch{A ->[ k ] B} where the reaction rate $r$ depends linearly on the concentration of $A$, i.e. $r=kC_A$. Such case is relatively straightforward to implement in RW~\cite{kinzelbach1988,prickett1981,sherman1986}.
The problem is more complicated when considering a second order homogeneous reaction \ch{A + B ->[ k ] C}, where the reaction rate depends on the concentrations of both A and B, i.e. $r=kC_AC_B$.
This problem was tackled by~\citet{paster2014connecting,paster2013} following the concepts presented in the work of~\citet{benson2008}.
The use of the RW approach for modeling second order reactive systems is especially beneficial when the reaction is diffusion limited and significant concentration gradients develop at the interface between ``islands" of the reactants~\cite{paster2014connecting}.
More recently~\citet{SoleMari2017} presented an approach for including more complicated rate laws, such as Michaelis-Menten, in this RW approach.
Another significant body of literature was devoted to the inclusion of heterogeneous reactions in RW codes~\cite{agmon1984, schuss2015book, plante2011, prustel2013} and to the implementation of boundary conditions in general.
Some boundary conditions are straightforward to translate from their mathematical definition into RW framework: for example, directly tracking the particles position makes ``inlet'' and ``outlet'' conditions trivial to implement.
Some conditions at solid boundaries are also easy to setup.
An impermeable wall (i.e. where flux is equal to zero, $\partial C/\partial n=0$, $n$ being the normal to the wall) is described by imposing a ``reflection'' rule: particles that cross the boundary are reflected back into the domain.
The more general condition of constant flux (i.e., a Neumann condition, $-D\partial C/\partial n=q(\textbf{x},t)> 0$, where $D$ is the diffusion coefficient) is implemented in PT by combining the reflection rule with the introduction of new particles at the wall~\citep{szymczak2003}.
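The reflection rule itself is a one-liner in practice (a minimal 1D sketch; the wall location is illustrative):

```python
import numpy as np

def reflect(x, wall=0.0):
    """Zero-flux (impermeable) wall at x = wall: particles that crossed
    the boundary are mirrored back into the domain x >= wall."""
    crossed = x < wall
    x[crossed] = 2.0 * wall - x[crossed]
    return x

print(reflect(np.array([-0.3, 0.2])))  # [0.3 0.2]
```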
A more complicated scenario arises when dealing with heterogeneous reactions. These stand as a middle ground between the no-flux condition and the ``perfect-sink'' boundary condition of infinitely fast reaction.
Such heterogeneous reactions can be modeled by a Robin (mixed) boundary condition $-D\frac{\partial C}{\partial n}=kC$ where $n$ is a coordinate normal to the wall.
Even in the Eulerian framework, efforts to accurately employ these conditions are recent~\citep{lin2017high}.
In the current state of the art of RW~\cite[e.g.,][]{agmon1984,singer2008}, this boundary condition is represented in the algorithm by a certain probability $p$ for a particle that hits the wall to be annihilated and removed from the system.
\citet{agmon1984} derived a first-order accurate expression for this probability, given by $p=p_1=k\sqrt{\frac{\pi \Delta t}{D}}$.
In the case of vanishingly small $\Delta t$, this expression is correct.
In a real application though, there will be a technical constraint on the lower limit for $\Delta t$, dictated by a trade-off between simulation accuracy and computational expense: in these cases, using this expression for reaction probability at the wall could result in an incorrect estimation of particle flux.
In the present work we derive $p_2$, a second-order accurate expression for the reaction probability and discuss the relevance and importance of this result.
We then illustrate the difference between using $p_1$ and $p_2$ by applying them to two test cases and comparing the results with analytical and numerical solutions.
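For concreteness, the first-order rule can be sketched as follows (a hypothetical implementation, one bookkeeping variant among several; the wall at x = 0 and all names are illustrative):

```python
import numpy as np

def reaction_probability_p1(k, D, dt):
    """First-order probability for a wall-hitting particle to react:
    p1 = k * sqrt(pi * dt / D)."""
    return k * np.sqrt(np.pi * dt / D)

def robin_step(x, D, dt, k, rng):
    """One RW step with a reactive (Robin) wall at x = 0.  Particles that
    cross the wall react (are removed) with probability p1 and are
    otherwise reflected back into the domain x > 0."""
    p1 = reaction_probability_p1(k, D, dt)
    x = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
    reacted = (x < 0.0) & (rng.random(x.size) < p1)
    x = x[~reacted]
    x[x < 0.0] = -x[x < 0.0]      # reflect the surviving crossers
    return x

rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal(10_000))
for _ in range(50):
    x = robin_step(x, D=1.0, dt=1e-4, k=1.0, rng=rng)
print(x.size)  # fewer particles than initially: mass lost to the wall
```

For $k=0$ this reduces to the pure reflection rule; replacing `reaction_probability_p1` with the second-order expression $p_2$ is a one-line change.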
We thus organized this paper as follows.
In the next section, an overview of the theoretical background for the RW methodology in solving the ADE is given, followed by a description of the implementation of the reactive boundary condition in its classical, first-order accurate form.
Then, a theoretical derivation of the higher-order terms is given in Section \ref{sec:convergence_proof}, and specifically the second-order expression which will be used in the remainder of this work.
In Section \ref{sec:exampleApps} two cases will then be shown, where we used a RW code to solve chosen transport problems employing the proposed corrected reaction probability and compared the results with available analytical predictions or grid-converged Eulerian simulations.
In this way, we compare the results of the RW code using its first-order or second-order estimations for the reaction probability, and illustrate the effectiveness of the higher order correction.
\section{Governing equations and the RW methodology} \label{sec:GovEqRWmeth}
\subsection{Governing equations}
For the special case of a constant diffusion coefficient $D$, the advection-diffusion equation is given by
\begin{equation} \label{eq:ADE}
\frac{\partial C}{ \partial t} + {\bf u}\cdot \nabla C =D \nabla^2 C, \quad \textbf{x} \in \Omega
\end{equation}
\noindent where $C=C(\textbf{x},t)$ is concentration [ML$^{-d}$], $\textbf{u}(\textbf{x},t)$ is the velocity [LT$^{-1}$], and $d=1,2,3$ is the dimension of the domain $\Omega$.
Next, assume that a certain part of the domain boundary, $\Gamma_1\subset \partial \Omega$, is a non-permeable wall where the perpendicular velocity vanishes, $u_n=0$.
If this wall is reactive, and if the reaction rate is assumed to be linear with the concentration, then the flux of $C$ into the boundary is equal to the rate of reaction.
In this case the boundary condition for $C$ is given by:
\begin{equation}\label{eq:rxnBC}
-D \frac{\partial C}{ \partial n} = kC, \quad \textbf{x} \in \Gamma_1
\end{equation}
\noindent where $n$ is the outward normal coordinate, and $k=k(\textbf{x})\ge 0$ is the reaction coefficient at the boundary. If the boundary is passive, $k=0$; here we are mostly concerned with the reactive case $k>0$.
\subsection{RW scheme}
In the RW numerical scheme, the concentration $C$ is represented by a discrete collection of \textit{particles} in the domain $\Omega$.
Each particle has a specified mass $m_p$.
For the sake of simplicity, and without loss of generality, we shall assume here that all particles have exactly the same mass $m_p=\mathrm{const}$.
Note that these particles are not physical particles; rather, each particle is a point mass (mathematically speaking, a Dirac delta function).
In other words, the particles can be conceived as a numerical grid which changes over time.
In the RW scheme, a particle's position is updated in each time step by the Langevin equation.
Without loss of generality, we shall consider a one-dimensional problem.
In this problem, the domain is the negative $x$-axis and the governing equation is
\begin{equation} \label{eq:ADE1d}
\frac{\partial C}{\partial t}+u\frac{\partial C}{\partial x}=D\frac{\partial^2 C}{\partial x^2} , \quad \forall x \le 0,
\end{equation}
\noindent and the boundary condition at $x=0$ is given by
\begin{equation}\label{eq:rxnBC1d}
-D \frac{\partial C}{ \partial x} = kC \; .
\end{equation}
\noindent The Langevin equation for this one-dimensional case reads
\begin{equation}\label{eq:Langevin1d}
x(t+\Delta t)=x(t)+u\Delta t+\sqrt{2D\Delta t}~\xi
\end{equation}
\noindent where $\Delta t$ is the time step size, and $\xi$ is a random number with standard normal distribution, i.e. zero mean and unit variance.
\begin{algorithm}
\caption{Random Walk evolution (1D, $x<0$, reactive b.c. at $x=0$)}
\begin{algorithmic} \label{alg:performRW}
\STATE $t=0$, initialize particles location
\WHILE {$t<T$}
\STATE $t \leftarrow t+\Delta t$, perform RW (Eq.~\eqref{eq:Langevin1d})
\IF {$x>0$}
\STATE reflect, or annihilate with probability $p$ (see Algorithm~\ref{alg:annihilate_reflected}).
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
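As an illustration, the Langevin update \eqref{eq:Langevin1d} and the reflection step of Algorithm~\ref{alg:performRW} can be sketched in Python (a minimal sketch; the function names are illustrative and not taken from any published code):

```python
import math
import random

def langevin_step(x, u, D, dt, rng=random):
    """One attempted move of the Langevin equation:
    x(t+dt) = x(t) + u*dt + sqrt(2*D*dt)*xi, with xi ~ N(0, 1)."""
    xi = rng.gauss(0.0, 1.0)
    return x + u * dt + math.sqrt(2.0 * D * dt) * xi

def reflect(x, x0=0.0):
    """Reflect a position that crossed the wall at x0 back into the domain x < x0."""
    return 2.0 * x0 - x if x > x0 else x
```

In a full simulation these two steps would be applied to every particle at each time step, with the annihilation test applied only to reflected particles.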
\subsubsection{Implementation of the reactive BC}
In our numerical scheme, after we moved all the particles by Eq.~\eqref{eq:Langevin1d}, the next step is to implement the boundary condition. Here, a particle with an updated position $x=x(t+\Delta t)$ such that $x>0$ (i.e., outside $\Omega$), is reflected back to the domain, to $-x$.
If the b.c. is prescribed at an arbitrary position $x_0$, the reflected position is $x_0-(x-x_0)=2x_0-x$.
This process is repeated for each particle, regardless of whether or not reaction occurs.
For $k=0$, this assures that the total mass in the domain is preserved.
For $k>0$, a fraction of the flux at the boundary is consumed by reaction.
In the proposed RW scheme this is implemented by either a complete annihilation of a fraction of the reflected particles, or by a fractional change in the mass of the particles.
Focusing on the first option at present, we need to assign a certain probability $p$ for a reflected particle to be annihilated.
Previous works \cite{agmon1984} have derived this annihilation probability as
\begin{equation} \label{eq:p_rxn_1}
p=p_1=k\sqrt{\frac{\pi \Delta t}{D}}
\end{equation}
\noindent with the apparent requirement that $\Delta t$ must be small enough such that $p\le1$.
It is relatively straightforward to implement this annihilation probability in the computational code (see Algorithm \ref{alg:annihilate_reflected}).
We note that in an RW code with mass-changing particles, the reflected particle instead reduces its mass by a relative fraction $p$.
\begin{algorithm}
\caption{Treatment of a reflected particle}
\begin{algorithmic} \label{alg:annihilate_reflected}
\STATE Generate a random number of uniform distribution $\xi\sim U_{[0,1]}$.
\IF {$\xi<p$}
\STATE The particle is annihilated.
\ENDIF
\end{algorithmic}
\end{algorithm}
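The reflect-or-annihilate treatment of Algorithm~\ref{alg:annihilate_reflected}, together with the mass-changing variant mentioned above, can be sketched as follows (illustrative helper names, not part of the actual code):

```python
import random

def treat_reflected_particle(p, rng=random):
    """Algorithm 2: a reflected particle is annihilated with probability p.
    Returns True if the particle survives this time step."""
    return rng.random() >= p

def reduce_mass(m_p, p):
    """Mass-changing variant: the reflected particle keeps a fraction (1 - p)
    of its mass instead of being removed outright."""
    return m_p * (1.0 - p)
```

The mass-changing variant avoids the statistical noise of discrete removal at the cost of tracking a per-particle mass.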
Note that in Eq.~\eqref{eq:p_rxn_1}, in the limit $k\to 0$, all particles are preserved, as expected.
Furthermore, the fraction of annihilated particles is linearly dependent on the reaction rate and the square root of the time step.
The linear dependence on the reaction rate is simple to grasp, since the boundary condition \eqref{eq:rxnBC1d} states that the reaction is linear with $k$.
However the dependence on the square root of $\Delta t$ is not a straightforward result, and should be explained.
One way to think of this dependence is to look at the case of a single particle that is initially located at some arbitrary position $x=x_0<0$ at $t=0$.
Now, assume this particle performs a random walk for some length of time $T$ and that the time steps taken in this walk have $\Delta t\ll T$.
Then one can prove that the expected number of hits at a reflective wall located at $x=0$ up to $t=T$ is given by
\begin{equation}
n=\frac{2}{\pi}\sqrt{\frac{T}{\Delta t}}
\left[ \exp(-\chi^2)-\sqrt{\pi}\,\chi\, \mathrm{erfc}(\chi) \right]
\end{equation}
\noindent where $\chi^2=x_0^2/4DT $ is the scaled distance of the particle from the wall. For $\chi \to 0^+$, the term in the square brackets converges to unity. Hence, for large $T$, $n$ scales like $1/\sqrt{\Delta t}$ so clearly $p$ must scale like $\sqrt{\Delta t}$ to compensate for this.
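The expected-hit count above is straightforward to evaluate numerically; a sketch in Python using the standard-library \verb+math.erfc+ (illustrative function name):

```python
import math

def expected_hits(T, dt, x0, D):
    """Expected number of wall hits up to time T for a particle starting at x0 < 0:
    n = (2/pi) * sqrt(T/dt) * [exp(-chi^2) - sqrt(pi)*chi*erfc(chi)],
    with chi^2 = x0^2 / (4*D*T)."""
    chi = abs(x0) / math.sqrt(4.0 * D * T)
    bracket = math.exp(-chi ** 2) - math.sqrt(math.pi) * chi * math.erfc(chi)
    return (2.0 / math.pi) * math.sqrt(T / dt) * bracket
```

The $1/\sqrt{\Delta t}$ scaling of $n$ is immediately visible: halving $\Delta t$ a hundredfold increases the hit count tenfold.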
In the following we prove that \eqref{eq:p_rxn_1} is correct at first order, i.e. it is $\mathcal{O}(\Delta t)$, and also provide a second order correction for \eqref{eq:p_rxn_1}.
\section{Convergence of the RW scheme to the reactive BC} \label{sec:convergence_proof}
Our goal here is to prove that the scheme based on Eq.~\eqref{eq:p_rxn_1} converges to the boundary condition \eqref{eq:rxnBC1d} at the limit $\Delta t\to 0$.
First we note that at a given timestep $\Delta t$, the presence of the boundary affects the particle cloud only at the proximity of the wall.
The characteristic distance of a random walk step is $a'=\sqrt{2D\Delta t}$, so that the effect of the wall extends only up to a distance $\mathcal{O}(a')$ from the boundary. Within this short distance from the boundary we assume that the particle distribution follows a stationary smooth concentration given by
\begin{equation} \label{eq:conc_distrib}
C(x)=C_0+C'x+ \frac{1}{2!} C'' x^2 +\cdots
\end{equation}
\noindent where $C_0=C(x=0)$, $C'=dC/dx|_{x=0}$, $\dots$ are constants during the specific time step. We further assume that the time step of the scheme $\Delta t$ is so small that $a' \ll C_0/C'$.
The boundary condition states that the diffusive flux into the boundary equals the rate of reaction at the boundary. In a time step $\Delta t$ the annihilation rate [ML$^{d-1}$] can be defined as
\begin{equation} \label{eq:annihilation_rate}
r \Delta t = -D C' \Delta t.
\end{equation}
We move now to the particle framework. We first consider a single particle located at $x$ at the beginning of the time step. The probability density function (p.d.f.) of the particle position after an \textit{attempted} step of duration $\Delta t$ is given by the Gaussian
\begin{equation}
f(x'|x)=\frac{1}{\sqrt{4\pi D \Delta t}}\exp \left[ - \frac{(x-x')^2}{4D\Delta t} \right].
\end{equation}
If $x'>0$ the particle can either be reflected or annihilated.
The probability of annihilation is thus calculated based on the probability to cross the boundary during the present time step,
\begin{equation}
r_1(x)=p \int_{0}^{\infty}f(x'|x)dx'.
\end{equation}
Next we take into account the cloud of particles distributed in $x<0$ with a p.d.f. following \eqref{eq:conc_distrib}; hence, we must account for all possible values of $x$. The number of particles (or, more appropriately, the mass) in an infinitesimal volume $dx$ is $C(x)dx$, and each is removed with probability $r_1(x)$, such that the total number of particles removed in a single time step is
\begin{equation} \label{eq:No_of_particles_removed}
r\Delta t = \int_{-\infty}^{0}C(x) r_1(x) dx =
p \int_{-\infty}^{0} C(x) \int_{0}^{\infty}f(x'|x)dx' dx ,
\end{equation}
\noindent where we assumed (without loss of generality), that the mass of a single particle is unity.
Performing the integration over $x'$ we get
\begin{equation}
I=\int_{0}^{\infty}f(x'|x)dx'=\frac{1}{2}+\frac{1}{2} \mathrm{erf} \left( \frac{x}{\sqrt{4 D\Delta t}} \right)=
\frac{1}{2} \mathrm{erfc} \left( - \frac{x}{\sqrt{4 D\Delta t}} \right)
\end{equation}
such that \eqref{eq:No_of_particles_removed} becomes
\begin{multline} \label{eq:No_of_particles_removed_2}
r\Delta t =
\frac{p}{2} \int_{-\infty}^{0} \left( C_0+C'x+C'' \frac{x^2}{2!}+\dots \right) \mathrm{erfc} \left( \frac{-x}{\sqrt{4 D\Delta t}} \right)dx = \\
a \frac{p}{2\sqrt{\pi}}
\left(
C_0 - \frac{a\sqrt{\pi}}{4} C' +
\frac{a^2}{6}C'' - \cdots
\right)
\end{multline}
\noindent where $a^2=4D\Delta t = 2a'^2$.
\iffalse
\fbox{\begin{minipage}{30em} \footnotesize
Justification (not to be included in final text)-- the integral in \eqref{eq:No_of_particles_removed_2} equals (defining $y=-x/a$):
\begin{multline*}
-a\int_{\infty}^{0} \left( C_0-C'ya+C'' \frac{a^2 y^2 }{2!}-\dots \right) \rm{erfc} ( y ) dy = \\
a\int_0^{\infty} \left( C_0-C'ya+C'' \frac{a^2 y^2 }{2!}-\dots \right) \rm{erfc} ( y ) dy = \\
a C_0 \int_0^{\infty} \left( 1- \frac{C'}{C_0} a y + \frac{C''}{C_0} \frac{a^2 y^2 }{2!}-\dots \right) \rm{erfc} ( y ) dy = \\
a C_0 \left( \int_0^{\infty} \rm{erfc} ( y ) dy -
\frac{C'}{C_0} a \int_0^{\infty} y~ \rm{erfc} ( y ) dy +
\frac{C''}{C_0} \frac{a^2}{2} \int_0^{\infty} y^2 \rm{erfc} ( y ) dy
-\dots \right) = \\
a C_0 \left( \frac{1}{\sqrt{\pi }} -\frac{C'}{C_0} a \frac{1}{4} +\frac{C''}{C_0} \frac{a^2}{2} \frac{1}{3 \sqrt{\pi }} -\dots \right) = \\
a \left( \frac{1}{\sqrt{\pi}} C_0 -\frac{a}{4} C' +\frac{a^2}{6\sqrt{\pi}}C'' - \dots \right)
\end{multline*}
since
$ \int_0^{\infty } \rm{erfc}(x) \, dx =\frac{1}{\sqrt{\pi }} $;
$ \int_0^{\infty } x \rm{erfc}(x) \, dx =\frac{1}{4}$; $\int_0^{\infty } x^2 \rm{erfc}(x) \, dx = \frac{1}{3 \sqrt{\pi }}$
Then, from \eqref{eq:No_of_particles_removed_2} and using \eqref{eq:annihilation_rate} this yields
\begin{equation*}
-D \frac{C'}{C_0}\Delta t= \frac{p}{2} \frac{\sqrt{4D\Delta t}}{\sqrt{\pi }}
\left(
1
-\frac{C'}{C_0} \frac{a \sqrt{\pi }}{4}
+\frac{C''}{C_0} \frac{a^2}{6 }
-\dots
\right)
\end{equation*}
which becomes
\begin{equation*}
k\sqrt{\Delta t}= p \frac{\sqrt{D}}{\sqrt{\pi }}
\left(
1
+\frac{k}{D} \frac{a \sqrt{\pi }}{4}
+\frac{C''}{C_0} \frac{a^2}{6 }
-\dots
\right)
\end{equation*}
i.e.
\begin{equation*}
p=\frac{k \frac{\sqrt{\pi\Delta t}}{\sqrt{D}}}
{ 1 +\frac{k}{D} \frac{\sqrt{D\Delta t} \sqrt{\pi }}{2}
+\frac{C''}{C_0} \frac{{4D\Delta t}}{6 }
-\dots }
\end{equation*}
\end{minipage}}
\fi
\noindent Using \eqref{eq:annihilation_rate} this yields
\begin{equation}\label{eq:p2}
p_2=\frac{p_1}{1+p_1/2+\epsilon}
\end{equation}
\noindent where $p_1$ is the first order annihilation probability defined in \eqref{eq:p_rxn_1}, i.e. $p_1=k\sqrt{\frac{\pi \Delta t}{D}}$ and
\begin{equation}
\epsilon=\frac{a^2} {6} \frac{C''}{C_0} + \mathcal{O}(a^3)
\end{equation}
\noindent is the sum of the 3rd order and higher order terms.
Note that $\epsilon=\mathcal{O}(\Delta t)$, i.e. it is of order $\sqrt{\Delta t}$ relative to $p_1$.
The term $\epsilon$ can be approximated only when $C''$ is known.
In 1D problems, the calculation of $C''$ from particle locations can be done by fitting a 3rd order polynomial to the experimental CDF of the particles close to $x=0$, e.g.: $-3a'<x<0$.
While this can be done in principle, it is cumbersome and may introduce numerical noise into the computation.
The result~\eqref{eq:p2} is second order accurate.
Dropping $\epsilon$ (justified when $\epsilon \ll \frac{p_1}{2}$) in~\eqref{eq:p2} we get
\begin{equation}\label{eq:p2_no_eps}
p_2=\frac{p_1}{1+p_1/2},
\end{equation}
i.e., the probability of annihilation $p_2$ is smaller than $p_1$.
For example, when $p_1=0.2$, $p_2 \approx 0.182$ and the reaction rate decreases by $\sim$ 10\%, a rather significant change.
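The two probability estimates are trivial to compute; a quick numerical sketch (illustrative helper names) reproducing the example above:

```python
import math

def p_first_order(k, D, dt):
    """First-order annihilation probability, p1 = k * sqrt(pi * dt / D)."""
    return k * math.sqrt(math.pi * dt / D)

def p_second_order(k, D, dt):
    """Second-order correction with the eps term dropped: p2 = p1 / (1 + p1/2)."""
    p1 = p_first_order(k, D, dt)
    return p1 / (1.0 + 0.5 * p1)
```

For $p_1=0.2$ this gives $p_2 = 0.2/1.1 \approx 0.182$, the roughly 10\% reduction quoted above.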
Clearly, when $p_1 \ll 1$, the ratio $p_1/p_2 \rightarrow 1$ and replacing $p_1$ by $p_2$ has a negligible effect.
However, the computational cost involved in having $\Delta t$ small enough such that $p_1 \ll 1$ may be prohibitive.
Then, replacing $p_1$ by $p_2$ has the potential to increase the accuracy of the simulation without any additional computational cost.
To illustrate this, in the following section we will compare the use of $p_1$ \eqref{eq:p_rxn_1} and $p_2$ \eqref{eq:p2_no_eps} in RW codes with analytical and numerical solutions.
\section{Example applications} \label{sec:exampleApps}
In this section we will explore two different applications where the solution is obtained by RW. We focus on the treatment of the reactive boundary condition and the accuracy of our proposed methodology.
First, we will consider a simple 1D transient, pure diffusion case where we will compare our simulation results to a known analytical solution.
Then, results of 2D advection-diffusion simulations will be shown, together with a comparison with the results of Eulerian simulations for equivalent setups.
\subsection*{One-dimensional transient pure diffusion}
We start by considering the rather simple case of a one-dimensional finite domain of length $2l$.
The governing equation for this problem is
\begin{equation}\label{eq:PureDiff1d}
\frac{\partial C}{\partial t}=D\frac{\partial^2 C}{\partial x^2} , \quad -l \leq x \leq l ,
\end{equation}
\noindent with a reactive (Robin) boundary condition (i.e. Eq.~\eqref{eq:rxnBC1d}) at $x = \pm l$ and an initial condition of constant concentration i.e. $C(x,t=t_0)=C_0=$ const.
The problem can be redefined in non-dimensional terms as
\begin{equation}\label{eq:PureDiff1d-NonDim}
\begin{cases}
\dfrac{\partial C'}{\partial t'}=\dfrac{\partial^2 C'}{\partial x'^2} & -1 \leq x' \leq 1 \\
C'=1 & t=0\\
\dfrac{\partial C'}{\partial x'}= \mp \Da\, C' & x'=\pm 1
\end{cases}
\end{equation}
\noindent where $x'=x/l$, $t' = t D / l^2$, $C'=C/C_0$ and the Damk\"ohler number $\mathrm{Da}= k l / D$ represents the ratio between the characteristic diffusion time and the characteristic reaction time.
Note that in this section primes denote non-dimensional variables, not to be confused with derivatives in Section \ref{sec:convergence_proof}.
The problem defined by~\eqref{eq:PureDiff1d-NonDim} has an analytical solution \citep[][section 3.11]{CJbook}, given by (see Fig.~\ref{fig:1D-CTimeSpace})
\begin{equation}\label{eq:CJsolution}
C' =\sum_{n=1}^\infty \dfrac{2 \mathrm{Da} \cos\left(\alpha_n x'\right) \sec(\alpha_n)}{\mathrm{Da}(\mathrm{Da}+1)+\alpha_n^2} e^{-\alpha_n^2 t'}
\end{equation}
\noindent where $\alpha_n$ are the positive roots of $\alpha \tan(\alpha) = \mathrm{Da}$.
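The roots $\alpha_n$ and the series \eqref{eq:CJsolution} are easy to evaluate numerically, e.g. by bisection on each branch of $\alpha\tan(\alpha)$; a sketch (illustrative names, series truncated at a finite number of terms):

```python
import math

def alpha_roots(Da, n_roots):
    """Positive roots of alpha*tan(alpha) = Da; the n-th root lies in
    (n*pi, n*pi + pi/2), so plain bisection works on each branch."""
    roots = []
    for n in range(n_roots):
        lo, hi = n * math.pi + 1e-9, n * math.pi + math.pi / 2 - 1e-9
        f = lambda a: a * math.tan(a) - Da
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

def C_analytical(x, t, Da, n_terms=50):
    """Truncated series solution C'(x', t') of the 1D problem
    (Carslaw & Jaeger, Sec. 3.11)."""
    total = 0.0
    for a in alpha_roots(Da, n_terms):
        total += (2.0 * Da * math.cos(a * x) / math.cos(a)
                  / (Da * (Da + 1.0) + a ** 2) * math.exp(-a ** 2 * t))
    return total
```

The exponential factor makes the series converge very quickly for $t'$ of order one; near $t'=0$ many more terms are needed.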
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{1D-CTimeSpace.png}
\caption{Time evolution of the concentration in the 1D pure diffusion problem, for Da=1. Due to symmetry, only half of the domain is shown.
The analytical solution (continuous line) is compared with the random walk simulations employing the first and the second order approximations for $p$ (squares and diamonds, respectively). Here, $\Delta t'=5\times10^{-3}$, $N_{p,0}=5\times10^6$, $p_1=0.1253$, $p_2=0.1179$.}
\label{fig:1D-CTimeSpace}
\end{center}
\end{figure}
In non-dimensional terms, the expression for the probability of reaction $p_1$ becomes $p_1=\mathrm{Da}\sqrt{\pi \Delta t'}$, where $\Delta t' = D \Delta t / l^2$.
Random walk simulations with $p=p_1$ and $p=p_2$ were performed, distributing $N_{p,0}$ = 5$\times 10^6$ particles in the domain at the beginning of the simulation, and implementing the 1D random walk scheme.
For each time step the concentration in the domain was evaluated from the particles position distribution by discretizing the domain into a number of bins $N_{\rm{bins}}=50$, and calculating the normalized concentration $\hat{C_{i}}$ in each bin as
\begin{equation*}
\hat{C}_{i}=\dfrac{C_i}{C_0}=\dfrac{m_p(t) N_{p,i} N_{\mathrm{bins}}}{N_{p,0}m_{p,0}} \; .
\end{equation*}
\noindent where $m_p(t)$ is the mass of the particle and $m_{p,0}$ is arbitrarily set to 1.
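The binning step can be sketched as follows (illustrative names; equal-width bins over the domain, and equal-mass particles assumed):

```python
def binned_concentration(positions, masses, x_min, x_max, n_bins, N_p0, m_p0=1.0):
    """Normalized concentration per bin from particle positions:
    C_hat_i = (mass in bin i) * N_bins / (N_p0 * m_p0)."""
    counts = [0.0] * n_bins
    width = (x_max - x_min) / n_bins
    for x, m in zip(positions, masses):
        # clamp to the valid bin range so boundary particles are counted
        i = max(0, min(int((x - x_min) / width), n_bins - 1))
        counts[i] += m
    return [c * n_bins / (N_p0 * m_p0) for c in counts]
```

With particles uniformly distributed and no reaction, every bin returns $\hat{C}_i \approx 1$, as expected from the initial condition.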
The decrease in particle numbers over time leads to a decrease in the statistical significance of the result.
This problem was tackled by a mass-conserving splitting of the particles (see~ \ref{App:A} for details).
The results of normalized concentration values are illustrated in Fig.~\ref{fig:1D-CTimeSpace} for a specific $\Delta t' = 5 \times 10^{-3}$ and $\Da=1$, yielding $p_1=0.1253$ and $p_2=0.1179$ (a $\sim$~6\% difference).
As it can be seen qualitatively in Fig.~\ref{fig:1D-CTimeSpace}, the first order approximation for $p$ results in a noticeable underestimation of $\hat{C}_i$ everywhere in the domain, while the second order approximation leads to results much closer to the analytical solution.
The quantitative comparison between the analytical predictions and the random walk solution is conducted via two error metrics: $E_{\rm{norm}}$, the root mean square (RMS) of the normalized error, and $E_{\rm{abs}}$, the RMS of the absolute error (see Appendix~B for definitions).
In Fig.~\ref{fig:1D-ErrorsSameDA}, we show the evolution with time of the errors $E_{\rm{norm}}$ and $E_{\rm{abs}}$ for the same Damk\"ohler number $\Da=1$, with two different time-steps $\Delta t'=\{5\times10^{-3},5\times10^{-4}\}$.
For each of the two cases three simulations were performed using reaction probabilities $p_1$, $p_2$, and $p_{\rm{min}}$.
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{1D-ErrorsSameDA.png}
\caption{Normalized error $E_{\rm{norm}}$ (top) and absolute error $E_{\rm{abs}}$ (bottom) over time: values for simulations with $\Delta t'=5\times10^{-3}$ and $\Delta t'=5\times10^{-4}$ are shown (purple and green datasets, respectively; color in the online version of this paper).
Different approximations for the boundary reaction probability $p$ are employed: $p=p_1$ (squares), $p=p_2$ (diamonds), $p=p_{\rm{min}}$ (circles).
In all these simulations, Da=1 and $N_{p,0}=5\times10^6$.}
\label{fig:1D-ErrorsSameDA}
\end{center}
\end{figure}
The probability $p_{\mathrm{min}}$ minimizing this error was found numerically via a parametric sweeping operation: for the same case setup, a range of different probabilities was tested in the algorithm, and for each one, $E_{\rm{norm}}$ was calculated.
The results can be seen in Fig.~\ref{fig:PminSweep} where $E_{\rm{norm}}$ is shown as a function of $p$ for different times.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{PminSweep.png}
\caption{Values of normalized error of the 1D simulation as a function of the reaction probability. Here, $\Da = 1 $ and $\Delta t'=5\times10^{-3}$.
The dashed vertical lines indicate the first and second order approximations for $p$, respectively $p_1=0.1253$ and $p_2=0.1179$, while the dotted line points at the approximate global minimum of the normalized error, $p_{\rm{min}}=0.1173$. The differently colored lines
correspond to times increasing in a geometric sequence, from $t'=0.1$ to $t'_{end}=3.05$.}
\label{fig:PminSweep}
\end{center}
\end{figure}
From this figure two points stand out. First, the steep descent of the error curves close to the global minimum shows how even minor differences in the probability used can lead to significant discrepancies from the expected concentration values. Second, simply implementing the second-order correction (with respect to $p_1$) leads to an improvement in the error of almost one order of magnitude (for this case).
For the sake of clarity, we remind the reader that the difference between $p_{\rm{min}}$ and $p_2$ accounts for the higher-order terms we dropped from Eq.~\ref{eq:p2}, summed in $\epsilon$, meaning that $p_{\rm{min}}$ cannot be calculated analytically \textit{a priori}.
The results in Fig.~\ref{fig:1D-ErrorsSameDA} make this argument even more clearly: a much greater increase in the accuracy of the simulation is gained by using the more precise second-order approximation for $p$ than by the more immediate (but much costlier) solution of sharply decreasing the integration step $\Delta t'$.
The appreciable errors in the lowest part of Fig.~\ref{fig:PminSweep} are attributable to the sampling error incurred when binning a finite set of points.
This error is expected to be of order $\sqrt{N_{\rm{bins}}/N_{p,0}} \approx \sqrt{10^{-5}} \approx 3 \times 10^{-3}$, closely fitting the observed values.
Finally, simulations exploring the system behavior over a range of different Damk\"ohler numbers (see Tab.~\ref{tab:1D-DiffDa}) were performed.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
Da & $p_1$ & $p_2$\\
[1ex]
\hline\hline
0.10 &0.007927 &0.007895\\
0.316 &0.02506 &0.02475\\
1 &0.07926 &0.07624\\
3.16 &0.2506 &0.2227\\
10 &0.7926 &0.5676\\
\hline
\end{tabular}
\caption{Parameters for the simulation campaign shown in Fig.~\ref{fig:ErrorsDiffDa}. In all cases, $N_{p,0}=5\times10^6$ and $\Delta t'=2.5\times10^{-3}$.}\label{tab:1D-DiffDa}
\end{table}
Fig.~\ref{fig:ErrorsDiffDa} shows the evolution of the relative error over time for a few Da values. In this figure we compare the use of $p=p_1$ and $p=p_2$ in the random walk algorithm for a fixed $\Delta t' =2.5 \times 10^{-3}$ for all cases. The parameters of the cases (Da, $p_1$ and $p_2$) are given in Table \ref{tab:1D-DiffDa}.
\begin{figure}
\begin{center}
\includegraphics[width=1.2\textwidth]{ErrorsDiffDa.png}
\caption{Normalized error $E_{\rm{norm}}$ over time, for $\Delta t' = 5\times 10^{-3}$, for five different Damk\"ohler numbers, equal to $\Da=0.1, 0.316, 1.0, 3.16,10$. On the left results for $p=p_1$ are shown (in color), while the figure on the right shows results for $p=p_2$ (in color) together with the same results for $p=p_1$ (in grey) for ease of comparison. }
\label{fig:ErrorsDiffDa}
\end{center}
\end{figure}
Each case has a different end time $t'_{end}$, defined as the time when the average concentration becomes 10\% of the initial one, i.e., $\frac{1}{2l}\int_{-l}^{+l}(C / C_0)\,dx~=~0.1$.
As can be seen in this figure, the normalized errors for the $p_2$ case are smaller than the normalized errors for $p_1$.
The error is reduced by about a factor of 2 in the early times in the smallest Da used (Da=0.1) and by more than an order of magnitude in the highest Da used (Da=10).
We also note the gradual increase with time in all cases, which can be attributed to the use of a higher reaction probability $p$ with respect to the correct one, leading to a larger removal of particles each timestep and an increase in the mismatch between simulation and analytical solution.
Note that $p_1$ scales linearly with Da, so higher Da values correspond to higher $p_1$ and a more significant difference between $p_1$ and $p_2$.
For completeness and reference, the computational cost of the simulation showing the largest error in Fig.~\ref{fig:1D-ErrorsSameDA} (corresponding to $\Da=1$ and $\Delta t'=5\times10^{-3}$) was about one minute, while the corresponding simulation with a smaller time step ($\Delta t'=5\times10^{-4}$) had a linearly increased cost of $\sim$10 minutes.
All of the simulation runs were performed using \verb+Matlab+ as a single-thread operation on a 2.6 GHz i7-6700HQ CPU.
\subsection*{Poiseuille flow between reactive parallel plates}
In this part of this work, we consider a problem of steady-state advection-diffusion-reaction in a 2D domain.
The geometry considered is an infinite strip, $\Omega=\{ -\infty <x<0, 0\leq y \leq H\}$ with width $H$, whose axis is parallel to the $x$ direction (see Fig.~\ref{fig:Sketch2D}).
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{Sketch2D-eps-converted-to.pdf}
\caption{Setup of the 2D problem with Poiseuille flow between two plates.
The plates are passive ($k=0$) at $x<0$ and reactive ($k>0$) at $x>0$.
The reactive walls are denoted by thicker lines.}
\label{fig:Sketch2D}
\end{center}
\end{figure}
Flow in the strip is assumed to be laminar and incompressible with a velocity profile $u_x(y)$ given by Poiseuille's law,
\begin{equation}\label{eq:Poiseuille}
u_x=\dfrac{H^2}{2\mu} \nabla p\left(\dfrac{y}{H}\left(1-\dfrac{y}{H}\right)\right) = 6\bar{u} \left( \dfrac{y}{H}\right) \left( 1-\dfrac{y}{H}\right) \quad 0\leq y\leq H
\; ,
\end{equation}
\noindent where $\mu$ is the fluid viscosity, $\nabla p$ the applied pressure gradient, and $\bar{u}$ is the average velocity; the fluid velocity has no transverse component, i.e. $u_y=0$.
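As a consistency check on \eqref{eq:Poiseuille}, the form $u_x=6\bar{u}(y/H)(1-y/H)$ indeed averages to $\bar{u}$ across the gap; a sketch (illustrative name):

```python
def u_poiseuille(y, H, u_bar):
    """Poiseuille profile between parallel plates:
    u_x(y) = 6 * u_bar * (y/H) * (1 - y/H), for 0 <= y <= H."""
    s = y / H
    return 6.0 * u_bar * s * (1.0 - s)
```

The profile vanishes at both walls and peaks at $1.5\bar{u}$ on the centerline; a midpoint-rule average over the gap recovers $\bar{u}$.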
The governing equation for the concentration $C$ is given by the ADE \eqref{eq:ADE}.
The walls of the domain ($y=\{0, H\}$) are assumed to be reactive in $x>0$ and passive in $x\leq0$.
This boundary condition is defined as $\mp D \frac{\partial C}{\partial y}=kC$, at $x>0, y=\{0, H\}$, where $k=\rm{const}>0$.
The concentration far upstream (at $x\rightarrow -\infty$) is assumed to be constant, $C=C_0$.
We employed the random walk algorithm to solve the problem, and focused our attention on the steady-state solution of this advection-diffusion problem.
To represent the boundary condition of constant concentration at $x\rightarrow -\infty$, a fixed number of new particles $N_{inj}$ was introduced every time step at the position $x_{inj}<0$.
The location $x_{inj}<0$ was chosen such that it is far enough upstream to represent the boundary condition, but not too far to avoid excessive computational burden; at the start of the RW simulation, there are no particles in the domain.
The code was run until convergence to steady state.
In non-dimensional terms, the steady state concentration $C'=\frac{C}{C_0}$ is a function of $x'=\frac{x}{H}$, $y'=\frac{y}{H}$, and of the P\'eclet and Damk\"ohler numbers, i.e. $C'=C'(x',y',\mathrm{Pe},\mathrm{Da})$, where $\mathrm{Pe}=\bar{u} H / D$ and $\mathrm{Da}=k H / D$.
We thus explored the problem for a range of Pe and Da.
Two simulation campaigns were performed, each composed of nine different cases exploring the combination of Pe=1, 10, 100 and Da=1, 10, 100; as it was done in the 1D case, the two campaigns differed in the use of the approximation for reaction probability: $p=p_1$ or $p=p_2$, respectively (details in Tab.~\ref{tab:2D-Setup}).
\begin{table}
\centering
\begin{tabular}{|c|c||c|c|c|}
\hline
Pe& Da & $N_{inj}$& $p_1$ & $p_2$\\
[1ex]
\hline\hline
1 &1 &5 &0.01 & 0.00995 \\
1 &10 &5 &0.1 & 0.0952 \\
1 &100&5 &1 & 0.667 \\
10 &1 &17 &0.01 & 0.00995 \\
10 &10 &17 &0.1 & 0.0952 \\
10 &100&17 &1 & 0.667 \\
100 &1 &17 &0.01 & 0.00995 \\
100 &10 &17 &0.1 & 0.0952 \\
100 &100&17 &1 & 0.667 \\
\hline
\end{tabular}
\caption{Parameters for the 2D simulation campaign, showing $N_{inj}$, $p_1$, and $p_2$ for all cases.
In every case, $\Delta t'=3.18\times10^{-5}$, where $\Delta t'=\Delta t \dfrac{D}{H^2}.$}
\label{tab:2D-Setup}
\end{table}
An analytical solution for the problem at hand is not available.
Thus, in order to test the performance of the random walk algorithms, we performed additional Eulerian simulations on the same cases explored in the Lagrangian code, using the finite element commercial CFD suite \verb+Comsol 5.2a+.
The velocity field and the concentration in the domain for each of the nine cases were obtained by first solving the laminar flow problem, and then the steady-state form of the advection-diffusion equation (i.e. Eq.~\eqref{eq:ADE} with the time derivative set to zero), with the partially reacting boundary conditions on the walls.
For illustration, we show in Fig.~\ref{fig:Contours} the results of three of the nine cases (Pe=10, Da=1, 10, 100): note (especially in the bottom figure) that the normalized concentration is not uniform, and is smaller than unity close to the boundaries.
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{Contours.png}
\caption{Solution of non-dimensional particle concentration $C/C_0$ in the 2D domain from Comsol, for Pe=10 and Da=1, 10, 100 at steady-state (respectively, cases A, B, and C).
The reactive walls are present at $x>0$; at $x<0$ it is possible to notice particle backdiffusion, especially at higher Damk\"ohler numbers (bottom).
It has to be noted that in this figure, the domain aspect ratio has been skewed for the sake of visualization clarity.}
\label{fig:Contours}
\end{center}
\end{figure}
This is due to back-diffusion from the reactive area of the domain, $x>0$.
In order to compare the RW and \verb+Comsol+ results, we calculated the total mass in the semi-infinite $x>0$ domain,
\begin{equation}
M'=\int_0^1 \int_0^{\infty} C' \,dx'\, dy' \; .
\end{equation}
Figure~\ref{fig:2D-MassOverTime} shows the evolution in time of $M' $ for a specific case (Pe=100, Da=1).
\begin{figure}
\begin{center}
\includegraphics[width=.9\textwidth]{2D-MassOverTime.png}
\caption{Time evolution of total mass $M'$ at $x>0$ for the random walk code, comparing first order (blue) with second order approximation (red) for reaction probability $p$. Steady state result of the Comsol simulation is shown for comparison, represented as a straight line.
Data shown for Pe=100, Da=1 case.}
\label{fig:2D-MassOverTime}
\end{center}
\end{figure}
In this figure, it is qualitatively clear that there is an improvement of the random walk algorithm accuracy when using the second order approximation; in the other cases it is hard to visually discern the lines corresponding to $p_1$ and $p_2$, so for the sake of brevity we do not include similar figures for these cases here.
In order to provide a more quantitative measure for the mismatch, we calculate the relative error by comparing RW results and the \verb+Comsol+ simulations (see \ref{App:B}): this data is shown in Figure~\ref{fig:2D_error_Pe_DA}.
To provide an accurate estimation of the uncertainty of the \verb+Comsol+ results, we performed a grid-convergence campaign for each of the nine explored cases, then calculated the \textit{Grid Convergence Index} using the well-known Richardson extrapolation procedure, later improved by Roache~\cite{roache1998fundamentals,roache1998verification} (see \ref{App:C}).
The first point to note is that, for all cases, the error never exceeds a few percent, showing the accuracy of the random walk code (or at the very least, the closeness of the results to the estimate from the Eulerian data).
Clearly, a marked improvement in the code accuracy was obtained by simply employing the second order approximation for the reaction probability.
The figure also shows the standard deviation of the presented data, showing the uncertainty due to the calculation of steady-state average particle concentration (due to small oscillations in particles number, see Fig. \ref{fig:2D-MassOverTime}) and the remaining uncertainty in the \verb+Comsol+ results (see~\ref{App:C}).
As it is apparent, the error bars show how the predictions from the two different probability estimation strategies can be reliably set apart, and the difference be ascribed to a fundamental improvement in the code accuracy.
While there is no discernible trend linking the calculated error with the problem setup (e.g. the P\'eclet and Damk\"ohler numbers), in all cases the more accurate campaign using $p=p_2$ shows errors ranging from $\sim 10^{-4}$ to $\sim 10^{-3}$.
Lastly, the runtime of the slowest PT simulation in this test case (Da=1) was equal to $\sim$3 hours per simulation for Pe=10 and Pe=100, and $\sim$30 hours for Pe=1.
In comparison, the runtime of the Comsol steady state model ranged between a few minutes to $\sim$1 hour per simulation (for the same setups, with higher cost for higher P\'eclet numbers).
The Comsol simulations were run in parallel on a hexa-core Intel Core i7-3960x workstation, while the PT simulations were run on an Intel Xeon E5-2630 on a single thread.
The difference is attributed primarily to the difference in parallelization, and also to the slower convergence of the transient RW code to an apparent steady state.
\section{Conclusions}
In this work we treated the implementation of the partially reacting boundary condition in random walk numerical algorithms.
We start from a simple relationship between the reaction probability $p$ and the algorithm time step $\Delta t$, employed in previous works, in which $p \propto \sqrt{\Delta t}$; this relationship is correct only at first order.
First, we give a theoretical analysis resulting in an estimation for $p$ correct to the second order, and an estimation of its third-order error.
Then, in order to show the increase in prediction accuracy which can be gained from this correction, we set up two different test cases and compare the effectiveness of the classic methodology with the one proposed in this work.
In the first case we study a simple 1D pure diffusion problem, for which an analytical solution is available.
We use this analytical solution to calculate the error in the random walk predictions under a wide range of reaction rates.
We observe that while the classical first order approximation described the system in a qualitatively satisfactory manner, the use of the proposed second order approximation for the reaction probability reduced the RW simulation error by about an order of magnitude for the cases considered.
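The setup of the 1D test case can be illustrated with a minimal sketch. This is a pure-diffusion random walk with one partially reacting and one inert wall; the functional form of the reaction probability (the first-order $p_1$ rule or the second-order $p_2$ correction) is deliberately left as the caller's choice, and all parameter values are illustrative:

```python
import math
import random

def survive_fraction(p_react, n=2000, steps=50, D=1.0, dt=1e-3, L=1.0, seed=1):
    """Minimal 1D pure-diffusion random walk on [0, L].

    The wall at x = 0 is partially reacting: on each hit the particle is
    removed with probability p_react, otherwise it is reflected.  The wall
    at x = L is purely reflecting.  Returns the fraction of particles that
    survive `steps` time steps.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)   # standard deviation of a Gaussian step
    survivors = 0
    for _ in range(n):
        x = rng.uniform(0.0, L)
        dead = False
        for _ in range(steps):
            x += rng.gauss(0.0, sigma)
            if x < 0.0:               # hit the reactive wall
                if rng.random() < p_react:
                    dead = True
                    break
                x = -x                # survived the hit: reflect back
            elif x > L:
                x = 2.0 * L - x       # inert wall: reflect
        if not dead:
            survivors += 1
    return survivors / n
```

Comparing the output of such a walker, with $p_\mathrm{react}$ set to the first- or second-order expression, against the analytical solution is the essence of the 1D test described above.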
Similar results are obtained when studying a more realistic 2D advection-diffusion problem: in this case a wide parametric sweep over P\'eclet and Damk\"ohler numbers was performed.
The results were compared with a finite element steady state simulation.
Again we observe that the use of the second order approximation for $p$ improves the solution accuracy over all the explored parameter space, proving the reliability and effectiveness of the proposed methodology.
The main result of this work is thus to provide a simple way of improving the predictive capabilities of Lagrangian random walk algorithms in the case of reacting boundary conditions.
Such a reactive boundary is a very common physical setup and of great interest in the modelling of a diverse range of applications both in reaction engineering and environmental science.
This kind of improvement should afford practitioners a greater reach when dealing with the trade-off between simulation accuracy and computational cost.
\section*{Acknowledgements}
A.P. and G.B. would like to acknowledge partial funding by the Startup grant of the V.P. of research in Tel Aviv University and partial funding by the Israel Water Authority.
\begin{figure}
\begin{center}
\includegraphics[width=.85\textwidth]{Pe001.png}
\includegraphics[width=.85\textwidth]{Pe010.png}
\includegraphics[width=.85\textwidth]{Pe100.png}
\caption{Relative error between random walk simulations results and grid-converged Eulerian data, shown for the range of P\'eclet and Damk\"ohler numbers. Results from the $p=p_1$ (blue) $p=p_2$ (red) campaigns are compared. The error bars show the uncertainty in the calculation owing to particle number oscillations in the random walk code and the estimated errors in the simulations.}
\label{fig:2D_error_Pe_DA}
\end{center}
\end{figure}
\clearpage
\section*{References}
\bibliographystyle{model3-num-names}
\section{Conclusion}
We presented a novel robot imitation learning framework for performing manipulation tasks in scenes that contain multiple unknown objects. The proposed model takes features from images and point clouds as input, and predicts a pair of target and tool objects, and a desired keypose for placing the tool relative to the target. The proposed system does not require any predefined class-specific priors, and can generalize to new objects within the same category. Extensive experiments using real demonstration videos and a real robot show that the proposed model outperforms alternative models. Supplementary material, including video, can be found at \href{ https://tinyurl.com/2swbt7a3 }{\texttt{\textcolor{blue}{ https://tinyurl.com/2swbt7a3 }}}.
\section*{Acknowledgement}
This work was supported by NSF awards IIS 1734492 and IIS 1846043.
\section{Experiments}
\begin{figure*}[h]
\begin{center}
\hspace{-8mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking1_success_rate.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking1_target.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking1_tool.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking1_keypose.pdf}
\end{subfigure}
\end{center}
\begin{center}
\hspace{-8mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking2_success_rate.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking2_target.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking2_tool.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/stacking2_keypose.pdf}
\end{subfigure}
\end{center}
\begin{center}
\hspace{-8mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/pouring_success_rate.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/pouring_target.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/pouring_tool.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/pouring_keypose.pdf}
\end{subfigure}
\end{center}
\begin{center}
\hspace{-8mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/painting_success_rate.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/painting_target.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/painting_tool.pdf}
\end{subfigure}
\hspace{-5mm}
\begin{subfigure}{}
\includegraphics[width=0.24\textwidth]{images/plots/painting_keypose.pdf}
\end{subfigure}
\end{center}
\caption{Results of experiments on predicting tool-target pairs and keyposes in real novel static scenes, as a function of the number of videos used for training. The results are averaged over five independent trials.}
\label{fig:stat_all}
\end{figure*}
We evaluate the proposed method on four manipulation tasks, listed in Fig. \ref{fig:tasks}, and compare it with seven alternative architectures on the same tasks. In order to challenge the generalization capability of the trained systems in each task, we split a set of objects with various sizes and textures into a training set, used in the demonstration videos, and a test set, used in the robot executions, so that the objects in the test set are never seen in the demonstrations, as shown in Fig.~\ref{fig:obj_set}. For each task, we record $80$ demonstration videos for training. The trained system is tested on $20$ different scenes per task.
\begin{figure}
\begin{center}
\begin{subfigure}{}
\includegraphics[width=0.49\textwidth]{images/object_set.pdf}
\end{subfigure}
\end{center}
\caption{Set of objects used in the experiments. The top row shows the demonstration objects; the bottom row shows the test objects.}
\label{fig:obj_set}
\end{figure}
Comparisons in these experiments should answer two questions: (1) how much improvement the DGCNN features can bring to category-level manipulation, and (2) how significant the proposed architecture is. To answer the first question, the proposed method is compared with ResNet~\cite{he2016deep} and Dense Object Nets~\cite{DBLP:conf/corl/FlorenceMT18}. Besides feeding fixed pre-trained features, we also evaluate end-to-end training. These baselines share the same architecture as the proposed one, except for the input features.
To answer the second question, we compare our architecture with a more straightforward design: the compared methods aggregate shape (from DGCNN) and appearance (from Fast-RCNN) descriptors of all the objects present in the scene into one large vector, and use it as input to a neural network that returns a predicted target object, tool object and keypose. We evaluate three mechanisms for aggregating the input object descriptors: {\bf average pooling}, {\bf max pooling}, and {\bf attention}. Lacking the flexibility to predict the target and tool objects in scenes with variable numbers of objects, these baselines instead use two classification output layers that predict the target and tool objects separately; the returned objects are indicated by their indices in the input. All compared methods share the same low-level policy. Since the maximum number of objects per scene in our experiments is $10$, the classification output layers of the baseline models have $10$ units. This is a limitation compared to our proposed architecture, which is class-agnostic and can handle scenes with any number of objects.

Additionally, to see how the proposed decomposition of the policy influences training, we evaluate GAIL~\cite{ho2016generative} with an architecture similar to the max-pooling baseline described above, except that it predicts the change of the tool object's pose at each time step rather than the keypose. Without the intermediate-level policy, this architecture needs to predict the tool object, target object and low-level motion simultaneously. GAIL is an adversarial imitation learning algorithm that has proven successful in many applications, but even with this training algorithm, the lack of policy decomposition still prevents this baseline from solving the tasks shown later.
All compared methods use two consecutive fully connected layers with $256$ and $128$ units to encode object descriptors. In all models, the size of the hidden fully connected layer is $128$, and an additional hidden layer with $64$ units precedes the quaternion output. ReLU is applied after every fully connected layer except for the output layers. We train all models with the Adam optimizer; a training batch contains $10$ scenes, the learning rate is $10^{-4}$, and the number of epochs is $100$.
In all four tasks, we evaluate the methods based on: (1) the accuracy in predicting the ground-truth target-tool pairs in each scene, (2) the accuracy in predicting the keyposes of the tool in each scene, and (3) the overall task success rate, which combines the two previous criteria.
The keypose prediction accuracy is measured by the $L_2$ error.
Additionally, we deploy the trained models on a {\it Kuka iiwa14} robot equipped with a {\it RealSense 415} RGB-D camera (Fig.~\ref{fig:system_overview} and Fig.~\ref{fig:tasks}) and report the task success rates in Table~\ref{tab:real_robot_stat}.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c}
Success / Trial & Task (A) & Task (B) & Task (C) & Task (D) \\
\hline
Proposed & $5/5$ & $5/5$ & $4/5$ & $4/5$\\
\hline
Average Pooling & $0/5$ & $0/5$ & $0/5$ & $0/5$\\
\hline
Max Pooling & $2/5$ & $1/5$ & $1/5$ & $0/5$ \\
\hline
Attention & $1/5$ & $1/5$ & $1/5$ & $1/5$ \\
\hline
ResNet~\cite{he2016deep} & $0/5$ & $1/5$ & $1/5$ & $0/5$ \\
\hline
End-to-End ResNet~\cite{he2016deep} & $0/5$ & $1/5$ & $0/5$ & $0/5$ \\
\hline
GAIL~\cite{ho2016generative} & $0/5$ & $0/5$ & $0/5$ & $0/5$\\
\hline
Dense Object Nets~\cite{DBLP:conf/corl/FlorenceMT18} & $1/5$ & $0/5$ & $0/5$ & $0/5$ \\
\end{tabular}
\caption{Real robot evaluation results}
\label{tab:real_robot_stat}
\end{table}
\noindent {\bf Small-on-large box stacking (A), and large-on-small box stacking (B).} The robot is expected to stack the smallest box on top of the largest one in task (A), and to do the opposite in task (B). The ideal keypose for placement is on top of the center of the target object, at a height given by the sum of the target object's height and half of the tool object's height. A predicted keypose is considered accurate as long as it lies within $3\,$cm of this ideal center. The results of these two tasks are shown in the top two rows of Fig. \ref{fig:stat_all}. We can see from these plots that the proposed model achieves better performance than the compared models. While the proposed model attains a lower error in keypose prediction, its advantage is even more significant in target and tool object prediction. We hypothesize that keypose prediction in these two tasks is easier than tool-target prediction because the keypose centers are always on top of the target objects' centers in the training videos. Furthermore, the ResNet and Dense Object Nets baselines, which feed the compared features to the proposed architecture, also reach higher accuracy in target and tool object prediction than the baselines with other aggregation mechanisms. The results on these tasks thus support the benefit of the proposed architecture, as
an answer to the significance of the proposed architecture. This can be explained by the fact that the order of the objects is arbitrary across scenes. The baseline models without the proposed architecture therefore see many different indices for the same object in different scenes during training; as a result, training iterations can cancel each other out until these models manage to discover the correspondence between each input object and each output. In contrast, the proposed model avoids this problem because the same object in different scenes or frames receives similar outputs, as it has similar descriptors. Among the compared aggregation mechanisms, the average pooling model seems to fail to learn; it may fail to focus on the target and tool objects because it always considers all the objects in the scene. Although they may need much more data than our model, the max pooling and attention models show an improving tendency. Max pooling seems to perform better than the attention model; its relative advantage may come from the facts that (1) it has fewer parameters, as it does not compute attention weights, and (2) these tasks require only two objects rather than a complex combination of information from multiple objects. Having verified the significance of the proposed architecture, we can see the benefit of DGCNN features in the difference between the proposed method and the ResNet and Dense Object Nets baselines. Without direct shape information from point clouds, these baselines have higher keypose prediction errors than the other evaluated methods. At the same time, they are more likely to fail to distinguish between the large and small boxes, as apparent object sizes can be misleading due to the objects' distances from the camera. Additionally, end-to-end training does not seem to help here.
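The placement criterion used in the stacking tasks can be made concrete with a short sketch. Heights are measured from the support surface, which is an assumption of this sketch; the function and variable names are illustrative:

```python
def ideal_keypose(target_center_xy, target_height, tool_height):
    """Ideal placement point for the box-stacking tasks: directly above
    the target object's center, at a height equal to the target object's
    height plus half of the tool object's height (lengths in metres)."""
    x, y = target_center_xy
    return (x, y, target_height + tool_height / 2.0)

def keypose_is_accurate(predicted, ideal, tol=0.03):
    """A predicted keypose counts as accurate when its position lies
    within 3 cm (Euclidean distance) of the ideal center."""
    return sum((a - b) ** 2 for a, b in zip(predicted, ideal)) <= tol ** 2
```

For example, stacking a $4\,$cm-tall box on an $8\,$cm-tall target places the ideal keypose $10\,$cm above the table.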
\noindent {\bf (C) Pouring.} The robot is expected to pick up a bottle from the side, move it close to a cup, and rotate it to point toward the cup. The ideal keypose has a height equal to the sum of the height of the cup and the radius of the bottle; a successful pose should be higher than the cup and no more than $5\,$cm above this desired height. In addition to the height criterion, a successful pose should also satisfy: (1) the main axis of its PCA pose points toward the cup, in the direction going from the bottle's center to the bottle's opening; (2) the point cloud of the bottle's neck, transformed by the predicted keypose, lies within the extent of the cup, which is computed from the cup's point cloud. The performances of the different models are shown in the third row of Fig. \ref{fig:stat_all}. We again see a clear advantage of the proposed model over the other models, and the advantage in keypose prediction is more significant than in the previous tasks. This should come from the fact that the keypose in this task is more complex: the ground-truth keypose depends on the initial pose of the bottle relative to the cup, because the demonstrator pours from the left if the bottle initially lies to the left of the cup, and pours from the right otherwise. Our intermediate-level policy can extract this information from the input relative poses, while this information cannot be retrieved directly from the input by the other models. The experiments also show that the gaps in target and tool object prediction between the proposed method and the compared input features (ResNet and Dense Object Nets) are smaller in this task. This result can be attributed to the appearance saliency of bottles and cups over the other objects; it still does not reduce their disadvantage in keypose prediction, since 3D shape information is missing from these features.
\noindent {\bf (D) Painting.} The robot is expected to lift and rotate a paint-brush, and move it to the top surface of a box. The desired keypose requires the brush's tip to perpendicularly touch the top surface of the box. As the brush's tip is soft, we allow a height tolerance of $3\,$cm. The performances of the different models are listed in the last row of Fig. \ref{fig:stat_all}, which again shows a clear advantage of the proposed method. In this case, however, max pooling and attention achieve good accuracy in target object prediction. We hypothesize that this is because brushes have very different shapes from the other objects present in the scene, and hence the tool object is easier to distinguish when DGCNN features are provided. Furthermore, the low accuracy of target and tool object prediction with the compared features (ResNet and Dense Object Nets) again suggests that appearance information alone can be insufficient for these manipulation tasks.
\section{Introduction}
Current robotic manipulation systems that are deployed in real-world environments rely on precise 3D CAD models of the manipulated objects. These models are used by planning algorithms in simulation to carefully select actions of the robot to apply on the objects in order to accomplish a pre-defined task, such as pick-and-place, assembly, transferring liquid content, and surface painting. 3D CAD models are obtained by manually placing standalone objects on rotary tables and scanning them.
Moreover, these objects need to be detected from images, typically through the use of convolutional neural networks that are trained by manually selecting and labeling the objects in training images. This tedious human labor severely limits the applicability of manipulation systems outside of controlled environments, where robots are faced with a large variety of object types, shapes and textures, which makes it virtually impossible for robots to always rely on prior models. Moreover, despite recent progress in model-based planning algorithms, searching for manipulation actions in a physics-based simulation that involves contacts and collisions is still computationally expensive.
An increasingly popular solution to the problems mentioned above is to learn {\it category-level} manipulation skills. A sequence of images showing how to perform a certain manipulation task is provided to the robot by a human demonstrator, or collected by the robot itself through trial-and-error. The robot is then tasked with learning a manipulation policy that can be tested on objects other than those used in the demonstrations. There are typically two types of methods for solving this problem: {\it end-to-end} techniques that train a neural net to map pixels directly to torques, and {\it modular} techniques that decompose the problem into first reasoning over 6D poses of objects then solving the inverse kinematics and dynamics problem in order to control the robotic arm. While end-to-end techniques are appealing due to their simplicity and flexibility, they still require large sets of training data and suffer from limited ability to generalize to novel objects. Therefore, the method proposed in this work subscribes to the modular type of robot learning techniques.
\begin{figure}[t]
\begin{center}
\begin{subfigure}{}
\includegraphics[width=0.49\textwidth]{images/overview.pdf}
\end{subfigure}
\end{center}
\caption{\small System overview and robotic setup used in the experiments. In this example, the system is trained by unlabeled visual demonstrations to pick up a paint-brush in a specific manner, and to place it in a specific configuration relative to another object. The robot is tasked with repeating the demonstrated behaviour on a new scene containing novel objects with different sizes and texture.}
\label{fig:system_overview}
\end{figure}
Prior works in modular robot learning were mostly focused on grasping problems~\cite{DBLP:conf/iros/BoulariasKP11,Detry2013,Lenz2013,7139793,Yan-2018-113286,DBLP:conf/iccv/MousavianEF19}. While there are numerous learning-based techniques that generalize grasping skills to novel objects, only a few recent ones have been shown capable of generalizing other manipulation skills, such as placing, to novel objects~\cite{5509439,seita2019deep,Manuelli2019kPAMKA,DBLP:conf/corl/FlorenceMT18,9363610,e4e03a5d1bf545c29f8c26b824a57068, DBLP:journals/corr/abs-1910-11977, vecerik2020s3k}. However, most of these category-level techniques rely on annotated images wherein a human expert manually specifies {\it keypoints} on training objects. A neural network is then used to predict these task-specific keypoints on new images of unknown objects within the same category. Some techniques, such as~\cite{DBLP:journals/corr/abs-1910-11977} overcome this requirement by using a self-supervised robotic system to collect and annotate data, but this still requires long hours of real robotic manipulation which can be unsafe or less efficient if no guided exploration is provided.
In this work, we propose a new technique for learning category-level manipulation skills from unlabeled RGB-D videos of demonstrations. The proposed system is fully autonomous, and does not require any human feedback during training or testing besides the raw demonstration videos. The setup of the system includes a support surface wherein a number of unknown objects are placed, which includes task-irrelevant distractive objects. The system is composed of a high-level policy that selects a tool and a target from the set of objects in the scene at each step of the manipulation, an intermediate-level policy that predicts desired 6D keyposes for the robot's wrist, and a low-level policy that moves the wrist to the keyposes. The system employs a dynamic graph convolutional neural network that receives as inputs partial point clouds of the objects. A local frame of reference is computed for each object based on the principal component analysis of its point cloud. The 6D keyposes predicted by the network are in the frame of the predicted target object.
The proposed system is tested on a real robot with real demonstrations of four different tasks: two variants of stacking, simulated pouring, and simulated painting. Simulated pouring and painting are performed with real objects but with no liquid, for the safety of the robot. The same architecture and parameters are used for all four tasks.
The key novel contributions of this work are as follows. (1) An efficient {\it new architecture} for learning category-level manipulation tasks. A key feature of this architecture is the capability to generate trajectories of the robot's end-effector according to the 3D shapes of the objects that are present in the scene. The proposed architecture can thus generalize to objects with significantly different sizes and shapes. This is achieved through the use of a dynamic version of graph neural networks, wherein the graph topology is learned based on the demonstrated task. Moreover, the proposed architecture can be used in scenes that contain an arbitrary number of objects. This is in contrast with most existing manipulation learning methods, which are limited to objects of similar dimensions.
(2) An {\it empirical study} comparing various architectures for learning manipulation tasks, using real objects and robot. The study shows that, amongst the compared methods, a dynamic GNN provides the most data-efficient architecture for learning such tasks. Furthermore, the same hyper-parameter values can be used to learn different tasks.
(3) A {\it new formulation} of the problem by representing states as lists of relative 6D poses of object pairs. The proposed formulation also {\it unifies} the problems of grasping and manipulation by treating the end-effector as one of the objects in the scene, and viewing grasping as a special type of tool-use.
(4) A {\it fully self-supervised} learning pipeline that does not require any manual annotation or manual task decomposition, in contrast with existing category-level techniques~\cite{DBLP:journals/ral/SchmidtNF17,5509439,seita2019deep, Manuelli2019kPAMKA,DBLP:conf/corl/FlorenceMT18,9363610,e4e03a5d1bf545c29f8c26b824a57068,DBLP:journals/corr/abs-1910-11977,vecerik2020s3k}.
\section{Proposed Approach}
The proposed approach initially assumes that there are no hidden variables, and incrementally creates new ones only when needed to explain the stochasticity of the outputs.
In this work, we focus on reward estimation in POMDPs. Specifically, we investigate POMDP problems in which the hidden part of the state can be inferred from the history of observable states. The proposed approach models hidden variables as indicators of whether a trajectory visits specific regions. This fits well tasks with a topological structure, where optimal behavior can be divided into multiple stages and each stage is characterized by traversing a subregion of the observable state space. The proposed method aims at discovering the regions most relevant for reward estimation.
We model the subregion for hidden variable $M^i$ as a hyper-sphere $\|z-C^i\|^2 \leq (\epsilon^i)^2$, with centers $[C^i]^m_{i=1}$ and radii $[\epsilon^i]^m_{i=1}$. We can then define the hidden variable as $M^i(D_l)=I_{\min_h \|z^l_h - C^i\|^2 \leq (\epsilon^i)^2}$, where $I_{\mathrm{condition}}$ is the indicator function: a hidden variable is activated when the trajectory traverses the corresponding region.
To find the most relevant memory, we measure the information gain of these hidden variables on the rewards; we start with one hidden variable and extend to the multiple-variable case later: $IG(M) = H(R) - H(R|M)$, where $H(\cdot)$ denotes entropy and $H(\cdot|\cdot)$ conditional entropy. To reduce the uncertainty in reward prediction, we maximize the information gain over the hidden variable $M$. As the first term does not depend on $M$, this maximization is equivalent to minimizing $H(R|M)$. We assume that rewards are discrete in the computation of entropy, which is true in many robotics applications, where a constant reward is provided only when a final goal is reached. In some continuous cases, the same computation can still be applied after clustering the reward values. The conditional entropy is defined as
\begin{eqnarray*}
H(R|M) = \sum_{r_k}\Big(-P(M=0,R=r_k)\log \frac{P(M=0, R=r_k)}{P(M=0)} \\
-P(M=1,R=r_k)\log \frac{P(M=1, R=r_k)}{P(M=1)}\Big).
\end{eqnarray*}
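The region-visitation indicator and the information-gain criterion can be estimated empirically from a demonstration set, as in the following sketch (Python; all names and the empirical plug-in estimation of the probabilities are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    # Empirical (plug-in) entropy of a list of discrete labels
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def hidden_variable(trajectory, center, eps):
    # M(D_l) = 1 iff the trajectory visits the hyper-sphere of radius
    # eps around `center` (the indicator from the text, for one region)
    return any(sum((zi - ci) ** 2 for zi, ci in zip(z, center)) <= eps ** 2
               for z in trajectory)

def information_gain(trajectories, rewards, center, eps):
    # IG(M) = H(R) - H(R|M), with H(R|M) estimated from the empirical
    # (M, R) co-occurrence counts over the demonstration set
    m = [hidden_variable(t, center, eps) for t in trajectories]
    n = len(rewards)
    h_cond = 0.0
    for flag in (False, True):
        sub = [r for r, mi in zip(rewards, m) if mi == flag]
        if sub:
            h_cond += len(sub) / n * entropy(sub)
    return entropy(rewards) - h_cond
```

A region whose visitation perfectly predicts the reward attains the maximal gain $H(R)$, while an irrelevant region yields zero gain.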
G,V \otimes_E \La_E(G_\infty)^\iota)$, and in particular it is
canonically independent of the choice of lattice $T$. The natural map
\[
H^*(G,V \otimes_E \La_E(G_\infty)^\iota)
\otimes_{\La_E(G_\infty)} \calH_E(G_\infty)
\to
H^*(G,V \otimes_E \calH_E(G_\infty)^\iota)
\]
is an isomorphism. We define $\bfR\Ga_\Iw(G,V) = \bfR\Ga(G,V
\otimes_E \calH_E(G_\infty)^\iota)$ and $H^*_\Iw(G,V) = H^*(G,V
\otimes_E \calH_E(G_\infty)^\iota)$. We refer to $H^*_\Iw(G,V)$ as
the \emph{rigid analytic Iwasawa cohomology}, or, because we have no
use for classical Iwasawa cohomology in this paper, simply the
\emph{Iwasawa cohomology}. Iwasawa cohomology groups are coadmissible
$\calH_E(G_\infty)$-modules.
There is an equivalence of categories $V \mapsto \bbD_\rig(V)$ between
continuous $E$-linear $G_p$-representations and
$(\vphi,G_\infty)$-modules over $\calR_E = \calR \otimes_{\bbQ_p} E$,
where $\calR$ is the Robba ring. Given any $(\vphi,G_\infty)$-module
$D$ over $\calR$, we define $\bfR\Ga_\Iw(G_p,D)$ to be the class of
\[
[D \xrightarrow{1-\psi} D]
\]
in the derived category, where $\psi$ is the canonical left inverse to
$\vphi$ and the complex is concentrated in degrees $1,2$, and we
define $H^*_\Iw(G_p,D)$ to be its cohomology, referring to the latter
as the \emph{Iwasawa cohomology} of $D$. These Iwasawa cohomology
groups are also coadmissible $\calH_E(G_\infty)$-modules. Note the
comparison
\[
\bfR\Ga_\Iw(G_p,V) \cong \bfR\Ga_\Iw(G_p,\bbD_\rig(V)).
\]
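Unwinding the definition, since the complex $[D \xrightarrow{1-\psi} D]$ is concentrated in degrees $1$ and $2$, the Iwasawa cohomology of a $(\vphi,G_\infty)$-module $D$ can be written out directly:

```latex
\[
H^1_\Iw(G_p,D) \cong D^{\psi=1},
\qquad
H^2_\Iw(G_p,D) \cong D/(1-\psi)D,
\qquad
H^i_\Iw(G_p,D) = 0 \text{ for } i \neq 1,2.
\]
```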
We define $\wt\bbD_\cris(D) = D[1/t]^{G_\infty}$ and $\bbD_\cris(D) =
\wt\bbD_\cris(\Hom_{\calR_E}(D,\calR_E))$ (where $t \in \calR$ is
Fontaine's $2\pi i$), and we say that $D$ is crystalline if $\dim_E
\bbD_\cris(D) = \rank_{\calR_E} D$. Note the comparisons
\[
\bbD_\cris(V) \cong \bbD_\cris(\bbD_\rig(V)),
\qquad
\wt\bbD_\cris(V) \cong \wt\bbD_\cris(\bbD_\rig(V)).
\]
The functor $\wt\bbD_\cris$ provides an exact, rank-preserving
equivalence of exact $\otimes$-categories with Harder--Narasimhan
filtrations, from crystalline $(\vphi,G_\infty)$-modules over
$\calR_E$ to filtered $\vphi$-modules over $E$, under which those
$(\vphi,G_\infty)$-modules of the form $\bbD_\rig(V)$ correspond to
the weakly admissible filtered $\vphi$-modules. In particular, if we
tacitly equip any $E[\vphi]$-submodule of a filtered $\vphi$-module
with the induced filtration, then for $D$ crystalline $\wt\bbD_\cris$
induces a functorial, order-preserving bijection
\[
\{\text{$t$-saturated $(\vphi,G_\infty)$-submodules of }D\}
\leftrightarrow
\{\text{$E[\vphi]$-stable subspaces of }\wt\bbD_\cris(D)\}.
\]
In the remainder of this subsection, we assume given a continuous
$E$-representation $V$ of $G_{\bbQ,S}$ that is crystalline at $p$, as
well as a fixed $E[\vphi]$-stable $F \subseteq \bbD_\cris(V|_{G_p})$,
and we associate to these data an Iwasawa-theoretic Selmer complex.
We begin by defining a local condition for each $v \in S$, by which we
mean an object $U_v$ in the derived category together with a morphism
$i_v \cn U_v \to \bfR\Ga_\Iw(G_v,V)$. If $v \neq p$, we denote by
$I_v \subset G_v$ the inertia subgroup, and we let $U_v =
\bfR\Ga_\Iw(G_v/I_v,V^{I_v})$ and $i_v$ be the inflation map. If $v =
p$, we write $F^\perp \subseteq \wt\bbD_\cris(V)$ for the orthogonal
complement of $F$, and then $D^+_F := \wt\bbD_\cris^{-1}(F^\perp)
\subseteq \bbD_\rig(V)$ and $D^-_F = \bbD_\rig(V)/D^+_F$. Then we let
$U_v = \bfR\Ga_\Iw(G_p,D^+_F)$, and we let $i_v$ be the functorial map
to $\bfR\Ga_\Iw(G_p,\bbD_\rig(V)) \cong \bfR\Ga_\Iw(G_p,V)$.
We now define the \emph{Selmer complex} $\bfR\wt\Ga_{F,\Iw}(\bbQ,V)$
to be the mapping fiber of the morphism
\[
\bfR\Ga_\Iw(G_{\bbQ,S},V)
\oplus
\bigoplus_{v \in S} U_v
\xrightarrow{\bigoplus_{v \in S} \res_v - \bigoplus_{v \in S} i_v}
\bigoplus_{v \in S} \bfR\Ga_\Iw(G_v,V),
\]
where $\res_v \cn \bfR\Ga_\Iw(G_{\bbQ,S},X) \to \bfR\Ga_\Iw(G_v,X)$
denotes restriction of cochains to the decomposition group. We write
$\wt H^*_{F,\Iw}(\bbQ,V)$ for its cohomology groups, referring to them
as the \emph{extended Selmer groups}. Then
$\bfR\wt\Ga_{F,\Iw}(\bbQ,V)$ is a perfect complex of
$\calH_E(G_\infty)$-modules concentrated in degrees $[0,3]$.
We will have need for a version without imposing local conditions at
$p$. Namely, we write $\bfR\wt\Ga_{(p),\Iw}(\bbQ,V)$ for the mapping
fiber of
\[
\bfR\Ga_\Iw(G_{\bbQ,S},V)
\oplus
\bigoplus_{v \in S^{(p)}} U_v
\xrightarrow{\bigoplus_{v \in S^{(p)}} \res_v
- \bigoplus_{v \in S^{(p)}} i_v}
\bigoplus_{v \in S^{(p)}} \bfR\Ga_\Iw(G_v,V),
\]
where $S^{(p)} = S \bs \{p\}$, and we write $\wt
H^*_{(p),\Iw}(\bbQ,V)$ for its cohomology. Bearing in mind the exact
triangle
\[
\bfR\Ga_\Iw(G_p,D^+_F) \to \bfR\Ga_\Iw(G_p,V) \to
\bfR\Ga_\Iw(G_p,D^-_F) \to \bfR\Ga_\Iw(G_p,D^+_F)[1],
\]
we deduce from the definitions of the Selmer complexes an exact
triangle
\begin{equation}\label{E:remove-p}
\bfR\wt\Ga_{F,\Iw}(\bbQ,V) \to \bfR\wt\Ga_{(p),\Iw}(\bbQ,V) \to
\bfR\Ga_\Iw(G_p,D^-_F) \to \bfR\wt\Ga_{F,\Iw}(\bbQ,V)[1].
\end{equation}
\subsection{The Main Conjecture for $f$ and its symmetric powers}
We remind the reader of the fixed newform $f$ of weight $k$, level
$\Ga_1(N)$ with $p \nmid N$ and character $\ep$, with CM by $K$, and
the roots $\alpha,\ov\alpha$ of $x^2 + \ep(p)p^{k-1}$.
Since the elements $\alpha,\ov\alpha$ are distinct, the
$\vphi$-eigenspace with eigenvalue $\alpha$ determines an
$E[\vphi]$-stable subspace $F_\alpha \subseteq \bbD_\cris(V_f)$. We
apply the constructions of Iwasawa-theoretic extended Selmer groups,
with their associated ranks and characteristic ideals, to the data of
$V_f$ equipped with $F_\alpha$.
The following is the ``finite-slope'' form of the Main Conjecture of
Iwasawa theory for $f$.
\begin{theorem}\label{t:MC}
Assume that $p$ does not divide the order of the nebentypus $\ep$.
The coadmissible $\calH_E(G_\infty)$-module $\wt
H^2_{F_\alpha,\Iw}(\bbQ,V_f)$ is torsion, and
\[
\chr_{\calH_E(G_\infty)} \wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)
=
(\Tw_{-1} L_{f,\alpha}).
\]
\end{theorem}
\begin{proof}
We reproduce the argument of \cite[\S5]{P2}, adapted to the
normalizations of this paper.
In the notation of \S\ref{S:selmer}, the object $D^-_{F_\alpha}$ is
crystalline, and $\wt\bbD_\cris(D^-_{F_\alpha})$ has
$\vphi$-eigenvalue $\alpha\inv$ and Hodge--Tate weight $0$. This
implies that $H^2_\Iw(G_p,D^-_{F_\alpha}) = 0$. (If $k$ is odd,
$\ep(p)=-1$, and $\alpha=+p^{(k-1)/2}$ then
$H^1_\Iw(G_p,D^-_{F_\alpha})_\tors \cong E(\chi^{(k-1)/2})$ is
nonzero, but this ``exceptional zero'' does not affect the present
proof.)
Write $f^c = f \otimes \ep\inv$ for the eigenform with Fourier
coefficients complex conjugate to those of $f$, and recall the duality
$\Hom_E(V_{f^c},E) \cong V_f(1-k)$. Let $z'_{f^c} \in \wt
H^1_{(p),\Iw}(\bbQ,\Hom_E(V_{f^c},E))$ denote Kato's zeta element
derived from elliptic units (denoted $z_\ga^{(p)}(f^*)$ for suitable
$\ga \in \Hom_E(V_{f^c},E)$ in \cite{K}), and let
\[
z_f = \Tw_{k-1} z'_{f^c} \in
\wt H^1_{(p),\Iw}(\bbQ,\Hom_E(V_{f^c},E)(k-1)) \cong
\wt H^1_{(p),\Iw}(\bbQ,V_f).
\]
For a crystalline $(\vphi,G_\infty)$-module $D$ satisfying
$\Fil^1\bbD_\dR(D) = 0$, recall the dual of the big exponential map
treated in \cite[\S3]{Nak}:
\[
\Exp^*_{D^*}
\cn
H^1_\Iw(G_p,D)
\to
\wt\bbD_\cris(D) \otimes_E \calH_E(G_\infty).
\]
By naturality in $D$, there is a commutative diagram
\[\begin{array}{r@{\ }c@{\ }c@{\ }c}
\wt H^1_{(p),\Iw}(\bbQ,V_f)
\xrightarrow{\loc_{V_f}} H^1_\Iw(G_p,V_f)
\cong
& H^1_\Iw(G_p,\bbD_\rig(V_f))
& \xrightarrow{\Col_\alpha} &
H^1_\Iw(G_p,D^-_{F_\alpha}) \\
& \Exp^*_{V_f^*}\downarrow\phantom{\Exp^*_{V_f^*}} &
& \phantom{\Exp^*_{D^{-,*}_{F_\alpha}}} \downarrow \Exp^*_{D^{-,*}_{F_\alpha}} \\
& \wt\bbD_\cris(V_f) \otimes_E \calH_E(G_\infty)
& \to &
\wt\bbD_\cris(D^-_{F_\alpha}) \otimes_E \calH_E(G_\infty).
\end{array}\]
Write $\loc_\alpha = \Col_\alpha \circ \loc_{V_f}$, where the maps
$\loc_{V_f}$ and $\Col_\alpha$ are as in the preceding diagram.
Identifying $\wt\bbD_\cris(D_{F_\alpha}^-) = \Hom_E(Ee_\alpha,E)$,
\cite[Theorem~16.6(2)]{K} shows that
\begin{equation}\label{E:compute-kato}
(\Tw_1 \Exp^*_{D^{-,*}_{F_\alpha}} \loc_\alpha z_f)(e_\alpha)
=
(\Exp^*_{\Hom_E(V_f,E)} \loc_{V_f(1)} \Tw_1 z_f)(e_\alpha)
=
L_{f,\alpha},
\end{equation}
after perhaps rescaling $e_\alpha$. In particular, $\loc_\alpha$ is a
nontorsion morphism.
The exact triangle \eqref{E:remove-p} gives rise to an exact sequence
\begin{multline*}
0
\to
\wt H^1_{F_\alpha,\Iw}(\bbQ,V_f)
\to
\wt H^1_{(p),\Iw}(\bbQ,V_f)
\xrightarrow{\loc_\alpha}
H^1_\Iw(G_p,D^-_{F_\alpha}) \\
\to
\wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)
\to
\wt H^2_{(p),\Iw}(\bbQ,V_f)
\to
0.
\end{multline*}
It follows from \cite[Theorem~12.4]{K} that the finitely generated
$\calH_E(G_\infty)$-module $\wt H^1_{(p),\Iw}(\bbQ,V_f)$ (resp.\ $\wt
H^2_{(p),\Iw}(\bbQ,V_f)$) is free of rank $1$ (resp.\ is torsion).
Employing the local Euler--Poincar\'e formula and the fact that
$\loc_\alpha$ is nontorsion, we see from the preceding exact sequence
that $\wt H^1_{F_\alpha,\Iw}(\bbQ,V_f) = 0$, $\wt
H^2_{F_\alpha,\Iw}(\bbQ,V_f)$ is torsion, and
\begin{multline*}
\left(\chr_{\calH_E(G_\infty)}
\frac{\wt H^1_{(p),\Iw}(\bbQ,V_f)}{\calH_E(G_\infty)z_f}\right)
\left(\chr_{\calH_E(G_\infty)} \wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)\right) \\
=
\left(\chr_{\calH_E(G_\infty)}
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f}\right)
\left(\chr_{\calH_E(G_\infty)} \wt H^2_{(p),\Iw}(\bbQ,V_f)\right).
\end{multline*}
Applying $\Tw_{k-1}$ to the claim of \cite[Theorem~12.5(3)]{K} with
$f^*$ in place of $f$, we deduce that
\[
\chr_{\calH_E(G_\infty)}
\frac{\wt H^1_{(p),\Iw}(\bbQ,V_f)}{\calH_E(G_\infty)z_f}
=
\chr_{\calH_E(G_\infty)} \wt H^2_{(p),\Iw}(\bbQ,V_f),
\]
and therefore
\[
\chr_{\calH_E(G_\infty)} \wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)
=
\chr_{\calH_E(G_\infty)}
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f}.
\]
Although only a divisibility of characteristic ideals is claimed by
Kato, one easily checks that his proof, especially
\cite[Proposition~15.17]{K}, gives an equality whenever Rubin's method
gives an equality. Under the hypothesis that $\ep$ has order prime to
$p$, the required extension of Rubin's work is precisely
Theorem~\ref{T:rubin} below. It remains to compute the right hand
side of the last identity. In fact, one has the exact sequence
\begin{multline*}
0 \to
H^1_\Iw(G_p,D^-_{F_\alpha})_\tors
\to
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f} \\
\xrightarrow{\Exp^*_{D^{-,*}_{F_\alpha}}}
\frac{\wt\bbD_\cris(D^-_{F_\alpha}) \otimes_E \calH_E(G_\infty)}
{\calH_E(G_\infty)\Exp^*_{D^{-,*}_{F_\alpha}}\loc_\alpha z_f}
\to
\coker \Exp^*_{D^{-,*}_{F_\alpha}}
\to 0,
\end{multline*}
and because $D^-_{F_\alpha}$ has Hodge--Tate weight zero and
$H^2_\Iw(G_p,D^-_{F_\alpha})=0$, \cite[Theorem~3.21]{Nak} shows that
\[
\chr_{\calH_E(G_\infty)} H^1_\Iw(G_p,D^-_{F_\alpha})_\tors
=
\chr_{\calH_E(G_\infty)} \coker \Exp^*_{D^{-,*}_{F_\alpha}},
\]
and hence
\[
\chr_{\calH_E(G_\infty)}
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f}
=
\chr_{\calH_E(G_\infty)}
\frac{\wt\bbD_\cris(D^-_{F_\alpha}) \otimes_E \calH_E(G_\infty)}
{\calH_E(G_\infty)\Exp^*_{D^{-,*}_{F_\alpha}}\loc_\alpha z_f}.
\]
Finally, \eqref{E:compute-kato} shows that the right hand side above
is generated by $\Tw_{-1} L_{f,\alpha}$.
\end{proof}
We now turn to the Main Conjecture of Iwasawa theory for $V_m$ in its
``finite-slope'' form, beginning with two remarks. First, we remind
the reader that since $\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee$ is a
finitely generated, torsion $\La_{\calO_E}(G_\infty)$-module, it
follows that
\[
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee[1/p]
=
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee
\otimes_{\La_{\calO_E}(G_\infty)} \La_E(G_\infty)
\stackrel\sim\to
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee
\otimes_{\La_{\calO_E}(G_\infty)} \calH_E(G_\infty),
\]
and therefore $\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee[1/p]$ is naturally
a finitely generated (hence coadmissible), torsion
$\calH_E(G_\infty)$-module. Second, for $\fks \in \fkS$ we note that
the ``plus and minus'' Iwasawa-theoretic Selmer groups satisfy the
arithmetic duality
\[
H^{1,\fks_i}_f(k_\infty,A_{f_i}^*((r-i)(k-1)))^\vee[1/p]
\cong
\wt H^2_{\fks_i,\Iw}(\bbQ,V_{f_i}((i-r)(k-1)))^\iota,
\]
where $\wt H^2_{\fks_i,\Iw}$ denotes the cohomology of an
Iwasawa-theoretic Selmer complex with local condition at $p$
appropriately built from the choice $\fks_i$. These isomorphic
modules are also finitely generated (hence coadmissible), torsion
$\calH_E(G_\infty)$-modules, by Theorem~5.6.
With the preceding remarks in mind, what follows is the finite-slope
analogue of Definition~5.3. Fix $\fkt = (\fkt_0,\ldots,\fkt_{\wt
r-1}) \in \fkT$. For each $i=0,\ldots,\wt r-1$, the elements
$\alpha_i,\ov\alpha_i$ are distinct, so the $\vphi$-eigenspace with
eigenvalue $\fkt_i p^{(r-i)(k-1)}$ determines an $E[\vphi]$-stable
subspace $F_i \subseteq \bbD_\cris(V_{f_i}((i-r)(k-1)))$. We may
apply the constructions of Iwasawa-theoretic extended Selmer groups,
with their associated ranks and characteristic ideals, to the data of
$V_{f_i}((i-r)(k-1))$ equipped with $F_i$.
\begin{definition}\label{d:Selmer}
For $\fkt \in \fkT$, we define the coadmissible
$\calH_E(G_\infty)$-module
\[
\Sel_{k_\infty}^\fkt(V_m^*)^\vee
:=
\left(\bigoplus_{i=0}^{\wt r-1}
\wt H^2_{F_i,\Iw}\left(\bbQ,V_{f_i}((i-r)(k-1))\right)^\iota\right)
\oplus
\begin{cases}
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee[1/p] & m\text{ even}, \\
0 & m\text{ odd}.
\end{cases}
\]
\end{definition}
\begin{remark}
Although the notation $\Sel_{k_\infty}^\fkt(V_m^*)^\vee$ in the
finite-slope case was chosen for symmetry with
$\Sel_{k_\infty}^\fks(A_m^*)^\vee[1/p]$ in the ``plus and minus''
case, this notation is highly misleading: it is an essential feature
of the finite-slope theory that $\Sel_{k_\infty}^\fkt(V_m^*)^\vee$ is
coadmissible but typically \emph{not} finitely generated over
$\calH_E(G_\infty)$, and therefore does not arise as the Pontryagin
dual (with $p$ inverted) of direct limits of finite-layer objects, as
$\Sel_{k_\infty}^\fks(A_m^*)^\vee[1/p]$ does. This fact forces us to
work on the other side of arithmetic duality, as in the first summand
above.
\end{remark}
\begin{theorem}\label{t:MC2}
For all $\fkt \in \fkT$, the coadmissible $\calH_E(G_\infty)$-module
$\Sel_{k_\infty}^\fkt(V_m^*)^\vee$ is torsion, and
\[
\chr_{\calH_E(G_\infty)} \Sel_{k_\infty}^\fkt(V_m^*)^\vee
=
(\Tw_1 \wt L_{V_m,\fkt}).
\]
\end{theorem}
\begin{proof}
Just as in the proof of Theorem~5.9, this theorem follows from
Theorem~5.5 and from Theorem~\ref{t:MC} applied to each $f_i$.
\end{proof}
\section{The Main Conjecture for imaginary quadratic fields at inert
primes}
In the fundamental works \cite{R,R2}, Rubin perfected the Euler system
method for elliptic units. From this he deduced a divisibility of
characteristic ideals as in the Main Conjecture of Iwasawa theory. In
most cases, he used the analytic class number formula to promote the
divisibilities to identities. In this section we extend the use of
the analytic class number formula to the remaining cases. The
obstruction in these problematic cases is that the control maps on
global/elliptic units and class groups are far from being
isomorphisms. Our approach is to use base change of Selmer complexes
to get a precise description of the failure of control, and then to
apply a characterization of $\mu$- and $\la$-invariants that is valid
even in the presence of zeroes of the characteristic ideal at
finite-order points. This section is written independently of the
preceding notations and hypotheses of this paper and \cite{HL}; we
employ notations as in \cite{R}, recalled as follows.
We take $K$ to be an imaginary quadratic field, and $p$ an odd prime
inert in $K$. Let $K_0$ be a finite abelian extension with $\Delta =
\Gal(K_0/K)$ and $\de = [K_0:K]$, and assume that $p \nmid \de$. Let
$K_\infty$ be an abelian extension of $K$ containing $K_0$, such that
$\Ga = \Gal(K_\infty/K_0)$ is isomorphic to $\bbZ_p$ or $\bbZ_p^2$.
One has $\scrG = \Gal(K_\infty/K) = \Delta \times \Ga$. Accordingly,
$K_\infty = K_0 \cdot K_\infty^\Delta$, where
$\Gal(K_\infty^\Delta/K)$ is identified with $\Ga$.
We write $\La = \La(\scrG) =\bbZ_p[\![\scrG]\!]$. The letter $\eta$
will always range over the irreducible $\bbZ_p$-representations of
$\Delta$. One has $\bbZ_p[\Delta] = \bigoplus_\eta
\bbZ_p[\Delta]^\eta$, where $\bbZ_p[\Delta]^\eta$ is isomorphic to the
ring of integers in the unramified extension of $\bbQ_p$ of degree
$\dim(\eta)$, and, accordingly, $\La = \bigoplus_\eta
\bbZ_p[\Delta]^\eta[\![\Ga]\!]$. The sum map $\summ \cn
\bbZ_p[\Delta] \to \bbZ_p$, $\sum_\sigma n_\sigma\sigma \mapsto
\sum_\sigma n_\sigma$, is identified with the projection onto the
component $\bbZ_p[\Delta]^{\bf1}$ indexed by the trivial character
${\bf1}$; write $\bbZ_p[\Delta]^!$ for the kernel of the sum map,
which is equal to $\bigoplus_{\eta\neq{\bf1}} \bbZ_p[\Delta]^\eta$,
and satisfies $\bbZ_p[\Delta] = \bbZ_p[\Delta]^{\bf1} \oplus
\bbZ_p[\Delta]^!$.
For $\{a_n\}$ a sequence of positive real numbers, if there exist real
numbers $\mu,\la$ such that $\log_p a_n = \mu p^n + \la n + O(1)$ as
$n \to +\infty$, then these numbers $\mu,\la$ are uniquely determined
by $\{a_n\}$, and we write $\mu = \mu(\{a_n\})$ and $\la =
\la(\{a_n\})$.
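For orientation, suppose that $\Ga$ is isomorphic to $\bbZ_p$ and
identify $\bbZ_p[\![\Ga]\!] \cong \bbZ_p[\![T]\!]$ via a choice of
topological generator; the following standard computations are
recorded only for illustration. One has
\[
\#\bigl(\bbZ_p[\![T]\!]/(p^m)\bigr)_{\Ga^{p^n}}
=
\#\,(\bbZ/p^m)[T]/((1+T)^{p^n}-1)
=
p^{mp^n},
\]
so this sequence has $\mu = m$ and $\la = 0$; whereas if $f$ is a
distinguished polynomial of degree $d$ relatively prime to
$(1+T)^{p^n}-1$ for all $n$, then
$\#\bigl(\bbZ_p[\![T]\!]/(f)\bigr)_{\Ga^{p^n}} = p^{dn+O(1)}$, so the
sequence has $\mu = 0$ and $\la = d$.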
\begin{lemma}\label{L:numerics}
Assume that $\Ga$ is isomorphic to $\bbZ_p$, and let $M$ be a finitely
generated, torsion $\bbZ_p[\![\Ga]\!]$-module. Then for $n \gg 0$ the
quantity $\rank_{\bbZ_p} M_{\Ga^{p^n}}$ stabilizes to some integer $r
\geq 0$, so that $M_{\Ga^{p^n}} \approx \bbZ_p^{\oplus r} \oplus
M_{\Ga^{p^n}}[p^\infty]$, and Iwasawa's $\mu$- and $\la$-invariants of
$M$ satisfy $\mu(M) = \mu(\{\#M_{\Ga^{p^n}}[p^\infty]\})$ and $\la(M)
= r + \la(\{\#M_{\Ga^{p^n}}[p^\infty]\})$.
\end{lemma}
\begin{proof}
One easily sees that if $M \to M'$ is a pseudo-isomorphism, then both
sides of the desired identities are invariant under replacing $M$ by
$M'$. Using the structure theorem and additivity over direct sums, it
therefore suffices to check the case where $M = \bbZ_p[\![\Ga]\!]/(f)$
for prime $f \in \bbZ_p[\![\Ga]\!]$. The case where $f$ is relatively
prime to all the augmentation ideals $I(\Ga^{p^k}) = (f_k)$ of
$\Ga^{p^k}$ for $k \geq 0$, or equivalently where $r=0$, is
well-known. The remaining case is where $f = f_k/f_{k-1}$ for $k \geq
0$ (we set $f_{-1}=1$), whence one has
\[
(\bbZ_p[\![\Ga]\!]/(f))_{\Ga^{p^n}} = \bbZ_p[\![\Ga]\!]/(f,f_n) =
\bbZ_p[\![\Ga]\!]/(f) \approx \bbZ_p^{\oplus \deg f}
\]
for $n \geq k$, where $\deg f = (p-1)p^{k-1}$ for $k \geq 1$ and
$\deg f = 1$ for $k = 0$, agreeing with the Iwasawa invariants.
\end{proof}
Let $F$ be a subextension of $K_\infty/K_0$. If $F/K_0$ is finite, we
associate to it the following objects:
\begin{itemize}
\item $A(F) = \Pic(\calO_F) \otimes_\bbZ \bbZ_p$ is the $p$-part of
its ideal class group,
\item $X(F) = \Pic(\calO_F,p^\infty) = \llim_n (\Pic(\calO_F,p^n)
\otimes_\bbZ \bbZ_p)$ is the inverse limit of the $p$-parts of its
ray class groups of conductor $p^n$,
\item $U(F) = (\calO_F \otimes_\bbZ \bbZ_p)^\times_\prop$ is the
pro-$p$ part of its group of semilocal units,
\item $\scrE(F) = \calO_F^\times \otimes_\bbZ \bbZ_p$ is its group of
global units $\otimes \bbZ_p$, and
\item $\scrC(F)$ is its group of elliptic units $\otimes \bbZ_p$, as
defined in \cite[\S1]{R}.
\end{itemize}
If $F/K_0$ is infinite, and $? \in \{A,X,U,\scrE,\scrC\}$, we let
$?(F) = \llim_{F_0} ?(F_0)$, where $F_0$ ranges over the finite
subextensions of $F$, obtaining a finitely generated
$\bbZ_p[\![\Gal(F/K)]\!]$-module. Note that Leopoldt's conjecture is
known in this case, so by the definition of ray class groups one has a
short exact sequence
\[
0 \to \scrE(F) \to U(F) \to X(F) \to A(F) \to 0.
\]
Class field theory identifies $A(F)$ (resp.\ $X(F)$) with the Galois
group of the maximal $p$-abelian extension of $F$ which is everywhere
unramified (resp.\ unramified at primes not dividing $p$).
The following improvement of Rubin's work is the main result of this
section, and the remainder of this section consists of its proof.
\begin{theorem}\label{T:rubin}
One has the equality of characteristic ideals,
\[
\chr_\La A(K_\infty) = \chr_\La(\scrE(K_\infty)/\scrC(K_\infty)).
\]
\end{theorem}
In \cite[Theorem~4.1(ii)]{R} and \cite[Theorem~2(ii)]{R2} it is proved
that both sides are nonzero at each $\eta$-factor, that $\chr_\La
A(K_\infty)$ divides $\chr_\La(\scrE(K_\infty)/\scrC(K_\infty))$, and
that the $\eta$-factors are equal when $\eta$ is nontrivial on the
decomposition group of $p$ in $\Delta$. To get equality for the
remaining $\eta$, we may thus reduce to the case where $p$ is totally
split in $K_0/K$. We also specialize our notation to the case where
$K_\infty^\Delta$ is an arbitrary $\bbZ_p^1$-extension of $K$. We index finite
subextensions $F$ of $K_\infty/K_0$ as $F = K_n =
K_\infty^{\Ga^{p^n}}$ for $n \geq 0$. Fix a topological generator
$\ga \in \Ga$, and for brevity write $\La_n = \bbZ_p[\scrG/\Ga^{p^n}]
= \La/(\ga^{p^n}-1)$.
There is a unique $\bbZ_p^2$-extension of $K$, and it contains all
$\bbZ_p$-extensions of $K$. This extension is unramified at all
primes not dividing $p$, and Lubin--Tate theory shows it is totally
ramified at $p$. The same ramification behavior is true of any
$\bbZ_p$-extension, as well as of $K_\infty/K_0$ because $p$ is
totally split in $K_0/K$. In particular, if $S_n$ denotes the set of
places of $K_n$ lying over $p$, then the restriction maps $S_{n+1} \to
S_n$ are bijections, and $S_n$ is a principal homogeneous
$\Delta$-set. Fixing once and for all $v_0 \in S_0$, with unique lift
$v_n \in S_n$, and declaring $v_n$ to be the basepoint of $S_n$, gives an
identification $\bbZ_p[S_n] \cong \bbZ_p[\Delta]$ of
$\bbZ_p[\Delta]$-modules. We write $\invt$ for the composite of the
semilocal restriction map, the invariant maps of local class field
theory, and this identification:
\[
\invt \cn
H^2(G_{K_n,\{p\}},\bbZ_p(1))
\to
\bigoplus_{v \in S_n} H^2(G_{K_{n,v}},\bbZ_p(1))
\cong
\bbZ_p[S_n]
\cong
\bbZ_p[\Delta].
\]
Also, it follows that $p-1$ does not divide the ramification degree of
$K_\infty/\bbQ$ at $p$, so that $\mu_{p^\infty}(K_{\infty,v}) = 1$ for
any place $v$ of $K_\infty$ lying over $p$. Therefore, for $F/K_0$
finite the group $(\calO_F \otimes_\bbZ \bbZ_p)^\times$ is already
pro-$p$.
Since $\chr_\La A(K_\infty)$ divides
$\chr_\La(\scrE(K_\infty)/\scrC(K_\infty))$, their Iwasawa $\mu$- and
$\la$-invariants, considered as $\bbZ_p[\![\Ga]\!]$-modules, satisfy
\begin{equation}\label{E:rubin}
\mu(A(K_\infty)) \leq \mu(\scrE(K_\infty)/\scrC(K_\infty)),
\qquad
\la(A(K_\infty)) \leq \la(\scrE(K_\infty)/\scrC(K_\infty)).
\end{equation}
We shall improve these inequalities to the claim that for some $\ep
\in \{0,1\}$ one has
\[
\mu(A(K_\infty)) = \mu(\scrE(K_\infty)/\scrC(K_\infty)),
\qquad
\ep + \la(A(K_\infty)) = \la(\scrE(K_\infty)/\scrC(K_\infty)),
\]
and additionally
\[
\rank_{\bbZ_p} A(K_\infty)_\scrG = 0,
\qquad
\rank_{\bbZ_p} (\scrE(K_\infty)/\scrC(K_\infty))_\scrG = \ep.
\]
These computations are equivalent to the claim that
\begin{equation}\label{E:subtheorem}
(\chr_\La \bbZ_p)^\ep \cdot \chr_\La A(K_\infty)
= \chr_\La \scrE(K_\infty)/\scrC(K_\infty).
\end{equation}
Granted \eqref{E:subtheorem}, let us show how to deduce the theorem.
Let $K'_\infty$ denote the compositum of $K_0$ with the unique
$\bbZ_p^2$-extension of $K$, and write $\scrG' = \Gal(K'_\infty/K) =
\Delta \times \Ga'$, $\La' = \La(\scrG') = \bbZ_p[\![\scrG']\!]$, and
$\proj \cn \La' \twoheadrightarrow \La$. By Rubin's theorem, there
exist $f' \in \La(\scrG')$ and $f \in \La(\scrG)$ with
\[
f' \cdot \chr_{\La'} A(K'_\infty)
=
\chr_{\La'} \scrE(K'_\infty)/\scrC(K'_\infty),
\quad
f \cdot \chr_\La A(K_\infty)
=
\chr_\La \scrE(K_\infty)/\scrC(K_\infty).
\]
By \cite[Corollary 7.9(i)]{R} one has $\proj(f') = f$ up to a unit in
$\La$. Since $\proj$ is a homomorphism of semilocal rings that is a
bijection on local factors and restricts to a local homomorphism on
each local factor, it follows that $f'$ is a unit (resp.\ restricts to
a unit over a given local factor) in $\La'$ if and only if $f$ is a
unit (resp.\ restricts to a unit over the corresponding local factor)
in $\La$. On the other hand, \eqref{E:subtheorem} implies that $f$
divides $\chr_\La \bbZ_p$ in $\La$. Since $(\chr_\La \bbZ_p)^\eta =
\La^\eta$, the unit ideal, if $\eta \neq {\bf1}$, we deduce the
identity of the theorem for both $\bbZ_p^1$- and $\bbZ_p^2$-extensions
over each such $\eta$-factor. We only have left to consider the case
where $\eta={\bf1}$, or rather where $\Delta$ is trivial and $K_0=K$.
\begin{lemma}
Write $R = \bbZ_p[\![S,T]\!]$, and for $a,b \in \bbZ_p$ not both
divisible by $p$, write $R_{a,b} = R/((1+S)^a(1+T)^b-1)$ with
$\pi_{a,b} \cn R \twoheadrightarrow R_{a,b}$. We identify $R_{a,b}
\cong \bbZ_p[\![U]\!]$, where $U = \pi_{a,b}(S)$ if $p \nmid b$ and
$U = \pi_{a,b}(T)$ otherwise.
Suppose $g \in R$ is such that for all $a,b$ above, $\pi_{a,b}(g)$
divides $U$ in $R_{a,b}$. Then $g$ is a unit.
\end{lemma}
\begin{proof}
Write $g = x + yS + zT + O((S,T)^2)$ with $x,y,z \in \bbZ_p$; we are
to show that $p \nmid x$. Since $\pi_{0,1}(g)$ divides $U$ in
$R_{0,1}$, and $R_{0,1}$ is a UFD with $U$ a prime element, it follows
that $\pi_{0,1}(g)$ is either a unit or $U$ times a unit. As
$\pi_{0,1}(g) = x + yU + O(U^2)$, the first case is equivalent to $p
\nmid x$, and the second case is equivalent to $x=0$ and $p \nmid y$.
But in the second case the identity
\[
g = yS + zT + O((S,T)^2) = (1+S)^y(1+T)^z-1 + O((S,T)^2)
\]
would imply $\pi_{y,z}(g) = 0 + O(U^2)$, that is $U^2$ divides
$\pi_{y,z}(g)$ in $R_{y,z}$, contradicting that $\pi_{y,z}(g)$ divides
$U$.
\end{proof}
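To illustrate why the full family of specializations is needed (this
example is not used below), consider $g = S$. Whenever $p \nmid b$
one has $\pi_{a,b}(g) = U$, which certainly divides $U$. But when $p
\mid b$ (so that $p \nmid a$), the relation $(1+S)^a(1+T)^b = 1$ in
$R_{a,b}$ gives $1+S = (1+T)^{-b/a}$, hence
\[
\pi_{a,b}(S) = (1+T)^{-b/a} - 1 = -(b/a)T + O(T^2),
\]
which does not divide $U = T$ because $p \mid b/a$. In particular,
the subfamily of specializations with $p \nmid b$ does not suffice to
detect that a given $g$ is a unit.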
Choose a $\bbZ_p$-basis $\ga_1,\ga_2 \in \Ga'$, so that $\ker(\Ga'
\twoheadrightarrow \Ga) = (\ga_1^a\ga_2^b)^{\bbZ_p}$ for some $a,b \in
\bbZ_p$ not both divisible by $p$. Set $S=\ga_1-1,T=\ga_2-1 \in
\La'$, and note that $\ker(\La' \twoheadrightarrow \La)$ is generated
by $(1+S)^a(1+T)^b-1$, so that the map $\La' \twoheadrightarrow \La$
is identified with the map $\pi_{a,b} \cn R \twoheadrightarrow
R_{a,b}$ of the preceding lemma. Under this identification, the
augmentation ideal $\chr_\La \bbZ_p$ is generated by $U \in R_{a,b}$,
so we have that $\pi_{a,b}(f') = f$ divides $U$. Since
$K_\infty^\Delta = K_\infty$ was allowed to be any $\bbZ_p$-extension
of $K$, and conversely every such pair of $a,b$ arises from some
choice of $K_\infty$, the preceding lemma shows that $f'$ is a unit,
and therefore so is $f$, proving the theorem at once for $\bbZ_p^1$-
and $\bbZ_p^2$-extensions.
We begin the proof of \eqref{E:subtheorem} (and no longer assume that
$K_0=K$). As mentioned at the beginning of this section, our approach
is to use base change of Selmer complexes to measure the failure of
the maps $(\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}} \to
\scrE(K_n)/\scrC(K_n)$ and $A(K_\infty)_{\Ga^{p^n}} \to A(K_n)$ to be
isomorphisms.
Since we use base change in the derived category, we give some
generalities on the operation $\Lotimes_\La \La_n$. We first compute
that $\La_n[0] \cong [\La \xrightarrow{\ga^{p^n}-1} \La]$ as objects
in the derived category of $\La$-modules, the latter concentrated in
degrees $-1,0$, so that for any $\La$-module (resp.\ complex of
$\La$-modules) $X$ one may compute $X \Lotimes_\La \La_n$ as $[X
\xrightarrow{\ga^{p^n}-1} X]$ (resp.\ as the mapping cone of
$\ga^{p^n}-1$ on $X$). The induced map $X \Lotimes_\La \La_{n+1} \to
X \Lotimes_\La \La_n$ corresponds to the morphism $[X
\xrightarrow{\ga^{p^{n+1}}-1} X] \to [X \xrightarrow{\ga^{p^n}-1}
X]$ given by multiplication by $1+\ga^{p^n}+\cdots+\ga^{(p-1)p^n}$
in shift degree $-1$, and by the identity in shift degree $0$.
Alternatively, the $\Tor$ spectral sequence degenerates to short exact
sequences
\begin{equation}\label{E:generic-base-change}
0 \to
H^i(X)_{\Ga^{p^n}}
\to
H^i(X \Lotimes_\La \La_n)
\to
H^{i+1}(X)^{\Ga^{p^n}}
\to 0,
\end{equation}
and the natural morphism from the above sequence for $n+1$ to the
sequence for $n$ is given by the natural projection on the first term,
and by multiplication by $1+\ga^{p^n}+\cdots+\ga^{(p-1)p^n}$ on the
last term. The Bockstein homomorphism $\beta = \beta_X$, defined as
the connecting homomorphism in the exact triangle
\begin{multline*}
X \Lotimes_\La
\left(\La_n \xrightarrow{\ga^{p^n}-1} \La/(\ga^{p^n}-1)^2
\to \La_n \to \La_n[1]\right) \\
\cong
\left(X \Lotimes_\La \La_n
\xrightarrow{\ga^{p^n}-1}
X \Lotimes_\La \La/(\ga^{p^n}-1)^2
\to
X \Lotimes_\La \La_n
\xrightarrow\beta
X \Lotimes_\La \La_n[1]\right),
\end{multline*}
is computed on cohomology as the composite
\begin{multline*}
H^i(\beta) \cn
H^i(X \Lotimes_\La \La_n)
\twoheadrightarrow
H^i(X \Lotimes_\La \La_n)/H^i(X)_{\Ga^{p^n}} \\
\cong
H^{i+1}(X)^{\Ga^{p^n}}
\hookrightarrow
H^{i+1}(X)
\twoheadrightarrow
H^{i+1}(X)_{\Ga^{p^n}}
\hookrightarrow
H^{i+1}(X \Lotimes_\La \La_n).
\end{multline*}
Note that if $Z$ is a finitely generated, torsion $\La$-module, then
$\rank_{\bbZ_p} Z^{\Ga^{p^n}} = \rank_{\bbZ_p} Z_{\Ga^{p^n}}$.
If $X$ satisfies $X = X^\Ga$, then the above computations reduce to $X
\Lotimes_\La \La_n \cong X[1] \oplus X$, in such a way that the
natural map $X \Lotimes_\La \La_{n+1} \to X \Lotimes_\La \La_n$ is
identified with multiplication by $p$ in shift degree $-1$, and with
the identity map in shift degree $0$. The Bockstein homomorphism
\[
\beta \cn
X[1] \oplus X = X \Lotimes_\La \La_n
\to
X \Lotimes_\La \La_n[1] = X[2] \oplus X[1]
\]
is the identity map on $X[1]$ and zero on the other factors. In this
scenario, we write $\beta\inv = \beta_X\inv \cn X[2] \oplus X[1] \to
X[1] \oplus X$ for the map that is inverse to this identity map on
$X[1]$ and zero on the other factors. Any morphism $f \cn Y \to X$
gives rise to a morphism $f \Lotimes_\La \La_n \cn Y \Lotimes_\La
\La_n \to X \Lotimes_\La \La_n = X[1] \oplus X$. Writing $f
\otimes_\La \La_n$ for the projection of $f \Lotimes_\La \La_n$ onto
the second component, $X$, the commutative diagram
\[\begin{array}{ccccc}
Y \Lotimes_\La \La_n
& \xrightarrow{f \Lotimes_\La \La_n} &
X \Lotimes_\La \La_n
& = &
X[1] \oplus X \\
\beta_Y\downarrow\phantom{\beta_Y}
& &
\beta_X\downarrow\phantom{\beta_X}
& &
\phantom\sim\searrow\sim \\
Y \Lotimes_\La \La_n[1]
& \xrightarrow{f \Lotimes_\La \La_n[1]} &
X \Lotimes_\La \La_n[1]
& = &
X[2] \oplus X[1]
\end{array}\]
shows that the projection of $f \Lotimes_\La \La_n$ onto the first
component, $X[1]$, is computed by $\beta_X\inv \circ (f \otimes_\La
\La_n)[1] \circ \beta_Y$.
We now return to the setting of the theorem, recalling \Nekovar's
constructions of the fundamental invariants of number fields in terms
of Selmer complexes in \cite[\S9.2,\S9.5]{N} (with notations adapted
to our situation). Throughout, $n \geq 0$ ranges over nonnegative
integers. For brevity we write
\[
\bfR\Ga_n = \bfR\Ga_\cont(G_{K_n,\{p\}},\bbZ_p(1)),
\qquad
\bfR\Ga_\Iw
= \bfR\Ga_\Iw(K_\infty/K_0,\bbZ_p(1))
= \bfR\!\llim_n \bfR\Ga_n,
\]
and $H^i_? = H^i(\bfR\Ga_?)$ for $? \in \{n,\Iw\}$. Then one
has the computations
\begin{gather*}
H^i_n = 0,\ i \neq 1,2, \qquad
H^1_n = \calO_{K_n,\{p\}}^\times \otimes_\bbZ \bbZ_p, \\
0 \to
\Pic(\calO_{K_n,\{p\}}) \otimes_\bbZ \bbZ_p
\to
H^2_n
\xrightarrow{\invt}
\bbZ_p[\Delta]
\xrightarrow{\summ}
\bbZ_p
\to 0,
\end{gather*}
and, passing to inverse limits (Mittag-Leffler holds by compactness),
\begin{gather*}
H^i_\Iw = 0,\ i \neq 1,2, \qquad
H^1_\Iw = \llim_n (\calO_{K_n,\{p\}}^\times \otimes_\bbZ \bbZ_p), \\
0 \to \llim_n (\Pic(\calO_{K_n,\{p\}}) \otimes_\bbZ \bbZ_p)
\to
H^2_\Iw
\xrightarrow{\invt}
\bbZ_p[\Delta]
\xrightarrow{\summ}
\bbZ_p \to 0.
\end{gather*}
Let $U^- = \bbZ_p[\Delta][-1] \oplus \bbZ_p[\Delta][-2]$, considered
as a perfect complex of $\La$-modules, or as a complex of
$\La_n$-modules. One constructs a map $i^-_n \cn \bfR\Ga_n \to U^-$
via the local valuation maps in degree one and the local invariant
maps in degree two, and obtains a map $i^-_\Iw \cn \bfR\Ga_\Iw \to
U^-$ from the $i^-_n$ by taking the inverse limit on $n$. By taking
mapping fibers of $i^-_n$ and $i^-_\Iw$, one obtains complexes
$\bfR\wt\Ga_{f,n}$ of $\La_n$-modules and a perfect complex
$\bfR\wt\Ga_{f,\Iw}$ of $\La$-modules sitting in exact triangles
\[
\bfR\wt\Ga_{f,n}
\to
\bfR\Ga_n
\xrightarrow{i^-_n}
U^-
\to
\bfR\wt\Ga_{f,n}[1]
\]
and
\[
\bfR\wt\Ga_{f,\Iw}
\to
\bfR\Ga_\Iw
\xrightarrow{i^-_\Iw}
U^-
\to
\bfR\wt\Ga_{f,\Iw}[1].
\]
Writing $\wt H^i_{f,?} = H^i(\bfR\wt\Ga_{f,?})$ for $? \in
\{n,\Iw\}$, one has the computations
\[
\wt H^i_{f,n} = \begin{cases}
0 & i \neq 1,2,3 \\
\scrE(K_n) & i=1 \\
A(K_n) & i=2 \\
\bbZ_p & i=3,
\end{cases}
\qquad \text{and} \qquad
\wt H^i_{f,\Iw} = \begin{cases}
0 & i \neq 1,2,3 \\
\scrE(K_\infty) & i=1 \\
A(K_\infty) & i=2 \\
\bbZ_p & i=3.
\end{cases}
\]
By control for Galois cohomology, the natural map $\bfR\Ga_\Iw
\Lotimes_\La \La_n \to \bfR\Ga_n$ is an isomorphism, compatible with
varying $n$. Since $U^- = (U^-)^\Ga$, one has the computation $U^-
\Lotimes_\La \La_n \cong U^-[1] \oplus U^-$. It follows from the
definition of $i^-_\Iw$ as an inverse limit that $i^-_\Iw \otimes_\La
\La_n = i^-_n$, so that $i^-_\Iw \Lotimes_\La \La_n = (\beta_{U^-}\inv
\circ i^-_n[1] \circ \beta_{\bfR\Ga_n}, i^-_n)$. Thus we have a
commutative diagram
\[\begin{array}{ccccccc}
\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n
& \to &
\bfR\Ga_n
& \xrightarrow{i^-_\Iw \Lotimes_\La \La_n} &
U^-[1] \oplus U^-
& \to &
\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n[1] \\
& & =\downarrow\phantom= & & \pr_2\downarrow\phantom{\pr_2} \\
\bfR\wt\Ga_{f,n}
& \to &
\bfR\Ga_n
& \xrightarrow{i^-_n} &
U^-
& \to &
\bfR\wt\Ga_{f,n}[1],
\end{array}\]
which we complete to a morphism of exact triangles via a morphism
$\BC_n \cn \bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n \to
\bfR\wt\Ga_{f,n}$. Taking mapping fibers of the resulting morphism of
triangles gives an exact triangle
\[
\Fib(\BC_n) \to 0 \to U^-[1] \to \Fib(\BC_n)[1],
\]
hence an isomorphism $\Fib(\BC_n) \cong U^-$ and an exact triangle
\begin{equation}\label{E:BC-cone}
U^-
\xrightarrow{j_n}
\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n
\xrightarrow{\BC_n}
\bfR\wt\Ga_{f,n}
\xrightarrow{k_n}
U^-[1].
\end{equation}
It is easy to compute that $j_n$ is the composite of the inclusion
$U^- \hookrightarrow U^- \oplus U^-[-1]$ and the shifted connecting
homomorphism $U^- \oplus U^-[-1] \to \bfR\wt\Ga_{f,\Iw} \Lotimes_\La
\La_n$. The construction of the snake lemma shows that $k_n$ is the
composite
\[
\bfR\wt\Ga_{f,n}
\to
\bfR\Ga_n
\xrightarrow{i^-_\Iw \Lotimes_\La \La_n}
U^-[1] \oplus U^-
\xrightarrow{\pr_1}
U^-[1],
\]
or in other words the composite of $\bfR\wt\Ga_{f,n} \to \bfR\Ga_n$
with $\beta_{U^-}\inv \circ i^-_n[1] \circ \beta_{\bfR\Ga_n}$. Of
course, the source or target of $H^i(k_n) \cn \wt H^i_{f,n} \to
H^{i+1}U^-$ is zero if $i \neq 1$, and if $i=1$ this computation
simplifies to
\[
\scrE(K_n)
\to
H^1_n
\xrightarrow\beta
H^2_n
\xrightarrow{\invt}
\bbZ_p[\Delta].
\]
The kernel of $\beta$ contains the universal norms in $\scrE(K_n)$ for
$K_\infty/K_n$, and in particular $\scrC(K_n)$ (see
\cite[Proposition~II.2.5]{D} for the norm relations), which itself is
of finite index in $\scrE(K_n)$. Since $\bbZ_p[\Delta]$ is torsion
free, it follows that $H^1(k_n) = 0$, too. Since $H^*(k_n)=0$, the
long exact sequence associated to the triangle \eqref{E:BC-cone}
breaks up into the short exact rows in the following diagrams, and
\eqref{E:generic-base-change} gives the short exact columns:
\begin{equation}\label{E:crosses}
\begin{array}{r@{\ }c@{\ }l}
& \scrE(K_\infty)_{\Ga^{p^n}} \\ & \downarrow \\
\bbZ_p[\Delta] \to & H^1(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n)
& \to \scrE(K_n), \\
& \downarrow \\ & A(K_\infty)^{\Ga^{p^n}}
\end{array}
\quad
\begin{array}{r@{\ }c@{\ }l}
& A(K_\infty)_{\Ga^{p^n}} \\ & \downarrow \\
\bbZ_p[\Delta] \to & H^2(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n)
& \to A(K_n). \\
& \downarrow \\ & \bbZ_p
\end{array}
\end{equation}
(The triangle \eqref{E:BC-cone} also gives the computation
$H^3(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n) \cong \bbZ_p$.) The
composite arrows from the top to the right points of the two diagrams
give the respective control maps $\scrE(K_\infty)_{\Ga^{p^n}} \to
\scrE(K_n)$ and $A(K_\infty)_{\Ga^{p^n}} \to A(K_n)$.
It is crucial to compute the transition morphisms from the diagrams
\eqref{E:crosses} associated to $n+1$ to those associated to $n$.
Explicitly, the transition maps on the upper (resp.\ lower, right)
points are the natural projections (resp.\ multiplication by
$1+\ga^{p^n}+\cdots+\ga^{(p-1)p^n}$, the norm maps), and the maps on
the left points are \emph{multiplication by $p$} because the term
$U^-[1]$ in the sequence \eqref{E:BC-cone} is identified with the first
summand of $U^- \Lotimes_\La \La_n \cong U^-[1] \oplus U^-$.
We consider the second diagram in \eqref{E:crosses}. The computation
$\rank_{\bbZ_p} A(K_\infty)_{\Ga^{p^n}} = \de-1$ is immediate. A
diagram chase identifies the composite map $\bbZ_p[\Delta] \to \bbZ_p$
as the sum map. Since $\bbZ_p$ is uniquely a $\La$-direct summand of
$\bbZ_p[\Delta]$, we may canonically refine the diagram to the short
exact sequence
\begin{equation}\label{E:SES-A}
0 \to \bbZ_p[\Delta]^!
\to A(K_\infty)_{\Ga^{p^n}} \to
A(K_n) \to 0,
\end{equation}
and in particular there is an injection
$A(K_\infty)_{\Ga^{p^n}}[p^\infty] \hookrightarrow A(K_n)$ of finite
abelian groups. Applying the snake lemma to the commutative diagram
\[\begin{array}{rcccccl}
0 \to &
\bbZ_p[\Delta]^!
& \to &
\displaystyle
\frac{A(K_\infty)_{\Ga^{p^{n+1}}}}{A(K_\infty)_{\Ga^{p^{n+1}}}[p^\infty]}
& \to &
\displaystyle
\frac{A(K_{n+1})}{A(K_\infty)_{\Ga^{p^{n+1}}}[p^\infty]}
& \to 0 \\
& p \downarrow \phantom{p} & & \downarrow & & \downarrow \\
0 \to &
\bbZ_p[\Delta]^!
& \to &
\displaystyle
\frac{A(K_\infty)_{\Ga^{p^n}}}{A(K_\infty)_{\Ga^{p^n}}[p^\infty]}
& \to &
\displaystyle
\frac{A(K_n)}{A(K_\infty)_{\Ga^{p^n}}[p^\infty]}
& \to 0,
\end{array}\]
and examining the final column, we get the exact sequence
\[
0 \to \bbZ_p[\Delta]^!/p
\to
\frac{A(K_{n+1})}{A(K_\infty)_{\Ga^{p^{n+1}}}[p^\infty]}
\to
\frac{A(K_n)}{A(K_\infty)_{\Ga^{p^n}}[p^\infty]} \to 0.
\]
Since orders multiply across a short exact sequence of finite groups
and $\#(\bbZ_p[\Delta]^!/p) = p^{\de-1}$, induction on $n$ implies that
\[
\frac{\#A(K_n)}{\#A(K_\infty)_{\Ga^{p^n}}[p^\infty]}
=
p^{(\de-1)n} \frac{\#A(K_0)}{\#A(K_\infty)_{\Ga^{p^0}}[p^\infty]},
\]
so that
\begin{gather*}
\mu(A(K_\infty))
= \mu(\{\#A(K_\infty)_{\Ga^{p^n}}[p^\infty]\})
= \mu(\{\#A(K_n)\}), \\
\la(A(K_\infty))
= \de-1 + \la(\{\#A(K_\infty)_{\Ga^{p^n}}[p^\infty]\})
= \la(\{\#A(K_n)\}).
\end{gather*}
We now consider the first diagram in \eqref{E:crosses}. Since
$\scrE(K_n)$ is a free $\bbZ_p$-module, we may choose a splitting
$H^1(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n) \cong \bbZ_p[\Delta]
\oplus \scrE(K_n)$. It follows from the norm relations on elliptic
units that the map $\scrC(K_\infty)_{\Ga^{p^n}} \to \scrC(K_n)$ is
surjective; combining this fact with the proof of
\cite[Theorem~7.7]{R} shows that the kernel of this map is
$(\scrC(K_\infty)_{\Ga^{p^n}})^\Ga$, is $\scrG$-isomorphic to $\bbZ_p$,
and is a $\bbZ_p$-direct summand of $\scrC(K_\infty)_{\Ga^{p^n}}$.
Moreover, this map followed by the inclusion $\scrC(K_n) \subseteq
\scrE(K_n)$ is equal to the composite
\[
\scrC(K_\infty)_{\Ga^{p^n}} \to \scrE(K_\infty)_{\Ga^{p^n}} \to
\scrE(K_n),
\]
which shows that the subset of $\scrC(K_\infty)_{\Ga^{p^n}}$ mapping
into the summand $\bbZ_p[\Delta]$ is again
$(\scrC(K_\infty)_{\Ga^{p^n}})^\Ga$. Writing $\bbZ_p[\Delta] =
\bbZ_p[\Delta]^{\bf1} \oplus \bbZ_p[\Delta]^!$, the image $I_n$ of
$(\scrC(K_\infty)_{\Ga^{p^n}})^\Ga \to \bbZ_p[\Delta]$ is equal to
either $0$ or $p^{e_n}\bbZ_p[\Delta]^{\bf1}$ with $e_n \geq 0$. If
$I_n = 0$ we set $e_n = 0$, so that in all cases we have
$(\bbZ_p[\Delta]/I_n)[p^\infty] \cong \bbZ/p^{e_n}$. Write $\ep_n = 1
- \rank_{\bbZ_p} I_n$. Again considering the proof of
\cite[Theorem~7.7]{R} shows that, in the commutative diagram
\[\begin{array}{rcccccl}
0 \to &
\bbZ_p
& \to &
\scrC(K_\infty)_{\Ga^{p^{n+1}}}
& \to &
\scrC(K_{n+1})
& \to 0 \\
& f \downarrow \phantom{f} & & \downarrow & & \downarrow \\
0 \to &
\bbZ_p
& \to &
\scrC(K_\infty)_{\Ga^{p^n}}
& \to &
\scrC(K_n)
& \to 0,
\end{array}\]
the map $f$ is multiplication by $p$ (up to a unit). Let $v_n$ be a
basis vector for $I_n$ if $\ep_n=0$, and $v_n=0$ if $\ep_n=1$. The
commutativity of the square
\[\begin{array}{cccc}
\bbZ_p & \xrightarrow{\cdot v_{n+1}} & I_{n+1} \subseteq & \bbZ_p[\Delta] \\
f \downarrow \phantom{f} & & & \phantom{p} \downarrow p \\
\bbZ_p & \xrightarrow{\cdot v_n} & I_n \subseteq & \bbZ_p[\Delta] \\
\end{array}\]
implies that $v_n=0$ if and only if $v_{n+1}=0$, so that
$\ep_n=\ep_{n+1}$ is independent of $n$; denote it henceforth by
$\ep$. When $\ep=0$, it is also easy to deduce from this
commutativity that $e_n=e_{n+1}$ is independent of $n$; denote it
henceforth by $e$.
The definition of $I_n$ allows us to modify the first diagram in
\eqref{E:crosses} to a short exact sequence
\begin{equation}\label{E:SES-EC}
0 \to (\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}}
\to
\bbZ_p[\Delta]/I_n \oplus \scrE(K_n)/\scrC(K_n)
\to
A(K_\infty)^{\Ga^{p^n}} \to 0.
\end{equation}
One has $\rank_{\bbZ_p} A(K_\infty)^{\Ga^{p^n}} = \rank_{\bbZ_p}
A(K_\infty)_{\Ga^{p^n}} = \de-1$, and combining this with the above
sequence gives $\rank_{\bbZ_p}
(\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}} = \ep$. The above
sequence also gives an exact sequence of finite abelian groups
\[
0
\to (\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}}[p^\infty]
\to \bbZ/p^e \oplus \scrE(K_n)/\scrC(K_n)
\to A(K_\infty)^{\Ga^{p^n}}[p^\infty],
\]
where $\#A(K_\infty)^{\Ga^{p^n}}[p^\infty]$ is bounded independently
of $n$. It follows that
\begin{gather*}
\mu(\scrE(K_\infty)/\scrC(K_\infty))
= \mu(\{\#\scrE(K_n)/\scrC(K_n)\}), \\
\la(\scrE(K_\infty)/\scrC(K_\infty))
= \ep + \la(\{\#\scrE(K_n)/\scrC(K_n)\}).
\end{gather*}
The analytic class number formula gives
\[
\#A(K_n) = \#\scrE(K_n)/\scrC(K_n),
\]
and the computations $\rank_{\bbZ_p} A(K_\infty)_\scrG = 0$ and
$\rank_{\bbZ_p} (\scrE(K_\infty)/\scrC(K_\infty))_\scrG = \ep$ follow
from \eqref{E:SES-A} and \eqref{E:SES-EC}. This establishes
\eqref{E:subtheorem}, and completes the proof of the theorem.
\thebibliography{Nak}
\bibitem[dS]{D} de~Shalit, Ehud, \emph{Iwasawa theory of elliptic curves
with complex multiplication}. Perspectives in Math.\ {\bf 3},
Academic Press Inc., Boston, MA, 1987.
\bibitem[HL]{HL} Harron, Robert and Lei, Antonio, Iwasawa theory of
symmetric powers of CM modular forms at supersingular primes. To
appear in Journal de Th\'eorie des Nombres de Bordeaux.
\bibitem[Ka]{K} Kato, Kazuya, $p$-adic Hodge theory and values of zeta
functions of modular forms. Cohomologies $p$-adiques et
applications arithm\'etiques III. \emph{Ast\'erisque} {\bf 295}
(2004), 117--290.
\bibitem[Nak]{Nak} Nakamura, Kentaro, Iwasawa theory of de~Rham
$(\vphi,\Ga)$-modules over the Robba ring.
\emph{J.\ Inst.\ Math.\ Jussieu} {\bf 13} (2014), no.\ 1, 65--118.
\bibitem[Nek]{N} \Nekovar, Jan, \emph{Selmer complexes}.
\emph{Ast\'erisque} {\bf 310} (2006).
\bibitem[P]{P} Pottharst, Jonathan, Analytic families of finite-slope Selmer
groups. \emph{Algebra and Number Theory} {\bf 7} (2013), no.\ 7,
1571--1612.
\bibitem[P2]{P2} Pottharst, Jonathan, Cyclotomic Iwasawa theory of motives.
\emph{Preprint}, version 30 July 2012.
\bibitem[Ru]{R} Rubin, Karl, The ``main conjectures'' of Iwasawa theory for
imaginary quadratic fields. \emph{Invent.\ Math.} {\bf 103} (1991),
no.\ 1, 25--68.
\bibitem[Ru2]{R2} Rubin, Karl, More ``Main Conjectures'' for imaginary quadratic
fields. \emph{Elliptic curves and related topics}, CRM
Proc.\ Lecture Notes {\bf 4}, Amer.\ Math.\ Soc., Providence, RI,
1994, 23--28.
\bibitem[ST]{ST} Schneider, Peter and Teitelbaum, Jeremy, Algebras of
$p$-adic distributions and admissible representations.
\emph{Invent.\ Math.} {\bf 153} (2003), no.\ 1, 145--196.
\end{document}
\section{INTRODUCTION} \label{section:introduction} Stellar winds of hot
massive stars, primarily those with M $\ge$ 8 M$_{\odot}$, have important
effects on stellar and galactic evolution. These winds provide
enrichment to the local interstellar medium via stellar mass loss. On a
larger scale, the cumulative enrichment and energy from the collective
winds of massive stars in a galaxy are expected to play a pivotal role
in driving galactic winds \citep{leitherer92,oppenheimer06,mckee07}.
The number of massive stars in any star-forming galaxy, as well as their
tendency to be found in clusters, are critical parameters for
determining a galaxy's energy budget and evolution. \citet{oskinova05}
and \citet{agertz13} showed that the energy from the winds of massive
stars will dominate over the energy from supernovae in the early years
of massive star cluster evolution. While substantial progress has been
made over the last several decades in modeling massive star winds
\citep{puls96}, many questions remain, such as the degree of clumping of
the winds, the radial location of different ions and temperature regimes
in the wind with respect to the stellar surface, and the origin of
Corotating Interaction Regions (CIRs) representing large-scale wind
perturbations. X-ray observations have provided powerful diagnostic
tools for testing models, but a fully consistent description of the
detailed structure of a stellar wind is still elusive.
Variability in the winds of massive stars can be an important probe
of the structure of the stellar winds. There can be multiple causes of
X-ray variability in massive stars. Large-scale structures in the winds,
as traced by Discrete Absorption Components \citep[DACs,][]{kaper99} and
possibly linked to CIRs, may be associated with shocks in the wind and
thereby potentially affect the X-ray emission. X-ray variations of this
type have probably been detected for $\zeta$ Oph, $\zeta$ Pup, and $\xi$
Per \citep{osk01,naze13,massa14}. Also, X-ray variability with the same
period as, but larger amplitude than, the known pulsational activity in the
visible domain was recently detected in $\xi^1$ CMa \citep{osk14} and
possibly in the hard band of $\beta$ Cru \citep{cohen08}. The exact
mechanism giving rise to these changes remains unclear. Notably, other
pulsating massive stars, such as $\beta$ Cen \citep{raassen05} and
$\beta$ Cep \citep{favata09}, do not show such X-ray ``pulsations''.
Smaller-scale structures such as clumps can also produce X-rays, albeit
at lower energies than large-scale structures. It is also possible that
some X-ray variations in massive stars are stochastic in nature and are
not correlated with any currently known timescale.
Another cause for X-ray variability is possible in magnetic stars. When
a strong global magnetic field exists, the stellar wind is forced to
follow the field lines, and the wind flowing from the two stellar
hemispheres may then collide at the equator, generating X-rays
\citep{babel97}. Such recurrent variations have been detected in
$\theta^1$ Ori C \citep{stelzer05,gagne05}, HD~191612 \citep{naze10},
and possibly Tr16-22 \citep{naze14}, although the absence of large
variations in the X-ray emission of the magnetic star $\tau$ Sco is
puzzling in this context \citep{ignace10}. Even stronger but very
localized magnetic fields could also be present, e.g. associated with
bright spots on the stellar surface that are required to create CIRs
\citep{cranmer96,cantiello09}.
In multiple systems, the collision of the wind of one star with the wind
of another can produce X-ray variations \citep{stevens92}. Wind-wind
collision emission may vary with binary phase, with inverse distance in
eccentric systems, or due to changes in line of sight absorption, as
observed in HD~93403 \citep{rauw02}, Cyg OB2 9 \citep{naze12}, V444
Cyg \citep{lomax15}, and possibly HD~93205 \citep{antokhin03}.
Delta Ori A (Mintaka, HD~36486, 34~Ori) is a nearby multiple system that
includes a close eclipsing binary, $\delta$\ Ori~Aa1
\citep[O9.5~II:][]{walborn72} and $\delta$\ Ori~Aa2 (B1~V: \citet{shenar15}, herein Paper IV), with period $\approx$5.73d
\citep{harvin02,mayer10}. This close binary is orbited by a more distant companion star,
$\delta$\ Ori~Ab ($\approx$B0~IV: \citet{pablo15}, herein
Paper III; Paper IV), having a period of $\approx$346 yr
\citep{heintz87,perryman97,tokovinin14}. The components Aa1 and Aa2
are separated by about 43 R$_{\odot}$ (2.6 $R_{Aa1}$; Paper III), and the
inclination of $\approx$76\degr$\pm$4\degr\ (Paper III) ensures
eclipses. We acquired $\approx$479 ks of high resolution X-ray grating
spectra with a \textit{Chandra}\ Large Program to observe nearly a full period
of \delOri\ Aa (\citet{corcoran15}, herein Paper I).
Simultaneous with the acquisition of the \textit{Chandra}\ data,
Microvariability and Oscillations of STars (MOST) space-based
photometry and ground-based spectroscopy at numerous geographical
locations were obtained and are reported in Paper III.
\input{parameters.tex}
Previous X-ray observations of the \delOri\ A system from \textit{Einstein}\
showed no significant variability \citep{grady84}. ROSAT data for
\delOri\ A were studied by \citet{Haberl93}, who found modest
2-$\sigma$ variability but no obvious phase dependence; the
\citet{corcoran96} reanalysis of the ROSAT data showed similar results.
A single previous \textit{Chandra}\ HETGS observation of \delOri\ Aa was
analyzed by \citet{miller02}. Fitting the emission lines using Gaussian
profiles, they found the profiles to be symmetrical and of low FWHM,
considering the estimated wind velocity. The 60 ks exposure time covers
about 12\% of the orbital period. \citet{pollock13} analyzed a
\textit{Chandra}\ LETGS observation with exposure time of 96 ks, finding some
variability in the zeroth order image; they were not able to detect any
variability in the emission lines between two time splits of the
observation.
This paper is part of a series of papers. Other papers in this series
address the parameters of the composite \textit{Chandra}\ $\approx$479 ks
spectrum (Paper I), the simultaneous MOST and spectroscopic observations
(Paper III), and UV-optical-X-ray wind modeling (Paper IV). In this
paper (Paper II), we investigate variability in the X-ray flux in the
\textit{Chandra}\ spectra. Sect.\ \ref{section:observations} describes the
\textit{Chandra}\ data and processing techniques. Sect.\ \ref{section:lc}
discusses the overall X-ray flux variability of the observations and
period search. Time-resolved and phase-resolved analyses of emission
lines are presented in Sect.\ \ref{section:data_analysis}. In Sect.\
\ref{section:discussion} we relate our results of phase-based variable
emission line widths to a colliding wind model developed for this binary
system in Paper I, and discuss possible additional sources of
variability in \delOri\ Aa. Sect.\ \ref{section:conclusions} presents
our conclusions.
\section{OBSERVATIONS AND DATA REDUCTION} \label{section:observations}
Delta Ori Aa was observed with the \textit{Chandra}\ Advanced CCD Imaging
Spectrometer (ACIS) instrument using the High Energy Transmission
Grating Spectrometer (HETGS) \citep{canizares05} for a total exposure
time of $\approx$479 ks, covering parts of 3 binary periods (see Table
\ref{tab:obs} for a list of observations and binary phases). Four
separate observations were obtained within a 9-day interval. The HETGS
consists of 2 sets of gratings: the Medium Energy Grating (MEG) with a
range of 2.5-31 \ang\ (0.4-5.0 keV) and resolution of 0.023 \ang\ FWHM,
and the High Energy Grating (HEG) with a range of 1.2-15 \ang\ (0.8-10
keV) and resolution of 0.012 \ang\ FWHM. The resolution is approximately
independent of wavelength. The \textit{Chandra}\ ACIS detectors record both MEG
and HEG dispersed grating spectra as well as the zeroth order image.
Due to spacecraft power considerations, it was necessary to use only 5
ACIS CCD chips instead of the requested 6 for these observations. Chip
S5 was not used, meaning wavelengths longer than about 19 \ang\ in the
MEG dispersed spectra and about 9.5 \ang\ in the HEG dispersed spectra
were only recorded for the ``plus'' side of the dispersed spectra,
reducing the number of counts and effective exposure in these wavelength
regions. The standard data products distributed by the \textit{Chandra}\ X-ray
Observatory were further processed with TGCat
software\footnote{available for public download: http://tgcat.mit.edu}
\citep{Hu11}. Specifically, each level 1 event file was processed into a
new level 2 event file using a package of standard CIAO analysis tools
\citep{ciao06}. Additionally, appropriate redistribution matrix (RMF)
and area auxiliary response (ARF) files were calculated for each order
of each spectrum. TGCat processing produced analysis products with
supplemental statistical information, such as broad- and narrow-band
count rates.
\input{obs_table.tex}
MOST photometry observations were obtained of \delOri\ Aa for
approximately 3 weeks, including the 9 days of the \textit{Chandra}\
observations. Fig.\ \ref{most_time} shows the simultaneous MOST
lightcurve aligned in time to the \textit{Chandra}\ lightcurve. The \textit{Chandra}\
lightcurve in this figure is the $\pm$1 orders of the HEG and MEG
combined in the $1.7 \le \lambda \le 25.0$ \ang\ range, binned at 4 ks,
with Poisson errors. Fig.\ \ref{most} shows the same data plotted with
binary phase rather than time. The MOST lightcurves from several orbits
are averaged and overplotted in the figure to show the optical
variability.
Table \ref{table:params} lists the spectral types and radii of $\delta$\ Ori Aa1 and Aa2, as well as the orbital parameters and the MOST secondary periods. In this paper, orbital phase is defined with respect to the binary orbital period: $\phi$=0.0 denotes the time when the secondary is in
front of the primary (deeper optical minimum), and $\phi$=0.5 denotes
the time when the primary is approximately in front of the secondary
(shallower optical minimum). While the phase of primary minimum is fixed by
definition, the secondary minimum at $\phi$=0.5 is only approximate, since the orbit is
slightly elliptical and also varies slowly with apsidal motion. The
actual secondary minimum is currently $\phi$=0.45 (Paper III). Actual
current quadrature phases are $\phi$=0.23 and 0.78. To avoid confusion,
we use the phases in this paper that would be assumed with a circular
orbit (i.e., $\phi$=0 for inferior conjunction, $\phi$=0.5 for superior
conjunction, $\phi$=0.25 and 0.75 for quadrature). Also, we use the
ephemeris of \citet{mayer10}. No evidence of
X-ray emission from the tertiary star was seen in any of the \textit{Chandra}\
observations (Paper I), so the spectra represent only \delOri\ Aa1 and
Aa2.
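The phase convention above amounts to folding the observation times on a linear ephemeris. A minimal sketch follows; the epoch and period used here are placeholder values for illustration, not the actual \citet{mayer10} ephemeris.

```python
import numpy as np

def orbital_phase(hjd, t0, period):
    """Fold heliocentric Julian dates on a linear ephemeris.

    Returns the total phase (whose integer part counts elapsed
    orbits since the epoch) and the fractional binary phase in
    [0, 1), with phase 0 at inferior conjunction of the secondary.
    """
    total = (np.asarray(hjd, dtype=float) - t0) / period
    return total, total % 1.0

# Placeholder ephemeris values (illustrative only):
T0 = 2456277.0   # hypothetical HJD of phi = 0
P = 5.7324       # days; approximate binary period

total, phi = orbital_phase([2456280.0, 2456285.7], T0, P)
```

With the circular-orbit convention adopted in the text, $\phi$=0.25 and 0.75 of the fractional phase then mark the quadratures.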
\begin{figure*}[!tb] \centering\leavevmode
\includegraphics[width=5.5in]{chandra_color_time.ps} \caption{Chandra
X-ray lightcurve from the 2012 campaign with the simultaneous continuous
MOST optical lightcurve. The time intervals for each of the Chandra observations are delineated with vertical lines with the Chandra observation ID (ObsID in Table \ref{tab:obs}) at the top of the figure. The Chandra lightcurves were
calculated from the dispersed spectra in each observation. The four
observations are separated by gaps due to the passage of Chandra through
the Earth's radiation zone as well as necessary spacecraft thermal
control, during which time continued \delOri\ Aa observations were not
possible. Chandra counts per second are on the left y-axis. MOST
differential magnitudes are on the right y-axis.}\label{most_time}
\end{figure*}
\begin{figure*}[!tbh] \centering\leavevmode
\includegraphics[width=5.5in]{chandra_color_phi.ps} \caption{Phased
Chandra X-ray lightcurve from the 2012 campaign with the simultaneous
continuous MOST optical lightcurve. The mean \delOri\ Aa MOST optical
lightcurve is plotted above the Chandra X-ray lightcurve. The four
Chandra observations are shown in red, magenta, blue, and black. Binary
phase is on the x-axis, MOST differential magnitude on the right y-axis,
and Chandra count rate on the left y-axis. The MOST lightcurves have
been smoothed and both lightcurves have been repeated for one binary
orbit for clarity.} \label{most} \end{figure*}
\section{OVERALL VARIABILITY OF X-RAY FLUX}
\label{section:lc}
The lightcurve of the dispersed \textit{Chandra}\ spectrum of \delOri\ Aa,
shown in the lower part of Fig.\ \ref{most_time}, shows that the X-ray
flux was variable throughout, with a maximum amplitude of about
$\pm$15\% within a single observation. During the first
and second observations the X-ray flux varied by $\approx$5-10\%,
followed by a $\approx$15\% decrease in the third and a $\approx$15\%
increase in the fourth. Note that
none of the X-ray minima in the residual lightcurve aligns with an
optical eclipse of the \delOri\ Aa system.
Fig.\ \ref{most_time} suggests an increase in
overall X-ray flux with time. From the beginning to end of the 9-day campaign
there is a $\approx$25\% increase in the mean count rate. The best linear fit to the entire
lightcurve is 96.04 counts/d + 0.002 counts/d $\times$ HJD, with equal weights
for all points.
\input{period.tex}
We first performed a rough period search on the \delOri\ Aa \textit{Chandra}\ lightcurves using the software package Period04 \citep{lenz05}, and found peaks around 4.8d and 2.1d. This method uses a sped-up Deeming algorithm \citep{kurtz85} that is not appropriate for sparse datasets such as ours, because it assumes an independence of sine and cosine terms that is valid only for regularly sampled lightcurves. Therefore, to verify this preliminary conclusion and obtain final results, we instead rely on several methods specifically suited to such sparse lightcurves: a Fourier period search optimally adapted to sparse datasets \citep[][see Fig.~5]{heck85,gosset01}, as well as variance and entropy methods \citep[e.g.][]{schwarz89,cincotta99}. The results of all these methods were mutually consistent within their errors.
Using these tools, we first looked for periods in the raw lightcurve data (in $\mathrm{cts\,s\mone}$). A
period of 5.0$\pm$0.3d was found with an amplitude of $7.1\times
10^{-3}$. We then removed the linear trend described above from the raw data, producing a residual lightcurve. The period searches were repeated, with an identified period of 4.76$\pm$0.3d
(amplitude=$4.7\times 10^{-3}$). After pre-whitening the residual data in Fourier space for this period, an additional significant period of 2.04$\pm$0.05d (amplitude=$3.5\times
10^{-3}$) was found. Each of these periods has a
Significance Level (SL) of $SL<$1\% with the definition of SL as a
test of the probability of rejecting the null hypothesis given that it
is true. If the $SL$ is a very small number, the null hypothesis can be
rejected because the observed pattern has a low probability of occurring
by chance. Fig.\ \ref{fig:period} shows the results of the Fourier
period search method for the raw, residual, and pre-whitened lightcurves, which produced periods of 5.0d, 4.76d, and 2.04d, respectively. Table \ref{table:period} lists the frequency,
period, and amplitude of the periods. Fig.\
\ref{fig:residual_lc} shows the residual lightcurve with the period 4.76d plotted on the x-axis. The residual data were smoothed with a median
filter for the plot only, in order to see the short-term variability
more clearly; the analysis used the unsmoothed residual data
points.
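The detrend-then-search workflow can be sketched as follows. This is a toy illustration on synthetic data (a linear rise plus a 4.76d sinusoid, sampled in four observing blocks), and it substitutes a simple Deeming-style power spectrum for the sparse-data Fourier, variance, and entropy estimators actually employed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sparse lightcurve: four observing blocks over ~9 days,
# with a linear rise plus a 4.76 d sinusoid (values illustrative).
t = np.sort(np.concatenate(
    [rng.uniform(d, d + 1.9, 60) for d in (0.0, 2.33, 4.66, 7.0)]))
rate = 0.5 + 0.012 * t + 4.7e-3 * np.sin(2 * np.pi * t / 4.76)
rate += rng.normal(0.0, 1e-3, t.size)

# Step 1: fit and remove the linear trend.
slope, intercept = np.polyfit(t, rate, 1)
resid = rate - (slope * t + intercept)

# Step 2: Deeming-style power spectrum of the residuals.  (For
# truly sparse sampling this estimator is biased -- hence the
# specialized methods used in the text -- but it serves as a toy.)
freqs = np.linspace(0.05, 1.0, 2000)            # cycles per day
arg = 2 * np.pi * freqs[:, None] * t[None, :]
power = (np.cos(arg) @ resid) ** 2 + (np.sin(arg) @ resid) ** 2

best_period = 1.0 / freqs[np.argmax(power)]     # near 4.76 d here
```

Pre-whitening then subtracts the best-fit sinusoid at this period from the residuals and repeats the search for further periods.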
The 5.0d period in the raw data and the 4.76d period in the residual data with the linear trend removed are considered to be the same period because the errors overlap. Comparing the periods identified in the \textit{Chandra}\ $\delta$\ Ori Aa data with the MOST optical periods, the strongest \textit{Chandra}\ period of 4.76d is consistent within the errors to the MOST$_{F2}$ period of 4.614d (see Table \ref{table:params}). The \textit{Chandra}\ period of 2.04d is consistent with the less significant MOST$_{F10}$ period of 2.138d. There is no evidence of an X-ray period matching the binary period of 5.73d.
Finally, we again searched for periods including the lightcurve from the original \textit{Chandra}\
observation of \delOri\ Aa (ObsID 639, taken in 2001). The lightcurve for ObsID 639 alone did not
yield any statistically significant periods since it covers a much
smaller time interval (60 ks, hence 0.7 d). However, when combined with
the 2012 data, we find results similar to those mentioned above for the raw
data, with a period of 5.0d (and its harmonics at 2.5d) providing the
strongest peak. Residual lightcurves were not analyzed with the early
observation because the linear function (which is probably not truly
linear) could not be determined across an 11-year gap.
\begin{figure*}[!htb]
\includegraphics[width=2.\columnwidth]{deltaori_periodo2.ps}
\caption{Periodograms derived using a Fourier period search adapted for sparse datasets. Frequencies
identified in other wavelengths are shown as red vertical lines. The
left red vertical line corresponds to the binary period (5.73d), and the
two strongest secondary MOST periods (4.613d and 2.448d) are indicated by center
and right red vertical lines. {\it Top two panels:} Periodogram for the
raw and residual data, respectively. {\it 3rd panel:} Periodogram for
the residual data after ``cleaning'' (prewhitening) of the strongest
signal (4.76d), clearly leaving a second period (2.04d). {\it Bottom:}
Associated spectral window, showing the relative positions where aliases
may arise.}
\label{fig:period}
\end{figure*}
\begin{figure}[!htb]
\includegraphics[angle=90,width=1.\columnwidth]{wr_phase.ps}
\caption{The residual count rate lightcurve of 1~ks binned data after
correction for linear fit and filtering by a running median
over $\approx$1/4d bin size. The x-axis is phase for the 4.76d period found using period search techniques. See text for explanation.} \label{fig:residual_lc}
\end{figure}
\section{TIME- AND BINARY PHASE-RESOLVED VARIABILITY}
\label{section:data_analysis}
\subsection{Time-sliced spectra}
\label{section:time_sliced}
The discrete photon-counting characteristic of the ACIS detector allows
the creation of shorter time segments of data from longer observations.
Time-resolved and phase-resolved variability of flux and emission line
characteristics were investigated using a set of short-exposure spectra,
contiguous in time (``time-sliced spectra''), covering the entire
exposure time of the observations, along with individual instrumental
responses to account for any detailed changes in local response with
time, such as might be introduced by the $\approx$1 ks aspect dither
pointing of the telescope. This was accomplished by reprocessing
the set of time-sliced data using TGCat software, taking care to align
the zeroth order images among the time-sliced spectra prior to the
spectral extraction to produce correct energy assignments for the
events. The resulting time segments are of similar exposure time,
$\approx$40 ks, making them easily comparable. Table \ref{table:slice}
lists the 12 time-sliced spectra, along with the beginning and end time
and binary phase range. Our \textit{Chandra}\ observations cover parts of
orbits 396-398 based on the ephemeris. The integer portion of the
binary phase is in reference to the epoch of the ephemeris used. The
decimal portion is the binary phase for the specific orbit.
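The slicing itself is simple bookkeeping over the photon event list. A schematic version follows (the real processing re-extracts each slice with its own instrumental responses via TGCat/CIAO, which this sketch omits; the simulated event times are illustrative):

```python
import numpy as np

def slice_events(event_times, target_exposure):
    """Partition photon arrival times (s) into contiguous slices of
    approximately `target_exposure` seconds each, as with the ~40 ks
    and ~10 ks slices used here.

    Returns a list of (t_start, t_stop, events) tuples.
    """
    event_times = np.sort(np.asarray(event_times, dtype=float))
    span = event_times[-1] - event_times[0]
    n_slices = max(1, int(round(span / target_exposure)))
    edges = np.linspace(event_times[0], event_times[-1], n_slices + 1)
    slices = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # Half-open bins, except the last, which keeps the final event.
        if i == n_slices - 1:
            mask = (event_times >= lo) & (event_times <= hi)
        else:
            mask = (event_times >= lo) & (event_times < hi)
        slices.append((lo, hi, event_times[mask]))
    return slices

# One ~160 ks observation's worth of simulated event times:
rng = np.random.default_rng(0)
events = rng.uniform(0.0, 160_000.0, 5000)
parts = slice_events(events, 40_000.0)   # four ~40 ks slices
```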
\input{slice_table.tex}
In addition to the 12 time-sliced spectra described above, we produced
time-sliced spectra of approximately 10 ks in length from the \textit{Chandra}\
observations, using the same technique as described above. Forty-eight
time-sliced spectra with approximately 10 ks exposure time each were
used in the composite spectral line analysis in Sect.\
\ref{section:composite}. All other analyses used the 40 ks time-sliced
spectra. We have not included the 2001 \textit{Chandra}\ observation, ObsID 639,
in the following time-resolved emission-line analyses, primarily because
the long-term trend of flux variability seen in our 9-day observing
campaign would make interpretation of this early observation
questionable with respect to flux level.
We describe below several different analyses of the variability of the
dispersed spectral data. All comparisons in this Section are made to the binary orbital period, not to the periods found in the X-ray flux in Sect. \ref{section:lc}, because we are interested in relating any variability to the known physical parameters of the system and possible effects of the secondary on the emission from the primary wind. First, $\chi^2$ tests for variability were performed on
narrow wavelength-binned data for the 12 time-sliced spectra of 40 ks
each. We then formally fit
the emission lines in each of these individual 40 ks time-sliced spectra
using Gaussian profiles (Sect.\ \ref{section:fit}), determining fluxes,
line widths, and 1$\sigma$ confidence limits. Subsequent composite line
spectral analysis used the combined H-like ion profiles, as well as
\ion{Fe}{17}, to evaluate the flux, velocity, and line width as a function
of binary phase (Sect.\ \ref{section:composite}). Finally, we looked
for variability in the $fir$-inferred radius ($R_{fir}$) of each ion, as
well as X-ray temperatures derived from the H-like to He-like line
ratios (Sect.\ \ref{section:fi}).
\subsection{Narrow-band fluxes and variability} \label{section:bin}
\input{tbl_fprops.tex}
For the following statistical analysis, we used the narrow
wavelength-binned bands in the 12 time-sliced, 40 ks spectra described
above. The count rates for a standard set of narrow wavelength bins
were output from TGCat processing. The parameters of the bins between
2.5 \ang\ and 22.20 \ang\ are listed in Table \ref{table:fprops}. We
searched for variations using a series of $\chi^2$ tests, trying several
hypotheses (constancy, linear trend, parabolic trend) and checked the
improvement when more free parameters were used. The Significance Level
($SL$) is defined in Sect.\ \ref{section:lc}. Five bands were
significantly variable when compared to a constant value, i.e. $SL \le$1\%: (1) the continuum centered at
4.9 \ang, (2) \ion{S}{15}, (3) \ion{Si}{13}, (4) \ion{Fe}{20}\ (10.4-12 \ang), and
(5) \ion{Ne}{9}. A further 4 bands are marginally variable, i.e.
$1\%<SL<5$\%: (1) \ion{Mg}{12}, (2) the continuum centered at 8.8 \ang, (3)
\ion{Ne}{10}, and (4) the continuum centered at 14.925 \ang. When compared to a linear trend, all but \ion{Fe}{20}\ were significantly variable, and when compared to a quadratic trend, all but \ion{Ne}{9}, \ion{Fe}{20}, and \ion{Si}{13}\ were significantly variable, although in all cases $SL<5$\%.
As an additional test, we directly compared the spectra one to another.
Using a $\chi^2$ test on the strongest wavelength bins, with spectra
binned at 0.02 \ang, variability was significant for lines (or in the
regions of lines): \ion{Si}{13}, \ion{Mg}{12}, \ion{Mg}{11}, \ion{Ne}{9}, and the zone from
10.4-12 \ang\ (corresponding to \ion{Fe}{20}).
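A minimal sketch of this band-by-band model-comparison test follows. It is illustrative only: the band rates and errors below are invented, the real analysis used the TGCat narrow-band count rates, and the significance level is estimated here by Monte Carlo rather than the analytic $\chi^2$ tail so the sketch stays dependency-free.

```python
import numpy as np

def variability_sl(rate, err, degree, n_sim=4000, seed=0):
    """Significance level (SL) of the chi-square of binned count
    rates against a best-fit polynomial trend of the given degree
    (0 = constant, 1 = linear, 2 = parabolic).

    SL is estimated by simulating noise-only datasets; a small SL
    means the hypothesis (constancy or trend) is rejected.
    """
    rate = np.asarray(rate, dtype=float)
    err = np.asarray(err, dtype=float)
    x = np.arange(rate.size, dtype=float)

    def chi2_of(y):
        # np.polyfit weights are 1/sigma for Gaussian errors.
        coeffs = np.polyfit(x, y, degree, w=1.0 / err)
        return np.sum(((y - np.polyval(coeffs, x)) / err) ** 2)

    observed = chi2_of(rate)
    rng = np.random.default_rng(seed)
    base = np.full(rate.size, rate.mean())
    sims = np.array([chi2_of(base + rng.normal(0.0, err))
                     for _ in range(n_sim)])
    return float(np.mean(sims >= observed))

# Hypothetical band rates for the 12 time slices: a steady ramp.
t = np.arange(12.0)
err = np.full(12, 0.004)
ramp = 0.10 + 0.01 * t

sl_const = variability_sl(ramp, err, degree=0)  # tiny: variable
sl_lin = variability_sl(ramp, err, degree=1)    # large: trend fits
```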
\clearpage
\begin{figure*}[!htb]
\centering
\includegraphics[width=\columnwidth]{dori_si13_vs_mean_heg_meg-02.ps_page_7}
\includegraphics[width=\columnwidth]{dori_si13_vs_mean_heg_meg-02.ps_page_9} \\
\includegraphics[width=\columnwidth]{dori_si13_vs_mean_heg_meg-02.ps_page_11}
\includegraphics[width=\columnwidth]{dori_si13_vs_mean_heg_meg-02.ps_page_1} \\
\caption {\ion{Si}{13}\ profile (blue) overplotted with the mean \ion{Si}{13}\ profile (red) of all
time-sliced spectra. Upper panel left: Phase is centered at 0.091 and
is 0.15 wide; Upper panel right: Phase is centered at 0.254 and is 0.15
wide; Lower panel left: Phase is centered at 0.576 and is 0.15 wide;
Lower panel right: Phase is centered at 0.644 and is 0.15
wide.}\label{fig:profiles}
\end{figure*}
In summary, \ion{S}{15}, \ion{Si}{13}, \ion{Ne}{9}, and \ion{Fe}{20}\ were variable in
both of the above tests. An example of the \ion{Si}{13}\ lines for several
time slices is shown in Fig.\ \ref{fig:profiles}, demonstrating the
observed variability. As noted later, \ion{Ne}{9}\ is contaminated by Fe
lines, which may contribute to the variability. A few other emission
lines as well as some continuum bandpasses may also be variable, but
with lower confidence. As an additional confirmation of variability for
one feature, \ion{Si}{13}, we performed a two-sample Kolmogorov-Smirnov (KS)
test \citep{press02} on each time slice against the complementary
dataset (to ensure the datasets are independent). With the criteria that
an emission line is variable if the null-hypothesis probability is $\le 0.1$, and
that a line is not variable if the probability is $\ge 0.9$, the
only spectrum where the KS test suggested real variability (at about
2\% probability of being from the same distribution) was \ion{Si}{13}\ at
$\phi$=0.11. Note that the KS test shows that there is variability, but
not what is varying, such as flux or centroid.
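The two-sample comparison above can be sketched with \texttt{scipy.stats.ks\_2samp}; the event samples below are synthetic stand-ins for the photon wavelength lists of one time slice and its complement, not the observed data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Synthetic stand-ins for the photon wavelength lists (Angstroms)
# near Si XIII: one time slice vs. the complementary dataset.
slice_events = rng.normal(loc=6.648, scale=0.010, size=300)
complement_events = rng.normal(loc=6.651, scale=0.012, size=2400)

stat, p_value = ks_2samp(slice_events, complement_events)

# Decision rule quoted in the text: variable if the null-hypothesis
# probability is <= 0.1, not variable if >= 0.9, else inconclusive.
if p_value <= 0.1:
    verdict = "variable"
elif p_value >= 0.9:
    verdict = "not variable"
else:
    verdict = "inconclusive"
print(f"D = {stat:.3f}, p = {p_value:.3f}: {verdict}")
```

As noted in the text, a small $p$ flags a difference between the two distributions without identifying whether flux, centroid, or shape is responsible.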
\subsection{Fitting of Emission Lines} \label{section:fit}
For each of the 12 time-sliced spectra of 40 ks each, the H- and He-like
lines of S, Si, Mg, Ne, and O, as well as \ion{Fe}{17}\ 15.014 \ang, were
fit using the Interactive Spectral Interpretation System
\citep[ISIS;][]{houck00}. Only Gaussian profiles were considered
because (1) Gaussian profiles are generally appropriate at this
resolution for thin winds at the signal-to-noise level of the
time-sliced spectra (with some exceptions, noted below), and (2)
previous studies indicated Gaussian profiles provided adequate fits to
the emission lines for both the early \textit{Chandra}\ observation of \delOri\
Aa \citep{miller02} and for the combined spectrum from 2012 (Paper I).
We note that lines in the spectrum of the combined HETGS data showed
some deviations from a Gaussian profile (Paper I).
The continuum for each time-sliced spectrum was fit by using the same
3-temperature APEC model derived in Paper I. This model allowed for
line-broadening and a Doppler shift. Some abundances were fit in
order to minimize the residuals in strong lines. The total absorption
was fixed at $N_H = 0.15\times 10^{22}\,\mathrm{cm\mtwo}$ (Paper I).
Only the continuum component of this model was used
for continuum modeling in the following analysis.
Fits were determined by folding the Gaussian line profiles through the
instrumental broadening using the RMF and ARF response functions, which
were calculated individually for each time-sliced spectrum. All first
order MEG and HEG counts, on both the plus and minus arms of the
dispersed spectra, were fit simultaneously. For most H-like ions, the
line centroid, width, and flux were allowed to vary. In a few cases
where the S/N was low, the line center and/or the width was fixed to
obtain a reasonably reliable fit based on the Cash statistic. For the
He-like line triplets, the component lines were fit simultaneously with
the line centroid of the recombination ($r$) line allowed to vary, and
the centroids for the intercombination ($i$) and forbidden ($f$) lines
forced to be offset from the $r$ line centroid by the theoretically
predicted wavelength difference. The individual flux and width values
of the triplet components for the He-like ions were allowed to vary
except for a few cases of low S/N when the width value of the $i$ line
and $f$ line were forced to match that of the $r$ line to obtain a
reasonable statistical fit. The reduced Cash statistic using subplex
minimization was used to evaluate each fit. For most emission lines
with good signal-to-noise, the reduced Cash statistic was 0.95-1.05. A
few of the lines with poor signal-to-noise had a reduced Cash statistic
as low as 0.4. Confidence limits were calculated at the 68\% (i.e.
$\pm1\sigma$) level for each parameter of each line, presuming that
parameter was not fixed in the fit.
In most cases, the line profiles in the time-sliced spectra were well
fit with a simple Gaussian. In a few cases, a profile might be better
described as flat-topped with steep wings. Broad, flat-topped lines
are expected to occur when the formation region is located relatively
far from the stellar photosphere, where the terminal velocity has almost
been reached. In such cases, Gaussian profiles are expected to fit
rather poorly. Occasionally a second Gaussian profile was included for a
line if a credible fit required it. If more than one Gaussian
profile was used to fit a line, the total flux recorded in the data
tables is the sum of the individual fluxes of the Gaussian components
with the errors propagated in quadrature.
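The flux-summation rule above can be sketched as follows; the component fluxes and errors are hypothetical illustrative values.

```python
import math

def total_line_flux(components):
    """Combine multi-Gaussian fits of one line: fluxes add linearly,
    1-sigma errors add in quadrature."""
    flux = sum(f for f, _ in components)
    err = math.sqrt(sum(e * e for _, e in components))
    return flux, err

# Hypothetical two-component fit: (flux, error) in photons/cm^2/s.
flux, err = total_line_flux([(3.0e-5, 0.4e-5), (1.0e-5, 0.3e-5)])
print(f"{flux:.2e} +/- {err:.2e}")  # 4.00e-05 +/- 5.00e-06
```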
\input{flux_table_land.tex}
For the case of \ion{Ne}{9}\ where lines from \ion{Fe}{17}\ and \ion{Fe}{19}\ provided
significant contribution in the wavelength region of the fit, these additional
Fe lines were fit separately from the \ion{Ne}{9}\ component lines. The Fe lines fit in this region were \ion{Fe}{19}\ at 13.518 \ang, \ion{Fe}{19}\ at 13.497 \ang, and \ion{Fe}{17}\ at 13.391 \ang, with \ion{Fe}{19}\ at 13.507 \ang\ and \ion{Fe}{19}\ at 13.462 \ang\ included at their theoretical intensity ratios to \ion{Fe}{19}\ at 13.518 \ang.
Also, the \ion{Ne}{10}\ line is blended with
an \ion{Fe}{17}\ line. We have assumed this \ion{Fe}{17}\ component
contributes flux to the \ion{Ne}{10}\ line equal to 13\% of the \ion{Fe}{17}\ at
15.014 \ang\ \citep{walborn09}. A final correction was applied to the
\ion{Si}{13}-$f$ line because the Mg Lyman series converges in this wavelength
region. Using theoretical relative line strengths, we assumed the
\ion{Si}{13}-$f$ line flux was overestimated by 10\% of the measured flux of
the \ion{Mg}{12}\ Ly$\alpha$ line.
The flux values and confidence limits are tabulated in Table
\ref{table:flux:s} for S and Si lines, Table \ref{table:flux:mg} for Mg
and Ne lines, and Table \ref{table:flux:o} for O and \ion{Fe}{17}\ lines.
To summarize the results, Fig.\ \ref{flux1} shows a comparison of the
fluxes of the H-like ions. The error bars for \ion{S}{16}\ are
quite large. \ion{Si}{14}\ shows a peak at about $\phi$=0.0 and a somewhat
lower value at about $\phi$=0.6. \ion{Mg}{12}, \ion{Ne}{10}, and \ion{O}{8}\ are essentially
constant. Fig.\ \ref{flux2} shows the fluxes for the He-like
$r$ lines. \ion{S}{15}-$r$ has a maximum at about $\phi$=0.1, with lower
flux in the range $\phi$=0.5-0.8. The flux values for \ion{Si}{13}-$r$ are
larger for $\phi$=0.0-0.4 than for the range $\phi$=0.5-0.8. \ion{O}{7}-$r$
shows an apparent increase in flux in the $\phi$=0.5-0.7 range.
\ion{Mg}{11}-$r$ and \ion{Ne}{9}-$r$ are relatively constant. \ion{Ne}{9}\ was
consistently variable in the narrow-band statistical tests. In this
Gaussian fitting of the lines we have fit and removed the contaminating
Fe lines in \ion{Ne}{9}, possibly removing the source of variability in this
triplet. We note that the points in Fig.\ \ref{flux1} and Fig.\
\ref{flux2} are from three different orbits of the binary. The increase
in flux with time discussed in Sect.\ \ref{section:lc} has not been
taken into account in these fitted line fluxes, either in the plots or
the data tables, so care must be taken in their interpretation.
\begin{figure*}[!htb]
\centering
\includegraphics[angle=90,width=2.5in]{s16_b.ps}
\hfill
\includegraphics[angle=90,width=2.5in]{si14_b.ps} \\
\includegraphics[angle=90,width=2.5in]{mg12_b.ps}
\hfill
\includegraphics[angle=90,width=2.5in]{ne10_b.ps} \\
\includegraphics[angle=90,width=2.5in]{o8_b.ps} \\
\caption{Fluxes of H-like emission lines based on Gaussian fits vs phase.
Errors are 1$\sigma$ confidence limits. Phase with respect
to periastron is indicated at the top of the plot. }
\label{flux1}
\end{figure*}
\begin{figure*}[!htb]
\centering
\includegraphics[angle=90,width=2.5in]{s15_r_b.ps}
\hfill
\includegraphics[angle=90,width=2.5in]{si13_r_b.ps} \\
\includegraphics[angle=90,width=2.5in]{mg11_r_b.ps}
\hfill
\includegraphics[angle=90,width=2.5in]{ne9_r_b.ps} \\
\includegraphics[angle=90,width=2.5in]{o7_r_b.ps} \\
\caption{Fluxes of He-like $r$ emission lines based on Gaussian fits vs
phase. Errors are 1$\sigma$ confidence limits.}\label{flux2}
\end{figure*}
\subsection{Spectral Template and Composite Line Profile Fitting}
\label{section:composite}
In order to improve the signal-to-noise ratio in line fits in the
time-sliced data, we used two methods to fit multiple lines
simultaneously. In the first method, we adopted a multi-thermal APEC
\citep{Smith:01, Foster:Smith:Brickhouse:2012} plasma model which
describes the mean spectrum fairly well (see Table~\ref{tbl:apecmodel}),
and used this as a spectral template, allowing the fits to the Doppler
shift, line width, and overall normalization to vary freely. This is a simpler model than the more
physically based APEC model defined in Paper I
since here it need not fit the spectrum
globally, but is only required to fit a small region about a
feature of interest. To demonstrate this,
Figure~\ref{fig:apectemplate} shows an example after fitting {\em
only} the $8.3$--$8.6$\,\AA\ region for the \ion{Mg}{12}\
feature's centroid, width, and normalization for the entire
exposure. The temperatures were not varied, and the relative
normalizations of the three components were kept fixed, as was the
absorption column. We can see that this provides a good local
characterization of the spectrum, and so will be appropriate for
studying variation of these free parameters in local regions as a
function of time or phase. For any such fit, other regions will
not necessarily be well described by this model.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\columnwidth]{mg12_template_example_model.ps}
\caption{A portion of the HEG spectrum for the entire exposure,
after fitting an APEC template to the $8.3$--$8.6$\,\ang\ region. The temperatures, relative normalizations, and absorption
were frozen parameters. The Doppler shift, line width, and
normalization were free. We show the resulting model evaluated over
a somewhat broader region than was fit, to demonstrate the applicability
of the model to the local spectral region. Other regions will not
necessarily be well represented by the same parameters.
\label{fig:apectemplate}}
\end{figure}
These fits
used spectra extracted in 10 ks time bins (about $0.02$ in phase), but
were fit using a running average of three time bins. We primarily used
the H-like Lyman$\,\alpha$ lines, as well as some other strong and
relatively isolated features (see Table~\ref{tbl:linesfit}). The results
for the interesting parameters, the mean Doppler shift of the lines and their
widths, are shown in Fig.\ \ref{fig:template}. The smooth curve in the
top panel is the binary orbital radial velocity curve. We clearly see a
dip in the centroid near $\phi= 0.8$, as well as significant changes in
average line width.
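The three-bin running average used above amounts to boxcar smoothing. A minimal sketch follows; for illustration it smooths a hypothetical series of fitted Doppler shifts, whereas the analysis above averages the spectra themselves before fitting.

```python
import numpy as np

def running_average(x, width=3):
    """Boxcar running average over `width` consecutive time bins
    (three 10 ks bins correspond to roughly 0.06 in phase)."""
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")

# Hypothetical fitted Doppler shifts (km/s) in consecutive 10 ks bins.
shifts = [-60.0, -75.0, -80.0, -70.0, -95.0, -100.0, -85.0]
print(running_average(shifts))
```

Because adjacent smoothed bins share data, the resulting error bars are correlated over several bins, as noted in the caption of Fig.\ \ref{fig:template}.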
\begin{deluxetable}{rr}
\tabletypesize{\scriptsize}
\tablecolumns{2}
\tablewidth{0pt}
\tablecaption{Plasma model parameters used for spectral
template fitting.}\label{tbl:apecmodel}
\tablehead{
\multicolumn{2}{c}{Temperature Components\tablenotemark{a}} \\
\colhead{$T$} & \colhead{$Norm$} \\
}
\startdata
2.2& 8.16 \\
6.6& 1.90 \\
19.5& 0.226 \\
\hline
\multicolumn{2}{c}{Relative Abundances\tablenotemark{b}}\\
Elem.& $A$ \\
\hline
Ne& 1.2\\
Mg& 0.7\\
Si& 1.6\\
Fe& 0.9\\
\hline
\multicolumn{2}{c}{Total Absorption\tablenotemark{c}}\\
\hline
$N_H$& 0.15\\
\enddata
\tablenotetext{a}{Temperatures are given in MK, and the normalization
  is related to the volume emission measure, $VEM$, and distance,
  $d$, via $VEM = 10^{14}$ $(4\pi d^2) \times \Sigma_i Norm_i$.}
\tablenotetext{b}{We give elemental abundances relative to the solar
values of \citet{Anders:89} for those significantly different from
1.0. (These are not rigorously determined abundances, but depend on
  both the discrete temperatures adopted and the actual abundances.)}
\tablenotetext{c}{The total absorption is given in units of $10^{22}\,\mathrm{cm\mtwo}$.}
\end{deluxetable}
\begin{deluxetable}{rl}
\tabletypesize{\scriptsize}
\tablecolumns{2}
\tablewidth{0pc}
\tablecaption{Lines used in CLP analysis\tablenotemark{a}}
\label{tbl:linesfit}
\tablehead{
\colhead{$\lambda_0$} & \colhead{Feature} \\
\colhead{ \ang} & \colhead{} \\
}
\startdata
6.182& Si~XIV \\
8.421& Mg~XII \\
10.239& Ne~X \\
11.540& Fe~XVIII \\
12.132& Ne~X \\
14.208& Fe~XVIII \\
15.014& Fe~XVII \\
15.261& Fe~XVII \\
16.005& Fe~XVIII$+$O~VIII \\
16.780& Fe~XVII \\
17.051& Fe~XVII \\
18.968& O~VIII \\
\enddata
\tablenotetext{a}{Lines used in
the ensemble fitting with Composite Line Profile or spectral
template methods. Both HEG and MEG were used at wavelengths shorter
than 16 \ang. The region widths were 0.20 \ang, centered on each
feature.}
\end{deluxetable}
In the second method, we rebinned regions around selected features to a
common velocity scale and summed them into a Composite Line Profile
(``CLP''). While this mixes resolutions (the resolving power is
proportional to wavelength and is different for HEG and MEG) and blends
in the CLP, the mix should be constant with phase and be sensitive to
dynamics as long as line ratios themselves do not change. Hence, we can
search for phased variations in the line centroid. This technique has
been applied fruitfully in characterizing stellar activity in cool stars
\citep{Hoogerwerf:al:2004,Huenemoerder:Testa:al:2006}. The CLP profiles
were computed in phase bins of $0.01$, but grouped by 5 bins for
fitting, thus forming a running average. We used the same lines as in
the template fitting. In Fig.\ \ref{fig:clplines} we show an example of
composite line profiles, and fits of a Lorentzian (since the composite
profile is no longer close to Gaussian) plus a polynomial to determine
centroid and width. This method, while less direct than template
fitting, did confirm the trend seen in line velocity in the template
fitting.
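The rebinning onto a common velocity scale can be sketched as follows. The rest wavelengths are taken from Table~\ref{tbl:linesfit}; the bin grids and flat count arrays are hypothetical placeholders.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def to_velocity(lam, lam0):
    """Doppler velocity (km/s) of wavelength bins relative to the
    rest wavelength lam0 (both in Angstroms)."""
    return C_KMS * (lam - lam0) / lam0

def composite_profile(lines, v_edges):
    """Rebin several line regions onto a common velocity grid and
    sum them; `lines` holds (rest_wavelength, bin_centers, counts)."""
    total = np.zeros(v_edges.size - 1)
    for lam0, lam, counts in lines:
        hist, _ = np.histogram(to_velocity(lam, lam0),
                               bins=v_edges, weights=counts)
        total += hist
    return total

# Hypothetical flat count arrays around two of the tabulated lines.
v_edges = np.linspace(-2000.0, 2000.0, 41)
lam_mg = np.linspace(8.381, 8.461, 100)    # around Mg XII 8.421 A
lam_ne = np.linspace(12.092, 12.172, 100)  # around Ne X 12.132 A
clp = composite_profile([(8.421, lam_mg, np.ones(100)),
                         (12.132, lam_ne, np.ones(100))], v_edges)
print(clp.sum())  # all 200 counts land inside +/-2000 km/s
```

Summing in velocity space preserves Doppler information while mixing the (wavelength-dependent) instrumental resolutions, which is why the composite profile is fit with a Lorentzian rather than a Gaussian.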
\begin{figure}[!htb]
\includegraphics[width=0.9\columnwidth]{apec3_template_fits-01c.ps}
\caption{The mean emission line Doppler velocity (top: points with error
bars), primary radial velocity (top, sinusoidal curve), and mean line
width for \ion{Ne}{10}\ (bottom) derived by fitting spectra in phase bins with
an APEC template, allowing the Doppler shift, line width, and
normalization to vary freely. Data from the individual \textit{Chandra}\
observations are differentiated with colors. Error bars (1$\sigma$) are
correlated over several bins since a running average was used over 3-10
ks bins.} \label{fig:template}
\end{figure}
\begin{figure}[!htb]
\includegraphics*[width=0.9\columnwidth, viewport=0 0 480
290,angle=0]{plt_r1-03_phi068.ps}
\includegraphics*[width=0.94\columnwidth, viewport=0 0 500
290,angle=0]{plt_r1-03_phi078.ps}
\caption{An example of Composite Line
Profiles for two different phases with different centroids, 0.68 (top)
and 0.78 (bottom), as defined by the photometric ephemeris. In each
large panel, the histogram is the observed profile, the smooth curve is
the fit. In the small panel below each are the residuals.}
\label{fig:clplines}
\end{figure}
The template-fit line width result is very interesting in that it shows
a significantly narrower profile near $\phi\approx0$ than at other
phases. Given the trends in width (low near phase 0.0, high near 0.2
and 0.8), the 10 ks time-sliced spectra were grouped into these states and then
compared (Fig.\ \ref{fig:linewidth_1}). The plots show the narrow state
in blue and the broad state in red. In the narrow state all of the
lines are sharper, except for the line at 17 \ang\ (and possibly
\ion{Si}{14}\ 6 \ang). The top panel of Fig.\ \ref{fig:linewidth_1} shows a heavily
binned overview, and the lower panel shows a comparison of the \ion{Ne}{10}\
line profiles at $\phi$=0 and at quadrature phases.
The line width variability was confirmed by comparing the average
spectrum at phases near $\phi=0.0$ with the average spectrum at other
phases. The changes were primarily in a reduced strength of the line
core in the phases when the lines are broad, with little or no change in
the wings.
Fig.\ \ref{fig:linewidth_2} shows the trend vs.\ emission line. Except
for \ion{Ne}{10}\ and \ion{Fe}{17}\ 17 \ang, there is a trend for larger
differences in the FWHM with increasing wavelength. Note that the
temperature of maximum emissivity varies roughly inversely with
wavelength, while the wind continuum opacity increases with wavelength. The
increasing trend is typical of winds, since the opacity causes longer
wavelength lines to be weighted more to the outer part of the wind where
the velocity is higher. Gaussian fit centroids show the lines are all
slightly blueshifted, which could be consistent with skewed wind
profiles. The ``narrow'' group is near the primary star radial velocity
shift. Radial velocities of the lines are all roughly consistent with
-80 km s$^{-1}$, except perhaps \ion{Ne}{10}, which is blended with an Fe line. The
dependence between line width and binary phase was confirmed
independently by moment analyses of the individual lines, and was also
suggested by the CLP analysis above.
\begin{figure}[!htb]
\centering
\leavevmode
\includegraphics[scale=0.4]{width_1a.ps} \\
\includegraphics[scale=0.4]{width_2a.ps} \\
\caption{Examples of broad and narrow emission lines for selected
wavelength regions. Plots are constructed from counts per bin data
without continuum removal. For this comparison, the 10 ks time-sliced
spectra have been combined to represent $\phi$=0.0 (blue) and the
quadrature phases (red). } \label{fig:linewidth_1}
\end{figure}
\begin{figure}[!htb]
\centering
\leavevmode
\includegraphics[width=1.0\columnwidth]{dori_cmp_widths_hilo-03b.ps}
\caption{Comparison of FWHM in km s$^{-1}$\ for several emission lines in the
time-sliced spectra of \delOri\ Aa. For this comparison, the 10 ks
time-sliced spectra have been combined to represent $\phi$=0.0 (blue)
and the quadrature phases (red). Note that these spectra do not have
continuum removal. Gaussian plus polynomial line fitting was used on
the time slices to determine the line width.} \label{fig:linewidth_2}
\end{figure}
\subsection{X-ray Emission Line Ratios }\label{section:fi}
The He-like ions provide key plasma diagnostics using the relative
strengths of their $fir$ (forbidden, intercombination, resonance) lines
by defining two ratios\footnote{These lines are often designated as w,
x, y, z in order to highlight the fact that the i-line emission is
produced by two transitions (x+y) such that $R = z/(x + y)$ and $G= (x +
y + z)/w$}: the $R$ ratio $= f/i$ and the $G$ ratio $= (i+f)/r$.
\citet{gabriel69} demonstrated that the $f/i$ and $G$ ratios are
sensitive to the X-ray electron density and temperature, respectively.
These ratios have been used extensively in stellar X-ray studies. In
addition, the presence of a strong UV/EUV radiation field can change the
interpretation of the $f/i$ ratio from a density diagnostic to a
measurement of the radiation field geometric dilution factor, i.e.,
effectively the radial location of the X-ray emission from a central
radiation field \citep{blum72}. The $f/i$ ratio is known to decrease in
the case of a high electron density and/or high radiation flux density,
which will de-populate the upper level of the $f$-line transition
(weakening its emission) while enhancing the $i$-line emission. For hot
star X-ray emission, the $f/i$ ratio is controlled entirely by the
strong UV/EUV photospheric radiation field. The first analysis of an O
supergiant HETG spectrum by \citet{waldron01} verified that the observed
X-ray emission is distributed throughout the stellar wind and
demonstrated that density effects could only become important in high
energy He-like ions if their X-rays are produced extremely close to the
stellar surface. Thus the $f/i$ ratio can be exploited to determine the
onset radius or $fir$-inferred radius ($R_{fir}$ in units of R$_*$) of a
given ion via the geometric dilution factor of the photospheric
radiation field \citep{waldron01}. In addition, there are basically two
types of $fir$-inferred radii, ``localized'' (point-like) or
``distributed.'' The first detailed distributed approach was developed by
\citet{leutenegger06} assuming an X-ray optically thin wind. For a given
observed $f/i$ ratio, the localized approach predicts a larger
$R_{fir}$ as compared to the distributed approach \citep[see discussion
in][]{waldron07}. Since all X-ray emission lines scale as the electron
density squared, all line emissions are primarily dominated by their
densest region of formation. However, in the case of the $fir$ lines, an
enhanced $i$-line emission can only occur deep within the wind (high
density), whereas the majority of the $f$-line emission is produced in
the outer wind regions at lower densities. The $r$-line emission is
produced throughout the wind.
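A minimal sketch of the point-like $R_{fir}$ inversion is given below. It assumes the \citet{blum72} form $f/i = (f/i)_{\max}/(1 + 2W(r)\,\phi_*/\phi_c)$ with photoexcitation only; the factor convention and the numerical \texttt{phi\_ratio} are illustrative assumptions, not the calibrated models of \citet{waldron07}.

```python
import math

def dilution(r):
    """Geometric dilution factor W(r) of the photospheric radiation
    field at radius r (in units of R_*); W = 0.5 at the surface."""
    x = 1.0 / r
    return 0.5 * (1.0 - math.sqrt(1.0 - x * x))

def r_fir(fi_obs, fi_max, phi_ratio):
    """Invert f/i = fi_max / (1 + 2 W(r) phi_ratio) for r (in R_*),
    neglecting collisional de-excitation. `phi_ratio` stands in for
    the photospheric photoexcitation rate over the critical rate;
    its value here is a hypothetical placeholder."""
    w = (fi_max / fi_obs - 1.0) / (2.0 * phi_ratio)
    if not 0.0 < w < 0.5:
        raise ValueError("observed f/i incompatible with this model")
    return 1.0 / math.sqrt(1.0 - (1.0 - 2.0 * w) ** 2)

# Illustrative numbers only (not fitted values from the tables):
print(f"R_fir ~ {r_fir(fi_obs=0.8, fi_max=2.5, phi_ratio=10.0):.2f} R*")
```

An observed $f/i$ below the model minimum (as for several Si measurements above) corresponds to $W > 0.5$, i.e. formation at the stellar surface, which is why those cases are reported as $R_{fir} \approx 1$.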
Another X-ray temperature-sensitive line ratio is the H-like to He-like
line ratio (H/He) as explored in several hot-star studies
\citep[e.g.][]{schulz02,miller02,waldron04,waldron07}. However, a wind
distribution of X-ray sources implies a density dependence (i.e., the
H-like and He-like lines may be forming in different regions) and a
dependence on different wind X-ray absorption effects. Thus, the
temperatures derived from H/He ratios may be higher than their actual
values \citep[see][]{waldron04,waldron07}.
Our line ratio analysis is based on the approach given by
\citet{waldron07}. The $f/i$ ratios for each He-like ion in each
time-sliced spectrum are tabulated in Tables \ref{tab:FISI} -
\ref{tab:FIO}. We did not include S in this analysis because the flux measurement errors are
large and the flux ratio errors are extremely large or unbounded.
We calculated the $fir$-inferred radii ($R_{fir}$) and
H/He-inferred temperatures ($T_{HHe}$) versus phase for the 12
time-sliced spectra, as determined by the Gaussian line fitting. All
radii were determined by the point-like approach and a TLUSTY
photospheric radiation field with parameters $T_{\mathrm{eff}} = 29\,500$ K and $\log g = 3.0$.
The model $f/i$ ratios and H/He ratios
used to extract $R_{fir}$ and $T_{HHe}$ information take into account
the possible contamination from other lines. For all derived $R_{fir}$,
we assume that the He-like ion line temperature is at its expected
maximum value.
The $fir$-inferred radii for Mg and Si are plotted in Fig.\
\ref{onemg} and for Ne and O in Fig.\ \ref{ssi}.
In all cases the derived $R_{fir}$ is based on the average of the
minimum and maximum predicted values, so if the lower bound is at 1,
then the upper bound should be considered an upper limit. In these plots,
any lower limits are indicated by arrows. The binary phase is used for the x-axis.
There are 10 cases
showing a finite range for the O $R_{fir}$. There are two O and Ne
$R_{fir}$ values at phase $\approx 0.65$, but in different binary
orbits, indicating the same finite radial locations (within the errors)
for each ion (O at $\approx 7-9$, and Ne at $\approx 4-6$), which could
mean that at least for O and Ne the behavior is repeatable. This is not
seen in Mg or Si. For Mg, in one case for $\phi\approx$0.65 there
is a finite $R_{fir}$ ($\approx$2), whereas the other case at
$\phi\approx$0.65 indicates only a lower bound of $\approx$4.5. For Si
there are five $R_{fir}$ with finite ranges, all within the errors of
one another. Si has four $R_{fir}$ at
$\approx 1$ since the observed $f/i$ for these phases were below their
respective minimum $f/i$. This behavior suggests that these regions
producing the majority of the higher energy emission lines may be
experiencing significant dynamic fluctuations in density and/or
temperature.
The 12 derived H/He temperatures ($T_{HHe}$) versus phase for each ion
are shown in Fig.\ \ref{tx}. In general, for all ions there
is very little variation in $T_{HHe}$ with phase, although one could
easily argue that at certain phases there are minor fluctuations.
A verification of these results was obtained using the Potsdam
Wolf-Rayet (PoWR) code \citep{hamann04} to perform a similar $f/i$
analysis that included diffuse wind emission and limb darkening.
Differences in the results obtained using the two methods were
negligible compared to measurement uncertainties.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth]{RFIRMGSI.eps}
\caption{Phase dependence of the derived Mg and Si $fir$-inferred radii
($R_{fir}$) for the 12 time-sliced spectra. Upper and lower limits are shown as arrows.
See text for
model details.}\label{onemg}
\end{figure}
\begin{figure}[!htb] \centering
\includegraphics[width=0.9\columnwidth]{RFIRONE.eps}
\caption{Phase dependence of the derived O and Ne $fir$-inferred radii
($R_{fir}$) for the 12 time-sliced spectra. Upper and lower limits are shown as arrows.
See text for model details.}\label{ssi} \end{figure}
\begin{figure}[!htb] \centering
\includegraphics[width=\columnwidth]{HHETXA.eps}
\caption{Phase dependence of the Si, Mg, Ne, and O T$_{HHe}$ calculated from the H/He
ratios of each of the twelve 40 ks time-sliced spectra.}\label{tx}
\end{figure}
\subsection{Non-detection of Stellar Wind Occultation Effects}
One goal of this program was to use the variable occultation of the
primary wind by the essentially X-ray-dark secondary, \delOri\ Aa2, as
it orbits the primary, to map the ionization, temperature, and
velocity regimes within the primary's stellar wind. The secondary star,
\delOri\ Aa2, is orbiting deep within the wind of the primary star,
\delOri\ Aa1. Based on the binary separation of 2.6 R$_{Aa1}$ and our
calculations of the $fir$-inferred radii of the various ions, we expect
the secondary to be outside the onset radius of S emission in the
primary wind, very close to or inside of the onset radius of Si, and
inside the onset radii of Mg, Ne, and O. We can make a simple model of
the expected lightcurves for these emission lines for \delOri\ Aa. If
the secondary is outside the onset radius of an ion, the lightcurve will have a
maximum and relatively constant flux value between $\phi$ $\approx$0.25
and $\approx$0.75. The lightcurve will have a relatively constant but
lower flux at $\phi$ $\approx$0.75-0.25 as it occults both the back and
front sides of the onset-radius shell. If the secondary is inside the
onset radius of an ion, then the lightcurve will be at a maximum flux near $\phi$=
0.0 and 0.5. Between these two phases, the lightcurve will have a lower
and relatively constant flux value as it occults only the back side of
the onset-radius shell.
The $i$ line of the He-like triplet is formed deep in the wind while
both the $r$ and $f$ lines are more distributed throughout the wind.
Our best chance of identifying occultation effects is probably from the
fluxes of the $i$ lines, as the shell of $i$-line emission will be
thinner than either the $f$-line emission shell or the $r$-line emission
shell. To estimate any occultation effect that we might see in these
lightcurves, we calculated the maximum volume of $i$-line emission that
could be occulted by the secondary using various parameters. The total $i$-line emission expected from an emitting shell around the surface of the star was estimated using the spherical volume of the shell.
The secondary star is assumed to have a radius of 0.3 R$_{star}$.
The percentage of $i$-line emission occulted by the secondary star will be maximum when the largest column of emitting material is occulted, which is at the rim of the shell. An estimate of the occulted emission was based on the volume of the spherical cap of the emitting shell, with a height equal to the diameter of the secondary star. Care was taken to subtract the volume of the star that might be included in the cap.
We find
that, for the extreme case of an $i$-line emission shell at the surface
of the primary star with a thickness of 0.01 R$_{Aa1}$, the maximum
occultation of the $i$-line flux would reduce the flux by $\approx$20\%. If the
thickness of the $i$-line shell is instead a more likely value of 0.1
R$_{Aa1}$ but still in contact with the surface of the star, the flux of
the $i$ emission line would be reduced by only about 3\% due to
occultation by the secondary. In these estimates, the secondary would
not be at primary minimum $\phi$=0.0, but instead projected on the rim
of the $i$-line emission shell where the column of $i$-line emission is
greatest. A thicker $i$-line shell would reduce the amount of
occultation further because of the finite size of the projected
secondary star in relation to the volume of the sphere of $i$-line
emission. Any other position of the secondary in the orbit, or any
larger shell of $i$-line emission, would reduce the percentage of
occultation.
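The solid-geometry ingredients of this estimate can be sketched as follows. This is a crude full-cap approximation with hypothetical shell parameters: it subtracts the stellar volume inside the cap, but does not restrict the blocked emission to the column actually behind the secondary's disk, so it yields a larger fraction than the careful estimates quoted above.

```python
import math

def cap_volume(R, h):
    """Volume of a spherical cap of height h on a sphere of radius R."""
    return math.pi * h * h * (3.0 * R - h) / 3.0

def shell_volume(r_in, r_out):
    """Volume of a spherical shell."""
    return 4.0 * math.pi * (r_out ** 3 - r_in ** 3) / 3.0

# Hypothetical geometry (units of R_Aa1): an i-line shell of
# thickness 0.1 in contact with the primary surface, cut by a plane
# so that the outer cap height equals the secondary's diameter (0.6).
r_in, r_out, h = 1.0, 1.1, 0.6
h_in = r_in - (r_out - h)          # cap height on the inner sphere
occulted = cap_volume(r_out, h) - cap_volume(r_in, h_in)  # star removed
fraction = occulted / shell_volume(r_in, r_out)
print(f"full-cap fraction ~ {fraction:.1%}")
```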
Fig.\ \ref{fig:i-s} shows the flux measurements for the $i$ lines of the
He-like triplets. We have previously found a linear increase in flux
over the time period of the observations which is not removed in these
plots. While the flux estimates for the \ion{S}{15}-$i$ and
\ion{Si}{13}-$i$ lines, which may have onset radii very close to the
stellar surface of the primary star, do not preclude the existence of
occultation effects, we unfortunately cannot identify such variability,
which would be at the 1-2\% level, particularly given the other
identified variations on the order of 10-15\% and the errors of the
measurements.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth]{stacked_i_plots.ps}
\caption{Flux of He-like $i$ component of the triplet based on
Gaussian fits vs phase. Errors are 1$\sigma$ confidence limits.
Fluxes have been normalized by dividing by the mean flux for the
respective line. Phase with respect to periastron is indicated at the
top of the plot. The plot for each ion is offset by a value of 3
from the previous one for clarity.} \label{fig:i-s}
\end{figure}
\section{DISCUSSION} \label{section:discussion}
\subsection{Effects of a Wind-Star Collision} \label{section:cw}
An important and unexpected result of this analysis is the discovery of
variability of line widths with binary phase. H-like emission line
widths are at a minimum at $\phi$=0.0 when the secondary is in front of
the primary, and to a lesser degree at $\phi$=0.7. The line widths are
at a maximum near $\phi$= 0.2 and $\phi$=0.8, close to quadrature. The
phase-dependent variability of the emission line widths must therefore
be related to interaction between the primary and the secondary.
In Paper I we developed a model to represent the effect of the secondary
star on the wind region of the primary. Our 1D, line-of-centers, CAK
calculations presented in Paper I showed that radiative braking does not
occur in this system, so the primary wind directly impacts the surface
of the secondary star. A similar example has been observed for CPD
$-41^{\circ}$7742, along with an eclipse of the X-ray emitting colliding
wind region when the two stars were perfectly aligned \citep{sana05}. A
3D smoothed particle hydrodynamics (SPH) code
\citep{Russell13,MaduraP13} was then used to simulate the effect of the
wind-wind collision in \delOri\ Aa. The colliding winds form an
approximately cone-shaped cavity in the wind of the primary with the
secondary star at the apex of the cone. The cavity has a half-opening
angle of $\approx$30\degr, so the solid angle fraction taken up by the
cone is $\approx$8\%. A bow shock surrounds the cavity, and within this
cavity the secondary wind prevails, yielding lower densities as well as
very little X-ray flux. Fig.\ \ref{fig:temp}, reproduced from Paper I,
shows the density and temperature structure of the winds and their
interaction in the binary orbital plane. The hot gas in the shock at
the interface between the lower density cavity and the primary star wind
was calculated in Paper I to produce at most 10\% of the observed X-ray
flux. According to this model, as the cavity rotates around the primary
star from the blue- to red-shifted part of the wind, emission line
profiles should change in shape and velocity (as long as the radius of
line formation is similar to or larger than the location of the apex of
the bow shock).
\begin{figure*}[!htb] \centering \includegraphics[height=60mm]{cw_sim.ps}
\caption{Model of density and temperature structure of the binary
orbital plane of the SPH simulation of Aa1 (larger black circle) and
Aa2 (smaller black circle), based on the parameters from Paper IV and the
model in Paper I. The collision of the wind of the primary star
against the secondary star produces a low density cavity within the
primary wind. The perimeter of the low density cavity is a shocked
bow shock of higher density than either star's wind region (left
panel). In the temperature plot on the right, only the hot gas from
along the wind-collision, bow shock boundary is shown since the SPH
simulation does not include the embedded wind shocks of either \delOri\
Aa1 or \delOri\ Aa2. The primary wind collides directly with the
secondary surface in this simulation primarily because of the large
difference in mass loss rate ($\dot{M}_{Aa1}/\dot{M}_{Aa2}\approx
40$). }
\label{fig:temp} \end{figure*}
\begin{figure*}[!htb] \centering
\includegraphics{mg12_4phase_specfits-01.ps}
\caption {\ion{Mg}{12}\ profile overplotted with
the Gaussian fit. Upper panel left: Phase is centered at 0.031; Upper
panel right: Phase is centered at 0.492; Lower panel left: Phase is
centered at 0.254; Lower panel right: Phase is centered at
0.721.}\label{fig:mg12_conj}
\end{figure*}
The variability of the emission-line widths we have observed may
potentially be explained by this cavity in the primary wind caused by
the wind interaction with the secondary. When viewed at $\phi$=0.0, the
cavity will occupy a region of the primary stellar wind that would
otherwise be the formation region of emission with high negative
velocities. The emission line profiles viewed at this phase might then
be truncated at the largest negative velocities, creating a
comparatively narrower profile than one expects without the presence of
the secondary. At $\phi$=0.5, some of the most positive velocities
would not be detected in the emission-line profiles. Wind absorption
must be taken into account at $\phi$=0.5 and the effect of the cavity
may be less pronounced. Such emission truncation at $\phi$=0.0 and 0.5
is suggested in Fig.\ \ref{fig:mg12_conj}, displaying the \ion{Mg}{12}\
profiles and fits for the time-sliced spectrum near $\phi$=0.0 and the
time-sliced spectrum near $\phi$=0.5. The \ion{Mg}{12}\ line was chosen as a
relatively strong line that was also used in the template analysis in
Sect.\ \ref{section:composite}. The red line in the figure is the
Gaussian fit to the line, the black line is the time-sliced spectrum for
a particular phase interval, and the lower panel of each plot shows the
$\delta\chi$ statistic (though the Cash statistic was actually used in
the fitting). Negative velocities appear somewhat under-represented in
the time-sliced spectrum near $\phi$=0.0, although the profile near
$\phi$=0.5 does not appear to be asymmetric. There could be other
explanations for this characteristic, such as velocity changes in the
centroid or non-Gaussian profiles. At quadrature, $\phi$=0.25 and 0.75,
the velocities normally produced in the embedded wind of the primary and
replaced by the region occupied by the cavity will tend to be near zero
velocity, resulting in emission lines of expected widths, but somewhat
non-Gaussian peaks, such as flat-topped or skewed peaks. This
prediction is not inconsistent with the profiles observed near
quadrature in the \delOri\ Aa time-sliced spectra, which are broader
than the profiles seen near conjunction (see Fig.\ \ref{fig:mg12_conj}
as an example).
However, further analysis of the \ion{Mg}{12}\ line reveals a more complex
scenario. Fig.\ \ref{fig:mg12_dave} shows the correlation of FWHM with
flux for \ion{Mg}{12}\ for the 12 time-sliced spectra, based on the Gaussian
fits. While most of the points show a similar distribution with little
or no dependence of FWHM on flux, the three points from the first
\textit{Chandra}\ observation, {ObsID}\xspace 14567, have significantly larger FWHM
values along with lower flux. We have carefully checked that there is no known reason to
expect this result to be instrumental. We conclude that the variability
in the \ion{Mg}{12}\ line has several time scales, of which the binary orbit and
colliding winds associated with the binary system are only one
component.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.4]{mg12_gaussfit_flux_vs_fwhm-01.ps}
\caption{Gaussian parameters for \ion{Mg}{12}\
lines for each of the 12 time-sliced spectra. Error bars are 1$\sigma$.
Flux vs FWHM.}\label{fig:mg12_dave}
\end{figure}
\subsection{Stellar pulsations and Corotation Interacting Regions}
Pulsations have been found in only a few O stars, but those that have
been found are mostly in late-O stars, such as $\zeta$ Oph, O9.5V
\citep{walker05}. Variations due to pulsations are cyclic, even over
long timescales
\citep[e.g.][]{prinja86,kaper96,kaper97,kaper99,massa95,prinja88,
henrichs88}. Models predict that massive stars near the main sequence
will experience pulsations of several types, including non-radial
pulsations (NRP), $\beta$ Cep instabilities, and {\it l}-mode pulsations
\citep{cox92,pamy99}.
Another source of variability relates to phenomena near or on the
surface of the star, such as bright spots, which tend to be transient
over a few stellar rotations or less. For many years DAC variability
in UV P-Cygni profiles has been recognized, and is believed to be common
in O stars. \citet{cranmer96} provided a model of ad hoc photospheric
perturbations in the form of bright spots that was able to reproduce the
DAC phenomenon. CIRs, possibly related to DACs, are perturbations that
start at the base of the stellar wind and can extend far out into the
wind, and thus are tied to the rotation period of the star, although
they could also be transient. Presumably there can be multiple CIRs
distributed over the surface of the star, cumulatively resulting in
variations that appear to be shorter than the rotation period, although
each individual spot and its associated CIR will rotate with the star.
The periods we have identified in the \textit{Chandra}\ lightcurve of \delOri\
Aa (4.76d and 2.04d, see Sect.\ \ref{section:lc}), are possibly the
X-ray signature of CIRs or pulsations. The 4.76$\pm$0.3d
period may be the rotation period of the primary star. Based on a
$v \sin i$ value of 130 km s$^{-1}$\ (Paper IV) and the estimated value of
R$_*$=12\,R$_{\odot}$, and also including the assumption of alignment between
the rotational and orbital axes, the rotational period should be about
4.7d, consistent with our strongest period of 4.76d. A single
non-transient CIR would share the stellar rotational period.
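The rotation-period estimate above can be reproduced with a short numerical check. This is an illustrative sketch only, using the $v \sin i$ and radius estimates quoted in the text and the stated assumption of aligned rotational and orbital axes:

```python
import math

# Illustrative check of P_rot = 2*pi*R_* / (v sin i), assuming the
# rotational and orbital axes are aligned so that v sin i approximates
# the equatorial rotation speed.
R_SUN_KM = 6.957e5              # nominal solar radius in km
v_sin_i = 130.0                 # km/s, from Paper IV
radius_km = 12.0 * R_SUN_KM     # R_* = 12 R_sun, as estimated in the text

period_s = 2.0 * math.pi * radius_km / v_sin_i
period_d = period_s / 86400.0   # seconds -> days
print(f"P_rot ~ {period_d:.2f} d")  # ~4.67 d, consistent with 4.76 +/- 0.3 d
```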
We did not find evidence of the binary orbital period of 5.73d in the
X-ray lightcurve, indicating that most of the X-ray variability is not
related to the orbital motion, at least for the limited orbital data we
have.
There is an apparent increase in flux over 9 days, suggesting long-term
variability. We do not have a sufficient time baseline to quantify this
component of variability, but it is clearly not associated with the
binary period. We suggest this long-term variability is due to
pulsations such as NRP and/or increasing CIR activity.
\section{CONCLUSIONS}\label{section:conclusions}
\textit{Chandra}\ high-resolution grating X-ray spectra of \delOri\ Aa acquired
in 2012 with a total exposure time of $\approx$479 ks have been analyzed
for phase-resolved and time-resolved variability. Several components of
variability were detected in our analyses.
The count rate of the entire spectral range increased during the 9-day
observing campaign by approximately 25\%. We cannot constrain the
cause of this longer-term variability with this dataset, but we
speculate this may be related to stellar pulsations or CIRs and other
wind instabilities.
An important result of the period searches in the X-ray data is that the
binary motion seems to contribute very little to the variability of the
total flux. A period search of the total X-ray flux lightcurve yielded
periods of 4.76$\pm$0.3d and 2.04$\pm$0.5d after removal of the
long-term trend, both of which are less than the binary period of 5.73d.
A period search including the early 2001 \textit{Chandra}\ observation as well
as the 2012 data gave a period near 5.0d, within the errors of the 4.76d
period determined from the normalized 2012 spectra. The 4.76d period is
consistent with the 4.614d secondary period found by MOST; thus it is
present in both the X-ray and optical data. We suggest that this may be
the rotation period of \delOri\ Aa1 based on estimates of $v \sin i$ and
the radius of \delOri\ Aa1. The 2.04d period, also found in the MOST
photometry, may be associated with pulsations or CIRs.
Flux variability of individual emission lines was confirmed with
statistical tests for the He-like triplets of \ion{S}{15}, \ion{Si}{13}, and
\ion{Ne}{9}\ (contaminated with an \ion{Fe}{17}\ line), as well as the
\ion{Fe}{20}\ complex. Also, several line profiles are apparently
non-Gaussian with blue-shifted centroids of about -80 km s$^{-1}$\ prevalent,
possibly indicating that line-fitting with wind profiles would be more
appropriate. The derived $R_{fir}$ values are in ranges similar to those
of other O stars of the spectral type of \delOri\ Aa1.
For the first time, phase-dependent variability in the X-ray emission line
widths has been found in a binary system. Line widths are at a
minimum at $\phi$=0.0 and at a maximum at $\phi$=0.2 and 0.8,
approximately. It is thus likely that the line widths are dependent on
an interaction between the primary and secondary. The variation could
qualitatively be explained as the result of a cavity in the primary
wind produced by wind-wind collision. According to this model, the
cavity created by the colliding winds would be of comparatively lower
density, causing a reduction in blueward or redward emission at
conjunctions, and in principle making the lines narrower at
conjunctions than at quadratures of the binary phase. The spectra
presented in this paper are possibly consistent with this idea,
although additional short-term variability of the line widths is
suggested.
One goal of the 2012 \textit{Chandra}\ observing program of \delOri\ Aa was to
allow observations of a massive star stellar wind as the short-period
secondary occulted different regions of emission formation on its
journey around the primary star. We predict the reduction in the flux
levels due to occultation to be about 1-3\% at most. Additional
variability from other sources of greater magnitude, as well as limited
signal-to-noise in the data, make it impossible to identify occultation
in our dataset at such a low percentage when there are clearly
variations in the 10-15\% range. In particular, detailed analysis of
\ion{Mg}{12}\ showed that flux, radial velocity, and FWHM vary both within a
single orbit and within the dataset as a whole.
The variability we see in the emission from \delOri\ Aa is probably a
composite of several effects, including the long-term, greater than
9-day, photometric variability, binary orbit FWHM effects, inter-orbit
variability and intra-orbit variability. It is likely that CIRs and/or
pulsations play an important role in the variability. New long
observations with the higher sensitivities offered by XMM-Newton would
probably help resolve some of the photometric issues. Questions remain
concerning the source of the periods, phase-dependency of line profiles,
various time scales of variability, and detailed modeling of the line
width variability. Chandra observations at specific phases, such as
conjunction and quadrature, and with a longer timeline, would be useful
in verifying the model as well as parameterizing the variability we have
seen. Additional analysis of the UV DACs may clarify the sources of some
of the components of the variability and in particular the rotation
period of \delOri\ Aa.
\section{Acknowledgements}
The authors acknowledge the constructive comments of the anonymous referee. MFC, JSN, WLW, CMPR, and KH are grateful for support provided by the
National Aeronautics and Space Administration through Chandra Award
Number GO3-14015A, G03-14015E, and GO3-14015G issued by the Chandra
X-ray Observatory Center, which is operated by the Smithsonian
Astrophysical Observatory for and on behalf of the National Aeronautics
Space Administration under contract NAS8-03060. DPH was supported by
NASA through the Smithsonian Astrophysical Observatory contract
SV3-73016 to MIT for the Chandra X-Ray Center and Science Instruments.
YN acknowledges support from the Fonds National de la Recherche
Scientifique (Belgium), the Communaut\'e Fran\c caise de Belgique, the
PRODEX XMM and Integral contracts, and the `Action de Recherche
Concert\'ee' (CFWB-Acad\'emie Wallonie Europe). AFJM is grateful for
financial aid from NSERC (Canada) and FRQNT (Quebec). NDR gratefully
acknowledges his CRAQ (Centre de Recherche en Astrophysique du Qu\'ebec)
fellowship. LMO acknowledges support from DLR grant 50 OR 1302. NRE is
grateful for support from the Chandra X-ray Center NASA Contract
NAS8-03060. JLH acknowledges support from NASA award NNX13AF40G and NSF
award AST-0807477. MFC, JSN, and KH also acknowledge helpful
discussions with John Houck and Michael Nowak on data analysis with
ISIS, and Craig Anderson for technical support. This research has made
use of data and/or software provided by the High Energy Astrophysics
Science Archive Research Center (HEASARC), which is a service of the
Astrophysics Science Division at NASA/GSFC and the High Energy
Astrophysics Division of the Smithsonian Astrophysical Observatory. This
research made use of the Chandra Transmission Grating Catalog and
archive (http://tgcat.mit.edu). This research also has made use of
NASA's Astrophysics Data System.
Facilities: \facility{CXO}, \facility{MOST}
\bibliographystyle{apj}
\section{Introduction}
\IEEEPARstart{T}{he} concept of linear complementary dual (LCD) codes was introduced by
Massey \cite{M} in 1992.
For implementations against side-channel and fault injection attacks,
a new application of binary LCD codes was found by Carlet and Guilley (\cite{BCCGM, CG}).
Since then, LCD codes have attracted wide attention from the coding research community~(\cite{CMTQ1, CMTQP}, \cite{KY}--\cite{LDL}, \cite{Y,ZTLD}).
Carlet {\em et al.} \cite{CMTQP} proved that for $q >3$, any $q$-ary linear code is equivalent to an LCD code over $\Bbb F_q$;
therefore, it is sufficient to investigate binary LCD codes and ternary LCD codes.
Self-orthogonal codes are very important for the study of quantum communications and quantum computations
since they can be applied to the classical construction of quantum error-correcting codes (\cite{CRSS1, CRSS2}).
In this paper, we mainly use simplicial complexes for constructing binary LCD codes and binary self-orthogonal codes.
For the definition of simplicial complexes, we need the following notations. Let $\Bbb F_2$ be the finite field of order $2$ and
$m$ be a positive integer. The support $\mathrm{supp}(v)$ of a vector $v$ in $\Bbb F_2^m$ is defined by the set of nonzero coordinate positions of $v$.
Let $2^{[m]}$ denote the power set of $[m]=\{1, \ldots, m\}$. It is easy to check that there is a bijection between $\mathbb{F}_2^m$ and $2^{[m]}$, defined by $v\mapsto$ supp$(v)$; hence, due to this bijection, a vector $ v$ in $\mathbb{F}_2^m$ is identified with its support supp$(v)$.
For two sets $A$ and $B$,
the set $\{x: x\in A\mbox{ and } x\notin B\}$ is denoted by $A\backslash B$, and the size of $A$ is denoted by $|A|$.
\begin{definition}
A subset $\Delta$ of $\mathbb{F}_2^m $ is called a {\it simplicial complex} if $u\in \Delta$ and $\mathrm{supp}(v)\subseteq \mathrm{supp}(u)$ imply $v\in \Delta$ for any $u,v\in \Bbb F_2^m$.
\end{definition}
An element of a simplicial complex $\Delta$ is called \emph{maximal} if it is not properly contained in any other element of $\Delta$.
Let $\mathcal{F}$ be the set of maximal elements of a simplicial complex $\Delta$. In particular, $\Delta_F$ denotes the simplicial complex generated by a nonzero vector $F$ in $\Bbb F_2^m$.
In this paper, we use a typical construction of a linear code given in \cite{HP}.
Let $D=\{ g_1, g_2,\ldots, g_n \}\subseteq \Bbb F_p^m$. Then a linear code $\mathcal{C}_{D}$ of length $n=|D|$ over $\Bbb F_{p}$ can be defined by
\begin{equation}\mathcal{C}_{D}= \{c_{u}=(u\cdot g_1, u\cdot g_2, \ldots, u\cdot { g}_n): { u}\in \Bbb F_{p}^m\}, \end{equation} where $\cdot $ denotes the Euclidean inner product of two elements in $\Bbb F_p^m.$ The set $D$ is called the {\it defining set} of $\mathcal{C}_{D}$. Let $G$ be the $m\times n$ matrix as follows:
\begin{equation}G=[g_1^T \; g_2^T \; \cdots \; g_n^T],\end{equation}
where the column vector $g_i^T$ denotes the transpose of a row vector $g_i$.
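As a concrete illustration of the construction in Eq. (1) for $p=2$, the following sketch enumerates the codewords of $\mathcal{C}_D$ by brute force (the function and variable names are ours, not from the cited references):

```python
from itertools import product

def code_from_defining_set(D, m):
    """Codewords of C_D over F_2: one codeword (u.g_1, ..., u.g_n) per u."""
    return {tuple(sum(a * b for a, b in zip(u, g)) % 2 for g in D)
            for u in product((0, 1), repeat=m)}

# Example: D = all nonzero vectors of F_2^3 gives the [7,3,4] simplex code.
D = [g for g in product((0, 1), repeat=3) if any(g)]
C = code_from_defining_set(D, 3)
print(len(C))  # 8 distinct codewords, so the dimension is 3
```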
Zhou {\em et al.} \cite{ZTLD} obtained some simple conditions under which the linear codes defined in Eq. (1) are LCD or self-orthogonal, and they also presented four infinite families of binary linear codes. For any positive integers $m$ and $t$ with $1\le t\le m-1$, two defining sets are given as follows:
$$D_t=\{g\in \Bbb F_2^m: wt(g)=t\}, $$
$$\mbox{ and } D_{\le t}=\{g\in \Bbb F_2^m: 1\le wt(g)\le t\},$$ where $wt(v)$ denotes the Hamming weight of $v\in \mathbb{F}^m_2$.
We note that the two sets can also be expressed by using simplicial complexes in the following way:
$$D_t= \Delta_{D_t} \backslash \Delta_{D_{t-1}} \mbox{ and } D_{\le t}= \Delta_{D_t} \backslash\{0\}.$$
Note that here ${D_t} $ denotes a set of maximal elements for any $t\ge 1$, and
$\Delta_{D_t}$ and $\Delta_{D_{t-1}}$ are simplicial complexes. For example, if $m=3$, then $D_1=\{(1,0,0), (0,1,0), (0,0,1)\}$ and $D_2=\{(1,1,0), (1,0,1),(0,1,1)\}$; hence, $\Delta_{D_1}=D_1\cup \{0\}$ and $\Delta_{D_2}= \{0\}\cup D_1\cup D_2$. It is easy to check that $\Delta_{D_1}$ and $\Delta_{D_2}$ are simplicial complexes.
Inspired by \cite{ZTLD}, we employ the difference of two distinct simplicial complexes
for construction of an infinite family of
binary LCD codes and two infinite families of binary self-orthogonal codes.
This paper is organized as follows.
In Section II we introduce some basic concepts on generating functions, LCD codes, and self-orthogonal codes.
In Section III we determine the weight distributions of some binary linear codes and discuss the minimum distances of their dual codes.
Section IV presents a class of binary LCD codes and two classes of binary self-orthogonal codes. Section V concludes this work.
\section{Preliminaries}
\subsection{ Generating functions }
The following $m$-variable generating function associated with a subset $X$ of $\mathbb{F}_2^m$ was introduced by Chang {\em et al.} \cite{CH}.
$$\mathcal{H}_{X}(x_1,x_2,\ldots, x_m)=\sum_{u\in X}\prod_{i=1}^mx_i^{u_i}\in \mathbb{Z}[x_1,x_2, \ldots, x_m],
$$
where $u=(u_1,u_2,\ldots, u_m)\in \mathbb{F}_2^m$; here, for each $u_i$, we identify $0, 1 \in \mathbb{F}_2$ with $0, 1 \in \mathbb{Z}$, respectively. (This is a purely formal definition, and no confusion arises because
we never add the exponents of the $x_i$.)
The following lemma will be used in Section III.
\begin{lemma}\cite[Theorem 1]{CH} \label{th1}
Suppose that $\Delta$ is a simplicial complex of $\mathbb{F}_2^m$ and $\mathcal {F}$ is the set of maximal elements of $\Delta$. Then
\begin{align*}
\mathcal{H}_{\Delta}(x_1,x_2,\ldots, x_m)=\sum_{\emptyset\neq S\subseteq \mathcal{F}}(-1)^{|S|+1}\prod_{i\in \cap S}(1+x_i).
\end{align*}
\end{lemma}
\begin{remk} Recall that there is a bijection between $\mathbb{F}_2^m$ and $2^{[m]}$.
Hence, the set $\cap S$ in Lemma 2.1 can be understood as the intersection of the elements of $S$ in $2^{[m]}$. We have the following example. Let $\Delta=\langle (1,1,0), (0,1,1) \rangle$ be a simplicial complex in $\Bbb F_2^3$. By Lemma 2.1, we have \begin{eqnarray*}&&\mathcal{H}_{\Delta}(x_1,x_2, x_3)\\
&=&(1+x_1)(1+x_2)+(1+x_2)(1+x_3)-(1+x_2)\\
&=&1+x_1+x_2+x_3+x_1x_2+x_2x_3.\end{eqnarray*}
\end{remk}
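Lemma 2.1 and the example in the remark above can be checked numerically. The sketch below (helper names are ours) compares the inclusion-exclusion formula with the direct sum over $\Delta$ at an arbitrary integer point:

```python
import math
from itertools import product, combinations

def closure(maximal, m):
    """The simplicial complex of F_2^m generated by the given maximal vectors."""
    faces = [{i for i, b in enumerate(f) if b} for f in maximal]
    return [u for u in product((0, 1), repeat=m)
            if any({i for i, b in enumerate(u) if b} <= F for F in faces)]

def H_direct(delta, xs):
    """Generating function evaluated by summing over all of Delta."""
    return sum(math.prod(x ** u_i for x, u_i in zip(xs, u)) for u in delta)

def H_inclusion_exclusion(maximal, xs):
    """Evaluation via Lemma 2.1: alternating sum over subsets of maximal faces."""
    faces = [{i for i, b in enumerate(f) if b} for f in maximal]
    total = 0
    for r in range(1, len(faces) + 1):
        for S in combinations(faces, r):
            inter = set.intersection(*S)
            total += (-1) ** (r + 1) * math.prod(1 + xs[i] for i in inter)
    return total

# Delta = <(1,1,0), (0,1,1)> in F_2^3, as in the remark above.
maximal = [(1, 1, 0), (0, 1, 1)]
xs = (2, 3, 5)
print(H_direct(closure(maximal, 3), xs),
      H_inclusion_exclusion(maximal, xs))  # both evaluate to 32
```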
\subsection{ LCD codes and self-orthogonal codes }
Let $\Bbb F_q$ be the finite field of order $q$, where $q$ is a power of a prime.
Let $\mathcal C$ be an $[n,k]$ code over $\Bbb F_q$.
The dual code $\mathcal{C}^{\bot}$ of $\mathcal{C}$ is defined by $\mathcal{C}^{\bot}=\{w\in\Bbb F_q^n: w\cdot c=0 \mbox{ for every } c\in \mathcal{C}\}.$ If $\mathcal{C} \cap \mathcal{C}^{\bot}=\{0\}$, then $\mathcal{C}$ is called a {\it linear complementary dual} (LCD) code; if $\mathcal{C} \subseteq\mathcal{C}^{\bot}$, then $\mathcal{C}$ is called {\it self-orthogonal}.
Regarding the codes defined in Eq. (1), Zhou {\em et al.} \cite{ZTLD} obtained the following lemma.
\begin{lemma} \cite[Corollary 16]{ZTLD}
Let $\mathcal{C}_D$ be the linear code defined in Eq. (1).
Let $\mbox{Rank}(G)$ denote the rank of the matrix $G$ in Eq. (2).
Then $\mathcal{C}_D$ is self-orthogonal (LCD, respectively) if and only
if $GG^T=0$ ($\mbox{Rank} (GG^T)=\mbox{Rank}(G)$, respectively).
\end{lemma}
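Lemma 2.2 translates directly into a finite computation over $\Bbb F_2$. The following sketch (our own helper functions, not part of the cited references) tests self-orthogonality and the LCD property via the ranks of $G$ and $GG^T$:

```python
from itertools import product

def gf2_rank(rows):
    """Rank over F_2; each row is encoded as an int bitmask."""
    rows = [r for r in rows if r]
    rank = 0
    while rows:
        pivot = rows.pop()       # eliminate the pivot's leading bit
        rank += 1
        lead = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> lead) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def gram(D, m):
    """The m x m matrix G G^T over F_2 for the defining set D."""
    g_rows = [[g[i] for g in D] for i in range(m)]  # rows of G
    return [[sum(a * b for a, b in zip(g_rows[i], g_rows[j])) % 2
             for j in range(m)] for i in range(m)]

def is_self_orthogonal(D, m):
    return all(e == 0 for row in gram(D, m) for e in row)

def is_lcd(D, m):
    mask = lambda row: sum(b << k for k, b in enumerate(row))
    g_rows = [[g[i] for g in D] for i in range(m)]
    return gf2_rank([mask(r) for r in gram(D, m)]) == \
           gf2_rank([mask(r) for r in g_rows])

# The [7,3,4] binary simplex code (D = all nonzero vectors of F_2^3) is
# self-orthogonal but not LCD: G G^T = 0 while Rank(G) = 3.
D = [g for g in product((0, 1), repeat=3) if any(g)]
print(is_self_orthogonal(D, 3), is_lcd(D, 3))  # True False
```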
Let $\mathcal{C}$ be an $[n,k,d]$ linear code over $\Bbb F_q$.
Assume that there are $A_i$ codewords in $\mathcal C$ with Hamming weight $i$ for $1\le i \le n$.
Then $\mathcal C$ has weight distribution $(1, A_1, \ldots, A_n)$ and weight enumerator $1+A_1z+\cdots +A_nz^n$.
Moreover, if the number of nonzero $A_{i}$'s in the sequence $(A_1, \ldots, A_n)$ is exactly equal to $t$,
then the code is called {\it $t$-weight}. An $[n,k,d]$ code $\mathcal{C}$ is called {\it distance optimal} if there is no $[n,k,d+1]$ code (that is, this code has the largest minimum distance for given length $n$ and dimension $k$),
and it is called {\it almost optimal} if an $[n, k, d + 1]$ code is distance optimal (refer to~\cite[Chapter 2]{HP}). On the other hand, the {\it Griesmer bound} \cite{G} on an $[n, k, d]$ linear code over $\Bbb F_q$ is given by
$ \sum_{i=0}^{k-1}\bigg\lceil {\frac{d}{q^i}} \bigg\rceil \le n, $ where $\lceil {\cdot} \rceil$ is the ceiling function.
Furthermore, a binary $[n,k,d]$ LCD code $\mathcal C$ is called {\it LCD distance optimal} if there is no $[n,k,d+1]$ LCD code (that is, this LCD code has the largest minimum distance among $[n, k]$ LCD codes for given length $n$ and dimension $k$),
and it is called {\it LCD almost optimal} if an $[n, k, d + 1]$ code is LCD distance optimal.
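The Griesmer bound is easy to evaluate directly; the sketch below (our own code) computes its left-hand side and checks two binary parameter sets that meet the bound with equality:

```python
import math

def griesmer_sum(k, d, q=2):
    """Left-hand side of the Griesmer bound: sum of ceil(d / q^i), i = 0..k-1."""
    return sum(math.ceil(d / q ** i) for i in range(k))

print(griesmer_sum(3, 4))   # 4 + 2 + 1 = 7, so a [7,3,4] code meets the bound
print(griesmer_sum(5, 12))  # 12 + 6 + 3 + 2 + 1 = 24, so does a [24,5,12] code
```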
\section{ Weight distributions of binary linear codes arising from simplicial complexes}
We will determine the weight distributions of the codes defined in Eq. (1), noting that their defining sets are
expressed as the differences of two simplicial complexes.
Let $\Delta_1$ and $\Delta_2$ with $\Delta_2 \subset \Delta_1$ be two distinct simplicial complexes of $\Bbb F_2^m$.
Let $p=2$ and $D=\Delta_1\backslash \Delta_2$ in Eq. (1). Note that if $u=0$ in Eq. (1), then $wt(c_{u})=0$.
From now on, we assume that $u\neq {0}$. Then
\begin{eqnarray}wt(c_{u})
&=&|D|-\frac 12 \sum_{y\in \Bbb F_2}\sum_{d\in D}(-1)^{y(u\cdot d)}\nonumber\\
&=& \frac{|D|}2-\frac 12 \sum_{d\in \Delta_1\backslash \Delta_2}(-1)^{u\cdot d}\nonumber\\
&=& \frac{|D|}2-\frac 12(\sum_{d\in \Delta_1}(-1)^{u\cdot d}- \sum_{d\in \Delta_2}(-1)^{u\cdot d})\nonumber\\
&=& \frac{|D|}2-\frac 12\mathcal{H}_{\Delta_1}((-1)^{u_1},\ldots, (-1)^{u_m})\nonumber\\
&+&\frac 12\mathcal{H}_{\Delta_2}((-1)^{u_1},\ldots, (-1)^{u_m}),
\end{eqnarray}
where $u=(u_1, u_2, \ldots, u_m)\in \Bbb F_2^m$.
For $u\in \mathbb{F}_2^m$ and $X\subseteq\mathbb{F}_2^m$,
a Boolean function $\chi(u|X)$ in $m$ variables is defined by
$\chi(u|X)=1$ if and only if $u\cap X=\emptyset$.
If a simplicial complex is generated by a maximal element $A$ (denoted by $\Delta_A$), then by Lemma 2.1 we have
\begin{eqnarray}&&\mathcal{H}_{\Delta_A}((-1)^{u_1}, \ldots, (-1)^{u_m})= \prod _{i\in A}(1+(-1)^{u_i})\nonumber\\
&=&\prod _{i\in A}2(1-u_i)=2^{|A|}\chi(u | A).\end{eqnarray}
\begin{theorem} {\rm Let $m\ge 3$ be a positive integer.
Suppose that $A$ and $B$ are two elements of $\Bbb F_2^m$ with $B\subset A$.
Let $D=\Delta_A \backslash \Delta_B$. Then the code $\mathcal{C}_D$ defined in Eq. (1) meets the Griesmer bound.
$(1)$ If $|B|=0$, then $\mathcal{C}_D$ is a $[2^{|A|}-1, |A|, 2^{|A|-1}]$ one-weight code with weight enumerator $1+(2^{|A|}-1)z^{2^{|A|-1}}.$
$(2)$ If $|B|\ge 1$, then $\mathcal{C}_D$ is a $[2^{|A|}-2^{|B|}, |A|, 2^{|A|-1}-2^{|B|-1}]$ two-weight code with weight enumerator $$1+(2^{|A|}-2^{|A|-|B|})z^{2^{|A|-1}-2^{|B|-1}}+(2^{|A|-|B|}-1)z^{2^{|A|-1}}.$$
}
\end{theorem}
{\bf Proof} The length of $\mathcal{C}_D$ is $2^{|A|}-2^{|B|}$. By Eqs. (3) and (4), \begin{equation*}wt(c_{{u}})=2^{|A|-1}(1-\chi(u|A))- 2^{|B|-1}(1-\chi(u|B)).
\end{equation*}
The frequency of each codeword $c_{{u}}$ can be determined by the vector $u$. By the definition of $\chi(u|A)$, we note that $wt(c_u)=0$ if and only if $\chi(u|A)=\chi(u|B)=1$: that is, $u\cap A=\emptyset$. Since $u$ ranges over $\Bbb F_2^m$, every codeword is repeated $2^{m-|A|}$ times. Hence, we see that
the code $\mathcal{C}_D$ has dimension $|A|$.
Furthermore, if $|B|\ge 1$, then we have \begin{eqnarray*} &&\sum_{i=0}^{|A|-1}\bigg\lceil {\frac{2^{|A|-1}-2^{|B|-1}}{2^i}} \bigg\rceil \\
&=& (2^{|A|}-1)-(2^{|B|}-1) =2^{|A|}-2^{|B|}. \end{eqnarray*} Hence, we conclude that $\mathcal{C}_D$ meets the Griesmer bound.
Similarly, the result holds for the case where $|B|=0$.
$\blacksquare$
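Theorem 3.1 can be checked by brute force for small parameters. The sketch below (our own code) takes $m=|A|=3$ and $|B|=1$, for which the theorem predicts a $[6,3,3]$ two-weight code with weight enumerator $1+4z^3+3z^4$:

```python
from collections import Counter
from itertools import product

def weight_distribution(D, m):
    """Weight -> multiplicity over all codewords of C_D from Eq. (1), p = 2."""
    counts = Counter()
    for u in product((0, 1), repeat=m):
        counts[sum(sum(a * b for a, b in zip(u, g)) % 2 for g in D)] += 1
    return dict(counts)

# A = {1,2,3}, B = {1}: D = Delta_A \ Delta_B = F_2^3 \ {000, 100}.
delta_B = {(0, 0, 0), (1, 0, 0)}
D = [g for g in product((0, 1), repeat=3) if g not in delta_B]
dist = weight_distribution(D, 3)
print(sorted(dist.items()))  # [(0, 1), (3, 4), (4, 3)], i.e. 1 + 4z^3 + 3z^4
```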
\begin{theorem} Let $D$ be defined as in Theorem 3.1. Then $\mathcal{C}_D^{\bot}$ is a
$[2^{|A|}-2^{|B|}, 2^{|A|}-2^{|B|}-|A|, \delta]$ linear code, where
$$ \delta=\left\{
\begin{array}{ll}
3 & \mbox{if}\ |A|>|B|+1,\\
4 & \mbox{if}\ |A|=|B|+1\ge 3.
\end{array} \right. $$
\end{theorem}
{\bf Proof} Assume that $D=\{g_1, \ldots, g_n\}\subseteq \Bbb F_2^m$ with $n=|D|$.
The generator matrix $G'$ of $\mathcal{C}_D$ can be induced by the matrix $G$ in Eq. (2) by deleting all the zero row vectors of $G$.
Clearly, $G'$ is the parity-check matrix of $\mathcal{C}_D^{\bot}$. The minimum distance of $\mathcal{C}_D^{\bot}$ is greater than 2.
We divide the proof into two parts.
(1) If $|A|>|B|+1$, then there are two distinct positive integers $i$ and $j$ in $A\backslash B$. Let ${\bf e}_k=(e_1, e_2, \ldots, e_m)\in \Bbb F_2^m$, where
$e_k=1$ and $e_l=0 $ if $l\neq k$. Then it is easy to check that ${\bf e}_i^T, {\bf e}_j^T$, and ${\bf e}_i^T+{\bf e}_j^T$ are three different columns of $G'$; therefore, the minimum distance of $\mathcal{C}_D^{\bot}$ is 3.
(2) If $|A|=|B|+1$, then we assume that $A\backslash B=\{i\}$ without loss of generality. We note that any three columns of $G'$ are linearly independent. Since $|B|\ge 2$, there are two integers $j$ and $k$ in $B$. Then ${\bf e}_i^T, {\bf e}_i^T+{\bf e}_j^T$, ${\bf e}_i^T+{\bf e}_k^T$, and ${\bf e}_i^T+{\bf e}_j^T+{\bf e}_k^T$ are four linearly dependent columns of $G'$. Therefore, the minimum distance of $\mathcal{C}_D^{\bot}$ is 4.
$\blacksquare$
\begin{corollary} {\rm Let $|B|=0$ and $|A|>1$ in Theorem 3.2. Then $\mathcal{C}_D^{\bot}$ is a $[2^{|A|}-1, 2^{|A|}-1-|A|, 3]$ Hamming code. }
\end{corollary}
\begin{theorem} {\rm Let $m\ge 3$ be a positive integer.
Suppose that $A$ and $B$ are two distinct elements of $\Bbb F_2^m$ such that $0< |B|< |A|$ and $A\cap B=\emptyset$.
Let $D=(\Delta_A\cup \Delta_B) \backslash \{0\}$.
Then $\mathcal{C}_D$ in Eq. (1) is a $[2^{|A|}+2^{|B|}-2, |A|+|B|, 2^{|B|-1}]$ three-weight code with weight enumerator \begin{eqnarray*}&&1+(2^{|B|}-1)z^{2^{|B|-1}}+(2^{|A|}-1)z^{2^{|A|-1}}\\
&+&(2^{|B|}-1)(2^{|A|}-1)z^{2^{|B|-1}+2^{|A|-1}}.\end{eqnarray*}
}
\end{theorem}
{\bf Proof} The length of $\mathcal{C}_D$ is $2^{|A|}+2^{|B|}-2$. By Eqs. (3) and (4), we have \begin{equation*}wt(c_{{u}})=2^{|A|-1}(1-\chi(u|A))+2^{|B|-1}(1-\chi(u|B)).
\end{equation*}
The frequency of each codeword in $\mathcal{C}_D$ can be determined by the vector $u$, and so the result follows immediately.
$\blacksquare$
In a similar way to Theorem 3.2, we have:
\begin{theorem}
Let $D$ be defined as in Theorem 3.4.
Then $\mathcal{C}_D^{\bot}$ is a $[2^{|A|}+2^{|B|}-2, 2^{|A|}+2^{|B|}-2-|A|-|B|, 3]$ code. \end{theorem}
\begin{theorem} {\rm Let $m$ be a positive even integer and $k=\frac m2$.
Let $\{A_1,\ldots, A_k\}$ be a partition of $\{1,2,\ldots, m\}$, where $|A_i|=2$ for $1\le i\le k$.
Let $D=(\Delta_{A_1}\cup\cdots\cup \Delta_{A_k}) \backslash \{0\}$ in Eq. (1).
Then $\mathcal{C}_D$ is a $[{3m}/2, m, 2]$ code and its weight
enumerator is given by $$1+\sum_{l=0}^{k-1}3^{k-l}{k\choose l}z^{m-2l}.$$
}
\end{theorem}
{\bf Proof} The length of $\mathcal{C}_D$ is $\frac32m$. By Eqs. (3) and (4), \begin{equation}wt(c_{{u}})=m-2(\chi(u|A_1)+\cdots+\chi(u|A_k)).
\end{equation}
The frequency of each codeword in $\mathcal{C}_D$ can be determined by the vector $u$, and hence the result follows right away.
$\blacksquare$
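The weight enumerator of Theorem 3.6 can likewise be verified by brute force. By Eq. (5), a codeword has weight $m-2l$ exactly when $u$ vanishes on $l$ of the blocks $A_i$, giving ${k\choose l}3^{k-l}$ codewords of that weight. The sketch below (our own code) checks this for $m=4$, $k=2$, $A_1=\{1,2\}$, $A_2=\{3,4\}$, where the enumerator is $1+6z^2+9z^4$:

```python
from collections import Counter
from itertools import product

# D = (Delta_{A_1} u Delta_{A_2}) \ {0} for A_1 = {1,2}, A_2 = {3,4} in F_2^4.
D = [(1, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0),
     (0, 0, 1, 0), (0, 0, 0, 1), (0, 0, 1, 1)]

counts = Counter()
for u in product((0, 1), repeat=4):
    counts[sum(sum(a * b for a, b in zip(u, g)) % 2 for g in D)] += 1

print(sorted(counts.items()))  # [(0, 1), (2, 6), (4, 9)]
```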
We obtain the following theorem in a similar way to Theorem 3.2.
\begin{theorem} Let $D$ be defined as in Theorem 3.6.
Then $\mathcal{C}_D^{\bot}$ is a $[{3m}/2, m/2, 3]$ code.
\end{theorem}
\section{ Binary LCD codes and self-orthogonal codes}
We present some binary LCD codes and binary self-orthogonal codes in this section.
\begin{lemma} Let $\Delta_A$ be a simplicial complex generated by a nonzero element $A$ in $\Bbb F_2^m$
and $\Delta_A\backslash\{0\}=\{g_1, g_2,\ldots, g_n\}\subseteq\Bbb F_2^m$, where $n=2^{|A|}-1$.
Let $G=[g_1^Tg_2^T\cdots g_n^T]$ be the $m\times n$ matrix in Eq. (2).
Then $\mbox{Rank}(G)=|A|$ and $$ \mbox{Rank}(GG^T)=\left\{
\begin{array}{ll}
0 & \mbox{if}\ |A|\ge 3,\\
|A| & \mbox{if}\ |A|<3.
\end{array} \right. $$
\end{lemma}
{\bf Proof} Note that $\mbox{Rank}(G)=|A|$.
Let $M=(m_{ij})_{m\times m}=GG^T$, and let $c_i$ denote the $i$-th row vector of $G$,
so that $m_{i,j}=c_ic_j^T$ (see \cite[Lemma 18]{ZTLD}).
Let $U_{i,j}=\{g=(g_1,g_2, \ldots, g_m)\in D: g_i=g_j=1\}$.
Then $m_{i,j}\equiv|U_{i,j}| \pmod 2$, and the result follows from
$$ |U_{i,j}|=\left\{
\begin{array}{ll}
2^{|A|-1} & \mbox{if}\ i=j\in A,\\
2^{|A|-2} & \mbox{if}\ i\neq j, \; i,j\in A, \\
0 &\mbox{otherwise}.
\end{array} \right. $$
$\blacksquare$
\begin{theorem} Let $D$ be defined as in Theorem 3.1.
Then the code $\mathcal{C}_D$ defined in Eq. (1) is self-orthogonal if and only if one of the following holds:
(1) $|B|=0$ and $|A|\ge 3$.
(2) $|A|>|B|\ge 3$.
\end{theorem}
{\bf Proof} Let $\Delta_B\backslash\{0\}=\{g_1, g_2,\ldots, g_l\}\subseteq\Bbb F_2^m$ and $\Delta_A\backslash\Delta_B=\{g_{l+1}, g_{l+2},\ldots, g_n\}\subseteq\Bbb F_2^m$, and
$\Delta_A\backslash\{0\}=\{g_1, g_2,\ldots, g_n\}\subseteq\Bbb F_2^m$.
Let $G_1=[g_1^T g_2^T\cdots g_l^T]$, $G_2=[g_{l+1}^Tg_{l+2}^T\cdots g_n^T]$ and $G=[G_1G_2].$ By Lemma 2.2, the code $\mathcal{C}_D$ is self-orthogonal if and only if $G_2G_2^T=0$. Note that $GG^T=G_1G_1^T+G_2G_2^T$. Now, we consider the following four cases depending on the value of $|B|$.
(1) If $|B|=0$, then $\mathcal{C}_D$ is self-orthogonal if and only if $|A|\ge 3$ from Lemma 4.1.
(2) If $|B|=1$, then $m_{ii}=2^{|A|-1}-1 \equiv1\pmod 2$ for $i\in B$. Hence, $\mathcal{C}_D$ cannot be self-orthogonal in this case.
(3) If $|B|=2$, then $m_{ij}=2^{|A|-2}-1 \equiv1\pmod 2$ for $i,j\in B$. Thus, $\mathcal{C}_D$ cannot be self-orthogonal in this case.
(4) If $|B|\ge3$, then we have that $G_1G_1^T=0$ and $GG^T=0$ by Lemma 4.1. Therefore, $\mathcal{C}_D$ is self-orthogonal.
$\blacksquare$
\begin{example}{\rm Let $|B|=0$ and $|A|=3\le m$. Then $\mathcal{C}_D$ in Theorem 3.1 is a $[7,3,4]$ self-orthogonal code, and $\mathcal{C}_D^{\bot}$ is a $[7,4,3]$ code. According to \cite{G2}, we find that both $\mathcal{C}_D$ and $\mathcal{C}_D^{\bot}$ are distance optimal.}
\end{example}
\begin{example}{\rm Let $|B|=3$ and $|A|=5\le m$. Then $\mathcal{C}_D$ in Theorem 3.1 is a $[24,5,12]$ self-orthogonal code, and $\mathcal{C}_D^{\bot}$ is a $[24,19,3]$ code. We confirm that both $\mathcal{C}_D$ and $\mathcal{C}_D^{\bot}$ are distance optimal according to \cite{G2}.}
\end{example}
\begin{example}{\rm Let $|B|=4$ and $|A|=5\le m$. Then $\mathcal{C}_D$ in Theorem 3.1 is a $[16,5,8]$ self-orthogonal code and $\mathcal{C}_D^{\bot}$ is a $[16,11,4]$ code. According to \cite{G2}, we conclude that both $\mathcal{C}_D$ and $\mathcal{C}_D^{\bot}$ are distance optimal.}
\end{example}
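The parameters in the three examples above can be verified by brute force. The sketch below assumes, as in the proof of the theorem above, that $\mathcal{C}_D$ is generated by the matrix whose columns are the vectors of $\Delta_A\setminus\Delta_B$, where $\Delta_S$ denotes the set of $0/1$-vectors supported on $S$.

```python
from itertools import product

def code_params(a, b):
    """Brute-force [n, k, d] of C_D for D = Delta_A minus Delta_B,
    where A = {0,...,a-1}, B = {0,...,b-1}, and the codeword of x
    in F_2^a is (x . d mod 2) indexed by d in D."""
    D = [v for v in product([0, 1], repeat=a) if any(v[b:])]
    words = {tuple(sum(xi * di for xi, di in zip(x, d)) % 2 for d in D)
             for x in product([0, 1], repeat=a)}
    n = len(D)
    k = len(words).bit_length() - 1            # |C_D| = 2^k
    d = min(sum(w) for w in words if any(w))   # minimum nonzero weight
    return n, k, d

assert code_params(3, 0) == (7, 3, 4)    # |B| = 0, |A| = 3
assert code_params(5, 3) == (24, 5, 12)  # |B| = 3, |A| = 5
assert code_params(5, 4) == (16, 5, 8)   # |B| = 4, |A| = 5
```

Here the ambient length is taken to be $|A|$; enlarging $m$ only adds coordinates on which every $x$ pairs trivially with $D$, so the parameters are unchanged.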
\begin{theorem} Let $D$ be defined as in Theorem 3.4.
Then $\mathcal{C}_D$ is self-orthogonal if and only if $|A|>|B|\ge 3$.
\end{theorem}
{\bf Proof} Let $\Delta_B\backslash\{0\}=\{g_1, \ldots, g_l\}\subseteq\Bbb F_2^m$ and
$\Delta_A\backslash\{0\}=\{h_1,\ldots, h_n\}\subseteq\Bbb F_2^m$.
Let $G_1=[g_1^T \cdots g_l^T]$, $G_2=[h_1^T\cdots h_n^T]$, and $G=[G_1G_2].$
From the assumption that $A\cap B=\emptyset$ and Lemma 2.2, it follows that $\mathcal{C}_D$
is self-orthogonal if and only if $GG^T=0$. The result thus follows from the fact that $GG^T=G_1G_1^T+G_2G_2^T$ and Lemma 4.1.
$\blacksquare$
\begin{example}{\rm Let $|B|=2$, $|A|=3$, and $5\le m$. Then $\mathcal{C}_D$ in Theorem 3.4 is a $[10,5,3]$ code (not self-orthogonal, as $|B|=2<3$) and $\mathcal{C}_D^{\bot}$ is a $[10,5,3]$ code. According to \cite{G2}, we find that $\mathcal{C}_D$ and $\mathcal{C}_D^{\bot}$ are both almost optimal.}
\end{example}
\begin{theorem} Let $D$ be defined as in Theorem 3.6. Then $\mathcal{C}_D$ is an LCD code.
\end{theorem}
{\bf Proof} By Lemma 4.1, for any $1\le i\le k$ we have $m_{i_1,i_2}=m_{i_2,i_1}=1,$ where $\{i_1, i_2\}= A_i$. Since $\{A_1,\ldots, A_k\}$ is a partition of $\{1,2,\ldots, m\}$ and $\mbox{Rank}(G)=m$, we can write $GG^T=\mbox{diag}\{I_2, I_2, \ldots, I_2\}$, where $I_2$ is the identity matrix of order 2. Hence $\mbox{Rank}(G)=\mbox{Rank}(GG^T)=m$, and the result follows from Lemma 2.2.
$\blacksquare$
In \cite{GK, HS}, the authors obtained some bounds on LCD codes, and they also gave a complete classification of binary LCD codes with small lengths.
\begin{example}{\rm Let $m=4$. Then $\mathcal{C}_D$ in Theorem 3.6 is a $[6,4,2]$ binary LCD code and $\mathcal{C}_D^{\bot}$ is a $[6,2,3]$ binary LCD code. According to \cite{G2}, $\mathcal{C}_D$ is distance optimal and $\mathcal{C}_D^{\bot}$ is almost optimal. According to the tables in \cite{GK, HS}, we see that $\mathcal{C}_D$ and $\mathcal{C}_D^{\bot}$ are both LCD distance optimal codes as well.
}
\end{example}
\begin{example}{\rm Let $m=6$. Then $\mathcal{C}_D$ in Theorem 3.6 is a $[9,6,2]$ binary LCD code and $\mathcal{C}_D^{\bot}$ is a $[9,3,3]$ binary LCD code. According to \cite{G2}, $\mathcal{C}_D$ is distance optimal and $\mathcal{C}_D^{\bot}$ is almost optimal.
Moreover, we conclude that the code $\mathcal{C}_D$ is LCD distance optimal based on the tables in \cite{GK, HS}.
}
\end{example}
\begin{example}{\rm Let $m=8$. Then $\mathcal{C}_D$ in Theorem 3.6 is a $[12,8,2]$ binary LCD code and $\mathcal{C}_D^{\bot}$ is a $[12,4,3]$ binary LCD code.
We find that the code $\mathcal{C}_D$ is almost optimal according to \cite{G2}.
Furthermore, we can see that $\mathcal{C}_D$ is an LCD distance optimal code according to the tables in \cite{GK, HS}.
}
\end{example}
\section{Concluding remarks}
In this paper we obtain an infinite family of binary LCD codes and two infinite families
of binary self-orthogonal codes by using simplicial complexes.
Weight distributions are explicitly determined for these codes.
We also find some (almost) optimal binary self-orthogonal and LCD codes.
It is worth noting that some of our self-orthogonal codes in Theorem 3.1 meet the Griesmer bound.
Table I presents some of the optimal binary LCD codes obtained by using Theorems 3.6 and 3.7; their optimality is based on the tables in \cite{GK, HS}. The classification in \cite{GK, HS} treats binary LCD codes of only small lengths, so the optimality of our LCD codes in Theorems 3.6 and 3.7 is confirmed only for small lengths, owing to the limits of the current database.
However, we believe that our binary LCD codes may include new LCD distance optimal codes of larger lengths once the database is extended to larger lengths.
\begin{table} [h]
\caption{Some LCD distance optimal (or almost optimal) codes from Theorems 3.6 and 3.7}
\begin{tabu} to 0.4\textwidth{|X[1,c]|X[1,c]|}
\hline
\rm{Parameters}&\rm{Optimality} \\
\hline
$[3,2,2]$ & LCD distance optimal \\
\hline
$[6,4,2]$ & LCD distance optimal \\
\hline
$[6,2,3]$ & LCD distance optimal \\
\hline
$[9,3,3]$ & LCD almost optimal \\
\hline
$[9,6,2]$ & LCD distance optimal \\
\hline
$[12,8,2]$ & LCD distance optimal \\
\hline
$[15,10,2]$ & LCD almost optimal \\
\hline
\end{tabu}
\end{table}
{\bf Acknowledgement.}
We express our gratitude to the reviewers for their very helpful comments, which
improved the exposition of this paper.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
\subsection{Summary}
\label{subsec: summary2}
In this series of two papers, we prove, under some hypotheses, an integral $R=\mathbb{T}$ theorem for the mod-$p$ Galois representation ${\bar\rho} = 1 \oplus \omega$, where $\mathbb{T}$ is the Hecke algebra acting on weight $2$ modular forms of level $N=\ell_0\ell_1$, $p\geq 5$ is a prime number, $\omega$ is the mod-$p$ cyclotomic character, and $R$ is a universal pseudodeformation ring for $\overline{\rho}$. We are concerned with the case where
\begin{itemize}
\item $\ell_0$ is a prime with $\ell_0 \equiv 1 \pmod{p}$, and
\item $\ell_1$ is a prime with $\ell_1 \not \equiv \pm 1\pmod{p}$ such that $\ell_1$ is a $p$th power modulo $\ell_0$.
\item There is a unique weight 2 cusp form of level $\ell_0$ that is congruent to the Eisenstein series modulo $p$.
\end{itemize}
See the introduction of Part I (\cite{part1}) for a discussion of why this particular setup is interesting from the point of view of Galois representations.
This second paper is focused on the computations needed to verify the hypotheses of the $R=\mathbb{T}$ theorem proven in Part I.
The method we use to prove $R=\mathbb{T}$ is new. As with the standard approach, we start with a surjection $R \onto \mathbb{T}$ and show that if $R$ is ``small enough,'' this surjection must be an isomorphism. Standard methods use tangent space computations to show that $R$ is ``small enough,'' but these techniques are not enough in our setting because of the failure of the Gorenstein property. Instead, we show that $R$ is ``small enough'' by bounding the dimension of $R/pR$ (as an $\mathbb{F}_p$-vector space) under certain conditions.
In Part I of this pair of papers, we prove that
\[
\dim_{\mathbb{F}_p}R/pR \le 3\Longleftrightarrow \dim_{\mathbb{F}_p}R/pR = 3\Longleftrightarrow R=\mathbb{T},
\]
and then prove that $\dim_{\mathbb{F}_p}R/pR > 3$ if and only if certain Galois cochains exist and satisfy specific local conditions about their restrictions to decomposition groups at $\ell_0$, $\ell_1$, and $p$. Broadly, the main steps of this paper are as follows.
\begin{itemize}
\item Translate the cochain existence problem into a problem about the existence of extensions of $\mathbb{Q}(\zeta_p)$ with prescribed (nilpotent $p$-group) Galois group, and translate the local conditions into conditions on splitting behavior of primes in these extensions.
\item Describe extensions with the desired Galois groups explicitly as iterated Kummer extensions, following Sharifi's construction of generalized Heisenberg extensions \cite{sharifithesis,sharifi2007,LLSWW2020}. We refer to these as \emph{twisted-Heisenberg extensions}.
\item Adjust the extensions constructed in the previous step so that they have the desired local properties. This involves understanding the local behavior of certain global cochains, which we achieve using a tame analog of the Gross-Stark conjecture, developed in \cite{WWE3,wake2020eisenstein}.
\item Use Kummer theory to express the splitting behavior of primes in these extensions in terms of conditions on the Kummer generators.
\item Perform computations in Sage \cite{SAGE} using the unit/S-unit interface, written by John Cremona, to the unit/S-unit groups computed in PARI/GP \cite{PARI2}.
\item Establish number-theoretic characterizations of these extension fields.
\end{itemize}
Using these computations, we find many explicit examples where the conditions for $\dim_{\mathbb{F}_p}R/pR \le 3$ are satisfied and conclude that $R=\mathbb{T}$ in these cases. Moreover, we find examples where $\dim_{\mathbb{F}_p}R/pR >3$, and we compute that $\mathrm{rank}_{\mathbb{Z}_p}\mathbb{T}>3$ in each of these cases, which is consistent with our $R=\mathbb{T}$ conjecture.
Although the focus of this paper is on computing bounds for $\dim_{\mathbb{F}_p}R/pR$, we expect that some of the techniques developed here will be of independent interest. We expect that the same methods can be used to compute bounds on dimensions of residually-reducible deformation rings in other contexts. We also hope that this paper can serve as a guide for further computation and exploration in generalized Heisenberg extensions of number fields.
\subsection{Main results}
\label{subsec: P2 main results}
We begin by formulating our main results in terms of splitting conditions in certain unipotent $p$-extensions of $\mathbb{Q}(\zeta_p)$ determined by $N=\ell_0\ell_1$. In the description below, we use $C_p$ to denote a cyclic group of order $p$.
Let $K =\mathbb{Q}(\zeta_p,\ell_1^{1/p})$, and let $L/\mathbb{Q}(\zeta_p)$ be the $\omega^{-1}$-isotypic $C_p$-extension such that $(1-\zeta_p)$ splits and only the primes over $\ell_0$ ramify. (For the existence and uniqueness of $L/\mathbb{Q}(\zeta_p)$, see, e.g., \cite[Lem.\ 3.9]{CE2005}.)
To state the main result of this paper, we require two special $C_p$-extensions of $K$ defined in \S\ref{subsec: P2 deducing main}, which we denote by $K'/K$ and $K''/K$. These $C_p$-extensions are characterized by certain Galois-theoretic and splitting conditions, including the ``$\omega^i$-isotypic'' condition that is defined in Definition \ref{defn: Delta isotypic}. In particular, the Galois-theoretic conditions also involve a tower of $C_p$-extensions of $L$ in which each extension in the tower is constructed by composing the previous subextension with an extension of $K$.
Diagrammatically, letting $M = KL$, we have:
\[
\begin{tikzcd}[every arrow/.append style={-}]
&&M''\arrow{d}\arrow{ddll}\\
&&M'\arrow{d}\arrow{dl}\\
K''\arrow{dr}&K'\arrow{d}&M\arrow{d}\arrow{dl}\\
& K\arrow{d}&L\arrow{dl}\\
&\mathbb{Q}(\zeta_p)&
\end{tikzcd}
\]
In this setup, $M'/M$ is the unique $C_p$-extension such that
\begin{itemize}
\item $M'/\mathbb{Q}$ is Galois and $M'/M$ is $\omega^0$-isotypic
\item $M'/M$ is unramified
\item the primes of $M$ over $\ell_0$ split in $M'/M$.
\end{itemize}
Then $K'/K$ is characterized up to isomorphism by being a $C_p$-extension of $K$ contained in $M'$ but not equal to $M$. See Proposition \ref{prop: characterize M'} for details.
Likewise, assuming that the primes over $\ell_1$ split in $K'/K$ (equivalently, in $M'/M$), we can construct and identify another $C_p$-extension $M''/M'$. It is the unique $C_p$-extension of $M'$ such that
\begin{itemize}
\item $M''/\mathbb{Q}$ is Galois and $M''/M'$ is $\omega$-isotypic
\item the conductor of $M''/M'$ divides (resp.\ is equal to) $\mathfrak{m}^{\mathrm{flat}} := \prod_{v \mid p} v^2$; that is, the product of the squares of the primes of $M'$ over $p$
\item primes of $M'$ over $\ell_0$ split in $M''/M'$.
\end{itemize}
Then $K''/K$ is characterized up to isomorphism by being a $C_p$-extension of $K$ contained in $M''$, not contained in $M'$, and being a member of an isomorphism class (of subfields of $M''$) of cardinality $p$. See Proposition \ref{prop: characterize M''} for details.
The first main result of this paper gives splitting conditions in the $C_p$-extensions $K'/K$ and $K''/K$ for when $\dim_{\mathbb{F}_p}R/pR>3$.
\begin{thm}[{Theorem \ref{thm: main}}]
\label{thm: main intro}
We have $\dim_{\mathbb{F}_p}R/pR>3$ if and only if the following conditions hold:
\begin{enumerate}[label=(\roman*)]
\item all primes of $K$ over $\ell_1$ split in $K'/K$;
\item there exists some prime of $K$ over $\ell_0$ that splits in both $K'/K$ and $K''/K$.
\end{enumerate}
In particular, when $\dim_{\mathbb{F}_p}R/pR=3$, we have $R=\mathbb{T}$.
\end{thm}
The second main result of this paper is an algorithm that computes whether conditions $(i)$ and $(ii)$ in Theorem \ref{thm: main} hold. Indeed, since $K'/K$ and $K''/K$ are both $C_p$-extensions, each can be constructed by adjoining the $p$th root of an $S$-unit in $K$, where we have taken $S$ to be the set of primes of $K$ dividing $Np$. In particular, Kummer theory provides a computationally feasible way to check conditions $(i)$ and $(ii)$ even when the degrees of $K'/\mathbb{Q}$ and $K''/\mathbb{Q}$ are large, i.e., of degree $p^2(p-1)\geq 100$. The main components of our algorithm are given in \S\ref{sec:adjustments}, and the entire program, implemented using Sage \cite{SAGE}, can be found online at \url{https://github.com/cmhsu2012/RR3}. Our program is efficient enough that we have run it for some small values of $p$ and many values of $N$.
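To illustrate why the Kummer-theoretic formulation is computationally feasible, consider the rational analogue of the splitting test (a sketch only; the actual computations in this paper work over $K$ with $S$-unit Kummer generators in Sage): for a prime $q \nmid ap$, $q$ splits completely in $\mathbb{Q}(\zeta_p, a^{1/p})$ if and only if $q \equiv 1 \pmod{p}$ and $a$ is a $p$th power modulo $q$, which is a residue computation rather than a computation in a large-degree field.

```python
def splits_completely(q, a, p):
    """Rational analogue of the Kummer-theoretic splitting test:
    a prime q not dividing a*p splits completely in Q(zeta_p, a^(1/p))
    iff q = 1 (mod p) and a is a pth power mod q."""
    if q % p != 1:
        return False               # q does not even split in Q(zeta_p)
    return pow(a, (q - 1) // p, q) == 1

# The 5th powers mod 11 are {1, 10}: 11 splits completely in
# Q(zeta_5, 23^(1/5)) since 23 = 1 (mod 11), but not in Q(zeta_5, 2^(1/5)).
assert splits_completely(11, 23, 5) is True
assert splits_completely(11, 2, 5) is False
```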
Here is a sample result of our calculations. For a detailed discussion of all computed examples, see \S\ref{sec:data}.
\begin{thm}
\label{thm: main computations}
Let $p=5$ and $\ell_0=11$. Then for
\[
\ell_1 =23,67,263,307,373,397,593,857,967,1013,
\]
condition $(i)$ of Theorem \ref{thm: main} holds, but condition $(ii)$ does not. In particular, for these values of $\ell_1$, the $\mathbb{F}_p$-dimension of $R/pR$ equals $3$ and $R\cong\mathbb{T}$.
For $\ell_1 =43,197,683,727$, conditions $(i)$ and $(ii)$ of Theorem \ref{thm: main} both hold. Consequently, the $\mathbb{F}_p$-dimension of $R/pR$ exceeds $3$ for these values of $\ell_1$.
\end{thm}
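As a quick sanity check on this data (independent of the main algorithm), one can confirm directly that each listed $\ell_1$ satisfies the running hypotheses: $\ell_1 \not\equiv 0, \pm 1 \pmod{5}$ and $\ell_1$ is a $5$th power modulo $11$.

```python
p, l0 = 5, 11
dim3 = [23, 67, 263, 307, 373, 397, 593, 857, 967, 1013]  # R = T cases
dimgt3 = [43, 197, 683, 727]                              # dim > 3 cases
for l1 in dim3 + dimgt3:
    assert l1 % p not in (0, 1, p - 1)      # l1 != 0, +-1 (mod p)
    # l1 is a pth power mod l0 iff l1^((l0-1)/p) = 1 (mod l0)
    assert pow(l1, (l0 - 1) // p, l0) == 1
```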
\begin{rem}
For the values of $p$ and $N$ where we found $\dim_{\mathbb{F}_p}R/pR > 3$, we also computed that $\dim_{\mathbb{F}_p}\mathbb{T}/p\mathbb{T}>3$. This is consistent with our conjecture that $R\cong\mathbb{T}$.
\end{rem}
For the remainder of this introduction, we outline how conditions $(i)$ and $(ii)$ in Theorem \ref{thm: main} arise. Specifically, we explain how the $C_p$-extensions $K'/K$ and $K''/K$ are the splitting fields of certain Galois cochains and then show how to compute explicit $S$-units corresponding to these cochains using Kolyvagin derivative operators. To describe this precisely, we summarize the necessary Galois cohomological framework established in Part I.
\subsection{Differential equations and the rank of $R$}
\label{subsec:diffeq and R}
Let $R$ be the ring representing deformations of the pseudorepresentation $\omega \oplus 1$ that have determinant equal to the cyclotomic character and that are finite-flat at $p$, unramified-or-Steinberg at the primes $\ell_0, \ell_1$ that divide $N$, and unramified outside $Np$. Note that $R$ only plays a motivational role in this paper, so we do not discuss this definition further -- see Part I for more information.
As we explain in the introduction of Part I, there is a particular first-order deformation $\rho_1$ of ${\bar\rho}$ such that $\dim_{\mathbb{F}_p}R/pR \le 3$ unless a deformation $\rho_2$ of $\rho_1$ satisfying certain local conditions exists.
There is a helpful analogy between this problem and boundary value problems in the theory of differential equations. The existence of $\rho_2$ is analogous to the existence of a general solution to the system of differential equations, and satisfying the local conditions is analogous to the existence of a solution to the boundary value problem. We will now use this analogy to frame our main results from Part I.
\subsubsection{The system of equations defining $\rho_1$}
The starting point for our study of $\dim_{\mathbb{F}_p}R/pR$ is the representation $\rho_1$ of $G_\mathbb{Q} := \mathrm{Gal}(\overline{\Q}/\mathbb{Q})$. As an input, we start with two cocycles:
\begin{itemize}
\item $b\up1$ represents the Kummer class of $\ell_1$ in $H^1(\mathbb{Z}[1/Np],\mathbb{F}_p(1))$, and
\item $c\up1$ represents a nontrivial class in $H^1(\mathbb{Z}[1/Np],\mathbb{F}_p(-1))$ that is ramified only at $\ell_0$ (such a class is unique up to scaling).
\end{itemize}
If we wanted to represent $\rho_1$ as a matrix with values in $\GL_2(\mathbb{F}_p[\epsilon]/(\epsilon^2))$, there are two choices:
\[
\ttmat{\omega(1+a\up1 \epsilon)}{\epsilon b\up1}{\omega (c\up1+\epsilon c\up2)}{1+d\up1 \epsilon}
\mbox{ or }
\ttmat{\omega(1+a\up1 \epsilon)}{ b\up1+\epsilon b\up2}{\omega \epsilon c\up1}{1+d\up1 \epsilon}.
\]
In other words, we have to choose either the upper-right or lower-left entry to be a multiple of $\epsilon$. From the point of view of pseudorepresentations, both choices give the same answer because they have the same trace and determinant. To obviate this choice, we write $\rho_1$ as
\[
\rho_1 = \ttmat{\omega(1+a\up1 \epsilon)}{b\up1}{\omega c\up1}{1+d\up1 \epsilon}
\]
to refer to the pseudorepresentation obtained by either of these choices. (This ad hoc definition can be made more formal using the theory of \emph{Generalized Matrix Algebras (GMA)}---see [Part I, \S\ref{sssec: CH reps and GMA reps}] for the definition of GMAs or [Part I, \S\ref{subsec: 1-reducible}] for the 1-reducible GMAs relevant here.)
The determinant of $\rho_1$ is $\omega(1+(a\up1+d\up1-b\up1c\up1)\epsilon)$.
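Indeed, since one of the off-diagonal entries carries a factor of $\epsilon$ and $\epsilon^2 = 0$, we compute
\[
\det\rho_1 = \omega\bigl[(1+a\up1 \epsilon)(1+d\up1 \epsilon) - \epsilon\, b\up1 c\up1\bigr] = \omega\bigl(1+(a\up1+d\up1-b\up1 c\up1)\epsilon\bigr).
\]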
Since we require our deformations to have cyclotomic determinant, we must have $d\up1=b\up1c\up1-a\up1$. Since $b\up1$ and $c\up1$ are fixed, the data of $\rho_1$ is equivalent to the data of a cochain $a\up1$. Then $\rho_1$ is a homomorphism if and only if $a\up1$ satisfies the differential equation
\begin{equation}
\label{eq:diffeq of a1}
-da\up1 = b\up1 \smile c\up1.
\end{equation}
This equation has a solution if and only if $\ell_1$ is a $p$th power modulo $\ell_0$ (see [Part I, Lem.\ \ref{lem: log ell1 is zero}]); since we assume this, \eqref{eq:diffeq of a1} has a solution.
Note that the solution $a\up1$ to \eqref{eq:diffeq of a1} is not unique, but any two solutions differ by a cocycle. In order for $\rho_1$ to satisfy the local conditions defining $R$, we must impose a condition on the ramification of $a\up1$ at $p$ (it can be shown that $\rho_1$ satisfies the conditions at $\ell_0$ and $\ell_1$ for all choices of $a\up1$---see [Part I, Lem.\ \ref{lem: D1 construction}]). Still, the solution with this condition is not unique: any two solutions differ by a cocycle that is unramified at $p$.
\subsubsection{The system of equations defining $\rho_2$}
\label{subsub:rho2}
Next we want to deform $\rho_1$ to a pseudorepresentation $\rho_2$ with coefficients in $\mathbb{F}_p[\epsilon]/(\epsilon^3)$.
We write our desired deformation as
\begin{equation}
\label{eq:rho2}
\rho_2 = \ttmat{\omega(1+a\up1 \epsilon+a\up2 \epsilon^2)}{b\up1+b\up2\epsilon}{\omega (c\up1+c\up2 \epsilon)}{1+d\up1 \epsilon+d\up2 \epsilon^2}
\end{equation}
with the same convention as for $\rho_1$ that one or the other of the upper-right or lower-left entries should be multiplied by $\epsilon$ (or using the 1-reducible GMAs of [Part I, \S4.1]).
Just as with $d\up1$, in order that $\det(\rho_2)=\omega$ we must have $d\up2 =b\up1 c\up2+b\up2 c\up1 -a\up1 d\up1 -a\up2$. The data of the deformation $\rho_2$ is the remaining cochains $a\up2$, $b\up2$, and $c\up2$. These cochains must satisfy the following system of equations in order for $\rho_2$ to be a homomorphism:
\begin{align}
\label{eq:diffeq for a2}
-d a\up2 &= a\up1 \smile a\up1 + b\up1 \smile c\up2 +b\up2 \smile c\up1 \\
\label{eq:diffeq for b2}
-d b \up2 &= a\up1 \smile b\up1 + b\up1\smile d\up1 \\
\label{eq:diffeq for c2}
-d c\up2 &= c\up1 \smile a\up1 + d\up1 \smile c\up1.
\end{align}
This system is much more complex than \eqref{eq:diffeq of a1}, due to the coupling of equations. However, we find that:
\begin{itemize}
\item Equation \eqref{eq:diffeq for c2} has a solution for a unique value of $a\up1$ (among those that satisfy \eqref{eq:diffeq of a1} and the condition on ramification at $p$). This value is characterized by the condition that $a\up1|_{\ell_0}$ be in the image of the cup product
\[
H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(1)) \xrightarrow{\smile c\up1|_{\ell_0}} H^1(\mathbb{Q}_{\ell_0},\mathbb{F}_p).
\]
From now on, we fix $a\up1$ to be that solution, and we define $\alpha \in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(1))$ such that $\alpha \smile c\up1|_{\ell_0} = a\up1|_{\ell_0}$.
\item Equation \eqref{eq:diffeq for b2} has a solution if and only if $a\up1|_{\ell_1}=0$.
\item Supposing that \eqref{eq:diffeq for b2} has a solution, equation \eqref{eq:diffeq for a2} has a solution only for certain values of $b\up2$. These values are characterized by the condition that $b\up2|_{\ell_0}$ be in the image of the cup product
\[
H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2)) \xrightarrow{\smile c\up1|_{\ell_0}} H^1(\mathbb{Q}_{\ell_0},\mathbb{F}_p(1)).
\]
For such a choice of $b\up2$, we define $\beta\in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2))$ such that $\beta \smile c\up1|_{\ell_0} = b\up2|_{\ell_0}$.
\end{itemize}
In summary, we see that $\rho_2$ exists if and only if $a\up1|_{\ell_1}=0$. Moreover, when this is the case, there are invariants $\alpha \in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(1))$ and $\beta\in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2))$.
Now assume that $\rho_2$ exists. In order for $\rho_2$ to satisfy the local conditions defining $R$, there are conditions that $a\up2$, $b\up2$, and $c\up2$ be finite-flat at $p$, and an additional condition at $\ell_0$. (No additional condition at $\ell_1$ is necessary---we show that the condition on $\rho_2$ at $\ell_1$ is satisfied for all choices of $a\up2$, $b\up2$, and $c\up2$.) We show that the finite-flat conditions can always be satisfied, and from now on we fix $b\up2$ to be a solution satisfying the finite-flat condition and fix $\beta$ to satisfy $\beta \smile c\up1|_{\ell_0} = b\up2|_{\ell_0}$ for this choice.
The extra condition at $\ell_0$ can be expressed in terms of $\alpha$ and $\beta$:
\begin{itemize}
\item $\alpha^2+\beta=0$ in $H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2))$.
\end{itemize}
\begin{rem}
In the theory of differential equations, an \emph{inverse problem} is one where the system of equations and the solution are given, and the unknowns are the boundary values. This is analogous to our situation, in that we use the equation \eqref{eq:diffeq for c2} to find the correct local conditions for $a\up1$.
\end{rem}
We summarize the important information as follows:
\begin{itemize}
\item $a\up1$ is the unique solution to \eqref{eq:diffeq of a1} such that
\begin{itemize}
\item $a\up1$ satisfies a finite-flat condition at $p$, and
\item $a\up1|_{\ell_0} =\alpha \smile c\up1|_{\ell_0}$ for some unique $\alpha \in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(1))$.
\end{itemize}
\item $b\up2$ is a solution to \eqref{eq:diffeq for b2} (if it exists) such that
\begin{itemize}
\item $b\up2$ satisfies a finite-flat condition at $p$, and
\item $b\up2|_{\ell_0} =\beta \smile c\up1|_{\ell_0}$ for some unique $\beta \in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2))$.
\end{itemize}
\end{itemize}
Moreover, we know that $b\up2$ exists if and only if $a\up1|_{\ell_1}=0$. The following theorem is the main result of Part I.
\begin{thm}[Part I, Theorem \ref{P1: thm: main with a1}]
\label{thm: main of part 1}
We have $\dim_{\mathbb{F}_p}R/pR>3$ if and only if the following conditions hold:
\begin{enumerate}[label = (\roman*)]
\item $a\up1|_{\ell_1}=0$
\item $\alpha^2+\beta=0$ in $H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2))$.
\end{enumerate}
\end{thm}
By realizing $K'/K$ as the splitting field of $a\up1|_{G_K}$ and $K''/K$ as the splitting field of $b\up2|_{G_K}$, we can translate Theorem \ref{thm: main of part 1} into the language of Theorem \ref{thm: main intro}. It remains to compute the $S$-units corresponding to $a\up1|_{G_K}$ and $b\up2|_{G_K}$.
\subsection{Computing $S$-units to solve cup product and Massey product equations}
In the theory of differential equations, one technique used for solving boundary value problems is to first find a particular solution to the equation, then adjust that solution so it satisfies the boundary conditions. We take a similar two-step approach to computing $a\up1$ and $b\up2$:
\begin{description}
\item[Step 1] Find cochains that solve \eqref{eq:diffeq of a1} and \eqref{eq:diffeq for b2}. We denote these by $a\up1_\mathrm{cand}$ and $b\up2_\mathrm{cand}$, for \emph{candidate solutions}.
\item[Step 2] Find \emph{local adjustments} needed to make the candidate solutions satisfy the necessary local conditions. These local adjustments are global cocycles $a_\mathrm{adj}$ and $b_\mathrm{adj}$ such that $a\up1=a\up1_\mathrm{cand}+a_\mathrm{adj}$ and $b\up2=b\up2_\mathrm{cand}+b_\mathrm{adj}$ satisfy the desired local properties.
\end{description}
As we described above, we do not compute with cochains directly, but rather with the $S$-units associated to them by Kummer theory. For the purposes of this introduction, we will abuse notation and conflate these two.
\subsubsection{Computing candidate solutions}
To compute our candidate solutions, we start with an alternate interpretation of the equations \eqref{eq:diffeq of a1} and \eqref{eq:diffeq for b2}. A solution $a\up1_\mathrm{cand}$ to \eqref{eq:diffeq of a1} gives a twisted-Heisenberg extension of $\mathbb{Q}$, which is cut out by the following upper-triangular 3-dimensional representation of $G_\mathbb{Q}$:
\begin{equation}
\label{eq: 3d abc}
\begin{pmatrix}
\omega & b\up1 & \omega a\up1_\mathrm{cand} \\
0 & 1 & \omega c\up1 \\
0 & 0 & \omega
\end{pmatrix}.
\end{equation}
In his thesis, Sharifi gave a way to find such extensions using Kummer theory \cite{sharifithesis}. The idea is to start by interpreting $c\up1$ as a unit in $\mathbb{Q}(\zeta_p)$. Since $b\up1 \cup c\up1$ vanishes in cohomology, the Hasse norm theorem implies that $c\up1$ is a norm from $K$; let $\gamma \in K^\times$ be such that $N_{K/\mathbb{Q}(\zeta_p)}\gamma=c\up1$. Then Sharifi proves that $a\up1_\mathrm{cand}$ can be obtained as a kind of Kolyvagin derivative of $\gamma$ \cite[Proposition 2.6]{sharifithesis}.
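To indicate the mechanism behind this construction (in a standard normalization, which may differ from Sharifi's by a sign), let $\sigma$ be a generator of $\mathrm{Gal}(K/\mathbb{Q}(\zeta_p)) \cong C_p$, let $N = \sum_{i=0}^{p-1}\sigma^i$ be the norm element of the group ring, and let $D = \sum_{i=0}^{p-1} i\,\sigma^i$ be the Kolyvagin derivative operator. A direct computation gives the identity
\[
(\sigma - 1)D = p - N,
\]
so that, multiplicatively, $(D\gamma)^{\sigma - 1} = \gamma^{p}\cdot\bigl(N_{K/\mathbb{Q}(\zeta_p)}\gamma\bigr)^{-1}$. Thus, modulo $p$th powers, $D\gamma$ has the twisted Galois equivariance that allows its $p$th root to cut out an extension of the desired type.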
There is a similar interpretation of \eqref{eq:diffeq for b2}. Namely, a solution to \eqref{eq:diffeq for b2} gives a twisted-Heisenberg extension of $\mathbb{Q}$ one dimension greater, cut out by
\begin{equation}
\label{eq: 4d abcd}
\begin{pmatrix}
\omega & b\up1 & \omega a\up1_\mathrm{cand} & b\up2_\mathrm{cand}\\
0 & 1 & \omega c\up1 & d\up1_\mathrm{cand}\\
0 & 0 & \omega & b\up1 \\
0 & 0 & 0 & 1
\end{pmatrix},
\end{equation}
where we define $d\up1_\mathrm{cand}=b\up1 c\up1-a\up1_\mathrm{cand}$. The obstruction to the existence of a cochain $b\up2_\mathrm{cand}$ is a generalization of the cup product called the \emph{triple Massey product} $(b\up1,c\up1,b\up1)$. With this notation, \eqref{eq:diffeq for b2} can be rewritten as
\[
-d b\up2 = (b\up1,c\up1,b\up1).
\]
Sharifi generalized his results from cup products to higher \emph{cyclic} Massey products \cite{sharifi2007,LLSWW2020}, which include triple Massey products of the form $(b\up1,b\up1,c\up1)$, but not $(b\up1,c\up1,b\up1)$. The result is that a solution to the equation
\[
-dZ = (b\up1,b\up1,c\up1)
\]
can be obtained as a second Kolyvagin derivative of $\gamma$ \cite[Thm.\ 4.3]{sharifi2007} (where, as above, $\gamma \in K^\times$ satisfies $N_{K/\mathbb{Q}(\zeta_p)}\gamma=c\up1$).
We show, using commutativity relations for Massey products, that a solution $b\up2_\mathrm{cand}$ can be derived from such a $Z$.
\subsubsection{Computing local adjustments}
There are two types of local adjustments that need to be made:
\begin{itemize}
\item local adjustments at $p$ to ensure finite-flatness,
\item local adjustments at $\ell_0$, used in defining $\alpha$ and $\beta$.
\end{itemize}
For $a\up1$, the finite-flat condition translates to the extension $K'/K$
being unramified at $p$. This can be achieved by multiplying $a\up1_\mathrm{cand}$ by an appropriate $p$th root of unity. For $b\up2$, the finite-flat condition boils down to a \emph{peu ramifi\'ee} condition as in \cite[\S2.4]{serre1987}. If $K/\mathbb{Q}$ is tamely ramified at $p$, this amounts to $b\up2_\mathrm{cand}$ being prime-to-$p$ (as an $S$-unit), which is automatic by our Kolyvagin derivative construction. If $K/\mathbb{Q}$ is wildly ramified, the condition is slightly more involved, but can be achieved by multiplying $b\up2_\mathrm{cand}$ by an appropriate power of $p$.
The local adjustment at $\ell_0$ is more interesting because it involves the cocycle $c\up1$. Unlike $b\up1$, the splitting field of $c\up1$ is not easy to write down. Even if we do compute it, checking the condition globally would involve working with the compositum of $K$ and the splitting field of $c\up1$. Instead, we take advantage of the fact that the local condition only involves the local restriction $c\up1|_{\ell_0}$, not the global cocycle. The structure of this local restriction is known by a tame version of the Gross--Stark conjecture \cite{wake2020eisenstein}. This result computes the slope of $c\up1|_{\ell_0}$ (with respect to a canonical basis of $H^1(\mathbb{Q}_{\ell_0},\mathbb{F}_p(-1))$) in terms of an analytic invariant called the \emph{Mazur--Tate derivative} $\zeta_\mathrm{MT}'$ (of a family of Dirichlet $L$-functions). The quantity $\zeta_\mathrm{MT}'$ is completely explicit and easily computed, so this allows us to explicitly compute $c\up1|_{\ell_0}$ up to scalar, and find the adjustments purely locally.
\subsection{Organization of the paper}
In Section \ref{sec: recollect part 1}, we summarize the relevant constructions from Part I; this section also contains a subsection (\S\ref{subsec: finite-flat}) of new material that focuses on formulating the finite-flat condition for $b\up2$ in language that is explicit enough for computations. In Section \ref{sec: sharifi}, we construct our candidate solutions, including the background material required to apply Sharifi's methods in our setting. In Section \ref{sec:adjustments}, we prove the main result of this paper (Theorem \ref{thm: main}) and give explicit algorithms for checking whether the splitting conditions in Theorem \ref{thm: main} hold. In Section \ref{sec:data}, we present a broad selection of computed examples that illustrate our main result. Lastly, in Section \ref{sec: P2 ANT}, we establish the characterizations of $K'/K$ and $K''/K$ that appear in \S\ref{subsec: P2 main results} as part of this paper's main result; this content is also used in [Part I, \S\ref{subsec: HL}] to state a final result of the pair of papers.
\subsection{Acknowledgements}
The first-named author thanks the University of Bristol and the Heilbronn Institute for Mathematical Research for its partial support of this project. The second-named author was supported in part by NSF grant DMS-1901867, and would like to thank his coauthors on the paper \cite{LLSWW2020}; that paper inspired many of the ideas used here about how to compute Massey products. The third-named author was supported in part by Simons Foundation award 846912, and thanks the Department of Mathematics of Imperial College London for its partial support of this project from its Mathematics Platform Grant. We also thank John Cremona for several insightful conversations about computational aspects of this project as well as the referee for their helpful comments and suggestions. This research was supported in part by the University of Pittsburgh Center for Research Computing and Swarthmore College through the computing resources provided.
\section{Key Galois cochains and their local properties}\label{sec: recollect part 1}
The purpose of this section is to recall the constructions of Part I as a point of departure. In \S\ref{subsec: finite-flat}, there will also be some new content that makes these constructions more amenable to computation. We begin with notation and conventions to make the Galois cochains featured in \S\ref{subsec:diffeq and R} precise.
\subsection{Assumptions, notation, and conventions}
The following statement of assumptions, which are in force throughout this paper, recapitulates the assumptions given at the outset of \S\ref{subsec: summary2} in a slightly more precise form. We write
\[
\mathbb{T} \twoheadrightarrow \mathbb{T}_{\ell_0}
\]
for the surjection of Hecke algebras, from level $N$ to $\ell_0$, as described in Part I, \S\ref{P1: subsec: modular forms}.
\begin{assump}
\label{assump: main}
Assume $p\geq 5$ is prime. Throughout the paper, we specialize to level $N=\ell_0\ell_1$, where $\ell_0$ and $\ell_1$ are primes such that
\begin{enumerate}
\item $\ell_0 \equiv 1 \pmod{p}$
\item $\ell_1 \not \equiv 0, \pm 1 \pmod{p}$
\item $\ell_1$ is a $p$th power modulo $\ell_0$
\item $\rk_{\mathbb{Z}_p}\mathbb{T}_{\ell_0} = 2$, which is easily checked via a criterion of Merel \cite{merel1996}; cf.\ Part I, Remark \ref{P1: rem: Merel}
\end{enumerate}
\end{assump}
The assumption that $\rk_{\mathbb{Z}_p} \mathbb{T}_{\ell_0} = 2$ is equivalent to the assumption that there is a unique Eisenstein-congruent cusp form at level $\ell_0$, which is the assumption we used in the introduction (\S\ref{subsec: summary2}).
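Conditions (1)--(3) of Assumption \ref{assump: main} are directly machine-checkable. The following sketch (the function name and interface are ours; the Hecke-algebra rank condition (4) is not tested here) checks a candidate triple $(p,\ell_0,\ell_1)$, using that $\ell_1$ is a $p$th power modulo $\ell_0$ if and only if $\ell_1^{(\ell_0-1)/p} \equiv 1 \pmod{\ell_0}$.

```python
def check_assumptions(p, ell0, ell1):
    """Test conditions (1)-(3) of the running assumption for (p, ell0, ell1);
    the Hecke-algebra rank condition (4) is not checked here."""
    cond1 = ell0 % p == 1
    cond2 = ell1 % p not in (0, 1, p - 1)
    # ell1 is a p-th power mod ell0 iff ell1^((ell0-1)/p) = 1 mod ell0;
    # the exponent is an integer since p | ell0 - 1 by condition (1)
    cond3 = cond1 and pow(ell1, (ell0 - 1) // p, ell0) == 1
    return cond1 and cond2 and cond3
```

For example, $(p,\ell_0,\ell_1)=(5,11,23)$ passes conditions (1)--(3), while $(5,11,13)$ fails condition (3).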
Actions of and functions on profinite groups are presumed continuous without further comment. This includes cochains on Galois groups, which are thought of as functions on a finite self-product of the group; we will establish notation for these shortly.
For $q = \ell_0, \ell_1, p$, we fix embeddings of algebraic closures $\overline{\Q} \hookrightarrow \overline{\Q}_q$, inducing inclusions of decomposition groups $G_q := \mathrm{Gal}(\overline{\Q}_q/\mathbb{Q}_q) \hookrightarrow G_{\mathbb{Q},Np}$, where $G_{\mathbb{Q},Np}$ is the Galois group of the maximal algebraic extension of $\mathbb{Q}$ ramified only at places dividing $Np\infty$. Write $I_q \subset G_q$ for the inertia subgroup, and $\mathrm{Fr}_q \in G_q$ for a lift of the arithmetic Frobenius element of $G_q/I_q$ to $G_q$.
A primitive $p$th root of unity $\zeta \in \overline{\Q}$ plays an important role by inducing an isomorphism between the group of $p$th roots of unity $\mu_p \subset \overline{\Q}^\times$ and $\mathbb{F}_p(1)$, which we write for the representation of $G_{\mathbb{Q},Np}$ on the 1-dimensional $\mathbb{F}_p$-vector space $\mathbb{F}_p$ with action by the modulo $p$ cyclotomic character $\omega$. Likewise, $\zeta$ induces isomorphisms $\mu_p^{\otimes i} \buildrel\sim\over\ra \mathbb{F}_p(i)$ for all $i \in \mathbb{Z}$, where $\mathbb{F}_p(i)$ denotes $\mathbb{F}_p(1)^{\otimes i}$.
\begin{defn}
Let $i \in \mathbb{Z}$ and $j \in \mathbb{Z}_{\geq 0}$. We use $C^j(\mathbb{Z}[1/Np], \mathbb{F}_p(i))$ to denote $j$-cochains of $G_{\mathbb{Q},Np}$ valued in $\mathbb{F}_p(i)$. Likewise, when $Z^j$, $B^j$, or $H^j$ replaces ``$C^j$'' in this notation, we are referring to cocycles, coboundaries, and cohomology, respectively. When $Y \in Z^j(-)$ is a cocycle, we write $[Y] \in H^j(-)$ for its associated cohomology class.
Similarly, when $q$ is a prime, we write $C^j(\mathbb{Q}_q, \mathbb{F}_p(i))$ for local cochain groups. We have restriction maps
\[
(-)\vert_q : H^j(\mathbb{Z}[1/Np], \mathbb{F}_p(i)) \to H^j(\mathbb{Q}_q, \mathbb{F}_p(i)), \quad
C^j(\mathbb{Z}[1/Np], \mathbb{F}_p(i)) \to C^j(\mathbb{Q}_q, \mathbb{F}_p(i))
\]
for $q = \ell_0, \ell_1, p$. We say that a cohomology class (resp.\ cochain) is
\begin{itemize}
\item \emph{unramified at $q$} when its further restriction to $H^j(\mathbb{Q}^\mathrm{ur}_q, \mathbb{F}_p(i))$ \\(resp.\ $C^j(\mathbb{Q}^\mathrm{ur}_q, \mathbb{F}_p(i))$) vanishes, and
\item \emph{splits at $q$} when it vanishes under these maps.
\end{itemize}
We have cup product maps
\begin{equation}\label{eq: cup products}
\begin{split}
\smile &: C^j(-, \mathbb{F}_p(i)) \times C^{j'}(-, \mathbb{F}_p(i')) \to C^{j+j'}(-, \mathbb{F}_p(i+i')),\\
\cup &: H^j(-, \mathbb{F}_p(i)) \times H^{j'}(-, \mathbb{F}_p(i')) \to H^{j+j'}(-, \mathbb{F}_p(i+i')).
\end{split}
\end{equation}
\end{defn}
\subsection{Pinning data}
\label{subsec: cocycles} Because many of the constructions in Part I depend in subtle ways on additional choices that we refer to as \emph{pinning data}, we recall this information here.
\begin{defn}
\label{defn: pinning}
We refer to the following choices as \emph{pinning data}.
\begin{itemize}
\item For each $q \in \{\ell_0, \ell_1, p\}$, an embedding $\overline{\Q} \hookrightarrow \overline{\Q}_q$
\item a primitive $p$th root of unity $\zeta_p \in \overline{\Q}$
\item for $i=0,1$, a $p$th root $\ell_i^{1/p} \in \overline{\Q}$ of $\ell_i$, such that, if possible, the image of $\ell_1^{1/p}$ in $\overline{\Q}_p$, under the fixed embedding, is in $\mathbb{Q}_p$. (See Lemma \ref{lem: b1 ramification at p} for when this is possible.)
\end{itemize}
\end{defn}
We fix a choice of pinning data for the entire paper. Notice that our choice defines a decomposition subgroup of $q$ in $G_{\mathbb{Q},Np}$ for each prime $q$ dividing $Np$ as well as an isomorphism between this subgroup and $G_q$. Likewise, for any number field $F \subset \overline{\Q}$, the induced embedding $F \hookrightarrow \overline{\Q}_q$ singles out a prime of $F$ lying over $q$, which we call the \emph{distinguished prime of $F$ over $q$}.
We are now ready to write down precise descriptions of the key Galois cochains from Part I, overviewed in \S\ref{subsec:diffeq and R}. Indeed, recall the canonical isomorphism
\begin{equation}
\label{eq: kummer}
\mathbb{Z}[1/Np]^\times \otimes_\mathbb{Z} \mathbb{F}_p \isoto H^1(\mathbb{Z}[1/Np],\mu_p)
\end{equation}
of Kummer theory, which sends an element $n \in \mathbb{Z}[1/Np]^\times \otimes_\mathbb{Z} \mathbb{F}_p$ to the class of the cocycle $G_{\mathbb{Q},Np} \ni \sigma \mapsto \frac{\sigma (n^{1/p})}{n^{1/p}}$ for a choice $n^{1/p} \in \overline{\Q}$ of $p$th root of $n$. We call this element of $H^1(\mathbb{Z}[1/Np],\mu_p)$ the \emph{Kummer class of $n$} and call any cocycle in this class a \emph{Kummer cocycle of $n$}. Because $\zeta_p \not\in \mathbb{Q}$, each Kummer cocycle of $n$ is given by $\sigma \mapsto \frac{\sigma (n^{1/p})}{n^{1/p}}$ for a unique choice $n^{1/p} \in \overline{\Q}$ of $p$th root of $n$. We use the isomorphism $\mathbb{F}_p(1) \cong \mu_p$ induced by the choice of $\zeta_p$ in the pinning data (Definition \ref{defn: pinning}) to think of Kummer classes and cocycles as having coefficients in $\mathbb{F}_p(1)$.
We fix the following cohomology classes and cocycles throughout the paper:
\begin{defn} \hfill
\label{defn: pinned cocycles}
\begin{itemize}
\item Let
\[
b_0 , b_1, b_p \in H^1(\mathbb{Z}[1/Np], \mathbb{F}_p(1))
\]
be the Kummer classes of $\ell_0$, $\ell_1$, and $p$, respectively.
\item For $i=0,1$, let
\[
\gamma_0 \in I_{\ell_0}, \ \gamma_1 \in I_{\ell_1}
\]
be lifts of generators of the maximal pro-$p$ quotient of the tame quotient of $I_{\ell_i}$, fixed such that $b_i(\gamma_i)=1 \in \mathbb{F}_p(1)$.\footnote{Note that $b_i|_{I_{\ell_i}}: I_{\ell_i} \to \mathbb{F}_p(1)$ is a well-defined homomorphism because $I_{\ell_i}$ acts trivially on $\mathbb{F}_p(1)$.}
\item Let
\[
b_0\up1, b_1\up1 \in Z^1(\mathbb{Z}[1/Np],\mathbb{F}_p(1))
\]
be the Kummer cocycles associated to $p$th roots $\ell_0^{1/p}$ and $\ell_1^{1/p}$, respectively, chosen in our pinning data (Definition \ref{defn: pinning}). Let $b\up1=b_1\up1$.
\item Let
\[
c\up1 \in Z^1(\mathbb{Z}[1/Np], \mathbb{F}_p(-1))
\]
with cohomology class $c_0=[c\up1]$ be the unique cocycle such that
\begin{enumerate}[label=(\roman*)]
\item $c_0$ is ramified exactly at $\ell_0$,
\item $c_0|_p=0$,
\item $c\up1(\gamma_0) = 1$, and
\item $c\up1\vert_{\ell_1} = 0$.
\end{enumerate}
Note that properties (i)-(iii) specify $c_0$ uniquely, and property (iv) specifies $c\up1$ uniquely. See Part I, Definition \ref{P1: defn: pinned cocycles} for more details.
\item Let
\[
a_0, a_p \in Z^1(\mathbb{Z}[1/Np],\mathbb{F}_p)
\]
be non-zero homomorphisms ramified exactly at $\ell_0$ and at $p$, respectively, and such that $a_0(\gamma_0)=1$. This determines $a_0$ uniquely and determines $a_p$ up to $\mathbb{F}_p^\times$-scaling (which is sufficient for our purposes).
\end{itemize}
\end{defn}
We note that the choices of $b\up1$, $c\up1$, and $a_0$ depend only on the pinning data of Definition \ref{defn: pinning}. Additionally, while we define all of the cohomology classes and cocycles appearing in the constructions of Part I for the sake of completeness, the content of this paper primarily requires the cocycles $b\up1,c\up1$, and $a_0$.
\subsection{The solution $a\up1$ to differential equation \eqref{eq:diffeq of a1} and the invariant $\alpha$}
We move on to Galois cochains that are not cocycles, but satisfy differential equations whose origin was discussed in \S\ref{subsec:diffeq and R}. Then, the crucial numerical invariant $\alpha^2 + \beta \in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2))$ will be derived from them; it is proved to be canonical -- that is, independent of the pinning data and therefore dependent only on a choice of $p$ and $N$ satisfying Assumption \ref{assump: main} -- in Part I, Theorem \ref{thm: main invariant}.
By definition of $c\up1$, its associated cohomology class $c_0$ splits at $p$. This means that there exists some $x_{c} \in \mathbb{F}_p(-1)$ such that
\[
c\up1\vert_p = dx_{c}.
\]
Concretely, for any $\tau \in G_p$, we have $c\up1(\tau) = (\omega^{-1}(\tau)-1)x_{c}$. This $x_{c}$ is determined by the pinning data.
We will use $x_{c}$ to normalize the solution to our first differential equation \eqref{eq:diffeq of a1}, which is
\[
-dX = b\up1 \smile c\up1,
\]
where $X$ is an unknown in $C^1(\mathbb{Z}[1/Np], \mathbb{F}_p)$.
In the statement of Proposition \ref{prop: produce a1} below, we use the fact that a primitive $p$th root of unity in $\mathbb{Q}_{\ell_0}$ exists because $\ell_0 \equiv 1 \pmod{p}$ and supplies an $\mathbb{F}_p$-basis for $H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1))$ under the natural isomorphism $\mu_p(\mathbb{Q}_{\ell_0}) \cong H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1))$.
\begin{prop}[Part I, Lemma \ref{P1: lem: produce a1}]
\label{prop: produce a1}
There exists a unique solution $a\up1 \in C^1(\mathbb{Z}[1/Np],\mathbb{F}_p)$ of the differential equation \eqref{eq:diffeq of a1} that satisfies the local conditions
\begin{enumerate}[label=(\alph*)]
\item $(a\up1 +b_1\up1 \smile x_{c})|_{I_p} = 0$ in $Z^1(\mathbb{Q}_p^\mathrm{ur}, \mathbb{F}_p)$
\item $a\up1|_{\ell_0}$ is on the line in $Z^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p) \cong H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p)$ spanned by $\zeta \cup c_0|_{\ell_0}$ under the cup product
\[
H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1)) \times H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(-1)) \to H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p),
\]
where $\zeta$ is a choice of $\mathbb{F}_p$-basis of $H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1))$.
\end{enumerate}
This cochain $a\up1$ is uniquely determined by the pinning data.
\end{prop}
Condition (a) is a finite-flat condition at $p$, as discussed in Part I, \S\ref{P1: subsubsec: finite-flat}.
\begin{rem}
We point out a phenomenon that will frequently appear in various forms. Even though $a\up1$ is a global cochain that is not a global cocycle, its restriction $a\up1\vert_{\ell_0}$ to $G_{\ell_0}$ is a cocycle because the factor $b\up1$ of the differential equation splits at $\ell_0$. We will often encounter situations where a global non-cocycle is a local cocycle, allowing us to impose arithmetic conditions on it.
\end{rem}
We now define an important numerical invariant.
\begin{defn}
\label{defn: alpha}
Let $\alpha \in \mu_p(\mathbb{Q}_{\ell_0}) \cong H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1))$ be the unique solution to
\[
[a\up1|_{\ell_0}] = \alpha \cup c_0|_{\ell_0}.
\]
By Proposition \ref{prop: produce a1}, $\alpha$ depends only on the pinning data of Definition \ref{defn: pinning}.
\end{defn}
\subsection{The solution $b\up2$ to differential equation \eqref{eq:diffeq for b2} and the invariant $\beta$}
In preparation to discuss the differential equation solved by $b\up2$, we recall the following well-known analogue, for general odd primes $p$, of the ramification behavior of the prime $2$ in quadratic (degree $2$) number fields. As for notation, recall that $b_1$ denotes the Kummer class of $\ell_1$, which contains the Kummer cocycle $b\up1$.
\begin{lem}[Part I, Lemma \ref{P1: lem: ram of ell1 at p}]
\label{lem: b1 ramification at p}
The following conditions are equivalent.
\begin{enumerate}
\item $\ell_1^{p-1} \equiv 1 \pmod{p^2}$
\item $b_1$ is unramified at $p$
\item $p$ is tamely ramified in $\mathbb{Q}(\ell_1^{1/p})/\mathbb{Q}$.
\end{enumerate}
\end{lem}
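Condition (1) of Lemma \ref{lem: b1 ramification at p} is a Wieferich-type congruence and is immediate to test numerically; a minimal sketch (the function name is ours):

```python
def b1_unramified_at_p(p, ell1):
    """By the lemma, b_1 is unramified at p exactly when
    ell1^(p-1) = 1 mod p^2."""
    return pow(ell1, p - 1, p * p) == 1
```

For instance, $3^{10} \equiv 1 \pmod{11^2}$ (since $3^5 = 243 = 2\cdot 121 + 1$), so the congruence holds for $(p,\ell_1)=(11,3)$ but fails for $(5,2)$.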
We recall differential equation \eqref{eq:diffeq for b2}, which has the form
\[
-dY = a\up1 \smile b\up1 + b\up1 \smile d\up1,
\]
where we have let $d\up1 = b\up1c\up1 - a\up1$.
\begin{prop}[Part I, Lemma \ref{P1: lem: exists 2nd-order 1-reducible} and Proposition \ref{P1: prop: exists unique beta}]
\label{prop: produce b2}
There exists a solution $b\up2 \in C^1(\mathbb{Z}[1/Np],\mathbb{F}_p(1))$ of the differential equation \eqref{eq:diffeq for b2} if and only if $a\up1\vert_{\ell_1} = 0$ in $Z^1(\mathbb{Q}_{\ell_1}, \mathbb{F}_p)$. If any such solution exists, then there also exists a solution $b\up2$ satisfying the local conditions that
\begin{enumerate}[label=(\alph*)]
\item $b\up2|_{\ell_0}$ is a cocycle on the line in $Z^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1)) \cong H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1))$ spanned by $\zeta' \cup c_0|_{\ell_0}$ under the cup product
\[
H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(2)) \times H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(-1)) \to H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1)),
\]
where $\zeta'$ is a choice of $\mathbb{F}_p$-basis of $H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(2))$
\item there exists some $\rho_2$ as in \eqref{eq:rho2} such that $\rho_2\vert_p$ is finite-flat and $b\up2$ is the $b\up2$-coordinate of $\rho_2$.
\end{enumerate}
There exists a single $\beta \in H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(2)) \cong \mathbb{F}_p(2)$ such that, for any of the $b\up2$ satisfying these local conditions,
\[
b\up2|_{\ell_0} = \beta \smile c\up1|_{\ell_0}.
\]
Moreover, the set of solutions of \eqref{eq:diffeq for b2} satisfying these two local conditions is contained in a torsor under the subspace of $Z^1(\mathbb{Z}[1/Np], \mathbb{F}_p(1))$ spanned by coboundaries $B^1(\mathbb{Z}[1/Np], \mathbb{F}_p(1))$ and the cocycle $b\up1$.
\end{prop}
\begin{proof}
The first claim of the proposition is exactly Lemma \ref{lem: a1 vanish at ell1 when S large} of Part I.
Applying that first claim, the existence of a solution $b\up2$ of \eqref{eq:diffeq for b2} is sufficient, by Proposition \ref{P1: prop: exists unique beta}, to deduce that
\begin{itemize}
\item there exists a $\rho_2$ as in \eqref{eq:rho2} that is finite-flat at $p$, meaning that its $b\up2$-coordinate satisfies (b); and
\item its $b\up2$-coordinate also satisfies (a), according to Lemma \ref{P1: lem: exists 2nd-order 1-reducible}(2) of Part I.
\end{itemize}
Finally, the claim about containment in a torsor is in Part I, Proposition \ref{P1: prop: exists unique beta}.
\end{proof}
\begin{defn}
\label{defn: beta} Whenever a deformation $\rho_2$ of $\rho_1$ exists, i.e., when $a\up1\vert_{\ell_1} = 0$ by Proposition \ref{prop: produce b2}, we define $\beta \in H^0(\mathbb{Q}_{\ell_0},\mathbb{F}_p(2))$ to be the unique element (depending only on the pinning data) that satisfies
\[
b\up2|_{\ell_0} = \beta \smile c\up1|_{\ell_0}
\]
for any choice of $b\up2$ satisfying the local conditions (a) and (b) of Proposition \ref{prop: produce b2}.
\end{defn}
\subsection{Local slope and a zeta-value}\label{subsec: local slope and a zeta-value}
The purpose of this section is to relate the class of the cocycle $c\up1$ to a Mazur--Tate $\zeta$-function $\xi$, following \cite[\S8.3]{wake2020eisenstein}.
The resulting formula expresses the slope of the line spanned by the global class $[c\up1]$ in the 2-dimensional local Galois cohomology group $H^1(\mathbb{Q}_{\ell_0},\mathbb{F}_p(-1))$ as a ratio of classical and tame $L$-values. This can be thought of as a tame analog of the ($p$-adic) Gross--Stark formula.
We consider the basis $\{a_0|_{\ell_0}, \lambda\}$ of $H^1(\mathbb{Q}_{\ell_0},\mathbb{F}_p)$, where $a_0$ is as in Definition \ref{defn: pinned cocycles} and $\lambda$ is the unique unramified character sending $\mathrm{Fr}_{\ell_0}$ to $1$. We also let $\zeta_\mathrm{MT}' \in \mathbb{F}_p$ denote the element
\begin{equation}\label{eq: mazur-tate derivative}
\zeta_\mathrm{MT}' = \frac{1}{2} \sum_{i=1}^{\ell_0-1} B_2(i)\log_{\ell_0}(i),
\end{equation}
where $B_2(x)$ is the second Bernoulli polynomial.
This element can be seen as the derivative of a Mazur--Tate-type $L$-function $\chi \mapsto L(\chi,1)$ for Dirichlet characters $\chi$ of modulus $N$ and $p$-power order (see \cite[\S1.5]{WWE3} or \cite[\S4.1]{wake2020eisenstein}). Note that we have $\zeta_\mathrm{MT}' \ne 0$ due to our assumption that there is a unique cusp form of level $\ell_0$ that is congruent to the Eisenstein series modulo $p$, according to \cite[Thm.\ 1.5.2]{WWE3}.
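The quantity $\zeta_\mathrm{MT}'$ is easy to compute in practice. The sketch below (our own interface) evaluates \eqref{eq: mazur-tate derivative} in $\mathbb{F}_p$, assuming that $\log_{\ell_0}$ denotes a discrete logarithm modulo $\ell_0$ with respect to a chosen primitive root, reduced modulo $p$; changing the primitive root rescales the result by a unit in $\mathbb{F}_p^\times$, so its (non)vanishing is well defined.

```python
def zeta_mt_prime(p, ell0, g):
    """Evaluate (1/2) * sum_{i=1}^{ell0-1} B_2(i) log_{ell0}(i) in F_p,
    where g is a primitive root mod ell0 and B_2(x) = x^2 - x + 1/6.
    The discrete log is well defined mod p because p | ell0 - 1."""
    log = {}
    x = 1
    for k in range(ell0 - 1):  # build the discrete log table for base g
        log[x] = k
        x = (x * g) % ell0
    inv2 = pow(2, -1, p)  # 2 and 6 are invertible mod p since p >= 5
    inv6 = pow(6, -1, p)
    total = 0
    for i in range(1, ell0):
        b2 = (i * i - i + inv6) % p  # B_2(i) computed modulo p
        total = (total + b2 * (log[i] % p)) % p
    return (inv2 * total) % p
```

Since two primitive roots give discrete logs differing by a unit multiple mod $p$, the two resulting values of $\zeta_\mathrm{MT}'$ vanish simultaneously.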
\begin{rem}
We remark that this is closely related to Merel's result \cite[Thm.\ 2]{merel1996} characterizing the uniqueness of the cusp form in terms of \emph{Merel's number}, defined in Part I, \S\ref{sssec: intro interpretation}, equation \eqref{eq: Merel number}. Indeed, $\zeta_\mathrm{MT}'$ vanishes if and only if \emph{Merel's number} does; a direct link between the two quantities was proved in \cite[Lem.\ 12.3.1]{WWE3} (see also \cite{lecouturier2018}).
\end{rem}
We have the following description of the line spanned by $c\up1 |_{\ell_0}$.
\begin{thm}[{\cite[Prop.\ 8.3.2]{wake2020eisenstein}}]
\label{thm:slope}
Let $\zeta \in H^0(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1)) \cong \mathbb{F}_p(1)$ be a basis. The line in $H^1(\mathbb{Q}_{\ell_0},\mathbb{F}_p)$ spanned by $\zeta \smile c\up1|_{\ell_0}$ contains the element
\[
\zeta_\mathrm{MT}' \lambda + \frac{1}{6} a_0|_{\ell_0}.
\]
\end{thm}
This is computationally significant because $\zeta_\mathrm{MT}'$, $\lambda$, and $a_0|_{\ell_0}$ are easy to compute even though $c\up1$ is not.
\subsection{Explicit formulation of the finite-flat condition on $b\up2\vert_p$}\label{subsec: finite-flat}
In this section, we shift from recollections to new content. It is readily apparent that the finite-flat condition of Proposition \ref{prop: produce b2} is too inexplicit for computation; the goal of this section is to remedy that problem.
For orientation, we recall the ``basis change'' argument of [Part I, \S\ref{P1: subsec: finite-flat on rho2}], which is a first step in making the finite-flat condition testable using Kummer theory. We let $\rho'_2$ be a conjugate of $\rho_2$ (as in \eqref{eq:rho2}) by a generalized matrix such that $\rho'_2\vert_p$ is upper-triangular. This conjugation was achieved with a lower-triangular generalized matrix of the form
\[
\ttmat{1}{0}{x_c + \epsilon x'_c}{1},
\]
where $x_c$ was prescribed in Proposition \ref{prop: produce a1}. We write $\rho'_2$ using the coordinate system
\[
\rho_2'=\ttmat{\omega (1+a\upp1\epsilon + a\upp2\epsilon^2) }{b\upp1 + b\upp2 \epsilon}{\omega(c\upp1 + c\upp2 \epsilon)}{1+d\upp1 \epsilon + d\upp2\epsilon^2},
\]
and, from Proposition \ref{prop: produce a1}, we have an unramified homomorphism
\[
a\upp1\vert_p = (a\up1 + b\up1 \smile x_c\big)\vert_p : G_p \to \mathbb{F}_p.
\]
Also, we have $b\upp2 = b\up2$ and $b\upp1=b\up1$ because the conjugation is lower-triangular.
We have the following simplified form of $\rho'_2\vert_p$, namely,
\[
\rho'_2\vert_p = \ttmat{\omega (1+a\upp1\epsilon + a\upp2\epsilon^2) }{b\up1 + b\up2 \epsilon}{0}{1-a\upp1 \epsilon + d\upp2\epsilon^2}\bigg\vert_p.
\]
The constant determinant property implies that
\[
d\upp1 = -a\upp1, \qquad d\upp2 = (a\upp1)^2-a\upp2.
\]
We observe that the differential equation imposed on $b\upp2 = b\up2$ by the homomorphism property of $\rho'_2$ is
\[
-db\upp2 = a\upp1 \smile b\upp1 + b\upp1 \smile d\upp1,
\]
which simplifies upon restriction to $G_p$ as
\[
-db\up2\vert_p = a\upp1\vert_p \smile b\up1\vert_p + b\up1\vert_p \smile (-a\upp1\vert_p).
\]
In Part I, we translate the finite-flat property of the GMA representation $\rho'_2\vert_p$ to a typical (matrix-valued) representation. For convenience, we write
\[
\chi_2 := (1 + \epsilon a\upp1 + \epsilon^2 a\upp2)\vert_p : G_p \to \big(\mathbb{F}_p[\epsilon]/(\epsilon^3)\big)^\times, \text{ and } \chi_1 = 1 + \epsilon a\upp1\vert_p : G_p \to \mathbb{F}_p[\epsilon_1]^\times,
\]
so that $\chi_1 = (\chi_2 \mod \epsilon^2)$ and
\[
\rho_2'\vert_p = \ttmat{\omega \chi_2}{b\up1 + \epsilon b\up2}{0}{\chi_2^{-1}}.
\]
We also establish notation for the homomorphism
\begin{equation}
\label{eq: P2 form eta1}
\eta_1 = \ttmat{\omega(1 + \epsilon a\upp1)}{b\up1 + \epsilon b\up2}{\epsilon c\upp1}{1 + d\upp1} : G_{\mathbb{Q},Np} \to \GL_2(\mathbb{F}_p[\epsilon]/(\epsilon^2))
\end{equation}
assembled from the coordinates of $\rho_2'$, as in [Part I, Def.\ \ref{defn: etas}]. Note that $\eta_1 \vert_p$ is upper-triangular, extending $\chi_1^{-1}$ by $\omega\chi_1$, since we have arranged that $c\upp1\vert_p = 0$.
\begin{prop}[Part I, Lemma \ref{lem: flat rho2 step 2}]
\label{P2: prop: ref flat rho2 step 2}
Assume the deformation $\rho_2$ of $\rho_1$ exists. Then $\rho_2$ is finite-flat at $p$ if and only if both the following (a) and (b) hold.
\begin{enumerate}[label=(\alph*)]
\item the homomorphism $\eta_1$ is finite-flat at $p$
\item $\chi_2 : G_p \to \mathbb{F}_p[\epsilon_2]^\times$ is unramified.
\end{enumerate}
Moreover, if $\rho_2$ satisfies (a), then there exists some deformation $\rho_{2,\mathrm{new}}$ of $\rho_1$ that is finite-flat at $p$ and such that $\eta_1$ equals the $\eta_{1, \mathrm{new}}$ formed from $\rho_{2,\mathrm{new}}$ via \eqref{eq: P2 form eta1}.
\end{prop}
\begin{proof}
The first claim is the equivalence $(1) \Rightarrow (3)$ of Part I, Lemma \ref{lem: flat rho2 step 2}. The second claim follows from the proof of Part I, Lemma \ref{lem: flat rho2 step 4}: the proof shows that once we have a $\rho_2$ whose induced $\eta_1$ is finite-flat at $p$, then it is possible to adjust at most the $a\up2$ and $d\up2$-coordinates to form $\rho_{2,\mathrm{new}}$ from $\rho_2$ such that $\rho_{2,\mathrm{new}}$ is finite-flat at $p$ and such that $\eta_{1,\mathrm{new}}$ (assembled from the coordinates of $\rho_{2,\mathrm{new}}$) equals $\eta_1$.
\end{proof}
Next, we want to understand when $\eta_1\vert_p : G_p \to \GL_2(\mathbb{F}_p[\epsilon]/(\epsilon^2))$, or equivalently the extension class $B := [b\up1 + \epsilon b\up2] \in \mathrm{Ext}^1_{\mathbb{F}_p[\epsilon_1][G_p]}(\chi_1^{-1}, \omega\chi_1)$, is finite-flat. Indeed, note that $\chi_1^{-1}$ and $\omega\chi_1$ are finite-flat because $\chi_1$ is unramified.
This issue is especially straightforward when $a\upp1\vert_p = 0$, making $\chi_1$ trivial and $b\up2\vert_p$ a cocycle. Otherwise, $a\upp1\vert_p$ cuts out the unique unramified cyclic degree $p$ extension $\mathbb{Q}_{p^p}/\mathbb{Q}_p$, and $b\up2\vert_p$ is not a cocycle on $G_p$. In order to apply Kummer theory in the latter case, we restrict $b\up2$ to the absolute Galois group of $\mathbb{Q}_{p^p}$. We write
\[
b\up2\vert_{p^p} \in Z^1(\mathbb{Q}_{p^p}, \mathbb{F}_p(1)), \text{ so that } [b\up2\vert_{p^p}] \in H^1(\mathbb{Q}_{p^p}, \mathbb{F}_p(1)) \cong \mathbb{Q}_{p^p}^\times/(\mathbb{Q}_{p^p}^\times)^p,
\]
where the final isomorphism comes from Kummer theory.
\begin{prop}
\label{prop: b2 f-f unramified test}
The finite-flatness of $\eta_1\vert_p$ is characterized by Kummer theory.
\begin{enumerate}[label = (\alph*)]
\item When $\chi_1 = 1$, $\eta_1$ is finite-flat at $p$ if and only if the class
\[
[b\up2\vert_p] \in H^1(\mathbb{Q}_p, \mathbb{F}_p(1)) \cong \mathbb{Q}_p^\times/(\mathbb{Q}_p^\times)^p
\]
of the cocycle $b\up2\vert_p$ is in the line spanned by the Kummer class of $1+p$.
\item When $\chi_1 \neq 1$, $\eta_1$ is finite-flat at $p$ if and only if the class
\[
[b\up2\vert_{p^p}] \in H^1(\mathbb{Q}_{p^p}, \mathbb{F}_p(1)) \cong \mathbb{Q}_{p^p}^\times/(\mathbb{Q}_{p^p}^\times)^p
\]
is in the subspace spanned by the subgroup of units in $\mathbb{Z}_{p^p}^\times$ that are $1 \pmod{p}$.
\end{enumerate}
\end{prop}
\begin{proof}
This is a simple application of Part I, Lemma \ref{P1: lem: local Kummer finite-flat}.
\end{proof}
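For part (a), the line spanned by the Kummer class of $1+p$ admits a concrete description: for odd $p$, the unit classes in $\mathbb{Q}_p^\times/(\mathbb{Q}_p^\times)^p$ form exactly the one-dimensional subspace generated by $1+p$, so a class represented by $x \in \mathbb{Q}_p^\times$ lies on that line if and only if $p$ divides the $p$-adic valuation $v_p(x)$. A sketch for rational representatives (the function names are ours):

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational number."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def in_line_of_kummer_1_plus_p(x, p):
    # the class of x in Q_p^x/(Q_p^x)^p lies in the span of the Kummer
    # class of 1+p exactly when x is a unit times a p-th power, i.e.
    # when p divides the p-adic valuation of x
    return vp(x, p) % p == 0
```

For example, any $p$-adic unit (such as $6$ for $p=5$) passes the test, while $p$ itself does not.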
While the condition of Proposition \ref{prop: b2 f-f unramified test} is arguably explicit, our application requires the finite-flatness of $\eta_1$ to be tested using Kummer theory in the extension field of $\mathbb{Q}_p$ \emph{cut out} by $b\up1$, meaning the minimal extension $A/\mathbb{Q}_p$ such that $b\up1\vert_{G_A} = 0$. It is important to understand how this notion behaves, which we describe in the following lemma; we omit the proof.
\begin{lem}
Let $x \in Z^1(\mathbb{Q}_p, \mathbb{F}_p(i))$ and assume $(p-1) \nmid i$. If $x \in B^1(\mathbb{Q}_p, \mathbb{F}_p(i))$ is non-zero, $\mathbb{Q}_p(\zeta_p)/\mathbb{Q}_p$ is the extension cut out by $x$. Otherwise, letting
\[
\rho_x = \ttmat{\omega^i}{x}{0}{1} : G_p \to \GL_2(\mathbb{F}_p),
\]
the finite extension of $\mathbb{Q}_p$ cut out by $x$ is the subfield of $\overline{\Q}_p^{\ker \rho_x}$ that is fixed by the image of $\sm{\omega^i}{}{}{1}$ under the isomorphism $\mathrm{Gal}(\overline{\Q}_p^{\ker \rho_x}/\mathbb{Q}_p) \cong \mathrm{image}(\rho_x)$.
\end{lem}
\begin{lem}
The extension of $\mathbb{Q}_p$ cut out by $b\up1\vert_p$ falls into two isomorphism classes,
\[
\mathbb{Q}_p(\ell_1^{1/p}) \simeq
\left\{
\begin{array}{ll}
\mathbb{Q}_p & \text{ if } \ell_1^{p-1} \equiv 1 \pmod{p^2} \\
\mathbb{Q}_p((1+p)^{1/p}) & \text{ if } \ell_1^{p-1} \not\equiv 1 \pmod{p^2}
\end{array}
\right.
\]
where the latter is totally ramified over $\mathbb{Q}_p$.
\end{lem}
\begin{proof}
To prove the claim, we apply the condition for the ramification of $b_1$ determined in Lemma \ref{lem: b1 ramification at p}, and note that $b_1$ is ramified at $p$ if and only if it is non-trivial at $p$.
First we address the ramified case. We recall from [Part I, Lem.\ \ref{P1: lem: local Kummer finite-flat}] that $H^1(\mathbb{Q}_p, \mathbb{F}_p(1))^{\mathrm{flat}}$ is 1-dimensional, being spanned by the Kummer class of $1+p$. Therefore, because $b_1\vert_p \in H^1(\mathbb{Q}_p, \mathbb{F}_p(1))$ is always contained in the finite-flat subspace, the extension of $\mathbb{Q}_p$ cut out by $\sm{\omega}{b\up1}{0}{1}$ is $\mathbb{Q}_p(\zeta_p, (1+p)^{1/p})$. By Galois theory, all of its subfields of degree $p$ over $\mathbb{Q}_p$ are mutually isomorphic, and therefore isomorphic to $\mathbb{Q}_p((1+p)^{1/p})$.
In the unramified case, our restriction on $b\up1$ in Definition \ref{defn: pinned cocycles} implies that $b\up1\vert_p = 0$, making it cut out the trivial extension of $\mathbb{Q}_p$.
\end{proof}
When $\mathbb{Q}_p(\ell_1^{1/p}) = \mathbb{Q}_p$, the finite-flatness of $\eta_1$ can be tested just as easily as in Proposition \ref{prop: b2 f-f unramified test}. However, when $\mathbb{Q}_p(\ell_1^{1/p})/\mathbb{Q}_p$ is totally ramified, the test becomes more difficult: Lemma \ref{P1: lem: local Kummer finite-flat} of Part I does not apply over ramified extensions of $\mathbb{Q}_p$. This difficulty can be circumvented when $a\upp1\vert_p = 0$, because $b\up2\vert_p$ is a cocycle. Hence the remaining difficulty is concentrated in the case where
\[
\tag{$\star$} a\upp1\vert_p \neq 0 \text{ and } b\up1\vert_{I_p} \neq 0.
\]
The following proposition, which ends up being an exercise in Kummer theory, addresses this most difficult case $(\star)$. Let $F := \mathbb{Q}_p((1+p)^{1/p})$ and fix an isomorphism between $F$ and $\mathbb{Q}_p(\ell_1^{1/p})$. Let $\pi \in F$ denote a uniformizer.
\begin{lem}\label{lem: b1 ramified f-f test}
In case $(\star)$, the following conditions are equivalent.
\begin{enumerate}
\item $[b\up2\vert_{\mathbb{Q}_{p^p}}] \in H^1(\mathbb{Q}_{p^p}, \mathbb{F}_p(1))$ corresponds to a $p$-unit under the isomorphism $H^1(\mathbb{Q}_{p^p}, \mathbb{F}_p(1)) \cong \mathbb{Q}_{p^p}^\times/(\mathbb{Q}_{p^p}^\times)^p$ of Kummer theory
\item $[b\up2\vert_{F}] \in H^1(F, \mathbb{F}_p(1))$ corresponds to a $\pi$-unit that is $1 \pmod{\pi^2}$ under the isomorphism
\[
H^1(F, \mathbb{F}_p(1)) \cong F^\times/(F^\times)^p
\]
of Kummer theory.
\end{enumerate}
\end{lem}
\begin{proof}
Let $M$ denote the composite field of $F$ and $\mathbb{Q}_{p^p}$, $M := \mathbb{Q}_{p^p}((1+p)^{1/p})$. We choose a uniformizer of $F$ and $M$,
\[
\pi := (1+p)^{1/p} - 1 \in F.
\]
We have a commutative diagram with exact rows and columns
\[
\xymatrix{
& 0 \ar[d] & 0 \ar[d] \\
& \langle1+p\rangle \ar[d] \ar[r]^\sim & \langle 1 + p \rangle \ar[d] \\
0 \ar[r] & \mathbb{Q}_p^\times/(\mathbb{Q}_p^\times)^p \ar[r] \ar[d] & \mathbb{Q}_{p^p}^\times/(\mathbb{Q}_{p^p}^\times)^p \ar[d] \\
0 \ar[r] & F^\times/(F^\times)^p \ar[r] & M^\times/(M^\times)^p
}
\]
where both horizontal inclusions arise from taking invariants of an equivariant Galois action of $\mathrm{Gal}(\mathbb{Q}_{p^p}/\mathbb{Q}_p) \cong \mathrm{Gal}(M/F)$.
In case $(\star)$, the fact that $b\up2\vert_p$ is a cocycle upon restriction to both $G_F$ and $G_{\mathbb{Q}_{p^p}}$ means that the classes induced by these cocycles are sent, via Kummer theory, to elements $y \in \mathbb{Q}_{p^p}^\times/(\mathbb{Q}_{p^p}^\times)^p$ and $z \in F^\times/(F^\times)^p$ that map to the same element of $M^\times/(M^\times)^p$. Thus the $\mathrm{Gal}(\mathbb{Q}_{p^p}/\mathbb{Q}_p)$-action on $y$ is trivial.
In order to apply Proposition \ref{prop: b2 f-f unramified test}(b), it will be useful to know what the coordinates of $p$ are in the decomposition
\[
F^\times/(F^\times)^p \cong \langle \pi \rangle \oplus (1 + \pi \mathcal{O}_F)/(1 + \pi \mathcal{O}_F)^p.
\]
The key calculation is the equality
\[
\frac{\pi^p}{p} = 1 + (1+p)^{1/p} - \frac{1}{p}\sum_{i=2}^{p-1} (-1)^i \binom{p}{i} (1+p)^{i/p} \in \mathcal{O}_F^\times,
\]
which means that $p$ has trivial $\langle \pi\rangle$-part in the decomposition of $F^\times/(F^\times)^p$. Moreover, its image in $(1 + \pi \mathcal{O}_F)/(1 + \pi \mathcal{O}_F)^p$ is non-trivial modulo the image of $(1 + \pi^2 \mathcal{O}_F)$.
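This calculation rests on the polynomial identity $(t-1)^p = p + pt - \sum_{i=2}^{p-1} (-1)^i \binom{p}{i}\, t^i$ after imposing $t^p = 1+p$, where $t = (1+p)^{1/p}$; note that each binomial coefficient $\binom{p}{i}$ with $0 < i < p$ is divisible by $p$, so every term of the sum, divided by $p$, lies in $\mathcal{O}_F$. A quick symbolic verification of the identity (our own helper):

```python
from math import comb

def identity_holds(p):
    """Check (t-1)^p == p + p*t - sum_{i=2}^{p-1} (-1)^i C(p,i) t^i
    as polynomials in t, after imposing the relation t^p = 1 + p."""
    # coefficient of t^k in (t-1)^p is C(p,k) * (-1)^(p-k)
    lhs = [comb(p, k) * (-1) ** (p - k) for k in range(p + 1)]
    # reduce modulo t^p - (1+p): fold the t^p term into the constant
    lhs[0] += lhs[p] * (1 + p)
    lhs = lhs[:p]
    rhs = [0] * p
    rhs[0], rhs[1] = p, p
    for i in range(2, p):
        rhs[i] = -((-1) ** i) * comb(p, i)
    return lhs == rhs
```

This passes for every odd prime $p$; for instance, for $p=5$ both sides reduce to $5 + 5t - 10t^2 + 10t^3 - 5t^4$.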
Using the natural isomorphism
\[
\mathbb{Q}_{p^p}^\times/(\mathbb{Q}_{p^p}^\times)^p \cong \langle p \rangle \oplus (1 + p \mathbb{Z}_{p^p})/(1 + p \mathbb{Z}_{p^p})^p
\]
and the Galois-equivariant exp-log isomorphism $(1 + p \mathbb{Z}_{p^p}, \cdot) \cong (p\mathbb{Z}_{p^p}, +)$, we find that $y \in \langle 1+p\rangle$ if and only if its $\langle p \rangle$-coordinate in this decomposition vanishes.
Putting the above two facts together, we observe that the equivalence of (1) and (2) follows from proving that the pullback to $F^\times/(F^\times)^p$ of the image of $(1 + p \mathbb{Z}_{p^p})/(1 + p \mathbb{Z}_{p^p})^p$ in $M^\times/(M^\times)^p$ lies in the span of $1+\pi^2 \mathcal{O}_F$. This is easily verified by viewing $1 + p\mathbb{Z}_{p^p}$ and $1 + \pi^2 \mathcal{O}_F$ within $1 + \pi^2 \mathcal{O}_M$ and using the fact that we have a logarithm isomorphism $(1 + \pi^2\mathcal{O}_M,\cdot) \isoto (\pi^2\mathcal{O}_M, +)$.
\end{proof}
\section{Sharifi's explicit solutions to Massey product differential equations}\label{sec: sharifi}
We now turn our focus to developing a method for computing whether the cohomological conditions $(i)$ and $(ii)$ in Theorem \ref{thm: main of part 1} hold. Central to our approach is work of Sharifi giving explicit solutions to cup product and certain Massey product equations \cite{sharifithesis,sharifi2007}. More specifically, from Propositions \ref{prop: produce a1} and \ref{prop: produce b2}, we know that there exist solutions of the differential equations given by
\begin{equation}
\label{eq:da1 and db2}
\begin{split}
-da\up1_\mathrm{cand} &= b\up1 \smile c\up1,\\
-db\up2_\mathrm{cand} &= a\up1 \smile b\up1 + b\up1 \smile d\up1.
\end{split}
\end{equation}
Figure \ref{fig:unifrom}(b) shows the 25-point $MED_0$, which clearly has better projections than the original MED. The product measure ensures that no two points can share a coordinate. Thus, the design projects onto $n$ distinct points in each dimension, a property shared by the popular Latin hypercube designs \citep{Santner2003}. In fact, the criterion in (\ref{eq:MED0}) for $f(\bm x)=1$ is a limiting case of the MaxPro design criterion proposed by \citet{Joseph2015b}. The Latin hypercube and MaxPro designs have much better centered $L_2$ discrepancy ($CL_2$) measures \citep{Hickernell1998} than factorial-type designs and are thus expected to perform much better in high dimensions. We have not yet established the theoretical convergence rate for the integration errors of these new designs, but our investigation of similar designs shows a rate better than the MC rate by a $\log n$ factor \citep{Mak2017a, Mak2017b}.
The MED criterion in (\ref{eq:MED0}) can also be given a probabilistic interpretation. It can be written as
\[\max_{\bm D} \min_{i\ne j} \sqrt{ f(\bm x_i)f(\bm x_j)}V_R(\bm x_i,\bm x_j),\]
where $V_R(\bm x_i,\bm x_j)=\prod_{l=1}^p|x_{il}-x_{jl}|$ is the volume of the hyper-rectangle, which has $\bm x_i$ and $\bm x_j$ at the two opposite corners. See Figure \ref{fig:prob} for an illustration. Thus, the same probability-balancing interpretation as in the Euclidean case holds for this criterion as well, which can be obtained by replacing the hyper-sphere volume element with the hyper-rectangle volume element.
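This probability-balanced quantity is simple to compute directly. The sketch below is illustrative (not the authors' code); the design matrix and density are made-up examples:

```python
import numpy as np

def med0_criterion(D, f):
    """min over pairs of sqrt(f(x_i) f(x_j)) * V_R(x_i, x_j), where V_R is the
    volume of the hyper-rectangle with x_i and x_j at opposite corners."""
    n = D.shape[0]
    best = np.inf
    for i in range(n):
        for j in range(i + 1, n):
            vol = np.prod(np.abs(D[i] - D[j]))  # hyper-rectangle volume V_R
            best = min(best, np.sqrt(f(D[i]) * f(D[j])) * vol)
    return best

# Uniform density: the criterion reduces to the smallest rectangle volume.
D = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.25]])
val = med0_criterion(D, lambda x: 1.0)
```

A design maximizing this quantity pushes every pair of points apart in all coordinates simultaneously, which is why its one-dimensional projections avoid ties.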
\subsection{Correlated Distributions}
Consider a multivariate normal density with mean at $\bm {.5}=(0.5,\ldots,0.5)'$ and a first-order auto-regressive correlation structure: $\bm x\sim N(\bm {.5},\sigma^2\bm R)$, where $\sigma=1/8$, $\bm R_{ij}=\rho^{|i-j|}$ for $\rho\in [0,1]$ and $i,j=1,\ldots,p$. The left panel in Figure \ref{fig:normal10} shows the marginal distributions of a 149-point MED for a 10-dimensional multivariate normal density with $\rho=0$. We can see that the marginal distributions agree reasonably well with the true marginal density $N(.5,1/8)$, which is shown as a thick solid black line. Now consider the correlated case with $\rho=0.9$. The marginal distributions of the 149-point MED are shown in the right panel of Figure \ref{fig:normal10}. We can see that they are more dispersed than the true marginal distribution. We will show below that this problem lies not with the algorithm, but with the MED criterion itself.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width = 0.45\textwidth]{rho0.pdf} &
\includegraphics[width = 0.45\textwidth]{rho9.pdf}
\end{tabular}
\caption{Estimated marginal distributions from a 149-point MED of a 10-dimensional independent normal distribution (left) and dependent normal distribution with $\rho=0.9$ (right). The true marginal distribution is shown as a thick solid black line.}
\label{fig:normal10}
\end{center}
\end{figure}
Let $\bm \Sigma=\sigma^2\bm R$ and $f(\bm x)=\exp\{-.5(\bm x-\bm {.5})'\bm \Sigma^{-1}(\bm x-\bm {.5})\}$, omitting the normalizing constant. Consider a two-point MED. By symmetry, they should be of the form $\bm x_1=\bm {.5}-\bm u$ and $\bm x_2=\bm {.5}+\bm u$, where $\bm u$ maximizes
\begin{eqnarray*}
\log\psi(\bm D)&=&.5\log f(\bm {.5}-\bm u)+.5 \log f(\bm {.5}+\bm u)+p\log d(\bm {.5}-\bm u,\bm {.5}+\bm u)\\
&=&-.5\bm u'\bm \Sigma^{-1}\bm u+\frac{p}{2}\log (4\bm u'\bm u).
\end{eqnarray*}
Differentiating and equating to 0, we find that $\bm u$ should satisfy
\[\left(\bm \Sigma-\frac{\bm u'\bm u}{p}\bm I\right)\bm u=0.\]
Now when $\rho=0$, we can see that the solution can be any point on the surface of the sphere $\bm u'\bm u=p\sigma^2$. One particular solution is $\bm u=\pm\sigma \bm 1$. On the other hand, if $\rho>0$, then $\bm u$ lies in the eigendirection corresponding to the largest eigenvalue of $\bm \Sigma$. Consider the extreme case of $\rho=1$. Then, it is easy to show that $\bm u=\pm \sigma\sqrt{p}\bm 1$ is the only solution. Thus, as $p$ increases, the MED points move away from the center, which explains the phenomenon observed in Figure \ref{fig:normal10}.
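The stationarity condition above is easy to verify numerically. This sketch (with illustrative $p$ and $\sigma$) checks both limiting cases:

```python
import numpy as np

def stationarity_residual(Sigma, u):
    """Residual of (Sigma - (u'u/p) I) u = 0 for the two-point MED."""
    p = len(u)
    return (Sigma - (u @ u / p) * np.eye(p)) @ u

p, sigma = 4, 1.0 / 8.0

# rho = 0: Sigma = sigma^2 I; any u on the sphere u'u = p sigma^2 works,
# e.g. u = sigma * 1.
r0 = stationarity_residual(sigma**2 * np.eye(p), sigma * np.ones(p))

# rho = 1: Sigma = sigma^2 * 1 1' (rank one); u = sigma * sqrt(p) * 1.
Sigma1 = sigma**2 * np.ones((p, p))
r1 = stationarity_residual(Sigma1, sigma * np.sqrt(p) * np.ones(p))
```

Both residuals vanish, confirming the two solutions stated in the text.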
Consider a modified MED criterion by replacing the Euclidean distance with the Mahalanobis distance
\begin{equation}\label{eq:MED_M}
\max_{\bm D}\tilde{\psi}(\bm D)=\max_{\bm D}\min_{i\ne j} f^{1/(2p)}(\bm x_i)f^{1/(2p)}(\bm x_j)\tilde{d}(\bm x_i,\bm x_j;\bm \Sigma),
\end{equation}
where $\tilde{d}^2(\bm x_i,\bm x_j;\bm \Sigma)=(\bm x_i-\bm x_j)'\bm \Sigma^{-1}(\bm x_i-\bm x_j)$. The distributional convergence of the MED from (\ref{eq:MED_M}) follows from Theorem 1 using a linear transform of the space. Now for the 2-point MED of a multivariate normal distribution considered earlier, it is easy to show that $\bm u$ can be any point on the ellipsoid $\bm u'\bm \Sigma^{-1} \bm u=p\sigma^2$ irrespective of the value of $\rho$. This makes more sense than the single-point solution on the principal eigendirection obtained with the Euclidean distance when $\rho\ne 0$. With this modification, the algorithm is now likely to choose a solution not too far from the center. However, there is also the danger of choosing a solution too close to the center. Ideally we want to choose a solution that maximizes not only $\tilde{\psi}(\bm D)$, but also $\psi(\bm D)$. So we may consider maximizing
\begin{equation}\label{eq:MED_M2}
w\psi(\bm D)+(1-w)\tilde{\psi}(\bm D),
\end{equation}
for some $w\in (0,1)$. However, this doubles the cost of evaluating the objective function and also introduces the difficulty of choosing a new parameter $w$. Fortunately, there is an easy way to incorporate this into the proposed algorithm without these additional complications. Note that the algorithm makes two passes at each step, first to find $\bm D_k^{new}$ and then to find $\bm D_k$. So we simply use $\psi(\bm D)$ in the first step and then use $\tilde{\psi}(\bm D)$ in the second step. This ensures the final design has high objective function values under both criteria. The marginal distributions of the MED points using the modified algorithm are shown in the left panel of Figure \ref{fig:normal10_MD}, and they agree reasonably well with the true marginal distribution. The $10\choose 2$ estimated correlations from the MED are plotted against the true correlations in the right panel, which show good agreement. Thus, these two plots together show that the 10-dimensional normal distribution is well-represented by the 149-point MED.
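The Mahalanobis-distance criterion $\tilde{\psi}$ in (\ref{eq:MED_M}) can be sketched as follows (illustrative inputs; with $\bm \Sigma=\bm I$ it reduces to the Euclidean version):

```python
import numpy as np

def med_mahalanobis(D, f, Sigma):
    """tilde-psi(D) = min over pairs of
    f(x_i)^{1/(2p)} f(x_j)^{1/(2p)} * Mahalanobis distance d~(x_i, x_j; Sigma)."""
    n, p = D.shape
    Sinv = np.linalg.inv(Sigma)
    best = np.inf
    for i in range(n):
        for j in range(i + 1, n):
            diff = D[i] - D[j]
            dist = np.sqrt(diff @ Sinv @ diff)  # Mahalanobis distance
            best = min(best, (f(D[i]) * f(D[j])) ** (1.0 / (2 * p)) * dist)
    return best

# Euclidean special case: Sigma = I, uniform f, distance between (0,0) and (3,4).
D = np.array([[0.0, 0.0], [3.0, 4.0]])
val = med_mahalanobis(D, lambda x: 1.0, np.eye(2))
```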
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width = 0.45\textwidth]{rho9_MD.pdf} &
\includegraphics[width = 0.45\textwidth]{correlations.pdf}
\end{tabular}
\caption{Estimated marginal distributions from a 149-point MED of a 10-dimensional normal distribution with $\rho=0.9$ using the modified criterion in (\ref{eq:MED_M2}) (left). The estimated correlations are plotted against the true correlations (right).}
\label{fig:normal10_MD}
\end{center}
\end{figure}
However, the foregoing improvement on MED using the Mahalanobis distance may not help with more complex distributions such as the banana-shaped density considered earlier. For this distribution, the overall correlation is approximately zero and thus the new criterion reduces to the original criterion. The marginal distributions of the design from the last panel of Figure \ref{fig:banana1-6} are shown in Figure \ref{fig:bananax1x2}. In this case, the true marginal distributions can be obtained through numerical integration and are also shown in the same figure. We can see that the marginal distributions from MED are much more dispersed than the true distributions. One way to fix this issue is to follow up the MED with MCMC or a deterministic approximation technique such as DoIt \citep{Joseph2012, Joseph2013}. Consider the option of using MCMC. We first fit a limit kriging predictor on the log-unnormalized posterior using the 654 points. Then we run $n=109$ MCMC chains using the Metropolis algorithm, starting one chain at each of the $n$ MED points. The size of the $i$th MCMC chain is taken as $\lceil Np_i\rceil$, where $p_i= f(\bm x_i)/\sum_{i=1}^nf(\bm x_i)$ and $N=10,000$. The marginal densities obtained from the resulting MCMC samples are also shown in Figure \ref{fig:bananax1x2}. We can see that they match the true distribution well. Note that we have not made any new evaluations of the unnormalized posterior for doing this.
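The chain-size allocation for the follow-up MCMC (one chain per MED point, with length proportional to the density value there) can be sketched as:

```python
import numpy as np
from math import ceil

def chain_sizes(f_values, N=10000):
    """Length ceil(N * p_i) for the chain started at x_i,
    with p_i = f(x_i) / sum_i f(x_i)."""
    p = np.asarray(f_values, dtype=float)
    p = p / p.sum()
    return [ceil(N * pi) for pi in p]

sizes = chain_sizes([1.0, 1.0, 2.0], N=100)  # -> [25, 25, 50]
```

Points in high-density regions thus seed proportionally longer chains, so the pooled samples weight each mode correctly.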
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width = 0.45\textwidth]{banana_x1.pdf} &
\includegraphics[width = 0.45\textwidth]{banana_x2.pdf}
\end{tabular}
\caption{Estimated marginal distributions from a 109-point MED of the banana-shaped distribution. The true marginal distribution and density from the follow-up MCMC based on the MED points are also shown.}
\label{fig:bananax1x2}
\end{center}
\end{figure}
\subsection{Improved MED Criterion and Algorithm}
The developments in the previous two subsections suggest an improved version of the MED criterion. We can write (\ref{eq:MED_M}) as
\[\max_{\bm D}\min_{i\ne j} f^{1/(2p)}(\bm x_i)f^{1/(2p)}(\bm x_j)d(\bm \Sigma^{-1/2}\bm x_i,\bm \Sigma^{-1/2}\bm x_j).\]
Now the Euclidean distance $d(\cdot,\cdot)$ can be generalized to obtain
\begin{equation}\label{eq:newMED}
\max_{\bm D}\min_{i\ne j} f^{1/(2p)}(\bm x_i)f^{1/(2p)}(\bm x_j)d_s(\bm \Sigma^{-1/2}\bm x_i,\bm \Sigma^{-1/2}\bm x_j),
\end{equation}
where $d_s(\cdot,\cdot)$ is defined in (\ref{eq:ds}). The only two remaining things to decide are the choices of $\bm \Sigma$ and $s$.
Because we use an iterative algorithm, $\bm \Sigma$ at the $(k+1)$th stage can be estimated using the MED at the $k$th stage. An estimate of $\bm \Sigma$ can be obtained using the sample variance-covariance matrix: $\bm \Sigma_k=\widehat{var}(\bm D_k)$. The inverse of the Hessian of the negative log-likelihood computed at the posterior mode can also be used as an estimate of $\bm \Sigma_k$. The Hessian-based estimate suggests that $\bm \Sigma_{k+1}=\gamma_k/\gamma_{k+1}\bm \Sigma_k$, which makes sense because the variance decreases as $\gamma$ increases. Thus, we use the estimate
\[\bm \Sigma_{k+1}=\frac{\gamma_k}{\gamma_{k+1}}\widehat{var}(\bm D_k).\]
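This update is a one-liner: rescale the sample covariance of the current design by the ratio of inverse temperatures. A sketch with an illustrative design and temperatures:

```python
import numpy as np

def update_sigma(D_k, gamma_k, gamma_next):
    """Sigma_{k+1} = (gamma_k / gamma_next) * sample covariance of D_k."""
    return (gamma_k / gamma_next) * np.cov(D_k, rowvar=False)

# Four corner points of a square with side 2: sample covariance is (4/3) I.
D_k = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
Sigma_next = update_sigma(D_k, gamma_k=1.0, gamma_next=2.0)
```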
Now consider the choice of $s$. Our experiments with multivariate normal distributions using different values of $s$ gave better results for $s=2$ than for $s=0$. We think this is because it is unlikely that the MED points of a non-uniform distribution will lie parallel to the coordinate axes. Thus, projections seem to be less of a concern with non-uniform distributions. Bayesian asymptotics suggest that posterior distributions tend to normality as the sample size increases, so the posterior distribution is likely to be non-uniform. However, when the data are uninformative and the prior distributions are flat, $s=0$ can perform better. Here is an adaptive choice of $s$ at the $k$th stage:
\[s_k=2\left\{1-\left(\frac{\bm f_{k,min}}{\bm f_{k,max}}\right)^{\gamma_k}\right\},\]
where $\bm f_{k,min}$ and $\bm f_{k,max}$ denote the minimum and maximum values of $\bm f_k$, respectively. We can see that if $\bm f_{k,min}=\bm f_{k,max}$ (i.e., a uniform distribution) or $\gamma_k=0$, then $s_k=0$, whereas $s_k$ tends to 2 if $\bm f_{k,min}\ll \bm f_{k,max}$ and $\gamma_k$ is large. We can also use a lower and upper quantile of $\bm f_k$ instead of the minimum and maximum values to detect almost-flat distributions.
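A direct implementation of this adaptive choice of $s_k$, using the minimum and maximum of $\bm f_k$ as in the display above:

```python
import numpy as np

def adaptive_s(f_k, gamma_k):
    """s_k = 2 * (1 - (f_min / f_max) ** gamma_k)."""
    f_k = np.asarray(f_k, dtype=float)
    return 2.0 * (1.0 - (f_k.min() / f_k.max()) ** gamma_k)

s_flat = adaptive_s(np.ones(5), gamma_k=3.0)     # uniform density -> 0
s_peaked = adaptive_s([1e-9, 1.0], gamma_k=5.0)  # sharply peaked -> close to 2
```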
Consider again the $p$-dimensional normal distribution with $\rho=0.9^{\log p}$ with $p=1,2,...,30$. To use the fast component-by-component construction of the lattice rule, we choose the largest prime number less than $100+
5p$ as the size of the MED ($n$). There are a few more parameters to choose in the algorithm. We set $K=\lceil 4\sqrt{p}\rceil$ as the number of steps in the algorithm and we choose the closest $n$ points to $\bm x_j$ to define the local region $L_{jk}$. We have used these parameter settings for all the examples presented in this article. The left panel of Figure \ref{fig:sim} shows the logarithm of the number of density evaluations made by the new algorithm and the generalized simulated annealing (GSA) algorithm used in \citet{Joseph2015a}. For the 30-dimensional density, the number of evaluations made by the new algorithm is smaller than that of the GSA algorithm by a factor of 1500, which is a substantial saving! In terms of CPU time, generating $n=241$ MED points for the 30-dimensional density took about 40 minutes using the new algorithm, with a total of 5302 density evaluations. The current implementation is in R and we believe that it can be made an order of magnitude faster by converting it to C++. The right panel of Figure \ref{fig:sim} shows the $CL_2$ discrepancy criterion of the MED generated by the two algorithms, which is computed by transforming the point set to a unit hypercube using the normal distribution function. This shows that the MED generated by the new algorithm has much smaller discrepancy and is thus closer to the target distribution. However, the quality of the point set deteriorates as the number of dimensions increases. The $CL_2$ discrepancy of the Sobol sequence with size equal to the number of evaluations made by the new algorithm is also shown in the same figure. Clearly, Sobol is slightly better than the MED generated by the new algorithm, but we should remember that the Sobol sequence can be used only when the distribution function is known, which is not the case in a real Bayesian problem.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width = 0.45\textwidth]{evaluations.pdf} &
\includegraphics[width = 0.45\textwidth]{cl2.pdf}
\end{tabular}
\caption{The logarithm of the number of density evaluations made by the new algorithm and the GSA algorithm are shown in the left panel. The right panel shows the centered $L_2$ discrepancy measure (smaller-the-better).}
\label{fig:sim}
\end{center}
\end{figure}
\section{A COMPUTATIONALLY EXPENSIVE EXAMPLE}
\citet{Miller2007} developed a thermo-mechanical finite element model (FEM) to simulate a friction drilling process. The model has several outputs, but here we analyze only one output of the model, the thrust force ($y$). The FEM contains an unknown parameter, coefficient of friction ($\eta$), which needs to be specified to get the output. Figure \ref{fig:FEM} shows the thrust force over the tool travel distance ($x$) for three different values of the coefficient of friction. Miller and Shih also performed a physical test to validate the FEM model. The measured thrust force from the physical test is also shown in Figure \ref{fig:FEM}. From this, they concluded that $\eta=0.7$ is the best choice for the friction coefficient. However, they also noticed the discrepancy in the thrust force predictions even with the best choice of $\eta$.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 0.45\textwidth]{FEM.pdf}
\caption{FEM outputs at three values of the friction coefficient and actual force data from physical experiment.}
\label{fig:FEM}
\end{center}
\end{figure}
\citet{Kennedy2001} proposed a Bayesian calibration approach to simultaneously estimate $\eta$ and the discrepancy. The discrepancy in their approach is modeled using a Gaussian process (GP). However, a more engineering approach to deal with the discrepancy is to understand the causes of discrepancy and improve the FEM to avoid the discrepancies.
Further investigation of \citet{Miller2007} showed that this discrepancy can be attributed to the deflection of the sheet at the initial contact with the tool. Thus the actual tool travel is less than the tool travel used in the FEM. It is not easy to implement this correction in the FEM code because the elastic deflection and thrust force need to be solved iteratively which can substantially increase the computational time. \citet{Joseph2015c} proposed to use simple statistical adjustments in such situations. Their approach can be described as follows.
Let $y=g(x;\eta)$ be the FEM. Then, we can take $y=g(\gamma x;\eta)$, where $\gamma \in [0,1]$ is an adjustment parameter, which accounts for the workpiece deflection at the initial contact. If we feel that the deflection can change during tool travel, we can introduce one more adjustment parameter to obtain $y=g(\gamma_1 x^{\gamma_2};\eta)$, where $\gamma_2>1$ indicates that the deflection increases with tool travel. Thus the model calibration problem reduces to the following nonlinear regression problem:
\[y_i=g(\gamma_1 x_i^{\gamma_2};\eta)+\epsilon_i,\]
where $\epsilon_i\sim^{iid}N(0,\sigma^2)$. However, unlike the usual nonlinear regression problems, $g(\cdot,\cdot)$ is a computationally expensive model. Therefore, we first approximate the FEM using a GP model:
\[\hat{g}(x;\eta)=\exp\{\hat{\mu}+\bm r(\bm x;\bm \eta)'\bm R^{-1}(\log \bm y^{FEM}-\hat{\mu}\bm 1)\},\]
where $\hat{\mu}=\bm 1'\bm R^{-1}\log \bm y^{FEM}/\bm 1'\bm R^{-1}\bm 1$ and $\bm r(\bm x;\bm \eta)$ and $\bm R$ are the correlation vector and correlation matrix computed using the Gaussian correlation function $R(\bm h)=\exp(-\theta_x h_x^2-\theta_{\eta} h_{\eta}^2)$. A log-transformation was used to ensure that the predictions are non-negative. The details of GP models can be found in many references, for example, the book by \citet{Santner2003}. We have used the R package \emph{GPfit} \citep{GPfit2015} for fitting the model.
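A minimal sketch of this log-scale kriging predictor with the Gaussian correlation function (not the \emph{GPfit} implementation; the $\theta$ values, nugget, and data below are illustrative). At a training input the predictor interpolates the observed output:

```python
import numpy as np

def gp_predict(x_new, X, y, theta, nugget=1e-10):
    """exp{mu + r' R^{-1} (log y - mu 1)} with Gaussian correlation
    R(h) = exp(-sum_l theta_l h_l^2); mu is the GLS mean on the log scale."""
    def corr(A, B):
        d2 = (A[:, None, :] - B[None, :, :]) ** 2
        return np.exp(-(d2 * theta).sum(axis=-1))
    logy = np.log(y)
    R = corr(X, X) + nugget * np.eye(len(X))       # tiny nugget for stability
    Ri = np.linalg.inv(R)
    one = np.ones(len(X))
    mu = one @ Ri @ logy / (one @ Ri @ one)
    r = corr(np.atleast_2d(x_new), X)[0]
    return float(np.exp(mu + r @ Ri @ (logy - mu * one)))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])
pred = gp_predict(X[1], X, y, theta=np.array([1.0, 1.0]))  # interpolates y[1]
```

The log transform, as in the text, guarantees non-negative predictions after exponentiation.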
Now to estimate the friction coefficient and the two adjustment parameters, we used the following Bayesian model:
\begin{eqnarray*}
y_i&\sim^{iid} & N\left(\hat{g}(\gamma_1 x_i^{\gamma_2};\eta),\sigma^2\right),\;i=1,\ldots,N,\\
\eta &\sim & p(\eta;.5,1,10,10),\\
\gamma_1&\sim & p(\gamma_1;.5,1,10,100),\\
\gamma_2&\sim & p(\gamma_2;.75,1.25,10,10),\\
p(\sigma^2)&\propto & 1/\sigma^2,
\end{eqnarray*}
where $N=96$ and $p(x;a,b,\lambda_a,\lambda_b)=\exp\{\lambda_a (x-a)\}I(x<a)+I(a\le x\le b)+\exp\{-\lambda_b (x-b)\}I(x>b)$ is the prior distribution. This prior places a uniform density on the interval $[a,b]$ and exponential tails outside $[a,b]$. Thus, $p(\eta;.5,1,10,10)$ ensures that the estimates of $\eta$ will most likely be in the experimental range of $[.5,1]$, but allows for slight extrapolation outside $[.5,1]$. A larger value of $\lambda_b$ is used for $\gamma_1$ because it is unlikely to be above 1. For simplicity, we have ignored the uncertainties in the estimation of $g(\cdot;\cdot)$.
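The prior density $p(x;a,b,\lambda_a,\lambda_b)$ is straightforward to code (a sketch, vectorized over $x$):

```python
import numpy as np

def prior(x, a, b, lam_a, lam_b):
    """Uniform density 1 on [a, b]; exponentially decaying tails outside."""
    x = np.asarray(x, dtype=float)
    return np.where(x < a, np.exp(lam_a * (x - a)),
                    np.where(x > b, np.exp(-lam_b * (x - b)), 1.0))

p_in = prior(0.75, 0.5, 1.0, 10.0, 10.0)  # inside [a, b]: density 1
p_lo = prior(0.4, 0.5, 1.0, 10.0, 10.0)   # below a: exp(lam_a * (x - a))
p_hi = prior(1.1, 0.5, 1.0, 10.0, 10.0)   # above b: exp(-lam_b * (x - b))
```

Larger $\lambda$ values shrink the tails, discouraging (but not forbidding) extrapolation outside $[a,b]$.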
There are four unknown parameters in the Bayesian model: $(\eta,\gamma_1,\gamma_2,\sigma^2)$ and their posterior distribution is
\[p(\eta,\gamma_1,\gamma_2,\sigma^2|\bm y)\propto \frac{1}{\sigma^N}\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^N \{y_i-\hat{g}(\gamma_1 x_i^{\gamma_2};\eta)\}^2\right]p(\eta)p(\gamma_1)p(\gamma_2)p(\sigma^2).\]
We can easily integrate out $\sigma^2$ to obtain (omitting the normalizing constant)
\[\log p(\eta,\gamma_1,\gamma_2|\bm y)=-\frac{N}{2}\log\left(\sum_{i=1}^N \{y_i-\hat{g}(\gamma_1 x_i^{\gamma_2};\eta)\}^2\right)+\log p(\eta)+\log p(\gamma_1)+\log p(\gamma_2).\]
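A sketch of this marginal log posterior. Integrating out $\sigma^2$ under the prior $1/\sigma^2$ turns the residual sum of squares into its logarithm; the surrogate \texttt{ghat} and the log-prior passed in below are placeholders, not the fitted emulator:

```python
import numpy as np

def log_unnorm_post(eta, g1, g2, x, y, ghat, log_priors):
    """-(N/2) * log sum_i {y_i - ghat(g1 * x_i**g2, eta)}^2 + log-priors."""
    resid = y - ghat(g1 * x ** g2, eta)
    return -0.5 * len(y) * np.log(np.sum(resid ** 2)) + log_priors(eta, g1, g2)

# Toy check with a linear stand-in for the emulator and flat priors.
x = np.array([1.0, 2.0])
y = np.array([1.1, 2.1])
lp = log_unnorm_post(1.0, 1.0, 1.0, x, y,
                     ghat=lambda t, eta: eta * t,
                     log_priors=lambda *args: 0.0)
```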
There were 332 observations in the FEM data, so prediction with the GP model still takes time. One prediction using \emph{GPfit} took about 0.16 seconds on a 3.2 GHz laptop. Ninety-six such evaluations are needed to compute $\log p(\eta,\gamma_1,\gamma_2|\bm y)$. Thus, one evaluation of the unnormalized posterior takes about 15.4 seconds.
We scale the parameters to $[0,1]^3$ and apply the algorithm proposed in Section 3 to generate an MED of size 113 with $K=7$. The two-dimensional projections of the MED points are shown in Figure \ref{fig:FD}. We can see that the posterior distribution occupies very little space in the 3-dimensional hypercube, so a QMC sample would have been quite wasteful. On the other hand, it took only about 3.4 hours to generate the MED points. A typical application of MCMC for this problem would require about 10,000 samples, which would have taken about 43 hours to complete. Clearly, MED is advantageous in this particular example.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width = 0.3\textwidth]{FD12.pdf} &
\includegraphics[width = 0.3\textwidth]{FD13.pdf} &
\includegraphics[width = 0.3\textwidth]{FD23.pdf}
\end{tabular}
\caption{Two-dimensional plots of the MED points in the friction drilling example.}
\label{fig:FD}
\end{center}
\end{figure}
The posterior means of the parameters were $(0.5,0.761,0.863)$ on the $[0,1]$ scale. In the original scale, they are $\hat{\eta}=0.75$, $\hat{\gamma}_1=0.88$, and $\hat{\gamma}_2=1.18$. The computer model prediction at $\hat{\eta}=0.75$, the measured data, and the prediction from the calibrated model are shown in Figure \ref{fig:calibration}. We can see that the calibration has helped in bringing the computer model prediction closer to the actual measured data. There is still some unaccounted discrepancy. If this is considered important, we may add a GP term to capture the remaining discrepancy.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 0.45\textwidth]{calibration.pdf}
\caption{Predicted FEM output at $\eta=0.75$, measured data from the physical experiment, and the calibrated model at the posterior means of the parameters.}
\label{fig:calibration}
\end{center}
\end{figure}
\section{CONCLUSIONS}
This article proposed an efficient algorithm to generate a minimum energy design. The main novelty is in generating the design with as few evaluations as possible of the probability density, which can be advantageous when dealing with computationally expensive posterior distributions in Bayesian applications. For a 30-dimensional distribution, the new algorithm required 1500 times fewer evaluations compared to the global optimization-based algorithm used in \citet{Joseph2015a}. This substantial reduction in the number of evaluations makes the new algorithm useful in real applications and competitive with the existing MCMC and QMC methods. The algorithm can work as a black box and the user needs to specify only two inputs: the log-unnormalized posterior and a hypercube where the density is expected to lie. Thus it can be used easily by non-statisticians and applied to a wide variety of Bayesian problems. We have made sensible choices for the various parameters involved in the algorithm, which seem to work well in the problems we have tested so far. Clearly, much more needs to be done to understand the convergence of the algorithm and optimal choices of the parameters, but we leave this as a topic for future research.
It is important to understand when to use a deterministic sampling method such as MED as opposed to MCMC in a real Bayesian application. It is our belief that practitioners will continue to use MCMC because of its flexibility and ease of implementation, as long as the likelihood and prior are cheap to evaluate. But as the cost of evaluating the unnormalized posterior increases, the deterministic sampling methods become more relevant. On the other hand, if the cost of evaluation is very high, then even deterministic sampling can become unaffordable and one may need to rely on more efficient function approximation techniques. The decision to transition from MCMC to MED to function approximation depends on the problem at hand, the available computing resources, the time constraints, and the accuracy needed. We admit that the accuracy of the deterministic samples or function approximation can be questionable for complex distributions and in high dimensions. The accuracy produced by them may be enough for the problem at hand, particularly at the initial stages of model building. But if high accuracy is warranted, then one may need to follow up such a method with MCMC (see Figure \ref{fig:bananax1x2}). On the other hand, the deterministic samples can be used to make the MCMC methods more efficient or can act as an experimental design for doing function approximation. Thus, in summary, MED is not meant to replace MCMC or other advanced function approximation techniques, but it can play a major role in solving a Bayesian problem when the computations are expensive.
\begin{center}
{\Large\bf SUPPLEMENTARY MATERIALS}
\end{center}
\noindent The proof of Theorem 1 is given in this file.\\
\begin{center}
{\Large\bf ACKNOWLEDGMENTS}
\end{center}
This research is supported by a U.S. National Science Foundation grant DMS-1712642.
\bibliographystyle{ECA_jasa}
\section{Introduction}
We consider a Brownian motion $(W_{t})_{t \in [0,1]}$ on the probability space $(\Omega, \mathcal{F}, P)$, where the $\sigma$-field $\mathcal{F}$ is generated by the Brownian motion and completed by null sets. Under the assumption that the Brownian motion is evaluated on an equidistant time grid, i.e.\ that the information $W_{1/n}, W_{2/n},\ldots, W_1$ is available, the investigation of optimal approximation with respect to the mean squared error for It\=o stochastic differential equations (SDEs) goes back to \cite{CC}, where the authors determined the optimal approximation scheme for a class of SDEs with additive noise. For SDEs interpreted in the classical It\=o sense there is a rich literature on numerical results, approximation algorithms and error analysis. We mention the monographs \cite{Kloeden_Platen, GrahamTalay}. Many upper bounds for the mean squared error of approximation schemes can be found in \cite{Kloeden_Platen}. We also refer to the survey \cite{MGR} and the comprehensive study of the pointwise optimal approximation of It\=o SDEs given in \cite{Mueller_Gronbach}.
Much less is known about numerical schemes for anticipating stochastic differential equations. Some upper bounds on the mean squared error are known from the investigation of Euler schemes for a class of Skorohod SDEs, see e.g.\ \cite{TorresTudor} and \cite{Shevchenko}. An Euler scheme for an anticipating SDE, where the integral is interpreted in the anticipating Stratonovich sense, is analysed in \cite{AnhKHiga}. A Wong-Zakai result for anticipating Stratonovich SDEs is given in \cite{CoutinFrizVictoir} via rough path theory. None of these studies deal with optimal approximation or lower bounds on the error.
To the best of our knowledge, this is the first study of optimal approximation and optimal approximation schemes for anticipating SDEs, where the integral is interpreted in Skorohod sense, see Section \ref{section_SkorohodSDE}.
There are very few existence results on Skorohod SDEs. However, all the difficulties arise already for linear Skorohod SDEs with a nonadapted initial value.
Our findings are as follows: Let $a, \sigma, f \in C^1([0,1];\RR)$, the Wiener integral $I(f) = \int_{0}^{1} f(s)dW_s$ and some sufficiently smooth function $F:\RR \rightarrow \RR$ (cf. Definition \ref{def_WA}). Then the unique solution $(X_t)_{t \in [0,1]}$ of the Skorohod SDE
\begin{align}\label{eq_SDE}
dX_t = a(t) X_t dt + \sigma(t) X_t dW_t, \quad X_0 = F(I(f))
\end{align}
exists in $L^2(\Omega \times [0,1])$ and our first main result on optimal approximation yields that
\begin{align}\label{eq_IntroOptResult}
\lim_{n\rightarrow \infty} n\, \left(\ex\left[\left(X_1 - \ex[X_1|W_{1/n}, W_{2/n},\ldots, W_1]\right)^2\right]\right)^{1/2} = C,
\end{align}
for some specific constant $C$ (Theorem \ref{thm_OptSDELin}).
The solution of the SDE \eqref{eq_SDE} has a simple representation in terms of a convolution operator, the Wick product, a basic tool in stochastic analysis, see e.g. \cite{Buckdahn_Nualart, Holden_Buch} and Section \ref{section_SkorohodSDE}, as
\begin{equation}\label{Intro_SDESol}
X_t = X_0 \diamond \exp\left(\int_{0}^{t} \sigma(s) dW_s + \int_{0}^{t} (a(s) - \sigma^2(s)/2)ds\right).
\end{equation}
Using this representation, the constant in \eqref{eq_IntroOptResult} exhibits a short and simple form as
\begin{equation}\label{eq_IntroC}
C:=\frac{1}{\sqrt{12}} e^{\int_{0}^{1} a(s) ds}\left(\int_{0}^{1} \ex\left[\left(\left(f'(s) F'(I(f)) + \sigma'(s) F(I(f))\right)\diamond e^{I(\sigma)-\|\sigma\|^2/2}\right)^2\right] ds\right)^{1/2}.
\end{equation}
The above optimal approximation result extends to more general nonadapted initial values
$$
X_0 = F(I(f_1), \ldots, I(f_K))
$$
for $f_1, \ldots, f_K \in C^1([0,1];\RR)$ and sufficiently smooth $F:\RR^K \rightarrow \RR$ (Theorem \ref{thm_OptSDELinMulti}). The convergence in \eqref{eq_IntroOptResult} then remains true with the term $f'(s) F'(I(f))$ in \eqref{eq_IntroC} replaced by
$$
\sum_{k=1}^{K}f'_k(s) \dfrac{\partial}{\partial x_k}F(I(f_1), \ldots, I(f_K)).
$$
As an example, one can take
\begin{equation}\label{eq_X0LinMulit}
X_0 = \cos\left(\int_{0}^{1}e^s\, dW_s\right) + \left(\int_{0}^{1}\sin(s)\, W_s \, ds\right)\exp\left(\int_{0}^{1}W_s\, ds\right).
\end{equation}
In contrast to It\=o SDEs, for Skorohod integrals and anticipating SDEs we have no martingale or Markov tools at our disposal; in particular, we have to cope with the lack of the It\=o isometry and of well-known bounds such as martingale inequalities (e.g. the BDG inequality). Therefore we make use of subtle computations of the Wiener chaos expansion of all objects involved. As already observed in the optimal approximation of Skorohod integrals in \cite{NP, P17a}, this necessary alternative approach leads to results that are natural generalizations of the corresponding results for It\=o SDEs. Notice that \eqref{eq_IntroOptResult} and \eqref{eq_IntroC} extend the corresponding results for linear It\=o SDEs in \cite[Theorem 1]{Mueller_Gronbach}.
We will also illustrate that strong approximation schemes for It\=o SDEs from \cite{Kloeden_Platen} are insufficient for Skorohod SDEs \eqref{eq_SDE}. To ensure the convergence of the scheme a further correction is needed to take into account the convolution with the initial value in \eqref{Intro_SDESol}. Extending numerical schemes for It\=o SDEs, this correction is included in our approximation scheme.
As an optimal approximation scheme for the solution of \eqref{eq_SDE}, we present a truncated Wagner-Platen type scheme, inspired by the corresponding scheme for It\=o SDEs in \cite{Kloeden_Platen, Mueller_Gronbach, Przybylowicz}. Firstly, we need a strong approximation $x^n_0$ of the initial value. This is done by truncated Wiener chaos expansions and sufficiently accurate approximations of Wiener integrals. As shorthand notation, for every $g \in C([0,1];\RR)$ and $k=0,\ldots, n-1$, let
$$
g_k := g(k/n), \ \Delta := 1/n, \ \Delta_k W := (W_{(k+1)\Delta} - W_{k\Delta}).
$$
A Wagner-Platen type scheme $\widetilde{X}$ for the SDE \eqref{eq_SDE} is then given by the recursion
\begin{align*}
x^n_{k+1} &= x^n_k + a_k x^n_k \Delta + \sigma_k x^n_k \diamond \Delta_k W + \sigma_k^2 x^n_k \diamond (1/2)(\Delta_k W)^{\diamond 2}+ \sigma_k^3 x^n_k \diamond (1/6)(\Delta_k W)^{\diamond 3}\\
&\quad + \left(\sigma'_k + a_k\sigma_k\right) x^n_k\diamond \Delta_k W \Delta + \left(a'_k + a_k^2\right) x^n_k (1/2)\Delta^2
\end{align*}
and then
$\widetilde{X}^n_0 := x^n_0$ and
$$
\widetilde{X}^n_k := x^n_k - \frac{1}{2}\sum_{l=0}^{k-1} \sigma'_l \Delta_l W \Delta \diamond x^n_l\diamond \prod_{i=l+1}^{n-1}\left(1+ a_i \Delta + \sigma_i\Delta_i W\right) , \quad k=1,\ldots, n-1.
$$
This equals the Wagner-Platen scheme in \cite{Mueller_Gronbach, Kloeden_Platen}, where some ordinary products are replaced by convolutions due to the non-adaptedness of the initial value $x^n_0$.
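In the special case of a deterministic initial value, the Wick products with the increments coincide with ordinary products, and the product $\prod_i\left(1+ a_i \Delta + \sigma_i\Delta_i W\right)$ appearing above is the classical Euler scheme. A quick numerical sanity check with constant (illustrative) coefficients against the exact solution $X_1 = x_0\exp(\sigma W_1 + a - \sigma^2/2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sig, x0 = 0.5, 0.3, 1.0

def euler_rmse(n, n_paths=2000):
    """RMSE at t = 1 of x0 * prod_k (1 + a*dt + sig*dW_k) vs. the exact
    solution x0 * exp(sig*W_1 + (a - sig^2/2))."""
    dt = 1.0 / n
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
    exact = x0 * np.exp(sig * dW.sum(axis=1) + (a - 0.5 * sig**2))
    euler = x0 * np.prod(1.0 + a * dt + sig * dW, axis=1)
    return float(np.sqrt(np.mean((exact - euler) ** 2)))

err_coarse, err_fine = euler_rmse(32), euler_rmse(512)  # error shrinks with n
```

For a nonadapted initial value this no longer suffices: the convolution with $x^n_0$ and the $\sigma'_l$-correction term in the scheme above become essential.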
In Theorem \ref{thm_LinSDEApproxWickWP} we prove that \eqref{eq_IntroOptResult} is also true with the optimal approximation replaced by $\widetilde{X}^n_n$.
The main tools for our considerations are results on functionals of Wiener integrals, their optimal approximations, and accurate strong approximations. This leads to surprisingly nice compatibility relations for the approximations of all random elements (Section \ref{section_OptWienerChaos}).\\
The paper is organized as follows: In Section \ref{section_SkorohodSDE} we introduce the Skorohod integral and present the representations of functionals and solutions of the corresponding anticipating SDEs. Section \ref{section_OptApproxResult} is devoted to the optimal approximation results for linear Skorohod SDEs. In Section \ref{section_ApproxSchemes} we present an asymptotically optimal approximation scheme and illustrate the necessity of new approximation schemes for Skorohod SDEs due to the convolution with the initial value in \eqref{Intro_SDESol}. Section \ref{section_OptWienerChaos} contains the proofs of the compatibility of these optimal and strong approximations with a sufficiently large class of functionals applied on Wiener integrals. In Section \ref{section_Proofs} we give the proofs of the main results on optimal approximation and the optimal approximation scheme. Finally, in Section \ref{section_FurtherConvRates}, we present some further convergence rate results, consequences of our findings and a conjecture on quasilinear Skorohod SDEs.
\section{Skorohod stochastic differential equations}\label{section_SkorohodSDE}
For a possibly nonadapted process $(u_s)_{s \in [0,1]}$, the Skorohod integral $\int_{0}^{1} u_s dW_s$ can be defined as a natural extension of the It\=o integral, see e.g. \cite{Nualart, DiNunno, Holden_Buch, Pardoux}.
There are many introductions to the Skorohod integral and anticipating SDEs. An essential tool in our approach is the representation of processes via the Wiener chaos decomposition and the characterization via the S-transform. Aiming at the optimal approximation, we collect some basic properties and representations and define a class of random variables serving as initial values of the Skorohod SDEs.
We denote the norm and inner product on $L^2 := L^2([0,1];\RR)$ by $\|\cdot \|$ and $\langle \cdot, \cdot\rangle$.
We recall the Wiener integral $I(f)$ of a function $f\in L^2$, which is the continuous linear extension of the mapping $\eins_{(0,t]}\mapsto W_t$ from
$L^2$ to $L^2(\Omega) := L^2(\Omega,\mathcal{F},P)$. The stochastic calculus on $L^2(\Omega)$ is based on the Gaussian Hilbert space $\{I(f) : f \in L^2\} \subset L^2(\Omega)$.
The \emph{Wick exponential}, i.e. the stochastic exponential of a Wiener integral $I(f)$, is defined by
\begin{equation*}
\exp^{\diamond}(I(f)) := \exp\left(I(f) - \|f\|^2/2\right).
\end{equation*}
These exponentials exhibit the following renormalization properties \cite[Corollaries 3.37--3.38]{Janson}. For all $f,g \in L^2$, $p>0$, we have:
$$\ex[\exp^{\diamond}(I(f)) \exp^{\diamond}(I(g))] = \exp(\langle f,g\rangle), \quad \ex[(\exp^{\diamond}(I(f)))^p] = \exp\left(p(p-1) \|f\|^2/2\right).
$$
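As a one-dimensional sanity check of the first identity: assuming $f = a e$ and $g = b e$ for some unit-norm $e \in L^2$, so that $I(f) = aZ$ and $I(g) = bZ$ with $Z$ standard normal, the expectation reduces to a Gaussian integral. A minimal symbolic sketch (the variable names are ours):

```python
import sympy as sp

z, a, b = sp.symbols('z a b', real=True)
phi = sp.exp(-z**2 / 2) / sp.sqrt(2 * sp.pi)   # standard normal density of Z

def wick_exp(c):
    # exp^{diamond}(I(c*e)) = exp(c*Z - c**2/2), since ||c*e||^2 = c**2
    return sp.exp(c * z - c**2 / 2)

# E[exp^{diamond}(I(f)) * exp^{diamond}(I(g))] with <f, g> = a*b
expectation = sp.integrate(wick_exp(a) * wick_exp(b) * phi, (z, -sp.oo, sp.oo))
```

Here `expectation` simplifies to $e^{ab} = e^{\langle f,g \rangle}$, in accordance with the first renormalization identity.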
The \emph{S-transform} of $X$ at $f$ is defined as
\[
(SX)(f) := \ex[X \exp^{\diamond}(I(f))].
\]
Then, due to the totality of
\begin{equation*}
\{\exp^{\diamond}(I(f)) : f \in L^2\}
\end{equation*}
in $L^p(\Omega,\mathcal{F},P)$, $p>0$, (\cite[Corollary 3.40]{Janson}), $(S \cdot)(\cdot)$ is a continuous and injective function on $L^2(\Omega, \mathcal{F}, P)$ (see e.g. \cite[Chapter 16]{Janson} for more details). As an example, for $g\in L^2$, we have
\begin{equation}\label{eq_SOfWickExp}
(S \ \exp^{\diamond}(I(g)) )(f) = \exp\left(\langle g, f\rangle\right), \quad (S I(g) )(f) = \langle g, f\rangle.
\end{equation}
In particular, the characterization of random variables via the S-transform can be used to introduce the Skorohod integral (cf. e.g. \cite[Section 16.4]{Janson}):
\begin{definition}\label{def_SkorohodIntegral}
Suppose $u=(u_s)_{s \in [0,1]}$ is a (possibly nonadapted) square integrable process on $(\Omega, \mathcal{F}, P)$ and $Y \in L^2(\Omega, \mathcal{F}, P)$ is such that
\[
\forall g \in L^2: \ (S Y)(g) = \int_{0}^{1} (S u_s)(g) g(s) ds,
\]
Then $\int_{0}^{1}u_s dW_{s} := Y$ defines the Skorohod integral of $u$ with respect to the Brownian motion $W$.
\end{definition}
For more information on the Skorohod integral we refer to \cite{Janson}, \cite{Nualart} or \cite{Kuo}. Skorohod integrals arise in several applications, e.g. computations of derivative-free option price sensitivities \cite{Fournie} or market models with insider trading \cite[Chapter 6]{Nualart}.
\begin{remark}
In contrast to classical approaches to the Skorohod integral (via multiple Wiener integrals or as the adjoint of the Malliavin derivative), the above definition by S-transform extends immediately to integrands in $L^p(\Omega \times [0,1])$, $p>1$. We refer to \cite{AyedKuo,P18} on further recent approaches to the Skorohod integral beyond square-integrable integrands and to \cite{BP3} for a characterization of the Skorohod integral via discrete counterparts.
\end{remark}
We recall that for the Hermite polynomials
\begin{equation*}
h^{k}_{\alpha}(x) = (-\alpha)^{k} \exp\left(\frac{x^2}{2\alpha}\right) \frac{d^k}{dx^k} \exp\left(\frac{- x^2}{2\alpha}\right) ,\ k \in \NN_{0}, \ \alpha >0
\end{equation*}
the $k$-th Wiener chaos $H^{:k:}$ is the $L^2$-completion of
$\{h^k_{\| f \|^2}(I(f)) : f \in L^2\}$ in $L^2(\Omega)$ and these subspaces are orthogonal and fulfill $L^2(\Omega, \mathcal{F}, P) = \bigoplus_{k \geq 0} H^{:k:}$ (see e.g. \cite[Theorem 2.6]{Janson}). Thus, for the projections
\[
\pi_k: L^2(\Omega) \rightarrow H^{:k:},
\]
and every random variable $X \in L^2(\Omega)$, we denote the Wiener chaos decomposition and its truncation by
\[
X=\sum_{k=0}^{\infty} \pi_k(X), \quad X^{(n)}=\sum_{k=0}^{n} \pi_k(X).
\]
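For small $k$ the defining formula for $h^k_{\alpha}$ above can be checked symbolically against the explicit polynomials $h^0_{\alpha}=1$, $h^1_{\alpha}(x)=x$, $h^2_{\alpha}(x)=x^2-\alpha$, $h^3_{\alpha}(x)=x^3-3\alpha x$. A minimal sketch (the helper name `hermite` is ours):

```python
import sympy as sp

x = sp.symbols('x', real=True)
alpha = sp.symbols('alpha', positive=True)

def hermite(k):
    # h^k_alpha(x) = (-alpha)^k * exp(x^2/(2*alpha)) * d^k/dx^k exp(-x^2/(2*alpha))
    expr = (-alpha)**k * sp.exp(x**2 / (2 * alpha)) \
        * sp.diff(sp.exp(-x**2 / (2 * alpha)), x, k)
    # powsimp cancels the exponential factors, expand yields the polynomial
    return sp.expand(sp.powsimp(expr))
```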
We refer to \cite{Janson, Holden_Buch, Nualart} for further details and a reformulation in terms of multiple Wiener integrals. We recall that a process $u \in L^2(\Omega \times [0,1])$ is Skorohod integrable if and only if
\[
\sum_{k=0}^{\infty} (k+1) \|\pi_k(u)\|^2_{L^2(\Omega \times [0,1])} < \infty,
\]
(cf. \cite[Theorem 7.39]{Janson}). The S-transform is closely related to a convolution that imitates the product of uncorrelated random variables, in the sense that $\ex[X \diamond Y]=\ex[X]\,\ex[Y]$; this convolution is implicitly contained in the Skorohod integral and is a fundamental tool in stochastic analysis. Due to the injectivity of the S-transform, the \emph{Wick product} can be introduced via
\begin{equation}\label{eq_WickByS}
\forall g \in L^2 \ : \ S(X \diamond Y)(g) = (S X)(g) (S Y)(g)
\end{equation}
on a dense subset of $L^{2}(\Omega) \times L^{2}(\Omega)$. For example, $e^{\diamond I(g)} \diamond e^{\diamond I(h)} = e^{\diamond I(g+h)}$. In particular, for a Wiener integral $I(f)$, Hermite polynomials play the role of monomials in standard calculus, as $
(I(f))^{\diamond k} = h^k_{\|f\|^2}(I(f))$, and the notation Wick exponential is well justified by $\exp^{\diamond}(I(f)) = \sum_{k=0}^{\infty} \frac{1}{k!} I(f)^{\diamond k}$. For more details on Wick exponentials we refer to \cite{Janson, Holden_Buch, Kuo}. The close connection between the Skorohod integral and Wick calculus is not only illustrated by the fact that both can be introduced via the S-transform. As an example we note the following Fubini-type result on the compatibility of Skorohod integrals, pathwise integrals and Wick products, which will be useful later. For a proof we refer to \cite[Chapter 16]{Janson} or \cite[Proposition 7]{NP}:
\begin{proposition}\label{prop_SkorohodAndWick}
Suppose $X \in L^2(\Omega)$ and let $u\in L^2(\Omega \times [0,1])$ be a Skorohod integrable process. Then, provided both sides exist in $L^2(\Omega)$:
\[
\int_{0}^{1} X \diamond u_s dW_s = X \diamond \int_{0}^{1} u_s dW_s, \qquad \int_{0}^{1} X \diamond u_s ds = X \diamond \int_{0}^{1} u_s ds.
\]
\end{proposition}
An extension of Wick exponentials to a larger class of infinite chaos random variables is the following:
\begin{definition}[$\WA$]\label{def_WA}
Following \cite{Buckdahn_Nualart}, we define the class of Wick-analytic functionals $\WA$ as the set of random variables of the form
\begin{equation*}
F(I(f_1), \ldots, I(f_K)) = \left(\sum\limits_{k=0}^{\infty}a_{1,k} I(f_1)^{\diamond k}\right) \diamond \cdots \diamond \left(\sum\limits_{k=0}^{\infty} a_{K,k} I(f_K)^{\diamond k}\right),
\end{equation*}
where $K \in \NN$, $f_1, \ldots, f_K \in L^2$ and $\sup_{i\leq K, k\geq 1}\{\sqrt[k]{k! |a_{i,k}|}\} < \infty$. We denote the linear span by $\lin(\WA)$.
\end{definition}
An example in $\WA$, involving the Wiener integral
$\int_{0}^{1}W_s \, ds = \int_{0}^{1}(1-s)dW_s$, is (cf. \cite[p. 107]{Holden_Buch}):
$$
\sin\left(\int_{0}^{1}W_s \, ds\right) = \sin^{\diamond}\left(\int_{0}^{1}W_s \, ds\right)e^{-\int_{0}^{1}(1-s)^2\,ds/2} = \sum_{k=1}^{\infty} \frac{(-1)^{k-1}e^{-1/6}}{(2k-1)!}\left(\int_{0}^{1}W_s \, ds\right)^{\diamond (2k-1)}.
$$
Similarly, an example in the linear span $\lin (\WA)$ is \eqref{eq_X0LinMulit}. Notice that $\lin (\WA) \subset L^2(\Omega)$ (see e.g. \cite[Proposition 9]{NP}).
\begin{remark}\label{rem_derivativeWA}
Let $F(I(f)) = \sum_{k=0}^{\infty} a_k I(f)^{\diamond k}$ be a Wick-analytic functional and set $C:= \sup_{k\geq 1} \{\sqrt[k]{k! |a_{k}|}\}$. Thanks to the derivative rule for Hermite polynomials $\dfrac{d}{dx}\, h^k_{\|f\|^2}(x) = k\, h^{k-1}_{\|f\|^2}(x)$, i.e. $\dfrac{\partial}{\partial x} I(f)^{\diamond k} = k\, I(f)^{\diamond (k-1)}$, we have $F'(I(f)) = \sum_{k \geq 0} (k+1)a_{k+1} I(f)^{\diamond k}$ with $\sup_{k\geq 1} \{\sqrt[k]{(k+1)! |a_{k+1}|}\} \leq C^2$ and therefore $F'(I(f)) \in \WA$.
\end{remark}
An iteration of the conclusion in Remark \ref{rem_derivativeWA} (cf. \cite[Proposition 10]{NP}) yields:
\begin{proposition}\label{prop_DerivativesWA}
All derivatives of elements in $\lin (\WA)$ are in $\lin (\WA)$ as well.
\end{proposition}
One can identify $\lin (\WA)$ as a class of smooth random variables in Malliavin calculus (cf. \cite[p. 25]{Nualart}). However, for optimal approximation the Wick-analytic representation is more appropriate, see Section \ref{section_OptWienerChaos}.
The standard tools for proving existence of solutions to SDEs, such as Picard iteration, cannot be applied to Skorohod SDEs, as the $L^2$-norms of Skorohod integrals involve integrals of Malliavin derivatives of the integrands, which leads to non-closed iterations (cf. the Skorohod It\=o formulas in \cite[Sec. 3.2.3]{Nualart}).
In this work we are mainly interested in the approximation of linear Skorohod SDEs:
\begin{theorem}\label{thm_LinSkorohodSDE}
Suppose $X_0 \in L^p(\Omega)$, $p>2$, $a,\sigma \in L^2$. Then there is a unique stochastic process $X \in L^2(\Omega \times [0,1])$ which solves the linear Skorohod SDE
\begin{equation*}
X_t = X_0 + \int_{0}^{t}a(s) X_s ds + \int_{0}^{t}\sigma(s) X_s dW_s, \ t \in [0,1].
\end{equation*}
The solution is given by (see e.g. \cite[Theorem 2.1]{Buckdahn_Nualart}):
\begin{equation}\label{eq_LinSkorohodSDESol}
X_t = X_0 \diamond \exp\left(\int_{0}^{t} a(s) ds\right) \exp^{\diamond}\left(\int_{0}^{t} \sigma(s) dW_s\right).
\end{equation}
\end{theorem}
\begin{remark}
(i) \ Thanks to the S-transform we can easily prove that the solution is given by \eqref{eq_LinSkorohodSDESol}: For every $f \in L^2$, linearity gives the linear integral equation
\begin{align*}
(S X_t)(f) &=(S X_0)(f) + \int_{0}^{t}a(s) (S X_s)(f) ds + \int_{0}^{t}\sigma(s) (S X_s)(f) f(s)ds
\end{align*}
and therefore
$$
(S X_t)(f) = (S X_0)(f) e^{\int_{0}^{t} a(s) ds} e^{\int_{0}^{t} \sigma(s) f(s) ds}.
$$
Thanks to \eqref{eq_SOfWickExp}, \eqref{eq_WickByS} and Definition \ref{def_SkorohodIntegral}, we conclude that \eqref{eq_LinSkorohodSDESol} is the unique solution, as asserted. For further information on linear Skorohod SDEs we refer to \cite{Buckdahn_Nualart}.
(ii) \ Due to the simple right hand side in \eqref{eq_LinSkorohodSDESol}, the convolution can be reformulated in terms of a Girsanov transform, see e.g. \cite{Buckdahn, Buckdahn_Nualart}. Concerning approximation results below, the representation in terms of Wick products as in Definition \ref{def_WA} is more helpful.
\end{remark}
\section{The optimal approximation results}\label{section_OptApproxResult}
In the following optimal approximation results on Skorohod SDEs we restrict ourselves to the time horizon $[0,1]$. The extension of the results and the constants involved to $[0,T]$ for some $T>0$ is straightforward. The proofs of the main Theorems \ref{thm_OptSDELin} and \ref{thm_OptSDELinMulti} are postponed to Section \ref{section_Proofs}.
We consider the simple case of the information of $W$ given by
\[
W_{1/n}, W_{2/n}, \ldots, W_1,
\]
and, given a random variable $X = F(W) \in L^2(\Omega)$, we are interested in the approximation $\widehat{X}^n \in L^2(\Omega,\sigma(W_{1/n}, W_{2/n}, \ldots, W_1),P)$, that minimizes the mean squared error
\[
\ex[(X-\widehat{X}^n)^2]^{1/2}.
\]
This is clearly given by
\[
\widehat{X}^n := \ex[X|W_{1/n}, W_{2/n}, \ldots, W_1].
\]
\begin{remark}
In the following we mean by $f' \in BV$ that $f \in L^2$ is differentiable and $f': [0,1] \rightarrow \RR$ is of bounded variation.
\end{remark}
Our main result is the following:
\begin{theorem}\label{thm_OptSDELin}
Suppose $f', \sigma' \in BV$, $a: \RR \rightarrow \RR$ is integrable, $X_0 = F(I(f)) \in \WA$. Then for the solution of the Skorohod SDE from Theorem \ref{thm_LinSkorohodSDE}, we have
$$
\lim_{n \rightarrow \infty} n \, \ex[(X_1-\widehat{X_1}^n)^2]^{1/2} = C,
$$
with the constant
\begin{equation*}
C := \frac{1}{\sqrt{12}} e^{\int_{0}^{1} a(s) ds}\left(\int_{0}^{1} \ex\left[\left(\left(f'(s) F'(I(f)) + \sigma'(s) F(I(f))\right)\diamond e^{\diamond I(\sigma)}\right)^2\right] ds\right)^{1/2}.
\end{equation*}
\end{theorem}
\begin{remark}\label{rem_Mehler}
We recall the Mehler transform (or second quantization operator)
\begin{equation*}
\Gamma(r)(X) := \sum_{k=0}^{\infty} r^k \pi_k(X),
\end{equation*}
and the H\"older inequality \cite[Proposition 4.3]{Hu_Yan},
\begin{equation}\label{eq_WickProductGammaEstimate}
\ex[(X \diamond Y)^2] \leq \ex[(\Gamma(\sqrt{2})X)^2]\ex[(\Gamma(\sqrt{2})Y)^2].
\end{equation}
In particular, \eqref{eq_WickProductGammaEstimate} gives a Wick-free upper bound for the constant in Theorem \ref{thm_OptSDELin}:
\begin{align*}
C^2 &\leq \frac{e^{2\int_{0}^{1} a(s) ds}}{12} \int_{0}^{1} \ex\left[\left(\Gamma(\sqrt{2})\left(f'(s) F'(I(f)) + \sigma'(s) F(I(f))\right)\right)^2\right] \ex\left[\left(\Gamma(\sqrt{2})e^{\diamond I(\sigma)}\right)^2\right] ds\\
&= \left\|f'\|F'(I(\sqrt{2}f))\|_{L^2(\Omega)} + \sigma'\|F(I(\sqrt{2}f))\|_{L^2(\Omega)}\right\|^2e^{2\int_{0}^{1} a(s) ds + 2\|\sigma\|^2}/12.
\end{align*}
By the assumption $F(I(f)) \in \WA$ and Proposition \ref{prop_DerivativesWA} this is clearly finite.
\end{remark}
A direct application of Theorem \ref{thm_OptSDELin} on a Wick exponential type initial value gives:
\begin{corollary}\label{cor_LinSDEWickExp}
Suppose $f', \sigma' \in BV$, $a$ is integrable and $X_0 = e^{\diamond I(f)}$. The solution of the linear Skorohod SDE is then given by
$$
X_t = e^{\diamond I(f)} \diamond e^{\diamond \int_{0}^{t} \sigma(s) dW_s} e^{\int_{0}^{t} a(s) ds} = e^{\diamond I(f + \eins_{[0,t)}\sigma)} e^{\int_{0}^{t} a(s) ds}
$$
and the terminal value $X_1$ satisfies the asymptotic optimal approximation
$$
\lim_{n \rightarrow \infty} n \, \ex[(X_1-\widehat{X_1}^n)^2]^{1/2} = \|f' + \sigma'\| \exp\left(\|f+\sigma\|^2/2 + \int_{0}^{1} a(s) ds\right)/\sqrt{12}.
$$
In this case, $X_1$ is the solution of a linear It\=o SDE and the constant above coincides with the optimal approximation results in \cite{Mueller_Gronbach}. Considering instead some $X_0 = e^{\diamond I(f)}$ with $f \in C^1([0,2];\RR)$ and $\mathrm{supp} (f) = [0,2]$ (i.e. with underlying space $L^2([0,2];\RR)$), this is no longer covered by \cite{Mueller_Gronbach}, as $X_{1}$ is now possibly nonadapted. The extension of Theorem \ref{thm_OptSDELin} to such extended time horizons and initial values is straightforward.
\end{corollary}
\begin{example}\label{example_SimpleLinSDE}
The solution of the Skorohod SDE
$$
dX_t = t^2 X_t dt + t(1-t) X_t dW_t, \quad X_0 = e^{\int_{0}^{1}W_s ds}
$$
satisfies the asymptotic optimal approximation
$$
\lim_{n \rightarrow \infty} n^2 \; \ex[(X_1-\widehat{X_1}^n)^2] = (e^{1/6}+ e^{1/2})e^{8/15}/12.
$$
Observe that the initial value $X_0$ cannot be simulated exactly.
\end{example}
Theorem \ref{thm_OptSDELin} allows multivariate extensions such as the following.
\begin{remark}\label{rem_Abbreviations}
For simplicity, throughout the article we use abbreviations such as
\begin{align*}
F &:= F(I(f_1), \ldots, I(f_K)), &F_{x_k} &:= \dfrac{\partial}{\partial x_k} F(I(f_1), \ldots, I(f_K)).
\end{align*}
\end{remark}
\begin{theorem}\label{thm_OptSDELinMulti}
Suppose $f'_1, \ldots, f'_K, \sigma' \in BV$, $a: \RR \rightarrow \RR$ is integrable and
$$
X_0 := F = F(I(f_1), \ldots, I(f_K))\in \lin (\WA).
$$
Then for the solution of the Skorohod SDE from Theorem \ref{thm_LinSkorohodSDE}, we have
$$
\lim_{n \rightarrow \infty} n \, \ex[(X_1-\widehat{X_1}^n)^2]^{1/2} = \frac{ e^{\int_{0}^{1} a(s) ds}}{\sqrt{12}}\left(\int_{0}^{1} \ex\left[\left(\left( \sum_{k=1}^{K}f'_k(s)F_{x_k} + \sigma'(s) F\right)\diamond e^{\diamond I(\sigma)}\right)^2\right] ds\right)^{1/2}.
$$
\end{theorem}
\section{Approximation schemes}\label{section_ApproxSchemes}
The lack of adaptedness and of many properties available for It\=o SDEs (martingale property, Markov property, isometry) makes the approximation of Skorohod integrals and Skorohod SDEs very difficult. We restrict ourselves to the approximation of the solution at the terminal time $X_1$ and deal with univariate initial values $X_0 = F(I(f))$
from Theorem \ref{thm_OptSDELin}.
We make use of the following notations for every $n \in \NN$, $k = 0, \ldots, n-1$, $g \in C([0,1])$,
$$
\Delta := 1/n, \qquad \Delta_k W := W_{(k+1)\Delta} - W_{k\Delta}, \quad g_k := g(k/n),
$$
and begin with some remarks on the insufficiency of ordinary strong approximation schemes.
\begin{remark}\label{rem_MilsteinInsufficient}
(i) \ The ordinary Milstein scheme for Skorohod SDEs does not even converge in the simplest cases, as illustrated by the SDE in Corollary \ref{cor_LinSDEWickExp}. Let $f \in C^1([0,1])$ with $\int_{0}^{1} f(s) ds \neq 0$. Then the Milstein scheme
\begin{equation}\label{eq_OrdinaryMilstein}
x^n_{k+1} = x^n_k\left(1 + \Delta_k W + \frac{1}{2}((\Delta_k W)^2 - \Delta)\right), \quad x^n_0 = e^{\diamond I(f)}, \ k=0,\ldots, n-1,
\end{equation}
fulfills by well-known arguments (see e.g. \cite[Theorem 10.3.5]{Kloeden_Platen}),
$$
x^n_n = e^{\diamond I(f)} \prod_{k=0}^{n-1}\left(1+\Delta_k W + \frac{1}{2}((\Delta_k W)^2 - \Delta)\right) \stackrel{L^2(\Omega)}{\rightarrow} e^{\diamond I(f)} e^{\diamond W_1} = e^{\diamond (I(f) + W_1)} e^{\int_{0}^{1} f(s) ds}
$$
as $n$ tends to infinity. (This is also a direct consequence of Theorem \ref{thm_OptWickFuncMult} (i) in Section \ref{section_OptWienerChaos}.) But the exact solution of the linear Skorohod SDE
\begin{equation*}
dX_t = X_t dW_t, \quad X_0 = e^{\diamond I(f)}
\end{equation*}
is given by $X_1 = e^{\diamond I(f)} \diamond e^{\diamond W_1}= e^{\diamond (I(f) + W_1)}$. The insufficiency of other ordinary schemes (cf. e.g. \cite{Kloeden_Platen}) for Skorohod SDEs follows similarly.
(ii) \ The situation is not remedied by an approximation of the initial value alone. In particular, we cannot assume that the initial value $X_0$ can be simulated exactly, see Example \ref{example_SimpleLinSDE}. By Corollary \ref{cor_WickCarriesOverOpt} in the next section and the formula for conditional Gaussian random variables
\begin{align}\label{eq_InfAsConditional}
\widehat{I(f)}^n_t &= \ex[I(f)_t| W_{1/n}, \ldots, W_1]\nonumber\\
&= \sum_{k=0}^{\lfloor n t\rfloor-1} \left(\frac{1}{\Delta} \int_{k\Delta}^{(k+1)\Delta} f(s) ds\right)\Delta_k W+ \left(\frac{1}{\Delta} \int_{\lfloor n t\rfloor/n}^{t} f(s) ds\right)\Delta_{\lfloor n t\rfloor} W,
\end{align}
we have
$$
\widehat{X_0}^n := \widehat{e^{\diamond I(f)}}^n = \prod_{k=0}^{n-1} e^{\diamond \left(\frac{1}{\Delta}\int_{k\Delta}^{(k+1)\Delta} f(s) ds\right) \Delta_kW}.
$$
Hence, the Milstein scheme \eqref{eq_OrdinaryMilstein} with this new initial value is given by
$$
x^n_n = \prod_{k=0}^{n-1}e^{\diamond \left(\frac{1}{\Delta}\int_{k\Delta}^{(k+1)\Delta} f(s) ds\right) \Delta_k W}\left(1+\Delta_k W + \frac{1}{2}(\Delta_k W^2 - \Delta)\right).
$$
One can easily check by the Wiener chaos decomposition that it satisfies
$$
\lim_{n \rightarrow \infty}\ex[x^n_n] = \exp\left(\int_{0}^{1} f(s) ds\right).
$$
This is also a consequence of Theorem \ref{thm_OptWickFuncMult} (i). In general this is not equal to $1=\ex[X_1]$. These examples already illustrate that the approximation of the solution of a linear Skorohod SDE has to deal with the approximation of the convolution (Wick product):
\begin{equation}\label{eq_SimpleSolAsWickProd}
X_t = X_0 \diamond e^{\diamond W_t} = e^{\diamond (I(f) + W_t)}.
\end{equation}
\end{remark}
The computation of the convolution in \eqref{eq_SimpleSolAsWickProd} (or, in general, in \eqref{eq_LinSkorohodSDESol}) amounts to a convolution correction built directly into the scheme. In particular, the example in Remark \ref{rem_MilsteinInsufficient} motivates the use of Wick products, appropriate to Skorohod integrals, in the approximation schemes.
Firstly, we need a compatible strong approximation of the initial value: As we are interested in a $\sigma(W_{1/n}, \ldots, W_1)$-measurable approximation, the algorithm begins with the strong optimal approximation of the Wiener integral $I(f)$:
\begin{algorithm}[Algorithm Initial value]\label{algo_WickWP1}
We suppose $f \in C^1([0,1])$ and $X_0 = F(I(f))$.
\begin{enumerate}
\item Let $n \in \NN$ be the discretization level. Set $\Delta := 1/n$ and simulate
$$
(\Delta_0 W, \ldots, \Delta_{n-1} W) \sim \mathcal{N}_{0,\Delta}^{\otimes n}.
$$
Set $f_k, f'_k$ via $g_k := g(k/n)$ for $g \in C([0,1];\RR)$ and all $k=0,\ldots, n-1$.
\item Define
$$
I^n(f) := \sum_{k=0}^{n-1} (f_k + f'_k\Delta/2) \Delta_k W, \quad \|f\|^2_n := \Delta \sum_{k=0}^{n-1}
(f_k + f'_k \Delta/2)^2, \ (I^n(f))^{\diamond 0} := 1.
$$
\item Compute the Wick powers up to $m=n-1$ via the Hermite recursion formula
\begin{equation}\label{eq_AlgoHermiteRecursion}
\left(I^n(f)\right)^{\diamond (m+1)} = I^n(f) \left(I^n(f)\right)^{\diamond m} - m\|f\|^2_n \left(I^n(f)\right)^{\diamond (m-1)}.
\end{equation}
\item Let $F(I(f)) = \sum_{m=0}^{\infty} c_m I(f)^{\diamond m}$ be the Wiener chaos expansion. For unknown coefficients, compute $c_m = \frac{1}{m! \, \|f\|^{2m}}\ex[F(I(f)) I(f)^{\diamond m}]$ or approximate them via $\frac{1}{m! \, \|f\|_n^{2m}}\ex[F(I^n(f)) I^n(f)^{\diamond m}]$ (e.g. via Monte Carlo).
\item Set the approximation of $X_0 = F(I(f))$ as
$$
x^n_0 = \widetilde{F}^{(n)}(I^n(f)) := \sum_{m=0}^{n} c_m \left(I^n(f)\right)^{\diamond m}.
$$
\end{enumerate}
\end{algorithm}
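The Hermite recursion in step 3 is a plain three-term recurrence and can be computed pathwise, since $(I^n(f))^{\diamond m} = h^m_{\|f\|_n^2}(I^n(f))$ for the Gaussian variable $I^n(f)$. A minimal sketch of this step (the function name and interface are ours, not part of the algorithm):

```python
def wick_powers(x, var, m_max):
    """Wick powers p[m] = x^{<> m} of a Gaussian variable with variance `var`,
    realized at the value x, via the Hermite recursion of step 3:
    p[m+1] = x * p[m] - m * var * p[m-1]."""
    p = [1.0, x]
    for m in range(1, m_max):
        p.append(x * p[m] - m * var * p[m - 1])
    return p[:m_max + 1]
```

For instance, with variance $1$ this reproduces the probabilists' Hermite polynomials: `wick_powers(2.0, 1.0, 3)` returns $[1, 2, 3, 2]$, matching $h^2_1(2) = 2^2 - 1$ and $h^3_1(2) = 2^3 - 3\cdot 2$.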
\begin{remark}\label{rem_WickFiniteChaos}
In fact (for exact coefficients in the Wiener chaos decomposition) $\widetilde{F}^{(n)}(I^n(f))$ equals the finite chaos projection
$$
F^{(n)}(I^n(f)) = \sum_{k=0}^{n} \pi_k(F(I^n(f))).
$$
Observe the following helpful formula, in which all Wick products are collected in the random part, the increments of the Brownian motion:
$$
\left(I^n(f)\right)^{\diamond m} = \sum_{(A_1,\ldots, A_m)\in \{0,\ldots, n-1\}^m} \left(\prod_{k =1}^{m} (f_{A_k}+f'_{A_k}\Delta/2)\right) \left(\diamond_{k =1,\ldots, m} (\Delta_{A_k} W)\right).
$$
The Wick products can easily be computed and implemented via generalized Hermite recursions, cf. \cite{Avram_Taqqu} or \cite[Lemma 2.1]{P14}. In particular, by independence and with $\alpha_i := \#\{k \leq m : A_k = i\}$,
\begin{align}\label{eq_WickProdOfBMIncrem}
\left(\diamond_{k =1,\ldots, m} (\Delta_{A_k} W)\right) = \prod_{i=0}^{n-1} h^{\alpha_i}_{\Delta}(\Delta_i W).
\end{align}
For $F(I(f)) \in \WA$, this approximation attains the optimal approximation as a consequence of Theorem \ref{thm_OptWickFunc} below:
$$
\lim_{n \rightarrow \infty} n\, \ex\left[\left(F(I(f)) - \widetilde{F}^{(n)}(I^n(f))\right)^2\right]^{1/2} = \frac{1}{\sqrt{12}}\|f'\| \|F'(I(f))\|_{L^2(\Omega)}.
$$
\end{remark}
\begin{example}
For $F(I(f)) = e^{\diamond I(f)}$, we have
\begin{equation*}
\widetilde{F}^{(n)}(I^n(f)) = \sum_{k=0}^{n} \frac{1}{k!} I^n(f)^{\diamond k} = \prod_{k=0}^{n-1}\left(1+(f_k + f'_k\Delta/2)\Delta_k W\right) + \ldots
\end{equation*}
and the product on the right hand side converges in $L^2(\Omega)$ to the random variable $e^{\diamond I(f)}$, see e.g. \cite{Kloeden_Platen, Mueller_Gronbach}. For the appropriate optimal approximation scheme for It\=o integrals we refer to \cite{Przybylowicz}.
\end{example}
A Wick-Euler scheme
$$
x^n_{k+1} = x^n_k + a_k x^n_k \Delta + \sigma_k x^n_k \diamond \Delta_k W, \quad k=0,\ldots, n-1,
$$
is not optimal for similar reasons as for It\=o SDEs (cf. \cite{Kloeden_Platen, Mueller_Gronbach}). This is also a direct consequence of the last proof in Section \ref{section_Proofs}. Extending schemes for It\=o integrals and SDEs, see e.g. \cite[Section 4]{Mueller_Gronbach}, we define a Wick version of the truncated Wagner-Platen scheme:
\begin{definition}[Wick-WP scheme]\label{def_WickWP}
Suppose the linear Skorohod SDE
$$
dX_t = a(t) X_t dt + \sigma(t) X_t dW_t,\quad X_0 \in \WA.
$$
Let $x^n_0$ be a strong approximation of $X_0$ via Algorithm \ref{algo_WickWP1}, set via recursion for all $k=0,\ldots, n-1$,
\begin{align*}
x^n_{k+1} &= x^n_k + a_k x^n_k \Delta + \sigma_k x^n_k \diamond \Delta_k W + \sigma_k^2 x^n_k \diamond (1/2)(\Delta_k W)^{\diamond 2}+ \sigma_k^3 x^n_k \diamond (1/6)(\Delta_k W)^{\diamond 3}\\
&\quad + \left(\sigma'_k + a_k\sigma_k\right) x^n_k\diamond \Delta_k W \Delta + \left(a'_k + a_k^2\right) x^n_k (1/2)\Delta^2.
\end{align*}
Then we set
$$
\widetilde{X}^n_k := x^n_k - \frac{1}{2}\sum_{l=0}^{k-1} \sigma'_l \Delta_l W \Delta \diamond x^n_l\diamond \prod_{i=l+1}^{n-1}\left(1+ a_i \Delta + \sigma_i\Delta_i W\right), \quad k=0,\ldots, n.
$$
\end{definition}
As the only nonadapted term is the initial value $x^n_0$, one can alternatively compute the approximation $\widetilde{X}^{n,Ito}_n$ of the stochastic exponential $e^{\diamond I(\sigma)}e^{\int_{0}^{1} a(s) ds}$ with deterministic initial value $1$ by the ordinary Wagner-Platen scheme (i.e. $x^n_{k+1}$ above with ordinary products, see e.g. \cite{Mueller_Gronbach}) and then compute or approximate the Wick product
\begin{equation}\label{eq_WickWPAsProduct}
x^n_0 \diamond \widetilde{X}^{n,Ito}_n.
\end{equation}
In fact, as shown in the proof in Section \ref{section_Proofs} (cf. \eqref{eq_WickWPShort}), our scheme is equivalent to this approach. In any case the Wiener chaos expansions of the finite chaos elements $x^n_0$ and $\widetilde{X}^{n,Ito}_n$ are needed. Therefore, the computational effort for the Wick product in \eqref{eq_WickWPAsProduct} is the same as in our Wick-WP scheme. This illustrates the minimal computations required to obtain an optimal approximation scheme for linear Skorohod SDEs. Because of the considerations above and the limit results below, we believe that there is no approximation scheme for Skorohod SDEs that avoids a careful approximation of the Wick products appearing in the Wick-WP scheme.
For an initial value $X_0$ which is independent of $W$, all Wick products in the Wick-WP scheme equal ordinary products. Hence, no computations of convolutions are needed and we obtain the scheme for It\=o SDEs from \cite{Mueller_Gronbach}.
We expect that, in analogy to Eq. (58)--(59) in \cite{Przybylowicz}, one can similarly construct a counterpart of the approximation scheme for Skorohod integrals. Here we restrict ourselves to the approximation of Skorohod SDEs.
As an extension of results for It\=o SDEs, the Wick-WP scheme suffices for the optimal approximation. The proof is contained in Section \ref{section_Proofs}:
\begin{theorem}\label{thm_LinSDEApproxWickWP}
Suppose $f, a, \sigma \in C^1([0,1])$, $F(I(f)) \in \WA$. Then the solution of the linear Skorohod SDE in Theorem \ref{thm_LinSkorohodSDE} and the Wick-WP scheme $\widetilde{X}^n$ fulfill
$$
\lim_{n \rightarrow \infty} n \; \ex[(X_1-\widetilde{X}^n_n)^2]^{1/2} = C
$$
with the optimal approximation constant
$$
C:= \frac{1}{\sqrt{12}} e^{\int_{0}^{1} a(s) ds}\left(\int_{0}^{1} \ex\left[\left(\left(f'(s) F'(I(f)) + \sigma'(s) F(I(f))\right)\diamond e^{\diamond I(\sigma)}\right)^2\right] ds\right)^{1/2}
$$
from Theorem \ref{thm_OptSDELin}.
\end{theorem}
Finally we collect the Wick-WP scheme as an algorithm:
\begin{algorithm}[Algorithm SDE]\label{algo_WickWP2}
Let $n \in \NN$ be the discretization level.
\begin{enumerate}
\item Apply Algorithm \ref{algo_WickWP1} and set $a_k, a'_k,\sigma_k,\sigma'_k$ via $g_k := g(k/n)$ for $g \in C([0,1];\RR)$.
\item Compute via (i) the Wick-WP scheme for all $k =0,\ldots, n-1$:
\begin{align*}
x^n_{k+1} &= x^n_k + a_k x^n_k \Delta + \sigma_k x^n_k \diamond \Delta_k W + \sigma_k^2 x^n_k \diamond (1/2)(\Delta_k W)^{\diamond 2}+ \sigma_k^3 x^n_k \diamond (1/6)(\Delta_k W)^{\diamond 3}\\
&\quad + \left(\sigma'_k + a_k\sigma_k\right) x^n_k\diamond \Delta_k W \Delta + \left(a'_k + a_k^2\right) x^n_k (1/2)\Delta^2.
\end{align*}
As all these objects are finite chaos elements, all Wick products involved can be reduced to Wick products of the $\Delta_k W$ (cf. \eqref{eq_WickProdOfBMIncrem} in Remark \ref{rem_WickFiniteChaos}).
\item Compute via $(i)-(ii)$ and Remark \ref{rem_WickFiniteChaos} for all $k =0,\ldots, n-1$:
$$
\widetilde{X}^n_k := x^n_k - \frac{1}{2}\sum_{l=0}^{k-1} \sigma'_l \Delta_l W \Delta \diamond x^n_l \diamond \prod_{i=l+1}^{n-1}\left(1+ a_i \Delta + \sigma_i\Delta_i W\right).
$$
\end{enumerate}
\end{algorithm}
\section{Approximation and Wiener chaos}\label{section_OptWienerChaos}
In this section we present general results on optimal approximation and the approximation via Algorithm \ref{algo_WickWP1} on the class $\lin (\WA)$ from Definition \ref{def_WA}.
The Wiener chaos expansion in terms of Wick-analytic functionals has the advantage that the optimal approximation carries over to functionals built from Wick products. In fact, we have (see \cite[Corollary 9.4]{Janson} or \cite[Lemma 6.20]{DiNunno}):
\begin{proposition}\label{prop_WickConditionalExpectation}
For $X,Y, X\diamond Y \in L^2(\Omega)$ and a sub-$\sigma$-field $\mathcal{G} \subseteq \mathcal{F}$:
$$
\ex[X\diamond Y|\mathcal{G}] = \ex[X|\mathcal{G}]\diamond \ex[Y|\mathcal{G}].
$$
\end{proposition}
\begin{corollary}\label{cor_WickCarriesOverOpt}
Suppose $F= F(I(f_1), I(f_2), \ldots, I(f_K)) \in \lin (\WA)$, then
$$
\widehat{F}^n = F\left(\widehat{I(f_1)}^{n}, \widehat{I(f_2)}^{n}, \ldots, \widehat{I(f_K)}^{n}\right).
$$
\end{corollary}
This yields that the computations on optimal approximation can be easily extended to strong approximation schemes of these functionals on the underlying Wiener integrals, see e.g. Theorem \ref{thm_OptWickFunc} below. The following results are the essence of our main theorems \ref{thm_OptSDELin} and \ref{thm_OptSDELinMulti}, as well as a key step for the optimal rate result on the Wick-WP scheme in Theorem \ref{thm_LinSDEApproxWickWP}.
We first consider finite Wiener chaos elements in Subsection \ref{subsection_finite}. This leads to a univariate functional approximation result in Subsection \ref{subsection_uni}, which is extended to multivariate functionals in Subsection \ref{subsection_multi}.
\subsection{Finite chaos}\label{subsection_finite}
We clearly have
$$
\widehat{W_t}^n = \ex[W_t|W_{1/n}, W_{2/n}, \ldots, W_1] = W_t^{\lin},
$$
where
$$
W_t^{\lin} := W_{i/n} + n(t-i/n)(W_{(i+1)/n}-W_{i/n}), \ t \in [i/n, (i+1)/n)
$$
is the linear interpolation of $W$ with respect to the equidistant time grid.
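Evaluating $W^{\lin}$ from the grid values is immediate; a minimal sketch (the function name `w_lin` is ours), with `w_grid` holding $W_0 = 0, W_{1/n}, \ldots, W_1$:

```python
import numpy as np

def w_lin(t, w_grid, n):
    """Linear interpolation of the Brownian grid values
    w_grid = [W_0, W_{1/n}, ..., W_1]; equals the conditional expectation
    E[W_t | W_{1/n}, ..., W_1] for t in [0, 1]."""
    i = min(int(np.floor(t * n)), n - 1)   # interval index, t in [i/n, (i+1)/n)
    return w_grid[i] + n * (t - i / n) * (w_grid[i + 1] - w_grid[i])
```

At the grid points $W^{\lin}$ coincides with $W$, and in between it interpolates linearly, as in the display above.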
We recall the strong approximation from Algorithm \ref{algo_WickWP1}, e.g. $I^n(f) := \sum_{k=0}^{n-1} (f_k + f'_k\Delta/2) \Delta_k W$. All further computations will be based on the following elementary observations:
\begin{proposition}\label{proposition_WienerIntCov}
Suppose $f', g' \in BV$. Then for all
$$
(x_n,y_n) \in \left\{(I(f) - \widehat{I(f)}^n), (I(f) - I^n(f))\right\} \times \left\{(I(g) - \widehat{I(g)}^n), (I(g) - I^n(g)), I(g)\right\},
$$
$$
\lim_{n \rightarrow \infty} n^2 \; \ex\left[x_n \, y_n\right] = \frac{1}{12}\langle f', g' \rangle.
$$
\end{proposition}
\begin{proof}
Via integration by parts $I(f) = f(1) W_1 - \int_{0}^{1} f'(s)W_s \, ds$. Hence, Fubini's theorem yields
\begin{equation}\label{eq_WienerIntDiff}
I(f) - \widehat{I(f)}^n = - \int_{0}^{1} f'(s) \left(W_s - W^{\lin}_s\right) ds.
\end{equation}
We recall the well-known covariances of these Brownian bridges:
\begin{equation}\label{eq_BBIntegral}
\int_{i/n}^{(i+1)/n} \int_{j/n}^{(j+1)/n} \ex[(W_s - W^{\lin}_s)(W_t - W^{\lin}_t)] ds\, dt = \eins_{\{i=j\}}\frac{1}{12 n^3}.
\end{equation}
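The constant in \eqref{eq_BBIntegral} can be checked symbolically: on one grid interval of length $h = 1/n$ (shifted to $[0,h]$), the Brownian bridge covariance is $s(h-t)/h$ for $0 \le s \le t \le h$, and the double integral over the square equals $h^3/12$. A small sketch (variable names are ours):

```python
import sympy as sp

s, t, h = sp.symbols('s t h', positive=True)
# Brownian bridge covariance on one grid interval [0, h], for s <= t:
cov = s * (h - t) / h
# integrate over the square [0, h]^2, using the symmetry in (s, t)
block_integral = 2 * sp.integrate(sp.integrate(cov, (s, 0, t)), (t, 0, h))
```

With $h = 1/n$, `block_integral` equals $\frac{1}{12 n^3}$, as in \eqref{eq_BBIntegral}.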
Hence, for $g,h \in C([0,1];\RR)$, the mean value theorem gives
$$
\int_{i/n}^{(i+1)/n} \int_{j/n}^{(j+1)/n} g(s) h(t)\ex[(W_s - W^{\lin}_s)(W_t - W^{\lin}_t)] ds\, dt = g(\xi_i)h(\overline{\xi}_i)\eins_{\{i=j\}}\frac{1}{12 n^3}
$$
for appropriate $\xi_i, \overline{\xi}_i \in [i/n,(i+1)/n]$, and therefore, via \eqref{eq_WienerIntDiff}, the Riemann sum
\begin{align}\label{eq_RiemannSumFirst}
&\ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - \widehat{I(g)}^n)\right]\nonumber\\
&= \sum_{i,j=0}^{n-1} \int_{i/n}^{(i+1)/n} \int_{j/n}^{(j+1)/n} f'(s) g'(t)\ex[(W_s - W^{\lin}_s)(W_t - W^{\lin}_t)] ds\, dt\nonumber\\
&=\frac{1}{12 n^2} \left(\frac{1}{n}\sum_{i = 0}^{n-1} f'(\xi_i) g'(\overline{\xi}_i) \right).
\end{align}
We recall $h_k := h(k/n)$ for $h \in C([0,1];\RR)$ and the total variation
$$
T(h) := \sup\left\{\sum_{i=0}^{m-1}|h(t_{i+1})-h(t_i)| : m \in \NN, 0=t_0<\ldots <t_{m}=1\right\}.
$$
Hence, via $n\int_{i/n}^{(i+1)/n} h(s) ds \in \left[\inf_{[i/n,(i+1)/n]} h, \, \sup_{[i/n,(i+1)/n]} h\right]$, for the Riemann sum approximation we obtain
\begin{equation}\label{eq_RiemannSumRateBV}
\left|\int_{0}^{1} h(s) ds - \frac{1}{n}\sum_{i=0}^{n-1} h_i\right| \leq \frac{1}{n}\sum_{i=0}^{n-1} \left|n\int_{i/n}^{(i+1)/n} h(s) ds - h_i\right| \leq \frac{T(h)}{n}.
\end{equation}
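The bound \eqref{eq_RiemannSumRateBV} can be illustrated with a monotone example, where $T(h) = h(1)-h(0)$ (the choices $h(s)=s^2$ and $n=50$ are arbitrary):

```python
# Left-endpoint Riemann sum of h(s) = s^2 on [0, 1]: the error is bounded by
# T(h)/n = 1/n, since h is increasing with total variation h(1) - h(0) = 1.
n = 50
h = lambda s: s * s
riemann = sum(h(i / n) for i in range(n)) / n
err = abs(1.0 / 3.0 - riemann)  # exact integral is 1/3
bound = 1.0 / n                 # T(h)/n
```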
Moreover, for all $i=0,\ldots, n-1$, we notice
$$
\left|g(\xi_i)h(\overline{\xi}_i) - g(i/n) h(i/n)\right| \leq \left|g(\xi_i) - g(i/n) \right|\|h\|_{\infty} + \left|h(\overline{\xi}_i) - h(i/n)\right|\|g\|_{\infty}
$$
and therefore
\begin{equation}\label{eq_ErrorAsRiemannSum}
\frac{1}{12}\left|\frac{1}{n}\sum_{i=0}^{n-1} f'_i g'_i - \left(\frac{1}{n}\sum_{i = 0}^{n-1} f'(\xi_i) g'(\overline{\xi}_i) \right)\right| \leq \frac{1}{12 n}\left(T(f')\|g'\|_{\infty} + T(g')\|f'\|_{\infty}\right).
\end{equation}
Hence, via \eqref{eq_RiemannSumFirst} - \eqref{eq_ErrorAsRiemannSum},
we conclude
\begin{align}\label{eq_WienerIntOACovariance}
\lim_{n \rightarrow \infty} n^2\, \ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - \widehat{I(g)}^n)\right] &= \frac{1}{12} \langle f', g' \rangle.
\end{align}
The deterministic integrand of the Wiener integral $I(f) - I^n(f)$ is given by $f(s) - (f_{\lfloor ns \rfloor} +f'_{\lfloor ns \rfloor}\Delta/2)$. Via the mean value theorem for $k=\lfloor ns \rfloor$ and some $\xi_s \in [k/n,s]$,
\begin{align}\label{eq_MVTIntegrand}
f(s) = f_k + f'(\xi_s)(s-k/n),
\end{align}
and therefore
\begin{align*}
f(s) - (f_k + f'_k\Delta/2)
&= f'_k\left((s-k/n)- \Delta/2\right)+ (f'(\xi_s) - f'_k)(s-k/n)
\end{align*}
and, with $\overline{\xi}_s$ denoting the analogous mean value point for $g$, the expansion
\begin{align*}
&\left(f(s) - (f_k + f'_k\Delta/2)\right)\left(g(s) - (g_k + g'_k\Delta/2)\right)\nonumber\\
&= f'_k g'_k \left((s-k/n)- \Delta/2\right)^2 +\left(f'_k (g'(\overline{\xi}_s) - g'_k) + g'_k(f'(\xi_s) - f'_k)\right)\left((s-k/n)- \Delta/2\right)(s-k/n)\nonumber\\
&\quad + (f'(\xi_s) - f'_k)(g'(\overline{\xi}_s) - g'_k)(s-k/n)^2.
\end{align*}
Thanks to the It\=o isometry and $\int_{k/n}^{(k+1)/n}(s-k/n-\Delta/2)^2ds=\Delta^3/12$, we obtain
\begin{align*}
\ex\left[(I(f) - I^n(f))(I(g) - I^n(g))\right]
&= \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} \left(f(s) - (f_k + f'_k\Delta/2)\right) \left(g(s) - (g_k + g'_k\Delta/2)\right)ds\\
&= \frac{1}{12n^2}\left(\frac{1}{n}\sum_{k=0}^{n-1} f'_k g'_k\right) + R(f',g',n),
\end{align*}
where the remainder fulfills
\begin{align}\label{eq_WienerIntStrongRest}
|R(f',g',n)| \leq \frac{1}{n^3}2(T(f') + T(g'))(\|f'\|_{\infty} + \|g'\|_{\infty}).
\end{align}
Thus we conclude the same limit
\begin{align*}
\lim_{n \rightarrow \infty} n^2\, \ex\left[(I(f) - I^n(f))(I(g) - I^n(g))\right] = \frac{1}{12}\langle f', g' \rangle.
\end{align*}
Finally, via integration by parts,
$$
I(g)-I^n(g) = g(1)W_1 - \int_{0}^{1} g'(t)W_t \, dt - \sum_{k=0}^{n-1} (g_k + g'_k\Delta/2)\Delta_k W.
$$
Thus, with the covariances $\ex[(W_s - W^{\lin}_s)\Delta_k W]=0$ for all $k$, and $\ex[(W_s - W^{\lin}_s)W_1]=0$, we conclude
\begin{align*}
\ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - I^n(g))\right] &= \ex\left[\left(\int_{0}^{1} f'(s) \left(W_s - W^{\lin}_s\right) ds\right)\left(\int_{0}^{1} g'(t)W_t \, dt \right)\right]\\
&=\sum_{i,j=0}^{n-1} \int_{i/n}^{(i+1)/n} \int_{j/n}^{(j+1)/n} f'(s) g'(t)\ex[(W_s - W^{\lin}_s)W_t] ds\, dt.
\end{align*}
Hence, due to $\ex[(W_s - W^{\lin}_s)W_t]=\ex[(W_s - W^{\lin}_s)(W_t - W^{\lin}_t)]$ and \eqref{eq_RiemannSumFirst} - \eqref{eq_WienerIntOACovariance},
\begin{align*}
\lim_{n \rightarrow \infty} n^2\, \ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - I^n(g))\right] = \frac{1}{12}\langle f', g' \rangle.
\end{align*}
The proofs for all other covariances are straightforward due to the piecewise constant integrand in $I^n(g)$.
\end{proof}
\begin{remark}\label{remark_WienerInt}
In the simplest case $f = g$, each of the following converges towards $\frac{1}{12}\|f'\|^2$ as $n$ tends to infinity:
$$
n^2\, \ex\left[(I(f) - I^n(f))^2\right] , \quad n^2\, \ex\left[(I(f) - \widehat{I(f)}^n)(I(f) - I^n(f))\right], \quad n^2\, \ex\left[(I(f) - \widehat{I(f)}^n)I(f)\right].
$$
\end{remark}
As a direct consequence of \eqref{eq_RiemannSumRateBV}, \eqref{eq_ErrorAsRiemannSum} and $T(f' g') \leq T(f')\|g'\|_{\infty} + T(g')\|f'\|_{\infty}$, we observe:
\begin{remark}\label{rem_WienerIntConvRateBound}
For $f',g'\in BV$, we have
$$
\left|n^2\, \ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - \widehat{I(g)}^n)\right] - \frac{1}{12}\langle f',g'\rangle \right| \leq \frac{1}{6n}\left(T(f')\|g'\|_{\infty}+T(g')\|f'\|_{\infty}\right).
$$
\end{remark}
The multiple chaos extension of the covariance limits in Proposition \ref{proposition_WienerIntCov} is:
\begin{proposition}\label{proposition_OptWienerIntWick}
Suppose $f', g' \in BV$ and $k \in \NN$ is fixed. Then for all $(x_n,y_n)$ in
$$
\left\{(I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}), (I(f)^{\diamond k} - (I^n(f))^{\diamond k})\right\} \times \left\{(I(g)^{\diamond k} - (\widehat{I(g)}^n)^{\diamond k}), (I(g)^{\diamond k} - (I^n(g))^{\diamond k}), I(g)^{\diamond k}\right\},
$$
$$
\lim_{n \rightarrow \infty} n^2 \; \ex\left[x_n \, y_n\right] = \frac{1}{12}\langle f', g' \rangle\, k\, k!\, \langle f, g \rangle^{(k-1)}.
$$
\end{proposition}
\begin{proof}
For higher chaos terms we observe the standard expansion
\begin{equation}\label{eq_WickPowerExp}
I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k} = (I(f) - \widehat{I(f)}^n) \diamond \sum_{j=1}^{k} (\widehat{I(f)}^n)^{\diamond j-1} \diamond I(f)^{\diamond k-j}.
\end{equation}
We show that the right hand side is close enough to the simplified variable
$$
(I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)}.
$$
Dealing with $L^2$-norms of Gaussian variables, we will frequently make use of Wick's Theorem,
\begin{equation}\label{eq_WickTheorem}
\ex\left[\left(I(f_1) \diamond \cdots \diamond I(f_n)\right) \left(I(g_1) \diamond \cdots \diamond I(g_m)\right)\right] = \delta_{n,m} \sum\limits_{\sigma \in \mathcal{S}_{n}} \prod\limits_{i=1}^{n} \langle f_i, g_{\sigma(i)}\rangle,
\end{equation}
for all $n,m \in \NN$, $f_1,\ldots, f_n, g_1,\ldots, g_m \in L^2$, where $\mathcal{S}_{n}$ denotes the group of permutations on $\{1, \ldots, n\}$ (see e.g. \cite[Theorem 3.9]{Janson}).
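For unit vectors $f_i = f$, $g_j = g$ with $\rho = \langle f, g\rangle$, \eqref{eq_WickTheorem} reduces to $\ex[I(f)^{\diamond k} I(g)^{\diamond k}] = k!\,\rho^k$, i.e. $\ex[\mathrm{He}_k(X)\mathrm{He}_k(Y)] = k!\,\rho^k$ for standard normals with correlation $\rho$ and probabilists' Hermite polynomials $\mathrm{He}_k$. A sketch of a quadrature check (the values $\rho = 0.3$ and $k=3$ are arbitrary):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_k

# E[He_k(X) He_k(Y)] for standard normals with Corr(X, Y) = rho, writing
# Y = rho*X + sqrt(1 - rho^2)*Z with Z independent of X; Gauss-Hermite
# quadrature is exact here since the integrand is a polynomial.
nodes, w = He.hermegauss(40)      # weight exp(-x^2/2), total mass sqrt(2*pi)
w = w / np.sqrt(2.0 * np.pi)      # normalize to the N(0,1) density
rho, k = 0.3, 3
c = np.zeros(k + 1); c[k] = 1.0   # coefficient vector of He_k
total = 0.0
for xi, wi in zip(nodes, w):
    y = rho * xi + np.sqrt(1.0 - rho**2) * nodes  # Y-nodes given X = xi
    total += wi * He.hermeval(xi, c) * np.dot(w, He.hermeval(y, c))
expected = math.factorial(k) * rho**k  # Wick's theorem: k! * <f, g>^k
```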
Via \eqref{eq_WickPowerExp} and the reformulation for general products
\begin{align*}
\sum_{j=2}^{k}a^{k-j}\left(a^{j-1} - b^{j-1}\right) &= (a-b)\sum_{j=2}^{k}a^{k-j}\sum_{l=1}^{j-1} a^{l-1} b^{j-1-l} = (a-b)\sum_{j=2}^{k}\sum_{l=1}^{j-1} a^{k-1-l} b^{l-1}\\
&= (a-b)\sum_{l=1}^{k-1}(k-l) a^{k-1-l} b^{l-1},
\end{align*}
for the difference, we have
\begin{align}\label{eq_WickPowerDiffExpansion}
&(I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} - \left(I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}\right)\nonumber\\
&= \left(I(f) - \widehat{I(f)}^n\right) \diamond \sum_{j=2}^{k} I(f)^{\diamond k-j} \diamond \left(I(f)^{\diamond (j-1)} - (\widehat{I(f)}^n)^{\diamond (j-1)}\right)\nonumber\\
&= \left(I(f) - \widehat{I(f)}^n\right)^{\diamond 2} \diamond \sum_{l=1}^{k-1}(k-l) I(f)^{\diamond (k-1-l)} \diamond (\widehat{I(f)}^n)^{\diamond (l-1)}.
\end{align}
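The scalar identity behind this reformulation can be checked directly on a few integer triples (the triples below are arbitrary):

```python
# Check: sum_{j=2}^{k} a^(k-j) (a^(j-1) - b^(j-1))
#        = (a - b) * sum_{l=1}^{k-1} (k-l) a^(k-1-l) b^(l-1)
results = []
for a, b, k in [(3, 2, 5), (2, -1, 6), (7, 4, 3)]:
    lhs = sum(a**(k - j) * (a**(j - 1) - b**(j - 1)) for j in range(2, k + 1))
    rhs = (a - b) * sum((k - l) * a**(k - 1 - l) * b**(l - 1) for l in range(1, k))
    results.append(lhs == rhs)
```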
We give a sufficient upper bound on the $L^2$-norm. By the covariances
\begin{align*}
\ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - \widehat{I(g)}^n)\right] &=\ex\left[(I(f) - \widehat{I(f)}^n)I(g)\right],\\
\ex\left[(I(f) - \widehat{I(f)}^n)\widehat{I(g)}^n\right] &= 0,\\
\ex\left[I(f)\widehat{I(f)}^n\right] &= \ex\left[(\widehat{I(f)}^n)^2\right] \leq \ex\left[I(f)^2\right]
\end{align*}
and Wick's Theorem, for all $0 \leq m,m' <k$, according to the scheme of numbers of factors
\[
\begin{array}{r c| c l}
2 \left\{ \right.&I(f) - \widehat{I(f)}^n &I(f) - \widehat{I(f)}^n & \left. \right\} 2\\
m\left\{ \right. & I(f) & I(f) & \left. \right\} m'\\
k-m\left\{ \right. & \widehat{I(f)}^n & \widehat{I(f)}^n &\left. \right\} k-m'
\end{array}
\]
and the shorthand notation
$$
e_n := \ex\left[(I(f) - \widehat{I(f)}^n)^2\right],
$$
we obtain
\begin{align}\label{eq_HorribleWickProductNorms}
&\ex\left[\left((I(f) - \widehat{I(f)}^n)^{\diamond 2} \diamond I(f)^{\diamond m} \diamond (\widehat{I(f)}^n)^{\diamond k-m}\right)\left((I(f) - \widehat{I(f)}^n)^{\diamond 2} \diamond I(f)^{\diamond m'} \diamond (\widehat{I(f)}^n)^{\diamond k-m'}\right)\right]
\nonumber\\
&\leq 2 \, e_n^2\, k!\|f\|^{2k} + 4m m' \; e_n^3\, (k-1)!\|f\|^{2(k-1)}+m (m-1) m' (m'-1) \; e_n^4\, (k-2)!\|f\|^{2(k-2)}.
\end{align}
Thus, via \eqref{eq_WickPowerDiffExpansion}--\eqref{eq_HorribleWickProductNorms} and $\sum_{l=1}^{k-1-m}\left((k-l)\cdots (k-l-m)\right) = \frac{1}{2+m}k(k-1)\cdots (k-1-m)$ for $k-1-m\geq 0$ (the remaining cases follow by induction),
we conclude
\begin{align}\label{eq_WickPowerDiffSimpler}
&\ex\left[\left(\left(I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}\right) - (I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} \right)^2\right] \nonumber\\
&= \ex\left[\left(\sum_{l=1}^{k-1}(k-l) \left(I(f) - \widehat{I(f)}^n\right)^{\diamond 2} \diamond I(f)^{\diamond (k-1-l)} \diamond (\widehat{I(f)}^n)^{\diamond (l-1)}\right)^2\right] \nonumber\\
&= \sum_{l,l'=1}^{k-1}(k-l)(k-l') \ex\left[\left((I(f) - \widehat{I(f)}^n)^{\diamond 2} \diamond I(f)^{\diamond (k-1-l)} \diamond (\widehat{I(f)}^n)^{\diamond l-1}\right)\right.\nonumber\\
&\hspace{4cm} \times \left.\left((I(f) - \widehat{I(f)}^n)^{\diamond 2} \diamond I(f)^{\diamond k-1-l'} \diamond (\widehat{I(f)}^n)^{\diamond l'-1}\right)\right] \nonumber\\
&\leq 2\left(\sum_{l=1}^{k-1}(k-l)\right)^2e_n^2 (k-2)!\|f\|^{2(k-2)} + 4\left(\sum_{l=1}^{k-2}(k-l)(k-l-1)\right)^2e_n^3 (k-3)!\|f\|^{2(k-3)}\nonumber\\
&\quad + \left(\sum_{l=1}^{k-3}(k-l)(k-l-1)(k-l-2)\right)^2e_n^4 (k-4)!\|f\|^{2(k-4)}\nonumber\\
&= \frac{1}{2}\left(k(k-1)\right)^2e_n^2 \ex\left[\left(I(f)^{\diamond (k-2)}\right)^2\right] + \frac{4}{9}\left(k(k-1)(k-2)\right)^2e_n^3 \ex\left[\left(I(f)^{\diamond (k-3)}\right)^2\right]\nonumber\\
& \quad + \frac{1}{16}\left(k(k-1)(k-2)(k-3)\right)^2e_n^4 \ex\left[\left(I(f)^{\diamond (k-4)}\right)^2\right].
\end{align}
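The combinatorial identity $\sum_{l=1}^{k-1-m}(k-l)\cdots(k-l-m) = \frac{1}{2+m}k(k-1)\cdots(k-1-m)$ used in the last step can be verified exhaustively for small parameters:

```python
from math import prod

def falling(x, r):
    # falling factorial x (x-1) ... (x-r+1) with r factors
    return prod(x - i for i in range(r))

checks = []
for k in range(2, 10):
    for m in range(0, k - 1):
        # l runs over 1, ..., k-1-m; each summand has m+1 factors
        lhs = sum(falling(k - l, m + 1) for l in range(1, k - m))
        # right hand side: m+2 factors from k down to k-1-m, divided by m+2
        rhs = falling(k, m + 2) // (m + 2)
        checks.append(lhs == rhs)
```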
In particular, via Proposition \ref{proposition_WienerIntCov}
\begin{equation}\label{eq_WickPowerDiffSimplerLandau}
\ex\left[\left((I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}) - (I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} \right)^2\right] \in \mathcal{O}(n^{-4}).
\end{equation}
Therefore, by $AB-ab = (A-a)B + a(B-b)$ and the Cauchy-Schwarz inequality
\begin{align}\label{eq_WickPowerDiffSimplerLandau2}
&\left|\ex\left[\left(I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}\right)\left(I(g)^{\diamond k} - (\widehat{I(g)}^n)^{\diamond k}\right)\right]\right.\nonumber\\
&\left. -\ex\left[\left((I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} \right)\left((I(g) - \widehat{I(g)}^n) \diamond k I(g)^{\diamond (k-1)} \right)\right]\right| \in \mathcal{O}(n^{-4}).
\end{align}
Hence, due to \eqref{eq_WickPowerDiffSimplerLandau2}, Wick's Theorem \eqref{eq_WickTheorem} and Proposition \ref{proposition_WienerIntCov}, we conclude
\begin{align}\label{eq_WickConvFiniteChaos}
&\lim_{n \rightarrow \infty} n^2 \; \ex\left[\left(I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}\right)\left(I(g)^{\diamond k} - (\widehat{I(g)}^n)^{\diamond k}\right)\right]\nonumber\\
&= \lim_{n \rightarrow \infty} n^2 \; \ex\left[\left((I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} \right)\left((I(g) - \widehat{I(g)}^n) \diamond k I(g)^{\diamond (k-1)} \right)\right]\nonumber\\
&= \lim_{n \rightarrow \infty} n^2 \; \ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - \widehat{I(g)}^n)\right] k\, k!\, \langle f, g \rangle^{(k-1)}\nonumber\\
&\quad + \lim_{n \rightarrow \infty} n^2 \; \ex\left[(I(f) - \widehat{I(f)}^n)(I(g) - \widehat{I(g)}^n)\right]^2 k^2 (k-1)^2\, (k-2)! \,\langle f, g \rangle^{(k-2)}\nonumber\\
&= \frac{1}{12}\langle f', g' \rangle k\, k!\, \langle f, g \rangle^{(k-1)}.
\end{align}
The result for all other sequences follows analogously by the covariance limits in Proposition \ref{proposition_WienerIntCov}.
\end{proof}
\subsection{The univariate functional}\label{subsection_uni}
The paradigmatic result on optimal approximation is:
\begin{theorem}\label{thm_OptWickFunc}
Suppose $f', g' \in BV$, $F=F(I(f)), G=G(I(g)) \in \WA$. Then for all
\begin{align*}
(x_n,y_n) \in \left\{(F-\widehat{F}^n), (F-\widetilde{F}^{(n)})\right\} \times \left\{(G-\widehat{G}^n), (G-\widetilde{G}^{(n)})\right\},
\end{align*}
$$
\lim_{n \rightarrow \infty} n^2\, \ex\left[x_n y_n\right]= \frac{1}{12} \langle f', g'\rangle \, \ex\left[F'(I(f)) G'(I(g))\right].
$$
\end{theorem}
\begin{proof}
We first present the proof for $x_n = y_n = F-\widehat{F}^n$. The Wiener chaos expansion yields $F(I(f)) = \sum_{k\geq 0} \frac{a_k}{k!} I(f)^{\diamond k}$ for some unique coefficients $a_k \in \RR$.
Due to Proposition \ref{prop_DerivativesWA}, $\dfrac{\partial^k}{\partial x^k}F(I(f)) \in L^2(\Omega)$ for all $k \in \NN$.
Thanks to the derivative rule for Hermite polynomials $\dfrac{\partial}{\partial x} I(f)^{\diamond k} = k I(f)^{\diamond k-1}$, we observe for all $j \in \NN$,
\begin{align*}
\ex\left[\left(\dfrac{\partial^j}{\partial x^j}F(I(f))\right)^2\right]
&= \sum_{k\geq j} \frac{a_k^2}{(k-j)!^2} \ex\left[\left(I(f)^{\diamond (k-j)}\right)^2\right].
\end{align*}
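The derivative rule $\mathrm{He}_k' = k\,\mathrm{He}_{k-1}$ for probabilists' Hermite polynomials (which for $\|f\|=1$ is exactly $\frac{\partial}{\partial x} I(f)^{\diamond k} = k I(f)^{\diamond k-1}$) can be confirmed numerically:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# He_k'(x) = k * He_{k-1}(x): compare the differentiated series with the
# scaled lower-order polynomial on a grid (k = 5 is an arbitrary choice).
k = 5
ck = np.zeros(k + 1); ck[k] = 1.0          # coefficients of He_k
ck1 = np.zeros(k); ck1[k - 1] = float(k)   # coefficients of k * He_{k-1}
x = np.linspace(-2.0, 2.0, 9)
deriv = He.hermeval(x, He.hermeder(ck))
rule = He.hermeval(x, ck1)
```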
Hence, via \eqref{eq_WickPowerDiffSimpler} and the shorthand notation $e_n := \ex\left[(I(f) - \widehat{I(f)}^n)^2\right]$, we conclude
\begin{align}\label{eq_WickPowerFuncDiffSimpler}
&\sum_{k\geq 1} \frac{a_k^2}{k!^2}\ex\left[\left((I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}) - (I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} \right)^2\right] \nonumber\\
&\leq \frac{1}{2}e_n^2 \sum_{k\geq 2} \frac{a_k^2}{(k-2)!^2} \ex\left[\left(I(f)^{\diamond (k-2)}\right)^2\right] + \frac{4}{9}e_n^3 \sum_{k\geq 3} \frac{a_k^2}{(k-3)!^2} \ex\left[\left(I(f)^{\diamond (k-3)}\right)^2\right]\nonumber\\
&\quad + \frac{1}{16}e_n^4 \sum_{k\geq 4} \frac{a_k^2}{(k-4)!^2} \ex\left[\left(I(f)^{\diamond (k-4)}\right)^2\right]\nonumber\\
&= \frac{e_n^2}{2}\ex\left[\left(F''\right)^2\right] + \frac{4e_n^3}{9}\ex\left[\left(\dfrac{\partial^3}{\partial x^3}F\right)^2\right] + \frac{e_n^4}{16}\ex\left[\left(\dfrac{\partial^4}{\partial x^4}F\right)^2\right]\in \mathcal{O}(n^{-4}).
\end{align}
Thus, via Corollary \ref{cor_WickCarriesOverOpt}, \eqref{eq_WickPowerFuncDiffSimpler} and Proposition \ref{proposition_OptWienerIntWick} (cf. \eqref{eq_WickConvFiniteChaos}), we obtain
\begin{align*}
\lim_{n \rightarrow \infty} n^2 \; \ex\left[\left(F - \widehat{F}^{n}\right)^2\right] &= \lim_{n \rightarrow \infty} n^2 \; \sum_{k\geq 1} \frac{a_k^2}{k!^2}\ex\left[\left(I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}\right)^2\right] \nonumber\\
&=\lim_{n \rightarrow \infty} n^2 \; \sum_{k\geq 1} \frac{a_k^2}{k!^2}\ex\left[\left((I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} \right)^2\right] \nonumber\\
&= \frac{1}{12} \|f'\|^2\sum_{k\geq 1} \frac{a_k^2}{(k-1)!} \|f\|^{2(k-1)} = \frac{1}{12} \|f'\|^2 \ex\left[\left(F'(I(f))\right)^2\right].
\end{align*}
The proof for $x_n = F-\widehat{F}^n$, $y_n = G-\widehat{G}^n$ with the Wiener chaos expansion $G(I(g)) = \sum_{k\geq 0} \frac{b_k}{k!} I(g)^{\diamond k}$ proceeds analogously, as via \eqref{eq_WickPowerDiffSimplerLandau2}, \eqref{eq_WickPowerFuncDiffSimpler} and \eqref{eq_WickConvFiniteChaos},
\begin{align*}
&\lim_{n \rightarrow \infty} n^2 \; \ex\left[\left(F - \widehat{F}^{n}\right)\left(G - \widehat{G}^{n}\right)\right]\\
&= \lim_{n \rightarrow \infty} n^2 \; \sum_{k\geq 1} \frac{a_k b_k}{k!^2}\ex\left[\left(I(f)^{\diamond k} - (\widehat{I(f)}^n)^{\diamond k}\right)\left(I(g)^{\diamond k} - (\widehat{I(g)}^n)^{\diamond k}\right)\right] \nonumber\\
&=\lim_{n \rightarrow \infty} n^2 \; \sum_{k\geq 1} \frac{a_k b_k}{k!^2}\ex\left[\left((I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond (k-1)} \right)\left((I(g) - \widehat{I(g)}^n) \diamond k I(g)^{\diamond (k-1)} \right)\right] \nonumber\\
&= \frac{1}{12} \langle f', g'\rangle \sum_{k\geq 1} \frac{a_k b_k}{(k-1)!} \langle f,g\rangle^{k-1} = \frac{1}{12} \langle f', g'\rangle \ex\left[F'(I(f)) G'(I(g))\right].
\end{align*}
The proofs of the other limits follow analogously via the finite chaos case in Proposition \ref{proposition_OptWienerIntWick} and making use of
\begin{align}\label{eq_WAStrongRest}
\|F(I(f)) - F^{(n)}(I(f))\|^2_{L^2(\Omega)} &\leq \sum_{k>n} \frac{(C^2\|f\|^2)^k}{k!} \leq \frac{(C\|f\|)^{2n}}{n!} e^{C^2\|f\|^2},
\end{align}
with $C:= \sup_{k\geq 0} |a_k|^{1/k}<\infty$.
\end{proof}
\begin{example}
Suppose $f' \in BV$. Then we have
$$
\lim_{n \rightarrow \infty} n^2 \; \ex\left[\left(e^{I(f)}-\widehat{e^{I(f)}}^n\right)^2\right] = \lim_{n \rightarrow \infty} n^2 \; \ex\left[\left(e^{I(f)}-e^{I^n(f)}\right)^2\right] = \frac{1}{12} \|f'\|^2 e^{2\|f\|^2}.
$$
\end{example}
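The constant $e^{2\|f\|^2}$ is consistent with the chaos series in the proof: for $F(x)=e^x$ one has $e^{I(f)} = e^{\|f\|^2/2}\sum_{k\geq 0}\frac{1}{k!}I(f)^{\diamond k}$, so $a_k = e^{\|f\|^2/2}$ and $\ex[(F'(I(f)))^2] = \sum_{k\geq 1}\frac{a_k^2}{(k-1)!}\|f\|^{2(k-1)} = e^{2\|f\|^2}$. A quick check with a hypothetical value of $\|f\|^2$:

```python
import math

norm2 = 0.7  # hypothetical value of ||f||^2
# chaos coefficients of F(x) = e^x: a_k = exp(norm2 / 2), hence
# E[(F')^2] = sum_{k>=1} a_k^2 norm2^(k-1) / (k-1)!  (series truncated at 40)
series = math.exp(norm2) * sum(norm2**j / math.factorial(j) for j in range(40))
closed_form = math.exp(2.0 * norm2)
```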
\begin{remark}\label{rem_OptErrorExpansion}
Looking at \eqref{eq_HorribleWickProductNorms}, we observe that $\ex[(\widehat{I(f)}^n)^2] \leq \ex[I(f)^2]$ is the only reason for the inequality. Hence, asymptotically, we obtain equalities in \eqref{eq_HorribleWickProductNorms}, \eqref{eq_WickPowerDiffSimpler} and \eqref{eq_WickPowerFuncDiffSimpler}. This yields the following expansion of the optimal approximation error with $e_n := \ex\left[(I(f) - \widehat{I(f)}^n)^2\right]$,
\begin{align*}
\ex\left[\left(F(I(f)) - \widehat{F(I(f))}^{n}\right)^2\right] &\sim e_n\, \ex\left[\left(F'(I(f))\right)^2\right] + \frac{e_n^2}{2}\, \ex\left[\left(F''(I(f))\right)^2\right]\nonumber\\
&\quad + \frac{4e_n^3}{9}\, \ex\left[\left(F^{(3)}(I(f))\right)^2\right] + \frac{e_n^4}{16}\, \ex\left[\left(F^{(4)}(I(f))\right)^2\right].
\end{align*}
We believe that this can be used for an error expansion of the approximation via Algorithm \ref{algo_WickWP2}.
\end{remark}
\subsection{The multivariate functional}\label{subsection_multi}
Theorem \ref{thm_OptWickFunc} can be extended in various directions to multivariate functionals. For the proofs of our optimal approximation results we need:
\begin{theorem}\label{thm_OptWickFuncMult}
Suppose
$f', g', f'_1, g'_1, \ldots \in BV$
and recall the abbreviations (Remark \ref{rem_Abbreviations})
\begin{align*}
F &:= F(I(f_1), \ldots, I(f_m)), \quad \widetilde{F}^{(n)} := \sum_{k=0}^{n} \pi_k(F(I^n(f_1), \ldots, I^n(f_m))), \quad F_{x_k} := \dfrac{\partial}{\partial x_k} F.
\end{align*}
(i) \ Suppose $F= F(I(f_1), \ldots, I(f_m)), G= G(I(g_1), \ldots, I(g_m)) \in \lin(\WA)$. Then for all
$$
(x_n, y_n) \in \left\{F-\widehat{F}^n, F-\widetilde{F}^{(n)}\right\} \times \left\{G-\widehat{G}^n, G-\widetilde{G}^{(n)}, G\right\},
$$
$$
\lim_{n \rightarrow \infty} n^2 \; \ex\left[x_n y_n\right] = \frac{1}{12} \sum_{i,j=1}^{m}\langle f'_i, g'_j\rangle\, \ex\left[F_{x_i}G_{x_j}\right].
$$
(ii) \ Suppose $F = F(I(f)), G= G(I(g)) \in \WA$. Then we have
\begin{align*}
&(a) &\lim_{n \rightarrow \infty} n^2\, \ex\left[\left((F-\widetilde{F}^{(n)}) \diamond G\right)^2\right] &= \frac{1}{12} \|f'\|^2 \ex\left[\left(F' \diamond G\right)^2\right],\\
&(b) &\lim_{n \rightarrow \infty} n^2\, \ex\left[\left(\widetilde{F}^{(n)} \diamond (I(g) - I^n(g)) \diamond G\right)^2\right] &= \frac{1}{12} \|g'\|^2 \ex\left[\left(F \diamond G\right)^2\right],\\
&(c) &\lim_{n \rightarrow \infty} n^2\, \ex\left[\left((F-\widetilde{F}^{(n)}) \diamond G\right)\left(\widetilde{F}^{(n)} \diamond (I(g) - I^n(g)) \diamond G\right)\right] &= \frac{1}{12} \langle f', g'\rangle \ex\left[\left(F' \diamond G\right)\left(F \diamond G\right)\right].
\end{align*}
\end{theorem}
\begin{remark}
(i) \ The limit in Theorem \ref{thm_OptWickFuncMult} (i) for $\widehat{F}^n = \widehat{G}^n$ is the key result for the optimal approximation in Theorem \ref{thm_OptSDELin} and Theorem \ref{thm_OptSDELinMulti}. The limit in (i) for $\widetilde{F}^{(n)} =\widetilde{G}^{(n)}$ implies the best convergence rate for the strong approximation in Algorithm \ref{algo_WickWP1}.
(ii) \ The limits for mixed terms in Theorem \ref{thm_OptWickFuncMult} (i) illustrate the compatibility of the convergences via optimal approximation and Algorithms \ref{algo_WickWP1}-\ref{algo_WickWP2} on the large class $\lin(\WA)$.
(iii) \ Theorem \ref{thm_OptWickFuncMult} (ii) is a key step in the proof of the optimal Wick-WP scheme in Theorem \ref{thm_LinSDEApproxWickWP}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm_OptWickFuncMult}]
Due to the various Wiener chaos expansions, the notation for arbitrary functionals quickly becomes elaborate. However, the proof is a straightforward extension of the arguments for Theorem \ref{thm_OptWickFunc}. We present the proof for the two-dimensional case in (i) for $G=F=F(I(f), I(g)) \in \WA$. All other cases are straightforward generalizations as in the previous subsections. Consider the Wiener chaos expansion
$$
F(I(f), I(g)) = \sum_{k,l=0}^{\infty} a_{k,l} I(f)^{\diamond k} \diamond I(g)^{\diamond l}
$$
for some coefficients $a_{k,l} \in \RR$. Via Corollary \ref{cor_WickCarriesOverOpt}, we have
\begin{align}\label{eq_OAMulti1}
&\ex\left[\left(F(I(f), I(g)) - \widehat{F(I(f), I(g))}^{n}\right)^2\right]\nonumber\\
&= \sum_{\substack{k,l, k', l' \geq 0\\ k+l=k'+l'>0}} a_{k,l} a_{k',l'} \ex\left[\left(I(f)^{\diamond k} \diamond I(g)^{\diamond l} - (\widehat{I(f)}^{n})^{\diamond k} \diamond (\widehat{I(g)}^{n})^{\diamond l} \right)\right.\nonumber\\
&\hspace{3cm} \left. \times\left(I(f)^{\diamond k'} \diamond I(g)^{\diamond l'} - (\widehat{I(f)}^{n})^{\diamond k'} \diamond (\widehat{I(g)}^{n})^{\diamond l'} \right)\right].
\end{align}
As
$$
I(f)^{\diamond k} \diamond I(g)^{\diamond l} - (\widehat{I(f)}^{n})^{\diamond k} \diamond (\widehat{I(g)}^{n})^{\diamond l} = (I(f)^{\diamond k} - (\widehat{I(f)}^{n})^{\diamond k}) \diamond I(g)^{\diamond l} + (\widehat{I(f)}^{n})^{\diamond k} \diamond (I(g)^{\diamond l} - (\widehat{I(g)}^{n})^{\diamond l}),
$$
the right hand side in \eqref{eq_OAMulti1} reduces to covariances of terms of the type
\begin{equation*}
(I(f)^{\diamond k} - (\widehat{I(f)}^{n})^{\diamond k}) \diamond I(g)^{\diamond l}, \quad (I(f)^{\diamond k} - (\widehat{I(f)}^{n})^{\diamond k}) \diamond (\widehat{I(g)}^{n})^{\diamond l}.
\end{equation*}
Analogously to the proof of Proposition \ref{proposition_OptWienerIntWick} (see e.g. \eqref{eq_WickPowerDiffSimplerLandau}--\eqref{eq_WickPowerDiffSimplerLandau2}), these terms behave in covariance computations like
$$
\left(I(f) - \widehat{I(f)}^{n}\right) \diamond k\, I(f)^{\diamond (k-1)} \diamond I(g)^{\diamond l}.
$$
Suppose $f', g', \bar{f}', \bar{g}' \in BV$. We recall that $I(f)^{\diamond k} \diamond I(g)^{\diamond l}$ is a polynomial in $I(f)$ and $I(g)$ and therefore differentiable. Then, analogously to \eqref{eq_WickPowerDiffSimpler} and due to the derivative rule for Hermite polynomials, for $k+l=k'+l'$ we obtain
\begin{align*}
&\ex\left[\left((I(f)^{\diamond k} - (\widehat{I(f)}^{n})^{\diamond k}) \diamond I(g)^{\diamond l}\right)\left((I(\bar{f})^{\diamond k'} - (\widehat{I(\bar{f})}^{n})^{\diamond k'}) \diamond I(\bar{g})^{\diamond l'}\right)\right]\\
&\sim \ex\left[\left((I(f) - \widehat{I(f)}^n) \diamond k I(f)^{\diamond k-1} \diamond I(g)^{\diamond l}\right) \left( (I(\bar{f}) - \widehat{I(\bar{f})}^n) \diamond k'I(\bar{f})^{\diamond k'-1} \diamond I(\bar{g})^{\diamond l'}\right)\right]\\
&\sim \frac{1}{12n^2}\langle f', \bar{f}'\rangle \, \ex\left[\left(k I(f)^{\diamond k-1} \diamond I(g)^{\diamond l}\right) \left( k'I(\bar{f})^{\diamond k'-1} \diamond I(\bar{g})^{\diamond l'}\right)\right]\\
&= \frac{1}{12n^2}\langle f', \bar{f}'\rangle \, \ex\left[\left(\dfrac{\partial}{\partial x_1} I(f)^{\diamond k} \diamond I(g)^{\diamond l}\right) \left(\dfrac{\partial}{\partial x_1} I(\bar{f})^{\diamond k'} \diamond I(\bar{g})^{\diamond l'}\right)\right].
\end{align*}
Thus, as in Proposition \ref{proposition_OptWienerIntWick}, for a fixed chaos (of order $N \in \NN$), we conclude
\begin{align*}
&\sum_{\substack{k,l, k', l' \geq 0,\ k+l=k'+l' = N}} a_{k,l} a_{k',l'} \ex\left[\left(I(f)^{\diamond k} \diamond I(g)^{\diamond l} - (\widehat{I(f)}^{n})^{\diamond k} \diamond (\widehat{I(g)}^{n})^{\diamond l} \right)\right.\\
&\hspace{5cm} \left. \times\left(I(f)^{\diamond k'} \diamond I(g)^{\diamond l'} - (\widehat{I(f)}^{n})^{\diamond k'} \diamond (\widehat{I(g)}^{n})^{\diamond l'} \right)\right]\\
&\sim \frac{1}{12n^2} \|f'\|^2 \sum_{k+l=k'+l' = N}
a_{k,l} a_{k',l'} \ex\left[\left(\dfrac{\partial}{\partial x_1} I(f)^{\diamond k} \diamond I(g)^{\diamond l}\right)\left(\dfrac{\partial}{\partial x_1} I(f)^{\diamond k'} \diamond I(g)^{\diamond l'}\right)\right]\\
&\quad + \frac{1}{12n^2} \|g'\|^2 \sum_{k+l=k'+l' = N}
a_{k,l} a_{k',l'} \ex\left[\left(\dfrac{\partial}{\partial x_2} I(f)^{\diamond k} \diamond I(g)^{\diamond l}\right)\left(\dfrac{\partial}{\partial x_2} I(f)^{\diamond k'} \diamond I(g)^{\diamond l'}\right)\right]\\
&\quad + \frac{2}{12n^2} \langle f', g'\rangle\sum_{k+l=k'+l' = N}
a_{k,l} a_{k',l'} \ex\left[\left(\dfrac{\partial}{\partial x_1} I(f)^{\diamond k} \diamond I(g)^{\diamond l}\right)\left(\dfrac{\partial}{\partial x_2} I(f)^{\diamond k'} \diamond I(g)^{\diamond l'}\right)\right].
\end{align*}
Hence, analogously to Theorem \ref{thm_OptWickFunc}, summing up (Fubini's theorem applies as we have uniform bounds via Proposition \ref{prop_DerivativesWA}) leads to the asserted asymptotics
\begin{align}\label{eq_OAMultiLast}
&\ex\left[\left(F(I(f), I(g)) - \widehat{F(I(f), I(g))}^{n}\right)^2\right]\nonumber\\
&\sim \frac{1}{12n^2} \|f'\|^2\, \ex\left[F_{x_1}^2\right] + \frac{1}{12n^2} \|g'\|^2\, \ex\left[F_{x_2}^2\right] + \frac{2}{12n^2} \langle f', g' \rangle\, \ex\left[F_{x_1}F_{x_2}\right].
\end{align}
All other statements for multivariate $F, G \in \WA$ and the statements in (ii) are straightforward as the covariance limits carry over to Wiener chaos computations in Proposition \ref{proposition_OptWienerIntWick} and by the truncation in \eqref{eq_WAStrongRest}. In particular, due to
$$
\ex\left[\left(F+G - \widehat{F+G}^n\right)^2\right] = \ex\left[(F-\widehat{F}^n)^2 + (G-\widehat{G}^n)^2 + 2(F-\widehat{F}^n)(G-\widehat{G}^n)\right],
$$
and an iteration, we conclude the limits in (i) for $F, G \in \lin(\WA)$ as well.
\end{proof}
\section{Proofs}\label{section_Proofs}
The optimal approximation results for linear Skorohod SDEs now follow as an easy application of the results in Section \ref{section_OptWienerChaos}:
\begin{proof}[Proof of Theorem \ref{thm_OptSDELin}]
Due to \eqref{eq_LinSkorohodSDESol}, Proposition \ref{prop_WickConditionalExpectation} and the deterministic term $\int_{0}^{1}a(s) ds$, we have
\begin{align*}
\ex[(X_1 - \widehat{X_1}^n)^2] &= \ex\left[\left(F(I(f)) \diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1}a(s) ds} - \widehat{F(I(f))}^n \diamond e^{\diamond \widehat{I(\sigma)}^n} e^{\int_{0}^{1}a(s) ds}\right)^2\right]\\
&= e^{2\int_{0}^{1}a(s) ds}\, \ex\left[\left(F(I(f)) \diamond e^{\diamond I(\sigma)} - \widehat{F(I(f))}^n \diamond e^{\diamond \widehat{I(\sigma)}^n} \right)^2\right].
\end{align*}
For the random variable
$$
G(I(f), I(\sigma)) := F(I(f)) \diamond e^{\diamond I(\sigma)},
$$
by $F(I(f)) \in \WA$ and Proposition \ref{prop_DerivativesWA}, the following derivatives exist in $L^2(\Omega)$:
$$
G_{x_1} = F'(I(f)) \diamond e^{\diamond I(\sigma)}, \qquad G_{x_2} = F(I(f)) \diamond e^{\diamond I(\sigma)}.
$$
Thanks to Theorem \ref{thm_OptWickFuncMult} (i), we therefore conclude
\begin{align*}
&\lim_{n \rightarrow \infty} n^2\, \ex\left[\left(F(I(f)) \diamond e^{\diamond I(\sigma)} - \widehat{F(I(f))}^n \diamond e^{\diamond \widehat{I(\sigma)}^n} \right)^2\right] = \frac{1}{12} \left(\|f'\|^2 \ex[(F'(I(f)) \diamond e^{\diamond I(\sigma)})^2]\right.\\
&\qquad \left. + \|\sigma'\|^2 \ex[(F(I(f)) \diamond e^{\diamond I(\sigma)})^2] +2\langle f',\sigma'\rangle \ex[(F'(I(f)) \diamond e^{\diamond I(\sigma)})(F(I(f)) \diamond e^{\diamond I(\sigma)})]\right)\\
&\qquad = \int_{0}^{1} \ex\left[\left(\left(f'(s) F'(I(f)) + \sigma'(s) F(I(f))\right)\diamond e^{\diamond I(\sigma)}\right)^2\right] ds.
\end{align*}
This yields the asserted optimal convergence in Theorem \ref{thm_OptSDELin}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_OptSDELinMulti}]
Concerning a multivariate initial value $F(I(f_1), \ldots, I(f_K))$ in Theorem \ref{thm_OptSDELinMulti}, we conclude analogously via Theorem \ref{thm_OptWickFuncMult} (i) and the function
$$
G\left(I(f_1), \ldots, I(f_K), I(\sigma)\right) := F(I(f_1), \ldots, I(f_K)) \diamond e^{\diamond I(\sigma)} \in \lin(\WA).
$$
\end{proof}
For the proof of Theorem \ref{thm_LinSDEApproxWickWP} we need to combine the limits in Theorem \ref{thm_OptWickFuncMult} with the approximation of the pathwise integral $\int_{0}^{1} a(s) ds$ in the Wick-WP scheme.
We denote for shorthand
\begin{align}\label{eq_ShorthandAlphaBeta}
\alpha_k &:=a_k \Delta + \sigma_k\Delta_k W,\nonumber\\
\beta_k &:= \sigma_k^2 (1/2)(\Delta_k W)^{\diamond 2} + \left(\sigma'_k + a_k\sigma_k\right)\Delta_k W \Delta + \sigma_k^3(1/6)(\Delta_k W)^{\diamond 3} + \left(a'_k + a_k^2\right) (1/2)\Delta^2,\nonumber\\
\mathcal{E}^n &:= \prod_{k=0}^{n-1}\left(1+\alpha_k + \beta_k\right) - \frac{1}{2}\sum_{l=0}^{n-1}\sigma'_l \Delta_l W \Delta\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k\right).
\end{align}
Then, via the independence of the terms in the scheme with a deterministic initial value (in which case all Wick products are ordinary products), the approximation in Theorem \ref{thm_LinSDEApproxWickWP} is given by
\begin{align}\label{eq_WickWPShort}
\widetilde{X}^n_n &= \widetilde{F}^{(n)}(I^n(f)) \diamond \mathcal{E}^n.
\end{align}
We recall the formula of similar type for the solution of the Skorohod SDE in Theorem \ref{thm_LinSDEApproxWickWP} via \eqref{eq_LinSkorohodSDESol},
\begin{equation}\label{eq_LinSDESolShort}
X_1 = F(I(f)) \diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}.
\end{equation}
In contrast to It\=o SDEs, the task here is to carry over the strong approximations of $\widetilde{F}^{(n)}(I^n(f))$ towards $F(I(f))$ and of $\mathcal{E}^n$ towards $e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}$ to the Wick product and to extract the correct convergence rates. As the Wick product is a convolution operator and not closed on $L^2(\Omega)$, it is not trivial that such convergence holds or that the convergence rates of the terms remain valid in an approximation of \eqref{eq_LinSDESolShort} via \eqref{eq_WickWPShort}. The proof is based on Theorem \ref{thm_OptWickFuncMult} (ii) and the following computations:
\begin{proposition}\label{prop_WPComp}
We denote for shorthand $I_k := I(\sigma\eins_{[k/n,(k+1)/n]})$ and $I^n_k := I^{n}(\sigma\eins_{[k/n,(k+1)/n]})$ and notice
\begin{align}\label{eq_WPWienerIntDiff}
I_k - I^n_k &= \int_{k/n}^{(k+1)/n}(\sigma(s) - \sigma_k - \sigma'_k \Delta/2) dW_s.
\end{align}
Then we have:
\begin{align}\label{eq_OptStrongWickExpExp}
e^{\diamond I_k} &= 1+\sigma_k \Delta_k W + \frac{1}{2}(\sigma_k\Delta_k W)^{\diamond 2} +\frac{1}{6}(\sigma_k\Delta_k W)^{\diamond 3} + \sigma'_k \Delta_kW \Delta+ R^{\sigma}_{k,n},\\
R^{\sigma}_{k,n} &= \left(I_k - I^n_k - \sigma'_k \Delta_k W \Delta/2\right) + \frac{1}{2}\left( I_k^{\diamond 2} - (\sigma_k\Delta_k W)^{\diamond 2}\right)+ \frac{1}{6}\left(I_k^{\diamond 3} - (\sigma_k\Delta_k W)^{\diamond 3}\right) + \sum_{m=4}^{\infty} \frac{1}{m!} I_k^{\diamond m}.\nonumber
\end{align}
We denote for shorthand $\ve_k := \int_{k/n}^{(k+1)/n}a(s) ds - (a_k\Delta + a'_k\Delta^2/2)$. Then we have
\begin{align}\label{eq_OptStrongDetExpExp}
e^{\int_{k/n}^{(k+1)/n} a(s) ds}
&= 1+a_k \Delta + (a'_k + a_k^2)\Delta^2/2 + R^{a}_{k,n},\\
R^{a}_{k,n} &= \ve_k + \frac{1}{2}\left(\left(\int_{k/n}^{(k+1)/n}a(s) ds\right)^{2} - a_k^2\Delta^2\right) + \sum_{m=3}^{\infty} \frac{1}{m!} \left(\int_{k/n}^{(k+1)/n} a(s) ds\right)^{m},\nonumber\\
e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}
&=\prod_{k=0}^{n-1}\left(1+\alpha_k + \beta_k+\overline{R}_{k,n}\right),\label{eq_OptStrongExpProd}\\
\overline{R}_{k,n} &= R^{\sigma}_{k,n} + a_k\Delta\left(e^{\diamond I_k} - 1-\sigma_k \Delta_k W\right)
+ \frac{1}{2}(a'_k + a_k^2)\Delta^2\left(e^{\diamond I_k} - 1\right)
+ R^a_{k,n}e^{\diamond I_k}.\nonumber
\end{align}
Finally, we have:
\begin{align}
e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n &= \sum_{l=0}^{n-1} (I_l - I^n_l)\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k\right) + R_n,\label{eq_OptStrongExpDiff}
\end{align}
with
\begin{align}\label{eq_GammaRnRate}
\ex[(\Gamma(\sqrt{2})R_n)^2] \in \Oo(\Delta^3).
\end{align}
\end{proposition}
\begin{proof}
Equations \eqref{eq_OptStrongWickExpExp} and \eqref{eq_OptStrongDetExpExp} are clear by the Wick exponential $e^{\diamond I_k} = \sum_{m=0}^{\infty} \frac{1}{m!} I_k^{\diamond m}$ and the ordinary exponential function.
Then, multiplying \eqref{eq_OptStrongWickExpExp} and \eqref{eq_OptStrongDetExpExp} in the product (notice the uncorrelated $I_k$ for different $k$),
$
e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} = \prod_{k=0}^{n-1}e^{\diamond I_k} e^{\int_{k/n}^{(k+1)/n} a(s) ds},
$
we conclude \eqref{eq_OptStrongExpProd}. A simple computation with the general expansion
\begin{align}\label{eq_GeneralExpansion}
\prod_{k=0}^{m}(a_k + b_k) - \prod_{k=0}^{m}(a_k) = \sum_{k=0}^{m} b_k \prod_{j=0}^{k-1}(a_j + b_j) \prod_{j=k+1}^{m}(a_j),
\end{align}
on \eqref{eq_OptStrongExpProd} and \eqref{eq_ShorthandAlphaBeta} with $b_k = \overline{R}_{k,n}$ gives
\begin{align}\label{eq_WPDiffRn1}
e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n
&=\sum_{l=0}^{n-1} \overline{R}_{l,n} \prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k+\overline{R}_{k,n}\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k + \beta_k\right)\nonumber\\
&\quad + \frac{1}{2}\sum_{l=0}^{n-1}\sigma'_l \Delta_l W \Delta\, \prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k\right).
\end{align}
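The telescoping identity \eqref{eq_GeneralExpansion} used above is elementary and can be verified numerically; a minimal sketch with arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 7
a = rng.normal(size=m + 1)
b = rng.normal(size=m + 1)

# Left-hand side: prod_k (a_k + b_k) - prod_k a_k.
lhs = np.prod(a + b) - np.prod(a)

# Right-hand side: telescoping sum over the position k at which the
# perturbation b_k is singled out.
rhs = sum(
    b[k] * np.prod(a[:k] + b[:k]) * np.prod(a[k + 1:])
    for k in range(m + 1)
)
print(lhs, rhs)  # identical up to floating-point rounding
```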
The reformulation $Aa-Bb = (A-B)a + B(a-b)$ and \eqref{eq_GeneralExpansion} yields
\begin{align}\label{eq_WPDiffRn2}
&\sum_{l=0}^{n-1} \overline{R}_{l,n} \left(\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k+\overline{R}_{k,n}\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k + \beta_k\right) - \prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k\right)\right)\nonumber\\
&= \sum_{l=0}^{n-1} \overline{R}_{l,n} \left(\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k+\overline{R}_{k,n}\right) - \prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \right)\prod_{k=l+1}^{n-1}\left(1+\alpha_k + \beta_k\right)\nonumber\\
&\qquad + \sum_{l=0}^{n-1} \overline{R}_{l,n} \prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right)\left( \prod_{k=l+1}^{n-1}\left(1+\alpha_k + \beta_k\right) -\prod_{k=l+1}^{n-1}\left(1+\alpha_k\right)\right) \nonumber\\
&= \sum_{l=0}^{n-1} \overline{R}_{l,n} \left(\sum_{k=0}^{l-1}\overline{R}_{k,n}\prod_{j=0}^{k-1}\left(1+\alpha_j + \beta_j+\overline{R}_{j,n}\right)\prod_{j=k+1}^{l-1}\left(1+\alpha_j + \beta_j\right)\right)\prod_{k=l+1}^{n-1}\left(1+\alpha_k + \beta_k\right)\nonumber\\
&\qquad + \sum_{l=0}^{n-1} \overline{R}_{l,n} \prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \left(\sum_{k=l+1}^{n-1}\beta_k \prod_{j=l+1}^{k-1}\left(1+\alpha_j + \beta_j\right) \prod_{j=k+1}^{n-1}\left(1+\alpha_j\right)\right).
\end{align}
Thus via \eqref{eq_WPDiffRn1} and \eqref{eq_WPDiffRn2}, we conclude the asserted formula \eqref{eq_OptStrongExpDiff} with
\begin{align}\label{eq_propWPCompR}
R_n &= \sum_{l=0}^{n-1} \left(\overline{R}_{l,n} - (I_l-I^n_l -\sigma'_l\Delta_l W\Delta/2)\right)\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k\right)\nonumber\\
&\quad + \sum_{l=0}^{n-1} \overline{R}_{l,n} \left(\sum_{k=0}^{l-1}\overline{R}_{k,n}\prod_{j=0}^{k-1}\left(1+\alpha_j + \beta_j+\overline{R}_{j,n}\right)\prod_{j=k+1}^{l-1}\left(1+\alpha_j + \beta_j\right)\right)\prod_{k=l+1}^{n-1}\left(1+\alpha_k + \beta_k\right)\nonumber\\
&\qquad + \sum_{l=0}^{n-1} \overline{R}_{l,n} \left(\sum_{k=l+1}^{n-1}\beta_k \prod_{j=l+1}^{k-1}\left(1+\alpha_j + \beta_j\right) \prod_{j=k+1}^{n-1}\left(1+\alpha_j\right)\right)\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right).
\end{align}
The mean value theorem with appropriate $\xi_k, \overline{\xi}_k \in [k/n,(k+1)/n]$ and $(Aa-Bb)^2 \leq 2(A-B)^2a^2 + 2B^2(a-b)^2$ gives
\begin{align*}
\int_{k/n}^{(k+1)/n}(\sigma(s) - \sigma_k - \sigma'_k\Delta/2)^2 \, ds &= (\sigma(\xi_k) - \sigma_k - \sigma'_k\Delta/2)^2\Delta= \left(\sigma'(\overline{\xi}_k)(\xi_k-k/n) - \sigma'_k\Delta/2\right)^2\Delta\\
&\leq 2\left(|\sigma'(\overline{\xi}_k)-\sigma'_k|^2 + (\sigma'_k)^2\right)\Delta^3,
\end{align*}
and therefore, by It\=o isometry and $|\sigma'(\overline{\xi}_k)-\sigma'_k|^2 \leq |\sigma'(\overline{\xi}_k)-\sigma'_k| 2\|\sigma'\|_{\infty}$,
\begin{align}\label{eq_RKleinsterRest}
\sum_{k=0}^{n-1}\ex\left[(I_k - I^n_k-\sigma'_k \Delta_kW\Delta/2)^2\right] \leq 2\left(2T(\sigma')\|\sigma'\|_{\infty} + \|\sigma'\|^2_{\infty}\right)\Delta^2.
\end{align}
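By the It\=o isometry, the left-hand side of \eqref{eq_RKleinsterRest} equals the deterministic quantity $\sum_{k}\int_{k/n}^{(k+1)/n}(\sigma(s)-\sigma_k-\sigma'_k\Delta/2)^2\,ds$, so the $\Oo(\Delta^2)$ rate can be checked by quadrature; a sketch with the illustrative choice $\sigma(s)=\sin(s)$ (not from the text):

```python
import math

def piecewise_error(n, sigma, dsigma, subdiv=200):
    """Midpoint quadrature of
    sum_k int_{k/n}^{(k+1)/n} (sigma(s) - sigma_k - sigma'_k * Delta/2)^2 ds,
    which by the Ito isometry equals
    sum_k E[(I_k - sigma_k Dk W - sigma'_k Dk W Delta/2)^2]."""
    delta = 1.0 / n
    h = delta / subdiv
    total = 0.0
    for k in range(n):
        t0 = k * delta
        s0, ds0 = sigma(t0), dsigma(t0)
        for j in range(subdiv):
            s = t0 + (j + 0.5) * h
            total += (sigma(s) - s0 - ds0 * delta / 2) ** 2 * h
    return total

e1 = piecewise_error(50, math.sin, math.cos)
e2 = piecewise_error(100, math.sin, math.cos)
print(e1 / e2)  # close to 4: halving Delta quarters the error, i.e. O(Delta^2)
```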
The lowest-order remaining term in the $L^2$-norm of $R^{\sigma}_{k,n}$ is $I_k^{\diamond 2} - (\sigma_k\Delta_k W)^{\diamond 2}$. Due to Wick's theorem \eqref{eq_WickTheorem}, the mean value theorem and the uniform bounds from $\sigma \in C^1([0,1])$, we obtain
\begin{align}\label{eq_RKleinsterRest2}
&\ex\left[\left(I_k^{\diamond 2} - (\sigma_k\Delta_k W)^{\diamond 2}\right)^2\right] =
\ex\left[\left(\int_{k/n}^{(k+1)/n}(\sigma(s) - \sigma_k) dW_s\diamond \int_{k/n}^{(k+1)/n}(\sigma(s) + \sigma_k) dW_s\right)^2\right]\nonumber\\
&= \left(\int_{k/n}^{(k+1)/n}(\sigma(s) - \sigma_k)^2 ds \, \int_{k/n}^{(k+1)/n}(\sigma(s) + \sigma_k)^2 ds + \left(\int_{k/n}^{(k+1)/n}(\sigma(s) - \sigma_k)(\sigma(s) + \sigma_k) ds\right)^2\right)\nonumber\\
&\leq 2\|\sigma'\|^2_{\infty}\|\sigma\|^2_{\infty}(\Delta^4/3 + \Delta^4/4) = \frac{14}{12}\|\sigma'\|^2_{\infty}\|\sigma\|^2_{\infty} \Delta^4.
\end{align}
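The second-moment identity used in \eqref{eq_RKleinsterRest2} is the Gaussian formula $\ex[(I(g)\diamond I(h))^2] = \|g\|^2\|h\|^2 + \langle g,h\rangle^2$, since the Wick product of two centered jointly Gaussian variables is $I(g)\diamond I(h) = I(g)I(h) - \langle g,h\rangle$. A Monte Carlo sanity check with an arbitrary correlated Gaussian pair (variances and covariance chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
# (X, Y) jointly Gaussian, mimicking the Wiener integrals I(g), I(h)
# with ||g||^2 = var_x, ||h||^2 = var_y, <g, h> = cov.
var_x, var_y, cov = 1.3, 0.8, 0.5
z1 = rng.normal(size=N)
z2 = rng.normal(size=N)
x = np.sqrt(var_x) * z1
y = (cov / np.sqrt(var_x)) * z1 + np.sqrt(var_y - cov**2 / var_x) * z2

# Wick product of centered jointly Gaussian variables: X <> Y = XY - Cov(X, Y).
wick = x * y - cov
mc = float(np.mean(wick**2))
exact = var_x * var_y + cov**2  # Wick's / Isserlis' theorem
print(mc, exact)
```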
We notice that all $R^{\sigma}_{k,n}$ (resp. $\overline{R}_{k,n}$, $R_n$) are independent for different $k$ (and fixed $\sigma, n$). By the uniform estimates above, we conclude
\begin{align}\label{eq_OORSigma}
&\ex\left[\left(\sum_{k=0}^{n-1}R^{\sigma}_{k,n}\right)^2\right] = \sum_{k=0}^{n-1}\ex\left[\left(R^{\sigma}_{k,n}\right)^2\right]\in \Oo(\Delta^2),\nonumber\\ &\sum_{k=0}^{n-1}\ex\left[\left(R^{\sigma}_{k,n}-(I_k - I^n_k-\sigma'_k \Delta_kW\Delta/2)\right)^2\right]
\in \Oo(\Delta^3).
\end{align}
An elementary computation gives $\ve_k^2 = |a'(\xi_k) - a'_k|^2 \Delta^4/4$ for an appropriate $\xi_k \in [k/n,(k+1)/n]$. The next-lowest terms in $R^a_{k,n}$ are of order $\Delta^5$ (e.g. $a_k \Delta \ve_k$).
Combining all these bounds in \eqref{eq_RKleinsterRest}--\eqref{eq_OORSigma}, we obtain by independence of the factors
\begin{align}\label{eq_OORDach}
&\ex\left[\left(\sum_{k=0}^{n-1}\overline{R}_{k,n}\right)^2\right] \in \Oo(\Delta^2),\quad \ex\left[\left(\sum_{k=0}^{n-1}\left(\overline{R}_{k,n} -(I_k - I^n_k-\sigma'_k \Delta_kW\Delta/2)\right)\right)^2\right] \in \Oo(\Delta^3),\nonumber\\
&\ex\left[\left(\sum_{k,l=0, k \neq l}^{n-1}\overline{R}_{k,n}\overline{R}_{l,n}\right)^2\right] \in \Oo(\Delta^4),\quad \ex\left[\left(\sum_{k,l=0, k \neq l}^{n-1}\beta_k\overline{R}_{l,n}\right)^2\right] \in \Oo(\Delta^4).
\end{align}
We observe
\begin{align*}
&\ex\left[\left(1+\alpha_k\right)^2\right] = (1+a_k\Delta)^2 +\sigma_k^2\Delta,\\
&\ex\left[\left(1+\alpha_k+\beta_k\right)^2\right] = \left(1+a_k\Delta +\frac{(a'_k +a_k^2)\Delta^2}{2}\right)^2 +(\sigma_k +(\sigma'_k+a_k \sigma_k)\Delta)^2\Delta +
\frac{\sigma_k^4\Delta^2}{4} +\frac{\sigma_k^6\Delta^3}{6}.
\end{align*}
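The first identity above can be checked deterministically; assuming the shorthand $\alpha_k = a_k\Delta + \sigma_k\Delta_k W$ with $\Delta_k W \sim N(0,\Delta)$ from \eqref{eq_ShorthandAlphaBeta} (not reproduced in this section), a Gauss--Hermite quadrature sketch with illustrative parameter values:

```python
import math
import numpy as np

# Assumed shorthand (cf. eq_ShorthandAlphaBeta, not shown here):
# alpha_k = a_k * Delta + sigma_k * Delta_k W, with Delta_k W ~ N(0, Delta).
a_k, sigma_k, delta = 0.4, 0.9, 0.01

# Gauss-Hermite quadrature for E[f(Z)], Z ~ N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(40)
def gaussian_mean(f):
    return float(np.sum(weights * f(nodes * math.sqrt(2.0))) / math.sqrt(math.pi))

second_moment = gaussian_mean(
    lambda z: (1 + a_k * delta + sigma_k * math.sqrt(delta) * z) ** 2
)
closed_form = (1 + a_k * delta) ** 2 + sigma_k**2 * delta
print(second_moment, closed_form)  # agree to quadrature precision
```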
Hence, via $1+x \leq e^x$, independence of the factors and inspection of \eqref{eq_OptStrongExpProd}, for all $0 \leq l \leq m < n$, $n \in \NN$, there is a constant $c= c(a,\sigma)<\infty$ with
\begin{align}\label{eq_WPProductL2}
&\ex\left[\prod_{k=l}^{m}\left(1+\alpha_k\right)^2\right] = \prod_{k=l}^{m} (1+a_k\Delta +\sigma_k^2\Delta) \leq e^{\|\sigma\|^2+2\int_{0}^{1}|a(u)|du+\Oo(\Delta)} <c,\nonumber\\
&\ex\left[\prod_{k=l}^{m}\left(1+\alpha_k +\beta_k\right)^2\right], \ex\left[\prod_{k=l}^{m}\left(1+\alpha_k+\beta_k+\overline{R}_{k,n}\right)^2\right] \leq e^{\|\sigma\|^2+2\int_{0}^{1}|a(u)|du+\Oo(\Delta)}<c.
\end{align}
The asymptotics of the remainder $R_n$ in \eqref{eq_propWPCompR} are based on \eqref{eq_OORDach}. Thus, with the uniform bound \eqref{eq_WPProductL2}, we conclude $\ex\left[\left(R_{n}\right)^2\right] \in \Oo(\Delta^3)$. Recalling the Mehler transform in Remark \ref{rem_Mehler}, we notice by analogous reasoning (the only difference being $\sqrt{2}\sigma$ in place of $\sigma$) that $\ex\left[\left(\Gamma(\sqrt{2})R_{n}\right)^2\right] \in \Oo(\Delta^3)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_LinSDEApproxWickWP}]
We consider the following reformulation of the difference, cf. \eqref{eq_WickWPShort}, \eqref{eq_LinSDESolShort},
\begin{align*}
(X_1 - \widetilde{X}^n_n) &= \left(F- \widetilde{F}^{(n)}\right)\diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} + \widetilde{F}^{(n)}\diamond \left(e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n\right).
\end{align*}
Therefore the limit in Theorem \ref{thm_LinSDEApproxWickWP} is reduced to the limits of the following three items:
\begin{align}\label{eq_OptStrong1}
&\ex\left[\left(X_1 - \widetilde{X}^n_n\right)^2\right] = \ex\left[\left(\left(F- \widetilde{F}^{(n)}\right)\diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right)^2\right] + \ex\left[\left(\widetilde{F}^{(n)}\diamond \left(e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n\right)\right)^2\right]\nonumber\\
&\quad + 2\ex\left[\left(\left(F- \widetilde{F}^{(n)}\right)\diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right)\left(\widetilde{F}^{(n)}\diamond \left(e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n\right)\right) \right].
\end{align}
\textit{Step (i)} : \
Thanks to Theorem \ref{thm_OptWickFuncMult} (ii - a), we immediately conclude
\begin{align*}
\lim_{n \rightarrow \infty} n^2 \, \ex\left[\left(\left(F- \widetilde{F}^{(n)}\right)\diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right)^2\right] = \frac{1}{12}\|f'\|^2 \ex\left[\left(F'(I(f)) \diamond e^{\diamond I(\sigma)}\right)^2\right] e^{2\int_{0}^{1} a(s) ds}.
\end{align*}
\textit{Step (ii)} : \
The remaining terms in \eqref{eq_OptStrong1} require the subtle reformulation of
$\left(e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n\right)$
in \eqref{eq_OptStrongExpDiff} in Proposition \ref{prop_WPComp}. Thus, via \eqref{eq_GammaRnRate}, the H\"older inequality \eqref{eq_WickProductGammaEstimate} and the assumption $F(I(f)) \in \WA$,
\begin{equation}\label{eq_AWP1}
\ex\left[(\widetilde{F}^{(n)} \diamond R_n)^2\right] \leq \ex\left[|\Gamma(\sqrt{2})\widetilde{F}^{(n)}|^2\right]\ex\left[|\Gamma(\sqrt{2})R_n|^2\right] \in \Oo(\Delta^3).
\end{equation}
Notice that all product terms in \eqref{eq_OptStrongExpDiff},
$$
\prod_{k=0}^{n-1}\left(1+\alpha_k + \beta_k+ \overline{R}_{k,n}\right), \
\prod_{k=0}^{n-1}\left(1+\alpha_k + \beta_k\right), \ \prod_{k=0}^{n-1}\left(1+\alpha_k\right)
$$
behave, by \eqref{eq_OptStrongExpProd}, \eqref{eq_WPProductL2} and the error asymptotics in \eqref{eq_OORDach}, like $e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}$ in covariance computations. Hence, due to \eqref{eq_OptStrongExpDiff}, \eqref{eq_AWP1} and Theorem \ref{thm_OptWickFuncMult} (ii - b), we conclude
\begin{align}\label{eq_AWP2}
&\lim_{n \rightarrow \infty} n^2 \, \ex\left[\left(\widetilde{F}^{(n)} \diamond \sum_{l=0}^{n-1} (I_l - I^n_l)\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k\right)\right)^2\right]\nonumber\\
&=\lim_{n \rightarrow \infty} n^2 \, \ex\left[\left(\widetilde{F}^{(n)} \diamond \left(I(\sigma) - I^n(\sigma)\right) \diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} \right)^2\right]\nonumber\\
&= \frac{1}{12}\|\sigma'\|^2 \, \ex\left[\left(F(I(f)) \diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right)^2\right].
\end{align}
The Cauchy-Schwarz inequality and \eqref{eq_AWP1}--\eqref{eq_AWP2} imply
\begin{equation}\label{eq_AWP3}
\ex\left[\left(\widetilde{F}^{(n)} \diamond \sum_{l=0}^{n-1} (I_l - I^n_l)\prod_{k=0}^{l-1}\left(1+\alpha_k + \beta_k\right) \prod_{k=l+1}^{n-1}\left(1+\alpha_k\right)\right)\left(\widetilde{F}^{(n)} \diamond R_n\right)\right]\in \Oo(\Delta^{5/2}).
\end{equation}
Hence, by \eqref{eq_OptStrongExpDiff}, \eqref{eq_AWP2}--\eqref{eq_AWP3}, we obtain
\begin{align*}
\lim_{n \rightarrow \infty} n^2 \, \ex\left[\left(\widetilde{F}^{(n)} \diamond \left(e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n\right)\right)^2\right]
&= \frac{1}{12}\|\sigma'\|^2 \,\ex\left[\left(F(I(f)) \diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right)^2\right].
\end{align*}
\textit{Step (iii)} : \ The convergence of the last term in \eqref{eq_OptStrong1} follows by reasoning analogous to Step $(ii)$ above together with Theorem \ref{thm_OptWickFuncMult} (ii - c), which gives
\begin{align*}
&\lim_{n \rightarrow \infty} n^2 \, \ex\left[\left(\left(F- \widetilde{F}^{(n)}\right)\diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right)\left(\widetilde{F}^{(n)}\diamond \left(e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds} - \mathcal{E}^n\right)\right) \right]\\
&= \lim_{n \rightarrow \infty} n^2 \, \ex\left[\left(\left(F- \widetilde{F}^{(n)}\right)\diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right)\left(\widetilde{F}^{(n)}\diamond \left(I(\sigma) - I^n(\sigma)\right) \diamond e^{\diamond I(\sigma)} e^{\int_{0}^{1} a(s) ds}\right) \right]\\
&= \frac{1}{12}\langle f', \sigma'\rangle\, \ex\left[\left(F'(I(f)) \diamond e^{\diamond I(\sigma)}\right)\left(F(I(f)) \diamond e^{\diamond I(\sigma)}\right)\right] e^{2\int_{0}^{1} a(s) ds}.
\end{align*}
Thus, combining Steps $(i)$--$(iii)$ in \eqref{eq_OptStrong1}, we conclude the asserted optimal approximation for the Wick-WP scheme.
\end{proof}
\section{Outlook on convergence rates and applications}\label{section_FurtherConvRates}
Finally we sketch some conditions for further convergence rates, give some new optimal approximation results for Skorohod integrals and present a conjecture on quasilinear Skorohod SDEs.
Thanks to \eqref{eq_InfAsConditional}, we recall
\begin{align}\label{eq_OptWienerProjection}
\ex\left[(I(f) - \widehat{I(f)}^n)^2\right] &=
\sum_{i=1}^{n} \int_{(i-1)/n}^{i/n} \left(f(s) - \left(n \int_{(i-1)/n}^{i/n} f(u) du\right)\right)^2 ds.
\end{align}
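For continuously differentiable $f$, the right-hand side of \eqref{eq_OptWienerProjection} behaves like $\|f'\|^2/(12 n^2)$, which is the source of the constant $\frac{1}{12}$ appearing in the optimal approximation results above; a quadrature sketch with the illustrative choice $f(s)=\sin(2\pi s)$:

```python
import math

def projection_error(n, f, subdiv=400):
    """sum_i int_{(i-1)/n}^{i/n} (f(s) - mean_i)^2 ds, where mean_i is the
    average of f over the i-th subinterval (midpoint quadrature)."""
    h = 1.0 / (n * subdiv)
    total = 0.0
    for i in range(n):
        pts = [(i * subdiv + j + 0.5) * h for j in range(subdiv)]
        vals = [f(s) for s in pts]
        mean = sum(vals) / subdiv
        total += sum((v - mean) ** 2 for v in vals) * h
    return total

n = 200
f = lambda s: math.sin(2 * math.pi * s)
err = projection_error(n, f)
norm_fprime_sq = 2 * math.pi**2  # int_0^1 (2*pi*cos(2*pi*s))^2 ds
print(12 * n**2 * err, norm_fprime_sq)  # close for large n
```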
\begin{proposition}\label{prop_OtherConvRates}
Suppose $f \in L^2$. Then the mean squared error behaves as follows:
for every bounded $f$ we have
$$
\ex\left[(I(f) - \widehat{I(f)}^n)^2\right] \leq 2\|f\|_{\infty}^2 n^{-1}
$$
and for $c\in \RR$, $r \in [0,1]\setminus \QQ$, we have an increasing sequence $(n_k)_{k \in \NN}$ with
$$
\ex\left[\left(I(c \eins_{[0,r)}) - \widehat{I(c \eins_{[0,r)})}^{n_k}\right)^2\right] \sim \frac{c^2}{2n_k}.
$$
\end{proposition}
\begin{proof}
Thanks to simple computations on step functions and \eqref{eq_OptWienerProjection}, we conclude the first estimate. The asymptotic relation follows immediately by \cite[Proposition 4.2]{P17a}.
\end{proof}
Proposition \ref{prop_OtherConvRates} can serve as the starting point for further optimal approximation results and error expansions for Skorohod SDEs and Skorohod integrals. The proof techniques of Section \ref{section_OptWienerChaos} apply analogously. We mention two exemplary statements.
Thanks to Proposition \ref{prop_OtherConvRates} and Theorem \ref{thm_OptWickFunc}, we have:
\begin{theorem}
Suppose $f,a,\sigma' \in L^2$ are bounded and $\tau \in (0,1)\setminus \QQ$. Then the solution of the Skorohod SDE
$$
dX_t = a(t) X_t dt + \sigma(t) X_t dW_t, \quad X_0 = e^{I(f\eins_{[0,\tau]})},
$$
satisfies
$$
\limsup_{n \rightarrow \infty} \sqrt{n}\, \ex[(X_1-\widehat{X_1}^n)^2]^{1/2} < \infty
$$
and there exists a constant $c>0$ and an increasing sequence $(n_k)_{k \in \NN}$ with
$$
\lim_{k \rightarrow \infty} \sqrt{n_k}\, \ex[(X_1-\widehat{X_1}^{n_k})^2]^{1/2} = c.
$$
\end{theorem}
\begin{remark}
Concerning the optimal approximation of Skorohod integrals, we established the following result for sufficiently regular integrands (see \cite[Theorem 21]{NP}): For a function $f \in C^{1,2,\ldots, 2}([0,1]\times \RR^K)$ with some Lipschitz and H\"older-growth conditions, fixed time points $\tau_2,\ldots, \tau_K \in [0,1]$ and the Skorohod integral
$$
I := \int_{0}^{1} f(s, W_s, W_{\tau_2}, \ldots, W_{\tau_K}) dW_s,
$$
we have
$$
\lim_{n \rightarrow \infty} n \; \ex[(I - \ex[I|W_{1/n},W_{2/n}, \ldots, W_1, W_{\tau_2}, \ldots, W_{\tau_K}])^2]^{1/2} = C,
$$
where the constant $C$ extends the corresponding constant from the It\=o case.
\end{remark}
Due to the general representations of optimal approximation in terms of Wiener integrals and their functionals in Section \ref{section_OptWienerChaos}, now we are able to deal with a larger class of integrands: Via Theorem \ref{thm_OptWickFuncMult} (i) and the techniques in \cite{NP}, we obtain in the simplest case
\begin{theorem}
Suppose $F(t,x,y) \in C^{1,2,1}([0,1]\times \RR^2;\RR)$, define the differential operator
$$
\LL := \dfrac{\partial}{\partial t} + \frac{1}{2}\dfrac{\partial^2}{\partial x^2},
$$ and suppose $f' \in BV$. Then the Skorohod integral
$$
I := \int_{0}^{1} \dfrac{\partial}{\partial x} F(s, W_s, I(f)) dW_s,
$$
exists and satisfies the optimal approximation
$$
\lim_{n \rightarrow \infty} n\, \ex[(I-\widehat{I}^n)^2]^{1/2} = \frac{1}{\sqrt{12}} \left(\int_{0}^{1} \ex\left[\left(f'(s) F_y(1,W_1,I(f)) - \LL F(s,W_s,I(f))\right)^2\right] ds\right)^{1/2}.
$$
\end{theorem}
Thanks to
the expansion
$$
\sqrt{a^2 + b^2 + \ldots} = a + \frac{b^2}{2a} + \ldots
$$
we obtain error expansions such as the following:
\begin{theorem}
Suppose $f, a, \sigma \in C^2_b([0,1])$ and the Skorohod SDE
$$
dX_t = a(t)X_t dt + \sigma(t) X_t dW_t, \quad X_0 = F(I(f)) \in \WA.
$$
Then the optimal approximation fulfills the expansion
\begin{align*}
\ex[(X_1-\widehat{X_1}^n)^2]^{1/2} &= C n^{-1} + \overline{C} n^{-2} + \Oo(n^{-3}),
\end{align*}
where $C$ is the optimal approximation constant from Theorem \ref{thm_OptSDELin} and $\overline{C} \in (0,\infty)$ can be specified by the norms in \eqref{eq_OAMultiLast} and Remark \ref{rem_WienerIntConvRateBound}.
\end{theorem}
\begin{remark}
As the proof of the error expansion in Remark \ref{rem_OptErrorExpansion} carries over immediately to the approximation scheme in Algorithm \ref{algo_WickWP2}, which is used in Theorem \ref{thm_LinSDEApproxWickWP}, we expect that similarly:
\begin{align*}
\ex[(X_1-\widetilde{X}^n_n)^2]^{1/2} &= C n^{-1} + \overline{C} n^{-2} + \Oo(n^{-3}).
\end{align*}
\end{remark}
Finally we consider more general Skorohod SDEs. We make use of a standard regularity assumption: let $a: [0,1] \times \RR \times \Omega \rightarrow \RR$ be measurable and suppose there exists an $L>0$ with
\begin{align*}
(L) \qquad \forall t \in [0,1], \ x,y\in \RR, \ \omega \in \Omega \ : \ |a(t,x,\omega) - a(t,y,\omega)| \leq L|x-y|,\quad |a(t,0,\omega)| \leq L.
\end{align*}
Then, due to a general Girsanov transform by Buckdahn, cf. \cite[Theorem 3.3.6]{Nualart}:
\begin{theorem}\label{thm_QuasilinEx}
Suppose $X_0 \in L^p(\Omega)$ for some $p>2$, $a$ satisfies (L) and $\sigma \in L^2$. Then there exists a unique stochastic process $X \in L^2(\Omega \times [0,1])$ which solves for all $t \in [0,1]$ the quasilinear Skorohod SDE
$$
dX_t = a(t,X_t) dt + \sigma(t) X_t\, dW_t.
$$
\end{theorem}
Inspired by our results, we believe that the following holds in general:
\begin{conjecture}\label{con_OptSDEQuasilin}
Suppose the assumptions in Theorem \ref{thm_QuasilinEx} are satisfied and additionally $X_0 = F(I(f)) \in \WA$, $f', \sigma' \in BV$, $a \in C^1([0,1]\times \RR)$ is nonrandom, $a_x$ is bounded. Then
$$
\lim_{n \rightarrow \infty} n \, \ex[(X_1-\widehat{X_1}^n)^2]^{1/2} \in (0,\infty).
$$
\end{conjecture}
We believe that the solution of a quasilinear Skorohod SDE can be approximated by an appropriate Wick-WP scheme as well.
However, by inspection of the proofs in Section \ref{section_Proofs}, we do not see how to overcome the difficulties posed by a general drift coefficient or by more general Skorohod SDEs as in \cite{MishuraShevchenko}.
\begin{acknowledgement}The author thanks Andreas Neuenkirch for many fruitful discussions. \end{acknowledgement}
\section{Introduction}
Human exploration of the world is never-ending, and we never know how many unknown things still exist beyond our scope. In real-world machine learning applications, we can often collect only a limited number of training instances before predicting on a large amount of unlabeled testing instances. Given the temporal and spatial constraints at the beginning, it is likely that unlabeled new instances observed after a long time involve some meaningful new categories of objects, e.g., the news classification problem studied in \cite{zhang2011serendipitous,zhuang2010d,li2007learning}, and the bacteria detection problem in \cite{akova2010machine,dundar2012bayesian}.
Basically, traditional classification models are unable to recognize new data categories, while clustering models cannot make full use of the supervised information from known categories.
An ideal model should simultaneously recognize the new data categories and assign the most appropriate category labels to the data actually from known categories, since these two processes can benefit from each other.
Existing models for such a learning scenario typically assume the number of unknown new categories is pre-specified. In \cite{zhuang2010d}, Zhuang et al. proposed a double-latent-layered Latent Dirichlet Allocation (DLDA) model, which can utilize supervised information from known categories in a generative manner. While classifying test data into categories acquired from the training data, their model can simultaneously group the remaining data into some pre-specified number of new clusters. In \cite{zhang2011serendipitous}, the so-called Serendipitous Learning (SL) model established a maximum margin learning framework that combines the classification model built upon known classes with the parametric clustering model on unknown classes. Though these methods are effective when the true number of unknown new categories is available, their performances can be significantly degraded by a vague or wrong specification of the unknown category information.
Given that assuming access to the true number of unknown categories is often impractical, in this paper we propose a Bayesian nonparametric topic model based on the hierarchical Dirichlet process \cite{teh2006hierarchical} and the notion of latent Dirichlet allocation \cite{blei2003latent} for semi-supervised text modelling beyond the predefined label space. Unlike existing methods \cite{zhuang2010d,zhang2011serendipitous}, which assume that the number of unknown new categories in the test data is known, our model can automatically infer this number via nonparametric Bayesian inference while classifying the data from known categories into their most appropriate categories. Exact inference in our model is intractable, so we provide an efficient collapsed Gibbs sampling algorithm for approximate posterior inference.
Extensive experiments on various text data sets show that: (a) compared with parametric approaches that use pre-specified true number of new categories, the proposed nonparametric approach can yield comparable performance; and (b) when the exact number of new categories is unavailable, i.e. the parametric approaches only have a rough idea about the new categories, our approach has evident performance advantages.
In the following, we first review related works, and then present the generative process of our model and its approximate inference; experimental results are discussed in detail, before we conclude the paper and point out future work.
\section{Related Work}
A special case of the problem studied in this paper is the Positive and Unlabeled (PU) learning \cite{yu2003text,li2007learning,du2014analysis}, where the goal is to identify usually valuable positive instances from a huge collection of unlabeled ones. Our model generalizes PU learning in that, it not only identifies (multiple) known category of instances but also conducts nonparametric clustering for the remaining instances. It should be noted that the identification of known categories may benefit from a proper grouping of the unknown instance categories.
Assuming accessibility to both the seen and the unseen classes in the unlabeled data, the recently proposed Generalized Zero-Shot Learning (GZSL) \cite{NIPS2018_7471} is also related to our work. However, GZSL has to leverage semantic representations such as attributes or class prototypes to bridge seen and unseen classes, while our setting here is more challenging. Moreover, it is not easy for GZSL to infer the number of unseen classes underlying the data.
Another topic closely related to ours is semi-supervised clustering \cite{bilenko2004integrating}, which exploits available knowledge to help partition unlabeled data into groups. Generally, its knowledge is represented in the form of pairwise constraints \cite{bilenko2004integrating,kulis2009semi,rangapuram2012constrained}, i.e., cannot-link and must-link, which tends to be inefficient when the number of constraints is very large. Since we assume that plenty of training instances are available from the known categories, these algorithms may suffer from efficiency problems. Moreover, violation of the constraints is usually allowed in these models, so it is not easy to map the resulting data clusters to the known classes. Instead of using constraints as supervision, we directly leverage label information in our model.
Under the nonparametric Bayesian framework, a semi-supervised determinantal clustering process was proposed in \cite{shah2013determinantal}. However, in each round of its sampling-based inference procedure, its kernelized formulation leads to cubic computational complexity w.r.t. the number of instances to be clustered, which makes it infeasible for large data sets.
In nonparametric Bayesian statistics, the Dirichlet Process (DP) is a popular stochastic process that is widely used for adaptive modelling of the data \cite{teh2011dirichlet}. Intuitively, it is a distribution over distributions, i.e. each draw from a DP is itself a distribution.
Sethuraman \cite{sethuraman1991constructive} explicitly showed that distributions drawn from a DP are discrete with probability one, that is, the random distribution $G$ distributed according to a DP with concentration parameter $\gamma$ and base distribution $H$, can be written as
\begin{center}
$G =\sum_{i=1}^\infty\pi_i\delta_{\theta_i}$,\ \ \ \ $\pi_i=v_i\prod_{j=1}^{i-1}(1-v_j)$,
\end{center}
where $\theta_i\sim H$, $v_i\sim \text{Beta}(1,\gamma)$, and $\delta_\theta$ is an atom at $\theta$.
It is clear from this formulation that $G$ is discrete almost surely, that is, the support of $G$ consists of a countably infinite set of atoms, which are drawn independently from $H$.
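The stick-breaking construction above can be simulated directly by truncation; a minimal sketch (the standard normal base distribution $H$ is an arbitrary choice for illustration):

```python
import numpy as np

def stick_breaking(gamma, num_atoms, rng):
    """Truncated stick-breaking: pi_i = v_i * prod_{j<i} (1 - v_j) with
    v_i ~ Beta(1, gamma); atoms theta_i drawn i.i.d. from the base
    distribution H (here: standard normal)."""
    v = rng.beta(1.0, gamma, size=num_atoms)
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    theta = rng.normal(size=num_atoms)  # draws from H
    return pi, theta

rng = np.random.default_rng(42)
pi, theta = stick_breaking(gamma=2.0, num_atoms=2000, rng=rng)
print(pi.sum())  # the truncated weights nearly sum to one
```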
Antoniak \cite{antoniak1974mixtures} first introduced the idea of using a DP as the prior for the mixing proportions of simple distributions, which is called the DP Mixture (DPM) model. Due to the fact that the distributions sampled from a DP are discrete almost surely, data generated from a DPM can be partitioned according to their distinct values of latent parameters $\theta_i$'s. Therefore, DPM is a flexible mixture model, in which the number of mixture components is random and grows as new data are observed.
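The growth of the number of mixture components in a DPM can be illustrated via the Chinese restaurant process, the marginal partition distribution of a DPM; the expected number of clusters grows logarithmically in the number of observations. A minimal sketch:

```python
import numpy as np

def crp_partition(n, gamma, rng):
    """Chinese restaurant process: item i joins an existing cluster with
    probability proportional to its current size, or opens a new cluster
    with probability proportional to gamma."""
    counts = []
    for _ in range(n):
        probs = np.array(counts + [gamma], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
    return counts

rng = np.random.default_rng(7)
avg_clusters = np.mean(
    [len(crp_partition(1000, gamma=1.0, rng=rng)) for _ in range(20)]
)
# For gamma = 1 the expected cluster count is the harmonic number
# H_1000, approximately 7.5.
print(avg_clusters)
```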
Teh et al. \cite{teh2006hierarchical} proposed the Hierarchical DP (HDP), which is a nonparametric Bayesian approach to the modeling of grouped data, where each group is associated with a DPM model, and where we wish to link these mixture models.
\section{Learning beyond Predefined Labels via Generative Modelling}
\subsection{Problem specification}
Assume we have a labeled training data set $\mathcal{D}_l$ from the known categories $\mathcal{K}$,
and an unlabeled test data set $\mathcal{D}_u$ which includes instances from both the known categories $\mathcal{K}$ and some unknown new categories $\mathcal{U}$.
The goal is to learn a function $f: \mathcal{D}_u \rightarrow \mathcal{K} \cup \mathcal{U}$ that maps any instance in $\mathcal{D}_u$ to its category label in $\mathcal{K} \cup \mathcal{U}$. Specifically, if an instance comes from the known categories $\mathcal{K}$, we aim to identify its true category label; meanwhile, we aim to group the instances not belonging to the known categories $\mathcal{K}$ into clusters $\mathcal{U}$.
\subsection{The Proposed Bayesian Nonparametric Topic Model}
For the problem specified above, an ideal model should simultaneously recognize the unknown new data categories and assign the most appropriate category labels to the data actually from known categories, since these two processes can benefit from each other. However, it is usually difficult to determine the number of unknown categories in advance, which makes parametric approaches that assume a pre-specified number impractical. To avoid the performance degradation caused by a vague or wrong specification of the category information, in this paper we propose a Bayesian nonparametric topic model, which can automatically infer the number of unknown new categories underlying the test data $\mathcal{D}_u$ while classifying the data from known categories $\mathcal{K}$ into their most appropriate categories. Specifically, focusing on text data, we assume the following generative process for a document corpus:
\smallskip
\begin{enumerate}
\item Draw concentration parameters $\gamma \sim \Gamma(\gamma|a_\gamma,b_\gamma)$ and $\alpha \sim \Gamma(\alpha|a_\alpha,b_\alpha)$, where $a_\cdot$ and $b_\cdot$ are the shape and scale parameter of a Gamma distribution respectively;
\item Draw a discrete distribution $G_0 \sim \text{DP}(\gamma,H)$, where the base distribution $H$ is a $L$-dimensional Dirichlet with parameter $\zeta$, and $G_0$ has countable but infinite number of atoms;
\item Draw a discrete category distribution $G_d \sim \text{DP}(\alpha,G_0)$ for the $d$-th document;
\item Choose a document category $\varphi_{dn} \sim G_d$ for the $n$-th word in the $d$-th document\footnote{Note that, the support of the discrete distribution $G_d$ consists of atoms drawn from $G_0$, the atoms of which are eventually from $H$. Thus, $\varphi_{dn}$ is a vector rather than an index.};
\item Choose a word topic index $y_{dn} \sim \text{Categorical}(\varphi_{dn})$ for the $n$-th word in the $d$-th document;
\item Draw word topics $\phi_l \sim \text{Dir}(\beta),\ l=1,...,L$ from a $P$-dimensional Dirichlet prior with parameter $\beta$, where $L$ is the number of topics and $P$ is the vocabulary size;
\item Choose a word $w_{dn} \sim \text{Categorical}(\phi_{y_{dn}})$.
\end{enumerate}
\smallskip
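The generative process above can be sketched by approximating the DP draws with finite stick-breaking; all sizes and hyper-parameter values below are arbitrary illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
L, P, T = 5, 30, 50      # number of topics, vocabulary size, DP truncation
zeta, beta = 0.5, 0.1

def stick(concentration, size):
    """Truncated, renormalized stick-breaking weights for a DP draw."""
    v = rng.beta(1.0, concentration, size=size)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w / w.sum()

gamma = rng.gamma(1.0, 1.0)                         # step 1: concentrations
alpha = rng.gamma(1.0, 1.0)
g0_weights = stick(gamma, T)                        # step 2: G_0 ~ DP(gamma, H)
g0_atoms = rng.dirichlet(np.full(L, zeta), size=T)  # atoms drawn from H
phi = rng.dirichlet(np.full(P, beta), size=L)       # step 6: word topics

def generate_document(num_words):
    # step 3: G_d ~ DP(alpha, G_0); its atoms are i.i.d. draws from the
    # discrete G_0 and are therefore shared across documents
    gd_weights = stick(alpha, T)
    gd_atoms = g0_atoms[rng.choice(T, size=T, p=g0_weights)]
    words = []
    for _ in range(num_words):
        varphi = gd_atoms[rng.choice(T, p=gd_weights)]  # step 4: category
        y = rng.choice(L, p=varphi)                     # step 5: topic index
        words.append(int(rng.choice(P, p=phi[y])))      # step 7: word
    return words

doc = generate_document(20)
print(doc)
```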
As described above, this generative model integrates the hierarchical Dirichlet process (HDP) \cite{teh2006hierarchical} with the notion of Latent Dirichlet Allocation (LDA) \cite{blei2003latent}. However, the difference from standard LDA is that, here the distribution over word topics is conditioned on document categories rather than documents.
Placing a DP prior on the document category distribution $G_d$, a document is allowed to involve an infinite number of categories. Meanwhile, assuming multiple $G_d$'s have the same discrete base distribution $G_0$ (which also has a DP prior), multiple documents not only can have their distinct categories but also have the chance to share some common ones. The actual number of categories used to model a corpus is determined by nonparametric Bayesian posterior inference. Note that, if the category label of a document is known, we can fix the corresponding category of all words in this document to the category determined by the label during posterior inference. In this way, the supervision from known categories can be injected. For any document without a known label,
we can infer the most appropriate category for each of its words, and assign this document to the category that generates most of its words.
Since the proposed model for Learning Beyond Predefined Labels (LBPL) is based on Nonparametric Topic Modelling (NTM), it will be denoted by LBPL-NTM in the sequel. The probabilistic generative process of LBPL-NTM is illustrated as a graphical model in Figure \ref{HDP-SDC-model}.
\begin{figure}[!ht]
\begin{center}
\centerline{\includegraphics[width=0.6\columnwidth, height=3.3cm]{HDP-LDA2.pdf}}
\caption{Graphical representation of the proposed LBPL-NTM model.
}
\label{HDP-SDC-model}
\end{center}
\end{figure}
Note that LBPL-NTM is conceptually different from the infinite extension of LDA presented in \cite{teh2006hierarchical}, which learns topics in a purely unsupervised manner and cannot make use of the labeled information. From a pure modeling perspective, our model introduces an additional topic index layer ($y_{dn}$) along with $L$ hidden topics into infinite LDA. It is worth mentioning that extending the single-layered infinite LDA to a two-layered model is not trivial. With the introduced topic index layer and the hidden topics serving as a low-level topic modeling module, we can interpret $G_d$ as the distribution over categories (rather than over topics as in infinite LDA) for each document, and then inject labeled information through $\varphi_{dn}$ and infer the number of unknown categories (rather than topics as in infinite LDA) automatically from the data.
Besides, LBPL-NTM also differs fundamentally from supervised topic models \cite{mcauliffe2008supervised,zhu2012medlda,zhu2014gibbs}, which train discriminative classification models in the semantic space with pre-specified category labels and cannot identify new categories underlying the test data.
The labeled LDA model proposed in \cite{ramage2009labeled} adopts a similar word-label correspondence idea by defining a one-to-one correspondence between LDA's latent topics and labels. However, it was designed to solve the multi-label problem in social bookmarking rather than to discover new data categories underlying unlabeled data, and thus also differs from our model.
The double-latent-layered LDA (DLDA) \cite{zhuang2010d} is the work most closely related to ours: its authors condition the distribution over word topics on the document categories, as in our model. By utilizing supervised information from known categories in a generative manner, their parametric model can classify unlabeled data into categories acquired from the labeled data, while simultaneously grouping data into a pre-specified number of new clusters. Though DLDA is effective when the true number of new categories is available, its performance can be significantly degraded by a wrong specification of this number. The key difference is that our nonparametric model can naturally handle the scenario where the number of new categories underlying the test data is unclear, by allowing an infinite number of categories to model the corpus.
For the model inference of LBPL-NTM, we need to compute the posterior distribution of hidden variables given the data and model hyper-parameters:
\begin{displaymath}
\begin{aligned}
p(\alpha,\gamma,\phi_l,\varphi_d,\mathbf{Y}_d|a_\alpha,b_\alpha,a_\gamma,b_\gamma,\beta,H,\mathbf{W}_d) \quad \quad \quad \quad \quad \\
=\frac{p(\alpha,\gamma,\phi_l,\varphi_d,\mathbf{Y}_d,\mathbf{W}_d|a_\alpha,b_\alpha,a_\gamma,b_\gamma,\beta,H)}{p(\mathbf{W}_d|a_\alpha,b_\alpha,a_\gamma,b_\gamma,\beta,H)}.
\end{aligned}
\end{displaymath}
However, the marginal probability in the denominator is intractable to compute. A popular way to conduct approximate posterior inference is the Markov Chain Monte Carlo (MCMC) method \cite{neal2000markov}. In the following, we will appeal to the Chinese restaurant franchise representation \cite{teh2006hierarchical} of HDP for approximate posterior sampling. Note that, the high-dimensional latent topics $\phi_l$'s and the latent category variables $\varphi_{dn}$'s are integrated out to attain efficient collapsed sampling.
\subsection{Inference by Collapsed Gibbs Sampling}
First we give a brief description of the Chinese restaurant franchise representation of HDP. In the Chinese restaurant franchise, the metaphor of the Chinese restaurant process is extended to allow multiple restaurants which share a set of dishes. A customer entering some restaurant sits at one of the occupied tables with a certain probability, and sits at a new table with the remaining probability. If the customer sits at an occupied table, he eats the dish that has already been ordered. If he sits at a new table, he needs to pick the dish for the table. The dish is picked according to its popularity among the whole franchise, while a new dish can also be tried.
To employ this representation of HDP for posterior sampling, we introduce necessary index variables.
Recall that $\varphi_{dn}$'s are random variables with distribution $G_d$.
Let $\theta_1,\cdots,\theta_K$ denote $K$ i.i.d. random variables (dishes) distributed according to $H$, and, for each $d$, let $\psi_{d1},\cdots,\psi_{dT_d}$ denote $T_d$ i.i.d. variables (tables) distributed according to $G_0$.
Then each $\varphi_{dn}$ is associated with one $\psi_{dt}$, while each $\psi_{dt}$ is associated with one $\theta_k$. Let $t_{dn}$ be the
index of the $\psi_{dt}$ associated with $\varphi_{dn}$, and let $k_{dt}$ be the index of the $\theta_k$ associated with $\psi_{dt}$. Let $s_{dt}$
be the number of $\varphi_{dn}$'s associated with $\psi_{dt}$, let $m_{dk}$ be the number of $\psi_{dt}$'s associated with $\theta_k$, and let $m_k = \sum_{d}m_{dk}$ be the number of $\psi_{dt}$'s associated with $\theta_k$ over all $d$.
For each $d$, by integrating out $G_d$ and $G_0$, we have the following conditional distributions:
\begin{equation}
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcl}
\varphi_{dn}|\varphi_{d1},\cdots,\varphi_{dn-1},\alpha,G_0\thicksim \sum\limits_{t=1}^{T_d}\frac{s_{dt}}{n-1+\alpha}\delta_{\psi_{dt}}+\frac{\alpha}{n-1+\alpha}G_0,
\end{array}
\end{equation}
\begin{equation}
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcl}
\psi_{dt}|\psi_{11},\psi_{12},\cdots,\psi_{21},\cdots,\psi_{dt-1},\gamma,H\thicksim \sum\limits_{k=1}^{K}\frac{m_{k}}{\sum_{k}m_k+\gamma}\delta_{\theta_{k}}+\frac{\gamma}{\sum_{k}m_k+\gamma}H.
\end{array}
\end{equation}
\smallskip
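As an illustration, both conditional draws above amount to sampling from a finite discrete distribution whose unnormalized weights are the current counts, with one extra slot (weighted by the concentration parameter) for a new table or dish. A minimal Python sketch, with hypothetical function names not taken from any released implementation:

```python
import random

def sample_table(s_d, alpha):
    # Eq. (1): an existing table t is chosen with weight s_dt,
    # a new table with weight alpha.
    weights = list(s_d) + [alpha]
    r = random.uniform(0, sum(weights))
    for t, w in enumerate(weights):
        r -= w
        if r < 0:
            return t  # t == len(s_d) means "sit at a new table"
    return len(s_d)

def sample_dish(m, gamma):
    # Eq. (2): an existing dish k is chosen with weight m_k
    # (table counts over the whole franchise), a new dish with weight gamma.
    weights = list(m) + [gamma]
    r = random.uniform(0, sum(weights))
    for k, w in enumerate(weights):
        r -= w
        if r < 0:
            return k  # k == len(m) means "try a new dish"
    return len(m)
```

Both draws are the "rich get richer" dynamic of the Chinese restaurant franchise: heavily occupied tables and popular dishes attract new customers with probability proportional to their counts.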
Note that, $t_{dn}$'s and $k_{dt}$'s inherit the exchangeability properties of $\varphi_{dn}$'s and $\psi_{dt}$'s, so the conditional distributions in (1) and (2) can be easily adapted to be expressed in terms of $t_{dn}$ and $k_{dt}$. In the following, we will alternately execute four steps: first sample $t_{dn}$ conditioned on all other variables, then sample $k_{dt}$ for each table of data, thirdly sample $y_{dn}$ for each word, and finally sample hyper-parameters $\gamma$ and $\alpha$. Note that, if the category label of a document is known, we fix the category index \textbf{$k$} of all words in this document to the label during the sampling process.
\vskip 0.05in
\textbf{Sampling $\mathbf{t}$.}$\ $ To compute the conditional distribution of $t_{dn}$ given the remaining variables,
we make use of exchangeability and treat $t_{dn}$ as the last variable being sampled in the last group.
Using (1), the prior probability that $t_{dn}$ takes on a particular previously seen value $t$ is proportional
to $s^{-dn}_{dt}$, whereas the probability that it takes on a new value (say $t^{new} = T_d +1$) is proportional
to $\alpha$. The likelihood of the data given $t_{dn}= t$ for some previously seen $t$ is simply $f(y_{dn}|\theta_{k_{dt}})$.
To determine the likelihood when $t_{dn}$ takes on value $t^{new}$, the simplest approach would be to generate
a sample for $k_{dt^{new}}$ from its conditional prior (2) \cite{neal2000markov}. If this value of $k_{dt^{new}}$ is itself a new value, say $k^{new} = K + 1$, we may generate a sample for $\theta_{k^{new}}$ as well.
Combining all this information, the conditional posterior distribution of $t_{dn}$ is then
\begin{equation} \label{crp2}
p(t_{dn}=t|\mathbf{t}^{-dn},\mathbf{k},\mathbf{Y},\Theta)\propto
\left\{
\begin{array}{@{}l@{\ \ \ }l}
\alpha f(y_{dn}|\theta_{k_{dt}}), & t = t^{\text{new}},\\
s_{dt}^{-dn}f(y_{dn}|\theta_{k_{dt}}), & t\ \text{appeared}.\\
\end{array}
\right .
\end{equation}
However, we show here that we do not need to store and update the $\theta$'s, i.e., we can obtain a collapsed sampler.
To compute the likelihood that $y_{dn}$ comes from the $k$-th class $\theta_k$, $1\leq k\leq K$, we first compute the posterior distribution of $\theta_{k}$ given $\mathbf{Y}_{(k)}^{-dn}$ (the elements assigned to class $k$ in $\mathbf{Y}^{-dn}$), and then integrate over this posterior. Specifically, by conjugacy the posterior of $\theta_{k}$ is also Dirichlet distributed, with parameter updated from the prior base distribution $H$ according to $\mathbf{Y}_{(k)}^{-dn}$. Let $O_{kl}$ be the number of elements in $\mathbf{Y}_{(k)}^{-dn}$ equal to $l,\ 1\leq l\leq L$; then
\begin{displaymath} \label{crp2}
\begin{aligned}
\theta_k|H,\mathbf{t}^{-dn},\mathbf{k}^{-dt_{dn}},\mathbf{Y}^{-dn}\sim \text{Dir}(\zeta+O_{k\cdot}),
\end{aligned}
\end{displaymath}
\vskip 0.05in
\noindent where $\zeta$ and $O_{k\cdot}$ are both $L$-dimensional vectors. Integrating over this posterior, we obtain the likelihood for $y_{dn}$,
\begin{equation} \label{crp2}
\begin{aligned}
f(y_{dn}|\theta_k:1\leq k\leq K)=\int \theta_{y_{dn}} \cdot \text{Dir}(\theta;\zeta+O_{k\cdot}) d\theta
=\frac{\zeta_{y_{dn}}+O_{ky_{dn}}}{\sum_l(\zeta_l+O_{kl})}.
\end{aligned}
\end{equation}
To compute the likelihood that $y_{dn}$ comes from a new $k=(K+1)$-th class $\theta_{K+1}$, we can directly integrate over the prior $H$:
\begin{equation} \label{crp2}
\begin{aligned}
f(y_{dn}|\theta_k:k=K+1)=\int \theta_{y_{dn}} \cdot \text{Dir}(\theta;\zeta) d\theta=\frac{\zeta_{y_{dn}}}{\sum_l\zeta_l}.
\end{aligned}
\end{equation}
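The two cases collapse into a single posterior-predictive rule: with $O_{kl}=0$ for all $l$ when $k$ is a new category, the expression in (5) is simply (4) with empty counts. A minimal sketch (the function name is ours, not from the paper):

```python
def collapsed_likelihood(y, O_k, zeta):
    # Posterior-predictive probability of topic index y under category k,
    # Eqs. (4)-(5): (zeta_y + O_ky) / sum_l (zeta_l + O_kl).
    # O_k: counts O_kl of topic indices already assigned to category k;
    # pass all zeros for a brand-new category, which recovers Eq. (5).
    num = zeta[y] + O_k[y]
    den = sum(z + o for z, o in zip(zeta, O_k))
    return num / den
```

For example, with $L=3$, $\zeta=(1,1,1)$ and counts $O_{k\cdot}=(2,0,1)$, the likelihood of $y_{dn}=1$ is $(1+2)/6 = 0.5$; for a new category it is the uniform prior mean $1/3$.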
\textbf{Sampling $\mathbf{k}$.}$\ $ Sampling the variables $k_{dt}$ is similar to sampling $t_{dn}$. Since changing $k_{dt}$ actually changes the component membership of all data items in table $t$, the likelihood of setting $k_{dt} = k$ is given by $\prod_{n:t_{dn}=t} f(y_{dn}|\theta_k)$, so that the conditional probability of $k_{dt}$ is
\begin{equation} \label{crp2}
\begin{aligned}
p(k_{dt} =k|\mathbf{t},\mathbf{k}^{-dt},\mathbf{Y},\Theta)\propto
\left\{
\begin{array}{@{}l@{\ \ \ }l}
\gamma \prod_{n:t_{dn}=t} f(y_{dn}|\theta_k), & k = k^{\text{new}},\\
m_{k}^{-dt}\prod_{n:t_{dn}=t} f(y_{dn}|\theta_k), & k\ \text{appeared},\\
\end{array}
\right .
\end{aligned}
\end{equation}
\noindent where $f(y_{dn}|\theta_k)$ can be computed in the same way as above.
\vskip 0.15in
\textbf{Sampling $\mathbf{Y}$.}$\ \ $
Conditioned on $\mathbf{t}$, $\mathbf{k}$ and $\mathbf{Y}^{-dn}$, the prior of $y_{dn}=l,\ 1\leq l\leq L$ is:
\begin{eqnarray}
\nonumber p(y_{dn}=l|\mathbf{t},\mathbf{k}, \mathbf{Y}^{-dn})=\int \theta_{l} \cdot \text{Dir}(\theta;\zeta+O_{k_{dt_{dn}}\cdot}) d\theta =\frac{\zeta_{l}+O_{k_{dt_{dn}}l}}{\sum_l(\zeta_l+O_{k_{dt_{dn}}l})}.
\end{eqnarray}
Let $\mathbf{W}_{(l)}^{-dn}$ denote the elements in $\mathbf{W}^{-dn}$ that are generated from topic $l$, and let $O_{lw}$ be the number of elements in $\mathbf{W}_{(l)}^{-dn}$ equal to $w,\ 1\leq w\leq P, \ 1\leq l\leq L$; then
\begin{displaymath} \label{crp2}
\begin{aligned}
\phi_l|\beta,\mathbf{Y}^{-dn},\mathbf{W}_{(l)}^{-dn}\sim \text{Dir}(\beta+O_{l\cdot}),
\end{aligned}
\end{displaymath}
\noindent where $\beta$ and $O_{l\cdot}$ are both $P$-dimensional vectors. Integrating over this posterior, we obtain the likelihood that $w_{dn}$ is generated from topic $\phi_{l}$:
\begin{eqnarray}
\nonumber f(w_{dn}|\mathbf{t},\mathbf{k}, \mathbf{Y}^{-dn}, \mathbf{W}^{-dn}) = \int \phi_{w_{dn}} \cdot \text{Dir}(\phi;\beta+O_{l\cdot}) d\phi = \frac{\beta_{w_{dn}}+O_{lw_{dn}}}{\sum_w(\beta_w+O_{lw})}.
\end{eqnarray}
\vskip 0.05in
The conditional posterior probability of $y_{dn}=l,\ 1\leq l\leq L$ is proportional to the prior times the likelihood:
\begin{equation} \label{crp2}
\begin{aligned}
p(y_{dn}=l|\mathbf{t},\mathbf{k}, \mathbf{Y}^{-dn}, \mathbf{W})\propto
\frac{\zeta_{l}+O_{k_{dt_{dn}}l}}{\sum_l(\zeta_l+O_{k_{dt_{dn}}l})}\cdot \frac{\beta_{w_{dn}}+O_{lw_{dn}}}{\sum_w(\beta_w+O_{lw})}.
\end{aligned}
\end{equation}
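Since $L$ is finite, (7) can be sampled directly by enumerating the topics. A minimal sketch under the paper's count notation (names ours; `O_cat` holds the row $O_{k_{dt_{dn}}\cdot}$ of category-topic counts and `O_top[l]` holds the topic-word counts $O_{l\cdot}$):

```python
import random

def sample_topic(w, O_cat, O_top, zeta, beta):
    # Draw y_dn from Eq. (7): weight of topic l is
    # (prior from category-topic counts) x (likelihood from topic-word counts).
    L = len(zeta)
    cat_den = sum(z + o for z, o in zip(zeta, O_cat))
    weights = []
    for l in range(L):
        prior = (zeta[l] + O_cat[l]) / cat_den
        top_den = sum(b + o for b, o in zip(beta, O_top[l]))
        lik = (beta[w] + O_top[l][w]) / top_den
        weights.append(prior * lik)
    r = random.uniform(0, sum(weights))
    for l, wt in enumerate(weights):
        r -= wt
        if r < 0:
            return l
    return L - 1  # numerical edge case
```

Note that the normalizing constant of (7) never needs to be computed explicitly: the draw uses only the unnormalized weights.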
\vskip 0.05in
\textbf{Sampling $\gamma$ and $\alpha$.}$\ $ In each iteration of our Gibbs sampling, we use the auxiliary variable method described in \cite{teh2006hierarchical} to sample $\gamma$ and $\alpha$.
\vskip 0.1in
We summarize the above approximate posterior sampling process in Algorithm 1. After this sampling process converges, we take a sample from the Markov chain and count the words assigned to each category $k=1,2,...$ for each document, and finally a document is assigned to the category that has generated most of its words.
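The final assignment step described above is a majority vote over one posterior sample; a minimal sketch (function name ours):

```python
from collections import Counter

def assign_documents(word_categories):
    # word_categories: per document, the list of per-word category indices
    # k_{d t_dn} taken from a single posterior sample after convergence.
    # Each document is assigned to the category generating most of its words.
    return [Counter(cats).most_common(1)[0][0] for cats in word_categories]
```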
\begin{algorithm} [ht] \small
\caption{Collapsed Gibbs Sampling for LBPL-NTM}
\vspace{0.03in}
\textbf{Input:} the words $\textbf{W}$, the number of topics $L$, parameter $\zeta$ of the base Dirichlet distribution $H$, the hyper-parameters $\beta$, $a_\gamma$, $b_\gamma$, $a_\alpha$, $b_\alpha$, and the maximal number of iterations $maxIter$.
\\
\textbf{Output:} $\textbf{t}$, $\textbf{k}$ and $\textbf{Y}$.
\begin{enumerate}
\item Initialize the latent variables $\textbf{t}$, $\textbf{k}$, $\textbf{Y}$, $\gamma$ and $\alpha$;
\item \textbf{for} $iter=1$ to $maxIter$ \textbf{do}
\item \ \quad Update $\textbf{t}$ according to (3), (4), and (5);
\item \ \quad Update $\textbf{k}$ according to (6), (4), and (5);
\item \ \quad Update $\textbf{Y}$ according to (7);
\item \ \quad Update $\gamma$ and $\alpha$ using the auxiliary variable method in \cite{teh2006hierarchical};
\item \textbf{end for}
\item Output $\textbf{t}$, $\textbf{k}$ and $\textbf{Y}$.
\end{enumerate}
\end{algorithm}
\subsection{Computational complexity}
In each round of our collapsed Gibbs sampling, the dominant computation is $O(|\textbf{W}_u|\cdot(|\bar{\mathbf{t}}|+|\mathbf{k}|) + |\textbf{W}_a|\cdot L)$, where $|\textbf{W}_u|$ is the total number of words in the unlabeled documents, $|\textbf{W}_a|$ is the total number of words in the entire corpus, $|\bar{\mathbf{t}}|$ is the average number of inferred word groups per document, $|\mathbf{k}|$ is the inferred number of categories, and $L$ is the specified number of topics. Generally, $|\mathbf{k}|$ and $|\bar{\mathbf{t}}|$ are very small, and $L=128$ throughout the paper\footnote{For fair comparison with the DLDA model \cite{zhuang2010d}, the number of topics $L$ is fixed to the constant 128. We empirically find that $L$ has little influence on performance (compared to the number of categories) for the learning problem studied here, as long as it is not too small or too large. This is probably due to the two-layered nature of our model.}, thus our model scales essentially linearly with the number of words in the corpus.
\vspace{0.03in}
\section{Experiments}
In this section, we evaluate the proposed LBPL-NTM model on various text corpora, including the benchmark 20 Newsgroups data set, the imbalanced TDT2 data set and the sparse ODP data set.
\subsection{Baselines and evaluation metrics}
We compare LBPL-NTM with the following algorithms:
\begin{itemize}
\item Serendipitous Learning (SL) \cite{zhang2011serendipitous}: a maximum margin learning framework that combines the classification model built upon known classes and the parametric clustering model on unknown classes;
\item DLDA \cite{zhuang2010d}: a double-latent-layered LDA model, which can utilize supervised information similar as LBPL-NTM when clustering data with pre-specified number of clusters;
\item Constrained 1-Spectral Clustering (COSC) \cite{rangapuram2012constrained}: a state-of-the-art graph-based constrained clustering algorithm, which can guarantee that all given constraints are fulfilled;
\item Semi-supervised K-means (SSKM) \cite{kulis2009semi}: clustering data with pairwise constraints in original space;
\item The unsupervised clustering package CLUTO\footnote{http://glaros.dtc.umn.edu/gkhome/cluto/cluto/download};
\item The nonparametric Bayesian unsupervised clustering model Dirichlet Process Gaussian Mixture (DPGM).
\end{itemize}
\smallskip
Two popular clustering metrics are adopted to compare the clustering quality of these algorithms: normalized mutual information (NMI) \cite{manning2008introduction} and adjusted rand index (ARI) \cite{hubert1985comparing}.
NMI measures how closely the clustering algorithm could reconstruct the label distribution underlying the data. If $A$ and $B$ represent the cluster assignments and the ground truth class assignments of the data respectively, then NMI is defined as
\begin{displaymath}
NMI = 2\cdot I(A;B)/(H(A)+H(B)),
\end{displaymath}
where $I(A;B)=H(A)-H(A|B)$ is the mutual information between $A$ and $B$, $H(\cdot)$ is the Shannon entropy, and $H(A|B)$ is the conditional entropy of $A$ given $B$.
If $a$ denotes the number of pairs of data points that are in the same cluster in $A$ and in the same class in $B$, and $b$ denotes the number of pairs of points that are in different clusters in $A$ and in different classes in $B$, then the Rand Index (RI) is given by $RI = (a + b)/C_2^D$, where $C_2^D$ is the total number of possible pairs in the dataset. Since the expected RI value of two random assignments does not take a constant value, Hubert and Arabie \cite{hubert1985comparing} proposed to discount the expected RI of random assignments by defining the ARI as
\begin{displaymath}
ARI = (RI-Expected\_RI)/(\max(RI)-Expected\_RI).
\vspace{0.05in}
\end{displaymath}
As in \cite{zhuang2010d}, we also evaluate the classification accuracy on the data from the known classes with average $F1$ measure.
For each known class, the $F1$ score can be computed as follows,
\begin{displaymath}
F1_i = 2\cdot Precision_i \cdot Recall_i/(Precision_i + Recall_i), \ i=1,...,k,
\end{displaymath}
where $Precision_i$ and $Recall_i$ are the precision and recall on the $i$-th known class. Then, we use the average $F1$ score over these $k$ known classes as the final measure.
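Both clustering metrics can be computed from the contingency table of the two labelings. A minimal stdlib sketch using plug-in (count-based) estimates; the function names `nmi` and `ari` are ours, not from the paper:

```python
import math
from collections import Counter

def _comb2(x):
    # Number of unordered pairs among x items: C(x, 2).
    return x * (x - 1) // 2

def nmi(a, b):
    # NMI = 2 I(A;B) / (H(A) + H(B)); assumes each labeling uses
    # more than one label so that both entropies are positive.
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    h = lambda c: -sum(v / n * math.log(v / n) for v in c.values())
    mi = sum(v / n * math.log((v / n) / ((pa[x] / n) * (pb[y] / n)))
             for (x, y), v in pab.items())
    return 2 * mi / (h(pa) + h(pb))

def ari(a, b):
    # ARI = (index - E[index]) / (max_index - E[index]),
    # the standard contingency-table form of pair counting.
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    index = sum(_comb2(v) for v in pab.values())
    sa = sum(_comb2(v) for v in pa.values())
    sb = sum(_comb2(v) for v in pb.values())
    expected = sa * sb / _comb2(n)
    max_index = (sa + sb) / 2
    return (index - expected) / (max_index - expected)
```

Both scores equal 1 for identical partitions (up to relabeling), and ARI is close to 0 in expectation for two random assignments.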
\subsection{Parameter settings}
In all our experiments, we set the parameters and hyper-parameters of LBPL-NTM as follows: $L=128$, $a_\gamma=1, b_\gamma=0.001$, $a_\alpha = 5$, $b_\alpha = 0.1$, $\zeta_l=1,\ l=1,...,L$, $\beta_w=0.01,\ w=1,...,P$. We run 3000 Gibbs sampling iterations to sample from the posteriors of LBPL-NTM and DLDA, and use the last sample for classification and clustering performance evaluation\footnote{Such a choice is consistent with the evaluation strategy in \cite{zhuang2010d}. Alternatively, we can also average the classification and clustering scores over multiple posterior samples.}.
The parameter settings of all compared algorithms follow the instructions in their original papers and are carefully tuned on our data sets. The similarity matrix for COSC is constructed using the cosine of the angle between each pair of documents\footnote{COSC does not work well with the $k$-NN similarity graph \cite{buhler2009spectral} on our data sets.}. For CLUTO, we use its \emph{direct} implementation for clustering with default parameter settings. PCA is used to reduce the original high dimensionality to 500 for SL and DPGM, due to efficiency concerns. Unless otherwise stated, all algorithms except DPGM and LBPL-NTM use the true number of data categories.
\subsection{Evaluation results}
\textbf{Benchmark data---20 Newsgroups:}
This data set is widely used in text categorization and clustering. It contains approximately 20,000 newsgroup documents that are evenly partitioned into twenty different newsgroups. Since some of the newsgroups are very closely related, part of the twenty newsgroups are further grouped into four top categories; e.g., the top category \emph{sci} contains the four subcategories \emph{sci.crypt}, \emph{sci.electronics}, \emph{sci.med} and \emph{sci.space}. We only retain the terms that have document frequency (DF) above 15 and are not in the stop words list. As in Table II of \cite{zhuang2010d}, we consider two kinds of 4-way learning problems---the data for each \emph{difficult} problem consist of all 4 subcategories of a top category, and the data for each \emph{easy} problem consist of 4 subcategories from different top categories. These problems are denoted E1-E4 and D1-D4 for short. For each problem, we assume supervision from the subcategories shown in bold face in Table II of \cite{zhuang2010d}, from which 40\% of the instances are sampled as training data; the remaining 60\%, together with all instances from the subcategories without supervision, are used as testing data. We independently repeat the experiments 10 times, and the averaged results over these trials are reported in Figure 2, which shows that LBPL-NTM and DLDA significantly outperform the other competitors, while performing similarly to each other. However, it should be noted that DLDA used the actual number of categories, while LBPL-NTM can automatically infer the most appropriate number from the data owing to the merits of Bayesian nonparametrics. The posterior frequencies of the numbers of categories inferred by LBPL-NTM are shown in Figure 3, where higher frequencies concentrate around the true number 4.
One may naturally question the learning performance of DLDA when the actual number of categories is not available. To this end, we further compare LBPL-NTM with DLDA, assuming that we only have a rough idea of the number of unknown categories underlying the data. Under the same settings as above, Figure 4 gives the average results over 10 independent trials on the 20 Newsgroups data set when the number of categories $K$ in DLDA is varied from $K=3$ to $K=7$ (the true number is 4). From these results we observe that 1) the clustering performance (in terms of NMI and ARI) of DLDA is quite sensitive to the pre-specified $K$, while LBPL-NTM circumvents this issue with its nonparametric prior; 2) the classification performance (F1) of DLDA seems to improve when the specified number of categories is larger, but as will be seen later this is not always true.
\smallskip
\textbf{Imbalanced data---TDT2:}
The NIST Topic Detection and Tracking (TDT2) corpus consists of data collected during the first half of 1998 from 6 sources, including 2 news wires, 2 radio programs and 2 television programs. It contains 11201 on-topic documents classified into 96 semantic categories. In our experiments, documents appearing in two or more categories were removed, and only the largest 20 categories were kept. As above, we only retain the terms that have DF above 15 and are not in the stop words list. Here we assume supervision is available for the largest 10 categories, from which 40\% of the instances are sampled as training data; the remaining 60\%, together with all instances from the categories without supervision, are used as testing data.
We independently repeat the experiments 10 times, and the averaged results over these trials are shown in Table 1. The parametric approach DLDA does not achieve its best performance when the true number of categories is pre-specified, which is probably due to the severe imbalance among the categories. Surprisingly, LBPL-NTM achieves the best results without any information about the total number of data categories. This may be due to its ability to dynamically adjust the number of data categories during its posterior sampling process. The posterior frequencies of the numbers of categories inferred by LBPL-NTM are shown in Figure \ref{NIPSresults11a}.
\begin{figure}[H]
\centering
\subfigure[NMI]{
\includegraphics[height=1.5in,width=4.3in]{ECML-NMI1}}
\subfigure[ARI]{
\includegraphics[height=1.5in,width=4.3in]{ECML-ARI1}}
\subfigure[F1]{
\includegraphics[height=1.5in,width=4.3in]{ECML-F11}}
\caption{Comparison on the 4-way learning problems (D1-D4, E1-E4) constructed from 20 Newsgroups data. All results are averaged over 10 independent trials in terms of NMI, ARI and F1.}
\vskip-0.1in
\end{figure}
\begin{figure}[H]
\centering
{\includegraphics[height=0.88in,width=2.1in]{20News_Posterior_Frequency_AAAI.pdf}}
\caption{Posterior frequencies of the inferred numbers of categories on 20 Newsgroups.}
\end{figure}
\begin{table}[!htbp]
\scriptsize
\centering
\setlength\tabcolsep{6.5pt}
\renewcommand\arraystretch{1.1}
\caption{Averaged results over 10 independent trials on the TDT2 data.}
\label{NIPSresult8}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
\hline
& \multirow{2}{*}{DPGM} & \multirow{2}{*}{CLUTO} &\multirow{2}{*}{COSC} & \multirow{2}{*}{SSKM} & \multirow{2}{*}{SL} & \multicolumn{3}{c|}{DLDA} & \multirow{2}{*}{LBPL-NTM}\\
\cline{7-9}
&&&&&&$K$=15&$K$=20&$K$=25&\\
\hline
NMI & 0.4878 & 0.8217 & 0.6042 & 0.8057 & 0.7743 & 0.8157 & 0.8173 & 0.8135 & \textbf{0.8358} \\
ARI & 0.2608 & 0.6591 & 0.4375 & 0.6665 & 0.7159 & 0.7804 & 0.7167 & 0.6788 & \textbf{0.7873} \\
F1 & - & - & - & - & 0.8443 & 0.8473 & 0.8490 & 0.8068 & \textbf{0.9075} \\
\hline
\end{tabular}
\vskip-0.1in
\end{table}
\begin{figure}[H]
\centering
\subfigure[NMI]{
\includegraphics[height=1.6in,width=4.3in]{ECML-NMI2.pdf}}
\subfigure[ARI]{
\includegraphics[height=1.6in,width=4.3in]{ECML-ARI2.pdf}}
\subfigure[F1]{
\includegraphics[height=1.6in,width=4.3in]{ECML-F12.pdf}}
\caption{Further comparison with DLDA on the 4-way learning problems constructed from 20 Newsgroups. All results are averaged over 10 independent trials in terms of NMI, ARI and F1.}
\end{figure}
\smallskip
\textbf{Sparse data---ODP:}
This data set was collected by Yin et al. \cite{yin2009exploring}, originally for web object classification by exploiting social tags. It contains 5536 web pages from 8 categories, which are detailed in Table 1 of \cite{yin2009exploring}. Since the features of each web page are the social tags on it, the data are extremely sparse. Specifically, the average number of tag words on each web page is 25.76, much smaller than that of 20 Newsgroups (more than 160). We assume that supervised information is available in the categories Books, Electronic, Health and Garden. As above, we randomly sample 40\% of the instances from these known categories as training data; the remaining 60\%, together with all instances from the categories without supervision, are used as testing data. We independently repeat the experiments 10 times, and report the averaged NMI, ARI and F1 values in Table \ref{NIPSresult10}, from which we can see that LBPL-NTM also achieves competitive performance on sparse data. The posterior frequencies of the inferred numbers of categories are shown in Figure \ref{NIPSresults11b}. Note that ODP contains a very small category, Office, which is not easy to discover due to data sparseness.
\begin{table}[!htbp]
\scriptsize
\centering
\setlength\tabcolsep{6.5pt}
\renewcommand\arraystretch{1.2}
\caption{Averaged results over 10 independent trials on the ODP data.}
\label{NIPSresult10}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
\hline
& \multirow{2}{*}{DPGM} & \multirow{2}{*}{CLUTO} &\multirow{2}{*}{COSC} & \multirow{2}{*}{SSKM} & \multirow{2}{*}{SL} & \multicolumn{3}{c|}{DLDA} & \multirow{2}{*}{LBPL-NTM}\\
\cline{7-9}
&&&&&&$K$=5&$K$=8&$K$=10&\\
\hline
NMI & 0.2825 & 0.5302 & 0.3866 & 0.5155 & 0.4523 & 0.6039 & 0.6054 & \textbf{0.6084} &0.5877 \\
ARI & 0.0966 & 0.3983 & 0.2451 & 0.4145 & 0.3684 & \textbf{0.6034} & 0.5480 & 0.5119 &0.5781 \\
F1 & - & - & - & - & 0.7045 & 0.7400 & \textbf{0.7868} & 0.7461 &0.7715 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\subfigure[TDT2]{
\label{NIPSresults11a}
{\includegraphics[height=1in,width=2.18in]{TDT2_Posterior_Frequency_AAAI2.pdf}}}
\hskip 0.1in
\subfigure[ODP] {
\label{NIPSresults11b}
{\includegraphics[height=1in,width=1.98in]{ODP_Posterior_Frequency_AAAI.pdf}}}
\caption{Posterior frequencies of the inferred numbers of categories on TDT2 and ODP.}
\end{figure}
\subsection{Time efficiency}
The core sampling procedure of LBPL-NTM was implemented in C++, and all experiments were conducted in Matlab on a desktop with a 3.60 GHz CPU. On the 4-way learning problems constructed from 20 Newsgroups, each round of our collapsed Gibbs sampling procedure takes about 0.9 seconds, slightly slower than the 0.7 seconds per sampling round of DLDA (implemented in C). We attribute this speed difference to the nonparametric nature of LBPL-NTM.
It is also observed empirically that both LBPL-NTM and DLDA run much faster than constraint-based semi-supervised clustering methods. Besides, as mentioned above, SL and DPGM are quite inefficient for high-dimensional text data, so PCA has to be used for them.
\vspace{0.05in}
\section{Conclusion and Future Work}
We proposed a nonparametric Bayesian method for learning beyond the predefined label space. Unlike existing methods which assume the number of unknown new categories in test data is known, our model can automatically infer this number via nonparametric Bayesian inference.
Empirical results show that: (a) compared with parametric approaches that use pre-specified true number of new categories, the proposed nonparametric approach yields comparable performance; and (b) when the exact number of new categories is unavailable, our approach has evident performance advantages.
Our model can be extended in several aspects, e.g., 1) adapt it to the online learning scenario with sequential Monte Carlo \cite{doucet2000sequential}; 2) explore multi-source text corpora with cross-domain learning \cite{zhuang2012mining,du2012multi}; and 3) leverage semantic representations such as attributes or class prototypes to bridge seen and unseen classes as in \cite{NIPS2018_7471}.
\smallskip
\section{Acknowledgments}
This work was supported by the National Natural Science Foundation of China (No. 61473273, 61602449, 61573335, 91546122, 61303059), Guangdong provincial science and technology plan projects (No. 2015B010109005), and the Science and Technology Funds of Guiyang (No. 201410012).
\smallskip
\section{Introduction}
\subsection{Relevance of the known asymptotics}
Arakelov invariants, such as Faltings's delta function and the self-intersection of the relative dualizing sheaf, play an important role in Arakelov geometry. In particular, the self-intersection of the relative dualizing sheaf on an arithmetic surface is an essential contribution to the Faltings height of the Jacobian \cite{MoretBailly}. It is known that suitable upper bounds for this invariant lead to important applications in arithmetic geometry. For instance, an analogue of the Bogomolov--Miyaoka--Yau inequality for arithmetic surfaces implies an effective version of the Mordell conjecture \cite{MoretBailly0}. Also, sufficiently good upper bounds for the self-intersection of the relative dualizing sheaf are crucial in the work of B.~Edixhoven and his co-authors, when estimating the running time of his algorithm for determining Galois representations \cite{Edixhoven}. However, establishing such bounds turned out to be a complicated problem. The known bounds have been established so far by the work of Abbes, Ullmo, Michel, Edixhoven, de Jong, and others, see, e.g.,~\cite{AbbesUllmo}, \cite{CurillaKuehn}, \cite{Edixhoven}, \cite{Javanpeykar}, \cite{deJong}, \cite{MichelUllmo}. They all deal with the calculation of the self-intersection of the relative dualizing sheaf for regular models of certain modular curves, Fermat curves, Belyi curves, or hyperelliptic curves, either establishing bounds or computing explicit asymptotics for this numerical invariant.
In their influential work \cite{AbbesUllmo}, A.~Abbes and E.~Ullmo considered the modular curves
$X_{0}(N)/\mathbb{Q}$. Denoting by $\mathcal{X}_{0}(N)/\mathbb{Z}$ a minimal regular model for $X_{0}(N)/\mathbb{Q}$, and by $\overline{\omega}_{\mathcal{X}_{0}(N)/\mathbb{Z}}$ the relative dualizing sheaf on $\mathcal{X}_{0}(N)$, equipped with the Arakelov metric, they established the asymptotics
\begin{align}
\label{bound-abbesullmo}
\overline{\omega}_{\mathcal{X}_0(N)/\mathbbm{Z}}^2 \sim
3g_{\overline{\Gamma}_0(N)}\log(N),
\end{align}
as $N\rightarrow\infty$, where $N$ is assumed to be square-free with $2,3\nmid N$ and such that the genus $g_{\overline{\Gamma}_0(N)}$ of $X_0(N)$ is greater than zero (see \cite{AbbesUllmo} and \cite[Th\'e{}or\`e{}me 1.1]{MichelUllmo}). As a consequence of this result, P.~Michel and E.~Ullmo obtained in \cite{MichelUllmo} an asymptotics for Zhang's admissible self-intersection number \cite{Zhang}, and with this they proved an effective version of the Bogomolov conjecture for the curve $X_0(N)/\mathbbm{Q}$. Namely, if $h_{\mathrm{NT}}$ denotes the N\'e{}ron--Tate height on the Jacobian \cite{BombieriGubler}, then for all $\varepsilon>0$ and sufficiently large $N$, the set $$\{x\in X_0(N)(\overline{\mathbbm{Q}}) \mid h_{\mathrm{NT}}(x)\leq (2/3-\varepsilon)\log(N)\}$$ is finite, whereas the set $$\{x\in X_0(N)(\overline{\mathbbm{Q}}) \mid h_{\mathrm{NT}}(x)\leq (4/3+\varepsilon)\log(N)\}$$ is infinite. Recently, Banerjee--Borah--Chaudhuri \cite{BanerjeeBorahChaudhuri} removed the square-free condition on $N$ and proved that \eqref{bound-abbesullmo} and the Bogomolov conjecture hold for curves $X_0(p^2)/\mathbbm{Q}$, with $p$ a prime number.
By its very definition, the self-intersection of the relative dualizing sheaf on modular curves is the sum of a geometric part that encodes the finite intersection of divisors coming from the cusps, and an analytic part which is given in terms of the Arakelov Green's function evaluated at these cusps. The leading term $3g_{\overline{\Gamma}_0(N)}\log(N)$ in the asymptotics \eqref{bound-abbesullmo} is the sum of $g_{\overline{\Gamma}_0(N)}\log(N)$ that comes from the geometric part, and $2g_{\overline{\Gamma}_0(N)}\log(N)$ that comes from the analytic part. J.~Kramer conjectured that this result can be generalized to arbitrary modular curves, that is, that the main term in the asymptotics of the self-intersection of the relative dualizing sheaf for a semi-stable model of modular curves of level $N$ and genus $g$ is $3g\log(N)$, as $N$ tends to infinity. Following the lines of proof in \cite{AbbesUllmo}, Kramer's Ph.D. student H.~Mayer \cite{Mayer} obtained a positive answer to this conjecture for the case of modular curves $X_1(N)/\mathbb{Q}$. More precisely, for square-free $N$ of the form $N =N^{\prime}\cdot q\cdot r$, where $q, r> 4$ are different primes, he proved the asymptotics
$$
\overline{\omega}_{\mathcal{X}_1(N)/\mathbbm{Z}}^2 \sim 3g_{\overline{\Gamma}_1(N)}\log(N),
$$
as $N\to\infty$; here, $\mathcal{X}_1(N)/\mathbbm{Z}$ is a minimal regular model for $X_1(N)/\mathbb{Q}$. From this asymptotics he was then able to deduce the validity of the Bogomolov conjecture in this case.
\vskip 2mm
\subsection{Main result}
In this article, we establish asymptotics for the self-intersection of the relative dualizing sheaf on modular curves $X(N)$, as the level $N$ tends to infinity. Following mostly the lines of proof in \cite{AbbesUllmo}, we will first obtain a formula for this Arakelov invariant, in which the geometric and analytic parts are explicitly given. Then, we proceed to compute the asymptotics for each of these parts. Our main result is the following asymptotics (see Theorem \ref{id:main result of the paper})
$$
\frac{1}{\varphi(N)}\overline{\omega}_{\mathcal{X}(N)/\mathbbm{Z}[\zeta_N]}^2 \sim 2g_{\overline{\Gamma}(N)}\log(N),
$$
as $N\to\infty$, where $N\geq 3$ is a composite, odd, and square-free integer; here, $\mathbbm{Z}[\zeta_N]$ denotes the ring of integers of the cyclotomic field $\mathbb{Q}(\zeta_N)$ with a primitive $N$-th root of unity $\zeta_N$, and $\mathcal{X}(N)/\mathbb{Z}[\zeta_N]$ is a minimal regular model for $X(N)/\mathbb{Q}(\zeta_N)$. It turns out that the right hand side $2g_{\overline{\Gamma}(N)}\log(N)$ in the asymptotics comes from the analytic part, which partially confirms Kramer's conjecture.
The difference between our result and the previous cases lies in the underlying minimal regular model, where all the computations take place; namely, our model is a $\mathbbm{Z}[\zeta_N]$-scheme which is not semi-stable. The analogous computations for a semi-stable model are a topic of our future research.
\subsection{Sketch of proof}
To prove the main result, we follow the strategy given in the work of Abbes and Ullmo \cite{AbbesUllmo}. The proof starts by observing that $\overline{\omega}^2_{\mathcal{X}(N)/\mathbb{Z}[\zeta_{N}]}/\varphi(N)$ is the sum of an explicit geometric contribution $\mathcal{G}(N)$ and an explicit analytic contribution $\mathcal{A}(N)$ (see Proposition \ref{prop:self-intersection explicit formula}). In particular, using results of Manin--Drinfeld \cite{Elkik} and Faltings--Hriljac \cite[Theorem 3.1.]{Hriljac}, we show that
\begin{align*}
\mathcal{G}(N) &= \frac{2g_{\overline{\Gamma}(N)}(V_0,V_{\infty})_{\mathrm{fin}} - (V_0,V_0)_{\mathrm{fin}} - (V_{\infty},V_{\infty})_{\mathrm{fin}}}{2(g_{\overline{\Gamma}(N)}-1)}.
\end{align*}
We refer the reader to Section \ref{section6.1} for the definitions of the vertical divisors $V_0$ and $V_{\infty}$. From this, a straightforward computation yields the asymptotics (see Proposition \ref{asymptotics:geometric part})
\begin{align*}
\frac{1}{\varphi(N)}\mathcal{G}(N) = o(g_{\overline{\Gamma}(N)}\log(N)),
\end{align*}
as $N\to\infty$. Furthermore, the analytic contribution $\mathcal{A}(N)$ is given in terms of the
Arakelov (or canonical) Green's function $g_{\mathrm{Ar}}(\cdot,\cdot)$, evaluated at the corresponding cusps, and equals
\begin{align*}
\mathcal{A}(N) &= 4g_{\overline{\Gamma}(N)}(g_{\overline{\Gamma}(N)}-1)\sum_{\sigma:\mathbbm{Q}(\zeta_N) \hookrightarrow \mathbbm{C}} g_{\mathrm{Ar}}(0^{\sigma},\infty^{\sigma}).
\end{align*}
To derive the desired asymptotics for $\mathcal{A}(N)$, one first uses a fundamental identity by Abbes--Ullmo, which expresses the Arakelov Green's function $g_{\mathrm{Ar}}(q_1,q_2)$ at the cusps $q_1$ and $q_2$ of $X(N)$ in terms of the scattering constant $\mathscr{C}_{q_1q_2}$ at $q_1$ and $q_2$, the hyperbolic volume $v_{\overline{\Gamma}(N)}$ of $X(N)$, the constant term in the Laurent expansion at $s=1$ of the Rankin--Selberg transforms $\mathscr{R}_{q_1}$ resp. $\mathscr{R}_{q_2}$ of the Arakelov metric at $q_1$ and $q_2$, respectively, and a contribution $\mathscr{G}$ involving the constant term of the automorphic Green's function $G_s(z,w)$ at $s=1$; we refer the reader to \eqref{dfn:Scattering constant}, \eqref{dfn:RankinSelberg constant}, and \eqref{def_mathscrG} for precise definitions.
More precisely, one has (see \eqref{id:Abbes--Ullmo formula})
\begin{align*}
g_{\mathrm{Ar}}(q_1,q_2) = -2\pi\,\mathscr{C}_{q_1q_2} - \frac{2\pi}{v_{\overline{\Gamma}(N)}} + 2\pi(\mathscr{R}_{q_1} + \mathscr{R}_{q_2}) + 2\pi\,\mathscr{G}.
\end{align*}
The relevant scattering constants $\mathscr{C}_{q_1q_2}$ are computed, e.g., in \cite{GradosPippich}, and the asymptotics for $\mathscr{G}$ follows from bounds proven in \cite{JorgensonKramer}. Furthermore, we show that to deal with the terms $\mathscr{R}_{q_1} + \mathscr{R}_{q_2}$ for the cusps in question one is reduced to computing only $\mathscr{R}_{\infty}$ (see Lemma \ref{lem:reduction step}). To provide the explicit formula for $\mathscr{R}_{\infty}$ given in Proposition \ref{lem:R_infty}, we apply methods from the spectral theory of automorphic functions, namely, we represent $\mathscr{R}_{\infty}$ in terms of a particular automorphic kernel. Decomposing this automorphic kernel into terms involving hyperbolic and parabolic elements, $\mathscr{R}_{\infty}$ is divided into hyperbolic and parabolic contributions $\mathscr{R}_{\infty}^{\mathrm{hyp}}$ and $\mathscr{R}_{\infty}^{\mathrm{par}}$ (see identity \eqref{eqn:automorphic interpretation}). The computations of $\mathscr{R}_{\infty}^{\mathrm{hyp}}$ and $\mathscr{R}_{\infty}^{\mathrm{par}}$ are the technical heart of the paper. Note that to determine the hyperbolic contribution $\mathscr{R}_{\infty}^{\mathrm{hyp}}$, we proceed differently from Abbes--Ullmo and we reduce the calculation of the residue of the corresponding zeta function determined by hyperbolic elements to the well-known residue of the zeta function of a suitably generalized id\`ele class group of the quadratic extension. From this, it is finally shown that (see Proposition \ref{id:asymptotics of analytical contrib})
\begin{align*}
\frac{1}{\varphi(N)}\mathcal{A}(N) = 2g_{\overline{\Gamma}(N)}\log(N) + o(g_{\overline{\Gamma}(N)}\log(N)),
\end{align*}
as $N\to\infty$. Adding up the asymptotics for the geometric and for the analytic contribution then proves the main result.
\subsection{Outline of the article}
The paper is organized as follows. In Section 2, we set our main notation and review basic facts on modular curves $X(N)$, non-holomorphic Eisenstein series, the spectral theory of automorphic forms, and the Arakelov metric. The core of this part is the identity \eqref{eqn:automorphic interpretation}, which states that the Rankin--Selberg constant of the Arakelov metric $\mathscr{R}_{\infty}$ at the cusp $\infty$ is essentially the sum of two contributions $\mathscr{R}_{\infty}^{\mathrm{hyp}}$ and $\mathscr{R}_{\infty}^{\mathrm{par}}$, associated to the hyperbolic and parabolic elements of $\overline{\Gamma}(N)$, respectively. In Section 3, we turn our attention to the minimal regular model $\mathcal{X}/S$ of the curve $X(N)$, where $S=\mathrm{Spec}(\mathbbm{Z}[\zeta_N])$ with $\zeta_N$ a primitive $N$-th root of unity. Here, in Proposition \ref{prop:Model description}, we describe the good and bad fibers of $\mathcal{X}/S$, and briefly recall the moduli interpretation of the cusps of $X(N)$. Further, in Proposition \ref{prop:iota iso}, we describe how the cusps $0$ and $\infty$ are mapped on $X(N)$ under a given embedding $\mathbbm{Q}(\zeta_N) \xhookrightarrow{} \mathbbm{C}$. The importance of this result will become clear in the proof of Proposition \ref{id:asymptotics of analytical contrib}. In Section 4, we begin with the study of the zeta function associated to a hyperbolic element of $\overline{\Gamma}(N)$. In Proposition \ref{id:identity of zeta functions}, we show that this zeta function is, in fact, equal to a partial zeta function of a suitable narrow-ray ideal class, up to a holomorphic function. Using this fact, we then proceed to compute an asymptotics for the hyperbolic contribution $\mathscr{R}_{\infty}^{\mathrm{hyp}}$. This is done in Proposition \ref{prop:hyperbolic contribution}.
In Section 5, we deal with the computation of the parabolic contribution $\mathscr{R}_{\infty}^{\mathrm{par}}$, and for this we proceed in the same way as in \cite{AbbesUllmo}, adapting the results therein to our case. Finally, in Section 6, we state our main result in Theorem \ref{id:main result of the paper}, and for the proof, we first provide a formula for the self-intersection of the relative dualizing sheaf in Proposition \ref{prop:self-intersection explicit formula}, where the geometric and analytic parts are explicitly given. Then, using the computations from previous sections, we determine the desired asymptotics.
\subsection{Acknowledgements}
We want to express our sincere gratitude to J.~Kramer, who encouraged us to work on this fascinating topic, for his patience, support, and for his guidance that led to the solution of the analytical part. In the same spirit, we are much indebted to B.~Edixhoven for his generosity and support in the development of the results presented in Section 3 of this paper. We also want to thank P.~Bruin for his valuable comments on Section 3. Both authors acknowledge support from the International DFG Research Training Group \textit{Moduli and Automorphic Forms: Arithmetic and Geometric Aspects} (GRK 1800). Finally, the second named author would like to acknowledge support from the LOEWE research unit ``Uniformized structures in arithmetic and geometry'' of Technical University Darmstadt and Goethe-University Frankfurt.
\section{Background material}
\subsection{The modular curve $X(N)$}
Let $\mathbbm{H} \coloneqq \{z=x+iy \in\mathbbm{C} \mid x,y\in\mathbb{R}; y>0\}$ be the hyperbolic upper half-plane endowed with the hyperbolic volume form $\mu_{\mathrm{hyp}}(z) \coloneqq \mathrm{d}x\,\mathrm{d}y / y^2$, and let $\mathbbm{H}^* \coloneqq \mathbbm{H} \sqcup \mathbbm{P}^{1}(\mathbbm{R})$ be the union of $\mathbbm{H}$ with its topological boundary. The group $\mathrm{PSL}_2(\mathbbm{R})$ acts on $\mathbbm{H}^*$ by fractional linear transformations. This action is transitive on $\mathbbm{H}$, since $z=x+iy=n(x)a(y)i$ with
\begin{align*}
n(x) \coloneqq \begin{pmatrix}1&x\\0&1 \end{pmatrix}, \quad a(y) \coloneqq \begin{pmatrix}y^{1/2} & 0 \\ 0 & y^{-1/2}\end{pmatrix}.
\end{align*}
By abuse of notation, we represent an element of $\mathrm{PSL}_2(\mathbbm{R})$ by a matrix. We set $I\coloneqq\begin{psmallmatrix} 1&0\\0&1\end{psmallmatrix}\in\mathrm{PSL}_2(\mathbbm{R})$.
Throughout the article, let $N\geq 3$ be a composite, odd, and square-free integer and let $\Gamma\coloneqq\overline{\Gamma}(N)$ denote the principal congruence subgroup of the modular group $\mathrm{PSL}_2(\mathbbm{Z})$. By $\Gamma_z \coloneqq \{\gamma\in\Gamma \mid \gamma{}z=z\}$ we denote the stabilizer subgroup of a point $z\in\mathbbm{H}^*$ with respect to $\Gamma$.
\vskip 2mm
The quotient space $Y(N)\coloneqq \Gamma\backslash\mathbbm{H}$ resp.~$X(N) \coloneqq \Gamma\backslash\mathbbm{H}^*$ admits the structure of a Riemann surface and a compact Riemann surface of genus $g_{\Gamma}$, respectively. The hyperbolic volume form $\mu_{\mathrm{hyp}}(z)$ naturally defines a $(1,1)$-form on $Y(N)$, which we still denote by $\mu_{\mathrm{hyp}}(z)$, since it is $\mathrm{PSL}_2(\mathbbm{R})$-invariant on $\mathbbm{H}$. Thus, the hyperbolic volume of $Y(N)$ is given by $v_{\Gamma} \coloneqq \int_{Y(N)}\mu_{\mathrm{hyp}}(z)$. The genus $g_{\Gamma}$ and the hyperbolic volume $v_{\Gamma}$ are related by the identity
\begin{align*}
g_{\Gamma} = 1 + \frac{v_{\Gamma}}{4\pi}\bigg( 1-\frac{6}{N} \bigg).
\end{align*}
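As a quick numerical illustration (not part of the argument), the identity above can be combined with the classical index formula $[\mathrm{PSL}_2(\mathbbm{Z}):\overline{\Gamma}(N)] = \tfrac{N^3}{2}\prod_{p\mid N}(1-p^{-2})$ for $N>2$, together with $v_{\Gamma} = \tfrac{\pi}{3}\,[\mathrm{PSL}_2(\mathbbm{Z}):\overline{\Gamma}(N)]$, to evaluate $g_{\Gamma}$; a minimal Python sketch (function names are ours):

```python
from math import pi

def prime_divisors(n):
    """Distinct prime divisors of n, by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def index_principal(N):
    """Index of Gamma_bar(N) in PSL_2(Z) for N > 2: (N^3 / 2) * prod_{p | N} (1 - 1/p^2)."""
    mu = N ** 3
    for p in prime_divisors(N):
        mu = mu * (p * p - 1) // (p * p)  # exact, since p^2 divides N^3
    return mu // 2

def genus_X(N):
    """Genus via g = 1 + (v_Gamma / (4 pi)) * (1 - 6/N), with v_Gamma = index * pi / 3."""
    v = index_principal(N) * pi / 3
    return round(1 + (v / (4 * pi)) * (1 - 6 / N))
```

For instance, this recovers the genus $3$ of the Klein quartic $X(7)$ and the genus $26$ of $X(11)$.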
By abuse of notation, we identify $X(N)$ with a fundamental domain $\mathcal{F}_{\Gamma}\subset\mathbbm{H}^*$; thus, at times we will identify points of $X(N)$ with their pre-images in $\mathbbm{H}^*$.
\vskip 2mm
A cusp of $X(N)$ is the $\Gamma$-orbit of a parabolic fixed point of $\Gamma$ in $\mathbbm{H}^*$. By $P_{\Gamma}\subseteq \mathbbm{P}^1(\mathbbm{Q})$ we denote a complete set of representatives for the cusps of $X(N)$ and we write $p_{\Gamma} \coloneqq \#P_{\Gamma}$. We will always identify a cusp of $X(N)$ with its representative in $P_{\Gamma}$. Hereby, identifying $\mathbbm{P}^1(\mathbbm{Q})$ with $\mathbb{Q}\cup\{\infty\}$, we write elements of $\mathbbm{P}^1(\mathbbm{Q})$ as $\alpha/\beta$ for $\alpha,\beta\in\mathbb{Z}$, not both equal to $0$, and we always assume that $\mathrm{gcd}(\alpha,\beta)=1$; we set $1/0:=\infty$. Given a cusp $q=\alpha/\beta \in P_{\Gamma}$, we choose the scaling matrix for $q$ by $\sigma_q\coloneqq g_qa(w_q) \in \mathrm{PSL}_2(\mathbbm{R})$, where $g_q \coloneqq \begin{psmallmatrix} \alpha & * \\ \beta & *\end{psmallmatrix} \in \mathrm{PSL}_2(\mathbbm{Z})$ and $w_q \coloneqq [\mathrm{PSL}_2(\mathbbm{Z})_q : \Gamma_q]$. Let $U\subset\mathbbm{Z}$ be a set of representatives for $(\mathbbm{Z}/N\mathbbm{Z})^{\times} / \{\pm I\}$ containing $1\in\mathbbm{Z}$. Given $\xi \in U$, the element $0_{\xi} \coloneqq n(N-1)p(\xi)\gamma_N \infty \in \mathbbm{P}^1(\mathbbm{Q})$ defines a cusp of $\Gamma$, where $p(\xi) \coloneqq \begin{psmallmatrix} \xi & 1 \\ rN & \tilde{\xi} \end{psmallmatrix}$ such that $\xi\tilde{\xi} - rN = 1$ with $\tilde{\xi},r\in\mathbbm{Z}$, and $\gamma_N \coloneqq \begin{psmallmatrix}0&-1 \\ 1& 0 \end{psmallmatrix}$.
\subsection{Eisenstein series, scattering constants, and the Rankin--Selberg transform}
\label{subsect:Eisenstein Scattering RaSe}
For a given cusp $q\in P_{\Gamma}$ with scaling matrix $\sigma_q\in\mathrm{PSL}_2(\mathbbm{R})$, the non-holomorphic Eisenstein series associated to $q$
is defined by
\begin{align*}
E_{q}(z,s) \coloneqq
\sum_{\gamma\in\Gamma_q\backslash\Gamma} \mathrm{Im}(\sigma_q^{-1}\gamma z)^s,
\end{align*}
where $z\in\mathbbm{H}$ and $s\in\mathbbm{C}$ with $\mathrm{Re}(s)>1$. The Eisenstein series
$E_{q}(z,s)$ is $\Gamma$-invariant in $z$, and holomorphic in $s$ for $\mathrm{Re}(s)>1$. Furthermore, $E_q(z,s)$ admits a meromorphic continuation to the complex $s$-plane with a simple pole at $s=1$ with residue equal to $v_{\Gamma}^{-1}$. Now, for $u\in U$, consider the subset $M(u)\subset\mathbbm{Z}^2$ given by
\begin{align}
\label{dfn:Subset M(u)}
M(u) \coloneqq \{(m,n)\in\mathbbm{Z}^2 \mid (m,n)\equiv (0,u)\,\mathrm{mod}\,N\},
\end{align}
which is invariant under right multiplication by a matrix of $\Gamma$. Since we have $w_{\infty} = N$, $g_{\infty} = I$, and $\sigma_{\infty} = a(N)$, the Eisenstein series $E_{\infty}(z,s)$ associated to $\infty$ can be conveniently written as follows
\begin{align*}
E_{\infty}(z,s) = \frac{1}{N^s}\sum_{u\in U} D_u(s)\bigg(\sum_{(m,n)\in M(u)} \frac{y^s}{|mz+n|^{2s}}\bigg),
\end{align*}
where $D_u(s)$ is the Dirichlet series defined by
\begin{align}
\label{dfn:Dirichlet series Du(s)}
D_u(s) \coloneqq \sum_{\substack{d=1 \\ du\equiv 1\,\mathrm{mod}\,N}}^{\infty}\frac{\mu(d)}{d^{2s}},
\end{align}
with $\mu(d)$ denoting the M\"obius function. At $s=1$, the series $D_u(s)$ admits the following Laurent expansion
\begin{align}
\label{expansion:sum of D_u(s)}
\sum_{u\in U} D_u(s) = \frac{1}{\pi}\bigg(\frac{v_{\Gamma}}{N^3}\bigg)^{-1} + O(s-1).
\end{align}
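Since $v_{\Gamma} = \tfrac{\pi}{3}\cdot\tfrac{N^3}{2}\prod_{p\mid N}(1-p^{-2})$, the leading coefficient in \eqref{expansion:sum of D_u(s)} equals $6/(\pi^2\prod_{p\mid N}(1-p^{-2})) = \sum_{\gcd(d,N)=1}\mu(d)/d^2$. The following Python snippet is a numerical sanity check of this constant (a truncated sum, illustrative only):

```python
from math import gcd, pi

def mobius(n):
    """Moebius function mu(n), via trial division."""
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n has a squared prime factor
            res = -res
        d += 1
    if n > 1:
        res = -res
    return res

def mobius_sum(N, cutoff=50000):
    """Truncation of sum_{d >= 1, gcd(d, N) = 1} mu(d) / d^2 (tail is O(1/cutoff))."""
    return sum(mobius(d) / (d * d) for d in range(1, cutoff) if gcd(d, N) == 1)

def closed_form(N):
    """6 / (pi^2 * prod_{p | N} (1 - 1/p^2))."""
    prod, m, p = 1.0, N, 2
    while p <= m:
        if m % p == 0:
            prod *= 1 - 1 / (p * p)
            while m % p == 0:
                m //= p
        p += 1
    return 6 / (pi ** 2 * prod)
```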
\vskip 2mm
Let $q_1, q_2\in P_{\Gamma}$ be two cusps, not necessarily distinct, with scaling matrices $\sigma_{q_1},\sigma_{q_2}\in\mathrm{PSL}_2(\mathbbm{R})$, respectively. The scattering function $\varphi_{q_1q_2}(s)$ at the cusps $q_1$ and $q_2$ is defined by
\begin{align*}
\varphi_{q_1q_2}(s) &\coloneqq \sqrt{\pi}\,\frac{\Gamma\left(s-\frac{1}{2}\right)}{\Gamma(s)}
\sum_{c=1}^{\infty}c^{-2s}
\,\bigg(\sum_{\substack{ d\,\mathrm{mod}\,c \\
\begin{psmallmatrix}*&*\\c&d \end{psmallmatrix}\in\sigma_{q_1}^{-1} \Gamma \sigma_{q_2}}} 1 \bigg),
\end{align*}
where $s\in\mathbbm{C}$ with $\mathrm{Re}(s)>1$ and $\Gamma(s)$ denotes the Gamma function. The Eisenstein series admits the following Fourier expansion
\begin{align}
\label{id:Fourier Expansion Eisenstein}
E_{q_1}(\sigma_{q_2}z,s) = \delta_{q_1q_2}y^s + \varphi_{q_1q_2}(s)y^{1-s} + \sum_{n\neq 0}\varphi_{q_1q_2}(n;s)y^{1/2}K_{s-1/2}(2\pi|n|y)e^{2\pi{}inx},
\end{align}
where $\delta_{q_1q_2}$ is the Kronecker delta, $K_{\mu}$ is the modified Bessel function of the second kind, and $$\varphi_{q_1q_2}(n;s) \coloneqq \frac{2\pi^s}{\Gamma(s)}|n|^{s-1/2} \sum_{c>0} c^{-2s} \bigg( \sum_{\substack{d\,\mathrm{mod}\,c \\ \begin{psmallmatrix} *&*\\c&d \end{psmallmatrix} \in \sigma_{q_1}^{-1}\Gamma\sigma_{q_2}}} e^{2\pi{}i\,dn/c}\bigg).$$ The scattering function $\varphi_{q_1q_2}(s)$ is holomorphic for $s\in\mathbbm{C}$ with $\mathrm{Re}(s)>1$ and admits a meromorphic continuation to the complex $s$-plane. At $s=1$, there is always a simple pole of $\varphi_{q_1q_2}(s)$ with residue equal to $v_{\Gamma}^{-1}$. The constant $\mathscr{C}_{q_1q_2}$ given by
\begin{align}
\label{dfn:Scattering constant}
\mathscr{C}_{q_1q_2} \coloneqq \lim_{s\rightarrow 1}\bigg( \varphi_{q_1q_2}(s) - \frac{v_{\Gamma}^{-1}}{s-1} \bigg)
\end{align}
is called the scattering constant at the cusps $q_1$ and $q_2$.
\vskip 2mm
Let $f:\mathbbm{H}\longrightarrow\mathbbm{C}$ be a $\Gamma$-invariant function which is of rapid decay at a cusp $q\in P_{\Gamma}$, i.e., the 0-th coefficient $a_0(y,q)$ in the Fourier expansion of $f(\sigma_q{}z)$ satisfies $a_0(y,q) = O(y^{-C})$ for all $C>0$, as $y\rightarrow\infty$. For $s\in\mathbbm{C}$ with $\mathrm{Re}(s)>1$, the Rankin--Selberg transform of $f$ at $q$ is given by the integral
\begin{align*}
\mathcal{R}_q[f](s) \coloneqq \int_{\mathcal{F}_{\Gamma}}f(z) E_q(z,s) \mu_{\mathrm{hyp}}(z).
\end{align*}
The function $\mathcal{R}_q[f](s)$ possesses a meromorphic continuation to the complex $s$-plane having a simple pole at $s=1$, with residue equal to $(1/v_{\Gamma})\int_{\mathcal{F}_{\Gamma}}f(z)\mu_{\mathrm{hyp}}(z)$. Furthermore, we have the following useful identity
\begin{align*}
\mathcal{R}_{q}[f](s) = \int_0^{\infty}a_0(y,q)y^{s-2}\mathrm{d}y.
\end{align*}
\vskip 2mm
\subsection{The spectral expansion and automorphic kernels}
\label{subsect:Spectral Expansion}
Let $k$ be either 0 or 2 and fix it. Let $z\in\mathbbm{H}$ and $\gamma=\begin{psmallmatrix} a&b\\c&d\end{psmallmatrix}\in\mathrm{PSL}_2(\mathbbm{R})$. Given a function $f:\mathbbm{H} \longrightarrow \mathbbm{C}$, we define $f|[\gamma;k]$ by $$(f|[\gamma;k])(z) \coloneqq j_{\gamma}(z;k)^{-1}f(\gamma z),$$ where $j_{\gamma}(z;k) \coloneqq ((cz+d)/|cz+d|)^k$ is the weight-$k$ automorphy factor. A function $f:\mathbbm{H} \longrightarrow \mathbbm{C}$ is an automorphic function of weight $k$ with respect to $\Gamma$ if the equality $(f|[\gamma;k])(z) = f(z)$ holds for all $\gamma\in\Gamma$. Denote by $L^2(Y(N),k)$ the Hilbert space consisting of all automorphic functions of weight $k$ with respect to $\Gamma$ that are measurable and square integrable, endowed with the inner product given by $\langle f,g \rangle \coloneqq \int_{\mathcal{F}_{\Gamma}} f(z)\overline{g(z)} \mu_{\mathrm{hyp}}(z)$. The hyperbolic Laplacian of weight $k$ is defined by
\begin{align*}
\Delta_{\mathrm{hyp},k} \coloneqq -y^2\bigg( \frac{\partial^2}{\partial{}x^2} + \frac{\partial^2}{\partial{}y^2}\bigg)+iky\frac{\partial}{\partial{}x}.
\end{align*}
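The automorphy factor $j_{\gamma}(z;k)$ satisfies the cocycle relation $j_{\gamma_1\gamma_2}(z;k) = j_{\gamma_1}(\gamma_2 z;k)\,j_{\gamma_2}(z;k)$, which is what makes $f\mapsto f|[\gamma;k]$ a group action. A small Python check of this relation (the matrices below are arbitrary illustrative elements of $\mathrm{SL}_2(\mathbbm{Z})$, written as $4$-tuples $(a,b,c,d)$):

```python
def apply(g, z):
    """Fractional linear action of g = (a, b, c, d) on a point z of the upper half-plane."""
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

def j(g, z, k):
    """Weight-k automorphy factor ((c z + d) / |c z + d|)^k."""
    a, b, c, d = g
    w = c * z + d
    return (w / abs(w)) ** k

def mul(g1, g2):
    """Product of two 2x2 matrices stored as 4-tuples (a, b, c, d)."""
    a1, b1, c1, d1 = g1
    a2, b2, c2, d2 = g2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)
```

For example, with $\gamma_1 = \begin{psmallmatrix}0&-1\\1&0\end{psmallmatrix}$, $\gamma_2 = \begin{psmallmatrix}1&1\\1&2\end{psmallmatrix}$, and $z = 1+2i$, both sides of the cocycle relation agree to machine precision.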
Considering the raising and lowering Maass operators of weight $k$, namely
\begin{align}
\label{dfn:Uppering lowering operators}
U_k \coloneqq \frac{k}{2} + iy\frac{\partial}{\partial{}x} + y \frac{\partial}{\partial{}y}, \qquad L_k \coloneqq \frac{k}{2} + iy\frac{\partial}{\partial{}x} - y \frac{\partial}{\partial{}y},
\end{align}
respectively, the following identities hold
\begin{align*}
\Delta_{\mathrm{hyp},k} = L_{k+2}U_k-\frac{k}{2}\bigg(1+\frac{k}{2} \bigg) = U_{k-2}L_k + \frac{k}{2}\bigg(1-\frac{k}{2}\bigg).
\end{align*}
Since the hyperbolic Laplacian $\Delta_{\mathrm{hyp},k}$ of weight $k$ defines a symmetric and essentially self-adjoint operator, it extends to a unique self-adjoint operator $\Delta_k$ on a suitable domain. Consequently, there exists a countable orthonormal set $\{\psi_j\}_{j=0}^{\infty}$ of eigenfunctions of $\Delta_0$, in fact eigenfunctions of $\Delta_{\mathrm{hyp},0}$, such that for all $f\in L^2(Y(N),k)$, we have the spectral expansion
\begin{align*}
f(z) = \sum_{j=0}^{\infty}\langle f,\psi_j\rangle \psi_j(z) + \frac{1}{4\pi}\sum_{q\in P_{\Gamma}}\int_{-\infty}^{\infty}\bigg\langle f,E_{q,k}\bigg(\cdot\,,\frac{1}{2}+ir\bigg) \bigg\rangle E_{q,k}\bigg(\cdot\,,\frac{1}{2}+ir\bigg) \mathrm{d}r,
\end{align*}
which converges in the norm topology; moreover, if $f$ is smooth and bounded, then the expansion converges uniformly on compacta of $\mathbbm{H}$. Here, $E_{q,k}(z,s)$ denotes the weight-$k$ Eisenstein series of $\Gamma$ at the cusp $q\in P_{\Gamma}$ (see, e.g., \cite[p.~291]{Roelcke}). For $k=0$, this is the Eisenstein series $E_{q}(z,s)$ from Section \ref{subsect:Eisenstein Scattering RaSe}, but if $k=2$, then $E_{q,2}(z,s)$ is determined by $E_{q}(z,s)$ via the raising Maass operator $U_0$ given by \eqref{dfn:Uppering lowering operators}, namely
\begin{align}
\label{id:Eisenstein between weights}
U_0(E_{q}(z,s)) = s E_{q,2}(z,s),
\end{align}
where $s$ is not a pole of $E_{q}(z,s)$. In particular, using \eqref{id:Fourier Expansion Eisenstein} and \eqref{id:Eisenstein between weights}, the following Fourier expansion can be deduced
\begin{align*}
E_{\infty,2}(\sigma_{\infty}z,s) &= y^s + \varphi_{\infty\infty,2}(s)y^{1-s} \\[2mm]
&+ \frac{y^{1/2}}{s} \sum_{n\neq 0} \bigg( \bigg(\frac{1}{2}-2\pi{}ny\bigg)K_{s-1/2}(2\pi|n|y) + y\frac{\partial}{\partial{}y}K_{s-1/2}(2\pi|n|y)\bigg) \varphi_{\infty\infty}(n;s)e^{2\pi{}inx}, \nonumber
\end{align*}
where we have set $$\varphi_{\infty\infty,2}(s) \coloneqq \frac{1-s}{s}\varphi_{\infty\infty}(s).$$
\vskip 2mm
Next, we fix real numbers $T>0$ and $A>1$, once and for all. Consider the holomorphic function $h_T(r)$ defined on the strip $|\mathrm{Im}(r)|<A/2$ by
\begin{align}
\label{dfn:h(r) test function}
h_T(r) \coloneqq \exp\bigg(-T\bigg(\frac{1}{4}+r^2\bigg)\bigg).
\end{align}
Let $\phi_k$ denote the inverse Selberg--Harish--Chandra transform of weight $k$ of $h_T(r)$, namely, we have
\begin{align*}
\phi_k(x) \coloneqq -\frac{1}{\pi}\int_{-\infty}^{\infty}Q'(x+t^2)\left(\frac{\sqrt{x+1+t^2}-t}{\sqrt{x+1+t^2}+t}\right)^{k/2}\mathrm{d}t
\end{align*}
where $x\geq0$ and
\begin{align*}
Q\left(\frac{1}{4}(e^u + e^{-u}-2)\right) = \frac{1}{2}g(u), \quad g(u) = \frac{1}{2\pi}\int_{-\infty}^{\infty}h_T(r)e^{-iru}\mathrm{d}r.
\end{align*}
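For the specific choice \eqref{dfn:h(r) test function}, the Fourier transform $g(u)$ can be evaluated in closed form by completing the square in the Gaussian integral, namely $g(u) = e^{-T/4}\,e^{-u^2/(4T)}/(2\sqrt{\pi T})$. The following Python snippet cross-checks this numerically (illustrative only):

```python
from math import exp, pi, sqrt, cos

def h(r, T):
    """The test function h_T(r) = exp(-T (1/4 + r^2))."""
    return exp(-T * (0.25 + r * r))

def g_numeric(u, T, R=12.0, n=24001):
    """Trapezoidal evaluation of (1/2 pi) * int_{-R}^{R} h_T(r) cos(r u) dr
    (the odd sine part of e^{-iru} integrates to zero)."""
    dr = 2 * R / (n - 1)
    total = 0.0
    for k in range(n):
        r = -R + k * dr
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * h(r, T) * cos(r * u)
    return total * dr / (2 * pi)

def g_closed(u, T):
    """Closed form exp(-T/4) * exp(-u^2 / (4 T)) / (2 * sqrt(pi * T))."""
    return exp(-T / 4) * exp(-u * u / (4 * T)) / (2 * sqrt(pi * T))
```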
For $z,w\in\mathbbm{H}$, the weight-$k$ point pair invariant $\pi_k(z,w)$ associated to $h_T(r)$ is given by
\begin{align*}
\pi_k(z,w) \coloneqq \bigg(\frac{w-\overline{z}}{z-\overline{w}}\bigg)^{k/2}\phi_k(u(z,w)),
\end{align*}
where $u(z,w) \coloneqq |z-w|^2/4\mathrm{Im}(z)\mathrm{Im}(w)$. With this, we define the weight-$k$ automorphic kernel by
\begin{align*}
K_k(z,w) \coloneqq \sum_{\gamma \in \Gamma} j_{\gamma}(z;k)\pi_k(z,\gamma w).
\end{align*}
Now, let $\mathcal{S}_2(\Gamma)$ be the $\mathbbm{C}$-vector space of dimension $g_{\Gamma}$ consisting of cusp forms of weight 2 with respect to $\Gamma$, endowed with the Petersson inner product. Once and for all, we fix an orthonormal basis $\{f_1, \ldots, f_{g_{\Gamma}}\}$ of $\mathcal{S}_2(\Gamma)$ and write $\lambda_j \coloneqq 1/4 + r_j^2$, with $r_j\in\mathbbm{C}$, for the corresponding eigenvalue of $\psi_j$. Then, the following spectral expansions hold
\begin{align}
\label{id:spec decom aut kernels}
\begin{split}
K_0(z,w) &= \sum_{j=0}^{\infty}h_T(r_j)\psi_j(z)\overline{\psi}_j(w) + S_0(z,w), \\
K_2(z,w) &= \sum_{j=1}^{g_{\Gamma}}\mathrm{Im}(z)\mathrm{Im}(w)f_j(z)\overline{f_j}(w) + \sum_{j=1}^{\infty}\frac{h_T(r_j)}{\lambda_j}(U_0\psi_j)(z)\overline{(U_0\psi_j)}(w) + S_2(z,w),
\end{split}
\end{align}
where $U_0$ is the raising Maass operator of weight 0 given by \eqref{dfn:Uppering lowering operators}, and
\begin{align*}
S_k(z,w) \coloneqq \frac{2-k}{2}v_{\Gamma}^{-1} + \frac{1}{4\pi}\sum_{q\in P_{\Gamma}}\int_{-\infty}^{\infty}h_T(r)E_{q,k}\bigg(z,\frac{1}{2}+ir\bigg)\overline{E_{q,k}}\bigg(w,\frac{1}{2}+ir\bigg) \mathrm{d}r.
\end{align*}
In the sequel, we write $K_k(z)$ for $K_k(z,z)$ and $S_k(z)$ for $S_k(z,z)$. Letting $$\nu_k(\gamma;z) \coloneqq j_{\gamma}(z;k)\pi_k(z,\gamma z),$$ we have
$$
K_k(z) = K_k^{\mathrm{hyp}}(z) + K_k^{\mathrm{par}}(z)
$$
with
$$
K_k^{\mathrm{hyp}}(z) \coloneqq \sum_{\substack{\gamma\in\Gamma \\ |\mathrm{tr}(\gamma)|>2 }}\nu_k(\gamma;z), \quad
K_k^{\mathrm{par}}(z) \coloneqq \sum_{\substack{\gamma\in\Gamma \\ |\mathrm{tr}(\gamma)| = 2 }} \nu_k(\gamma;z).
$$
Note that, by abuse of notation, the element $I$ is included in $K_{k}^{\mathrm{par}}(z)$. With this, we define
\begin{align*}
\mathcal{H}(z) & \coloneqq K_2^{\mathrm{hyp}}(z) - K_0^{\mathrm{hyp}}(z), \\[3mm]
\mathcal{P}(z) & \coloneqq (K_2^{\mathrm{par}}(z) - S_2(z)) - (K_0^{\mathrm{par}}(z) - S_0(z)), \\
\mathcal{T}(z) &\coloneqq \sum_{j=1}^{\infty}\frac{h_T(r_j)}{\lambda_j}|(U_0\psi_j)(z)|^2 - \sum_{j=0}^{\infty}h_T(r_j)|\psi_j(z)|^2.
\end{align*}
\subsection{The Arakelov metric and the Arakelov Green's function}
\label{subsect:Arakelov metric}
For $z\in \mathbbm{H}$, the Arakelov metric $F(z)$ is the $\Gamma$-invariant function defined on $\mathbbm{H}$ given by
\begin{align*}
F(z) \coloneqq \frac{\mathrm{Im}(z)^2}{g_{\Gamma}}\sum_{j=1}^{g_{\Gamma}}|f_j(z)|^2.
\end{align*}
It can be verified that $F(z)$ is of rapid decay at all cusps $q\in P_{\Gamma}$, and that $\int_{\mathcal{F}_{\Gamma}}F(z)\mu_{\mathrm{hyp}}(z) = 1$. Moreover, if $z\in\mathbbm{H}$ and $\alpha\in\mathrm{GL}_2^+(\mathbbm{Q})$ such that $\alpha^{-1}\Gamma\alpha = \Gamma$, then the identity $F(\alpha z) = F(z)$ holds. Following the lines of Abbes--Ullmo, one obtains the following key identity for $F(z)$. First, by definition, we have $$K_2(z) - K_0(z)= \mathcal{H}(z) + K_2^{\mathrm{par}}(z) - K_0^{\mathrm{par}}(z).$$ Using the spectral decompositions given by \eqref{id:spec decom aut kernels}, we then obtain
\begin{align*}
K_2(z) - K_0(z) &= g_{\Gamma}F(z) + \mathcal{T}(z) + S_2(z) - S_0(z).
\end{align*}
These two identities for the difference $K_2(z) - K_0(z)$ therefore yield the identity
\begin{align}
\label{id:spectral interpretation Arakelov metric}
F(z) = \frac{1}{g_{\Gamma}}\bigg(-\mathcal{T}(z) + \mathcal{H}(z) + \mathcal{P}(z)\bigg).
\end{align}
In the sequel, we refer to $\mathcal{H}(z)$ resp.~$\mathcal{P}(z)$ as the hyperbolic contribution and parabolic contribution of the Arakelov metric, respectively.
\vskip 2mm
The Rankin--Selberg constant of the Arakelov metric at $q$ is defined by
\begin{align}
\label{dfn:RankinSelberg constant}
\mathscr{R}_{q} \coloneqq \lim_{s\rightarrow 1}\bigg( \mathcal{R}_q[F](s) - \frac{v_{\Gamma}^{-1}}{s-1}\bigg).
\end{align}
For $q=\infty$, taking the Rankin--Selberg transform on both sides of \eqref{id:spectral interpretation Arakelov metric} and grouping the constant terms of the Laurent expansion at $s=1$, we obtain the following identity
\begin{align}
\label{eqn:automorphic interpretation}
\mathscr{R}_{\infty} = \frac{1}{g_{\Gamma}}\bigg( -\frac{v_{\Gamma}^{-1}}{2}\sum_{j=1}^{\infty}\frac{h_T(r_j)}{\lambda_j} + \mathscr{R}_{\infty}^{\mathrm{hyp}} + \mathscr{R}_{\infty}^{\mathrm{par}}\bigg).
\end{align}
Here, $\mathscr{R}_{\infty}^{\mathrm{hyp}}$ resp.~$\mathscr{R}_{\infty}^{\mathrm{par}}$
denotes the constant term in the Laurent expansion at $s=1$ of $\mathcal{R}_{\infty}[\mathcal{H}](s)$ and $\mathcal{R}_{\infty}[\mathcal{P}](s)$, respectively, i.e., we have
\begin{align*}
\begin{split}
\mathscr{R}_{\infty}^{\mathrm{hyp}} &\coloneqq \lim_{s\rightarrow 1}\bigg(\mathcal{R}_{\infty}[\mathcal{H}](s) - \frac{v_{\Gamma}^{-1}}{s-1}\int_{\mathcal{F}_{\Gamma}}\mathcal{H}(z)\mu_{\mathrm{hyp}}(z)\bigg), \\
\mathscr{R}_{\infty}^{\mathrm{par}} &\coloneqq \lim_{s\rightarrow 1}\bigg(\mathcal{R}_{\infty}[\mathcal{P}](s) - \frac{v_{\Gamma}^{-1}}{s-1}\int_{\mathcal{F}_{\Gamma}}\mathcal{P}(z)\mu_{\mathrm{hyp}}(z)\bigg).
\end{split}
\end{align*}
\vskip 2mm
In the sequel, we refer to the constants $\mathscr{R}_{\infty}^{\mathrm{hyp}}$ and $\mathscr{R}_{\infty}^{\mathrm{par}}$ as the hyperbolic and parabolic contributions of $\mathscr{R}_{\infty}$, respectively.
\vskip 2mm
Next, the canonical volume form on $\mathbbm{H}$ is given by $\mu_{\mathrm{can}}(z) \coloneqq F(z) \mu_\mathrm{hyp}(z)$. Since $F(z)$ and $\mu_{\mathrm{hyp}}(z)$ are $\Gamma$-invariant on $\mathbbm{H}$, the canonical volume form $\mu_{\mathrm{can}}(z)$ is naturally defined on $Y(N)$. Furthermore, $\mu_{\mathrm{can}}(z)$ extends to a $(1,1)$-form on $X(N)$, since it remains smooth at the cusps $q\in P_{\Gamma}$. In addition, observe that $\int_{X(N)}\mu_{\mathrm{can}}(z)=1$.
\vskip 2mm
Finally, the Arakelov Green's function $g_{\mathrm{Ar}}$ is the function defined on $X(N)\times X(N)$ which is smooth outside the diagonal and characterized by the following conditions
\begin{enumerate}[(i)]
\setlength\itemsep{0.5em}
\item $\displaystyle \frac{1}{i\pi}\partial_z\partial_{\overline{z}}g_{\mathrm{Ar}}(z,w) = \mu_{\mathrm{can}}(z)$ for $z\neq w$;
\item $g_{\mathrm{Ar}}(z,w) = \log|z-w|^2 + O(1)$, as $z\rightarrow w$, in terms of a local coordinate near $w$;
\item $\displaystyle \int_{X(N)}g_{\mathrm{Ar}}(z,w)\,\mu_{\mathrm{can}}(z) = 0$ for all $w\in X(N)$.
\end{enumerate}
intersection between P9's orbital path and the galactic plane, where higher source density can impede detection (see Figure \ref{fig:radec}).
A higher mass and thus more distant Planet Nine will require a dedicated survey along the predicted orbital path \citep{brownbatygin2016}, but with a lower limit to the brightness of 24th magnitude, such an object is readily observable by the current generation of telescopes with wide field cameras, such as the Dark Energy Camera on the Blanco 4m telescope in Chile and the Hyper-Suprime Camera on the Subaru telescope in Hawaii. Finally, all but the very faintest possible Planet Nine will be observable with the Large Synoptic Survey Telescope (LSST), currently under construction in Chile and scheduled for operations in 2022. Therefore, Planet Nine -- if it exists as described here -- is likely to be discovered within a decade.
\subsection{Infrared and Microwave Surveys}
While detecting sunlight reflected at optical wavelengths might seem like the natural method for searching for Planet Nine, the $1/r^{4}$ dependency of the flux of reflected sunlight makes the brightness of any object drop precipitously with distance. At long wavelengths, thermal emission, on the other hand, drops as only $T/r^{2}$, where $T$ could be approximately constant with distance (or drop as $1/\sqrt{r}$ if the planet is in thermal equilibrium with the sun). For sufficiently distant planets, thermal emission constitutes a potentially preferable avenue towards direct detection. While current cosmological experiments at millimeter wavelengths have sufficient sensitivity to detect Planet Nine \citep{cowan2016}, a systematic search would require millimeter telescopes with both high sensitivity and high angular resolution to robustly detect moving sources. The proposed CMB S-4, a next-generation cosmic microwave background experiment \citep{2016arXiv161002743A}, could fulfill these requirements. Such an observatory would be sensitive not only to Planet Nine, but to putative even more distant bodies that might be present in the solar system.
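The scaling comparison above can be made concrete with a toy calculation (a sketch under stated assumptions: Rayleigh--Jeans thermal emission, and an equilibrium temperature falling off as $r^{-1/2}$):

```python
def reflected_dimming(r_near, r_far):
    """Reflected sunlight: 1/r^2 out to the planet and 1/r^2 back to Earth,
    so the flux drops as 1/r^4."""
    return (r_far / r_near) ** 4

def thermal_dimming(r_near, r_far, equilibrium=False):
    """Long-wavelength (Rayleigh-Jeans) thermal flux ~ T / r^2.  With internal
    heating T is roughly constant (exponent 2); in equilibrium with sunlight
    T ~ r^(-1/2), giving exponent 5/2.  Assumed scalings, for illustration."""
    n = 2.5 if equilibrium else 2.0
    return (r_far / r_near) ** n
```

Moving a planet from 30 AU to 700 AU dims reflected light by a factor of $(700/30)^4 \approx 3\times10^{5}$, but constant-temperature thermal emission by only $(700/30)^2 \approx 5\times10^{2}$.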
It is worth noting that Planet Nine could emit more strongly than a blackbody at some wavelengths. \citet{fortney2016} found that in extreme cases the atmosphere of Planet Nine could be depleted of methane and have an order of magnitude more emission in the 3-4 $\mu$m range. \citet{meisner2017b} exploited this possibility to search for Planet Nine in data from the Wide-field Infrared Survey Explorer (WISE). In particular, they examined 3$\pi$ steradians and placed a strongly atmospheric-model-dependent constraint on the presence of a high-mass Planet Nine at high galactic latitudes.
\subsection{Gravitational Detection}
A separate approach towards indirect detection of Planet Nine was explored by \citet{fienga2016}. Employing ranging data from the Cassini spacecraft, these authors sought to detect Planet Nine's direct gravitational signature in the solar system ephemeris (somewhat akin to the technique \citealt{leverriera, leverrierb} used to discover Neptune; section \ref{sec:findneptune}). Adopting the P9 parameters from \citet{phattie}, they were able to immediately constrain P9 to the outer $\sim50\%$ of the orbit (in agreement with observational constraints; \citealt{brownbatygin2016}). Moreover, the calculations of \citet{fienga2016} point to a small reduction in the residuals of the ranging data if the true anomaly of P9 is taken to be $\nu_9\approx118^{\circ}$. This line of reasoning was further explored by \citet{holmanpayne16a, holmanpayne16b}, who additionally considered the long baseline ephemerides of Pluto to place additional constraints on the sky-location of Planet Nine.
The reanalysis of the ephemeris carried out by \citet{folknerdps}, however, highlights the sensitivity of Fienga et al.'s results to the specifics of the underlying dynamical model, suggesting that the gravitational determinations of Planet Nine's on-sky location are in reality considerably less precise than advocated by \citet{fienga2016, holmanpayne16a, holmanpayne16b}. An additional complication pertinent to this approach was recently pointed out by \citet{pitjevapitjev}, who caution that failure to properly account for the mass contained within the resonant and classical Kuiper belt (which they determine to be on the order of $2 \times 10^{-2}\,M_{\oplus}$) can further obscure P9's gravitational signal in the solar system's ephemeris. Particularly, \citet{pitjevapitjev} find that the anomalous acceleration due to a $0.02\,M_{\oplus}$ Kuiper belt is essentially equivalent to that arising from a $m_9=10\,M_{\oplus}$ planet at a heliocentric distance of $r=540\,$AU (as measured through the perturbation to Saturn's orbit), further dimming the prospects of teasing out Planet Nine's gravitational signal from spacecraft data.
\section{Formation Scenarios}
\label{sec:formation}
In terms of both physical and orbital characteristics, the inferred properties of Planet Nine are certainly unlike those of any other planet of the solar system. Recent photometric and spectroscopic surveys of planets around other stars \citep{kepler2010,kepler2013}, however, have conclusively demonstrated that $m\sim5-10\,M_{\oplus}$ planets are exceedingly common around solar-type stars, and likely represent one of the dominant outcomes of the planet conglomeration process\footnote{Jovian-class objects like Jupiter and Saturn, on the other hand, are comparatively rare and are believed to reside within 20 AU in only $\sim 20\%$ of sun-like stars \citep{cumming2008}.}. A moderately excited orbital state (and in particular, a high eccentricity) is also not uncommon among long-period extrasolar planets, and is a relatively well-established byproduct of post-nebular dynamical relaxation of planetary systems \citep{juric2008}. Nevertheless, the formation of Planet Nine represents a formidable problem, primarily due to its large distance from the sun.
To attack this speculative issue, several different origin scenarios have been proposed and are discussed in this section. The first option is for the planet to form {\sl in situ}, via analogous formation mechanism(s) responsible for the known giant planets (section \ref{sec:insitu}). Another option is for Planet Nine to form in the same annular region as the other giant planets, and then be scattered outward into its present orbit (section \ref{sec:forminside}). Yet another possibility is for the planet to originate from another planetary system within the solar birth cluster, and then be captured during the early evolution of the solar system (section \ref{sec:capture}). While all of these scenarios remain in play (Figure \ref{fig:formation}), each is characterized by non-trivial shortcomings, as discussed below.
\subsection{In Situ Formation}
\label{sec:insitu}
Perhaps the most straightforward model for the origin of Planet Nine is for it to form {\sl in situ}, at its present orbital location. An attractive feature of this scenario is the fact that it does not require any physical processes beyond conglomeration itself. The advantages, however, stop there. Generally speaking, the timescale over which planetary building blocks (pebbles, planetesimals) amass into multi-Earth mass objects\footnote{While gravitational instability of the early solar nebula provides an alternate means of forming planets, it is irrelevant to the problem at hand, since the mass scale for objects generated through this channel is of order $10\,M_{J}$ or higher (e.g., \citealt{rafikov2005}).} is set by the orbital period at the location of the forming planet \citep{pebblereview}. With a $\sim10,000\,$year orbital period (corresponding to $a\sim500\,$AU), a forming Planet Nine would only complete $\sim300$ revolutions around the sun within the typical lifetime of a protoplanetary disk \citep{jesus}. The degree to which this slow orbital clock impedes growth is illustrated by the calculations of \citet{kenyonbromley2016}, who find that even under exceptionally favorable conditions, the formation of super-Earths at hundreds of AU requires billions of years.
Another shortcoming of {\sl in situ} formation concerns the availability of planet-forming material at large heliocentric distances. Various lines of evidence indicate that the solar system did not originate in isolation, and instead formed within a cluster of $10^3-10^4$ stars (\citealt{al2001,zwart2009,adams2010,pfalzner2015}). Such a cluster environment can be highly disruptive to the outer regions of circumstellar disks and hence to planet formation. At minimum, a number of authors have shown that over a timescale of $\sim10\,$Myr, passing stars are expected to truncate the disk down to a radius of $\sim300$ AU, about one third of the minimum impact parameter \citep{heller,ostriker}. More importantly, these clusters also produce intense FUV radiation fields that evaporate circumstellar disks, removing all of the material beyond $\sim30-40$ AU over a time scale of 10 Myr \citep{adams2004}. Observational evidence supports this picture and indicates that disks in cluster environments experience some radiation-driven truncation (e.g., \citealt{kanderson}). Moreover, even in regions of distributed star formation, where external photoevaporation is unlikely to play a defining role, observations find that typical disk radii are only of order 100 AU \citep{haisch2001,andrews2009,andrews2010}. As a result, both observational and theoretical considerations suggest that the early Solar Nebula was unlikely to have extended much farther than the current orbit of Neptune at $a\sim30\,$AU (see also \citealt{kretke2012}). Forming Planet Nine at a radius of $\sim500\,$AU is thus strongly disfavored.
\begin{figure}[tbp]
\centering
\includegraphics[width=1.0\textwidth]{formation_schemes.pdf}
\caption{Above, we show not-to-scale schematics for the three possible mechanisms by which Planet Nine could have been formed and placed in its current orbit in the solar system. (top panel) In \emph{in situ} formation, Planet Nine forms in its current distant orbit while the protoplanetary disk is still present, and resides there throughout the history of the solar system. (middle panel) If Planet Nine forms among the outer planets in the solar system, it could subsequently be scattered outwards onto a high-eccentricity orbit through interactions with the other solar system planets. Then, its orbit could be circularized through interactions with passing stars. (bottom panel) If Planet Nine originally formed around a host star other than the sun, a subsequent close encounter between this other star and the sun could result in Planet Nine being captured into its current-day long-period orbit around the sun.}
\label{fig:formation}
\end{figure}
\subsection{Formation Among the Giant Planets}
\label{sec:forminside}
A somewhat more natural origin scenario is for Planet Nine to form within the region that produces the known giant planets (i.e., the annulus defined roughly by 5\,AU\,$\lower.5ex\hbox{$\; \buildrel < \over \sim\;$} a \lower.5ex\hbox{$\; \buildrel < \over \sim\;$}$\,30\,AU), and to be scattered out later. The physics of the planet formation process is notoriously stochastic, and the number of planets produced in a given planet-forming region cannot be calculated in a deterministic manner \citep{morbyraymond}, meaning that additional ice giants other than Uranus and Neptune may have occupied the outer solar system during its infancy. To this end, analytic arguments put forth by \citet{Peter2004} suggest that the outer solar system could have started out with as many as five ice giants. Along similar lines, numerical models for the agglomeration of Neptune and Uranus through collisions among large planetary embryos find that $\sim5\,M_{\oplus}$ objects routinely scatter away from the primary planet forming region \citep{izidoro}. Calculations of ancillary ice giant ejection during the outer solar system's transient epoch of dynamical instability are also presented in \citet{Nesvorny2011,batygin2012,nm212}.
It is important to recognize that this model of Planet Nine formation must necessarily involve a two-step process. This is because outward scattering of Planet Nine facilitated by the giant planets places it onto a temporary, high-eccentricity ($q\sim5\,$AU) orbit, which must subsequently be circularized (thus lifting its perihelion out of the planet-forming region) by additional gravitational perturbations arising from the cluster. The difficulty with this scenario, however, is that the likelihood of producing the required orbit for Planet Nine is low. \citet{liadams2016} estimated the scattering probability for this process by initializing Planet Nine on orbits with zero eccentricity and semi-major axis $a\approx100-200\,$AU, and found that stellar fly-by encounters produce final states with orbital elements $a=400-1500$ AU, $e=0.4-0.9$, and $i<60\deg$ only a few percent of the time. We note, however, that the calculations of \citet{brasser2006,brasser2012} obtain considerably more favorable odds of decoupling a scattered planetary embryo from the canonical giant planets and trapping it in the outer solar system, with reported probability of success as high as $\sim15\%$, depending on the specifics of the adopted cluster model.
As an alternative to invoking cluster dynamics, \citet{ericksson2018} considered the circularization of a freshly scattered Planet Nine through dynamical friction arising from a massive disk of planetesimals, extending far beyond the orbit of Neptune. Unlike the aforementioned cluster calculations, in this scenario the chances of producing a favorable Planet Nine orbit can be as high as $\sim30\%$. An important drawback of this model, however, is that it suffers from the same issues of disk truncation outlined in the previous sub-section. Moreover, simulations of the Nice model \citep{tsiganis2005} require the massive component of the solar system's primordial planetesimal disk to end at $a\sim30\,$AU, to prevent Neptune from radially migrating beyond its current orbit. Therefore, within the framework of \citet{ericksson2018}, some additional physical process would be required to create an immense gap ranging from $30\,$AU to $100\,$AU in the solar system's primordial planetesimal disk.
\subsection{Ejection and Capture Within the Solar Birth Cluster}
\label{sec:capture}
Although gravitational perturbations arising from passing stars can act to alter the orbital properties of Planet Nine, as discussed above, the possibilities do not end there -- the birth environment of the solar system can also lead to the disruption of Planet Nine from its wide orbit. Over the last decade, several groups have used N-body methods to quantify the effects of scattering encounters in young stellar clusters (e.g., \citealt{zwart2009,malmberg2011,pfalzner2013,pfalzner2015,pfalzner2018}). Another way to study this class of disruption events is to separately calculate the cross sections for fly-by encounters to ionize the solar system, and combine these results with the rate of stellar encounters to determine an optical depth, $\tau_{\rm s}$, for scattering \citep{al2001,liadams2015,adams2006,proszkow}. This quantity can be written in the form
\begin{equation}
\tau_{\rm s} = \int n_\ast\, \langle{v\,\sigma_{\rm{int}}}\rangle \,dt\,,
\label{tauscatter}
\end{equation}
where $n_\ast$ is the number density of stars in the system, $v$ is the relative speed between the Sun and other systems, and $\sigma_{\rm{int}}$ is the interaction cross section. The angular brackets denote averages over both the velocity distribution and the stellar initial mass function for the cluster members.
For interactions in a cluster with velocity dispersion of $v_c = 1$\,km/s, and a planet with an aphelion distance of $640\,$AU (comparable to our best-fit P9 orbital solution obtained in section \ref{sec:numerical}), \citet{liadams2016} derive an ejection cross-section of $\sigma_{\rm int}\approx2.5\times10^6$\,AU$^2$. Adopting a stellar number density of $n_{\ast}=100$\,pc$^{-3}$ and a cluster lifetime of $\Delta t= 100\,$Myr, we obtain $\tau_{\rm s}\approx0.4$\footnote{Following \citet{liadams2015}, we have accounted for the fraction of single (vs binary) stellar encounters by reducing $\tau_{\rm s}$ by a factor of $2/3$.}, which translates to a P9 survival probability of $\exp(-\tau_{\rm{s}})\approx2/3$. In other words, integrated over the lifetime of the cluster, the probability of ejecting Planet Nine is of order $\sim30\%$.
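The arithmetic behind this estimate can be verified with a short back-of-the-envelope calculation (a sketch, not part of the original analysis; the unit-conversion constants below are standard values assumed here):

```python
import math

# Assumed conversion constants
PC_IN_KM = 3.086e13      # kilometers per parsec
AU_IN_PC = 1.0 / 206265  # parsecs per AU
MYR_IN_S = 3.156e13      # seconds per Myr

# Values quoted in the text
n_star = 100.0                       # stellar number density, pc^-3
v = 1.0                              # cluster velocity dispersion, km/s
sigma = 2.5e6 * AU_IN_PC**2          # ejection cross section, AU^2 -> pc^2
dt = 100.0 * MYR_IN_S                # cluster lifetime, s

# Path length swept out by the sun relative to cluster members, in pc
path_length = v * dt / PC_IN_KM

# tau_s = n * <v sigma> * dt, reduced by 2/3 for the single-star fraction
tau_s = (2.0 / 3.0) * n_star * sigma * path_length
p_survive = math.exp(-tau_s)
print(tau_s, p_survive)  # tau_s ~ 0.4, survival probability ~ 2/3
```

Plugging in the quoted numbers indeed yields $\tau_{\rm s}\approx0.4$ and a survival probability of $\approx2/3$, consistent with the estimate above.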
Once the solar system emerges from its birth cluster, the probability that Planet Nine can be stripped from the solar system by a passing star diminishes dramatically. In particular, for scattering interactions in the field \citep{bintrem}, the stellar density is much lower ($\langle n_\ast \rangle \approx 0.1$ pc$^{-3}$) and the velocity dispersion is higher ($v_c\approx40$ km/s). The higher interaction speeds lead to much lower ejection cross sections (see \citealt{liadams2015}), which more than compensates for the longer residence time (about 4.5 Gyr). As a result, the corresponding scattering optical depth for the ejection of Planet Nine in the solar neighborhood is essentially negligible ($\tau_{\rm s}\approx0.002-0.02$). In other words, once the solar system emerges from its birth cluster, and Planet Nine attains its required orbit, it has an excellent chance of continued survival.
In light of the fact that ejection of planets constitutes a distinct possibility within the early evolution of the solar system, the same reasoning applies to other members of the solar system's birth cluster. This opens up the possibility that rather than originating within the solar system, Planet Nine was ejected from some other planetary system, and was subsequently captured by the sun's gravitational field. This P9 capture scenario has been recently considered by a number of authors \citep{liadams2016,mustill2016,parker2017}. However, a multitude of requirements must be met in order for successful capture to take place.
One limiting factor is that the scattering encounter must be sufficiently distant so that the cold classical population of the Kuiper Belt (which is likely primordial; \citealt{batygin2012}) is not destroyed. This demand implies that the impact parameter exceeds $b\simgreat150-200$ AU. More constraining is the requirement that the final orbital elements of the captured planet be consistent with those inferred for Planet Nine. For the conditions expected in the solar birth cluster, the probability for successful capture is of order a few percent, both for freely floating planets \citep{liadams2016}, and for planets captured from other planetary systems \citep{mustill2016}. More recent work \citep{parker2017} finds even lower probabilities, with successful captures of freely floating planets estimated to be only 5 --
10 out of $10^4$. In addition, planet capture is enhanced for expanding, unbound clusters. In contrast, solar system enrichment of short-lived radiogenic isotopes is enhanced for subvirial (partially collapsing) clusters, so some tension exists between the requirement of capturing Planet Nine and explaining meteoritic enrichment.
\section{Conclusion}
\label{sec:conclude}
Over the past decade and a half, continued detection of minor bodies in the distant solar system has brought the intricate dynamical architecture of the distant Kuiper belt into much sharper focus. Remarkably, the collective orbital structure of the current census of long-period trans-Neptunian objects offers a number of tantalizing hints at the possibility that an additional massive object -- Planet Nine -- may be lurking beyond the current observational horizon of the distant solar system. In this work, we have presented a comprehensive review of the observational evidence, as well as the analytical and numerical formulation of the Planet Nine hypothesis. The emergent theoretical picture is summarized in section \ref{sec:summary}, followed by a discussion of alternate explanations for the observed orbital anomalies (section \ref{sec:altideas}) and prospects for future work (section \ref{sec:future}).
\subsection{Summary of Results}
\label{sec:summary}
The case for the existence of Planet Nine can be organized into the following four primary lines of evidence.
\medskip \noindent $\bullet$ {\sl Orbital Alignment of Long-Period KBOs.}
The observed dynamically stable Kuiper belt objects with semi-major axes in excess of $a\simgreat250$ AU (orbital periods greater than $P\gtrsim 4000\,$years) reside on orbits that are clustered together in physical space. This clustering is statistically significant at the $99.8 \%$ level, and ensues from the simultaneous alignment of the apsidal lines (equivalently, longitudes of perihelion) as well as the orbital planes (which are dictated by the inclinations and the longitudes of ascending node). Because the orbits precess differentially under perturbations from the known giant planets, a sustained restoring gravitational torque is required to preserve the orbital grouping. Dynamical evolution induced by Planet Nine can fully account for the generation and maintenance of the observed alignment of long-period KBOs, while simultaneously allowing for their orbital stability.
\medskip \noindent $\bullet$ {\sl Detachment of Perihelia.} Long-period KBOs exhibit a broad range of perihelion distances, with a substantial fraction possessing $q > 40\,$AU. Such objects are dynamically decoupled from Neptune, and cannot be created during Kuiper belt formation via interactions with the known planets alone. As a result, this population of KBOs requires external gravitational perturbations to lift their perihelia. Within the framework of the Planet Nine hypothesis, the same dynamics that are responsible for the aforementioned apsidal alignment also provide a means of generating Neptune-detached Kuiper belt objects, thereby explaining their origins.
\medskip \noindent $\bullet$ {\sl Excitation of Extreme TNO Inclinations.} Trans-Neptunian objects with inclinations in excess of $i\gtrsim50\deg$ are not a natural outcome of the solar system formation process. Nevertheless, several objects with inclinations well above $50\deg$ have been detected on wide ($a>250\,$AU) orbits, including the recently discovered KBO 2015\,BP$_{519}$, which is the only known member of this class characterized by $q>30\,$AU. Despite their puzzling nature, such highly inclined objects are routinely produced within the framework of the P9 hypothesis, via a high-order (octupolar) secular resonance with Planet Nine.
\medskip \noindent $\bullet$ {\sl Production of Retrograde Centaurs.} In addition to long-period TNOs with $i>50\deg$, the solar system also hosts a multitude of highly inclined, and even strongly retrograde shorter-period ($a<100\,$AU) objects. As with their distant counterparts, the dynamical origins of such objects are inexplicable through perturbations from the known giant planets alone. Although these objects are decoupled from Planet Nine's gravitational influence today, numerical simulations demonstrate that an intricate interplay between P9-induced dynamical evolution and Neptune scattering can deliver highly inclined long-period TNOs onto shorter period orbits, polluting the classical region of the Kuiper belt with highly inclined Centaurs. More generally, this process allows for the injection of objects on retrograde orbits into the trans-Jovian domain of the solar system.
\medskip \noindent
Each of the above dynamical effects can be understood from purely analytic grounds within the framework of the Planet Nine hypothesis. Simplified models of this sort, based on secular perturbation theory, are presented in section \ref{sec:analytical}. Detailed comparison with the data, however, requires the fabrication of a synthetic population of long-period KBOs using large-scale N-body simulations. The results of thousands of such simulations are described in section \ref{sec:numerical}, and collectively point to a revised set of physical and orbital parameters for Planet Nine. Specifically, compared to the original results \citep{phattie}, where P9 was reported to have $m_9\approx10M_{\oplus}$ and occupy an $a_9=700\,$AU orbit with $e_9=0.6$, the current simulations (reviewed in section \ref{sec:numerical}), point towards a marginally lower-mass planet that resides on a somewhat more proximate and less dynamically excited orbit, with $m_9 \approx 5-10 M_{\oplus}$, $a_9\sim400-800\,$AU, $e_9\sim0.2-0.5$, and $i_9\sim15-25\deg$. Perhaps counterintuitively, the increase in brightness due to a smaller heliocentric distance more than makes up for the decrease in brightness due to a slightly diminished physical radius, suggesting that Planet Nine is more readily discoverable by conventional optical surveys than previously thought.
\subsection{Alternative Explanations}
\label{sec:altideas}
As a formulated dynamical model, the Planet Nine hypothesis provides a satisfactory account for the orbital anomalies of the distant solar system. Nevertheless, until the existence of Planet Nine is confirmed observationally, the possibility that the envisioned theory is incorrect will continue to linger. The history of proposed planets based on dynamical anomalies suggests that this option should be taken seriously. In this vein, it is useful to recall the cautionary tale surrounding the prediction and subsequent abandonment of planet Vulcan (see section \ref{sec:vulcan}), as it illuminates the vulnerability of even the most well-formulated theoretical models. While it is unlikely that the asymmetries of the orbital structure of the distant solar system point to fundamentally new physics (as in the case of Vulcan), we must acknowledge alternative explanations for the observations that do not invoke the existence of Planet Nine. These ideas generally fall into two distinct categories, and we briefly discuss their attributes below.
\paragraph{Observational Bias}
If the Planet Nine hypothesis proves to be unnecessary, perhaps the most likely explanation is that no explanation is required.
One can envision a scenario where the observational strategy and serendipity conspire to produce the observed patterns in the data. In this case, the apparent clustering in longitudes of perihelion and orbital poles would be spurious, and the underlying distribution of orbital angles of the distant solar system objects would be uniform \citep[e.g.,][]{lawler2017, shankmansurvsim}.
The upshot here is that this interpretation can be quantified by a well-defined statistical likelihood. As discussed in section \ref{anomalous}, the associated false-alarm probability is estimated to be less than one percent \citep{brownbatygin2018}. Beyond angular clustering alone, the statistical significance of the Planet Nine hypothesis is further reinforced by the fact that multiple lines of evidence all point to the {\it same} Planet Nine model. Specifically, not only can the orbital alignment of the distant TNOs be understood in terms of a new planet, the existence of perihelion-detached objects, as well as the generation of the observed high inclination objects, also naturally emerge within the framework of the same theoretical model. While observational bias can never be completely ruled out as an explanation for the data, continued detection of long-period Kuiper belt objects (e.g., \citealt{goblin,triplet}) will lead to further refinement of the false-alarm probability.
\paragraph{Self-Gravity of the Distant Kuiper Belt}
A separate class of models posit that the observed structure of the distant solar system is real but is not sculpted by a planet, and instead arises from the collective gravity of the distant Kuiper belt. The first of these theories is due to \citet{madigan}, who propose that the distant solar system contains $1-10M_{\oplus}$ of unseen material. Over $\sim$Gyr timescales, this population of objects experiences coupled inclination-eccentricity evolution -- a process the authors refer to as the ``inclination instability" (see \citealt{madigan18} for further discussion). The development of this instability results in a clustered distribution of the arguments of perihelion of the constituent bodies, in agreement with the anomalous pattern first pointed out by \citet{trushep2014}. However, this model provides no explanation for the observed confinement in the \textit{longitudes} of perihelion as well as the angular momentum vectors, and thus fails to reproduce the actual observed structure of the distant belt. Moreover, the inclination instability does not naturally manifest in numerical simulations of Kuiper belt emplacement that self-consistently account for self-gravity of the planetesimal disk, calling into question the specificity of initial conditions required for this model to operate \citep{fan2017}.
A different self-gravity model has been recently proposed by \citet{2018arXiv180406859S}. In this theory, instead of the observed Kuiper belt contributing to the collective potential of the trans-Neptunian region, there exists a moderately eccentric ($e\sim0.2$) shepherding disk that extends from $a_{\rm{in}}=40\,$AU to $a_{\rm{out}}=750\,$AU, comprising $\sim10M_{\oplus}$ of material in total. The gravitational potential of this disk creates a stable, apsidally anti-aligned secular equilibrium at high eccentricity that facilitates the confinement of the longitudes of perihelion of the observed long-period KBOs (which act as test particles, enslaved by the disk's potential). In light of the fact that the primary mode of dynamical evolution induced by Planet Nine itself is secular in nature (and in an orbit-averaged sense, the gravitational potential of Planet Nine is no different from that of a massive wire that traces its orbit), clear mathematical similarities ensue between this model and the analytical formulation of the P9 hypothesis (section \ref{sec:analytical}). Nevertheless, from the point of view of planet formation, the prolonged existence of the envisioned shepherding disk suffers from many of the same drawbacks outlined in section \ref{sec:insitu} (e.g., radiative stripping of $r\gtrsim50\,$AU protoplanetary disk material, disruptive perturbations due to passing stars in the birth cluster, etc.) as well as the apparent incompatibility with the striking decrease in the observed number of KBOs beyond $a\gtrsim48\,$AU.
\subsection{Future Directions}
\label{sec:future}
One result of this review is that numerical simulations containing Planet Nine can produce orbits that are in good agreement with the observed structure of the distant Kuiper belt. Nonetheless, a number of theoretical questions remain unanswered, and their resolution may help illuminate the path toward the direct detection of the most distant planetary member of the solar system. As a result, we conclude this article with a brief (and incomplete) discussion of some problems that remain open within the broader framework of the Planet Nine hypothesis.
\paragraph{The Inclination Dispersion of Jupiter-Family Comets}
The inward scattering of TNOs that have been strongly influenced by Planet Nine constitutes a dynamical pathway through which highly inclined icy bodies can be delivered to much smaller heliocentric distances. In this way, Planet Nine can affect not only the orbital properties of the Kuiper belt itself, but the inclination dispersion of Jupiter-Family comets, which are sourced from the trans-Neptunian region \citep{1988ApJ...328L..69D}. In a recent study, \citet{spcomets} carried out detailed simulations of the generation and evolution of short-period comets, and found that inclusion of a $m_9=15M_{\oplus}$ Planet Nine that resides on a $a=700\,$AU, $e=0.6$ orbit leads to a simulated inclination dispersion of Jupiter-Family comets that is broader than its observed counterpart.
Importantly, however, the simulation suite presented in section \ref{sec:numerical} of this work favors a lower mass, more circular Planet Nine over the parameters adopted by \citet{spcomets}. Because the inclination excitation of long-period TNOs is driven by octupole-level secular interactions with P9, we can reasonably expect that our revised P9 parameters would significantly diminish both the rate, and the efficiency of high-$i$ comet generation, potentially eliminating any discrepancy between the observed and modeled orbital distributions. A quantitative evaluation of the possibility is of substantial interest.
\paragraph{Contamination of the Distant Kuiper Belt}
Within the current observational sample of distant KBOs, orbital clustering exhibited by long-term (meta)stable KBOs is considerably tighter than that of the unstable objects. Qualitatively, this disparity is sensible since unstable objects actively interact with Neptune, and can be rapidly perturbed away from P9-sculpted dynamical states. Moreover, given their rapid excursions in semi-major axes, such objects could represent relatively recent additions to the distant Kuiper belt drawn from the more proximate (and therefore P9-unperturbed) region of trans-Neptunian space. At the same time, observational bias strongly favors the detection of low-perihelion objects, meaning that this dynamically unstable sub-sample of KBOs is significantly over-represented within the current dataset.
Because the typical lifetimes of unstable objects are short compared to the lifetime of the solar system, they are almost entirely absent from the simulation suite presented in section \ref{sec:numerical}, limiting the scope of our comparison between simulation and data only to (meta)stable objects. Overcoming this limitation, and better quantifying the P9-facilitated evolution of objects that experience rapid orbital diffusion due to strong coupling with Neptune (with an eye towards better understanding the orbital patterns exhibited by unstable long-period KBOs), would nicely complement the calculations presented in this work.
\paragraph{Interactions With the Galaxy}
A subtle limitation of almost all Planet Nine calculations that have been carried out to date is that they treat the solar system as an isolated object, thus ignoring the effects of passing stars as well as the gravitational tide of the Galaxy (exceptions to this rule include the recent works of \citealt{spcomets, goblin}). This approximation is perfectly reasonable for simulating the evolution of objects with semi-major axes less than a few thousand AU, and for now, the vast majority of known long-period TNOs lie in this domain. This picture is, however, slowly changing: recent detections of very long-period trans-Neptunian objects \citep{gomes2015, sheptru2016, goblin} increasingly point to an over-abundance of long-period minor bodies with semi-major axes larger than $\sim1000\,$AU.
Because there exists a strong bias towards detection of shorter-period objects, it is likely that the prevalence of these extremely distant TNOs cannot be fully attributed to the standard model of scattered KBO generation, wherein long-period orbits are created by outward scattering facilitated by Neptune. If these objects did not originate from more proximate orbits, then where did they come from? A riveting hypothesis is that they are injected into the distant solar system from the inner Oort cloud, via a complex interchange between Galactic effects and Planet Nine's gravity. Moreover, because these long-period KBOs generally conform to the pattern of orbital alignment dictated by Planet Nine, a careful characterization of their dynamical evolution may yield an excellent handle on Planet Nine's gravitational sculpting at exceptionally large heliocentric distances.
\paragraph{Alternate Orbital Solutions}
All simulations of P9-induced dynamical evolution carried out in this work (section \ref{sec:numerical}) were founded on a series of analytical models delineated in section \ref{sec:analytical}. While this aggregate of calculations cumulatively points to a specific range of P9 orbital parameters that provide a satisfactory match to the data, it is important to keep in mind that we have not strictly ruled out the possibility that there could exist other, more exotic orbital configurations that might match the data equally well. For instance, we have not thoroughly examined the possibility that Planet Nine's orbit itself could be very highly inclined (or even retrograde), and that a seemingly strange orbital architecture of the distant Kuiper belt generated by such a planet could be rendered compatible with the current dataset by observational biases. Continued numerical exploration of P9-sculpted orbital structure outside of the parameter range considered in this review will help quantify this possibility.
Even if the qualitative aspects of our theoretical models are correct, we must acknowledge that the dataset which informs our understanding of the distant Kuiper belt remains sparse, and new detections of large-$a$ KBOs may significantly alter the population's inferred statistical properties. As an example, we note that even basic attributes of the distant belt, such as the critical semi-major axis beyond which orbital clustering ensues, $a_{\rm{c}}$, are characterized by considerable uncertainties as well as a non-trivial dependence on $q$. While in this work we have tentatively adopted $200\,\rm{AU}\lesssim a_{\rm{c}}\lesssim300\,\rm{AU}$ as a criterion for simulation success, if additional data reveal that the true value of $a_{\rm{c}}$ lies interior or exterior to this range, P9 orbital fits with respectively lower or higher semi-major axes, $a_9$, would be rendered acceptable. Thus, as the census of well-characterized long-period TNOs continues to grow, alteration of these (and other) statistical characteristics of the distant Kuiper belt will inevitably lead to further refinement of Planet Nine's orbital elements.
\bigskip
Further theoretical curiosities aside, arguably the most practically attractive aspect of the P9 hypothesis is the prospect of near-term observational confirmation (or falsification) of the results discussed in this review. Not only would the astronomical detection of Planet Nine instigate a dramatic expansion of the Sun's planetary album, it would shed light on the physical properties of a Super-Earth class planet, while evoking extraordinary new constraints on the dramatic early evolution of the solar system. The search for Planet Nine is already in full swing, and it is likely that if Planet Nine -- as envisioned here -- exists, it will be discovered within the coming decade.
\bigskip
\noindent
{\bf Acknowledgments:} This review benefited from discussions and additional input from many people, and we would especially like to thank Elizabeth Bailey, Tony Bloch, David Gerdes, Stephanie Hamilton, Jake Ketchum, Tali Khain, and Chris Spalding. We are indebted to Greg Laughlin, Erik Petigura, Alessandro Morbidelli, Gongjie Li and an anonymous referee for critical readings of the text and for providing insightful comments that led to a considerable improvement of the manuscript. We also thank Caltech's Division of Geological \& Planetary Sciences for hosting F.C.A. during his sabbatical visit, January -- April 2018, when work on this manuscript was initiated. K.B. is grateful to the David and Lucile Packard Foundation and the Alfred P. Sloan Foundation for their generous support.
\newpage
\section{Introduction}
Stereo reconstruction is a fundamental problem in computer vision. It aims at recovering the 3D geometry of the sensed scene by computing the disparity between corresponding pixels in a rectified pair of images. For applications like robotics~\cite{robot}, virtual/augmented reality~\cite{augmented-reality} and autonomous driving~\cite{auto-driving}, depth information is crucial for scene understanding; thus, accurate yet fast solutions are particularly attractive.
Recently, deep learning approaches have shown tremendous improvements, especially in the supervised setting~\cite{ACVNet,LEAStereo,CREStereo}. Common to most architectures is the cost volume, which encodes feature similarity and plays a key role in matching pixels. In particular, features from the left and right images are concatenated (in 3D networks) or correlated (in 2D networks) and processed through convolutions, which are costly and prevent real-time execution. To address this issue, more efficient stereo models have been proposed, taking advantage of efficient cost aggregation~\cite{coex,AANet}, coarse-to-fine~\cite{StereoNet,AnyNet} and cost volume pruning~\cite{DeepPruner,UASNet,CFNet} strategies to save computation while preserving accuracy. Nonetheless, the sparse and constant disparity candidates used for matching cost computation can easily miss the ground truth disparity, thus deteriorating the accuracy of the prediction. The situation becomes even worse in challenging regions: for example, occlusion is an ill-posed problem for stereo matching based on a single stereo pair, yet popular supervised stereo methods have never taken advantage of the intrinsic correlation among frames when stereo videos are available. Even though commercial cameras can easily acquire high-FPS videos (\eg 30), in which the past context is likely strictly related to the current one, state-of-the-art stereo methods generally process each frame independently. We argue that previous disparities can be relevant cues to improve estimates, especially in complex regions where the current pair is not informative, such as near object boundaries and in occluded areas.
Inspired by these observations, we propose \textit{TemporalStereo}, a novel lightweight deep stereo model (as depicted in \cref{fig:arch_performance}) not only suited for single pair inference with fast performance, but also able to leverage the rich spatial-temporal cues provided by stereo videos, when available. In \textit{single-pair} mode (\ie when only the current pair is available), our network follows a coarse-to-fine scheme~\cite{AnyNet,StereoNet,CasStereo} to enable an efficient implementation.
In order to increase the capacity of the constructed sparse cost volume and the quality of the sparse disparity candidates at every single stage, we enhance the cascade-based architecture in several aspects: 1) firstly, we enrich the cost of each queried disparity candidate with context from non-local neighborhoods during cost computation; 2) since the number of disparity candidates can be reduced to very few (\eg, 5), in addition to pure 3D convolutions for cost aggregation, we also compute multiple statistics (\eg mean and maximum statistics~\cite{SepFlow}) over a large window to grasp the global cost distribution of each pixel in one step; 3) differently from previous strategies~\cite{UASNet,StereoNet,AnyNet,CFNet}, in which the candidates are constant throughout each stage, we rely on the aggregated cost volume itself to \textit{shift} the candidates towards a better location and thus improve the quality of the predicted disparity map; 4) eventually, when multiple pairs are available, our network can easily switch to \textit{temporal} mode, in which past disparities, costs and cached features are aligned to the current reference frame and used to boost current estimates with a negligible runtime increase (4ms, as shown in Tab.~\ref{tab:benchmark}). Experimental results on synthetic (SceneFlow, TartanAir) and real (KITTI 2012, KITTI 2015) benchmarks support the following claims:
\begin{itemize}
\item Our method is accurate yet fast and achieves state-of-the-art results when a single stereo pair is available. In particular, the improved coarse-to-fine design based on very few (\eg, 5) candidates allows TemporalStereo{} to combine fast execution and high performance.
\item When multiple temporally adjacent stereo pairs are available, TemporalStereo{} is the first supervised stereo network that can effortlessly cache past context and use it to improve ongoing predictions, \eg in occluded, reflective and dynamic regions. The model trained in temporal mode is effective also in single-pair mode, allowing the deployment of a single model in video-based applications.
\item The proposed temporal cues can be widely applied to boost the matching accuracy of existing efficient stereo methods: \eg, on the TartanAir~\cite{TartanAir} dataset, the improvements for CoEX~\cite{coex} and StereoNet~\cite{StereoNet} are 14.6\% and 26.1\% respectively. Nonetheless, our TemporalStereo{} remains the best at exploiting temporal information.
\end{itemize}
\section{Related Work}
\textbf{Single Pair Stereo Matching.} Traditional methods \cite{fastfilter} employ handcrafted schemes to find local correspondences~\cite{4steps,nonlocalaggregation,segmenttree} and approximate global optimization by exploiting spatial context~\cite{SGM,beliefprop}. Recent deep learning strategies \cite{poggi2021synergies} adopt CNNs and can be broadly divided into two categories according to how they build the cost volume. On the one hand, 2D networks follow DispNetC~\cite{sceneflow} employing a correlation-based cost volume and applying 2D convolutions to regress a disparity map. Stacked refinement sub-networks \cite{CRL,iResNet} and multi-task learning~\cite{edgestereo,segstereo} have been proposed to improve the accuracy. GCNet~\cite{GCNet} represents a milestone for 3D networks.
Following works improve the results thanks to spatial pyramid pooling \cite{PSMNet}, group-wise correlations~\cite{GWCNet}, forcing unimodal \cite{PDS,AcfNet,wasserstein} or bimodal \cite{SMDNet} cost distributions, or with cost aggregation layers inspired by classical methods \cite{GANet,CSPN}. 3D methods usually outperform 2D architectures by a large margin on popular benchmarks~\cite{sceneflow,KITTI2012,KITTI2015}, paying more in terms of computational requirements. In this work, we leverage 3D sparse cost volumes, and the proposed architecture proves to be fast and accurate at disparity estimation.
\textbf{Efficient Stereo Matching with Deep-Learning.}
A popular strategy to decrease the runtime consists in performing computation at a single yet small resolution (\eg 1/4) and obtaining the final disparity through upsampling. CoEx~\cite{coex} adopts this design. Coarse-to-fine \cite{BI3D,HSM} design
further improves efficiency, since the overall computation is split into several stages. To further reduce the disparity search range for each pixel, StereoNet~\cite{StereoNet} and AnyNet~\cite{AnyNet} add a constant offset to disparity maps produced in the previous stage to avoid checking all the disparity candidates. However, as reported in \cite{UASNet}, the constant offset strategy is not robust against large errors in initial disparity maps. DeepPruner \cite{DeepPruner} prunes the search space using a differentiable PatchMatch \cite{barnes2009patchmatch}, obtaining sparse cost volumes. In contrast, methods such as \cite{UASNet,CFNet} employ uncertainty to evaluate the outcome of previous stages and then build the current cost volume accordingly. However, since the cost volume itself expresses the goodness of the candidates (the better the candidates, the more representative the cost volume), it can be used to \textit{check} and eventually \textit{correct} the candidates themselves. Nonetheless, previous methods in the literature cannot exploit this cue, since their candidates come from the earlier level and remain constant throughout the current stage. Conversely, this paper proposes inferring an offset for each disparity candidate from the current aggregated cost volume, which helps to improve the disparity estimate. Moreover, to improve network efficiency on high-resolution images, we perform the coarse-to-fine predictions only on downsampled feature maps rather than at full image resolution like HITNet~\cite{HitNet}. Furthermore, differently from competitors, our method can seamlessly use past context to largely improve the predictions in the case of stereo videos, with negligible overhead.
\textbf{Multiple Pairs Stereo Matching.}
When two temporally adjacent stereo pairs are available, optical flow~\cite{sceneflow,raft} or 3D scene flow~\cite{teed2021raft3d} is often taken into account to link the images with the reconstructed 2D/3D motion field. Then, the stereo problem is tackled leveraging a multi-task model \cite{aleotti2020learning,Lai2019bridging,unos} or by casting it as a special case of optical flow \cite{flow2stereo}. However, these methods are not able to capitalize on longer stereo sequences. Conversely, OpenStereoNet \cite{zhong2018open} adopts recurrent units to capture the temporal dynamics and correlations available in videos for unsupervised disparity estimation. We argue that past information can help to understand the current scene -- especially in difficult areas -- for \textit{free}, since past context and predictions can be cached easily; yet none of the above methods makes explicit, geometry-aware use of them in video. Following this rationale, we leverage the camera pose -- which can be inferred or given by external sensors -- to retrieve cached past disparity candidates and align them to the current volume, ensuring real-time performance even in temporal mode. Although such a mechanism does not account for moving objects, TemporalStereo{} is naturally robust to them, as it also accesses the current stereo pair, for which this issue does not arise, in contrast to monocular video approaches~\cite{deepvideomvs,deepv2d}, which need to explicitly deal with moving objects. As a bonus, a single training allows TemporalStereo{} to work seamlessly in both single-pair and temporal modes.
\section{Method}
\begin{figure*}[!tbp]
\centering
\includegraphics[width=1.6\columnwidth]{assets/framework.pdf}
\caption{\textbf{TemporalStereo{} Architecture}. In single-pair mode, the model predicts the disparity map in a coarse-to-fine manner.
If past pairs are available, TemporalStereo{} switches to temporal mode: the model remains the same, but features, costs and candidates cached from past pairs are now employed to improve the current prediction.
}
\vspace{-0.5cm}
\label{fig:framework}
\end{figure*}
In the following, first we describe the proposed TemporalStereo{} in single-pair mode (\ie when a single stereo pair is available), then we fully unlock the potential of our architecture enabling temporal mode. Fig. \ref{fig:framework} depicts the model.
\subsection{Single Pair Mode}
Given a single stereo pair, a backbone extracts multi-scale features at $1/4$, $1/8$ and $1/16$ of the original resolution. Then, three stages predict disparity maps starting from these features. In particular, each stage performs feature decoding, cost volume computation and aggregation.
\textbf{Feature decoding.}
In each stage $s\in\{1,2,3\}$, a decoder processes the current features together with, if $s>1$, those from the previous stage, $F_{l}^{s-1}$, $F_{r}^{s-1}$. In particular, $F_{l}^{s-1}$, $F_{r}^{s-1}$ are bilinearly upsampled by a factor of 2 and concatenated with the left and right features from the backbone. Then, the resulting feature maps are processed by two 2D convolutions with kernel size $3\times3$, obtaining $F_{l}^{s}$, $F_{r}^{s}$.
\textbf{Cost volume computation.}\label{cost-computation} A 4D cost volume $\mathcal{C}^{s}\in \mathbb{R}^{Ch^s\times H^s\times W^s\times |\mathcal{D}^s| }$ is constructed by concatenating $F_{l}^{s}$ with the corresponding $F_{r}^{s}$ for each disparity $d$ in the set $\mathcal{D}^s=\{d_1, d_2, ..., d_{n}\}$, with $Ch^{s}$ and $|\mathcal{D}^s|$ respectively the number of channels of $F_{l}^{s}$ and the number of disparity candidates used in the stage:
\begin{equation}
\begin{aligned}
\mathcal{C}_{concat}^{s}(\cdot, u, v, d) = \oplus \{F_{l}^{s}(\cdot,u,v), F_{r}^{s}(\cdot, u-d,v)\},
\end{aligned}
\vspace{-0.1cm}
\label{eq:concat}
\end{equation}
where $u,v$ are the horizontal and vertical pixel coordinates, respectively, while $\oplus$ stands for concatenation along the channel dimension. However, by doing so, $\mathcal{C}^s$ only contains local information, which is a major limitation in the case of a sparse set of candidates. To alleviate this, we enrich $\mathcal{C}^{s}$ with multi-level costs~\cite{raft} encoding a larger neighborhood. Specifically, we downsample the feature maps $F_{l}^{s},\; F_{r}^{s}$ by factors of $1,2,4$ and, for each level, perform group-wise correlations \cite{GWCNet} to obtain the cost of that level. Finally, such costs are bilinearly upsampled to the resolution of $\mathcal{C}^s$ and stacked together, obtaining $\mathcal{C}_{gwc}^s$. The final multi-level cost volume is:
\begin{equation}
\begin{aligned}
\mathcal{C}^{s}(\cdot, u, v, d) = \oplus \{ \mathcal{C}_{concat}^{s}(\cdot, u, v, d), \mathcal{C}_{gwc}^{s}(\cdot, u, v, d) \}.
\end{aligned}
\vspace{-0.1cm}
\end{equation}
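To make the construction of Eq. (\ref{eq:concat}) concrete, the following is a minimal NumPy sketch of the concatenation-based volume; it is illustrative only -- the actual model operates on learned feature tensors with fractional, per-pixel candidates -- and the function name and the integer-candidate assumption are ours.

```python
import numpy as np

def concat_cost_volume(feat_l, feat_r, candidates):
    """Sketch of the concatenation volume: for each disparity candidate d,
    pair every left pixel (u, v) with the right pixel (u - d, v) and stack
    the two feature vectors along the channel axis.

    feat_l, feat_r : (Ch, H, W) feature maps
    candidates     : iterable of integer disparity candidates (assumption)
    returns        : (2*Ch, H, W, |D|) cost volume
    """
    ch, h, w = feat_l.shape
    volume = np.zeros((2 * ch, h, w, len(candidates)), dtype=feat_l.dtype)
    for i, d in enumerate(candidates):
        volume[:ch, :, :, i] = feat_l
        if d > 0:
            # right features shifted by d; out-of-view columns stay zero
            volume[ch:, :, d:, i] = feat_r[:, :, :-d]
        else:
            volume[ch:, :, :, i] = feat_r
    return volume
```

The group-wise correlation branch would add, per level, dot products between channel groups of the two (shifted) feature maps before the final channel-wise stack.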
\textbf{Cost aggregation.}
We now describe how we compute the aggregated cost volume $\mathcal{C}^{s}_{agg}$. In each stage, preliminary spatial cost aggregation is performed by one 3D convolution with kernel size $3\times3\times3$, followed by an hourglass network and another 3D convolution with kernel size $3\times3\times3$. Afterwards, a Statistical Fusion module further improves the aggregation with mean and maximum statistics~\cite{SepFlow}, especially along the disparity dimension (which contains as few as 5 candidates): 4 parallel layers (identity, a convolution with kernel size $5\times1\times1$, and average pooling and max pooling, both with kernel size $5\times5\times5$) extract global statistics from the preliminarily aggregated volume; their outcomes are then stacked together and processed by one 3D convolution.
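A hedged NumPy sketch of the idea behind Statistical Fusion follows: next to the identity branch, gather mean and max statistics of the aggregated cost and stack them for a subsequent learned fusion. For brevity the statistics here span the whole disparity axis of a single-channel slice (the module described above uses $5\times1\times1$ convolutions and $5\times5\times5$ pooling windows); the function name is ours.

```python
import numpy as np

def statistical_fusion_inputs(cost):
    """Stack identity / mean / max branches of an aggregated cost slice.

    cost    : (H, W, D) aggregated cost (one channel, for illustration)
    returns : (3, H, W, D) branches that a learned 3D convolution
              would then fuse into the refined cost volume
    """
    d = cost.shape[-1]
    mean_stat = cost.mean(axis=-1, keepdims=True)  # per-pixel mean over D
    max_stat = cost.max(axis=-1, keepdims=True)    # per-pixel max over D
    return np.stack([
        cost,
        np.repeat(mean_stat, d, axis=-1),  # broadcast statistics back over D
        np.repeat(max_stat, d, axis=-1),
    ], axis=0)
```

The point of the parallel statistics is that, with only $\sim$5 candidates, each cost entry can see the global shape of its pixel's cost distribution in one step rather than through many stacked $3\times3\times3$ convolutions.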
In temporal mode, Statistical Fusion also helps to merge past costs with current ones, as we will discuss later. We employ Statistical Fusion only in stages $s=1,2$, both to lighten the computation and because in $s=3$ past costs are not used. Finally, a cost prediction head with two 3D convolutions with kernel size $3\times3\times3$ is in charge of predicting the final cost volume $\mathcal{C}^{s}_{f}$ from $\mathcal{C}^{s}_{agg}$. From $\mathcal{C}^{s}_{f}$ and $\mathcal{D}^s$ we could obtain the disparity $\hat{d}^{s}$ of the current stage by means of the soft-argmin operator \cite{GCNet}, but $\mathcal{C}^{s}_{agg}$ can also help us improve the candidates, as we will describe thereafter.
For this reason, before computing $\hat{d}^{s}$, we have to introduce our adaptive candidate shifting strategy.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\columnwidth]{assets/shifting.pdf}
\vspace{-0.5cm}
\caption{\textbf{Adaptive Shifting in action.} We show the ground truth disparities and the predictions with and without adaptive shifting for three horizontal regions (\ie, A, B, C) and one pixel (D) in the image. For A, B, C, our strategy better estimates the actual disparity distribution of the pixels in each region. For D, the matching probability distribution of the 5 disparity candidates moves from a flat unimodal distribution to a one-hot distribution centered at the ground truth by adaptively shifting the candidates towards a better location.}
\label{fig:adaptive-shifting}
\end{figure}
\textbf{Adaptive candidate shifting.} Previous works \cite{UASNet,StereoNet} consider the candidates $\mathcal{D}^s$ as constants throughout the current stage $s$. However, the aggregated cost volume itself contains cues about the candidates used to build it. Since it has been constructed from $\mathcal{D}^s$, it represents how well the candidates can explain the current scene. Moreover, in the cost aggregation step, each cost has been enriched with all the costs in its neighbourhood, collecting matching results of nearby pixels. We propose to leverage a dedicated prediction head to infer, from $\mathcal{C}^{s}_{agg}$ as shown in Fig. \ref{fig:framework}, an offset value for each candidate, which represents how much we have to \textit{shift} the candidate towards a better location. Thus, the candidates are pixel-wise different and able to adaptively adjust according to the learned context. This strategy gives the network the ability to adapt $\mathcal{D}^s$ and improve $\hat{d}^s$, as depicted in Fig. \ref{fig:adaptive-shifting}. Specifically, an independent head -- with the same structure as the cost prediction head, \ie, two 3D convolutions -- is in charge of inferring the offset volume $\mathcal{O}^{s}\in \mathbb{R}^{H^s\times W^s\times |\mathcal{D}^s|}$ from $\mathcal{C}^{s}_{agg}$. Offset learning~\cite{liu2016ssd} is not new: it has been used in~\cite{wasserstein} to convert dense disparity candidates, which always tightly cover the ground truth disparity, into continuous ones, so that the predicted disparity value is not restricted to integers. Differently, our disparity candidates are few and sparse, and could easily miss the ground truth. By enriching the context of each sparse candidate, the proposed shifting strategy is able to \textit{correct} the constant and flawed candidates for better disparity estimation.
\textbf{Disparity prediction and candidate sampling.}
Given $\mathcal{D}^s$, $\mathcal{O}^{s}$ and $\mathcal{C}^{s}_{f}$, the soft-argmin operator~\cite{GCNet} could be applied to predict the final disparity map. However, this strategy proves sub-optimal in the case of multi-modal distributions~\cite{AcfNet,wasserstein,SMDNet}. To overcome this limitation, we predict the disparity following the strategy proposed in~\cite{coex}: we select the top-$K$ values of $-\mathcal{C}^{s}_{f}$ for each pixel across the disparity dimension and normalize them with the softmax operator $\sigma$. The predicted disparity map $\hat{d}^{s}$ is:
\begin{equation}
\begin{aligned}
\hat{d}^{s} = \sum_{d\in\{d^{top}_{1}, \cdots, d^{top}_{K}\}} (d + o_{d}) \times \sigma(-c_{d}),
\end{aligned}
\vspace{-0.2cm}
\label{eq:softargmin}
\end{equation}
where $d^{top}_{k} = \underset{d}{argmax}^{k}(-c_{d})$, $argmax^{k}(\cdot)$ selects the $k$-th maximal value for $k\in\{1, 2, ..., K\}$, $c_{d}$ is the cost in $\mathcal{C}^{s}_{f}$ of candidate $d$ for each pixel, and $o_{d}$ is the offset value in $\mathcal{O}^{s}$ for candidate $d$. Convex upsampling \cite{raft} is used to double the resolution of $\hat{d}^1$ and $\hat{d}^2$, obtaining $\hat{d}^1_\uparrow$, $\hat{d}^2_\uparrow$. To bring $\hat{d}^3$ to full resolution, following~\cite{coex}, we bilinearly upsample $\hat{d}^3$ and, for each pixel in the upsampled map, compute a weighted average over the surrounding $3\times3$ superpixel to obtain the final prediction $\hat{d}^3_\uparrow$.
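Eq. (\ref{eq:softargmin}) can be sketched per pixel as follows; this is a minimal NumPy illustration (the function name and toy inputs are ours), selecting the $K$ lowest-cost candidates, softmax-normalising their negated costs, and averaging the shifted candidates $d + o_d$.

```python
import numpy as np

def topk_disparity(candidates, offsets, costs, k=3):
    """Top-K soft-argmin with adaptive offsets, for a single pixel.

    candidates, offsets, costs : 1D arrays over the |D^s| candidates
    returns                    : scalar predicted disparity
    """
    top = np.argsort(costs)[:k]            # k lowest-cost candidates
    logits = -costs[top]
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    # probability-weighted sum of the shifted candidates d + o_d
    return float(np.sum((candidates[top] + offsets[top]) * probs))
```

When one candidate dominates, the prediction collapses to that candidate plus its learned offset, which is exactly how the shifting strategy lets the output escape the initial sparse grid.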
Finally, we obtain the set of candidates $\mathcal{D}^{s+1}$ for the next stage by applying inverse transform sampling. We sample according to a normal distribution $\mathcal{N}(\hat{d}^{s}_\uparrow, 1)$ in the range $[\hat{d}^{s}_\uparrow - \beta, \hat{d}^{s}_\uparrow + \beta]$, with $\beta$ a constant value. This sampling strategy allows us to place more candidates near the predicted disparity. The first stage is initialized with $\mathcal{D}^1$ uniformly sampled across the full disparity range to provide an overview of the current scene geometry.
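A possible sketch of this candidate sampling, under our own implementation choices: evenly spaced quantiles of $\mathcal{N}(\hat{d}^{s}_\uparrow, 1)$ restricted to $[\hat{d}^{s}_\uparrow - \beta, \hat{d}^{s}_\uparrow + \beta]$ are inverted through the normal CDF (here by bisection, since the Python stdlib provides erf but not its inverse).

```python
import math

def sample_candidates(d_hat, beta, n):
    """Inverse transform sampling of n disparity candidates from
    N(d_hat, 1) restricted to [d_hat - beta, d_hat + beta]."""
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - d_hat) / math.sqrt(2.0)))
    lo_q, hi_q = cdf(d_hat - beta), cdf(d_hat + beta)
    out = []
    for i in range(n):
        q = lo_q + (hi_q - lo_q) * (i + 0.5) / n  # mid-point quantiles
        a, b = d_hat - beta, d_hat + beta
        for _ in range(60):                       # bisection on the CDF
            mid = 0.5 * (a + b)
            if cdf(mid) < q:
                a = mid
            else:
                b = mid
        out.append(0.5 * (a + b))
    return out
```

Because the normal density peaks at $\hat{d}^{s}_\uparrow$, equal quantile steps map to smaller disparity steps near the prediction, which is the desired concentration of candidates.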
\textbf{Loss function.} Following~\cite{PSMNet}, we minimise the smooth $L_1$ loss, \ie, the Huber loss, at full resolution between the ground truth disparity map $d^{gt}$ and the predictions at the three stages $\{\hat{d}^{1}_\uparrow, \hat{d}^{2}_\uparrow, \hat{d}^{3}, \hat{d}^{3}_\uparrow\}$. $\hat{d}^{1}_\uparrow$, $\hat{d}^{2}_\uparrow$ and $\hat{d}^{3}$ are bilinearly upsampled to full resolution and a factor $\lambda^s$ weights each predicted map. The loss is:
\begin{equation}
\begin{aligned}
\mathcal{L}_{huber} & = \frac{1}{|\mathcal{N}|} \lambda^{0} \cdot smooth_{L_{1}}(d^{gt} - \hat{d}^{3}) \\ &+ \sum_{s =1}^{3} \frac{1}{|\mathcal{N}|} \lambda^{s} \cdot smooth_{L_{1}}(d^{gt} - \hat{d}^{s}_\uparrow),
\end{aligned}
\vspace{-0.1cm}
\label{eq:l1loss}
\end{equation}
where $\mathcal{N}$ is the number of pixels with valid ground truth.
Furthermore, to supervise the learning of the offsets in the three stages $\{\mathcal{O}^{1}, \mathcal{O}^{2}, \mathcal{O}^{3}\}$, we adopt the Wasserstein loss~\cite{wasserstein} for training. Specifically, for each stage $s$, we downsample the ground truth disparity map $d^{gt}$ to the resolution of the corresponding offset volume, obtaining $d^{gt, s}_{\downarrow}$. As the offset value can span a rather large range, we modify the Wasserstein loss as:
\begin{equation}
\begin{aligned}
\mathcal{L}_{war} = \sum_{s =1}^{3}\frac{1}{|\mathcal{N}|} \lambda^{s} \cdot \sum_{d\in \mathcal{D}^{s}} \left( |d + o_{d} - d^{gt,s}_{\downarrow}| \times \left(\sigma(-c_{d}) + \alpha\right) \right)
\end{aligned}
\label{eq:warloss}
\end{equation}
where $c_{d}$ is the cost value in $\mathcal{C}^{s}_{f}$ of candidate $d$ for each pixel, and $o_{d}$ is the offset value in $\mathcal{O}^{s}$ for candidate $d$. We set $\alpha = 0.25$, \ie even for a candidate with extremely low matching probability $\sigma(-c_{d})$, the network still learns an offset that encourages it to move towards the ground truth. With a weight factor $\lambda_{final}$, our final loss becomes:
\begin{equation}
\begin{aligned}
\mathcal{L} = \mathcal{L}_{huber} + \lambda_{final} \cdot \mathcal{L}_{war}.
\end{aligned}
\label{eq:allloss}
\end{equation}
\subsection{Temporal Mode}
So far, we have presented the model suited for single-pair mode. Now, we illustrate how the model works in temporal mode. Differently from multi-view stereo models \cite{deepmvs,deepv2d}, our network processes stereo videos one stereo pair at a time. This behaviour, which helps to save computation, is possible thanks to the sparse cost volume. In fact, we can easily add disparity candidates from the past inferences to the current set $\mathcal{D}^s$, thus increasing the search range with other plausible solutions.
The past candidates, however, have to be aligned with the current frame to be meaningful. Optical flow/3D scene flow could be used to tackle the issue, but predicting flow fields is expensive in terms of memory footprint and time. Instead, in this work, we assume that the camera is calibrated and the pose is given -- since the pose can be provided by external IMU sensors or estimated from the stereo pairs -- to build the rigid motion field used to align the candidates. The rigid flow formulation only holds for rigid objects in the scene, but since our network always relies on the current stereo pair, TemporalStereo{} is robust even in the case of wrong flows (\eg due to bad poses). In the remainder, we detail how we use temporal information to improve stereo matching results.
\textbf{Local map.} \label{local-map} As highlighted before, occlusions represent a major issue in stereo matching. However, we argue that currently occluded regions might not have been as such in past frames, \eg due to camera motion. For this reason, the issue can be alleviated by adequately exploiting past estimates and at a minimal cost since they can be easily cached. Furthermore, we cache according to a keyframe strategy to bound the complexity in the case of long video sequences and ensure enough motion parallax. Following~\cite{neuralrecon}, a new incoming frame is promoted to keyframe if its relative translation and rotation are greater than $\textbf{t}_{max}$ and $\textbf{R}_{max}$ respectively. Then, a memory bank collects disparity $\hat{d}^3_\uparrow$ of the last $N_{key}$ keyframes. Since each cached map represents the scene's geometry in the past, to use it in the current computation, we have to update the disparity values and their coordinates in the image. Inspired by SLAM literature \cite{murORB2}, we leverage a Local Map strategy to this aim. In detail, given the camera model $\pi:\mathbb{R}^{3} \rightarrow \mathbb{R}^{2}$, a 3D point $P(X, Y, Z)$ can be projected to a 2D pixel $p(u,v)$:
\begin{equation}
\begin{aligned}
\pi(P) = \left( f_{x}\frac{X}{Z}+c_{x}, f_{y}\frac{Y}{Z}+c_{y} \right),
\end{aligned}
\vspace{-0.1cm}
\end{equation}
where $(f_x, f_y, c_x, c_y)$ are the known camera intrinsics. Similarly, a pixel $p$ can be back-projected to a 3D point $P$:
\begin{equation}
\begin{aligned}
\pi^{-1}(p, d) = \frac{b\cdot f_{x}}{d} \left( \frac{u-c_{x}}{f_{x}}, \frac{v-c_{y}}{f_{y}}, 1 \right)^{\top},
\end{aligned}
\end{equation}
where $b$ is the baseline between the left and the right cameras and $d$ the disparity. With the relative extrinsics $\textbf{T}_{j\rightarrow t}\in\mathbb{SE}(3)$ from keyframe $j$ to the current frame $t$, we can update every disparity value $d_j$ of keyframe $j$ as:
\begin{equation}
\begin{aligned}
d_{j}^{proj} = \frac{b \cdot f_{x}}{Z_{j}}, \; \textrm{where} \; P_{j}(X_j, Y_j, Z_j) = T_{j\rightarrow t}^{-1} \cdot \pi^{-1}(p, d_{j}),
\end{aligned}
\label{eq:back-project}
\end{equation}
where $d_{j}^{proj}$ is the updated disparity value. At this point, we obtain the coordinates of $d_{j}^{proj}$ in $t$ through forward warping. To preserve end-to-end differentiability, we use the differentiable Softmax Splatting $\overrightarrow{\sigma}$~\cite{softsplat}:
\begin{equation}
\begin{aligned}
\hat{d}_{j}^{proj} &=
\overrightarrow{\sigma}(d_{j}^{proj}, \pi(P_{j}) - p )
\end{aligned}
\label{eq:forward-warp}
\end{equation}
Finally, the Local Map results:
\begin{equation}
\begin{aligned}
\mathcal{D}^{2} = \oplus \left\{ \mathcal{D}^{2}, \hat{d}_{1}^{proj}, \hat{d}_{2}^{proj}, \ldots, \hat{d}_{N_{key}}^{proj} \right\}
\end{aligned}
\end{equation}
It is worth noticing that, in single-pair mode, the Local Map only contains candidates from the current pair. Moreover, it boosts candidate selection only for $s=2$, since this stage represents the best trade-off between accuracy and speed. In fact, in $s=1$ candidates are searched over the full range at the lowest resolution, while in $s=3$ a larger cost volume involves a more expensive computation, thus not fulfilling our requirements.
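The geometric core of the Local Map can be sketched as follows: back-project a pixel with its cached disparity, move the 3D point into the current frame with the relative rigid transform, and re-derive its disparity and target pixel. This is our own minimal NumPy illustration; for simplicity the 4$\times$4 matrix here is assumed to map points from keyframe $j$ directly into frame $t$, and the differentiable splatting step is omitted.

```python
import numpy as np

def reproject_disparity(u, v, d, K, baseline, T_j_to_t):
    """Update a keyframe disparity to the current frame.

    u, v, d   : pixel coordinates and disparity in keyframe j
    K         : (fx, fy, cx, cy) camera intrinsics
    baseline  : stereo baseline b
    T_j_to_t  : 4x4 rigid transform taking frame-j points into frame t
    returns   : updated disparity and its target pixel in frame t
    """
    fx, fy, cx, cy = K
    z = baseline * fx / d                    # depth from disparity
    p_j = np.array([z * (u - cx) / fx, z * (v - cy) / fy, z, 1.0])
    p_t = T_j_to_t @ p_j                     # rigid motion into frame t
    d_proj = baseline * fx / p_t[2]          # updated disparity
    u_t = fx * p_t[0] / p_t[2] + cx          # forward-warp target pixel
    v_t = fy * p_t[1] / p_t[2] + cy
    return d_proj, (u_t, v_t)
```

Under a static camera the disparity and pixel are unchanged, while forward motion along the optical axis brings points closer and therefore increases their disparity, which is the behaviour the Local Map propagates into $\mathcal{D}^{2}$.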
\textbf{Temporal shift and past costs.}
Past semantic and matching scores are crucial for temporal stereo processing. Although the Local Map effectively provides previous depth cues about the scene, it lacks the context behind these guesses. To address this problem, we introduce the Temporal Shift and Past Costs modules. Temporal Shift enriches current feature maps with those computed in the past. Specifically, we adopt the TSM \cite{tsm} strategy to facilitate feature exchange among neighboring frames, because it does not introduce additional computation or parameters: past features are \textit{shifted} along the time dimension and then merged with current ones, as shown in~\cref{fig:framework}. In doing so, TSM provides \textit{spatio-temporal} capabilities to our backbone, which was not originally designed for temporal modeling. In practice, every feature map from the backbone is cached and used by Temporal Shift in the next prediction. Notably, as TSM does not rely on pose, we can still perform stereo matching in temporal mode and benefit from past information when the pose is not available, as shown in~\cref{tab:ablation-temporal}.
Similarly, Past Costs adds past matching scores to current cost volumes. Given the $\mathcal{C}_f^3$ volume computed at time $t-1$, first we update the values of its candidates to $t$ according to Eq. (\ref{eq:back-project}). Then, cost values and candidates are forward warped to $t$ using Eq. (\ref{eq:forward-warp}). Finally, the warped cost volume is downsampled by factors of 2 and 4 and concatenated with the current $\mathcal{C}_{agg}^{2}$ and $\mathcal{C}_{agg}^{1}$ respectively, to be further aggregated by the Statistical Fusion module. To save computation, we only warp the top-$K$ candidates and their costs to $t$. In single-pair mode, both Temporal Shift and Past Costs reduce to the identity function. This strategy allows us to run the model trained in temporal mode also in single-pair mode with a limited drop in accuracy, \eg, at the beginning of a video sequence.
\section{Experiments}
\begin{table}[t]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{cc}
\multicolumn{2}{c}{
\begin{tabular}{cccc|ccc|ccc|c}
\hline
\toprule
& Multi-Level & Statistical & Adaptive & \multicolumn{3}{c}{EPE} & \multicolumn{3}{c}{3PE} & Runtime \\
\cline{5-11}
& Cost & Fusion & Shifting & ALL & OCC & NOC & ALL & OCC & NOC & (ms) \\
\toprule
(A) & \xmark & \xmark & \xmark & 0.600 & 1.949 & 0.369 & 2.85 & 11.62 & 1.36 & \textbf{40.62} \\
(B) & \cmark & \xmark & \xmark & 0.587 & 1.924 & 0.360 & 2.79 & 11.44 & 1.33 & 43.41 \\
(C) & \cmark & \cmark & \xmark & 0.581 & 1.908 & 0.356 & 2.78 & 11.39 & \textbf{1.31} & 44.54 \\
(D) & \xmark & \xmark & \cmark & 0.564 & 1.923 & 0.334 & 2.87 & 11.78 & 1.36 & 41.58 \\
(E) & \cmark & \cmark & \cmark& \textbf{0.532} & \textbf{1.830} & \textbf{0.315} & \textbf{2.75} & \textbf{11.37} & \textbf{1.31} & 45.42 \\
\bottomrule
\end{tabular} } \\
\\
\resizebox{0.90\columnwidth}{!}{
\begin{tabular}{ccc|c|c}
\hline
\toprule
& Candidates & Disparity & \multirow{2}{*}{EPE} & Runtime \\
\cline{5-5}
& Number & Range & & (ms) \\
\toprule
(F) & 3 & [-4, 4] & 0.565 & \textbf{41.61} \\
(G) & 9 & [-4, 4] & \textbf{0.531} & 55.84 \\
(H) & 5 & [-4, 4] & 0.532 & 45.42 \\
(I) & 5 & [-2, 2] & 0.543 & 45.42 \\
(J) & 5 & [-8, 8] & 0.568 & 45.42 \\
\bottomrule
\end{tabular}
} &
\resizebox{1.03\columnwidth}{!}{
\begin{tabular}{ccc|c|c}
\hline
\toprule
& Model & Depth-wise & \multirow{2}{*}{EPE} & Runtime \\
\cline{5-5}
& Variants & 3DCNN & & (ms) \\
\toprule
(K) & Baseline & \xmark & 0.589 & 63.91 \\
(L) & Baseline & \cmark & 0.600 & \textbf{40.62} \\
(M) & Full & \xmark & 0.535 & 75.42 \\
(N) & Full & \cmark & \textbf{0.532} & 45.42 \\
\bottomrule
\end{tabular}
}
\\
\end{tabular}
}
\vspace{-0.3cm}
\caption{\textbf{Single-pair mode ablation.} We assess on SceneFlow the key components of the proposed architecture.}
\label{tab:ablation-single}
\end{table}
This section describes the experimental setups used to evaluate TemporalStereo{} on popular datasets: SceneFlow~\cite{sceneflow}, TartanAir~\cite{TartanAir}, KITTI 2012~\cite{KITTI2012} and KITTI 2015~\cite{KITTI2015}.
As is standard practice in this field~\cite{AcfNet,GANet}, we compute the end-point error (EPE) and the percentage of points with a disparity error $>3$ pixels (3PE; $>5$ for 5PE) as error metrics in non-occluded (NOC), occluded (OCC) and all (ALL) regions. Moreover, following the literature~\cite{KITTI2015}, on KITTI we measure the D1 error instead of the 3PE in background (BG), foreground (FG) and all (ALL) areas, computed as the percentage of points with error $>3$ pixels and $>5\%$ of the ground truth. The runtimes reported in Tab.~\ref{tab:ablation-single},~\ref{tab:ablation-temporal} and~\ref{tab:benchmark} are measured at the corresponding image resolution of each dataset, on a single NVIDIA 3090 GPU. \supp{Supplementary material} provides more details about the implementation, datasets, and training setups.
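The three error metrics on dense disparity maps can be sketched as follows (the function and toy inputs are illustrative, not our evaluation code):

```python
import numpy as np

def stereo_metrics(pred, gt, valid):
    """EPE, 3PE and the KITTI D1 error over the valid pixels.
    pred, gt: disparity maps; valid: boolean mask of labelled pixels."""
    err = np.abs(pred - gt)[valid]
    g = gt[valid]
    epe = err.mean()
    pe3 = (err > 3.0).mean() * 100.0
    d1 = ((err > 3.0) & (err > 0.05 * g)).mean() * 100.0  # >3px AND >5% of gt
    return epe, pe3, d1

# toy example on a 2x2 map with ground-truth disparity 10 everywhere
gt = np.full((2, 2), 10.0)
pred = gt + np.array([[0.0, 1.0], [4.0, 10.0]])
valid = np.ones_like(gt, dtype=bool)
epe, pe3, d1 = stereo_metrics(pred, gt, valid)
```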
\begin{table}[t]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{cccc|ccc|ccc|c}
\hline
\toprule
& Temporal & Local & Past & \multicolumn{3}{c}{EPE} & \multicolumn{3}{c|}{3PE} & Runtime \\
\cline{5-7} \cline{8-10}
& Shift & Map & Costs & ALL & OCC & NOC & ALL & OCC & NOC & (ms) \\
\toprule
(A) & \xmark & \xmark & \xmark & 0.647 & 1.899 & 0.420 & 3.96 & 14.21 & 2.08 & \textbf{36.85} \\
(B) & \cmark & \xmark & \xmark & 0.643 & 1.842 & 0.435 & 4.00 & 14.55 & 2.15 & \textbf{36.85}\\
(C) & \cmark & \cmark & \xmark & 0.624 & 1.799 & \textbf{0.413} & 3.81 & 13.60 & \textbf{2.02} & 38.22 \\
(D) & \cmark & \cmark & \cmark & \textbf{0.610} & \textbf{1.637} & \textbf{0.413} & \textbf{3.73} & \textbf{12.60} & 2.03 & 40.13 \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\textbf{Temporal mode ablation.} We evaluate the temporal components on TartanAir, with $W_{tr}=W_{test}=4$.}
\label{tab:ablation-temporal}
\end{table}
\subsection{Ablation Study}
\textbf{Single-pair mode.} We leverage SceneFlow~\cite{sceneflow} to assess the impact of the main components of TemporalStereo{} in single-pair mode. \cref{tab:ablation-single} reports this ablation study, showing how each module consistently improves the results. In particular, the multi-level cost computation with group-wise convolutions (B) is effective in enriching the cost volume of each stage, and the baseline model (A) also benefits from the further aggregation performed by Statistical Fusion (C). Nonetheless, the adaptive shifting strategy (E) provides by far the main boost in performance. Notably, without the enriched context of each sparse candidate, the adaptive shifting strategy is less able to correct disparities and its performance gain (D) decreases considerably. Furthermore, the results in (F, G, H) show that TemporalStereo{} can predict accurate disparities with as few as 5 candidates. Thanks to its ability to shift the candidates towards a better solution, our model can search a fairly large space (\ie, $\beta=4,\;2,\;8$ as reported in H, I, J), with the best result obtained for $\beta=4$. 3D convolutions are common operations for cost aggregation in recent methods~\cite{coex,StereoNet,AcfNet}. Our baseline model (A) benefits from full 3D convolutions (K) compared to depth-wise~\cite{mobilenetv2} 3D convolutions (L), but the runtime increases considerably. In contrast, the full model (E) achieves a much larger improvement (N) with a negligible runtime increase (4.8\,ms), and the time-consuming full 3D convolutions are not necessary (M) for the accuracy improvement.
\noindent\textbf{Temporal mode.}
Before presenting the results achieved in temporal mode, we describe the protocol adopted at training and test time on the TartanAir dataset~\cite{TartanAir}. \textit{Given a temporal window containing $W$ frames}, the initial $f=\{1,2,...,W-1\}$ frames are processed by the network sequentially, without computing error metrics and with gradients blocked. Specifically, each outcome is cached and used for the next frame prediction. When $f=W$, we compute the errors (and backpropagate at training time). Tab.~\ref{tab:ablation-temporal} reports the ablation conducted on TartanAir~\cite{TartanAir}, with $W=4$ both at training and test time, aimed at evaluating the importance of each temporal module. Specifically, we train the model in single-pair mode for $20$ epochs (\ie, the model learns to solve the stereo matching task), then enable the temporal components for another 20 epochs (\ie, the model now focuses on how to use past information). We can notice how the baseline (A), \ie the model trained in single-pair mode for 40 epochs, is improved by using Temporal Shift to fuse past features with current ones (B). However, the benefit of Temporal Shift is much smaller than the gain provided by Local Map, which largely improves the performance (C). Finally, including the cached past costs as well (D) provides an additional boost in accuracy.
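The windowed protocol described above can be sketched as follows; `predict` and `train_step` are hypothetical stand-ins for the network and for the loss/metric computation:

```python
def run_window(frames, predict, train_step, W=4):
    """One temporal window: the first W-1 frames only warm up the cache
    (no error metrics, gradients blocked); errors are computed on frame W.
    predict(frame, cache) -> (disparity, new_cache); train_step scores
    the last prediction (and backpropagates at training time)."""
    cache = None
    for f in range(W - 1):                   # warm-up frames: cache only
        _, cache = predict(frames[f], cache)
    disp, _ = predict(frames[W - 1], cache)
    return train_step(disp, frames[W - 1])

# toy stand-ins: the "network" echoes the frame, the "loss" returns it
def predict(frame, cache):
    return frame, (cache or 0) + 1

def train_step(disp, frame):
    return disp

out = run_window([10, 20, 30, 40], predict, train_step, W=4)
```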
Another question that may arise is \textit{how does the choice of $W$ influence the results?} The larger the window, the broader the context, but frames too far back in time eventually stop being meaningful.
Tab.~\ref{tab:ablation-windows} reports the results achieved by TemporalStereo{} using different values of $W$. In particular, we distinguish two values, $W_{tr}$ and $W_{test}$, for training and testing respectively. In addition to the baseline $W_{tr}=1$ and the temporal $W_{tr}=4$ models, we also train a $W_{tr}=8$ model following the same configuration as $W_{tr}=4$. Compared with the state of the art, \ie PSMNet~\cite{PSMNet} and CoEx~\cite{coex}, our baseline outperforms them by a large margin on the EPE metric. In temporal mode, in general, the more frames involved in training or testing, the better the results in all regions.
Notably, \colorbox{w4!60!white}{$W_{test}=4$} and \colorbox{w8!60!white}{$W_{test}=8$} are always beneficial in OCC, witnessing that TemporalStereo{} can effectively exploit more frames to deal with such difficult areas.
Moreover, when $W_{test} > 1$, the results consistently surpass the single-pair baseline. This indicates that our network can benefit from past information after only a few frames (\eg, 4 or 8) from network startup. Finally, we can notice how temporal models tested with \colorbox{w1!60!white}{$W_{test}=1$} obtain results close to the baseline. This outcome implies that, in practical video applications, TemporalStereo{} can be trained once in temporal mode and then used in single-pair mode for the first inference (still providing a good initial estimate) and in temporal mode for all the others. \cref{fig:qualitative-tartan} visualises the benefits of the temporal model ($W_{tr}=8$, \colorbox{w8!60!white}{$W_{test}=8$}) in alleviating occlusion errors. The temporal mode (2) largely outperforms the single-pair one (1). Furthermore, we highlight how TemporalStereo{} is also robust against inaccurate poses: although in (3) the camera pose is set to the identity matrix and only the Temporal Shift module can help, it still outperforms (1). Finally, the proposed temporal cues can be plugged into recent efficient methods (\eg, CoEx~\cite{coex} and StereoNet~\cite{StereoNet}) for further improvement. Although CoEx has a very different architecture from ours, its EPE still decreases from 0.714 to 0.610, a nearly 14\% improvement. Nonetheless, our TemporalStereo{} exploits past cues most effectively.
\begin{table}[t]
\centering
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{ccc|ccc|ccc}
\toprule
& \multirow{2}{*}{Method} & \multirow{2}{*}{$W_{test}$} & \multicolumn{3}{c}{EPE} & \multicolumn{3}{c}{3PE} \\
\cline{4-9} & & & ALL & OCC & NOC & ALL & OCC & NOC \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{$W_{tr}=1$}} & PSMNet~\cite{PSMNet} & \cellcolor{w1!60!white}1 & \cellcolor{w1!60!white}0.866 & \cellcolor{w1!60!white}2.654 & \cellcolor{w1!60!white}0.558 & \cellcolor{w1!60!white}4.80 & \cellcolor{w1!60!white}18.63 & \cellcolor{w1!60!white}2.48 \\
& StereoNet~\cite{StereoNet} & \cellcolor{w1!60!white}1 & \cellcolor{w1!60!white}0.888 & \cellcolor{w1!60!white}2.647 & \cellcolor{w1!60!white}0.578 & \cellcolor{w1!60!white}5.15 & \cellcolor{w1!60!white}19.34 & \cellcolor{w1!60!white}2.68 \\
& CoEx~\cite{coex} & \cellcolor{w1!60!white}1 & \cellcolor{w1!60!white}0.714 & \cellcolor{w1!60!white}2.074 & \cellcolor{w1!60!white}0.463 & \cellcolor{w1!60!white}\textbf{3.83} & \cellcolor{w1!60!white}14.57 & \cellcolor{w1!60!white}\textbf{1.93} \\
& TemporalStereo{} & \cellcolor{w1!60!white}1 & \cellcolor{w1!60!white}\textbf{0.647} & \cellcolor{w1!60!white}\textbf{1.899} & \cellcolor{w1!60!white}\textbf{0.420} & \cellcolor{w1!60!white}3.96 & \cellcolor{w1!60!white}\textbf{14.21} & \cellcolor{w1!60!white}2.08 \\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{$W_{tr}=4$}} & TemporalStereo{} & \cellcolor{w1!60!white}1 & \cellcolor{w1!60!white} 0.665 & \cellcolor{w1!60!white} 1.731 & \cellcolor{w1!60!white} 0.459 & \cellcolor{w1!60!white} 3.95 & \cellcolor{w1!60!white} 13.34 & \cellcolor{w1!60!white} 2.16 \\
& TemporalStereo{} & \cellcolor{w4!60!white}4 & \cellcolor{w4!60!white}0.610 & \cellcolor{w4!60!white}1.637& \cellcolor{w4!60!white}0.413 & \cellcolor{w4!60!white}3.73 & \cellcolor{w4!60!white}12.60 & \cellcolor{w4!60!white}2.03 \\
& TemporalStereo{} & \cellcolor{w8!60!white}8 & \cellcolor{w8!60!white}\textbf{0.607} & \cellcolor{w8!60!white}\textbf{1.634} & \cellcolor{w8!60!white}\textbf{0.409} & \cellcolor{w8!60!white}\textbf{3.71} & \cellcolor{w8!60!white}\textbf{12.59} & \cellcolor{w8!60!white}\textbf{2.01} \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{$W_{tr}=8$}} & TemporalStereo{} & \cellcolor{w1!60!white}1 & \cellcolor{w1!60!white} 0.673 & \cellcolor{w1!60!white} 1.912 & \cellcolor{w1!60!white} 0.450 & \cellcolor{w1!60!white} 4.00 & \cellcolor{w1!60!white} 14.03 & \cellcolor{w1!60!white} 2.17 \\
& TemporalStereo{} & \cellcolor{w4!60!white}4 & \cellcolor{w4!60!white}0.609 & \cellcolor{w4!60!white}1.625 & \cellcolor{w4!60!white}0.412 & \cellcolor{w4!60!white}3.74 & \cellcolor{w4!60!white}12.58 & \cellcolor{w4!60!white}2.03\\
& StereoNet~\cite{StereoNet} & \cellcolor{w8!60!white}8 & \cellcolor{w8!60!white}0.656 & \cellcolor{w8!60!white}1.928 & \cellcolor{w8!60!white}0.428 & \cellcolor{w8!60!white}3.97 & \cellcolor{w8!60!white}14.45 & \cellcolor{w8!60!white}2.06 \\
& CoEx~\cite{coex} & \cellcolor{w8!60!white}8 & \cellcolor{w8!60!white}0.610 & \cellcolor{w8!60!white}1.637 & \cellcolor{w8!60!white}0.413 & \cellcolor{w8!60!white}3.73 & \cellcolor{w8!60!white}12.60 & \cellcolor{w8!60!white}2.03 \\
& TemporalStereo{} & \cellcolor{w8!60!white}8 & \cellcolor{w8!60!white}\textbf{0.601} & \cellcolor{w8!60!white}\textbf{1.615} & \cellcolor{w8!60!white}\textbf{0.405} & \cellcolor{w8!60!white}\textbf{3.71} & \cellcolor{w8!60!white}\textbf{12.54} & \cellcolor{w8!60!white}\textbf{2.02} \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\textbf{Impact of different frames in temporal mode.} Models are tested with $W_{test}=$ 1, 4 and 8.
}
\label{tab:ablation-windows}
\end{table}
\begin{figure}[t]
\centering
\resizebox{0.92\columnwidth}{!}{
\renewcommand{\tabcolsep}{1pt}
\begin{tabular}{cccc}
Inputs & (1) single-pair & (2) temporal & (3) temporal w/ bad poses \\
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_left.png} &
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_single_est.png} &
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_temporal_est.png} &
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_identity_est.png}
\\
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_right.png} &
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_single_error.png} &
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_temporal_error.png} &
\includegraphics[width=0.22\textwidth]{assets/tartanair/amusement_7_193_identity_error.png} \\
\end{tabular}
}
\vspace{-0.4cm}
\caption{\textbf{Benefits of temporal mode.} Compared to single-pair mode (1), temporal mode (2) is more accurate at occlusions, even with noisy poses (3) -- the colder the color, the lower the error.
}
\label{fig:qualitative-tartan}
\end{figure}
\begin{table}[t]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{ccc|cc|cc|cc|cc}
\toprule
\multicolumn{3}{c|}{\multirow{3}{*}{Pose Type}} & \multicolumn{4}{c}{\cellcolor{lightwhite!60!white} Outdoor} & \multicolumn{4}{c}{\cellcolor{lightblue!60!white}Indoor}\\
&&& \multicolumn{2}{c}{\cellcolor{lightwhite!60!white}Amusement}
& \multicolumn{2}{c}{\cellcolor{lightwhite!60!white}SoulCity}
&\multicolumn{2}{c}{\cellcolor{lightblue!60!white}Carwelding} &
\multicolumn{2}{c}{\cellcolor{lightblue!60!white}Hospital} \\
\cline{4-11}
&&& EPE & 3PE & EPE & 3PE & EPE & 3PE & EPE & 3PE \\
\midrule
\multicolumn{3}{c|}{Single-pair} & 0.562 & 2.62 & 0.529 & 2.60 & 0.508 & 2.84 & 0.904 & 3.74 \\
\multicolumn{3}{c|}{Identity} & 0.569 & 2.58 & 0.534 & 2.66 & 0.538 & 2.96 & 0.927 & 4.21 \\
\multicolumn{3}{c|}{DROID-SLAM~\cite{teed2021droid}} & \textbf{0.545} & \textbf{2.51} & 0.509 & \textbf{2.52} & 0.479 & \textbf{2.63} & \textbf{0.848} & \textbf{3.30} \\
\multicolumn{3}{c|}{GT} & \textbf{0.545} & \textbf{2.51} & \textbf{0.508} & \textbf{2.52} & \textbf{0.478} & \textbf{2.63} & \textbf{0.848} & \textbf{3.30} \\
\cline{1-11}
& $\sigma_{R}$ $(^{\circ})$ & $\sigma_{t}$ $(m)$ \\
\cline{1-11}
\multirow{5}{*}{\rotatebox[origin=c]{90}{GT+Noise}} & 10 & 0.50 & 0.716 & 3.52 & 0.791 & 4.86 & 0.608 & 3.34 & 1.152 & 4.85 \\
& 10 & 0.05 & 0.713 & 3.46 & 0.776 & 4.55 & 0.611 & 3.37 & 1.096 & 4.79 \\
& 1 & 0.50 & 0.577 & 2.58 & 0.592 & 2.83 & 0.531 & 2.91 & 0.934 & 4.49 \\
& 1 & 0.05 & \textbf{0.546} & \textbf{2.54} & 0.518 & 2.64 & 0.506 & \textbf{2.72} & 0.880 & 3.84 \\
& 1 & 0.01 & \textbf{0.546} & \textbf{2.54} & \textbf{0.517} & \textbf{2.62} & \textbf{0.505} & 2.74 & \textbf{0.873} & \textbf{3.66} \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\textbf{Impact of different pose inputs in temporal mode.} ``Single-pair'' denotes our model in single-pair mode, where no pose is needed; ``Identity'' means the identity matrix; ``DROID-SLAM'' means poses estimated by DROID-SLAM (ORB-SLAM3 fails on these 4 scenes);
and ``GT+Noise'' is the ground truth pose with manually added Gaussian noise.
}
\label{tab:ablation-pose}
\end{table}
\textbf{Pose Analysis.} To assess the impact of the pose input, we evaluate our model with $W_{tr}=8$ and \colorbox{w8!60!white}{$W_{test}=8$} on 4 scenes (2 indoor and 2 outdoor) of the TartanAir dataset~\cite{TartanAir} with the hard motion pattern. The overall results for several pose inputs are listed in Tab.~\ref{tab:ablation-pose}. Specifically, the rotation noise follows $\mathcal{N}(0, \sigma_{R})$ in degrees ($^\circ$) and the translation noise follows $\mathcal{N}(0,\sigma_{t})$ in meters (m). As reported: 1) poses from ground truth (GT) or estimated by DROID-SLAM~\cite{teed2021droid} yield almost the same EPE and 3PE metrics. This means that, when the ground truth pose is not available, an actual SLAM system like DROID-SLAM can be a viable alternative. 2) Poses with small rotation error (\eg, $\sigma_{R} \leq 1^{\circ}$) and translation error (\eg, $\sigma_{t} \leq 0.05$\,m) yield results very close to those obtained with ground truth poses, and always surpass the results obtained in single-pair mode. Even when the pose error reaches the maximum level of the dataset (\ie, $\sigma_{R} = 10^{\circ}, \sigma_{t} = 0.5$\,m), our model does not break down and still provides reasonable predictions, proving its robustness to inaccurate poses. 3) For outdoor scenes, since the view change between frames is much smaller than for indoor scenes, the accuracy drops negligibly even when the identity transformation is applied.
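The pose perturbation used in this study can be sketched as a small random rotation about a random axis plus per-axis translation noise (the exact noise parameterization in our experiments may differ from this sketch):

```python
import numpy as np

def perturb_pose(R, t, sigma_r_deg=1.0, sigma_t_m=0.05, rng=None):
    """Add Gaussian noise to a camera pose: a random rotation whose
    angle is N(0, sigma_r) degrees about a random axis, and N(0, sigma_t)
    translation noise per axis (meters)."""
    rng = rng or np.random.default_rng()
    angle = np.deg2rad(rng.normal(0.0, sigma_r_deg))
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues' formula for the noise rotation
    dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    return dR @ R, t + rng.normal(0.0, sigma_t_m, size=3)
```

With `sigma_r_deg = sigma_t_m = 0`, the pose is returned unchanged, matching the noise-free GT row of the table.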
\begin{table}[t]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\toprule
Pretrain & \multicolumn{2}{c|}{SF} & \multicolumn{2}{c|}{SF+Pseudo} & \multicolumn{3}{c}{TartanAir+Pseudo} \\
\hline
Method & CoEx & Ours & CoEx & Ours & CoEx & Ours & Ours(temporal) \\
\hline
D1-BG & 1.79 & 2.17 & 1.73 & 1.89 & 1.74 & 1.89 & \textbf{1.61} \\
D1-FG & 3.82 & 2.96 & 3.60 & 2.85 & 3.49 & 3.03 & \textbf{2.78} \\
D1-ALL & 2.13 & 2.30 & 2.04 & 2.05 & 2.03 & 2.07 & \textbf{1.81} \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\textbf{Impact of pretraining.} We study the importance of pretraining by evaluating the D1 metric on the KITTI 2015 test set. Results by CoEx and by our model in single-pair and temporal mode (temporal) are reported accordingly. ``SF'' and ``TartanAir'' denote the SceneFlow and TartanAir datasets respectively; ``+Pseudo'' means further training on KITTI raw sequences with our generated pseudo labels.}
\label{tab:pretraining}
\end{table}
\begin{table}[t]
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{cl|c|cc|cc|ccc|c}
\toprule
&\multirow{3}*{Method} & SceneFlow~\cite{sceneflow} & \multicolumn{4}{c|}{KITTI 2012~\cite{KITTI2012}} &\multicolumn{3}{c|}{KITTI 2015~\cite{KITTI2015}} & Run \\
\cline{3-10}
& & \multirow{2}*{EPE} & \multicolumn{2}{c|}{Reflective} & \multicolumn{2}{c|}{All} & \multicolumn{3}{c|}{D1} & time \\
& & & 3PE & 5PE & 3PE & 5PE & BG & FG & ALL & (ms) \\
\hline
\multirow{10}{*}{\rotatebox[origin=c]{90}{slow}}& PSMNet~\cite{PSMNet} & 1.09 & 10.18 & 5.64 & 1.89 & 1.15 & 1.86 & 4.62 & 2.32 & 410 \\
& GwcNet-gc~\cite{GWCNet} & 0.77 & 9.28 & 5.22 & 1.70 & 1.03 & 1.74 & 3.93 & 2.11 & 320 \\
& GANet-Deep~\cite{GANet} & 0.78 & 7.92 & 4.41 & 1.60 & 1.02 & 1.48 & 3.46 & 1.81 & 1800 \\
& AcfNet~\cite{AcfNet} & 0.87 & 8.52 & 5.28 & 1.54 & 1.01 & 1.51 & 3.80 & 1.89 & 480 \\
& LEAStereo~\cite{LEAStereo} & 0.78 & \textbf{6.50} & \textbf{3.18} & 1.45 & 0.88 & 1.40 & \textbf{2.91} & \textbf{1.65} & 300 \\
& ACVNet~\cite{ACVNet} & \textbf{0.46} & 7.03 & 4.14 & \textbf{1.13} & \textbf{0.71} & \textbf{1.37} & 3.07 & \textbf{1.65} & 200 \\
& CFNet~\cite{CFNet} & - & 7.29 & 3.81 & 1.58 & 0.94 & 1.54 & 3.56 & 1.88 & 180 \\
\cline{2-11}
& DWARF$\ddagger$~\cite{aleotti2020learning} & - & - & - & - & - & 3.20 & 3.94 & 3.33 & \textbf{140} \\
& DTF\_SENSE$\ddagger$~\cite{schuster2021dtf} & - & - & - & - & - & 2.08 & 3.13 & 2.25 & 760 \\
& SENSE$\ddagger$~\cite{SENSE} & - & - & - & - & - & 2.07 & 3.01 & 2.22 & 320 \\
\midrule
\multirow{7}{*}{\rotatebox[origin=c]{90}{fast}} & StereoNet~\cite{StereoNet} & 1.10 & - & - & 6.02 & - & 4.30 & 7.45 & 4.83 & \textbf{15} \\
& DeepPruner-Fast~\cite{DeepPruner} & 0.97 & - & - & - & - & 2.32 & 3.91 & 2.59 & 60 \\
& AANet+~\cite{AANet} & 0.72 & 9.10 & 5.12 & 2.04 & 1.30 & 1.65 & 3.96 & 2.03 & 60 \\
& CoEx~\cite{coex} & 0.69 & 8.63 & 4.49 & 1.93 & 1.13 & 1.79 & 3.82 & 2.13 & 27 \\
& HITNet~\cite{HitNet} & - & 7.54 & 4.01 & 1.89 & 1.29 & 1.74 & 3.20 & 1.98 & 20 \\
& Ours(single-pair) & \textbf{0.53} & 6.99 & 3.52 & 1.94 & 1.08 & 1.89 & 2.85 & 2.05 & 44 \\
& Ours(temporal)& - & \textbf{6.14} & \textbf{3.08} & \textbf{1.61} & \textbf{0.88} & \textbf{1.61} & \textbf{2.78} & \textbf{1.81} & 48 \\
\bottomrule
\end{tabular}
}
\vspace{-0.3cm}
\caption{\textbf{Comparison with state-of-the-art methods -- slow and fast.} We report results of state-of-the-art methods on SceneFlow~\cite{sceneflow} and KITTI datasets~\cite{KITTI2012,KITTI2015}. \textit{fast} denotes models with runtime $<100$ ms. $\ddagger$ means 3D scene flow based methods.
}
\label{tab:benchmark}
\end{table}
\subsection{Evaluations on KITTI Benchmarks}
To conclude, we run TemporalStereo{} on the KITTI 2012 and KITTI 2015 test data and submit the results to the online leaderboards.
\textbf{Pretraining.} As KITTI is very challenging due to the lack of a large training set, pretraining on the SceneFlow dataset~\cite{sceneflow} has become a common training schedule for deep-learning-based stereo methods~\cite{PSMNet,LEAStereo}. Recent methods~\cite{CFNet,HSM} also introduce extra data, \eg, HR-VS~\cite{HSM}, ETH3D~\cite{eth3d} and Middlebury~\cite{middlebury}, to augment KITTI for training. Following the knowledge distillation strategy proposed in AANet+~\cite{AANet}, we augment the KITTI dataset by leveraging the predictions of a pretrained LEAStereo~\cite{LEAStereo} to generate pseudo labels on the KITTI raw sequences~\cite{eigen2014depth}. As a result, we obtain 61 stereo video sequences (containing 42K pairs) with pseudo labels for pretraining, with poses computed from the GPS/OXTS devices on KITTI. We study the influence of different pretraining strategies on the D1 metric on the KITTI 2015 test set. As shown in Tab.~\ref{tab:pretraining}, compared with CoEx~\cite{coex}, when pretrained only on SceneFlow, our model in single-pair mode performs better on D1-FG and worse on D1-ALL and D1-BG. After further training on pseudo labels, we obtain almost the same result as CoEx, which suggests our model requires more data to reach its full potential. Replacing SceneFlow with TartanAir, the results of both CoEx and our model in single-pair mode do not improve, achieving almost the same accuracy, \ie D1-ALL 2.03\% for CoEx and 2.07\% for ours. However, by leveraging temporal information, the temporal mode boosts the performance of TemporalStereo{} further by a large margin, reducing D1-ALL to 1.81\%, which supports the major impact of our temporal paradigm over pseudo-label training.
It is worth pointing out that the poses of the KITTI raw sequences and of KITTI 2015 are estimated by GPS/OXTS devices and by ORB-SLAM3~\cite{ORBSLAM3} respectively; the remarkable results achieved demonstrate once more the robustness of TemporalStereo{} to pose errors.
\cref{tab:benchmark} collects results achieved by a variety of deep stereo models, both on SceneFlow and on the KITTI online benchmarks. For KITTI, we report the error rates achieved by TemporalStereo{} in both single-pair and temporal mode ($W_{tr}=W_{test}=11$), finetuned from models pretrained on the KITTI raw sequences~\cite{eigen2014depth}. In single-pair mode, even with only one stereo pair available, TemporalStereo{} achieves state-of-the-art results on all datasets. When switching to temporal mode, our network surpasses all fast stereo matching methods by a large margin. In particular, our D1-FG result is even better than that of slow models~\cite{LEAStereo,GANet} running in hundreds of milliseconds. It is also worth pointing out that TemporalStereo{} in temporal mode, benefiting from past semantic and geometric information, achieves better results than in single-pair mode on the D1-FG metric (the D1 error on foreground areas, \ie, moving cars), which demonstrates the robustness of our model on \textbf{dynamic objects}. The same advantage is also evident on 3PE and 5PE in \textbf{reflective regions} on KITTI 2012.
We also compare with state-of-the-art \textbf{3D scene flow} based methods~\cite{aleotti2020learning,schuster2021dtf,SENSE}, which estimate a dense 3D motion field and output disparity. In contrast, by simply using the camera pose, our TemporalStereo{} is the clear winner in both accuracy and efficiency on KITTI 2015.
\section{Conclusions and Limitations}
We presented TemporalStereo, a novel network devoted to fast stereo matching. The enhanced coarse-to-fine design and sparse cost volumes allow for fast inference and high performance. Moreover, TemporalStereo{} can exploit past information to improve its predictions, especially in occluded regions. The same model, trained once, can handle either single or multiple stereo pairs effectively. Since the temporal mode currently relies on externally provided poses, empowering our system with its own pose estimation is a direction for future research.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Most of the evidence for gauge-gravity duality comes from conformal systems with some amount of
supersymmetry. It is a challenge to come up with examples where the duality is at work
while these symmetries are absent or broken. Another obstruction to collecting evidence for the duality is that it is best understood when
the gauge-theory side (we restrict ourselves here to $SU(N)$ gauge theories) is strongly coupled
and $N$ is large. Strong coupling requires nonperturbative methods, for which our analytical tools are
limited in four or higher dimensions.
Numerical methods, on the other hand, become prohibitive for these systems when $N$ is very large.
In this paper we present an example in five space-time dimensions where it may be possible to overcome all of the above difficulties.
In order to construct the example we first define a certain object that we name Coulomb's constant,
an appropriate generalization of our familiar constant in four dimensions.
Coulomb's law in four dimensions for a unit positive charge is typically written in the form $E=\frac{1}{4\pi}\frac{1}{r^2}$
(we work in units where $\epsilon_0=1$). The $1/4\pi$ is a convention suggested by Gauss's law and it represents,
in these units, the classical overall strength of the underlying $U(1)$ force.
One refers to this constant as Coulomb's constant.
A generalization of this definition to any quantum $SU(N)$ theory requires the knowledge of
three dimensionless objects: the charge $c_1(r)$
obtained from the static potential $V_4(r)$, the 't Hooft coupling $\lambda = g^2 N$
with $g$ the renormalized gauge coupling and the
group-dependent Casimir index $C_F=(N^2-1)/2N$. Then, the combination
\begin{equation}
k_4=\frac{N}{C_F}\frac{c_1}{\lambda} = \frac{1}{4\pi}\label{4dC}
\end{equation}
is a nonperturbative and $N$-independent definition of Coulomb's constant\footnote{Here and in the
rest of this letter the term ``Coulomb's constant'' should be taken with a grain of salt.
What this really means is that one can define a renormalized coupling
through $c_1(r)$ anywhere on the phase diagram, such that it satisfies eq. (\ref{4dC}).} in four dimensions.
Renormalizability protects the validity of this definition throughout the phase diagram.
Notice in particular that the quantity defined in Eq. (\ref{4dC}) is scheme-independent in perturbation theory.
To illustrate the point, we write it in two different schemes, the '$c$' and '$qq$' schemes \cite{FrancRainer}:
\begin{eqnarray}
\frac{1}{4\pi} &=& \frac{c_1^{(c)}(r)}{g_c^2(1/r)C_F} = \left[\frac{c_1(r)}{g^2(1/r)}\right]_{\rm c-scheme}\frac{1}{C_F}\nonumber\\
\frac{1}{4\pi} &=& \frac{c_1^{(qq)}(r)}{g_{qq}^2(1/r)C_F} = \left[\frac{c_1(r)}{g^2(1/r)}\right]_{\rm qq-scheme}\frac{1}{C_F}
\end{eqnarray}
where $c_1^{(c)}(r)=-\frac{1}{2}\,r^3 V_4''(r)$ and $c_1^{(qq)}(r)=r^2 V_4'(r)$.
We denote $V'_4(r)=\partial V_4(r)/\partial r$ etc.
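As a simple consistency check, inserting the classical Coulomb potential $V_4(r)={\rm const.}-\frac{C_F\, g^2}{4\pi r}$ into the two definitions above gives
\[
c_1^{(qq)}(r)=r^2 V_4'(r)=\frac{C_F\, g^2}{4\pi}\, ,\qquad
c_1^{(c)}(r)=-\frac{1}{2}\, r^3 V_4''(r)=-\frac{1}{2}\, r^3\left(-\frac{2\, C_F\, g^2}{4\pi\, r^3}\right)=\frac{C_F\, g^2}{4\pi}\, ,
\]
so that both schemes reproduce $k_4=1/(4\pi)$ already at the classical level.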
The couplings in the two different schemes are related in a well defined way in perturbation theory,
as is their relation to more conventional continuum schemes such as the ${\rm \overline {MS}}$ scheme \cite{FrancRainer}.
Clearly, while the charges $g^2$ and the $c_1$'s are scheme dependent, their ratio, i.e. our quantity of interest, is not.
In five dimensions things are more complicated as the classical $U(1)$ Coulomb constant $1/(2\pi^2)$, suggested by Gauss's law, proves
to be more difficult to generalize. To begin, $SU(N)$ gauge theories are nonrenormalizable in five dimensions.
As a consequence, in the non-self-interacting, perturbative limit they are free, and in the vicinity of this limit they are cut-off dominated.
To define nonperturbatively a computationally tractable interacting theory one observes that
even in infinite volume these theories possess a first order phase transition separating a confined phase at strong coupling
from a Coulomb phase at weak coupling and that as the phase transition is approached
(from the side of the Coulomb phase in this work), cut-off effects diminish \cite{FM}.
By analogy to eq. (\ref{4dC}) we form the dimensionless combination
\begin{equation}
k_5=2 \frac{c_2}{\lambda_5}\label{5dC}
\end{equation}
where $c_2$ is the static charge of dimension length appearing in the static potential $V_5(r)={\rm const.} - c_2/r^2$
and $\lambda_5=g_5^2 N$ the five-dimensional analogue of $\lambda$ which has also dimension of length due to the
dimensionful five-dimensional coupling $g_5$.
For the dimensionless prefactor, instead of the $N/C_F$ used in four dimensions, we take its infinite-$N$ limit, which equals $2$.
In this letter, we take $k_5$ as our definition of Coulomb's constant in five dimensions and
we compute it in two different ways.
We stress that in five dimensions there is no unambiguous notion of perturbative ``schemes''
because the theory is nonrenormalizable.
Nevertheless, our quantity is expected to be independent of any possible scheme definition,
for the same reasons as in four dimensions.
Moreover, since our computations are anyway far from the perturbative regime
and in addition we compute exactly the same quantity with both methods, we expect our comparative study
of Coulomb's constant in five dimensions to be completely free of ambiguities.
Our results suggest that these expectations are indeed justified.
\section{$k_5$ from Gauge Theory}
Near the phase transition neither perturbation theory nor strong coupling techniques are useful.
The appropriate analytical field theory computational tool instead is the Mean-Field expansion,
which can be conveniently implemented on a Euclidean lattice \cite{DZ}.
It is worth stressing that on the isotropic lattice both the order of the phase transition (it is of first order) and
the numerical value of the critical coupling $\beta_c$ are predicted correctly by the Mean-Field \cite{DZ}, as
several Monte Carlo studies have confirmed \cite{Creutz}.
\begin{figure}[!t]
\centerline{\epsfig{file=isotropic.eps,width=16cm}}
\caption{\small The phase diagram of isotropic, infinite volume five-dimensional $SU(N)$ gauge theories.}
\end{figure}
We begin by expressing eq. (\ref{5dC}) in lattice parameters as
\begin{equation}
k^{(\rm LAT)}_5(L) = {\overline c}_2 \frac{\beta}{N^2}\, ,\label{5dCL}
\end{equation}
where ${\overline c}_2=c_2/a$ with $a$ the lattice spacing and $\beta=2Na/g_5^2$ the dimensionless lattice coupling.
We take the number of lattice points $L=2\pi \rho/a$ in all five directions to be the same,
so $\rho$ is the physical radius of each periodic dimension.
On a finite lattice the quantity in eq. (\ref{5dCL}) depends on $L$.
To leading order in the Mean-Field expansion, the static potential extracted from a Wilson Loop of length $r$,
oriented along a four-dimensional slice and computed using Wilson's plaquette action, is for $SU(2)$ \cite{MF}
\begin{eqnarray}
aV_5(r/a) &=& -2 \log{(\overline v_0)}
-\frac{1}{\overline v_0^2}\frac{1}{L^4}\sum_{p} \sum_{M\ne 0}
\delta_{p_0,0}\, \Bigl\{ \bigl(\frac{1}{4}\cos(p_Mr)+1 \bigr)
K^{-1}_{00}(p,0) \nonumber\\
&+& \sum_A \bigl(\frac{1}{4}\cos(p_Mr)-1 \bigr)
K^{-1}_{00}(p,A)\Bigr\} \,.\label{StaticL}
\end{eqnarray}
In the above, $\overline v_0$ is the Mean-Field background
(zero by definition in the confined phase, which explains why we did not consider a four-dimensional model)
and $p=\{p_M=\frac{2\pi }{L}l_M\}, l_M=0,\cdots, L-1$ are the
lattice momenta. The Euclidean indices take the values $M,N=0,1,2,3,5$.
The inverse propagator $K_{MN}(p,\alpha)$, apart from $\beta$ and $\overline v_0$, depends on the momenta and the group index $\alpha=0,A=1,2,3$.
At this order, there is no distinction between bare and renormalized lattice coupling, so $\beta$
can be thought of as a physical coupling.
Among the observables of this theory one finds states with vector and scalar quantum numbers.
Appropriate Polyakov Loops can be used to extract the lightest vector's mass, which turns out to be
$am_V=12.61/L$ in units of the lattice spacing, and the lightest scalar's mass $am_S$, which depends only on $\beta$ and can therefore be
used as a measure of the lattice spacing. All observables turn out to be gauge independent.
The related expressions with their detailed derivation can be found in \cite{MF}.
A Line of Constant Physics (LCP) is defined as the value
of a physical quantity along a trajectory of constant $q^{(LAT)}=m_V/m_S$.
The LCP which corresponds to the scalar and vector having Kaluza-Klein masses is $q^{(LAT)}=1$.
We call this the KK-LCP.
The algorithm then is to compute eq. (\ref{StaticL}) numerically for a given $L$ and plot $(r/a)^2 aV_5(r/a)$ vs $(r/a)^2$.
This is expected to be a straight line whose intercept
is ${\overline c}_2$,
from which our physical quantity $k^{(\rm LAT)}_5(L)$ can be easily obtained
\footnote{An alternative way to extract ${\bar c}_2$ would be from $\frac{1}{2}r^3 F_5(r)$, with $F_5(r)=\partial V_5(r)/\partial r$,
by analogy to four dimensions \cite{FrancRainer}.
For our purposes here though a global fit will suffice.}. These steps are then repeated for several
increasing values of $L$ along the KK-LCP (see Fig. 1), and finally the result is extrapolated to $k^{(\rm LAT)}_5(\infty)$. In Table 1 we present the available data.
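The straight-line fit used to read off the intercept can be sketched in a few lines; the pairs fed to it would be $(r/a)^2$ and $(r/a)^2\, aV_5(r/a)$ computed from eq. (\ref{StaticL}), which is not reproduced here.

```python
# Ordinary least-squares straight-line fit.  Feeding it the pairs
# ((r/a)^2, (r/a)^2 * aV5(r/a)) yields, up to sign conventions, the
# intercept \bar c_2 used to form k5^(LAT) (illustrative sketch).

def line_fit(xs, ys):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope
```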
For $SU(2)$ $\beta_c\simeq 1.6762016760$ and from Table 1 we can see
how the lattice spacing decreases as the phase transition is approached.
\hskip 2cm
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$L$ & $a m_S$ & $\beta$ & $k^{(\rm LAT)}_5 \times 10^4$ \\
\hline \hline
${\bf 24}$ & $0.5236$ &$1.6764598$ & $708$\\ \hline
${\bf 30}$ & $0.4189$ &$1.6763073$ & $700$\\ \hline
${\bf 36}$ & $0.3490$ &$1.67625254$ & $693$\\ \hline
${\bf 42}$ & $0.2992$ &$1.67622913$ & $687$\\ \hline
${\bf 48}$ & $0.2618$ &$1.67621776$ & $682$\\ \hline
${\bf 54}$ & $0.2327$ &$1.67621172$ & $678$\\ \hline
${\bf 60}$ & $0.2094$ &$1.67620825$ & $674$\\ \hline
${\bf 72}$ & $0.1745$ &$1.67620484$ & $668$\\ \hline
${\bf 96}$ & $0.1309$ &$1.676202674$ & $661$\\ \hline
${\bf 300}$ & $0.0419$ &$1.6762016769$ & $644$\\ \hline
\end{tabular}
\center{TABLE 1. The KK-LCP lattice data.}
\end{center}
The infinite volume extrapolation gives
\begin{equation}
k_5^{(LAT)}(\infty)=0.0636\, ,
\end{equation}
see Fig. 2. It is not easy to assign an error to our result because it is a numerical
computation of an analytical expression. A reasonable estimate gives an error of $\pm 0.0002$.
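The infinite-$L$ extrapolation can be reproduced from the Table 1 data. The sketch below fits a quadratic ansatz in $1/L$; this functional form is our assumption, as the text does not state the fit used, but its intercept lands close to the quoted value.

```python
# Infinite-volume extrapolation of k5(L) from the Table 1 data.
# The quadratic-in-1/L ansatz is an assumption made for illustration.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def extrapolate(Ls, ks):
    """Least-squares fit k(L) = a + b/L + c/L^2; return the intercept a."""
    xs = [1.0 / L for L in Ls]
    # normal equations of the quadratic polynomial fit
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(k * x ** i for x, k in zip(xs, ks)) for i in range(3)]
    return solve3(A, b)[0]

Ls = [24, 30, 36, 42, 48, 54, 60, 72, 96, 300]
ks = [0.0708, 0.0700, 0.0693, 0.0687, 0.0682, 0.0678,
      0.0674, 0.0668, 0.0661, 0.0644]
k_inf = extrapolate(Ls, ks)  # close to the quoted 0.0636
```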
\begin{figure}[!t]
\centerline{\epsfig{file=k5.eps,width=14cm}}
\caption{\small The infinite $L$ extrapolation of $k_5^{\rm (LAT)}(L)$. Since it is a first order phase transition, the physical volume goes
to infinity at a finite lattice spacing.}
\end{figure}
\section{$k_5$ from Gravity}
One of the reasons we chose to compute Coulomb's constant is that
we were after an $N$-independent quantity.
Up to now we have computed it only for $N=2$, so we must prove that it is indeed $N$-independent.
Instead of developing the necessary Lattice Mean-Field formalism for general $N$ which
is rather involved \cite{Kutsumb}, here we will try to prove it in a simpler way.
Consider a large number $N$ of coincident $D4$-branes in type IIA string theory, with one of the spatial dimensions
along the brane compactified on a circle of radius $\rho$. The gravitational backreaction of such a configuration is \cite{Witten}
\begin{eqnarray}
ds^2 = \left(\frac{u}{R}\right)^{3/2}
\left(\eta_{\mu\nu}dx^\mu dx^\nu+f(u)dx_5^{2}\right)
+\left(\frac{R}{u}\right)^{3/2}
\left(\frac{du^2}{f(u)}+u^2 d\Omega^2\right),
\end{eqnarray}
\begin{equation} e^\phi= g_s \left(\frac{u}{R}\right)^{3/4},~F=\frac{2\pi N}{V}\epsilon
\end{equation}
\begin{equation}
f(u)=1-\frac{u_k^3}{u^3} \ ,~R^3=\pi g_s l_s^3 N ~,
\end{equation}
where $\mu,\nu=0,1,2,3$, $x_5$ parametrizes the circle and $d\Omega^2$, $\epsilon$ and $V=8\pi^2/3$
are the line element, the volume form and the volume of a unit $S^4$,
respectively. $R$ is the radius of $S^4$. $\phi$ is the dilaton, $F$ is the Ramond-Ramond 4-form
and $g_s$ and $l_s $ are the fundamental string coupling and length respectively.
$u_k$ is a minimal length scale in the $u$-direction, reflecting the absence of conformal invariance.
This background is believed to be dual to a five-dimensional gauge theory along $x_\mu, x_5$ compactified on
a circle. The low energy spectrum does not contain fermions
because they are assumed to have anti-periodic boundary conditions along the circle,
but it does contain a set of adjoint scalars.
These however are expected to pick up nonperturbatively a large mass and eventually also decouple.
In this case, the dual theory at low enough energies is a pure, non-supersymmetric, five-dimensional $SU(N)$ gauge theory in the large
't Hooft coupling $\lambda$ limit, with $\lambda_5=4\pi^2 g_sl_s=2\pi \rho \lambda$.
The lightest massive state is the adjoint scalar that originates from the fifth-dimensional components of the
gauge field (not to be confused with the adjoint scalar mentioned above), with a Kaluza-Klein mass $1/\rho$.
It is known \cite{Maldacena} that the minimal surface of a string world-sheet
parametrized by the coordinates $\sigma$ and $\tau$ extending in
the holographic $u$-dimension and whose boundary lies on the five-dimensional boundary of the
six dimensional space transverse to the $S^4$,
represents the Wilson Loop of the dual gauge theory.
More specifically, by taking the string world-sheet ansatz
$t=\tau, x_1=\sigma, x_5=\pi \rho/2, u=u(\sigma)$,
one arrives at the expressions \cite{Sonni}
\begin{equation}
r = 3\rho \sqrt{A}\int_1^\infty \frac{dy}{\sqrt{(y^3-A^3)(y^3-1)}}\label{L}
\end{equation}
and
\begin{eqnarray}
V_5 = \frac{u_k}{l_s^2} \frac{2}{A} \Biggl\{\int_1^\infty dy \Bigl[\frac{y^3}{\sqrt{(y^3-A^3)(y^3-1)}}
- \frac{1}{\sqrt{1-\frac{A^3}{y^3}}}\Bigr]-\int_A^1 dy \frac{1}{\sqrt{1-\frac{A^3}{y^3}}}\Biggr\}\; \label{V5}
\end{eqnarray}
for the length of the Wilson Loop and the static potential respectively.
The latter is computed in the regularization scheme where its finiteness is ensured by subtracting out
the infinite mass of the static quarks.
We are using the dimensionless parameters $y=u/u_0$ and $A=u_k/u_0$
where $u_0\ge u_k$ is the turning point of the string world-sheet, i.e. the deepest point the string reaches in the
holographic dimension. The $A\to 0$ limit corresponds to the ultra-violet limit of the gauge theory
where in fact the circle de-compactifies.
In this limit the adjoint scalar of mass $1/\rho$ becomes part of the massless five-dimensional gauge field,
so $q^{(GRAV)}=m_V/m_S=1$ as in the previous section.
As $\rho$ increases and the system enters its five-dimensional regime, $m_S$ goes to zero faster
than the masses of the other adjoint fields, since it is protected by gauge invariance.
Thus, as long as we are below their mass scale, the latter always remain decoupled.
We can, instead of decompactifying the circle, stop at some large but finite $\rho$ where supersymmetry remains broken.
In fact, large and infinite $\rho$ are practically indistinguishable from the point of view of our observable.
In \cite{Giatagan} it was shown how to disentangle the system of eqs. (\ref{L}) and (\ref{V5}) in the ultraviolet, in order
to obtain $V_5(r)$.
The solution involves series whose individual terms have divergences which however must cancel in the sum because
eq. (\ref{L}) is manifestly finite and eq. (\ref{V5}) is made finite by the regularization.
In any case their effect accumulates in an irrelevant additive constant in $V_5$.
The final result is
\begin{equation}
V_5(r) = {\rm const.} -\frac{c_2}{r^2}, \;\; c_2 = \frac{1}{54\pi^2}\left(\frac{\sqrt{\pi}\Gamma(2/3)}{\Gamma(7/6)}\right)^3 \lambda_5
\, .\label{d5}
\end{equation}
It is possible to verify this numerically, directly from eqs. (\ref{L}) and (\ref{V5})
(which also shows that the additive constant actually sums to zero).
This is a five-dimensional Coulomb potential that points to the dual gauge theory being in
its Coulomb phase. Moreover, the duality works at strong coupling
so the gauge theory must be strongly coupled.
These requirements are simultaneously satisfied near the phase transition, precisely in the regime where we
computed the static potential on the lattice.
In order to make quantitative contact with the lattice formulation, we rewrite eq. (\ref{d5})
in terms of lattice parameters as
\begin{equation}
{\overline c}_2 = \frac{1}{27\pi^2}\left(\frac{\sqrt{\pi}\Gamma(2/3)}{\Gamma(7/6)}\right)^3\frac{N^2}{\beta}\, .\label{c2dimless}
\end{equation}
Eq. (\ref{c2dimless}) implies that the obvious $N$-independent dimensionless quantity
that plays the role of Coulomb's constant is
\begin{equation}
k_5^{(GRAV)} = {\overline c}_2\frac{\beta}{N^2}\label{k5G}
\end{equation}
in agreement with eq. (\ref{5dCL}) and suggests for it the value
\begin{equation}
k_5^{(GRAV)} =\left[\frac{B(2/3,1/2)}{3\pi^{2/3}}\right]^3=0.0649\cdots
\end{equation}
with $B(x,y)$ the Euler Beta function.
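The closed-form value is easy to check numerically with standard-library Gamma functions:

```python
from math import gamma, pi

# Numerical check of the closed form: B(2/3, 1/2) expressed through
# Gamma functions, then k5 = [B(2/3, 1/2) / (3 * pi^(2/3))]^3.
B = gamma(2 / 3) * gamma(1 / 2) / gamma(7 / 6)
k5_grav = (B / (3 * pi ** (2 / 3))) ** 3  # approximately 0.0649
```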
The discrepancy between $k_5^{(LAT)}$ and $k_5^{(GRAV)}$ is $1.81\%$--$2.43\%$.
\section{Conclusion}
Motivated by its four-dimensional analogue, we proposed a definition for Coulomb's constant
for five-dimensional $SU(N)$ gauge theories and then computed it in two different ways.
One in a Lattice Mean-Field expansion for $N=2$ and the other
via a Holographic Wilson Loop calculation.
The agreement of the results, numerically found to be within approximately $2\%$,
relies on the validity of the duality between the two approaches.
Even though the duality is expected to hold for large $N$,
we argued that the comparison makes sense because
-as the numerical agreement suggests- we are dealing with
a quantity that seems to remain $N$-independent to a good approximation beyond the large $N$ limit.
\vskip .1cm
{\bf Acknowledgements}
The author is grateful to F. Knechtli for many useful comments on a draft and
for generating the data for the larger lattices at the U. Wuppertal.
He would also like to thank the KITP, Santa Barbara and especially the organizers of the
program ``Novel Numerical Methods for Strongly Coupled Quantum Field Theory and Quantum Gravity'' for hospitality.
This research was supported in part by
the NTUA grants PEBE 2009, 2010 and in part by the National Science Foundation under Grant No. PHY11-25915.
\newpage
\section{Introduction}
Deep learning in image classification has achieved remarkable improvements in performance and accuracy \cite{NIPS2012_c399862d} \cite{ren2015faster} \cite{simonyan2014very} \cite{szegedy2017inception} \cite{he2016deep}. An abundance of labeled datasets and computationally efficient machines has helped researchers to correctly classify and identify images. With access to large annotated medical image collections, deep learning has also found applications in health analytics \cite{survey}. Using deep learning architectures to correctly predict pathological conditions can automate the diagnosis process and provide faster results without requiring professional medical staff. According to a report from WHO\footnote{https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5704652/}, around 3 billion people still lack proper access to medical imaging expertise. An automated diagnosis system could be beneficial at places where medical expertise is insufficient. Highly accurate prediction models can even assist humans and remove errors caused by fatigue or by cognitive and perceptual bias.
\par
Convolutional Neural Networks (CNN) \cite{fukushima1991handwritten} are highly efficient in learning and extracting characteristics of images. A CNN is a network of convolution and pooling layers applied one after another over a number of depths to extract essential features from the input data. The lower layers of the network extract the input images' local
features, while the higher layers gradually identify more specific and global features. AlexNet \cite{NIPS2012_c399862d} was one of the early CNN architectures using multiple convolutional layers. This network used five convolutional layers and three fully connected layers in its architecture. VGG Net \cite{simonyan2014very} was a further enhancement of AlexNet that utilized sixteen and nineteen weight layers, often termed VGG16 and VGG19. Stacking convolutional layers one after another to create a deep network often led to overfitting \cite{goodfellow2016deep}; hence, instead of making the network deeper, Inception Net \cite{szegedy2017inception} proposed to make the network wider by applying filters of multiple sizes at the same level. Residual Networks \cite{he2016deep}, also known as ResNet, on the other hand proposed a network that utilized skip connections, or shortcuts that let the network jump over some layers, allowing deeper models to be trained. DenseNet \cite{densenet} generalized this approach by connecting each layer of the network to all previous layers and passing the feature vector of that layer to all subsequent layers. This mitigated the vanishing gradient problem in deep neural networks. All these architectures, however, follow the same CNN principle, consisting of a set of layers like convolution, sub-sampling, dense, and soft-max, with some topological differences
in how features are transmitted across layers.
\par
Breakthrough achievements of deep learning architectures in computer vision applications have prompted researchers to employ such deep convolutional neural network in medical images. One of the critical tasks in medical imaging is correctly diagnosing the presence or absence of disease and malignancy in the patient's medical images like X-ray or MRI. Deep networks have improved this diagnosis accuracy in various applications, including detecting diabetic retinopathy \cite{eye}, and skin cancer \cite{skin}. Since thorax diseases constitute a significant health threat, and chest x-rays are widespread means of medical diagnosis, different research projects have been carried out to investigate the performance of deep learning in detecting thoracic diseases.
\par
\begin{figure}[ht]
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\textwidth]{figures/eye.png}
\caption [A sample of fundus photo] { A sample of fundus photo \cite{eye}}
\label{fig:figure1}
\end{minipage}%
\hfil
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\textwidth]{figures/skin.png}
\caption [A sample of skin image] {A sample of skin image \cite{skin}}
\label{fig:figure2}
\end{minipage}%
\hfil
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\textwidth]{figures/xray.png}
\caption [A sample of chest x-ray ] {A sample of chest x-ray \cite{ChestX-Ray8}}
\label{fig:figure3}
\end{minipage}
\end{figure}
In 2015, a convolutional neural network classifier was tested on chest radiographs for classification of pathologies on 433 images, with an area under the curve of 0.87--0.94 \cite{2015chest}. The authors used a pre-trained model \cite{donahue2014decaf} similar to the Krizhevsky model presented in \cite{krizhevsky2012imagenet}. The findings showed the feasibility of detecting x-ray pathology using deep learning approaches. In 2017, a CNN classifier of abnormalities on frontal chest radiographs \cite{2017chest} was proposed that obtained an area under the curve of 0.964 against 2443 radiologist-annotated images, with an overall sensitivity and specificity of 91 percent. The authors used a medium-sized dataset of 35,000 x-rays with the architecture proposed by Google as GoogleNet \cite{szegedy2015going}. The positive results showed that deep neural network architectures can achieve good success in detecting common pathologies, even with modest-sized medical datasets.
\par
Maduskar et al. \cite{TB1} evaluated the performance of convolutional neural networks in tuberculosis detection using a small dataset of 1007 chest x-rays. The authors experimented with two architectures, AlexNet \cite{NIPS2012_c399862d} and GoogleNet \cite{szegedy2015going}, with and without pretraining. The best performance was obtained with an ensemble of both architectures in the pretrained condition (AUC=0.99); the pretrained models always outperformed the untrained ones. Lakhani et al. \cite{TB2} compared the performance of a computer-aided tuberculosis diagnosis system (CAD4TB) with health professionals to inspect whether machine learning classifiers performed as desired. Tests were performed on 161 subjects, and the final results showed that the tuberculosis assessment of CAD4TB was comparable to that of the health officers.
\par
Wang et al. proposed a weakly supervised multi-label classification and localization of thoracic diseases in 2016, taking multi-label losses into account \cite{ChestX-Ray8}. Considering the accessibility and everyday global use of chest x-rays in pathology diagnosis, they acquired around 108,948 images annotated with eight different diseases in order to detect common thoracic diseases using deep learning. Though they obtained promising results, they were skeptical about using deep neural architectures in fully automated, high-precision computer-aided diagnosis systems. In 2017, Rajpurkar et al. designed a deep learning model named CheXNet \cite{chexnet} that used a 121-layer convolutional neural network with dense connections and batch normalization for the detection of pneumonia. The model was trained on a publicly available dataset of 100,000 chest x-ray images, and its final results outperformed the average radiologist's performance. In CheXNeXt \cite{chexnext2018}, a multi-label disease diagnosis model was evaluated by comparing its performance with radiologists.\par
This project combines the 121-layer architecture of \cite{chexnet} with the multi-class classification of \cite{chexnext2018} for making diagnostic predictions on 14 different pathological conditions in chest x-rays. The project also localizes the parts of the chest radiograph that indicate the presence of each pathology. Detection of the 14 pathological conditions, viz. 'Atelectasis', 'Cardiomegaly', 'Consolidation', 'Edema', 'Emphysema', 'Effusion', 'Fibrosis', 'Hernia', 'Infiltration', 'Mass', 'Nodule', 'Pneumothorax', 'Pleural Thickening', and 'Pneumonia', is a multi-label classification problem. The input to the DenseNet architecture is a chest X-ray image and the output is a vector giving the probability that each pathology is present in the chest x-ray.
\section{Model Development}
\subsection{Data}
The ChestX-ray8 dataset \cite{ChestX-Ray8}, which consists of more than 200,000 frontal-view X-ray images of 64,000 unique patients, was used as the data source. A random set of 99,316 images was collected from the dataset for this project, where every image was annotated with labels identifying the aforementioned 14 pathological conditions. A test set of 420 images was extracted to check the generalization ability of the supervised model.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.1}
\caption{Detail annotation of a sample of the dataset.}
\centering
{\begin{tabular}{c c c c c c c c c c}
Image & Atelectasis & Consolidation & Edema & Effusion & Pneumothorax& Hernia & Infiltration & Mass & PatientId \\ \hline
015.png & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 8270 \\ \hline
001.png & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 29855 \\ \hline
000.png & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1297 \\ \hline
002.png & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 12359 \\ \hline
001.png & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 17951 \\ \hline
\end{tabular}}
\\
{\raggedright *All fourteen pathological conditions are not displayed due to width constraint.\par}
\end{table}
\subsection{Pre-processing}
\subsubsection{Addressing Data Leakage}
Data leakage is a condition where the same patient has images in both the test and train sets \cite{kriegeskorte2009circular} \cite{rathore2017review}. This might occur if a patient has taken more than one chest X-ray at different time intervals and such images are included in the dataset. Such a setup introduces bias into a deep learning model, as it will be overly confident in making decisions related to certain patients. Hence, during data splitting, it is customary to split at the patient level so that data "leakage" between the train and test datasets is prevented.
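A patient-level split can be sketched as follows; the record layout and field names are illustrative, not the actual pipeline code.

```python
import random

def patient_level_split(records, test_frac=0.1, seed=0):
    """Split image records into train/test so that no patient ID
    appears in both sets, preventing patient-level data leakage."""
    patients = sorted({r["patient_id"] for r in records})
    random.Random(seed).shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_ids = set(patients[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test
```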
\subsubsection{Preparing Images}
Before building a deep learning model, images are first preprocessed. Using the standard Keras framework, the input data was normalized by its mean and standard deviation to standardize the distribution. The input was shuffled after each epoch to avoid ordering effects from batches that are not representative of the overall dataset. The image size was set to 320 by 320 pixels, a reasonable dimension for the deep convolutional network. The 1-channel X-ray images were converted to a 3-channel format to facilitate transfer learning from a model pretrained on ImageNet \cite{NIPS2012_c399862d}. The test data was normalized using the statistics of the training set so that the overall distribution of the data during training and testing stayed the same.
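The normalization step can be illustrated with a minimal single-channel sketch; the actual pipeline used the Keras preprocessing utilities.

```python
def channel_stats(images):
    """Mean and standard deviation over a list of flattened images."""
    pixels = [p for img in images for p in img]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var ** 0.5

def normalize(images, mean, std):
    """Standardize images with the supplied statistics.  The test set
    must be normalized with the *training* mean/std so that both
    splits follow the same distribution."""
    return [[(p - mean) / std for p in img] for img in images]
```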
\subsubsection{Addressing Class Imbalance}
\begin{figure}[!t]
\centering
\includegraphics[width=0.4\textwidth]{figures/class_imb_new.png}
\caption{Class imbalance problem in the dataset.}
\label{fig:classimbalance}
\end{figure}
As revealed in Figure \ref{fig:classimbalance}, the contributions of positive cases in the dataset were considerably lower than the negative ones. Take the example of Hernia, where the ratio of positive to negative cases is only about 0.02. Even for Infiltration, which has the largest number of positive labels, the ratio of images with the disease label to images without it is only around 0.18. If a model is trained on such an unevenly balanced dataset with a normal cross-entropy loss function, the algorithm will prioritize the majority class and the model will be biased towards one kind of label. We modified our cross-entropy loss to address this issue, as discussed in Section 2.3.
\subsection{Training}
Normal cross entropy loss, most commonly used in classification models, for $i^{th}$ example is given by,
\begin{equation}
L_{\text{cross-entropy}}(x_i)=-\big(y_i \log(g(x_i)) + (1-y_i) \log(1-g(x_i))\big)
\label{eq:cross}
\end{equation}
where $x_i$ and $y_i$ are the features and the label of the given images. The prediction of the model is given by $g(x_i)$.
Only one of the terms $y_i$ or $(1-y_i)$ contributes to the loss at a time, since when $y_i$ equals one, $(1-y_i)$ is zero and vice versa.
This means that in an imbalanced dataset, one label will dominate the loss.
The entire training set of size $N$ will have a cross entropy loss of:
\begin{equation}
L_{\text{cross-entropy}}(D)=-\frac{1}{N} \Big( \sum_{\text{positive examples}} \log (g(x_i)) + \sum_{\text{negative examples}} \log(1-g(x_i)) \Big)
\label{eq:balcross}
\end{equation}
To solve the problem created by a highly skewed data distribution, it is important to maintain an equal contribution of both the labels in each class. To achieve this, each example from each class is multiplied by a class-specific weight factor, $w_{pos}$ and $w_{neg}$.
To obtain this,
\begin{equation}
w_{pos} \times freq_{pos} = w_{neg} \times freq_{neg}\, ,
\end{equation}
which can be done by taking
\begin{equation}
w_{pos} = freq_{neg}
\end{equation}
\begin{equation}
w_{neg} = freq_{pos}
\end{equation}
where
\begin{equation}
freq_{pos}=\frac{\text{number of positive examples}}{N}
\end{equation}
and
\begin{equation}
freq_{neg}=\frac{\text{number of negative examples}}{N}\, .
\end{equation}
The contribution of positive and negative labels is hence balanced. After adjusting the class imbalance, the contribution of positive and negative labels to the loss function was equal as revealed by Figure \ref{fig:classbalance}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{figures/class_imb_solved.png}
\caption{Class imbalance problem solved.}
\label{fig:classbalance}
\end{figure}
Hence, the final weighted loss after computing positive and negative weights is
\begin{equation}\label{eqn:los}
{L}^{w}_{\text{cross-entropy}}(x) = - \big(w_{pos}\, y \log(g(x)) + w_{neg}\,(1-y) \log( 1 - g(x) ) \big)
\end{equation}
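A minimal sketch of the weighted loss of Equation \ref{eqn:los}, illustrative rather than the actual Keras implementation:

```python
from math import log

def class_weights(labels):
    """w_pos = freq_neg and w_neg = freq_pos for one pathology column."""
    n = len(labels)
    freq_pos = sum(labels) / n
    freq_neg = 1.0 - freq_pos
    return freq_neg, freq_pos

def weighted_bce(y_true, y_pred, w_pos, w_neg, eps=1e-7):
    """Weighted binary cross-entropy, averaged over examples."""
    total = 0.0
    for y, g in zip(y_true, y_pred):
        g = min(max(g, eps), 1.0 - eps)  # clip for numerical safety
        total += -(w_pos * y * log(g) + w_neg * (1 - y) * log(1 - g))
    return total / len(y_true)
```

With these weights, the summed positive and negative contributions to the loss are equal, which is exactly the balancing condition derived above.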
\subsubsection{Network Architecture}
Neural networks can be viewed as a series of discrete layers, each taking the previous state vector $h(n)$ and producing a new state vector, $h(n+1)=F(h(n))$. The function $F$ could be an activation applied to a weighted sum, a convolutional block, or an LSTM cell. Adding more layers should in principle improve the function approximation and hence the accuracy of the network, but in practice it can decrease the accuracy on both the training and test sets. This is due to the vanishing gradient problem: the gradients of the loss function shrink towards zero under repeated application of the chain rule, so the weights barely change their values and learning stalls. The accuracy saturates and then degrades, which is also called the degradation problem. This problem was addressed by the DenseNet architecture \cite{densenet}, in which each layer is directly connected to all the other layers, so that every layer has access to the feature maps of all earlier layers of the network.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{figures/network.png}
\caption{Network architecture for the stated research project.}
\end{figure}
DenseNet is divided into two block types: the Dense Block and the Transition Block. In a Dense Block, the feature map size stays the same but the number of filters varies. Each layer in a Dense Block computes two convolutional operations: a 1x1 convolution acting as a bottleneck that reduces the feature depth, and a 3x3 convolution that extracts features. The Transition Block sits between Dense Blocks and its main function is to downsample the feature maps by applying a 1x1 convolution and a 2x2 average pooling operation.
\par
In this research project, a 121-layer DenseNet was used as the fundamental architecture. DenseNet-121 consists of 4 Dense Blocks, with Dense Block 1 comprising 6 dense layers, Dense Block 2 comprising 12 dense layers, Dense Block 3 comprising 24 dense layers and Dense Block 4 comprising 16 dense layers; there are 3 Transition Blocks between the Dense Blocks. The number 121 in DenseNet-121 means that there are 121 layers with trainable weights. Following the principle of transfer learning, pre-trained ImageNet weights were used as the initial weights of the architecture. ReLU \cite{nair2010rectified} was used as the activation function in the hidden layers.\par
The early layers of the DenseNet architecture with ImageNet weights were left unchanged, since they carry general features of images such as edges. The top layers carry information about dataset-specific features, hence these layers were removed and two additional layers were added: a Global Average Pooling layer and a Dense layer with sigmoid activation. The Global Average Pooling layer \cite{lin2013network} computes the average of the last convolutional layer in the network, and the Dense layer with sigmoid activation provides the prediction for each of the target classes.\par
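The two appended layers can be illustrated in pure Python; this is a sketch of the operations they perform, not the Keras implementation.

```python
from math import exp

def global_average_pool(feature_map):
    """Average each channel of an H x W x C feature map to a C-vector."""
    H, W, C = len(feature_map), len(feature_map[0]), len(feature_map[0][0])
    return [sum(feature_map[i][j][c] for i in range(H) for j in range(W))
            / (H * W) for c in range(C)]

def dense_sigmoid(features, weights, biases):
    """One sigmoid unit per pathology: each class gets an independent
    probability, as required for multi-label classification."""
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(w * f for w, f in zip(w_row, features)) + b
        out.append(1.0 / (1.0 + exp(-z)))
    return out
```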
Note that the stated problem in this project is a multi-label classification problem in which more than one answer can be correct, i.e. an X-ray image yields a separate probability for each pathology, allowing each condition to be assigned a high or low probability independently of the other labels. Hence, a sigmoid function was applied to each raw output independently. The loss function in this project is the weighted loss function of Equation \ref{eqn:los}.
\par
After specifying the neural network architecture, the model was trained using back-propagation \cite{gradientdescent} with mini-batch stochastic gradient descent (batch size of 8 images) and the Adam optimizer \cite{kingma2014adam} with its default learning rate of 0.001.
\subsection{Testing}
Our deep learning model was trained for different sizes of training dataset to observe the generalization accuracy of the model against an unseen test set of 420 images. The test set was drawn from the same data distribution as the training data split by patient ID.
\section{Evaluation of Diagnostic Model}
The model was first trained on 1000 images for five epochs and tested against the test set. The ROC curve obtained is shown in Figure \ref{fig:ROC1000}. We can clearly observe that our model is almost a random classifier, with the prediction curve for each pathology lying near the diagonal. The corresponding AUC score summarizes the performance of each classifier in a single measure, which reveals inaccurate prediction of disease by the model across different thresholds. We hypothesize that the model is underfitting, that its inability to correctly classify pathologies is due to the insufficient size of the training data or the small number of epochs, and that training the model on a larger dataset for more epochs will improve the evaluation metrics significantly.
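The AUC values quoted here can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one; the following is an illustrative sketch, not the evaluation code used.

```python
def auc(labels, scores):
    """AUC as the probability that a random positive example is ranked
    above a random negative one (ties count one half).  A random
    classifier scores close to 0.5 on this measure."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```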
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{figures/Train1000Images.png}
\caption{ROC curve on classification of thoracic pathologies on a small dataset of 1000 images.}
\label{fig:ROC1000}
\end{figure}
\begin{table*} [h!]
\caption{ Table of evaluation metrics after training on 1000 images}
\resizebox{\textwidth}{!}{\begin{tabular}{c c c c c c c c c c}
\hline
& \textbf{Accuracy} & \textbf{Prevalence} & \textbf{Sensitivity} & \textbf{Specificity} & \textbf{PPV} & \textbf{NPV} & \textbf{AUC} & \textbf{F1} & \textbf{Threshold}
\\ \hline
Cardiomegaly & 0.445 & 0.119 & 0.54 & 0.432 & 0.114 & 0.874 & 0.517 & 0.188 & 0.5 \\ \hline
Emphysema & 0.864 & 0.133 & 0 & 0.997 & 0 & 0.866 & 0.53 & 0 & 0.5 \\ \hline
Effusion & 0.524 & 0.126 & 0.66 & 0.504 & 0.161 & 0.911 & 0.62 & 0.259 & 0.5 \\ \hline
Hernia & 0.881 & 0.119 & 0 & 1 & NaN & 0.881 & 0.593 & 0 & 0.5 \\ \hline
Infiltration & 0.54 & 0.14 & 0.712 & 0.512 & 0.193 & 0.916 & 0.643 & 0.303 & 0.5 \\ \hline
Mass & 0.167 & 0.143 & 0.9 & 0.044 & 0.136 & 0.727 & 0.529 & 0.236 & 0.5 \\ \hline
Nodule & 0.595 & 0.129 & 0.352 & 0.631 & 0.123 & 0.868 & 0.484 & 0.183 & 0.5 \\ \hline
Atelectasis & 0.41 & 0.143 & 0.65 & 0.369 & 0.147 & 0.864 & 0.54 & 0.239 & 0.5 \\ \hline
Pneumothorax & 0.869 & 0.131 & 0 & 1 & NaN & 0.869 & 0.537 & 0 & 0.5 \\ \hline
Pleural\_Thickening & 0.862 & 0.138 & 0 & 1 & NaN & 0.862 & 0.581 & 0 & 0.5 \\ \hline
Pneumonia & 0.514 & 0.119 & 0.54 & 0.511 & 0.13 & 0.892 & 0.536 & 0.209 & 0.5 \\ \hline
Fibrosis & 0.855 & 0.145 & 0 & 1 & NaN & 0.855 & 0.477 & 0 & 0.5 \\ \hline
Edema & 0.624 & 0.119 & 0.54 & 0.635 & 0.167 & 0.911 & 0.601 & 0.255 & 0.5 \\ \hline
Consolidation & 0.65 & 0.126 & 0.377 & 0.689 & 0.149 & 0.885 & 0.559 & 0.214 & 0.5 \\ \hline
\end{tabular}}
\end{table*}
Table 2 displays the evaluation metrics computed on the test dataset for the model trained on 1000 X-ray images. It is clear from the metrics that the model has failed to generalize. Inspecting accuracy alone can give a false impression of model performance, especially in the cases of Pneumothorax, Hernia and Pleural Thickening, but accuracy is not a good measure of the discriminatory power of the model. The corresponding F1 scores give a more accurate estimate of generalization, and they are unacceptably low. Because both TP and FP are 0 for several conditions, their PPV scores are undefined. \par
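For concreteness, the entries in Table 2 can be derived from the per-pathology confusion matrix. The following sketch (illustrative Python, not the project's actual evaluation code) shows in particular how an undefined PPV arises when a classifier never predicts positive:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for one pathology at a fixed threshold.
    PPV is undefined (NaN) when the model makes no positive predictions,
    as happened for e.g. Hernia, Pneumothorax and Pleural Thickening."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    n = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / n,
        "prevalence": (tp + fn) / n,
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
        "f1": 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0,
    }
```

A classifier that always outputs the negative class reproduces the pattern seen in the table: high accuracy driven by prevalence, sensitivity and F1 of 0, and an undefined PPV.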
When the deep learning model was trained on an increased set of 99316 images while keeping the number of epochs the same, generalization on the test set did not improve. The model performed relatively well on the training dataset but still behaved like a random classifier on the test set. As can be observed from the ROC plot in Figure \ref{fig:ROCNoRegularize}, the prediction accuracy of the model deteriorated even with the increased dataset size. So the problem lies in underfitting, as the model is not trained to sufficient complexity.\par
For the next experiment we increased the number of training epochs to 20 and used an adaptive learning rate. We also applied dropout \cite{srivastava2014dropout} of 10\%, randomly zeroing out some layer outputs; this discourages over-fitting to the training data and helps the model generalize.\par
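The dropout mechanism can be sketched in a few lines. This is an illustrative numpy version of "inverted" dropout, not the framework layer used in our experiments:

```python
import numpy as np

def dropout(x, rate=0.1, training=True, rng=None):
    """Inverted dropout as applied during training: zero out a fraction
    `rate` of activations and rescale the survivors so the expected
    value of the layer output is unchanged at test time."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)  # seeded here for reproducibility
    keep = rng.random(x.shape) >= rate
    return x * keep / (1.0 - rate)
```

At inference time (`training=False`) the layer is the identity, which is why no rescaling is needed when the model is evaluated on the test set.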
\begin{figure}[h!]
\centerline{\includegraphics[width=0.5\textwidth]{figures/TrainAllImagesNoRegularizer.png}}
\vspace*{8pt}
\caption{ROC curve on classification of thoracic pathologies on the full dataset, without regularization.}
\label{fig:ROCNoRegularize}
\end{figure}
\begin{figure}[h!]
\centerline{\includegraphics[width=0.5\textwidth]{figures/TrainAllImagesWithRegularizer.png}}
\vspace*{8pt}
\caption{ROC curve on classification of thoracic pathologies on the full dataset, with regularization.}
\label{fig:finalROC}
\end{figure}
The ROC curve displayed in Figure \ref{fig:finalROC} shows the performance of the classifier for the different pathologies at various thresholds, and Table 4 reports the corresponding evaluation metrics for each pathology. We can observe an improved performance of the model across pathologies, and can conclude that the model has greater discriminative power on all conditions than in our previous two experiments. Note that despite high sensitivity and accuracy, the PPV of the predictions can still be low. Take the prediction of Pneumonia as an example: it has a sensitivity of 0.6, but given that the model predicted positive, the probability that the patient actually has Pneumonia is only 0.18.
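The low PPV despite reasonable sensitivity follows directly from Bayes' rule and the low prevalence of Pneumonia in the test set; using the values from Table 4,
\[
\mathrm{PPV} = \frac{\mathrm{sens}\cdot \mathrm{prev}}{\mathrm{sens}\cdot \mathrm{prev} + (1-\mathrm{spec})\cdot(1-\mathrm{prev})} = \frac{0.6 \cdot 0.119}{0.6 \cdot 0.119 + (1-0.638)\cdot(1-0.119)} \approx 0.18.
\]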
\begin{table} [h!]
\small
\caption{Comparison of selected AUC scores with CheXNext}
\centering
{\begin{tabular}{c c c}
\hline
& This Project & CheXNext
\\ \hline
Cardiomegaly & 0.896 & 0.831 \\ \hline
Edema & 0.857 & 0.924 \\ \hline
Mass & 0.82 & 0.909 \\ \hline
Effusion & 0.764 & 0.901 \\ \hline
Consolidation & 0.75 & 0.893 \\ \hline
Pneumothorax & 0.805 & 0.944 \\ \hline
Pneumonia & 0.681 & 0.851\\ \hline
\end{tabular}}
\end{table}
\begin{table*} [h!]
\caption{Table of evaluation metrics after training on 99316 images with regularizer}
\resizebox{\textwidth}{!}{\begin{tabular}{c c c c c c c c c c}
\hline
& \textbf{Accuracy} & \textbf{Prevalence} & \textbf{Sensitivity} & \textbf{Specificity} & \textbf{PPV} & \textbf{NPV} & \textbf{AUC} & \textbf{F1} & \textbf{Threshold}
\\ \hline
Cardiomegaly & 0.826 & 0.119 & 0.78 & 0.832 & 0.386 & 0.966 & 0.896 & 0.517 & 0.5 \\ \hline
Emphysema & 0.762 & 0.133 & 0.589 & 0.788 & 0.3 & 0.926 & 0.795 & 0.398 & 0.5 \\ \hline
Effusion & 0.681 & 0.126 & 0.736 & 0.673 & 0.245 & 0.946 & 0.764 & 0.368 & 0.5 \\ \hline
Hernia & 0.705 & 0.119 & 0.66 & 0.711 & 0.236 & 0.939 & 0.772 & 0.347 & 0.5 \\ \hline
Infiltration & 0.598 & 0.14 & 0.644 & 0.59 & 0.204 & 0.91 & 0.65 & 0.31 & 0.5 \\ \hline
Mass & 0.764 & 0.143 & 0.75 & 0.767 & 0.349 & 0.948 & 0.82 & 0.476 & 0.5 \\ \hline
Nodule & 0.66 & 0.129 & 0.593 & 0.669 & 0.209 & 0.918 & 0.655 & 0.309 & 0.5 \\ \hline
Atelectasis & 0.674 & 0.143 & 0.75 & 0.661 & 0.269 & 0.941 & 0.786 & 0.396 & 0.5 \\ \hline
Pneumothorax & 0.7 & 0.131 & 0.745 & 0.693 & 0.268 & 0.948 & 0.805 & 0.394 & 0.5 \\ \hline
Pleural\_Thickening & 0.6 & 0.138 & 0.672 & 0.588 & 0.207 & 0.918 & 0.714 & 0.317 & 0.5 \\ \hline
Pneumonia & 0.633 & 0.119 & 0.6 & 0.638 & 0.183 & 0.922 & 0.681 & 0.28 & 0.5 \\ \hline
Fibrosis & 0.65 & 0.145 & 0.623 & 0.655 & 0.235 & 0.911 & 0.725 & 0.341 & 0.5 \\ \hline
Edema & 0.783 & 0.119 & 0.8 & 0.781 & 0.331 & 0.967 & 0.857 & 0.468 & 0.5 \\ \hline
Consolidation & 0.607 & 0.126 & 0.792 & 0.58 & 0.214 & 0.951 & 0.75 & 0.337 & 0.5 \\ \hline
\end{tabular}}
\end{table*}
The proposed architecture outperformed CheXNext \cite{chexnext2018} at detecting cardiomegaly, a condition in which the heart enlarges abnormally. The other predictions were close to those reported in that paper.
\subsection{Interpretation of Deep Learning Model}
Model interpretation is one of the challenges of deep learning, as the complex architecture of neural networks is hard to interpret. To address this, Class Activation Maps \cite{kwasniewska2017deep} are used to increase model interpretability: they help us understand where the deep learning model is paying attention when making a classification. Using GRADCAM \cite{gradcam}, we produced heat-maps highlighting the regions of the x-rays that were important during prediction. Although GRADCAM does not provide detailed reasoning for the model's predictions, it is still useful for expert validation, i.e. checking whether the model is looking at the right regions in the image when making the associated predictions.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/GradCamFinal.png}
\caption [Visualization of pathology prediction using GRADCAM] {Visualization of pathology prediction using GRADCAM}
\label{fig:heatmap}
\end{figure}
Figure \ref{fig:heatmap} visualizes what the network has learned using GRADCAM. We generated heatmaps for two classes with the highest AUC measures and for one with a low AUC measure. As we can observe, the activation map depicts the regions that the model 'looked at' when predicting Mass and Pneumothorax. Since the predicted probability for Cardiomegaly is very low, no heatmap appears on that x-ray: the model did not find any region on which to base a correct decision.
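The heat-map computation behind GRADCAM can be summarised by its weighting step. The sketch below is illustrative; in practice the activation and gradient tensors would come from the last convolutional layer of the trained network:

```python
import numpy as np

def gradcam_heatmap(activations, gradients):
    """Grad-CAM combination step: weight each feature map by the
    global-average-pooled gradient of the class score, sum the weighted
    maps, and apply ReLU so only regions with a positive influence on
    the class remain. Both inputs have shape (H, W, K)."""
    weights = gradients.mean(axis=(0, 1))                        # (K,) importance per map
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)  # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                    # normalise to [0, 1]
    return cam
```

When every feature map influences the class negatively, the ReLU leaves an empty map, matching the blank heatmap observed for the low-confidence Cardiomegaly prediction.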
\subsection{Uncertainty Estimation}
Computing confidence intervals for our measurements helps us understand how they are affected by sampling. This is essentially an uncertainty estimation: we quantify how sure we are of predictions learned from the training dataset, which is only a sample of real-world data. We evaluated confidence intervals for the AUC scores in this experiment and obtained the results shown in Table 5.
\begin{table} [h!]
\small
\caption{Table of confidence intervals for the AUC score}
\centering
{\begin{tabular}{c c}
\hline
& Mean AUC (CI)
\\ \hline
Cardiomegaly & 0.89 (0.86-0.92) \\ \hline
Emphysema & 0.79 (0.76-0.82) \\ \hline
Effusion & 0.76 (0.73-0.80) \\ \hline
Hernia & 0.77 (0.73-0.80) \\ \hline
Infiltration & 0.65 (0.62-0.68) \\ \hline
Mass & 0.82 (0.79-0.85) \\ \hline
Nodule & 0.65 (0.60-0.70) \\ \hline
Atelectasis & 0.79 (0.76-0.81) \\ \hline
Pneumothorax & 0.81 (0.77-0.83) \\ \hline
Pleural\_Thickening & 0.71 (0.68-0.74) \\ \hline
Pneumonia & 0.68 (0.62-0.73) \\ \hline
Fibrosis & 0.72 (0.68-0.76) \\ \hline
Edema & 0.86 (0.82-0.88) \\ \hline
Consolidation & 0.75 (0.72-0.78)
\\ \hline
\end{tabular}}
\end{table}
Our confidence intervals are narrow for almost all classes. A narrow interval means we can be confident in the result: regardless of the sampling, we would obtain a similar result.
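Such intervals can be obtained, for instance, with a percentile bootstrap over the test set. The sketch below is illustrative and not necessarily the exact procedure used; it resamples predictions with replacement and recomputes a rank-based AUC:

```python
import numpy as np

def rank_auc(y_true, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(y_true, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample the test set
        if y_true[idx].min() == y_true[idx].max():
            continue                         # resample must contain both classes
        aucs.append(rank_auc(y_true[idx], scores[idx]))
    return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
```

The width of the resulting interval directly reflects how sensitive the AUC estimate is to which patients happened to be sampled into the test set.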
\section{Discussion}
The primary limitation of this deep learning model is that the architecture is trained using frontal x-rays only. A model trained on both frontal and lateral radiology images could perform better in most practical cases. The model also does not take into account whether the patient has a prior illness, even though the diagnosis of many diseases depends on the patient's medical history. Finally, the model is trained on 99316 images instead of the complete set of 200,000 due to computational constraints.\par
Chest x-rays are one of the most common diagnostic tools for detecting thoracic diseases. However, a large part of the world's population still lacks access to even the simplest forms of radiology diagnosis. The results of this project support the implementation of an automated diagnosis system for the detection of thoracic diseases, with the ability to interpret the results of the deep learning model.
\bibliographystyle{unsrt}
\section{Introduction}
In many service industries the role of businesses is shifting from service provision to facilitating the exchange of services, typically through the creation of virtual two-sided marketplaces. When suppliers in a two-sided market are individual contractors rather than businesses, the market is considered to be part of the gig economy. In contrast to traditional fixed labour contracts offering long-term security to all parties involved, labour in the gig economy is organised through more flexible arrangements. Not only does this allow service providers to respond more adequately to changes in demand than operators with more traditional forms of labour, it also means that they may be exempted from paying employee benefits \citep{Pra15}. Unfortunately, recent protests demonstrate that the value gig workers derive from the flexibility to set their own working schedules \citep{hall2018analysis,Chen2019value} may not outweigh the loss of financial security associated with flexible labour agreements. While the social desirability of these new forms of labour agreements is disputed, the gig economy has gained ground across many industries.
Transportation is a predominant example with platform businesses in food delivery (Just Eat Takeaway, Uber Eats, DoorDash), package delivery (Amazon Flex) and passenger services (Uber, Lyft, DiDi). Service providers in the third category are commonly referred to as ridesourcing providers or Transport Network Companies (TNCs). Typically, ridesourcing businesses reward drivers based on satisfied demand rather than based on time spent working for the platform. Hence, in contrast to traditional transit operators with employed drivers, they do not bear the cost of excess labour available through their platform. This is beneficial especially in times of rapidly declining travel demand, such as during the COVID-19 pandemic.
The question remains whether a decentralisation of supply is truly a win-win for service provider, suppliers and consumers in the ride-hailing market. Early evidence suggests that in addition to losing access to social provisions related to employment, ridesourcing drivers may receive inadequate financial compensation for supplied labour. In Chicago for example, strong competition between suppliers has led to average driver earnings below the local minimum wage \citep{henao2019analysis}. Besides suppressing driver earnings, oversupply contributes to road congestion by inducing repositioning by idle drivers waiting to be matched \citep{beojone2021inefficiency}. Travellers on the other hand may benefit from an oversupplied market through low waiting times and few denied requests.
Sustained supply of labour to a platform with low payouts suggests that the tragedy of the commons may apply to the ridesourcing market. It occurs when excessive participation leads to a depletion of the total value derived from participation on the platform. A potential reason why drivers may continue to participate under these conditions is that they have limited alternative opportunities in the labour market. Oversupply in the ridesourcing market may also be explained by large temporal variations in labour opportunity costs underlying the value of flexible work \citep{Chen2019value,ashkrof2020understanding}. When a potential driver is not involved in alternative activities - such as alternative employment or education - on a particular day, (s)he may be tempted to work for the platform even when expected earnings are low. In other words, varying opportunity costs caused by activity schedules may disturb the balancing loop of competition in labour supply.
In contrast, ridesourcing platforms may struggle to attract enough suppliers to the market when the labour market is strong, especially when employment yields high social security benefits. This hampers travellers' chances to find a (quick and cheap) ride. When ride requests have to be rejected or when travellers stop making requests altogether, the service provider is confronted with lost revenue. This may ultimately result in the termination of the service. \cite{farrell2017online} have observed that the growth of on-demand service platforms in many cities is indeed limited by the availability of workers rather than customers.
In order to comprehend the societal implications of ridesourcing, we thus need to understand how the decentralisation of supply affects the fleet size of a ride-hailing service. Considering the bottom-up nature of ridesourcing supply, its analysis requires investigating system-level effects of factors influencing individual driver's labour decisions. This includes not only strategical decisions by the platform, but also labour market properties and driver characteristics. In this study, we therefore focus on structural supply deficits / surpluses that may exist in the ridesourcing market. Hence, we study labour supply only at the extensive margin as opposed to highly temporal imbalances in supply and demand which may follow from hourly variations in opportunity costs and/or travel demand, i.e. we will capture how many drivers work on a day, but not how long they work on that day.
\subsection{Ridesourcing system dynamics}
The emergence of ridesourcing has not gone unnoticed in the scientific community. In a review of ridesourcing literature, \cite{wang2019ridesourcing} have identified four major research problems related to the impact of emerging ridesourcing services. These topics include the effect of ridesourcing on other modes of transportation \citep{qian2017taxi,zhu2020analysis,ke2020ride,yu2020balancing,ke2021equilibrium}, its broader societal and environmental impacts \citep{rayle2016just,clewlow2017disruptive,yu2017environmental,jin2018ridesourcing}, competition between service providers \citep{zha2016economic,zhou2020competitive}, and the effectiveness of regulations in the ridesourcing market \citep{zha2018surge,zha2018geometric,li2019regulating,yu2020balancing}. A key factor when identifying the societal impacts of ridesourcing is the pricing strategy adopted by the service provider. Hence, many studies revolve around the optimisation of ridesourcing pricing strategies, including the specification of ride fares and driver wages \citep{banerjee2015pricing,taylor2018demand,zha2018surge,zha2018geometric,bai2019coordinating,sun2019optimal,bimpikis2019spatial,nourinejad2020ride,dong2021optimal}.
A common feature of previously mentioned works is that the ridesourcing market is represented using a static steady-state model. While allowing insightful analyses into ridesourcing equilibria, there are two downsides to this approach. First, static models are incapable of explaining system evolution towards proposed equilibria. Second, these models fail to capture key dynamic processes that are inherent to ridesourcing provision. Arguably, the equilibria sketched in previous studies may not be realised in practice. In the following, we distinguish several complex day-to-day processes underlying the emergence of decentralised ridesourcing supply.
First, labour supply decisions are affected by a driver's participation history. Because there is no guaranteed participation reward and drivers lack proper means of communicating with other drivers \citep{robinson2017making}, drivers' own experiences form an important source of information in the participation decision. Given that ridesourcing earnings are highly sensitive to system variables such as travel demand and other drivers' labour decisions \citep{bokanyi2020understanding}, there may be large day-to-day variations in the average participation reward. Moreover, due to path-dependent spatial relations between successive matchings of drivers and travel requests, ridesourcing earnings may be distributed unevenly among participating drivers. To illustrate, a driver who is assigned to deliver a passenger in a low demand area may struggle to find a subsequent ride. 'Unlucky' individuals with below average earnings may decide to leave the platform before learning that the system average earnings were higher than their personal earnings. Hence, the unpredictability of ridesourcing earnings can affect the amount of labour available for platform operations.
Second, participation may require making financial investments or entering into contracts. Even though entry barriers for ridesourcing are typically lower than those for conventional taxis \citep{hall2018analysis}, empirical findings still show an increase in vehicle ownership in the population associated with the launch of a ridesourcing service \citep{gong2017uber}. This demonstrates that ridesourcing drivers do not necessarily drive for the platform with a vehicle they already owned. In addition, a taxi license or appropriate driver insurance may need to be obtained to enter the ridesourcing market \citep{baron2018disruptive}. Hence, participation decisions are preceded by a registration decision in which required investments are traded off against anticipated future revenues from participation. The discrepancy in costs between registration (with entry costs) and participation (when entry costs are sunk) implies that studies neglecting registration choice may either overestimate or underestimate the ridesourcing fleet size. This depends on whether the drop in the number of registered drivers outweighs more frequent participation by registered drivers to compensate for the capital costs associated with platform registration \citep{hall2018analysis}.
Based on a theory of innovation diffusion \citep{rogers2010diffusion}, there are two more steps preceding drivers' platform participation choice: (1) becoming aware of its existence and (2) being persuaded to gather more information about its utility. Variations in attitudes, preferences and social network may explain why individual agents may undergo these stages at different moments in time. The rate at which potential drivers may start considering registration is relevant because a very rapid increase in supply may lead to sharply decreasing participation earnings. A slow diffusion on the other hand may lead to a prolonged situation with long waiting times and therefore dissatisfied travellers.
To gain a better understanding of equilibria in ridesourcing systems, we need dynamic models that can account for the previously mentioned processes in drivers' labour supply decisions. To the best of our knowledge, applications of day-to-day learning models for ridesourcing systems have been limited to only a few studies.
One of these has represented ridesourcing evolution with learning behaviour by drivers. \citet{djavadian2017agent} proposed a stochastic day-to-day approach with an integrated within-day operating policy, in which travellers choose ridesourcing if it maximises their expected consumer surplus, anticipating travel time based on experience. Drivers supply labour when their learned perceived income exceeds a deterministic income threshold, implying that variables other than expected income that play a role in drivers' labour supply decisions are neglected. The model proposed by \cite{djavadian2017agent} also does not account for the stages preceding participation, such as the registration process. The method is applied only to a minimal case study representing access to and egress from a single railway station, with supply levels limited to 20 drivers or lower.
\cite{cachon2017role} and \cite{yu2020balancing} propose a semi-dynamic model consisting of a registration phase and a participation phase. Both phases are strictly separated in time, which means that the model cannot capture interactions between participation decisions of existing drivers and the registration decisions of potential drivers. \cite{dong2021optimal} apply a similar methodology when studying ridesourcing service providers opting for a dual-sourcing strategy. Drivers in their study first decide whether they take up an employment offer by the provider, giving up work schedule flexibility in return for reduced income uncertainty. In the second phase, those that rejected the offer decide on platform participation. Drivers are only offered employment once, i.e. there is no feedback loop between participation and employment. The aforementioned studies apply macroscopic models to represent the within-day matching process, neglecting complex disaggregate spatio-temporal within-day relations between supply, demand and service provider that influence drivers' labour supply decisions.
\subsection{Study contributions}
We represent the long-term evolution of ridesourcing supply by explicitly considering complex interactions between within-day ride-hailing operations, registration barriers and day-to-day variations in opportunity costs. We do so by proposing a day-to-day learning model with a decentralised labour supply, explicitly distinguishing between two dimensions: registration and participation. For platform registration, we develop a probabilistic agent-based model that accounts for registration costs, opportunity costs and anticipated income levels. We propose a macroscopic model based on an epidemiological process to represent diffusion of information between registered and non-registered drivers, concerning the awareness of and satisfaction with the ridesourcing platform. For the daily participation decision, we establish a probabilistic agent-based choice model that acknowledges that drivers' daily participation decision is not merely based on the expected income on a given day derived from accumulated day-to-day experience, but which also depends on unobserved factors such as variations in opportunity costs.
We integrate our day-to-day model into MaaSSim, an agent-based discrete event simulator of mobility-on-demand operations \citep{Kucharski2020MaaSSim}. The agent-based nature of this model allows us to capture heterogeneity in ridesourcing earnings following from disaggregate and spatially-dependent interactions between demand, supply and platform dynamics in ridesourcing operations, which may affect the emergent ridesourcing fleet.
The model is applied to a case study representing a realistic urban network, with up to 1000 vehicles, to allow for the examination of emergence properties in a decentralised supply market in ridesourcing provision. More specifically, we construct an experiment to find the extent to which labour supply in the market is dependent on the availability and cost of labour in the market. This allows us to answer whether ridesourcing provision risks attaining undesired levels of supply, i.e. over- or under-supplied. In addition, our experiment includes an investigation of the commission rate charged by the platform, in order to explore the implications of profit maximisation in a decentralised market, for both drivers and travellers. Other variables that we study are platform registration barriers and variability in drivers' daily opportunity costs, in order to understand how they characterise supply in ridesourcing provision. Finally, we employ an exhaustive search for establishing the optimal ridesourcing fleet size for travellers, drivers and service provider, which we compare to the equilibrium participation volume in decentralised ridesourcing provision.
\section{Methodology}
We develop an agent-based day-to-day model with driver agents potentially willing to work for the platform. These agents are at any given moment in time in one of three states: uninformed, interested or registered. Uninformed driver agents are potential drivers currently unaware of the existence of the service. Interested drivers are those that have been informed about the existence of the platform and now monitor the average participation reward. They make an occasional registration decision. Once drivers are registered, they make a daily participation choice, based on the expected income that is learned from previous driving experiences. This is simulated by integrating our day-to-day model, comprising information diffusion, registration and participation, with a within-day ride-hailing model (Figure \ref{fig:Conc-fw}). This model simulates within-day interactions between driver agents, traveller agents and the platform agent.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.85\textwidth]{Figures/D2d-supply-v2.png}
\caption{Conceptual framework of the proposed dynamic ridesourcing model, including references to subsections in which a particular submodel is explained}\label{fig:Conc-fw}
\end{figure}
In this section, we describe the five submodels constituting our approach. We also provide more information about the implementation of the model.
\subsection{Information diffusion}
\cite{rogers1995lessons} argues that the diffusion of information about an innovation is a social process. Individuals seek information from peers to guide the adoption decision, especially from those that have previously adopted the innovation. Information spreading via word-of-mouth is considered to be, to some extent, similar to virus transmission in a network. Hence, many information diffusion models are based on compartment models from epidemics \citep{zhang2016dynamics}. In these models, the population is divided into different classes depending on their current stage of the disease, typically distinguishing susceptible, infectious and recovered agents, although many other compartments are possible \citep{pastor2015epidemic}. One of the main benefits of representing information diffusion with epidemic compartment models is that they do not require the specification of the underlying (social) network, which is typically hard to observe in word-of-mouth communication.
We assume that an SI model with susceptible (i.e. uninformed) and infectious (i.e. informed) agents suffices. Consider a pool of $N$ potentially interested drivers, of which $I(t-1)$ are informed (i.e. interested or registered) at the start of day $t-1$. If we assume that all uninformed drivers are equally likely to be informed on a given day, then we can formulate the probability for an uninformed driver to be informed at the start of the next day as:
\begin{equation}
p^{\mathrm{inform}}(t) = \frac{\beta_{\mathrm{inf}} \cdot I(t-1)}{N} \label{eq:inf-diff}
\end{equation}
in which $\beta_{\mathrm{inf}}$ represents the average information transmission rate, or more specifically, the probability that information is transmitted in a contact between an informed and an uninformed agent multiplied by the average daily number of contacts of agents.
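A minimal simulation of this diffusion process (illustrative Python; the parameter values are hypothetical) shows the characteristic S-shaped growth in the number of informed drivers:

```python
import numpy as np

def simulate_si(N=1000, I0=5, beta_inf=0.3, days=100, seed=0):
    """Day-to-day SI information diffusion: each uninformed driver
    becomes informed with probability p_inform(t) = beta_inf * I(t-1) / N."""
    rng = np.random.default_rng(seed)
    informed = np.zeros(N, dtype=bool)
    informed[:I0] = True                     # initially informed drivers
    history = [I0]
    for _ in range(days):
        p = beta_inf * informed.sum() / N    # daily informing probability
        newly = (~informed) & (rng.random(N) < p)
        informed |= newly                    # informed agents stay informed
        history.append(int(informed.sum()))
    return history
```

Because there is no "recovery" compartment, the number of informed drivers is monotonically non-decreasing and eventually saturates at $N$.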
\subsection{Platform registration}
Before informed driver agents can participate, they need to trade off registration costs and participation benefits. In contrast to the approach of \citet{cachon2017role} - one of the few works to represent the registration process in ridesourcing supply - we assume that informed drivers base their registration decision on the average expected income of already registered drivers, rather than on a probability distribution of incomes presented to drivers in advance. Since registration represents a relatively long-term labour decision, we model the decision to be occasional rather than daily. More specifically, we assume that on any given day drivers have a probability $\gamma$ of making a registration decision.
Drivers register with the platform when the expected earnings from participating exceed the total costs related to participation and registration. Participation cost includes the opportunity cost of the time spent working as well as a potential disutility associated with the driving activity. From here on, participation cost will be referred to as the reservation wage, a term used in labour economics to define the minimum income level for which drivers are willing to accept specific work \citep{franz1980reservation}. Registration costs, on the other hand, correspond to capital expenses which are independent of participation, such as investment in a vehicle and insurance. We formalise drivers' registration choice with a binary random utility model, in which the registration utility of an informed driver agent $d$ is determined by the net income that drivers expect to collect with participation on the platform, which is defined as the average expected income of already registered drivers $\overline{I_{t}^{\mathrm{exp}}}$ minus a constant penalty $C_d$ to represent capital registration costs. The alternative utility - to remain unregistered - is determined by the reservation wage to represent the time cost of participation on the platform. We apply a logit model with parameter $\beta_{\mathrm{reg}}$ and an error term $\varepsilon_{\mathrm{reg}}$ to account for unknown dynamics in registration choice. The particular utilities and the resulting probability of registration for a driver $d$ on day $t$ are, respectively, formulated as:
\begin{equation}
U_{dt}^{\mathrm{regist}} = \beta_{\mathrm{reg}} \cdot (\overline{I_{t}^{\mathrm{exp}}} - C_d) + \varepsilon_{\mathrm{reg}}
\end{equation}
\begin{equation}
U_{dt}^{\mathrm{unregist}} = \beta_{\mathrm{reg}} \cdot W_d + \varepsilon_{\mathrm{reg}}
\end{equation}
\begin{equation}
p^{\mathrm{regist}}(d,t) = \frac{\gamma \cdot \exp(U_{dt}^{\mathrm{regist}})}{\exp(U_{dt}^{\mathrm{regist}})+\exp(U_{dt}^{\mathrm{unregist}})}
\end{equation}
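The resulting registration probability can be computed directly; a minimal sketch in which the values of $\beta_{\mathrm{reg}}$ and $\gamma$ are hypothetical:

```python
import math

def p_register(I_exp_avg, C_d, W_d, beta_reg=0.1, gamma=0.2):
    """Daily registration probability of an informed driver: with
    probability gamma the driver evaluates the binary logit between
    registering (average expected net income of registered drivers,
    minus registration costs) and staying unregistered (reservation wage)."""
    u_reg = beta_reg * (I_exp_avg - C_d)   # utility of registering
    u_not = beta_reg * W_d                 # utility of remaining unregistered
    return gamma * math.exp(u_reg) / (math.exp(u_reg) + math.exp(u_not))
```

When expected net income equals the reservation wage, the driver registers with probability $\gamma/2$; higher anticipated earnings raise this probability towards $\gamma$.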
\subsection{Labour participation}
In the following, we introduce the specification of registered drivers' participation choice, including how drivers anticipate future income based on personal experience.
\subsubsection{Participation choice}
Similar to other studies representing ridesourcing supply \citep{banerjee2015pricing,djavadian2017agent,taylor2018demand,bai2019coordinating}, we model participation based on drivers' expected income and reservation wage. We assume a positive relation between income and labour supply, thus following the neoclassical theory of labour supply \citep{chen2016dynamic,angrist2017uber,xu2020empirical}. Nonetheless, other factors are likely to drive participation choice, such as activities planned for the particular day, which are typically difficult to observe. Therefore, in determining the utility of participating or remaining idle, we include, next to the reservation wage $W_d$ and expected income $I_{dt}^{\mathrm{exp}}$, an error term $\varepsilon_{\mathrm{ptp}}$. We apply a logit model with parameter $\beta_{\mathrm{ptp}}$, in which the error term represents the degree of randomness in the participation choice, i.e. the significance of non-observed factors influencing the decision. The utility and corresponding probability of participating for a driver $d$ on day $t$ are specified as follows:
\begin{equation}
U_{dt}^{\mathrm{participate}} = \beta_{\mathrm{ptp}} \cdot I_{dt}^{\mathrm{exp}} + \varepsilon_{\mathrm{ptp}}
\end{equation}
\begin{equation}
U_{dt}^{\mathrm{idle}} = \beta_{\mathrm{ptp}} \cdot W_d + \varepsilon_{\mathrm{ptp}}
\end{equation}
\begin{equation}
p^{\mathrm{participate}}(d,t) = \frac{\exp(U_{dt}^{\mathrm{participate}})}{\exp(U_{dt}^{\mathrm{participate}})+\exp(U_{dt}^{\mathrm{idle}})}
\end{equation}
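Both the registration and participation models above share the same binary-logit core; registration additionally scales the probability by the daily decision rate $\gamma$. A minimal sketch with the reference-scenario parameter values, using function names of our own rather than MaaSSim's API:

```python
import math

def choice_probability(u_yes: float, u_no: float, beta: float,
                       gamma: float = 1.0) -> float:
    """Binary logit probability of the 'yes' alternative, scaled by the
    share gamma of agents making the decision on a given day.

    For participation gamma = 1; for registration gamma < 1 represents
    the fraction of informed drivers considering registration each day.
    """
    e_yes, e_no = math.exp(beta * u_yes), math.exp(beta * u_no)
    return gamma * e_yes / (e_yes + e_no)

# Participation (gamma = 1): a driver whose expected income equals the
# reservation wage of EUR 80 is indifferent, so the probability is 0.5.
p = choice_probability(u_yes=80.0, u_no=80.0, beta=0.1)
```

With the reference value $\gamma = 0.2$, an indifferent informed driver registers with probability $0.2 \times 0.5 = 0.1$ on a given day.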
\subsubsection{Ride-hailing operations}
The financial reward for participation is modeled with the within-day simulation model of the MaaSSim simulator \citep{Kucharski2020MaaSSim}. It allows us to capture complex spatiotemporal within-day interactions in ridesourcing between three types of agents: travellers, (participating) drivers and the platform. The following assumptions are made about the operational strategies of these agents in the operational model.
\textit{Driver} agents' labour supply decisions are limited to the extensive margin, i.e. they will work during all hours considered by the within-day model. Drivers will accept all ride requests assigned to them in this time frame. Unassigned drivers do not reposition, instead they remain idle at their drop-off location until assigned to a new request. Driver agents are faced with per-kilometre operating costs $\delta$.
Each day, a \textit{traveller} agent makes an identical trip for which it requests a ride on the platform. If the time to receive an offer exceeds a threshold $\theta$, the traveller will revoke its ride request. If an offer is received within the tolerance threshold, it will be accepted. Ride offers cannot be cancelled at a later stage.
The \textit{platform} agent offers private rides on a road network with static travel times. It assigns requests to drivers whenever two constraints are met: (1) there are unassigned requests on the platform, and (2) there are idle drivers. It allocates the request-driver pair with the least amount of travel time from the driver's location to the request location. For each transaction, the ridesourcing platform charges a commission rate $\pi$. Ride fares on the platform consist of a base fare $f_{\mathrm{base}}$ and a per-kilometre fare $f_{\mathrm{km}}$.
We now specify $Q_{\mathrm{req}}$ as the (virtual) queue of unassigned requests on the platform and $Q_{\mathrm{driver}}$ as the (virtual) queue of idle drivers. $tt_{iu}$ corresponds to the travel time from the location of an idle driver $i\in Q_{\mathrm{driver}}$ to the pick-up location of an unassigned request $u\in Q_{\mathrm{req}}$. The matching function to find the request-driver pair $(u^*,i^*)$ with the least intermediate travel time is then formulated as follows:
\begin{equation}
(u^*,i^*) = \argmin_{u \in Q_{\mathrm{req}}, i\in Q_{\mathrm{driver}}} tt_{iu}
\end{equation}
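A minimal sketch of this matching rule (variable names are ours for illustration; MaaSSim's internal implementation differs):

```python
def match_least_travel_time(q_req, q_driver, tt):
    """Return the request-driver pair (u*, i*) with minimal pickup travel time.

    q_req / q_driver are iterables of unassigned-request and idle-driver ids;
    tt[(driver, request)] gives the travel time from the driver's location
    to the request's pick-up location.
    """
    return min(((u, i) for u in q_req for i in q_driver),
               key=lambda pair: tt[(pair[1], pair[0])])
```

In the simulator this rule is applied event-driven, i.e. whenever a new request arrives or a driver becomes idle.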
The earnings of ridesourcing drivers follow directly from ride fares paid by travellers. If the daily pool of travel requests is denoted as $R$, and the direct distance from request location to destination is denoted as $s_r$, the payout $PO_r$ to a driver for serving a single request $r\in R$ is defined as:
\begin{equation}
PO_{r} = (f_{\mathrm{base}} + f_{\mathrm{km}} \cdot s_r) \cdot (1 - \pi)
\end{equation}
The total payout $PO_{dt}$ to a driver on a specific day is the sum of the payouts $PO_r$ from requests served by this specific driver on a particular day $t$. Defining $a_{rdt}$ as a binary assignment variable indicating whether driver $d$ picks up request $r$ on day $t$, we can formulate driver's daily payout as:
\begin{equation}
PO_{dt} = \sum_{r\in R} PO_{r} \cdot a_{rdt}
\end{equation}
The net experienced income of a participating driver $I_{dt}^{\mathrm{act}}$ can now be formulated as:
\begin{equation}
I_{dt}^{\mathrm{act}} = PO_{dt} - OC_{dt}
\end{equation}
where $OC_{dt}$ represents a driver's operational costs on day $t$, accounting for deadheading distance $DH_{dt}$:
\begin{equation}
OC_{dt} = \left(\sum_{r\in R} s_r \cdot a_{rdt} + DH_{dt}\right) \cdot \delta
\end{equation}
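The payout and net-income definitions can be sketched as follows; the fare values are the Amsterdam reference tariffs introduced in the experimental design, while the per-kilometre operating cost $\delta$ below is an assumed placeholder value:

```python
# Reference tariffs (EUR); DELTA is an assumed placeholder, not a paper value.
F_BASE, F_KM = 1.40, 1.21   # base fare and per-kilometre fare
PI = 0.25                   # platform commission rate
DELTA = 0.25                # per-km operating cost (assumption)

def payout(distance_km: float) -> float:
    """Driver payout PO_r for one served request, after commission."""
    return (F_BASE + F_KM * distance_km) * (1.0 - PI)

def net_income(served_km: list, deadhead_km: float) -> float:
    """Daily net experienced income: total payouts minus operating costs."""
    po = sum(payout(s) for s in served_km)
    oc = (sum(served_km) + deadhead_km) * DELTA
    return po - oc
```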
\subsubsection{Learning}
As stated before, participation choice depends on the earnings that are expected on a particular day. Given that the typical ridesourcing driver has limited connections to other drivers \citep{robinson2017making}, anticipated earnings are predominantly based on individual experiences. Considering memory decay \citep{ebbinghaus2013memory} and dynamics in ridesourcing system variables, we cannot assume that drivers weigh all experiences equally. In the absence of empirical evidence for the specification of the learning function in ridesourcing labour supply, we rely on findings from learning in another travel-related context. \cite{bogers2007modeling} demonstrate that conditional on sufficient experience, learning in route choice can be described using a Markov formulation. In this study, we propose a two-phase learning model for driver's perceived income to differentiate learning behaviour by experienced and inexperienced drivers. When the number of days of participation experience $E_{dt}$ exceeds a threshold $\omega$, learning is described with a Markov formulation similar to \cite{bogers2007modeling}. However, when $E_{dt}$ is below $\omega$, drivers compute the unweighted average past income as a proxy for their expected income, to prevent oversensitive and abrupt reactions to the first few experiences. With the actual experienced income on the previous day specified as $I_{d,t-1}^{\mathrm{act}}$, we define the expected income $I_{dt}^{\mathrm{exp}}$ of driver $d$ for day $t$ as:
\begin{equation}
I_{dt}^{\mathrm{exp}} = (1-\kappa)\cdot I_{d,t-1}^{\mathrm{exp}} + \kappa\cdot I_{d,t-1}^{\mathrm{act}} \label{eq:learning-1}
\end{equation}
in which $\kappa$, the weight attributed to the last experience relative to all previous experiences, is formulated as:
\begin{equation}
\kappa = \begin{cases}
0 & \hspace{7mm} w_{d,t-1} = 0 \\
1 / (E_{dt}+1) & \hspace{7mm} 0 < E_{dt} \cdot w_{d,t-1} < \omega \\
1 / \omega & \hspace{7mm} \text{otherwise} \label{eq:learning-2}
\end{cases}
\end{equation}
in which $w_{di}$ is a binary variable to indicate whether a driver participated on a past day $i\in\{1,\dots,t-1\}$ and $E_{dt}$ defines the number of days during which the driver has so far gained a participation experience:
\begin{equation}
E_{dt} = \sum_{i\in\{1,\dots,t-1\}} w_{di} \label{eq:learning-3}
\end{equation}
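The two-phase update in Equations \ref{eq:learning-1}--\ref{eq:learning-3} can be sketched as a direct transcription of the cases, with $\omega = 5$ as in the reference scenario (function and argument names are ours):

```python
def update_expected_income(exp_prev: float, act_prev: float,
                           w_prev: int, e_days: int,
                           omega: int = 5) -> float:
    """One day of the two-phase learning model.

    exp_prev / act_prev: expected and actual income on day t-1,
    w_prev: 1 if the driver participated on day t-1, else 0,
    e_days: participation experience E_dt accumulated up to day t.
    """
    if w_prev == 0:
        kappa = 0.0                  # no new experience: expectation unchanged
    elif e_days * w_prev < omega:
        kappa = 1.0 / (e_days + 1)   # averaging phase for inexperienced drivers
    else:
        kappa = 1.0 / omega          # fixed-weight (Markov) phase
    return (1.0 - kappa) * exp_prev + kappa * act_prev
```

Note that once $E_{dt} \geq \omega$, every new experience keeps the constant weight $1/\omega$, which is the memory-decay property motivated above.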
\subsection{Implementation}
In this subsection, we describe the simulation framework, as well as the definitions for convergence and the number of replications in the experiment.
\subsubsection{Simulation framework}
We implement our day-to-day driver model in MaaSSim, an open-source agent-based discrete event simulator of mobility-on-demand operations, programmed in Python \citep{Kucharski2020MaaSSim}. Both supply and demand are represented microscopically. For supply this pertains to the explicit representation of single vehicles and their movements in time and space, while for demand this pertains to exact trip request time and destinations defined at the graph node level. Travel times in the network are precomputed and stored in a skim matrix.
\subsubsection{Convergence}
A key property of ridesourcing systems is that the size of the fleet may fluctuate on a day-to-day basis. Due to a random component in participation choice, these variations even occur when the system is otherwise in a steady state. To determine whether a ridesourcing system has achieved a steady state, we therefore need to examine other indicators. We argue that a combined analysis of two indicators suffices to establish the convergence of the system. First, there should be few new entrants in the market, i.e. the number of agents with the ability to participate is relatively stable. Second, the degree of learning among registered drivers needs to be minimal, i.e. their expected reward of participation is relatively stable. Together, those criteria imply that the expected fleet size shows limited variations from day to day. The number of drivers that actually decide to participate may still fluctuate due to stochasticity in the participation decision.
We formalise the convergence criteria by checking whether relative day-to-day changes in the number of registered drivers $G_t$ and the expected income of registered drivers $I_{d,t}^{\mathrm{exp}}$ exceed a convergence parameter $\varphi$. The supply evolution process has sufficiently converged when $\varphi$, which is set to approach 0, has not been exceeded on $k$ consecutive days:
\begin{equation}
\frac{G_{t-j}-G_{t-j-1}}{N_{t-j-1}} \leq \varphi \hspace{5mm} \forall j\in \{0,1,\dots,k-2,k-1\}
\end{equation}
\begin{equation}
\frac{|I_{d,t-j}^{\mathrm{exp}}-I_{d,t-j-1}^{\mathrm{exp}}|}{I_{d,t-j-1}^{\mathrm{exp}}} \leq \varphi \hspace{5mm} \forall d\in G_t, \forall j\in \{0,1,\dots,k-2,k-1\}
\end{equation}
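The two criteria can be checked jointly as sketched below, assuming per-day income lists aligned by driver (an assumption of this sketch; names are ours):

```python
def has_converged(registrations, incomes, n_pool, phi=0.01, k=10):
    """Check both convergence criteria over the last k day-to-day transitions.

    registrations: daily counts G_t of registered drivers,
    incomes: per-day lists of registered drivers' expected incomes,
             aligned by driver across days,
    n_pool: size of the driver pool N; phi, k: convergence parameters.
    """
    if len(registrations) < k + 1:
        return False
    for j in range(k):
        g_now, g_prev = registrations[-1 - j], registrations[-2 - j]
        if (g_now - g_prev) / n_pool > phi:       # registration criterion
            return False
        for i_now, i_prev in zip(incomes[-1 - j], incomes[-2 - j]):
            if abs(i_now - i_prev) / i_prev > phi:  # learning criterion
                return False
    return True
```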
\subsubsection{Replications}
Due to stochastic components in information diffusion, platform registration and participation, we need to replicate the experiment for statistical significance. We determine the number of required replications $R(m)$ based on a number of initial replications $m$, with a formula commonly used in stochastic traffic simulations \citep{burghout2004note}:
\begin{equation}
R(m) = \left(\frac{S(m)\cdot t_{m-1,1-\frac{\alpha}{2}}}{\overline{X}(m)\cdot\varepsilon_{\mathrm{repl}}}\right)^2
\end{equation}
where $\overline{X}(m)$ and $S(m)$ are, respectively, the estimated mean and standard deviation of the mean expected income in the population in equilibrium from a sample of $m$ runs, $\varepsilon_{\mathrm{repl}}$ is the allowable percentage error of estimate $\overline{X}(m)$ of the actual mean, and $\alpha$ is the level of significance.
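A sketch of this computation; the Student-t quantile is passed in as an argument to keep the sketch dependency-free (in practice it could be obtained with e.g. `scipy.stats.t.ppf(1 - alpha / 2, m - 1)`):

```python
import math

def required_replications(sample_mean: float, sample_std: float,
                          t_quantile: float, eps: float = 0.01) -> int:
    """Replications needed so that the relative error of the estimated
    mean stays within eps at the given t quantile, rounded up."""
    return math.ceil((sample_std * t_quantile / (sample_mean * eps)) ** 2)
```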
\section{Experimental design}
A series of experiments is constructed to investigate the significance of supply market conditions, platform pricing and service entry barriers in ridesourcing provision. In this section, we introduce the experimental design.
\subsection{Set-up}
We apply the proposed approach to the city of Amsterdam, currently hosting ridesourcing service UberX. It is estimated that in 2019, a total of 8 million taxi or ridesourcing rides took place in Amsterdam, served by 5,000 - 7,000 drivers \citep{gemamsterdam2019}. On an average day, this amounts to approximately 20,000 hailed rides. Since it is unlikely that everyone in Amsterdam potentially interested in driving for a ridesourcing platform actually served at least a single ridesourcing ride in 2019, we assume that the total ridesourcing supply pool in Amsterdam consists of 10,000 drivers.
Demand is sampled once from a database of rides longer than 2.5 kilometres, generated by the activity-based model Albatross for the Netherlands \citep{arentze2004learning}. It is assumed that travellers are willing to wait five minutes to be matched after requesting their ride, i.e. patience threshold $\theta$ is set to five minutes. Participating drivers do not make within-day work shift decisions. A single day in the simulation consists of eight hours, corresponding to a typical working day. We simplify the performance of the underlying road network with a universal (constant) traffic speed of 36 km/h on all network links. Ride fares in the experiment are equal to the standard tariffs charged to travellers by Uber in Amsterdam \citep{Uber2020wat}, i.e. a base fare of \euro{1.40} and an additional \euro{1.21} per kilometre. Unlike Uber's pricing model, there is no minimum ride tariff. In the reference scenario that is used throughout the experiment, the commission rate $\pi$ is set to Uber's 25\% \citep{Uber2020track}. As it has been demonstrated that the reservation wage of Uber drivers might be higher or lower than the minimum wage in a given labour market \citep{Chen2019value}, we set the reservation wage $W_d$ in the experiment to \euro{80}, which is close to the minimum daily wage in the Netherlands \citep{GovNL2020minwage}.
We set the information transmission rate $\beta_{\mathrm{inf}}$ to 0.2 so that after around 50 iterations all potential drivers are likely to be informed. Choice model parameters $\beta_{\mathrm{reg}}$ and $\beta_{\mathrm{ptp}}$ are set to 0.2 and 0.1 respectively, reflecting that unobserved factors are likely to play a larger role in short-term participation than in registration, even though drivers have more information about these variables at that stage. With $\gamma$ set to 0.2, we expect 20\% of informed drivers to make a registration decision on a given day. The learning threshold $\omega$ is set to 5 days, implying that after five experiences the weight of each new experience in the determination of the expected income has dropped to 0.2, and remains equal afterwards. Convergence parameters $\varphi$ and $k$ are set to 0.01 and 10, respectively.
With each driver assigned a probability of $10 / N$ to be registered at the start of the simulation, we expect an initial registration volume of 10 drivers. Their initial expected income $I^{\mathrm{exp}}_0$ is set to the sum of reservation wage $W_d$ (\euro{80} in the reference scenarios) and the daily share of registration costs $C_d$ (\euro{20} in the reference scenarios). All other driver agents start in the uninformed state.
We empirically establish that the computational load of a single day in the simulation scales directly with the number of requests and vehicles in the system. This implies that if we represent the real-world population with a 10\% sample for supply and demand, similar to other studies applying agent-based models in the transportation field \citep{kaddoura2015marginal,bischoff2016autonomous}, we can reduce the total computational load of our experiment by 90\%. Given that we perform a scenario analysis in which each scenario requires multiple replications of our day-to-day simulation approach, we can benefit greatly from the efficiency gain offered by sampling. However, we need evidence that sampling has a limited effect on our simulation results, especially given that ridesourcing may benefit from economies of scale \citep{zha2016economic}. Therefore, we compare the resulting system performance indicators for a 10\% sample of demand and supply to the indicators when we do not apply sampling. Based on three replications for each scenario, we observe that a less efficient matching algorithm in the scenario with sampled supply and demand may lead to a slightly higher average waiting time for travellers, indicating that simulation based on a 10\% sample might lead to slightly overestimated travel times. Remarkably, other performance indicators of the service do not seem to be affected by sampling. The expected income in equilibrium, for example, differs by less than 1\%. Only in the early driver adoption stage, with limited supply, do we note a discrepancy in the average income of drivers, which is quickly overcome once supply increases. Our analysis demonstrates that registration and participation volumes scale directly from a 10\% sample to supply and demand levels representing the full population, which indicates that in this case a 10\% sample of supply ($N=1000$) and demand ($M=2000$) is sufficiently large to represent ridesourcing dynamics for the whole city.
When deciding how many replications of the experiment are needed, we allow a relative error $\varepsilon_{\mathrm{repl}}$ of 0.01, based on statistical significance $\alpha$ of 0.01.
\subsection{Scenario design}
\subsubsection{Supply market}
In this part of the experiment, we investigate the extent to which the volume of the pool of potential drivers $N$ is a decisive factor for ridesourcing supply in equilibrium. Compared to the reference scenario (\textit{DP1000} in Table \ref{tab:scenarios}), which assumes a relatively large pool of potential drivers compared to current supply in the network, in alternative scenarios (\textit{DP200} - \textit{DP800}) we test values for $N$ that are smaller, i.e. between 200 and 800 drivers with intervals of 200.
\begin{table}[!ht]
\tbl{Scenario design}
{\begin{tabular}{l c c c c c} \toprule
Label & $N$ (-) & $W_d$ (\euro) & $\beta_{\mathrm{ptp}}$ & $\pi$ (-) & $C_d$ (\euro) \\ \midrule
DP200 & 200 & 80 & 0.2 & 0.25 & 20 \\
DP400 & 400 & 80 & 0.2 & 0.25 & 20 \\
DP600 & 600 & 80 & 0.2 & 0.25 & 20 \\
DP800 & 800 & 80 & 0.2 & 0.25 & 20 \\
DP1000* & 1,000 & 80 & 0.2 & 0.25 & 20 \\ \midrule
RW50 & 1,000 & 50 & 0.2 & 0.25 & 20 \\
RW60 & 1,000 & 60 & 0.2 & 0.25 & 20 \\
RW70 & 1,000 & 70 & 0.2 & 0.25 & 20 \\
RW80* & 1,000 & 80 & 0.2 & 0.25 & 20 \\
RW90 & 1,000 & 90 & 0.2 & 0.25 & 20 \\
RW100 & 1,000 & 100 & 0.2 & 0.25 & 20 \\
RW110 & 1,000 & 110 & 0.2 & 0.25 & 20 \\ \midrule
HR0* & 1,000 & $\mathcal{N}$(80,0) & 0.2 & 0.25 & 20 \\
HR10 & 1,000 & $\mathcal{N}$(80,10) & 0.2 & 0.25 & 20 \\
HR20 & 1,000 & $\mathcal{N}$(80,20) & 0.2 & 0.25 & 20 \\
HR30 & 1,000 & $\mathcal{N}$(80,30) & 0.2 & 0.25 & 20 \\ \midrule
IV005 & 1,000 & 80 & 0.05 & 0.25 & 20 \\
IV010 & 1,000 & 80 & 0.1 & 0.25 & 20 \\
IV020* & 1,000 & 80 & 0.2 & 0.25 & 20 \\
IV050 & 1,000 & 80 & 0.5 & 0.25 & 20 \\
IV100 & 1,000 & 80 & 1.0 & 0.25 & 20 \\ \midrule
CF5 & 1,000 & 80 & 0.2 & 0.05 & 20 \\
CF15 & 1,000 & 80 & 0.2 & 0.15 & 20 \\
CF25* & 1,000 & 80 & 0.2 & 0.25 & 20 \\
CF35 & 1,000 & 80 & 0.2 & 0.35 & 20 \\
CF45 & 1,000 & 80 & 0.2 & 0.45 & 20 \\
CF55 & 1,000 & 80 & 0.2 & 0.55 & 20 \\ \midrule
RC0 & 1,000 & 80 & 0.2 & 0.25 & 0 \\
RC10 & 1,000 & 80 & 0.2 & 0.25 & 10 \\
RC20* & 1,000 & 80 & 0.2 & 0.25 & 20 \\
RC30 & 1,000 & 80 & 0.2 & 0.25 & 30 \\
RC40 & 1,000 & 80 & 0.2 & 0.25 & 40 \\\bottomrule
\end{tabular}}
\tabnote{*Reference scenario}
\label{tab:scenarios}
\end{table}
Another supply market condition that is expected to affect emergent ridesourcing supply is the reservation wage, which may be high or low depending for example on the ease of access to alternative sources of income \citep{baron2018disruptive,Chen2019value}. First, we examine six alternative scenarios in which the reservation wage is considered to be homogeneous across the population of drivers. With these scenarios, labeled \textit{RW50} - \textit{RW110} in Table \ref{tab:scenarios}, we cover the range of reservation wages from \euro 50 to \euro 110. Then, we consider three additional scenarios with heterogeneity in reservation wage $W_d$, to represent that the opportunity cost of ridesourcing participation may vary across the population due to uneven opportunities in the labour market. We represent the heterogeneity in $W_d$ with a normal distribution in which the mean is equal to the homogeneous reservation wage value from the reference scenario (\euro80). In scenarios \textit{HR0} - \textit{HR30}, we test the effect of reservation wage heterogeneity on ridesourcing supply with four values for the standard deviation of the reservation wage distribution: \euro0 (i.e. homogeneous reservation wage), \euro10, \euro20 and \euro30.
Since participation in our approach is modelled with a probabilistic participation choice model, we can also investigate how opportunistic behaviour in labour supply affects ridesourcing supply levels. We do this by varying the participation logit model parameter $\beta_{\mathrm{ptp}}$, representing the relative weight that drivers assign to income as opposed to other, in our model unobserved, variables. Lacking empirical evidence for the value of $\beta_{\mathrm{ptp}}$, in scenarios \textit{IV005} - \textit{IV100} we test a relatively large range of values: from 0.05 to 1.0.
\subsubsection{Platform pricing}
The main instrument that ridesourcing platforms hold to steer supply is their pricing strategy, including the ride fare structure and the commission rate, i.e. the proportion of each transaction retained by the platform. We investigate the implications of price settings in the ridesourcing market for drivers and travellers, accounting for the dynamics related to supply, by analysing a series of scenarios covering a relatively large range of commission rates $\pi$: from a limited 5\% to more than half of the ride fare, 55\%, with intervals of 10\%. The scenarios are included in Table \ref{tab:scenarios} as scenarios \textit{CF5} - \textit{CF55}.
\subsubsection{Entry barriers}
Ridesourcing uptake - and potentially excessive competition - on the supply side may partially be accredited to low entry barriers \citep{rayle2016just}. On the other hand, a lack of capital participation costs may also lead to less frequent participation \citep{hall2018analysis}. Hence, we investigate the effect of financial entry barriers, such as a taxi license, on emergent ridesourcing supply. We examine five scenarios for which we vary the registration cost parameter $C_d$, which represents costs that are sunk in participation but not in registration. We consider two extreme scenarios, one in which capital costs are absent, and one in which capital costs add up to half the reservation wage (\euro40). In the three intermediate scenarios, the relative penalty for registration amounts to either \euro10, \euro20 or \euro30. In Table \ref{tab:scenarios}, the scenarios are labeled \textit{RC0} - \textit{RC40}.
\subsection{User equilibrium optimality}
Unlike transportation services in which drivers are employed by the service provider, supply in ridesourcing is a decentralised process centered around the labour decisions of individual drivers. So far, we have considered how to test the effect of labour market characteristics, platform policies and entry barriers on ridesourcing supply, but not yet how the emerging user equilibria compare to supply if controlled by a central service provider or organisations representing the interests of travellers and drivers. Specifically, we investigate the optimality of decentralised ridesourcing supply from three different perspectives:
\begin{itemize}
\item \textit{Service provider (platform)}: Aims to maximise the profit from collecting a fee from each transaction between travellers and drivers
\item \textit{Traveller union}: Representing the interests of travellers, it aims to minimise travel times and rejected requests. We formalise this objective with a value of time of \euro{8/h}, the average value for travellers in the Netherlands \citep{RWS2020kengetallen}, and a penalty of \euro8 for each rejected request.
\item \textit{Driver union}: Representing the interests of the driver community, it aims to maximise total driver surplus in the system. The surplus for an individual driver is defined as the difference between its actual earnings ${I_{dt}^{\mathrm{act}}}$ and reservation wage $W_d$ \citep{Chen2019value}.
\end{itemize}
We search for the optimal fleet size for the three different parties by performing a brute-force search, testing their respective objective functions for a single day assuming various participation volumes. We test values around the user equilibrium in the base scenario: from 20 to 300 participating drivers in steps of 20, i.e. $m = [20,40,\dots,280,300]$.
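The brute-force search over candidate fleet sizes amounts to a one-dimensional grid search; a sketch with a hypothetical objective function (in the experiment, each evaluation would instead run a one-day simulation with the given participation volume):

```python
def brute_force_optimum(objective, fleet_sizes=range(20, 301, 20)):
    """Return the fleet size maximising a party's objective over the grid."""
    return max(fleet_sizes, key=objective)

# Hypothetical concave profit curve peaking at 140 participating drivers:
best = brute_force_optimum(lambda m: -(m - 140) ** 2)
```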
\section{Results}
We analyse the results of our experiments focusing on the evolution process of ridesourcing and specifically the role of the supply market and pricing policy. Table \ref{tab:results} contains the comprehensive list of KPIs on day 200 of our iterative simulation, when all replication runs have converged to an equilibrium.
\begin{table}[ht!]
\sisetup{round-mode=places}
\tbl{KPIs in equilibrium for all scenarios}
{\begin{tabular}{l c c c c c c c c c c c} \toprule
Label & \rot{Informed drivers} & \rot{Registered drivers} & \rot{Participating drivers} & \rot{\shortstack[l]{Expected income, \\ mean (\euro)}} & \rot{\shortstack[l]{Expected income, \\ std. among drivers (\euro)}} & \rot{\shortstack[l]{Experienced income, \\ mean (\euro)}} & \rot{\shortstack[l]{Experienced income, \\ std. among drivers (\euro)}} & \rot{Satisfied requests (\%)} & \rot{Average waiting time (s)} & \rot{Daily platform profit (\euro)} & \rot{\shortstack[l]{Convergence criterion \\ satisfied (day)}} \\ \midrule
DP200 & 200 & 198 & 125 & 85.96 & 10.49 & 89.36 & 19.93 & 100.0 & 162.8 & 5217 & 47.0 \\
DP400 & 400 & 301 & 136 & 77.56 & 10.62 & 82.25 & 20.75 & 100.0 & 153.6 & 5217 & 58.8 \\
DP600 & 600 & 350 & 140 & 75.04 & 10.57 & 80.38 & 21.15 & 100.0 & 151.1 & 5217 & 68.2 \\
DP800 & 800 & 377 & 142 & 73.79 & 10.77 & 79.48 & 20.85 & 100.0 & 150.5 & 5217 & 61.3 \\
DP1000 & 1000 & 425 & 145 & 71.83 & 10.29 & 77.58 & 21.47 & 100.0 & 148.8 & 5217 & 64.8 \\\midrule
RW50 & 1000 & 572 & 228 & 45.11 & 11.38 & 51.19 & 21.27 & 100.0 & 127.0 & 5351 & 58.1 \\
RW60 & 1000 & 504 & 192 & 54.17 & 11.33 & 60.63 & 22.49 & 100.0 & 137.7 & 5351 & 68.5 \\
RW70 & 1000 & 454 & 167 & 63.34 & 11.21 & 69.41 & 22.02 & 100.0 & 145.5 & 5351 & 62.0 \\
RW80 & 1000 & 422 & 147 & 72.42 & 11.03 & 78.36 & 21.51 & 100.0 & 153.5 & 5351 & 64.2 \\
RW90 & 1000 & 396 & 133 & 81.53 & 10.81 & 86.49 & 21.92 & 100.0 & 164.2 & 5351 & 74.2 \\
RW100 & 1000 & 368 & 118 & 90.98 & 10.21 & 96.56 & 20.89 & 100.0 & 185.3 & 5351 & 75.8 \\
RW110 & 1000 & 339 & 106 & 100.33 & 9.76 & 105.14 & 17.58 & 99.7 & 227.9 & 5334 & 71.0 \\\midrule
HR0 & 1000 & 398 & 143 & 71.56 & 13.62 & 78.98 & 23.22 & 100.0 & 152.8 & 5230 & 70.0 \\
HR10 & 1000 & 382 & 149 & 67.46 & 14.23 & 75.76 & 24.07 & 100.0 & 149.2 & 5230 & 55.4 \\
HR20 & 1000 & 393 & 168 & 60.95 & 14.01 & 67.51 & 24.12 & 100.0 & 138.0 & 5230 & 52.4 \\
HR30 & 1000 & 410 & 188 & 55.43 & 14.49 & 60.56 & 22.90 & 100.0 & 130.5 & 5230 & 53.6 \\\midrule
IV005 & 1000 & 401 & 158 & 70.36 & 10.67 & 72.51 & 22.33 & 100.0 & 150.0 & 5312 & 56.5 \\
IV010 & 1000 & 415 & 148 & 72.86 & 10.32 & 77.48 & 20.77 & 100.0 & 155.8 & 5312 & 60.3 \\
IV020 & 1000 & 432 & 137 & 74.15 & 10.30 & 83.11 & 20.94 & 100.0 & 161.3 & 5312 & 65.5 \\
IV050 & 1000 & 482 & 125 & 75.86 & 10.63 & 90.79 & 21.00 & 100.0 & 173.8 & 5312 & 68.5 \\
IV100 & 1000 & 538 & 120 & 77.67 & 10.51 & 93.86 & 20.70 & 100.0 & 176.9 & 5312 & 91.7 \\\midrule
CF5 & 1000 & 472 & 181 & 73.23 & 16.91 & 85.61 & 30.94 & 100.0 & 129.2 & 1042 & 56.4 \\
CF15 & 1000 & 441 & 163 & 72.95 & 13.11 & 82.15 & 25.47 & 100.0 & 137.5 & 3125 & 62.4 \\
CF25 & 1000 & 400 & 144 & 72.85 & 10.24 & 77.94 & 20.29 & 100.0 & 148.0 & 5195 & 72.6 \\
CF35 & 1000 & 356 & 120 & 72.25 & 7.61 & 75.25 & 14.24 & 100.0 & 166.1 & 7265 & 73.4 \\
CF45 & 1000 & 240 & 70 & 70.70 & 3.88 & 72.04 & 8.26 & 86.7 & 363.3 & 7962 & 77.0 \\
CF55 & 1000 & 52 & 11 & 67.03 & 2.08 & 67.29 & 4.14 & 16.9 & 194.1 & 1751 & 18.4 \\\midrule
RC0 & 1000 & 885 & 168 & 62.52 & 10.73 & 69.30 & 22.75 & 100.0 & 143.4 & 5394 & 68.3 \\
RC10 & 1000 & 632 & 157 & 66.57 & 10.95 & 74.35 & 21.72 & 100.0 & 148.6 & 5388 & 63.0 \\
RC20 & 1000 & 429 & 149 & 72.12 & 11.19 & 78.13 & 20.49 & 100.0 & 152.9 & 5384 & 68.7 \\
RC30 & 1000 & 287 & 138 & 78.87 & 11.08 & 84.04 & 20.53 & 100.0 & 158.0 & 5384 & 60.0 \\
RC40 & 1000 & 205 & 128 & 85.84 & 11.24 & 89.75 & 19.09 & 100.0 & 164.1 & 5383 & 55.7 \\\bottomrule
\end{tabular}}
\label{tab:results}
\end{table}
\subsection{Phases in ridesourcing provision}\label{subsect:phases}
In this subsection, we examine the evolution of ridesourcing supply and the implications for suppliers specifically for one of the reference scenarios, $\textit{RW80}$ (Figure \ref{fig:base-scn}a,b). In accordance with the specification of the information diffusion process, all 1,000 driver agents are eventually informed about the existence of the service. In equilibrium, averaged over multiple simulation replications, fewer than half of those agents (420) are registered after 200 days, of which approximately a third (145 drivers) participate on a typical day. We identify five phases in the evolution process:
\begin{enumerate}
\item \textit{Day 0 - 10}: Due to a lack of information, few driver agents have registered, meaning participation is low as well. Participating drivers profit from a lack of competition and can make a high profit.
\item \textit{Day 10 - 20}: Information transmission speeds up. Informed drivers are likely to register as they observe a high average income. Participation increases rapidly, leading to a collapse in the experienced income. Drivers start to learn that their anticipated income may not be feasible.
\item \textit{Day 20 - 50}: Information diffusion continues. Drivers further downscale their income expectation based on new participation experiences. As a result of the drop in expected income, the average driver participates less frequently. The number of registered drivers still increases, albeit at a slower pace than before. As a consequence, the total participation volume increases marginally, leading to a further decrease in the experienced income level.
\item \textit{Day 50 - 100}: All drivers are now informed. Registration continues at a decreasing pace, yet participation increases only marginally since individual drivers participate less frequently, as a result of the continuing decrease in the average expected income.
\item \textit{Day 100 - 200}: Equilibrium is reached. Registrations and the decrease in experienced and expected income are now limited. Participation remains constant over time.
\end{enumerate}
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/base_scenario.jpg}
\caption{(a) The evolution of the number of registered, informed and participating drivers, (b) the evolution of the average expected and experienced income, and (c) the distribution of expected income for participating drivers versus registered drivers in equilibrium} \label{fig:base-scn}
\end{figure}
There are two aspects in Figure \ref{fig:base-scn}b worth highlighting. First, the average expected income of drivers converges to a value below the average experienced income. Figure \ref{fig:base-scn}c provides an explanation for the discrepancy in expected and experienced income: drivers with a low expected income are relatively unlikely to participate compared to drivers with a higher expected income, and consequently less likely to 'update' their expected income based on a new (likely more positive) driving experience. Convergence is reached when the average experienced income is equal to the average expected income of participating drivers, which is higher than the average expected income of all - also non-participating - drivers. Second, the presented evolution process demonstrates that, when we assume that variables other than expected income play a role in participation choice, the average daily income of participating drivers on the platform may converge to a value below the reservation wage (Figure \ref{fig:base-scn}b). This can be attributed to unobserved variables in participation, like scheduled activities for a given day, which cause a significant group of drivers to work even when their experienced income is below the reservation wage (Figure \ref{fig:base-scn}c). In fact, more than half of the drivers that participate on a given day in the equilibrium expect to earn less than their reservation wage. This finding emphasizes that the main value of a ridesourcing service may be found in the flexibility it offers, as suggested also by \cite{Chen2019value}, rather than in providing a satisfactory level of income over a longer period of time.
\subsection{Supply market conditions}
In this subsection, we present the effect of the size of the driver pool, the reservation wage and unobserved variables in participation on dynamic ridesourcing provision. The information diffusion process is identical across scenarios, except for those with an alternative size of the driver pool (see Equation \ref{eq:inf-diff}).
\subsubsection{Driver pool}
When the pool of drivers is limited to 200 (scenario \textit{DP200} in Table \ref{tab:scenarios}), we find that an equilibrium state is reached around day 50 (Table \ref{tab:results}). In this state, nearly all potential drivers have registered (Figure \ref{fig:res_pool-six}e) and the participation frequency is fairly stable at a high level (Figure \ref{fig:res_pool-six}f). When the pool of potential drivers is larger, there are still unregistered drivers left around this time in the simulation, of whom a part decides to sign up in a later phase. This explains why the transition process takes longer in our experiment when the pool of potential drivers is large.
For 200 potential drivers, we find an equilibrium average expected income for registered drivers that exceeds their reservation wage by nearly 10\%. In all other scenarios, representing supply markets of 400 potential drivers or more, the average driver fails to match the reservation wage, falling short by 5 - 10\% (Figure \ref{fig:res_pool-six}a). It is striking that there seems to be little difference in service performance when a supply market consists of 1000 drivers as opposed to 400 drivers. In both cases, after approximately 25 iterations supply is sufficient to saturate the market and serve all requests in the system (Figure \ref{fig:res_pool-six}b), without a significant difference in the average waiting time for travellers (Figure \ref{fig:res_pool-six}c). Figure \ref{fig:res_pool-six}d shows that the similarity in travellers' level of service follows directly from the daily participation volume, which is approximately equal in both scenarios. Apparently, 600 additional potential drivers in the supply market only yield around 125 more registrations around the 200th day (Figure \ref{fig:res_pool-six}e), while those that are registered also participate less frequently when the potential supply market is large (Figure \ref{fig:res_pool-six}f), on average 34\% versus 46\% of the days.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/driver_pool_six_fig.jpg}
\caption{The effect of the size of the driver pool on the evolution of (a) the expected income of registered drivers as ratio of their reservation wage, (b) the share of requests that are satisfied, (c) the average waiting time for pick-up for travellers, (d) daily participation volumes, (e) the total number of registered drivers, and (f) the share of registered drivers that participate}\label{fig:res_pool-six}
\end{figure}
The finding that ridesourcing supply converges to an invariant participation volume for different sizes of the labour supply market, as long as the total supply volume is relatively high compared to demand, demonstrates the existence of a balancing effect in ridesourcing supply. In such a market, the frequency of participation compensates for the size of the pool of registered drivers, which means that negative consequences related to oversupply have an inherent upper bound. At this upper bound, however, expected income may still be below the reservation wage. Only when the size of the supply market is limited to a value close to this invariant participation volume do we find expected income to exceed the reservation wage. This resonates with the rationale behind supply caps, implemented for example in New York City, to raise ridesourcing drivers' average income. The results also show that travellers may not suffer much from a supply cap, at least as long as the cap is set to a sensible level.
\subsubsection{Homogeneous reservation wage}
Based on our experiments, the reservation wage of potential drivers has a minor effect on the duration of the transition process. The equilibrium condition is reached marginally more quickly when the reservation wage of drivers is low (Table \ref{tab:results}). This may be caused by more registrations in an early phase of the evolution (Figure \ref{fig:res_wage-six}e) due to lower labour opportunity costs. While there are still new registrations in a later phase, the relative increase in the size of the pool of registered drivers is low compared to scenarios with higher reservation wages.
Remarkably, we find that in equilibrium the ratio between expected income and reservation wage is constant for various reservation wages (Figure \ref{fig:res_wage-six}a), slightly under 1. It means that as the reservation wage in a market increases, the expected income in equilibrium increases proportionally. The effect of reservation wage on the level of service for travellers seems to be limited. Even in scenario \textit{RW110}, in which labour costs are least favourable for supply, i.e. the reservation wage equals 110 euros, supply is sufficient to serve all requests (Figure \ref{fig:res_wage-six}b), although travellers are confronted with longer travel times than in scenarios with a lower cost of labour (Figure \ref{fig:res_wage-six}c). The additional waiting time is, however, capped at around two minutes and thereby fairly limited. The differences in waiting time stem from participation volumes that vary between 100 and 230 for different specifications of the reservation wage (Figure \ref{fig:res_wage-six}d). Lower participation when labour supply is costly results both from fewer registrations (Figure \ref{fig:res_wage-six}e) and less frequent participation among those registered (Figure \ref{fig:res_wage-six}f).
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/res_wage_six_fig.jpg}
\caption{The effect of (homogeneous) reservation wage on the evolution of (a) the expected income of registered drivers as ratio of their reservation wage, (b) the share of requests that are satisfied, (c) the average waiting time for pick-up for travellers, (d) daily participation volumes, (e) the total number of registered drivers, and (f) the share of registered drivers that participate}\label{fig:res_wage-six}
\end{figure}
The results imply that a weak labour market, associated with low reservation wages, leads to reduced income levels for suppliers in the ridesourcing market, because new suppliers are attracted to the market as a result of a lack of alternative employment, creating competition for pick-ups. Ridesourcing providers on the other hand can potentially profit from the inflow of supply in times of economic recession by means of reduced waiting time for travellers, which may attract new demand, or alternatively, by giving them the opportunity to increase the commission rate without sacrificing the level of service for travellers.
\subsubsection{Heterogeneous reservation wage}
It can be expected that the minimum income that drivers want to collect with ridesourcing participation is not equal for all drivers, for example because some drivers have better access to alternative employment than others. To capture reservation wage heterogeneity, one of the set of scenarios included in our experiment is directed at investigating ridesourcing supply for different reservation wage distributions, with the same mean $\mu$ as the reference scenario but different standard deviations $\sigma$.
Figure \ref{fig:het_res_wage-six}a shows that when there is a lot of variation in drivers' reservation wage (scenario \textit{HR30}), the expected income of registered drivers in equilibrium is relatively low. Yet, a high value for $\sigma$ does not seem to lead to a slower registration process (Figure \ref{fig:het_res_wage-six}b). In fact, participation appears to be higher with strong heterogeneity in the reservation wage (Figure \ref{fig:het_res_wage-six}c). Figure \ref{fig:het_res_wage-six}d demonstrates that in such a scenario, a relatively high share of registered drivers has a low reservation wage, meaning that they are relatively likely to supply labour on a given day, even when they expect a low income. This also explains why registration (Figure \ref{fig:het_res_wage-six}b) peaks early in a scenario with high $\sigma$: drivers with a reservation wage below the mean benefit significantly from registration and are thus relatively likely to register. Due to the quick influx of drivers and the fact that drivers that are still unregistered have a relatively high reservation wage, registrations then slow down quickly. High participation volumes in scenarios with strong heterogeneity result in a low average income for drivers in the system (Figure \ref{fig:het_res_wage-six}e) and slightly lower waiting times for travellers (Figure \ref{fig:het_res_wage-six}f).
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/het_res_wage_six_fig.jpg}
\caption{The effect of heterogeneity in reservation wage on (a) the evolution of the average expected income of registered drivers as ratio of their reservation wage, (b) the evolution of the total number of registered drivers, (c) the evolution of daily participation volumes, (d) the probability density function of reservation wage for registered drivers, (e) the evolution of average experienced income of participating drivers as ratio of their reservation wage and (f) the evolution of the average waiting time for pick-up for travellers}\label{fig:het_res_wage-six}
\end{figure}
The results imply that with a high degree of inequality in the labour market, ridesourcing markets may be flooded with drivers with limited labour opportunities elsewhere. Due to their weak position in the labour market, they are willing to work for ridesourcing platforms even when wages are low, providing competition for other participating drivers. Our experiment demonstrates that high participation may only yield limited benefits in terms of the average waiting time for travellers, while the income for drivers may be significantly lower than in scenarios with lower participation. We conclude that, especially in labour markets characterised by large inequalities, supply caps may be necessary to guarantee a socially desired minimum income for ridesourcing drivers.
\subsubsection{Unobserved variables in participation}
Choice parameter $\beta_{\mathrm{ptp}}$ represents the value drivers attach to income as opposed to other variables in participation decisions. A low $\beta_{\mathrm{ptp}}$ indicates that drivers supply labour to the platform more opportunistically, potentially working one day but not the next even when the income they anticipate is the same. Our results show that while $\beta_{\mathrm{ptp}}$ has a limited effect on the average expected income of drivers registered with a platform (Figure \ref{fig:ptcp_beta-six}a), there is a clear difference in the average actual income generated by participating drivers (Figure \ref{fig:ptcp_beta-six}b). The reason for this discrepancy is that in the scenario with the highest value for $\beta_{\mathrm{ptp}}$ (scenario \textit{IV100}), despite a slightly higher average expected income, on average approximately 40 fewer drivers actually decide to participate compared to the scenario with lowest $\beta_{\mathrm{ptp}}$ (Figure \ref{fig:ptcp_beta-six}c). Figure \ref{fig:ptcp_beta-six}d provides an explanation for this phenomenon. With expected income as the dominant variable for participation when $\beta_{\mathrm{ptp}}$ is high, a driver that expects to make an income just below their reservation wage is relatively unlikely to participate, and consequently, also to update their income expectation based on new, potentially more positive, experiences. In this scenario, drivers confronted with a negative driving experience are therefore less likely to participate thereafter compared to scenarios with a lower value of $\beta_{\mathrm{ptp}}$, resulting in a large group of 'dissatisfied' drivers with an income just below the reservation wage, but ultimately also in a (relatively small) group of drivers profiting from the lack of competition when it comes to serving rides.
Participating drivers in this scenario earn on average approximately 15\% more than their reservation wage, compared to 10\% less in the scenario with a $\beta_{\mathrm{ptp}}$ of 0.05. The average waiting time for travellers is, however, also highest in this scenario (Figure \ref{fig:ptcp_beta-six}f).
Due to slightly higher expected earnings when income is the dominant factor in the participation decision, more unregistered drivers decide to sign up in later phases of the transition process (Figure \ref{fig:ptcp_beta-six}e) compared to scenarios in which $\beta_{\mathrm{ptp}}$ is low. Hence, the equilibrium market state is achieved more quickly when drivers attribute more value to variables other than income.
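The mechanism can be illustrated with a simple binary-logit participation probability (an assumed simplified form; the full participation submodel contains additional terms):

```python
import math

# Simplified sketch of how beta_ptp shapes daily participation (assumed
# binary-logit form; the full participation submodel has more variables).
def p_participate(expected_income, reservation_wage, beta_ptp):
    """Higher beta_ptp: income dominates; lower beta_ptp: opportunistic."""
    return 1.0 / (1.0 + math.exp(-beta_ptp * (expected_income - reservation_wage)))

# A driver expecting 5 euros below their reservation wage of 100:
for beta in (0.05, 1.0):
    print(beta, round(p_participate(95.0, 100.0, beta), 3))
```

With $\beta_{\mathrm{ptp}} = 0.05$ this driver still participates on roughly 44\% of days and keeps updating their expectation, whereas with $\beta_{\mathrm{ptp}} = 1.0$ they participate on fewer than 1\% of days, so their pessimistic belief effectively freezes.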
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/ptcp_beta_six_fig.jpg}
\caption{The effect of the valuation of income in participation choice on (a) the evolution of the average expected income of registered drivers as ratio of their reservation wage, (b) the evolution of average experienced income of participating drivers as ratio of their reservation wage, (c) the evolution of daily participation volumes, (d) the probability density function of expected income (as ratio of their reservation wage) for registered drivers, (e) the evolution of the total number of registered drivers and (f) the evolution of the average waiting time for pick-up for travellers}\label{fig:ptcp_beta-six}
\end{figure}
To summarize, if we assume that income is not the sole explanatory variable for participation, in line with what is suggested by early research on labour supply of ridesourcing drivers \cite{Chen2019value}, the average income for participating drivers in a ridesourcing system is likely to turn out relatively low, since every day a portion of drivers is willing to participate for a wage below their reservation wage, increasing competition for supply in the system. This implies that, in such a scenario, the ridesourcing service may be valuable for drivers wishing to supply labour flexibly, utilising the service for example only on days without planned activities or other work, but less so for drivers using the platform as a replacement for a full-time job.
\subsection{Platform policies}
We observe that a lower commission allows for higher earnings in early transition phases (Figure \ref{fig:comm_fee-six}a), which convinces more potential drivers to register in this time frame (Figure \ref{fig:comm_fee-six}e). After increased supply-side competition has brought earnings down, the number of new registrations slows down in all scenarios. In scenarios in which initially many drivers register, the relative increase in the size of the pool of registered drivers is lower than in scenarios in which fewer drivers register. This means that these markets end up in an equilibrium state more quickly. This trend, however, applies only up to a certain point. When commissions are increased further, corresponding to a commission rate of 55\% in the experiment, hardly any drivers will register at all. In that case, the market equilibrium is achieved very quickly.
Interestingly, we find that the expected income of drivers in equilibrium is hardly affected by the commission fee that is charged by the platform (Figure \ref{fig:comm_fee-six}a). A commission rate of 55\% (scenario \textit{CF55}) yields an expected driver income which is not more than 10\% lower than when the commission rate is set to only 5\%. Ridesourcing users, on the other hand, can be strongly affected by the platform commission rate. The additional inconvenience is fairly limited when the commission rate is set to 35\% as opposed to 5\%, inducing an average additional waiting time of less than one minute. However, with a commission rate of 45\% or 55\%, a part of the requests needs to be rejected and the waiting time of the remaining travellers is significantly longer (Figure \ref{fig:comm_fee-six}b, \ref{fig:comm_fee-six}c). In fact, when the commission rate is 55\%, only 20\% of requests can be satisfied in equilibrium. Figures \ref{fig:comm_fee-six}d and \ref{fig:comm_fee-six}e demonstrate that supply adjusts itself to the commission rate that is in effect, which provides an explanation for why income levels are largely unaffected, while the level of service for an average ride strongly deteriorates. In this particular experiment, a commission rate of 45\% appeared to be the optimal strategy for the ridesourcing provider, generating approximately 8,000 euros per day in equilibrium (Figure \ref{fig:comm_fee-six}f).
These findings demonstrate that, profit-wise, the collection of a higher share per request may outweigh revenue loss from not being able to serve all incoming requests. This implies that profit maximisation in ridesourcing provision may come at the expense of travellers, who are exposed to longer waiting times and a higher probability of being rejected altogether. Interestingly, ridesourcing drivers are hardly affected by strategic platform behaviour relating to the commission rate, since driver registration is slower when commission rates are high. At the same time, we observe that within a certain range, platform profit can be vastly improved without significantly affecting riders in the system. Our experiment shows that a non-optimal pricing strategy in terms of profit (in the experiment a commission rate of 35\% as opposed to 45\%), may result in near-optimal platform profit and driver income, with a much improved level of service for travellers. Thus, it might be worthwhile for authorities to consider regulating the commission rate while weighing its consequences for service affordability.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/comm_fee_six_fig.jpg}
\caption{The effect of platform commission rate on the evolution of (a) expected income of registered drivers as ratio of their reservation wage, (b) the share of requests that are satisfied, (c) the average waiting time for pick-up for travellers, (d) daily participation volumes, (e) the total number of registered drivers, and (f) daily platform profit}\label{fig:comm_fee-six}
\end{figure}
\subsection{Entry barriers}
The need for vehicle, insurance and medallion acquisition may prevent interested drivers from registering with a ridesourcing platform. In some markets, these factors are more prevalent than in others. We mimic markets with different registration regimes by varying registration cost parameter $C_d$. We find that when registration costs are high, indeed, significantly fewer drivers will register with a ridesourcing platform (Figure \ref{fig:reg-costs-six}a). The markets corresponding to these scenarios more quickly reach a state in which the number of new registrations is negligible in terms of its effect on the daily number of participating drivers.
We observe that the marginal decrease in registration volume when $C_d$ grows is especially large when registration costs are limited. In a scenario without registration costs (scenario \textit{RC0}), nearly 900 drivers register with the platform, compared to approximately 430 when registration costs add up to \euro20 per day, and just over 200 when the daily registration penalty amounts to \euro40. The consequence is that registration costs lead to reduced participation (Figure \ref{fig:reg-costs-six}b) and ultimately to a higher average experienced (Figure \ref{fig:reg-costs-six}c) and expected (Figure \ref{fig:reg-costs-six}d) income. Registration costs can thus be a crucial factor for whether drivers, on average, end up earning above or below the reservation wage. However, considering that registration costs need to be subtracted from the income of drivers, a scenario with $C_d$ equal to 40 still turns out to be least favourable for drivers, as demonstrated by Figure \ref{fig:reg-costs-six}e. In this scenario drivers that participate earn back on average 75\% of their total costs (including the cost of participation and registration), compared to 88\% when registration does not bear any costs and the total costs are made up of the reservation wage (and operational costs). This, however, considers only the income of participating drivers. It should not be forgotten that also registered drivers that do not participate on a given day end up with a negative daily profit due to their capital registration costs. In case they cannot easily discard their registration costs, for example by selling their car, it might still be their best option to keep participating, even when this results in a negative net income. Due to reduced supply, travellers may also be worse off when a ridesourcing service comes with high registration barriers for drivers (Figure \ref{fig:reg-costs-six}d), the extent of which likely depends on context-specific variables.
In our particular experiment, travel times are hardly affected by registration costs.
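The cost-recovery figures above follow from simple arithmetic. In the sketch below the daily income values are hypothetical, chosen only to reproduce the reported ratios, and operational costs are folded into the reservation-wage term:

```python
# Hypothetical numbers reproducing the reported cost-recovery ratios:
# the daily share of registration costs C_d is sunk and simply enlarges
# the cost base against which a driver's income is measured.
def recovery_ratio(daily_income, reservation_wage, daily_reg_cost):
    """Share of a driver's total daily cost recovered by their income."""
    return daily_income / (reservation_wage + daily_reg_cost)

print(round(recovery_ratio(88.0, 100.0, 0.0), 2),    # C_d = 0  -> 0.88
      round(recovery_ratio(105.0, 100.0, 40.0), 2))  # C_d = 40 -> 0.75
```

Note how a higher gross income (105 versus 88 in this hypothetical comparison) can still yield a lower recovery ratio once daily registration costs are accounted for.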
The results imply that ridesourcing providers, drivers and travellers may also suffer from high entry barriers for potential suppliers. Consequently, policies that aim at reducing the costs related to registration may be beneficial, for example offering affordable vehicle insurance deals to drivers.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/reg_costs_six_fig.jpg}
\caption{The effect of registration costs on the evolution of (a) the average expected income of registered drivers as ratio of their reservation wage, (b) the total number of registered drivers, (c) daily participation volumes, (d) average experienced income of participating drivers as ratio of their reservation wage, (e) average experienced income of participating drivers as ratio of the sum of their reservation wage and daily share of registration costs and (f) the average waiting time for pick-up for travellers}\label{fig:reg-costs-six}
\end{figure}
\subsection{System optimum supply and user equilibrium solutions}
In this section, we elaborate on the social optimality of a decentralised ridesourcing supply and discuss the implications for how regulation should be designed to safeguard the interests of different stakeholders in the process. The user-equilibrium solution obtained from our model is compared with the system-optimum supply level obtained from a brute-force search for the optimal fleet size. Figure \ref{fig:soue-four}a shows the profit of a ridesourcing platform for different participation levels. Next to the typical ridesourcing scenario in which self-employed drivers get paid based on the rides they serve, we consider an alternative scenario in which drivers, instead, earn a guaranteed hourly wage, while also getting their operational costs reimbursed. Comparing platform profit in both scenarios, we observe a major difference in the financial consequences of oversupply for the service provider. In the typical ridesourcing scenario with fare-based payouts, oversupply does not induce additional costs, because ridesourcing providers pay drivers based on served demand, not participation. In the event of an abrupt market contraction (e.g. a pandemic), for example, ridesourcing providers, unlike service providers with employed drivers, benefit from reduced driver payouts that partially offset the lower earnings from fares. Hence, a consequence of transaction-based driver payments is that, in contrast to more traditional transit providers paying drivers based on the number of hours worked, ridesourcing providers lack an incentive to curb their supply. In fact, as Figure \ref{fig:soue-four}b shows, they can benefit from oversupply as it leads to lower travel times for travellers, and thus, potentially, increased demand. These benefits are, however, relatively limited after supply reaches a specific point, which appears to be the minimum supply for which (nearly) all requests can be served.
More supply will result in more efficient matches between drivers and travellers, yet yielding a minor effect on the travel times for riders in the system.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/social_optimality_four_fig.jpg}
\caption{Optimality of supply in a system with fare-based driver payouts (assuming a platform commission rate of 25\%) and a system with wage-based driver payouts, for (a) the service provider, aiming at maximum profit, (b) the traveller union, minimising costs from waiting and rejected requests, (c) the driver union, maximising driver earnings over the reservation wage, and (d) an authority that evaluates the three previous objectives equally, maximising the summed net value}\label{fig:soue-four}
\end{figure}
Figure \ref{fig:soue-four}a also shows that fare-based driver payments are not necessarily optimal for the service provider. In the presented scenario, the service provider would actually be better off paying an hourly wage to a relatively limited number of drivers, thereby earning the full share of ride fares, than allowing self-employed drivers to collect these fares in return for a fee. It should be noted that this particular example does not consider that employed drivers may be entitled to social benefits and that drivers may not be willing to work for the minimum wage.
When taking the driver perspective, we find that the optimal fleet size in the fare-based scenario is relatively low (Figure \ref{fig:soue-four}c), peaking between 40 and 100 participating drivers. If supply is even lower, a lot of potential income is lost due to rejected requests; if it is higher, excessive competition leads to incomes below the reservation wage, and consequently, dissatisfied drivers. For supply volumes over 120 the total driver surplus is in fact negative. Yet, remarkably, in the reference scenarios of our experiment with decentralised supply, we find average daily participation volumes of approximately 150 drivers in equilibrium. This demonstrates that in the ridesourcing market the notion of 'the tragedy of the commons' may apply, in which the self-interested labour decisions of individual drivers lead to a suboptimal result for the whole group: excessive competition for rides and ultimately low payouts.
If we consider a society in which the societal value of a single monetary unit is independent of the party that it is assigned to, i.e. an extra profit of one euro for the platform or a single driver has the same value as a travel cost saving of one euro for one traveller, we find that the optimal ridesourcing fleet size for our particular experiment is 100 drivers, as illustrated by the total net value sketched in Figure \ref{fig:soue-four}d. Lower supply levels are undesired from the platform's and travellers' perspective, while higher supply leads to a significantly deteriorated driver income with only a very limited benefit for travellers. The social optimum in this case is thus considerably lower than the user equilibrium, which depicts the potential value of supply caps in ridesourcing markets. Although ridesourcing providers are typically reluctant to accept the implementation of supply caps, our analysis illustrates that their negative effect on rider level of service and ultimately platform profit may be very limited, especially in a saturated market. In this particular case, a reduction of supply from 300 to 100 drivers only induces a single minute of extra waiting time per request.
We note that the socially optimal fleet size for ridesourcing services is equal to that of a transit service with employed drivers, because the objective function for the net total value ultimately contains the same elements: revenue from fares, operational costs and labour participation costs. The only difference is the distribution of those over different stakeholders. If a society indeed considers a single monetary unit equally valuable to all stakeholders, it can thus be stated that only the fleet size of an on-demand transit service matters from a societal perspective, not whether drivers are paid for participating or based on the travel requests they satisfied.
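The brute-force search over fleet sizes can be sketched as follows. All functional forms and numbers are illustrative assumptions rather than our calibrated simulator; note that the commission is a pure transfer between platform and drivers and therefore drops out of the net total value:

```python
# Stylised brute-force search for the socially optimal fleet size.
# All parameter values below are illustrative assumptions.
DEMAND = 2000            # daily requests
CAPACITY = 20            # rides a driver can serve per day
FARE = 10.0              # fare per ride, paid by travellers
LABOUR = 120.0           # reservation wage plus operational cost per driver
REJECT_PENALTY = 5.0     # travellers' cost per unserved request
WAIT_COST = 3000.0       # waiting-cost scale, falling with fleet size

def served(fleet):
    return min(DEMAND, CAPACITY * fleet)

def net_total_value(fleet):
    # Fare revenue minus labour and traveller costs; the commission is an
    # internal transfer between platform and drivers and cancels out.
    revenue = FARE * served(fleet)
    labour = LABOUR * fleet
    traveller_cost = WAIT_COST / (1 + fleet) + REJECT_PENALTY * (DEMAND - served(fleet))
    return revenue - labour - traveller_cost

best = max(range(1, 301), key=net_total_value)
print(best)  # -> 100 in this toy setting
```

In this toy setting the social optimum is the fleet that just saturates demand: below it, rejection costs dominate; above it, extra labour cost buys only marginal waiting-time gains.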
\subsection{Model sensitivity}
\subsubsection{Learning}
Learning parameter $\omega$ indicates how drivers value recent experiences compared to preceding experiences over time. A low value of $\omega$ corresponds to a situation in which drivers assign a relatively high value to their recent experience (see Equation \ref{eq:learning-2}), for example because they believe old experiences are not representative of the present state of the system or because they cannot perfectly memorize their income from previous days. In contrast, if $\omega$ goes to infinity, drivers' expected income equals their average experienced income. In this study, we assumed $\omega$ to be equal to 5, indicating that the weight of new experiences decreases to 0.2 within 5 days, and stays constant thereafter. To establish to what extent the results presented in this section are specific to the learning parameter, we have repeated the experiment for the reference scenarios, while varying the value of the learning parameter $\omega$ between 3 and 100.
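A plausible reading of this learning rule (a reconstruction consistent with the description above, not a verbatim copy of Equation \ref{eq:learning-2}) is a running average over the first $\omega$ experiences followed by constant-weight smoothing:

```python
# Assumed reconstruction of the learning rule described above: the weight
# of the newest experience is 1/min(n, omega), i.e. a plain running
# average until the omega-th experience and constant-weight exponential
# smoothing thereafter.
def update_expectation(expected, experienced, n_experiences, omega=5):
    w = 1.0 / min(n_experiences, omega)
    return (1 - w) * expected + w * experienced

# Four good days followed by one bad day: a small omega overreacts to the
# most recent experience, a large omega tracks the long-run average.
incomes = [100.0, 100.0, 100.0, 100.0, 60.0]
exp_small = exp_large = 100.0
for day, income in enumerate(incomes, start=1):
    exp_small = update_expectation(exp_small, income, day, omega=3)
    exp_large = update_expectation(exp_large, income, day, omega=50)
print(round(exp_small, 1), round(exp_large, 1))  # -> 86.7 92.0
```

With large $\omega$ the belief equals the plain average of all five experiences (92), while with $\omega = 3$ the single bad day pulls the belief markedly lower (86.7), illustrating the 'overreaction' discussed below.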
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/omega_six_fig.jpg}
\caption{The effect of the rate of learning on (a) the evolution of the average expected income of registered drivers as ratio of their reservation wage, (b) the probability density function of expected income (as ratio of their reservation wage) for registered drivers, (c) the evolution of daily participation volumes, (d) the evolution of average experienced income of participating drivers as ratio of their reservation wage, (e) the evolution of the average waiting time for pick-up for travellers and (f) the evolution of the total number of registered drivers}\label{fig:omega-six}
\end{figure}
We find that $\omega$ has a limited effect on ridesourcing provision. One of the notable differences is that, although the mean expected income in equilibrium is unaffected by $\omega$ (Figure \ref{fig:omega-six}a), the distribution of expected income over registered drivers differs (Figure \ref{fig:omega-six}b). This can be explained by the fact that when $\omega$ is small, drivers are more likely to 'overreact' to a single negative experience, resulting in a pool of 'unsatisfied' drivers with expected income levels significantly below the reservation wage. These drivers will not be tempted to participate again, limiting participation on the platform (Figure \ref{fig:omega-six}c) and driving up the experienced income (Figure \ref{fig:omega-six}d) and ultimately the expected income of the other registered drivers (Figure \ref{fig:omega-six}b). It also results in a minor difference in the average waiting time for travellers (Figure \ref{fig:omega-six}e). Moreover, we establish that registration volumes slightly diverge in an early stage of adoption (Figure \ref{fig:omega-six}f). The reason is that when $\omega$ is small, drivers more quickly observe that earnings are dropping (Figure \ref{fig:omega-six}d), which they communicate to drivers that have not yet registered. Nevertheless, the effect of $\omega$ was found to be limited and we do not expect a major impact on the main findings regarding dynamics in ridesourcing supply.
\subsubsection{Information diffusion}
This study considers that drivers need to become aware of the existence of a ridesourcing service before they can supply labour to it. To this end, we introduce an information diffusion process with a transmission rate $\beta_{\mathrm{inf}}$ of 0.2. Lacking empirical evidence on the specification of the information diffusion process, we need to test whether our findings also apply under different diffusion settings. We test four alternative values for $\beta_{\mathrm{inf}}$, ranging between 0.05 and 1.0. We find that, given a value for $\beta_{\mathrm{inf}}$ that allows (nearly) all drivers to be informed at the end of the simulation (Figure \ref{fig:inf-beta-six}a), the specification of the diffusion process has hardly any effect on labour supply in equilibrium. The different scenarios for $\beta_{\mathrm{inf}}$ converge to the same participation volume (Figure \ref{fig:inf-beta-six}b), with a similar average expected income (Figure \ref{fig:inf-beta-six}c), average waiting time for travellers (Figure \ref{fig:inf-beta-six}d) and service rate (Figure \ref{fig:inf-beta-six}e), which demonstrates the generalisability of the results with respect to the value of $\beta_{\mathrm{inf}}$. Although the indicators are similar in equilibrium, we note clear differences in the adoption process. When $\beta_{\mathrm{inf}}$ is high, many drivers become aware of the service at the same time. In an early phase of adoption (phase 1 as introduced in Subsection \ref{subsect:phases}), when there are few drivers supplying labour to the platform and income levels are high, this leads to a big registration peak (Figure \ref{fig:inf-beta-six}f) and excessive participation, with relatively low driver incomes and limited waiting times for travellers. In scenarios with a lower information transmission rate we do not observe such a peak in participation, but rather a steady increase towards the equilibrium value.
As a consequence, in scenarios in which communication about innovations takes place slowly, it takes longer before the level of service reaches a satisfactory level, with the large majority of rides accepted and a relatively low average waiting time for travellers.
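The qualitative effect of $\beta_{\mathrm{inf}}$ can be sketched with a deterministic SI-type contact process (an assumed simplification of the agent-based diffusion in Equation \ref{eq:inf-diff}):

```python
# Deterministic SI-style sketch of information diffusion (an assumed
# simplification of the agent-based process in the model): new awareness
# is proportional to contacts between informed and uninformed drivers.
def diffuse(pool, beta_inf, days, informed0=10.0):
    informed, history = informed0, [informed0]
    for _ in range(days):
        new = beta_inf * informed * (pool - informed) / pool
        informed = min(pool, informed + new)
        history.append(informed)
    return history

fast = diffuse(pool=600, beta_inf=1.0, days=200)
slow = diffuse(pool=600, beta_inf=0.05, days=200)
# High beta_inf saturates awareness within roughly ten days; low beta_inf
# yields a slow S-curve that still approaches the same level eventually.
print(round(fast[10]), round(slow[10]), round(slow[200]))
```

This reproduces the pattern above: both settings end at (nearly) full awareness, but the high-rate scenario informs almost everyone at once, which is what triggers the early registration peak.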
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{Figures/inf_beta_six_fig.jpg}
\caption{The effect of the information transmission rate on the evolution of (a) the number of informed agents, (b) daily participation volumes, (c) the average expected income of registered drivers as ratio of their reservation wage, (d) the average waiting time for pick-up for travellers, (e) platform profit and (f) the total number of registered drivers}\label{fig:inf-beta-six}
\end{figure}
\section{Conclusions}
\subsection{Study significance}
This study is pioneering in analysing the dynamics of (decentralised) ridesourcing supply while accounting for labour supply decisions considering both long-term platform registration and short-term participation. Our platform registration submodel considers that registration requires information about earnings, and that it comes with one-off registration costs like insurance and vehicle acquisition, which are sunk in subsequent participation decisions. With a probabilistic participation choice submodel, we account for unobserved variables in the decision to work on a given day, like planned activities for this particular day. The model is applied to the case of Amsterdam in order to investigate the effect of supply market properties, platform pricing and supply-side entry barriers on the evolution of ridesourcing supply. In addition, we comment on the optimality of decentralised ridesourcing supply from the perspectives of drivers, travellers and service provider, based on an exhaustive search.
The results demonstrate that labour supply in ridesourcing may be non-linear and undergo several transitions, thereby inducing significant variations in average income, profit and level of service. This highlights the need for models capturing dynamic interactions in ridesourcing provision, such as the one presented in this work.
\subsection{Key findings}
\paragraph*{Fleet size.}
We find that in a decentralised system, as long as drivers earn a competitive income and not all potential drivers are registered yet, new suppliers are attracted to the market at a relatively high pace. For the base scenario of our experiment, this phenomenon results in an equilibrium participation volume of 150 drivers. With this level of supply, there is relatively strong competition for pick-ups, resulting in payouts below drivers' reservation wages. Instead, for the community of (potential) ridesourcing drivers in our experiment, a fleet size of 40--100 drivers is considered to be optimal. Such a solution implies that the smaller number of participating drivers each earn a significantly higher income. The above findings demonstrate that the tragedy of the commons may apply in ridesourcing provision, in which the self-centred labour decisions of individuals ultimately harm the common interests of the group.
Unlike traditional transit providers with employed drivers, ridesourcing providers lack a direct financial incentive to curb supply. Our results demonstrate however that there may be an alternative balancing loop in ridesourcing supply, i.e. profit-maximising service providers may be best off claiming a relatively high rate on fares collected through their platforms, even when this means that fewer drivers will participate and, consequently, that a portion of the travel requests has to be rejected. In our experiment, in equilibrium approximately 60 drivers participate when a platform opts for a profit-maximising commission rate of 45\%, compared to 180 drivers when the commission rate is 25\%. This results in a decline in the probability that a request can be matched from 100\% to 85\%, and in an increase in the average waiting time from 2 to 6 minutes. Remarkably, average driver earnings in the experiment are hardly affected by the commission rate of the platform. The rationale here is that the influx of new drivers on the platform is limited when the commission rate is high. This implies that registration barriers may mitigate the tragedy of the commons in ridesourcing supply.
\paragraph*{Labour market effect.}
The expected income is especially low when the average reservation wage is low. In this case, drivers are relatively quick to register, leading to fierce competition and ultimately a decreasing income for those already registered. Freelance workers in the market will thus suffer from a shrinking economy in which other labour opportunities are scarce. The same applies to a labour market with large inequalities, in which ridesourcing services are flooded with drivers who have limited opportunities in the market, and are willing to work even when earnings are low.
\subsection{Policy implications}
\paragraph*{Supply regulation.}
Similarly to the results of the semi-dynamic model by \cite{yu2020balancing}, our findings provide support for the potential effectiveness of a supply cap, which has for example been implemented in New York City. It may push earnings over the reservation wage without significantly impeding travellers' waiting times. At the same time, our results show that the value to which the cap is set is crucial. For instance, in our experiment, supply caps of 400 drivers or more yield no effect on driver income. On the other hand, we find that when supply caps are too restrictive, they may be detrimental to the level of service offered by the platform. This is in line with the results of the queuing theoretic equilibrium model formulated by \cite{li2019regulating}, demonstrating that a supply cap can lead to reduced driver earnings when too many consumers leave the market. In any case, given that capital registration costs jeopardise the income of drivers, transit authorities should avoid supply cap schemes that impose an additional cost on operating under the cap.
\paragraph*{Pricing strategy.}
A profit-maximising platform will increase its commission rate up to the point that so many drivers opt out that lost commission from rejected requests outweighs the higher revenue on remaining requests. Our experiments demonstrate that at this point already a significant portion of ride requests may need to be rejected. In addition, we find that such a profit-maximising strategy may result in relatively long waiting times for travellers. These results suggest that the pricing strategy of a ridesourcing platform may need to be regulated. Our results in fact demonstrate that this may be highly beneficial from a societal standpoint, given that a near-optimal profit can be achieved with a significantly lower commission rate, yielding a much improved level of service for travellers. This confirms earlier findings based on an analytical economic model by \cite{zha2016economic} regarding the effectiveness of regulation of the commission in increasing the social welfare generated by ridesourcing platforms.
\subsection{Future research}
In this study, we focus on supply evolution in order to understand its dynamics and describe emerging phenomena, which can be further embedded in models of co-evolution of supply and demand. An interesting direction for future research is the extent to which outcomes of a monopolistic market are also applicable to markets in which service providers compete for supply and demand. For example, future research may consider how supply evolution is affected by aggressive penetration pricing strategies aimed at pushing other service providers out of the market. It may also be interesting to analyse how external shocks to the market lead to swings in the transition process. Our model can be extended to study supply evolution of ridesourcing services offering pooled rides, which will affect the income of participating drivers. In essence, our approach with a day-to-day shell and a core capturing within-day dynamics allows analysing ridesourcing supply evolution under various operational within-day strategies.
As a concluding remark, we stress the need for more empirical evidence on labour supply by ridesourcing drivers, as model input - based on cross-sectional data - and for validation of the results - based on longitudinal data. Enhancing the empirical underpinning of labour supply behaviour by (potential) ridesourcing drivers will support the specification of a simulation framework like the one presented here and thereby help significantly improve our knowledge of the implications of ridesourcing for drivers, travellers, platforms and society at large.
\section*{Acknowledgement(s)}
Preliminary versions of this paper have been accepted for presentation at the 2020 hEART conference in Lyon and the 2021 TRB Annual Meeting in Washington D.C.
\section*{Funding}
This work was supported by the European Research Council under Grant 804469 and by the Amsterdam Institute for Advanced Metropolitan Solutions.
\bibliographystyle{tfcad}
\section{Introduction\label{Intro}}
Consider the so-called Fisher-Kolmogorov-Petrowskii-Piskunov (FKPP) equation,
with all constants equal to 1, which is always possible by a suitable rescaling:
\begin{equation}
\frac{\partial u}{\partial t}=\Delta u+u\left( 1-u\right) ,\qquad
u|_{t=0}=u_{0}. \label{FKPP}
\end{equation}
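For intuition (an illustrative sketch of ours, not part of the analysis below), a standard explicit finite-difference discretisation in $d=1$ exhibits the two key features of the FKPP equation: the comparison principle keeps $0\le u\le 1$ for initial data in $[0,1]$, and a front invades the unstable state $u=0$ at a speed close to the minimal wave speed $2$:

```python
import numpy as np

def fkpp_step(u, dx, dt):
    """One explicit Euler step of u_t = u_xx + u(1 - u), zero-flux ends."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2       # Neumann boundary
    lap[-1] = (u[-2] - u[-1]) / dx**2
    return u + dt * (lap + u * (1 - u))

x = np.linspace(0.0, 50.0, 501)
dx = x[1] - x[0]                     # dx = 0.1; need dt < dx^2/2 for stability
u = (x < 5.0).astype(float)          # step initial datum, 0 <= u0 <= 1
for _ in range(5000):                # integrate up to t = 10
    u = fkpp_step(u, dx, dt=0.002)
front = x[np.argmin(np.abs(u - 0.5))]   # approximate front position at t = 10
```

After time $t=10$ the front has advanced from $x=5$ to roughly $x\approx 20$, consistent (up to the well-known logarithmic delay) with the minimal speed $2$.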
This is a paradigm of equations arising in biology and other fields. For
instance, in the mathematical description of cancer growth, although being too
simplified to capture several features of true tumors, it may serve to explore
mathematical features of diffusion and proliferation. In such applications, it
describes a density of cancer cells which diffuse and proliferate with
proliferation modulated by the density itself, such that, starting with an
initial density $0\leq u_{0}\leq1$, the growth due to proliferation cannot
exceed the threshold $1$. Having in mind this example, it is natural to expect
that this equation is the macroscopic limit of a system of microscopic
particles, like cancer cells, which are subject to proliferation. To be
biologically realistic, we have to require that the proliferation rate is not
uniform among particles but depends on the concentration of particles:
wherever particles are more concentrated, there is less space and more
competition for nutrients, which slows down proliferation. We prove a result
of convergence of such kind of proliferation particle systems - as described
in detail in section \ref{section microscopic} below - to the FKPP equation. A
key point of the microscopic model that should be known in advance, to
understand this introduction, is that the proliferation rate of particle
``$a$'' (see below the meaning of this index) is given by the random
time-dependent rate
\begin{equation}
\lambda_{t}^{a,N}=\left( 1-\left( \theta_{N}\ast S_{t}^{N}\right) \left(
X_{t}^{a,N}\right) \right) ^{+}, \label{definition lambda}
\end{equation}
where $N$ is the number of initial particles, $X_{t}^{a,N}$ is the position of
particle ``$a$'', $S_{t}^{N}$ is the empirical measure, $\theta_{N}$ is a
family of smooth mollifiers - hence $\theta_{N}\ast S_{t}^{N}$ is a smoothed
version of the empirical density. Formula (\ref{definition lambda}) quantifies
the fact that proliferation is slower when the empirical measure is more
concentrated, and stops above a threshold. Since there is no reason why the
mollified empirical measure $\theta_{N}\ast S_{t}^{N}$ should be smaller than one, we
have to cut it with the positive part in (\ref{definition lambda}). Hence,
initially the limit PDE will have the proliferation term $u\left( 1-u\right)
^{+}$, which is meaningful also for $u>1$, but by a uniqueness result, the
term reduces to $u\left( 1-u\right) $ when $0\leq u_{0}\leq1$.
The final result is natural and expected but there is a technical difficulty
which, in our opinion, is not sufficiently clarified in the literature. The
proof of convergence of the particle system to the PDE relies on the tightness
of the empirical measure and a passage to the limit in the identity satisfied
by the empirical measure. This identity includes the nonlinear term
\[
\left\langle \left( 1-\theta_{N}\ast S_{t}^{N}\right)^{+}S_{t}^{N},\phi\right\rangle
\]
where $\phi$ is a smooth test function. Since $S_{t}^{N}$ converges only
weakly, it is required that $\theta_{N}\ast S_{t}^{N}$ \textit{converges
uniformly}, in the space variable, in order to pass to the limit. Perhaps in
special cases one can resort to ad hoc tricks, but the question of uniform
convergence is natural in this problem and of independent interest, hence we
investigate when it holds true.
Following the proposal of K. Oelschl\"{a}ger \cite{Oel1}, \cite{Oel2}, we
assume
\begin{equation}
\theta_{N}\left( x\right) =N^{\beta}\theta\left( N^{\beta/d}x\right) .
\label{theta N Oel}
\end{equation}
Recall that the case $\beta=0$ is the mean field one (long range interaction),
the case $\beta=1$ corresponds to local (like nearest neighbor) interactions,
while the case $0<\beta<1$ corresponds to an intermediate regime, called
``moderate'' by \cite{Oel1}. Our main result is that uniform convergence of
$\theta_{N}\ast S_{t}^{N}$ to $u$ holds under the condition
\[
\beta<\frac{1}{2}.
\]
Besides our main result, Theorem \ref{Thm 1}, see also Appendix
\ref{appendix B}, where we show that this condition also arises in other proofs
of uniform convergence. We believe this condition is sharp for uniform
convergence.
A second motivation for the analysis of uniform convergence, besides the
problem of passage to the limit in the nonlinear term outlined above, is the
question whether a ``front'' of microscopic particles which moves due to
proliferation approximates the traveling waves of FKPP equation. Results in
this direction seem to be related to uniform convergence of mollified
empirical measure but they require also several other ingredients and go
beyond the scope of the present paper, hence they are not discussed here.
\subsection{Comparison with related problems and results}
First, let us clarify that the problem treated here is more correct, and more
difficult, than a two-step approach which does not clarify the true relation
between the particle system and the PDE, although it gives a plausible
indication of the link. The two-step approach first freezes the parameter in
the mollifier, namely it treats particles proliferating with rate
\[
\lambda_{t}^{a,N_{0},N}=\left( 1-\left( \theta_{N_{0}}\ast S_{t}^{N_{0},N}\right) \left( X_{t}^{a,N_{0},N}\right) \right)^{+}
\]
and proves that $S_{t}^{N_{0},N}$ weakly converges as $N\rightarrow\infty$, to
the solution $u_{N_{0}}$ of the following equation with non-local
proliferation
\begin{equation}
\frac{\partial u_{N_{0}}}{\partial t}=\Delta u_{N_{0}}+u_{N_{0}}\left(
1-\theta_{N_{0}}\ast u_{N_{0}}\right) ^{+}. \label{mollified FKPP}
\end{equation}
The second step consists in proving that $u_{N_{0}}$ converges to the solution
$u$ of the FKPP equation. The link between the particle system $X_{t
^{a,N_{0},N}$ and the solution $u$ of the FKPP equation is only conjectured by
this approach. In principle the conjecture could be even wrong. Take a system
of particle interactions with short range couplings, where the two-steps
approach leads to the porous media equation with the non-linearity $\Delta
u^{2}$ (see \cite{Phi}). But a direct link between the particle system and the
limit PDE (the so-called hydrodynamic limit problem) leads to a non-linearity
of the form $\Delta f\left( u\right) $ where $f\left( u\right) $ is not
necessarily $u^{2}$ (see \cite{Va}, \cite{Uc}). For a proof of the
\textit{mean field} result of convergence of $S_{t}^{N_{0},N}$ to $u_{N_{0}}$
as $N\rightarrow\infty$, see for instance \cite{ChMeleard}, \cite{FlaLeim}.
The issue of uniform convergence of $\theta_{N}\ast S_{t}^{N}$ to $u$ does not
arise and weak convergence of the measures $S_{t}^{N_{0},N}$ is sufficient.
Going back to the problem with the rates (\ref{definition lambda}), K.
Oelschl\"{a}ger's papers \cite{Oel1}, \cite{Oel2} have been our main source of
inspiration. Our aim in the present work is to clarify a result of
convergence in the case of diffusion and proliferation, under assumptions
comparable to those of \cite{Oel1}, \cite{Oel2}, but possibly with some
additional degree of generality and with a new proof.
We have extended the assumption $\beta<\frac{d}{\left( d+1\right) \left(
d+2\right) }$ and removed the restriction $V=W\ast W$ of \cite{Oel2} and,
hopefully, we have given a modern proof which clarifies certain issues of the
tightness and the convergence problem. Concerning extensions of the range of
$\beta$, maybe there are other directions, as remarked in \cite{Oel2}, page
575; our specific extension is however motivated not only by the generality
but also by the property of uniform convergence (not proved in \cite{Oel2}),
which seems relevant in itself.
Other interesting works related to the problem of particle approximation of
FKPP equation are \cite{Meleard}, \cite{MelRoelly}, \cite{Met}, \cite{Nappo},
\cite{Ste} and \cite{Baker}, \cite{BoVe} from the more applied literature. For
the FKPP limit of discrete lattice systems, even the more difficult question
of the hydrodynamic limit has been solved, see \cite{DeMFerrLeb} with
completely local interaction, but the analogous problem for diffusions is more
difficult and has not been solved yet.
To solve the problem of uniform convergence, we propose a new approach, by
semigroup theory. Traces of this approach can be found in \cite{Met} and
\cite{ChMeleard}, but there they were used for other purposes. In the work
\cite{FlaLeim} it is remarked that uniform convergence can be obtained as a
by-product of energy inequalities and Sobolev convergence, under the
assumption $\beta<\frac{d}{d+2}$, but only in dimension $d=1$, where the
condition is more restrictive than $\beta<1/2$.
The approach extends to other models, in particular with interactions. With
the same technique, under appropriate assumptions on the convolution kernels
$\theta_{N}$ below, we may recover a result, under different assumptions, of
\cite{Oel1}, where the macroscopic PDE has the form
\[
\frac{\partial u}{\partial t}=\Delta u-\operatorname{div}\left( uF\left(
u\right) \right) +u\left( 1-u\right) ,\qquad u|_{t=0}=u_{0}
\]
and $F$ is a local nonlinear function, not a non-local operator as in mean
field theories.
Let us insist on the fact that our proliferation rate is natural from the
viewpoint of biology. It is very different from the constant rate used in the
probabilistic formulae of McKean and others to represent solutions of the
FKPP equation; these formulae are of interest for several reasons but do not
have a biological meaning, since a constant proliferation rate would lead to
exponential blow-up of the number of particles. Constant rates do not pose the
difficulties described above in taking the limit in the nonlinear term.
Approximation of these representation formulae by finite systems therefore
poses different problems. For this and other directions, different from ours,
see \cite{McKean}, \cite{RegnierTalay} and references therein.
\subsection{The microscopic model\label{section microscopic}}
We consider a particle system on a filtered probability space $\left(
\Omega,\mathcal{F},\mathcal{F}_{t},P\right) $ with $N\in\mathbb{N}$ initial
particles. We label particles by $a\in\Lambda^{N}$, where
\[
\Lambda^{N}=\left\{ \left( k,i_{1},...,i_{n}\right) \colon i_{1},...,i_{n}\in\left\{ 1,2\right\} ,k=1,...,N,n\in\mathbb{N}_{0}\right\}
\]
is the set of all particles. For a non-initial particle $a=\left(
k,i_{1},...,i_{n}\right) $ we denote its parent particle by $(a,-)=\left(
k,i_{1},...,i_{n-1}\right) $. Each particle has a lifetime, which is the
random time interval $I^{a,N}=[T_{0}^{a,N},T_{1}^{a,N})\subset\lbrack
0,\infty)$, where $T_{0}^{a,N},T_{1}^{a,N}$ are $\mathcal{F}_{t}$-stopping
times. We have $T_{0}^{a,N}=0$ for initial particles $a=(k)$, $k=1,\dots,N$
and $T_{0}^{a,N}=T_{1}^{(a,-),N}$ for other particles. The time $T_{1}^{a,N}$
at which a particle dies and splits into two (we call this a proliferation
event) is described more precisely below.\newline Particles are born at the
position their parent died, i.e. $X_{T_{0}^{a,N}}^{a,N}=X_{T_{1}^{\left(
a,-\right) ,N}}^{\left( a,-\right) ,N}$ with the convention $X_{T_{1}^{a,N}}^{a,N}:=\lim_{t\uparrow T_{1}^{a,N}}X_{t}^{a,N}$. During its lifetime
the position of $a\in\Lambda^{N}$, $X_{t}^{a,N}\in\mathbb{R}^{d}$, is given
by
\begin{equation}
dX_{t}^{a,N}=\sqrt{2}dB_{t}^{a} \label{initial SDE}
\end{equation}
where $B^{a}$ are independent Brownian motions in $\mathbb{R}^{d}$.\newline
Let $\Lambda_{t}^{N}$ denote the set of all particles alive at time $t$. We
define the empirical measure as
\[
S_{t}^{N}=\frac{1}{N}\sum_{a\in\Lambda_{t}^{N}}\delta_{X_{t}^{a,N}}.
\]
Take a family of standard Poisson processes $\left( \mathcal{N}^{0,a}\right)
_{a\in\Lambda^{N}}$ which is independent of the Brownian motion and the
initial condition $X_{0}^{(k),N}$, $k=1,\dots,N$. The branching time
$T_{1}^{a,N}$ of particle $a\in\Lambda^{N}$ is the first (and only) jump time
of $\mathcal{N}_{t}^{a,N}:=\mathcal{N}_{\Lambda_{t}^{a,N}}^{0,a}$, where
$\Lambda_{t}^{a,N}=\int_{0}^{t}1_{s\in I^{a,N}}\lambda_{s}^{a,N}ds$ and the
random rate $\lambda_{t}^{a,N}$ is given by
\[
\lambda_{t}^{a,N}=\left( 1-\left( \theta_{N}\ast S_{t}^{N}\right) \left(
X_{t}^{a,N}\right) \right) ^{+}
\]
where
\begin{equation}
\theta_{N}(x)=\epsilon_{N}^{-d}\theta\left( \epsilon_{N}^{-1}x\right)
\label{theta N eps}
\end{equation}
is a family of mollifiers with
\[
\epsilon_{N}=N^{-\frac{\beta}{d}}
\]
namely we assume (\ref{theta N Oel}).
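The particle system just described can be simulated directly; the following is an illustrative sketch with choices of ours, not the construction used in the proofs: $d=1$, a Gaussian choice of $\theta$, and the exact exponential clocks replaced by a small-$\Delta t$ Bernoulli approximation.

```python
import math
import random

def simulate(N=100, beta=0.4, T=0.5, dt=0.02, seed=1):
    """Branching Brownian particles with rate (1 - theta_N * S^N)^+."""
    rng = random.Random(seed)
    eps = N ** (-beta)                      # eps_N = N^{-beta/d} with d = 1
    xs = [rng.gauss(0.0, 1.0) for _ in range(N)]

    def h(x, xs):                           # (theta_N * S_t^N)(x)
        c = 1.0 / (N * eps * math.sqrt(2.0 * math.pi))
        return c * sum(math.exp(-0.5 * ((x - y) / eps) ** 2) for y in xs)

    for _ in range(int(T / dt)):
        # Brownian increments with generator Delta: variance 2 dt
        xs = [x + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0) for x in xs]
        children = []
        for x in xs:
            lam = max(0.0, 1.0 - h(x, xs))  # proliferation rate
            if rng.random() < lam * dt:     # particle splits in two
                children.append(x)          # child born at parent's position
        xs.extend(children)
    return xs

pop = simulate()
```

Note that the empirical measure is normalised by the initial number $N$, so the mollified density $h$ exceeds the threshold where particles cluster, and proliferation slows down there, exactly the mechanism behind (\ref{definition lambda}).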
\subsection{Assumptions and main result\label{sect macroscopic limit}}
Throughout this paper we assume that
\begin{equation}
\beta\in(0,\frac{1}{2}) \label{assumption on eps beta}
\end{equation}
and that $\theta:\mathbb{R}^{d}\rightarrow\mathbb{R}$ is a probability density
of class
\begin{equation}
\theta\in W^{\alpha_{0},2}\left( \mathbb{R}^{d}\right) \text{ for some
}\alpha_{0}\in\left( \frac{d}{2},\frac{d(1-\beta)}{2\beta}\right]
\label{assumption on theta}
\end{equation}
(notice that, for $\beta>0$, the inequality $\frac{d}{2}<\frac{d(1-\beta)}{2\beta}$ is equivalent to $\beta<\frac{1}{2}$). The case $\beta=1$
corresponds to nearest neighbor (or contact) interaction: it is just the
natural scaling which prevents the kernel from being more concentrated than
the typical space occupied by a single particle, when the particles are
uniformly distributed. The case $\beta=0$ corresponds to mean field interaction. The
explanation for condition (\ref{assumption on eps beta}) is given at the
beginning of Section \ref{section main estimate}. At the biological level it
means that the modulation of proliferation by the local density of cells is
not completely local, but has a certain range of action, which is however
shorter than the long range of a mean field model.
Let us introduce the mollified empirical measure (the theoretical analog of
the numerical method of kernel smoothing), $h_{t}^{N}$, defined as
\[
h_{t}^{N}(x)=\left( \theta_{N}\ast S_{t}^{N}\right) \left( x\right) .
\]
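Concretely, $h_{t}^{N}$ is nothing but a kernel-density estimate of the particle cloud. A small numpy sketch (with an illustrative Gaussian $\theta$ and stand-in standard normal particle positions, both our choices):

```python
import numpy as np

def mollified(particles, grid, eps):
    """h^N(x) = (theta_N * S^N)(x) with theta_N(y) = theta(y/eps)/eps, d = 1."""
    n = len(particles)
    z = (grid[:, None] - particles[None, :]) / eps
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * eps * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
pts = rng.standard_normal(1000)             # stand-in particle positions
grid = np.linspace(-4.0, 4.0, 161)
h = mollified(pts, grid, eps=1000 ** -0.4)  # eps_N = N^{-beta/d}, beta = 0.4
```

With $N=1000$ and $\beta=0.4$ the smoothed density closely tracks the underlying $N(0,1)$ profile while integrating to approximately one.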
Concerning
\begin{tikzpicture}[main/.style = {draw, circle , minimum size=2.6mm,line width=1pt,node distance=0.6cm},main2/.style = {draw, circle , minimum size=4mm,line width=1pt,fill=gray!20,node distance=0.8cm}]
\node[main,label=above:{$v$},label=right:{$(2)$}] (1) {};
\node[main,label=left:{$(2)$}] (2) [below left of=1] {};
\node[main,label=right:{$(2)$}] (3) [below right of=1]{};
\node[main,label=left:{$(2)$}] (4) [below of=2] {};
\node[main,label=right:{$(2)$}] (5) [below of=3] {};
\node[main,label=right:{$(2)$}] (6) [below right of=4] {};
\node (7) [below=0cm and 0.7cm of 6] {$G_1$};
\draw[line width=1pt] (1) -- (2);
\draw[line width=1pt] (1) -- (3);
\draw[line width=1pt] (2) -- (3);
\draw[line width=1pt] (3) -- (4);
\draw[line width=1pt] (4) -- (5);
\draw[line width=1pt] (4) -- (6);
\draw[line width=1pt] (5) -- (6);
\end{tikzpicture} \hspace{1cm}
\begin{tikzpicture}[main/.style = {draw, circle , minimum size=2.6mm,line width=1pt,node distance=0.6cm},main2/.style = {draw, circle , minimum size=4mm,line width=1pt,fill=gray!20,node distance=0.8cm}]
\node[main,label=above:{$w$},label=right:{$(0)$}] (1) {};
\node[main,label=left:{$(0)$}] (2) [below left of=1] {};
\node[main,label=right:{$(0)$}] (3) [below right of=1]{};
\node[main,label=left:{$(0)$}] (4) [below of=2] {};
\node[main,label=right:{$(0)$}] (5) [below of=3] {};
\node[main,label=right:{$(0)$}] (6) [below right of=4] {};
\node (7) [below=0cm and 0.7cm of 6] {$H_1$};
\draw[line width=1pt] (1) -- (2);
\draw[line width=1pt] (1) -- (3);
\draw[line width=1pt] (2) -- (4);
\draw[line width=1pt] (3) -- (5);
\draw[line width=1pt] (3) -- (4);
\draw[line width=1pt] (4) -- (6);
\draw[line width=1pt] (5) -- (6);
\end{tikzpicture}
\caption{Two graphs that are indistinguishable by the $\wl{}$-test. The numbers between round brackets indicate how many homomorphic images of the $3$-clique each vertex is involved in.}
\label{fig:wlequivalent}
\end{figure}
A more practical approach is to extend the expressive power of $\mathsf{MPNN}\text{s}$ \textit{whilst preserving their $\mathcal{O}(n)$ cost in each iteration}. Various such extensions \citep{kipf-loose,chen2019powerful,li2019hierarchy,ishiguro2020weisfeilerlehman,bouritsas2020improving,geerts2020lets} achieve this by infusing $\mathsf{MPNN}\text{s}$ with \textit{local graph structural information from the start}. That is, the iterative message passing scheme of $\mathsf{MPNN}\text{s}$ is run on vertex labels that contain quantitative information about local graph structures. It is easy to see that such architectures can go beyond the $\wl{}$ test: for example,
adding triangle counts to $\mathsf{MPNN}\text{s}$ suffices to distinguish the vertices $v$ and $w$ and graphs $G_1$ and $H_1$ in Fig.~\ref{fig:wlequivalent}. Moreover, the cost is
\textit{a single preprocessing step} to count local graph parameters, thus maintaining the $\mathcal{O}(n)$ cost in the iterations of the $\mathsf{MPNN}$.
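The counts in Fig.~\ref{fig:wlequivalent} are easy to reproduce: $\homc{K_3^r,G^v}$ equals the number of ordered pairs of neighbours of $v$ that are themselves adjacent. A short sketch (the edge lists are our transcription of the figure, with the root $v$ resp. $w$ as vertex 1):

```python
def triangle_hom_counts(edges):
    """hom(K3^r, G^v) for each v: ordered pairs (u, w) of neighbours
    of v such that {u, w} is an edge."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return {
        v: sum(1 for u in nbrs for w in nbrs if w in adj[u])
        for v, nbrs in adj.items()
    }

G1 = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
H1 = [(1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 6), (5, 6)]
counts_G1 = triangle_hom_counts(G1)   # every vertex: 2, as in the figure
counts_H1 = triangle_hom_counts(H1)   # every vertex: 0: H1 is triangle-free
```

Since every vertex of $G_1$ lies in exactly one triangle (giving two rooted homomorphisms) while $H_1$ has none, the augmented initial features immediately separate $v$ from $w$ and $G_1$ from $H_1$.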
While there are some partial results showing that local graph parameters increase expressive power \citep{bouritsas2020improving,li2019hierarchy}, their precise expressive power and relationship to higher-order $\mathsf{MPNN}\text{s}$ was unknown, and there is little guidance in terms of which local parameters do help $\mathsf{MPNN}\text{s}$ and which ones do not.
The main contribution of this paper is a precise characterisation of the expressive power of $\mathsf{MPNN}\text{s}$ with local graph parameters and its relationship to the hierarchy of higher-order
$\mathsf{MPNN}\text{s}$.
\paragraph{Our contributions.}
In order to formalise local graph parameters, we propose to extend vertex labels with \textit{homomorphism counts} of small graph patterns.\footnote{We recall that homomorphisms are edge-preserving mappings between the vertex sets.} More precisely, given graphs $P$ and $G$, and vertices $r$ in $P$ and $v$ in $G$, we add the number of homomorphisms from $P$ to $G$ that map $r$ to $v$, denoted by $\homc{P^r,G^v}$, to the initial features of $v$.
Such counts satisfy conditions (i) and (ii). Indeed, homomorphism counts are known to \textit{measure the similarity} of vertices and graphs \citep{MR214529,Grohe20}, and
serve as a \textit{basis for the efficient computation} of a number of other important graph parameters, e.g., subgraph and induced subgraph counts \citep{Curticapean_2017,ZhangY0ZC20}. Furthermore, homomorphism counts underlie \textit{characterisations of the expressive power} of $\mathsf{MPNN}\text{s}$ and higher-order $\mathsf{MPNN}\text{s}$. As an example, two vertices $v$ and $w$ in graphs $G$ and $H$, respectively, are indistinguishable by $\wl{}$, and hence by $\mathsf{MPNN}\text{s}$, precisely when $\homc{T^r,G^v} = \homc{T^r,H^w}$ for every rooted tree $T^r$ \citep{dvorak,DellGR18}.
Concretely, we propose $\mathcal F$-$\mathsf{MPNN}\text{s}$, where $\mathcal F = \{P_1^r,\allowbreak \dots,\allowbreak P_\ell^r\}$ is a set of (graph) patterns,
by (i) first allowing a pre-processing step that labels each vertex $v$ of a graph $G$ with the vector $\bigl(\homc{P^r_1,G^v},\ldots,\homc{P^r_\ell,G^v}\bigr)$, and (ii) then running
an $\mathsf{MPNN}$ on this labelling. As such, we can turn \textit{any} $\mathsf{MPNN}$ into an $\mathcal F$-$\mathsf{MPNN}$
by simply augmenting the initial vertex embedding. Furthermore,
several recently proposed extensions
of $\mathsf{MPNN}\text{s}$ fit in this approach, including $\mathsf{MPNN}\text{s}$ extended with information about vertex degrees \citep{kipf-loose}, walk counts \citep{chen2019powerful}, tree-based counts \citep{ishiguro2020weisfeilerlehman} and subgraph counts \citep{bouritsas2020improving}. Hence, $\mathcal F$-$\mathsf{MPNN}\text{s}$ can also be regarded as a unifying theoretical formalism.
\smallskip
\noindent
Our main results can be summarised as follows:
\begin{enumerate}
\item We precisely characterise the expressive power of $\mathcal F$-$\mathsf{MPNN}\text{s}$ by means of an extension of $\wl{}$, denoted by $\mathcal F$-$\wl{}$.
For doing so, we use {\em $\mathcal F$-pattern trees}, which are obtained from standard trees by joining an arbitrary number of copies of the patterns in $\mathcal F$
to each of their vertices.
Our result states
that vertices $v$ and $w$ in graphs $G$ and $H$, respectively, are indistinguishable by $\mathcal F$-$\wl{}$, and hence by $\mathcal F$-$\mathsf{MPNN}\text{s}$, precisely
when $\homc{T^r,G^v} = \homc{T^r,H^w}$ for every $\mathcal F$-pattern tree $T^r$. This characterisation gracefully extends the characterisation for standard $\mathsf{MPNN}\text{s}$, mentioned earlier, by setting $\mathcal F = \emptyset$. Furthermore,
$\mathcal F$-$\mathsf{MPNN}\text{s}$ provide insights in the expressive power of existing $\mathsf{MPNN}$ extensions, most notably the Graph Structure Networks of \citet{bouritsas2020improving}.
\item We compare $\mathcal F$-$\mathsf{MPNN}\text{s}$ to higher-order $\mathsf{MPNN}\text{s}$, which are characterized in terms of the $\wlk{k}$-test.
On the one hand, while $\mathcal F$-$\mathsf{MPNN}\text{s}$ strictly increase the expressive power of the $\wl{}$-test, for any finite set
$\mathcal F$ of patterns, $\wlk{2}$ can distinguish graphs which $\mathcal F$-$\mathsf{MPNN}\text{s}$ cannot.
On the other hand, for each $k \geq 1$ there are patterns $P$
such that $\{P\}$-$\mathsf{MPNN}\text{s}$ can distinguish graphs which $\wlk{k}$ cannot.
\item We deal with the technically challenging problem of pattern selection and comparing
$\mathcal F$-$\mathsf{MPNN}\text{s}$ based on the patterns included in $\mathcal F$. We prove two partial results: one establishing when a pattern $P$ in $\mathcal F$ is redundant, based on whether or not $P$ is the join of other patterns in $\mathcal F$, and another result indicating when $P$ does add expressive power, based on the treewidth of $P$ compared to the treewidth of other patterns in $\mathcal F$.
\item Our theoretical results are complemented by an experimental study in which we show that for various $\mathsf{GNN}$ architectures, datasets and graph learning tasks, all part of the recent benchmark by \citet{dwivedi2020benchmarkgnns}, the augmentation of initial features with homomorphism counts of graph patterns has often a positive effect, and the cost for computing these counts
incurs little to no overhead.
\end{enumerate}
As such, we believe that $\mathcal F$-$\mathsf{MPNN}\text{s}$ not only provide an elegant theoretical framework
for understanding local graph parameter enabled $\mathsf{MPNN}\text{s}$, but are also a valuable alternative to higher-order $\mathsf{MPNN}\text{s}$ as a way to increase the expressive power of $\mathsf{MPNN}\text{s}$.
In addition, and as will be explained in Section \ref{sec:lgp}, $\mathcal F$-$\mathsf{MPNN}\text{s}$ provide a unifying framework for understanding the expressive power of several other existing extensions of $\mathsf{MPNN}\text{s}$. Proofs of our results and further details on the relationship to existing approaches and experiments can be found in the appendix.
\paragraph{Related Work.}
Works related to the distinguishing power of the $\wl{}$-test, $\mathsf{MPNN}\text{s}$ and their higher-order variants
are cited throughout the paper.
Beyond distinguishability, $\mathsf{GNN}\text{s}$ are analyzed in terms of universality and generalization properties \citep{DBLP:journals/corr/abs-1901-09342,Keriven,Chen2019,garg2020generalization}, local distributed
algorithms \citep{DBLP:conf/nips/SatoYK19,Loukas2020What}, randomness in features \citep{sato2020random,abboud2020surprising} and using local context matrix features \citep{vignac2020building}.
Other extensions of $\mathsf{GNN}\text{s}$ are surveyed, e.g., in
\citet{Zonghan2019,Zhou2018} and \citet{chami2021machine}.
Related are the Graph Homomorphism Convolutions by \citet{nt2020graph} which apply $\mathsf{SVM}$s directly on the representation of vertices by homomorphism counts.
Finally, our approach is reminiscent of the graph representations by means of graphlet kernels \citep{shervashidze09a}, but then on the level of vertices.
\section{\boldmath Local Graph Parameter Enabled $\mathsf{MPNN}\text{s}$}\label{sec:lgp}
In this section we introduce $\mathsf{MPNN}\text{s}$ with local graph parameters. We start by introducing preliminary concepts.
\paragraph{Graphs.}
We consider undirected vertex-labelled graphs $G=(V,E,\chi)$, with $V$ the set of vertices, $E$
the set of edges and $\chi$ a mapping assigning a label to each vertex in $V$.
The set of neighbours of a vertex is denoted as $N_G(v) = \bigl\{u \in V \bigm\vert \{u,v\}\in E \bigr\}$.
A \textit{rooted graph} is a graph in which one of its vertices is declared to be its root.
We denote a rooted graph by $G^v$, where $v\in V$ is the root, and depict it as a graph in which the root is a blackened vertex.
Given graphs $G = (V_G,E_G,\chi_G)$ and $H = (V_H,E_H,\chi_H)$, a {\em homomorphism} from $G$ to $H$ is a mapping $h:V_G \to V_H$ such that (i) $\{h(u),h(v)\}\in E_H$ for every $\{u,v\}\in E_G$, and (ii)
$\chi_G(u) = \chi_H(h(u))$ for every $u \in V_G$.
For rooted graphs $G^v$ and $H^w$, a homomorphism must additionally map $v$ to $w$.
We denote by
$\homc{G,H}$ the number of homomorphisms from $G$ to $H$; similarly for rooted graphs. For simplicity of exposition we focus on vertex-labelled undirected graphs but all our results can be extended to edge-labelled directed graphs.
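To make the homomorphism-count machinery concrete, the following is a minimal brute-force counter for rooted patterns. The function name and the edge-list/adjacency-dict encoding are our own illustrative choices, vertex labels are omitted for brevity, and exhaustive enumeration is of course only feasible for very small patterns:

```python
from itertools import product

def hom_count(P_edges, P_root, G_adj, G_root):
    """Count homomorphisms from a rooted pattern P to a rooted graph G.

    P_edges: list of undirected edges over the pattern's vertices.
    G_adj:   dict mapping each vertex of G to its set of neighbours.
    A map h counts iff it sends P_root to G_root and every pattern edge
    to an edge of G. Brute force: enumerate all vertex maps.
    """
    P_vertices = sorted({u for e in P_edges for u in e})
    G_vertices = list(G_adj)
    count = 0
    for image in product(G_vertices, repeat=len(P_vertices)):
        h = dict(zip(P_vertices, image))
        if h[P_root] != G_root:
            continue
        if all(h[b] in G_adj[h[a]] for a, b in P_edges):
            count += 1
    return count
```

For instance, on the triangle $K_3$ rooted at any vertex $v$, $\homc{K_3^r, K_3^v} = 2$: the root is fixed and the two remaining pattern vertices can be mapped in two ways.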
\paragraph{Message passing neural networks.}
The basic architecture for $\mathsf{MPNN}\text{s}$ \citep{GilmerSRVD17},
and the one used in recent studies on
GNN expressibility \citep{grohewl,xhlj19,barcelo2019logical}, consists of a sequence of rounds that update the feature vector of every vertex in the graph by combining its current feature vector with the result of an aggregation over the feature vectors of its neighbours. Formally, for a graph $G=(V,E,\chi)$, let $\mathbf{x}_{M,G,v}^{(d)}$ denote the feature vector computed for vertex $v\in V$ by an $\mathsf{MPNN}$ $M$ in round $d$. The initial feature vector $\mathbf{x}_{M,G,v}^{(0)}$ is a one-hot encoding of its label $\chi(v)$.
This feature vector is iteratively updated in a number of rounds. In particular, in round $d$,
$$
\mathbf{x}_{M,G,v}^{(d)} \, := \, \textsc{Upd}^{(d)}\Big(\mathbf{x}_{M,G,v}^{(d-1)},
\textsc{Comb}^{(d)} \big( \{\mskip-5mu\{ \mathbf{x}_{M,G,u}^{(d-1)}
\, \mid \, u\in N_G(v) \}\mskip-5mu\} \big)\Big),
$$
where $\textsc{Comb}^{(d)}$ and $\textsc{Upd}^{(d)}$ are an \textit{aggregating} and \textit{update} function, respectively.
Thus, the feature vectors $\mathbf{x}_{M,G,u}^{(d-1)}$ of all neighbours $u$ of $v$ are combined by the aggregating function $\textsc{Comb}^{(d)}$ into a single vector,
and then this vector is used together with $\mathbf{x}_{M,G,v}^{(d-1)}$ in order to produce $\mathbf{x}_{M,G,v}^{(d)}$ by applying the update function
$\textsc{Upd}^{(d)}$.
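The update rule above can be sketched as a single round with sum aggregation and a linear-plus-ReLU update. This is a minimal sketch; the function and weight names are our own and do not correspond to any specific architecture from the cited works:

```python
import numpy as np

def mpnn_round(X, adj, W_self, W_nbr):
    """One message-passing round.

    X:      (n, d) matrix of current vertex feature vectors.
    adj:    dict vertex -> iterable of neighbours.
    Comb is taken to be the sum over neighbour features; Upd is a
    linear map of (own features, aggregate) followed by ReLU.
    """
    agg = np.zeros_like(X)
    for v in range(X.shape[0]):
        for u in adj[v]:
            agg[v] += X[u]                      # Comb: sum over N_G(v)
    return np.maximum(0.0, X @ W_self + agg @ W_nbr)  # Upd
```

Stacking several such rounds, each with its own weights, yields the feature vectors $\mathbf{x}_{M,G,v}^{(d)}$.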
\paragraph{\boldmath $\mathsf{MPNN}\text{s}$ with local graph parameters.}
The $\mathsf{GNN}\text{s}$ studied in this paper leverage the power of $\mathsf{MPNN}\text{s}$
by enhancing initial features of vertices with \textit{local graph parameters} that are beyond their
classification power. To illustrate the idea, consider the graphs in Fig.~\ref{fig:wlequivalent}. As mentioned, these graphs cannot be distinguished by the $\wl{}$-test,
and therefore cannot be distinguished by the broad class of $\mathsf{MPNN}\text{s}$ (see e.g. \citep{xhlj19,grohewl}).
If we allow a \textit{pre-processing stage}, however, in which the initial labelling of every vertex $v$ is extended with
the number of (homomorphic images of) $3$-cliques in which $v$ participates (indicated by numbers between brackets in Fig.~\ref{fig:wlequivalent}), then clearly
vertices $v$ and $w$ (and the graphs $G_1$ and $H_1$) can be distinguished based on this extra structural information. In fact, the initial labelling already suffices for this purpose.
Let $\mathcal F = \{P_1^r, \dots,P_\ell^r\}$ be a set of (rooted) graphs, which we refer to as \emph{patterns}. Then, \textit{$\mathcal F$-enabled $\mathsf{MPNN}\text{s}$}, or just $\mathcal F$-$\mathsf{MPNN}\text{s}$,
are defined in the same way as $\mathsf{MPNN}\text{s}$ with the crucial difference that now
the initial feature vector of a vertex $v$ is a one-hot encoding of the label $\chi_G(v)$ of the vertex, and all the homomorphism counts from patterns in $\mathcal F$. Formally,
in each round $d$ an $\mathcal F$-$\mathsf{MPNN}$ $M$ labels each vertex $v$ in graph $G$ with a feature vector $\mathbf{x}_{M,\mathcal F, G,v}^{(d)}$ which is inductively defined as follows:
\begin{align*} \mathbf{x}_{M,\mathcal F, G,v}^{(0)} & := \big(\chi_G(v), \homc{P_1^r,G^v},\dots,\homc{P_\ell^r,G^v} \big) \label{eq:extend} \\
\mathbf{x}_{M,\mathcal F, G,v}^{(d)} & := \, \textsc{Upd}^{(d)}\Big(\mathbf{x}_{M,\mathcal F, G,v}^{(d-1)},
\textsc{Comb}^{(d)} \big( \{\mskip-5mu\{ \mathbf{x}_{M,\mathcal F, G,u}^{(d-1)}
\, \mid \, u\in N_G(v) \}\mskip-5mu\} \big)\Big).
\end{align*}
We note that standard $\mathsf{MPNN}\text{s}$ are $\mathcal F$-$\mathsf{MPNN}\text{s}$ with $\mathcal{F}=\emptyset$.
As for $\mathsf{MPNN}\text{s}$, we can equip $\mathcal F$-$\mathsf{MPNN}\text{s}$ with a \textsc{Readout} function that aggregates all final feature vectors into a single feature vector in order to classify or distinguish
graphs.
We emphasise that any $\mathsf{MPNN}$ architecture can be turned into an $\mathcal{F}$-$\mathsf{MPNN}$ by a simple
homomorphism counting preprocessing step. As such, we propose a \textit{generic plug-in for a large class of $\mathsf{GNN}$ architectures}. Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets \citep{ZhangY0ZC20} and they form the basis for counting (induced) subgraphs and other notions of subgraphs \citep{Curticapean_2017}. Despite the simplicity of $\mathcal F$-$\mathsf{MPNN}\text{s}$, we will show that they can substantially increase the power of $\mathsf{MPNN}\text{s}$ by varying $\mathcal F$, only paying the one-time cost for preprocessing.
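As an illustration of this preprocessing step, the sketch below computes $\homc{K_3^r, G^v}$ for every vertex of a graph (vertex labels ignored); these counts would then simply be appended to the initial one-hot feature vectors. The function name and graph encoding are our own:

```python
def triangle_hom_counts(adj):
    """hom(K3^r, G^v) for every vertex v of G.

    adj: dict vertex -> set of neighbours.
    Each triangle through v is hit by exactly two root-fixing
    homomorphisms (the two orderings of its other two vertices), so the
    count equals the number of ordered pairs of adjacent neighbours.
    """
    counts = {}
    for v, nbrs in adj.items():
        c = 0
        for u in nbrs:
            for w in adj[u]:
                if w in nbrs:      # v-u-w-v closes a triangle
                    c += 1
        counts[v] = c
    return counts
```

On the triangle itself every vertex gets count $2$; on a triangle-free graph all counts are $0$, matching the fact that such a graph cannot be distinguished from its $\wl{}$-equivalent counterparts by this feature alone.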
\paragraph{\boldmath $\mathcal F$-$\mathsf{MPNN}\text{s}$ as unifying framework.}
An important aspect of $\mathcal F$-$\mathsf{MPNN}\text{s}$ is that they \textit{allow a principled analysis of the power of existing extensions of $\mathsf{MPNN}\text{s}$}.
For example, taking $\mathcal{F}=\{\raisebox{-1.1\dp\strutbox}{\includegraphics[height=2.7ex]{L1.pdf}}\}$ suffices to capture
degree-aware $\mathsf{MPNN}\text{s}$ \citep{geerts2020lets}, such as the Graph Convolution Networks ($\mathsf{GCN}$s) \citep{kipf-loose}, which use the \textit{degree of vertices}; taking
$\mathcal F=\{L_1,L_2,\ldots,L_\ell\}$ for rooted paths $L_i$ of length $i$ suffices to model the \textit{walk counts} used in \citet{chen2019powerful}; and taking $\mathcal F$ as the set of labeled trees of depth one precisely corresponds to the use of the \textit{$\wl{}$-labelling obtained after one round} by \citet{ishiguro2020weisfeilerlehman}. Furthermore, $\{C_\ell\}$-$\mathsf{MPNN}\text{s}$, where $C_\ell$ denotes the cycle of length $\ell$, correspond to the extension proposed in Section 4 in \citet{li2019hierarchy}.
In addition, $\mathcal F$-$\mathsf{MPNN}\text{s}$ can also capture the \textit{Graph Structure Networks} ($\mathsf{GSN}\text{s}$) by \citet{bouritsas2020improving}, which use subgraph isomorphism counts of graph patterns. We recall that an isomorphism from $G$ to $H$ is a \textit{bijective} homomorphism $h$ from $G$ to $H$ which additionally satisfies (i) $\{h^{-1}(u),h^{-1}(v)\}\in E_G$ for every $\{u,v\}\in E_H$, and (ii) $\chi_G(h^{-1}(u)) = \chi_H(u)$ for every $u \in V_H$. When $G$ and $H$ are rooted graphs, isomorphisms should preserve the roots as well.
Now, in a $\mathsf{GSN}$, the feature vector of each vertex $v$ is augmented with the number of isomorphisms from each rooted pattern $P^r$, for patterns $P$ in a set of patterns $\mathcal P$, to rooted subgraphs of $G^v$,\footnote{The use of orbits in \citet{bouritsas2020improving} is here ignored, but explained in the appendix.} and this is followed by the execution of an $\mathsf{MPNN}$, just as for our $\mathcal F$-$\mathsf{MPNN}\text{s}$. It now remains to observe that subgraph isomorphism counts can be computed in terms of homomorphism counts, and vice versa \citep{Curticapean_2017}.
That is, $\mathsf{GSN}\text{s}$ can be viewed as $\mathcal F$-$\mathsf{MPNN}\text{s}$, and thus our results for $\mathcal F$-$\mathsf{MPNN}\text{s}$ carry over to $\mathsf{GSN}\text{s}$. We adopt homomorphism counts instead of subgraph isomorphism counts because homomorphism counts underlie existing characterisations of the expressive power of $\mathsf{MPNN}\text{s}$, and because homomorphism counts are in general more efficient to compute. Also, \citet{Curticapean_2017} indicate that all common graph counts are interchangeable in terms of expressive power.
\begin{figure}[t]
\centering
\begin{tikzpicture}[main/.style = {draw, circle , minimum size=2.3mm,line width=1pt,node distance=0.8cm},main2/.style = {draw, circle , minimum size=4mm,line width=1pt,fill=gray!20,node distance=0.8cm}]
\node[main] (1) {};
\node[main] (2) [below left of=1] {};
\node[main] (3) [below right of=1]{};
\node[main] (4) [below of=2] {};
\node[main] (6) [below of=3] {};
\node[main,label={[label distance=-1mm]above:{$v$}}] (5) [right=0.2cm of 4] {};
\node[main] (7) [below of=4] {};
\node[main] (8) [below of=6] {};
\node[main] (9) [below right of=7] {};
\node (10) [below=0cm of 9] {$G_2$};
\draw[line width=1pt] (1) -- (2);
\draw[line width=1pt] (1) -- (3);
\draw[line width=1pt] (2) -- (3);
\draw[line width=1pt] (2) -- (4);
\draw[line width=1pt] (3) -- (6);
\draw[line width=1pt] (4) -- (5);
\draw[line width=1pt] (4) -- (7);
\draw[line width=1pt] (5) -- (6);
\draw[line width=1pt] (6) -- (8);
\draw[line width=1pt] (7) -- (8);
\draw[line width=1pt] (7) -- (9);
\draw[line width=1pt] (8) -- (9);
\end{tikzpicture} \hspace{1cm}
\begin{tikzpicture}[main/.style = {draw, circle , minimum size=2.3mm,line width=1pt,node distance=0.8cm},main2/.style = {draw, circle , minimum size=4mm,line width=1pt,fill=gray!20,node distance=0.8cm}]
\node[main] (1) {};
\node[main] (2) [above left of=1] {};
\node[main] (3) [above right of=1]{};
\node[main] (4) [below left of=1] {};
\node[main,label={[label distance=-2mm]above left:{$w$}}] (5) [right=0.2cm of 4] {};
\node[main] (6) [below right of=1] {};
\node[main] (8) [below=0.2cm of 5] {};
\node[main] (7) [below left of=8] {};
\node[main] (9) [below right of=8] {};
\node (10) [below left=0cm and 0.04cm of 9] {$H_2$};
\draw[line width=1pt] (1) -- (2);
\draw[line width=1pt] (1) -- (3);
\draw[line width=1pt] (1) -- (5);
\draw[line width=1pt] (2) -- (3);
\draw[line width=1pt] (2) -- (4);
\draw[line width=1pt] (3) -- (6);
\draw[line width=1pt] (4) -- (7);
\draw[line width=1pt] (6) -- (9);
\draw[line width=1pt] (7) -- (9);
\draw[line width=1pt] (7) -- (8);
\draw[line width=1pt] (9) -- (8);
\draw[line width=1pt] (8) -- (5);
\end{tikzpicture}
\caption{Two graphs and vertices that cannot be distinguished by the $\wl{}$-test, and therefore by standard $\mathsf{MPNN}\text{s}$. When considering $\mathcal F$-$\mathsf{MPNN}\text{s}$, with $\mathcal{F}=\{\raisebox{-1.1\dp\strutbox}{\includegraphics[height=3ex]{K3.pdf}}\}$, one can show that both graphs cannot be distinguished just by focusing on the initial labelling, but they can be distinguished by an $\mathcal{F}$-$\mathsf{MPNN}$ with just one aggregation layer.
}
\label{fig:fwl_power_rounds}
\end{figure}
\section{\boldmath Expressive Power of $\mathcal{F}$-$\mathsf{MPNN}\text{s}$}\label{sec:exp-power}
Recall that the standard $\wl{}$-test \citep{Weisfeiler1968,grohe_2017} iteratively constructs a labelling of the vertices in a graph $G = (V,E,\chi)$
as follows. In round $d$, for each vertex $v$ the algorithm first collects the label of $v$ and all of its neighbours after round $d-1$,
and then it hashes this aggregated multiset
of labels into a new label for $v$. The initial label of $v$ is $\chi(v)$.
As shown in \citet{xhlj19} and \citet{grohewl}, the $\wl{}$-test provides a bound on the classification power of $\mathsf{MPNN}\text{s}$: if two vertices or two graphs are indistinguishable by the
$\wl{}$ test, then they will not be distinguished by any $\mathsf{MPNN}$.
In turn, the expressive power of the $\wl{}$-test, and thus of $\mathsf{MPNN}\text{s}$, can be characterised in terms of homomorphism counts of trees \citep{dvorak,DellGR18}.
This result can be seen as a characterisation of the expressiveness of the $\wl{}$-test in terms of a particular infinite-dimensional
graph kernel: the one defined by
the number of homomorphisms from every tree $T$ into the underlying graph $G$.
In this section we show that both characterisations extend in an elegant way to the setting of $\mathcal{F}$-$\mathsf{MPNN}\text{s}$, confirming that
$\mathcal{F}$-$\mathsf{MPNN}\text{s}$ are not just a useful, but also a well-behaved generalisation of standard $\mathsf{MPNN}\text{s}$.
\subsection{\boldmath Characterisation in terms of $\wlk{\mathcal{F}}$}
We bound the expressive power of $\mathcal{F}$-$\mathsf{MPNN}\text{s}$ in terms of what we call the {\em $\wlk{\mathcal{F}}$-test}.
Formally, the $\wlk{\mathcal{F}}$-test extends $\wl{1}$ in the same way as $\mathcal{F}$-$\mathsf{MPNN}\text{s}$ extend standard $\mathsf{MPNN}\text{s}$:
by including homomorphism counts of patterns in $\mathcal F$ in the initial labelling.
That is, let $\mathcal{F}=\{P_1^r,\dots,P_\ell^r\}$. The $\wlk{\mathcal{F}}$-test is a vertex labelling algorithm
that iteratively computes a label $\chi_{\mathcal{F},G,v}^{(d)}$ for each vertex $v$ of a graph $G$, defined as follows.
\begin{align*}
\chi_{\mathcal{F},G,v}^{(0)} \, & := \, \bigl(\chi_G(v),\homc{P_1^r,G^v},\dots,\homc{P_\ell^r,G^v}\bigr) \\
\chi_{\mathcal{F},G,v}^{(d)} & := \, \textsc{Hash}\bigl(\chi_{\mathcal{F},G,v}^{(d-1)},\{\!\!\{ \chi_{\mathcal{F},G,u}^{(d-1)}\mid u\in N_G(v) \}\!\!\}\bigr).
\end{align*}
The $\wlk{\mathcal{F}}$-test stops in round $d$ when no new pair of vertices is distinguished by means of $\chi_{\mathcal{F},G,v}^{(d)}$, that is,
when for any two vertices $v_1$ and $v_2$ from $G$, $\chi_{\mathcal{F},G,v_1}^{(d-1)} = \chi_{\mathcal{F},G,v_2}^{(d-1)}$ implies
$\chi_{\mathcal{F},G,v_1}^{(d)} = \chi_{\mathcal{F},G,v_2}^{(d)}$.
Notice that $\wl{1} = \wlk{\emptyset}$.
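A compact sketch of the test: ordinary colour refinement seeded with richer initial labels. Passing plain vertex labels gives $\wl{1}$; passing labels already extended with homomorphism counts of the patterns in $\mathcal F$ gives $\wlk{\mathcal F}$. The use of Python's built-in `hash` in place of an injective \textsc{Hash} is a simplification, and the encoding is our own:

```python
def wl_f(adj, init_labels):
    """Colour refinement seeded with arbitrary initial labels.

    adj:         dict vertex -> set of neighbours.
    init_labels: dict vertex -> hashable initial label; for WL_F these
                 labels include the homomorphism counts of patterns in F.
    Since refinement never merges colour classes, the colouring is
    stable as soon as the number of classes stops growing.
    """
    colour = dict(init_labels)
    while True:
        new = {v: hash((colour[v],
                        tuple(sorted(colour[u] for u in adj[v]))))
               for v in adj}
        if len(set(new.values())) == len(set(colour.values())):
            return colour
        colour = new
```

To compare whole graphs one would compare the multisets of stable colours, mirroring the graph-level test described above.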
We can use the $\wlk{\mathcal{F}}$-test to compare vertices of the same graphs, or different graphs. We say that the
$\wlk{\mathcal{F}}$-test \emph{cannot distinguish vertices}
if their final labels are the same, and
that the $\wlk{\mathcal{F}}$-test \emph{cannot distinguish graphs} $G$ and $H$ if the multiset of labels computed for $G$ is the same
as that for $H$.
Similarly as for $\mathsf{MPNN}\text{s}$ and the $\wl{}$-test \citep{xhlj19,grohewl}, we obtain that the $\wlk{\mathcal{F}}$-test provides an upper bound
for the expressive power of $\mathcal{F}$-$\mathsf{MPNN}\text{s}$.
\begin{proposition}\label{prop:wlupper}
If two vertices of a graph cannot be distinguished by the $\wlk{\mathcal{F}}$-test, then
they cannot be distinguished by any $\mathcal F$-$\mathsf{MPNN}$ either. Moreover,
if two graphs cannot be distinguished by the $\wlk{\mathcal{F}}$-test, then they cannot be distinguished by any
$\mathcal F$-$\mathsf{MPNN}$ either.
\end{proposition}
We can also construct $\mathcal F$-$\mathsf{MPNN}\text{s}$ that mimic the $\wlk{\mathcal F}$-test: Simply adding local parameters from a set
$\mathcal F$ of patterns to the $\mathsf{GIN}$ architecture of \citet{xhlj19} results in an $\mathcal F$-$\mathsf{MPNN}$ that classifies vertices and graphs
in exactly the same way as
the $\wlk{\mathcal{F}}$-test.
\subsection{\boldmath Characterisation in terms of $\mathcal{F}$-pattern trees}
At the core of several results about the $\wl{}$-test lies a characterisation linking the test with homomorphism counts of (rooted) trees \citep{dvorak,DellGR18}. In view of the connection to $\mathsf{MPNN}\text{s}$, it tells us that $\mathsf{MPNN}\text{s}$ only use \textit{quantitative tree-based structural information from the underlying graphs}.
We next extend this characterisation to $\wlk{\mathcal{F}}$ by using homomorphism counts of so-called $\mathcal{F}$-pattern trees.
In view of the connection with $\mathcal F$-$\mathsf{MPNN}\text{s}$ (Proposition~\ref{prop:wlupper}), this reveals that $\mathcal F$-$\mathsf{MPNN}\text{s}$ can use
quantitative information of \textit{richer graph structures} than $\mathsf{MPNN}\text{s}$.
To define $\mathcal{F}$-pattern trees we need the {\em graph join operator} $\star$. Given two rooted graphs $G^v$ and $H^w$, the join graph $(G\star H)^v$ is obtained by taking the disjoint union of $G^v$ and $H^w$, followed by identifying $w$ with $v$. The root of the join graph is $v$. For example,
the join of \raisebox{-1.1\dp\strutbox}{\includegraphics[height=3ex]{K3.pdf}} and \raisebox{-1.1\dp\strutbox}{\includegraphics[height=3ex]{C4.pdf}} is
\raisebox{-1.4\dp\strutbox}{\includegraphics[height=4ex]{joinK3C4.pdf}}. Further, if $G$ is a graph and $P^r$ is a rooted graph, then joining a vertex $v$ in $G$ with $P^r$ results
in the disjoint union of $G$ and $P^r$, where $r$ is identified with $v$.
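The join operator $\star$ can be sketched on rooted graphs encoded as adjacency dicts; the function name and the tagging scheme used to keep vertex sets disjoint are our own:

```python
def join(G, r_G, H, r_H):
    """Join of two rooted graphs: disjoint union, then identify the
    root of H with the root of G. The result is rooted at G's root.

    G, H: dicts vertex -> set of neighbours.
    """
    out = {v: set(nbrs) for v, nbrs in G.items()}
    rename = {v: ('H', v) for v in H}  # keep H's vertices disjoint ...
    rename[r_H] = r_G                  # ... except the root, which merges
    for v, nbrs in H.items():
        out.setdefault(rename[v], set()).update(rename[u] for u in nbrs)
    return out, r_G
```

Joining, say, a rooted triangle with a rooted edge yields a graph on four vertices whose root has degree three, as the two roots are identified.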
Let $\mathcal{F} = \{P_1^r,\ldots,P_\ell^r\}$. An \textit{$\mathcal{F}$-pattern tree} $T^r$ is obtained from a standard rooted tree $S^r=(V,E,\chi)$, called the \textit{backbone} of $T^r$, by joining every vertex $s\in V$ with any number of copies of patterns from $\mathcal{F}$.
We define the \textit{depth} of an $\mathcal{F}$-pattern tree as the depth of its backbone.
Examples of $\mathcal{F}$-pattern trees, for $\mathcal{F}=\{\raisebox{-1.1\dp\strutbox}{\includegraphics[height=3ex]{K3.pdf}}\}$, are:
\begin{center}
\vspace{-1ex}
\includegraphics[height=1.7cm]{Ftrees.pdf}
\vspace{-2ex}
\end{center}
where grey vertices are part of the backbones of the $\mathcal{F}$-pattern trees.
Standard trees are also $\mathcal F$-pattern trees.
We next use $\mathcal{F}$-pattern trees to characterise the expressive power of $\wlk{\mathcal{F}}$ and thus, by Proposition~\ref{prop:wlupper}, of $\mathcal{F}$-$\mathsf{MPNN}\text{s}$.
\begin{theorem}\label{thm:char}
For any finite collection $\mathcal{F}$ of patterns, vertices $v$ and $w$ in a graph $G$ are indistinguishable by the $\wlk{\mathcal{F}}$-test if and only if
$\mathsf{hom}(T^r,G^v)=\mathsf{hom}(T^r,G^w)$ for every rooted $\mathcal{F}$-pattern tree $T^r$.
Similarly, graphs
$G$ and $H$ are indistinguishable by the $\wlk{\mathcal{F}}$-test if and only if
$\mathsf{hom}(T,G)=\mathsf{hom}(T,H)$ for every (unrooted) $\mathcal{F}$-pattern tree $T$.
\end{theorem}
The proof of this theorem, which can be found in the appendix,
requires extending techniques from \citet{Grohe20,Grohe20Lics} that were used to characterise the expressiveness of
$\wl{1}$ in terms of homomorphism counts of trees.
In fact, we can make the above theorem more precise. When $\wlk{\mathcal F}$ is run for $d$ rounds, \textit{only $\mathcal F$-pattern trees of depth at most $d$ are required}.
This tells us that increasing the number of rounds of $\wlk{\mathcal{F}}$
results in more complex structural information being taken into account.
For example, consider the two graphs $G_2$ and $H_2$ and vertices $v\in V_{G_2}$ and $w\in V_{H_2}$, shown in Fig.~\ref{fig:fwl_power_rounds}. Let $\mathcal{F}=\{\raisebox{-1.1\dp\strutbox}{\includegraphics[height=3ex]{K3.pdf}}\}$.
By definition, $\wlk{\mathcal F}$ cannot distinguish $v$ from $w$ based on the initial labelling. If run for one round, Theorem~\ref{thm:char} implies that $\wlk{\mathcal F}$ cannot distinguish $v$ from $w$ if and only if $\mathsf{hom}(T^r,G_2^v)=\mathsf{hom}(T^r,H_2^w)$ for any $\mathcal{F}$-pattern tree of depth at most $1$.
It is readily verified that
$$\mathsf{hom}(\raisebox{-1.8\dp\strutbox}{\includegraphics[height=4ex]{Ftree1g.pdf}},G_2^v)=0\neq 4=\mathsf{hom}(\raisebox{-1.8\dp\strutbox}{\includegraphics[height=4ex]{Ftree1g.pdf}},H_2^w),$$
and thus $\wlk{\mathcal F}$ distinguishes $v$ from $w$ after one round. Similarly, $G_2$
and $H_2$ can be distinguished by $\wlk{\mathcal F}$ after one round. We observe that
$G_2$ and $H_2$ are indistinguishable by $\wl{}$. Hence, $\mathcal F$-$\mathsf{MPNN}\text{s}$ are more expressive than $\mathsf{MPNN}\text{s}$.
Importantly, Theorem~\ref{thm:char} discloses the boundaries of $\mathcal F$-$\mathsf{MPNN}\text{s}$. To illustrate this for some specific instances of $\mathcal F$-$\mathsf{MPNN}\text{s}$ mentioned earlier, the expressive power of degree-based $\mathsf{MPNN}\text{s}$ \citep{kipf-loose,geerts2020lets} is captured by $\{L_1\}$-pattern trees, and walk-count $\mathsf{MPNN}\text{s}$ \citep{chen2019powerful} are captured by $\{L_1,\ldots, L_\ell\}$-pattern trees. These pattern trees are just trees, since joining paths to trees only results in bigger trees. Thus,
Theorem~\ref{thm:char} tells us that all these extensions are still bounded by $\wl{}$ (albeit needing fewer rounds). In contrast, beyond $\wl{}$, $\{C_\ell\}$-pattern trees capture cycle-count $\mathsf{MPNN}\text{s}$ \citep{li2019hierarchy}, and
for $\mathsf{GSN}\text{s}$ \citep{bouritsas2020improving}, which use subgraph isomorphism counts of patterns $P\in\mathcal P$, their expressive power is captured by $\mathsf{spasm}(\mathcal P)$-pattern trees, where $\mathsf{spasm}(\mathcal P)$ consists of all surjective homomorphic images of patterns in $\mathcal P$ \citep{Curticapean_2017}.
\section{\boldmath A Comparison with the $\wlk{k}$-test}\label{sec:choice}
We propose $\mathcal F$-$\mathsf{MPNN}\text{s}$ as an alternative and efficient way to extend the expressive power of
$\mathsf{MPNN}\text{s}$ (and thus the $\wl{}$-test) compared to the computationally intensive higher-order $\mathsf{MPNN}\text{s}$ based on the $\wlk{k}$-test \citep{grohewl,Morris2020WeisfeilerAL,DBLP:conf/nips/MaronBSL19}. In this section we
situate $\wlk{\mathcal F}$ in the $\wlk{k}$ hierarchy. The definition of $\wlk{k}$
is deferred to the appendix.
We have seen that $\wlk{\mathcal F}$ can distinguish graphs that $\wl{}$ cannot: it suffices to consider $\wlk{\{K_3\}}$ for the $3$-clique $K_3$.
In order to generalise this observation we need some notation.
Let $\mathcal F$ and $\mathcal G$ be two sets of patterns and consider
an $\mathcal F$-$\mathsf{MPNN}$ $M$ and a $\mathcal G$-$\mathsf{MPNN}$ $N$. We say that $M$ is \textit{upper bounded in expressive power} by $N$ if for any graph $G$,
if $N$ cannot distinguish vertices $v$ and $w$,\footnote{Just as for the $\wlk{\mathcal F}$-test, an $\mathcal F$-$\mathsf{MPNN}$ cannot distinguish two vertices if the label computed for
both of them is the same.} then neither can $M$.
A similar notion is in place for pairs of graphs: if $N$ cannot distinguish graphs $G$ and $H$, then neither can $M$.
More generally, let $\mathcal M$ be a class of $\mathcal F$-$\mathsf{MPNN}\text{s}$ and $\mathcal N$ be a class of $\mathcal G$-$\mathsf{MPNN}\text{s}$. We say that the class $\mathcal M$ is \textit{upper bounded in expressive power} by $\mathcal N$ if every $M\in\mathcal M$ is upper bounded in expressive power by an $N\in\mathcal N$ (which may depend on $M$). When $\mathcal M$ is upper bounded by $\mathcal N$ and vice versa, then $\mathcal M$ and $\mathcal N$ are said to have the \textit{same expressive power}. A class $\mathcal N$ is \textit{more expressive} than a class $\mathcal M$ when $\mathcal M$ is upper bounded in expressive power by $\mathcal N$, but there exist graphs that can be distinguished by $\mathsf{MPNN}\text{s}$ in $\mathcal N$ but not by any $\mathsf{MPNN}$ in $\mathcal M$.
Finally, we use the
notion of \textit{treewidth} of a graph, which measures the tree-likeness of a graph. For example,
trees have treewidth one, cycles have treewidth two, and the $k$-clique $K_k$ has treewidth $k-1$ (for $k>1$). We define this standard notion in the appendix and only note that we define the
treewidth of a pattern $P^r$ as the treewidth of its unrooted version $P$.
Our first result is a consequence of the characterisation of $\wlk{k}$ in terms of homomorphism counts of graphs of treewidth $k$ \citep{dvorak,DellGR18}.
\begin{proposition}\label{prop:lgp-in-kwl}
For each finite set $\mathcal F$ of patterns, the expressive power of
$\wlk{\mathcal{F}}$ is bounded by $\wlk{k}$, where $k$ is the largest
treewidth of a pattern in $\mathcal F$.
\end{proposition}
For example, since the treewidth of $K_3$ is $2$, we have that $\wlk{\{K_3\}}$ is bounded by $\wlk{2}$. Similarly, $\wlk{\{K_{k+1}\}}$ is bounded in expressive power by $\wlk{k}$.
Our second result tells us how to increase the expressive power of $\wlk{\mathcal F}$ beyond $\wlk{k}$. A pattern $P^r$ is a \textit{core} if every homomorphism from $P$ to itself is injective. For example, every clique $K_k$ and every cycle of odd length is a core.
\begin{theorem}\label{theo:games}
Let $\mathcal F$ be a finite set of patterns.
If $\mathcal F$ contains a pattern $P^r$ which is a core and has treewidth $k$, then there exist graphs that can be distinguished by $\wlk{\mathcal F}$ but not by $\wlk{(k-1)}$.
\end{theorem}
In other words, for such $\mathcal F$, $\wlk{\mathcal F}$ is not bounded by $\wlk{(k-1)}$.
For example, since $K_3$ is a core, $\wlk{\{K_3\}}$ is not bounded in expressive power by $\wl{}=\wlk{1}$.
More generally, $\wlk{\{K_{k}\}}$ is not bounded by $\wlk{(k-1)}$.
The proof of Theorem \ref{theo:games} is based on extending deep
techniques developed in finite
model theory that have been used to understand the expressive power of {\em finite variable logics} \citep{AtseriasBD07,BovaC19}. This result is stronger than the one underlying the strictness of the $\wlk{k}$ hierarchy \citep{O2017}, which states that $\wlk{k}$ is strictly more expressive than $\wlk{(k-1)}$. Indeed, \citet{O2017} only shows the \textit{existence} of a pattern $P^r$ of treewidth $k$ such that
$\wlk{\{P^r\}}$ is not bounded by $\wlk{(k-1)}$. In Theorem \ref{theo:games} we provide an \textit{explicit recipe} for finding such a pattern $P^r$: it can be taken to be any core of treewidth $k$.
In summary, we have shown that there is a set $\mathcal F$ of patterns such that (i) $\wlk{\mathcal F}$ can distinguish graphs which cannot be distinguished by $\wlk{(k-1)}$, yet (ii) $\wlk{\mathcal F}$ cannot
distinguish more graphs than $\wlk{k}$. This raises the question of whether there is a finite set $\mathcal F$ such that $\wlk{\mathcal F}$ is equivalent in expressive power to $\wlk{k}$. We answer this negatively.
\begin{proposition}\label{prop:long-cycles}
For any $k>1$, there does not exist a finite set $\mathcal F$ of patterns such that
$\wlk{\mathcal F}$ is equivalent in expressive power to $\wlk{k}$.
\end{proposition}
In view of the connection between $\mathcal F$-$\mathsf{MPNN}\text{s}$ and $\mathsf{GSN}\text{s}$ mentioned earlier, we thus show that no $\mathsf{GSN}$ can match the power of $\wlk{k}$, which was a question left open in \citet{bouritsas2020improving}.
We remark that if we allow $\mathcal F$ to consist of all (\textit{infinitely many}) patterns of treewidth $k$, then $\wlk{\mathcal F}$ is equivalent in expressive power to $\wlk{k}$ \citep{dvorak,DellGR18}.
\section{When Do Patterns Extend Expressiveness?}\label{sec:patterns}
Patterns are not learned, but must be passed as an input to $\mathsf{MPNN}\text{s}$ together with the graph structure. Thus, knowing which patterns work well, and which do not, is of key importance for the power of the resulting $\mathcal{F}$-$\mathsf{MPNN}\text{s}$.
This is a difficult question to answer
since determining which patterns work well is clearly application-dependent.
From a theoretical point of view, however, we can still look into interesting questions related to the problem of which patterns to choose. One such question, and the one studied in this section, is
when a pattern adds expressive power over the ones that we have already selected. More formally, we study the following problem:
Given a finite set $\mathcal F$ of patterns,
when does adding a new pattern $P^r$ to $\mathcal F$ extend the expressive power of the $\wlk{\mathcal{F}}$-test?
To answer this question in the positive, we need to find two graphs $G$ and $H$ that are indistinguishable by the $\wlk{\mathcal{F}}$-test but can be distinguished by the
$\wlk{\mathcal{F}\cup \{P^r\}}$-test. As an example of this technique we show that
longer cycles always add expressive power. We use $C_k$ to represent the cycle of length $k$.
\begin{proposition}\label{prop:cycles}
For any $k > 3$, $\wlk{\{C^r_3,\dots,C^r_k\}}$ is more expressive than
$\wlk{\{C^r_3,\dots,C^r_{k-1}\}}$.
\end{proposition}
We also observe that, by Proposition~\ref{prop:lgp-in-kwl}, $\wlk{\{C^r_3,\dots,C^r_k\}}$ is bounded by $\wlk{2}$ for any $k\geq 3$ because
cycles have treewidth two. It is often the case, however, that finding such graphs and proving that they are indistinguishable can be rather challenging.
Instead, in this section we provide two techniques that can be used
to partially answer the question posed above by only looking at properties of the sets of patterns.
Our first result is for establishing when a pattern does not add expressive power to a given set $\mathcal F$ of patterns, and the second one
when it does.
\subsection{Detecting when patterns are superfluous}
Our first result is a simple recipe for choosing local features: instead of choosing complex patterns that are the joins of
smaller patterns, one should opt for the smaller patterns.
\begin{proposition}\label{prop:simplify}
Let $P^r = P_1^r \star P_2^r$ be a pattern that is the join of two smaller patterns.
Then for any set $\mathcal F$ of patterns, we have that
$\wlk{\mathcal F \cup \{P^r\}}$ is upper bounded in expressive power by $\wlk{\mathcal F \cup \{P_1^r,P_2^r\}}$.
\end{proposition}
Stated differently, this means that adding to $\mathcal F$ any pattern which is the join of two patterns already in
$\mathcal F$ does not add expressive power.
Thus, instead of using, for example, the pattern \raisebox{-2\dp\strutbox}{\includegraphics[height=4.6ex]{join3foldK3.pdf}}, one should prefer to use instead the triangle \raisebox{-1.1\dp\strutbox}{\includegraphics[height=3ex]{K3.pdf}}. This result is in line with other advantages of smaller patterns: their homomorphism counts are easier to compute, and, since they are less specific, they should tend to produce less over-fitting.
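The intuition behind Proposition~\ref{prop:simplify} is that rooted homomorphism counts are multiplicative under join: $\homc{(P_1\star P_2)^r, G^v} = \homc{P_1^r,G^v}\cdot\homc{P_2^r,G^v}$, since a root-preserving homomorphism from the join decomposes into independent homomorphisms from $P_1^r$ and $P_2^r$. The joined pattern's count therefore carries no information beyond the factors'. The following brute-force numerical check of this identity on a small example uses our own encoding and is feasible for tiny patterns only:

```python
from itertools import product

def hom(P_edges, P_root, adj, v):
    """Brute-force rooted homomorphism count (tiny patterns only)."""
    Pv = sorted({x for e in P_edges for x in e})
    total = 0
    for img in product(adj, repeat=len(Pv)):
        h = dict(zip(Pv, img))
        if h[P_root] == v and all(h[b] in adj[h[a]] for a, b in P_edges):
            total += 1
    return total

# K3 joined with K3 at the root gives the "bowtie" on vertices 0..4,
# rooted at the shared vertex 0.
K3 = [(0, 1), (1, 2), (0, 2)]
bowtie = K3 + [(0, 3), (3, 4), (0, 4)]
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}        # host graph G = K3
assert hom(bowtie, 0, adj, 0) == hom(K3, 0, adj, 0) ** 2
```

Here both sides equal $4 = 2 \cdot 2$, in line with the multiplicativity identity.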
\subsection{Detecting when patterns add expressiveness}
Joining patterns into new patterns does not give extra expressive power, but what about patterns which are not joins?
We provide next a useful recipe for detecting when a pattern does add expressive power.
We recall that the \textit{core of a graph} $P$ is its unique (up to isomorphism) induced subgraph which is a core.
\begin{theorem} \label{thm:increase}
Let $\mathcal{F}$ be a finite set of patterns and let
$Q^r$ be a pattern whose core has treewidth $k$. Then,
$\wlk{\mathcal F\cup \{Q^r\}}$ is more expressive than
$\wlk{\mathcal F}$ if every pattern $P^r\in\mathcal F$
satisfies one of the following conditions: (i)~$P^r$ has treewidth $< k$; or (ii)~
$P^r$ does not map homomorphically to $Q^r$.
\end{theorem}
As an example, $\wlk{\{K_{3},\ldots, K_k\}}$ is more expressive than $\wlk{\{K_3,\ldots, K_{k-1}\}}$ for any $k>3$ because of the first condition. Similarly, $\wlk{\{ K_3,\ldots, K_k, C_\ell\}}$ is more expressive than $\wlk{\{K_3,\ldots,K_k\}}$ for odd cycles $C_\ell$. Indeed, such cycles are cores and no clique $K_k$ with $k>2$ maps homomorphically to $C_\ell$.
\section{Experiments \label{sec:experiments}}
We next showcase that $\mathsf{GNN}$
architectures benefit when homomorphism counts of patterns are added
as additional vertex features. For patterns where homomorphism and subgraph
isomorphism counts differ (e.g., cycles) we compare with $\mathsf{GSN}\text{s}$ \citep{bouritsas2020improving}.
We use the benchmark for $\mathsf{GNN}\text{s}$ by \citet{dwivedi2020benchmarkgnns}, as it offers a broad choice of models, datasets and graph classification tasks.
\paragraph{\boldmath Selected $\mathsf{GNN}\text{s}$.}
We select the best architectures from \citet{dwivedi2020benchmarkgnns}: Graph Attention Networks ($\mathsf{GAT}$) \citep{GAT}, Graph Convolutional Networks ($\mathsf{GCN}$) \citep{kipf-loose}, $\mathsf{GraphSage}$ \citep{hyl17}, Gaussian Mixture Models ($\mathsf{MoNet}$) \citep{monet} and $\mathsf{GatedGCN}$ \citep{gated}. We leave out various linear architectures such as $\mathsf{GIN}$ \citep{xhlj19} as they were shown to perform poorly on the benchmark.
\paragraph{Learning tasks and datasets.} As in \citet{dwivedi2020benchmarkgnns} we consider (i)~graph regression and the ZINC dataset \citep{ZINCoriginal,dwivedi2020benchmarkgnns}; (ii)~vertex classification and the PATTERN and CLUSTER datasets \citep{dwivedi2020benchmarkgnns}; and (iii)~link prediction and the COLLAB dataset \citep{hu2020open}. We omit graph classification: for this task, the graph datasets from \citet{dwivedi2020benchmarkgnns} originate from image data and hence vertex neighborhoods carry little information.
\paragraph{Patterns.} We extend the initial features of vertices with
homomorphism counts of cycles $C_\ell$ of length $\ell\leq 10$, when molecular data (ZINC) is concerned, and with homomorphism counts of $k$-cliques $K_k$ for $k\leq 5$, when social or collaboration data (PATTERN, CLUSTER, COLLAB) is concerned. We use the $z$-score of the logarithms of homomorphism counts to make them standard-normally distributed and comparable to other features.
Section~\ref{sec:patterns} tells us that all these patterns will increase expressive power (Theorem~\ref{thm:increase} and Proposition~\ref{prop:cycles}) and are ``minimal''
in the sense that they are not the join of smaller patterns (Proposition~\ref{prop:simplify}). Similar pattern choices were used in
\citet{bouritsas2020improving}. We use DISC \citep{ZhangY0ZC20}\footnote{We thank the authors for providing us with an executable.},
a tool specifically built to get homomorphism counts for large datasets. Each model is trained and tested independently using combinations of patterns.\looseness=-1
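The normalization step can be sketched as follows; the use of $\log(1+x)$ to handle zero counts and the exact per-pattern standardization are our assumptions about details the text leaves unspecified.

```python
import numpy as np

def hom_count_features(counts):
    """Per-pattern z-score of log homomorphism counts.
    `counts`: rows = vertices, columns = patterns."""
    logs = np.log1p(np.asarray(counts, dtype=float))  # log tames the heavy tail
    mu, sigma = logs.mean(axis=0), logs.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)          # constant columns -> zeros
    return (logs - mu) / sigma                        # approx. standard normal
```

The resulting columns are appended to the initial vertex feature vectors before training.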
\paragraph{\boldmath Higher-order $\mathsf{GNN}\text{s}$.}
We do not compare to higher-order $\mathsf{GNN}\text{s}$ since this was already done by \citet{dwivedi2020benchmarkgnns}. They included ring-$\mathsf{GNN}\text{s}$ (which outperform $\mathsf{2WL}$-$\mathsf{GNNs}$) and $\mathsf{3WL}$-$\mathsf{GNNs}$ in their experiments, and these were outperformed by our selected ``linear'' architectures. Although the increased expressive
power of higher-order $\mathsf{GNN}\text{s}$ may be beneficial for learning, scalability and learning issues (e.g., loss divergence) hamper their applicability \citep{dwivedi2020benchmarkgnns}. Our approach thus certainly outperforms higher-order $\mathsf{GNN}\text{s}$ with respect to the benchmark.
\paragraph{Methodology.}
Graphs were divided between training/test as instructed by \citet{dwivedi2020benchmarkgnns}, and all numbers reported correspond to the test set. The reported performance is the average over four runs with different random seeds for the respective combinations of patterns in $\mathcal{F}$, model and dataset. Training times were comparable to the baseline of training models without any augmented features.\footnote{Code to reproduce our experiments is available at \url{https://github.com/LGP-GNN-2021/LGP-GNN}} All models for ZINC, PATTERN and COLLAB were trained on a GeForce GTX 1080 Ti GPU, for CLUSTER a Tesla V100-SXM3-32GB GPU was used.
\smallskip
\noindent
Next we summarize our results for each learning task separately.
\paragraph{Graph regression.}
The first task of the benchmark is the prediction of the solubility of molecules in the ZINC dataset \citep{ZINCoriginal,dwivedi2020benchmarkgnns}, a dataset of about $12\,000$ graphs of small size, each of them consisting of
one particular molecule.
The results in Table \ref{ZINC_table} show that each of our models indeed improves by adding homomorphism counts of cycles and the best result
is obtained by considering all cycles. $\mathsf{GSN}\text{s}$ were applied to the ZINC dataset as well \citep{bouritsas2020improving}.
In Table \ref{ZINC_table} we also report
results by using subgraph isomorphism counts (as in $\mathsf{GSN}\text{s}$): homomorphism counts generally provide better results than subgraph isomorphism counts. Our best result (framed in Table \ref{ZINC_table}) is competitive with the value of $0.139$ reported in \citet{bouritsas2020improving}. Looking at the full results, we see that some cycles are much more important than others. Table \ref{GAT_table} shows which cycles have the greatest impact for the worst-performing baseline, $\mathsf{GAT}$.
Remarkably, adding homomorphism counts makes the $\mathsf{GAT}$ model competitive with the best performers of the benchmark.
\begin{table}[t]
\caption{Results for the ZINC dataset show that homomorphism (hom) counts of cycles improve every model. We compare the mean absolute error (MAE) of each model without any homomorphism count (baseline), against the model augmented with the hom count, and with subgraph isomorphism (iso) counts of $C_3$--$C_{10}$.}
\label{ZINC_table}
\vskip 0.15in
\centering
\begin{sc}
\begin{tabular}{p{2.5cm}p{2.75cm}p{2.7cm}p{2.7cm}}
\toprule
Model & MAE (base) & MAE (hom) & MAE (iso)\\
\midrule
$\mathsf{GAT}$ & 0.47$\pm$0.02 & \textbf{0.22}$\pm$\textbf{0.01} & 0.24$\pm$0.01\\
$\mathsf{GCN}$ & 0.35$\pm$0.01 & \textbf{0.20}$\pm$\textbf{0.01} & 0.22$\pm$0.01\\
$\mathsf{GraphSage}$ & 0.44$\pm$0.01 & \textbf{0.24}$\pm$\textbf{0.01} & 0.24$\pm$0.01\\
$\mathsf{MoNet}$ & 0.25$\pm$0.01 & 0.19$\pm$0.01 & \textbf{0.16}$\pm$\textbf{0.01}\\
$\mathsf{GatedGCN}$ & 0.34$\pm$0.05 & \!\!\!\fbox{\textbf{0.1353}$\pm$\textbf{0.01}} & 0.1357$\pm$0.01\\
\bottomrule
\end{tabular}
\end{sc}
\end{table}
\begin{table}[t]
\caption{The effect of different cycles for the $\mathsf{GAT}$ model over the ZINC dataset, using mean absolute error. }
\label{GAT_table}
\vskip 0.15in
\centering
\begin{sc}
\begin{tabular}{p{2.5cm}c}
\toprule
Set $(\mathcal F)$ & MAE \\
\midrule
None& 0.47$\pm$0.02 \\
$\{C_3\}$ & 0.45$\pm$0.01 \\
$\{C_4\}$ & 0.34$\pm$0.02 \\
$\{C_6\}$ & 0.31$\pm$0.01 \\
$\{C_5,C_6\}$ & 0.28$\pm$0.01 \\
$\{C_3,\ldots,C_6\}$ & 0.23$\pm$0.01\\
$\{C_3,\ldots,C_{10}\}$ & \textbf{0.22}$\pm$\textbf{0.01}\\
\bottomrule
\end{tabular}
\end{sc}
\end{table}
\paragraph{Vertex classification.}
The next task in the benchmark corresponds to vertex classification. Here we analyze two datasets, PATTERN and CLUSTER \citep{dwivedi2020benchmarkgnns}, both containing over $12\,000$ artificially generated graphs
resembling social networks or communities. The task is to predict whether a vertex belongs to a particular cluster or pattern, and
all results are measured using the accuracy of the classifier.
Here too, our results show that homomorphism counts, this time of cliques, tend to improve the accuracy of our models. Indeed, for the PATTERN dataset we see an improvement in all models but $\mathsf{GatedGCN}$
(Table \ref{PATTERN_table}), and three models are improved in the CLUSTER dataset (reported in the appendix). Once again, the best performer in this task is a model that uses homomorphism counts. We remark that for cliques, homomorphism counts coincide with subgraph isomorphism counts (up to a constant factor) so our extensions behave like $\mathsf{GSN}\text{s}$.
\begin{table}[t]
\caption{Results for the PATTERN dataset show that homomorphism counts improve all models except $\mathsf{GatedGCN}$. We compare weighted accuracy of each model without any homomorphism count (baseline) against the model augmented with the counts of the set $\mathcal F$ that showed best performance (best $\mathcal F$).}
\label{PATTERN_table}
\vskip 0.15in
\centering
\begin{sc}
\begin{tabular}{p{5.2cm}p{4.9cm}p{4.9cm}}
\toprule
Model + best $\mathcal F$ & Accuracy baseline & Accuracy best\\
\midrule
$\mathsf{GAT}$\! $\{K_3,K_4,K_5\}$ & 78.83 $\pm$ 0.60 & 85.50 $\pm$ 0.23 \\
$\mathsf{GCN}$\! $\{K_3,K_4,K_5\}$ & 71.42 $\pm$ 1.38 & 82.49 $\pm$ 0.48\\
$\mathsf{GraphSage}$ $\{K_3,K_4,K_5\}$ & 70.78 $\pm$ 0.19 & 85.85 $\pm$ 0.15 \\
$\mathsf{MoNet}$ $\{K_3,K_4,K_5\}$ & 85.90 $\pm$ 0.03 & \textbf{86.63} $\pm$ \textbf{0.03}\\
$\mathsf{GatedGCN}$ $\emptyset$ & 86.15 $\pm$ 0.08 & 86.15 $\pm$ 0.08 \\
\bottomrule
\end{tabular}
\end{sc}
\end{table}
\vspace{-2ex}
\paragraph{Link prediction.}
In our final task we consider a single graph, COLLAB \citep{hu2020open}, with over $235\,000$ vertices, containing information about the collaborators in an academic network, and the task at hand is to predict future collaboration. The metric used in the benchmark is the Hits@50 evaluator \citep{hu2020open}. Here, positive collaborations are ranked among randomly sampled negative collaborations, and the metric is the ratio of positive edges that are ranked at place 50 or above.
Once again, homomorphism counts of cliques improve the performance of all models, see Table \ref{COLLAB_table}. An interesting observation is that this time the best set of features (cliques) does depend on the model, although the best model uses all cliques again.
\begin{table}[t]
\caption{All models improve the Hits@50 metric over the COLLAB dataset. We compare each model without any homomorphism count (baseline) against the model augmented with the counts of the set of patterns that showed best performance (best $\mathcal F$).}
\label{COLLAB_table}
\vskip 0.15in
\centering
\begin{sc}
\begin{tabular}{p{5.2cm}p{4.9cm}p{4.9cm}}
\toprule
Model + best $\mathcal F$& Hits@50 baseline & Hits@50 best \\
\midrule
$\mathsf{GAT}$ \! $\{K_3\}$ & 50.32$\pm$0.55 & 52.87$\pm$0.87 \\
$\mathsf{GCN}$ \! $\{K_3,K_4,K_5\}$ & 51.35$\pm$1.30 & \textbf{54.60}$\pm$\textbf{1.01} \\
$\mathsf{GraphSage}$ \! $\{K_5\}$ & 50.33$\pm$0.68 & 51.39$\pm$1.23 \\
$\mathsf{MoNet}$ \! $\{K_4\}$ & 49.81$\pm$1.56 & 51.76$\pm$1.38 \\
$\mathsf{GatedGCN}$ \! $\{K_3\}$ & 51.00$\pm$2.54 & 51.57$\pm$0.68 \\
\bottomrule
\end{tabular}
\end{sc}
\end{table}
\paragraph{Remarks.}
The best performers in each task use homomorphism counts,
in accordance with our theoretical results, showing that such counts do extend the power of $\mathsf{MPNN}\text{s}$.
Homomorphism counts are also cheap to compute. For COLLAB, the largest graph in our experiments, the homomorphism counts of all patterns we used, for all vertices, could be computed by DISC \citep{ZhangY0ZC20} in less than 3 minutes.
One important remark is that \textit{selecting} the best set of features is still a challenging endeavor. Our theoretical results help us streamline this search, but for now it is still an exploratory task. In our experiments we first looked at adding each pattern individually, and then tried with combinations of those that showed the best improvements. This feature selection strategy incurs considerable cost, both computational and environmental, and needs further investigation.
\section{Conclusion}
We propose local graph parameter enabled $\mathsf{MPNN}\text{s}$ as an efficient way to increase the expressive
power of $\mathsf{MPNN}\text{s}$.
The take-away message is that enriching features with
homomorphism counts of small patterns is a promising add-on to any $\mathsf{GNN}$ architecture, with little to no overhead.
Regarding future work, the problem of which parameters to choose deserves further study.
In particular, we plan to provide a complete characterisation of when adding a new pattern to $\mathcal F$ adds expressive power to the $\wlk{\mathcal F}$-test.
\begin{abstract}
Many robots utilize commercial force/torque sensors to identify physical properties of a priori unknown objects. However, such sensors can be difficult to apply to smaller-sized robots due to their weight, size, and high cost. In this paper, we propose a framework for smaller-sized humanoid robots to estimate the inertial properties of unknown objects without using force/torque sensors. In our framework, a neural network is designed and trained to predict joint torque outputs. The neural network's inputs are the robot's joint angles, steady-state joint errors, and motor currents. All of these can be easily obtained from many existing smaller-sized robots. As the joint rotation direction is taken into account, the neural network can be trained with a smaller sample size, yet still maintains accurate torque estimation capability. Eventually, the inertial properties of the objects are identified using a nonlinear optimization method. Our proposed framework has been demonstrated on a NAO humanoid robot.
\end{abstract}
\section{Introduction}
In order to manipulate different daily objects, it is crucial for humanoid robots to identify inertial properties of those objects. This process usually requires a humanoid robot to continuously interact with an object and then estimate its inertial properties based on the information of the sensors equipped on that robot.
Many existing methods for object physical property identification are based on the measurements using commercial force/torque (F/T) sensors. For example, a dynamic estimation of physical properties using end-effector wrench during general manipulator movement is proposed in \cite{atkeson1985rigid}.
The mass and center of mass (CoM) of an object are estimated by tipping and leaning operations \cite{yu1999estimation}. \cite{mason1986mechanics} and \cite{yoshikawa1991indentification} propose methods focusing on the estimation of the CoM of an object in planar motions.
A similar problem is considered in \cite{lynch1996stable}, which identifies the physical properties of unknown objects by planning planar pushing motions. In \cite{DBLP:conf/iros/HanLC20}, the physical properties of unknown boxes are identified by a force sensing plate attached to the feet of a humanoid robot.
Since commercial F/T sensors are often heavy and expensive, they are not commonly equipped on smaller-sized humanoid robots. Therefore, it is still challenging for smaller-sized humanoid robots to estimate inertial properties such as mass and center of mass (CoM) location of unknown objects \cite{heunis2019reconstructing,hawley2019external,kofinas2013complete}. Despite lacking direct F/T sensing ability, many smaller-sized robots possess other sensors mounted on each joint, such as encoders and electric current sensors \cite{wright2007design, hogg2002algorithms, gouaillier2009mechatronic}. Although such sensors may be noisy in nature, they incorporate information for robot's joint torques, which directly relate to the inertial properties of the object that the robot grips. Here, we are specifically interested in estimating object inertial properties with existing sensors on smaller-sized humanoid robots.
Similar to object inertial property identification, another relevant field of research is robot interaction force or joint torque reconstruction. Many of these works do not use F/T sensors to construct interaction forces. In \cite{smith2005application} and \cite{yilmaz2020neural}, neural networks are used to map the relationship between joint torques without external forces and joint motions. In \cite{marban2019recurrent}, the authors investigate the effects of tool data and video sequences on force estimation for surgical robots and design convolutional neural networks to estimate interaction forces. In \cite{mattioli2016interaction}, joint steady-state control error is utilized to reconstruct interaction forces analytically for a humanoid robot. There are also many works using neural networks to infer interaction forces from visual data \cite{hwang2017inferring} or video \cite{kim2019efficient}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{Fig/Intro.PNG}
\caption{The NAO is estimating the inertial properties of an unknown screwdriver.}
\label{fig:intro}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.95\textwidth]{Fig/BlockDiagram.PNG}
\caption{Block diagram for the neural network training process (top, blue) and the object inertial property identification process (bottom, purple).}
\label{fig:blockdiag}
\end{figure*}
On the one hand, some of the above-mentioned works directly model joint torques as proportional to the electric currents of the motors, ignoring the noisy nature of the current, which can result in poor estimation performance. On the other hand, the aforementioned approaches often ignore frictional torques, which cannot simply be calculated analytically. In addition, many vision-based methods require a complex system setup, which makes them difficult to apply widely to various robots.
In this paper, we propose a framework for a smaller-sized humanoid robot to identify the mass and CoM location of a priori unknown objects without using F/T sensors. A neural network is first designed to estimate joint torques. The network inputs include motor electric current, joint angle and steady-state joint angle error, which can be measured by encoders and current sensors commonly existing in many smaller-sized robots. In the training process, the robot grips an object with known mass and CoM location to reach different configurations. In each configuration, the modelled joint torques can be derived analytically based on the known end-effector wrenches and are later used as labels for the neural network training.
In the object inertial property identification process, the robot first tests the object by moving it through different arm configurations. Then a nonlinear optimization technique is implemented to search for a set of optimal inertial parameters of the object, for which the modelled torques best match the torques estimated by the neural network.
For both training and identification processes, we take the direction of frictional torque into account and design the training and identification trajectories such that the joints always do positive work (see Section \ref{section:poswork}) when approaching the desired configurations. This method reduces the sample size for the network training process, but still enables accurate joint torque estimation capability of the network.
The proposed method is demonstrated on a NAO humanoid robot (Fig. \ref{fig:intro}). Six a priori unknown objects are used to evaluate the method's performance. Results show that our proposed method provides relatively accurate estimates of the masses and CoM locations of those test objects.
\section{Method Overview}
Our proposed approach includes two steps: \textbf{training} and \textbf{identification}. The block diagram is shown in Fig. \ref{fig:blockdiag}.
In the \textbf{training} step, a neural network is used to estimate joint torque, taking joint electric current, joint angle steady-state error and joint angle as inputs. The sensor outputs for several objects with a priori known inertial properties are collected during the training process. The modelled joint torques are derived analytically from the forward kinematics of the robot and used as labels for the training data. The network is then trained on these inputs and labels.
In the \textbf{identification} step, the robot grips an unknown object and tests it in different arm configurations. Then a nonlinear optimization approach is applied to search for a set of optimal inertial parameters of the object that best match the joint torques estimated by the neural network with the modelled joint torques.
A NAO robot is used as our testing platform. Each arm of the robot includes five joints: ShoulderPitch, ShoulderRoll, ElbowYaw, ElbowRoll and WristYaw. Since the moment arm between the wrist and the hand gripping position of the robot is very short, the wrist torque barely changes across arm configurations. In addition, the shoulder motors of the robot are much weaker than the elbow motors, which limits their range of motion in different arm configurations. As a result, the ElbowYaw and ElbowRoll joints are used in our training and identification steps.
Considering that the static joint torque is affected by the direction of rotation, a planning strategy is developed. This planning method significantly reduces the number of training samples, which prevents the robot's motors from overheating during long-term experiments, while still enabling accurate identification results.
\section{Neural Network Training}
\subsection{Neural Network Input}\label{section:nninput}
The joint electric current, joint angle steady-state error and joint angle are chosen as inputs for the neural network.\newline
\textbf{Joint electric current:}
In theory, the magnitude of measured joint torque $\tau_{m}$ can be directly calculated from the electric current $I$ of the joint motor:
$\tau_{m} = IK_{a}K_{b},$
in which $K_{a}$ and $K_{b}$ are the motor constant and the motor reduction ratio. However, electric current alone does not infer the torque direction and it is usually too noisy to be used to measure accurate joint torques. Another shortcoming is the lack of consideration of friction. \newline
\textbf{Joint angle steady-state error:} PD control has been adopted by many robots as their joint control strategy \cite{mattioli2016interaction, heredia2000high, robinson1999series}. The joint angle steady-state error from such controller can directly determine both the magnitude and the direction of the joint torque $\tau_{m}$:
$\tau_{m} = (q_{d} - q)K_{p}RK_{a}K_{b}$,
in which $q_{d}$ and $q$ are the command joint angle value and the measured joint angle value by an encoder. $K_{p}$ is the proportional controller gain and $R$ is a constant between motor current and controller output. \newline
\textbf{Joint angle:} Static friction generates measurement error for joint torque estimation for both the above-mentioned methods. Considering the static friction can be influenced by the joint configuration, the joint angle ($q$) is also included as an input for the neural network.
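The two analytic relations above can be written directly as a sketch; the constants $K_a$, $K_b$, $K_p$ and $R$ are motor-specific, and the function names are ours.

```python
def torque_from_current(I, K_a, K_b):
    """|tau_m| = I * K_a * K_b: magnitude only, no direction, friction ignored."""
    return I * K_a * K_b

def torque_from_error(q_d, q, K_p, R, K_a, K_b):
    """tau_m = (q_d - q) * K_p * R * K_a * K_b: signed estimate from the
    PD controller's steady-state joint angle error."""
    return (q_d - q) * K_p * R * K_a * K_b
```

Neither relation accounts for static friction, which is why both quantities (plus the joint angle) are fed to the neural network instead of being used directly.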
\subsection{Robot Arm Motion Planning}
For each training object, the moving ranges of the ElbowYaw and ElbowRoll joints are $-80^\circ$ to $-20^\circ$ and $-70^\circ$ to $-20^\circ$, respectively. The configurations for both joints are uniformly distributed at $10^\circ$ intervals, and the training configuration space is shown as blue circles in Fig. \ref{fig:torquederiv}(a), left. Sensor data from $42$ ($7 \times 6$) different configurations are collected.
A smooth trajectory which connects all the nodes in the configuration space is constructed for the training process. Initially, the configuration which is the lowest in the Cartesian space is chosen as a starting node of the trajectory. Afterwards, an iterative process takes place which connects the previous node to its nearest unvisited node in the Cartesian space. This process ends when all the nodes in the configuration space are connected. The planned trajectory is shown in Fig. \ref{fig:torquederiv}(a), right.
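The trajectory construction just described is a greedy nearest-neighbor tour; a minimal sketch follows (we assume the last coordinate of each point is the vertical one when picking the lowest starting node).

```python
import numpy as np

def greedy_training_path(points):
    """Visit all training configurations: start at the lowest point in
    Cartesian space, then repeatedly move to the nearest unvisited node."""
    pts = np.asarray(points, dtype=float)
    order = [int(np.argmin(pts[:, -1]))]          # lowest node first
    unvisited = set(range(len(pts))) - {order[0]}
    while unvisited:
        cur = pts[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - cur))
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```

The returned index order defines the smooth trajectory through all 42 training nodes.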
\subsection{Joint Moving Direction Control} \label{section:poswork}
We define the \textbf{effective joint torque} ($\tau$) as the torque exerted on a joint by all external forces acting on the robot (including the robot's own weight). When the robot reaches steady state, the \textbf{motor torque} ($\tau_m$) is balanced by the sum of the effective joint torque and the \textbf{frictional torque} ($\tau_f$). The purpose of the neural network in this paper is to eliminate the frictional torque and estimate the effective joint torque accurately.
The magnitude of joint motor torque at steady state can be significantly affected by the moving direction (clockwise or counter-clockwise) in which the joint rotates to its final state. Two possible conditions need to be considered. Given an example of arm link rotating around a fixed axis, if the arm link reaches its steady state along a counter-clockwise direction (Fig. \ref{fig:torquederiv}(b), left), the frictional torque $\tau_f$ is in clockwise direction to oppose the motion.
In this case, the motor torque $\tau_m$ does positive work while the effective joint torque $\tau$ does negative work and $\tau_m = \tau + \tau_f$. When the arm link reaches its steady state along a clockwise direction, the motor torque does negative work while the effective joint torque does positive work and $\tau_m = \tau - \tau_f$, as shown in Fig. \ref{fig:torquederiv}(b), right. The above-mentioned two conditions lead to different joint angle steady-state errors and joint currents.
\textbf{Positive-work direction} of a joint is defined as the rotational direction along which motor torque $\tau_m$ does positive work.
Positive-work direction can be determined as the opposite direction of effective joint torque $\tau$.
Regarding the example in Fig. \ref{fig:torquederiv}(b), left, the joint is rotating along positive-work direction to reach steady state.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{Fig/Configuration.PNG}
\caption{(a) Left: Training (blue circle) and testing (red square) elbow joint configurations for the neural network. Right: Planned trajectory for connecting training joint configurations visualized in Cartesian space. (b) Friction effect on joint torque. Left: Counter-clockwise motion. Right: Clockwise motion. (c) Analytical derivation of joint torque with known objects. Left: An example of a robot. Right: Free body diagram of $i$-th link.}
\label{fig:torquederiv}
\end{figure}
To simplify the training process, the robot arm is controlled to move only along the positive-work direction (this also applies to the testing process). Since the inertial properties of the objects are known in the training process, the joint torques can be solved analytically for each configuration (see Section \ref{section:torquedrivation}), and the positive-work direction of each controlled joint can then be acquired. In practice, the robot first moves to a generated prior configuration before moving to each node in the designed configuration space (Fig. \ref{fig:torquederiv}(a), left), such that each joint always reaches its desired configuration along a positive-work direction.
The prior configuration is generated by rotating each joint by a certain angle in the direction opposite to its positive-work direction.
\subsection{Modelled Torque Derivation} \label{section:torquedrivation}
The modelled joint torques, based on the inertial properties of the training objects, are used as labels for the neural network. Given a serial robot with $n$ links and $n$ joints as shown in Fig. \ref{fig:torquederiv}(c), left, the coordinates of the joints $r_{j_i}$ ($i=1,\dots,n$), the end-effector $r_{j_{n+1}}$ and the CoMs of the links $r_{l_i}$ ($i=1,\dots,n$) can be derived from kinematics.
The free body diagram of $i$-th link is shown in Fig. \ref{fig:torquederiv}(c), right.
$f_{i,i+1}$ and $\mu_{i,i+1}$ are the force and moment applied on $i$-th link by $(i+1)$-th link.
Similarly, $f_{i,i-1}$ and $\mu_{i,i-1}$ are the force and moment applied on $i$-th link by $(i-1)$-th link.
When the robot is stationary, the following equations follow from Newton's laws.
\begin{equation} \label{eq:force}
f_{i,i+1} + m_i g + f_{i,i-1} = 0
\end{equation}
\begin{equation} \label{eq:moment}
(r_{j_{i+1}} - r_{l_i}) \times f_{i,i+1} + \mu_{i,i+1} + (r_{j_i} - r_{l_i}) \times f_{i,i-1} + \mu_{i,i-1} = 0
\end{equation}
where $m_i$ is the mass of the $i$-th link. Eq. \ref{eq:force} expresses the balance of forces exerted on the link; Eq. \ref{eq:moment} expresses the balance of moments about the CoM of the link.
The forces and moments between two adjacent links are equal in magnitude but opposite in direction, i.e.
\begin{equation}
f_{i,i+1} = - f_{i+1,i}
\end{equation}
\begin{equation}
\mu_{i,i+1} = - \mu_{i+1,i}.
\end{equation}
Given the external force and moment applied on the end-effector, the forces and moments at all joints can be solved recursively from the end-effector to the base.
The torque applied by the motor at a given joint is the component of the moment along the direction of the rotational axis.
Given the unit vector $z_i$ along rotation axis of joint $i$, the joint torque can be solved as
\begin{equation}
\tau_i = \mu_{i,i-1} \cdot z_i.
\end{equation}
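A minimal sketch of this recursion follows; the array layout, sign bookkeeping via action-reaction, and function name are ours, with `f_ext`/`mu_ext` denoting the known wrench applied on the last link.

```python
import numpy as np

def static_joint_torques(r_j, r_l, masses, z_axes, f_ext, mu_ext,
                         g=np.array([0.0, 0.0, -9.81])):
    """Propagate forces/moments from the end-effector to the base and
    project each joint moment onto its rotation axis (statics only).
    r_j: joint positions r_{j_1}..r_{j_{n+1}} (last entry = end-effector),
    r_l: link CoM positions, z_axes: unit rotation axes z_i."""
    n = len(masses)
    f_next, mu_next = np.asarray(f_ext, float), np.asarray(mu_ext, float)
    taus = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Force balance: f_{i,i-1} = -(f_{i,i+1} + m_i g)
        f_prev = -(f_next + masses[i] * g)
        # Moment balance about the link CoM
        mu_prev = -(np.cross(r_j[i + 1] - r_l[i], f_next) + mu_next
                    + np.cross(r_j[i] - r_l[i], f_prev))
        taus[i] = mu_prev @ z_axes[i]        # tau_i = mu_{i,i-1} . z_i
        f_next, mu_next = -f_prev, -mu_prev  # action-reaction for link i-1
    return taus
```

For a single horizontal link of mass 1 kg with CoM at 0.5 m and a 10 N downward tip load, the recursion reproduces the expected gravity-plus-load moment about the joint axis.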
\subsection{Neural Network Details}
A multilayer perceptron (MLP) is used as the structure of the neural network (Fig. \ref{fig:MLP}).
For each set of data, the input to the network is a $28$-dimensional vector, consisting of the joint angle errors of the elbow joints ($2$ dimensions), the currents of the elbow joints ($2$ dimensions) and the joint angles of the robot ($24$ dimensions).
The effective joint torques (see Section \ref{section:poswork}) of elbow joints ($2$-dimensional vector) are the output of the network.
In summary, the neural network is a $28-50-2$ MLP, with ReLU as activation function in hidden layer. This trained network aims to provide estimation for the effective torque using the robot's sensor data.
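The architecture can be written down directly; the weights below are random placeholders, since the paper does not specify the training procedure or framework.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((50, 28)), np.zeros(50)  # hidden layer
W2, b2 = 0.1 * rng.standard_normal((2, 50)), np.zeros(2)    # output layer

def torque_net(x):
    """28-50-2 MLP with a ReLU hidden layer: input = 2 elbow joint-angle
    errors + 2 elbow currents + 24 joint angles; output = the 2 effective
    elbow torques."""
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU
    return W2 @ h + b2
```

In practice the parameters would be fitted by regression against the analytically modelled torque labels from Section \ref{section:torquedrivation}.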
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{Fig/MLP.png}
\caption{Neural network for effective joint torque estimation.}
\label{fig:MLP}
\end{figure}
\begin{algorithm}[t!] \label{alg:poswordet}
\caption{Positive-work Direction Detection}
\begin{algorithmic}
\STATE Move to desired configuration $q_d$ \\
$\text{Joint Index } i \leftarrow$ 1 \\
\WHILE{$i \leqslant N$}
\STATE (For positive direction)
\STATE Rotate joint $i$ from $(q_d)_i$ to $(q_d)_i - \Delta q$
\STATE Sleep for time $\Delta t$
\STATE Rotate joint $i$ from $(q_d)_i - \Delta q$ to $(q_d)_i$
\STATE Sleep for time $\Delta t$
\STATE Collect joint angle error $e_{i,pos} = (q_d)_i - q$
\STATE (For negative direction)
\STATE Rotate joint $i$ from $(q_d)_i$ to $(q_d)_i + \Delta q$
\STATE Sleep for time $\Delta t$
\STATE Rotate joint $i$ from $(q_d)_i + \Delta q$ to $(q_d)_i$
\STATE Sleep for time $\Delta t$
\STATE Collect joint angle error $e_{i,neg} = (q_d)_i - q$
\IF{$|e_{i,pos}| \geqslant |e_{i,neg}|$}
\STATE Mark flag for joint $i$ as $F_i = 1$
\ELSIF{$|e_{i,pos}| < |e_{i,neg}|$}
\STATE Mark flag for joint $i$ as $F_i = -1$
\ENDIF
\STATE $i = i + 1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Object Inertial Property Identification}
A nonlinear optimization technique is adopted for identifying the mass and CoM location of an a priori unknown object. In practice, the robot holds the object and reaches 20 different arm configurations. The estimated effective torques ${\bf \tau}_{i} = [\tau_{i}^{1},\tau_{i}^{2}] \in \mathbb{R}^{2}$ can be obtained from the trained neural network. The effective joint torques can also be formulated as a function related to the inertial properties of the unknown object and the configuration of the robot ${\bf q}$:
\begin{equation}
\begin{bmatrix}
{\bf \tau}_{1}\\
{\bf \tau}_{2}\\
\vdots\\
{\bf \tau}_{n}
\end{bmatrix}
=
\begin{bmatrix}
f(\textbf{CoM},m,{\bf q_{1}})\\
f(\textbf{CoM},m,{\bf q_{2}})\\
\vdots \\
f(\textbf{CoM},m,{\bf q_{n}})
\end{bmatrix}
\end{equation}
Here we search for the optimal values $\textbf{CoM}^{*}$ and $m^{*}$ that best fit the modelled joint torques to the estimated effective joint torques. We formulate this identification problem as a constrained nonlinear optimization problem, defined as:
\begin{align*}
\underset{\textbf{CoM},m}{\text{argmin}} \quad & J = \sum_{i = 1}^{n}\frac{1}{2}[w_{1}({\bf \tau}_{i}^{1} - f_{1}(\textbf{CoM},m,{\bf q_{i}}))^{2} \\
& \quad + w_{2}({\bf \tau}_{i}^{2} - f_{2}(\textbf{CoM},m,{\bf q_{i}}))^{2}]\\
\textrm{subject to:} \quad &
\textbf{CoM}_{min}< \textbf{CoM} < \textbf{CoM}_{max}\\
& \quad m > 0
\end{align*}
in which $w_{1}$ and $w_{2}$ are the weights for costs of two elbow joints. $\text{CoM}_{min}$ and $\text{CoM}_{max}$ represent a cuboidal convex hull of the geometry of the object.
Eventually, the CoM location is transformed to the object's body frame based on the known gripping position and orientation of the object.
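The objective $J$ can be sketched as follows, where `f_model` stands for the modelled-torque function $f(\textbf{CoM},m,{\bf q})$, whose robot-specific implementation is not shown; any off-the-shelf constrained solver (e.g. SLSQP) could then minimize this cost subject to the box constraints.

```python
def identification_cost(com, m, taus_est, configs, f_model, w=(1.0, 1.0)):
    """Weighted least-squares mismatch between the estimated effective elbow
    torques and the modelled torques over all tested configurations."""
    J = 0.0
    for tau, q in zip(taus_est, configs):
        pred = f_model(com, m, q)  # modelled torques [tau^1, tau^2]
        J += 0.5 * (w[0] * (tau[0] - pred[0]) ** 2
                    + w[1] * (tau[1] - pred[1]) ** 2)
    return J
```

Minimizing over $(\textbf{CoM}, m)$ with the bounds $\textbf{CoM}_{min} < \textbf{CoM} < \textbf{CoM}_{max}$ and $m > 0$ yields the identified inertial parameters.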
\subsection{Positive Work Direction Searching}
For unknown objects, the joint torque cannot be derived analytically, so the positive-work directions are detected experimentally.
For each joint, we rotate the joint back and forth around the desired configuration and compare the steady-state joint angle errors reached from the two directions.
The direction yielding the larger error magnitude is marked as the positive-work direction. The details are given in Algorithm 1.
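The back-and-forth probing can be sketched as follows. `move_to` and `read_error` are hypothetical helpers standing in for the robot motion and sensing calls, and `delta` is an assumed probing offset; this mirrors the comparison described above, not the exact Algorithm 1 listing.

```python
def find_positive_work_directions(joints, move_to, read_error, delta=0.1):
    """For each joint, approach the desired angle from both directions and
    compare the steady-state error magnitudes; the approach direction that
    leaves the larger error is flagged as the positive-work direction.

    joints     : dict {joint index i: desired angle q_d}
    move_to    : hypothetical helper commanding joint i to angle q and
                 waiting for steady state
    read_error : hypothetical helper returning the steady-state error of
                 joint i
    """
    flags = {}
    for i, q_d in joints.items():
        # Approach the target from below (rotating upward into it).
        move_to(i, q_d - delta)
        move_to(i, q_d)
        err_up = abs(read_error(i))
        # Approach the target from above (rotating downward into it).
        move_to(i, q_d + delta)
        move_to(i, q_d)
        err_down = abs(read_error(i))
        # Flag F_i = +1 or -1 for the direction with the larger error.
        flags[i] = 1 if err_up > err_down else -1
    return flags
```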
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{Fig/Setup.PNG}
\caption{Experimental setup. (a) Robot holds a dumbbell with known weight firmly during the training experiments. (b) The weight of a sample of a designed dumbbell. (c) A snapshot of the NAO gripping the dumbbell.}
\label{fig:setup}
\end{figure}
\section{Experiments}
\subsection{Experimental Design}
The inertial properties of the training and testing objects are pre-measured, serving as known inputs or as ground truth for evaluation. The CoM coordinates are defined in the end-effector frame (Fig. \ref{fig:setup}(c)), whose origin is located at the center of the end-effector and whose axes are aligned with the wrist frame (Fig. \ref{fig:setup}(a)).
As shown in Fig. \ref{fig:setup}(a), we design a dumbbell with adjustable weights for the network training experiments. The mass and CoM of the dumbbell can be changed by adjusting the weights on each side (Fig. \ref{fig:setup}(b)). In the experiments, the robot grips the dumbbell tightly without sliding (Fig. \ref{fig:setup}(c)). The network training dataset includes 13 dumbbells with masses varying from 34 g to 334 g. Since the dumbbell is rotationally symmetric about the y axis of the end-effector frame, the x and z components of its CoM location are zero. The y components of the CoM locations in the training set range from 0 to -50.7 mm.
Three dumbbells and three household tools, shown in Fig. \ref{fig:torqe_est}(a), are tested to verify the proposed method.
The ground-truth physical properties of the test objects are listed in Table \ref{table:phyprop}.
Like the dumbbells, the screwdriver is rotationally symmetric.
The wrench and pliers are not, so both the y and z components of their CoM locations are non-zero.
\subsection{Measurements Processing}
In the training process, 500 measurement samples are collected for each pair of dumbbell and configuration.
The raw measurements for each joint of the NAO robot are the commanded joint angle, the measured joint angle and the electric current. The steady-state joint angle error is obtained by subtracting the measured joint angle from the commanded joint angle.
Given the inertial properties of the object and the joint angles of the robot, the effective joint torque can be derived analytically.
Fig. \ref{fig:measure} shows an example of training experiments.
The mass of the dumbbell is 284.0 g and the CoM is (0, -47.1 mm, 0).
The sensor measurements and the derived effective joint torques are shown in Fig. \ref{fig:measure}(b).
The changing trends of the joint angle error and the current are in line with that of the joint torque, and the joint angle error is visibly more stable and less noisy than the electric current.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Fig/Measurement.PNG}
\caption{(a) Snapshots of an example of the training experiments. (b) From the top to the bottom show the joint angle, control steady-state error, electric current and ground truth torque for the elbow yaw (left) and elbow roll (right) relative to the training experiment shown in (a).}
\label{fig:measure}
\end{figure}
\subsection{Neural Network Joint Torque Output Evaluation}
The neural network is trained by the dataset collected in the training process.
To evaluate the trained network, the robot grips different unknown objects and tests them in different arm configurations.
We compare the network's effective joint torque output with the reference joint torque derived from the human-measured inertial properties of the objects.
In addition, joint torques analytically derived from the joint angle steady-state error are included in the comparison.
This analytical baseline assumes that the joint angle steady-state error is proportional to the joint torque (see Section \ref{section:nninput}).
The proportionality coefficients are identified by least-squares fitting on the same training dataset as the neural network.
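Under the stated proportionality assumption, the per-joint gain of the analytical baseline reduces to a one-parameter ordinary least-squares fit. The sketch below is illustrative (the function name is an assumption, and a real fit might also include an offset term):

```python
import numpy as np

def fit_error_torque_gain(angle_errors, torques):
    """Least-squares gain k in the baseline model tau ~ k * (q_d - q),
    fitted per joint from the training data."""
    e = np.asarray(angle_errors, float).reshape(-1, 1)
    t = np.asarray(torques, float).reshape(-1, 1)
    # Solve min_k ||e k - t||^2 in closed form via lstsq.
    k, *_ = np.linalg.lstsq(e, t, rcond=None)
    return float(k[0, 0])
```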
To explore the effect of each input component, we also trained neural networks with different input subsets as an ablation study.
The evaluation metric for joint torque output is defined by mean absolute percentage error (MAPE):
\begin{equation}
e_\tau = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{\tau_i}-\tau_i}{\tau_i} \right| \times 100\%
\end{equation}
where $\hat{\tau_i}$ is the estimated torque, $\tau_i$ is the reference torque, and $n$ is the number of estimates.
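The metric above transcribes directly to code (assuming the reference torques are non-zero):

```python
import numpy as np

def mape(tau_hat, tau):
    """Mean absolute percentage error between estimated and reference
    torques, in percent."""
    tau_hat = np.asarray(tau_hat, float)
    tau = np.asarray(tau, float)
    return float(np.mean(np.abs((tau_hat - tau) / tau)) * 100.0)
```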
\begin{table*}[t!]
\renewcommand{\arraystretch}{1.3}
\caption{MAPE of Torque Estimation}
\label{table:mape_torque}
\begin{center}
\begin{tabular}{|c|c|c|c c c c c c c|}
\hline
\multicolumn{3}{|c|}{Input of neural network} & \multicolumn{7}{|c|}{Mean absolute percentage error (MAPE) (\%)}\\
\hline
$q_d-q$ & $I$ & $q$ & Dumbbell 1 & Dumbbell 2 & Dumbbell 3 & Screwdriver & Wrench & Pliers & Average \\
\hline
\checkmark & \checkmark & \checkmark & 6.69 & 6.10 & 5.80 & 10.36 & 8.70 & 7.22 & 7.48 \\
\checkmark & & \checkmark & 7.35 & 6.17 & 5.68 & 7.80 & 6.65 & 10.81 & 7.41 \\
& \checkmark & \checkmark & 12.90 & 9.99 & 8.34 & 24.91 & 21.88 & 13.41 & 15.24 \\
\checkmark & \checkmark & & 15.64 & 12.78 & 15.17 & 23.96 & 23.23 & 15.28 & 17.68 \\
\checkmark & & & 12.60 & 12.80 & 13.25 & 17.63 & 17.09 & 9.41 & 13.80 \\
& \checkmark & & 18.37 & 17.35 & 16.79 & 30.40 & 29.25 & 16.23 & 21.40 \\
& & \checkmark & 19.30 & 14.44 & 14.18 & 32.17 & 17.57 & 21.01 & 19.78 \\
\cline{1-3}
\multicolumn{3}{|c|}{Analytical method by $q_d - q$} & 21.11 & 16.25 & 12.57 & 49.32 & 40.56 & 9.99 & 24.97 \\
\hline
\end{tabular}
\end{center}
\vspace{-1em}
\end{table*}
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{Fig/TorqueEstimation.PNG}
\caption{(a) Snapshots of the testing samples. (b) The reference joint torque (blue), the estimated joint torque by neural network (red), the analytically derived joint torque by joint angle error (black) for the elbow yaw (up) and elbow roll (bottom) for the dumbbell 3 sample. (c) The results of the screwdriver sample. (d) The results of the wrench sample.}
\label{fig:torqe_est}
\end{figure}
The results of joint torque estimation by neural networks with different inputs and analytical method are summarized in Table \ref{table:mape_torque}.
The results show that the network learns the correlation between the joint torques and the available measurements: the neural networks perform much better than the analytical method, and the joint angle error input improves the estimation accuracy significantly.
The MAPE of the network using only the joint angle error and joint angle as inputs is close to that of the network with all three inputs.
We nevertheless feed the estimates from the network with all three inputs into the identification process, because they incorporate information from the electric current sensors in addition to the encoders.
The joint torque curves during identification process of 3 examples of dumbbell 3, screwdriver and wrench are plotted in Fig. \ref{fig:torqe_est}(b), (c) and (d) respectively.
Although the neural network estimates fluctuate slightly, they closely follow the reference values.
\subsection{Inertial Properties Identification Evaluation}
Based on the estimation of joint torques, the mass and CoM location of the unknown object can be obtained via nonlinear optimization.
Since the dumbbells and screwdriver are rotationally symmetric, their CoM locations can be represented by the y component alone.
During identification, the x and z components of their CoM locations are therefore fixed to zero, while the y components and masses are identified.
The wrench and pliers are not rotationally symmetric, so both the y and z components of their CoMs are non-zero.
In these cases the x components are fixed to zero and the remaining three parameters (two for CoM and one for mass) are estimated.
\begin{table*}[t!]
\renewcommand{\arraystretch}{1.3}
\caption{Physical Properties Identification}
\label{table:phyprop}
\begin{center}
\begin{tabular}{|c|c c|c c|c c|}
\hline
\multirow{2}{3em}{Object} & \multicolumn{2}{|c|}{Reference} & \multicolumn{2}{|c|}{Estimation} & \multicolumn{2}{|c|}{Error metric} \\
\cline{2-7}
& Mass (g) & CoM (mm) & Mass (g) & CoM (mm) & Mass error (\%) & CoM error (\%) \\
\hline
Dumbbell 1 & 234.0 & (0, 0, 0) & 227.5 & (0, -4.7, 0) & 2.78 & 3.02 \\
Dumbbell 2 & 234.0 & (0, -21.6, 0) & 246.8 & (0, -20.5, 0) & 5.47 & 0.71 \\
Dumbbell 3 & 334.0 & (0, -16.9, 0) & 356.2 & (0, -10.1, 0) & 6.65 & 4.37 \\
Screwdriver & 83.0 & (0, -52, 0) & 85.7 & (0, -67.9, 0) & 3.25 & 7.25 \\
Wrench & 125.5 & (0, -18.27, 3.5) & 121.6 & (0, -30.5, 1.3) & 3.11 & 5.58 \\
Pliers & 186.5 & (0, -41, -24) & 179.1 & (0, -39.3, -20.1) & 3.96 & 2.31 \\
\hline
\end{tabular}
\end{center}
\vspace{-1em}
\end{table*}
We use the following two error metrics to evaluate the results of physical properties identification.
The absolute percentage error between estimated mass and ground truth is defined as:
\begin{equation}
e_m = \frac{\left| \hat{m}-m \right|}{m} \times 100\%
\end{equation}
where $\hat{m}$ is the estimated mass, $m$ is the reference mass.
The percentage error of CoM is defined as:
\begin{equation}
e_{\text{com}} = \frac{\left| \hat{r}_{\text{com}} - r_{\text{com}} \right|}{L_{\text{diag}}} \times 100\%
\end{equation}
where $\hat{r}_{\text{com}}$ and $r_{\text{com}}$ are the coordinates of the estimated and ground-truth CoM.
$L_{\text{diag}}$ is the length of the diagonal of the smallest cuboid that can enclose the object.
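The two metrics transcribe directly; `L_diag` is the enclosing-cuboid diagonal defined above:

```python
import numpy as np

def mass_error(m_hat, m):
    """Absolute percentage error of the estimated mass."""
    return abs(m_hat - m) / m * 100.0

def com_error(r_hat, r, L_diag):
    """CoM error: Euclidean distance between estimated and reference CoM,
    normalized by the diagonal of the smallest enclosing cuboid."""
    r_hat = np.asarray(r_hat, float)
    r = np.asarray(r, float)
    return float(np.linalg.norm(r_hat - r) / L_diag * 100.0)
```

For example, the Dumbbell 1 row of Table \ref{table:phyprop} follows from `mass_error(227.5, 234.0)` giving about 2.78\%.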
The results of the physical property identification for unknown objects are shown in Table \ref{table:phyprop}. The mass estimation error for all the unknown objects is less than 7\%, and the CoM location estimation error for both the 1D (dumbbells and screwdriver) and 2D (wrench and pliers) cases is less than 8\%. These results show that the proposed method obtains relatively accurate estimates without using commercial F/T sensors.
\section{Conclusion}
This paper proposed a neural-network-based framework for estimating the inertial properties of unknown objects.
An MLP is designed to estimate the effective joint torques, taking the joint angle steady-state error, electric current and joint angle as inputs.
The mass and CoM location of an unknown object can then be identified from the joint torques estimated by the network.
By distinguishing the different frictional torque cases arising from the rotational directions of the joints, we develop a planning strategy for both the training and testing processes.
In this way, the network can be trained with a relatively small dataset.
Experiments on the NAO robot show that the average percentage error of the torque estimation is 7.48\%, and the physical properties are identified with a mass error of less than 7\% and a CoM error of less than 8\%.
In conclusion, the proposed method obtains relatively accurate estimates of the physical properties of unknown objects without using commercial F/T sensors.
\section*{Acknowledgment}
The authors thank Yunshan Ma for the suggestions on neural network and Sipu Ruan for the helpful discussion about experimental setup.
Zizhou Lao acknowledges the financial support from NUS Research Scholarship.
\bibliographystyle{IEEEtran}
\section{Introduction}
In the color-magnitude diagram (CMD) of any stellar aggregate, blue straggler stars (BSSs) typically appear as a sparse and scattered population of stars lying at brighter magnitudes and bluer colors than the cluster main-sequence turnoff (MS-TO) stars~\citep{sandage53, fe97, fe03, piotto04, la07, leigh07, fe12, simu16, fe18}.
Their location in the CMD, combined with further observational evidence \citep{shara97,gilliland98,fe06,fiore14}, suggests that they are hydrogen-burning stars more massive than the parent-cluster stars at the MS-TO. These exotica have been suggested as powerful tools to investigate the internal dynamical evolution of the parent cluster \citep{fe12,ale16,la16,fe18}. BSSs are generated through two main formation channels: (1) mass-transfer (MT) phenomena and/or coalescence in primordial binary systems \citep{mc64}, and (2) direct stellar collisions \citep{hills76}. While the first process (driven by the secular evolution of binary systems) is common to any stellar environment (low- and high-density systems such as globular and open clusters, dwarf galaxies, and the Galactic field), the second requires a high-density environment, where the rate of stellar collisions is enhanced. Note that a collisional environment can also affect the efficiency of the MT formation channel, since dynamical interactions involving single and binary stars favor the tightening of binaries. The congested central regions of high-density globular clusters (GCs) are the ideal habitat for collisions. Hence in such an environment both BSS channels can be active \citep{fe93,fe95}, although the MT formation channel seems to be the dominant one (see \citealt{davies04,knigge09}).
Although a few intriguing spectroscopic features have been interpreted as the fingerprints of the MT formation channels (see \citealt{fe06}), BSSs produced by different formation processes still appear photometrically indistinguishable. In this respect, a promising route of investigation was opened by the discovery of two distinct sequences of BSSs in the post core-collapse cluster M30 by \citet{fe09}. After this discovery, a similar feature has been detected in NGC 362 \citep{da13} and NGC~1261 \citep{si14}. In only one case has a double BSS sequence been detected in a young cluster in the Large Magellanic Cloud \citep{li18a,li18b}. However, it has been argued \citep{dal19a,dal19b} that the observed bifurcation could be an artifact due to field star contamination.
The main characteristic ubiquitously detected in these clusters is the presence of a narrow BSS sequence on the blue side of the CMD, separated by a clear-cut gap from a more scattered population of BSSs on the red side. The narrowness of the blue sequence demonstrates that it is populated by a nearly coeval population of stars with different masses, generated over a relatively short period of time. Moreover, the excellent agreement of the blue-sequence locus with collisional isochrones \citep{sills09} suggests that these objects formed through collisions. The red sequence, instead, is by far too red to be reproduced by collisional models of any age. All these facts support the hypothesis that the origin of the blue BSS sequence is related to a phenomenon able to enhance the probability of collisions over a relatively short period of time, thus promoting the formation of collisional BSSs. \citet{fe09} proposed that this phenomenon is the cluster core-collapse (CC). This hypothesis has recently been confirmed by numerical simulations \citep{pz19}.
\begin{figure*}
\centering
\includegraphics[width=11.9truecm]{frame2.pdf}
\caption{Comparison between the central $1.5\hbox{$^{\prime\prime}$}\times1.5\hbox{$^{\prime\prime}$}$ region of M15 as seen through the super sampled B$_{435}$ image, and that observed through a single exposure in the same filter (left and right panels, respectively). North is up and East is left.}
\label{fig:frame}
\end{figure*}
In this context, \citet{fe09} also suggested that the properties of the blue-BSS sequence (essentially its extension in luminosity) can be used to date the epoch of CC. In short, while the MS-TO luminosity is a sensitive indicator of the age of the cluster, the luminosity of the TO point of the blue BSS sequence can be used to determine the epoch at which the collisional event promoted the formation of this stellar population.
In fact, the comparison with collisional models allowed the dating of the CC event: approximately 2 Gyr ago in M30 \citep{fe09} and NGC 1261 \citep{si14}, and 0.2 Gyr ago in NGC 362 \citep{da13}.
Recent Monte-Carlo simulations of synthetic MT-BSSs \citep{ji17} showed that the blue-BSS sequence can also be populated by MT-BSSs. This finding by itself does not invalidate the possibility that the blue BSS sequence is mainly populated by collisional BSSs, since the easiest way to reproduce the observed narrowness of the blue sequence and the adjacent gap is still the hypothesis that the vast majority of BSSs along this sequence formed almost simultaneously, rather than as a result of a continuous process extending over a long time interval. Conversely, the result of \citet{ji17} might explain the presence of the few W UMa stars (which are close binaries) actually detected by \citet{fe09} and \citet{da13} along the blue BSS sequence in M30 and NGC 362. In this framework it is important to stress that, although the location in the CMD cannot be used to individually distinguish collisional from MT-BSSs, it mirrors the occurrence of the two formation mechanisms: when a narrow blue sequence is detected, it is mainly populated by collisional BSSs, with some possible contamination from MT-BSSs (such as the W UMa variables detected along the blue sequences); the red side, instead, cannot be reproduced by any of the available collisional models \citep{sills09} and is mainly populated by MT-BSSs with some contamination from evolved BSSs generated by collisions.
In this paper we present the discovery of a double BSS sequence in the post-core-collapse cluster M15 (NGC 7078). M15 is one of the most massive ($\log(M/M_{\odot}) = 6.3$, i.e. several $10^6\,M_{\odot}$) and densest ($\log \rho_0 = 6.2$, with $\rho_0$ in $M_{\odot}\,{\rm pc}^{-3}$) clusters of the Galaxy. As emphasised by~\citet[][]{le13}, while binary evolution seems to be the dominant channel for BSS formation, the fact that M15 has one of the densest and most massive cores among Galactic GCs makes it the ideal environment in which to expect the collisional formation channel to be relevant.
This has recently been confirmed by~\citet[][]{le19}, who predict that collisions via 2+2 (binary-binary) interactions play a crucial role in the formation of BSSs in post-core-collapse clusters.
Since the very first detections of a central cusp in its surface brightness profile \citep{dk86,lauer91,ste94}, M15 has been considered a sort of ``prototype" of post-CC clusters, and it is still catalogued as such in all the most recent GC compilations (see the updated version of the Harris Catalog). The BSS sequence studied here shows a complex structure that might provide crucial information on the core-collapse phenomenon.
\section{Data reduction}
\label{sec_data}
In this work we use a set of archival high-resolution images obtained with the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). The core of the cluster was observed with the High Resolution Channel (HRC) of the ACS in the $F220W$ and $F435W$ filters (hereafter U$_{220}$ and B$_{435}$, respectively). The HRC provides superb spatial resolution, with a pixel scale of $\sim0.028\times0.025$ arcsec$^2$/pixel over a field of view of $29\hbox{$^{\prime\prime}$}\times26\hbox{$^{\prime\prime}$}$. A total of 13 images of 125 s exposure time each were acquired in the B$_{435}$ filter under program GO-10401 (PI: Chandar), while
8$\times$290s exposures were taken with the U$_{220}$ filter under the program GO-9792 (PI Knigge). These images were already used in~\citet[][]{die07} and~\citet[][]{ha10} to probe the stellar population in the innermost regions of the cluster. As described in~\citet[][]{ha10}, the HRC images were combined using a dedicated software developed by Jay Anderson and similar to DrizzlePack\footnote{http://www.stsci.edu/hst/HST\_overview/drizzlepac}. In short, the software combines single raw images of a given band to improve the sampling of the Point Spread Function (PSF).
A $3\hbox{$^{\prime\prime}$}\times3\hbox{$^{\prime\prime}$}$ section of a super-sampled frame thus obtained in the B$_{435}$ band is shown in Fig.~\ref{fig:frame} (left panel), where it is compared to a single raw exposure (right panel). As immediately visible in the figure, the software reduces the pixel size to $\sim0.0125\times0.0125$ arcsec$^2$/pixel in the combined images, thus doubling the effective spatial resolution in both the U$_{220}$ and the B$_{435}$ filters.
This image processing is crucial as it allows us to resolve the stars in the very central region of the cluster, where the stellar crowding is too severe even for the resolving power of the HRC~\citep[see also Figure 1 from][]{ha10}.
\begin{figure*}
\centering
\includegraphics[width=11.9truecm]{cmd_blend_zoom.pdf}
\caption{
CMDs obtained with standard PSF fitting photometry (left panel) and our de-blending procedure with ROMAFOT (right panel). For reference, the best-fit isochrone and the equal-mass binary sequence are shown as solid and dashed lines, respectively.
As apparent, many stars are located between these two lines in the left panel: the vast majority of these objects turned out to be blends due to poor PSF fitting.
For illustration purposes, four blends are highlighted with open symbols in the left-hand panel and their corresponding de-blended components are marked with the same (solid) symbols in the right panel. Each of the four blends is the combination of a MS-TO and a SGB star. The box used to select the BSS population is also shown.
}
\label{fig:cmd_roma}
\end{figure*}
We first performed a standard PSF fitting photometry on the two super-sampled U$_{220}$ and B$_{435}$ frames using DAOPHOTII~\citep[][]{ste87}. The catalogue was calibrated into the VEGAMAG system using the recipe from~\citet[][]{si05} and adopting the most recent zero points available through the ACS Zeropoint Calculator\footnote{https://acszeropoints.stsci.edu}.
All the post-MS stellar evolutionary sequences typical of a GC are well distinguishable on the first version of the CMD shown on the left panel of Fig.~\ref{fig:cmd_roma}. Still, we notice a number of objects falling in the region brighter than the Sub-Giant Branch (SGB) and bluer than the Red Giant Branch (RGB).
\begin{figure*}
\centering
\includegraphics[width=0.6\textwidth]{Fig3_NEWb.pdf}
\caption{B$_{435}$ images of the same sources discussed in Fig.~\ref{fig:cmd_roma}: the single (blended) objects in our starting photometry are marked with empty symbols in the left panels, while the corresponding resolved components, obtained after our de-blending procedure with ROMAFOT, are shown with the same (solid) symbols in the right panels.}
\label{fig:zoom_blend}
\end{figure*}
The origin and reliability of these stars (sometimes called ``yellow stragglers") is a well-known problem that has been discussed in many papers in the literature. The vast majority of these objects have been demonstrated to be optical blends (see, e.g., Figure 8 in \citealt{fe92} and Figure 7 in \citealt{fe91}, where a few examples of de-blending are illustrated). Here, we used the package ROMAFOT \citep{b83} to manually inspect the quality of the residuals of the initial PSF fitting process, and to iteratively improve the PSF fitting for a selection of individual problematic stars. ROMAFOT is a software package developed to perform accurate photometry in crowded fields. While the PSF fitting procedure with ROMAFOT requires a higher level of manual interaction than DAOPHOTII, it has the unique advantage of a graphical interface that allows the user to improve the local deconvolution of stellar profiles \citep[see also][]{mo15}. Indeed, the results of the analysis with ROMAFOT fully confirm the findings of \citet{fe91,fe92}: most of the objects lying in the yellow-straggler region turned out to be blends. For the sake of illustration, in the left panel of Fig. \ref{fig:cmd_roma} we highlight four spurious sources (open symbols) due to poor PSF fitting. As shown in the right-hand panel, each of these sources is indeed the blend of a MS-TO and a SGB star (solid symbols). These sources are also highlighted with the same symbols on the B$_{435}$ images shown in Fig.~\ref{fig:zoom_blend}. In Figure \ref{fig:cmd_roma} we also plot the equal-mass binary sequence (dashed line) and the adopted BSS selection box. It is evident that the conservative BSS selection discussed in Section 3 (considering only BSSs brighter than $U=19.8$) prevents any significant contamination from unresolved binaries and blends.
The iterative de-blending process described above was used to optimize the PSF modeling of all the objects lying outside the well recognizable canonical evolutionary sequences in the CMD, including all the BSSs. In Fig.~\ref{fig:cmd_HRC} we show the final CMD obtained at the end of the accurate de-blending procedure.
It should be noted that the HRC data-set used in this paper allows us to resolve stars with separations larger than $\sim0.012\hbox{$^{\prime\prime}$}$, which translates into $\sim125$ au at the distance of M15 (10.4 kpc). Hence, even after our de-blending procedure, a few unresolved binaries and blends with separations smaller than 125 au may still populate the ``yellow straggler" region. On the other hand, some of these objects could also be BSSs that are evolving from their main-sequence (core hydrogen-burning) stage to a more evolved phase (the SGB). Still, given the overall low number of BSSs, and taking into account that the SGB stage at the typical BSS mass ($\sim1.2$ M$_{\odot}$) is quite rapid, we expect just a few evolved BSSs in this part of the CMD.
\begin{figure*}
\centering
\includegraphics[width=11.9truecm]{M15_cmd.pdf}
\caption{Final, de-blended UV-CMD of the core of M15. The position in the CMD of the Blue Straggler star (BSS) population, and of the Red-Giant Branch (RGB) and Horizontal Branch (HB) evolutionary sequences is labeled.}
\label{fig:cmd_HRC}
\end{figure*}
\section{THE BSS POPULATION IN THE CORE OF M15}
\label{sec_bss}
The CMD shown in Fig.~\ref{fig:cmd_HRC} demonstrates the outstanding quality of the data, which allows us to properly sample the hot stellar population in the core of M15 and to detect MS stars well below the MS-TO. Thanks to the use of the U$_{220}$ filter, the hot horizontal-branch (HB) stars describe a very narrow and well defined sequence. This fact per se demonstrates already the excellent photometric quality of the data~\citep[see also][]{die07}. The same applies to the RGB stars whose sequence spans almost 2.5 mag in color on the CMD. As expected, the use of the UV filter clearly suppresses the contribution of the RGB stars to the total cluster's light, making this plane optimal for sampling the hottest stellar populations in the cluster (see also \citealt{fe97,fe03,ra17,fe18}).
\begin{figure*}
\centering
\includegraphics[width=11.9truecm]{M15_hist.pdf}
\caption{Left Panel: Portion of the CMD zoomed on the BSS region. All the BSSs used in this study (which are brighter than U$_{220}<19.8$) are marked as black circles. The dashed line shows a fit to the bluest stars of the blue-BSS population. The line is used as reference to calculate the distribution in U$_{220}$-B$_{435}$ color of the BSS population. Central Panel: Rectification of the CMD, showing the color distance $\Delta$(U$_{220}$-B$_{435}$) of the surveyed stars from the mean ridge line of the blue-BSS population (dashed line, the same as in the left panel). Right Panel: Histogram of the BSS color distances from the mean ridge line of the blue population.}
\label{fig:bss_hist}
\end{figure*}
\subsection{A double sequence of BSS}
A surprising feature clearly visible in the CMD of Fig.~\ref{fig:cmd_HRC} is the presence of two distinct, parallel and well separated sequences of BSSs. The two sequences are almost
vertical in the CMD, in a magnitude range $20<$U$_{220}<18$, and located at (U$_{220}$-B$_{435})\sim0.8$ and $\sim 1.2$. A similar feature was previously detected using optical bands in the GCs M30, NGC~362 and NGC~1261~\citep[][respectively]{fe09,da13,si14}.
{\it This is the first time that such a feature has been detected in a UV-CMD}.
We show in Fig.~\ref{fig:bss_hist} a portion of the CMD zoomed on the BSS region. Hereafter, we will consider as bona-fide BSSs those stars populating the region of the CMD in the colour ranges explicitly mentioned above and with U$\lesssim19.8$. As discussed above, such a conservative magnitude threshold guarantees that the BSS sample is negligibly affected by residual blended sources and unresolved binaries (see right panel in Fig. \ref{fig:cmd_roma}). In the left panel we show with a dashed line the fit to the bluest sequence of BSSs. We take this line as a reference and calculate the distance in (U$_{220}-$B$_{435})$ colour of the BSSs observed at U$_{220}<19.8$ (black dots). In the rightmost panel we show the histogram of the distribution of these colour distances. The distribution is clearly not unimodal: at least two peaks, with a clear gap in between, are well distinguishable. To assess the statistical significance of this bi-modality, we used the Gaussian mixture modeling (GMM) algorithm presented by~\citet{mu10}, which works under the assumption that the modes of a distribution are well modeled by Gaussian functions. We found that the separation of the means of the two peaks relative to their widths is $D=4.71\pm0.78$. The parametric bootstrap probability of drawing such a value of $D$ randomly from a unimodal distribution is $3.6\%$; the probability of drawing the measured kurtosis is $0.4\%$. All three statistics thus clearly indicate that the observed BSS colour distribution in the UV-CMD is bi-modal. The GMM code also provides the probability of each element belonging to a given peak. We find that 27 and 15 stars have a probability $>98\%$ of belonging to the bluest and reddest peak, respectively. Their location in the CMD is shown in Fig. \ref{fig:bss_zoom}. The two stars shown as black dots in the figure have probabilities of 96\% (36\%) and 69\% (31\%) of belonging to the bluest (reddest) sequence.
We also used the Dip test of uni-modality \citep{ha85} to further investigate the significance of the deviation from uni-modality of the BSS colour distribution shown in Fig.~\ref{fig:bss_hist}. The Dip test has the benefit of being insensitive to the assumption of Gaussianity. We found that the observed BSS colour distribution has a 98.5\% probability of deviating from a unimodal distribution. The tests described above hence indicate that the bi-modality of the BSS sequence found in the UV-CMD of M15 is a statistically solid result.
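The peak-separation statistic $D$ can be illustrated with a minimal numpy sketch of a two-component Gaussian mixture fitted by EM. This is only a stand-in for the \citet{mu10} GMM code actually used in the analysis: the initialization is a simple assumption, and the parametric-bootstrap significance step is omitted.

```python
import numpy as np

def gmm_separation(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by EM and return the
    peak-separation statistic D = |mu1 - mu2| / sqrt((s1^2 + s2^2)/2),
    together with per-point membership probabilities. D >~ 2 flags a
    meaningful split between the two modes."""
    x = np.asarray(x, float)
    # Initialize the two components from the lower/upper halves of the data.
    xs = np.sort(x)
    mu = np.array([xs[: len(x) // 2].mean(), xs[len(x) // 2 :].mean()])
    sig = np.array([x.std(), x.std()]) / 2 + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each Gaussian for each point.
        pdf = (w / (sig * np.sqrt(2 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and widths.
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-9
    D = abs(mu[0] - mu[1]) / np.sqrt((sig[0] ** 2 + sig[1] ** 2) / 2.0)
    return D, r
```

Applied to two well-separated colour clumps, the fit yields a large $D$ and near-certain membership probabilities, mirroring the behaviour described above.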
\begin{figure*}
\centering
\includegraphics[width=12truecm]{collision2.pdf}
\caption{UV-CMD zoomed on the BSS population. The stars belonging to the blue-BSS and red-BSS sequences are shown as blue and red circles, respectively. Each BSS is assigned to a given sequence according to a statistical test based on the Gaussian mixture modeling (GMM) algorithm presented in \citet{mu10}. The two stars shown as black circles have a probability of 96\% (36\%) and 69\% (31\%) to belong to the blue (red) sequence. Variable BSSs are highlighted with open symbols (see text). Two collisional isochrones, of 2 Gyr (thick grey line) and 5.5 Gyr (thick black line), are superimposed to the observed CMD. The evolutionary tracks of a $0.6+0.6 M_\odot$ and a $0.6+0.4 M_\odot$ collision product are also plotted (grey and black dashed lines, respectively). For the sake of comparison, we also mark the 12 Gyr-old isochrone for normal (single) stars (black long-dashed line).}
\label{fig:bss_zoom}
\end{figure*}
In order to further prove the presence of the observed bi-modality in the BSS colour distribution, we retrieved a number of high-resolution images of the core of M15 obtained with the HST/WFC3 (GO-13297; PI: Piotto). These images, obtained using the F275W and F336W filters, were already used in~\citet[][]{ra17}
to investigate the dynamical age of the cluster through the study of the BSS radial distribution~\citep[see also][]{la16}. We stress here that the plate scale of $\sim0.04\hbox{$^{\prime\prime}$}$ offered by the WFC3 did not allow the authors to firmly detect the double BSS sequence in the core of the cluster as reported in this work. Interestingly enough, hints for the presence of a double BSS sequence are visible in the CMD of M15 shown in Figure 22 of~\citet[][]{pio15}.
We used the photometric catalogue obtained by~\citet[][]{ra17} to resolve the BSSs in a region outside the area covered by our HRC data, out to a radius r=80\hbox{$^{\prime\prime}$}. Moreover, we used several hundred stars in common between the WFC3 and our HRC catalogue to accurately register the positions
of the stars in the HRC catalogue onto the WFC3 positions. We then used ALLFRAME~\citep[][]{ste94}
to obtain accurate PSF-fitting photometry of the F275W and F336W images of the WFC3, using as input catalogue of stellar centroids the coordinates of the HRC stars registered on the WFC3 coordinate system.
We show in Fig.~\ref{fig:cmd_all} the CMDs obtained with the two data-sets. Clearly, the separation of the BSSs into blue and red sequences, according to the selection performed in the HRC plane, holds also in the CMD obtained with the WFC3.
\begin{figure*}
\centering
\includegraphics[width=12truecm]{M15_wfc3hrc.pdf}
\caption{UV-CMD obtained in the same area with the HRC (left panel) and WFC3 (right panel; see text for details) zoomed on the BSS population. The stars belonging to the blue-BSS and red-BSS sequences are shown as blue and red circles, respectively.}
\label{fig:cmd_all}
\end{figure*}
\subsection{Variable BSS}
\citet[][]{die07} used the HRC data-set to identify the variable stars in the core of M~15 (see their Table 2). We have used their catalog to identify possible variables among the BSS population. We have found 7 variables among our BSSs, namely V20, V22, V33, V37, V39, V38, V41~\citep[nomenclature from Table 2 of][]{die07}.
The location of these variables in the CMD is shown with open symbols in Fig. \ref{fig:bss_zoom}. Two variables (V22 and V33; open pentagons) are classified as SX Phe stars, 2 as cataclysmic variable (CV) candidates (V39 and V41; open circles), 2 are unclassified variables observed in the CMD region of BSSs (V37 and V38; open triangles), and V20 is classified as a candidate B-HB star (open square). The variables V37, V38, V39 and V41 show flickering, i.e. irregular and small (a few tenths of a magnitude) variations on short timescales. For this reason \citet[][]{die07} identify them as candidate CVs. While V37 and V38 are indeed located along the BSS sequence also in the CMD shown by~\citet[][]{die07}, the authors speculate that V39 and V41 might be early CVs. Our UV-CMD convincingly shows that these stars are all BSSs. Additional observations are needed to firmly assess the nature of these variables, primarily to clarify whether they are binary systems or single stars. Indeed, a few W UMa variables (which are contact binaries) have been found along both BSS sequences in M30 \citep{fe09} and NGC 362 \citep[][]{da13}. Using dedicated theoretical models, \citet{ji17} found that the contribution of the MT-BSSs to the blue or red sequence strongly depends on the contribution of the donor to the total luminosity of the unresolved binary system. Hence our observations further support the possibility that both collisional and MT-BSSs can populate the blue sequence.
\subsection{A collisional sequence}
We used a set of tracks and isochrones extracted from the collisional model library of \citet{sills09} to evaluate whether the location of the blue BSS sequence in M15 is in agreement with the locus predicted by BSS collisional models (as previously found in M30 and in NGC 362). While the details of the models are described in \citet{sills09}, we shortly list here the main ingredients: (1) BSSs are formed by direct collisions between two unrelated main-sequence stars; a set of 16 cases is investigated, involving collisions of 0.4, 0.6 and 0.8 M$_\odot$ stars. (2) Evolutionary tracks have been calculated by using the Yale Rotational Evolutionary Code (YREC; \citealt{guenther92}). (3) Collisional products are generated by using the code {\it ``Make Me A Star''} (MMAS; \citealt{lombardi02}), which uses the YREC results to calculate (via the so-called {\it ``sort by entropy''} method) the structure and chemical profile of the collision products. (4) Collisions are assumed to occur with a periastron separation of 0.5 times the sum of the radii of the two stars. (5) Any rotation of the collision product is neglected. (6) Finally, the recipe outlined in \citet{sills97} has been adopted to evolve the collision products from the end of the collision to the main sequence: in particular, the evolution is stopped when the energy generation due to hydrogen burning becomes larger than that due to gravitational contraction, which corresponds to the zero-age main sequence for a normal pre-main-sequence evolutionary track.
We have transformed the tracks and isochrones into the U$_{220}$ and B$_{435}$ photometric bands by convolving a grid of suitable \citet{ku93} stellar spectra of appropriate metallicity ([Fe/H]$\sim-2.4$; \citealt{Ha96}, 2010 edition) with the transmission curves of the ACS/HRC filters used. Thus, for each given stellar temperature and gravity, both the colour and the bolometric corrections in the Vegamag system have been computed.
A 2 Gyr-old collisional isochrone thus obtained is shown in Fig.~\ref{fig:bss_zoom} (grey thick line): a distance modulus $(m-M)_0 =15.14$ and a colour excess $E(B-V)=0.08$ \citep[][2010 edition]{Ha96} have been adopted. As can be seen, the 2 Gyr isochrone reproduces well the brightest portion of the blue sequence (with U$_{220}<19.2$). To better illustrate the expected post-MS evolutionary path of collisional BSSs, we also plotted the track (dashed line) of a $0.6+0.6\,M_\odot$ collisional product whose TO point occurs at 2 Gyr.
As a sanity check, we also plotted a canonical single-star 12 Gyr-old isochrone of appropriate metallicity (black long-dashed line). As can be seen, it nicely reproduces the single-star MS-TO of the cluster, thus demonstrating that the transformation and the adopted choice of distance modulus and reddening are appropriate.
The impressive agreement of the 2 Gyr collisional isochrone with the brightest portion of the blue sequence strongly supports the hypothesis that these BSSs have been simultaneously formed by an event that, about 2 Gyr ago, led to a significant increase of the stellar interaction rate. On the other hand, the red-BSS sequence cannot be reproduced by any of the available collisional models, regardless of the age. All the features revealed by our analysis so far nicely resemble what was found in the study of the BSS populations in M30 and NGC 362. Still, the BSSs in M15 show an additional intriguing feature.
\subsection{An additional intriguing feature}
As can be seen from Fig.~\ref{fig:bss_zoom}, a clump of 7 stars (approximately at U$_{220}=19$), together with a few sparse stars at lower luminosity, can be distinguished in the region of the CMD between the blue and red BSS sequences. As explained in Sec. \ref{sec_data}, we manually checked the accuracy of the PSF-fitting solution for each of these stars. Moreover, the visual inspection of their location on the ACS/HRC images indicates that they are not contaminated by bright neighbors. We also carefully analyzed the shape of the brightness profile of each star and the corresponding parameters characterizing their PSF: indeed this analysis fully confirms that they are not blends, but well-measured single stars.
Although we are dealing with a small number of stars (and the statistical significance is therefore unavoidably low), such a clump of BSSs between the two sequences has never been observed before, neither in M30 nor in NGC 362, where the overall population numbers are similar but this region of the CMD is essentially empty. Hence, the question arises: ``what is the origin of these objects?''.
Following the scenario suggested by \citet{fe09}, stars within the gap should be evolved (post-MS) BSSs that, because of the natural stellar evolution aging process, are leaving the MS stage.
However, to be as clumped as observed, these stars should all be evolving at the same rate and have been caught in the same evolutionary stage. This seems quite implausible, especially because the post-MS evolution is rapid. Instead, the comparison with a 5.5 Gyr-old collisional isochrone (black thick line in Fig.~\ref{fig:bss_zoom}) shows an impressive match: the CMD location of the clump is nicely reproduced by the TO point of this model. Such a nice correspondence suggests that the observed clump is made of collisional BSSs with an age of 5.5 Gyr
that are currently reaching the main-sequence TO evolutionary stage. By definition, the TO is the phase during which a star is leaving the MS and moves to the SGB stage, keeping roughly the same brightness (i.e. magnitude) while becoming colder (redder). Hence it is not surprising to find an over-density at this position in the CMD. In the same plot we also show the corresponding collisional track to illustrate the post-MS evolution of the collisional BSS currently located at the MS-TO point (in this case the BSS originates from the collision of two stars of $0.4\,M_\odot$ and $0.6\,M_\odot$, respectively; black dashed line).
\subsection{BSS Radial distribution}
We plot in Fig.~\ref{fig:radial} the cumulative radial distribution of the BSSs belonging to the blue and the red sequences (blue/solid and red/dashed lines, respectively). As explained
in Sec.~\ref{sec_bss}, we used a photometric catalogue obtained with the WFC3 to extend the BSS radial distribution out to a radius r=80\hbox{$^{\prime\prime}$}. As expected, the BSSs are significantly more segregated than normal MS stars (black dotted line), regardless of which population they belong to. As also found in the GCs M30 and NGC 362, the red sequence appears to be more segregated than the blue one. As discussed in~\citet[][]{da13}, this difference may offer further support for a different formation history of the two populations. In short, the blue BSSs, born as a consequence of the increase of collisions during core collapse, have also been kicked outward during collisional interactions, while most of the red BSSs sank into the cluster center because of dynamical friction and did not suffer significant hardening during the core-collapse phase. The Kolmogorov-Smirnov test applied to the cumulative radial distributions suggests that the blue and the red sequences differ at only the 1.5 $\sigma$ level of significance. We stress here that this is likely due to the small number statistics of the two populations.
The inset in the figure shows the cumulative radial distributions of the BSSs along the two branches of the blue sequence. The BSSs populating the younger collisional sequence (dashed line) appear more centrally segregated than the older ones (solid line). Although the two sub-populations include only a few stars each, the statistical significance of such a difference turns out to be of the order of 1.5-2 $\sigma$.
\begin{figure*}
\centering
\includegraphics[width=11.9truecm]{radial.pdf}
\caption{Cumulative radial distributions of the BSSs belonging to the blue and the red populations (blue/solid and red/dashed lines, respectively), as a function of the distance from the centre in units of the core radius r$_c=0\farcm14$ (\citealt{Ha96}, 2010 edition). The cumulative radial distribution of MS stars is also shown for comparison (black dotted line). The inset shows the cumulative radial distributions of the BSSs observed along the two branches of the blue sequence: those with CMD position well reproduced by a 2 Gyr collisional isochrone (dashed line), and those corresponding to the 5.5 Gyr collisional isochrone (solid line).}
\label{fig:radial}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=11.9truecm]{mod.pdf}
\caption{Time evolution of the 1\% Lagrangian radius (r$_{1\%}$, top panel) and of the collisional parameter ($\Gamma$, bottom panel) as measured in a Monte Carlo simulation of GC evolution (see text). Time is normalized to the core-collapse time (t$_{CC}$), while r$_{1\%}$ and $\Gamma$ are normalized, respectively, to their minimum and maximum values (note that for the simulation shown here t$_{CC}\backsimeq11.9$ Gyr and the time interval shown in the plots corresponds to about 4.8 Gyr).}
\label{fig:mod}
\end{figure*}
\section{DISCUSSION}
As previously discussed, the presence of a double sequence of BSSs is a feature that was already discovered using optical photometry in the core of the GCs M30, NGC 362 and NGC 1261.
\citet{fe09} first suggested that the presence of two parallel sequences of BSSs in the CMD can be explained by the coexistence of BSSs formed through two different formation mechanisms: the blue sequence would be originated mainly by collisions, while the red sequence derives from a continuous MT formation process (see also \citealt{da13} and \citealt{xi15}).
This scenario has been recently confirmed by a set of simulations that follow the evolution of a sample of BSSs generated by stellar collisions \citep{pz19}. This work concludes that the blue BSS chain detected in M30 can indeed be populated by collisional BSSs originated (over a short timescale) during the CC phase, thus finally proving that CC can be at the origin of a narrow sequence of nearly coeval blue BSSs.
Conversely, the photometric properties of BSSs generated by MT processes in binary systems are currently controversial, since different models seem to predict different MT-BSS distributions in the CMD (see the case of the models computed for M30 by \citealt{xi15} and \citealt{ji17}).
In addition, even admitting that MT-BSSs may extend to the blue side of the BSS distribution in the CMD, it is unclear if and how MT processes alone could produce two split sequences, distinctly separated by a gap.
Since at the moment there are no consolidated and specific models of BSS formation through MT for M15, here we focus on the blue sequence that, in this cluster, shows the most intriguing features.
The nice match between the blue-BSS locus in the CMD and the collisional isochrones of \citet{sills09} strongly suggests that this population could be produced during a dynamical phase characterized by a high rate of stellar collisions. Moreover, in striking similarity with previous cases, the fact that the blue BSSs appear so nicely aligned along the MS of the collisional isochrone suggests that they were mostly generated over a relatively short timescale (within a few $10^8$ yr), thus representing an essentially coeval population. The superb photometric quality of the data presented here allows us to make a significant step forward in our understanding of this phenomenon. It is particularly interesting that in M15 we are able to distinguish possible sub-structures along the collisional sequence. The two branches discovered here, in fact, can be interpreted as two generations of collisional BSSs, possibly formed during two distinct episodes of intense dynamical interactions.
Since M15 is a core-collapsed cluster (see, e.g., \citealt{dk86,che89,mu11}), such episodes can quite possibly be associated with the cluster's structural properties at the time of core collapse and during the dynamical stages following this event.
In particular, after reaching a phase of deep core collapse, clusters may undergo a series of core oscillations driven by gravothermal effects and binary activity (see, e.g., \citealt{be84,he89,co89,ti91,ma96,me96,he09,bh12}), characterized by several distinct phases of high central density followed by more extended stages during which a cluster rebounds toward a structure with lower density and a more extended core. Interestingly, \citet[][]{gr92} performed detailed numerical simulations, including post-collapse core oscillations, able to reproduce the simultaneous presence of a large core radius (13 pc) and a cusp in the stellar density profile of M 15, as revealed by early HST observations from~\citet[][]{lauer91}.
The two branches of blue-BSS sequence might be the outcome of the increased rate of collisions during two separate density peaks in the post-core collapse evolution.
In order to provide a general illustration of the possible history of the cluster's structural evolution leading to the observed blue-BSS branches, we show in Fig.~\ref{fig:mod} the time evolution of the 1\% Lagrangian radius (top panel) and of the collisional parameter (bottom panel)
as obtained from a Monte Carlo simulation run with the MOCCA code \citep{gi08}. The collisional parameter $\Gamma=\rho_c^2r_c^3/\sigma_c$ (where $\rho_c,~r_c, \hbox{and}~\sigma_c$ are, respectively, the cluster's central density, core radius and central velocity dispersion) is often used to provide a general quantitative measure of the rate of stellar collisions in a GC (see, e.g., \citealt{davies04}). The simulation presented here starts from the initial structural properties discussed in \citet{ho17}, with $7 \times 10^5$ stars, no primordial binaries and an initial half-mass radius equal to 2 pc. We point out that the adopted simulation and initial conditions do not constitute a detailed model of M15, but just illustrate the general evolution of a cluster reaching a phase of deep core collapse and undergoing core oscillations in the post-core collapse phase.
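As a minimal numerical illustration of how the collisional parameter scales with the core properties (a sketch in consistent arbitrary units, not the MOCCA implementation):

```python
def collision_parameter(rho_c, r_c, sigma_c):
    """Collisional parameter Gamma = rho_c^2 * r_c^3 / sigma_c, with the
    central density, core radius and central velocity dispersion
    expressed in consistent (arbitrary) units."""
    return rho_c ** 2 * r_c ** 3 / sigma_c

# During core collapse the central density rises much faster than the core
# shrinks: e.g. a 10x density increase with a 2x core contraction still
# boosts Gamma by a factor 100 * (1/8) = 12.5 (illustrative numbers).
boost = collision_parameter(10.0, 0.5, 1.0) / collision_parameter(1.0, 1.0, 1.0)
```

This quadratic dependence on $\rho_c$ is why the density peaks in Fig.~\ref{fig:mod} translate into sharp spikes of $\Gamma$.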
As shown in Fig.~\ref{fig:mod}, the initial deep core collapse (clearly distinguishable at $t/t_{CC}=1$) leads to the largest increase in the value of $\Gamma$, and is followed by several distinct re-collapse episodes leading to secondary peaks of different magnitudes in the collisional parameter (all smaller than the initial peak associated with the first collapse). The time interval between the various peaks in $\Gamma$ is not constant and, for the simulation presented here, typically ranges from a few hundred million years to approximately a billion years. Although we strongly emphasize that models exploring in detail the actual BSS production in each collapse episode, along with a proper characterization of the collision events and the number of stars involved, are necessary \citep[see e.g.][]{ba16}, here we suggest that the BSS populations observed along the two branches of the blue-BSS sequence in M15 might have been produced during the initial deep collapse and during one of the subsequent collapse events, one sufficiently deep to trigger a significant number of collisions and BSS production.
Note that a similar argument was used by \citet{si14} to explain the presence of a few bright BSSs located on the blue side of the blue-BSS sequence in NGC 1261. However, this cluster does not show the typical signature of core collapse, and for this reason the authors suggest that it is currently experiencing a bounce state.
\section{Summary}
We used an exceptional set of high-resolution ACS/HRC images to explore the stellar population in the collapsed core of the GC M15. A high-precision UV-CMD has revealed the existence of two clear-cut sequences of BSSs.
In particular, we discovered that the blue sequence, which should be populated by collisional BSSs in the interpretative scenario proposed by \citet{fe09}, consists of two distinct branches nicely reproduced by collisional isochrones of 2 and 5.5 Gyr.
We interpret these features as the observational signature of two major collisional episodes
suffered by M15, likely connected to the collapse of its core: the first one (possibly tracing the beginning of the core-collapse process) occurred approximately 5.5 Gyr ago, while the most recent one dates back to about 2 Gyr ago. This result reinforces the evidence of the strong link existing between the observational properties of the BSS population in a cluster and its dynamical history \citep{fe09,fe12,fe18}, and it opens a window toward a deeper understanding of core collapse and post-core collapse evolution, as well as of the link between these dynamical phases and a cluster's stellar content.
\acknowledgments
We are grateful to an anonymous referee for precious comments and suggestions. We are grateful to Jay Anderson and Nathalie C. Haurberg for providing us with the processed ACS/HRC images. This research used the facilities of the Canadian Astronomy Data Centre operated by the National Research Council of Canada with the support of the Canadian Space Agency. FRF is grateful to the Indiana University for the hospitality during his stay in Bloomington, when part of this work was performed.
\section{Introduction} \label{I}
Starting with an antiproton beam directed onto a proton target, many reactions
are possible. First of all there is the elastic scattering $\bar{p}p \rightarrow
\bar{p}p$. For this reaction differential cross sections $\sigma_{el}(\theta)$,
analyzing-power data $A_{el}(\theta)$~\cite{Kun88,Ber89}, and even
some depolarization data $D_{yy}(\theta)$~\cite{Kun91} have been
measured. We will discuss these extensively in this talk.
The annihilation channel, $\bar{p}p \rightarrow$ mesons, is studied very
intensively by theorists as well as by experimentalists. Many different
reactions can be distinguished. For our purposes, however, only a global
description will turn out to be sufficient. The charge-exchange
reaction $\bar{p}p \rightarrow \bar{n}n$ has its threshold
at $p_L = 99$ MeV/c. An important point is that in the one-boson-exchange (OBE)
picture only charged mesons can be exchanged. The most important of these
are the $\pi^{\pm}$ and $\rho^{\pm}$ mesons. The study of this reaction
allowed us to determine the coupling constant of the charged pion
to the nucleons~\cite{Tim91b}. Recently, excellent data for
this charge-exchange reaction has been obtained at LEAR for
the differential cross section $\sigma_{ce}(\theta)$
and for the analyzing power $A_{ce}(\theta)$~\cite{Bir90}.
Very recently, even charge-exchange depolarization data have
become available~\cite{Bir93}.
Excellent data are also available for the strangeness-exchange
reaction $\bar{p}p \rightarrow \bar{\Lambda}\Lambda$
with threshold $p_L = 1.435$ GeV/c~\cite{Bar87}.
More data for this reaction are forthcoming as well
as data for the other strangeness-exchange reactions
$\bar{p}p \rightarrow \bar{\Lambda}\Sigma,
\bar{\Sigma}\Lambda$~\cite{Bar90} and $\bar{\Sigma}\Sigma$.
These reactions are very important for the precise determination
of the $\Lambda N\!K$ and $\Sigma N\!K$ coupling constants
and combining these with the $N\!N\pi$ coupling constant
gives us information about flavor SU(3)~\cite{Tim91a}.
\section{Antinucleon-nucleon potentials} \label{II}
It is customary to start with some meson-theoretic $N\!N$ potential
and then apply the $G$-parity transformation \cite{Pai52}
to get the corresponding
$\overline{N}\!N$ potential. This is a straightforward, but rather
cumbersome procedure. When you ask people about the details,
most of them must confess that they do not know them.
We would like to point out that just charge conjugation,
together with charge independence, without actually combining
them to $G$, is sufficient for our purposes.
To understand this, let us look at the $ppm^{0}$ vertex describing
the coupling of a neutral meson $m^{0}$ to the proton $p$ with a
coupling constant $g$.
When we apply charge conjugation $C$ we have
\[
\bar{p} = Cp \rule{1cm}{0mm} {\rm and}
\rule{1cm}{0mm} \overline{m^{0}} = Cm^{0}\ ,
\]
and we describe now the $\bar{p}\bar{p}\overline{m^{0}}$ vertex.
For nonstrange, neutral mesons $m^{0}$ one can define the charge
parity $\eta_{c}$ by $\overline{m^{0}}=\eta_{c} m^{0}$.
Charge-conjugation invariance of the interaction
Lagrangian describing this $ppm^{0}$ vertex requires that the
coupling constant $g$ of the meson $m^{0}$ to the proton $p$ is equal to
the coupling constant of the antimeson $\overline{m^{0}}$ to the antiproton
$\bar{p}$. The coupling constant $\bar{g}$ of the meson $m^{0}$
to the antiproton is then given by
\[
\bar{g} = \eta_{c} g\ .
\]
For mesons of the type $Q\overline{Q}$, with relative orbital
angular momentum $L$ and total spin $S$ the charge parity is
\[
\eta_{c} = (-)^{L+S}\ .
\]
Therefore the pseudoscalar ($^{1}S_{0}$) mesons have $J^{PC} = 0^{-+}$,
the vector ($^{3}S_{1}$) mesons have $J^{PC} = 1^{--}$,
the scalar ($^{3}P_{0}$) mesons have $J^{PC} = 0^{++}$, etc.
We see that from the important mesons only the vector mesons have
negative charge parity and therefore the coupling constants of the vector
mesons change sign when going from the nucleons to the antinucleons.
In the OBE picture the $pp$ potential $V(pp)$ is the sum of the exchanges
of the pseudoscalar meson $\pi$, the vector mesons $\rho$ and $\omega$,
the scalar meson $\varepsilon(760)$, etc. That is
\[
V(pp) = V_{\pi} + V_{\rho} + V_{\omega} + V_{\varepsilon} + \ldots
\]
The potential $V(\bar{p}p)$ described in the same OBE picture is
then given by
\[
V(\bar{p}p) = V_{\pi} - V_{\rho} - V_{\omega} + V_{\varepsilon} + \ldots
\]
In these reactions only neutral mesons are exchanged. When we want to
describe the charge-exchange reaction $\bar{p}p \rightarrow \bar{n}n$, then
it is easiest to recall charge independence. Charge independence requires
that the coupling constant $g_{c}$ of the charged meson to the nucleons
is given by $g_{c} = g\sqrt{2}$, and to the antinucleons by
$\bar{g}_{c} = \bar{g}\sqrt{2}$. The charge-exchange potential
is therefore given by
\[
V_{ce} = 2(V_{\pi} - V_{\rho} + \ldots)\ .
\]
The diagonal potential in the $\bar{n}n$ channel is, using
charge independence, given by $V(\bar{n}n) = V(\bar{p}p)$.
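The sign bookkeeping of this section can be condensed into a short sketch (with symbolic unit coupling strengths; this is illustration only, not a realistic potential):

```python
def charge_parity(L, S):
    """eta_c = (-1)**(L+S) for a neutral quark-antiquark meson."""
    return (-1) ** (L + S)

# exchanged mesons: pi (1S0), rho and omega (3S1), epsilon (3P0)
ETA = {"pi": charge_parity(0, 0), "rho": charge_parity(0, 1),
       "omega": charge_parity(0, 1), "epsilon": charge_parity(1, 1)}

def pbarp_from_pp(V_pp):
    """V(pbar p): flip each OBE term of V(pp) by the meson's charge parity."""
    return {m: ETA[m] * v for m, v in V_pp.items()}

V_pp = {"pi": 1.0, "rho": 1.0, "omega": 1.0, "epsilon": 1.0}  # symbolic strengths
V_pbarp = pbarp_from_pp(V_pp)   # pi and epsilon keep their sign; rho, omega flip

# charge exchange pbar p -> nbar n: only charged (isovector) mesons contribute,
# with coupling g*sqrt(2) at each vertex, hence an overall factor of 2
V_ce = {m: 2.0 * V_pbarp[m] for m in ("pi", "rho")}
```

The resulting signs reproduce $V(\bar{p}p) = V_{\pi} - V_{\rho} - V_{\omega} + V_{\varepsilon}$ and $V_{ce} = 2(V_{\pi} - V_{\rho})$ term by term.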
{\it What can we learn from the $N\!N$ potentials about the
$\overline{N}\!N$ potentials}? \\
The $pp$ central force is relatively weak due to the cancellations between
the repulsive contribution of the vector mesons $\omega$ and $\rho$ and
the attractive contribution of the scalar mesons $\varepsilon(760)$, etc.
The $\bar{p}p$ central force is strongly attractive, because the
vector mesons now give an attractive contribution which adds
coherently to the attractive contribution of the scalar mesons,
giving a very strong overall central force.
Also the tensor force in $N\!N$ is relatively weak, because
$\pi$ and the important $\rho$ contributions have opposite sign.
In the $\overline{N}\!N$ case these mesons add coherently again to
give a very strong tensor force. This strong tensor force is
responsible for the importance of the transitions
\[
\ ^3S_1 \leftrightarrow\ ^3D_1 \rule{1cm}{0mm} , \rule{1cm}{0mm}
\ ^3P_2 \leftrightarrow\ ^3F_2 \rule{1cm}{0mm} , \rule{1cm}{0mm}
\ ^3D_3 \leftrightarrow\ ^3G_3 \rule{1cm}{0mm} {\rm etc.}
\]
\section{Various models} \label{III}
\subsection{Black-disk model} \label{III.1}
One of the simplest models for the description of the elastic and
inelastic cross section is the black-disk model. This model gives
\[
\sigma_{el} = \sigma_{ann} = \frac{1}{2} \sigma_{T} = \pi R^{2}\ ,
\]
where $R$ is the radius of the black disk.
This relation is satisfied only very approximately. It shows that the
annihilation cross section implies radii for the black disk
which are energy dependent and rather large. In the
momentum interval 200 MeV/c $< p_L <$ 1 GeV/c
this radius $R$ varies from more than 2 fm to about 1.4 fm.
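In numbers (a sketch with illustrative cross sections, not measured values): since $\sigma_{ann} = \pi R^{2}$, the radius follows directly from the annihilation cross section, remembering that 1 mb $= 0.1$ fm$^2$.

```python
import math

MB_TO_FM2 = 0.1  # 1 millibarn = 0.1 fm^2

def black_disk_radius(sigma_ann_mb):
    """Black-disk radius R (fm) from sigma_ann = pi * R^2 (sigma in mb)."""
    return math.sqrt(sigma_ann_mb * MB_TO_FM2 / math.pi)

# illustrative values: ~125 mb gives R ~ 2.0 fm, ~60 mb gives R ~ 1.4 fm,
# matching the range of radii quoted in the text
R_low, R_high = black_disk_radius(125.0), black_disk_radius(60.0)
```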
\subsection{Boundary-condition model} \label{III.2}
The boundary-condition model in $\overline{N}\!N$ was first
introduced by M. Spergel in 1967~\cite{Spe67}. Later many
others have used this model, now more than a quarter of a
century old (see \mbox{\it e.g.\/}\ Refs.~\cite{Dal77,Del78}).
The model is based on the observation that the interaction for large
values of the radius is often well-known, while the interaction for small
radii is very hard to describe. This problem is then solved by just
specifying a boundary condition at $r=b$. For this boundary condition
one takes the logarithmic derivative of the radial wave function at the
boundary radius $b$
\[
P=b \left( \frac{d\psi}{dr}/\psi \right)_{r=b}\ .
\]
Outside this radius one assumes that the interaction can be described
by a known potential $V_{L}$. This long-range interaction is made of meson
exchanges as described in section~\ref{II} and it contains of course also
the electromagnetic interaction.
A nice, instructive example is the {\it modified black disk},
where $V_{L} = 0$ and $P = -ipb$.
The $P$ matrix contains a negative imaginary part, implying
absorption of flux at the boundary. The boundary $b$ is a measure
for the annihilation radius. This modified black disk is
specified by only one parameter: the radius $b$.
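For the $s$-wave this model can be solved in closed form. Matching the free outer solution $u(r)=e^{-ikr}-S\,e^{ikr}$ to the logarithmic derivative $P$ at $r=b$ gives $S=e^{-2ikb}\,(P+ikb)/(P-ikb)$. The sketch below (assuming $V_L=0$, i.e. neglecting the long-range potential) verifies that the modified black disk absorbs the $s$-wave completely, while a real $P$ gives purely elastic scattering:

```python
import cmath

def s_wave_S(P, k, b):
    """l = 0 S-matrix element for a boundary-condition model with V_L = 0:
    the outer wave u(r) = exp(-ikr) - S exp(+ikr) is matched to the
    logarithmic derivative P = b u'(b)/u(b) at the boundary r = b."""
    x = k * b
    return cmath.exp(-2j * x) * (P + 1j * x) / (P - 1j * x)

k, b = 1.0, 1.0                       # momentum (fm^-1) and boundary radius (fm)
S_bd = s_wave_S(-1j * k * b, k, b)    # modified black disk, P = -ikb: S = 0
S_el = s_wave_S(0.7, k, b)            # real P, no absorption: |S| = 1
```

The negative imaginary part of $P$ is what removes flux at the boundary; for higher partial waves the centrifugal barrier reflects part of the incoming wave, so $|S_l|$ lies between 0 and 1.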
When one looks at how these boundary-condition
models have been used, one sees that in these
extremely simple models every time only very few
parameters have been introduced. The conclusion is that such a
few-parameter model can possibly fit some data, but it will never
be able to fit all the available $\overline{N}\!N$ data with
the same set of only a few parameters.
\subsection{Optical-potential model} \label{III.3}
The optical-potential models have become quite an
industry in the $\overline{N}\!N$ community. The first such model
was from R. Bryan and R. Phillips in 1968~\cite{Bry68}. In the
optical-potential model the interaction between the antinucleon and the
nucleon is described by a complex potential from $r=0$ to infinity.
For the basic potential one takes a meson-theoretic potential, obtained
from some known $N\!N$ potential by using the charge-conjugation
operation. Then, in order to get annihilation, to this potential is
added another complex potential,
\[
V(r) = (U-iW) f(r)\ .
\]
Here $U$ and $W$ are constants and $f(r)$ is some radial function. This radial
function can be the Woods-Saxon form, a Gaussian form, or even a square well.
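As a concrete instance of this ansatz with a Woods-Saxon form factor, where $U$, $W$, the range $R_0$ and the diffuseness $a$ are arbitrary placeholder parameters, not fitted values:

```python
import math

def optical_potential(r, U, W, R0, a):
    """V(r) = (U - i W) f(r), with a Woods-Saxon form factor
    f(r) = 1 / (1 + exp((r - R0)/a)).  U, W in MeV; r, R0, a in fm."""
    f = 1.0 / (1.0 + math.exp((r - R0) / a))
    return complex(U * f, -W * f)

# placeholder parameters: a strongly absorptive potential of range ~1 fm;
# at r = R0 the form factor is exactly 1/2, so V = (U - iW)/2 there
V_half = optical_potential(1.0, 20.0, 500.0, 1.0, 0.2)
```

The negative imaginary part $-iWf(r)$ is what produces the annihilation: it removes probability flux wherever $f(r)$ is appreciable.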
Let us give you a DO-IT-YOURSELF-KIT called:
\begin{center}
{\it How to make your own optical potential?}
\end{center}
Instructions: \\
1. Look through the literature and decide which $N\!N$ potential
you want to use. \\
2. Apply to this potential charge conjugation, so that you obtain
the corresponding $\overline{N}\!N$ potential. \\
3. Pick your favored functional form for $f(r)$.
This will contain a range parameter $b$.
After you have made this choice, find some arguments,
which sound like QCD, to justify this chosen form. \\
4. Pick one of the beautiful differential cross sections as measured by
Eisenhandler \mbox{\it et al.\/}~\cite{Eis76}
and adjust $U$, $W$, and $b$ such that a reasonable
fit (at least at sight) is obtained for this particular cross section. \\
Your model is now a three-parameter model, which fits {\it some}
of the data (at least the Eisenhandler data at one energy)
reasonably well (at sight), but it cannot possibly fit
{\it all} the $\overline{N}\!N$ data, because the
model does not have enough freedom.
After Bryan and Phillips many people have constructed
similar models (see \mbox{\it e.g.\/}\ Refs.~\cite{Dov80,Koh86,Hip89,Car91}).
Also in Nijmegen we made such an optical-potential model,
which we optimized by making a least-squares fit to our database,
which contained at that time $N_{d} = 3309$ data. Because we
actually performed a fit to all the $\overline{N}\!N$ data we
think that we will have about the lowest $\chi^2$ of all the
available two- or three-parameter optical-potential models. For our
model $\chi_{\rm min}^{2} = 6\times10^{9}$. This enormously large number is
NOT a printing error, but just an expression of the total failure of
such simple models.
It is, therefore, astonishing to see that regularly new
measurements from LEAR are compared to one or more of these
few-parameter optical-potential models (see \mbox{\it e.g.\/}\ Ref.~\cite{Kun91}),
as if something can be learned from such a comparison!
There is only one group, the theory group of R. Vinh Mau
in Paris, that has seriously tried to fit
all available $\overline{N}\!N$ data with an
optical-potential model. In 1982 they got a fit with
$\chi^{2}/N_{d} = 2.8$, where they compared with the then
available pre-LEAR data~\cite{Cot82}. In 1991 they published an
update~\cite{Pig91}, where they fitted now also the LEAR data.
For the real part of the potential they took the $G$-conjugated
Paris $N\!N$ potential~\cite{Lac80}.
Because the inner region of this $N\!N$ potential is treated
totally phenomenologically it is impossible to take that over
to $\overline{N}\!N$, so something has been done there and
probably some extra parameters have been introduced.
The imaginary part of the potential they write as
\begin{eqnarray*}
W(r) & = & \left\{ \rule{0mm}{6mm} g_{c} (1+f_{c}T_{L})+g_{ss}(1+f_{ss}T_{L})\,
\mbox{\boldmath $\sigma$}_{1} \cdot \mbox{\boldmath $\sigma$}_{2} \right. \\
&& \left. +\, g_{T} S_{12} + g_{LS}\ {\bf L} \cdot {\bf S}\ \frac{1}{4m^{2}r}
\frac{d}{dr} \right\} \frac{K_{0}(2mr)}{r} \ ,
\end{eqnarray*}
where $T_L$ is the lab kinetic energy and
the parameters are the $g$'s and the $f$'s. For each isospin a set
of 6 parameters is fitted, so that the imaginary part is described
by about 12 real parameters. In total the Paris $\overline{N}\!N$
potential uses at least 12, possibly about 22, parameters.
The correct number is not so important; what is
important is that the number is much larger than 3. The Paris group
then fit 2714 data and get $\chi^{2}/N_{d} = 6.7$.
The quality of this fit is very hard to assess, because the Paris group
did not try to make their own selection of the data, but tried to fit
all the available data, many of which are contradictory. It would
be interesting to see their fit to the Nijmegen
$\bar{p}p$ database \cite{Tim93} (see section~\ref{VIII}),
where all the contradictory sets have been removed.
An important lesson could have been learned already in 1982 from
this Paris work. An optical-potential model needs at least about
15 parameters to be able to give a reasonable fit to the
$\overline{N}\!N$ data. This means that practically all
few-parameter optical-potential models published after 1982
should have been rejected by the journals.
\subsection{Coupled-channels model} \label{III.4}
Another way to introduce inelasticity in our formalism is to
introduce explicitly couplings from the $\overline{N}\!N$
channels to annihilation channels.
This was done in 1984 by P. Timmers \mbox{\it et al.\/}\ in the Nijmegen
coupled-channels model: CC84~\cite{Tim84}. Fitting to the then
available pre-LEAR data resulted in a quite satisfactory fit
with $\chi^{2}/N_{d} = 1.39$. Several people
have later tried similar models~\cite{Liu90,Dal90}. An update of
the old model CC84 was made in 1991 in Nijmegen in the thesis of
R. Timmermans~\cite{Tim91c}.
This new coupled-channels model, which we would like
to call the Nijmegen model CC93, gives $\chi^{2}/N_{d} = 1.58$,
when fitted to $N_{d} = 3646$ data.
We will come back to this model somewhat later.
\section{Antiproton-proton partial-wave analysis} \label{VII}
In Nijmegen we have for almost 15 years been busy with partial-wave
analyses of the $N\!N$ data. We have now developed rather sophisticated
and accurate methods to do these PWA's~\cite{Ber88,Ber90,Klo93}.
A few years ago we realized that it was possible to do a PWA
of all the available $\bar{p}p$ data in {\it exactly the same way} as
our $N\!N$ PWA. Before this realization we always thought that such a
PWA would be almost impossible in $\overline{N}\!N$. Luckily, it is
not impossible. We will try to give a short description of our
PWA~\cite{Tim93}.
In an energy-dependent partial-wave analysis one needs a model to describe
the energy dependence of the various partial-wave amplitudes. Our model
is a mixture of the boundary-condition model and the optical-potential
model. We choose the boundary at $b=1.3$ fm. This value is
determined by the width of the diffraction peak and cannot be
chosen differently, without deteriorating the fit to the data.
The long-range potential $V_{L}$ for $r>b$ is
\[
V_L = V_{\overline{N}\!N} + V_{C} + V_{M\!M}\ .
\]
Here $V_{C}$ is the relativistic Coulomb potential, $V_{M\!M}$ the
magnetic-moment interaction, and $V_{\overline{N}\!N}$ is
the charge-conjugated Nijmegen $N\!N$ potential, Nijm78~\cite{Nag78}.
We solve the relativistic Schr\"odinger equation~\cite{Swa78} for each
energy and for each partial wave, subject to the boundary condition
\[
P = b \left( \frac{d\psi}{dr}/\psi \right)_{r=b} \ ,
\]
at $r=b$.
This boundary condition may be energy dependent. To get the value of $P$
as a function of the energy we use for the spin-uncoupled waves
(like $^{1}S_{0}$, $^{1}P_{1}$, $^{1}D_{2}$, $\ldots$ and $^{3}P_{0}$,
$^{3}P_{1}$, $^{3}D_{2}$, $\ldots$) the optical-potential picture.
We take a square-well optical potential for $r \leq b$. This
short-ranged potential $V_{S}$ we write as
\[
V_{S} = U_{S} - iW_{S}\ .
\]
In this way we get in each partial wave and for each isospin the
parameters $U_{S}$ and $W_{S}$. Using these potentials we can calculate
easily the boundary condition $P$ and the scattering amplitudes.
For example in all singlet waves $^{1}S_{0}$, $^{1}P_{1}$, $^{1}D_{2}$,
$\ldots$, we get $U=0$ and $W \approx 100$ MeV.
For the triplet waves we take $W$
independent of the isospin. The parameters for the
$^{3}P_{0}$ wave are \mbox{\it e.g.\/}\ $W=159 \pm 9$ MeV,
$U (I=0) = -132 \pm 9$ MeV, and $U (I=1) = 178 \pm 19$ MeV.
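To make the optical-potential picture behind the boundary condition concrete, here is a minimal numerical sketch (our own illustration, not the Nijmegen code) of $P$ for an S wave in a complex square well $V_{S} = U - iW$: the regular interior solution is $\psi(r)\propto\sin(kr)$ with complex $k$, so $P = kb\cot(kb)$. The kinetic energy below is an assumed value; the well depths are the quoted singlet-like $U=0$, $W\approx 100$ MeV.

```python
import cmath

# Illustrative sketch: S-wave boundary condition P = b*(psi'/psi) at r = b
# for a square-well optical potential V_S = U - i*W inside r <= b.
# The interior solution is psi(r) ~ sin(k r) with a complex wave number k.
hbarc = 197.327            # MeV fm
mu = 938.272 / 2.0         # reduced mass of the pbar-p system, MeV
b = 1.3                    # boundary radius, fm (as in the PWA)

def P_square_well(T_cm, U, W):
    """Logarithmic derivative b*psi'(b)/psi(b) for V_S = U - i*W (in MeV)."""
    k = cmath.sqrt(2.0 * mu * (T_cm - U + 1j * W)) / hbarc  # fm^-1
    return k * b / cmath.tan(k * b)

# A purely absorptive singlet-like well, U = 0 and W = 100 MeV,
# at an assumed c.m. kinetic energy of 50 MeV:
P = P_square_well(50.0, 0.0, 100.0)
print(P)   # complex: the nonzero imaginary part encodes annihilation
```

From the energy dependence of $P$ computed this way, the scattering amplitudes follow by the usual matching at $r=b$.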
To describe all relevant partial waves we need in our $\overline{N}\!N$
PWA 30 parameters. In our fit to the data we use all
available data in the momentum interval
119 MeV/c $< p_{L} <$ 923 MeV/c. The lowest
momentum is determined by the fact that for lower momenta no
data are available. The highest momentum is determined by several
considerations. In $N\!N$ we use all data up to $T_{L} = 350$ MeV,
which corresponds to $p_{L} = 810$ MeV/c.
Because we wanted to include all the elastic backward
cross sections of Alston-Garnjost \mbox{\it et al.\/}~\cite{Als79},
we need to go to $p_{L} = 923$ MeV/c which corresponds to
$T_{L} = 454$ MeV. At this energy the potential description in $N\!N$ is
still valid, and therefore we feel that also here our description
must work at least up to this momentum.
Our final dataset contains $N_{d}=3646$ experimental data. In our
analyses we need to determine $N_{n}=113$ normalizations and $N_{p}=30$
parameters. This leads to the number $N_{df}$ of degrees of freedom
$N_{df} = N_{d} - N_{n} - N_{p} = 3503$.
When the dataset is a perfect statistical ensemble and when the model
to describe the data is totally correct, then one expects for
$\chi^2_{\rm min}$:
\[
\langle \chi^{2}_{\rm min} \rangle = N_{df} \pm \sqrt{2N_{df}}\ .
\]
Thus expected is $\langle \chi^{2}_{\rm min}(\bar{p}p)
\rangle/N_{df} = 1.000 \pm 0.024$. \\
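The expectation quoted here follows directly from the counting above; a quick check of the arithmetic (our own sketch, using only the quoted $N_{d}$, $N_{n}$, $N_{p}$):

```python
import math

# For a perfect statistical ensemble, chi^2_min is distributed as a
# chi-squared variable with N_df degrees of freedom, hence
# <chi^2_min> = N_df with a spread of sqrt(2*N_df).
N_d, N_n, N_p = 3646, 113, 30    # data, normalizations, parameters
N_df = N_d - N_n - N_p
spread_ratio = math.sqrt(2 * N_df) / N_df

print(N_df)                      # 3503
print(round(spread_ratio, 3))    # 0.024, i.e. <chi^2/N_df> = 1.000 +/- 0.024
```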
In our PWA we obtain $\chi^{2}_{\rm min}(\bar{p}p)/N_{df} = 1.085$.
We see that we are about 3.5 standard deviations
away from the expectation value. To get a feeling
for these numbers let us compare with the $pp$ data and the
$pp$ PWA. The number of data is now $N_{d} = 1787$ and we expect
\[
\langle \chi^{2}_{\rm min}(pp) \rangle /N_{df} = 1.000 \pm 0.035\ .
\]
In the latest Nijmegen analysis, NijmPWA93~\cite{Klo93}, we get
\[
\chi^{2}_{\rm min}(pp)/N_{df} = 1.108,
\]
which is 3 standard deviations too high. We see
that our $\bar{p}p$ analysis compares favorably
with a similar analysis of the $pp$ data.
This means therefore that we have a statistically rather good
solution and also that this solution will be essentially correct.
\section{The Nijmegen ${\bf \overline{N}\! N}$ database}
\label{VIII}
An essential ingredient in our successfully completed PWA,
as well as an important product of this PWA, is the Nijmegen
$\overline{N}\!N$ database~\cite{Tim93}.
As pointed out before, we use all data with $p_{L} < 925$ MeV/c
or $T_{L} < 454$ MeV. This means that our momentum range is
similar to the momentum range used in the Nijmegen $N\!N$ PWA's.
We will compare regularly with the $N\!N$ case to show that the
same methods, which work well in $N\!N$, work also well in $\overline{N}\!N$
and that the results are also similar.
The number of data $N_{d}$ in the various final datasets are
$N_{d}(\bar{p}p) = 3543$, $N_{d}(pp) = 1787$, and $N_{d}(np) = 2514$.
In the process of arriving at these final datasets we had to reject data.
We do not want to go into details~\cite{Ber88}
about what are the various criteria
to remove data from the dataset. We would like to point out, however,
that in $pp$ scattering there is a long history about which datasets
are reliable, and which not. We did not invent the method of discarding
incorrect data, we just followed common practice and used common sense.
In the $\bar{p}p$ case we needed to reject 744 data, which is 17\% of
our final dataset.
In the $pp$ case we discarded 292 data or 14\% of the final dataset,
and in the $np$ case we rejected 932 data, which amounts to 27\%
of the final dataset.
It is clear that the $\bar{p}p$ case does not seem to be out of bounds.
Of course, it is unfortunate that so many data have to be rejected,
because these data represented many man-years of work and a lot of
money and effort. However, when one wants to treat the data in a
statistically correct manner, then often one cannot handle all
datasets, but one must reject certain datasets.
This does not mean that all these rejected datasets
are ``bad'' data, it only means, that if we want to apply statistical
methods, then, unfortunately, certain datasets cannot be used.
\begin{table}
\centering
\begin{tabular}{lcc|cc}
& \multicolumn{2}{c|}{elastic} & \multicolumn{2}{c}{charge exchange} \\
& LEAR & rest & LEAR & rest \\ \hline
$\sigma_{T},\sigma_{A}$ & 124 & - & - & 63 \\
$\sigma(\theta)$ & 281 & 2507 & 91 & 154 \\
$A(\theta)$ & 200 & 29 & 89 & - \\
$D(\theta)$ & 5 & - & 9 & - \\ \hline
\multicolumn{1}{l}{total} & 610 & 2536 & 189 & 217
\end{tabular}
\caption{Number of elastic and charge-exchange data
divided over various categories.} \label{tab.I}
\end{table}
In Table~\ref{tab.I} we give the number of data points
divided into elastic versus charge exchange,
LEAR versus the rest, and total cross sections
$\sigma_{T}$, annihilation cross sections $\sigma_{A}$,
differential cross sections $\sigma(\theta)$, analyzing-power
data $A(\theta)$, and depolarization data $D(\theta)$. This table
gives some interesting information. \\
The most striking fact is that: \\
{\it Of the final dataset only $22$\% of the data comes from LEAR.} \\
This is after 10 years of operation of LEAR. Remember the promises
(or were they boasts)
from CERN, made before LEAR was built. They were something like:
``Only one day of running LEAR will produce
more scattering data than all other methods together.''
Unfortunately, this promise of CERN did not work out.
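The 22\% figure can be checked directly against the entries of Table~\ref{tab.I}; a small consistency sketch (numbers copied from the table):

```python
# Consistency check of Table I: column sums and the quoted fraction of
# the final dataset that comes from LEAR.
elastic_lear = [124, 281, 200, 5]    # sigma_T/A, sigma(theta), A(theta), D
elastic_rest = [2507, 29]
ce_lear = [91, 89, 9]
ce_rest = [63, 154]

totals = [sum(elastic_lear), sum(elastic_rest), sum(ce_lear), sum(ce_rest)]
print(totals)                        # [610, 2536, 189, 217], as in the table

lear_fraction = (totals[0] + totals[2]) / sum(totals)
print(round(100 * lear_fraction))    # 22 (percent)
```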
Also it is clear that LEAR has not given much valuable information
about $\sigma(\theta)$ for the elastic reaction.
A lot of the elastic $\sigma(\theta)$ data from
LEAR unfortunately needed to be rejected~\cite{Tim93}!
This does not mean that LEAR did not produce beautiful data. Some of
the charge-exchange data and the strangeness-exchange data are really
of high quality.
Another striking fact is the virtual absence of spin-transfer and
spin-correlation data. For the elastic reaction below 925 MeV/c
there are only 5 depolarizations measured with enormous
errors~\cite{Kun91}. Very recently, some depolarization
data of good quality have become available for the
charge-exchange reaction~\cite{Bir93}.
A valid question is therefore: \\
{\it Can one do a PWA of the $\bar{p}p$ data, when there are
essentially no ``spin data''?} \\
The answer is yes! The proof that it can be done lies in the fact that we
actually produced a $\bar{p}p$ PWA with a very good $\chi^{2}/N_{d}$.
We have also checked this at length in our $pp$ PWA's.
We convinced ourselves that a $pp$ PWA using only $\sigma(\theta)$ and
$A(\theta)$ data gives a pretty good solution. Of course,
adding spin-transfer and spin-correlation data
was helpful and tightened the error bands. However, most
spin-transfer and spin-correlation data in the
$pp$ dataset actually did not give any additional information.
\section{Fits to the data} \label{IX}
It is of course impossible to show here how well the various
experimental data are fitted. From our final $\chi^{2}/N_{d} = 1.085$
one can draw the conclusion, that almost every dataset will have a
contribution to $\chi^{2}$ which is roughly equal to the
number of data points as is required by statistics.
Let us look at some of the experimental data.
In Fig.~\ref{fig.1} we present total cross sections from PS172~\cite{Clo84}.
The fit gives for these 75 data points $\chi^{2} = 88.4$.
In the same Fig.~\ref{fig.1} one can also find 52 annihilation
cross sections from PS173~\cite{Bru87}.
These points contributed $\chi^{2} = 65.3$
to the total $\chi^{2}$.
\begin{figure}
\vspace*{8cm}
\special{psfile=fig1.ps hscale=90 vscale=90 hoffset=-90 voffset=-275}
\caption{Total cross sections (PS172) and annihilation cross
sections (PS173) with the curves from the Nijmegen PWA.}
\label{fig.1}
\end{figure}
In Fig.~\ref{fig.2} we plot the elastic
differential cross section $\sigma(\theta)$ at
$p_L = 790$ MeV/c as measured in 1976 by Eisenhandler \mbox{\it et al.\/}~\cite{Eis76}.
The 95 data points contribute $\chi^{2} = 101.5$. The vertical scale
is logarithmic. The nice fit reflects the high quality of these
pre-LEAR data.
\begin{figure}
\vspace*{8cm}
\special{psfile=fig2.ps hscale=90 vscale=90 hoffset=-90 voffset=-275}
\caption{Elastic differential cross section at 790 MeV/c from Eisenhandler
{\it et al}., with the curve from the Nijmegen PWA.}
\label{fig.2}
\end{figure}
The differential cross section of the charge-exchange reaction
$\bar{p}p \rightarrow \bar{n}n$ at \linebreak[4]
$p_{L}=693$ MeV/c as measured
by PS199 is given in Fig.~\ref{fig.3}. The 33 data contribute \linebreak[4]
$\chi^{2} = 39.3$. This dataset can be considered important,
because it is very constraining. One needs all partial waves up
to $L=10$ to get a satisfactory fit to these data.
\begin{figure}
\vspace*{8cm}
\special{psfile=fig3.ps hscale=70 vscale=70 hoffset=-25 voffset=-220}
\caption{Charge-exchange differential cross section at 693 MeV/c
from PS199, with the curve from the Nijmegen PWA.}
\label{fig.3}
\end{figure}
\section{Coupled-channels potential model} \label{X}
Having finished our discussion of the Nijmegen $\bar{p}p$
partial-wave analysis we can look at the $\overline{N}\!N$ potentials.
We decided to update the old coupled-channels model Nijmegen CC84 of
Timmers \mbox{\it et al.\/}~\cite{Tim84}. Because of our experience with the various
datasets this was not very difficult, just very computer-time consuming.
The result was the new Nijmegen CC93 model~\cite{Tim91c}.
In this model we treat the $\overline{N}\!N$ coupled channels on the
particle basis. We therefore have a $\bar{p}p$ channel as well as a
$\bar{n}n$ channel. This allows us to introduce the charge-independence
breaking effects of the Coulomb interaction in the
$\bar{p}p$ channel and of the mass differences between the
proton and neutron as well as between the exchanged $\pi^{0}$ and
$\pi^{\pm}$.
These $\overline{N}\!N$ channels are coupled to annihilation channels.
We assume here that annihilation can happen only into two fictitious
two-meson channels: one pair of mesons with total mass 1700 MeV/c$^{2}$, and
another pair with total mass 700 MeV/c$^{2}$. Moreover, we assume
that these annihilation channels appear in both isospins $I=0$
as well as $I=1$.
We end up with 6 coupled channels for each of the $\bar{p}p$ channels:
$^{1}S_{0}$, $^{1}P_{1}$, $^{1}D_{2}$, $^{1}F_{3}$, etc.\
and $^{3}P_{0}$, $^{3}P_{1}$, $^{3}D_{2}$, $^{3}F_{3}$, etc.
Due to the tensor force we end up with 12 coupled channels
for each of the $\bar{p}p$ coupled channels:
$^3S_1+\,^3D_1$, $^3P_2+\,^3F_2$,
$^3D_3+\,^3G_3$, etc.
We use the relativistic Schr\"odinger equation in coordinate space.
The interaction is then described by either a $6\times 6$ or a
$12\times12$ potential matrix
\[
V = \left( \begin{array}{cc}
V_{\overline{N}\!N} & V_{\!A} \\[2mm]
\widetilde{V}_{\!A} & 0
\end{array} \right) \ .
\]
The $2\times 2$ (or $4\times 4$) submatrix $V_{\overline{N}\!N}$
we write as
\[
V_{\overline{N}\!N} = V_{C} + V_{M\!M} + V_{O\!B\!E}\ ,
\]
where for $V_{C}$ we use the relativistic Coulomb potential, $V_{M\!M}$
describes the magnetic-moment interaction, and for $V_{O\!B\!E}$ we use
the charge-conjugated Nijmegen $N\!N$ potential Nijm78~\cite{Nag78}.
We have assumed that we may neglect the diagonal interaction in the
annihilation channels. The annihilation potential $V_{\!A}$
connects the $\overline{N}\!N$ channels to the two-meson annihilation
channels. It is either a $2\times 4$ matrix or a $4\times 8$ matrix.
This potential we write as
\[
V_{\!A}(r) = \left( V_{C} + V_{SS} \mbox{\boldmath $\sigma$}_{1} \cdot
\mbox{\boldmath $\sigma$}_{2} + V_{T} S_{12} m_{a}r + V_{SO}
{\bf L}\cdot {\bf S} \frac{1}{m_{a}^{2}r} \frac{d}{dr} \right)
\frac{1}{1+e^{m_ar}}\ .
\]
The factor $m_{a}r$ is introduced in the tensor force to make
this potential identically zero at the origin. Here $m_{a}$ is the
mass of the meson (either 850 MeV/c$^{2}$ or
350 MeV/c$^{2}$).
This annihilation potential depends on the spin structure of the initial
state. For each isospin and for each meson channel five parameters
are introduced: $V_{C}$, $V_{SS}$, $V_{T}$, $V_{SO}$, and $m_a$.
This gives a model with in total $4\times 5=20$ parameters. These
parameters can then be fitted to the $\overline{N}\!N$ data.
Doing this we obtained $\chi^{2}/N_{d}=3.5$.
It is clear, of course, that although the old Nijmegen soft-core
potential Nijm78 is a pretty good $N\!N$ potential, it is
definitely not the ultimate potential. We decided therefore to
introduce now as extra parameters the coupling constants of the
$\rho$, $\omega$, $\varepsilon(760)$,
pomeron, and $a_{0}(980)$. Adding these parameters
allowed us quite a drop in $\chi^{2}$. Now we reached
\[
\chi^{2}/N_{d} = 1.58 \ ,
\]
with a total of 26 parameters.
\section{The reaction $\bar{p}p \rightarrow \bar{\Lambda}\Lambda$} \label{XI}
It is perhaps not superfluous to point out here, that we also made a
PWA of the strangeness-exchange reaction $\bar{p}p \rightarrow \bar{\Lambda}
\Lambda$~\cite{Tim91a,Tim92}.
Fitting the $N_{d}=142$ data, we get $\chi^{2}_{\rm min}/N_{d}=1.027$.
The first theoretical treatments of this reaction were by F. Tabakin
and R.A. Eisenstein~\cite{Tab85} and independently by P. Timmers
in his thesis~\cite{Tim85}.
Many other treatments of this reaction can be found (see
{\it e.g.} Refs.~\cite{Nis85,Koh86a,Fur87,Kro87,Alb88,LaF88,Hai93}).
In the meson-exchange
models it is clear that next to $K(494)$ exchange, there is also
the exchange of the vector meson $K^{*}(892)$. In Nijmegen we have
been able to determine the $\Lambda N\!K$ coupling constant at the
pole~\cite{Tim91a}. We found \linebreak[4]
$f^{2}_{\Lambda N\!K} = 0.071 \pm 0.007$. This value is in agreement with the
value $f^{2}_{\Lambda N\!K}=0.0734$ \linebreak[4]
used in the recent soft-core Nijmegen
hyperon-nucleon potential~\cite{Mae89}. When we determine also the mass of
the exchanged pseudoscalar meson we find \linebreak[4]
$m(K) = 480 \pm 60$ MeV in
good agreement with the experimental value \linebreak[4]
$m(K)=493.646(9)$ MeV.
This shows that we are actually looking at the one-kaon-exchange mechanism
in the reaction $\bar{p}p \rightarrow \bar{\Lambda}\Lambda$.
When the data for the reactions $\bar{p}p \rightarrow \bar{\Lambda}\Sigma$
and $\bar{\Sigma}\Lambda$ are available, then also the $\Sigma N\!K$ coupling
constant can be determined. When this can be done with sufficient
accuracy, then information about the SU(3) ratio $\alpha = F/(F+D)$ can be
obtained, and SU(3) for these coupling constants can then actually be
studied.
\section{PWA as a TOOL} \label{concl}
We have presented here some of the results of the first, energy-dependent
partial-wave analysis of the elastic and charge-exchange $\bar{p}p$
scattering data~\cite{Tim93}. We also discussed the Nijmegen
$\overline{N}\!N$ dataset, where we removed the contradictory or otherwise
not so good data from the world $\bar{p}p$ dataset.
The main reason that we have been able to perform a PWA of the
$\bar{p}p$ scattering data is that practically all partial-wave amplitudes
are dominated by the potential outside $r=1.3$ fm. This long range
potential consists of the electromagnetic potential, the OPE potential,
and the exchange potentials of the mesons like $\rho,\ \omega,\
\varepsilon$, etc. This long-range potential is therefore well known.
In our PWA of the $N\!N$ scattering data~\cite{Klo93} it was noticed by
us that the long-range potential in the $N\!N$ case dominated the
$N\!N$ partial-wave scattering amplitudes. In the $\bar{p}p$ case the
long-range potential is much stronger (see section~\ref{II}), and
the dominance in the $\bar{p}p$ case is therefore more marked.
One could formulate this the following way. The important $\bar{p}p$
partial-wave amplitudes are ``$\pi,\ \rho,\ \varepsilon$, and $\omega$
dominated.'' This gives the most important energy dependence of
these amplitudes. The slower energy dependence due to the short-range
interaction can easily be parametrized.
A second reason for the successful PWA is the availability and
easy access to computers, because the methods used are very computer
intensive.
We want to stress the fact that our multienergy PWA can now be used
as a {\bf tool}. This tool allows us first of all to judge the
quality of a particular dataset. This enabled us to set up the Nijmegen
$\overline{N}\!N$ database. Secondly, it can be used in the study
of the $\overline{N}\!N$ interaction.
To demonstrate these things let us look at the Meeting Report
of the Archamps meeting from October 1991~\cite{Bra93}. Beforehand
the participants were asked to discuss at the meeting such questions as: \\
{\it What is the evidence for one-pion exchange in the
$\overline{N}\!N$ interaction?} \\
In Nijmegen we determined~\cite{Tim91b,Tim93}, using the PWA
as a tool, the $N\!N\pi$ coupling constant for charged pions from
the data of the charge-exchange reaction $\bar{p}p \rightarrow \bar{n}n$.
We found~\cite{Tim93}
\[
f_{c}^{2} = 0.0732 \pm 0.0011\ .
\]
This is only 64 standard deviations away from zero!!
Using analogous techniques we could also
determine this coupling constant for charged pions in our analyses
of the $np$ scattering data.
We found there~\cite{Sto93}
\[
f_{c}^{2} = 0.0748 \pm 0.0003\ .
\]
The same coupling constant can also be seen in analyses of the
$\pi^{\pm}p$ scattering data. There the VPI\&SU group
finds $f_{c}^{2} = 0.0735\pm 0.0015$~\cite{Arn90}.
In $pp$ scattering we have
determined the $pp\pi^{0}$ coupling constant.
Our latest determination gives~\cite{Sto93}
\[
f_{p}^{2} = 0.0745 \pm 0.0006\ .
\]
The nice agreement between these different values shows \\
(1) the charge independence of these coupling constants, and it shows that\\
(2) the presence of OPE in the $\overline{N}\!N$ interaction
is a 64 s.d.\ effect. \\
{\it What more evidence does one want?} \\
We also played around with the pion masses.
In $N\!N$ scattering we were able to determine
the masses of the $\pi^{0}$ and $\pi^{\pm}$. We found there
$m_{\pi^0}=135.6(1.3)$ and $m_{\pi^\pm}=139.4(1.0)$ MeV/c$^2$,
to be compared to the particle-data values \\
$m_{\pi^0}=134.9739$ and $m_{\pi^\pm}=139.56755$ MeV/c$^2$. We did
not try to determine these masses again in $\overline{N}\!N$ scattering.
However, we think we could have. We checked that changing the correct
pion masses to an averaged $\pi$-mass raised our
$\chi^{2}_{\rm min}(\bar{p}p)$ by 9.
Another question posed before that meeting was
{\it ``What is the evidence for the $G$-parity rule?''}
In our determination of the $N\!N\pi$ coupling constant in the
charge-exchange reaction this $G$-parity rule was of course implicitly
assumed. Our determination of $f_{c}^{2}$ and its agreement with
the expected value can therefore be seen as a proof of this rule
for pion exchange.
When one looks through the literature one finds several, in our view,
artificially created problems. Why is this done? Only to get beamtime?
One of such problems is the statement: ``The OBE model does not work.''
We would like to point out that the OBE model works
excellently~\cite{Tim91c}. Other
examples can be found in the already mentioned Archamps Meeting
Report~\cite{Bra93}. The authors of this report claim
that the charge-exchange differential cross sections at low
energy pose a {\it challenge for every model}. Let us look at those
data. Contrary to what is stated in the Meeting Report these data
are a part of our dataset, so we have sufficient knowledge to discuss
them. The discussion concerns data of PS173~\cite{Bru86b}.
At four momenta the differential
cross section for $\bar{p}p \rightarrow \bar{n}n$
was measured. The results of our PWA for these measurements are: \\
At $p_{L} = 183$ MeV/c there are 13 $d\sigma_{ce}/d\Omega$ data. 4 of
these data are rejected because each of them contributes more than 9
to our $\chi^{2}$. This is the three-standard-deviation rule.
The remaining 9 data contribute $\chi^{2}=8.3$. \\
At $p_{L}=287$ MeV/c there are 14 $d\sigma_{ce}/d\Omega$ data, where 1
of these data points is discarded because it contributes more than 9
to our $\chi^{2}$. The remaining 13 data contribute
$\chi^{2}=24.0$. \\
At $p_{L}=505$ MeV/c there are 14 $d\sigma_{ce}/d\Omega$ data. One
of them is discarded because of its too large $\chi^{2}$ contribution.
The remaining 13 data contribute $\chi^{2}=30.1$. \\
At $p_{L}=590$ MeV/c there are 15 $d\sigma_{ce}/d\Omega$ data, where 2
of them are discarded.
The remaining 13 data points contribute $\chi^{2}=32.8$.
What can we conclude? At the lowest momentum we rejected 30\% of the
data and the remaining dataset is then OK. However, at the other three
momenta we find rather large contributions to $\chi^{2}$. A dataset
of 13 data is, according to the three-standard-deviation rule, not
allowed to contribute more than $\chi^{2}_{\rm max}=31.7$
to the $\chi^{2}_{\rm min}$ of our database.
This means that we really should reject the data at $p_{L} = 590$
MeV/c. When we combine the 4 datasets to one dataset with 47 data points,
we see that these data points contribute $\chi^{2}=95.2$ to
$\chi^{2}_{\rm min}$ of our database. The rule says that a set
of 47 data may not have a $\chi^{2}$-contribution larger than
78.5. This means that this whole dataset should be rejected.
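The quoted bounds $\chi^{2}_{\rm max}=31.7$ for 13 data and 78.5 for 47 data are consistent with reading the three-standard-deviation rule as the one-sided 99.73\% point of a $\chi^{2}$ distribution with $N$ degrees of freedom. A sketch using the Wilson--Hilferty approximation (our reconstruction of the rule, not necessarily the exact Nijmegen prescription):

```python
import math

# Three-standard-deviation rejection rule (our reading of the text):
# a set of N data points is rejected when its chi^2 contribution exceeds
# the 99.73% point of a chi-squared distribution with N degrees of
# freedom.  The Wilson-Hilferty approximation gives that quantile
# without any external library.
def chi2_max(n, z=2.782):
    """99.73% chi^2 quantile for n dof; z is the normal quantile with
    Phi(z) = 0.9973 (the two-sided 3-sigma probability level)."""
    c = 2.0 / (9.0 * n)
    return n * (1.0 - c + z * math.sqrt(c)) ** 3

print(round(chi2_max(13), 1))   # ~31.7, the bound quoted for 13 data
print(round(chi2_max(47), 1))   # ~78.5, the bound quoted for 47 data
```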
The only reason, that these dubious data are still contained in the
Nijmegen $\overline{N}\!N$ database and not discarded, is that
there are no other charge-exchange data at such low momenta.
Our philosophy here was that these
imperfect data are perhaps better than no data at all. The authors of
the Archamps Meeting Report~\cite{Bra93}, two experimentalists and a
phenomenologist, are obviously incorrect. Our PWA shows clearly that these
data cannot pose ``a challenge for every model,'' because these data
should really be discarded!
Another {\it challenge for models} seems to be that
{\it ``the strangeness-exchange reaction}
$\bar{p}p \rightarrow \bar{\Lambda}\Lambda$ {\it takes place in
almost pure triplet states.''} Let us look for a moment
in more detail at the beautiful data of PS185~\cite{Bar87}.
These data have been studied by many people. In Nijmegen we
performed also a PWA of these data~\cite{Tim91a,Tim92}.
It is very clear from our PWA that in this reaction the tensor force
plays a dominant role. The tensor force acts only in triplet waves.
These triplet waves make up the bulk of the cross section. This result
has been confirmed by several groups and clearly this is not a challenge,
but only a case of strong tensor forces. In section~\ref{II} we already
explained the reason for such strong tensor forces.
A big deal is often made of the $\rho$ parameter, the
real-to-imaginary ratio of the forward scattering amplitude.
The extraction of
this parameter from the available experimental data is based on a rather
shaky theory and on not much better data, polluted by Moli\`ere
scattering and, in our opinion, the underestimation of systematic errors.
When we look for example at the seven $\rho$ determinations by
PS173~\cite{Bru85}, then we note that this
group has published the corresponding $d\sigma/d\Omega$
data~\cite{Bru86a} at only four of these energies!
In our PWA we discard these data at
three of the energies. We feel therefore strongly, that the
$\rho$ determinations by PS173 should clearly be discarded and very
probably the errors on the determinations by PS172~\cite{Lin87}
should be enlarged considerably.
This leads to the simple picture as shown in Fig.~\ref{fig.4}.
\begin{figure}
\vspace*{8cm}
\special{psfile=fig4.ps hscale=70 vscale=70 hoffset=-25 voffset=-220}
\caption{The $\rho$ amplitude. Data are from PS172 and PS173. The
curve is the prediction from the Nijmegen PWA.}
\label{fig.4}
\end{figure}
Another curious trend is the direct comparison between predictions
of meson-exchange models and of simple quark-gluon models for
the strangeness-exchange reaction. There are even serious
proposals~\cite{Hai92,Alb93} for experiments to distinguish between
these models: it is proposed to measure the spin transfer in $\bar{p}p
\rightarrow \bar{\Lambda}\Lambda$.
Let us make it clear from the outset, that we believe that {\bf all}
data must eventually be explained in terms of quark-gluon exchanges,
because this is the underlying theory. However, for the analogous
$N\!N$ interaction one has unfortunately not yet succeeded in giving
a proper explanation of the meson-exchange mechanism in terms of quark
and gluon exchanges. The theory is not so advanced yet. In the
$\overline{N}\!N$ reactions we are of course in exactly the same
situation. Using our PWA as a tool we determined the $\Lambda N\!K$ coupling
constant {\bf and} the mass of the exchanged kaon. This way we established
beyond {\bf any} doubt that the one-kaon-exchange potential is present
in the transition potential and that again the tensor potential dominates.
It is absolutely not necessary to measure the spin transfer to
distinguish between the $K(494)$- and $K^{*}(892)$-exchange picture and
a simple quark-gluon-exchange picture. This distinction has already
been made using our PWA as a tool and using just the differential
cross sections and polarizations.
It has become a fad to promote the measurements of spin-transfer and
spin-correlation data, as if these data will solve all our troubles. For
example in the already often mentioned Archamps Meeting Report~\cite{Bra93}
one can read that the longitudinal spin transfer is obviously a favorite of
one of its authors. Somewhere else in the same report one finds the
question: ``Which new spin measurement would be crucial to confirm or
rule out present models?'' The answer to this last question is of
course: None! When one measures differential cross sections, polarizations,
spin transfers, or spin correlations, carefully enough, none of the present
models will fit these new data, but adjustments will be made in the models
in such a way that they do fit the data again. Physics is hard work
from experimentalists as well as from theorists. One needs many
and varied data and one single experiment has only a marginal
influence.
\vspace{\baselineskip}
\noindent {\bf Acknowledgments} \\
Part of this work was included in the research program of the Stichting voor
Fundamenteel Onderzoek der Materie (FOM) with financial support from the
Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
\section{\label{sec:level1}Introduction}
Field ion microscopy (FIM) was the first microscopy technique that was able to image individual atoms on a metal surface with atomic spatial resolution\cite{muller1951feldionenmikroskop,muller1956resolution}. The imaging gas (e.g. He, Ne) is ionized above the surface of a nano-sharp, needle-shaped specimen (end diameter $<100$\,nm) subject to a high electric field of $10^{10}$\,V/m.
The ions are then accelerated along the field lines and hit a
two-dimensional detector to produce an image magnified by a factor of 10$^7$ \cite{brandon1963resolution}.
Even though field ion microscopy as a surface characterization method has largely been superseded by scanning
tunneling microscopy \cite{chen2021introduction}, atomic force microscopy, and similar scanning probe techniques\cite{lucier2005determination}, it has seen a recent revival when combined
with field evaporation (3D-FIM) \cite{katnagallu2022three,vurpillot2007towards,dagan2017automated}, as one of the few techniques that can image crystallographic features with atomic resolution in three dimensions.
FIM was used to provide insights into crystallographic defects such as vacancies\cite{dagan2017automated,speicher1966observation,park1983quantitative}, dislocations, or voids \cite{katnagallu2019imaging,klaes2021development}.
However, quantitative interpretations of the imaging contrast enabled by theory have been sparse \cite{Katnagallu2018}, even though numerous studies have focused on understanding the image contrast in field ion images \cite{FORBES1971, Forbes1972, Forbes1972b}.
Under best imaging field conditions, the local imaging contrast is dominated
by the ionization probability at 5-10\,\AA{} above the surface \cite{thesis_Katnagallu2018}.
Ionization near the surface requires an electron transfer
from the gas atom into an empty surface state. Without
an electrostatic field, this would be energetically impossible
since the ionization level of the imaging gas ($\approx$15-25\,eV) exceeds the work function (\mbox{$\approx$3-6\,eV} for most metals).
The electric field provides the required voltage drop
to overcome this
difference and to enable electron tunneling when the
gas atom is beyond a critical distance away from the
surface.
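To give a sense of scale (an order-of-magnitude estimate, assuming a representative work function of $\Phi\approx5$\,eV rather than a value from a specific reference): approximating the potential drop as linear in the field strength $F$, the critical distance $z_c$ follows from equating the energy gain $eFz_c$ with the deficit $I-\Phi$ between the ionization energy $I$ and the work function,
\begin{equation}
z_c \approx \frac{I-\Phi}{eF} \approx \frac{21.5\,{\rm eV}-5\,{\rm eV}}{40\,{\rm V/nm}} \approx 0.4\,{\rm nm}
\end{equation}
for Ne at a typical imaging field, consistent with the ionization zone of 5-10\,\AA{} mentioned above.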
To simulate this within density-functional theory (DFT), we adapt the Tersoff-Hamann theory of tunneling between a real surface and a probe tip, known from scanning tunneling microscopy (STM)\cite{tersoff1985theory}. In their ground-breaking work, Tersoff and Hamann found the STM tunneling current to be proportional to the surface local density of states (LDOS) near the Fermi level, evaluated at the position of the tip.
Their derivation equally applies when the spherical tip
is replaced by a single atom.
Both STM and FIM use electron tunneling for forming an image of a solid surface. However, the field strength in FIM is higher than in STM, so explicit fields must be considered in simulations \cite{genDipole,katnagallu2019imaging}. Our previous work \cite{katnagallu2019imaging} applied the Tersoff-Hamann (TH) approximation in DFT simulations to explain the brighter appearance of Re atoms during FIM imaging of a Ni-Re alloy, focusing on states right above the Fermi level. While such a selective approach explained the chemical contrast in this case, the numerical accuracy emerged as a critical issue in other cases, as discussed below, which limits the applicability of the approach.
\begin{figure}
\includegraphics[width=0.5\textwidth]{slab_noise.png}
\caption{Typical evolution of numerical wave functions on top of Ta in a Ni(012) surface in the presence of an electric field as obtained from a standard DFT code. The decay behavior of the wave function (orange curve) is shown for an eigenvalue at the Fermi level. For FIM image contrast, $\psi$ is needed at the ionization plane.}
\label{fig:sketch_idea}
\end{figure}
The computational challenge in these calculations is that the high electric field leads to a very fast decay of wave functions into the vacuum. When we employed the Tersoff-Hamann approach for a wider range of cases, we found that the accuracy of wave functions from standard DFT is often insufficient to make quantitative predictions for FIM, as sketched in Fig.~\ref{fig:sketch_idea}.
The standard wave-function optimization algorithms implemented in plane-wave DFT codes are based on the Rayleigh-Ritz method of minimizing the (global) norm of the residue\cite{payne1992iterative}. This implies that the highest magnitude in the wave function determines the global algorithm's notion of 'large' and 'small'. Hence, they give the best relative accuracy where the wave function amplitude is large. On the downside, in areas where the wave function is small in magnitude, the relative error between the approximate solution and the exact solution can become excessively large even if the total residue is at the numerical limit. The electrostatic field present in FIM further leads to a strong, non-exponential decay of wave functions and they run into a regime where noise dominates.
The main idea of the present work is to recompute the tails of the wave functions with an algorithm that works at the local scale. More precisely, we develop an algorithm that is local with respect to the dominant direction for scale (in the following: $z$), i.e., away from the slab surface.
For this, we assume that the eigenvalues $\epsilon_i$ and corresponding eigenfunctions $\psi_{i}(\mathbf r)$ close to the slab have been reliably computed. Then, the task is to integrate the underlying second-order differential equation, i.e., the Kohn-Sham equation, from this trusted region along the $z$ direction.
The rest of this article is organized as follows.
In Sec.~\ref{sec:EXTRA}, we will present our
tail extrapolation scheme
and show that it is robust even if the wave function amplitude varies over 10 or more orders of magnitude along the direction of integration. In Sec.~\ref{sec:results} we apply the new
algorithm to a prototype surface, Ni(012) containing substitutional Ta atoms,
and show that we can successfully reproduce the enhanced
brightness observed in experiment \cite{morgado2021revealing}.
\section{Extrapolation of tail via reverse integration algorithm (EXTRA)}
\label{sec:EXTRA}
\subsection{Reverse Integration}
In DFT, the Kohn-Sham equation reads\cite{sholl2011density}
\begin{equation}
\left(-\frac{1}{2}\nabla^2 + V_{\rm eff}(\mathbf r)\right)\psi_i(\mathbf r) = \epsilon_i \psi_i(\mathbf r)
\label{eq:KS}
\end{equation}
using Hartree atomic units ($\hbar$=1, $m_e$=1, $4\pi\epsilon_0$=1). The effective potential is
obtained as
\begin{equation}
V_{\rm eff} = V_{\rm ext}+V_{\rm H}+V_{\rm xc}
\;.
\end{equation}
The external potential $V_{\rm ext}$ defines the
Coulomb interaction between an electron and the collection of
atomic nuclei. The Hartree potential $V_{\rm H}$ describes the classical Coulomb repulsion between
the electrons, and the exchange-correlation potential
$V_{\rm xc}$ encompasses quantum mechanical
corrections.
In common density-based approximations (e.g. LDA, GGA, meta-GGA), $V_{\rm xc}$ depends on the local density
and vanishes as the density becomes zero. The vacuum potential is therefore dominated by the electrostatic potential $V_{\rm ext}+V_{\rm H}$.
Rewriting Eq.~(\ref{eq:KS}) as
\begin{widetext}
\begin{equation}
\frac{1}{2}\frac{\partial^2}{\partial z^2}\psi_i(\mathbf r)
=\left\{-\frac{1}{2}\left(\frac{\partial^2}{\partial x^2} +\frac{\partial^2}{\partial y^2}\right)+ V_{\rm eff}(\mathbf r) - \epsilon_i\right\}\psi_i(\mathbf r)
\label{eq:KS_along_z}
\end{equation}
\end{widetext}
provides the basis to numerically integrate the wave function $\psi_i$ along the $z$-direction. For the sake of readability, we will omit the state index $i$ from the equations in the following.
It is well known from the theory of second-order differential equations in one dimension (1D) that there are two linearly independent solutions.
In our case, a decaying and a rising solution are possible. In forward integration the decaying solution is always challenging to compute because numerical noise (e.g. rounding errors due to the finite precision) can produce a small contribution of the rising solution, which then grows as the algorithm proceeds and ultimately dominates.
To circumvent this problem, one can reverse the direction of integration: by starting deep in the vacuum and integrating towards the surface, the desired solution is growing in magnitude and thus can be easily computed. In consequence, the extrapolation problem is turned into a problem of choosing starting conditions such that the integrated tail solution matches the values of the global optimization near the surface, in our case at the matching plane $z = z_{\rm match}$.
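This forward/reverse asymmetry can be demonstrated with a minimal, self-contained numerical experiment (an illustrative Python sketch for the constant-coefficient model $\psi''=\psi$, not our production code): forward Numerov integration seeded with the exact decaying solution $e^{-z}$ is eventually overwhelmed by the rising solution, while reverse integration from arbitrary seed values recovers $e^{-z}$ after rescaling.

```python
import numpy as np

# Model problem psi'' = psi: decaying solution exp(-z), rising solution exp(+z).
h = 0.05
z = np.arange(0.0, 30.0 + h / 2, h)
f = 1.0 - h**2 / 12.0  # Numerov weight for the constant coefficient q = 1

# Forward integration, seeded with the exact decaying solution: tiny
# discretization and rounding errors excite the rising solution, which
# grows relative to the desired one and eventually dominates.
fwd = np.empty_like(z)
fwd[0], fwd[1] = np.exp(-z[0]), np.exp(-z[1])
for n in range(1, len(z) - 1):
    fwd[n + 1] = ((12.0 - 10.0 * f) * fwd[n] - f * fwd[n - 1]) / f

# Reverse integration, seeded with arbitrary values deep in the "vacuum":
# the desired solution grows toward the surface; rescale it at z = 0.
rev = np.empty_like(z)
rev[-1], rev[-2] = 1.0, 2.0
for n in range(len(z) - 2, 0, -1):
    rev[n - 1] = ((12.0 - 10.0 * f) * rev[n] - f * rev[n + 1]) / f
rev *= np.exp(-z[0]) / rev[0]
```

In such a run, the forward solution deviates from $e^{-z}$ by many orders of magnitude at the far end, while the reverse solution tracks $e^{-z}$ to high relative accuracy everywhere except in the short warmup zone next to the seed values.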
Before we come to the integration algorithm, let us briefly review the key properties of the effective
potential.
In the vacuum, where the density becomes negligibly small, the electrostatic part becomes dominant and $V_{\rm xc}$ vanishes. Hence, it is interesting to understand how the electrostatic potential develops far away from the surface.
For this, one can apply a Fourier transform in the $xy$ plane [with wave vector $\mathbf k=(k_x, k_y)$]
to a 'mixed-space' representation $V(\mathbf k, z)$.
As shown in Appendix A,
the $xy$-averaged electrostatic potential is constant or diverges
linearly along $z$, while the lateral potential variations decay exponentially.
Hence, beyond a certain point, the $xy$-averaged potential will strongly dominate the evolution of the wave functions along $z$. We, therefore, decompose the potential
\begin{equation}
V(x,y,z) = \overline V(z) +\delta V(x,y,z)
\label{equation:full_potential}
\end{equation}
into the average potential $\overline{V}$, i.e., the $k_x=k_y=0$ component, and the lateral variation $\delta V$ for
$|\mathbf k|>0$. Fig.~\ref{fig:average_pot} shows the average potential obtained from DFT calculations discussed in Sec.~\ref{sec:results}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{average_V_neu.png}
\caption{The effective potential along $z$, averaged over the $xy$ plane, of a charged Ta-Ni(012) slab with 9 atomic layers (atomic layer positions indicated by orange vertical lines). An electric field of 40\,V/nm is applied at the top side.}
\label{fig:average_pot}
\end{figure}
\subsection{Overview of tail extrapolation}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{new_regions_1.png}
\caption{Algorithm applied to the various regions above the surface. The reverse integration in $z$ proceeds from the top down to $z_{\rm match}$. In region I, the 1D Numerov solution is computed. In region II, the novel EXTRA algorithm is applied. Region III contains the unmodified plane-wave wave functions obtained from the global solver (plane-wave DFT code).}
\label{fig:region}
\end{figure}
In our algorithm, we split the space above the surface into three
regions, named regions I, II, and III from outside in, as shown schematically in Fig.~\ref{fig:region}. Region III, closest to the surface, is where the
DFT program's global solver provides sufficiently accurate numerical wave functions and hence does not require additional optimization.
For simplicity, we define region III below
a plane parallel to the surface, at height $z_{\rm match}$.
Region I is far away from the slab, where the lateral variations of the potential are negligible. As shown in Sec.~\ref{sec:regionI}, this simplification renders the Kohn-Sham equation separable in the mixed space within region I, and allows us to use 1D Numerov integration along $z$ as an efficient and accurate algorithm to compute wave-function tails. Closer to the slab, in region II, lateral variations in the potential become important and the 1D Numerov would be inaccurate. We therefore generalized the Numerov algorithm to
three dimensions (see Appendix \ref{app:genNumerov}) to perform integrations
in region II. In Sec.~\ref{sec:regionII} we show that this must be combined
with Fourier-filtering in mixed space to ensure robustness over many orders
of magnitude for the wave-function amplitude. In practice, our Fourier filtering
can be seen as introducing a curved boundary in mixed $(\mathbf k, z)$ space between region I and region II.
The key task to ensure a coherent
wave function across all three regions is to make the
separately computed wave functions match at
the region I/region II and the region II/region III boundaries, respectively.
For the former, this is readily achieved by rescaling, see Sec.~\ref{sec:regionI} below. For the latter,
where the region III wave functions are authoritative, we employ an
iterative procedure summarized in Sec.~\ref{sec:matching}.
As the I/II boundary values serve as starting
values for region II reverse integration towards region III,
we vary these boundary values to minimize
mismatch at the II/III boundary.
The combined approach, i.e., 1D integration in region I (Sec.~\ref{sec:regionI}),
Fourier-filtered generalized Numerov integration in region II (Sec.~\ref{sec:regionII}), and the iterative
procedure for determining the I/II boundary values as the key unknowns in
wave-function matching (Sec.~\ref{sec:matching}), is termed EXTRA
({\bf EX}trapolation of {\bf T}ails via {\bf R}everse integration {\bf A}lgorithm).
\subsection{Region I: 1D Numerov integration in mixed space}
\label{sec:regionI}
Deep in vacuum, $\delta V$ becomes negligible, and Eq.~(\ref{equation:full_potential}) reduces to
\begin{equation}
V(x,y,z) \approx \overline V(z)
\;.
\end{equation}
Within this approximation, an in-plane Fourier transform makes the
Kohn-Sham equation, Eq.~(\ref{eq:KS_along_z}), separable in $\mathbf{k}$ and z, i.e., in mixed space
\begin{equation}
\frac{1}{2} \frac{\partial^2 \psi(\mathbf k,z)}{ \partial z^2} = \left\{\frac{1}{2} |\mathbf k|^2 + \overline V(z) - \epsilon\right\}\psi(\mathbf k,z)
\label{equation:mixed_space}
\end{equation}
with both growing and decaying solutions if $\epsilon < \overline V(z)$ for all $z$. For recomputing the decaying tails, 1D Numerov is a feasible algorithm \cite{purevkhuu2021one,bennett2015numerical}. For this, one performs reverse Numerov integration, starting deep in the vacuum with arbitrary non-vanishing initial values, and rescales the intermediate solution such that it matches a given value $\psi(\mathbf k, z_{\rm start})$ at the boundary $z_{\rm start}$ between regions I and II.
The Numerov method is a finite-difference method that calculates the wave function by integrating step by step along a grid. The one-dimensional Schr\"odinger equation is solved using the Numerov algorithm\cite{kenhere2007bound} in the form
\begin{equation}
\psi_{n+1} = \frac{2(1-\frac{5}{12}h^2k^2_n)\psi_n- (1+
\frac{1}{12}h^2k^2_{n-1})\psi_{n-1}}{1+\frac{1}{12}h^2k^2_{n+1}}
\end{equation}
where
\begin{equation}
k_n^2 = 2\epsilon-2\overline V(z_n)-|\mathbf k|^2
\;.
\end{equation}
Different variants of the Numerov algorithm \cite{simos2009new,pillai2012matrix} have been developed in the past for the approximate solution of the Schr\"odinger equation.
The discretization error of the Numerov algorithm is $\mathcal O(h^6)$.
The numerical accuracy of the Numerov algorithm for rising solutions over many orders of magnitude arises
from the locality: only the
previous two values are needed, and these
lie at a very similar magnitude.
Uncorrelated numerical round-off errors cannot grow faster
than the exact solution, because they either are proportional to the desired
solution or belong to the linearly
independent, decaying solution. This also
ensures that starting from arbitrary
values will always converge towards the
rising solution after a warmup phase.
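For concreteness, the region I procedure can be sketched as follows (an illustrative Python implementation on a uniform $z$ grid; the function name and array layout are our own choices for this sketch, not the actual code):

```python
import numpy as np

def reverse_numerov_1d(z, q, psi_boundary):
    """Reverse Numerov integration of psi''(z) = q(z) psi(z) on a uniform grid.

    In region I, q(z) = 2[V_bar(z) - eps] + |k|^2 > 0, so there is a rising
    and a decaying solution.  Starting deep in the vacuum (end of the grid)
    with arbitrary seed values, the backward-growing (physically decaying)
    solution dominates after a warmup; the result is then rescaled to match
    psi_boundary at z[0].  Values in the warmup zone near the seeds retain
    the arbitrary initial mixture and should not be used.
    """
    h = z[1] - z[0]
    f = 1.0 - (h**2 / 12.0) * q           # Numerov weights
    psi = np.zeros_like(z)
    psi[-1], psi[-2] = 1e-30, 2e-30       # arbitrary small seed values
    for n in range(len(z) - 2, 0, -1):    # step from the vacuum toward the surface
        psi[n - 1] = ((12.0 - 10.0 * f[n]) * psi[n] - f[n + 1] * psi[n + 1]) / f[n - 1]
    return psi * (psi_boundary / psi[0])  # rescale to the boundary value
```

For constant $q=1$ this reproduces the decaying solution $e^{-z}$ away from the seed region, independent of the arbitrary starting values.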
\subsection{Region II: Fourier-filtered generalized Numerov integration}
\label{sec:regionII}
\subsubsection{Generalized Numerov algorithm}
\label{sec:genNumerov}
In order to numerically integrate the 3D Schr\"odinger equation along the $z$ direction, we propose a generalized Numerov algorithm given in Appendix~\ref{app:genNumerov}. The working equation is
[cf. Eq.~(\ref{eq:gen_Numerov_final})]
\begin{eqnarray}
&&\left[1+\frac{1}{6}(\Delta z)^2\left\{\epsilon-\hat{V}(z_{n+1})\right\}\right]\psi_{n+1}
\nonumber\\&=& 2\left[1-\frac{5}{6}(\Delta z)^2\left\{\epsilon-\hat{V}(z_{n})\right\}\right]\psi_n
\nonumber\\&&
-\left[1+\frac{1}{6}(\Delta z)^2\left\{\epsilon-\hat{V}(z_{n-1})\right\}\right]\psi_{n-1}
\label{eq:gen_Numerov_text}
\end{eqnarray}
Eq.~(\ref{eq:gen_Numerov_text}) represents an in-plane partial differential equation [with linear operator $\hat V(z)$]
for $\psi_{n+1}$ with
a known right-hand side that depends on the values for the two previous steps $z_n$ and $z_{n-1}$. By solving this differential equation numerically using a standard (plane-global) iterative algorithm, one can step-wise proceed along the $z$ direction.
The in-plane kinetic operator
is computed in mixed space, while
the potential is applied in real space,
using fast Fourier transforms to switch
between these two spaces.
To solve the discretized differential equation,
we employ the root finding solver of
the SciPy optimization module\cite{virtanen2020scipy} (\texttt{scipy.optimize.root}) with
Krylov subspace iterations
\cite{kelley1995iterative} and a
numerical approximate inverse Jacobian.
The key advantage of this procedure is that the numerical solver of the in-plane equation
must only deal with scale variations within the plane, while the huge changes in magnitude
along the $z$ direction are taken care of by the explicit
iteration $(\psi_{n-1}, \psi_n)\rightarrow \psi_{n+1}$.
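The splitting between mixed space (kinetic term) and real space (potential term) can be sketched for a single in-plane slice as follows (a self-contained NumPy illustration; names and signatures are ours, not those of the actual implementation):

```python
import numpy as np

def apply_inplane_hamiltonian(psi_xy, V_xy, dx, dy):
    """Apply -1/2 (d^2/dx^2 + d^2/dy^2) + V(x,y) to one in-plane slice.

    The kinetic term is diagonal in mixed (Fourier) space, the potential
    is diagonal in real space; FFTs switch between the two representations.
    """
    nx, ny = psi_xy.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    kinetic = np.fft.ifft2(0.5 * k2 * np.fft.fft2(psi_xy))  # -1/2 in-plane Laplacian
    return kinetic + V_xy * psi_xy
```

A plane wave commensurate with the grid is an eigenfunction of the kinetic part with eigenvalue $|\mathbf k|^2/2$, which provides a simple correctness check.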
\subsubsection{High-frequency noise issues with unfiltered generalized Numerov}
We have tested the generalized Numerov integration in the reverse direction in a region near the surface where the global solver provides a nicely decaying reference solution.
The generalized Numerov algorithm is stable against numerical noise from the unwanted solution (i.e., the solution rising exponentially into the vacuum). Unfortunately, when integrating towards the surface, it produces a rapid increase of contributions from high-frequency in-plane Fourier components that are absent from the reference solution. This is not a failure in principle: as expected from the separable approximation, Eq.~(\ref{equation:mixed_space}), in-plane Fourier coefficients for high $\mathbf k$ values have steeper slopes along $z$ than the low-frequency ones, as shown in Fig.~\ref{fig:decaying}. Yet, this makes the generalized Numerov algorithm of Appendix~\ref{app:genNumerov} sensitive to high-frequency noise.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{psi_kx_new.png}
\caption{1D Numerov wave-function magnitude as a function of $z$ using the potential shown in Fig.~\ref{fig:average_pot}. The decay rate depends on in-plane Fourier components $k_x$. The wave functions are calculated for an eigenvalue at the Fermi level and normalized with respect to $z_{\rm match}$=22\,bohr. }
\label{fig:decaying}
\end{figure}
\begin{figure}
\includegraphics[width=.5\textwidth]{iso_new.png}
\caption{Iso-magnitude contours of Ta-Ni(012) plotted in mixed space. The wave functions were generated from 1D Numerov using the average potential (see Fig.~\ref{fig:average_pot}), and were normalized with respect to
$z_{\rm match}$=22\,bohr.}
\label{fig:iso_contour}
\end{figure}
The reason behind the discrepancy is visualized in Fig.~\ref{fig:decaying}, which illustrates how the wave-function magnitude develops along $z$ for different $\mathbf k$ values.
As shown in Fig.~\ref{fig:iso_contour}, iso-magnitude contours in mixed $(k_x,k_y,z)$ space are not parallel to the $(k_x,k_y)$ plane. In consequence, it is not sufficient to find an algorithm that has a local scale in $z$ alone; we need one that respects the local scale in the full $(k_x,k_y,z)$ space.
To circumvent the above issue, we additionally employ a $z$-dependent Fourier filtering in the $xy$ plane. The challenge is to distinguish between noise and the true signal. Fortunately, for each mixed-space coefficient, we can estimate the expected magnitude relative to the matching plane from the 1D separable equation, Eq.~(\ref{equation:mixed_space}). We can use this estimate to apply Fourier filtering at ``equal magnitude''.
\subsubsection{The iso-magnitude boundary for regions I/II}
\label{sec:isomagnitude}
By the time the reverse integration approaches the region II/region III boundary, high-frequency Fourier components produced by the generalized Numerov algorithm have grown rapidly and need to be filtered out. To do this, we make the region I/region II boundary $\mathbf k$-dependent using the iso-magnitude condition
\begin{equation}
\psi_{1d}(\mathbf k,z_{\rm start}(\mathbf k))/\psi_{1d}(\mathbf k,z_{\rm match}) = \eta
\label{eq:iso_magnitude}
\end{equation}
where $\eta$ defines the magnitude threshold. Eq.~(\ref{eq:iso_magnitude}) defines a finite $z_{\rm start}(\mathbf k)$ beyond which the coefficients can be effectively ignored (set to zero) for the generalized Numerov step as shown in Fig.~\ref{fig:iso_contour}.
The choice of $\eta$ is not overly problematic
in practice. We have successfully employed values of $10^{-6}$, $10^{-8}$, and even $10^{-20}$ and observed negligible
differences between the results. If $\eta$ is chosen
too small,
high-frequency noise occurs. If chosen too large,
the original DFT wave functions are not well reproduced
near the matching plane (where the intrinsic
noise is small).
The values $\psi(\mathbf k,z_{\rm start}(\mathbf k))=\psi_{\rm start}(\mathbf k)$
are used as the initial conditions for the generalized Numerov integration. They
fully determine the shape of the wave function inside region II.
For initializing the previous value, we use 1D Numerov to estimate $\psi(\mathbf k,z_{\rm start}(\mathbf k)-\Delta z)$. Similarly, we use 1D Numerov to extend $\psi$ beyond the filtering boundary in an approximate way, namely by ignoring the effect of in-plane scattering due to the lateral potential variations $\delta V$.
In this way, the iso-magnitude contour is effectively treated as our dividing boundary between region I and region II in mixed space.
At this boundary, we initialize the Fourier components for
the region II integration. In short, only coefficients inside the boundary are included in the generalized Numerov integration for region II.
Outside this contour boundary, i.e., in the region I, we rescale the
precomputed 1D Numerov solutions to match the boundary value.
We note in passing that we can combine the iso-magnitude boundary
condition with a maximum for $z_{\rm start}$ based on the in-plane
lateral variations $\delta V$. In such a case, we cap
the contour when the lateral variations become negligible,
and thus the 1D Numerov integration is accurate (and far more efficient).
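As an illustration, $z_{\rm start}(\mathbf k)$ can be extracted from tabulated 1D reference solutions along the following lines (Python sketch; the array layout \texttt{psi\_1d[k, iz]} is an assumption made for this example):

```python
import numpy as np

def iso_magnitude_boundary(psi_1d, z, i_match, eta):
    """Determine z_start(k) from the iso-magnitude condition.

    psi_1d[k, iz] holds the 1D Numerov reference solution for each in-plane
    wave vector k on the z grid.  For each k, z_start is the largest z at
    which |psi_1d| still exceeds eta times its value at the matching plane.
    """
    ref = np.abs(psi_1d[:, i_match])              # magnitudes at z_match
    above = np.abs(psi_1d) >= eta * ref[:, None]  # threshold mask per (k, z)
    # index of the last z at which the threshold is still met
    last = above.shape[1] - 1 - np.argmax(above[:, ::-1], axis=1)
    return z[last]
```

For synthetic exponentials $\psi_{1d}\propto e^{-\alpha_k z}$ this reproduces the analytic boundary $z_{\rm start}=z_{\rm match}+\ln(1/\eta)/\alpha_k$ up to the grid spacing.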
\subsubsection{Fourier filtered generalized Numerov}
To summarize the Fourier-filtered generalized Numerov, the
iteration proceeds as follows.
\begin{enumerate}
\item Given $\psi(\mathbf k, z_{n-1})$ in mixed space and
$\psi(x,y,z_n)$ in real space
on a regular discretization grid, Fourier transform the
latter one to mixed space via Fast Fourier transforms (FFT).
\item Set $\psi(\mathbf k, z_n)$ to zero
where $z_n > z_{\rm start}(\mathbf k)$.
\item For cases at the boundary, where $z_n = z_{\rm start}$,
set
\begin{eqnarray*}
\psi(\mathbf k, z_n) &=& \psi_{\rm start}(\mathbf k)\\
\psi(\mathbf k, z_{n-1}) &=&
\psi_{\rm start}(\mathbf k)\cdot
\frac{\psi_{1d}(\mathbf k,z_{n-1})}{\psi_{1d}(\mathbf k,z_{n})}
\;.
\end{eqnarray*}
\item Save $\psi(\mathbf k, z_n)$ for the next iteration.
\item Fourier transform both $\psi(\mathbf k, z_{n-1})$
and $\psi(\mathbf k, z_n)$ to real space.
\item Perform generalized Numerov propagation by solving
Eq.~(\ref{eq:gen_Numerov_text}), yielding the real-space
wave function for the next step $z_{n+1}$.
\end{enumerate}
When $z_{\rm match}$ has been reached, the missing
region I tails
are added to the mixed-space representation
by setting
\begin{equation}
\psi(\mathbf k, z_{n}) =
\psi_{\rm start}(\mathbf k)\cdot
\frac{\psi_{1d}(\mathbf k,z_{n})}{\psi_{1d}(\mathbf k,z_{\rm start}(\mathbf k))}
\end{equation}
for all $z_n > z_{\rm start}(\mathbf k)$. Afterward,
the real-space representation can be recomputed from
this via in-plane FFTs.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{matching.png}
\caption{Comparison of computed wave functions of Ta-Ni(012) along the $z$ direction. The wave functions are calculated for an eigenvalue at the Fermi level. The position of the matching plane, where both curves agree, is $z_{\rm match}$ = 22\,bohr.}
\label{fig:EXTRA_global}
\end{figure}
\subsection{Iterative determination of $\psi_{\rm start}$}
\label{sec:matching}
The final step is to determine the starting values
$\psi_{\rm start}(\mathbf k)$ such that integration across
region II according to the algorithm described above yields values $\psi_{II}(x,y,z_{\rm match})$
at the matching plane
that agree with the desired values.
These values are given by the original DFT
wave function in region III, i.e., by $\psi_{III}(x,y,z_{\rm match})$.
For this purpose, we define the residue
\begin{equation}
R(\mathbf k) = \psi_{III}(\mathbf k,z_{\rm match}) - \psi_{II}(\mathbf k,z_{\rm match})
\label{eq:matching}
\end{equation}
that implicitly depends on $\psi_{\rm start}(\mathbf k)$.
We then solve the multidimensional root-finding problem $R = 0$,
treating $R$ as a function of $\psi_{\rm start}$.
However, there is a huge difference in magnitude between the residue $R$
(which is similar in scale to the wave function at the matching plane),
and the starting values $\psi_{\rm start}$ at the iso-contour boundary. The latter are smaller by a factor $\eta$, cf. Eq.~(\ref{eq:iso_magnitude}).
To accommodate for the scale,
we iterate on
\begin{equation}
\psi_{\rm init}(\mathbf k) = \psi_{\rm start}(\mathbf k)\cdot
\frac{\psi_{1d}(\mathbf k,z_{\rm match})}{\psi_{1d}(\mathbf k,z_{\rm start}(\mathbf k))}
\;.
\end{equation}
$\psi_{\rm init}$ can be thought of as the boundary values
rescaled to the matching plane via the 1D Numerov approximation.
We use this flexible definition
rather than the constant $\eta$ to accommodate situations in
which we limit $z_{\rm start}(\mathbf k)$ to a maximum
based on the magnitude of $\delta V$, as explained in Sec.~\ref{sec:isomagnitude}.
The root-finding algorithm, \texttt{scipy.optimize.root}
with Krylov iteration and numerical inverse-Jacobian estimation \cite{kelley1995iterative}, is then used to solve for the
starting values, completely analogous to our solution
of the generalized Numerov propagation, Eq.~(\ref{eq:gen_Numerov_text}),
see Sec.~\ref{sec:genNumerov}. Fig.~\ref{fig:EXTRA_global} illustrates the comparison of the original, noisy wave function from the global DFT solver with one from EXTRA on the log scale. It demonstrates that EXTRA overcomes the limitations of the global solution.
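Structurally, this matching step amounts to the following (a toy Python stand-in: \texttt{propagate} is a hypothetical placeholder for the region II integration, chosen only so that the root-finding setup can be demonstrated):

```python
import numpy as np
from scipy.optimize import root

def propagate(psi_init):
    """Hypothetical stand-in for the region II integration, mapping starting
    values (rescaled to the matching plane) to values at z_match."""
    return np.tanh(psi_init) + 0.1 * np.roll(psi_init, 1)

# Target values, playing the role of the region III wave function at z_match
# (generated here from a known input so that an exact root exists).
psi_III = propagate(np.linspace(0.1, 0.5, 8))

# Solve R(psi_init) = psi_III - propagate(psi_init) = 0 with Krylov iterations.
sol = root(lambda p: psi_III - propagate(p), x0=np.zeros(8), method="krylov")
```

The Jacobian-free Krylov method only needs residual evaluations, which is what makes it attractive when each evaluation involves a full reverse integration across region II.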
\section{Investigation of substitutional Ta in Ni}
\label{sec:results}
In this Section, we will illustrate that the EXTRA algorithm
allows us to overcome the accuracy limitations that prevented
direct simulations of FIM contrast. For this, we choose the
case of substitutional Ta in Ni.
In a recent a-FIM study with Ne as the imaging gas,
Morgado \textit{et al.} investigated
segregation in Ni alloys with 2\% Ta \cite{morgado2021revealing}.
They observed that Ta
was imaged in FIM more brightly than Ni. This finding was
qualitatively explained by DFT calculations performed by
some of the present authors. The DFT calculations showed that Ta-related states appear energetically at 1-3\,eV above the Fermi level, while only a few Ni states in the spin
minority channel are available for
tunneling electrons up to 1\,eV above the Fermi level.
However, due to the accuracy limitations, it
was not possible to actually compute the FIM contrast at relevant
ionization energies, nor to verify that Ta-related states at higher
energies give at all a brighter signal than the lower-lying Ni
states.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{structure-Ni-Ta.png}
\caption{Top view of Ta-Ni(012) where Tantalum (blue) is surrounded by Nickel (green). The $3\times3$ surface cell has been duplicated in both directions for clarity. }
\label{fig:structure}
\end{figure}
In the present work, DFT was performed in
the plane-wave PAW formalism with the SPHInX code \cite{boeck2011object} using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional \cite{PBE1996} with D2 van-der-Waals corrections\cite{Grimme2006}. The calculation was spin-polarized
with collinear spins and produces a ferromagnetic state. The Ni (012) surface was modeled in the repeated-slab approach with 9 atomic layers at the theoretical lattice constant (3.465\,\AA) and a vacuum separation of 17.5\,\AA.
An electric field of 40\, V/nm
was included via the generalized dipole correction \cite{genDipole}.
Tantalum substitution at the surface was modeled in a 3$\times$3 surface unit cell, i.e., with 9 surface atoms as shown in Fig.~\ref{fig:structure}.
The 6 topmost layers were relaxed via quasi-Newton optimization
with a redundant internal-coordinate Hessian fitted on the fly \cite{ricQN} until the forces were below 0.015\,eV/\AA.
For the structure optimization, an offset $2\times 3\times 1$
$\mathbf k$-point sampling was used, equivalent to a
$\mathbf k$-point spacing of 0.13\,bohr$^{-1}$. For the DOS calculation,
the $\mathbf k$-point density in the plane was doubled ($4\times6\times1$).
\begin{figure*}
\centering
\includegraphics[width=0.32\textwidth]{dftownload_5_.png}
\hfill
\includegraphics[width=0.32\textwidth]{doftwnload_15_.png}
\hfill
\includegraphics[width=0.32\textwidth]{dftownload_21_.png}
\hfill
\includegraphics[width=0.32\textwidth]{download_2_.png}
\hfill
\includegraphics[width=0.32\textwidth]{download_3_.png}
\hfill
\includegraphics[width=0.32\textwidth]{download_4_.png}
\caption{Simulated FIM images of Ta-Ni(012) for different ionization energies of 10\,eV, 15\,eV, and 21.5\,eV. Top row: partial DOS obtained from the original DFT wave functions. Bottom row: refined results from EXTRA. In-plane positions of the top-layer atoms are indicated in the graph. }
\label{fig:pdos_dft}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.44\textwidth]{line_prof_dft.png}
\hfill
\includegraphics[width=0.44\textwidth]{line_extra.png}
\caption{1D line scans of simulated FIM intensity from Fig.~\ref{fig:pdos_dft} on a log scale. The line runs along $x$ across the Ta position. }
\label{fig:line_profiles}
\end{figure*}
The partial DOS at energy $E$ is given by
\begin{equation}
\rho(\mathbf r, E) = \sum_{i\sigma\mathbf k} w_{\mathbf k}
|\psi_{i\sigma\mathbf k}(\mathbf r)|^2
\delta (E-\epsilon_{i\sigma\mathbf k})
\label{eq:pDOS}
\end{equation}
for state index $i$ and spin index $\sigma$. $w_\mathbf k$ denotes
the $\mathbf k$-point weight.
To turn this into a two-dimensional FIM contrast map,
we impose energy conservation for the tunneling process, i.e.,
\begin{equation}
E = \epsilon_{i\sigma\mathbf k} = V_{\rm avg}(x,y,z) - I
\label{eq:energyConserv}
\;,
\end{equation}
where $I$ is the (positive) ionization energy of the imaging gas.
$V_{\rm avg}$ is the average potential that the
imaging gas atom experiences. For simplicity we assume here
that $V_{\rm avg}(x,y,z) \approx \overline V(z)$.
This energy conservation condition requires that tunneling
into higher-lying states
occurs further away from the surface. Due to the rapid, over-exponential decay of the wave functions, the overall contribution
of higher-lying states is effectively damped. This
relieves us from making \textit{ad hoc} assumptions on which
states are relevant for tunneling. It works only
thanks to the accurate description of the decaying tails obtained from EXTRA.
Combining Eqs.~(\ref{eq:pDOS}) and (\ref{eq:energyConserv}),
the FIM contrast is proportional to
\begin{equation}
F(x,y) = \sum_{i\sigma\mathbf k} w_\mathbf k|\psi_{i\sigma\mathbf k}
(x,y,z_{i\sigma\mathbf k,I})|^2
\label{eq:FIM}
\end{equation}
where the sum runs over states above the Fermi level.
The evaluation height $z_{i\sigma\mathbf k,I}$ is implicitly
defined by
\begin{equation}
\overline V(z_{i\sigma\mathbf k,I}) = \epsilon_{i\sigma\mathbf k} + I
\;.
\end{equation}
In practice, we run a search on the discrete $z$ grid, and
then linearly interpolate the DOS between the discrete $z$
points.
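The discrete search and interpolation can be sketched as follows. This is an illustrative Python sketch only, not our actual implementation; the linear ramp is a hypothetical stand-in for the planar-averaged potential $\overline V(z)$ from the self-consistent DFT calculation.

```python
import numpy as np

def evaluation_height(z, v_avg, eps, ion_energy):
    """Solve V_avg(z*) = eps + ion_energy on a discrete z grid,
    linearly interpolating between the two bracketing grid points."""
    target = eps + ion_energy
    for i in range(len(z) - 1):
        lo, hi = sorted((v_avg[i], v_avg[i + 1]))
        if lo <= target <= hi:
            # linear interpolation within the bracketing interval
            frac = (target - v_avg[i]) / (v_avg[i + 1] - v_avg[i])
            return z[i] + frac * (z[i + 1] - z[i])
    raise ValueError("target energy not bracketed by the potential")

# hypothetical model potential: linear ramp (energies in eV, z in bohr)
z = np.linspace(0.0, 40.0, 401)
v = 25.0 - 0.5 * z
z_star = evaluation_height(z, v, eps=-5.0, ion_energy=15.0)  # z_star ~ 30 bohr here
```

In the actual workflow the same bracketing search is applied to the tabulated $\overline V(z)$, one height per state and ionization energy.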
In the following, we will treat the ionization energy $I$
as a tunable parameter
to evaluate the FIM contrast at different heights, namely
for $I$=10\,eV, 15\,eV, and 21.5\,eV (the ionization energy of Ne).
Note that increasing $I$ by 5\,eV shifts the evaluation region
by 2.4\,bohr in a field of 40 V/nm.
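The quoted shift follows from the linear potential drop of the singly charged ion in the applied field, $\Delta z = \Delta I/(eF)$; a quick numerical check:

```python
# Shift of the evaluation height for a change dI of the ionization energy,
# assuming V(z) far from the surface is dominated by the uniform field F.
BOHR_IN_NM = 0.0529177  # 1 bohr expressed in nm
dI = 5.0                # eV
F = 40.0                # V/nm
dz_nm = dI / F          # 0.125 nm
dz_bohr = dz_nm / BOHR_IN_NM
print(round(dz_bohr, 1))  # -> 2.4
```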
Fig.~\ref{fig:pdos_dft} illustrates the results. In the upper
row, we show results from the original DFT wave functions.
At ionization energies of 10\,eV and 15\,eV, there
is a single bright spot arising from Ta, while at a more
realistic ionization energy of 21.5\,eV, the simulated
contrast contains only noise. In the bottom row, obtained
from the EXTRA wave functions, even the 21.5\,eV contrast is very
clear. At low ionization energies, there are also weak signals
from the other surface atoms (Ni), but these are absent
at higher energies. We note that this suppression is not observed in the experiment, which clearly images Ni atoms
even near Ta atoms. The reason for this strong exaggeration
of the brightness contrast is not entirely clear. We suspect that an adsorbed imaging-gas layer present in the experiment
may enhance spatial resolution. We will investigate this
in more detail in a forthcoming publication.
Fig.~\ref{fig:line_profiles} presents line scans of the simulated FIM images across the Ta atom. They clearly demonstrate the stable performance of our algorithm EXTRA over several orders of magnitude. At the higher ionization energy of 21.5\,eV, the partial DOS from DFT deviates by two orders of magnitude from the expected true signal obtained with EXTRA. The simulated FIM contrast is hence strongly influenced by the extrapolation of the wave-function tails.
The comparison of the different ionization energies shows that all features become broader with increasing ionization energy (and hence increasing distance from the surface). Also, the relative contrast clearly changes. This highlights that one has to simulate with the correct ionization energy for a quantitative comparison with the experiment.
\ignore{
We explain bright contrast using EXTRA augmented DFT-FIM simulations on a Ni (012) slab with Ta in the presence of the electric field of 40\,V/nm. The partial density of states at different ionization levels is compared with DFT results. The partial DOS calculations confirm that the bright atom is Ta surrounded by a hexagonal plane of Nickel atoms. However, this behavior smears out for higher energy states. EXTRA shows a good imaging contrast in the range 4-6Å above the surface as manifested in Fig.~\ref{fig:pdos_extra}. EXTRA predicts the imaging contrast even at the position far from the surface as shown in Fig.~\ref{fig:21_extra}.
}
\section{Conclusion}
In this work, we have laid out the foundations for an
accurate computation of tunneling-related contrast
in field ion microscopy based on state-of-the-art
density functional theory. For this, we apply the
Tersoff-Hamann approximation known from scanning tunneling
microscopy to the characteristic situation of tunneling
into imaging gas atoms hovering above the surface in
the presence of a very strong field. We
identified the numerical accuracy of wave functions from
the global solvers employed in plane wave DFT codes as a major
limitation, and developed a novel algorithm, termed EXTRA, to
recompute these tails in a very robust manner over many
orders of magnitude. Equipped with this algorithm, we
demonstrate for a prototypical case, Ta in Ni, that we
can simulate FIM contrast maps at realistic ionization
energies with practically no noise. This new scheme paves
the way to systematically address open questions
of contrast generation in FIM. We note that the
applicability of the EXTRA algorithm is not limited
to these cases; it may be employed in other surface
situations where the tails of the wave functions are
of interest, e.g., in overcoming tail-shape limitations
when localized orbitals are used as a basis set.
\section*{Acknowledgement}
The authors would like to thank Felipe M. Morgado and Baptiste Gault for valuable discussions.
We further thank the pyiron developers, notably Marvin Poul, for their continuous technical support.
SB gratefully acknowledges financial support from the International Max Planck Research School for Sustainable Metallurgy (IMPRS SusMet).
\section{Introduction}
In recent years, there has been sustained interest in studying multi-user quantum communication because it offers the opportunity to construct quantum networks. With quantum networks, quantum information can be transmitted between physically separate quantum systems. In fact, they form a salient component of quantum computing and quantum cryptography systems. Two methods are known for transmitting an unknown quantum state between two nodes: quantum teleportation and direct transmission from one node to another \cite{MA1}. In this paper, the latter, i.e., node-to-node transmission, will be considered. Since quantum systems unavoidably interact with the environment, node-to-node transmission easily degrades the quantum states. Consequently, it is reasonable to take the influence of noise into consideration while investigating quantum wireless multihop teleportation.
Recently, Wang et al. \cite{MA2} proposed a scheme for faithful quantum communication in quantum wireless multihop networks by performing quantum teleportation between two distant nodes that do not initially share entanglement with each other. It has been found in Ref. \cite{RE1} that wireless quantum networks can be established between nodes of different hops sharing two-qubit states. Xiong et al. \cite{MA3} proposed a quantum communication network model in which a mesh backbone network structure was introduced. The entanglement source deployment problem, which has a notable impact on quantum connectivity, was scrutinized by Zou {\it et al.} \cite{RE2} in a quantum multihop network.
Some other outstanding reports can be found in Refs. \cite{MA4,RE3,RE4,RE5} and references therein. Although several relevant results have been obtained along this direction, most contributions have been based on 2- and 3-qubit Greenberger-Horne-Zeilinger (GHZ) states as the quantum channel in a closed quantum system. However, quantum systems cannot avoid interacting with the environment, and these unavoidable interactions cause quantum systems to lose some of their properties. In this paper, we examine a quantum routing protocol with multihop teleportation for wireless mesh backbone networks, based on the four-qubit cluster state, in an amplitude damping channel, which can be induced experimentally.
The cluster state \cite{MA6}, which is a type of highly entangled state of multiple qubits, is generated in lattices of qubits with Ising-type interactions. On the basis of single-qubit operations, the cluster state serves as the initial resource for a universal computation scheme \cite{MA7}. Cluster states have been realized experimentally in photonic experiments \cite{MA7} and in optical lattices of cold atoms \cite{MA8}. In this paper, we select the four-qubit cluster state as the entangled resource. When the state of one particle in an entangled pair changes, the state of the other particle, situated at a distant node, changes accordingly; thus, entanglement swapping can be applied. Using a classical communication channel, the results of the local measurements can be transmitted from node to node.
The rest of this paper is organized as follows. In Section II, we discuss the wireless network and routing protocol. Section III deals with the process of establishing the quantum channel. In Section IV, we present quantum wireless multihop teleportation in noisy channels. Relevant results and discussion are presented in Section V. Section VI gives the conclusion.
\begin{figure*}[!ht]
\centering \includegraphics[width=\linewidth]{MFIG-1.eps}
\caption{\protect\small The quantum mesh backbone network. The dotted lines represent quantum channels while the solid lines denote classical channels. Node {\bf S} is not directly entangled with the node {\bf D}. However, quantum channels between them can be established via entanglement swapping.}
\label{Mfig1}
\end{figure*}
\section{Wireless mesh network and routing protocol}
A wireless mesh network (WMN) is a mesh network established by connecting wireless access points installed at each network user's location. It consists of mesh routers, which are stationary, and mesh clients, which are mobile. In a WMN, there exist a quantum wireless channel and a classical wireless channel for communication. The classical channel serves the purpose of classical information transmission, while the quantum wireless channel exists between neighboring nodes. Quantum information can be transmitted from node to node only when a quantum route and a classical route co-exist. Classical information is transmitted along the classical route, while quantum information travels via the quantum route.
In wireless quantum communication, there exists a mesh backbone network which consists of route nodes and edge route nodes. We delineate the quantum mesh network in Fig. \ref{Mfig1}. Node {\bf S} wishes to send information to node {\bf D}. To achieve this, it scrutinizes its routing table to find whether any route to {\bf D} is available. If there are available routes, it forwards the packet to the next-hop node. However, if none exists, source node {\bf S} requests a quantum route discovery from the neighboring node {\bf E}, and thus the quantum route-finding process commences. Once a routing path that permits the co-existence of a quantum and a classical route from the source node to the destination is found and selected, the edge route node {\bf I} sends a route reply to node {\bf S}. At this moment, the process of establishing the quantum channel begins.
\section{Process of establishing the quantum channel}
In this section, we establish the quantum channel linking the nodes. As it can be seen in Figs. \ref{Mfig2} (a) and (b), {\bf S} denotes the source node while the destination node is denoted by {\bf D}. The node {\bf S} is not directly entangled with {\bf D} but entanglement swapping can be used to set-up quantum channels between the two nodes. Thus, with this swapping, quantum information can be transmitted hop-by-hop from node {\bf S} to node {\bf D}. In the source node {\bf S}, there exists an arbitrary two-qubit entangled state, $\left|\chi\right\rangle_{S_1S_2}=a_0\left|00\right\rangle+d_0\left|11\right\rangle$, whose density matrix can be written as:
\begin{equation}
\rho_{\rm in_S}=\left[\begin{matrix}a_0^*a_0&0&0&d_0^*a_0\\
0&0&0&0\\
0&0&0&0\\
a_0^*d_0&0&0&d_0^*d_0
\end{matrix}\right].
\label{SUS1}
\end{equation}
Let $N_n$ denote the number of nodes such that between {\bf S} and {\bf D} we have $N_n-1$ nodes. Also, let us represent the number of hops by $N$. The entangled state of the neighboring nodes is the 4-qubit cluster state of the form $\left|\mathcal{CS}\right\rangle=\tau_0\left|0000\right\rangle+\tau_1\left|0011\right\rangle+\tau_2\left|1100\right\rangle-\tau_3\left|1111\right\rangle$, where $\tau_0^2+\tau_1^2+\tau_2^2+\tau_3^2=1$. Now, the edge node {\bf I} performs Bell state measurements on the particle pairs $(I_3,I_4)$ and $(I_1,I_2)$ to obtain
\begin{widetext}
\begin{eqnarray}
&&\left|\Pi\right\rangle=\nonumber\\
&&\left|\chi\right\rangle_{S_1S_2}\otimes\frac{1}{8}\sum_{\varsigma,\kappa\in[+,-]}\bigg[\left|\Phi^\varsigma\right\rangle_{I_3I_4}\left|\Phi^\kappa\right\rangle_{I_1I_2}\left(\left|0000\right\rangle+\left|0011\right\rangle\pm^{(\varrho)}\mp^{(\zeta)}\left|1100\right\rangle\pm^{(\varrho)}\pm^{(\zeta)}\left|1111\right\rangle\right)_{R_4,D_1,D_2,D_3}\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\left|\Psi^\varsigma\right\rangle_{I_3I_4}\left|\Phi^\kappa\right\rangle_{I_1I_2}\left(\left|0100\right\rangle-\left|0111\right\rangle\pm^{(\varrho)}\mp^{(\zeta)}\left|1000\right\rangle\pm^{(\varrho)}\mp^{(\zeta)}\left|1011\right\rangle\right)_{R_4,D_1,D_2,D_3}\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\left|\Phi^\varsigma\right\rangle_{I_3I_4}\left|\Psi^\kappa\right\rangle_{I_1I_2}\left(\pm^{(\varrho)}\left|0100\right\rangle\mp^{(\varrho)}\left|0111\right\rangle\pm^{(\zeta)}\left|1000\right\rangle\pm^{(\zeta)}\left|1011\right\rangle\right)_{R_4,D_1,D_2,D_3}\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\left|\Psi^\varsigma\right\rangle_{I_3I_4}\left|\Psi^\kappa\right\rangle_{I_1I_2}\left(\pm^{(\varrho)}\left|0000\right\rangle\pm^{(\varrho)}\left|0011\right\rangle\pm^{(\zeta)}\left|1100\right\rangle\mp^{(\zeta)}\left|1111\right\rangle\right)_{R_4,D_1,D_2,D_3}\bigg]\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \otimes_{i=3}^N\left|\mathcal{CS}\right\rangle^{n_i},
\label{SUS2}
\end{eqnarray}
\end{widetext}
where $n_i=n$ and $i=1,...,N$. We have denoted the four Bell states as $\left|\Phi^{\pm}\right\rangle=2^{-1/2}(\left|00\right\rangle\pm\left|11\right\rangle)$ and $\left|\Psi^{\pm}\right\rangle=2^{-1/2}(\left|01\right\rangle\pm\left|10\right\rangle)$. The $\pm^{(\varrho)}, \mp^{(\varrho)}$ and $\pm^{(\zeta)},\mp^{(\zeta)}$ represent the results corresponding to BSM on qubit pairs ($I_3,I_4$) and ($I_1,I_2$) respectively. Now, with the application of the proper Pauli operators on qubit $R_4$, the entangled state of $R_4,D_1,D_2,D_3$ can be realized. For instance, if the entangled state of $R_4,D_1,D_2,D_3$ is $\left|0100\right\rangle-\left|0111\right\rangle-\left|1000\right\rangle-\left|1011\right\rangle$, applying the Pauli $z$ matrix and then the Pauli $x$ matrix, the entangled state $\left|\mathcal{CS}\right\rangle$ can be realized. Now, the edge route node {\bf I} sends the result of the measurements along with the route reply to node {\bf S} through edge node {\bf E}. Once node {\bf S} receives this information, the quantum channel between nodes {\bf S} and {\bf D} is established, and the quantum state can then be transferred from node {\bf S} to node {\bf D}. A similar study was reported recently in Ref. \cite{MA3}. However, in the present study, we consider the viability of a cluster state as the entanglement resource and the influence of noisy channels on the multihop teleportation.
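As a sanity check on the entanglement resource, the cluster state $\left|\mathcal{CS}\right\rangle$ is normalized whenever $\sum_i\tau_i^2=1$. The short Python sketch below (illustrative only; the equal coefficients are an arbitrary choice, not a value used in the paper) verifies this for a sample point on the unit sphere:

```python
import numpy as np

def ket(bits):
    """Computational-basis ket |b1 b2 b3 b4> as a length-16 real vector."""
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

# arbitrary sample coefficients satisfying tau0^2+tau1^2+tau2^2+tau3^2 = 1
tau = np.array([0.5, 0.5, 0.5, 0.5])
cs = (tau[0]*ket("0000") + tau[1]*ket("0011")
      + tau[2]*ket("1100") - tau[3]*ket("1111"))
print(np.linalg.norm(cs))  # -> 1.0
```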
\section{Quantum wireless multihop teleportation in noisy channels}
In this section, we investigate the influence of noisy channels on quantum wireless multihop teleportation. Two noisy channels will be considered: the amplitude damping channel and the phase damping channel.
\subsection{Quantum wireless multihop teleportation in amplitude damping channel}
The amplitude damping channel is characterized by the following set of Kraus operators \cite{MA10}:
\begin{equation}
\mathcal{K}_0^{\rm Am}=\left[\begin{matrix}1&0\\0&\sqrt{\bar{\xi_{a}}}\end{matrix}\right], \mathcal{K}_1^{\rm Am}=\left[\begin{matrix}0&\sqrt{\xi_{a}}\\0&0\end{matrix}\right],
\label{SUS3}
\end{equation}
where $\xi_{a}\,(0\leq\xi_{a}\leq 1)$ represents the decoherence rate, which characterizes the error probability of amplitude damping when a particle passes through the noisy environment, and $\bar{\xi}_{a}=1-\xi_{a}$.
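These operators satisfy the completeness relation $\sum_i \mathcal{K}_i^\dagger \mathcal{K}_i = I$, so the channel is trace preserving. A minimal numerical sketch (illustrative only, not part of the protocol) confirms this and shows the characteristic relaxation of the excited state:

```python
import numpy as np

def amplitude_damping_kraus(xi):
    """Kraus operators of the amplitude damping channel with rate xi."""
    xb = 1.0 - xi
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(xb)]])
    K1 = np.array([[0.0, np.sqrt(xi)], [0.0, 0.0]])
    return K0, K1

for xi in (0.0, 0.3, 1.0):
    K0, K1 = amplitude_damping_kraus(xi)
    # completeness relation: K0^dag K0 + K1^dag K1 = identity
    assert np.allclose(K0.T @ K0 + K1.T @ K1, np.eye(2))

# action on the excited state |1><1|: a fraction xi decays to |0><0|
K0, K1 = amplitude_damping_kraus(0.3)
rho1 = np.diag([0.0, 1.0])
rho_out = K0 @ rho1 @ K0.T + K1 @ rho1 @ K1.T
print(np.diag(rho_out))  # -> [0.3 0.7]
```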
To achieve the aim of this section, let us first consider a one-hop teleportation through the amplitude damping channel. From Figs. \ref{Mfig2} (a) and (b), it can be seen that {\bf S} is a neighboring node of {\bf E}; consequently, we can infer that particles $S_3$, $E_1$, $E_3$ and $E_2$ are entangled such that the quantum entanglement is $\left|\mathcal{CS}\right\rangle_{S_3E_1E_3E_2}$. In order to transmit $\left|\chi\right\rangle_{S_1S_2}$ through a noisy channel, we perform Bell state measurements on the particle pairs $(S_1,S_3)$ and $(S_2,E_3)$, and then take the noise model into consideration; thus we obtain
\begin{eqnarray}
\left|\Omega\right\rangle&=&\frac{1}{2}\sum_{\mathcal{A},\mathcal{B}}\ _{S_2E_3}\left\langle\mathcal{A}^{\pm}\right| _{S_1S_3}\left\langle\mathcal{B}^{\pm}\right.\left|\Pi''\right\rangle,\ \ \ \ \ \mathcal{A},\mathcal{B}\in\left[\Phi,\Psi\right],\nonumber\\
&&\mbox{with}\ \ \ \left|\Pi''\right\rangle=\left|\chi\right\rangle\otimes\left|\mathcal{CS'}\right\rangle,
\label{SUS5}
\end{eqnarray}
where we have used the following formulas for mathematical simplicity:
\begin{widetext}
\begin{eqnarray}
_{S_2E_3}\left\langle\Psi^{\pm}\right|\ _{S_1S_3}\left\langle\Psi^{\pm}\right.\left|\Pi''\right\rangle&=&\mp^{(\varsigma)}\mp^{(\sigma)}\bar{\xi}_a^2d_0\tau_0\left|00\right\rangle_{E_1E_2}+a_0\left(\xi_a^2\tau_0-\tau_3\right)\left|11\right\rangle_{E_1E_2}\nonumber\\
_{S_2E_3}\left\langle\Phi^{\pm}\right|\ _{S_1S_3}\left\langle\Psi^{\pm}\right.\left|\Pi''\right\rangle&=&\pm^{(\varsigma)}\pm^{(\sigma)}\bar{\xi}_ad_0\tau_1\left|01\right\rangle_{E_1E_2}+\bar{\xi}_aa_0\tau_2\left|10\right\rangle_{E_1E_2}\nonumber\\
_{S_2E_3}\left\langle\Psi^{\pm}\right|\ _{S_1S_3}\left\langle\Phi^{\pm}\right.\left|\Pi''\right\rangle&=&\pm^{(\varsigma)}\pm^{(\sigma)}\bar{\xi}_ad_0\tau_2\left|10\right\rangle_{E_1E_2}+\bar{\xi}_aa_0\tau_1\left|01\right\rangle_{E_1E_2}\nonumber\\
_{S_2E_3}\left\langle\Phi^{\pm}\right|\ _{S_1S_3}\left\langle\Phi^{\pm}\right.\left|\Pi''\right\rangle&=&\pm^{(\varsigma)}\mp^{(\sigma)}d_0\left(\xi_a^2\tau_0-\tau_3\right)\left|11\right\rangle_{E_1E_2}+\bar{\xi}_a^2a_0\tau_0\left|00\right\rangle_{E_1E_2}.
\label{SUS6}
\end{eqnarray}
\end{widetext}
\begin{figure*}[!t]
\centering \includegraphics[width=\linewidth]{MFIG-2.eps}
\caption{\protect\small The process of establishing quantum channel. (a) Before route-finding process. (b) After route-finding process. The red lines represent Bell state measurement.}
\label{Mfig2}
\end{figure*}
The $\pm^{(\varsigma)}$ and $\pm^{(\sigma)},\mp^{(\sigma)}$ represent the results corresponding to BSM on qubit pairs ($S_1,S_3$) and ($S_2,E_3$) respectively. Now, node {\bf S} transmits this result to node {\bf E}. By using an appropriate unitary transformation, the state $\left|\chi\right\rangle$ can be retrieved. Thus, the quantum communication is completed successfully. The output density matrix that reaches node {\bf E} is
\begin{widetext}
\begin{eqnarray}
\rho_{\rm out_E}=\left[\begin{matrix}\left|a_0\right|^2\left[\tau_0^2\left(\bar{\xi}_a^4+\xi_a^4\right)+\left(\tau_1^2+\tau_2^2\right)\bar{\xi}_a^2-2\tau_0\tau_3\xi_a^2+\tau_3^2\right]&0&0&d_0^*a_0\\
0&0&0&0\\
0&0&0&0\\
a_0^*d_0&0&0&\left|d_0\right|^2\left[\tau_0^2\left(\bar{\xi}_a^4+\xi_a^4\right)+\left(\tau_1^2+\tau_2^2\right)\bar{\xi}_a^2-2\tau_0\tau_3\xi_a^2+\tau_3^2\right]
\label{SUS7}
\end{matrix}\right].\nonumber\\
\end{eqnarray}
\end{widetext}
According to Eq. (\ref{SUS6}), the states of $E_1, E_2$ are not normalized, which implies that each outcome of the measurement has a different probability. In order to avoid redundancy, we shall not discuss all the outcomes but only one; for the other cases, node {\bf E} can apply a similar approach to reconstruct the original state. Now, suppose the result of the Bell state measurement is $\left|\Psi^{+}\right\rangle_{S_1,S_3}\left|\Phi^{-}\right\rangle_{S_2,E_3}$; consequently, without loss of generality, the state of the qubit pair $(E_1,E_2)$ collapses to $\mathcal{G}=2^{-1}\bar{\xi}_a(a_0\tau_2\left|10\right\rangle_{E_1E_2}-d_0\tau_1\left|01\right\rangle_{E_1E_2})$. With this result, the state $\left|\chi\right\rangle$ can be recovered at node {\bf E}. In order for this to be achieved, it is required to apply a positive-operator valued measure (POVM) \cite{MA9}.
In utilizing a POVM, first an ancilla (i.e., an auxiliary quantum system) is prepared in a known state, say $\rho_{anc}$. Combining this ancilla with the original quantum state gives an uncorrelated state. Now, the combined Hilbert space is subjected to a maximal test which is represented by an orthogonal resolution of the identity. The results of this test are related to orthogonal projectors which satisfy the relations $\mathcal{K}_\mu \mathcal{K}_\nu=\delta_{\mu\nu} \mathcal{K}_\nu$ and $\sum_\mu \mathcal{K}_\mu =1$. The probability that preparation $k$ will be followed by outcome $\mu$ is given by $\mathcal{K}_{\mu k}=Tr(A_\mu\rho_k)$, where $A_\mu$ denotes an operator acting on the Hilbert space. The set of $A_\mu$ is called a POVM \cite{MA9}. Several studies and applications of POVMs have been reported in the literature \cite{MA91,MA92,MA93,MA94}; the current study will also utilize this tool.
To do that, we need to recover a state in which the coefficient of $\left|00\right\rangle_{E_1E_2}$ is $a_0$ and that of $\left|11\right\rangle_{E_1E_2}$ is $d_0$. To accomplish this, node {\bf E} performs a local unitary operation $\mathcal{UT}=\sigma_x\otimes I_{2\times 2}$ on $\mathcal{G}$ to obtain $\mathcal{G}_0=2^{-1}\bar{\xi}_a(a_0\tau_2\left|00\right\rangle_{E_1E_2}-d_0\tau_1\left|11\right\rangle_{E_1E_2})$. Now, the node introduces auxiliary qubits, say $\mathcal{DE}$, in the state $\left|00\right\rangle_{\mathcal{DE}}$. Combining these qubits with $\mathcal{G}_0$ gives $\mathcal{G}_1=2^{-1}\bar{\xi}_a(a_0\tau_2\left|0000\right\rangle_{E_1E_2\mathcal{DE}}-d_0\tau_1\left|1100\right\rangle_{E_1E_2\mathcal{DE}})$. The node then performs C-NOT operations on the qubit pairs $(E_1,\mathcal{D})$ and $(E_2,\mathcal{E})$ to obtain
\begin{eqnarray}
\mathcal{G}_2&=&\frac{1}{4}\Big[\left(a_0\left|00\right\rangle_{E_1E_2}+d_0\left|11\right\rangle_{E_1E_2}\right)\nonumber\\
&&\ \otimes\left(\bar{\xi}_a\tau_2\left|00\right\rangle_{\mathcal{DE}}-\bar{\xi}_a\tau_1\left|11\right\rangle_{\mathcal{DE}}\right)\nonumber\\
&&\ +\left(a_0\left|00\right\rangle_{E_1E_2}-d_0\left|11\right\rangle_{E_1E_2}\right)\nonumber\\
&&\ \otimes\left(\bar{\xi}_a\tau_2\left|00\right\rangle_{\mathcal{DE}}+\bar{\xi}_a\tau_1\left|11\right\rangle_{\mathcal{DE}}\right)\Big].
\end{eqnarray}
Provided that the states $\tau_2\left|00\right\rangle_{\mathcal{DE}}\mp\tau_1\left|11\right\rangle_{\mathcal{DE}}$ can be conclusively discriminated by a suitable measurement, the state $\left(a_0\left|00\right\rangle_{E_1E_2}\pm d_0\left|11\right\rangle_{E_1E_2}\right)$ can be obtained at node {\bf E}. The required optimal POVM can be written in the following subspace:
\begin{eqnarray}
&&\mathcal{P}_1=\frac{1}{{\varrho}}\left|\Lambda_1\right\rangle\left\langle\Lambda_1\right|,\ \ \ \mathcal{P}_2=\frac{1}{{\varrho}}\left|\Lambda_2\right\rangle\left\langle\Lambda_2\right|,\nonumber\\
&&\mathcal{P}_3=I-\frac{1}{\varrho}\sum_{i=1}^2\left|\Lambda_i\right\rangle\left\langle\Lambda_i\right|\label{EQ4},
\end{eqnarray}
where
\begin{eqnarray}
&&\left|\Lambda_1\right\rangle=\frac{1}{\sqrt{\gamma_a}}\left(\frac{1}{\bar{\xi}_a\tau_2}\left|00\right\rangle-\frac{1}{\bar{\xi}_a\tau_1}\left|11\right\rangle\right)_{\mathcal{DE}},\nonumber\\
&&\left|\Lambda_2\right\rangle=\frac{1}{\sqrt{\gamma_a}}\left(\frac{1}{\bar{\xi}_a\tau_2}\left.|00\right\rangle+\frac{1}{\bar{\xi}_a\tau_1}\left|11\right\rangle\right)_{\mathcal{DE}},\nonumber\\
&&\mbox{with}\ \ \gamma_a=\frac{1}{\left(\bar{\xi}_a\tau_1\right)^2}+\frac{1}{\left(\bar{\xi}_a\tau_2\right)^2}.\label{EQ5}
\end{eqnarray}
\begin{figure*}[!t]
\centering \includegraphics[width=\linewidth]{MFIG-3.eps}
\caption{\protect\footnotesize (a) Variation of the probability of success as a function of number of hops ($N$) for various decoherence rate of amplitude damping channel $\xi_a$. (b) Variation of the probability of success as a function of $\xi_a$ for various $N$. (c) Plot of fidelity as a function of $\xi_a$ for various $N$. (d) Plot of fidelity as a function of $N$ for various $\xi_a$. We take $\varrho=1$ in our numerical computations.}
\label{Mfig3}
\end{figure*}
\begin{figure*}[!t]
\centering \includegraphics[width=\linewidth]{MFIG-4.eps}
\caption{\protect\footnotesize (a) Variation of the probability of success as a function of number of hops ($N$) for various decoherence rate of phase damping channel $\xi_p$. (b) Variation of the probability of success as a function of $\xi_p$ for various $N$. (c) Plot of fidelity as a function of $\xi_p$ for various $N$. (d) Plot of fidelity as a function of $N$ for various $\xi_p$. We take $\varrho=1$ in our numerical computations.}
\label{Mfig4}
\end{figure*}
$I$ denotes an identity operator and $\varrho$ is a parameter which defines the range of positivity of the operator $\mathcal{P}_3$. Now, if the node's POVM result is $\mathcal{P}_1$, whose probability is $\left\langle\mathcal{G}_2\right|\mathcal{P}_1\left|\mathcal{G}_2\right\rangle=1/(4\varrho\gamma_a)$, then one can infer the state of the qubits $E_1E_2$ to be $a_0\left|00\right\rangle-d_0\left|11\right\rangle$. Afterward, the node performs the unitary operation $I_{2\times 2}\otimes\sigma_z$ on the particles in order to retrieve the original state. However, suppose the result is $\mathcal{P}_2$, with probability $\left\langle\mathcal{G}_2\right|\mathcal{P}_2\left|\mathcal{G}_2\right\rangle=1/(4\varrho\gamma_a)$; then the node finds that the state of the qubits $E_1E_2$ is $a_0\left|00\right\rangle+d_0\left|11\right\rangle$, which is the original state of the particle. Finally, suppose the result is $\mathcal{P}_3$; then the teleportation fails, because the node cannot infer anything about the identity of the particles' state. Thus the success probability becomes
\begin{equation}
\text{P}_{\text{suc}}^{\rm\bf S-E}=\frac{1}{2\varrho\gamma_a},
\end{equation}
and using Eqs. (\ref{SUS1}) and (\ref{SUS7}), we obtain the analytical expression for the fidelity as
\begin{eqnarray}
F_{\rm\bf S-E}&=&\left[\tau_0^2\left(\bar{\xi}_a^4+\xi_a^4\right)+\left(\tau_1^2+\tau_2^2\right)\bar{\xi}_a^2-2\tau_0\tau_3\xi_a^2+\tau_3^2\right]\nonumber\\
&&\times\left(\left|a_0\right|^2+\left|d_0\right|^2\right)^2.
\end{eqnarray}
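The probabilities of the two conclusive POVM outcomes can be verified numerically. The sketch below (with arbitrary illustrative parameter values, not values from this paper) builds the post-C-NOT state $\mathcal{G}_2$ and checks that both conclusive outcomes occur with probability $1/(4\varrho\gamma_a)$:

```python
import numpy as np

def kron(*vecs):
    """Tensor product of several single- or multi-qubit vectors."""
    out = np.array([1.0])
    for v in vecs:
        out = np.kron(out, v)
    return out

k0, k1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# illustrative parameters (not from the paper): |a0|^2 + |d0|^2 = 1
a0, d0 = 0.6, 0.8
xi = 0.2; xb = 1.0 - xi          # amplitude damping rate and 1 - xi
t1, t2 = 0.4, 0.3                # tau_1, tau_2
vrho = 1.0                       # POVM positivity parameter varrho
gamma_a = 1.0/(xb*t1)**2 + 1.0/(xb*t2)**2

# state on qubits (E1, E2, D, E) after the two C-NOT operations
G2 = 0.5*xb*(a0*t2*kron(k0, k0, k0, k0) - d0*t1*kron(k1, k1, k1, k1))

# POVM elements act on the ancilla pair (D, E)
L1 = (kron(k0, k0)/(xb*t2) - kron(k1, k1)/(xb*t1)) / np.sqrt(gamma_a)
L2 = (kron(k0, k0)/(xb*t2) + kron(k1, k1)/(xb*t1)) / np.sqrt(gamma_a)
P1 = np.kron(np.eye(4), np.outer(L1, L1)) / vrho
P2 = np.kron(np.eye(4), np.outer(L2, L2)) / vrho

p1, p2 = G2 @ P1 @ G2, G2 @ P2 @ G2
assert np.isclose(p1, 1.0/(4*vrho*gamma_a))
assert np.isclose(p2, 1.0/(4*vrho*gamma_a))
```

Together the two conclusive outcomes give the quoted total success probability $1/(2\varrho\gamma_a)$.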
Now, let us consider the two-hop teleportation, i.e., {\bf S-E-T}. There exists no direct quantum channel between nodes {\bf S} and {\bf T}. However, nodes {\bf S} and {\bf E} are entangled, and nodes {\bf E} and {\bf T} are also entangled. In that case, node {\bf S} transmits $\rho_{\rm in_S}$ through a noisy channel to node {\bf E}, and then node {\bf E} transmits $\rho_{\rm out_E}$ to node {\bf T} through a noisy channel. Consequently, the quantum channel between nodes {\bf S} and {\bf T} will be established. Following the same procedure, it is easy to find that the success probability is
\begin{equation}
\text{P}_{\text{suc}}^{\rm\bf E-T}=\frac{1}{\varrho\gamma_a}\left(1-\frac{1}{4\varrho\gamma_a}\right),
\end{equation}
and the analytical expression for the fidelity
\begin{eqnarray}
F_{\rm\bf E-T}&=&\left[\tau_0^2\left(\bar{\xi}_a^4+\xi_a^4\right)+\left(\tau_1^2+\tau_2^2\right)\bar{\xi}_a^2-2\tau_0\tau_3\xi_a^2+\tau_3^2\right]^3\nonumber\\
&&\times\left(\left|a_0\right|^2+\left|d_0\right|^2\right)^2.
\end{eqnarray}
Now, let us assume that the information is transmitted to node $\mathcal{N}^{i}$, where $i=2,4,6,...,2k$ denotes the number of Bell state measurements that need to be performed. In that sense, the source node is equivalent to $\mathcal{N}^{0}$, node {\bf E} is equivalent to $\mathcal{N}^{2}$, and the destination is equivalent to $\mathcal{N}^{2k}$. Thus, the total probability of success can be written as
\begin{equation}
\text{P}_{\text{suc}}^{a}=1-\left(1-\frac{1}{2\varrho\gamma_a}\right)^{N},\label{EQ6}
\end{equation}
and the fidelity of the complete multihop teleportation becomes
\begin{eqnarray}
F_{\rm tot.}^a&=&\left[\tau_0^2\left(\bar{\xi}_a^4+\xi_a^4\right)+\left(\tau_1^2+\tau_2^2\right)\bar{\xi}_a^2-2\tau_0\tau_3\xi_a^2+\tau_3^2\right]^{2N}\nonumber\\
&&\times\left(\left|a_0\right|^2+\left|d_0\right|^2\right)^2.
\end{eqnarray}
We would like to remind the reader that the above equation should not be confused with the fidelity of the $N$-th hop teleportation.
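As a quick consistency check (a sketch with arbitrary sample values, not values from the paper): the two-hop success probability quoted above coincides with Eq. (\ref{EQ6}) for $N=2$ with the one-hop probability $p=1/(2\varrho\gamma_a)$, as expected for independent hops:

```python
vrho, gamma_a = 1.0, 7.5               # arbitrary sample values
p = 1.0 / (2*vrho*gamma_a)             # one-hop success probability
two_hop_quoted = (1.0/(vrho*gamma_a)) * (1.0 - 1.0/(4*vrho*gamma_a))
two_hop_general = 1.0 - (1.0 - p)**2   # N = 2 case of the general formula
assert abs(two_hop_quoted - two_hop_general) < 1e-12

N = 5
p_total = 1.0 - (1.0 - p)**N           # total success probability over N hops
print(p_total)
```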
\subsection{Quantum wireless multihop teleportation in phase damping channel}
The phase damping channel is characterized by the following set of Kraus operators \cite{MA10}:
\begin{equation}
\mathcal{K}_0^{\rm Ph}=\sqrt{\bar{\xi_{p}}}\left[\begin{matrix}1&0\\0&1\end{matrix}\right], \mathcal{K}_1^{\rm Ph}=\sqrt{\xi_{p}}\left[\begin{matrix}1&0\\0&0\end{matrix}\right], \mathcal{K}_2^{\rm Ph}=\sqrt{\xi_{p}}\left[\begin{matrix}0&0\\0&1\end{matrix}\right],
\end{equation}
where $\xi_p(0\leq\xi_p\leq1)$ represents the decoherence rate of phase damping noise and $\bar{\xi_{p}}=1-\xi_{p}$. Following the same procedure of section IV A, we can calculate the total probability of success of multihop teleportation from source to the destination node as
\begin{equation}
\text{P}_{\text{suc}}^{p}=1-\left(1-\frac{1}{2\varrho\gamma_p}\right)^{N}, \ \mbox{where}\ \gamma_p=\frac{1}{\left(\bar{\xi}_p^2\tau_1\right)^2}+\frac{1}{\left(\bar{\xi}_p^2\tau_2\right)^2},
\end{equation}
and the fidelity of the complete multihop teleportation becomes
and the fidelity of the complete multihop teleportation becomes
\begin{eqnarray}
F_{\rm tot.}^p&=&\big[\tau_0^2\left(\bar{\xi}_p^2+\xi_p^2\right)^2+\left(\tau_1^2+\tau_2^2+\tau_3^2\right)\bar{\xi}_p^4+2\tau_3^2\xi^2_p\bar{\xi}_p^2\nonumber\\
&&+\,{\xi}_p^4\tau_3^2\big]^{2N}\times\left(\left|a_0\right|^2+\left|d_0\right|^2\right)^2.
\end{eqnarray}
Table \ref{tab_spec} and Figure~\ref{fig_spec}, third and fourth
panels). The distance resulting from this fit is $D=3.2\pm 1.7$ kpc, which is
consistent with the electron density distance. Finally, we verified that even
when the thermal component is modeled by a neutron star atmosphere, an
additional power law is not required (last fit in Table \ref{tab_spec}).
We note that the residuals from the model fits shown in Figure~\ref{fig_spec}
are relatively large at $\sim 0.6-0.7$ keV. To try to reduce the residuals,
we fitted the data using the previous models, each modified by an absorption
component (phabs or tbabs in XSPEC). We find that the resulting fits are not
improved with respect to our previous results.
\section{Timing Analysis}
\label{timing}
For the timing analysis we only used PN data, extracted by applying the
filtering criteria and extraction region previously described in
\S\ref{obser}; the filtered file was then barycentrically corrected. In
order to search for an X-ray modulation at the PSR~B2334+61\ spin period, we first
determined a predicted pulse period at the epoch of our {\it XMM-Newton}\ observations,
assuming a linear spin-down rate and using the radio measurements
\citep{dtws85,hlk04}. We calculate $P = 0.495355469$ s ($f = 2.0187523$ Hz)
at the midpoint of our observation (MJD 53,047.6). As glitches and/or
deviations from a linear spin-down may alter the period evolution, we then
searched for a pulsed signal over a wider frequency range centered on
$f = 2.01875$ Hz. We searched for pulsed emission using two methods. In the
first method we implement the $Z^{2}_{n}$ test \citep{buc83}, with the number
of harmonics $n$ being varied from 1 to 5. In the second method we calculate
the Rayleigh statistic \citep{dej91,mar72} and then calculate the maximum
likelihood periodogram (MLP, \citealt{za02}) using the $C$ statistic
\citep{cas79} to determine significant periodicities in the data sets.
We do not find any significant peak near the predicted frequency with
either method, either using the whole $0.3-10$~keV energy band or restricting
the search to the $0.3-1.5$~keV band. By folding the light curve of PSR~B2334+61\
on the radio frequency and fitting it with a sinusoid, we determine an upper
limit for the pulsation of 5\% in modulation amplitude (defined as
$(F_{max}-F_{min})/(F_{max}+F_{min})$ where $F_{max}$ and $F_{min}$ are the
maximum and minimum of the pulse light curve).
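For illustration, a minimal sketch of the $Z^2_n$ statistic and of the modulation-amplitude definition used above; the photon phases and folded profile here are synthetic stand-ins, not the actual EPIC data:

```python
import numpy as np

def z2n(phases, n_harmonics):
    """Z^2_n statistic (Buccheri et al. 1983) for photon phases in [0, 1)."""
    N = len(phases)
    z = 0.0
    for k in range(1, n_harmonics + 1):
        c = np.cos(2*np.pi*k*phases).sum()
        s = np.sin(2*np.pi*k*phases).sum()
        z += c*c + s*s
    return 2.0*z/N

rng = np.random.default_rng(0)
flat = rng.random(5000)        # unpulsed data: phases uniform in [0, 1)
z_flat = z2n(flat, 2)          # distributed as chi^2 with 2n = 4 d.o.f.
assert z_flat < 50.0           # no spurious detection for noise

# modulation amplitude (F_max - F_min)/(F_max + F_min) of a folded profile
phase = np.linspace(0.0, 1.0, 32, endpoint=False)
profile = 100.0 + 5.0*np.cos(2*np.pi*phase)   # sinusoid with 5% modulation
amp = (profile.max() - profile.min()) / (profile.max() + profile.min())
print(round(amp, 3))  # -> 0.05
```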
\section{Discussion}
\label{disc}
We have presented the results from the first {\it XMM-Newton}\ observation of PSR~B2334+61. The
source has been positively detected in all EPIC instruments, although the
X-ray emission is very faint and the spectrum does not have a high enough
signal-to-noise to make a detailed multi-component fit. However, single
component fits allowed us to discriminate between a thermal (blackbody-like)
or non-thermal (power law-like) nature of the dominant emission mechanism.
We find that the spectrum is well represented by either thermal
blackbody-like emission from a small emitting area (e.g. a hot polar cap) of
size $\sim 2$~km or emission from a pure H atmosphere of a neutron star with
$R\sim 13$~km and magnetic field $B\sim10^{13}$~G. The EPIC flux, measured
in the $0.3-10$~keV band, is $(9.2_{-0.9}^{+0.6}) \times 10^{-14}$ erg
cm$^{-2}$ s$^{-1}$ and $(1.7_{-1.6}^{+0.1}) \times 10^{-13}$ erg cm$^{-2}$
s$^{-1}$, in the two cases respectively. Both values are slightly higher
than that previously inferred by \cite{bec96}, $(7.1 \pm 0.2) \times 10^{-14}$
erg cm$^{-2}$ s$^{-1}$, although we note that due to their limited counts
{\it ROSAT} data could only be analyzed by making an assumption a priori on
the spectral shape. In both cases, the value of $N_H$ is consistently lower
than the total galactic one in the pulsar direction. A fit with a power-law
model alone is statistically worse, and we find little evidence for the
presence of a non-thermal component in the spectrum in addition to the
thermal one. The upper limit for the flux contribution from the power-law in
the energy range $0.3-1.5$ keV to the blackbody fit is 3\%, and to the
magnetized atmospheric model fit is $\ll 1$\%. We do not detect X-ray
pulsations corresponding to the radio signal, to a limit of 5\% in modulation
amplitude.
Although both thermal fits are conceivable options, we tend to prefer the
atmospheric model representation based on physical grounds. In this case,
magnetic effects for a field strength of order $\sim 10^{13}$~G (which is the
value inferred from measurements of the source spin period and period
derivative in the radio band) are consistently accounted for in the radiative
transfer computation. Moreover, a value for the radius of $\sim 10-13$~km is
in agreement with the prediction of several neutron star equations of state
\citep{lapra} and we found that this parameter can be successfully adjusted
to make the distance inferred from the spectral fit consistent with that
obtained from the electron density value of this pulsar.
To date, thermal emission has been detected in only a very few radio pulsars:
PSR~B0656+14 (\citealt{po96}), PSR~B1055-52 (\citealt{pa02}), PSR~J0437-4715
(\citealt{zavlin02}), PSR~J0538+2817 (\citealt{mcg03}), Geminga
(\citealt{hw97}), Vela (\citealt{pa01}), and PSR~B1706-44 (\citealt{gott02},
\citealt{mcg04}). In the case of the first three objects the thermal emission
detected above $\sim 0.5$~keV is more likely to originate from a hot-polar
cap. These sources are in fact old pulsars, at a more advanced stage of
their cooling history, and their surface emission should peak at much lower
(UV) energies. This idea was strengthened by the detection of a further
thermal component below $0.7$~keV (\citealt{pa02}) in the spectrum of the
brightest of them, PSR~B0656+14. The Vela pulsar and PSR~B1706-44 are
younger ($\tau \sim 10^4$~yrs) and are the only active radio pulsars for
which the thermal component observed in the soft X-rays is well explained by
a magnetized cooling atmosphere (\citealt{pa01}, \citealt{mcg04}). When this
model is assumed instead of a blackbody, the inferred radius is in agreement
with a neutron star equation of state. The only other neutron stars whose
thermal component is better described by an atmospheric model, and for which
this interpretation resolves all the inconsistencies which follow from the
blackbody interpretation, are the radio-silent neutron stars 1E~1207-52
(\citealt{za98}) and RX~J0822-4300 (\citealt{za99}).
The {\it XMM-Newton}\ observation reported here allows us to add another entry to the
list. If indeed the X-ray emission detected from PSR~B2334+61\ is originating in
the cooling atmosphere of the neutron star, our estimate of the effective
temperature allows us to localize the object in the neutron star thermal
evolutionary diagram.
Our knowledge of neutron star interiors is still uncertain and accurate
measurements of the neutron star surface temperature are particularly
important to constrain the cooling models and provide information on the
physics of the neutron star. Roughly speaking, theoretical models predict
a two-fold behavior of the cooling curves depending on the star mass. In
low-mass neutron stars neutrino emission is mainly due to a modified Urca
process and nucleon-nucleon bremsstrahlung. These are relatively weak
mechanisms and produce {\it slow cooling}. In stars of higher mass the
neutrino emission is enhanced by a direct Urca process (or other mechanisms
in exotic matter), therefore these stars cool down much faster ({\it fast
cooling} regime). To date (see \citealt{ya02} for a discussion) it has
been realized that simple models which do not account for proton and
neutron superfluidity fail to explain the surface temperatures observed
in many sources, unless objects such as Vela, Geminga and RX~J1856-3754
do have exactly the critical mass that bounds the transition between the very
different {\it slow cooling} and {\it fast cooling} regimes. This unlikely
fine-tuning is not required if the effects of nucleon superfluidity are
accounted for. In particular, models with proton superfluidity included
predict an intermediate region between fast cooling and slow cooling curves,
which is expected to be populated by medium mass neutron stars (roughly with
$M$ between 1.4 and 1.65 $M_\sun$). Although the full picture only holds if,
at the same time, neutron superfluidity is assumed to be rather weak, it is
still interesting that many neutron stars (as 1E~1207-52, Vela, RX~J1856-3754,
PSR~0656+14) have a surface temperature which falls in such a transition
region \citep{ya02}. In turn, this means that measuring the surface
temperature allows us to ``weigh'' neutron stars \citep{ka01}. As we can see
from the first two panels of Figure~2 in \cite{ya02}, assuming an age of
$\log \tau = 4.6$, the surface temperature of PSR~B2334+61\ derived from the
blackbody fit is even higher than the upper cooling curves, i.e., those
corresponding to the slow cooling regime. However, the surface temperature
$\log T^\infty = 5.9$ obtained by fitting with the magnetized model and
$R=13$~km falls well within the above mentioned transition region of medium
mass neutron stars. The mass of PSR~B2334+61\ should then be $\sim 1.45 M_\sun $
or $\sim 1.6 M_\sun$, depending on the kind of proton superfluidity assumed
in the model (1p and 2p respectively). Indeed, the measured properties of
PSR~B2334+61\ reported here are in remarkable agreement with those measured for
PSR~B1706-44 \citep{mcg04}.
As mentioned in the introduction, we are now in the position to make
the first experimental studies of surface temperatures of isolated neutron
stars. However, although most of our insight into neutron star temperatures
and interiors relies on them, at present these studies have to be considered
as pioneering. Only a few sources are currently available for this kind of
exercise, so every newly discovered candidate is important. At present,
X-ray spectra of thermal emission from NSs are fitted with either a blackbody
or magnetic atmosphere models. Although the latter definitely represent a
substantial improvement, inasmuch as they include most of the relevant physics,
a large amount of work remains to be done before they can be claimed to be
fully satisfactory. The same is true for the cooling curves: progress has
been made in improving these models but they cannot yet be considered
completely realistic. Besides other effects, we note the caveat that both
spectra and cooling curves are computed assuming 1-dimensional transfer
of radiation/heat in a single effective temperature, a single magnetic field
zone and neglect all effects of more realistic neutron star thermal and
magnetic surface distributions (see \citealt{zt05}, \citealt{pa04},
\citealt{bl04}). However, what values to include for these parameters is not
necessarily obvious as it is not completely clear what the NS surface
temperature distribution is even in the case of a simple dipolar magnetic
field (see \citealt{gkp04}). We also note that interpreting the temperatures
obtained from the spectral fits in the context of theoretical cooling curves
relies on the true age of the pulsar being the same as the characteristic
spin-down age, which may not be valid. Our results for PSR~B2334+61\ provide
information that helps to constrain the current models and will enable more
realistic models to be produced in the future.
\acknowledgments
This work is based on observations obtained with {\it XMM-Newton}, an ESA science mission
with instruments and contributions directly funded by ESA Member States and
NASA. KEM and SZ acknowledge the support of the UK Particle Physics and
Astronomy Research Council (PPARC). Part of this work was supported by
NASA grant R-2865-04-0. We thank the anonymous referee for a careful reading
of the manuscript and several helpful comments. We thank Werner Becker and
Bernd Aschenbach for their contributions.
\clearpage
\section{Introduction} \label{sec:introduction}
Distributed denial of service (DDoS) attacks have long been considered a serious threat to the availability of the Internet. Over the past few decades, both industry and academia have made considerable efforts to address this problem. Academia has proposed various approaches, ranging from filtering-based approaches~\cite{practicalIPTrace, advancedIPTrace,AITF,pushback,implementPushback,StopIt}, capability-based approaches~\cite{siff, TVA, netfence, MiddlePolice}, overlay-based systems~\cite{phalanx,sos,mayday} and systems based on future Internet architectures~\cite{scion,aip,xia} to other variants~\cite{speakup,mirage, CDN_on_Demand}. Meanwhile, many large DDoS-protection-as-a-service providers (\emph{e.g.,}\xspace Akamai, CloudFlare), some of which are unicorns, have played an important role in DDoS prevention. These providers massively over-provision data centers for peak attack traffic loads and then share this capacity across many customers as needed. When under attack, victims use the Domain Name System (DNS) or the Border Gateway Protocol (BGP) to redirect traffic to the provider rather than their own networks. The DDoS-protection-as-a-service provider applies its proprietary techniques to scrub traffic, separating malicious from benign, and then re-injects only the benign traffic back into the network to be carried to the victim.
Despite such efforts, recent industrial interviews with over 100 security engineers from over ten industry segments that are vulnerable to DDoS attacks indicate that DDoS attacks have not been fully addressed~\cite{middlepolice-ton}. First, since most of the academic proposals incur significant deployment overhead (\emph{e.g.,}\xspace requiring software/hardware upgrades from a large number of Autonomous Systems (ASes) that are unrelated to the DDoS victim, or changing the client network stack by inserting new packet headers), few of them have ever been deployed in the Internet. Second, existing security-service providers are not a cure for DDoS attacks across all customer segments. In particular, a prerequisite of using their security services is that a destination site must redirect its network traffic to these service providers. Cloudflare, for instance, terminates all user Secure Sockets Layer (SSL) connections to the destination at Cloudflare's network edge, and then sends user requests back (after applying its secret-sauce filtering) to the destination server over new connections. Although this operational model is acceptable for small websites (\emph{e.g.,}\xspace personal blogs), it is privacy invasive for large organizations such as governments, hosting companies and medical foundations.
As a result, these organizations have no choice but to rely on their Internet Service Providers (ISPs) to block attack traffic. Recognizing this issue, in this paper we propose Umbrella\xspace, a new DDoS defense mechanism that enables ISPs to offer readily deployable and privacy-preserving DDoS prevention services to their customers. The design of Umbrella\xspace draws lessons from real-world DDoS attacks that intentionally disconnect the victim from the public Internet by overwhelming the victim's inter-connecting links with its ISPs. Thus, Umbrella\xspace proposes to protect the victim by allowing its ISPs to throttle attack traffic, preventing any undesired traffic from reaching the victim. Compared with previous approaches requiring Internet-wide AS cooperation, Umbrella\xspace simply needs \emph{independent deployment} at the victim's direct ISPs to provide immediate DDoS defense. Further, unlike existing security-service providers, an ISP does not need to terminate the victim's connections. Instead, the ISP still operates at the \emph{network layer} as usual, completely preserving the victim's application-layer privacy. Third, Umbrella\xspace is \emph{lightweight} since it requires no software or hardware upgrades at either the Internet core or clients. Finally, Umbrella\xspace is \emph{performance friendly}: it is overhead-free in normal scenarios by staying completely idle, and imposes negligible packet processing overhead during attack mitigation.
\begin{figure*}[t]
\centering
\mbox{
\subfigure[\label{fig:architecture:a}Filtering-based approaches.]{\includegraphics[scale=0.3]{deploy_filter.pdf}}
\subfigure[\label{fig:architecture:b}Capability-based approaches.]{\includegraphics[scale=0.3]{deploy_cap.pdf}}
\subfigure[\label{fig:architecture:c}Umbrella\xspace.]{\includegraphics[scale=0.3]{deploy_umbrella.pdf}}
\subfigure[\label{fig:architecture:d}Multi-layered defense.]{\includegraphics[scale=0.3]{umbrella_layer.pdf}}
}
\caption{Part (a) and part (b) show that filtering-based and capability-based approaches are
difficult to deploy in the Internet since they require Internet-wide AS cooperation and facility upgrades. Part (c)
shows Umbrella\xspace is immediately deployable at the victim's ISP. Part (d) details Umbrella\xspace's multi-layered defense.}
\label{fig:architecture}
\end{figure*}
In its design, Umbrella\xspace develops a novel multi-layered defense architecture. In the \emph{flood throttling} layer, Umbrella\xspace defends against amplification-based attacks that exploit various network protocols (\emph{e.g.,}\xspace the Simple Service Discovery Protocol (SSDP) and the Network Time Protocol (NTP))~\cite{arbor}. Although such attacks may involve extremely high volumes of traffic (\emph{e.g.,}\xspace hundreds of gigabits per second), they can be effectively detected via static filters and therefore stopped. In the \emph{congestion resolving} layer, Umbrella\xspace defends against more sophisticated attacks in which adversaries may adopt various strategies. Umbrella\xspace introduces the key concept of \emph{congestion accountability}~\cite{FlowPolice} to selectively punish users who keep injecting packets in the face of severe congestive losses. The congestion resolving layer provides both \emph{guaranteed} and \emph{elastic} bandwidth shares for legitimate flows: \textsf{(i)}\xspace regardless of attackers' strategies, legitimate users are guaranteed to receive their fair share of the victim's bandwidth; \textsf{(ii)}\xspace when attackers fail to execute their optimal strategy, legitimate clients are able to enjoy larger bandwidth shares. The last layer, the \emph{user-specific} layer, allows the victim to enforce self-interested traffic policing rules that best suit its business logic. For instance, if the victim never receives a certain type of traffic, it can instruct Umbrella\xspace to completely block such traffic when attacks are detected. Similarly, the victim can instruct Umbrella\xspace to reserve bandwidth for premium clients so that they will not be affected by DDoS attacks.
In summary, the major contributions of this paper are the design, implementation and evaluation of Umbrella\xspace, a new DDoS defense approach that enables ISPs to offer readily deployable and privacy-preserving DDoS prevention services to their downstream customers. The novelties of Umbrella\xspace lie in the following two dimensions. First, unlike the vast majority of academic DDoS prevention proposals, which require extensive changes to the Internet core and the client network stack, Umbrella\xspace only requires lightweight upgrades from business-related entities (\emph{i.e.,}\xspace the potential DDoS victim itself and its direct ISPs), yielding instant deployability in the current Internet architecture. Second, compared with existing deployable industrial DDoS mitigation services, Umbrella\xspace, through our novel multi-layered defense architecture, offers privacy-preserving and complete DDoS prevention that can deal with a wide spectrum of attacks, while also offering victim-customizable defense. We implement Umbrella\xspace on Linux to study its scalability and overhead. The results show that a commodity server can effectively handle $100$ million attackers while introducing merely ${\sim}0.06\mu s$ of packet processing overhead. Finally, we perform detailed evaluations on our physical testbed, our flow-level simulator and the ns$3$ packet-level simulator~\cite{ns3} to demonstrate the effectiveness of Umbrella\xspace in mitigating DDoS attacks.
\section{Problem Space and Goals}\label{sec:design_rational}
In this section, we discuss Umbrella\xspace's problem space and its design goals. Completely preventing DDoS attacks is an extremely broad problem; the problem space articulates Umbrella\xspace's position within this scope. Further, the design goals of Umbrella\xspace are derived from the industrial interviews in \cite{middlepolice-ton}, so that Umbrella\xspace indeed offers desirable DDoS prevention to large and privacy-sensitive potential victims such as government and medical infrastructures. We do not, however, claim that these goals are universally applicable to all types of potential victims (for instance, a web blogger may simply choose CloudFlare to keep her website online).
\subsection{Problem Space}\label{sec:problem_space}
\noindent\textbf{Network-Layer DDoS Mitigation.}
We position Umbrella\xspace as a network-layer DDoS defense approach that stops undesirable traffic from reaching and consuming the resources of the victim's network. Specifically, Umbrella\xspace is designed to prevent attackers from exhausting the bandwidth of the inter-network link connecting the victim's network to its ISP, so as to keep the victim ``online" even in the face of DDoS attacks. Note that Umbrella\xspace should be viewed as complementary to solutions addressing DDoS attacks at other layers (\emph{e.g.,}\xspace layer-$7$ HTTP attacks). Only the concerted efforts of these solutions can potentially provide complete defense against all types of DDoS attacks.
\noindent\textbf{Adversary Model.}
We consider strong adversaries that can compromise both end-hosts and routers. They are able to launch strategic attacks (\emph{e.g.,}\xspace the on-off shrew attack~\cite{low-rate}), launch flash attacks with numerous short flows, adopt maliciously tampered transport protocols (\emph{e.g.,}\xspace poisoned TCP stacks that do not properly adjust rates based on network congestion), leverage the Tor network to hide their identities, spoof addresses and recruit vast numbers of bots to launch large-scale DDoS attacks.
\noindent\textbf{Assumptions.}
Umbrella\xspace maintains per-sender state in the congestion resolving layer, and consequently relies on the correctness of source addresses. Such correctness can be assured by wider adoption of Ingress Filtering~\cite{ingressFilter, BCP38} or of source authentication schemes~\cite{passport, OPT}. Until complete spoof elimination is achieved\footnote{In fact, the Spoofer Project~\cite{spoofer_project} extrapolates that only ${\sim}13.6\%$ of addresses are spoofable, indicating tremendous progress.}, Umbrella\xspace requires the victim's additional participation to minimize the chance of source spoofing. In particular, the victim needs to provide a list of authenticated (\emph{i.e.,}\xspace TCP handshakes with these sources are successfully established) or preferred source IP addresses (based on the victim's routine traffic analysis performed in normal scenarios), so that Umbrella\xspace will only maintain per-sender state for these addresses during attack mitigation.
\subsection{Design Goals}
\noindent\textbf{Readily Deployable.} One major design goal of Umbrella\xspace is to be immediately deployable in the current Internet architecture. To this end, the functionality of Umbrella\xspace relies only on independent deployment at the victim's ISP, without further deployment requirements at remote ASes that are unrelated to the victim. As illustrated in Fig. \ref{fig:architecture:c}, Umbrella\xspace can be deployed at the upstream end of the link connecting the victim's network and its ISP. In the rest of the paper, we refer to this link as the interdomain link and to its bandwidth as the interdomain bandwidth. Note that Umbrella\xspace deployed at the victim's ISP cannot stop DDoS attacks trying to disconnect the victim's ISP from its upstream ISPs. However, the victim's ISP, now becoming a victim itself, has the motivation to protect itself by purchasing Umbrella\xspace's protection from its upstream ISPs. Recursively, DDoS attacks occurring at different levels of the Internet hierarchy can be resolved. The key idea of Umbrella\xspace is that it never requires cooperation among a wide range of (unrelated) ASes; rather, \emph{independent deployment} is sufficient and effective.
\noindent\textbf{Privacy-Preserving and Customizable DDoS Prevention:} Another primary design goal of Umbrella\xspace is to offer privacy-preserving and customizable DDoS prevention. On the one hand, Umbrella\xspace requires no application connection termination at ISPs, allowing them to operate at the network layer as usual, which completely preserves the application-layer privacy of their customers. On the other hand, Umbrella\xspace's multi-layered defense enables ISPs to offer customizable DDoS prevention that is driven by customer policies.
\noindent\textbf{Lightweight and Performance Friendly:} Umbrella\xspace's deployment is very lightweight: it can be implemented as a software router at the interdomain link, maintaining at most per-source state. Our prototype implementation demonstrates that a commodity server can effectively scale up to deal with millions of states. Further, Umbrella\xspace is completely idle and transparent to applications in normal scenarios, introducing zero overhead. During DDoS attack mitigation, Umbrella\xspace's traffic policing introduces negligible packet processing overhead compared with previous approaches that require complicated and expensive operations, such as adding cryptographic capabilities and extra packet headers~\cite{netfence}.
Given these goals, we position Umbrella\xspace as a practical DDoS prevention service, offered by ISPs, that is desirable by large and privacy-sensitive potential DDoS victims.
\section{Design Overview}\label{sec:system}
In its design, Umbrella\xspace develops a three-layered defense architecture to stop undesirable traffic. The user-specific layer, enforcing policies defined by the victim, has priority over the other two layers, which operate in parallel. Umbrella\xspace is only active when it observes the signatures of volumetric DDoS attacks against the interdomain link (\emph{e.g.,}\xspace the link experiences enduring congestion causing severe packet losses). Umbrella\xspace stops traffic policing and becomes idle when the link returns to its normal state. As part of the user-specific layer, the victim is free to define specific rules that determine when traffic policing should be initiated or terminated.
\subsection{Flood Throttling Layer}\label{sec:flood_layer}
The flood throttling layer is designed to stop amplification-based attacks, in which attackers send numerous requests, with the source address spoofed as the victim's address, to public servers serving certain Internet protocols (\emph{e.g.,}\xspace the Network Time Protocol, the Domain Name System). As a result, the victim receives an extremely high volume of responses, resulting in interdomain bandwidth exhaustion and disconnection from its ISP. Although the attack volumes can be as high as around 600 Gbps~\cite{arbor}, these attacks are easy to detect. According to the analysis in \cite{arbor}, only seven network service protocols are exploited to launch amplification-based DDoS attacks, and over 87\% of them rely on UDP with specific port numbers. Thus, by installing a set of static filters based on these network service protocols, Umbrella\xspace can effectively throttle these large-yet-easy-to-catch DDoS attacks.
In the case where the victim does receive traffic for certain network service protocols that may be exploited to launch amplification attacks, Umbrella\xspace can leverage the Weighted Fair Queuing technique to minimize the potential effect of these attacks. For instance, if, based on traffic analysis in normal scenarios, $10\%$ of the victim's traffic traversing the interdomain link is NTP (on UDP port 123), then such traffic should be served in a queue whose normalized weight is configured as 0.1. Weighted fair queuing ensures that the victim always has sufficient bandwidth to process other flows (which are served in separate queues), regardless of how much NTP traffic is thrown at the victim.
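As an illustration of this guarantee, the following minimal sketch (not part of the deployed system; the queue names, link capacity and demands are made up for the example) computes weighted max-min fair allocations, the allocation discipline that weighted fair queuing approximates:

```python
def wfq_allocate(capacity, weights, demands):
    """Weighted max-min fair allocation: each queue is guaranteed at least
    weight * capacity, and bandwidth unused by one queue is redistributed
    to the others in proportion to their weights."""
    alloc = {q: 0.0 for q in weights}
    active = set(weights)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(weights[q] for q in active)
        share = {q: remaining * weights[q] / total_w for q in active}
        # Queues whose residual demand fits inside their share are satisfied.
        done = {q for q in active if demands[q] - alloc[q] <= share[q]}
        if not done:  # every remaining queue is bottlenecked: split and stop
            for q in active:
                alloc[q] += share[q]
            break
        for q in done:
            remaining -= demands[q] - alloc[q]
            alloc[q] = demands[q]
        active -= done
    return alloc

# An NTP flood (demand 100) cannot squeeze the other queue below its
# weighted guarantee on a hypothetical 10 Gbps interdomain link.
flood = wfq_allocate(10, {"ntp": 0.1, "rest": 0.9}, {"ntp": 100, "rest": 20})
```

In the flood scenario above, the NTP queue is capped at its 1 Gbps guarantee while the remaining traffic keeps its 9 Gbps share, which is the property the text relies on.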
\subsection{Congestion Resolving Layer}\label{sec:congestion_layer}
The congestion resolving layer is designed to stop subtle and sophisticated DDoS attacks that rely on numerous seemingly legitimate TCP flows. The crucial part of the defense is to enforce \emph{congestion accountability}~\cite{FlowPolice} so as to punish attackers who keep injecting vast amounts of traffic in the face of congestive losses. Specifically, in volumetric DDoS attacks, overloaded routers drop packets from all users, regardless of which users cause the \emph{enduring} congestion. In other words, congestion accountability is not considered when dropping packets. Consequently, legitimate users that run congestion-friendly protocols (\emph{e.g.,}\xspace TCP) are falsely penalized during the congestion, since it is actually caused by attackers. By enforcing congestion accountability, Umbrella\xspace is able to selectively punish misbehaving flows and therefore stop attack traffic.
Umbrella\xspace analyzes each user's congestion accountability from the perspective of the goal of network usage. Legitimate users aim to deliver or receive data via the network. Thus, when encountering congestion, a legitimate TCP sender tries to \emph{relieve} the congestion by reducing its rate, because its receiver cannot decode the data if some packets are lost. Sending more packets therefore makes no progress toward finishing the data transfer, but causes more severe congestion. On the contrary, attackers focus on exhausting network resources and care little about delivering data. As a result, they consistently generate traffic to \emph{contribute to} the congestion, regardless of how many packets have been dropped. Therefore, users who overlook packet losses and continuously inject packets are accountable for the enduring congestion occurring during DDoS attacks.
To resolve the congestion, Umbrella\xspace keeps a \emph{rate limiting window} for each user to prevent any user from sending faster than its rate limiting window. The size of the rate limiting window is determined by the rate limiting algorithm (\S \ref{sec:rateLimit}), which takes the users' sending rates and packet losses as input. All information needed to make rate limiting decisions is recorded in Umbrella\xspace's flow table (\S \ref{sec:flowTable}). The design of the rate limiting algorithm ensures that \textsf{(i)}\xspace the more aggressively attackers behave, the fewer bandwidth shares they obtain; and \textsf{(ii)}\xspace each legitimate client is guaranteed to receive the per-sender fair share of the congestion resolving layer's bandwidth, regardless of attackers' strategies (note that part of the interdomain bandwidth may be used by other layers). Further, legitimate users may obtain more bandwidth than the per-sender fair share when attackers fail to execute their optimal strategy.
\subsection{The User-specific Layer}\label{sec:user_layer}
The goal of adding the user-specific defense layer is to provide the victim with the flexibility to enforce self-interested traffic control policies that best suit its business logic, including adopting fairness metrics different from Umbrella\xspace's default (per-sender fairness) and offering proactive DDoS defense for premium clients so that they will never be disconnected from the victim.
Allowing user-specific policies distinguishes Umbrella\xspace from previous in-network DDoS prevention mechanisms that force the victim to accept the single policy proposed by those approaches. Umbrella\xspace thus creates extra deployment incentives for ISPs by enabling them to offer customized DDoS defense to customers.
\section{Design Details}
Since the flood throttling layer is straightforward in its design and the user-specific layer is typically driven by the victim, we focus on elaborating the congestion resolving layer in this section. Nevertheless, we present a full implementation of Umbrella\xspace with all three layers in \S \ref{sec:implementation}.
\subsection{Flow Table}\label{sec:flowTable}
Umbrella\xspace's flow table maintains per-sender network usage. Specifically, all packets sent from the same source are aggregated (and defined) as one \emph{coflow}\footnote{The concept of coflow is also used in data centers, meaning a group of flows from the same task~\cite{coflow2}.}, and the flow table maintains state for each coflow. As discussed in the \emph{Assumptions} section of \S \ref{sec:problem_space}, the flow table maintains state only for the set of source IP addresses explicitly provided by the victim, to prevent adversaries from exhausting table state via spoofed addresses. Umbrella\xspace does not keep state for each individual TCP flow (identified by its $5$-tuple), since the behavior of a single flow may not reflect the intention of the sender (malicious or not). For instance, a bot may keep sending new flows to the victim although its previous flows experienced severe losses. Even though each individual flow may be a legitimate TCP flow, the bot is in fact acting maliciously. However, if we interpret its behavior from the coflow's perspective, we can see that the bot continuously creates traffic in the face of congestive losses; it is thus accountable for the congestion and will be rate limited. In the rest of the paper, unless otherwise stated, flow and coflow are used interchangeably.
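The per-source aggregation can be sketched as follows (the addresses, ports and destination label are hypothetical, purely for illustration):

```python
from collections import defaultdict

# Toy coflow aggregation: per-5-tuple TCP flows collapse into per-source
# coflows, so repeated fresh flows from one bot are seen as one sender.
flows = [
    ("10.0.0.1", 1234, "victim", 80, "tcp"),
    ("10.0.0.1", 1235, "victim", 80, "tcp"),  # same bot, new flow after losses
    ("10.0.0.2", 4321, "victim", 443, "tcp"),
]
coflows = defaultdict(list)
for f in flows:
    coflows[f[0]].append(f)  # key on the source address only
```

Here the two flows from `10.0.0.1` land in a single coflow, so their combined behavior across losses is what gets judged.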
\begin{figure}[t]
\centering
\begin{tabular}{|>{}c|>{}c|>{}c|>{}c|>{}c|>{}c|}
\hline
$f$&
$\mathcal{T}_A$&
$\mathcal{W}_R$&
$\mathcal{P}_R$&
$\mathcal{P}_D$&
$\mathcal{L}_R$\\
\hline
$64$&
$32$&
$32$&
$32$&
$32$&
$64$\\
\hline
\end{tabular}
\caption{Each field in a flow entry and its corresponding size (bits).}
\label{fig:flow_entry}
\end{figure}
Each flow entry (identified by its source address $f$) in the flow table is composed of
a timestamp $\mathcal{T}_A$, $f$'s rate limiting window $\mathcal{W}_R$,
the number of packets $\mathcal{P}_R$ received from $f$, the number
of dropped packets $\mathcal{P}_D$ from $f$ and its packet loss rate $\mathcal{L}_R$.
Further, Umbrella\xspace maintains $\mathcal{W}_R^T$, the sum of rate limiting
windows of all flows, shared by all flow entries.
This information is necessary for the rate limiting algorithm.
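For illustration, the flow entry above can be mirrored as a toy data structure (Python is used only for exposition; the field widths from the figure are not modeled, and the default window value is an arbitrary placeholder):

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """One per-sender flow-table entry, keyed externally by source address f."""
    T_A: float = 0.0   # timestamp of first packet in the current detection period
    W_R: int = 0       # rate limiting window (packets allowed per period)
    P_R: int = 0       # packets received in the current period
    P_D: int = 0       # packets dropped in the current period
    L_R: float = 0.0   # packet loss rate of the previous period

flow_table = {}        # source address f -> FlowEntry
W_R_total = 0          # sum of all rate limiting windows, shared across entries
flow_table["10.0.0.1"] = FlowEntry(W_R=64)
```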
\subsection{Rate Limiting Algorithm}\label{sec:rateLimit}
The rate limiting algorithm is designed to enforce congestion accountability by punishing misbehaving users who
keep sending packets in the face of severe congestive losses. By dropping the
undesirable packets early, Umbrella\xspace can effectively prevent bandwidth exhaustion.
In its design, the algorithm executes periodic rate limiting for each flow during DDoS attacks.
Specifically, in each \emph{detection period}, the number of packets allowed for
each flow (or sender) is limited by its rate limiting window $\mathcal{W}_R$.
The $\mathcal{W}_R$ is updated every detection period according
to the flow's information recorded in the flow table, such as the flow's packet loss rate $\mathcal{L}_R$ and
its transmission rate $\mathcal{P}_R$.
\subsubsection{Populating the flow table}
Assume that a new flow $f$ is initiated at time $ts$.
Umbrella\xspace creates a flow entry for $f$ in its flow table,
with all fields of the entry initialized to zero.
Then Umbrella\xspace sets $\mathcal{T}_A$ to $ts$, increases $\mathcal{P}_R$ by one
and sets the initial $\mathcal{W}_R$ to the pre-defined fair share rate
$\mathcal{W}_{fair}$ (discussed in \S\ref{sec:paraSettings}).
From then on, Umbrella\xspace increases $\mathcal{P}_R$ by one for each arriving
packet of $f$ until the end of the current detection period (\emph{i.e.,}\xspace the end of the first detection period).
Umbrella\xspace uses packet arrival times to detect when it should start a new detection period for $f$.
Specifically, letting $\mathcal{D}_p$ denote the length of a detection period,
upon receiving a packet with arrival time $t_0 {>} \mathcal{T}_A {+} \mathcal{D}_p$,
Umbrella\xspace recognizes this packet as the first one received in the new detection period.
Then Umbrella\xspace performs the following updates in order:
\textsf{(i)}\xspace set $\mathcal{T}_A=t_0$;
\textsf{(ii)}\xspace update $\mathcal{W}_R$ and $\mathcal{L}_R$ according to Algorithm \ref{algo:rateLimit};
\textsf{(iii)}\xspace reset $\mathcal{P}_R$ and $\mathcal{P}_D$ to zero.
\subsubsection{The rate limiting algorithm}
\newcommand{\algrule}[1][.2pt]{\par\vskip.5\baselineskip\hrule height #1\par\vskip.5\baselineskip}
\begin{algorithm}[t]
\SetDataSty{textsl}
\SetKwData{CQ}{CongestionQueue}
\SetKwData{RL}{RateLimitingDecision}
\SetKwData{FC}{\textbf{Function:}}
\SetKwData{IP}{\textbf{Input:}}
\SetKwData{OP}{\textbf{Output:}}
\SetKwData{Main}{\textbf{Main Procedure:}}
\SetKwData{WS}{RateLimitingWindow}
\SetKwData{IN}{\textbf{Flow Initialization:}}
\footnotesize
\IP \\
\quad The service queue $\mathcal{Q}_C$, served in FIFO order.\\
\quad $f$'s entry in the flow table. \\
\quad $\mathcal{B}$: the bandwidth available in the congestion resolving layer. \\
\quad $\mathcal{W}_R^T$: the sum of all flows' rate limiting windows. \\
\quad System related parameters: the detection period length $\mathcal{D}_p$, the
weight $\lambda$ for previous packet losses, the packet loss rate threshold $L_{Th}$ and
the per-sender fair share rate $\mathcal{W}_{fair}$.\\
\OP \\
\quad Updated $\mathcal{Q}_C$, $\mathcal{W}_R^T$ and the flow entry of $f$.
\algrule[0.5pt]
\IN \\
\quad $\mathcal{W}_R = \mathcal{W}_{fair}$~\;
\algrule[0.5pt]
\Main \\
\For{each arrived packet $\mathcal{P}$ of flow $f$}{
$\mathcal{P}_R \leftarrow \mathcal{P}_R + 1$~\;
\If{$\mathcal{P}_R > \mathcal{W}_R$} {
Drop $\mathcal{P}$~; ~~$\mathcal{P}_D \leftarrow \mathcal{P}_D + 1$~\;
}
\lElse{
\CQ{$\mathcal{P}$}
}
\If{$\mathcal{P}$ is the first packet in a new detection period} {
\RL{$\mathcal{W}_R, \mathcal{L}_R, \mathcal{P}_R, \mathcal{P}_D$}\;
}
}
\algrule[0.5pt]
\textbf{\FC} \RL{$\mathcal{W}_R, \mathcal{L}_R, \mathcal{P}_R, \mathcal{P}_D$}: \\
\quad $recentLoss \leftarrow \mathcal{P}_D / \mathcal{P}_R$~\;
\quad $packetLoss \leftarrow \lambda \cdot \mathcal{L}_R + (1-\lambda) \cdot recentLoss$~\;
\quad $\mathcal{W}_R^{original} \leftarrow \mathcal{W}_R$~; ~~$\mathcal{L}_R \leftarrow packetLoss$~\;
\quad \lIf{$packetLoss > L_{Th}$ and $\mathcal{P}_R > \mathcal{W}_{fair}$}{
$\mathcal{W}_R \leftarrow \mathcal{W}_R / 2$
}
\quad \lElse {
$\mathcal{W}_R \leftarrow$ \WS{$\mathcal{W}_R$}
}
\quad $\mathcal{W}_R^T \leftarrow \mathcal{W}_R^T + \mathcal{W}_R - \mathcal{W}_R^{original}$~\;
\algrule[0.5pt]
\textbf{\FC} \CQ{$\mathcal{P}$}: \\
\quad \If{the queue $\mathcal{Q}_C$ is full}{
Drop $\mathcal{P}$~\;
$\mathcal{P}_D \leftarrow \mathcal{P}_D + 1$~\;
}
\quad \lElse{
Append $\mathcal{P}$ to $\mathcal{Q}_C$
}
\algrule[0.5pt]
\textbf{\FC} \WS{$\mathcal{W}_R$}: \\
\quad \textbf{return} $\frac{\mathcal{W}_R}{\mathcal{W}_R^T}\cdot \mathcal{B}$~\;
\caption{\bf Rate Limiting Algorithm}\label{algo:rateLimit}
\end{algorithm}
\normalsize
At a high level, the rate limiting algorithm determines the allowed rate
for each flow based on its congestion accountability. In particular,
the rate limiting windows of congestion-accountable flows (those with both high packet loss rates and
high transmission rates) are significantly reduced.
Flows that respect packet losses by adjusting their sending rates accordingly are
guaranteed to receive the per-sender fair share of the bandwidth.
We adopt this fairness metric because it is the best that can be
guaranteed for legitimate users under strategic attacks. The argument is straightforward:
by behaving in exactly the same way as legitimate users,
attackers can receive at least the per-sender fair share, meaning that the
optimal guaranteed share for a legitimate user is also the per-sender fair share.
However, the algorithm allows legitimate users to obtain larger bandwidth
shares when attackers fail to execute this optimal strategy.
Umbrella\xspace performs periodic rate limiting. In each detection period,
Umbrella\xspace learns each flow's transmission rate and packet loss rate to determine its $\mathcal{W}_R$.
A flow $f$'s transmission rate is quantified by $\mathcal{P}_R$,
the number of packets received from $f$ in the current period.
$f$'s packets may be dropped for two reasons: \textsf{(i)}\xspace $f$'s
sending rate exceeds its $\mathcal{W}_R$ or \textsf{(ii)}\xspace the service queue is full due to congestion.
$f$'s packet loss rate $\mathcal{L}_R$ in the current period is the ratio of dropped packets
to received packets. When making rate limiting decisions,
Umbrella\xspace adopts the metric $packetLoss$, which incorporates both the packet losses in
the current period and previous packet losses. This design prevents
attackers from hiding their previous packet losses by pausing transmission for a while before
sending a new traffic burst (\emph{e.g.,}\xspace the on-off shrew attack~\cite{low-rate}).
If both $packetLoss$ and $\mathcal{P}_R$ exceed their pre-defined thresholds,
Umbrella\xspace classifies $f$ as a maliciously behaving flow and halves
its $\mathcal{W}_R$.
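The window update just described can be sketched in a few lines of Python. This is our own rendering of the \textsl{RateLimitingDecision} function of Algorithm~\ref{algo:rateLimit}; the parameter values and the assumption that $\mathcal{B}$ is pre-converted to packets per period are illustrative:

```python
def rate_limiting_decision(entry, w_r_total, b_packets,
                           lam=0.5, l_th=0.05, w_fair=100):
    """Sketch of RateLimitingDecision from Algorithm 1 (our Python rendering).

    entry: dict with keys "W_R", "L_R", "P_R", "P_D" for one flow.
    w_r_total: current sum of all flows' rate limiting windows.
    b_packets: bandwidth of the congestion resolving layer, pre-converted
               to packets per detection period (an assumption here).
    Returns the updated w_r_total.
    """
    recent_loss = entry["P_D"] / entry["P_R"] if entry["P_R"] else 0.0
    packet_loss = lam * entry["L_R"] + (1 - lam) * recent_loss
    w_original = entry["W_R"]
    entry["L_R"] = packet_loss
    if packet_loss > l_th and entry["P_R"] > w_fair:
        entry["W_R"] //= 2                      # congestion-accountable: halve
    else:
        # compliant: proportional share of the available bandwidth
        entry["W_R"] = int(entry["W_R"] / w_r_total * b_packets)
    return w_r_total + entry["W_R"] - w_original
```

A flow with a $20\%$ recent loss rate and a transmission rate above $\mathcal{W}_{fair}$ has its window halved, whereas a compliant flow receives its proportional share $\frac{\mathcal{W}_R}{\mathcal{W}_R^T}\cdot\mathcal{B}$.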
We explain two design details of the rate limiting algorithm.
To begin with, the algorithm cannot make a rate limiting decision for a fresh flow
in its first detection period since Umbrella\xspace has not yet learned the flow's packet loss rate and sending rate.
Thus Umbrella\xspace initializes its $\mathcal{W}_R$ to the pre-defined
per-sender fair share rate $\mathcal{W}_{fair}$ in the first detection period,
preventing attackers from exhausting bandwidth by creating new flows.
Besides $\mathcal{W}_{fair}$, the algorithm relies on another three
system parameters: $\mathcal{D}_p$, $\lambda$ and $L_{Th}$.
We discuss the reasoning behind the parameter settings in \S\ref{sec:paraSettings}.
Further, the \textsl{RateLimitingWindow} function returns the allowed bandwidth for $f$.
This bandwidth value is converted into the number of $1.5$KB packets allowed in
one detection period, which becomes $f$'s updated $\mathcal{W}_R$.
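Assuming full-size $1.5$KB packets, this conversion is a one-liner; the sketch below uses $\mathcal{D}_p=5$s (the value adopted in our implementation) as an illustrative default:

```python
def window_in_packets(bandwidth_bps, period_s=5.0, packet_bytes=1500):
    # Number of full-size (1.5 KB) packets that fit into `bandwidth_bps`
    # over one detection period of `period_s` seconds.
    return int(bandwidth_bps * period_s / (packet_bytes * 8))
```

For example, an allowed bandwidth of $12$Mbps corresponds to a window of $5000$ packets per $5$-second period.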
We close the algorithm design with a remark concerning SYN flooding attacks.
When a SYN packet's source address matches a flow entry (meaning
the source address has been authenticated), it is treated in the same way as
regular packets from that source. Thus sending SYN packets also consumes attackers'
bandwidth budget. SYN packets with unverified sources are appended to a
queue with bounded bandwidth (\emph{e.g.,}\xspace $5\%$ of $\mathcal{B}$).
Thus spoofed SYN flooding cannot compromise Umbrella\xspace's defense. Regular packets whose
sources do not appear in the flow table are denied.
\vspace*{-0.15in}
\subsection{Parameter Settings}\label{sec:paraSettings}
\noindent{$\bm{\mathcal{D}_p}$:}
The detection period should be long enough for Umbrella\xspace to characterize each flow's behavior during the congestion so as to determine its congestion accountability. In particular, $\mathcal{D}_p$ needs to be long enough to allow legitimate users to adapt to the congestion and maintain a very low packet loss rate; Umbrella\xspace can then be confident that users with high packet loss rates over such a long period of
time are misbehaving. Given that TCP adjusts its window every RTT, $\mathcal{D}_p$ should be much longer than typical Internet RTTs (hundreds of milliseconds based on CAIDA's measurements~\cite{RTT_measure}). If $\mathcal{D}_p$ is too short, legitimate flows may fail to adapt to the congestion quickly enough, resulting in inaccurate and highly fluctuating loss rates for them. Conversely, $\mathcal{D}_p$ should not be too long, to avoid reacting slowly to attacks. Balancing the two factors, $2{\sim}6$ seconds is a reasonable choice for $\mathcal{D}_p$.
\noindent{$\bm{\lambda}$}: The value of $\lambda$
represents the weight assigned to one flow's previous packet losses.
To defend against the on-off shrew attack~\cite{low-rate}, Umbrella\xspace gives a non-trivial weight
to previous packet losses by setting $\lambda=0.5$. Therefore, once a flow misbehaves, it will have a bad reputation for a while. To regain its reputation, the flow has to honor congestion by reducing its sending rate when experiencing packet losses.
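With $\lambda=0.5$, a flow's recorded loss rate only halves in each clean period, which is why an on-off attacker cannot quickly launder its reputation. A quick check of the update rule (our own sketch, with illustrative loss values):

```python
def update_loss(l_r, recent_loss, lam=0.5):
    # Exponentially weighted update of a flow's packet loss rate,
    # as in Algorithm 1 (lam weights the previous losses).
    return lam * l_r + (1 - lam) * recent_loss

# A flow that suffered 40% losses in one "on" burst, then stays silent:
l = update_loss(0.0, 0.4)   # 0.2 right after the burst
l = update_loss(l, 0.0)     # 0.1 one clean period later
l = update_loss(l, 0.0)     # 0.05, still at the L_Th = 5% threshold
```

Even after two entirely clean periods the flow still sits at the $5\%$ threshold, so any renewed burst is punished immediately.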
\noindent{$\bm{L_{Th}}$:} The value of $L_{Th}$ should be larger than normal
packet loss rates to avoid false positives.
According to the previous measurements~\cite{tcpMeasure, internetMeasure},
we set $L_{Th} = 5\%$.
\noindent{$\bm{\mathcal{W}_{fair}}$:} We define the fair share of each flow
as $\mathcal{W}_{fair} {=} \mathcal{B}/\mathcal{N}$, where
$\mathcal{N}$ is the number of flows in the flow table.\footnote{When
Umbrella\xspace is activated from the idle state, $\mathcal{N}$ can be obtained from the network monitoring and logging
tools such as the NetFlow~\cite{netflow}.} Again, the bandwidth
value needs to be converted into the number of packets.
$\mathcal{W}_{fair}$ is updated when new flows are initiated.
As we aggregate all traffic from the same sender as one flow,
$\mathcal{W}_{fair}$ may be updated less frequently
than each flow's $\mathcal{W}_R$.
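As an illustrative data point (the numbers are ours, chosen to match the attack scale in the evaluation): with $\mathcal{B}=10$Gbps, $\mathcal{N}=1$ million flows and $\mathcal{D}_p=5$s, each sender's fair share is $10$kbps, i.e. roughly $4$ full-size packets per detection period:

```python
def fair_window_packets(b_bps, n_flows, period_s=5.0, packet_bytes=1500):
    # Per-sender fair share W_fair = B / N, converted to 1.5 KB packets
    # per detection period (illustrative helper, not Umbrella's code).
    per_sender_bps = b_bps / n_flows
    return int(per_sender_bps * period_s / (packet_bytes * 8))
```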
\subsection{Algorithm Analysis}\label{sec:algo_analysis}
In this section, we prove that the rate limiting algorithm provides both
guaranteed and elastic bandwidth shares for legitimate users:
they are guaranteed to obtain the per-sender fair share and can potentially
obtain more bandwidth shares. We first state the optimal
bandwidth shares attackers can get.
\begin{lemma}\label{lemma:attack}
Given that $\mathcal{N}_L$ legitimate flows and $\mathcal{N}_A$ attack flows share the
congestion resolving layer's bandwidth $\mathcal{B}$, regardless of attackers' strategies,
the aggregated bandwidth that can be obtained by attack flows is \textbf{at most} $\frac{(1+L_{Th})\cdot \mathcal{N}_A \cdot
\mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$.
\end{lemma}
\begin{proof}
Umbrella\xspace initializes each flow's $\mathcal{W}_R$ as per-sender fair share rate $\mathcal{W}_{fair}$.
Thus attackers can obtain $\frac{\mathcal{N}_A \cdot \mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$ initial bandwidth.
The rate limiting algorithm tolerates at most an $L_{Th}$ loss rate before
further reducing a flow's rate limiting window. Thus the optimal strategy
for an attack flow is to strictly comply with Umbrella\xspace's rate limiting by
sending no more than $1+L_{Th}$ times its rate limiting window;
otherwise, its bandwidth share will be further reduced.
In the hypothetical situation where attackers know their exact rate limits
and can control their packet losses remotely, they can obtain at most
$\frac{(1+L_{Th})\cdot \mathcal{N}_A \cdot \mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$.
\end{proof}
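To put the bound in perspective, a quick numeric check (the attack scale here is illustrative, matching the settings used later in the evaluation):

```python
def attacker_bound(b, n_l, n_a, l_th=0.05):
    # Aggregate bandwidth attack flows can obtain per the lemma:
    # at most (1 + L_Th) * N_A * B / (N_L + N_A).
    return (1 + l_th) * n_a * b / (n_l + n_a)

# 100K legitimate vs 500K attack flows sharing a 10 Gbps layer:
bound = attacker_bound(10e9, 100_000, 500_000)   # ~8.75 Gbps
```

Even under the optimal strategy, attackers exceed their plain aggregate fair share ($\approx 8.33$Gbps here) by only the $L_{Th}$ slack.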
Based on Lemma \ref{lemma:attack}, we obtain the following theorem.
\begin{theorem}\label{theorem:legitimate}
Each legitimate flow can obtain \textbf{at least} $\frac{(1+L_{Th})\cdot\mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$
bandwidth share, given that its transport protocol can
fully utilize the allowed bandwidth.
\end{theorem}
\begin{proof}
As each legitimate flow complies with Umbrella\xspace's rate limiting,
it is guaranteed to receive the per-sender fair share.
This fair share, however, is only a lower bound on its bandwidth share. When attackers fail to adopt
their optimal strategy (\emph{e.g.,}\xspace by sending at flat rates),
their rate limiting windows are significantly reduced.
As a result, legitimate flows' windows, returned by the \textsl{RateLimitingWindow} function,
increase since $\mathcal{W}_R^T$ shrinks. Thus legitimate
flows can receive more bandwidth than the per-sender fair share.
\end{proof}
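The redistribution at work in Theorem~\ref{theorem:legitimate} can be illustrated with a toy example (our own sketch): once misbehaving flows' windows are halved, $\mathcal{W}_R^T$ shrinks and the proportional \textsl{RateLimitingWindow} share of every compliant flow grows.

```python
def reallocate(windows, misbehaving, b_packets):
    # Halve the windows of misbehaving flows, then recompute each flow's
    # proportional share W_R / W_R_total * B (illustrative sketch).
    for f in misbehaving:
        windows[f] //= 2
    total = sum(windows.values())
    return {f: w * b_packets // total for f, w in windows.items()}

# Four flows with equal windows; two misbehave. B = 400 packets/period.
shares = reallocate({"a": 100, "b": 100, "c": 100, "d": 100},
                    misbehaving=["a", "b"], b_packets=400)
# the compliant flows "c" and "d" now exceed the 100-packet fair share
```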
To sum up, Lemma~\ref{lemma:attack} and Theorem~\ref{theorem:legitimate} spell
a dilemma for attackers: sending aggressively
rapidly throttles them to an almost zero bandwidth share, whereas
complying with Umbrella\xspace's rate limiting renders their attacks futile.
\section{Implementation and Evaluation}\label{sec:implementation}
In this section, we describe the implementation and evaluation of Umbrella\xspace. We
first demonstrate that Umbrella\xspace is scalable enough to deal with DDoS attacks
involving millions of attack flows while introducing negligible packet processing overhead.
Then we implement all three layers of Umbrella\xspace's defense on our physical testbed to
evaluate its performance. Further, we use detailed simulations to show that
Umbrella\xspace is effective in mitigating large scale DDoS attacks.
\subsection{Overhead and Scalability Analysis}\label{sec:scalability}
The flood throttling layer can be implemented as weighted fair queuing. Thus
it introduces almost zero overhead since Umbrella\xspace does not maintain any extra states.
The overhead of user-specific layer depends on specific policies.
To learn the overhead of Umbrella\xspace's congestion resolving layer
(\emph{e.g.,}\xspace per-packet processing overhead and memory consumption),
we implement Umbrella\xspace's rate limiting logic on a Dell
PowerEdge R320 server shipped with an $8$-core Intel E$5$-$1410$
$2.8$GHz CPU and $24$GB memory.
As illustrated in Fig. \ref{fig:flow_entry}, the total size of a single flow
entry is $32$ bytes ($256$ bits). Thus, even when Umbrella\xspace maintains a
flow table with $100$ million flows, the memory consumption is just a few gigabytes,
which can easily be supported by commodity servers.
We show both the memory consumption and per-packet processing overhead
for three table sizes ($1$, $10$ and $100$ million entries)
in Figure \ref{fig:scalability}.
\begin{figure}[t]
\centering
\mbox{
\subfigure{\includegraphics[scale=0.45]{scalability.pdf}}
}
\caption{Umbrella\xspace's memory consumption and per-packet processing overhead.}
\label{fig:scalability}
\end{figure}
For the largest table size, the memory consumption is around $6$GB,\footnote{Note that
the memory usage for $100$ million flow entries exceeds the raw size of the
entry data since we adopt the \texttt{map} data structure to implement the flow table, resulting
in additional memory consumption.} indicating that
memory will not become the bottleneck of Umbrella\xspace's implementation.
Further, the per-packet processing overhead remains almost constant as the number of
flow entries increases from $1$ million to $100$ million. Thus Umbrella\xspace
can effectively scale up to deal with DDoS attacks involving millions of attack flows.
Moreover, the ${\sim}0.06\mu s$ per-packet processing overhead
is negligible even for $10$Gbps Ethernet, which allows around $1.2\mu s$ of per-packet processing time.
Thus the victim can still enjoy high speed Ethernet after deploying Umbrella\xspace.
Note that the implementation of Umbrella\xspace's rate limiting algorithm may be further optimized for the
system hardware to reduce the overhead.
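Both scalability claims reduce to simple arithmetic, sketched below as a back-of-envelope check (ours): the raw entry size follows from the field widths in Fig.~\ref{fig:flow_entry}, and the $1.2\mu s$ budget is the transmission time of a full-size packet on $10$Gbps Ethernet.

```python
# Raw flow-entry size from the field widths in Fig. "flow_entry"
FIELD_BITS = [64, 32, 32, 32, 32, 64]          # f, T_A, W_R, P_R, P_D, L_R
entry_bytes = sum(FIELD_BITS) // 8             # 32 bytes per raw entry
raw_table_gb = entry_bytes * 100_000_000 / 1e9 # 3.2 GB of raw data for 100M flows
                                               # (measured usage is higher: map overhead)

def per_packet_budget_us(link_bps=10e9, packet_bytes=1500):
    # Transmission time of one full-size packet: the per-packet
    # processing budget of a line-rate 10 GbE middlebox.
    return packet_bytes * 8 / link_bps * 1e6   # ~1.2 us, vs ~0.06 us measured
```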
\subsection{Testbed Experiments}\label{sec:testbed}
We implement a prototype of Umbrella\xspace on our testbed consisting of $9$ servers, illustrated in Fig. \ref{fig:testbed_arch}.
Each server is the same Dell PowerEdge R$320$ used to measure Umbrella\xspace's overhead (\S\ref{sec:scalability}).
Each server runs Debian $6.0$-$64$bit with the Linux $2.6.38.3$ kernel and is equipped with a
Broadcom BCM$5719$ NetXtreme Gigabit Ethernet NIC.
We organize the servers into $7$ senders (either attackers or legitimate users),
one software router implementing Umbrella\xspace's three-layered defense, and one victim, as illustrated in Fig. \ref{fig:testbed_arch:b}.
Thus the interdomain bandwidth in the testbed is $1$Gbps.
The flood throttling layer is implemented as a weighted fair queuing module at the
output port of the software router. The module serves TCP flows and UDP flows
in two separate queues with different weights. The normalized weight of the
TCP queue is $0.9$ whereas that of the UDP queue is $0.1$ (again, the
victim can override this setting).
Each queue has its own dedicated buffer since UDP flows would otherwise consume
the entire buffer when competing with TCP flows, resulting in almost zero throughput for TCP traffic.
With the protection of the flood throttling layer, TCP flows with
sufficient traffic demand can obtain $900$Mbps share of the interdomain link,
regardless of how much UDP traffic is directed at the victim.
The congestion resolving layer is composed of a set of rate limiters.
Each rate limiter is implemented via the Hierarchical Token Bucket (HTB) of
Linux's Traffic Control~\cite{linux_tc}. The bandwidth of each rate limiter
is set to the corresponding flow's $\mathcal{W}_R$, determined by Umbrella\xspace's rate limiting algorithm,
ensuring that no flow can send faster than its $\mathcal{W}_R$. We set $\mathcal{D}_p{=}5$s
in the implementation.
\begin{figure}[t]
\centering
\mbox{
\subfigure[\label{fig:testbed_arch:a}Servers.]{\includegraphics[scale=0.3]{servers2.pdf}}\quad\quad
\subfigure[\label{fig:testbed_arch:b}Testbed topology.]{\includegraphics[scale=0.28]{testbed_arch.pdf}}
}
\caption{The prototype of Umbrella\xspace on our testbed.}
\label{fig:testbed_arch}
\end{figure}
\begin{figure*}[t]
\centering
\mbox{
\subfigure[\label{fig:testbed:a}Layer one defense.]{\includegraphics[scale=0.35]{set1.pdf}}
\subfigure[\label{fig:testbed:b}Layer two defense.]{\includegraphics[scale=0.35]{set2.pdf}}
\subfigure[\label{fig:testbed:c}Layer two and three defense.]{\includegraphics[scale=0.35]{set4.pdf}}
}
\caption{Testbed experiments.}
\label{fig:testbed}
\end{figure*}
\begin{figure*}[t]
\centering
\mbox{
\subfigure[\label{fig:simulation:a}On-off shrew attacks.]{\includegraphics[scale=0.35]{on_off.pdf}}
\subfigure[\label{fig:simulation:b}Varying the volume of attack traffic.]{\includegraphics[scale=0.35]{on_off_aggressive.pdf}}
\subfigure[\label{fig:simulation:c}Attackers' optimal strategy.]{\includegraphics[scale=0.35]{optimal.pdf}}
}
\caption{Mitigating large scale DDoS attacks.}
\label{fig:simulation}
\end{figure*}
In our prototype, we implement one representative traffic policing rule for the user-specific layer:
the victim reserves bandwidth for premium clients so that they are not affected by
DDoS attacks. This bandwidth guarantee is achieved by assigning premium clients
one dedicated queue in the weighted fair queuing module.
We do not cap common clients' rates to protect premium clients' shares,
because that would waste any unused guaranteed bandwidth; instead,
since weighted fair queuing is work-conserving, common clients can grab
leftover bandwidth from premium clients.
Thus the final queuing module contains three queues.
We perform three experiments on our testbed to evaluate Umbrella\xspace's defense, detailed as follows.
\noindent\textbf{Layer-one defense:} In this experiment, $6$ senders,
each sending $1$Gbps UDP traffic towards the victim, emulate the amplification-based
DDoS attacks in which the total volume of attack traffic is
$6\times$ the interdomain bandwidth. The $7$th sender
sends TCP traffic to represent legitimate clients. As real-life interdomain links often have
overprovisioning to absorb traffic bursts,
we set the TCP flows' demand to $700$Mbps ($70\%$ of the total interdomain bandwidth).
We present our experiment results in the form of sequential events, illustrated in Fig.~\ref{fig:testbed:a}.
At $t=0$s, the victim is hammered by DDoS attacks, causing complete
denial of service to legitimate clients. Umbrella\xspace's layer-one defense is initiated
at $t=4$s to provide (almost) immediate DDoS prevention. Legitimate clients' bandwidth
shares grow rapidly to accommodate their traffic demand. Due to the work-conservation of
weighted fair queuing, attack traffic consumes the spare bandwidth of the interdomain link.
\noindent\textbf{Layer-two defense:} In this experiment,
although all senders adopt TCP, $6$ of them
deviate from TCP's congestion control algorithm by continuously
injecting packets in the face of congestive losses. As illustrated in
Fig.~\ref{fig:testbed:b}, malicious senders successfully exhaust the interdomain
bandwidth without the protection of Umbrella\xspace. At $t=4$s, Umbrella\xspace starts to police
traffic based on its rate limiting algorithm. As attackers fail to
comply with Umbrella\xspace's rate limiting, their bandwidth shares are significantly reduced,
resulting in almost zero share in the steady state. However, legitimate clients' throughput
gradually converges to their traffic demand.
\noindent\textbf{Layer-three defense:}
In this experiment, one sender is upgraded to represent premium clients with
$200$Mbps bandwidth guarantee. Another sender stands for common clients with
$500$Mbps traffic demand. The rest senders emulate attackers transmitting
malicious TCP flows. To satisfy the guarantee, the victim configures the normalized queue weight
as $0.2$, $0.7$ and $0.1$ for premium clients, common clients and UDP traffic, respectively.
Fig.~\ref{fig:testbed:c} demonstrates that premium clients' bandwidth share is
guaranteed throughout the experiment, saving them from the turbulence caused by DDoS attacks.
Further, common clients are protected after Umbrella\xspace enables its rate limiting, which effectively
thwarts DDoS attacks.
\subsection{Mitigating Large Scale DDoS Attacks}\label{sec:evaluation}
In this section, we evaluate Umbrella\xspace's defense against large scale DDoS attacks.
In the evaluation, we develop a flow-level simulator rather than relying entirely
on existing packet-level emulators or simulators (\emph{e.g.,}\xspace Mininet~\cite{mininet}, ns-$3$~\cite{ns3})
because they take prohibitively long to emulate large scale DDoS attacks.
Specifically, assume that a packet-level simulator can
process one million packets per second and that one million
attack flows, each sending at $5$Mbps, attack a $10$Gbps
link. Even if we set the packet size to the maximum allowed
size of $1.5$KB, it will take the simulator around $140$ hours to simulate
just one second of the attack. By abstracting away the detailed
per-packet processing and focusing on per-flow
behaviors, our flow-level simulator is still able to accurately
evaluate Umbrella\xspace, which in fact relies on flow-level state to police traffic.
We also perform a moderate scale simulation
on ns-$3$ to benchmark our flow-level simulator.
The network topology adopted in the simulations is similar to that of the testbed experiments,
except that the number of senders can exceed $1$ million and we scale the
interdomain bandwidth up to $10$Gbps.
Unless otherwise stated, the following experiments
are performed on our flow-level simulator.
We design experiments for different strategies attackers may take: \textsf{(i)}\xspace
they launch on-off shrew attacks~\cite{low-rate} to evade detection, \textsf{(ii)}\xspace vary
the volume of attack traffic and \textsf{(iii)}\xspace dynamically adjust their rates based on
packet losses. In the on-off attack, attackers coordinate with each other to send high traffic bursts during
on periods and stay inactive during off periods. We use the ratio of the off-period's
length to the on-period's length (denoted by $\mathcal{R}^{off}_{on}$) to represent attackers' strategy in the on-off attack.
In the second strategic attack, we define the \emph{aggressiveness factor} $\mathcal{A}_f$ as the
ratio of the total volume of attack traffic to the interdomain bandwidth. Attackers may vary
the $\mathcal{A}_f$ during attacks.
In the first two experiments, attackers ignore packet losses and
keep injecting packets in the face of severe congestive losses.
These two experiments demonstrate that when attackers fail to adopt the optimal strategy (the third strategy discussed below), Umbrella\xspace accurately throttles attack flows so that legitimate senders receive more bandwidth than their guaranteed portions. In the third strategy (the optimal one), attackers rely on their transport protocols to probe packet losses and adjust their rates to honor the congestion. In this case, we demonstrate that Umbrella\xspace guarantees per-sender fairness for legitimate senders.
In the first strategic attack, we set
the length of the on-period equal to $\mathcal{D}_p$ ($5$s)
and vary the ratio $\mathcal{R}^{off}_{on}$ from $0$ to $18$.
Meanwhile we set the number of legitimate clients to $\mathcal{N}_{L}{=}100K$ and
vary the number of attackers from $100K$ to $1$ million. Further, we set
$\mathcal{A}_f{=}2$ but vary the rate of each attacker based on a
Gaussian distribution.\footnote{Assuming the aggregated rate of the attackers is $R$,
the Gaussian distribution's mean is $R/\mathcal{N}_A$ and its standard deviation is $1$.}
The experimental results, illustrated in Fig. \ref{fig:simulation:a}, show that
legitimate users obtain more bandwidth than the per-sender
fair share regardless of $\mathcal{R}^{off}_{on}$'s value and the attack scale.
This is because Umbrella\xspace's rate limiting algorithm incorporates flows' previous
packet losses when making rate limiting decisions. Consequently,
even by staying completely inactive during off-periods,
attackers fail to restore their reputation through
the strategic on-off attack. Further, Umbrella\xspace effectively
distinguishes misbehaving flows from legitimate ones
no matter how many misbehaving flows are involved. Ironically,
larger attack scales result in higher gains for legitimate users, in the sense
that their bandwidth shares are boosted to higher levels relative to the
per-sender fair share. In all scenarios, attackers' bandwidth shares are limited to almost zero.
In the second strategic attack, we vary $\mathcal{A}_f$ from $0.9$ to $4$.
Although setting $\mathcal{A}_f{<}1$ cannot completely disconnect
the victim from its ISP, attackers can still throttle legitimate users
to a tiny fraction of the total bandwidth by aggressively injecting packets
(imagine an analogous situation where a $900$Mbps UDP flow
competes with TCP flows on a $1$Gbps link).
Attackers experience low packet loss rates because legitimate users
cut their rates dramatically. To defend against such ``moderate"
attacks, the victim can configure Umbrella\xspace to start traffic policing when the
link utilization exceeds a pre-defined threshold (\emph{e.g.,}\xspace 90\%).
We fix $\mathcal{N}_L{=}100K$ and $\mathcal{N}_A{=}500K$ in this experiment.
The results (Fig. \ref{fig:simulation:b}) show that legitimate users get
at least the per-sender fair share in all settings.
Note that in moderate attacks, attackers can prevent their bandwidth shares from being
further reduced by extending the off-period, resulting in per-sender fairness.
However, increasing $\mathcal{A}_f$ actually puts attackers in a worse situation:
their flows are blocked.
The previous two experiments show that when attackers fail to comply with
Umbrella\xspace's rate limiting, their bandwidth shares are significantly reduced,
and legitimate users may therefore obtain more bandwidth than the per-sender fair share.
In the third strategy, attackers actively adjust their rates based on
packet losses so as to maintain low packet loss rates. Besides our flow-level
simulator, we also use ns-$3$~\cite{ns3} in this setting.
To circumvent ns-$3$'s scalability problem, we adopt an approach similar to that used in NetFence~\cite{netfence}.
Specifically, we fix the number of nodes ($500$ attackers and $50$ legitimate nodes in our experiments)
and scale down the link capacity to simulate large scale attacks. By varying the link bandwidth
from $5$Mbps to $50$Mbps, we are able to simulate attack scenarios in which
$100$K to $1$ million attackers try to flood the $10$Gbps interdomain link.
We use ns-$3.21$ and add Umbrella\xspace's rate limiting logic to the \textsl{PointToPointNetDevice} module, which
performs flow analysis upon receiving packets. To execute their optimal strategy,
attackers have to rely on TCP-like protocols to probe the network condition and set their rates accordingly.
We test all TCP congestion control algorithms supported in ns-$3$: Tahoe, Reno and NewReno.
Legitimate clients adopt NewReno.
The results (Fig. \ref{fig:simulation:c}) show that complying
with Umbrella\xspace's rate limiting grants attackers only the per-sender fair share (the results for the different
TCP variants in ns-$3$ are very close, so we plot those for NewReno).
As stated in Lemma~\ref{lemma:attack}, the hypothetical strategy in which attackers
know their exact allowed rates and can control remote packet losses yields
an unreachable upper bound on their bandwidth shares.
Further, our flow-level simulator and the packet-level ns-$3$ simulator
produce (almost) identical results.
\section{Related Work}\label{related}
In this section, we discuss related work that has inspired the design of Umbrella\xspace.
Generally speaking, we categorize the previous DDoS defense approaches
into two major schools (\emph{i.e.,}\xspace filtering-based and capability-based approaches), whereas
there are other approaches built on different defense primitives.
Filtering-based systems (\emph{e.g.,}\xspace IP Traceback~\cite{practicalIPTrace, advancedIPTrace}, AITF~\cite{AITF},
Pushback \cite{pushback, implementPushback}, StopIt~\cite{StopIt})
stop DDoS attacks by filtering attack flows.
Thus they need to distinguish attack flows from
legitimate ones. For instance, IP Traceback uses a packet marking
algorithm to construct the path that carries attack flows so as to block
them. AITF aggregates all traffic traversing the same series of
ASs as one \emph{Flow} and blocks such flows if the victim suspects attacks.
Pushback informs upstream routers to block certain type of traffic.
StopIt assumes the victim can identify the attack flows.
However, filtering-based systems often require remote ASs to block attack traffic on the victim's behalf,
which is difficult to enforce in the Internet.
Further, these systems may falsely block legitimate flows
since the method used to distinguish attack flows could have a high false positive rate.
The capability-based systems, such as SIFF~\cite{siff} and TVA~\cite{TVA}, try to suppress attack traffic by only accepting packets carrying valid capabilities. The original design is vulnerable to the DoC attack~\cite{doc}, which can be mitigated by the Portcullis protocol~\cite{portcullis}.
NetFence \cite{netfence} is proposed to achieve network-wide per-sender fairness
based on capabilities. However, these approaches assume universal capability deployment.
CRAFT \cite{craft} and Mirage~\cite{mirage} are proposed towards real-world
deployment. CRAFT emulates TCP states for all traversing flows so that no one
can obtain a greater share than what TCP allows.
However, CRAFT requires upgrades of both the Internet core and end-hosts.
Mirage~\cite{mirage}, a puzzle-based solution, needs to be
incorporated into IPv6 deployment. The state-of-the-art in this category, MiddlePolice~\cite{MiddlePolice}, is readily deployable in the current Internet. However, it still relies on cloud infrastructure to police traffic, which may be privacy-invasive for some organizations.
Other DDoS defense solutions, besides the above two categories,
include SpeakUp~\cite{speakup}, Phalanx \cite{phalanx}, SOS
\cite{sos}, and a few future Internet architecture proposals like XIA~\cite{xia} and SCION~\cite{scion}.
SpeakUp allows legitimate senders to increase their rates to compete with attackers.
Such an approach is effective when the bottleneck is at the application layer, so that
legitimate users can get more requests processed, given that all their requests can be delivered.
In the case where the network is the bottleneck, SpeakUp may potentially congest
the network. Phalanx and SOS propose to use large-scale overlay networks to defend against DDoS attacks.
XIA and SCION focus on building clean-slate Internet architectures so as to enhance
Internet security, \emph{e.g.,}\xspace by enforcing accountability~\cite{aip}.
In contrast to these prior works, Umbrella\xspace is motivated by a real-world threat and achieves two critical features (\emph{i.e.,}\xspace deployability and privacy preservation) towards this end.
\section{Conclusion and Future Work}\label{conclusion}
This paper presents the design, implementation and evaluation of Umbrella\xspace, a new DDoS defense mechanism enabling ISPs to offer readily deployable and privacy-preserving DDoS prevention services. To provide effective DDoS prevention, Umbrella\xspace merely requires independent deployment at the victim's ISP and no Internet core or end-host upgrades, making Umbrella\xspace immediately deployable. Further, Umbrella\xspace does not require the ISP to terminate the victim's application connections, allowing the ISP to operate at the network layer as usual. Umbrella\xspace's multi-layered defense enables it to stop various DDoS attacks and provides both guaranteed and elastic bandwidth shares for legitimate clients. Based on the prototype implementation, we demonstrate that Umbrella\xspace scales to large DDoS attacks involving millions of attackers and introduces negligible packet processing overhead. Finally, our physical testbed experiments and large scale simulations show that Umbrella\xspace is effective in mitigating various strategic DDoS attacks.
We envision two major follow-up directions for this work. First, the user-specific layer in Umbrella\xspace enables a potential DDoS victim to enforce self-desired traffic control policies during DDoS mitigation. One challenge, however, is how to guide the victim to develop reasonable policies that are most suitable for its business logic. Proposing valid policies may require a profound understanding of the victim's network traffic, which typically depends on comprehensive traffic monitoring and analysis, a capability the potential victim may lack. Thus, designing and implementing various machine-learning-based traffic discovery tools is part of our future work. The second potential research direction is to enable smart payment between ISPs and potential victims. The high-level goal is to ensure that ISPs and victims can unambiguously agree on certain filtering services, so that an ISP is paid properly for each attack packet it filters and a potential victim can reclaim its payment if an ISP fails to stop attacks. We propose to design a smart-contract based system in this regard, relying on the ``non-stoppable'' nature of smart contracts. Our initial proposal is under review.
\balance
\small
\bibliographystyle{ieeetr}
\section{Introduction} \label{sec:introduction}
Distributed denial of service (DDoS) attacks have long been a serious threat to the availability of the Internet. Over the past few decades, both industry and academia have made considerable efforts to address this problem. Academia has proposed various approaches, ranging from filtering-based approaches~\cite{practicalIPTrace, advancedIPTrace,AITF,pushback,implementPushback,StopIt}, capability-based approaches~\cite{siff, TVA, netfence, MiddlePolice}, overlay-based systems~\cite{phalanx,sos,mayday} and systems based on future Internet architectures~\cite{scion,aip,xia}, to other variants~\cite{speakup,mirage, CDN_on_Demand}. Meanwhile, many large DDoS-protection-as-a-service providers (\emph{e.g.,}\xspace Akamai, CloudFlare), some of which are unicorns, have played an important role in DDoS prevention. These providers massively over-provision data centers for peak attack traffic loads and then share this capacity across many customers as needed. When under attack, victims use the Domain Name System (DNS) or Border Gateway Protocol (BGP) to redirect traffic to the provider rather than their own networks. The DDoS-protection-as-a-service provider applies its proprietary techniques to scrub traffic, separating malicious from benign, and then re-injects only the benign traffic back into the network to be carried to the victim.
Despite these efforts, recent industrial interviews with over 100 security engineers from over ten industry segments that are vulnerable to DDoS attacks indicate that DDoS attacks have not been fully addressed~\cite{middlepolice-ton}. First, since most academic proposals incur significant deployment overhead (\emph{e.g.,}\xspace requiring software/hardware upgrades from a large number of Autonomous Systems (ASes) that are unrelated to the DDoS victim, or changing the client network stack by inserting new packet headers), few of them have ever been deployed in the Internet. Second, existing security-service providers are not a cure for DDoS attacks across all customer segments. In particular, a prerequisite of using their security services is that a destination site must redirect its network traffic to these service providers. Cloudflare, for instance, terminates all user Secure Sockets Layer (SSL) connections to the destination at Cloudflare's network edge, and then sends user requests back (after applying its proprietary filtering) to the destination server using new connections. Although this operating model is acceptable for small websites (\emph{e.g.,}\xspace personal blogs), it is privacy-invasive for large organizations like governments, hosting companies and medical foundations.
As a result, these organizations have no choice but to rely on their Internet Service Providers (ISPs) to block attack traffic. Realizing this issue, in this paper we propose Umbrella\xspace, a new DDoS defense mechanism that enables ISPs to offer readily deployable and privacy-preserving DDoS prevention services to their customers. The design of Umbrella\xspace draws lessons from real-world DDoS attacks that intentionally disconnect the victim from the public Internet by overwhelming the victim's inter-connecting links with its ISPs. Thus, Umbrella\xspace protects the victim by allowing its ISPs to throttle attack traffic, preventing any undesired traffic from reaching the victim. Compared with previous approaches requiring Internet-wide AS cooperation, Umbrella\xspace simply needs \emph{independent deployment} at the victim's direct ISPs to provide immediate DDoS defense. Further, unlike existing security-service providers, an ISP does not need to terminate the victim's connections. Instead, the ISP still operates at the \emph{network layer} as usual, completely preserving the victim's application-layer privacy. Third, Umbrella\xspace is \emph{lightweight}, requiring no software or hardware upgrades at either the Internet core or clients. Finally, Umbrella\xspace is \emph{performance friendly}: it is overhead-free in normal scenarios, staying completely idle, and imposes negligible packet processing overhead during attack mitigation.
\begin{figure*}[t]
\centering
\mbox{
\subfigure[\label{fig:architecture:a}Filtering-based approaches.]{\includegraphics[scale=0.3]{deploy_filter.pdf}}
\subfigure[\label{fig:architecture:b}Capability-based approaches.]{\includegraphics[scale=0.3]{deploy_cap.pdf}}
\subfigure[\label{fig:architecture:c}Umbrella\xspace.]{\includegraphics[scale=0.3]{deploy_umbrella.pdf}}
\subfigure[\label{fig:architecture:d}Multi-layered defense.]{\includegraphics[scale=0.3]{umbrella_layer.pdf}}
}
\caption{Part (a) and part (b) show that filtering-based and capability-based approaches are
difficult to deploy in the Internet since they require Internet-wide AS cooperation and facility upgrades. Part (c)
shows Umbrella\xspace is immediately deployable at the victim's ISP. Part (d) details Umbrella\xspace's multi-layered defense.}
\label{fig:architecture}
\end{figure*}
In its design, Umbrella\xspace develops a novel multi-layered defense architecture. In the \emph{flood throttling} layer, Umbrella\xspace defends against amplification-based attacks that exploit various network protocols (\emph{e.g.,}\xspace Simple Service Discovery Protocol (SSDP), Network Time Protocol (NTP))~\cite{arbor}. Although such attacks may involve extremely high traffic volumes (\emph{e.g.,}\xspace hundreds of gigabits per second), they can be effectively detected via static filters and therefore stopped. In the \emph{congestion resolving} layer, Umbrella\xspace defends against more sophisticated attacks in which adversaries may adopt various strategies. Umbrella\xspace introduces a key concept, \emph{congestion accountability}~\cite{FlowPolice}, to selectively punish users who keep injecting packets in the face of severe congestive losses. The congestion resolving layer provides both \emph{guaranteed} and \emph{elastic} bandwidth shares for legitimate flows: \textsf{(i)}\xspace regardless of attackers' strategies, legitimate users are guaranteed to receive their fair share of the victim's bandwidth; \textsf{(ii)}\xspace when attackers fail to execute their optimal strategy, legitimate clients are able to enjoy larger bandwidth shares. The last layer, the \emph{user-specific} layer, allows the victim to enforce self-interested traffic policing rules that are most suitable for its business logic. For instance, if the victim never receives certain types of traffic, it can instruct Umbrella\xspace to completely block such traffic when attacks are detected. Similarly, the victim can instruct Umbrella\xspace to reserve bandwidth for premium clients so that they will not be affected by DDoS attacks.
In summary, the major contributions of this paper are the design, implementation and evaluation of Umbrella\xspace, a new DDoS defense approach that enables ISPs to offer readily deployable and privacy-preserving DDoS prevention services to their downstream customers. The novelty of Umbrella\xspace lies in two dimensions. First, unlike the vast majority of academic DDoS prevention proposals, which require extensive changes to the Internet core and client network stacks, Umbrella\xspace only requires lightweight upgrades from business-related entities (\emph{i.e.,}\xspace the potential DDoS victim itself and its direct ISPs), yielding instant deployability in the current Internet architecture. Second, compared with existing deployable industrial DDoS mitigation services, Umbrella\xspace, through its novel multi-layered defense architecture, offers privacy-preserving and complete DDoS prevention that can deal with a wide spectrum of attacks, while also offering victim-customizable defense. We implement Umbrella\xspace on Linux to study its scalability and overhead. The results show that a commodity server can effectively handle $100$ million attackers while introducing merely ${\sim}0.06\mu s$ packet processing overhead. Finally, we perform detailed evaluations on our physical testbed, our flow-level simulator and the ns$3$ packet-level simulator~\cite{ns3} to demonstrate the effectiveness of Umbrella\xspace in mitigating DDoS attacks.
\section{Problem Space and Goals}\label{sec:design_rational}
In this section, we discuss Umbrella\xspace's problem space and its design goals. Completely preventing DDoS attacks is an extremely broad problem; the problem space articulates Umbrella\xspace's position within this scope. Further, the design goals of Umbrella\xspace are derived from the industrial interviews in \cite{middlepolice-ton}, so that Umbrella\xspace indeed offers desirable DDoS prevention to large and privacy-sensitive potential victims such as government and medical infrastructures. We do not, however, claim that these goals are universally applicable to all types of potential victims (for instance, a web blogger may simply choose CloudFlare to keep her website online).
\subsection{Problem Space}\label{sec:problem_space}
\noindent\textbf{Network-Layer DDoS Mitigation.}
We position Umbrella\xspace as a network-layer DDoS defense approach that stops undesirable traffic from reaching and consuming resources of the victim's network. Specifically, Umbrella\xspace is designed to prevent attackers from exhausting the bandwidth of the inter-network link connecting the victim's network to its ISP, so as to keep the victim ``online" even in the face of DDoS attacks. Note that Umbrella\xspace should be viewed as complementary to solutions addressing DDoS attacks at other layers (\emph{e.g.,}\xspace layer $7$ HTTP attacks). Only concerted efforts by these solutions can potentially provide complete defense against all types of DDoS attacks.
\noindent\textbf{Adversary Model.}
We consider strong adversaries that can compromise both end-hosts and routers. They are able to launch strategic attacks (\emph{e.g.,}\xspace the on-off shrew attack~\cite{low-rate}), launch flash attacks with numerous short flows, adopt maliciously tampered transport protocols (\emph{e.g.,}\xspace poisoned TCP protocols that do not properly adjust rates based on network congestion), leverage the Tor network to hide their identities, spoof addresses and recruit vast numbers of bots to launch large scale DDoS attacks.
\noindent\textbf{Assumptions.}
Umbrella\xspace maintains per-sender state in the congestion resolving layer, and consequently relies on the correctness of source addresses. Such correctness can be assured by broader adoption of Ingress Filtering~\cite{ingressFilter, BCP38} or source authentication schemes~\cite{passport, OPT}. Until complete spoof elimination is achieved\footnote{In fact, the Spoofer Project~\cite{spoofer_project} extrapolates that only ${\sim}13.6\%$ of addresses are spoofable, indicating tremendous progress.}, Umbrella\xspace requires the victim's additional participation to minimize the chance of source spoofing. In particular, the victim needs to provide a list of authenticated (\emph{i.e.,}\xspace TCP handshakes with these sources are successfully established) or preferred source IP addresses (based on the victim's routine traffic analysis performed in normal scenarios) so that Umbrella\xspace will only maintain per-sender state for these addresses during attack mitigation.
\subsection{Design Goals}
\noindent\textbf{Readily Deployable.} One major design goal of Umbrella\xspace is to be immediately deployable in the current Internet architecture. To this end, the functionality of Umbrella\xspace relies only on independent deployment at the victim's ISP, without further deployment requirements at remote ASes that are unrelated to the victim. As illustrated in Fig. \ref{fig:architecture:c}, Umbrella\xspace can be deployed upstream of the link connecting the victim's network and its ISP. In the rest of the paper, we refer to this link as the interdomain link and its bandwidth as the interdomain bandwidth. Note that Umbrella\xspace deployed at the victim's ISP cannot stop DDoS attacks trying to disconnect the victim's ISP from its upstream ISPs. However, the victim's ISP, now a victim itself, has the motivation to protect itself by purchasing Umbrella\xspace's protection from its upstream ISPs. Recursively, DDoS attacks occurring at different levels of the Internet hierarchy can be resolved. The key advantage of Umbrella\xspace is that it never requires cooperation among a wide range of (unrelated) ASes; rather, \emph{independent deployment} is sufficient and effective.
\noindent\textbf{Privacy-Preserving and Customizable DDoS Prevention:} Another primary design goal of Umbrella\xspace is to offer privacy-preserving and customizable DDoS prevention. On one hand, Umbrella\xspace requires no application connection termination at ISPs, allowing them to operate at the network layer as usual, which completely preserves the application-layer privacy of their customers. On the other hand, Umbrella\xspace's multi-layered defense enables ISPs to offer customizable DDoS prevention driven by customer policies.
\noindent\textbf{Lightweight and Performance Friendly:} Umbrella\xspace's deployment is very lightweight: it can be implemented as a software router at the interdomain link, maintaining at most per-source state. Our prototype implementation demonstrates that a commodity server can effectively scale to deal with millions of states. Further, Umbrella\xspace is completely idle and transparent to applications in normal scenarios, introducing zero overhead. During DDoS attack mitigation, Umbrella\xspace's traffic policing introduces negligible packet processing overhead compared with previous approaches requiring complicated and expensive operations, such as adding cryptographic capabilities and extra packet headers~\cite{netfence}.
Given these goals, we position Umbrella\xspace as a practical DDoS prevention service, offered by ISPs, that is desired by large and privacy-sensitive potential DDoS victims.
\section{Design Overview}\label{sec:system}
In its design, Umbrella\xspace develops a three-layered defense architecture to stop undesirable traffic. The user-specific layer, enforcing policies defined by the victim, has priority over the other two layers, which operate in parallel. Umbrella\xspace is only active when it observes the signatures of volumetric DDoS attacks against the interdomain link (\emph{e.g.,}\xspace the link experiences enduring congestion causing severe packet losses). Umbrella\xspace stops traffic policing and becomes idle when the link restores its normal state. As part of the user-specific layer, the victim is free to define specific rules that determine when the traffic policing should be initiated or terminated.
\subsection{Flood Throttling Layer}\label{sec:flood_layer}
The flood throttling layer is designed to stop amplification-based attacks, in which attackers send numerous requests, with the source address spoofed as the victim's address, to public servers serving certain Internet protocols (\emph{e.g.,}\xspace Network Time Protocol, Domain Name System). As a result, the victim receives an extremely high volume of responses, resulting in interdomain bandwidth exhaustion and disconnection from its ISP. Although the attack volume can be as high as around 600~Gbps~\cite{arbor}, these attacks are easy to detect. According to the analysis in \cite{arbor}, only seven network service protocols are exploited to launch amplification-based DDoS attacks and over 87\% of them rely on UDP with specific port numbers. Thus, by installing a set of static filters based on these network service protocols, Umbrella\xspace can effectively throttle these large-yet-easy-to-catch DDoS attacks.
In the case where the victim does receive traffic for certain network service protocols that may be exploited to launch amplification attacks, Umbrella\xspace can leverage the Weighted Fair Queuing technique to minimize the potential effect of these attacks. For instance, if, based on traffic analysis in normal scenarios, $10\%$ of the victim's traffic traversing the interdomain link is NTP (on UDP port 123), then such traffic should be served in a queue whose normalized weight is configured as 0.1. The weighted fair queuing ensures that the victim always has sufficient bandwidth to process other flows (which are served in separate queues), regardless of how much NTP traffic is thrown at the victim.
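As a minimal sketch, the per-queue weights could be derived from per-protocol traffic volumes measured in normal scenarios; the function and its interface below are illustrative, not part of Umbrella\xspace:

```python
def queue_weights(traffic_volumes):
    """Derive normalized weighted-fair-queuing weights from per-protocol
    traffic volumes observed in normal scenarios (a sketch; the mapping
    keys and units are hypothetical)."""
    total = sum(traffic_volumes.values())
    # Each protocol's queue weight is its share of the total traffic,
    # so the weights sum to 1 by construction.
    return {proto: volume / total for proto, volume in traffic_volumes.items()}
```

For example, if NTP accounts for 10 units of traffic out of 100, its queue weight comes out as 0.1, matching the configuration described above.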
\subsection{Congestion Resolving Layer}\label{sec:congestion_layer}
The congestion resolving layer is designed to stop subtle and sophisticated DDoS attacks that rely on numerous seemingly legitimate TCP flows. The crucial part of the defense is to enforce \emph{congestion accountability}~\cite{FlowPolice}, so as to punish attackers who keep injecting vast amounts of traffic in the face of congestive losses. Specifically, in volumetric DDoS attacks, overloaded routers drop packets from all users, regardless of which users cause the \emph{enduring} congestion. In other words, congestion accountability is not considered while dropping packets. Consequently, legitimate users that run congestion-friendly protocols (\emph{e.g.,}\xspace TCP) are falsely penalized during the congestion, since it is actually caused by attackers. By enforcing congestion accountability, Umbrella\xspace is able to selectively punish misbehaved flows and therefore stop attack traffic.
Umbrella\xspace analyzes each user's congestion accountability from the perspective of the goal of network usage. Legitimate users aim to deliver or receive data via the network. Thus, when encountering congestion, a legitimate TCP sender tries to \emph{relieve} the congestion by reducing its rate, because its receiver cannot decode the data if some packets are lost. Sending more packets therefore makes no progress on finishing the data transfer, while causing more severe congestion. On the contrary, attackers focus on exhausting network resources and care little about delivering data. As a result, they consistently generate traffic to \emph{contribute to} the congestion, regardless of how many packets have been dropped. Therefore, users who overlook packet losses and continuously inject packets are accountable for the enduring congestion that occurs during DDoS attacks.
To resolve the congestion, Umbrella\xspace keeps a \emph{rate limiting window} for each user to prevent any user from sending faster than its rate limiting window. The size of the rate limiting window is determined by the rate limiting algorithm (\S \ref{sec:rateLimit}), taking as input the users' sending rates and packet losses. All information needed to make rate limiting decisions is recorded in Umbrella\xspace's flow table (\S \ref{sec:flowTable}). The design of the rate limiting algorithm ensures that \textsf{(i)}\xspace the more aggressively attackers behave, the less bandwidth they will obtain; and \textsf{(ii)}\xspace each legitimate client is guaranteed to receive the per-sender fair share of the congestion resolving layer's bandwidth, regardless of attackers' strategies (note that part of the interdomain bandwidth may be used by other layers). Further, legitimate users may obtain more bandwidth than the per-sender fair share when attackers fail to execute their optimal strategy.
\subsection{The User-specific Layer}\label{sec:user_layer}
The goal of the user-specific defense layer is to give the victim the flexibility to enforce self-interested traffic control policies that are most suitable for its business logic, including adopting fairness metrics different from Umbrella\xspace's default one (per-sender fairness) and offering proactive DDoS defense for premium clients so that they are never disconnected from the victim.
Allowing user-specific policies differentiates Umbrella\xspace from previous in-network DDoS prevention mechanisms that force the victim to accept the single policy proposed by these approaches. Thus Umbrella\xspace creates extra deployment incentives for ISPs by enabling them to offer customized DDoS defense to customers.
\section{Design Details}
Since the flood throttling layer is straightforward in its design and the user-specific layer is typically driven by the victim, we focus on elaborating the congestion resolving layer in this section. Nevertheless, our implementation in \S \ref{sec:implementation} covers all three layers of Umbrella\xspace.
\subsection{Flow Table}\label{sec:flowTable}
Umbrella\xspace's flow table maintains per-sender network usage. Specifically, all packets sent from the same source are aggregated (and defined) as one \emph{coflow}\footnote{The concept of coflow is also introduced in data centers, meaning a group of flows from the same task~\cite{coflow2}.}, and the flow table maintains state for each coflow. As discussed in the \emph{Assumptions} section of \S \ref{sec:problem_space}, the flow table maintains state only for the set of source IP addresses explicitly provided by the victim, to prevent adversaries from exhausting table state via spoofed addresses. Umbrella\xspace does not keep state for each individual TCP flow (identified by its $5$-tuple), since the behavior of a single flow may not reflect the intention of the sender (malicious or not). For instance, a bot may keep sending new flows to the victim although its previous flows experience severe losses. Even though each individual flow may be a legitimate TCP flow, the bot is acting maliciously. However, if we interpret its behavior from the coflow's perspective, we can see that the bot continuously creates traffic in the face of congestive losses. Thus it is accountable for the congestion and will be rate limited. In the rest of the paper, unless otherwise stated, flow and coflow are used interchangeably.
\begin{figure}[t]
\centering
\begin{tabular}{|>{}c|>{}c|>{}c|>{}c|>{}c|>{}c|}
\hline
$f$&
$\mathcal{T}_A$&
$\mathcal{W}_R$&
$\mathcal{P}_R$&
$\mathcal{P}_D$&
$\mathcal{L}_R$\\
\hline
$64$&
$32$&
$32$&
$32$&
$32$&
$64$\\
\hline
\end{tabular}
\caption{Each field in a flow entry and its corresponding size (bits).}
\label{fig:flow_entry}
\end{figure}
Each flow entry (identified by its source address $f$) in the flow table is composed of
a timestamp $\mathcal{T}_A$, $f$'s rate limiting window $\mathcal{W}_R$,
the number of packets $\mathcal{P}_R$ received from $f$, the number
of dropped packets $\mathcal{P}_D$ from $f$ and its packet loss rate $\mathcal{L}_R$.
Further, Umbrella\xspace maintains $\mathcal{W}_R^T$, the sum of rate limiting
windows of all flows, shared by all flow entries.
This information is necessary for the rate limiting algorithm.
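For concreteness, a flow entry can be modeled as the following sketch; the Python field names mirror the paper's symbols, and the bit widths from Fig.~\ref{fig:flow_entry} are noted only in comments, since Python integers are unbounded:

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """One per-sender entry in Umbrella's flow table (a sketch)."""
    f: int            # source address identifying the coflow (64 bits)
    t_a: float = 0.0  # T_A: timestamp starting the current detection period (32 bits)
    w_r: int = 0      # W_R: rate limiting window, in packets (32 bits)
    p_r: int = 0      # P_R: packets received in the current period (32 bits)
    p_d: int = 0      # P_D: packets dropped in the current period (32 bits)
    l_r: float = 0.0  # L_R: smoothed packet loss rate (64 bits)
```

The shared total $\mathcal{W}_R^T$ is kept outside the per-entry structure, since it is the sum of all flows' rate limiting windows.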
\subsection{Rate Limiting Algorithm}\label{sec:rateLimit}
The rate limiting algorithm is designed to enforce congestion accountability by punishing misbehaved users who
keep sending packets in the face of severe congestive losses. By dropping the
undesirable packets early, Umbrella\xspace can effectively prevent bandwidth exhaustion.
In its design, the algorithm executes periodic rate limiting for each flow during DDoS attacks.
Specifically, in each \emph{detection period}, the number of packets allowed for
each flow (or sender) is limited by its rate limiting window $\mathcal{W}_R$.
The $\mathcal{W}_R$ is updated every detection period according
to the flow's information recorded in the flow table, such as the flow's packet loss rate $\mathcal{L}_R$ and
its transmission rate $\mathcal{P}_R$.
\subsubsection{Populating the flow table}
Assume at time $ts$, a new flow $f$ is initiated.
Umbrella\xspace creates a flow entry for $f$ in its flow table.
All fields of the entry are initialized to zero.
Then Umbrella\xspace updates $\mathcal{T}_A$ as $ts$, increases $\mathcal{P}_R$ by one
and sets the initial $\mathcal{W}_R$ as the pre-defined fair share rate
$\mathcal{W}_{fair}$ (discussed in \S\ref{sec:paraSettings}).
From then on, Umbrella\xspace increases $\mathcal{P}_R$ by one for each arrived
packet of $f$ until the end of the current detection period (\emph{e.g.,}\xspace the end of the first detection period).
Umbrella\xspace uses packet arrival time to detect when it should start a new detection period for $f$.
Specifically, letting $\mathcal{D}_p$ denote the length of a detection period,
when Umbrella\xspace receives a packet with arrival time $t_0 {>} \mathcal{T}_A {+} \mathcal{D}_p$,
it knows that this packet is the first one received in the new detection period.
Then Umbrella\xspace performs the following updates in order:
\textsf{(i)}\xspace Set $\mathcal{T}_A=t_0$;
\textsf{(ii)}\xspace Update $\mathcal{W}_R$ and $\mathcal{L}_R$ according to the Algorithm \ref{algo:rateLimit};
\textsf{(iii)}\xspace Reset $\mathcal{P}_R$ and $\mathcal{P}_D$ as zero.
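This bookkeeping can be sketched as follows, assuming a dict-based flow entry and a `decide` callback standing in for the rate limiting update of Algorithm \ref{algo:rateLimit}; all names are hypothetical:

```python
def new_flow_entry(ts, w_fair):
    # Fields mirror the paper's flow entry: T_A, W_R, P_R, P_D, L_R.
    # The first packet is counted immediately, and W_R starts at the
    # pre-defined fair share W_fair.
    return {"t_a": ts, "w_r": w_fair, "p_r": 1, "p_d": 0, "l_r": 0.0}

def on_packet(entry, t0, d_p, decide):
    """Per-packet bookkeeping for an existing flow (a sketch).

    d_p is the detection period length; decide(entry) updates W_R and
    L_R at each period boundary, as the rate limiting algorithm does.
    """
    if t0 > entry["t_a"] + d_p:      # first packet of a new detection period
        entry["t_a"] = t0            # (i)  refresh the timestamp T_A
        decide(entry)                # (ii) update W_R and L_R
        entry["p_r"] = 0             # (iii) reset the per-period counters
        entry["p_d"] = 0
    entry["p_r"] += 1                # count the arrived packet
```

Note that packet arrival times alone drive period detection, so no per-flow timers are needed.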
\subsubsection{The rate limiting algorithm}
\newcommand{\algrule}[1][.2pt]{\par\vskip.5\baselineskip\hrule height #1\par\vskip.5\baselineskip}
\begin{algorithm}[t]
\SetDataSty{textsl}
\SetKwData{CQ}{CongestionQueue}
\SetKwData{RL}{RateLimitingDecision}
\SetKwData{FC}{\textbf{Function:}}
\SetKwData{IP}{\textbf{Input:}}
\SetKwData{OP}{\textbf{Output:}}
\SetKwData{Main}{\textbf{Main Procedure:}}
\SetKwData{WS}{RateLimitingWindow}
\SetKwData{IN}{\textbf{Flow Initialization:}}
\footnotesize
\IP \\
\quad The service queue $\mathcal{Q}_C$ executing the FIFO principle.\\
\quad $f$'s entry in the flow table. \\
\quad $\mathcal{B}$: the bandwidth available in the congestion resolving layer. \\
\quad $\mathcal{W}_R^T$: the sum of all flows' rate limiting windows. \\
\quad System related parameters: the detection period length $\mathcal{D}_p$, the
weight $\lambda$ for previous packet losses, the packet loss rate threshold $L_{Th}$ and
the per-sender fair share rate $\mathcal{W}_{fair}$.\\
\OP \\
\quad Updated $\mathcal{Q}_C$, $\mathcal{W}_R^T$ and the flow entry of $f$.
\algrule[0.5pt]
\IN \\
\quad $\mathcal{W}_R = \mathcal{W}_{fair}$~\;
\algrule[0.5pt]
\Main \\
\For{each arrived packet $\mathcal{P}$ of flow $f$}{
$\mathcal{P}_R \leftarrow \mathcal{P}_R + 1$~\;
\If{$\mathcal{P}_R > \mathcal{W}_R$} {
Drop $\mathcal{P}$~; ~~$\mathcal{P}_D \leftarrow \mathcal{P}_D + 1$~\;
}
\lElse{
\CQ{$\mathcal{P}$}
}
\If{$\mathcal{P}$ is the first packet in a new detection period} {
\RL{$\mathcal{W}_R, \mathcal{L}_R, \mathcal{P}_R, \mathcal{P}_D$}\;
}
}
\algrule[0.5pt]
\textbf{\FC} \RL{$\mathcal{W}_R, \mathcal{L}_R, \mathcal{P}_R, \mathcal{P}_D$}: \\
\quad $recentLoss \leftarrow \mathcal{P}_D / \mathcal{P}_R$~\;
\quad $packetLoss \leftarrow \lambda \cdot \mathcal{L}_R + (1-\lambda) \cdot recentLoss$~\;
\quad $\mathcal{W}_R^{original} \leftarrow \mathcal{W}_R$~; ~~$\mathcal{L}_R \leftarrow packetLoss$~\;
\quad \lIf{$packetLoss > L_{Th}$ and $\mathcal{P}_R > \mathcal{W}_{fair}$}{
$\mathcal{W}_R \leftarrow \mathcal{W}_R / 2$
}
\quad \lElse {
$\mathcal{W}_R \leftarrow$ \WS{$\mathcal{W}_R$}
}
\quad $\mathcal{W}_R^T \leftarrow \mathcal{W}_R^T + \mathcal{W}_R - \mathcal{W}_R^{original}$~\;
\algrule[0.5pt]
\textbf{\FC} \CQ{$\mathcal{P}$}: \\
\quad \If{the queue $\mathcal{Q}_C$ is full}{
Drop $\mathcal{P}$~\;
$\mathcal{P}_D \leftarrow \mathcal{P}_D + 1$~\;
}
\quad \lElse{
Append $\mathcal{P}$ to $\mathcal{Q}_C$
}
\algrule[0.5pt]
\textbf{\FC} \WS{$\mathcal{W}_R$}: \\
\quad \textbf{return} $\frac{\mathcal{W}_R}{\mathcal{W}_R^T}\cdot \mathcal{B}$~\;
\caption{\bf Rate Limiting Algorithm}\label{algo:rateLimit}
\end{algorithm}
\normalsize
At a very high level, the rate limiting algorithm determines the allowed rate
for each flow based on its congestion accountability. In particular,
the rate limiting windows of congestion-accountable flows (with both high packet loss rates and
high transmission rates) are significantly reduced.
Flows respecting packet losses by adjusting sending rates accordingly are
guaranteed to receive per-sender fair share of the bandwidth.
We adopt such a fairness metric because it is the optimal one that can be
guaranteed for legitimate users under strategic attacks. The proof is straightforward:
by behaving in the exact same way as legitimate users,
attackers can receive at least the per-sender fair share, meaning that the
optimal guaranteed share for a legitimate user is also the per-sender fair share.
However, the algorithm allows legitimate users to obtain more bandwidth
shares when attackers fail to execute their optimal strategy.
Umbrella\xspace performs periodic rate limiting. In each detection period,
Umbrella\xspace learns each flow's transmission rate and packet loss rate to determine its $\mathcal{W}_R$.
One flow $f$'s transmission rate is quantified by $\mathcal{P}_R$,
the number of received packets from $f$ in the current period.
$f$'s packets may be dropped for two reasons: \textsf{(i)}\xspace $f$'s
sending rate exceeds its $\mathcal{W}_R$ or \textsf{(ii)}\xspace the service queue is full due to congestion.
$f$'s packet loss rate $\mathcal{L}_R$ in the current period is the ratio of dropped packets
to received packets. While making rate limiting decisions,
Umbrella\xspace adopts the metric $packetLoss$, which incorporates both packet losses in
the current period and previous packet losses. Such a design prevents
attackers from hiding their previous packet losses by stopping transmitting for a while before
sending a new traffic burst (\emph{e.g.,}\xspace the on-off shrew attack~\cite{low-rate}).
If both $packetLoss$ and $\mathcal{P}_R$ exceed their pre-defined thresholds,
Umbrella\xspace classifies $f$ as a maliciously behaved flow and reduces
its $\mathcal{W}_R$ by half.
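The per-period decision described above can be sketched as follows. This is a minimal illustration, not Umbrella\xspace's actual implementation; the constant values and function name are placeholders.

```python
# Illustrative sketch of the per-period rate limiting decision.
# LAMBDA, LOSS_TH and W_FAIR are assumed example values, not the
# paper's production settings.
LAMBDA = 0.5     # weight of historical packet loss (lambda)
LOSS_TH = 0.05   # loss-rate threshold L_Th
W_FAIR = 1000    # per-sender fair share, in packets per period

def rate_limit_decision(w_r, l_r, p_r, p_d):
    """Update one flow's window at the end of a detection period.

    w_r: current rate limiting window W_R (packets/period)
    l_r: smoothed historical loss rate L_R
    p_r: packets received from the flow this period (P_R)
    p_d: packets dropped this period (P_D)
    Returns (new window, new smoothed loss rate).
    """
    recent_loss = p_d / p_r if p_r > 0 else 0.0
    packet_loss = LAMBDA * l_r + (1 - LAMBDA) * recent_loss
    if packet_loss > LOSS_TH and p_r > W_FAIR:
        w_r = w_r / 2  # congestion-accountable flow: halve its window
    # Otherwise W_R would be rescaled by the WindowScaling step,
    # which is omitted here.
    return w_r, packet_loss
```

For instance, a flow that loses $20\%$ of its packets while sending twice the fair share has its window halved, whereas a compliant flow keeps its window.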
We explain two design details of the rate limiting algorithm.
To begin with, the algorithm cannot make the rate limiting decision for a fresh flow
in its first detection period since Umbrella\xspace has not learned its packet loss rate and sending rate yet.
Thus Umbrella\xspace initializes its $\mathcal{W}_R$ as the pre-defined
per-sender fair share rate $\mathcal{W}_{fair}$ in the first detection period,
preventing attackers from exhausting bandwidth by creating new flows.
Besides $\mathcal{W}_{fair}$, the algorithm relies on another three
system-related parameters: $\mathcal{D}_p$, $\lambda$ and $L_{Th}$.
We discuss the reasoning for parameter settings in \S\ref{sec:paraSettings}.
Further, the \textsl{RateLimitingWindow} function returns the allowed bandwidth for $f$.
We need to convert the bandwidth value into the number of $1.5$KB packets allowed in
one detection period, which will be $f$'s updated $\mathcal{W}_R$.
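This conversion takes only a couple of lines; the function name below is ours, and the $1.5$KB packet size comes from the text above.

```python
# Convert an allowed bandwidth (bits per second) into the number of
# 1.5 KB packets permitted in one detection period D_p.
PACKET_SIZE_BITS = 1500 * 8  # 1.5 KB packets, as in the text

def bandwidth_to_window(rate_bps, d_p_seconds):
    # The window is the packet count the flow may send in one period.
    return int(rate_bps * d_p_seconds / PACKET_SIZE_BITS)
```

For example, a $12$Mbps allowance over a $5$s detection period corresponds to a window of $5{,}000$ packets.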
We close our algorithm design with a remark concerning the SYN flooding attack.
When a SYN packet's source address is matched by one flow entry (meaning
the source address has been authenticated), it will be treated in the same way as
regular packets from the source. Thus sending SYN packets also consumes attackers'
bandwidth budget. SYN packets with unverified sources are appended to a
queue with bounded bandwidth (\emph{e.g.,}\xspace $5\%$ of $\mathcal{B}$).
Thus the spoofed SYN flooding cannot compromise Umbrella\xspace's defense. Regular packets with
unidentifiable sources in the flow table are denied.
\vspace*{-0.15in}
\subsection{Parameter Settings}\label{sec:paraSettings}
\noindent{$\bm{\mathcal{D}_p}$:}
The length of the detection period should be long enough for Umbrella\xspace to characterize each flow's behavior during the congestion so as to determine its congestion accountability. In particular, $\mathcal{D}_p$ needs to be long enough to allow legitimate users to adapt to the congestion so as to maintain a very low packet loss rate. Meanwhile, Umbrella\xspace is confident that users with high packet loss rates during such a long period of
time are misbehaving. Given that TCP adjusts its window every RTT, $\mathcal{D}_p$ should be much longer than typical Internet RTTs (hundreds of milliseconds based on CAIDA's measurements~\cite{RTT_measure}). If $\mathcal{D}_p$ is too short, legitimate flows may fail to adapt to the congestion quickly enough, resulting in inaccurate and highly fluctuating loss rates for them. Conversely, $\mathcal{D}_p$ cannot be too long; otherwise reaction to attacks would be slow. Balancing the two factors, $2{\sim}6$ seconds are reasonable choices for $\mathcal{D}_p$.
\noindent{$\bm{\lambda}$}: The value of $\lambda$
represents the weight assigned to one flow's previous packet losses.
To defend against the on-off shrew attack~\cite{low-rate}, Umbrella\xspace gives a non-trivial weight
to previous packet losses by setting $\lambda=0.5$. Therefore, once a flow misbehaves, it will have a bad reputation for a while. To regain its reputation, the flow has to honor congestion by reducing its sending rate when experiencing packet losses.
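To see the effect of $\lambda=0.5$ concretely, the sketch below (our illustration, reusing the algorithm's update rule) counts how many consecutive lossless periods a previously misbehaving flow needs before its smoothed loss rate falls back under the threshold.

```python
# How fast a flow's reputation recovers under the EWMA update
# packetLoss = lambda * L_R + (1 - lambda) * recentLoss,
# assuming the flow experiences zero losses from now on.
LAMBDA = 0.5

def periods_to_recover(initial_loss, threshold=0.05):
    loss, periods = initial_loss, 0
    while loss > threshold:
        loss = LAMBDA * loss  # recentLoss = 0 in a compliant period
        periods += 1
    return periods
```

With $\lambda=0.5$ the historical loss rate halves every compliant period, so even a heavily lossy flow regains a clean record only after several consecutive well-behaved periods.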
\noindent{$\bm{L_{Th}}$:} The value of $L_{Th}$ should be larger than normal
packet loss rates to avoid false positives.
According to the previous measurements~\cite{tcpMeasure, internetMeasure},
we set $L_{Th} = 5\%$.
\noindent{$\bm{\mathcal{W}_{fair}}$:} We define the fair share of each flow
as $\mathcal{W}_{fair} {=} \mathcal{B}/\mathcal{N}$, where
$\mathcal{N}$ is the number of flows in the flow table.\footnote{When
Umbrella\xspace is activated from the idle state, $\mathcal{N}$ can be obtained from the network monitoring and logging
tools such as the NetFlow~\cite{netflow}.} Again, the bandwidth
value needs to be converted into the number of packets.
$\mathcal{W}_{fair}$ is updated when new flows are initiated.
As we aggregate all traffic from the same sender as one flow,
$\mathcal{W}_{fair}$ may be updated less frequently
than each flow's $\mathcal{W}_R$.
\subsection{Algorithm Analysis}\label{sec:algo_analysis}
In this section, we prove that the rate limiting algorithm provides both
guaranteed and elastic bandwidth shares for legitimate users:
they are guaranteed to obtain the per-sender fair share and can potentially
obtain more bandwidth shares. We first state the optimal
bandwidth shares attackers can get.
\begin{lemma}\label{lemma:attack}
Given that $\mathcal{N}_L$ legitimate flows and $\mathcal{N}_A$ attack flows share the
congestion resolving layer's bandwidth $\mathcal{B}$, regardless of attackers' strategies,
the aggregated bandwidth that can be obtained by attack flows is \textbf{at most} $\frac{(1+L_{Th})\cdot \mathcal{N}_A \cdot
\mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$.
\end{lemma}
\begin{proof}
Umbrella\xspace initializes each flow's $\mathcal{W}_R$ as per-sender fair share rate $\mathcal{W}_{fair}$.
Thus attackers can obtain $\frac{\mathcal{N}_A \cdot \mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$ initial bandwidth.
The rate limiting algorithm allows a maximum $L_{Th}$ loss rate before
further reducing one flow's rate limiting window. Thus the optimal strategy
for an attack flow is to strictly comply with Umbrella\xspace's rate limiting by
sending no more than $1+L_{Th}$ times its rate limiting window.
Otherwise, its bandwidth share will be further reduced.
In a hypothetical situation where attackers are able to know their exact rate limiters
and control their packet losses remotely, they can obtain at most
$\frac{(1+L_{Th})\cdot \mathcal{N}_A \cdot \mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$.
\end{proof}
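As a numeric illustration of the lemma (with hypothetical numbers, not taken from the paper's experiments):

```python
# Upper bound on attackers' aggregate bandwidth from the lemma:
# (1 + L_Th) * N_A * B / (N_L + N_A).
def attacker_share_bound(b, n_l, n_a, l_th=0.05):
    return (1 + l_th) * n_a * b / (n_l + n_a)
```

For example, with $\mathcal{B}=10$Gbps, $\mathcal{N}_L=100K$ and $\mathcal{N}_A=500K$, attackers collectively obtain at most ${\sim}8.75$Gbps, i.e., only $5\%$ above their aggregate fair share.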
Based on Lemma~\ref{lemma:attack}, we obtain the following theorem.
\begin{theorem}\label{theorem:legitimate}
Each legitimate flow can obtain \textbf{at least} $\frac{\mathcal{B}}{\mathcal{N}_L + \mathcal{N}_A}$
bandwidth share, given that its transport protocol can
fully utilize the allowed bandwidth.
\end{theorem}
\begin{proof}
As each legitimate flow complies with Umbrella\xspace's rate limiting,
it is guaranteed to receive the per-sender fair share.
However, the per-sender fair share
is the lower-bound of its bandwidth share. When attackers fail to adopt
their optimal strategy (\emph{e.g.,}\xspace sending flat rates),
their rate limiting windows are significantly reduced.
As a result, legitimate flows' windows, returned by the \textsl{RateLimitingWindow} function,
will be increased since $\mathcal{W}_R^T$ is reduced. Thus legitimate
flows can receive more bandwidth than the per-sender fair share.
\end{proof}
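The elasticity argument can be illustrated with the \textsl{RateLimitingWindow} formula $\frac{\mathcal{W}_R}{\mathcal{W}_R^T}\cdot\mathcal{B}$ from the algorithm; the numbers below are hypothetical.

```python
# A flow's allowed bandwidth is proportional to its window's share of
# the total window W_R^T. When attackers' windows are halved, W_R^T
# shrinks and compliant flows' allowed bandwidth grows.
def allowed_bandwidth(w_r, w_r_total, b):
    return w_r / w_r_total * b
```

With $\mathcal{B}=1$Gbps, a legitimate window of $100$ packets out of a total of $1000$ yields $100$Mbps; once attackers' halved windows shrink the total to $550$, the same window yields ${\sim}182$Mbps.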
To sum up, Lemma~\ref{lemma:attack} and Theorem~\ref{theorem:legitimate} present
a dilemma for attackers: sending aggressively
rapidly throttles them to an almost zero bandwidth share, whereas
complying with Umbrella\xspace's rate limiting renders their attacks futile.
\section{Implementation and Evaluation}\label{sec:implementation}
In this section, we describe the implementation and evaluation of Umbrella\xspace. We
first demonstrate that Umbrella\xspace scales to DDoS attacks
involving millions of attack flows while introducing negligible packet processing overhead.
Then we implement all three layers of Umbrella\xspace's defense on our physical testbed to
evaluate Umbrella\xspace's performance. Further, we run detailed simulations to show that
Umbrella\xspace is effective in mitigating large scale DDoS attacks.
\subsection{Overhead and Scalability Analysis}\label{sec:scalability}
The flood throttling layer can be implemented as weighted fair queuing. Thus
it introduces almost zero overhead since Umbrella\xspace does not maintain any extra states.
The overhead of the user-specific layer depends on the specific policies.
To learn the overhead of Umbrella\xspace's congestion resolving layer
(\emph{e.g.,}\xspace per-packet processing overhead and memory consumption),
we implement Umbrella\xspace's rate limiting logic on a Dell
PowerEdge R320 server shipped with an $8$-core Intel E$5$-$1410$
$2.8$GHz CPU and $24$GB memory.
As illustrated in Fig. \ref{fig:flow_entry}, the total size of a single flow
entry is $24$ bytes. Thus, even when Umbrella\xspace maintains a
flow table with $100$ million flows, the memory consumption is just a few gigabytes,
which can be easily supported by commodity servers.
We show both the memory consumption and per-packet processing overhead
for three table sizes ($1$, $10$ and $100$ million entries)
in Figure \ref{fig:scalability}.
\begin{figure}[t]
\centering
\mbox{
\subfigure{\includegraphics[scale=0.45]{scalability.pdf}}
}
\caption{Umbrella\xspace's memory consumption and per-packet processing overhead.}
\label{fig:scalability}
\end{figure}
For the largest table size, the memory consumption is around $6$GB,\footnote{Note that
the memory usage for $100$ million flow entries is not exactly $2.4$GB since
we adopt the \texttt{map} data structure to implement the flow table, resulting
in additional memory consumption.} indicating that
memory will not become the bottleneck of Umbrella\xspace's implementation.
Further, the per-packet processing overhead remains almost the same when the number of
flow entries increases from $1$ million to $100$ million. Thus Umbrella\xspace
can effectively scale up to deal with DDoS attacks involving millions of attack flows.
Moreover, the ${\sim}0.06\mu s$ per-packet processing overhead
is negligible even for $10$Gbps Ethernet, which has around a $1.2\mu s$ per-packet time budget.
Thus the victim can still enjoy high speed Ethernet after deploying Umbrella\xspace.
Note that the implementation of Umbrella\xspace's rate limiting algorithm may be optimized according to the
system hardware to further reduce the overhead.
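The time-budget comparison above follows from simple wire-time arithmetic (a sketch, assuming $1.5$KB packets):

```python
# Per-packet time budget: the time a packet of the given size occupies
# a link of the given speed. A 1.5 KB packet on 10 Gbps Ethernet
# occupies the wire for 1500 * 8 / 10e9 s = 1.2 microseconds.
def per_packet_budget_us(link_bps, packet_bytes=1500):
    return packet_bytes * 8 / link_bps * 1e6
```

The measured ${\sim}0.06\mu s$ processing cost is thus only about $5\%$ of the $1.2\mu s$ budget.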
\subsection{Testbed Experiments}\label{sec:testbed}
We implement a prototype of Umbrella\xspace on our testbed consisting of $9$ servers, illustrated in Fig.~\ref{fig:testbed_arch}.
Each server is the same Dell PowerEdge R$320$ used to measure Umbrella\xspace's overhead (\S\ref{sec:scalability}).
Each server runs Debian $6.0$-$64$bit with the Linux $2.6.38.3$ kernel and is equipped with a
Broadcom BCM$5719$ NetXtreme Gigabit Ethernet NIC.
We organize the servers into $7$ senders (either attackers or legitimate users),
one software router implementing Umbrella\xspace's three-layered defense and one victim, as illustrated in Fig. \ref{fig:testbed_arch:b}.
Thus the interdomain bandwidth in the testbed is $1$Gbps.
The flood throttling layer is implemented as a weighted fair queuing module at the
output port of the software router. The module serves TCP flows and UDP flows
in two separate queues with different weights. The normalized weight for
TCP flows' queue is $0.9$ whereas UDP flows' queue weight is $0.1$ (again the
victim can overwrite the setting).
Each queue has its own dedicated buffer since UDP flows would otherwise consume
the entire buffer when competing with TCP flows, resulting in almost zero throughput for TCP traffic.
With the protection of the flood throttling layer, TCP flows with
sufficient traffic demand can obtain $900$Mbps share of the interdomain link,
regardless of how much UDP traffic is thrown to the victim.
The congestion resolving layer is composed of a set of rate limiters.
Each rate limiter is implemented via the Hierarchical Token Bucket (HTB) of
the Linux's Traffic Control~\cite{linux_tc}. The bandwidth of each rate limiter
is each flow's $\mathcal{W}_R$, determined by Umbrella\xspace's rate limiting algorithm,
to ensure no flow can send faster than its $\mathcal{W}_R$. We set $\mathcal{D}_p{=}5$s
in the implementation.
\begin{figure}[t]
\centering
\mbox{
\subfigure[\label{fig:testbed_arch:a}Servers.]{\includegraphics[scale=0.3]{servers2.pdf}}\quad\quad
\subfigure[\label{fig:testbed_arch:b}Testbed topology.]{\includegraphics[scale=0.28]{testbed_arch.pdf}}
}
\caption{The prototype of Umbrella\xspace on our testbed.}
\label{fig:testbed_arch}
\end{figure}
\begin{figure*}[t]
\centering
\mbox{
\subfigure[\label{fig:testbed:a}Layer one defense.]{\includegraphics[scale=0.35]{set1.pdf}}
\subfigure[\label{fig:testbed:b}Layer two defense.]{\includegraphics[scale=0.35]{set2.pdf}}
\subfigure[\label{fig:testbed:c}Layer two and three defense.]{\includegraphics[scale=0.35]{set4.pdf}}
}
\caption{Testbed experiments.}
\label{fig:testbed}
\end{figure*}
\begin{figure*}[t]
\centering
\mbox{
\subfigure[\label{fig:simulation:a}On-off shrew attacks.]{\includegraphics[scale=0.35]{on_off.pdf}}
\subfigure[\label{fig:simulation:b}Varying the volume of attack traffic.]{\includegraphics[scale=0.35]{on_off_aggressive.pdf}}
\subfigure[\label{fig:simulation:c}Attackers' optimal strategy.]{\includegraphics[scale=0.35]{optimal.pdf}}
}
\caption{Mitigating large scale DDoS attacks.}
\label{fig:simulation}
\end{figure*}
In our prototype, we implement one representative traffic policing rule for the user-specific layer:
the victim reserves bandwidth for premium clients so that they will not be affected by
DDoS attacks. Such bandwidth guarantee is achieved by the weighted fair queuing
module assigning one dedicated queue to premium clients.
We do not limit common clients' rates to protect premium clients' shares,
because doing so would waste any unused portion of the bandwidth guarantee. Instead,
weighted fair queuing is work-conserving, allowing common clients to grab
leftover bandwidth from premium clients.
Thus the final queuing module contains three queues.
We perform three experiments on our testbed to evaluate Umbrella\xspace's defense, detailed as follows.
\noindent\textbf{Layer-one defense:} In this experiment, $6$ senders,
each sending $1$Gbps UDP traffic towards the victim, emulate the amplification-based
DDoS attacks in which the total volume of attack traffic is
$6\times$ the interdomain bandwidth. The $7$th sender
sends TCP traffic to represent legitimate clients. As real-life interdomain links often have
overprovisioning to absorb traffic bursts,
we set the TCP flows' demand to $700$Mbps ($70\%$ of the total interdomain bandwidth).
We present our experiment results in the form of sequential events, illustrated in Fig.~\ref{fig:testbed:a}.
At $t=0$s, the victim is hammered by DDoS attacks, causing complete
denial of service to legitimate clients. Umbrella\xspace's layer-one defense is initiated
at $t=4$s to provide (almost) immediate DDoS prevention. Legitimate clients' bandwidth
shares grow rapidly to accommodate their traffic demand. Due to the work-conservation of
weighted fair queuing, attack traffic consumes the spare bandwidth of the interdomain link.
\noindent\textbf{Layer-two defense:} In this experiment,
although all senders are adopting TCP, $6$ of them
deviate from TCP's congestion control algorithm by continuously
injecting packets in the face of congestive losses. As illustrated in
Fig.~\ref{fig:testbed:b}, malicious senders successfully exhaust the interdomain
bandwidth without the protection of Umbrella\xspace. At $t=4$s, Umbrella\xspace starts to police
traffic based on its rate limiting algorithm. As attackers fail to
comply with Umbrella\xspace's rate limiting, their bandwidth shares are significantly reduced,
resulting in almost zero share in the steady state. However, legitimate clients' throughput
gradually converges to their traffic demand.
\noindent\textbf{Layer-three defense:}
In this experiment, one sender is upgraded to represent premium clients with
$200$Mbps bandwidth guarantee. Another sender stands for common clients with
$500$Mbps traffic demand. The remaining senders emulate attackers transmitting
malicious TCP flows. To satisfy the guarantee, the victim configures the normalized queue weight
as $0.2$, $0.7$ and $0.1$ for premium clients, common clients and UDP traffic, respectively.
Fig.~\ref{fig:testbed:c} demonstrates that premium clients' bandwidth share is
guaranteed throughout the experiment, saving them from the turbulence caused by DDoS attacks.
Further, common clients are protected after Umbrella\xspace enables its rate limiting, which effectively
thwarts DDoS attacks.
\subsection{Mitigating Large Scale DDoS Attacks}\label{sec:evaluation}
In this section, we evaluate Umbrella\xspace's defense against large scale DDoS attacks.
In the evaluation, we develop a flow-level simulator rather than completely
relying on the existing packet-level emulators or simulators (\emph{e.g.,}\xspace Mininet~\cite{mininet}, ns-$3$~\cite{ns3})
because it takes them prohibitively long to emulate large scale DDoS attacks.
Specifically, assume that a packet-level simulator can
process one million packets per second and that one million
attack flows, each sending at $5$Mbps, attack a $10$Gbps
link. Even if we set the packet size as the maximum allowed
size $1.5$KB, it will take the simulator around $140$ hours to simulate
just one second of the attack. By concealing the detailed
per-packet processing and focusing on per-flow
behaviors, our flow-level simulator is still able to accurately
evaluate Umbrella\xspace, which in fact relies on flow-level states to police traffic.
However, we also perform a moderate scale simulation
on ns-$3$ to benchmark our flow-level simulator.
The network topology adopted in simulations is similar to that of the testbed experiments
except the number of senders can be more than $1$ million and we scale up the
interdomain bandwidth to $10$Gbps.
Unless otherwise stated, the following experiments
are performed on our flow-level simulator.
We design experiments for different strategies attackers may take: \textsf{(i)}\xspace
they launch on-off shrew attacks~\cite{low-rate} to evade detection, \textsf{(ii)}\xspace vary
the volume of attack traffic and \textsf{(iii)}\xspace dynamically adjust their rates based on
packet losses. In the on-off attack, attackers coordinate with each other to send high traffic bursts during
on periods and stay inactive during off periods. We use the ratio of the off-period's
length to the on-period's length (denoted by $\mathcal{R}^{off}_{on}$) to represent attackers' strategy in the on-off attack.
In the second strategic attack, we define the \emph{aggressiveness factor} $\mathcal{A}_f$ as the
ratio of the total volume of attack traffic to the interdomain bandwidth. Attackers may vary
the $\mathcal{A}_f$ during attacks.
In the first two experiments, attackers disregard packet losses and
keep injecting packets even in the face of severe congestive losses.
These two experiments are designed to show that when attackers fail to adopt the optimal strategy (the third strategy discussed below), Umbrella\xspace accurately throttles attack flows so that legitimate senders receive more bandwidth than their guaranteed portions. In the third strategy (the optimal one), attackers rely on their transport protocols to probe packet losses so as to adjust their rates to honor the congestion. In this case, we demonstrate that Umbrella\xspace guarantees per-sender fairness for legitimate senders.
In the first strategic attack, we set
the length of the on-period the same as $\mathcal{D}_p$ ($5$s)
and vary the ratio $\mathcal{R}^{off}_{on}$ from $0$ to $18$.
Meanwhile we set the number of legitimate clients $\mathcal{N}_{L}{=}100K$ and
vary the number of attackers from $100K$ to $1$ million. Further, we set
$\mathcal{A}_f{=}2$ but vary the rate of each attacker based on a
Gaussian distribution.\footnote{Assuming the aggregated rate of attackers is $R$,
the Gaussian distribution's mean is $R/\mathcal{N}_A$ and its standard deviation is $1$.}
The experimental results, illustrated in Fig. \ref{fig:simulation:a}, show that
legitimate users can obtain more bandwidth than the per-sender
fair share regardless of $\mathcal{R}^{off}_{on}$'s value and the attack scale.
This is because Umbrella\xspace's rate limiting algorithm incorporates flows' previous
packet losses while making rate limiting decisions. Consequently,
even by staying completely inactive during off-periods,
attackers fail to restore their reputation via
the strategic on-off attack. Further, Umbrella\xspace can effectively
distinguish misbehaved flows from legitimate ones
no matter how many misbehaved flows are involved. Ironically,
larger attack scales result in higher benefit gains for legitimate users in the sense
that their bandwidth shares are boosted to higher levels compared with the
per-sender fair share. In all scenarios, attackers' bandwidth shares are limited to almost zero.
In the second strategic attack, we vary $\mathcal{A}_f$ from $0.9$ to $4$.
Although setting $\mathcal{A}_f{<}1$ cannot completely disconnect
the victim from its ISP, attackers can throttle legitimate users
to a tiny fraction of the total bandwidth by aggressively injecting packets
(imagine an analogous situation where a $900$Mbps UDP flow
competes with TCP flows on a $1$Gbps link).
Attackers experience low packet loss rates as legitimate users
cut their rates dramatically. To defend against such ``moderate"
attacks, the victim can configure Umbrella\xspace to start traffic policing when the
link utilization exceeds a pre-defined threshold (\emph{e.g.,}\xspace 90\%).
We fix $\mathcal{N}_L{=}100K$ and $\mathcal{N}_A{=}500K$ in this experiment.
The results (Fig. \ref{fig:simulation:b}) show that legitimate users get
at least the per-sender fair share in all settings.
Note that in moderate attacks, attackers can prevent their bandwidth shares from being
further reduced by extending the off-period, resulting in per-sender fairness.
However, increasing $\mathcal{A}_f$ actually puts attackers in a worse situation where
their flows are blocked.
The previous two experiments prove that when attackers fail to comply with
Umbrella\xspace's rate limiting, their bandwidth shares are significantly reduced.
Legitimate users may therefore obtain more bandwidth than the per-sender fair share.
In the third strategy, attackers actively adjust their rates based on
packet losses so as to maintain low packet loss rates. Besides our flow-level
simulator, we also adopt the ns-$3$~\cite{ns3} in this setting.
To circumvent ns-$3$'s scalability problem, we adopt an approach similar to that used in NetFence~\cite{netfence}.
Specifically, we fix the number of nodes ($500$ attackers and $50$ legitimate nodes in our experiments)
and scale down the link capacity to simulate the large scale attacks. By varying the link bandwidth
from $5$Mbps to $50$Mbps, we are able to simulate the attack scenarios where
$100$K to $1$ million attackers try to flood the $10$Gbps interdomain link.
We use the ns-$3.21$ version and add Umbrella\xspace's rate limiting logic to the \textsl{PointToPointNetDevice} module, which
performs flow analysis upon receiving packets. To execute their optimal strategy,
attackers have to rely on TCP-like protocols to probe the network condition and determine their rates accordingly.
We test all supported TCP congestion control algorithms in ns-$3$: Tahoe, Reno, and NewReno.
Legitimate clients adopt NewReno.
The results (Fig. \ref{fig:simulation:c}) show that complying
with Umbrella\xspace's rate limiting grants attackers the per-sender fairness (results for different
TCP protocols in ns-$3$ are very close, so we plot the results for NewReno).
As stated in Lemma~\ref{lemma:attack}, the hypothetical strategy for attackers,
assuming that they are able to
know their exact allowed rates and control remote packet losses, produces
an unreachable upper bound for their bandwidth shares.
Further, our flow-level simulator and the packet-level ns-$3$ simulator
share (almost) the same results.
\section{Related Work}\label{related}
In this section, we discuss related work that has inspired the design of Umbrella\xspace.
Generally speaking, we categorize the previous DDoS defense approaches
into two major schools (\emph{i.e.,}\xspace filtering-based and capability-based approaches), whereas
there are other approaches built on different defense primitives.
Filtering-based systems (\emph{e.g.,}\xspace IP Traceback~\cite{practicalIPTrace, advancedIPTrace}, AITF~\cite{AITF},
Pushback \cite{pushback, implementPushback}, StopIt~\cite{StopIt})
stop DDoS attacks by filtering attack flows.
Thus they need to distinguish attack flows from
legitimate ones. For instance, IP Traceback uses a packet marking
algorithm to construct the path that carries attack flows so as to block
them. AITF aggregates all traffic traversing the same series of
ASs as one \emph{Flow} and blocks such flows if the victim suspects attacks.
Pushback informs upstream routers to block certain type of traffic.
StopIt assumes the victim can identify the attack flows.
However, filtering-based systems often require remote ASs to block attack traffic on the victim's behalf,
which is difficult to enforce in the Internet.
Further, these systems may falsely block legitimate flows
since the method used to distinguish attack flows could have a high false positive rate.
The capability-based systems, such as SIFF~\cite{siff} and TVA~\cite{TVA}, try to suppress attack traffic by only accepting packets carrying valid capabilities. The original design is vulnerable to the DoC attack~\cite{doc}, which can be mitigated by the Portcullis protocol~\cite{portcullis}.
NetFence \cite{netfence} is proposed to achieve network-wide per-sender fairness
based on capabilities. However, these approaches assume universal capability deployment.
CRAFT \cite{craft} and Mirage~\cite{mirage} are proposed towards real-world
deployment. CRAFT emulates TCP states for all traversing flows so that no one
can obtain a greater share than what TCP allows.
However, CRAFT requires upgrades of both the Internet core and end-hosts.
Mirage~\cite{mirage}, a puzzle-based solution, needs to be
incorporated into IPv6 deployment. The state-of-the-art in this category MiddlePolice~\cite{MiddlePolice} is readily deployable in the current Internet. However, it still relies on cloud infrastructure to police traffic, which may be privacy-invasive for some organizations.
Other DDoS defense solutions, besides the above two categories,
include SpeakUp~\cite{speakup}, Phalanx \cite{phalanx}, SOS
\cite{sos} and a few future Internet architecture proposals like XIA~\cite{xia} and SCION~\cite{scion}.
SpeakUp allows legitimate senders to increase their rates to compete with attackers.
Such an approach is effective when the bottleneck happens at the application layer so that
legitimate users can get more requests processed given all their requests can be delivered.
In the case where the network is the bottleneck, SpeakUp may potentially congest
the network. Phalanx and SOS propose to use large scale overlay networks to defend against DDoS attacks.
XIA and SCION focus on building the clean-slate Internet architecture so as to enhance
Internet security, \emph{e.g.,}\xspace enforcing accountability~\cite{aip}.
In contrast to this prior work, Umbrella\xspace is motivated by a real-world threat and achieves two critical features (\emph{i.e.,}\xspace deployability and privacy preservation) towards this end.
\section{Conclusion and Future Work}\label{conclusion}
This paper presents the design, implementation and evaluation of Umbrella\xspace, a new DDoS defense mechanism enabling ISPs to offer readily deployable and privacy-preserving DDoS prevention services. To provide effective DDoS prevention, Umbrella\xspace merely requires independent deployment at the victim's ISP, with no upgrades to the Internet core or end hosts, making it immediately deployable. Further, Umbrella\xspace does not require the ISP to terminate the victim's application connections, allowing the ISP to operate at the network layer as usual. Umbrella\xspace's multi-layered defense stops various DDoS attacks and provides both guaranteed and elastic bandwidth shares for legitimate clients. Based on our prototype implementation, we demonstrate that Umbrella\xspace scales to large scale DDoS attacks involving millions of attackers while introducing negligible packet processing overhead. Finally, our physical testbed experiments and large scale simulations show that Umbrella\xspace is effective in mitigating various strategic DDoS attacks.
We envision two major follow-up directions of this work. First, the user-specific layer in Umbrella\xspace enables a potential DDoS victim to enforce self-desired traffic control policies during DDoS mitigation. However, one challenge is how to guide the victim to develop reasonable policies that best suit its business logic, since proposing valid policies may require a profound understanding of the victim's network traffic, which typically depends on comprehensive traffic monitoring and analysis. Unfortunately, a potential victim may lack such capability. Thus, designing and implementing machine-learning-based traffic discovery tools is part of our future work. The second direction is to enable smart payment between ISPs and potential victims. The high-level goal is to ensure that ISPs and victims can unambiguously agree on certain filtering services, so that an ISP is paid properly for each attack packet it filters and a potential victim can reclaim its payment if the ISP fails to stop attacks. We propose to design a smart-contract based system in this regard, relying on the ``non-stoppable'' feature of smart contracts. Our initial proposal is under review.
\balance
\small
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec:introduction}
The parametric dependence of an eigenvector of an operator
often exhibits anholonomy in its phase
~\cite{GeometricPhaseReview}.
A simple demonstration of the phase anholonomy in an eigenvector
of a Hamiltonian is shown by Berry~\cite{Berry:PRSLA-430-405}:
Prepare the system to be in an eigenstate of
the Hamiltonian, whose energy spectrum is assumed to be discrete and
nondegenerate.
During an adiabatic change of the parameters of the Hamiltonian,
which is kept nondegenerate along the change,
the system continuously remains in an
eigenstate of the instantaneous Hamiltonian, according to
the adiabatic theorem~\cite{Born:ZP-51-165}.
When the parameter returns to its initial value,
after the adiabatic change along a closed path in the parameter space,
the difference between the initial and the final state vectors is only
a phase, which is composed of two ingredients: One is
called a dynamical phase that is determined by the accumulation of the
eigenenergy along the adiabatic time evolution. The other is called
a geometric phase, or phase anholonomy, which reflects
the geometric
structure of the family of eigenvectors in the parameter space.
There is a non-Abelian generalization of the phase anholonomy:
This was pointed out by Wilczek and Zee in the parametric change of an
eigenspace of a Hamiltonian that has a spectral degeneracy%
~\cite{Wilczek:PRL-52-2111}.
The phase anholonomy appears in various fields of physics, besides
quantum mechanics, and brings profound
consequences~\cite{GeometricPhaseReview}.
Recently, Cheon found exotic anholonomies, which are completely different
from the conventional phase anholonomy, in a family of systems with
generalized pointlike potentials~\cite{Cheon:PLA-248-285}.
Cheon's anholonomies appear, surprisingly, both in eigenenergies and
eigenvectors:
The trail of an eigenenergy along a change of parameters
on a closed path that encircles a singularity does not draw a closed curve
but, instead, a spiral.
Since the initial and the final eigenenergies in the closed path are
different eigenvalues of a Hermitian operator,
the corresponding eigenvectors must be orthogonal.
Hence the eigenenergy anholonomy induces another anholonomy in the
direction of eigenvectors.
The origin of Cheon's anholonomies in the family of systems with the
generalized pointlike potentials
is identified with the geometrical structure of the family's
parameter space~\cite{Cheon:AP-294-1,Tsutsui:JMP-42-5687}.
In order to distinguish Cheon's anholonomy in the directions of eigenvectors
from
Wilczek-Zee's phase anholonomy, which
requires a degenerate spectrum and, along adiabatic changes on closed paths,
generally transports an eigenvector into a direction that is not
orthogonal to the initial one,
we will call the former an {\em eigenspace anholonomy\/}:
Wilczek-Zee's phase anholonomy concerns the change of an eigenvector within
a single, degenerate eigenspace, whereas
Cheon's eigenspace anholonomy, which does not require spectral degeneracies,
concerns the journey of an eigenvector from one eigenspace
into another.
We can expect that Cheon's anholonomies
will bring profound consequences in various fields of physics,
just as the phase anholonomy has.
For example, in the adiabatic (sometimes referred to as
Born-Oppenheimer~\cite{Born:AP-84-457})
approximation~\cite{Born:1954},
it has been considered to be legitimate to assume that
an adiabatic potential surface, which is an eigenvalue of an electronic
Hamiltonian with a frozen nuclear configuration, is a single-valued
function of the nuclear configuration. The single-valuedness would
be broken if Cheon's eigenenergy anholonomy emerged. A similar question
may be raised in the Bloch theory in solid state physics~\cite{Kittel:ISSP}.
At the same time, Cheon's anholonomies may be applied to manipulate
quantum systems to transfer a quantum state adiabatically into
another state, as is suggested by Cheon~\cite{Cheon:PLA-248-285}.
The last point will be discussed more precisely in this paper.
However, all examples of the eigenenergy anholonomy known up to now
require an exotic connection condition
around a singular potential~\cite{Tsutui:JPA-36-275}.
Hence it is still worthwhile to find systems that exhibit
Cheon's anholonomies.
The purpose of the present paper is to show Cheon's anholonomies
in periodically driven systems.
More precisely, we will discuss
quasienergy and eigenspace anholonomies with respect to Floquet operators
that describe unit time evolutions of the periodically driven systems.
First, we provide an instance of a quantum map, i.e., a quantum system
under a periodically pulsed perturbation~\cite{QuantumMap}.
The simplicity of the Floquet operators of quantum maps
allows a thorough analysis.
In order to prepare it, the parametric dependence,
induced by the change of the strength of the perturbation,
of eigenvectors of the Floquet operators of quantum maps is reviewed in
Section~\ref{sec:adiabaticTransport}.
In Section~\ref{sec:rank1}, a quantum map that is perturbed by
a rank-$1$ operator is introduced.
The details of its properties are explained
in Appendices~\ref{sec:normalForm} and~\ref{sec:cyclicity}.
In Section~\ref{sec:existance}, it is
shown that the rank-$1$ perturbation, with respect to the original
Floquet operator, enables us to introduce a family of Floquet
operators to realize Cheon's anholonomies.
Several examples are shown in Section~\ref{sec:examples}.
Second, the stability of the anholonomies is examined.
A geometrical analysis, which is shown in Section~\ref{sec:abundance},
of the family of Floquet operators
elucidates that the appearance of Cheon's anholonomies is not
restricted to periodically pulsed systems and is also possible
in periodically driven systems in general.
Furthermore, we may claim that Cheon's anholonomies are abundant
in systems whose time evolutions are described by Floquet operators.
Among possible consequences and applications of our result,
Section~\ref{sec:manipulation} provides a discussion on a design
principle on quantum state manipulations along adiabatic passages.
Section~\ref{sec:summary} provides a discussion and a summary.
A part of the present result was briefly announced in Ref.~\cite{TM061}.
\section{Adiabatic transport of eigenvectors in a quantum map}
\label{sec:adiabaticTransport}
To prepare our analysis of quantum maps,
we review the parametric motions of eigenvectors and eigenvalues of
Floquet operators and the adiabatic theorem for periodically driven systems.
Let us consider a periodically pulsed driven system (with a period $T$)
described by the ``kicked'' Hamiltonian:
\begin{equation}
\label{eq:kickedHamiltonian}
\hat{H}(t) = \hat{H}_0 + \lambda\hat{V}\sum_{n\in\bbset{Z}}\delta(t - nT),
\end{equation}
where $\hat{H}_0$ and $\hat{V}$ describe
the ``free'' motion and the pulsed perturbation, respectively,
and $\lambda$ is the strength of the perturbation.
In the following, we focus on the stroboscopic description of the
state vector $\ket{\psi_n}$ at $t = nT-0$. The time evolution of
$\ket{\psi_n}$ is described by the quantum map
$\ket{\psi_{n+1}}=\hat{U}_{\lambda}\ket{\psi_{n}}$, where
\begin{equation}
\label{eq:defUlambda}
\hat{U}_{\lambda} = e^{-i \hat{H}_0 T/\hbar} e^{-i\lambda\hat{V}/\hbar}
\end{equation}
is a Floquet operator~\cite{QuantumMap}.
In the following, we set $\hbar = 1$.
In order to show a simple example of the parametric motions of
eigenvalues and eigenvectors, we assume that
the spectrum of $\hat{U}_{\lambda}$ contains only
discrete components and has no degeneracy.
At the same time, in order to avoid subtle issues that arise
from the infinite dimensionality
of the Hilbert space $\mathcal{H}$~\cite{endnote:infinite},
we assume that $N\equiv\dim\mathcal{H}$ is finite.
This assumption does no harm to the descriptions of many systems
where an appropriate introduction of the truncation of the Hilbert space
is feasible.
Since $\hat{U}_{\lambda}$ (\ref{eq:defUlambda}) is unitary,
its eigenvalues $\{z_n(\lambda)\}_{n=0}^{N-1}$
lie on the unit circle, i.e., $|z_n(\lambda)|=1$.
The phase of $z_n(\lambda)$ indicates
the increment of the dynamical phase during the unit time evolution
whose initial state is the corresponding eigenstate $\ket{\xi_n(\lambda)}$.
The time-average of the dynamical phase
determines a quasienergy
$E_n(\lambda) =
- T^{-1} {\rm Im}[\ln z_n(\lambda)]
$
(or, $z_n(\lambda) = e^{-i E_n(\lambda) T}$).
Note that the value of quasienergy has an ambiguity because of the period
$2\pi/T$ in the quasienergy space.
We remark that the eigenvalue equation
\begin{equation}
\label{eq:eigenvalueEq}
\hat{U}_{\lambda} \ket{\xi_n(\lambda)}
= z_n(\lambda) \ket{\xi_n(\lambda)}
\end{equation}
determines the eigenvalue and the eigenvector only pointwise
in $\lambda$.
By assuming the continuity about $\lambda$, we obtain the derivatives of
$E_n(\lambda)$ and $\ket{\xi_n(\lambda)}$~\cite{Nakamura:PRA-35-5294}:
\begin{eqnarray}
\label{eq:leveldynamics}
\pdfrac{}{\lambda} E_n(\lambda) &=&
\frac{1}{T} \braOket{\xi_n(\lambda)}{\hat{V}}{\xi_n(\lambda)},
\\
\label{eq:vectordynamics}
\pdfrac{}{\lambda} \ket{\xi_n(\lambda)} &=&
-i A_n(\lambda)\ket{\xi_n(\lambda)}
\nonumber \\ &&
{}+ i\sum_{m\ne n}
\frac{z_m(\lambda) \braOket{\xi_m(\lambda)}{\hat{V}}{\xi_n(\lambda)}}%
{z_m(\lambda) - z_n(\lambda)}
\ket{\xi_m(\lambda)},
\nonumber\\
\end{eqnarray}
where $\ket{\xi_n(\lambda)}$ is assumed to be normalized and
$A_n(\lambda) \equiv
i \bra{\xi_n(\lambda)}\partial\ket{\xi_n(\lambda)}/\partial \lambda$
is a geometric gauge potential~\cite{Mead:JCP-70-2284}.
These derivatives compose a set of equations of motion for a
virtual time $\lambda$~\cite{endnote:leveldynamics}.
With a given ``initial condition'' of
$\{E_n(\lambda), \ket{\xi_n(\lambda)}\}_n$, at $\lambda = \lambda_0$,
we may integrate the equations of motion~(\ref{eq:leveldynamics}) and
(\ref{eq:vectordynamics}).
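As a sanity check, the quasienergy derivative in Eq.~(\ref{eq:leveldynamics}) can be compared against a finite difference. The following is a minimal sketch (assuming Python with NumPy; the random model and all variable names are illustrative, not taken from the text) for a generic kicked system with $T=1$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
T = 1.0

# A generic kicked model U_lambda = exp(-i H0 T) exp(-i lambda V),
# with Hermitian H0 and V (V need not be rank 1 here).
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H0 = (A + A.conj().T) / 2
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Vop = (B + B.conj().T) / 2

def expmH(H, t):
    """exp(-i t H) for Hermitian H, via spectral decomposition."""
    w, Q = np.linalg.eigh(H)
    return Q @ np.diag(np.exp(-1j * t * w)) @ Q.conj().T

def quasienergies_and_vecs(lam):
    """Quasienergies E_n = -arg(z_n)/T mod 2*pi/T, sorted, with eigenvectors."""
    z, X = np.linalg.eig(expmH(H0, T) @ expmH(Vop, lam))
    E = np.mod(-np.angle(z), 2 * np.pi) / T
    order = np.argsort(E)
    return E[order], X[:, order]

# Central finite difference of E_n(lambda); for a generic draw no
# quasienergy sits at the branch cut between lambda +/- h.
lam, h = 0.7, 1e-6
E_plus, _ = quasienergies_and_vecs(lam + h)
E_minus, _ = quasienergies_and_vecs(lam - h)
dE_fd = (E_plus - E_minus) / (2 * h)

# Matrix element T^{-1} <xi_n| V |xi_n> from Eq. (leveldynamics).
E, X = quasienergies_and_vecs(lam)
dE_hf = np.array([np.real(X[:, n].conj() @ Vop @ X[:, n]) / T
                  for n in range(N)])

assert np.allclose(dE_fd, dE_hf, atol=1e-4)
print("quasienergy derivative matches the matrix element")
```

The agreement holds for any Hermitian $\hat V$, not only for the rank-$1$ perturbation used later, since the relation is a Hellmann-Feynman-type identity for unitary operators.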
Note that, under the presence of
anholonomy~\cite{GeometricPhaseReview},
the single-valuedness of the solution
$\{E_n(\lambda), \ket{\xi_n(\lambda)}\}_n$ generally holds only locally
in the parameter space of $\lambda$.
The adiabatic theorem~\cite{Born:ZP-51-165}
for periodically driven systems~\cite{Holthaus:PRL-69-1596}
provides
a physical significance of the geometry (i.e., $\lambda$-dependence) of
$\ket{\xi_n(\lambda)}$.
Note that the parameter $\lambda$, which is supposed to be slowly changed,
is the strength of the perturbation that is applied periodically:
$\lambda$ will be changed from $\lambda_{\rm i}$ to $\lambda_{\rm f}$, during
the $M$ steps, where the corresponding time interval is $TM$.
Let $\lambda_j$ be the value of $\lambda$ at the $j$-th step
($0 \le j \le M$).
In particular $\lambda_0 = \lambda_{\rm i}$ and
$\lambda_{M} = \lambda_{\rm f}$.
The slowness of the change of the parameter is
expressed by the condition
$\lambda_{j+1} - \lambda_j = \mathcal{O}(M^{-1})$ as $M\to\infty$.
We start with an initial condition that, at $\lambda=\lambda_{\rm i}$,
the system is in an
eigenstate $\ket{\xi_n(\lambda_{\rm i})}$ of $\hat{U}_{\lambda_{\rm i}}$.
The final state $\ket{\Psi_{\rm f}}$ is
\begin{equation}
\ket{\Psi_{\rm f}} \equiv
\Torder\left[\prod_{j=1}^{M} \hat{U}_{\lambda_j}\right]
\ket{\xi_n(\lambda_{\rm i})},
\end{equation}
where $\Torder$ represents a time-ordered (or, equivalently, path-ordered)
product.
According to the adiabatic theorem,
the final state converges, up to a phase, to
$\ket{\xi_n(\lambda_{\rm f})}$ as $M\to\infty$.
In the following, we will evaluate the phase of the final state.
From the equation of motion~(\ref{eq:vectordynamics}), we have
\begin{equation}
\begin{split}
&\hat{U}_{\lambda} \ket{\xi_n(\lambda - \delta)}
\\&=
\exp\left\{-i E_n(\lambda)T + i A_n(\lambda)\delta\right\}
\ket{\xi_n(\lambda)}
\\ &\quad
- \sum_{m\ne n}
\frac{iz_m(\lambda)^2\bra{\xi_m(\lambda)}\hat{V}\ket{\xi_n(\lambda)}\delta}{z_m(\lambda) - z_n(\lambda)}\ket{\xi_m(\lambda)}
+\mathcal{O}(\delta^2)
\end{split}
\end{equation}
as $\delta\to0$.
According to the adiabatic theorem~\cite{Holthaus:PRL-69-1596},
we need only the first term above for the evaluation of the phase.
Hence we have
\begin{equation}
\ket{\Psi_{\rm f}}
\simeq
\exp \left\{
- i \sum_{j=1}^{M} E_n(\lambda_j) T
+ i \int_{\lambda_{\rm i}}^{\lambda_{\rm f}} A_n(\lambda) d\lambda
\right\}
\ket{\xi_n(\lambda_{\rm f})},
\end{equation}
as $M\to\infty$.
The first and the second terms in the phase factor correspond to the dynamical
and geometric phases, respectively~\cite{Berry:PRSLA-430-405}.
\section{Quantum map under a rank-$1$ perturbation}
\label{sec:rank1}
In order to demonstrate the simplest example of Cheon's
anholonomies in quantum maps, we employ a rank-$1$ perturbation
$\hat{V}=\ket{v}\bra{v}$ in Eq.~(\ref{eq:defUlambda})
with a normalized vector $\ket{v}$~\cite{Combescure:JSP-59-679}.
Since $\hat{V}$ satisfies $\hat{V}^2=\hat{V}$,
the quantum map~(\ref{eq:defUlambda}) has a periodicity about $\lambda$.
This is shown by an expansion of $\hat{U}_{\lambda}$ in $\hat{V}$,
\begin{equation}
\label{eq:expandUlambda}
\hat{U}_{\lambda} =
\hat{U}_{0} \left\{1 - (1 - e^{-i\lambda}) \hat{V}\right\},
\end{equation}
which has a $2\pi$ periodicity in $\lambda$.
Hence the parameter space of $\lambda$ is identified with a circle $S^1$.
We will discuss the parametric motion of quasienergies and
eigenvectors of $\hat{U}_{\lambda}$, along the changes of $\lambda$ on $S^1$.
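The algebra behind Eq.~(\ref{eq:expandUlambda}) rests only on $\hat V^2 = \hat V$ and is easy to confirm numerically. Below is a minimal sketch (assuming Python with NumPy; the random instance is purely illustrative) that checks the closed form of $e^{-i\lambda\hat V}$ and the $2\pi$ periodicity of $\hat U_\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Random normalized |v> and the rank-1 projector V = |v><v|.
v = rng.normal(size=N) + 1j * rng.normal(size=N)
v /= np.linalg.norm(v)
V = np.outer(v, v.conj())

lam = 1.3  # an arbitrary perturbation strength lambda

# exp(-i lam V), computed independently via the spectral
# decomposition of the Hermitian projector V.
w, Q = np.linalg.eigh(V)
exp_direct = Q @ np.diag(np.exp(-1j * lam * w)) @ Q.conj().T

# Closed form used in Eq. (expandUlambda), which follows from V^2 = V:
# exp(-i lam V) = 1 - (1 - e^{-i lam}) V.
exp_closed = np.eye(N) - (1 - np.exp(-1j * lam)) * V
assert np.allclose(exp_direct, exp_closed)

# 2*pi periodicity of U_lambda = U_0 exp(-i lam V) in lambda.
U0 = np.linalg.qr(rng.normal(size=(N, N))
                  + 1j * rng.normal(size=(N, N)))[0]
U_lam = U0 @ exp_closed
U_lam_2pi = U0 @ (np.eye(N)
                  - (1 - np.exp(-1j * (lam + 2 * np.pi))) * V)
assert np.allclose(U_lam, U_lam_2pi)
print("rank-1 expansion and 2*pi periodicity verified")
```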
Two kinds of ``trivial'' eigenvectors of $\hat{U}_{\lambda}$ are shown
in order to simplify the later analysis on Cheon's anholonomies.
For the first kind, we suppose that an eigenvector
$\ket{\xi}$ of $\hat{U}_{\lambda_0}$
is orthogonal to $\ket{v}$ and the corresponding eigenvalue is $z_0$.
Then, this implies that $\ket{\xi}$ is
also an eigenvector of $\hat{U}_{\lambda}$ for all $\lambda$
and the corresponding eigenvalue $z_0$ does not depend on $\lambda$.
In fact, we have
\begin{equation}
\label{eq:orthoTriviall}
\hat{U}_{\lambda}\ket{\xi}
= \hat{U}_{\lambda_0}e^{-i(\lambda - \lambda_0)\hat{V}/\hbar}\ket{\xi}
= \hat{U}_{\lambda_0}\ket{\xi}
= z_0 \ket{\xi},
\end{equation}
where we used $\hat{V}\ket{\xi} =\ket{v}\bracket{v}{\xi}=0$
and $\hat{U}_{\lambda_0}\ket{\xi} = z_0 \ket{\xi}$.
In Appendix~\ref{sec:normalForm},
we will show that such trivial eigenvectors~(\ref{eq:orthoTriviall})
are created by a spectral degeneracy of $\hat{U}_{\lambda}$.
For the second kind, we suppose that $\ket{v}$ is an eigenvector
of $\hat{U}_{\lambda_0}$
and the corresponding eigenvalue is $z_0$.
If this is the case, all the eigenvectors of
$\hat{U}_{\lambda_0}$, except $\ket{v}$, are orthogonal to $\ket{v}$, and
accordingly become trivial eigenvectors of the first kind mentioned above.
Furthermore, $\ket{v}$ is also a trivial one in the sense that
$\ket{v}$ is an eigenvector of $\hat{U}_{\lambda}$ for all $\lambda$.
This is because
\begin{eqnarray}
\label{eq:paraTriviall}
\hat{U}_{\lambda}\ket{v}
&=&
\hat{U}_{\lambda_0}e^{-i(\lambda - \lambda_0)\hat{V}/\hbar}\ket{v}
= \hat{U}_{\lambda_0} e^{-i(\lambda - \lambda_0)/\hbar}\ket{v}
\nonumber \\
&=&
z_0 e^{-i(\lambda - \lambda_0)/\hbar} \ket{v},
\end{eqnarray}
where the corresponding eigenvalue
$z_0 e^{-i(\lambda - \lambda_0)/\hbar}$ depends on $\lambda$.
The analysis of the two kinds of trivial eigenvectors
is now complete.
In the following, we assume the absence of these trivial eigenvectors
since they
are irrelevant to the later argument to look for Cheon's
anholonomies. A systematic procedure to reduce a Hilbert space
by excluding these trivial eigenvectors of $\hat{U}_{\lambda}$
is explained in Appendix~\ref{sec:normalForm}.
On the reduced Hilbert space $\mathcal{H}$, it is assured that the spectrum of
$\hat{U}_{\lambda}$ has no degeneracies for all $\lambda$.
In terms of $\hat{U}_0$ and $\ket{v}$,
this assumption turns out to be equivalent to the following two conditions.
(i) The spectrum of $\hat{U}_0$ is nondegenerate.
Note that we have already introduced another assumption that
the spectrum of $\hat{U}_0$ contains only discrete and
a finite number of components to assure the smoothness of the parametric
dependence of eigenvalues and eigenvectors on $\lambda$
in Section~\ref{sec:adiabaticTransport}.
(ii) $\ket{v}$ is not orthogonal to any eigenvector of $\hat{U}_0$,
otherwise the reduction is not complete.
Note that (ii) implies that $\ket{v}$ is not any eigenvector of $\hat{U}_0$.
Thus the conditions (i) and (ii) guarantee, for all $\lambda$,
\begin{equation}
\label{eq:v_xi}
0 < |\bracket{v}{\xi}| < 1
\quad\text{for any eigenvector $\ket{\xi}$ of $\hat{U}_{\lambda}$},
\end{equation}
where equality at the lower or the upper bound would hold
only if a trivial eigenvector remained.
The conditions (i) and (ii) are further paraphrased with the help of
the notion {\em cyclicity}~\cite{RSICyclicity}, when we restrict
ourselves to
the dimensionality of the Hilbert space $\mathcal{H}$ being finite.
If $\hat{U}_0$ and $\ket{v}$ satisfy
$\mathcal{H} = \overline{{\rm span} \{(\hat{U}_0)^m \ket{v}\}_{m=0}^{\infty}}$%
~\cite{note:infiniteSpan},
$\ket{v}$ is called a cyclic vector for $\hat{U}_0$~\cite{RSICyclicity}.
It is shown in Appendix~\ref{sec:cyclicity} that the conditions (i) and (ii)
are equivalent to
(i') $\hat{U}_0$ has a cyclic vector
and (ii') $\ket{v}$ is a cyclic vector for $\hat{U}_0$,
respectively.
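In finite dimensions the cyclicity conditions (i') and (ii') are straightforward to test: $\ket{v}$ is cyclic for $\hat U_0$ if and only if the Krylov matrix $[\ket{v}, \hat U_0\ket{v}, \dots, \hat U_0^{N-1}\ket{v}]$ has full rank. The following sketch (assuming Python with NumPy; the instances are illustrative) also reproduces the broken-cyclicity situation of Fig.~\ref{fig:multilevel}(b):

```python
import numpy as np

N = 5
rng = np.random.default_rng(1)

# U_0 written in its eigenbasis, with a nondegenerate spectrum
# (condition (i)): distinct eigenphases on the diagonal.
phases = np.sort(rng.uniform(0, 2 * np.pi, size=N))
U0 = np.diag(np.exp(-1j * phases))

def krylov_rank(U, v, N):
    """Rank of span{U^m v : m = 0..N-1}; v is cyclic iff this equals N."""
    cols = [v]
    for _ in range(N - 1):
        cols.append(U @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols))

# (ii): |v> with all components nonzero in the eigenbasis of U_0
# is cyclic (the Krylov matrix is a nonsingular Vandermonde matrix).
v = rng.normal(size=N) + 1j * rng.normal(size=N)
v /= np.linalg.norm(v)
assert krylov_rank(U0, v, N) == N

# Zeroing two components breaks cyclicity: the Krylov space only
# spans the three remaining eigendirections, as in Fig. 2(b).
v_broken = v.copy()
v_broken[3] = v_broken[4] = 0.0
v_broken /= np.linalg.norm(v_broken)
assert krylov_rank(U0, v_broken, N) == 3
print("cyclicity check passed")
```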
We will discuss the case that these assumptions are broken
in Section~\ref{sec:examples}.
\section{Cheon's anholonomies in quantum maps}
\label{sec:existance}
What happens to the quasienergies and the eigenvectors
when we adiabatically increase $\lambda$ by $2\pi$, the period of both
the Floquet operator~(\ref{eq:expandUlambda}) and its spectrum,
starting from $\lambda=\lambda_0$?
The argument above suggests that we may have a conventional
(Abelian) phase anholonomy that appears only in the phase of
the eigenvectors.
However, the following argument will elucidate that we meet
Cheon's anholonomies in quasienergies as well as in eigenspaces.
First, we examine quasienergies $E_n(\lambda)$ ($0\le n <N = \dim\mathcal{H}$).
Note that we have $N$ cases to choose ``the ground quasienergy''
due to the periodicity in the quasienergy space.
Once we choose a ground state, to which the quantum number $0$ is assigned,
the quantum number $n$ ($<N$) is assigned so
that $E_n(\lambda)$ increases as $n$ increases.
More precisely, in order to remove ambiguities due to the periodicity in
the quasienergies, we choose the branch of the quasienergies,
at $\lambda=\lambda_0$,
as $E_0(\lambda_0) < E_1(\lambda_0) < \cdots < E_{N-1}(\lambda_0)
< E_0(\lambda_0)+2\pi T^{-1}$ holds,
where $E_{0}(\lambda_0)$ and $E_0(\lambda_0)+2\pi T^{-1}$ correspond
to the same eigenvalue $z_0(\lambda_0) = e^{-i E_{0}(\lambda_0) T}$.
For brevity, we identify a quantum number $n$ with $n+N$.
To examine how much the ground quasienergy $E_0(\lambda_0)$ increases
during a cycle of $\lambda$, we evaluate
\begin{equation}
\label{eq:DEnIE}
\Delta E_n \equiv
\int_{\lambda_0}^{\lambda_0+2\pi}
\pdfrac{E_n(\lambda)}{\lambda} d\lambda.
\end{equation}
Note that $\Delta E_n$ is ``quantized''
due to the periodicity of the spectrum, e.g., we have
\begin{equation}
\label{eq:quantizedDE0}
\Delta E_0 = E_{\nu}(\lambda_0) - E_0(\lambda_0) \mod 2\pi T^{-1}
\quad\text{for some $\nu$},
\end{equation}
because $E_0(\lambda)$ should arrive at $E_{\nu}(\lambda_0)$
for some $\nu$ as $\lambda\nearrow \lambda_0+2\pi$.
To determine which $\nu$ is possible or not,
we evaluate the integral expression~(\ref{eq:DEnIE}) of $\Delta E_n$ with
$\partial {E_n(\lambda)}/\partial\lambda
= T^{-1} \braOket{\xi_n(\lambda)}{\hat{V}}{\xi_n(\lambda)}$~%
\cite{Nakamura:PRA-35-5294}.
Since $\hat{V}$ satisfies $\hat{V}^2=\hat{V}$,
the eigenvalues of $\hat{V}$ are only $0$ and $1$.
Accordingly we have
$0 \le \partial {E_n(\lambda)}/\partial\lambda \le T^{-1}$.
However, the equalities for the minimum and the maximum cannot hold
because of $\partial{E_n(\lambda)}/\partial\lambda
= T^{-1} |\bracket{v}{\xi_n(\lambda)}|^2$
and
$0 < |\bracket{v}{\xi_n(\lambda)}| < 1$ (see Eq.~(\ref{eq:v_xi})).
Hence we have
$
0 < \partial {E_n(\lambda)}/\partial\lambda < T^{-1}
$
and accordingly
\begin{equation}
\label{eq:DeltaEnRestriction}
0 < \Delta E_n < 2\pi T^{-1}.
\end{equation}
This imposes a restriction $0 < \nu < N$ in Eq.~(\ref{eq:quantizedDE0}).
In particular, neither $\nu = 0$ nor $\nu = N$ is possible.
Namely, the quasienergy $E_0(\lambda)$ arrives at
$E_{\nu}(\lambda_0)$ ($0 < \nu < N$), instead of $E_0(\lambda_0)$,
as $\lambda\nearrow \lambda_0+2\pi$.
This is nothing but a manifestation of Cheon's anholonomy in quasienergy.
If the system is two-level (i.e., $N = 2$),
the above argument immediately implies $\nu = 1$.
Hence it is straightforward to show that
\begin{equation}
\label{eq:quasienergyAnholonomyRank1}
E_n(\lambda_0 + 2\pi-0) = E_{n+1}(\lambda_0) \mod 2\pi T^{-1}
\end{equation}
holds for all $0 \le n < N$.
Equation~(\ref{eq:quasienergyAnholonomyRank1}) remains true for $N>2$.
Its justification requires one to examine a sum rule on
$\set{\Delta E_n}_{n=0}^{N-1}$:
\begin{equation}
\label{eq:sumRuleRank1}
\sum_{n=0}^{N-1}\Delta E_n
= \int_{\lambda_0}^{\lambda_0 + 2\pi} \frac{1}{T} ({\rm Tr} \hat{V})d\lambda
=\frac{2\pi}{T},
\end{equation}
where we used ${\rm Tr} \hat{V} = 1$ for $\hat{V}=\ket{v}\bra{v}$
with normalized $\ket{v}$.
The sum rule~(\ref{eq:sumRuleRank1}) implies that
Eq.~(\ref{eq:quasienergyAnholonomyRank1}) holds for all $0 \le n < N$,
and vice versa,
where the sum $\sum_{n=0}^{N-1}\Delta E_n$ in Eq.~(\ref{eq:sumRuleRank1})
takes its possible minimal value $2\pi T^{-1}$.
Indeed, if Eq.~(\ref{eq:quasienergyAnholonomyRank1}) were broken for
some $n$,
e.g.,
$E_{n}(\lambda_0 + 2\pi-0) = E_{n+\nu}(\lambda_0) \mod 2\pi/T$
with $1 < \nu < N$,
this would contradict the sum rule~(\ref{eq:sumRuleRank1}).
Thus the quasienergy anholonomy~(\ref{eq:quasienergyAnholonomyRank1})
for $N$-level quantum maps under
the rank-1 perturbation is revealed completely.
The quasienergy anholonomy~(\ref{eq:quasienergyAnholonomyRank1}) induces
an eigenspace anholonomy, which is expressed by projectors:
\begin{equation}
\ket{\xi_n(\lambda_0 + 2\pi-0)}\bra{\xi_n(\lambda_0 + 2\pi-0)}
= \ket{\xi_{n+1}(\lambda_0)}\bra{\xi_{n+1}(\lambda_0)}.
\end{equation}
Note that $\ket{\xi_{n}(\lambda_0)}$ and $\ket{\xi_{n+1}(\lambda_0)}$
are orthogonal, since the corresponding eigenvalues are different.
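Both the quasienergy and the eigenspace anholonomy can be observed numerically by following an eigenvector branch continuously over one period of $\lambda$. The sketch below (assuming Python with NumPy; the random model, the grid size, and the overlap-matching heuristic are illustrative choices, not from the text) confirms the one-step shift $n \to n+1$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3

# A random unitary U_0 and a generic |v>, so that the nondegeneracy
# and cyclicity conditions of Section rank1 hold almost surely.
U0 = np.linalg.qr(rng.normal(size=(N, N))
                  + 1j * rng.normal(size=(N, N)))[0]
v = rng.normal(size=N) + 1j * rng.normal(size=N)
v /= np.linalg.norm(v)
V = np.outer(v, v.conj())

def U(lam):
    # Eq. (expandUlambda): U_lambda = U_0 {1 - (1 - e^{-i lam}) V}.
    return U0 @ (np.eye(N) - (1 - np.exp(-1j * lam)) * V)

# Eigenvectors at lambda = 0, ordered by quasienergy
# E_n = -arg(z_n) mod 2*pi (with T = 1).
z0, X0 = np.linalg.eig(U(0.0))
X0 = X0[:, np.argsort(np.mod(-np.angle(z0), 2 * np.pi))]

# Follow the "ground" eigenvector continuously over one period,
# matching branches at neighboring lambda values by maximal overlap.
xi = X0[:, 0]
for lam in np.linspace(0, 2 * np.pi, 4001)[1:]:
    _, X = np.linalg.eig(U(lam))
    xi = X[:, np.argmax(np.abs(X.conj().T @ xi))]

# Cheon's anholonomy: the branch ends on the *next* eigenvector.
assert np.abs(np.vdot(X0[:, 1], xi)) > 0.99
assert np.abs(np.vdot(X0[:, 0], xi)) < 0.1
print("branch 0 -> 1 after one period of lambda")
```

The branch matching by maximal overlap assumes that the $\lambda$ grid is fine compared with the narrowest avoided crossing; an unlucky random draw may require a finer grid.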
Finally, we show an anholonomy in a state vector as a result of
the adiabatic increment of $\lambda$ by the period $2\pi$
from $\lambda=\lambda_0$.
When the initial state is prepared to be
an eigenstate $\ket{\xi_n(\lambda_0)}$, the corresponding final state is
\begin{equation}
\exp \left\{- i \sum_{j=1}^{M} E_n(\lambda_j) T
+ i \int_{\lambda_0}^{\lambda_0+2\pi} A_n(\lambda) d\lambda
\right\}
\ket{\xi_{n+1}(\lambda_0)}.
\end{equation}
If we continue the adiabatic increment of $\lambda$, the state vector
will become parallel to the eigenvector
$\ket{\xi_{n+\nu}(\lambda_0)}$ of $\hat{U}_{\lambda_0}$
after the completion of the $\nu$-th iteration of the periodic increment
and return to the initial eigenstate at the end of the $N$-th iteration.
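To see the anholonomic transport dynamically rather than through eigenvector bookkeeping, one can apply the time-ordered product of Floquet operators to an initial eigenstate. The following sketch (assuming Python with NumPy) uses the two-level model $\hat U_0 = \hat\sigma_z$, $\ket{v} = (\ket{\uparrow} - i\ket{\downarrow})/\sqrt{2}$ that is analyzed in Section~\ref{sec:examples}; the number of kicks $M$ is an illustrative choice:

```python
import numpy as np

M = 2000   # kicks during the sweep; larger M is more adiabatic
up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

U0 = np.diag([1.0, -1.0]).astype(complex)   # U_0 = sigma_z (delta = pi)
v = (up - 1j * down) / np.sqrt(2)
V = np.outer(v, v.conj())

def U(lam):
    # U_lambda = U_0 {1 - (1 - e^{-i lam}) V}, valid since V^2 = V.
    return U0 @ (np.eye(2) - (1 - np.exp(-1j * lam)) * V)

# Sweep lambda from 0 to 2*pi in M kicks (lambda_j = 2*pi*j/M),
# starting from the eigenstate xi_0(0) = |up>.
psi = up.copy()
for j in range(1, M + 1):
    psi = U(2 * np.pi * j / M) @ psi

# The state has been carried into the other eigenspace,
# xi_1(0) = |down>, up to a phase.
assert np.abs(np.vdot(down, psi)) > 0.999
assert np.abs(np.vdot(up, psi)) < 0.05
print("adiabatic anholonomic transport:", np.abs(np.vdot(down, psi)))
```

For finite $M$ a small residual overlap with the initial eigenvector remains; it vanishes as $M\to\infty$, consistent with the adiabatic theorem.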
\section{Examples}
\label{sec:examples}
The simplest example of Cheon's anholonomies occurs in a two-level system.
The Floquet operator of the unperturbed system is
\begin{equation}
\hat{U}_0 \equiv \ketbra{\uparrow}{\uparrow} + \ket{\downarrow}e^{-i\delta}\bra{\downarrow},
\end{equation}
where $\delta\in(0,2\pi)$ and $T=1$ are assumed.
We employ $\ket{v}=(\ket{\uparrow}-i\ket{\downarrow})/\sqrt{2}$, which satisfies
the conditions (i) and (ii) mentioned in Section~\ref{sec:rank1}.
Although the constant term in
$\hat{V}=\ket{v}\bra{v} = \frac{1}{2}(1-\hat{\sigma}_y)$
seems to be
irrelevant, this term arises naturally in projection operators
in the two-level system and is required to ensure that
the perturbed Floquet operator
$\hat{U}_{\lambda}=\hat{U}_{0} e^{-i\lambda\hat{V}}$
has the $2\pi$ periodicity about $\lambda$.
In order to show an analytic form of quasienergies and eigenvectors,
we employ $\delta=\pi$ (i.e., $\hat{U}_0 = \hat{\sigma}_z$).
The eigenvalues of $\hat{U}_{\lambda}$ are
$z_{0}(\lambda) = e^{-i\lambda/2}$ and
$z_{1}(\lambda) = -e^{-i\lambda/2}$.
The period of each eigenvalue about $\lambda$ is $4\pi$,
though the period of the spectrum $\{z_{0}(\lambda), z_{1}(\lambda)\}$ is
$2\pi$.
The corresponding quasienergies are
\begin{equation}
\label{eq:sampleE}
E_{0}(\lambda)= \frac{1}{2}\lambda\mod 2\pi
\quad\text{and}\quad
E_{1}(\lambda)= \pi + \frac{1}{2}\lambda\mod 2\pi.
\end{equation}
Now we demonstrate the anholonomy in quasienergy (see Fig.~\ref{fig:twolevel}):
At $\lambda=0$, we start from the quasienergy $E_{0}(0) = 0$ of the 0-th
eigenstate.
The increment of $\lambda$ increases
$E_0(\lambda)$ because of the fact $dE_0(\lambda)/d\lambda = \frac{1}{2} > 0$.
At $\lambda=2\pi$, $E_{0}(\lambda)$ arrives at
$\pi$, which agrees with the quasienergy $E_1(0) = \pi$
of the first eigenstate at $\lambda=0$.
Next, we examine the eigenvectors
\begin{align}
\label{eq:twoleveleigenvecs}
\ket{\xi_0(\lambda)} =
\begin{bmatrix}\cos(\lambda/4)\\ \sin(\lambda/4)\end{bmatrix},
\quad
\ket{\xi_1(\lambda)} =
\begin{bmatrix}-\sin(\lambda/4)\\ \cos(\lambda/4)\end{bmatrix}.
\end{align}
The corresponding geometric gauge potentials $A_{n}(\lambda)$ ($n=0,1$)
happen to vanish in the present case. Hence it is easy to
find the geometric phases from
the parametric dependence of the eigenvectors~(\ref{eq:twoleveleigenvecs}).
The excursion of the eigenvectors under the increase of the parameter
$\lambda$ is as follows:
\begin{align}
\ket{\xi_{0}(0)} &= \ket\uparrow,&\quad
\ket{\xi_{1}(0)} &= \ket\downarrow,
\nonumber \\
\ket{\xi_{0}(2\pi)} &= \ket{\xi_{1}(0)},&\quad
\ket{\xi_{1}(2\pi)} &= -\ket{\xi_{0}(0)},
\\
\ket{\xi_{0}(4\pi)} &= - \ket{\xi_{0}(0)},&\quad
\ket{\xi_{1}(4\pi)} &= -\ket{\xi_{1}(0)},
\nonumber
\end{align}
where nontrivial geometric phases appear after the completion
of the $4\pi$ increment of $\lambda$.
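The closed forms in Eqs.~(\ref{eq:sampleE}) and (\ref{eq:twoleveleigenvecs}), together with the excursion table above, can be verified directly (a minimal sketch, assuming Python with NumPy):

```python
import numpy as np

U0 = np.diag([1.0, -1.0]).astype(complex)   # sigma_z, i.e., delta = pi
v = np.array([1.0, -1j]) / np.sqrt(2)       # |v> = (|up> - i|down>)/sqrt(2)
V = np.outer(v, v.conj())

def U(lam):
    # U_lambda = U_0 {1 - (1 - e^{-i lam}) V}.
    return U0 @ (np.eye(2) - (1 - np.exp(-1j * lam)) * V)

def xi0(lam):
    return np.array([np.cos(lam / 4), np.sin(lam / 4)], dtype=complex)

def xi1(lam):
    return np.array([-np.sin(lam / 4), np.cos(lam / 4)], dtype=complex)

# Eigenvalue equations z_0 = e^{-i lam/2}, z_1 = -e^{-i lam/2}
# hold for all lambda.
for lam in np.linspace(0.0, 4 * np.pi, 17):
    z = np.exp(-1j * lam / 2)
    assert np.allclose(U(lam) @ xi0(lam), z * xi0(lam))
    assert np.allclose(U(lam) @ xi1(lam), -z * xi1(lam))

# The excursion table: one 2*pi cycle swaps the eigenvectors
# (eigenspace anholonomy); only after 4*pi does each return,
# picking up a geometric sign.
assert np.allclose(xi0(2 * np.pi), xi1(0.0))
assert np.allclose(xi1(2 * np.pi), -xi0(0.0))
assert np.allclose(xi0(4 * np.pi), -xi0(0.0))
print("two-level example verified")
```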
\begin{figure}
\includegraphics[width=8.6cm]{fig1a}
\hfill
\includegraphics[width=8.6cm]{fig1b}
\caption{\label{fig:twolevel} (Color online)
Parametric motions of quasienergies (bold lines) of two-level systems.
We choose the model whose ``ground'' quasienergy is zero at $\lambda = 0$.
The other quasienergy $\delta=\pi$ at $\lambda = 0$ is indicated
by the broken, horizontal lines.
(a) The example examined in the main text, Eq.~(\ref{eq:sampleE})
($\ket{v}=(\ket{\uparrow}-i\ket{\downarrow})/\sqrt{2}$).
The quasienergies draw two parallel lines, which have no
avoided crossing.
(b) A generic example
($\ket{v} = \cos(\pi/8)\ket{\uparrow} + \sin(\pi/8)\ket{\downarrow}$).
There is a single avoided crossing. The broken curve represents
$|\bracket{\uparrow}{\xi_0(\lambda)}|^2$, which depicts
that $\ket{\xi_0(\lambda)}$
becomes orthogonal to $\ket{\xi_0(0)}$ in
the limit $\lambda \uparrow 2\pi$.
}
\end{figure}
We suggest a possible implementation of
the example above in a charged particle with a spin-$1/2$.
Assume that the particle is localized to some place so that we may
ignore the motion of the particle.
The unperturbed system is the spin under a static magnetic
field. The perturbation $\hat{V}$ is composed of two ingredients:
One is a periodically pulsed magnetic field, whose direction needs to be
different from that of the unperturbed magnetic field.
The other is
a periodically pulsed electric field, which provides
``the constant part'' of $\hat{V}$. In order to prepare $\hat{V}$,
we need to adjust the ratio of the strength and the period of
the two perturbation fields.
Finally, we show another example that involves multiple levels
in Fig.~\ref{fig:multilevel} (a), where all quasienergies are involved
in the anholonomy.
This is due to the cyclicity of $\ket{v}$.
In Fig.~\ref{fig:multilevel}~(b), we also show an
example that breaks the cyclicity of $\ket{v}$.
This suggests that we may control the anholonomy to the limited number of
states by an appropriate choice of $\ket{v}$.
\begin{figure}
\includegraphics[width=8.6cm]{fig2a}
\hfill
\includegraphics[width=8.6cm]{fig2b}
\caption{\label{fig:multilevel} (Color online)
Parametric motions of quasienergies (bold lines) in systems
with multiple levels ($\dim\mathcal{H} = 5$).
The unperturbed Floquet operator $\hat{U}_0$,
whose ``ground'' quasienergy is adjusted to zero at $\lambda = 0$,
is randomly chosen.
The quasienergies at $\lambda = 0$ are indicated by the broken,
horizontal lines.
(a) A random choice of $\ket{v}$, which satisfies cyclicity for
$\hat{U}_0$.
All quasienergies exhibit anholonomy.
(b) An example for broken cyclicity in $\ket{v}$:
Only three components of $\ket{v}$, in the representation
that diagonalizes $\hat{U}_0$, take nonzero values.
The resultant parametric changes and anholonomy occur only in
the subspace ${\rm span} \set{\hat{U}_0^m\ket{v}}_{m}$,
whose dimensionality is three.
The other two quasienergies draw horizontal lines,
since they correspond to the trivial eigenvectors mentioned
in Section~\ref{sec:rank1},
and are not affected by the perturbation.
}
\end{figure}
\section{Geometry and abundance of quasienergy anholonomy}
\label{sec:abundance}
We remark on the geometry of the quasienergy anholonomy to discuss its
stability and abundance.
Concerning Cheon's eigenenergy anholonomy in a family of systems with
generalized pointlike potentials,
Tsutsui, F\"ulop, and Cheon examined the geometry of the anholonomy
using the fact that the parameter space of the family is
$U(2)$~\cite{Cheon:AP-294-1,Tsutsui:JMP-42-5687}.
When the dimension of the Hilbert space is two,
Tsutsui {\it et al.}'s argument is immediately applicable to the
quasienergy anholonomy in the systems whose unit time evolution is
described by a Floquet operator, which is a $2\times2$ unitary
matrix.
We employ a parametrization of such systems by
their quasienergy-spectrum $\{(E_0, E_1)\}$~(Fig.~\ref{fig:geometry} (a)),
whose element $(E_0, E_1)$ is identified with $(E_1, E_0)$.
The quotient quasienergy-spectrum space is accordingly
an orbifold $T^2/\bbset{Z}_2$
which has two topologically inequivalent and nontrivial
cycles (see Fig.~\ref{fig:geometry} (c) and Ref.~\cite{Tsutsui:JMP-42-5687}).
One cycle traverses the degeneracy line $E_0 = E_1$.
The other cycle concerns the ``increment'' (or ``decrement'') of
the quantum number.
More precisely,
the winding number along the latter cycle determines the increment of
the quantum number (Fig.~\ref{fig:geometry} (d)).
When the dimension of the Hilbert space is larger than $2$,
a similar geometrical argument should be possible.
The geometrical nature implies that the quasienergy anholonomy is
stable against perturbations that preserve the topology of the cycle.
Hence we may expect that the same anholonomy appears in
nonautonomous systems whose unit time evolution is described by
a Floquet operator, e.g., periodically kicked systems and periodically driven
systems.
The stability of Cheon's anholonomies against perturbations is also
expected from the fact
that the parametric dependence of the quasienergies has no crossings
(see Figs.~\ref{fig:twolevel} and \ref{fig:multilevel}~(a)).
To achieve the stability in practice, the gap of narrowly avoided
crossings needs to be enlarged. This is possible with a
suitable adjustment of $\ket{v}$
(e.g., see Figs.~\ref{fig:twolevel} (a) and (b)).
We also remark that the presence of the trivial eigenvectors,
which are introduced in Section~\ref{sec:rank1},
induces the crossings of quasienergies, as can be seen in
Fig.~\ref{fig:multilevel}~(b) and in Appendix~\ref{sec:normalForm}.
Hence quasienergy anholonomies coexisting with trivial eigenvectors
are fragile against perturbations in general.
\begin{figure}
\includegraphics[width=4.22cm]{fig3a}
\
\includegraphics[width=4.22cm]{fig3b}
\\[\baselineskip]
\includegraphics[width=4.22cm]{fig3c}
\
\includegraphics[width=4.22cm]{fig3d}
\caption{\label{fig:geometry} (Color online)
A parametrization with the quasienergy-spectrum of quantum systems whose
unit time evolutions are described by two-dimensional Floquet operators
is explained.
(a) An $E_0-E_1$ plane, where a and a' are at
$(0, 2\pi/T)$ and $(2\pi/T, 0)$, respectively.
Because of the periodicity of quasienergies, pairs of lines
Oa and a'O', and, Oa' and aO' are identified.
On the diagonal line $E_0 = E_1$, two quasienergies are degenerate.
In the subsequent figures, $(E_0, E_1)$ and $(E_1, E_0)$ are identified.
(b) $\triangle$Oba' and $\triangle$O'ba are identified with
$\triangle$Oba and $\triangle$O'ba', respectively, and so are removed.
(c) The quotient space $T^2/\bbset{Z}_2$.
Since Oa and a'O' in (b) are identical, they are arranged to make
a square. Furthermore, if identical lines Ob' and ab are put
together, a M\"obius strip with edges Ob and ab', which are
degenerate lines, is obtained
(see Ref.~\cite{Tsutsui:JMP-42-5687}).
(d) Parametric motion of spectrum on $T^2/\bbset{Z}_2$.
Bold and dashed lines, which are topologically equivalent
on the M\"obius strip, correspond to
Figs.~\ref{fig:twolevel} (a) and (b), respectively, where
$T=1$ is assumed. At $\lambda=0$,
they start from the open circle at $(0, \pi)$. As $\lambda$ increases,
they move toward ab. At $\lambda=\pi$, they arrive on a point on a line
ab, which is identical with $Ob'$. Then they return to $(0, \pi)$.
Such a winding along the M\"obius strip induces the quasienergy
anholonomy.
}
\end{figure}
\section{Application: anholonomic quantum state manipulation}
\label{sec:manipulation}
As an application of the quasienergy anholonomy, a design principle of
systems that achieve manipulations of quantum states with adiabatic passages
is proposed.
Before describing our argument, we mention that the conventional
works on the application of adiabatic passages to the manipulation of
quantum states are the textbook results~\cite{QuantumControlTexts}.
At the same time, there are interesting proposals on quantum circuits
whose elementary operations are composed of adiabatic processes%
~\cite{AdiabaticControl}.
The reason adiabatic processes are employed is that
the operations they govern are expected to be stable.
On the other hand, the manipulation that involves the phase anholonomy
is expected to be stable under perturbations, due to its topological
nature.
Our scheme proposed here also relies on the adiabatic processes and
employs nonconventional, Cheon's anholonomies in quantum maps.
The aim is to evolve a quantum state (``the initial target'')
into another state (``the final target'').
Two ingredients are needed to carry this out:
One is an ``unperturbed'' Hamiltonian $\hat{H}_0$, whose eigenstates
must contain the two target states.
The other is a normalized vector $\ket{v}$, which must have
nonzero overlaps with both target states.
Under the influence of a periodically pulsed perturbation
$\hat{V}=\ketbra{v}{v}$ with its period $T$ and strength $\lambda$,
the system is described
by the kicked Hamiltonian $\hat{H}(t)$~(\ref{eq:kickedHamiltonian}).
The use of the quasienergy anholonomy of
the corresponding Floquet operator $\hat{U}_{\lambda}$~(\ref{eq:defUlambda})
is straightforward
if $\hat{H}_0$ is bounded and contains only discrete eigenenergies
and $T$ is smaller than $2\pi\hbar/W$, where
$W$ is the difference between the maximum and the minimum eigenenergies
of $\hat{H}_0$.
Otherwise, we need to achieve these conditions effectively,
by adjusting $\ket{v}$.
For example, $\ket{v}$ needs to be prepared to have no overlap
with the eigenstates that have higher eigenenergies, to make
an effective energy cutoff on $\hat{H}_0$.
Once we prepare such a $\hat{U}_{\lambda}$, it is straightforward to
realize the manipulation, at least in theory.
A state vector that is initially in an eigenstate of $\hat{U}_0$
is converted to the nearest higher eigenstate of $\hat{U}_0$
by applying the periodically pulsed perturbation
$\hat{V}=\ketbra{v}{v}$,
whose strength $\lambda$ is adiabatically increased from 0 to $2\pi$.
Note that at the final stage of the manipulation, we may switch off
the perturbation suddenly, due to the periodicity of the
Floquet operator $\hat{U}_{2\pi} = \hat{U}_{0}$.
This closes a ``cycle.''
By repeating the cycle, the final state can be
any eigenstate of $\hat{U}_0$.
Note that, along the operation, $\ket{v}$ may vary adiabatically.
In other words, the adiabatically slow fluctuation on $\ket{v}$
does no harm.
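As a sanity check, the cycle described above can be sketched numerically for a two-level system. The concrete choices below ($\hat{H}_0$ with eigenenergies $0$ and $\pi$, a $\ket{v}$ that overlaps both eigenstates, and $\hbar=T=1$) are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Minimal two-level sketch of the anholonomic manipulation cycle
# (hbar = T = 1; the concrete H0 eigenenergies and |v> are illustrative).
E = np.array([0.0, np.pi])               # eigenenergies of H0
v = np.array([1.0, 1.0]) / np.sqrt(2.0)  # normalized, overlaps both eigenstates
P = np.outer(v, v.conj())                # rank-one projection |v><v|
U0 = np.diag(np.exp(-1j * E))

def U(lam):
    # exp(-i*lam*P) = I + (e^{-i*lam} - 1) P for any projection P
    return U0 @ (np.eye(2) + (np.exp(-1j * lam) - 1.0) * P)

_, vecs0 = np.linalg.eig(U(0.0))
followed = vecs0[:, 0]                   # adiabatically followed eigenstate
for lam in np.linspace(0.0, 2.0 * np.pi, 2001)[1:]:
    _, vecs = np.linalg.eig(U(lam))
    followed = vecs[:, np.argmax(np.abs(vecs.conj().T @ followed))]

# U(2*pi) = U(0), yet the followed state has landed on the *other*
# eigenvector of U0 -- one "cycle" of the anholonomic manipulation.
print(np.abs(vecs0.conj().T @ followed))
```

Sweeping $\lambda$ from $0$ to $2\pi$ in small steps while following the eigenvector of maximal overlap mimics the adiabatic passage; the followed state ends on the other eigenvector of $\hat{U}_0$.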
We remark that an application of the present procedure to anholonomic
adiabatic quantum computation is described in a separate
publication~\cite{TM061}.
The strongest limitation of the present scheme,
in our opinion, is that the target states for the manipulation
must be eigenstates of $\hat{H}_0$.
Superpositions of the eigenstates of $\hat{H}_0$ cannot be the targets
due to the presence of dynamical phases that generally diverge in adiabatic
processes.
Note that, however, there is no obstacle to
handle ``superposed states'' when they are eigenstates of $\hat{H}_0$.
Furthermore, if we could introduce Cheon's anholonomies to
the systems whose quasienergies are degenerate, it may be possible to
carry out a coherent manipulation within a degenerate eigenspace.
This motivates us to seek an extension of the eigenspace anholonomy
for degenerate eigenspaces,
i.e., Cheon's anholonomies \`a la Wilczek and Zee.
\section{Discussion and summary}
\label{sec:summary}
We have discussed Cheon's anholonomies in a family of quantum
maps~(\ref{eq:expandUlambda}) with a rank-one projection $\hat{V}$.
Although our geometrical argument in Section~\ref{sec:abundance} assures
the abundance of the systems that exhibit the anholonomies,
we still do not have any systematic way to find such systems,
except the quantum map~(\ref{eq:expandUlambda}).
To encourage the exploration of other examples of the anholonomies,
we summarize the conditions for finding them.
Two ingredients in our Floquet operator~(\ref{eq:expandUlambda})
make the anholonomies easy to find:
(a) the periodicity of the Floquet operator for the parameter $\lambda$
enforces the periodicity of the spectrum;
and (b) the positivity of the perturbation assures the monotonic increase
of each quasienergy as $\lambda$ increases.
These two facts imply that $E_n(\lambda)$ arrives at a higher excited
quasienergy $E_{n + \delta n}(\lambda)$ ($\delta n>0$)
after an increment of $\lambda$ by the period $2\pi$.
To realize the first condition, $\hat{V}$ need not be a
projection operator. For the Floquet operator~(\ref{eq:defUlambda}),
the condition for the periodicity is
$e^{-i \Lambda\hat{V}/\hbar} = 1$, where $\Lambda$ is the period.
In terms of the eigenvalues $\set{v_n}_n$ of $\hat{V}$, this condition
is that $\Lambda v_n / (2\pi\hbar)$ is an integer for all $n$.
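This condition can be checked numerically. A minimal sketch, with $\hbar=1$ and an illustrative non-projection $\hat{V}$ whose eigenvalues are integers, so that $\Lambda = 2\pi$ satisfies the criterion:

```python
import numpy as np

# Numerical check of the periodicity condition exp(-i*Lambda*V/hbar) = 1
# (hbar = 1). V need not be a projection: integer eigenvalues suffice
# when Lambda = 2*pi. The eigenvalues below are an illustrative choice.
vn = np.array([1.0, 2.0, 5.0])          # eigenvalues of V; Lambda*vn/(2*pi) integer
Lam = 2.0 * np.pi
Uv = np.diag(np.exp(-1j * Lam * vn))    # exp(-i*Lambda*V) in the eigenbasis of V
print(np.allclose(Uv, np.eye(3)))
```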
Although we suppose that the anholonomies may be realized without
the condition (b), we are not aware of any examples,
except the trivial cases, e.g., $\hat{V}$ is negative definite.
Furthermore, the above two conditions generally do not determine the exact
value of $\delta n$, which is the increment of the quantum number
after a single cycle,
whereas $\delta n=1$ is shown in Section~\ref{sec:existance} for
a rank-one projection $\hat{V}$.
The value of $\delta n$ could be determined by the geometric
argument shown in Section~\ref{sec:abundance}.
However, no systematic algorithm to compute $\delta n$
from a given family of Floquet operators is known to us.
\begin{acknowledgments}
M.M. would like to thank Professor I. Ohba and Professor H. Nakazato
for useful comments.
A.T. wishes to thank Professor A.~Shudo, Professor K.~Nemoto, and
Professor M.~Murao for useful conversations.
\end{acknowledgments}
\section{Introduction}
The traditional theory of noise induced transport deals with a
Langevin equation describing the motion of a model Brownian
particle in an external periodic potential, spatially symmetric or
asymmetric. \cite{ratchet,rmp,rev1,rev2,rev3} The nature of
asymmetry of the external force field, in which the Brownian
particle is moving, is crucial in generating biased directed
motion. While moving in a symmetric potential, the Brownian
particle is unable to generate motion in a preferred direction,
owing to the principle of detailed balance; this balance can be
broken easily by applying an external time-dependent perturbation,
either deterministic or random. The correlation time of the external
perturbation needs to be greater than the correlation time of the
fluctuations which the system experiences from its immediate
surroundings, the heat bath. A general approach in this direction
involves the application of a time periodic deterministic field or
the application of a colored noise to the system of interest
\cite{ratchet,rev1,rev2}. Adopting a different approach, one can
create directed motion by putting the Brownian particle in a
biased asymmetric periodic potential from the very beginning. The
spatial bias in the potential is able to overcome the detailed
balance principle and hence can generate motion in a preferred
direction. \cite{rmp,rev1,rev2}
The theory of directed motion has gained wide interdisciplinary
attention to model the phenomena of noise induced transport, where
the interplay of fluctuations and non-linearity of the system
plays an important role. \cite{ratchet,rmp,rev1,rev2,rev3}
Exploitation of the nonequilibrium fluctuations present in the
medium helps to generate directed motion of the Brownian particle.
Presence of spatial anisotropy in the potential together with
nonequilibrium perturbation enables one to extract useful work
from random fluctuations without violating the second law of
thermodynamics. \cite{ratchet,rev1} This leads to its wide
applicability in explaining the mechanism of molecular motors,
\cite{rmp,rev1,howard} tunneling in a Josephson junction,
\cite{rev1} rotation of dipoles in a constant field, \cite{rev1}
phase locked loop, \cite{rev1} directed transport in photovoltaic
and photoreflective materials \cite{photo} and the efficiency of
tiny molecular machine in a highly stochastic environment.
\cite{rev2,rev3,tiny} Motor proteins like kinesins, dyneins, and
myosins are versatile biomolecular shuttles that transport cargo
encapsulated in vesicles to different parts of the cell. In
living cells, transport occurs via the cytoskeletal filaments and
motor proteins. \cite{howard,cell} Motor proteins are also
important ingredients of the mechanism of muscle contraction and
cell division. \cite{cell} The search for physical principles that
enable such tiny molecular machines to function efficiently in a
highly Brownian regime and construction of artificial molecular
rotors which produce controlled directional motion mimicking
molecular motor proteins \cite{motor} are the subject of ongoing
interest.
During the last two decades, several theoretical models have been
proposed using the idea of a Brownian particle moving in a ratchet
potential \cite{ratchet,rev1,rev2,rmp} to explain the transport
mechanism under various nonequilibrium situations. The ratchet
model and its many variants like rocking ratchet, \cite{ratchet}
diffusion ratchet, \cite{ref14} correlation ratchet, \cite{ref15}
flashing ratchet, \cite{ref16} etc., have found wide attention in
recent days. \cite{rev1} To get a unidirectional current, either
spatially asymmetric periodic potentials or time asymmetric
external forces are necessary in these models. In explaining the
above mentioned directional transport phenomena, most of the
theoretical approaches adopt phenomenological models. The first
self consistent microscopic attempt was made by Millonas
\cite{millonas} in the context of construction of a Maxwell's
demon like information engine that extracts work from a heat bath.
In this microscopic construction, the Hamiltonian for the whole
system includes a subsystem, a thermal bath and a nonequilibrium
bath that represents an information source or sink.
\cite{millonas}
In this article, we consider a simple variant of the system
reservoir Hamiltonian \cite{millonas} to model the directional
transport processes where the associated bath is in a
nonequilibrium state. The model incorporates some of the features
of Langevin dynamics with a fluctuating barrier \cite{ref18} and
the kinetics due to space dependent friction along with the
presence of local hot spots. \cite{landauer,porto,jrc} Since the
theories of transport processes traditionally deal with stationary
baths, nonstationary transport processes have remained largely
overlooked so far. We specifically address this issue and examine
the influence of initial excitation and subsequent relaxation of
bath modes \cite{millonas,jrc1,jrc2,rig,popov,bolivar} on the
transport of system particle. We show that relaxation of the
nonequilibrium bath modes may result in strong non-exponential
kinetics and nonstationary current. The physical situation that
has been addressed is that at $t=0_-$, the time just before the
system and the bath are subjected to an external excitation, the
system is appropriately thermalized. At $t=0$, the excitation is
switched on and the bath is thrown into a nonstationary state
which behaves as a nonequilibrium reservoir. We follow the
stochastic dynamics of the system mode after $t>0$. The separation
of the time scales of the fluctuations of the nonequilibrium bath
and the thermal bath to which it relaxes, is such that the former
effectively remains stationary on the fast correlation of the
thermal noise. \cite{jrc1}
The organization of the paper is as follows: We discuss in Sec.II
a microscopic model necessary to compute the transient transport
process where the system in question is not initially thermalized
and the associated bath is thrown into a nonequilibrium and
nonstationary situation by sudden initial excitation of some of
the bath modes. Appropriate elimination of the reservoir degrees
of freedom leads to a non-Markovian Langevin equation,
stochasticity being contributed by both additive thermal noise and
the multiplicative noise due to relaxing nonequilibrium modes. In
Sec.III, following the prescription of Ref.\onlinecite{sancho},
the Fokker-Planck description is provided in position space which
is valid for state dependent dissipation. We then derive the time
dependent solution of the associated Smoluchowski equation for
probability density function. As an application of our
development, in Sec.IV, we consider the motion of a Langevin
particle in a periodic ratchet potential and obtain the stationary
and time dependent average velocity of the Langevin particle and
show that for symmetric periodic potential, the direction of
average velocity depends on the initial excitation of intermediate
bath modes. Summarizing remarks are presented in Sec.V.
\section{The background and the model}
To make the paper self contained, we first discuss the essential
features of the traditional theory of system reservoir dynamics in
this section and then describe the model we adopt in the present
work. This shows how our model deviates from the usual system
reservoir theory and brings up the new features of our model.
\subsection{The traditional system reservoir model}
In the traditional system reservoir model, \cite{micro,open,ref2}
the reservoir is assumed to be in equilibrium at $t=0$ in the
presence of the system, and the appropriate distribution of the
initial state of the heat bath is governed by the Hamiltonian
\begin{eqnarray}\label{eq1}
H_B + H_{SB} = \sum_{\nu} \left [ \frac{p^2_{\nu} }{2m_{\nu}} +
\frac{m_{\nu} \omega_{\nu}^2}{2} \left ( q_{\nu} - \frac{g_{\nu}
x}{m_{\nu} \omega_{\nu}^2} \right )^2 \right ],
\end{eqnarray}
\noindent which includes the static interaction part, $H_{SB}$,
between the system and the reservoir. The total Hamiltonian of the
system plus bath is then usually written as
\begin{eqnarray}\label{eq2}
H = \frac{p^2}{2} + V(x) + H_B + H_{SB}.
\end{eqnarray}
\noindent In Eqs.(\ref{eq1}-\ref{eq2}), the system (mass weighted)
is described by the coordinate $x$ and the conjugate momentum $p$,
and the heat bath, composed of a set of linear harmonic
oscillators, by the coordinate $q_{\nu}$ and the conjugate momenta
$p_{\nu}$, $\nu = 1,2 \cdots N $. $m_{\nu}$ is the mass of the
$\nu$-th oscillator and $\omega_{\nu}$, the corresponding
frequency. The system bath interaction is generally taken to be
linear in both the system and the bath coordinates through the
coupling constant $g_{\nu}$. $V(x)$ represents the external force
field in which the Brownian particle is executing random motion.
The bath is assumed to be in thermal equilibrium at temperature
$T$ and the initial distribution is considered to be a canonical
one \cite{micro,open,ref2}
\begin{eqnarray}\label{eq3}
W[{\bf q(0),p(0)}] = \frac{1}{Z} \exp \left (
-\frac{H_B+H_{SB}}{k_BT} \right ),
\end{eqnarray}
\noindent where, $Z$ is the normalization constant and $k_B$ is
the Boltzmann constant. To derive the dynamical equations for the
system in terms of $x$ and $p$, one usually eliminates the bath
degrees of freedom from the equations of motion of the system
variable \cite{micro} and obtains,
\begin{eqnarray}\label{eq4}
\dot{x} &=& p, \nonumber \\
\dot{p} &=& - V^{'}(x) - \int_0^t d\tau \gamma (t-\tau)p(\tau) + \xi (t),
\end{eqnarray}
\noindent where $\gamma(t)$ is the memory kernel
\begin{eqnarray*}
\gamma(t) = \sum_{\nu} \frac{g_{\nu}^2} {m_{\nu} \omega_{\nu}^2 }
\cos\omega_{\nu}t,
\end{eqnarray*}
\noindent and $\xi(t)$, the forcing function
\begin{eqnarray}\label{eq55}
\xi(t) = \sum_{\nu} g_{\nu} \left [ \left \{ q_{\nu}(0) -
\frac{g_{\nu}}{m_{\nu} \omega_{\nu}^2} x(0) \right \}
\cos \omega_{\nu} t + \frac{p_{\nu}(0)} {m_{\nu} \omega_{\nu}} \sin
\omega_{\nu} t \right ].
\end{eqnarray}
\noindent Having chosen a distribution for the initial state of the
bath, given by Eq.(\ref{eq3}), the fluctuating force $\xi(t)$ becomes
zero centered, and the correlation function of $\xi(t)$ gives the
celebrated fluctuation dissipation relation (FDR) \cite{micro,
open,ref2}
\begin{eqnarray}\label{eq5}
\langle \xi(t) \xi(t^{'})\rangle = k_B T \gamma(t-t').
\end{eqnarray}
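The FDR can be verified by direct Monte Carlo sampling of the canonical initial conditions~(\ref{eq3}). The bath parameters below ($m_\nu = 1$, $g_\nu = \omega_\nu$, a linear grid of frequencies) are illustrative assumptions:

```python
import numpy as np

# Monte Carlo check of the FDR <xi(t) xi(0)> = kB*T*gamma(t) for the
# forcing function of a finite harmonic bath with canonical initial
# conditions. Bath parameters (m_nu = 1, g_nu = omega_nu) are illustrative.
rng = np.random.default_rng(1)
N, M, kT = 100, 20000, 1.0
om = np.linspace(0.1, 5.0, N)           # bath frequencies
# Shifted coordinates q_nu(0) - g_nu x(0)/omega_nu^2 and momenta p_nu(0)
# are Gaussian; with g_nu = omega_nu both effective amplitudes have
# variance kB*T.
a = rng.normal(0.0, np.sqrt(kT), (M, N))
b = rng.normal(0.0, np.sqrt(kT), (M, N))

def xi(t):
    return (a * np.cos(om * t) + b * np.sin(om * t)).sum(axis=1)

t = 0.7
est = np.mean(xi(0.0) * xi(t))          # Monte Carlo <xi(t) xi(0)>
exact = kT * np.sum(np.cos(om * t))     # kB*T*gamma(t); here gamma = sum cos
print(est, exact)
```

With $g_\nu = \omega_\nu$ and $m_\nu = 1$ the kernel reduces to $\gamma(t) = \sum_\nu \cos\omega_\nu t$, so the sampled correlation should match $k_BT\gamma(t)$ to within statistical error.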
\noindent To complete the identification of Eq.(\ref{eq4}) as a
generalized Langevin equation, one must establish the conditions
on the coupling coefficients $g_{\nu}$, on the bath frequency
$\omega_{\nu}$ and on the number $N$ of the bath oscillators which
ensure that $\gamma(t)$ is indeed dissipative. Sufficient
conditions for $\gamma(t)$ to be dissipative are that it is
positive definite and decreases monotonically with time. Both the
conditions are achieved if $N \rightarrow\infty$ and if
$g_\nu/m_\nu \omega_\nu^2$ and $\omega_{\nu}$ are sufficiently
smooth functions of $\nu$. \cite{ref3} As $N \rightarrow\infty$,
one replaces the sum by an integral over $\omega$ weighted by a
density of states $D(\omega)$ to get
\begin{eqnarray}\label{eq6}
\gamma(t) = \int d\omega D(\omega) c(\omega) \cos(\omega t),
\end{eqnarray}
\noindent with $(g_\nu/m_\nu \omega_\nu^2) \rightarrow c(\omega)$.
For
\begin{eqnarray}\label{eq7}
D(\omega) c(\omega) = \frac{\gamma/\tau_c}{1+ \tau_c^2 \omega^2},
\end{eqnarray}
\noindent which can be achieved by a variety of combinations of
the density of states $D(\omega)$ and the coupling function
$c(\omega)$, and which broadly resembles the behavior of
hydrodynamic model in a macroscopic system, \cite{open} the
dissipation kernel $\gamma(t)$ becomes
\begin{eqnarray}\label{eq8}
\gamma(t) = \frac{\gamma}{\tau_c} \exp (-|t|/\tau_c).
\end{eqnarray}
\noindent $\tau_c$ in the above expression is the correlation time
of the bath; $1/\tau_c$ plays the role of a cut-off frequency. In the
limit $\tau_c \rightarrow 0$, $\gamma(t) \rightarrow 2 \gamma \delta(t)$
and one obtains the traditional Langevin equation in the Markovian domain
\begin{eqnarray}\label{eq9}
\dot{x} &=& p, \nonumber \\
\dot{p} & = & -V^{'}(x) - \gamma p + \xi(t),
\end{eqnarray}
\noindent where, $\langle \xi(t) \rangle = 0 $ and $\langle \xi(t)
\xi(t')\rangle = 2 \gamma k_B T \delta (t-t')$. If one considers
the dynamics of the Brownian particle in a periodic potential $V(x)
= V(x+L)$, whose spatial symmetry can be broken by an external load
(force) thereby creating a biased force field, then the system's
dynamics is governed by
\begin{eqnarray}\label{eq10}
\dot{x} &=& p, \nonumber \\
\dot{p} & = & -V^{'}(x) - \gamma p + \xi(t) +F,
\end{eqnarray}
\noindent where $F$ is the external force. The sum of the periodic
potential $V(x)$ and the potential $-Fx$ due to the external force
$F$, i.e., $U(x) = V(x) -Fx$, is a corrugated plane whose average
slope (a measurement of the bias) is determined by the external
force $F$. \cite{ref4}
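A minimal Euler-Maruyama sketch of the overdamped limit of Eq.(\ref{eq10}), $\gamma\dot{x} = -V'(x) + F + \xi(t)$, illustrates the tilt-induced drift; the potential $V(x)=-\cos x$ and all parameter values are illustrative choices, not taken from the text:

```python
import numpy as np

# Euler-Maruyama sketch of the overdamped limit of the Langevin equation:
# gamma*dx = (-V'(x) + F) dt + sqrt(2*gamma*kB*T) dW, with V(x) = -cos(x)
# (period 2*pi). All parameter values are illustrative.
rng = np.random.default_rng(0)
gamma, kT, F = 1.0, 0.5, 0.5
Vp = lambda x: np.sin(x)                 # V'(x) for V(x) = -cos(x)
dt, nsteps, ntraj = 1e-2, 20000, 200
x = np.zeros(ntraj)
for _ in range(nsteps):
    drift = (-Vp(x) + F) / gamma
    x += drift * dt + np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(ntraj)
v_mean = x.mean() / (nsteps * dt)        # time-averaged drift velocity
print(v_mean)
```

For $F>0$ the thermally activated hops over the washboard barriers are biased forward, so the averaged velocity is positive.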
Eq.(\ref{eq10}) is the standard Langevin equation of a particle
moving in an external potential under an external load force and
is Markovian in nature. In addition to that, the dissipation term
$\gamma$ is constant due to the linear system reservoir coupling
$g_\nu$ and the noise term $\xi(t)$ is Gaussian, additive in
nature reflecting the Markovian kinetics of the Brownian particle.
In the following subsection, we show how this Markovian kinetics
changes to a non-Markovian one due to the sudden excitation of the
few bath modes and splits the noise term $\xi (t)$ into two parts.
\subsection{The nonstationary system reservoir model}
We consider a Brownian particle of unit mass, described by the
coordinate $x$ and the conjugate momentum $p$, moving in a
periodic potential of periodicity $L$, i.e. $V(x+L) = V(x)$. It is
acted upon by an external force $F$, which for the present study
is assumed to be constant and time independent. The system mode is
coupled to a set of relaxing modes considered as a semi-infinite
dimensional system ($\{q_k\}$-subsystem) which effectively
constitutes a nonequilibrium bath. \cite{millonas,jrc1,popov}
These $\{q_k\}$ modes are in contact with a thermally equilibrated
reservoir. Both the reservoirs are composed of two sets of
harmonic oscillators of unit mass characterized by the frequency
sets $\{ \omega_k\}$ and $\{ \Omega_j\}$ for the nonequilibrium
and the equilibrium bath respectively. The system reservoir
combination evolves under the total Hamiltonian
\begin{eqnarray}\label{eq11}
H = \frac{p^2}{2} + V(x) - Fx + \frac{1}{2} \sum_j (P_j^2 +
\Omega_j^2 Q_j^2)
+ \frac{1}{2} \sum_k (p_k^2 + \omega_k^2 q_k^2) -x\sum_j \kappa_j
Q_j - g(x) \sum_k q_k - \sum_{j,k} \alpha_{jk} q_k Q_j.
\end{eqnarray}
\noindent In Eq.(\ref{eq11}), $\kappa_j$ is the coupling constant
describing the coupling of the system with the equilibrium bath
modes and $g(x)$ is the coupling function. The term $g(x) \sum_k
q_k$ indicates the coupling of the nonequilibrium bath to the
system and the last term describes the coupling between the
nonequilibrium bath and the thermal bath with coupling constant
$\alpha_{jk}$. The equilibrium bath is assumed to be in thermal
equilibrium at a temperature $T$ and the initial distribution of
the equilibrium bath variables is assumed to be Gaussian. The form of
the nonequilibrium bath, that of a set of phonons or photons, is
chosen both for simplicity and for its generic relationship
to many condensed-matter systems. \cite{ref2}
Eliminating the equilibrium bath variables $\{Q_j,P_j\}$ in the
traditional way, \cite{micro,open,ref2} one may show that the
nonequilibrium bath modes obey the dynamic equations
\begin{eqnarray} \label{eq12}
\dot{q_k} &=& p_k, \nonumber \\
\dot{p_k} &=& -\gamma p_k -\omega_k^2 q_k - g(x) + \eta_k(t).
\end{eqnarray}
\noindent Eq.(\ref{eq12}) takes into account the average
dissipation $\gamma$ of the nonequilibrium reservoir modes $q_k$
due to their coupling to the thermal reservoir which induces
fluctuations $\eta_k(t)$ characterized by the usual FDR
$\langle\eta_k(t)\eta_k(0)\rangle = 2 \gamma k_B T\delta(t)$.
\cite{micro,jrc1} In general, $\langle\eta_k(t)\rangle$ is a
non-zero constant which, without loss of generality,
may be set to zero by shifting the origin of our coordinate
system, since we are dealing with a periodic potential. In passing we
mention that in deriving Eq.(\ref{eq12}) from Eq.(\ref{eq11}), the
cross terms for $\sum_j \gamma_{kj} q_j$ have been neglected.
Proceeding similarly to eliminate the thermal reservoir variables
from the equations of motion of the system, we obtain
\begin{eqnarray}\label{eq13}
\dot{x}&=& p, \nonumber \\
\dot{p} &=& -\gamma_e p - V^{'}(x) + F +\xi_e(t) + g^{'}(x) \sum_k q_k.
\end{eqnarray}
\noindent where $\gamma_e$ refers to the dissipation coefficient
of the system mode due to its direct coupling to the thermal bath
providing fluctuations $\xi_e(t)$. The statistical properties of
$\xi_e(t)$ are $\langle \xi_e(t)\rangle = 0$ and $\langle \xi_e(t)
\xi_e(t^{'})\rangle = 2 \gamma_e k_B T \delta(t -t')$. Comparing
with Eq.(\ref{eq10}), it is easy to see that the dissipation term
$\gamma_e$ and the noise term $\xi_e (t)$ are basically $\gamma$
and $\xi(t)$, respectively, that arise due to the direct linear
system reservoir coupling. Now making use of the formal solution
of Eq.(\ref{eq12}) which takes into account the relaxation of the
nonequilibrium modes, and integrating over the nonequilibrium bath
with a Debye type frequency distribution of the form \cite{jrc1}
\begin{eqnarray}
\rho (\omega) &=& \frac {3 \omega^2}{2\omega_c^3} \text{ for }
|\omega|\leq\omega_c, \nonumber \\
&=& 0 \text{ for } |\omega|>\omega_c,
\end{eqnarray}
\noindent where $\omega_c$ is the high frequency Debye cut-off,
one finally obtains the following Langevin equation for the system
mode, from Eq.(\ref{eq13}) as
\begin{eqnarray}\label{eq14}
\dot{x} &=& p, \nonumber \\
\dot{p} &=& - \Gamma(x) p - \widetilde{V}'(x)
+F +\xi_e(t) + g^{'}(x) \xi_n(t).
\end{eqnarray}
\noindent In the above Eq.(\ref{eq14})
\begin{eqnarray}\label{eq15}
\Gamma(x) = \gamma_e + \gamma_n [g^{'}(x)]^2,
\end{eqnarray}
\noindent is the state dependent dissipation constant, comprising
$\gamma_n$ and $\gamma_{e}$. $\xi_n$ refers to the fluctuations
of the nonequilibrium bath modes which effectively cause a damping
of the system mode. This damping is also state dependent due to
the nonlinear coupling function $g(x)$ as is given by $\gamma_n
[g^{'}(x)]^2$ . In Eq.(\ref{eq14}), the potential $V(x)$ in which
the particle moves has been modified to
\begin{eqnarray}\label{eq16}
\widetilde{V}(x) = V(x) - \frac{\omega_c}{\pi} \gamma_n g^2(x).
\end{eqnarray}
\noindent The fluctuations $\xi_n(t)$ due to the presence of the
nonequilibrium bath are also assumed to be Gaussian with zero mean
$\langle\xi_n(t)\rangle = 0$. Also, the essential properties of
$\xi_n(t)$ explicitly depend on the nonequilibrium state of the
intermediate oscillator modes $\{q_k\}$ through $u(\omega,t)$, the
energy density distribution function at time $t$ in terms of the
following FDR for the nonequilibrium bath \cite{jrc1}
\begin{eqnarray}\label{eq17}
u(\omega,t) = \frac{1}{4\gamma_n} \int_{-\infty}^{+\infty} d\tau
\langle \xi_n(t) \xi_n(t+\tau)\rangle e^{i\omega\tau}
= \frac{1}{2} k_BT + e^{-\gamma t/2} \left [ u(\omega,0) - \frac{1}{2} k_BT \right].
\end{eqnarray}
\noindent $[ u(\omega,0) - (k_BT/2) ]$ is a measure of the
departure of energy density from thermal average at $t=0$. The
exponential term $\exp (-\gamma t/2)$ implies that this deviation,
due to the initial excitation, decays asymptotically to zero as
$t\rightarrow \infty$, so that one recovers the usual FDR for the
thermal bath. \cite{jrc1,popov} Eq.(\ref{eq17}) thus reflects
the nonstationary character of the $\{q_k\}$-subsystem. At this
point it is pertinent to note that the above derivation is based
on the assumption that $\xi_n(t)$ is effectively stationary on the
fast correlation time scale of the equilibrium bath modes. This is
necessary for the systematic separation of the time scales
involved in the dynamics.
Eq.(\ref{eq14}) is the required Langevin equation for the particle
moving in a modified potential $\widetilde{V}(x)$ and is acted
upon by a uniform force $F$. The motion of the particle is damped
by a state dependent friction $\Gamma(x)$. Depending on the
coupling function $g(x)$, both $\widetilde{V}(x)$ and $\Gamma(x)$
are, in general, nonlinear in nature. As a result, the stochastic
differential Eq.(\ref{eq14}) becomes nonlinear. The fluctuating
part in Eq.(\ref{eq14}) comprises two contributions:
$\xi_e(t)$, an additive noise due to the thermal bath, and $\xi_n(t)$,
a multiplicative noise due to the nonlinear coupling to the
$\{q_k\}$-subsystem. The Langevin equation (\ref{eq14}) describes
a non-Markovian process as well, where the non-Markovian nature is
characterized by the decaying term in Eq.(\ref{eq17}), describing
the initial nonequilibrium nature of the $\{q_k\}$-subsystem
created by applying sudden excitation at $t=0$. \cite{jrc1,popov}
\section{Stochastic dynamics in the overdamped regime and
the time dependent distribution}
For large dissipation, i.e., in the overdamped limit one usually
eliminates the fast variable $p$ adiabatically by omitting the
inertial term $dp/dt$ from the dynamical equations of motion to
get a simpler description of the system in position space. The
approach of adiabatically eliminating fast variables is valid on a
much slower time scale and is a
zero order approximation. For
constant large dissipation, this adiabatic elimination of the fast
variables leads to the correct description of the system's
dynamics. However, in presence of hydrodynamic interactions, i.e.,
when the dissipation is state dependent, the traditional adiabatic
reduction of fast variables does not work properly and gives an
incorrect description of the system's dynamics. For state
dependent dissipation, an alternative approach was proposed in
Ref.\onlinecite{sancho}. Using the method given in
Ref.\onlinecite{sancho}, and using Eq.(\ref{eq14}) one may carry
out a systematic expansion of the relevant variable in powers of
$1/\gamma_e$ by neglecting terms smaller than $O(1/\gamma_e)$.
Then, by Stratonovich interpretation, it is possible to obtain the
appropriate Langevin equation corresponding to a Fokker-Planck
equation (FPE) in position space. Thus, following
Ref.\onlinecite{sancho}, the formal FPE for the probability
density function (PDF) $P(x,t)$ corresponding to the process
described by Eq.(\ref{eq14}) can be obtained as
\begin{eqnarray}\label{eq18}
\frac{\partial P}{\partial t} & = & \frac{\partial}{\partial x}
\left \{ \frac{\widetilde{V}^{\prime}(x) - F} {\Gamma(x)} P \right
\} + \gamma_e k_B T \frac{\partial}{\partial x} \left\{
\frac{1}{\Gamma(x)} \frac{\partial}{\partial x} \frac{1}{\Gamma(x)}
P \right \}
+ \gamma_n k_B T \left( 1 + r e^{- \gamma t/2} \right)
\frac{\partial}{\partial x} \left \{ \frac{g^{'}(x)}{\Gamma(x)}
\frac{\partial}{\partial x} \frac{g^{'}(x)}{\Gamma(x)} P
\right\} \nonumber \\
&& + \gamma_n k_B T \left( 1 + r e^{- \gamma t/2} \right)
\frac{\partial}{\partial x} \left \{
\frac{g^{'}(x)g^{''}(x)}{\Gamma^2(x)} P \right \},
\end{eqnarray}
\noindent where $r = \left[ 2u(\omega \rightarrow 0,0)/k_BT \right]-1$
is a measure of the deviation from equilibrium at
$t=0$. Under the steady state condition (at $t \rightarrow
\infty$), $\partial P/\partial t =0 $ and the stationary
distribution obeys the following relation,
\begin{eqnarray}\label{eq19}
k_B T \frac{d P_S(x)}{dx} + \left ( \widetilde{V}'(x) -F\right) P_S(x)
=0,
\end{eqnarray}
\noindent which has the solution
\begin{eqnarray}\label{eq20}
P_s(x) = N \exp \left[-\frac{1}{k_B T} \int^x \left (
\widetilde{V}^{'}(x^{'}) -F \right) dx^{'} \right],
\end{eqnarray}
\noindent where $N$ is the normalization constant. In the Stratonovich
description, the Langevin equation corresponding to the FPE given
by Eq.(\ref{eq18}) is
\begin{eqnarray}\label{eq21}
\dot{x} = - \frac{( \widetilde{V}^{'}(x) -F )}{\Gamma(x)} -
\frac{\widetilde{D}(t) g^{'}(x)g^{''}(x)}{\Gamma^2(x)} +
\frac{1}{\Gamma(x)}
\xi_e(t) + \frac{g^{'}(x)}{\Gamma(x)} \xi_n(t),
\end{eqnarray}
\noindent with $\widetilde{D} = \gamma_n k_B T \left( 1 + r \exp(-
\gamma t/2) \right) $ being the time dependent diffusion constant due
to the relaxation of nonequilibrium bath modes. \cite{jrc1} Let us
consider that the time dependent solution of Eq.(\ref{eq18}) is
given by \cite{jrc2}
\begin{eqnarray}\label{eq22}
P(x,t) = P_S(x) \exp(-\phi(t)),
\end{eqnarray}
\noindent where $\phi $ is a function of time only and
$\lim_{t\rightarrow \infty} \phi(t) =0$. $P_S(x)$ is the steady
state solution of Eq.(\ref{eq18})
\begin{eqnarray}\label{eq23}
&& \frac{d}{dx} \left\{ \frac{( \widetilde{V}^{'}(x) -F
)}{\Gamma(x) } P_S(x) \right\}
+\gamma_e k_B T \frac{d}{dx} \left\{ \frac{1}{\Gamma(x)}
\frac{d}{dx} \frac{1}{\Gamma(x)} P_S(x)\right \}
+ \gamma_n k_B T \frac{d}{dx} \left \{
\frac{g^{'}(x)}{\Gamma(x)} \frac{d}{dx}
\frac{g^{'}(x)}{\Gamma(x)} P_S(x) \right\} \nonumber \\
&& + \gamma_n k_B T \frac{d}{dx} \left \{ \frac
{g^{'}(x)g^{''}(x)}{\Gamma^2(x)} P_S(x) \right \} =0.
\end{eqnarray}
\noindent Substitution of Eq.(\ref{eq22}) in Eq.(\ref{eq18})
separates the space and time parts and we have the dynamic
equation for $\phi(t)$
\begin{eqnarray*}
-\frac{d\phi}{dt} \exp(\gamma t/2) ={\rm const}= \alpha \text{
(say)}.
\end{eqnarray*}
\noindent On integration over time we get,
\begin{eqnarray}\label{eq24}
\phi(t) = \frac{2 \alpha}{\gamma} \exp(-\gamma t/2),
\end{eqnarray}
\noindent where $\alpha$ can be determined from the initial
condition. The time dependent solution of Eq.(\ref{eq18}) thus
reads as
\begin{eqnarray}\label{eq25}
P(x,t) = P_S(x) \exp \left [ -\frac{2 \alpha}{\gamma} \exp(-\gamma
t/2)\right].
\end{eqnarray}
\noindent To determine $\alpha$, we now demand that just at the
moment the system (and the non-thermal bath) is subjected to
external excitation at $t=0$, the distribution must coincide with
the usual Boltzmann distribution where the energy term in the
Boltzmann factor, in addition to the usual kinetic and potential
energy terms, contains the initial fluctuation of energy density
$\Delta u[= u(\omega,0) - (k_B T/2)]$. This demands that
\begin{eqnarray}\label{eq26}
\alpha = \frac{\gamma \Delta u}{2 k_BT},
\end{eqnarray}
\noindent Thus $\alpha$ is determined in terms of the relaxing-mode
parameters and the fluctuation of the energy density distribution at
$t=0$.
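The relaxation encoded in Eqs.(\ref{eq24})--(\ref{eq26}) is simple to check numerically. A short sketch, assuming a sample effective potential $\widetilde{V}(x)=\cos x$ and an illustrative value of $\Delta u$:

```python
import numpy as np

kBT, gamma, F = 1.0, 2.0, 0.2
du = 0.3                                  # illustrative Delta u (assumed)
alpha = gamma * du / (2.0 * kBT)          # Eq.(26)

x = np.linspace(0.0, 2.0 * np.pi, 2001)
dx = x[1] - x[0]
w = np.exp(-(np.cos(x) - F * x) / kBT)    # Eq.(20) with V(x) = cos(x)
P_S = w / (np.sum(w) * dx)                # normalized stationary PDF

def phi(t):
    """Eq.(24): phi(t) = (2 alpha / gamma) exp(-gamma t / 2)."""
    return (2.0 * alpha / gamma) * np.exp(-gamma * t / 2.0)

def P(t):
    """Eq.(25): time dependent PDF relaxing onto P_S."""
    return P_S * np.exp(-phi(t))
```

At $t=0$ the correction is $\phi(0)=\Delta u/k_B T$, and $P(x,t)\to P_S(x)$ as $t\to\infty$, as required.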
\section{Stationary and Transient Current}
In the overdamped limit, the stationary current from
Eq.(\ref{eq23}) can be represented as
\begin{eqnarray}\label{eq27}
J_S = -\frac{1}{\Gamma(x)} \left [ {\widetilde V^{'}}(x) -F + k_BT
\frac{d}{dx}\right]P_S(x).
\end{eqnarray}
\noindent Integrating Eq.(\ref{eq27}) we have the expression for
stationary probability distribution in terms of stationary current
as
\begin{eqnarray}\label{eq28}
P_S(x) = e^{-U(x)} h(x)
\left [ \frac{P_S(0)}{h(0)} - \frac{J_S \gamma_e} {k_B
T} \int_0^x h(x^{'}) e^{U(x^{'})} dx^{'} \right]
\end{eqnarray}
\noindent where $h(x) = 1+ (\gamma_n/\gamma_e)[g^{'}(x)]^2$,
$\Gamma(x) = \gamma_e h(x)$ and $U(x) = \gamma_e \int_0^x
dx^{\prime} h(x^{'}) [{\widetilde{V}^{'}}(x^{'}) -F]/ k_BT $. We
now consider a symmetric periodic potential with periodicity $L$,
i.e. $V(x) = V(x+L)$ as well as the periodic derivative of
coupling function with the same periodicity as that of the
potential, i.e., $g^{'}(x) = g^{'}(x+L)$. As a consequence of this
choice, $U(x)$ is also a periodic function of $x$ with the period
$L$. If we impose the condition that $P_S(x)$ is bounded for large
enough $x$, it follows from the above mentioned conditions of
periodicity, that $P_S(x+L)= P_S(x)$ i.e. $P_S(x)$ must be
periodic with the same period $L$. \cite{ref4} Now applying the
periodicity condition of $P_S(x)$, we have from Eq.(\ref{eq28})
\begin{eqnarray}\label{eq30}
\frac{P_S(0)}{h(0)} = J_S \frac{ \gamma_e/k_BT} {1-e^{U(L)}}
\int_0^L h(x) e^{U(x)} dx.
\end{eqnarray}
\noindent Because of the periodicity, we normalize the steady state
PDF in the periodic interval
\begin{eqnarray}\label{eq31}
\int_0^L P_S(x) dx =1,
\end{eqnarray}
\noindent to get
\begin{eqnarray}\label{eq32}
\int_0^L h(x) e^{-U(x)}
\left [ \frac{P_S(0)}{h(0)} - \frac{J_S \gamma_e}{k_BT}
\int_0^x h(x^{'}) e^{U(x^{'})} dx^{'} \right ]dx =
1.
\end{eqnarray}
\noindent Now eliminating $P_S(0)/h(0)$ from Eq.(\ref{eq30}) and
Eq.(\ref{eq32}), one obtains the steady state current
\begin{eqnarray}\label{eq33}
J_S &=& \frac{k_BT}{\gamma_e} \left [ 1 - e^{U(L)} \right ]
\left [\int_0^L h(x) e^{-U(x)} dx \int_0^L h(x^{'})
e^{U(x^{'})} dx^{'}
\right. \nonumber \\
&& \left. - \left [ 1 - e^{U(L)} \right ]
\int_0^L \left ( h(x) e^{-U(x)} \int_0^x h(x^{'})
e^{U(x^{'})} dx^{'} \right ) dx \right
]^{-1}.
\end{eqnarray}
\noindent From Eq.(\ref{eq33}) it is clear that in the absence of
any external bias $F$, the steady current vanishes. We thus observe
that there is no occurrence of current for a periodic potential and
for periodic derivative of the coupling function with the
same periodicity for $F=0$. At
the macroscopic level this confirms that there is no generation of
perpetual motion of the second kind, i.e., no violation of second law of
thermodynamics. In passing, we note that in the absence of
$\{q_k\}$-subsystem, i.e., when $\gamma_n =0 $ , Eq.(\ref{eq33})
reduces to the standard form \cite{ref4}
\begin{eqnarray}\label{eq34}
J_S &=& \frac{k_BT}{\gamma_e} \left [ 1 - e^{-L F/k_BT} \right ]
\left [\int_0^L e^{-(V(x)-Fx)/k_BT} dx \int_0^L
e^{(V(x^{'})-Fx^{'})/k_BT} dx^{'} -
\left [1 - e^{-L F/k_BT} \right ] \right. \nonumber \\
&& \left. \times \left \{ \int_0^L \left ( e^{-(V(x)-Fx)/k_BT} \int_0^x
e^{(V(x^{'})-Fx^{'})/k_BT} dx^{'} \right) dx \right \}
\right]^{-1}.
\end{eqnarray}
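Eq.(\ref{eq33}) can be evaluated by direct quadrature. The sketch below assumes the illustrative choices $\widetilde{V}(x)=\cos x$ and $g^{'}(x)=\cos x$, and implements $U(x)$ as defined below Eq.(\ref{eq28}); for $F=0$ one recovers $J_S \approx 0$, in line with the vanishing of the current for a periodic potential without bias.

```python
import numpy as np

def steady_current(F, kBT=1.0, gamma_e=1.0, gamma_n=0.5,
                   L=2.0 * np.pi, n=4000):
    """Evaluate the steady current J_S of Eq.(33) by quadrature,
    with the illustrative choices V(x) = cos(x), g'(x) = cos(x)."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    gp = np.cos(x)
    h = 1.0 + (gamma_n / gamma_e) * gp**2   # h(x)
    Vp = -np.sin(x)                         # V'(x)
    # U(x) as defined below Eq.(28), cumulative integral with U(0) = 0
    U = gamma_e * np.cumsum(h * (Vp - F) / kBT) * dx
    U -= U[0]
    eU, emU = np.exp(U), np.exp(-U)
    pref = (kBT / gamma_e) * (1.0 - np.exp(U[-1]))
    I1 = (np.sum(h * emU) * dx) * (np.sum(h * eU) * dx)
    inner = np.cumsum(h * eU) * dx          # inner integral over [0, x]
    I2 = (1.0 - np.exp(U[-1])) * np.sum(h * emU * inner) * dx
    return pref / (I1 - I2)
```

For $F>0$ the prefactor $1-e^{U(L)}$ is positive and the bracketed denominator stays positive, so the current is directed along the bias.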
\noindent Next, to find the time dependent current $J(x,t)$ we
resort to Eq.(\ref{eq18}) and observe that
\begin{eqnarray}\label{eq35}
J(x,t) & = & - \frac{( \widetilde{V}^{'}(x) - F )}{\Gamma(x)} P
- \gamma_e k_B T \left\{ \frac{1}{\Gamma(x)}
\frac{\partial}{\partial x} \frac{1}{\Gamma(x)}
P \right \}
- \gamma_n k_B T \left( 1 + r e^{- \gamma t/2} \right)
\left \{ \frac{g^{'}(x)}{\Gamma(x)}
\frac{\partial}{\partial x} \frac{g^{'}(x)}{\Gamma(x)} P
\right\} \nonumber \\
&& - \gamma_n k_B T \left( 1 + r e^{- \gamma t/2} \right) \left \{
\frac{g^{'}(x)g^{''}(x)}{\Gamma^2(x)} P \right \}.
\end{eqnarray}
\noindent Now substituting Eq.(\ref{eq22}) in Eq.(\ref{eq35}) and
making use of Eq.(\ref{eq23}) we find that $J(x,t)$ can be expressed
in a much simpler form
\begin{eqnarray}\label{eq36}
J(x,t) = J_S e^{-\phi(t)} - {\cal {D}}(t) \frac{1}{\Gamma(x)}
\frac{d}{dx} \frac{1}{\Gamma(x)} [g^{'}(x)]^2 P_S(x),
\end{eqnarray}
\noindent where $P_S(x)$ is the steady state PDF, $J_S$ is the
steady state current given by Eq.(\ref{eq33}), and
\begin{eqnarray}\label{eq37}
{\cal{D}}(t) = r \gamma_n k_B T e^{- \gamma t /2} e^{-\phi(t)}.
\end{eqnarray}
\noindent The steady state current $J_S$ thus can be obtained from
\begin{eqnarray}\label{eq38}
J_S = - \frac{( \widetilde{V}^{'}(x) -F )}{\Gamma(x)} P_S(x) -
\gamma_e k_B T \frac{1}{\Gamma(x)} \frac{d}{dx}
\frac{1}{\Gamma(x)} P_S(x)
- \gamma_n k_B T \frac{g^{'}(x)}{\Gamma(x)}
\frac{d}{dx} \frac{g^{'}(x)}{\Gamma(x)} P_S(x)
- \gamma_n k_B T \frac{g^{'}(x)g^{''}(x)}{\Gamma^2(x)} P_S(x),
\end{eqnarray}
\noindent from which we have
\begin{eqnarray}\label{eq39}
\frac{1}{\Gamma(x)} \frac{d}{dx} \frac{1}{\Gamma(x)}
[g^{'}(x)]^2 P_S(x)
= - \frac{J_S} {\gamma_n k_B T } - \frac{( \widetilde{V}^{'}(x)
-F )}{ \gamma_n k_B T \Gamma(x)}
- \frac{\gamma_e} {\gamma_n} \frac{1}{\Gamma(x)} \frac{d}{dx}
\frac{1}{\Gamma(x)} P_S(x).
\end{eqnarray}
\noindent Using Eq.(\ref{eq39}) we then obtain from Eq.(\ref{eq36})
\begin{eqnarray}\label{eq40}
J(x,t) = J_S \left [ e^{-\phi(t)} + \frac{{\cal D}(t) }{\gamma_n
k_B T} \right]
+\frac{{\cal D}(t) }{\gamma_n k_B T} \left [ \frac{(
\widetilde{V}^{'}(x) -F )}{\Gamma(x)} P_S(x)
+ \gamma_e k_B T \frac{1}{\Gamma(x)} \frac{d}{dx}
\frac{1}{\Gamma(x)} P_S(x) \right ].
\end{eqnarray}
\noindent Defining the space dependent part on the RHS of
Eq.(\ref{eq40}) as $M(x)$, we obtain
\begin{eqnarray}\label{eq41}
J(x,t) = J_S \left [ e^{-\phi(t)} + \frac{{\cal D}(t) }{\gamma_n k_B
T}\right] + \frac{{\cal D}(t) }{\gamma_n k_B T} M(x),
\end{eqnarray}
\noindent where
\begin{eqnarray}\label{eq42}
M(x) & = & \left [ \frac{( \widetilde{V}^{'}(x) -F )}{\Gamma(x)} P_S(x)
+ \gamma_e k_B T \frac{1}{\Gamma(x)} \frac{d}{dx}
\frac{1}{\Gamma(x)} P_S(x) \right ]. \nonumber \\
\end{eqnarray}
\noindent From Eq.(\ref{eq41}) we observe that the current
$J(x,t)$ is the sum of two terms: the first is space independent
and a function of time only, while the second separates into a
product of time and space parts. As $t \rightarrow \infty$,
${\cal {D}}(t) \rightarrow 0$ and $\phi(t) \rightarrow 0$, so
$J(x,t)$ reduces asymptotically to the steady state current $J_S$.
Now using the continuity equation
\begin{eqnarray*}
\frac{\partial P(x,t)}{\partial t} = - \frac{\partial
J(x,t)}{\partial x},
\end{eqnarray*}
\noindent along with $P(x,t) = P_S(x) e^{-\phi(t)}$, we get from
Eq.(\ref{eq41})
\begin{eqnarray}\label{eq42n}
\frac{dM(x)}{dx} = - \frac{\alpha}{r}P_S(x),
\end{eqnarray}
\noindent or equivalently
\begin{eqnarray}\label{eq43}
M(x) = - \frac{\alpha}{r} \int^x P_S(x^{'}) dx^{'}.
\end{eqnarray}
\noindent As we are dealing with periodic functions, the constant
of integration is chosen to be zero. Now integrating
Eq.(\ref{eq27}) for $P_S(x)$ and using the normalization
condition, Eq.(\ref{eq31}), we have the expression for steady
state PDF as
\begin{eqnarray}\label{eq44}
P_S(x) = e^{- (\widetilde{V}(x) - F x)/k_B T}
\left [ \frac{1+ \frac{J_S}{k_BT} \int_0^L e^{-
(\widetilde{V}(x) - F x)/k_B T} \{ \int_0^x \Gamma(x^{'})
e^{(\widetilde{V}(x^{\prime}) - F x^{\prime})/k_B T} dx^{'} \} dx }
{\int_0^L e^{- (\widetilde{V}(x) - F x)/k_B T} dx}
\right].
\end{eqnarray}
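As a cross-check, the stationary PDF can be generated directly from Eq.(\ref{eq27}): imposing $P_S(x+L)=P_S(x)$ fixes $J_S$ relative to $P_S(0)$, and the normalization condition fixes the remaining constant. A sketch with the illustrative choices $\widetilde{V}(x)=\cos x$, $g^{'}(x)=\cos x$:

```python
import numpy as np

def stationary_pdf(F=0.3, kBT=1.0, gamma_e=1.0, gamma_n=0.5,
                   L=2.0 * np.pi, n=4000):
    """Solve Eq.(27), Gamma J_S = -(V' - F) P_S - kBT P_S', with
    periodic boundary conditions and unit normalization, for the
    illustrative choices V(x) = cos(x), g'(x) = cos(x)."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    A = (np.cos(x) - F * x) / kBT               # (V(x) - F x)/kBT
    Gamma = gamma_e + gamma_n * np.cos(x)**2    # state-dependent friction
    I = np.cumsum(Gamma * np.exp(A)) * dx       # int_0^x Gamma e^A dx'
    I -= I[0]
    C = 1.0                                     # provisional constant
    # periodicity P_S(L) = P_S(0) fixes J_S relative to C
    J = kBT * C * (1.0 - np.exp(A[-1] - A[0])) / I[-1]
    P = np.exp(-A) * (C - (J / kBT) * I)
    Z = np.sum(P) * dx                          # normalization integral
    return x, P / Z, J / Z
```

The returned PDF is positive, periodic, and normalized, and the accompanying $J_S$ is positive for $F>0$, consistent with Eq.(\ref{eq33}).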
\noindent Using Eq.(\ref{eq44}) along with Eq.(\ref{eq43}) one
obtains from Eq.(\ref{eq41}) the expression for the time dependent
current, $J(x,t)$ as
\begin{eqnarray}\label{eq45}
J(x,t) & = & J_S \left [ e^{-\phi(t)} + r e^{-\gamma t/2} \right]
- \frac{\alpha e^{-\gamma t/2} \int^x dx^{'} e^{-
({\widetilde{V}}(x^{'}) - F x^{'})/k_B T} }{\int_0^L e^{-
({\widetilde V}(x) - F x)/k_B T} dx}
\left [ 1+ \frac{J_S}{k_BT} \int_0^L e^{-
({\widetilde{V}}(x^{''}) - F x^{''})/k_B T} \right. \nonumber
\\
&& \left. \times \left \{ \int_0^{x^{''}} \Gamma(x^{'''})
e^{({\widetilde{V}}(x^{'''}) - Fx^{'''})/k_B T} dx^{'''}
\right \} dx^{''} \right ],
\end{eqnarray}
\noindent where $J_S$ is given by Eq.(\ref{eq33}). Since the
potential possesses spatial periodicity, one has $J(x,t) =
J(x+L,t)$. Hence the net time dependent current is given by
\begin{eqnarray}\label{eq46}
j(t) = \frac{1}{L} \int_0^L J(x,t) dx.
\end{eqnarray}
\noindent It should be noted that for a symmetric potential with
$F=0$, $J_S=0$. In our development, however, a transient current
exists, and its direction depends on the sign of $\alpha$. What
is immediately apparent is that for a symmetric potential, the sign
of $\Delta u [= u(\omega, 0) - (k_B T/2)]$ determines the
direction of the initial current
\begin{eqnarray}\label{eq47}
j(t) = - \frac{\alpha e^{-\gamma t /2}}{L} \frac { \int_0^L dx
\int^x dx^{'} \exp[-{\widetilde{V}}(x^{'})/k_BT ]} { \int_0^L dx
\exp[- {\widetilde{V}} (x)/k_BT ] }.
\end{eqnarray}
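The sign statement can be verified directly from Eq.(\ref{eq47}). A sketch with the symmetric choice $\widetilde{V}(x)=\cos x$, taking the lower limit of the inner integral as $0$ consistent with the vanishing integration constant:

```python
import numpy as np

def transient_current(t, du, kBT=1.0, gamma=2.0, L=2.0 * np.pi, n=4000):
    """Transient current of Eq.(47) for the symmetric potential
    V(x) = cos(x) at F = 0; the sign is set by alpha, Eq.(26)."""
    alpha = gamma * du / (2.0 * kBT)       # Eq.(26)
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    boltz = np.exp(-np.cos(x) / kBT)       # exp[-V(x)/kBT]
    inner = np.cumsum(boltz) * dx          # inner integral of Eq.(47)
    num = np.sum(inner) * dx
    den = np.sum(boltz) * dx
    return -(alpha * np.exp(-gamma * t / 2.0) / L) * num / den
```

A positive initial energy fluctuation ($\Delta u>0$) gives a negative initial current and vice versa, and the current decays as $e^{-\gamma t/2}$.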
\noindent It is also clear from Eq.(\ref{eq45}) that the time
dependent current reduces to the steady state current $J_S$ in
the asymptotic limit. The presence of the term $\exp[-\phi(t)]$ in
the expression of $J(x,t)$ makes the transient current strongly
non-exponential in nature. The transient growth or decay of charge
and current in $L$-$R$, $C$-$R$, or $L$-$C$-$R$ circuits is
important in the construction of many electrical and electronic
devices that rely on a mechanism of energy storage. In the
construction of molecular motors or nano-switches, the transient
behavior of the devices may likewise be worth studying. In our
development, the preparation of the intermediate relaxing bath
plays the key role in generating the time dependent current.
Nevertheless, our methodology is also applicable whenever an
arbitrarily prepared bath approaches equilibrium. In passing, we
mention that the model considered in the present paper may be
realized in a guest-host system embedded in a lattice, where the
immediate neighborhood of the guest comprises the intermediate
oscillatory modes while the lattice acts as a thermal bath.
\section{Conclusion}
We have hereby proposed a simple microscopic
system-plus-nonequilibrium-bath model to simulate nonstationary
Langevin dynamics. The nonequilibrium bath is effectively realized
in terms of a semi-infinite dimensional reservoir which is
subsequently kept in contact with a thermal reservoir, allowing the
non-thermal bath to relax with a characteristic time. The frequency
spectrum of the
relaxing bath is assumed to be of Debye type. By an appropriate
separation of time scale, we then construct the Langevin equation
for a particle in which the dissipation is state dependent and the
stochastic forces appearing are both additive and multiplicative.
The underlying stochastic dynamics is found to be nonstationary
and non-Markovian. Based on the strategy of Sancho \emph{et al}.,
\cite{sancho}
we then show that this Langevin equation can be recast into the
form of generalized nonstationary Smoluchowski equation which
reduces to its standard form asymptotically. We then obtain the
expression for the time dependent PDF. As an immediate application
of this development, we consider the dynamics of a Langevin
particle in a ratchet potential and obtain analytic expressions
for both the stationary and the nonstationary transient average
velocity, followed by the immediate observation that in a periodic
potential the direction of the nonstationary current depends on
the preparation of the nonequilibrium bath.
\begin{acknowledgments}
We dedicate this article to Professor Jayanta Kr. Bhattacharjee, a
motivating scientist and an inspiring teacher spearheading the
proliferation of Physics as a whole through his skilled
exposition. SB wishes to acknowledge the constant support of his
school authority (Parulia K. K. High School, Burdwan 713513)
towards pursuing his Ph. D. work.
\end{acknowledgments}